38 Comments

  • HollyDOL - Friday, August 12, 2016 - link

    This is the third "between RAM and SSD" technology announced recently. It will be very interesting to see them come to market and compare their pros and cons once they ship.
  • ddriver - Friday, August 12, 2016 - link

    Seems like the overhyped Optane's advantage (if any) will be rather short-lived.
  • Michael Bay - Friday, August 12, 2016 - link

    You'd readily shill for literal compost if it somehow competed with something Intel makes. ^_^
  • ddriver - Friday, August 12, 2016 - link

    Compost is priceless compared to your so called "opinions" ;)
  • Michael Bay - Saturday, August 13, 2016 - link

    I enjoy your pain though, keep going!
  • BrokenCrayons - Friday, August 12, 2016 - link

    The aggressive, brand-slanted nature of your own posts leaves you with no room to criticize other readers and their thoughts. Tone down your own rhetoric and try to be a bit more civil before hurling a rock at someone from the window of that glass house you've built.
  • ddriver - Friday, August 12, 2016 - link

    Which brand would that be? Aggressive? Maybe if you are a wuss; you haven't seen me aggressive, nor will you, since it would be pointless, as the internet doesn't conduct it. Expecting others to conform to your "standards" is, in practice, fascism. For better or worse, we can't all be complacent sheeple baa-ing at the mediocre, greedy industry with admiration; some of us have that thing called "individuality". You don't have to like it, but if you want to change it, you'd have to do something better than mitch and boan about it. Your first step towards self-improvement would be to keep your comments on topic instead of whining about my "insolence" at stating facts ;)
  • BrokenCrayons - Friday, August 12, 2016 - link

    My post was addressed to Michael Bay, not you. I don't associate your comments with rabid aggression at all. However, that other guy... that's another story altogether.
  • ddriver - Friday, August 12, 2016 - link

    Well, I get similar responses from several individuals here, and the design of the comment section makes it kind of hard to see the structure, so I jumped to the logical conclusion before taking a ruler to see how the comments line up. And yeah, no "edit" feature either. I noticed only after I refreshed the page, so again - my bad ;)
  • BrokenCrayons - Friday, August 12, 2016 - link

    Not a problem! I realized that was probably what happened, and yes, I agree the comment box replies aren't always easy to follow.
  • ironargonaut - Monday, August 15, 2016 - link

    "stating facts" == "Seems like..." ROFL What fact exactly was it you stated?
  • ddriver - Friday, August 12, 2016 - link

    Oops, my bad, you were replying to that other guy. Still, that post will apply to enough people.
  • Michael Bay - Saturday, August 13, 2016 - link

    Oh, another one.
  • Xanavi - Friday, August 12, 2016 - link

    You really think Intel and Micron will be overshadowed by anyone? LOL. IMO it is under-hyped compared to what it can do. Forever retention, ns-scale speed, cheap. It's literally the future.
  • ddriver - Friday, August 12, 2016 - link

    I'll believe it when I see it. So far the hints are that xpoint is mostly hype and nothing to get excited about.
  • ddriver - Friday, August 12, 2016 - link

    BTW, it doesn't boil down to what I *think*, but to what you *believe*. Intel is far from "unbeatable". It currently dominates the desktop and laptop CPU market, a dominance it came to through *questionable* tactics, but Intel is being overshadowed in a number of markets - mobile devices, graphics, SSDs, GSM radios, embedded... Out of everything Intel makes, CPUs are the only thing it is not being vastly out-competed at. And Intel has gotten mediocre and boring at CPUs as well, due to the lack of competition, so its "dominance" in that market is a product not only of dirty tactics but also of an intrinsic lack of competition. Intel is so depleted it has spent billions acquiring companies for products that an innovative company could have developed at a cost of millions. It doesn't seem that Intel is doing well in fields where it doesn't enjoy a monopoly.
  • jospoortvliet - Saturday, August 13, 2016 - link

    It is true that Intel bought innovation rather than doing it in-house, but sadly that is what all the big companies resort to - Google, Apple, Microsoft... they rarely innovate. It seems we still have not figured out how to innovate within big companies.

    Otherwise, yeah, Intel played some dirty tricks, but let's be honest - they also simply executed well while their competition blundered. So I wouldn't say they didn't earn their position in the high-end market - though I agree with you that it has made them complacent. Sadly. I hope ARM will continue to give them trouble... And who knows, Zen might be worth the hype.
  • jjj - Friday, August 12, 2016 - link

    XPoint has always been a stop-gap, mid-term solution until better technology, like this one, arrives. Micron (and to a certain extent Intel) are working on other things too.
    Just be cautious about the timing for 3D ReRAM and other new memories; it's really hard to make these things, and it takes time to sort everything out.
  • name99 - Saturday, August 13, 2016 - link

    I suspect you're misinterpreting the Intel plan. To make maximal use of SCM, you need new cache-control (memory flushing/ordering) instructions. It is quite possible Intel will only put these in Xeon E7s, at least for a few years. That doesn't force one to buy Optane, but it does force one to pay a lot to INTEL if one wants to use any SCM well...

    The real question, then, is how long until either AMD adds those instructions to its CPUs, or, more interestingly, until a server-class ARMv8.2 CPU is released (since v8.2 likewise contains those instructions).
  • fanofanand - Friday, August 12, 2016 - link

    Yay for not having any standardization! Whoever is able to take advantage of existing interconnects will be the winner, as they should be able to have a significantly lower implementation cost. Of course, this will present us with new bottlenecks; hopefully this deluge of new technology will drive the entire industry forward. One can hope, right?
  • BrokenCrayons - Friday, August 12, 2016 - link

    It almost feels like we're taking a step back in order to move forward. The thought of adding yet another layer of memory/storage between the CPU and the data it needs to process to gain performance sort of runs counter to the idea of just making storage itself faster and cheaper. I realize SCM is seen as necessary to address the disparity between CPU speed and storage speed, but it's a cumbersome and inelegant approach to mitigating the problem. In fact, it highlights the number of band-aid solutions the computer industry has had to put into place over the years to deal with cost-sensitive buyers and aggressive competitors.
  • DanNeely - Friday, August 12, 2016 - link

    The history of computer evolution can be seen as the continual adding of more cache layers because from the very beginning it has been easier to scale compute performance than storage latency. Occasionally we shed the slowest storage layer off the bottom of the stack; as happened to tape in the consumer market during the 80s. (Some 70s/early 80s computers used audio cassettes as storage. High density tape lasted a lot longer as an archival storage/backup system for very large enterprises.) The same thing is happening with hard drives today, with SSD only solutions taking ever larger portions of the mainstream storage market pushing HDDs to backups, bulk data storage, and the most cost sensitive segments of the market. Give it a few more years and even boxmart special laptops will probably be SSD only; with consumer HDD use limited to NASes and large capacity USB backup devices.
  • FunBunny2 - Friday, August 12, 2016 - link

    -- The thought of adding yet another layer of memory/storage between the CPU and the data it needs to process to gain performance sort of runs counter to the idea of just making storage itself faster and cheaper.

    I forget - what was the consensus when the CPU makers added L1, then L2, then L3 caches? To the extent that a CPU is doing real multi-user or multi-tasking or multi-programming (each having a balance of compute and I/O), adding caching along the way is win-win. We will only know, of course, when such CPUs become compute-bound (hard) or stalled.
  • BrokenCrayons - Friday, August 12, 2016 - link

    Don't misunderstand my comment. I'm very much in favor of more performance, even if the solution appears to be another tier of memory. That doesn't change the fact that doing so will add complexity and cost when, really, a better answer would be improving storage performance so a few of these additional layers become unnecessary. boeush's comment below, suggesting some form of SCM such as 3D ReRAM eventually replacing current SSD technologies, is just the sort of thing that would be more sensible in the long run.
  • djayjp - Friday, August 12, 2016 - link

    They'll release it this year. It says right there in the slide above: "fast storage: 2016"
  • boeush - Friday, August 12, 2016 - link

    First step toward mass adoption: hybrid SSDs with SCM buffer -- akin to the hybrid HDDs of yore.

    Next step, wholesale replacement of NAND with SCM -- initially at lower capacities and higher prices, but then asymptotically approaching parity - just as is now happening with SSDs vs HDDs.

    All this talk of an extra and additional layer between DRAM and NAND is, in my opinion, naive. Simplicity and convenience always win in the end.
  • FunBunny2 - Friday, August 12, 2016 - link

    -- Simplicity and convenience always win in the end.

    only if the bean counters permit. they've stalled 450mm wafers for the better part of a decade, for instance. faster only matters if the user notices. in the past, we had the Wintel monopoly (really, the Wintel symbiosis), under which M$ built ever more bloated code, demanding more cycles from the cpu. Intel was happy to oblige with ever faster cpus on ever smaller nodes.

    now, for 99.44% of users (even so-called Enterprise), a Pentium and a small SSD really are fast enough. in those old days, running the Office programs made the symbiosis viable. that whole field has been plowed to exhaustion, which matters, since the current sources of demand for faster are at least an order of magnitude smaller, i.e. gamers, video editing, and ??

    what matters these days is innterTubes bandwidth. not Intel's or the memory makers' bailiwick.
  • boeush - Friday, August 12, 2016 - link

    Your thinking reminds me somewhat of those people who used to think nobody would ever need more than 1 MB of RAM...

    Part of the reason PC performance stagnated, was indeed a lack of a performance-hungry killer app. I believe that the advent of VR is about to fix that. There is never such a thing as too much performance headroom, where high-quality VR is concerned. Bigger and more complex AIs will push memory/CPU performance as well, and when it comes to neural networks for instance, they'll be performance-starved essentially forever. And they'll be everywhere before long: in self-driving cars, in home robots, in games, etc.
  • Murloc - Saturday, August 13, 2016 - link

    your examples kinda prove the point that PCs have peaked in performance, since they could all do without a traditional x86 PC - especially the car and IoT, but also the VR stuff (and if it could be untethered, so much the better!).
  • jjj - Friday, August 12, 2016 - link

    The PC is dead anyway; there is no point in even considering the PC in their vision.
    In servers, perf matters; in glasses, latency is crucial and so is power. In robots, including cars, power and latency also matter a lot. In IoT there is no good solution yet, and there is an acute need for something better, as neither DRAM nor NAND fits the purpose.
  • jjj - Friday, August 12, 2016 - link

    "Western Digital decided to use SanDisk’s long-discussed ReRAM along with 3D manufacturing tech to build the aforementioned special-purpose SSDs"

    SanDisk's ReRAM has always been 3D; the way you phrase it suggests that it's a new idea.
    Capital intensity and scaling are not about using the same clean room; that's not where the big costs are.
    The comparison with what Micron and Intel are doing is way off. Yes, ofc they are using the same clean rooms, but XPoint is not using many of the same tools and they have to spend a lot; it's very capital-intensive and it doesn't scale all that well.
    The main point, which you should have been able to figure out just from the title of the presentation, is that there is a big cost difference between just adding layers (like XPoint) and something that is designed for 3D (like 3D NAND). Fewer process steps, a shorter cycle, less fab area and so on are what make the cost difference.
    That's the main point: adding layers by just repeating the same steps won't result in costs scaling well, while a clever design that processes multiple layers in one step leads to costs scaling much better.
    Maybe this helps understand how 3D NAND is not just adding layers http://www.chipworks.com/about-chipworks/overview/...
  • rahvin - Friday, August 12, 2016 - link

    This is why Intel/Micron aren't providing details about XPoint. HP supposedly has one in the works based on memristors, along with the FeRAM that's out there as well. They won't tip their hand until product is shipping, because of this.
  • Penti - Friday, August 12, 2016 - link

    So this is SanDisk/Toshiba's ReRAM effort, with WD branding? I.e., they will ship WD-branded drives/end-user products with ReRAM "soon", I guess. But I assume OEM customers (with a previous relationship with SanDisk) still get their stuff...
  • dealcorn - Friday, August 12, 2016 - link

    I interpret this WD news as a failed homage to Edwin Land's observation: "Someone is going to make your product obsolete. Make sure it's you." The market will ultimately decide who wins, but I see the deck as strongly and intentionally stacked in Intel's favor. While XPoint's virtues are viable through a variety of current interfaces, the full benefits of XPoint require a faster interface, such as the DIMM interface Intel promises with Kaby Lake. Intel will ship XPoint using any interface it can, but once Kaby Lake arrives, I expect the faster DIMM interface to dominate sales to the data center, which is where most SCM will be sold. If you pay a substantial premium for performance, it is dumb to leave performance on the table by using an inferior interface. I doubt Intel's firmware behind the Kaby Lake DIMM socket will support competing products that are not XPoint. If you exclude data center demand for SCM, the remaining demand that is willing to pay premium prices, such as high-end gaming, is table scraps that may not be adequate to support an economic ramp.

    Often in tech, he who ramps first and fastest wins. Even the technical superiority of Beta could not overcome the faster ramp of VHS. WD's release specs make it sound like a wannabe XPoint, except that it is not yet in production and WD cannot yet show a sample or tout performance advantages, which may not even be known at this time. Intel is already in production and ramps while we sleep. The idea that WD can leverage wannabe technology applied to a table-scraps market to create a competitive product is possible only if the technology is vastly superior. I do not get that vibe from what WD is saying today.

    Ultimately, after a successful ramp reduces costs significantly, XPoint escapes the confines of the Data Center and goes everywhere. However, Data Center dominance is the most likely and perhaps only path to a successful ramp.
  • name99 - Saturday, August 13, 2016 - link

    Wrong, wrong, wrong. ("However, Data Center dominance is the most likely and perhaps only path to a successful ramp.")
    The other possible route is that Apple uses SCM in the iPhone of 2018 (or whenever) as a way to yet again bump up IO speed and responsiveness. Then, after a few years, the rest of the phone market follows.
    If I were WD/SanDisk, this is the path I'd try to follow (i.e., be willing to make whatever changes and meet the price points Apple wants). But WD has not been an especially forward-looking company in the past (witness its lack of interest in SSDs and hybrids for so long), so I would not be optimistic that they're smart enough to do this.
  • vladx - Saturday, August 13, 2016 - link

    With 3D XPoint offering ~1000x lower latency than NAND, I don't know how this can compete with that. Too late, SanDisk/WD.
  • iwod - Monday, August 15, 2016 - link

    So basically this is WD/SanDisk's version of Optane. The interesting bit is (assuming the scale and graphs are correct) that ReRAM will reach 10x cheaper per GB than DRAM at around 2019. If we assume DRAM doesn't get any cheaper, we could get a $40 80 GB ReRAM drive in 2019.

    But where are the bottlenecks? We have PCIe SSDs that offer 2x the IOPS and 4x the bandwidth of the best SATA SSDs, and they make a negligible UX difference in 80-90% of use cases.

    So maybe it is a matter of software that needs some fundamental changes?
  • LukaP - Monday, August 15, 2016 - link

    The bottlenecks aren't so much in the consumer space (the UX you refer to) but more in the HPC and server space, where many applications are very IO-bound (e.g. databases), and there even the fastest of SSDs still bottleneck the CPU.
