Didn't Intel's slides on future CPUs talk about this new RAM? That would mean they might create their own market, and there's no real need to hope someone else is interested if it's architecturally a (semi-)requirement for Intel-based systems.
I was really excited about this article due to the lead-up on Twitter, but I'm really disappointed in the coverage of the technology.
I think Ian has a bit of confirmation bias going into this and did not examine PCM closely enough:
> During the discussions after the announcement, we were told categorically that this is not a phase change material, eliminating one potential avenue that it might be the change in the crystal structure of the cell producing the resistance change.

Here's the portion of the webcast: https://www.youtube.com/watch?v=VsioS35D-HY&t=...
"So…so let me take the first piece while Rob you jump in. First…first of all you shouldn't think of this as NAND or DRAM. You should think of it as a whole new class of memory. It…it…it…it really does fill it's own unique spot. Now it can be used in more of a storage type of application or it can be used more as a system main memory and we think it will be used as both. Uh…uh f…for different applications and different reasons. Um, but it really kinda fits in that…in that unique spot.
Now rel…I'm not familiar with sigma RAM I'm sorry maybe…maybe Rob is, but…but relative to…to phase change which has been the market before and which Micron has some experience with in the past. Uh, again, this is uh, this is a, this is a…a very different architecture in terms of the place it fills in the…in the…in the memory hierarchy because it has these…these dramatic improvements, uh, in speed, uh, and…and volatility and, uh, performance."
I don't view that as a categorical denial that it's PCM, just that it's a different architecture than the PCM product they had out before, which it is. This is cross point. And there is a lot of hesitation in this response; it seems like rather than answering the technology question, he goes back on message.
Add to that the patents, the LinkedIn profiles (an employee confirms working on a 2xnm PCM 3D cross-point chip since January 2014, see Giulio Albini), and the mentions in the webcast of "property change" and "bulk material".
The interesting thing is that 2xnm PCM cross-point technology has been on the roadmap for a while, but in 2014 mention of PCM was phased out. The 2014/2015 materials still mention "other technologies" though. It could be that the technology failed, but it seems more likely that there is some legal or corporate strategy behind not mentioning it.

2013 Fall and Summer slides: http://i.imgur.com/pAHeUPH.png
We had a separate question and answer session with Greg Matson, SSD Director at Intel. When specifically asked if it was PCM, he said he could confirm that it was not.
I'm guessing this Q&A session was not recorded; can you give an actual quote? Are they just arguing semantics and claiming that it is PCMS?

OK, now that I'm at a computer I can respond properly.
Kristian attended the event live; I was at the UK briefing led by Greg Matson, so all questions on my end went through him along with other UK-based press, and no, it was not recorded. It was specifically asked, 'Is this Phase Change?' and he responded, 'I can confirm it is not Phase Change.' The other journalist at that Q&A that I've seen pick up on this was the one who asked the question, Chris Mellor from The Register. Check his tweets on the subject as a double confirmation: https://twitter.com/Chris_Mellor/status/6267342455... https://twitter.com/Chris_Mellor/status/6267347543...
If you read through Chris' piece on XPoint, he comes to similar conclusions, based on the fact that a 64Mb phase change demo with an ovonic switch last year was different from Micron's slide demonstration of XPoint with a diode-based selector.
So standard PCM/PCMS revolves around bulk crystal structure changes and metastable forms to differentiate resistive states, hence the 'phase' part of phase change. Arguably conductive bridging is also a change in phase, from a charged ion to a conducting metal, although it is not specifically called phase change as such. It could also not necessarily be called a 'bulk change' as mentioned by Intel, although if the electrolyte layer is thin it would certainly act like bulk between the electrodes.
PCM, as of last year, was also considered one of the front runners leading into this technology based on the information released, although there have been reservations about the current needed to transition current materials and the resulting heat. Given Micron's investor briefing slides, conductive bridging is still perhaps the most likely, especially given how Matson answered the PCM question with a definitive no. I understand that a few analysts have stated it is PCM, given the watchful eye on patents and so forth, but coming direct from the source is hard to ignore alongside all the other suggestions.
As Kristian points out, Micron's investor roadmap points to a second technology also entering the market in a couple of years. If this isn't PCM, that could be, or vice versa. Or even STT.
Just for the record, I'm merely trying to pinpoint where the evidence leads me, rather than introduce any sort of bias here. Without a direct SEM image or quote from Intel, we can't be sure. Both PCM and CB can be done with many different materials, and I'd hazard a guess there are combinations that haven't been made public. So we're still talking about general methodology rather than specific physical interactions between named structures.
If anyone comes up with anything else, I'd be glad to hear and read it.
They have two future memory tech on their timeline, A and B. Perhaps we are seeing A now, and B is phase change.

BTW, I am still leaning very strongly towards PCM. It of course seems unlikely in the highest degree that Matson could have misspoken on something so basic. Maybe I'm not familiar enough with the tech industry, but it seems so very strange that they are so cagey about the tech. There must be a very strongly worded company-wide memo from legal. They seem to be able to confirm that it is a resistive memory element, but nothing beyond that. So from that aspect, it seems strange that someone would be willing to go on record stating what type of resistive memory element it is not.
Given the number of companies with promising cross-point-style resistive memory architectures (many of them PCM, e.g. ST), and the patent war chests to go with them, there is likely to be a legal battle that will make the whole RAMBUS thing seem like a small claims case.
I wonder what event is gating release of tech details. Is it a legal agreement? A patent date? A pending legal action?
Most likely they're wanting to protect their investment and not let the cat out of the bag for others to copy. Keeping IP and industry secrets close to the chest is part of the game, especially if there's 10 years of funding behind it. That's why we don't get any insights at all into things like Qualcomm's Adreno graphics and such: they want us to consider it a black box, and that's all they're willing to say on the issue.
There may be something legal too. Can't discount that for sure.
Don't confuse the public with the competition. Why they hide from the public, ask their IR and marketing. Their competitors know a lot more, and a lot sooner, than you imagine. When corporations claim "competitive reasons" it's a flat-out lie 99.99% of the time. Here, once they start sampling there is nothing to hide anymore, and they'll do that soon enough; there have already been rumors about the tech, and some might be working on controllers for the thing already, so the relevant competitors might have all the info they need. Samsung has been involved in plenty of scandals over the years, and Toshiba is in the middle of one right now, so don't imagine for a minute that big corporations have any kind of ethics or that they won't do what they need to do to obtain info. Micron has its summer analyst day on August 14 and will disclose more then; it remains to be seen how much.
This post literally gave me a headache. Damn, I wish I was smarter. Although from what I could... grasp (and I use that term so incredibly loosely), it looks awesome.
One question I had, though: if it's faster than NAND by a large margin, and more reliable, does that mean the introductory pricing will push enterprise SSD costs down, or will it simply be artificially inflated so as not to damage the profit margins from that sector?
It's difficult to say at this point, as it depends on what product segment will exploit XPoint the most. If we're looking at an intermediary for database applications, it might need a change at the hardware level and certainly at the software level, and be sold differently from storage. If it's acting as an SSD replacement, you'll most likely see it being sold at a premium against 3D NAND technologies and the market will adjust accordingly. There's also the aspect of competition, and whether anyone else will have something in this space soon.
Just to add to Ian's comment, there was also a private "Meet the Architect" Q&A after the webinar with Micron's VP of R&D and one of Intel's Senior Fellows, and the two went into great detail about how PCM never ended up being viable as a DRAM replacement due to scaling issues.
How about PCMS? This very informed article from a number of weeks ago predicted it would be PCMS: http://seekingalpha.com/article/3253655-intel-and-... He makes a very strong case, and that was before the announcement.
Obsessing about this is idiotic. Intel/Micron is avoiding certain language because that language has an unfortunate past (cf. Windows Vista becoming Windows 7: "Is it Windows Vista? No no no, completely new OS"...).
Whether it's phase change or not (or whether changing the material from one state to another counts as a phase change) is utterly irrelevant to anyone except the manufacturer. It's like if Intel announced 3D-NAND and the question everyone felt worth asking was what color the masks are.
The questions that DO matter are the user-facing questions --- performance (read and write), power, cost, reliability, form factor.
As an end-user, yes, it doesn't ultimately matter what the underlying technology is. But as an analyst interested in the science behind the industry, or as a financial investor looking at which technologies are keeping which companies in growth and where market share might shift, it's an absolute must-know.
User facing questions are about how the product is used. Business facing questions are about how the product fits in, and the technology behind it. Research related questions are about exploiting fundamental laws of physics in different ways, regardless of the name. All of these questions matter, even if you're not involved in the latter two segments.
This seems like it would complement HBM well. If an APU were made with "only" 4-8GB of on-package memory but could use swap space on an XPoint partition, the performance hit from paging could be pretty minor.
I don't think so... this is slower than current RAM. They aren't very likely to use HBM only on an APU for various reasons, so you're still going to be using something like DDR4 for your main memory. Which again, is faster than this XPoint tech.
XPoint is however a lot denser than RAM, and it's non-volatile so it will make excellent high-speed storage if we can get a better interface. I think in a few years we could at least be using it as a cache for NAND devices or as "boot drives" similar to how we were using then-costly NAND-based SSDs not so long ago.
If we're talking more in a "conventional" non-enterprise, consumer/professional product sense, then I believe this type of memory would be more of a complement to eDRAM (or other forms of higher density, lower speed cache memory), with DRAM completely omitted from the hierarchy. But this may fundamentally change the way operating systems and applications work, and depending on design/application, may lead to breakthrough performance gains.
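To put a rough number on the "paging hit could be pretty minor" idea, here is a quick average-access-time sketch. The latency figures are assumptions I'm picking purely for illustration (DRAM-class, XPoint-class and NAND-class ballparks), not vendor numbers:

```python
# Average-access-time estimate for a small fast memory backed by a swap tier.
# All latency figures below are illustrative assumptions, not vendor specs.
def avg_access_ns(hit_rate, fast_ns, slow_ns):
    """Expected latency when a fraction hit_rate of accesses hit the fast tier."""
    return hit_rate * fast_ns + (1.0 - hit_rate) * slow_ns

DRAM_NS = 100        # ~0.1 us for the on-package HBM/DRAM tier
XPOINT_NS = 1_000    # ~1 us assumed for an XPoint swap partition
NAND_NS = 100_000    # ~100 us for a NAND SSD swap, for comparison

for hit in (0.99, 0.999):
    xp = avg_access_ns(hit, DRAM_NS, XPOINT_NS)
    nd = avg_access_ns(hit, DRAM_NS, NAND_NS)
    print(f"hit rate {hit}: XPoint swap ~{xp:.0f} ns, NAND swap ~{nd:.0f} ns")
```

With a 99.9% hit rate the XPoint-backed case stays within a few percent of pure DRAM latency, while a NAND-backed swap roughly doubles it, which is the intuition behind the comment above.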
Ian, I have some serious doubts that this is STT-MRAM. The endurance and density numbers don't really line up. STT has virtually limitless endurance but fairly poor density due to the high current required, hence the need for a large transistor. I don't have the hubris to claim that it's impossible, but I believe it highly unlikely. Source: completed my dissertation in nanomagnetic logic and memory devices last year.
It might very well be perpendicular-magnetic-anisotropy magnetic tunnel junction STT-MRAM. It's a variant of STT-MRAM that does not suffer from the density issues and is more than an order of magnitude more efficient than conventional spin-torque transfer. It was covered in the AIP journal and published back in April of 2014 by Luc Thomas and associates. At the time they had IBM producing chips for them, as the entire process is fully compatible with the existing CMOS back-end and requires no special changes to the process. This expedited the research quite a bit, as they were able to test fully functioning chips.
About the positioning in the market, you are being a bit misleading initially. The technology itself is likely able to compete with NAND on pricing; there would be a process and layers race, but it could be doable.

So it's not really in between NAND and DRAM cost-wise, or at least that's not a must; it will cost us a lot more than NAND because Intel and Micron will milk the hell out of it.

About output, that's a strategy matter, the goal being to maximize profits and nothing else. The two companies are trying to justify their initial prices and markets by placing it in the middle. Sure, it is in the middle performance-wise, and the cost is likely higher for now than the most efficient NAND.

When you comment about power vs NAND, you forget to say that it would be per bit, and that's kinda relevant.

When you talk about how the layers are made and what they cost, it would be important to point out that 3D NAND has very poor planar density compared to 2D NAND; the density here seems to be very close to 2D NAND density. You make it sound like it would cost a lot more than 3D NAND, and I don't think that's the case at all. Sure, maybe it's 2-4 times more for now, but that's not too far off and it's a lot cheaper than RAM. Yes, scaling the layers seems costlier here than with 3D NAND.

When talking die size it stops being as misleading as some previous bits. On die size it looks more like 18+ dies across the wafer one way and close to 23 the other, so some 13x16mm for a 208-ish mm2 die.

High cell efficiency would be good too when scaling, so if they go 16nm and 4 layers in gen 2, it would be interesting.

Micron can double its profits once they max out that facility (and Intel takes half); I was assuming they'll push SSDs at $4-5 per GB too, but I'm sure they'll try to go even higher if they can.

As far as I know PCIe 4.0 was due in 2017, so not too far away.

You keep pushing their agenda at the end about where it can go. Look, DDR3 is some $4.5 per GB, DDR4 is getting close to $6.5 per GB, and a 128Gb NAND die is some $5, though the range is pretty wide for NAND ($3.5-6). Could they sell it in phones at $1-2 per GB? Easily, but they won't at first; it's more profitable not to. Will they do it in gen 2-3? Yeah, they will. They need to expand slowly, before others have their own 3D ReRAM solutions, and have a solid base by that time, while making a lot of money in the few years of monopoly.

Of course in phones they can go for 4-8GB at $3 per GB and less RAM to save power. Don't forget power in phones; on that alone it's worth using a hybrid RAM/ReRAM setup in the high end.
So overall I think you fail to make a clear distinction between the technology and the financial strategy. The big limitation on adoption is the very high margins; the technology itself seems plenty capable and cheap (a rough cost sketch below). In IoT it could be interesting too when it gets cheap enough, but it's not ideal since it's not quite as cheap and dense as the industry would like; a lot more is needed there long term. Anyway, great that we have this 5 or more years before it was expected, not so great (for us) that it might take a while before prices become accessible for consumers. At least this forces others to accelerate their ReRAM roadmaps.
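As a rough illustration of that technology-vs-pricing distinction, here is a back-of-the-envelope die-cost sketch. The die area and capacity follow the estimates in the comment (13x16mm, a 128Gbit die); the wafer cost and yield are assumptions picked purely for illustration:

```python
import math

DIE_AREA_MM2 = 13 * 16      # ~208 mm^2, per the estimate above
WAFER_DIAM_MM = 300
WAFER_COST_USD = 3000       # assumed processed-wafer cost (illustrative)
YIELD = 0.70                # assumed yield (illustrative)
GB_PER_DIE = 16             # a 128 Gbit die

def dies_per_wafer(die_area, diameter):
    """Standard gross-die estimate: wafer-area term minus an edge-loss term."""
    r = diameter / 2
    return math.pi * r * r / die_area - math.pi * diameter / math.sqrt(2 * die_area)

gross = dies_per_wafer(DIE_AREA_MM2, WAFER_DIAM_MM)
good = gross * YIELD
cost_per_gb = WAFER_COST_USD / good / GB_PER_DIE

print(f"~{gross:.0f} gross dies/wafer, ~{good:.0f} good dies at {YIELD:.0%} yield")
print(f"~${WAFER_COST_USD / good:.2f} per die, ~${cost_per_gb:.2f} per GB at the fab")
```

Even with these made-up inputs the manufacturing cost lands around a dollar per gigabyte, much nearer NAND than DRAM, which is the point being made: the gap to the likely street price is strategy, not physics.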
Hmm, I think I know why Intel is so invested in this. This will eventually replace NAND drives as performance storage, while the NAND drives of today become the cold, backup storage replacing spinning disk drives. I feel that 3D NAND has more potential for higher density and lower power versus disks. It might become more cost effective or cheaper than hard disks once OEMs start using NAND in cheap and mid-range PCs, because of the scale and fewer buyers of hard disks.
I think short term you may see Intel and Micron put a small amount of XPoint as a read/write cache onto their enterprise and performance-oriented SSDs. It would give them a decent performance advantage with a price bump modest enough to still attract consumers.
I've been looking forward to this writeup! I work in NSG at Intel (the Non-Volatile Memory Solutions Group i.e. the people developing 3D XPoint) and we've been super excited for this reveal.
It's fun to see the industry analysis, and as always Anandtech has one of the most in-depth!
Forgot to mention that in a promo video they claim SSDs with this would be up to 10x faster over PCIe/NVMe. https://www.youtube.com/watch?t=184&v=Wgk4U4qV... No idea how they do the math, of course, so I wouldn't expect 10x random.
If you want to know what's being sold, go back and look up Unity Semiconductor's CMOx tech. Rambus bought them, then Rambus and Micron settled, including a patent sharing arrangement. The last Unity CEO said, just before Rambus bought them, that 2015 was production year. Could be.
10^15 P/E cycles for DRAM? How does this work, when typical DRAM does on the order of 10^16 cycles in a year? I'm assuming a P/E cycle is the same as a clock cycle because of the constant refreshing; is this wrong?
I had to look this up, but the DDR3 standard calls for at least 8 refresh commands every 7.8 µs, which works out to one refresh roughly every 975 ns; rounding down to the nearest 50 ns gives one refresh every 950 ns. Calculated out, that equals roughly 3.32x10^13 cycles/year. That means DDR3 should survive up to 30 years with a 10^15 P/E cycle rating, even if you never turn off your computer or put it in hibernate.
In a refresh cycle, the information in a cell is read, then rewritten; there is no erase. I'm not sure how quickly a typical P/E cycle occurs when erasing and writing new data is required. If it is significantly quicker than 950 ns, the lifespan may come in below 30 years. However, unless you run intensive programs that delete and write new information to every memory cell every 32 ns, you are not going to exceed 10^15 P/E cycles in a year.
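For anyone who wants to check the arithmetic in that reply, here are the same numbers in a few lines:

```python
# Reproducing the refresh-endurance arithmetic from the comment above.
REFRESH_INTERVAL_NS = 950            # ~975 ns per refresh, rounded down to 950 ns
SECONDS_PER_YEAR = 365 * 24 * 3600
ENDURANCE_CYCLES = 1e15

refreshes_per_year = SECONDS_PER_YEAR * 1e9 / REFRESH_INTERVAL_NS
print(f"refresh cycles per year: {refreshes_per_year:.2e}")                      # ~3.3e13
print(f"years to reach 1e15:     {ENDURANCE_CYCLES / refreshes_per_year:.0f}")   # ~30
```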
Excellent work. Anandtech always has the best information and reviews, even if they are the last.
This is pretty exciting stuff. If storage can become fast enough, then perhaps we will not need memory. Theoretically this would be a massive improvement to efficiency and performance. I would argue that the perfect computer would only have a processor and extremely fast storage. This is not enough to fill the gap, but storage is certainly catching up.
As a gamer, the idea of having my game loaded onto storage that is fast enough to not need to load into the memory is pretty appealing. Zero load time, no texture streaming issues, and potentially larger scale.
I have to wonder about bandwidth with this tech. Latency is clearly between RAM and SSDs, but closer to RAM. I haven't seen any solid bandwidth stats, though.
In the article they mention that gamers can already bypass slow NAND and HDD speeds by just creating a RAM disk. If you have 32GB of RAM, you could keep 8GB of it for system memory, turn the other 24GB into a RAM disk, and put all of your game files onto it; your games will then load their resources at the speed of your RAM.
And DDR4 is coming down in price very quickly, so it isn't such a crazy idea. The cheapest 32GB DDR4 kit I can find is $176, which means 64GB will cost you $350 for games that have 40GB of resources. While not incredibly cheap, it's also not totally unreasonable, especially if you're already complaining about SSDs not loading game resources fast enough.
Sadly, 24GB is a bit short for modern games and 8GB for the OS and the game is also a bit on the low side. Games are finally taking advantage of 64-bit executables (and thus far larger memory cap) and it's showing up as a dramatic increase in asset size, both on disk and in memory.
64GB of RAM might get you there, but I think 32's on the short-ish side. 3D XPoint would side-step the issue by providing far more storage than contemporary games would likely need.
As said by Friendly0Fir, 24GB is unfortunately nothing today; many games have 20-50GB disk requirements (not sure if devs are plain lazy about optimizing or really need that much space for stuff). Plus don't forget that you need to first fetch the data into the RAM disk after boot, and wait for it to flush back out before shutdown. So personally I would not bother with RAM disks, and load times probably don't depend solely on read time from storage anyway. On some games I didn't see much difference between HDD and SSD load performance (which points to either bad game engine/coding or some other bottleneck, maybe my CPU). Not to mention that leaving only 8GB for the OS is really not that great.
Not to mention it's a giant pain in the butt to have to create the ram drive, copy all the files over, and then create all the links needed to actually run the game. By the time you're done futzing around with all that crap, you've cost yourself 10x the time you've saved in loading screens.
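For what it's worth, the copy-and-symlink chore can at least be scripted. A minimal sketch, assuming the RAM disk is already created and mounted; the paths here are placeholders, not anything from the article:

```python
import os
import shutil

# Hypothetical paths -- adjust for your own setup.
GAME_DIR = r"C:\Games\SomeBigGame"     # installed location on the SSD/HDD
RAMDISK_DIR = r"R:\SomeBigGame"        # target on an already-mounted RAM disk

# Copy the game onto the RAM disk, keep the on-disk copy as a fallback,
# then leave a symlink at the old path so the launcher still finds its files.
# (Creating directory symlinks on Windows needs admin rights or developer mode.)
shutil.copytree(GAME_DIR, RAMDISK_DIR)
backup = GAME_DIR + ".ondisk"
os.rename(GAME_DIR, backup)
os.symlink(RAMDISK_DIR, GAME_DIR, target_is_directory=True)
print(f"{GAME_DIR} -> {RAMDISK_DIR} (original kept at {backup})")
```

It still has to be redone after every reboot, since the RAM disk is volatile, which is exactly the futzing being complained about; a non-volatile XPoint tier would make the whole exercise unnecessary.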
"This is pretty exciting stuff. If storage can become fast enough, then perhaps we will not need memory. " imho this will "never" be true, RAM will always be faster, no matter how much you make storage faster you can still also improve RAM which in turn will always keep ahead of storage. Plus as shown in article it is much closer to CPU and thus better perf/latencies etc.
Maybe if XPoint v3 reaches the performance level of DDR3/4, diminishing returns could start to kick in, but by that time we will probably have DDR5/6 or HBM3. So I think RAM will stick around, even if it perhaps shifts into a CPU L4-like cache with HBM, for example.
Enterprise SSDs are too expensive for a low-end home desktop PC. Removing DRAM would make them cheaper because fewer capacitors should be needed. SSDs could probably be cheaper if they used the solid capacitors found on motherboards; the bigger size of those capacitors isn't a problem for a desktop PC.
You didn't specify cheap in your first comment. And you know that you can't have everything. If you want cheap then you give up something else. Like 5 year warranties and power loss protection.
The Intel 535 is a DRAM-less SSD with a 5-year warranty and without power loss protection. A solid motherboard capacitor costs about $1. A DRAM-less PCIe or M.2 SSD with motherboard capacitors shouldn't be much more expensive than the Intel 535.
Not DRAM-free, but with big capacitors that give the drive enough power to write down the cache. Most of the RAM is for the indexing table anyway. Maybe you just hate DRAM?
Not one mention of the mobile market, when it's an ideal place to replace DRAM + NAND. The fact that it's non-volatile will cut idle power usage, and you save PCB space by including it all in a single chip. Obviously database servers will be huge, but the place we're likely to see this stuff on the consumer market is in a cell phone.
The bottom of the Products & Applications page talks about mobile devices. It'd be great for all of the low-end to mid-range smartphones, but the high-end ones that do benefit from fast RAM are likely to keep using it, while the more mainstream phones could potentially switch to XPoint.
Thinking about it, wouldn't an ideal use for this sort of memory be as a write cache for storage devices? Almost as fast as RAM, and it doesn't need any sort of battery backup in case of power failures. Sounds perfect :)
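Conceptually that's a non-volatile write-back buffer in front of the slow device. A toy sketch of the idea, with plain dicts standing in for the XPoint tier and the backing disk:

```python
class WriteBackCache:
    """Toy write-back cache: writes land in a fast non-volatile tier and get
    flushed to the slow backing store in batches. Dicts stand in for devices."""

    def __init__(self, backing_store, flush_threshold=4):
        self.fast_tier = {}              # pretend this is XPoint: fast, non-volatile
        self.backing = backing_store     # pretend this is the NAND/HDD tier
        self.flush_threshold = flush_threshold

    def write(self, block, data):
        self.fast_tier[block] = data     # acknowledged once it hits the fast tier
        if len(self.fast_tier) >= self.flush_threshold:
            self.flush()

    def read(self, block):
        return self.fast_tier.get(block, self.backing.get(block))

    def flush(self):
        self.backing.update(self.fast_tier)   # write dirty blocks back in bulk
        self.fast_tier.clear()


disk = {}
cache = WriteBackCache(disk)
for i in range(6):
    cache.write(i, f"block-{i}")
print(sorted(disk))        # blocks 0-3 flushed; 4 and 5 still sit in the fast tier
print(cache.read(5))       # served from the fast tier
```

Because the buffer itself is non-volatile, a power cut doesn't lose the blocks that haven't been flushed yet, which is why no battery or supercapacitor backup would be needed for it.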
I don't agree with the assumptions in the article about how this won't be a good replacement for current SSDs because of cost. What I see here is that the prices for this arrive right at the price range of current SSDs. Yes, they are the higher-priced SSDs, but still, not higher than those.
It seems that the lesson of technology is lost here: all tech becomes cheaper. It's almost as though the writers have forgotten that the first SSDs cost $3,600 for 32GB drives. HD prices have continued to fall, but not nearly as fast as those of SSDs.
Apple has almost all of their computers using SSDs, and that has certainly helped. They also use a major portion of the world's supply of NAND in their iOS devices. I'm not plugging Apple here, just pointing out that a major consumer company can affect usage and pricing dramatically.
If Apple, or some other major manufacturer, decides that this tech is just what they need and begins to use it, then prices will begin to drop faster than otherwise thought.
I believe that this is a very good candidate for NAND drive substitution. And I feel as though it will begin happening more quickly than the writers here think it will.
Tech becomes cheaper as volume increases and manufacturing improves, but SSD NAND will also become cheaper, so it remains to be seen how well this technology's price will drop in comparison with SSD NAND. Many people are still using 5400 RPM hard disks in their laptops, so it is also not clear whether there will be anything to compel regular people into buying something faster than an SSD at a higher price.
I believe you’re falling into a marketing trap when you imply that 32-layer Flash has 32x the capacity of planar flash (or that 48-layer has 48x the capacity).
When flash vendors talk about 3D Flash layers they are actually talking about process layers and it takes about 8 of them to implement a full logical storage plane. So 32 layer NAND simply has quad planar capacity and 48 layers six times the capacity of a planar chip at the same process size.
And since in the past 3D V-NAND was used to stay on a higher geometry node for endurance, the actual capacity gain was even lower.
Intel/Micron's bending technique was another way to retain surface area at lower geometries.
And as the V- in the V-NAND implies, you can’t stack silicon layers with complete freedom, even if processing cost were no issue. They were building terraces originally, something that the Toshiba 3D process avoids.
Still 100 or 1000 layers won’t happen on silicon, because that’s like building a skyscraper using mud bricks.
However, that’s not an issue with HP’s Memristor device, because that’s not a silicon process and layers of titanium dioxide can be slapped on top of each other without any crystalline alignment issues or deposition/etching limitations.
That is one of the enduring limitations of XPoint vs. the Memristor: it seems to remain a silicon-based process, which means it doesn't allow anywhere near the number of layers that a non-crystalline process can do.
And since the cost per layer is close to linear and high on silicon, that means it fails to deliver Moore's promise economically.
I'm well aware that 3D NAND uses a much larger lithography and the density per layer is far from planar NAND. I apologize if it reads differently, but that was unintentional, not a praise talk for 3D NAND.
I think 100 layers will happen given that we are already close to 50 layers, but I agree that 1,000 layers would require a more significant change to the manufacturing process and materials.
My biggest fear with Xpoint is that Intel is attempting to create a de-facto monopoly around the NV-RAM space. They seem to have made a deal with HP for HP to delay the Memristor in return for some very favorable conditions on Xpoint, CPUs and whatever else HP needs to produce servers.
An open price war between Memristor-based and XPoint-based DDR4 DIMMs with hundreds of gigabytes if not terabytes of capacity would have left half the industry bleeding to death. Intel would have lost against HP technologically, because the Memristor scales better in 3D, retains data indefinitely and has no endurance issues at all (also better latency, potentially even beating DRAM), but it might have taken perhaps a little longer to get there.
And with Intel as an enemy and HP's current financial stand, there is a good chance they would have bled out on day 1 of that war.
So they agreed that it was better for both parties to delay the Memristor and give Intel a full run with XPoint to recoup their investments, and to let HP regain some health and a head start against Lenovo, Dell and SuperMicro, who have no Memristor on the back burner with which to negotiate back-channel rebates from Intel.
The only problem is that even if Xpoint looks like DDR4 RAM on the memory bus, it will require wear management, special initialization etc. via a control channel like SPI and in the BIOS.
Good luck trying to license that from Intel if you're a maker of ARM, AMD, p-Series or even z/Arch CPUs.
Intel gave up DRAM because it became a cut-throat commodity decades ago, but these days winds up making far less money off a standard big-data server than the DRAM manufacturers, even after having pushed everybody else off the motherboard (Intel may make more profit, though).
XPoint not only gives them back the biggest slice of the server cake, at a price they can move as close to DRAM as they want while their production cost may actually be far lower, but it also eliminates all these pesky little ARM competitors and finishes off big iron once and for all for lack of a competitive memory solution.
What was probably a smart tactical move for HP puts the future of the IT industry at risk, because Intel gets years of a practical, though thanks to Micron not legal, monopoly.
Micron is on the verge of a $23 billion hostile takeover. This joint Intel/Micron announcement came 3 days after that takeover bid.
Sorry, but silicon is not the future, it's the past. HP is in the driver's seat with the Memristor. Once they fire Meg and hire an engineering-minded board/CEO, leveraging their IP will make Intel one unhappy camper.
Unfortunately I don't see HP (ES) firing Meg anytime soon; she is going to HP ES as CEO... So I think the best chance to shuffle her off was during the separation, where she should have gone to HP Ink rather than ES. I would not hold my breath hoping that HP gets a good CEO; just look at the couple of recent CEOs we've had...
I struggle to see the purpose of this memory. While flash is much slower, latency is limited by the controller. If you put this 3d XPoint memory in an SSD, you gain very little in performance since the controller was the bottleneck anyway. Flash manufacturers can get much higher performance from the memory out of a NOR design at the cost of some density, but they don't do it because again the controller is the issue. All I really see this being used for is business applications where flash memory's endurance is too low to be suitable.
Also the term NAND only refers to the architecture of a memory system. I would not be surprised at all if 3D XPoint was also a NAND architecture. You might want to call the current tech flash or floating gate instead.
As always, I waited for Anandtech to tear it apart and explain it down to the finest point, as they did; this is the stuff that sets you apart from other "news" sites... So let's see what they bring to market...
DRAM, XPoint boot drive, SSD application drive and HDD bulk storage. Maybe we can usher in holographic storage and have another tier! I mostly kid, but man, it's getting excessive. It would be cool if this were added in through new/advanced memory controllers and utilized the DRAM slots. If the price is at least reasonably less than DRAM (by at least a factor of 2), I can see uses: legit OS drive space, useful for higher-end tablets and devices with embedded high-speed storage, a serious swap disk, etc.
16GB of RAM with 32-64GB of XPoint, and then the SSD/HDD storage below that, would probably make a pretty wicked system. Use the XPoint as a super-fast swap disk to do things like load large parts of games/applications, pre-loading as much as possible, and then quickly pull the parts that are needed into RAM instead of doing slower imports from SSD/HDD to RAM.
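One way to picture that super-fast-swap/pre-load flow is a three-tier lookup where assets are staged from the SSD into the XPoint tier ahead of time and promoted to RAM on use. Purely an illustrative sketch with made-up asset names:

```python
# Illustrative three-tier asset lookup: RAM -> XPoint staging tier -> SSD/HDD.
ram, xpoint, ssd = {}, {}, {"level2_textures": b"...", "level2_audio": b"..."}

def preload(asset):
    """Stage an asset from slow storage into the XPoint tier ahead of time."""
    if asset not in xpoint and asset in ssd:
        xpoint[asset] = ssd[asset]

def load(asset):
    """Fetch for immediate use, promoting from the fastest tier that has it."""
    if asset not in ram:
        ram[asset] = xpoint[asset] if asset in xpoint else ssd[asset]
    return ram[asset]

preload("level2_textures")   # done in the background while level 1 is playing
load("level2_textures")      # promoted from the XPoint tier, not the slow SSD
load("level2_audio")         # never staged, so this one pays the full SSD cost
print(sorted(ram), sorted(xpoint))
```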
A little disappointed that this doesn't sound like it'll be remotely economical to compete with NAND anytime soon. I had my hopes up with the density claims and what not that we might have a NAND replacement at HDD price per GB in a couple of years.
"A quick look at NewEgg puts DRAM pricing at approximately $5-6 per gigabyte, whereas the high-end enterprise SSDs are in the range of $2-3. While client SSDs can be had for as low as $0.35, they aren't really a fair comparison because at least initially 3D XPoint will be aimed for enterprise applications. My educated guess is that the first 3D XPoint based products will be priced at about $4 per gigabyte, possibly even slightly lower depending on how DRAM and NAND pricess fall within a year."
Oh, okay, see you in five years or so, when this technology becomes relevant to the consumer. @___@
I hate that Intel and Micron didn't talk about potential uses for this new technology. Letting my imagination run wild, I'm thinking small, battery powered embedded solutions is a good starting point. Basically IoT devices, from infrastructure sensors on up to smart watches. There are not many market standards in place in this category of computing devices and energy efficiency is more important than high performance. This XPoint tech could replace both NAND and DRAM in these devices, presumably increasing energy efficiency. It also provides small, adaptable platforms for developers to start programming for applications with no RAM.
I don't see XPoint replacing DRAM and NAND in smartphones and tablets any time soon. I assume it will take a while before OSes and apps can adapt to a no RAM environment. It will take a few SoC generations for this tech to have hardware support as well (unless they were already in-the-know). Smaller issues include degraded performance in RAM heavy applications (i.e. graphics processing) and increased hardware costs. The GPU might need its own RAM buffer (might I suggest HBM), further increasing implementation costs. Also, encrypted storage gets a little costly, from an energy perspective, when that storage bandwidth is very fast. Ideally, there will be a hardware encryption accelerator in the mix (and the OS will implement it (looking at you Android)).
However, there are a lot of potential benefits to replacing RAM and NAND completely with XPoint from a smart phone. The device could turn on and off almost instantly. Power management would not need to deal with the energy costs of shuffling around large amounts of data to enter and exit a sleep state. OSes and apps would be smaller and more efficient due to significantly reduced memory management concerns (some minimal wear leveling and ECC).
The first implementation of XPoint in smartphones/tablets will probably be as an added cache to accelerate the NAND and act as a swap partition. The NAND eMMC in most smart phones is more competitive with a modern HDD than a SSD when it comes to transfer speeds, so the NAND could definitely use a boost.
The day XPoint replaces NAND SSDs in the consumer space will be glorious. If speculation on price is correct, it may be a while before even the average enthusiast can rationalize the expense. However, I did see an Intel video that described DRAM as expensive and both XPoint and NAND as inexpensive. So, it would stand to reason that XPoint would be closer to the price of NAND than DRAM once initial market shock has subsided a bit and production has ramped up. However, Intel is rarely known to offer inexpensive products compared to its competition, so the real hope is that Micron pushes this technology at a price and quantity that stimulates quick market adoption.
Could you elaborate? I think this article in particular is far more in-depth than any of the other articles I've read on the topic and it really goes into great detail about the physics side as well.
There is NO mention of the error rate in 3D XPoint compared to enterprise NAND!? This could make a big difference in enterprise SSD controller architecture and implementation, such as whether LDPC, BCH, or a very simple error correction algorithm is good enough. Power is another big factor, especially on writes; one of the main limiting factors in enterprise SSD write performance is the power budget.
That's a good point and admittedly something I didn't think about. I would assume 3D XPoint is more robust than NAND given the higher performance and endurance, but Intel/Micron declined to talk about any failure mechanisms, so at this point it's hard to say how robust the technology is.
"Memory cells are accessed and written or read by varying the amount of voltage sent to each selector. This eliminates the need for transistors, increasing capacity and reducing cost."
...but 3d xpoint will be expensive, more like $10 per gigabyte.
With proper UEFI/BIOS support, one feature we proposed in a Provisional Patent Application was a "Format RAM" option prior to running Windows Setup. This would format RAM as an NTFS C: partition into which Windows software would be freshly installed. For comparison purposes, imagine a ramdisk in the upper 32-to-64GB of a large 1-to-2 TB DRAM subsystem, in a manner similar to how SuperSpeed's RamDisk Plus allocates RAM addresses. Then, imagine that all 2 TB consist of Non-Volatile DIMMs.

I can see this one feature enabling very rapid RESTARTS, even cold RESTARTS after a full power-down (for maintenance). If the UEFI/BIOS is told that the OS is already memory-resident, this one change radically improves the speed with which a routine STARTUP occurs, i.e. currently a STARTUP must load all OS software from a storage subsystem into RAM. If that OS software is already loaded into RAM, that "loading" is mostly eliminated under these new assumptions.

Moreover, mounting Optane on the 2.5" form factor should free designers to consider more aggressive overclocking of the data cables connecting motherboards to those 2.5" drives: just work backwards from PCIe 4.0's 16GHz clock and 128b/130b jumbo frame. It's possible that Optane will be fast enough to justify data cables that also oscillate at 16GHz, increasing to 32GHz with predictable success. Assuming x4 NVMe lanes at PCIe 4.0, then 4 lanes @ (16G / 8.125) ~= 4 lanes @ 2GB/s ~= 8 GB/s raw bandwidth per 2.5" device. Modern G.Skill DDR4 easily exceeds 25GB/s raw bandwidth. Thus, Optane should allow "overclocked" data cables to achieve blistering NVMe storage performance with JBOD devices, and even higher performance with RAID-0 arrays.
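The lane arithmetic in that last paragraph checks out; here it is spelled out, using only the numbers quoted above:

```python
# PCIe 4.0: 16 GT/s per lane with 128b/130b encoding (hence the 8.125 divisor).
GT_PER_SEC = 16
ENCODING = 128 / 130        # payload bits per bit on the wire
LANES = 4

gbps_per_lane = GT_PER_SEC * ENCODING / 8     # ~1.97 GB/s, i.e. 16 / 8.125
total = gbps_per_lane * LANES                 # ~7.9 GB/s for an x4 device

print(f"{gbps_per_lane:.2f} GB/s per lane, {total:.1f} GB/s for x4")
print(f"DDR4 at ~25 GB/s is still ~{25 / total:.1f}x more raw bandwidth")
```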
I don't know, is it possible to have an educated guess on this? Back in the PS2 days, before the PS3, I was @0zyx on a forum or few, talking about NASA RAM: magnet donuts on a metal grid of wires, and insisting, why don't we do this with memory today? The electricity crosses and creates a charge, or reads the charge. This is the RAM of the first space computer. ~ I was made confident by believing this is what AMD "Mirror Bit" memory was working towards before it flat out evaporated from the internet. The same happened to 48-bit Intel "Iranium" processors with 16 cores. I still look in books from time to time, hoping an old edition of hardware lists with the Intel spy CPU will confirm the internet is a black hole. Not to go Ellery Hale, being one of those who store curious science bits no one is using that everyone should be clamoring to own some day. ~ I check the metal recycling at the city dump for computer servers and extra-high-grade tower cases for my own builds, or at least parts from the towers anyway. ~ Twitter @0zyx ~ Either way, this is the memory design from the first NASA space capsule to carry people into space, except larger than 1 kilobyte. It may have been 512 bytes back then; not sure what sort of grid that is.
Pork@III - Friday, July 31, 2015 - link
1000X1000X10=3 Touch my crazy math! "Analyze This"Wwhat - Saturday, August 8, 2015 - link
Didn't intel slides on future CPU's talk about the new RAM? That would mean they might create their own market and there is no need to hope someone is interested really, if it's architecturally a (semi-)requirement for intel based systems.Wwhat - Saturday, August 8, 2015 - link
Oh excuse me, wasn't meant to be a reply but a standalone comment.[email protected] - Friday, July 31, 2015 - link
I was really excited about this article due to the leadup on Twitter. But I'm really disappointed on the coverage of the technology.I think Ian has a bit of confirmation bias going into this and did not examine PCM closely enough:
> During the discussions after the announcement, we were told
> categorically that this is not a phase change material, eliminating
> one potential avenue that it might be the change in the crystal
> structure of the cell producing the resistance change.
Here's the portion of the webcast:
https://www.youtube.com/watch?v=VsioS35D-HY&t=...
"So…so let me take the first piece while Rob you jump in. First…first of all you shouldn't think of this as NAND or DRAM. You should think of it as a whole new class of memory. It…it…it…it really does fill it's own unique spot. Now it can be used in more of a storage type of application or it can be used more as a system main memory and we think it will be used as both. Uh…uh f…for different applications and different reasons. Um, but it really kinda fits in that…in that unique spot.
Now rel…I'm not familiar with sigma RAM I'm sorry maybe…maybe Rob is, but…but relative to…to phase change which has been the market before and which Micron has some experience with in the past. Uh, again, this is uh, this is a, this is a…a very different architecture in terms of the place it fills in the…in the…in the memory hierarchy because it has these…these dramatic improvements, uh, in speed, uh, and…and volatility and, uh, performance."
I don't view that as a categorical denial that it's PCM. Just that it's a different architecture than the PCM product had out before, which it is. This is cross point. And there is a lot of hesitation in this response and it seems like rather than trying to answer the technology question, he goes back on message.
Along with patents, linkedin profiles (Employee confirms working on a 2xnm PCM 3D cross-point chip since January 2014, see Giulio Albini), and the mentions in the webcast of "property change" and "bulk material".
[email protected] - Friday, July 31, 2015 - link
The interesting thing is that 2xnm PCM cross point technology has been on the roadmap for a while, but in 2014, mention of PCM was phased out. The 2014/2015 materials still mention "other technologies" though. It could be that the technology failed. It seems more likely that there is some legal or corporate strategy for not mentioning the technology.2013 Fall and Summer slides: http://i.imgur.com/pAHeUPH.png
Ian Cutress - Friday, July 31, 2015 - link
We had a separate question and answer session with Greg Matson, SSD Director at Intel. When specifically asked if it was PCM, he said he could confirm that it was not.[email protected] - Friday, July 31, 2015 - link
I'm guessing this Q/A session was not recorded, can you give an actual quote? Are they just arguing semantics and claiming that it is PCMS?Ian Cutress - Friday, July 31, 2015 - link
OK now that I'm at a computer I can respond properly.Kristian attended the event live, I was at the UK briefing led by Greg Matson, so all questions on my end went through him with other press based in the UK, so no it was not recorded. It was specifically asked 'Is this Phase Change?' and he responded 'I can confirm it is not Phase Change'. The other journalist at that Q&A that I've seen pick up on this was the one that asked the question, Chris Mellor from The Register. Check his tweets on the subject as a double confirmation:
https://twitter.com/Chris_Mellor/status/6267342455...
https://twitter.com/Chris_Mellor/status/6267347543...
If you read through Chris' piece on XPoint, he comes to similar conclusions based that a 64Mb phase change demo with an ovonic switch last year was different to Micron's slide demonstration of XPoint with a diode-based selector.
So standard PCM/PCMS revolves around bulk crystal structure changes and metastable forms to differentiate resistive states, hence the 'phase' part of phase change. Arguably conductive bridging is also a change in phase, from a charged ion to a conducting metal, although is not specifically called phase change as such. It could also not necessarily be called a 'bulk change' as mentioned by Intel, although if the electrolyte layer is thin it would certainly act like bulk between the electrodes.
PCM, as of last year, was also considered one of the front runners leading into the technology based on information released although there have been reservations based on the currentneeded to transition current materials and the respective heat. Given Micron's investor briefing slides, conductive bridging is still perhaps the most likely, especially given how Matson answered the PCM question with an affirmative no. I understand that a few analysts have stated is PCM, given the watchful eye on patents and so forth, but coming direct from the source is hard to ignore with all the other suggestions.
As Kristian points out, Micron's investor roadmap points to a second technology in a couple of years also entering the market. If this isn't PCM, that could be, or vice versa. Or even STT.
Just for the record I'm merely trying to pinpoint where the evidence leads me, rather than introduce any sort of bias here. Without a direct SEM or quote from Intel, we can't be sure. Both PCM and CB can be done with many different materials, and I'd hazard a guess there are combinations that haven't been made public. So we're still talking about general methodology rather than specific physical interations between named structures.
If anyone comes up with anything else, I'd be glad to hear and read it.
[email protected] - Friday, July 31, 2015 - link
They have two future memory tech on their timeline, A and B. Perhaps we are seeing A now, and B is phase change.[email protected] - Friday, July 31, 2015 - link
BTW, I am still leaning very strongly towards PCM. It of course seems unlikely in the highest degree that Matson could of misspoken on something so basic. Maybe I'm not familiar enough with the tech industry, but it seems so very strange that they are so cagey on the tech. There must be a very strongly company wide memo from legal. They seem to be able to confirm that it is a resistive memory element, but nothing beyond that. So from that aspect, it seems strange that someone would be willing to go on record stating what type of resistive memory element it is not.Given the number of companies with promising cross point style resistive memory architectures (many of them PCM, eg, ST), and the patent warchests to go with them, there is likely to be a legal battle that will make the whole RAMBUS thing seem like it was a small claims case.
I wonder what event is gating release of tech details. Is it a legal agreement? A patent date? A pending legal action?
Ian Cutress - Saturday, August 1, 2015 - link
Most likely they're wanting to protect their investment and not let the cat out of the bag for others to copy. Keeping IP close to the chest and industry secrets is part of the game is important, especially if there's 10 years of funding behind it. That's why we don't get any insights at all into things like Qualcomm's Adreno graphics and such - to them they want us to consider it a black box and that's all they're willing to speak on the issue.There may be something legal too. Can't discount that for sure.
jjj - Saturday, August 1, 2015 - link
Don't confuse the public with the competition. Why they hide from the public, ask their IR and marketing.Their competitors know a lot more and a lot sooner than you imagine. When corporations claim"competitive reasons" it's a flat out lie 99.99% of the time. Here once they start sampling there is nothing to hide anymore and they'll do that soon enough although there have been rumors about the tech and some might be working on controllers for the thing already so the relevant competitors might have all the info they need- Samsung has been involved in plenty of scandals over the year, Toshiba is in the middle of one right now so don't imagine for a minute that big corporations have any kind of ethics and they won't do what they need to do to obtain info.Micron has it's summer analyst day on August 14 and they will disclose more then, remains to be seen how much.
Tunnah - Saturday, August 1, 2015 - link
This post literally gave me a headache.Damn I wish I was smarter. Although from what I could...grasp (and I use that term so incredibly loosely) it looks awesome.
One question I had though, if it's faster by a large margin than NAND, and more reliable, does that mean the introductory pricing will push enterprise SSD costs down, or simply be artificially inflated as to not damage the profit margins from that sector ?
Ian Cutress - Saturday, August 1, 2015 - link
It's difficult to say at this point as it depends on what product segment will exploit XPoint the most. If we're looking at an intermediary for database applications, it might need a change on the hardware level and certainly at a software level, and be sold different to storage. If it's acting as an SSD replacement, you'll most likely see it being sold at a premium against 3D NAND technologies and the market will adjust accordingly. There's also the aspect of competition too, and if anyone else will have something in this space soon.Kristian Vättö - Monday, August 3, 2015 - link
Just to add to Ian's comment, there was also a private "Meet the Architect" Q&A after the webinar with Micron's VP of R&D and one of Intel's Senior Fellows and the two went into great detail of how PCM never ended up being viable to replace DRAM due to scaling issues.witeken - Friday, July 31, 2015 - link
How about PCMS? This very informed article a number of weeks ago predicted it would be PCMS. He makes a very strong case, and that was before the announcement.http://seekingalpha.com/article/3253655-intel-and-...
name99 - Friday, July 31, 2015 - link
Obsessing about this is idiotic.Intel/Micron is avoiding certain language because that language has an unfortunate past (cf Windows Vista becomes Windows 7 --- "is it Windows Vista? No no no, Completely new OS"...)
Whether it's phase change or not (or whether changing the material from one state to another counts as a phase change) is utterly irrelevant to anyone except the manufacturer. It's like if Intel announced 3D-NAND and the question everyone felt worth asking was what color the masks are.
The questions that DO matter are the user-facing questions --- performance (read and write), power, cost, reliability, form factor.
Ian Cutress - Saturday, August 1, 2015 - link
As an end-user, yes it doesn't ultimately matter what the underlying technology is.As an analyst interested in the science behind the industry, or if you were a financial investment agent looking into the market to see which technologies are keeping which companies in growth figures with potential market share adjustments, it's an absolute must-know.
User facing questions are about how the product is used. Business facing questions are about how the product fits in, and the technology behind it. Research related questions are about exploiting fundamental laws of physics in different ways, regardless of the name. All of these questions matter, even if you're not involved in the latter two segments.
Refuge - Monday, August 3, 2015 - link
This is Anandtech right? I didn't click on the wrong link?I thought this site existed solely because we all obsess over the latest tech, and appreciate knowing how the nitty gritty's all work together. ;)
KateH - Friday, July 31, 2015 - link
This seems like it would compliment HBM well. If an APU was made with "only" 4-8GB of on-package memory, but could use swap space on a XPoint partition, the performance hit from paging could be pretty minor.Alexvrb - Friday, July 31, 2015 - link
I don't think so... this is slower than current RAM. They aren't very likely to use HBM only on an APU for various reasons, so you're still going to be using something like DDR4 for your main memory. Which again, is faster than this XPoint tech.XPoint is however a lot denser than RAM, and it's non-volatile so it will make excellent high-speed storage if we can get a better interface. I think in a few years we could at least be using it as a cache for NAND devices or as "boot drives" similar to how we were using then-costly NAND-based SSDs not so long ago.
lilmoe - Monday, August 3, 2015 - link
If we're talking more in a "conventional" non-enterprise, consumer/professional product sense, then I believe this type of memory would be more of a complement to eDRAM (or other forms of higher density, lower speed cache memory), with DRAM completely omitted from the hierarchy. But this may fundamentally change the way operating systems and applications work, and depending on design/application, may lead to breakthrough performance gains.Scoobmx - Friday, July 31, 2015 - link
Ian, I have some serious doubts that this is STT-MRAM. The endurance and density numbers don't really line up. STT has virtually limitless endurance but fairly poor density due to the high current required, hence the need for a large transistor. I don't have the hubris to claim that it's impossible, but I believe it highly unlikely. Source: completed my dissertation in nanomagnetic logic and memory devices last year.J03_S - Friday, July 31, 2015 - link
It might very well be Perpendicular Magnetic Anisotropic Magnetic Tunneling Junction STT-MRAM. It's a variant of STT-MRAM that does not suffer from the density issues and is more than one order of magnitude efficient than Spin torque transfer. It was covered in the AIP journal and published back in April of 2014 by Luc Thomas and associates. At the time they had IBM producing chips for them as the entire process is fully compatible with the existing CMOS backend and requires no special changes be made to the process. This expedited the research quite a bit as they were able to test fully functioning chips.jjj - Friday, July 31, 2015 - link
About the positioning in the market you are being a bit misleading initially.The technology itsalf is likely able to compete with NAND in pricing,there would be a process and layers race but it could be doable.
So it's not really in between NAND and DRAM, cost wise, at least that's not a must, it will cost us a lot more than NAND because Intel and Micron will milk the hell out of it.
About output, that's a startegy matter, the goal being to maximize profits ,nothing else matters. The 2 companies are trying to justify their initial prices and markets by placing it inthe middle- sure it is in the middle perf wise and cost is likely higher for now than the most efficient NAND.
When you comment about power vs NAND you forget to say that it would be per bit and that's kinda relevant.
When you talk about how the laywer are made and costs, it would be important to point out that 3D NAND has very poor planar density compared to 2D NAND. the density here seems to be very close to 2D NAND density. You make it sound like it would cost a lot more than 3D NAND and don't think that's a case at all. Sure maybe it's 2-4 times more than more for now but that's not too far and it's a lot cheaper than RAM. Yes scaling the layers seems costlier here than with 3D NAND.
When talking die size it stops being as misleading as some previous bits. On die size it looks more like 18+ dies and close to 23 so some 13x16mm for 208-ish mm2.
High cell efficiency would be good too when scaling soif they go 16nm 4 layers in gen 2,it would be interesting.
Micron can double its profits once they max out that facility (and Intel takes half). I was assuming they'll push SSDs at $4-5 per GB too, but I'm sure they'll try to go even higher if they can.
As far as I know, PCIe 4.0 was due in 2017, so not too far away.
You keep pushing their agenda at the end about where it can go. Look, DDR3 is some $4.50 per GB, DDR4 is getting close to $6.50 per GB, and 128Gb NAND is some $5, but the range is pretty wide for NAND ($3.50-6). Could they sell it in phones at $1-2 per GB? Easily, but they won't at first; it's more profitable not to. Will they do it in gen 2-3? Yeah, they will. They need to expand it slowly before others have their own 3D ReRAM solutions and have a solid base by that time, while making a lot of money with it in the few years of monopoly.
Of course in phones they can go for 4-8GB at $3 per GB and less RAM to save power. Don't forget power in phones; just on that it's worth using a hybrid RAM/ReRAM in the high end.
So overall I think you fail to make a clear distinction between the technology and the financial strategy. The big limitation on adoption is the very high margins; the technology itself seems plenty capable and cheap. In IoT it could be interesting too when it gets cheap enough, but it's not ideal since it's not quite as cheap and dense as the industry would like; a lot more is needed there long term.
Anyway, it's great that we have this 5 or more years before it was expected, and not so great (for us) that it might take a while before prices become accessible for consumers. At least this forces others to accelerate their ReRAM roadmaps.
zodiacfml - Friday, July 31, 2015 - link
Hmm, I think I know why Intel is so invested in this. This will eventually replace NAND drives as performance storage, while the NAND drives of today become the cold, backup storage replacing spinning disk drives. I feel that 3D NAND has more potential for higher density and lower power versus disks. It might become more cost effective or cheaper than hard disks once OEMs start using NAND in cheap and mid-range PCs, because of the scale and fewer buyers of hard disks.
DrKlahn - Friday, July 31, 2015 - link
I think short term you may see Intel and Micron put a small amount of XPoint as a read/write cache onto their enterprise and performance-oriented SSDs. It would give them a decent performance advantage with a price bump modest enough to still attract consumers.
Drumsticks - Friday, July 31, 2015 - link
I've been looking forward to this writeup! I work in NSG at Intel (the Non-Volatile Memory Solutions Group, i.e. the people developing 3D XPoint) and we've been super excited for this reveal. It's fun to see the industry analysis, and as always Anandtech has one of the most in-depth!
Vlad_Da_Great - Friday, July 31, 2015 - link
@Drumsticks. Keep up the good work; the world is moving thanks to people like you and INTC as a company. Thank you!!!
jjj - Friday, July 31, 2015 - link
Forgot to mention that in a promo video they claim SSDs with this would be up to 10x faster over PCIe/NVMe. https://www.youtube.com/watch?t=184&v=Wgk4U4qV... No idea how they do the math, of course, so I wouldn't expect 10x random.
FunBunny2 - Friday, July 31, 2015 - link
If you want to know what's being sold, go back and look up Unity Semiconductor's CMOx tech. Rambus bought them, then Rambus and Micron settled, including a patent sharing arrangement. The last Unity CEO said, just before Rambus bought them, that 2015 was the production year. Could be.
nwarawa - Friday, July 31, 2015 - link
I can't wait for this to be a normal conversation:
A: "How much storage do you have?"
B:"256GB"
A:"RAM or on your drive?"
B:"Yes."
ajp_anton - Friday, July 31, 2015 - link
10^15 P/E cycles for DRAM? How does this work, given that typical DRAM does on the order of 10^16 cycles in a year? I'm assuming a P/E cycle is the same as a clock cycle because of the constant refreshing; is this wrong?
Crazy1 - Saturday, August 1, 2015 - link
I had to look this up, but the DDR3 standard calls for at least 8 refresh commands every 7.8 usec. That works out to roughly one refresh every 975 ns; rounding down to the nearest 50 ns gives one refresh every 950 ns. Calculated out, that equals roughly 3.32x10^13 cycles/year. That means DDR3 should survive up to 30 years with a 10^15 P/E cycle rating, while never turning off your computer or putting it in hibernate.
In a refresh cycle, the information in a cell is read, then rewritten. There is no erase. I'm not sure how quickly a typical P/E cycle occurs when erasing and writing new data is required. If it is significantly quicker than 950 ns, there may be a decrease in lifespan from 30 years. However, unless you run intensive programs that delete and write new information to all memory cells every 32 ns, you are not going to exceed the 10^15 P/E cycles in a year.
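A quick sketch of that arithmetic (Python; the 950 ns effective refresh interval and the 10^15-cycle rating are the figures assumed in the comment above, not official spec values):

```python
# Rough check of the refresh-endurance math above.
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.16e7 s
REFRESH_INTERVAL_NS = 950                  # assumed effective per-cell refresh interval
ENDURANCE_CYCLES = 1e15                    # quoted P/E cycle rating

cycles_per_year = SECONDS_PER_YEAR * 1e9 / REFRESH_INTERVAL_NS
years_to_rating = ENDURANCE_CYCLES / cycles_per_year

print(f"refresh cycles per year: {cycles_per_year:.2e}")    # ~3.3e13
print(f"years to reach the rating: {years_to_rating:.0f}")  # ~30
```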
TallestJon96 - Friday, July 31, 2015 - link
Excellent work. Anandtech always has the best information and reviews, even if they are the last.
This is pretty exciting stuff. If storage can become fast enough, then perhaps we will not need memory. Theoretically this would be a massive improvement to efficiency and performance. I would argue that the perfect computer would only have a processor and extremely fast storage. This is not enough to fill the gap, but storage is certainly catching up.
As a gamer, the idea of having my game loaded onto storage that is fast enough to not need to load into the memory is pretty appealing. Zero load time, no texture streaming issues, and potentially larger scale.
I have to wonder about bandwidth with this tech. Latency is clearly between RAM and SSDs, but is closer to RAM. But I haven't seen any solid bandwidth stats.
Freakie - Friday, July 31, 2015 - link
In the article they mention that gamers can already bypass slow NAND and HDD speeds by just creating a RAM disk. If you have 32GB of RAM, you could take 8GB of it for your system memory, turn the other 24GB into a RAM disk, and put all of your game files onto it, and then your games will load their resources at the speed of your RAM.
And DDR4 is coming down in price very quickly, so it isn't such a crazy idea. The cheapest 32GB DDR4 kit I can find is $176, which means 64GB will cost you $350 for games that have 40GB of resources. While not incredibly cheap, it's also not totally unreasonable, especially if you're already complaining about SSDs not loading game resources fast enough.
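A minimal sketch of that sizing and cost argument (Python; the 8GB OS reservation and the $176-per-32GB kit price are simply the figures quoted in the comment above):

```python
# Sketch of the RAM-disk sizing math above (commenter's example figures).
OS_RESERVE_GB = 8                 # RAM left for the OS and the running game
PRICE_PER_GB = 176 / 32           # ~$5.50/GB at the quoted 32GB kit price

def ram_disk_build(game_assets_gb: float) -> tuple[float, float]:
    """Total RAM needed and rough cost to hold a game's assets on a RAM disk."""
    total_gb = game_assets_gb + OS_RESERVE_GB
    return total_gb, total_gb * PRICE_PER_GB

for assets in (24, 40, 50):
    gb, cost = ram_disk_build(assets)
    print(f"{assets}GB of assets -> {gb}GB of RAM, roughly ${cost:.0f}")
```

On Linux the RAM disk itself is just a tmpfs mount, e.g. `mount -t tmpfs -o size=24g tmpfs /mnt/ramdisk`; the sizing question above is the harder part.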
Friendly0Fire - Saturday, August 1, 2015 - link
Sadly, 24GB is a bit short for modern games and 8GB for the OS and the game is also a bit on the low side. Games are finally taking advantage of 64-bit executables (and thus far larger memory cap) and it's showing up as a dramatic increase in asset size, both on disk and in memory.
64GB of RAM might get you there, but I think 32's on the short-ish side. 3D XPoint would side-step the issue by providing far more storage than contemporary games would likely need.
lordken - Sunday, August 2, 2015 - link
As said by Friendly0Fire, 24GB is unfortunately nothing today; many games now have 20-50GB disk requirements (not sure if devs are plain lazy about optimizing or really need that much space for their assets). Plus, don't forget that you need to first fetch data into the RAM disk after boot, and wait for it to flush out before shutdown. So personally I would not bother with RAM disks, and load times probably don't depend solely on read speed from storage anyway. In some games I didn't see much difference between HDD and SSD load performance (which suggests either a bad game engine/coding or some other bottleneck, maybe my CPU).
And needless to say, leaving only 8GB for the OS is really not that great.
JKflipflop98 - Monday, August 3, 2015 - link
Not to mention it's a giant pain in the butt to have to create the ram drive, copy all the files over, and then create all the links needed to actually run the game. By the time you're done futzing around with all that crap, you've cost yourself 10x the time you've saved in loading screens.
lordken - Sunday, August 2, 2015 - link
"This is pretty exciting stuff. If storage can become fast enough, then perhaps we will not need memory. "imho this will "never" be true, RAM will always be faster, no matter how much you make storage faster you can still also improve RAM which in turn will always keep ahead of storage. Plus as shown in article it is much closer to CPU and thus better perf/latencies etc.
Maybe in case when Xpoint v3 reach performance level of DDR3/4 then diminishing returns could start to kick in , but still by that time we will probably have DDR5/6 or HBM3. So I think RAM will stick around, even if it could perhaps shift into CPU L4 like cache with HBM for example.
dlop - Friday, July 31, 2015 - link
I'm still using only 5400 RPM HDDs. I'm waiting for a DRAM-less SSD with 2-bit MLC memory, power loss protection and a 5-year warranty.
jamyryals - Friday, July 31, 2015 - link
Buy an enterprise SSD, they have those features.
dlop - Saturday, August 1, 2015 - link
Enterprise SSDs are too expensive for a low-end home desktop PC. Removing the DRAM would make them cheaper because fewer capacitors should be needed. SSDs could probably be cheaper if they used the solid capacitors that are used on motherboards; the bigger size of those capacitors isn't a problem for a desktop PC.
Zan Lynx - Sunday, August 16, 2015 - link
You didn't specify cheap in your first comment. And you know that you can't have everything. If you want cheap then you give up something else. Like 5 year warranties and power loss protection.
dlop - Monday, August 17, 2015 - link
The Intel 535 is a DRAM-less SSD with a 5-year warranty but without power loss protection. A solid motherboard capacitor costs about $1. A DRAM-less PCIe or M.2 SSD with motherboard capacitors shouldn't be much more expensive than the Intel 535.
MrBowmore - Friday, July 31, 2015 - link
Intel 750?
MrBowmore - Friday, July 31, 2015 - link
Not DRAM-free, but with big capacitors that give the drive enough power to write down the cache. Most of the RAM is for the indexing table anyway. Maybe you just hate DRAM?
JKflipflop98 - Monday, August 3, 2015 - link
Awfully picky for someone willing to put up with such a crappy drive for so long.
toooskies - Friday, July 31, 2015 - link
Not one mention of the mobile market, when it's an ideal place to replace DRAM + NAND. The fact that it's non-volatile will cut idle power usage, and you save PCB space by including it all in a single chip. Obviously database servers will be huge, but the place we're likely to see this stuff on the consumer market is in a cell phone.
Freakie - Friday, July 31, 2015 - link
The bottom of the Products & Applications page talks about mobile devices. It'd be great for all of the low-end to mid-range smartphones, but the high-end ones that do benefit from fast RAM are likely to keep using RAM, while the more mainstream phones could potentially switch to XPoint.
failquail - Friday, July 31, 2015 - link
Thinking about it, wouldn't an ideal use for this sort of memory be as a write cache for storage devices? Almost as fast as RAM, and it does not need any sort of battery backup in case of power failures. Sounds perfect :)
melgross - Friday, July 31, 2015 - link
I don't agree with the assumptions in the article about how this won't be a good replacement for current SSDs because of cost. What I see here is that the prices for this arrive right at the price range of current SSDs. Yes, the higher-priced SSDs, but still, not higher.
It seems that the lesson of technology is lost here. All tech becomes cheaper. It's almost as though the writers have forgotten that the first SSDs cost $3,600 for 32GB drives. HD prices have continued to fall, but not nearly as fast as those of SSDs.
Apple has almost all of their computers using SSDs, and that has certainly helped. They also use a major portion of the world's supply of NAND in their iOS devices. I'm not plugging Apple here, just pointing out that a major consumer company can affect usage and pricing dramatically.
If Apple, or some other major manufacturer, decides that this tech is just what they need and begins to use it, then prices will begin to drop faster than otherwise thought.
I believe that this is a very good candidate for NAND drive substitution. And I feel as though it will begin happening more quickly than the writers here think it will.
Oxford Guy - Saturday, August 1, 2015 - link
Tech becomes cheaper as volume increases and manufacturing improves, but SSD NAND will also become cheaper. So it remains to be seen how well this technology's price will drop in comparison with SSD NAND. Many people are still using 5400 RPM hard disks in their laptops, so it is also not clear if there will be anything to compel regular people into buying something faster than an SSD at a higher price.
abufrejoval - Friday, July 31, 2015 - link
I believe you're falling into a marketing trap when you imply that 32-layer flash has 32x the capacity of planar flash (or 48-layer, 48x capacity).
When flash vendors talk about 3D flash layers they are actually talking about process layers, and it takes about 8 of them to implement a full logical storage plane. So 32-layer NAND simply has quad planar capacity, and 48 layers six times the capacity of a planar chip at the same process size.
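As a rough sketch of that layer arithmetic (Python; the "about 8 process layers per logical storage plane" figure is the commenter's estimate, not a vendor number):

```python
# Effective capacity multiplier vs. planar NAND under the comment's assumption
# that roughly 8 process layers are needed per logical storage plane.
PROCESS_LAYERS_PER_PLANE = 8

def capacity_multiplier(process_layers: int) -> float:
    return process_layers / PROCESS_LAYERS_PER_PLANE

for layers in (32, 48):
    print(f"{layers}-layer NAND -> ~{capacity_multiplier(layers):.0f}x planar capacity")
```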
And since in the past 3D V-NAND was used to stay on a larger geometry node for endurance, the actual capacity gain was even lower.
Intel/Micron's bending technique was another way to retain surface area at smaller geometries.
And as the V- in the V-NAND implies, you can’t stack silicon layers with complete freedom, even if processing cost were no issue. They were building terraces originally, something that the Toshiba 3D process avoids.
Still 100 or 1000 layers won’t happen on silicon, because that’s like building a skyscraper using mud bricks.
However, that’s not an issue with HP’s Memristor device, because that’s not a silicon process and layers of titanium dioxide can be slapped on top of each other without any crystalline alignment issues or deposition/etching limitations.
That is one of the enduring limitations of XPoint vs. the Memristor: the fact that it seems to remain a silicon-based process, which means it doesn't allow anywhere near the number of layers that a non-crystalline process can do.
And since the cost per layer is close to linear and high on silicon, that means it fails to deliver Moore's promise economically.
Kristian Vättö - Monday, August 3, 2015 - link
I'm well aware that 3D NAND uses a much larger lithography and the density per layer is far from planar NAND. I apologize if it reads differently; that was unintentional, not an attempt to praise 3D NAND.
I think 100 layers will happen given that we are already close to 50 layers, but I agree that 1,000 layers would require a more significant change to the manufacturing process and materials.
abufrejoval - Friday, July 31, 2015 - link
My biggest fear with XPoint is that Intel is attempting to create a de-facto monopoly around the NV-RAM space. They seem to have made a deal with HP for HP to delay the Memristor in return for some very favorable conditions on XPoint, CPUs and whatever else HP needs to produce servers.
An open price war between Memristor-based and XPoint-based DDR4 DIMMs with hundreds of gigabytes, if not terabytes, of capacity would have left half the industry bleeding to death. Intel would have lost against HP technologically, because the Memristor scales better in 3D, retains data indefinitely and has no endurance issues at all (also better latency, potentially even beating DRAM), but it might have taken perhaps a little longer to get there.
And with Intel as an enemy and HP's current financial standing, there is a good chance they would have bled out on day 1 of that war.
So they agreed that it was better for both parties to delay the Memristor and give Intel a full run with XPoint to recoup their investments, and to let HP regain some health and a head start against Lenovo, Dell and SuperMicro, who have no Memristor on the back burner to negotiate back-channel rebates with Intel.
The only problem is that even if Xpoint looks like DDR4 RAM on the memory bus, it will require wear management, special initialization etc. via a control channel like SPI and in the BIOS.
Good luck trying to license that from Intel if you're a maker of ARM, AMD, p-Series or even z/Arch CPUs.
Intel gave up DRAM because it became a cut-throat commodity decades ago, but these days it winds up making far less money off a standard big-data server than the DRAM manufacturers, even after it has pushed everybody else off the motherboard (Intel may make more profit, though).
XPoint not only gives them back the biggest slice of the server cake, at a price they can move as close to DRAM as they want while their production cost may actually be far lower, but it also eliminates all these pesky little ARM competitors and finishes off big iron once and for all for lack of a competitive memory solution.
What was probably a smart tactical move for HP puts the future of the IT industry at risk, because Intel gets years of a practical, though thanks to Micron not a legal, monopoly.
mdriftmeyer - Saturday, August 1, 2015 - link
Micron is on the verge of a hostile takeover of $23 billion. This joint Intel/Micron announcement came 3 days after that takeover bid.
Sorry, but silicon is not the future, it's the past. HP is in the driver's seat with the Memristor. Once they fire Meg and hire an engineering board/CEO, leveraging their IP will make Intel one unhappy camper.
lordken - Sunday, August 2, 2015 - link
Unfortunately I don't see HP (ES) firing Meg anytime soon; she is going to HP ES as CEO... So I think the best chance to shuffle her off was during the separation, where she should rather have gone to HP Ink than ES. I would not hold my breath hoping that HP gets a good CEO; just look at the couple of recent CEOs we've had...
Michael Bay - Sunday, August 2, 2015 - link
The Memristor as a technology is dead; HP is switching away from it. So there is no need for Intel to have any kind of dealings with them.
Khenglish - Saturday, August 1, 2015 - link
I struggle to see the purpose of this memory. While flash is much slower, latency is limited by the controller. If you put this 3D XPoint memory in an SSD, you gain very little in performance since the controller was the bottleneck anyway. Flash manufacturers can get much higher performance from the memory out of a NOR design at the cost of some density, but they don't do it because, again, the controller is the issue. All I really see this being used for is business applications where flash memory's endurance is too low to be suitable.
Also, the term NAND only refers to the architecture of a memory system. I would not be surprised at all if 3D XPoint was also a NAND architecture. You might want to call the current tech flash or floating gate instead.
wishgranter - Saturday, August 1, 2015 - link
As always, I waited for Anand to tear it down to the point and explain it as they did; this is the stuff that differentiates you from other "news" sites... So let's see what they bring to market...
azazel1024 - Saturday, August 1, 2015 - link
Great, so now I need to get a boot drive again. DRAM, XPoint boot drive, SSD application drive and HDD bulk storage. Maybe we can usher in holographic storage and have another tier! I mostly kid, but man, it's getting excessive. It would be cool if this was added in through new/advanced memory controllers and utilized the DRAM slots. If the price is at least reasonably less than DRAM (by at least a factor of 2), I can see uses: legit OS drive space, useful for higher-end tablets and stuff with embedded high-speed storage, a serious swap disk, etc.
16GB of RAM with 32-64GB of XPoint and then the SSD/HDD storage systems would probably make a pretty wicked system. Use the XPoint as a super-fast swap disk for things like large parts of games/applications, pre-loading as much as possible and then quickly importing the parts that are needed into RAM, instead of slower imports from SSD/HDD to RAM.
A little disappointed that this doesn't sound like it'll be remotely economical to compete with NAND anytime soon. I had my hopes up with the density claims and what not that we might have a NAND replacement at HDD price per GB in a couple of years.
userDavid - Saturday, August 1, 2015 - link
"SPoint" is my guess for pronunciation. If the marketing idiots can't spell out "3D CrossPoint", I don't think we're obliged to pronounce it that way.jay401 - Saturday, August 1, 2015 - link
"A quick look at NewEgg puts DRAM pricing at approximately $5-6 per gigabyte, whereas the high-end enterprise SSDs are in the range of $2-3. While client SSDs can be had for as low as $0.35, they aren't really a fair comparison because at least initially 3D XPoint will be aimed for enterprise applications. My educated guess is that the first 3D XPoint based products will be priced at about $4 per gigabyte, possibly even slightly lower depending on how DRAM and NAND pricess fall within a year."Oh, okay, see you in five years or so, when this technology becomes relevant to the consumer. @___@
Crazy1 - Saturday, August 1, 2015 - link
I hate that Intel and Micron didn't talk about potential uses for this new technology. Letting my imagination run wild, I'm thinking small, battery-powered embedded solutions are a good starting point: basically IoT devices, from infrastructure sensors on up to smart watches. There are not many market standards in place in this category of computing devices, and energy efficiency is more important than high performance. This XPoint tech could replace both NAND and DRAM in these devices, presumably increasing energy efficiency. It also provides small, adaptable platforms for developers to start programming for applications with no RAM.
I don't see XPoint replacing DRAM and NAND in smartphones and tablets any time soon. I assume it will take a while before OSes and apps can adapt to a no-RAM environment. It will take a few SoC generations for this tech to have hardware support as well (unless they were already in-the-know). Smaller issues include degraded performance in RAM-heavy applications (i.e. graphics processing) and increased hardware costs. The GPU might need its own RAM buffer (might I suggest HBM), further increasing implementation costs. Also, encrypted storage gets a little costly, from an energy perspective, when that storage bandwidth is very fast. Ideally, there will be a hardware encryption accelerator in the mix (and the OS will implement it (looking at you, Android)).
However, there are a lot of potential benefits to replacing RAM and NAND completely with XPoint from a smart phone. The device could turn on and off almost instantly. Power management would not need to deal with the energy costs of shuffling around large amounts of data to enter and exit a sleep state. OSes and apps would be smaller and more efficient due to significantly reduced memory management concerns (some minimal wear leveling and ECC).
The first implementation of XPoint in smartphones/tablets will probably be as an added cache to accelerate the NAND and act as a swap partition. The NAND eMMC in most smart phones is more competitive with a modern HDD than a SSD when it comes to transfer speeds, so the NAND could definitely use a boost.
The day XPoint replaces NAND SSDs in the consumer space will be glorious. If speculation on price is correct, it may be a while before even the average enthusiast can rationalize the expense. However, I did see an Intel video that described DRAM as expensive and both XPoint and NAND as inexpensive. So it would stand to reason that XPoint would be closer to the price of NAND than DRAM once the initial market shock has subsided a bit and production has ramped up. However, Intel is rarely known to offer inexpensive products compared to its competition, so the real hope is that Micron pushes this technology at a price and quantity that stimulates quick market adoption.
Brane2 - Saturday, August 1, 2015 - link
How the heck can you "analyze" something that you know practically nothing about?
All you have is a bit of marketing fluff and you are building virtual castles out of that....
Laststop311 - Saturday, August 1, 2015 - link
Maybe bias is affecting me, but it seems the quality of articles has gone down since this site was sold and Anand left.
Kristian Vättö - Monday, August 3, 2015 - link
Could you elaborate? I think this article in particular is far more in-depth than any of the other articles I've read on the topic and it really goes into great detail about the physics side as well.
speculatrix - Sunday, August 2, 2015 - link
typo "either slower, non-volitile memory" - volAtileVetri33 - Sunday, August 2, 2015 - link
There is NO mention of the error rate in 3D XPoint compared to enterprise NAND!? This may make a big difference in enterprise SSD controller architecture and implementation, like whether LDPC, BCH or a very simple error correction algorithm is good enough. Power is another big factor, especially for writes; one of the main limiting factors in enterprise SSD write performance is the power budget.
Kristian Vättö - Monday, August 3, 2015 - link
That's a good point and admittedly something I didn't think about. I would assume 3D XPoint is more robust than NAND given the higher performance and endurance, but Intel/Micron declined to talk about any failure mechanisms, so at this point it's hard to say how robust the technology is.
Nilth - Sunday, August 2, 2015 - link
Well, I really hope it won't take 10 years to see this technology at the consumer level.
dotpex - Monday, August 3, 2015 - link
From the Micron site https://www.micron.com/about/innovations/3d-xpoint...
"Memory cells are accessed and written or read by varying the amount of voltage sent to each selector. This eliminates the need for transistors, increasing capacity and reducing cost."
...but 3D XPoint will be expensive, more like $10 per gigabyte.
Adam Bise - Friday, August 7, 2015 - link
"First and foremost, Intel and Micron are making it clear that they are not positioning 3D XPoint as a replacement technology for either NAND or DRAM"I wonder if this is because they would rather create a new market than replace an existing one.
hans_ober - Saturday, August 8, 2015 - link
@Ian. PhD Chem was useful! :)
Ian Cutress - Monday, September 28, 2015 - link
Yiss :)
duartix - Monday, August 10, 2015 - link
I see two immediate consumer usages:
a) Instant Go To / Wake From deep hibernation
b) Scratch disks
MRFS - Monday, August 24, 2015 - link
With proper UEFI/BIOS support, one feature we proposed in a Provisional Patent Application was a "Format RAM" option prior to running Windows Setup. This would format RAM as an NTFS C: partition into which Windows software would be freshly installed. For comparison purposes, imagine a ramdisk in the upper 32-to-64GB of a large 1-to-2 TB DRAM subsystem, in a manner similar to how SuperSpeed's RamDisk Plus allocates RAM addresses. Then, imagine that all 2 TB consist of Non-Volatile DIMMs.
I can see this one feature enabling very rapid RESTARTS, even cold RESTARTS after a full power-down (for maintenance). If the UEFI/BIOS is told that the OS is already memory-resident, this one change radically improves the speed with which a routine STARTUP occurs, i.e. currently a STARTUP must load all OS software from a storage subsystem into RAM. If that OS software is already loaded into RAM, that "loading" is mostly eliminated under these new assumptions.
Moreover, mounting Optane on the 2.5" form factor should free designers to consider more aggressive overclocking of the data cables connecting motherboards to those 2.5" drives: just work backwards from PCIe 4.0's 16GHz clock and 128b/130b jumbo frame. It's possible that Optane will be fast enough to justify data cables that also oscillate at 16GHz, increasing to 32GHz with predictable success. Assuming x4 NVMe lanes at PCIe 4.0, then 4 lanes @ (16G / 8.125) ~= 4 lanes @ 2GB/s ~= 8 GB/s raw bandwidth per 2.5" device. Modern G.Skill DDR4 easily exceeds 25GB/s raw bandwidth. Thus, Optane should allow "overclocked" data cables to achieve blistering NVMe storage performance with JBOD devices, and even higher performance with RAID-0 arrays.
FutureCTO - Tuesday, November 15, 2016 - link
I don't know, is it possible to have an educated guess on this? Back in the PS2 days, before the PS3, I was @0zyx on a forum or few, talking about NASA RAM, magnet donuts on a metal grid of wires, insisting: why don't we do this with memory today? The electricity crosses and creates a charge or reads the charge. This is the RAM of the first space computer. ~ I was made confident by believing this is what AMD "Mirror Bit" memory was working towards before it flat out evaporated from the internet? The same happened to 48-bit Intel "Iranium" processors with 16 cores. I still look in books from time to time, hoping an old edition of hardware lists with the Intel spy CPU will confirm the internet is a black hole. Not to go Ellery Hale, with being one of those to store curious science bits no one is using, and everyone should be clamoring to own some day. ~ I check the metal recycling at the city dump for computer servers and extra high grade tower cases for my own builds, at least parts from the towers anyway. ~ Twitter @0zyx ~ Either way, this is the memory design from the first NASA space capsule to carry people into space, except larger than 1 kilobyte. It may have been 512 bytes back then; not sure what sort of grid that is?
FutureCTO - Tuesday, November 15, 2016 - link
educated guess on price? ~ To me it is simpler to make, and faster to verify trace integrity.