38 Comments
ಬುಲ್ವಿಂಕಲ್ ಜೆ ಮೂಸ್ - Monday, May 21, 2018 - link
How lame is that?
theeldest - Monday, May 21, 2018 - link
Not lame? Cheaper SSDs are always a good thing, right? If the first customers are enterprise customers, you can bet that the first consumer drives will actually be pretty good. I certainly don't need a drive rated for multiple "drive writes per day", much less more than one drive write per month...
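Since endurance keeps coming up in this thread in terms of "drive writes per day", here is a minimal sketch of how a DWPD rating relates to a total-bytes-written figure. The capacity, warranty length, and DWPD values below are illustrative assumptions, not specs for any announced Micron drive.

```python
# Hypothetical figures for illustration only - not specs of any announced drive.
def dwpd_to_tbw(capacity_tb: float, dwpd: float, warranty_years: float = 5.0) -> float:
    """Convert a drive-writes-per-day rating into total terabytes written over the warranty."""
    return capacity_tb * dwpd * warranty_years * 365

print(dwpd_to_tbw(2.0, 0.1))  # ~365 TBW: a 2 TB drive at 0.1 DWPD over 5 years
print(dwpd_to_tbw(2.0, 1.0))  # ~3650 TBW: the same drive at an enterprise-style 1 DWPD
```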
MamiyaOtaru - Tuesday, May 22, 2018 - link
Yeah great, cheap out on storage for your data - the most important thing on the machine.
CheapSushi - Tuesday, May 22, 2018 - link
This comment makes zero sense. Are you running your system off SLC NAND drives? Eh. Anyone that cares about their data is using redundancy schemes like hardware or software RAID/ZFS/Storage Spaces and following the 3-2-1 rule (with cloud backup). QLC NAND is PERFECT for bulk storage, for a capacity tier. Or are you too simple-minded to imagine anyone having more than one drive in their system? Intel 3D XPoint (phase change) and Samsung's Z-NAND (SLC mode) will take over for performance tiers, while TLC will probably stay around for consumers as the balance of price, capacity and performance. Even with 1K P/E cycles and 0.5 drive writes per day, using QLC NAND for capacity/bulk storage sounds like a hell of a lot better deal for prosumers/enthusiasts than anything platter based. I'd say platters are more "cheap out on storage for your data" than anything else.
Samus - Tuesday, May 22, 2018 - link
Cheap out on storage? You are implying a technology that technically isn't even available at retail yet is unreliable. Considering how long they have been sampling QLC for QA (since last year) and Micron's track record for legendary NAND reliability, the drives, especially paired with a Marvell controller, will likely be more reliable than any magnetic storage medium - hence the enterprise focus on bulk cold storage, where write performance will still exceed most hard disks and read performance will exceed any hard disk.
If QLC is what it takes to sound the death knell for hard drives, then QLC is what it takes. I'd trust my data on NAND over spinning glass any day.
ಬುಲ್ವಿಂಕಲ್ ಜೆ ಮೂಸ್ - Tuesday, May 22, 2018 - link
Depends on what you mean by "Cold Storage", Samus.
I can unplug "quality" hard drives, put them into storage today, and be assured the data is still there in 100 months.
These new QLC SSDs, on the other hand, can only sit unplugged for up to 3 months with any assurance that your data is still there.
So, until the new tech can retain data as long as the old tech, I'll keep irrelevant data on the fastest SSD, massive volumes of what I want to keep on hard disk, and critical data on M-DISC (and hard disk).
Billy Tallis - Tuesday, May 22, 2018 - link
Retention for three months is guaranteed for a drive that's burned through its entire write endurance rating. If you're using the drive primarily as an archive drive and not using much of the write endurance, then data retention will be years instead of months.
ಬುಲ್ವಿಂಕಲ್ ಜೆ ಮೂಸ್ - Tuesday, May 22, 2018 - link
OK, thanks. How many years would you trust YOUR data to a brand new one of these, unplugged?
ಬುಲ್ವಿಂಕಲ್ ಜೆ ಮೂಸ್ - Tuesday, May 22, 2018 - link
Sorry Billy, but you cannot truthfully claim years of error-free data retention until you have tested it for years without error. I don't think you can simply extrapolate error-free data life expectancy from accelerated temperature-extreme testing on these drives.
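For context on the extrapolation being questioned here: retention projections for flash are typically derived from an Arrhenius temperature-acceleration model along the lines sketched below. The activation energy (1.1 eV is a commonly cited assumption for charge loss) and the temperatures are illustrative values, and whether the model holds for a particular drive is exactly the point being debated.

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_use_c: float, t_stress_c: float, ea_ev: float = 1.1) -> float:
    """Arrhenius acceleration factor: how much faster charge loss proceeds at the
    stress temperature than at the use temperature, given an assumed activation energy."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

# With these assumed parameters, one week baked at 85 C stands in for roughly
# 170 weeks (about 3 years) of retention at 40 C.
print(acceleration_factor(40.0, 85.0))
```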
peevee - Thursday, May 24, 2018 - link
1000 cycles is good enough for 99.99%+ of home users with modern controllers (DRAM & SLC cache & wear leveling). They'd better make their parts parallel enough to reach the PCIe 3.0 x4 limit (4GB/s please) already. A 1-2TB M.2 PCIe x4 drive at 4GB/s is just what the doctor ordered.
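A back-of-the-envelope check of that claim; the daily host writes and the write-amplification factor below are assumptions picked for illustration, not measurements.

```python
# Hypothetical workload: 30 GB/day of host writes, write amplification factor of 3.
def lifetime_years(capacity_gb: float, pe_cycles: int,
                   host_writes_gb_per_day: float, waf: float = 3.0) -> float:
    """Years until the NAND's rated P/E cycles are exhausted under a steady workload."""
    total_nand_writes_gb = capacity_gb * pe_cycles          # total NAND writes the array can absorb
    nand_writes_per_day = host_writes_gb_per_day * waf      # host writes inflated by write amplification
    return total_nand_writes_gb / nand_writes_per_day / 365

print(lifetime_years(1000, 1000, 30))  # ~30 years for a 1 TB drive under these assumptions
```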
jjj - Monday, May 21, 2018 - link
Micron did provide some details on the NAND generation beyond 96 layers at their financial event today. I had the audio in the background but could not fully focus on it, so I might have missed a few bits, or more than a few. Anyway, the few bits I do remember:
Developing a "novel charge trap"
96-layer NAND will use string stacking, 48+48
96 layers provides over 30% more bandwidth and over 40% lower power
XPoint gen 2 moved from R&D to production.
XPoint products in the latter part of 2019 - no idea if that's fiscal or calendar, but assuming calendar.
And there was a slide where the cost reduction for gen 2 seemed minimal, so I would assume 4 layers and no horizontal scaling, but that's based on too little to go on, so I could be wrong.
Billy Tallis - Monday, May 21, 2018 - link
Good to know. It did seem like the most likely reason for ending the partnership was that one of them wanted to switch from 3D floating gate to charge trap, and Micron seems to be the more likely one to want to make that jump.
jjj - Monday, May 21, 2018 - link
I don't agree; I think it's more likely that Intel wanted to use existing clean room capacity instead of the JV, and they are also likely to sell their NAND operations in a few years. And Micron likely wasn't eager to renew the supply agreements they had with Intel - I think those expire one this year and one in 2019.
China is likely to ramp NAND beyond 2020 - it seems likely at this point - so Intel would be better off selling to China or Micron. They needed their own technology to be able to sell to anyone but Micron, and having their own production outside of the JV is way better too. So they create value right now with their own NAND production and then dump it for quite a few billions.
jjj - Monday, May 21, 2018 - link
They have also mentioned developing HBM - I think it's the first time they have mentioned it. Using TSV in server DRAM was mentioned too.
And new nodes for DRAM beyond 1z: 1alpha and 1beta, though I'm not sure if those are true new nodes or a packaging focus. No EUV for these new nodes either.
iwod - Tuesday, May 22, 2018 - link
TSV DRAM? Is that part of the DDR5 specification?
jjj - Tuesday, May 22, 2018 - link
Safe to assume it's just a packaging solution used where/when it makes sense.
jjj - Monday, May 21, 2018 - link
Correction here - it's the 4th gen that provides 30% more BW and 40% lower power vs 96L. And the first 96L die seems to be 512Gb.
iwod - Tuesday, May 22, 2018 - link
I don't mind the current pricing of XPoint. The problem is that it is not as fast in latency and bandwidth as initially promised, and endurance falls short too. And it doesn't seem they are shipping in huge quantities. I am still waiting to see quad-channel, 8 x 64GB XPoint DIMM performance in a database workload. Assuming it keeps its 10x per-GB price reduction, these should be a very good trade-off: giving up 5-10ms of database performance for 10x more capacity (assuming your application is not latency critical).
jjj - Tuesday, May 22, 2018 - link
Gen 1 XPoint is likely never to be ramped to substantial volumes, and it is running at a substantial loss. Gen 2 could even be just improvements for manufacturability, with no vertical scaling and minimal or no horizontal scaling. When they show small cost gains they likely compare at mature yield, so the gains in practice will be much larger once Gen 2 is ramped and gets to yield - that does not mean we'll see substantially lower prices at retail. The 10x lower cost vs DRAM might be misleading; it's likely an aspirational goal for a future gen with more layers.
Forgot to mention in my previous comments that Micron will be launching NVMe SSDs in the next few quarters.
ZeDestructor - Monday, May 21, 2018 - link
Any news on the price?
Rictorhell - Monday, May 21, 2018 - link
Hopefully they will be cheaper than similar-capacity Samsung drives, which haven't decreased in cost at all for several years - in particular, the 2.5" form factor Samsung drives that are 2TB and higher in capacity. If the prices of these drives are anywhere near reasonable, I will definitely be taking a look.
Samus - Tuesday, May 22, 2018 - link
Hasn't hit retail yet, but when it does I'd guess close to $0.10 per GB, or $200 for 2TB, according to the IDC chart referenced in this article: https://www.theregister.co.uk/2017/05/22/ssd_price...
And that would follow the economic history of TLC over MLC, where costs fell about a third (excluding the spike during the NAND shortage last year), from $0.25/GB to $0.17/GB.
The curveball in this is that QLC isn't as effective a step over TLC as TLC was over MLC. The diminishing returns from every expansion of voltage states just won't show the impact MLC had over SLC, or TLC had over MLC, but... the real wildcard here is the 1Tb memory chips. Those at scale will greatly increase supply, if yields are on par with 512Gb chips. And processing four planes in parallel should help as well.
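A minimal illustration of those diminishing returns, counting only ideal bits-per-cell density and ignoring the controller/ECC overhead that the reply below points out:

```python
# Ideal density gain and cost-per-bit drop from each added bit per cell.
bits_per_cell = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

names = list(bits_per_cell)
for prev, curr in zip(names, names[1:]):
    gain = bits_per_cell[curr] / bits_per_cell[prev] - 1       # extra capacity per cell
    cost_drop = 1 - bits_per_cell[prev] / bits_per_cell[curr]  # ideal cost-per-bit reduction
    print(f"{prev} -> {curr}: +{gain:.0%} density, ~{cost_drop:.0%} lower cost per bit")

# SLC -> MLC: +100% / ~50%;  MLC -> TLC: +50% / ~33%;  TLC -> QLC: +33% / ~25%
```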
Kristian Vättö - Tuesday, May 22, 2018 - link
Theoretically QLC is 25% cheaper to fab compared to TLC, but when considering the additional overheads (ECC area, extra peripherals, etc.), the reality will be closer to 15%. A 1Tbit QLC die is simply larger than a 512Gbit TLC die. It's an evolution rather than a revolution.
Valantar - Tuesday, May 22, 2018 - link
25% cheaper to fab - by what metric? Per wafer? Per die? Per Gb? The last one is the only one that matters.Kristian Vättö - Wednesday, May 23, 2018 - link
Per Gbit. Four bits instead of three yields a 33% increase in Gbit per wafer (ceteris paribus), which means 25% lower cost per bit (1/1.33 - 1).
AbRASiON - Monday, May 21, 2018 - link
Waiting so, so, so long to replace my 5TB disks in my NAS (late 2014 purchase date) with 8TB SSDs. I'm thinking it's still 3+ years out. >:(
Economically, yes. But you will probably be able to buy relatively inexpensive 4TB retail/commercial SSDs for ~$300 next year, and you can fit 2 of them in the space of one 5TB 2.5" disk. As Intel said TEN years ago when they launched the X25-M: we will have magnetic storage through the next decade, but it will be application specific, not general use. Application specific is becoming a synonym for bulk storage/cold storage. There should be no realistic reason for PCs to come with hard disks in the next year or two, once 1TB QLC drives are almost as cheap as 1TB hard drives.
DanNeely - Tuesday, May 22, 2018 - link
On bottom-end laptops, spinning rust continues to cling on because its minimum viable price is lower than the cheapest SSDs can go (controller costs and the minimum NAND die count for decent performance keep the cheapest SSD more expensive than the cheapest HDD); IMO the pursuit of thinner-is-better is the only thing likely to actually kill them off in the next year or two. As far as bulk storage goes, I suspect the NAS I build sometime in the next 3-12 months will be the last time I purchase spinning rust. Projections from the last few years show the mid 2020s as when flash finally becomes cheaper for bulk/cold storage. Even if that date slips to the late 2020s, I suspect the gap will be narrow enough that going with a much smaller box will be worth whatever lingering price premium remains.
jjj - Tuesday, May 22, 2018 - link
Battery, thermal, volume and weight are costs too, and the low end is actually already NAND, as most $200-300 laptops use eMMC. HDDs have controllers too, a costlier shell, and can't scale to small capacities. A 16GB eMMC must be $5-6 now. It would be interesting if PC SoCs would support uMCP: https://www.micron.com/products/multichip-packages...
NAND penetration in PCs is likely about 5 years away from 100% - of course some PCs will have both SSDs and HDDs.
ಬುಲ್ವಿಂಕಲ್ ಜೆ ಮೂಸ್ - Tuesday, May 22, 2018 - link
Depends on what you mean by "Cold Storage", Samus.
I can unplug "quality" hard drives, put them into storage today, and be assured the data is still there in 100 months.
These new QLC SSDs, on the other hand, can only sit unplugged for up to 3 months with any assurance that your data is still there.
So, until the new tech can retain data as long as the old tech, I'll keep irrelevant data on the fastest SSD, massive volumes of what I want to keep on hard disk, and critical data on M-DISC (and hard disk).
ಬುಲ್ವಿಂಕಲ್ ಜೆ ಮೂಸ್ - Tuesday, May 22, 2018 - link
Double post
JaLooNz - Tuesday, May 22, 2018 - link
The article should really address how long the data can be retained for, since being cheap will mean it's an alternative for archival drives.
NAND flash is not very good at data retention. IIRC JEDEC only requires 12 months at a certain temperature. I wouldn't count on using SSDs for long-term data storage.
Billy Tallis - Tuesday, May 22, 2018 - link
The JEDEC standards are retention for 12 months for client drives and 3 months for enterprise drives, but those figures are for drives at the end of their write endurance. I don't think the standards have anything to say about long-term data retention for lightly-used drives, but it's going to be measured in years.
ಬುಲ್ವಿಂಕಲ್ ಜೆ ಮೂಸ್ - Tuesday, May 22, 2018 - link
OK, thanks. How many years would you trust YOUR data to a brand new one of these, unplugged?
ಬುಲ್ವಿಂಕಲ್ ಜೆ ಮೂಸ್ - Tuesday, May 22, 2018 - link
*^&%^#^& dOUBLE pOSTS!!! *^&&%$
stoatwblr - Tuesday, May 22, 2018 - link
About as long as I'd trust it to a mechanical drive and expect it to actually spin up. If you want long-term backup, use tape.
Valantar - Tuesday, May 22, 2018 - link
Quite excited for this for the consumer space - QLC is _perfect_ for game drives. High capacity, good read speeds, decent write speeds, and plenty of endurance for a game drive (sporadic 50-100GB write sessions that then stay in place for months if not years, with sporadic minor updates and the negligible write load of save games). Not to mention that game libraries are among the least important data you have - everything can be downloaded again. Sure, it's a hassle, but it wouldn't be that bad. I'd love a ~2TB drive like this to replace the 500GB 850 Evo in my desktop.