18 Comments
RZLNIE - Tuesday, January 10, 2023 - link
I suggest the Micron 9400 PRO and MAX go up against the Kioxia CM7-R and CM7-V, whose product briefs haven't been released yet. The performance of the Kioxia CM6-R and CM6-V is worse than the Micron 9400 series, and they run very hot. And predictably, Memblaze will introduce their new product soon after Micron's.
dwillmore - Tuesday, January 10, 2023 - link
I was part way through the article before I realized these were just flash drives, not crosspoint. It was the capacity figures that triggered the realization. No way someone's going to put 30.7TB of that in a drive. You could put stuff in space for less than that would cost. And who would want a tiny U.2 PCI-E v4x4 'straw' to drink it through?
ganeshts - Tuesday, January 10, 2023 - link
This is for the datacenter market. Very different requirements. The key driver here is the IOPS. Think of a server supporting 50 different engineers doing a project compile. Lots of small files being read simultaneously by different users. All of them can be served from this one single disk. Sequential bandwidth is only one part of the story. Random IOPS is key in a lot of other scenarios - databases, ML training, OLTP, etc.
dwillmore - Tuesday, January 10, 2023 - link
Wow, what was your hint? Was it the U.3 profile? Maybe the title of the article?

What I'm referring to is the ratio between drive size and either the total daily data written or the write speed. In other terms, how long--at full speed--does it take to exceed the DWPD or to fill a drive. The larger the drive and the slower the interface, the worse the ratio.
Not every workload is the same in the datacenter, you know. You need to make sure you're using hardware that's appropriate for your needs. So, understanding where these drives fit in that arena is important. Hence my "large drive, small interface" comment. That tells us where in the spectrum of performance these drives sit.
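(To put a rough number on that ratio, here's a quick back-of-the-envelope sketch in Python. The capacity, write speed, and the 1 DWPD rating are illustrative assumptions, not figures quoted in the article.)

# Rough sketch of the "drive size vs. interface speed" ratio described above.
capacity_tb = 30.72       # assumed usable capacity, TB
write_speed_gbs = 7.0     # assumed sustained sequential write speed, GB/s
dwpd = 1.0                # assumed drive-writes-per-day endurance rating

hours_to_fill = capacity_tb * 1000 / write_speed_gbs / 3600
hours_to_burn_one_dwpd = hours_to_fill / dwpd

print(f"Hours of full-speed writing to fill the drive:     {hours_to_fill:.2f}")
print(f"Hours of full-speed writing to use one day's DWPD: {hours_to_burn_one_dwpd:.2f}")
# A larger drive or a slower interface stretches both numbers out,
# which is the ratio being complained about.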
schujj07 - Tuesday, January 10, 2023 - link
Don't forget that these will be used in a SAN, so the writes will be spread across more drives. According to VMware's vSAN documentation, even a lowly 1 DWPD drive could be a write caching drive if the capacity is high enough. I can tell you that at 7.68TB it would fall into their Performance Class F (2nd highest, needs 350k+ random IOPS for the highest level) and Endurance Class D (the highest level, for 7.3PB+ write endurance). Basically it would qualify as a write cache drive for the highest performance vSANs. Also, you will run into storage network bottlenecks before PCIe bus bottlenecks in an array. Basically, if you have 24x PCIe 4 SSDs you would need quad 400 Gbps connections to be able to handle all the possible storage bandwidth.
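(For what it's worth, a quick sanity check of that quad 400 Gbps figure; the ~7 GB/s per-drive number is an assumption based on the class of drive being discussed.)

# Back-of-the-envelope: 24 PCIe 4.0 SSDs vs. the storage network uplinks.
drives = 24
per_drive_gbs = 7.0                           # assumed peak sequential throughput per drive, GB/s
aggregate_gbps = drives * per_drive_gbs * 8   # total drive bandwidth in Gbps

links = 4
link_gbps = 400
network_gbps = links * link_gbps              # quad 400 Gbps connections

print(f"Aggregate drive bandwidth: {aggregate_gbps:.0f} Gbps")   # ~1344 Gbps
print(f"Quad 400 Gbps network:     {network_gbps} Gbps")         # 1600 Gbps
# Anything smaller than quad 400 Gbps uplinks becomes the bottleneck
# before the drives' PCIe links do.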
mode_13h - Wednesday, March 15, 2023 - link
> The larger the drive and the slower the interface, the worse the ratio.

Okay, but the sequential write speed is 7 GB/s and the max capacity is 30700 GB. That means you can fill the largest drive in just 1.22 hours, which is massively better than the *days* it takes to fill a 20 TB enterprise hard drive. So, what's the problem with these SSDs?
Back before NVME, a SATA drive you could fill in the same amount of time would be just 2.6 TB, and I'm pretty sure there were already enterprise drives bigger than that. You can currently buy SATA drives in sizes of 4 TB or bigger, in fact.
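(Checking the arithmetic in the two posts above; the ~600 MB/s figure is an assumed practical SATA III ceiling.)

# Fill-time arithmetic for the 30.7 TB drive vs. an old SATA-limited drive.
capacity_gb = 30700         # largest 9400 capacity, GB
nvme_write_gbs = 7.0        # quoted sequential write speed, GB/s
sata_gbs = 0.6              # assumed practical SATA III throughput, GB/s

fill_seconds = capacity_gb / nvme_write_gbs
print(f"Fill time at 7 GB/s: {fill_seconds / 3600:.2f} hours")            # ~1.22 h

# How big a SATA drive could you fill in that same window?
sata_capacity_tb = sata_gbs * fill_seconds / 1000
print(f"SATA drive filled in the same time: ~{sata_capacity_tb:.1f} TB")  # ~2.6 TB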
wojtow - Tuesday, January 10, 2023 - link
The pricing is not unreasonable for what you get. As for bandwidth, in the enterprise, you're also likely to put the drives into a raid array, and your bandwidth is aggregated across all the drives in the array for sufficiently busy workloads.
schujj07 - Tuesday, January 10, 2023 - link
You will be limited by your network speed before the bus speed.
Threska - Wednesday, January 11, 2023 - link
Assuming it crosses the network. For those with a monster server those numbers would be impressive.
https://youtu.be/4TwfM3s2Wdw
schujj07 - Wednesday, January 11, 2023 - link
In this case it would be your storage network. 99.9% of all applications are run either on a VM or in a container. It is getting rarer by the day that a company runs even their largest DBs on a physical appliance. Doing it in a VM is the better solution.
jordanclock - Monday, January 16, 2023 - link
If you had finished reading the rather short article you would know the drives are U.3, not U.2. Also, PCIe 4.0 x4 has more bandwidth than the peak read or write speeds of the drives, so I don't see why they would need a higher bandwidth PCIe link. Also, as mentioned in the article, these aren't even the first 30TB drives like this. There is absolutely demand for a drive like this. If 7GB/s isn't enough, then you use multiple drives in an appropriate distributed system, like RAID or a zpool.
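(As a quick reference, here's the usual back-of-the-envelope for why a PCIe 4.0 x4 link is enough; the encoding-only efficiency figure ignores protocol overhead.)

# Approximate usable bandwidth of a PCIe 4.0 x4 link vs. the drive's rated speeds.
lanes = 4
gt_per_lane = 16.0          # PCIe 4.0 signaling rate, GT/s per lane
encoding = 128 / 130        # 128b/130b line encoding

link_gbs = lanes * gt_per_lane * encoding / 8   # GB/s, before protocol overhead
rated_read_gbs = 7.0        # drive's quoted peak sequential read, GB/s

print(f"PCIe 4.0 x4 link bandwidth: ~{link_gbs:.1f} GB/s")                          # ~7.9 GB/s
print(f"Headroom over the rated read speed: ~{link_gbs - rated_read_gbs:.1f} GB/s")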
Dug - Friday, January 20, 2023 - link
You don't seem to comprehend how these would be connected, set up, or configured, do you? Or even the performance in a ZFS array on the correct controller.
Silver5urfer - Thursday, January 12, 2023 - link
It is a DAMN SHAME that Intel beancounters nuked Optane. Look at the P5800X and check out its endurance and its top notch performance; it will never choke like the garbage NVMe we have.

We consumers get nothing like this tech, look at the capacity. Except for that Firecuda NVMe from Seagate, which uses top end Micron 176L NAND flash, none of the SSDs have good endurance. Samsung dropped the ball big time after they killed MLC technology and never cared enough, since their brand is earning them money.
Shame really; we got milked for 2 years straight on PCIe 4.0 NVMe SSDs with extreme prices, trash capacity, and insanely poor endurance ratings. Still we do not have any 8TB mainstream high end SSDs. There's SATA QLC junk from Samsung; check the TLC 870 Evo 4TB, it has like 85% free space inside, man. Samsung could make it a 16TB drive without issues, and a cool running SATA drive like that would be a boon for many, but they do not want to. I think in general people stopped caring about capacities and doing basic research, and instead go with the stupid PR and get hooked by numbers and benchmarks.
PCIe 5.0 NVMe rip off incoming: the same 2TB capacity with 4TB tops, which will be uber expensive because reasons. Oh yeah, there's no viable difference between NVMe and SATA in general purpose daily compute unless you are running some 4K REMUX through mkvtoolnix on a Threadripper-like machine. Optane is still better, but dead now.
Well, fingers crossed for price cuts on the Firecuda 4TB for my machine; what else can I expect at this point, blowing cash on PCIe 5.0 is worthless.
Also, on a final note, the U.3 connector is very interesting now that I've read up on it: 100% compatible with NVMe, SATA and U.2. I wish OEMs offered it as standard, along with splitters for SATA. Apart from EVGA no one offers U.2 connectors, shame. Enterprise stuff is very good but normies do not care, as they only care about MUH Gaming and PR like DirectStorage gonna change gaming lol.
PeachNCream - Friday, January 13, 2023 - link
The funny thing is that what turned low endurance junk TLC into premium-tier, high-endurance storage was the introduction of even lower endurance QLC. Now, most of the average, uninformed yet self-proclaimed tech savvy crowd that was decrying TLC at its introduction, because of the decline in endurance from MLC, is now defending TLC while using QLC as the point of comparison.

Granted, TLC is ... marginally acceptable if you're mindful of the endurance limitations and treat it kindly, but I would prefer being able to thrash my storage and not really care. MLC offered that for the most part and struck a good balance between itself and SLC. At any rate if you want capacity, you're going to be stuck with less endurance for the time being until someone finds a cost effective alternative to NAND flash storage. Given how quickly Optane died, I don't think an alternative is coming in the near term.
FunBunny2 - Saturday, January 14, 2023 - link
"At any rate if you want capacity, you're going to be stuck with less endurance for the time being until someone finds a cost effective alternative to NAND flash storage. Given how quickly Optane died, I don't think an alternative is coming in the near term."but, but, but... weren't we promised that 3D NAND (TLC or QLC or xLC) would have even better endurance than MLC (may be even SLC?) just because it's fabbed on Really Big Olde Nodes??? weren't we?
PeachNCream - Sunday, January 15, 2023 - link
That is true and I shudder to think what TLC or QLC endurance would look like in a planar NAND cell. It helped, but it clearly didn't totally resolve the problem and I believe modern 3D NAND ended up landing on smaller process nodes anyway in pursuit of lower cost for a given capacity.
Silver5urfer - Tuesday, January 17, 2023 - link
The only reason I accepted TLC is for a single drive, the Firecuda, because it has very high endurance for a consumer drive. The Samsung 860 Pro was MLC, but that was like 5 year old technology; I think it might be on 22nm, I do not know, rough guess, and too lazy to search for it. But the newer Firecuda 4TB NVMe beats it in endurance with TLC.

Except for that instance, none of the TLC drives are replacing MLC, but if we imagine Samsung making an 870 Pro with modern node MLC, I bet it would destroy it.
I do not really expect any Optane successor. Enterprise get the drive like above; meanwhile idiot consumers keep on buying the latest sticker that will give them more E-Points than anything, that is PCIe5 woah, and anything that has shiny useless benchmarks and marketing.
Look at the HDD space, 20TB is still uber expensive. $400 was the discounted price for the WD Gold with OptiNAND that I got, and that price never stuck even during BF sales. And with HDD shipments slashed by 40% for all the major companies, I do not expect any magic even there, like ePMR, OptiNAND, or other types of EAMR technologies for the masses. In fact WD is removing helium from all drives below 12TB.
FunBunny2 - Monday, January 23, 2023 - link
"Enterprise get the drive like above"the thing about bidnezz and equipment - Uncle Sugar, which is to say you the American Taxpayer, pays them to replace stuff. so they're happy to get the warranty period years out of SSD, and replace at expiry date. they get to deduct the whole cost of each drive. all they care about is that a drive model doesn't die before warranty expires 99.9% of the time.
99.99% of any SSD will keep going pretty much forever in usual consumer use. I just ordered a Samsung 512 gig drive (damn those 500 gig pieces of shit) so I can clone my MX100, which is a many-drive descendant of the first Ubuntu HDD, circa kernel 2.2 (or thereabouts), which has a now way too tiny /boot partition and I need to expand it; potentially destructive process. long since forgotten what was in the MX100. looked it up. turns out MLC. also turns out I've had the drive since 2014! time flies when you're having fun.