12 Comments
evlfred - Friday, January 19, 2018 - link
Wow, that's a lot of wasted storage if you have four 800GB SSDs and only end up with 6.4GB to use.
descendency - Friday, January 19, 2018 - link
It's to prevent durability issues during writes./s
romrunning - Friday, January 19, 2018 - link
Marvell announced an NVMe RAID controller for SSDs (Marvell® 88NV1140/88NV112). Sounds like Kingston should have gone with them for the controller instead of the Phison controller plus the PEX switch.

Overall, I like the idea of multiple M.2 drives in the 2.5" U.2 form factor. It seems like a lot of manufacturers are creating these products, whether in U.2 format or AIC.
I do wonder how they will alleviate the heat of the M.2 drives when encased. Perhaps a heat spreader that connects to the case as a larger heatsink? Then you could cool them with server fans.
Billy Tallis - Friday, January 19, 2018 - link
Those Marvell controllers are just ordinary SSD controllers. They don't have any PCIe switch or multi-drive RAID features. Kingston has a longtime relationship with Phison, especially for their enterprise PCIe products.

The heat of the M.2 drives won't be much of a problem compared to the heat of that PCIe switch. This drive is going to require serious airflow.
zsero - Friday, January 19, 2018 - link
Maybe GB -> TB?
dromoxen - Friday, January 19, 2018 - link
Presumably they can go even higher than 6.4TB when the chips are down?
Santoval - Friday, January 19, 2018 - link
I don't understand why they thought it was a good idea to choke 16 PCIe 3.0 lanes into 4, and thus create a bottleneck. Why not use it with a PCIe x16 slot, either via an adapter or directly? Is it a prerequisite for enterprise SSDs to use solely the U.2 slot? Is this only intended for 1U server blades where the available height is limited?
CheapSushi - Friday, January 19, 2018 - link
They already have that: the DCP1000. There are other quad M.2 PCIe adapters as well. They're simply giving another option because all-flash arrays are becoming more popular. There are already a lot of server chassis with 2.5" hot-swap bays, usually 24 in a 2U. M.3 is coming out, which allows even higher density in 1U. But for existing servers, this would allow even higher density in already-built standard 2U 2.5" chassis. With QLC NAND coming out, which could mean several terabytes per stick, it gives you a lot of density per drive. Not all drives are just about performance; you still have to have a capacity tier.

So say you have a 4TB 2.5" SSD. In a 2U with 24 bays, that's 96TB. With quad M.2 in a 2.5" carrier like this, that's 384TB (4 x 4TB x 24 bays). That's SIGNIFICANTLY higher for the same chassis design. With QLC, they'll reach even higher density. You also get finer-grained hot-swap and redundancy: if a 2.5" drive goes bad, you replace the whole drive, but here a single M.2 might go bad, so you replace one M.2 while the other three are still there doing their thing.

It makes a lot of sense, actually. The best thing, as usual, is having lots of options. There should never be just one way to do something. Some things make sense one way, some make sense another way, some only make sense if you're starting fresh, some make sense with already-bought infrastructure, etc. And Kingston already offers products for x16 slots. U.2 is x4, so they'd have to come out with a new standard for an x16 U.2.
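To make that density math concrete, here is a minimal back-of-the-envelope sketch, assuming a 24-bay 2U chassis and 4TB per device (both figures taken from the comment above); the capacities are illustrative, not product specifications.

```python
# Minimal sketch of the 2U density comparison described above.
# Assumptions (from the comment, not product specs): 24 x 2.5" hot-swap
# bays per 2U, 4 TB per device, 4 M.2 sticks per quad-M.2 carrier.

BAYS_PER_2U = 24      # typical 2U chassis with 2.5" hot-swap bays
TB_PER_DEVICE = 4     # assumed capacity of one 2.5" SSD or one M.2 stick
M2_PER_CARRIER = 4    # quad-M.2 carriers like the drive discussed here

plain_ssd_total = BAYS_PER_2U * TB_PER_DEVICE                  # 96 TB
quad_m2_total = BAYS_PER_2U * M2_PER_CARRIER * TB_PER_DEVICE   # 384 TB

print(f'Plain 2.5" SSDs:    {plain_ssd_total} TB per 2U')
print(f'Quad-M.2 carriers:  {quad_m2_total} TB per 2U')
```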
rpg1966 - Saturday, January 20, 2018 - link
I'm trying to picture how you replace one M.2 stick while leaving the others (apparently) functioning. It *looks* like you'd have to pull the thing apart to get at one of the M.2 sticks, meaning it couldn't still be reading/writing to the other three?
Hereiam2005 - Friday, January 19, 2018 - link
Enterprise applications such as databases need IOPS and raw capacity more than aggregate Gbps. An x16 link is physically much larger than an x4 link, which reduces density, and therefore isn't needed.

CPUs also have a limited number of PCIe lanes anyway, so why waste those lanes when what you want is to pack as many drives into a chassis as possible and let parallelism take care of bandwidth?
bill.rookard - Saturday, January 20, 2018 - link
Agreed. Overall, since the introduction of SSDs several years ago, you've seen a dramatic increase in raw transfer speed and IOPS vs. regular HDDs, but the main sticking point is simply capacity. Your average HDD a few years ago was 2-3TB, while your average SSD was a few hundred dollars for 128GB. Servers, though, have to hold large databases and lots of files, easily more than 128GB.

Now, max capacity (with certain exceptions) is 6-10TB for HDDs and 2TB for SSDs for a few hundred dollars. So HDDs have increased their size by a factor of 2-3, while SSDs have increased their capacity by a factor of about 16. And if you merge a few behind a small RAID controller, you have a capacity that approaches the size of your bigger HDDs while keeping that huge advantage in transfer speed and IOPS.
Your best HDDs do 200-300MB/sec on their own... Even these 'limited' (??) drives have 10 times the speed and orders of magnitude higher IOPS.
A board with the right processor (i.e. EPYC) and only 8x U.2 connectors, versus an x16 RAID card with 8x SAS drives, will still have double the raw bandwidth (32 PCIe lanes vs 16 PCIe lanes). And as far as utilized bandwidth goes, the x16 SAS setup would be lucky to hit a quarter of its potential: 8 drives @ 300MB/sec = 2400MB/sec out of a total of roughly 15,000MB/sec for an x16 PCIe 3.0 slot.
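For anyone who wants to plug in their own numbers, here is a minimal sketch of that comparison. The ~985MB/sec-per-lane figure for PCIe 3.0 is an assumed approximation; the 300MB/sec-per-HDD rate comes from the comment above.

```python
# Rough bandwidth comparison from the comment above: 8 U.2 NVMe drives
# (x4 link each) versus 8 SAS HDDs behind a single x16 RAID/HBA card.
# Assumption: ~985 MB/s of usable bandwidth per PCIe 3.0 lane.

PCIE3_MBPS_PER_LANE = 985   # approximate usable PCIe 3.0 throughput per lane
HDD_MBPS = 300              # generous sequential rate for one SAS HDD (from comment)

u2_lanes = 8 * 4            # 8 NVMe drives, x4 link each = 32 lanes
hba_lanes = 16              # one x16 slot for the SAS RAID/HBA card

u2_raw = u2_lanes * PCIE3_MBPS_PER_LANE    # ~31,520 MB/s of raw link bandwidth
hba_raw = hba_lanes * PCIE3_MBPS_PER_LANE  # ~15,760 MB/s of raw link bandwidth
hdd_actual = 8 * HDD_MBPS                  # ~2,400 MB/s the HDDs can actually deliver

print(f"U.2 raw link bandwidth:   {u2_raw} MB/s")
print(f"x16 slot raw bandwidth:   {hba_raw} MB/s")
print(f"8 HDDs actually deliver:  {hdd_actual} MB/s "
      f"({hdd_actual / hba_raw:.0%} of the x16 slot)")
```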
Pinn - Saturday, January 20, 2018 - link
These comments are why I go to AnandTech.