9 Comments
jardows2 - Tuesday, March 27, 2018 - link
6.4GB/s. Wow! I remember when talking about 66MB/s was a big deal!
Dug - Tuesday, March 27, 2018 - link
66MB/s isn't a big deal? I really need to catch up.
ytoledano - Tuesday, March 27, 2018 - link
Why aren't all these controllers PCIe 3.0 x16?
Billy Tallis - Tuesday, March 27, 2018 - link
Adding more PCIe lanes would substantially increase the price and power consumption of these chips. The SSD controllers would need much higher NAND channel counts to be able to use the bandwidth of an x16 link, which would make for the biggest and most expensive SSD controller ASIC ever. Such a chip would have a very small market: many servers don't even have x16 slots, instead opting for a higher number of x8 slots, and the drives would need a minimum of 2TB to even come close to full performance.
As for why the NVMe switch doesn't have an x16 uplink, I suspect that it is simply due to this being the smallest in what will eventually be a broad product family.
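A back-of-the-envelope sketch of that scaling argument. The PCIe figures follow from the PCIe 3.0 signaling rate; the per-channel NAND throughput is an assumed round number for illustration, not a figure from the article:

```python
# How many NAND channels would it take to feed a PCIe 3.0 x16 uplink?
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b line encoding.
PCIE3_LANE_GBPS = 8 * (128 / 130) / 8   # ~0.985 GB/s per lane, one direction
X16_GBPS = 16 * PCIE3_LANE_GBPS         # ~15.75 GB/s theoretical for x16

NAND_CHANNEL_GBPS = 0.8                 # assumed sustained GB/s per NAND channel

channels_needed = X16_GBPS / NAND_CHANNEL_GBPS
print(f"x16 link: {X16_GBPS:.2f} GB/s -> ~{channels_needed:.0f} NAND channels")
```

Even with that generous per-channel assumption, saturating x16 would take roughly 20 NAND channels, well beyond the 8-channel designs typical of high-end controllers at the time, which is the die-size and cost problem the comment describes.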
Dug - Tuesday, March 27, 2018 - link
I'm looking at the top picture wondering: how do I connect an SSD to that? I've seen controllers where you attach the M.2 drive directly to the controller, but this doesn't seem the same.
I can't find a solution that attaches to it, or find out whether more than one SSD can connect, or what cables are needed.
Billy Tallis - Wednesday, March 28, 2018 - link
The photo at the top is an SSD, with lots of extra connectors for debugging. The controller in the center is the 88SS1098, and it's surrounded by either 2TB or 4TB of Toshiba 3D TLC.
Nainesh - Sunday, April 1, 2018 - link
This is connected through an M.2 adaptor on the EVB. This board is for demo only; no cables are required. The same product is also available in a U.2 form factor, to which you can connect two M.2 SSDs.
msroadkill612 - Wednesday, April 4, 2018 - link
Using 4x NVMe on an 8-lane device means they are using PCIe 3.0 x2 links, which desktops should do more often, in my opinion.
For an affordable NVMe SSD, aiming for 2GB/s read and write is achievable. Only the best and most expensive NVMe SSDs get near 3.5GB/s read and ~2.4GB/s write in burst sequential transfers.
For most it's more like sub-1.5GB/s write and sub-2.4GB/s read. The rest of the 4GB/s of bandwidth allocated from scarce I/O resources is wasted.
In the AMD AM4 Ryzen and Intel desktop world, lanes are extremely scarce, so many users are precluded from using this powerful new NVMe resource to the full for want of lanes, yet we are squandering what we have by over-provisioning for NVMe SSDs that barely use the extra bandwidth.
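The lane arithmetic behind that comment can be sketched quickly from the PCIe 3.0 signaling rate (8 GT/s per lane, 128b/130b encoding); these are theoretical one-direction figures before protocol overhead:

```python
# Theoretical PCIe 3.0 bandwidth per link width, before protocol overhead.
GT_PER_S = 8.0           # PCIe 3.0 raw rate per lane, gigatransfers/s
ENCODING = 128 / 130     # 128b/130b line-encoding efficiency

def link_bw_gbps(lanes: int) -> float:
    """Theoretical one-direction bandwidth in GB/s for a PCIe 3.0 link."""
    return GT_PER_S * ENCODING / 8 * lanes  # /8 converts bits to bytes

for lanes in (2, 4, 8, 16):
    print(f"x{lanes}: {link_bw_gbps(lanes):.2f} GB/s")
```

An x2 link works out to roughly 2GB/s, which is why the comment argues that a mainstream drive on x2 loses little, while an x4 link (~3.9GB/s) sits mostly idle for drives that sustain well under that.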
sethk - Friday, April 6, 2018 - link
I hope the low lane count and simplicity of the switch/RAID controller (88NR2241) help keep costs down. Speaking of which, is this a product I can buy? What about pricing and availability?