9 Comments
jdq - Wednesday, March 15, 2023 - link
The Apex Storage website states that the card is bootable, and that "Pricing is $2,800.00 USD with volume discounts available."
ballsystemlord - Wednesday, March 15, 2023 - link
What size of SSDs does that $2,800 card come with?
rpg1966 - Thursday, March 16, 2023 - link
lol
Athlex - Wednesday, March 15, 2023 - link
This is very cool. I have their original 16-SSD SATA design but an NVMe version is the dream... The older one isn't listed, but it was sold here:
https://www.kickstarter.com/projects/storage-scale...
The Von Matrices - Wednesday, March 15, 2023 - link
21 is a strange number of drives to support. I know there are Microchip PCIe switches that support 100 lanes, which would be a perfect number for the 84 disk + 16 interface lanes, but this card has two switch chips. Maybe 2 x 68-lane switches with a 16-lane connection between them and 4 lanes going unused?
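The back-of-the-envelope lane accounting, for what it's worth (the 2 x 68-lane split is pure speculation on my part):

    # Speculative lane budget for the X21 (assumes x4 per drive and a x16 uplink).
    drives = 21
    drive_lanes = drives * 4            # 84 lanes to the SSDs
    uplink_lanes = 16
    print(drive_lanes + uplink_lanes)   # 100 -> a single 100-lane switch would fit exactly

    # Guessed 2 x 68-lane topology with a x16 link between the two switches:
    switch_lanes = 2 * 68               # 136 lanes across both chips
    inter_switch = 2 * 16               # the x16 link burns 16 lanes on each side
    left_for_drives = switch_lanes - inter_switch - uplink_lanes
    print(left_for_drives, left_for_drives - drive_lanes)   # 88 lanes left, 4 unused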
deil - Thursday, March 16, 2023 - link
Sounds reasonable. I assume the extra 4 lanes are dedicated to firmware updates.
abufrejoval - Saturday, March 18, 2023 - link
I hope to see a lot more variants of this concept, because lower-capacity, lower-PCIe-revision NVMe sticks keep piling up, which are simply too sad to waste for lack of lanes: recycling older SATA SSDs was a lot easier.
And to be honest, on AM4/5 platforms especially, I'd like to see an approach that takes advantage of the modularity of the base architecture, whereby the SoC simply offers bundles of lanes, which can then be turned into USB, SATA and another set of lanes by what is essentially a multi-protocol ASMedia switch... that can even be cascaded.
Now whether that should be a mainboard with x4 PCIe slots instead of all these M.2 connectors, or whether you should use M.2-to-PCIe adapters for these break-out switched expansion boards, is a matter of taste, form factors and volumes.
Well, at least at PCIe 5.0 speeds trace length could be another factor, but from what I see on high-end server boards, cables make trace lengths more manageable than PCBs with dozens of layers; neither is likely a budget item.
Anyway, a quad-NVMe-to-single-M.2 (or even PCIe x4) board that ideally even delivers PCIe 5.0 on the upstream port from those four PCIe 3.0 1-2 TB M.2 drives, for less than €100, should sell in significant volumes.
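For what it's worth, the bandwidth math behind that wish roughly checks out; a quick sketch with approximate per-lane throughput (128b/130b encoding, ignoring protocol and switch overhead, and assuming the switch can run the uplink at Gen5 while the drive ports stay at Gen3):

    # Approximate usable GB/s per lane for each PCIe generation.
    gb_per_lane = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

    drives = 4 * 4 * gb_per_lane["3.0"]     # four PCIe 3.0 x4 drives: ~15.8 GB/s aggregate
    uplink = 4 * gb_per_lane["5.0"]         # one PCIe 5.0 x4 (M.2) uplink: ~15.8 GB/s
    print(f"drives {drives:.1f} GB/s vs uplink {uplink:.1f} GB/s")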
phoenix_rizzen - Friday, April 14, 2023 - link
With the right PLX switches, you could use a PCIe 5.0 x16 connection to the motherboard to provide 64 PCIe 3.0 lanes to drives. That's enough for 16 PCIe 3.0 x4 NVMe drives to run at full bandwidth.
In theory, anyway. :)
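A quick numeric check of that theory (approximate per-lane throughput, ignoring switch overhead):

    gen3_lane = 0.985                 # ~GB/s per PCIe 3.0 lane (128b/130b encoding)
    gen5_lane = 3.938                 # ~GB/s per PCIe 5.0 lane
    uplink = 16 * gen5_lane           # PCIe 5.0 x16 to the host: ~63 GB/s
    per_drive = 4 * gen3_lane         # each PCIe 3.0 x4 drive: ~3.9 GB/s
    print(uplink / per_drive)         # ~16 drives at full speed, i.e. 64 Gen3 lanes' worth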
mariushm - Tuesday, March 21, 2023 - link
I'm surprised by the design choices. They could have aligned the SSDs on the daughter board to mirror the ones on the main board, which would have allowed a custom heatsink between the two boards, with holes through its center for airflow: use adhesive thermally conductive tape to make sure those 8 SSDs are coupled to the heatsink, and let airflow cool the heatsink (maybe optionally make the card longer and put a blower fan at the end). They could also have used those slightly more expensive polymer tantalum capacitors, which are much lower in height, so that airflow isn't blocked by tall surface-mount capacitors.