16 Comments
DanNeely - Wednesday, June 27, 2018 - link
Congrats Avago, you succeeded in plumping your profits significantly for 2 or 3 years at the long-term cost of bringing two competitors into what was a cosy sole-source monopoly.
dgingeri - Wednesday, June 27, 2018 - link
Avago/Broadcom will probably sue to keep these off the market, claiming some sort of patent infringement.
Vatharian - Wednesday, June 27, 2018 - link
I'm not sure about ASMedia, but from my experience with its development hardware, Marvell would lawyer up before committing a single engineer to such ambitious projects. Their code review is brutal, and I guess the same goes for physical IP.
Kevin G - Wednesday, June 27, 2018 - link
What would be nice is if this switch could do RAID 5/6 offloading between the NVMe devices. The narrower uplink wouldn't be an issue since the parity would be generated on the switch for downstream bandwidth. There would be a latency hit, but things should still be comfortably lower than SATA/SAS-based solutions.
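A minimal sketch of the parity math behind that idea, in Python and purely illustrative (no vendor's actual offload is implied): the host pushes N data strips over the narrower uplink, the switch computes the XOR strip, and N+1 strips fan out downstream.

from functools import reduce

def xor_parity(strips):
    """RAID 5 style parity: byte-wise XOR of equal-length data strips."""
    assert len({len(s) for s in strips}) == 1, "strips must be equal length"
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*strips))

# Hypothetical 4+1 stripe: four 4 KiB data strips arrive over the uplink,
# the parity strip is computed on the switch, five strips go to the drives.
data_strips = [bytes([i]) * 4096 for i in (1, 2, 3, 4)]
parity = xor_parity(data_strips)
downstream_writes = data_strips + [parity]
print(len(data_strips), "strips up,", len(downstream_writes), "strips down")

The sketch only covers full-stripe writes; whether rebuilds and read-modify-write traffic could also stay below the uplink is a separate question.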
Vatharian - Wednesday, June 27, 2018 - link
Parity/checksum-based RAID is not for flash storage, even on WORM-class high-capacity storage (those demanding such systems should be drowned in molten silicone, I believe). It will be decades before solid state undercuts spinning rust, if ever. Even there, capacities rise fast enough that two-, three- or even four-way mirrors make more sense than any SSD or parity. Also, current high-speed protocols would cause RAID to create overhead so high that any NVMe-based system would get slower than SATA. Mirrors are also much lighter on switches like these.
CheapSushi - Wednesday, June 27, 2018 - link
You're better off with software RAID. As mentioned, switches should just be agnostic/transparent.
beginner99 - Wednesday, June 27, 2018 - link
What would be nice is Intel offering enough lanes from the CPU that such tinkering isn't even needed. The switch needs x8 lanes itself, so you are left with x8 for the GPU. If Intel (and AMD) would up their DMI speed from x4 to x8, then such switches wouldn't even be needed.
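Rough numbers behind that complaint, as a back-of-the-envelope sketch; the 16-lane CPU, the x8 uplink, and the 3.5 GB/s drive figure are illustrative assumptions rather than anything from the article.

# Usable bandwidth per PCIe 3.0 lane: 8 GT/s with 128b/130b encoding.
GBPS_PER_GEN3_LANE = 8 * 128 / 130 / 8        # ~0.985 GB/s

cpu_lanes = 16                                # mainstream Intel CPU graphics lanes
switch_uplink = 8                             # per the comment, the switch takes x8 upstream
gpu_lanes = cpu_lanes - switch_uplink
print("GPU left at x%d (~%.1f GB/s)" % (gpu_lanes, gpu_lanes * GBPS_PER_GEN3_LANE))

dmi_equiv_lanes = 4                           # DMI 3.0 is roughly a PCIe 3.0 x4 link
assumed_fast_ssd = 3.5                        # GB/s, assumed sequential rate of one fast Gen3 drive
print("DMI ~%.1f GB/s vs one fast NVMe SSD ~%.1f GB/s"
      % (dmi_equiv_lanes * GBPS_PER_GEN3_LANE, assumed_fast_ssd))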
ltcommanderdata - Wednesday, June 27, 2018 - link
Are these PCIe 3.0 switches specifically designed for SSD use, or can any device controller be attached to them like PLX switches? For example, can these switches be used in Thunderbolt docks to more efficiently share bandwidth between device controllers for the various interface ports (perhaps PCIe 3.0 x2 for a 2-port USB 3.1 Gen 2 controller, PCIe 3.0 x2 for a 10Gb Ethernet controller, PCIe 2.0 x2 for a 2-port SATA III controller, and PCIe 1.1 x1 for a FireWire 800 controller)?
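Back-of-the-envelope math for that hypothetical dock, assuming approximate per-lane rates and a Thunderbolt 3 uplink that carries roughly 22 Gb/s of PCIe traffic.

# Approximate usable GB/s per lane after encoding overhead, by PCIe generation.
LANE_RATE = {"1.1": 0.25, "2.0": 0.5, "3.0": 0.985}

downstream_ports = {
    "2-port USB 3.1 Gen 2 controller": ("3.0", 2),
    "10Gb Ethernet controller":        ("3.0", 2),
    "2-port SATA III controller":      ("2.0", 2),
    "FireWire 800 controller":         ("1.1", 1),
}

aggregate = sum(LANE_RATE[gen] * lanes for gen, lanes in downstream_ports.values())
tb3_pcie_uplink = 22 / 8        # ~2.75 GB/s of PCIe traffic tunnelled over Thunderbolt 3
print("downstream aggregate ~%.2f GB/s vs uplink ~%.2f GB/s" % (aggregate, tb3_pcie_uplink))

The downstream side adds up to well over the uplink, which is exactly the kind of oversubscription a transparent switch is meant to absorb, since the controllers rarely all run flat out at once.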
timecop1818 - Wednesday, June 27, 2018 - link
They're just transparent switches, not tied to any particular type of device. They could be used for buttcoin GPUs, RAID cards/HBAs, NVMe drives, data acquisition cards, etc.
bobj3832 - Wednesday, June 27, 2018 - link
I worked at IDT when they sold their PCIe flash controller and PCIe switch business to PMC-Sierra. Then PMC-Sierra was acquired by Microsemi. It looks like Microsemi is still selling them. I always felt the demand for the switches was low, which is why they bundled it in the sale with the more valuable flash controller business.
https://www.microsemi.com/product-directory/809-ic...
Duncan Macdonald - Wednesday, June 27, 2018 - link
Not needed for AMD EPYC servers (128 PCIe lanes) or Threadripper desktops (64 PCIe lanes).
DanNeely - Wednesday, June 27, 2018 - link
Even in AMD land, these are still useful for mainstream Zen, with only 24 3.0 lanes (and 6 2.0 lanes on the chipset). Because the chipset lanes are only 2.0 speed, it would need something like this to connect 3 M.2 PCIe SSDs while still supporting an x16 GPU; something that Intel's recent LGA11xx chipsets have done out of the box.
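A quick lane-budget sketch of that scenario, using the comment's figures plus the usual x4 link per M.2 drive.

# Mainstream (non-Threadripper) Zen lane budget, using the comment's numbers.
cpu_lanes = 24                 # PCIe 3.0 lanes from the CPU
gpu = 16                       # keep the full x16 slot
chipset_link = 4               # CPU-to-chipset link
drives = 3                     # desired M.2 PCIe SSDs at x4 each
needed = gpu + chipset_link + drives * 4
print("need x%d, have x%d -> short by %d lanes" % (needed, cpu_lanes, needed - cpu_lanes))
# Hence either the GPU drops to x8, or a switch fans lanes out to the drives.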
DigitalFreak - Wednesday, June 27, 2018 - link
This is exactly what differentiated the X470 and Z490 chipsets for Ryzen - a PCI-E switch. Unfortunately it ended up costing too much.
CheapSushi - Wednesday, June 27, 2018 - link
Yes there is! Have you not seen the new 1U servers from Supermicro, for example, with all NVMe M.3 slots? With 16TB of QLC NAND per slot, they can have 1PB per 1U. Even with EPYC, you'll still need some switching. This is one aspect of PCIe 5.0, where an x1 lane becomes quite enough for each storage device. But at 3.0, or even 4.0, we're still at either x4 or x2, and thus still need lots of lanes and/or switches, beyond just 128.
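The per-lane arithmetic behind that point, with approximate usable rates after encoding overhead.

import math

# Approximate usable GB/s per lane (8/16/32 GT/s, all with 128b/130b encoding).
PER_LANE = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

target = 3.9   # GB/s, roughly what a Gen3 x4 class SSD can stream
for gen, rate in PER_LANE.items():
    print("PCIe %s: needs x%d per drive for ~%.1f GB/s" % (gen, math.ceil(target / rate), target))

The same lane budget that feeds a handful of Gen3 x4 drives could feed four times as many Gen5 x1 drives, hence the appetite for either more lanes or switches in dense NVMe boxes.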
CheapSushi - Wednesday, June 27, 2018 - link
Heck yeah! I love the quad NVMe M.2 boards, but only one exists (at x8) that has a switch on it. The rest need slot bufrication.
CheapSushi - Wednesday, June 27, 2018 - link
bifurcation*