11 Comments
cygnus1 - Tuesday, June 12, 2018
Any indication whether the "backplane" card will support transparent passthrough of the NVMe devices? It would be really cool if it did, for non-enterprise uses that need raw access to the individual disks (home ZFS servers for me).
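For reference, the whole point for me is being able to build the pool on the raw disks, something like this (just a sketch; it assumes the four drives enumerate individually as nvme0n1 through nvme3n1, which is exactly what won't happen if the card only presents one virtual device):

    # striped mirrors across the four drives; device names are illustrative
    zpool create tank \
        mirror /dev/nvme0n1 /dev/nvme1n1 \
        mirror /dev/nvme2n1 /dev/nvme3n1
    zpool status tank    # ZFS can only report and heal per-disk errors if it sees each disk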
rahvin - Tuesday, June 12, 2018
They've got a controller chip and probably a bridge chip; there's very little chance they could pass the drives through transparently with those in between.
jordanclock - Tuesday, June 12, 2018
The PCIe switch means that it would appear as a single monolithic storage device to the OS. If you really need direct drive access, you'll want to look into the options that use bifurcation and are already available.
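An easy way to tell which one you're actually getting is to look at the PCIe topology once the card is installed (a rough check, not specific to this card):

    lspci -tv       # bifurcation riser: four NVMe endpoints; this card: one virtual controller
    ls /dev/nvme*   # one /dev/nvmeX controller node per visible endpoint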
MajGenRelativity - Tuesday, June 12, 2018
I agree with the above comment. I believe Aplicata has an x16 slot card that may meet your needs.
Charlie22911 - Tuesday, June 12, 2018
Not necessarily. I’ve got a quad port Intel NIC in my server that uses a PCIe x4 switch to connect two separate dual port chipsets; I can most certainly pass each chipset through to a VM separately.
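Whether that works mostly comes down to IOMMU grouping: if the switch implements ACS, each device behind it lands in its own group and can be assigned to a VM on its own. A quick way to check (standard Linux sysfs layout, nothing card-specific):

    # print every PCI device together with the IOMMU group it landed in
    for d in /sys/kernel/iommu_groups/*/devices/*; do
        g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
        printf 'group %s: %s\n' "$g" "$(lspci -nns "${d##*/}")"
    done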
Mikewind Dale - Tuesday, June 12, 2018
Yeah, the point of this device is precisely *not* to be transparent. It has its own PCIe switch, so that the host computer doesn't have to bifurcate its PCIe among 4 drives. To the host, this device looks like one giant drive. So there are advantages and disadvantages. One advantage, I think, is that the RAID is hardware rather than software.
Billy Tallis - Tuesday, June 12, 2018
My understanding from talking with Marvell about the NVMe switch when they announced it earlier this year is that it can either do RAID-0/1/10 or present the drives as individual namespaces of the virtualized NVMe controller. The latter would be appropriate for giving ZFS and similar systems direct control over what gets stored on each physical SSD.
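If it's running in that mode, nvme-cli should show a single controller carrying several namespaces rather than four separate controllers (the output shape is my assumption; controller numbering is illustrative):

    nvme list                 # expect one controller with namespaces nvme0n1..nvme0n4
    nvme list-ns /dev/nvme0   # enumerate the active namespace IDs behind the virtual controller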
Hereiam2005 - Wednesday, June 13, 2018
A Trenton HDB8231 and an SBC will get you 18 PCIe slots, and will cost $1,500 at most.
tyger11 - Tuesday, June 12, 2018
I've been waiting for someone to put out an NVMe M.2 SSD-based DAS for a while now. Still surprised it hasn't happened.