27 Comments

  • goatfajitas - Tuesday, October 6, 2020 - link

    Looks a lot like the Dell BOSS drives I've been using on my servers for a few years. https://i.dell.com/sites/doccontent/shared-content...
  • bigi - Tuesday, October 6, 2020 - link

    Apparently, one can use the BOSS card in Dell servers only.
  • goatfajitas - Tuesday, October 6, 2020 - link

    Yeah, this Marvell one is faster too. The Dell one is 3 years old and only SATA... Most servers these days are Hyper-V or VMware hosts, so it's fine for that. If it were a specific, purpose-driven server that needed extra I/O speed, the Marvell would help.
  • DigitalFreak - Tuesday, October 6, 2020 - link

    I hope you're not one of those people that store their SQL Server enterprise databases on the C: drive...
  • goatfajitas - Tuesday, October 6, 2020 - link

    Of course not. Mine is safe in Hyper-V, RAID'd out and backed up with Veeam.
  • damianrobertjones - Wednesday, October 14, 2020 - link

    But... isn't that the safest place for it?
  • Santoval - Thursday, October 15, 2020 - link

    Surely you jest, right?
  • damianrobertjones - Wednesday, October 21, 2020 - link

    I am. :)
  • ToTTenTranz - Tuesday, October 6, 2020 - link

    So if this is limited to PCIe 3.0 x4, then for high-performance NVMe drives that already saturate a PCIe 3.0 x4 link, this is only useful for the redundancy aspect of RAID 1. Read performance is never going above ~4 GB/s.
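A rough sanity check on that ~4 GB/s ceiling (a minimal sketch; 8 GT/s per lane and 128b/130b encoding are the standard PCIe 3.0 figures, and real NVMe throughput lands a bit lower still after protocol overhead):

```python
# Back-of-the-envelope PCIe 3.0 x4 payload bandwidth.
lanes = 4
gt_per_s = 8.0                   # PCIe 3.0 line rate per lane (GT/s)
encoding_efficiency = 128 / 130  # 128b/130b encoding

payload_gbit_s = lanes * gt_per_s * encoding_efficiency
payload_gbyte_s = payload_gbit_s / 8
print(f"PCIe 3.0 x4 payload bandwidth: ~{payload_gbyte_s:.2f} GB/s")
# ~3.94 GB/s, so reads from the mirrored pair can never exceed what a
# single Gen3 x4 drive could deliver through the same uplink.
```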
  • schujj07 - Tuesday, October 6, 2020 - link

    As a boot drive in either a Hyper-V or ESXi environment, that will be plenty fast. We use SATA SSDs for our ESXi boot drives and they have plenty of performance for that. Most of a server's boot time is spent on all the BIOS checks anyway.
  • Spunjji - Wednesday, October 7, 2020 - link

    Honestly, I got the impression that redundancy was the entire point - even so, it should still be faster than solutions using SATA or some other form of hack.
  • James5mith - Tuesday, October 6, 2020 - link

    It's cute that you think a "read-oriented" drive is 1 DWPD. Realistically, they are probably 0.3 DWPD max.
  • schujj07 - Tuesday, October 6, 2020 - link

    These are enterprise-grade drives. They will hold up to their 1 DWPD rating, as it would be very expensive for companies to have to replace them too early.
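For a sense of what those ratings imply, a small worked example converting DWPD into total bytes written over a warranty period; the 1.92 TB capacity and 5-year warranty are assumptions for illustration, not specs of these particular drives:

```python
# Endurance implied by a DWPD rating: DWPD x capacity x warranty days.
capacity_tb = 1.92       # assumed enterprise read-oriented drive capacity
warranty_days = 5 * 365  # assumed 5-year warranty window

for dwpd in (1.0, 0.3):
    tbw = dwpd * capacity_tb * warranty_days
    print(f"{dwpd} DWPD -> ~{tbw:,.0f} TB written over the warranty")
# 1.0 DWPD -> ~3,504 TB; 0.3 DWPD -> ~1,051 TB
```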
  • Kevin G - Tuesday, October 6, 2020 - link

    I wonder if there will be a variant of this chip that supports six NVMe drives in RAID 6 over a PCIe x16 link. The bandwidth would still be there for four-drive reads, plus the controller could generate the two different sets of parity for the remaining two NVMe drives. SSDs are generally more reliable than their mechanical counterparts, but that only reduces the likelihood of an array failure; it doesn't make it an impossibility. As such, there will always be that niche market that wants the redundancy to ensure data integrity while increasing speed.
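For anyone unfamiliar with the "two different sets of parity" in RAID 6, a toy sketch of the math: P is a plain XOR across the data blocks and Q is a Reed-Solomon syndrome over GF(2^8) (the 0x11d polynomial used by Linux md RAID 6). This only illustrates the arithmetic; it says nothing about how Marvell's silicon would implement it.

```python
def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8) with reducing polynomial 0x11d."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
    return p

def raid6_parity(blocks: list[bytes]) -> tuple[bytes, bytes]:
    """Return (P, Q) parity for equal-length data blocks, one per data drive."""
    p = bytearray(len(blocks[0]))
    q = bytearray(len(blocks[0]))
    for i, block in enumerate(blocks):
        coeff = 1
        for _ in range(i):               # coeff = 2**i in GF(2^8)
            coeff = gf_mul(coeff, 2)
        for j, byte in enumerate(block):
            p[j] ^= byte                 # simple XOR parity (P)
            q[j] ^= gf_mul(coeff, byte)  # Reed-Solomon syndrome (Q)
    return bytes(p), bytes(q)

p, q = raid6_parity([b"AAAA", b"BBBB", b"CCCC", b"DDDD"])  # four data drives
print(p.hex(), q.hex())
```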
  • MenhirMike - Tuesday, October 6, 2020 - link

    Curious, how does the host system know if there is a RAID failure? Is it exposed via S.M.A.R.T., as a sensor, or through something proprietary?
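One plausible, heavily hedged answer: the aggregate device should still expose the standard NVMe SMART/health log, so polling it is a reasonable first check. Whether a degraded mirror actually raises critical_warning there, or only shows up in vendor tooling, is an assumption, as are the /dev/nvme0 path and the exact JSON key names (they vary across nvme-cli versions):

```python
# Sketch: read the NVMe SMART/health log of the exposed device via nvme-cli.
import json
import subprocess

def nvme_health(dev: str = "/dev/nvme0") -> dict:
    out = subprocess.run(
        ["nvme", "smart-log", dev, "--output-format=json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

log = nvme_health()
# A non-zero critical_warning bitmap is the generic "something is wrong" flag.
print("critical_warning:", log.get("critical_warning"))
print("percent_used    :", log.get("percent_used", log.get("percentage_used")))
```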
  • Desierz - Tuesday, October 6, 2020 - link

    What happens if one of the drives becomes corrupt? Won't the corrupt data be mirrored onto the other drive?
  • Dug - Tuesday, October 6, 2020 - link

    Yup, but that's not what RAID is trying to prevent. That's what backups are for.
  • Dug - Tuesday, October 6, 2020 - link

    This does seem like a niche product. The host OS drive isn't generally used except for some reads and occasional updates. Reboot times depend on the manufacturer's BIOS, not on the drives, so I don't see a need for speed here. The space savings are questionable, as most servers already have OS drive bays in the back for 2.5" drives, or even run on a pair of SD cards.

    I don't see any mention of hot-swap, so if there is any downtime because you have to bring the entire server down to replace a drive, then it kind of defeats the purpose. If you had a single drive and it went down, it would take the same amount of time to replace it and restore from backup.
  • Spunjji - Wednesday, October 7, 2020 - link

    Agreed about it being niche, for sure. Downtime to replace a drive doesn't really defeat the purpose though - it allows you to control *when* that downtime happens, as opposed to having the server go down randomly whenever the drive fails.
  • foobardotcom - Friday, October 9, 2020 - link

    These kinds of cards are quite handy if you have, for example, a 1U server chassis with 8x 2.5" disk bays and no internal M.2 slots. They let you use a PCIe slot for the OS disk and leave all the 2.5" bays for the data RAID. This kind of OS-transparent RAID setup is easy when dealing with, for example, a Debian UEFI installation, because you don't have to fight overcomplicated preseed configs and pull your hair out trying to make the installer create a second UEFI partition for some kind of redundancy (a sketch of the software-RAID alternative follows below). SD-card-based setups are geared more towards something like booting VMware ESXi from the SD cards, not using them for constant writes, not even system logs.

    Usually these kinds of cards come with server-vendor-provided management software that can communicate with the controller to retrieve hardware information about the member drives. Most probably the MTBF of these cards is so high that the total loss of the card itself is a non-issue in the big picture compared with more common read errors.
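For contrast, the software-RAID alternative mentioned above might look roughly like this; the device names are hypothetical, and this still does nothing about mirroring the EFI system partition, which is exactly the pain point described:

```python
# Sketch: build a software RAID 1 mirror of two NVMe drives with mdadm.
import subprocess

members = ["/dev/nvme0n1", "/dev/nvme1n1"]  # example member drives
subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=1", "--raid-devices=2", *members],
    check=True,
)
```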
  • Jorgp2 - Tuesday, October 6, 2020 - link

    I don't understand the point of this; hasn't HighPoint had a quad M.2 card for a few years now?

    https://www.highpoint-tech.com/USA_new/CS-product_...
  • MenhirMike - Tuesday, October 6, 2020 - link

    The specific thing here is that, from the view of the operating system, it's not a RAID controller at all; the OS just sees one regular NVMe drive. All the RAID stuff is done transparently by the controller, without the need for any special driver support.

    I can't judge whether this is needed/useful, but it is a key difference between this and other NVMe RAID cards.
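A small illustration of what "the OS only sees one regular NVMe drive" means in practice, enumerating NVMe controllers and namespaces from Linux sysfs; that such a card would show up as a single controller with a single namespace is an assumption based on the description above, not something verified here:

```python
# List NVMe controllers and their namespaces as Linux sees them.
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    model = (ctrl / "model").read_text().strip()
    namespaces = sorted(ns.name for ns in ctrl.glob(f"{ctrl.name}n*"))
    print(ctrl.name, model, namespaces)
# Behind a hardware-transparent RAID 1 card, the expectation is one
# controller entry with one namespace, not two separate drives.
```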
  • Tomatotech - Tuesday, October 6, 2020 - link

    Looks good. If I still ran ATX desktop systems, I'd definitely be interested in picking up one of these cards (either 2-slot or 4-slot) and throwing in some fast NVMe drives for extreme speed in general use.

    Here's a review (not by me) from Scan.co.uk:

    "We needed very fast writes as well as reads. Our app generates millions of small files, and this nicely removed the write bottleneck. Running on Asus Z10PE-D16, dual Xeon E5-2699v4 with 1TB RAM. We used 4 x 2TB Samsung Evo Plus M.2 ssds on 2 x SSD7103 cards using s/w raid on WinS2019."

    Transparent RAID support is important; I'm fed up with being burnt by substandard RAID drivers, which is why I've avoided RAID for many years.
  • Tomatotech - Tuesday, October 6, 2020 - link

    To be clear, that review is for a different card that does similar things.
  • Toadster - Thursday, October 22, 2020 - link

    Intel vROC does this without an add-on card... https://www.intel.com/content/www/us/en/support/pr...
