23 Comments
Vorl - Wednesday, October 24, 2018 - link
This could be an amazing card. I wonder what the overhead will be, and how close it can get to the speed of four M.2 NVMe x4 drives. Those drives are amazingly fast to begin with, and if you can put four of them in a RAID 0 (since the SSD failure rate is also amazingly low) they could offer truly unbelievable performance levels.

I look forward to other vendors coming out with cards like this. It will help bring the price down some.
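As a rough back-of-the-envelope check on how close a four-drive RAID 0 could get, the figures below are typical PCIe 3.0 assumptions (per-lane rate, roughly 3.5 GB/s per drive), not measurements of this card:

```python
# Back-of-envelope: do four PCIe 3.0 x4 NVMe drives fit through one x16 uplink?
# All figures are assumptions for illustration, not measurements of this card.
PCIE3_PER_LANE_GBS = 8e9 * 128 / 130 / 8 / 1e9  # 8 GT/s, 128b/130b encoding -> ~0.985 GB/s
UPLINK_LANES = 16
DRIVE_SEQ_READ_GBS = 3.5                        # a fast PCIe 3.0 x4 drive
DRIVE_COUNT = 4

uplink_ceiling = PCIE3_PER_LANE_GBS * UPLINK_LANES   # ~15.8 GB/s theoretical
drive_aggregate = DRIVE_SEQ_READ_GBS * DRIVE_COUNT   # ~14.0 GB/s from the drives

print(f"x16 uplink: ~{uplink_ceiling:.1f} GB/s, four drives: ~{drive_aggregate:.1f} GB/s")
# The drives' combined sequential throughput sits just under the uplink's
# theoretical ceiling, so protocol and switch overhead determine how close a
# RAID 0 actually gets.
```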
edzieba - Wednesday, October 24, 2018 - link
"It will help bring the price down some. "With the PLX chip on board by necessity, these are going to be priced between "obscenely expensive" and "my company is paying for this".
Unless you're speccing a maxed-out, money-no-object workstation, it will likely be cheaper to buy a CPU that supports PCIe bifurcation (pretty much any Intel CPU from the last three or four generations) and a motherboard to go with it than to buy this card alone.
Duncan Macdonald - Wednesday, October 24, 2018 - link
If you have a suitable CPU and motherboard, then the much cheaper ASRock Ultra Quad M.2 Card is a better option (it needs a CPU with 16 available PCIe lanes and the ability to subdivide them into x4/x4/x4/x4 - e.g. an AMD Threadripper or EPYC system).
Ian Cutress - Wednesday, October 24, 2018 - link
This is the thing - HighPoint will support this for years. If you need consistency over a large-scale deployment, then the vendor-based quad M.2 cards have too many limitations.
imaheadcase - Wednesday, October 24, 2018 - link
Except price. :P
Flunk - Wednesday, October 24, 2018 - link
Not really a concern for professional use. Downtime often costs a lot more than $400/unit. For home use, yeah. But Highpoint doesn't exactly target the home user with any of their products.
rtho782 - Wednesday, October 24, 2018 - link
I remember back in the X38 days, loads of motherboards had PLX chips. Now they are so expensive as to be incredibly rare.

What is stopping a competitor stepping in here? Is there a patent? If so, when does it run out?
DigitalFreak - Wednesday, October 24, 2018 - link
There are one or two competitors that have released products recently. I don't recall the names, but I suspect they're just too new to be used in products like this yet.
mpbello - Wednesday, October 24, 2018 - link
Marvell also recently announced a PCIe switch.
Billy Tallis - Wednesday, October 24, 2018 - link
It's not exactly a PCIe switch; the 88NR2241 is meant specifically for NVMe and includes RAID and virtualization features that you don't get from a plain PCIe switch. But since it only has an x8 uplink, it isn't a true replacement for things like the PLX switch HighPoint uses. It will be interesting to see whether/when Marvell expands the product line.
Billy Tallis - Wednesday, October 24, 2018 - link
Microsemi also makes PCIe switches, in competition with PLX/Avago/Broadcom. But they're both targeting the datacenter market heavily and don't want to reduce their margins enough to cater to consumers. Until most server platforms have as many PCIe lanes as EPYC, these switches will be very lucrative in the datacenter and thus too expensive for consumers.
The_Assimilator - Wednesday, October 24, 2018 - link
Very good question. IMO the cost of PLX switches has been holding back interesting board features like this for a long time; we really need a competitor in this arena, and I feel that anyone who chose to compete would be able to make a lot of money.
Kjella - Wednesday, October 24, 2018 - link
Mainly, the SLI market died. The single cards became too powerful, and the problems too many because it was niche and poorly supported. That only left professional users, which led to lower volume and higher prices. But the problem then was that you started to reach a price range where you could just buy enthusiast/server platforms with more lanes. So PLX chips became an even more niche product for special enterprise-level needs. To top that off, AMD's Threadripper/EPYC line went PCIe-lane crazy with 64/128 native lanes and bifurcation for those who need it, which leaves them with hardly any market at all. You'd have to be crazy to invest in re-inventing this technology; this is just PLX trying to squeeze the last few dollars out of a dying technology.
Vatharian - Wednesday, October 24, 2018 - link
I'm sitting near an nForce 780i SLI board (ASUS P5N-T Deluxe) and an Intel Skulltrail, both of which have PCIe switches. After that, most of my multi-PCIe boards were dual-CPU, thanks to Intel being a male reproductive organ to consumers, and AMD spectacularly failing to deliver any system capable of anything more than an increased power bill.

On the fun side, the mining craze brought a lot of small players to the yard who learned how to work PCI-Express magic. Some of them may try tackling the switch challenge at some point, which I very much wish for.
Things might have been more lively if the PCI-SIG hadn't stopped to smell the flowers too. Excuse me, how many years have we been stuck at PCI-Express 2.5?
Bytales - Wednesday, October 24, 2018 - link
People don't know it, but this is not something new. Amfeltec did a similar board, perhaps with a similar splitter, and it works with 4 NVMe SSDs.

I had to order from them directly from Canada and paid 400-500 dollars, I don't remember exactly. But it was at a time when people kept telling me that four NVMe drives on a single PCI Express 3.0 x16 slot was impossible.
http://amfeltec.com/products/pci-express-gen-3-car...
I have 3 SSDs: a 250GB 960 EVO, a 512GB 950 Pro, and a 250GB WD Black (2018 model). All are recognized without any hiccups whatsoever, and all are individually bootable.
Dragonstongue - Wednesday, October 24, 2018 - link
They could/should have used a larger, likely MUCH quieter fan, as those "tiny fans" tend to be overly loud for nothing. High pressure is not what these things need; they need high flow.

Likely they could have used, say, a 90 or 110mm fan set at an incline to vent "out the back" to promote cooling efficacy.
It all depends on how it works, of course, but it does open the door to slotting four lower-capacity M.2 drives in to create a single larger volume ^.^
Diji1 - Friday, November 2, 2018 - link
>likely they could have used, say, a 90 or 110mm fan set at an incline to vent "out the back"

Sure, they could have done that, but then it takes up more than one PCI slot. Well, actually it takes up even more, since axial fans do not handle tight-fitting spaces very well, unlike the blower-style fan that it comes with.
'nar - Wednesday, October 24, 2018 - link
I've had more RAID controllers fail on me than SSDs, so I only use JBOD now.
Dug - Wednesday, October 31, 2018 - link
I wonder if four NVMe SSDs would saturate even a 12Gb/s HBA, though.
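For scale, a quick comparison under typical assumptions (SAS-3 line rate with 8b/10b encoding, ~3.5 GB/s for a fast PCIe 3.0 x4 drive):

```python
# Back-of-envelope: one 12 Gb/s SAS HBA port vs. a single PCIe 3.0 x4 NVMe drive.
# Assumed figures for illustration, not measurements.
sas3_port_gbs = 12e9 * 8 / 10 / 8 / 1e9  # 12 Gb/s, 8b/10b encoding -> ~1.2 GB/s usable
nvme_drive_gbs = 3.5                     # typical fast PCIe 3.0 x4 sequential read

print(f"SAS-3 port: ~{sas3_port_gbs:.1f} GB/s, one NVMe drive: ~{nvme_drive_gbs:.1f} GB/s")
# Even a single NVMe drive outruns a 12 Gb/s SAS port, which is why cards like
# this one put the drives directly on PCIe instead of behind an HBA.
```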
Dug - Wednesday, October 31, 2018 - link
I forgot to add: I agree. I haven't checked all cards, of course, but we've had RAID controllers with backup batteries fail. Why did they fail? Because when the RAID card detects the backup battery is bad or can't hold a charge, it won't continue running even if your system is powered on! So for all this redundancy, the single point of failure is a tiny battery that corrupts your entire array.
TomWomack - Wednesday, October 24, 2018 - link
What on Earth is the point of paying extra to be able to boot from your large NVMe RAID? Your board surely already has an NVMe slot which you can put the small boot drive in!

(I agree with the point about RAID controllers; I'd be happy with a system that gave me full-speed /dev/nvme1n1 through /dev/nvme4n1 and just let md deal with putting the bytes in the right place.)
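A minimal sketch of that software-RAID approach on Linux, assuming the four drives appear as /dev/nvme1n1 through /dev/nvme4n1; the chunk size, filesystem, and mount point are illustrative choices, not recommendations:

```python
#!/usr/bin/env python3
# Minimal sketch: stripe four NVMe drives with Linux md instead of hardware RAID.
# Device names, chunk size, and mount point are assumptions for illustration.
# This destroys any data on the listed devices; run as root.
import os
import subprocess

DEVICES = ["/dev/nvme1n1", "/dev/nvme2n1", "/dev/nvme3n1", "/dev/nvme4n1"]

# Create a RAID 0 array across all four drives.
subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=0",
     f"--raid-devices={len(DEVICES)}",
     "--chunk=128",   # 128 KiB stripe chunk; tune for the workload
     "--run",         # don't stop for the interactive confirmation prompt
     *DEVICES],
    check=True,
)

# Put a filesystem on the array and mount it.
subprocess.run(["mkfs.ext4", "/dev/md0"], check=True)
os.makedirs("/mnt/nvme-raid", exist_ok=True)
subprocess.run(["mount", "/dev/md0", "/mnt/nvme-raid"], check=True)
```

The appeal is exactly what the comment describes: the drives stay visible as ordinary NVMe block devices and md handles the striping, so there is no proprietary RAID controller to fail or to lock the array to.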
tommo1982 - Tuesday, October 30, 2018 - link
It'd be interesting to see it tested by Anandtech.