47 Comments
peevee - Monday, October 2, 2017 - link
How are they going to create RAID10 (which requires at least 4 disks) from "up to 3" NVMe drives? And without RAID5, 3 SSD RAIDs are not possible at all.
JoeyJoJo123 - Monday, October 2, 2017 - link
At least Ryzen Threadripper has NVMe RAID enabled, unlike (cough) Intel.
Gothmoth - Monday, October 2, 2017 - link
And I suppose you can use PCIe adapter cards that support 2 to 8 NVMe SSDs.
duploxxx - Tuesday, October 3, 2017 - link
Hahah, you totally screw the purpose of NVMe by going PCIe adapters...
BurntMyBacon - Tuesday, October 3, 2017 - link
How so?
I'd agree that you totally screw the purpose of going M.2 by using PCIe adapters, but some people are only using M.2 because it is the most common NVMe form factor on the market. I've never been a fan of the M.2 form factor for desktops/workstations/servers as both U.2 and PCIe SSDs have better cooling and U.2 doesn't take up so much area on the board. M.2 makes a lot of sense for laptops and thin-mITX, but I don't think we'll see Threadripper on either of those platforms particularly soon. When manufacturers other than just Intel decide to start making U.2 SSDs, you'll be a lot more likely to see support for larger NVMe RAID arrays on motherboards.
barleyguy - Wednesday, October 4, 2017 - link
No, you really don't.
NVMe SSDs use the same bus bandwidth as a PCIe Gen 3 x4 link, so an x16 PCIe adapter can run 4 of them, if the BIOS supports 4+4+4+4 bifurcation.
Der8auer has a video on YouTube running 8x 960 Pro NVMe drives on two 4-slot cards. The combined bandwidth of eight 960 Pros is actually faster than a single channel of the memory controller, so pretty insanely fast. (The memory controller on Threadripper does 85 GB per second in total, according to CPU-World.)
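To put rough numbers on that claim, a quick back-of-the-envelope sketch - the ~3.5 GB/s per-drive read speed is an assumption on my part, while the 85 GB/s quad-channel total is the figure quoted above:

```python
# Back-of-the-envelope check: does an 8-drive RAID 0 of 960 Pros beat
# one memory channel? Per-drive speed is an assumed approximation.

PER_DRIVE_GBPS = 3.5          # assumed sequential read of one 960 Pro
DRIVES = 8
MEM_TOTAL_GBPS = 85.0         # quad-channel total quoted from CPU-World
MEM_CHANNELS = 4

array_bw = PER_DRIVE_GBPS * DRIVES            # ideal linear scaling: 28 GB/s
per_channel = MEM_TOTAL_GBPS / MEM_CHANNELS   # 21.25 GB/s per channel

print(f"8-drive RAID 0 (ideal): {array_bw:.1f} GB/s")
print(f"One memory channel:     {per_channel:.2f} GB/s")
print(f"Array beats one channel: {array_bw > per_channel}")
```

Eight drives at ideal scaling (~28 GB/s) do outrun a single ~21 GB/s channel, though the full quad-channel controller is still well ahead.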
Xpl1c1t - Thursday, October 5, 2017 - link
Faster throughput, but how about the latency?
Ah... one day we will have a non-volatile unified storage technology that replaces NAND and DRAM.
IGTrading - Monday, October 2, 2017 - link
It seems there is a typo in the sentence "The X399 chipset doesn't offer RAID capability for NVMe SSDs, but given its limited PCIe lane counts, this isn't much of an inconvenience."
The author probably means X370, not X399.
hulu - Monday, October 2, 2017 - link
No, this news piece is about X399 and Threadripper. The implied comparison here is to Intel's platforms where the chipsets have more lanes, and do offer NVMe RAID.
mdriftmeyer - Monday, October 2, 2017 - link
128 PCI-E lanes on Threadripper. Where's the bottleneck?
Hul8 - Monday, October 2, 2017 - link
First of all, EPYC has 128 lanes; Threadripper only has 64, of which 4 go to the chipset.
Secondly, the sentence was about the PCIe lanes from the chipset. Those are limited.
IGTrading - Tuesday, October 3, 2017 - link
Understood. Thank you!
msroadkill612 - Friday, October 6, 2017 - link
" Intel's platforms where the chipsets have more lanes, and do offer NVMe RAID."This is sadly the perception many have. Intel can plausibly say the latter part is true, but it isn't in reality.
Yes intel offer native nvme raid on the chipset, but no they dont provide more than the usual 4 lanes for the chipset.
Meaning one good nvme almost saturates the entire bandwidth of the chipset, making multi drive raid acceleration pointless.
It gets worse.
All that processing required to share that bandwidth takes time and introduces lag.
Intels most popular current processor, the 7700k, has no spare lanes after adding a normal 16 lane gpu. There is no way (except an 8 lane gpu) that you can add a true nvme drive at all, let alone an array.
There is no way you will get anything like comparable speeds from an intel chipset raid0 array.
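To put rough numbers on that funnel (treating DMI 3.0 as roughly a PCIe 3.0 x4 link at ~3.9 GB/s usable, and ~3.5 GB/s per fast NVMe drive - my figures, not measurements from this thread):

```python
# The chipset funnel: however many drives hang off the chipset, their
# combined throughput is capped by the single x4 uplink they share.
# Figures are assumed approximations, not measurements.

DMI_GBPS = 3.9      # ~usable bandwidth of DMI 3.0 (PCIe 3.0 x4 equivalent)
DRIVE_GBPS = 3.5    # ~sequential read of one fast NVMe SSD

for drives in (1, 2, 3, 4):
    ideal = drives * DRIVE_GBPS
    capped = min(ideal, DMI_GBPS)   # everything shares the same uplink
    print(f"{drives} drive(s): ideal {ideal:4.1f} GB/s -> capped at {capped:.1f} GB/s")
```

One drive already sits at roughly 90% of the uplink, so a second chipset-attached drive in RAID 0 adds almost nothing to sequential throughput.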
I feel sad knowing how disappointed folks will be, so soon after purchase, if they are effectively excluded from the still-evolving benefits of NVMe.
In the known universe, name me something else that has increased by a factor of 35 in a short space of time. Even in the IT world, that's astonishing. Yet in the time SSDs have been affordable, that's what mainstream desktop work disks have done, speed-wise.
The mainstream work drive went from a 100 MB/s SATA HDD to a 3500 MB/s Samsung NVMe SSD. It's not a tick or a tock, or several of each; it's seismic. A vacuum that is a long way from filled.
That AMD has now provided Threadripper users with the native means and lanes to multiply that 35x factor by a further ~6x is truly head-spinning.
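A quick sanity check of those multipliers, assuming ideal linear RAID 0 scaling across six drives (the best case; real arrays lose some to overhead):

```python
# Sanity check of the 35x and ~6x multipliers, assuming ideal linear
# RAID 0 scaling across six drives (best case; real arrays lose a bit).

HDD_MBPS = 100       # mainstream SATA hard drive
NVME_MBPS = 3500     # fast NVMe SSD, per the figures above
DRIVES = 6

single = NVME_MBPS / HDD_MBPS     # 35x over the old HDD
array_mbps = NVME_MBPS * DRIVES   # 21000 MB/s ideal
combined = array_mbps / HDD_MBPS  # 210x over the old HDD

print(f"HDD -> NVMe:             {single:.0f}x")
print(f"NVMe -> 6-drive RAID 0:  {DRIVES}x (ideal)")
print(f"Combined vs the old HDD: {combined:.0f}x ({array_mbps} MB/s)")
```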
It blurs the traditional boundaries between storage and memory, which has interesting possibilities in a seemingly perpetually DRAM-starved IT market.
FYI, an instructive thread on what a mess Intel RAID is:
http://forum.asrock.com/forum_posts.asp?TID=5587&a...
msroadkill612 - Thursday, October 12, 2017 - link
The chipsets provide the illusion of PCIe 3.0 lanes, but they all share the same four real lanes. They are a joke.
The Benjamins - Monday, October 2, 2017 - link
By using 4 drives. The "up to 3 NVMe" means up to 3 M.2 NVMe-compatible slots, but you can use PCIe cards and run up to 4 M.2 drives per x16 PCIe slot. Der8auer's video (it was released early and is currently taken down) has him using two 4x M.2 cards for eight 960 Pros in a RAID 0 setup.
Gothmoth - Monday, October 2, 2017 - link
Because our info is wrong... it supports 6 drives.
AMD: "With a single GPU in the system, arrays containing up to six NVMe SSDs can be supported without adapters. NVMe RAID support on the AMD Ryzen Threadripper platform does not require specific NVMe disks or hardware activation keys."
Gothmoth - Monday, October 2, 2017 - link
Stupid comment system... I am typing on my phone, sorry for the typos. But there's no editing feature with this stupid comment system.
Ej24 - Monday, October 2, 2017 - link
Threadripper allows up to 10 NVMe drives. Not sure where you got the limit of 3 from. Three is just the typical number of onboard M.2 connections, but you can just add more via the normal PCIe slots and a riser card.
peevee - Monday, October 2, 2017 - link
I quoted the article: "In a typical configuration, the motherboard will have up to three M.2 slots providing PCIe x4 connections to the Threadripper CPU."
Billy Tallis - Monday, October 2, 2017 - link
M.2 slots on the motherboard are far from being the only way to attach an NVMe SSD to the CPU's PCIe lanes.
Death666Angel - Monday, October 2, 2017 - link
"And without RAID5, 3 SSD RAIDs are not possible at all."Everything the others have said is right, but I just wanted to comment on this: RAID5 needs at least 3 drives, that is true. But 3 drives don't need RAID5. RAID0 can run with as many drives as the controller supports, you can have 5 drives in RAID0 if you don't care about failure rates.
peevee - Monday, October 2, 2017 - link
Agree, but RAID0 has no redundancy at all.
sor - Monday, October 2, 2017 - link
This may not be related, but you CAN do a RAID10 with three drives. It's also called RAID1E; it stripes and mirrors, but since there's an odd number of drives it just rotates where the chunks are kept. Linux software RAID10 will do this with an odd number of drives.
evancox10 - Tuesday, October 3, 2017 - link
Uh, both RAID 0 and 1 support an arbitrary number of drives.
bull2760 - Wednesday, October 4, 2017 - link
You would never want to use RAID 5 on a consumer-grade SSD. RAID 5 parity writes would kill a Samsung EVO drive: every small write turns into a read-modify-write of both data and parity, roughly doubling what the flash has to absorb. If you are tinkering with the idea of RAID 5 on SSDs then you need to buy SLC-based SSDs, which will cost you an arm and a leg. RAID 0 is a different story, but you had better be able to deal with loss of data should you lose a drive in a RAID 0 configuration.
someonesomewherelse - Saturday, October 14, 2017 - link
You could do it the same way mdraid does 'raid10' with 3 drives.
https://en.wikipedia.org/wiki/Non-standard_RAID_le...
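A toy sketch of that rotating-copy idea with three drives (md's real 'raid10' layout does more bookkeeping; this only shows how the mirror pairs wrap around an odd drive count):

```python
# Toy illustration of RAID 1E / md "raid10" on an odd drive count:
# every chunk gets two copies, placed round-robin, so the mirror
# pairs rotate across the drives instead of forming fixed pairs.

def raid1e_placement(num_chunks, drives=3, copies=2):
    for c in range(num_chunks):
        yield c, [(c * copies + k) % drives for k in range(copies)]

for chunk, on_drives in raid1e_placement(6):
    print(f"chunk {chunk}: copies on drives {on_drives}")
# chunk 0 -> [0, 1], chunk 1 -> [2, 0], chunk 2 -> [1, 2], then repeat
```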
Gothmoth - Monday, October 2, 2017 - link
Works pretty well and scales linearly. I just tried it this morning with two Samsung 960 Evos.
Duncan Macdonald - Monday, October 2, 2017 - link
The Threadripper platform has LOTS of PCIe lanes - there are passive adapters that allow 4 NVMe M.2 cards to connect to an x16 PCIe slot. (E.g. the Aplicata Quad M.2 NVMe SSD PCIe x16 Adapter @ $199.)
mdriftmeyer - Monday, October 2, 2017 - link
Don't tell the Intel fans. Let them live in their deluded reality.
peevee - Monday, October 2, 2017 - link
Why is it so expensive? After all, it is a dumb mechanical connection.
Duncan Macdonald - Monday, October 2, 2017 - link
Low-volume market. Threadripper, EPYC and high-end Xeon CPUs are the only ones with enough PCIe lanes to make good use of such a card.
bill.rookard - Monday, October 2, 2017 - link
Not necessarily. I have an older Asus P7F7-E WS motherboard which has four full x8 slots. With 3 add-in cards for M.2 (keep one slot for the GPU, or just go with the IGP and run all 4 slots), the drives could run at x2 each, 4 per card; at 2TB per M.2 drive I could run 32TB of all-SSD NVMe storage. A RAID5 setup would leave me 30TB usable.
Billy Tallis - Tuesday, October 3, 2017 - link
That board only has 16 PCIe 2.0 lanes coming off the CPU. Back then, PCIe switch chips like the NF200 were cheap enough to be used in enthusiast products, but PCIe 3 switches have never been so cheap. And since Threadripper has so many lanes coming directly off the CPU instead of being fanned out by the PCH or a separate switch chip, it has more than six times the I/O bandwidth available compared to your Lynnfield system.
colonelclaw - Monday, October 2, 2017 - link
"Nobody has manufactured a true NVMe hardware RAID controller"Are you sure? https://www.broadcom.com/products/storage/raid-con...
Billy Tallis - Monday, October 2, 2017 - link
Yeah, I'm sure. Go read up on the trouble they've had getting their Linux driver for those chips accepted into the kernel. It's an ugly mess, but amounts to more of an NVMe passthrough capability than hardware RAID.
msroadkill612 - Wednesday, October 4, 2017 - link
What is your opinion of the HighPoint 4x NVMe RAID card, please?
Billy Tallis - Wednesday, October 4, 2017 - link
I haven't tested it yet, but I plan to. It's another software RAID solution, but one with both Windows and Linux support. Last I heard, they felt pretty good about the sequential I/O performance but were still working to improve the random I/O performance.
msroadkill612 - Monday, October 9, 2017 - link
FYI, I think this may be a bench of 2x HighPoint 4x NVMe cards and 6x Samsung SM961s (~same as Evo specs, afaik):
https://imgur.com/a/a68Sd
It also scales well.
Expensive at $400 vs. the Asus card's ~$220 list - which makes getting a TR mobo with as many native NVMe ports included as possible a bargain.
lilmoe - Monday, October 2, 2017 - link
U.2?
BrokenCrayons - Monday, October 2, 2017 - link
Yeah, they had a few hits people liked. I don't see the appeal, but my musical tastes are different.
MrSpadge - Monday, October 2, 2017 - link
Haha, right on topic!
lilmoe - Monday, October 2, 2017 - link
Barista?
msroadkill612 - Wednesday, October 4, 2017 - link
Clearly the scaling is possible, but afaik it has never been possible before. It's almost a law of physics that there will be severe overheads to RAID.
This scaling is revolutionary, afaik.
I only roughly know what's happening.
I am just amazed none of the pundits delve any deeper into this remarkable phenomenon.
They just say "that's a nice result".
prisonerX - Wednesday, October 4, 2017 - link
Latency always goes up because there is something doing processing along the drive connection.
For some people this is acceptable and RAID serves a purpose, but personally I don't understand why anyone who doesn't have some enterprise requirements would do it. I think mostly they just think RAID is always faster regardless, even if it is not.
terrefirma - Sunday, October 15, 2017 - link
I occasionally visit these forums just to reassure myself that there ARE people who understand tech, after feeling that most do not. Playing games is not the same as really understanding, just like driving a car and thinking the battery is running the electronics. But where does an average person find someone to help sort out their own computer issues? Anyone?
morfizm - Friday, November 24, 2017 - link
Is there any X399 motherboard that, along with all recent BIOS updates, is currently *verified* to support bootable NVMe RAID?
I am trying it on an ASUS ROG Zenith with Samsung 960 EVO M.2 NVMe drives, and it doesn't work. The drives are only partially detected: they are listed in boot order, but not in any other place, and the ASUS RAID tool has "Create array" grayed out, as if the drives were ignored.
Should I try to install Windows on a spare hard drive, boot from it, use AMD's tools to configure the RAID, and then try to boot from the array? Will that work? My assumption is that bootable RAID should be configured from the BIOS.
However, if there is a mobo already *proven* to work, I'll return the ROG Zenith and buy another mobo.