Arbie - Friday, June 14, 2019 - link
Does this have any implications for motherboard hardware in the near future (six months or so)? Or in fact for any hardware not part of the SSD? From the text it seems like it doesn't, unless I missed something. Excellent article BTW; thanks.
dooferorg - Friday, June 14, 2019 - link
As with anything, technology always marches on. As long as a motherboard supports PCIe 3.0, or soon PCIe 4.0, you'll be able to get appropriate adapter cards to connect these newer drives.
Billy Tallis - Friday, June 14, 2019 - link
Motherboards are only minimally involved with NVMe; basically just for booting. Otherwise, they're just routing PCIe lanes from the CPU to the SSD and are blind to the higher-level protocols.
bobhumplick - Saturday, June 15, 2019 - link
So does this mean that NVMe 1.4 will be backwards compatible with current boards if new drives are used? Or will new hardware be needed? I know the other person basically asked that, but I didn't fully understand if an answer had been given. It seems like as long as the OS supports it as well as the SSD, then it should work. Unless the BIOS has to support it?
cygnus1 - Monday, June 17, 2019 - link
As was mentioned, this is a software change, so depending on what changed in the protocol, nothing may be necessary at all. But if anything is needed, I think a motherboard would just need a firmware update to enable NVMe 1.4 support. I haven't seen it said anywhere, but I would think that as long as the device can boot, and everything after that is just passed between the SSD and the OS, then the OS and the SSD could negotiate up to 1.4 even if the motherboard only supported 1.3 for boot purposes.
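For anyone who wants to check what spec level a drive itself reports: the controller advertises it in its Version (VS) register, at offset 0x08 of the memory-mapped register space, independent of what the motherboard firmware understands. A minimal sketch for a Linux host, assuming root access and a placeholder PCI address (0000:01:00.0) that you would swap for your own drive:

    /* Read the NVMe Version (VS) register straight from the controller's
     * BAR0, which the kernel exposes via sysfs. Needs root; the PCI
     * address is a placeholder, not a real system's. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
        if (regs == MAP_FAILED) { perror("mmap"); return 1; }

        /* VS at offset 0x08: major in bits 31:16, minor in 15:8, tertiary in 7:0 */
        uint32_t vs = regs[0x08 / 4];
        printf("drive reports NVMe %u.%u.%u\n",
               vs >> 16, (vs >> 8) & 0xff, vs & 0xff);

        munmap((void *)regs, 4096);
        close(fd);
        return 0;
    }

The motherboard never sees any of this; version support is purely between the driver and the drive, which is why firmware only matters for the boot path.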
CheapSushi - Friday, June 14, 2019 - link
I really like the idea of there being more variety for add-on cards or storage drives that can act as accelerators or coprocessors.
willis936 - Saturday, June 15, 2019 - link
Good stuff. I hope anyone who implements PMR also does periodic (like once a second) synchronization to storage. Otherwise it’s pointless to even bother sending the file across PCIe in the first place.
boeush - Saturday, June 15, 2019 - link
You miss the point of PMR. It doesn't need to sync periodically, because persistence across power cuts is automatically guaranteed. That's what makes it different from regular RAM, and it's the reason sending the file across PCIe is worthwhile at all.
willis936 - Saturday, June 15, 2019 - link
Backup power on drives is used to flush data that is cached in RAM to NAND. You can’t guarantee when power will be restored. If a design treats data that users care about as disposable then it’s a bad design. If it’s data the user doesn’t care about then why is it getting sent to storage at all? Main memory works fine. It costs very little to have the drive sync to NAND.
extide - Sunday, June 16, 2019 - link
It syncs when the power goes out. It doesn't need to at any other time.
mode_13h - Monday, June 17, 2019 - link
Yeah, it's like he didn't read the article.

> it's a general-purpose chunk of memory that is made persistent thanks to the same power loss protection capacitors that allow a typical enterprise SSD's internal caches to be safely flushed in the event of an unexpected loss of host power. The contents of the PMR will automatically be written out to flash, and when the host system comes back up it can ask the SSD to reload the contents of the PMR.
...would've taken less time to (re)read that part than to post two messages about it.
GreenReaper - Saturday, June 15, 2019 - link
Once it is sent, it can be considered "stored" and so a database can return success for a transaction. Basically the same thing that's done already with "hardware" RAID controllers that have flash-backed or battery/capacitor-supported RAM, just faster.

If the power is kept on, it theoretically doesn't ever have to store it - it can delay indefinitely until power's withdrawn, at which point it has to be stored to maintain its guarantee.
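To make that concrete, here is a minimal sketch of such a commit path, assuming the drive's PMR has already been enabled and mapped into the process. A plain allocation stands in for the real mapping so the sketch actually runs; pmr_base, the offsets, and the record layout are illustrative, not any real driver API:

    /* Sketch of acknowledging a transaction once its log record reaches
     * the PMR. Once the write arrives at the device, the drive's
     * power-loss capacitors guarantee it reaches flash even on a power
     * cut, so it can be treated as durable immediately. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static volatile uint8_t *pmr_base;  /* would point at the mapped PMR BAR */

    static int commit_record(const void *rec, size_t len, size_t off)
    {
        memcpy((void *)(pmr_base + off), rec, len);

        /* PCIe memory writes are posted; reading back the last byte
         * forces them to complete at the device before reporting success. */
        (void)pmr_base[off + len - 1];
        return 0;
    }

    int main(void)
    {
        pmr_base = calloc(1, 4096);     /* stand-in for the real PMR mapping */
        const char rec[] = "txn 42: COMMIT";
        if (commit_record(rec, sizeof rec, 0) == 0)
            puts("transaction acknowledged as durable");
        free((void *)pmr_base);
        return 0;
    }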
mode_13h - Saturday, June 15, 2019 - link
It's pretty shocking that preferred alignment and granularity have taken so long to get exposed. Even SATA SSDs should've had that.

Can anyone explain why alignment and granularity should differ? I understand there are better and worse alignments, even among those that are non-optimal. Is that what it's about?
cygnus1 - Monday, June 17, 2019 - link
Basically reads and writes aren't the same size for an SSD. So it's possible to have an alignment that's faster for small reads, but is terrible for writes. Increase the size of that granularity and write performance goes up (and write amplification goes down), but at the cost of read performance.
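For illustration: NVMe 1.4 exposes these hints in the Identify Namespace data as NPWA (preferred write alignment) and NPWG (preferred write granularity), both counted in logical blocks. A rough sketch of how software might shape a write around them; the geometry below is made up rather than read from a real drive:

    /* Round a write so it starts on the preferred alignment and covers
     * a whole number of preferred-granularity units, avoiding the
     * read-modify-write cycles (and write amplification) that partial
     * units cause. The NPWA/NPWG values used here are hypothetical. */
    #include <stdint.h>
    #include <stdio.h>

    static void shape_write(uint64_t lba, uint32_t nlb,
                            uint32_t npwa, uint32_t npwg,
                            uint64_t *out_lba, uint32_t *out_nlb)
    {
        uint64_t start = lba - (lba % npwa);          /* align start down */
        uint64_t end = lba + nlb;
        end = ((end + npwg - 1) / npwg) * npwg;       /* round length up */
        *out_lba = start;
        *out_nlb = (uint32_t)(end - start);
    }

    int main(void)
    {
        uint64_t lba; uint32_t nlb;
        shape_write(1001, 7, 8, 32, &lba, &nlb);      /* made-up geometry */
        printf("issue write at LBA %llu for %u blocks\n",
               (unsigned long long)lba, nlb);
        return 0;
    }

Alignment and granularity get reported separately because a write can start on the right boundary yet still cover a partial granularity unit, and the two mistakes cost you in different ways.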