1 Comment
Kevin G - Monday, August 12, 2024 - link
While implied but never directly stated, the performance gains are on the CPU side of things: lower host utilization, which in turn means more cycles for other workloads. Not a bad thing, but it doesn't inherently equate to faster drive speeds. In fact, those might be slightly lower due to the overhead of parity transfers from drive to drive.

This is a novel way of scaling array performance in the era of high-speed NVMe. If the drive controller is already doing some parity calculation on blocks for its own integrity, being able to simply shuffle that result off to another drive is efficient. Traditional RAID5 would still require combining (MUXing) multiple input blocks before the parity is stored, but that effectively boils down to IO parsing. Done this way, the impact on power consumption should be nominal too.
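To make the parity math concrete, here is a minimal sketch of the XOR work a host-side RAID5 implementation does per stripe (the 4 KiB chunk size, 3+1 layout, and function names are my own assumptions for illustration). If I understand the scheme correctly, the drive-to-drive offload would, in effect, move this step off the host CPU and into the SSD controllers.

```python
# Minimal sketch of host-side RAID5 parity for one stripe (illustrative only;
# real implementations use accelerated XOR and rotate parity across drives).
from functools import reduce

STRIPE_UNIT = 4096  # bytes per chunk; an assumed value for this sketch

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def raid5_parity(data_chunks):
    """Parity for a stripe is the XOR of all its data chunks.
    On the host this XOR costs CPU cycles; the drive-offloaded scheme
    would instead have a drive compute and forward this result."""
    return reduce(xor_blocks, data_chunks)

def rebuild_missing(surviving_chunks, parity):
    """Any single lost chunk is recovered by XORing the survivors with parity."""
    return reduce(xor_blocks, surviving_chunks, parity)

# Example: 3 data chunks plus 1 parity chunk
chunks = [bytes([i]) * STRIPE_UNIT for i in (1, 2, 3)]
p = raid5_parity(chunks)
assert rebuild_missing([chunks[0], chunks[2]], p) == chunks[1]
```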
There are a couple of details that need to be worked out before this technology sees mass enterprise support. The first is good interoperability, which goes beyond a single vendor to mixing models and even firmware revisions. In other words, it would be ideal for, say, Kioxia and Solidigm to work together on this RAID acceleration so that enterprises wouldn't have to go to a single vendor for a replacement disk. Vendor lock-in is just bad, and submitting this as a standard to the NVMe spec is a good move to prevent that. The details of how a rebuild happens also need to be disclosed, as does the capability to safely expand or even contract an array. The last issue is that hardware RAID has seen a decline due to its past as a vendor lock-in technology, though that was at the controller level rather than the drive level. What has risen since then are software-defined solutions like ZFS, which focus on data integrity. Given that ZFS computes parity first before writing across the drives (and then invalidates the old blocks on a data update), this new accelerated technique may be inherently incompatible with ZFS's integrity algorithms. At the very least, some significant rework would be needed.
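To illustrate why that ordering is hard to reconcile, here is a rough, hypothetical sketch of a RAID-Z-style copy-on-write stripe update under an assumed 3 data + 1 parity layout; the FakeDrive class and function names are stand-ins of my own, not ZFS internals. The point is that parity is computed on the host before anything touches the media, and the old stripe is only invalidated after the new one is in place, so drive-computed parity would have to slot into that sequence somehow.

```python
# Hypothetical sketch of a RAID-Z-style copy-on-write stripe update;
# the classes and names here are illustrative stand-ins, not ZFS interfaces.
from functools import reduce

def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class FakeDrive:
    """Toy drive: blocks only ever land in fresh slots and are freed later."""
    def __init__(self):
        self.blocks = []

    def allocate_and_write(self, chunk):
        self.blocks.append(chunk)       # copy-on-write: always a new location
        return len(self.blocks) - 1

    def free(self, location):
        self.blocks[location] = None    # old block invalidated only at the end

def cow_stripe_update(drives, new_chunks, old_locations=None):
    # 1. Host computes parity up front, so the checksummed stripe is complete
    #    before a single byte reaches the media.
    parity = reduce(xor_blocks, new_chunks)

    # 2. Data and parity land in fresh locations; nothing is overwritten in place.
    new_locations = [d.allocate_and_write(c)
                     for d, c in zip(drives, new_chunks + [parity])]

    # 3. Only after the new stripe is durable are the old locations freed.
    #    Drive-computed parity would need to fit into this ordering (and match
    #    the filesystem's checksums) to be usable here.
    if old_locations is not None:
        for d, loc in zip(drives, old_locations):
            d.free(loc)
    return new_locations

# A 3 data + 1 parity stripe, updated twice; the second update frees the first.
drives = [FakeDrive() for _ in range(4)]
locs = cow_stripe_update(drives, [b"\x01" * 4096, b"\x02" * 4096, b"\x03" * 4096])
cow_stripe_update(drives, [b"\x04" * 4096, b"\x05" * 4096, b"\x06" * 4096], locs)
```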