5 Comments

  • croc - Wednesday, November 25, 2020 - link

    Why is write speed always the hard part? Whether DC or consumer, it always seems to lag reads by a good margin. Seriously curious about this.
  • Kristian Vättö - Thursday, November 26, 2020 - link

    It's down to the fundamental operation of NAND. A read is done by sensing the voltage level of the cell, whereas a write has to inject electrons into the cell to create the charge. Writes get even harder in multi-level cell variants, since the programming has to be done in multiple iterations. For modern TLC, write latency is ~10x higher than read latency.
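
The multi-pass programming described above can be sketched as a toy incremental-step-pulse (ISPP) loop, where each cell is nudged toward its target voltage in repeated program/verify steps, so more levels per cell means more iterations. All numbers here are illustrative, not real device parameters.

```python
# Toy ISPP model: a cell is pushed toward its target voltage level in
# small program pulses, verifying after each one. More voltage levels
# per cell (SLC=2, MLC=4, TLC=8) means higher targets on average, hence
# more pulses and higher write latency. Illustrative numbers only.

def program_pulses(target_level: int, step: float = 0.3) -> int:
    """Count program/verify iterations needed to reach a target voltage."""
    voltage, pulses = 0.0, 0
    while voltage < target_level:   # verify after each pulse
        voltage += step             # one program pulse
        pulses += 1
    return pulses

def avg_pulses(bits_per_cell: int) -> float:
    """Average pulse count over all voltage levels of a cell type."""
    levels = 2 ** bits_per_cell
    return sum(program_pulses(l) for l in range(levels)) / levels

for bits in (1, 2, 3):
    print(f"{bits} bit(s)/cell: ~{avg_pulses(bits):.1f} pulses on average")
```

Reading, by contrast, is a single sense operation regardless of how many bits the cell stores, which is why the gap widens with each extra bit per cell.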
  • Calin - Thursday, November 26, 2020 - link

    Furthermore, reading is "painless" - i.e. you just read the data.
    Writing on a current SSD (one based on MLC, TLC or even QLC) means the drive first writes quickly into flash treated as "SLC" (just one bit per cell) and marks those blocks to be folded into MLC/TLC/QLC later (MLC = 2 bits per cell, TLC = 3 bits per cell, QLC = 4 bits per cell).
    (Current consumer SSDs are basically TLC with an SLC cache.)
    So an SSD runs a maintenance process in the background - hopefully while no other operations are going on, but you can swamp the SLC cache and force the folding to happen during your writes.

    Furthermore, if flash block A has been written to much more often than flash block B, the drive will swap the contents and addresses of the two blocks (so hopefully the frequent future writes aimed at block A will in fact land on the less-worn block B).

    Of course, enterprise-level SSDs might have a RAM cache and a "battery" - so writes are initially accepted at the speed of the RAM cache (i.e. basically instantly, as long as the RAM cache has not filled up). The battery is there to provide enough energy to write the RAM cache out to flash if power is lost.
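
The SLC-cache behaviour described above can be sketched with a toy model: bursts that fit in the cache complete at the fast SLC rate, while larger bursts spill over to the native TLC rate. The cache size and speeds below are made-up illustrative figures, not measurements of any real drive.

```python
# Toy model of an SLC write cache: host writes land in a fast SLC region
# and are folded into slow TLC in the background; once the SLC region
# fills, further writes proceed at the native TLC programming speed.
# Sizes and speeds are made-up illustrative figures.

SLC_CACHE_GB = 30
SLC_SPEED_MBS, TLC_SPEED_MBS = 2000, 500   # MB/s, illustrative

def write_time_s(total_gb: float) -> float:
    """Seconds to absorb a sustained write burst of total_gb gigabytes."""
    fast = min(total_gb, SLC_CACHE_GB)     # absorbed by the SLC cache
    slow = total_gb - fast                 # spills to native TLC speed
    return (fast * 1000) / SLC_SPEED_MBS + (slow * 1000) / TLC_SPEED_MBS

print(f"20 GB burst:  {write_time_s(20):.0f} s")   # fits in the cache: 10 s
print(f"100 GB burst: {write_time_s(100):.0f} s")  # swamps the cache: 155 s
```

This is why drive reviews distinguish burst write speed from sustained write speed: the first number is really the SLC cache, the second is the native NAND.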
  • diediealldie - Friday, November 27, 2020 - link

    For NAND, sensing the existing electrons (1 is erased, 0 is programmed) is quite easy, but moving electrons back and forth is quite hard.

    Funny thing is that in the real world, read latency (not throughput!) matters more than write latency, because small writes can be hidden by a DRAM write-back cache, while there is no way to hide read latency from the device's perspective.
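
The asymmetry described above can be sketched as follows: a write is acknowledged as soon as it lands in DRAM, while a read for uncached data has to wait for the NAND itself. The latencies are rough illustrative figures, not specifications of any real device.

```python
# Toy model of why a DRAM write-back cache hides write latency but not
# read latency. A write is acknowledged once buffered in DRAM (the slow
# NAND program happens later, in the background); a read miss must wait
# for the NAND array. Latencies in microseconds, illustrative only.

DRAM_US, NAND_READ_US, NAND_PROG_US = 1, 80, 800

def perceived_write_us() -> int:
    """Host-visible write latency: just the DRAM buffering time."""
    return DRAM_US   # the NAND_PROG_US cost is deferred and hidden

def perceived_read_us(in_cache: bool) -> int:
    """Host-visible read latency: only cache hits avoid the NAND."""
    return DRAM_US if in_cache else NAND_READ_US

print(perceived_write_us())       # 1 us - NAND program deferred
print(perceived_read_us(False))   # 80 us - cannot be hidden
```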
  • TheinsanegamerN - Monday, November 30, 2020 - link

    Others have already answered well. I wanted to add that the way you use datacenter drives is typically over a network interface.

    For the slowest read speed listed here, the sequential speed would saturate two 10-gigabit network interfaces. The fastest, the FADU, could consume 41 Gbps. Most datacenters will top out at 4x10 Gbps for a BIG data unit. At these speeds most servers will be network bound before being drive bound, and that's not counting, say, RAID 5 or 6 or custom solutions. DCs faster than this will typically be using PCIe slot-based SSDs or Optane-style devices, with much higher read/write speeds and prices to match.
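The back-of-the-envelope arithmetic behind that comparison is just bytes-to-bits conversion; the 5.1 GB/s figure below is an assumed example consistent with the 41 Gbps quoted above, not a number taken from the article.

```python
import math

# Convert a drive's sequential throughput (GB/s) to network line rate
# (Gbps) and count how many 10 GbE links it would take to carry it.
# 5.1 GB/s is an assumed example figure, not a quote from the article.

def gbps(gb_per_s: float) -> float:
    return gb_per_s * 8                      # 1 byte = 8 bits

def links_needed(gb_per_s: float, link_gbps: float = 10.0) -> int:
    return math.ceil(gbps(gb_per_s) / link_gbps)

print(f"{gbps(5.1):.0f} Gbps")       # ~41 Gbps on the wire
print(f"{links_needed(5.1)} links")  # would need 5 x 10 GbE links
```

In other words, a single fast NVMe drive already outruns a typical server's network uplinks, which is why the drive's raw read speed is rarely the bottleneck in practice.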
