14 Comments

  • deil - Thursday, March 14, 2019 - link

    Well, 16¢/GB for NVMe speeds, that's a wow, at least today.
  • MDD1963 - Monday, March 18, 2019 - link

    well, for 'half-spec' NVMe speeds and still 3x the SATA spec, it's pretty darn inexpensive... Now we need mainboards to have 6x NVMe slots instead of 6x SATA ports.
  • WiredTexan - Friday, March 29, 2019 - link

    "well, for 'half-spec' NVME speeds and still 3x SATA spec, it's pretty darn inexpensive... Now we need mainboards to have 6x NVME slots instead of 6x SATA ports."

    And there's the problem. There's literally no room on an ATX board for more than three. Each one also consumes PCIe lanes, etc. I'm not really knowledgeable about this field, but is there a group working on a new spec to replace the current consumer standards of ITX, mATX, ATX and EATX? It seems we're moving into new territory that can't be accommodated by the current standards. Or is PCIe 4.0 enough, along with the increased capacity of NVMe drives?
  • amnesia0287 - Tuesday, August 27, 2019 - link

    PCIe 4.0 and the abundance of lanes on the newest AMD chips partially solves this. Once we move to PCIe 5.0, current PCIe 3.1 speeds could be achieved using a single lane (rough per-lane math is sketched after this comment). Then adding lots of U.2-style SSDs will be easy (you can only fit so many M.2 slots on a PC, though I could totally see someone figuring out a PCIe x16 card with 16 M.2 SSDs, which is just silly to think about).

    We are almost there though. I’ve moved almost ALL my storage to SSDs, only 2 spinners left. The next round of upgrades should fix that.

    What I’d love to see is some massive 3.5" SSD, I don’t even care which interface. I’ve never understood why they gravitated to 2.5". I guess it doesn’t really matter too much. They can already do 8TB and I think even 16TB in the 2.5" form factor. But 3.5" would add so much room for things like capacitors, batteries or more chips (and channels).
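
    For reference, a quick back-of-the-envelope sketch of the per-lane arithmetic behind the "one Gen5 lane" point above, using the standard published transfer rates and 128b/130b encoding; nothing here is measured from this drive:

    ```python
    # Rough PCIe per-lane throughput (GB/s) after 128b/130b encoding
    # overhead (Gen3 and newer). Standard published rates, no measurements.
    GIGATRANSFERS = {"3.0": 8, "4.0": 16, "5.0": 32}   # GT/s per lane

    def lane_gb_per_s(gen: str) -> float:
        """Usable bandwidth of one lane in GB/s (128b/130b encoding)."""
        return GIGATRANSFERS[gen] * (128 / 130) / 8    # bits -> bytes

    for gen in GIGATRANSFERS:
        print(f"PCIe {gen}: x1 ~ {lane_gb_per_s(gen):.2f} GB/s, "
              f"x4 ~ {4 * lane_gb_per_s(gen):.2f} GB/s")

    # PCIe 3.0: x1 ~ 0.98 GB/s, x4 ~ 3.94 GB/s
    # PCIe 5.0: x1 ~ 3.94 GB/s  -> one Gen5 lane matches a Gen3 x4 link
    ```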
  • BPB - Thursday, March 14, 2019 - link

    This seems like a nice choice for my older PC that needs a PCIe card to use an NVMe drive. The system would never take full advantage of more expensive drives. I may finally upgrade to an NVMe drive on that system now that I can get a reasonable size for cheap.
  • TelstarTOS - Thursday, March 14, 2019 - link

    Where are 2TB drives, WD?
  • jabber - Thursday, March 14, 2019 - link

    So would leaving, say, 10GB+ free for over-provisioning help with these lower-performance drives?
  • Cellar Door - Thursday, March 14, 2019 - link

    You don't need to do that - you won't see any real-world difference unless you are running a professional workload, in which case you should be starting out with a different drive.
  • abufrejoval - Thursday, March 14, 2019 - link

    I am a bit confused when you assert a DRAM-less design yet speak of an "undisclosed amount of memory on-board" and categorically exclude a host memory buffer...

    I guess the controller would include a certain amount of RAM, more likely static, because it takes something like an IBM p-Series CPU to mix DRAM and logic on a single die.

    I guess there could in fact be a PoP RAM chip and we couldn't tell from looking at the plastic housing, but could they afford that?

    That leaves embedded MRAM or ReRAM which I believe WD is working on, but would it already be included on this chip?

    And I wonder if an HMB-less design can actually be verified, or where and how you can see what amount of host memory is actually being requested by an NVMe drive.

    BTW: How do they actually use that memory? The optimal performance would actually be achieved by having the firmware execute on the host CPU using the host's own DRAM, but for that the drive would have to upload the firmware, which is a huge security risk unless it were eBPF-type code (hey, perhaps I should patent that!).

    What remains is PCI bus master access, which would explain why these drives may not be speed demons.
  • Billy Tallis - Thursday, March 14, 2019 - link

    When WD introduced the second generation WD Black SSD, they briefed the media on their controller architecture in general and answered some questions about the SN520. The controller ASIC includes SRAM buffers, but they don't disclose the exact capacity. It's probably tens of MB, comparable to the amount of memory used by HMB drives and far too small to be worth using a separate DRAM device. WD specifically stated that HMB was not used, and that they had sufficient memory on the controller itself to make using HMB unnecessary. (And even without such a statement, it's trivial to inspect HMB configuration from software, since the drive has to ask the OS to give it a buffer to use, and the OS gets to choose how much memory to give the SSD access to.)
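
    As an aside, here is a minimal sketch of what that software-side check could look like on Linux with nvme-cli; the device path, and the assumption that nvme-cli's JSON output exposes the hmpre/hmmin Identify Controller fields, are illustrative and not anything WD provides:

    ```python
    import json
    import subprocess

    def hmb_request(dev: str = "/dev/nvme0") -> dict:
        # Identify Controller reports how much host memory the drive asks
        # for; HMPRE and HMMIN are in units of 4 KiB per the NVMe spec.
        raw = subprocess.check_output(
            ["nvme", "id-ctrl", dev, "--output-format=json"])
        ctrl = json.loads(raw)
        return {
            "preferred_bytes": ctrl.get("hmpre", 0) * 4096,
            "minimum_bytes": ctrl.get("hmmin", 0) * 4096,
        }

    if __name__ == "__main__":
        # All zeroes means the drive never requests a host memory buffer.
        print(hmb_request())
    ```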

    None of the above buffers have anything to do with executing SSD controller firmware; that's always 100% on-chip even for drives that have multi-GB DRAM on board. SSDs use discrete DRAM or HMB or (in this case) on-controller buffers to cache the flash translation layer's mappings between logical block addresses and physical NAND locations.
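
    To make concrete what those buffers are caching, a toy sketch of a logical-to-physical mapping table follows; the geometry is made up purely for illustration and is not WD's actual FTL design:

    ```python
    # Toy sketch of a flash translation layer's mapping table.
    class ToyFTL:
        PAGES_PER_BLOCK = 256                 # hypothetical NAND geometry

        def __init__(self):
            self.l2p = {}                     # logical page -> (block, page)
            self.cursor = 0                   # naive append-only write pointer

        def write(self, logical_page: int) -> None:
            # Flash can't be overwritten in place, so every write lands in a
            # fresh physical page and the table is updated to point at it.
            block, page = divmod(self.cursor, self.PAGES_PER_BLOCK)
            self.cursor += 1
            self.l2p[logical_page] = (block, page)

        def read(self, logical_page: int):
            return self.l2p.get(logical_page)  # physical location, or None

    ftl = ToyFTL()
    ftl.write(42)         # first write of logical page 42
    ftl.write(42)         # "overwrite": remapped, old copy becomes garbage
    print(ftl.read(42))   # -> (0, 1)
    ```

    At 4 KiB logical pages and roughly 4 bytes per entry, the full table for a 500GB drive is on the order of 500MB, which is why SSDs either carry dedicated DRAM (the usual rule of thumb is about 1GB per 1TB of flash) or cache only the hot part of the table in on-controller SRAM or an HMB.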
  • abufrejoval - Thursday, March 14, 2019 - link

    Thanks for the feedback. Took the opportunity to find and read your piece on the HMB from last June.

    I tested FusionIO drives when they came out with 160GB of SLC, and in fact I still operate a 2.4TB eMLC unit in one of my machines (similar performance level to a 970 Pro, but “slightly” higher energy consumption).

    Those had a much “fatter” HMB design and in fact ran most of their "firmware" as OS drivers on the host, including all mapping tables. The controller FPGA would only run the "analog" parts: low-level flash chip writing, reading and erasure with all their current adjustments, plus perhaps some of the ECC logic.

    Of course, you couldn’t boot these and on shutdown all these maps would be saved on a management portion of the device. But on a power failure they could be reconstructed from the write journal and full scans of status bits and translation info on the data blocks.

    That approach was ok for their data center use and it helped them attain performance levels that nobody else could match—at least at the time, because massive server CPUs are difficult to beat even with today’s embedded controllers.

    It also allowed for a higher performance “native” persistent block interface that eliminated most of the typical block layer overhead: Facebook is rumored to have motivated and used that interface directly for some years.

    NVMe has eliminated much of the original overhead, yet following the same reasoning that puts smartNIC logic into kernel space on Linux as eBPF, you could argue for a similar approach for SSDs, where the kernel would load safe eBPF-type code from the SSD to manage the translation layer, wear management and SLC-to-xLC write journal commits...

    Didn’t I even read about some Chinese DC startup doing that type of drive?

    With a split between CPU and RAM across PCIe x4, HMB seems borderline usable, especially since the buffers can be both denied and reclaimed by the host. Translation table accesses via bus master from the SSD controller, with all that arbitration overhead and those bandwidth limitations… I doubt it scales.
  • Billy Tallis - Thursday, March 14, 2019 - link

    HMB isn't meant to scale; it's meant to shave a dollar or two off the BOM of a low-end SSD.

    The modern successor to the Fusion-IO concept is the Open Channel SSD, for which there are a few competing standards as the industry tries to figure out the appropriate level of abstraction. Generally speaking, they all involve running most of the flash translation layer on the host CPU, while keeping the media-specific stuff (error correction) on the drive. But it's starting to look like before Open Channel SSDs can really catch on in the datacenter, NVMe protocol extensions will take care of most of the same use cases, by optionally exposing SSDs as zoned block devices analogous to SMR hard drives.
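
    To make the zoned-device comparison concrete, here is a purely conceptual sketch of the write rule involved (not an implementation of the actual NVMe ZNS command set): writes within a zone must land sequentially at a write pointer, and space is reclaimed only by resetting whole zones, which pushes data placement and garbage collection up to the host much like host-managed SMR.

    ```python
    # Conceptual sketch of a zone's sequential-write rule.
    class Zone:
        def __init__(self, size_blocks: int):
            self.size = size_blocks
            self.write_pointer = 0            # next block that may be written

        def write(self, offset: int, nblocks: int) -> None:
            # Writes must start exactly at the write pointer (sequential only).
            if offset != self.write_pointer:
                raise ValueError("zones accept sequential writes only")
            if offset + nblocks > self.size:
                raise ValueError("write past end of zone")
            self.write_pointer += nblocks

        def reset(self) -> None:
            # The only way to reuse space: wipe the whole zone, like an SMR band.
            self.write_pointer = 0

    z = Zone(size_blocks=4096)
    z.write(0, 64)       # ok: starts at the write pointer
    z.write(64, 64)      # ok: sequential continuation
    # z.write(0, 8)      # would raise -- in-place overwrites aren't allowed
    z.reset()            # reclaim the zone in one shot
    ```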
  • lightningz71 - Friday, March 15, 2019 - link

    I’ve got to wonder whether the advent of PCIe gen 5 will make HMB a much more usable thing for all but data center drives. On an NVMe device, that can represent a 4x improvement in data bandwidth between the SSD and host memory. That throughput would easily exceed the data rates that current-generation controllers are seeing while still using DDR3 in one- or two-chip implementations on their PCBs. When you look at the capacity improvements expected with DDR5, having an SSD use even 2GB for HMB wouldn’t be a significant impact on the system.

    Granted, I’m talking about the consumer space, where bleeding-edge performance isn’t the name of the game. Pro-level and big-iron implementations will certainly want higher-performance solutions.
  • icelava - Thursday, May 30, 2019 - link

    It seems like the mainstream market doesn't want to favour 2242 SSDs? I just ordered a Lenovo ThinkPad T480 and am now facing great difficulty sourcing an NVMe SSD model in that dimension (probably B+M keyed).
