
  • limitedaccess - Monday, March 18, 2019 - link

    Interesting that they launched a SATA SSD with their newer 96L NAND last month, but this is 64L instead.
  • DanNeely - Monday, March 18, 2019 - link

    I wonder if that means this was delayed a few months, or if, as a v1 product, it was simply given low priority for access to the newer NAND.
  • ksec - Monday, March 18, 2019 - link

    I think the most important information is price. For consumers, pretty much all NVMe SSDs now perform adequately. We will have to wait (or AnandTech could test it) for SSDs reaching 6GB/s sequential read/write and see if there is any real-world difference; a rough sense of the scale is sketched below.
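
    A back-of-the-envelope illustration in Python (the 50 GB file size and the drive speeds here are assumptions for the sake of the example, not figures from the article):

        # Ideal sequential-read times, ignoring filesystem and protocol overhead.
        def read_time_seconds(size_gb: float, speed_gb_per_s: float) -> float:
            return size_gb / speed_gb_per_s

        # ~0.55 GB/s SATA, ~3.5 GB/s current NVMe, 6.0 GB/s hypothetical future drive
        for speed in (0.55, 3.5, 6.0):
            print(f"{speed:4.2f} GB/s -> {read_time_seconds(50, speed):5.1f} s for a 50 GB file")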
  • kpb321 - Monday, March 18, 2019 - link

    Yeah, price will be key. Its specs are a bit lower than an HP EX920 or similar ADATA drive (~200MB/s lower read/write and ~30K fewer IOPS, assuming those specs are for the larger-capacity drives), but the gap isn't big enough to be a huge difference. Still, you'd definitely have a hard time selling a slightly slower drive for an equal or higher price. At this point, if you can't beat Samsung on performance, then you need to beat the EX920 and similar drives on price.
  • Duncan Macdonald - Monday, March 18, 2019 - link

    For most users, there is only one NVMe slot, normally occupied by the OS disk. If additional local storage is wanted, then SATA is usually the only possibility. (It is a lot easier to add a SATA drive than to migrate the OS from one NVMe drive to another.)

    SATA drives will still have a future until most motherboards have multiple NVMe slots.
  • Cullinaire - Monday, March 18, 2019 - link

    Not having to support SATA drives with their bulk + cabling is gonna result in some pretty crazy form factors in the near future!
  • FunBunny2 - Monday, March 18, 2019 - link

    Motherboard makers can put, what, 8 or 10 SATA ports in the same real estate as one of these drives.
  • Cullinaire - Monday, March 18, 2019 - link

    The ports are not the issue - the space taken up by the cabling and the drives themselves is what matters.
  • bolkhov - Monday, March 18, 2019 - link

    "For most users, there is only one NVMe slot - normally occupied by the OS disk. If additional local storage is wanted then SATA is usually the only possibility. " -- no, quite the opposite is true.

    Many modern motherboards have TWO M.2 slots and usually only one of them is NVMe+SATA, while the second one is NVMe only.
  • JKJK - Tuesday, March 19, 2019 - link

    What stops you from, say, using an ASUS PCIe card with space for 4x M.2 slots?
  • abufrejoval - Tuesday, March 19, 2019 - link

    The fact that most of these cards don't offer what you think: they may just offer one actual PCIe M.2 connector, while the others are nothing but holding brackets that connect SATA M.2 SSDs to SATA ports on the motherboard.

    What you probably expect is a PCIe switch, which allows you to multiplex those PCIe lanes and use the full bandwidth for one SSD at a time, even if the aggregate bandwidth cannot exceed that of a single device.

    What some might deliver is bifurcation, or even a four-way x4/x4/x4/x4 split if it were an x16 device, but that requires support from the motherboard hardware and BIOS, which, by the way, is also the case for the PCIe switch. That is the main reason why meaningful solutions either do not exist, are not bootable, or are outrageously expensive (Avago-Broadcom driving prices for switch chips through the roof).

    With PCIe lanes being as limited as they are on desktop CPUs, there has to be either static partitioning (bifurcation) or multiplexing. It can take the form of SATA, which multiplexes bandwidth on the PCH link, or it can be PCIe-based switching: connectors alone do not create bandwidth. Rough numbers are sketched below.
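
    To put rough numbers on that, here is a minimal Python sketch of the lane arithmetic (PCIe 3.0 figures: 8 GT/s with 128b/130b encoding, roughly 0.985 GB/s per lane per direction; the four-drive carrier-card scenario is an illustrative assumption):

        # Approximate PCIe 3.0 throughput per lane, per direction (8 GT/s, 128b/130b).
        LANE_GBPS = 0.985

        def link_gbps(lanes: int) -> float:
            """Ideal link bandwidth; real drives fall short of this."""
            return lanes * LANE_GBPS

        per_ssd = link_gbps(4)  # an M.2 NVMe SSD has an x4 link in any topology

        # Bifurcated x16 slot (x4/x4/x4/x4): four drives, statically partitioned lanes.
        aggregate_bifurcated = 4 * per_ssd

        # PCIe switch with an x4 uplink: drives share the uplink dynamically, so any
        # one drive can run at full x4 speed, but together they cannot exceed it.
        aggregate_switched = link_gbps(4)

        print(f"single x4 SSD:               {per_ssd:.1f} GB/s")
        print(f"4 SSDs, bifurcated x16 slot: {aggregate_bifurcated:.1f} GB/s aggregate")
        print(f"4 SSDs behind an x4 switch:  {aggregate_switched:.1f} GB/s aggregate")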
  • Dug - Wednesday, March 20, 2019 - link

    I would think storage should, or could, be put on a separate bus like RAM. You can add more RAM without slowing things down (to an extent), instead of relying on PCIe.
  • Billy Tallis - Monday, March 25, 2019 - link

    A multi-drop/shared parallel bus would not work well for storage. It only works for DRAM because of the short distances involved. Point-to-point serial links have proven superior everywhere else. A daisy-chained bus could provide most of the performance of PCIe but would add to the cost and pin count of every peripheral device, which is unreasonable when very few computers actually have more than about two PCIe peripherals.
  • Booyaah - Tuesday, March 19, 2019 - link

    I would need to see the pricing on this. Just from an IOPS perspective, the new 970 Evo+ is a good deal faster than this, so I don't really see it as newsworthy... maybe a head-to-head comparison would be better.
