
  • Xajel - Monday, April 3, 2017 - link

    Good, an open standard against Intel's proprietary Optane solutions... if the latter actually turns out to be a consumer-friendly solution. I don't see it with their current products, even peeking a little into the future; the advantage Optane brings is minimal for 90% of consumers compared to NVMe. So it will still need more time to prove itself...

    BTW, Intel announced Optane products based on 3D XPoint... but Micron hasn't released anything yet. Are they actually planning to?
  • Yojimbo - Monday, April 3, 2017 - link

    Intel and Micron plan to use 3D XPoint in two different forms. One is through the PCI Express bus as a mass storage device, which is what you see with the Optane drives. They use the standard NVMe interface, I believe. The other is through the memory bus as an NVDIMM, which is what this JEDEC standard addresses. 3D XPoint is proprietary to Intel and Micron no matter how it's attached to the system. But other non-volatile memory solutions can similarly use NVMe or these NVDIMM-Ps, or both. It works the exact same way with NAND flash: there are NAND flash NVMe SSDs and NAND flash NVDIMMs.
  • Xajel - Monday, April 3, 2017 - link

    I know all that, and that consumer Optane drives (bootable, with more capacity, like 512GB) will mostly come as 2.5" drives (U.2 interface) and PCIe cards, as the current density of 3D XPoint chips doesn't allow such capacity in M.2 modules.

    What I was getting at is the current implementation, where Optane is able to act like system RAM; right now only Intel's latest platform supports it. Having an industry standard from JEDEC means it's not limited to Intel's platform. AMD might also make use of such an approach with future platforms...

    The question is, will Intel's first implementation of the NVDIMM be compliant with the final JEDEC standard, or will they need to change things later?
  • ImSpartacus - Monday, April 3, 2017 - link

    Oh god, so you're telling me U.2 won't be dying any time soon?

    FML

    I'm tired of this confusingly unorganized storage market.
  • Xajel - Tuesday, April 4, 2017 - link

    Nope, Intel is a big promoter of U.2, and AFAIK they make over 90% of U.2 drives; in the market they may well be alone.

    U.2 is still better than SATA Express IMO, and it might be the best option for any 2.5" PCIe SSD.

    The only problem with U.2 is that its port on the motherboard is different from the one on the drive, and it's a little more expensive than SATA Express. But the bulkiness of SATA Express just isn't worth it.
  • andychow - Monday, April 10, 2017 - link

    SATA Express has been around forever and there has never been a drive for it. The connector is junk IMO. Thunderbolt isn't bulky and it's fast; they could make a fast, small connector to replace SATA-3.
  • Yojimbo - Monday, April 3, 2017 - link

    Was it ever in much doubt whether 3D XPoint would be available on non-Intel platforms? As long as the technology is successful, I would assume Micron would work towards putting 3D XPoint on OpenPOWER and any other platform that customers are interested in.
  • Sarah Terra - Friday, April 7, 2017 - link

    Optane is a fail, dead in the water; it's just TurboCache 2.0. I won't be investing a dime in it, ever. I'm very happy to wait until a real non-volatile high-speed storage medium appears to permanently replace RAM, SSDs, caches, and everything else in between.
  • Yojimbo - Monday, April 3, 2017 - link

    Oh, as for Micron, they plan to release 3D XPoint products under the QuantX brand name later in 2017. They seem to expect "break out" revenues from 3D XPoint products starting in 2019.
  • vFunct - Monday, April 3, 2017 - link

    I really don't see the DIMM standard being relevant for consumer applications in 5-10 years. What's going to happen is that CPUs are going to come with memory in HBM packages. Intel is working on partial silicon interposer substrates that don't have the size limits of current HBM packages, so we will eventually be buying Core i7 parts with 64GB of memory included, and perhaps 1 TB of Optane memory as well.

    The DIMM standard is only going to matter for servers that need 16-64TB of memory.
  • MikeMurphy - Monday, April 3, 2017 - link

    I suspect HBM and external DRAM will co-exist early on, with external memory eventually going the way of the dodo, especially in consumer devices.
  • Xajel - Tuesday, April 4, 2017 - link

    HBM is still more expensive than DDRx... and upgradability is a big issue if it's embedded. While Ultrabook makers might be happy, the majority of the rest of the PC market will not be.

    HBM on the CPU package has a lot of potential, but the next step might be just as VRAM for the iGPU on an APU.

    Maybe later we'll see CPUs with HBM as a high-speed cache (like an L4) and the rest as DDRx, similar to what AMD is doing with Vega's HBCC; it might be a good idea for CPUs as well.
  • helvete - Thursday, June 15, 2017 - link

    What a crazy world. GPU and memory within a CPU, and storage in DRAMs. The cases shall be much flatter!
  • SalemF - Monday, February 26, 2018 - link

    Ironically, I was asking for the same thing on the AMD sub and everyone disagreed with me:
    https://www.reddit.com/r/Amd/comments/8088i5/could...
    Being cheap all the time won't save you in the long run; AMD should have learned that lesson already.
  • grant3 - Monday, April 3, 2017 - link

    Optane is a manufacturing technology and DDR5 is an interface standard, so I don't know why you claim they are "against" each other.

    Intel could put regular transistors, Optane transistors, or even miniature elves on the memory chips, and as long as they meet spec they're still DDR5.
  • MrSpadge - Monday, April 3, 2017 - link

    He was talking about the NVDIMMs.
  • BrokenCrayons - Monday, April 3, 2017 - link

    That's some good news all around coming from JEDEC. iGPUs could use the extra bandwidth to system memory that DDR5 would offer, and the idea of NVDIMM-P allowing a DIMM or two to act as a complete storage package replacing both system memory and SSDs (am I understanding the intent correctly?) would reduce the number of system components and connectors.
  • Yojimbo - Monday, April 3, 2017 - link

    I think the idea with NVDIMMs isn't so much to replace system memory or storage, but rather to add a fourth tier: DRAM, NVRAM, SSD, HDD. I guess you could consider a fifth tier of HBM in a CPU-GPU system. The NVRAM will allow a large pool of reasonably fast memory for in-memory operations on data sets that can't fit into the DRAM pool at a reasonable cost, but it won't be able to replace mass storage in most cases. These are server concerns. I'm not sure anyone is planning much NVDIMM usage in PCs. Most PC systems don't really need that much RAM, and giving up some performance for several times the capacity at the same price doesn't seem like a good trade-off there. On top of that, 500GB of NVDIMMs would probably be prohibitively expensive for a consumer or office PC.
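
    To make the tiering concrete, here's a minimal sketch of that hierarchy. The capacities and latency comments are order-of-magnitude assumptions for illustration, not vendor specs:

    ```python
    # Rough sketch of the four tiers described above; the NVRAM tier exists
    # to catch working sets that outgrow affordable DRAM. All capacity
    # figures below are hypothetical per-server assumptions.
    def pick_tier(working_set_gb, dram_gb=512, nvram_gb=6144):
        """Pick the fastest tier a working set fits in."""
        if working_set_gb <= dram_gb:
            return "DRAM (DIMM)"        # ~100 ns, volatile, most expensive per GB
        if working_set_gb <= nvram_gb:
            return "NVRAM (NVDIMM-P)"   # slower than DRAM, persistent, cheaper per GB
        return "SSD/HDD (mass storage)" # microseconds to milliseconds away

    print(pick_tier(256))    # fits in DRAM
    print(pick_tier(2048))   # too big for DRAM at reasonable cost -> NVRAM
    print(pick_tier(50000))  # spills to mass storage
    ```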
  • BrokenCrayons - Monday, April 3, 2017 - link

    Ah thanks for clearing that up!
  • danjw - Wednesday, April 5, 2017 - link

    My understanding is that the idea is eventually to merge them, but that's not happening any time soon. The problem is that no existing technology I'm aware of beats both DRAM on speed and flash memory on capacity. Both of those things need to happen before this is viable. For now, NVRAM is mostly targeted as a cache for hard drives.
  • SharpEars - Monday, April 3, 2017 - link

    Hello latency my old friend...
  • ImSpartacus - Monday, April 3, 2017 - link

    Remember, the faster your clock speed, the more clock cycles you can give to latency without impacting the actual latency figures.
  • close - Tuesday, April 4, 2017 - link

    Latency increases when you count it in clock cycles, but in real time (nanoseconds) you get the same 8-9ns latency it has been for years now.
    If you increase the frequency, those same 8-9ns simply translate to more clock cycles: double the frequency and you double the number of cycles that elapse before those 9ns pass. Nothing to worry about.
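
    As a quick sanity check (the CL values below are illustrative assumptions, not specific product specs):

    ```python
    # CAS latency in cycles doubles each generation, but the real-time
    # latency stays roughly flat. CL is counted on the memory clock,
    # which is half the transfer rate for DDR memory.
    def cas_latency_ns(cl_cycles, data_rate_mt_s):
        memory_clock_mhz = data_rate_mt_s / 2
        return cl_cycles / memory_clock_mhz * 1000  # cycles/MHz -> us, x1000 -> ns

    for name, cl in [("DDR-400", 2.5), ("DDR2-800", 5), ("DDR3-1600", 10), ("DDR4-3200", 20)]:
        rate = int(name.split("-")[1])
        print(f"{name}: CL{cl} -> {cas_latency_ns(cl, rate):.2f} ns")
    # Every line prints 12.50 ns: double the frequency, double the cycles, same real latency.
    ```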
  • willis936 - Monday, April 3, 2017 - link

    AMD is crying tears of joy.
  • vladx - Monday, April 3, 2017 - link

    Except Optane will dominate just like G-Sync does.
  • valinor89 - Monday, April 3, 2017 - link

    On enthusiast builds, maybe, but like SSDs I doubt this will be mainstream any time soon, if it ever is.
    Consider that only just now are we seeing SSDs become a commodity in the mainstream space.
  • MajGenRelativity - Monday, April 3, 2017 - link

    I can see DDR5 becoming more important for low-cost devices, but I would like to see adoption of HBM. As a side note, since I haven't seen anything about it, I'll ask the question:

    What is the latency of HBM?
    I know GDDR5 has relatively high latency compared to DDR4, but I don't know where HBM falls.
  • MrSpadge - Monday, April 3, 2017 - link

    All current DRAM uses the same basic DRAM cells, which limit the absolute latency measured in ns; it has stayed about the same for more than 15 years. What's changing is how many clock cycles those ~10 ns correspond to, plus latency improvements in the controller.
  • MajGenRelativity - Monday, April 3, 2017 - link

    Oh ok
  • DanNeely - Monday, April 3, 2017 - link

    There've been some gains. DDR4 uses a 16x bus multiplier (vs. SDRAM) and 16x parallelism to feed it. Top-of-the-line DDR4-4266 runs the RAM itself at 266 MHz, twice as fast as SDRAM (which topped out at PC133). DDR3-3100 has the RAM itself running considerably faster at 387 MHz, and DDR2 apparently got at least as high as 1250, which puts the RAM at 312 MHz.

    Those numbers suggest there's still good potential for faster DDR4 RAM in the future, because the limiting factor appears to be the speed the data bus will run at, not the underlying RAM feeding it. And the existence of DDR5 itself suggests that memory engineers are confident they can push the data buses considerably faster than they currently run (because without doing so, performance would actually regress vs. DDR4).
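
    Running those numbers (using the per-generation multipliers quoted above; the data rates are the examples given, not a spec table):

    ```python
    # Internal DRAM array clock = transfer rate / the generation's bus multiplier.
    multipliers = {"SDRAM": 1, "DDR2": 4, "DDR3": 8, "DDR4": 16}

    for gen, transfer_rate in [("SDRAM", 133), ("DDR2", 1250), ("DDR3", 3100), ("DDR4", 4266)]:
        array_clock = transfer_rate / multipliers[gen]
        print(f"{gen}-{transfer_rate}: array clock ~{array_clock:.0f} MHz")
    # SDRAM-133 -> 133 MHz, DDR2-1250 -> ~312 MHz,
    # DDR3-3100 -> ~387 MHz, DDR4-4266 -> ~266 MHz
    ```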
  • close - Tuesday, April 4, 2017 - link

    After retrieving the first word you have pipelining to improve the effective latency. First-word latency is the same 9ns, but the total only rises slightly when retrieving 4 words (~10ns) or 8 words (~12ns), so the per-word cost drops significantly.
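
    A back-of-the-envelope version of those burst numbers (the 9 ns first-word latency and DDR4-3200 transfer rate are assumptions for illustration):

    ```python
    # After the first word arrives, each additional word costs only one
    # bus transfer, so long bursts amortize the initial latency.
    def burst_time_ns(words, first_word_ns=9.0, data_rate_mt_s=3200):
        transfer_ns = 1000 / data_rate_mt_s  # time per additional transfer
        return first_word_ns + (words - 1) * transfer_ns

    for n in (1, 4, 8):
        total = burst_time_ns(n)
        print(f"{n} words: ~{total:.1f} ns total, ~{total / n:.2f} ns per word")
    # 1 word ~9.0 ns, 4 words ~9.9 ns, 8 words ~11.2 ns -- close to the figures above.
    ```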
  • zodiacfml - Monday, April 3, 2017 - link

    Not holding my breath. HBM is coming to AMD's APUs, while Intel has announced it will use it for Kaby Lake. The impact of RAM speeds will be less crucial.
  • Lolimaster - Monday, April 3, 2017 - link

    Make new DIMMs in socket form, so you just place "packages" of RAM on top of the mobo instead of this slot mess. Even if you've got an SSD, you can crash the PC with a small knock to the side of a tower case.

    With so much density you won't need more than 2 "sockets" for consumers, except on really high-end mobos:

    2x16GB
    2x32GB

    REAL high-end mobos (no gamer sh*t gizmos):
    4x32GB

    AMD's AM4 demonstrated how much wasted space you get on regular mobos, with more and more things being integrated into the CPU/SoC.
  • close - Tuesday, April 4, 2017 - link

    I think you're mixing up the motherboard segment with the target audience. Gaming motherboards can very well be high end. The "REAL high-end mobos" you're referring to are workstation motherboards. They can be just as high end as a gaming motherboard; they're simply addressed to another type of workload. And they're usually E-ATX, which can accommodate *a lot* of slots.
    It has nothing to do with the pricing or quality of the motherboard. In the consumer space you'll be hard-pressed to find a high-end motherboard that doesn't hint at gaming; that doesn't make them any less high-end.
  • Lolimaster - Monday, April 3, 2017 - link

    We're still using the same 30-year-old motherboard design. Socketed RAM is much more reliable than slotted RAM.
  • close - Tuesday, April 4, 2017 - link

    When you say motherboard design, you mean the PCB with components, sockets, and slots mounted on it? The current popular format, ATX, was introduced by Intel ~20 years ago. And while the general layout is similar, components on mobos have come and gone; the ones that stayed remained the same for a reason.

    In case you're wondering, Intel also introduced the BTX format 10-12 years ago. The supported memory was still DIMM, but I guess you can already tell how popular it was. DIMM slots take up very little space and have no reliability issues.

    Also, lol @ socketed RAM... Everybody knows bolt-on and screw-in memory are better o_O.
  • CaedenV - Tuesday, April 4, 2017 - link

    @JEDEC, Intel, and others:
    As a consumer I don't really want or need faster RAM. It's time to remove the RAM/storage barrier and move to something like HBM. Even if it's technically slower, the ability to simply flag software and files on and off, rather than having to spool them from storage to RAM, would be amazing. Plus, the instant off/on possibilities for the system would be a huge game changer. RAM may still be a huge necessity in servers and heavily write-intensive markets, but for my cell phone and laptop, where I deal with Word docs, spreadsheets, web browsers, etc., there's little to no point in having an ultra-fast memory storage pool.
  • Pork@III - Tuesday, April 4, 2017 - link

    I'm just waiting for RAM that's "a thousand times faster than today's proposals". Oops, that type of RAM got delayed from 2017 to 2055.
