5 Comments

  • Diogene7 - Tuesday, November 10, 2020 - link

    @Ian: Thanks Ian for this great article.

    I am wondering if you also have any news about the standardization of NVDIMM-P, which relates to Persistent Memory on the memory interface? We are nearing the end of 2020 and it should have been standardized by now, but unfortunately that does not seem to be the case...

    I am also quite curious about the Gen-Z protocol in the context of Persistent Memory.

    It seems to me that NVDIMM-P, Gen-Z and CXL 2.0 are pieces of an ecosystem that could (very) slowly enable a much broader usage of agnostic Persistent Memory, don't you think?

    Once those standards are finally completed, I would think that somewhere between 2022 and 2025 we could see all the big memory manufacturers launch their own versions of Persistent Memory, which I believe could bring plenty of innovation (really looking forward to seeing prices come down for consumer products, especially mobile ones).
  • Tomatotech - Tuesday, November 10, 2020 - link

    Excellent overview, thanks.

    For the moment this seems to be for big-iron use only, but it appears to be on the way to trickling down to desktop / laptop / personal devices.

    Another step on the road to creating Iain Banks' Minds.
  • Dolda2000 - Tuesday, November 10, 2020 - link

    Why is a separate specification required for persistent memory? Why are traditional loads/stores and memory barriers not enough to handle all aspects of it?
  • name99 - Tuesday, November 10, 2020 - link

    The important thing with persistent memory is the ordering of writes to the persistent memory.
    Simple stores are not enough because they write to cache lines that are flushed in random order.
    Barriers are not enough because those only enforce ordering as far as the CPU (and other CPUs) are concerned; they have nothing to do with forcing cache lines out to storage, or with ensuring that the order in which those cache lines are forced out does not change.
    Hence you need additional primitives.

    Like all such primitives, you can do these the dumb way (make them very explicit in exactly what they say), meaning it's very difficult for the system to optimize them, or more abstract, making optimization easier. The primary difference is -- do you explicitly state "I need this line to be flushed, then that line, then that line", or do you say "flush the following set of lines, in any order you like up to a re-ordering barrier"? This is all still new enough that it's not clear (at least to me) which of team x86, team ARM, and team IBM has done the best job of defining these primitives at the optimal level of abstraction.
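
    As a rough illustration of the explicit style on x86 (a minimal sketch only, assuming a CPU with CLWB support and memory already mapped as persistent, e.g. via a DAX-mapped file; the helper and variable names are made up, not from the article):

        /* Persist a range of already-written bytes: write back the covering
         * cache lines, then fence so the write-backs are ordered before
         * anything that follows becomes durable. */
        #include <immintrin.h>
        #include <stddef.h>
        #include <stdint.h>

        static void persist(const void *addr, size_t len)
        {
            uintptr_t p   = (uintptr_t)addr & ~(uintptr_t)63;  /* align down to cache line */
            uintptr_t end = (uintptr_t)addr + len;
            for (; p < end; p += 64)
                _mm_clwb((void *)p);  /* CLWB: write the line back, may keep it cached */
            _mm_sfence();             /* order the write-backs before later stores */
        }

        /* Hypothetical usage: make the data durable before the flag that commits it. */
        void commit(uint64_t *data, uint64_t value, uint64_t *valid)
        {
            *data = value;
            persist(data, sizeof *data);
            *valid = 1;
            persist(valid, sizeof *valid);
        }

    The more abstract alternative is to hand the hardware a whole set of lines between fences and let it reorder the write-backs freely, which is exactly the optimization freedom described above.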
  • guyr - Thursday, March 18, 2021 - link

    " I was told that customers that need the highest latency, who can ensure safety in the system, will likely have it disabled, whereas those who need it to conform to customer requests, or for server-to-server communications, are likely to use it."

    I'm guessing you meant "lowest latency" here. No one wants high latency, and since encryption will increase latency, any application that can assure safety will disable it.
