20 Comments

  • TristanSDX - Friday, August 14, 2020 - link

    "Such a large SRAM die would naturally also allow for significantly more SRAM that would allow for higher performance and lower power usage for a chip." - really not. Cache efficiency decrease with size, bigger size -> lower gain. SRAM is highly power consuming, currently at 30% for CPU, so significantly increasing it wont't be optimal for perf and power draw.
  • Andrei Frumusanu - Friday, August 14, 2020 - link

    You could run an SRAM die significantly wider and slower at low voltage versus current on-chip SRAM, and there would still be large power advantages versus going to off-chip DRAM.
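
    A hedged back-of-the-envelope sketch of that point, using the standard dynamic power relation (the 0.9 V and 0.6 V figures below are illustrative assumptions, not Samsung numbers):

        P_dyn ≈ a · C · V² · f    (a = activity factor, C = switched capacitance, V = supply voltage, f = clock)

    Double the interface width (roughly 2x the switched capacitance), halve the clock so bandwidth stays the same, and drop the supply from, say, 0.9 V to 0.6 V: dynamic power scales by about 2 × 0.5 × (0.6/0.9)² ≈ 0.44, i.e. less than half, before even counting the off-chip DRAM interface power you avoid.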
  • brucethemoose - Friday, August 14, 2020 - link

    Also, couldn't Samsung use a process optimized for low-power SRAM?
  • Spunjji - Friday, August 14, 2020 - link

    Couldn't you also conceivably use a more cost-effective and mature manufacturing process for the SRAM die? I'm thinking 14nm here.

    You could presumably either use it as a large L3, removing that area penalty from the compute die entirely, or as a supplemental L4. In either case, at that level the relative efficiency loss wouldn't be so great because you're already looking at a relatively high latency, and it's still far less than going out to DRAM.
  • brakdoo - Friday, August 14, 2020 - link

    How wide it is doesn't really matter much with SRAM. On-chip SRAM is already pretty wide.

    DRAM is always more power efficient for large arrays because only one word-line per bank is active at any time, plus leakage is lower, no matter what you do.

    SRAM has one big advantage: Random access with low latency for big arrays. That's impossible with regular DRAM.

    You can have DRAM with thousands of banks with smaller arrays (like an extreme version of RLDRAM), but that would put your power consumption through the roof and density wouldn't be nearly as high.

    This is never going to come to cellphones; this is for HPC (the rest is Samsung marketing).
  • saratoga4 - Friday, August 14, 2020 - link

    This is probably aimed at machine learning applications that are memory-bandwidth starved even when using HBM. Compared to HBM, this will be a lot more power efficient.
  • abufrejoval - Friday, August 14, 2020 - link

    Who said anything about cache?

    While that represents the majority of its historical use, you can also use SRAM as a scratch pad area for data that sees a lot of re-use, especially when you have more than just a couple of kilobytes.
  • brucethemoose - Friday, August 14, 2020 - link

    Though not as huge of a problem for mobile, the top die(s) would block some heat transfer out of the main logic die.

    I'm interested to see if they address this in the presentation, particularly when talking about HPC. There are some papers on pretty radical solutions, like pushing water through microchannels between chiplet layers, though you'd think they would've mentioned something like that.
  • brakdoo - Friday, August 14, 2020 - link

    Nothing new, just look at the HiSilicon Ascend 910 from last year...
  • FreckledTrout - Friday, August 14, 2020 - link

    I bet Microsoft is thinking "damn, why didn't we think of that name": X-Cube. All joking aside, if they work out the thermals, man, this could be some seriously promising tech.
  • nandnandnand - Friday, August 14, 2020 - link

    I'm waiting for those giant L4 caches to start appearing, like gigabytes of DRAM stacked on the I/O die with AMD Zen 4/5, or Intel's Foveros. Maybe if you buy a more expensive CPU, you'll get a larger L4 cache in addition to the higher core count.
  • dwillmore - Friday, August 14, 2020 - link

    "The company this is the industry’s first design such design with an advanced process node technology."

    I think you're missing a verb there in the beginning of that sentence.
  • dwillmore - Monday, August 17, 2020 - link

    So, articles here are just throwaway? Post and forget, huh? No reading the comments, not even for spam, let alone corrections.
  • zamroni - Friday, August 14, 2020 - link

    When will SRAM replace DRAM?
    Based on transistor count, SRAM's price should be about 6x that of DRAM, so it's quite economical for the server use case.
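
    A quick sanity check on that transistor-count arithmetic (a rough estimate, not pricing data):

        SRAM cell: 6 T    DRAM cell: 1 T + 1 C    →    naive ratio ≈ 6x

    In practice the per-bit gap is usually larger than 6x, since DRAM cells are physically smaller and built on density-optimized processes, so 6x is better read as an optimistic lower bound on the cost difference.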
  • ksec - Friday, August 14, 2020 - link

    What about heat and cooling?

    How is that going to work?
  • Kamen Rider Blade - Friday, August 14, 2020 - link

    I could be wrong about this, but the SRAM portions on a CPU don't get nearly as hot as the compute/switching portions. Ergo, there should be a bit more room for heat in the SRAM areas such as the L1/L2/L3 $.
  • Kamen Rider Blade - Friday, August 14, 2020 - link

    I think the more important use for this is for SMT (Simultaneous Multi-Threading).

    Imagine SMT4 or SMT8, with each thread having its own SRAM layer.

    Ergo, it would help prevent threads from peeking into other threads via physical isolation (for security), and allow fast swapping of threads because each thread would already be loaded for the registers to use.

    No need to flush out the L1D$ or L1I$, as you can literally swap to a different thread and do work on that thread/layer's L1D$ or L1I$. That saves you a few steps in the CPU pipeline.

    Or, if you have to flush, a different part of the CPU dedicated to loading the L1$ can operate while the main registers are busy, ergo minimizing idle time.
  • pixelstuff - Friday, August 14, 2020 - link

    E-Cube would have been better.
  • Diogene7 - Saturday, August 15, 2020 - link

    I am wondering if Samsung's X-Cube technology is, or could be, compatible with STT-MRAM (or even better, SOT-MRAM) to get several megabytes or maybe even gigabytes of fast non-volatile memory close to the CPU: it would be quite disruptive compared to what exists as of 2020, and would open up plenty of new opportunities (especially for creating low-power, always-on devices).
  • silencer12 - Sunday, August 16, 2020 - link

    So, HBM was designed and went up against Micron's Hybrid Memory Cube. Is Samsung doing an improved version because Micron's technology failed to get adopted?

    That's what it appears to be.
