20 Comments
kpb321 - Thursday, December 8, 2022 - link
I assume the 128b is not because it's using a wider memory bus, but because the memory controller is using that for addressing both ranks at one time. So it's not two independent 64-bit memory channels being interleaved, but one 128-bit memory address being read over a 64-bit bus, so Rank 0 and Rank 1 are always reading from the same addresses. Two independent buses would be more flexible but probably harder to implement.

I wonder what this does to max memory capacity. Presumably having a buffer chip like LRDIMMs would help support more DIMMs per channel, and therefore memory capacity, but running the memory bus at such high speed might counteract that.
mode_13h - Monday, December 26, 2022 - link
> I assume the 128b is not because it's using a wider memory bus

Where did you get 128 bits? Neither the article nor the original press release said anything about 128 bits. Then, you just dig yourself a deeper hole talking about 64-bit memory channels and "128 bit memory address"... WTF?
Gavin caused a lot of confusion with the statement: "there doesn't seem to be nearly enough pins to support a physically wider memory bus." *Obviously*, they don't mean 128 byte-wide bus, as that would mean having 1024 pins per channel! Duh.
DDR5 uses a minimum burst size of 64 bytes. That's implemented as a sequence of 16 cycles of 32 bits per cycle, because each DDR5 DIMM has 2x 32-bit channels. So, what they're saying is that by multiplexing two ranks, MCR increases that burst size to 128 bytes.
That could pose issues, from the CPU's point of view, because CPU cache lines are 64 bytes, hence the natural transaction size would be only half of what MCR would support. In the worst case, this could mean half of each burst is wasted, if the transfers to a given channel are all non-consecutive. That's a big deal.
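The mismatch can be sanity-checked with some quick arithmetic. The DDR5 figures are as described above; the worst-case efficiency is an assumption about a purely non-consecutive access pattern:

```python
# DDR5 numbers from the discussion above; MCR's doubling is the claim at issue.
subchannel_bits = 32    # each DDR5 DIMM exposes 2x 32-bit subchannels
burst_length = 16       # BL16: 16 beats per burst

ddr5_burst_bytes = subchannel_bits // 8 * burst_length   # 4 bytes/beat * 16 beats = 64
mcr_burst_bytes = 2 * ddr5_burst_bytes                   # two ranks multiplexed = 128

cache_line_bytes = 64   # typical x86 cache line

# Worst case: every access hits a different, non-consecutive cache line, so
# only one 64-byte line out of each 128-byte burst is actually useful.
worst_case_efficiency = cache_line_bytes / mcr_burst_bytes

print(ddr5_burst_bytes, mcr_burst_bytes, worst_case_efficiency)  # 64 128 0.5
```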
> I wonder what this does to max memory capacity.
Depends on how many ranks you can fit on one of these DIMMs. If you can make a quad-ranked DIMM, with each pair being multiplexed to appear as a single rank, then you'd double the max capacity per DIMM (assuming DDR5 doesn't already support quad-ranked RDIMMs - does anyone know?). Otherwise, no change.
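The capacity arithmetic in that quad-rank scenario is simple. The per-rank figure here is purely illustrative (e.g., a rank built from 16 Gb dies), not something stated in the article:

```python
gb_per_rank = 32  # illustrative assumption, e.g. a rank of 16 Gb DRAM dies

dual_rank_rdimm_gb = 2 * gb_per_rank   # today's typical dual-rank RDIMM
quad_rank_mcr_gb = 4 * gb_per_rank     # quad-rank, pairs muxed to look dual-rank

print(quad_rank_mcr_gb / dual_rank_rdimm_gb)  # 2.0 -- double the capacity per DIMM
```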
MobiusPizza - Thursday, February 16, 2023 - link
The article did say 128 bytes; even the author wasn't sure how:

"Unfortunately, the details beyond this are slim and unclear – in particular, SK hynix claims that MCR "allows transmission of 128 bytes of data to CPU at once", but looking at the supplied DIMM photo, there doesn't seem to be nearly enough pins to support a physically wider memory bus."
Kevin G - Thursday, December 8, 2022 - link
I wonder what the latency impact is. I would presume something small, since this piggybacks off of LRDIMM topology. Similarly, I wonder what this does to the various subtimings on the module.

Presuming these work on Genoa chips (a big if, given the parties involved), that'd permit 768 GByte/s of memory bandwidth per socket. That's rather impressive.
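That figure checks out as a back-of-the-envelope calculation, assuming a 12-channel Genoa-style socket with MCR DIMMs running at an effective DDR5-8000 (both assumptions, not confirmed specs):

```python
channels = 12                  # Genoa-class socket (assumption)
transfers_per_second = 8000e6  # 8000 MT/s effective MCR data rate (assumption)
bytes_per_transfer = 8         # 64-bit data bus per channel

bandwidth_gbs = channels * transfers_per_second * bytes_per_transfer / 1e9
print(bandwidth_gbs)  # 768.0
```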
onewingedangel - Thursday, December 8, 2022 - link
I read it as 128-bit wide DDR5-4000 to an on-module buffer that then transfers to the CPU over a 64-bit DDR5-8000 bus - thus lowering required pin count compared to standard DDR5.

Jp7188 - Wednesday, January 4, 2023 - link
That's the way I read it too. It sounds like DDR5 buses are moving to speeds the chips can't supply, so they are using two slower chips to feed a buffer that then transfers over a traditional (higher speed) DDR5 bus. If true, all Intel (and AMD) need to do is support DDR5-8000.

meacupla - Thursday, December 8, 2022 - link
So does the mobo need to support MCR DIMMs, or will it work in any mobo that supports DDR5?

Because if MCR only works in MCR mobos, then I don't see why they wouldn't just use a wider memory bus from the server level CPU. Like give it 32 channels.
Kevin G - Sunday, December 11, 2022 - link
Motherboard complexity goes up as more channels are added. The 12 channels Genoa has is at the upper limit of what is feasible without going to fully buffered memory/CXL.

mode_13h - Monday, December 26, 2022 - link
> The 12 channels Genoa hasSince each DDR5 DIMM has 2x 32-bit subchannels, Genoa should technically have a 24-channel memory subsystem. Just sayin'.
The Von Matrices - Sunday, December 11, 2022 - link
This doesn't add any more traces on the motherboard, and the memory slots would be the same. However, like LRDIMMs and ECC, the memory controller on the CPU has to support it, which I doubt AMD and especially Intel will do on anything but their most expensive server CPUs.

mode_13h - Monday, December 26, 2022 - link
> I doubt AMD and especially Intel will do on anything but their most expensive server CPUs.

Competition is a wonderful thing. Once the Epyc or Xeon memory controller can support it, enabling it on any given model of server CPU should be nothing more than a configuration option. However, we're talking about CPUs that already support RDIMMs.
Oxford Guy - Friday, January 6, 2023 - link
Too bad people have been trained to believe that what barely qualifies as a duopoly is adequate competition.

Foeketijn - Monday, January 2, 2023 - link
It doesn't cost any more traces, but those traces need to be able to handle a far higher frequency.

mode_13h - Monday, December 26, 2022 - link
> so does the mobo need to support MCR DIMMs

I think the issue is mainly that the CPU needs to support it. Its memory controller needs to be adapted to deal with a 128-byte minimum burst size.
deil - Friday, December 9, 2022 - link
And that's why we make these upgrades even when it seems bottlenecked. DDR5 isn't even established on the market, and someone is already ripping a chunk out of its limits. Cache everything!

It seems like it might double both size and speed, which is never a bad thing, no matter what you do.

When and how much is now the main question. I hope it's 2023, not 2032...
mode_13h - Monday, December 26, 2022 - link
It's probably even too late for Turin (Zen 5-based Epyc) and definitely too late for Emerald Rapids. Maybe Granite Rapids could support it.

In other words, 2024 at the soonest. 2025, more likely.
Foeketijn - Monday, January 2, 2023 - link
Or never, when it cuts into the margins.

mode_13h - Monday, January 2, 2023 - link
Well, look at the alternative: AMD's Genoa has reached 12 channels per CPU! It's not as if that 50% increase from the previous generation doesn't add costs.

On consumer boards, we're already seeing people reach DDR5-7000 speeds and possibly higher. So, my expectation is that servers will indeed increase DDR5 speed, as a less expensive alternative to adding ever more channels. The only question is whether they'll use exotic techniques like MCR to do it.
Harry_Wild - Monday, December 12, 2022 - link
Just what I've been waiting for in my next desktop Raptor Lake & Ryzen 7000 PCs. Need tip-top memory performance for email, internet browsing, and watching streaming videos/channels!

mode_13h - Monday, December 26, 2022 - link
Allegedly, you can already buy DDR5-7800 for desktops. You don't need this.