19 Comments
Lodix - Wednesday, July 24, 2024 - link
When is the standard LPDDR6 launching?
deil - Wednesday, July 24, 2024 - link
Most likely with Zen 6 at earliest, so mid-late 2025. Guessing. More data welcome.
dotjaz - Wednesday, July 24, 2024 - link
Likely 2025? Did you pull that date from your behind? DDR5 was finalised in 2020; AMD released AM5 more than two years later. DDR4 was released in 2014, and AM4 followed in 2016. DDR3 was released in 2007 - can you guess when AM3 was released? AM2 was the only exception so far, and even that was 2006, after the 2003 DDR2 release.
There's no way AMD would release anything with DDR6 in 2025. 0%, no dice. Can't be clearer.
AMD isn’t even likely to release AM6 in 2026. There's a very very VERY small chance.
It's all but confirmed AMD will release "AM6" with "Zen 7" in late 2027 or 2028.
Rudde - Thursday, July 25, 2024 - link
LPDDR6 is not DDR6. Also, AMD is very unlikely to be the first to implement LPDDR6.
boozed - Thursday, August 1, 2024 - link
Do you always talk to people like this, or only after you've misunderstood the question?
NextGen_Gamer - Wednesday, July 24, 2024 - link
I'm not sure why the people below associate LPDDR6 with AMD's future Zen 6 processors. LPDDR memory is a soldered-down, mobile standard that comes to desktops really only in the form of mini PCs (NUCs, as another term). And although, yes, Socket AM5 with its DDR5 memory is here for a while, AMD can obviously create new mobile silicon with an updated LPDDR6 controller at any time. I don't know what is on the roadmap, but a refreshed Strix Point product in 2025, still using Zen 5 and RDNA 3.5 but with an LPDDR6 controller, could totally happen. Something like that is not dependent on AMD also releasing Socket AM6/DDR6 on the desktop side.
nandnandnand - Wednesday, July 24, 2024 - link
DDR6 spec should be finalized by Q1/Q2 2025. LPDDR6 should be done months sooner than that, probably within 2024:
https://www.driverscloud.com/en/news/2322/new-gene...
I would guess 2025 for actual production/launch. You'll definitely hear about it before it happens, like the usual "Samsung makes 24/32 GB LPDDR6 package for smartphones" press release.
Lodix - Friday, July 26, 2024 - link
Thanks
Rοb - Monday, July 29, 2024 - link
Before the standard is finalized and it comes to desktop/server, LPDDR6 is expected to come to mobile. Maybe supported by the Snapdragon 8 Gen 4; a weaker rumour is the Dimensity 9400.
One source: https://www-ajunews-com.translate.goog/view/202403...
https://www.gizchina.com/2024/03/11/lpddr6-memory-...
erinadreno - Wednesday, July 24, 2024 - link
64 cannot be divided by 24. So if the CPU needs to fetch a word, it has to use 2 channels anyway? What would be the rationale behind that?
dotjaz - Wednesday, July 24, 2024 - link
It's cute you think DDR5 can actually just fetch 32 bits to transfer 32 bits of data per clock per channel. Also, why would you assume per-channel width would go backwards from 32-bit to 24-bit?
Literally for one channel you already get 96 bits per cycle. On any sensible system you would get 192/384 bits per cycle.
That's not to mention there's absolutely no way you are waiting 40 ns - that's 200 CPU cycles, or over 100 DDR cycles - just to transfer one cycle of data, especially when the next 7 cycles of data have zero additional latency thanks to 16n prefetch.
That means for a 96-bit module (2x 48-bit), you would typically get 2x48x16 = 2x768 bits, or 2x96 bytes, per "burst" over 8 cycles or 16 transfers.
Why would a "word" - that's 16 bits BTW - or even a quadword (64-bit) be of any concern?
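The burst arithmetic above is easy to check with a short script. This is a sketch of the numbers quoted in the comment - a hypothetical 96-bit module built from two independent 48-bit sub-channels, with the 16n prefetch depth assumed rather than confirmed by any spec:

```python
# Burst size for a hypothetical 96-bit module: two independent
# 48-bit sub-channels, each delivering a 16n prefetch per burst.

def burst_bits(channel_width_bits, prefetch):
    """Bits delivered per burst on one sub-channel."""
    return channel_width_bits * prefetch

per_channel_bits = burst_bits(48, 16)      # 48 * 16 = 768 bits
per_channel_bytes = per_channel_bits // 8  # 96 bytes
module_bytes = 2 * per_channel_bytes       # 192 bytes for the whole module

print(per_channel_bits, per_channel_bytes, module_bytes)  # 768 96 192
```

The 16 transfers of a 16n burst take 8 clock cycles at double data rate, matching the "over 8 cycles or 16 transfers" figure above.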
erinadreno - Wednesday, July 24, 2024 - link
I don't think the burst has much impact here. 192-bit used to be hexa-channel (although I don't see any), meaning the DDR controller can get six 32-bit words per cycle from 6 different addresses. Now it's only quad-channel, which means you can only get four 48-bit words from 4 addresses. It's okay if the data is contiguous, but that seems to limit the flexibility of the DRAM a lot, if I understand correctly.
And DDR5 can fetch 32 bits per cycle, just not from the address issued in the current cycle. But that's another story, not related to the channel organization.
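The trade-off described above - fewer, wider channels means fewer independent addresses in flight, but bigger bursts - can be put into numbers. A toy sketch; the 16n prefetch depth is an assumption carried over from DDR5, and LPDDR6's actual figure may differ:

```python
# For a fixed 192-bit bus, compare two hypothetical channel layouts:
# how many independent addresses can be serviced at once, and the
# minimum fetch granularity per channel with a 16n burst.

def layout(bus_bits, channel_bits, prefetch=16):
    channels = bus_bits // channel_bits         # independent addresses in flight
    burst_bytes = channel_bits * prefetch // 8  # bytes per burst, per channel
    return channels, burst_bytes

print(layout(192, 32))  # (6, 64)  six channels, 64-byte bursts
print(layout(192, 48))  # (4, 96)  four channels, 96-byte bursts
```

Note the granularity consequence: a 64-byte burst matches a typical x86 cache line exactly, while a 96-byte burst over-fetches it by 50%.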
dotjaz - Wednesday, July 24, 2024 - link
Obviously I'm only assuming 16n for DDR6; with 32n it would be even more absurd to consider the impact of fetching just one quadword.
lightningz71 - Friday, July 26, 2024 - link
While your individual program MIGHT issue a fetch for a single byte, word, or even DWORD, once that filters down to the memory controller, through multiple levels of cache and memory managers, I can assure you that whatever request gets sent to the actual memory modules is NOT, in almost any case, a request for a single byte or word of information. Instead, the memory manager and cache controllers are passing block requests to the memory controller, which fulfills them as quickly as possible. Widening the bus to the modules by 50% will just allow those same block requests to resolve in about 33% less time.
In the bigger picture, the memory arrangement of a modern computer is MASSIVELY abstracted away from what an individual program can see.
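The 50%-wider-bus / 33%-less-time claim above is just arithmetic on the transfer portion of a block request. A quick sketch - the 8000 MT/s rate is an arbitrary placeholder, only the ratio matters:

```python
# Time to move one 64-byte cache line over channels of different
# widths at the same transfer rate.

def transfer_ns(block_bytes, bus_bits, mtps):
    transfers = block_bytes * 8 / bus_bits  # bus transfers needed
    return transfers / (mtps * 1e6) * 1e9   # nanoseconds

t32 = transfer_ns(64, 32, 8000)  # 32-bit channel
t48 = transfer_ns(64, 48, 8000)  # 48-bit channel, 50% wider

print(round(t32, 3), round(t48, 3))  # 2.0 1.333
print(round(1 - t48 / t32, 3))       # 0.333 -> ~33% less transfer time
```

This only covers the data-burst phase; row activation and CAS latency are unaffected by bus width, so the end-to-end saving on a real access is smaller.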
James5mith - Wednesday, July 24, 2024 - link
Dumb question, but can LPCAMM support multiple channels?
I.e., current-gen Ryzen has quad-channel memory controllers. Are the mobile systems with LPCAMM that are launching full quad-channel bandwidth equivalent?
I honestly haven't seen much discussion of how this is addressed in the general tech media's coverage of LPCAMM so far.
Ryan Smith - Thursday, July 25, 2024 - link
"Dumb question, but can LPCAMM support multiple channels?"
LPDDR5 LPCAMM2 features a 128-bit memory bus. So yes, it technically already incorporates multiple "channels", though that concept is slowly losing meaning.
As such, one module is sufficient for current and next-generation AMD and Intel CPUs.
back2future - Thursday, July 25, 2024 - link
"though that concept is slowly losing meaning"
Does this require emphasizing bandwidth/throughput on the I/O lines towards a memory socket (instead of channel counts, versus multiplexed rank access), including the memory controller and power supply, or even a dedicated memory (AI) optimizer?
back2future - Thursday, July 25, 2024 - link
Samsung's LPDDR5 is now at 10.7 GT/s, which is ~10.7 Gbps per pin, because there's no protocol overhead like with PCIe(?). For a 64-bit interface that's ~85.6 GB/s, and possibly double that for a 4x32-bit-wide data bus. (LPDDR4 is 64-bit with a 6-bit SDR command/address bus; LPDDR5 is 32-bit with a 7-bit DDR command/address bus per channel?)
Diogene7 - Friday, July 26, 2024 - link
Is there (still) any ongoing work to enable emerging Non-Volatile Memory (NVM) / Persistent Memory (PM), like SOT-MRAM, to be used as persistent DRAM?
I understand that making an 8 GB or larger SOT-MRAM module to be used as persistent DRAM would likely be crazy expensive (likely even more than HBM), but PM would enable dramatically lower latencies - likely first in some niche server/military use cases, but if volume can be scaled up to significantly lower the cost, then in consumer use cases (e.g. smartphones and IoT devices with non-volatile main memory, ...).
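back2future's bandwidth arithmetic a few comments up is straightforward to reproduce: DDR signalling carries no PCIe-style encoding overhead, so GT/s maps 1:1 onto Gbit/s per data pin. A quick check:

```python
# Peak bandwidth = per-pin rate * bus width / 8.
# Figures taken from the comment: LPDDR5X at 10.7 GT/s.

def peak_gb_per_s(gtps, bus_bits):
    """Peak bandwidth in GB/s (decimal) for a given bus width."""
    return gtps * bus_bits / 8

print(round(peak_gb_per_s(10.7, 64), 1))   # 85.6  GB/s for a 64-bit interface
print(round(peak_gb_per_s(10.7, 128), 1))  # 171.2 GB/s if the bus is doubled
```

These are theoretical peaks; sustained throughput is lower once refresh, bank conflicts, and read/write turnarounds are accounted for.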