18 Comments
meacupla - Wednesday, March 27, 2024 - link
GDDR6 was supposed to have up to 24Gbit (3GB) and 32Gbit (4GB) densities, but those never showed up. And it looks like GDDR7 24Gbit (3GB) density won't show up until 2025, or probably 1 to 1.5 years from now.
So don't expect any interesting desktop GPUs to arrive before March 2025.
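(For reference, the Gbit-to-GB conversion is just a divide-by-eight; a quick sketch using the densities mentioned above:)

```python
# Convert per-chip density in gigabits to gigabytes (8 bits per byte).
def gbit_to_gbyte(density_gbit: int) -> float:
    return density_gbit / 8

for density in (16, 24, 32):  # shipping and hoped-for GDDR6/GDDR7 densities
    print(f"{density} Gbit die -> {gbit_to_gbyte(density):.0f} GB per chip")
# 16 Gbit -> 2 GB, 24 Gbit -> 3 GB, 32 Gbit -> 4 GB
```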
Dante Verizon - Wednesday, March 27, 2024 - link
5060 9GB successfully confirmed.
nandnandnand - Wednesday, March 27, 2024 - link
That would be really funny if the 5090 launched first with 24 GB using 2 GB chips, and Nvidia waited so long to release low/mid chips that we get the 96-bit 9 GB wonder.
Hrel - Tuesday, May 14, 2024 - link
If they put a 96-bit memory interface on an RTX XX60-class card they deserve to burn. That's absurd. At this point the XX50-class card should be able to play everything at 4K. It's a dedicated GPU for crying out loud! People should be able to game on it! That means it's gonna need AT LEAST 512GB/s of memory bandwidth and at least an 8GB frame buffer.
The 40XX series was a slap in the face. I bought one because my GPU was 10 years old, and the AMD one I got has driver issues where AMD themselves essentially said "we have no idea, we give up." Random audio issues, never the same, never consistent, totally irritating. Sound just pops in and out, crackles, and the volume dips and jumps for no reason. But only sometimes, unless you forget about it, then it's happening every 30 seconds. Everyone at AMD, MSI, and myself included gave up. They said it was fixed in the 7XXX series; the issue just exists on the 6650XT when you use an audio receiver between your PC and TV. I don't believe it's fixed on the 7 series; if it were, they'd be able to define what was going on. I digress.
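Back-of-the-envelope on that 512GB/s figure: peak bandwidth is just bus width times per-pin data rate, so here's what it would take to get there (the data rates below are illustrative examples, not announced parts):

```python
# Peak GDDR bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps.
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

for bus, rate in [(128, 18.0), (128, 32.0), (192, 21.3), (256, 16.0)]:
    print(f"{bus}-bit @ {rate} Gbps -> {bandwidth_gbs(bus, rate):.0f} GB/s")
# 128-bit @ 18 Gbps   -> 288 GB/s (typical GDDR6, well short of 512)
# 128-bit @ 32 Gbps   -> 512 GB/s (would need top-end GDDR7 speeds)
# 192-bit @ 21.3 Gbps -> 511 GB/s
# 256-bit @ 16 Gbps   -> 512 GB/s
```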
meacupla - Thursday, March 28, 2024 - link
I wouldn't be surprised to see that. At the very least, I expect the 5050Ti mobile or 5060 mobile to have that configuration.
Kevin G - Thursday, March 28, 2024 - link
This is how we'll likely get 12 GB capacity cards using a 128-bit wide bus in 2025 for the low end.
The new midrange will likely be a mix of 12 GB and 18 GB cards via a 192-bit wide bus.
The 256-bit wide GPUs will likely stick to just 16 GB, and 384-bit wide units will continue with 24 GB for consumers. The workstation variants of these cards, though, will use the higher-capacity dies for 24 GB and 36 GB respectively. The absolute high end will leverage memory on both sides of the board to jump to a 72 GB capacity.
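All of those figures fall out of the same arithmetic: each GDDR chip sits on a 32-bit channel, so capacity is (bus width / 32) × GB per chip, doubled in clamshell mode with chips on both sides of the board. A quick sketch (chip sizes as above, not confirmed configurations):

```python
# VRAM capacity = (bus width / 32 bits per chip) * GB per chip, x2 for clamshell.
def vram_gb(bus_bits: int, gb_per_chip: int, clamshell: bool = False) -> int:
    chips = bus_bits // 32
    return chips * gb_per_chip * (2 if clamshell else 1)

print(vram_gb(128, 3))                   # 12 GB low end with 3 GB (24 Gbit) chips
print(vram_gb(192, 2), vram_gb(192, 3))  # 12 / 18 GB midrange
print(vram_gb(256, 2), vram_gb(384, 2))  # 16 / 24 GB consumer
print(vram_gb(384, 3))                   # 36 GB workstation
print(vram_gb(384, 3, clamshell=True))   # 72 GB with memory on both sides
```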
Hrel - Tuesday, May 14, 2024 - link
I hope you're right, those are reasonable numbers. The 4XXX series was a slap in the face. 8GB of VRAM on a 128-bit bus on the RTX 4060 Ti?! Seriously?! I mean, if they were charging $200 that'd be fine, but they aren't, it's $400!!!!!!
The RTX 5050 better have AT LEAST 128-bit at 12GB.
NextGen_Gamer - Wednesday, March 27, 2024 - link
Just because the initial density of GDDR7 is the same doesn't mean there won't be any interesting GPU launches. 16GB and above is still perfectly fine for high-end GPUs. And rumors point to the big boy "Blackwell" from NVIDIA as moving to a 512-bit bus, which would mean 32GB of VRAM using these announced GDDR7 chips from Samsung. AMD could possibly do the same, and in fact would have an easier time due to their chiplet design.
meacupla - Wednesday, March 27, 2024 - link
It's not going to be interesting when the RTX 5090 costs around $2000 - $2500.
The next step down, the GB203 die, only has a 256-bit memory bus, and 16GB for a 5080 sounds boring.
16GB on a 5070 wouldn't be bad, except those get the GB204 die with a 192-bit memory bus. 12GB for a 70-class card? Yeah, no thanks.
Threska - Thursday, March 28, 2024 - link
Ah well $2k should buy some nice power cables to keep the "interest" to a minimum.nandnandnand - Wednesday, March 27, 2024 - link
I'll believe 512-bit when I see it. It's not as if it won't have enough bandwidth using 24 GB of GDDR7 chips.
Kevin G - Thursday, March 28, 2024 - link
Both a faster and a wider memory bus in one generation would be unusual. Bandwidth gains would be around 78%, which would certainly benefit some workloads. Pulling that off in a chip and board design is not going to be easy.
I would expect the consumer versions of Blackwell to continue the trend of focusing on caching to compensate for memory bandwidth. GDDR7 is a necessary increase in speed but still falls short of the demands modern GPUs can leverage.
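That ~78% works out if you compare an assumed 384-bit / 21 Gbps GDDR6X setup (roughly a 4090) against a hypothetical 512-bit / 28 Gbps GDDR7 one; the data rates here are my assumptions, not announced specs:

```python
# Peak bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps), in GB/s.
def bandwidth_gbs(bus_bits: int, gbps: float) -> float:
    return bus_bits / 8 * gbps

current = bandwidth_gbs(384, 21.0)  # assumed GDDR6X config, ~1008 GB/s
rumored = bandwidth_gbs(512, 28.0)  # hypothetical 512-bit GDDR7, ~1792 GB/s
print(f"{rumored / current - 1:.0%} gain")  # ~78%
```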
PeachNCream - Thursday, March 28, 2024 - link
GDDR improvements are a good-ish sort of thing, but more importantly, system RAM matters a fair bit more, since graphics for the vast majority of people reside on the CPU package, where GDDR is meaningless. DDR4 and 5 have certainly helped, but latency remains high and RAM quantity remains low, with some laptops for sale today shipping with a mere 4GB of single-channel soldered memory. Vanishingly few people spend anything on a dGPU, so not many outside of GPU compute business operations will realize benefits from yet another GDDR generation.
nandnandnand - Thursday, March 28, 2024 - link
Windows and ChromeOS (Plus) have basically flushed out 4 GB in favor of 8 GB, although you can still find older systems being sold with it. The "AI PC" marketing craze could make 16 GB more common, with the purpose of that spec being to run the smaller 7/13B parameter LLMs and other AI stuff locally: https://www.tomshardware.com/software/windows/micr...
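Rough math on why 16 GB is the target (the bytes-per-parameter figures are the usual rule of thumb, not a measurement of any particular model):

```python
# Approximate weight footprint of an LLM: parameters * bytes per parameter.
def model_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * bytes_per_param  # 1e9 params * bytes ~= GB

for params in (7, 13):
    for label, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
        print(f"{params}B @ {label}: ~{model_gb(params, bpp):.1f} GB of weights")
# 7B:  ~14 / 7 / 3.5 GB; 13B: ~26 / 13 / 6.5 GB
# -> quantized 7B/13B models fit alongside the OS in 16 GB; fp16 13B does not.
```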
I don't know if we will ever see significant DRAM latency improvements. We can hope for big L3/L4 caches to come to cheaper systems (on-package DRAM like in Meteor/Lunar Lake does not count as L4; I'm thinking of Intel's no-show "Adamantine").
Dante Verizon - Thursday, March 28, 2024 - link
I don't see it being cheap.
nandnandnand - Friday, March 29, 2024 - link
2.5D/3D packaging will become more common, to the point that it is ubiquitous by design even in cheap products 10+ years from now.
X3D isn't doubling the cost of AMD chips; it's adding like $50 at most. We'll see how Adamantine does when Intel deigns to release it. Nobody is expecting life-changing amounts like 8 GB at launch, maybe closer to 512 MB instead.
Diogene7 - Friday, March 29, 2024 - link
I think Anandtech published a news post in 2018 announcing that Samsung was starting mass manufacturing of 16Gbit GDDR6 chips.
It is really a bit sad that by now we still don't have at least 32Gbit or even 64Gbit memory dies.
Actually, it should even be 32Gbit or 64Gbit Non-Volatile Memory (NVM) dies, like SOT-MRAM or VCMA-MRAM, as it would unlock plenty of new opportunities (especially in IoT and mobile devices).
The US CHIPS Act should be allocating a lot of funding to scale up disruptive spintronic MRAM manufacturing, especially from the US companies Avalanche Technology and Everspin, as it would allow the US to take leadership in the next generation of beyond-CMOS memory and computing technology.
Ryan Smith - Friday, March 29, 2024 - link
"It is really a bit sad that by now, we still don’t have at least 32Gbits and even 64Gbits memory die."There are a number of reasons for this. But at the end of the day DRAM capacity scaling has tapered off. Logic is the only thing really scaling well with EUV and newer nodes. The trench capacitors and other analog bits of DRAM aren't getting smaller, and that keeps DRAM fabs from making significantly denser dies.
To be sure, they're still making some progress. Take a look at DDR5 die density, for example. But most capacity increases at the high-end are coming from die stacking, either in the form of HBM or TSV-stacked dies for DIMMs.
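A rough sketch of how stacking, rather than per-die density, gets you capacity (stack heights and die sizes below are illustrative, not specific products):

```python
# Package capacity from stacking: per-die density (Gbit) * dies per stack / 8 bits per byte.
def stack_gb(die_gbit: int, dies_per_stack: int) -> float:
    return die_gbit * dies_per_stack / 8

print(stack_gb(16, 1))  # 2 GB  - a single planar die on a DIMM or GDDR package
print(stack_gb(16, 8))  # 16 GB - an 8-high TSV stack of the same die
print(stack_gb(24, 8))  # 24 GB - roughly an 8-high HBM-style stack of 24 Gbit dies
```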