Iketh - Friday, May 31, 2024 - link
AMD 12-core CCDs please
nandnandnand - Saturday, June 1, 2024 - link
Not happening. We even have a leak saying 8/16/32-core CCDs for Zen 6, which suggests 8-core CCDs will continue to be the base unit for consumer products. That or fake or outdated info is swirling around.
Dolda2000 - Sunday, June 2, 2024 - link
That being said, all rumors point to there being Strix products with 12 cores (4 Z5 + 8 Z5c).
nandnandnand - Sunday, June 2, 2024 - link
Strix Point will be dual-CCX. Split L3 cache too, so 24 MB is actually 16+8. When someone talks about a 12-core CCD for desktops, I think they want it to be a single, unified CCX, with 32 MB or more of L3 cache. That said, I'm sure Strix Point will perform well, even as a desktop chip.
We need to see a 16-core CCD in desktop CPUs eventually. I expected dual-CCX like Zen 4c but the same rumors mentioned 16 Zen 5c cores in a single CCX, which would be wild:
https://wccftech.com/amd-zen-6-three-ccd-configura...
ET - Saturday, June 1, 2024 - link
It's funny that the only company whose focus here is the PC is Qualcomm.
elmagio - Saturday, June 1, 2024 - link
Is it? Nvidia is definitely focused on the data center, but AMD and Intel are both expected to showcase products for PCs, with Zen 5 and Lunar Lake respectively.
ET - Saturday, June 1, 2024 - link
AMD and Intel might announce PC products, but their focus, according to the titles, is AI, and with data centre being part of the synopsis, they're definitely not focused on PC. Qualcomm is also mentioning AI, but it's purely focusing on PC and end users.
FreckledTrout - Monday, June 3, 2024 - link
I disagree. If you look at the products the
kn00tcn - Saturday, June 1, 2024 - link
what is there to focus on? they dont make the OS, they dont need a recompiled OS and apps, they dont make the chassis, they have decades of established partners and users, etc
it's compute-x not c(onsumer)es, in taip-ai
ballsystemlord - Saturday, June 1, 2024 - link
They do not have enough talks about AI in this conference. (ha ha)
Dante Verizon - Saturday, June 1, 2024 - link
I was more interested in learning about RDNA4 and Strix Point
meacupla - Saturday, June 1, 2024 - link
It looks like RDNA4 will be a flop, with very little improvement over the RX7000 series. Which is why AMD is already working on getting RDNA5 out the door asap.
Dante Verizon - Sunday, June 2, 2024 - link
We'll see soon enough. But it shouldn't be possible to make major improvements; we're still talking about 4nm, which is only 6% denser.
nandnandnand - Sunday, June 2, 2024 - link
RDNA4 needn't be a flop, if it delivers improved price/performance and power efficiency over RDNA2/3 GPUs.
It *could* also surpass RDNA3 in one area: ray-tracing.
I predict that the top RDNA4 model with 16 GB VRAM will beat the 7800 XT in every way at a $500-600 MSRP, and move 7700/7800 XT prices down. I don't know what effect the smaller RDNA4 die will have.
meacupla - Sunday, June 2, 2024 - link
If RDNA4 somehow doesn't do better than RDNA3 in RT, that would be hilarious. But what it really needs to accomplish is beating the RTX 30 series at ray tracing. Then we can finally start talking about price.
ballsystemlord - Sunday, June 2, 2024 - link
At that point we'd be talking about how only the top 1% can afford it.
Dolda2000 - Sunday, June 2, 2024 - link
Just because they aren't pushing new high-end variants in no way means it has to be a flop. If they can figure out a way to get manufacturing costs down in a way that entails meaningfully cheaper cards, that's *a lot* more exciting than getting new ultra-expensive products when GPU performance is quite arguably good enough anyway.
meacupla - Sunday, June 2, 2024 - link
I don't care about the high-end variants. RDNA4 is looking more like an RX7000 series, but with improved ray-tracing. Considering AMD's track record with developing new features... I doubt its RT performance will be all that impressive.
I am more excited to see Intel's Battlemage.
Dolda2000 - Sunday, June 2, 2024 - link
Speaking just for myself, I don't care about its "RT performance" at all, I just care about vector FLOPS per dollar.
If I may say so, the only reason we now need to have "RT" hardware is because nVidia needed some marketing gimmick for Turing. I put "RT" in quotes because their BVH acceleration is just one specific form of RT. Prior to Turing, there was a bit of an explosion of interesting RT-based projects just using the general-purpose vector hardware, many of them trying new and interesting, non-triangle-based geometry models and whatnot; but since then everyone is just focused on traditional triangle-mesh geometries with specific hardware-accelerated effects, and game engines are more bloated and inelegant than ever because they basically need to support two separate rendering pipelines (because no hardware is even close to being able to run full RT-based scenes in real time). All the interesting projects have died off, casualties of nVidia forcing an uninteresting direction on the whole industry, one with very marginal results, visually speaking.
Yes, I am a bit salty about it.
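(As a minimal, purely illustrative sketch of the kind of ray tracing that needs no dedicated BVH hardware: a batch of rays intersected against an analytic sphere using nothing but data-parallel arithmetic. NumPy stands in for a GPU compute kernel here; the scene, function name, and numbers are invented for illustration, not taken from any real project.)

```python
import numpy as np

def ray_sphere_hits(origins, dirs, center, radius):
    """Nearest positive hit distance per ray (np.inf = miss).

    origins, dirs: (N, 3) arrays, dirs assumed normalized.
    Pure data-parallel arithmetic -- no BVH units, no triangle meshes --
    i.e. the kind of workload plain GPU vector/compute hardware can run.
    """
    oc = origins - center                          # o - c, per ray
    b = np.einsum("ij,ij->i", dirs, oc)            # d . (o - c)
    c = np.einsum("ij,ij->i", oc, oc) - radius**2
    disc = b * b - c                               # quadratic discriminant (a = 1)
    hit = disc >= 0.0
    t = np.where(hit, -b - np.sqrt(np.where(hit, disc, 0.0)), np.inf)
    return np.where(t > 0.0, t, np.inf)            # hits behind the ray count as misses

# Toy scene: unit sphere at the origin, four rays cast from z = -3.
origins = np.tile([0.0, 0.0, -3.0], (4, 1))
dirs = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0], [0.5, 0.0, 1.0], [1.0, 0.0, 1.0]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(ray_sphere_hits(origins, dirs, np.array([0.0, 0.0, 0.0]), 1.0))
# -> roughly [2.0, 2.03, inf, inf]: two hits, two misses
```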
kn00tcn - Sunday, June 2, 2024 - link
why would FULL rt be necessary? rendering has been full of shortcuts and perceptual tricks for decades, now suddenly any rt must be all or nothing?? i find the results of accurate interactive gi much greater than marginal and i dont even have rt hw yet to experience it in person
not sure what other 'interesting' directions are possible or performant on older hw, and an interesting note from digitalfoundry was that xbox gpu specs down to the amount of compute units were decided a couple or several years before launch (that means rt hw was designed at ms's request or amd's ability before turing even launched)
Dolda2000 - Sunday, June 2, 2024 - link
>why would FULL rt be necessary?
My point about that was just that, as long as full RT isn't the reality, game engines effectively need to fully implement two separate rendering pipelines: one for the rasterized rendering and one for the RT effects, and it's just bloated and inelegant.
ballsystemlord - Saturday, June 1, 2024 - link
I was as well.
ballsystemlord - Sunday, June 2, 2024 - link
That is to say, I'm interested in RDNA and the new gen of Zen cores. (My previous comment got a bit buried so you can't tell what I'm saying.)
nandnandnand - Sunday, June 2, 2024 - link
Strix Point = 12 cores in 2 CCXs (4x Zen 5 cores with 16 MB L3, and 8x Zen 5c cores with 8 MB L3), 16 CUs RDNA3+, 45+ TOPS in a new XDNA2 NPU. Supporting faster LPDDR5X.
I want to know how much better RDNA3+ is. I'd also like to know about XDNA2's die size (and XDNA1) and its power efficiency when compared to the iGPU. Finally, I want to see how the new core configuration is handled by games and applications, since dual-CCX is a departure from AMD's recent APUs. The heterogeneous cache sizes are new to APUs as well.
Please keep those in mind, Ryan Smith. ;-)
Terry_Craig - Sunday, June 2, 2024 - link
AMD doesn't usually talk about its new gaming GPUs at Computex, but my memory isn't fresh tbh
Dante Verizon - Sunday, June 2, 2024 - link
AMD is on June 3rd, isn't it?
Ryan Smith - Sunday, June 2, 2024 - link
June 3rd local time. That's 9:30pm on the 2nd if you're a US east-coaster (EDT). Taipei is 12 hours ahead.Dante Verizon - Sunday, June 2, 2024 - link
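(For readers in other time zones, a quick sanity check of that conversion with Python's standard zoneinfo module; the 9:30 am Taipei start is inferred from the reply above, not taken from an official schedule.)

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# Assumed keynote start: 9:30 am, June 3, Taipei time (implied by the
# "9:30pm on the 2nd" EDT figure above; not an official published time).
keynote = datetime(2024, 6, 3, 9, 30, tzinfo=ZoneInfo("Asia/Taipei"))

print(keynote.astimezone(ZoneInfo("America/New_York")))  # 2024-06-02 21:30:00-04:00 (EDT)
print(keynote.astimezone(ZoneInfo("UTC")))               # 2024-06-03 01:30:00+00:00
```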
Got it. Thank you
kn00tcn - Sunday, June 2, 2024 - link
ah i see the clarifier now, liveblogging 'most' keynotes, and technically the nv one is not official or at the same venue, so no nv liveblog
it's quite amusing that jensen and lisa are actual cousins, and it seems some asus laptops with unannounced (but already leaked) ryzen ai model numbers showed up in the nv keynote
Xajel - Sunday, June 2, 2024 - link
It's strange that you only included PT and ET times, which are both US time zones, and almost every US citizen can convert between the two anyway... You must include UTC in everything, because the whole world knows how to convert UTC to their own time zone, especially since the US observes summer time while most of the world doesn't and finds it confusing as hell.