lemurbutton - Monday, August 29, 2022 - link
And finally, we can see how much more efficient M2 is over Zen when they're both 5nm chips.
No more excuses for AMD fanboys.
lemurbutton - Monday, August 29, 2022 - link
Look for 3-5x the power required for Zen 4 to achieve the same level of performance as the M2.
cosmotic - Monday, August 29, 2022 - link
Where are you looking?
Hulk - Monday, August 29, 2022 - link
What does one do with an "M2"?
Hulk - Monday, August 29, 2022 - link
Is it like the M5 that killed all those innocent people?
emn13 - Monday, August 29, 2022 - link
Die area is also a factor in this tradeoff.
gruffi - Thursday, September 1, 2022 - link
You forgot the . before 3 and 5. ^^ M2 isn't a big improvement over M1. And even the 6nm Zen 3-based Rembrandt trades blows with M1 under full load. If Zen 4 improves power efficiency by 75% at 65W TDP, it will be extremely impressive. M2 won't beat that, while still being a lot more expensive.
Tams80 - Friday, September 2, 2022 - link
Replying to yourself. Stay classy.
To anyone not familiar, lemurbutton is a well-known troll around here.
flyingpants265 - Monday, August 29, 2022 - link
That's not an excuse at all. Obviously you would compare two different architectures on the same process node. You wouldn't compare Zen on 14nm vs. 5nm, so...
M2 may well be better, but I'm not sure why it matters, since most people are not going to be using Apple regardless.
DannyH246 - Monday, August 29, 2022 - link
What's your point exactly? Firstly, they're two different architectures, and we all know that ARM was designed with low power in mind. Secondly, it's like comparing apples to oranges, because if you WANT an M2 then you have to have macOS, and let's face it, nobody wants that. So it's a useless comparison. If you want x86, your choice is AMD or Intel; Apple doesn't even feature.
shabby - Monday, August 29, 2022 - link
It'll prove everything!!!!11
Otritus - Tuesday, August 30, 2022 - link
ARM is an ISA. It can be used in either low- or high-power designs. Its designs used to be exclusively low power, and now new higher-power microarchitectures are being designed. x86 designs also used to be low power.
x86 legacy bloat should have less than a 10% impact on power. Demanding that x86 processors output high levels of performance with excellent levels of efficiency should be the norm. It's not a useless comparison, because we know what the technology could be, and Intel/AMD aren't going to focus on efficiency unless we demand it.
I want an x86 M2, and will continue to demand that Intel or AMD engineers match the competition (Apple). Also, people are actively looking into Apple computers due to the massive efficiency advantage, so the comparison isn't apples to oranges or useless.
Anoldnewb2 - Monday, August 29, 2022 - link
A Turdle is more efficient than a rabbit.
Dante Verizon - Monday, August 29, 2022 - link
If it were a 5nm mobile APU, your comparison would make sense.
sirmo - Monday, August 29, 2022 - link
You still kind of have to wait for the Phoenix Point APU for that comparison.
vanilla_gorilla - Monday, August 29, 2022 - link
Let me spoil the anticipation: the M2 gets more FLOPS/watt, but in absolute performance AMD crushes the M2 because, shocker, performance scaling isn't linear. Still.
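(A rough sketch of why scaling isn't linear, with illustrative numbers rather than measured ones: dynamic power scales roughly as

\[ P \approx C V^2 f, \]

and higher clocks require higher voltage. Taking a core from 3.5 GHz at ~1.0 V to 5.0 GHz at ~1.3 V costs about \((1.3/1.0)^2 \times (5.0/3.5) \approx 2.4\times\) the power for a ~1.4x performance gain, which is how a chip can lose badly on FLOPS/watt while still winning on absolute throughput.)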
Hifihedgehog - Tuesday, August 30, 2022 - link
Epically dumb comment, like I was at some college football game listening to a mindless drunk jock screaming in the bleachers. Do you realize that you are comparing with the M2, which has significantly lower single-core performance (20% less) and far lower multi-core performance (30-70% less) than a Zen 4-based desktop processor? Are you on a crack marathon, or did you forget to take your meds before your bedtime story? You do realize, of course, that Apple uses an obscene amount of die space per core, since their engineers are instructed to go wide and be pigs about die space to get crazy efficiency, and that is what will always prevent them from scaling up production and design into mainstream desktop computing, given the costs. Charging $4000 for a Godzilla-sized 20-core M1 Ultra that falls short in single-core and is maybe on par in multi-core performance with the totally tinier 16-core $699 Ryzen 9 7950X is exactly why they cannot compete in desktop territory.
Hrel - Monday, August 29, 2022 - link
They integrated a GPU, so they had extra die space and decided it wasn't worthwhile to use it on actual CPU performance. We the customers are ALWAYS going to have an actual GPU. Make one CPU for business use and exempt the rest from the integrated GPU.
Just goes to show real advancements in CPU performance are long dead. Just like Intel needing native OS integration to see any gains at all.
Has there been any news on AMD getting native OS scheduler integration? Seems illegal for Intel to be the only ones with that advantage. Anybody know?
Ryan Smith - Monday, August 29, 2022 - link
Note that the GPU is on the IOD, not the core die(s). So it's not a (direct) case of one or the other.
DanNeely - Monday, August 29, 2022 - link
The first IGPs showed up on northbridge chips when Intel found itself needing more physical chip area for IO pins than the internal circuitry needed to implement everything else the chip had to do.
AMD's IO die is an on-package equivalent in many ways; I'm wondering if a similar dynamic might be at play here.
nandnandnand - Monday, August 29, 2022 - link
The iGPU in Raphael is expected to be 2 CUs, very tiny on 6nm.
Threska - Monday, August 29, 2022 - link
What "real" advancements that don't violate physics?meacupla - Monday, August 29, 2022 - link
The GPU is not on the same die as the CPU.
If you are not happy with Zen 4, then just wait for Zen 4 X3D.
Leaked AMD internal benchmarks show up to a 45% performance gain over Zen 3.
iranterres - Monday, August 29, 2022 - link
Sources, please.
meacupla - Monday, August 29, 2022 - link
Moore's Law Is Dead
Dolda2000 - Monday, August 29, 2022 - link
He said 30%, not 45%.
Otritus - Tuesday, August 30, 2022 - link
He said 30% over Zen 4 processors without V-cache. That works out to 45% over Zen 3. I'm curious to see if V-cache gen 2 is truly that good, or if it's a case of Zen 4 A0 silicon floundering.
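(Checking that arithmetic: the +45% figure follows if you assume plain Zen 4 lands roughly 11-12% ahead of Zen 3 in the workloads being quoted, an assumption rather than a published number, since \(1.30 \times 1.115 \approx 1.45\).)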
flyingpants265 - Monday, August 29, 2022 - link
We've wanted proper APUs for a very long time. A PS5-class APU running a full x86 OS would be like a dream come true. The 5600G is slow.
DanNeely - Monday, August 29, 2022 - link
That class of graphics performance would require using GDDR RAM instead of conventional DDR, and would probably require either the CPU to be soldered to the mobo or the GDDR to be on the same package as the CPU.
In either case you'd be looking at something different enough from any previous APUs to effectively be its own product category. And with PC graphics being as much of a moving target as they are, you'd probably end up having to replace it at the same rate you do a graphics card anyway to keep up.
nandnandnand - Monday, August 29, 2022 - link
Graphics aren't that much of a moving target. They are held back by the consoles periodically for several years straight. AMD APUs should beat at least the Xbox Series S soon, good enough for 1080p, which is what most people are using anyway.
Fallen Kell - Monday, August 29, 2022 - link
You don't need soldered/ball connections for the CPU in order to use GDDR memory. AMD has had CPU sockets with high enough pin counts to handle the additional pins needed for the difference between DDR and GDDR (AMD socket AM5 is a 1718-pin socket, but they have made sockets with 4094 pins (socket sTRX4)).
meacupla - Monday, August 29, 2022 - link
I also appreciate faster AMD desktop APUs, but I think you will be waiting a while longer for a desktop APU that runs as fast as a PS5.
DDR5 on modular sticks is just not as fast as GDDR6 soldered as close as possible to the GPU.
There are also some special tricks AMD uses in the PS5's chip to minimize unnecessary transferring of data loaded into RAM.
Whereas desktop APUs have none of that.
I think you would have to wait for a single-board PC using a UCIe APU to even come close.
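(The gap is easy to put rough numbers on, using peak theoretical bandwidth only:

\[ \text{dual-channel DDR5-5200: } 5200\,\text{MT/s} \times 8\,\text{B} \times 2 \approx 83\,\text{GB/s} \quad\text{vs.}\quad \text{PS5 GDDR6: } 14\,\text{Gbps} \times 256/8 = 448\,\text{GB/s}, \]

a bit over a 5x difference before any real-world losses.)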
erotomania - Monday, August 29, 2022 - link
The 5600G's CPU part is very, very fast, and a fantastic overclocker. The GPU part is very, very slow. 720p Low is still painful.
nandnandnand - Monday, August 29, 2022 - link
Rembrandt, Phoenix Point, and Strix Point are coming, to mobile at least, some to desktop.
A desktop Rembrandt with the GPU clocked at 2.6 GHz might be around an Xbox Series S.
Then you will see no less than a +50% gain from Phoenix Point, and some kind of gain from Strix Point.
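(That estimate holds up on paper for peak FP32 throughput, assuming Rembrandt's 12-CU RDNA 2 iGPU and the usual 64 shaders x 2 FLOPs per clock per CU; real game performance also hinges on memory bandwidth:

\[ 12 \times 64 \times 2 \times 2.6\,\text{GHz} \approx 4.0\,\text{TFLOPS} \quad\text{vs.}\quad 20 \times 64 \times 2 \times 1.565\,\text{GHz} \approx 4.0\,\text{TFLOPS for the Series S.} \])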
name99 - Monday, August 29, 2022 - link
big.LITTLE, but for GPUs. :-/
gruffi - Thursday, September 1, 2022 - link
No, thanks. Big.LITTLE is garbage in all devices larger than a tablet.
Glaurung - Monday, August 29, 2022 - link
Oh please. Integrated graphics are extremely useful, even for gamers. Scenario: your computer no longer outputs any video. Is your graphics card fried, or is it the motherboard? If the board comes with integrated graphics, you can find out which. You can also continue to use the computer for day-to-day tasks while you're waiting for the replacement GPU to arrive.
nandnandnand - Monday, August 29, 2022 - link
Your comment is cringe. The iGPU added to Raphael is absolutely minuscule in size, makes the CPUs viable for office and HTPC use with no GPU, and allows them to be tested without a GPU. Not only is the iGPU on the I/O die, but the I/O die is not even on the same 5nm node as the core dies. It's budget 6nm silicon.
Silver5urfer - Monday, August 29, 2022 - link
AMD still has to prove out its BIOS and firmware (that will take time, but it would be reassuring if AMD mentioned anything). Plus, I'm curious to know about their IOD improvements, especially the DRAM / IF behavior and stability. Then the major one is how the CPU -> PCH link is being handled; I read on TPU that the PCH link speed is not PCIe 5.0 but rather 4.0, and that's not good.
As for the main Zen 4 core, well, we know most of it anyway; the only thing left is how it stacks up against Intel and older Zen.
mdriftmeyer - Monday, August 29, 2022 - link
In a year, Zen 5 will come with Xilinx IP for FPGAs and more. Good time to be alive. Zen 4 is a phenomenal evolutionary new line. Six years ago the industry was dead.nandnandnand - Monday, August 29, 2022 - link
Are you talking about on-package FPGAs for Epyc? I don't know if it's relevant for consumer desktop. But there should be machine learning accelerators on the way for Zen 5 desktop users.
https://hothardware.com/news/amd-patent-hybrid-cpu...
https://www.theregister.com/2022/05/04/amd_xilinx_...
mdriftmeyer - Monday, August 29, 2022 - link
Anyone working in Pro Tools or other DAWs for music, film post-production, etc., will welcome FPGAs, neural engines, and the like. Cores just don't cut it by comparison. The reason Pro Tools works so well on their audio interfaces [Carbon, etc.] is that they have a large set of FPGAs and DSPs on board that offload all the heavy lifting required for low latency to be maintained with audio plugins.
imaskar - Tuesday, August 30, 2022 - link
An integrated GPU means finally gaming on Linux! By passing your video card through to a VM, lol.