Really cool testing. If I had a 7950X, I wouldn't be running it higher than 105W. Then again, I have a 5600 and I am running that at 45W. Feels like the gains these days aren't generally worth the power (for my use cases, and especially for AMD). I am surprised by how well Intel's performance scales with power. I would be curious how 12th gen looks; since this isn't their first gen with high power consumption, they may have done a lot better designing for power scaling.
Yeah. I think AMD's sweet spot for Zen 4 should have remained 105W like last gen. They could have introduced 170W for a bit of extra performance... but otherwise made a statement that going further was deep into diminishing returns.
7900x and 7950x should probably be run at 105W by default... you get 6.4% less performance at nearly 50% lower power draw (vs 170W), and generally, lower temps.
They needed the 170W/230W targets only to compete with Intel. Otherwise it doesn't make sense. I tried all the eco modes and I'm perfectly fine with 65W. 105W is identical to my former 5900X's wattage - in reality it peaked around 145W. They needed to max out the stock settings, but there is no need for any ordinary user to run that.
I'm really disappointed with Intel, AMD, NVidia, and AMD (again); all their current-generation products are dragging high-end overclocking power levels, and the resulting terrible power efficiency, into stock performance packages.
I think you need to include at least one power level between 125W and 230W. That's quite a spread with no data points in it; including, say, 170W and 200W might have changed your conclusions.
Also, outside of benchmarks, many real-world applications don't saturate the individual cores enough to demand a lot of power. In these cases performance is more about how high the lightly loaded cores will scale, rather than power limiting performance. This is exactly what you saw with the gaming tests, and it's why real-world workloads should have been tested before drawing conclusions.
But then again I'm almost afraid to criticize because I appreciate any new content on Anandtech these days!
"I appreciate any new content on Anandtech these days!"—Amen to that.
While I agree that the sampling points chosen make it more difficult to visualize the curve, it clearly starts to bend significantly between 65 and 105 W and has flattened out substantially by the time you hit 125 W. So personally, I'd be much more interested in seeing performance at 80 and 95 W instead, as the best power / performance trade-off is probably somewhere around there. Which, not coincidentally, is also where desktop CPU TDPs traditionally were, before Intel ran into process issues.
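On the point about where the curve bends: one quick way to estimate the sweet spot from sampled data is to compute the marginal performance gained per extra watt between adjacent sample points. A minimal sketch, where the (watts, score) pairs are illustrative placeholders, not AnandTech's measurements:

```python
# Estimate the marginal performance gain per extra watt between
# sampled power limits. The (watts, score) pairs are illustrative
# placeholders, not measured data from the article.
points = [(65, 24000), (105, 36000), (125, 38000), (230, 39000)]

def marginal_gains(points):
    """Return ((low W, high W), extra score per extra watt) per segment."""
    out = []
    for (w0, s0), (w1, s1) in zip(points, points[1:]):
        out.append(((w0, w1), (s1 - s0) / (w1 - w0)))
    return out

for (w0, w1), slope in marginal_gains(points):
    print(f"{w0}-{w1} W: {slope:.1f} points per extra watt")
```

Wherever the slope collapses (in this made-up data, past 125W) is where extra watts stop buying meaningful performance; sampling at 80W and 95W would pin down exactly where that happens.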
@gavbon, peak power is really not relevant to either TDP or power efficiency. And given their myriad differences, comparing temperatures as reported by these two CPUs seems unlikely to yield any meaningful takeaways. I'm not sure how Andrei used to do it, but running SPEC CPU 2017 (a fixed workload) while measuring the energy usage in Joules is pretty much the holy grail as far as efficiency is concerned.
This is essential coverage though, great to see Anandtech adopting this. I would love to see detailed power curves become standard for desktop chips going forward. If I had a 7950X to play with, it would be amazing to have both curves on the same chart. The inflexion points in the bend are sensitive, so I might need to make some inferences, but I could use the Anandtech data as a starting point.
It would be interesting to plot benchmark performance vs. /actual/ power consumption rather than /reported/ consumption, as the Y-cruncher results on the last page show that the Ryzen chips are significantly underreporting power consumption vs. Intel (with the 105W capped Ryzen pulling the same power as the 125W-capped Intel). A scatterplot of normalised-per-benchmark scoring vs. measured power should result in a trend per CPU illustrating power scaling behaviour for different workloads.
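One way to build the suggested scatterplot is to first normalize each benchmark to its own best result, so different workloads can share one axis against measured power. A rough sketch with invented numbers (including a hypothetical lower-is-better timing benchmark):

```python
# Normalize each benchmark's scores to its best result so different
# workloads can share one performance-vs-measured-power scatterplot.
# All scores and watt keys below are illustrative, not measured data.
results = {
    "CB23 MT":   {65: 24000, 105: 36000, 230: 39000},
    "yCruncher": {65: 80.0,  105: 55.0,  230: 50.0},  # seconds, lower is better
}
lower_is_better = {"yCruncher"}

def normalize(results, lower_is_better):
    norm = {}
    for bench, scores in results.items():
        if bench in lower_is_better:
            best = min(scores.values())
            norm[bench] = {w: best / s for w, s in scores.items()}
        else:
            best = max(scores.values())
            norm[bench] = {w: s / best for w, s in scores.items()}
    return norm

norm = normalize(results, lower_is_better)
# Each value is now a 0-1 fraction of the best result, ready to plot
# against *measured* package power rather than the configured limit.
```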
The yCruncher chart shows peak power, which tells us essentially nothing about power consumption. If we knew the average power and duration of the test, we could infer energy usage. Peak power only gives us the magnitude of the largest recorded current spike during the test.
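The distinction is easy to show numerically: average power times duration gives energy, and a chip with a scary peak can still finish a fixed job on fewer joules. All figures below are invented for illustration:

```python
# Peak power is just the largest recorded spike; energy is what you
# pay for. Two hypothetical runs of the same fixed workload:
runs = {
    "CPU A": {"avg_watts": 210, "seconds": 100, "peak_watts": 330},
    "CPU B": {"avg_watts": 140, "seconds": 130, "peak_watts": 145},
}

def energy_joules(run):
    # Energy (J) = average power (W) x duration (s)
    return run["avg_watts"] * run["seconds"]

for name, run in runs.items():
    print(name, energy_joules(run), "J, peak", run["peak_watts"], "W")
# In this made-up example CPU B peaks far lower *and* uses less total
# energy, which a peak-power chart alone could never tell you.
```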
Came here to say the same thing. Can you potentially use a Power-Meter at the wall-plug to obtain real-world (W) consumption. Run your benchmark repeatedly back-to-back in a 30min timed period and plot the average Watts used. You can also measure them at Idle, which we can subtract from the Test Figures, to account for the Energy used from the other components (Mobo, RAM, SSD, PSU). You can even unplug the dGPU for the compute tests to get even more precise readings.
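The protocol described above reduces to simple arithmetic once the meter readings are in hand; a small sketch, with the wall and idle watt figures as stand-in assumptions:

```python
# Wall-meter protocol sketch: average wall watts over a timed loop,
# minus the idle baseline, approximates the load power attributable
# to the CPU (plus VRM/PSU losses). Figures are illustrative.
def load_power_watts(avg_wall_watts, idle_wall_watts):
    return avg_wall_watts - idle_wall_watts

def energy_wh(avg_wall_watts, idle_wall_watts, minutes):
    # Watt-hours attributable to the load over the timed window.
    return load_power_watts(avg_wall_watts, idle_wall_watts) * minutes / 60

# e.g. a 30 min run averaging 260 W at the wall against a 55 W idle:
print(load_power_watts(260, 55), "W load,", energy_wh(260, 55, 30), "Wh")
```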
Still no update, come on Gav, make it happen. Performance should scale fairly linearly with power, so I'm guessing either the AMD chips are using a lot more power than advertised, or Intel's using a lot less. Has to be worth testing! Hell, is 65W even the same on the AMD X chips as on the non-X equivalents?
Wall-power consumption normalized testing would indeed be much appreciated, because that is what ultimately counts (for me and probably many other efficiency-minded people).
I would also be very interested to see a series like this for GPUs across different architectures and chip sizes, again perhaps with power consumption at the wall in mind. That would be a great series.
I would like to see this too. As for Y-cruncher, there may be a bug where PPT isn't accurately enforced when using AVX-512. Or PPT is just broken in some cases. Either way, it's something worth looking into.
Impressive performance from the 7950x at laptop power levels. Going to be exciting to see how AMD laptops do with very little power and extended battery life.
I am salty about AMD's 230W stock limit now. A 30%+ increase in power for an (at best) 5% increase in performance? That's just wasteful. If this were an older processor, I'd call that a mediocre overclock, but AMD is *shipping* them that way :/.
If they didn’t eke out every percentage point of performance sites would post context-less charts showing Intel winning all benchmarks (versus just a majority) and sales would suffer. I wish that wasn’t the state of affairs but sadly it is.
You say that Intel loses more performance when scaling down - but on the other hand Intel also seems to adhere to lower limits more than AMD, so what's really going on?
If the Intel at 65W setting consumes much less than the AMD at the same 65W setting then the whole comparison needs to be changed to use actual power draw, not the configured one!
On our Cinebench R15 test, which includes both single-core and multi-core workflows, the Ryzen 9 far outpaces the Core i9 when it's using all of its cores and threads.
Why in the world are you using blue for the AMD graphs and orange for the Intel? Also, this test would be so much better if you used CPU socket power draw (PPT for AMD, not sure of the Intel equivalent) to make the measurements.
I did use PPT for AMD, but why they are going above that threshold is something I need to test on more boards, which I need some time for.
And the color of the graphs wasn't a concern to me, but if it appeases you/others, I'm more than happy to change them over when I've spent some much-needed time with my children (I'm already working over the weekend as it is).
We all can wait for that, family first. Thank you so much for this. Pretty shocking to me: AMD appears to win at all power-matched settings. My only concern is that AMD is actually using more power than advertised, so understanding that, or putting in a correction ratio for a true watt-to-watt comparison, would help Intel - but it looks like they are still going to come up short. Thank you much for this original content!
Not that history always repeats itself, but this current GHz war and TDP ramping-up is reminiscent of circa 2005 with the Pentium 4 Prescott and then the Pentium D. The only way Intel back then was able to compete with and beat the Athlon 64 was down the GHz path.
Perhaps Meteor Lake can bring the TDP back down to sensible levels with huge increases in IPC and efficiency gains, much like the Core 2 Duo did?
In that case, what does AMD have planned to compete with that? They'll also need to significantly raise their IPC while dropping clocks to compete
Not Meteor Lake, no. We'll need GAAFETs and backside power delivery to substantially increase efficiency.
But even then, I don't think the high TDPs are going anywhere. It's the cheapest and easiest way for brands to increase their performance metrics, and today every other way is extremely expensive.
Very interesting test and results indeed. But how much more can be had by adding undervolting into the mix? I know that moves the tests into silicon-lottery territory, but still? The tests performed all depend on - presumably - the BIOS code and voltage levels of the vendor. That may or may not be 100% according to Intel/AMD's specs. It could be really interesting to see what additional gains could be found by undervolting across, e.g., three different samples of the same CPU.
Would be really cool to see this type of content for the GPU side of things... The 4090 seems to be VERY efficient and retains performance even when reducing the power drastically.
These results should really be reported using an external power meter. The reports from internal probes can be incorrect, if not "doctored", and the "command" to enforce a given power limit can be widely interpreted, as evidenced by some of these tests.
On this last point, AMD is the worst cheater, with > 30% difference between the "claimed" power limit and the reported one. This has worked well for them, since they can score "great" performance results at low TDP ... just by not respecting the TDP.
This is an important point, because as long as news outlets continue to fall for this false-reporting trick, there will be a direct incentive for chip manufacturers to cheat even more. Pretty soon, we'll have a meaningless "TDP limit", and countless "tech experts" praising the efficiency of whoever cheats the most.
You could spend more time ensuring the power caps are actually the same in practice (your caps matched in theory but weren't followed by the CPUs on average). It does show that Intel's performance/power curve is steeper and rewards greater power usage. AMD's perf/power curve is flatter, so it doesn't pay to overclock it, but it does reward underclocking. Just imagine if AMD could keep its low-frequency efficiency but get Intel's scaling on top.
TL;DR = AMD's going to absolutely MURDER Intel in laptops this generation. Dragon Range-HX vs Raptor Lake-HX is going to be an absolute freaking bloodbath in AMD's favor...
Is there a reason why Intel has such low temps compared to AMD at the same power usage? Seems like in a SFF PC or laptop using Intel would mean a cooler chassis despite drawing more power?
Look at the actual power used: AMD seems to be drawing 30 watts more than advertised, which is likely one reason. The other has to be the IHS with the chiplets, which probably doesn't provide good, even contact, plus the IHS being much thicker. See der8auer's video on that from June 2022, "Very thick IHS and less Contact Area - some thoughts on Ryzen 7000".
It's harder to transfer that heat when the IHS is more insulating and slows the transfer out of the chip into the heatsink. Intel will always win a temp-to-temp comparison unless the AMD chip is delidded.
Gavin, can you please also measure actual power usage from the wall socket? From other reviewers there was a big, obvious difference between AMD's TDP and Intel's TDP, so you should then try to match the TDPs to actual power. It would also be nice to try and recommend the best settings for both processors. I ordered a 13900K and I plan to run it at around a 180-200W TDP, where I am hoping for a max of 75C at full load with an NH-D15.
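For what it's worth, on Linux the Intel long/short power limits (PL1/PL2) are usually exposed through the intel-rapl powercap interface in microwatts. A hedged sketch that only computes the values to write - the sysfs path and constraint ordering vary by machine, so check the `constraint_*_name` files before writing anything, and writes require root:

```python
# Hedged sketch: map a desired PL1/PL2 (watts) to the microwatt
# strings the Linux intel-rapl powercap interface expects. On typical
# systems constraint_0 is the long-term limit (PL1) and constraint_1
# the short-term limit (PL2), but verify constraint_*_name first.
RAPL = "/sys/class/powercap/intel-rapl:0"  # assumed path, varies by machine

def limit_files(pl1_watts, pl2_watts, base=RAPL):
    """Return {sysfs file: microwatt string to write} for PL1/PL2."""
    return {
        f"{base}/constraint_0_power_limit_uw": str(pl1_watts * 1_000_000),
        f"{base}/constraint_1_power_limit_uw": str(pl2_watts * 1_000_000),
    }

for path, value in limit_files(180, 200).items():
    print(path, "->", value)  # write with care, and only as root
```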
I measured wall power for my 13900K using a self-reporting Corsair PSU and HWiNFO logging. My system has high idle power because of some power-hungry components, but it gives a more detailed view of how the power scaling metrics pan out. My results are available here: https://www.reddit.com/r/hardware/comments/10bna5r...
Very interesting article, thank you. If it is possible, please add to it, or in a new article, two tests for which I searched over the net, but I don't seem to find the answers anywhere: 1. Power draw in the idle state in different eco modes. This is very important because in many scenarios the CPU is idle most of the time. For example, a programmer 90% of the time only enters code, or a creator only adjusts some pixels. In the default mode it seems that Intel is more efficient, with about 10W idle, and AMD sometimes higher, but I wonder if this changes in eco modes. 2. If the article is about power efficiency, can you add some graphs to show exactly this, in terms of the total energy consumed to finish a job (best if measured at the outlet)? For example, a CPU/entire system needed 58.3Wh to render an image.
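The mostly-idle point above can be made concrete with a simple duty-cycle model: if the machine is idle 90% of the time, idle draw dominates the energy bill, and a 15W idle difference outweighs a large load difference. All numbers here are illustrative assumptions:

```python
# Duty-cycle model: average power for a machine that idles most of
# the time. load_pct is the percentage of time spent at load.
# Watt figures are illustrative assumptions, not measurements.
def avg_power(idle_w, load_w, load_pct):
    return (idle_w * (100 - load_pct) + load_w * load_pct) / 100

# 10 W idle vs 25 W idle, both bursting to 150 W for 10% of the time:
print(avg_power(10, 150, 10), "W average")   # low-idle system
print(avg_power(25, 150, 10), "W average")   # high-idle system
```

In this model the high-idle system averages over 50% more power despite identical load behaviour, which is why idle draw in eco modes deserves its own chart.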
I would love similar analysis on future laptop CPUs. It's been several generations since the last fanless Intel Y-series CPU was released, and I guess a fanless TDP is no longer on the roadmaps. I need a notebook CPU that can run most tasks without needing to turn on the fans.
This is an all-around great article that will be referenced for years to come. It explains a lot to amateur PC builders and gamers alike: for one, you can scale back one component at little performance compromise, making headroom for another. Additionally, if you are hitting the thermal limits of your cooler or chassis, undervolting a bit with an offset of like -0.100V will have virtually no impact on performance but a tremendous impact on overall heat production.
Would be really useful to see a total joules reading from a power meter for a selection of the gaming results that show no performance gains (so the GPU isn't doing any more work). Does the CPU use 200W but not achieve any more than it did at 65W, or does it use less than 65W to reach peak performance (35W in Borderlands) on whichever thread is limiting the FPS?
The weird one is the TW 4K 95th-percentile test: 35W and 125W are within touching distance of each other, but full power suddenly unlocks another 20FPS.
Essentially, if I power limit the CPU and only play games, am I actually saving any power in most games, or does the CPU just not use the available power because it is GPU limited, so both the performance and the electricity bill are the same with a 65W limit, 125W limit or no limit?
Also would be nice to see total J for the other benchmarks, but I think it's a lot more obvious that the extra power is going into extra performance in most of them, even if they get less efficient.
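If a logged power trace were published alongside the FPS charts, total joules would answer the GPU-limited question directly. A sketch that integrates a hypothetical (time, watts) log with the trapezoidal rule - the sample values are made up:

```python
# Integrate a logged power trace to get total joules for a session,
# regardless of the configured cap. Samples are (time_s, watts);
# all values below are made-up stand-ins for a real HWiNFO-style log.
def total_joules(samples):
    """Trapezoidal integration of a (time_s, watts) power log."""
    j = 0.0
    for (t0, w0), (t1, w1) in zip(samples, samples[1:]):
        j += (w0 + w1) / 2 * (t1 - t0)
    return j

capped   = [(0, 52), (60, 55), (120, 54)]   # 65 W limit, GPU-bound scene
uncapped = [(0, 58), (60, 61), (120, 60)]   # no limit, same scene

print(total_joules(capped), "J vs", total_joules(uncapped), "J")
```

If the two totals come out nearly equal in a GPU-bound game, the cap is saving neither performance nor electricity; if they diverge, the cap is doing real work.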
As an SFF PC owner, it's a struggle to balance performance and power budgets, so this sort of analysis is a godsend. Power usage is not just about power bills for me; default CPU settings will max out the cooling solution, turning the PC into a very noisy hair dryer! It's not desirable, to say the least. I'll second others' suggestions about measuring actual power usage for performance-per-watt metrics. The AMD CPU is still very hot at 105W, which seems suspect.
I found it confusing how the term “scaling” was used in the article. Clearly, the Intel CPU was more responsive to power while the AMD CPU had near saturated performance from 105W upwards. If I had to write the report myself, I would’ve said that the Intel CPU scaled better with power based on the data points.
I actually think the most useful thing for the average person would be: what is the wattage at which each processor can run WITHOUT a water cooler - 65W? 105W? Just with a good case and fans. These people are not interested in the upkeep and fear associated with water cooling, so it would be very useful information.
Wow. I know server chips are tuned further down the curve, but I don't think as far down as that 65W, relatively speaking. This could have massive implications at the data center level.
@Gavin great investigation. I love stuff like this.
It would be nice to include the actual power usage of the Cinebench runs instead of assuming they used the same power as the yCruncher runs in your calculations. I think that's a pretty big assumption. Could you perhaps do a couple of quick sanity checks of that assumption?
I'd be interested to hear how each CPU reacts to different coolers. I can see someone using these in a HTPC config with a low profile cooler and limited fan speeds.
Oh, and you know that 100 W CPU that you've bought from us? That "W" actually doesn't stand for Watts anymore and it consumes 500 W-h when you fire up your favourite, exciting P2W video game.
Very interesting and essential stuff. Now I wonder how the 13700K would fare against the 13900K with these varying power limits. Which one is faster at a given power limit? Imagine that they both perform the same... then there is no reason to buy the 13900K at all, unless you go for unlimited power usage.
Hresna - Tuesday, January 17, 2023 - link
I agree it would be great to see the curve - I was inspired to plot it myself for the 13900K: https://www.reddit.com/r/hardware/comments/10bna5r...
Gavin Bonshor - Friday, January 6, 2023 - link
You are not wrong. If I get time over the weekend, I'll add some additional graphs profiling the power usage over one or two of the runs.
Gavin Bonshor - Friday, January 6, 2023 - link
Your point is duly noted.
Pneumothorax - Friday, January 6, 2023 - link
That power spike for so little gain is almost criminal. Red is taking too many lessons from Blue.
III-V - Friday, January 6, 2023 - link
Don't forget Bulldozer was a thing. It's not AMD's (or Intel's) first time making a processor that runs hot.
ABR - Thursday, January 12, 2023 - link
Still, they could make the high-power mode opt-in rather than the other way around. Like the old turbo buttons.
Gavin Bonshor - Friday, January 6, 2023 - link
Both Intel and AMD announced 65W SKUs due out this month, so it'll be interesting to see how that pans out performance-per-watt wise.
corinthos - Sunday, January 22, 2023 - link
Why are the likes of ASUS dropping AMD and going Intel-only this next gen for their laptop lines, then?
HideOut - Friday, January 6, 2023 - link
Or you can buy one of those newly announced non-X variants of the AMD chips. Lower cost and lower power out of the box.
GeoffreyA - Saturday, January 7, 2023 - link
Thanks, Gavin, for the interesting, insightful article.
sonny73n - Saturday, January 7, 2023 - link
How do you limit CPU power draw in the BIOS and still get Turbo Boost on Ryzen?
t.s - Tuesday, January 17, 2023 - link
I chose to disable Turbo Boost. I set my Ryzen 5 5600G to 3.4GHz max, the GPU to 1600. It consumes about ~50 watts when playing Dota 2.
erotomania - Wednesday, January 25, 2023 - link
I overclock my 5600G to 4.55 GHz, the iGPU also, and it consumes about 60W when gaming. Maybe 65-70W if the CPU and iGPU are both hammered.
razvan.a - Saturday, January 7, 2023 - link
Very interesting article, thank you. If it is possible, please add to it, or in a new article, two tests for which I searched over the net but can't seem to find answers anywhere:
1. Power draw in the idle state in different eco modes. This is very important because in many scenarios the CPU is idle most of the time. For example, a programmer spends 90% of the time just entering code, or a creator only adjusts some pixels. In the default mode it seems that Intel is more efficient, at about 10W idle, with AMD sometimes higher, but I wonder if this changes in eco modes.
2. If the article is about power efficiency, can you add some graphs to show exactly that, in terms of the total energy consumed to finish a job (ideally measured at the outlet)? For example, a CPU/entire system needed 58.3Wh to render an image.
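That kind of graph boils down to average wall power multiplied by run time. A minimal sketch of the arithmetic, using invented numbers (not figures from the article), which also shows why a faster, higher-power run can still finish a job on competitive total energy:

```python
# Total energy to finish a fixed job: E (Wh) = average wall watts *
# run time in hours. The numbers below are invented for illustration
# only, not measured results.

def job_energy_wh(avg_watts, seconds):
    return avg_watts * seconds / 3600

# Hypothetical render of the same scene at two power limits:
eco   = job_energy_wh(avg_watts=120, seconds=1800)  # 30 min at 120 W
stock = job_energy_wh(avg_watts=260, seconds=1000)  # ~17 min at 260 W

print(f"eco: {eco:.1f} Wh, stock: {stock:.1f} Wh")
# Here the eco run finishes slower but still uses less total energy:
# 120*1800/3600 = 60 Wh vs 260*1000/3600 ≈ 72.2 Wh. Whether the gap
# closes or widens depends on how well performance scales with power.
```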
wr3zzz - Saturday, January 7, 2023 - link
I would love a similar analysis for future laptop CPUs. It's been several generations since the last fanless Intel Y-series CPU was released, and I guess a fanless TDP is no longer on the roadmaps. I need a notebook CPU that can run most tasks without needing to turn on the fans.
Vitor - Saturday, January 7, 2023 - link
That 330W peak power by Intel is horrifying.
Samus - Saturday, January 7, 2023 - link
This is an all-around great article that will be referenced for years to come. It explains a lot to amateur PC builders and gamers alike: for one, you can scale back one component at little performance compromise, making headroom for another. Additionally, if you are hitting the thermal limits of your cooler or chassis, undervolting a bit with an offset of around -0.100V will have virtually no impact on performance but a tremendous impact on overall heat production.
Sunrise089 - Saturday, January 7, 2023 - link
Great article, thank you for the content Gavin and Ryan!
zodiacfml - Sunday, January 8, 2023 - link
Thanks, team. No one does this except AnandTech.
allenb - Sunday, January 8, 2023 - link
Just adding to what others have said, but this is great data. One of the more interesting articles I've seen here in some time. Keep it up!
alanritchie - Sunday, January 8, 2023 - link
Would be really useful to see a total joules reading from a power meter for a selection of the gaming results that show no performance gains (so the GPU isn't doing any more work). Does the CPU use 200W but not achieve any more than it did at 65W, or does it use less than 65W to reach peak performance (35W in Borderlands) on whichever thread is limiting the FPS?
The weird one is the TW 4K 95th percentile test: 35W and 125W are within touching distance of each other, but full power suddenly unlocks another 20FPS.
Essentially, if I power limit the CPU and only play games, am I actually saving any power in most games, or does the CPU just not use the available power because it is GPU limited, so both the performance and the electricity bill are the same with a 65W limit, 125W limit or no limit?
It would also be nice to see total joules for the other benchmarks, but I think it's a lot more obvious that the extra power is going into extra performance in most of them, even if they get less efficient.
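The question above can be sketched numerically: if the limiting thread never demands more than the cap, the cap is irrelevant to the electricity bill. A toy model with a made-up per-second demand trace (purely illustrative, not measured data):

```python
# In a GPU-limited game the CPU only draws what the bottlenecked
# threads demand, so a higher power limit may not change total energy.
# The demand trace below is made up, purely illustrative.

def run_energy_j(demand_watts, limit_watts, seconds_per_sample=1):
    # The CPU draws the lesser of what the workload demands and the cap.
    return sum(min(d, limit_watts) for d in demand_watts) * seconds_per_sample

demand = [42, 55, 48, 60, 51]  # hypothetical per-second CPU draw (W)

for cap in (65, 125, 253):
    print(cap, "W cap ->", run_energy_j(demand, cap), "J")
# All three caps give the same joules here because demand never
# exceeds 65 W; a cap only matters once the workload would exceed it.
```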
Jase76 - Sunday, January 8, 2023 - link
Great work Anand, love this article! As an SFF PC owner, it's a struggle to balance performance and power budgets, so this sort of analysis is a godsend.
Power usage is not just about power bills for me; default CPU settings will max out the cooling solution, turning the PC into a very noisy hair dryer! That's not desirable, to say the least.
I'll second others' suggestions about measuring actual power usage for the performance-per-watt metrics. The AMD CPU is still very hot at 105W, which seems suspect.
ricebunny - Monday, January 9, 2023 - link
I found it confusing how the term "scaling" was used in the article. Clearly, the Intel CPU was more responsive to power, while the AMD CPU's performance was nearly saturated from 105W upwards. If I had to write the report myself, I would've said that the Intel CPU scaled better with power, based on the data points.
leavenfish - Wednesday, January 11, 2023 - link
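One way to pin the term down is to compute performance gained per unit of power added between two limits. A small sketch in that spirit, with placeholder scores (not the article's numbers):

```python
# Making "scales better with power" concrete: percent performance
# gained per percent power added between two power limits.
# The scores below are invented placeholders, not the article's data.

def scaling_ratio(p0, s0, p1, s1):
    """Perf gain (%) per power gain (%) going from (p0, s0) to (p1, s1)."""
    perf_gain = (s1 - s0) / s0 * 100
    power_gain = (p1 - p0) / p0 * 100
    return perf_gain / power_gain

# Hypothetical multi-threaded scores at 105 W and 170 W:
amd   = scaling_ratio(105, 36000, 170, 38200)  # ~6% more perf for ~62% more power
intel = scaling_ratio(105, 28000, 170, 34500)  # keeps gaining with power

print(f"AMD: {amd:.2f}, Intel: {intel:.2f}")
```

A ratio near 1.0 means performance tracks power almost linearly; a ratio near 0 means the curve has saturated, which matches the "responsive vs saturated" distinction above.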
I actually think the most useful thing for the average person would be: what is the wattage at which each processor can run WITHOUT a water cooler - 65W? 105W? Just with a good case and fans. These people are not interested in the upkeep and fear associated with water-cooling, so it would be very useful information.
ABR - Thursday, January 12, 2023 - link
Wow. I know server chips are tuned further down the curve, but I don't think as far down as that 65W, relatively speaking. This could have massive implications at the data center level.
Jp7188 - Monday, January 16, 2023 - link
@Gavin, great investigation. I love stuff like this. It would be nice to include the actual power usage of the Cinebench runs instead of assuming they used the same as the y-cruncher runs in your calculations. I think that's a pretty big assumption. Could you perhaps do a couple of quick sanity checks of that assumption?
Jp7188 - Monday, January 16, 2023 - link
I'd be interested to hear how each CPU reacts to different coolers. I can see someone using these in an HTPC config with a low-profile cooler and limited fan speeds.
sorintt - Wednesday, January 18, 2023 - link
You do not need power consumption measurement to see that AMD does not lower the consumption according to the setting. Just look at temperature.
Gastec - Friday, January 20, 2023 - link
Oh, and you know that 100 W CPU that you've bought from us? That "W" actually doesn't stand for Watts anymore, and it consumes 500 W-h when you fire up your favourite, exciting P2W video game.
Salipander - Tuesday, March 28, 2023 - link
Very interesting and essential stuff. Now I wonder how the 13700K would fare against the 13900K with these varying power limits. Which one is faster at a given power limit? Imagine that they both perform the same... then there is no reason to buy the 13900K at all, unless you go for unlimited power usage.