I am hoping for a high-frequency 8-core i5 with zero E-cores and a large cache. It would be a gamer sweet spot, and could counter the inevitable 3D-cache Zen 4.
Honestly, I'm underwhelmed by Intel's current big.LITTLE setup. As near as I can tell, under load the E cores are considerably less efficient than the P cores are, and currently just seem to be there so Intel can claim multi-threading victories with less die space.
And with the CPU's heat limits, it just seems to be pushing the chip into thermal throttling even faster.
Hopefully future big.LITTLE implementations are better.
Meteor Lake will bring Redwood Cove to replace Golden/Raptor Cove, and Crestmont to replace Gracemont. Gracemont in Raptor Lake is the same as in Alder Lake except for more cache, IIRC. All of this will be on "Intel 4" instead of "Intel 7", and the core count might be 8+16 again.
Put it all together and it should have a lot of breathing room compared to the 13900K(S).
8+32 will be the ultimate test of small cores, but they're already migrating on down to the cheaper chips like the 13400/13500.
Yes, it does seem backwards that the more efficient architecture is in the P-core. For reducing power consumption on light tasks, it seems better to keep the work on the P-core and downclock it. I don't see the point of the "E" cores as efficiency; they look more like ammunition for an academic multithreaded benchmark war, which isn't serving the consumer at all.
E-cores are still useful, as you get 8 extra cores in the space where you could only cram 2-4 big ones. I agree that "E for efficiency" would be clearer as "B for background", since that is really their point. They are good for consumers because they leave all the high-speed cores free for the main process, so the OS and other things don't slow it down. I am not sure if you followed, but Intel CPUs have roughly doubled in multithreaded throughput since E-cores appeared, and at ~25% utilization power usage has roughly halved. What you should complain about is bad software support, as this is not something that just happens by itself in the background.
I don't think you are fully grasping the results of the benchmarks. The compute/rendering scores prove that E-cores can tackle heavy workloads, often trading blows with AMD's all-P-core 7950X while costing less at the same time. AMD needs to lower all of its prices immediately.
That's an oversimplification, actually: P-cores and E-cores are both efficient, just for different tasks. The main efficiency gain of P-cores is that they are much, much faster than E-cores for larger tasks. Between 3 and 4 GHz, P-cores are so fast that they finish tasks much earlier than E-cores, so the total energy drawn is lower. But E-cores are efficient too, just for simple tasks at low clock speeds. Between 1 and 3 GHz, E-cores are much more efficient, beating P-cores in performance while drawing less power.
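To make the "finishes earlier, so less total energy" point concrete, here is a minimal energy-per-task sketch in Python. The throughput and wattage figures are invented purely for illustration; they are not measured P-core or E-core numbers:

```python
# Minimal energy-per-task model: energy = power * time.
# The figures below are invented for illustration only; they are not
# measured P-core/E-core numbers.

def task_energy_joules(work_units: float, throughput_units_per_s: float, power_w: float) -> float:
    """Energy to finish a fixed amount of work on one core."""
    runtime_s = work_units / throughput_units_per_s
    return power_w * runtime_s

WORK = 1000.0  # arbitrary units of work

# Hypothetical core profiles (throughput, power draw while active).
p_core = {"throughput": 50.0, "power": 12.0}   # fast but hungry
e_core = {"throughput": 20.0, "power": 6.0}    # slower but leaner

for name, core in (("P-core", p_core), ("E-core", e_core)):
    energy = task_energy_joules(WORK, core["throughput"], core["power"])
    print(f"{name}: {WORK / core['throughput']:.1f} s, {energy:.1f} J")

# With these made-up numbers the P-core uses 240 J vs. 300 J for the E-core:
# finishing sooner can more than offset the higher instantaneous power,
# which is the "race to sleep" argument. Flip the ratios and the E-core wins.
```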
Big.LITTLE is hard to do, and ARM took ages and a lot of optimization before phone CPUs got much benefit from it.
The problem of the LITTLE cores not adding anything in the way of power efficiency is well-known.
I'm saddened that Intel is dropping its own winning formula of "race to sleep", which it used successfully for decades, to ape something objectively worse just because it's a little behind in die shrinking.
It's probably more like the modern turbocharged cars in which no real driver can reach the quoted fuel consumption because the manufacturer cheesed the economy testing.
Dunno what you mean, I regularly exceed the rated fuel economy for my car (twin turbo V8) as well as rental cars with turbo engines. All it takes is only going on boost when you actually want to go fast.
One of the car magazines in Australia consistently had trouble with small European turbo engines using up to twice as much as quoted even when not being pushed hard. BMW was the worst offender.
TDP has a technical meaning, and Intel (and AMD, because they do the same thing) are using it properly.
Intel is even moving away from calling it "TDP" because consumers, and hardware review sites/channels, misunderstand the term.
To understand the situation, go search for the AnandTech article where Ian Cutress actually suggests Intel do exactly what it is doing, to cut down on confusion.
That's absolutely asinine and completely incorrect. A 125 W TDP can lead people to think that the processor uses around 125 W, or is limited to 125 W, or that they should plan on cooling around 125 W, or plan for a PSU load in the 125 W range, or that the performance quoted by Intel is produced at around 125 W, because that's what Intel says TDP means. None of these is even a little bit correct. It is entirely misleading, and a completely useless number for consumers. Since AMD's TDP is closer to reality (though still off) than Intel's, you can't even count on TDP to indicate which processor might use more power than another: the 7950X at a 170 W TDP uses less power than the 13900K at a 125 W TDP in all cases.
From Intel directly: "TDP stands for Thermal Design Power, in watts, and refers to the power consumption under the maximum theoretical load. Power consumption is less than TDP under lower loads. The TDP is the maximum power that one should be designing the system for. This ensures operation to published specs under the maximum theoretical workload."
By Intel's own definition, TDP means exactly what people would expect it to mean; however, it is a completely inaccurate number, as AnandTech's and others' testing clearly shows. You can act like a know-it-all all you want and claim others are just uneducated, but all that does is expose your own ignorance of the situation here.
Firstly, this discussion is not confined to Intel. All modern CPUs use turbo clocks. They all have varying performance characteristics depending on the thermal design of the product they are in.
Please cite where Intel writes that. Intel only uses TDP in its technical literature these days, for the very reason that consumers are confused about it; Intel uses PL1 and PL2. TDP is the MINIMUM power that one should be designing for, not the maximum. The amount of turbo clock exposed by the cooling solution is optional, but the thermal solution associated with the processor must be capable of handling the TDP. The processor will not be damaged with a cooling solution that only handles the TDP; it will not use its turbo clocks much and will stay at or below the TDP power except for short periods of time. On the other hand, if a cooling solution cannot handle the TDP, there could be bad consequences.
Again: this isn't an Intel-specific thing. TDP and turbo clocks are ubiquitous in the industry. What is also very widespread is massive misunderstanding of, and misinformation about, the term. Perhaps AnandTech should stop using the term with respect to CPUs, because it seems to me that only a minority of readers understand it.
Reviews should stop quoting TDP. Intel no longer uses it; their latest product spec pages e.g. for the i9-13900K quote Maximum Turbo Power: "The maximum sustained (>1s) power dissipation of the processor as limited by current and/or temperature controls. Instantaneous power may exceed Maximum Turbo Power for short durations (<=10ms). Note: Maximum Turbo Power is configurable by system vendor and can be system specific."
Ah, "not confined to Intel": a solid argument that it's not a problem to do it, just that "people are uneducated". Scale matters. When your real power consumption is 120% over the advertised one (see the link below), this isn't a case of "everybody's doing it"; it is indeed a matter of "people are uneducated". At this point, Jimbo, anyone trying to find excuses for Intel and downplaying the shenanigans is _really_ uneducated, was born yesterday, or benefits from the lie.
This doesn't mean you should stop using Intel if it does the job for you. But only a fool or a fraudster would defend or downplay what they're doing.
It doesn't especially matter whether they are conforming to the technical definition or not, as it tells me nothing useful about the CPU in the context in which it is presented.
Yes, TDP has a meaning, and technically, neither company is using it correctly. Back in the good-ol’ days when TDP was really max power under load, it easily allowed you to spec a cooler. Clock boosts were meant to be temporary, transient states so that *on average*, you’d still lie within the thermal budget of the cooler. Obviously, we are well past that.
So yes, AMD is playing it a bit loose (+31 %). But Intel is playing it ridiculous: the i9’s max power (as tested here) is 2.7x (!) their “TDP”.
I checked, and it's 60 W. That doesn't make AMD "less dishonest"; neither company is being dishonest. It means AMD does not intend their desktop products to be used in lower-power products. If you want to design a product around a Ryzen 7950X, you need a 170 W cooling solution, whereas you can put an i9-13900K in a product that can only dissipate 125 W. That's the difference between the two processors in terms of their TDPs. That's what TDP means.
I mean, it works. The processor automatically steps down the V/F curve and doesn't hiccup with a puny cooler good for 140-ish W. I tested a 12900K with a low-profile AXP-200 from my Skylake days. Performance wasn't bad: over 4 GHz on all 16 cores. I left all the OC settings on, or else the stock E-cores would be at 3.9 GHz.
None of the companies "do" anything here. The "doing" is by the people who, though they are ignorant, write seething rants in comment sections damning the companies.
This issue would be a lot less contentious if technical sites like Anandtech actually used their expertise to curate information presented. They just shouldn't even show TDP as it's simply not relevant to the end users who are reading the articles. They should have some standard benchmark they run to determine peak and maximum sustained power draws and show ONLY those values in any charts.
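For what it's worth, reducing a logged power trace to just those two figures is straightforward. A minimal sketch, using a synthetic trace and an assumed 60-second window to define "sustained":

```python
# Sketch of reducing a logged power trace to the two numbers a reader
# actually needs: peak draw and maximum sustained draw. The trace here is
# synthetic; a real one would come from a power meter sampling the system.

def peak_and_sustained(samples_w: list[float], sample_interval_s: float, window_s: float) -> tuple[float, float]:
    peak = max(samples_w)
    win = max(1, int(window_s / sample_interval_s))
    sustained = max(
        sum(samples_w[i:i + win]) / win
        for i in range(0, len(samples_w) - win + 1)
    )
    return peak, sustained

# Synthetic trace: a brief 320 W burst followed by a long 240 W plateau.
trace = [320.0] * 10 + [240.0] * 290          # one sample per second
peak, sustained = peak_and_sustained(trace, sample_interval_s=1.0, window_s=60.0)
print(f"Peak: {peak:.0f} W, max sustained (60 s): {sustained:.0f} W")
```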
Did you even read the article? Intel advertises the 13900K as a 253 W chip, and it drew 32% more than advertised, while AMD advertises its 7950X as a 170 W chip and it drew 30% more than advertised.
You're comparing the PL2 of one chip with the TDP of the other. Also, the article mentioned the motherboard may have something to do with ignoring the PL2 on the 13900k.
If the chip can't dissipate PL2, it'll incrementally step down to TDP gracefully. It's like you're complaining your 130mph sedan went 165mph on the race track...
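For readers wondering what that step-down looks like in principle, here is a toy model of the PL1/PL2/tau idea, where bursts up to PL2 are allowed while a running average of package power stays under PL1. The constants and the averaging scheme are illustrative simplifications, not Intel's actual firmware behavior:

```python
# Toy model of sustained vs. turbo power limits (a simplification of the
# PL1/PL2/tau idea: short bursts up to PL2 are allowed as long as a running
# average of package power stays at or below PL1). Constants are illustrative.

PL1_W = 125.0      # sustained limit (the "TDP"-class number)
PL2_W = 253.0      # short-term turbo limit
TAU_S = 56.0       # averaging window for the running (exponential) average
DT_S = 1.0         # simulation step

def step_power(demand_w: float, avg_w: float) -> tuple[float, float]:
    """Return (granted power, updated running average) for one time step."""
    ceiling = PL2_W if avg_w < PL1_W else PL1_W   # burst only while budget remains
    granted = min(demand_w, ceiling)
    # Exponentially weighted moving average over roughly TAU_S seconds.
    alpha = DT_S / TAU_S
    avg_w = (1.0 - alpha) * avg_w + alpha * granted
    return granted, avg_w

avg = 0.0
for t in range(300):
    granted, avg = step_power(demand_w=300.0, avg_w=avg)
    if t in (0, 30, 60, 120, 240):
        print(f"t={t:3d}s granted={granted:5.1f} W running_avg={avg:5.1f} W")

# Early on the chip is granted roughly PL2; once the running average reaches
# PL1, the grant is pulled back toward the sustained limit, which is the
# graceful step-down described above. Real hardware does this more smoothly.
```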
Yeah it does. $250 seems like a better price--maybe even a bit lower. It will probably come down. I just picked up a 5600 for $125 though so I am set. I suspect that the price will only come down on the 7600x once the stock of 5 series is cleared out.
?? B650 is released. On Newegg and Amazon, at least, you can purchase a selection of B650 motherboards today. Out of the 23 boards listed, 1 is out of stock and 5 are preorders for the 21st or 27th. That leaves 17 boards in stock that you can purchase today, ranging from $170 to $350 (the ASRock B650M PG Riptide and the Gigabyte B650E Aorus Master, respectively). But yeah, AMD should, and from what I have heard can, lower Zen 4 prices to compete with Intel's prices.
When AMD releases the 3D cache versions, that will reduce the prices of the normal versions. How much is the interesting question. I expect that AMD will also release a 7600 to compete with Intel on price, so the 7600X may not come down a lot.
You list Handbrake under the legacy tests; however, either the graphs are mislabeled or the tests are not included. Did you test these CPUs with Handbrake? If so, please post the results. If not, please consider testing and updating the article. This is the only workload that matters to me, and the number one reason I come to AnandTech for CPU reviews/benchmarks.
Comparing the performance of the 7600X vs. the 13600K, I see some overall advantage for the 13600K. But I will definitely go for the 7600X for one argument: a load consumption of 134 W vs. 238 W at almost the same performance is something. With power costs in Europe of around 60 cents per kWh, that is quite a pricing argument over a 5-year lifetime.
To AnandTech: this argument should be mentioned in your closing thoughts. 100 W more power consumption at today's power prices is a serious issue.
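As a rough illustration of that argument, treating the 60 cents as a per-kWh rate and assuming a few hours of heavy load per day (both are assumptions, not facts from the review):

```python
# Back-of-the-envelope electricity cost of a ~100 W load-power difference.
# All inputs are assumptions for illustration: price per kWh, daily hours
# at heavy load, and lifetime.

PRICE_EUR_PER_KWH = 0.60   # assumed European electricity price
EXTRA_LOAD_W = 238 - 134   # extra draw under load, per the comment above
HOURS_LOADED_PER_DAY = 4   # assumed heavy-load duty cycle
YEARS = 5

extra_kwh = EXTRA_LOAD_W / 1000 * HOURS_LOADED_PER_DAY * 365 * YEARS
extra_cost = extra_kwh * PRICE_EUR_PER_KWH
print(f"Extra energy: {extra_kwh:.0f} kWh, extra cost: ~{extra_cost:.0f} EUR over {YEARS} years")
# Roughly 760 kWh and 455 EUR with these assumptions; halve or double the
# duty cycle and the figure scales linearly.
```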
"Load consumption of 134 W vs. 238 W at almost the same performance is something." That reasoning is flawed; it is an unwarranted induction, taking a measurement from one thing and applying it to another. You see a graph of power consumption for an unlimited stress test (where performance is not measured) and then assume that measurement is valid for other tests too. So you assume that in every benchmark those CPUs always draw those wattages (how could that be?) and that performance is the same (where did you get that? In almost all benchmarks the 13600K leaves the 7600X in the dust, but without knowing their power consumption in those tests you cannot say which is the most efficient).
That looks more plausible, but it is also mostly useless outside the context of the specific workload. Modern CPU performance testing is very complicated, and performance versus power should be measured in the specific workload one is interested in, or at the very least as an average over workloads of a similar type.
PCs are idle (or used for light browsing, reading news, watching YouTube or a movie, etc.) most of the time. Intel idles at around 12 W thanks to the E-cores, while AMD idles at around 45 W, which makes the idle energy consumption nearly 4x.
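Whether that idle gap dominates depends entirely on the duty cycle. A quick sketch using the idle wattages claimed above, with the load wattages and usage hours assumed rather than measured:

```python
# How much the idle-power gap matters depends on how much of the day the
# machine is actually idle. Idle wattages are the figures claimed in the
# comment; load wattages and duty cycle are assumptions.

def daily_energy_wh(idle_w: float, load_w: float, hours_on: float, load_fraction: float) -> float:
    """Average daily energy for a machine that is powered on hours_on per day."""
    idle_h = hours_on * (1 - load_fraction)
    load_h = hours_on * load_fraction
    return idle_w * idle_h + load_w * load_h

HOURS_ON = 10          # assumed hours powered on per day
LOAD_FRACTION = 0.1    # assumed fraction of that time under heavy load

intel_ish = daily_energy_wh(idle_w=12, load_w=300, hours_on=HOURS_ON, load_fraction=LOAD_FRACTION)
amd_ish = daily_energy_wh(idle_w=45, load_w=230, hours_on=HOURS_ON, load_fraction=LOAD_FRACTION)
print(f"Intel-ish: {intel_ish:.0f} Wh/day, AMD-ish: {amd_ish:.0f} Wh/day")
# With 90% idle time the idle gap dominates; with a render-farm duty cycle
# the load numbers dominate instead, so neither ratio tells the whole story.
```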
A 5600g is a monolithic chip, just like the Intels. A 7600x or 7950x is a multi-chip module, though, with 2 or 3 modules, and the IOD idle is very substantial now with all the PCIe5 lanes. Bottom line Zen 4 is more efficient when doing major work, courtesy of being one process generation ahead, but Raptor Lake and Alder Lake idle lower. If you want low idle with Zen4, wait for the SoC variants like your 5600g.
They don't run constantly at maximum power consumption in all workloads. They use less while gaming, or in workloads that are more integer-heavy and lighter on FP/AVX. Usage is probably highest where one chip has a performance lead over the other. AMD can run at lower power limits and lose only a few percent in many cases.
Thanks. I liked the ones on Techpowerup, as they include tests at 720p low, and tested more than a few titles. Part of my interest is the need to compare to Tomshardware 7950 iGPU results, which looked suspiciously low for the specs, and probably faulty: https://www.tomshardware.com/news/ryzen-7000-integ...
About power consumption: I think it is pointless to measure it while running a stress benchmark that you then don't even use to compare relative performance across CPUs. It would be much more worthwhile to have measurements for some more useful (common?) benchmarks, just to understand how much the CPU consumes when real work is applied and, relative to the performance, how efficient it is.
Just think what the results would be if the CPU were artificially limited (by the BIOS or a driver) in the Prime95 test: you would measure a much lower consumption that, extrapolated to other tests, could make you think the CPU consumes a fraction of what it really does. It's the same for GPU torture tests: the maximum consumption in such a test is useless for understanding how much they really consume while gaming, and in fact most of them are artificially limited or simply hit the maximum TDP (which, again, is not a measure of real-world power consumption).
If you don't want to provide power consumption for most benchmarks, at least use one that also reports comparable performance, so that (at least for that test) one can make a comparison of efficiency.
For gaming: the 13900K is more efficient than Ryzen, per Igor's Lab's testing. Here's what he has to say:
“From a purely statistical point of view, it is a clear victory of the Core i9-13900K against the Ryzen 9 7950X in gaming, although life does not only consist of pure gaming. The Core i9-13900K often wins in the workstation and creation field, but not always. And even if it is even a tad more efficient at gaming than AMD’s Ryzen 9 7950X counterpart…”
For idle: given below is a comprehensive review of Alder Lake vs. Zen 3 done by Tech Notice. He found Ryzen to use almost 4x as much power at idle. He also tested some realistic day-to-day use cases where 12th gen was more efficient than Ryzen. I expect that to continue with 13th gen vs. Zen 4.
Socket power is a fair comparison - both sides have comparable socket PCIe lanes and chipset lanes. When using wall numbers for Intel vs. AMD you introduce motherboard and component variability. Even using the same CPU you'll find motherboards can vary by 10-20W at load due to VRM quality differences.
I'm sure they're at least part of the reason why RPL has much lower idle power draw than Zen4, but their real purpose is to provide 4 threads for the same die area and power draw as a P core to scale MT workloads.
Pricing is wrong. Like many others, AT is quoting Intel's 1,000-unit tray prices as MSRPs. Tray prices are not retail prices. Newegg shows the retail price for the 13900K as US$659.
MSRP is just the suggested retail price; it's not enforced. In this instance Newegg appears to be price gouging, as a boxed retail i9-13900K can be bought at the $569 price from other retailers, like Microcenter.
"Price reflects Recommended Customer Price (RCP) rather than MSRP. RCP is the cost per unit, in bulk sales of 1000 units or more, to OEMs, ODMs, and retail outlets when purchasing from Intel. Actual MSRP is higher than RCP"
Microcenter is not a comparable retailer, ever. They only sell at those prices to local markets. You might as well compare prices of Amazon to that of Crazy Eddie's CPU Barn that sells only in one neighborhood of St. Louis.
In July Intel signaled a +20% price increase, AMD ignored Intel's counsel, and the channel will settle that question by Black Friday/Cyber Monday. The question has already been answered for dGPUs by the market for the RTX 4090?
Prior-gen CPU and dGPU production overage will run out absolutely "a bit after Christmas".
New primary dGPU pricing follows a Pareto distribution curve, and that does not fully explain the situation assessment for every consideration.
On new CPU production, with AMD ignoring Intel's +20% price-increase offer, it's a new primary CPU price war, unless the channel disagrees and brings normalcy to the cost : price / margin assessment against cost : price / margin realities.
Like any other CPU launch, the only prices we have at the start are the prices provided by the manufacturers. Retail prices can and will vary, especially at the very start when chips are in short supply.
It's best to consider it guidance rather than hard numbers.
Now this is what we want to see! Proper, vicious, dog-eat-dog competition from Intel and AMD. I've rarely seen a clearer example of why competition is good and entrenched monopolies (or near monopolies) are bad. Hats off to both competitors.
Buy Intel, put it under an AIO, and get ready for a 340 W load on the 13900K, while AMD's Zen 4 runs at 95°C but at significantly lower power, 230 W max. The flagship parts need AIOs, not air coolers, but with AMD some air coolers can work without problems, since heat is the only factor rather than sheer power, and the temperature target on the AMD platform can be set down from 95°C to 92°C. Intel's 12900K and up, i.e. the 13700K and 13900K, cannot be tamed on air coolers, especially when you tune them. So a mild win for AMD.
I/O is a win for Intel, since DMI is 4.0 x8 while X670E's chipset uplink is PCIe 4.0 x4 like X570; a bummer from AMD, perhaps down to the cost of PCIe 5.0 redrivers and extra board layers. IMC-wise Intel is winning, but with DDR5 still in its infancy, even buying 7000 MHz low-latency DRAM won't benefit RPL at all. AMD stuck to 6000 MHz EXPO; why did you not review with that? I think AT should have used XMP for Intel and EXPO for AMD, as AMD has gotten better performance from better DRAM since the Zen 2 days. Ultimately the IMC is a bragging right for Intel's DDR5 on RPL for now; the socket is EOL, and you cannot install new kits and expect magic. Just like with 8th-gen vs. 10th-gen IMCs, you will need a new chip.
The socket is a dead end for Intel; nothing extra is coming, so you are locked out. AM5 will get Zen 4 X3D, Zen 5, and Zen 5 X3D as well. That is much better longevity past 2025, and if AMD launches Zen 6 on this AM5 socket, it's insane. Also note that Z790 will have the CPU socket bending issue as well. AMD wins here.
Performance-wise, both are neck and neck: high clocks and strong MT workloads in both camps, which makes this a very interesting market for the R9 and i9 parts. Coming to the i5 and R5 parts, Intel has more performance but AMD has better pricing. However, since most parts shipped will be in this range, I think Intel may win more client sales than AMD due to DDR4 support. No clear winner, but it's a great situation for consumer choice. One point to note: AMD has a higher base frequency than Intel, which means better performance for AMD on all workloads, not just demanding ones, especially with Zen 4, which is a more solid chip than Zen 3 with its lower clocks, annoying IOD crapping out, and subpar IMC.
AVX-512 is dead on Intel, a big shame. They are wasting 30% of the P-core die space in RPL processors on it, the ultimate pathetic move. AMD is the champion with its dual-pumped AVX-256 implementation, delivering solid AVX-512 performance with no AVX offset, unlike Intel's 11th and 12th gen. AMD wins here.
I hope this sad excuse from Intel shuts the facial orifices of those who thought the power draw of the 7950X was "too high"...;) These CPUs should sell well in colder climes, no doubt (for people who can afford the power bills...;))
For workload efficiency it's mainly about the process tech. AMD with TSMC are at 5nm, Intel is still at 7nm (or you can say TSMC is around Intel's 7nm, while Intel is using its 3rd-gen 10nm).
I like my P-cores on 12900k, thank you, they are the reason I didn't stick with Zen 3. A desktop computer needs to be highly responsive and it needs throughput when called for. I weigh those as 50% ST:50% MT, but everyone should personalize their ratio to what they really do. 90% ST:10% MT? Get a laptop. 10% ST:90% MT? A workstation or remote server/cloud.
I also have no issue with a D15 air cooler. The processor automatically tamps down to 250 W sustained, but if I want something intense done, it'll blitz through the first second or two. As for power-bill considerations, Zen 3 did idle pretty high and I noticed, but on my desktop I rarely ever idle. The bigger point was that, a year ago, Zen 3 and Alder Lake were on the same process generation, and Alder Lake hands-down won at ST.
So top-of-the-line Raptor Lake trades blows with, or roughly equals, Zen 4 on average. But only if you have a motherboard and cooler capable of delivering and cooling 300+ W, which means top-of-the-line hardware. By the time you factor in the cost of a top-notch motherboard and a 360 mm AIO, you erase the price advantage of the processor itself.
Limit the i9 to similar power levels and the performance would drop by more than just a few percent, I'd wager, so for those sticking with air cooling or smaller AIOs, Zen 4 has a clear advantage. This also points out that Raptor Lake doesn't have much headroom left above this, whereas Zen 4 (if allowed 250 W+) would clearly outperform RL at the same power levels.
Overall, this generation is much closer than I thought it would be, which, as always, is great for consumers.
Very happy that Raptor Lake is super competitive with the AMD 7000 series! AMD has to lower its high-end pricing now, for both its chipsets and its retail CPU pricing. Let the price wars begin after Thanksgiving. I expect the 7950X to go from $699 to $499, and X670E boards will be the same price as the equivalent Intel models! 😁👍
Any plans to test these in thermally or energy-constrained limits? Like with Air Cooling, or certain Watt limits?
Or perhaps, how will Zen4 on laptops compare to Intel's RPL-equivalent on laptops...?
From here it looks similar to Zen 3 vs. Intel 12th gen, or Zen 2 vs. Intel 11th gen: AMD is competitive in multithreading with better efficiency, and Intel only remains competitive by expending a lot of power, mostly for the single-core lead.
Yes. Performance testing at lower power levels is also on the to-do list. We had a chance to play with eco mode a bit for the Ryzen review, but didn't get to do something similar for Raptor Lake.
@Ryan: How about testing idle power and realistic day-to-day use cases? I can only find this kind of review for 12th gen vs. Zen 3 and not for 13th gen vs. Zen 4. It would be really nice to have those numbers for 13th gen vs. Zen 4.
Could you please add the aggregates to the SPEC 2017 scores? There's usually a summary chart that has an average of the individual benchmarks, and it often has the equivalent scores from more CPUs/configurations than the individual test graphs contain. For example, see the Alder Lake review:
TechSpot / Hardware Unboxed show that to complete a Blender job the 13900K takes 50% more total system energy than the 7950X does, and completing a Cinebench job takes 70% more energy. That means heat in the room. And that's with the Intel chip thermal throttling instantly on even the best cooling.
Looking at AT's "Power" charts here, which list the Intel chip as "125W" and AMD as "170W", many readers will get EXACTLY THE OPPOSITE impression.
Sure, you mention the difficulties in comparing TDPs, and compare this-gen Intel to last-gen, etc., but none of that "un-obscures" the totally erroneous Intel vs. AMD picture you've conveyed.
ESPECIALLY when your conclusion says they're "very close in performance" !! BAD JOB, AT. The worst I've seen here in a very long time. Incomprehensibly bad.
We go into the subject of power consumption at multiple points and with multiple graphs, including outlining the 13900K's high peak power consumption in the conclusion.
Not true. You have those insanely misleading "TDP" labels on every CPU in the legend of every performance comparison chart. This paints a very misleading picture of "competitive" performance, whereas performance at iso-power (e.g. normalized per watt, based on total system power consumption measured at the outlet) would be much more enlightening.
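For illustration, the kind of normalization being asked for is a one-liner once you have wall-power measurements; the scores and wattages below are invented placeholders, not data from this review:

```python
# Performance-per-watt normalization from wall-power measurements.
# Scores and wattages below are invented placeholders, not review data.

systems = {
    "Chip A": {"score": 38000, "avg_system_power_w": 420},
    "Chip B": {"score": 36500, "avg_system_power_w": 310},
}

for name, s in systems.items():
    ppw = s["score"] / s["avg_system_power_w"]
    print(f"{name}: {ppw:.1f} points/W")

# Two chips that look "tied" on raw score can be far apart once the score is
# divided by the power actually drawn while producing it; iso-power testing
# (capping both at the same wattage) is the other way to expose the same gap.
```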
Is it just me, or does the L1 cache arrangement seem a bit odd? 48k data and 32k instruction for the P cores and 32k data and 64k instruction on the e-cores. Seems a bit odd to me.
Golden/Raptor Cove has a micro-op cache for instructions. 4096 micro-ops is roughly equal to 16 KB of instruction cache, which makes it effectively 48 KB data + 48 KB instruction. I don't remember whether Gracemont has a micro-op cache. However, it doesn't have Hyper-Threading, so maybe it just needs less data cache per core.
"The new instruction cache on Gracemont is actually very unique. x86 instruction encoding is all over the place and in the worst (and very rare) case can be as long as 15 bytes long. Pre-decoding an instruction is a costly linear operation and you can’t seek the next instruction before determining the length of the prior one. Gracemont, like Tremont, does not have a micro-op cache like the big cores do, so instructions do have to be decoded each time they are fetched. To assist that process, Gracemont introduced a new on-demand instruction length decoder or OD-ILD for short. The OD-ILD generates pre-decode information which is stored alongside the instruction cache. This allows instructions fetched from the L1$ for the second time to bypass the usual pre-decode stage and save on cycles and power."
Thank you for the review. So Intel, too, is throwing more cores and higher frequencies at the problem these days, which increases heat and power usage in turn. AMD is also a culprit of this practice, but has not gone to the lengths Intel has. Sixteen full cores versus supposedly "efficiency" cores. What is happening here?
It would be a good idea to highlight that the MT SPEC benchmarks are just N instantiations of the single-threaded test. They are not indicative of parallel computing application performance. There are a few dedicated SPEC benchmarks for parallel performance, but for some reason they are never included in AnandTech's benchmarks.
"There are a few dedicated SPEC benchmarks for parallel performance but for some reason they are never included in Anandtechs benchmarks."
They're not part of the actual SPEC CPU suite. I'm assuming you're talking about the SPEC Workstation benchmarks, which are system-level benchmarks and a whole other kettle of fish.
With SPEC, we're primarily after a holistic look at the CPU architecture, and in the rate-N workloads, whether there's enough memory bandwidth and other resources to keep the CPU cores fed.
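For readers asking about the summary numbers mentioned above: SPEC's aggregate scores are geometric means of the per-test ratios. A minimal sketch with invented sub-scores, not real results:

```python
# SPEC-style aggregation: the overall score is the geometric mean of the
# individual sub-test ratios. Scores below are invented for illustration.
from math import prod

def geometric_mean(scores: list[float]) -> float:
    return prod(scores) ** (1.0 / len(scores))

subtest_ratios = [8.1, 10.4, 7.7, 12.9, 9.3]   # hypothetical per-benchmark ratios
print(f"Aggregate: {geometric_mean(subtest_ratios):.2f}")

# Unlike an arithmetic mean, the geometric mean keeps one outlier sub-test
# (good or bad) from dominating the overall score, which is why it's the
# conventional choice for benchmark suites.
```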
It's strange to me that when we are talking about value, especially for budget-constrained buyers who are also willing to let go of the bleeding edge of performance, we don't even mention the AM4 platform.
AM4 is still a good, if not great (not to mention mature and stable), platform for many, and you can still buy a lot of reasonably priced, good processors, including the 5800X3D. Users also still have the chance to upgrade up to a 5950X if they need more CPU at a later date.
Disappointed that you didn't spend more time investigating the serious regression for the 13900K vs the 12900K in the 502.gcc_r test. The single-threaded test does not show the same regression, so it's a curious result that could indicate something wrong with the test setup. Alternatively, perhaps the 13900K was throttling during that part of the test, or maybe E-cores are really not good at compiling code.
I had that same thought. Why publish something so obviously anomalous and not even say anything about it? Did you try re-testing it? Did you accidentally flip the scores between the 12th and 13th gen? There's no obvious reason this should be happening given the few changes between 12th and 13th gen cores.
"Disappointed that you didn't spend more time investigating the serious regression for the 13900K vs the 12900K in the 502.gc_r test."
We still are. That was flagged earlier this week, and re-runs have produced the same results.
So at this point we're digging into matters a bit more trying to figure out what is going on, as the cause is non-obvious. I'm thinking it may be a thread director hiccup or an issue with the ratio of P and E cores, but there's a lot of different (and weird) ways this could go.
I think it's starting to become a little disingenuous to list the default TDP in the benchmarks, when it's become increasingly obvious over the past few generations that Intel chips run nowhere near those TDPs; they're in the stratosphere above them.
When you see a "125 W" $589 chip virtually tied with a "170 W" $699 chip, it makes Intel seem like a no-brainer. It might be time to start putting actual power draw in each of the tests, or to simply leave the stock TDP out, because listing a Core i9 at "125 W" when it's running 50-100 W higher than the equivalent AMD chip doesn't make much sense any longer.
Did you even read the article? Intel advertises the 13900K as a 253 W chip on all of its slides, and it drew 32% more than advertised, while AMD advertises its 7950X as a 170 W chip and it drew 30% more than advertised.
Doesn’t matter if they advertise it. The charts are misleading because the W number at the left of the chart has nothing to do with the power consumed to get the performance indicated in the chart. They should really just leave the W number off or show a measured average W required to complete the test. Then the number would have meaning. As it stands, for the purposes of the graph, the number doesn’t mean much.
And, to be fair to Intel, why are some of the IGP gaming benchmarks only showing the 12th and 13th gen Intel vs AMD APUs? There's really nothing to be gleaned from this; of course APUs will be faster in IGP tests. If you can't do like for like, then either just publish the Intel scores or don't publish at all.
In your closing comments about power consumption, I was reminded about the AMD article that compared the performance difference between 230W and 65W. I think you should also mention that in this article. I'm holding out for AMD mobile parts. Those laptops will be nice.
Did you even read the article? Intel advertises the 13900K as a 253 W chip on all of its slides, and it drew 32% more than advertised, while AMD advertises its 7950X as a 170 W chip and it drew 30% more than advertised.
Reviews shouldn't care about the advertised power, or what it says in the BIOS when you set the "limit" to 65 watts; reviews should actually measure and report the real power draw.
We don’t read reviews to read intel and amd marketing numbers, we want to know the real numbers for a given workload
I don't know what it is, but openbenchmarking.org gives a geometric mean across all tests that is +20% for the 7950X. In many tests the 7950X kills the 13900K by a huge margin. Please see https://openbenchmarking.org/vs/Processor/AMD%20Ry...
Why would you undercut all of the processors with such a piss-poor RAM configuration? It is just ridiculous to pair 13th Gen, Zen 4, and even 12th Gen with such slow memory and those timings. The whole review and its testing are invalid.
"Whereas Alder Lake officially topped out at DDR5-4800, Alder Lake can run at DDR5-5600, helping to feed the beast a bit more with higher memory clockspeeds."
Guessing the second mention should be Raptor Lake.
Nice review; however, I have to say that this site has lost something since the departure of Andrei and Ian. The deep dives on mobile smartphone processors were very important, as you were the only ones doing them, and it is a real shame not to have SPEC data and a detailed comparison of the A14, A15, Exynos 2100, Exynos 2200, Snapdragon 888, Snapdragon 8 Gen 1, Snapdragon 8+ Gen 1, Dimensity 9000, and Dimensity 9000+. I hope that you are actively looking for (and find) a new editor for those pieces of content, and that once you do, you push out deep dives on those SoCs even if they are no longer the latest and greatest, because it would complete the amazing database of reviews that stopped with the Snapdragon 865. Those reviews were real gold!
Good that Intel is able to compete for now, but I'll go for the AM5 platform: support until at least 2025, and the X3D versions will blow Intel out of the water. I am not buying an already-EOL platform for a bit more performance.
1. Buy onto the platform early, upgrade very late, like 1700X to 5800X3D. Except that didn't work for every motherboard on AM4.
2. Buy a budget chip, upgrade to an expensive chip one or more generations later. The Ryzen 5 7600X is currently the cheapest, but at $300 it doesn't really qualify.
Nobody should buy AM5 or Raptor Lake (new system) right now. Wait for 7800X3D/Zen5 and Meteor Lake.
This. Initially the r7-1700 and x370 offered mixed value, and the upgrade path looked great. But AMD wasn't able to properly fulfil their AM4 promise.
So perhaps AMD realised their issues and fixed things for AM5. So perhaps buy the most expensive motherboard and the best-value CPU, and upgrade the CPU later. Or maybe nothing has changed, since AMD is so far ahead of Intel when it comes to motherboard relevancy.
So for new system builders, you can blow the budget and go all-in on a new Intel + Nvidia tower. For the best-value builders, customising an older AMD (5800X3D) plus RDNA is the way to go. For the risk takers, you can overpay for the things that are going to last, and cut back on the things you know you are going to upgrade (GPU, CPU, more storage).
The promise of upgrading is great, but sometimes doesn't work out as planned. I built a 2200G + B450 Tomahawk in 2019, with the hope to upgrade to a 6-core APU later on. Now, the 5600G is the one to go for, but has considerable issues when joined with the Tomahawk. So, I tend to think I'll just wait for a whole new system, AM6 perhaps, who knows?
Transient power spikes with an RTX 4090 and 13900K mean you will need at LEAST a 1500 W power supply to prevent random computer shutdowns. That's crazy! Of course, this will only happen when you are running a game at 4K, max settings, with ray tracing enabled. Still, getting 1,000-1,200 W spikes is crazy!
"...eeking out every last bit of performance" +1 for word choice, -1 for the spelling: it's actually "eking", which looks weird to me too! https://www.dictionary.com/browse/eke
I am surprised Gavin did not include the cooler and perhaps beefier power supply in the price comparison: clearly, if you get an Intel system, you need a *much* bigger cooler, especially if you care about noise. And you might need a bigger power supply, especially if you plan on getting an nVidia 4000-series card.
Thanks for the review. It should be mentioned that Intel's 13900K is nowhere near the MSRP of $589; I expect the writer to check the real prices. Its price is $795, about $100 more than AMD's Ryzen 7950X. AMD's CPUs are cheaper; it's a FACT.
Nero3000 - Thursday, October 20, 2022 - link
Correction: the 12600k is 6P+4E - table on first pageHixbot - Thursday, October 20, 2022 - link
I am hoping for an high frequency 8 core i5 with zero ecores and high cache. It's would be a gamer sweet spot, and could counter the inevitable 3d cache Zen 4.nandnandnand - Friday, October 21, 2022 - link
big.LITTLE isn't going away. It's in a billion smartphones, and it will be in most of Intel's consumer CPUs going forward.Just grab your 7800X3D, before AMD does its own big/small implementation with Zen 5.
HarryVoyager - Friday, October 21, 2022 - link
Honestly, I'm underwhelmed by Intel's current big.LITTLE setup. As near as I can tell, under load the E cores are considerably less efficient than the P cores are, and currently just seem to be there so Intel can claim multi-threading victories with less die space.And with the CPU's heat limits, it just seems to be pushing the chip into thermal throttling even faster.
Hopefully future big.LITTLE implementations are better.
nandnandnand - Friday, October 21, 2022 - link
Meteor Lake will bring Redwood Cove to replace Golden/Raptor Cove, and Crestmont to replace Gracemont. Gracemont in Raptor Lake is the same as in Alder Lake except for more cache, IIRC. All of this will be on "Intel 4" instead of "Intel 7", and the core count might be 8+16 again.Put it all together and it should have a lot of breathing room compared to the 13900K(S).
8+32 will be the ultimate test of small cores, but they're already migrating on down to the cheaper chips like the 13400/13500.
Hixbot - Saturday, October 22, 2022 - link
Yes it does seem backwards that the more efficient architecture is in the P core. Reducing power consumption for light tasks seems better to keep it on the P core and downclock. I don't see the point of the "e" cores as effiency, but rather academic multithreaded benchmark war. Which isn't serving the consumer at all.deil - Monday, October 24, 2022 - link
E is still useful, as you get 8/8 cores in space where you could cram 2/4. I agree E for efficiency should be B as background to make it clearer what's the point. They are good for consumers as they offer all the high speed cores for main process, so OS and other things dont slow down.I am not sure if you followed, but intel cpu's literally doubled in power since they appeared, and at ~25% utilization, cpu's halved power usage. What you should complain about is bad software support, as this is not something that happens in the background.
TEAMSWITCHER - Monday, October 24, 2022 - link
I don't think you are fully grasping the results of the benchmarks. Compute/Rendering scores prove that e-cores can tackle heavy work loads. Often trading blows with AMD's all P-Core 7950X, and costing less at the same time. AMD needs to lower all prices immediately.haoyangw - Monday, October 24, 2022 - link
That's an oversimplification actually, P-cores and E-cores are both efficient, just for different tasks. The main efficiency gain of P-cores is it's much much faster than E-cores for larger tasks. Between 3 and 4GHz, P-cores are so fast they finish tasks much earlier than e-cores so total energy drawn is lower. But E-cores are efficient too, just for simple tasks(at low clockspeeds). Below 3GHz and above 1GHz, e-cores are much more efficient, beating P-cores in performance while drawing less power.Source: https://chipsandcheese.com/2022/01/28/alder-lakes-...
Great_Scott - Friday, November 25, 2022 - link
Big.LITTLE is hard to do, and ARM took ages and a lot of optimization before phone CPUs got much benefit from it.The problem of the LITTLE cores not adding anything in the way of power efficiency is well-known.
I'm saddened that Intel is dropping their own winning formula of "race-to-sleep" that they've successfully used for decades for aping something objectivly worse because they're a little behind in die shrinking.
Castillan - Thursday, October 20, 2022 - link
It never ceases to amaze me how Intel gets away with marketing a 330W+ CPU as a 125W CPUHulk - Thursday, October 20, 2022 - link
It's kind of like how you can drive a car rated at 32mpg EPA mileage and have it return 18mpg.boozed - Thursday, October 20, 2022 - link
It's probably more like the modern turbocharged cars in which no real driver can reach the quoted fuel consumption because the manufacturer cheesed the economy testing.abhaxus - Saturday, October 22, 2022 - link
Dunno what you mean, I regularly exceed the rated fuel economy for my car (twin turbo V8) as well as rental cars with turbo engines. All it takes is only going on boost when you actually want to go fast.boozed - Saturday, October 22, 2022 - link
One of the car magazines in Australia consistently had trouble with small European turbo engines using up to twice as much as quoted even when not being pushed hard. BMW was the worst offender.maxijazz - Friday, November 4, 2022 - link
Define "not being pushed hard".Yojimbo - Thursday, October 20, 2022 - link
TDP has a technical meaning and Intel (and AMD, because they do they dame thing) are using it properly.Intel is even moving away from calling it "TDP" because of consumer, and hardware review sites/channels, misunderstanding of the term.
In order to understand the situation, go search the anandtech article where Ian Cutress actually suggests Intel do exactly what it is doing to cut down on confusion.
yh125d - Thursday, October 20, 2022 - link
It has a technical meaning, but that meaning is not important to consumers/enthusiasts using the machine. It's misleading at bestYojimbo - Thursday, October 20, 2022 - link
it's not misleading at all. people are just uneducated.yh125d - Thursday, October 20, 2022 - link
That's absolutely asinine and completely incorrect. 125w TDP can lead people to think that the processor uses around 125w, or is limited to 125w, or that they should plan on cooling around 125w, that they should plan for a PSU load in the 125w range, or that the performance quoted by Intel is produced at around 125w. Because that's what Intel says TDP means. None of these are even a little bit correct. It is entirely misleading, and a completely useless number for consumers. Since AMD's TDP is more accurate (though still off) compared to Intel's, you can't even count on it to indicate which processor might use more power than another. 7950X @ 170w TDP uses less power than 13900k @ 125W TDP in all casesFrom Intel directly: "TDP stands for Thermal Design Power, in watts, and refers to the power consumption under the maximum theoretical load. Power consumption is less than TDP under lower loads. The TDP is the maximum power that one should be designing the system for. This ensures operation to published specs under the maximum theoretical workload."
By intel's own definition, TDP means exactly what people would expect it to mean, however it is a completely inaccurate number, as Anandtech and others' testing clearly shows. You can act like a knowitall all you want and claim others are just uneducated, but all that does is expose your own ignorance of the situation here.
flyingpants265 - Thursday, October 20, 2022 - link
That doesn't matter. All that proves is TDP is a phony measurement. If the CPU draws up to 300 watts, then it's a 300 watt CPU.yh125d - Friday, October 21, 2022 - link
ExactlyIketh - Friday, October 21, 2022 - link
proving TDP is a phony measurement is the entire point of that postYojimbo - Friday, October 21, 2022 - link
Firstly this discussion is not confined to Intel. All the modern CPUs use turbo clocks. They all have various performance characteristics dependent on the thermal design of the product they are in.Please cite where Intel writes that. Intel only uses TDP in its technical literature these days for the very reason that consumers are confused about it. Intel uses PL1 and PL2. TDP is the MINIMUM power that one should be designing for, not the maximum. The amount of turbo clock exposed by the cooling solution is optional, but the thermal solution associated with the processor must be capable of handling the TDP. The processor will not be damaged with a cooling solution that only handles the TDP. The processor will not use its turbo clocks much and will stay at or below the TDP power except for short periods of time. On the other hand if a cooling solution cannot handle the TDP there could he bad consequences.
Again. This isn't an Intel-specific thing. TDP and turbo clocks are ubiquitous in the industry. What is also very widespread is massive misunderstanding and misinformation about the term. Perhaps Anandtech should stop using the term with respect to CPUs because it seems to me that it's a minority of readers who understand it.
Meteor2 - Saturday, October 22, 2022 - link
Reviews should stop quoting TDP. Intel no longer uses it; their latest product spec pages e.g. for the i9-13900K quote Maximum Turbo Power: "The maximum sustained (>1s) power dissipation of the processor as limited by current and/or temperature controls. Instantaneous power may exceed Maximum Turbo Power for short durations (<=10ms). Note: Maximum Turbo Power is configurable by system vendor and can be system specific."Which for the i9-13100K is 253W.
Meteor2 - Saturday, October 22, 2022 - link
AMD still quotes TDP (e.g. 170W for the 7950X) with no definition of TDP provided, which I would suggest IS misleading.at_clucks - Monday, October 24, 2022 - link
Ah, not confined to Intel, solid argument that it's not a problem to do it but that "people are uneducated". Scale matters. When your real power consumption is 120% over the advertised one (see link below) this isn't an "everybody's doing it" but it is indeed a matter of "people are uneducated". At this time Jimbo, anyone trying to find excuses for Intel, and downplaying the shenanigans is _really_ uneducated, was born yesterday, or benefits from the lie.This doesn't mean you should stop using Intel if it does the job for you,. But only a fool or the fraudster would defend or downplay what they're doing.
https://images.anandtech.com/graphs/graph17585/130...
catavalon21 - Sunday, November 20, 2022 - link
"Please cite where Intel writes that."Step right up, folks...
https://www.intel.com/content/www/us/en/support/ar...
Truebilly - Friday, October 21, 2022 - link
🫳🎤HarryVoyager - Friday, October 21, 2022 - link
Doesn't especially matter whether they are conforming to the technical definition or not as it is tells me nothing useful about the CPU in the context in which it is presented.OreoCookie - Tuesday, October 25, 2022 - link
Yes, TDP has a meaning, and technically, neither company is using it correctly. Back in the good-ol’ days when TDP was really max power under load, it easily allowed you to spec a cooler. Clock boosts were meant to be temporary, transient states so that *on average*, you’d still lie within the thermal budget of the cooler. Obviously, we are well past that.So yes, AMD is playing it a bit loose (+31 %). But Intel is playing it ridiculous: the i9’s max power (as tested here) is 2.7x (!) their “TDP”.
shaolin95 - Thursday, October 20, 2022 - link
AMD does the same thing. dont be a fanboyyh125d - Thursday, October 20, 2022 - link
If you're equating AMD going ~50w over TDP to intel going 210w over TDP, you're being the fanboy.Yojimbo - Friday, October 21, 2022 - link
AMD's turbo clocking is more than 50W.Yojimbo - Friday, October 21, 2022 - link
i checked and it's 60 W. That doesn't make AMD "less dishonest”. Neither company are being dishonest. It means AMD does not intend their desktop products to be used in lower power products. If you want to design a product around a Ryzen 7950X you need a 170 W cooling solution. Whereas you can put an i9 13900K in a product that can only dissipate 125 W. That's the difference between the two processors in terms if the TDPs. That's what TDP means.Truebilly - Friday, October 21, 2022 - link
I'd like to see someone run that 13900k with 120mm radWrs - Friday, October 21, 2022 - link
I mean, it works. The processor automatically steps down the v/f curve and doesn't hiccup with a puny cooler good for 140'ish W. I tested a 12900k with a low-profile AXP-200 from my Skylake days. Performance wasn't bad, over 4GHz all 16 cores. I left all the OC settings on, or else stock E-cores would be 3.9GHz.nandnandnand - Thursday, October 20, 2022 - link
Go look at some efficiency curves for the 7950X and 13900K, for example at 19:00 in Hardware Unboxed's review: https://www.youtube.com/watch?v=P40gp_DJk5EYojimbo - Friday, October 21, 2022 - link
none of the companies "do” anything here. The "doing" is by the people who, though they are ignorant, write seething rants in comment sections damning the companies.bji - Friday, October 21, 2022 - link
This issue would be a lot less contentious if technical sites like Anandtech actually used their expertise to curate information presented. They just shouldn't even show TDP as it's simply not relevant to the end users who are reading the articles. They should have some standard benchmark they run to determine peak and maximum sustained power draws and show ONLY those values in any charts.WannaBeOCer - Friday, October 21, 2022 - link
Did you even read the article? Intel advertises the 13900k as a 253w chip. It drew 32% more than it advertised while AMD advertises its 7950x as a 170w and it drew 30% more than they advertised.“Processor Base Power
125 W
Maximum Turbo Power
253 W”
Wrs - Friday, October 21, 2022 - link
You're comparing the PL2 of one chip with the TDP of the other. Also, the article mentioned the motherboard may have something to do with ignoring the PL2 on the 13900k.If the chip can't dissipate PL2, it'll incrementally step down to TDP gracefully. It's like you're complaining your 130mph sedan went 165mph on the race track...
Gastec - Sunday, October 23, 2022 - link
Oh, so the motherboards are the culprits for overclocking the CPU's to new height of financial success, not Intel?WhatYaWant - Thursday, October 20, 2022 - link
7600x seems a bit overpriced, doesn’t it?ingwe - Thursday, October 20, 2022 - link
Yeah it does. $250 seems like a better price--maybe even a bit lower. It will probably come down. I just picked up a 5600 for $125 though so I am set. I suspect that the price will only come down on the 7600x once the stock of 5 series is cleared out.meacupla - Thursday, October 20, 2022 - link
I am happy that Raptor Lake offers stiff competition to Zen 4. Hopefully AMD brings down the price of 7600X, and hurry up and launch their B650 boardscaqde - Thursday, October 20, 2022 - link
?? B650 is released? At least on Newegg and amazon you can purchase a selection of B650 motherboards today. Out of the 23 boards listed, 1 is out of stock and 5 are preoders for the 21st or 27th. So that leaves 17 boards that you can purchase today that are in stock, the in stock boards go from 170-350 (Asrock 650M PG Riptide and Gigabyte B650E Aorus Master respectively). But yeah AMD should and from what I have heard can lower Zen 4 prices to compete with Intel's prices.haukionkannel - Thursday, October 20, 2022 - link
When AMD release 3d cache versions, it will reduce the prices of normal versions.How much, is interesting question. I expect that AMD also will release 7600 to compete with intel in price, so 7600x may not come down a lot.
meacupla - Thursday, October 20, 2022 - link
Oh, B650 is out already? I saw no news coverage of them, so I thought they were still waiting to be released.techguymaxc - Thursday, October 20, 2022 - link
You list Handbrake under legacy tests however, either the graphs or mislabeled or the tests are not included. Did you test these CPUs with Handbrake? If so, please post the results. If not, please consider testing and updating the article. This is the only workload that matters to me, and the number one reason I come to Anandtech for CPU reviews/benchmarks.brucethemoose - Thursday, October 20, 2022 - link
x264 is (more or less) the same thing as the handbrake test... and it kinda is legacy software at this point.Personally, I'd like to see a more modern encoding test, like av1an with x265+chunked encoding, or maybe Staxrip with some filters enabled.
GeoffreyA - Thursday, October 20, 2022 - link
Yes, some libaom would be fantastic.jakky567 - Monday, October 24, 2022 - link
I wouldn't say handbrake/x264 are obsolete yet. We should be looking towards the future, but h264 is here to stay as at least a fallback codec.GeoffreyA - Tuesday, October 25, 2022 - link
It is very much the MP3 of video and here to stay. Plus, its successors have not been indisputably better or have come with tradeoffs.Ashantus - Thursday, October 20, 2022 - link
Comparing the performance of 7600X vs 13600k i see some overall advantage for the 13600K.But, i will defionitly go for the 7600X due one argue.
Load Consumption of 134 Watt vs 238 Watt at almost same performance is something.
Regarding the poweer costs in europe of 60 cent per Watt that is quiet some pricing argue at a 5 years lifetime.
At anandtech:
this argue should be mentioned in your closing thoughts. 100 Watt more powerconsumption at todays powerprices is a serious issue.
Yojimbo - Thursday, October 20, 2022 - link
I don't remember seeing power versus performance numbers. Did I miss them?CiccioB - Thursday, October 20, 2022 - link
<blockquote>Load Consumption of 134 Watt vs 238 Watt at almost same performance is something.</blockquote>This thought is completely wrong. It is called "induction", as you were looking at something and then reported that on something else.
You see a graph of power consumption for a "unlimited test" (where performance is not measured) and then you think that that measure is valid also for other tests.
So you just think that for each bench those CPUs consume always those Watts (how can it be?) and that the performance are the same (where did you got that? In almost all benches the 13600K leaves the 7600X in the dust, but not knowing their power consumption for those test you cannot say which is the most efficient).
Ashantus - Thursday, October 20, 2022 - link
Just found another test, whereas a powerconsumption at action is recorded.At gaming (average out of 12 games tested) is:
13600k = 88 W. 7600X = 60 W
13990K = 144 W 7900X = 107W
Yojimbo - Thursday, October 20, 2022 - link
that looks more plausible. but it is also mostly useless except in the context of the specific workload. modern cpu performance testing is very complicated and performance versus power should be taken in the specific workload one is interested in, or at the very least an average of workloads of a similar type.Gastec - Sunday, October 23, 2022 - link
Specific workload such as : 13990K produces 100 fps @ 144 W, while 7600X produces 100 fps @ 60 W?m53 - Thursday, October 20, 2022 - link
PCs are idle (or used for light browsing, reading bews, watching youtube or a movie, etc.) most of the time. Intel idles at around 12W due to E cores while AMD idles at around 45W which will make the energy consumption 4x.t.s - Thursday, October 20, 2022 - link
idle around 45w? sources? My 5600G idle at 11W. others, around 7 s/d 17W.titaniumrock - Thursday, October 20, 2022 - link
here is the source link https://www.youtube.com/watch?v=UNmpVvTUkJE&li...t.s - Friday, October 21, 2022 - link
And where it states the AMD vs Intel watt vs watt?Wrs - Friday, October 21, 2022 - link
A 5600g is a monolithic chip, just like the Intels. A 7600x or 7950x is a multi-chip module, though, with 2 or 3 modules, and the IOD idle is very substantial now with all the PCIe5 lanes. Bottom line Zen 4 is more efficient when doing major work, courtesy of being one process generation ahead, but Raptor Lake and Alder Lake idle lower. If you want low idle with Zen4, wait for the SoC variants like your 5600g.tygrus - Saturday, October 22, 2022 - link
They don't run constantly with at maximum power consumption in all workloads. They use less while gaming or more integer & less FP/AVX. Highest usage probably when they have a performance lead over the other. AMD can run at lower power limits & loose a few % in many cases.neblogai - Thursday, October 20, 2022 - link
I was hoping for Ryzen 7000X iGPU benchmarks too. There are no proper comparisons of them vs Intel's 32EU iGPUs on the internet.nandnandnand - Thursday, October 20, 2022 - link
ETA Prime 7700X iGPU tests (no comparisons):
https://www.youtube.com/watch?v=p4cwNn4kI6M (gaming)
https://www.youtube.com/watch?v=MnSVPM78ZaQ (emulation)
7600X vs. 12900 vs. 5700G
https://arstechnica.com/gadgets/2022/09/ryzen-7600...
All Zen 4 vs. 12900K vs. others
https://www.techpowerup.com/review/amd-ryzen-7-770...
It's similar to the UHD 770 in Alder Lake, sometimes a little better or worse. About half the performance of a 5700G which is impressive for 2 CUs.
UHD 770 in Raptor Lake gets +100 MHz across the board, so that could make a slight difference.
neblogai - Thursday, October 20, 2022 - link
Thanks. I liked the ones on TechPowerUp, as they include tests at 720p low and cover more than a few titles. Part of my interest is the need to compare against Tom's Hardware's 7950X iGPU results, which looked suspiciously low for the specs, and probably faulty: https://www.tomshardware.com/news/ryzen-7000-integ...
CiccioB - Thursday, October 20, 2022 - link
About power consumption: I think it is pointless to measure it while running a torture benchmark whose result you then don't even use to compare relative performance against other CPUs.
It would be much more worthwhile to have measurements for some more useful (common?) benchmarks, just to understand how much the CPU consumes when real work is applied and, relative to the performance, how efficient it is.
Just think what the results would be if the CPU were artificially limited (by BIOS/driver) in the Prime95 bench: you would measure a much lower consumption, extrapolate it to other tests, and conclude the CPU consumes a fraction of what it actually does. It's the same for GPU torture benchmarks: the maximum consumption in those tests is useless for understanding how much they really consume while gaming, and in fact most of them are artificially limited or simply hit the max TDP (which, again, is not a measure of real power consumption).
If you don't want to provide power consumption for most benchmarks, at least use one where the chips deliver comparable performance, so that (at least for that test) one can compare efficiency.
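A minimal sketch of the comparison being asked for here: record average power over the actual benchmark run and relate it to the score, or to the energy needed to finish the run, instead of quoting peak draw from a torture test. All numbers below are placeholders, not measurements:

# Each entry: (benchmark score, average power in W during that run, run time in s)
results = {
    "cpu_a": (100, 200, 600),
    "cpu_b": (95, 120, 630),
}

for name, (score, avg_w, seconds) in results.items():
    energy_wh = avg_w * seconds / 3600      # energy actually used to finish the run
    print(f"{name}: {score / avg_w:.2f} points/W, {energy_wh:.0f} Wh per run")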
shabby - Thursday, October 20, 2022 - link
http://uploads.disquscdn.com/images/ce6075096ed8d9...
Sometimes you need to go elsewhere to get what you need.
shabby - Thursday, October 20, 2022 - link
The 7700X uses 80 W less during gaming than the 13700K.
CT007 - Sunday, October 23, 2022 - link
The 7700X is an awesome chip for pure gaming... I don't understand why it has been excluded from so many major benchmarks I've seen lately.
titaniumrock - Thursday, October 20, 2022 - link
Here is a link for you: https://www.youtube.com/watch?v=H4Bm0Wr6OEQ
m53 - Friday, October 21, 2022 - link
For gaming: the 13900K is more efficient than Ryzen per Igor's Lab's test. Here's what he has to say: "From a purely statistical point of view, it is a clear victory of the Core i9-13900K against the Ryzen 9 7950X in gaming, although life does not only consist of pure gaming. The Core i9-13900K often wins in the workstation and creation field, but not always. And even if it is even a tad more efficient at gaming than AMD's Ryzen 9 7950X counterpart…"
Link: https://www.igorslab.de/en/intel-core-i9-13900k-an...
For idle: below is a comprehensive review of Alder Lake vs Zen 3 done by Tech Notice. He found Ryzen to use almost 4x the power at idle. He also tested some realistic day-to-day use cases where 12th gen was more efficient than Ryzen. I expect that to continue with 13th gen vs Zen 4.
https://youtu.be/4F2z3F64o94
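To illustrate why idle draw can dominate, here is a toy time-weighted average using the idle claims and the gaming averages quoted earlier in this thread, plus an assumed 90% idle share (the duty cycle is a made-up assumption):

idle_w = {"Intel": 12, "AMD": 45}     # idle figures claimed in this thread, W
load_w = {"Intel": 144, "AMD": 107}   # 13900K / 7900X gaming averages quoted above, W
idle_share = 0.90                     # hypothetical fraction of time spent idle/light

for cpu in idle_w:
    avg = idle_share * idle_w[cpu] + (1 - idle_share) * load_w[cpu]
    print(f"{cpu}: ~{avg:.0f} W time-weighted average")
# Intel: ~25 W, AMD: ~51 W under these assumptions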
t.s - Friday, October 21, 2022 - link
Pity there are no from-the-wall numbers.
Wrs - Friday, October 21, 2022 - link
Socket power is a fair comparison - both sides have comparable socket PCIe lanes and chipset lanes. When using wall numbers for Intel vs. AMD you introduce motherboard and component variability. Even using the same CPU you'll find motherboards can vary by 10-20W at load due to VRM quality differences.
t.s - Saturday, October 22, 2022 - link
Yep, you're right. I'm just curious about the consumption from the wall.
catavalon21 - Sunday, October 23, 2022 - link
CPU reviews used to. https://www.anandtech.com/bench/CPU-2020/2734
meacupla - Thursday, October 20, 2022 - link
Intel is really pushing those E-cores. Do they really help with keeping power draw to a minimum while doing low-power tasks?
kwohlt - Thursday, October 20, 2022 - link
I'm sure they're at least part of the reason why RPL has much lower idle power draw than Zen 4, but their real purpose is to provide 4 threads for the same die area and power draw as a P core, to scale MT workloads.
Ryan Smith - Thursday, October 20, 2022 - link
Bingo. They're for area efficiency reasons, not power efficiency reasons.
tipoo - Thursday, October 20, 2022 - link
The X3D continues to impress in many areas, doesn't it?
meacupla - Thursday, October 20, 2022 - link
It's going to be a slaughter when the 7000X3D series comes out...
nandnandnand - Thursday, October 20, 2022 - link
The wins/ties/near-losses for Zen 4 and 5800X3D show the way. 7800X3D will come in like a wrecking ball.
brucethemoose - Thursday, October 20, 2022 - link
Typo at the bottom of page one: "Ryzen 5 7600K"
TimSyd - Thursday, October 20, 2022 - link
Pricing is wrong. Like many, AT is quoting Intel's 1,000-unit tray prices as the MSRPs. Tray prices are not retail prices. Newegg shows the retail price for the 13900K as US$659.
Mr Perfect - Thursday, October 20, 2022 - link
MSRP is just the suggested retail price; it's not enforced. In this instance Newegg appears to be price gouging, as a boxed retail i9-13900K can be bought at the $569 price from other retailers, like Microcenter.
nandnandnand - Thursday, October 20, 2022 - link
Intel did NOT provide MSRPs for Raptor Lake: https://en.wikipedia.org/wiki/Raptor_Lake#Raptor_L...
"Price reflects Recommended Customer Price (RCP) rather than MSRP. RCP is the cost per unit, in bulk sales of 1000 units or more, to OEMs, ODMs, and retail outlets when purchasing from Intel. Actual MSRP is higher than RCP"
bji - Friday, October 21, 2022 - link
Microcenter is not a comparable retailer, ever. They only sell at those prices to local markets. You might as well compare Amazon's prices to those of Crazy Eddie's CPU Barn, which sells only in one neighborhood of St. Louis.
Bruzzone - Friday, October 21, 2022 - link
Raptor Lake asking prices, first day on the open market:
13900K = $845, +43% over the Intel 1K tray price
13900KF = $1187, +110%
13700K = $393 (-12.5%), some semblance of reality in the world
13700KF = $415, +8%
13600K = $393, +23%
13600KF = $415, +34%
Raphael (Ryzen 7000), fifth week of supply on the open market:
7950X = $933, +33.6%
7900X = $695, +26.6%
7700X = $477, +19.5%
7600X = $422, +41.3%
In July Intel signaled a +20% price increase, AMD ignored Intel's counsel, and the channel will settle that question by Black Friday / Cyber Monday. The question has already been answered for dGPUs in the market for the RTX 4090?
mb
Wrs - Friday, October 21, 2022 - link
That's why I usually buy new hardware a bit after Xmas. That wouldn't have worked for several reasons in 2020-21, but other years it's served me well.
Bruzzone - Friday, October 21, 2022 - link
Prior-gen CPU and dGPU production overage runs do end absolutely a "bit after Christmas." New primary dGPUs follow a Pareto distribution curve, and that does not fully explain the situation assessment across every consideration.
On new CPU production, with AMD ignoring Intel's +20% price increase offer, it's a new primary CPU price war, unless the channel disagrees and brings normalcy to the cost : price / margin assessment against cost : price / margin realities.
mb
Ryan Smith - Thursday, October 20, 2022 - link
Like any other CPU launch, the only prices we have at the start are the prices provided by the manufacturers. Retail prices can and will vary, especially at the very start when chips are in short supply. It's best to consider it guidance rather than hard numbers.
allenb - Thursday, October 20, 2022 - link
Now this is what we want to see! Proper, vicious, dog-eat-dog competition from Intel and AMD. I've rarely seen a clearer example of why competition is good and entrenched monopolies (or near monopolies) are bad. Hats off to both competitors.
Oxford Guy - Friday, October 28, 2022 - link
Duopoly is hardly adequate competition.
Silver5urfer - Thursday, October 20, 2022 - link
I will keep it short. Buy Intel and put it under an AIO, and get ready for a 340 W load on the 13900K, while AMD's Zen 4 sits at 95C but at significantly lower power, 230 W max. The flagship parts need AIOs, not air coolers, but with AMD some air coolers can work without problems, since heat is the only factor rather than sheer power, and the temperature target on the AMD platform can be set down from 95C to 92C. Intel's 12900K and up, i.e. the 13700K and 13900K, cannot be tamed on air coolers, especially when you tune them. So a mild win for AMD.
I/O is a win for Intel, since DMI is 4.0 x8 while X670E's uplink is PCIe 4.0 x4, like X570. A bummer from AMD, perhaps down to the cost of PCIe 5.0 redrivers and extra board layers.
IMC-wise Intel is winning, but with DDR5 in this infancy stage, even buying a 7000 MHz low-latency kit won't benefit RPL much. AMD stuck to 6000 MHz EXPO - why did you not review with that? I think AT should have stuck to XMP for Intel and EXPO for AMD, as AMD has seen better performance from better DRAM since the Zen 2 days. Ultimately the IMC is a bragging right for Intel's DDR5 on RPL; the socket is EOL, and you cannot install new kits and expect magic - just like 8th gen vs 10th gen IMCs, you will need a new chip.
The socket is a dead end for Intel; nothing extra is coming, you are locked out. AM5 will get Zen 4 3D, Zen 5, and Zen 5 3D as well. Much better longevity past 2025+, and if AMD launches Zen 6 on this AM5 socket then it's insane. Also note that Z790 will have the CPU socket bending issues as well. AMD wins here.
Performance-wise, both are neck and neck: high clocks on both, high MT performance from both camps. This is a very interesting market for the R9 and i9 parts. Coming to the i5 and R5 parts, Intel has more performance but AMD has better pricing. However, this range is where most parts will ship, and I think Intel may win more client sales vs AMD thanks to DDR4 support. No winner, but it's a great consumer choice. One point to note: AMD has higher base frequencies than Intel, which means better performance for AMD across all workloads, not just demanding ones, especially with Zen 4, which is a more solid chip than Zen 3 with its lower clocks, annoying IOD quirks, and subpar IMC.
AVX-512 is dead, a big shame for Intel. They are wasting 30% of the P-core die space in RPL processors on it, an ultimately pathetic move. AMD is the champion with dual AVX-256, delivering solid performance with no AVX offset, unlike Intel's 11th and 12th gen. AMD wins here.
WaltC - Thursday, October 20, 2022 - link
I hope this sad excuse from Intel shuts the facial orifices of those who thought the power draw of the 7950X was "too high"...;) These CPUs should sell well in colder climes, no doubt (for people who can afford the power bills...;))
Wrs - Friday, October 21, 2022 - link
For workload efficiency it's mainly about the process tech. AMD with TSMC are at 5nm, Intel is still at 7nm (or you can say TSMC is around Intel's 7nm, while Intel is using its 3rd-gen 10nm).
I like my P-cores on the 12900K, thank you, they are the reason I didn't stick with Zen 3. A desktop computer needs to be highly responsive and it needs throughput when called for. I weigh those as 50% ST : 50% MT, but everyone should personalize their ratio to what they really do. 90% ST : 10% MT? Get a laptop. 10% ST : 90% MT? A workstation or remote server/cloud.
I also have no issue with a D15 air cooler. The processor automatically tamps down to 250 W sustained, but if I want something intense done, it'll blitz through the first second or two. As for power bill considerations, Zen 3 did idle pretty high and I noticed, but on my desktop I rarely ever idle. It was more that, a year ago, Zen 3 and Alder Lake were on the same process generation, and Alder Lake hands down won the ST.
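A tiny sketch of the personalized ST:MT weighting described above; the scores and the 50/50 split are arbitrary examples, not benchmark results:

def weighted_score(st: float, mt: float, st_weight: float = 0.5) -> float:
    # Blend normalized single-thread and multi-thread scores by personal usage mix.
    return st_weight * st + (1.0 - st_weight) * mt

# Hypothetical normalized scores for two chips, weighted 50% ST : 50% MT
print(weighted_score(1.00, 0.95))   # 0.975
print(weighted_score(0.93, 1.00))   # 0.965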
yh125d - Thursday, October 20, 2022 - link
So top-of-the-line Raptor Lake trades blows with / equals Zen 4 on average, but only if you have a motherboard and cooler capable of delivering and cooling 300+ W, which means top-of-the-line hardware. By the time you factor in the cost of a top-notch board and a 360 AIO, you erase the price advantage of the processor itself. Limit the i9 to similar power levels and I'd wager performance would drop by more than just a few %, so for those sticking with air cooling or smaller AIOs, Zen 4 has a clear advantage. This also points out that Raptor Lake doesn't have much headroom left above this, whereas Zen 4 (if allowed 250 W+) would clearly outperform RL at the same power levels.
Overall, this generation is much closer than I'd thought it would be, which as always is great for consumers
nandnandnand - Thursday, October 20, 2022 - link
The 7950X outperforms the 13900K from 65W to 185W by substantial amounts: https://www.youtube.com/watch?v=P40gp_DJk5E (19:00, Cinebench R23 multi)
It also seems to use less power at lower temps in gaming (23:00, Cyberpunk 2077).
That's probably not the end of the story, but Zen 4 is clearly doing better out of the box. Good news for Dragon Range buyers in 2023.
Harry_Wild - Thursday, October 20, 2022 - link
Very happy that Raptor Lake is super competitive with the AMD 7000 series! AMD has to lower its high-end pricing now, for both its chipsets and retail chips. Let the price wars begin after Thanksgiving. I expect the 7950X to go from $699 to $499, and X670E boards to be the same price as the equivalent Intel models! 😁👍
Drazick - Thursday, October 20, 2022 - link
Could you build/compile the SPEC tests with AVX-512 flags for the Ryzen 7xxx?
Ryan Smith - Thursday, October 20, 2022 - link
It's on the to-do list. Though we're not expecting a significant change in performance.
Kangal - Thursday, October 20, 2022 - link
Any plans to test these in thermally or energy-constrained limits? Like with air cooling, or certain watt limits? Or perhaps, how will Zen 4 on laptops compare to Intel's RPL equivalent on laptops...?
From here it looks similar to Zen 3 vs Intel 12th gen, or Zen 2 vs Intel 11th gen: AMD is competitive in multi-thread with better efficiency, and Intel only remains competitive by expending a lot of power, mostly for the single-core lead.
Ryan Smith - Friday, October 21, 2022 - link
Yes. Performance testing at lower power levels is also on the to-do list. We had a chance to play with eco mode a bit for the Ryzen review, but didn't get to do something similar for Raptor Lake.
Kangal - Friday, October 21, 2022 - link
Oh nice, will be waiting for that next article to drop. Cheers!
m53 - Friday, October 21, 2022 - link
@Ryan: How about testing idle power and realistic day-to-day use cases? I can only find this kind of review for 12th gen vs Zen 3, not 13th gen vs Zen 4. Would be really nice to have the numbers for 13th gen vs Zen 4. Here is a link to the review for 12th gen vs Zen 3: https://youtu.be/4F2z3F64o94
Drazick - Friday, October 21, 2022 - link
@Ryan, I am not sure about that. I think enabling AVX-512 on Ryzen will have a great effect on the FP tests of SPEC.
Oxford Guy - Friday, October 28, 2022 - link
There wasn't a delay when one of the rendering apps got AVX-512 support several years ago.
Pjotr - Thursday, October 20, 2022 - link
Closing thoughts typos: Ryzen 580X3D and Ryzen 700.
Ryan Smith - Thursday, October 20, 2022 - link
Thanks!
mode_13h - Thursday, October 20, 2022 - link
Thanks for the review! Could you please add the aggregates in the SPEC 2017 scores? There's usually a summary chart that has an average of the individual benchmarks, and it often has the equivalent scores from more CPUs/configurations than the individual test graphs contain. For example, see the Alder Lake review:
https://www.anandtech.com/show/17047/the-intel-12t...
Arbie - Thursday, October 20, 2022 - link
TechSpot / Hardware Unboxed show that to complete a Blender job the 13900K takes 50% more total system energy than the 7950X does. Completing a Cinebench job on Intel takes 70% more energy. Meaning heat in the room. And that's with the Intel chip thermal throttling instantly even on the best cooling.
Looking at AT's "Power" charts here, which list the Intel chip as "125W" and AMD as "170W", many readers will get EXACTLY THE OPPOSITE impression.
Sure, you mention the difficulties in comparing TDPs etc, and compare this gen Intel to last gen etc but none of that "un-obscures" the totally erroneous Intel vs AMD picture you've conveyed.
ESPECIALLY when your conclusion says they're "very close in performance" !! BAD JOB, AT. The worst I've seen here in a very long time. Incomprehensibly bad.
gezafisch - Thursday, October 20, 2022 - link
Cope harder - watch Der8auer's video showing that the 13900K can beat any chip at efficiency with the right settings - https://youtu.be/H4Bm0Wr6OEQ
Ryan Smith - Thursday, October 20, 2022 - link
We go into the subject of power consumption at multiple points and with multiple graphs, including outlining the 13900K's high peak power consumption in the conclusion. https://images.anandtech.com/graphs/graph17601/130...
Otherwise, the only place you see 125W and 170W are in the specification tables. And those values are the official specifications for those chips.
boeush - Thursday, October 20, 2022 - link
Not true. You have those insanely misleading "TDP" labels on every CPU in the legend of every performance comparison chart. This paints a very misleading picture of "competitive" performance, whereas performance at iso-power (e.g. normalized per watt, based on total system power consumption measured at the outlet) would be much more enlightening.
boeush - Thursday, October 20, 2022 - link
*per watt-hour (not per watt), summed over the duration of the benchmark run
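A minimal sketch of the metric boeush is describing: integrate a sampled power trace over the benchmark run to get watt-hours, then compare chips on performance per watt-hour. The sample trace below is invented for illustration, not measured data:

# (seconds, watts) samples from a hypothetical power logger during one run
samples = [(0, 220), (10, 310), (20, 305), (30, 298), (40, 150)]

energy_j = 0.0
for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
    energy_j += (p0 + p1) / 2 * (t1 - t0)   # trapezoidal integration

print(f"{energy_j / 3600:.2f} Wh consumed over the run")   # ~3.05 Wh for this fake trace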
dgingeri - Thursday, October 20, 2022 - link
Is it just me, or does the L1 cache arrangement seem a bit odd? 48K data and 32K instruction for the P-cores, and 32K data and 64K instruction on the E-cores. Seems a bit odd to me.
Otritus - Thursday, October 20, 2022 - link
Golden/Raptor Cove has a micro-op cache for instructions. 4096 micro-ops is about equal to 16 KB of instruction cache, which makes it effectively 48 KB-D + 48 KB-I. I don't remember whether Gracemont has a micro-op cache. However, it doesn't have hyperthreading, so maybe it just needs less data cache per core.
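The ~16 KB equivalence above rests on an assumption along these lines (the ~4-byte average x86 instruction length is an estimate, not a specification):

# Rough arithmetic behind "4096 micro-ops is about equal to 16 KB of instruction cache",
# assuming each cached micro-op stands in for roughly one x86 instruction of ~4 bytes.
uop_entries = 4096
avg_instr_bytes = 4
print(f"~{uop_entries * avg_instr_bytes // 1024} KB equivalent I-cache")   # ~16 KB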
"The new instruction cache on Gracemont is actually very unique. x86 instruction encoding is all over the place and in the worst (and very rare) case can be as long as 15 bytes long. Pre-decoding an instruction is a costly linear operation and you can’t seek the next instruction before determining the length of the prior one. Gracemont, like Tremont, does not have a micro-op cache like the big cores do, so instructions do have to be decoded each time they are fetched. To assist that process, Gracemont introduced a new on-demand instruction length decoder or OD-ILD for short. The OD-ILD generates pre-decode information which is stored alongside the instruction cache. This allows instructions fetched from the L1$ for the second time to bypass the usual pre-decode stage and save on cycles and power."Source: https://fuse.wikichip.org/news/6102/intels-gracemo...
Sailor23M - Friday, October 21, 2022 - link
Interesting to see the Ryzen 5 7600X perform so well in the Excel/PowerPoint benchmarks. Why is that so?
Makste - Friday, October 21, 2022 - link
Thank you for the review. So Intel, too, is finally throwing more cores and higher frequencies at the problem these days, which in turn increases heat and power usage. AMD is also a culprit of this practice, but has not gone to the same lengths as Intel: 16 full cores versus supposedly efficient cores. What is going on?
ricebunny - Friday, October 21, 2022 - link
It would be a good idea to highlight that the MT SPEC benchmarks are just N instantiations of the single-thread test. They are not indicative of parallel-computing application performance. There are a few dedicated SPEC benchmarks for parallel performance, but for some reason they are never included in AnandTech's benchmarks.
Ryan Smith - Friday, October 21, 2022 - link
"There are a few dedicated SPEC benchmarks for parallel performance but for some reason they are never included in Anandtechs benchmarks."They're not part of the actual SPEC CPU suite. I'm assuming you're talking about the SPEC Workstation benchmarks, which are system-level benchmarks and a whole other kettle of fish.
With SPEC, we're primarily after a holistic look at the CPU architecture, and in the rate-N workloads, whether there's enough memory bandwidth and other resources to keep the CPU cores fed.
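For readers unfamiliar with the distinction ricebunny raises, a minimal sketch of the "rate" idea: N independent copies of the same single-threaded job run side by side, so the score reflects throughput rather than how well one job scales across cores. The workload function is a made-up placeholder, not SPEC code:

from multiprocessing import Pool

def single_copy(seed: int) -> int:
    # Stand-in for one independent, single-threaded benchmark instance.
    total = 0
    for i in range(1_000_000):
        total += (i * seed) % 7
    return total

if __name__ == "__main__":
    n_copies = 8                     # "rate-8": eight copies running concurrently
    with Pool(n_copies) as pool:
        results = pool.map(single_copy, range(1, n_copies + 1))
    print(len(results), "independent copies completed")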
wolfesteinabhi - Friday, October 21, 2022 - link
It's strange to me that when we are talking about value... especially for budget-constrained buyers... who are also willing to let go of bleeding-edge performance... we don't even mention the AM4 platform. AM4 is still a good, if not great (not to mention mature and stable), platform for many, and you can still buy a lot of reasonably priced, good processors, including the 5800X3D, and users still have the chance to upgrade up to a 5950X if they need more CPU at a later date.
cowymtber - Friday, October 21, 2022 - link
Burning hot POS.
BernieW - Friday, October 21, 2022 - link
Disappointed that you didn't spend more time investigating the serious regression for the 13900K vs the 12900K in the 502.gcc_r test. The single-threaded test does not show the same regression, so it's a curious result that could indicate something wrong with the test setup. Alternately, perhaps the 13900K was throttling during that part of the test, or maybe E-cores are really not good at compiling code.
Avalon - Friday, October 21, 2022 - link
I had that same thought. Why publish something so obviously anomalous and not even say anything about it? Did you try re-testing it? Did you accidentally flip the scores between the 12th and 13th gen? There's no obvious reason this should be happening given the few changes between 12th and 13th gen cores.
Ryan Smith - Friday, October 21, 2022 - link
"Disappointed that you didn't spend more time investigating the serious regression for the 13900K vs the 12900K in the 502.gc_r test."We still are. That was flagged earlier this week, and re-runs have produced the same results.
So at this point we're digging into matters a bit more, trying to figure out what is going on, as the cause is non-obvious. I'm thinking it may be a Thread Director hiccup or an issue with the ratio of P and E cores, but there are a lot of different (and weird) ways this could go.
adenta180 - Friday, June 23, 2023 - link
Did you guys ever get to the bottom of this SPECint rate GCC regression on the 13900K?
Avalon - Friday, October 21, 2022 - link
I think it's starting to become a little disingenuous to list the default TDP in the benchmarks, when it's become increasingly obvious over the past few generations that Intel chips run nowhere near those TDPs. When you see a "125W" $589 chip virtually tied with a "170W" $699 chip, it makes it seem like Intel is a no-brainer. It might be time to start putting actual power draw for each of the tests in there, or simply leave stock TDP out, because listing a Core i9 at "125W" when it's running 50-100W higher than an equivalent AMD chip doesn't make much sense any longer.
WannaBeOCer - Friday, October 21, 2022 - link
Did you even read the article? Intel advertises the 13900K as a 253 W chip; it drew 32% more than advertised, while AMD advertises its 7950X as a 170 W chip and it drew 30% more than advertised. On all of Intel's slides:
“Processor Base Power
125 W
Maximum Turbo Power
253 W”
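A quick sanity check of the percentages quoted here, using the advertised maximum power figures as the baseline (the overshoot percentages come from the comment above, not from new measurements):

advertised_w = {"13900K": 253, "7950X": 170}    # vendor maximum turbo / package power, W
overshoot    = {"13900K": 0.32, "7950X": 0.30}  # "drew 32% / 30% more", per the comment

for chip, watts in advertised_w.items():
    print(f"{chip}: ~{watts * (1 + overshoot[chip]):.0f} W measured")
# 13900K: ~334 W, 7950X: ~221 W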
bcortens - Saturday, October 22, 2022 - link
It doesn't matter if they advertise it. The charts are misleading because the W number at the left of the chart has nothing to do with the power consumed to get the performance indicated in the chart. They should really just leave the W number off, or show the measured average W required to complete the test. Then the number would have meaning. As it stands, for the purposes of the graph, the number doesn't mean much.
Avalon - Friday, October 21, 2022 - link
And, to be fair to Intel, why do some of the iGPU gaming benchmarks only show the 12th and 13th gen Intel parts vs AMD APUs? There's really nothing to be gleaned from this; of course APUs will be faster in iGPU tests. If you can't do like for like, then either just publish the Intel scores or don't publish at all.
Iketh - Friday, October 21, 2022 - link
In your closing comments about power consumption, I was reminded of the AMD article that compared the performance difference between 230W and 65W. I think you should also mention that in this article. I'm holding out for AMD mobile parts. Those laptops will be nice.
Iketh - Friday, October 21, 2022 - link
125 W on the Intel 7 process, when it's actually 325 W on 10nm, lmao... pure marketing.
WannaBeOCer - Friday, October 21, 2022 - link
Did you even read the article? Intel advertises the 13900K as a 253 W chip; it drew 32% more than advertised, while AMD advertises its 7950X as a 170 W chip and it drew 30% more than advertised. On all of Intel's slides:
“Processor Base Power
125 W
Maximum Turbo Power
253 W”
bcortens - Saturday, October 22, 2022 - link
Reviews shouldn't care about the advertised power, or what it says in the BIOS when you set the "limit" to 65 watts; reviews should actually measure and report the real power draw. We don't read reviews to read Intel and AMD marketing numbers; we want to know the real numbers for a given workload.
Iketh - Sunday, October 23, 2022 - link
Iketh - Sunday, October 23, 2022 - link
What on earth does that have to do with my statement?
FinTechno - Saturday, October 22, 2022 - link
I don't know what it is, but openbenchmarking.org gives a geometric mean across all tests that is +20% in favor of the 7950X. In many tests the 7950X kills the 13900K by a huge margin. Please see https://openbenchmarking.org/vs/Processor/AMD%20Ry...
CT007 - Sunday, October 23, 2022 - link
So many apps that nobody uses... -_-
Cinzzano - Saturday, October 22, 2022 - link
Why would you undercut all of the processors with such a piss-poor RAM configuration? It is just ridiculous to pair 13th gen, Zen 4, and even 12th gen with such slow memory and those timings... The whole review and its testing are invalid.
James5mith - Saturday, October 22, 2022 - link
"Whereas Alder Lake officially topped out at DDR5-4800, Alder Lake can run at DDR5-5600, helping to feed the beast a bit more with higher memory clockspeeds."Guessing the second mention should be Raptor Lake.
Hrel - Sunday, October 23, 2022 - link
I really don't want any CPU over 65W.
nandnandnand - Sunday, October 23, 2022 - link
You can limit TDPs yourself.
Contrabondo - Friday, December 16, 2022 - link
To get performance lower than a Zen 3 5950X? FYI, the 5950X, when properly tuned, draws about 70-75 W in Cinebench R23 at 3200 MHz.
Archer_Legend - Sunday, October 23, 2022 - link
Nice review; however, I have to say that this site has lost itself after the departure of Andrei and Ian. The deep dives on mobile processors for smartphones were very important, as you were the only ones doing them, and it is a real shame not to have SPEC data and a detailed comparison of the A14, A15, Exynos 2100, Exynos 2200, Snapdragon 888, Snapdragon 8 Gen 1, Snapdragon 8+ Gen 1, Dimensity 9000, and Dimensity 9000+.
I hope you find, and are actively looking for, a new editor for those pieces of content, and that once you find one you push out deep dives on those SoCs, even if they will not be the latest and greatest, because it will complete the amazing database of reviews which stopped with the Snapdragon 865. Those reviews were real gold!
Gothmoth - Sunday, October 23, 2022 - link
Good that Intel is able to compete for now, but I'll go for the AM5 platform. Support until at least 2025, and the X3D versions will blow Intel out of the water.
I am not buying an already-EOL platform for a bit more performance.
TheinsanegamerN - Monday, October 24, 2022 - link
Why not stop buying CPUs every year? It's a waste of money.
nandnandnand - Monday, October 24, 2022 - link
I can see a couple of things that make sense:
1. Buy onto the platform early, upgrade very late. Like 1700X to 5800X3D. Except that didn't work for every motherboard on AM4.
2. Buy a budget chip, upgrade to an expensive chip 1+ gen later. The Ryzen 5 7600X is currently the cheapest but at $300 it doesn't really qualify.
Nobody should buy AM5 or Raptor Lake (new system) right now. Wait for 7800X3D/Zen5 and Meteor Lake.
Kangal - Monday, October 24, 2022 - link
This. Initially the R7 1700 and X370 offered mixed value, and the upgrade path looked great. But AMD wasn't able to properly fulfil their AM4 promise.
So perhaps AMD realised their issues and fixed things for AM5. So perhaps buy the most expensive motherboard and the best-value CPU, and upgrade the CPU later. Or maybe nothing has changed, since AMD is so far ahead of Intel when it comes to motherboard relevancy.
So for new system builders, you can blow the budget and go all-in on a new Intel + Nvidia tower. For the best-value builders, customising an older AMD (5800X3D) and RDNA build is the way to go. For the risk takers, you can overpay for the things that are going to last, and cut back on the things you know you are going to upgrade (GPU, CPU, more storage).
GeoffreyA - Tuesday, October 25, 2022 - link
The promise of upgrading is great, but sometimes it doesn't work out as planned. I built a 2200G + B450 Tomahawk system in 2019, with the hope of upgrading to a 6-core APU later on. Now the 5600G is the one to go for, but it has considerable issues when paired with the Tomahawk. So I tend to think I'll just wait for a whole new system, AM6 perhaps, who knows?
tvdang7 - Sunday, October 23, 2022 - link
Is it too much for the reviewer to add the 7900X and 7700X to the graphs, just so we know what we are dealing with?
Ryan Smith - Monday, October 24, 2022 - link
We do not currently have those chips. AMD has only sampled the top and bottom SKUs. We'll get them eventually through other means. We just don't have them right now.
o01326 - Sunday, October 23, 2022 - link
Just signed up to comment this: why are you benchmarking Civ IV by FPS?
TheinsanegamerN - Monday, October 24, 2022 - link
The same reason they were, up until this review, still using a 2080 Ti for their CPU gaming benchmarks.
coolkwc - Monday, October 24, 2022 - link
This review is a fail - it doesn't even post the core temperature under stress. So difficult to get that reading, huh?
Annnonymmous - Monday, October 24, 2022 - link
Transient power spikes with an RTX 4090 and 13900K mean you will need at LEAST a 1500 W power supply to prevent random computer shutdowns. That's crazy! Of course, this will only happen when you are running a game at 4K, max settings, with ray tracing enabled. Still, seeing 1,000-1,200 W spikes is crazy!
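As a rough illustration of the sizing logic only (the headroom factor is a common rule of thumb, not a measured requirement, and actual transient tolerance varies by power supply):

system_spike_w = 1200     # worst-case whole-system transient quoted above, W
headroom = 0.25           # assumed safety margin for transients

print(f"~{system_spike_w * (1 + headroom):.0f} W -> a 1500 W unit covers the quoted spikes with margin")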
trueonefix - Monday, October 24, 2022 - link
Awesome
blppt - Monday, October 24, 2022 - link
Wow, what is going on with Civ 5 and Intel 12/13 series CPUs? They get absolutely wrecked.
shoestring - Tuesday, October 25, 2022 - link
"...eeking out every last bit of performance" +1 for word choice, -1 for the spelling: it's actually "eking", which looks weird to me too! https://www.dictionary.com/browse/ekeRyan Smith - Tuesday, October 25, 2022 - link
You are correct. The article has been fixed. I humbly accept my deduction in points. Thank you for bringing it to my attention.
OreoCookie - Tuesday, October 25, 2022 - link
I am surprised Gavin did not include the cooler and perhaps a beefier power supply in the price comparison: clearly, if you get an Intel system, you need a *much* bigger cooler, especially if you care about noise. And you might need a bigger power supply, especially if you plan on getting an Nvidia 4000-series card.
nader_21007 - Wednesday, November 2, 2022 - link
Thanks for the review. It should be mentioned that Intel's 13900K is nowhere near the MSRP of $589. I expect the writer to check real prices: it is $795, about $100 more than AMD's Ryzen 7950X.
AMD CPUs are cheaper. That's a FACT.
SanX - Wednesday, November 9, 2022 - link
I do not need your BS E-cores in desktop, Intel.
Santoval - Tuesday, November 22, 2022 - link
I wonder if the 5.8 GHz turbo is going to last more than 1 nanosecond at a time.
VVTF - Tuesday, October 17, 2023 - link
The sad thing is, I find this Alder Lake-N review for NUC boxes on AnandTech much more interesting: https://www.anandtech.com/show/21085/asrock-indust...