For the future, we will have to see how AMD adds X3D to its SKUs. I do not think they will do a full stack, but who knows? Maybe they can address the heat and the tuning locks, since Zen 4 is built on bigger changes: the IPC gain is marginal, but it brings higher clocks and, hopefully, a more stable and mature platform this round.
Nah, experimental CPUs aren't released as common end-user and server products. Please go inform yourself on PC hardware; you don't really know much, and it's silly.
Facepalm. Everything is experimental when it is new. I specifically said they should repeat what they did here, and then later, when it is fleshed out better, apply those innovations to the mainstream products. Instead of "please go inform yourself", maybe you should "learn to read"?
Case in point: AI/ML/Neural Networks. They have been pointless hardware for several years, and made processors "worse" due to the added silicon cost, consumer price, design complexity, software stack, thermals and energy use. However, today they are a welcome addition and in some cases crucial for functionality (that innovation made Rosetta2 possible, relying on this hardware in Apple M1 chips).
I do not have the AMD platform. I waited a year to see what AMD would do with the B2 stepping and AGESA 1.2.0.2, which was supposed to come out in April 2021 and fix the problem. I kept searching online at OCN and r/AMD constantly and found that the brand-new X570S and the top-tier boards exhibit the problem regardless of stepping. The 1.2.0.7 release is supposedly stable, BUT it has its share of issues with DRAM OC. And finally, AMD's processors cannot sustain a single clock-speed multiplier ratio. That, plus the voltage behaviour, meant I did not want to delve into the platform once I saw three users on a forum I visit mention the USB problems.
It's hit or miss; you are lucky if you do not have it. But I also think that once you push the DRAM past 3600 MHz the IOD gets flaky: it can't cope and causes a lot of stability issues. A friend of mine ran a 5900X on an X470 C7H with Windows 7 and got a lot of instability. I want Windows 7 too... which is another reason I skipped it.
This is AnandTech; of all the folks here, I see the salt only from you, which makes it self-explanatory who is doing the usual nonsense of fanboy slandering.
The product is shoddy. I called out Intel's trash Z690 over the ILM issues, but the moment I said bad things about your precious CPU, you came out of your little cocoon and started this nonsense.
If you do not want to read them or even try to understand the issue at hand, then better stop there; do not dismiss everything you do not understand as "beta, use at your own risk". It's laughable how you wave away the platform problems: you clicked the first link at the top and declared everything beta.
1.2.0.7 is currently a beta release; the rest of them are not. On top of that, many have already said 1.2.0.7 is better for platform stability than the previous AGESA, especially with the TPM issues since Windows 11 dropped. Then again, 1.2.0.7 is bad for DRAM OC. You will dismiss the link below as well.
1.2.0.7 is NOT beta. I only looked at Asus, but the 1.2.0.7 BIOSes (which I have had installed for at least a month) are not listed as beta. They may have been listed as such in the first few days after release.
I've been using 1207 for about a month, and it's running without problems on my B450 Tomahawk; but I did notice that MSI seemingly removed this BIOS from the website, the current one reverting to 2020's AGESA 1006. A quick search online shows that some people have had issues with it.
I do not crave that pathetic 1-2%. I want a machine that does everything, from my 4K muxing to whatever else I throw at it, including a ton of MT workloads. The CPU is also locked, and I do not want that either. The 5900X is a far better purchase.
A PC is not just damn gaming-only junk; it should be able to do everything. If I wanted it just for gaming, I would get any CPU and be done with it.
Sure, 15%? More marketing BS, eh? Here's one that shows 8% tops. Also, TPU's scores are all over the place when I compare the 10900K numbers in the 11900K review against those in the 12900K review; still, it's some metric to use. And go check out the Tom's Hardware ranking I mentioned on the last page. There's not even a 10% difference, yet you say 15%; this type of BS is really annoying.
It depends on the games, sure. Still, your response was nonsense: the 5800X3D is far better for gaming than the 5900X, and those extra cores are completely useless to gamers.
100% true. My 3080 is still the bottleneck more often than not, not my 5600X. Unless you're competing in esports or playing a game that is actually CPU-bottlenecked, a 5600X is still more than enough for most people. Both the 5800X3D and the 5900X are enthusiast-level parts, or for those who know they need them for productivity, where time == money.
[META] Could you please NOT include auto-playing videos on all pages? When I'm sitting in a park, my laptop tethered to my mobile phone for internet access, the last thing that I need is something that drains my battery and my data plan!
It would be useful to factor in energy costs, perhaps over 3 years; pretty sure AMD's offering will destroy Intel's. Also, please don't stop at listing the processors' TDP, but show a more realistic power number.
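As a rough illustration of that request, here is a back-of-the-envelope sketch; the wattages, daily hours, and electricity price are placeholders, not measured figures from this review.

```python
# Hypothetical 3-year energy-cost difference from average power draw.
def three_year_cost(avg_watts, hours_per_day=4, price_per_kwh=0.30):
    kwh = avg_watts / 1000 * hours_per_day * 365 * 3
    return kwh * price_per_kwh

# e.g. a 140 W chip vs. a 90 W chip under the same daily load (made-up numbers)
delta = three_year_cost(140) - three_year_cost(90)
print(f"~${delta:.0f} extra over 3 years under these assumptions")
```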
There's a weird mistake in the logic of the article. IPC is said to be lower, but for a lower-clocked chip like the 5800X3D to beat Intel's auto-overclocked CPUs, its effective IPC has to be far higher; otherwise it wouldn't win and would always be behind. IPC is everything it has, and it is the absolute IPC king when it comes to gaming. The 12700K only has better performance in apps that aren't games; otherwise it doesn't stand a chance.
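For what it's worth, the "IPC" being argued about here is really performance per clock, which you can estimate from a score and the sustained frequency. A tiny sketch with made-up numbers (not results from this review):

```python
# Performance per clock = score / sustained frequency. Placeholder numbers only.
def perf_per_clock(score_fps, clock_ghz):
    return score_fps / clock_ghz

x3d   = perf_per_clock(score_fps=180.0, clock_ghz=4.45)  # hypothetical 5800X3D run
rival = perf_per_clock(score_fps=180.0, clock_ghz=4.90)  # hypothetical higher-clocked rival
print(f"same fps at a lower clock -> {x3d / rival - 1:.0%} more work per clock")
```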
We're starting to see confusion over what to call the gains caused by increasing cache. This is a chip that loses to the 5800X in Cinebench, but wins in most games.
"Doesn't stand a chance" wrong again. Once a 12700K is unlocked and disabled the E Cores and add OC it will beat any processor in gaming. Cache is not IPC, IPC is not cache, Cache helps proof is L4 eDRAM on 5775C i7 processor and the Haswell 4980HQ i7 BGA processor.
Gaming is nothing to sneeze at also these all are running on OOTB basic. A little tweak AMD is toast. Because AMD cannot handle tweaking at all properly. It's like buy it and run it with PBO2 at max anything else = instability.
Not really, and you have proven to be a biased intel fanboy. There are multiple benchmarks with 12900KS with OC applied and it still loses against the 5800X3D, 50 game benchmark. Too bad, fanboy.
No, there's no mistake. The 5800X3D indeed has lower IPC, but it wins in specific workloads where its larger cache can keep the CPU fed with data while other CPUs have to wait ages to fetch it from main RAM.
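The "keeping the CPU fed" point is easy to see with a toy working-set experiment. This is my own rough sketch, not the article's methodology; absolute numbers depend entirely on the machine, and 96 MB is only chosen to echo the 5800X3D's L3 size.

```python
import time
import numpy as np

def random_gather_rate(working_set_mb, accesses=2_000_000):
    n = working_set_mb * 1024 * 1024 // 8            # number of float64 elements
    data = np.random.rand(n)
    idx = np.random.randint(0, n, size=accesses)     # random (cache-hostile) access pattern
    t0 = time.perf_counter()
    data[idx].sum()                                  # misses dominate once data >> cache
    dt = time.perf_counter() - t0
    return accesses / dt / 1e6                       # million random accesses per second

for mb in (4, 32, 96, 512):
    print(f"{mb:4d} MB working set: {random_gather_rate(mb):7.1f} M accesses/s")
```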
The usual colour pattern is: red for AMD processors being tested, orange for AMD comparison, dark blue for Intel processors being tested and blue for Intel comparison.
No, I have to agree; this was constantly confusing for me. The most stand-out colour MUST be the thing you're reviewing. Having a nearby comparison part be the brightest or most eye-catching is very misleading.
For some impressive engineering-software results, take a look at Phoronix's tests. One thing I personally miss in most reviews is using at least the "slower" preset for x265 and trying the deinterlacing filters, particularly EEDI2, as a low core-count performance test. Using the faster presets quickly turns a compute-bound test into a memory-bandwidth, or even an I/O-bandwidth, test.
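If anyone wants to check the preset effect themselves, here is a minimal sketch; it assumes ffmpeg is installed with libx265 and that test_clip.y4m stands in for whatever short sample you have on hand (both assumptions, nothing from this review).

```python
import subprocess
import time

SRC = "test_clip.y4m"  # hypothetical short test clip

def encode_seconds(preset):
    t0 = time.perf_counter()
    subprocess.run(
        ["ffmpeg", "-hide_banner", "-loglevel", "error", "-y",
         "-i", SRC, "-c:v", "libx265", "-preset", preset, "-f", "null", "-"],
        check=True,
    )
    return time.perf_counter() - t0

for preset in ("ultrafast", "medium", "slower"):
    print(f"{preset:>9}: {encode_seconds(preset):6.1f} s")
```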
The biggest performance gains with the 5800X3D come from games that have less-than-stellar multi-core utilization and yet are very demanding of single-core performance.
MSFS 2020 and DCS, for example, especially in VR, since the CPU has to set up two views for every game frame. The 5800X3D performs 40 percent better than a 5600X in those situations. Dual-rank memory at 3600 MHz is important to get the most out of it, too.
Escape from Tarkov is another one with huge gains from the 5800X3D.
'As a result, in lieu of CPU overclocking, the biggest thing a user can do to influence higher performance with the Ryzen 7 5800X3D is to use faster DDR4 memory with lower latencies, such as a good DDR4-3600 kit. These settings are also the known sweet spot for AMD's Infinity Fabric Interconnect as set out by AMD.'
Perhaps it's a bit odd, then, to see DDR4-3200 listed below that paragraph, apparently the only RAM chosen for the testing.
What we see is that there are plenty of cases that benefit from it, substantially. As mentioned in this article, there are cases where the extra L3 cache is enough for it to pull ahead of the i9-12900K, even though that chip beats all of AMD's other desktop CPU models.
They know it's late, but I'd rather have it than not. Even though there are some gaps in their testing, and some of the usual analysis is lacking (e.g. latency analysis), it does give us some apples-to-apples data vs. other CPUs they have or will review.
I've been benchmarking The Elder Scrolls IV: Oblivion, and it doesn't benefit a huge amount from the cache; the 12900K's the way to go there. Very interesting. At the other extreme, Fallout 4 benefits hugely from the larger cache (50%).
Buying a 5800X3D at near 5950X prices would be a bit painful for an IT professional like me: I'd always go the other direction (as I did before the 3D was available).
But it was a no-brainer for my son, who only cares for having the best gaming experience that fits his budget.
He upgraded from a Kaby Lake i7-7700K with DDR3-2400 to the 3D on a Gigabyte X570S UD with DDR4-4000 for his RTX 3070, and he has worn a constant smile since.
Installation was a breeze, the BIOS update was easy (the system even booted without it), the XMP timings worked perfectly, and RAM bandwidth is 55GB/s on Geekbench 4, the best I've ever measured. Windows 10 and nearly 100 games across six SATA drives and one NVMe SSD didn't even flinch at the new hardware.
At that point in time, several weeks ago, it was quite simply the best gaming platform for a reasonable amount of money: a simple air-cooled system that runs super cool and Noctua-quiet while the GPU tiger sleeps during 2D work or movies.
Yes, it will be outdated in a month or two, but that surely doesn't make it inadequate for some years (unless Microsoft pulls a Pluton stunt).
And maturity means more time for games.
I tend to spend more time on tinkering than gaming, but that's why having these choices is so great.
Hi everyone, I've seen the comments about some of the game data and I've investigated it just now. I found the issue and I've updated the gaming graphs with the correct data.
I have also pushed the corrected data to Bench. I plan to re-bench the Core i7-12700K and Core i5-12600K gaming results this weekend, so expect them to be updated in this review and in subsequent reviews.
Apologies for the discrepancies; a genuine mistake where a tonne of data is concerned.
Sorry to necro this thread, but with Zen 4 coming I would be curious to see how the 5800X3D performs against the 7700X specifically in gaming. Being able to choose cheaper AM4 motherboards and much cheaper DDR4-3600 RAM (vs. DDR5-6000) is going to affect the price/performance balance.
Will the 5800X3D be the chip for gamers to choose instead of Zen4 in Q4 2022?
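Purely as an illustration of that price/performance trade-off, here is a sketch in which every price and the performance ratio are placeholders to be swapped for real numbers once Zen 4 reviews land:

```python
# All figures are hypothetical placeholders, not quotes or benchmark results.
def dollars_per_unit_perf(cpu, board, ram, relative_perf):
    return (cpu + board + ram) / relative_perf

am4 = dollars_per_unit_perf(cpu=450, board=120, ram=100, relative_perf=1.00)  # 5800X3D guess
am5 = dollars_per_unit_perf(cpu=400, board=250, ram=220, relative_perf=1.10)  # 7700X guess
print(f"AM4: ${am4:.0f} per unit of gaming perf, AM5: ${am5:.0f}")
```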
nandnandnand - Thursday, June 30, 2022 - link
Great results for the 5800X3D in Dwarf Fortress and Factorio. Clearly it does not have enough cache for the World Gen 257x257 test.
dorion - Thursday, June 30, 2022 - link
It shows off the exceptional predictors in Intel's architecture: they don't have 96MB of L3 cache either (duh), and yet they whip that world generator. I wonder what odd coding of Tarn's the CPUs are butting up against. An exceptional number of civilizations and monsters to simulate over 550 years in the 257x257 world?
AndreaSussman - Sunday, July 31, 2022 - link
Hello
Samus - Thursday, June 30, 2022 - link
Also impressive is how much the cache improves WinRAR performance, going from last to 2nd place, with a lower clock speed.
nandnandnand - Thursday, June 30, 2022 - link
Compared to some of the earliest reviews, like Tom's, this one found more productivity/code scenarios where the extra cache helps it edge out the 5800X, despite the lower clock speed. Obviously there are niches where the 5800X3D will do really well, like the workloads that Milan-X can boost by >50%. You won't usually see them all in one review.
DanNeely - Sunday, July 3, 2022 - link
WinRAR has always been extremely dependent on memory performance. That a huge cache benefits it isn't a big surprise.
emn13 - Monday, July 11, 2022 - link
Sure, but it's more than just memory performance that's at issue here: in the abstract, compressors need to find correlations across broad swaths of memory. It's actually not at all obvious whether that's cache-friendly, and indeed in 7-Zip it appears not to be.
After all, if your compression context significantly exceeds the L3 cache, then that cache will be largely useless. Conversely, if your window (almost) fits within the smaller L3 cache, then increasing its size is likely just as useless.
Why this helps WinRAR but not 7-Zip is not obvious. Given the compression-ratio differences, I'm going to assume that 7-Zip is using more context and thus can't benefit from "just" 96MB of cache, and perhaps that WinRAR at higher settings (if it has any?) wouldn't either.
That does make me curious how 3D V-Cache affects higher-throughput compressors such as zstd, or perhaps even lz4.
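To make the window-vs-cache point concrete, here is a rough sketch using Python's lzma module (my own toy, not anything from the review): the payload repeats a 24 MB block, so long-range matches only become visible once the LZMA2 dictionary is at least that large, while speed also shifts as the "context" grows.

```python
import lzma
import os
import time

block = os.urandom(24 * 1024 * 1024)   # synthetic data; repeating it creates 24 MB-distant matches
payload = block + block

for dict_mb in (8, 32, 96):
    filters = [{"id": lzma.FILTER_LZMA2, "preset": 1, "dict_size": dict_mb * 2**20}]
    t0 = time.perf_counter()
    out = lzma.compress(payload, format=lzma.FORMAT_XZ, filters=filters)
    dt = time.perf_counter() - t0
    ratio = len(payload) / len(out)
    print(f"dict {dict_mb:3d} MB: ratio {ratio:5.2f}, {len(payload) / dt / 2**20:6.1f} MB/s")
```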
brucethemoose - Friday, July 1, 2022 - link
Just wanna say I am ecstatic over the DF/Factorio tests. Stuff like that is where I'm most critically CPU-bottlenecked these days, as opposed to CPU Blender or Battlefield at 720p.
I'd like to suggest Minecraft chunk generation as another test (though it's a bit tricky to set up). Expansive strategy/sim games like Starsector, RimWorld, Stellaris and such would also be great to test, but I don't think they're fully automatable.
29a - Tuesday, July 5, 2022 - link
I'd also be interested in a Stellaris benchmark.
ballsystemlord - Thursday, June 30, 2022 - link
@Gavin, is it just me, or do you have two sets of identical WoT benchmarks at 1080p? BTW: I'm looking at the print view.
Stuka87 - Thursday, June 30, 2022 - link
It's not you; there are two full sets of them.
Gavin Bonshor - Thursday, June 30, 2022 - link
Thank you, fixed 😊
RBeen - Thursday, June 30, 2022 - link
Also does wonders for PlanetSide 2. I'm getting almost double the FPS coming from a 3600X.
brucethemoose - Friday, July 1, 2022 - link
Ah, the other game I wanted to see! Unsurprising, and it definitely needs all the CPU it can get.
bunkle - Saturday, July 2, 2022 - link
Nailed it. That's the biggest problem with the games tested these days: they are single-player games where CPU performance matters very little, IMO.
Games that are heavily CPU-bottlenecked tend to be online FPS games with large player counts (~100) and lots of entities to simulate, such as the PlanetSide 2 you mentioned and many Unreal Engine 4 titles: Hell Let Loose, Squad, Post Scriptum, Beyond the Wire, Holdfast: Nations at War, Rising Storm 2: Vietnam, PUBG, etc., to name but a few.
The problem is that it's very hard to reliably benchmark these games where it matters, in online gameplay, without developer support. It's also where gaming has stagnated for the last 10-15 years: we have these amazing open-world maps that are completely barren, with nothing happening in them, and we're stuck with a limited number of players.
I realise this isn't directly related to CPU performance, and that simulating a world across multiple cores is also a software-engineering challenge, but being able to showcase new CPUs and their performance in these titles would probably help a lot with CPU marketing and would probably drive further innovation. Just look at UE5: there's no mention of its online capability or of what new gameplay it enables, just more eye candy.
MadAd - Sunday, July 3, 2022 - link
Agreed. In a shooter like PlanetSide 2, the closer things like resist tables and default texture maps can stay to the CPU, the more fps can be gained on the graphics side. With easily 200-300 players in a single fight over a base, along with tanks, quads, troop transports, and aircraft flying around, you need the best single-core processor you can get, which is why I switched to AMD with the 5000 series; having more 3D cache just sweetens the pot.
Slash3 - Thursday, June 30, 2022 - link
On page 5, it states the memory used is DDR4-3200 CL40. I assume that's a typo, and that the usual JEDEC 3200 CL22 kit was used?
Slash3 - Thursday, June 30, 2022 - link
Also, "insert analysis" at the bottom of page 7, and "DDR4-43200" at the top of page 8. ;)
Glad to see the review up!
Gavin Bonshor - Thursday, June 30, 2022 - link
You are correct, it was just a typo.
DevBuildPlay - Thursday, June 30, 2022 - link
Great article! I would love to see a similar article on the 3D cache differences on Epyc under DB and other server workloads.
Gavin Bonshor - Thursday, June 30, 2022 - link
Thank you. We're overhauling our 3D test suite for the next-generation CPUs. Watch this space!
lemurbutton - Thursday, June 30, 2022 - link
Review something far more interesting: M1 Ultra.
djboxbaba - Thursday, June 30, 2022 - link
I was beginning to think that they were planning on shutting this website down… are they struggling to get writers?
Threska - Thursday, June 30, 2022 - link
*smirk* Whaaa! They're not reviewing my favorite piece of silicon, so they must be going out of business.
Gavin Bonshor - Thursday, June 30, 2022 - link
Don't worry, we're not going anywhere.
Ryan Smith - Thursday, June 30, 2022 - link
I won't beat around the bush: it's taking longer than expected to get a new Sr. CPU Editor hired. That's a big part of the reason things are so slow - fewer hands, and my time is being tied up with hiring.
djboxbaba - Thursday, June 30, 2022 - link
Makes sense. Thank you!
abufrejoval - Monday, July 11, 2022 - link
I am assuming the primary issue is having first-rate editors accept third-rate salaries?
Unlike Anand, it wouldn't seem that Ian and Andrei were lured away by big cash.
mode_13h - Wednesday, July 13, 2022 - link
> Unlike Anand, it wouldn't seem that Ian and Andrei were lured away by big cash.
So, you found out where they went? Do tell.
Makaveli - Thursday, June 30, 2022 - link
Give it a rest, dude.
Qasar - Thursday, June 30, 2022 - link
Makaveli, he won't; according to him, and only him, the M1 is the best thing since sliced bread.
GeoffreyA - Thursday, June 30, 2022 - link
Lor', the Apple Brigade is already out in full force.
at_clucks - Saturday, July 2, 2022 - link
Look, if we're being honest, the M line punches above its weight, so to speak, and yes, it does manage to embarrass its traditional (x86) rivals on more than one occasion.
That being said, I see no reason to review it here and compare it to most x86 CPUs. The reason is simple: nobody buys an M CPU, they buy a package. So comparing the M2 against the R7 5800X3D is pretty useless. And even if you compare "system to system", you'll immediately run into major discrepancies, starting with the obvious OS choice, or the less obvious "what's an equivalent x86 system?".
With Intel vs. AMD it's easy: they target the same market and are more or less drop-in replacements for each other. Not so with Apple. The only useful review in that case is "workflow to workflow", even with different software on different platforms. Not that interesting for the audience here.
TheMode - Tuesday, July 5, 2022 - link
I never understood this argument. Sure, some people will decide never to buy any Apple product, but I wouldn't say they are the majority. Let's assume the M3 gets 500% faster than the competition at 5% of the power; I am convinced that some people would be convinced to switch over, no matter the package.
GeoffreyA - Wednesday, July 6, 2022 - link
I'd say it's interesting to know where the M series stands in relation to Intel and AMD, purely out of curiosity. But, even if it were orders of magnitude faster, I would have no desire to go over to Apple.
mode_13h - Thursday, July 7, 2022 - link
Yes, we want to follow the state of the art in tech. And when Apple is a leading player, that means reviewing and examining their latest, cutting-edge products.
Jp7188 - Friday, July 8, 2022 - link
Perhaps that could make sense in a separate piece, but the M1 doesn't really have a place in a gaming-focused review. M1 gaming is still in its infancy as far as natively supported titles go.
Skree! - Friday, July 8, 2022 - link
Skree!
mode_13h - Sunday, July 10, 2022 - link
I'm going to call spam on this. Whatever it's about, I don't see it adding to the discussion.
noobmaster69 - Thursday, June 30, 2022 - link
Better late than never, I guess.
Am I the only one who found it puzzling that Gavin recommends DDR4-3600 and then immediately tests with a much slower kit? And ran gaming benchmarks on a 4-year-old GPU?
Gavin Bonshor - Thursday, June 30, 2022 - link
We test at JEDEC settings to compare apples to apples with previous reviews. The recommendation is based on my personal experience and on what AMD recommends.
HarryVoyager - Thursday, June 30, 2022 - link
Having done an upgrade from a 5800X to a 5800X3D, one of the interesting things about the 5800X3D is that it's largely RAM-insensitive. You can get pretty much the same performance out of DDR4-2366 as you can out of 3600+.
And it's not that it is under-performing. The things it beats the 5800X at, it still beats it at, even when the 5800X is running very fast, low-latency RAM.
The upshot is, if you're on an AM4 platform with stock RAM, you actually get a lot of improvement from the 5800X3D in its favored applications.
Lucky Stripes 99 - Saturday, July 2, 2022 - link
This is why I hope to see this extra cache come to the APU series. My 4650G is very sensitive to RAM speed on the GPU side. The problem is, if you start spending a bunch of cash on faster system memory to boost GPU speeds, it doesn't take long before a discrete video card becomes the better choice.
Oxford Guy - Saturday, July 2, 2022 - link
The better way to test is to use both the optimal RAM and the slow JEDEC RAM.
sonofgodfrey - Thursday, June 30, 2022 - link
Wow, it's been a while since I looked at these gaming benchmarks. These FPS numbers are way past the point of the "minimum" necessary. I submit two conclusions:
1) At some point you just have to say the game is playable and just check that box.
2) The benchmarks need to reflect this result.
If I were doing these tests, I would probably just set a low limit for FPS and note how much (% wise) of the benchmark run was below that level. If it is 0%, then that CPU/GPU/whatever combination just gets a "pass", if not it gets a "fail" (and you could dig into the numbers to see how much it failed).
Based on these criteria, if I had to buy one of these processors for gaming, I would go with the least costly processor here, the i5-12600K. It does the job just fine, and I could spend the extra $210 on a better GPU, memory, or SSD.
(Note: I'm not buying one of these processors, I don't like Alder Lake for other reasons, and this is not an endorsement of Alder Lake)
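That pass/fail idea is easy to express; here is a minimal sketch, where the frame times and the 60 fps floor are just illustrative choices:

```python
def pass_fail(frame_times_ms, fps_floor=60.0):
    budget_ms = 1000.0 / fps_floor                    # 16.7 ms per frame at 60 fps
    slow = [t for t in frame_times_ms if t > budget_ms]
    pct_below_floor = 100.0 * len(slow) / len(frame_times_ms)
    return ("PASS" if not slow else "FAIL"), pct_below_floor

print(pass_fail([12.1, 15.8, 14.2, 16.0, 13.3]))   # every frame under 16.7 ms -> PASS
print(pass_fail([12.1, 15.8, 24.2, 16.0, 13.3]))   # one 24.2 ms frame -> FAIL, 20% below floor
```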
lmcd - Thursday, June 30, 2022 - link
Part of the intrigue is that it can hit the minimums and 1% lows for smooth play with 120Hz/144Hz screens.
hfm - Friday, July 1, 2022 - link
I agree. I'm using a 5600X + 3080 + 32GB of dual-channel, dual-rank memory, and my 3080 is still the bottleneck most of the time at the resolution I play all my games at: 1440p@144Hz.
mode_13h - Saturday, July 2, 2022 - link
> These FPS times are way past the point of "minimum" necessary.
You're missing the point. They test at low resolutions because those tend to be CPU-bound. This exaggerates the difference between different CPUs.
And such testing is relevant because future games will probably lean more heavily on the CPU than current games do. So, even at higher resolutions, we should expect future game performance to be affected by one's choice of CPU today to a greater extent than current games are.
So, in essence, what you're seeing is somewhat artificial, but that doesn't make it irrelevant.
> I would probably just set a low limit for FPS and
> note how much (% wise) of the benchmark run was below that level.
Good luck getting consensus on what represents a desirable framerate. I think the best bet is to measure mean + 99th percentile and then let people decide for themselves what's good enough.
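For reference, both figures fall out of the same frame-time log; a small sketch with made-up frame times:

```python
import numpy as np

frame_times_ms = np.array([6.9, 7.1, 7.4, 6.8, 7.0, 14.5, 7.2, 7.3, 6.9, 7.1])  # hypothetical log

mean_fps = 1000.0 / frame_times_ms.mean()
p99_ms = np.percentile(frame_times_ms, 99)     # slowest ~1% of frames
low_fps = 1000.0 / p99_ms                      # the usual "1% low" style figure

print(f"mean: {mean_fps:.0f} fps, 99th-percentile frame time: {p99_ms:.1f} ms (~{low_fps:.0f} fps)")
```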
sonofgodfrey - Tuesday, July 5, 2022 - link
> Good luck getting consensus on what represents a desirable framerate.
You would need to do some research (blind A-B testing) to see what people can actually detect.
There are probably dozens of human-factors PhD theses about this from the last 20 years.
I suspect that anything above 60 Hz is going to be the limit for most people (after all, a majority of movies are still shot at 24 FPS).
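For what such research could look like, here is a toy simulation of a forced-choice staircase with a synthetic observer whose 90 Hz limit is entirely made up; it is only a sketch of the method, not a claim about real perception thresholds.

```python
import random

TRUE_THRESHOLD_HZ = 90.0   # made-up "personal limit" for the synthetic observer

def observer_correct(test_hz):
    # Below their threshold the observer reliably spots the smoother clip; above it they guess.
    return random.random() < (0.95 if test_hz < TRUE_THRESHOLD_HZ else 0.5)

def staircase(start_hz=30.0, step_hz=10.0, trials=80):
    hz, streak, fail_points = start_hz, 0, []
    for _ in range(trials):
        if observer_correct(hz):
            streak += 1
            if streak == 2:                       # two correct in a row -> make it harder
                hz, streak = hz + step_hz, 0
        else:                                     # a miss -> record the difficulty, make it easier
            fail_points.append(hz)
            hz, streak = max(step_hz, hz - step_hz), 0
    # average of the refresh rates where the observer failed = rough threshold estimate
    return sum(fail_points) / len(fail_points) if fail_points else hz

print(f"estimated detection limit: ~{staircase():.0f} Hz")
```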
>You're missing the point. They test at low resolutions because those tend to be CPU-bound. This exaggerates the difference between different CPUs.
I can see your logic, but what I see is this:
1) Low-resolution tests are CPU-bound: at several hundred FPS, some of these tests are not CPU-bound, and a few percent of difference is no real difference.
2) Predictor of future performance: probably not. Future games, if they are going to push the CPU, will a) use even more GPU offloading (e.g., ray tracing, physics modeling), b) use more CPU cores in parallel, or c) use instruction-set additions that don't exist or aren't widely available yet (AVX-512, AI acceleration). IOW, your benchmark isn't measuring the right "thing", and you can't know what the right thing is until it happens.
mode_13h - Thursday, July 7, 2022 - link
> You would need to do some research (blind A-B testing) to see what people can actually detect.
Obviously not going to happen on a site like this. Furthermore, readers have their own opinions about what framerates they want, and trying to convince them otherwise is probably a thankless errand.
> I suspect that anything above 60 Hz is going to be the limit for most people
> (after all, a majority of movies are still shot at 24 FPS).
I can tell you from personal experience this isn't true. But, it's also not an absolute. You can't divorce the refresh rate from other properties of the display, like whether the pixel illumination is fixed or strobed.
BTW, 24 fps movies look horrible to me. 24 fps is something they settled on way back when film was heavy, bulky, and expensive. And digital cinema cameras are quite likely run at higher framerates, if only to avoid judder when re-targeting 30 or 60 Hz displays.
> At several hundred FPS on some of these tests they are not CPU bound,
When different CPUs produce different framerates with the same GPU, then you know the CPU is a limiting factor to some degree.
> and the few percent difference is no real difference.
The point of benchmarking is to quantify performance. If the difference is only a few percent, then so be it. We need data in order to tell us that. Without actually testing, we wouldn't know.
> Predictor of future performance: Probably not.
That's a pretty bold prediction. I say: do the testing, report the data, and let people decide for themselves whether they think they'll need more CPU headroom for future games.
> Future games if they are going to push the CPU will use
> a) even more GPU offloading (e.g. ray-tracing, physics modeling),
With the exception of ray-tracing, which can *only* be done on the GPU, why do you think games aren't already offloading as much as possible to the GPU?
> b) use more CPUs in parallel
That starts to get a bit tricky as you go to higher core counts. The more you try to divide up the work involved in rendering a frame, the more overhead you incur. Contrast that with a CPU with faster single-thread performance, where you know all of the additional performance will go toward reducing the CPU portion of frame preparation. So, as nice as parallelism is, there are practical challenges in scaling realtime tasks up to ever-increasing numbers of cores.
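The scaling worry can be put in Amdahl's-law terms; a quick sketch where the 20% serial share of frame preparation is just an assumed figure:

```python
# Amdahl's law: overall speedup is capped by the share of work that stays serial.
def speedup(cores, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (2, 4, 8, 16, 32):
    print(f"{cores:2d} cores -> {speedup(cores, serial_fraction=0.20):4.2f}x")
```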
> c) use instruction set additions that don't exist or are not available yet (AVX 512, AI accelleration).
Okay, but if you're buying a CPU today that you want to use for several years, you need to decide which is best from the available choices. Even if future CPUs have those features and future games can use them, that doesn't help me while I'm still using the CPU I bought today. And games will continue to work on "legacy" CPUs for a long time.
> IOW, you're benchmark isn't measuring the right "thing",
> and you can't know what the right thing is until it happens.
Let's be clear: it's not *my* benchmark. I'm just a reader.
Also, video games aren't new, and the gaming scene changes somewhat incrementally, especially given how many years it now takes to develop them. So, tests done today should be about as relevant over the next few years as tests from a few years ago are to gaming performance today.
I'll grant you that it would be nice to have data to support this: if someone would re-benchmark modern games with older CPUs and compare the results to the ones taken when those CPUs first launched.
GeoffreyA - Friday, July 8, 2022 - link
"BTW, 24 fps movies look horrible to me. 24 fps is something they settled on way back when film was heavy, bulky, and expensive."I can't help commenting that, for my part, movies look unsightly over 24 fps, giving them a low-quality TV-like appearance. Then again, that's me, a person who can't stand digital cameras to begin with and laments the phasing out of film.
mode_13h - Friday, July 8, 2022 - link
> for my part, movies look unsightly over 24 fps, giving them a low-quality TV-like appearance.
I know some people say that, but when you get accustomed to watching good quality motion interpolation, it's hard to go back.
Even famous Hollywood actors and directors have complained about motion interpolation, but I think the main thing they're worried about is that it exposes subtleties in their facial expressions that can reveal sub-par acting. For things like camera pans and action sequences, it's pretty much without a downside.
BTW, movies were originally shown with a strobed backlight. CRT TVs had a similar effect (which is apparent if you ever look at a fast-exposure photograph of a CRT display). When you start playing them on a display with continuous illumination, motion blur becomes much more apparent. That's why I think plasma and LCD TVs started going out of their way to add motion interpolation - it wasn't a trivial or inexpensive feature to add!
GeoffreyA - Saturday, July 9, 2022 - link
It may seem I'm coming from the Middle Ages, but too-smooth motion is the thing that strikes me as ugly in movies. Of course, I don't want a choppy frame rate, but there's a noticeable difference when departing from 24/25. When I think of Jackson with his slick, 48-fps Hobbit, I feel the man was out of his mind. My opinion is that the taste of the industry has gone down: it's all about "vibrant" colours and details now, an obsession with ever-increasing resolution, HDR, smooth motion, and fake-looking CGI. I contend that today's movies can try all they want but will never match the excellence of the past. (Say, will today's stuff ever beat the realism of Ben-Hur's chariot chase? I doubt it.)
Hollywood feels that infinite detail is the future, but the truth is, our minds fill in the blanks, and far more effectively. Indeed, one can dispense with motion altogether: 1962's La jetée, a succession of photos and sound, demonstrates this well.
mode_13h - Sunday, July 10, 2022 - link
There's good stuff getting made today. It just might not be the top Hollywood blockbusters. With streaming being so pervasive and the tech & tools of movie making now being so accessible, the door is wide open to just about anyone with ideas.
Far be it from me to convince you, though. Watch whatever you like. 24 Hz, if you prefer. Just don't try to tell me that a low framerate adds to the experience rather than detracts from it. We'll just have to disagree on that.
P.S. If you want to talk about letting your mind fill in the blanks, it's hard to beat reading, radio, and podcasts. I'm so glad I read Dune before watching it, because I got the chance to conjure all my own imagery and that made it so much more fun to see what the different film versions came up with.
GeoffreyA - Sunday, July 10, 2022 - link
You're right, reading, radio, and podcasts are far better than gazing at video; and I admit I feel impoverished because I read so little these days. Time was, all I did was read, nothing else.
Radio dramas are enjoyable and stimulating. Last year, I heard a few episodes of "Suspense" from the '40s/'50s. Fantastic and of a high quality, the top actors of the time participating. I remember one with Ava Gardner and another with James Stewart. Reminds me, I must return to "Suspense" again!
mode_13h - Wednesday, July 13, 2022 - link
I used to listen to audio books when I had a long commute. I still listen to stuff while doing chores, but most of it is spent just keeping up on the news. So much crazy stuff going on in the world...
GeoffreyA - Thursday, July 14, 2022 - link
Exactly, and with no end in sight.
bji - Thursday, June 30, 2022 - link
The Ryzen 7 5800X is listed at $350 in your first page chart but $449 in every one of your benchmark charts (or at least, the ones I've seen thus far, on the first page).
yankeeDDL - Thursday, June 30, 2022 - link
Typo: "...when comparing the Ryzen 9 5900X (12c/16t)" should be ".../24t)".
spaceship9876 - Thursday, June 30, 2022 - link
Did you guys forget to post an article about the new CPU and GPU cores announced by ARM a few days ago?
iphonebestgamephone - Thursday, June 30, 2022 - link
They probably found it not as important as this one.
Kangal - Friday, July 1, 2022 - link
There's no Andrei... so maybe you're right.
iphonebestgamephone - Friday, July 1, 2022 - link
Andrei be busy with the Nuvia cores.
Silver5urfer - Thursday, June 30, 2022 - link
Forget this chip, because at this point anyone is better off with a 5900X: it's far superior in MT and has more cores. This thing is locked for tuning, and the X3D stack causes a clock-speed deficit on top. The only advantage is the low voltage for PBO, as it runs at only 1.3V versus the usual 1.4V for Zen 3.
The major issue is AMD's BIOS and firmware; it's not up to the mark. The 1.2.0.7 AGESA is finally out and fixes a lot of things, BUT the core USB problems are not fixed. Anyone who has Ryzen 2000 should get a 5900X once Ryzen 7000 launches: not only do you get more cores for cheap, you also get a beast at MT that can be used for any task, not some gaming-only junk. This thing struggles in RPCS3 emulation too; you need ST performance and high clocks for that and for other major workloads. The L3 cache is great, but the limitations are not worth spending over $450 on this. You can get a 10850K for $320 if you look for deals; that's a 10C/20T CPU which will destroy this processor in all workloads, and if you add an OC it will be as fast as 12th gen. Check the Tom's Hardware CPU ranking list for gaming here and see for yourself how 10th gen fares better, and you don't have to deal with BS BIOS issues, USB problems, etc.
https://www.tomshardware.com/reviews/cpu-hierarchy...
For those who want to buy new, wait for AMD Ryzen 7000 or Z790; Z690 has ILM problems, you must avoid it. Ryzen is a buggy platform. You are left with Intel 10th gen; it's the best option, but PCIe 3.0 is its downside.
nandnandnand - Thursday, June 30, 2022 - link
The fact that it can often trade blows with the 12900K/S in gaming makes the 5800X3D hard to dismiss so easily for current AM4 owners. You could get a cheaper 5900X, but if you don't need the extra cores, you won't notice the difference.
It would be better to get it on sale for far less than $450, but the supply might be kept low to make that difficult. AMD gets more $$$ from sending these cache chiplets to Epyc Milan-X customers.
Going forward, I think AMD should offer the 16-core 7950X only with V-Cache, delaying it until after the other models, and the 7800X both with and without V-Cache.
Khanan - Thursday, June 30, 2022 - link
That won't happen, as again the 7950X is a mixed CPU, not entirely for gamers. So the likelihood is high that only the 8-core will get 3D V-Cache again.
nandnandnand - Thursday, June 30, 2022 - link
I don't think they will launch it on only the 8-core two generations in a row. The 5800X3D is an experiment; soon it will be time to make it normal. The 7950X is the obvious choice since it uses good chiplets and it's a low-volume halo product.
It would also be nice to see a 6-core with a bad-yield partial cache chip on top, but that's probably a pipe dream.
Khanan - Friday, July 1, 2022 - link
3D V-Cache mostly scales for gaming, so again it makes no sense to give it to a production CPU. And lol, no, the 7950X is no low-volume product, it sells in the millions. It's a content creation CPU. Also, who said the 5800X3D is an experiment? Lol, so many nonsensical comments here
Silver5urfer - Friday, July 1, 2022 - link
The 5800X3D is an experimental SKU. First, it cannot handle tuning; second, it's not high saturation since Zen 3 is already at its peak, which means it's low volume. Third, this is a reject EPYC chiplet, which is why AMD segregated it to a single SKU instead of a whole stack of X3D refreshes on Zen 3. Plus, this thing is just AMD looking at how things work IRL when they use their 3D cache for consumer processors, to understand how the market reacts.
It is not going to win any new consumers for AMD; AM4 is a dead-end socket. Buying this SKU at $450 is worse, and I certainly would not, given the QC of AMD's firmware.
Anything I disagree with = nonsense, I guess lol.
Kangal - Friday, July 1, 2022 - link
It would make more sense if AMD did one more "experimental" release with V/3D-Cache on their 8-core Zen 4 processor (r7-7800X3D). But then I think they should offer it throughout the product stack, utilising the "X" moniker, and keeping the planar/2D cache for the regular options:
- r9-8950 vs 8950X
- r9-8900 vs 8900X
- r7-8800 vs 8800X
- r5-8600 vs 8600X
Silver5urfer - Friday, July 1, 2022 - link
That would eat into Zen 4, so they avoided it. It's just like Intel's Rocket Lake: Intel marketed a CPU with a 2C/4T deficit purely for gaming but gave up on the IMC, efficiency (which was already poor), and MT workloads. Both did half-measures on the socket refreshes.
If AMD did a whole stack... plus I think Zen 3D is too late. I mean, the CPU uArch design team's R&D and effort is probably better spent on Zen 4, since allocating more time to make X3D work across the whole Zen 3 lineup and giving it a full refresh = a lot of work / money / resources.
Shame, because it would have been a solid product if they had optimized it from the core up, esp. the IO die; that thing is the biggest Achilles heel of AMD Ryzen Zen 3. Also, given the nature of their AGESA and QC, it would definitely have had more impact. EPYC X has lower clocks, so it's pretty much better to bin the chiplets and put the money there, as it will net them more cash; Zen 3 was already heavily saturated in the market.
I think for the future we will have to see how AMD adds X3D to their SKUs. I do not think they will do a full stack, but who knows? Maybe they can address the heat and tuning, since Zen 4 might be built from the ground up with massive improvements, esp. given its nature: a marginal IPC boost, but higher clocks, more stability, and a more mature platform this round.
Khanan - Friday, July 1, 2022 - link
Nah, experimental CPUs aren't released as a common end-user and server product. Please go inform yourself on PC hardware, you don't really know much, it's silly.
Kangal - Friday, July 1, 2022 - link
Facepalm. Everything is experimental when it is new. I specifically said they should repeat what they did here, and then later, when it is fleshed out better, apply those innovations to the mainstream products. Instead of "please go/inform yourself" maybe you should "learn to read"?
Case in point: AI/ML/Neural Networks. They have been pointless hardware for several years, and made processors "worse" due to the added silicon cost, consumer price, design complexity, software stack, thermals and energy use. However, today they are a welcome addition and in some cases crucial for functionality (that innovation made Rosetta2 possible, relying on this hardware in Apple M1 chips).
Leeea - Thursday, June 30, 2022 - link
Not having USB problems here. Saw there was a BIOS patch a while back; have you updated your BIOS?
Silver5urfer - Thursday, June 30, 2022 - link
I do not have the AMD platform. I waited for a year to see what AMD would do for the B2 stepping and AGESA 1.2.0.2, which was supposed to come out in April 2021 fixing the problem. I kept searching online at OCN and r/AMD constantly, and found that the brand new X570S and the top-tier boards exhibit the problem regardless of stepping. 1.2.0.7 is supposedly stable, BUT it has its share of issues with DRAM OC. And finally, the processors from AMD cannot sustain a single clock speed multiplier ratio. Plus, with the voltage coupled with all of that, I did not want to delve into the platform once I saw 3 users on a forum I visit mention the USB problems.
It's hit or miss. You are lucky if you do not have it. But I also think that after pushing DRAM past 3600MHz the IOD gets flaky; it won't be able to cope and causes a lot of stability issues. And a friend of mine ran a 5900X on an X470 C7H with Windows 7 and got a lot of instability. I want Windows 7 too... which caused me to skip it.
Khanan - Friday, July 1, 2022 - link
You did not have it? So you're just a salty Intel fanboy that talks trash. Good to know, thanks.
Khanan - Friday, July 1, 2022 - link
Because your CPU got eclipsed, hahaha, cry more. I enjoy it.
Silver5urfer - Friday, July 1, 2022 - link
This is AnandTech; of all the folks here, I see salt only from you, which makes it self-explanatory who is the one doing the usual nonsense of fanboy slandering.
The product is shoddy. I called out Intel for the trash Z690 because of the ILM issues, but because I said bad things about your precious CPU, you came out of your small cocoon and started the nonsense.
Here's your shoddy garbage, now cry more.
https://old.reddit.com/r/Amd/comments/ughkay/worse...
https://old.reddit.com/r/Amd/comments/twph8r/if_yo...
https://old.reddit.com/r/Amd/comments/sqxby/usb_st...
https://www.overclock.net/threads/amd-ryzen-5000-z...
Eclipsed by garbage? Yep, I can say that and be perfectly alright. I'd rather stick with a stable system.
Silver5urfer - Friday, July 1, 2022 - link
Fixed link for one of them - https://old.reddit.com/r/Amd/comments/sqxby0/usb_s...
teldar - Friday, July 1, 2022 - link
You're linking to posts with complaints about beta BIOS installs? Beta, use-at-your-own-risk firmware?
Makaveli - Monday, July 4, 2022 - link
Guy is a troll, ignore him.
Silver5urfer - Monday, July 4, 2022 - link
If you do not want to read them and try to understand the issue at hand, better stop there; do not dismiss everything you do not understand as "beta, use at own risk." It's laughable how you dismiss the platform problems. You clicked the first link at the top and called everything beta.
1.2.0.7 is currently a beta release; the rest of them are not. On top of that, many have already said 1.2.0.7 is better for platform stability than previous AGESA versions, esp. with the TPM issues since Windows 11 dropped. Then again, 1.2.0.7 is bad for DRAM OC. You will dismiss the link below as well.
https://www.deskmodder.de/blog/2022/06/06/agesa-1-...
Makaveli is an even bigger joker, resorting to slander of all things.
erotomania - Thursday, July 7, 2022 - link
1.2.0.7 is NOT beta. I only looked at Asus, but the 1.2.0.7 BIOSes (which I have had installed for at least a month) are not listed as beta. They may have been listed as such in the first few days of release.
GeoffreyA - Friday, July 8, 2022 - link
I've been using 1207 for about a month, and it's running without problems on my B450 Tomahawk; but I did notice that MSI seemingly removed this BIOS from the website, the current one reverting to 2020's AGESA 1006. A quick search online shows that some people have had issues with it.
Khanan - Thursday, June 30, 2022 - link
Nah, it's excellent for gaming, far better than the 5900X. The 5900X only makes sense for creators, same as the 5950X.
Silver5urfer - Thursday, June 30, 2022 - link
I do not crave that pathetic 1-2%; I want a machine that does everything, from my 4K muxing to whatever I throw at it, including a ton of MT workloads. The CPU is also locked, and I do not want that either. The 5900X is a far better purchase.
A PC is not just damn "gaming-only" junk. It should be able to do everything. If I wanted it just for gaming, I would get any CPU and be done with it.
nandnandnand - Thursday, June 30, 2022 - link
Just because a product doesn't meet your needs doesn't make it junk.
Most people don't even need 8 cores, much less 12-16.
Threska - Saturday, July 2, 2022 - link
Those doing virtualization might, be it home or office.
nandnandnand - Saturday, July 2, 2022 - link
And they will buy the product they need.
Khanan - Friday, July 1, 2022 - link
1-2%? Lol, go inform yourself a little better; this CPU is 15% ahead of Zen 3 on average.
Silver5urfer - Friday, July 1, 2022 - link
Sure, 15%, more marketing BS, eh? Here's one which shows 8% tops. Also, TPU scores are all over the place; I noticed it when I checked the 11900K review's 10900K numbers and then the 12900K review's numbers. Still, it's some metric to use. Also go and check out the Tom's Hardware list that I mentioned on the last page. There's not even a 10% difference, and you say 15%; this type of BS is really annoying.
https://tpucdn.com/review/amd-ryzen-7-5800x3d/imag...
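Part of the 8% vs 15% disagreement comes down to which games each outlet tests and how the per-game results are averaged (typically a geometric mean of ratios). A minimal sketch of that effect, with entirely made-up per-game numbers:

# Minimal sketch of why different game suites yield different "average" uplifts.
# Every per-game ratio below is hypothetical, purely to illustrate the math.
from math import prod

def geomean_uplift(ratios):
    # Geometric mean of per-game performance ratios (new CPU / baseline), minus 1.
    return prod(ratios) ** (1 / len(ratios)) - 1

cache_heavy = [1.30, 1.25, 1.18, 1.10]   # sims, MMOs, flight sims (hypothetical)
cache_light = [1.03, 1.05, 1.02, 1.08]   # GPU-bound AAA titles (hypothetical)

print(f"cache-heavy suite: +{geomean_uplift(cache_heavy):.1%}")
print(f"cache-light suite: +{geomean_uplift(cache_light):.1%}")
print(f"mixed suite:       +{geomean_uplift(cache_heavy + cache_light):.1%}")

With these placeholder numbers the same CPU looks like a ~20% uplift in one suite, ~5% in another, and ~12% in the combined set, which is roughly the spread being argued about here.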
Khanan - Friday, July 1, 2022 - link
It depends on the games, sure. Still your response was nonsense, the 5800X3D is far better for gaming than the 5900X, those extra cores are completely useless to gamers.
hfm - Friday, July 1, 2022 - link
100% true. My 3080 is still the bottleneck more often than not, not my 5600X. Unless you're competing at esports or playing a game that is actually CPU bottlenecked, for most people a 5600X is still more than enough. Both the 5800X3D and 5900X are still enthusiast-level, or for those who know they need it for productivity, where time == money.
Khanan - Friday, July 1, 2022 - link
Too bad that it’s over 20% faster than the 5800X in this review: https://cdn.mos.cms.futurecdn.net/uJUYKcHoYUmhoUw5... Generally AMD doesn’t lie, the 15% estimation is correct.
Khanan - Friday, July 1, 2022 - link
https://cdn.mos.cms.futurecdn.net/uJUYKcHoYUmhoUw5...
hMunster - Thursday, June 30, 2022 - link
[META] Could you please NOT include auto-playing videos on all pages? When I'm sitting in a park, my laptop tethered to my mobile phone for internet access, the last thing that I need is something that drains my battery and my data plan!
nandnandnand - Thursday, June 30, 2022 - link
You should install AdblockPlus or something.
Khanan - Friday, July 1, 2022 - link
Nah, he is right.
hfm - Friday, July 1, 2022 - link
It would be nice to be able to give them the ad revenue without it being an egregious overstepping of bounds.
Silma - Thursday, June 30, 2022 - link
It would be useful to factor in energy costs, perhaps over 3 years. Pretty sure AMD's offering will destroy Intel's.
Also, please don't stop at listing processors' TDP; show a more realistic number.
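For anyone wanting to do that math themselves, a rough sketch follows; every figure in it (power draw, hours of use, electricity price) is a placeholder to be swapped for measured values:

# Back-of-the-envelope energy cost over three years. Every figure is a
# placeholder; substitute measured package power, usage and local rates.
def energy_cost(avg_watts, hours_per_day, years, price_per_kwh):
    kwh = avg_watts * hours_per_day * 365 * years / 1000
    return kwh * price_per_kwh

# Hypothetical average draw under a mixed load, not measured values:
for name, watts in [("lower-power CPU", 90), ("higher-power CPU", 160)]:
    print(f"{name}: ~${energy_cost(watts, 4, 3, 0.30):.0f} over 3 years")

With these placeholder inputs the gap works out to roughly $90 over three years, which shows why the difference matters more for heavy daily use than for light gaming.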
jamesfuston - Thursday, June 30, 2022 - link
The 5800X3D is a total monster in MMOs. ESO, WoW, FF14, etc. I can't recommend this chip highly enough if those are your mainstay games.
Khanan - Thursday, June 30, 2022 - link
There's a weird mistake in the logic of the article. IPC is mentioned to be lower. But for a lower-clocked chip like the 5800X3D to beat Intel's auto-overclocked CPUs, IPC in fact has to be far higher for it to win; otherwise it wouldn't, and it would always be behind. IPC is everything it has, and it is the absolute IPC king when it comes to gaming. The 12700K only has better performance in apps that aren't games; otherwise it doesn't stand a chance.
nandnandnand - Thursday, June 30, 2022 - link
We're starting to see confusion over what to call the gains caused by increasing cache. This is a chip that loses to the 5800X in Cinebench, but wins in most games.
Khanan - Friday, July 1, 2022 - link
As the CPU is mostly made for gaming, I would rate its IPC based on games and not apps.
Silver5urfer - Friday, July 1, 2022 - link
"Doesn't stand a chance"? Wrong again. Once a 12700K is unlocked, with the E-cores disabled and an OC applied, it will beat any processor in gaming. Cache is not IPC and IPC is not cache; cache helps, the proof being the L4 eDRAM on the i7-5775C and the Haswell i7-4980HQ BGA processor.
Gaming is nothing to sneeze at; also, these are all running out of the box at stock. With a little tweaking, AMD is toast, because AMD cannot handle tweaking properly at all. It's like: buy it and run it with PBO2 at max; anything else = instability.
Khanan - Friday, July 1, 2022 - link
Not really, and you have proven to be a biased Intel fanboy. There are multiple benchmarks with the 12900KS with OC applied, and it still loses against the 5800X3D in a 50-game benchmark. Too bad, fanboy.
Khanan - Friday, July 1, 2022 - link
And that’s the 12900KS, the 12700K is a joke.
Kvaern1 - Sunday, July 3, 2022 - link
No, there's no mistake. The 5800X3D indeed has lower IPC but wins in specific workloads where its larger cache can keep the CPU fed with data, while other CPUs have to wait ages to fetch the data from main RAM.
Spaceship - Thursday, June 30, 2022 - link
Chart color fail. Rookie mistake.
Why is the 5800X3D in dark blue while the 5800X is in bright orange?
This review is for 5800X3D. 5800X3D should have the most obvious color, not 5800X.
People kept getting distracted by 5800X.
Get someone who isn't color blind to make the charts.
Rudde - Friday, July 1, 2022 - link
The usual colour pattern is: red for AMD processors being tested, orange for AMD comparison, dark blue for Intel processors being tested and blue for Intel comparison.
asmian - Friday, July 1, 2022 - link
No, I have to agree, this was constantly confusing for me. The most stand-out colour MUST be the thing that you're reviewing. Having a near comparison be the brightest or most eye-catching is very misleading.
hfm - Friday, July 1, 2022 - link
I agree as well. There was also a ton of typos and errors. They need a second pair of eyes on these things.
MDD1963 - Thursday, June 30, 2022 - link
What a prompt, timely review! :/
TeXWiller - Thursday, June 30, 2022 - link
For some impressive engineering software results, take a look at Phoronix's tests. One thing I personally miss in most reviews is using at least the "slower" preset for x265 and trying the deblocking filters, particularly EEDI2, for a low core-count performance test. Using the faster presets quickly turns a compute-bound test into a memory bandwidth, or even an IO bandwidth, test.
TeXWiller - Saturday, July 2, 2022 - link
I noticed I wrote deblocking when I actually meant deinterlacing there.
garblah - Friday, July 1, 2022 - link
The biggest performance gains with the 5800X3D come from games that have less-than-stellar multi-core utilization and yet are very demanding from a single-core performance standpoint.
MSFS 2020 and DCS, for example, especially in VR, since you have double the number of frames for the CPU to set up for each game frame. The 5800X3D performs 40 percent better than a 5600X in those situations. Dual-rank memory at 3600MHz is important to get the most out of it, too.
Escape from Tarkov is another one with huge gains from the 5800X3D.
Oxford Guy - Saturday, July 2, 2022 - link
‘As a result, in lieu of CPU overclocking, the biggest thing a user can do to influence higher performance with the Ryzen 7 5800X3D is to use faster DDR4 memory with lower latencies, such as a good DDR4-3600 kit. These settings are also the known sweet spot for AMD's Infinity Fabric Interconnect as set out by AMD.’
Perhaps it's a bit odd, then, to see DDR4-3200 listed below that paragraph, apparently the only RAM chosen for the testing.
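For context on why DDR4-3600 is called the sweet spot: it keeps the memory, memory-controller, and Infinity Fabric clocks in a 1:1:1 ratio at 1800 MHz, which is about as high as most Zen 3 parts are commonly reported to hold FCLK at 1:1. A small sketch of that arithmetic, treating the ~1800 MHz ceiling as the commonly cited figure rather than an AMD specification:

# AM4 clock relationship sketch, assuming the usual 1:1:1 (MCLK:UCLK:FCLK) mode.
# The ~1800 MHz FCLK ceiling is the commonly reported figure, not an AMD spec.
def am4_clocks_mhz(ddr_rating):
    mclk = ddr_rating / 2            # DDR transfers twice per memory clock
    return {"MCLK": mclk, "UCLK": mclk, "FCLK": mclk}

print(am4_clocks_mhz(3200))          # 1600 MHz across the board
print(am4_clocks_mhz(3600))          # 1800 MHz -- the oft-cited 1:1 sweet spot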
mode_13h - Saturday, July 2, 2022 - link
> 5800X3D Compute Analysis: Extra L3 Does Little For Compute Performance
Or, maybe your testing of "compute performance" is too limited to find the cases where it's a big win.
Phoronix found many cases where the extra L3 cache is a substantial win for technical computing.
https://www.phoronix.com/scan.php?page=article&...
What we see is that there are plenty of cases that benefit from it, substantially. As mentioned in this article, they found cases where the extra L3 cache is enough for it to pull ahead even of the i9-12900K, while also beating all of AMD's other desktop CPU models.
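One way to reconcile the IPC argument above with these results: extra L3 doesn't change how many instructions the core can retire per cycle when it is fed, but it lowers the average memory access time for workloads whose working set now fits, so measured IPC rises. A toy sketch with invented hit rates and latencies:

# Average-memory-access-time illustration; hit rates and latencies are
# invented round numbers, not measurements of any particular CPU.
def amat_ns(l3_hit_rate, l3_latency_ns, dram_latency_ns):
    return l3_hit_rate * l3_latency_ns + (1 - l3_hit_rate) * dram_latency_ns

small_l3 = amat_ns(0.70, 12, 80)   # working set spills to DRAM more often
large_l3 = amat_ns(0.90, 14, 80)   # tripled L3 captures more of the working set

print(f"smaller L3: {small_l3:.1f} ns/access, larger L3: {large_l3:.1f} ns/access")
# Fewer stalled cycles per access means more instructions retired per cycle at
# the same clock, which benchmarks report as higher effective IPC.

Workloads whose working set already fits in a normal L3, or vastly exceeds even 96MB, see little change, which is why the gains are so workload-dependent.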
MDD1963 - Monday, July 4, 2022 - link
Let's hear it for timely reviews!!!
mode_13h - Tuesday, July 5, 2022 - link
They know it's late, but I'd rather have it than not. Even though there are some gaps in their testing, and some of the usual analysis is lacking (e.g. latency analysis), it does give us some apples-to-apples data vs. other CPUs they have or will review.
Dark_wizzie - Wednesday, July 6, 2022 - link
I've been benchmarking Elder Scrolls 4: Oblivion, and it doesn't benefit a huge amount from cache. The 12900K's the way to go. Very interesting. On the other extreme, Fallout 4 benefits hugely from larger cache (50%).
abufrejoval - Monday, July 11, 2022 - link
Buying a 5800X3D at near 5950X prices would be a bit painful for an IT professional like me: I'd always go the other direction (as I did before the 3D was available).
But it was a no-brainer for my son, who only cares about having the best gaming experience that fits his budget.
He upgraded from a Kaby Lake i7-7700K with DDR3-2400 to the 3D on a Gigabyte X570S UD with DDR4-4000 for his RTX 3070, and he's worn a constant smile since.
Installation was a breeze, the BIOS update easy (the system even booted without it), XMP timings worked perfectly, and RAM bandwidth is 55GB/s on Geekbench 4, the best I ever measured (see the quick sanity check after this comment). Windows 10 and nearly 100 games across 6 SATA ports and 1 NVMe SSD didn't even flinch at the new hardware.
At that point in time, several weeks ago, it was quite simply the best gaming platform for a reasonable amount of money, and a simple air-cooled system that runs super cool and Noctua-quiet when the GPU tiger sleeps during 2D work or movies.
Yes, it will be outdated in a month or two, but that surely doesn't make it inadequate for some years to come (unless Microsoft pulls a Pluton stunt).
And maturity means more time for games.
I tend to spend more time on tinkering than gaming, but that's why having these choices is so great.
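As a quick sanity check on the 55 GB/s Geekbench figure mentioned in the comment above, here is the theoretical-peak arithmetic for dual-channel DDR4-4000; the result suggests the measurement is roughly 86% of peak, which is plausible for a synthetic bandwidth test:

# Sanity check on the ~55 GB/s Geekbench 4 figure quoted above: theoretical
# peak for dual-channel DDR4-4000 (64-bit channels, i.e. 8 bytes per transfer).
transfers_per_second = 4000e6
bytes_per_transfer = 8
channels = 2
peak_gb_s = transfers_per_second * bytes_per_transfer * channels / 1e9
print(f"theoretical peak: {peak_gb_s:.0f} GB/s; ~55 GB/s measured is "
      f"about {55 / peak_gb_s:.0%} of peak")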
Gavin Bonshor - Friday, July 29, 2022 - link
Hi everyone, I've seen the comments about some of the game data and I've investigated it just now. I found the issue and I've updated the gaming graphs with the correct data.
I have also pushed the corrected data to Bench. I plan to re-bench the Core i7-12700K and Core i5-12600K gaming results this weekend, so expect them to be updated in this review and in subsequent reviews.
Apologies for the discrepancies; a genuine mistake where a tonne of data is concerned.
mcnabney - Tuesday, August 9, 2022 - link
Sorry to necro this thread, but with Zen 4 coming I would be curious to see how the 5800X3D performs against the 7700X specifically in gaming. Getting to choose cheaper AM4 motherboards and much cheaper DDR4-3600 RAM (vs DDR5-6000) is going to impact the price/performance balance.
Will the 5800X3D be the chip for gamers to choose instead of Zen 4 in Q4 2022?