It should be out next year; AMD has been very much on the ball with Ryzen launches, hitting their promised dates more or less to the DAY, which is very nice. Basically, what they are promising for product delivery, they are doing, IMO. Not to mention TSMC recently announced volume production of their 7nm process, so GloFo will likely be very soon to follow, and AMD can use TSMC just the same :)
If you ever do fancy a bit more oomph in the meantime (and assuming IPC is less important than threaded performance, e.g. HandBrake matters more than PDF loading), a decent temporary sideways step for X79 is a XEON E5-2697 v2 (IB-EP). An oc'd 3930K is quicker single-threaded of course, but for multithreaded the XEON does very well, easily beating an oc'd 3930K, and the XEON has native PCIe 3.0, so no need to bother with NVIDIA's not-entirely-stable force-enable tool. See my results (for Firefox, set Page Style to No Style in the View menu):
I never felt limited by my i5-4670k either, especially mildly overclocked to 4.0GHz.
That is, until I had to build a new PC around the same old components because the MSI Z97 motherboard failed (thanks, MSI; it was 4 years old, but still...). So I picked up a new i3-8350K + ASRock Z370 bundled together for $200 at Microcenter a month ago, and it's a joke how much faster it is than my old i5.
First off, it's noticeably faster, at STOCK, than the max stable overclock I could get on my old i5. Granted, I replaced the RAM too, but it's still 16GB, now DDR4-2400 instead of DDR3-2133. I doubt that makes a huge difference.
Where things are noticeably faster comes down to boot times, app launches, and gaming. All of this is on the same Intel SSD 730 480GB SATA III I've had for years. I didn't even do a fresh install; I just dropped it in, let Windows 10 rebuild the HAL, and reactivated with my product key.
Even on paper, the 8th-gen i3s are faster than previous-gen i5s. The i3 at stock is still faster than the 4th-gen i5 mildly overclocked.
I wish I'd waited. It's compelling (although more expensive) to build an AMD Ryzen 2 system now. It really wasn't before, but now that performance is slightly better and prices are slightly lower, it would be worth the gamble.
I think there's something wrong with your old Haswell setup if the difference is that noticeable. I have every generation of Intel i7 or i5 except Coffee Lake running in two adjoining rooms, and I can't even notice a significant difference between my SANDY 2600K system with a SATA 850 EVO Pro sitting literally right next to my Kaby Lake i7 with a 960 EVO NVMe SSD. I want to convince myself how much better the newer one is, but it just isn't. And this is 5 generations apart for the CPUs/mobos, using one of the fastest SSDs ever made compared to a SATA drive (although about the fastest SATA drive there is). Coffee Lake is faster than Kaby Lake, but the gap between equivalent i7s is so tiny that I can't see myself noticing a major difference.
In the same room, across from these two, is my first Ryzen build, the 1800X, also with a 960 EVO SSD. Again, I can barely convince myself it's a different system than the Sandy 2600K with the SATA SSD. I have your exact Haswell i5 too, and it still feels fast as hell, especially for app launches and gaming. The only time I notice major differences between these systems is when I'm encoding videos or running synthetic benchmarks. Just for the thrill of a new flagship release, I ordered the 2700X too, and it'll be sitting next to the 1800X for another side-by-side experience. It'll be fun to set up, but I'm pretty convinced I won't be able to tell the two systems apart when not benchmarking.
Oh, I'm actually curious about your experience with all the systems.
I'm still running my i7 2700K at ~4.6GHz. I agree: it doesn't feel like a ~2012 CPU and it still does everything pretty damn well, but I'd like to know if you have noticed a difference between the new AMD chips and your Sandy Bridge. Same for when you assemble the 2700X.
I'm trying to find an excuse to get the 2700X, but I just can't find one, haha.
YukaKun, your 2700K is only at 4.6? Deary me, should be 5.0 and proud, doable with just a basic TRUE and one fan. 8) For reference btw, a 2700K at 5GHz gives the same threaded performance as a 6700K at stock.
And I made a typo in my earlier reply and mentioned the wrong XEON model; it should have been the E5-2680 v2.
For daily usage and stability, I found that 4.6GHz worked best in terms of noise/heat/power.
I also did not disable any power-saving features, so it doesn't run at full tilt when not under heavy load.
I'm using AS5 with a TT Frio (the original one) on top, so it's whisper quiet at 4.6GHz, and I like it like that. When I made it work at 5GHz, I found I had to run the fans near 100%, so it wasn't something I liked, TBH.
But, all of this to say: yes, I've done it, but I settled on 4.6GHz.
(an old thread, but in case someone comes across it...)
I use dynamic vcore, so I still get the clock/voltage drops when idle. I'm using a Corsair H80 with 2x NDS 120mm PWM fans, so it's quiet even at full load; there's no need for such OTT cooling to handle the load heat, but using an H80 means one can have low noise as well. An ironic advantage of the lower thermal density of the older process sizes: modern CPUs with the same TDP dump it out in a smaller area, making them more difficult to keep cool.
Having said that, I've recently been pondering an upgrade for much better idle power draw and a decent bump in threaded performance. Considering a Ryzen 5 2600 or Ryzen 7 2700, but I might wait for Zen 2, not sure yet.
No, it might have to do with the fact that the 8350K has 1.5x the cache size and beastly per-thread performance that is also sustained at all times, so it doesn't have to switch up from a lower-powered state (which the older CPUs were slower at), nor does it taper off as other cores get loaded. That is most noticeable in the things Samus mentioned, i.e. "boot times, app launches and gaming". Boot times and app launches are both essentially single-thread tasks with no prior context, and gaming is where a CPU upgrade like that will improve worst-case scenarios by at least an order of magnitude, which is really what's most noticeable.
For instance, if your monitor is 60Hz and your average framerate is 70, you won't notice the difference between 60 and 70; you will only notice the time spent under 60. Even a mildly overclocked 8350K is still one of the best gaming CPUs for this reason, easily rivaling or outperforming previous-gen Ryzens in most cases and often being on par with the much more expensive 8700K where thread count isn't as important as per-thread performance for responsiveness and eliminating stutters. When pushed to or above 5GHz, I'm reasonably certain it will still give many of the newer, more expensive chips a run for their money.
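To make that framerate point concrete, here's a rough Python sketch with made-up frame-time numbers (not measurements from any benchmark), showing how two runs with nearly the same average fps can differ wildly in time spent under 60 fps:

```python
# Rough sketch: two hypothetical frame-time traces with similar average fps
# can spend very different amounts of time below 60 fps. Values are made up
# for illustration, not benchmark data.

def summarize(frame_times_ms):
    total = sum(frame_times_ms)
    avg_fps = 1000.0 * len(frame_times_ms) / total
    # Time spent on frames slower than 16.67 ms (i.e. running under 60 fps)
    slow = sum(t for t in frame_times_ms if t > 1000.0 / 60)
    return avg_fps, 100.0 * slow / total

smooth = [14.3] * 100                 # steady ~70 fps, never dips
stutter = [12.0] * 90 + [40.0] * 10   # mostly fast, with periodic 25 fps hitches

for name, trace in [("smooth", smooth), ("stutter", stutter)]:
    avg, pct = summarize(trace)
    print(f"{name}: {avg:.0f} fps average, {pct:.0f}% of time under 60 fps")
```

The "smooth" trace never drops below 60, while the "stutter" trace spends over a quarter of its time there despite a similar average; that under-60 share is what you actually feel.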
Memory prices? Memory prices are still pretty much the way they've always been:
-- faster memory costs (a little) more than slower memory
-- larger memory sticks/kits cost (a little) more than smaller sticks/kits
-- last-gen RAM (DDR3) is (very slightly) cheaper than current-gen RAM (DDR4)
I suppose you can wait 5 billion years for the Sun to fade out, at which point all RAM (or whatever has replaced it by then) will have the same cost ($0...since no one will be around to buy or sell it)...but I don't think you need to worry about that.
You didn't write anything about price there... All you've said is that relative pricing for things is the same it has always been, and that's no surprise.
The $$$ cost of any given stick is more than it was a year or two ago. A 2x8GB DDR4-3200 G.Skill Ripjaws V kit is $180 on Newegg today. It was $80 two years ago. Clearly not the way they've always been...
Ridiculous comment. 7 years ago I bought 4x8 GB of RAM for $110. That same kit, from the same company, seven years later, now sells for $300. 4x16GB kits are around $800. Memory prices aren't at all the way they've always been. There is clear collusion going on. Micron and SK Hynix have both seen their stock price increase 400% in the last two years. 400%!!!!!
The price of RAM just keeps increasing and increasing, and the 3 manufacturers are in no hurry to increase supply. They are even responsible for the lack of GPUs, because they are the bottleneck.
I love how you ignored everyone that already smushed your talking points to focus on a post which was likely just poorly worded.
RAM prices have traditionally gone DOWN over time for the same capacity, as density improves. But recently the limited supply has completely blown up the normal price-per-capacity-over-time curve. Profit margins are massive. Saying this is "the same as always" is beyond comprehension. If it wasn't for your reply I would have sworn you were simply trolling.
Anyway, this is what a lack of genuine competition looks like. The NAND market isn't nearly as bad, but there are supply problems there too.
True. When prices double with no explanation, there must be collusion.
The same thing has happened with video cards. I have great doubts about bitcoin mining as a driver for those price increases. If mining were so profitable, you would think there would be a mad scramble to design cards specifically for mining. Instead, the load falls on the DIY consumer.
They DO design things specifically for mining. It's called an ASIC miner. Unfortunately for us, some currencies are ASIC-resistant, and in some cases they can potentially change the algorithm, which makes such (expensive!) development challenging.
Yep. I went with 16GB in 2013-2014 just because I figured, meh, what difference does $50-$60 make when building a $1000+ PC? These days I do a double take when choosing between 8GB and 16GB for PCs I build. Even hardcore gaming PCs don't *NEED* more than 8GB, so it's worth saving $100+.
Memory prices have nearly doubled in the last 5 years. Sure, there is cheap RAM; there always has been. But a kit of quality G.Skill costs twice as much as a comparable kit of quality G.Skill cost in 2012.
Your gaming benchmarks results are garbage and every other reviewer got different results than you did. I hope no one takes this review seriously as the data is simply incorrect and misleading.
I was wondering about gaming, so there is no mistake there? Ryzen 2 really does seem to top Intel. As of right now, I can't find the memory specs in the review; is it safe to assume you did as always and ran the highest non-OC speeds, so Ryzen is using faster DRAM? I also have yet to spot memory latency; any chance you have some numbers at 3600MHz vs Intel? Thanks.
It would be nice if any reviewer actually benchmarked storage devices, maybe even virtualization, because then we'd see Meltdown and Spectre mitigation performance. Then again, does AMD have anything for Spectre v2 yet? If not, who knows what that will do.
I notice that the systems had higher memory speeds, but for me, I believe single-threaded performance is more important than more cores. It would be biased if one platform were OC'd more than another, though. Personally, I don't overclock, except for what is provided with the CPU, like Turbo mode.
One thing that I foresee in the future is Intel coming out with an 8-core Coffee Lake.
But at least it appears one thing is over: this Meltdown/Spectre stuff.
CL can't scale to 8 cores...not without some serious changes to its architecture...Intel is in some trouble with this Ryzen refresh...also worth noting is that 7nm Ryzen 2 will likely bring a considerable performance jump while Intel isn't sitting on anything worthwhile at the moment.
All of Intel's 8-cores in HEDT except Skylake-X are based on their year-older architecture with a bigger cache and quad-channel memory.
So if Intel has the need, they will simply make a CL 8-core. The 2700X is pretty hungry when OC'd, so Intel doesn't have to worry at all about its power consumption.
> 2700X is pretty hungry when OC'd

And Intel chips aren't? If Zen+ is already on Intel's heels for both performance per watt and raw frequency, a 7nm chip with improved IPC and/or cache is very likely going to pull ahead by a significant margin. And even if it doesn't, it's still going to eat into Intel's profit, as their next tech is 10nm vs. AMD's 7nm, meaning more optimal use of wafer real estate for the latter.
AMD has really climbed back to the top of their game; I've been in the Intel camp for the last 12 years or so, but the recent developments throw me way back to the K7 and A64 days. It almost makes me sad that I won't have any reason to move to a different mobo in the next 6–8 years or so.
Amusing to look back given how things panned out. So yes, Intel released the 9900K, but it was 100% more expensive than the 2700X. :D A complete joke. And meanwhile tech reviewers raved about a measly 5 to 5.2GHz OC, on a chip that already has a 4.7GHz max turbo (major yawn fest), focusing on specific 1080p gaming tests that gave silly-high fps numbers favoured by a market segment that is a tiny minority. Then what happens? RTX comes out and pushes the PR focus right back down to 60Hz. :D
I wish people would stop drinking the Intel/NVIDIA Kool-Aid. AMD does it as well sometimes, but it's bizarre how uncritical tech reviewers often are about these things. The 9900K dragged mainstream CPU pricing up to HEDT levels; epic fail. Some said oh, but it's great for poorly optimised apps like Premiere, completely ignoring the "poorly optimised" part (i.e. why the lack of pressure on Adobe to write better code? It's weird to justify an overpriced CPU on the back of a pro app that ought to run a lot better on far cheaper products).
It's possible that the first consumer Intel 8-core will be based on Ice Lake. Cannon Lake will probably be largely limited to low-power CPUs and will probably top out at 4 cores. Of course, if Ice Lake is delayed again, Intel might scale Cannon Lake out to more cores. Cannon Lake will be just a 10nm port of the Skylake/Kaby/Coffee Lake architecture, so it will most likely provide mostly power efficiency gains.
Do you really need to be spoon-fed information? How long would it take you to find the other reviews by yourself? PCPer, TweakTown, Tom's Hardware, HotHardware, and ComputerBase all had different results (can't post links due to spam protection). Not to mention you'd have to be totally tech-illiterate to believe that a stock 2600 can beat an 8700K by such a huge margin. Meltdown/Spectre patches don't affect gaming performance that much, so don't put the blame on that. The result discrepancy is embarrassing; there goes the last speck of reputation Anandtech had as a reliable source of tech news.
Anandtech has no responsibility to go out and ensure their results match up with anyone else's. They run their own selection of tests with their own build and report the numbers. They provide the test setup; if you can't spot the differences, that's your own issue.
"Anandtech has no responsibility to go out and ensure their results match up with anyone else’s"
Responsibility? No. But should we anyhow? Yes.
Our responsibility is accuracy. If something looks weird with our data - which it does right now - then it's our job to go back, validate, and explain the results that we're seeing. If our results disagree with other sites, then that is definitely an indication that we may have a data issue.
I bet none of the other sites applied the Spectre and Meltdown patches for Intel, because they don't care about such things. Intel fanboys are now crying because someone actually showed true numbers.
I was also totally misled. I came here first, only to find out, after having misled people online, that this site's results are completely off. I am a big AMD fan, but these results need to be audited and corrected.
Don't be a prick. Ian isn't lying to you. He's sharing the data his benchmarking showed. It being different to other reviewers is something he'll gladly look into, and is in fact looking into, but you ought to show yourself as a respectful individual when you point it out, otherwise you won't be listened to.
I did one for you since you seemed to be having issues. If you read below, they use a different methodology for estimating fps vs. what AnandTech did in their review. The result is nearly the same: solid gains for AMD on an incremental upgrade. Was that so hard?
Correct me if I'm wrong, but TechRadar seems to have tested only two games and provides minimal information on how they tested. Plus, Intel is still a bit faster in their tests.
AMD hardware crushes Intel on Geekbench. You have to look at all the tests together, and never focus on one test, unless that is the only thing you are buying your processor for, like gaming or video encoding.
Was AMD's recently announced Spectre mitigation used in the testing? I'm sorry if it was mentioned in the article; it's long and I'm still in the process of reading it.
I'm a big fan of AMD but want to make sure the comparison is apples to apples. BTW, does anyone have a link to a performance impact analysis of AMD's Spectre mitigation?
I can't find any other site using a BIOS as recent as the 0508 version you used (on the ASUS Crosshair VII Hero). Most sites are using older versions. These days, BIOS updates surrounding processor launches make significant performance differences. We've seen this with every Intel and AMD CPU launch since the original Ryzen.
Hi, I'm looking to gain some insight into your testing methods. Could you please explain why you test at such high graphics settings? I'm sure you have previously stated the reasons, but I am not familiar with them. My understanding has always been that this creates a graphics bottleneck?
When you consider that people want to see benchmark results for how THEY would play the games or do work, it makes sense to focus on that sort of thing. Who plays at 720p? Yes, it may show CPU performance, or eliminate the GPU as the limiting factor, but if you have a GeForce GTX 1080, then 1080p, 1440p, and 4K performance is what people will actually game at.
The ability to actually run video cards at or near their ability is also important, which can be a platform issue. If you see every CPU showing the same numbers with the same video card, then yea, it makes sense to go for the lower settings/resolutions, but since there ARE differences between the processors, running these tests the way they are makes more sense from a "these are similar to what people will see in the real world" perspective.
FWIW, I took five minutes to see what you guys are talking about. To me it looks like Tom's is screwed up. If you look at the time graphs, it looks like the purple line is on top most of the time, but the summaries have that CPU in 3rd or 4th place. E.g. https://img.purch.com/r/711x457/aHR0cDovL21lZGlhLm...
At any rate things are generally damn close, and they largely aren’t even benchmarking the same games, so I don’t understand why a few people are complaining.
"Our test rigs now include Meltdown And Spectre Variant 1 mitigations. Spectre Variant 2 requires both motherboard firmware/microcode and operating system patches. We have installed the operating system patches for Variant 2.
Today's performance measurements do not include Intel's motherboard firmware mitigations for Spectre Variant 2 though, as we've been waiting for AMD patches to level the playing field. Last week, AMD announced that it’s making the mitigations available to motherboard vendors and OEMs, which the company says should take time to appear in the wild. We checked MSI's website for firmware updates applicable to our X370 platforms when AMD made its announcement, but no new BIOSes were available (and still aren't).
Unfortunately, we were only made aware that Variant 2 mitigations are present in our X470 board's firmware just before launch, precluding us from re-testing the Intel platforms with patches applied. We're working on this now, and plan to post updated results in future reviews.
The lack of Spectre Variant 2 patches in our Intel results likely give the Core CPUs a slight advantage over AMD's patched platforms. But the performance difference should be minimal with modern processors."
For those that are TL;DR in their viewpoint: unlike Anandtech, TH did NOT include all of the Spectre/Meltdown patches, and even said that there might be differences in their test results.
Other reviewers also had their setups Meltdown/Spectre patched, and it's already been confirmed that these patches don't greatly impact gaming performance at all. It's clear that Anandtech's results are wrong here. I have read 12 other reviews, and most of their results differ from the ones you got. You'd have to be delusional to take just one review as the absolute truth.
Incorrect. Those reviews were conducted back in January 2018 (look at the review dates). Microsoft issued new patches for Meltdown and Spectre earlier this month (April 2018). I could find no other performance review showing the performance gain/loss for Intel CPUs with the new patches other than the one posted now by AnandTech.
The only way to know for sure is for each hardware reviewer to provide the exact version of Windows 10 they used for testing. This will prove whether or not they ran benchmarks with the most current Windows updates/patches.
Looking at Tom's results, they have OC'd Intels in first place. Other than that, it's damn close. Is there a chance you're just browsing graphs to see who is in the top spot and not really comprehending the results?
Aside from that, the test setups and even benchmarks used are different. You owe Ian an apology for not realizing you’re comparing OC results to his.
Yes. Ian is a top reviewer. At worst, he made a mistake in his evaluations. It happens to the best of us. However, I have an issue with non-OC tests. It seems to me people purchase overclockable processors and graphics cards to overclock them. At least the game results should probably be based on OC benchmarks.
@Silma No, it makes more sense to do it this way. Everyone who buys these processors is guaranteed to have a part that will run at the manufacturer's spec. OC is a random lottery.
Ahem, those initial results were Meltdown-only and from January; there has been a boatload of fixes since then on both the Meltdown and Spectre side. So that data is not correct anymore. Even in January, VMs and everything else I/O-intensive already saw a serious performance hit.
I think what you're seeing with the other reviews is old database information being used without the Spectre and Meltdown patches. They only say that Ryzen+ was tested with the latest patches; it doesn't say that they retested all the Intel systems with the BIOS fix and patches applied.
It'd be interesting to have an article running all these tests pre- and post-patches to show how much they affect the system. There seems to be a lot of confusion about how bad it is.
You need to take into account the latest system/BIOS patches for Meltdown/Spectre as well. Anandtech is not manipulating the results. Claiming that they get "different" results from "everybody else" (especially when you fail to cite the differences) strains your credibility.
Their benchmarks are garbage? You are welcome to buy a 2700X and test for yourself. The benchmarks they used are, for the most part, built into each game. It coincides pretty much with what I know of Ryzen, Coffee Lake, and Ryzen 2xxx.
While out of respect for the reviewer's hard work, I wouldn't describe the results as "garbage", they certainly don't match up with results from other publications.
Yes, Anandtech's are honest and objective...I believe TechRadar was comparing Coffee Lake OC'd to 5.2GHz vs a Ryzen 2700X at 4.1GHz...the stock turbo alone hits 4.3GHz...they are slanting it to benefit Intel...a 5.2GHz stable overclock on Coffee Lake is very hard to achieve; maybe 10-15% of CPUs can do it.
Well, golly gee... did the other reviewers use the *exact* setup as used here? No? Hmm... I guess that then makes your grouchy mcgrouchface missive not worth consideration then, no? If anyone is to not be taken seriously here, it's you.
Typical ad hominem and burden of proof fallacies. Well done, Chris113q.
WRONG. AT has it right; these are properly patched systems. Heavy I/O perf loss with the Intel Meltdown patches has been well known for months. See the top comment here: https://np.reddit.com/r/pcmasterrace/comments/7obo... Prove your claim that the data is incorrect or misleading in any way whatsoever, child.
One of the problems is that other reviewers see a less pronounced difference between the new AMD Ryzen CPUs and the older ones. Most reviewers claim that they tested with all available patches in place.
Your conclusion that AT has it right is based on what? Your belief that AT can't make mistakes? Maybe there is a logical explanation, but for now, it seems that AT might have done something wrong.
I have evidence to back up my claim; users with no motivation to mislead agree with AT, and did months ago. You have no evidence, simply butthurt. Good luck.
They definitely used slower memory. Don't know if that's the thing. Don't know what fps others get in the same games and settings. Otherwise, maybe it's ASUS doing special tricks like with MCE before, or they have better memory timings, or can use some trick to get something similar to Precision Boost Overdrive already. Or a software mistake.
Sweclockers is the best for game performance. They test at 720p medium, so GPU limits will be smallest there.
Anandtech used the rated speeds that the processors are stated to support by the manufacturers. Anandtech runs everything at stock. Anandtech ran all the processors on fully patched systems (both BIOS and OS). Not every other website tests to this same methodology, so there will be differences in their results. Nonetheless, Anandtech is auditing their results to double-check them. I really don't think they are going to find anything wrong. Tom's ran their Intel parts without the latest BIOS updates. Others overclocked their systems.
Most users do not overclock their systems. Sure, a lot of us readers do, but not everyone. I overclock my systems, but my two brothers, who are both just as technical as I am, do not. It is a choice some make and others do not. The majority of users do not overclock, so Anandtech does not overclock in most of their reviews. They have at times in the past, and may in the future, include overclocking results, but they have always broken the overclocking results out into a separate section and/or labeled them to differentiate them from the stock-clocked results. These are editorial choices that Anandtech makes; I don't see any problem with that.
Well, the main difference is that they tested fully Meltdown- and Spectre-patched systems, which in fact should be the norm, while other reviewers simply tested unpatched. It is known that Intel took a pretty serious hit, especially from Meltdown, and a further hit from Spectre, compared to AMD, which is not affected by Meltdown at all and is affected by Spectre to a lesser degree than Intel. I would say Anandtech's tests are spot on. And this reflects the sad state of performance testing nowadays, which seems to be done 99.9% by incompetent idiots or fanboys (the YouTubers are the worst).
However, in extreme situations Intel still wins, since the 8700K can be OC'd to 5GHz with delidding and good cooling, while the OC headroom of the 2700X is basically non-existent. Which is better really depends. But the performance gap is closing, and on non-OC'd systems it is gone. It will be interesting next year, when AMD has moved to 7nm while Intel is still stuck at 10nm, which they are currently trying to pull off but have not yet managed. Then the game might be entirely reversed.
Unfortunately, you are the only idiot and fanboy here. Pretty much everyone stated in their reviews that the systems were fully patched, all CPUs were reused, and everything was retested, because AMD fanboys were screaming Meltdown here, Spectre there.
Now the internet is full of this garbage review; it spreads like cancer, because AMD fanboys have nothing better to do. Once again they are disappointed that 6 cores from Intel outperformed 8 cores from AMD, and they are now like Liverpool fans repeating "next year will be ours".
Alphasoldier, I've been reading the reviews, and while many have stated they applied the software (OS) patches, very few have stated they applied both the software and BIOS patches for Spectre variant 2. The few places I have seen that stated both the software and BIOS patches were applied all seem to be showing results much more similar to the AT article.
In any case, Ryan stated they are looking into it, and I am certain we will see an update within the next few days. And don't come saying that I am an AMD fanboi; I haven't purchased an AMD CPU since the Thunderbird (i.e. a Slot A CPU).
werpu, OC an 8700K to 5GHz? It makes me laugh that a 300MHz bump over a CPU's max single-core turbo is even called an OC these days. Sheesh, it's a far cry from the days of SB; OC'ing hardly seems worth bothering with now.
What is with the gaming benchmarks? In your tests, the whole Ryzen 2 series is a step above everything else, but all other reviews show it between Ryzen and Coffee Lake...
We're looking into it right now. Some of these results weren't in until very recently, so we're going back and doing some additional validation and logging to see if we can get to the bottom of this.
Either you don't have a fast enough GPU to remove the GPU bottleneck, or there's something wrong with your data, because there is NO chance Ryzen is faster than *lake in GTA V with lower IPC and clocks.
Don't get me wrong, Ryzen 2 looks like a good product family and I wouldn't discourage anyone from buying one.
Stock CPU and RAM speeds. Fully Spectre/Meltdown patched on both sides. Who is re-using old results? This review re-uses old results for the older-generation Ryzen, so some of the performance boost could be false (new drivers, OS patches, firmware, BIOS...).
More investigation is needed on all sides. Many other review sites are significantly lazier than AT and are likely recycling old results on the Intel side.
As for your GPU bottleneck... um, no. Look at the results: as the resolution goes up, THEN you get GPU-bottlenecked and all the CPUs look the same. At low resolutions it is clearly not GPU-bottlenecked, as there is a big FPS difference by CPU.
Gamers Nexus tested the 2700X locked to 4.1GHz at 1.175V, where it consumes 129W, versus stock frequency and stock voltage, where it consumes 200W. Performance is generally the same on average.
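For rough intuition on why that undervolt saves so much: dynamic CPU power scales roughly with C * V^2 * f. A back-of-the-envelope Python sketch (the stock voltage and stock clock below are assumed figures for illustration; only the 1.175V / 4.1GHz operating point comes from the Gamers Nexus numbers above):

```python
# Back-of-the-envelope: dynamic CPU power scales roughly with C * V^2 * f.
# The "stock" voltage and clock below are assumed values for illustration;
# only the 1.175 V / 4.1 GHz undervolt point comes from the post above.

def relative_power(v, f, v_ref, f_ref):
    """Dynamic power relative to a reference operating point."""
    return (v / v_ref) ** 2 * (f / f_ref)

stock_v, stock_f = 1.35, 4.3   # assumed stock voltage / rough all-core clock
uv_v, uv_f = 1.175, 4.1        # undervolted operating point

ratio = relative_power(uv_v, uv_f, stock_v, stock_f)
print(f"Predicted power at undervolt: {ratio:.0%} of stock (dynamic part only)")
# ~72% of stock; the reported 129 W vs 200 W (~65%) also includes static
# leakage, which drops with voltage too, so the simple model is in the
# right ballpark.
```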
I wonder how much longer it will be until AMD launches something which actually beats Intel in IPC. Atm people keep saying Intel wins on IPC, but it's only because Intel has punched its clock rates through the roof (it's like the old P4 days again), something they could have done years ago but never bothered to because there was no competition, just as they could have released a consumer 8-core long ago but didn't (the 3930K was a crippled 8-core, but back then AMD couldn't even beat mainstream SB, never mind SB-E).
You know IPC is "instructions per clock", yeah? So saying Intel wins on IPC because their clock rate is faster doesn't make sense; it's like saying UK cars have a higher mpg than US cars because their gallons are bigger.
Intel wins (won?) on IPC because they execute more instructions per clock cycle. When you couple that with a faster clock rate, you get a double whammy of performance. It does appear that AMD has almost closed the gap on IPC but is still not operating at as high a clock rate.
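Put differently, single-thread throughput is roughly IPC times clock, so a chip can lose on IPC and still win overall. A toy Python sketch (both the IPC values and the clocks are made-up numbers, purely for illustration):

```python
# Toy illustration: single-thread throughput ~ IPC * clock.
# Both IPC values and clocks below are made-up numbers, not measurements.

def throughput(ipc, ghz):
    """Billions of instructions per second, to first order."""
    return ipc * ghz

chip_a = throughput(ipc=1.00, ghz=4.7)  # higher clock, baseline IPC
chip_b = throughput(ipc=1.10, ghz=4.1)  # 10% higher IPC, lower clock

print(f"chip A: {chip_a:.2f}, chip B: {chip_b:.2f} (relative units)")
# chip A still wins overall (4.70 vs 4.51) despite losing on IPC, which is
# why "wins on IPC" and "wins on single-thread" are two different claims.
```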
This is why many are looking forward to Zen 2 in 2019, which will have true design improvements compared to Zen and Zen+. Zen+ is a small, incremental improvement over Zen (the first-generation Ryzen chips). Combined with 7nm, we may very well see AMD get very close to Intel clock speeds while having very similar, if not better, IPC and a higher core count.
Very odd choice to only include the Intel chips with high clocks in the charts; it's like you wanted to put all the Intels at the top of the ST results and make things look better than they are.
I'm afraid there are physical space limits regarding how much hardware Ian can fit in his domain. It used to be common for sites to recycle scores from previous tests, but after "Smeltdown" (and with NVIDIA drivers being all over the place) it doesn't work that way right now. In an ideal world you'd compare five or ten different setups, sure. But then you'd not just want an 8400 @ B360 but also an 8700K OC, 2600K OC, 4770K OC, etc...
Wow. I'm actually excited to read a review for the first time in a long time! Fantastic review as usual!
I'm still sitting on my 3770K @ 4.4-4.7GHz, and I'm likely to try delidding for fun and see if I can push it any further. But this review makes me excited to look forward to perhaps building a Ryzen 2/3 (whatever the heck they name it) system this time next year!
AMD has clawed back another vital few paces on Intel here! If Intel sits on their butts again next year and AMD can do the same thing again, this is going to get very, very interesting :)
AMD is going to unleash some serious tech next year with Ryzen 2 on 7nm...this is just a refresh of the original Ryzen...the real deal will be next April/May...Intel is in for a rough ride.
The patches are the difference...which everyone should apply on Intel machines...the fact is they came with a performance hit! AMD is now leading the pack...security over performance any day!
TechRadar's review is some form of manipulation. They don't even show the test system specs for the comparison scores. In their 8700K review they wrote that the CPU hit 76°C for them at stock and 87°C OC'd; in the 2700X review they wrote that the 8700K only went up to 52°C(!!!). That CPU literally had its handbrake™ pulled.
Thanks for the review, but I noticed a minor error -- your AMD Ryzen Cache Clocks graph on the 3rd page shows data for the 2700X, but in the preceding text it is referred to as the 2800X.
AMD wins all gaming benchmarks, hands down and does this at a real 105W TDP.
In my opinion, it is not fair to say that Intel "wins" the single-threaded scenarios as long as we can clearly see that the 8700 and the 8700K have "multi-core enhancement" activated and the motherboard allows them to draw 120W on a regular basis, as your own graphs show.
Allow AMD's Ryzen to draw 120W max and auto-overclock, and only then would we have a fair comparison.
In the end, I guess that all those that bought the 7700K and the 8700K "for gaming" are now very pissed off.
The former have a 100% dead/un-upgradeable platform, while the latter spent a ton of money on a platform that was more expensive, consumes more power, and will surely be rendered un-upgradeable soon by Intel :) while AMD has already rendered it obsolete (from the "best of the best" POV), or at least the Z370+8700K is now the clear second-best in 99% of the tests @ the same power consumption, while losing all price/performance comparisons.
IMHO... allowing the 8700 & 8700K to draw 120W instead of 65W/95W and allowing auto-overclocking, while the AMD Ryzen is not tested with equivalent settings, is maybe the only thing that needs to be improved with regards to the fairness of this review.
Incorrect comparison. Why does every review keep making the same mistake?? It has nothing to do with price. Comparing like CPU architectures is the only logical course of action. 6-core/12-thread vs 8-core/16-thread makes no sense. Comparing the Intel 8700K (6 cores/12 threads @ $347) to the AMD 2600X (6 cores/12 threads @ $229.99) makes the most sense here. Once the proper math is done, AMD destroys Intel in performance vs. cost, especially when you game at any resolution higher than 1080p. The GPU becomes the bottleneck at that point, negating any IPC benefits of the Intel CPUs. How do I know this? Simple: I also own an 8700K gaming PC ;-)
I'd like to see more scatterplots with performance versus cost. Also, total cost (MB+CPU+cooler if needed) would be ideal. Even an overall average of 99th percentile 4k scores in gaming (one chart) would be interesting.... hmmm maybe a project for the afternoon.
The English-language version of the Tom's Hardware review has a million plots on the last page (14). 4K is completely irrelevant for plotting, though, since you're GPU-limited there.
Wrong. Performance at a given price level is absolutely a metric chip buyers care about - if not the MOST important metric.
People usually think "Okay, I have this $300 budget for a CPU, which is the best CPU I can get for that money?" - It's irrelevant whether one has 4 cores or 8 cores or 16 cores. They will get the best CPU for the money, regardless of cores and threads.
Comparing core vs. core or thread vs. thread is just a synthetic, academic exercise. People don't actually buy based on that kind of thinking. If chip X has 15% better gaming performance than chip Y for the same amount of money, they'll get chip X, regardless of cores, threads, caches, and whatnot.
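To make the perf-per-dollar logic concrete, a quick Python sketch (the two prices are the ones quoted above; the performance indices are hypothetical placeholders, not benchmark results):

```python
# Quick perf-per-dollar comparison. Prices are the ones quoted above;
# the "perf" scores are hypothetical placeholder indices, NOT benchmarks.

cpus = {
    "Core i7-8700K": {"price": 347.00, "perf": 100},  # perf=100 as baseline
    "Ryzen 5 2600X": {"price": 229.99, "perf": 92},   # assumed relative score
}

for name, c in cpus.items():
    print(f"{name}: {c['perf'] / c['price'] * 100:.1f} perf per $100")
# Even if the 2600X trails in absolute performance, a ~34% lower price
# can leave it well ahead on perf per dollar.
```

Which is the whole point: the buyer fixes the budget, then maximizes the numerator, regardless of core counts.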
Incorrect. Cost vs. cost is only one of many factors to consider, and not a main one, especially if the competition has a processor of equal quality for much less. Comparing an Intel 6-core/12-thread CPU to an AMD 8-core/16-thread CPU makes absolutely no sense if you are measuring cost vs. performance. Your argument makes no sense, sorry.
Sure, in some cases it is possible to compare two processors of 'equal quality' and then look at cost second.
But that is an impossible task in a review. And for some processors it is impossible for anyone.
This is impossible because there is no such thing as an 'equal quality chip'. Subjectively, I might be able to find two chips that I think are roughly equal, then compare price. But this is subjective -- depending on what my needs are.
Price is objective. We can compare two system builds at nearly equal cost directly, then see what is better. Comparing 'roughly equal' chips first starts out in the wrong place for most consumers. Only those that are not very price sensitive do that -- get the 'best' for what they want, and if there are two equal things use price as a tie breaker. Most people are looking for the best they can get for a price, rather than the lower price for what they want.
Now, to make it worse, by your reasoning the 2700X cannot be compared to anything, because the core counts differ. Bull$#17. I could just as easily say that the 8700K cannot be compared to the 2600X because it can overclock to 5GHz, so they are not technically the same.
There is absolutely reason to compare 8C/16T products to 6C/12T to 4C/8T products -- BECAUSE PEOPLE HAVE TO PICK ONE TO BUY.
You are completely wrong, and Krysto is correct. Performance per dollar is the metric of greatest relevance for the vast majority of users and thus is the most useful metric to use in reviews.
Performance per dollar for the workload you care about is what matters; game performance doesn't matter much in business, but being able to do whatever your day-to-day work is as quickly as possible does. That may mean lower core counts with high clock speeds are more important, or higher core counts may beat out everything else (16+ cores at 1.5GHz might beat 4 cores at 5GHz). It all depends.
"Why does every review keep making the same mistake?? It has nothing to do with price. Comparing like CPU architectures is the only logical course of action."
To abuse an already overused meme here, why not both? This is why we have the data for all of these parts.
Our focus is on price comparisons, because at the end of the day most readers will make their buying decisions around a given budget. But there is also plenty here looking into IPC and other architectural elements.
Lol, don't feed him, Ryan! As one of our oh-so-gracious and glorious overlords, it pains me to see you get into the mud with that dingus. Leave that to us nobodies :).
In the real world we have to choose depending on features and performance while constrained by a budget.
For intellectual discussion and better understanding of the chips and architecture we need direct comparison.
Both arguments work for entirely different reasons. I rarely have the budget for high-end Intel. I'm also into overclocking and run VMs, so the only way I hit both of those is to run AMD.
I've also got a few apps that really take advantage of AVX2 and AVX-512, where even Ryzen gets monstrously stomped by Intel.
If you judge by a single metric you're missing the big picture. Everything is a compromise.
Once again incorrect. Cost vs. cost is only one of many factors to consider, and not a main one, especially if the competition has a processor of equal quality for much less. Comparing an Intel 6-core/12-thread CPU to an AMD 8-core/16-thread CPU makes absolutely no sense if you are measuring cost vs. performance. Your argument makes no sense, sorry.
Once again incorrect. Cost vs. cost is the primary factor for a buyer on a budget. It IS the main one. Case in point: if I can get a 2600X for the same price as a much slower Intel chip, it is obviously better. Comparing a $300+ chip to a $200+ one makes absolutely no sense if you are measuring cost vs. performance. Your argument makes no sense, sorry.
See what I did there? Your argument (and the one above) is BS. You are either a troll or have a serious intellectual disability. Price, performance, and implementation details (core count) are all independent dimensions, and you can look at any of them from the perspective of the others.
Price just happens to be the constraint that most shoppers have to start with. They can vary the other parameters, within the price constraint.
Others with more money might instead lock in a performance/feature-set requirement and _then_ consider price, but that is the minority.
They compared multiple "qualities" of processors across two Ryzen generations and CL. If you want to look at them core for core, is it that hard to shift your eyes three lines up to see the next line of results? Do you want them to exclude the 2700X because there isn't a consumer-level CL part to match it?
Price and absolute performance are paramount. Comparing at raw architecture levels is interesting but less important.
In the real world, there are consumers who are not that price-sensitive, in which case they only care about a top-end part that is within their range. They don't care if it is 10-core/20-thread vs 8-core/16-thread vs 6-core/12-thread; they care about the raw performance for what they need, and are usually willing to go up somewhat in cost for that performance (including mobo/RAM costs). This is the sort of consumer I am today.
There are then others who are price-sensitive and have a budget. For these people, the price tag is paramount. The flaw with this review (and most in general) is that it does not include mobo/RAM/etc. costs and often just looks at the CPU price alone. Someone budget-conscious has to carefully consider whether saving $100 on a CPU or $50 on a mobo gives them the ability to spend that on, say, a better GPU or a nicer monitor. For those buyers, comparing products by price point is way more important than comparing them by architecture. This is the sort of consumer I was as a poor college student/gamer who had to piece together systems on very limited budgets.
As a tech geek, I am always interested in the core-for-core or clock-for-clock comparison, but in the real world, for purchasing decisions, it doesn't matter if a Ryzen with 6 cores/12 threads at 3GHz is faster or slower than an Intel chip with 6 cores/12 threads at 3GHz. In the end, they can have different core counts, threads, and GHz; all that matters is the actual performance.
In the case of Ryzen, you can use the same motherboard from the first generation to the second, or the third, or the fourth(in 2020). You may not get all the features, but they will work, and CPU cost is the only thing needed since you already have the other components.
Actual performance is the correct focus, but game performance isn't the same as rendering performance, or as performance for those who tend to have 8+ programs open as part of their normal work environment. Just saying "performance" ignores that what you use your computer for isn't necessarily the same as what other people use their computer(s) for.
That is why they use different game benchmarks. Some make use of more cores/threads, and others make use of other design differences between products. Price vs. performance is a very valid comparison based on workload; not just games, but other tasks too. You could have higher core counts with lower clock speeds at the same price point, even when looking only at Intel: a 6-core at lower speed, or a 4-core at higher speed, for the same money. Which does better for the tasks you personally care about? Intel 8700K vs. AMD 2700X is the fair comparison, while you would compare the 2600 to the i5, again due to the price point. When you look at the performance results, you SHOULD, in theory, see that these chips match up in terms of performance for the price, though AMD tends to have an advantage these days in multi-threaded tasks, while Intel tends to do better in lightly threaded workloads due to clock speeds.
Just because transistors can be 15% smaller doesn't mean that they have to be. Every IC design includes transistors of many different sizes. GF is saying that the minimum transistor size is 15% smaller than the previous minimum. And it seems that AMD chose not to use them, electing instead to use a larger, higher-performance transistor that happens to be the same size as their previous one.
And you confirm that in the next paragraph. "AMD confirmed that they are using 9T transistor libraries, also the same as the previous generation, although GlobalFoundries offers a 7.5T design as well." So please delete your very misleading transistor diagram and accompanying text.
I think you are misreading that part of the article. AMD shrunk the processor blocks, giving them more "dark silicon" between the blocks. This allowed better thermal isolation between blocks, and thus higher clocks.
"Intel is expected to have a frequency and IPC advantage AMD’s counter is to come close on frequency and offer more cores at the same price
It is easy for AMD to wave the multi-threaded crown with its internal testing, however the single thread performance is still a little behind."
If so, why is it given such emphasis? It's increasingly a corner-case benefit as game devs begin to use the new mainstream multi-core platforms. Until recently the norm was probably 2 cores, so that's what they coded for, THEN.
This minor advantage is rarely mentioned in proximity to Intel getting absolutely smashed on increasingly multi-threaded apps at any price point, where it deserves to be in a balanced analysis.
"its increasingly a corner xase benefit as game devs begin to use the new mainstream multi core platforms" As I often do, I'd like to remind people that not all readers of this article are gamers or give a darn about games. I am one of those i.e. game performance is meaningless to me.
I am a gamer, but the gaming benchmarks are nearly irrelevant at this point.
Almost every CPU (ignoring Atom) can easily feed a modern video card and keep the framerate above 60fps. I'm running an FX 6300 and I still run everything at 1080p with a GTX 970 and hardly ever see a framerate drop.
Gaming benches are somewhat less important than days gone by. Everything on the market hits the minimum requirement and then some. It's primarily fuel for the fanboys, "OMG!!! AMD sucks!!! Intel is faster at gaming!!!"
Well, considering Intel is running 200fps and AMD is hitting 175fps I'm *thinking* they're both playable.
Gaming + streaming benchmarks, as done by GamersNexus, are exactly the kind of relevant and important benchmarks more sites need to be doing. Those numbers you don't care about are much more important when you start trying to do streaming.
Your 60fps? That isn't even what most people who game care about, with high-refresh-rate monitors doing 144Hz+. Add in streaming, where you're taking a decent FPS hit, and that difference between 200 and 175 fps is all of a sudden the difference between maintaining 144Hz and not.
Yea, but... of all the people interested in gaming, those with high-refresh-rate monitors and/or streaming online are what, 10% of the market? Tops?
Sure, the GamersNexus reviews have relevance... to that distinct minority of people out there. Condemning/praising CPU architectures for gaming in general due to these corner cases is nonsensical.
Like Oldman79 said, damn near any of these CPUs is fine for gaming - unless you happen to be one of the corner cases.
You're pulling a number out of thin air and building an entire argument around it. 72% of Steam users have 1080p monitors. What percentage of those are high refresh rate is unknown, but 120Hz monitors have existed for at least 5 years now, maybe even longer. At this stage, arguing around 60fps is like arguing about the sound quality of cassettes today; we are long past it.
If by 'pulling a number out of thin air' you mean that I looked at the same Steam hardware survey you did, plus a (year-old) TechReport survey (https://techreport.com/news/31542/poll-what-the-re... ), then yes, I absolutely pulled a number out of thin air. I actually think 10% of the entire market is significantly too high as a maximum for 1080p high-refresh-rate monitors, as the market has a lot of old or cheap monitors in it.
The fact is, once you say Ryzen is perfectly fine for 1080p (at 60Hz) gaming and for anything at or above 1440p because you're GPU-limited (and I'm not saying there is no difference, but is it significant enough?), the argument is no longer 'Ryzen is worse at gaming' but instead 'Ryzen is just as good for gaming as its Intel counterparts, unless you have a high-refresh-rate 1080p monitor and a high-end graphics card.'
Which is a bloody corner case. It might be an important one to a bunch of people, but as I said, it is a distinct minority, and it is nonsensical to condemn or praise a CPU architecture for gaming in general because of one corner case. The conclusion is too general and sweeping.
This is where current benchmarks, other than the turn-length benchmark in Civ 6, are not doing enough to show where slowdowns come from. Framerates don't matter as much if the game adds complexity based on CPU processing capability. AI in games, for example, will benefit from additional CPU cores (when you don't use your maxed-out video card for AI, of course).
I agree that game framerates as the end-all, be-all that people look at are far too limited, and we do see other things, Cinebench for example, that help expand the picture, but it doesn't go far enough. I just know that I personally find anything below 8 cores feels sluggish with the number of programs I tend to run at once.
Monitors in use do lag the market. All of my standalone monitors are over a decade old. My laptop and tablet are over five years old. Many people have 4K TVs, but rarely hook them up to their PC.
It's hard to tell, of course, because some browsers don't fully communicate display capabilities, but 1920x1080 is a popular resolution, with maybe 22.5% of the market on it (judging by the web stats of a large art website I run). Another ~13.5% is on 1366x768.
I think it's safe to say that only ~5% have larger than 1080p. 2560x1440 has kinda taken off with gamers, but even then it only has 3.5% in the Steam survey, and of course that mainly counts new installations. 4K is closer to 0.3%.
Performance for resolutions not in use *now* may matter for a new CPU because you might well want to pair it with a new monitor and video card down the road. You're buying a future capability - maybe you don't need HEVC 10-bit 4K 60FPS decode now, but you might later. However, it could be a better bet to upgrade the CPU/GPU later, especially since we may see AV1 in use by then.
Buying capabilities for the future is more important for laptops and all-in-one boxes, since they're least likely to be upgradable - Thunderbolt and USB display solutions aside.
You'd be gaming at 144Hz while streaming at 60Hz; unless, in Akkuma's fantasy world of 240Hz monitors, the majority of stream viewers would want 144Hz streams too ;)
That's a great point. Every time I have upgraded, it has been due to not hitting 60fps. I have no interest in 144Hz/240Hz monitors. I had a Q9400 until GTA IV released, then bought an FX 8300 due to the lag. Used that until COD: WWII stuttered (still not sure why, really). Now I own a 7700K paired with a 1060 6GB. Not the kind of thing you should say out loud, but I'm not gonna buy a GTX 1080 Ti for 1080p/60Hz. The PCIe x16 slot is here to stay; I can upgrade whenever. The CPU socket on my Z270 board, on the other hand, was obsolete a year after purchase.
Just wait until you upgrade to 4K, at which point you will be waiting for a new generation of video cards to come out, and then you'll find that even the new cards can't handle 4K terribly well. I agree about video card upgrades not making a lot of sense if you are not going above 1080p/60Hz.
For 4K you've so far always needed SLI, and SLI was always either bad, bugged, or, as of recently, retired. Why they still make multi-GPU mainboards and bundle SLI bridges is beyond me.
Anandtech is the only site that shows the 8700K trailing the 2700X in gaming. Hell, even the 8400 is slightly faster overall than the 2700X everywhere else. This is what we call an outlier.
That's going to throw off the scores a bit too, and a lot of reviewers leave it on. I don't remember if Anandtech does or doesn't. I think their stance is "out of the box".
Sorry, but these gaming results are total crap. I get 110 fps average with an OC'd 1600X at 4.0GHz and a GTX 980 Ti at 1600MHz in RotTR on high, and you show less FPS for a 1080? Not to mention I have a crapload of processes running simultaneously. This is the hardest fail I've ever read.
Anandtech needs to redo the gaming tests. It's blatantly a false representation. And Meltdown doesn't impact gaming. No excuse whatsoever if they want to still be considered a reliable reviewer.
Well, you should claim it. AFAIK it already matched the 1080's performance at 1500MHz clocks, and going above that should make it faster.
Just because reviewers benchmarked (and still use those results) original 980 Tis that don't even hit 1300MHz clocks at stock (which the aftermarket ones do) doesn't make the later non-reference 980 Tis slower in reality. Many people think the 1070 matches the speed of an OC'd 980 Ti, but in reality the old card is better, just more power-hungry. But hey, NVIDIA couldn't have sold all those 1070s if people knew the truth. :)
Then why do other reviews not show anywhere near the same performance gap between Ryzen 1 and Ryzen 2? It's not as if the Spectre or Meltdown patches would somehow make the 2000 series way better than the 1000 series.
I tend to keep away from the comments here because I lack the knowledge to really contribute.
I couldn't resist the urge to pop in, because I'm certain this is the only time that sentence will be me boasting, when I'm reading comments from 'kill3x' and 'realistz' concerning 'hard fails'.
Follow my great example and realize your anecdote (leaving aside your 'hard fail' at comprehending results and placing them in the context the article hands you plenty of, assuming you read every word as you should have) is right on the edge of worthless and garbage. Then read all the comments (particularly pages one and two). Then come back tomorrow for potential updates. Then go back into whatever game you were playing and be silly gooses there.
To clarify, are you looking at the same sub-score we are, or the overall average? Our posted results are off of the first scene, Valley, as that's the most strenuous. The overall average is going to be higher, as you can see here: https://www.anandtech.com/bench/GPU16/1471
Thank you for your reply, Ryan. Yes, this is more on point. But then again, if you mean Geothermal Valley, I have different results there. The first area is the Siberian wilderness with heavy snow, and I have lower results there. So a question arises about testing methods and testing scenes. Was it combat? Static? In a cave, or at the top of the area? All of these things affect FPS heavily. That's why the best way to review hardware in games is to use scripted scenes and show the results in a video with the options detailed. Why didn't you guys just use the in-game benchmark, which 100% runs the same scenes with the same density? All of this looks like the reviewer tried to cherry-pick results in favor of Zen+. When you can't reproduce the result of a benchmark with the same hardware the reviewer used, that's a bad approach and a distortion of your visitors' perception. But then again, thank you, Ryan, for speaking with us and listening to our rant.
"Why didn't you guys just use ingame benchmark which 100% runs same scenes with same density?"
To clarify, we do. We just use one of the scenes, and not the average of all of them. This is the same scene we've used for over a year now, since introducing this benchmark to our CPU & mobo testing suite.
Ryan, I retested the Valley scene in the built-in benchmark, this time with an 8700K and a GTX 980 Ti. I used High instead of Very High, with all options as in your screenshot. I got 122 FPS on Valley with these settings, on a 980 Ti. I'm really trying to keep this polite, but that's a 20% difference on a marginally weaker card. This just can't be a margin-of-error kind of gap. These benchmarks are horribly wrong. Do your site and Ian a favor, Ryan: please consider retesting this. People already suspect you of shilling (rightfully so). Be an honest guy, admit that a technical mistake was made, and correct it. No one would blame you; mistakes happen. If you leave it as it is, it will be a much bigger mistake.
Yeah, my setup is nowhere near theirs, and still my results are 20% better on two different CPUs. That kind of puts the credibility of their review at zero, with all my respect to Ryan. The only goal was to put Ryzen 1 and CFL-S CPUs in a bad light, so people will buy the new Ryzen 2 CPUs and suddenly find out that their capabilities are not that huge.
Wouldn’t it actually suggest that there’s a difference between your setup and theirs that favors yours? For instance, in the case of your earlier benchmark, you admittedly overclocked your 1600X and they didn’t, so don’t you think that might account for the 10% difference you saw over theirs? And in the case of the 8700K, you omitted key contextual information (e.g. is your system updated, and if so, which updates to what components?) that would allow others to verify that it was an apples-to-apples comparison.
Ryan may very well have made a mistake and you may very well be entirely correct about all of this, but claiming he’s a liar on the basis of your overclocked system and then following it up with claims about the 8700K that lack the information necessary for someone else to verify your data does not help your case.
Meanwhile, my horse in this race died years ago. The latest product I bought from either teams red or blue was a 2011 Mac Mini that had an Intel CPU and an AMD GPU. All of which is to say, I’m a fan of passionate debate, but let’s keep aspersions to a minimum and focus on getting to the truth.
On the first paragraph, Ian writes the following: "This is not AMD’s next big microarchitecture, which we know is called Rome (or Zen 2) on 7nm." That is incorrect. Rome is the codename for the upcoming EPYC 2nd Gen CPUs that will replace the current Naples products, and not the codename for AMD's next gen CPU core arch.
"anything that is hard on a single-threaded, such as our FCAT test or Cinebench 1T, Intel wins hands down"
Yeah, I know, it's just an indicator, but it's telling that the test seems as silly as the emphasis on IPC driven by shrill/shill gamers: who would actually render on a single thread in Cinebench?
Single thread Cinebench 15 score is *the* indicator of IPC used in meme-filled debates on online forums. It's just an important metric right now. And unlike, uh, GeekBench, CPU-Z, and whatever else claims to judge single thread score, it's pretty accurate.
Anandtech isn't alone... Techradar has AMD winning on fully patched systems as well...the sites that have Intel winning are using either old scores or unpatched systems for Smeltdown.
It's just hunch/intuition, but those cache improvements were outstanding, and apply to some seriously large chunks of cache. IMO, we haven't heard the last of perf improvements from this source, via driver tweaks etc.
My main gripe is with people saying "other reviewers didn't use the Meltdown/Spectre patches" and so on... 1. Those patches have already been tested, and they do NOT affect gaming much at all; we're talking less than 1%.
2. Even if you take the entire Meltdown/Spectre thing out of it, look at the 1800X vs the 2700X: a 3% IPC increase and some memory latency improvements do NOT account for a 20% increase in gaming performance, not with a ~200 MHz clock change (see the rough arithmetic below).
3. Even if the Ryzen 2000 series DID somehow gain 20% over Ryzen 1000, why do other websites not show this? They all show at most 10%. Completely remove Intel from the situation and you still have glaringly large performance jumps from Ryzen 1 to 2; this is what sticks out the most here.
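For what it's worth, here's the back-of-the-envelope arithmetic behind point 2 as a tiny Python sketch (the ~3% IPC figure is the one quoted above; the boost clocks are the published 4.1 GHz for the 1800X and 4.35 GHz for the 2700X; purely illustrative, not a claim about any specific benchmark):

    # Rough expected 1800X -> 2700X uplift from IPC and boost clock alone
    ipc_gain = 1.03                          # ~3% IPC improvement claimed for Zen+
    clock_1800x, clock_2700x = 4.1, 4.35     # GHz, published max boost clocks
    expected = ipc_gain * (clock_2700x / clock_1800x)
    print(f"Expected uplift: {expected - 1:.1%}")  # ~9%, nowhere near 20%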
You need to educate yourself on Meltdown/Spectre a bit more... the patches are multilayered: in the chipset driver, in the OS, AND in the motherboard BIOS. Unless the system being tested is fully patched and running the latest BIOS and drivers, the test is useless.
I've been saying for months that Spectre and Meltdown will close the performance gap between Intel and AMD. All those speculative tricks will be reeled back in over security concerns. This article (and others like it) confirms it.
We shouldn't treat closing of the performance gap as a problem. This is not a black mark for Intel, and it's not a knockout punch from AMD. What this shows is that we have real competition in x86 again! Competition caused tremendous performance jumps in the AMD K7 and Intel Pentium days. Likewise, it's rapidly improved ARM offerings to make them a viable threat to x86. Competition is good!
Not sure that I'm ready to replace a 5-year old x79 i7-4930k system. The 2700X is very tempting, especially given that x79 lacks boot NVME support. I've been doing enough Blender and Unreal stuff that it may be a worthwhile speedup.
Re NVMe boot, not true, it can be added to numerous mbds, especially ASUS (check the ROG thread with the modded BIOSes which one can install via FlashBack). Plus, you can use a 950 Pro or various other models since they already have their own boot ROM (ie. they don't need boot support on the mbd). I get over 3GB/sec with a 960 Pro on an ASUS R4E using a 4820K, 4930K and E5-2697 v2. Also have an SM961, various SM951s and some 950 Pros. Works great. I'm sure other modders have done the same for non-ASUS boards.
The modder on ROG has also made BIOS files for the M4E and P9X79/E WS on my request.
It's incredible what AMD has achieved: not only have they caught Intel, in many cases they exceed them, above all in price/performance. That R5 2600X at $229 looks great for gaming.
Forbes:"Two weeks ago, if you were building a PC purely for games, I'd have said go with Intel. Today, I'd say you have a choice, and that's a hugely important development."
I think maybe you've stuffed up your testing somewhere, every other review I've seen paints a different picture, with the 8700k walking all over the 2700x in gaming performance (50+FPS difference in places).
Even your cinebench scores are a little off, 1395 for an 8700k?? At stock it should be closer to 1495 and OC around 1695.
Discounting margin of error, this review really is the outlier when put alongside reviews from Hardware Unboxed, Gamers Nexus, Linus Tech Tips, et al.
Anandtech isn't the only review site to come to this conclusion... other sites are likely either lying or using old scores from before all the patches came out... the only testing results worth anything are those from a freshly set-up testbed: fresh install of Windows, all available patches, the latest chipset drivers, video drivers and the latest BIOS.
1400 is the real stock number for the 8700 and 8700K. Just because the original 8700K YouTube reviewers had Multi-Core Enhancement enabled by default (where the turbo goes to 4.7 GHz on all cores, aka an out-of-spec overclock) doesn't make that the real number for your average consumer machine, which has neither the option for it nor the TDP headroom.
Look at Techradar, same results as AnandTech. It's the new Meltdown & Spectre patches it seems. 1 site having weird results is one thing, but that's not what we are seeing here.
The dolphin emulator benchmark has the 1700 beating the 2700. Is that right?
Thanks for benching the 2700, by the way. It seemed the most interesting of the lineup. It packs quite a punch for 65W! Not sure I'd buy it to save $30 over the 2700X, but for a heat/power constrained workstation it would be at the top of my list.
Yup, I use 65W chips because the form factor I want is worth more to me than benchmark results. My system is fast. Really fast. Any additional speed really wouldn't be noticed. A big case on the floor with enough space for a massive heatsink inside would be noticed for many years. As long as I'm on a relatively recent architecture (Zen1 here), I'm probably getting the majority of the enhancements.
I plan on buying the Zen2 65W chip as an upgrade with X570.
AMD has made some progress on refining Ryzen and I will be upgrading to Threadripper 2 when it comes out in 2019.
The only disappointment is that 4.4 to 4.5 GHz does not seem to be possible. Then again, it may be by the time Threadripper+ comes around. The original Ryzen did 3.8 to 4.0 GHz, while Threadripper was capable of 4.0 to 4.2, and even a few golden chips did 4.3 GHz.
In terms of performance vs cost this is a solid win for AMD. I just wish that it was possible to get AVX 512 onto AMD. Maybe with Zen 2 or 3 it will be.
The 99th percentile and time-under-60fps gaming numbers are amazing. Ryzen+ is beating Intel in every single benchmark. That's pretty much the mathematical equivalent of a better gaming experience.
Do yourselves a favor and test Blender Master 2.79.x with CPU+GPGPU rendering of the benchmark models. You'll see Ryzen does far better than Intel's own chips.
If the 8600K gets killed by the 2600X in productivity, what's the point in adding 8400 into the mix? 8400 numbers are useful for the 2600 review, not the 2600X. IMO, even there the 2600 would destroy the i5.
Just test the Intel hardware with and without the Spectre/Meltdown patches in one title at one resolution to see if the impact is 10-15%; if it is, that's why your numbers are different.
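Before any with/without comparison, it's worth verifying what's actually active on the testbed. A minimal sketch that shells out to Microsoft's SpeculationControl PowerShell module (assumes Windows and that the module has been installed from the PowerShell Gallery):

    import subprocess

    # Query the OS-level Meltdown/Spectre mitigation status via Microsoft's
    # SpeculationControl module (Install-Module SpeculationControl).
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         "Import-Module SpeculationControl; Get-SpeculationControlSettings"],
        capture_output=True, text=True)
    print(result.stdout)  # reports branch target injection / rogue data cache load state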
Maybe the script that rearranges the results to be displayed from highest to lowest is rearranging the two columns (SKUs and results) by a different set of rules.
I do wonder how you tell if you have the old power profile or not though. Probably best to just start with a clean Win10 install each time, install latest and bench. Otherwise at least do an uninstall of the old chipset drivers, maybe run DDU for AMD stuff, then install the latest cleanly.
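On the power profile question, one quick check is dumping the active plan with the stock Windows powercfg tool; the AMD chipset driver installs a "Ryzen Balanced" plan that should show up as active on a properly set-up Ryzen testbed. A small sketch:

    import subprocess

    # List all power plans, then show which one is active; look for the
    # "Ryzen Balanced" plan installed by the AMD chipset driver.
    subprocess.run(["powercfg", "/list"])
    subprocess.run(["powercfg", "/getactivescheme"])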
With Ryzen 1x, while officially supporting DRAM 2666, people were usually aiming at running it at 3200, with significant speed improvement. Is Ryzen 2x also able to run at 3200 (or more) and will the impact of the speed difference be significant again?
When I was looking at it half a year ago, the memory support of Ryzen 1x was a mess, but there were some boards and some memory modules (typically Samsung B-die parts) that were able to hit 3200 with good timings.
The problem was support for more than 16 GB, and dual-sided modules in general. But maybe the situation has changed since then. What was more concerning was the feeling of instability of the platform (i.e. some modules not working with some chips/boards). I wonder if this persists on Ryzen 2x.
What I also remember is that the memory throughput peaked at 3000/3200 depending on timing. It would be interesting to know if the same applies to Ryzen 2x.
The memory speed policy... So in a nutshell, what we, your readers (the ones reading this article), are being told is that most of us don't know how to enter the BIOS and click on XMP? Add me to the list that disagrees, especially as memory speed is critical to AMD's Infinity Fabric, and there's zero chance of me running at a slow 2933. I did not bother reading the review past the test bed setup as a result.
The memory frequency has little impact... memory timings have a big impact...on x370, people were getting lower scores at 3200 and higher with loose timings vs running at 2933 with 14-14-14 timings.
What timings were used? I didn't see them spec'd. 16 perhaps? Like I mentioned in another post, I'm running a 1700X on an X370 Taichi @ 3200 14-14-14-34 (simple XMP), and getting better results than I could @ 2933. Regardless, the base/default speed/timings policy should be revisited.
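A quick way to see why timings can matter as much as raw frequency is first-word latency: CAS cycles divided by the actual clock (half the transfer rate). A tiny illustrative sketch:

    # First-word latency in ns = CAS latency * 2000 / transfer rate (MT/s)
    def cas_ns(mt_s, cl):
        return cl * 2000 / mt_s

    print(cas_ns(2933, 14))  # ~9.55 ns
    print(cas_ns(3200, 14))  # ~8.75 ns: faster clock AND tight timings wins
    print(cas_ns(3200, 16))  # ~10.0 ns: loose timings can erase the frequency gain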
Not sure why you're running the Spectre/Meltdown patches for Intel when, AFAIK, both were yanked by Microsoft and Intel (Microsoft first, then finally Intel too, as they weren't working properly), last I checked.
Also, why does the 8700K machine have 4x8 DIMMs instead of AMD's 2x8? If you're not comparing like systems, you introduce things that cause weird benchmarks. Just read Hexus: Intel lost no 1080p game there, and the rest were even at 4K, which nobody uses and which is GPU-limited anyway (I'll call it "nobody" until it hits 10% share; we're miles from that, and even 1440p is miles from that). Despite what you guys have been claiming since the 660 Ti, people are still running 1920x1200 or lower (feel free to check the Steam survey and add up ALL the resolutions above 1920x1200), or they have TWO cards or more. AMD claims 1440p is the best res, but you guys test 1080p and 4K? LOL.
One more point after a quick read here and elsewhere: you should have tested Intel at 2933 too, as it is EASILY doable on Intel boards, even with the Crucial kit. Just set the timings yourself; heck, you could have used the exact same modules for both. People who rely on SPDs are lazy. Nearly every module can run on every board if you IGNORE the SPD and set things up yourself. E.g., PCPer ran the 8700K/2700X at an actual 3400.
Darn good chip either way; I'm just not sure why AMD insists on making no money by charging less than their stuff is worth. This chip should be $400 with the fan. You just pissed away much of your NET INCOME. Nice work again, AMD management. I say stupid, as in can't learn from past pricing mistakes; ignorance would at least mean they could learn. You're not in business to be our friend; you need to make MONEY for a few years to fund R&D. Over the life of the company, AMD has NOT made a dime. I don't think people get this. We need them to make NET INCOME and 60% margins like NVIDIA/Intel, not this 30% stuff. Market share does you ZERO good if you make ZERO from having it. I'd rather be Apple/NVIDIA/Intel and OWN the high end and most of the profit.
Depends on how you look at it. AMD gives back what they can, where they can. They absolutely need to make money; one can "assume" they are not, but do we really know how much Intel/NVIDIA or Apple are making? Nope. AMD has a smaller staff and far less overhead (generally speaking).
They absolutely need to bank some, but who is to say that AMD charging $400 for product X isn't effectively the same as Intel/NVIDIA or Apple charging $600? We do not know, simple as that.
In my books you are in business to make money, for sure, but there is no reason to gouge the crap out of the people buying the product, doubly so for a company like AMD that seems to give a great deal back to the industry, which benefits everyone including themselves; that is "priceless".
We already have enough mega-greed corporations out there; I'm glad at least one of them is charging a fair price for the product instead of gouging "just because they will buy it anyway".
It's the big pharma type of deal; screw that, lol. If they only need to sell at, say, $400 to bank even 20%, they are still profitable, and on the very high end (Threadripper/EPYC) their margins are likely MUCH higher, with little increase in cost to them.
Either way, they are being fair to themselves AND their customers. At the very least they are not selling more or less at cost (~5% margins at most, Bulldozer being an example). Basically doubling the price while likely costing about a third less to produce is still a win-win for them: they put money in the bank and towards R&D, and we get some shiny new toys as well.
Umm, no, we do know what the quarterly reports say, and as noted they haven't made a dime over the ENTIRE life of the company. We also know they've lost ~$8B in the last 15 years. We also know they've released multiple new products and still haven't made money for a full year yet, barely making money in a quarter here and there.
https://finance.yahoo.com/quote/AMD/financials?p=A... Net income for the last four quarters (in thousands): 61,000; 71,000; -16,000; -73,000. Get the point, genius? I'll keep complaining until AMD sets an appropriate price and actually makes $100M for a year... LOL. NO, not GROSS, that doesn't count; NET INCOME is the only thing that matters in the end, commonly known as "THE BOTTOM LINE".
If you're happy with AMD making ~$40M in a year, I'd say Intel/NVIDIA are laughing their butts off. New toys cost more than $40M... ROFLMAO. That's probably about the cost of taping out a 7nm chip now, as tape-out costs skyrocket with each shrink; it used to be ~$10M at 40/28nm, not so today. There's a reason Intel had problems with 14nm for a while (not now) and has them with 10nm too. Shrinking isn't easy today, and it is vastly more expensive to pull off.
AMD has margins of 34% right now. Now look at that MASSIVE $40M profit for the TTM (trailing twelve months, since you don't seem to read balance sheets or quarterly reports). It's comic that people like you comment without knowing the numbers. You run on assumptions I can't afford as a stock investor. You know what happens when people ASSuME things, right? ;)
Oh, and AMD profits after MULTIPLE launches and how many years of reorganization? Pricing; HBM/HBM2 (kills margins on top products and should only be used on Titan/Quadro/Tesla-type parts with massive room to adjust); chasing consoles (single-digit to 15% margins over the life of the first Xbox One, according to AMD themselves); and chasing APUs (released at $165... LOL; they should have made HBM 8809G-style chips instead of letting Intel do it, great margin on a ~$500 part). Now they massively undercharge on the new 2700X: essentially $40 off the original $369, with a free heatsink/fan thrown on top. This is DUMB, until you can prove otherwise via a quarterly report, which you can't.
MEGA GREED? ROFL. Add up EVERY SINGLE YEAR of AMD NET INCOME, please, or kindly stop. They should be making a billion right now, but they chose HBM/HBM2, killing top-card margins twice, and have now underpriced CPUs twice. The CPU looks VERY strong; why screw themselves to make a few uninformed people happy? While you're doing your homework, note how they've lost fabs, buildings, property and a third of their engineers in the last ~5-10 years (layoffs), and massively diluted their shares over that time (almost a billion shares outstanding now vs. 600M a decade ago). They are NOT healthy by any measure, so by all means prove your case, or go quietly before making a bigger fool of yourself. :) It's shortsighted to have them go out of business (or to complete junk status) to save a few bucks on your CPU or GPU. Margins matter, and so does NET INCOME.
https://finance.yahoo.com/quote/AMD/key-statistics... AMD operating margin for the last 12 months: 3.83%... LOL. Again, read something. I could go a lot deeper, but if you don't get all this, there's no point driving you into the ground. I have more AMD reviews to read so I can decide between the 8700K and the 2700X. :) I was excited by Anandtech's game results (the app results were already good for me), but now I have reservations again after reading elsewhere... LOL.
What money are they putting in the bank? R&D has been dropping for the last four years, while NVIDIA's and Intel's went up over the same period. One more for good measure, easily understood by ALL, I'd hope: https://finance.yahoo.com/quote/AMD/financials?p=A... NET INCOME for the last four years (in thousands): 43,000; -497,000; -660,000; -403,000. So, multiple product launches last year, and barely breaking even? Never mind the previous three years of MASSIVE losses. If you're selling $4-5B worth of product a year and losing 10-12% on it every year, umm, you're doing something wrong, right? You're CLEARLY not charging enough, correct? And they're set to do the same thing with a $250 1080-class GPU next year; they are pre-announcing another BAD yearly loss... LOL. FIRE your management, AMD! If NVIDIA is at $350 or more next year for that speed of card, you'd better rethink your margins! What gouging? They've lost ~$500-650M a year in three of the last four years, and made only $43M in the best of them... LOL. Are you high? Your idea of margins kept AMD out of the CPU race for five years straight; now they're finally back in, but you'd still have them keep the same stupid pricing that loses $500M a year. UGH.
Goodwill = PRICELESS? LOL. Stupid pricing = losses. How about charging appropriate prices to stay in business and GROW R&D instead of cutting it? Makes sense? They are NOT banking 20%, and if your idea of profitable is losing $1.5B over the last four years and making only $43M in what should be their BEST year in a decade, you sir are... never mind. ;) We know the margins on NVIDIA's and Intel's product segments: gaming cards are around 50% overall (80% of that from high-end parts) and workstation stuff is ~80%+. Those margins are what allow the low end to actually get something worth buying too.
But hey, congrats on all your "guesswork"; I'd rather deal in DATA. https://nvidianews.nvidia.com/news/nvidia-announce... That's what a quarterly financial summary should look like. See all the "RECORD" items in there? And NO ripping people off either, since everyone has other options but still buys. I'm happy with my 1070 Ti even at $500 during a mining war in November; it's still a great card today. The only thing I don't like in that summary is giving $1.25B back to shareholders: dividends are pointless for a tech company, and share buybacks, while nice, don't make the next product. R&D, please. Can't be bothered to polish this (been up all night with a family hospital visit). Are we done? LOL.
I would say AMD is doing well simply because, under new leadership, they are profitable after being under the mud for many, many years. That is what counts: paying down the bills and making some on the side. Compare that to Ngreedia, which is VERY overvalued, and not just in my opinion, but whatever, not worth talking about.
Let's hold up a company that constantly cuts corners and constantly screws consumers for the $$$$$ and nothing more, way to go... When a company like AMD, which has basically been less than broke for many years, turns a real profit, I say they are doing SOMETHING right. I guess the awards they have won over the last two years mean jack, huh?
They have to invest to stay ahead of the game, which is hard to do when you owe billions in past debts; you cannot go from that to making billions overnight while still making high quality products.
I guess AMD should price their chips at $600+ just so they too can be greedy mofos and make everything proprietary nonsense.
I'm glad they are at least trying to do the best they can with what they have, or will you "argue" that point too, sir? Income year over year was UP, debts were DOWN; they are doing what is right in my books, period.
They are running the Spectre/Meltdown patches because they were re-released as final fixes this month: Intel released updated microcode for Skylake and Coffee Lake, and Microsoft approved the updated patch in mid-March: https://support.microsoft.com/en-us/help/4090007/i...
So, yes, Anandtech is running the latest released patches, which have been approved by both Intel and Microsoft and are the recommended solution. Not all systems can be patched yet, since your motherboard vendor also needs to provide a BIOS update, but for the testbed Anandtech used, the vendor does have a BIOS with the fixes, and thus it was benchmarked in the fully supported, patched configuration for this security vulnerability.
https://threatpost.com/bad-microsoft-meltdown-patc... I guess the worry is still Win7/Server 2008 R2 x64 ONLY (which affects me on multiple machines). "Microsoft is aware of this and looking into the matter further. This issue impacts Win7 SP1 (x64 only) and Server 2008R2 SP1 (x64 only). We are actively testing a solution, and will make it available as soon as it is properly validated."
Maybe I missed whether they have been fixed (did April's Patch Tuesday fix this?), as I've been dealing with my parents (hospital stuff). But from quick checks, I think Win10 (used here) is OK. Still odd game benchmarks compared to other reviews elsewhere: either something is fishy or everyone else did it wrong? LOL. I still think memory speed and the number of DIMMs (2x8GB) should be the same on both platforms, especially since you can run at 4000 on most Intel boards (and apparently on the 470-chipset boards for AMD too). The post above was from the article date, March 28th, so unless it was fixed days later, I'm guessing the Win7/2008 R2 x64 varieties are still broken (not to mention all the CPUs Intel abandoned).
My guess is they tested with 2 DIMMs because Ryzen's memory controller is only dual channel, and using 4 DIMMs means two DIMMs per channel, which loads the memory bus more heavily and can force slightly slower memory speeds. That said, Coffee Lake is only dual channel as well...
Intel is making in the ballpark of 90%+ gross profit on high-end desktop processors and server chips. AMD's cost structure is at least twice as bad, hence they mostly need VOLUME to make better profits and establish themselves, and they simply cannot charge more than Intel given similar performance, a worse track record and higher business risk. If you are a system integrator (Dell, HP) or a large corporate buyer (Chase Bank, for example), you don't want to buy billions of dollars of inventory from a business that isn't rock solid, with decades of consistent tech support etc., unless the price difference is something like 3x. So in essence, if AMD can get away with this pricing, they will be very lucky.
Gaming results are odd in the 8700K and 8400 reviews too; maybe some script that automates the process went sideways? Compare this review to the 8700K review in Civ 6, or look at how the 8400 tops the charts at times.
Stop focusing on whether Anandtech destroyed Coffee Lake's performance. They didn't. Look back at their Coffee Lake review: all the game numbers are the same. The real question is, how did they get Ryzen to perform so well?
In Anandtech's Coffee Lake review they used a GTX 1080 with similar games. Here are the results for an 8700K.
Coffee Lake review:
GTA V: 90.14
RotTR: 100.45
Shadow of Mordor: 152.57
Ryzen 2nd Gen review, post-patch:
GTA V: 91.77
RotTR: 103.63
Shadow of Mordor: 153.85
Post-patch, the Intel chip actually shows slightly improved performance, so this is not about other reviewers not patching their processors; the question is how Anandtech got such kickass results with Ryzen 2nd Gen.
The 8700K is clearly the better chip for gaming because of the better clocks, as simple as that. Ryzen 2 just closed the gap a little more from Ryzen 1. For multithreaded workloads, Ryzen 2 is likely the better buy, which also makes sense because of the two extra cores. So it's the same as before, except now the gap between the chips is much smaller. On price, the 8700K has been as low as $280, so it ultimately depends on pricing, but if both chips and boards cost exactly the same, I would buy based on your use case: productivity vs gaming.
I'm looking at $344 for the 8700K. That doesn't look cheaper to me, especially as I'll have to drop another 50 bucks on a cooler. Also, there are very few specific use cases for an 8700K in the gaming department. It's actually pretty hard to think of any, since a 2700X will be just fine if you have a 144 Hz monitor. Perhaps there is that one game that falls just below your target frame rate, but for any esports game with a 1080 Ti, both processors give you way beyond what you need.
Losing about 30% in productivity tasks, on the other hand, is more devastating than losing 5-10% of frames, from 200 down to 189, and all that for almost $100 more with zero upgrade path either.
Agree with most of that (raise your hand if you have a 144 Hz monitor... not many), but don't forget you get a GPU for free out of Intel (well, it costs more, so not exactly free, I guess... LOL). I recently had to use my 4790K's iGPU for a while to RMA my Radeon, and I was surprised by how good it was for my usage, minus most gaming, and even then I just played older GOG games for a bit.
HandBrake is ludicrously fast with Intel's GPU too. Quality is pretty darn good if you add some settings, for instance: option1=value1:option2=value2:tu=1:ref=4. There are more; that's just what I used with the 4790K. I can't wait to try the 8700K if I don't go AMD. Unfortunately CUDA isn't supported in HandBrake yet, so my 1070 Ti didn't do squat for it (boggles my mind, as CUDA is in 70-80% of the desktops out there today).
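For anyone wanting to try that, a rough sketch of what a Quick Sync encode looks like from the command line (assumes a HandBrakeCLI build with QSV support; the file names are placeholders, and tu/ref are the advanced options mentioned above):

    import subprocess

    # Hardware-accelerated H.264 encode via Intel Quick Sync in HandBrakeCLI;
    # --encopts takes the colon-separated advanced options string.
    subprocess.run([
        "HandBrakeCLI",
        "-i", "input.mkv",          # placeholder source file
        "-o", "output.mp4",         # placeholder destination
        "--encoder", "qsv_h264",    # Quick Sync H.264 encoder
        "--encopts", "tu=1:ref=4",  # tu=1 targets best quality
    ])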
I'm looking at $346 for the 8700K, but the heatsink will only be $30 (a Hyper 212 EVO, same as on my 4790K), as I'm just going to buy another of the same for this build unless I go AMD. So $375 vs. $330. I don't think you'll be able to upgrade to a 12-core on the same socket (the only upgrade worth it from a 2700X, IMHO; a few speed bumps aren't worth squat), so I'm not sure I'm gaining anything either way. These days I buy at or near the top for the board I've bought (whatever is around $350, I guess) and replace CPU/mobo/memory for the whole family, and have for the last 10 years or so. I used to upgrade more, but today there are usually more reasons to dump it all than to keep it. I just hand things down to other family members, much like businesses do with employees who don't need top-end stuff (they always love it; e.g. Dad is getting the 4790K shortly).
"Losing about 30% in productivity tasks on the hand is more devastating than losing 5-10% frames"
Yes, except it can be even more extreme; yet reviewers annoyingly seem to give vastly disproportionate coverage to a few fps gained on a frivolity over a major edge in the areas that put bread on the table.
Nobody is going to buy a pc for work and another for gaming when one would do both, but work is first priority when choosing a rig.
Yep, spot on... kickass results and really low power usage compared to other reviews, where Ryzen 2 is on the power-hungry side. One hypothesis is the motherboard: from what is suggested in another review (gamersnexus.net), the way board makers implement the various new 'turbo' features and automatic memory timing adjustments seems to vary from one manufacturer to another. That's a hypothesis; maybe it's wrong.
I totally agree. This is a dumb circus. I see comments about AMD's financials from Intel fanboys who fail to realize that Lisa Su is hungry and that the Meltdown and Spectre story is what's making the wind change.
Zen was not a fluke; it was a start. Wall Street even issued a buy rating for AMD. The company might be back to an A64 era. If Navi can be what Zen was, AMD will have reshuffled the cards of the silicon business. Intel has everything to lose while AMD has everything to gain.
Aww... blah blah blah... face it, AMD is now the better option. After the patches, the small tweaks Intel used to increase fps in benchmarks have been negated. Give AMD props: they are the new kings of desktop processors for now! This is good for us consumers; it will force Intel to actually make a better processor instead of just higher clocks!
Yup, lots of resistance to this fact. AMD is dominating. It's over for now, time for people to just admit it, they got screwed if they don't have Ryzen. Or at least, bought the inferior product.
Fanboi much there? You must be that clown who said AMD's Fury X would be the 980 Ti killer. How'd that work out for ya, fanboi? Anyway, AMD provides a better-balanced system between gaming and productivity, but it does NOT dominate Intel in gaming, especially when it's so easy to overclock Coffee Lake to 4.9-5.0 GHz. What overclocking headroom does Ryzen 2 have? Overclocking wipes away the Spectre/Meltdown patch performance drop. In any event, it's childish fanbois like you who can't appreciate healthy competition. Let me guess: you'd like AMD to be the only GPU and CPU maker, with NVIDIA and Intel gone. Am I right? You think that would be good for you? Good for ANY of us long term? They'd turn into Intel and get sloppy and lazy too. Now if AMD could just compete with NVIDIA in the high-end GPU market... the RX Vega 64 ain't it, sport, being priced at GTX 1080 Ti levels.
Ouch someone is triggered, the real fanboy... and surprise! It's the guy named "Nfarce". Hope you have a better day tomorrow and your mental health improves. Best of luck.
Utterly baffled by the CPU cooler choice for the 95W TDP Coffee Lake CPUs. Is Anandtech sure these CPUs were not throttling, given what appears to be a less-than-adequate cooler?
The Silverstone AR10-115XS is a beefy cooler; if you insist Intel can't perform well enough with that, you're just trying to ensure Intel comes out #1 by arguing for Intel's best-case conditions.
This review did it right, all updated software including security patches for Smeltdown in chipset drivers/UEFI/microcode/OS, and good but not ultimate conditions for benchmarking. Maxing everything out to peak performance when few to none are going to actually do that (with 100% stability) is no service to the community at all. It's actually misleading. You might see Intel at the top, then everyone thinks it's "the best" because of a hyper-optimized situation, like leaving on performance enhancing (past TDP) motherboard option with the world's best cooler on it.
It's ridiculous and about time someone (Anandtech) put a stop to it.
The reason I asked was because I could not find anything at all. Just some very loose specs from Silverstone. Specs that suggest the cooler pushes substantially less air over its fins than the Wraith coolers.
You stated unequivocally that the AR10-115XS "is a beefy cooler". It's a natural assumption that you must base this on some actual data. Since I couldn't find any, I asked you. This seems quite reasonable and undeserving of such a snarky answer.
Eh, I'm used to combative little nerds on here telling everyone else to disprove their arguments (which isn't how it works), and dealing with them appropriately, rather than reasonable people.
I was just going off the surface area measurements of the heatsink (most important) and secondly the mass of the heatsink. Surface area is good, mass is a little light but not outside of typical. So just based off what I've seen comparatively, combined with the little I know about thermodynamics, there's nothing out of the norm for the Silverstone. I think its selection is perfect for a review and I'd like to see any and all CPU reviews using a baseline like it since the majority of customers will use it or something close to it.
It seems very strange that the 1st gen Ryzen rated an NH-U12S, a cooler nearly as capable as its bigger brethren. Yet the same cooler was not used on the Coffee Lake cpu. Surely that would have made more sense than picking a cooler intended for rackmount cases.
I do not agree with your estimation of the efficacy of the cooler. But since we do not have any actual test data the argument is moot.
I was going to put that in my post, use the same Silverstone on the AMD. I would've, and I think that's a smart decision. The Wraiths probably are a little bit better (Coolermaster makes them last I knew). I would have no qualm picking a cooler that supports both sockets. Makes more sense to me too. I just disagree on using top tier cooling, and I know you haven't suggested that. I hate the "ultimate results" aspect most tech sites take. I look for average equipment, the more info I can get on an RX580 vs 1060, the better. I don't honestly care about the 1080Ti, even though I'm in my mid 30s and can easily afford it, I'm not one of the gamer kids here.
On the efficiency of the cooler, I respect your disagreement, but these are just hunks of aluminum & copper. Even if for some strange reason it doesn't act like the rest of the chunks of aluminum+copper heatsinks out there with similar mass/surface area, then it can't be too far off in the end.
Or as Molyneux would say, not an argument. 8) It's funny how often one realises people are not actually saying anything constructive once one filters out the jokes, insults, etc.
All of these discrepancy theories related to Meltdown/Spectre microcode updates and whatnot only reinforce my initial deduction that, around 10 years ago, Intel was able to take the crown from the Athlon 64 X2 by using an insecure branch prediction method, and then invested even more heavily in that method in subsequent iterations of its Core processors.
It simply took 10 years for the worst of that decision to unravel.
The reason AMD fell behind with FX is that they went with 2 ALUs per core, FPUs shared between pairs of cores, and more cores instead, meaning each core was weak, whereas Intel has 4 ALUs, and each core has its own two FMAC-capable units, wider ones (256/512-bit) at that, meaning Intel can execute a single thread faster than FX, at least as long as the instructions are suitable. HT helps keep Intel's cores busy, whereas AMD can't really run one thread across two cores as compensation.
We found that different motherboard vendors were doing very odd things with uncore frequency - up to a 700 MHz difference, and adjusting it from BIOS to BIOS. All the ones we asked stated that they were within Intel spec, and Intel doesn't disclose what the spec actually is for the chip. At 3.7 GHz on one board we saw 86W peak, at 4.4 GHz on another we saw 119W. Both of these sets of results were in our database, Bench, a couple of weeks after the review. I had planned on writing something about it, but other topics always take over.
For this review, we took the latest BIOS for the board and went with the power results from our automated testing for this piece. It's likely that other minor enhancements have been made. I've checked the core frequencies at least, and they have parity.
Thank you, Ian, for taking the time to answer while in this storm of comments. I hear you; that's disturbing. As a customer, getting an idea of the power consumption of these CPUs is something I look for, and if motherboard manufacturer "tweaks" can lead to a 40% increase in power draw, that's significant to me. As for the Ryzen performance "discrepancies" between reviewers, I have the feeling motherboard manufacturers' BIOS implementations of AMD's features and other RAM tweaks might also be a big part of the culprit. Good luck investigating; I hope it will be as exciting as it is exhausting to figure this one out. Cheers.
Come on guys, Intel and AMD fans and fanboys, stop the BS talk and let Ian review his numbers, as he stated in the article.
BTW, I've got an idea why AMD could pull away in this review. 1. Memory on the AMD platform is at 2933 MHz because that's what it officially supports. 2. Memory on Intel is at 2666 MHz for the same reason. 3. It's not only the major memory timings that have an impact on performance: subtimings are also crucial and, as Steve over at GamersNexus discovered, they have a big impact.
Pay attention: I am not saying that Steve's, Ian's or anyone else's tests are inaccurate. I am saying there COULD BE a large gap between reviews depending on the motherboard and memory kit combinations reviewers used. Considering all of this, a stock 8700K, which memory manufacturers have had a lot of time to tune for (and they surely work somewhat together with Intel; XMP is an Intel standard, AFAIK) and which is not as sensitive to subtimings, could be outpaced by the right combination of Ryzen 7, motherboard and memory, also at stock settings (which means a higher memory clock than Intel's). So don't call every outlier BS before reviewing the data, because that outlier can be a very accurate test of what Ryzen is capable of when components are chosen carefully.
Indeed, or another way of putting it, a bell curve of results means outliers at both ends must exist, otherwise there wouldn't be a bell curve, but it's bad thinking to critique a result on the other end while ignoring the outliers on the lower end, which there must surely be. There's a bias on tech site forums to go after any article that has higher than perceived average results, but that's just the modern disease of lowest common denominator thinking.
As an owner of an 8700K, 1600 and 1700, and still rocking a 4790K in another system, I must say I was underwhelmed by the 8700K, as I expected more than what I got, and I was actually fairly impressed by the improvements AMD made with their Ryzen processors. Why? Because I went in expecting less.
This update (for me) brings one interesting thing to the table: that awesome cooler on the 2700. I'm down for that, and for a higher-end system I'd buy that CPU over any other in a heartbeat if I needed to build another system. I think it makes sense for people still on 2000/3000-series Intel (or older, including AMD variations like the FX line), as you'd notice some fairly substantial gains without doing any benchmarking at all; but for anyone on 4th-gen or later Intel (or last-gen Ryzen), upgrading is more of a want than an actual need.
Still, a great new product from AMD and, as always, an interesting read from Ian.
It's going to be interesting to see how Intel counters this and the future Zen 2 architecture with their current monolithic die. Ryzen may be slower in single-threaded performance, but their approach (lots of small dies glued together via Infinity Fabric) gives much higher yields. As core counts go up and node sizes go down, the number of defects per die goes up; with a few large dies on a wafer versus lots of small dies, there is a massive amount of waste, driving prices up. Intel will probably have to think of a new design, and quickly. Ryzen 3 based on Zen 2 is only a year away (with an 8-core i7 due to land late this year), potentially with even more cores (10/12?). AMD is also likely to release a Zen+-based EPYC this year, which will really eat into the performance gap on the server side. It will also be interesting to see how much more reviewers can get out of Ryzen 2 using even faster memory, high-end cooling solutions (5.8 GHz having already been achieved with LN2) and other tweaks. Given the current GPU shortage, some gaming benchmarks with low/medium-spec GPUs would be interesting too. And just out of interest, what about disabling a CCX? The 2200G and 2400G got some good results with only a single CCX.
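To make the yield point concrete, here's a minimal sketch using the classic Poisson yield model; the defect density and die areas below are made-up illustrative numbers, not actual foundry figures:

    import math

    def poisson_yield(die_area_mm2, defects_per_mm2):
        # Fraction of dies with zero defects: Y = exp(-A * D)
        return math.exp(-die_area_mm2 * defects_per_mm2)

    D = 0.002  # defects per mm^2 -- illustrative assumption
    print(f"700 mm^2 monolithic die: {poisson_yield(700, D):.0%} yield")     # ~25%
    print(f"213 mm^2 Zeppelin-class die: {poisson_yield(213, D):.0%} yield")  # ~65%

Four small dies that each yield ~65% give far more sellable silicon per wafer than one large die at ~25%, which is the economic argument behind the glued-together approach.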
There's no way an Intel 8700K does such low fps in Rocket League. I can push this game to 377+ fps at 1080p with an EVGA 1080 SC2 at stock, and 250+ fps at 1440p. OK, I'm at 4.8 GHz on all cores, so a stock 8700K might get lower fps, but there's no way it drops to an average of 274 fps. (I can't speak for the other games tested, since I don't have a single one of them installed.) So we wait for Anandtech to correct those bugged numbers.
People should have the patches, and they don't affect games much. Even on my EVO SATA SSD I only saw a 10% drop in the 4K benchmark. Also, I'm not sure the AMD systems actually have the AMD Spectre v2 mitigation yet; AMD wasn't immune to it.
Meltdown is the serious bug, and it's the one everyone rushed to patch. Spectre is considerably less damaging, and that's the one that affected AMD, but no performance loss came from patching Spectre. Meltdown, with the official patches from Microsoft and Intel, affected Intel performance by up to 35%. It is a serious blow to Intel's performance.
A bit unfair that the i7-8700K was tested with DDR4-2666 while the R7 2700X got DDR4-2933, and the game suite is limited and favors AMD's extra cores. But given the stunt Intel pulled with Multi-Core Enhancement at the 8000-series launch... I'd say bravo! Fwck 'em!
They are using the chips to their specifications as listed by Intel and AMD. Intel's Coffee Lake supports DDR4-2666 and Pinnacle Ridge supports DDR4-2933. They also didn't overclock. That's the fairest review you can get: leave everything stock and use the maximum supported memory.
There are plenty of reviews out there that pump memory speeds up to the maximum the specific platform can manage, if you want to look at those. It's refreshing, among the sea of overclocked-system reviews, to read something where the actual chip specifications are taken into account and tested as such.
I agree. I prefer to see stock on release and later articles can start tinkering. I don't recall seeing many car reviews being trashed because the reviewer didn't drop the largest turbo they could find under the hood.
Agree 100%. I want good stock reviews and also good overclocked reviews. Many cases people want to run stock for noise / heat etc so it's nice to see what the chips can do at stock.
The performance of the Ryzen CPUs seems to be in line with other reviews. It is the Intel scores that are much lower in this review, and I think the reason might be the cooling solution used. The cooler used for the Ryzen 1800X (NH-U12S) is much better than the one used for Intel. If Intel was thermally throttling, that would explain the results. You should re-check some of the tests with better cooling.
The thing only has 3 heat pipes and a max of 19.5 CFM from a 70 mm fan. Even the stock Intel HSFs that ship with the non-K CPUs have at least 90 mm fans. What kind of bad joke is this?!
If you wanted to go cheap on an air-cooled HSF, you'd at least get a Hyper 212 (whatever the latest variant is: +, EVO, etc.) and use it on all the CPUs under test.
Just because the AMD Wraith coolers are noisy and garbage doesn't give them the liberty to conduct crappy testing by introducing an uncontrolled variable in the HSF.
The fact alone that it has heat pipes puts it above the stock Intel coolers by default, i.e. it's a better heatsink. How it actually performs I've no idea, but it's better than the Intel stock one; in fact, almost every single cooler on the market is better than the Intel one.
Whether the chip throttled with that cooler, I don't know; they're re-evaluating the differences in their review, as noted on the first page, so we have to wait and see.
I'm running an i5-3570K @ 4.5 GHz, a Z77 Sabertooth motherboard, 8 GB of 1333 MHz DDR3, an MSI Gaming GTX 980 Ti and a couple of Samsung SSDs. I am really looking forward to upgrading to Zen 2; there hasn't been anything of interest for a very long time.
I'm on roughly the same platform. I have a i7-3770k @4.5ghz on an Asus P8Z77-V Premium motherboard, 16GB 1866 MHz DDR3 (10-11-10-30-2T). I've been holding off on a new setup for something that has some real performance difference. If I was running at stock speeds, going to a i7-8700k would give me about a 40% increase in performance, but given that I have a 28% overclock (I had it up to 4.9GHz, but dialed it back for less wear/tear as that required increasing voltages, but the 4.5 is at stock voltage), it simply hasn't made sense to upgrade yet.
At some point it will, but we have not hit that point yet. Given that for the last 3-4 years Intel has focused on power optimizations as opposed to additional performance, the relative performance of the CPUs has not really changed much. I have been hoping that with AMD finally competitive again, we'll see a real fight in the CPU performance race, like the 30-40% performance gains we used to see when manufacturing process nodes were cut in half, back in the late 90s through about 2010...
MCE cheating? It is a feature made by motherboard vendors; not my problem that AMD can't clock past 4.1 GHz on all their CPUs. Ooooh, AMD sucks at OC, so let's punish everyone.
Proper RAM? Again, Intel's IMC is capable of higher frequencies than AMD's; not my problem that AMD can't support higher frequencies without stability issues. 3200 MHz is now the standard, yet many Ryzen mobos still dream of reaching and running that frequency stably. Paying 400 bucks for a CPU and another 200 for a mobo that can't handle it is a joke.
Yeah, and the Spectre v2 patch halved the Rocket League fps, right?
I understand that you are an AMD fanboi who is very happy he could finally afford six cores without selling a kidney, paired with low-clocked RAM producing the occasional BSOD, a Vega GPU doubling as a room heater, and a cheap rainbow cooler because that's trendy now, but stop with this. You are bad at trolling, and if you actually believe the things you publish, you should stop reading that swamp called Reddit.
On that same point, why isn't Intel actually supporting all those settings? MCE is motherboard-based, thus you need a specific motherboard that supports it. This article is testing the CPU, NOT THE MOTHERBOARD. The same goes for memory support: Intel's official memory support is DDR4-2666. They do not guarantee anything faster will work. Sure, it might, but if you get a CPU that doesn't work with faster memory (I have never personally witnessed this), Intel will tell you to go pound sand, because they only officially support DDR4-2666.
Why isn't Intel simply supporting these speeds and settings? AMD decided to support DDR4-2933; there's no reason Intel can't as well, other than not wanting to go through the certification process and/or not wanting to take the risk on their warranty...
So again, AT is testing supported configurations, not unsupported configurations, and giving the results. There are plenty of customers (i.e. the entire business community, who have admins who read sites like AT) that will not run unsupported configurations.
Reviewers can't win in this regard. Stick to official specs and one side will moan that chip A "can do" higher even though it's not officially supported, regardless of whether any mbd vendor decides to include such support in some way. Oc the RAM (which is very mbd-specific in how well that can work) and the other side will say that's oc'ing chip A more than chip B, or possibly oc'ing chip A but not chip B at all, depending on how it's done. As GamersNexus has shown, there is huge variability in testing setups.
Remember way back when AT used a factory oc'd GTX 460 when reviewing the 460 at launch? (EVGA GTX 460 FTW, oddly enough the very model I bought) There were various reasons they did it, but it caused a hell of a row. I thought it was a good thing because where I lived at the time oc'd cards were cheaper than stock models, but other people were outraged. I can see why many didn't like it, but they had the freedom to check other sites that were reviewing with stock cards (the reference clock was 675MHz, the FTW was 850MHz). I thought it was ok because the product was available to buy at a very cost effective price; buying a stock card made no sense, at least among the choices I had. It wasn't as if they'd taken an 800MHz Sonic Platinum, oc'd that to 850 and then claimed those results were a baseline comparison to everything else. But then, someone in different circumstances to me would have a different point of view, eg. if their relevant retail source didn't have the FTW or only sold much slower cards (quite a few models were set around the 700 to 730MHz mark, including the standard Sonic, Gigabyte WindForce and various others).
Stock or oc'd? What does that even mean when there are so many different variables involved? GN showed that memory subtimings can make a significant difference for AMD, and mbds are now getting pretty good at selecting these efficiently when left on Auto.
In the end, as long as it's clear what the review setup is doing, I don't see that it matters, it's just more useful data points along with all the data from other sites.
Whether the present results are correct (hopefully) or incorrect, will the story title be bumped up the homepage and changed to indicate updated information? Hoping it'll be easy to know when to jump back in to the article.
Thank you Anandtech and Ian for a great review. Too bad, so many don't understand your methodology. I hope you guys have a chance to post an overclocking review for the Ryzen 2 processors. I appreciate your hard work and enjoyed reading this review. I will be coming back to read the parts that are missing.
I am looking forward to upgrading my primary system to a Ryzen 2 system in the next few weeks.
"Too bad, so many don't understand your methodology. " Yes, *that's* the issue here, not the results being 100% opposite what 99 other reviewers' results show....
There must be something wrong with the Anandtech gaming benchmarks. Most reviews show the 8700K winning in gaming, yet your review shows Ryzen 2 winning in ALL CASES?!
Too bad; the Civilization 6 AI test would have been the most interesting, because the game still doesn't make full use of CPU cores. This is even worse with Total War, another title that lets you wait for the AI to finish its turns.
It's been more than a day since the article came out, and it still hasn't been fleshed out properly. I assume this is because you are reviewing the data, which is correct and commendable. However, doesn't it then make more sense to temporarily retract the article and republish it, with reviewed data and in full form, in a few days? I'm happy to wait for the quality analysis and results that Anandtech has built a reputation on. In the meantime, it doesn't leave a good impression for readers to come back to the article a day later and _still_ see it unfinished.
Agreed. To be honest this is a minor disaster for AnandTech. Even if their benchmarks are correct, a clarification is in order.
At the very least, they could have replicated a test by another publication to see if they get a similar result.
I'm also disappointed by the fact that so very few games have been tested. In the not so distant past, AnandTech was often not first but would impress with extensive and thorough testing.
I guess a lack of staff has caught up with the site.
No need to retract it. It sounds like, from what I have read, they did everything right. They ran everything within the specs of the CPU manufacturers; others did not. They updated all the Intel runs with new ones on systems fully patched for Spectre and Meltdown. One other site did the same and came up with similar results. So it seems it is other websites that need to explain their methodology, not Anandtech. Tom's Hardware has admitted it made the mistake of not fully patching its Intel systems, and they are working on fixing their results.
Well, that is a relief… it sounds like they did everything right… No, it doesn't. They tested just a few games, and their results are different from most other publications out there. But hey, it's good that you keep the faith.
At the very least, I would like to see a clarification. For example, they could replicate a test by another publication and they could do one of their own tests manually. After additional testing, they could either stand by their results and give a possible explanation for the discrepancies, or they could retract.
Any publication that is not prepared to do additional testing if they are an outlier is unreliable and irrelevant.
No, they didn't, and if you looked properly you'd see there's some crazy stuff going on. A Pentium having better gaming performance than an 8700K in some games? Yeah, right. The 2700X getting 240 more fps on average than the 1800X? Hah, not believing that, sorry. They messed up badly somewhere, and as for this BS excuse about Spectre/Meltdown being the reason for Intel's bad performance, how do they explain the gaming benchmarks versus the 1800X?
Nice attempt. The AMD red brigade is working as hard as it can to spread the lies, to misinform, to publish this review everywhere they can so people don't forget. Damn, you are better than the Russians.
Anyway, why not share with us which site it was that got similar results? Tom's Hardware didn't fully patch both Intel and AMD because there was no BIOS from the motherboard manufacturer at the time.
I'm with Carmen00: you need to retract the article until it is finished. I'm really interested in the storage portion of the article, but it just says [text] where the info should be. I don't recall anything like this when Anand was running the show.
Just wanted to say thanks to Ian and the anandtech staff for their hard work. I can understand the difficulty of producing these reviews. I have no question that you all act with absolute integrity and strive to use a fair and scientific approach to evaluating these products.
I also appreciate the interview you guys did with Global Foundries prior to this launch and the interview with Intel recently. I think all of these things really educate us about these products.
After seeing the controversy in the comments section for this review I have read several reviews from other sites. There are differences in some of the benchmarks. I appreciate that you all are reviewing your work to see if there was a methodology failure. I would caution anyone from adding their vitriol simply to be a jerk when none of us really know why those differences exist or if the testing methodology used here is deficient compared to other reviewers. I would also note in some reviews I could not even find how their test bed was setup. We should applaud the openness with which Anandtech performs these reviews and their willingness to take the absolute garbage they do from some in the comments section.
Lastly, I have been coming to anandtech for many years, which I am sure others have too. Ryan, Ian and the rest of the staff have done a great job since Anand's departure and to say otherwise is simply ridiculous.
"There are differences in some of the benchmarks." That's an understatement. There are fairly large differences, with many processors in completely opposite placings. Hell, AMD did much better here in gaming than even AMD themselves claimed they would! :) AMD's own slides showed the 2700X losing in 9 out of 10 games, as expected.
Here's my take on this: Ian mentioned on the Precision Boost page that the BIOS option for PB2 is cryptic, and may even lead people to disable it thinking it is something like Intel's Multi-Core Enhancement all-core turbo.
There's a small chance that this is the case with other reviewers, whereas Ian discarded some of his test data and restarted.
On the new PB2, at the 3-4 cores that games would engage, this is a BIG difference in frequency, to the tune of 400-500 MHz (3.7 GHz to 4.1-4.2 GHz). This 13-odd percent can easily bring about the performance increase in games that is seen here.
Just a thought, may not be correct, but maybe worth investigating... Ian/Ryan, any comments?
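For what it's worth, the arithmetic behind that theory checks out. A quick back-of-envelope sketch, using only the frequencies quoted above:

```python
# Back-of-envelope check: how much extra frequency would PB2 give at the
# 3-4 core loads a game typically generates? Figures from the comment above.
base = 3.7           # GHz, quoted all-core base
boosts = (4.1, 4.2)  # GHz, quoted 3-4 core PB2 range

for b in boosts:
    print(f"{base} -> {b} GHz: +{(b - base) / base * 100:.1f}%")
# 3.7 -> 4.1 GHz: +10.8%
# 3.7 -> 4.2 GHz: +13.5%
```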
Someone needs to explain the bizarre choice of cooler for the Intel benchmarks. It isn't one I'd heard of - I had to look it up and there isn't much information to be found. It looks rather inadequate to me, especially when compared with the unlikely combination of a Ryzen 2200G and Thermaltake Flo Riing 360 tested only a few days ago.
By the way, for those still waiting to read about it and in case this article never gets completed, StoreMI looks a bit like Apple's fusion drive technology.
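For anyone unfamiliar with tiering, the shared idea behind StoreMI and Fusion Drive is to promote frequently accessed blocks to the fast device. The sketch below is a toy illustration of the policy only; the block counts and thresholds are invented, and the real products migrate data transparently in the block layer rather than per access:

```python
# Toy illustration of hot/cold tiering: count accesses per block and treat
# the hottest blocks as living on the fast tier. Purely illustrative; this
# is nothing like AMD's or Apple's actual implementation.
from collections import Counter

FAST_TIER_BLOCKS = 4          # invented fast-tier capacity, in blocks
access_counts = Counter()

def access(block):
    access_counts[block] += 1
    hot = {b for b, _ in access_counts.most_common(FAST_TIER_BLOCKS)}
    print(f"block {block}: served from {'SSD' if block in hot else 'HDD'}")

for b in [1, 2, 1, 3, 1, 4, 5, 1, 2, 6]:
    access(b)
```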
No. I don't think that's true at all because trying "to sabotage" anyone is pointless as there's no transparency. That's why I said it needs to be explained. I want AMD to make good products and I want them to do well in tests but I want them to do it fairly. So I want to know why the Ryzen 1800X was tested with a much better cooler than the Intel processors. This nonsense is damaging for both Anandtech and AMD.
Well, if we're talking nonsense: the test was done with the cooler provided in the box by AMD, the Wraith Prism. If the test was to be properly fair, AnandTech should have used the cooler provided 'out of the box' by Intel. Now what is the name for that cooler? P.O.S.!?!?
A 14nm 6-core without HT (the i5) still owning AMD's 12nm 8-core with HT, their flagship, in gaming. Wouldn't call that the top.
The 8-core Coffee Lake will destroy everything, and given the 2700X's power usage after overclocking, and the upcoming 2800X which will use even more power, Intel doesn't even have to worry about their TDP.
Why the use of such emotional language? It's just a ruddy CPU. :D I swear sometimes these forums make tech discussions read like puerile Transformers battles...
There is a lot of nonsense in here towards Ian... he did everything right... he took the time to secure both platforms and gave them a level playing field... we knew ahead of time that Meltdown had up to a 35 percent performance penalty on Intel... why is this hard to believe... the only reviewers who took time to secure the platforms first were AnandTech and TechRadar... also if you remember right, the WCCFtech preview showed the same results. AMD themselves didn't bank on the Meltdown penalty as it pertains to Intel... but I'm sure they'll take it! Crown the new king... AMD has Ryzen! Intel, what are you going to do... completion is good!!!
Sadly Ian Cutress has lost the plot. What he should do is replicate a few tests by other publications. That would give useful information. What he seems to be doing is going over data that he has already collected.
But thank you for stating that ‘completion’ is good. That put a smile on my face.
Kinda agree. Remember the scene at the start of 2001 where the two bands of apes face off each other at the water hole, yelling and screaming? Some of the exchanges here remind me of that. :D
I don't get your numbers. In the 8700k review you wrote 87W total package (full load) power. In this review you're now up to 120W for the 8700k total package (full load) power.
Hi, I asked the question and Ian replied yesterday. On the Intel platform you have a feature called MCE (Multi-Core Enhancement) that allows all cores to run at the top turbo frequency rather than just one, as the stock setting would. The thing is that some motherboard manufacturers have it on by default, some don't, and even on the same motherboard, from one BIOS to another, manufacturers change the way it works... so with MCE on you get 120W, with MCE off you get 87W. (I think that MCE can improve productivity performance, but it may lower gaming single-thread performance if the game favours MHz and doesn't use the cores.)
If that is true then Intel not only has roughly 10% IPC advantage, but also several hundred MHz if not GHz of clock advantage plus a big efficiency advantage.
The 8700K is easy to overclock; if you get lucky, you can reach 5.0GHz without delidding, just on air with something like a Noctua NH-D15. If you want to be safe: https://siliconlottery.com/collections/coffeelake
Even Intel's high-end lineup, the 8/10/12 core parts, can reach 4.7-4.8GHz; the problem is just cooling.
I had to read that twice to believe my eyes. So a 5GHz oc is getting "lucky" on a CPU that already has an official max turbo of 4.7? Blimey, the state of modern oc'ing really is woeful. The 2700K will happily oc to 5GHz on air with just one fan and a moderate cooler like a TRUE, with good temps and decent voltage, on a chip that has a max turbo of only 3.9. If this is what it's come to, where a mere +300MHz bump over the max turbo is considered lucky, then oc'ing as a thing is dead; the CPUs and mbds are just getting better and better at doing it automatically, slowly folding the concept into various official forms: more effective Turbo, XFR, and so on.
I don't see the appeal of oc'ing an 8700K at all, it's such a small difference over what the chip can already do. I'm still running on a 5GHz 2700K, it still holds up very well against modern products, especially with a mod BIOS that allows for NVMe boot, or a 950 Pro which has its own boot ROM.
It's a shame I can't read Bulgarian! In cases where a manufacturer makes such a request of reviewers they must absolutely declare it in their reports and the reason for the request should be explored.
Apart from the usual "which is fastest" arguments, I'm surprised nobody has picked up on power consumption. The 2700 has by far the lowest power consumption of any chip tested for a given amount of processing: speed is typically 20% lower than the 2700X, but power consumption is less than half. I thought Anand usually provides this as a result (total energy to complete a given test), but it's missing here.
This would make the 2700 an excellent choice for quiet SFF systems which want a fast CPU but also want to keep power down to reduce noise and heat.
Gamers Nexus found that if you lock frequency to 4.1GHz on the 2700X, you can undervolt it to 1.175V and get two-thirds the power consumption for marginal performance loss (less than 2%) compared to stock voltages and frequencies.
I thought that was one of the most impressive results about these new Ryzen CPUs; GN's power consumption tweaks were really impressive. IIRC, Steve summarised it by saying it wasn't that one could achieve higher ocs than before, rather that the voltage required to achieve a particular oc was now considerably less than on 1st gen Ryzen.
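Those GN numbers line up with the usual first-order dynamic power model, P ≈ C·f·V². A rough sketch; the stock frequency and voltage below are assumptions for illustration, not GN's measured values:

```python
# First-order dynamic power: P ~ f * V^2 (capacitance cancels in the ratio).
# Stock values are assumed for illustration; 4.1 GHz @ 1.175 V is the locked
# undervolt Gamers Nexus reported.
f_stock, v_stock = 4.35, 1.40    # assumed effective boost clock / voltage
f_uv,    v_uv    = 4.10, 1.175   # GN's locked undervolt

ratio = (f_uv / f_stock) * (v_uv / v_stock) ** 2
print(f"estimated power vs stock: {ratio:.0%}")   # ~66%, i.e. about two-thirds
```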
1. with an enthusiast GPU
2. with a high-end CPU
3. at 1080p
4. and affecting only 144Hz users
This bench needs to go. It is misleading and inaccurate depending on whether the GPU is the bottleneck or not. Joe Blow, looking at these, doesn't understand that buying an RX 580 is not going to get him the same results these stupid CPU benchmarks at 1080p suggest.
Joe Blow is not going to know until he sees budget, high-end and enthusiast GPUs in play with his intended CPU purchase. WE KNOW, but they don't.
All this for a stupid bench that impacts 1080p @ 144Hz users.
I have a 1080 Ti @ 2160p, and I can tell you that this stupid bench doesn't do jack in my situation... but the multi-threaded results do.
Well, although admittedly there are users that aren't interested in 1080p 144Hz performance numbers, there is a LARGE number of players that need exactly that. The cybercafe that I'm administering, for one, has 40 PCs with 40 144Hz monitors.
My point is that by looking at numbers, you can get the wrong idea.
Unless you test a budget, mid-range and high-end GPU at 1080p, 1440p and 2160p with a specified CPU, you don't get a clear picture.
As of today, this bench is only specific to 1080p @ 144Hz, which represents a really small % of potential users.
Like I was saying, I am at 2160p, which renders this bench totally useless. GPU bottleneck is going to be more and more present in the future because resolutions just keep increasing.
There aren't large numbers at all. The number of gamers who use high-frequency monitors of any kind is a distinct minority. The irony is they're re-sensitising their visual system to *need* higher refresh rates, and they can't go back (ref a New Scientist article last year, IIRC). IMO this whole push for high refresh rates is just a sideways move by the industry because nobody bothers introducing new visual features anymore; there's been nothing new on that front for many years. Nowadays it's just performance, and encouraging refresh is one way of pushing it. How convenient that these things came along just as GPU tech had become way more than powerful enough to run any game at 60Hz locked.
You are simply wrong. Doing, say, 4K benchmarks would just make people think it doesn't matter which CPU you have and that all are the same for gaming, which is totally wrong and inaccurate. Benchmarks for CPU game performance should definitely be done at a low resolution and with a powerful graphics card. Sweclockers still did 720p medium. The problem with medium is that you may lower the load on the CPU for things such as physics and reflections. Still valid for high-fps gamers, but maybe it should be combined with a higher setting too in case that uses more CPU; Ultra as the worst-case scenario.
Your suggestion is the opposite: it basically results in no data, and hence you could just as well not benchmark games whatsoever. Instead, do it at a low resolution, but then simply conclude something like "Even an i3 or Ryzen 3 is enough to achieve a 60 fps average experience", if that were the case. That would still be accurate and useful, and people could decide for themselves how much they care about 140 or 180 fps.
All these idiots who claim the Intel lead is only there at low resolutions are wrong and fool others. The Intel lead in executing game code is always there; it's just that you of course need a strong enough GPU for your settings and resolution to be able to appreciate it. But that runs in both directions. On YouTube someone tested, for instance, Project Cars on the new CPUs, and he had a little above 50% GPU load, so he obviously wasn't using all of it and was bottlenecked by CPU performance, yet the game only used just over 20% of the CPU. To the uneducated that may seem like "ooh, the Ryzen is so powerful for games, with so much headroom", but it's not, because clearly one (virtual) core was fully utilised and couldn't offer more performance, and the rest of the unused capacity was and is irrelevant for game performance because the game isn't using it anyway. It doesn't help to have unused cores. It does help to have more powerful ones, though.
144 isn't enough? :D That's hilarious. Welcome to the world of gamers who have sensitised themselves away from normal refresh rates and now can't go back. You're chasing moving goalposts. Look up the article in New Scientist about this.
It doesn't matter which CPU you choose for gaming. That's the point, but people like to dig up tech from 10 years ago to prove one vs the other. Games are going multithreaded, and even Intel is pushing this. So a 1080p heavily single-threaded gaming benchmark is misleading, unless you like living in the past. You win with Ryzen @ 1440p or above, and you win with future highly multithreaded games. But nope... let's just test World of Warcraft at 720p to show Intel's dominance, because that's the future?
"Doing say 4K benchmarks would just make people think it doesn't matter which CPU you have and all are the same for gaming"
Well, you are totally right on the first part: they don't matter much. Resolutions are just exploding, and 4K will become the new standard when the next console generation is released. That means 2160p will become the norm, and the GPU bottleneck will be even more present than it is right now. Right now the only way to get a CPU bottleneck is using a GTX 1080 at 1080p. Not a single sane person would spend that much to render 1080p, especially for running LoL, Fortnite, WoW, DOTA, CS or RL.
And by the way, we are fighting over results with a ±5% difference in gaming benches at 1080p. A storm in a teacup.
Well, unless Nvidia and/or AMD have a GPU up their sleeves which is 2-4x faster than what they currently offer, and can manufacture and sell it for only around $150, the next generation of consoles will not be 4K resolution, but simply upscaled 1080p.
In the computerbase.de review, tighter timings alone gave Ryzen between a 3% and 14% FPS increase across game titles at the same RAM frequency, melting the difference with the 8700K (running the same memory config) down to the range of 3% to 9% in Intel's favor :) Who says RAM config doesn't matter :)
So days later, still no answer as to why only AnandTech has such a "Great Disparity" in gaming results compared to...well, everyone? (Can't they double check just the 2700X and 8700K in a few key games in something under 4 days?) :)
If you compare the 'web' numbers in Ian's Coffee Lake review to this article, there is a huge performance drop across the board, for both original Ryzen and Intel numbers. I think that rules out cooler performance as the source of the anomaly. Also, those numbers shouldn't really be affected by vulnerability patching. That article lists the same versions of the benchmarks, on the same browser, without allowing it to update. Those tests should see limited effect from any Spectre and Meltdown updates, if I am to believe what we are being told (I'm no programmer, I hated programming.)
There is something to be explained here, but I've yet to hear any good theories.
Hmm... Guru3D has the 2700X scoring 965ms on Kraken, in line with these numbers. Their 1700X in the graph shows 752ms, in line with the Coffee Lake review numbers. Either this new chip is much slower, or those are old numbers. Their Intel numbers are in line with the Coffee Lake review numbers as well. Most certainly old numbers. They make no comment on this aberration. Perhaps this is due to Microsoft patching.
It's not a mistake. Anandtech uses the Windows 10 Enterprise edition vs. the Windows 10 Home or Pro most other reviewers. The Spectre/Meltdown mitigations on the Windows 10 Enterprise and Education versions are safer, and therefore incur a higher performance penalty.
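Whichever edition is in play, the mitigation state of a test machine is checkable rather than a matter of faith. On Windows, Microsoft's SpeculationControl PowerShell module (Get-SpeculationControlSettings) reports it; on Linux the kernel exposes the same information through sysfs, readable with a few lines of Python. A sketch, assuming kernel 4.15 or newer:

```python
# Linux-only sketch: the kernel reports per-vulnerability mitigation status
# under sysfs. Each file prints something like "Mitigation: PTI" or
# "Vulnerable", so you can see exactly what a test bench is running with.
import glob, os

for path in sorted(glob.glob("/sys/devices/system/cpu/vulnerabilities/*")):
    with open(path) as f:
        print(f"{os.path.basename(path):20s} {f.read().strip()}")
```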
It would be good if the people at AnandTech could test the Intel and AMD chips under Pro or Home; as it stands, we are comparing numbers from different operating systems. According to the Steam survey, most people still use Windows 7.
People who get a new CPU / new system are likely to use Win 10. Anyway, if Enterprise behaves differently performance wise than Pro, then that is the real story. And AnandTech missed it.
There is a lot of speculation because AnandTech isn't able to provide a clarification in a timely manner. I'm going to avoid AnandTech from now on.
If there is a specific reason for the strange results they got (other than AnandTech mucking things up), that would be an interesting story. A serious tech journalist would have realized that right away.
The gaming benchmarks are disappointing anyway, since the scope of the gaming test is very limited.
And right now I don't care that much about the productivity test, since an 8-core Ryzen is obviously going to outperform a 6-core Intel i7 with optimized software.
Considering they already stated they would be publishing their findings this coming week, you are wrong about them not being able to provide clarification. But it is taking time because they have to go through and run various tests with various setting changes to determine what is influencing their numbers. This is beyond just running tests and writing up a review. People want to know how they got their numbers; even at stock and defaults, they have to go through and turn BIOS settings, Windows settings, etc. on or off to determine, and be able to explain, the impact on the results.
This pretty much doubles if not triples the time it would take to do just a simple review. And Ian was up for 36 hours straight to get the first review out (partially because they had to scrap 36 hours of testing results and start over), so he had to actually go home and get some sleep so he could tackle this task with an energized and clear mind.
So, how about you learn some patience and realize that they will get the information to us, or would you rather have it all rushed without any real details? They also most likely have weekends off, so in reality we should not expect anything until the middle of this week.
"However, this is just AMD’s standard PB2: disabling it will disable PB2. Initially we turned it off, thinking it was a motherboard manufacturer tool, only to throw away some testing because there is this odd disconnect between AMD’s engineers and AMD’s marketing."
I didn't read the tweet so didn't make the connection.
A publication should either stand by its tests or not. If AnandTech is unsure about their results, they should temporarily pull the benchmarks since they might be misleading or place a much larger disclaimer.
@Ranger1065 I'll see this one out, and yes, then I'm gone for good. I don't understand why you are happy about people not visiting this site anymore. I'm pretty sure that AnandTech can use all the traffic they can get.
It's sad that a once leading publication doesn't seem to have a place in the market anymore.
FFS, *please* tell me this (the crappy 8700K results) is not a result of using the bastardly patched Windows Enterprise for the gaming tests...! (For all those folks at home benching games with Windows Enterprise... yeah, that happens :/ )
I would really like to see some storage benchmarks comparing pre- and post-Spectre/Meltdown patching of Intel CPUs, as well as an apples-to-apples comparison of NVMe storage performance against an Intel 8700K.
It's really hard to generalize on why people purchase the processors they do.
I met a guy online with an over-the-top, super expensive computer. His sole purpose seems to be placing first in the online tests, and he will spend hours fine-tuning the overclocking and whatnot.
Another guy, mostly playing D3, purchased a 3K euro computer, which is absolutely over the top for what he plays/does. His reasoning is: I change my computer every 10 years, so when I do, I want the best components.
In my opinion, for most people without special needs (YouTube encoding, 3D rendering and whatnot), most processors have been good enough for years, and there is no reason to invest a lot in a processor when money is much better spent in an x4 PCIe SSD where you'll instantly feel the difference vs a hard drive or a medium quality SSD. To me, power consumption and noise of processor as well as graphic card is a consideration at least as important as price. The sole reason I would change processor today would be to get a fully Thunderbolt 3 compatible system, since the first TB3 audio interfaces are slowly coming to market.
Then again, I'm sure many people will have other priorities and reasons for purchasing their processors.
Many of these high end systems are overpriced, or they come with components that are not worth it for what is being done. With that being said, going for a higher end CPU does make sense for those looking to keep their systems for a long time. Video cards and storage are areas that people should pay close attention to when it comes to price.
NVMe drives are VERY expensive if you go up to the 1TB level, so spending that sort of money doesn't make sense when the prices will drop in the next two years. A 250-500GB NVMe drive would make more sense when combined with a traditional hard drive for additional storage. Video cards are also at a premium right now, as is RAM. If the system were purchased back in April of 2017, then yea, not too horrible to go for 32GB of RAM back then, but now, I'd stick with 16GB of RAM due to the prices being so much higher than they were.
For desktops, Thunderbolt isn't all that amazing when you can add a video or sound card to the system that will do what you want it to. Laptops are another story, and you need to pick and choose your priorities.
That's if you're short on money. I don't spend much extra other than vacations & eating very well. So when I upgrade, which is every 5 to 10 years, I buy the best available like Silma. I have a 1TB 960 Pro for that reason, it was $650 and I didn't think twice about it. I need the most reliable, fastest drive at the time. The 960 Pro is a MLC memory configuration, I've always used higher end MLC drives and they've served me very well.
I'm not waiting a year or two, when I have over $100,000 sitting in my bank account doing nothing. What's the point, it's just $650. Same goes for the rest of my computer, which I only own/maintain one of.
Not everyone is a child or someone who doesn't spend the majority of their time progressing their careers so they can make more money. The price consideration is not the end-all, ultimate rule on hardware for every single consumer.
Indeed, though I guarantee some here will react poorly at the notion of someone who can make such a purchasing decision. :) Sometimes the best makes perfect sense, and if one can afford it, then why not.
Why are Anand's gaming numbers showing the 2700X beating all Intel CPUs when every other reviewer still shows the 8700K/7700K still being the best gaming CPUs?
TechRadar & the WCCFtech preview have the same results. If you have been following Spectre as I have, you would've seen even users finding this result. See the top comments here. https://np.reddit.com/r/pcmasterrace/comments/7obo...
AT, TR & WCCF's results are accurate. Many reasons for this:
- Many reviewers used the old Ryzen Balanced power plan, which cripples the 2700X (a quick way to check the active plan is sketched below)
- Disallowed the motherboard settings that push the chip over TDP
- Fully patched as far as possible for Spectre v1 & v2, which cripples Intel by up to 50% in IO-heavy tasks (streaming textures, for games that do so)
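On the power-plan point, it is easy to confirm what a Windows test bench is actually using. A minimal sketch shelling out to the stock powercfg tool; the plan name you see depends on what AMD's chipset driver installed ("Ryzen Balanced" is the usual name, but treat that as an assumption):

```python
# Windows sketch: print the active power plan, e.g. to confirm whether a
# Ryzen test system is on plain "Balanced" or AMD's "Ryzen Balanced" plan.
import subprocess

out = subprocess.check_output(["powercfg", "/getactivescheme"], text=True)
print(out.strip())   # e.g. "Power Scheme GUID: <guid>  (Ryzen Balanced)"
```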
There is naturally, lots of resistance to the fact that AMD is dominating. It's over for now, time for people to just admit it, they got screwed if they don't have Ryzen. Or at least, bought the inferior product.
I don't think those posting so much venom about the results will change their minds until AMD releases something that really is just right out the gate blatantly faster, including for IPC. Another year or two and I think that will happen.
There's usually a lag from 6-12 months on any change that's already in place. Any topic really. Humans aren't very good at seeing what's in front of them. It requires enough people repeating it over and over around them, until they accept reality.
Before that reassurance from society around them, they don't have the confidence to see/admit reality. Just something I've noticed. :)
I don't know what reviews you read, but the WCCF review shows slight favor to the 8700K in gaming. However, it's an incomplete review of gaming, as they only test at 1440p Ultra, where the GPU bears most of the workload, and they only show average framerate. Tech Report doesn't even go into any detail whatsoever on gaming and only broaches the topic in a couple of paragraphs on the conclusion page. Still, even they show a lead for Intel. Anandtech shows the 2700X leading every game in framerate, which is flat-out inaccurate when compared to other reviews.
The Spectre BS has marginal, if any, impact on game performance. I don't know how you get the idea that CPU IO is related to loading textures in a game when textures are loaded into VRAM by the GPU. Looking further into the test setup, Anand uses slower RAM on the Intel platforms, an ECC mobo for Z170, doesn't disclose GPU driver versions, and uses an enterprise OS on consumer hardware. I'm guessing these and/or other factors contributed to the inaccurate numbers relative to other reviewers, causing me to lose a lot of respect for this once well-regarded hardware reviewer. I'll get my benchmark numbers from PC Perspective and Gamers Nexus instead.
Not hating on AMD; I even own stock in both AMD and Intel. They offer tremendous value at their price points, but I spend a lot of money on my PC and use it for gaming, overclocking/benching, and basic tasks, which all seem better suited to Intel's IPC/clock speed advantage. I need reviews to post accurate numbers so that I can make my upgrade decisions, and this incomplete review, with numbers not reflective of actual gaming performance, fails to meet that need.
Come on, man. I almost stopped responding to replies like this. WCCF benches the base 2700; of course the 8700K wins there, they don't include the 2700X. Again, the results line up with AT's. I wrote TR but meant TechRadar.
Eh, I'm not going to keep going on addressing all these "points". IO is a syscall; reading/writing to disk is a syscall, and that's where Intel takes up to a 50% perf hit with their Spectre v3 patches in place. This is known, and has been known for months, as regards the impact on games that do lots of texture streaming, like ROTR. I even provided user evidence that beat Anandtech here to the punch by 3 months. (A crude way to probe that syscall cost yourself is sketched after this comment.)
Anand used the Intel/AMD memory spec. That's what you're supposed to do when testing a product advertised to use certain components (for good reason, BTW, stupid gamer kids discounted).
Bottom line is that you and people flipping out just like you are wrong. I already knew about this being under the surface months ago. Now that it's impossible to cover it up with the 2000 series launch, more people are simply aware that AMD has taken over.
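On the syscall claim above: the raw cost of crossing the user/kernel boundary is measurable from user space. A crude, Unix-only sketch (os.pread is not available on Windows); absolute numbers vary wildly by kernel and mitigation state, and this is not the methodology any of the cited reviews used:

```python
# Crude probe of syscall cost: time a tight loop of small pread() calls,
# each of which crosses the user/kernel boundary that the Meltdown/Spectre
# mitigations make more expensive. Run with mitigations on vs off on the
# same machine to see the delta.
import os, time

fd = os.open(__file__, os.O_RDONLY)
N = 200_000
t0 = time.perf_counter()
for _ in range(N):
    os.pread(fd, 64, 0)
dt = time.perf_counter() - t0
os.close(fd)
print(f"{dt / N * 1e9:.0f} ns per pread() call")
```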
Actually, I can't be bothered waiting, because it's futile.
The benchmark from that thread shows there was no noticeable performance regression after the updates were applied.
I know what you're going to do. "Look at those min fps. I WAS RIGHT. I WAS RIGHT", you're thinking right now. No, you weren't. If you had ever run ROTR benchmarks, you would have experienced it: there are quite severe discrepancies in the built-in benchmark when it comes to min/max fps. I noticed it myself when I was overclocking a 6700K and running game benchmarks and stability tests. Since you are mostly using anecdotal evidence, don't know how to make proper arguments, and don't provide valid sources, we are really limited here, but that's what we have. (A sketch of why single min/max figures mislead follows below.)
It is not mine, but it proves my point: there is an issue in the benchmark. It pretty often shows wrong/misleading min/max fps which other benchmarking solutions don't record.
The video was published on 7 Jul 2016, so no Meltdown/Spectre for you. I know you will argue it is no coincidence with those min fps, but look at the max as well.
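For the bystanders: this min/max argument is really about statistics. A single minimum is set by one frame, which is why most outlets moved to 1% and 0.1% lows. A sketch with invented frame times:

```python
# Why a single "min fps" value misleads: one hitch sets the minimum, while a
# percentile summarises how often slowdowns actually occur. Frame times (ms)
# below are invented for illustration.
frames = [16.7] * 500 + [20.0] * 10 + [70.0]   # one isolated 70 ms hitch

def fps(ms): return 1000.0 / ms

worst = max(frames)
p99 = sorted(frames)[int(len(frames) * 0.99)]  # 99th-percentile frame time
print(f"min fps:    {fps(worst):5.1f}")        # ~14 fps, set by a single frame
print(f"1% low fps: {fps(p99):5.1f}")          # ~50 fps, far more representative
```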
Are you retarded? I know you are, because I ran those benchmarks myself and it's reproducible in more games than ROTR. Where's your contradicting information to back your claim? You do know that trying to poke holes in info is not an argument.
Thanks for a great review. Any chance it would be possible to look into how Speed Shift 2 compares to AMD's solution for short burst loads and clock ramp-up? Thanks!
I’ve visited the comments section a few times since the publication. As a psychologist in training, I’ve found it interesting as the initial complaints about this review were reasonable (it doesn’t match other sites), but by page 45 are now bordering on paranoia and conspiracy theories. The conspiracy theories are all the more puzzling when the simplest and most reasonable explanation is that the spectre patch has punished Intel processors rather severely. I’ve found trying to argue against conspiracy theories, be it the moon landing or anti-vaxers, to be singularly ineffective.
The more you provide scientific evidence and rationality, the harder conspiracy theorists dig in their heels and defend their original position. Our natural confirmatory bias to only seek evidence which confirms pre-existing beliefs seems to be a flaw built into the wiring of the human brain. Psychologically protective? Yes... it’s nice to always be right. Useful for doing science? No.
I’d be delighted (and shocked) in a week’s time to learn of massive incompetence or a cover up. I expect there to be some interesting and unexpected details. But I’m guessing no evidence will be found for the commonly repeated conspiracy theories (spectre effect is minimal, heatsink throttling, bias against intel, etc.). But I guess that will just be further evidence there really is a conspiracy... whatever.
I think you need more training, psychologist in training, because it seems that you can't detect your own personal bias. As you stated yourself, the original complaints are quite reasonable. The problem is that AnandTech is not addressing these complaints in a timely manner and is mostly interested in damage control.
The fact that some complaints are unreasonable doesn't change the fact.
Many other reviewers have applied all relevant patches, it is poor form to assume that they haven't. But I understand why you question their competence or integrity. It's cognitive dissonance. You trust AnandTech. In this case AnandTech is an outlier and has not clarified the unique results of their gaming test. Your trust in AnandTech is therefore not logical, and yet you consider yourself a logical person.
Therefore, you have decided that the 'logical' explanation is that all other reviewers haven't applied the patches... whatever.
This comment by RafaelHerschel doesn't make sense. The person being maligned said exactly this: "I expect there to be some interesting and unexpected details. But I’m guessing no evidence will be found for the commonly repeated conspiracy theories..."
And he/she was EXACTLY CORRECT in that prediction.
Your complaint, on the other hand, seems disingenuous. Anandtech's staff immediately flagged their gaming results as anomalous (on just about every page of the article). Then they dug deep to figure out what happened (which takes time to test, confirm, and then publish). Then about 5 days later they posted updated results (2700X and i7-8700K, so far) and a VERY DETAILED explanation of what happened.
So.... What's the problem again? That sometimes unforeseen test parameters can lead to different results? That can happen. The only question is how was the situation handled. In this case, I think reasonably well under the circumstances.
Grud knows what "timely manner" is supposed to mean these days. Perhaps RafaelHerschel would only be happy if AT could go back in time and change the article before it was published.
Meow.au, re what you said, Stefan Molyneux has some great pieces on these issues on YT.
They ran all systems at both Intel's & AMD's listed specs; as such, AMD's memory was at 2933MHz on Zen+ and 2666MHz on Intel's Coffee Lake 8700K. They did the same for the older-gen parts and ran those at their listed specs as well.
There have been a few other media outlets that did the same thing and got the same results, or very close to them. AMD's memory controller seems to give more bandwidth than Intel's at the same speed, so with Intel not running at 3200MHz like most media outlets used, maybe Intel loses a lot of performance because of that while AMD loses next to nothing from not going to 3200MHz. It is all just guesses on my part at the moment.
Food for thought: when Intel released the entire Coffee Lake lineup, they only released the Z370 chipset, which has full support for overclocking including the memory, and almost all reviews were done with 3200MHz-3400MHz memory on the test beds, even for the non-K Coffee Lake CPUs. Maybe Intel knew this would happen and made sure all Coffee Lakes looked their best in the reviews. For the few sites that retested once the lower-tier chipsets were released, the non-Ks using their rated memory speeds lost about 5%-7% performance, in some cases even a bit more.
I am no fanboy of any company; I just put out my opinions & theories based on the information we are given by the companies as well as the media sites.
People never fail to amaze me, so you basically know nothing about the topic, yet you still managed to spit 4 paragraphs of mess, even made some "food for thought".
Slower RAM means a performance regression unless you have big caches, which is not the case for Intel or AMD.
It seems pretty basic to me, what was said in the post. It is not my problem if you do not understand what I and some others have said about this topic. Pretty simple: slower memory means less bandwidth, which in turn gives less performance in memory-intensive workloads such as most games. All you have to do is go and look at some benches in the reviews to see AMD has the upper hand when it comes to memory bandwidth; even Hardware Unboxed was pretty surprised by how good AMD's memory controller is compared to Intel's. Yes, Intel's can run memory at higher speeds than AMD's, but even with that said, AMD does just fine. You are right about cache sizes: neither has an overly large cache, but AMD's is bigger on the desktop-class CPUs, and that is most likely one of the reasons their memory bandwidth is slightly better.
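For reference, the theoretical numbers behind the bandwidth argument are simple: peak DRAM bandwidth is the transfer rate times 8 bytes per transfer times the channel count, so the gap between the two official specs is about 10%:

```python
# Peak theoretical DRAM bandwidth: MT/s * 8 bytes per transfer * channels.
def gbps(mts, channels=2):
    return mts * 8 * channels / 1000   # GB/s, dual channel by default

for name, mts in (("DDR4-2666 (Intel spec)", 2666),
                  ("DDR4-2933 (AMD spec)", 2933),
                  ("DDR4-3200 (typical review kit)", 3200)):
    print(f"{name:32s} {gbps(mts):5.1f} GB/s")
# 2666 -> 42.7 GB/s, 2933 -> 46.9 GB/s, 3200 -> 51.2 GB/s
```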
Almost all the popular hw reviewers don't have a clue. They tell you to OC but do not explain why, or what you should accomplish by overclocking. Imagine you have some bad Hynix RAM which can barely be OC'd from 2666 to 3000MHz, but you have to loosen timings from CL15 to CL20 to get there.
schlock, the chips were run at official spec. Or are you saying it's AMD's fault that Intel doesn't officially support faster speeds? :D Also, GN showed that subtimings have become rather important for AMD CPUs; some mbds left on Auto for subtimings will make very good selections for them, giving a measurable performance advantage.
It is April 24th, and the page on X470 still states: "Technically the details of the chipset are also covered by the April 19th embargo, so we cannot mention exactly what makes them different to the X370 platform until then."
Today Phoronix is reporting that after AMD's newest AGESA update, their 2700X system is showing 10+% improvements in a number of benchmarks. It is unknown whether the impact on Windows will be the same. But you see how all the many variables could explain the differences.
Well, I am still waiting for AnandTech to update the article. I am very interested to know how Ryzen beat Coffee Lake so soundly. I believe AnandTech's review was performed correctly, but I want to know what is actually wrong with the other reviews that make Intel the winner in some games. It seems not to be related to the security patches.
Yep, they sure did. They must have redone the tests, but this time turned on MCE for Intel and upped the memory clock to at least 3200MHz for Intel as well, to see those kinds of gains in games over last week's charts. If they decide to explain it, they will spin it as "oh, we had the wrong data points in the charts for Intel"... lol
Yes .. at 1080P. The 4K gaming results are rather mixed. So the original conclusion still stands for me. The AMD Ryzen 2700X is roughly on par with the 8700K at 4K gaming, and pulls ahead in productivity applications.
Here is how I see it, at 1080p the new Ryzen results are good enough for 60 FPS gaming. The 2600 (non-x model) sometimes drops below 60 FPS but for a system that is equally used for productivity and gaming, I can certainly live with that. For a system that is mainly used for gaming, I still prefer Intel, but by a slimmer margin than before.
You are hereby awarded the Sensible Chap medal for mentioning 60Hz gaming in at least a non-negative manner. 8) A few pages back, one guy described anything below 144Hz as useless.
The question is whether testing a CPU at 4K gaming makes much sense. At 4K the bottleneck is the GPU, not the CPU, especially since they tested with a 1080 and not a 1080 Ti. It is not a coincidence that the CPUs all show roughly the same fps in the 4K tests. Civilization seems to be easier on the GPU and shows the 8700K in the lead; all the other games show almost the same fps for all 4 tested CPUs. That's because the fps is limited by the GPU in that case, not by the CPU.
You might want to bring up the point that if you are gaming at 4K and the highest settings, it doesn't make sense for you to look at 1080p benchmarks. And right now that might be true, but not in a couple of years, when you upgrade your GPU to a faster model and the games are no longer GPU-bottlenecked. Then, where you now see 60fps, you might see 100fps with an 8700K and only 80fps with the Ryzen 2600X.
Basically, testing CPUs in Gaming at a resolution that stresses out the GPU so much that the performance of the CPU becomes almost irrelevant is not the right way to judge the Gaming Performance of a CPU.
If your point is that at the time you purchase a new GPU you will also purchase a new CPU, then this might not affect you, and you decide to pick the 2700X over an 8700K because of all the advantages in other areas. But in general, we have to admit, the crown of "best gaming CPU" is (sadly) still in Intel's Corner.
If all you're doing is gaming at 4K then yes, in most titles the bottleneck will be the GPU, but this is not always the case. These days live streaming on Twitch is becoming popular, and for that it really does help to have more cores; the load is pushed back onto the CPU, even when the player sees smooth updates (the viewer-side experience can be bad instead). GN has done some good tests on this. Plus, some games are more reliant on CPU power for various reasons, especially the use of outdated threading mechanisms. And in time, newer games will take better advantage of more cores, especially due to compatibility with the consoles.
So... I have 2666MHz RAM... RAM support for the 2700X says 2933... what does that mean? Is 2933 the lowest RAM compatibility? FML if I can't go with the 2700X because of RAM... -_-
It refers to the highest OFFICIALLY supported frequency for the chipset on your mobo. You should be able to run RAM with higher clocks than 2933, but there might be issues, because Ryzen memory support sucks. For higher-clocked RAM, I would check if it is on the QVL; that way you can be sure it was tested with your mobo and no issues will arise.
2666MHz RAM will run without any issue on your system.
Make sure you have the newest BIOS update; AGESA 1.0.0.2a seems to improve memory compatibility too. My crappy Kingston 2400 CL17 now works fine at 3000 CL15 at 1.36V. I'll try 3200 at 1.38V later.
Good work tracking down the timing issues! I know that this review is still WIP, but just noticed that the "Power Analysis" block has a "fsfasd" written right after it, that probably isn't needed :)
Not an argument. It is just as interesting to learn about how and why this issue occured, to understand the nature of benchmarking. Life isn't just about being spoonfed end nuggets of things, the process itself is relevant. Or would you rather we don't learn from history?
When the 65W i7-8700 is 15% faster in Octane 2.0 than the 105W Ryzen 7 2700X, it is just sad.
Of course, the horrible x64 practically demands that compilers optimize for a very specific CPU implementation (choosing and ordering instructions in the code accordingly); AMD could have at least recognized this and optimized their own implementation for the same Intel-optimized code generators...
Intel compilers and libraries tend not to use the ideal instructions unless they detect a GenuineIntel signature via CPUID - it'll likely use the default lowest-common-denominator pathway instead.
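That vendor check hinges on a twelve-byte string returned by CPUID leaf 0. Reading it directly needs assembly or an intrinsic, but on Linux the kernel has already done it, so a sketch needs neither (Windows users would need a CPUID intrinsic or a library instead):

```python
# Linux sketch: print the CPU vendor string that dispatchers key on
# ("GenuineIntel" vs "AuthenticAMD"), as read by the kernel via CPUID.
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("vendor_id"):
            print(line.split(":", 1)[1].strip())
            break
```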
TDP is more of a guideline - it doesn't determine actual power usage (we've seen Coffee Lake use way more than the TDP), let alone the power used in a particular operation. Having said that, I wouldn't be surprised if Intel were more efficient in this particular test. But it'd be interesting to know how much impact Meltdown patches have in that area; they might well increase the amount of time the CPU spends idle (but not idle enough to go into a sleep mode) as it waited to fetch instructions.
It's a guideline for cooling solutions. Look at the power consumption numbers in this test for example.
Ryzen 2700X power consumption under full load 110W. Intel i7 8700K power consumption under full load 120W.
Both are at stock speeds, with the Ryzen having 8 cores versus 6, and the 2700X scoring 24% higher in Cinebench. Ryzen is rated at 105W TDP, so actual power consumption at stock speed is pretty close. The 8700K uses 120W, so it's pretty far from the 95W TDP it is rated at.
The 8700 also uses 120W so it's even further from the 65W TDP it's rated at. In comparison Ryzen 2700 uses 45W when it has the same rated 65W TDP. I know which one I'd prefer to put into a quiet low-power system...
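Taking the figures in this thread at face value, the efficiency gap is straightforward to quantify; a sketch using the quoted package power and the ~24% Cinebench lead as given:

```python
# Relative multi-thread efficiency from the figures quoted above:
# the 2700X scores ~24% higher than the 8700K, at 110 W vs 120 W package power.
score_8700k, watts_8700k = 1.00, 120
score_2700x, watts_2700x = 1.24, 110

eff = (score_2700x / watts_2700x) / (score_8700k / watts_8700k)
print(f"2700X perf/watt vs 8700K: {eff:.2f}x")   # ~1.35x
```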
Just looking back at this: the title says 2700X/2700/2600X/2600, and yet most tests only list results for the 2700X and 2600X. Not good for someone really wanting to see the differences in power use or performance comparing them head to head, so to speak.
Seems the 2700 would be a "good choice": according to the little bit of info given about it, it ends up using less power than the 2600 even though it's rated at the same TDP with 2 extra cores / 4 extra threads O.O
I do "hope" the sellers such as Amazon, at least for us Canadian folk, stick closer to the price these should be at, vs tacking on $15-$25 or more over MSRP; it seems if one bought them on launch day, pricing was right where it should be.
The 1600 has bounced around a little bit, whereas the 1600X is actually at a fair price compared to what it was ("very tempting"), though the lack of a boxed cooler is not good. Shame the 2600 only comes with the Wraith Stealth instead of the Spire, seeing as the price is SOOO close (not to mention launch price vs what the 1xxx generation costs NOW). AMD should have been extra nice and bundled the Wraith Spire with the 2600/2600X and the Wraith LED/Max or whatever with the 2700/2700X.
I would imagine if they decide to do a 4-core 8-thread 2xxx, that would be the spot to use the Wraith Spire (less heat load via fewer cores, type deal).
Not trying to be sarcastic, but will this article be finished? I really wanted to read the storage and chipset info. If the article is as complete as it is going to get, please let us know. A 20-year reader asking.
I'm sure it will be finished one day but I agree that it doesn't seem so at the moment. If you want to find out about StoreMI AMD has a page about it: https://www.amd.com/en/technologies/store-mi
I think we've got ourselves a race: which will get here first, the missing parts of the 2nd gen Ryzen review, or new Raven Ridge drivers? Or perhaps hell will freeze first.
Sadly it appears as though the article will not be finished. This site was great during about its first 15 years of existence, Purch has done a thorough job of purching it up.
"Technically the details of the chipset are also covered by the April 19th embargo, so we cannot mention exactly what makes them different to the X370 platform until then"
That was written for the article published April 19th, and as of May 10th STILL in the text.
LOL, the benchmarks are now updated: Ryzen+ is absolutely outperformed in games by the 8700K, even with the Meltdown and Spectre patches. So nothing new, Ryzen is still bad.
If your use case is 1080p gaming I would agree; however, the difference becomes marginal as resolution increases. Also keep in mind that the 8700K currently retails for about $20 more than the 2700X and doesn't include a cooler, which makes it overall about $50 dearer...
My 2600X at stock does 177 in single-core Cinebench, but that is with an H100i V2 cooler. With the default cooler it gets the same score as you, 173. The cooler the chip, the higher the boost. Also, out-of-the-box XMP in the BIOS works at 3200 no problem, in fact at CL14. Out of the box, versus my 1600X in the exact same system, it is 15% faster across the board.
Nice review. One thing that bothers me is the inclusion of WinRAR in this review without a note stating it is an underperforming compression tool. It is known that 7-Zip can compress almost twice as fast as WinRAR. Not only that, there is also a lack of consistency between compression tests: instead of compressing and decompressing a set file, you are taking different procedures for each benchmark. The job is to compress/decompress; let the user know how it does and why.
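The methodology being asked for is easy to sketch: run every tool over the same input and report throughput and ratio together. Python's bundled zlib and lzma codecs stand in for WinRAR's and 7-Zip's here, purely to illustrate the idea:

```python
# Consistent compression methodology: identical input for every codec,
# reporting both speed and ratio. zlib/lzma are stand-ins for the codecs
# WinRAR and 7-Zip actually use.
import time, zlib, lzma

data = open(__file__, "rb").read() * 2000   # any fixed, repeatable corpus

for name, compress in (("zlib (deflate, level 6)", lambda d: zlib.compress(d, 6)),
                       ("lzma (preset 6)", lambda d: lzma.compress(d, preset=6))):
    t0 = time.perf_counter()
    out = compress(data)
    dt = time.perf_counter() - t0
    print(f"{name:24s} {len(data)/dt/1e6:6.1f} MB/s  ratio {len(out)/len(data):.3f}")
```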
Try to run a S.M.A.R.T. test on the drives. The virtual adapter is unable to provide any data and causes a blue screen. At least the last time I used the Enmotus version it did.
YET ANOTHER REVIEW THAT DOESN'T SHOW US THERMALS! HOW HARD IS IT TO SHOW US HOW HOT A CHIP RUNS ON AIR COOLING FFS. NO ONE SHOWS THERMALS ON THESE DAMN CHIPS, THIS IS THE 20TH REVIEW ON GOOGLE AND NO THERMALS!
Last year I upgraded from a 1st-gen i7 920 to an i7 8700K, and even with the Spectre & Meltdown patches performance has been amazing; also, Asus has recently been updating the motherboard BIOS with further CPU performance improvements.
Marlin1975 - Thursday, April 19, 2018 - link
Looks good, guess AMD will replace my Intel system next.
Just waiting for GPU and memory prices to fall.
3DoubleD - Thursday, April 19, 2018 - link
Agreed... the waiting continues
WorldWithoutMadness - Thursday, April 19, 2018 - link
Lol, you might even wait until Zen 2 comes out next year or even later.
Dragonstongue - Thursday, April 26, 2018 - link
should be out next year as AMD has been very much on the ball with Ryzen launches more or less to the DAY they claimed would launch which is very nice...basically what they are promising for product delivery they are doing what they say IMO, not to mention TSMC recently announced volume production of their 7nm, so that likely means GloFo will be very soon to follow, and AMD can use TSMC just the same :)
t.s - Tuesday, July 31, 2018 - link
What @WWM wants to say is: you can wait forever for RAM prices to go down, rather than waiting for Ryzen 2 to come out.
StevoLincolnite - Thursday, April 19, 2018 - link
I still haven't felt limited by my old 3930K yet.
Can't wait to see what Zen 2 brings and how Intel counters that.
mapesdhs - Friday, April 20, 2018 - link
If you ever do fancy a bit more oomph in the meantime (and assuming IPC is less important than threaded performance, eg. HandBrake is more important than PDF loading), a decent temporary sideways step for X79 is a XEON E5-2697 v2 (IB-EP). An oc'd 3930K is quicker for single-threaded of course, but for multithreaded the XEON does very well, easily beating an oc'd 3930K, and the XEON has native PCIe 3.0 so no need to bother with the not entirely stable forced NVIDIA tool. See my results (for FireFox, set Page Style to No Style in the View menu):
http://www.sgidepot.co.uk/misc/tests-jj.txt
mapesdhs - Monday, April 23, 2018 - link
Correction, I meant the 2680 v2.
Samus - Friday, April 20, 2018 - link
I never felt limited by my i5-4670k either, especially mildly overclocked to 4.0GHz.
Until I build a new PC around the same old components because the MSI Z97 motherboard (thanks MSI) failed (it was 4 years old but still...) so I picked up a new i3-8350k + ASRock Z270 at Microcenter bundled together for $200 a month ago, and it's a joke how much faster it is than my old i5.
First off, it's noticeably faster, at STOCK, than the max stable overclock I could get on my old i5. Granted I replaced the RAM too, but still 16GB, now PC4-2400 instead of PC3-2133. Doubt it makes a huge difference.
Where things are noticeably faster comes down to boot times, app launches and gaming. All of this is on the same Intel SSD730 480GB SATA3 I've had for years. I didn't even do a fresh install, I just dropped it in and let Windows 10 rebuild the HAL, and reactivated with my product key.
Even on paper, the 8th gen i3's are faster than previous gen i5's. The i3 stock is still faster than the 4th gen i5 mildly overclocked.
I wish I waited. It's compelling (although more expensive) to build an AMD Ryzen 2 now. It really wasn't before, but now that performance is slightly better and prices are slightly lower, it would be worth the gamble.
gglaw - Saturday, April 21, 2018 - link
i think there's something wrong with your old Haswell setup if the difference is that noticeable. I have every generation of Intel I7 or I5 except Coffee Lake running in 2 rooms attached to each other, and I can't even notice a significant difference from my SANDY 2600k system with a SATA 850 Evo Pro sitting literally right next to my Kaby I7 with a 960 EVO NVMe SSD. I want to convince myself how much better the newer one is, but it just isn't. And this is 5 generations apart for the CPU's/mobos and using one of the fastest SSD's ever made compared to a SATA drive, although about the fastest SATA drive there is. Coffee Lake is faster than Kaby but so tiny between the equivalent I7 to I7, I can't see myself noticing a major difference.
In the same room across from these 2 is my first Ryzen build, the 1800X also with an 960 EVO SSD. Again, I can barely convince myself it's a different system than the Sandy 2600k with SATA SSD. I have your exact Haswell I5 too, and it feels fast as hell still. Especially for app launches and gaming. The only time I notice major differences between these systems is when I'm encoding videos or running synthetic benchmarks. Just for the thrill of a new flagship release I just ordered the 2700X too and it'll be sitting next to the 1800X for another side by side experience. It'll be fun to setup but I'm pretty convinced I won't be able to tell the 2 systems apart when not benchmarking.
YukaKun - Saturday, April 21, 2018 - link
Oh, I'm actually curious about your experience with all the systems.
I'm still running my i7 2700K at ~4.6Ghz. I do agree I haven't felt that it's a ~2012 CPU and it does everything pretty damn well still, but I'd like to know if you have noticed a difference between the new AMD and your Sandy Bridge. Same for when you assemble the 2700X.
I'm trying to find an excuse to get the 2700X, but I just can't find one, haha.
Cheers!
Luckz - Monday, April 23, 2018 - link
The once-in-a-lifetime chance to largely keep your CPU name (2700K => 2700X) should be all the excuse you need.
YukaKun - Monday, April 23, 2018 - link
That is so incredibly superficial and dumb... I love it!
Cheers!
mapesdhs - Monday, April 23, 2018 - link
YukaKun, your 2700K is only at 4.6? Deary me, should be 5.0 and proud, doable with just a basic TRUE and one fan. 8) For reference btw, a 2700K at 5GHz gives the same threaded performance as a 6700K at stock.
And I made a typo in my earlier reply, mentioned the wrong XEON model, should have been the 2680 V2.
YukaKun - Tuesday, April 24, 2018 - link
For daily usage and stability, I found that 4.6Ghz worked best in terms of noise/heat/power ratios.
I also did not disable any power saving features, so it does not work unnecessarily when not under heavy load.
I'm using AS5 with a TT Frio (the original one) on top, so it's whisper quiet at 4.6Ghz and I like it like that. When I made it work at 5Ghz, I found I had to have the fans near 100%, so it wasn't something I'd like, TBH.
But, all of this to say: yes, I've done it, but settled with 4.6Ghz.
Cheers!
mapesdhs - Friday, March 29, 2019 - link
(an old thread, but in case someone comes across it...)
I use dynamic vcore so I still get the clock/voltage drops when idle. I'm using a Corsair H80 with 2x NDS 120mm PWM, so it's quiet even at full load; no need for such OTT cooling to handle the load heat, but using an H80 means one can have low noise as well. An ironic advantage of the lower thermal density of the older process sizes: modern CPUs with the same TDP dump it out in a smaller area, making them more difficult to keep cool.
Having said that, I've recently been pondering an upgrade for much better general idle power draw and a decent bump in threaded performance. Considering a Ryzen 5 2600 or 7 2700, but I might wait for Zen 2, not sure yet.
moozooh - Sunday, April 22, 2018 - link
No, it might have to do with the fact that the 8350K has 1.5x the cache size and beastly per-thread performance that is also sustained at all times, so it doesn't have to switch from a lower-powered state (which the older CPUs were slower at), nor does it taper off as other cores get loaded. This is most noticeable on the things Samus mentioned, ie. "boot times, app launches and gaming". Boot times and app launches are both essentially single-thread tasks with no prior context, and gaming is where a CPU upgrade like that will improve worst-case scenarios by at least an order of magnitude, which is really what's most noticeable.
For instance, if your monitor is 60Hz and your average framerate is 70, you won't notice the difference between 60 and 70; you will only notice the time spent under 60. Even a mildly overclocked 8350K is still one of the best gaming CPUs for this reason, easily rivaling or outperforming previous-gen Ryzens in most cases and often being on par with the much more expensive 8700K where thread count isn't as important as per-thread performance for responsiveness and eliminating stutters. When pushed to or above 5 GHz, I'm reasonably certain it will still give many of the newer, more expensive chips a run for their money.
spdragoo - Friday, April 20, 2018 - link
Memory prices? Memory prices are still pretty much the way they've always been:
-- faster memory costs (a little) more than slower memory
-- larger memory sticks/kits cost (a little) more than smaller sticks/kits
-- last-gen RAM (DDR3) is (very slightly) cheaper than current-gen RAM (DDR4)
I suppose you can wait 5 billion years for the Sun to fade out, at which point all RAM (or whatever has replaced it by then) will have the same cost ($0...since no one will be around to buy or sell it)...but I don't think you need to worry about that.
Ferrari_Freak - Friday, April 20, 2018 - link
You didn't write anything about price there... All you've said is that relative pricing for things is the same as it has always been, and that's no surprise.
The $$$ cost of any given stick is more than it was a year or two ago. A 2x8GB DDR4-3200 G.Skill Ripjaws V kit is $180 on Newegg today. It was $80 two years ago. Clearly not the way they've always been...
James5mith - Friday, April 20, 2018 - link
2x16GB Crucial DDR4-2400 SO-DIMM kit.
https://www.amazon.com/gp/product/B019FRCV9G/
November 29th 2016 (when I purchased): $172
Current Amazon price for exact same kit: $329
MDD1963 - Friday, April 20, 2018 - link
The Gskill 32 GB kit (2 x 16 GB/3200 MHz) I bought 13 months ago for $205 is now $400-ish...
andychow - Friday, April 20, 2018 - link
Ridiculous comment. 7 years ago I bought 4x8 GB of RAM for $110. That same kit, from the same company, seven years later, now sells for $300. 4x16GB kits are around $800. Memory prices aren't at all the way they've always been. There is clear collusion going on. Micron and SK Hynix have both seen their stock price increase 400% in the last two years. 400%!!!!!
The price of RAM just keeps increasing and increasing, and the 3 manufacturers are in no hurry to increase supply. They are even responsible for the lack of GPUs, because they are the bottleneck.
spdragoo - Friday, April 20, 2018 - link
You mean a price history like this?
https://camelcamelcamel.com/Corsair-Vengeance-4x8G...
Or perhaps, as mentioned here (https://www.techpowerup.com/forums/threads/what-ha... how the previous-generation RAM tends to go up in price once the manufacturers switch to the next-gen?
Since I KNOW you're not going to claim that you bought DDR4 RAM 7 YEARS AGO (when it barely came out 4 years ago)...
Alexvrb - Friday, April 20, 2018 - link
I love how you ignored everyone that already smushed your talking points to focus on a post which was likely just poorly worded.
RAM prices have traditionally gone DOWN over time for the same capacity, as density improves. But recently the limited supply has completely blown up the normal price-per-capacity-over-time curve. Profit margins are massive. Saying this is "the same as always" is beyond comprehension. If it wasn't for your reply I would have sworn you were simply trolling.
Anyway, this is what a lack of genuine competition looks like. The NAND market isn't nearly as bad, but there are supply problems there too.
vext - Friday, April 20, 2018 - link
True. When prices double with no explanation, there must be collusion.
The same thing has happened with videocards. I have great doubts about bitcoin mining as a driver for those price increases. If mining was so profitable, you would think there would be a mad scramble to design cards specifically for mining. Instead the load falls on the DIY consumer.
Something very odd is happening.
Alexvrb - Friday, April 20, 2018 - link
They DO design things specifically for mining. It's called an ASIC miner. Unfortunately for us, some currencies are ASIC-resistant, and in some cases they can potentially change the algorithm, which makes such (expensive!) development challenging.
Samus - Friday, April 20, 2018 - link
Yep. I went with 16GB in 2013-2014 just because I was like meh, what difference does $50-$60 make when building a $1000+ PC. These days I do a double take when choosing between 8GB and 16GB for PCs I build. Even hardcore gaming PCs don't *NEED* more than 8GB, so it's worth saving $100+.
Memory prices have nearly doubled in the last 5 years. Sure there is cheap ram, there always has been. But a kit of quality Gskill costs twice as much as a comparable kit of quality Gskill cost in 2012.
FireSnake - Thursday, April 19, 2018 - link
Awesome, as always. Happy reading! :)
Chris113q - Thursday, April 19, 2018 - link
Your gaming benchmark results are garbage and every other reviewer got different results than you did. I hope no one takes this review seriously, as the data is simply incorrect and misleading.
Ian Cutress - Thursday, April 19, 2018 - link
Always glad to see you offer links to show the differences.
We ran our tests on a fresh version of RS3 + April Security Updates + Meltdown/Spectre patches using our standard testing implementation.
jjj - Thursday, April 19, 2018 - link
I was wondering about gaming, so there is no mistake there, as Ryzen 2 seems to top Intel.
As of right now, I don't seem to find the memory specs in the review yet; safe to assume you did as always, highest non-OC supported speed, so Ryzen is using faster DRAM?
Also yet to spot memory latency; any chance you have some numbers at 3600MHz vs Intel? Thanks.
jjj - Thursday, April 19, 2018 - link
And just between us, would be nice to have some Vega gaming results under DX12.
aliquis - Thursday, April 19, 2018 - link
Would be nice if any reviewer actually benchmarked storage devices, maybe even virtualization, because then we'd see Meltdown and Spectre mitigation performance. Then again, does AMD have any for Spectre v2 yet? If not, who knows what that will do.
HStewart - Thursday, April 19, 2018 - link
I notice that the systems had higher memory, but for me I believe single-threaded performance is more important than more cores. But it would be biased if one platform is OC'd more than another. Personally I don't overclock - except for what is provided with the CPU, like Turbo mode.
One thing that I foresee in the future is Intel coming out with an 8-core Coffee Lake.
But at least it appears one thing is over: this Meltdown/Spectre stuff.
Lolimaster - Thursday, April 19, 2018 - link
An Intel 8-core CL won't stop the bleeding; they'd lose more profits making them "cheap" vs a new Ryzen on 7nm with at least 10% more clocks and 10% more IPC. RIP.
HStewart - Thursday, April 19, 2018 - link
I just have to agree to disagree on that statement - especially the "cheap" part.
ACE76 - Thursday, April 19, 2018 - link
CL can't scale to 8 cores...not without some serious changes to its architecture...Intel is in some trouble with this Ryzen refresh...also worth noting is that 7nm Ryzen 2 will likely bring a considerable performance jump while Intel isn't sitting on anything worthwhile at the moment.
Alphasoldier - Friday, April 20, 2018 - link
All Intel's 8-cores in HEDT except Skylake-X are based on their year-older architecture with a bigger cache and quad channel.
So if Intel has the need, they will simply make a CL 8-core. The 2700X is pretty hungry when OC'd, so Intel doesn't have to worry at all about its power consumption.
moozooh - Sunday, April 22, 2018 - link
> 2700X is pretty hungry when OC'd
And Intel chips aren't? If Zen+ is already on Intel's heels for both performance per watt and raw frequency, a 7nm chip with improved IPC and/or cache is very likely going to have them pull ahead by a significant margin. And even if it won't, it's still going to eat into Intel's profit as their next tech is 10nm vs. AMD's 7nm, meaning more optimal wafer estate utilization for the latter.
AMD has really climbed back to the top of their game; I've been in the Intel camp for the last 12 years or so, but the recent developments throw me way back to the K7 and A64 days. Almost makes me sad that I won't have any reason to move to a different mobo in the next 6–8 years or so.
mapesdhs - Friday, March 29, 2019 - link
Amusing to look back given how things panned out. So yes, Intel released the 9900K, but it was 100% more expensive than the 2700X. :D A complete joke. And meanwhile tech reviewers raved about a measly 5 to 5.2GHz oc, on a chip that already has a 4.7 max turbo (major yawn fest), focusing on specific 1080p gaming tests that gave silly high fps numbers favoured by a market segment that is a tiny minority. Then what happens? RTX comes out and pushes the PR focus right back down to 60Hz. :D
I wish people would stop drinking the Intel/NVIDIA koolaid. AMD does it as well sometimes, but it's bizarre how uncritical tech reviewers often are about these things. The 9900K dragged mainstream CPU pricing up to HEDT levels; epic fail. Some said oh, but it's great for poorly optimised apps like Premiere, completely ignoring the "poorly optimised" part (ie. why the lack of pressure to make Adobe write better code? It's weird to justify an overpriced CPU on the back of a pro app that ought to run a lot better on far cheaper products).
Santoval - Thursday, April 19, 2018 - link
It's possible that the first consumer Intel 8-core will be based on Ice Lake. Cannon Lake will probably be largely limited to low-power CPUs, and will probably top out at 4 cores. Of course, if Ice Lake is delayed again, Intel might scale out Cannon Lake to more cores. Cannon Lake will be just a 10nm node of the Skylake/Kaby/Coffee Lake architecture, so it will most likely provide mostly power efficiency gains.
aliquis - Thursday, April 19, 2018 - link
The latest roadmap shows a Coffee Lake refresh in Q4.
mahoney87 - Thursday, April 19, 2018 - link
lol :D
https://imgur.com/SmJBKkf
They done fecked up
Luckz - Monday, April 23, 2018 - link
Rocket League is a joke game when it comes to benchmarking, optimization and so on.
Chris113q - Thursday, April 19, 2018 - link
Do you really need to be spoon-fed information? How long would it take you to find the other reviews by yourself?
PCPER, Tweaktown, Toms Hardware, Hothardware, Computerbase all had different results (can't post links due to spam protection). Not to mention you'd have to be totally tech illiterate to believe that a stock 2600 can beat the 8700K by such a huge margin. Meltdown/Spectre patches don't affect gaming performance that much, so don't you put the blame on that.
The result discrepancy is embarrassing; there goes the last speck of reputation Anandtech had as a reliable source of tech news.
MuhOo - Thursday, April 19, 2018 - link
You sir are right.
Aegan23 - Thursday, April 19, 2018 - link
You do know who Ian is, right? XD
sor - Thursday, April 19, 2018 - link
Anandtech has no responsibility to go out and ensure their results match up with anyone else's. They run their own selection of tests with their own build and report the numbers. They provide the test setup; if you can't spot the differences, that's your own issue.
Ryan Smith - Thursday, April 19, 2018 - link
"Anandtech has no responsibility to go out and ensure their results match up with anyone else’s"Responsibility? No. But should we anyhow? Yes.
Our responsibility is accuracy. If something looks weird with our data - which it does right now - then it's our job to go back, validate, and explain the results that we're seeing. If our results disagree with other sites, then that is definitely an indication that we may have a data issue.
xidex2 - Thursday, April 19, 2018 - link
I bet none of the other sites applied Spectre and Meltdown patches for Intel because they don't care about such things. Intel fanboys are now crying because someone actually showed true numbers.
RafaelHerschel - Thursday, April 19, 2018 - link
Hardware Unboxed certainly did. It's a bit odd that you automatically assume that only Anandtech knows how to test.
If they don't rerun the tests with up-to-date software then they are useless.
What I assume has happened here is some software, driver or firmware being "off" in the Anandtech review somehow.
RafaelHerschel - Thursday, April 19, 2018 - link
@Ryan Smith That is an excellent response.
krumme - Thursday, April 19, 2018 - link
AT runs benchmarks at JEDEC specs: 2666 for the 8700K, 2933 for the 2700X. That and the security patches.
Hifihedgehog - Thursday, April 19, 2018 - link
I was also totally misled. I came here first, only to find out, after having misled people online, that this site's results are completely off. I am a big AMD fan, but these results need to be audited and corrected.
Ryan Smith - Thursday, April 19, 2018 - link
"these results need to be audited and corrected."Validating right now.=)
stefanve - Thursday, April 19, 2018 - link
Clearly someone didn't apply his meltdown patch....
casperes1996 - Thursday, April 19, 2018 - link
Advice for the future:
Don't be a prick.
Ian isn't lying to you. He's sharing the data his benchmarking showed. It being different to other reviewers is something he'll gladly look into, and is in fact looking into, but you ought to show yourself as a respectful individual when you point it out, otherwise you won't be listened to.
MadManMark - Thursday, April 19, 2018 - link
Hear, hear!
bfoster68 - Thursday, April 19, 2018 - link
Chris113q,
I did one for you, since you seemed to be having issues.
If you read below, they use a different methodology for estimating fps vs what AnandTech did in their review. The result is nearly the same: solid gains for AMD on an incremental upgrade. Was that so hard?
https://www.tomshardware.com/reviews/amd-ryzen-7-2...
ComposingCoder - Thursday, April 19, 2018 - link
Just an FYI, they tested on different settings..... Toms Hardware, for example, used High on Civ VI vs the Ultra that was used here.
fallaha56 - Thursday, April 19, 2018 - link
Try techradar, who actually patched.
They too are showing massive Intel hits.
RafaelHerschel - Thursday, April 19, 2018 - link
Correct me if I'm wrong, but TechRadar seems to have tested only two games and provides minimal information on how they tested. Plus, Intel is still a bit faster in their tests.
fallaha56 - Thursday, April 19, 2018 - link
Look at the geekbench scores.
They also include 'before and after' Spectre2 patches for Intel.
The reliance of Intel on prefetch is well-known and now it’s busted
Crazyeyeskillah - Friday, April 20, 2018 - link
AMD hardware crushes Intel on GEEKBENCH. You have to look at all tests together, and never focus on one test, unless that is the only thing you are buying your processor for, like gaming or video encoding.
sardaukar - Thursday, April 19, 2018 - link
There's no need to be a dick about it.
SkyBill40 - Thursday, April 19, 2018 - link
Burden of proof fallacy? ACTIVATE!
xidex2 - Thursday, April 19, 2018 - link
So you are an Intel engineer now, or what? How do you know what impact those patches have on Intel CPUs? Get a grip and delete these childish comments.
RafaelHerschel - Thursday, April 19, 2018 - link
I'll add Hardware Unboxed on YouTube.
ACE76 - Thursday, April 19, 2018 - link
Anandtech isn't the only one to have come to this conclusion, bud.
bsp2020 - Thursday, April 19, 2018 - link
Was AMD's recently announced Spectre mitigation used in the testing? I'm sorry if it was mentioned in the article; it's long and I'm still in the process of reading.
I'm a big fan of AMD but want to make sure the comparison is apples to apples. BTW, does anyone have a link to a performance impact analysis of AMD's Spectre mitigation?
fallaha56 - Thursday, April 19, 2018 - link
Yep, X470 is microcode patched.
This article as it stands is Intel Fanboi stuff.
fallaha56 - Thursday, April 19, 2018 - link
As in the Toms article.
SaturnusDK - Thursday, April 19, 2018 - link
Maybe he didn't notice that the tests are at stock speeds?
DCide - Friday, April 20, 2018 - link
I can't find any other site using a BIOS as recent as the 0508 version you used (on the ASUS Crosshair VII Hero). Most sites are using older versions. These days, BIOS updates surrounding processor launches make significant performance differences. We've seen this with every Intel and AMD CPU launch since the original Ryzen.
Shaheen Misra - Sunday, April 22, 2018 - link
Hi, I'm looking to gain some insight into your testing methods. Could you please explain why you test at such high graphics settings? I'm sure you have previously stated the reasons, but I am not familiar with them. My understanding has always been that this creates a graphics bottleneck?
When you consider that people want to see benchmark results the way THEY would play the games or do work, it makes sense to focus on that sort of thing. Who plays at a 720p resolution? Yes, it may show CPU performance, or eliminate the GPU being the limiting factor, but if you have a Geforce 1080 GTX, then 1080p, 1440, and 4K performance is what people will actually game at.
The ability to actually run video cards at or near their ability is also important, which can be a platform issue. If you see every CPU showing the same numbers with the same video card, then yeah, it makes sense to go for the lower settings/resolutions, but since there ARE differences between the processors, running these tests the way they are makes more sense from a "these are similar to what people will see in the real world" perspective.
FlashYoshi - Thursday, April 19, 2018 - link
Intel CPUs were tested with Meltdown/Spectre patches; that's probably the discrepancy you're seeing.
MuhOo - Thursday, April 19, 2018 - link
Computerbase and pcgameshardware also used the patches... every other site has completely different results from anandtech.
sor - Thursday, April 19, 2018 - link
Fwiw I took five minutes to see what you guys are talking about. To me it looks like Toms is screwed up. If you look at the time graphs it looks to me like it's the purple line on top most of the time, but the summaries have that CPU in 3rd or 4th place. E.g. https://img.purch.com/r/711x457/aHR0cDovL21lZGlhLm...
At any rate things are generally damn close, and they largely aren't even benchmarking the same games, so I don't understand why a few people are complaining.
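For what it's worth, those summary numbers are typically percentile cuts of the frame-time log, along these lines (an illustrative sketch, not any particular site's actual pipeline):

    # Sketch: common summary metrics from a frame-time log (ms).
    # Sample values are invented; real logs have thousands of frames.
    import statistics

    frame_times_ms = [13.0, 13.4, 12.9, 14.1, 28.0, 13.2, 13.6, 31.5, 13.1, 13.3]

    avg_fps = 1000.0 / statistics.mean(frame_times_ms)
    p99_ms = statistics.quantiles(frame_times_ms, n=100)[98]  # 99th-percentile frame time
    print(f"average: {avg_fps:.0f} fps, 99th percentile: {1000.0 / p99_ms:.0f} fps")

Two sites can run the same game and still rank chips differently just by how they cut and summarize these logs, which is one reason the summary bars and the time graphs can seem to disagree.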
spdragoo - Thursday, April 19, 2018 - link
Per Tom's Hardware (https://www.tomshardware.com/reviews/amd-ryzen-7-2...):
"Our test rigs now include Meltdown And Spectre Variant 1 mitigations. Spectre Variant 2 requires both motherboard firmware/microcode and operating system patches. We have installed the operating system patches for Variant 2.
Today's performance measurements do not include Intel's motherboard firmware mitigations for Spectre Variant 2 though, as we've been waiting for AMD patches to level the playing field. Last week, AMD announced that it’s making the mitigations available to motherboard vendors and OEMs, which the company says should take time to appear in the wild. We checked MSI's website for firmware updates applicable to our X370 platforms when AMD made its announcement, but no new BIOSes were available (and still aren't).
Unfortunately, we were only made aware that Variant 2 mitigations are present in our X470 board's firmware just before launch, precluding us from re-testing the Intel platforms with patches applied. We're working on this now, and plan to post updated results in future reviews.
The lack of Spectre Variant 2 patches in our Intel results likely give the Core CPUs a slight advantage over AMD's patched platforms. But the performance difference should be minimal with modern processors."
For those that are TL;DR in their viewpoint: unlike Anandtech, TH did NOT include all of the Spectre/Meltdown patches, & even said that there might be differences in their test results.
Chris113q - Thursday, April 19, 2018 - link
Other reviewers also had their setups Meltdown/Spectre patched, and it's already been confirmed that these patches don't greatly impact gaming performance at all.
It's clear that Anandtech's results are wrong here. I have read 12 other reviews and most of their results differ from the ones you got. You'd have to be delusional to take just 1 review as the absolute truth.
Ninjawithagun - Thursday, April 19, 2018 - link
Incorrect. Those reviews were conducted back in January 2018 (look at the review dates). Microsoft issued new patches for Meltdown and Spectre earlier this month (April 2018). I could find no other performance review showing the performance gain/loss for Intel CPUs based upon the new patches other than the one posted now by AnandTech.
Ninjawithagun - Thursday, April 19, 2018 - link
The only way to know for sure is for each hardware reviewer to provide the exact version of Windows 10 they used for testing. This will prove whether or not they ran benchmarks with the most current Windows updates/patches.
Intel999 - Thursday, April 19, 2018 - link
It is plausible that many reviewers were lazy and carried over data from earlier reviews on Intel and 1000-series Ryzen CPUs.
Thank you Anandtech for doing a genuinely unbiased review that required a great deal of extra work compared to others.
5080 - Thursday, April 19, 2018 - link
And don't forget BIOS patches as well. If you have a fully patched system the impact is even bigger than just updating with the Windows KB patches.
sor - Thursday, April 19, 2018 - link
Looking at Tom's results, they have OC'd Intels in first place. Other than that it's damn close. Is there a chance you're just browsing graphs to see who is in the top spot and not really comprehending the results?
Aside from that, the test setups and even the benchmarks used are different. You owe Ian an apology for not realizing you're comparing OC results to his.
Silma - Thursday, April 19, 2018 - link
Yes. Ian is a top reviewer. At worst he made a mistake in his evaluations. It happens to the best of us.
However, I have an issue with the non-OC tests. It seems to me people will purchase overclockable processors and graphics cards to overclock them. At least game results should probably be based on OC benchmarks.
pogostick - Thursday, April 19, 2018 - link
@Silma No, it makes more sense to do it this way. Everyone who buys these processors is guaranteed to have a part that will run at the manufacturer spec. OC is a random lottery.
ACE76 - Thursday, April 19, 2018 - link
Wrong... the majority of even gamers DON'T overclock... that is relegated to a niche market of enthusiasts.
Luckz - Monday, April 23, 2018 - link
PB2/XFR2 seems to be all the overclocking anyone would want to do on Ryzen 2xxx (besides LN2 and other non-sustainable things).
werpu - Friday, April 20, 2018 - link
Ahem, those initial results were Meltdown-only and from January; there have been a boatload of fixes since then on both the Meltdown and Spectre side. So that data is not correct anymore. Even in January, VMs and everything I/O intensive already encountered a serious performance hit.
Crazyeyeskillah - Friday, April 20, 2018 - link
Those reviews haven't rerun the Intel chip parts, hence the dated data.
Azix - Thursday, April 19, 2018 - link
They used a 1080 in these tests. Maybe that's a factor.
5080 - Thursday, April 19, 2018 - link
I think what you're seeing with the other reviews is old database information being used without the Spectre and Meltdown patches. They only say that Ryzen+ was tested with the latest patches, but it doesn't say that they retested all the Intel systems with the BIOS fix and patches applied.
Ryan Smith - Thursday, April 19, 2018 - link
All of our Intel systems were re-run with the full Smeltdown fixes for this review.
wicketr - Thursday, April 19, 2018 - link
It'd be interesting to have an article running all these tests pre- and post-patches to show how much they affect the system. There seems to be a lot of confusion about how bad it is.
Ryan Smith - Thursday, April 19, 2018 - link
It's definitely something we're intending to mine from the data later, after we're over this launch hump.
5080 - Thursday, April 19, 2018 - link
That's what I thought and what Chris113q doesn't realize.
hescominsoon - Thursday, April 19, 2018 - link
Chris,
You need to take into account the latest system/BIOS patches for Meltdown/Spectre as well. Anandtech is not manipulating the results. Claiming they get "different" results from "everybody else" (especially when you fail to cite the differences) strains your credibility, not theirs.
eek2121 - Thursday, April 19, 2018 - link
Their benchmarks are garbage? You are welcome to buy a 2700X and test for yourself. The benchmarks they used are, for the most part, built into each game. It coincides pretty much with what I know of Ryzen, Coffee Lake, and Ryzen 2xxx.
AndersFlint - Thursday, April 19, 2018 - link
While out of respect for the reviewer's hard work I wouldn't describe the results as "garbage", they certainly don't match up with results from other publications.
ACE76 - Thursday, April 19, 2018 - link
Yes, Anandtech's are honest and objective... I believe Tech Radar was comparing Coffee Lake OC'd to 5.2GHz vs the Ryzen 2700X at 4.1GHz... the stock turbo alone hits 4.3GHz... they are slanting to benefit Intel... a 5.2GHz stable overclock on Coffee Lake is very hard to achieve, and maybe 10-15% of CPUs can do it.
Luckz - Monday, April 23, 2018 - link
I haven't really heard of anyone unable to reach 5 GHz.
SkyBill40 - Thursday, April 19, 2018 - link
Well, golly gee... did the other reviewers use the *exact* setup as used here? No? Hmm... I guess that then makes your grouchy mcgrouchface missive not worth consideration then, no? If anyone is to not be taken seriously here, it's you.
Typical ad hominem and burden of proof fallacies. Well done, Chris113q.
Flying Aardvark - Thursday, April 19, 2018 - link
WRONG. AT has it right; these are properly patched systems. Heavy IO perf loss with Intel Meltdown patches has been well known for months. See top comment here. https://np.reddit.com/r/pcmasterrace/comments/7obo...
Prove your claim that the data is incorrect or misleading in any way whatsoever, child.
RafaelHerschel - Thursday, April 19, 2018 - link
One of the problems is that other reviewers see a less pronounced difference between the new AMD Ryzen CPUs and the older ones. Most reviewers claim that they have tested with all available patches in place.
Your conclusion that AT has it right is based on what? Your belief that AT can't make mistakes? Maybe there is a logical explanation, but for now, it seems that AT might have done something wrong.
Flying Aardvark - Friday, April 20, 2018 - link
I have evidence to back up my claim; users with no motivation to mislead agree with AT, and did months ago. You have no evidence, simply butthurt. Good luck.
boozed - Thursday, April 19, 2018 - link
Let's ask a total jerk from the internet what he thinks.
aliquis - Thursday, April 19, 2018 - link
They definitely used slower memory; don't know if that's the thing. Don't know what fps others get in the same games and settings. Otherwise maybe it's ASUS doing special tricks like with MCE before, or having better memory timings, or using some trick to get something similar to Precision Boost Overdrive already. Or a software mistake.
Sweclockers is the best for game performance. They do 720p medium, so the GPU limits will be smallest there.
danjw - Friday, April 20, 2018 - link
Anandtech used the rated speeds that the manufacturers state the processors support. Anandtech is running everything at stock. Anandtech ran all the processors through fully patched systems (both BIOS and OS). Not every other website tests to this same methodology, so there will be differences in their results. Nonetheless, Anandtech is auditing their results to double-check them. I really don't think they are going to find anything wrong. Toms ran their Intel parts without the latest BIOS updates. Others overclocked their systems.
Most users do not overclock their systems. Sure, a lot of us readers do, but not everyone. I overclock my systems, but my two brothers, who are both just as technical as I am, do not. It is a choice some make and others do not. The majority of users do not overclock, so Anandtech does not overclock in most reviews. They have at times in the past, and may in the future, included overclocking results in reviews, but they have always broken the overclocking results out into a separate section and/or labeled them to differentiate them from the stock-clocked results. These are editorial choices that Anandtech makes; I don't see any problem with that.
Luckz - Monday, April 23, 2018 - link
Intel for some reason has 4 memory sticks. Weird idea.
werpu - Friday, April 20, 2018 - link
Well, the main difference is that they tested fully Meltdown- and Spectre-patched systems, which in fact is the norm, while all the other reviewers simply tested unpatched systems. It is known that Intel took a pretty serious hit, especially from Meltdown, and a more serious hit from Spectre, compared to AMD, which did not have Meltdown at all and is affected by Spectre to a lesser degree than Intel.
I would say Anandtech's tests are spot on.
And this reflects the sad state of today's performance testing, which seems to be done 99.9% by incompetent idiots or fanboys (the youtubers especially are the worst).
However, in extreme situations Intel again wins, since the 8700K can be OC'd to 5GHz with delidding and good cooling, while the OC capabilities of the 2700X are basically non-existent. It really depends which is better. But the performance gap is closing, and in non-OC'd systems it no longer exists. It will be interesting next year when AMD has moved to 7nm while Intel is still stuck at 10nm, which they are currently trying to pull off but have not yet managed. Then the game might be entirely reversed.
Alphasoldier - Friday, April 20, 2018 - link
Unfortunately, you are the only idiot and fanboy here. Pretty much everyone stated in their reviews that the systems were fully patched, all CPUs were reused and everything was retested, because AMD fanboys were screaming Meltdown here, Spectre there.
Now the internet is full of this garbage review, and it spreads like cancer, because AMD fanboys have nothing better to do; once again they are disappointed that 6 cores from Intel outperformed 8 cores from AMD, and they are now like Liverpool fans repeating "Next year will be ours".
But at least they got some fancy RGB cooler.
Fallen Kell - Friday, April 20, 2018 - link
Alphasoldier, I've been reading the reviews, and while many have stated they applied the software (OS) patches, very few have stated they applied both the software and BIOS patches for Spectre Variant 2. The few places that I have seen which stated both the software and BIOS patches were applied all seem to be showing results much more similar to the AT article.
In any case, Ryan stated they are looking into it, and I am certain we will see an update within the next few days. And don't come saying that I am an AMD fanboi; I haven't purchased an AMD CPU since the Thunderbird (i.e. a Slot A CPU).
mapesdhs - Saturday, April 21, 2018 - link
werpu, oc an 8700K to 5GHz? Makes me laugh that a 300MHz bump over a CPU's max single-core turbo is even called an oc these days. Sheesh, it's a far cry from the days of SB; oc hardly seems worth bothering with now.
mkaibear - Thursday, April 19, 2018 - link
It's here, it's here!
Dr. Swag - Thursday, April 19, 2018 - link
What is with the gaming benchmarks? In your tests the whole Ryzen 2 series is a step above everything else, but all other reviews show it between Ryzen and Coffee Lake...
fallaha56 - Thursday, April 19, 2018 - link
This is the Spectre2 patch effect.
Not looking great for Intel and HFR gaming.
Ryan Smith - Thursday, April 19, 2018 - link
"What is with the gaming benchmarks?"We're looking into it right now. Some of these results weren't in until very recently, so we're going back and doing some additional validation and logging to see if we can get to the bottom of this.
techguymaxc - Thursday, April 19, 2018 - link
Either you don't have a fast enough GPU to remove the GPU bottleneck or there's something wrong with your data, because there is NO chance Ryzen is faster than *lake in GTA V, with lower IPC and clocks.
Don't get me wrong, Ryzen 2 looks like a good product family and I wouldn't discourage anyone from buying.
SaturnusDK - Thursday, April 19, 2018 - link
Like everyone else, you're misreading the results. Tests were done at stock speeds with no overclocking.
Yes there is.
Stock CPU and RAM speeds. Fully Spectre/Meltdown patched on both sides. Who is re-using old results? This review re-uses old results for the older-generation Ryzen, and so some of the performance boost could be false (new drivers, OS patches, firmware, BIOS....).
More investigation is needed on all sides. Many other review sites are significantly more lazy than AT and are likely recycling old results for the Intel side.
As for your GPU bottleneck.... um, no. Look at the results: as the resolution goes up, THEN you get GPU-bottlenecked and all CPUs look the same. At low resolutions it is clearly not GPU-bottlenecked, as there is a big FPS difference by CPU.
jaydee - Thursday, April 19, 2018 - link
Great review. Curious to see how things scale down for a 35W TDP part compared to Intel's latest 35W TDP CPUs.
SaturnusDK - Thursday, April 19, 2018 - link
Gamers Nexus have tested the 2700X to work at 1.175V locked to 4.1GHz, where it consumes 129W, compared to stock frequency and stock voltage where it consumes 200W. Performance is generally the same on average.
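The first-order math behind that checks out: dynamic power scales roughly with V²·f. A quick sketch (the 1.40 V stock voltage below is an assumed figure for illustration, not a Gamers Nexus measurement):

    # Sketch: first-order dynamic power scaling, P2 ~= P1 * (V2/V1)^2 * (f2/f1).
    # The 1.40 V stock voltage is an assumption for illustration.
    def scaled_power(p1_watts, v1, f1_ghz, v2, f2_ghz):
        return p1_watts * (v2 / v1) ** 2 * (f2_ghz / f1_ghz)

    est = scaled_power(200, 1.40, 4.3, 1.175, 4.1)
    print(f"estimated: {est:.0f} W")  # ~134 W, in the ballpark of the reported 129 W

Static leakage and boost behaviour mean the real number won't match exactly, but it shows why a modest undervolt saves so much power.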
Flunk - Thursday, April 19, 2018 - link
Wow, that single-thread performance delta sure has shrunk, hasn't it? Between Meltdown and the higher core clocks on Zen+.
mapesdhs - Saturday, April 21, 2018 - link
Wonder whether it won't be that much longer until AMD launches something which actually beats Intel in IPC. Atm people keep saying Intel wins on IPC, but it's only because Intel has punched its clock rates through the roof (it's like the old P4 days again), something they could have done years ago but never bothered because there was no competition, just as they could have released a consumer 8-core long ago but didn't (the 3930K was a crippled 8-core, but back then AMD couldn't even beat mainstream SB, never mind SB-E).
mkaibear - Monday, April 23, 2018 - link
You know IPC is "instructions per clock", yeah? So saying Intel wins on IPC because their clock rate is faster doesn't make sense; it's like saying UK cars have a higher mpg than US cars because their gallons are bigger.
Intel wins (won?) on IPC because they executed more instructions per MHz of the clock rate. When you couple that with a faster clock rate you get a double whammy of performance. It does appear that AMD has almost closed the gap on IPC but is still not operating at as high a clock rate.
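Put as a formula, single-thread performance is roughly IPC × frequency, which is why the two factors can offset each other. A toy example (the IPC and clock figures are invented for illustration, not measured values):

    # Sketch: single-thread performance ~= IPC * frequency.
    # All figures below are invented to show the tradeoff.
    def relative_perf(ipc, ghz):
        return ipc * ghz

    chip_a = relative_perf(ipc=1.00, ghz=4.7)  # baseline IPC, higher clock
    chip_b = relative_perf(ipc=1.08, ghz=4.3)  # ~8% better IPC, lower clock
    print(f"{chip_a:.2f} vs {chip_b:.2f}")     # 4.70 vs 4.64 -- nearly a wash

So a chip can lose on IPC yet still win single-thread purely on clocks, which is exactly the distinction being made here.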
Targon - Monday, April 23, 2018 - link
This is why many are looking forward to Zen 2 in 2019, which will have true design improvements compared to Zen and Zen+. Zen+ is a small and incremental improvement over Zen(first generation Ryzen chips). Combined with 7nm, we may very well see AMD get very close to Intel clock speeds while having very similar, if not better IPC and a higher core count.MajGenRelativity - Thursday, April 19, 2018 - link
Looks like a good review. Glad to see AMD closing the performance gap even further!SaturnusDK - Thursday, April 19, 2018 - link
Surely you mean widening the performance gap. It was already ahead in professional and multi-threaded workloads. Now it's miles ahead.MajGenRelativity - Thursday, April 19, 2018 - link
I was referring to single-threaded performance. As for multi-threaded workloads, you are rightfallaha56 - Thursday, April 19, 2018 - link
Like which ones?After the Spectre2 patch the Intel scores have been hammered...
And no doubt with proper default settings on MCE as well
jjj - Thursday, April 19, 2018 - link
Very odd choice to only include the Intels with high clocks in the charts, it's like you wanted to put all Intels at top in ST results, make it look better than it is.Luckz - Monday, April 23, 2018 - link
I'm afraid there are physical space limits regarding how much hardware Ian can fit in his domain. It's been popular to recycle scores from previous tests among sites, but after "Smeltdown" (and with Nvidia drivers being all over the place) it doesn't work that way right now. In an ideal world you'd compare five or ten different setups, sure.But then you'd not just want 8400 @ B360 but also 8700k OC, 2600k OC, 4770k OC, etc...
T1beriu - Thursday, April 19, 2018 - link
Typo: "Cycling back to that Cinebench R15 nT result that showed a 122% gain". I think the gain is just 22%.SirCanealot - Thursday, April 19, 2018 - link
Wow. I'm actually excited to read a review for the first time in a long time! Fantastic review as usual!I'm still sitting on my 3770k @ 4-4-4.7ghz and I'm likely to try delidding for fun and see if I can push it any more. But this review makes me excited to look forward to perhaps building a Ryzen 2/3 (whatever the heck they name it) this time next year!
AMD has caught up to Intel another vital few paces here! If Intel sits on their butts again next year and AMD can do the same thing next year, this is going to get very, very interesting :)
SmCaudata - Thursday, April 19, 2018 - link
I'm sitting on a 2500k. The geek in me wants to upgrade, but I've really no need until Cyberpunk finally releases. Maybe Zen 5 by that time.
Lolimaster - Thursday, April 19, 2018 - link
You can upgrade to the new 400-series mobos; they will be compatible with any Ryzen released till 2020.
Luckz - Monday, April 23, 2018 - link
Four times the threads though, four times.
Depends on what you do, of course :)
ACE76 - Thursday, April 19, 2018 - link
AMD is going to unleash some serious tech next year with Ryzen 2 on 7nm... this is just a refresh of the original Ryzen... the real deal will be next April/May... Intel is in for a rough ride.
tmiller02 - Thursday, April 19, 2018 - link
The patches are the difference... which everyone should apply on Intel machines... the fact is they came with a performance hit! AMD is now leading the pack... security over performance any day!
Dr. Swag - Thursday, April 19, 2018 - link
I've seen other sites do their tests with the patches. I'm just not certain if they went back and retested older CPUs or not...
Lolimaster - Thursday, April 19, 2018 - link
You can be sure most didn't.
fallaha56 - Thursday, April 19, 2018 - link
Most didn't patch.
Look at techradar, who did - they too are showing massive losses for Intel.
Luckz - Monday, April 23, 2018 - link
Techradar's review is some form of manipulation. They don't even show the test system specs for the comparison scores. In their 8700K review they wrote the CPU hit 76° for them at stock and 87° OC; in the 2700X review they wrote that the 8700K only went up to 52° (!!!). That CPU literally had its handbrake™ pulled.
Ranger1065 - Thursday, April 19, 2018 - link
Thank you for an informative and timely review.
T1beriu - Thursday, April 19, 2018 - link
What workload was used for the per-core power consumption tables?
Ian Cutress - Thursday, April 19, 2018 - link
Prime95
Luckz - Monday, April 23, 2018 - link
Assuming that means the non-AVX version; with that tool it makes sense to clarify.
DisoRDeR4 - Thursday, April 19, 2018 - link
Thanks for the review, but I noticed a minor error -- your AMD Ryzen Cache Clocks graph on the 3rd page shows data for the 2700X, but in the preceding text it is referred to as the 2800X.
IGTrading - Thursday, April 19, 2018 - link
AMD wins all gaming benchmarks, hands down, and does this at a real 105W TDP.
In my opinion, it is not fair to say that Intel "wins" the single-threaded scenarios as long as we see clearly that the 8700 and the 8700K have the "multi-core enhancement" activated and the motherboard allows them to draw 120W on a regular basis, like your own graphs show.
Allow AMD's Ryzen to draw 120W max and auto-overclock, and only then would we have a fair comparison.
In the end, I guess that all those that bought the 7700K and the 8700K "for gaming" are now very pissed off.
The former have a 100% dead/un-upgradeable platform, while the latter spent a ton of money on a platform that was more expensive, consumes more power, and will surely be rendered un-upgradeable soon by Intel :) while AMD has already rendered it obsolete (from the "best of the best" POV); or at least the Z370+8700K is now the clear second-best in 99% of the tests @ the same power consumption, while losing all price/performance comparisons.
IMHO ... allowing the 8700 & 8700K to draw 120W instead of 65W / 95W and allowing auto-overclocking while the AMD Ryzen is not tested with equivalent settings is maybe the only thing that needs to be improved with regards to the fairness of this review.
Thank you for your work Ian!
Luckz - Monday, April 23, 2018 - link
The 2700X draws so much more than its fake on-paper TDP it's not funny. With XFR2 and PB2, of course.
PBO can add even more.
Ninjawithagun - Thursday, April 19, 2018 - link
Incorrect comparison. Why does every review keep making the same mistake?? It has nothing to do with price. Comparing like CPU architectures is the only logical course of action. 6-core/12-thread vs 8-core/16-thread makes no sense. Comparing the Intel 8700K 6-core/12-thread @ $347 to the AMD 2600X 6-core/12-thread @ $229.99 makes the most sense here. Once the proper math is done, AMD destroys Intel in performance vs. cost, especially when you game at any resolution higher than 1080p. The GPU becomes the bottleneck at that point, negating any IPC benefits of the Intel CPUs. I know this how? Simple. I also own an 8700K gaming PC ;-)
SmCaudata - Thursday, April 19, 2018 - link
I'd like to see more scatterplots with performance versus cost. Also, total cost (MB+CPU+cooler if needed) would be ideal. Even an overall average of 99th-percentile 4K scores in gaming (one chart) would be interesting.... hmmm, maybe a project for the afternoon.
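For anyone who wants to try that afternoon project, a minimal matplotlib sketch (every name and number below is a placeholder to show the plumbing, not review data):

    # Sketch: performance vs total platform cost. All figures are placeholders.
    import matplotlib.pyplot as plt

    parts = {
        # name: (cpu + mobo + cooler in USD, performance score)
        "Chip A": (229 + 120 + 0, 152),   # boxed cooler included
        "Chip B": (329 + 120 + 0, 176),
        "Chip C": (347 + 130 + 35, 168),  # needs a separate cooler
    }

    for name, (cost, score) in parts.items():
        plt.scatter(cost, score)
        plt.annotate(name, (cost, score))

    plt.xlabel("Total platform cost (USD)")
    plt.ylabel("Performance score")
    plt.show()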
Luckz - Monday, April 23, 2018 - link
The English-language version of the Tomshardware review has a million plots on the last page (14). 4K is completely irrelevant for plotting, though, since you're GPU-limited there.
Krysto - Thursday, April 19, 2018 - link
Wrong. Performance at a given price level is absolutely a metric chip buyers care about - if not the MOST important metric.
People usually think "Okay, I have this $300 budget for a CPU, which is the best CPU I can get for that money?" It's irrelevant whether one has 4 cores or 8 cores or 16 cores. They will get the best CPU for the money, regardless of cores and threads.
Comparing core vs core or thread vs thread is just a synthetic and academic comparison. People don't actually buy based on that kind of thinking. If the X chip has 15% better gaming performance than the Y chip for the same amount of money, they'll get the X chip, regardless of cores, threads, caches, and whatnot.
Ninjawithagun - Thursday, April 19, 2018 - link
Incorrect. Cost vs. cost is only one of many factors to consider, but is not a main one, especially if the competition has a processor of equal quality for much less cost. Comparing an Intel 6-core/12-thread CPU to an AMD 8-core/16-thread CPU makes absolutely no sense if you are measuring cost vs. performance. Your argument makes no sense, sorry.
fallaha56 - Thursday, April 19, 2018 - link
Ok, by your rationale we should compare Threadripper to the 8700K too.
Ninjawithagun - Thursday, April 19, 2018 - link
Now you are just being stupid.
fallaha56 - Thursday, April 19, 2018 - link
I'm not - I'm just showing how stupid your OP was.
If someone is selling an entry-level chip for the same price as someone else's, that's the comparison.
Include the platform costs if you like, but that's what matters - bang for buck.
Only for 0.1% of people does performance at any cost matter.
Ninjawithagun - Thursday, April 19, 2018 - link
Actually no. Once again proving you do not know how to count to 8.
LurkingSince97 - Thursday, April 19, 2018 - link
Um... NO.
Sure, in some cases it is possible to compare two processors of 'equal quality' and then look at cost second.
But that is an impossible task in a review. And for some processors it is impossible for anyone.
This is impossible because there is no such thing as an 'equal quality chip'. Subjectively, I might be able to find two chips that I think are roughly equal, then compare price. But this is subjective -- depending on what my needs are.
Price is objective. We can compare two system builds at nearly equal cost directly, then see what is better. Comparing 'roughly equal' chips first starts out in the wrong place for most consumers. Only those that are not very price sensitive do that -- get the 'best' for what they want, and if there are two equal things use price as a tie breaker. Most people are looking for the best they can get for a price, rather than the lower price for what they want.
Now, to make it worse, by your reasoning the 2700X can not be compared to anything, because the core counts differ. Bull$#17. I could just as easily say that the 8700K can not be compared to the 2600X because it can overclock to 5Ghz, so they are not technically the same.
There is absolutely reason to compare 8C/16T products to 6C/12T to 4C/8T products -- BECAUSE PEOPLE HAVE TO PICK ONE TO BUY.
LurkingSince97 - Thursday, April 19, 2018 - link
Incorrect. Q.E.D.
bji - Thursday, April 19, 2018 - link
You are completely wrong, and Krysto is correct. Performance per dollar is the metric of greatest relevance for the vast majority of users and thus is the most useful metric to use in reviews.
mapesdhs - Saturday, April 21, 2018 - link
Maybe Ninjawithagun is just crazy rich and doesn't care about price. :)
Targon - Monday, April 23, 2018 - link
Performance per dollar for the workload you care about is what matters. Game performance doesn't matter much in business, but being able to do the day-to-day work as quickly as possible does. That may mean lower core counts with high clock speeds are more important, or higher core counts will beat out most other things (16+ cores at 1.5GHz might beat out 4 cores at 5GHz). It all depends.
Ryan Smith - Thursday, April 19, 2018 - link
"Why does every review keep making the same mistake?? It has nothing to do with price. Comparing like CPU architectures is the only logical course of action."To abuse an already overused meme here, why not both? This is why we have the data for all of these parts.
Our focus is on price comparisons, because at the end of the day most readers will make their buying decisions around a given budget. But there is also plenty here looking into IPC and other architectural elements.
Cooe - Thursday, April 19, 2018 - link
Lol, don't feed him Ryan! As one of our so gracious and glorious overlords, it pains me to see you get into the mud with that dingus. Leave that to us nobodies :).
Ninjawithagun - Thursday, April 19, 2018 - link
Ignorance is your bliss.
mapesdhs - Saturday, April 21, 2018 - link
Not an argument.
0ldman79 - Thursday, April 19, 2018 - link
In the real world we have to choose depending on features and performance while constrained by a budget.
For intellectual discussion and better understanding of the chips and architecture we need direct comparison.
Both arguments work, for entirely different reasons. I rarely have the budget for high-end Intel. I'm also into overclocking and run VMs, so the only way I hit both of those is to run AMD.
I've also got a few apps that really take advantage of AVX2 and AVX512, where even Ryzen gets monstrously stomped by Intel.
If you judge by a single metric you're missing the big picture. Everything is a compromise.
Ninjawithagun - Thursday, April 19, 2018 - link
Actually, the comparison between the 2600X (not 2700X) and the 8700K is based upon multiple metrics, not just one.
Ninjawithagun - Thursday, April 19, 2018 - link
Once again incorrect. Cost vs. cost is only one of many factors to consider, but is not a main one, especially if the competition has a processor of equal quality for much less cost. Comparing an Intel 6-core/12-thread CPU to an AMD 8-core/16-thread CPU makes absolutely no sense if you are measuring cost vs. performance. Your argument makes no sense, sorry.
LurkingSince97 - Thursday, April 19, 2018 - link
Once again incorrect. Cost vs. cost is the primary factor for a buyer on a budget. It is the main one.
Case in point: if I can get a 2600X for the same price as a much slower Intel chip, it is obviously better.
Comparing a $300+ chip to a $200+ one makes absolutely no sense if you are measuring cost vs. performance. Your argument makes no sense, sorry.
See what I did there? Your argument (and the one above) are BS. You are either a troll, or have a serious intellectual disability. Price, performance, and implementation details (core count) are all independent dimensions and you can look at any of them from the perspective of the other.
Price just happens to be the constraint that most shoppers have to start with. They can vary the other parameters, within the price constraint.
Others with more money might instead lock in a performance/feature-set requirement and _then_ consider price, but that is the minority.
fallaha56 - Thursday, April 19, 2018 - link
Well said.
I suggested the chap apply his own facile argument and compare Threadripper to the 8700K...
gglaw - Saturday, April 21, 2018 - link
They compared multiple "qualities" of processors between two Ryzen generations and CL. If you want to look at them core for core, is it that hard to shift your eyes 3 lines up to see the next line of results? Do you want them to exclude the 2700X since there isn't a consumer-level CL to match it?
LurkingSince97 - Thursday, April 19, 2018 - link
Price and absolute performance are paramount. Comparing at raw architecture levels is interesting but less important.
In the real world, there are consumers who are not that price sensitive, in which case they only care about a top-end part that is within their range. They don't care if it is 10-core/20-thread vs 8-core/16-thread or 6-core/12-thread -- they care about the raw performance for what they need, and are usually willing to go up in cost somewhat for that performance (including mobo/ram costs). This is the sort of consumer I am today.
There are then others who are price sensitive and have a budget. For these people the price tag is paramount. The flaw with this review (and most in general) is that it does not include mobo / ram / etc costs and often just looks at the CPU price alone. For someone budget conscious they have to carefully consider whether saving $100 on a CPU or $50 on a mobo can give them the ability to spend that on say, a better GPU or nicer monitor. For those, comparing products by price point is way more important than comparing them by architecture. This is the sort of consumer I was when I was a poor college student / gamer that had to part together my own systems with very limited budgets.
As a tech geek, I am always interested in the core-for-core or clock-for-clock comparison, but in the real world for purchasing decisions it doesn't matter if a Ryzen with 6 cores/12 threads at 3Ghz is faster or slower than an Intel chip with 6 cores/12 threads at 3Ghz. In the end, they can have different core counts, threads, and Ghz -- all that matters is the actual performance.
Targon - Monday, April 23, 2018 - link
In the case of Ryzen, you can use the same motherboard from the first generation to the second, or the third, or the fourth (in 2020). You may not get all the features, but they will work, and CPU cost is the only thing needed since you already have the other components.
Actual performance is the correct focus, but game performance isn't the same as rendering performance, or for those who tend to have 8+ programs open as a part of their normal work environment. Just saying "performance" ignores that what you use your computer for isn't necessarily the same as what other people use their computer(s) for.
Targon - Monday, April 23, 2018 - link
That is why they use different game benchmarks. Some do make use of more cores/threads, and others make use of other design differences between different products. Price vs. performance is a very valid comparison based on workload, not just games, but in other tasks. You could have higher core count processors with lower clock speeds at the same price point, even when looking at Intel. 6-core lower speed, or 4-core higher speed at the same price point. Which does better for the tasks you personally care about? Intel 8700K vs. AMD 2700X is the fair comparison, while you will compare the 2600 to the i5, again, due to the price point. When you look at the performance results, you SHOULD in theory, see that these chips match up in terms of performance for the price, though AMD tends to have an advantage these days in multi-threaded tasks, while Intel tends to do better in lightly threaded workloads due to clock speeds.bryanlarsen - Thursday, April 19, 2018 - link
Just because transistors can be 15% smaller, doesn't mean that they have to be. Every IC design includes transistors of many different sizes. GF is saying that the minimum transistor size is 15% smaller than the previous minimum transistor size. And it seems that AMD chose not to use them, selecting to use a larger, higher-performance transistor instead that happens to be the same size as their previous transistor.
bryanlarsen - Thursday, April 19, 2018 - link
And you confirm that in the next paragraph: "AMD confirmed that they are using 9T transistor libraries, also the same as the previous generation, although GlobalFoundries offers a 7.5T design as well." So please delete your very misleading transistor diagram and accompanying text.
danjw - Friday, April 20, 2018 - link
I think you are misreading that part of the article. AMD shrunk the size of the processor blocks, giving them more "dark silicon" between the blocks. This allowed better thermal isolation between blocks, thus higher clocks.
The Hardcard - Thursday, April 19, 2018 - link
“Cache Me Ousside, How Bow Dah?“
Very low-hanging fruit, yet still so delicious.
msroadkill612 - Thursday, April 19, 2018 - link
"Intel is expected to have a frequency and IPC advantageAMD’s counter is to come close on frequency and offer more cores at the same price
It is easy for AMD to wave the multi-threaded crown with its internal testing, however the single thread performance is still a little behind."
If so, why is it given such emphasis? It's increasingly a corner-case benefit as game devs begin to use the new mainstream multi-core platforms. Oh so recently, the norm was probably 2 cores, so that's what they coded for - THEN.
This minor advantage, compared to Intel getting absolutely smashed on increasingly multi-threaded apps at any price point, is rarely mentioned in proximity, where it deserves to be in a balanced analysis.
Ratman6161 - Thursday, April 19, 2018 - link
"its increasingly a corner xase benefit as game devs begin to use the new mainstream multi core platforms" As I often do, I'd like to remind people that not all readers of this article are gamers or give a darn about games. I am one of those i.e. game performance is meaningless to me.0ldman79 - Thursday, April 19, 2018 - link
Agreed.
I am a gamer, but the gaming benchmarks are nearly irrelevant at this point.
Almost every CPU (ignoring Atom) can easily feed a modern video card and keep the framerate above 60fps. I'm running an FX 6300 and I still run everything at 1080p with a GTX 970 and hardly ever see a framerate drop.
Gaming benches are somewhat less important than in days gone by. Everything on the market hits the minimum requirement and then some. It's primarily fuel for the fanboys: "OMG!!! AMD sucks!!! Intel is faster at gaming!!!"
Well, considering Intel is running 200fps and AMD is hitting 175fps I'm *thinking* they're both playable.
Akkuma - Thursday, April 19, 2018 - link
Gaming + streaming benchmarks, as done by GamersNexus, are exactly the kind of relevant and important benchmarks more sites need to be doing. Those numbers you don't care about are much more important when you start trying to stream.
Your 60fps? That isn't even what most people who game care about, with high-refresh-rate monitors doing 144Hz+. Add in streaming, where you're taking a decent FPS hit, and that difference between 200 and 175 fps is all of a sudden the difference between maintaining 144Hz and not.
Vesperan - Thursday, April 19, 2018 - link
Yea but.. of all the people interested in gaming, those with high-refresh-rate monitors and/or streaming online is what - 10% of the market? Tops?
Sure, the GamersNexus reviews have relevance.. to that distinct minority of people out there. Condemning/praising CPU architectures for gaming in general due to these corner cases is nonsensical.
Like Oldman79 said, damn near any of these CPUs is fine for gaming - unless you happen to be one of the corner cases.
Akkuma - Friday, April 20, 2018 - link
You're pulling a number out of thin air and building an entire argument around a made-up number. 72% of Steam users have 1080p monitors. What percentage of those are high refresh rate is unknown, but 120Hz monitors have existed for at least 5 years now, maybe even longer. At this stage, arguing around 60fps is like arguing about the sound quality of cassettes today; we are long past it.
Vesperan - Sunday, April 22, 2018 - link
If by 'pulling a number out of thin air' you mean that I looked at the same Steam hardware survey as you did, and also a (year-old) TechReport survey (https://techreport.com/news/31542/poll-what-the-re... ) - then yes, I absolutely pulled a number out of thin air. I actually think 10% of the entire market as a max for x1080 resolution and high-refresh-rate monitors will be significantly too high, as the market will have a lot of old or cheap monitors out there.
The fact is, once you say Ryzen is perfectly fine for x1080 (at 60Hz) gaming and anything at or above x1440 because you're GPU-limited (and I'm not saying there is no difference - but is it significant enough?), the argument is no longer 'Ryzen is worse at gaming', but is instead 'Ryzen is just as good for gaming as its Intel counterparts, unless you have a high-refresh-rate x1080 monitor and a high-end graphics card.'
Which is a bloody corner case. It might be an important one to a bunch of people, but as I said - it is a distinct minority and it is nonsensical to condemn or praise a CPU architecture for gaming in general because of one corner case. The conclusion is too general and sweeping.
Targon - Monday, April 23, 2018 - link
This is where current benchmarks, other than the turn-length benchmark in Civ 6, are not doing enough to show where slowdowns come from. Framerates don't matter as much if the game adds complexity based on CPU processing capability. AI in games, for example, will benefit from additional CPU cores (when you don't use your maxed-out video card for AI, of course).
I agree that game framerates as the end-all, be-all that people look at is far too limited, and we do see other things, Cinebench for example, that help expand things, but it doesn't go far enough. I just know that I personally find anything below 8 cores will feel sluggish with the number of programs I tend to run at once.
GreenReaper - Wednesday, April 25, 2018 - link
Monitors in use do lag the market. All of my standalone monitors are over a decade old. My laptop and tablet are over five years old. Many people have 4K TVs, but rarely hook them up to their PC. It's hard to tell, of course, because some browsers don't fully communicate display capabilities, but 1920x1080 is a popular resolution with maybe 22.5% of the market on it (judging by the web stats of a large art website I run). Another ~13.5% is on 1366x768.
I think it's safe to say that only ~5% have larger than 1080p - 2560x1440 has kinda taken off with gamers, but even then it only has 3.5% in the Steam survey - and of course, this mainly counts new installations. 4K is closer to 0.3%.
Performance for resolutions not in use *now* may matter for a new CPU because you might well want to pair it with a new monitor and video card down the road. You're buying a future capability - maybe you don't need HEVC 10-bit 4K 60FPS decode now, but you might later. However, it could be a better bet to upgrade the CPU/GPU later, especially since we may see AV1 in use by then.
Buying capabilities for the future is more important for laptops and all-in-one boxes, since they're least likely to be upgradable - Thunderbolt and USB display solutions aside.
Bourinos - Friday, April 20, 2018 - link
Streaming at 144Hz? Are you mad???
Luckz - Monday, April 23, 2018 - link
Would be gaming at 144Hz while streaming 60Hz - unless, in Akkuma's fantasy world of 240Hz monitors, the majority of stream viewers would want 144Hz streams too ;)
Shaheen Misra - Sunday, April 22, 2018 - link
That's a great point. Every time I have upgraded it has been due to me not hitting 60fps. I have no interest in 144Hz/240Hz monitors. Had a Q9400 till GTA IV released. Bought an FX 8300 due to lag. Used that till COD WW2 stuttered (still not sure why, really). Now I own a 7700K paired with a 1060 6GB. Not the kind of thing you should say out loud, but I'm not gonna buy a GTX 1080 Ti for 1080p/60Hz. The PCIe x16 slot is here to stay; I can upgrade whenever. The CPU socket on my Z270 board, on the other hand, is obsolete a year after purchase.
Targon - Monday, April 23, 2018 - link
Just wait until you upgrade to 4K, at which point you will be waiting for a new generation of video card to come out, and then you find that even the new cards can't handle 4K terribly well. I agree about video card upgrades not making a lot of sense if you are not going above 1080p/60Hz.
Luckz - Monday, April 23, 2018 - link
For 4K you've so far always needed SLI, and SLI was always either bad, bugged, or - as of recently - retired. Why they still make multi-GPU mainboards and bundle SLI bridges is beyond me.
Lolimaster - Thursday, April 19, 2018 - link
Zen 2 should easily surpass 200pts in CB15 ST: a minimum of 5-10% plus a minimum of 5-10% higher clocks, even being extremely pessimistic.
Lolimaster - Thursday, April 19, 2018 - link
IPC and clock, no edit button, gg.
mapesdhs - Saturday, April 21, 2018 - link
Not being able to edit typos sucks. :)
realistz - Thursday, April 19, 2018 - link
Anandtech is the only site that shows the 8700K trailing the 2700X in gaming. Hell, even the 8400 is slightly faster overall than the 2700X everywhere else. This is what we call an outlier.
Lolimaster - Thursday, April 19, 2018 - link
All of the Intel CL chips auto-OC beyond TDP + many reviews didn't apply the latest Spectre/Meltdown patches or didn't rerun tests cause of "rea$ons".
0ldman79 - Thursday, April 19, 2018 - link
How about the multi-core enhancement? That's going to throw off the scores a bit too, and a lot of reviewers leave it on. I don't remember if Anandtech does or doesn't. I think their stance is "out of the box".
mapesdhs - Saturday, April 21, 2018 - link
Which on some mbds means it'll be on, though the BIOS settings for this can be confusing. GN has covered this a lot in recent months.
Total Meltdowner - Thursday, April 19, 2018 - link
First
Singuy888 - Thursday, April 19, 2018 - link
The security patch did not cripple Intel's gaming performance. The question is, how did Anandtech get such kickass Ryzen results?
https://www.reddit.com/r/Amd/comments/8dfbtq/spect...
fallaha56 - Thursday, April 19, 2018 - link
Yes it did. The Spectre2 patch is causing anything up to 20% perf drops:
https://np.reddit.com/r/pcmasterrace/comments/7obo...
Lolimaster - Thursday, April 19, 2018 - link
Those cache latencies were really holding back the true Ryzen performance :D
Lolimaster - Thursday, April 19, 2018 - link
2600X is really the champ.
Gothmoth - Thursday, April 19, 2018 - link
What's up with this THANK YOU advertising in articles now? One page of "we must thank" BS...
mapesdhs - Saturday, April 21, 2018 - link
Never kick your creditor in the nuts. :)
Lolimaster - Thursday, April 19, 2018 - link
Unkermit the Ryzen, ya gaming king.
msroadkill612 - Thursday, April 19, 2018 - link
One thing that stands out is how manifestly uncompetitive the Intel 6700K & 7700K are at their absurd prices.
ACE76 - Thursday, April 19, 2018 - link
Dead platforms that sucked money out of customer pockets as well.
Luckz - Monday, April 23, 2018 - link
The 8350K isn't that expensive. Same thing.
kill3x - Thursday, April 19, 2018 - link
Sorry, but these gaming results are total crap. I have 110 average fps on an OC'ed 1600X @ 4.0GHz and a GTX 980 Ti @ 1600MHz in RotTR on high, and you show less FPS for a 1080? Not to mention I have a crapload of processes running simultaneously. This is the hardest fail I've ever read.
realistz - Thursday, April 19, 2018 - link
Anandtech needs to redo the gaming test. It's blatantly a false representation. And Meltdown doesn't impact gaming. No excuse whatsoever if they want to still be considered a reliable reviewer.
kill3x - Thursday, April 19, 2018 - link
True. From the list of games I have in my library, no results match these. Or I could claim that an OCed 980 Ti beats a 1080 by a large margin.
Kankipappa - Friday, April 20, 2018 - link
Well, you should claim it. AFAIK it already matched the 1080's performance at 1500 clocks, and going above that should make it faster. Just because reviewers benchmarked (and still use those results) the numbers on original 980 Tis that don't even hit 1300MHz clocks at stock (which the aftermarket ones do) doesn't make the later non-reference 980 Tis slower in reality. Many people think the 1070 matches the speed of an OC'd 980 Ti, but in reality the old one is better - just more power-hungry. But hey, Nvidia couldn't have sold those 1070s if people knew the truth. :)
fallaha56 - Thursday, April 19, 2018 - link
Er, look again, Intel fanbois. The Techradar review with fully patched Intel systems is showing exactly the same thing...
Spectre2 patch looks like it has a massive hit
Tropicocity - Thursday, April 19, 2018 - link
Then why do other reviews not show anywhere near the level of performance gap between Ryzen 1 and Ryzen 2? It's not as if Spectre or Meltdown patches would somehow make the 2 series way better than the 1.
fallaha56 - Thursday, April 19, 2018 - link
XFR, RAM, cooling, MCE - lots of variables here. But the key difference is the patching, and quite possibly the RAM.
DearEmery - Thursday, April 19, 2018 - link
I tend to keep away from the comments here because I lack the knowledge to really contribute. I couldn't resist the urge to pop in because I'm certain this is the only time that sentence will be me boasting, when I'm reading comments from 'kill3x' and 'realistz' concerning 'hard fails'.
Follow my great example and realize your anecdote (leaving aside your 'hard fail' at comprehending results and placing them in the context the article hands you plenty of, assuming you read every word you should have) is right on the edge of worthless and garbage. Then read all the comments (particularly pages one and two). Then come back tomorrow for potential updates. Then go back into whatever game you were playing and be silly gooses there.
Ryan Smith - Thursday, April 19, 2018 - link
Hey Kill3x, to clarify, are you looking at the same sub-score we are, or the overall average? Our posted results are off of the first scene, Valley, as that's the most strenuous. The overall average is going to be higher, as you can see here: https://www.anandtech.com/bench/GPU16/1471
kill3x - Thursday, April 19, 2018 - link
Thank you for your reply, Ryan. Yes, this is more on point. But then again, if you mean Geothermal Valley, I have different results there. The first area is the Siberian wilderness with heavy snow, and I have lower results there. So a question arises about testing methods and testing scenes. Was it combat? Static? In a cave or on top of the area? All of these things affect FPS heavily. That's why the best way to review hardware in games is to use scripted scenes and show the results in a video with a detailed options setup. Why didn't you guys just use the in-game benchmark, which 100% runs the same scenes with the same density? All of this looks like the reviewer tried to cherry-pick results in favor of Zen+. When a reader can't reproduce the result of a benchmark with the same hardware the reviewer used, that's an example of a bad approach and a distortion of your visitors' perception. But then again, thank you, Ryan, for speaking with us and listening to our rant.
Ryan Smith - Thursday, April 19, 2018 - link
"Why didn't you guys just use ingame benchmark which 100% runs same scenes with same density?"To clarify, we do. We just use one of the scenes, and not the average of all of them. This is the same scene we've used for over a year now, since introducing this benchmark to our CPU & mobo testing suite.
kill3x - Friday, April 20, 2018 - link
Ryan, I retested the Valley scene in the built-in benchmark, this time with an 8700K and a GTX 980 Ti. I used high instead of very high, all options like on your screenshot. I got 122 FPS on Valley with these settings. On a 980 Ti. I'm really trying to keep this polite, but this is a 20% difference on a marginally weaker card. This just can't be a "gap" kind of error. These benchmarks are horribly wrong. Do your site and Ian a favor, Ryan: please consider retesting this. People already suspect you of shilling (rightfully so). Be an honest guy and just admit that a technical mistake was made, and correct it. No one would blame you; mistakes happen. If you leave it as it is, it will be a much bigger mistake.
divertedpanda - Saturday, April 21, 2018 - link
Your setup is nowhere near similar to theirs. You can't use your PC to bench vs. them and call it scientific...
Yeah, my setup is nowhere near similar to theirs, and still my results are 20% better on 2 different CPUs. That kinda puts the credibility of their review at zero, with all my respect to Ryan. The only goal was to put Ryzen 1 and CFL-S CPUs in a bad light, so people will buy new Ryzen 2 CPUs and suddenly find out that its capabilities are not that huge.
Aichon - Tuesday, April 24, 2018 - link
Wouldn’t it actually suggest that there’s a difference between your setup and theirs that favors yours? For instance, in the case of your earlier benchmark, you admittedly overclocked your 1600X and they didn’t, so don’t you think that might account for the 10% difference you saw over theirs? And in the case of the 8700K, you omitted key contextual information (e.g. is your system updated, and if so, which updates to what components?) that would allow others to verify that it was an apples-to-apples comparison.Ryan may very well have made a mistake and you may very well be entirely correct about all of this, but claiming he’s a liar on the basis of your overclocked system and then following it up with claims about the 8700K that lack the information necessary for someone else to verify your data does not help your case.
Meanwhile, my horse in this race died years ago. The latest product I bought from either teams red or blue was a 2011 Mac Mini that had an Intel CPU and an AMD GPU. All of which is to say, I’m a fan of passionate debate, but let’s keep aspersions to a minimum and focus on getting to the truth.
tn_techie - Thursday, April 19, 2018 - link
In the first paragraph, Ian writes the following: "This is not AMD's next big microarchitecture, which we know is called Rome (or Zen 2) on 7nm."
That is incorrect. Rome is the codename for the upcoming EPYC 2nd Gen CPUs that will replace the current Naples products, and not the codename for AMD's next gen CPU core arch.
msroadkill612 - Thursday, April 19, 2018 - link
"anything that is hard on a single-threaded, such as our FCAT test or Cinebench 1T, Intel wins hands down"Yeah, I know, its just an indicator, but it's telling that the test seems as silly as the emphasis on ipc due to shrill/shill gamers - who would use single thread for cinebench?
Luckz - Monday, April 23, 2018 - link
Single-thread Cinebench 15 score is *the* indicator of IPC used in meme-filled debates on online forums. It's just an important metric right now. And unlike, uh, GeekBench, CPU-Z, and whatever else claims to judge single-thread score, it's pretty accurate.
peevee - Thursday, April 19, 2018 - link
"ranging from the Silent 65W Wraith models"You mean Stealth, right?
Ryan Smith - Thursday, April 19, 2018 - link
Indeed we do. Thanks!
fallaha56 - Thursday, April 19, 2018 - link
Techradar is also confirming a massive performance hit from the Intel patches, e.g. 1000 points in single-core Geekbench.
ACE76 - Thursday, April 19, 2018 - link
Techradar has AMD beating Intel in pretty much everything...I guess Intel fanboys could just run their systems unpatched and claim to be kings...lol.
msroadkill612 - Thursday, April 19, 2018 - link
My memory is bad, but not that bad. I read a lot of CPU reviews, and this is the first that has made it clear that these are post-security-patch.
Could this be the first honest comparison review of Ryzen - new OR old?
It certainly stirred a fanboi wasp nest.
ACE76 - Thursday, April 19, 2018 - link
Anandtech isn't alone... Techradar has AMD winning on fully patched systems as well...the sites that have Intel winning are using either old scores or systems unpatched for Smeltdown.
mapesdhs - Saturday, April 21, 2018 - link
Not that that will stop the yelling and screaming. :)
msroadkill612 - Thursday, April 19, 2018 - link
It's just hunch/intuition, but those cache improvements were outstanding, and they apply to some seriously large chunks of cache. IMO, we haven't heard the last of perf improvements from this source, via driver tweaks etc.
Tropicocity - Thursday, April 19, 2018 - link
My main gripe is with people saying "other reviewers didn't use the meltdown/spectre patches" and stuff...
1. Those patches have already been tested and they do NOT affect gaming much at all; we're talking lower than 1%.
2. Even if you take out the entire meltdown/spectre thing, look at the Ryzen 1800X vs 2700X. A 3% IPC increase and some memory latency improvements do NOT account for 20% increased performance in gaming, not at a 200MHz clock change.
3. Even if the Ryzen 2 series DID somehow gain 20% on Ryzen 1...why do other websites not show this? They all show at most 10%. Completely remove Intel from the situation and you still have glaringly large performance jumps from Ryzen 1 to 2; this is what sticks out the most here.
ACE76 - Thursday, April 19, 2018 - link
You need to educate yourself on Meltdown/Spectre a bit more...the patches are multilayered: in the chipset driver, in the OS AND the motherboard BIOS...unless the system being used is fully patched and running the latest BIOS and drivers, the test is useless.
fallaha56 - Thursday, April 19, 2018 - link
Suggest you also look at the boost system on Ryzen (and also the power consumption). These higher scores are coming from much higher sustained multi-core clocks...
Luckz - Monday, April 23, 2018 - link
If you look at page 8, this article shows 2018 Ryzen 2 versus Ryzen 1 performance from the dark ages. "Not retested for this article".
BigDragon - Thursday, April 19, 2018 - link
I've been saying for months that Spectre and Meltdown will close the performance gap between Intel and AMD. All those speculative tricks will be reeled back in via security concerns. This article (and others like it) confirms it. We shouldn't treat the closing of the performance gap as a problem. This is not a black mark for Intel, and it's not a knockout punch from AMD. What this shows is that we have real competition in x86 again! Competition caused tremendous performance jumps in the AMD K7 and Intel Pentium days. Likewise, it's rapidly improved ARM offerings to make them a viable threat to x86. Competition is good!
Not sure that I'm ready to replace a 5-year-old X79 i7-4930K system. The 2700X is very tempting, especially given that X79 lacks boot NVMe support. I've been doing enough Blender and Unreal stuff that it may be a worthwhile speedup.
mapesdhs - Saturday, April 21, 2018 - link
Re NVMe boot, not true, it can be added to numerous mbds, especially ASUS (check the ROG thread with the modded BIOSes which one can install via FlashBack). Plus, you can use a 950 Pro or various other models since they already have their own boot ROM (ie. they don't need boot support on the mbd). I get over 3GB/sec with a 960 Pro on an ASUS R4E using a 4820K, 4930K and E5-2697 v2. Also have an SM961, various SM951s and some 950 Pros. Works great. I'm sure other modders have done the same for non-ASUS boards. The modder on ROG has also made BIOS files for the M4E and P9X79/E WS at my request.
r13j13r13 - Thursday, April 19, 2018 - link
It's incredible what AMD has achieved: not only has it caught Intel, in many cases it exceeds them, above all in quality/price. That R5 2600X at 229 dollars looks great for games.
chrcoluk - Thursday, April 19, 2018 - link
I really hope Anandtech didn't think it was realistic to bench an 8600K at 4.3GHz clocks? Even really poorly binned 8600Ks hit 4.8GHz.
msroadkill612 - Thursday, April 19, 2018 - link
Forbes: "Two weeks ago, if you were building a PC purely for games, I'd have said go with Intel. Today, I'd say you have a choice, and that's a hugely important development."
AndersFlint - Thursday, April 19, 2018 - link
I think maybe you've stuffed up your testing somewhere; every other review I've seen paints a different picture, with the 8700K walking all over the 2700X in gaming performance (50+ FPS difference in places). Even your Cinebench scores are a little off: 1395 for an 8700K?? At stock it should be closer to 1495, and OC'd around 1695.
Discounting margin of error, this review really is the outlier when put alongside reviews from Hardware Unboxed/Gamers Nexus/Linus Tech Tips et al.
ACE76 - Thursday, April 19, 2018 - link
Anandtech isn't the only review site to come up with this conclusion...other sites are likely either lying or using old scores from before all the patches came out...the only testing results worthwhile are those from a new testbed using a fresh install of Windows, all available patches, the latest chipset drivers, video drivers and the latest BIOS.
Kankipappa - Friday, April 20, 2018 - link
1400 is the real stock number for the 8700 and 8700K. Just because the original 8700K YouTube reviewers had multi-core enhancement support on by default (where the turbo clocks go to 4.7GHz on all cores, aka an out-of-spec overclock) doesn't make it the real number for your average consumer machine, which doesn't have the option for it nor the TDP headroom.
Cooe - Friday, April 20, 2018 - link
Look at Techradar, same results as AnandTech. It's the new Meltdown & Spectre patches, it seems. One site having weird results is one thing, but that's not what we are seeing here.
mapesdhs - Saturday, April 21, 2018 - link
I can't help thinking people are not reading the intro, where it clearly states the tests were done with all patches applied.
Ryun - Thursday, April 19, 2018 - link
The Dolphin emulator benchmark has the 1700 beating the 2700. Is that right? Thanks for benching the 2700, by the way. Seemed the most interesting of the lineup. It packs quite a punch for 65W! Not sure I'd buy it to save $30 over the 2700X, but for a heat/power constrained workstation it would be at the top of my list.
Flying Aardvark - Thursday, April 19, 2018 - link
Yup, I use 65W chips because the form factor I want is worth more to me than benchmark results. My system is fast. Really fast. Any additional speed really wouldn't be noticed. A big case on the floor with enough space for a massive heatsink inside would be noticed for many years. As long as I'm on a relatively recent architecture (Zen 1 here), I'm probably getting the majority of the enhancements. I plan on buying the Zen 2 65W chip as an upgrade with X570.
osamabinrobot - Thursday, April 19, 2018 - link
Hey, way to go AMD!
CrazyElf - Thursday, April 19, 2018 - link
Good work and thanks for the review. AMD has made some progress on refining Ryzen, and I will be upgrading to Threadripper 2 when it comes out in 2019.
The only disappointment is that 4.4 to 4.5 GHz does not seem to be possible. Then again, it may be by the time Threadripper+ comes around. The original Ryzen did 3.8 to 4.0 GHz, while Threadripper was capable of 4.0 to 4.2, and even a few golden chips did 4.3 GHz.
In terms of performance vs cost this is a solid win for AMD. I just wish that it was possible to get AVX 512 onto AMD. Maybe with Zen 2 or 3 it will be.
Crazyeyeskillah - Thursday, April 19, 2018 - link
The 99th percentile and time-under-60fps gaming numbers are amazing. Ryzen+ is beating Intel in every single benchmark. It's pretty much the mathematical equivalent of a better game experience.
mdriftmeyer - Thursday, April 19, 2018 - link
Do yourselves a favor and test Blender Master 2.79.x with CPU+GPGPU rendering of the benchmark models. You'll see Ryzen does far better than Intel's own.
msroadkill612 - Friday, April 20, 2018 - link
I wonder how an OC'd Zen Vega 2400G APU fares comparatively?
LarsBars - Thursday, April 19, 2018 - link
Would love to see you add a Vega 64 to your gaming results. CPU/GPU vendor combinations have been a high point of recent AnandTech review benchmarks.
krazyfrog - Thursday, April 19, 2018 - link
You left out the 8400? For real?
Ket_MANIAC - Thursday, April 19, 2018 - link
If the 8600K gets killed by the 2600X in productivity, what's the point in adding the 8400 into the mix? 8400 numbers are useful for the 2600 review, not the 2600X. IMO, even there the 2600 would destroy the i5.
jjj - Thursday, April 19, 2018 - link
Just test the Intel hardware with and without the Spectre/Meltdown patches in 1 title, 1 resolution, to see if the impact is 10-15%; if it is, that's why your numbers are different.
jjj - Thursday, April 19, 2018 - link
Maybe the script that rearranges the results to be displayed from highest to lowest is rearranging the 2 columns (SKUs and results) by a different set of rules.
polyzp - Thursday, April 19, 2018 - link
Everyone please note: many reviewers used the old Ryzen Balanced power setting in Windows, which has been seen to heavily cripple the Ryzen 2700X. Furthermore, Intel has no secret motherboard TDP-bumping advantages set in the motherboard in the AnandTech article.
We are looking at a truly stock 8700K, with no advantages. Furthermore, the latest Spectre patches cripple Intel significantly.
https://techbuyersguru.com/amds-second-assault-ryz...
Flying Aardvark - Thursday, April 19, 2018 - link
This man speaks the truth ^^^^
I do wonder how you tell whether you have the old power profile or not, though. Probably best to just start with a clean Win10 install each time, install the latest and bench. Otherwise, at least do an uninstall of the old chipset drivers, maybe run DDU for AMD stuff, then install the latest cleanly.
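[Ed.: for anyone wondering the same thing, Windows' built-in powercfg tool will show which power plan is actually active; a minimal sketch from a PowerShell prompt, noting that the exact plan names listed depend on which chipset driver version installed them:

powercfg /getactivescheme   # prints the GUID and name of the currently active power plan
powercfg /list              # lists all installed plans; the active one is marked with an asterisk

If an old "Ryzen Balanced" plan is still lingering after a chipset driver update, it will show up in the /list output alongside the newer one.]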
risa2000 - Thursday, April 19, 2018 - link
With Ryzen 1x, while it officially supported DRAM-2666, people were usually aiming at running it at 3200, with a significant speed improvement. Is Ryzen 2x also able to run at 3200 (or more), and will the impact of the speed difference be significant again?
Total Meltdowner - Thursday, April 19, 2018 - link
No one can hit 3200 with Ryzen 1, bro. Everyone is running that shit at 2666 tops. If you're lucky with RAM... 3000.
Alistair - Thursday, April 19, 2018 - link
Common misconception. I have 3200 CAS 15 running with a 1700, and have since the first BIOS update fixed the problem.
Sungit - Thursday, April 19, 2018 - link
I'm running a 1700X with 16GB (2x8GB) @ 3200 14-14-14-34 - just enabled XMP in the BIOS. Cake.
msroadkill612 - Friday, April 20, 2018 - link
Wow, what ram & mobo?risa2000 - Friday, April 20, 2018 - link
When I was looking at it half a year ago, the memory support of Ryzen 1x was a mess, but there were some boards and some memory modules (typically Samsung "B" parts) which were able to hit 3200 with good timings. The problem was more with >16 GB support and dual-sided modules in general. But maybe the situation has changed since then. What was more concerning was the feeling of instability of the platform (i.e. some modules do not work with some chips/boards). I wonder if this prevails on Ryzen 2x.
What I also remember is that the memory throughput peaked at 3000/3200 depending on timing. It would be interesting to know if the same applies to Ryzen 2x.
T1beriu - Friday, April 20, 2018 - link
>No one can hit 3200 with ryzen 1 bro.Your mom can.
Sungit - Thursday, April 19, 2018 - link
The memory speed policy... So in a nutshell, what we, your readers (the ones reading this article), are being told is that most of us don't know how to enter the BIOS and click on XMP? Add me to the list that disagrees - especially as memory speed is critical to AMD's Infinity Fabric, and there's zero chance of me running @ a slow 2933. I did not bother reading the review past the test bed setup as a result.
ACE76 - Thursday, April 19, 2018 - link
The memory frequency has little impact... memory timings have a big impact...on X370, people were getting lower scores at 3200 and higher with loose timings vs. running at 2933 with 14-14-14 timings.
Sungit - Thursday, April 19, 2018 - link
What timings were used? Didn't see it spec'd. 16 perhaps? Like I mentioned in another post, I'm running a 1700X on an X370 Taichi @ 3200 14-14-14-34 (simple XMP), and getting better results than I could @ 2933. Regardless, the base/default speed/timings policy should be revisited.
ACE76 - Friday, April 20, 2018 - link
Yes...my original memory was CL16...I eventually got it to run at 2933, which is fine by me...I might upgrade to 3400 CL15 memory in the future...
TheJian - Thursday, April 19, 2018 - link
Not sure why you're running Spectre/Meltdown patches for Intel when AFAIK both were yanked by MSFT and Intel (Microsoft first, then Intel finally as well, as they weren't working properly) last I checked. Also, why does the 8700K machine have 4x8 DIMMs instead of AMD's 2x8? If you're not comparing like systems, it kind of introduces things that cause weird benchmarks. Just read Hexus: no 1080p game lost by Intel, and the rest even at 4K, which nobody uses & is GPU limited anyway (I call it nobody until 10%; miles from that, even 1440p is miles from that). Despite what you guys have been claiming since the 660 Ti, people are still running 1920x1200 or lower (feel free to check the Steam survey and add up ALL the others above 1920x1200), or they have TWO cards or more. AMD claims 1440p is the best res, but you guys do 1080p & 4K? LOL.
One more point after a quick read here and elsewhere: you should have tested Intel at 2933 memory also, as it is EASILY doable on Intel boards even for the Crucial. Just do the timings yourself; heck, you could have used the exact same modules for both. People who use SPDs are lazy. Nearly every module can run on every board if you IGNORE SPDs and set it up yourself. E.g. PCPer ran the 8700K/2700X at an actual 3400.
Darn good chip either way, not sure why AMD insists on making no money by charging less than their stuff is worth. This chip should be $400 with fan. You just pissed away much of your NET INCOME. Nice work again AMD retarded management (and that's an insult to retards). I say retarded, but really just stupid (as in can't learn from past pricing mistakes). Ignorance would mean they can learn...You're not in business to be our friend, you need to make MONEY for a few years to have R&D. Over the life of the company AMD has NOT made a dime. I don't think people get this. We need them to make NET INCOME and 60% margins like NV/Intel, not this 30% crap. Market share does you ZERO good if you make ZERO from having it. I'd rather be Apple/NV/Intel and OWN the high end and most of the profit.
Dragonstongue - Thursday, April 19, 2018 - link
Depends on the way of looking at things. AMD gives back what they can where they can. They absolutely need to make money; one can "assume" they are not, but then again, do we really know how much Intel/NV or Apple are making? Nope. AMD has a smaller staff size and far less overhead (generally speaking). They absolutely need to bank some, but who is to say that AMD charging $400 for product X is not directly the same thing as Intel/NV or Apple charging $600? We do not know, simple as that.
In my books, you are in business to make money for sure, but there is no reason to gouge the crap out of those buying said product, doubly so for a company like AMD that seems to give a great deal towards the industry, which benefits everyone including themselves; this is "priceless".
We already have enough mega-greed corporations out there; am glad at least one of them is charging a fair price for the product instead of gouging "just because they will buy it anyways".
Like a big pharma type deal, screw that lol...if they only need to sell at say $400 to "bank" even 20%, they still are profitable. When it comes to the very high end, their margins are likely MUCH higher with little overall increase in cost to them (such as Threadripper/EPYC).
Either way they are being fair to themselves AND their customers. At the very least they are not selling more or less at cost with ~5% margins at most (Bulldozer as an example); they've basically doubled the price at likely about 1/3 less cost to produce, which is still a win-win for them: they put $ in the bank and towards R&D, and we get some shiny new toys as well.
quit your bitchin ^.^
TheJian - Thursday, April 19, 2018 - link
Umm, no, we do know what the Q reports say, and as noted they haven't made a dime over the ENTIRE life of the company. We also know they've lost ~$8B in the last 15yrs. We also know they've released multiple new products and still aren't making money for a full year yet, barely making money in a quarter here and there.
https://finance.yahoo.com/quote/AMD/financials?p=A...
Net income for the last 4 quarters:
61,000 71,000 -16,000 -73,000 (figures in thousands of USD; they sum to roughly $43M over the trailing twelve months)
Get the point genius? I'll keep bitchin until AMD sets an appropriate price to actually make 100mil for the year...LOL. NO not GROSS, that doesn't count..NET INCOME is the only thing that matters in the end. Commonly known as "THE BOTTOM LINE".
If you're happy with AMD making ~40mil for a year, I'd say Intel/NV are laughing their butts off. New toys cost more than 40mil...ROFLMAO. That's about the cost of taping out a 7nm chip, probably, as costs skyrocket per shrink now. Used to be ~10mil at 40/28nm; not so now. There's a reason why Intel had problems with 14nm for a while (not now) and now 10nm too. It's not easy shrinking today, and it's vastly more expensive to pull off.
AMD has margins of 34% right now. Now look at that MASSIVE 40mil profit for the TTM (trailing 12 months, since you don't seem to read balance sheets or Q reports). It's comic that people like you comment but don't know the numbers. You run on assumptions I can't afford as a stock investor. You know what happens when people ASSuME things, right? ;)
Oh, and AMD profits after MULTIPLE launches and how many years of re-organization? Pricing and HBM/HBM2 (kills margin on top products currently; it should only be done on Titan/Quadro/Tesla type stuff with massive room to adjust), chasing consoles (single-digit to 15% margins over the life of the first Xbox One according to AMD themselves), and chasing APUs (which was released at $165...LOL - should have made HBM 8809G chips instead of Intel doing it - great margin on a ~500-buck part). Now they massively undercharge on the new 2700X. Essentially $40 off the original $369 and a free heatsink/fan thrown on top. This is DUMB, until you can prove otherwise via a Q report, which you can't.
MEGA GREED? ROFL. Add up EVERY SINGLE YEAR of AMD NET INCOME please, or please kindly shut up. They should be making a billion right now, but they chose HBM/HBM2 killing top cards margins twice, now cpus twice. The cpu looks VERY strong, why screw themselves to make a few IGNORANT people (hoping you can learn by reading a Q report, so I refrained from calling you stupid) like yourself happy? While you're doing some homework, make sure you understand how they've lost fabs, buildings, property, 1/3 engineers in the last ~5-10yrs (layoffs), and massively diluted their shares during that time also (almost a billion shares outstanding now vs. 600mil a decade ago). They are NOT healthy by any measure, so by all means prove your case or go away quietly before making a bigger fool of yourself. :) It's shortsighted to have them go out of business soon (or go completely junk status) over a few bucks on your cpu or gpu. Margins do matter, so does NET INCOME.
https://finance.yahoo.com/quote/AMD/key-statistics...
AMD Operating margin for the last 12 months..3.83%...LOL. Again, read something. I could go a lot deeper, but if you don't even get all this, what is the point in driving you into the ground. I have more AMD reviews to read so I can decide 8700k or 2700x. :) I was excited over Anandtech games (apps already good for me), but now have reservations again reading elsewhere...LOL.
What money are they putting in the bank? R&D has been dropping for the last 4yrs; Nvidia/Intel are up over the same period.
One more for good measure, easily understood I'd hope by ALL:
https://finance.yahoo.com/quote/AMD/financials?p=A...
NET INCOME last 4 years:
43,000 -497,000 -660,000 -403,000
So, multiple product launches last year, and barely breaking even? Never mind the previous 3 years of MASSIVE losses. If you're selling 4-5B worth of crap a year and losing 10-12% every year on that, umm, you're doing something wrong, right? You're CLEARLY not charging enough, correct? Doing the same thing on a $250 1080-class GPU next year...they are pre-announcing another BAD yearly loss...LOL. FIRE your management, AMD! If NV is $350 or more next year on that speed of card, you'd better rethink your margins! What gouging? They've lost ~500-650mil a year in 3 of the last 4yrs and made only 43mil in the best year of the last 5...LOL. Are you high? Your idea of margins had AMD out of the CPU race for 5yrs straight; now they're finally back in, but still you'd have them keep doing the same stupid pricing that loses 500mil a year. UGH.
Goodwill=PRICELESS? LOL. Stupid pricing=losses...How about charging appropriate pricing to stay in business and ADD R&D instead of reducing it? Make sense? They are NOT banking 20%, and if your idea of profitable is losing 1.5B over the last 4yrs and making only 43mil in what should be their BEST year in a decade, you sir are...never mind. ;) We know the margins on NV and Intel product segments. IE, gaming cards around 50% overall (80% of that from top-end stuff) and workstation stuff is ~80%+. Those margins allow the low end to actually get something worth buying too.
But hey, congrats on all your "guess work", I'd rather deal in DATA.
https://nvidianews.nvidia.com/news/nvidia-announce...
That's how your summary for a quarter's financial results should look. See the "RECORD" stuff in there...NO ripping people off either as everyone has other options but still buy. I'm happy with my 1070ti even at $500 during a mining war in Nov. Still a great card today. The only thing I don't like about the financial summary above is giving 1.25B back to shareholders. Dividends: no point for tech co, and share buybacks while nice, don't make the next product. R&D please. Can't be bothered to correct spelling/grammar (been up all night for a hospital visit for family). Are we done? LOL.
Dragonstongue - Thursday, April 19, 2018 - link
I would say AMD is doing well if only for the fact that under new leadership they are profitable after being under the mud for many, many years; that is what counts - paying down the bills and making some on the side...compared to Ngreedia, which is VERY overvalued (not just my opinion), but whatever, not worth talking about. Let's use a company that constantly cuts corners, that constantly screws consumers for the $$$$$$$$$$$$$$$$$$$$$$$ and nothing more, way to go...when a company such as AMD, which has basically been less than broke for many years, turns a very high profit, I say they are doing SOMETHING right. I guess the awards they have got over the last 2 years mean jack shit, huh?
They have to invest to stay ahead of the game, which is hard to do when you owe billions in past debts; you cannot go to making billions overnight while still making high quality product(s).
Guess AMD should price their chips at $600+ just because they should also be greedy mofos and make everything proprietary nonsense.
Glad they at least are trying to do what they can as best as they are able, or will you also "argue" that point, sir? Income year over year was UP, debts were DOWN; they are doing what is right in my books, period.
mehhh whatever, am done, simply not worth it.
Fallen Kell - Thursday, April 19, 2018 - link
They are running the Spectre/Meltdown patches because they were re-released as final fixes this month. Intel released the changes for Skylake and Coffee Lake mid-March, and Microsoft approved the updated patch mid-March:
https://support.microsoft.com/en-us/help/4090007/i...
For complete information of Spectre and Meltdown with Microsoft:
https://support.microsoft.com/en-us/help/4073757/p...
So, yes, Anandtech is running the latest released patches, which have been approved by both Intel and Microsoft and are the recommended solution. Not all systems can be patched yet, as your motherboard provider also needs to provide a BIOS update, but for the testbed Anandtech used, the motherboard provider does have the BIOS updated with the fixes, and thus, it was benchmarked using the fully supported, patched configuration for this security vulnerability.
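[Ed.: readers who want to check their own machine's mitigation status rather than argue about everyone else's can use Microsoft's SpeculationControl PowerShell module, linked from the guidance above; a minimal sketch, assuming an elevated PowerShell session with access to the PowerShell Gallery:

Install-Module SpeculationControl -Scope CurrentUser   # one-time install of Microsoft's checker module
Get-SpeculationControlSettings                         # reports Spectre v2 / Meltdown mitigation state

The output separates "hardware support present" (BIOS/microcode) from "Windows OS support enabled", which maps directly onto the multi-layer patching point made above.]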
TheJian - Thursday, April 19, 2018 - link
https://threatpost.com/bad-microsoft-meltdown-patc...
I guess the worry is Win7/2008R2 64-bit ONLY still (which affects me on multiple machines).
“Microsoft is aware of this and looking into the matter further. This issue impacts Win7 SP1 (x64 only) and Server 2008R2 SP1 (x64 only). We are actively testing a solution, and will make it available as soon as it is properly validated.”
Maybe I missed whether they have been fixed (did April Patch Tuesday fix this?), as I've been dealing with parents (hospital crap). But with quick checks I think Win10 (used here) is OK, it seems. Still odd game benchmarks based on more reviews elsewhere. Either something is fishy or everyone else did it wrong? LOL. Still think mem speed and # of DIMMs (2x8GB) should be the same, especially since you can run at 4000 on most Intel boards (apparently maybe 470-chipset boards for AMD too). The post above was from an article dated Mar 28th. So unless it was fixed days later, I'm guessing the Win7/2008R2 64-bit varieties are still buggered (not to mention all the CPUs Intel abandoned).
Fallen Kell - Thursday, April 19, 2018 - link
My guess is they tested with 2 DIMMs because Ryzen's memory controller is only dual-channel, and using 4 DIMMs would mean it had to use multiplexing to access the additional DIMMs, thus creating slightly slower performance. That said, Coffee Lake is only dual-channel as well...
Cooe - Friday, April 20, 2018 - link
Techradar has identical results to AnandTech with the new patches.
msroadkill612 - Friday, April 20, 2018 - link
AFAIK, it takes the comments section to break the big story in the review - "Meltdown Smackdown" :(
Ananke - Thursday, April 19, 2018 - link
Intel is making in the ballpark of 90+% gross profit on high-end desktop processors and server chips. AMD's cost scale is at least twice as bad, hence they mostly need VOLUME, to make better profits and to establish themselves, and they simply cannot charge more than Intel for similar performance given a worse track record and higher business risk - if you are a system integrator (Dell, HP) or a large corporate buyer (Chase Bank, for example), you don't want to buy billions of $ in inventory from a non-rock-solid business without decades of consistent tech support etc., unless the price is like a triple difference... So in essence, if AMD can get away with this high pricing, they will be very lucky.
jjj - Thursday, April 19, 2018 - link
Gaming results are odd in the 8700K and 8400 review too; maybe some script that automates the process went sideways? Compare this review to the 8700K review in Civ 6 for the 8700K, or look at how the 8400 tops the charts at times.
Singuy888 - Thursday, April 19, 2018 - link
Stop focusing on whether Anandtech destroyed Coffee Lake's performance. They didn't. Look back at their Coffee Lake review and all the game numbers are the same. The real question is, how did they get Ryzen to perform so well! In Anandtech's Coffee Lake review they used a GTX 1080 with similar games. Here are the results for an 8700K.
Coffee Lake Review:
GTA V: 90.14
ROTR: 100.45
Shadow of Mordor: 152.57
Ryzen 2nd Gen Review Post Patch
GTA5: 91.77
ROTR: 103.63
Shadow of Mordor: 153.85
Post-patch, the Intel chip actually shows improved performance, so this is not about other reviewers not patching their processors, but about how Anandtech got such kickass results with Ryzen 2nd Gen.
Hxx - Thursday, April 19, 2018 - link
The 8700K is clearly the better chip for gaming because of the better clocks. As simple as that. Ryzen 2 just closed the gap a little more from Ryzen 1. For multithreaded workloads, Ryzen 2 is likely the better buy, which also makes sense because of the 2 extra cores. So same as before, except now the gap between those chips is much smaller. From a price point, though, Intel has the better pricing. The 8700K has been as low as $280, so it ultimately depends on pricing, but if both chips were the exact same price and the boards were the exact same price, then I would buy based on your use case...productivity vs. gaming.
Singuy888 - Thursday, April 19, 2018 - link
I'm looking at $344 for the 8700K. That doesn't look cheaper to me, especially since I'll have to drop another 50 bucks on a cooler. Also, there are very few specific use cases for an 8700K in the gaming department. It's actually pretty hard to think of any, since a 2700X will be just fine if you have a 144Hz monitor. Perhaps there is that one game that's just falling below your target frame rate, but for any esports game with a 1080 Ti, both processors are giving you way beyond what you need. Losing about 30% in productivity tasks, on the other hand, is more devastating than losing 5-10% in frames, from 200 vs. 189...all that for almost 100 dollars more, with zero upgrade path either.
TheJian - Thursday, April 19, 2018 - link
Agree with most of that (raise your hand if you have a 144Hz monitor...not many), but don't forget you get a GPU for free out of Intel (well, it costs more, so not free I guess...LOL). I just had to use my 4790K's GPU for a while to RMA my Radeon. I was surprised by how good it was for my usage, sans most gaming, and even then I just went GOG older games for a bit. HandBrake is ludicrously fast with Intel's GPU also. Quality is pretty darn good if you add some settings, for instance:
option1=value1:option2=value2:tu=1:ref=4
There are more; that's just what I used with the 4790K. I can't wait to try the 8700K if I don't go AMD. Unfortunately CUDA isn't supported yet in HandBrake, so my 1070 Ti didn't do squat for that (boggles my mind, CUDA is in 70-80% of the desktops out there today).
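[Ed.: the settings string above is HandBrake's advanced encoder-options format: colon-separated key=value pairs. Purely as an illustration, a Quick Sync encode passing those options from the command line would look something like the following, assuming a QSV-enabled HandBrake build and hypothetical file names:

# hypothetical input/output names; qsv_h264 requires HandBrake built with Intel Quick Sync support
HandBrakeCLI -i input.mkv -o output.mp4 -e qsv_h264 -q 22 -x "tu=1:ref=4"

Here -x (--encopts) hands the colon-separated options straight to the encoder; tu (target usage) trades speed for quality and ref sets the number of reference frames.]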
I'm looking at $346 for 8700k, but the heatsink will only be $30 (evo212 same as my 4790k) as I'm just going to buy another of the same for this build unless I go AMD. So $375 vs. 330. I don't think you'll be able to upgrade to 12 core on the same socket (only thing worth it from a 2700x IMHO, a few speeds bumps aren't worth squat), so not sure I'm gaining anything either way. I buy at or near top these days for the board bought (whatever is $350 or so I guess) and replace cpu/moboard/mem in most cases for the entire family the last 10yrs or so. I used to upgrade more, but today there are usually more reasons to dump it all than to keep it. I just give it to some other family member much like business does to employees who don't need top end stuff (they always love it, IE, dad getting 4790k shortly).
msroadkill612 - Friday, April 20, 2018 - link
Where desktops are mainly hitting a wall is system memory speed and IO lanes. Only the Threadripper platform really takes you to the next level.
I don't believe TR has to wait for 7nm to be spectacular - the Zen+ treatment will do that in 2H 2018.
msroadkill612 - Friday, April 20, 2018 - link
"Losing about 30% in productivity tasks on the hand is more devastating than losing 5-10% frames"Yes, except it can be even more extreme, yet reviewers annoyingly seem to give vastly disproportionate coverage to a few fps gain fora frivolity, to a major edge in areas that put bread on the table.
Nobody is going to buy a pc for work and another for gaming when one would do both, but work is first priority when choosing a rig.
lfred - Thursday, April 19, 2018 - link
Yep, spot on... kickass results and really low power usage compared to other reviews where Ryzen 2 is on the power-hungry side. One hypothesis is the motherboard: from what is suggested in another review (gamersnexus.net), the way MB makers implemented the different new 'turbo' features and auto memory timing adjustment seems to change from one manufacturer to another. That's a hypothesis; maybe it's wrong.
boozed - Thursday, April 19, 2018 - link
AMD's on a roll. This is good. Having read the comments... I am really surprised that you still allow comments.
Flying Aardvark - Thursday, April 19, 2018 - link
Brings in traffic.
eva02langley - Saturday, April 21, 2018 - link
I totally agree. This is a dumb circus. I see comments about AMD's financials from Intel fanboys who fail to realize that Lisa Su is hungry and that the Meltdown and Spectre story is what's making the wind change. Zen was not a fluke; it was a start. Wall Street even issued a buy rating for AMD. The company might be back to an A64 era. If Navi can be what Zen was, then AMD will have shuffled the cards of the silicon business. Intel has everything to lose while AMD has everything to gain.
tmiller02 - Thursday, April 19, 2018 - link
Aww...blah blah blah...face it, now AMD is the better option. After the patches, the small tweaks Intel had to increase fps in benchmarks have been negated...give AMD props...they are the new kings of desktop processors for now! This is good for us consumers...it will force Intel to actually make a better processor, instead of just higher clocks!
Flying Aardvark - Thursday, April 19, 2018 - link
Yup, lots of resistance to this fact. AMD is dominating. It's over for now; time for people to just admit it: they got screwed if they don't have Ryzen. Or at least, they bought the inferior product.
Nfarce - Friday, April 20, 2018 - link
Fanboi much there? You must be that clown who said AMD's Fury X would be the 980 Ti killer. How'd that work out for ya, fanboi? Anyway, AMD provides a better-balanced system between gaming and productivity, but it does NOT dominate Intel for gaming. Especially when overclocking to 4.9-5.0GHz is so easy on Coffee Lake. What percentage overhead does Ryzen 2 overclock to? That wipes away the Spectre/Meltdown patch performance drop. In any event, it's childish fanbois like you who can't appreciate healthy competition. Let me guess: you'd like AMD to be the only GPU and CPU maker, with Nvidia and Intel gone. Am I right? You think that would be good for you? Good for ANY of us long term? They'd turn into Intel and get sloppy and lazy too. Now if AMD can just compete with Nvidia on the high-end GPU market...the RX Vega 64 ain't it, sport, being priced at GTX 1080 Ti levels.
Flying Aardvark - Friday, April 20, 2018 - link
Ouch, someone is triggered, the real fanboy... and surprise! It's the guy named "Nfarce". Hope you have a better day tomorrow and your mental health improves. Best of luck.
prisonerX - Friday, April 20, 2018 - link
The butthurt is strong in this one.
broberts - Thursday, April 19, 2018 - link
Utterly baffled by the CPU cooler choice for the 95W TDP Coffee Lake CPU. Is Anandtech sure that these CPUs were not throttling, given what appears to be a less than adequate cooler?
Flying Aardvark - Thursday, April 19, 2018 - link
The Silverstone AR10-115XS is a beefy cooler; if it can't perform well enough with that, you're just trying to ensure Intel is #1 out of seeking Intel's best interests. This review did it right: all updated software, including security patches for Smeltdown in chipset drivers/UEFI/microcode/OS, and good but not ultimate conditions for benchmarking. Maxing everything out to peak performance when few to none are going to actually do that (with 100% stability) is no service to the community at all.
It's actually misleading. You might see Intel at the top, then everyone thinks it's "the best" because of a hyper-optimized situation, like leaving on performance enhancing (past TDP) motherboard option with the world's best cooler on it.
It's ridiculous and about time someone (Anandtech) put a stop to it.
broberts - Friday, April 20, 2018 - link
Can you point me to hard data on the actual performance of the AR10-115XS?
Flying Aardvark - Friday, April 20, 2018 - link
You can probably find out all you want; some search engines were invented in the early 90s.
broberts - Friday, April 20, 2018 - link
The reason I asked was because I could not find anything at all. Just some very loose specs from Silverstone. Specs that suggest the cooler pushes substantially less air over its fins than the Wraith coolers. You stated unequivocally that the AR10-115XS "is a beefy cooler". It's a natural assumption that you must base this on some actual data. Since I couldn't find any, I asked you. This seems quite reasonable and undeserving of such a snarky answer.
Flying Aardvark - Friday, April 20, 2018 - link
Eh, I'm used to combative little nerds on here telling everyone else to disprove their arguments (which isn't how it works), and dealing with them appropriately, rather than with reasonable people. I was just going off the surface area measurements of the heatsink (most important) and secondly the mass of the heatsink. Surface area is good; mass is a little light but not outside of typical.
So just based off what I've seen comparatively, combined with the little I know about thermodynamics, there's nothing out of the norm for the Silverstone. I think its selection is perfect for a review and I'd like to see any and all CPU reviews using a baseline like it since the majority of customers will use it or something close to it.
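[Ed.: a rough back-of-envelope supports the surface-area reasoning above. Treating the cooler as a single lumped thermal resistance, and stressing that the numbers here are illustrative assumptions rather than measured Silverstone data:

T_case ≈ T_ambient + P × (θ_heatsink + θ_interface)
       ≈ 25 °C + 95 W × (0.30 + 0.05) °C/W
       ≈ 58 °C

Any tower-style cooler with broadly similar fin area and airflow lands in the same range at a 95 W load; the argument only breaks down if the real θ is far higher, which is exactly what hard test data would settle.]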
broberts - Friday, April 20, 2018 - link
So why not use the same cooler with the AMD CPU? It seems very strange that 1st-gen Ryzen rated an NH-U12S, a cooler nearly as capable as its bigger brethren, yet the same cooler was not used on the Coffee Lake CPU. Surely that would have made more sense than picking a cooler intended for rackmount cases.
I do not agree with your estimation of the efficacy of the cooler. But since we do not have any actual test data the argument is moot.
Flying Aardvark - Saturday, April 21, 2018 - link
I was going to put that in my post: use the same Silverstone on the AMD. I would've, and I think that's a smart decision. The Wraiths probably are a little bit better (Cooler Master makes them, last I knew). I would have no qualms picking a cooler that supports both sockets. Makes more sense to me too. I just disagree on using top-tier cooling, and I know you haven't suggested that. I hate the "ultimate results" aspect most tech sites take. I look for average equipment; the more info I can get on an RX 580 vs. 1060, the better. I don't honestly care about the 1080 Ti; even though I'm in my mid-30s and can easily afford it, I'm not one of the gamer kids here.
On the efficiency of the cooler, I respect your disagreement, but these are just hunks of aluminum & copper. Even if for some strange reason it doesn't act like the rest of the chunks of aluminum+copper heatsinks out there with similar mass/surface area, then it can't be too far off in the end.
Peter2k - Thursday, April 19, 2018 - link
I love how there is so much drama over something like a script gone haywire for Anand. I've seen 5 reviews by sites/people who are just as trustworthy as Anand (like PCPer).
Anand is the outlier
Other reviewers are using the meltdown/spectre patches as well
Know what they don't use?
Scripts
Peter2k - Thursday, April 19, 2018 - link
If Ryzen is actually that good, I don't mind. It'll replace my Intel system soon enough.
nyucca - Friday, April 20, 2018 - link
PCPer is not reliable. They are shills. Have you not seen AdoredTV's video? They've been exposed.
MDD1963 - Friday, April 20, 2018 - link
Mr. Adored "Everything is Rigged Because I Have an Accent!" TV?
SkyBill40 - Friday, April 20, 2018 - link
@MDD1963
Disprove the statements on Adored TV, point by point, if you can. We'll wait.
mapesdhs - Saturday, April 21, 2018 - link
Or as Molyneux would say, not an argument. 8) It's funny how often one realises people are not actually saying anything constructive once one filters out the jokes, insults, etc.
Cooe - Friday, April 20, 2018 - link
Techradar has identical results to AnandTech.
Flying Aardvark - Saturday, April 21, 2018 - link
"Techradar has identical results to AnandTech."*crickets*
bairlangga - Friday, April 20, 2018 - link
All of these discrepancy theories related to Meltdown/Spectre microcode updates or whatnot only reinforce my initial deduction that the way Intel took over the crown from the Athlon 64 X2 around 10 years ago was by using an insecure branch prediction method, and by investing even more heavily in that method in subsequent iterations of Core processors. It simply took 10 years for the worst of that intent to unravel.
MDD1963 - Friday, April 20, 2018 - link
Yes, I recall you predicting everything that has unfolded! :/
bairlangga - Friday, April 20, 2018 - link
really :P
aliquis - Friday, April 20, 2018 - link
The reason AMD fell behind with FX is that they went with 2 ALUs per core, shared FPUs between two cores, and more cores instead, meaning each core got weak; whereas Intel has 4 ALUs, and each core has its own two FMAC-capable units, wider (256/512-bit) at that, meaning Intel can execute a thread faster than FX. At least as long as the instructions are suitable for that. HT helps keep the cores busy, whereas AMD can't really run one thread on two cores as compensation.
There is something wrong with the power consumption measurement on the Intel Coffee Lake platform. Looking back at Anandtech's previous review, we find a jump from 86W to 120W (while the 7700 platform and Ryzen 1700 results are consistent):
https://images.anandtech.com/graphs/graph11859/918...
https://images.anandtech.com/graphs/graph12625/971...
Ian Cutress - Friday, April 20, 2018 - link
We found that different motherboard vendors were doing very odd things with uncore frequency - up to a 700 MHz difference, and adjusting it from BIOS to BIOS. All the ones we asked stated that they were within Intel spec, and Intel doesn't disclose what the spec actually is for the chip. At 3.7 GHz on one board we saw 86W peak; at 4.4 GHz on another we saw 119W. Both of these sets of results were in our database, Bench, a couple of weeks after the review. I had planned on writing something about it, but other topics always take over. For this review, we took the latest BIOS for the board and went with the power results from our automated testing for this piece. It's likely that other minor enhancements have been made. I've checked the core frequencies at least, and they have parity.
lfred - Friday, April 20, 2018 - link
Thank you Ian for taking the time to answer while being in the storm of comments.
I hear you, and that's disturbing. As a customer, getting an idea of the power consumption of these CPUs is something I look for, and if motherboard manufacturer "tweaks" can lead to a 40% power draw increase, that's significant to me.
As far as the Ryzen performance "discrepancies" between reviewers go, I have the feeling motherboard manufacturers and BIOS implementations of AMD's "features" and other RAM tweaks might also be in great part the culprit. Good luck investigating; I hope it will be as exciting as it is exhausting to figure this one out. Cheers
Osaka2407 - Friday, April 20, 2018 - link
Come on guys, Intel and AMD fans and fanboys, stop that BS talk and let Ian review his numbers as he stated in the article.
BTW, I've got an idea why AMD could pull away in this review:
1. Memory on the AMD platform is 2933 MHz because it officially supports that
2. Memory on Intel is 2666 MHz for the same reason
3. Not only the major memory timings have an impact on performance. Subtimings are also crucial and, as Steve over at GamersNexus discovered, they have a big impact on performance.
First of all, take a look at the R7 2700X memory comparison:
https://www.gamersnexus.net/images/media/2018/cpus...
Baseline is a Corsair LPX 2x8GB 3200MHz kit, and the GeiL is also a 2x8GB 3200MHz kit.
Second take a look at this:
https://www.gamersnexus.net/images/media/2018/cpus...
So yeah, overall subtimings also matter, AND the chipset matters, because X470 is new and needs a little more love to be as reliable as X370 now is.
Pay attention: I am not saying that Steve's, Ian's or others' tests are inaccurate, and I am not saying that there is, but there COULD BE, a large gap between reviews depending on the MoBo and memory kit combinations reviewers used. Considering all of this, a stock 8700K, which memory manufacturers had a lot of time to adjust for (and for sure they were working somewhat together with Intel; XMP is an Intel standard afaik) and which is not as sensitive to subtimings, could be outpaced, when running at stock CPU and memory frequency, by the correct combination of Ryzen 7, MoBo and memory, also at its stock settings (and that means a higher memory clock than Intel). So don't call every outlier BS before reviewing the data, because that outlier can be a very accurate test of what Ryzen is capable of when components are chosen carefully.
mapesdhs - Monday, May 14, 2018 - link
Indeed, or another way of putting it: a bell curve of results means outliers at both ends must exist, otherwise there wouldn't be a bell curve, but it's bad thinking to critique a result on the upper end while ignoring the outliers on the lower end, which there must surely be. There's a bias on tech site forums to go after any article that has higher than perceived average results, but that's just the modern disease of lowest-common-denominator thinking.
mapesdhs - Monday, May 14, 2018 - link
(I meant to say 'upper end' rather than 'other end'; when will we be able to edit our posts?? This is so 90s...)
just4U - Friday, April 20, 2018 - link
As an owner of the 8700K, 1600 and 1700, and still rocking my 4790K on another system, I must say that I was underwhelmed by the 8700K, as I expected more than what I received, and actually fairly impressed by the improvements AMD was able to make with their Ryzen processors. Why? Because I went in expecting less.
This update (for me) brings one interesting thing to the table: that awesome cooler on the 2700. I'm down for that, and for a higher-end system I would buy the CPU over any other in a heartbeat if I needed to build another system. I think it makes sense for people who are still on 2000/3000-series Intel (or older, including variations from AMD like the FX line), as you can notice some fairly substantial gains without doing any benchmarking at all, but for anyone on 4-series or later Intel (or last-gen Ryzen), upgrading is more of a want than an actual need.
Still, a great new product from AMD and, as always, an interesting read from Ian.
mapesdhs - Monday, May 14, 2018 - link
Careful, making balanced and sensible posts like that can attract fanboy flames from both sides. :D
jor5 - Friday, April 20, 2018 - link
If you have an element of doubt over your numbers then you should pull the benchmarks until you validate them. Otherwise, stand by them.
Zyphod - Friday, April 20, 2018 - link
It's going to be interesting to see how Intel counters this, and the future Zen 2 architecture, with their current monolithic die.
Ryzen may be slower in single-threaded performance, but their approach (lots of small dies glued together via Infinity Fabric) does mean they get a much higher yield. As core counts go up and node size goes down, the number of defective dies goes up. When you have a few large dies on a wafer vs lots of small dies, there will be a massive amount of waste, thus driving prices up (rough yield sketch at the end of this post).
Intel will probably have to think of a new design, and quickly. Ryzen 3 based on Zen 2 is only a year away (with an 8-core i7 due to land late this year), with potentially even more cores (10/12??).
AMD is also likely going to release a Zen+-based Epyc this year, where they will really eat into the performance gap on the server side.
It will also be interesting to see how much more reviewers can get out of Ryzen 2 using even faster memory, high-end cooling solutions (5.8 GHz having already been achieved with LN2) and any other further tweaks.
Would be interesting to see some gaming benchmarks with low/medium spec GPUs, given the current GPU shortage.
And just out of interest, what about disabling a CCX? The 2200G and 2400G got some good results having only a single CCX.
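As for the yield point above, here's a rough sketch using the classic Poisson yield model, Y = exp(-A * D). The defect density and die areas are made-up illustration values, not actual Zen or Intel figures:

    # Rough sketch: Poisson yield model, Y = exp(-A * D).
    # Defect density and die areas are illustration values (assumptions).
    import math

    D = 0.2              # defects per cm^2 (assumed)
    small_die_cm2 = 2.1  # roughly one 8-core die (assumed)
    big_die_cm2 = 8.4    # hypothetical monolithic die of 4x the area (assumed)

    yield_small = math.exp(-small_die_cm2 * D)  # ~0.66 of small dies are good
    yield_big = math.exp(-big_die_cm2 * D)      # ~0.19 of big dies are good

    print(f"small die yield: {yield_small:.2f}")
    print(f"big die yield:   {yield_big:.2f}")

Same total silicon area, but gluing four small dies together means most of the wafer is sellable, while the one big die throws most of it away.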
ShadowMrrgl - Friday, April 20, 2018 - link
There's no way an Intel 8700K can do such low fps in Rocket League. I can rock this game at 377+ fps at 1080p with an eVGA 1080 SC2 at stock... and 250+ fps at 1440p. OK, I'm at 4.8GHz on all cores, so a stock 8700K might get lower fps, but there's no way fps could drop to an avg of 274 fps. (Can't speak for the other games tested since I don't have a single one of them installed on my sys.)
So we wait for AnandTech to correct those bugged numbers.
nzweers - Friday, April 20, 2018 - link
Are you testing on a secure system? In other words, did you apply your motherboard spectre/meltdown fixes? Do you have all the Windows patches? AMD does.
aliquis - Friday, April 20, 2018 - link
People should have them, and they don't affect games much. Even on my EVO SATA SSD I only got a 10% drop in the 4K benchmark. Also, I'm not sure the AMD systems actually got the AMD Spectre v2 mitigation yet; AMD wasn't immune to it.
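If you want to check whether your own Windows box actually has the mitigations enabled, here's a rough sketch (it assumes Microsoft's SpeculationControl PowerShell module is already installed via Install-Module SpeculationControl):

    # Rough sketch: query Spectre/Meltdown mitigation status on Windows by
    # shelling out to Microsoft's SpeculationControl PowerShell module
    # (assumed to be installed already).
    import subprocess

    def speculation_control_report() -> str:
        cmd = [
            "powershell", "-NoProfile", "-Command",
            "Import-Module SpeculationControl; Get-SpeculationControlSettings",
        ]
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return result.stdout

    if __name__ == "__main__":
        # Look for lines like "Windows OS support for branch target
        # injection mitigation is enabled: True" in the output.
        print(speculation_control_report())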
ACE76 - Friday, April 20, 2018 - link
Meltdown is the serious bug and it's the one that everyone rushed to patch. Spectre is considerably less damaging and that's the one that affected AMD... but no performance loss happened due to patching Spectre. Meltdown, with the official patches from Microsoft and Intel, affected Intel performance by up to 35%. It is a serious blow to Intel's performance.
edsib1 - Friday, April 20, 2018 - link
Did you use the correct cooler for the Intel CPUs??
The Silverstone AR10-115XS only has a range of 4.83 - 19.49 CFM. The AMD Wraith has a much higher rating at 55.78 CFM.
atomycal - Friday, April 20, 2018 - link
A bit unfair that the i7-8700K was tested with DDR4-2666 and the R7 2700X with DDR4-2933, and the game suite is limited and in favor of AMD's multiple cores.
But given the stunt Intel pulled with multi-core enhancement at the 8xxx-series launch... I'd say bravo! fwck 'em!
FMinus - Friday, April 20, 2018 - link
They are using the chips to their specifications as listed by Intel and AMD. Intel Coffee Lake has support for DDR4-2666 and Pinnacle Ridge supports DDR4-2933. They also didn't overclock. That's the fairest review you can get: leaving everything stock, at the max supported memory.
There are plenty of reviews out there which pump the memory speeds up to the maximum the specific platform supports, for you to take a look at. It's refreshing, among the sea of overclocked-system reviews, to read something where the actual chip specifications are taken into account and tested.
Holliday75 - Friday, April 20, 2018 - link
I agree. I prefer to see stock on release and later articles can start tinkering. I don't recall seeing many car reviews being trashed because the reviewer didn't drop the largest turbo they could find under the hood.
FreckledTrout - Friday, April 20, 2018 - link
Agree 100%. I want good stock reviews and also good overclocked reviews. In many cases people want to run stock for noise/heat etc., so it's nice to see what the chips can do at stock.
malakudi - Friday, April 20, 2018 - link
Performance of the Ryzen CPUs seems to be similar to other reviews. It is the Intel scores that are much lower in this review, and I think the reason might be the cooling solution used. The cooler used for the Ryzen 1800X (NH-U12S) is much better than the one used for Intel. If Intel was thermally throttled, then the results are correct as measured, but you should re-check some of the tests with better cooling.
Nope, this isn't thermal throttling.
Think stock RAM, as it should be, no MCE cheating, and proper Spectre 2 patching.
AntiShill - Friday, April 20, 2018 - link
Anandtech really screwed up with the testing here, by totally not controlling the HSF variable correctly.
They used a lousy Silverstone AR10-115XS for their test bed. See:
http://www.silverstonetek.com/product.php?area=en&...
The thing has only got 3 heat pipes and a max of 19.5 CFM from a 70mm fan. Even the stock Intel HSFs that come with the non-K CPUs have at least 90mm fans. What kind of bad joke is this?!
If you wanted to go cheap on your air-cooled HSF, you'd at least get a Hyper 212 (whatever the latest modifier: +, Evo, etc.) and use it on all the CPUs under test.
Just because the AMD Wraith coolers are noisy and garbage doesn't give them the liberty to conduct crappy testing by introducing an uncontrolled variable in the HSF.
FMinus - Friday, April 20, 2018 - link
Just the fact alone that it has heat pipes puts it in a class above the stock Intel coolers by default, i.e. it's a better heat sink. How it actually performs I've no idea, but it's better than the Intel stock one; in fact almost every single cooler on the market is better than the Intel one.
Whether the chip throttled with that cooler, I don't know; they're re-evaluating the differences in their review as noted on the first page, so we have to wait and see.
Madpaulie - Friday, April 20, 2018 - link
I'm running an i5-3570K @ 4.5GHz, a Z77 Sabertooth MB, 8 gigs of 1333MHz DDR3, an MSI Gaming GTX 980 Ti, and a couple of Samsung SSDs. I am really looking forward to upgrading to Zen 2; there hasn't been anything of interest for a very long time.
Fallen Kell - Sunday, April 22, 2018 - link
I'm on roughly the same platform. I have an i7-3770K @ 4.5GHz on an Asus P8Z77-V Premium motherboard, with 16GB of 1866MHz DDR3 (10-11-10-30-2T). I've been holding off on a new setup until something has a real performance difference. If I were running at stock speeds, going to an i7-8700K would give me about a 40% increase in performance, but given that I have a 28% overclock (I had it up to 4.9GHz, but dialed it back for less wear and tear as that required increasing voltages; the 4.5 is at stock voltage), it simply hasn't made sense to upgrade yet.
At some point it will, but we have not hit that point yet. Given that over the last 3-4 years Intel has focused on power optimizations as opposed to additional performance, the relative performance of the CPUs has not really changed much. I have been hoping that with AMD finally competitive again, it will cause some real fight in the CPU performance race again, with the 30-40% performance gains we used to see when the manufacturing process nodes were cut in half, back in the late 90's through about 2010...
Alphasoldier - Friday, April 20, 2018 - link
Really? Rocket League AVG:
8700k 274 fps
Ryzen 2600 317 fps
GTA V AVG:
8700k 91 fps
1800X 90 fps
YOU'RE JOKING, RIGHT?
Do you even check the data before you publish them?
How am I even supposed to believe this?
I bet that Ryzen would win here even in Far Cry Primal.
fallaha56 - Friday, April 20, 2018 - link
It's just because of the proper RAM, no MCE cheating and full Spectre 2 patching...
Alphasoldier - Friday, April 20, 2018 - link
MCE cheating? It is a feature made by MB vendors; not my problem that AMD can't clock past 4.1 GHz on all their CPUs. Ooooh, AMD sucks at OC, let's punish everyone.
Proper RAM? Again, the Intel IMC is capable of higher frequencies than AMD's; not my problem that AMD is not capable of supporting higher frequencies without stability issues. 3200 MHz is now the standard, yet many Ryzen mobos still dream about reaching and running stable at such a frequency. Paying 400 bucks for a CPU and another 200 for a mobo which can't handle it is a joke.
Yeah, Spectre 2 patching halved the Rocket League fps.
I understand that you are an AMD fanboi with low income, so you are very happy you could finally afford 6 cores without selling your kidney, paired with low-clocked RAM producing the occasional BSOD, whilst using a Vega GPU as a room heater and dancing to the rhythm of your cheap rainbow cooler, because that's trendy now, to show everyone you are LGBT positive. But stop with this; you are bad at trolling. If you actually believe the things you publish, then you should visit a lunatic asylum, or stop reading that liberal swamp called reddit.
Fallen Kell - Monday, April 23, 2018 - link
And on that same point, why isn't Intel actually supporting all those settings? MCE is motherboard-based; thus, you need a specific motherboard that supports it. This article is testing the CPU, NOT THE MOTHERBOARD. The same goes for the memory support. Intel's official memory support is DDR4-2666. They do not guarantee anything faster than that will work. Sure, it might work, but if you get a CPU which doesn't work with faster memory (I have never personally witnessed this), Intel will tell you to go pound sand, because they only officially support DDR4-2666.
Why isn't Intel simply supporting these speeds and settings? AMD decided to support DDR4-2933. No reason why Intel can't as well, other than simply not wanting to go through the certification processes and/or not wanting to take the risk on their warranty...
So again, AT is testing supported configurations, not unsupported configurations, and giving the results. There are plenty of customers (i.e. the entire business community, who have admins who read sites like AT) that will not run unsupported configurations.
mapesdhs - Monday, May 14, 2018 - link
Reviewers can't win in this regard. Stick to official specs and one side will moan that chip A "can do" higher even though it's not officially supported, regardless of whether any mbd vendor decides to include such support in some way. OC the RAM (which is very mbd-specific in how well that can work) and the other side will say that's oc'ing chip A more than chip B, or possibly oc'ing chip A but not chip B at all, depending on how it's done. As GamersNexus has shown, there is huge variability in testing setups.
Remember way back when AT used a factory-oc'd GTX 460 when reviewing the 460 at launch? (The EVGA GTX 460 FTW, oddly enough the very model I bought.) There were various reasons they did it, but it caused a hell of a row. I thought it was a good thing because where I lived at the time oc'd cards were cheaper than stock models, but other people were outraged. I can see why many didn't like it, but they had the freedom to check other sites that were reviewing with stock cards (the reference clock was 675MHz, the FTW was 850MHz). I thought it was ok because the product was available to buy at a very cost-effective price; buying a stock card made no sense, at least among the choices I had. It wasn't as if they'd taken an 800MHz Sonic Platinum, oc'd that to 850 and then claimed those results were a baseline comparison to everything else. But then, someone in different circumstances to me would have a different point of view, eg. if their relevant retail source didn't have the FTW or only sold much slower cards (quite a few models were set around the 700 to 730MHz mark, including the standard Sonic, Gigabyte WindForce and various others).
Stock or oc'd? What does that even mean when there are so many different variables involved? GN showed that memory subtimings can make a significant difference for AMD, and mbds are now getting pretty good at selecting these efficiently when left on Auto.
In the end, as long as it's clear what the review setup is doing, I don't see that it matters, it's just more useful data points along with all the data from other sites.
GreenMeters - Friday, April 20, 2018 - link
Whether the present results are correct (hopefully) or incorrect, will the story title be bumped up the homepage and changed to indicate updated information? Hoping it'll be easy to know when to jump back into the article.
Thank you Anandtech and Ian for a great review. Too bad, so many don't understand your methodology. I hope you guys have a chance to post an overclocking review for the Ryzen 2 processors. I appreciate your hard work and enjoyed reading this review. I will be coming back to read the parts that are missing.
I am looking forward to upgrading my primary system to a Ryzen 2 system in the next few weeks.
MDD1963 - Saturday, April 21, 2018 - link
"Too bad, so many don't understand your methodology. "Yes, *that's* the issue here, not the results being 100% opposite what 99 other reviewers' results show....
maroon1 - Friday, April 20, 2018 - link
https://tpucdn.com/reviews/AMD/Ryzen_5_2600X/image...
There must be something wrong with the AnandTech review's gaming benchmarks. Most reviews are showing that the 8700K is winning in gaming. Yet your own review shows Ryzen 2 winning in ALL CASES?!
techpowerup puts even the i5-8400 over the 2700X
fallaha56 - Friday, April 20, 2018 - link
Or maybe they set Ryzen up right with proper power profiles and patched the Intel rig with Spectre 2 as they should?
And didn't run wildly overclocked RAM.
mahoney87 - Friday, April 20, 2018 - link
Did you see the RL benchmark where the 2700X gets 240 fps more on average than the 1800X? They fucked up majorly.
Alphasoldier - Saturday, April 21, 2018 - link
If you need to "set Ryzen up right" and it does not work as intended out of the box, it only shows how poor of a product it is then.So stop with the excuses. Intel is not as much affected by OC ram as AMD, so stop spreading your lies, you are obviously technically illiterate.
Timur Born - Friday, April 20, 2018 - link
Too bad; the Civilization 6 AI test would have been the most interesting, because that game still doesn't make full use of CPU cores. This is even worse with Total War, another title that lets you wait for the AI to finish its turns.
Carmen00 - Friday, April 20, 2018 - link
It's been more than a day since the article came out, and it still hasn't been fleshed out properly. I assume that this is because you are reviewing the data, which is correct and commendable. However, doesn't it then make more sense to temporarily retract the article and publish it, with reviewed data and in full form, in a few days? I'm happy to wait for the quality analysis and results that Anandtech has built a reputation on. In the meantime, it doesn't leave a good impression for readers to come back to the article a day later and _still_ see it unfinished.
MDD1963 - Friday, April 20, 2018 - link
Ferris......? Anyone.........? Bueller......????
RafaelHerschel - Friday, April 20, 2018 - link
Agreed. To be honest this is a minor disaster for AnandTech. Even if their benchmarks are correct, a clarification is in order.
At the very least, they could have replicated a test by another publication to see if they get a similar result.
I'm also disappointed by the fact that so very few games have been tested. In the not so distant past, AnandTech was often not first but would impress with extensive and thorough testing.
I guess a lack of staff has caught up with the site.
danjw - Friday, April 20, 2018 - link
No need to retract it. It sounds like, from what I have read, they did everything right. They ran everything within the specs of the manufacturers of the CPUs. Others did not. They had updated all the Intel runs with new ones on systems that were fully patched for Spectre and Meltdown. One other site did the same and came up with similar results. So, it seems that it is other websites that need to explain their methodology, not Anandtech. Tom's Hardware has admitted that it made the mistake of not fully patching the Intel systems; they are working on fixing their results.
RafaelHerschel - Friday, April 20, 2018 - link
Well, that is a relief… It sounds like they did everything right… No, it doesn't. They tested just a few games and their results are different from most other publications out there. But, hey, it's good that you keep the faith.
At the very least, I would like to see a clarification. For example, they could replicate a test by another publication and they could do one of their own tests manually. After additional testing, they could either stand by their results and give a possible explanation for the discrepancies, or they could retract.
Any publication that is not prepared to do additional testing if they are an outlier is unreliable and irrelevant.
mahoney87 - Friday, April 20, 2018 - link
No they didn't, and if you looked properly you'd see there's some crazy shit going on. A Pentium having better gaming performance than an 8700K in some games? Yeah right. The 2700X having 240 more fps on average vs the 1800X? Hah, yeah, not believing this shit, sorry. They messed up badly somewhere, and as for this BS excuse about Spectre/Meltdown being the reason for Intel's bad performance: how do they explain the gaming benchmarks vs the 1800X?
Alphasoldier - Saturday, April 21, 2018 - link
Nice attempt; the AMD red brigade is working as hard as it can to spread the lies, to misinform, to publish this review everywhere they can so people don't forget. Damn, you are better than the Russians.
Anyway, why not share with us which site it was that got similar results? Tom's Hardware didn't patch both Intel and AMD fully, because there was no BIOS from the MB manufacturer at that time.
29a - Friday, April 20, 2018 - link
I'm with Carmen00; you need to retract the article until it is finished. I'm really interested in the storage portion of the article, but it just says [text] where the info should be. I don't recall anything like this when Anand was running the show.
Just wanted to say thanks to Ian and the anandtech staff for their hard work. I can understand the difficulty of producing these reviews. I have no question that you all act with absolute integrity and strive to use a fair and scientific approach to evaluating these products.
I also appreciate the interview you guys did with Global Foundries prior to this launch and the interview with Intel recently. I think all of these things really educate us about these products.
After seeing the controversy in the comments section for this review I have read several reviews from other sites. There are differences in some of the benchmarks. I appreciate that you all are reviewing your work to see if there was a methodology failure. I would caution anyone from adding their vitriol simply to be a jerk when none of us really know why those differences exist or if the testing methodology used here is deficient compared to other reviewers. I would also note that in some reviews I could not even find how their test bed was set up. We should applaud the openness with which Anandtech performs these reviews and their willingness to take the absolute garbage they do from some in the comments section.
Lastly, I have been coming to anandtech for many years, which I am sure others have too. Ryan, Ian and the rest of the staff have done a great job since Anand's departure and to say otherwise is simply ridiculous.
Keep up the good work.
MDD1963 - Friday, April 20, 2018 - link
"There are differences in some of the benchmarks." THat's an understatement. There are fairly large differences, and, with many processors in completely opposite placings. Hell, AMD did much better here in AMD gaming than even AMD themselves claimed they would! :) AMD's own slides showed 2700X losing in 9 out of 10 games, as expected.Psycho_McCrazy - Friday, April 20, 2018 - link
Here's my take on this:
Ian mentioned on the Precision Boost page that the BIOS option for PB2 is cryptic, and may even lead people to disable it thinking it is something like the Multi-Core Enhancement all-core turbo thing.
There's a small chance that this is the case with other reviewers, whereas Ian discarded some of his test data and restarted.
On the new PB2, at the 3-4 cores that games would engage, this is a BIG difference in frequency, to the tune of 400-500 MHz (3.7 GHz to 4.1-4.2 GHz). That's roughly 13 percent (4.2 / 3.7 ≈ 1.135), which can easily bring about the performance increase in games that is seen here.
Just a thought, may not be correct, but maybe worth investigating...
Ian/Ryan, any comments?
Alphasoldier - Saturday, April 21, 2018 - link
Do you even realize that in other reviews the CPUs ran @ 4.3GHz constantly and still barely managed to get close to the i5s?
fallaha56 - Saturday, April 21, 2018 - link
Not when the Intel systems are properly patched and with HPET on.
Did you not know that these patches are only just out?
John_M - Friday, April 20, 2018 - link
Someone needs to explain the bizarre choice of cooler for the Intel benchmarks. It isn't one I'd heard of - I had to look it up and there isn't much information to be found. It looks rather inadequate to me, especially when compared with the unlikely combination of a Ryzen 2200G and Thermaltake Floe Riing 360 tested only a few days ago.
By the way, for those still waiting to read about it, and in case this article never gets completed, StoreMI looks a bit like Apple's Fusion Drive technology.
Alphasoldier - Saturday, April 21, 2018 - link
Obviously, it was rigged: some popular YouTuber posted a video of an 8700 throttling on the Intel stock cooler, so they decided to sabotage Intel.
There is no other explanation.
B3an - Saturday, April 21, 2018 - link
Lol being a fanboy for the worst and most corrupt tech company in history. You absolute subhuman cunt.
John_M - Saturday, April 21, 2018 - link
And how, exactly, does that add to the debate?
John_M - Saturday, April 21, 2018 - link
No, I don't think that's true at all, because trying "to sabotage" anyone is pointless when there's no transparency. That's why I said it needs to be explained. I want AMD to make good products and I want them to do well in tests, but I want them to do it fairly. So I want to know why the Ryzen 1800X was tested with a much better cooler than the Intel processors. This nonsense is damaging for both Anandtech and AMD.
fingerbob69 - Sunday, April 22, 2018 - link
Well, if we're talking nonsense: the test was done with the cooler provided in the box by AMD, the Wraith Prism. If the test was to be properly fair, AnandTech should have used the 'out of the box' cooler provided by Intel. Now what is the name for that cooler? P.O.S.!?!?
"Cooling: Wraith Prism RGB / AMD Wraith Stealth / Noctua NH-U12S / Silverstone AR10-115XS"
Wraith Prism RGB is better than the Silverstone AR10-115XS? ....fantastic added value from AMD!
sharath.naik - Friday, April 20, 2018 - link
Serves Intel right for sitting on 14nm for over 4 years. Once again AMD processors are at the top.
Alphasoldier - Saturday, April 21, 2018 - link
A 14nm 6-core without HT (the i5) is still owning AMD's 12nm 8-core with HT, their flagship, in gaming. Wouldn't call that the top.
The 8-core Coffee Lake will destroy everything, and given the 2700X's power usage after overclocking, and the upcoming 2800X which will use even more power, Intel doesn't even have to worry about their TDP.
mapesdhs - Monday, May 14, 2018 - link
"...destroy everything..."Why the use of such emotional language? It's just a ruddy CPU. :D I swear sometimes these forums make tech discussions read like purile Transformers battles...
tmiller02 - Friday, April 20, 2018 - link
There is a lot of nonsense in here towards Ian... he did everything right... he took the time to secure both platforms and gave them a level playing field... we knew ahead of time that Meltdown had up to a 35 percent performance penalty on Intel... why is this hard to believe... the only reviewers who took the time to secure the platforms first were AnandTech and TechRadar... also, if you remember right, the wccftech preview showed the same results. AMD themselves didn't bank on the Meltdown penalty as it pertains to Intel... but I'm sure they'll take it! Crown the new king... AMD has Ryzen! Intel, what are you going to do... completion is good!!!
Flying Aardvark - Saturday, April 21, 2018 - link
^^ this man speaks the truth.
RafaelHerschel - Saturday, April 21, 2018 - link
Sadly Ian Cutress has lost the plot. What he should do is replicate a few tests by other publications. That would give useful information. What he seems to be doing is going over data that he has already collected.
But thank you for stating that ‘completion’ is good. That put a smile on my face.
Ranger1065 - Saturday, April 21, 2018 - link
Anandtech needs to be age restricted. Far too many near retarded children commenting here. Bring back ddriver and send the kids back to wccftech.
mapesdhs - Monday, May 14, 2018 - link
Kinda agree. Remember the scene at the start of 2001 where the two bands of apes face off against each other at the water hole, yelling and screaming? Some of the exchanges here remind me of that. :D
psychok9 - Saturday, April 21, 2018 - link
Hello, is there any news update about these great results?
Ryan Smith - Saturday, April 21, 2018 - link
We'll have something for you next week. =)
psychok9 - Sunday, April 22, 2018 - link
I appreciate it a lot! ^_^
xnor - Saturday, April 21, 2018 - link
I don't get your numbers. In the 8700k review you wrote 87W total package (full load) power.
In this review you're now up to 120W for the 8700k total package (full load) power.
Which is right? Did your methodology change?
lfred - Saturday, April 21, 2018 - link
Hi, I asked the question and Ian replied yesterday. On the Intel platform you have a feature called MCE (Multi-Core Enhancement) that allows all cores to run at the top turbo frequency rather than just one, as on the stock setting. The thing is that some motherboard manufacturers have it on by default, some don't, and even with the same motherboard, from one BIOS to another, manufacturers change the way it works... so with MCE on you get 120W, with MCE off you get 87W... (I think that MCE can improve productivity performance, but may lower gaming single-thread performance if the game favours MHz and doesn't use all the cores.)
xnor - Saturday, April 21, 2018 - link
Thanks! Do you happen to know the clock speed of the 6 cores with MCE?
Alphasoldier - Saturday, April 21, 2018 - link
It is the highest turbo setting. For example, the 8700K has turbo @ 4.7GHz if only one core is used, so MCE sets all the cores @ 4.7GHz.
It is better to overclock it manually, cos the MCE voltage is usually way higher than needed and may make your CPU unnecessarily hot.
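To see what that does in practice, a rough sketch (the per-active-core turbo bins below are commonly reported 8700K values, so treat them as an assumption, not something from this review):

    # Rough sketch: stock Intel turbo picks a bin based on active core count;
    # MCE pins every core at the 1-core bin instead.
    # Bins are commonly reported 8700K values (assumption).
    TURBO_BINS_GHZ = {1: 4.7, 2: 4.6, 3: 4.5, 4: 4.4, 5: 4.4, 6: 4.3}

    def stock_freq_ghz(active_cores: int) -> float:
        return TURBO_BINS_GHZ[active_cores]

    def mce_freq_ghz(active_cores: int) -> float:
        return TURBO_BINS_GHZ[1]  # all cores at the single-core turbo

    for n in (1, 6):
        print(f"{n} active: stock {stock_freq_ghz(n)} GHz, MCE {mce_freq_ghz(n)} GHz")
    # With all 6 cores loaded that's 4.3 vs 4.7 GHz, and MCE usually adds
    # voltage on top, which is where the 87W vs 120W package power comes from.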
xnor - Saturday, April 21, 2018 - link
If that is true, then Intel not only has a roughly 10% IPC advantage, but also several hundred MHz, if not GHz, of clock advantage, plus a big efficiency advantage.
Alphasoldier - Saturday, April 21, 2018 - link
The 8700K is easy to overclock; if you get lucky, you can reach 5.0GHz without delidding, just on air with something like a Noctua NH-D15.
If you want to be safe: https://siliconlottery.com/collections/coffeelake
Even Intel's high-end lineup, the 8/10/12-core parts, is able to reach 4.7 - 4.8 GHz; the problem is just cooling.
mapesdhs - Monday, May 14, 2018 - link
I had to read that twice to believe my eyes. So a 5GHz oc is getting "lucky" on a CPU that already has an official max turbo of 4.7? Blimey, the state of modern oc'ing really is woeful. The 2700K will happily oc to 5GHz on air with just one fan and a moderate cooler like a TRUE, with good temps and decent voltage, and that's a chip with a max Turbo of only 3.9. If this is what it's come to, where a mere +300MHz bump over the max Turbo is considered lucky, then oc'ing as a thing is dead; the CPUs and mbds are just getting better and better at doing it automatically, slowly including the concept in various official ways: more effective Turbo, XFR, and so on.
I don't see the appeal of oc'ing an 8700K at all; it's such a small difference over what the chip can already do. I'm still running a 5GHz 2700K, and it still holds up very well against modern products, especially with a mod BIOS that allows for NVMe boot, or a 950 Pro which has its own boot ROM.
NWRMidnight - Saturday, April 21, 2018 - link
There are rumors that Intel asked reviewers to disable HPET when they benchmarked, and that it skewed the results in their favor:
http://hardwarebg.com/44332-ryzen-7-2700x-ryzen-5-...
John_M - Saturday, April 21, 2018 - link
It's a shame I can't read Bulgarian! In cases where a manufacturer makes such a request of reviewers, they must absolutely declare it in their reports, and the reason for the request should be explored.
John_M - Saturday, April 21, 2018 - link
I notice that Alphasoldier's comment about Bulgaria and my response have both been removed. Why is that?
ijdat - Saturday, April 21, 2018 - link
Apart from the usual "which is fastest" arguments, I'm surprised nobody has picked up on power consumption. The 2700 has by far the lowest power consumption of any chip tested for a given amount of processing: speed is typically 20% lower than the 2700X, but power consumption is less than half. I thought Anand usually provides this as a result (total energy to complete a given test), but it's missing here.
This would make the 2700 an excellent choice for quiet SFF systems which want a fast CPU but also want to keep power down to reduce noise and heat.
SaturnusDK - Saturday, April 21, 2018 - link
GamersNexus found that if you lock the frequency to 4.1GHz on the 2700X, you can undervolt it to 1.175V and get about 2/3 the power consumption for a marginal performance loss (less than 2%) compared to stock voltages and frequencies.
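That saving is roughly what the usual dynamic-power relation predicts. A sketch, assuming a stock boost voltage of around 1.40V (that voltage is an assumption for illustration; GN measured theirs):

    # Rough sketch: dynamic CPU power scales roughly as P ~ C * V^2 * f.
    # At a fixed 4.1 GHz the frequency term cancels, so the predicted
    # power ratio is just the square of the voltage ratio.
    V_STOCK = 1.40       # volts, assumed stock boost voltage (illustration only)
    V_UNDERVOLT = 1.175  # volts, the GamersNexus undervolt

    power_ratio = (V_UNDERVOLT / V_STOCK) ** 2
    print(f"predicted power ratio: {power_ratio:.2f}")  # ~0.70, i.e. about 2/3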
I thought that was one of the most impressive results about these new Ryzen CPUs; the power consumption tweaks GN showed were really impressive. IIRC, Steve summarised it by saying it wasn't that one could achieve higher ocs than before; rather, the voltage required to achieve a particular oc was now considerably less than on 1st-gen Ryzen.
Thank you Ian for testing on the latest Win10 patch.
All this because of 1080p gaming benchmarks... All this circus for a benchmark:
1. with an enthusiast GPU
2. with a high end CPU
3. at 1080p
4. and affecting only 144Hz users
This bench needs to be gone. It is misleading and inaccurate depending on whether the GPU is the bottleneck or not. Joe Blow, looking at these, doesn't understand that buying an RX 580 is not going to get him the same thing as the results of these stupid CPU benchmarks at 1080p.
Joe Blow is not going to know until he sees budget, high-end and enthusiast GPUs in play with his intended CPU purchase. WE KNOW, but they don't.
All this for a stupid bench that impacts 1080p @ 144Hz users.
I have a 1080 Ti @ 2160p, and I can tell you that this stupid bench doesn't do jack in my situation... but the multi-threaded results do.
wizyy - Saturday, April 21, 2018 - link
Well, although admittedly there are users that aren't interested in 1080p 144Hz performance numbers, there is a LARGE number of players that need exactly that.
The cybercafe that I'm administering, for one, has 40 PCs with 40 144Hz monitors.
eva02langley - Monday, April 23, 2018 - link
My point is that by looking at the numbers, you can get the wrong idea.
Unless you test a budget, mid-range and high-end GPU at 1080p, 1440p and 2160p with a specified CPU, you don't get a clear picture.
As of today, this bench is only specific to 1080p @ 144Hz, which represents a really small % of potential users.
Like I was saying, I am at 2160p, and this renders this bench totally useless. GPU bottlenecks are going to be more and more present in the future because resolutions are just increasing.
mapesdhs - Monday, May 14, 2018 - link
There aren't large numbers at all. The number of gamers who use high-frequency monitors of any kind is a distinct minority. The irony is they're resensitising their visual system to *need* higher refresh rates, and they can't go back (ref a New Scientist article last year, IIRC). IMO this whole push for high refresh rates is just a sideways move by the industry, because nobody bothers introducing new visual features anymore; there's been nothing new on that front for many years. Nowadays it's just performance, and encouraging refresh is one way of pushing it. How convenient that these things came along just as GPU tech had become way more than powerful enough to run any game at 60Hz locked.
(I was replying to wizyy btw; why does the reply thing put messages in the wrong place? This forum system is so outdated, and we still can't edit.)
aliquis - Saturday, April 21, 2018 - link
You are simply wrong.
Doing, say, 4K benchmarks would just make people think it doesn't matter which CPU you have and that all are the same for gaming, which is totally wrong and inaccurate.
Benchmarks for CPU game performance should definitely be done at a low resolution and with a powerful graphics card. Sweclockers still did 720p medium. The problem with medium is that you may lower the load on the CPU for things such as physics and reflections. That's still valid for high-fps gamers, but maybe it should be combined with a higher setting too, in case that uses more CPU; Ultra as the worst-case scenario.
Your suggestion is the opposite, and basically results in no data, at which point you could just as well not benchmark games whatsoever. Instead, do it at a low resolution and then simply conclude something like "Even an i3 or Ryzen 3 is enough to achieve a 60 fps avg experience", for instance, if that were the case. That would still be accurate and useful, and people could decide for themselves how much they care about 140 or 180 fps.
All these idiots who claim the Intel lead is only there at low resolutions are wrong and fool others. The Intel lead in executing game code is always there; it's just that you of course need a strong enough GPU vs settings and resolution to be able to appreciate it. But that runs in both directions. On YouTube someone tested, for instance, Project Cars on the new CPUs and had a little above 50% GPU load, so obviously the GPU wasn't fully used and the system was bottlenecked by CPU performance, yet only just over 20% of the CPU was in use. To the uneducated that may seem like "ooh, the Ryzen is so powerful for games, with so much headroom", but it's not, because clearly one (virtual) core was fully utilised and couldn't offer more performance, and the rest, the unused capacity, was and is irrelevant for game performance because the game isn't using it anyway. It doesn't help to have unused cores. It does help to have more powerful ones, though.
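The "only 20% CPU used" trap is easy to show with arithmetic. A rough sketch with made-up per-thread loads matching that scenario:

    # Rough sketch: aggregate CPU% hides a single-thread bottleneck.
    # On a 16-thread CPU, one thread pegged at 100% plus a few lightly
    # loaded helpers can read as "just over 20%" overall (loads are assumed).
    per_thread_load = [100, 60, 50, 40, 30, 20, 15, 10] + [0] * 8

    aggregate = sum(per_thread_load) / len(per_thread_load)
    print(f"aggregate CPU usage: {aggregate:.1f}%")         # ~20.3%
    print(f"bottleneck thread:   {max(per_thread_load)}%")  # 100%, CPU-bound anyway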
aliquis - Saturday, April 21, 2018 - link
And an avg of 144 fps isn't enough. Lows matter, and even then, even shorter frame times would get you more up-to-date images with vsync off.
But sure, if all frame times are below 1/144 s you're likely fine in that situation. But a few at 1/50 s isn't perfect.
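A rough sketch of how an average near 144 fps can hide those slow frames (the frame times are made up for illustration):

    # Rough sketch: a high average fps can hide stutter. The 144 Hz budget
    # is 1/144 s (~6.94 ms) per frame; a few 20 ms (1/50 s) frames blow it.
    frame_times_ms = [6.5] * 140 + [20.0] * 4  # assumed: mostly fast, a few slow

    avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
    n_worst = max(1, len(frame_times_ms) // 100)
    worst = sorted(frame_times_ms)[-n_worst:]   # the ~1% slowest frames
    low_1pct_fps = 1000.0 / (sum(worst) / len(worst))

    print(f"average: {avg_fps:.0f} fps")       # ~145, looks fine
    print(f"1% lows: {low_1pct_fps:.0f} fps")  # 50, the stutter the average hides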
mapesdhs - Monday, May 14, 2018 - link
144 isn't enough? :D That's hilarious. Welcome to the world of gamers who have sensitised themselves away from normal refresh rates and now can't go back. You're chasing moving goalposts. Look up the article in New Scientist about this.
It doesn't matter which CPU you choose for gaming. That's the point, but people like to dig up tech from 10 years ago to prove one vs the other. Games are going multithreaded, and even Intel is pushing this. So a 1080p, heavily single-threaded gaming benchmark is misleading unless you like living in the past. You win with Ryzen @ 1440p or above, and you win with future highly multithreaded games. But nope, let's just test World of Warcraft at 720p to show Intel's dominance, because that's the future?
invasmani - Sunday, April 22, 2018 - link
I'd prefer they test EQ at 480p... that is the future!
GreenMeters - Wednesday, April 25, 2018 - link
Why stop at VGA? Benchmark 240p video rendered in 80-bit x87 algorithms!
eva02langley - Monday, April 23, 2018 - link
"Doing say 4K benchmarks would just make people think it doesn't matter which CPU you have and all are the same for gaming"Well, you are totally right on the first part, they don't matter much. Resolutions are just exploding and 4k will become the new standard when the new console generation is going to be released. That means 2160p will become the norm and GPU bottleneck will be even more present than it is right now. Right now the only way to have CPU bottleneck is using a 1080 GTX at 1080p. Not a single sane person would spend that much for rendering 1080p especially for running LoL, Fortnite, WoW, DOTA, CS or RL.
And by the way, we are fighting over results with a +-5% difference in gaming benches at 1080p. A storm in a glass of water.
Fallen Kell - Wednesday, April 25, 2018 - link
Well, unless Nvidia and/or AMD have a GPU up their sleeves which is 2-4x faster than what they currently have, and can manufacture and sell it for only around $150, the next-generation consoles will not be 4K resolution, but simply upscaled 1080p.
In the computerbase.de review, tighter timings alone gave Ryzen between a 3% and 14% FPS increase in game titles at the same RAM frequency, melting the difference with an 8700K running the same memory config down to a 3% to 9% difference in Intel's favor :) Who says RAM config doesn't matter :)
MDD1963 - Saturday, April 21, 2018 - link
So days later, still no answer as to why only AnandTech has such a "Great Disparity" in gaming results compared to... well, everyone? (Can't they double-check just the 2700X and 8700K in a few key games in something under 4 days?) :)
Ryan Smith - Saturday, April 21, 2018 - link
We've been working on it since Thursday and we'll have a proper, detailed analysis for you guys next week. =)
MDD1963 - Saturday, April 21, 2018 - link
:) (Thank you, Sir!)
Mugur - Sunday, April 22, 2018 - link
Did the numbers suffer changes? ;-) If not, you are going to start a revolution on the internet... :-)
StrangerGuy - Sunday, April 22, 2018 - link
TechReport outright says the 8700K actually did better after the Meltdown/Spectre patches in Cinebench, but it's OK, you can always dream.
fallaha56 - Sunday, April 22, 2018 - link
Hey smart guy, is that the Spectre 1 or Spectre 2 patches...
coburn_c - Sunday, April 22, 2018 - link
If you compare the 'web' numbers in Ian's Coffee Lake review to this article, there is a huge performance drop across the board, in both the original Ryzen and the Intel numbers. I think that rules out cooler performance as the source of the anomaly. Also, those numbers shouldn't really be affected by vulnerability patching. That article lists the same versions of the benchmarks, on the same browser, and they are not allowing it to update. Those tests should see limited effect from any updates due to Spectre and Meltdown, if I am to believe what we are being told (I'm no programmer, I hated programming.)
There is something to be explained here, but I've yet to hear any good theories.
coburn_c - Sunday, April 22, 2018 - link
Hmm... Guru3D has the 2700X doing 965ms on Kraken, in line with these numbers. Their 1700X in the graph shows 752ms, in line with the Coffee Lake review numbers. Either this new chip is much slower, or those are old numbers. Their Intel numbers are in line with the Coffee Lake review numbers as well. Most certainly old numbers. They make no comment on this aberration. Perhaps this is due to Microsoft patching.
coburn_c - Sunday, April 22, 2018 - link
This is huge: a 20% loss in JavaScript interpreting? And these companies are saying minor performance impact? Please tell me this is a mistake.
SaturnusDK - Sunday, April 22, 2018 - link
It's not a mistake. AnandTech uses the Windows 10 Enterprise edition vs. the Windows 10 Home or Pro that most other reviewers use. The Spectre/Meltdown mitigations on the Windows 10 Enterprise and Education versions are safer, and therefore incur a higher performance penalty.
29a - Sunday, April 22, 2018 - link
Can you supply a source to back up your claim that Win 10 Enterprise and Education are getting different patches than Home and Pro?
Th-z - Sunday, April 22, 2018 - link
It would help if the people at AnandTech could test the Intel and AMD chips under Pro or Home; it's been a while since anyone compared numbers from different operating systems. According to the Steam survey, most people still use Windows 7.
RafaelHerschel - Sunday, April 22, 2018 - link
People who get a new CPU / new system are likely to use Win 10. Anyway, if Enterprise behaves differently performance-wise than Pro, then that is the real story. And AnandTech missed it.
mapesdhs - Monday, April 23, 2018 - link
Though if true, then those here who've accused AT of deliberate deception should apologise.
RafaelHerschel - Sunday, April 22, 2018 - link
There is a lot of speculation because AnandTech isn't able to provide a clarification in a timely manner. I'm going to avoid AnandTech from now on.
If there is a specific reason for the strange results they got (other than AnandTech mucking things up), that would be an interesting story. A serious tech journalist would have realized that right away.
The gaming benchmarks are disappointing anyway, since the scope of the gaming test is very limited.
And right now I don't care that much about the productivity tests, since an 8-core Ryzen is obviously going to outperform a 6-core Intel i7 with optimized software.
NWRMidnight - Sunday, April 22, 2018 - link
Considering they already stated they would be publishing their findings this coming week, you are wrong about them not being able to provide clarification. But it is taking time, because they have to go through and run various tests with various setting changes to determine what is influencing their numbers. This is beyond just running tests and writing up a review. People want to know how they got their numbers; even at stock and defaults, they have to go through and turn BIOS settings, Windows settings, etc. on or off to determine, and be able to explain, the impact on the results.
This pretty much doubles if not triples the time it would take to do just a simple review. And Ian was up for 36 hours straight to get the first review out (partially because they had to scrap 36 hours of testing results and start over), so he had to actually go home and get some sleep so he could tackle this task with an energized and clear mind.
So, how about you learn some patience and realize that they will get the information to us? Or would you rather have it all rushed without any real details? They also most likely have weekends off, so in reality we should not expect anything until midweek this week.
John_M - Monday, April 23, 2018 - link
"partially because they had to scrap 36 hours of testing results and start over"Where was this mentioned and explained, please?
NWRMidnight - Monday, April 23, 2018 - link
He talks about having to throw out some of the data, and the reason, at the bottom of this page of the review:
https://www.anandtech.com/show/12625/amd-second-ge...
Here is Ian's comment on twitter:
https://twitter.com/IanCutress/status/987366840442...
John_M - Monday, April 23, 2018 - link
Do you mean this:
"However, this is just AMD’s standard PB2: disabling it will disable PB2. Initially we turned it off, thinking it was a motherboard manufacturer tool, only to throw away some testing because there is this odd disconnect between AMD’s engineers and AMD’s marketing."
I didn't read the tweet so didn't make the connection.
RafaelHerschel - Monday, April 23, 2018 - link
A publication should either stand by its tests or not. If AnandTech is unsure about their results, they should temporarily pull the benchmarks, since they might be misleading, or place a much larger disclaimer.
Ranger1065 - Monday, April 23, 2018 - link
Great news about avoiding Anandtech in future, Rafael. Please make sure you stick to your word.
RafaelHerschel - Monday, April 23, 2018 - link
@Ranger1065 I'll see this one out, and yes, then I'm gone for good. I don't understand why you are happy about people not visiting this site anymore. I'm pretty sure that AnandTech can use all the traffic they can get.
It's sad that a once-leading publication doesn't seem to have a place in the market anymore.
mapesdhs - Monday, May 14, 2018 - link
Perhaps we're just happy that people who think we live in a completely black & white world won't be hanging around. Nuance is dead in this SJW world.ET - Sunday, April 22, 2018 - link
The 2700 is quoted as $309 in all benchmarks. Should that be $299?
MDD1963 - Sunday, April 22, 2018 - link
FFS, *please* tell me this (the crappy 8700K results) is not a result of using a bastardly patched Windows Enterprise for the gaming tests...!
(For all those folks at home benching games with Windows Enterprise... yeah, that happens :/ )
John_M - Monday, April 23, 2018 - link
What edition of Windows 10 do typical gamers use, and why?
johnsmith222 - Monday, April 23, 2018 - link
In the meantime we have a lot of benches to analyse :)
Sum of web benches:
https://www.3dcenter.org/news/ryzen-2000-launchrev...
oRAirwolf - Monday, April 23, 2018 - link
I would really like to see some storage benchmarks comparing pre- and post-Spectre/Meltdown patching of Intel CPUs, as well as an apples-to-apples comparison of NVMe storage performance against an Intel 8700K.
Silma - Monday, April 23, 2018 - link
It's really hard to generalize on why people purchase the processors they do.
I met a guy online with an over-the-top, super expensive computer. His sole purpose seems to be to come first in the online tests, and he will spend hours fine-tuning the overclocking and whatnot.
Another guy, who mostly plays D3, purchased a 3K-euro computer, which is absolutely over the top for what he plays/does. His reasoning is: I change my computer every 10 years, so when I do, I want the best components.
In my opinion, for most people without special needs (YouTube encoding, 3D rendering and whatnot), most processors have been good enough for years, and there is no reason to invest a lot in a processor when money is much better spent on an x4 PCIe SSD, where you'll instantly feel the difference vs a hard drive or a medium-quality SSD.
To me, the power consumption and noise of the processor, as well as of the graphics card, are a consideration at least as important as price.
The sole reason I would change processor today would be to get a fully Thunderbolt 3 compatible system, since the first TB3 audio interfaces are slowly coming to market.
Then again, I'm sure many people will have other priorities and reasons for purchasing their processors.
Targon - Monday, April 23, 2018 - link
Many of these high-end systems are overpriced, or they come with components that are not worth it for what is being done. With that being said, going for a higher-end CPU does make sense for those looking to keep their systems for a long time. Video cards and storage are areas where people should pay close attention to price.
NVMe drives are VERY expensive if you go up to the 1TB level, so spending that sort of money doesn't make sense when the prices will drop in the next two years. A 250-500GB NVMe drive would make more sense when combined with a traditional hard drive for additional storage. Video cards are also at a premium right now, as is RAM. If the system was purchased back in April of 2017, then yeah, it wasn't too horrible to go for 32GB of RAM back then, but now I'd stick with 16GB due to the prices being so much higher than they were.
For desktops, Thunderbolt isn't all that amazing when you can add a video or sound card to the system that will do what you want it to. Laptops are another story, and you need to pick and choose your priorities.
Flying Aardvark - Monday, April 23, 2018 - link
That's if you're short on money. I don't spend much extra other than on vacations and eating very well. So when I upgrade, which is every 5 to 10 years, I buy the best available, like Silma said. I have a 1TB 960 Pro for that reason; it was $650 and I didn't think twice about it. I need the most reliable, fastest drive available at the time. The 960 Pro is an MLC memory configuration; I've always used higher-end MLC drives and they've served me very well.
I'm not waiting a year or two when I have over $100,000 sitting in my bank account doing nothing. What's the point? It's just $650. Same goes for the rest of my computer, which I only own/maintain one of.
Not everyone is a child, or someone who doesn't spend the majority of their time progressing their career so they can make more money. The price consideration is not the end-all, ultimate rule on hardware for every single consumer.
mapesdhs - Monday, April 23, 2018 - link
Indeed, though I guarantee some here will react poorly to the notion of someone who can make such a purchasing decision. :) Sometimes the best makes perfect sense, and if one can afford it, then why not.
Kaihekoa - Monday, April 23, 2018 - link
Why are Anand's gaming numbers showing the 2700X beating all Intel CPUs when every other reviewer shows the 8700K/7700K still being the best gaming CPUs?
Flying Aardvark - Monday, April 23, 2018 - link
TechRadar & the wccftech preview has the same results. If you have been following Spectre as I have, you would've seen even users find this result. See the top comments here. https://np.reddit.com/r/pcmasterrace/comments/7obo...AT, TR & WCCF's results are accurate. Many reasons for this.
- Many reviewers used the old Ryzen balanced power plan, which cripples the 2700X
- They disallowed the motherboard settings that push the chip over TDP
- They patched as fully as possible for Spectre v1 & v2, which cripples Intel by up to 50% in IO-heavy tasks (streaming textures, for games that do so)
There is, naturally, lots of resistance to the fact that AMD is dominating. It's over for now; time for people to just admit it: they got screwed if they don't have Ryzen. Or at least, they bought the inferior product.
mapesdhs - Monday, April 23, 2018 - link
I don't think those posting so much venom about the results will change their minds until AMD releases something that really is just right out the gate blatantly faster, including for IPC. Another year or two and I think that will happen.Flying Aardvark - Monday, April 23, 2018 - link
There's usually a lag of 6-12 months on any change that's already in place. Any topic, really. Humans aren't very good at seeing what's in front of them. It requires enough people repeating it over and over around them until they accept reality.
Before that reassurance from society around them, they don't have the confidence to see/admit reality. Just something I've noticed. :)
mapesdhs - Monday, May 14, 2018 - link
That's why I like Goodkind's "1st Rule": people will believe a lie either because they want to believe it's true, or because they're afraid it's true.
Kaihekoa - Tuesday, April 24, 2018 - link
I don't know what reviews you read, but the WCCF review shows slight favor to the 8700K in gaming. However, it's an incomplete review of gaming, as they only test at 1440p Ultra, where the GPU bears most of the workload, and only show average framerates. Tech Report doesn't even go into any detail whatsoever on gaming and only broaches the topic in a couple of paragraphs on the conclusion page. Still, even they show a lead for Intel. Anandtech shows the 2700X leading every game in framerate, which is flat-out inaccurate when compared to other reviews.
The Spectre BS has marginal, if any, impact on game performance. I don't know how you get the idea that CPU IO is related to loading textures in a game when textures are loaded into VRAM by the GPU. Looking further into the test setup, Anand uses slower RAM on the Intel platforms, an ECC mobo for Z170, doesn't disclose GPU driver versions, and uses an enterprise OS on consumer hardware. I'm guessing these and/or other factors contributed to the inaccurate numbers relative to other reviewers, causing me to lose a lot of respect for this once well-regarded hardware reviewer. I'll get my benchmark numbers from PC Perspective and Gamers Nexus instead.
Not hating on AMD; I even own stock in both AMD and Intel. They offer tremendous value at their price points, but I spend a lot of money on my PC and use it for gaming, overclocking/benching and basic tasks, which all seem better suited to Intel's IPC/clock speed advantage. I need reviews to post accurate numbers so that I can make my upgrade decisions, and this incomplete review, with numbers not reflective of actual gaming performance, fails to meet that need.
Flying Aardvark - Tuesday, April 24, 2018 - link
Come on, man. I almost stopped responding to replies like this. WCCF benches the base 2700; of course the 8700K wins, they don't include the 2700X. Again, the results line up with AT's. I wrote TR but meant TechRadar.
Eh, I'm not going to keep going on addressing all these "points". IO is a syscall; reading/writing to disk is a syscall, and that's where Intel takes up to a 50% perf hit with the Spectre v3 patches in place. This is known, and has been known for months, on the impact for games that do lots of texture streaming like ROTR. I even provided user-provided evidence that beat AnandTech here to the punch by 3 months.
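(To make the syscall-cost claim testable, here is a minimal sketch of mine, not from either commenter: it times a million tiny pread() calls on Linux, and the per-call overhead it prints is the part that the Meltdown/Spectre kernel mitigations inflate. Build with something like gcc -O2 syscall_cost.c, then compare the result with mitigations on and off.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/dev/zero", O_RDONLY);   /* cheap, always-readable source */
        if (fd < 0) { perror("open"); return 1; }
        char buf;
        const long N = 1000000;                 /* one syscall per iteration */
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < N; i++)
            (void)pread(fd, &buf, 1, 0);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
        printf("%.0f ns per syscall\n", ns / N); /* compare patched vs unpatched */
        close(fd);
        return 0;
    }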
Anand used Intel/AMD memory spec. That's what you're supposed to do when testing a product advertised to use certain components (for good reason, BTW, stupid gamer kids discounted).
Bottom line is that you and people flipping out just like you are wrong. I already knew about this being under the surface months ago. Now that it's impossible to cover it up with the 2000 series launch, more people are simply aware that AMD has taken over.
GreenMeters - Tuesday, April 24, 2018 - link
But AnandTech has the 2700, and even the 2600X and 2600, beating the 8700K. So how are the WCCF benchmarks lining up with AnandTech's?
Maxiking - Tuesday, April 24, 2018 - link
"I just finished running Rise of the Tomb Raider benchmarks, 1080p, very high preset, FXAA.Unpatched:
Mountain Peak: 131.48 FPS (min: 81.19 max: 197.02)
Syria: 101.99 FPS (min: 62.73, max: 122.24)
Geothermal Valley: 98.93 FPS (min:76.48, max: 117.00)
Overall score: 111.31 FPS
Windows patch only:
Mountain Peak: 135.34 FPS (min: 38.21 max: 212.84)
Syria: 102.54 FPS (min: 44.22, max: 144.03)
Geothermal Valley: 96.36 FPS (min:41.35, max: 148.46)
Overall score: 111.93 FPS
Windows patch and BIOS update:
Mountain Peak: 134.01 FPS (min: 59.91 max: 216.16)
Syria: 101.68 FPS (min: 38.95, max: 143.44)
Geothermal Valley: 97.55 FPS (min:46.18, max: 143.97)
Overall score: 111.62 FPS
Average framerates don't seem affected."
From the link you posted, you got rekt by yourself.
Maxiking - Tuesday, April 24, 2018 - link
Actually, I can't be bothered waiting, because it's futile. The benchmark from that thread shows there was no noticeable performance regression after the updates were applied.
I know what you're gonna do. Look at those min fps. I WAS RIGHT. I WAS RIGHT. That's what you're thinking right now. No, you weren't. If you had ever run ROTR benchmarks, you would have experienced it: there are quite severe discrepancies in the built-in benchmark when it comes to min/max fps. I noticed it myself when I was overclocking my 6700K and running game benchmarks and stability tests. Since you are mostly using anecdotal evidence, don't know how to make proper arguments, and don't provide valid sources, we are really limited here, but that's what we have.
To support my statement, here is the video:
https://www.youtube.com/watch?v=BZEhkcs9hpU
It is not mine, but it proves my point: there is an issue in the benchmark. It pretty often shows wrong/misleading min/max fps which other benchmarking solutions don't record.
The video was published on 7 Jul 2016, so no Meltdown/Spectre for you. I know you will argue it is no coincidence with those min fps, but look at the max as well.
Maxiking - Tuesday, April 24, 2018 - link
*solution
Flying Aardvark - Wednesday, April 25, 2018 - link
Are you retarded? I know you are, because I ran those benchmarks myself and it's reproducible on more games than ROTR. Where's your contradicting information to back your claim? You do know that trying to poke holes in info is not an argument.
Ranger1065 - Wednesday, April 25, 2018 - link
So sad the review failed to meet your expectations. Enjoy your time at Gamer's Nexus (cough).
Ranger1065 - Wednesday, April 25, 2018 - link
Nicely done, Mr Aardvark. That made me smile.
mikael.skytter - Tuesday, April 24, 2018 - link
Thanks for a great review. Any chance it would be possible to look into how SpeedShift 2 compares to AMD's solution for short burst loads and clock ramp-up? Thanks!
koekkoe - Tuesday, April 24, 2018 - link
My favorite part in the article: fsfasd.
Meow.au - Tuesday, April 24, 2018 - link
I've visited the comments section a few times since publication. As a psychologist in training, I've found it interesting: the initial complaints about this review were reasonable (it doesn't match other sites), but by page 45 they are bordering on paranoia and conspiracy theories. The conspiracy theories are all the more puzzling when the simplest and most reasonable explanation is that the Spectre patch has punished Intel processors rather severely. I've found trying to argue against conspiracy theories, be it the moon landing or anti-vaxxers, to be singularly ineffective. The more you provide scientific evidence and rationality, the harder conspiracy theorists dig in their heels and defend their original position. Our natural confirmation bias to only seek evidence which confirms pre-existing beliefs seems to be a flaw built into the wiring of the human brain. Psychologically protective? Yes... it's nice to always be right. Useful for doing science? No.
I'd be delighted (and shocked) in a week's time to learn of massive incompetence or a cover-up. I expect there to be some interesting and unexpected details. But I'm guessing no evidence will be found for the commonly repeated conspiracy theories (the Spectre effect is minimal, heatsink throttling, bias against Intel, etc.). But I guess that will just be further evidence there really is a conspiracy... whatever.
Keep up the good work, guys. A long-time reader.
RafaelHerschel - Wednesday, April 25, 2018 - link
I think you need more training, psychologist in training, because it seems that you can't detect your own personal bias. As you stated yourself, the original complaints are quite reasonable. The problem is that AnandTech is not addressing these complaints in a timely manner and is mostly interested in damage control. The fact that some complaints are unreasonable doesn't change that.
Many other reviewers have applied all relevant patches; it is poor form to assume that they haven't. But I understand why you question their competence or integrity. It's cognitive dissonance. You trust AnandTech. In this case AnandTech is an outlier and has not clarified the unique results of their gaming test. Your trust in AnandTech is therefore not logical, and yet you consider yourself a logical person.
Therefore, you have decided that the 'logical' explanation is that all other reviewers haven't applied the patches... whatever.
divertedpanda - Wednesday, April 25, 2018 - link
Other reviewers admitted to not having patched down to the BIOS, since some used mobos for which patches were not yet released.
TrackSmart - Thursday, April 26, 2018 - link
This comment by RafaelHerschel doesn't make sense. The person being maligned said exactly this: "I expect there to be some interesting and unexpected details. But I'm guessing no evidence will be found for the commonly repeated conspiracy theories..." And he/she was EXACTLY CORRECT in that prediction.
Your complaint, on the other hand, seems disingenuous. Anandtech's staff immediately flagged their gaming results as anomalous (on just about every page of the article). Then they dug deep to figure out what happened (which takes time to test, confirm, and then publish). Then, about 5 days later, they posted updated results (2700X and i7-8700K, so far) and a VERY DETAILED explanation of what happened.
So... what's the problem again? That sometimes unforeseen test parameters can lead to different results? That can happen. The only question is how the situation was handled. In this case, I think reasonably well under the circumstances.
mapesdhs - Monday, May 14, 2018 - link
Grud knows what "timely manner" is supposed to mean these days. Perhaps RafaelHerschel would only be happy if AT could go back in time and change the article before it was published.
Meow.au, re what you said, Stefan Molyneux has some great pieces on these issues on YT.
schlock - Tuesday, April 24, 2018 - link
Why aren't we running DDR4-3200 across all systems? It may go a small way toward explaining the small discrepancy in Intel performance ...
rocky12345 - Tuesday, April 24, 2018 - link
They ran all systems at Intel's & AMD's listed specs: memory at 2933MHz on Zen+ and 2666MHz on Intel's Coffee Lake 8700K, and they did the same for the older-gen parts, running those at their listed specs as well. A few other media outlets did the same thing and got the same results, or very close. AMD's memory controller seems to deliver more bandwidth than Intel's at the same speed, so with Intel not running at 3200MHz like most media outlets used, maybe Intel loses a lot of performance because of that while AMD lost next to nothing by not going to 3200MHz. It is all just guesses on my part at the moment.
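(For rough scale, my arithmetic rather than the commenter's: peak theoretical DDR4 bandwidth is transfer rate × 8 bytes × channels. Dual-channel DDR4-2666 is 2666 × 8 × 2 ≈ 42.7 GB/s, DDR4-2933 ≈ 46.9 GB/s, and DDR4-3200 ≈ 51.2 GB/s, so an 8700K tested at its official 2666 spec has roughly 17% less raw bandwidth than the 3200MHz setups most outlets used.)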
Food for thought: when Intel released the entire Coffee Lake lineup, they only released the Z370 chipset, which has full support for overclocking, including the memory, and almost all reviews were done with 3200MHz-3400MHz memory on the test beds, even for the non-K Coffee Lake CPUs. Maybe Intel knew this would happen and made sure all Coffee Lakes looked their best in the reviews. The few sites that retested once the lower-tier chipsets were released found the non-Ks lost about 5%-7% performance at their rated memory speeds, in some cases a bit more.
I am no fanboy of any company; I just put out my opinions & theories based on the information we are given by the companies and the media sites.
Maxiking - Tuesday, April 24, 2018 - link
People never fail to amaze me. So you basically know nothing about the topic, yet you still managed to spit out 4 paragraphs of mess and even made some "food for thought". Slower RAM means a performance regression unless you have big caches, which is not the case for Intel or AMD.
rocky12345 - Tuesday, April 24, 2018 - link
It seems pretty basic to me, what was said in the post. It is not my problem if you do not understand what myself and some others have said about this topic. Pretty simple: slower memory means less bandwidth, which in turn gives less performance in memory-intensive workloads such as most games. All you have to do is go look at some benches in the reviews to see AMD has the upper hand when it comes to memory bandwidth; even Hardware Unboxed was pretty surprised by how good AMD's memory controller is compared to Intel's. Yes, Intel's can run memory at higher speeds than AMD's, but even with that said, AMD does just fine. You are right about cache sizes: neither has an overly large cache, but AMD's is bigger on the desktop-class CPUs, and that is most likely one of the reasons their memory bandwidth is slightly better.
Maxiking - Wednesday, April 25, 2018 - link
The raw bandwidth doesn't matter; it's CAS latency that makes the difference here.
https://www.anandtech.com/show/11857/memory-scalin...
https://imgur.com/MhqKfkf
With CL16, it doesn't look that impressive, does it?
Now, lower the CL latencies to something more 2k18-ish, booom.
https://www.eteknix.com/memory-speed-large-impact-...
Another test
https://www.pcper.com/reviews/Processors/Ryzen-Mem...
Almost all the popular hw reviewers don't have a clue. They tell you to OC but do not explain why, or what you should accomplish by overclocking. Imagine you have some bad Hynix RAM which can barely be OC'd from 2666 to 3000MHz, but you have to loosen timings from CL15 to CL20 to get there.
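(To put numbers on that trade-off, my arithmetic: first-word latency in nanoseconds is roughly 2000 × CL / transfer rate in MT/s. DDR4-2666 CL15 works out to 2000 × 15 / 2666 ≈ 11.3 ns, while DDR4-3000 CL20 is 2000 × 20 / 3000 ≈ 13.3 ns, so that "overclock" is actually about 18% worse on first-word latency despite the higher clock.)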
mapesdhs - Monday, May 14, 2018 - link
schlock, the chips were run at official spec. Or are you saying it's AMD's fault that Intel doesn't officially support faster speeds? :D Also, GN showed that subtimings have become rather important for AMD CPUs; some motherboards left on Auto will make very good subtiming selections for them, giving a measurable performance advantage.
It is April 24th, and the page on X470 still states: "Technically the details of the chipset are also covered by the April 19th embargo, so we cannot mention exactly what makes them different to the X370 platform until then."
jor5 - Tuesday, April 24, 2018 - link
The review is a shambles. They've gone to ground.
coburn_c - Tuesday, April 24, 2018 - link
I have been wanting to read their take on X470.
risa2000 - Wednesday, April 25, 2018 - link
It is my favorite page too.
mpbello - Tuesday, April 24, 2018 - link
Today Phoronix is reporting that after AMD's newest AGESA update, their 2700X system is showing a 10+% improvement on a number of benchmarks. It is unknown if the impact will be the same on Windows. But you see how all the many variables could explain the differences.
mapesdhs - Monday, May 14, 2018 - link
You know what will happen there though: yet more accusations of conspiracy, etc.
lfred - Wednesday, April 25, 2018 - link
Could anyone confirm the Wraith Prism cooler height is 9.4cm (and therefore won't fit a SilverStone Raven Z Mini-ITX case)? Thank you.
psychok9 - Wednesday, April 25, 2018 - link
Hello Ian, is there any news this week?
ET - Wednesday, April 25, 2018 - link
I'm still waiting for the StoreMI page.
Sx57 - Wednesday, April 25, 2018 - link
Well, I am still waiting for AnandTech to update the article. I am very interested to know how Ryzen beat Coffee Lake so well. I believe the AnandTech review was performed rightly, but I want to know what is actually wrong with the other reviews that make Intel the winner in some games. It seems not to be related to the security patches.
FaultierSid - Wednesday, April 25, 2018 - link
Did they just silently switch out all the gaming benchmarks? The Intel 8700K is now winning across the board.
rocky12345 - Wednesday, April 25, 2018 - link
Yep, they sure did. They must have redone the tests, but this time turned on MCE for Intel and upped the memory clock to at least 3200MHz for Intel as well, to see those kinds of gains in games over the old charts from last week. If they decide to explain it, they will spin it as "oh, we had the wrong data points in the charts for Intel"... lol
TEAMSWITCHER - Wednesday, April 25, 2018 - link
Yes .. at 1080p. The 4K gaming results are rather mixed. So the original conclusion still stands for me. The AMD Ryzen 2700X is roughly on par with the 8700K at 4K gaming, and pulls ahead in productivity applications.
RafaelHerschel - Wednesday, April 25, 2018 - link
Here is how I see it: at 1080p the new Ryzen results are good enough for 60 FPS gaming. The 2600 (non-X model) sometimes drops below 60 FPS, but for a system that is equally used for productivity and gaming, I can certainly live with that. For a system that is mainly used for gaming, I still prefer Intel, but by a slimmer margin than before.
mapesdhs - Monday, May 14, 2018 - link
You are hereby awarded the Sensible Chap medal for mentioning 60Hz gaming in at least a non-negative manner. 8) A few pages back, one guy described anything below 144Hz as useless.
FaultierSid - Wednesday, April 25, 2018 - link
The question is whether testing a CPU at 4K gaming makes much sense. At 4K the bottleneck is the GPU, not the CPU, especially since they tested with a 1080 and not a 1080 Ti. It is not a coincidence that the CPUs all show roughly the same fps in the 4K tests. Civilization seems to be easier on the GPU and shows the 8700K in the lead; all the other games show almost the same fps for all 4 tested CPUs. That's because the fps is limited by the GPU in that case, not by the CPU.
You might bring up the point that if you are gaming at 4K and at the highest settings, it doesn't make sense for you to look at 1080p benchmarks. Right now that might be true, but not in a couple of years when you upgrade your GPU to a faster model and the games are no longer GPU-bottlenecked. Then, where you now see 60fps, you might see 100fps with an 8700K and only 80fps with the Ryzen 2600X.
Basically, testing CPUs for gaming at a resolution that stresses the GPU so much that the performance of the CPU becomes almost irrelevant is not the right way to judge the gaming performance of a CPU.
If your point is that by the time you purchase a new GPU you will also purchase a new CPU, then this might not affect you, and you might pick the 2700X over an 8700K because of all the advantages in other areas.
But in general, we have to admit, the crown of "best gaming CPU" is (sadly) still in Intel's corner.
mapesdhs - Monday, May 14, 2018 - link
If all you're doing is gaming at 4K then yes, in most titles the bottleneck will be the GPU, but this is not always the case. These days live streaming on Twitch is becoming popular, and for that it really does help to have more cores; the load is pushed back onto the CPU, even when the player sees smooth updates (the viewer-side experience can be bad instead). GN has done some good tests on this. Plus, some games are more reliant on CPU power for various reasons, especially the use of outdated threading mechanisms. And in time, newer games will take better advantage of more cores, especially due to compatibility with consoles.
jjj - Wednesday, April 25, 2018 - link
So what was wrong: was it HPET crippling Intel, or does Intel have some kind of issue with 4-channel memory?
Ryan Smith - Wednesday, April 25, 2018 - link
The former.
risa2000 - Thursday, April 26, 2018 - link
Can you explain the HPET crippling a bit? I was looking around Google, but did not find anything really conclusive.
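(A short explanation, as best I understand it from the site's later write-up, so treat the specifics as my summary rather than official wording: the review's test automation forced Windows to use the HPET as the timer behind all timing queries, instead of the default TSC-based timer. Reading the HPET means a slow trip to an off-core hardware register rather than a near-free register read, so games and benchmarks that poll the clock very frequently lose performance, and that penalty turned out to be far larger on the Intel test beds. The override is a standard bcdedit toggle, applied from an elevated command prompt and followed by a reboot:

bcdedit /set useplatformclock true
bcdedit /deletevalue useplatformclock

The first line forces HPET; the second removes the override.)

Uxot - Wednesday, April 25, 2018 - link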
So... I have 2666MHz RAM... RAM support for the 2700X says 2933... What does that mean? Is 2933 the lowest RAM compatibility? FML if I can't go with the 2700X because of RAM.. -_-
Maxiking - Thursday, April 26, 2018 - link
It refers to the highest OFFICIALLY supported frequency for the chipset on your mobo. You should be able to run RAM with higher clocks than 2933, but there might be issues, because Ryzen memory support sucks. For higher-clocked RAM, I would check if it is on the QVL; that way you can be sure it was tested with your mobo and no issues will arise. 2666MHz RAM will run without any issue on your system.
johnsmith222 - Thursday, April 26, 2018 - link
Make sure you have the newest BIOS update; AGESA 1.0.0.2a seems to improve memory compatibility too. My crappy Kingston 2400 CL17 now works fine at 3000 CL15 at 1.36V. I'll try 3200 at 1.38V later.
Uxot - Wednesday, April 25, 2018 - link
Ok... my comment got deleted for NO REASON...
Gideon - Thursday, April 26, 2018 - link
Good work tracking down the timing issues! I know that this review is still WIP, but I just noticed that the "Power Analysis" block has a "fsfasd" written right after it, which probably isn't needed :)
jor5 - Thursday, April 26, 2018 - link
Pull this shambles and repost when you've corrected it fully.
mapesdhs - Monday, May 14, 2018 - link
Not an argument. It is just as interesting to learn about how and why this issue occurred, to understand the nature of benchmarking. Life isn't just about being spoon-fed end nuggets of things; the process itself is relevant. Or would you rather we don't learn from history?
peevee - Thursday, April 26, 2018 - link
When the 65W i7-8700 is 15% faster in Octane 2.0 than the 105W Ryzen 7 2700X, it is just sad.
Of course, the horrible x64 practically demands that compilers optimize for a very specific CPU implementation (choosing and ordering instructions in the code accordingly); AMD could have at least recognized that fact and optimized their own implementation for the same Intel-optimized code generators...
GreenReaper - Thursday, April 26, 2018 - link
Intel compilers and libraries tend not to use the ideal instructions unless they detect a GenuineIntel signature via CPUID - they'll likely use the default lowest-common-denominator pathway instead.
TDP is more of a guideline - it doesn't determine actual power usage (we've seen Coffee Lake use way more than the TDP), let alone the power used in a particular operation. Having said that, I wouldn't be surprised if Intel were more efficient in this particular test. But it'd be interesting to know how much impact the Meltdown patches have in that area; they might well increase the amount of time the CPU spends idle (but not idle enough to go into a sleep mode) as it waits to fetch instructions.
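(To make the CPUID point concrete, a minimal sketch of my own, for GCC/Clang on x86: leaf 0 returns the 12-byte vendor string in EBX, EDX, ECX, and a dispatching library can branch on whether it reads "GenuineIntel".)

    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;
        char vendor[13] = {0};
        if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx)) return 1;
        memcpy(vendor + 0, &ebx, 4);   /* vendor string order is EBX, EDX, ECX */
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);
        /* "GenuineIntel" or "AuthenticAMD"; a dispatcher could pick a code path here */
        printf("CPU vendor: %s\n", vendor);
        return 0;
    }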
SaturnusDK - Thursday, April 26, 2018 - link
Compare power consumption to Blender score: Ryzen is about 9% more power efficient.
TDP is literally Thermal Design Power. It has nothing to do with power consumption.
peevee - Thursday, April 26, 2018 - link
"TDP is literally Thermal Design Power. It has nothing to do with power consumption."Unless you have invented a way to overcome energy conservation law, power consumed = power dissipated.
SaturnusDK - Friday, April 27, 2018 - link
It's a guideline for cooling solutions. Look at the power consumption numbers in this test, for example.
Ryzen 2700X power consumption under full load: 110W.
Intel i7-8700K power consumption under full load: 120W.
Both are at stock speeds, with the Ryzen having 8 cores versus 6, and the 2700X scoring 24% higher in Cinebench. The Ryzen is rated at a 105W TDP, so actual power consumption at stock speed is pretty close to that. The 8700K uses 120W, so it's pretty far from the 95W TDP it is rated at.
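(Taking those numbers at face value, my arithmetic rather than the commenter's: performance per watt in that Cinebench run is 1.24/110W for the 2700X against 1.00/120W for the 8700K, a ratio of 1.24 × 120/110 ≈ 1.35, i.e. roughly 35% better efficiency at stock in that one multithreaded test. The 9% Blender figure above is a different workload, so the two numbers don't conflict.)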
ijdat - Saturday, April 28, 2018 - link
The 8700 also uses 120W, so it's even further from the 65W TDP it's rated at. In comparison, the Ryzen 2700 uses 45W with the same rated 65W TDP. I know which one I'd prefer to put into a quiet low-power system...
mapesdhs - Monday, May 14, 2018 - link
Perhaps this is AMD's biggest win this time round: potent HTPC setups.
peevee - Thursday, April 26, 2018 - link
"Intel compilers "What Intel compilers have to do with it?
peevee - Thursday, April 26, 2018 - link
I mean, the Octane test in Chrome is what the V8 JavaScript compiler does. And V8 itself is built with MSVC, AFAIR.
Dragonstongue - Thursday, April 26, 2018 - link
Just looking back at this: according to the title you cover the 2700X/2700/2600X/2600, and yet most tests only list results for the 2700X and 2600X.. not good for someone really wanting to see the differences in power use or performance comparing them head to head, so to speak. Seems the 2700 would be a "good choice": according to the little bit of info given about it, it ends up using less power than the 2600 even though it's rated at the same TDP with 2 extra cores / 4 extra threads O.O
I do "hope" sellers such as Amazon, at least for us Canadian folk, stick closer to the price they should be at vs tacking on $15-$25 or more over MSRP pricing; it seems if one bought them on launch day, pricing was right where it should be.
The 1600 has bounced around a little bit, whereas the 1600X is actually a fair price compared to what it was, "very tempting", though the lack of a boxed cooler is not good..... Shame the 2600 only comes with the Wraith Stealth instead of the Spire, seeing as the price is SOOO close (not to mention, at least at launch price vs what the 1xxx generation costs NOW, AMD should have been extra nice and bundled the Wraith Spire with the 2600/2600X and the Wraith LED and Wraith Max or whatever with the 2700/2700X).
I would imagine if they decide to do a 4-core/8-thread 2xxx, that would be the spot to use the Wraith Spire (less heat load via fewer cores, type of deal).
29a - Thursday, April 26, 2018 - link
Not trying to be sarcastic, but will this article be finished? I really wanted to read the storage and chipset info. If the article is as complete as it is going to get, please let us know. A 20-year reader asking.
John_M - Saturday, April 28, 2018 - link
I'm sure it will be finished one day, but I agree that it doesn't seem so at the moment. If you want to find out about StoreMI, AMD has a page about it: https://www.amd.com/en/technologies/store-mi
ET - Tuesday, May 1, 2018 - link
I think we've got ourselves a race: which will get here first, the missing parts of the 2nd gen Ryzen review, or new Raven Ridge drivers? Or perhaps hell will freeze first.
29a - Friday, May 4, 2018 - link
Sadly it appears as though the article will not be finished. This site was great during about its first 15 years of existence; Purch has done a thorough job of purching it up.
jor5 - Tuesday, May 8, 2018 - link
Oh dear, what an embarrassing end to this article. Tuck it away under "what was I thinking??" and pretend it never happened.
x0fff8 - Wednesday, May 9, 2018 - link
Is this article ever gonna get updated with the new benchmarks?
MDD1963 - Thursday, May 10, 2018 - link
And just like that, my 7700K is fast again! :)
peevee - Thursday, May 10, 2018 - link
"Technically the details of the chipset are also covered by the April 19th embargo, so we cannot mention exactly what makes them different to the X370 platform until then"That was written for the article published April 19th, and as of May 10th STILL in the text.
John_M - Friday, May 11, 2018 - link
And still there's nothing on the StoreMI page. What's the excuse for that?
AmbroseAthan - Friday, May 18, 2018 - link
Are we really over 3.5 weeks past this being updated as TBD, and you guys have fallen this far behind? This is not the standard I feel AnandTech normally adheres to.
klatscho - Monday, May 21, 2018 - link
I second that.
Maxiking - Monday, May 21, 2018 - link
LOL, the benchmarks are now updated: Ryzen+ is absolutely outperformed in games by the 8700K, even with Meltdown and Spectre patches. So nothing new; Ryzen is still bad.
klatscho - Monday, May 21, 2018 - link
If your use case is 1080p gaming I would agree; however, the difference becomes marginal as resolution increases. Also keep in mind that the 8700K currently retails for about $20 more than the 2700X and doesn't include a cooler, which means it is overall about $50 dearer...
peevee - Tuesday, May 22, 2018 - link
"and the speed is limited to how the system reads from a drive that spins at 7200 or 5400 times per second"It is PER MINUTE. As in RPM.
cvearl - Friday, June 8, 2018 - link
My 2600X at stock does 177 in single-core Cinebench, but that is with an H100i V2 cooler. With the default cooler it gets the same score as yours: 173. The cooler the chip, the higher the boost. Also, out-of-the-box XMP in the BIOS works at 3200 no problem, in fact at CL14. Out of the box versus my 1600X in the exact same system, it is 15% faster across the board.
[email protected] - Tuesday, June 19, 2018 - link
Nice review. One thing that bothers me is the inclusion of WinRAR in this review without a note stating that it is an underperforming compression tool. It is known that 7-Zip can compress almost twice as fast as WinRAR.
Not only that, but also the lack of consistency between the compression tests: instead of compressing and decompressing a set file, you use different procedures for each benchmark. I mean, the job is to compress/decompress; let the user know how it does that and why.
0ldman79 - Monday, July 23, 2018 - link
I realize they probably don't have an FX-6300 or FX-83xx system for comparison.
The FX-8350 scores 23719 MIPS on the 64 MB 7-Zip test, a good deal higher than Kaveri or Bristol Ridge. I need to bench my 6300 just for giggles.
mrinmaydhar - Friday, July 27, 2018 - link
Try running a S.M.A.R.T. test on the drives. The virtual adapter is unable to provide any data and causes a blue screen. At least it did the last time I used the Enmotus version.
prateekprakash - Friday, August 3, 2018 - link
"We’ll cover these in the next few pages, as well as the results from our testing.Overclocking"
Where is the overclocking result?
kithylin - Tuesday, September 4, 2018 - link
YET ANOTHER REVIEW THAT DOESN'T SHOW US THERMALS! HOW HARD IS IT TO SHOW US HOW HOT A CHIP RUNS ON AIR COOLING, FFS. NO ONE SHOWS THERMALS ON THESE DAMN CHIPS. THIS IS THE 20TH REVIEW IN GOOGLE AND NO THERMALS!
JRW - Thursday, December 6, 2018 - link
Last year I upgraded from a 1st-gen i7-920 to an i7-8700K, and even with the Spectre & Meltdown patches performance has been amazing; also, ASUS has recently been updating the motherboard BIOS with further CPU performance improvements.