Even then there's an interesting option if you want threaded performance; I just upgraded to a XEON E5-2680 v2 (IB-EP) for 165 UKP. Lower 1T speed for sure, but MT should be the same as or better than a 3930K @ 4.8. No oc means more stable, less heat/noise/power, and being IB-based means it ups the slots to PCIe 3.0. Not a relevant choice for gaming, but a possibility for those doing VMs, rendering, etc., who just want to get by for a little while longer.
OR search for a XEON E5-1680 v2... :) It's an Ivy Bridge-E 8c/16t chip that will fit in Sandy Bridge-E mainboards (X79) and has an unlocked multiplier, as opposed to the E5-2680 v2. So with this you won't lose your overclocking ability.
But in the end, I guess that the greatly reduced power draw and the more "modern" platform of an i7-8700K system, compared to the X79 platform, will give it the edge here.
Very interesting that the 1680 v2 is unlocked, I didn't know that.
Alas though, availability of the 1680 v2 is basically zero, whereas the 2680 v2 is very easy to find, and the cost of 1680 v2s which are available (outside the UK) is extremely high (typical BIN of 600 UKP, normal auction start price of 350 UKP, completed listings only shown for BIN items which were purchased for between 500 and 600 UKP). By contrast, I bought several 2680 v2s for 165 UKP each. Testing on a P9X79 WS (all-core turbo of 3.1) gives a very impressive 15.44 for CB 11.5, and 1381 for CB R15 which is faster than a stock 8700K (for reference, the 1680 v2 scores 1230 for CB R15). Note the following page on AT has a very handy summary of all the turbo bin levels:
So, I'm very pleased with the 2680 v2 purchase, it's faster than my 3930K @ 4.8, runs with very low temps, much lower power draw, hence less heat, less fan noise and since it's not oc'd it'll be solidly reliable (this particular test system will eventually be in an office in India, so power/heat/reliability is critical). For the target systems in question, it's a great solution. Only thing I noticed so far is it didn't like me trying to set a 2133 RAM speed, but it worked ok at 1866; I can probably tighten the timings instead, currently just at 9/11/10/28/2T (GSkill 32GB kit, 8x4GB).
The 4930K I have though will go into my gaming system (R4E), since I don't mind the oc'ing fun, higher noise, etc., but it's not a system I'll use for converting video, for that I have a 6850K.
Can you buy it? No? Paper launch of Unobtanium 8000? -> panic, PR propaganda bullshit and dirty Intel marketing tactics as usual targeted at lamer fanboys.
In speech: "grateful" to AMD for reinvigorating competition
In deed: gives money to Intel, effectively taking part in keeping AMD down for that bit longer, maybe causing them to return to non-competitive state, upon which AMD is blamed for being non-competitive once more.
The Invisible Hand doesn't seem so wise, sometimes.
AMD did a fantastic job with Ryzen while Intel were busy milking their customers dry. We should support AMD when they need us most. If AMD goes down it would suck not just for the industry but for technology as a whole.
At least you can enable MCE to make all cores run at 4.6 GHz (make sure you've got a good cooler). The 8700K would allow you to go to 4.9-5.1 GHz with very good cooling.
what a "brilliant" asshardware , your Kudos worth shit to amd as they cant fund further r&d with it. But sure, run to support your milking master because they finally bothered to release, after 10 years, 4+ core for mainstream...
Salty salty salty people! AMD are big boys too, they can fight for themselves. It is called a FREE MARKET, and until Ryzen, AMD had nothing to even come within spitting distance of an i7-2600k!
Which is coincidentally what I upgraded to my 8700K from. Running 4.8 GHz on all cores for now; I still have plenty of thermal room, so once more people have figured out all the minute settings, I will just leave it at 4.8 till then! Also, Firestrike! https://www.3dmark.com/3dm/23022903 CPU-Z: https://valid.x86.fr/nkr5vi
That's me, ahead of 96% of all results! And that single-threaded perf is totally insane - as is multi-core; nothing short of 16+ threads can touch it.
We know that this is a "short" "pre-review", but it is a bit bizarre that there is no mention of AMD in the conclusion.
Not that we consider AMD should necessarily be mentioned in an article dedicated to an Intel launch, BUT Intel's offerings were always discussed in the conclusion section of every AMD review.
So we would consider it only fair to remind people in the conclusion as well that the new Coffee Lake chips from Intel are a welcome addition, but that they are unable to completely dethrone the competition, and that they should be praised mainly for the fact that AMD will now be forced to lower Ryzen prices a bit.
The way it is right now, the conclusion is written like Intel is the only alternative, quad or hexa core, with nothing else on the market.
Personal opinion:
Despite me being the technical consultant on the team, this was observed by two of my colleagues (financial consultants), and they even brushed it away themselves as "nitpicking".
Since I've worked in online media myself, this looks very similar to an attempt to post something to "play nice" with Intel's PR, so we've decided to post this comment.
Therefore we eagerly await Ian's full review, with his widely appreciated comprehensive testing and comparisons.
Nevertheless, thank you Ian for your work! It is appreciated.
What did you expect? Anandtech is a pro-Intel review site. They didn't even mention the huge price difference between Intel's Z370 chipset motherboards required for Coffee Lake and AMD's B350 chipset motherboards. It's almost double the price.
I haven't been following this closely, but does that mean that b350 boards are about $60? That's incredible! The Z370 I'm looking at is only $120, which didn't seem that bad, but if the b350s are really $60-70, then it might be worth checking out. Are they really that cheap?
Yup. Newegg shows almost a dozen B350 boards for $60-$70 currently. Most are micro ATX but there are a couple ATX boards in that range currently (after rebate) including "gaming" boards like the MSI B350 TOMAHAWK.
Really? Other times I've heard they are a pro Apple review site, a pro IBM review site, and a pro MS review site. They really seem remarkably catholic in whom they support.
We chose a dozen processors we thought would be best for the review graphs. As mentioned on every results page, you can find the other data in our Benchmark database, Bench.
Well, you either have bad inspiration or you chose the CPUs from AMD that most people won't buy. You are missing the R7 1700 and R5 1600, which are about the same as the new Intel offerings in computing tasks but cost less. So...
I don't know how you define "best" for the review graphs. The point is that we are seeing new 6-core solutions from Intel, so very logically, people will be trying to compare apples with apples, i.e. 6-core solutions from both camps. So omitting the results from the 1600 actually looks like more than meets the eye to me, especially when you folks previously did an R5 1600X review.
Thank you Ian. Please do add the data for R5 1600/X. From what I read elsewhere, it appears there is good competition between the Core i5 and R5 1600/X series.
You really should have chosen SIMILARLY priced chips (So, 1700 / 1600 / 1600X) because it would have shown "here's the performance you get per dollar" which ultimately is what matters.
My ultimate goal is to have a graph widget that lets you select which CPU you want to focus on, and it automatically pulls in several around that price point as well as some of the same family. I'm not that skilled, though
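Something like this, perhaps; a toy sketch of the selection logic (all data, names and prices below are made up for illustration):

```python
# Toy version of the widget logic: given a focus CPU, pull in the nearest
# price neighbours plus siblings from the same family. Hypothetical data.
cpus = [
    {"name": "i7-8700K", "family": "Coffee Lake", "price": 359},
    {"name": "i5-8400",  "family": "Coffee Lake", "price": 182},
    {"name": "R5 1600X", "family": "Ryzen 5",     "price": 219},
    {"name": "R7 1700",  "family": "Ryzen 7",     "price": 299},
]

def comparison_set(focus_name, n_price_neighbours=2):
    focus = next(c for c in cpus if c["name"] == focus_name)
    # Closest competitors by price...
    by_price = sorted((c for c in cpus if c is not focus),
                      key=lambda c: abs(c["price"] - focus["price"]))
    # ...plus anything from the same product family.
    same_family = [c for c in cpus
                   if c["family"] == focus["family"] and c is not focus]
    picks = ({c["name"] for c in by_price[:n_price_neighbours]}
             | {c["name"] for c in same_family})
    return [focus["name"], *sorted(picks)]

print(comparison_set("i7-8700K"))
```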
Performance per dollar doesn't matter, outright performance matters. SMH, only a fool buys the second-best CPU when the best is only a few dollars more.
Again the myth that rich people don't care about wasting money. So wrong. :D As for fanboyism, that kind of label gets hurled in both directions, but IRL has little meaning.
Ryzen has no integrated GPU so it can't be the best choice for anyone without a discrete GPU (aka the vast majority of the market - about 70% as per q1 2017). Ironically the gamers are the ones more likely to snap up Ryzen as they have discrete graphics cards anyway...
I see the same; the Ryzen 1700 remains the best buy, followed by the Ryzen 1600, recent batches of which seem to have 8 cores instead of 6, for around $170. They do come with a heatsink, another $30 saved. With an OK board it will total $250. Even better, ready-built Dell gaming desktops with an RX 580 8GB, 16GB of RAM and a Ryzen 1700 can be had for around $800, vs above $1100 for similar Intel. It is literally a no-brainer choice.
"and i bet intel paid you quite a bit to ignore stuff other (less intel biased) reviewers pointed out today."
You'd lose that bet.
Now since we're apparently doing this Jeopardy style, please tell me how much you wagered so that I know how much I'm collecting. Since Intel isn't paying me, you will have to do. ;-)
In all seriousness though, taking sides and taking bribes would be a terrible way to run a business. Trust is everything, so losing the trust of you guys (the readers) would be about the worst possible thing we could do.
"Not sure why there is no R5 1600 in the test though. It will be good to see how the 6 cores solution compete."
It's essentially as you'd expect. In older, single-threaded code, the Intel CPU has a slight advantage, but in any newer, multi-threaded code, the Ryzen 5 1600's six SMT-enabled cores will dominate. It's time to stop giving Intel money for fewer cores. They don't deserve the cash. Give it to AMD for a change, now that they're genuinely competitive.
The 8700K is on backorder there. And for the rest of the world? I can get an 8400 in my country; the 8700K seems to be coming December 2nd. I can't remember anything similar for the last 3 decades. Perhaps the P3-1000. If this is not a paper launch, nothing is.
I'm not sure. :D It's certainly annoying though. Worst part is searching for anything and then changing the list order to cheapest first, what a mess...
Anyone else read that and think it's something we should have been reading ages ago? Consumer technology is progressing more slowly than many expected, and I feel the same way. Nonetheless, I can't help but envision a very near future where I'll be coming back and reading this article, depressed at this level of technology, all while on my future monolithic many-thousand-core 3D processor ;)
"The Core i5-8400 ($182) and Core i3-8350K ($169) sit near the Ryzen 5 1500X ($189) and the Ryzen 5 1400 ($169) respectively. Both the AMD parts are six cores and twelve threads, up against the 6C/6T Core i5 and the 4C/4T Core i3. The difference between the Ryzen 4 1400 and the Core i3-8350K would be interesting, given the extreme thread deficit between the two."
"The difference between the Ryzen 5 1500X and the Core i3-8350K would be interesting, given the extreme thread deficit (12 threads vs 4) between the two."
The 1500X is a 4C/8T processor, so it effectively has hyperthreading over the i3-8350K, while having a lower overclocking ceiling and lower IPC.
Are you sure that the i5-7400 got 131 FPS average in benchmark 1 - Spine of the Mountain in Rise of the Tomb Raider? Besting all the other vastly superior processors?
Looks like a typing error there or something went wrong with your benchmark (lower settings for example on that run).
I've mentioned it in several reviews in the past: RoTR stage 1 is heavily optimized for quad core. Check our Bench results - the top eight CPUs are all 4C/4T. The minute you add threads, the results plummet.
Any idea what that optimisation is? Seems odd that adding extra pure cores would harm performance, as opposed to adding HT which some games don't play nice with. Otherwise, are you saying that for this test, if it was present, the i3 8100 would come out on top? Blimey.
Ian, this is probably your worst review to date. Lackluster choice of CPUs, mid-grade GPU, and lack of direct competition in the product stack... Why would you not use a GTX 1080 Ti or Titan XP?
All the CPUs we've ever tested are in Bench. Plenty of other data in there: the goal was to not put 30+ CPUs into every graph.
Our benchmark database includes over 40 CPUs tested on the GTX 1080, which is the most powerful GPU I could get a set of so I can do parallel testing across several systems. If that wasn't enough (a full test per CPU takes 5 hours per GPU), the minute I get better GPUs I would have to start retesting every CPU. At the exclusion of other content. Our benchmark suite was updated in early Q2, and we're sticking with that set of GPUs (GTX 1080/1060/R9 Fury/RX 480) for a good while for that reason.
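For a sense of why retesting on a new GPU is a big ask, a quick back-of-the-envelope using the figures quoted above:

```python
# Rough scale of a full CPU retest, using the figures quoted above:
# 40+ CPUs in Bench, ~5 hours of testing per CPU per GPU.
cpus = 40
hours_per_cpu_per_gpu = 5
gpus = 4  # GTX 1080 / 1060 / R9 Fury / RX 480, per the current suite

total_hours = cpus * hours_per_cpu_per_gpu * gpus
print(f"{total_hours} machine-hours, ~{total_hours / 24:.0f} days of continuous testing")
# -> 800 machine-hours, ~33 days of continuous testing
```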
To be fair, the R5 1600 was added to the benches after the fact. In addition, your other reviews tend to be much more detailed and data-driven, with relevant products and multiple GPUs.
Why would I read your review if you expect me to dig through your benchmark database to obtain relevant data?
I can understand and appreciate the time crunch but it is a poor excuse for some of the decisions made in this review.
Take it with a grain of salt, this was not your best work.
I disagree, he mentioned pretty much all the info you need to know about the CPU.
The choice of GPU is hardly even relevant to CPU tests anymore. For gaming performance, my 6-year-old i7-2600K is neck and neck with (or faster in some cases than) this new crop of CPUs.
And if you do need more cores you can always move sideways to a very low-cost SB-E or IB-EP. I built a 4.8GHz 2700K system for a friend two years ago; am upgrading it soon to a 3930K at the same clock, replacing the M4E mbd with an R4E and swapping the RAM kits (2x8GB for 4x4GB, both 2400MHz), total cost 200 UKP. 8) And both mbds now have the option of booting from NVMe.
Newer CPUs can have a distinct advantage for some types of 1080p gaming, but with newer GPUs the frame rates are usually so high it really doesn't matter. Move up the scale of resolution/complexity and it quickly becomes apparent there's plenty of life left in SB, etc. zuber, at what clock are you running your 2600K? Also note that P67/Z68 can benefit as well from faster RAM if you're only using 1600 or less atm.
Yeah that doesn't make a lot of sense to me either.
CPU A is the 8600K. Runs at a base of 3.6 and an all-core turbo of 4.1. CPU B is the 8700. Runs at a base of 3.2 and an all-core turbo of 4.3.
That's either 11% slower (base) or about 5% faster (all-core turbo). Neither is 20%!
If you compare the base speed of the 8600K and the all-core turbo speed of the 8700 then you get about 19.4% which is close enough to 20% I suppose but that's not really a fair comparison?
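A quick sanity check on those numbers:

```python
# Clock-ratio sanity check for the 8600K vs 8700 comparison above.
base_8600k, turbo_8600k = 3.6, 4.1   # GHz
base_8700,  turbo_8700  = 3.2, 4.3   # GHz

print(f"8700 base vs 8600K base:   {base_8700 / base_8600k - 1:+.1%}")   # -11.1%
print(f"8700 turbo vs 8600K turbo: {turbo_8700 / turbo_8600k - 1:+.1%}") # +4.9%
print(f"8700 turbo vs 8600K base:  {turbo_8700 / base_8600k - 1:+.1%}")  # +19.4% (the unfair one)
```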
It's a shame you didn't compare it to the 7820X. I think it was expected that it would better the 7800X at least to some degree, so the more interesting comparison is how much performance does the added cost of 8 cores get you.
I'd like to see the memory testing done on Ryzen done on coffee lake as well. It's clear that 2 DDR4 channels is not enough for 8 cores, at least with AMD's memory subsystem. Is it enough for 6 cores with Intel's memory subsystem? Also please be sure to use a GPU powerful enough to warrant even reporting the gaming results.
Kaby Lake is faster than Coffee Lake. Where is the 15% increase? What is the point of the + and ++ iterations when there is no improvement in performance? Intel is just burning wafers for no reason. Better for them to go back to the tick-tock clock and stop wasting resources.
Honestly, I'm not sure why Intel doesn't just keep the fab lines for the 7th-gen i5s going and re-label them as 8th-gen i3s, just binned differently, i.e. higher base/turbo.
Thanks for the review, Ian. Just one question: why do you think power consumption differs so much from the data at TechSpot, where the 8700K consumes 190W and is on par with the 16C/32T 1920X?
Are they testing at-wall power consumption at stock? That might add a bunch.
Our power numbers are just for the CPU, not at the wall; they are derived from the internal calibration tools that the processor uses to determine its own power P-states, which in effect is directly related to the turbo.
There seem to be a lot of boards that screw around with multi-core turbo this generation, which may also lead to higher power consumption numbers.
Or a used Intel, sooo much value. I'd been looking for a 4930K upgrade for an X79 system (over a 3930K), so as to provide proper PCIe 3.0, etc.; main focus is animation, rendering and video processing. Gave up, bought a 10-core (20-thread) XEON E5-2680 v2 instead for 165 UKP (very easy to find). It scores 15.44 for CB 11.5, and 1381 for CB R15 (these tests force an all-core turbo of 3.1GHz); compare those to the 8700K numbers. Not bad at all for a board as old as X79, and the temps/power/heat/etc. are excellent.
This actually makes sense. I wonder how Ian explains (even to himself) that the additional 2 cores in the i7-8700K do not push the power envelope at all compared to the i7-7700K. Is it because in the Anandtech benchmark the 8700K uses only 4 cores? Or it uses 6 but throttles them down to stay within the power limit?
I was also puzzled about some test results in this review, but after reading through the comment section, I conclude that this is indeed his worst review to date. He mentioned that he only had 3 days for this review. Maybe this is the reason.
At a guess, Anand's temp test isn't using the bigger AVX sizes, because at full load the hugely wide calculations use a lot more power than anything else; to the extent that by default they have a negative clock-speed offset of several hundred MHz. I'm not sure how or if MCT shenanigans affect the AVX clock speed.
You'd be surprised at how aggressively Intel is binning. We've said for a long while that Intel never pushes its chips to the limits in DVFS or frequency (like AMD), so if they were willing to make higher SKUs and commit to their usual warranty, it should be an easy enough task.
Is Paul/Igor testing at-wall power consumption at stock? That might add a bunch. Even the lower-end CPUs seem out of whack there. Our power numbers are just for the CPU, not at the wall.
Our numbers are derived from the internal calibration tools that the processor uses to determine its own power P-states, which in effect is directly related to the turbo. We run a P95 workload during our power run, and pull the data from the internal tools after 30 seconds of full load, which is long enough to hit peak power without hitting any thermal issues or PL2/PL3 states.
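For readers wanting to approximate this kind of CPU-only measurement themselves, here is a minimal sketch using Linux's RAPL powercap interface; this illustrates the general approach only, and is not the internal tool described above:

```python
# Sample package power via Intel RAPL on Linux (sysfs powercap interface).
# Illustrative only -- not the exact internal calibration tool referenced above.
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 energy counter

def read_energy_uj():
    with open(RAPL) as f:
        return int(f.read())

e0 = read_energy_uj()
time.sleep(1.0)                 # sample window; run the P95 load in parallel
e1 = read_energy_uj()
watts = (e1 - e0) / 1e6         # microjoules over 1 second -> watts
print(f"Package power: {watts:.1f} W")  # ignores counter wraparound for brevity
```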
There seem to be a lot of boards that screw around with multi-core turbo this generation, which may also lead to higher power consumption numbers.
Intel played shenanigans with the 7700K TDP. They gave it as one value in the first briefings, then switched to the other just before launch, then switched back again after launch.
There also seems to have been a price change to the 1800X and 1700X, which are as much as $100 lower than what's reflected on the charts. I think that would factor in notably.
Maybe resource contention on the hyper-threaded parts? It is odd, but I'm very impressed with that 8400. For most workloads it easily hangs out with the $300+ CPUs.
Not only in RoTR but also in GTA V. I hope there is an explanation the guys at AT will figure out. If it were thread contention (as suggested above), then all the Ryzen chips should be even worse, but they are not.
Good info Ian, thank you. Am I the only one who's terribly disappointed by this release?! I've been holding out for this moment to upgrade, and what I can gather from the benchmarks is that this will have no noticeable improvement on performance for most applications vs. the last 2 gens of CPUs...
Yeah... when I saw these numbers, I figured I'd go back to waiting for 10nm or Ryzen 2. But TechReport's comparison a) used a 1080 Ti, which I also have, and b) included my current CPU, the 4790K. The results were far more pronounced and closer to what I'd hoped...
I'm mostly baffled by the i5-8400... if it's just 6 cores and it did so much better than the 8700K at a lower turbo clock on those thread-crippled games, would the new i7 have done better with HT disabled? Would it run cooler with only 6 threads?
Anyone having an issue with Bench? I'm trying to compare my i7-3770k to the i7-8700k and it comes back with no data. Same with trying the Threadripper 1920x
The CPU tests changed, so benchmarks weren't comparable. The latest processor tested on the old suite was the 7700K IIRC, and not everything is tested on the new suite yet.
I'd compare results for the 3770k and the 2600K to get a baseline then you can compare 2600K to the 8700K. It's a bit fiddly, I have to do the same with my 4790K.
We updated our CPU testing suite for Windows 10 in Q1. Regression testing is an ongoing process, though it's been slow because of all the CPU launches this year. Normally we have 1-2 a year; we're so far at what, 6 or 7 for 2017?
Doesn't look to me like the die size actually increased at all due to the increased gate pitch. The calculations in the article forgot to account for the increase of the unused area (at the bottom left); this area is tiny with the 2-core die, but increases significantly with each 2 cores added. By the looks of it, that unused area would have grown by about 2 mm^2 or so going from 4 to 6 cores, though I'm too lazy to count the pixels...
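For anyone who isn't too lazy to count the pixels, the usual die-shot method is a simple area ratio; all numbers below are hypothetical placeholders, not measurements:

```python
# Estimate a region's area from a die shot by pixel ratio.
# All inputs below are hypothetical placeholders, not real measurements.
die_area_mm2 = 150.0      # assumed total die area (placeholder)
die_pixels = 1_000_000    # pixels covered by the whole die in the shot
region_pixels = 13_000    # pixels covered by the unused corner region

region_mm2 = die_area_mm2 * region_pixels / die_pixels
print(f"Unused region ~= {region_mm2:.1f} mm^2")  # ~2.0 mm^2 with these inputs
```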
Your conclusion is the weirdest thing ever; you fully ignore the 8350K and AMD.
In retail, the 8350K will do very, very well, and retail is what matters for most readers. And ignoring AMD is not OK at all; it's like you think that we are all idiots who buy on brand. You do think that (your system guides make that very clear), but you should not accept, support and endorse such idiotic behavior. AMD got hit hard here; Intel takes back the lead and it's important to state that. Sure, they might have Pinnacle Ridge in a few months and take back the lead, but buyers who can't wait should go with Intel right now, for the most part. AMD could also adjust prices, of course.
Really confused why the pricing listed in this review isn't consistent: for Intel you were posting prices you found online, but for Ryzen you appear to be posting MSRP.
The truth is, you can find the 1700X for $298 right now EASILY (Amazon), yet Microcenter is selling the 8700K for $499.
If you factor this information in, the AMD solutions are still a far better value per dollar.
I really can't believe the amount of flak Anandtech takes these days. I find it unearned and unwarranted. Out of all the tech sites and forums I manage to read in a given week, Anandtech is the most often quoted and linked to. Hell, I use it as my go-to for reference and comparison (and general reading). My only big complaint is your ads, and I'd gladly pay a sub to completely remove that nonsense and directly support the site!
Ian, you and your staff deserve far more credit than you get and that’s an injustice. Each piece is pretty thorough and pretty spot on. So for that thank you very much.
This article is no exception to the rule and is superb. Your graph layouts are a welcome feature! I look forward to your ever-expanding tests as new chips roll in. I think the 8600K is going to be a game changer in the i5 vs i7 performance category for these hexacore CPUs. I think that's why almost all the reviews I'm reading today are of the 8700K and 8400.
Again, thank you and your staff very much for the work you put into publishing amazing articles!
Personally I buy whatever is best at the time. Right now I'm typing this on a 1700X and I can see a 4770K build on the desk next to me. So it's always funny to see the bias. An Intel review gets posted, and AMD fanboys come out of the woodwork to trash them as paid shills. But it works exactly the same on any positive AMD review: Intel fans come in trashing them. It's really odd. Anandtech is one of the most unbiased sites I've found and I trust their reviews implicitly.
No temperature comparison? According to some reviews I see Intel stubbornly continues to rely on their horrible TIM solution, after my horrible experience with Haswell overheating I am not considering Intel again in my build until this is fixed. There is nothing worse than when your build starts crashing because of overheating 2 years down the road.
Looking at these results, there is still no reason to migrate from my i7-2600K. Still running at 4.6 GHz for years now, flawlessly, with equal or better performance than all that came after. It would seem Moore's law has failed to hold true for the past 5 years.
For the price of a new i7, you can buy a used i7-2600K, a Z68 or Z77 mobo, and 16GB of RAM. It looks like that situation will not end soon.
Intel has to give me a compelling reason to spend a ton of money on NEW stuff.
For quite a lot less than a new i7, I bought a 3930K, ASUS R4E and 16GB/2400 RAM. And a 120mm AIO. And a 2nd R4E. :D
Compelling reasons are perhaps more likely to come from peripheral and I/O scenarios, e.g. if one wants the latest USB 3.1, M.2 or somesuch. However, add-in cards solve most of these, and numerous SB/SB-E boards can now boot from NVMe. I saw great numbers with M.2 drives and Z68, and because of the wiring it should be possible for an M4E to use an M.2 as boot and have two GPUs at x16 (aka 3.0 @ x8, same as the modern mainstream split), while X79 has enough lanes anyway.
The 5930K does surprisingly well compared to the 8700K in the gaming benchmarks (at least for the games tested). Any particular reason it would be doing so well? Can't think of any advantage it would have besides the quad-channel memory.
Why are all these benches between sites so wildly different (with supposedly the same settings/specs)? RoTR has had some of the most absurd results... one has the 8700K beating everything by 20fps, another has the 7700K over the 8700K by 40fps, and one even had a 6600K beating everything.
Exactly! No matter what side you're on, you gotta love the fact that competition is back in the x86 desktop space! And it looks like AMD 1700X is now under $300 on Amazon. Works both ways!
I just don't see it this way. Since the release of Haswell-E in 2014 we've had sub $400 six core processors. While some like to compartmentalize the industry into mainstream and HEDT, the fact is, I built a machine with similar performance three years ago, for a similar price. Today's full featured Z370 motherboards (like the ROG Maximus X) cost nearly as much as X99 motherboards from 2014. To say that Intel was pushed by AMD is simply not true.
I feel the fact that Intel had to rush a 6-core mainstream processor out in the same year they introduced Kaby Lake is a sign that AMD is putting pressure on them. You may have found a Haswell-E chip for sub-$400 in 2014, but you need to be mindful that Intel has historically only increased prices due to the lack of competition. Now you are seeing 6-core mainstream chips from both AMD and Intel below 200 bucks. Motherboard prices are difficult to compare, since there are lots of motherboards out there that are over-engineered and cost significantly more. Assuming you pick the cheapest Z370 motherboard out there, I don't believe it's more expensive than an X99 board.
KL-X is dead, that's for sure. Some sites claim CFL was not rushed, in which case Intel knew KL-X would be pointless when it was launched. People claiming Intel was not affected by AMD have to choose: either CFL was rushed because of pressure from AMD, or Intel released a CPU for a mismatched platform they knew would be irrelevant within months.
There's plenty of evidence Intel was in a hurry here, especially the way X299 was handled, and the horrible heat issues, etc. with SL-X.
PS. Is it just me or are we almost back to the days of the P4, where Intel tried to maintain a lead really by doing little more than raising clocks? It wasn't that long ago there was much fanfare when Intel released its first minimum-4GHz part (4790K IIRC), even though we all knew they could run their CPUs way quicker than that if need be (stock-voltage oc'ing has been very productive for a long time). Now all of a sudden Intel is nearing 5GHz speeds, but it's kinda weird there's no accompanying fanfare given the reaction to their finally reaching 4GHz with the 4790K. At least in the mainstream, has Intel really just reverted to a MHz race to keep its performance up? Seems like it, but OS issues, etc. are preventing those higher bins from kicking in.
Intel has been pushing up clock speeds, but (unlike the P4), not at the expense of IPC. The biggest thing that Intel has done to improve performance in this iteration is to increase the number of cores.
Except in reality it's often not that much of a boost at all, and in some cases slower because of how the OS is affecting turbo levels.
Remember, Intel could have released a CPU like this a very long time ago. As I keep having to remind people, the 3930K was an 8-core chip with two cores disabled. Back then, AMD couldn't even compete with SB, never mind SB-E, so Intel held back, and indeed X79 never saw a consumer 8-core part, even though the initial 3930K was a XEON-sourced crippled 8-core.
Same applies to the mainstream, we could have had 6 core models ages ago. All they've really done to counter the lack of IPC improvements is boost the clocks way up. We're approaching standard bin levels now that years ago were considered top-notch oc's unless one was definitely using giant air coolers, decent AIOs or better.
I hope Anandtech solves the Civ6 AI benchmark soon. It's almost as important as compression and encoding benchmarks for me to decide CPU price-performance options as I am almost always GPU constrained in games.
...apart from including all the data in their benchmark tool, which they make freely available, you mean? They put in the CPUs they felt that were most relevant. The readership disagreed, so they changed it from their benchmark database. That level of service is almost unheard of in the industry and all you can do is complain. Bravo.
Irrelevant. While I agree with most of what you said, that does not change the fact that Anandtech did not include Ryzen 1600/X until called out by astute readers. To make things a little clearer for you, the i7-8700 is a 6C/12T processor. The Ryzen 1600 is a 6C/12T processor. Therefore, a comparison with the Ryzen 1600 is relevant.
You should have addressed the point I made. Instead all you can do is complain about my post. Bravo. (In case this goes over your head again, that last bit is added just to illustrate how pointless such comments are.)
So your point is, in essence, "they didn't do what I wanted them to do so they're damned for all time".
They put up the comparison they felt was relevant, then someone asked them to include something different - so they did it. They listened to their readers and made changes to an article to fix it.
Should they have put the R5 in the original comparison? Possibly. I can see arguments either way but if pushed I'd have said they should have done - but since even the 1600X gets beaten by the 8400 in virtually every benchmark on their list (as per https://www.anandtech.com/bench/product/2018?vs=20... they would then have been accused by the lurking AMD fanboys of having picked comparisons to make AMD look bad (like on every other article where AMD gets beaten in performance).
So what are you actually upset about? That they made an editorial decision you disagree with? You can't accuse them of hiding data since they make it publicly accessible. You can't accuse them of not listening to the readers because they made the change when asked to. Where's the issue here?
OK on further reading it's not "virtually every" benchmark on the list, just more than half. It's 50% i5 win, 37% R5 win, 12% tied. So not exactly a resounding triumph for the Ryzen but not as bad as I made it out to be.
In the UK the price differential is about £12 in favour of the i5, although the motherboard is about £30 more expensive (though of course Z370 is a lot more fully featured than B350) so I think pricing-wise it's probably a wash - but if you want gaming performance on anything except Civ VI then you'd be better off getting the i5.
...oh and if you don't want gaming performance then you'll need to buy a discrete graphics card with the R5 which probably means the platform costs are skewed in favour of Intel a bit (£25 for a GF210, £32 for a R5 230...)
As mentioned when I first called out this omission, I would think comparing a 6-core vs a 4-core irrelevant. This is what AnandTech recommended to look out for on page 4, "Core Wars": Core i5-8400 vs Ryzen 5 1500X. You be the judge of whether this makes sense when there is a far better comparison between the i5-8400 and the R5 1600. Only when you go reading around do you realize that, hey, the i5-8400 seems to be losing in some areas to the 1600. I give AnandTech the benefit of the doubt, so I am done debating what is relevant or not.
The Anandtech benchmark tool confirms what Ryan indicated in the introduction: the i7-8700K wins against the 1600X across the board, due to faster clocks and better IPC. The comparison to the i5-8400 is more interesting. It either beats the 1600X by a hair, or loses rather badly. I think the issue is that the lack of hyperthreading on the i5-8400 makes the 1600X the better all-around performer. But if you mostly run software that can't take advantage of more than 6 threads, then the i5-8400 looks very good.
Personally, I wouldn't buy i5-8400 just because of the socket issue. Coffee Lake is basically just a port of Skylake to a new process, but Intel still came out with a new socket for it. Since I don't want to dump my motherboard in a landfill every time I upgrade my CPU, Intel needs a significantly superior processor (like they had when they were competing against AMD's bulldozer derivatives) to convince me to buy from them.
So Intel still isn't getting their head out of their rear and offering the option of a CPU that trades all the integrated GPU space for additional cores? Moronic.
Integrated graphics make up more than 70% of the desktop market. It's even greater than that for laptops. Why would they sacrifice their huge share of that 70% in order to gain a small share of the 30%? *that* would be moronic.
In the meantime you know that if you buy a desktop CPU from Intel it will have an integrated GPU which works even with no discrete graphics card, and if you need one without integrated graphics you can go HEDT.
Besides, the limit for Intel isn't remotely "additional space", they've got more than enough space for 8/10/12 CPU cores - it's thermal. Having an integrated GPU which is unused doesn't affect that at all - or arguably it gives more of a thermal sink but I suspect in truth that's a wash.
We need a completely new PC architecture: you need more CPU cores, add more CPU cores; you need more GPU cores, add more GPU cores; all of them connected via some sort of Infinity Fabric-like bus and sharing a single RAM. That should be possible to implement. Instead of innovating, Intel is stuck in the 80s architecture introduced by IBM.
There are latency issues with that kind of approach but I'm sure they'd be solvable. It'll be interesting to see what happens with Intel's Mesh when it inevitably trickles down to the lower end / AMD's Infinity Fabric when they launch their APUs.
Such an idea is kinda similar to SGI's shared memory designs. Problem is, scalable systems are expensive, and these days the issue of compatibility is so strong, making anything new and unique is very difficult, companies just don't want to try out anything different. SGI got burned with this re their VW line of PCs.
I think it's a **VERY** safe bet that most systems selling with an i7 8700/k will also include some sort of a discrete GPU. It's almost unimaginable that anyone would buy/build a system with such a CPU but no better GPU than integrated graphics
Which makes the iGPU a total waste of space and a piece of useless silicon that consumers are needlessly paying for (because every extra square inch of die area costs $$$).
For high-end CPUs like the i7s, it would make much more sense to ditch the iGPU and instead spend that extra silicon to add an extra couple of cores, and a ton more cache. Then it would be a far better CPU for the same price.
Of the many hundreds of computers I've bought or been responsible for speccing for corporate and educational entities, about half have been "performance" oriented (I'd always spec a decent i5 or i7 if there's a chance that someone might be doing something CPU limited - hardware is cheap but people are expensive...) Of those maybe 10% had a discrete GPU (the ones for games developers and the occasional higher-up's PC). All the rest didn't.
From chatting to my fellow managers at other institutions this is basically true across the board. They're avidly waiting for the Ryzen APUs to be announced because it will allow them to actually have competition in the areas they need it!
It's not surprising to see business customers largely not caring about graphics performance - or about the hit to CPU performance that results from splitting the TDP budget with the iGPU...
In my experience, business IT people tend to be either penny-wise and pound-foolish, or obsessed with minimizing their departmental TCO while utterly ignoring company performance as a whole. If you could get a much better-performing CPU for the same money, and spend an extra $40 for a discrete GPU that matches or exceeds the iGPU's capabilities - would you care? Probably not. Then again, that's why you'd stick with an i5 - or a lower-grade i7. Save a hundred bucks on hardware per person per year; lose a few thousand over the same period in wasted time and decreased productivity... I've seen this sort of penny-pinching miscalculation too many times to count. (But yeah, it's much easier to quantify the tangible costs of hardware, than to assess/project the intangibles of sub-par performance...)
But when it comes specifically to the high-end i7 range - these are CPUs targeted specifically at consumers, not businesses. Penny-pinching IT will go for i5s or lower-grade i7s; large-company IT will go for Xeons and skip the Core line altogether.
Consumer builds with high-end i7s will always go with a discrete GPU (and often more than one at a time.)
That's just not true, dude. There are a bunch of use cases which spec high-end CPUs but don't need anything more than integrated graphics. In my last-but-one place, for example, they were using a ridiculous Excel spreadsheet to handle the manufacturing and shipping orders which would bring anything less than an i7 with 16GB of RAM to its knees. It didn't need anything better than integrated graphics, but the CPU requirements were ridiculous.
Similarly in a previous job the developers had ludicrous i7 machines with chunks of RAM but only using integrated graphics.
Yes, some IT managers are penny-wise and pound-foolish, but the decent ones who know what they're doing spend the money on the right CPU for the job - and as I say, a serious number of use cases don't need a discrete GPU.
...besides, it's irrelevant, because the integrated GPU has zero impact on performance for modern Intel chips; as I said, the limit is thermal, not package size.
If Intel whack an extra 2 cores on and clock them at the same rate their power budget is going up by 33% minimum - so in exchange for dropping the integrated GPU you get a chip which can no longer be cooled by a standard air cooler and has to have something special on there, adding cost and complexity.
Sticking with integrated GPUs is a no-brainer for Intel. It preserves their market share in that environment and has zero impact for the consumer, even gaming consumers.
Adding 2 cores to a 6-core CPU drives the power budget up by 33% if and **ONLY IF** all cores are actually getting fully utilized. If that is the case, then the extra performance from those extra 2 cores would be indeed actually needed! (at least on those occasions, and would be, therefore, sorely missed in a 6-core chip.). Otherwise, any extra cores would be mostly idle, not significantly impacting power utilization, cooling requirements, or maximum single-thread performance.
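A back-of-the-envelope version of that argument, using a first-order dynamic-power model (this ignores leakage, uncore power and turbo behaviour):

```python
# First-order dynamic power model: P ~ n_active_cores * C * V^2 * f.
# At fixed voltage and clocks, power scales with the active core count.
def relative_power(active_cores, baseline_cores=6):
    return active_cores / baseline_cores

print(f"8 of 8 cores loaded: {relative_power(8) - 1:+.0%}")  # +33% vs a 6-core chip
print(f"6 of 8 cores loaded: {relative_power(6) - 1:+.0%}")  # +0%: idle cores are power-gated
```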
Equally important to the number of cores is the amount of cache. Cache takes up a lot of space, doesn't generate all that much heat (compared to the actual CPU pipeline components), but can boost performance hugely, especially on some tasks that are memory-constrained. Having more L1/L2/L3 cache would provide a much better bang for the buck when you need the CPU grunt (and therefore a high-end i7), than the waste of an iGPU (eating up ~50% of die area) ever could.
Again, when you're already spending top dollar on an i7 8700/k (presumable because you actually need high CPU performance), it makes little sense that you go, "well, I'd rather have **LOWER** CPU performance, than be forced to spend an extra $40 on a discrete GPU (that I could then reuse on subsequent system builds/upgrades for many years to come)"...
Again, that's not true. Adding 2 cores to a 6-core CPU means that unless you find some way to prevent your OS from scheduling threads on them, all those cores are going to end up being used somewhat - which means that you have to plan for your worst-case TDP, not your best-case TDP - which means you have to engineer a cooling solution which will work for the full 8-core CPU, increasing costs for the integrator and the end user. Why do you think Intel has worked so hard to keep the 6-core CPU within a few watts of the old 4-core CPU?
In contrast an iGPU can be switched on or off and remain that way, the OS isn't going to assign cores to it and result in it suddenly dissipating more power.
And again you're focussing on the extremely limited gamer side of things - in the real world you don't "reuse the graphics card for many years to come", you buy a machine which does what you need it to and what you project you'll need it to, then replace it at the end of whatever period you're amortising the purchase over. Adding a $40 GPU and paying the additional electricity costs to run that GPU over time means your TCO is significantly increased for zero benefits, except in a very small number of edge cases in which case you're probably better off just getting a HEDT system anyway.
The argument about cache might be a better one to go down, but the amount of cache in desktop systems doesn't have as big an impact on normal workflow tasks as you might expect - otherwise we'd see greater segmentation in the marketplace anyway.
In short, Intel introducing desktop processors without iGPUs makes no sense for them at all. It would benefit a small number of enthusiasts at a cost of winding up a large number of system integrators and OEMs, to say nothing of a huge stack of IT Managers across the industry who would suddenly have to start fitting and supporting discrete GPUs across their normal desktop systems. Just not a good idea, economically, statistically or in terms of customer service.
The TDP argument as you are trying to formulate it is just silly. Either the iGPU is going to be in fact used on a particular build, or it's going to be disabled in favor of headless operation or a discrete GPU. If the iGPU is disabled, then it is the very definition of all-around WASTE: a waste of performance potential for the money, conversely/accordingly a waste of money, and a waste in terms of manufacturing/materials efficiency. On the other hand, if the iGPU is enabled, it is actually more power-dense than the CPU cores - meaning you'll have to budget even more heavily for its heat and power dissipation than you'd have for any extra CPU cores. So in either case, your argument makes no sense.
Remember, we are talking about the high end of the Core line. If your build is power-constrained, then it is not high-performance and you have no business using a high-end i7 in it. Stick to i5/i3, or the mobile variants, in that case. Otherwise, all these CPUs come with a TDP. Whether the TDP is shared with an iGPU or wholly allocated to CPU is irrelevant: you still have to budget/design for the respective stated TDP.
As far as "real-world", I've seen everything from companies throwing away perfectly good hardware after a year of use, to people scavenging parts from old boxes to jury-rig a new one in a pinch.
And again, large companies with big IT organizations will tend to forego the Core line altogether, since the Xeons provide better TCO economy due to their exclusive RAS features. The top-end i7 really is not a standard 'business' CPU, and Intel really is making a mistake pushing it with the iGPU in tow. That's where they've left themselves wide-open to attack from AMD, and AMD has attacked them precisely along those lines (among others.)
Lastly, don't confuse Intel's near-monopolistic market segmentation engineering with actual consumer demand distribution. Just because Intel has chosen to push an all-iGPU lineup at any price bracket short of exorbitant (i.e. barring the so-called "enthusiast" SKUs), doesn't mean the market isn't clamoring for a more rational and effective alternative.
1) Yes, you're right, if the iGPU isn't being used then it will be disabled, and therefore you don't need to cool it. Conversely, if you have additional cores then your OS *will* use them, and therefore you *do* need to cool them.
The iGPU doesn't draw very much power at all. The HD 2000 drew 3W. The iGPU in the 7700K apparently draws 6W, so I assume the 8700K, with a virtually identical iGPU, draws just as much (figures available via your friendly neighbourhood Google). Claiming the iGPU has a higher power budget than the CPU cores is frankly ridiculous. (In fact it also draws less than 0.2W when it's shut down, which means the cost of having it in there is far outweighed by the additional thermal sink available, but anyway.)
2) Large companies with big IT organisations don't actually forego the Core line altogether and go with Xeons. They could if they wanted to, but in general they still use off-the shelf Dells and HPs for everything except extremely bespoke setups - because, as I previously mentioned, "hardware is cheap, people are expensive" - getting an IT department to build and maintain bespoke computers is hilariously expensive. No-one is arguing that for an enthusiast building their own computer that the option of the extra cores would be nice, but my point all along has been that Intel isn't going to risk sacrificing their huge market share in the biggest market to gain a slice of a much smaller market. That would be extremely bad business.
3) The market isn't "clamoring for a more rational and effective alternative" because if it was then Ryzen would have flown off the shelves much faster than it did.
Bottom line: business IT wants simple solutions, the fewer parts the better. iGPUs on everything fulfil far more needs than dGPUs for some and iGPUs for others. iGPUs make designing systems easier, they make swapouts easier, they make maintenance easier, they reduce TCO, they reduce RMAs and they just make IT staff's lives easier. I've run IT for a university, a school and a manufacturing company, and for each of them the number of computers which needed a fast CPU outweighed the number of computers which needed a dGPU by a factor of at least 10:1 - and the university I worked for had a world-leading art/media/design dept and a computer game design course which all had dGPUs. The average big business has even less use for dGPUs than the places I've worked.
If you want to keep trying to argue this then can you please answer one simple question: why do you think it makes sense for Intel to prioritise a very small area in which they don't have much market share over a very large area in which they do? That seems the opposite of what a successful business should do.
There are pros and cons to having integrated graphics. It sure takes up a lot of die space, but it is something that allows Intel to sell a lot of chips. Amongst enthusiasts this is unnecessary, but this group may only represent a small percentage vs corporates that need only a decent CPU and no fancy graphics. To be honest, Intel could likely have created an 8-core processor easily, since the die size is still fairly small for Coffee Lake, but they chose not to. I don't think it is the graphics that are holding them back.
"The difference between the Ryzen 5 1500X and the Core i3-8350K would be interesting, given the extreme thread deficit (12 threads vs 4) between the two."
The difference between the R5 1500X and i3-8350K goes beyond just the number of threads. The cache is also 2x larger on the Ryzen chip. However, the i3 chip has the advantage of being able to reach higher clock speeds. I do agree that this will be an interesting comparison.
I'm not up to date with current BIOS versions. Is multi-core enhancement still present in Z370 motherboards? That would get rid of all those differences in turbo speeds. I know it is technically overclocking, but I bet it's a pretty safe procedure without increasing the voltages.
Also, what's the deal with the 8700? Is it just as good as the 8700K (minus 100MHz) if one decides not to overclock? Just trying to gather as many practical facts as I can before formulating an upgrade plan (Sandy Bridge user, hehe).
This CPU family looks good on specs and benches (maybe the first worthy successor to Sandy Bridge), but it's not perfect. I hate that Intel decided not to solder; I expect temperatures to soar into the high 80s. Also, the current motherboards are somewhat lacking in ports (USB, LAN, SATA).
I love my Sandy Bridge setup though. 6 1/2 years old and still going strong. Overclocked, cool, stable, silent. With current CPUs you don't get all these points. Even if I upgrade I'm not going to touch it.
"Is multi-core enhancement still present in Z370 motherboards?"
As an option, yes. As default? Will vary board to board. You can disable it. However we had trouble with one of our boards: disabling MCT/MCE and then enabling XMP caused the CPU to sit at 4.3 GHz all day. Related to a BIOS bug which the vendor updated in a hurry.
What's up with those Rise of the Tomb Raider benchmarks? Am I to seriously believe the i5-7400 is more capable than the 8700K... did I miss the overclocking part?
TechReport's review is much better, with results that make sense.
Regarding most normal/gaming scenarios, I'm wondering with the 8700/K whether one couldn't get even better performance by disabling hyperthreading in the UEFI.
That would still yield 6 threads, but now ostensibly with a full 2 MB of L3 per thread. Plus, lower power per core (due to lower resource utilization) might mean more thermal headroom and higher overall sustained frequencies.
So you'd get maximum-possible single-thread performance while still being able to run 6 threads in hardware (which, under most normal usage, isn't even a constraint worth noting...)
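For anyone trying this, a quick way to confirm the UEFI toggle took effect, sketched with the third-party psutil package (an illustration, not a recommendation of any particular tool):

```python
# Verify whether SMT/Hyper-Threading is active after toggling it in UEFI.
# Uses the third-party psutil package (pip install psutil).
import psutil

logical = psutil.cpu_count(logical=True)    # hardware threads
physical = psutil.cpu_count(logical=False)  # physical cores

print(f"{physical} physical cores, {logical} hardware threads")
print("HT/SMT is", "ON" if logical > physical else "OFF")
```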
To expand on this a bit more, with the "core wars" now in effect, I wonder if hyperthreading might be an unnecessary holdover feature that could be actually reducing performance of many(8+)-core chips in all but the most extremely threaded scenarios. Might it not be better to have many simple/efficient cores, rather than perhaps fewer cores loaded with the hyperthreading overhead both in terms of die area and energy density, as well as cache thrashing?
Hyperthreading was invented to optimize the use of CPU logic that would otherwise remain unutilized during high loads. There is no way it reduces performance with current architectures. There are "hyperthreading-less" CPUs, and you can compare them to hyperthreaded CPUs.
Hyperthreading was particularly useful in the context of not having a lot of cores to work with - allowing to squeeze extra multi-threaded performance from your dual- or quad-core CPU. It comes at the cost of extra silicon and complexity in the CPU pipeline, but allows better utilization of CPU resources as you mention. At runtime, it has the dual detrimental effects on single-thread performance, of (1) splitting/sharing the on-CPU cache among more threads, thereby raising the frequency of cache misses for any given thread due to the threads trampling over each other's cached data, and (2) indeed maximizing CPU resource utilization, thereby maximizing dissipated energy per unit area - and thereby driving the CPU into a performance-throttling regime.
With more cores starting to become available per CPU in this age of "core wars", it's no longer as important to squeeze every last ounce of resource utilization from each core. Most workloads/applications are not very parallelizable in practice, so you end up hitting the limits of Amdahl's law - at which point single-thread performance becomes the main bottleneck. And to maximize single-thread performance on any given core, you need two things: (a) maximum attainable clock frequency (resource utilization be damned), and (b) as much uncontested, dedicated on-CPU cache as you can get. Hyperthreading is an impediment to both of those goals.
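To put rough numbers on that Amdahl's law point (this deliberately flatters SMT by treating 12 hardware threads as 12 ideal cores):

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), p = parallel fraction.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.80, 0.95):
    print(f"p={p:.0%}: 6 cores -> {speedup(p, 6):.2f}x, "
          f"12 threads -> {speedup(p, 12):.2f}x")
# Even at p=80%, doubling from 6 to 12 threads only moves 3.00x to 3.75x;
# the serial fraction (i.e. single-thread speed) dominates.
```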
So, it seems to me that if we're going toward the future where we routinely have CPUs with 8 or more cores, then it would be beneficial for each of those cores to be simpler, more compact, more streamlined and optimized for single-thread performance (while foregoing hyperthreading support), while spending any resulting die space savings on more cores and/or more cache.
To add to the above: 'more cores and/or more cache' - and/or better branch predictor, and/or faster/wider ALU and/or FPU, and/or more pipeline stages to support a faster clock, and/or...
The i3-8100 is made utterly redundant by the necessity to buy a Z370 motherboard along with it; it'd be cheaper to get an i5-7400 with a lower-end motherboard. Intel...
This applies to all the non-overclocking chips, particularly the i5 and below. The high cost of the Z370 boards currently simply wipes out any price benefits. For example, the i5-8400 is good value for money, but once you factor in the price of a motherboard with a Z370 chipset, it may not be such good value anymore.
"The problem here is *snip* Windows 10, *snip* All it takes is for a minor internal OS blip and single-threaded performance begins to diminish. Windows 10 famously kicks in a few unwanted instruction streams when you are not looking,"
This is why single-threaded performance is a silly benchmark in today's market, unless you happen to boot to DOS to run something. Your OS is designed to use threads. There are no systems in use today as a desktop (in any market these processors will compete in, even if used as a server) where they will ever run a single thread. The only processors that run single threads today are... single-core processors (without hyperthreading, even).
Open your task manager - click the performance tab - look at the number of threads - when you have enough cores to match that number then single threaded performance is important. In the real world how the processor handles multiple tasks and thread switching is more important. Even hardcore gamers seem to miss this mark forgetting that behind the game the OS has threads for memory management, disk management, kernel routines, checking every piece of hardware in your system, antivirus, anti-malware (perhaps), network stack management, etc. That's not even counting if you run more than one monitor and happen to have web browsing or videos playing on another screen - and anything in the background you are running.
The myth that you never need more than 4 cores is finally coming to rest - let's start seeing benchmarks that stress a system with 10 programs going in the background. My system will frequently be playing a movie, running a game, and running Handbrake in the background while it also serves as a Plex server, runs antivirus, has 32 tabs open in 2 different browsers, and frequently has something else playing at the same time. A true benchmark would be multiple programs all tying up as many resources as possible; while a single app can give a data point, I want to see how these new multi-core beasts handle real-world scenarios and response times.
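For the curious, that Task Manager thread count is easy to pull programmatically too; a sketch using the third-party psutil package:

```python
# Count every thread the OS is currently scheduling (cf. Task Manager).
# Uses the third-party psutil package (pip install psutil).
import psutil

# With attrs=..., access-denied processes report None; treat those as 0.
counts = [p.info["num_threads"] or 0
          for p in psutil.process_iter(attrs=["num_threads"])]
print(f"{sum(counts)} threads across {len(counts)} processes")
```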
Your comment has merit. It is crazy the number of tasks running on a modern OS. I sometimes miss the olden days where a clean system truly was clean and had minimal tasks upon bootup. ;-)
Well yeah, but even with the non-HT i5 and i3, you still have plenty of cores to work with. Even if the OS (or a background task, say Windows Defender?) takes up a thread, you still have other cores for your game engine.
Do we? I've yet to see a good benchmark that measures task switching and multiple workloads; they measure 'program A' that is bad at using cores and 'program B' that is good at using cores.
In today's reality, few people are going to need maximum single-program performance. Outside of very specific types of workloads (render farming or complex simulations for science), please show me the person that is just focused on a single program. I want to see side by side how these chips square off when you have multiple competing workloads that force the scheduler to balance tasks and do multiple context switches, etc. We used to see benchmarks back in the day (single-core days) where they'd do things like run a program designed to completely trash the predictive cache so we'd see 'worst case' performance, and things that would stress a CPU. Now we run a benchmark suite that shows you how fast Handbrake runs *if it's the only thing you run*.
I wonder if there's pressure never to test systems in that kind of real-world manner, perhaps the results would not be pretty. Not so much a damnation of the CPU, rather a reflection of the OS. :D Windows has never been that good at this sort of thing.
An *intelligent* OS thread scheduler would group low-demand/low-priority threads together, to multitask on one or two cores, while placing high-priority and high-CPU-utilization threads on their own dedicated cores. This would maximize performance and avoid thrashing the cache, where and when it actually matters.
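(You can approximate that policy by hand today with CPU affinity - a minimal sketch using Python's Linux-only os.sched_setaffinity; the worker PID is hypothetical:)

    import os

    # Corral this (low-priority) process onto cores 0-1,
    # leaving the remaining cores free for high-demand work.
    os.sched_setaffinity(0, {0, 1})        # pid 0 = the calling process
    print("restricted to cores:", os.sched_getaffinity(0))

    # A high-utilization worker could get a dedicated core instead:
    # os.sched_setaffinity(worker_pid, {5})  # worker_pid is hypothetical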
If Windows 10 makes consistent single-thread performance hard to obtain, then the testing is revealing a fundamental problem (really, a BUG) with the OS' scheduler - not a flaw in benchmarking methodology...
I fail to understand how you guys can review a CPU meant for overclocking and only put non-OC results in your tables.
If I wanted the i7-8700K without overclocking I would pick up the i7-8700 and save $200 across cooling and a cheaper motherboard. And the i7-8700 can turbo all 6 cores to 4.3GHz just like the i7-8700K.
I'm getting lost in all these CPU releases this year, it feels like there is a new CPU coming out every 2 months. Don't get me wrong, I like to have many choices but this is pathetic really. Someone is really desperate for more money.
FireSnake - Thursday, October 5, 2017 - link
Awesome review! Let's read...
prisonerX - Thursday, October 5, 2017 - link
No need, here is a quick summary: "Intel blind panic."
StevoLincolnite - Thursday, October 5, 2017 - link
At least they have finally soundly beaten my 3930K in the mainstream after 6 years. Still, no point me upgrading just yet.
mapesdhs - Friday, October 6, 2017 - link
Even then there's an interesting option if you want threaded performance; I just upgraded to a XEON E5-2680 v2 (IB-EP) for 165 UKP. Lower 1T speed for sure, but MT should be the same or better as a 3930K @ 4.8. No oc means more stable, less heat/noise/power, and being IB-based means it ups the slots to PCIe 3.0. Not a relevant choice for gaming, but a possibility for those doing VMs, rendering, etc., and just want to get by for a little while longer.
Breit - Friday, October 6, 2017 - link
OR search for an XEON E5-1680v2... :) It's an Ivy Bridge-E 8c/16t chip that will fit in Sandy Bridge-E mainboards (X79) and has an unlocked multiplier, as opposed to this E5-2680v2. So with this you won't lose your overclocking ability.
But in the end, I guess that the greatly reduced power draw and the more "modern" platform of an i7-8700K system compared to the X79 platform will give it the edge here.
mapesdhs - Monday, October 9, 2017 - link
Very interesting that the 1680 v2 is unlocked, I didn't know that.
Alas though, availability of the 1680 v2 is basically zero, whereas the 2680 v2 is very easy to find, and the cost of 1680 v2s which are available (outside the UK) is extremely high (typical BIN of 600 UKP, normal auction start price of 350 UKP, completed listings only shown for BIN items which were purchased for between 500 and 600 UKP). By contrast, I bought several 2680 v2s for 165 UKP each. Testing on a P9X79 WS (all-core turbo of 3.1) gives a very impressive 15.44 for CB 11.5, and 1381 for CB R15, which is faster than a stock 8700K (for reference, the 1680 v2 scores 1230 for CB R15). Note the following page on AT has a very handy summary of all the turbo bin levels:
https://www.anandtech.com/show/7852/intel-xeon-e52...
So, I'm very pleased with the 2680 v2 purchase, it's faster than my 3930K @ 4.8, runs with very low temps, much lower power draw, hence less heat, less fan noise and since it's not oc'd it'll be solidly reliable (this particular test system will eventually be in an office in India, so power/heat/reliability is critical). For the target systems in question, it's a great solution. Only thing I noticed so far is it didn't like me trying to set a 2133 RAM speed, but it worked ok at 1866; I can probably tighten the timings instead, currently just at 9/11/10/28/2T (GSkill 32GB kit, 8x4GB).
The 4930K I have though will go into my gaming system (R4E), since I don't mind the oc'ing fun, higher noise, etc., but it's not a system I'll use for converting video, for that I have a 6850K.
Ian.
MrSpadge - Friday, October 6, 2017 - link
Full throttle: yes. Panic: no. Blind: no.
Zingam - Saturday, October 7, 2017 - link
Can you buy it? No? Paper launch of Unobtanium 8000? -> panic, PR propaganda bullshit and dirty Intel marketing tactics as usual targeted at lamer fanboys. This comment is written by an Intel user! ;)
prisonerX - Saturday, October 7, 2017 - link
We've got enough dumb Intel apologists here already, thanks.
coolhardware - Sunday, October 8, 2017 - link
The i7-8700 is *finally* going to replace my trusty i5-2500K. Ordered my 8700 on Amazon http://amzn.to/2y9IamG ($319) and looking forward to a nice upgrade :-) That is a lot of CPU for the money IMHO.
Kudos to AMD for bringing competition back to the CPU market!
sld - Monday, October 9, 2017 - link
In speech: "grateful" to AMD for reinvigorating competitionIn deed: gives money to Intel, effectively taking part in keeping AMD down for that bit longer, maybe causing them to return to non-competitive state, upon which AMD is blamed for being non-competitive once more.
The Invisible Hand doesn't seem so wise, sometimes.
kooya - Tuesday, October 10, 2017 - link
Hear, hear.
WB312 - Wednesday, October 11, 2017 - link
AMD did a fantastic job with Ryzen while Intel were busy milking their customers dry. We should support AMD when they need us most. If AMD goes down it would suck not just for the industry but for technology as a whole.
leexgx - Monday, October 16, 2017 - link
At least you can enable MCE to make all cores run at 4.6GHz (make sure you've got a good cooler). The 8700K would allow you to go to 4.9-5.1GHz with very good cooling.
lordken - Saturday, October 28, 2017 - link
what a "brilliant" asshardware, your kudos are worth shit to AMD as they can't fund further R&D with it. But sure, run to support your milking master because they finally bothered to release, after 10 years, 4+ cores for mainstream...
Budburnicus - Wednesday, November 1, 2017 - link
Salty salty salty people! AMD are big boys too, they can fight for themselves. It is called a FREE MARKET, and until Ryzen, AMD had nothing to even come within spitting distance of an i7-2600K! Which is coincidentally what I upgraded to my 8700K from. Running 4.8GHz on all cores for now; I still have plenty of thermal room, so once more people have figured out all the minute settings, I will just leave it at 4.8 til then! Also, Firestrike! https://www.3dmark.com/3dm/23022903
CPU-Z: https://valid.x86.fr/nkr5vi
That's me, ahead of 96% of all results! And that single-threaded perf is totally insane - as is multi-core; nothing short of 16+ threads can touch it.
Zingam - Saturday, October 7, 2017 - link
I like how the Ryzen 5 1600 beats the much higher TDP and much higher priced Coffee Lake in Civ VI. This should ring a bell about how software is written!!!
mkaibear - Saturday, October 7, 2017 - link
...cherry pick one benchmark, claim that it's the only one that matters...*cough*AMD fanboy*cough*
Zingam - Saturday, October 7, 2017 - link
I don't own anything AMD! Drink some water and stop that coughing!
mapesdhs - Monday, October 9, 2017 - link
:D
masouth - Tuesday, October 10, 2017 - link
lol, he makes a comment on how SOFTWARE is written and you can't jump off YOUR bandwagon (be it Intel or Neutral) quick enough to heap your scorn.
Dolpiz - Saturday, October 14, 2017 - link
https://youtu.be/oCSkyNHXIAE?t=20m48s
quanticchaos - Monday, February 5, 2018 - link
If you consider Ryzen does not have integrated graphics, Coffee Lake beats its TDP hands down.
Crono - Thursday, October 5, 2017 - link
I love the smell of coffee In The Morning
IGTrading - Thursday, October 5, 2017 - link
We know that this is a "short" "pre-review", but it is a bit bizarre that there is no mention of AMD in the conclusion. Not that we consider AMD should necessarily be mentioned in an article dedicated to an Intel launch, BUT Intel's offerings were always discussed in the conclusion section of every AMD review.
So we would consider it only fair to remind people in the conclusion as well that the new Coffee Lake chips from Intel are a welcome addition, but that they are unable to completely dethrone the competition, and should be praised for the fact that AMD will now be forced to lower Ryzen prices a bit.
The way it is right now, the conclusion is written as if Intel is the only alternative, quad or hexa core, with nothing else on the market.
Personal opinion:
Despite me being the technical consultant on the team, this was observed by two of my colleagues (financial consultants), and they even brushed it away themselves as "nitpicking".
Since I've worked in online media myself, this looks very similar to an attempt to post something to "play nice" with Intel's PR, so we've decided to post this comment.
Therefore we eagerly await Ian's full review, with his widely appreciated comprehensive testing and comparisons.
Nevertheless, thank you Ian for your work! It is appreciated.
eddieobscurant - Thursday, October 5, 2017 - link
What did you expect? Anandtech is a pro-Intel review site. They didn't even mention the huge price difference between Intel's Z370 chipset motherboards required for Coffee Lake and AMD's B350 chipset motherboards. It's almost double the price.
RDaneel - Thursday, October 5, 2017 - link
I haven't been following this closely, but does that mean that B350 boards are about $60? That's incredible! The Z370 I'm looking at is only $120, which didn't seem that bad, but if the B350s are really $60-70, then it might be worth checking out. Are they really that cheap?
kpb321 - Thursday, October 5, 2017 - link
Yup. Newegg shows almost a dozen B350 boards for $60-$70 currently. Most are micro ATX but there are a couple of ATX boards in that range currently (after rebate), including "gaming" boards like the MSI B350 TOMAHAWK.
name99 - Thursday, October 5, 2017 - link
Really? Other times I've heard they are a pro-Apple review site, a pro-IBM review site, and a pro-MS review site. They really seem remarkably catholic in whom they support.
seamonkey79 - Friday, October 6, 2017 - link
It depends on the article.
mapesdhs - Friday, October 6, 2017 - link
It depends on the commenter. :D Sites get accused of being everything week to week.
Dr. Swag - Friday, October 6, 2017 - link
Fanboys gonna fanboy
Gastec - Saturday, October 14, 2017 - link
You mean "orthodox"? :)
prisonerX - Thursday, October 5, 2017 - link
The only time we're going to get a fair review of an Intel product is when they no longer dominate the market. It's just the reality of how things work.
Ranger1065 - Friday, October 6, 2017 - link
+1
rtho782 - Friday, October 6, 2017 - link
Eh, as the 8700k is currently unobtainium, it doesn't really matter, as I'm sure the review will be finished by the time it's possible to buy!!
Zingam - Saturday, October 7, 2017 - link
The only problem is you don't have a coffee this morning and the coffee shops are closed. You are smelling it, but it is only in your imagination.
watzupken - Thursday, October 5, 2017 - link
Not sure why there is no R5 1600 in the test though. It would be good to see how the 6-core solutions compete.
Ian Cutress - Thursday, October 5, 2017 - link
We chose a dozen processors we thought would be best for the review graphs. As mentioned on every results page, you can find the other data in our benchmark database, Bench.
https://www.anandtech.com/bench/product/2024?vs=20...
yeeeeman - Thursday, October 5, 2017 - link
Well, you either have bad inspiration or you chose the CPUs from AMD that most people won't buy. You are missing the R7 1700 and R5 1600, which are ~ the same as the new Intel offerings in computing tasks but cost less. So...
watzupken - Thursday, October 5, 2017 - link
I don't know how you define "best" for the review graph. The point is that we are seeing new 6-core solutions from Intel, so very logically, people will be trying to compare apples with apples, i.e. 6-core solutions from both camps. So omitting the results from the 1600 actually looks like more than meets the eye to me, especially when you folks previously did an R5 1600X review.
Ian Cutress - Thursday, October 5, 2017 - link
I'm adding the R5 1600 data. It'll take 10-15 minutes, it's not a quick process. brb
watzupken - Thursday, October 5, 2017 - link
Thank you Ian. Please do add the data for the R5 1600/X. From what I read elsewhere, it appears there is good competition between the Core i5 and R5 1600/X series.
Ian Cutress - Thursday, October 5, 2017 - link
OK, should be added. You might have to CTRL+F5 to clear the cache to see the updated versions.
WinterCharm - Thursday, October 5, 2017 - link
Thanks Ian! It really is the most logical and fair comparison.
WinterCharm - Thursday, October 5, 2017 - link
You really should have chosen SIMILARLY priced chips (so, 1700 / 1600 / 1600X) because it would have shown "here's the performance you get per dollar", which ultimately is what matters.
Ian Cutress - Thursday, October 5, 2017 - link
My ultimate goal is to have a graph widget that lets you select which CPU you want to focus on, and it automatically pulls in several around that price point as well as some of the same family. I'm not that skilled, though.
Error415 - Thursday, October 5, 2017 - link
Performance per dollar doesn't matter, outright performance matters. SMH, only a fool buys the second-best CPU when the best is only a few dollars more.
zuber - Thursday, October 5, 2017 - link
You basically just said "performance per dollar doesn't matter, only a fool ignores performance per dollar".
sonny73n - Friday, October 6, 2017 - link
+1
Zingam - Saturday, October 7, 2017 - link
Not everybody has a rich daddy! Performance per dollar matters in all areas of life! It doesn't matter to very, very rich people or sucker fanboys!
mapesdhs - Monday, October 9, 2017 - link
Again the myth that rich people don't care about wasting money. So wrong. :D As for fanboyism, that kind of label gets hurled in both directions, but IRL it has little meaning.
Gothmoth - Thursday, October 5, 2017 - link
an overclocked Ryzen 1700 is bit for bit the best choice... still. Except for hardcore gamers.
and i bet intel paid you quite a bit to ignore stuff other (less intel biased) reviewers pointed out today.
mkaibear - Thursday, October 5, 2017 - link
Ryzen has no integrated GPU so it can't be the best choice for anyone without a discrete GPU (aka the vast majority of the market - about 70% as per Q1 2017). Ironically the gamers are the ones more likely to snap up Ryzen, as they have discrete graphics cards anyway...
Ananke - Thursday, October 5, 2017 - link
I see the same: the Ryzen 1700 remains the best buy, followed by the Ryzen 1600, recent batches of which seem to have 8 cores instead of 6, for around $170. They do come with a heatsink, another $30 saved. With an OK board it will total $250. Even better, ready-built Dell gaming desktops can get to around $800 with an RX 580 8GB and 16GB RAM with the Ryzen 1700, vs above $1100 for similar Intel. It is literally a no-brainer choice.
Gastec - Saturday, October 14, 2017 - link
Whoa there, rewind! "Ryzen 1600, which recent batches seems to have 8 cores instead of 6". Care to explain more?
Ryan Smith - Thursday, October 5, 2017 - link
"and i bet intel paid you quite a bit to ignore stuff other (less intel biased) reviewers pointed out today."
You'd lose that bet.
Now since we're apparently doing this Jeopardy style, please tell me how much you wagered so that I know how much I'm collecting. Since Intel isn't paying me, you will have to do. ;-)
In all seriousness though, taking sides and taking bribes would be a terrible way to run a business. Trust is everything, so losing the trust of you guys (the readers) would be about the worst possible thing we could do.
FourEyedGeek - Saturday, October 7, 2017 - link
Are you happy for an overclocked Ryzen 1700 to be compared against overclocked Intel processors as well?
gnufied - Thursday, October 5, 2017 - link
Your Bench pages are either loading very slowly or displaying a Gateway Timeout.
Ryan Smith - Thursday, October 5, 2017 - link
Thanks. Having the server team look into it.
Ian Cutress - Saturday, October 7, 2017 - link
Should be sorted now. Found a small issue; pages should be loading in under 2 seconds.
anubis44 - Monday, October 9, 2017 - link
"Not sure why there is no R5 1600 in the test though. It will be good to see how the 6 cores solution compete."
It's essentially as you'd expect. In older, single-threaded code, the Intel CPU has a slight advantage, but in any newer, multi-threaded code, the Ryzen 5 1600's six hyperthreaded cores will dominate. It's time to stop giving Intel money for fewer cores. They don't deserve the cash. Give it to AMD for a change, now that they're genuinely competitive.
rtho782 - Thursday, October 5, 2017 - link
Annnd stock is unpossible to find... Complete paper launch.
Ian Cutress - Thursday, October 5, 2017 - link
Newegg seems to be accepting orders.
rtho782 - Thursday, October 5, 2017 - link
I'm British! :P OcUK put all their allocation (30 pcs) into binned delidded CPUs at £499/£599/£799.
The others are all gone.
krumme - Thursday, October 5, 2017 - link
The 8700k is on backorder there. And for the rest of the world? I can get an 8400 in my country. The 8700k seems to come Dec 2.
I can't remember anything similar in the last 3 decades. Perhaps the P3-1000.
If this is not a paper launch nothing is.
bill.rookard - Thursday, October 5, 2017 - link
If you want a P3-1000 I have one I can sell you! Fully working with motherboard. LOL. I think it also has a whopping 256MB RAM.
watzupken - Thursday, October 5, 2017 - link
I think I read somewhere that supply will be limited, especially at the start.
mapesdhs - Friday, October 6, 2017 - link
Gotta love the way searching on Amazon for 8700K brings back the 7700K (as opposed to simply, Not Found). By grud their search engine is bad. :D
FourEyedGeek - Saturday, October 7, 2017 - link
Bad for them?
mapesdhs - Monday, October 9, 2017 - link
I'm not sure. :D It's certainly annoying though. Worst part is searching for anything and then changing the list order to cheapest first, what a mess...
SunnyNW - Thursday, October 5, 2017 - link
"That changes today."
Anyone else read that and think it is something we should have been reading ages ago?
Consumer technology is progressing slower than many expected, and I feel the same way. Nonetheless I can't help but envision a very near future where I'll be coming back and reading this article and being depressed at this level of technology, all the while on my future monolithic many-thousand-core 3D processor ;)
KAlmquist - Friday, October 6, 2017 - link
Yes. A year ago this would have been an exciting development. Now it's just Intel remaining competitive against AMD's offerings.
Valcoma - Thursday, October 5, 2017 - link
"The Core i5-8400 ($182) and Core i3-8350K ($169) sit near the Ryzen 5 1500X ($189) and the Ryzen 5 1400 ($169) respectively. Both the AMD parts are six cores and twelve threads, up against the 6C/6T Core i5 and the 4C/4T Core i3. The difference between the Ryzen 4 1400 and the Core i3-8350K would be interesting, given the extreme thread deficit between the two."
Those AMD parts are 4 cores, 8 threads.
Ian Cutress - Thursday, October 5, 2017 - link
You're right, had a brain spasm while writing that bit. Updated.
kpb321 - Thursday, October 5, 2017 - link
Still off:
"The difference between the Ryzen 5 1500X and the Core i3-8350K would be interesting, given the extreme thread deficit (12 threads vs 4) between the two."
The 1500X is a 4C/8T processor, so it effectively has hyper-threading over the i3-8350K while having a lower overclocking ceiling and lower IPC.
Zingam - Saturday, October 7, 2017 - link
Drinking too much Coffee, eh?
hansmuff - Thursday, October 5, 2017 - link
Ian, I love the way the gaming benchmarks are listed. So easy to access and much less confusing than drop-downs or arrows. Nice job!
Valcoma - Thursday, October 5, 2017 - link
Are you sure that the i5-7400 got 131 FPS average in benchmark 1 - Spine of the Mountain in Rise of the Tomb Raider? Besting all the other vastly superior processors? Looks like a typing error there, or something went wrong with your benchmark (lower settings, for example, on that run).
Ian Cutress - Thursday, October 5, 2017 - link
I've mentioned it in several reviews in the past: RoTR stage 1 is heavily optimized for quad core. Check our Bench results - the top eight CPUs are all 4C/4T. The minute you add threads, the results plummet.
https://www.anandtech.com/bench/CPU/1827
mapesdhs - Friday, October 6, 2017 - link
Any idea what that optimisation is? Seems odd that adding extra pure cores would harm performance, as opposed to adding HT, which some games don't play nice with. Otherwise, are you saying that for this test, if it was present, the i3 8100 would come out on top? Blimey.
Ian Cutress - Saturday, October 7, 2017 - link
They're either doing something to align certain CPU tasks for AVX, or it's bypassing code. You'd have to ask the developers on that.
mapesdhs - Monday, October 9, 2017 - link
I doubt they'd explain what's happening, might be proprietary code or something.
WickedMONK3Y - Thursday, October 5, 2017 - link
You have the spec of the i7 8700K slightly wrong. It has a base frequency of 3.7GHz, not 3.8GHz.
https://ark.intel.com/products/126684/Intel-Core-i...
Ian Cutress - Thursday, October 5, 2017 - link
Mistake on our part. I was using our previous news post as my source and that had a typo. This review (and that news) should be updated now.
Slomo4shO - Thursday, October 5, 2017 - link
Ian, this is probably your worst review to date. Lackluster choice of CPUs, mid-grade GPU, and lack of direct competition in the product stack... Why would you not use a GTX 1080 Ti or Titan XP?
Ian Cutress - Thursday, October 5, 2017 - link
All the CPUs we've ever tested are in Bench. Plenty of other data in there: the goal was to not put 30+ CPUs into every graph.
Our benchmark database includes over 40 CPUs tested on the GTX 1080, which is the most powerful GPU I could get a set of so I can do parallel testing across several systems. If that wasn't enough (a full test per CPU takes 5 hours per GPU), the minute I get better GPUs I would have to start retesting every CPU, at the exclusion of other content. Our benchmark suite was updated in early Q2, and we're sticking with that set of GPUs (GTX 1080/1060/R9 Fury/RX 480) for a good while for that reason.
Note I had three days to do this review.
crimson117 - Thursday, October 5, 2017 - link
Good job! More people need to know about the Bench...
Slomo4shO - Thursday, October 5, 2017 - link
To be fair, the R5 1600 was added to the benches after the fact. In addition, your other reviews tend to be much more detailed and data-driven, with relevant products and multiple GPUs. Why would I read your review if you expect me to dig through your benchmark to obtain relevant data?
I can understand and appreciate the time crunch but it is a poor excuse for some of the decisions made in this review.
Take it with a grain of salt, this was not your best work.
mapesdhs - Friday, October 6, 2017 - link
Ooohhh the effort of examining the data in Bench! :D First world problems. Sheesh... Run your own tests then, see how you get on with having a life. It's insanely time consuming.
zuber - Thursday, October 5, 2017 - link
I disagree, he mentioned pretty much all the info you need to know about the CPU.
The choice of GPU is hardly even relevant to CPU tests anymore. For gaming performance my 6-year-old i7-2600K is neck and neck with (or faster in some cases than) this new crop of CPUs.
mapesdhs - Friday, October 6, 2017 - link
And if you do need more cores you can always move sideways to a very low cost SB-E or IB-EP. I built a 4.8GHz 2700K system for a friend two years ago; am upgrading it soon to a 3930K at the same clock, replacing the M4E mbd with an R4E, swapping the RAM kits (2x8GB for 4x4GB, both 2400MHz), total cost 200 UKP. 8) And both mbds now have the option of booting from NVMe.
Newer CPUs can have a distinct advantage for some types of 1080p gaming, but with newer GPUs the frame rates are usually so high it really doesn't matter. Move up the scale of resolution/complexity and it quickly becomes apparent there's plenty of life left in SB, etc. zuber, at what clock are you running your 2600K? Also note that P67/Z68 can benefit as well from faster RAM if you're only using 1600 or less atm.
Itveryhotinhere - Thursday, October 5, 2017 - link
No power consumption graph yet?
Ryan Smith - Thursday, October 5, 2017 - link
It's there: https://www.anandtech.com/show/11859/the-anandtech...
Itveryhotinhere - Thursday, October 5, 2017 - link
Thanks
Itveryhotinhere - Thursday, October 5, 2017 - link
Does that power consumption at full load already use boost, or only the base clock?
Ian Cutress - Thursday, October 5, 2017 - link
All-core turbo, as always.
SunnyNW - Thursday, October 5, 2017 - link
Can you please tell me how you got to the +20% frequency for CPU B in the Twitter poll?
mkaibear - Friday, October 6, 2017 - link
Yeah, that doesn't make a lot of sense to me either.
CPU A is the 8600K. Runs at a base of 3.6 and an all-core turbo of 4.1.
CPU B is the 8700. Runs at a base of 3.2 and an all-core turbo of 4.3.
That's either 11% slower (base) or about 5% faster (all-core turbo). Neither is 20%!
If you compare the base speed of the 8600K and the all-core turbo speed of the 8700 then you get about 19.4% which is close enough to 20% I suppose but that's not really a fair comparison?
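(A quick sketch to check the arithmetic:)

    # Poll CPUs' clocks in GHz: A = 8600K, B = 8700
    base_a, turbo_a = 3.6, 4.1
    base_b, turbo_b = 3.2, 4.3

    print(f"base vs base:      {base_b / base_a - 1:+.1%}")   # -11.1%
    print(f"turbo vs turbo:    {turbo_b / turbo_a - 1:+.1%}") # +4.9%
    print(f"turbo B vs base A: {turbo_b / base_a - 1:+.1%}")  # +19.4%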
sonny73n - Friday, October 6, 2017 - link
Nice pointing that out. But there still were about 1,800 blind votes ;)
Ian Cutress - Saturday, October 7, 2017 - link
That was a mistake on my part. On that I'm still mentally in an era where 150 MHz is a 10% gain. My quick mental arithmetic failed.
ScottSoapbox - Thursday, October 5, 2017 - link
It's a shame you didn't compare it to the 7820X. I think it was expected that it would better the 7800X at least to some degree, so the more interesting comparison is how much performance the added cost of 8 cores gets you.
Ryan Smith - Thursday, October 5, 2017 - link
The graphs were already getting ridiculously long. For something like that, be sure to look at Bench: https://www.anandtech.com/bench/product/1904?vs=20...
realistz - Thursday, October 5, 2017 - link
AMD panic mode. Price drop imminent.
Anonymous Blowhard - Thursday, October 5, 2017 - link
Price drop already happened. R7 1700X now USD$300 on Amazon.
willis936 - Thursday, October 5, 2017 - link
I'd like to see the memory scaling testing done on Ryzen repeated on Coffee Lake. It's clear that 2 DDR4 channels is not enough for 8 cores, at least with AMD's memory subsystem. Is it enough for 6 cores with Intel's memory subsystem? Also, please be sure to use a GPU powerful enough to warrant even reporting the gaming results.
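(Back-of-envelope numbers for the concern - a sketch assuming dual-channel DDR4-2666:)

    channels, mt_per_s, bus_bytes = 2, 2666e6, 8   # dual-channel DDR4-2666
    bw = channels * mt_per_s * bus_bytes / 1e9     # ~42.7 GB/s peak

    for cores in (6, 8):
        print(f"{cores} cores: ~{bw / cores:.1f} GB/s per core")
    # 6 cores: ~7.1 GB/s per core; 8 cores: ~5.3 GB/s per core

bharatwd - Thursday, October 5, 2017 - link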
Kaby Lake is faster than Coffee Lake. Where is the 15% increase? What is the point of the + and ++ iterations when there is no improvement in performance? Intel is just burning wafers for no reason. Better for them to go back to the tick-tock clock and stop wasting resources...
SunnyNW - Thursday, October 5, 2017 - link
Honestly I'm not sure why Intel doesn't just keep the fab lines for the 7th gen i5s going, re-label them as 8th gen i3s, and just bin differently, i.e. higher base/turbo.
AleXopf - Thursday, October 5, 2017 - link
Thanks for the review Ian. Just one question: why do you think power consumption differs so much from the data from TechSpot, where the 8700k consumes 190W and is on par with the 12c/24t 1920X?
Ian Cutress - Saturday, October 7, 2017 - link
Are they testing at-wall power consumption at stock? That might add a bunch.
Our power numbers are just for the CPU, not at the wall - they are derived from the internal calibration tools that the processor uses to determine its own power P-states, which in effect is directly related to the turbo.
There seems to be a lot of boards that screw around with multi-core turbo this generation, which may also lead to higher power consumption numbers.
mapesdhs - Monday, October 9, 2017 - link
GN did a great video on this, it's certainly complicated.crimson117 - Thursday, October 5, 2017 - link
Will the included heatsink/cooler be viable on the i7-8700? Or would you still need to buy an aftermarket cooler?
AndrewJacksonZA - Thursday, October 5, 2017 - link
Typo page 7 (Civ AI): "an asymptotic result wken you"
"wken"
jimjamjamie - Thursday, October 5, 2017 - link
RIP hyperthreading for anything under $300...
Anonymous Blowhard - Thursday, October 5, 2017 - link
Buy AMD.
mapesdhs - Tuesday, October 10, 2017 - link
Or a used Intel, sooo much value. I'd been looking for a 4930K upgrade for an X79 system (over a 3930K), so as to provide proper PCIe 3.0, etc.; main focus is animation, rendering and video processing. Gave up, bought a 10-core (20-thread) XEON E5-2680 v2 instead for 165 UKP (very easy to find). It scores 15.44 for CB 11.5, and 1381 for CB R15 (these tests force an all-core turbo of 3.1GHz). Compare these to the 8700K numbers - not bad at all for a board as old as X79, and the temps/power/heat/etc. are excellent.
AndrewJacksonZA - Thursday, October 5, 2017 - link
Thank you very much for your efforts, ladies and gentlemen, this was a really informative review and I enjoyed reading it. :-)
[email protected] - Thursday, October 5, 2017 - link
Here is a more accurate TDP test:
https://img.purch.com/image001-png/w/711/aHR0cDovL...
bongey - Friday, October 6, 2017 - link
Quiet now, Anandtech only publishes what Intel tells them to publish.
Ian Cutress - Saturday, October 7, 2017 - link
Last week I was being called an AMD shill. Before that, an Intel shill. Before that, an AMD shill. Swings, roundabouts, hedges.
risa2000 - Friday, October 6, 2017 - link
This actually makes sense. I wonder how Ian explains (even to himself) that the additional 2 cores in the i7-8700K do not push the power envelope at all compared to the i7-7700K. Is it because in the Anandtech benchmark the 8700K uses only 4 cores? Or uses 6 but throttles them down to stay within the power limit?
sonny73n - Friday, October 6, 2017 - link
I was also puzzled about some test results in this review, but after reading through the comment section, I conclude that this is indeed his worst review to date. He mentioned that he only had 3 days for this review. Maybe this is the reason.
sonny73n - Friday, October 6, 2017 - link
I forgot to mention his lack of AMD comparisons, among little mistakes here and there.
DanNeely - Friday, October 6, 2017 - link
At a guess Anand's temp test isn't using the bigger AVX sizes, because at full load the hugely wide calculations use a lot more power than anything else - to the extent that by default they have a negative clock speed offset of several hundred MHz. I'm not sure how or if MCT shenanigans affect the AVX clock speed.
Ian Cutress - Saturday, October 7, 2017 - link
You'd be surprised at how aggressively Intel is binning. We've said for a long while that Intel never pushes its chips to the limits in DVFS or frequency (like AMD does), so if they were willing to make higher SKUs and commit to their usual warranty, it should be an easy enough task.
Our power benchmark loads up all threads.
Ian Cutress - Saturday, October 7, 2017 - link
Is Paul/Igor testing at-wall power consumption at stock? That might add a bunch. Even the lower-end CPUs seem out of whack there. Our power numbers are just for the CPU, not at the wall.
Our numbers are derived from the internal calibration tools that the processor uses to determine its own power P-states, which in effect is directly related to the turbo. We run a P95 workload during our power run, and pull the data from the internal tools after 30 seconds of full load, which is long enough to hit peak power without hitting any thermal issues or PL2/PL3 states.
There seems to be a lot of boards that screw around with multi-core turbo this generation, which may also lead to higher power consumption numbers.
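(Those internal counters are exposed to software too; on Linux, a rough sketch of reading the package-level RAPL energy counter via the powercap sysfs interface - the path can vary by system, and reading it often needs root:)

    import time

    # Intel RAPL package energy counter, in microjoules (path may vary).
    RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"

    def read_uj():
        with open(RAPL) as f:
            return int(f.read())

    e0 = read_uj()
    time.sleep(1.0)                 # sample under whatever load is running
    e1 = read_uj()
    print(f"package power: ~{(e1 - e0) / 1e6:.1f} W")  # counter can wrap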
MingoDynasty - Thursday, October 5, 2017 - link
There is a typo in the article. 7700K has a TDP of 91W, not 95W.
Ian Cutress - Saturday, October 7, 2017 - link
Intel played shenanigans with the 7700W TDP. They gave it as one value in the first briefings, then switched to the other just before launch, then switched back again after launch.
mapesdhs - Tuesday, October 10, 2017 - link
I know it's only a typo, but I still had to laugh at "7700W TDP". ;D
#EditingForATForums
W1nTry - Thursday, October 5, 2017 - link
There also seems to be a price change for the 1800X and 1700X, which are as much as 100 USD lower than what's reflected on the charts. I think that would factor in notably.
madwolfa - Thursday, October 5, 2017 - link
How in the world is the 8400 so significantly faster than the 7700K/8700K in all the RoTR 1080p benchmarks?
neo_1221 - Thursday, October 5, 2017 - link
Maybe resource contention on the hyper-threaded parts? It is odd, but I'm very impressed with that 8400. For most workloads it easily hangs out with the $300+ CPUs.
risa2000 - Friday, October 6, 2017 - link
Not only in RoTR but also in GTA V. I hope there is an explanation the guys at AT will figure out. If it was the congestion of the threads (as suggested above) then all the Ryzen chips should be even worse, but they are not.
mapesdhs - Friday, October 6, 2017 - link
According to Ian, RoTR has pure quad-core optimisations present in the engine.
nsaklas - Thursday, October 5, 2017 - link
Good info Ian, thank you. Am I the only one who's terribly disappointed by this release?! I've been holding out for this moment to upgrade, and what I can gather from the benchmarks is that this will have no noticeable improvement on performance for most applications vs. the last 2 gens of CPUs...
xyvyx2 - Friday, October 6, 2017 - link
yeah.. when I saw these numbers, I figured I'd go back to waiting for 10nm or Ryzen 2. But Techreport's comparison a) used a 1080 Ti, which I also have, and b) included my current CPU, the 4790K. The results were far more pronounced and closer to what I'd hoped...
I'm mostly baffled by the i5-8400... if it's just 6 cores and it did so much better than the 8700K at a lower turbo clock on those thread-crippled games, would the new i7 have done better with HT disabled? Would it run cooler with only 6 threads?
limitedaccess - Thursday, October 5, 2017 - link
I'm wondering if you can provide information on what the uncore speeds are for the various Coffee Lake SKUs.
limitedaccess - Thursday, October 5, 2017 - link
For example, would the 8700 and 8700K possibly differ in uncore speed?
Ian Cutress - Saturday, October 7, 2017 - link
Usually not, though Intel doesn't provide this information for all the chips in the stack for various (unfathomable) reasons. We've asked before.
mapesdhs - Tuesday, October 10, 2017 - link
Is it possible to use cache snooping and other methods to work out uncore speeds?
DigitalFreak - Thursday, October 5, 2017 - link
Anyone having an issue with Bench? I'm trying to compare my i7-3770K to the i7-8700K and it comes back with no data. Same with trying the Threadripper 1920X.
mkaibear - Friday, October 6, 2017 - link
The CPU tests changed, so benchmarks weren't comparable. The latest processor tested on the old tests was the 7700K IIRC, and not everything is tested on the new tests.
I'd compare results for the 3770K and the 2600K to get a baseline, then you can compare the 2600K to the 8700K. It's a bit fiddly; I have to do the same with my 4790K.
Ian Cutress - Saturday, October 7, 2017 - link
We updated our CPU testing suite for Windows 10 in Q1. Regression testing is an ongoing process, though it's been slow because of all the CPU launches this year. Normally we have 1 or 2 a year. We're so far at what, 6 or 7 for 2017?
mczak - Thursday, October 5, 2017 - link
Doesn't look to me like the die size actually increased at all due to the increased gate pitch.
The calculations in the article forgot to account for the increase in the unused area (at the bottom left) - this area is tiny on the 2c die, but increases significantly with each 2 cores added. By the looks of it, that unused area would have grown by about 2 mm^2 or so going from 4 to 6 cores, albeit I'm too lazy to count the pixels...
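(If anyone does count the pixels, the conversion is simple - a sketch where every input is a made-up placeholder:)

    # Estimate a region's area from a die shot of known total area.
    die_area_mm2  = 151.0         # assumed total die area
    die_pixels    = 1920 * 1080   # assumed pixels covered by the die in the shot
    unused_pixels = 27_500        # assumed pixel count of the unused corner

    mm2_per_pixel = die_area_mm2 / die_pixels
    print(f"unused region ~= {unused_pixels * mm2_per_pixel:.1f} mm^2")  # ~2.0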
jjj - Thursday, October 5, 2017 - link
Your conclusion is the weirdest thing ever; you fully ignore the 8350K and AMD.
In retail, the 8350K will do very, very well, and retail is what matters for most readers.
And ignoring AMD is not OK at all, it's like you think that we are all idiots who buy on brand. You do think that, your system guides make that very clear, but you should not accept, support and endorse such idiotic behavior.
AMD got hit hard here, Intel takes back the lead and it's important to state that. Sure they might have Pinnacle Ridge in a few months and take back the lead but buyers that can't wait should go with Intel right now, for the most part. AMD could also adjust prices ofc.
Tigris - Thursday, October 5, 2017 - link
Really confused why the pricing listed in this review isn't consistent - for Intel you were posting prices you found online, but for Ryzen you appear to be posting MSRP.
The truth is, you can find the 1700X for $298 right now EASILY (Amazon), yet Microcenter is selling the 8700K for $499.
If you factor this information in, the AMD solutions are still far better value per dollar.
wolfemane - Thursday, October 5, 2017 - link
I really can't believe the amount of flak Anandtech takes these days. I find it unearned and unwarranted. Out of all the tech sites and forums I manage to read in a given week, Anandtech is the most often quoted and linked to. Hell, I use it as my go-to for reference and comparison (and general reading). My only big complaint is your ads, and I'd gladly pay a sub to completely remove that nonsense and directly support the site!
Ian, you and your staff deserve far more credit than you get, and that's an injustice. Each piece is pretty thorough and pretty spot on. So for that, thank you very much.
This article is no exception to the rule and is superb. Your graph layouts are a welcome feature!!!!! I look forward to your ever-expanding tests as new chips roll in. I think the 8600K is going to be a game changer in the i5 vs i7 performance category for these hexacore CPUs. I think that's why almost all the reviews I'm reading today are with the 8700K and 8400.
Again, thank you and your staff very much for the work you put into publishing amazing articles!!
vanilla_gorilla - Thursday, October 5, 2017 - link
Personally I buy whatever is best at the time. Right now I'm typing this on a 1700X and I can see a 4770K build on the desk next to me. So it's always funny to see the bias. An Intel review gets posted, and AMD fanboys come out of the woodwork to trash them as paid shills. But it works exactly the same on any positive AMD reviews: Intel fans come in trashing them. It's really odd. Anandtech is one of the most unbiased sites I've found and I trust their reviews implicitly.
mkaibear - Saturday, October 7, 2017 - link
> Anandtech is one of the most unbiased sites I've found and I trust their reviews implicitly.
Yep. Anyone who looks at AT and sees bias needs to examine their own eyesight.
SeannyB - Thursday, October 5, 2017 - link
For the H.264 encoding tests, you could consider using the "medium" preset or better. The "very fast" preset has a tendency to use fewer cores.
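(Easy to check at home - a sketch driving ffmpeg's libx264 from Python; the input file name is a placeholder, and x264 spells the preset "veryfast":)

    import subprocess

    # Encode the same clip with two x264 presets and watch per-core
    # utilization while each run is active (output is discarded).
    for preset in ("veryfast", "medium"):
        subprocess.run(["ffmpeg", "-y", "-i", "input.mkv",
                        "-c:v", "libx264", "-preset", preset,
                        "-f", "null", "-"], check=True)

sirmo - Thursday, October 5, 2017 - link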
No temperature comparison? According to some reviews, I see Intel stubbornly continues to rely on their horrible TIM solution. After my horrible experience with Haswell overheating, I am not considering Intel again in my build until this is fixed. There is nothing worse than when your build starts crashing because of overheating 2 years down the road.
shreduhsoreus - Thursday, October 5, 2017 - link
The 8100 is basically a 6600 locked at its quad-core turbo frequency. Pretty decent for $120.
Gordo-UT - Thursday, October 5, 2017 - link
Looking at these results, there is still no reason to migrate from my i7-2600K. It's been running at 4.6GHz for years now, flawlessly, with equal or better performance than all that came after. It would seem Moore's law failed to hold true for the past 5 years.
For the price of a new i7, you can buy a used i7-2600K, a Z68 or Z77 mobo, and 16GB of RAM. It looks like that situation will not end soon.
Intel has to give me a compelling reason to spend a ton of money on NEW stuff.
vanilla_gorilla - Thursday, October 5, 2017 - link
"It would seem Moore's law failed to hold true for the past 5 years."Moore's Law is about transistor density, not performance.
mapesdhs - Friday, October 6, 2017 - link
For quite a lot less than a new i7, I bought a 3930K, an ASUS R4E and 16GB/2400 RAM. And a 120mm AIO. And a 2nd R4E. :D
Compelling reasons are perhaps more likely to come from peripheral and I/O scenarios, eg. if one wants the latest USB 3.1, M.2 or somesuch. However, add-in cards solve most of these, and numerous SB/SB-E boards can now boot from NVMe. I saw great numbers with M.2 drives and Z68, and because of the wiring it should be possible for an M4E to use an M.2 as boot and have two GPUs at x16 (aka 3.0 @ x8, same as the modern mainstream split), while X79 has enough lanes anyway.
mapesdhs - Tuesday, October 10, 2017 - link
(btw, when I say a lot less than a new i7, I did mean the 8700K)
SuperRobomike - Thursday, October 5, 2017 - link
The 5930K does surprisingly well compared to the 8700K in the gaming benchmarks (at least for the games tested). Any particular reason it would be doing so well? Can't think of any advantage it would have besides the quad-channel memory.
CTHL - Thursday, October 5, 2017 - link
Why are all these benches between sites so wildly different (with supposedly the same settings/specs)? RoTR has had some of the most absurd results... one has the 8700K beating everything by 20fps, another has the 7700K over the 8700K by 40fps, and one even had a 6600K beating everything.
firerod1 - Thursday, October 5, 2017 - link
i5-8400 is king of performance/dollar!
watzupken - Saturday, October 7, 2017 - link
The i5 8400 looks like a good value processor until you factor in the price of a Z370 motherboard, which is the only chipset available.
Chaser - Thursday, October 5, 2017 - link
Thank you AMD.
vanilla_gorilla - Thursday, October 5, 2017 - link
Exactly! No matter what side you're on, you gotta love the fact that competition is back in the x86 desktop space! And it looks like the AMD 1700X is now under $300 on Amazon. Works both ways!
TEAMSWITCHER - Friday, October 6, 2017 - link
I just don't see it this way. Since the release of Haswell-E in 2014 we've had sub-$400 six-core processors. While some like to compartmentalize the industry into mainstream and HEDT, the fact is, I built a machine with similar performance three years ago, for a similar price. Today's full-featured Z370 motherboards (like the ROG Maximus X) cost nearly as much as X99 motherboards from 2014. To say that Intel was pushed by AMD is simply not true.
watzupken - Friday, October 6, 2017 - link
I feel the fact that Intel had to rush a 6-core mainstream processor out in the same year they introduced Kaby Lake is a sign that AMD is putting pressure on them. You may have found a Haswell-E chip for sub-400 bucks in 2014, but you need to be mindful that Intel historically has only increased prices due to the lack of competition. Now you are seeing 6-core mainstream chips from both AMD and Intel below 200 bucks. Motherboard prices are difficult to compare since there are lots of motherboards out there that are over-engineered and cost significantly more. Assuming you pick the cheapest Z370 motherboard out there, I don't believe it's more expensive than an X99 board.
mapesdhs - Friday, October 6, 2017 - link
KL-X is dead, that's for sure. Some sites claim CFL was not rushed, in which case Intel knew KL-X would be pointless when it was launched. People claiming Intel was not affected by AMD have to choose: either CFL was rushed because of pressure from AMD, or Intel released a CPU for a mismatched platform they knew would be irrelevant within months.
There's plenty of evidence Intel was in a hurry here, especially the way X299 was handled, and the horrible heat issues, etc. with SL-X.
mapesdhs - Friday, October 6, 2017 - link
PS. Is it just me or are we almost back to the days of the P4, where Intel tried to maintain a lead really by doing little more than raising clocks? It wasn't that long ago there was much fanfare when Intel released its first minimum-4GHz part (4790K IIRC), even though we all knew they could run their CPUs way quicker than that if need be (stock-voltage oc'ing has been very productive for a long time). Now all of a sudden Intel is nearing 5GHz speeds, but it's kinda weird there's no accompanying fanfare given the reaction to their finally reaching 4GHz with the 4790K. At least in the mainstream, has Intel really just reverted to a MHz race to keep its performance up? Seems like it, but OS issues, etc. are preventing those higher bins from kicking in.
KAlmquist - Friday, October 6, 2017 - link
Intel has been pushing up clock speeds, but (unlike the P4) not at the expense of IPC. The biggest thing that Intel has done to improve performance in this iteration is to increase the number of cores.
mapesdhs - Tuesday, October 10, 2017 - link
Except in reality it's often not that much of a boost at all, and in some cases slower because of how the OS is affecting turbo levels.Remember, Intel could have released a CPU like this a very long time ago. As I keep having to remind people, the 3930K was an 8-core chip with two cores disabled. Back then, AMD couldn't even compete with SB, never mind SB-E, so Intel held back, and indeed X79 never saw a consumer 8-core part, even though the initial 3930K was a XEON-sourced crippled 8-core.
Same applies to the mainstream; we could have had 6-core models ages ago. All they've really done to counter the lack of IPC improvements is boost the clocks way up. We're now approaching standard bin levels that years ago were considered top-notch overclocks, unless one was using giant air coolers, decent AIOs or better.
wr3zzz - Thursday, October 5, 2017 - link
I hope Anandtech solves the Civ6 AI benchmark soon. It's almost as important as compression and encoding benchmarks for me when deciding between CPU price-performance options, as I am almost always GPU-constrained in games.
Ian Cutress - Saturday, October 7, 2017 - link
We finally got in contact with the Civ6 dev team to integrate the AI benchmark into our suite better. You should see it moving forward.
Koenig168 - Friday, October 6, 2017 - link
Hmm... rather disappointing that Anandtech did not include the Ryzen 1600/X until called out by astute readers.
mkaibear - Friday, October 6, 2017 - link
...apart from including all the data in their benchmark tool, which they make freely available, you mean? They put in the CPUs they felt were most relevant. The readership disagreed, so they changed it using their benchmark database. That level of service is almost unheard of in the industry, and all you can do is complain. Bravo.
Koenig168 - Friday, October 6, 2017 - link
Irrelevant. While I agree with most of what you said, that does not change the fact that Anandtech did not include the Ryzen 1600/X until called out by astute readers. To make things a little clearer for you, the i7-8700 is a 6C/12T processor. The Ryzen 1600 is a 6C/12T processor. Therefore, a comparison with the Ryzen 1600 is relevant.
You should have addressed the point I made. Instead all you can do is complain about my post. Bravo. (In case this goes over your head again, that last bit is added just to illustrate how pointless such comments are.)
mkaibear - Saturday, October 7, 2017 - link
So your point is, in essence, "they didn't do what I wanted them to do so they're damned for all time".They put up the comparison they felt was relevant, then someone asked them to include something different - so they did it. They listened to their readers and made changes to an article to fix it.
Should they have put the R5 in the original comparison? Possibly. I can see arguments either way but if pushed I'd have said they should have done - but since even the 1600X gets beaten by the 8400 in virtually every benchmark on their list (as per https://www.anandtech.com/bench/product/2018?vs=20... they would then have been accused by the lurking AMD fanboys of having picked comparisons to make AMD look bad (like on every other article where AMD gets beaten in performance).
So what are you actually upset about? That they made an editorial decision you disagree with? You can't accuse them of hiding data since they make it publicly accessible. You can't accuse them of not listening to the readers because they made the change when asked to. Where's the issue here?
mkaibear - Saturday, October 7, 2017 - link
OK, on further reading it's not "virtually every" benchmark on the list, just more than half. It's 50% i5 win, 37% R5 win, 12% tied. So not exactly a resounding triumph for the Ryzen, but not as bad as I made it out to be.
In the UK the price differential is about £12 in favour of the i5, although the motherboard is about £30 more expensive (though of course Z370 is a lot more fully featured than B350), so I think pricing-wise it's probably a wash - but if you want gaming performance on anything except Civ VI then you'd be better off getting the i5.
...oh and if you don't want gaming performance then you'll need to buy a discrete graphics card with the R5 which probably means the platform costs are skewed in favour of Intel a bit (£25 for a GF210, £32 for a R5 230...)
watzupken - Saturday, October 7, 2017 - link
As mentioned when I first called out this omission, I would think comparing a 6-core vs a 4-core is irrelevant. This is what AnandTech recommended to look out for on page 4, "Core Wars": Core i5-8400 vs Ryzen 5 1500X.
You be the judge of whether this makes sense when there is a far better comparison between the i5 8400 and the R5 1600. Only when you go reading around do you realize that, hey, the i5 8400 seems to be losing in some areas to the 1600. I give AnandTech the benefit of the doubt, so I am done debating what is relevant or not.
KAlmquist - Friday, October 6, 2017 - link
The Anandtech benchmark tool confirms what Ryan indicated in the introduction: the i7-8700K wins against the 1600X across the board, due to faster clocks and better IPC. The comparison to the i5-8400 is more interesting. It either beats the 1600X by a hair, or loses rather badly. I think the issue is that the lack of hyperthreading on the i5-8400 makes the 1600X the better all-around performer. But if you mostly run software that can't take advantage of more than 6 threads, then the i5-8400 looks very good.
Personally, I wouldn't buy the i5-8400 just because of the socket issue. Coffee Lake is basically just a port of Skylake to a new process, but Intel still came out with a new socket for it. Since I don't want to dump my motherboard in a landfill every time I upgrade my CPU, Intel needs a significantly superior processor (like they had when they were competing against AMD's Bulldozer derivatives) to convince me to buy from them.
GreenMeters - Friday, October 6, 2017 - link
So Intel still isn't getting their head out of their rear and offering the option of a CPU that trades all the integrated GPU space for additional cores? Moronic.
mkaibear - Friday, October 6, 2017 - link
Integrated graphics make up more than 70% of the desktop market. It's even greater than that for laptops. Why would they sacrifice their huge share of that 70% in order to gain a small share of the 30%? *That* would be moronic.
In the meantime you can know that if you buy a desktop CPU from Intel it will have an integrated GPU which works even with no discrete graphics card, and if you need one without the integrated graphics you can go HEDT.
Besides, the limit for Intel isn't remotely "additional space", they've got more than enough space for 8/10/12 CPU cores - it's thermal. Having an integrated GPU which is unused doesn't affect that at all - or arguably it gives more of a thermal sink but I suspect in truth that's a wash.
Zingam - Saturday, October 7, 2017 - link
We need a completely new PC architecture - you need more CPU cores, add more CPU cores; you need more GPU cores, add more GPU cores; all of them connected via some sort of Infinity Fabric-like bus and sharing a single pool of RAM. That should be possible to implement. Instead of innovating, Intel is stuck in the 80s architecture introduced by IBM.
mkaibear - Saturday, October 7, 2017 - link
Well, I'd broadly agree with that! There are latency issues with that kind of approach, but I'm sure they'd be solvable. It'll be interesting to see what happens with Intel's Mesh when it inevitably trickles down to the lower end, or AMD's Infinity Fabric when they launch their APUs.
mapesdhs - Tuesday, October 10, 2017 - link
Such an idea is kinda similar to SGI's shared-memory designs. Problem is, scalable systems are expensive, and these days the issue of compatibility is so strong that making anything new and unique is very difficult; companies just don't want to try out anything different. SGI got burned with this re their VW line of PCs.
boeush - Saturday, October 7, 2017 - link
I think it's a **VERY** safe bet that most systems selling with an i7 8700/K will also include some sort of a discrete GPU. It's almost unimaginable that anyone would buy/build a system with such a CPU but no better GPU than integrated graphics.
Which makes the iGPU a total waste of space and a piece of useless silicon that consumers are needlessly paying for (because every extra square inch of die area costs $$$).
For high-end CPUs like the i7s, it would make much more sense to ditch the iGPU and instead spend that extra silicon to add an extra couple of cores, and a ton more cache. Then it would be a far better CPU for the same price.
So I'm totally with the OP on this one.
mkaibear - Sunday, October 8, 2017 - link
You need a better imagination!
Of the many hundreds of computers I've bought or been responsible for speccing for corporate and educational entities, about half have been "performance" oriented (I'd always spec a decent i5 or i7 if there's a chance that someone might be doing something CPU limited - hardware is cheap but people are expensive...) Of those maybe 10% had a discrete GPU (the ones for games developers and the occasional higher-up's PC). All the rest didn't.
From chatting to my fellow managers at other institutions this is basically true across the board. They're avidly waiting for the Ryzen APUs to be announced because it will allow them to actually have competition in the areas they need it!
boeush - Sunday, October 8, 2017 - link
It's not surprising to see business customers largely not caring about graphics performance - or about the hit to CPU performance that results from splitting the TDP budget with the iGPU...
In my experience, business IT people tend to be either penny-wise and pound-foolish, or obsessed with minimizing their departmental TCO while utterly ignoring company performance as a whole. If you could get a much better-performing CPU for the same money, and spend an extra $40 for a discrete GPU that matches or exceeds the iGPU's capabilities - would you care? Probably not. Then again, that's why you'd stick with an i5 - or a lower-grade i7. Save a hundred bucks on hardware per person per year; lose a few thousand over the same period in wasted time and decreased productivity... I've seen this sort of penny-pinching miscalculation too many times to count. (But yeah, it's much easier to quantify the tangible costs of hardware than to assess/project the intangibles of sub-par performance...)
But when it comes specifically to the high-end i7 range - these are CPUs targeted specifically at consumers, not businesses. Penny-pinching IT will go for i5s or lower-grade i7s; large-company IT will go for Xeons and skip the Core line altogether.
Consumer builds with high-end i7s will always go with a discrete GPU (and often more than one at a time.)
mkaibear - Monday, October 9, 2017 - link
That's just not true dude. There are a bunch of use cases which spec high end CPUs but don't need anything more than integrated graphics. In my last but-one place, for example, they were using a ridiculous Excel spreadsheet to handle the manufacturing and shipping orders which would bring anything less than an i7 with 16Gb of RAM to its knees. Didn't need anything better than integrated graphics but the CPU requirements were ridiculous.Similarly in a previous job the developers had ludicrous i7 machines with chunks of RAM but only using integrated graphics.
Yes, some IT managers are penny-wise and pound-foolish, but the decent ones who know what they're doing spend the money on the right CPU for the job - and as I say, a serious number of use cases don't need a discrete GPU.
...Besides, it's irrelevant, because the integrated GPU has zero impact on performance for modern Intel chips; as I said, the limit is thermal, not package size.
If Intel whack an extra 2 cores on and clock them at the same rate, their power budget goes up by 33% minimum - so in exchange for dropping the integrated GPU you get a chip which can no longer be cooled by a standard air cooler and has to have something special on there, adding cost and complexity.
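To make that arithmetic concrete, here's a rough back-of-the-envelope sketch; the even per-core power split and fixed clocks are assumptions for illustration, not measurements:

```python
# Back-of-the-envelope: scale a 6-core power budget to 8 cores at the
# same clocks. 95 W is the 8700K's rated TDP; the even per-core split
# is an assumption, not a measurement.
CORES_NOW, CORES_MORE, TDP_NOW = 6, 8, 95.0

per_core = TDP_NOW / CORES_NOW        # ~15.8 W per core, assumed
tdp_more = per_core * CORES_MORE      # ~126.7 W for 8 cores
print(f"Estimated 8-core budget: {tdp_more:.0f} W "
      f"(+{(tdp_more / TDP_NOW - 1) * 100:.0f}%)")  # +33%
```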
Sticking with integrated GPUs is a no-brainer for Intel. It preserves their market share in that environment and has zero impact for the consumer, even gaming consumers.
boeush - Monday, October 9, 2017 - link
Adding 2 cores to a 6-core CPU drives the power budget up by 33% if and **ONLY IF** all cores are actually fully utilized. If that's the case, then the extra performance from those 2 extra cores is indeed needed (at least on those occasions), and would therefore be sorely missed on a 6-core chip. Otherwise, the extra cores would sit mostly idle, without significantly impacting power draw, cooling requirements, or maximum single-thread performance.
Equally important to the number of cores is the amount of cache. Cache takes up a lot of space, doesn't generate all that much heat (compared to the actual CPU pipeline components), but can boost performance hugely, especially on tasks that are memory-constrained. Having more L1/L2/L3 cache would provide much better bang for the buck when you need the CPU grunt (and therefore a high-end i7) than the waste of an iGPU (eating up ~50% of die area) ever could.
Again, when you're already spending top dollar on an i7 8700/K (presumably because you actually need high CPU performance), it makes little sense to say, "well, I'd rather have **LOWER** CPU performance than be forced to spend an extra $40 on a discrete GPU (that I could then reuse on subsequent system builds/upgrades for many years to come)"...
mkaibear - Tuesday, October 10, 2017 - link
Again, that's not true. Adding 2 cores to a 6-core CPU means that unless you find some way to prevent your OS from scheduling threads on them, all those cores are going to end up used somewhat - which means you have to plan for your worst-case TDP, not your best-case TDP - which means you have to engineer a cooling solution that works for the full 8-core CPU, increasing costs for the integrator and the end user. Why do you think Intel has worked so hard to keep the 6-core CPU within a few watts of the old 4-core CPU?
In contrast, an iGPU can be switched on or off and remain that way; the OS isn't going to assign threads to it and cause it to suddenly dissipate more power.
And again, you're focussing on the extremely limited gamer side of things - in the real world you don't "reuse the graphics card for many years to come"; you buy a machine which does what you need it to do (and what you project you'll need it to do), then replace it at the end of whatever period you're amortising the purchase over. Adding a $40 GPU and paying the additional electricity costs to run it over time means your TCO is significantly increased for zero benefit, except in a very small number of edge cases - in which case you're probably better off just getting a HEDT system anyway.
The argument about cache might be a better one to go down, but the amount of cache in desktop systems doesn't have as big an impact on normal workflow tasks as you might expect - otherwise we'd see greater segmentation in the marketplace anyway.
In short, Intel introducing desktop processors without iGPUs makes no sense for them at all. It would benefit a small number of enthusiasts at a cost of winding up a large number of system integrators and OEMs, to say nothing of a huge stack of IT Managers across the industry who would suddenly have to start fitting and supporting discrete GPUs across their normal desktop systems. Just not a good idea, economically, statistically or in terms of customer service.
boeush - Tuesday, October 10, 2017 - link
The TDP argument as you're trying to formulate it is just silly. Either the iGPU is going to be used on a particular build, or it's going to be disabled in favor of headless operation or a discrete GPU. If the iGPU is disabled, then it is the very definition of all-around WASTE - a waste of performance potential for the money, conversely/accordingly a waste of money, and a waste in terms of manufacturing/materials efficiency. On the other hand, if the iGPU is enabled, it is actually more power-dense than the CPU cores - meaning you'll have to budget even more heavily for its heat and power dissipation than you would for any extra CPU cores. So in either case, your argument makes no sense.
Remember, we are talking about the high end of the Core line. If your build is power-constrained, then it is not high-performance and you have no business using a high-end i7 in it. Stick to i5/i3, or the mobile variants, in that case. Otherwise, all these CPUs come with a TDP. Whether the TDP is shared with an iGPU or wholly allocated to the CPU is irrelevant: you still have to budget/design for the stated TDP.
As far as "real-world", I've seen everything from companies throwing away perfectly good hardware after a year of use, to people scavenging parts from old boxes to jury-rig a new one in a pinch.
And again, large companies with big IT organizations will tend to forego the Core line altogether, since the Xeons provide better TCO economy due to their exclusive RAS features. The top-end i7 really is not a standard 'business' CPU, and Intel really is making a mistake pushing it with the iGPU in tow. That's where they've left themselves wide-open to attack from AMD, and AMD has attacked them precisely along those lines (among others.)
Lastly, don't confuse Intel's near-monopolistic market segmentation engineering with actual consumer demand distribution. Just because Intel has chosen to push an all-iGPU lineup at any price bracket short of exorbitant (i.e. barring the so-called "enthusiast" SKUs), doesn't mean the market isn't clamoring for a more rational and effective alternative.
mkaibear - Wednesday, October 11, 2017 - link
Sheesh. Where to start?
1) Yes, you're right, if the iGPU isn't being used then it will be disabled, and therefore you don't need to cool it. Conversely, if you have additional cores then your OS *will* use them, and therefore you *do* need to cool them.
The iGPU doesn't draw very much power at all. The HD 2000 drew 3W. The iGPU in the 7700K apparently draws 6W, so I assume the 8700K, with a virtually identical iGPU, draws about as much (figures available via your friendly neighbourhood Google). Claiming the iGPU has a higher power budget than the CPU cores is frankly ridiculous. (In fact, it draws less than 0.2W when shut down, which means any downside of having it in there is far outweighed by the additional thermal sink area it provides, but anyway.)
2) Large companies with big IT organisations don't actually forego the Core line altogether and go with Xeons. They could if they wanted to, but in general they still use off-the-shelf Dells and HPs for everything except extremely bespoke setups - because, as I previously mentioned, "hardware is cheap, people are expensive" - getting an IT department to build and maintain bespoke computers is hilariously expensive. No-one is arguing that the option of extra cores wouldn't be nice for an enthusiast building their own computer, but my point all along has been that Intel isn't going to risk sacrificing its huge share of the biggest market to gain a slice of a much smaller one. That would be extremely bad business.
3) The market isn't "clamoring for a more rational and effective alternative" because if it was then Ryzen would have flown off the shelves much faster than it did.
Bottom line: business IT wants simple solutions, the fewer parts the better. iGPUs on everything fulfil far more needs than dGPUs for some and iGPUs for others. iGPUs make designing systems easier, they make swapouts easier, they make maintenance easier, they reduce TCO, they reduce RMAs and they just make IT staff's lives easier. I've run IT for a university, a school and a manufacturing company, and for each of them the number of computers which needed a fast CPU outweighed the number of computers which needed a dGPU by a factor of at least 10:1 - and the university I worked for had a world-leading art/media/design dept and a computer game design course which all had dGPUs. The average big business has even less use for dGPUs than the places I've worked.
If you want to keep trying to argue this then can you please answer one simple question: why do you think it makes sense for Intel to prioritise a very small area in which they don't have much market share over a very large area in which they do? That seems the opposite of what a successful business should do.
watzupken - Saturday, October 7, 2017 - link
There are pros and cons to having integrated graphics. It sure takes up a lot of die space, but it is something that allows Intel to sell a lot of chips. Amongst enthusiasts it's unnecessary, but this group may represent only a small percentage versus corporate buyers who need a decent CPU and no fancy graphics. To be honest, Intel could likely have created an 8-core processor easily, since the die size is still fairly small for Coffee Lake, but they chose not to. I don't think it's the graphics that is holding them back.
James5mith - Friday, October 6, 2017 - link
Now to wait for the generation of Intel CPUs with native Thunderbolt 3 on-die, like Intel announced earlier this year.
Zingam - Saturday, October 7, 2017 - link
Why is that a good thing?
ReeZun - Friday, October 6, 2017 - link
"The difference between the Ryzen 5 1500X and the Core i3-8350K would be interesting, given the extreme thread deficit (12 threads vs 4) between the two."The 1500X houses 8 threads (not 12).
watzupken - Saturday, October 7, 2017 - link
The difference between the R5 1500X and the i3-8350K goes beyond just the number of threads. The cache is also twice as large on the Ryzen chip. However, the i3 chip has the advantage of being able to reach higher clockspeeds. I do agree that this will be an interesting comparison.
I'm not up to date with current BIOS versions.
Is multi-core enhancement still present in Z370 motherboards? That would get rid of all those differences in turbo speeds. I know it's technically overclocking, but I bet it's a pretty safe procedure without increasing the voltages.
Also, what's the deal with the 8700? Is it just as good as the 8700K (minus 100MHz) if one decides not to overclock? Just trying to gather as many practical facts as I can before formulating an upgrade plan (Sandy Bridge user, hehe).
This CPU family looks good on specs and benches (maybe the first worthy successor to Sandy Bridge), but it's not perfect. I hate that Intel decided not to solder; I expect temperatures to soar into the high 80s. Also, the current motherboards are somewhat lacking in ports (USB, LAN, SATA).
I love my Sandy Bridge setup though.
6 1/2 years old and still going strong. Overclocked, cool, stable, silent. With current CPUs you don't get all of these points.
Even if I upgrade, I'm not going to touch it.
Ian Cutress - Saturday, October 7, 2017 - link
"Is multi-core enhancement still present in Z370 motherboards?"
As an option, yes.
As default? Will vary board to board. You can disable it.
However we had trouble with one of our boards: disabling MCT/MCE and then enabling XMP caused the CPU to sit at 4.3 GHz all day. Related to a BIOS bug which the vendor updated in a hurry.
Jodiuh - Friday, October 6, 2017 - link
What's up with those Rise of the Tomb Raider benchmarks? Am I to seriously believe the i5-7400 is more capable than the 8700K... did I miss the overclocking part?
Tech Report's review is much better, with results that make sense.
peevee - Friday, October 6, 2017 - link
"Core i5-8600K and the Core i7-8700. These two parts are $50 apart, however the Core i7-8700 has double the threads, +10% raw frequency"+10%? Count again.
boeush - Friday, October 6, 2017 - link
Regarding most normal/gaming scenarios, I'm wondering whether one couldn't get even better performance from the 8700/K by disabling hyperthreading in the UEFI.
That would still yield 6 threads, but now ostensibly with a full 2 MB of L3 per thread. Plus, lower power per core (due to lower resource utilization) might mean more thermal headroom and higher overall sustained frequencies.
So you'd get maximum-possible single-thread performance while still being able to run 6-wide SMT (which, under most normal usage, isn't even a constraint worth noting...)
Amirite?
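The cache arithmetic behind that, as a trivial sketch - the 12 MB figure is the 8700K's shared L3, and the static per-thread split is a naive simplification, since L3 is actually shared dynamically rather than partitioned:

```python
# Naive per-thread share of the 8700K's 12 MB shared L3, with and
# without Hyper-Threading. In reality the L3 is shared dynamically,
# not statically partitioned per thread.
L3_MB, CORES = 12, 6

for ht in (True, False):
    threads = CORES * (2 if ht else 1)
    print(f"HT {'on' if ht else 'off'}: {threads} threads, "
          f"{L3_MB / threads:.0f} MB of L3 per thread")
```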
boeush - Friday, October 6, 2017 - link
To expand on this a bit more: with the "core wars" now in effect, I wonder if hyperthreading might be an unnecessary holdover feature that could actually be reducing the performance of many-core (8+) chips in all but the most extremely threaded scenarios. Might it not be better to have many simple/efficient cores, rather than fewer cores loaded with the hyperthreading overhead, both in terms of die area and energy density, as well as cache thrashing?
Zingam - Saturday, October 7, 2017 - link
Hyperthreading was invented to optimize the use of CPU logic that would otherwise remain unutilized during high loads. There is no way it reduces performance on current architectures. There are hyperthreading-less CPUs, and you can compare them to hyperthreaded CPUs.
boeush - Monday, October 9, 2017 - link
Hyperthreading was particularly useful in the context of not having a lot of cores to work with - allowing you to squeeze extra multi-threaded performance from your dual- or quad-core CPU. It comes at the cost of extra silicon and complexity in the CPU pipeline, but allows better utilization of CPU resources, as you mention. At runtime, it has two detrimental effects on single-thread performance: (1) it splits/shares the on-CPU cache among more threads, raising the frequency of cache misses for any given thread as the threads trample over each other's cached data, and (2) by maximizing CPU resource utilization, it maximizes dissipated energy per unit area, thereby driving the CPU into a performance-throttling regime.
With more cores per CPU becoming available in this age of "core wars", it's no longer as important to squeeze every last ounce of utilization from each core. Most workloads/applications are not very parallelizable in practice, so you end up hitting the limits of Amdahl's law - at which point single-thread performance becomes the main bottleneck. And to maximize single-thread performance on any given core, you need two things: (a) the maximum attainable clock frequency (resource utilization be damned), and (b) as much uncontested, dedicated on-CPU cache as you can get. Hyperthreading is an impediment to both of those goals.
So, it seems to me that if we're going toward the future where we routinely have CPUs with 8 or more cores, then it would be beneficial for each of those cores to be simpler, more compact, more streamlined and optimized for single-thread performance (while foregoing hyperthreading support), while spending any resulting die space savings on more cores and/or more cache.
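To put a number on that Amdahl's-law ceiling, here's a minimal sketch; the parallel fractions are made up for illustration:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
# parallelizable fraction of the workload and n is the core count.
# The values of p below are illustrative, not measured.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.80, 0.95):
    print(f"p={p:.2f}: " + ", ".join(
        f"{n} cores -> {amdahl_speedup(p, n):.2f}x" for n in (2, 4, 8, 16)))
```

Even at p=0.95, sixteen cores only buy you about a 9x speedup - which is why per-core performance still matters.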
boeush - Monday, October 9, 2017 - link
To add to the above: 'more cores and/or more cache' - and/or a better branch predictor, and/or faster/wider ALU and/or FPU, and/or more pipeline stages to support a faster clock, and/or...
alinypd - Saturday, October 7, 2017 - link
Slowest GAMING CPU Ever, Garbage!
yhselp - Saturday, October 7, 2017 - link
The i3-8100 is made utterly redundant by the necessity of buying a Z370 motherboard along with it; it'd be cheaper to get an i5-7400 with a lower-end motherboard. Intel...
watzupken - Saturday, October 7, 2017 - link
This applies to all the non-overclocking chips, particularly i5 and below. The high cost of current Z370 boards simply wipes out any price benefit. For example, an i5-8400 is good value for money, but once you factor in the price of a motherboard with a Z370 chipset, it may not be such good value anymore.
FourEyedGeek - Saturday, October 7, 2017 - link
Enjoyed the article, thanks. An overclocked Ryzen 1700 looks appealing.
nierd - Saturday, October 7, 2017 - link
"The problem here is *snip* Windows 10, *snip* All it takes is for a minor internal OS blip and single-threaded performance begins to diminish. Windows 10 famously kicks in a few unwanted instruction streams when you are not looking,"This is why single threaded performance is a silly benchmark in today's market, unless you happen to boot to DOS to run something. Your OS is designed to use threads. There are no systems in use today as a desktop (in any market these processors will compete - even if used as a server) where they will ever run a single thread. The only processors that run single threads today are ... single core processors (without hyperthreading even).
Open your task manager, click the performance tab, and look at the number of threads - when you have enough cores to match that number, then single-threaded performance is important. In the real world, how the processor handles multiple tasks and thread switching matters more. Even hardcore gamers seem to miss this mark, forgetting that behind the game the OS has threads for memory management, disk management, kernel routines, checking every piece of hardware in your system, antivirus, anti-malware (perhaps), network stack management, etc. That's not even counting whether you run more than one monitor and happen to have web browsing or videos playing on another screen - and anything else you're running in the background.
The myth that you never need more than 4 cores is finally coming to rest - let's start seeing benchmarks that stress a system with 10 programs going in the background. My system will frequently be playing a movie, playing a game, and running Handbrake in the background while it also serves as a Plex server, runs antivirus, has 32 tabs open in 2 different browsers, and frequently has something else playing at the same time. A true benchmark would run multiple programs all tying up as many resources as possible - while a single app can give a datapoint, I want to see how these new multi-core beasts handle real-world scenarios and response times.
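For a rough sense of the numbers involved, here's a minimal sketch that tallies live OS threads against hardware threads; it assumes the third-party psutil package (pip install psutil):

```python
# Tally how many OS threads are alive system-wide, versus how many
# hardware threads the CPU actually offers. Requires psutil; with
# ad_value=0, processes we can't inspect simply count as zero.
import psutil

total_threads = 0
for proc in psutil.process_iter(attrs=["num_threads"], ad_value=0):
    total_threads += proc.info["num_threads"] or 0

print(f"OS threads alive: {total_threads}")
print(f"Hardware threads: {psutil.cpu_count(logical=True)}")
```

On a typical desktop this reports on the order of a couple of thousand OS threads - though, to be fair, the vast majority are asleep at any given moment.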
coolhardware - Sunday, October 8, 2017 - link
Your comment has merit. It is crazy the number of tasks running on a modern OS. I sometimes miss the olden days when a clean system truly was clean and had minimal tasks upon bootup. ;-)
xchaotic - Monday, October 9, 2017 - link
Well yeah, but even with non-HT i5s and i3s, you still have plenty of cores to work with.
Even if the OS (or a background task - say, Windows Defender?) takes up a thread, you still have other cores for your game engine.
nierd - Monday, October 9, 2017 - link
Do we? I've yet to see a good benchmark that measures task switching and multiple workloads - they measure 'program a' that is bad at using cores, and 'program b' that is good at using cores.
In today's reality, few people need maximum single-program performance. Outside of very specific workloads (render farming or complex scientific simulations), please show me the person who is focused on just a single program. I want to see side by side how these chips square off when you have multiple competing workloads that force the scheduler to balance tasks and do multiple context switches, etc. We used to see benchmarks back in the day (single-core days) where they'd run a program designed to completely trash the predictive cache so we'd see 'worst-case' performance, and things that would stress a CPU. Now we run a benchmark suite that shows you how fast Handbrake runs *if it's the only thing you run*.
mapesdhs - Tuesday, October 10, 2017 - link
I wonder if there's pressure never to test systems in that kind of real-world manner, perhaps the results would not be pretty. Not so much a damnation of the CPU, rather a reflection of the OS. :D Windows has never been that good at this sort of thing.boeush - Monday, October 9, 2017 - link
An *intelligent* OS thread scheduler would group low-demand/low-priority threads together to multitask on one or two cores, while placing high-priority, high-CPU-utilization threads on their own dedicated cores. This would maximize performance and avoid thrashing the cache where and when it actually matters.
If Windows 10 makes consistent single-thread performance hard to obtain, then the testing is revealing a fundamental problem (really, a BUG) in the OS' scheduler - not a flaw in the benchmarking methodology...
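For what it's worth, you can approximate that grouping by hand today with CPU affinity; a minimal sketch, assuming Linux (os.sched_setaffinity is Linux-only) and made-up PIDs:

```python
# Herd background work onto cores 0-1 and reserve the rest for a
# latency-critical process. Linux-only; the PIDs are hypothetical.
import os

NOISY_CORES = {0, 1}                    # background threads multitask here
ALL_CORES = set(range(os.cpu_count()))
QUIET_CORES = ALL_CORES - NOISY_CORES   # dedicated to the hot process

background_pid = 4321                   # hypothetical background task
hot_pid = 1234                          # hypothetical high-priority process

os.sched_setaffinity(background_pid, NOISY_CORES)
os.sched_setaffinity(hot_pid, QUIET_CORES)
```

Windows offers the equivalent via SetProcessAffinityMask or Task Manager's "Set affinity" - the point being that the stock scheduler doesn't do this grouping for you automatically.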
samer1970 - Monday, October 9, 2017 - link
I fail to understand how you guys review a CPU meant for overclocking and only put non-OC results in your tables.
If I wanted the i7-8700K without overclocking, I would pick up the i7-8700 and save $200 on both cooling and a cheaper motherboard - and the i7-8700 can turbo all 6 cores to 4.3GHz just like the i7-8700K.
someonesomewherelse - Saturday, October 14, 2017 - link
Classic Intel. Can't they make a chipset/socket with extra power pins so it would last for at least a few CPU generations?
Gastec - Saturday, October 14, 2017 - link
I'm getting lost in all these CPU releases this year; it feels like there's a new CPU coming out every 2 months. Don't get me wrong, I like having many choices, but this is pathetic really. Someone is really desperate for more money.
zodiacfml - Sunday, October 15, 2017 - link
The i3!
lordken - Saturday, October 28, 2017 - link
Can't you make the bars for AMD CPUs red in the graphs? It's a pain to search for them when all the bars are black (at least the 7700K was highlighted in some).
A bit disappointed there's not a single word about Ryzen/AMD on the summary page - you compare only to Intel CPUs? How come?
Why only the 1400 in the Civ AI test, and not any R7/R5 CPUs?
Also, I would expect you to hammer Intel a bit more on that not-quite-the-same-socket crap.
Ritska - Friday, November 3, 2017 - link
Why is the 6800K faster than the 7700K and 8700K in gaming? Is it worth buying if I can get one for $300?
Anato - Sunday, November 5, 2017 - link
Sandy Bridge still going strong. I see no need to spend money on whatever Lake they made this time.
Almeida7233 - Monday, January 18, 2021 - link
Nice article. Thank you for sharing.