This chip is 6 years late. Back when Sandy Bridge was the newest chip, a dual-core i3 was a super relevant choice for gaming and a quad core was overkill.
Today, for gaming builds, an i5 is almost always a better choice, unless you only play games that are single-threaded. And the i3 is more power hungry than locked quad cores.
At $130 this would be a great choice, but at the moment the i3 K is overpriced for what it offers in a modern system.
I'd argue that with the introduction of this i3 K-variant and the new hyperthreaded Pentium, Intel just gave a lot of people a reason to not buy an i5. The message from Intel seems to be this:
"If you need great single-threaded performance with some mild multi-threaded, get the Pentium or i3. If you need great multi-threaded performance with great single-threaded, get an i7."
I'd say they are preemptively stacking the product deck prior to the release of AMD Ryzen - offering entry-level gamers more options without diluting their HEDT status.
In many of the games an i3-6100 offers effectively the same performance and is $50 cheaper. It isn't a case of the i3-7350K offering great performance so much as the games not being CPU limited. That points towards pairing an even more expensive graphics card with an even cheaper CPU.
Agree. Whatever the actual performance, it seems quite clear the cool factor of "unlocked" Ryzens and joining the "overclocking community" is getting a pre-emptive strike from Intel.
In the late 90's into the early 00's, when people would travel for hours carting their PC to a LAN gaming event with 100's (or even thousands) of other people, having an OC'ed machine was indeed cool amongst that Geek crowd.
/em remembers his dual celeron 300A's OC'ed to 450MHz (yes, that's Mega - not Giga - hertz).
It was indeed and still is very very cool... I had been OC'ing my systems way back to the original Pentium 100, and then got a Celeron 300. OMG, those were the days... If you don't think it's cool, what the hell are you doing on AnandTech??!??!!!?
^^This^^ Intel is ~finally~ facing some upcoming opposition in the CPU arena and they're trying to fill in some perceived gaps in their CPU lineup. After Ryzen is released, expect to see multiple product changes from team blue right away to combat AMD's offerings.
I think it will make more sense with next-gen parts. I suspect we are watching a shift in the lineup that is slowly rolling out:
Celeron - dual core
Pentium - entry dual core with HT (limited cache/clock/iGPU)
i3 - high-end dual core with HT (essentially unchanged)
i5 - quad core with HT (today's i7)
i7 - 6-12 cores with HT (today's LGA2011 line)
So why no straight quad-core part? Well, two reasons.
1) It probably isn't needed. The original i5 parts were just i7s with broken HT that was disabled. I imagine most chips coming out now have perfectly fine HT, so it is artificially disabled. This increases the cost of the binning process and reduces profit on a per-chip basis... especially if they can sell the same part somewhere between today's i5 and i7 price.
2) Right now I would wager that most home builders buy either an i3 because they are budget conscious, or an i7 because their pride will not let them get anything less than the best. But the i7 that they buy is the lower-margin 'consumer' i7 rather than the premium-laden LGA2011 i7 chips that make beaucoup bucks on both CPU and chipset sales. Moving the i7 lineup to start at ~$500 instead of ~$280 would more than offset the number of people willing to step down to an i5 chip, even if the step down is in name only and the i5 would really be more affordable while offering traditionally i7 performance levels.
3) Bonus reason: Ryzen chips are expected to land near today's i5/i7 chips in performance, and Intel does not want AMD to be able to say 'our chips are as fast as an i7 but only cost what an i5 does'. Instead, Intel wants its smug users (like myself) to say 'yeah, that Ryzen is not a bad chip, but it doesn't hold a candle to my i7'. Real-world benchmarks be damned, it is what people are going to say.
I wouldn't necessarily bet that more home users buy i7s than i5s. I personally know two gamers that recently built i5 systems because they wanted more oomph than a 2C/4T i3, but didn't want to spend money on an i7. Why? So they could spend more money where it makes the biggest difference... the graphics card. An i5 provides plenty of CPU horsepower for games, and gives you another $100 or so to spend on better graphics.
I think their judgement was sound. I doubt they are alone in this kind of assessment. I think you're letting your admitted i7 smugness cloud your judgement a little bit.
I build PCs for my friends, and advise people on what to buy, and I don't know a single person apart from myself who has an i7 (and only know of 1 person who has an i3 but he uses his box for media). i5 is a perfect chip for casual users who use the PC mostly to game.
Hell the only reason I have an i7 is for Civ VI ha.
i7s (or Xeons) are nice if you're encoding a lot of x265 video (x265 gives better quality per bitrate than hardware encoders). That's the only desktop use case I can think of.
Exactly why I have an i5-3570K. My next build will either be a Ryzen (depending on whether it hits expectations), or the i5 of whatever generation is out when I'm ready to buy. Too big of a price jump to the i7 for a hardcore 1080p maxed-settings gamer, but not too much of a price jump over the i3s. That is, until now with the i3 K, which I may actually give a second look.
I fit into that category with one caveat, I also do some 3d modeling and rendering. This pushed me to the i7-2600k about four years ago, and I still don't feel that my CPU is the limiting factor on my PC.
Yeah, that's true except when people find out that there's this thing called Ryzen just on the horizon. Seems to me that the HT Pentium and the unlocked i3 are just a way to sell more of these KL chips, and Intel is hoping/waiting for Coffee Lake to counter Zen.
There's no way a dual core is justified today, even if unlocked or with HT, unless you're absolutely on a shoestring of a budget &/or KL is the only thing you want. It's such bad value for money atm that no one should be recommending it, not at this point in time.
I have to agree. Not necessarily because there is anything wrong with this chip technically but because of the competitive landscape where Intel's own quad-core chips can be had for the same or lower prices.
I think the only thing which can be recommended at the moment is not to buy a CPU until Zen is released, in case AMD live up to their hype in performance and price their products competitively (I.e. cheaper than Intel).
They would be misleading gamers badly, as is this AnandTech review. The gaming benchmarks are just woefully out of date; it's getting embarrassing. There are already games out, like Far Cry 4, that literally won't run if the system doesn't have 4 full cores. Any real gamer is screwing themselves over by trying to skimp on a CPU like this. Any legit tech site would never recommend less than a 4-core CPU in 2017.
@nathanandrews hasn't that always been the case? Except you might not be able to afford an i7, and (as these results show) you're better off buying an i5 and a better GPU for gaming.
Agreed, the timing of the first-ever i3 K variant just ahead of Ryzen seems more than coincidental. Intel seems to be arguing that for value-minded users, the IPC and high clocks will make this a better prospect than Ryzen's many cores and likely somewhat lower IPC. That's not new; what is new is that little K on the end, meant to capture the market segment of users on a budget who still want the fun of overclocking. Before, the logic was always that Intel wouldn't release an i3 K because it would cannibalize i5 sales. Now they seem to be proactively guarding a piece of market share that would otherwise pick an overclockable Ryzen chip instead of an i5. Competition is a wonderful thing!
"I'd say they are preemptively stacking the product deck prior to the release of AMD Ryzen"
Yep, Ryzen will also launch with its high-end parts first; AMD's competitiveness will not filter down to low-end parts until 2H17. Until 2C/4T Ryzen parts appear, Intel will still have a monopoly on good cheap processors, so the more they can sell in that time, the better for them.
How about a non-K i3? I mean, look at the charts; they keep up just fine. Sure, you don't get overclocking capability, but you also get to save money by not needing a custom cooler ($30-50) or a Z-series motherboard ($30-150), and the chips themselves are cheaper ($30-50). That saves you some $90+ on your build right there, while offering most of the performance. Either pocket the money, or spend it on a good SSD or better GPU.
If you want to go cheaper, see CaedenV's post below. If you're thinking about staying in roughly the same price range, get an entry-level i5, something like an i5-7400. The cost of the processor itself is higher, but the total platform price will be around the same because of cost savings elsewhere, like Caeden listed for the non-K i3. You won't need to worry about overclocking, so no need for upgraded cooling and no need for an overclock-friendly board.
The i3s available back in the day suffered from quite a few things and had rather dramatic setbacks compared to the i5 and i7 offerings of the day. Still not bad as an entry-level gaming CPU... but even then it would bottleneck a mid- to high-range GPU of the time. Today's i3 offerings, though, are able to offer enough performance to keep up with even today's mid- to high-end GPUs without problem! Part of that is the move to PCIe 3, part of it is efficiency making up for a lack of cores, and part of it is simply that more and more games support HT cores where that used to not be the case. On a Win10 system there is even more advantage, as it is better at offloading background processes to less-used cores, so even if your game does not take advantage of HT, Windows will, in order to relieve the heavily loaded 'real' cores.
I think the really amazing thing to look at in these charts is how well the non-K i3 chips do. You can save a lot of money if you can give up OC and ~200-300 MHz. A plain-jane i3 on a B- or H-series chipset and a single mid- to high-end GPU would game fantastically compared to a high-end i7 on a Z-series board. Still not amazing for content creation (though not bad for a hobbyist)... but if all you are doing is video games, office/school work, web browsing, and watching videos, then it is getting harder and harder to recommend anything other than an i3.
I don't understand most comments. If you're gaming, an extra $50 for an i5 is nothing. A CPU is good enough for 3-4 years. How much are you going to spend on games in that time period? Here in Canada, Battlefield 1 Premium costs about $160. That's just one game. How many games are you going to buy? More than a few, I guess. Besides, with DX12 and Vulkan becoming mainstream APIs, a quad core is a must. Just get an i5 or Ryzen and forget about it.
Am I the only one who thinks these tests should have been between the overclocked speeds of both processors? Isn't the idea behind an unlocked processor that you overclock it?
Absolutely should have included overclocking. Sandy Bridge chips had very conservative stock clocks and great overclocking potential. At the time you were almost guaranteed 4.4-4.7GHz on air, and there were lucky users reaching 4.8-5GHz (and more under water). My 2600K has been running stable at 4.6GHz (a 35% overclock) for six years now at 1.35V. Those single-threaded charts would look much different if you included overclocks, and the multi-threaded charts would seriously widen the gap.
I'm glad Intel has opened the gates for overclocking i3s, but this review really just shows how small Intel's gains have been in the last six years. I'm hoping Ryzen brings some serious performance to the table (especially at the high end) and lights a fire under Intel's asses. Better iGPUs and lower power consumption are great for laptops and basic users' needs, but there has been no innovation in the HEDT market for many years unless you're willing to shell out $1700 for the 10-core Extreme Edition.
Yep! Sandy Bridge was/is good tech! I got the non-K i7 Sandy Bridge, and even that overclocked easily to 4.2GHz. It was an artificial limit, but I didn't need to spend the $50 premium to get a potential 300-500MHz more out of it. It's been humming along for 6 years now and hasn't missed a beat! At this rate I will probably be 'upgrading' my game rig to a tiny little i3 and recycling my i7 as a home server for storage and VMs.
Agreed, Ian might have posted overclocked numbers. Still, read between the lines: there is a statement that the 2600K does 4.8-5GHz. At a 20% higher clock speed, the 2600K destroys the OCed i3-7350K, no contest. It may consume 4x the power, but do you care when the GPU consumes 5x more anyway?
Yeah, not too useful on the gaming charts, as even the non-K chips kept up with the GPUs just fine. But getting to see what it does for productivity tasks would be interesting. Actually, stock vs. OC i3, i5, i7, and Sandy Bridge i7 would be very interesting to me.
Too much individual variation. Non-overclocked performance is the guarantee. Everything else is up to chance in the silicon lottery.
Also potential for abuse: say, the manufacturer sends reviewers some golden sample that hits 5.1GHz on air. Hah! GPU makers used to pull stunts like that.
A dual-core processor is still a dual-core processor, even if it is unlocked and offers a high clock speed. I still feel Kaby Lake is a lazy upgrade over Skylake, considering it barely offers anything new. Just take a look at the feature page to get a sense of the "upgrades". With competition coming from ARM and AMD Ryzen, is Intel only capable of a clock speed war, just like they waged with the Pentium 4?
Well, to be fair, Kaby Lake isn't for you and me. It is Skylake with very minor improvements, mostly aimed at fixing the firmware-level sleep and wake issues that manufacturers had (i.e., the reason Apple didn't move to Skylake until well after release, and the botched deployment of the Surface Pro 4). Outside of that it is just Skylake with a minor clock bump, slightly better thermals, and a more mature 14nm process.
So it will be 2025 before an i3 beats a stock 2600K in all benchmarks? That must mean it will be 2030 before it can beat a 4.8GHz 2600K. That's crazy, considering how badly the Core2Quad compares to even a modern celeron.
Page 2: "There is one caveat however – Speed Shift currently only works in Windows 10. It requires a driver which is automatically in the OS (v2 doesn’t need a new driver, it’s more a hardware update), but this limitation does mean that Linux and macOS do not benefit from it."
This is incorrect: support for Speed Shift (HW P-states) was committed to the Linux kernel back in November 2014, well before the Skylake release. https://lkml.org/lkml/2014/11/6/628
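For anyone who wants to double-check on their own Linux box, here's a rough sketch; the sysfs path and flag name are what recent intel_pstate builds expose, so treat them as assumptions that may differ on older kernels:

    # Sketch: check whether the CPU advertises HWP (Speed Shift) and whether
    # the intel_pstate driver is running in its HWP-backed "active" mode.
    # Linux-only; file locations are assumptions based on recent kernels.
    from pathlib import Path

    def cpu_has_hwp() -> bool:
        # Skylake and newer expose an "hwp" feature flag in /proc/cpuinfo.
        return " hwp " in Path("/proc/cpuinfo").read_text()

    def pstate_status() -> str:
        # Newer kernels report "active" or "passive" here.
        p = Path("/sys/devices/system/cpu/intel_pstate/status")
        return p.read_text().strip() if p.exists() else "intel_pstate not loaded"

    print("CPU advertises HWP:", cpu_has_hwp())
    print("intel_pstate mode: ", pstate_status())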
Of the 3 CPUs AnandTech received to review, this was the only one that was even marginally interesting (we didn't need a review to know Kaby Lake performs equally to Skylake).
So of course you spent one month before reviewing it. Good for Anand that he took the money and ran.
This would be interesting if the part wasn't so bloody expensive. $120 would be interesting. At this price, you're better off spending a little more and getting an i5 or spending a lot less and getting the G4600, which is also dual core kaby lake with hyperthreading.
Things I learned:
1. The 7350K is hilariously overpriced versus a G4560.
2. Overclocking stock 4.2GHz Intel parts that are already so far from the freq/power sweet spot, with so little headroom left, is mostly an exercise in futility.
3. That i5-7400 is crazy power efficient.
It makes you wonder what the "T" designation is really all about. Is the i5-7600T @ 2.8-3.7GHz (35W) basically the same thing as the i5-7400 (65W), the only difference being they downclocked the base by 200 MHz and upclocked the turbo by 200 MHz? On paper you'd expect the "T" to be way more power efficient, but in actuality I bet they are about the same.
Hey Ian, correct me if I'm wrong, but couldn't you have just downclocked a 7600K to "simulate" an i5-7400? After all, the cache is the same, so it should be the same except for the TDP...
That would introduce a ton of new variables though; i7s have theoretically gone through more exhaustive binning and are "higher quality" chips that should be able to operate at higher frequencies with lower voltage. Should being an important caveat there.
MicroCenter has long been offering sweetheart mobo + cpu deals, including the 7700K, so I'm not sure what you think you are proving with your comment. Go look at this processor with a mobo at MicroCenter and you tell me what you see.
2600 was $230, 2600k was $280 I only know because I didn't sleep for a week while I made the decision lol. Ended up with the 2600 non-k because it still boosted to 4.2GHz just fine and that was more than enough horsepower for me. Been using it for 6 years... omg... how is there no clear upgrade yet?
Next test bed update will be on W10. I keep getting mixed reactions recently from W10/W7/Linux users on this front - some want to see W10 poweeeeeer, others want default. But for DX12 it'll have to change over.
Bench-marking in win10 is... well... difficult. The OS has too many automatic features, so it is hard to get consistent results. You still get better overall performance... but not consistent performance. Win7 is gloriously dumb and gives very clear numbers to make very easy comparisons.
It's a bit sad that you can compare any CPU from 2011 to one from 2017 and have them match up like this. In the 90's a CPU that was 6 years newer was many times faster than the older one. Is it lack of competition? Or have we just hit the wall with silicon chip technology?
It's probably a combination of both, but I'd go out on a limb and say it's mostly due to technology and not so much market forces. Intel's primary competition for new processor models really ends up being its own prior generations. If the company wants to land sales, it needs to offer a compelling incentive to upgrade.
There's also Intel's effort to reduce TDP over successive generations (something the company would probably not do were there more credible competitive forces in the market). Those reductions are probably a side effect of a mobile-first perspective in modern CPU design, but there's something nice about buying a reasonably powerful 35W desktop processor and not having to worry about copper-pipe-festooned tower coolers with 120mm fans strapped on them just to keep your chip happy. If I were to build a new desktop, I'd entertain a T-series part before exploring any other option.
It's funny that we got big perf/watt increases over the past few years in CPUs and GPUs, yet somehow everyone is still buying massive overkill 650W+ PSUs when most systems would struggle to even draw 1/3 of the PSU's rated wattage at load.
I'm pretty confident that an undervolted i5 7400 and GTX 1060 (60W @ 1600MHz according to THG) would be able to draw <100W at the wall in a normal gaming load with an efficient enough PSU...
Because MOAR POWER and marketing. Seriously, they sell the high power PSUs for a LOT more than the lower powered PSUs, it's going to take consumers buying the 300-450W psu's en masse before the manufacturers adjust. Your theoretical operates under false assumptions however. The 1060 boosts up well beyond 1600 and will consume far more than 60 watts, and there are efficiency losses in the PSU and throughout your system. Go ahead and try to run a 1060 and an undervolted i5, see what happens.
No, it's not. For typical gaming the 1060 consumes between 90-120 watts. So please do tell me how his system with a 100 watt GPU is going to consume less than 100 watts with a CPU, mobo, RAM, etc.?
As a point of reference, I have a 1060 in a i5 4670 system running a 400W Platinum PSU. All stock clocks, 1 SSD, 1 HDD. Peak power in games measured at the wall is ~200W (180-200 depending on which AAA game), so I doubt <100W is doable. But agree with the commentary about how overkill most PSUs are.
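A rough back-of-the-envelope in the same spirit lands in the same ballpark and shows why a 650W unit sits mostly idle; every per-component figure below is an assumption, not a measurement:

    # Back-of-envelope wall-power estimate for a mid-range gaming box.
    # All component figures are assumptions (typical review numbers).
    components_w = {
        "GTX 1060 (gaming average)": 110,   # reviews put it around 90-120 W
        "i5-class CPU (gaming)":      45,   # well under the 65 W TDP in games
        "Motherboard + RAM":          20,
        "SSD + HDD + fans":           10,
    }

    dc_load = sum(components_w.values())    # what the PSU has to deliver
    psu_efficiency = 0.90                   # roughly Gold at this load level
    wall_draw = dc_load / psu_efficiency

    print(f"Estimated DC load: {dc_load} W")        # ~185 W
    print(f"Estimated at wall: {wall_draw:.0f} W")  # ~205 W, close to the measured 200 W
    print(f"Load on a 650 W PSU: {dc_load / 650:.0%}")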
Yeah, that is funny. I'm using a massively overpowered PSU myself. I have an 850W unit running a system with a moderately overclocked i7-6700K and a GeForce 1070. Had it left over from my previous massively overclocked i5-2500K and dual Radeon 7970s; even if it's aged badly (which it probably hasn't; it's only a few years old), it should still be good for ages, especially as under-stressed as it now is.
Perhaps when they're available for purchase I'll look into it. I'm interested in seeing what AMD does with mobile Ryzen, integrated graphics, and HBM for CPUs (unlikely) and how it changes laptop computing.
The rumor mill has been churning and the consensus is that APU's will be available in 2018 with HBM. That will be a game changer for more than just mobile computing, but for small form factors as well. At least theoretically, experience tells me we should wait for reviews before deciding how profound the impact will be.
The Wraith cooler is both marginal and loud compared to quality aftermarket coolers that cost as little as $35. Sure it's better than the last AMD stock cooler, but that's more a case of the last AMD stock cooler being total garbage.
Hey, no dissing huge air coolers! :D (Yeah, I have one and it's so big it largely dictated the case selection. Does keep a hexcore Bulldozer at 52 degrees at 4 GHz tho.) There's also the niggle on Intel side that their enthusiast line has only made it to Broadwell-E, so that's what I'll be upgrading to. A huge upgrade in IPC (which probably won't rise much in the next years), more cores and lower power use per core. I figure I'll be upgrading next around 2025. :D I'm pondering whether I should go AIO liquid or custom...
I doubt it is competition. I mean, lack of competition certainly explains the price per performance not coming down even though the manufacturing costs are getting cheaper, but I think that we have hit a performance wall. With every die shrink we can get more performance per watt... but the die is also more heat sensitive which kills stability for higher clocks. The idea that you can hit 5GHz on the new chips is nothing short of a miracle! But without a major increase in clock speed, then your performance is limited to the instruction sets and execution model... and that is much harder to change. And that isn't hard to change because of competition. That is hard to change because PCs live and die by legacy applications. If I can't go back and play my 20 year old games every 3-4 years then I am going to get rather annoyed and not upgrade. If businesses can't run their 20 year old software every day, then they will get annoyed and not upgrade. I think we are rather stuck with today's performance until we can get a new CPU architecture on the market that is as good as ARM on the minimum power consumption side, but as efficient as x86 on the performance per watt side... but whatever chip comes out will have to be able to emulate today's x86 technology fast enough to feel like it isn't a huge step backwards... and that is going to be hard to do!
Anandtech please do frame-time tests as well for games. Average frame rate is good and all, but if the processor causes dips in games that could lead to an unpleasant experience.
The site slips my mind, but somewhere tested multiple generations of i7s, i5s and i3s for minimum framerate, and even the oldest i7s had a more consistent framerate than the newest i3s. It would be interesting to get AT's take on this.
There are some minimum frame rate numbers in Bench, however they're not all regular (they're based on pure min, not 99%). The goal is to have some nicer numbers in our testbed update. Soon. When I find time to finish the script ... :D
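For anyone wondering what the percentile-based number buys you over a pure minimum, here's a minimal sketch with made-up frame times; a single 120ms hitch tanks the pure minimum while the 99th-percentile figure barely moves:

    # Average FPS vs. pure minimum vs. a 99th-percentile ("1% low") figure.
    # The frame times (in milliseconds) below are invented for illustration.
    frame_times_ms = [16.7] * 950 + [20.0] * 45 + [55.0] * 4 + [120.0]

    def to_fps(ms):
        return 1000.0 / ms

    avg_fps = to_fps(sum(frame_times_ms) / len(frame_times_ms))
    pure_min_fps = to_fps(max(frame_times_ms))               # set by one hitch

    ordered = sorted(frame_times_ms)
    p99_frame_time = ordered[int(0.99 * len(ordered)) - 1]   # 99th-percentile frame time
    one_percent_low_fps = to_fps(p99_frame_time)

    print(f"Average FPS:      {avg_fps:.1f}")             # ~58
    print(f"Pure minimum FPS: {pure_min_fps:.1f}")         # ~8, dominated by one frame
    print(f"1% low (p99) FPS: {one_percent_low_fps:.1f}")  # ~50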
"This RGB fad that apparently sells like hot cakes"
I love you Ian! In a totally hetero way.....
Seriously though great article, this should silence all the crybabies who whine about the lack of "Anandtech style in-depth analysis". You are still the best CPU reviewer in the biz!
I'm still running an i5-2400 at default speeds that I paid $205 for when it first came out. It is insane how slow the improvement of Intel chips has been. You'd think by now an i3 would be an upgrade.
Don't count on this being the new norm. Even though Intel just invalidated a long-standing policy, and the perception that these are inferior chips, with this change, I don't think it will last. The next process shrink will likely bring with it a die-size change, leaving the i3 people who want to upgrade a few years down the road SOL. They could simply roll back this "feature" and we're back to the status quo.
Conclusions page: "A good example of this is Agisoft: the Core i5-7400 (which costs $14 more, quad core, 3.4-3.8 GHz) completes the work ~10% quicker."
Do you mean the i5-7400 @ 3.0-3.5 GHz, or the i5-7500 @ 3.4-3.8 GHz?
"and goes in line with the fact that Intel has officially stated that one of the key features of the new 14+ process is that the transistors are more ‘relaxed’ and there’s no decrease in density."
"The latest memory technology to hit prime time is Intel and Micron’s 3D XPoint. "
Where "hit prime time" means "may ship some time in 2019"? No-one cares about 3D XPoint in SSDs; and the DRAM version seems utterly MIA since the initial enthusiastic Intel claims. (Come to think of it, much like Intel 10nm ...)
I have an 8+ year old Q6600 desktop and am thinking about a new build. I do mostly office work and photography with Adobe Lightroom and Photoshop, no gaming at all, I'll use the integrated graphics. Both LR and PS seem to not utilize multiple cores very well and respond mostly to clock speed. I'm wondering if my best bet is to buy one of these i3K cpus and mildly overclock. I will get the highest clock speed at a price lower than an i5K or i7K. What do you think?
The i3-7100 is not a bad alternative. Like I said, most photography-oriented tests show LR and PS to perform better with just a higher clock speed, not with more cores or anything else. With this CPU and a mild overclock up to, say, 4.5GHz, I'd be clocked faster than an i7 for hundreds less. I'm wondering if that is worth $50 over the 7100. I'm thinking it is; $50 more factored into a complete new build is not much. Thanks for the comments.
Dude... you are coming from a Core2Quad. Even the weakest i3 is going to blow your mind! Seriously though: the CPU you buy essentially does not matter. If you are running on the iGPU, then just get a chip with the best iGPU you can afford and call it a day. Pair it with an SSD (does not even need to be NVMe), and you will be absolutely blown away by the performance gains! You should probably look at a semi-custom build like an Intel NUC, or Brix, or another such system where you just add RAM, SSD, and OS. There is little to no point in building a whole tower PC unless you are doing something heftier than Lightroom.
For that matter, look into a laptop with a decent dock. You can do most of your work in the field while taking pictures, and dock it to a nice color correct screen for the fine-tuning end of work.
Oh, I know anything I build will be a big step up from what I have. I am going for a desktop but probably a mini ITX mb maybe in a nice Lian Li case, definitely an SSD. I'm still thinking about the details.
I am still on a Q6600 at home and a Skylake CPU at work. The difference isn't as "mind blowing" as some would suggest. It depends on what you do, and yes things that are IPC dependent will be much faster on newer systems, but the Q6600 is still no slouch.
I agree with fanofanand on this one. I previously owned a Q6600 and went through the trouble of upgrading to an Athlon x4 860K (recently died) with a lot more/faster RAM (16GB DDR3 2133 vs 4GB DDR2 800). The difference was pretty underwhelming. I've got a Haswell i7 at the office and was moving between it and the Q6600 and the difference in performance was something I noticed, but it didn't leave me feeling like the Q6600 was incomparably slow.
I'm just gonna point out that anyone saying Intel is in trouble, needs to realize that they have intentionally chosen to not improve CPU performance for several years, instead focusing on improving the integrated GPU. Look how much that's increased! That's in addition to incrementally improving CPU performance.
Intel can start channeling their immense resources into improving CPU performance anytime they wish.
Remember, it's in Intel's best interests to keep AMD from going out of business. Outpacing them to the point of making AMD irrelevant would hurt Intel, long term.
I really don't think that Ryzen is going to make Intel dramatically bump up the per-clock efficiency. The real bottle-neck in performance is in the instruction set itself. AMD is behind Intel because they are hitting the same IPC wall. It isn't that Intel hasn't attempted to push the envelope on IPC, it is that it is a universally hard thing to improve at this stage in the game. If it wasn't so difficult then AMD would have stepped up to the task years ago. Even Ryzen isn't going to beat intel in IPC, they are merely going to close the gap a bit, sell for less, and partner with game studios to push bundles. Die shrinks will continue to make chips more efficient, but unless someone finds a way to dramatically increase clock speed, or come out with a new instruction set that has better IPC while being backwards compatible with x86 (with minimal performance hit, which is the hard part), then I think we are stuck at this level of performance for a long time.
There are plenty of games that can bottleneck an i5 or even a low-power i7, which those benchmarks never show. People who do this sort of review clearly never play any demanding games; therefore they are not fit to do a comparison of gaming CPUs.
Seems to me you only hit the CPU wall when dealing with multiple GPUs. For most games, with a single GPU, an i3 is plenty. Considering an i3 does not have enough PCIe lanes to support multiple GPUs, this is a rather moot point.
Like I said, you don't play any CPU-demanding games, so you have no right to make those ridiculous comments. Take Battlefield 1, for example. Good luck in a 64-player map with an i3.
Nice trolling, loser. I am simply making a point: there are tons of games that can bottleneck an i3; Battlefield 1 is just one example. Stop lying to others that "the i3 can game just fine like an i7"; it's very misleading and misinformed.
The notion that someone who is good at theorycraft reviews must be an expert on gaming PCs is absurd. One minute of benchmark runs in single-player mode suddenly makes you an expert on gaming computers? Give me a break.
If you doubt the validity of the claims made in these articles in spite of the years of experience the writers have AND the supporting evidence of their work, then it's rather odd you'd read any of these reviews at all. We can infer from your responses that you feel insecure about your purchase decisions, feel compelled to defend them aggressively, and that you're dismissing the evidence at hand even though you personally find it useful, so you can justify the defensiveness -- more to yourself than to others here, because honestly, why should the opinions ruffle your feathers if there's genuinely no doubt in your mind that you're as correct as you claim?
Nice job coming up with such rhetoric yet no concrete evidence in your argument. I do NOT doubt the validity of the claims; I KNOW they are WRONG for a fact. Your reviewers do reviews for the sake of results from incomplete tests and benchmarks that cannot represent real-life results; therefore the conclusion is wrong. I have been playing video games and playing around with hardware since you guys were playing with sand, before this website was even established, yet you want to talk about "years of experience"? I am not defending anything; I am simply pointing out that you are wrong and you are misleading people. You are just butthurt because finally someone has a different opinion and they are actually right. You want evidence? Let's save the rhetoric and go straight to facts: there are games that can severely bottleneck an i3 or even an i5; Battlefield 1 is just one of them. It doesn't matter what video card you use; join a 64-player game and you can see your CPU usage go over 90% and the game start stuttering on an i3/i5.
"there will be a time where a Core i3 based CPU will match the performance of the older Core i7-2600K"
Maybe, but not today! I'm still patting myself on the back for my purchase of the i7-2600K back in the spring of 2011. My first computer ran an 8088, and every computer I owned from then until 2011, whether off the shelf or self-built, left me constantly on the upgrade treadmill looking for more speed. But with the 2600K I finally hit good enough (other than adding more RAM and replacing spinning disks with SSDs along the way). While it's fun to read about the new stuff, I don't really see myself upgrading again until either the CPU or (more likely) the motherboard gives out.
Yep! I have upgraded the RAM several times, the GPU a few times, and moved from HDD to SSD, all of which have kept my 2600 very happy. What is going to get me to change over is the motherboard-level improvements. I can't get a much faster GPU without hitting a PCIe 2 bottleneck. NVMe, while impractical (real-world tests show little to no improvement), has me drooling for an upgrade. 4-10 gig Ethernet is going to be more and more popular going forward. DDR4 is finally maturing to a point where it is actually better than DDR3. All new devices run USB-C, and my piddly 4 USB 3 ports can't do them all, and certainly not at full speed.
It is literally everything around the CPU that is slowly making me want to upgrade, not the CPU itself. But even these improvements need another year or two to bake before I am really thinking about an upgrade. I am just unfortunately very happy with what I have, and I have one more GPU upgrade (looking forward to something like a GTX1170) before I will really need a new system. Who knows, this might be my last 'real' PC. In a few years it may make more sense to have a central gaming server that runs all the games, and then stream them to an end-user PC that is no more powerful than a cell phone.... or just dock my cell phone.
For consumers who intend to purchase a discrete GPU card, it's interesting to see it confirmed that Intel could include four additional CPU cores instead of the (for you) unnecessary integrated GPU, within pretty much the exact same die size.
It wouldn't cost them more to manufacture than an i7. They just want to be able to charge more money by forcing you into a different price range of product if you need many cores.
For instance: Intel’s new 10-core Core i7 Extreme Edition costs a whopping $1,723
I'm not sure I agree with your assessment. Intel has a vested interest in pushing people like you into the HEDT platform which is far more profitable. If you have a powerful dGPU then you are not "mainstream" by Intel's thinking. Based on the number of computers that have no dGPU maybe they are right.
I am not defending anything, I am neither an Intel shareowner nor have they seen a penny of mine in a decade. I am saying that what they are doing makes business sense even if it doesn't suit you as well as you'd like.
For someone who frequently responds to other people's posts with silly BS like "This post brought to you by CompanyName" why are you suddenly defending price gouging on Intel's part?
Price gouging makes business sense for any company. Tacking on an additional thousand dollars per part? Not really defensible.
The CPU I want is one that costs me the least electricity for the 95% of the time when I'm on Facebook and surfing, yet supports Photoshop well. I bought a Gold-rated Seasonic PSU a couple of years ago and my overall power usage is very low (40W?). Really, a low-end i3 ought to suffice for me. I'm still using a 2.8GHz AMD Phenom II with CPU throttling enabled (800MHz).
Have you considered one of the Intel T-suffix parts? They don't get a lot of coverage but give you the newest architecture with a very low TDP. The current Kaby Lake T parts are 35W TDP.
It takes Intel 6 years to post a 25% increase in single-threaded performance? Yeesh. Competition (read: Ryzen) can't come fast enough. My Sandy Bridge Dell Precision is staying put for a bit longer.
Well, they've transformed a power-hungry server architecture into something that the majority of users could use in a mobile device without complaining about power or performance. I'm also pretty sure that if they had commissioned Zen even before Bulldozer's release, we still wouldn't have seen anything until recently. I'm not going to defend them for Bulldozer, but it was either trash it and work on a replacement with no money coming in or at least try to fix its problems with power, L1 and decoder, and redesign the FPU. A 25%+ IPC boost from Bulldozer to Excavator, despite the loss of L3, would have been much better received had Bulldozer been better to begin with. That's what AMD have been doing.
Please consider abandoning the extreme focus on average framerates. It's old-school and doesn't really reflect the performance differences between CPUs anymore. Frame-time variance and minimum framerates are what is needed for these CPU reviews.
Would be a good choice for a new build if the user needs the latest tech, but I upgraded my 2500K to a 3770 for <$100USD.
I run an 850 for boot, a 950 for high speed storage on an adapter (thought it was a good idea at the time but it's not noticeable vs the 850) and an RX480.
Those CPUs exist but don't make sense for home usage. Have you noticed how hard it is to cool 150 watts? Imagine double that. There are some extremely high powered server chips but what would you do with 32 cores?
I read the part wasn't going to be available until later, did a search to confirm and found two offers: One slightly more expensive had "shipping date unknown", another slightly cheaper read "ready to ship", so that's what I got mid-January, together with a Z170 based board offering DDR3 sockets, because it was to replace an A10-7850K APU based system and I wanted to recycle 32GB of DDR3 RAM.
Of course it wouldn't boot, because 3 out of 3 mainboards didn't have Kaby Lake support in the BIOS. Got myself a Skylake Pentium part to update the BIOS and returned it: an inexcusable hassle for me, the dealer, and hopefully for the manufacturers, which had advertised "Kaby Lake" compatibility for months but shipped outdated BIOS versions.
After that, this chip runs 4.2GHz out of the box and overclocks to 4.5GHz without playing with voltage. Stays cool and sucks modest watts (never reaching 50W according to the onboard sensors, which you can't really trust, I gather).
Use case is a 24/7 home-lab server running quite a mix of physical and virtual workloads on Win 2008R2 and VMware Workstation, mostly idle but with some serious remote-desktop power, Plex video recoding oomph if required, and even a game now and then at 1080p.
I want it to rev high on sprints, because I tend to be impatient, but there is a 12/24 core Xeon E5 at 3 GHz and a 4/8 Xeon E3 at 4GHz sitting next to it, when I need heavy lifting and torque: Those beasts are suspended when not in use.
Sure enough, it is noticeably snappier than the big 12-core Xeon on desktop things and still much quieter than the quad, while of course any synthetic multi-core benchmark or server load leaves this chip in the dust.
I run it with an Nvidia GTX 1050 Ti, which ensures a seamless experience with the Windows 7 generation Server 2008R2 on all operating systems, including CentOS 7 virtual or physical, which is starting to grey a little on the temples, yet adds close to zero power at idle.
At 4.2 GHz the Intel i3-7350K HT dual is about twice as fast as the A10-7850K integer quad at the same clock speed (it typically turbos to 4.2 GHz without any BIOS OC pressure) for all synthetic workloads I could throw at it, which I consider rather sad (been running AMD and Intel side by side for decades).
I overclocked mine easily to 4.8 GHz and even to 5 GHz with about 1.4V and leaving the uncore at 3.8 GHz. It was Prime95 stable, but my simple slow and quiet Noctua NH-L9x65 couldn't keep temperatures at safe levels so I stopped a little early and went back to an easy and cool 4.6 GHz at 1.24V for "production".
I'm most impressed running x265 video recodes on terabytes of video material at 800-1200FPS on this i3-7350K/GTX 1050ti combo, which seems to leave both CPU and GPU oddly bored and able to run desktop and even gaming workloads in parallel with very little heat and noise.
The Xeon monsters with their respective GTX 1070 and GTX 980 Ti GPUs would do that same job slower while burning more heat, and yet video recoding has been such a big sales argument for the big Intel chips.
Actually Handbrake x265 software encodes struggle to reach double digits on 24 threads on the "big" machine: Simply can't beat ASIC power with general purpose compute.
I guess the Pentium HT variants are better value, but so is a 500cc scooter vs. a Turbo-Hayabusa. And here the difference is less than a set of home delivered pizzas for the family, while this chip will last me a couple of years and the pizza is gone in minutes.
Sure, the Handbrake x265 code will scale with CPU cores, but the video processing unit (VPU) within the GTX 10-series provides several orders of magnitude better performance at much lower energy budgets. You'd probably need downright silly numbers of CPU cores (hundreds) with Handbrake to draw even in performance, and by then you'd be using several orders of magnitude more energy to get it done.
AFAIK the VPU is the same on all (consumer?) Pascal GPUs and not related to GPU cores, so a 1080 or even a Titan X may not be any faster than a 1050.
When I play around with benchmarks I typically have HWinfo running on a separate monitor and it reports the utilization and power budget from all the distinct function blocks in today's CPUs and GPUs.
Not only does the GTX 1050ti on this system deliver 800-1200FPS when transcoding 1080p material from x264 to x265, but it also leaves CPU and GPU cores rather idle so I actually felt it had relatively little impact on my ability to game or do production work, while it is transcoding at this incredible speed.
Intel CPUs at least since Sandy Bridge have also sported VPUs, and I have tried to use them similarly for the MPEG-to-x264 transitions, but in my experience compression factor, compression quality and speed have fallen short of Handbrake, so I didn't use them. AFAIK x265 encoding support is still missing on Kaby Lake.
It just highlights the "identity" crisis of general purpose compute, where even the beefiest CPUs suck on any specific job compared to a fully optimized hardware solution.
Any specific compute problem shared by a sufficiently high number of users tends to be moved into hardware. That's how GPUs and DSPs came to be and that's how VPUs are now making CPU and GPU based video transcoding obsolete via dedicated function blocks.
And that explains why my smallest system really feels fastest with just 2 cores.
The only type of workload where I can still see a significant benefit for the big Xeon cores are things like a full Linux kernel compile. But if the software eco-system there wasn't as bad as it is, incremental compiles would do the job and any CPU since my first 1MHz 8-Bit Z80 has been able to compile faster than I was able to write code (especially with Turbo Pascal).
I think the sales argument for the big Intel chips as video encoders has been x264, where the faster NVENC, VCE, and QuickSync hardware encoders offer lower quality at a given bitrate than high-quality software x264 settings. For most people the hardware encoders are enough, but for many others the quality is not sufficient.
The quality difference between hardware and software HEVC is smaller, with high-quality software x265 encodes still beating the quality of your Pascal hardware encodes, but at a big performance penalty. It's not worth it for most people, but if you have limited bitrate/storage and want the best quality, it might be.
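If anyone wants to eyeball this for themselves, something like the sketch below works; the input file, encoder names and flags are assumptions that depend on your ffmpeg build, and CRF vs. CQ are not the same scale, so judge quality by eye or with a metric rather than treating the numbers as directly comparable:

    # Sketch: encode the same clip with software x265 and with NVENC, then
    # compare output sizes. "clip.mkv" is a hypothetical input file.
    import os
    import subprocess

    SRC = "clip.mkv"
    jobs = {
        "sw_x265.mkv":  ["-c:v", "libx265",    "-crf", "22", "-preset", "medium"],
        "hw_nvenc.mkv": ["-c:v", "hevc_nvenc", "-cq",  "22", "-preset", "slow"],
    }

    for out_name, vopts in jobs.items():
        # Strip audio (-an) so the size comparison reflects only the video stream.
        subprocess.run(["ffmpeg", "-y", "-i", SRC, *vopts, "-an", out_name], check=True)
        print(f"{out_name}: {os.path.getsize(out_name) / 1e6:.1f} MB")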
Thanks for the great review, Ian. Considering one needs an expensive Z-board to OC, for most people buying an i5 makes more sense. I don't understand why so many people complain about Intel allegedly not making enough progress. Now you get a dual core that comes close to (or even exceeds, in single-threaded benches) the former flagship quad core. If you want a CPU that vastly exceeds the "old" quad core, Intel also has newer quad cores; it is not like the i3 is the end of the lineup. For the $317 that the 2600K used to cost you can get a Kaby Lake non-K i7, which vastly exceeds it in performance (at a much lower TDP). I assume someone who could afford an over-$300 CPU 6 years ago can afford $300 now, and upgrading to an i3 may not be what that person would do anyway. The trend goes to more cores... most people here complain about Intel not offering mainstream hexa and octa cores; not sure why the same people are allegedly so eager to get dual cores.
The first number refers to the number of CPU cores. The second number refers to the iGPU configuration (the number of shaders, which may be a little bit different across generations, e.g. Haswell GT3 has 40 shaders, while Broadwell/Skylake GT3 have 48 shaders). The extra 'e' means there is an extra eDRAM cache (Crystalwell) on the CPU package.
Thank you for the article (especially when many of us are waiting on information about the new CPUs from both AMD and Intel). It is always good to have something to play (overclock) with, but this is a little bit expensive.
When I read the analysis on the first page, I noticed the lack of information on the CPU die size and transistor count disclosed by Intel recently. Also, I find it strange that the move from 22nm to 14nm (from Haswell to Broadwell) has such a large effect on the 2C+GT2 die (where Intel claims a 37% reduction, which can be seen in the table) but a much smaller one on the 4C+GT3 die. It feels even stranger that the Skylake 4C+GT3e die is a bit smaller than the Broadwell 4C+GT3e die. So I am quite curious about the sources of the die-size estimates.
P.S. I found the origin of the 234mm^2 of the Skylake die size estimate.
It seems that the die described is the Skylake-H(which is a 4C+GT4e configuration). This makes the 241.5mm^2 estimate of the Broadwell 4C+GT3e a little bit unrealistic (Skylake GT4e have 72 shaders, while Broadwell GT3e have 48 only)
Am I the only one noticing that the i5-4690 was beating the i5-7600k in a lot of benchmarks? I'm having a hard time processing how that was even possible...
Wasn't the 4690 Devil's Canyon? Similar IPC, higher clocks, I would assume. Most of the changes lately have been hardware decoders/encoders and I/O changes. Intel takes baby steps because it can; hopefully that changes with Ryzen.
No wonder Intel is not selling to consumers, who are complaining about stagnation. The money they make comes from enterprise, I guess.
I have not updated my Sandy Bridge for 6 years.
And I will not until Intel gives me a reason... these are only baby steps. I'd have to shell out 1200 euros for a new mobo, CPU, RAM, cooler... and for what... 30% more performance... meh.
This is how good a 2011 chip really is! It shows there really isn't much sense in upgrading a PC anymore, and with time it will make even less sense to invest money in an upgrade. What will things look like in 2024? Are we entering a stalemate in the PC arena?
"The Intel Core i3-7350K (60W) Review: Almost a Core i7-2600K" --------------------------------------------------------------------------------------- .....and not even CLOSE to a Sandy Bridge!
Can the Intel Core i3-7350K use my Optical port in DRM crippled Windows 10 for Audio Production?
Show me how!
Can I record what I hear on the desktop with the DRM crippling API's found in Windows Vista / 7 / 8 and 10 ?
Show me how!
Will it boot "directly" to Windows XP faster than I can on my 35 watt dualcore Sandy Bridge (3-seconds on a Samsung 850 Pro SSD) so I CAN use my optical ports and record whatever I want without a DRM crippled Spyware Platform, or do the new motherboards prevent me from booting to a NON-crippled O.S. like my copy of Windows XP?
Well?
Should I "upgrade" to a crippled platform that prevents me from doing ANYTHING I want to do, but allows me to do only what Microsoft graciously allows me to do?
........ and explain to me again why I should pay more for my own enslavement?
Finally got Optical SPDIF working in both Windows 8.1 AND Windows 10 after that rant above!
Yes, I really did think that was a DRM issue. My original USB Sound Blaster had optical in and out disabled in software updates after all the hysterical copyright-violation complaints.
Should've rather used an i5-2500K in the comparison; 2c/4t vs 4c/4t is fairer than 4c/8t. Although, a real comparison at 4.6GHz on both chips (or whatever the i3 can hit) would see the KBL obliterated regardless.
I think they said they're working on an overclocking article, but I agree the i5-2500K with both chips overclocked would have been a much more interesting test.
Intel should make all K-processors fully enabled (HT, ECC, cache) and sell them cheaply as 2.0-3GHz parts. Then give users the tools to make changes to cache, ECC, etc., and after that it's the user's task to find out what the CPU can do. That would bring back the good old days, and Intel could get rid of cores that are otherwise unsellable.
Still no need to upgrade from Sandy Bridge i5-2500k and just bought GTX 1080 for it.
So is Optane ever actually coming out? And is it going to actually work as a 16/32GB cache for mechanical storage? The ~1500/500 read/write speeds I saw quoted for it a while back would be nice as a cache for HDDs, but are slower than NVMe drives at this point.
Well, I guess I'm *still* sitting on my i7-2600k overclocked to 4.6 GHz. I pushed it from stock clocks in ~2013 assuming I'd replace it soon but four years later it's still ticking along just fine and I still don't have a compelling upgrade path!
I couldn't help remember the old Celerons from years past that could be overclocked to the point of more than double the performance of chips barely twice their price from Intel. This is nothing new. And glad to see Intel has really not lost their "geeky" mindset for the true hardware hardcore among us.
They're testing the i3-7350K with a Z270 here and used the on-chip GPU with Win7 x64. It would appear Wintel lied about Z270 + Kaby Lake not working with Win7? What driver is Ian Cutress using here for the integrated GPU testing? Please clear this up, Ian.
I'd be hard pressed to change a 2600K (which I had) for a 2C/4T CPU. But then, I was blessed with a God's chip: my 2600K easily and comfortably reached 5.2 GHz at ~1.38 V. I really don't believe the 7350K would catch up with THIS.
BTW, anyone doing even just a little bit of coding on their PC would welcome compilation benchmark!
TheinsanegamerN - Friday, February 3, 2017 - link
This chip is 6 years late. Back when sandy bridge was the newest chip, a dual core i3 was a super relevant choice for gaming, a quad core was overkill.Today, for gaming builds, a i5 chip is almost always a better choice, unless you only play games that are single threaded. And the i3 is more power hungry then locked quad cores.
At $130, this would be a great choice, but ATM, the i3k is overpriced for what it offers for a modern system.
nathanddrews - Friday, February 3, 2017 - link
I'd argue that with the introduction of this i3 K-variant and the new hyperthreaded Pentium, Intel just gave a lot of people a reason to not by an i5. The message from Intel seems to be this:"If you need great single-threaded performance with some mild multi-threaded, get the Pentium or i3. If you need great multi-threaded performance with great single-threaded, get an i7."
I'd say they are preemptively stacking the product deck prior to the release of AMD Ryzen - offering entry-level gamers more options without diluting their HEDT status.
BedfordTim - Friday, February 3, 2017 - link
In many of the games an i3-6100 offers effectively the same performance and is $50 cheaper. It isn't a case of the i-3750k offering great performance, so much as the games are not CPU limited. This points towards an even more expensive graphics card and the even cheaper CPU.jayfang - Friday, February 3, 2017 - link
Agree. Whatever about actual performance, it seems quite clear the cool factor of "unlocked" Ryzen's and joining the "overclocking community" is getting a pre-emptive strike from Intel.Michael Bay - Saturday, February 4, 2017 - link
OC never had a "cool factor".eldakka - Sunday, February 5, 2017 - link
In the late 90's into the early 00's, when people would travel for hours carting their PC to a LAN gaming event with 100's (or even thousands) of other people, having an OC'ed machine was indeed cool amongst that Geek crowd./em remembers his dual celeron 300A's OC'ed to 450MHz (yes, that's Mega - not Giga - hertz).
drgoodie - Tuesday, February 7, 2017 - link
I had a dual 366Mhz Celeron box OC'd to 550Mhz. It was cool back then.dsraa - Tuesday, February 14, 2017 - link
It was indeed and still is very very cool......I had been OC'ing my systems way back to the original Pentium 100, and then got a Celeron 300, OMG those were the days....If you don't think its cool, what the hell are you doing on Anandtech??!??!!!?DLimmer - Wednesday, February 15, 2017 - link
You may have missed the joke. This was a play on words; Overclocking produces more heat so it's "not cool."DLimmer - Wednesday, February 15, 2017 - link
OC is "hot, hot, hot"!realneil - Wednesday, February 8, 2017 - link
^^This^^Intel is ~finally~ facing some upcoming opposition in the CPU arena and they're trying to fill in some perceived gaps in their CPU lineup.
After Ryzen is released, expect to see multiple product changes from team blue right away to combat AMD's offerings.
CaedenV - Friday, February 3, 2017 - link
I think it will make more sense with next gen parts. I suspect we are watching a shift in the lineup that is slowly rolling out.celeron - duel core
pentium - entry duel core with HT (limited cache/clock/iGPU)
i3 - high-end duel core HT (essentially unchanged)
i5 - quad core with HT (today's i7)
i7 - 6-12 core with HT (today's LGA2011 line)
So why no straight quad core part? Well, 2 reasons.
1) it probably isn't needed. The original i5 parts were just i7s with broken HT cores that were disabled. I imagine most chips coming out now have perfectly fine HT cores, so they are artificially disabled. This increases the cost of the binning process, and reduces profit on a per-chip basis... especially if they can sell the same part somewhere between today's i5 and i7 price.
2) Right now I would wager that most home builders buy either an i3 because they are budget conscious, or i7 because their pride will not let them get anything less than the best. But the i7 that they buy is the lower margin 'consumer' i7 chips rather than the premium laiden LGA2011 i7 chips that make buku bucks on both CPU and chipset sales. Moving the i7 lineup to start at ~$500 instead of ~$280 would more than off-set the number of people willing to step down to an i5 chip; even if the step down is in name only and really the i5 would be more affordable while offering traditionally i7 performance levels.
3) Bonus reason; Ryzen chips are expected to land near today's i5/i7 chips in performance, and Intel does not want AMD to be able to say 'our chips are as fast as an i7 but only cost what an i5 does'. Instead, intel want's it's smug users (like myself) to say 'ya, that Ryzen is not a bad chip, but it doesn't hold a candle to my i7'. Real world benchmarks be damned, it is what people are going to say.
Alexvrb - Friday, February 3, 2017 - link
I wouldn't necessarily bet that more home users buy i7s than i5s. I personally know two gamers that recently built i5 systems because they wanted more oomph than a 2C/4T i3, but didn't want to spend money on an i7. Why? So they could spend more money where it makes the biggest difference... the graphics card. An i5 provides plenty of CPU horsepower for games, and gives you another $100 or so to spend on better graphics.I think their judgement was sound. I doubt they are alone in this kind of assessment. I think you're letting your admitted i7 smugness cloud your judgement a little bit.
Tunnah - Saturday, February 4, 2017 - link
I build PCs for my friends, and advise people on what to buy, and I don't know a single person apart from myself who has an i7 (and only know of 1 person who has an i3 but he uses his box for media). i5 is a perfect chip for casual users who use the PC mostly to game.
Hell the only reason I have an i7 is for Civ VI ha.
Alexvrb - Saturday, February 4, 2017 - link
Bingo!
Meteor2 - Sunday, February 5, 2017 - link
i7s (or Xeons) are nice if you're encoding a lot of x265 video (x265 gives better quality per bitrate than hardware encoders). That's the only desktop use case I can think of.
Meteor2 - Sunday, February 5, 2017 - link
...or apparently not, according to a comment way below, where a 12C/24T Xeon barely does double digit x265 FPS.
bak0n - Monday, February 6, 2017 - link
Exactly why I have an I5 3570k. My next build will either be a Ryzen (depending on whether it hits expectations), or the I5 of whatever generation is out when I'm ready to buy. Too big of a price jump to the I7 for a hardcore 1080P maxed-settings gamer, but not too much of a price jump over the i3's. That is, until now with the i3k, which I may actually give a second look.
DiHydro - Monday, February 6, 2017 - link
I fit into that category with one caveat, I also do some 3d modeling and rendering. This pushed me to the i7-2600k about four years ago, and I still don't feel that my CPU is the limiting factor on my PC.
Byte - Saturday, February 4, 2017 - link
Very true, most customers want the best or the cheapest. Changing the lineup like that would make it easier.
eldakka - Sunday, February 5, 2017 - link
"celeron - duel core"Calm down, breathe. It's not something worth dueling over!
AndrewJacksonZA - Monday, February 6, 2017 - link
Yeah eldakka, "duel" vs "dual" is also something that gets my heart rate up. :-)
Old_Fogie_Late_Bloomer - Monday, February 6, 2017 - link
Username checks out
AssBall - Friday, February 17, 2017 - link
Duel core? Funny mine never got into a fight.....
R0H1T - Friday, February 3, 2017 - link
Yeah that's true except when people find out that there's this thing called Ryzen just on the horizon. Seems to me that the HT pentium & unlocked is just a way to sell more of these KL chips & Intel are hoping/waiting for Coffee Lake to counter Zen.There's no way a dual core is justified today, even if unlocked or with HT, unless you're absolutely on a shoestring of a budget &/or KL is the only thing you want. It's such bad value for money atm that no one should be recommending it, not at this point in time.
lopri - Sunday, February 5, 2017 - link
I have to agree. Not necessarily because there is anything wrong with this chip technically but because of the competitive landscape where Intel's own quad-core chips can be had for the same or lower prices.
Meteor2 - Sunday, February 5, 2017 - link
I think the only thing which can be recommended at the moment is not to buy a CPU until Zen is released, in case AMD live up to their hype in performance and price their products competitively (i.e. cheaper than Intel).
bananaforscale - Wednesday, February 8, 2017 - link
And see how Intel reacts. Gimme a 20% price drop on hex cores!
Jumangi - Saturday, February 4, 2017 - link
They would be misleading gamers badly, as is this Anandtech review. Their gaming benchmarks are just woefully out of date; it's getting embarrassing. There are already games that have come out, like Far Cry 4, that literally won't run if the system doesn't have 4 full cores. Any real gamer is screwing themselves over by trying to skimp on a CPU like this. Any legit tech site would never recommend less than a 4 core CPU in 2017.
Meteor2 - Sunday, February 5, 2017 - link
@nathanandrews hasn't that always been the case? Except, you might not be able to afford an i7, and (as these results show) you're better buying an i5 and a better GPU for gaming.
forgot2yield28 - Sunday, February 5, 2017 - link
Agreed, the timing of the first ever i3 K variant just ahead of Ryzen seems more than just coincidental. Intel seems to be arguing that for value minded users, the IPC and high clocks will make this a better prospect than Ryzen's many cores and likely somewhat lower IPC. That's not new; what is new is that little K on the end, meant to capture that market segment of users on a budget who still want the fun of overclocking. Before, the logic was always that Intel wouldn't release an i3 K because it would cannibalize i5 sales. Now they seem to be proactively guarding a piece of market share that would pick an overclockable Ryzen chip instead of an i5. Competition is a wonderful thing!
futurepastnow - Sunday, February 12, 2017 - link
"I'd say they are preemptively stacking the product deck prior to the release of AMD Ryzen"Yep, Ryzen will also launch with its high-end parts first- AMD's competitiveness will not filter down to low-end parts until 2h16. Until 2C4T Ryzen parts appear, Intel will still have a monopoly on good cheap processors so the more they can sell in that time, the better, for them.
futurepastnow - Sunday, February 12, 2017 - link
I meant 2h17 lol, I write the date a dozen times a day and still get it wrong.zeeBomb - Friday, February 3, 2017 - link
Hmm. What should I get instead of this then around the price range or cheaper?
CaedenV - Friday, February 3, 2017 - link
how about a non-k i3?I mean look at the charts, they keep up just fine. Sure, you don't get overclocking capability, but you also get to save money by not needing a custom cooler ($30-50), or a z-series motherboard ($30-150), and the chips themselves are cheaper ($30-50). That saves you some $90+ on your build right there, while offering most of the performance. Either pocket the money, or spend it on a good SSD or better GPU.
stardude82 - Friday, February 3, 2017 - link
G4560... $64. Widely available now. Performs just below an i3-6100/i5-2500, above Haswell i3s.
Alexvrb - Friday, February 3, 2017 - link
If you want to go cheaper, see CaedenV's post below. If you're thinking about staying in roughly the same price range, get an entry-level i5. Something like a i5-7400. The cost of the processor itself is higher, but the total platform price will be around the same because of cost-savings elsewhere, like Caeden listed for the i3 non-K. You won't need to worry about overclocking so no need for upgraded cooling, and no need for an overclock-friendly board.CaedenV - Friday, February 3, 2017 - link
The i3 available back in the day suffered from quite a few things at the time, and had rather dramatic setbacks compared to the i5 and i7 offerings of the day. Still not bad as an entry level gaming CPU... but even it would bottleneck a mid to high range GPU at the time.But today's i3 offerings are able to offer enough performance to keep up with even today's mid to high end GPUs without problem! Part of that is the move to PCIe3, part of it is efficiency making up for a lack of cores, and part of it is simply because more and more games support HT cores where that use to not be the case.
On a win10 system there is even more advantage as it is better at off-loading background processes to less used cores, so even if your game does not take advantage of HT, windows will in order to alleviate the heavily loaded 'real' cores.
I think the really amazing thing to look at in these charts are how well the non-K i3 chips do. You can save a lot of money if you can give up OC and ~2-300 MHz. a plain-jane i3 on a B or H series chipset and a single mid to high-end GPU would game fantastically compared to a high-end i7 with z-series chip. Still not amazing for content creation (though not bad for a hobbiest)... but if all you are doing is video games, office/school work, web browsing, and watching videos then it is getting harder and harder to recommend anything other than an i3.
cocochanel - Friday, February 3, 2017 - link
I don't understand most comments. If you're gaming, an extra 50$ for an i5 is nothing. A CPU is good enough for 3-4 years. How much are you going to spend on games in that time period ? Here in Canada, Battlefield 1 Premium costs about 160$. That's just one game. How many games are you going to buy ? More than a few I guess. Besides, with DX12 and Vulkan becoming mainstream API's, a quad core is must. Just get an i5 or Ryzen and forget about it.javier_machuk - Friday, February 3, 2017 - link
Am I the only one that thinks that these tests should have been between the overclocked speeds of both processors? Isn't the idea behind an unlocked processor that you overclock it?
cknobman - Friday, February 3, 2017 - link
I definitely think they should have at least included those results.
fanofanand - Friday, February 3, 2017 - link
Some people purchase a "K" processor for the binning, not to overclock it.
JackNSally - Friday, February 3, 2017 - link
Ok. So exclude those that buy it to overclock?
fanofanand - Sunday, February 5, 2017 - link
I was responding to the comment saying the only reason to get a k was to overclock.
WithoutWeakness - Friday, February 3, 2017 - link
Absolutely should have included overclocking. Sandy Bridge chips had very conservative stock clocks and great overclocking potential. At the time you were almost guaranteed 4.4GHz-4.7GHz on air and there were lucky users reaching 4.8-5GHz (and more under water). My 2600K has been running stable at 4.6GHz (a 35% overclock) for six years now at 1.35v. Those single-threaded charts would look much different if you included overclocks and the multi-threaded charts would seriously widen the gap.
I'm glad Intel has opened the gates for overclocking i3's but this review really just shows how small Intel's gains have been in the last six years. I'm hoping Ryzen brings some serious performance to the table (especially at the high end) and lights a fire under Intel's asses. Better iGPUs and lower power consumption are great for laptops and basic users' needs but there has been no innovation in the HEDT market for many years unless you're willing to shell out $1700 for the 10-core Extreme Edition.
CaedenV - Friday, February 3, 2017 - link
Yep! Sandy Bridge was/is good tech!
I got the non-K i7 Sandy Bridge, and even that overclocked easily to 4.2GHz. It was an artificial limit, but I didn't need to spend the $50 premium to get a potential 300-500MHz out of it. Been humming along for 6 years now and hasn't missed a beat!
At this rate I will probably be 'upgrading' my game rig to a tiny little i3, and recycling my i7 as a home server for storage and VMs.
eldakka - Sunday, February 5, 2017 - link
"(and more under water)"Shocking.
dragosmp - Friday, February 3, 2017 - link
Agreed, Ian might have posted them. Still, read between the lines: there is a statement the 2600K does 4.8-5GHz. At 20% higher clock speed, the 2600K destroys the OCed i3 7350K, no contest. It may consume 4x the power, but dunno, do you care when the GPU consumes 5x more anyway?CaedenV - Friday, February 3, 2017 - link
Ya, not too useful on the gaming charts as even the non-K chips kept up with the GPUs just fine. But getting to see what it does for productivity tasks would be interesting.Actually, stock vs OC i3, i5, i7, and i7 Sandy would be very interesting to me.
dave_the_nerd - Friday, February 3, 2017 - link
Too much individual variation. Non-overclocked performance is the guarantee. Everything else is up to chance in the silicon lottery.Also potential for abuse: say, the manufacturer sends reviewers some golden sample that hits 5.1GHz on air. Hah! GPU makers used to pull stunts like that.
watzupken - Friday, February 3, 2017 - link
A dual core processor is still a dual core processor even if it is unlocked and offers a high clockspeed. I still feel Kaby Lake is a lazy upgrade over Skylake considering it barely offers anything new. Just take a look at the feature page to get a sense of the "upgrades". With competition coming from ARM and AMD Ryzen, is Intel only capable of a clockspeed war just like they did for Pentium 4?
CaedenV - Friday, February 3, 2017 - link
Well, to be fair Kaby Lake isn't for you and me. It is Skylake with very minor improvements mostly aimed at fixing the firmware level sleep and wake issues that manufacturers had (ie, the reason Apple didn't move to Skylake until well after release, and the botched deployment of the Surface Pro 4).
Outside of that it is just Skylake with a minor clock bump, slightly better thermals, and more of the chip on 14nm.
Shadowmaster625 - Friday, February 3, 2017 - link
So it will be 2025 before an i3 beats a stock 2600K in all benchmarks? That must mean it will be 2030 before it can beat a 4.8GHz 2600K. That's crazy, considering how badly the Core2Quad compares to even a modern celeron.
user_5447 - Friday, February 3, 2017 - link
Page 2: "There is one caveat however – Speed Shift currently only works in Windows 10. It requires a driver which is automatically in the OS (v2 doesn’t need a new driver, it’s more a hardware update), but this limitation does mean that Linux and macOS do not benefit from it."
This is incorrect: support for Speed Shift (HW pstates) was committed to the Linux kernel back in November of 2014, way before the Skylake release.
https://lkml.org/lkml/2014/11/6/628
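To make the kernel-support point above concrete, here is a minimal Python sketch (purely an illustration, not from the original comment) that checks whether a Linux box's CPU advertises the HWP feature Speed Shift relies on. It only reads the flags in /proc/cpuinfo, so it shows CPU capability; whether intel_pstate actually drives HWP still depends on the kernel build and boot parameters.

```python
# Minimal sketch: does this CPU advertise Intel HWP (the hardware
# behind Speed Shift)? Assumes a Linux system with /proc/cpuinfo.

def cpu_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                # flags line looks like: "flags : fpu vme ... hwp hwp_notify ..."
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("CPU advertises HWP:         ", "hwp" in flags)
print("CPU advertises HWP notify:  ", "hwp_notify" in flags)
```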
Hinton - Friday, February 3, 2017 - link
Of the 3 CPUs Anandtech received to review, this was the only one that was marginally interesting (we didn't need a review to know Kaby Lake performs equally to Skylake).
So of course you spent one month before reviewing it. Good for Anand that he took the money and ran.
fanofanand - Friday, February 3, 2017 - link
You may be unaware, but Ian has been kind of busy lately......
Meteor2 - Sunday, February 5, 2017 - link
He has? How so?
PCHardwareDude - Friday, February 3, 2017 - link
This would be interesting if the part wasn't so bloody expensive. $120 would be interesting.
At this price, you're better off spending a little more and getting an i5 or spending a lot less and getting the G4600, which is also dual core kaby lake with hyperthreading.
AssBall - Friday, February 17, 2017 - link
If you have a GPU
notjamie - Friday, February 3, 2017 - link
At £170 this is the exact price I paid for my 3570k almost 5 years ago. That's what I call progress.
Gich - Friday, February 3, 2017 - link
Sure, it's progress... but it used to be much more, much faster... so it doesn't feel like progress anymore.
StrangerGuy - Friday, February 3, 2017 - link
Things I learned:
1. 7350K is hilariously overpriced versus a G4560.
2. Overclocking stock 4.2GHz Intel parts that are already so far from the freq/power sweet spot, with so little headroom left, is mostly an exercise in futility.
3. That i5 7400 is crazy power efficient.
jaydee - Friday, February 3, 2017 - link
Regarding point #3.
It makes you wonder what the "T" designation is really all about. Is the i5-7600T @ 2.8-3.7GHz (35W) basically the same thing as the i5-7400 (65W), the only difference being they downclocked the base 200 MHz and upclocked the turbo 200 MHz? On paper you'd expect the "T" to be way more power efficient, but in actuality I bet they are about the same.
Dr. Swag - Friday, February 3, 2017 - link
Hey Ian, correct me if I'm wrong, but couldn't you have just downclocked a 7600k to "simulate" an i5 7400? After all, the cache is the same, so it should be the same except for the TDP...
fanofanand - Friday, February 3, 2017 - link
That would produce a ton of new variables though; i7s theoretically have gone through more exhaustive binning and are a "higher quality" chip that should be able to operate at higher frequencies with lower voltage. Should being an important caveat there.
snarfbot - Friday, February 3, 2017 - link
i think microcenter was selling 2600k's for 230 bucks. so 6 years later and you get this pos. progress.
fanofanand - Friday, February 3, 2017 - link
MicroCenter has long been offering sweetheart mobo + cpu deals, including the 7700K, so I'm not sure what you think you are proving with your comment. Go look at this processor with a mobo at MicroCenter and you tell me what you see.CaedenV - Friday, February 3, 2017 - link
2600 was $230, 2600k was $280
I only know because I didn't sleep for a week while I made the decision lol. Ended up with the 2600 non-k because it still boosted to 4.2GHz just fine and that was more than enough horsepower for me. Been using it for 6 years... omg... how is there no clear upgrade yet?
SaolDan - Friday, February 3, 2017 - link
Excellent!
Mr Perfect - Friday, February 3, 2017 - link
Wouldn't testing on Windows 10 have changed the results in favor of the i3 a little? It can't use its Speed Shift v2 in Windows 7.
Ian Cutress - Friday, February 3, 2017 - link
Next test bed update will be on W10. I keep getting mixed reactions recently from W10/W7/Linux users on this front - some want to see W10 poweeeeeer, others want default. But for DX12 it'll have to change over.CaedenV - Friday, February 3, 2017 - link
Bench-marking in win10 is... well... difficult. The OS has too many automatic features, so it is hard to get consistent results. You still get better overall performance... but not consistent performance. Win7 is gloriously dumb and gives very clear numbers to make very easy comparisons.Flunk - Friday, February 3, 2017 - link
It's a bit sad that you can compare any CPU from 2011 to one from 2017 and have them match up like this. In the 90's a CPU that was 6 years newer was many times faster than the older one. Is it lack of competition? Or have we just hit the wall with silicon chip technology?Ro_Ja - Friday, February 3, 2017 - link
Back in the days it was all about higher clock speed = faster. Nowadays it's a bit complex for me :\
BrokenCrayons - Friday, February 3, 2017 - link
It's probably a combination of both, but I'd go out on a limb and say it's mostly due to technology and not so much market forces. Intel's primary competition for new processor models really ends up being its own prior generations. If the company wants to land sales, it needs to offer a compelling incentive to upgrade.
There's also Intel's efforts to reduce TDP over successive generations (something the company would probably not do were there more credible competitive forces in the market). Those reductions are probably a side effect of a mobile-first perspective in modern CPU design, but there's something nice about buying a reasonably powerful 35W desktop processor and not having to worry about copper-pipe festooned tower coolers with 120mm fans strapped on them just to keep your chip happy. If I were to build a new desktop, I'd entertain a T-series part before exploring any other option.
StrangerGuy - Friday, February 3, 2017 - link
It's funny we got big perf/watt increases over the past few years in CPUs and GPUs, yet somehow everyone is still buying massive overkill 650W+ PSUs where most systems would struggle to even draw 1/3 of the PSU's rated wattage at load.
I'm pretty confident that an undervolted i5 7400 and GTX 1060 (60W @ 1600MHz according to THG) would be able to draw <100W at the wall in a normal gaming load with an efficient enough PSU...
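As a rough sanity check of that wall-power estimate, here is a back-of-the-envelope sketch; every component figure in it is an assumption chosen for illustration, not a measurement.

```python
# Back-of-the-envelope wall-power estimate. All numbers below are
# illustrative assumptions: an undervolted quad core around 40 W under
# a gaming load, a power-limited GTX 1060 around 60 W (the THG figure
# cited above), ~25 W for board/RAM/storage/fans, and a PSU that is
# roughly 90% efficient at this load level.

cpu_w = 40.0
gpu_w = 60.0
rest_w = 25.0
psu_efficiency = 0.90

dc_load = cpu_w + gpu_w + rest_w
wall_draw = dc_load / psu_efficiency
print(f"DC load: {dc_load:.0f} W, estimated wall draw: {wall_draw:.0f} W")
# ~139 W with these assumptions, so a sub-100 W wall figure needs
# noticeably lower component power than assumed here.
```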
fanofanand - Friday, February 3, 2017 - link
Because MOAR POWER and marketing. Seriously, they sell the high power PSUs for a LOT more than the lower powered PSUs, it's going to take consumers buying the 300-450W psu's en masse before the manufacturers adjust. Your theoretical operates under false assumptions however. The 1060 boosts up well beyond 1600 and will consume far more than 60 watts, and there are efficiency losses in the PSU and throughout your system. Go ahead and try to run a 1060 and an undervolted i5, see what happens.t.s - Friday, February 3, 2017 - link
He said normal gaming. His number is quite possible --with good mobo, ssd, no optical drive.
fanofanand - Friday, February 3, 2017 - link
No, it's not. For typical gaming the 1060 consumes between 90-120 watts. So please do tell me how his system with a 100 watt GPU is going to consume less than 100 watts with a CPU, mobo, RAM, etc.?
hybrid2d4x4 - Friday, February 3, 2017 - link
As a point of reference, I have a 1060 in an i5 4670 system running a 400W Platinum PSU. All stock clocks, 1 SSD, 1 HDD. Peak power in games measured at the wall is ~200W (180-200 depending on which AAA game), so I doubt <100W is doable.
But agree with the commentary about how overkill most PSUs are.
Flunk - Monday, February 6, 2017 - link
Yeah, that is funny. I'm using a massively overpowered PSU myself. I have a 850W unit running a system with a moderately-overclocked i7-6700k and Geforce 1070. Had it left over from my previous massively overclocked i5-2500k and dual Radeon 7970s, even if it's aged badly (which it probably hasn't it's only a few years old) it should still be good for ages, especially as under-stressed as it now is.fanofanand - Friday, February 3, 2017 - link
Or you could just get Ryzen with the wraith cooler :)
BrokenCrayons - Friday, February 3, 2017 - link
Perhaps when they're available for purchase I'll look into it. I'm interested in seeing what AMD does with mobile Ryzen, integrated graphics, and HBM for CPUs (unlikely) and how it changes laptop computing.
fanofanand - Friday, February 3, 2017 - link
The rumor mill has been churning and the consensus is that APUs will be available in 2018 with HBM. That will be a game changer for more than just mobile computing, but for small form factors as well. At least theoretically; experience tells me we should wait for reviews before deciding how profound the impact will be.
Flunk - Monday, February 6, 2017 - link
The Wraith cooler is both marginal and loud compared to quality aftermarket coolers that cost as little as $35. Sure it's better than the last AMD stock cooler, but that's more a case of the last AMD stock cooler being total garbage.bananaforscale - Wednesday, February 8, 2017 - link
Hey, no dissing huge air coolers! :D (Yeah, I have one and it's so big it largely dictated the case selection. Does keep a hexcore Bulldozer at 52 degrees at 4 GHz tho.) There's also the niggle on Intel side that their enthusiast line has only made it to Broadwell-E, so that's what I'll be upgrading to. A huge upgrade in IPC (which probably won't rise much in the next years), more cores and lower power use per core. I figure I'll be upgrading next around 2025. :D I'm pondering whether I should go AIO liquid or custom...MonkeyPaw - Friday, February 3, 2017 - link
More emphasis is going into the IGP.
CaedenV - Friday, February 3, 2017 - link
I doubt it is competition. I mean, lack of competition certainly explains the price per performance not coming down even though the manufacturing costs are getting cheaper, but I think that we have hit a performance wall.With every die shrink we can get more performance per watt... but the die is also more heat sensitive which kills stability for higher clocks. The idea that you can hit 5GHz on the new chips is nothing short of a miracle! But without a major increase in clock speed, then your performance is limited to the instruction sets and execution model... and that is much harder to change.
And that isn't hard to change because of competition. That is hard to change because PCs live and die by legacy applications. If I can't go back and play my 20 year old games every 3-4 years then I am going to get rather annoyed and not upgrade. If businesses can't run their 20 year old software every day, then they will get annoyed and not upgrade.
I think we are rather stuck with today's performance until we can get a new CPU architecture on the market that is as good as ARM on the minimum power consumption side, but as efficient as x86 on the performance per watt side... but whatever chip comes out will have to be able to emulate today's x86 technology fast enough to feel like it isn't a huge step backwards... and that is going to be hard to do!
xenol - Friday, February 3, 2017 - link
Anandtech please do frame-time tests as well for games. Average frame rate is good and all, but if the processor causes dips in games that could lead to an unpleasant experience.Mr Perfect - Friday, February 3, 2017 - link
I would also be interested in seeing this.
The site slips my mind, but somewhere tested multiple generations of i7s, i5s and i3s for minimum framerate and even the oldest i7s had a more consistent framerate than the newest i3s. It would be interesting to get AT's take on this.
Ian Cutress - Friday, February 3, 2017 - link
There are some minimum frame rate numbers in Bench, however they're not all regular (they're based on pure min, not 99%). The goal is to have some nicer numbers in our testbed update. Soon. When I find time to finish the script ... :Dfanofanand - Friday, February 3, 2017 - link
"This RGB fad that apparently sells like hot cakes"I love you Ian! In a totally hetero way.....
Seriously though great article, this should silence all the crybabies who whine about the lack of "Anandtech style in-depth analysis". You are still the best CPU reviewer in the biz!
jgarcows - Friday, February 3, 2017 - link
I'm still running an i5-2400 at default speeds that I paid $205 for when it first came out. It is insane how slow the improvement of Intel chips has been. You'd think by now an i3 would be an upgrade.
crashtech - Friday, February 3, 2017 - link
Frame times would be what hurts the i3 in games if anything. The averages may not be telling the whole story.
djscrew - Friday, February 3, 2017 - link
Don't count on this being the new norm. Even though Intel just invalidated a long-standing policy, and the perception that these are inferior chips, with this change, I don't think it will last. The next process shrink will likely bring with it a die size change, leaving the i3 people who want to upgrade a few years down the road SOL. They could simply roll back this "feature" and we're back to the status quo.
fanofanand - Friday, February 3, 2017 - link
Your comment doesn't make sense. The next node will require a new chipset and ANYONE with today's mobos will need to upgrade, EVERYONE will be SOL.jaydee - Friday, February 3, 2017 - link
Conclusions page: "A good example of this is Agisoft: the Core i5-7400 (which costs $14 more, quad core, 3.4-3.8 GHz) completes the work ~10% quicker."
Do you mean the i5-7400 @ 3.0-3.5 GHz, or the i5-7500 @ 3.4-3.8 GHz?
Ian Cutress - Friday, February 3, 2017 - link
Ah yes, I meant the 7400. I had 2600K numbers in my head at the time. :)
name99 - Friday, February 3, 2017 - link
"and goes in line with the fact that Intel has officially stated that one of the key features of the new 14+ process is that the transistors are more ‘relaxed’ and there’s no decrease in density."Remember those days when Intel was slagging TSMC for no transistor scaling? Ah good times.
[img] https://www.extremetech.com/wp-content/uploads/201... [/img]
I guess TSMC just decided to "relax" their transistors...
name99 - Friday, February 3, 2017 - link
"The latest memory technology to hit prime time is Intel and Micron’s 3D XPoint. "Where "hit prime time" means "may ship some time in 2019"?
No-one cares about 3D XPoint in SSDs; and the DRAM version seems utterly MIA since the initial enthusiastic Intel claims. (Come to think of it, much like Intel 10nm ...)
allanmac - Friday, February 3, 2017 - link
G4600T HD 610 ⬅ TYPO: HD 630
G4560 HD 630 ⬅ TYPO: HD 610
Ian Cutress - Friday, February 3, 2017 - link
Updated :)
KLC - Friday, February 3, 2017 - link
I have an 8+ year old Q6600 desktop and am thinking about a new build. I do mostly office work and photography with Adobe Lightroom and Photoshop, no gaming at all, I'll use the integrated graphics. Both LR and PS seem to not utilize multiple cores very well and respond mostly to clock speed. I'm wondering if my best bet is to buy one of these i3K cpus and mildly overclock. I will get the highest clock speed at a price lower than an i5K or i7K. What do you think?t.s - Friday, February 3, 2017 - link
Me: $168 is way too overpriced. I'll get the i3 7100. Or go with the i5 7500.
KLC - Friday, February 3, 2017 - link
i3 7100 is not a bad alternative. Like I said, most photography oriented tests show LR and PS to perform better with just a higher clock speed, not to multiple cores or anything else. With this cpu with a mild overclock up to say 4.5ghz I'm faster than an i7 clock for hundreds less. I'm wondering if that is worth $50 over the 7100. I'm thinking it is, $50 more factored into a complete new build is not much. Thanks for the comments.CaedenV - Friday, February 3, 2017 - link
Dude... you are coming from a Core2Quad. Even the weakest i3 is going to blow your mind!Seriously though; the CPU you buy essentially does not matter. If you are running on the iGPU then just get a chip with the best iGPU you can afford and call it a day. Pair it with a SSD (does not even need to be NVMe), and you will be absolutely blown away with the performance gains!
You should probably look at a semi-custom build like an Intel NUC, or Brix, or other such system where you just add ram, SSD, and OS. There is little to no point in building a whole tower PC unless you are doing something heftier than Lightroom.
For that matter, look into a laptop with a decent dock. You can do most of your work in the field while taking pictures, and dock it to a nice color correct screen for the fine-tuning end of work.
Elsote - Friday, February 3, 2017 - link
"just get a chip with the best iGPU you can afford"Are you talking about AMD?
Michael Bay - Saturday, February 4, 2017 - link
He said "best", not "the only reason anybody will even look at this case heater".Iris Pro.
KLC - Friday, February 3, 2017 - link
Oh, I know anything I build will be a big step up from what I have. I am going for a desktop but probably a mini ITX mb maybe in a nice Lian Li case, definitely an SSD. I'm still thinking about the details.fanofanand - Friday, February 3, 2017 - link
I am still on a Q6600 at home and a Skylake CPU at work. The difference isn't as "mind blowing" as some would suggest. It depends on what you do, and yes things that are IPC dependent will be much faster on newer systems, but the Q6600 is still no slouch.fanofanand - Friday, February 3, 2017 - link
More to the point, the difference between SSD and HDD is bigger than the difference between Q6600 and i5 7400.BrokenCrayons - Friday, February 3, 2017 - link
I agree with fanofanand on this one. I previously owned a Q6600 and went through the trouble of upgrading to an Athlon x4 860K (recently died) with a lot more/faster RAM (16GB DDR3 2133 vs 4GB DDR2 800). The difference was pretty underwhelming. I've got a Haswell i7 at the office and was moving between it and the Q6600 and the difference in performance was something I noticed, but it didn't leave me feeling like the Q6600 was incomparably slow.Hrel - Friday, February 3, 2017 - link
I'm just gonna point out that anyone saying Intel is in trouble, needs to realize that they have intentionally chosen to not improve CPU performance for several years, instead focusing on improving the integrated GPU. Look how much that's increased! That's in addition to incrementally improving CPU performance.Intel can start channeling their immense resources into improving CPU performance anytime they wish.
Remember, it's in Intel's best interests to keep AMD from going out of business. Outpacing them to the point of making AMD irrelevant would hurt Intel, long term.
CaedenV - Friday, February 3, 2017 - link
I really don't think that Ryzen is going to make Intel dramatically bump up the per-clock efficiency.The real bottle-neck in performance is in the instruction set itself. AMD is behind Intel because they are hitting the same IPC wall. It isn't that Intel hasn't attempted to push the envelope on IPC, it is that it is a universally hard thing to improve at this stage in the game. If it wasn't so difficult then AMD would have stepped up to the task years ago. Even Ryzen isn't going to beat intel in IPC, they are merely going to close the gap a bit, sell for less, and partner with game studios to push bundles.
Die shrinks will continue to make chips more efficient, but unless someone finds a way to dramatically increase clock speed, or come out with a new instruction set that has better IPC while being backwards compatible with x86 (with minimal performance hit, which is the hard part), then I think we are stuck at this level of performance for a long time.
fanofanand - Friday, February 3, 2017 - link
Graphene.BrokenCrayons - Friday, February 3, 2017 - link
Number 2 pencils all over the world unite! :3
AndrewJacksonZA - Saturday, February 4, 2017 - link
lol
fanofanand - Sunday, February 5, 2017 - link
Think you may be confusing graphene with graphite ;)
BrokenCrayons - Tuesday, February 7, 2017 - link
Maybe I am.....oooooor maybe the pencils know something we don't!
Aerodrifting - Friday, February 3, 2017 - link
There are plenty of games that can bottleneck an i5 or even a low power i7, which those benchmarks never show. People who do this sort of review clearly never play any demanding games, therefore they are not fit to do a comparison of gaming CPUs.
CaedenV - Friday, February 3, 2017 - link
Seems to me you only hit the CPU wall when dealing with multiple GPUs. For most games, with a single GPU, an i3 is plenty. Considering an i3 does not have enough PCIe lanes to support multiple GPUs, this is a rather moot point.
Aerodrifting - Saturday, February 4, 2017 - link
Like I said, You don't play any CPU demanding games so you have no right to make those ridiculous comments. Take Battlefield 1 for example, good luck in a 64 player map with an i3.
Michael Bay - Saturday, February 4, 2017 - link
>plays bf1
>blabs about rights
You`re like a perfect example.
Aerodrifting - Saturday, February 4, 2017 - link
Nice trolling, loser.
I am simply making a point: There are tons of games that can bottleneck an i3, Battlefield 1 is just one example. Stop lying to others that an "i3 can game just fine like an i7"; it's very misleading and misinformed.
fanofanand - Sunday, February 5, 2017 - link
Considering the length of time Dr Cutress has been reviewing CPUs and gaming, any notion that he is not fit to be reviewing gaming CPUs is absurd.
Aerodrifting - Sunday, February 5, 2017 - link
The notion that someone who is good at theorycraft reviews must be an expert on gaming PCs is absurd. 1 min of benchmark run in single player mode suddenly makes you an expert at gaming computers? Give me a break.
BrokenCrayons - Thursday, February 9, 2017 - link
If you doubt the validity of the claims made in these articles in spite of the years of experience the writers have AND the supporting evidence of their work, then its rather odd you'd read any of these reviews at all. We can infer from your responses that you feel insecure about your purchase decisions, feel compelled to defend them aggressively, and that you're dismissing the evidence at hand even though you personally find it useful so you can justify the defensiveness -- more to yourself than others here because honestly, why should the opinions ruffle your feathers if there's genuinely no doubt in your mind that you feel you're as correct as you claim?Aerodrifting - Saturday, February 11, 2017 - link
Nice job coming up with such rhetoric yet no concrete evidence in your argument. I do NOT DOUBT the validity of the claims, I KNOW they are WRONG for a fact. Your reviewers do reviews for the sake of results from incomplete tests and benchmarks that can not represent real life results, Therefore the conclusion is wrong. I have been playing video games and playing around with hardware when you guys were playing with sand, Before this website is even established, Yet you want to talk about "years of experience"? I am not defending anything, I am simply pointing out you are wrong and you are misleading people, You are just butthurt because finally someone is having a different opinion and they are actually right.You want evidence? Let's save the rhetoric and go straight to facts, There are games that can severely bottleneck i3 or even i5, Battlefield is just one of them. Doesn't matter what video card you use, Join a 64 player game and you can see your CPU usage go over 90% and game starts stuttering on i3 / i5.
Ratman6161 - Friday, February 3, 2017 - link
"there will be a time where a Core i3 based CPU will match the performance of the older Core i7-2600K"Maybe, but not today! I'm still patting myself on the back for my purchase of the i7-2600K back in the spring of 2011. My first computer ran an 8088 and for every computer I owned weather off the shelf or self built from then until 2011 left me constantly on the upgrade treadmill looking for more speed . But with the 2600K I finally hit good enough (other than adding more RAM and replacing spinning disks with SSD's along the way). While its fun to read about the new stuff I don't really see myself upgrading again until either the CPU or (more likely) the motherboard give out.
CaedenV - Friday, February 3, 2017 - link
Yep! I have upgraded the ram several times, the GPU a few times, and moved from HDD to SSD, all of which have kept my 2600 very happy.What is going to get me to change over is going to be the motherboard level improvements. I cant get a much faster GPU without hitting a PCIe2 bottleneck. NVMe, while impractical (real world tests show little to no improvement), has me drooling for an upgrade. 4-10gig Ethernet is going to be more and more popular going forward. DDR4 is finally maturing to a point where it is actually better than DDR3. All new devices run USB-C, and my piddly 4 USB3 ports can't do them all, and certainly not at full speed.
It is literally everything around the CPU that is slowly making me want to upgrade, not the CPU itself. But even these improvements need another year or two to bake before I am really thinking about an upgrade. I am just unfortunately very happy with what I have, and I have one more GPU upgrade (looking forward to something like a GTX1170) before I will really need a new system.
Who knows, this might be my last 'real' PC. In a few years it may make more sense to have a central gaming server that runs all the games, and then stream them to an end-user PC that is no more powerful than a cell phone.... or just dock my cell phone.
BillBear - Friday, February 3, 2017 - link
For consumers who intend to purchase a discrete GPU card, it's interesting to see it confirmed that Intel could include four additional CPU cores instead of the (for you) unnecessary integrated GPU within pretty much the exact same die size.
It wouldn't cost them more to manufacture than an i7. They just want to be able to charge more money by forcing you into a different price range of product if you need many cores.
For instance: Intel’s new 10-core Core i7 Extreme Edition costs a whopping $1,723
http://www.geek.com/tech/intels-new-10-core-core-i...
fanofanand - Friday, February 3, 2017 - link
I'm not sure I agree with your assessment. Intel has a vested interest in pushing people like you into the HEDT platform which is far more profitable. If you have a powerful dGPU then you are not "mainstream" by Intel's thinking. Based on the number of computers that have no dGPU maybe they are right.BillBear - Friday, February 3, 2017 - link
You're just defending price gouging now.
fanofanand - Sunday, February 5, 2017 - link
I am not defending anything, I am neither an Intel shareowner nor have they seen a penny of mine in a decade. I am saying that what they are doing makes business sense even if it doesn't suit you as well as you'd like.BillBear - Sunday, February 5, 2017 - link
For someone who frequently responds to other people's posts with silly BS like "This post brought to you by CompanyName" why are you suddenly defending price gouging on Intel's part?Price gouging makes business sense for any company. Tacking on an additional thousand dollars per part? Not really defensible.
block2 - Friday, February 3, 2017 - link
The CPU I want is one that costs me the least electricity for the 95% of the time when I'm on Facebook and surfing, yet supports Photoshop well. I bought a gold rated seasonic PSU a couple years ago and my overall power usage is very low (40w?). Really, a low end i3 ought to suffice for me. I'm still using a 2.8ghz AMD phenom II with the CPU throttling enabled (800mhz).Scipio Africanus - Friday, February 3, 2017 - link
Have you considered one of the Intel T suffix cores? They don't get a lot of coverage but give you the newest architecture with a very low TDP. The current Kaby Lake T cores are 35W TDP.
The newest review I could find was for Haswell T cores:
http://www.anandtech.com/show/8774/intel-haswell-l...
Scipio Africanus - Friday, February 3, 2017 - link
It takes Intel 6 years to post a 25% increase in single threaded performance? Yeesh. Competition (read: Ryzen) can't come fast enough. My Sandy Bridge Dell Precision is staying put for a bit more.
StrangerGuy - Friday, February 3, 2017 - link
And AMD did what exactly during the same 6 years? Sheesh.
silverblue - Saturday, February 4, 2017 - link
Well, they've transformed a power-hungry server architecture into something that the majority of users could use in a mobile device without complaining about power or performance. I'm also pretty sure that if they had commissioned Zen even before Bulldozer's release, we still wouldn't have seen anything until recently. I'm not going to defend them for Bulldozer, but it was either trash it and work on a replacement with no money coming in or at least try to fix its problems with power, L1 and decoder, and redesign the FPU. A 25%+ IPC boost from Bulldozer to Excavator, despite the loss of L3, would have been much better received had Bulldozer been better to begin with. That's what AMD have been doing.Michael Bay - Saturday, February 4, 2017 - link
>competition
>AMD
Ranger1065 - Sunday, February 5, 2017 - link
You are such a twat.
Meteor2 - Sunday, February 5, 2017 - link
Ignore him. Don't feed trolls.
jeremynsl - Friday, February 3, 2017 - link
Please consider abandoning the extreme focus on average framerates. It's old-school and doesn't really reflect the performance differences between CPUs anymore. Frame-time variance and minimum framerates are what is needed for these CPU reviews.
Danvelopment - Friday, February 3, 2017 - link
Would be a good choice for a new build if the user needs the latest tech, but I upgraded my 2500K to a 3770 for <$100 USD.
I run an 850 for boot, a 950 for high speed storage on an adapter (thought it was a good idea at the time but it's not noticeable vs the 850) and an RX480.
I don't feel like I'm missing anything.
Barilla - Friday, February 3, 2017 - link
"if we have GPUs at 250-300W, why not CPUs?"I'm very eager to read a full piece discussing this.
fanofanand - Sunday, February 5, 2017 - link
Those CPUs exist but don't make sense for home usage. Have you noticed how hard it is to cool 150 watts? Imagine double that. There are some extremely high powered server chips but what would you do with 32 cores?abufrejoval - Friday, February 3, 2017 - link
I read the part wasn't going to be available until later, did a search to confirm and found two offers: One slightly more expensive had "shipping date unknown", another slightly cheaper read "ready to ship", so that's what I got mid-January, together with a Z170 based board offering DDR3 sockets, because it was to replace an A10-7850K APU based system and I wanted to recycle 32GB of DDR3 RAM.
Of course it wouldn't boot, because 3 out of 3 mainboards didn't have Kaby Lake support in the BIOS. Got myself a Skylake Pentium part to update the BIOS and returned that: an inexcusable hassle for me, the dealer and hopefully for the manufacturers, which had advertised "Kaby Lake" compatibility for months but shipped outdated BIOS versions.
After that, this chip runs 4.2GHz out of the box and overclocks to 4.5 without playing with voltage. Stays cool and sucks modest Watts (never reaching 50W according to the onboard sensors, which you can't really trust, I gather).
Use case is a 24/7 home-lab server running quite a mix of physical and virtual workloads on Win 2008R2 and VMware workstation, mostly idle but with some serious remote desktop power, Plex video recoding ummp if required and even a game now and then at 1080p.
I want it to rev high on sprints, because I tend to be impatient, but there is a 12/24 core Xeon E5 at 3 GHz and a 4/8 Xeon E3 at 4GHz sitting next to it, when I need heavy lifting and torque: Those beasts are suspended when not in use.
Sure enough, it is noticeably snappier than the big Xeon 12 core on desktop things and still much quieter than the Quad, while of course any synthetic multi-core benchmark or server load leaves this chip in the dust.
I run it with an Nvidia GTX 1050ti, which ensures a seamless experience with the Windows 7 generation Sever 2008R2 on all operating systems, including CentOS 7 virtual or physical which is starting to grey a little on the temples, yet adds close to zero power on idle.
At 4.2 GHz the Intel i3-7350K HT dual is about twice as fast as the A10-7850K integer quad at the same clock speed (it typically turbos to 4.2 GHz without any BIOS OC pressure) for all synthetic workloads I could throw at it, which I consider rather sad (been running AMD and Intel side by side for decades).
I overclocked mine easily to 4.8 GHz and even to 5 GHz with about 1.4V and leaving the uncore at 3.8 GHz. It was Prime95 stable, but my simple slow and quiet Noctua NH-L9x65 couldn't keep temperatures at safe levels so I stopped a little early and went back to an easy and cool 4.6 GHz at 1.24V for "production".
I'm most impressed running x265 video recodes on terabytes of video material at 800-1200FPS on this i3-7350K/GTX 1050ti combo, which seems to leave both CPU and GPU oddly bored and able to run desktop and even gaming workloads in parallel with very little heat and noise.
The Xeon monsters with their respective GTX 1070 and GTX 980ti GPUs would do that same job actually slower while burning more heat, and yet video recoding has been such a big sales argument for the big Intel chips.
Actually Handbrake x265 software encodes struggle to reach double digits on 24 threads on the "big" machine: Simply can't beat ASIC power with general purpose compute.
I guess the Pentium HT variants are better value, but so is a 500cc scooter vs. a Turbo-Hayabusa. And here the difference is less than a set of home delivered pizzas for the family, while this chip will last me a couple of years and the pizza is gone in minutes.
Meteor2 - Sunday, February 5, 2017 - link
Interesting that x265 doesn't scale well with cores. The developers claim to be experts in that area!
abufrejoval - Sunday, February 12, 2017 - link
Sure, the Handbrake x265 code will scale with CPU cores, but the video processing unit (VPU) within the GTX 10x series provides several orders of magnitude better performance at much lower energy budgets. You'd probably need downright silly numbers of CPU cores (hundreds) with Handbrake to draw even in performance and by then you'd be using several orders of magnitude more energy to get it done.
AFAIK the VPU is the same on all (consumer?) Pascal GPUs and not related to GPU cores, so a 1080 or even a Titan-X may not be any faster than a 1050.
When I play around with benchmarks I typically have HWinfo running on a separate monitor and it reports the utilization and power budget from all the distinct function blocks in today's CPUs and GPUs.
Not only does the GTX 1050ti on this system deliver 800-1200FPS when transcoding 1080p material from x264 to x265, but it also leaves CPU and GPU cores rather idle so I actually felt it had relatively little impact on my ability to game or do production work, while it is transcoding at this incredible speed.
Intel CPUs at least since Sandy Bridge have also sported VPUs and I have tried to use them similarly for MPEG to x264 transcodes, but there, from my experience, compression factor, compression quality and speed have fallen short of Handbrake, so I didn't use them. AFAIK x265 encoding support is still missing on Kaby Lake.
It just highlights the "identity" crisis of general purpose compute, where even the beefiest CPUs suck on any specific job compared to a fully optimized hardware solution.
Any specific compute problem shared by a sufficiently high number of users tends to be moved into hardware. That's how GPUs and DSPs came to be and that's how VPUs are now making CPU and GPU based video transcoding obsolete via dedicated function blocks.
And that explains why my smallest system really feels fastest with just 2 cores.
The only type of workload where I can still see a significant benefit for the big Xeon cores are things like a full Linux kernel compile. But if the software eco-system there wasn't as bad as it is, incremental compiles would do the job and any CPU since my first 1MHz 8-Bit Z80 has been able to compile faster than I was able to write code (especially with Turbo Pascal).
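For readers curious what handing a transcode to that fixed-function block looks like in practice, here is an illustrative sketch that routes an HEVC encode through NVIDIA's hardware encoder via ffmpeg's hevc_nvenc encoder rather than Handbrake. It assumes an ffmpeg build with NVENC support and a reasonably recent NVIDIA GPU; the file names and bitrate are placeholders, and this is not the commenter's actual workflow.

```python
# Illustrative sketch: offload an HEVC (H.265) transcode to the GPU's
# hardware encoder using ffmpeg's hevc_nvenc encoder. Requires an
# ffmpeg build compiled with NVENC support and a compatible NVIDIA card.
import subprocess

cmd = [
    "ffmpeg",
    "-i", "input_1080p_h264.mkv",   # placeholder source file
    "-c:v", "hevc_nvenc",           # NVIDIA hardware HEVC encoder
    "-b:v", "4M",                   # placeholder target bitrate
    "-c:a", "copy",                 # pass the audio through untouched
    "output_hevc.mkv",
]
subprocess.run(cmd, check=True)
```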
JordanV - Tuesday, February 14, 2017 - link
I think the sales argument for the big Intel chips as video encoders has been for x264, where the faster NVENC, VCE, and QuickSync encoders offer lower quality at a given bitrate than higher quality x264 settings. For most people the hardware encoders are enough, but for many others the quality is not sufficient.
The quality difference between hardware and software HEVC is smaller, with higher quality software x265 encodes beating the quality of your Pascal hardware encodes but with a big performance penalty. It's not worth it for most people, but if you have limited bitrate/storage and want the best quality, it might be.
HerrKaLeun - Friday, February 3, 2017 - link
Thanks for the great review, Ian.
Considering one needs an expensive Z-board to OC, for most people buying an i5 makes more sense.
I don't understand why so many people complain about Intel allegedly not making enough progress. Now you get a dual-core that comes close (or even exceeds in single threaded benches) to the former flagship quadcore. If you want to have a CPU that vastly exceeds the "old" quadcore, Intel also has newer quadcores. It is not like the i3 is the end of the lineup.... For the $317 that the 2600k used to cost you can get a KabyLake non-K i7, which sure vastly exceeds in performance (and much lower TDP). I assume someone who could afford an over $300 CPU 6 years ago can afford $300 now and upgrading to an i3 may not be what that person would do anyway. the trend goes to more cores.... most people here complain about Intel not offering mainstream hexa and octa cores... not sure why the same people allegedly are so eager to get dual-cores.
zodiacfml - Friday, February 3, 2017 - link
Dual core is too weak for me. Web browsing can use more cores.
Hulk - Friday, February 3, 2017 - link
Sorry to be dense.
What does 2+2, 4+2, 4+3/e mean?
babysam - Saturday, February 4, 2017 - link
The first number refers to the number of CPU cores. The second number refers to the IGP configuration (the number of shaders, which may be a little bit different across generations, e.g. Haswell GT3 has 40 shaders, while Broadwell/Skylake GT3 have 48 shaders).
The extra e means there is an extra eDRAM cache (Crystalwell) on the CPU package.
Hulk - Saturday, February 4, 2017 - link
Thanks.
AndrewJacksonZA - Saturday, February 4, 2017 - link
Thank you babysam.
babysam - Saturday, February 4, 2017 - link
Thank you for you article (especially when many of us are waiting on the information of new CPU of both AMD and Intel). It is always good to have something to play (overclocked) with, but this is a little bit expensive.When I read the analysis of the first page, I see the lack of information on the CPU die size and transistor count disclosed by Intel recently. Also, I feel strange that the effect of the change of the 32nm to 22nm (from Haswell to Broadwell) have such a large difference on the 2C+GT2(which Intel claims there is a 37% reduction of the die, which can be seen in the table) and the 4C+GT3(which the difference are much smaller) CPU die. I feel even stranger when I see the Skylake 4C+GT3e die is a bit smaller than the Broadwell 4c+GT3e die. So I am quite curious on the sources of the die estimate.
P.S. I found the origin of the 234mm^2 of the Skylake die size estimate.
https://techreport.com/forums/viewtopic.php?t=1177...
which based on the images of the following
http://www.anandtech.com/show/10281/intel-adds-cry...
It seems that the die described is the Skylake-H(which is a 4C+GT4e configuration). This makes the 241.5mm^2 estimate of the Broadwell 4C+GT3e a little bit unrealistic (Skylake GT4e have 72 shaders, while Broadwell GT3e have 48 only)
babysam - Saturday, February 4, 2017 - link
Just found the die size of the Broadwell-H (4C+GT3e) in this document:
http://www.intel.com/content/www/us/en/embedded/pr...
According to the document, the die size of Broadwell-H (4C+GT3e) should be 13.7mmx12.3mm = 168.51mm^2
(Many thanks for the hints: https://forums.anandtech.com/threads/broadwell-cor... , which got the answer two years ago.)
WoodyBL - Saturday, February 4, 2017 - link
WoodyBL - Saturday, February 4, 2017 - link
Am I the only one noticing that the i5-4690 was beating the i5-7600k in a lot of benchmarks? I'm having a hard time processing how that was even possible...
silverblue - Saturday, February 4, 2017 - link
It's a bit weird, but most of them are within margin of error.
WoodyBL - Saturday, February 4, 2017 - link
Am I the only one noticing that the i5-4690 was beating the i5-7600k in a lot of benchmarks? I'm having a hard time processing how that was even possible...fanofanand - Sunday, February 5, 2017 - link
Wasn't the 4690 Devil's Canyon? Similar IPC, higher clocks, I would assume. Most of the changes lately have been hardware decoders/encoders and I/O changes. Intel takes baby steps because it can; hopefully that changes with Ryzen.
WoodyBL - Saturday, February 4, 2017 - link
Am I the only one noticing that the i5-4690 was beating the i5-7600k in a lot of benchmarks? I'm having a hard time processing how that was even possible...yankeeDDL - Saturday, February 4, 2017 - link
Glad to see that my 2-year-old A10 still trashes anything Intel on integrated graphics.
Gothmoth - Saturday, February 4, 2017 - link
no wonder intel is not selling to consumers, complaining about stagnation.the money they make comes from enterprise i guess.
i have not updated my sandy bridge for 6 years.
and i will not until intel gives me a reason.. this is only babysteps.
i had to cash out 1200 euro for a new mobo, cpu, ram, cooler... and for what.... 30% more performance..... meh
TelstarTOS - Saturday, February 4, 2017 - link
"Responsiveness? Top class."No way. It will suck in heavy multitasking.
synth0 - Sunday, February 5, 2017 - link
This is how good a 2011 chip really is!
This shows there really isn't much sense in upgrading a PC anymore, and with time it will make even less sense to invest money in an upgrade. What will it be in 2024? Are we entering into a stalemate in the PC area?
lopri - Sunday, February 5, 2017 - link
An excellent review but I would rather get a 7600K. Oh, wait. I already have something similar: 2600K.
Bullwinkle J Moose - Sunday, February 5, 2017 - link
"The Intel Core i3-7350K (60W) Review: Almost a Core i7-2600K"---------------------------------------------------------------------------------------
.....and not even CLOSE to a Sandy Bridge!
Can the Intel Core i3-7350K use my Optical port in DRM crippled Windows 10 for Audio Production?
Show me how!
Can I record what I hear on the desktop with the DRM crippling API's found in Windows Vista / 7 / 8 and 10 ?
Show me how!
Will it boot "directly" to Windows XP faster than I can on my 35 watt dualcore Sandy Bridge (3-seconds on a Samsung 850 Pro SSD) so I CAN use my optical ports and record whatever I want without a DRM crippled Spyware Platform, or do the new motherboards prevent me from booting to a NON-crippled O.S. like my copy of Windows XP?
Well?
Should I "upgrade" to a crippled platform that prevents me from doing ANYTHING I want to do, but allows me to do only what Microsoft graciously allows me to do?
........ and explain to me again why I should pay more for my own enslavement?
Bullwinkle J Moose - Sunday, February 5, 2017 - link
Correction: Can I record what I hear on the desktop with the DRM crippling API's found in Windows Vista / 7 / 8 and 10 ?
should be > Can I record what I hear on the desktop with the DRM "crippled" API's found in Windows Vista / 7 / 8 and 10 ?
The API's do not cripple the DRM
The DRM does all the crippling !
Bullwinkle J Moose - Sunday, February 5, 2017 - link
Finally got Optical SPDIF working in both Windows 8.1 AND Windows 10 after that rant above!
Yes, I Really did think that was a DRM issue
My original USB Soundblaster had optical in and out disabled in software updates after all the hysterical copyright violation complaints
Meteor2 - Sunday, February 5, 2017 - link
RE: why there are no 150-200W consumer CPUs: because there's no consumer software that could take advantage of 24C/48T CPUs, unlike GPUs.
Of course, if you want a 150W CPU, you can buy a big Xeon. But there's not a lot of software out there which can make use of them.
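For a rough sense of why, Amdahl's law is enough. The sketch below is purely illustrative, and the 10% serial fraction in it is an assumption rather than a measurement of any real workload.

```python
# Illustrative Amdahl's-law sketch: ideal speedup vs. core count when part
# of the work is serial. The 10% serial fraction is an assumed figure.

def amdahl_speedup(cores: int, serial_fraction: float) -> float:
    """Best-case speedup on `cores` cores when `serial_fraction` of the
    work cannot be parallelised."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

if __name__ == "__main__":
    serial = 0.10  # assumption: 10% of the workload is inherently serial
    for cores in (2, 4, 8, 16, 24, 48):
        print(f"{cores:2d} cores -> {amdahl_speedup(cores, serial):.2f}x")
    # Even 48 threads only reach ~8.4x here, which is why a 24C/48T part
    # would be wasted on most consumer software.
```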
The_Assimilator - Monday, February 6, 2017 - link
Should've rather used an i5-2500K in the comparison; 2c/4t vs 4c/4t is fairer than 4c/8t. Although a real comparison at 4.6GHz on both chips (or whatever the i3 can hit) would see the KBL obliterated regardless.
evilpaul666 - Wednesday, February 8, 2017 - link
I think they said they're working on an overclocking article, but I agree the i5-2500K with both chips overclocked would have been a much more interesting test.
Anato - Monday, February 6, 2017 - link
Intel should make all K-processors fully enabled (HT, ECC, cache) and sell them cheaply as 2.0-3.0 GHz parts. Then give users the tools to change the cache, ECC, etc., and after that it's the user's task to find out what the CPU can do. That would bring back the good old days, and Intel could get rid of cores that are otherwise unsellable.
Still no need to upgrade from my Sandy Bridge i5-2500K; I just bought a GTX 1080 for it.
evilpaul666 - Wednesday, February 8, 2017 - link
So is Optane ever actually coming out? And is it actually going to work as a 16/32GB cache for mechanical storage? The ~1500/500 MB/s read/write speeds I saw quoted for it a while back would be nice as a cache for HDDs, but are slower than NVMe drives at this point.
evilspoons - Wednesday, February 8, 2017 - link
Well, I guess I'm *still* sitting on my i7-2600k overclocked to 4.6 GHz. I pushed it from stock clocks in ~2013 assuming I'd replace it soon but four years later it's still ticking along just fine and I still don't have a compelling upgrade path!
ANobody - Wednesday, February 8, 2017 - link
Slow is the death of IPC progress. Painful to watch.
Ubercake - Friday, February 10, 2017 - link
I would hope an i3 marketed 5 generations later could match an i7 from 5 generations before.
Intel has had the market cornered for too long...
blzd - Friday, February 10, 2017 - link
You may need to test with some newer games, some of which I read are having issues running with dual cores.
Minimum FPS might be worth including as well.
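For what it's worth, minimum FPS is easy to derive once you have per-frame times. The sketch below is only illustrative, and the frame-time numbers in it are made up rather than taken from any benchmark.

```python
# Illustrative sketch: minimum FPS and a "1% low" figure from per-frame
# render times in milliseconds. The sample data below is made up.

frame_times_ms = [16.7, 16.9, 17.1, 16.8, 33.4, 16.6, 17.0, 45.2, 16.7, 16.9]

def fps(frame_time_ms: float) -> float:
    """Convert one frame time in milliseconds to frames per second."""
    return 1000.0 / frame_time_ms

# Absolute minimum FPS: the single slowest frame in the run.
min_fps = fps(max(frame_times_ms))

# "1% low": average FPS over the slowest 1% of frames (at least one frame).
slowest = sorted(frame_times_ms, reverse=True)
worst_slice = slowest[: max(1, len(slowest) // 100)]
one_percent_low = sum(fps(t) for t in worst_slice) / len(worst_slice)

print(f"minimum FPS: {min_fps:.1f}, 1% low: {one_percent_low:.1f}")
```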
Narg - Friday, February 10, 2017 - link
I couldn't help remembering the old Celerons from years past that could be overclocked to more than double the performance of Intel chips at barely twice their price. This is nothing new. And I'm glad to see Intel has really not lost its "geeky" mindset for the true hardware hardcore among us.
albert89 - Friday, February 17, 2017 - link
You can run the i7-2600K on Win8.1 and down. You can't do that with the i3-7350.
TheJian - Wednesday, February 22, 2017 - link
They tested the i3-7350 with a Z270 board here and used the on-chip GPU with Win7 x64. It would appear Wintel lied about Z270 + Kaby Lake not working with Win7? What driver is Ian Cutress using here for the integrated GPU testing? Please clear this up, Ian.
Wish they had used a GTX 1080.
Vatharian - Friday, March 3, 2017 - link
I'd be hard pressed to change a 2600K (which I had) for a 2C/4T CPU. But then, I was blessed with a golden chip: my 2600K easily and comfortably reached 5.2 GHz at ~1.38 V. I really don't believe the 7350K would catch up with THIS.
BTW, anyone doing even just a little bit of coding on their PC would welcome a compilation benchmark!
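A compile benchmark is easy enough to improvise at home; the sketch below is just one way to do it, and the project path and -j values in it are placeholders, not anything the review used.

```python
# Minimal DIY compile benchmark: time a clean build at several -j (parallel
# job) settings. The project directory and job counts are placeholders.

import subprocess
import time

PROJECT_DIR = "/path/to/some/makefile/project"  # placeholder path
JOB_COUNTS = (1, 2, 4, 8)  # spans 2C/4T- and 4C/8T-class CPUs

for jobs in JOB_COUNTS:
    # Start from a clean tree so every run does the same amount of work.
    subprocess.run(["make", "clean"], cwd=PROJECT_DIR, check=True)
    start = time.perf_counter()
    subprocess.run(["make", f"-j{jobs}"], cwd=PROJECT_DIR, check=True)
    elapsed = time.perf_counter() - start
    print(f"make -j{jobs}: {elapsed:.1f} s")
```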
Artanis2 - Friday, June 9, 2017 - link
Still to come:
Calculating Generational IPC Changes from Sandy Bridge to Kaby Lake
Intel Core i7-7700K, i5-7600K and i3-7350K Overclocking: Hitting 5.0 GHz on AIR
Intel Launches 200-Series Chipset Breakdown: Z270, H270, B250, Q250, C232
Intel Z270 Motherboard Preview: A Quick Look at 80+ Motherboards
WHEN ?!