Quite nice comparison. Unfortunately, it seems that, while Broadwell does have the best IPC of the bunch, the overclock is pathetic. 1.325V to hit 4.2GHz? My Ivy Bridge 3570K does the same clock at 1.075V. Now, I've been told I have an exceptionally good chip, but it strikes me as odd that Broadwell, on a smaller 14nm process, can't match what Ivy Bridge could do two years ago. And since Sandy Bridge can be OC'ed to 4.7GHz+ with ease, and Ivy can hit 4.5, there's still no reason to upgrade to Broadwell, as any IPC gains are cancelled out by the lower clock rate. Unless you need to do lots of Dolphin emulation and refuse to overclock at all, the ancient Sandy Bridge still seems to do the best.
Ah, yes, the 920 was a lovely beast. Started overclocking at 3.6: it booted. Tried 3.8: booted. Tried 4.0: failed. 3.8 was literally done in less than an hour as my second-ever attempt at overclocking, my first being the Intel E6600. When a dying PSU wounded it, I got a 3930K. It does 4.0GHz, and I've yet to find any situation where it's a bottleneck, aside from things like rendering and benchmarks. I considered upgrading to the 59xx series, but when I learned that only the 5960X would be an 8-core, I quickly decided against it.
It'll be interesting to watch Skylake and Zen fight it out in a year or so.
I'm surprised Intel isn't banking on nostalgic memories of the Q6600 to hype the 6600K & 6700K... Surely marketing had a hand in the simplified naming reminiscent of the old C2Q.
I'm still on an i7-920 from mid 2009. It's been running at 3.6GHz the entire time, still as rock solid as the day I bought it. I still can't believe I've been using one PC this long. Before the i7, I would upgrade every 1.5-2 years tops. This thing is nuts.
We've reached the end of that exponential advancement, so you can expect things to advance at roughly this rate for a while, at least until we also reach "small enough".
There is no end. Intel just doesn't care, as they have no competition. Why would they spend money on increasing performance when they can focus on efficiency for mobile? They can get away with selling basically the same CPUs every year on desktop, because they're still the fastest.
Same situation as you. Got a 3930K... it has happily sat at 4.8GHz just fine for many, many years and still gives Haswell-E a good run for its money. Only paid $500 AUD at the time, too! That's in stark contrast to the 5930K, which is currently $860 AUD... Intel has given me ZERO compelling reason to upgrade unless I want to drop $1,500 on the 5960X, which isn't three times as fast as my 3930K. It's like they don't want my money!
If Intel had released an 8-core 5930K around the $600 mark I wouldn't have thought twice about upgrading, even if the motherboard and memory drove the prices higher.
I've also still got a Core 2 Quad Q6600 rig running at 3.6GHz which handles most tasks fine. It's almost 8 years old now, has certainly paid for itself, and still handles most of the latest games fine.
I totally agree with you; 6 cores isn't worth bothering with at what Intel's charging. I'm still holding on to my i5-2500K, which does 4.4 all day at 1.2V (and a bit more if you give it a bit more juice).
Haven't even bothered to take my 2500K past 4.0GHz, leaving all the tuning at stock. I remember back in the day I had her up at 4.6 stable with tweaks, but even then it wasn't really a bottleneck. Humming along at 55C max under load at 4.0GHz. Most of these newer CPUs run much hotter thanks to the IGP as well, making Sandy even more enjoyable. Also, why would a K-series CPU even ship with integrated graphics in the first place?
I still rock an Asus X58 board with an i7-950 (used to be an i7-920) back at the office. Never overclocked, completely stable, and still completely competitive with modern PCs 7 years later.
Obviously it uses more power (130W vs 80W) than a performance-comparable Haswell Xeon E3-1230 v3, but the difference is a few dollars a year.
Same here. I have an Asus X58 motherboard with 24GB of CL7 memory. I recently pulled the 4GHz-overclocked i7-920 and replaced it with an $85 Xeon X5675 overclocked to 4.6GHz. My ancient system will hold its own against most newer systems. Not going to upgrade until Skylake-E or later.
Still running an i7-920 @ 3.8GHz on custom water cooling. The OS, SSDs, HDDs, and GPUs have all been upgraded since 2009. It's still holding its own in gaming and multithreaded software (video rendering).
Also, with the hexacore X5650 being available for around $70-100 used, I can probably breathe a bit more life into this.
I invested big time in my X58 platform, but it's still going strong. A year ago I upgraded my i7-920 to a Xeon X5650 (bought second hand for about $100). Now I have a six-core, hyperthreaded, 32nm, overclocked beast that runs much cooler than my 920. I can't believe this platform is now 7 years old.
I have been waiting for so long to upgrade my 2500K and R9 290. I was looking at a 6600K and Fury X, but for around $2,000 the performance really isn't there. A second-hand 290 for $250 will probably be my next upgrade, and the system would still be faster than a brand-new 6600K/Fury build. More money for games, I guess.
Is that entirely true? From the graphs, you can expect a 10-20% improvement in performance at the same clock compared to Sandy Bridge, and Broadwell is a good 30% lower in power consumption. In other words, the 0.25V difference in overclock is exactly why Broadwell consumes 30% less power. If you don't care about the power, you can clock it up, volt it up, and see a 10-20% improvement in performance. You can argue that a 10-20% improvement isn't worth it, of course. The IPC gains only matter if you care to overclock the Broadwell part.
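As a rough sanity check on that claim: dynamic power scales approximately with V² × f. A quick sketch using the voltages quoted in this thread (Broadwell sample at 1.325V, a good Ivy Bridge sample at 1.075V); this ignores static leakage and capacitance differences between processes, so treat it as a back-of-envelope estimate, not a measurement:

```python
# Rough dynamic-power estimate: P is proportional to C * V^2 * f.
# Voltages are the ones quoted in this thread; leakage and per-process
# capacitance differences are ignored.

def relative_dynamic_power(v, f, v_ref, f_ref):
    """Dynamic power relative to a reference operating point."""
    return (v / v_ref) ** 2 * (f / f_ref)

# Same 4.2 GHz clock, different voltages:
ratio = relative_dynamic_power(1.075, 4.2, 1.325, 4.2)
print(f"Ivy Bridge-style voltage uses ~{100 * (1 - ratio):.0f}% less dynamic power")
```

That lands in the same ballpark as the ~30% figure above, though leakage on 14nm complicates the real picture.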
Still rocking an i5-2500K @ 4.0GHz and an R9 290 @ 1.1GHz, and it's rocking along no problem. I want to buy new shiny things, but there is zero reason to, which is nice for the value but a little odd given the age of this processor. If I'd known I'd be hanging onto this CPU for so long I would have picked up the i7-2600K/2700K, but even then, the i5-2500K is a powerhouse, apparently. It just puts into perspective how crazy powerful these CPUs were when they landed 5 years ago.
They should just lock each of these CPUs at 4.0GHz and bench it out; I bet the 2600K still fires up there with the best of them. The sample used here is still stock, so with an OC it's more or less up there all the way through to a 980 Ti.
For desktop users the article only consolidated what was already known: hold tight to your Haswell CPU until Skylake (and even then you probably won't need to upgrade). The Z97 chipset is a very mature platform and the high-clocked parts have dropped in price. The 4790K is an absolute brute. Haswell, just like Sandy and Nehalem, is going to be a very long-lived generation.
Forget that, just hold tight to your Sandy Bridge. Four years running and you can make up most of the difference in IPC by just cranking up the clock. Once you take the lower frequency into account, Broadwell's ~18% improvement drops to only around 10%.
I'm really curious how the Skylake i7 will fare in exactly this kind of comparison article. Like many, I'm still on a 2600K and pondering an upgrade to either Haswell-E or Skylake.
Me too. I run my 2600K at 4.5GHz on air with no stability issues ever, and by the looks of it, 4 years later I'm STILL looking at a lateral upgrade. Skylake had better at least add some OC headroom back along with its 15%, or this might be the first gaming PC I've had die of old age...
I'd wait it out if I were you. The 2600K is absolutely no slouch, and you'll probably be disappointed after spending lots of cash on Haswell-E. The only thing going for these newer chips is the peripherals, so it's all about your priorities.
Yup, same here. OC'ed my 2500K to 4.6GHz on air and have had the same build for over 4 years now. Still excellent performance; the only regret I have is not getting the 2600K at the time. I've since gotten deep into development and local server VMs, so that would've been a great setup.
I'm in a similar boat with a 2600K, but I also have a Sandy Bridge-E 3930K. So far I'm not feeling any pressure to upgrade either chip on the processor side.
For me, the most attractive thing about Skylake is the chipset, which adds 20 PCIe lanes on top of the 16 from the CPU. This should let some motherboards stack on features without compromising dual-GPU scenarios, and even enable triple-GPU setups with all x8 links. (There are enough lanes to do quad GPUs, but DMI would be too much of a bottleneck for two cards plus IO.)
Haswell-E, on the other hand, just doesn't interest me at all. The low-end 5820K is a cheap 6-core part but has the reduced PCIe lane count. In many regards, Skylake with Z170 would be a better option than a 5820K setup. Going to the 5930K improves IO, but the price premium just isn't worth it. Thankfully, Broadwell-E should arrive at the very end of this year or early next, so hopefully Intel can revive the X99 platform.
Yes, thanks for including the 2600 in this. Mine has been doing well, and with DX12 reducing CPU dependence in the future, it's probably going to be relevant for some time. It's nice to see what an upgrade will actually be worth.
Heh, me too. 2600K OC'd to 4.5GHz, going on over 4 years now. Which is just crazy, because before that I upgraded my CPU every 2-3 years. But now it seems Intel cares less about raw performance than power/performance, and AMD is a clusterduck.
Well, think about WHY these results are as they are:
- There is one set of benchmarks (most of the raytracing and scientific stuff) that can make use of AVX. They see a nice boost going from the initial AVX (implemented by routing each instruction through the FPU twice), to AVX on a wider execution unit, to the introduction of AVX2.
- There is a second set of benchmarks (primarily WinRAR) that manipulate data which fits in the Crystalwell cache but not in the 8MB L3. Again, a nice win there, but that's a specialized situation. In data-streaming workloads (which better describe most video encode/decode/filtering), that large L4 doesn't really buy you anything.
- There WOULD be a third set of benchmarks (if AnandTech tested for this) that showed a substantial improvement in indirect branch performance going from IB to Haswell. This is most obvious on interpreters and similar such code, though it also helps virtual functions in C++/Swift style code and Objective C method calls. My recollection is that you can see this jump in the GeekBench Lua benchmark. (Interestingly enough, Apple's A8 seems to use this same advanced TAGE-like indirect predictor because it gets Lua IPC scores as good as Intel).
OK, now we get to Skylake. Which of these apply? No AVX bump except for Xeons, and usually no Crystalwell. So the betting would be that the BIG jumps we saw won't be there. Unless they've added something new that they haven't mentioned yet (e.g. a substantially more sophisticated prefetcher, or value prediction), we won't even get the small targeted boost that we saw when Haswell's indirect predictor was added. So all we'll get is the usual 1-2% improvement from adding 4 or 6 more physical registers and ROB slots, maybe two more issue slots, a few more branch predictor slots, the usual sort of thing.
There ARE ideas still remaining in the academic world for big (30% or so) improvements in single-threaded IPC, but it's difficult for Intel to exploit these given how complex their CPUs are, and how long the pipeline is from starting a chip till when it ships. In the absence of competition, my guess is they continue to play it safe. Apple, I think, is more likely to experiment with these ideas because their base CPU is a whole lot easier to understand and modify, and they have more competition.
(Though I don't expect these changes in the A9. The A7 was adequate to fight off the expected A57; the A8 is adequate to fight off the expected A72; and all the A9 needs to do to maintain a one-year-plus lead is add the ARMv8.1a ISA plus the same sort of small tweaks and two-hundred-or-so MHz boost that we saw applied to the A8. I don't expect the big microarchitectural changes at Apple until: they've shipped the ARMv8.1a ISA; they've shipped their GPU (tightly integrated HSA-style, with not just VM and shared L3 but tighter, faster coupling between CPU and GPU for fast data movement, and with the OS able to interrupt and to some extent virtualize the GPU); and they're confident enough in how widespread 64-bit apps are that they don't mind stripping out the 32-bit/Thumb ISA support in the CPU [with what that implies for the pipeline, in particular predication and the barrel shifter] and can create a microarchitecture purely optimized for the 64-bit ISA.
Maybe this will be the A10, IF the A9 has ARMv8.1a and an Apple GPU.)
"The A7 was adequate to fight off the expected A57;"
In hindsight the A7 was not very good at all; it was the reason Apple was unable to launch a large-screen phone with decent battery life. Look at the improvements made in the A8: around 10% better performance, but 50% more battery life.
"they've shipped their GPU" — by the way, why do you expect them to ship their own GPU and not use IMG's? IMG's GPUs have consistently been the best on the market.
Nah, the older Ivys can be overclocked to easily match these chips. The IPC of Broadwell is overshadowed by a 400MHz-lower clock rate on a typical OC. The only reason to upgrade is if you NEED something on the new chipset or are running some Nehalem-era chip.
A slight correction: in the image of Crystalwell, it's the die on the left (the much larger one) which is the cache, and the small one on the right is the CPU.
Actually, no, the author has it correct. The big die is the CPU/GPU, and the small one is the eDRAM. On the GT3 dies, Intel folds the graphics back across the CPU cores instead of laying it out as a very long rectangle.
Ian, thank you for this excellent article. I have wished for a comparison of the 2600K to the more recent CPU iterations; one can piece some of it together here and there, but this comprehensive view is outstanding! Still holding out for Skylake; then the 2600K might have to retire.
I am really happy to see the i7-2600k comparison here. Like others who've commented, I'm still running that CPU- albeit at stock clock- and it's been totally stable with months on end of uptime (knock on wood). Sure, I've upgraded the GPU once or twice since 2011, but I can't see any reason to build a new system based on these benchmarks. The GPU (GTX 780) is still the limiting factor for gaming, and the 15-20% performance boost overall won't make a significant difference in my day-to-day usage. I now understand why Intel is struggling.
Same here. I bought a 2600K in the first month it was out. After years of 24/7 operation at 4.9GHz it died. I replaced it with a $100 2500K that's running at 4.6GHz. SB for the win.
OC benchmarks from each generation? I saw stock benchmarks and 3GHz benchmarks, but not benchmarks at a "Good" or "Great" OC. I was expecting them based on the title, but didn't see anything in the article.
Yes, that's what I thought. I want to see what they can all do at a good OC. I don't run my CPU at stock or at 3GHz; I want to see how my overclocked Sandy Bridge would do against an overclocked Broadwell, to tell whether it's worth an upgrade yet.
With the whole point of the article being that IPC goes up, this rule really isn't suitable. If IPC goes up by 20%, then where the previous generation followed a 5%-per-200MHz rule, the new generation follows either 6% per 200MHz or 5% per 167MHz. Really, though, instructions per second (IPS) is what matters, and that depends not on the absolute size of the overclock but on the ratio of the overclock to stock. Jumping to 4.2GHz from 3.2GHz is a 31% gain, but going to 4.5GHz from 3.5GHz is a 29% gain, despite both being a 1.0GHz overclock.
With the typical IPC gain of 4.4%, we could roughly estimate that a Broadwell at 4.2GHz is like a Haswell at 4.4GHz. With 4.2GHz being a "Good OC" on Broadwell and 4.5GHz a "Good OC" on Haswell, we'd still expect Haswell to be faster once overclocked, but the review should be showing this. However, if a particular program makes really good use of the eDRAM, then that 4.2GHz is akin to a Haswell at 4.9GHz, which is beyond an excellent OC...
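The clock-equivalence arithmetic above can be sketched out quickly. The 4.4% IPC multiplier is the thread's own rough figure, not a measured value:

```python
# Relative single-thread throughput ~ IPC multiplier * clock (GHz).
# Haswell is the baseline (1.0); Broadwell gets the ~4.4% typical IPC
# gain quoted above.  These are thread figures, not measurements.

def throughput(ipc_multiplier, clock_ghz):
    return ipc_multiplier * clock_ghz

broadwell_oc = throughput(1.044, 4.2)   # "Good OC" Broadwell
haswell_oc   = throughput(1.000, 4.5)   # "Good OC" Haswell

# Broadwell at 4.2 GHz behaves like a Haswell at ~4.4 GHz...
print(f"Broadwell 4.2 GHz ~ Haswell {broadwell_oc:.1f} GHz")
# ...so a well-overclocked Haswell still comes out ahead:
print(haswell_oc > broadwell_oc)  # True
```

The eDRAM-heavy case is the exception: there the effective multiplier is much larger, which is where the "akin to 4.9GHz" figure comes from.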
I never understood the "pea size" method. Peas come in many different sizes, and it seems to me that a typical pea is rather large. You need something standard, like a BB. They're a standard size, 0.177 caliber, and three of them in a line seems to work best.
I put a grain-of-rice-sized dab in the middle, get a glasses cloth, rub it over both the heatsink and the heat spreader, then rub it off. There should be a slight tint left.
Then put another grain-of-rice-sized dab in the middle and screw the heatsink down. Done.
Very polarizing CPU. Any idea why Intel doesn't put Crystalwell in laptops? I don't want a discrete GPU in mobile anymore, due to the risk of dead GPUs/mobos after a few years.
They do, but the CPUs with Crystalwell are quite expensive, so most OEMs shy away from them: too expensive for a cheap laptop, and in a higher-end laptop they put a dGPU instead.
In my next laptop, I want an Iris Pro (w/ Crystalwell) chip and NO dGPU! I don't want the power consumption, and Iris Pro is plenty of performance for what I do on a laptop. Unfortunately, it's kinda hard to find high-ish end laptops with that config. :(
Until my pretty-much-5-year-old Sandy Bridge 2600K, running between 4.5 and 5.0GHz on PCIe 2.0 at x8 lane speed, bottlenecks a dual-GPU setup to the point that it can't push at least 60FPS at 3440x1440 on my 34" 21:9 LG 34UM95 monitor, there is no reason whatsoever to upgrade.
Also, with DX12's reduced CPU overhead, new DX12 games should run great on an old Sandy CPU. DX12 should also greatly improve the performance of my EVGA GTX 770 4GB Classified SLI setup, since it can use split-frame rendering instead of alternate-frame rendering. With each card rendering only half a frame instead of a whole frame, the VRAM can be "sorta" stacked, letting the 4GB on each card act more like one 8GB pool.
Skylake should be the big one that has been awaited since Sandy Bridge: Ivy Bridge's tick reduced overclockability because of the process node; Haswell improved IPC but added the onboard voltage regulator, which made overclocking on that node still worse; and Broadwell keeps the voltage regulator while focusing further on lower power.
I'm not saying Skylake will reach the dizzying raw gigahertz of Sandy Bridge, but two generations of tocks, the removal of the onboard voltage regulator, and, if we're lucky, the improved thermal compound used in Haswell Devil's Canyon could together make for a significantly faster chip, one which may well reach the upper end of the 4.x GHz range.
Fingers crossed; if it can come near SB levels of OC I'd be content... just enough for the IPC advantage not to be wiped out by raw clock speed. I'm really upgrading my 2500K for the platform anyway (M.2 in particular), but it'd be nice to get a halfway decent CPU upgrade too.
The Broadwell equivalent of a 5.0GHz Sandy Bridge overclock in raw instruction throughput is 4.3GHz, and a 4.6GHz Haswell overclock is equivalent to a 4.4GHz Broadwell. Broadwell wasn't designed for the desktop, so it isn't designed for good overclocking. If Skylake's consumer flagship has the same clocks as the 4790K, with say 10% IPC over Broadwell, then at 4.4GHz it will have the same instruction throughput as a 5.0GHz Haswell or a 5.7GHz Sandy Bridge chip. Overclocks should improve as the 14nm process matures, so less voltage is needed for a given clock. If Intel does it right, then everyone will be happy (except AMD, since Zen would be screwed over).
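Those equivalences can be checked by backing the IPC ratios out of the comment's own pairings, so this is the comment's arithmetic restated, not an independent measurement:

```python
# IPC ratios implied by the pairings above:
bdw_vs_sb  = 5.0 / 4.3   # Broadwell vs Sandy Bridge: ~1.16x per clock
bdw_vs_hsw = 4.6 / 4.4   # Broadwell vs Haswell: ~1.045x per clock

# Hypothetical Skylake: 4.4 GHz with +10% IPC over Broadwell,
# expressed as an equivalent Broadwell clock:
skl_in_bdw_ghz = 4.4 * 1.10

print(f"~{skl_in_bdw_ghz * bdw_vs_hsw:.1f} GHz Haswell-equivalent")      # ~5.1
print(f"~{skl_in_bdw_ghz * bdw_vs_sb:.1f} GHz Sandy Bridge-equivalent")  # ~5.6
```

So the 5.0GHz Haswell / 5.7GHz Sandy Bridge figures above check out to within rounding.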
Can the integrated GPU overclock well? I ask because I have a very small PC, currently with an A10-7800, and I'm looking at getting this bad boy to replace it.
Hey, the answer will more than likely be no. I see no tools for iGPU overclocking in my PC's Z97 BIOS, and my testing on an older Intel HD 4600 iGPU yielded no results, so I doubt Intel will unlock the iGPU. If you can, hold out for the next big APU launch or for Skylake, or get faster RAM, which will easily let the 7800 catch up to this chip if you can get a 2400MHz kit, since Kaveri gains a lot from better RAM. I'm also fairly sure you can overclock AMD iGPUs whether or not the CPU is locked, so if you haven't already tried that, do.
Fantastic article! I know more than a few people who will be happy to see there's still no compelling performance reason to upgrade their Sandy Bridge systems. Talk about getting extra bang for the buck!
Any chance you might be able to do a quick follow-on post and throw some Nehalem numbers in there? I'm sure there are a few first-gen Core i7 owners wondering if it's worth upgrading now, or better to hold off for Skylake.
On a side note, does anyone know when we'll actually be able to buy these Broadwell processors?
What would you consider worth upgrading for, then? A moderately overclocked modern CPU could easily be twice as fast as your setup, not to mention the numerous platform upgrades since...
I mean sure, the Q9550 may be fast enough to not be a dog, but there certainly is a lot to be gained...
But gained for what? Back in the day, you had to upgrade because new content formats were emerging. I distinctly remember buying a PC every two years through the '90s and early '00s. I had a 386, then a 486, then a Pentium 166 (no MMX :), Celeron 333, Athlon 1100, Athlon 64 3200+. The reason I upgraded was the new content: MP3s, DVDs, XviDs, MKVs. Some of those systems simply couldn't play them, so there was a reason to invest. The i7-920 system I've now had for 6.5 years is still going. Sure, there are games, and there is 4K, but 4K is not the jump that HD was over DVD. There just comes a point where you don't notice. Like with smartphones: 150 to 300 ppi, night and day; 300 to 5xx ppi, not so much. There's just no reason to upgrade except for numbercrunching or gaming.
Personally, I'd be very interested in what Intel could do if it pushed its TDP to GPU levels (200-300W) and used that thermal headroom for added GPU EUs, possibly as a dual-die-on-one-interposer solution to keep costs down.
It looks like if you didn't upgrade before, you may as well keep waiting. I'm on X58 with an X5650, and Skylake-E looks like the best bet to consider.
Nice one, Ian. Any way we can get you to test the TDP-down configuration? Intel states 37W can be achieved at 2.2GHz. As someone looking at this as an HTPC processor upgrade, the CPU performance is overkill for me; what I'm interested in is the integrated graphics.
IPC gains are getting meaningless. I only upgraded to X99 a few months ago because I knew waiting wouldn't be worth it, and from what I've seen of Skylake and Broadwell, they're not worth the wait. Skylake wasn't even benched fairly in those leaked slides; it had a 900MHz advantage over the 5820K. Clock both at 4GHz and I bet the Skylake chips are less than 10% faster overall.
Imo the 5820K is the best bet: two more cores where it matters, and even DX12 may take advantage of them. Yes, people are obviously going to go Broadwell or Skylake just for the IPC increase, even if it's a very small one, or just for e-peen benchmarks where you wouldn't even see the difference in real-life use.
Also, to add to this: I upgraded from X58 and it was completely worth it, over 40% faster in IPC terms, plus the new tech on the mobo: SATA 3, much faster boot-up, and a UEFI BIOS. Skylake-E will have all the same and is still a ways off, with only a small additional IPC increase.
But so expensive; it's not just the CPU. A decent motherboard is easily twice the price of a non-X99 one, and you really want quad-channel DDR4, which also costs a lot.
"there is plenty of talk surrounding the upcoming launch of Skylake, an architectural update to Intel’s processor line on 14nm. I can’t wait to see how that performs in relation to the four generations tested in this article."
Remember that the PR folks will be comparing Skylake against the slower Broadwell, not the faster-overall Haswell, so these 4-way/5-way tests will be critical to your purchase choices. Well done, and let's hear about the missing/fused-off AVX updates inside consumer Skylake!
I keep hoping for a reason to upgrade my i7-2500k, but I'm just not seeing it here, unless you do a lot of time-critical work in the areas with larger improvements. For gaming the differences seem really 'meh'; money is better spent keeping up with GPUs. Everything else is 'fast enough'.
Think you mean i5 there, or 2600K... no such thing as an i7-2500K AFAIK. ;P There are some gaming scenarios where Sandy Bridge starts to lag behind, though they're fringe cases, e.g. 4K/Eyefinity with CF/SLI and very recent games that are actually a little CPU-bound in certain areas. There's also minimum fps vs average, which is often overlooked. If I upgrade it'll be largely for the platform benefits (namely M.2 with enough lanes to use it), and I'm tempted by Haswell-E; we'll see...
A "Good" OC for the 2500K/2600K SB chips is 4.5GHz. 4.7GHz 24/7 is for the incredibly small minority who watercool with custom loops, and 4.9GHz is for the 5% or so who won the silicon lottery and custom-watercool as well.
You can't believe how cheap most people are. Most people just rock the Cooler Master Hyper 212 Evo for $25. You can't do 4.7 GHz with a 212. Period.
Also, there's no budget option for context, like an overclocked AMD FX. Some consider buying a $100 8320E and overclocking to 4.5 on something like an EVO, or 4.7 on something like a Thermalright TSP 140 ($55 on Amazon), if they can get the $40-off Microcenter combo deal. It would be helpful to see how many FPS a person gives up by going that route, especially with recent games like Witcher 3 that are apparently trending toward loading multiple cores more successfully.
I was hoping any more Broadwell coverage would include ripping-quality comparisons to Haswell, at least. Is it still fast but crappy, or have they fixed quality so I don't have to keep my GPU off? :( Throw some HandBrake tests in, please. Is Quick Sync fixed yet?
http://www.anandtech.com/show/7007/intels-haswell-... Any changes since this? Or since the review by Anand that covered it (linked in there)? Or do we all just hope for a fix with Skylake? I saw a recent software update for Haswell, but I'm not sure if it does anything about quality here.
Grid Autosport with the 290X shows a very strange result. I assume this game supports the AVX2 instruction set, since Sandy and Ivy have roughly the same performance but the jump to Haswell brings a big improvement?
Yawn... at this point I'm more interested in much better utilisation of hardware through software like DX12 than in sinking tens of billions into CPU die shrinks with next to zero real-world benefit. The paradox, of course, is that the former will make the latter even more irrelevant than it already is.
Unfortunately, your IPC increase charts on page 3 of the article contain several mistakes. The benchmark charts show that the Broadwell chip is not the fastest in either the 3DPM:ST or the CBenchR15:MT benchmark, while your IPC increase charts tell a completely different story. Also, some of the other calculated IPC increase values stated in the charts are completely irreproducible on my side. Second, you made a systematic mistake in calculating the IPC increase for the "lower is better" benchmarks. This is crucial especially in the Dolphin benchmark, where the total IPC increase from SB to Broadwell would be 58.0% rather than 36.7%. To make this clear: processor A, which takes half the time to perform a certain task compared to processor B, does not offer a 50% increase in IPC over B, but a total of 100%.
You should re-check your numbers. Hope this helps.
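The correction about "lower is better" benchmarks is easy to demonstrate in a few lines; the 36.7%/58.0% Dolphin figures are the ones quoted in this thread:

```python
# On a timed benchmark, the throughput (IPC) gain is the ratio of the
# run times, not the fraction of time saved.

def naive_gain(t_old, t_new):
    return (t_old - t_new) / t_old   # fraction of time saved

def true_speedup(t_old, t_new):
    return t_old / t_new - 1         # extra work per unit time

# Halving the run time is a 100% speedup, not 50%:
assert naive_gain(100, 50) == 0.5
assert true_speedup(100, 50) == 1.0

# Dolphin: a 36.7% reduction in run time is really a ~58% gain.
t_new = 100 * (1 - 0.367)
print(f"{true_speedup(100, t_new):.1%}")  # 58.0%
```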
Somehow the incorrect benchmark result graphs were placed in those spots; they were from the non-3GHz testing and also had a different z-height. I have updated it; the IPC numbers for those benchmarks in the main graphs are still accurate. For those benchmarks at stock voltage, the balance between frequency and IPC, as to which matters more, certainly plays out on a larger scale.
Also, regarding the timed benchmarks: arguably I actually labelled the axis in terms of percentage improvement rather than IPC improvement, despite the title of the benchmark. But you are correct; I mistakenly used "% improvement" and "IPC" interchangeably. I have updated the results with a disclaimer.
Any other issues, let me know. I'm also contactable by email if urgent! -Ian
I have a question. I've seen an article speculating that Intel intends to launch a new chip with a larger eDRAM (1GB) and 15GB of 3D XPoint on package. The large eDRAM would compensate for the lower write speed of the 3D XPoint, while the overall chip would offer massive advantages in power consumption, plus non-volatile memory for sleep states. This would be extremely competitive for mobile computing and servers.
Given this quote from the article — "Cycling back to our WinRAR test, things look a little different. Ivy Bridge to Haswell gives only a 3.1% difference, but the eDRAM in Broadwell slaps on another 16.6% performance increase, dropping the benchmark from 76.65 seconds to 63.91 seconds. When eDRAM counts, it counts a lot." — would that make sense?
The article says this change in chip design is the reason we're seeing another tock for Kaby Lake.
Can Intel please bring out a processor that destroys Sandy Bridge, so I don't have to hear all the smug comments every time they release a processor? I am so bored by them. Can't we talk about something interesting, like why Crystalwell is such a big focus for them when apparently all anyone in the enthusiast bracket wants is raw power?
And now look at all the PC users who don't play games, but use their PCs for doing actual work.
No tests of professional graphics software like Photoshop, Premiere, etc. No tests of professional audio software like Reason, Cubase, etc. No tests of professional 3D software that would actually benefit from the Iris Pro and its GPGPU capabilities; and no, Cinebench is CPU-based rendering only, so it doesn't count for much with things like flow simulations done in SolidWorks.
The i7-5775C is currently the best CPU out there for small workstations, thanks to the very powerful Iris Pro and the low TDP. You can build a very powerful rig in a box as small as a Mac Mini while keeping the advantage of a 4C/8T CPU and an iGPU as powerful as an R7 250/GT 740. And this rig would only draw some 150W under full load, which is totally doable with a picoPSU.
I'm finally getting my i7-5775C delivered sometime next week, after a two-month wait, and I can at last build the rig I've always wanted: Antec ISK110, picoPSU 160-XT, Noctua NH-L9i, 2x SSD (256/512GB), and 16GB DDR3. Strapped to the back of my screen, almost inaudible, and very low power draw.
This CPU is meant for SFF workstations, but for that niche it's the best there is.
Thanks for the detailed review, Ian! As far as I know it's the most detailed one yet. And probably the last, with Skylake hopefully just around the corner.
Anyway, I would really like to see how low the power consumption of this chip can go. It's obviously optimized for mobile and low clocks, so pushing it to high clocks and high voltages (even the stock voltage is a lot for 14nm!) doesn't show it in its best light. When you tested voltage scaling, I would have liked to see the minimum voltage needed at the lower speeds (you just left it at stock). How low can this chip's power consumption go at ~1.0V, and what clock-speed hit does it take to get there? (BTW: I run my 3770K at 4.0GHz at a bit over 1.0V, which is great for 24/7 load.)
Concerning the Linux performance, could one kindly ask for another piece of software to be benchmarked? NAMD is nice, but to my knowledge the bigger share of the scientific community in molecular dynamics uses Gromacs (www.gromacs.org), which is also much faster and better optimized for current hardware (AVX2 support for instance, multiple-GPU support) and has a bigger community around it.
I'm still rockin' an i7-950, ASUS P6X58D-E, 12GB DDR3-1600 and 2x 6870s in Crossfire. The only part showing its age is the 6870s (probably going to upgrade next GPU gen). The rest runs everything I throw at it perfectly, and that's at stock speed. I can always get a better CPU cooler and OC it for more longevity. Intel really needs to get things together. Hope Zen gives them a push.
My question is: does the L4 cache help with multi-threaded use? Say a home Hyper-V server lab environment? It might make the 5775C a better buy than one of the hex-core socket 2011 CPUs.
They should put out a multi-threaded K-series 4-core with no IGP at a basement price for the more value-minded people, gamers specifically, especially considering the main reasons for upgrading are the tertiary features on the motherboard. I doubt anyone with a Sandy is looking at this with any interest, especially considering my Sandy board has PCIe 3.0 and SATA 6Gbps and rocks 4-4.5GHz easily.
I'd love to see dGPU test regimes that weren't obviously GPU limited, and that include minimums, or some form of data that indicates how well stutters are being avoided. But an interesting read, nonetheless.
TheinsanegamerN - Monday, August 3, 2015 - link
Quite nice comparison. Unfortunately, it seems that, while Broadwell does have the best IPC of the bunch, the overclock is pathetic. 1.325V to hit 4.2GHz? My Ivy Bridge 3570K does the same clock with 1.075V. Now, I've been told I have an exceptionally good chip, but it strikes me as odd that Broadwell, being on a smaller 14nm process, can't match what Ivy Bridge could do two years ago. And since Sandy Bridge can be OC'ed to 4.7GHz+ with ease, and Ivy can hit 4.5, it seems there is still no reason to upgrade to Broadwell, as any IPC gains are cancelled out by the lower clock rate. Unless you need to do lots of Dolphin emulation and refuse to overclock at all, the ancient Sandy Bridge still seems to do the best.
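The "IPC gains cancelled by clock rate" claim can be sanity-checked with a little arithmetic: net speedup is the IPC ratio multiplied by the clock ratio. The ~21% Broadwell-over-Sandy IPC ratio below is an assumed round number for illustration, not a figure from the review.

```python
# Net relative throughput = (IPC of new / IPC of old) * (clock of new / clock of old).
# The 1.21 IPC ratio is an illustrative assumption.

def net_speedup(ipc_ratio, new_clock_ghz, old_clock_ghz):
    """Throughput of the newer chip relative to the older overclocked one."""
    return ipc_ratio * (new_clock_ghz / old_clock_ghz)

# Broadwell at a 4.2 GHz OC vs Sandy Bridge at a 4.7 GHz OC:
print(round(net_speedup(1.21, 4.2, 4.7), 2))  # -> 1.08, so only a single-digit net gain
```

So even with a healthy IPC lead, a 500MHz clock deficit eats most of it, which matches the comment's conclusion.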
K_Space - Monday, August 3, 2015 - link
TheinsanegamerN agreed. Those who held onto their Sandy made a very wise investment, just like those good ol' 920s back in the X58 era.
Dupl3xxx - Monday, August 3, 2015 - link
Ah, yes, the 920 was a lovely beast. Started overclocking at 3.6. It booted, tried 3.8, booted, tried 4.0, failed. 3.8 was literally done in less than an hour as my second ever attempt at overclocking, with my first being the Intel E6600. And when a dying PSU wounded it, I got a 3930K. It does 4.0GHz, and I've yet to find any situation where it's a bottleneck, besides things like rendering and benchmarks. I considered upgrading to the 59xx series, but when I learned that only the 5960X would be an 8-core, that was quickly decided against.
It'll be interesting to watch Skylake and Zen fight it out in a year or so.
Impulses - Monday, August 3, 2015 - link
I'm surprised Intel isn't banking on nostalgic memories of the Q6600 to hype the 6600K & 6700K... Surely marketing had a hand in the simplified naming reminiscent of the old C2Q.
augiem - Monday, August 3, 2015 - link
I'm still on an i7-920 from mid 2009. Been running 3.6GHz the entire time, still rock solid as the day I bought it. I still can't believe I've been using a PC for this long. Before the i7, I would upgrade every 1.5 - 2 years tops. This thing is nuts.
mkozakewich - Tuesday, August 4, 2015 - link
We've reached the end of that exponential advancement, so you can expect things to advance at roughly this rate for a while, at least until we also reach "small enough".
close - Tuesday, August 4, 2015 - link
That's logarithmic advancement :). It keeps slowing down year after year.
Cryio - Tuesday, August 4, 2015 - link
Technically with Sandy Bridge they reached the end. SB was quite a jump over Nehalem.
Harry Lloyd - Tuesday, August 4, 2015 - link
There is no end. Intel just doesn't care, as they have no competition. Why would they waste money on increasing performance when they can focus on efficiency for mobile? They can get away with selling basically the same CPUs every year on desktop, as they are still the fastest.
Badelhas - Tuesday, August 4, 2015 - link
I also blame AMD. If they had good high-end CPUs, Intel would be forced to improve the ones they've been selling for the last 5 years or so.
plonk420 - Wednesday, August 12, 2015 - link
you "blame" AMD?a fuckup warrants "blame." you can't really "blame" someone or some company for not being smart enough to outwit/outperform the competition.
david_tocker - Tuesday, August 4, 2015 - link
I recently upgraded my i7-920 to a Xeon X5670 in the same board. Less power, more performance, same motherboard. Has USB3 - what else do I need?
StevoLincolnite - Monday, August 3, 2015 - link
Same situation as you. Got a 3930K... It has happily sat at 4.8GHz just fine for many, many years and still gives Haswell-E a good run for its money. Only paid $500 AUD at the time too!
That is in stark contrast to the 5930K which is currently $860 AUD... Intel has provided me with ZERO compelling reason to upgrade unless I wish to drop down $1500 for the 5960X, which isn't three times as fast as my 3930K.
It's like they don't want my money!
If Intel had released an 8-core 5930K around the $600 mark I wouldn't have thought twice about upgrading, even if the motherboard and memory drove the prices higher.
Also still got a Core 2 Quad Q6600 rig running at 3.6ghz which handles most tasks fine, it's almost 8 years old now, it has certainly paid for itself and still handles most of the latest games fine.
Flunk - Monday, August 3, 2015 - link
I totally agree with you, 6 cores aren't worth bothering with for what Intel's charging. I'm still holding on to my i5-2500k, which does 4.4 all day at 1.2V (and a bit more if you give it a bit more juice).
Jetpil0t - Thursday, August 6, 2015 - link
Haven't even bothered to take my 2500K past 4.0GHz, leaving all the tuning at stock. I remember back in the day I had her up at 4.6 stable with tweaks, but even then it wasn't really a bottleneck. Humming along at 55C max under load @ 4.0GHz. Most of these CPUs are much hotter thanks to the IGP as well, making Sandy even more enjoyable. Also, why would a K-series CPU even ship with integrated graphics in the first place?
Samus - Monday, August 3, 2015 - link
I still rock an Asus X58 with an i7-950 (used to be an i7-920) back at the office. Never overclocked, completely stable, still completely competitive with modern PCs 7 years later. Obviously it uses more power (130W vs 80W) compared to a performance-equivalent Haswell Xeon 1230v3, but the difference is a few dollars a year.
hughlle - Monday, August 3, 2015 - link
And I still rock a stock Q6600 and HD 7750. Plays everything that's on the market just fine :)
Jon Tseng - Tuesday, August 4, 2015 - link
Yeah, I still have a QX6850 and GTX 970 and it runs all new gaming releases ultra @1080p smooth as butter. I have a suspicion I'm going to get a decade of use out of this Kentsfield, which is completely nuts (and bad news for Intel!)
Bad Bimr - Monday, August 3, 2015 - link
Same here. I have an Asus X58 motherboard with 24GB CL7 memory. Recently pulled the 4GHz overclocked i7-920 and replaced it with an $85 Xeon X5675 overclocked to 4.6GHz. My ancient system will hold its own against most newer systems. Not going to upgrade until Skylake-E or later.
xorfish - Monday, August 3, 2015 - link
Go Xeon, those 6-core chips go for $80 and perform as well as a 4790K in multithreaded tasks. Got mine to 4.0GHz for 24/7.
Also 32nm saves you some power...
Shadow7037932 - Monday, August 3, 2015 - link
Still running an i7 920 @ 3.8GHz on custom water cooling. OS, SSDs, HDDs, GPUs have all been upgraded since 2009. Still holding its own in gaming and multithreaded software (video rendering). Also, with the hexacore X5650 being available for around $70-100 used, I can probably breathe a bit more life into this.
milli - Tuesday, August 4, 2015 - link
I invested big time in my X58 platform but it's still going strong. One year ago I upgraded my i7 920 to a Xeon X5650 (which I bought second hand for like $100). Now I have a six-core HT 32nm OC'd beast that runs much cooler than my 920. I can't believe this platform is now 7 years old.
Jetpil0t - Thursday, August 6, 2015 - link
I have been waiting for so long to upgrade my 2500K and R9 290, was looking at a 6600K and Fury X, but for like $2,000 the performance really isn't there. A second-hand 290 for $250 will probably be my next upgrade, and still faster than a brand new 6600K/Fury build. More money for games I guess.
michael2k - Monday, August 3, 2015 - link
Is that entirely true? It seems from the graphs that you can expect 10% to 20% improvement in performance at the same clock compared to Sandy Bridge, and the Broadwell is a good 30% less in terms of power consumption. In other words the 0.25V difference in overclock is exactly the reason Broadwell consumes 30% less power. If you don't care about the power, then you can clock it up and volt it up and see a 10% to 20% improvement in performance. You can argue that the 10% to 20% improvement isn't worth it, of course. The IPC gains only matter if you care to overclock the Broadwell part.
sonny73n - Tuesday, August 4, 2015 - link
"my ivy bridge 3570k does the same clock with 1.075v."1.075v @4.2GHz? Are you sure you didn't mistype? Prime95 stress test?
Jetpil0t - Thursday, August 6, 2015 - link
Still rocking an i5 2500K @ 4.0GHz and an R9 290 @ 1.1GHz and it's rockin' along no problem. I want to buy new shiny things, but there is zero reason to, which is nice for the value but a little odd given the age of this processor. If I had known I was going to be hanging onto this CPU for so long I would have picked up the i7 2600/2700K, but even then, the i5 2500K is a powerhouse, apparently. Just puts into perspective how crazy powerful these CPUs were 5 years ago when they landed.
Jetpil0t - Thursday, August 6, 2015 - link
They should just take each of these CPUs to 4.0GHz locked and bench it out, I bet the 2600K still fires up there with the best of them. The sample used here is still stock, so with an OC it's more or less up there all the way through to a 980 Ti.
Oxford Guy - Thursday, January 21, 2016 - link
Proper Broadwell overclocking appears to require that the eDRAM clock be changed. They didn't do that here, hence the poor result.
K_Space - Monday, August 3, 2015 - link
For desktop users the article only consolidated what was already known: hold tight to your Haswell CPU until Skylake (and even then you probably won't need to upgrade). The Z97 chipset is such a mature platform and the high-clocked parts have dropped in price. The 4790K is an absolute brute. Haswell - just like Sandy and Nehalem - is going to be a generation people stretch out for a very long time.
Nagorak - Tuesday, August 4, 2015 - link
Forget that, just hold tight to your Sandy Bridge. Four years running and you can make up most of the difference in IPC by just cranking up the clock. Once you take the lower frequency into account, Broadwell's ~18% improvement drops to only around 10%.
Pino - Monday, August 3, 2015 - link
Looks like my i7 3770 will live longer.
icebox - Monday, August 3, 2015 - link
I'm really curious how the Skylake i7 will present itself in exactly this kind of comparison article. I, like many, am still on a 2600K and pondering an upgrade to either Haswell-E or Skylake.
blaktron - Monday, August 3, 2015 - link
Me too. I run my 2600k at 4.5GHz on air with no stability issues ever and by the looks of it, 4 years later, I'm STILL looking at a lateral upgrade. Skylake better at least add some OC headroom back along with its 15% or this might be the first gaming PC I have that dies of old age...
lilmoe - Monday, August 3, 2015 - link
I'd wait it out if I were you. The 2600K is absolutely no slouch, and you'll probably be disappointed after spending lots of cash on Haswell-E. The only thing going for these newer chips is peripherals, so it's all about your priorities.
HollyDOL - Monday, August 3, 2015 - link
2500k at 4.3GHz here, still no pressurre for upgrades. Not that I'd complain :-)
HollyDOL - Monday, August 3, 2015 - link
pressure* where is the [edit]?
faizoff - Monday, August 3, 2015 - link
Yup same here, OC'ed my 2500K to 4.6GHz on air and have had the same build for over 4 years now. Still excellent performance and the only regret I have is not getting the 2600K at the time. I've started to delve greatly into developing and server VMs locally so that would've been a great setup.
Kevin G - Monday, August 3, 2015 - link
I'm in a similar boat with a 2600K, but I also have a Sandy Bridge-E 3930K. So far I'm not feeling any pressure to upgrade on the processor side for either chip.
For me, the most attractive thing about Skylake is the chipset, which adds 20 PCIe lanes on top of the 16 from the CPU. This should enable some motherboards to stack on features without compromising dual-GPU scenarios and even enable triple-GPU setups with all 8x links. (There are enough lanes to do quad GPUs, but DMI would be too much of a bottleneck for two cards + IO.)
Haswell-E on the other hand just doesn't interest me at all. The low end 5820K is a cheap 6 core part but has the reduced PCIe lane count. In many regards, SkyLake with Z170 would be the better option than a 5820K setup. Going to the 5930K improves IO but the price premium just isn't worth it. Thankfully Broadwell-E should be arriving at the very end of this year/early next year so hopefully Intel can revive the X99 platform.
Mr Perfect - Monday, August 3, 2015 - link
Yes, thanks for including the 2600 in this. Mine has been doing well, and with DX12 reducing CPU dependence in the future, it's probably going to be relevant for some time. It's nice to see what an upgrade will actually be worth.
Sttm - Monday, August 3, 2015 - link
Heh, me too. 2600K OC'd to 4.5GHz going on over 4 years now. Which is just crazy, because before that I upgraded CPUs every 2-3 years. But now it seems Intel does not care about performance, just power/performance, and AMD is a clusterduck.
name99 - Monday, August 3, 2015 - link
Well think about WHY these results are as they are:
- There is one set of benchmarks (most of the raytracing and sci stuff) that can make use of AVX. They see a nice boost from initial AVX (implemented by routing each instruction through the FPU twice) to AVX on a wider execution unit to the introduction of AVX2.
- There is a second set of benchmarks (primarily WinRAR) that manipulate data which fits in the Crystalwell cache but not in the 8MB L3. Again a nice win there; but that's a specialized situation. In data streaming examples (which better describe most video encode/decode/filtering) that large L4 doesn't really buy you anything.
- There WOULD be a third set of benchmarks (if AnandTech tested for this) that showed a substantial improvement in indirect branch performance going from IB to Haswell. This is most obvious on interpreters and similar such code, though it also helps virtual functions in C++/Swift style code and Objective C method calls. My recollection is that you can see this jump in the GeekBench Lua benchmark. (Interestingly enough, Apple's A8 seems to use this same advanced TAGE-like indirect predictor because it gets Lua IPC scores as good as Intel).
OK, now we get to Skylake. Which of these apply?
- No AVX bump except for Xeons.
- Usually no CrystalWell
So the betting would be that the BIG jumps we saw won't be there. Unless they've added something new that they haven't mentioned yet (eg a substantially more sophisticated prefetcher, or value prediction), we won't even get the small targeted boost that we saw when Haswell's indirect predictor was added. So all we'll get is the usual 1 or 2% improvement from adding 4 or 6 more physical registers and ROB slots, maybe two more issue slots, a few more branch predictor slots, the usual sort of thing.
There ARE ideas still remaining in the academic world for big (30% or so) improvements in single-threaded IPC, but it's difficult for Intel to exploit these given how complex their CPUs are, and how long the pipeline is from starting a chip till when it ships. In the absence of competition, my guess is they continue to play it safe. Apple, I think, is more likely to experiment with these ideas because their base CPU is a whole lot easier to understand and modify, and they have more competition.
(Though I don't expect these changes in the A9. The A7 was adequate to fight off the expected A57; the A8 is adequate to fight off the expected A72; and all the A9 needs to do to maintain a one year plus lead is add the ARMv8.1a ISA and the same sort of small tweaks and a two hundred or so MHz boost that we saw applied to the A8. I don't expect the big microarchitectural changes at Apple until
- they've shipped the ARMv8.1a ISA
- they've shipped their GPU (tightly integrated HSA style with not just VM and shared L3, but with tighter faster coupling between CPU and GPU for fast data movement, and with the OS able to interrupt and to some extent virtualize the GPU)
- they're confident enough in how widespread 64-bit apps are that they don't care about stripping out the 32-bit/Thumb ISA support in the CPU [with what that implies for the pipeline, in particular predication and the barrel shifter] and can create a microarchitecture that is purely optimized for the 64-bit ISA.
Maybe this will be the A10, IF the A9 has ARMv8.1a and an Apple GPU.)
Speedfriend - Tuesday, August 4, 2015 - link
"The A7 was adequate to fight off the expected A57;"In hindsight the A7 was not very good at all, it was the reason that Apple was unable to launch a large screen phone with decent battery life. Look at he improvements made to A8, around 10% better performance, but 50% more battery life.
Speedfriend - Tuesday, August 4, 2015 - link
"they've shipped their GPU" by the way, why do you expect them to ship their own GPU and not use IMG's. The IMG GPU have consistently been the best in the market.nunya112 - Monday, August 3, 2015 - link
By the looks of it, the 4790K seems to be the best CPU. Until Skylake, that is, but even then I doubt there will be much improvement.
nunya112 - Monday, August 3, 2015 - link
Unless you have the older Ivys, then yeah, maybe worth it?
TheinsanegamerN - Monday, August 3, 2015 - link
Nah. The older Ivys can be overclocked to easily meet these chips. The IPC of Broadwell is overshadowed by a 400MHz lower clock rate on a typical OC. The only reason to upgrade is if you NEED something on the new chipset or are running some Nehalem-era chip.
Teknobug - Monday, August 3, 2015 - link
Ivy's are the best overclockers.
TheinsanegamerN - Monday, August 3, 2015 - link
Sandy overclocked better than Ivy.
Hulk - Monday, August 3, 2015 - link
Ian - Very nice job on this one! Thanks.
Meaker10 - Monday, August 3, 2015 - link
A slight correction: on the image of Crystalwell, it is the die on the left (the much larger one) which is the cache, and the small one is the CPU on the right.
extide - Monday, August 3, 2015 - link
Actually, no, the author has it correct. The big die is the CPU/GPU, and the small one is the eDRAM. On the GT3 dies, Intel folds the graphics back across the CPUs, instead of having it as a very long rectangle.
See this: http://www.computershopper.com/var/ezwebin_site/st...
vs This: http://www.overclock.net/content/type/61/id/230657...
hansmuff - Monday, August 3, 2015 - link
Ian, Thank you for this excellent article. I have wished for a 2600k comparison to the more recent CPU iterations and one can piece some of it together here and there but this comprehensive view is outstanding! Still holding out for Skylake, then the 2600k might have to retire.
Ewann - Monday, August 3, 2015 - link
I am really happy to see the i7-2600k comparison here. Like others who've commented, I'm still running that CPU - albeit at stock clock - and it's been totally stable with months on end of uptime (knock on wood). Sure, I've upgraded the GPU once or twice since 2011, but I can't see any reason to build a new system based on these benchmarks. The GPU (GTX 780) is still the limiting factor for gaming, and the 15-20% performance boost overall won't make a significant difference in my day-to-day usage. I now understand why Intel is struggling.
Awesomeness - Monday, August 3, 2015 - link
Same here. I bought a 2600K in the first month it was out. After years of 24/7 operation at 4.9GHz it died. I replaced it with a $100 2500K that's running at 4.6GHz. SB for the win.
nathanddrews - Monday, August 3, 2015 - link
OC benchmarks from each generation? I saw stock benchmarks and 3GHz benchmarks, but not benchmarks for Good or Great OC. I was expecting it based off of the title, but didn't see anything in the article.
Staafk - Monday, August 3, 2015 - link
Missing OC performance comparisons. Or am I blind? The latter is quite possible tbh.
Dribble - Monday, August 3, 2015 - link
Yes, that's what I thought. I want to see what they can all do at a good o/c. I don't run my CPU stock or at 3GHz; I want to see how my o/c Sandy Bridge would do against an o/c Broadwell, to see if it's worth an upgrade yet.
Impulses - Monday, August 3, 2015 - link
You can typically extrapolate like 5% per 200MHz, tho it would've been nice to see indeed.
joex4444 - Monday, August 3, 2015 - link
With the whole point of the article being that IPC goes up, this rule is really not suitable. If the IPC goes up by 20%, then if the previous generation followed the 5% per 200MHz rule the new generation follows either 6% per 200MHz or 5% per 167MHz. Though we'd really expect the instructions per second (IPS) to be the important part, and that's not dependent solely upon the size of the overclock, but the ratio of the overclock to stock. Jumping to 4.2GHz from 3.2GHz is a 31% gain, but going to 4.5GHz from 3.5GHz is a 29% gain despite both being a 1.0GHz overclock.
With the typical IPC gain of 4.4%, we could roughly estimate that a Broadwell at 4.2GHz is like a Haswell at 4.4GHz. With 4.2GHz on Broadwell being a "Good OC" and 4.5GHz on Haswell being a "Good OC" we'd still expect Haswell to be faster once overclocked - but the review should be showing this. However if the particular program is making really good use of the eDRAM, then that 4.2GHz is akin to Haswell at 4.9GHz, which is beyond an excellent OC...
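The equivalence arithmetic in the comment above can be reproduced directly (the 4.4% IPC gain is the commenter's figure, not a measured one):

```python
# Effective throughput scales as clock * IPC, so a newer chip's clock maps to
# the older-generation clock that would match it. The 4.4% IPC delta is the
# commenter's assumption, used here only to check their numbers.

def equivalent_older_clock(new_clock_ghz, ipc_gain):
    """Clock the older chip would need to match the newer chip's throughput."""
    return new_clock_ghz * (1 + ipc_gain)

def oc_gain(base_ghz, oc_ghz):
    """Fractional speedup from the overclock alone (ratio to stock matters)."""
    return oc_ghz / base_ghz - 1

print(round(equivalent_older_clock(4.2, 0.044), 1))  # -> 4.4: Broadwell@4.2 ~ Haswell@4.4
print(oc_gain(3.2, 4.2))  # the ~31% jump from 3.2 to 4.2 GHz
print(oc_gain(3.5, 4.5))  # the ~29% jump from 3.5 to 4.5 GHz, same 1.0 GHz delta
```

Same 1.0GHz overclock, different relative gain, which is exactly the comment's point about ratios mattering more than absolute megahertz.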
SirMaster - Monday, August 3, 2015 - link
Feelin' pretty good about my 4.6GHz 4770K that I bought more than 2 years ago heh.
Shadowmaster625 - Monday, August 3, 2015 - link
I never understood the "pea size" method. Peas come in many different sizes. And it seems to me that the size of a typical pea is rather large. You need something standard, like a BB. They are a standard size, 0.177 caliber, and three of them in a line seems to work best.
Pissedoffyouth - Monday, August 3, 2015 - link
I put a grain-of-rice-sized amount in the middle, get a glasses cloth and rub it on both the heatsink and the heat spreader, then rub it off. There should be a slight tinting left.
Then put another grain-of-rice-sized amount in the middle and screw the heatsink in. Done.
You want to use the bare minimum amount of paste.
zodiacfml - Monday, August 3, 2015 - link
Very polarizing CPU. Any ideas why Intel doesn't have Crystalwell in laptops? I don't want a discrete GPU anymore in mobile due to the risk of dead GPUs/mobos after a few years.
zodiacfml - Monday, August 3, 2015 - link
Oh, nevermind! I found them.
extide - Monday, August 3, 2015 - link
They do, but the CPUs with Crystalwell are quite expensive, so most OEMs shy away from them because they're too expensive for a cheap laptop, and in a higher-end laptop they put a dGPU in.
In my next laptop, I want an Iris Pro (w/ Crystalwell) chip, and NO dGPU! I don't want the power consumption, and Iris Pro is plenty of performance for what I do on a laptop. Unfortunately, it's kinda hard to find high-ish end laptops with that config. :(
Gigaplex - Monday, August 3, 2015 - link
Before Broadwell, the only way to get Crystalwell was in the mobile chips.
varg14 - Monday, August 3, 2015 - link
Until my pretty-much-5-year-old Sandy Bridge 2600K, which runs between 4.5-5.0GHz with PCIe 2.0 @ 8x lane speed, bottlenecks a dual-GPU setup to the point it cannot push at least 60FPS at 3440x1440 on my 34" 21:9 LG 34UM95 monitor, there is no reason to upgrade whatsoever.
Also, with DX12's reduced CPU overhead, any new DX12 games should run great with an old Sandy CPU. DX12 should also greatly improve the performance of my EVGA GTX 770 4GB Classified SLI setup, since it will use split-frame instead of alternate-frame rendering, allowing the VRAM to be sorta stacked: each card only renders half a frame instead of a whole frame, letting the 4GB of VRAM on each card act like one 8GB card.
PrinceGaz - Monday, August 3, 2015 - link
Skylake should be the big one that has been waited for since Sandy Bridge; Ivy Bridge's tick reduced overclockability because of the process node, Haswell improved IPC but added the onboard voltage regulator which made overclocking at that node still worse, and Broadwell keeps the voltage regulator whilst further focussing on lower power.
I'm not saying Skylake will go to the dizzying raw gigahertz of Sandy Bridge, but two generations of tocks, and the removal of the onboard voltage regulator, and if we're lucky, the improved thermal compound used in Haswell Devil's Canyon, could together make for a significantly faster chip; one which may well see the upper ends of the 4.x GHz attainable.
Impulses - Monday, August 3, 2015 - link
Fingers crossed, if it can come near SB levels of OC I'd be complacent... Just enough for the IPC advantage not to be mitigated by raw clock speed. I'm really upgrading my 2500K for the platform anyway (M.2 in particular) but it'd be nice to get a halfway decent CPU upgrade.
Aspiring Techie - Monday, August 3, 2015 - link
The Broadwell equivalent of a Sandy Bridge 5.0GHz overclock in raw instruction throughput (assuming that Skylake doesn't have any improvement in IPC) is 4.3GHz. A 4.6GHz Haswell overclock is equivalent to a 4.4GHz Broadwell. Broadwell wasn't designed for the desktop, so it isn't designed for good overclocking. If Skylake's consumer flagship has the same clocks as the 4790K with, say, 10% IPC over Broadwell, then at 4.4GHz it will have the same instruction throughput as a 5.0GHz Haswell or a 5.7GHz Sandy Bridge chip. The overclocks should improve as the 14nm process matures, so less voltage is needed for better clocks. If Intel does it right, then everyone will be happy (except AMD, since Zen would be screwed over).
Pissedoffyouth - Monday, August 3, 2015 - link
Can the integrated GPU overclock well? The reason I ask is I have a very micro PC currently with an A10-7800 and I'm looking at getting this bad boy to replace it.
deruberhanyok - Monday, August 3, 2015 - link
You'd have to find one for sale, first.
zepi - Monday, August 3, 2015 - link
At least in Europe these are relatively easy to come by in retail.
hmind - Wednesday, August 5, 2015 - link
Hey, the answer will more than likely be no. I see no OC tools for iGPU overclocking in my PC's Z97 BIOS, and my testing on an older Intel HD 4600 iGPU yielded no results, so I do doubt Intel will unlock the iGPU. If you can, just hold out for the next big APU launch or Skylake, or get faster RAM (which will easily let the 7800 catch this chip up if you can get a 2400MHz kit, since Kaveri gains a lot from better RAM), and you will be sitting pretty. I'm also sure you can overclock AMD iGPUs no matter whether or not the CPU is locked, so if you haven't already tried it, do that.
deruberhanyok - Monday, August 3, 2015 - link
Fantastic article! I know more than a few people who will be happy to see there's still no compelling performance reason to upgrade their Sandy Bridge systems. Talk about getting extra bang for the buck!
Any chance you might be able to do a quick follow-on post and throw some Nehalem numbers in there? I'm sure there's a few first-gen Core i7 owners wondering if it's worth the upgrade now, or to try and hold off for Skylake.
On a side note, does anyone know when we'll actually be able to buy these Broadwell processors?
Glock24 - Monday, August 3, 2015 - link
Would be nice to have a similar comparison for mobile CPUs, and even more so a comparison of generational improvements of mobile integrated graphics.Marburg U - Monday, August 3, 2015 - link
Still no reason to retire my [email protected].
Marburg U - Monday, August 3, 2015 - link
err: of course it's 3.8.
extide - Monday, August 3, 2015 - link
What would you consider worth upgrading for then? I mean a moderately overclocked modern cpu could easily be twice as fast as your setup, not to mention the numerous platform upgrades since...
I mean sure, the Q9550 may be fast enough to not be a dog, but there certainly is a lot to be gained...
lukarak - Tuesday, August 4, 2015 - link
But to be gained for what? Back in the day, you had to upgrade, because new content formats were emerging. I distinctly remember buying a PC every two years in the 90's early 00 period. I had a 386, then a 486, then a Pentium 166 (no MMX :), Celeron 333, Athlon 1100, Athlon x64 3200+. The reason I upgraded was the new content. MP3s, DVDs, XviDs, MKVs. All of them weren't able to be played on some of these systems. There was a reason to invest. The i7-920 system I have now for 6.5 years is still going. Sure, there are games, and there is 4k, but 4k is not the jump HD was over DVDs. There just comes a point where you don't notice. Like with smartphones, 150 to 300 ppi, night and day. 300 and 5xx ppi? Not so much. There's just no reason to upgrade if not for numbercrunching or gaming.
bernstein - Monday, August 3, 2015 - link
Personally I'd be very interested in what Intel could do if it pushed its TDP to GPU levels (200-300W) and used that thermal headroom for added GPU EUs. Possibly as a "dual die on one interposer" solution, to keep costs down.
darckhart - Monday, August 3, 2015 - link
It looks like if you didn't upgrade before, may as well keep waiting. I'm on X58 with a X5650 and it looks like Skylake-E is the best bet for consideration.
FlanK3r - Monday, August 3, 2015 - link
Nice review Ian :)
TallestJon96 - Monday, August 3, 2015 - link
Not very compelling. All Skylake needs to do is be indisputably the fastest, even if only by 10%, and it will seem like a huge upgrade.
edlee - Monday, August 3, 2015 - link
Yes, anyone with a Sandy Bridge or Ivy Bridge i5 or i7 already knows not to upgrade the CPU anymore.
I don't replace SB or Ivy, I only add systems for different locations.
The only thing I care about now is GPU improvements, and with DX12 we should see less cpu constraints.
MattVincent - Monday, August 3, 2015 - link
Nice, Ian. Any way we can get you to test the TDP-down configuration? Intel states 37W can be achieved at 2.2GHz. As one that is looking at this as an HTPC processor upgrade, the CPU performance is overkill for me; what I'm interested in is the integrated graphics.
Stuka87 - Monday, August 3, 2015 - link
Well, glad I went with a Devil's Canyon Haswell instead of waiting for Broadwell to launch this past spring.
Welsh_Jester - Monday, August 3, 2015 - link
IPC is getting meaningless. I only upgraded to X99 a few months ago because I knew waiting wouldn't be worth it. And from what I've seen of Skylake and Broadwell, they're not worth the wait. Skylake wasn't even fairly benched in those leaked slides, so it had a 900MHz advantage over the 5820K. Clock both to 4GHz and I bet the Skylake chips are less than 10% faster overall.
Imo the 5820K is the best bet: 2 more cores where it matters, and even DX12 may take advantage of them. Yes, people are obviously going to go Broadwell or Skylake just for the IPC increase even if it is a very small increase, or people just want it literally for e-peen benchmarks where you wouldn't even see the difference in real life use.
Welsh_Jester - Monday, August 3, 2015 - link
Also to add to this: I upgraded from X58 and it was completely worth it, over 40% faster in IPC terms, plus the new tech of the mobo: SATA 3, much faster bootup and a UEFI BIOS. Skylake-E will have all the same and is still a ways off, with only a small IPC increase on top.
Dribble - Monday, August 3, 2015 - link
But so expensive: it's not just the CPU, a decent X99 motherboard is easily twice the price of a non-X99 one, and you really want quad-channel DDR4, which also costs a lot.
BMNify - Monday, August 3, 2015 - link
"there is plenty of talk surrounding the upcoming launch of Skylake, an architectural update to Intel’s processor line on 14nm. I can’t wait to see how that performs in relation to the four generations tested in this article."
Remember that the PR folks will be trying to compare Skylake to the slower Broadwell and not the faster-overall Haswell, so these 4-way/5-way tests will be critical to your purchase choices. Well done, and let's hear about the missing/fused-off AVX updates inside the consumer Skylake.
Midwayman - Monday, August 3, 2015 - link
I keep hoping for a reason to upgrade my i7-2500k, but I'm just not seeing it here, outside of maybe if you do a lot of time-critical work in the areas with larger improvements. For gaming the differences seem really 'meh'. Money is better spent on keeping up with GPUs. Everything else is 'fast enough'.
Impulses - Monday, August 3, 2015 - link
Think you mean i5 there, or 2600K... no such thing as an i7 2500K AFAIK. ;P There are some gaming scenarios where Sandy Bridge starts to lag behind, tho they're fringe cases, e.g. 4K/Eyefinity with CF/SLI and very recent games that are actually a little CPU-bound in certain areas. There's also minimum fps vs average, which is often overlooked. If I upgrade it'll be largely for the platform benefits (namely M.2 with enough lanes to use it), and I'm tempted by Haswell-E, we'll see...
Achaios - Monday, August 3, 2015 - link
The "OC" page is incorrect/misleading.
a. A "good" OC for the 2500K/2600K SB chips is 4.5 GHz. 4.7 GHz 24/7 is for the incredibly small minority that watercools with custom loops, and 4.9 GHz is for the 5% or so of guys who won the silicon lottery and custom-watercool as well.
You can't believe how cheap most people are. Most people just rock the Cooler Master Hyper 212 Evo for $25. You can't do 4.7 GHz with a 212. Period.
-Haswell 1st gen: Avg OC 4.3 GHz, Good OC: 4.6 GHz
-Haswell 2nd Gen: Avg OC: 4.6 GHz, Good OC: 4.8 GHz - 4.9 GHz 24/7 is for the ppl who won the silicon lottery.
Oxford Guy - Monday, August 3, 2015 - link
Enthusiast websites do several things wrong, generally:
1) open test beds with powerful coolers running at full tilt -- no regard for noise or thermal limitations stemming from reasonably quiet case airflow
2) massive amounts of voltage for overclocks instead of reasonable safety margin 24/7 settings
3) lack of an additional sample, retail-purchased, to avoid flukes and cherry-picking
4) use of sub-par TIM rather than something good like LiquidPro
5) failure to verify their overclocks' stability at higher ambient temps
Oxford Guy - Monday, August 3, 2015 - link
Also, failure to include a budget option for context, like the lack of an overclocked AMD FX here. Some consider buying a $100 8320E and overclocking to 4.5 on something like an EVO, or 4.7 on something like a Thermalright TSP 140 ($55 on Amazon), if they can get the $40-off Microcenter combo deal. It's helpful to see how many FPS a person is going to lose by going that route, especially with recent games like Witcher 3 that are apparently trending toward loading multiple cores more successfully.
foobaz - Monday, August 3, 2015 - link
Anyone know when I'll be able to purchase an i7-5775C? They're still not available from Newegg, Amazon, or Frys.
FlashYoshi - Monday, August 3, 2015 - link
Small typo: the 750 is an i5, not an i7.
Oxford Guy - Monday, August 3, 2015 - link
Should have included a 4.5 GHz AMD FX in the charts.
Ian Cutress - Tuesday, August 4, 2015 - link
We've got some results in Bench for you.
http://anandtech.com/bench/product/1403?vs=1501
Oxford Guy - Tuesday, August 4, 2015 - link
No gaming results.
TheJian - Monday, August 3, 2015 - link
I was hoping any more coverage of Broadwell would include ripping-quality comparisons to Haswell at least. Is it still fast but crappy, or have they fixed quality so I don't have to keep my GPU off? :( Throw some Handbrake tests in please. QuickSync fixed yet?
http://www.anandtech.com/show/7007/intels-haswell-...
Any changes since this? Or the review by anand that covered it (linked in there)? Or do we all just hope for a fix with skylake? I saw a recent software update for haswell, but not sure if that does anything about quality here.
Enterprise24 - Tuesday, August 4, 2015 - link
Grid Autosport with a 290X shows a very strange result. I assume this game supports the AVX2 instruction set? Since Sandy and Ivy have roughly the same performance, but the jump to Haswell gains a big improvement.
StrangerGuy - Tuesday, August 4, 2015 - link
Yawn... At this point I'm more interested in much better utilisation of hardware through software like DX12 than in sinking tens of billions into CPU die shrinks with next to zero real-world benefit. The paradox here is of course how the former will make the latter even more irrelevant than it already is.
lukarak - Tuesday, August 4, 2015 - link
i7-920 waving... still no reason to upgrade. It was released in 2008, I bought it in early 2009, and it has been quite sufficient for almost 6.5 years now.
HeJoSpartans - Tuesday, August 4, 2015 - link
Hello Ian,
Unfortunately, your IPC increase charts on page 3 of the article contain several mistakes. The benchmark charts show that the Broadwell chip is not the fastest in either the 3DPM:ST or the CBench R15:MT benchmarks, while your IPC increase charts tell a completely different story. Also, some of the other calculated IPC increase values stated in the charts are completely irreproducible on my side. Second, you made a systematic mistake in calculating the IPC increase for the "lower is better" benchmarks. This is especially crucial in the Dolphin benchmark, where the total IPC increase from SB to Broadwell would be 58.0% rather than 36.7%. To make this clear: processor A, which takes half the time to perform a certain task compared to processor B, does not offer a 50% increase in IPC over B, but a total of 100%.
You should re-check your numbers. Hope this helps.
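The inversion HeJoSpartans describes can be sketched in a few lines of Python (a hypothetical helper for illustration, not code from the article or the benchmarks): for a timed benchmark, the times must be inverted into throughput before computing a percentage gain.

```python
def improvement_higher_is_better(old_score, new_score):
    """Percent improvement when a bigger score is better."""
    return (new_score / old_score - 1.0) * 100.0

def improvement_lower_is_better(old_time, new_time):
    """Percent improvement when a smaller time is better.
    Invert the times first: halving the time doubles the work per second."""
    return (old_time / new_time - 1.0) * 100.0

# A chip that finishes in half the time is 100% faster, not 50%:
print(improvement_lower_is_better(10.0, 5.0))  # 100.0

# The Dolphin case from the comment: a 36.7% reduction in run time
# (100s -> 63.3s) corresponds to a 58.0% gain, not 36.7%:
print(round(improvement_lower_is_better(100.0, 63.3), 1))  # 58.0
```

The naive calculation `(1 - new_time/old_time) * 100` gives the time saved, which systematically understates the actual speedup for any timed ("lower is better") benchmark.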
Ian Cutress - Tuesday, August 4, 2015 - link
Hi HeJoSpartans,
Somehow the incorrect benchmark result graphs were placed in those spots - they were from the non-3GHz testing and also had a different z-height. I have updated them; the IPC numbers for those benchmarks in the main graphs are still accurate. For those benchmarks at stock voltage, the balance between frequency and IPC as to which is more important plays out on a larger scale for sure.
Also, with the timed benchmarks: arguably I labelled the axis in terms of percentage improvement rather than IPC improvement, despite the title of the benchmark. But you are correct - I mistakenly used % improvement and the term IPC interchangeably. I have updated the results with a disclaimer.
Any other issues, let me know. I'm also contactable by email if urgent!
-Ian
Navvie - Tuesday, August 4, 2015 - link
No compelling reason to upgrade from my 4770K. Intel needs to get back to working on CPU development rather than GPU.
Speedfriend - Tuesday, August 4, 2015 - link
I have a question. I have seen an article on Intel that speculates it intends to launch a new chip that has a larger eDRAM (1GB) and then has 3D XPoint (15GB) on it. The large eDRAM will compensate for the lower write speed of the 3D XPoint, but the overall chip will offer massive advantages in power consumption and in having nonvolatile memory for sleep states. This would be extremely competitive for mobile computing and servers. Given the quote above - "Cycling back to our WinRAR test, things look a little different. Ivy Bridge to Haswell gives only a 3.1% difference, but the eDRAM in Broadwell slaps on another 16.6% performance increase, dropping the benchmark from 76.65 seconds to 63.91 seconds. When eDRAM counts, it counts a lot." - would that make sense?
The article says that this change in chip design is the reason we are seeing another tock for kaby lake.
Any view from CPU experts on here?
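As a quick sanity check on the WinRAR figures quoted above (just arithmetic on the numbers in the quote, not new data): the 16.6% is a reduction in wall time, so expressed as throughput the eDRAM gain is closer to 20%.

```python
old_t, new_t = 76.65, 63.91  # WinRAR times in seconds, as quoted in the review

time_reduction = (old_t - new_t) / old_t * 100   # how much less time is taken
throughput_gain = (old_t / new_t - 1) * 100      # how much faster the work gets done

print(round(time_reduction, 1))   # 16.6  (matches the review's figure)
print(round(throughput_gain, 1))  # 19.9
```

Either convention is defensible; the review consistently reports the time reduction.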
NeilPeartRush - Tuesday, August 4, 2015 - link
1999: Intel Celeron 300A @ 450MHz
2003: AMD Athlon XP-M 2200+ @ 2.40GHz
2007: Intel Q6600 @ 3.60GHz
2011: Intel i5-2500K @ 4.8GHz
2015: Skylake?
doggface - Tuesday, August 4, 2015 - link
Can Intel please bring out a processor that destroys Sandy Bridge, so that I don't have to hear all the smug comments every time they release a processor? I am so bored by them. Can't we talk about something interesting, like why Crystal Well is such a big focus for them when apparently all anyone wants in the enthusiast bracket is raw power?
jrs77 - Tuesday, August 4, 2015 - link
And now look at all the PC users who don't play games, but use their PCs for doing actual work. No tests of professional graphics software like Photoshop, Premiere, etc. No tests of professional audio software like Reason, Cubase, etc. No tests of professional 3D software that would actually benefit from the Iris Pro and its GPGPU capabilities - and no, Cinebench is only for CPU-based rendering, so it doesn't count for much for stuff like flow simulations done in SolidWorks etc.
The i7-5775C is currently the best CPU out there for small workstations, due to the very powerful Iris Pro and the low TDP. You can build a very powerful rig in a box as small as the Mac Mini while still keeping the advantage of the 4C/8T CPU and an iGPU as powerful as an R7-250/GT740.
And this rig would only draw some 150W under full load, which is totally doable with a picoPSU.
I'm finally getting my i7-5775C delivered sometime next week, after waiting for two months, and I can at last build the rig I've always wanted, combined with the following: Antec ISK110, picoPSU 160XT, Noctua NH-L9i, 2x SSD (256/512 GB) and 16GB DDR3.
Strapped to the back of my screen, almost inaudible and with very low power draw.
This CPU is meant for SFF workstations, but for this niche it's the best there is.
MrSpadge - Tuesday, August 4, 2015 - link
Thanks for the detailed review, Ian! As far as I know it's the most detailed one out yet, and probably the last, with Skylake hopefully just around the corner. Anyway, I would really like to see how low the power consumption of this chip can go. It's obviously optimized for mobile and low clocks, so pushing it to high clocks and high voltages (even the stock voltage is a lot for 14nm!) is not showing it in its best light. When you tested the voltage scaling I would really have liked to see the minimum voltage needed to reach the lower speeds (you just left it at stock). How low can the power consumption of this chip go at ~1.0 V? Which clock speed hit does it take to get there? (BTW: I run my 3770K at 4.0 GHz and a bit over 1.0 V, which is great for 24/7 load.)
zlandar - Tuesday, August 4, 2015 - link
No reason to replace my 4-year-old i7-2600K for gaming. Kinda nutty, since I just installed my 3rd video card in the same system.
joanwa - Tuesday, August 4, 2015 - link
Concerning the Linux performance, could one kindly ask for another piece of software to be benchmarked? NAMD is nice, but to my knowledge the bigger share of the scientific molecular dynamics community uses GROMACS (www.gromacs.org), which is also much faster and better optimized for current hardware (AVX2 support, for instance, and multi-GPU support) and has a bigger community around it.
Morg72 - Tuesday, August 4, 2015 - link
I'm still rockin' an i7-950, ASUS P6X58D-E, 12GB DDR3-1600 and 2x 6870s in Crossfire. The only part showing its age is the 6870s (probably going to upgrade next GPU gen). The rest runs everything I throw at it perfectly, and that's at stock speed. I can always get a better CPU cooler and OC it for more longevity. Intel really needs to get things together; hope Zen gives them a push.
nagi603 - Wednesday, August 5, 2015 - link
4770K owner here... if this is how Intel upgrades, I won't be changing this baby out for a decade.
Lazn_W - Wednesday, August 5, 2015 - link
My question is: does the L4 cache help with multi-threaded use? Say, a home Hyper-V server lab environment? It might make the 5775C a better buy than one of the hex-core socket 2011 CPUs.
Jetpil0t - Thursday, August 6, 2015 - link
They should put out a multi-threaded K-series 4-core with no IGP at a basement price for the more value-minded people, gamers specifically, especially considering the main reasons for upgrading are the tertiary features on the motherboard. I doubt anyone with a Sandy is looking at this with any interest, especially considering my Sandy board has PCIe 3.0 and SATA 6Gbps and rocks 4-4.5 GHz easily.
SanX - Friday, August 7, 2015 - link
Big fart of Intel. Remember AMD Bulldozer?
Heads will be rolling.
crashtech - Friday, August 14, 2015 - link
I'd love to see dGPU test regimes that weren't obviously GPU limited, and that include minimums, or some form of data that indicates how well stutters are being avoided. But an interesting read, nonetheless.