Sandy Bridge overclockers forget about the better performance and added hardware features of newer-generation CPU/motherboard combinations.
As for gaming, I was FINALLY able to get very good performance in Microsoft Flight Simulator [which ate high-end systems alive] with an Ivy Bridge i5-3570K @ 4.2GHz, a Gigabyte Z77X-D3H, and an AMD R9 290. NOTE: My CPU can do 4.7GHz and struggles with 4.8GHz on air; however, I do not like the temps, so I'd need water cooling for 4.7GHz and higher.
Hi, I'm upgrading my system. I bought an ASUS Sabertooth X79 mobo. Should I go for the 4820K or 4930K for FSX and P3D? Or should I go for the 5820K? Would there be any difference in performance? Thx, Marc
I don't mean to be a prick, but you're not going to see anything in gaming performance even if Intel releases a 32-core $200 3.0GHz processor. In the end, it's about how developers use the processors, and not many game developers want to ostracize the entry-level market by making CPU-heavy games. Now, when Star Citizen launches, there'll be a bit of a rush for 'better' but not 'best' CPUs, and that appears to be virtually the only example worth bringing up for the foreseeable next 3 years of gaming (at least so far in the public eye). All you can do is boost single-core performance up to a certain point before there are just no more benefits; upgrading your CPU for gaming is like upgrading your plumbing for pissing. Yeah, it still goes through and could see marginal benefits, but you know damn well pissin' ain't shit ;D
Not prick-like at all, I appreciate the comment. I'm an old dude who hasn't "gamed" for years and I'm just now getting back into it, trying to figure out what will work and for what price. Your insight is very helpful! Sounds like a lot of guys are using OC'ed i5 cores, good to know.
Crank up World of Tanks' video settings to maximum and watch your FPS sink like a rock. A high-end system is needed to run this title at max settings, and it only recently began to use 2 CPU cores and 2 GPUs. No one using a mid-range Intel CPU and an upper-midrange single GPU will see 60fps with the video cranked to maximum.
Gaming was the first area I wanted to look at, so seeing all the comments and review messages saying this is a skip for gaming is great, actually. It means prices will probably drop soon for the gaming parts, hopefully.
Not really, Xeons can't be overclocked; even the new 6C 5820K will give you a lot more bang for your buck. Xeons are great for professional or enterprise solutions (and are very expensive because of that). But if you need 6-8 cores and no ECC RAM, I'd take a Haswell-E i7 over a Haswell-EP Xeon any day.
One could put together a HEDT system with OC headroom, tons of RAM, and a fancy GPU for the price of an entry-level Xeon processor, let alone a full-on server. Xeons aren't for people, they're for companies. HEDT is for prosumers, and I think I'm right in saying a lot of people reading AnandTech fall into that category.
Unless you require ECC memory and/or the ability to install two processors on one motherboard, the Xeon processors are a waste of money. You can also do a modest overclock on the i7 Extreme Edition and get some really good performance compared to an 8-core Xeon that costs probably twice as much and cannot be overclocked.
And if you're getting ECC memory that you don't really need, it costs a lot more too. The money you save on the i7 Extreme over the Xeon can also be put towards extra-large and/or fast solid state storage. The people who do need ECC RAM or dual processors tend to know it, and they are not even looking at these i7s anyway. There are a lot of things that power users do that do not need or benefit from ECC RAM. That's who these processors are marketed to.
At the end of the day, the Xeons are just bug-fixed, lower-power i7 chips anyway. But one way Xeons come into their own is on the second-hand market. I'll be picking up ex-corporate dual-CPU Xeon workstations for peanuts compared to the domestic versions. I have a 7-year-old 8-core Xeon workstation that still runs wPrime in 7 seconds. Not bad for a $100 box.
All correct, though it concerns me that the max RAM of X99 may only be 64GB much of the time. After adding two cores and moving up to working with 4K material, that's not going to be enough.
Performance-wise, good for a new build, but sadly probably not good enough as an upgrade over a 3930K @ 4.7+ or anything that follows. The better storage options might be an incentive for some to upgrade though, depending on their RAID setups & suchlike.
They are applicable to different crowds, and computing doesn't exclude gaming, whereas Xeons to a degree do (though I'm sure for most of them you'd be fine). I for one like those PCIe lanes, and the per-core performance of the desktop processors is typically better. Plus form factor and all that. These fill a glorious niche that I am indeed excited about. They're pretty damn cheap for their quality too. I guess the RAM cost circumvents that benefit, though.
How exactly will DX12 help? DX12 is good for helping wimpy hardware move from horrible settings to acceptable settings, but for the high end it will not help much at all. Beyond that, it helps the GPU be more efficient and will have little effect on the CPU. Even if it did help the CPU at all, take a look at those charts; pretty much every mid-to-high-end CPU on the market can already saturate a GPU. If the GPU is already the bottleneck, then improving the CPU does not help at all.
I'm sick of hearing this nonsense. Even with reasonably high-end hardware, Mantle and DX12 can help minimum framerates and framerate consistency considerably. I have a 2500K and a 280X, and when I use Mantle I get a big boost in minimum framerate.
Given the yet-to-be-released DirectX 12 and the overall tendency towards less CPU-intensive graphics APIs (Mantle), I guess the days in which we needed extra-powerful CPUs to run graphics-intensive games are coming to an end.
I thought the same thing, but it probably depends on the game. I got the MSI X99S XPower AC board with the 5930K, after running a 2500K at 4.5GHz for years. I play a lot of FFXIV, which is DX9 and therefore CPU-strapped, and I noticed a marked improvement. It's a multithreaded game, so that helps, but on my trusty Sandy Bridge I was always at 100% across all cores while playing; now it's rarely above 15-20%. Areas where Ethernet traffic picks up, like high-population areas, show a much bigger improvement, as I am no longer running out of CPU cycles. Lastly, turn-based games like GalCiv III and Civ5 on absurdly large maps/AI counts run much faster. Loading an old Civ5 game where turns took 3-4 minutes, they now take a few seconds.
There is also the fact that when Broadwell-E is out in 2016 it will still use the LGA 2011-3 socket and X99 chipset, so I figured it was a good time to upgrade to 'future proof' my box for a while.
Right, for rendering, video encoding, server applications and only if there is no GPU-accelerated version for the task at hand. You have to admit that embarrassingly parallel workloads are both rare and quite often better off handed to the GPU.
Also, you're neglecting overclocking. If you take that into account the lowest-end Haswell-E only has a 20%-30% advantage. Also, I'm not sure about you but I normally use Xeons for my servers.
Haswell-E has a point, but it's extremely niche and dare I say extremely overpriced? 8-core at $600 would be a little more palatable to me, especially with these low clocks and uninspiring single thread performance.
The 5960X is half the price of the equivalent Xeon. Sure, if your budget is unlimited, 1k or 2k per CPU doesn't matter, but how often is that realistic?
For content creation, CPU performance is still very much relevant. GPU acceleration just isn't up to scratch in many areas. Too little RAM, not flexible enough. When you're waiting days or weeks for renderings, every bit counts.
Improvements are relative. For gaming... not so much. Most games still only use 4 cores (or fewer!), and rely more on clock rate and the GPU than on specific CPU technologies and advantages, so a newer 8-core really does not bring much more to the table in most games compared to an older quad core... and those Sandy Bridge parts could OC to the moon; even my locked part hits 4.2GHz without throwing a fuss. Even for things like HD video editing, basic 3D content creation, etc., you are looking at minor improvements that are never going to be noticed by the end user. Move into 4K editing and larger 3D work... then you see substantial improvements moving to these new chips... but then again, you should probably be on a dual-Xeon setup for those kinds of high-end workloads. These chips are for gamers with too much money (a class I hope to join some day!), or professionals trying to pinch a few pennies... they simply are not practical in their benefits for either camp.
Same here. I think the cost of operation is of concern in these days of escalating energy rates. I run the 2500K in a little Antec MITX case with something like a 150 or 160 watt inbuilt power supply. It idles in the low 20s, if I recall, meaning I can leave it on all day without California needing to build more nuclear power plants. I can only cringe at talk about 1500 watt power supplies.
Performance per watt is what's important. If the CPU is twice as fast and uses 60% more power, you still come out ahead. The idle draw is actually pretty good for Haswell-E. It's only when you start overclocking that it gets really crazy.
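The arithmetic behind that comment is simple enough to spell out. A minimal sketch, using the illustrative 2x-speed/60%-more-power figures from the comment (not measured values):

```python
# Performance-per-watt arithmetic for the figures in the comment
# above (2x the speed at 60% more power -- illustrative numbers).
speedup = 2.0
power_ratio = 1.6

perf_per_watt = speedup / power_ratio     # how much more work per watt
energy_per_task = power_ratio / speedup   # relative energy per finished job

print(round(perf_per_watt, 3))    # 1.25 -> 25% more work per watt
print(round(energy_per_task, 3))  # 0.8  -> 20% less energy per job
```

In other words, even though the faster chip draws more power, each completed job costs less total energy, which is the metric that actually shows up on the bill.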
DDR4's main selling point is reduced power draw, so that helps as well.
If you have a 1500 watt power supply, it doesn't mean you're actually using 1500 watts. It will only put out what the system demands at whatever workload you're running at the time. If you replaced your system with one of these big new ones, your monthly bill might go up 5 to 8 dollars per month if you are a pretty heavy user who really hammers the system frequently and hard. The only exception I can think of would be if you were mining Bitcoin 24/7 or something like that, and even then it would be the graphics cards hitting you hard on the electric bill. It may be a little higher in California, since you guys get overcharged for pretty much everything.
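That "5 to 8 dollars" figure is easy to sanity-check. A rough sketch where every input is an assumption chosen for illustration (300 W of extra draw under load, 6 hours of heavy use a day, $0.13/kWh):

```python
# Rough monthly-cost sketch for the "5 to 8 dollars" estimate above.
# All inputs are assumed for illustration, not measured.
extra_watts = 300      # assumed extra draw of the bigger system under load
hours_per_day = 6      # assumed heavy-use hours per day
price_per_kwh = 0.13   # assumed electricity rate in $/kWh

kwh_per_month = extra_watts / 1000 * hours_per_day * 30   # ~54 kWh
monthly_cost = kwh_per_month * price_per_kwh
print(f"~${monthly_cost:.2f}/month")   # ~$7.02/month
```

With those assumptions the answer lands squarely in the quoted range; double the rate (hello, California Tier 3) and it roughly doubles.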
Just out of curiosity, what do you pay for electricity? Because I pay less here than I did when I lived in IA. We're at $0.10/kWh to $0.16/kWh (Tier 3, based on 1000kWh+ usage). I heard these tired blanket statements before we moved, and was pleased to find out it's mostly BS.
Agreed, my little i7 2600 still keeps up just fine, and I am not really tempted to upgrade my system yet... maybe a new GPU, but the system itself is still just fine.
Let's see some more focus on better single-thread performance, refine DDR4 support a bit more, give PCIe HDDs a chance to catch on, then I will look into upgrading. Still, this is the first real step forward on the CPU side that we have seen in a good long time, and I am really excited to finally see some Intel consumer 8 core parts hit the market.
The overclocking results are definitely a positive relative to the last generation, but really the pull-the-trigger point for me would have been the 5930K coming with 8 cores.
It looks like I'll be waiting another generation as well. I'm currently running an OCed 3930K, and given the cost of this platform, the performance increase for the cost just doesn't justify the upgrade.
I agree; I'd been hoping for a midrange 8-core of some kind, but Intel's once again shoved it up the performance/price scale purely because it can. Shame.
And IMO the PCIe provision chop with the 5820K is in the wrong direction; by that I mean the 5820K should have 40 lanes, the 5930K 60, and the 5960X 80, something like that. Supporting 4-way native x16 with enough left over for good storage at the top end of the CPU range would make its price much more tolerable, but as it is the price difference is really just the +2 cores, at a time when the mid-range chip ought to be an 8-core anyway (remember the 3930K was an 8-core die with 2 cores disabled, i.e. Intel could have released a consumer 8-core a long time ago).
Ian.
PS. twtech, I've benched oc'd 3930Ks quite a lot. What do you use your system for?
Sheesh... Looking at these performance reviews, I'm questioning whether or not I'll even go for an -E series when I invest in a full upgrade from my 2500k in 2-3 years. In many scenarios the 4790k is at the top or near the top of the rankings due to the higher clock speeds stock/overclocked, yet the platform is much cheaper. Perhaps the 4790k equivalent 2-3 generations from now will be 6-cores, or the IPC will start going up if Intel focuses on it (I hope, but unlikely...). Otherwise these systems are just too expensive for little gain except in CPU bound workloads.
Same, I'll stick with my Core i7-3930K, which happily hits 5GHz.
Years ago I never would have thought a CPU released 3 years prior to the latest and greatest would still be able to compete/beat the top chips in most scenarios.
Hopefully Haswell-E's successor gives me the upgrade itch and maybe DDR4 drops in price by then, by then my platform would be 4+ years old. Intel sure does make it hard to justify plonking down a few grand though. :(
I was going to hold out for the first DDR4 chipset to replace this thing, but... maybe I'll wait for DDR5. Even 5 years later, my first-gen i7 is faster than like 90% of the desktop CPUs out there TODAY. Intel really outdid themselves with Nehalem.
Anyone still on 1366, I'd recommend searching around on eBay for Xeon X5650s that are getting dumped for less than $100. If your motherboard is alright with a 191 BCLK, guess what - a 4.2GHz Gulftown with the full 6 (12) cores. Yeay =)
I upgraded from an i7-930 to a Xeon X5650. It's at 4.6GHz with SLI Titans in Surround (4800 x 2560). Based on this, X99 seems to be a worthwhile upgrade, albeit an expensive one. Anyone else in my boat?
I might give this a try once I stuff that Studio XPS 435 MT motherboard sporting an i7 940 into a Prodigy M. I actually had a look at compatible new processors, but they were too expensive. I am not sure if I am going to trust a used offer though.
This really only makes sense if you don't have "real" work to do on your computer, or you only have work that utilizes 1-2 cores. Look at how these benchmarks stack up against the 5960X: http://www.anandtech.com/bench/product/47?vs=1317. For single-threaded stuff it's 20-30% faster, and for multithreaded stuff it's around 3x faster.
That's HUGE if you're actually putting your computer through a tough workload. Instead of something finishing in a month, it finishes in 10 days? You don't think that's worth it?
And with the i7-920, are you on a motherboard with SATA III, or do you have a PCIe expansion card for SATA III? For those who are I/O limited, SATA III with a couple of striped SSDs is a tremendous improvement over what was around 5 years ago.
Same here, I run my 2500K at 4.2 on air and I just haven't seen any reason to upgrade yet, and I've been running it for nearly 3 years now... We need something new and groundbreaking...
These are different classes of hardware, meant for considerably different purposes. It's like reading a review of an Escalade and saying, "I'll stick to my Focus then" :)
I meant, comparison of 2500K and Haswell-E is like comparing Escalade and Focus.
Crazy forum engine; AT really should look around, notice that better forums have existed on the web for 10+ years, and ask some web developer to make a normal forum (not what looks like a student's alpha-version course project). It's a bit of a shame for such a good website. Sorry for the abruptness, but this is indeed the case.
Yup - thankfully new games are almost completely limited by the GPU at high resolutions/quality (1440p/High+). I think my i7 [email protected] and R9 290X can last another year at least, and I can afford to put it under water instead of upgrading.
For normal desktop use, an SSD and 8GB+ of RAM will burn through everything without a problem.
Correction? I think you mean "also featuring 6 cores"
"The entry level model is a slightly slower i7-5820K, also featuring eight cores and DDR4-2133 support. The main difference here is that it only has 28 PCIe 3.0 lanes. When I first read this, I was relatively shocked, but if you consider it from a point of segmentation in the product stack, it makes sense."
Is there anything but 'edge case' justification for upgrading any more? PCs used to be exciting because things were always changing, this is just getting boring.
VR. The frame rendering time requirements are pretty stringent. This is more on the GPU than the CPU for graphics, but you want to try to keep physics ticks at a good rate to prevent objects jumping around the world.
It used to be that if you were 2 generations behind your system was so slow and irrelevant that you just couldn't run modern software at anything approaching an acceptable level. Now we have a situation where ancient systems on X58 (circa 2008) are still close enough in performance to the extreme high end in 2014 to not only be in this review but also fit somewhere into the top half of the product stack of modern Haswell based hardware.
If you compare a top-of-the-line Nehalem chip to its equivalent from 6 years prior (a Northwood-core P4 from 2002), it would make a mockery of 8 of them at the same time. This article is showing a 31% jump from Nehalem to Haswell-E -- that kind of performance increase (as a percentage) would have amounted to 2 or 3 months' worth of clock speed bumps at any other time in the history of PCs.
Somewhat true, but consider that you get 30% IPC increase, 25-30% frequency increase and a 50% core-count increase, and it adds up to around a 100% increase in performance.
Granted, 100-110% over 6 years is hardly impressive compared to earlier, but there isn't that much low-hanging fruit. Also, the mainstream which drives revenue is, as you point out, largely content. They're looking at adding devices like tablets and consoles, instead of upgrading their computers. That probably plays into the amount of R&D Intel decides to spend on the HEDT platform.
Exactly, exactly, exactly! I am still on X58 with an i7-990X. I don't play many games any more, but even to play the newest games... I do not need to upgrade my CPU and have not needed to since 2010. Even an i7-975 or an i7-920 from 2008 would still be more than fast enough.

Then music. I use my system mainly as a digital audio workstation. Most of my plugins and music applications support multithreading. I cannot realistically add so much stuff to a project that it maxes out the CPU. And rendering time? Who cares, most of my renders are done before I am done playing chess on the toilet anyway.

Then overclocking. The i7-920 and anything on X58 was great. After that... the fun and the excitement kind of went away and never came back. What's the SuperPi mod record these days? I have not heard about any significant record breaking for a long time. Back in 2008, 2009, 2010 I was hearing news about famous new overclocking records. After that, it stopped.

Let's face it: we hit a clock limit, and a breakthrough in single-threaded speed is just not going to happen until some genius designs a totally different system, probably not using electricity but light. But that's like 20 years away, because you don't just start over. All we have been doing is improving old technology, not inventing something completely new. We are hitting the limits of nature... so all the geeks and nerds will just have to wait at least another 10 years before we get to the exciting stuff again.
It won't be the CPU performance difference that makes you upgrade, it will be the new features. Skylake will have PCIe 4.0 and USB 3.1, and the chipsets after that will add more new things: faster storage standards and who knows what else.
I was already in this position. The speed of the i7-980X was still really good; got mine OC'd to 4261MHz. But guess what, on X58 you get no PCIe 3.0, no SATA 3, no USB 3.0. These features have become very standard. You also get no SATA Express or PCIe Ultra M.2, which will soon be commonplace, as well as no quad-channel memory and no DDR4. All the missing features made me upgrade, not the speed. Similar situations in the future will cause people to upgrade every 4-6 years.
You can still plug a PCIe USB 3.0 expansion card in there and get at least 2 USB 3.0 ports on the back of the case, to somewhat mitigate the age of the platform. But with PCIe 2.0 and SATA 2, one is stuck, indeed.
Nehalem was great, but the last big bump was really Sandy Bridge. After that, not so much. This is actually a big concern for the processor makers. The technology and the silicon itself are reaching their limits as far as making significant gains generation to generation. They were getting big performance gains from die shrinks alone, but those days are over. And how much more can they shrink them? It's getting harder all the time; they may get to 10nm, maybe 7nm.
I'm guessing you don't do much animation? :) Even though many of my renders only run around 10-20 minutes max, when you do a 30s animation you can multiply that render time by 900... Even at a minute a frame (which is fairly fast), that's still 15 hours.
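The multiplier in that comment follows from the frame count. A quick sketch, assuming a 30-second animation at 30 fps and the one-minute-per-frame figure from the comment:

```python
# The render-time arithmetic from the comment above, spelled out.
# Assumes a 30-second animation at 30 fps, one minute per frame.
seconds = 30
fps = 30
minutes_per_frame = 1

frames = seconds * fps                          # 900 frames
total_hours = frames * minutes_per_frame / 60
print(frames, "frames,", total_hours, "hours")  # 900 frames, 15.0 hours
```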
But I think the main draw for people on X58 like us, is the newer platform. X58 is really low on modern features, it came out at an awkward time. No native USB 3.0, no 6Gbps SATA, no Thunderbolt (which might be relevant in the future), no PCIe 3.0, and no support for the newer standards coming out with Z97. Also consider DDR4 is going to be the standard going forward, so investing a lot of money in 32-64GB of DDR3, even at the lower prices, just seems like throwing good money after bad.
You are right! When I'm talking about a render, I mean exporting a music project to separate wave files (one per track). You can either record while playing it back or you can render it. Most VST plugins have a render mode as well, so the end result is sometimes quite different. Not necessarily better, but different. And recording is done in real time, so a render is often a lot faster.
The 31% is at the same frequency. Being able to do double the work with the same amount of energy, or the same work with half the energy, is a big deal. Imagine if cars could do that... 2x the horsepower on the same amount of gas, or the same mileage on half the gas.
I've been happily on my X58 i7-980X for 4 years, and honestly if my chipset wasn't missing so many modern features I wouldn't even upgrade. But the lack of PCIe 3.0, SATA 3, and USB 3.0 is just becoming a pain in the ass. Your circa 2008 is wrong too; I know because I got the i7-980X right when it came out, and that was in 2010, not 2008, so that would be 100-110% over 4 years.
I really want an Ultra M.2 PCIe 3.0 x4 drive as my main OS and application drive. Can't wait to pop in a 1TB Samsung SM951 Ultra M.2 drive. Also can't wait for the 16GB DDR4 sticks to start showing up: 8x16GB = 128GB of RAM. Can you say a 112GB RAM drive plus 16GB for the system? It's gotta be awesome working on video editing with your entire video in a super fast RAM drive. The memory and storage are what's going to boost performance for me coming from my X58; the CPU will too, but not like an Ultra M.2 SSD with ~1400MB/sec read and ~1200MB/sec write.
I recently went from an i7-950 to a i7 4770. It's made for a solid bump in performance. I notice it particularly in navigating game menus that used to be a little sluggish before.
Sold! As I'm still on Gulftown, this is just what the doctor ordered. The i7-970, IMHO, has held its own for four years, at least for my needs. This will be one helluva shot in the arm.
With you on this. I actually sold my Gulftown with 48GB last year for the 6x SATA on Z87. The X58 + Gulftown was one heck of a system. If I didn't already have the 4770K, this 5960X or 5820K would be extremely difficult to resist.
I'm actually tempted to mount the mobo+CPU+Intel heatsink on the wall instead of selling it; it was a piece of computing history.
You obviously have never played with an X58 system with its triple-channel DDR3 setup.
You *can* actually run 48GB of RAM, but it's not guaranteed to work; some people managed to win the luck of the draw early on, while others who installed 48GB of memory would only have 32GB detected.
Conversely, X58 processors actually have a 36-bit address bus, so theoretically they could support even 64GB of RAM.
Basically, Intel guarantees up to 24GB to function; that doesn't mean it cannot handle more.
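The 36-bit figure maps directly onto that 64GB theoretical ceiling. Worked out:

```python
# The 36-bit address-bus claim above, worked out:
# 2^36 bytes of physical address space is exactly 64 GB.
addr_bits = 36
max_bytes = 2 ** addr_bits
print(max_bytes // 2**30, "GB")   # 64 GB
```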
Agreed. I do think this justifies the switch from a Nehalem 920; however, the price of DDR4 is far too prohibitive. Given the fact that Skylake is rumoured to be around the corner, ~Q3 2015 (it will probably get pushed back), I'm just going to grab a bargain used X5670 6-core for $120. I have to say, I never thought the 4.1GHz 24/7 920 would hold up this long. Multi-GPU and newer PCIe storage solutions will mean that 40 lanes will matter in a generation or two.
So X99 has the same specifications as Z87, just with 4 extra SATA ports that cannot be RAIDed. That warrants a resounding "meh" from me. Intel could have at least increased the number of USB 3.0 ports.
The Z-platform is the enthusiast platform, this is an entirely different segment. For instance, you'll never get 6 cores on the Z-platform, so the comparison is kinda silly. :)
I hope you're wrong with your never statement. If we're still on 4 cores on the Z platform by Montlake or the generation afterwards, I will be very, very disappointed. Intel either needs to aggressively expand core counts to give developers a reason to make software utilizing more threads or push aggressively for increased IPC. Otherwise my 2500k could last a very, very long time.
Right. Gulftown was the 32 nm shrink of 45 nm Nehalem. However, in that case there was no associated IPC change (except AES addition in Gulftown), so in light-threaded tasks there was no performance difference between the two (except AES), so people often informally confuse the two, despite the fact that Gulftown has two more cores due to the 32 nm instead of 45 nm.
Thanks for this review. It's now clear this platform is for an extremely narrow target audience: those requiring the most computational speed on a single mainboard, requiring 4 RAM channels or 40 PCIe lanes, but wanting something cheaper than a Xeon E5 while not needing ECC. I truly wonder what practical application there is. I mean, it can be neither mission-critical nor scientific stuff, because that would require ECC... gaming at 8K? (No, 4K works fine on a 4790K with two Titans.)
Yeah, until you stop upgrading and build a rendering farm that can split the workload over multiple systems. Then you just keep buying more of the best price/performance stuff, CPUs or GPUs.
Ian - How come we didn't see any overclocking tests with the 6-core 5930K and 5820K? I had just purchased an i7-4790K that isn't even installed yet. I am not just a gamer; I do video and photo work as well, and I am constantly CPU limited. The X CPU is way too expensive, so these others, particularly the 5820K overclocked, might be a perfect sweet spot for a lot of people.
When I tested the 5930K/5820K, the motherboard BIOSes were still very early alpha builds and did not allow overclocking. If I can get these CPUs in again to test (they had to be sent back), I will do some overclocking results for sure.
I'll be waiting for two more generations. Maybe something worthwhile comes along to replace my 2600k at 4.4GHz. I'm glad the review shows so clearly where this new chip excels and who should save their money.
Typo: "Modules should be available from DDR3-2133 to DDR3-3200 at launch, with the higher end of the spectrum being announced by both G.Skill and Corsair. See our DDR4 article later this week for more extensive testing."
Good article, but the overclocking comparisons are a bit limited, i.e. 5930K and 5820K overclocking tables are not provided, nor a comparison with older SB/IB CPUs at around 4.5GHz, which most people still have while deciding whether to upgrade to an X99 or a Z97 platform. A more accurate RAM performance comparison is also missing.
At the time I had the 5930K/5820K, I was not in a position to be able to overclock due to early alpha firmware. Due to our newer benchmarking suite, I still need to go back to the early Sandy (non-E) CPUs to retest. Anything you see in Bench with the power listed has been retested at least in part this year, depending on my scheduling. Unfortunately I don't have the space to have this as an ongoing project, it occurs in time with reviews.
"For the six core models, the i7-5930K and the i7-5820K, one pair of cores is disabled; the pair which is disabled is not always constant, but will always be a left-to-right pair from the four rows as shown in the image. Unlike the Xeon range where sometimes the additional cache from disabled cores is still available, the L3 cache for these two cores will be disabled also."
Are you sure these various statements are correct? I'm not doubting you, but I would point out that at Hot Chips 2014, discussing the Xeon IVB server, Intel stated that they'd designed the floorplan to be "choppable". They gave a diagram that showed a base of fifteen (3x5) CPUs+L3 slices, with chop lines to take off the right-hand column of 5 CPUs, then to take off one or two of the horizontal pairs (taking the 10 CPUs down to 8 and then 6).
The point is: the impression they gave is that these reduced CPU counts are not (at least not primarily) full dies with disabled or nonfunctional cores; they are manufactured with smaller area and fewer cores. Which suggests that versions with fewer cores but larger cache are some sort of anomaly (because the chop cuts out L3 slices along with cores). Perhaps THOSE are the chips that really did have one or two non-functional cores but still-functional L3 slices?
As far as we know, the floorplan for the die for i7 is an 8-core, with the 6-core models being disabled versions rather than wholly new dies.
With the Ivy-E Xeons, there are a number of CPUs that have high L3 per core numbers due to the way the cores are disabled - Intel usually sticks to 3 floor plans or so depending on how their product line is stacked up. This may change with Haswell-E, though the Xeons have not been officially released yet. The Ivy-E floorplans can be found here, where I did a breakdown of L3/core: http://www.anandtech.com/show/7852/intel-xeon-e526...
I'm still not sure I was able to glean enough data to determine efficiency. If we consider that third-party SATA 6Gbps and USB 3 are just fine by me, and DDR3 is nicely priced and 1.35V CAS 8 is easy, the question becomes a little more murky. Is AVX2 still broken (I believe that was what was being disabled in microcode, right)? If so, and given that I use 3 GPUs and some PCIe SSDs, the 5820K is less interesting to me. So my current 4930K vs. a 5930K, for me, comes down to power efficiency under load once overclocked and undervolted. This box is more active than it is idle, and my experiments on the desktop showed that a properly overclocked Ivy Bridge at 4.8GHz or so could go toe to toe with a Haswell at 4.3-ish, but was more power efficient in the process. Maybe I stick with the 4930K. What do you think, folks?
Again, in your table of Extreme Core i7 CPUs, you forgot the last 4-core Nehalem, which is the i7-975X at 3.3GHz. No, the 965X is not the latest 4-core Extreme!
Considering this would have cost me ~340€ over my i7-4770K (which I have at 4.5GHz, delidded), because of the price difference in CPU and the fact that I had an LGA1150 motherboard from my retired mining rig, I'm not too salty about it. At least there is a 6-core at the low end, which is encouraging. I've been mostly fine with my i7-860, so I guess the i7-4770K will serve me a while.
"With ASUS motherboards, they have implemented a new onboard button which tells 2x/3x GPU users which slots to go in with LEDs on the motherboard to avoid confusion." Because looking stuff up in the manual is way too complicated!
The 5820 can be had for $299 at micro center and they will also discount a compatible motherboard by $40. Jus' sayin'. IDK if there's some kind of ad agreement, etc for listing Newegg's price... Anyone shopping for anything should always shop around.
The 28 lanes of the i7-5820K has almost no effect on SLI gaming at 1080p.
I realize you were trying to CPU-limit the benchmarks by using such a low resolution, but does this still hold up when running, say, three 1440p monitors? Wouldn't that be the time when the GPUs are maxed out and start shuttling large amounts of data between themselves?
I want to test with higher resolutions in the near future, although my monitor situation is not as fruitful as I would hope. There is no big AnandTech warehouse, we all work in our corner of the world so shipping around this HW is difficult.
"The move to DDR4 2133 C15 would seem to have latency benefits over previous DDR3-1866 and DDR3-1600 implementations as well."
If my math is correct, this is wrong. With DDR4-2133 timings of 15-15-15, each of those 15s corresponds to 14.1 nanoseconds. (Divide 2133 by two to get the actual frequency, then divide the clock count by the frequency.) With DDR3-1600 and the common 9-9-9 timings, each time is only 11.25 nanoseconds. With DDR3, the actual transfer of the data takes four clock cycles (there are eight transfers, but "DDR" stands for "double data rate", meaning there are two transfers per clock cycle). That translates to 5 nanoseconds on DDR3-1600. DDR4 transfers twice as much data at a time, so with DDR4-2133 a transfer takes eight clock cycles, or 7.5 nanoseconds. So DDR3-1600 has lower latency than the DDR4-2133 memory.
So why does Sandra report a memory latency of around 28.75 nanoseconds (92 clock cycles at 3.2 Ghz) as shown in the chart on page 2 of this review? If a bank does not have an open page, then the memory latency should be 15+15+8 clock cycles, or 35.6 nanoseconds, not counting the latency internal to the processor. So the Sandra benchmark result seems implausible to me. As far as I can tell, the source code for the Sandra benchmark is not available so there is no way to tell exactly what it is measuring.
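For what it's worth, the CAS-latency arithmetic in the comment above is easy to reproduce; a quick sketch (the timings are the ones quoted, not a full model of DRAM access):

```python
# CAS latency in nanoseconds: cycles divided by the actual clock.
# A module's MT/s rating is twice its clock, since DDR transfers
# on both edges of the clock.
def cas_latency_ns(transfer_rate_mts, cas_cycles):
    clock_mhz = transfer_rate_mts / 2        # e.g. DDR4-2133 -> 1066.5 MHz
    return cas_cycles * 1000 / clock_mhz     # cycles * ns-per-cycle

ddr3_1600_cl9 = cas_latency_ns(1600, 9)      # common DDR3 timing
ddr4_2133_cl15 = cas_latency_ns(2133, 15)    # stock Haswell-E DDR4 timing

print(f"DDR3-1600 CL9:  {ddr3_1600_cl9:.2f} ns")
print(f"DDR4-2133 CL15: {ddr4_2133_cl15:.2f} ns")
```

This matches the numbers above: roughly 11.25 ns for DDR3-1600 CL9 against about 14.1 ns for DDR4-2133 CL15.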
Hey guys, I'm running an i7-2600K on an M4E/Z @ 4.8GHz. My system is fast enough for most applications; it only struggles with multiple applications running. Should I be looking at Haswell-E or waiting until Broadwell? The only annoyances I have with the i7-2600K are slow video encoding and restrictions on multi-tasking.
Manually adjust your process priorities and core affinities if necessary. The old workhorse will carry you a lot longer still. I may have a 14-hour rendering process going with fully loaded cores and still run a nice FPS session at will, provided that I adjust the process priority manually.
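For reference, that kind of manual priority/affinity juggling can be scripted; a minimal Linux-only sketch using Python's standard library (the core split and niceness value are illustrative assumptions, not a recommendation):

```python
import os

# Pin the current (render) process to the first half of the cores, leaving
# the rest free for an interactive app, and lower its scheduling priority.
# os.sched_setaffinity and os.nice are Linux-specific stdlib calls.
render_cores = set(range(os.cpu_count() // 2 or 1))
os.sched_setaffinity(0, render_cores)   # 0 means "this process"
os.nice(10)                             # higher niceness = lower priority
print(sorted(os.sched_getaffinity(0)))
```

On Windows the equivalent knobs live in Task Manager (Set Priority / Set Affinity), which is what the comment above describes doing by hand.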
What is your RAM speed? I find 2133 @ CL10 to be optimal with the M4E/Z (I have five of those boards, all with 2700Ks @ 5.0).
Make sure you have a good SATA3 SSD to exploit the Intel SATA3 port. Don't use the Marvell ports. An H80i with NDS fans works really well for low-noise, but even an old TRUE with any two decent fans will happily run a 2700K @ 5.0 (I've used the TRUE, TRUE Black, VenomousX, Phanteks, etc., but recently bought a whole pile of refurb H80s for a good price).
If you want an intermediate upgrade, get a used 3930K C2 and a good used X79 board (I keep buying the ASUS P7X79 WS, done five so far), move over your RAM, etc. Note the same caveats re Intel/Marvell ports, use an H100i + NDS fans instead, and voila, you're up and running with a 6-core for not much outlay. A recent build I did employed a 3960X which cost 245 UKP, the above ASUS board for 190 UKP, etc. It gave 1221 in Cinebench R15, while your 2600K @ 4.8 should give around 850, so that's a very nice bump for threaded tasks and for running multiple apps in general.
I suggest an 840 Pro or 850 Pro for an SSD, though there are lots of used bargains available. I bagged a 512GB Vector for 160, ideal as a cache for AE, while an OEM 840 Pro was only 87. Best of all, I keep getting 1475W Thermaltake Toughpower XT Gold units for around 125 UKP (less than half normal new cost), perfect for handling four heavy GPUs for CUDA or whatever (my system has four GTX 580 3GB atm) in an oc'd 6-core system with multiple SSDs, RAID, etc.
More references, examples & suchlike available on request - don't want to clog this thread.
Mixed feelings on this one. This is a solid effort here, and the 5820K at around $390 is potentially interesting, seems very similar to the 3930K and it's a bit cheaper by default.
That said, I don't see much reason to upgrade from SB-E or IB-E if you already have something in that range. Certainly even the 5820K is a bit overkill for gaming at its price.
I do have to say we live in strange times where even latter-day Core 2 systems (paired with very good video cards, as a caveat) are still fairly capable for most single player gaming environments; certainly they can still handle any casual task thrown at them. And anyone who's got to Sandy Bridge has had little reason to upgrade their systems yet, for sure. Ten years ago, saying "I have no reason to upgrade my 2-4 or so generation old box" would have been crazy talk.
Very impressed with the 8-core overclock. I was worried that such low stock clocks meant the OC wouldn't be too good, but they had it on a crappy closed-loop 140mm rad and it did well. I have a custom loop that cools the motherboard chipset and VRM, the CPU, and the GPU: a triple 5.25" reservoir with dual MCP655 pumps in series at setting 4 (one below max pump speed) and a 60mm-thick 420mm rad with 6x 140mm Noctua A15 fans in push-pull. Hopefully I can hit 4.8GHz, in which case I'll be very happy, but as long as I can hit 4.5GHz I'll be satisfied. I'm coming from a 4261MHz i7-980X, so this is going to be a really big upgrade for my video work, and even a noticeable boost in gaming (not huge, but noticeable).
I'm totally pleased with the i7-5960X. Waited 3 generations from Nehalem to upgrade. With Haswell-E I'll be waiting at least 4, possibly 6 generations to upgrade, unless some crazy new chipset feature makes me do it earlier.
I really wish they took the voltage regulator off the CPU. It's really a bit sad that a 4790K can beat this high-end, expensive chip in single-threaded tasks. I was really looking forward to this, but the performance doesn't justify the cost unless you do a lot of multi-threaded work. With Skylake supposedly coming with PCIe 4.0, this system is going to be outdated pretty quickly. One thing is for sure: the days of big overclocks on the CPU side are over.
The 28 lanes in the 5820k don't make much difference in SLI because it uses the SLI bridge as interconnect between the graphics cards.
It would be interesting to see if the 16x/8x configuration makes any difference with two of the newer bridgeless Radeon cards. Especially since the first build exemplified in this review uses that same configuration (5820k with two Radeon 285 cards).
There absolutely is *no* frame data going across the SLI bridge; the only thing going across it is timing and signaling info. It is a tiny ~1GB/s interconnect.
Extremely high resolutions on PCIe 2.0 SLI are where you specifically DO need more lanes; PCIe 3.0 alleviates this. Multi-monitor 4K would bottleneck again, but the best GPUs can barely handle a single 4K display in high detail anyhow, even in tri-SLI.
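To put rough numbers on the PCIe 2.0 vs 3.0 point, here's a back-of-the-envelope sketch using the per-lane line rates and encoding overheads (theoretical peaks, ignoring protocol overhead):

```python
# Usable bandwidth per lane, GB/s: raw line rate times encoding efficiency.
# PCIe 2.0 runs 5 GT/s with 8b/10b encoding; PCIe 3.0 runs 8 GT/s with
# the much leaner 128b/130b encoding.
def lane_gbps(gt_per_s, payload_bits, coded_bits):
    return gt_per_s * payload_bits / coded_bits / 8   # bits -> bytes

pcie2 = lane_gbps(5.0, 8, 10)      # 0.5 GB/s per lane
pcie3 = lane_gbps(8.0, 128, 130)   # ~0.985 GB/s per lane

print(f"PCIe 2.0 x16: {pcie2 * 16:.1f} GB/s")
print(f"PCIe 3.0 x8:  {pcie3 * 8:.1f} GB/s")
```

A PCIe 3.0 x8 slot (~7.9 GB/s) carries essentially the same traffic as a PCIe 2.0 x16 slot (8.0 GB/s), which is why the 5820K's 28 lanes matter far less on this platform than they would have on an older one.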
Hmm, one last benchmark (emulation-related) I'd like to see, and I suspect many others would too: run PCSX2 in software mode with 8+ threads and see if there are benefits.
Try something really tough like Shadow of the Colossus.
Looks like this chip is a man looking for a mission, is that it?
I'm not so sure it would be of great benefit. Emulators are thread-limited by the hardware they're attempting to emulate. I read somewhere that PCSX2 has a thread limit due to the difficulty of synchronizing each PS2 hardware component in its own thread.
Dolphin also favors clock speed over simultaneous threads.
Glad I upgraded to Z87+4770K last year. While it is great that Intel *FINALLY* upgraded the rest of their platform to native USB 3.0 and all SATA3 (6G) ports, along with newer options like M.2 and SATA Express, the drop in clocks to accommodate the higher number of cores and higher resultant TDP makes it a wash for my primary purpose: gaming.
I also didn't want to pay the early-adopter tax on DDR4, and it looks like that tax is high right now. Coming from X58, I was also very pleased with the drop in total system power going to Z87. Between a 920 @ 4GHz and the difference in board power, I'd estimate it's pulling about 50W less at idle and 100W less under load. My Kill-A-Watt measurements indicate similar.
Still, if buying today and putting together a new platform for the future, this would be a good option now that Intel has addressed all of the major issues I had with the X79 platform (full native USB 3.0, full native SATA6G, official PCIe 3.0 etc).
@Ian, I am sure it is due to being limited to what you have on hand, but it would have been nice to see some more powerful GPUs tested, just to better illustrate potential CPU performance differences once the GPU bottleneck is lifted. Nice job though, the new graph toggles are really slick.
I pulled the trigger on the 4770K last year also....but I did so only because the Ivy Bridge E was stuck on the X79 chipset. For me, it was an interim solution while I waited for Haswell-E. When my new X99 parts arrive next week, I'll upgrade my system and put the Haswell parts on craigslist - I should be able to sell them for a bargain price and reclaim some cash.
The 4770K still sells well on eBay; I got mine at $270 (used), looked like new and works like a champ. Haswell-E doesn't look bad, but in a world where x86 software doesn't use all the cores in many applications, or in gaming, I'm happy with my purchase. Enjoy your CPU and milk every buck out of it!
"With Haswell LGA1150 CPUs, while the turbo frequency of the i7-4770K was 3.9 GHz, some CPUs barely managed 4.2 GHz for a 24/7 system."
I think I spotted a little typo on page 3, did you mean "With Haswell z87..."? I didn't think any of the 4770x CPUs could use an 1150 socket. Or am I misreading it?
Oh, you're right. I guess I'm confusing sockets and chipsets. Obviously CPUs need a matching socket, but do they also need a matching chipset, or do newer motherboards just allow newer feature sets introduced by the cpu? Or am I still getting it wrong?
It seems like every time a new generation of CPUs is released, a bunch of new motherboards with identical chipsets show up to complement them, so I thought each generation of CPUs had a matching chipset that they need to pair with.
Sorry, this is like amateur hour, I'll just google this stuff. It's strange, I like reading these articles, but I haven't the slightest idea why - I only understand what they're saying like half of the time!
Thank you AnandTech for showing performance with ALL of the CPUs overclocked, as opposed to only one chip overclocked and the rest at stock. This makes the comparison much more fair and realistic.
I really like the analysis of performance per clock. It really helps me to judge CPU performance. However, why do you disable HT for these tests? All the CPUs considered have it, and on average it boosts performance. And most importantly:
"Haswell brought... two new execution ports and increased buffers to feed an increased parallel set of execution resources... Intel doubled the L1 cache bandwidth."
Right. Which means Haswell may very well see better performance improvements from enabling HT than older CPUs. This could be very relevant for the workloads which people should run on these 6 and 8 core monsters. And by that I'm not talking about gaming ;)
Most people don't need this setup, because the only thing here is really your Haswell processor with a couple of extra cores, a few different parts for DDR4, and a little bump in L2, and that's it. Games don't need all of these changes because most games today aren't sophisticated enough to utilize them.
I can certainly see that my VM and rendering machine will love the new 5960X and DDR4 but it's not worth investing in new platform when it just came out.
Anyone that does high end AV content creation will see a big bump if you got the money to spend on it.
All great benchmarks except for the gaming ones. It's pretty common knowledge that Geforce cards like to handle almost everything "in-house" whereas AMDs tend to dump a big chunk of their workload onto the CPU. All I'm saying is that I'd love to see a gaming benchmark redone with R9's - I'm betting it would show the differences between these processors in games better - if there are actually any :D
The differences are minor even in PURE CPU tests. Sandy-E to Ivy-E is about a 5% IPC gain; Ivy-E to Haswell-E is another 8% or so, but Haswell-E suffers a roughly 10% OC ceiling deficit against them and has higher-latency RAM to boot.
Unless your workload truly has 8 threads, or you multitask to the point that you are saturating 6 fast cores, this is a non-upgrade coming from Sandy or Ivy.
I might finally upgrade my work computer. I'm running an i7 920 with 8gb memory and a 256 GB SSD drive. I need that extra power for word processing, emailing, and surfing the web. Oh wait, no I don't. For the basic computer user, computers have been fast enough for many years.
For my home gaming machine, I don't see a reason to upgrade my 4770k.
As a matter of fact, for the next generation (nvidia 9xx series and amd xxx) in quad sli/xfire setups, this chip will allow more bandwidth through the PCI-E lanes. So for the top 1% of setups this chip does matter.
No it won't. My P7X79 WS with a 3960X and tri-SLI Titans has 40 native PCIe 3.0 lanes. SB-E has 40 lanes and PCIe 3.0 support, and advanced X79 boards with a current BIOS fully support it.
Honestly, X99 is a non-upgrade versus a high-end X79. All you really get is native USB 3.0 and SATA 6Gb/s versus third-party integrated controllers (BFD), and the mixed bag that is DDR4.
Very strangely, tomshardware has the 5820 beating out the 5930 and 5960 in gaming benchmarks. The test platforms here and there are very different, but for a start it would be great if you guys could also show us some results with a single GPU for the 5820 and 5930.
IMHO, tomshardware messed up something here - there is no physical reason why 5820 can beat 5930, because they have the same core count and cache structure, but 5930 has higher frequencies and 40 PCIe lanes vs 28 lanes of 5820.
Yup, I'm leaning towards the same conclusion. It's a shame that most reviews do not cover anything but the highest-end chip, but between AnandTech, Tom's Hardware, 3DGuru, and another Belgian review I found (the graphs were self-explanatory at least ;), only Tom's Hardware has that anomaly, so I'm inclined to agree.
These CPUs would be more beneficial for higher-resolution gaming. I love how people are trying to compare these new CPUs to the i5-2500K. There is nothing wrong with the 2500K when gaming at 1080p, but when people start switching over to 4K gaming once it becomes more affordable, your 2500K is going to struggle big time. Like people have said earlier, there is more to computing than gaming: these CPUs will be vastly faster at video converting/rendering and other CPU-intensive applications compared to older series of Intel CPUs. On another note, I see people complaining about heat because of the high TDP. When there are more cores it's going to need more power, so get a water cooler and problem solved :). At least the TDP isn't 220W like AMD's 8-core CPU; 150W is pretty fair for an Intel 8-core CPU that straight up murders an AMD 8-CORE CPU.
Was thinking of moving from my 2600k at 4.8 to the 5930k but really not worth the $ for a 20% boost in certain apps and little to no improvement in gaming.
Why is it that every review of the Haswell-E chips doesn't even make an attempt to saturate more than 16 PCIe lanes? The whole point of the Haswell-E chips, in my opinion, is the extra PCIe lanes for tri- and quad-SLI/CrossFire setups. Dual GTX 770s is a joke of a review if you ask me. Put in at least 3 of them, or don't do a gaming review at all, because it's going to be less than a 2% difference in performance from regular i7s.
You would think, since Intel seems stuck at a specific design speed for the most part (other than die shrinks), that AMD could come up with a better architecture. Maybe there is no better architecture out there?
...With the i7-5820K being two generations newer, it should afford a 10-15% performance improvement in CPU limited benchmarks. This is quite amazing if we consider the release price of the i7-3960X was $990 and the release price of the i7-5820K is $389....
ha, in three years we have 10-15% performance increase.
Since those parts are nearly identically clocked, with the cost per core being so low, you're really getting about 2.8x the performance per dollar over three years. Compared to other market segments, which have practically stalled out everywhere in x86 land, yes, it is pretty amazing.
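As a sanity check on that figure, using the release prices quoted in the article and the commenter's 10-15% uplift estimate (midpoint assumed):

```python
# Performance per dollar, i7-5820K vs. i7-3960X, from release prices
# ($990 vs. $389) and an assumed ~12.5% CPU-limited performance uplift
# (midpoint of the 10-15% range quoted above).
price_3960x, price_5820k = 990.0, 389.0
uplift = 1.125

perf_per_dollar_ratio = uplift * price_3960x / price_5820k
print(f"{perf_per_dollar_ratio:.2f}x the performance per dollar")
```

Launch MSRPs only; the real ratio obviously shifts with street prices and with how well a given workload uses the extra cores.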
About the gaming benchmarks: core count doesn't really make a difference in most of your chosen games. However, there are still a few good CPU-benchmark games that come to mind, Civ V and the Supreme Commander series, to mention a couple. In fact I'd keep an eye out for Planetary Annihilation. It's available this weekend, although I'm pretty sure Uber Entertainment would hand over a copy just for benchmarking.
"now the lowest end CPU for the extreme Intel platform". Yep, I'm just using an i7-920, that was exactly at the same spot in its day. Same architecture and memory as the extreme flagship, at a fraction of the cost. Not counting server chips, right?
Intel Core i7-5820K, you say eh? Hmm... maybe I should write that down for later... Except I'm not planning anything beyond 1080p, then all of these chips and cards are a bit overkill.
Correction: I want full pedal-to-the-metal at 1080p, perhaps the "average build" should help... right? I appreciate anybody pitching in on the details for a machine to run anything at 1080p (or 1920x1200 actually) at 60 fps, but not more.
I'd like to add that the most interesting part to me is the 5820K: a cheap hexacore (an Intel hexacore, which IMHO is the only kind that matters). I'll likely never have the budget for a $1000 chip and then the system to do it justice.
However, I currently have an i5-3570 OC'd to 4GHz. Cost something like $380 for processor and board at the time. For what appears to be around $600 I could now get 50% more cores, plus hyperthreading, and probably be able to reliably OC to a "conservative" 4.2GHz, for probably >60% performance improvement at less than half again the cost.
In comparison to moving up to the i7-4770k, which would only cost maybe $50 or so less, with a modest performance improvement (probably only 15-30%).
It does make me VERY interested in Broadwell-E and Skylake-E, as those are the most likely points at which I might be looking at an upgrade. I do wonder if Skylake-E will see entry level Enthusiast Octocore, and maybe even if it'll mean high end mainstream Skylake Hexacore.
That would be an interesting decision if 6 or 8 core processors were not terribly far off in price.
"However, I currently have an i5-3570 OC'd to 4GHz. Cost something like $380 for processor and board at the time. For what appears to be around $600 I could now get 50% more cores, plus hyperthreading, and probably be able to reliably OC to a "conservative" 4.2GHz, for probably >60% performance improvement at less than half again the cost."
Dude, did you forget about the cost of having to buy DDR4 memory modules? That will throw your numbers out the window.
Regarding your statement about game benchmarks, "It makes sense that we should test this with 4K in the future".
You did not say how far into the future; in the near future it makes NO sense. The video card you used was a 770; there is no way that GPU can handle 4K at high game settings, and even medium settings will bring it to its knees. First of all, it is a mid-range GPU; secondly, it does not have enough local memory.
Lower resolutions better indicate what the CPU can do, because the GPU is not overtaxed and therefore does not become a factor.
If you change anything, pick a higher end video card, to make sure the GPU is not bottlenecking tests.
I owned a first-generation 2008 i7 running at a measly 2.7GHz, and I can tell you the new motherboard alone, with its 6Gb/s SATA and 10Gb/s interfaces, made up for speed losses, not to mention the 4790K running a stable 4.5GHz overclock. For Photoshop and video, the new CPU and motherboard have made a world of difference. Save and open/read times have been reduced from 3-15 minutes per file down to a "wait, I can count it on my fingers?!" 7 seconds. That means no more "oh well, it's saving, let me go to the bathroom or something while I wait for this slow a$$ computer."

I'm kind of wanting to kick myself for buying a Z97 on the very day the new X99s came out, but when I did a price check it just wasn't worth it. This will hold me over until those crazy prices drop. I looked at benchmarks for the 4790K vs the lower end of the new X99 chips, and it looks like the 4790K does better in Photoshop than the X99 parts due to its higher frequency.

I even had to drop 8GB of RAM, because my old motherboard had 6 slots holding 24GB in 4GB sticks, and my new motherboard only has 4 slots. That made me sad. But even with less RAM, the motherboard and processor are much more efficient, and they can actually use the full speed of my SSDs and my 3Gb/s and 6Gb/s internal and external hard drives as well.

Everyone arguing that their old processors are amazing needs to open their eyes. On paper it all sounds very much equal, but people forget that motherboards have been improving as well. I was having consistent blue-screen crashes on my old system even after refreshing it a few times. This new configuration (ASUS with i7-4790K) runs like a champ: no blue screens at all, nothing but blue skies.

I think Intel needs to drop their prices a little. I would have preferred to get the 8-core i7 or 12-core Xeon (yes, that sucker is out there as well), but at $1000-2500 for those units it's just not worth the small improvement versus the price, whereas the huge increase in performance I felt at a much lower price point was worth the upgrade.
What about benching some games that have decent multithreading? Like games on the Bitsquid engine, such as War of the Roses / War of the Vikings? Or Natural Selection 2 with its immense polygon count? No, let's just benchmark GPU-heavy AAA titles that generally push the GPU market more than the gaming market. If you want to benchmark an 8-core CPU with games, you should AT LEAST let half of them have decent MT support.
Well, except for gaming... mostly true, but not completely. If you play chess, the 8-core version will smoke any 4-core version just for fun. There are other games, not in the front scene right now, that are mainly CPU-demanding. Any FPS game currently on the market is heavily based on showing realism, so it requires graphics processing power, and demanding that a powerful CPU run such a game faster than the fraction of a second a slower CPU would take is, well, pointless.

But gaming is vast, bigger than your encoding software, your bitcoin mining, and your much-advertised enterprise software. When developed further, it will require what the most demanding scientific applications require, and probably more. Note that it is already a main driving force (if not the main driving force) of modern supercomputer improvement, and it will be in the future. Think of in-game AIs, multiple AIs, that interact with the game world like a human would. Think of voice and pattern recognition, of tracking thousands or millions of objects, and so on.

If your only aspiration in buying such a CPU is how well it will run current-gen games, you were never in the appropriate customer category for this CPU; you would rather be excited by the mobile parts, which, while anemic compared to the 8-core Haswell, are fancy and fashionable and satisfy your vanity. Of course the sad market turn remains a problem, because the rise in pricing of the "extreme" parts puts most of us off.
The one thing I like about the X99 chipset over my ASUS P8P67 Deluxe are the plethora of SATAIII ports and vastly improved onboard sound. The problem is the SB is a dead platform; there is no upgrade path with it and all boards ever made have few ports. I actually had to buy a separate add-in card but it suffers from being on the PCI-8x port - connect more than 1 drive, and they share the throughput.
That's why I'm looking at this build. I figure the ASUS Rampage IV + 5820 + 16 GB RAM should set me back about $1,100 but it gives me a bit of future-proofing.
I just purchased the i7-5930K along with an EVGA X99 Classified mobo, with 16GB of Corsair Dominator Platinum DDR4-3000 RAM for now, along with 2 EVGA GTX 980 Classifieds for SLI. I purchased the Classified versions of both the mobo and the GTX 980s because I love tinkering and overclocking to see the best stable clocks I can achieve without heating up my bedroom as if my PC were a space heater, which would actually be handy in a month or so as the cold weather returns to Northern VA, where I live. My current PC, and even the older PC that this build replaced, are air-cooled for now. I will buy WC blocks and such for my GTX 980s, since I have fantastic components for overclocking with the Classified parts from EVGA and a CPU that OCs well on air too.
Sure, this was a very expensive upgrade, but at least the CPU, RAM, and mobo will be good for the next 4-5 years, just like my aging i7-920 D0 / MSI X58 Pro-E mobo with 12GB of Corsair XMS3 DDR3-1600 triple-channel. That PC had aging SLI GTX 680s that have been great so far, but it was time to upgrade the system; it has been a great PC all these years since I purchased it when the X58 and i7 series first launched.
This upgrade over my aging X58 build is a massive jump, as I was running into CPU bottlenecks with my SLI GTX 680s even with the CPU overclocked to a stable 3.8GHz on air cooling with good temps. I tested my new system against both my old EVGA GTX 680 FTW+ 4GB cards in SLI: a single GTX 980 beats them in games that don't have great SLI/CrossFire support, and when I added the second GTX 980 it was overkill compared to the 680s while also using less power at reference clocks. I may get a third GTX 980, but I'm holding out for the rumored GTX 980 Ti that may come around Spring 2015; I hope to get at least 2 for SLI if it's a comparable upgrade to what the GTX 780 Ti was over the 780, and I hope the cards have 6 to 8GB of VRAM too, as I am gaming at 4K and there are times when the VRAM is topped out and more is needed. I'm just glad the GTX 980 was such a great buy this time around and made a big difference for those of us on aging GTX 680s and earlier cards; I usually upgrade my GPU every 2 years anyway. I am currently reading up on tutorials for these components and overclocking my new build, since it's been years since I really used to tinker with voltages, RAM timings, and serious voltage tweaks to the GPUs for better overclocking. I actually miss the old days of tinkering with my PC for hours a day when I was younger, but at least I have the weekends off to tinker with my new gaming beast that should last me a few more years.
For now, I'm happy with the performance I have with a mild overclock, and my games run and look fantastic on my new 55-inch LG 55UB8500 4K TV, which has HDMI 2.0 so that I can play at 60 FPS. With my current setup I'm averaging 40+ FPS in most games on ultra settings, with AA set to 2x or none at all, since in my opinion a 55-inch 4K TV is the perfect size for the resolution, and turning off AA in most games makes no major difference in appearance on screen due to the high resolution. That's a major plus for 4K gaming, and it's easier on the hardware too. That TV only cost me about $1500 at Best Buy during the Labor Day weekend sales. It was a steal for a TV that has great features and, most importantly, HDMI 2.0 to take advantage of my GTX 980s in 4K, instead of having to use the DisplayPort-to-HDMI adapters I was using with my GTX 680 SLI to achieve 30+ FPS in a few games at 4K with no AA and such.
For those who want great performance at 4K, SLI GTX 980s are great, and they suit systems whose PSUs can comfortably handle both cards at load with a quality 750W unit, versus needing a 1000W unit for older cards.
I forgot to mention that I am currently using 2 EVGA GTX 980 SCs, but I will be sending them back next week to exchange them for the Classified cards (the air-cooled ones). I am currently running both cards at a mild 1300MHz overclock on the core, with the memory at factory clocks for now. Not bad for air cooling; I know I can push for 1400 to 1500, but I will wait for the Classified cards for that type of OC, as I will be getting EK water blocks for both cards this time.
Been trying to decipher Intel roadmaps and the like, to no avail, so... can anyone tell me approximately when the DDR4-supporting Core i5 line is expected to launch? I'm needing to upgrade my aged Core 2 Duo (DDR2!), but I don't wanna hop on the DDR3 bandwagon just as it's being superseded by DDR4... :-/
There should be a line of discussion about why CPU speeds haven't increased significantly in the past 5 years. My 5-year-old Intel i7 is a 4-core at 3GHz; the ones discussed here are only 6 or 8 cores and run stock in the mid-3 to 4GHz range. So over 5 years, CPU capability has not even grown 2 to 3 times faster, and that only applies to applications that can use the extra cores and hyper-threading. The usual rule we work to is that people won't even notice a 50% speed increase; it has to be 2 to 3 times before it is noticed. Previously, a 3-year refresh of a computer resulted in a 5 to 10x speed increase.
With the current barely noticeable 2x, why bother with the trouble of an upgrade? No wonder Intel's and AMD's sales figures are failing to grow.
I am a very happy owner of an i7-4960X. I think X79 boards still win over the X99s; X79 can still run 2800-3100MHz RAM without much overclocking. I know technology must move on, but I feel like X99 is still not worth much at all; X99 is about a 7% improvement in performance if we compare with X79.
Obviously higher OCing is good... But you guys can have fun with your QUAD-core 2500K and a memory bandwidth of 21GB/s, while the 5960X has EIGHT cores and three times (3x) the bandwidth. Enjoy the 2500K (x
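Those bandwidth numbers check out as theoretical peaks: channels × transfer rate × 8 bytes per 64-bit transfer (a sketch; sustained throughput is lower):

```python
# Peak DRAM bandwidth in GB/s: channels * MT/s * 8 bytes per 64-bit transfer.
def peak_bw_gbs(channels, mt_per_s):
    return channels * mt_per_s * 8 / 1000

i5_2500k = peak_bw_gbs(2, 1333)   # dual-channel DDR3-1333
i7_5960x = peak_bw_gbs(4, 2133)   # quad-channel DDR4-2133

print(f"2500K: {i5_2500k:.1f} GB/s, 5960X: {i7_5960x:.1f} GB/s "
      f"({i7_5960x / i5_2500k:.1f}x)")
```

That gives ~21.3 GB/s against ~68.3 GB/s, right around the 3x quoted, though whether a desktop workload ever saturates four channels is another question.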
I wish that when they were comparing the 5820K to the 5930K, the clock rates had been adjusted to be the same while benching the SLI GTX 770 gaming scores, so you could see the real effect of 16x/8x versus 16x/16x lane configurations on performance; as it is, I can't tell if that 5% difference is due to the higher clock speed or not. It also wouldn't hurt to use a Titan instead, to max out those lanes.
klmccaughey - Friday, August 29, 2014 - link
I'll be sticking with my 2500k for at least another year then ;)
osamabinrobot - Friday, August 29, 2014 - link
Same. A little disappointed, but I suppose my wallet and new baby will benefit.
maroon1 - Friday, August 29, 2014 - link
Haswell-E is a massive improvement over the 2500K in multi-threaded workloads. Even the cheapest model, the i7 5820K, is going to be about twice as fast as your 2500K.
danielkza - Friday, August 29, 2014 - link
Except for gaming.
osamabinrobot - Friday, August 29, 2014 - link
YUP
raad11 - Saturday, August 30, 2014 - link
Still running my 2500K @ 4.5GHz on air 3 years later for games without a problem.
pt2501 - Saturday, August 30, 2014 - link
2500K @ 4.6GHz. Nothing to see here for gaming. Move along.
AndreiM - Monday, September 1, 2014 - link
So true, same here, 2500K @ 4.7GHz. It's like they don't want my money :D
swing848 - Thursday, September 4, 2014 - link
Sandy Bridge overclockers forget the better hardware performance and added hardware of newer-generation CPU/motherboard combinations. As to gaming, I was FINALLY able to get very good performance in Microsoft Flight Simulator [which ate high-end systems alive] with an Ivy Bridge i5 3570K @ 4.2GHz, a Gigabyte Z77X-D3H, and an AMD R9 290. NOTE: My CPU can do 4.7GHz and struggles with 4.8GHz on air; however, I do not like the temps, so I need water cooling for 4.7GHz and higher.
zinok2001 - Wednesday, November 5, 2014 - link
Hi, I'm upgrading my system. I bought an Asus Sabertooth X79 mobo. Should I go for the 4820K or 4920K for FSX and P3D, or should I go for the 5820K? Would there be any difference in performance?
Thanks, Marc
Ninjawithagun - Tuesday, January 12, 2016 - link
You can't use the 5820K with an X79 motherboard.
dawie1976 - Friday, September 12, 2014 - link
Yip, same here. i7 4790 @ 8.8GHz. I am still good.
myT4U - Friday, March 4, 2016 - link
Tell us about your system and setup, please.
damianrobertjones - Sunday, March 8, 2015 - link
Yep, 2500k @ 4.8GHz. Not really; just found it funny that each new post beat the previous by 100MHz.
Stas - Friday, April 10, 2015 - link
Agreed, doing quite well with a 2500k @ 4.8GHz.
leminlyme - Tuesday, September 2, 2014 - link
I don't mean to be a prick, but you're not going to see anything in gaming performance even if Intel releases a 32-core $200 3.0GHz processor, because in the end it's about the developers' usage of the processors, and not many game developers want to ostracize the entry-level market by making CPU-heavy games. Now, when Star Citizen launches there'll be a bit of a rush for 'better' but not 'best' CPUs, and that appears to be virtually the only example worth bringing up in the foreseeable next 3 years of gaming (at least so far in the public eye). All you can do is boost single-core performance up to a certain point before there are just no more benefits; upgrading your CPU for gaming is like upgrading your plumbing for pissing. Yeah, it still goes through and could see marginal benefits, but you know damn well pissin' ain't shit ;D
awakenedmachine - Tuesday, September 2, 2014 - link
Not prick-like at all, I appreciate the comment. I'm an old dude who hasn't "gamed" for years and I'm just now getting back into it, trying to figure out what will work and for what price. Your insight is very helpful! Sounds like a lot of guys are using OC'ed i5 cores, good to know.
swing848 - Thursday, September 4, 2014 - link
Crank up World of Tanks' video settings to maximum and watch your FPS sink like a rock. A high-end system is needed to run this title at max settings, and it only recently began to use 2 CPU cores and 2 GPUs. No one using a mid-range Intel CPU and an upper-midrange single GPU will see 60fps with the video cranked to maximum.
Midwayman - Tuesday, September 30, 2014 - link
That game is just horribly coded. There is no excuse for the amount of CPU and GPU power it needs.
swing848 - Thursday, September 4, 2014 - link
Check out my post above regarding MS-FSX. And yes, I installed Star Citizen some time ago [the alpha release FINALLY allowed some play], and my system has done well with it, even in alpha.
designerfx - Thursday, September 4, 2014 - link
Gaming was the first area I wanted to look at, so seeing all the comments and review messages saying this is a skip for gaming is actually great. It means prices will hopefully drop soon for the gaming parts.
SirMaster - Friday, August 29, 2014 - link
Well, there is a LOT more to computing than gaming, so this is exciting for a lot of us.
Daniel Egger - Friday, August 29, 2014 - link
Not at all; if you're into computing then you'll more than likely buy a Xeon anyway.
gilles3000 - Friday, August 29, 2014 - link
Not really, Xeons can't be overclocked; even the new 6C 5820K will give you a lot more bang for your buck. Xeons are great for professional or enterprise solutions (and are very expensive because of that), but if you need 6-8 cores and no ECC RAM, I'd take a Haswell-E i7 over a Haswell-EP Xeon any day.
Samus - Saturday, August 30, 2014 - link
ECC RAM is pretty nice though, even on a prosumer PC.
TelstarTOS - Saturday, August 30, 2014 - link
I agree; most Xeons are too expensive for all but the most multithreaded jobs.
jbruner007 - Saturday, August 30, 2014 - link
Actually, we overclock Xeons all the time; see Supermicro Hyperspeed. We OC all the E5-2600 v2 chips we use: the 2650 v2, 2670 v2, and 2690 v2.
willis936 - Friday, August 29, 2014 - link
One could put together a HEDT system with OC headroom, tons of RAM, and a fancy GPU for the price of an entry-level Xeon processor alone, let alone a full-on server. Xeons aren't for people, they're for companies. HEDT is for prosumers, and I think I'm right in saying a lot of people reading AnandTech fall into that category.
Spirall - Friday, August 29, 2014 - link
Exactly. This is the platform for doing professional computing at home.
actionjksn - Saturday, August 30, 2014 - link
Unless you require ECC memory and/or the ability to install two processors on one motherboard, the Xeon processors are a waste of money. You can also do a modest overclock on the i7 Extreme Edition and get really good performance compared to an 8-core Xeon that costs probably twice as much and cannot be overclocked. And if you're getting ECC memory you don't really need, it costs a lot more too. The money you save on the i7 Extreme over the Xeon can also be put towards extra-large and/or fast solid-state storage. The people who do need ECC RAM or dual processors tend to know it, and they are not even looking at these i7s anyway. There are a lot of things that power users do that neither need nor benefit from ECC RAM. That's who these processors are marketed to.
jabber - Saturday, August 30, 2014 - link
At the end of the day, the Xeons are just bug-fixed, lower-power i7 chips anyway. But one way Xeons come into their own is on the second-hand market: I'll be picking up ex-corporate dual-CPU Xeon workstations for peanuts compared to the domestic versions. I have a 7-year-old 8-core Xeon workstation that still runs WPrime in 7 seconds. Not bad for a $100 box.
mapesdhs - Saturday, August 30, 2014 - link
All correct, though it concerns me that the max RAM of X99 may only be 64GB much of the time. After adding two cores and moving up to working with 4K material, that's not going to be enough.
Performance-wise, good for a new build, but sadly probably not good enough as an upgrade over a 3930K @ 4.7+ or anything that follows. The better storage options might be an incentive for some to upgrade though, depending on their RAID setups and suchlike.
Ian.
leminlyme - Tuesday, September 2, 2014 - link
They are applicable to different crowds, and computing doesn't exclude gaming, whereas Xeons to a degree do (though I'm sure for most of them you'd be fine). I, for one, like those PCIe lanes, and the per-core performance of the desktop processors is typically just better. Plus form factor and all that. These fill a glorious niche that I am indeed excited about, and they're pretty damn cheap for their quality too. I guess the RAM totally circumvents that benefit, though.
Mithan - Friday, August 29, 2014 - link
I am into gaming, and nothing is worth upgrading over the 2500 if you have it. For you it's different, of course :)
AnnihilatorX - Saturday, August 30, 2014 - link
I am actually thinking of upgrading my 2500K, because I got a dud CPU which won't even overclock to 4.2GHz.
mindbomb - Friday, August 29, 2014 - link
That's the fault of the software. It seems unfair to blame the chip for that. DX12 should change that anyway.
CaedenV - Friday, August 29, 2014 - link
How exactly will DX12 help? DX12 is good for helping wimpy hardware move from horrible settings to acceptable settings, but for the high end it will not help much at all. Beyond that, it helps the GPU be more efficient and will have little effect on the CPU. Even if it did help the CPU at all, take a look at those charts: pretty much every mid-to-high-end CPU on the market can already saturate a GPU. If the GPU is already the bottleneck, then improving the CPU does not help at all.
iLovefloss - Friday, August 29, 2014 - link
DirectX 12 promises to make more efficient use of multicore processors. AnandTech has already done a piece on Intel's demonstration of its benefits.
bwat47 - Sunday, August 31, 2014 - link
I'm sick of hearing this nonsense. Even with reasonably high-end hardware, Mantle and DX12 can improve minimum framerates and framerate consistency considerably. I have a 2500k and a 280X, and when I use Mantle I get a big boost in minimum framerate.
The3D - Friday, September 12, 2014 - link
Given the yet-to-be-released DirectX 12 and the overall tendency towards less CPU-intensive graphics APIs (Mantle), I guess the days in which we needed extra-powerful CPUs to run graphics-intensive games are coming to an end.
schmak01 - Friday, January 16, 2015 - link
I thought the same thing, but it probably depends on the game. I got the MSI XPower AC X99S board with the 5930K after running a 2500k at 4.5GHz for years. I play a lot of FFXIV, which is DX9 and therefore CPU-bound, and I noticed a marked improvement. It's a multithreaded game, so that helps, but on my trusty Sandy Bridge I was always at 100% across all cores while playing; now it's rarely above 15-20%. Areas where Ethernet traffic picks up, such as high-population areas, show a much bigger improvement, as I am not running out of CPU cycles. Lastly, turn-based games like GalCiv III and Civ5 on absurdly large maps/AI counts run much faster: loading an old Civ5 game where turns took 3-4 minutes, they now take a few seconds. There is also the fact that when the Broadwell-E chips are out in 2016 they will still use the LGA 2011-3 socket and X99 chipset, so I figured it was a good time to 'future-proof' my box for a while.
Flunk - Friday, August 29, 2014 - link
Right, for rendering, video encoding, and server applications, and only if there is no GPU-accelerated version for the task at hand. You have to admit that embarrassingly parallel workloads are both rare and quite often better off handed to the GPU. You're also neglecting overclocking: if you take that into account, the lowest-end Haswell-E only has a 20-30% advantage. And I don't know about you, but I normally use Xeons for my servers.
Haswell-E has a point, but it's extremely niche and, dare I say, extremely overpriced. An 8-core at $600 would be a little more palatable to me, especially with these low clocks and uninspiring single-thread performance.
wireframed - Friday, August 29, 2014 - link
The 5960X is half the price of the equivalent Xeon. Sure, if your budget is unlimited, 1k or 2k per CPU doesn't matter, but how often is that realistic? For content creation, CPU performance is still very much relevant; GPU acceleration just isn't up to scratch in many areas: too little RAM, not flexible enough. When you're waiting days or weeks for renderings, every bit counts.
CaedenV - Friday, August 29, 2014 - link
Improvements are relative. For gaming... not so much. Most games still use only 4 cores (or fewer!) and rely more on clock rate and the GPU than on specific CPU technologies, so a newer 8-core really does not bring much more to the table for most games compared to an older quad core... and those Sandy Bridge parts could OC to the moon; even my locked part hits 4.2GHz without throwing a fuss. Even for things like HD video editing and basic 3D content creation, you are looking at minor improvements that are never going to be noticed by the end user. Move into 4K editing and larger 3D work... then you see substantial improvements moving to these new chips... but then again, you should probably be on a dual-Xeon setup for those kinds of high-end workloads. These chips are for gamers with too much money (a class I hope to join some day!) or professionals trying to pinch a few pennies; they simply are not practical in their benefits for either camp.
ArtShapiro - Friday, August 29, 2014 - link
Same here. I think the cost of operation is of concern in these days of escalating energy rates. I run the 2500K in a little Antec mITX case with something like a 150 or 160 watt built-in power supply. It idles in the low 20s, if I recall, meaning I can leave it on all day without California needing to build more nuclear power plants. I can only cringe at talk of 1500 watt power supplies.
wireframed - Friday, August 29, 2014 - link
Performance per watt is what's important. If the CPU is twice as fast and uses 60% more power, you still come out ahead. The idle draw is actually pretty good for Haswell-E; it's only when you start overclocking that it gets really crazy.
DDR4's main selling point is reduced power draw, so that helps as well.
actionjksn - Saturday, August 30, 2014 - link
If you have a 1500 watt power supply, it doesn't mean you're actually using 1500 watts; it will only put out what the system demands at whatever workload you're putting on it at the time. If you replaced your system with one of these big new ones, your monthly bill might go up 5 to 8 dollars per month if you are a pretty heavy user who really hammers the system frequently and hard. The only exception I can think of would be if you were mining Bitcoin 24/7 or something like that, and even then it would be your graphics cards hitting you hard on the electric bill. It may be a little higher in California, since you guys get overcharged for pretty much everything.
Flashman024 - Friday, May 8, 2015 - link
Just out of curiosity, what do you pay for electricity? Because I pay less here than I did when I lived in IA. We're at $0.10/kWh to $0.16/kWh (Tier 3, based on 1000kWh+ usage). I heard these tired blanket statements before we moved, and was pleased to find out they're mostly BS.
CaedenV - Friday, August 29, 2014 - link
Agreed, my little i7 2600 still keeps up just fine, and I am not really tempted to upgrade my system yet... maybe a new GPU, but the system itself is still just fine. Let's see some more focus on better single-thread performance, refine DDR4 support a bit more, and give PCIe SSDs a chance to catch on; then I will look into upgrading. Still, this is the first real step forward on the CPU side that we have seen in a good long time, and I am really excited to finally see some Intel consumer 8-core parts hit the market.
twtech - Friday, August 29, 2014 - link
The overclocking results are definitely a positive relative to the last generation, but really the pull-the-trigger point for me would have been the 5930K coming with 8 cores. It looks like I'll be waiting another generation as well. I'm currently running an OCed 3930K, and given the cost of this platform, the performance increase just doesn't justify the upgrade.
mapesdhs - Saturday, August 30, 2014 - link
I agree; I'd been hoping for a midrange 8-core of some kind, but Intel's once again shoved up the performance/price scale purely because it can. Shame.
And IMO the PCIe provision cut on the 5820K is in the wrong direction; by that I mean the 5820K should have 40 lanes, the 5930K 60, and the 5960X 80, something like that. Supporting 4-way native x16 with enough left over for good storage at the top end of the CPU range would make its price much more tolerable, but now the price difference is really just the +2 cores, at a time when the mid-range chip ought to be an 8-core anyway (remember the 3930K was an 8-core die with 2 cores disabled, i.e. Intel could have released a consumer 8-core a long time ago).
Ian.
PS. twtech, I've benched OC'd 3930Ks quite a lot. What do you use your system for?
garadante - Friday, August 29, 2014 - link
Sheesh... Looking at these performance reviews, I'm questioning whether I'll even go for an -E series when I invest in a full upgrade from my 2500k in 2-3 years. In many scenarios the 4790K is at or near the top of the rankings due to its higher stock/overclocked clock speeds, yet the platform is much cheaper. Perhaps the 4790K equivalent 2-3 generations from now will be a 6-core, or IPC will start going up if Intel focuses on it (I hope, but it's unlikely...). Otherwise these systems are just too expensive for little gain except in CPU-bound workloads.
l_d_allan - Friday, August 29, 2014 - link
Same, with a 2600k.
StevoLincolnite - Friday, August 29, 2014 - link
Same; I'll stick with my Core i7 3930K, which happily hits 5GHz. Years ago I never would have thought a CPU released 3 years prior to the latest and greatest would still be able to compete with or beat the top chips in most scenarios.
Hopefully Haswell-E's successor gives me the upgrade itch, and maybe DDR4 drops in price by then; my platform would be 4+ years old by that point.
Intel sure does make it hard to justify plonking down a few grand though. :(
Mithan - Friday, August 29, 2014 - link
I am waiting for Skylake next year.
Samus - Saturday, August 30, 2014 - link
i7-920 at 3.5GHz on X58 here (the first DDR3 chipset). I was going to hold out for the first DDR4 chipset to replace this thing, but... maybe I'll wait for DDR5. Even 5 years later, my first-gen i7 is faster than like 90% of the desktop CPUs out there TODAY. Intel really outdid themselves with Nehalem.
xrror - Saturday, August 30, 2014 - link
Anyone still on 1366, I'd recommend searching around on eBay for Xeon X5650s, which are getting dumped for less than $100. If your motherboard is alright with a 191 BCLK, guess what: a 4.2GHz Gulftown with the full 6 (12) cores. Yeay =)
DPOverLord - Sunday, August 31, 2014 - link
I upgraded from an i7 930 to a Xeon 5650. It's at 4.6GHz with SLI Titans in surround (4800 x 2560). Based on this, X99 seems to be a worthwhile upgrade, albeit an expensive one. Anyone else in my boat?
getho - Tuesday, September 2, 2014 - link
So is it worth upgrading an i7 @ 3.2GHz to the X5650? My bottleneck is single-thread performance (Lightroom likes fast chips).
xaml - Saturday, September 27, 2014 - link
I might give this a try once I stuff that Studio XPS 435 MT motherboard sporting an i7 940 into a Prodigy M. I actually had a look at compatible new processors, but they were too expensive. I'm not sure I'm going to trust a used offer, though.
wallysb01 - Saturday, August 30, 2014 - link
This really only makes sense if you don't have “real” work to do on your computer, or you only have work that utilizes 1-2 cores. Look at how these benchmarks stack up against the 5960X: http://www.anandtech.com/bench/product/47?vs=1317 . For single-threaded stuff it's 20-30% faster, and for multithreaded stuff it's around 3x faster. That's HUGE if you're actually putting your computer through a tough workload. Instead of something finishing in a month, it finishes in 10 days. You don't think that's worth it?
And with the i7-920, are you on a motherboard with SATA III, or do you have PCIe expansion for SATA III? For those who are I/O-limited, SATA III with a couple of striped SSDs is a tremendous improvement over what was around 5 years ago.
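The month-to-ten-days claim in the comment above is just its ~3x multithreaded figure applied to wall-clock time; a quick sanity check (the 3x factor is the commenter's own estimate, not a measured number):

```python
# Back-of-envelope wall-clock arithmetic for a throughput speedup.
def sped_up(days, speedup):
    """Days of wall-clock time after applying a throughput speedup factor."""
    return days / speedup

print(sped_up(30, 3.0))   # a month-long job at ~3x finishes in ~10 days
print(sped_up(30, 1.25))  # the ~25% single-thread gain helps far less: 24 days
```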
TonyZ - Sunday, August 31, 2014 - link
Same here; I run my 2500K at 4.2 on air and I just haven't seen any reason to upgrade yet, and I've been running it for nearly 3 years now... We need something new and groundbreaking...
chinmi - Sunday, August 31, 2014 - link
Came here to say that :) It may be 10% slower, but it's 90% cheaper.
TiGr1982 - Tuesday, September 2, 2014 - link
These are different classes of hardware, meant for considerably different purposes. It's like reading a review of an Escalade and then saying, "I'll stick to my Focus then" :)
TiGr1982 - Tuesday, September 2, 2014 - link
I meant that comparing the 2500K and Haswell-E is like comparing an Escalade and a Focus.
Crazy forum engine; AT really should look around, notice that better forums have been on the web for 10+ years, and ask some web developer to make a proper forum (not something like a student's alpha-version course project). It's a bit of a shame for such a good website. Sorry for the abruptness, but this is indeed the case.
Stas - Wednesday, September 3, 2014 - link
Likewise. 4.4GHz is plenty. Lived through video card upgrades and still GPU-limited with an overclocked HD 7950.
Stas - Wednesday, September 3, 2014 - link
*3 video card upgrades
q2klepto - Monday, September 8, 2014 - link
Yup; thankfully, new games are almost completely limited by the GPU at high resolution/quality settings (1440p/High+). I think my i7 [email protected] and R9 290X can last another year at least, and I can afford to put it under water instead of upgrading.
For normal desktop use, an SSD and 8GB+ of RAM will burn through everything without a problem.
imaheadcase - Friday, August 29, 2014 - link
Correction? I think you mean "also featuring 6 cores".
"The entry level model is a slightly slower i7-5820K, also featuring eight cores and DDR4-2133 support. The main difference here is that it only has 28 PCIe 3.0 lanes. When I first read this, I was relatively shocked, but if you consider it from a point of segmentation in the product stack, it makes sense."
Ian Cutress - Friday, August 29, 2014 - link
Corrected :)
MrBungle123 - Friday, August 29, 2014 - link
Is there anything but an 'edge case' justification for upgrading any more? PCs used to be exciting because things were always changing; this is just getting boring.
edzieba - Friday, August 29, 2014 - link
VR. The frame-rendering time requirements are pretty stringent. This is more on the GPU than the CPU for graphics, but you want to try to keep physics ticks at a good rate to prevent objects jumping around the world.
tech6 - Friday, August 29, 2014 - link
Even the 'edge case' is no longer a slam dunk, as most workstation workloads like CAD do very well on the 4960X. The only real cases are heavy scientific number crunching, animation rendering, and cracking password hashes by brute force.
MrBungle123 - Friday, August 29, 2014 - link
It used to be that if you were 2 generations behind, your system was so slow and irrelevant that you just couldn't run modern software at anything approaching an acceptable level. Now we have a situation where ancient systems on X58 (circa 2008) are still close enough in performance to the extreme high end in 2014 not only to be in this review, but also to fit somewhere into the top half of the product stack of modern Haswell-based hardware. If you compared a top-of-the-line Nehalem chip to its equivalent from 6 years prior (a Northwood-core P4 from 2002), it would make a mockery of 8 of them at the same time. This article is reporting a 31% jump from Nehalem to Haswell-E; that kind of performance increase (as a percentage) would have amounted to 2 or 3 months' worth of clock speed bumps at any other time in the history of PCs.
wireframed - Friday, August 29, 2014 - link
Somewhat true, but consider that you get a 30% IPC increase, a 25-30% frequency increase, and a 50% core-count increase, and it adds up to around a 100% increase in performance. Granted, 100-110% over 6 years is hardly impressive compared to earlier eras, but there isn't that much low-hanging fruit left.
Also, the mainstream which drives revenue is, as you point out, largely content. They're looking at adding devices like tablets and consoles instead of upgrading their computers. That probably plays into the amount of R&D Intel decides to spend on the HEDT platform.
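The compounding in the comment above can be checked numerically; a sketch using the commenter's own figures (note that perfect core scaling is an optimistic assumption, which is roughly why the realized estimate sits nearer ~100% than the ideal product):

```python
# Compound the quoted per-component gains, Nehalem -> 6-core Haswell-E.
ipc = 1.30        # ~30% IPC increase (commenter's figure)
frequency = 1.25  # low end of the quoted 25-30% frequency increase
cores = 1.50      # 4 -> 6 cores, +50%

ideal = ipc * frequency * cores
print(f"{ideal:.4f}x")  # ideal compound speedup with perfect scaling
# Real multithreaded workloads scale sub-linearly with core count,
# which pulls the realized gain down toward the ~2x (+100%) estimate.
```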
Kain_niaK - Friday, August 29, 2014 - link
Exactly, exactly, exactly! I am still on X58 with an i7 990X. I don't play many games any more, but even to play the newest games... I do not need to upgrade my CPU and have not needed to since 2010; even my i7 975X, or an i7 920 from 2008, would still be more than fast enough. Then music: I use my system mainly as a digital audio workstation. Most of my plugins and music applications support multithreading, and I cannot realistically add so much stuff to a project that it maxes out the CPU. And rendering time? Who cares; most of my renders are done before I am done playing chess on the toilet anyway. Then overclocking: the i7 920 and anything on X58 was great. After that... the fun and the excitement kind of went away and have never come back. What's the SuperPi mod record these days? I have not heard about any significant record breaking for a long time; back in 2008, 2009, and 2010 I was hearing news about famous new overclocking records, and after that, it stopped. Let's face it, we hit a clock limit, and a breakthrough in single-threaded speed is just not going to happen until some genius designs a totally different system, probably using light rather than electricity. But that's like 20 years away, because you don't just start over. All we have been doing is improving old technology, not inventing something completely new. We are hitting the limits of nature... so all the geeks and the nerds will just have to wait at least another 10 years before we get to the exciting stuff again.
MrBungle123 - Friday, August 29, 2014 - link
At this rate, 10 years from now any Haswell i7 is still going to be within spitting distance of whatever the best is. lol
If you want Skylake performance today, OC your Haswell by 250MHz, your Ivy Bridge by 400MHz, or your Sandy Bridge by 600MHz.
Laststop311 - Saturday, August 30, 2014 - link
It won't be the CPU performance difference that makes you upgrade, it will be the new features. Skylake will have PCIe 4.0 and USB 3.1, and the chipsets after that will add more new things: faster storage standards and who knows what else. I was already in this position. The speed of the i7-980X was still really good; I got mine OC'd to 4261MHz. But guess what: on X58 you get no PCIe 3.0, no SATA 3, no USB 3.0, and these features have become very standard. You also get no SATA Express or PCIe ultra M.2, which will soon be commonplace, and no quad-channel memory or DDR4. All the missing features made me upgrade, not the speed. Similar situations in the future will cause people to upgrade every 4-6 years.
TiGr1982 - Tuesday, September 2, 2014 - link
You can still plug a PCIe USB 3.0 expansion card in there and get at least 2 USB 3.0 ports on the back of the case, to somewhat mitigate the age of the platform. But with PCIe 2.0 and SATA 2, one is stuck, indeed.
actionjksn - Saturday, August 30, 2014 - link
Nehalem was great, but the last big bump was really Sandy Bridge; after that, not so much. This is actually a big concern for the processor makers: the technology and the silicon itself are reaching their limits as far as making significant gains in the next generations. They were getting big performance gains from die shrinks alone, but those days are over. And how much more can they shrink them? It's getting harder all the time; they may get to 10nm, perhaps 7nm.
wireframed - Saturday, August 30, 2014 - link
I'm guessing you don't do much animation? :) Even though many of my renders only run around 10-20 minutes max, when you do a 30s animation you can multiply that render time by 900... even at a minute a frame (which is fairly fast), that's still 15 hours. But I think the main draw for people on X58 like us is the newer platform. X58 is really low on modern features; it came out at an awkward time. No native USB 3.0, no 6Gbps SATA, no Thunderbolt (which might be relevant in the future), no PCIe 3.0, and no support for the newer standards coming out with Z97.
Also consider that DDR4 is going to be the standard going forward, so investing a lot of money in 32-64GB of DDR3, even at the lower prices, just seems like throwing good money after bad.
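The "multiply by 900" figure in the comment above works out as follows (assuming a 30fps frame rate, which is what makes a 30-second clip come to 900 frames):

```python
# Animation render-time arithmetic behind the comment's 900x factor.
fps = 30
clip_seconds = 30
frames = fps * clip_seconds        # 900 frames in a 30 s clip
minutes_per_frame = 1.0            # "a minute a frame (which is fairly fast)"
hours = frames * minutes_per_frame / 60
print(frames, hours)               # 900 frames -> 15.0 hours of rendering
```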
Kain_niaK - Saturday, August 30, 2014 - link
You are right! When I talk about a render, I mean exporting a music project to separate wave files (one per track). You can either record while playing it back or you can render it. Most VST plugins have a render mode as well, so the end result is sometimes quite different: not necessarily better, but different. And recording is done in real time, so sometimes a render is a lot faster.
kevith - Friday, August 29, 2014 - link
Right on!
bebimbap - Friday, August 29, 2014 - link
The 31% is at the same frequency. Being able to do double the work with the same amount of energy, or the same work with half the energy, is a big deal.
Imagine if cars could do that: 2x the horsepower on the same amount of gas, or the same mileage on half the gas.
Laststop311 - Friday, August 29, 2014 - link
I've been happily on my X58 i7-980X for 4 years, and honestly, if my chipset weren't missing so many modern features I wouldn't even upgrade. But the lack of PCIe 3.0, SATA 3, and USB 3.0 is just becoming a pain in the ass. Your "circa 2008" is wrong, too; I know because I got the i7-980X right when it came out, and that was in 2010, not 2008, so that would be 100-110% over 4 years. I really want an ultra M.2 PCIe 3.0 x4 drive as my main OS and application drive; I can't wait to pop in a 1TB Samsung SM951 ultra M.2 drive. I also can't wait for 16GB DDR4 sticks to start showing up: 8x16GB for 128GB of RAM. Can you say 112GB RAM drive with 16GB left for the system? It's got to be awesome working on video editing with your entire video in a super-fast RAM drive. My memory and storage are what's going to boost performance for me coming from X58; the CPU will too, but not like an ultra M.2 SSD with something like 1400MB/s reads and 1200MB/s writes.
nonoverclock - Saturday, August 30, 2014 - link
I recently went from an i7-950 to an i7 4770. It's made for a solid bump in performance; I notice it particularly in navigating game menus that used to be a little sluggish.
Railgun - Friday, August 29, 2014 - link
Sold! As I'm still on Gulftown, this is just what the doctor ordered. The i7-970, IMHO, has held its own for four years, at least for my needs. This will be one helluva shot in the arm.
PEJUman - Friday, August 29, 2014 - link
With you on this. I actually sold my Gulftown with 48GB last year for the 6x SATA on Z87; the X58 + Gulftown is one heck of a system. If I didn't already have the 4770K, this 5960X or the 5820K would be extremely difficult to resist.
I was actually tempted to mount the mobo + CPU + Intel heatsink on the wall instead of selling it; it was a piece of computing history.
bebimbap - Friday, August 29, 2014 - link
If I remember the weight of that particular heatsink, you would need a sturdy mount for your wall... thumbtacks do not apply.
Laststop311 - Saturday, August 30, 2014 - link
48GB on Gulftown? That's strange, considering Gulftown supports 24GB of RAM: http://ark.intel.com/products/47932 . Right on Intel's page. I sense a fibber. Why lie about what PC you have?
StevoLincolnite - Saturday, August 30, 2014 - link
You obviously have never played with an X58 system and its triple-channel DDR3 setup. You *can* actually run 48GB of RAM, but it's not guaranteed to work; some people managed to win the luck of the draw early on, while others who installed 48GB would only have 32GB detected.
Conversely, X58 processors actually have a 36-bit address bus, so theoretically they could support even 64GB of RAM.
Basically, Intel guarantees up to 24GB to function; that doesn't mean the platform cannot handle more.
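The 64GB theoretical ceiling mentioned above follows directly from the 36-bit physical address bus:

```python
# Why a 36-bit physical address bus tops out at 64GB:
addr_bits = 36
max_bytes = 1 << addr_bits     # 2**36 addressable bytes
gib = max_bytes // (1 << 30)   # convert bytes to GiB
print(gib)                     # 64
```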
K_Space - Friday, August 29, 2014 - link
Agreed. I do think this justifies the switch from the Nehalem 920; however, the price of DDR4 is far too prohibitive. Given that Skylake is rumoured to be around the corner, perhaps Q3 2015 (it will probably get pushed back), I'm just going to grab a bargain used X5670 6-core for $120. I have to say, I never thought the 4.1GHz 24/7 920 would hold up this long.
Multi-GPU and newer PCIe storage solutions will mean that 40 lanes will matter in a generation or two.
GammaLaser - Friday, August 29, 2014 - link
Cinebench R15 multithreaded score is "1337", coincidence? I think not.ochadd - Friday, August 29, 2014 - link
As a gamer it looks like my overclocked 2600k will remain in it's place. Was hoping for more.maroon1 - Friday, August 29, 2014 - link
Games are limited by the GPU most of the time. Even if the CPU is 10 times faster, you might not see big gains in gaming.
The_Assimilator - Friday, August 29, 2014 - link
So X99 has the same specifications as Z87, just with 4 extra SATA ports that cannot be RAIDed. Warrants a resounding "meh" from me. Intel could have at least increased the number of USB 3.0 ports.
wireframed - Friday, August 29, 2014 - link
The Z-platform is the enthusiast platform, this is an entirely different segment. For instance, you'll never get 6 cores on the Z-platform, so the comparison is kinda silly. :)
garadante - Friday, August 29, 2014 - link
I hope you're wrong with your never statement. If we're still on 4 cores on the Z platform by Montlake or the generation afterwards, I will be very, very disappointed. Intel either needs to aggressively expand core counts to give developers a reason to make software utilizing more threads, or push aggressively for increased IPC. Otherwise my 2500K could last a very, very long time.
Makaveli - Friday, August 29, 2014 - link
Why do you guys keep referring to the i7 990X chip as Nehalem when it's Gulftown?
nonoverclock - Saturday, August 30, 2014 - link
Good point. I looked it up and Gulftown is a Westmere microarchitecture CPU.
TiGr1982 - Tuesday, September 2, 2014 - link
Right. Gulftown was the 32 nm shrink of 45 nm Nehalem. However, in that case there was no associated IPC change (except the AES addition in Gulftown), so in lightly-threaded tasks there was no performance difference between the two, and people often informally confuse them, despite the fact that Gulftown has two more cores thanks to 32 nm instead of 45 nm.
SantaAna12 - Friday, August 29, 2014 - link
Good timing, AnandTech! I might have missed it, but any variance at 1440?
danjw - Friday, August 29, 2014 - link
"Intel has decided to made the lower cost Haswell-E processor ..." I think it should be /made/make.
Ian Cutress - Monday, September 1, 2014 - link
Thanks for the catch, edited :)
bernstein - Friday, August 29, 2014 - link
Thanks for this review. It's now clear this platform is for an extremely narrow target audience: those requiring the most computational speed on a single mainboard, requiring 4 RAM channels or 40 PCIe lanes, but cheaper than a Xeon E5 and without the need for ECC. I truly wonder what practical application there is? I mean, it can be neither mission-critical nor scientific stuff, because that would require ECC... gaming at 8K? (No, 4K works fine on a 4790K with two Titans.)
bombardira - Friday, August 29, 2014 - link
video editing, 3d rendering, audio/photo work should still benefit from lots of cores.
Kain_niaK - Friday, August 29, 2014 - link
Yeah, until you stop upgrading and build a rendering farm that can split the workload over multiple systems. Then you just keep buying more of the best price/quality stuff, CPUs or GPUs.
Brigaid - Friday, August 29, 2014 - link
"Modules should be available from DDR3-2133 to DDR3-3200 at launch, with the higher end of the spectrum being announced by both G.Skill and Corsair." Page 1. Shouldn't these both say DDR4-?
Ian Cutress - Monday, September 1, 2014 - link
Thanks for the catch, edited :)
apexjr - Friday, August 29, 2014 - link
Ian - How come we didn't see any tests of overclocking with the 6-core 5930K and 5820K? I had just purchased an i7-4790K that isn't even installed yet. I am not just a gamer, I do video and photo work as well and I am constantly CPU limited. The X CPU is way too expensive, so these others, particularly the 5820K overclocked, might be a perfect sweet spot for a lot of people.
Ian Cutress - Monday, September 1, 2014 - link
When I tested the 5930K/5820K, the motherboard BIOSes were still very early alpha builds and did not allow overclocking. If I can get these CPUs in again to test (they had to be sent back), I will do some overclocking results for sure.jwcalla - Friday, August 29, 2014 - link
Might as well wait for them to fix TSX at this point.
iwod - Friday, August 29, 2014 - link
How likely will this be in a Mac Pro?
DigitalFreak - Friday, August 29, 2014 - link
Doubtful. Apple is using Xeons in the Mac Pro.hansmuff - Friday, August 29, 2014 - link
I'll be waiting for two more generations. Maybe something worthwhile comes along to replace my 2600k at 4.4GHz. I'm glad the review shows so clearly where this new chip excels and who should save their money.
Yuriman - Friday, August 29, 2014 - link
Typo: "Modules should be available from DDR3-2133 to DDR3-3200 at launch, with the higher end of the spectrum being announced by both G.Skill and Corsair. See our DDR4 article later this week for more extensive testing."
TelstarTOS - Friday, August 29, 2014 - link
Good article, but the overclocking comparisons are a bit limited, i.e. 5930K and 5820K overclocking tables are not provided, nor a comparison with older SB/IB CPUs at around 4.5GHz, which most people still have while deciding whether to upgrade to an X99 or a Z97 platform. A more accurate RAM performance comparison is also missing.
Ian Cutress - Monday, September 1, 2014 - link
At the time I had the 5930K/5820K, I was not in a position to be able to overclock due to early alpha firmware. Due to our newer benchmarking suite, I still need to go back to the early Sandy (non-E) CPUs to retest. Anything you see in Bench with the power listed has been retested at least in part this year, depending on my scheduling. Unfortunately I don't have the space to have this as an ongoing project; it occurs in time with reviews.
name99 - Friday, August 29, 2014 - link
"For the six core models, the i7-5930K and the i7-5820K, one pair of cores is disabled; the pair which is disabled is not always constant, but will always be a left-to-right pair from the four rows as shown in the image. Unlike the Xeon range where sometimes the additional cache from disabled cores is still available, the L3 cache for these two cores will be disabled also."Are you sure that these various statements are correct? I'm not doubting you, but I would point out that at Hot Chips 2014 discussing the Xeon IVB server, Intel stated that they'd designed the floorplan to be "choppable".
They gave a diagram that showed a base of fifteen (3x5) CPUs+L3 slices, with chop lines to take off the right-hand 5 CPUs, then to take off one or two of the horizontal pairs (taking the 10 CPUs down to 8 and then 6).
Point is:
- the impression they gave is that these reduced CPU counts are not (at least not PRIMARILY) from full dies with disabled (or nonfunctional) cores --- they are manufactured to have smaller area with fewer cores.
- which suggests that versions with fewer cores but larger cache are some sort of anomaly (because the chop cuts out L3 slices along with cores). Perhaps THOSE are the chips that really did have one or two non-functional cores but with still functional L3 slices?
Ian Cutress - Monday, September 1, 2014 - link
As far as we know, the floorplan for the die for i7 is an 8-core, with the 6-core models being disabled versions rather than wholly new dies. With the Ivy-E Xeons, there are a number of CPUs that have high L3-per-core numbers due to the way the cores are disabled - Intel usually sticks to 3 floor plans or so depending on how their product line is stacked up. This may change with Haswell-E, though the Xeons have not been officially released yet. The Ivy-E floorplans can be found here, where I did a breakdown of L3/core:
http://www.anandtech.com/show/7852/intel-xeon-e526...
botijo - Friday, August 29, 2014 - link
I wonder, isn't RAM especially expensive in the builds?
icrf - Friday, August 29, 2014 - link
Are the Xeon versions of these chips still slated for release in two weeks after IDF? I want more cores plus AVX2, but I also want VT-d.
TheinsanegamerN - Monday, September 1, 2014 - link
All 3 of these support VT-d.icrf - Thursday, September 4, 2014 - link
Ah, I realize now that I'm really after a motherboard question. Support was usually restricted to server chipsets.
Solix - Friday, August 29, 2014 - link
I'm still not sure I was able to glean enough data to determine efficiency. If we consider that third-party SATA 6 and USB 3 is just fine by me, and DDR3 price is nice and 1.35V CAS 8 is easy, the question becomes a little more murky. Is AVX2 still broken (I believe that was what was being disabled in microcode, right)? If so, and given that I use 3 GPUs and some PCIe SSDs, the 5820K is less interesting to me. So my current 4930K vs. a 5930K, for me, comes down to power efficiency under load once overclocked and undervolted. This box is more active than it is idle, and my experiments on the desktop showed that a properly overclocked Ivy Bridge at 4.8GHz or so could go toe to toe with a Haswell at 4.3-ish but was more power efficient in the process. Maybe I stick with the 4930K. What do you think, folks?
jwcalla - Friday, August 29, 2014 - link
TSX is still broken, yes.
Ammohunt - Friday, August 29, 2014 - link
yet another new socket?!?!?!?! F U intel.
StealthGhost - Friday, August 29, 2014 - link
2500k / 2600k benchmarks to compare against would be amazing.
Etern205 - Friday, August 29, 2014 - link
Cinebench R15 Multithread benchmark. Did the 5960X really get a "1337"?
JumpingJack - Monday, September 1, 2014 - link
Yes, it really got 1337 for CB R15; several sites are showing around the 1330 mark: http://techreport.com/review/26977/intel-core-i7-5...
Michael REMY - Friday, August 29, 2014 - link
Again, in your table of extreme Core i7 CPUs, you forgot the last 4-core Nehalem, which is the i7-975X at 3.3GHz. No, the 965X is not the latest 4-core extreme!
Death666Angel - Friday, August 29, 2014 - link
Considering this would have cost me ~340€ over my i7-4770K (which I have @ 4.5GHz and delidded), because of the price difference in CPU and the fact that I had a 1150 socket mainboard from my retired mining rig, I'm not too salty about it. At least it is 6 core at the low end, that is encouraging. I've been mostly fine with my i7-860 so I guess the i7-4770k will serve me a while.
Death666Angel - Saturday, August 30, 2014 - link
"With ASUS motherboards, they have implemented a new onboard button which tells 2x/3x GPU users which slots to go in with LEDs on the motherboard to avoid confusion."Because looking stuff up in the manual is way too complicated!
anactoraaron - Friday, August 29, 2014 - link
The 5820 can be had for $299 at micro center and they will also discount a compatible motherboard by $40. Jus' sayin'. IDK if there's some kind of ad agreement, etc for listing Newegg's price... Anyone shopping for anything should always shop around.tuxRoller - Friday, August 29, 2014 - link
"Very few PC games lose out due to having PCIe 3.0 x8 over PCIe 3.0 x16" Any? Even BF4 might be down to other factors. It might be more useful to determine these bottlenecks with UHD.
Ian Cutress - Monday, September 1, 2014 - link
I want to try with UHD. Need the monitors though.
Mr Perfect - Friday, August 29, 2014 - link
I realize you were trying to CPU-limit the benchmarks by using such a low resolution, but does this still hold up when running, say, three 1440p monitors? Wouldn't that be the time when the GPUs are maxed out and start shuttling large amounts of data between themselves?
Ian Cutress - Monday, September 1, 2014 - link
I want to test with higher resolutions in the near future, although my monitor situation is not as fruitful as I would hope. There is no big AnandTech warehouse, we all work in our corner of the world so shipping around this HW is difficult.KAlmquist - Friday, August 29, 2014 - link
"The move to DDR4 2133 C15 would seem to have latency benefits over previous DDR3-1866 and DDR3-1600 implementations as well." If my math is correct, this is wrong. With DDR4-2133 timings of 15-15-15, each of those 15's corresponds to 14.1 nanoseconds. (Divide 2133 by two to get the actual frequency, then divide the clock count by the frequency.) With DDR3-1600 and the common 9-9-9 timings, each time is only 11.25 nanoseconds. With DDR3, the actual transfer of the data takes four clock cycles (there are eight transfers, but "DDR" stands for "double data rate", meaning two transfers per clock cycle). That translates to 5 nanoseconds on DDR3-1600. DDR4 transfers twice as much data at a time, so with DDR4-2133 a transfer takes eight clock cycles, or 7.5 nanoseconds. So DDR3-1600 has lower latency than the DDR4-2133 memory.
So why does Sandra report a memory latency of around 28.75 nanoseconds (92 clock cycles at 3.2 GHz), as shown in the chart on page 2 of this review? If a bank does not have an open page, then the memory latency should be 15+15+8 clock cycles, or 35.6 nanoseconds, not counting the latency internal to the processor. So the Sandra benchmark result seems implausible to me. As far as I can tell, the source code for the Sandra benchmark is not available, so there is no way to tell exactly what it is measuring.
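KAlmquist's arithmetic above can be reproduced step by step (a sketch of the commenter's own calculation, not an authoritative memory model; the timings are the ones quoted in the comment):

```python
def ns_per_clock(transfer_rate):
    """Nanoseconds per memory clock for a DDR transfer rate in MT/s.
    DDR moves two transfers per clock, so the clock runs at half the rate."""
    return 1000.0 / (transfer_rate / 2)

# CAS latency in wall-clock time
print(round(15 * ns_per_clock(2133), 1))  # 14.1 ns for DDR4-2133 CL15
print(round(9 * ns_per_clock(1600), 2))   # 11.25 ns for DDR3-1600 CL9

# Burst transfer time as described above: 4 clocks on DDR3, 8 on DDR4
print(round(4 * ns_per_clock(1600), 1))   # 5.0 ns
print(round(8 * ns_per_clock(2133), 1))   # 7.5 ns

# Closed-page estimate from the comment: tRCD + CL + transfer = 15 + 15 + 8 clocks
print(round((15 + 15 + 8) * ns_per_clock(2133), 1))  # 35.6 ns
```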
JumpingJack - Monday, September 1, 2014 - link
Good points.
NeatOman - Friday, August 29, 2014 - link
Sooo... it's good for very high end gaming and rendering.
Motion2082 - Friday, August 29, 2014 - link
Hey guys, I'm running an i7 2600K on an M4E-Z @ 4.8GHz. My system is fast enough for most applications, it only struggles with multiple applications running. Should I be looking at Haswell-E or waiting until Broadwell? The only annoyances I have with the i7 2600K are slow video encoding and restrictions on multi-tasking.
TeXWiller - Saturday, August 30, 2014 - link
Adjust your process priorities and core affinities manually, if necessary. The old workhorse will carry you a lot longer still. I may have a 14-hour rendering process going with fully loaded cores and still run a nice FPS session at will, provided that I adjust the process priority manually.
What is your RAM speed? I find 2133 @ CL10 to be optimal with the M4E/Z (I have five of those boards, all with 2700Ks @ 5.0).
Make sure you have a good SATA3 SSD to exploit the Intel SATA3 port. Don't use the Marvell ports. An H80i with NDS fans works really well for low noise, but even an old TRUE with any two decent fans will happily run a 2700K @ 5.0 (I've used the TRUE, TRUE Black, VenomousX, Phanteks, etc., but recently bought a whole pile of refurb H80s for a good price).
If you want an intermediate upgrade, get a used 3930K C2 and a good used X79 board (I keep buying the ASUS P7X79 WS, done five so far), move over your RAM, etc. Note the same caveats re Intel/Marvell ports, use an H100i + NDS fans instead, and voila, you're up and running with a 6-core for not much outlay. A recent build I did employed a 3960X which cost 245 UKP, the above ASUS board for 190 UKP, etc. It gave 1221 for CB R15, while your 2600K @ 4.8 should give around 850, so that's a very nice bump for threaded tasks and running multiple apps in general.
I suggest an 840 Pro or 850 Pro for an SSD, though there are lots of used bargains available. I bagged a 512GB Vector for 160, ideal as a cache for AE, while an OEM 840 Pro was only 87. Best of all, I keep getting 1475W Thermaltake Toughpower XT Gold units for around 125 UKP (less than half normal new cost), perfect for handling four heavy GPUs for CUDA or whatever (my system has four GTX 580 3GB atm) in an oc'd 6-core system with multiple SSDs, RAID, etc.
More references, examples & suchlike available on request - don't want to clog this thread.
Ian.
LordHaHa - Friday, August 29, 2014 - link
Mixed feelings on this one. This is a solid effort here, and the 5820K at around $390 is potentially interesting; it seems very similar to the 3930K and it's a bit cheaper by default. That said, I don't see much of a reason to upgrade from SB-E or IB-E if you already have something in that range. Certainly even the 5820K is a bit overkill for gaming at the price.
I do have to say we live in strange times where even latter-day Core 2 systems (paired with very good video cards, as a caveat) are still fairly capable for most single-player gaming environments; certainly they can still handle any casual task thrown at them. And anyone who's gotten to Sandy Bridge has had little reason to upgrade their systems yet, for sure. Ten years ago, saying "I have no reason to upgrade my 2-4 generation old box" would have been crazy talk.
Laststop311 - Friday, August 29, 2014 - link
Very impressed with the 8-core overclock. I was worried that having such a low stock clock meant the OC wouldn't be too good, but they had it on a crappy closed-loop 140mm rad and it did well. I have a custom loop that cools the motherboard chipset and VRM, the CPU, and the GPU: a triple 5.25" reservoir with dual MCP 655 pumps in series at setting 4 (1 below max pump speed), and a 60mm-thick 420mm rad with 6x 140mm Noctua A15 fans in push-pull. Hopefully I can hit 4.8GHz, I'll be very happy, but as long as I can hit 4.5GHz I'll be satisfied. I'm coming from a 4261MHz i7-980X, so this is going to be a really big upgrade for my video work and even a noticeable boost in gaming; not huge, but noticeable. I'm totally pleased with the i7-5960X. Waited 3 generations from Nehalem to upgrade. With Haswell-E I'll be waiting at least 4, possibly 6 generations to upgrade unless some crazy new chipset feature makes me do it earlier.
cactusdog - Saturday, August 30, 2014 - link
I really wish they took the voltage regulator off the CPU. It's really a bit sad that a 4790K can beat this high-end expensive chip in single-threaded tasks. I was really looking forward to this, but the performance doesn't justify the cost unless you do a lot of multi-threaded stuff. With Skylake coming with PCI-E 4, this system is going to be outdated pretty quickly. One thing is for sure, the days of big overclocks on the CPU side are over.
ToTTenTranz - Saturday, August 30, 2014 - link
The 28 lanes in the 5820K don't make much difference in SLI because it uses the SLI bridge as the interconnect between the graphics cards. It would be interesting to see if the 16x/8x configuration makes any difference with two of the newer bridgeless Radeon cards.
Especially since the first build exemplified in this review uses that same configuration (5820k with two Radeon 285 cards).
mlambert890 - Monday, September 1, 2014 - link
There absolutely is *not* data going across the SLI bridge at all. The only thing going across the bridge is timing and signaling info; it is a tiny 1GB/s interconnect. At extremely high resolutions on PCIe 2 SLI is where you specifically DO need more lanes; PCIe 3 alleviates this. Multi-4K would bottleneck again, but the best GPUs can barely handle a single 4K display in high detail anyhow, even in tri-SLI.
fallaha56 - Saturday, August 30, 2014 - link
Hmm, one last benchmark (emulation-related) I'd like to see (and suspect many others would too): run PCSX2 in software mode with 8+ threads and see if there are benefits. Try something really tough like Shadow of the Colossus.
Looks like this chip is a man looking for a mission, is that it?
tuxfool - Saturday, August 30, 2014 - link
I'm not so sure it would be of great benefit. Emulators are thread-limited by the hardware they're attempting to emulate. I read somewhere that PCSX2 has a thread limit due to the difficulty of synchronizing each PS2 hardware component in each thread. Dolphin also favors clock speed over simultaneous threads.
bleh0 - Saturday, August 30, 2014 - link
After holding off for 4 years I think it is time for an upgrade. While the builds in the article are good I'm still looking for more.
chizow - Saturday, August 30, 2014 - link
Glad I upgraded to Z87+4770K last year. While it is great that Intel *FINALLY* upgraded the rest of their platform to native USB 3.0 and all SATA3 (6G) ports, along with newer options like M.2 and SATA Express, the drop in clocks to accommodate the higher number of cores and higher resultant TDP makes it a wash for my primary purpose: gaming. I also didn't want to have to pay an early adopter's tax on DDR4, and it looks like that tax is high right now. Coming from X58, I was also very pleased with the drop in total system power going to Z87. I'd estimate between a 920@4GHz and the difference in board power, it's pulling about 50W less at idle and 100W less under load. My Kill-A-Watt measurements indicate similar.
Still, if buying today and putting together a new platform for the future, this would be a good option now that Intel has addressed all of the major issues I had with the X79 platform (full native USB 3.0, full native SATA6G, official PCIe 3.0 etc).
@Ian, I am sure it is due to being limited to what you have on hand, but it would have been nice to see some more powerful GPUs tested, just to better illustrate potential CPU performance differences once the GPU bottleneck is lifted. Nice job though, the new graph toggles are really slick.
AsakuraZero - Saturday, August 30, 2014 - link
I was worried about these new processors since I just bought an i7 4770K, and damn, I'm still a happy owner of an amazing chip.
TEAMSWITCHER - Saturday, August 30, 2014 - link
I pulled the trigger on the 4770K last year also... but I did so only because Ivy Bridge-E was stuck on the X79 chipset. For me, it was an interim solution while I waited for Haswell-E. When my new X99 parts arrive next week, I'll upgrade my system and put the Haswell parts on craigslist - I should be able to sell them for a bargain price and reclaim some cash.
AsakuraZero - Sunday, August 31, 2014 - link
The 4770K still sells well on eBay; I got mine at 270 (used), looked like new and works like a champ. Haswell-E doesn't look bad, but in a world where x86 doesn't use all the cores in many applications, or in gaming, I'm happy with my purchase. Enjoy your CPU and milk every buck out of it!
Jonathan_Rung - Saturday, August 30, 2014 - link
"With Haswell LGA1150 CPUs, while the turbo frequency of the i7-4770K was 3.9 GHz, some CPUs barely managed 4.2 GHz for a 24/7 system." I think I spotted a little typo on page 3; did you mean "With Haswell Z87..."? I didn't think any of the 4770K CPUs could use an 1150 socket. Or am I misreading it?
Mr Perfect - Saturday, August 30, 2014 - link
The Haswell i7-4770k is socket 1150.
http://www.newegg.com/Product/Product.aspx?Item=N8...
Jonathan_Rung - Saturday, August 30, 2014 - link
Oh, you're right. I guess I'm confusing sockets and chipsets. Obviously CPUs need a matching socket, but do they also need a matching chipset, or do newer motherboards just enable newer feature sets introduced by the CPU? Or am I still getting it wrong? It seems like every time a new generation of CPUs is released, a bunch of new motherboards with identical chipsets show up to complement them, so I thought each generation of CPUs had a matching chipset that they need to pair with.
Sorry, this is like amateur hour, I'll just google this stuff. It's strange, I like reading these articles, but I haven't the slightest idea why - I only understand what they're saying like half of the time!
mcbowler - Saturday, August 30, 2014 - link
at least my dolphin rating is still on top! not sure why that is important.
akLuckyCharm - Saturday, August 30, 2014 - link
Thank you AnandTech for showing performance with ALL of the CPUs overclocked, as opposed to only one chip overclocked and the rest at stock. This makes the comparison much more fair and realistic.
MrSpadge - Saturday, August 30, 2014 - link
I really like the analysis of performance per clock. It really helps me to judge CPU performance. However, why do you disable HT for these tests? All the CPUs considered have it, and on average it boosts performance. And most importantly: "Haswell brought... two new execution ports and increased buffers to feed an increased parallel set of execution resources... Intel doubled the L1 cache bandwidth."
Right. Which means Haswell may very well see better performance improvements from enabling HT than older CPUs. This could be very relevant for the workloads which people should run on these 6 and 8 core monsters. And by that I'm not talking about gaming ;)
vision33r - Sunday, August 31, 2014 - link
Most people don't need this setup, because the only thing here is really your Haswell processor with a couple of extra cores, a few different parts for DDR4, and a little bump in L2, and that's it. Games don't need all of these changes because most games today aren't sophisticated enough to utilize them. I can certainly see that my VM and rendering machine will love the new 5960X and DDR4, but it's not worth investing in a new platform when it just came out.
Anyone that does high end AV content creation will see a big bump if you got the money to spend on it.
HongKonger1997 - Sunday, August 31, 2014 - link
So if I only game with my computer, is my 3960X still good?
MrSpadge - Sunday, August 31, 2014 - link
Of course. Even a Sandy Bridge i5 would easily do the job; a 3960X is actually complete overkill.
Artemis *Seven* - Sunday, August 31, 2014 - link
All great benchmarks except for the gaming ones. It's pretty common knowledge that GeForce cards like to handle almost everything "in-house", whereas AMD's tend to dump a big chunk of their workload onto the CPU. All I'm saying is that I'd love to see the gaming benchmarks redone with R9's - I'm betting they would better show the differences between these processors in games, if there actually are any :D
mlambert890 - Monday, September 1, 2014 - link
The differences are minor with PURE CPU tests. Sandy-E to Ivy-E is about a 5% IPC gain. Ivy-E to HW-E is another 8% or so, but it suffers a 10% OC ceiling deficit against them and it has higher-latency RAM to boot. Unless your workload truly has 8 threads, or you multitask to the point you are saturating 6 fast cores, this is a non-upgrade coming from Sandy or Ivy.
woj666 - Sunday, August 31, 2014 - link
It's curious why they used the dud 5960 results for overclocking vs. the one that kept up with the 4790 in overclocking. I detect some 4-core bias here. A [email protected] will run at the same temperature as a [email protected].
mlambert890 - Monday, September 1, 2014 - link
double the active cores at same clock rate equals lots more power which equals lots more heat. notice the tdp values?
faster - Sunday, August 31, 2014 - link
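To first order this follows the standard CMOS dynamic-power relation, P ≈ N·C·V²·f (a rough model only; real chips also have leakage and uncore power, and voltage usually rises with clocks):

```python
def dynamic_power(cores, cap, volts, freq):
    # Dynamic power scales linearly with core count and frequency,
    # and quadratically with supply voltage (units here are arbitrary).
    return cores * cap * volts ** 2 * freq

# Same clock and voltage, double the cores -> double the dynamic power/heat.
four_core = dynamic_power(4, 1.0, 1.2, 4.0)
eight_core = dynamic_power(8, 1.0, 1.2, 4.0)
print(eight_core / four_core)  # 2.0
```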
I might finally upgrade my work computer. I'm running an i7 920 with 8GB of memory and a 256GB SSD. I need that extra power for word processing, emailing, and surfing the web. Oh wait, no I don't. For the basic computer user, computers have been fast enough for many years. For my home gaming machine, I don't see a reason to upgrade my 4770K.
oranos - Sunday, August 31, 2014 - link
Just like with anything in life, there are some people who will pay a premium to have the best of the best. This is exactly that.
oranos - Sunday, August 31, 2014 - link
As a matter of fact, for the next generation (nvidia 9xx series and amd xxx) in quad sli/xfire setups, this chip will allow more bandwidth through the PCI-E lanes. So for the top 1% of setups this chip does matter.
mlambert890 - Monday, September 1, 2014 - link
No it won't. My P7X79 WS with a 3960X and tri-SLI Titans has 40 native PCIe 3 lanes. SB-E has 40 lanes and PCIe 3 support; advanced X79 boards with a current BIOS fully support it. This is honestly a non-upgrade (X99) vs. a high-end X79. All you really get is USB 3 and SATA 6Gb/s native vs. third-party integrated (BFD), and the mixed bag that is DDR4.
Fedor - Monday, September 1, 2014 - link
Very strangely, tomshardware has the 5820 beating out the 5930 and 5960 in gaming benchmarks. The test platforms here and there are very different, but for a start it would be great if you guys could also show us some results with a single GPU for the 5820 and 5930.
TiGr1982 - Tuesday, September 2, 2014 - link
IMHO, tomshardware messed up something here - there is no physical reason why 5820 can beat 5930, because they have the same core count and cache structure, but 5930 has higher frequencies and 40 PCIe lanes vs 28 lanes of 5820.Fedor - Wednesday, September 3, 2014 - link
Yup, I'm leaning towards the same conclusion. It's a shame that most reviews do not cover anything but the highest-end chip, but between AnandTech, tomshardware, 3dguru and another review I found in Belgian (the graphs were self-explanatory at least ;) only tomshardware has that anomaly, so I'm inclined to agree.
Bombreezey - Monday, September 1, 2014 - link
These CPUs would be more beneficial to higher-resolution gaming. I love how people are trying to compare these new CPUs to the i5-2500K. There is nothing wrong with the 2500K when gaming at 1080p, but when the time comes that people start switching over to 4K gaming, once it becomes more affordable, your 2500K is gonna struggle big time. Like people have said earlier, there is more to computing than gaming. These CPUs will be vastly faster at video converting/rendering and other CPU-intensive applications compared to older series of Intel CPUs. On another note, I see people are complaining about heat because of the high TDP... when there are more cores it's gonna need more power... soooo get a water cooler and problem solved :). At least the TDP isn't 220 watts like AMD's 8-core CPU. So 150 watts is pretty fair for an Intel 8-core CPU that straight up murders an AMD 8-CORE CPU.
untoreh - Friday, September 12, 2014 - link
amd octa core costs 1/4 of the intel octa core, if not 1/5.
soldier45 - Monday, September 1, 2014 - link
Was thinking of moving from my 2600k at 4.8 to the 5930k but really not worth the $ for a 20% boost in certain apps and little to no improvement in gaming.ryanbrod - Monday, September 1, 2014 - link
Why is it that every review of the Haswell-E chips doesn't even make an attempt to saturate more than 16 PCIe lanes? The whole point of the Haswell-E chips, in my opinion, is the extra PCIe lanes for tri and quad SLI/Crossfire setups. Dual GTX 770s is a joke of a review if you ask me. Put in at least 3 of them, or don't do a gaming review at all, because it's going to be less than a 2% difference in performance from regular i7s.
ol1bit - Tuesday, September 2, 2014 - link
You would think, since Intel seems stuck at a specific design speed for the most part (other than die shrinks), that AMD could come up with a better architecture. Maybe there is no better architecture out there?
coachingjoy - Tuesday, September 2, 2014 - link
ha, in three years we have 10-15% performance increase.
willis936 - Tuesday, September 2, 2014 - link
Since those parts are nearly identically clocked, with cost per core being so low you're really getting about a 280% increase in performance per dollar over three years. Compared to other market segments that have practically stalled out everywhere in x86 land, yes, it is pretty amazing.
About the gaming benchmarks: core count doesn't really make a difference in most of your chosen games... however, there are still a few good CPU-benchmark games that come to mind, Civ V and the Supreme Commander series, to mention a couple. In fact I'd keep an eye out for Planetary Annihilation. It's available this weekend, although I'm pretty sure Uber Entertainment would hand one over just for benchmarking.
Gonemad - Tuesday, September 2, 2014 - link
"Now the lowest end CPU for the extreme Intel platform." Yep, I'm just using an i7-920, which sat at exactly the same spot in its day: same architecture and memory as the extreme flagship, at a fraction of the cost. Not counting server chips, right? Intel Core i7-5820K, you say, eh? Hmm... maybe I should write that down for later... Except I'm not planning anything beyond 1080p, where all of these chips and cards are a bit overkill.
Correction: I want full pedal-to-the-metal at 1080p, perhaps the "average build" should help... right? I'd appreciate anybody pitching in on the details for a machine to run anything at 1080p (or 1920x1200, actually) at 60 fps, but not more.
azazel1024 - Tuesday, September 2, 2014 - link
I'd like to add that what is most interesting to me is the 5820K. A cheap hexacore (an Intel hexacore, which IMHO is the only kind that matters). I'll likely never have the budget for a $1000 chip and then the system to do it justice. However, I currently have an i5-3570 OC'd to 4GHz. It cost something like $380 for processor and board at the time. For what appears to be around $600 I could now get 50% more cores, hyperthreading, and probably be able to reliably OC to a "conservative" 4.2GHz, for probably a >60% performance improvement at less than half again the cost.
In comparison, moving up to the i7-4770K would only cost maybe $50 or so less, with a modest performance improvement (probably only 15-30%).
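A back-of-the-envelope model for that estimate (purely illustrative: it assumes throughput scales with cores × clock, plus roughly 25% from Hyper-Threading, which real workloads rarely achieve in full):

```python
# Rough throughput model for the proposed upgrade (illustrative assumptions:
# performance ~ cores * clock, Hyper-Threading adds ~25% in threaded loads).
def throughput(cores, ghz, ht=False):
    return cores * ghz * (1.25 if ht else 1.0)

current = throughput(4, 4.0)            # i5-3570 OC'd to 4.0 GHz, no HT
proposed = throughput(6, 4.2, ht=True)  # 5820K at a "conservative" 4.2 GHz

print(f"{proposed / current:.2f}x")  # ~1.97x in an ideal multithreaded workload
```

Real gains land well below that ceiling for lightly threaded software, which is why the >60% figure is already a hedge.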
It does make me VERY interested in Broadwell-E and Skylake-E, as those are the most likely points at which I might be looking at an upgrade. I do wonder if Skylake-E will see entry level Enthusiast Octocore, and maybe even if it'll mean high end mainstream Skylake Hexacore.
That would be an interesting decision if 6 or 8 core processors were not terribly far off in price.
Nfarce - Wednesday, September 3, 2014 - link
"However, I currently have an i5-3570 OC'd to 4GHz. It cost something like $380 for processor and board at the time. For what appears to be around $600, I could now get 50% more cores and hyperthreading, and probably be able to reliably OC to a "conservative" 4.2GHz, for probably >60% performance improvement at less than half again the cost."
Dude, did you forget about the cost of having to buy DDR4 memory modules? That will throw your numbers out the window.
BLOODYHELL - Thursday, September 4, 2014 - link
WHAT DO YOU GUYS THINK FOR LARGE-FILE VIDEO EDITING? MMA FIGHTING, ETC. ADOBE PREMIERE CS6, VEGAS?
BLOODYHELL - Thursday, September 4, 2014 - link
What do you guys think for editing large videos, MMA fights, weddings? Trying to save time editing and rendering videos.
naxeem - Thursday, September 4, 2014 - link
Well, since Broadwell is not out yet, I doubt we'll see Skylake that soon...
swing848 - Thursday, September 4, 2014 - link
Regarding your statement about game benchmarks, "It makes sense that we should test this with 4K in the future": you did not say how far into the future, and in the near future it makes NO sense. The video card you used was a GTX 770; there is no way that GPU can handle 4K at high game settings, and even medium settings will bring it to its knees. First of all, it is a mid-range GPU; secondly, it does not have enough local memory.
Lower resolutions better indicate what the CPU can do, because the GPU is not overtaxed and therefore does not become a factor.
If you change anything, pick a higher-end video card to make sure the GPU is not bottlenecking the tests.
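The point about low resolutions isolating the CPU can be illustrated with a toy frame-time model (all the millisecond numbers are made up for illustration): a frame is done roughly when both the CPU and GPU finish their work, and GPU time grows with resolution while CPU time mostly does not.

```python
# Toy model: per-frame time is roughly max(CPU work, GPU work).
# The millisecond figures are illustrative, not measurements.
def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 8.0                          # CPU cost per frame, ~resolution-independent
gpu_ms = {"1080p": 6.0, "4K": 24.0}   # GPU cost scales with pixel count

print(fps(cpu_ms, gpu_ms["1080p"]))   # 125.0 -> CPU-bound, CPU differences show up
print(fps(cpu_ms, gpu_ms["4K"]))      # ~41.7 -> GPU-bound, the CPU barely matters
```

At 4K the GPU term dominates, so every CPU produces nearly the same framerate, which is why CPU reviews bench at lower resolutions.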
M.Q.Leo - Saturday, September 6, 2014 - link
This generation doesn't bring much improvement, I think. Especially the 5820K; it even has fewer PCIe lanes! :(
djemir - Monday, September 8, 2014 - link
I owned a first-generation 2008 i7 running at a measly 2.7GHz, and I can tell you the new motherboard alone, with its 6Gb/s SATA and 10Gb/s SATA Express interfaces, made up for speed losses, not to mention the 4790K running at 4.5GHz on a stable overclock. For Photoshop and video, the new CPU and motherboard have made a world of difference. Save and open/read times have been reduced from 3-15 minutes per file down to "wait, what, I can count it on my fingers?!" 7 seconds. That means no more "oh well, it's saving, let me go to the bathroom or something while I wait for this slow a$$ computer."

I'm kind of wanting to kick myself for buying a Z97 on the very day the new X99s came out, but when I did a price check it just wasn't worth it. This will hold me over until those crazy prices drop. I looked at benchmarks for the 4790K vs the lower end of the newer X99 lineup, and it looks like the 4790K does better in Photoshop due to its higher frequency. I even had to drop 8GB of RAM, because my old motherboard had 6 slots and was holding 24GB of RAM in 4GB sticks, while my new motherboard only has 4 slots. That made me sad. But even with less RAM, the motherboard and processor are much more efficient, and they can actually use the higher speeds of my SSDs and my 3Gb/s and 6Gb/s internal and external hard drives as well.

Everyone arguing that their old processors are amazing needs to open their eyes. On paper it all sounds very much equal, but people forget that motherboards have been improving as well. I was having consistent blue-screen crashes on my old system even after refreshing the system a few times. This new configuration (Asus with i7-4790K) runs like a champ: no blue screens at all, nothing but blue skies. I think Intel needs to just drop their prices a little. I would have preferred to get the 8-core i7 or 12-core Xeon (yes, that sucker is out there as well), but at $1000-2500 for those units it's just not worth the small improvement for the price.
Whereas the huge increase I felt in performance, at a much lower price point, was worth the upgrade.
untoreh - Friday, September 12, 2014 - link
What about benching some games that have decent multithreading? Like games on the Bitsquid engine, such as War of the Roses / War of the Vikings? Or Natural Selection 2 with its immense polygon count? No, let's just benchmark GPU-heavy AAA titles that generally push the GPU market more than the gaming market. If you want to benchmark an 8-core CPU with games, you should AT LEAST let half of them have decent MT support.
IUU - Thursday, September 18, 2014 - link
Well, except for gaming... mostly true, but not completely true.
If you play chess, the 8-core version will smoke any 4-core version, just for fun.
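Chess is a good example because engine search parallelizes naturally, e.g. by splitting root moves across workers. A structural sketch (the evaluation function is a trivial hypothetical stand-in, and a thread pool is used for brevity; a real CPU-bound engine would use processes or native threads to get past Python's GIL):

```python
# Sketch: dividing a chess-style search across cores at the root.
# evaluate() is a hypothetical stand-in for a deep search of one move.
from concurrent.futures import ThreadPoolExecutor

def evaluate(move):
    # A real engine burns seconds of CPU per root move here,
    # which is why extra cores pay off almost linearly.
    return (move * 2654435761) % 1000  # deterministic placeholder score

def best_move(moves, workers=8):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(evaluate, moves))  # one root move per worker
    return moves[scores.index(max(scores))]

print(best_move([1, 2, 3, 4]))
```

Because each root move is searched independently, the wall-clock time shrinks roughly with the number of cores available.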
There are other games that are not in the front scene right now that are mainly cpu demanding.
Any FPS game currently on the market is heavily focused on showing realism, so it requires graphics processing power; and demanding that a powerful CPU run such a game just fractions of a second faster than a slower CPU would is, well... pointless.
But gaming is vast, bigger than your encoding software, your bitcoin mining, and your much-advertised enterprise software. When developed further, it will require what the most demanding scientific applications require, and probably more. Note that it is already a main driving force (if not the main driving force) of modern supercomputer improvement, and it will be in the future.
Think of in-game AIs, multiple AIs, that will interact with the game world like a human would. Think of voice and pattern recognition, of tracking thousands or millions of objects, etc.
If your only aspiration in buying such a CPU is how well it will run current-gen games, you were never in the appropriate customer category for this CPU. You would rather be excited by the mobile parts, which, while anemic compared to the 8-core Haswell, are fancy and fashionable and satisfy your vanity.
Of course, a problem still remains: because of this sad market turn, the pricing of the "extreme" parts has risen, which puts most of us off.
kelendar - Tuesday, September 23, 2014 - link
The one thing I like about the X99 chipset over my ASUS P8P67 Deluxe is the plethora of SATA III ports and the vastly improved onboard sound. The problem is that Sandy Bridge is a dead platform; there is no upgrade path with it, and all the boards ever made have few ports. I actually had to buy a separate add-in card, but it suffers from sitting on a PCIe x8 port: connect more than one drive, and they share the throughput.
That's why I'm looking at this build. I figure the ASUS Rampage IV + 5820K + 16 GB RAM should set me back about $1,100, but it gives me a bit of future-proofing.
Spartan 363 - Wednesday, October 1, 2014 - link
I just purchased the i7-5930K along with an EVGA X99 Classified mobo, 16 GB of Corsair Dominator Platinum DDR4-3000 RAM for now, and 2 EVGA GTX 980 Classifieds for SLI. I purchased the Classified versions of both the mobo and the GTX 980s because I love tinkering and overclocking to see what the best stable clocks are that I can achieve without heating up my bedroom as if my PC were a space heater (which would be handy in a month or so, as the cold weather returns to Northern VA where I live). My current PC, and even the older PC this build replaced, are air-cooled for now. I will buy WC blocks and such for my GTX 980s, since I have fantastic components for overclocking with the Classified parts from EVGA and a CPU that OCs well on air too.

Sure, this was a very expensive upgrade, but at least the CPU, RAM, and mobo will be good for the next 4-5 years, just like my aging i7-920 D0 / MSI X58 Pro-E mobo with 12 GB of Corsair XMS3 DDR3-1600 triple-channel RAM. That PC had aging SLI GTX 680s that have been great so far, but it was time to upgrade that system; it was a great PC for all these years, since I purchased it when the X58 and i7 series first launched.
This upgrade over my aging X58 build is a massive jump, as I was running into CPU bottlenecks with my SLI GTX 680s even with the CPU overclocked to a stable 3.8GHz on air cooling with good temps. I tested my new system with both my old EVGA GTX 680 FTW+ 4GB cards in SLI, and a single GTX 980 beats them in games that don't have great SLI/Crossfire support; when I added my second GTX 980, it was overkill compared to the 680s while also using less power at reference clocks. I may get a third GTX 980, but I'm holding out for the rumored GTX 980 Ti that may arrive around Spring 2015, and I hope to get at least 2 for SLI if they are a comparable upgrade, like the GTX 780 Ti was over the 780. I also hope the cards come with 6 to 8GB of VRAM, as I am gaming at 4K and there are times when the VRAM is maxed out and more is needed. I'm just glad the GTX 980 was such a great buy this time around and made a big difference for those of us on aging GTX 680s and earlier cards. I usually upgrade my GPU every 2 years anyway. I am currently reading up on and overclocking my new build according to tutorials and such for these components, since it's been years since I really tinkered with voltages, RAM timings, and of course serious voltage tweaks to GPUs for better overclocking. I actually miss the old days of tinkering with my PC for hours a day when I was younger, but at least I have the weekends off to tinker with my new gaming beast, which should last me a few more years.
For now, I'm happy with the performance I get from a mild overclock, and my games run and look fantastic on my new 55-inch LG 55UB8500 4K TV, which has HDMI 2.0 so that I can play at 60 FPS. With my current setup, I'm averaging 40+ FPS in most games on ultra settings with AA set to 2x or none at all, since in my opinion a 55-inch 4K TV is the perfect size for the resolution, and turning off AA in most games makes no major difference in on-screen appearance due to the high resolution. That's a major plus for 4K gaming, and it's easier on the hardware too. The TV only cost me about $1500 at Best Buy during the Labor Day weekend sales. It was a steal for a TV with great features and, most importantly, HDMI 2.0 to take advantage of my GTX 980s at 4K, instead of having to use the DisplayPort-to-HDMI adapters I was using with my GTX 680 SLI to achieve 30+ FPS in a few games at 4K with no AA and such.
For those who want great performance at 4K, SLI GTX 980s are great, and they suit systems whose PSUs can comfortably handle both cards at load with a quality 750W unit, versus needing a 1000W unit for older cards.
Spartan 363 - Wednesday, October 1, 2014 - link
I forgot to mention that I am currently using 2 EVGA GTX 980 SCs, but I will be sending them back next week to exchange them for the Classified cards (the air-cooled ones). I am currently running both my current cards at a mild 1300 MHz OC on the core, while the memory is at factory clocks for now; not bad for air cooling. I know I can push for 1400 to 1500, but I will wait for the Classified cards for that type of OC, as I will be getting EK WC blocks for both cards this time.
sandwich_hlp - Tuesday, October 7, 2014 - link
Been trying to decipher Intel roadmaps and the like, to no avail, so... can anyone tell me approximately when the DDR4-supporting Core i5 line is expected to launch? I need to upgrade my aged Core 2 Duo (DDR2!), but I don't wanna hop on the DDR3 bandwagon just as it's being superseded by DDR4... :-/
GGuess - Saturday, December 6, 2014 - link
There should be a line of discussion on why CPU speeds haven't increased significantly in the past 5 years. My 5-year-old Intel i7 is a 4-core at 3GHz. The ones discussed here are only 6 or 8 cores and run stock in the mid-3 to 4GHz range. So over 5 years, CPU capability has not even grown 2 to 3 times, and that only applies to applications that can use the extra cores and hyper-threading. The usual rule we work to is that people won't even notice a 50% speed increase; it has to be 2 to 3 times before it is noticed. Previously, a 3-year refresh of a computer resulted in a 5 to 10x speed increase. With the current barely noticeable 2x, why bother with the trouble of an upgrade? No wonder Intel's and AMD's sales figures are failing to grow.
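The diminishing returns described above are essentially Amdahl's law: once clocks stall, extra cores only speed up the parallel fraction of a workload. A quick sketch (the 80% parallel fraction is an assumption for illustration):

```python
# Amdahl's law: overall speedup from n cores when a fraction p of the
# work parallelizes perfectly and the rest stays serial.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.8  # assume 80% of the workload is parallel (illustrative)
for cores in (4, 6, 8):
    print(cores, round(speedup(p, cores), 2))  # 2.5, 3.0, 3.33 -- flattening fast
```

Doubling from 4 to 8 cores only moves this workload from 2.5x to 3.33x, well short of the 2-3x jump the comment says users actually notice.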
avikbellic911 - Wednesday, January 28, 2015 - link
I am a very happy owner of the i7-4960X... I think X79 boards still win over the X99s; I mean, X79 can still run 2800-3100MHz RAM without much overclocking... I know technology must move on, but I feel like X99 is still not worth much at all... X99 is about a 7% improvement in performance if we compare it with X79.
Visibilities - Friday, April 10, 2015 - link
Obviously higher OC'ing is good... But you guys can have fun with your QUAD-core 2500K and a memory bandwidth of 21GB/s... while the 5960X has EIGHT cores and three times (3x) the bandwidth... Enjoy the 2500K.
xmoheban79 - Tuesday, September 1, 2015 - link
I wish that when they were comparing the 5820K to the 5930K, the clock rates had been adjusted to be the same while benching the SLI GTX 770 gaming scores, so you could see the real difference between 16/8 and 16/16 available lanes on performance; as it is, I can't tell whether that 5% difference is due to the higher clock speed or not. Also, it wouldn't hurt to use a Titan instead, to max out those lanes.
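For context on the x8 vs x16 question: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so each lane carries roughly 985 MB/s of usable bandwidth. A quick sketch of the raw numbers:

```python
# Usable PCIe 3.0 bandwidth per lane: 8 GT/s * 128/130 encoding, in MB/s.
PER_LANE_MBS = 8000 * 128 / 130 / 8   # ~984.6 MB/s per lane

for lanes in (8, 16):
    print(f"x{lanes}: {lanes * PER_LANE_MBS / 1000:.2f} GB/s")  # 7.88 / 15.75
```

Even x8 (~7.9 GB/s) exceeds what most single GPUs of that era stream over the bus in practice, which is consistent with the small gaming deltas the commenter is questioning.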