How does Intel's closed-loop cooling package compare to, say, Corsair's or other similar products?

Probably within 1-2C of similar "extra wide" 120x37mm closed-loop coolers. Looks like Intel's solution is made by Asetek, going by its block and mounting design, so I'd compare it against the Antec 920 for starters.
Still running an i7-950 system (it was an i7-920 back in 2008), and all I've upgraded since building it is a small bump in CPU speed, added water cooling, and two GTX 660s in place of the two GTX 460s installed in 2010, which themselves replaced the Radeon 4870 X2 from the original 2008 build. I've also replaced the original 500GB Seagate boot drive from 2008 with an Intel 160GB X25-M in 2010. Still use the same SSD to this day.
Same motherboard, same 6x2GB G.skill DDR3-1600 modules (that cost $600 back in 2008) and same PC Power & Cooling 750-QUAD.
I've added a USB 3.0 PCIe controller as well.
Overall, this is the longest (5 years) I've ever owned a system that retained the same motherboard. The irony is Intel discontinued Socket 1366 so fast it wasn't even funny. It was actively supported less than 2 years, and only 2 generations of chips (using the same architecture and process) were made within a year of each other, essentially giving this socket a 15-month lifetime.
But 5 years later, a system built on this socket is still faster than 90% of the production systems today.
Yeah, for 1366 owners, there's absolutely no reason to upgrade, especially with overclocking. At least you got one generation of upgrades, unlike 1156 owners who got completely screwed.
P.S. The second-gen upgrade on 1366 (Westmere) was a die shrink to a new process: they went from 45nm down to 32nm and added AES instructions.
I'm a current 1156 socket owner running an i7-875K @ 4.2. My rig is still running strong, but I'm ready for an upgrade. Just purchased a 4960X with 16 gigs of Corsair Dominator Platinum at 2400 and the ASUS Black Edition mobo. Hope the spending is worth it.
What does this have to do with chizow's comment about the closed-loop cooler??
Absolutely nothing. I'm guessing it was just an easy way to get posted near the top.
If I recall correctly, the 920 is 49mm thick. Also, I've found that fan selection can make more than a little difference. I would not expect the Intel cooler to match Antec's 920, given their history of racing to the bottom with cooler components. That said, it should beat the 620 and similar 120x120x25mm closed loop systems (assuming they didn't screw up the fan selection in epic manner).
I run an i7-950 at 4.07 GHz with an overclocking Thermaltake cooler, 3x 580s and 12 GB DDR3, and am now upgrading to the i7-4960X, a Thermaltake Water 3.0 and 2x ASUS GTX 780 Ti in SLI with 32 GB DDR3. The old rig is still going strong and I'll use it for a simulator PC, as I have a G27 sitting doing nothing. Great machine that has served me well!

I might just add that the ASUS Sabertooth X79, i7-4960X, 32GB 2400 DDR3, and two ASUS GTX 780 Ti OC cards make this a $4000 upgrade. I've been doing a lot of overtime, so I thought I would update while I've got the extra cash.
Corsair's closed loop and Intel's appear to be built by the same company. Some of Corsair's earlier attempts were noisy, whereas you didn't really have that problem with Intel's. Overall I think it's a pretty solid contender with very few faults. There is better on the market obviously... but it's decent for its price.
Excellent review. I must say the reviews Anand himself writes always seem to be spot on. One question: where does the 22 months number come from? Is that the scheduled/rumored/leaked release date of Haswell-E? If so, isn't that going to be even more behind than Ivy-E? I'm assuming Skylake will have been out for longer than Haswell has been out now, no? Also, do we know that Skylake will support DDR4? I've heard Haswell-E will.

Anywho, thanks for the excellent review, I thoroughly enjoyed it.

22 months is about the time between the releases of Sandy Bridge Extreme and Ivy Bridge Extreme.

Haswell-E will [support DDR4] if they move to a new socket and board; DDR4 probably won't be compatible with DDR3 DIMM sockets.
It's not often that I think that a review really nails it all (at least for what I need), but this one did it. Very balanced, taking shots at the weak points, while also thoroughly explaining what is good about it, and who it might benefit.
X79 is adequate only if you are willing to load your machine up with lots of add-in controller cards.
Intel will really need to up their game on the chipset when Haswell-E gets here.
With so few new things coming out in the desktop division these days, I'm sure Anand is quite fine with getting back into the trenches to do the odd review. It used to be (or so it seemed) that every other day we were getting glimpses of exciting new things, but these days that appears in the form of tablets and smartphones, with the desktop industry being somewhat stagnant in my opinion.
At this rate, as Anand suggests, Haswell-E will come out around/after Skylake-based desktop parts (assuming that is still on track for a 2015 release). I am convinced that it would have been a better approach to skip Haswell-E altogether and jump straight to Skylake-E in 2015. This logic is further supported by the fact that next gen will require a new socket design (since Haswell comes with FIVR).
Since there is going to be a Haswell refresh in 2014, I'd expect Haswell-E to be introduced around the same time: roughly a year from now. The delay of Broadwell on the desktop will allow Haswell-E to catch up in cadence a bit.
The new question is when Broadwell-E will arrive: with Skylake or vanilla Broadwell on the desktop?
Nice CPU for those that have an older platform (like 1156 or 1366 maybe), but as someone that already owns a socket 2011 CPU I'm still hoping they will eventually release 8 or even 10/12 core CPUs so I can encode vids faster. C'mon Intel, release those 8/10/12 core CPUs for the enthusiast platform!
What's the point? A 10-core only runs at 2GHz, and an 8-core only runs at 3GHz, so both have less overall performance than a 6-core overclocked to more than 4GHz. You simply cannot put more computing power into a reasonable power envelope for a single socket. If a water-cooled enthusiast 6-core is not enough for your needs, you automatically need a 2-socket system.

And it's not like that is not feasible for enthusiasts. The ASUS Z9PE-D8 WS, the EVGA Classified SR-X and the Supermicro X9DAE are mainboards aimed at the enthusiast/workstation market, combining two sockets for Xeon E5-26xx parts with the capability to run GPUs in SLI/CrossFire. And if you are looking to spend significantly more than $1k for a CPU, the $400 for those boards and the extra cost of ECC memory should not scare you either.

Just go and check AnandTech's own benchmarking: http://www.anandtech.com/show/6808/westmereep-to-s... . It's clear that you need two 8-cores to be faster than the enthusiast 6-core even before overclocking is taken into account.
Maybe with Haswell-E we can get 8 cores with >3.5GHz into <130W, but with Ivy Bridge, there is simply no point.
Who cares if the power envelope is "reasonable"? I already have my SB-E overclocked to 5.125GHz, and if they release a 10-core I would OC that thing like a mutha******.

That link you posted is EXACTLY why I want a 10/12 core instead of dual socket (which I could afford if it made sense performance-wise) - it's obvious that video encoding doesn't work well with NUMA and dual sockets, but it does work well with multi-core single CPUs.

So I say give me a 10-core and let me OC it like crazy - I don't care if it ends up using 350W+, I have some pretty insane watercooling to suck it up (3k Ultra Kazes in push/pull on an RX480 rad, 24V Laing D5s, Raystorm WB - a little over the top, but isn't that what these extreme CPUs are for?)
I have to agree with you: in the extreme market, who gives a damn about being green? Most will run 1200-watt Platinum modular PSUs with an extra 450 watts in the background, and 4 GPUs, as this is pretty much the only reason to buy into socket 2011 in the first place - 2 extra cores and 40 PCIe lanes.
I could not agree with you more! I have an OC'd i7-920 that just keeps chugging along, and if I'm going to drop some coin on an upgrade, I want it to be an UPGRADE. Let ME decide what's reasonable for power consumption. If I burn up an 8/10 core CPU with some crazy cooling solution then it's MY fault. I accept this. This is the hobby that I've chosen and it comes with risks. This is not some elementary school "color by numbers" hobby where you can follow a simple set of instructions to get the desired result in 10 minutes. This is for the big boys. It takes weeks or more to get it right and even then, we know we can do better. Not interested in Xeon either.
The 12 core models run at 2.7Ghz, which will be slightly faster than six cores at 5.125Ghz. You could also bump up the bclk to 105, which would put the CPU at 2.835Ghz.
2690 v2 will be 10c @ 3.0 and 130W - effectively 30GHz.
2697 v2 will be 12c @ 2.7 and 130W - effectively 32.4GHz.

Assuming a 6-core OC'd to 5GHz stable: 6c @ 5.0 and 150W? (more power due to the OC) - effectively 30GHz.

So tell me again how a highly OC'd, largely-unavailable-to-the-masses 6c is better than a 10/12c when you need multiple threads? Keep in mind those 10 and 12 core server CPUs are almost entirely air cooled and not overclocked.
I think they should have released an 8 and 10 core Enthusiast CPU. Hike up the price and let the market decide which one they want.
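To make the core-count-times-clock arithmetic above easy to check, here is a small sketch; the figures are the ones quoted in this thread, and "aggregate GHz" is a naive upper bound that ignores IPC, memory bandwidth and scaling overheads (as pointed out further down):

```python
# Naive aggregate-throughput comparison (cores x clock).
# Ignores IPC differences, memory bandwidth and multi-core scaling losses.
configs = {
    "E5-2690 v2 (10c @ 3.0 GHz)": (10, 3.0),
    "E5-2697 v2 (12c @ 2.7 GHz)": (12, 2.7),
    "6-core OC   (6c @ 5.0 GHz)": (6, 5.0),
}

for name, (cores, ghz) in configs.items():
    print(f"{name}: {cores * ghz:.1f} aggregate GHz")
# -> 30.0, 32.4 and 30.0 aggregate GHz, matching the figures quoted above.
```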
For Sandy Bridge, we had:
2687, 8c @ 3.1 GHz => 24.8 GHz effectively
3970X, 6c @ 3.5 GHz => 21 GHz before overclocking; only 4.2 GHz required to exceed the Xeon.
Fair enough, for the Ivy Bridge Xeons the 10-core at 3 GHz has been announced. I'll believe that claim when I see some actual benchmarks of it. I have some serious doubts that a 10-core at 3 GHz can actually use less power than an 8-core at 3.4 GHz, so let's see at what frequency those parts actually run under load.

Furthermore, the effective GHz are not the whole truth, even on highly parallel tasks. While cache seems to scale with the number of cores for most Xeons, memory bandwidth does not, and there are always overheads due to the shared use of the L3 cache and memory.

Finally, not directed at you specifically but at several people talking about "green": that's entirely not the point. No matter how much power your cooling system can remove, you are always creating thermal gradients when generating too much heat in a very small space. Why do you guys think there was no 3.5GHz 8-core for Sandy Bridge-EP? The silicon is the same for the 6-core and 8-core; the core itself could run at that speed. But Intel is not going to verify the continued operation of a chip with a TDP >150W.

They give a little leeway when it comes to the K-class, because there the risk is with the customer to a certain point. But they just won't go and sell a CPU which reliably destroys itself or the motherboard the very moment somebody tries to overclock it.
All I really learn from these high end CPU results is that if you actually invested in high end 1366 in the form of 980x all that time ago, you've got probably the longest lasting system in terms of good performance that I can even think of.
If you invested in the 980 or the 970 (not the extreme ones) you got an awesome deal. Three years old, $600, overclockable, and within 30% of the 4960X on practically everything.
True, but my Haswell i5-4670k was around $200 for the CPU (on sale), and under $150 for an ASUS Z87-Plus motherboard. It's running on air cooling at 4.5/4.5/4.5/4.4GHz.
I wasn't expecting it to be as fast for gaming as an i7-4770k, but looking at the gaming benchmarks in this article, I'm extremely pleased that I did not spend more for the i7.
I had a launch model Core 2 Duo (the E6300) that with overclocking (1.86Ghz => 2.77Ghz) was a pretty decent CPU until last year (when I replaced it with an Ivy Bridge Core i5). That's what? Six years out of the CPU and it's still going strong for my buddy (to whom it now belongs).
"My biggest complaint about IVB-E isn't that it's bad, it's just that it could be so much more. With a modern chipset, an affordable 6-core variant (and/or a high-end 8-core option) and at least using a current gen architecture, this ultra high-end enthusiast platform could be very compelling."
I think that you answered why Intel isn't going this route earlier in the article. Consumers are getting the smaller 6-core Ivy Bridge-E chip. There is also a massive 12-core chip due soon for socket 2011 based servers. Harvesting an 8-core version from the 12-core die is an expensive proposition and something Intel may not have the volumes for (they're not going to hinder 10 and 12 core capable dies to supply 8-core volumes to consumers). Still, if Intel wanted to, they could release an 8-core Sandy Bridge-E chip and use that for their flagship processor, since the architectural differences between Sandy and Ivy Bridge are minor.
The chipset situation just sucks. Intel didn't even have to release a new chipset, they could have released an updated X79 (Z79 perhaps?) that fixed the initial bugs. For example, ship with SAS ports enabled and running at 6 Gbit speeds.
"The big advantages that IVB-E brings to the table are a ridiculous number of PCIe lanes , a quad-channel memory interface and 2 more cores in its highest end configuration."
I'm going to pick on you a little bit here Anand, because I think it is important that we convey an accurate image to Intel about what we as end-users want from the hardware they design. 40 PCIe 3.0 lanes is NOT "ridiculous". In fact, for my purposes I would call it "inadequate". Sure, "my purposes" are running 3 2560x1440 screens @ 120Hz and that isn't the average rig today, but I want to suggest it isn't far off what people are now asking for. We should be encouraging Intel to give us more PCIe connectivity, not implying we have too much already. :)
Actually, you would find that you are still badly limited by graphics power, rather than limited by system bandwidth.
A modern graphics card doesn't even stress eight lanes of PCIe 3.0.

I'm also not saying that it is a bad thing to have lots of I/O; it isn't. However, you do need to know where your bottlenecks are. Otherwise you spend money trying to fix the wrong thing.
Not all high bandwidth PCI-e cards are graphics cards.
I for one would like to be able to run 2x PCIe x16 GPUs and at least one each of an LSI SAS 2008 HBA, dual-port DDR or QDR InfiniBand, dual-port 10GbE, and perhaps an actual RAID card.

Sure, that is a somewhat extreme example. But you can only run one of those expansion cards plus 2 GPUs before you run out of lanes. This is an enthusiast platform, after all. Many of us are going to want to do extreme things with it.
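To put rough numbers on the lane budget described above (the card mix is the one listed by this commenter; the assumption that each storage/network card wants an x8 link is mine, and some of them would run fine at x4):

```python
# Rough PCIe lane budget for the build described above, versus the 40 lanes
# a single IVB-E CPU provides. Per-card lane widths are assumptions.
cards = {
    "GPU #1": 16,
    "GPU #2": 16,
    "LSI SAS 2008 HBA": 8,
    "Dual-port InfiniBand": 8,
    "Dual-port 10GbE": 8,
    "RAID card": 8,
}

total = sum(cards.values())
print(f"Requested lanes: {total}, available from the CPU: 40")  # 64 vs 40
```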
Now you're just being silly; spending $10,000 on a system without any real increase in performance for anything you're going to do on a desktop/workstation is just stupid.
Besides, if you're being incredibly stupid you'd need to go quad Xeons anyway (160 PCI-E lanes FTW).
On the one hand, good review. On the other hand, my dream of a new build in the "performance" line is snuffed out. It just seems so lame making all these compromises vs Haswell, and basically things will never get better because the platform target is shifting to mobile and so battery life is key and performance parts will just never be a focus again.
I feel the same way; the future doesn't look too bright for the performance enthusiast. I don't want low-power, smaller CPUs, I want BIG 8/12 core CPUs, and I don't really give a crap about power usage.
If you really want that number of cores, Ivy Bridge E5/E7 Xeons are going to deliver that, in the 150W power envelope. This is useful in the server market, but will only sell in homeopathic quantities in the desktop market. Still, you should be able to find them in retail around Christmas. Knock yourself out!
Really IB-E is a free product for Intel, which is the only reason it made it to market at all. They need the 6-core dies for the medium density servers anyway, which is where they actually make sense over SB-Xeons, due to the smaller power envelope/higher efficiency. The investment to turn that core into a consumer product on an existing platform is almost zero, short of a small marketing budget, and possibly a tiny bit of (re-)validation.
This was never a product designed for the enthusiast market, and is being shoe-horned into that position. Due to the smaller die Intel can probably make better margin over SB-E, which is the only reason to introduce this product in the sector anyway, and possibly to get some brand awareness going with the launch of a new flagship.
From an economical point of view it makes no sense for Intel to have an actual enthusiast platform. Haswell refresh will be unlikely to bring more cores either (and without the extra I/O they would be a bit hobbled, I imagine), so possibly with Skylake there will be a 6-core upper mainstream solution. Still unlikely from an economical point of view, as Intel would probably prefer sticking to two dies, and going 6/4 may not be economical, whereas selling 6-core CPUs as quads (as they do with 48xx) doesn't work that well in the part of the market that generates reasonable volume.
The problem with Xeon is that you can't overclock them, so my 5GHz SB-E would be close to as good as an 8/10 core Xeon.

I don't really care about why Intel isn't releasing high core count CPUs; I just know I want them at a decent price ($1k and under) and overclockable - these 6-core ones just don't make the cut anymore.

I just hate the direction CPUs are going, with low power, low core count, and highly integrated everything. 5 years ago I was dreaming of 8-core CPUs being standard about now, but we still have 4 (6 with SB-E) core CPUs as standard, which blows, and per-core performance hasn't really changed much going from Sandy Bridge to Haswell.

I don't care about power and heat, just give me the performance - I want to encode highest quality Handbrake movies in less than 24 hours!
"All" you want is Intel to invest a massive development effort in order to produce for the first time an overclocking CPU with a TDP of around 200W, with silicone for which their business customers would pay 2k$ to 3k$, and sell it to you and the other 500 people in your niche for less than 1k$?
Intel already offer you a solution if you need more processing power than the enthusiast solution gives you: 2 socket workstation boards, 4 socket server boards, 60-core co-processor cards.
2-socket is inefficient for my workloads. They could just release a Xeon that is unlocked and let me do what I want with it - it's not like the workstation/server guys would overclock, so it's not like Intel would be losing any money, and no development is needed. $2-3k? I can already buy an 8-core SB-E for $1k - why not let me OC that?
Many would overclock when Intel is charging hundreds of dollars for just small GHz bumps. You won't see the academic or large corporation clusters doing it, but the small businesses with just a handful of workstations? They might.

Look at the 2660 v2 at 2.2GHz for $1590 and the 2680 v2 at 2.8GHz for $1943. That's $353 for 600MHz. On a dual-processor system it's $700, and then you have to pay the markups from those actually selling the computers (i.e. Dell/HP), which takes that $700 to $1000 or more. One small little tweak and you're saving yourself $1000, while not stressing the system all that much (assuming you don't go crazy and try to get 3.5GHz from that 2.2GHz base chip).
The catch though is that the mbds used for these systems don't have BIOS setups which support oc'ing, and the people who use them aren't generally experienced in such things. I know someone at a larger movie company who said it'd be neat to be able to experiment with this, especially an unlocked XEON, but in reality the pressures of time, the scale of the budgets involved, the large number of systems used for renderfarms, the OS management required, etc., all these issues mean it's easier to just buy off the shelf units and scale as required (the renderfarm at the company I'm thinking of has more than 7000 cores total, mostly based on Dell blade servers) and management isn't that interested in doing anything different or innovative/risky. It's easy to think a smaller company might be more likely to try such a thing, but in reality for a smaller company it would be a much larger financial risk to do so. Bigger companies could afford to try it, but aren't geared up for such ideas.
Btw, oc'ing a XEON is viable with single-socket mbds that happen to support them and have chipsets which don't rely on the CPU multiplier for oc'ing, eg. an X5570 on an Asrock X58 Extreme6 works ok (I have one); the chip advantage is a higher TDP and 50% faster QPI compared to a clock-comparable i7 950.
Sadly, other companies often don't bother supporting XEONs anyway; Gigabyte does on some of its boards (X58A-UD3R is a good example) but ASUS tends not to.
Some have posted about core efficiency and they're correct; I have a Dell T7500 with two X5570s, but my oc'd 3930K beats it for highly threaded tasks such as CB 11.5, and it's about 2X faster for single-threaded ops. The 3930K's faster RAM probably helps as well (64GB DDR3/2400, vs. only DDR3/1333 in the Dell, which one can't change).

Someone commented about Intel releasing an unlocked XEON. Of course they could, but they won't because they don't need to, and biz users wouldn't really care; it's not what they want. Note that power efficiency is very important for big server setups, something which oc'ing can of course utterly ruin. :D Someone said who cares about power guzzling when it comes to enthusiast builds, and that's true, but when it comes to XEONs the main target market does care, so again Intel has no incentive to bother releasing an unlocked XEON.
I agree with the poster who said 40 PCIe lanes isn't ridiculous. We had such provision with X58, so if anything for a top-end platform only 40 lanes isn't that impressive IMO. Far worse is the continued limit of just 2 SATA3 ports; that really is a pain, because the 3rd party controllers are generally awful. The Asrock X79 Extreme11 solved this to some extent by offering onboard SAS, but they kinda crippled it by not having any cache RAM as part of the built-in SAS chip.
"It's easy to think a smaller company might be more likely to try such a thing, but in reality for a smaller company it would be a much larger financial risk to do so. Bigger companies could afford to try it, but aren't geared up for such ideas.
Btw, oc'ing a XEON is viable with single-socket mbds that happen to support them and have chipsets which don't rely on the CPU multiplier for oc'ing, eg. an X5570 on an Asrock X58 xtreme6 works ok (I have one); the chip advantage is a higher TDP and 50% faster QPI compared to a clock-comparable i7 950."
These two statements work against each other. If OC'ing a SP Xeon is relatively easy (if supported), there isn't much reason a DP Xeon setup couldn't be OC'd within reason without much effort.

I'm not going to say this would be a common thing, but the small shops run by someone with a "tinkerer" mindset towards computing would certainly be interested in attempting to get that extra 10-20% performance - which Intel would charge another $1000 or more for - for free.
Z9PE-D8 WS has decent overclocking options (not like their consumer X79 boards, but not bad either).
However, apart from a small BCLK bump, this is useless as SNB-EP and IVB-EP Xeons are locked.
The best I can do with dual Xeon 2697 v2 is ~3150 MHz (I might be able to go a bit further but I did not bother) for all-core turbo.
Even if Intel ignores the business reasons NOT to allow Xeon overclocking (to force high-performance-trading people to buy more expensive Xeons, since they have shown a willingness to overclock and could otherwise cannibalize the market for more expensive EX parts), technically this would be a huge challenge.

Why? Well, the 12-core Xeon 2697's power usage would literally explode if you allowed it to run at 4+ GHz with the voltages normally seen in the overclocking world. I am sure the power draw of a single part would be more than 300W, so 600W for a dual-socket board.
This is not unheard of (after all, high-end GPUs can draw comparable power) - however, this would mandate significantly higher specs for the motherboard components and put people in actual danger of fires by using inadequate components.
Maybe when Intel moves to Haswell E/EP - when voltage regulation becomes the CPU's business - they can find a way to allow overclocking of such huge CPUs after passing lots of checks. Otherwise, Intel runs a huge risk of being sued for causing fires.
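A rough sketch of why the power draw balloons like that: to first order, dynamic power is proportional to frequency times voltage squared (P ~ C x V^2 x f). The baseline and overclock figures below are illustrative assumptions, not measured values:

```python
# First-order dynamic power scaling: P ~ C * V^2 * f.
# Baseline: a 130 W, 2.7 GHz twelve-core part; the 0.95 V stock and
# 4.3 GHz / 1.35 V overclock figures are assumed for illustration only.
base_power, base_freq, base_volt = 130.0, 2.7, 0.95
oc_freq, oc_volt = 4.3, 1.35

oc_power = base_power * (oc_freq / base_freq) * (oc_volt / base_volt) ** 2
print(f"Estimated draw at {oc_freq} GHz / {oc_volt} V: ~{oc_power:.0f} W per socket")
# ~420 W per socket by this crude estimate -- comfortably past the >300 W above.
```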
--[I just hate the direction CPUs are going, with low power, low core count, and highly integrated everything. 5 years ago I was dreaming of 8-core CPUs being standard about now, but we still have 4]--
So I got an AMD FX8350, that's 8 cores and 4GHz before turbo. Quite a bit cheaper than Intel's too.
OK, obviously AMD gets less operations per clock and the 8 cores only have 4 "real" FPUs between them but I wanted 8 cores to test scaling of computer programs on without breaking the bank.
As someone who regularly does encoding, 4K gaming, and (when not in use otherwise) Folding@home - all things which can fully leverage multi-core processors and powerful GPUs - I look forward to these reviews of new enthusiast-class processors. And it saddens me that since SB-E there have been only marginal improvements in this sector. I never thought we (as a technology powerhouse, and as a society) would settle for this. For me, it all began when they started putting GPUs on-die with CPUs for desktop PCs (sure, for laptops I can certainly understand) - I mean, who DOESN'T use a discrete GPU in a desktop system? And for those who don't, why don't you just get a laptop?

GPUs on-die took the focus away from the CPU, and while there are minimal gains to be had, the showing here today is abysmal. 2 years of waiting and we get a 5% increase (for what I do, I want power and couldn't care less about power draw - as I would say most enthusiasts do). I get it - to build more powerful hardware, it HAS to become more efficient, but it's an evolutionary development process. Haswell could very easily be an enthusiast-class product: get rid of the ridiculous GPU (for the desktop), double the core count, and raise the TDP to 125/130W (Haswell-E?) - and they could do it a LOT earlier than 1-2 years from now. Come on Intel - stop screwing the guys you built your reputation on (after all, it's always the fastest/most powerful hardware that's shown in reviews to boost the reputation of any company).

/rant off/ Sorry, this is just very disappointing.
I agree, very disappointing. Too much integration and not enough performance is the problem with modern Intel CPUs. I don't want integrated graphics and VRMs and whatever else they plan on integrating - I want huge core counts in a single die for the enthusiast platform!
Just how many of these chips does Intel actually sell a year?
I bet it's tiny. I bet the i3/i5 chips outsell them 50 to 1. That's why stuff isn't happening at the top so much. The demand has dwindled. Ten years ago a lot of people could eat all the CPU power they could get their hands on. Now? Not so much. Plenty of people are still happy with their 2008/9-spec quads. Basically these top-end Intel i7 chips are the Mercedes S-Class: a way for Intel to put new stuff and techniques in that may or may not filter down to future generations.
Intel knows the figures and it knows that the action is at the other end of the spectrum. Not for folks that largely want to rip video and run benchmarks all day.
I agree they don't sell as many as the lower-end CPUs, but why not just sell us an unlocked Xeon that can also OC?

It's not like they would lose money from letting us OC the Xeon, because the people that would normally buy a Xeon for servers etc. would never think about overclocking them.

Then it's a win/win situation for Intel, as they are still getting their Xeon money and they will have a decent enthusiast CPU as well. And yes, I would happily pay $1k (the price I can find current SB-E 8-core CPUs for) for an OCable 8-core.
I believe the only way to get a specially binned or configured chip from Intel is to be an OEM and order a large volume. For an unlocked Xeon, the only chance Intel would release such a chip would be under contract for a supercomputer that also used liquid cooling.

OEMs like HP, Dell and Apple can also acquire specially binned chips, for a premium if the OEM wants something better, or at a discount if Intel has excess inventories of low-grade chips they need to sell.
Apple was the one who petitioned Intel to put the GPU on die, so they could get away with selling at higher prices with a lower cost to them. Do like I do: BLAME APPLE.
In conclusion, if you're an enthusiast who wants a high core count, Xeon is your only choice. For the price of the top-end Xeon you can buy a pretty decent second-hand car! We really need AMD to get back into the high-end game.
Hate to burst your bubble but AMD is going through a bit of a reset right now. Opteron 6400s in 2014. Minimal increase in performance.
The next-gen, ground-up architecture is 2015 - or, when you get your AMD rep drunk at a trade show, you hear more likely 2016. If they can pull it off, this is where they will become a player again.

Most of their attention at the moment is on Trinity-style APUs with minimal core counts, just like Intel's desktop stuff.
When are these E5 v2 Xeons gonna be out? Why release this first instead of the new Xeons?

Hardly any performance increase after 22 months. I get that they want to be able to sell the 12-core Xeon for three grand instead of one, but why can't they just add two extra cores to the 4960X instead of just adding 200MHz?
E5-2600 v2 is next week, Sept 10th. E5-4600 v2 and E5-2400 v2 will be at the very end of 2013 or early 2014. E7 (Ivy EX) will also be around January 2014; 15 cores is what I am hearing there.
Intel is so greedy. They could have made this chip 10 core / 20 thread and the die size still would have been less than SNB-E. For a high end part, a chip this small is just a slap in the face. I hope their greed costs them lots of $$.
Sure. Also, the TDP at close to 4 GHz would have been 220W. And the majority of customers would have tried to overclock them and drive 300W through them. And either complained because they damage too easily, or because of the lousy overclocking potential.
IB-E is a massive failure, just like SB-E. Thanks, Intel, for killing the high end for me. Actually, I think this is their plan: to kill the high end. It's ridiculous that this platform is so far behind the mainstream platform. 2x SATA 6Gb/s ports? No Intel USB 3.0? Worse single-threaded performance than mainstream? Sandy Bridge-E seemed like an unfinished project where many compromises were made, and Ivy-E looks the same.
Anand, you write that Corsair supplied 4x 8GB DDR3-1866 Vengeance Pro memory for the testbed. However, you also remark on "infrequent instability at stock voltages" with 32 GB. Then, in the legend of the memory latency chart, you write "Core i7-4960X (DDR3-1600)".
So I wonder which memory configuration was actually used during the benchmarks? Less than 32 GB with DDR3-1866, non-stock voltages, or 32GB DDR3-1600? Wouldn't anything but 4x DDR3-1866 be a little bit unfair because you otherwise don't utilise the full potential of the CPU?
Nice job Anand, your conclusion pretty much nailed it as to why LGA2011 doesn't cut it today and why this release is pretty ho-hum in general. I would've liked to have seen some 4820K results in there to better illustrate the difference between 4770K Haswell and SB-E, but I suppose that is limited by what review samples you received.
But yeah, unless you need the 2 extra cores or need double DIMM capacity, there's not much reason to go LGA2011/IVB-E over Haswell at this point. Even the PCIe lane benefit is hit or miss for Nvidia users, as PCIe 3.0 is not officially supported for Nvidia cards and their reg hack is hit or miss on some boards still.
The downsides of LGA2011 vs LGA1150 are much greater, imo, as you lose 4 extra SATA3(6G) ports and native USB 3.0 as you covered, along with much lower overall platform power consumption. The SATA3 situation is probably the worst though, as 2 isn't really enough to do much, but 6 opens up the possibility of an SSD boot drive along with a few really fast SATA3 RAID0 arrays.
I'm really disappointed by these numbers. As a software developer, the FireFox compile benchmark best indicates the benefit I would get from upgrading to this CPU. And, it looks like the 4770K would be about the same difference - except far less expensive. I really don't think I need anything more than 16GB for RAM, and one high-end graphics card is enough to drive my single WQHD (2560x1440) display. Do bragging rights count? No...I mean really?
What's a Xeon going to do? Be slower than the 4960X? You lose clock speed by going with huge core counts and that translates to even more losses in single threaded performance. There comes a point where there are diminishing returns on adding more cores... (see AMD)
Your best bet would be to hope for a desktop Crystalwell part. That extra cache should do wonders for compile times even if you'd lose a bit of clock speed. However, Intel is intentionally holding back the best socket 1150 parts they could offer, as the benefits of Crystalwell + TSX-optimized software would put performance into large-core-count Xeon territory in some cases.
Can you please add a comparison chart for the 4770K, the 8-core E5 Xeon, and the 4960X, with benchmarks included? This kind of makes little sense to me: X79 was behind on feature sets like full SATA3, when in reality a lot of these boards will be used as workstation/normal/gaming computers, and performance on those boards tends to suffer because of the lack of native support. Instead, 3rd-party chips are used to add extra features, which have significant drawbacks. I understand using the socket for 2 generations in order to extend the life of boards, as with 1366, but the leap to Haswell should have been taken, making a board last 2 years with the prime features that defined that generation. This just seems like Intel is ignoring its higher-end market due to lack of competition out there.
Kind of depressing that 3 years of technology only took the compile of Firefox from 23 minutes to 20 minutes. The high-end isn't looking so high these days.
So where's the 4820k review? I don't care much about more than 4 cores, but I need more I/O than Haswell offers. (crappy motherboards that offer either 8/4/4 or 8/8/2 are just unacceptable.) I'd like to know how the 4820k overclocks and handles I/O from dual and triple SLi/Crossfire.
Visual Studio unfortunately does not compile in parallel the way you might think. In a solution you may have multiple projects. If one project depends on four other projects, those four will be compiled in parallel; one project per thread. Once the four dependencies are built, it can build the fifth; however, that last project will be built single-threaded.
Xcode and native Android projects (with gcc) can actually build multiple files from one project in parallel. On an i7 with hyperthreading, all eight logical processors can build up to eight files simultaneously. This scales with more cores very nicely.
In summary, VS builds multiple projects from one solution in parallel, while gcc builds multiple files from one project in parallel; the latter of which is much faster.
I'm curious now to see the build times of Firefox for Mac on a rMBP with an i7. Eagerly waiting for a 12 core Mac Pro with 24 logical processors.
Visual Studio is a very poor parallel compilation test. GCC with make -j can really utilise a lot more cores, but it's not very Windows-like to use GCC (although I suspect many developers do that).

I haven't found many Java builds doing well on multiple cores, and Scala is no better. It's the unit tests where I get the cores going; I could saturate hundreds of cores with unit tests if I had them, and since I run them in the background on every change I certainly do get a lot of usage out of the extra cores. But a clean compile is not one of those cases where I see any benefit from the 6 cores. Of course, I would hope these days we don't do that very often.
Parallel file-level compilation is possible in VS2010 and up with the /MP project switch. This is not enabled by default I believe for compatibility reasons.
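For anyone who wants to try the options being discussed, a minimal sketch is below. The solution name and job count are placeholders, it assumes MSBuild and GNU make are on the PATH, and /MP itself is normally switched on per project (the multi-processor compilation setting) rather than on this command line:

```python
# Illustrative only: kicking off the parallel-build modes discussed above.
import subprocess

# MSBuild: "/m" builds independent projects in a solution concurrently
# (project-level parallelism). File-level parallelism inside a VC++ project
# comes from the compiler's /MP switch, which is off by default.
subprocess.run(["msbuild", "MySolution.sln", "/m"], check=True)

# GNU make: "-j 8" runs up to eight compile jobs at once (file-level parallelism).
subprocess.run(["make", "-j", "8"], check=True)
```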
A Haswell-E will most likely bring a different pin count, correct? So this X79 is a dead-end platform any way you look at it. Buying the quad-core IVB-E makes almost no sense whatsoever.
Most Intel chips use a Tick-Tock release cycle: Tick, Tock, Tick, Tock, and so on.

Tick is an incremental upgrade: same socket and largely the same design, but reduced lithography (32nm down to 22nm, for example). Sometimes new instructions, but often not.

Tock is an overhaul upgrade: it uses the same lithography as the previous gen, but is a new internal architecture, often a new socket, and is where most new instruction sets show up. Then you get another Tick.
Core 2/Conroe was a Tock and was 65nm
Core 2/Penryn was a Tick and was 45nm
Core iX/Nehalem was a Tock and was 45nm
Core iX/Westmere was a Tick and was 32nm
Core iX/Sandy Bridge was a Tock and was 32nm
Core iX/Ivy Bridge is a Tick and is 22nm
Core iX/Haswell is a Tock and is 22nm
So to say that X79 is a dead platform should not really be a shock to anyone. They got Sandy and Ivy out of it - that's 1 Tock and 1 Tick - and now it's time to move on. They do this exact same thing in the 2P server market, where people spend $10K or more per server. The fact of the matter is the server market has already pretty much learned: don't bother upgrading that server/machine, just ride it for 3-4 years and then replace it completely. SATA, memory and CPUs have all changed enough by then that you want to reset everything anyway.
Is the IHS soldered or using the cheap thermal material? The issue with desktop IB & Haswell overclockability has been proven to be the cheap thermal material between the chip and the IHS. If the chip is soldered to the IHS then this will be a decent upgrade over straight IB.

Considering the power consumption, clock speed, overclock and temperature obtained, it's looking most likely that this is the same interface as SB-E - i.e. it's soldered. Not that it makes much difference, as just like SB-E it doesn't actually overclock all that well compared to its 4-core sibling.
Look at the results: temperature is not the main problem any more due to the bigger die, but OC is still not good at 4.3 GHz / 1.4 V. Actually I'd say this is ridiculously bad compared to earlier 22 nm chips (my Ivy can do this at ~1.1 V).
And I recently got a 3770K which requires 1.11 V to even hit 4.0 GHz! It seems to me Intel's current process is to blame for Haswell OC rather than the thermal paste. Sure, temps drop when replacing the paste... but OC doesn't improve all that much, does it? And if Ivy and Ivy-E don't clock all that well either...
If SNB-E @ 435 mm^2 fit into 130W then they could have made IVB-E @ 435 mm^2 fit into about the same power envelope. If they had to drop the clocks a couple hundred MHz then that's a small price to pay for 10 cores.
Oh, I've been waiting for this! Now the most important question to me... which motherboard is everyone getting an Ivy Bridge-E going to use? I'm doing a custom water-cooled loop, if that makes any difference.
Wow! 40 PCIe lanes sounds great until you remember socket 2011 still only supports two 'true' SATA3 ports and no native USB3. PCIe storage is never a smooth experience. It's a shame Intel seems unconcerned with power users that are not enterprise-based.
What a shame these don't support ECC memory! I want it back for the enthusiast!
I'm a scientist; what am I supposed to purchase (privately)? I want a beefy machine for physics simulations at home that run for days/weeks. Or what if a private person wants to run several VMs? The E-series would provide nice performance, but no ECC - what a shame!

Right now one has to pick between speed without ECC (chances are it crashes on you) or a chip with relatively slow performance (clock-wise) to get ECC, at the same price point. The high-end Xeon CPUs are out of the question.

@Anand, please point out to the Intel representatives you meet that there's a market for this! One has to consider AMD at this point: they offer many threads + ECC at a consumer price point. Granted, they're slower, but the premium for the Intel chips with ECC is just out of proportion for private use.
Dual Xeons: I have a Supermicro 2U unit with low-voltage Xeons (they were $650ish each) and they're great. You can pick and choose a board to have as much ECC memory as you want!
The E5-1660 will be the same as the 4960X, just not unlocked and with ECC. Same 6 cores, same 3.6-4.0 GHz range. Is the overclocking really worth all the hassle for maybe 20% speed increase, even if you had ECC?
But I generally agree, it looks like Ivy Bridge 49xx/E5-16xx v2 is probably worth skipping. The upgrade over Sandy is not that much, and Haswell will likely bring 8-cores to the 59xx/16xx v3 space. Ivy Bridge for the top end only really made big gains in the 26xx space thanks to adding 10 and 12 core options, but man do you pay for them....
VERY VERY big congratulations to AnandTech for having put the old flagship CPUs in the test! Now we can really compare and read much more deeply into the evolution of and interest in this architecture.

It is great that someone understood that people do not buy or change their whole computer each year, but only every 3-4 years.
Would be fascinated to see some statistics re tri- and quad-SLI usage, as I'm already using 3 Titans. There seems to be almost no coverage at that end of the scale, despite it being one of the target markets for this chip.
This reminds me of years ago when I had access to the first DEC Alpha with its super-fast clock speed and fast IPC, an IBM AIX RS/6000, an HP 9000 PA-RISC and a multi-socket SPARC. The Alpha was the fastest by a long way versus multi-core, even on our SAP systems. I still use this as a rule of thumb: for most tasks a faster-clocked processor is better most of the time, except for very specific situations - and in general you will know what they are. I'm battling with what to do with my next upgrade. I really wanted a top-end Ivy-E, but it really doesn't seem worth it compared to a 4770, even with my need to run big VMs. Hell, it's not enough faster than my i5-2500S iMac to be worth it.
Very interesting review. Coming from an i7-930 OC'd to 4.3GHz, it makes you debate whether it makes sense to upgrade to the 4930K and a RIVE.
For those considering it, there is also another benchmark review of the 3930K vs i7-930 on 2-, 3-, & 4-way Titan SLI GK110 scaling at 7680x1440 and 7680x1600. Worth a look.
Should Nvidia's ShadowPlay turn out to be rubbish, then I could use that CPU for encoding recorded gameplay; otherwise there's no benefit in having a 4960X over a 4770K. It won't improve my gaming experience any. Two GTX 780s will do that.
This is a great article for informing us about the latest and greatest architecture and chips. Anyway, I too have stuck with my powerhouse system since 2006; mine is the Q9550 and a P35 mobo, DDR2, a second-generation SSD and so forth! It has been rock solid and frankly I don't think I remember the last time it crashed - come to think of it, it has never crashed! (...and I am able to have all my progs running plus 30-plus instances of Chrome, yeah I should do something about that!)

So there you have it. Now I'm looking at LGA 2011 with X79 and maybe (if I can convince myself of the benefits of having a six-core system) an i7 4930 or 4960... but I will probably go with their 4-core, 10 or 12 meg cache CPUs. The difference for having 2 more cores is 200 bux! I don't have any problem spending 3 bills on a nice CPU, but when it comes to 2 more bills just to get 2 more cores, I really can't convince myself of that, since I can think of so many other ways to spend those two bills with more return, so to speak! So yeah, building a powerhouse of a system that costs a little more but is rock solid and still faster than 90 percent of what is out there is worth it for me, and I am sure others will say the same! Or maybe I can just get the better P45 chipset and stick with this system for another two to three years!
I edit video from time to time and am currently using an i7-2600k system built in May of 2011. I've been looking at the 6 core i7-3930k, but not sure if it provides enough of an increase to build a new system. Are we closer to an 8 core solution coming out under $1000 in 2014? What's on the way?
You guys should really, really include ArmA as a benchmark for CPU gaming performance, as it sees pretty much constant improvements as the CPU gets more potent. I do not understand why more sites don't use A3 for CPU benchmarking when they touch on gaming performance.
wsaenotsock - Tuesday, September 3, 2013 - link
How does Intel's closed-loop cooling package compare to say, Corsair's or other similar products?chizow - Tuesday, September 3, 2013 - link
Probably within 1-2C of similar "extra wide" 120x37mm closed-loop coolers. Looks like Intel's solution is made by Asetek going by it's block and mounting design, so I'd compare it against the Antec 920 for starters.Samus - Tuesday, September 3, 2013 - link
Still running i7-950 system (was an i7-920 back in 2008) and all I've upgraded since building it is a small bump in CPU speed, added water cooling, and installed two GTX660's in place of two GTX460's installed in 2010, which replaced the Radeon 4870x2 from the original 2008 assembly date. I've also replaced the original 500GB Seagate Boot Drive from 2008 with an Intel 160GB X25-M in 2010. Still use the same SSD to this day.Same motherboard, same 6x2GB G.skill DDR3-1600 modules (that cost $600 back in 2008) and same PC Power & Cooling 750-QUAD.
I've added a USB 3.0 PCIe controller as well.
Overall, this is the longest (5 years) I've ever owned a system that retained the same motherboard. The irony is Intel discontinued Socket 1366 so fast it wasn't even funny. It was actively supported less than 2 years, and only 2 generations of chips (using the same architecture and process) were made within a year of each other, essentially giving this socket a 15-month lifetime.
But 5 years later, a system built on this socket is still faster than 90% of the production systems today.
Assimilator87 - Tuesday, September 3, 2013 - link
Yeah, for 1366 owners, there's absolutely no reason to upgrade, especially with overclocking. At least you got one generation of upgrades, unlike 1156 owners who got completely screwed.P.S. The second gen upgrade on 1366 (Westmere) was a new architecture and process. They shrunk down from 45nm to 32nm and added AES instructions.
Inso-ThinkTank - Sunday, January 19, 2014 - link
I'm a current 1156 socket owner running I7 875K @ 4.2. My rig is still running strong, but I'm ready for an upgrade. Just purchase a 4960x with 16 gig of Corsair Dominator Platinum at 2400 and the Asus Black Edition mobo. Hope the spending is worth it.evilspoons - Tuesday, September 3, 2013 - link
What does this have to do with chizow's comment about the closed-loop cooler??JPForums - Thursday, September 5, 2013 - link
Absolutely nothing. I'm guessing it was just an easy way to get posted near the top.
If I recall correctly, the 920 is 49mm thick. Also, I've found that fan selection can make more than a little difference. I would not expect the Intel cooler to match Antec's 920, given their history of racing to the bottom with cooler components. That said, it should beat the 620 and similar 120x120x25mm closed loop systems (assuming they didn't screw up the fan selection in epic manner).
foursixty - Saturday, April 5, 2014 - link
I run a i7950 at 4.07 ghz with a overclocking thermaltake cooler, 3x 580's and 12 gb ddr, am now upgrading to the i74960x, thermaltake water 3.0 and 2 x asus gtx 780ti sli, 32 gb ddr3. the old rig is still going strong and will use it for a simulator pc as i have a g27 sitting doing nothing. Great machine and has served me well!~foursixty - Saturday, April 5, 2014 - link
just might add for the asus sabertooth x79, i74960x, 32gb 2400 ddr3, and 2 asus gtx 780ti oc cards is a $4000 upgrade, been doing a lot of overtime so i thought i would update while i got the extra cashjust4U - Friday, September 6, 2013 - link
Corsairs closed loop and Intel's appear to be built by the same company.. Some of Corsair's earlier attempts were noisy whereas you didn't really have that problem with Intel's. Overall I think it's a pretty solid contender with very few faults. There is better on the market obviously... but it's decent for it's price.Snoopykins - Tuesday, September 3, 2013 - link
Excellent review. I must say the reviews Anand himself writes always seem to be spot on. One question, where does the 22 months number come from? Is that the scheduled/rumored/leaked release date of Haswell-E? If so, isn't that going to be even more behind than Ivy-E? I'm assuming Skylake will have been out for longer than Haswell has been out now, no? Also, do we know that Skylake will support DDR4? I've heard Haswell-E will.Anywho, thanks for the excellent review, I thoroughly enjoyed it.
eiriklf - Tuesday, September 3, 2013 - link
22 months is about the time between the releases of sandy bridge extreme and ivy bridge extreme1Angelreloaded - Tuesday, September 3, 2013 - link
Haswell-e will if they move to a new socket and board DDR4 probably won't be compatible with DDR3 DIMM sockets.hrrmph - Tuesday, September 3, 2013 - link
Yep, that was a great review.It's not often that I think that a review really nails it all (at least for what I need), but this one did it. Very balanced, taking shots at the weak points, while also thoroughly explaining what is good about it, and who it might benefit.
X79 is adequate only if you are willing to load your machine up with lots of add-in controller cards.
Intel will really need to up their game on the chipset when Haswell-E gets here.
jasonelmore - Tuesday, September 3, 2013 - link
most modern x79 boards have all the add on controllers soldered onto the motherboard. Cards are not needed.just4U - Friday, September 6, 2013 - link
With so few new things coming out in the desktop division these days Im sure Anand is quite fine to getting back into the trenches to do the odd review. Used to be (or so it seemed) every other day we were getting glimpses at exciting new things but these days that appears in the form of tablets and smartphones /w the desktop industry being somewhat stagnant in my opinion.dishayu - Tuesday, September 3, 2013 - link
At this rate, as Anand suggests, Haswell-E will come out around/after Skylake based Desktop parts(assuming that is still on track for 2015 release). I am convinced that it would have been a better approach to skip Haswell-E altogether and jump straight to Skylake-E in 2015. This logic is further supported by the fact that next gen will require a new socket design (since Haswell comes with FIVR).jasonelmore - Tuesday, September 3, 2013 - link
there is still a node jump to go through. Most of the time this delays projected release dates. Broadwell may be late.Kevin G - Wednesday, September 4, 2013 - link
Since there is going to be a Haswell refresh in 2014, I'd expect Haswell-E to be introduced around the same time: roughly a year from now. The delay of Broadwell on the desktop will allow Haswell-E to catch up in cadence a bit.The new question is when Broadwell-E will arrive: with Skylake or vanilla Broadwell on the desktop?
f0d - Tuesday, September 3, 2013 - link
nice cpu for those that have an older platform (like 1156 or 1366 maybe) but as someone that already owns a 2011 socket cpu im still hoping they will eventually release 8 or even 10/12 core cpu's so i can encode vids fastercmon intel release those 8/10/12 core cpu's for the enthusiast platform.!
ShieTar - Tuesday, September 3, 2013 - link
Whats the point? A 10-core only runs at 2GHz, and a 8-core only runs at 3 GHz, so both have less overall performance than a 6-core overclocked to more than 4GHz. You simply cannot put more computing power into a reasonable power envelope for a single socket. If a water-cooled Enthusiast 6-core is not enough for your needs, you automatically need a 2-socket system.And its not like that is not feasible for enthusiasts. The ASUS Z9PE-D8 WS, the EVGA Classified SR-X and the Supermicro X9DAE are mainboard aiming at the enthusiast / workstation market, combining two sockets for XEON-26xx with the capability to run GPUs in SLI/CrossFire. And if you are looking to spend significantly more than 1k$ for a CPU, the 400$ on those boards and the extra cost for ECC Memory should not scare you either.
Just go and check Anandtech own benchmarking: http://www.anandtech.com/show/6808/westmereep-to-s... . It's clear that you need two 8-cores to be faster then the enthusiast 6-cores even before overclocking is taken into account.
Maybe with Haswell-E we can get 8 cores with >3.5GHz into <130W, but with Ivy Bridge, there is simply no point.
f0d - Tuesday, September 3, 2013 - link
who cares if the power envelope is "reasonable"?i already have my SBE overclocked to 5.125Ghz and if they release a 10core i would oc that thing like a mutha******
that link you posted is EXACTLY why i want a 10/12 core instead of dual socket (which i could afford if it made sense performance wise) - its obvious that video encoding doesnt work well with NUMA and dual sockets but it does work well with multi cored single cpu's
so i say give me a 10 core and let me OC it like crazy - i dont care if it ends up using 350W+ i have some pretty insane watercooling to suck it up (3k ultra kaze's in push/pull on a rx480rad 24v laingd5s raystorm wb - a little over the top but isnt that what these extreme cpu's are for?)
1Angelreloaded - Tuesday, September 3, 2013 - link
I have to agree with you in the extreme market who gives a damn about being green, most will run 1200watt Plat mod PSUs with an added extra 450 watt in the background, and 4GPUs as this is pretty much the only reason to buy into 2011 socket in the first place 2 extra cors and 40x PCIe lanes.crouton - Tuesday, September 3, 2013 - link
I could not agree with you more! I have a OC'd i920 that just keeps chugging along and if I'm going to drop some coin on an upgrade, I want it to be an UPGRADE. Let ME decide what's reasonable for power consumption. If I burn up a 8/10 core CPU with some crazy cooling solution then it's MY fault. I accept this. This is the hobby that I've chosen and it comes with risks. This is not some elementary school "color by numbers" hobby where you can follow a simple set of instructions to get the desired result in 10 minutes. This is for the big boys. It takes weeks or more to get it right and even then, we know we can do better. Not interested in XEON either.Assimilator87 - Tuesday, September 3, 2013 - link
The 12 core models run at 2.7Ghz, which will be slightly faster than six cores at 5.125Ghz. You could also bump up the bclk to 105, which would put the CPU at 2.835Ghz.Casper42 - Tuesday, September 3, 2013 - link
2690 v2 will be 10c @ 3.0 and 130W. Effectively 30Ghz.2697 v2 will be 12c @ 2.7 and 130W. Effectively 32.4Ghz
Assuming a 6 Core OC'd to 5Ghz Stable, 6c @ 5.0 and 150W? (More Power due to OC)
effectively 30Ghz.
So tell me again how a highly OC'd and large unavailable to the masses 6c is better than a 10/12c when you need Multiple Threads?
Keep in mind those 10 and 12 core Server CPUs are almost entirely AIR cooled and not overclocked.
I think they should have released an 8 and 10 core Enthusiast CPU. Hike up the price and let the market decide which one they want.
MrSpadge - Tuesday, September 3, 2013 - link
6c @ 5.0 will eat more like 200+ W instead of 130/150.ShieTar - Wednesday, September 4, 2013 - link
For Sandy Bridge, we had:2687, 8c @ 3.1 GHz => 24.8 GHz effectively
3970X, 6c @ 3.5 GHz => 21 GHz before overclocking, only 4.2 GHz required to exceed the Xeon.
Fair enough, for Ivy Bridge Xeons, the 10core at 3 GHz has been announced. I'll believe that claim when I see some actual benchmarks on it. I have some serious doubts that a 10core at 3 GHz can actually use less power than an 8 core at 3.4 GHz. So lets see on what frequency those parts will actually run, under load.
Furthermore, the effective GHz are not the whole truth, even on highly parallel tasks. While cache seems to scale with the number of cores for most Xeons, memory bandwidth does not, and there are always overheads due to the common use of the L3 cache and the memory.
Finally, not directly towards you but to the several people talking about "green": that's entirely not the point. No matter how much power your cooling system can remove, you are always creating thermal gradients when generating too much heat in a very small space. Why do you think there was no 3.5GHz 8-core for Sandy Bridge-EP? The silicon is the same for the 6-core and the 8-core, and the cores themselves could run that speed. But Intel is not going to verify the continued operation of a chip with a TDP >150W.
They give a little leeway when it comes to the K-class, because there the risk is with the customer to a certain point. But they just won't sell a CPU which reliably destroys itself or the motherboard the moment somebody tries to overclock it.
psyq321 - Thursday, September 5, 2013 - link
I am getting 34.86 in Cinebench with dual Xeon 2697 v2 running @ 3 GHz (max all-core turbo). Good luck reaching that with a superclocked 4930/4960X ;-)
piroroadkill - Tuesday, September 3, 2013 - link
All I really learn from these high-end CPU results is that if you actually invested in high-end 1366 in the form of the 980X all that time ago, you've got probably the longest-lasting system, in terms of good performance, that I can even think of.
madmilk - Tuesday, September 3, 2013 - link
If you invested in the 980 or the 970 (not the extreme ones) you got an awesome deal. Three years old, $600, overclockable, and within 30% of the 4960X on practically everything.
bobbozzo - Tuesday, September 3, 2013 - link
True, but my Haswell i5-4670K was around $200 for the CPU (on sale), and under $150 for an ASUS Z87-Plus motherboard. It's running on air cooling at 4.5/4.5/4.5/4.4GHz.
I wasn't expecting it to be as fast for gaming as an i7-4770k, but looking at the gaming benchmarks in this article, I'm extremely pleased that I did not spend more for the i7.
althaz - Tuesday, September 3, 2013 - link
I had a launch-model Core 2 Duo (the E6300) that with overclocking (1.86GHz => 2.77GHz) was a pretty decent CPU until last year (when I replaced it with an Ivy Bridge Core i5). That's what, six years out of the CPU? And it's still going strong for my buddy (to whom it now belongs).
Kevin G - Tuesday, September 3, 2013 - link
"My biggest complaint about IVB-E isn't that it's bad, it's just that it could be so much more. With a modern chipset, an affordable 6-core variant (and/or a high-end 8-core option) and at least using a current gen architecture, this ultra high-end enthusiast platform could be very compelling."I think that you answered why Intel isn't going this route earlier in the article. Consumers are getting the smaller 6 core Ivy Bridge-E chip. There is also a massive 12 core chip due soon for socket 2011 based servers. Harvesting an 8 core versions from the 12 core die is an expensive proposition and something Intel may not have the volumes for (they're not going to hinder 10 and 12 core capable dies to supply 8 core volumes to consumers). Still, if Intel wanted to, they could release an 8 core Sandy bridge-E chip and use that for their flag ship processor since the architectural differences between Sandy and Ivy Bridge are minor.
The chipset situation just sucks. Intel didn't even have to release a new chipset, they could have released an updated X79 (Z79 perhaps?) that fixed the initial bugs. For example, ship with SAS ports enabled and running at 6 Gbit speeds.
Sabresiberian - Tuesday, September 3, 2013 - link
"The big advantages that IVB-E brings to the table are a ridiculous number of PCIe lanes , a quad-channel memory interface and 2 more cores in its highest end configuration."I'm going to pick on you a little bit here Anand, because I think it is important that we convey an accurate image to Intel about what we as end-users want from the hardware they design. 40 PCIe 3.0 lanes is NOT "ridiculous". In fact, for my purposes I would call it "inadequate". Sure, "my purposes" are running 3 2560x1440 screens @ 120Hz and that isn't the average rig today, but I want to suggest it isn't far off what people are now asking for. We should be encouraging Intel to give us more PCIe connectivity, not implying we have too much already. :)
canthearu - Tuesday, September 3, 2013 - link
Actually, you would find that you are still badly limited by graphics power rather than by system bandwidth. A modern graphics card doesn't even stress eight lanes of PCIe 3.0.
I'm also not saying that it is a bad thing to have lots of I/O; it isn't. However, you do need to know where your bottlenecks are. Otherwise you spend money trying to fix the wrong thing.
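For a rough sense of why eight lanes are enough, PCIe 3.0 delivers roughly 985 MB/s per lane after 128b/130b encoding; a minimal sketch of the per-width bandwidth (the per-lane figure is the commonly quoted approximation):

```python
# Approximate one-way PCIe 3.0 bandwidth per link width (128b/130b encoding).
PCIE3_GBPS_PER_LANE = 0.985  # ~GB/s per lane, approximate

for lanes in (4, 8, 16):
    print(f"PCIe 3.0 x{lanes}: ~{lanes * PCIE3_GBPS_PER_LANE:.1f} GB/s each way")
```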
The Melon - Tuesday, September 3, 2013 - link
Not all high-bandwidth PCIe cards are graphics cards. I for one would like to be able to run 2x PCIe x16 GPUs and at least one each of an LSI SAS 2008 HBA, dual-port DDR or QDR InfiniBand, dual-port 10GbE, and perhaps an actual RAID card.
Sure, that is a somewhat extreme example. But you can only run one of those expansion cards plus two GPUs before you run out of lanes. This is an enthusiast platform after all. Many of us are going to want to do extreme things with it.
Flunk - Tuesday, September 3, 2013 - link
Now you're just being silly; spending $10,000 on a system without any real increase in performance for anything you're going to do on a desktop/workstation is just stupid. Besides, if you're being incredibly stupid you'd need to go quad Xeons anyway (160 PCI-E lanes FTW).
Azethoth - Tuesday, September 3, 2013 - link
On the one hand, good review. On the other hand, my dream of a new build in the "performance" line is snuffed out. It just seems so lame making all these compromises vs Haswell, and basically things will never get better, because the platform target is shifting to mobile, so battery life is key and performance parts will just never be a focus again.
f0d - Tuesday, September 3, 2013 - link
I feel the same way. The future doesn't look too bright for the performance enthusiast - I don't want low-power, smaller CPUs, I want BIG 8/12-core CPUs and I don't really give a crap about power usage.
Rick83 - Tuesday, September 3, 2013 - link
If you really want that number of cores, Ivy Bridge E5/E7 Xeons are going to deliver it, in the 150W power envelope. This is useful in the server market, but will only sell in homeopathic quantities in the desktop market. Still, you should be able to find them at retail around Christmas. Knock yourself out!
Really, IB-E is a free product for Intel, which is the only reason it made it to market at all. They need the 6-core dies for the medium-density servers anyway, which is where they actually make sense over SB Xeons, due to the smaller power envelope/higher efficiency. The investment to turn that die into a consumer product on an existing platform is almost zero, short of a small marketing budget and possibly a tiny bit of (re-)validation.
This was never a product designed for the enthusiast market, and is being shoe-horned into that position. Due to the smaller die Intel can probably make a better margin than on SB-E, which is the only reason to introduce this product in the sector anyway, and possibly to get some brand awareness going with the launch of a new flagship.
From an economic point of view it makes no sense for Intel to have an actual enthusiast platform. The Haswell refresh is unlikely to bring more cores either (and without the extra I/O they would be a bit hobbled, I imagine), so possibly with Skylake there will be a 6-core upper-mainstream solution. Still unlikely from an economic point of view, as Intel would probably prefer sticking to two dies, and going 6/4 may not be economical, whereas selling 6-core CPUs as quads (as they do with the 48xx) doesn't work that well in the part of the market that generates reasonable volume.
f0d - Tuesday, September 3, 2013 - link
The problem with Xeons is that you can't overclock them, so my 5GHz SB-E would be close to as good as an 8/10-core Xeon. I don't really care about why Intel isn't releasing high core count CPUs, I just know I want them at a decent price ($1k and under) and overclockable - these 6-core ones just don't make the cut anymore.
I just hate the direction CPUs are going, with low power, low core counts and highly integrated everything. Five years ago I was dreaming of 8-core CPUs being standard about now, but we still have 4 (6 with SB-E) cores as standard, which blows, and per-core performance hasn't really changed much going from Sandy Bridge to Haswell.
I don't care about power and heat, just give me the performance I want to encode highest-quality Handbrake movies in less than 24 hours!
ShieTar - Tuesday, September 3, 2013 - link
"All" you want is Intel to invest a massive development effort in order to produce for the first time an overclocking CPU with a TDP of around 200W, with silicone for which their business customers would pay 2k$ to 3k$, and sell it to you and the other 500 people in your niche for less than 1k$?Intel already offer you a solution if you need more processing power than the enthusiast solution gives you: 2 socket workstation boards, 4 socket server boards, 60-core co-processor cards.
f0d - Tuesday, September 3, 2013 - link
2-socket is inefficient for my workloads. They could just release a Xeon that is unlocked and let me do what I want with it - it's not like the workstation/server guys would overclock, so it's not like Intel would be losing any money.
No development needed.
2-3k? I can already buy an 8-core SB-E for 1k - why not let me OC that?
wallysb01 - Tuesday, September 3, 2013 - link
Many would overclock when Intel is charging hundreds of dollars for just small GHz bumps. You won't see the academic or large corporate clusters doing it, but the small businesses with just a handful of workstations? They might. Look at the 2660 v2 at 2.2GHz for $1590 and the 2680 v2 at 2.8GHz for $1943. That's $353 for 600MHz. On a dual-processor system it's $700, and then you have to pay the markups from those actually selling the computers (i.e. Dell/HP), which takes that $700 to $1000 or more. One small little tweak and you're saving yourself $1000, while not stressing the system all that much (assuming you don't go crazy and try to get 3.5GHz from that 2.2GHz base chip).
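Spelling that arithmetic out - a minimal sketch using the list prices quoted above; the OEM markup factor is an illustrative assumption, not a quoted figure:

```python
# Cost of buying the factory clock bump vs. overclocking, per the prices quoted above.
price_2660v2, ghz_2660v2 = 1590, 2.2   # E5-2660 v2
price_2680v2, ghz_2680v2 = 1943, 2.8   # E5-2680 v2

per_cpu_delta = price_2680v2 - price_2660v2             # $353 for 600 MHz
dual_socket_delta = 2 * per_cpu_delta                    # ~$706 before markup
oem_markup = 1.4                                         # assumed Dell/HP markup factor

print(f"Per CPU: ${per_cpu_delta} for {ghz_2680v2 - ghz_2660v2:.1f} GHz")
print(f"Dual socket, after assumed markup: ~${dual_socket_delta * oem_markup:.0f}")
```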
mapesdhs - Wednesday, September 4, 2013 - link
The catch though is that the motherboards used for these systems don't have BIOS setups which support OCing, and the people who use them aren't generally experienced in such things. I know someone at a larger movie company who said it'd be neat to be able to experiment with this, especially an unlocked XEON, but in reality the pressures of time, the scale of the budgets involved, the large number of systems used for renderfarms, the OS management required, etc., all mean it's easier to just buy off-the-shelf units and scale as required (the renderfarm at the company I'm thinking of has more than 7000 cores total, mostly based on Dell blade servers), and management isn't that interested in doing anything different or innovative/risky. It's easy to think a smaller company might be more likely to try such a thing, but in reality for a smaller company it would be a much larger financial risk. Bigger companies could afford to try it, but aren't geared up for such ideas.
Btw, OCing a XEON is viable with single-socket motherboards that happen to support them and have chipsets which don't rely on the CPU multiplier for OCing, e.g. an X5570 on an Asrock X58 Extreme6 works ok (I have one); the chip advantage is a higher TDP and 50% faster QPI compared to a clock-comparable i7 950.
Sadly, other companies often don't bother supporting XEONs anyway; Gigabyte does on some of its boards (the X58A-UD3R is a good example) but ASUS tends not to.
Some have posted about core efficiency and they're correct; I have a Dell T7500 with two X5570s, but my OC'd 3930K beats it for highly threaded tasks such as CB 11.5, and it's about 2X faster for single-threaded ops. The 3930K's faster RAM probably helps as well (64GB DDR3/2400, vs. only DDR3/1333 in the Dell, which one can't change).
Someone commented about Intel releasing an unlocked XEON. Of course they could, but they won't because they don't need to, and business users wouldn't really care; it's not what they want. Note that power efficiency is very important for big server setups, something which OCing can of course utterly ruin. :D Someone said who cares about power guzzling when it comes to enthusiast builds, and that's true, but when it comes to XEONs the main target market does care, so again Intel has no incentive to bother releasing an unlocked XEON.
I agree with the poster who said 40 PCIe lanes isn't ridiculous. We had such provision with X58, so if anything, for a top-end platform only 40 lanes isn't that impressive IMO. Far worse is the continued limit of just 2 SATA3 ports; that really is a pain, because the 3rd-party controllers are generally awful. The Asrock X79 Extreme11 solved this to some extent by offering onboard SAS, but they kinda crippled it by not having any cache RAM as part of the built-in SAS chip.
Ian.
wallysb01 - Wednesday, September 4, 2013 - link
"It's easy to think a smaller company might be more likely to try such a thing, but in reality for a smaller company it would be a much larger financial risk to do so. Bigger companies could afford to try it, but aren't geared up for such ideas.Btw, oc'ing a XEON is viable with single-socket mbds that happen to support them and have chipsets which don't rely on the CPU multiplier for oc'ing, eg. an X5570 on an Asrock X58 xtreme6 works ok (I have one); the chip advantage is a higher TDP and 50% faster QPI compared to a clock-comparable i7 950."
These two statements work against each other. If OCing a single-socket Xeon is relatively easy (where supported), there isn't much reason a dual-socket Xeon setup couldn't be OCed within reason without much effort.
I'm not going to say this would be a common thing, but small shops run by someone with a "tinkerer" mindset towards computing would certainly be interested in attempting to get that extra 10-20% performance, which Intel would charge another $1000 or more for, but which they'd get for free.
psyq321 - Thursday, September 5, 2013 - link
Z9PE-D8 WS has decent overclocking options (not like their consumer X79 boards, but not bad either). However, apart from a small BCLK bump, this is useless as SNB-EP and IVB-EP Xeons are locked.
The best I can do with dual Xeon 2697 v2 is ~3150 MHz (I might be able to go a bit further but I did not bother) for all-core turbo.
Even if Intel ignored the business reasons NOT to allow Xeon overclocking (forcing high-performance-trading people, who have shown a willingness to overclock, to buy more expensive Xeons, and avoiding cannibalizing the market for the more expensive EX parts), technically this would be a huge challenge.
Why? Well, the 12-core Xeon 2697's power usage would literally explode if you allowed it to run at 4+ GHz with the voltages normally seen in the overclocking world. I am sure the power draw of a single part would be more than 300W, so 600W for a dual-socket board.
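A rough way to see where that 300W+ figure comes from: dynamic power scales roughly with frequency and with the square of voltage. A minimal sketch, where the stock and overclocked voltages are illustrative assumptions rather than measured values:

```python
# Rough dynamic-power scaling: P ~ P0 * (f / f0) * (V / V0)**2
tdp_stock_w = 130     # E5-2697 v2 TDP
f_stock_ghz = 3.0     # all-core turbo discussed above
v_stock     = 1.00    # assumed stock voltage
f_oc_ghz    = 4.2     # hypothetical overclock
v_oc        = 1.35    # assumed enthusiast-style OC voltage

p_oc = tdp_stock_w * (f_oc_ghz / f_stock_ghz) * (v_oc / v_stock) ** 2
print(f"Estimated per-socket draw at {f_oc_ghz} GHz: ~{p_oc:.0f} W")  # roughly 330 W
```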
This is not unheard of (after all, high-end GPUs can draw comparable power) - however, this would mandate significantly higher specs for the motherboard components and put people in actual danger of fires by using inadequate components.
Maybe when Intel moves to Haswell-E/EP - when voltage regulation becomes the CPU's business - they can find a way to allow overclocking of such huge CPUs after lots of checks pass. Otherwise, Intel runs a huge risk of being sued for causing fires.
mapesdhs - Sunday, August 13, 2017 - link
Four years later, who could have imagined we'd end up with Threadripper, and the mess Intel is now in? Funny old world. :D
stephenbrooks - Monday, September 23, 2013 - link
--[i just hate the direction cpu's are going with low power low core count highly integrated everythiing, 5 years ago i was dreaming of 8 core cpu's being standard about now but we still have 4]--
So I got an AMD FX8350; that's 8 cores and 4GHz before turbo. Quite a bit cheaper than Intel's too.
OK, obviously AMD gets fewer operations per clock and the 8 cores only have 4 "real" FPUs between them, but I wanted 8 cores to test the scaling of computer programs without breaking the bank.
knirfie - Tuesday, September 3, 2013 - link
Why not use a gaming benchmark that does benefit from the extra cores, such as Civ5?
ShieTar - Tuesday, September 3, 2013 - link
Or Starcraft 2?
JarredWalton - Tuesday, September 3, 2013 - link
Hahahaha.... best 1.5 core benchmark around!
althaz - Tuesday, September 3, 2013 - link
It sure does murder one core though.
BrightCandle - Tuesday, September 3, 2013 - link
Or Arma 3. There are games out there that can utilise more cores, and yet you didn't test with any of them.
bds71 - Tuesday, September 3, 2013 - link
As someone who regularly does encoding, 4K gaming, and (when not otherwise in use) Folding@home - all things which can fully leverage multi-core processors and powerful GPUs - I look forward to these reviews of new enthusiast-class processors. And it saddens me that since SB-E there have been only marginal improvements in this sector. I never thought we (as a technology powerhouse, and as a society) would settle for this. For me, it all began when they started putting GPUs on-die with CPUs for desktop PCs (sure, for laptops I can certainly understand it) - I mean, who DOESN'T use a discrete GPU in a desktop system? And for those who do, why not just get a laptop?
GPUs on-die took the focus away from the CPU. And while there are minimal gains to be had, the showing here today is abysmal: two years of waiting and we get a 5% increase (for what I do, I want power and couldn't care less about power draw - as I would say most enthusiasts do). I get it - to build more powerful hardware, it HAS to become more efficient, but it's an evolutionary development process. Haswell could very easily be an enthusiast-class product: get rid of the ridiculous GPU (for the desktop), double the core count, and raise the TDP to 125/130W (Haswell-E?) - and they could do it a LOT earlier than 1-2 years from now. Come on Intel - stop screwing the guys you built your reputation on (after all, it's always the fastest/most powerful hardware that's shown in reviews to boost the reputation of any company).
/rant off/ Sorry, this is just very disappointing.
f0d - Tuesday, September 3, 2013 - link
I agree, very disappointing. Too much integration and not enough performance is the problem with modern Intel CPUs.
I don't want integrated graphics and VRMs and whatever else they plan on integrating - I want huge core counts in a single die for the enthusiast platform!
jabber - Tuesday, September 3, 2013 - link
I think what folks forget is: just how many of these chips does Intel actually sell a year?
I bet it's tiny. I bet the i3/i5 chips outsell them 50 to 1. That's why stuff isn't happening at the top so much. The demand has dwindled. Ten years ago a lot of people could eat all the CPU power they could get their hands on. Now? Not so much. Plenty of people are still happy with their 2008/9-spec quads. Basically these top-end Intel i7 chips are the Mercedes S-Class: a way for Intel to put in new stuff and techniques that may or may not filter down to future generations.
Intel knows the figures and it knows that the action is at the other end of the spectrum. Not for folks that largely want to rip video and run benchmarks all day.
f0d - Tuesday, September 3, 2013 - link
I agree they don't sell as many as the lower-end CPUs, but why not just sell us an unlocked Xeon that can also OC? It's not like they would lose money from letting us OC the Xeon, because the people that would normally buy a Xeon for servers etc. would never think about overclocking them.
Then it's a win/win situation for Intel, as they are still getting their Xeon money and they have a decent enthusiast CPU as well.
And yes, I would happily pay 1k (the price I can find current SB-E 8-core CPUs for) for an OCable 8-core.
jabber - Tuesday, September 3, 2013 - link
Indeed, and being happy to pay $1000 for a CPU puts you in a very, very small group. Times are tough. Sell low and sell many.
Kevin G - Wednesday, September 4, 2013 - link
I believe the only way to get a specially binned or configured chip from Intel is to be an OEM and order in large volume. For an unlocked Xeon, the only chance Intel would release such a part would be under contract for a supercomputer that also used liquid cooling.
OEMs like HP, Dell and Apple can also acquire specially binned chips, for a premium if the OEM wants something better, or at a discount if Intel has excess inventory of low-grade chips it needs to sell.
1Angelreloaded - Tuesday, September 3, 2013 - link
Apple was the one who petitioned Intel to put the GPU on-die, so they could get away with selling at higher prices with a lower cost to them. Do like I do: BLAME APPLE.
colonelclaw - Tuesday, September 3, 2013 - link
In conclusion, if you're an enthusiast who wants a high core count, Xeon is your only choice. For the price of the top-end Xeon you can buy a pretty decent second-hand car!
We really need AMD to get back into the high-end game.
f0d - Tuesday, September 3, 2013 - link
Yeah, CPUs were much better when AMD competed in the high end. Let's just hope they can pull a good one out of somewhere.
Casper42 - Tuesday, September 3, 2013 - link
Hate to burst your bubble, but AMD is going through a bit of a reset right now. Opteron 6400s in 2014, with a minimal increase in performance.
The next-gen ground-up architecture is 2015, or, if you get your AMD rep drunk at a trade show, you hear more likely 2016. If they can pull it off, this is where they will become a player again.
Most of their attention at the moment is on Trinity-style APUs with minimal core counts, just like Intel's desktop stuff.
DG4RiA - Tuesday, September 3, 2013 - link
When are these E5 v2 Xeons going to be out? Why release this first instead of the new Xeons?
Hardly any performance increase after 22 months. I get that they want to be able to sell the 12-core Xeon for three grand instead of one, but why can't they just add two extra cores to the 4960X instead of just adding 200MHz?
Casper42 - Tuesday, September 3, 2013 - link
E5-2600 v2 is next week, Sept 10th. E5-4600 v2 and E5-2400 v2 will be at the very end of 2013 or in early 2014.
E7 (Ivy EX) will also be like January 2014. 15 cores is what I am hearing there.
DG4RiA - Wednesday, September 4, 2013 - link
Thanks for the info. I'm looking at a dual-socket build, so hopefully these v2 Xeons are worth the wait.
Shadowmaster625 - Tuesday, September 3, 2013 - link
Intel is so greedy. They could have made this chip 10-core/20-thread and the die size still would have been less than SNB-E. For a high-end part, a chip this small is just a slap in the face. I hope their greed costs them lots of $$.
ShieTar - Tuesday, September 3, 2013 - link
Sure. Also, the TDP at close to 4 GHz would have been 220W. And the majority of customers would have tried to overclock them and drive 300W through them, and then complained either because they die too easily or because of the lousy overclocking potential.
cactusdog - Tuesday, September 3, 2013 - link
IB-E is a massive failure, just like SB-E. Thanks Intel for killing the high end for me. Actually, I think this is their plan: to kill the high end. It's ridiculous that this platform is so far behind the mainstream platform. Two SATA 6Gb/s ports? No Intel USB 3.0? Worse single-threaded performance than mainstream? Sandy Bridge-E seemed like an unfinished project where many compromises were made, and Ivy-E looks the same.
knweiss - Tuesday, September 3, 2013 - link
Anand, you write that Corsair supplied 4x 8GB DDR3-1866 Vengeance Pro memory for the testbed. However, you also remark on "infrequent instability at stock voltages" with 32 GB. Then, in the legend of the memory latency chart, you write "Core i7-4960X (DDR3-1600)". So I wonder which memory configuration was actually used during the benchmarks? Less than 32 GB with DDR3-1866, non-stock voltages, or 32GB of DDR3-1600? Wouldn't anything but 4x DDR3-1866 be a little bit unfair, because you otherwise don't utilise the full potential of the CPU?
bobbozzo - Tuesday, September 3, 2013 - link
The article says that 1600 is the max memory speed SUPPORTED if you use more than one DIMM per channel.
knweiss - Wednesday, September 4, 2013 - link
There are 4 channels.
chizow - Tuesday, September 3, 2013 - link
Nice job Anand, your conclusion pretty much nailed why LGA2011 doesn't cut it today and why this release is pretty ho-hum in general. I would've liked to have seen some 4820K results in there to better illustrate the difference between 4770K Haswell and SB-E, but I suppose that is limited by what review samples you received.
But yeah, unless you need the 2 extra cores or need double the DIMM capacity, there's not much reason to go LGA2011/IVB-E over Haswell at this point. Even the PCIe lane benefit is hit or miss for Nvidia users, as PCIe 3.0 is not officially supported for Nvidia cards and their registry hack is still hit or miss on some boards.
The downsides of LGA2011 vs LGA1150 are much greater, imo, as you lose 4 extra SATA3 (6Gb/s) ports and native USB 3.0, as you covered, along with the much lower overall platform power consumption. The SATA3 situation is probably the worst, though, as 2 ports isn't really enough to do much, but 6 opens up the possibility of an SSD boot drive along with a few really fast SATA3 RAID0 arrays.
TEAMSWITCHER - Tuesday, September 3, 2013 - link
I'm really disappointed by these numbers. As a software developer, the Firefox compile benchmark best indicates the benefit I would get from upgrading to this CPU. And it looks like the 4770K would give me about the same gain - except for far less money. I really don't think I need anything more than 16GB of RAM, and one high-end graphics card is enough to drive my single WQHD (2560x1440) display. Do bragging rights count? No... I mean really?
madmilk - Tuesday, September 3, 2013 - link
Time to consider Xeons...
MrBungle123 - Tuesday, September 3, 2013 - link
What's a Xeon going to do? Be slower than the 4960X? You lose clock speed by going with huge core counts, and that translates to even more losses in single-threaded performance. There comes a point where there are diminishing returns on adding more cores... (see AMD)
MrSpadge - Tuesday, September 3, 2013 - link
Time to save yourself some money with a 4770 - not the worst news.
Kevin G - Wednesday, September 4, 2013 - link
Your best bet would be to hope for a desktop Crystalwell part. That extra cache should do wonders for compile times, even if you lose a bit of clock speed. However, Intel is intentionally holding back the best socket 1150 parts they could offer, as the benefits of Crystalwell plus TSX-optimized software would put performance into large-core-count Xeon territory in some cases.
1Angelreloaded - Tuesday, September 3, 2013 - link
Can you please add a comparison chart for the 4770K, the 8-core E5 Xeon, and the 4960X, with benchmarks included? This kind of makes little sense to me: X79 was behind on feature sets like full SATA3, when in reality a lot of these boards will be used as workstation/normal/gaming computers, and performance on those boards tends to suffer for lack of native support. Instead, 3rd-party chips are used to add extra features, which have significant drawbacks. I understand using the socket for 2 generations in order to extend the life of boards, but as with 1366, the next leap to Haswell should have been taken, making a board last 2 years with the prime features that defined that generation. This just seems like Intel is ignoring its higher-end market due to the lack of competition out there.
sabarjp - Tuesday, September 3, 2013 - link
Kind of depressing that 3 years of technology only took the compile of Firefox from 23 minutes to 20 minutes. The high end isn't looking so high these days.
dgingeri - Tuesday, September 3, 2013 - link
So where's the 4820K review? I don't care much about more than 4 cores, but I need more I/O than Haswell offers (crappy motherboards that offer either 8/4/4 or 8/8/2 are just unacceptable). I'd like to know how the 4820K overclocks and handles I/O from dual and triple SLI/Crossfire.
Eidigean - Tuesday, September 3, 2013 - link
Visual Studio unfortunately does not compile in parallel the way you might think. In a solution you may have multiple projects. If one project depends on four other projects, those four will be compiled in parallel, one project per thread. Once the four dependencies are built, it can build the fifth; however, that last project will be built single-threaded.
Xcode and native Android projects (with gcc) can actually build multiple files from one project in parallel. On an i7 with Hyper-Threading, all eight logical processors can build up to eight files simultaneously. This scales with more cores very nicely.
In summary, VS builds multiple projects from one solution in parallel, while gcc builds multiple files from one project in parallel; the latter is much faster.
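A minimal sketch of what that file-level parallelism looks like in practice - one compiler process per source file, up to N at a time. The compiler, source directory and job count are placeholders; this illustrates the behaviour the commenters attribute to gcc/make and MSVC's /MP switch, it is not a replacement for either build system:

```python
# File-level parallel compilation: one compiler process per translation unit.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

JOBS = 8                                   # roughly one job per logical core
sources = sorted(Path("src").glob("*.c"))  # hypothetical source tree

def compile_one(src: Path) -> int:
    obj = src.with_suffix(".o")
    return subprocess.run(["gcc", "-c", str(src), "-o", str(obj)]).returncode

with ThreadPoolExecutor(max_workers=JOBS) as pool:
    failures = sum(1 for rc in pool.map(compile_one, sources) if rc != 0)

print(f"{len(sources)} files compiled, {failures} failed")
```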
I'm curious now to see the build times of Firefox for Mac on a rMBP with an i7. Eagerly waiting for a 12 core Mac Pro with 24 logical processors.
BrightCandle - Tuesday, September 3, 2013 - link
Visual Studio is a very poor parallel compilation test. GCC with make -j can really utilise a lot more cores, but it's not very Windows-like to use GCC (although I suspect many developers do).
I haven't found many Java builds doing well on multiple cores, nor Scala. It's the unit tests where I get the cores going; I could saturate hundreds of cores with unit tests if I had them, and since I run them in the background on every change I certainly do get a lot of usage out of the extra cores. But a clean compile is not one of those cases where I see any benefit from the 6 cores. Of course, I would hope these days we don't do that very often.
althaz - Tuesday, September 3, 2013 - link
It is a poor parallel test, but it is a fantastic real-world test for a lot of devs.
madmilk - Tuesday, September 3, 2013 - link
About 25 minutes here on a 2.6GHz/16GB rMBP. Pretty much as expected for quad-core Ivy Bridge.
bminor13 - Tuesday, September 3, 2013 - link
Parallel file-level compilation is possible in VS2010 and up with the /MP project switch. It is not enabled by default, I believe for compatibility reasons.
BSMonitor - Tuesday, September 3, 2013 - link
A Haswell-E will most likely bring a different pin count, correct? So this X79 is a dead-end platform any way you look at it. Buying the quad-core IVB-E makes almost no sense whatsoever.
Casper42 - Tuesday, September 3, 2013 - link
Most Intel chips use a Tick Tock release cycle. Tick Tock Tick Tock Tick Tock etc.
Tick is an incremental upgrade: same socket and largely the same design, but reduced lithography (32nm down to 22nm, for example). Sometimes new instructions, but often not.
Tock is an Overhaul upgrade. Uses same Lithography as the previous gen, but is a new internal architecture, often a new Socket, and where most new Instruction sets show up.
Then you get another Tick.
Core 2/Conroe was a Tock and was 65nm
Core 2/Penryn was a Tick and was 45nm
Core iX/Nehalem was a Tock and was 45nm
Core iX/Westmere was a Tick and was 32nm
Core iX/Sandy Br was a Tock and was 32nm
Core iX/Ivy Bridge is a Tick and is 22nm
Core iX/Haswell is a Tock and is 22nm
So to say that X79 is a dead platform should not really be a shock to anyone. They got Sandy and Ivy out of it. That's 1 Tock and 1 Tick and now it's time to move on. They do this exact same thing in the 2P server market, where people spend $10K or more per server. The fact of the matter is the server market has already pretty much learned: don't bother upgrading that server/machine, just ride it for 3-4 years and then replace it completely. SATA, memory and CPUs have all changed enough by then that you want to reset everything anyway.
chadwilson - Tuesday, September 3, 2013 - link
Is the IHS soldered, or is it using the cheap thermal material? The issue with desktop IB and Haswell overclockability has been proven to be the cheap thermal material between the chip and the IHS. If the chip is soldered to the IHS then this will be a decent upgrade over straight IB.
BrightCandle - Tuesday, September 3, 2013 - link
Considering the power consumption, clock speed, overclock and temperature obtained, it looks most likely that this is the same interface as SB-E - i.e. it's soldered. Not that it makes much difference, as just like SB-E it doesn't actually overclock all that well compared to its 4-core sibling.
MrSpadge - Tuesday, September 3, 2013 - link
Look at the results: temperature is not the main problem any more due to the bigger die, but OC is still not good at 4.3 GHz / 1.4 V. Actually I'd say this is ridiculously bad compared to earlier 22 nm chips (my Ivy can do this at ~1.1 V).
And I recently got a 3770K which requires 1.11 V to even hit 4.0 GHz! It seems to me Intel's current process is to blame for Haswell OC rather than the thermal paste. Sure, temps drop when replacing the paste... but OC doesn't improve all that much, does it? And if Ivy and Ivy-E don't clock all that well either...
Shadowmaster625 - Tuesday, September 3, 2013 - link
If SNB-E at 435 mm^2 fit into 130W, then they could have made IVB-E at 435 mm^2 fit into about the same power envelope. If they had to drop the clocks a couple hundred MHz, that's a small price to pay for 10 cores.
Kevin G - Wednesday, September 4, 2013 - link
Actually, the core count for the larger Ivy Bridge-E goes up to 12.
adamantinepiggy - Tuesday, September 3, 2013 - link
So do these CPUs use actual solder under the lid, or crappy paste like the 4770K?
noeldillabough - Tuesday, September 3, 2013 - link
Oh, I've been waiting for this! Now the most important question to me... which motherboard is everyone getting an Ivy Bridge-E going to use? I'm doing a custom water-cooled loop if that makes any difference.
diceman2037 - Tuesday, September 3, 2013 - link
Anand, that marketing image is suffering from a typo: "18% Lower" refers to power utilized, not performance.
DMCalloway - Tuesday, September 3, 2013 - link
Wow! 40 PCIe lanes sounds great until you remember socket 2011 still only supports two 'true' SATA3 ports and no native USB 3.0. PCIe storage is never a smooth experience. It's a shame Intel seems unconcerned with power users that are not enterprise-based.
randfee - Tuesday, September 3, 2013 - link
What a shame these don't support ECC memory! I want it back for the enthusiast!
I'm a scientist; what am I supposed to purchase (privately)? I want a beefy machine at home for physics simulations that run for days or weeks. And what if a private person wants to run several VMs?
The E-series would provide nice performance, but no ECC. What a shame!
Right now one has to pick between speed without ECC (chances are it crashes on you) or a chip with relatively slow (clock-wise) performance to get ECC, at the same price point. The high-end XEON CPUs are out of the question.
@Anand, please point out to the Intel representatives you meet that there's a market for this! One has to consider AMD at this point; they offer many threads + ECC at a consumer price point. Granted, they're slower, but the premium for the Intel chips with ECC is just out of proportion for private use.
randfee - Tuesday, September 3, 2013 - link
Anyway, I am likely to wait for Haswell Xeons next year, with AVX2 (which greatly enhances such scientific calculations if used) and DDR4 ;)
noeldillabough - Thursday, September 5, 2013 - link
Dual Xeons: I have a Supermicro 2U unit with low-voltage Xeons (they were $650-ish each) and they're great. You can pick and choose a board to have as much ECC memory as you want!
wallysb01 - Tuesday, September 3, 2013 - link
The E5-1660 will be the same as the 4960X, just locked and with ECC support. Same 6 cores, same 3.6-4.0 GHz range. Is the overclocking really worth all the hassle for maybe a 20% speed increase, even if you had ECC?
But I generally agree, it looks like Ivy Bridge 49xx/E5-16xx v2 is probably worth skipping. The upgrade over Sandy is not that much, and Haswell will likely bring 8 cores to the 59xx/16xx v3 space. Ivy Bridge at the top end only really made big gains in the 26xx space, thanks to adding 10- and 12-core options, but man do you pay for them....
mapesdhs - Wednesday, September 4, 2013 - link
You sound like the kind of person who'd benefit from a used SGI UV 10 or UV 100.
No idea about their availability though.
Ian.
mapesdhs - Wednesday, September 4, 2013 - link
Oops, I was replying to randfee btw. Apologies for any confusion.
Ian.
FwFred - Wednesday, September 4, 2013 - link
Intel offers plenty of parts for you. See the Xeon line - it doesn't need to be high end.
JlHADJOE - Tuesday, September 3, 2013 - link
Intel Marketing: Honest guys.
Michael REMY - Wednesday, September 4, 2013 - link
Congratulations to AnandTech for having put the old flagship CPUs in the test! Now we can really compare and read much more deeply into the evolution of and interest in this architecture. It is great that someone understood that people do not buy or replace their whole computer every year, but only every 3-4 years.
Thank you very much, my lord AnandTech.
Remarius - Wednesday, September 4, 2013 - link
I would be fascinated to see some statistics on tri- and quad-SLI usage, as I'm already using 3 Titans. There seems to be almost no coverage at that end of the scale despite it being one of the target markets for this chip.
noeldillabough - Thursday, September 5, 2013 - link
Heh, I'd be happy with ONE Titan :) But I'd love to see those results too!
Oscarcharliezulu - Friday, September 6, 2013 - link
This reminds me of years ago when I had access to the first DEC Alpha with its super-fast clock speed and fast IPC, an IBM AIX RS/6000, an HP9000 PA-RISC and a multi-socket SPARC. The Alpha was the fastest by a long way versus multi-core, even on our SAP systems. I still use this as a rule of thumb: for most tasks a faster-clocked processor is better most of the time, except for very specific situations - and in general you will know what they are. I'm battling with what to do with my next upgrade. I really wanted a top-end Ivy-E, but it really doesn't seem worth it compared to a 4770, even with my need to run big VMs. Hell, it's not enough faster than my i2500S Ivy iMac to be worth it.
DPOverLord - Tuesday, September 10, 2013 - link
Very interesting review. Coming from an i7-930 OC'd to 4.3GHz, it makes you debate whether it makes sense to upgrade to the 4930K and a RIVE.
overclock.net/t/1415441/7680x1440-benchmarks-plus-2-3-4-way-sli-gk110-scaling/0_100
For those considering it, there is also another benchmark review of the 3930K vs i7-930 with 2-, 3-, and 4-way Titan SLI (GK110) scaling at 7680 x 1440 and 7680 x 1600. Worth a look.
Remarius - Wednesday, September 11, 2013 - link
Really useful - I missed that thread somehow.
Fierce Guppy - Saturday, September 14, 2013 - link
Should Nvidia's ShadowPlay turn out to be rubbish, then I could use that CPU for encoding recorded gameplay; otherwise there's no benefit in having a 4960X over a 4770K. It won't improve my gaming experience any. Two GTX 780s will do that.
scorpyclone - Tuesday, September 24, 2013 - link
This is a great article for informing us about the latest and greatest architecture and chips. Anyway, I too have stuck with my powerhouse system since 2006: a Q9550 and P35 mobo, DDR2, a second-generation SSD and so forth! It has been rock solid, and frankly I don't remember the last time it crashed - come to think of it, it has never crashed! (...and I am able to have all my programs running plus 30-plus instances of Chrome; yeah, I should do something about that!)
So there you have it. Now I'm looking at LGA 2011 with X79 and maybe (if I can convince myself of the benefits of having a six-core system) an i7 4920 or 40, ...but I will probably go with their 4-core, 10 or 12 MB cache CPUs... the difference for having 2 more cores is 200 bucks! I don't have any problem spending 3 bills on a nice CPU, but when it comes to 2 more bills just to get 2 more cores, I really can't convince myself of that, since I can think of so many other ways to spend those two bills with more return, so to speak!
So yeah building a powerhouse of a system that costs a little more but is rock solid and still faster than 90 percent of what is out there is worth it for me, and I am sure others will say the same!
Or maybe I can just get the better P45 chipset, and stick with this system for another two to three years!
SeanFL - Wednesday, January 15, 2014 - link
I edit video from time to time and am currently using an i7-2600K system built in May of 2011. I've been looking at the 6-core i7-3930K, but I'm not sure if it provides enough of an increase to justify building a new system. Are we closer to an 8-core solution coming out under $1000 in 2014? What's on the way?
SeanFL - Wednesday, January 15, 2014 - link
Typo, I meant I'm looking at the i7-4930K. Still wondering: is an 8-core under $1k on the way this year?
MordeaniisChaos - Thursday, April 17, 2014 - link
You guys should really, really include ArmA as a benchmark for CPU gaming performance, as it sees pretty much constant improvements as the CPU gets more potent. I do not understand why more sites don't use A3 for CPU benchmarking when they touch on gaming performance.