piasabird - Wednesday, August 26, 2015 - link
At $446 this isn't exactly an entry-level CPU. I wonder where the desktop CPUs with Iris graphics are, like the i5-5575R, which is supposed to be priced at $244 and available now but isn't for sale anywhere?
piasabird - Wednesday, August 26, 2015 - link
I guess Intel makes this processor but would rather have you buy a more expensive one. What is up with this? Same thing goes for the i5-5675C.
dgingeri - Wednesday, August 26, 2015 - link
It's not meant to be a cheap CPU. It's a workstation/server chip. It has some additional data integrity features that the normal desktop CPUs can't use, like ECC memory. The drivers for the GPU are also optimized and tested for workstation-level software, which is expensive to do. Sometimes, just frequency isn't enough.
Camikazi - Friday, August 28, 2015 - link
I always wonder how people don't see that a server part is going to be more expensive than a desktop part. They always have been and always will be, because they are binned higher and have additional features that desktops don't or can't use. That said, $446 is actually rather cheap for an entry-level Xeon CPU and is not a bad price.
Free008 - Tuesday, September 1, 2015 - link
That's right, it's too expensive. Intel will continue to gouge consumers with lower-quality binned parts and disabled server features until Apple starts making decent desktop CPUs, and then we can forever leave Intel and Microsoft at our leisure. That's why none of the mobile Intel CPUs are selling - most suppliers don't want to go back to the old monopoly days regardless of performance (which isn't improving significantly anymore anyway, just power savings). Intel thinks suppliers and consumers will put up with this forever, but they are so wrong. It's just a matter of time now.
zoxo - Wednesday, August 26, 2015 - link
It is rather disappointing that energy efficiency seems to have regressed since the awesome 4790K. I was hoping that switching to 14nm would allow Intel to do what the 88W 4790K could do in the 65W power envelope, but neither Broadwell nor Skylake seems able to deliver on that promise.
mmrezaie - Wednesday, August 26, 2015 - link
I am also wondering why, even though performance is not changing that much, power usage is not getting much better!
milkod2001 - Wednesday, August 26, 2015 - link
While CPU performance of the chip is only a little bit better, its GPU part is much bigger and performs much better, hence power consumption is the same as older chips. It's actually an achievement.
For regular desktop CPUs I'd prefer Intel to give us a native mainstream 6-core with no GPU at all. But that would not play nicely with the premium E-series CPUs. Money, money, money. Give me more :)
zoxo - Wednesday, August 26, 2015 - link
If you consider pure CPU loads, Broadwell/Skylake doesn't seem to show much power advantage over Devil's Canyon when you are getting into the 4GHz range. Skylake seems to be more overclock-friendly, but it does consume a lot of power doing it.
azazel1024 - Friday, August 28, 2015 - link
I was thinking the same thing based on Anandtech's original tests, but if you look at their notes under the delta power consumption, and at a few other review sites, it looks a lot like motherboard manufacturers are all over the board with voltage/frequency curves for Skylake (and I assume here with Broadwell too) and it is biting them in the butt on power consumption. You've got a difference of easily 35% in power consumption from one board to the next using the same chip.
Using the better numbers I have seen in some tests, Skylake, specifically the 6700K, is actually significantly better than any other generation in performance per watt. Looking at the higher numbers in a few reviews, it is much worse than Broadwell and Haswell and only fractionally better than Ivy Bridge. I suspect that for Skylake, and probably Broadwell, Intel's 14nm process has poor voltage/frequency scaling, and also that most motherboard manufacturers are choosing poor voltage curves for the chip in an attempt to be extremely conservative.
A knock-on effect is that it is likely to be impacting actual performance too. If the 6700K has a TDP of 94W and the delta power is 110W... I'd half imagine that there is some throttling going on with some loads.
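As a rough illustration of how a board-to-board power spread of that size swamps generational perf-per-watt comparisons, here is a minimal sketch with made-up scores and wall-power figures (none of these numbers come from the review):

```python
# Hypothetical scores and load power numbers, made up purely to illustrate
# how board-dependent power draw skews performance-per-watt comparisons.
chips = {
    "i7-6700K (conservative board)": {"score": 1000, "load_watts": 78},
    "i7-6700K (aggressive board)":   {"score": 1010, "load_watts": 105},  # ~35% more power
    "i7-4790K":                      {"score": 950,  "load_watts": 95},
}

for name, data in chips.items():
    print(f"{name:32s} {data['score'] / data['load_watts']:.2f} points/W")
```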
imaheadcase - Wednesday, August 26, 2015 - link
It's the GPU they put on the CPU. I wonder why they don't have a performance CPU without the GPU part. Gamers don't buy a CPU for the GPU since they have a dedicated one already.
Maybe I'm missing something, but it seems like wasted space they could use for gamer-oriented instructions.
MrSpadge - Wednesday, August 26, 2015 - link
The advantage of the 14 nm transistors is higher at lower voltages. The 6700K is so massively "overvolted" in stock mode, it's operating far above this sweet spot. That's why I would have loved to see what those chips can achieve, regarding power consumption and efficiency, at ~4.0 GHz and minimum voltages. Alas, no one else seems to care about that. Most reviews are just showing full-throttle operation. AT measured at lower OCs as well (thanks for that!), but apparently did not even try to go below 1.20 V either. That's higher than the stock voltage of Sandy Bridge.
Oxford Guy - Wednesday, August 26, 2015 - link
High-voltage overclocking is more likely to sell expensive cooling products and has the added benefit of burning out chips, leading to even more sales.
Ian Cutress - Thursday, August 27, 2015 - link
If you're burning out your processor by overclocking, you're doing it wrong.
Oxford Guy - Wednesday, August 26, 2015 - link
The 4790K used more than 88 watts. That was marketing magic.
typographie - Wednesday, August 26, 2015 - link
Average framerates don't really seem to tell us very much when seemingly every game tested is GPU-limited with an i5 or better. Would it be possible to post minimum framerates and/or frame-time variance in future CPU gaming benchmarks? I suspect there would be more practically useful differences between CPUs in that data.
MrSpadge - Wednesday, August 26, 2015 - link
Agreed - those benchmarks are pretty boring. Some website (forgot which one) showed minimum fps advantages of the 6700K to be massive (20-50%) for some games. This might be the more interesting metric.
TallestJon96 - Wednesday, August 26, 2015 - link
I agree, give us average and either minimum or 99th percentile frame rates. Averages typically are GPU-bound, but minimums are often more CPU-heavy.
I would prefer 99th percentile over minimum, as it is more consistent.
Ian Cutress - Thursday, August 27, 2015 - link
Our minimum results, on some benchmarks, seem to be all over the place. It only takes one frame to drop a result down, which may or may not be inconsistent. We still have those values - check them over at anandtech.com/bench.
In response to TallestJon96 below, I'm working on pulling 99th percentile data in a regular, consistent way.
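For anyone who wants to derive that kind of metric from their own logs, a minimal sketch of computing the 99th-percentile frame time (and the equivalent fps) from per-frame times in milliseconds - the frame-time values below are placeholders, not review data:

```python
import numpy as np

# Per-frame render times in milliseconds (placeholder values; in practice these
# would come from a FRAPS/PresentMon-style frame-time log).
frame_times_ms = np.array([16.7, 16.9, 17.1, 16.8, 33.4, 16.7, 16.6, 45.0, 16.8, 17.0])

avg_fps = 1000.0 / frame_times_ms.mean()
p99_ms  = np.percentile(frame_times_ms, 99)   # 99% of frames finish at or under this time
p99_fps = 1000.0 / p99_ms                     # the "99th percentile fps" figure

print(f"Average fps:         {avg_fps:.1f}")
print(f"99th pct frame time: {p99_ms:.1f} ms ({p99_fps:.1f} fps)")
```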
satai - Wednesday, August 26, 2015 - link
I would be pretty interested in some compilation benchmarks - does cache trump frequency?
lilmoe - Wednesday, August 26, 2015 - link
+1
I'm looking forward to the new mobile Xeon chips and would love to see that too.
Ian Cutress - Thursday, August 27, 2015 - link
Working on it! :)
satai - Friday, August 28, 2015 - link
Great to hear that.
Atari2600 - Wednesday, August 26, 2015 - link
Ian,
Primarily in the graphs you compare a 1276 v3 to two flavours of 1285 v4.
But, on the first page, you tabulate the 1285 v3 and 1265 v3.
Would it be possible to include the 1276 v3 in this table?
Just quickly looking at ark.intel, the 1276 v3 has a box price of $350 and seems otherwise identical to the 1285 v3. On the face of it, it appears to be a disruptor.
Cheers
Brendan
lilmoe - Wednesday, August 26, 2015 - link
That's one hell of a 35W chip. Not bad at all.
jamyryals - Wednesday, August 26, 2015 - link
Why does the 95W part exist? I don't get it.
Gigaplex - Wednesday, August 26, 2015 - link
And why does the 65W chip consume more power than the 95W one?
Oxford Guy - Wednesday, August 26, 2015 - link
Because Intel's power consumption ratings are a load of nonsense often enough. And AMD isn't always accurate either. The 8320E, for instance, is rated 95W but actually used 86W in Anandtech's tests. The 8370E is rated 95W but used 107W or something. The 9590 is even further away from its rating.
But Intel is the one gaining the most from this deceptive marketing, since people know AMD's FX chips are power-hungry due to being on 32nm and not having had as much money invested in hand-tuning to lower power. So Intel underestimates the consumption of parts like the 4790K to make its chips seem even more dramatically efficient.
Yuriman - Thursday, August 27, 2015 - link
There's one problem with this: TDP is not "power consumption", but "Thermal Design Power". A chip with a 95W TDP needs to function in an environment designed to dissipate 95W of heat over a given period of time. CPUs can go well over this for short periods.
Oxford Guy - Thursday, August 27, 2015 - link
Short periods are one thing. False advertising is another.
runciterassociates - Wednesday, August 26, 2015 - link
This is a server chip. Why are you benchmarking games?
Furthermore, for SPEC, why are you using a dGPU when this chip has on-die graphics?
Where are the OpenCL, OpenMP, GPGPU benchmarks, which are going to be the majority of how these will be used for green heterogeneous computing?
Gigaplex - Wednesday, August 26, 2015 - link
The E3 Xeons are more likely to be used in a workstation than a server.
TallestJon96 - Wednesday, August 26, 2015 - link
They benchmark games because ignorant gamers (like myself) love to see gaming benchmarks for everything, even if they will never be used for games! If it were a 20-core Xeon clocked at 2 GHz with Hyper-Threading, we would want the benchmarks, even though they just show that everything i5 and up performs identically. We are a strange species, and you should not waste your time trying to understand us.
Oxford Guy - Wednesday, August 26, 2015 - link
No benchmarks are irrelevant when they involve products people are using today. Gaming benchmarks are practical. However, that doesn't mean the charts are necessarily well-considered, such as how this site refuses to include a 4.5 GHz FX chip (or any FX chip) and instead only includes weaker APUs.
Ian Cutress - Thursday, August 27, 2015 - link
As listed in a couple of sections of the review, this is because Broadwell-H on the desktop does not have an 84W part equivalent to previous generations, and this allows us, perhaps somewhat academically, to see if there ends up being a gaming difference between Broadwell and Haswell at the higher power consumption levels.
Jaybus - Friday, August 28, 2015 - link
Because, as stated in the article, the Ubuntu Live CD kernel was a fail for these new processors, so they couldn't run the Linux stuff.
Voldenuit - Wednesday, August 26, 2015 - link
SPECviewperf on a desktop card?
I'd be interested to see if a Quadro or FirePro would open up the gap between the CPUs.
mapesdhs - Thursday, August 27, 2015 - link
I was wondering that too; desktop cards get high numbers for Viewperf 12 because they cheat in the driver layer on image quality. SPEC testing should be done with pro cards, where the relevance is more sensible. The situation is worse now because both GPU makers have fiddled with their drivers to be more relevant to consumer cards. Contrast how Viewperf 12 behaves with desktop cards to the performance spread observed with Viewperf 11; the differences are enormous.
For example, testing a 980 vs. a Quadro K5000 with Viewperf 11 and 12, the 980 is 3x faster than the K5000 for Viewperf 12, whereas the K5000 is 6x faster than the 980 for Viewperf 11. More than an order of magnitude performance shift just by using the newer test suite?? I have been told by tech site people elsewhere that the reason is changes to drivers and the use of much lower image quality on consumer cards. Either way, it makes a nonsense of the usefulness of Viewperf if this is what's going on now. Otherwise, someone has to explain why the 980 compares so differently to a K5000 for Viewperf 11.
Ian Cutress - Thursday, August 27, 2015 - link
Both points noted. I'll see what I can do to obtain the professional cards.
XZerg - Wednesday, August 26, 2015 - link
The gaming charts are messed up - the IGP performs faster than the dGPU on the SAME settings? I think something is wrong - most likely the labels of the settings.
Also, it would have been better to compare IGP performance against the older versions of Iris - where is the 4770R? The point here is: while keeping the wattage similar, what are we really getting out of 14nm?
Urizane - Wednesday, August 26, 2015 - link
The R7 240 used on that page isn't exactly fast. Actually, the A10 APU has more graphics hardware than that card, which shows in the results. The fact that Crystal Well parts can beat an A10 APU means that they also beat the R7 240.
As far as the 4770R comparison goes, I'm coming up with nothing useful from a quick search. Anandtech has numbers for the 4770R and numbers for the 5675C, but in none of the same benchmarks. Iris Pro 5200 (4770R) had 40 EUs that could turbo to 1.3 GHz, and Iris Pro P6300 (E3-1285* v4) has 48 EUs that can turbo to 1.15 GHz (same for the 5775C, and 1.1 GHz for the 5675C). I would think it would be a wash (some wins, some losses) between the two generations, but you're right. There would be some utility in having some hard numbers to compare the two.
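As a crude sanity check on that "wash" expectation, here is a raw EU-count times peak-clock comparison of the two parts, using the figures quoted above (this ignores per-EU architectural changes between generations and eDRAM bandwidth, so it is only a ballpark):

```python
# Raw shader throughput proxy: EU count x peak turbo clock (GHz).
# Ignores per-EU architectural differences between the two GPU generations.
iris_pro_5200  = 40 * 1.30   # Iris Pro 5200  (i7-4770R)
iris_pro_p6300 = 48 * 1.15   # Iris Pro P6300 (E3-1285 v4)

print(f"Iris Pro 5200:  {iris_pro_5200:.1f} EU*GHz")
print(f"Iris Pro P6300: {iris_pro_p6300:.1f} EU*GHz "
      f"(+{(iris_pro_p6300 / iris_pro_5200 - 1) * 100:.0f}%)")
```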
alefsin - Wednesday, August 26, 2015 - link
I wish you had also included the E5-1630 v3 in your tests. It is slightly more expensive ($600 range, I guess) but with 6 cores at 3.5 GHz is probably more attractive than any of the faster E3 series.
tyger11 - Wednesday, August 26, 2015 - link
Yeah, that's the one I'm thinking about building my video workstation around, unless a 6-core Skylake comes out soon.
Mastadon - Thursday, August 27, 2015 - link
Skylake Xeons aren't due until 2017.
JesseKramer - Wednesday, August 26, 2015 - link
According to ARK, the E5-1630 v3 is a 4c8t part.
http://ark.intel.com/products/82764/Intel-Xeon-Pro...
alefsin - Thursday, August 27, 2015 - link
Oh, sorry, my bad. I meant the E5-1650 v3. I recently built a workstation with that for CFD analysis. 140W is a bit high these days, but then again, there is no argument about the performance.
Gigaplex - Wednesday, August 26, 2015 - link
"but the main parallel we should be making is the 95W of the E3-1285 v4 and the E3-1276 v3 at 84W. The E3 has some extra frequency (peaks at 4 GHz) and extra L3 cache, but the Xeon has eDRAM."The E3 vs the Xeon? They're both E3 Xeons.
MrSpadge - Wednesday, August 26, 2015 - link
"If I were thinking from the point of view of the motherboard manufacturer, they are more likely to overvolt a Xeon processor to ensure that stability rather than deal with any unstable platforms"Ian, this would be a really really poor move & explanation. You are literally paying Intel for the guaranteed stability of the Xeon. the CPU tells the mainboard exactly which voltage it wants. If a mainboard maker gives it more than this on purpose, he's sabotaging either the TDP and power efficiency, or the performance. Neither is good and could easily lead to lawsuits in the US (because the product wouldn't perform as promised).
In any case, you should be able to check this! You can test on different boards, with different software. You can read out & report the voltages of the CPUs under different load conditions. You can log & report the average CPU clocks during those tests. One would think such information is interesting when we're seemingly confronted with 2 of 3 CPUs consuming far more than promised by Intel and performing really well for that TDP.
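A minimal sketch of what the "log & report the average CPU clocks" part could look like using psutil; per-core frequency reporting is platform-dependent, and voltages generally have to come from vendor tools or hwmon sensors rather than psutil, so this covers clocks only:

```python
import time
import statistics
import psutil

duration_s, interval_s = 60, 1.0   # sample for one minute while the benchmark runs
samples = []

end = time.time() + duration_s
while time.time() < end:
    # May fall back to a single aggregate entry on platforms without per-core data.
    freqs = psutil.cpu_freq(percpu=True)
    samples.append(statistics.mean(f.current for f in freqs))
    time.sleep(interval_s)

print(f"Average CPU clock over the run: {statistics.mean(samples):.0f} MHz")
print(f"Lowest/highest sample averages: {min(samples):.0f} / {max(samples):.0f} MHz")
```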
Morawka - Wednesday, August 26, 2015 - link
I wish you guys would start putting the i7-4790K in these Skylake/Broadwell comparisons, because the clock speed is identical to the 6700K's.
Comparing the 6700K to the 4770K is not fair because the clock speeds are different.
Oxford Guy - Wednesday, August 26, 2015 - link
Anandtech refuses to include the AMD FX 8-core in its results charts and instead only includes weaker APUs.
Ian Cutress - Thursday, August 27, 2015 - link
So, to clear up your misconceptions: we (or more specifically, I) have not retested any AM3 product yet on our 2015 benchmark suite due to time restrictions and a general lack of reader interest in AM3. I have 3 test beds, and our CPU/GPU tests are only partially automated, requiring 35+ working hours of active monitoring for results. (Yes, I can leave some tests on overnight, but not that many.) Reserving one test bed for a month a year for AM3+ limits the ability to do other things, such as motherboard tests/DRAM reviews/DX12 testing and so on.
You'll notice our FX-9590 review occurred many, many months after it was officially 'released', due to consumer availability. And that was just over 12 months ago - I have not been in a position to retest AM3 since then. However, had AMD launched a new CPU for it, then I would have specifically made time to circle back around - for example, I currently have the A8-7670K in to test, so chances are I'll rerun the FM2+ socket as much as possible in September.
That being said, we discussed DirectX 12 testing with AMD recently - specifically, when more (full/non-beta) titles are launched to the public and we update our game tests (on CPU reviews) for 2016, you will most likely see the FX range of CPUs being updated in our database. Between now and then, we have some overlap between the FX processors and these E3 processors in our benchmarking database, which is free for anyone to access at any time as and when we test these products. Note that there is a large price difference and a large TDP difference, but there are some minor result comparisons for you. Here's a link for the lazy:
http://anandtech.com/bench/product/1289?vs=1538
The FX-9590 beats the 35W v4 Xeon in Cinebench, POV-Ray and Hybrid, at 1/3 the price but 6x the power consumption.
Oxford Guy - Thursday, August 27, 2015 - link
The 9590 is a specialty product, hardly what I was focusing on, which is FX overclocked to a reasonable level of power consumption. The 9590 does not fall into that category.
You can get an 8320E for around $100 at Microcenter and pair it with a discount 970 motherboard like I did ($25 with the bundle pricing a few months ago for the UD3P 2.0) and get a decent clockspeed out of it for not much money. I got my Zalman cooler for $20 via Slickdeals and then got two 140mm fans for it. The system runs comfortably at 4.5 GHz (4.4-4.5 GHz is considered the standard for FX -- the point where performance per watt is still reasonable). Those pairing it with an EVO cooler might want 4.3 GHz or so.
The 9590 requires an expensive motherboard, expensive (or loud) case cooling, and an expensive heatsink. Running an FX at a clockspeed below the threshold at which the chip begins to become a power hog is generally much more advisable. And review sites that aren't careful will run into throttling from VRMs or heat around the chip, which will give a false picture of the performance. People in one forum said adamantly that the 9590 chips tend to be leaky, so their power consumption is even higher than a low-leakage chip like the 8370E.
One of your reviews (Broadwell, I think) had like 8 APUs in it and not a single FX. That gives people the impression that APUs are the strongest competition AMD has. Since that's not true, it gives people the impression that this site is trying to manipulate readers into thinking Intel is further ahead than it actually is in terms of price-performance.
There is no doubt that FX is old and was not ideal for typical desktop workloads when it came out. Even today it only has about 1.2 billion transistors and still has 32nm power consumption. But since games are finally beginning to use more than two cores or so, and because programs like Blender (which you probably should use in your results) can leverage those cores without exaggerating the importance of the FPU (as Cinebench is said to do), it seems to still be clinging to relevance. As for lack of reader interest in FX, it's hard to gauge that when your articles don't include results from even one FX chip.
Regardless of reader interest, if you're going to include AMD at all, which you should, you should use their best-performing chip design (though not the power-nuts 9590), not APUs, unless you're specifically targeting small form factors or integrated graphics comparisons.
Oxford Guy - Thursday, August 27, 2015 - link
You also ran an article about the 8320E. Why not use that 8320E, overclocked to a reasonable level like 4.5 GHz, as the basis for benchmarks you can include in reviews?
SuperVeloce - Thursday, August 27, 2015 - link
Clocks are not identical (you know the meaning of that word, right?). And the 4790K was released a year after the first Haswells. Usually you compare models from the launch day of said architecture.
MrSpadge - Thursday, August 27, 2015 - link
It doesn't matter what launched on the launch day of the older competition. It matters what one can buy instead of the new product at its launch date.
mapesdhs - Thursday, August 27, 2015 - link
Hear, hear! Reminds me of the way reference GPUs keep being used in gfx articles, even when anyone with half a clue would buy an OC'd card, either because they're cheaper or because seller sites don't sell reference cards anymore anyway.
Oxford Guy - Wednesday, August 26, 2015 - link
"cue the realists"Corporations are a conspiracy to make profit for shareholders, CEOs, etc. The assumption of conspiracy should be a given, not a "theory". Any business that isn't constantly conspiring to deliver the least product for the most return is going to either die or stagnate.
boxof - Wednesday, August 26, 2015 - link
"In a recent external podcast, David Kanter"Couldn't bring yourselves to mention your competition huh? Stay classy.
Dr.Neale - Wednesday, August 26, 2015 - link
Your comparison of the Xeon E3-1276 v3 to the E3-1285 v4, E3-1285L v4, and E3-1265L v4 is systematically slightly biased in favor of the E3-1276 v3, because for all tests you use (non-ECC) DDR3-1866 memory, whereas with ECC memory (and a C226 chipset that supports it, as in an ASUS P9D WS motherboard), the v3 Xeon is limited to DDR3-1600, while the v4 Xeons can use DDR3-1866 memory.
Therefore, using DDR3-1866 memory with the v3 Xeon gives it a slight systematic performance boost over what it would achieve with only DDR3-1600 memory, which is the maximum speed it can use in an ECC / C226 workstation.
With this in mind, I believe the performance of an E3-1276 v3 Xeon with DDR3-1600 memory would more closely match that of the E3-1285 v4 and E3-1285L Xeons with DDR3-1866 memory than is indicated in the graphs here, where the v3 and v4 Xeons are all tested with the same DDR3-1866 memory only.
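For context on the size of that handicap, the theoretical peak bandwidth gap between dual-channel DDR3-1600 and DDR3-1866 works out as below (a simple peak-bandwidth calculation, not a measured result; real-world sensitivity will be smaller for most of these tests):

```python
def peak_bw_gb_s(transfers_mt_s, channels=2, bus_bytes=8):
    """Theoretical peak: transfers/s x 8 bytes per 64-bit channel x channel count."""
    return transfers_mt_s * 1e6 * bus_bytes * channels / 1e9

ddr3_1600 = peak_bw_gb_s(1600)   # 25.6 GB/s dual channel
ddr3_1866 = peak_bw_gb_s(1866)   # ~29.9 GB/s dual channel

print(f"DDR3-1600: {ddr3_1600:.1f} GB/s, DDR3-1866: {ddr3_1866:.1f} GB/s "
      f"(+{(ddr3_1866 / ddr3_1600 - 1) * 100:.1f}%)")
```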
ruthan - Thursday, August 27, 2015 - link
This power consumption mystery has to be solved; it's like the GeForce 970 4GB thing. Maybe Intel is cheating with those numbers, because there are customers like me who prefer lower power and silence and are ready to pay for that.
The most typical workstation use case where I'm still missing tons of horsepower on the CPU side is virtualization, especially for gaming - VMware Workstation 12 was released yesterday with DX10 support. Especially in a Linux environment, gaming in a virtual machine makes sense (I know, I know, there is no DX10 support even through a wrapper).
ruthan - Thursday, August 27, 2015 - link
So please add some virtualization to the benchmarking set.
Ian Cutress - Thursday, August 27, 2015 - link
It's on the cards.
Mastadon - Thursday, August 27, 2015 - link
No support for DDR4 RAM? C'mon, it's 2015.
SuperVeloce - Thursday, August 27, 2015 - link
This is Broadwell, not Skylake... It's meant to introduce a new lithography process and an updated platform, not new architectures and memory controllers...
Oxford Guy - Thursday, August 27, 2015 - link
DDR4 isn't of much benefit, except for servers (power consumption).
AnnonymousCoward - Thursday, August 27, 2015 - link
Skylake FTW. Why pay more for the slower Xeon?
Oxford Guy - Thursday, August 27, 2015 - link
If you read the Skylake review here, you'll find that it's not really better than Broadwell, just different.
AnnonymousCoward - Thursday, August 27, 2015 - link
Dude, look at the graphs on the conclusion page of this review. Skylake beats the closest Xeon by 19% in most of them.
Oxford Guy - Sunday, August 30, 2015 - link
I wasn't talking about Xeon. Look at the previous desktop review. I read your post too quickly and missed that you were talking about Xeon.
joex4444 - Thursday, August 27, 2015 - link
Is it even clear that the 1285 and 1285L performed differently to a statistically significant degree? I mean, if a benchmark is performed three times and scores of, say, {1176, 1188, 1182} are obtained for the 1285 while the 1285L gets {1190, 1175, 1184}, then the 1285L has an average of 1183 while the 1285 has an average of 1182. But when we look at those distributions, they completely agree and show no performance difference, whereas, given one has an extra 100MHz, we'd expect a 1-part-in-34 advantage, i.e., a 2.9% performance gap with the 95W 1285 outperforming the 65W 1285L.
Further, it's important to recall the first chart showing that the 95W 1285 actually used less power in the idle -> OCCT test. The TDP is not a measure of how much power the CPU uses, plain and simple. It's a specification stating the maximum amount of power that can be dissipated in the form of heat. Therefore, when the author states "100MHz does not adequately explain 30W in the grand scheme of things", they're exactly correct about the TDP, but it comes off suggesting one actually *uses* 30W more than the other, which is simply not true. It does sound pretty clear that either (a) Intel bins their TDPs and the 3.5GHz one bumped up past the 65W bin, or (b) Intel uses better parts for the 1285L, but this does not explain why it would cost $100-ish (~18%) less, as we would expect better parts to be scarcer, not more abundant.
As far as binned TDPs go, we know they do this. Look at the 84W parts. They don't all use 84W; they're just all rated as capable of dissipating up to 84W. Further, we don't see arbitrary TDPs, we see a few, e.g., 35W, 65W, 84W, 95W, 125W, and, if you're AMD, 220W.
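To make the significance question above concrete, a quick sketch using the hypothetical three-run score sets from the comment - with n=3 per chip the distributions overlap almost entirely, so no difference can be established (assumes SciPy is available):

```python
from statistics import mean, stdev
from scipy import stats

e3_1285  = [1176, 1188, 1182]   # hypothetical scores quoted in the comment
e3_1285l = [1190, 1175, 1184]

print(f"1285  mean={mean(e3_1285):.1f} stdev={stdev(e3_1285):.1f}")
print(f"1285L mean={mean(e3_1285l):.1f} stdev={stdev(e3_1285l):.1f}")

# Welch's t-test; a large p-value means no detectable difference with only 3 runs.
t, p = stats.ttest_ind(e3_1285, e3_1285l, equal_var=False)
print(f"t = {t:.2f}, p = {p:.2f}")
```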
LemmingOverlord - Friday, August 28, 2015 - link
"All but one soldered part has the eDRAM disabled." - surely you mean the opposite? "All but one of the soldered parts are eDRAM-enabled."... otherwise you're saying they're all disabled, but one.lplatypus - Sunday, August 30, 2015 - link
Heads up that the first link in the article looks wrong: it points at file:///D:/Dropbox/AnandTech/CPUs%20-%20Intel/20150815%20Broadwell%20Xeon%20E3%20v4/anandtech.com/show/9320/intel-broadwell-review-i7-5775c-i5-5765c