"It might come across as somewhat surprising that a 15W CPU like the i7-5650U has a 2.2 GHz base frequency but then a 3.2 GHz to 3.1 GHz operating window, and yet the i7-5557U has a 3.1 GHz base with 3.4 GHz operating for almost double the TDP. Apart from the slight increase in CPU and GPU frequency, it is hard to account for such a jump without pointing at the i7-5650U and saying that ultimately it is the more efficient bin of the CPUs." This is not surprising. This is done to increase GPU performance: 28W CPUs have Iris 6100, 15W CPUs have HD 6000. In no way does TDP tell us anything about efficiency.
Iris 6100 vs HD 6000 are almost identical. The only difference is a slightly faster clock speed. I think the problem is, HD6000 will throttle more to stay in that power envelope.
Looks to me like the 23W ones (i.e. those with the 6100 graphics) will be the only ones to be capable of being near the max turbo clocks for long.
Would also be interesting to know the AVX base and turbo clocks for these chips, to compare the possible 64b DP GFLOPS from the CPU cores to those listed on page 2 from the GPUs. Top bin is likely somewhere < 102 (vs 211 from GPU), but how much lower?
For the big Xeons the AVX base clock is typically 200 MHz below the regular base clock. They operate in a similar frequency & voltage range as the mobile chips (and are as power-limited as they are), so expect the same to apply here.
First time I read about AVX clocks, then found another mention in a previous Xeon CPU article. Is this a thing for Xeon only, or do the Haswell desktop chips throttle the clock with heavy AVX as well?
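For reference, the back-of-envelope arithmetic behind the "< 102" estimate above: Broadwell's two 256-bit FMA units give 16 double-precision FLOPs per core per cycle, so a dual-core part tops out at cores × clock × 16. A quick sketch, treating the 3.1 GHz top-bin base clock as an upper bound since the actual AVX clock for these parts isn't published:

```python
# Peak double-precision GFLOPS for a dual-core Broadwell-U.
# 2 x 256-bit FMA units = 16 DP FLOPs per core per cycle (FMA counts as 2 FLOPs).
# The real AVX clock is unknown here, so 3.1 GHz is only an upper bound.

def peak_dp_gflops(cores, clock_ghz, flops_per_cycle=16):
    return cores * clock_ghz * flops_per_cycle

print(peak_dp_gflops(2, 3.1))  # ~99.2 GFLOPS, consistent with "somewhere < 102"
```

If the AVX clock sits 200 MHz lower, as suggested for the big Xeons, the same formula gives roughly 93 GFLOPS.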
A good example of this is in the throttling of the HD5000 in the 15W NUC i5-4250. You can get 40% better performance by changing the TDP settings from 25W short burst / 15W steady to 35W short burst / 31W steady.
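On Linux, the two limits described above (short burst vs. steady) map to the standard `intel-rapl` powercap sysfs interface: constraint 0 is the long-term (PL1, "steady") limit and constraint 1 the short-term (PL2, "burst") limit, reported in microwatts. A minimal read-only sketch, assuming the usual zone path:

```python
from pathlib import Path

# Default package power zone on most Intel systems; the path can differ.
RAPL_ZONE = Path("/sys/class/powercap/intel-rapl/intel-rapl:0")

def read_power_limits_w(zone=RAPL_ZONE):
    """Return (PL1, PL2) in watts; the sysfs files report microwatts."""
    pl1 = int((zone / "constraint_0_power_limit_uw").read_text()) / 1e6
    pl2 = int((zone / "constraint_1_power_limit_uw").read_text()) / 1e6
    return pl1, pl2

if RAPL_ZONE.exists():
    pl1, pl2 = read_power_limits_w()
    print(f"steady PL1 = {pl1:.0f} W, burst PL2 = {pl2:.0f} W")
```

Raising the limits (as the NUC tweak above does from its BIOS) is a matter of writing new microwatt values to the same files with root privileges; what the silicon will sustain still depends on cooling.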
"In no way does TDP tell us anything about efficiency."
Agreed - TDP is far too crude for this. Intel desktop CPUs often operate far below TDP, whereas mobile chips are throttled by it. How much? Depends on the laptop, ambient temperature, etc.
So even though the 15 W CPUs quoted above are allowed to top out at 3+ GHz, they won't run at anywhere close to this frequency under sustained heavy load. The 28 W chips should have no trouble sustaining the speed, given adequate cooling.
Iris 6100 + edram or at least DDR4 bandwidth increases would have made a terrific difference to "retina" and high-dpi ultrabooks / laptops, but now this upgrade is watered to irrelevancy.
In a retina/hdpi environment, few applications would come close to saturating the bus. The EUs (even with the 6100) would bottleneck long before LPDDR3/DDR3 would.
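A rough sanity check of that claim (the panel resolution and refresh rate here are just illustrative): even a 2560×1600 panel's scanout needs only about 1 GB/s, a small slice of the ~25.6 GB/s a dual-channel LPDDR3-1600 interface provides.

```python
def scanout_gb_s(width, height, bytes_per_pixel=4, hz=60):
    """Bandwidth just to refresh a 32bpp framebuffer, in GB/s."""
    return width * height * bytes_per_pixel * hz / 1e9

panel = scanout_gb_s(2560, 1600)     # ~0.98 GB/s for a retina-class panel
bus = 2 * 64 / 8 * 1600e6 / 1e9      # dual-channel 64-bit LPDDR3-1600 = 25.6 GB/s
print(f"{panel:.2f} GB/s of {bus:.1f} GB/s")
```

Actual rendering traffic (texture fetches, overdraw, compositing) multiplies that figure, which is where the EU-vs-bandwidth bottleneck question really gets decided.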
as i see it the given tdp only ensures operation at base clocks and without a substantial graphics load. operation at turbo clocks requires overstepping the tdp until power draw and/or temps are too high and the clock returns to the base frequency.
if you look at it like this it's not surprising a 15w sku has a base clock of 2.2ghz and a 28w sku 3.1ghz. that said, the 28w tdp still looks "too high" for the frequency you get out of it, but i guess that extra power/heat budget is there for the sole reason that the 28w sku can operate at turbo clocks for longer without throttling down again, plus there is more headroom for a graphics load at the same time. this ensures that, even with similar hardware and turbo clocks, the 28w sku is allowed to produce more heat and in turn get more work done in the same time.
that's the same reason core-m has very high turbo speeds, but can only turbo for a couple seconds until it's too hot and it "throttles" down to base clocks.
Thanks. I hope we see the power improvements here coupled with the same or larger batteries (maybe with the extra space...) i am really hoping the Surface Pro 4 can consistently break ten hours (which is the "great enough" spot for me) per charge, which would make me happily drop my SP1.
Agreed here definitely. I'd really like a Skylake SP4, we'd see some serious jumps there. It deserves Skylake anyway, we've skipped a generation of performance improvement.
I was right with you until the last phrase. You need a 10-hour Surface Pro 4 in order to drop your 4-hour Surface Pro 1, when both the Surface Pro 2 & Surface Pro 3 already offer significantly improved battery life?
Love my SP 2. Skipped the SP3 because my SP2 was only a couple of months old and because of the throttling issues. I do hope they do a SP3 refresh with these chips though, ahead of a full Skylake SP4. I'm assuming they would likely go for the 5500HD parts because of the lower cTDP requirement?
According to last year's rumors, desktop parts are supposed to happen as Broadwell-K Core i5 and i7 for LGA1150 (for 90-series chipset motherboards only) around Q3 2015 (Q2 2015 at best).
It's articles like this that make me keep coming back to this website. Thanks a lot for the info. I hope we get to see some direct benchmarks against Haswell, especially in the battery life area. 5% IPC improvement isn't too big, but then again it is just a die shrink and we shouldn't expect too much more. I am much more interested in battery life, since now Intel and MS seem to be really collaborating on that end since W8 was launched.
I'm not holding my breath on some big battery improvements though. Excellent battery life with Haswell was already possible, but except for some OEMs (like Samsung with Ativ Book 9 and ASUS with Zenbooks), none put in the effort to really make the most of it.
We really have to wait for benchmarks. Right now it feels too much like "meh" to me. TDP stayed the same. Sure, there are some performance increases, especially on the GPU side, but I would say they mostly come from more transistors. I was hoping that 14 nm would bring more energy savings.
And the elephant in the room is still Skylake. I have both a 13" rMBP on Ivy and a Surface Pro 2 on Haswell, which both could use an upgrade (the MacBook more on the CPU side and the Surface more on the GPU side in my use cases). I was hoping for quad cores in the 28 W range on 14 nm, so a 13" rMBP could go all quad. Also, weren't there plans for a complete overhaul of the GPU architecture? Is that now scheduled for Skylake?
Well, my best bet is that the xx00 (24EU) parts will be the most popular just like Haswell ones were, because if I remember correctly these were the same prices for the equivalent Haswell parts. I only ever saw the 28W TDP parts in MacBook Pro and Zenbook Infinity, which are the very highest end ultrabooks. You're right when you say that i5 (and 24EU) parts will be the most common, and I don't think the performance increase will be noticeable at all...
It used to be that you could generally equate an i7 with 4 cores. These i7 Broadwell-U CPUs all seem to be 2-core, so it just seems to muddy up the issue for the consumer. Oh well... maybe these were all meant for OEM designs anyway.
There have been many faster Sandy and Ivy quad-core i7s than the 2 models I specifically quoted. It is very misleading to call a dual-core HT CPU an i7, when really that has always been an i3.
Sorry. The QMs have been around for longer than I remembered, but dual core mobile i7s were around from the very beginning, and the quads have always been largely restricted to mobile workstation-type laptops and the occasional gaming laptop.
Seems that the 48 EU chips will match the GPU performance, on paper, of AMD's Kaveri. This should allow for some interesting GPU comparisons in reviews.
The prices are not very pretty, and where are the quad-core chips? The cores are tiny on 14nm. Almost irrelevant. Modern CPUs are more like GPUs with CPUs tacked on.
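The "on paper" match with Kaveri is easy to sketch. The per-clock rates and clocks below are assumptions for illustration: ~16 FP32 FLOPs per EU per clock for Gen8 at roughly 1.0 GHz, and 2 FLOPs per shader per clock for the A10-7850K's 512 GCN shaders at 720 MHz.

```python
def peak_fp32_gflops(units, flops_per_unit_clock, ghz):
    """Theoretical single-precision peak: units x FLOPs/unit/clock x clock."""
    return units * flops_per_unit_clock * ghz

gt3 = peak_fp32_gflops(48, 16, 1.0)       # 48-EU Broadwell GT3 -> ~768 GFLOPS
kaveri = peak_fp32_gflops(512, 2, 0.72)   # A10-7850K iGPU -> ~737 GFLOPS
print(gt3, kaveri)
```

Paper peaks within a few percent of each other; real-game results will hinge on memory bandwidth and drivers far more than on these numbers.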
H! What is it with Intel and the random letters of late? I thought at first they corresponded to TDP levels with later in the alphabet being lower. Now it seems they're for variants of the same architecture used with different numbers of cores, GPU cores and turbo/thermal limit.
Only two processor cores for the number-leading Core i7-5650U. Intel sucks! Wait for Skylake. If Skylake turns out to suck like this too, I will unsubscribe Intel from my portfolio - forever!
It's their U series. Those SKUs target lower power -- quad cores aren't suitable for that. It's likely that with the power constraints, a quad core design wouldn't offer a whole lot more performance than a dual core.
DDR3-1866 may help a little bit for the 48 EU GPUs (vs DDR3-1600 before). But indeed, without eDRAM ("Crystalwell"), 1866 is presumably still not enough.
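The uplift from the faster memory is modest; assuming plain dual-channel 64-bit interfaces:

```python
def dual_channel_gb_s(mt_per_s, channels=2, bus_bits=64):
    """Peak DRAM bandwidth in GB/s for a given transfer rate in MT/s."""
    return channels * bus_bits / 8 * mt_per_s * 1e6 / 1e9

print(dual_channel_gb_s(1600))  # 25.6 GB/s
print(dual_channel_gb_s(1866))  # ~29.9 GB/s, roughly a 17% gain
```

A ~17% bandwidth bump for a GPU that doubled its EU count over the 15W Haswell parts is why the no-eDRAM GT3 looks bandwidth-starved on paper.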
We will have to wait and see. There might be more of a performance difference over Haswell than in the past, because they decreased the number of EUs per slice from 10 down to 8 and increased the cache size. That should mean a lot more cache available per EU, which should help keep it from being as bandwidth-limited as in the past. It will probably still be bandwidth-limited, but hopefully not as much, making the GT3 version without eDRAM more reasonable.
With that said, integrated GPUs will always be behind dedicated GPUs in performance, because graphics is so parallel that it scales easily with more units, but those additional units mean higher power and bandwidth requirements. That's why you see high-end GPUs using 200+ watts and very wide/fast memory interfaces, both of which are much higher than can reasonably be handled in an integrated GPU setting.
Gen8 actually makes a lot of changes that reduce its reliance on external memory. Take a look at the bit on caches in this article. It'll still be constrained by bandwidth, but not as much as you seem to be expecting.
It is nice to see an audio DSP integrated into the PCH. I hope to see more and more integration in the near future. The power charts show clearly that the display panel still makes the largest contribution to overall platform power consumption. I think Intel and other SoC players have reached the point where SoCs can no longer provide pronounced improvements in overall power savings, given the demand for higher display resolution. IGZO display technology can cut display power by at least half, which will give SoC designers further opportunity to effectively improve efficiency.
Talking FP64, Tegra X1 may not even have it at all, or, at best, I suppose, it may have it at the same ratio, as GM204, which is just 1/32. So, I bet, FP64 capability does not really apply seriously to Tegra X1. FP16 and FP32 to be used there.
X1 gives those FLOPS for FP16 (half precision). Don't be fooled by the usual nV marketing and compare "apples to apples". However, this is not to say that this Broadwell-U is very impressive. To me, it looks like just one more evolutionary step over Haswell-U. Nothing special, I would say. Still dual-core x86, as a lot of people complain here - for some reason Intel strongly believes quad core is not needed in the -U segment. Instead, they beef up only the GPU, which may be bottlenecked anyway by DDR3, just as in AMD Kaveri's case. And all of these Broadwell-U i5s and i7s are offered for big $$$, as usual in Intel's case. Somewhat disappointing - I agree with some other posters in this thread.
BUT Intel offers a coherent single address space for GPU and CPU compute. This is an important step on the path to practical, commonplace heterogeneous computing. As far as I know, nV isn't offering that in X1. (I'd be curious to know if Imagination is offering it in Rogue 7, because Apple is probably the company best placed to put all the pieces of this together, from the hardware through the OS to the language+dev tools to the frameworks; but if IMG isn't on track, Apple is kinda screwed. Of course Apple COULD be preparing their mythical self-designed GPU...)
There's plenty to be irritated about with Broadwell, but let's praise the improvements that really are a good foundation for the future, and this is primary among them.
Argh. The X1 gflop number you just gave is for half precision, it's their double speed FP16 mode. And then you're going and comparing that to a full precision number.
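The apples-to-apples numbers both comments are pointing at, assuming 256 Maxwell cores at roughly 1 GHz, with one FMA (2 FLOPs) per core per clock in FP32 and a double-rate packed FP16 path:

```python
def peak_gflops(cores, ghz, flops_per_core_clock):
    return cores * ghz * flops_per_core_clock

fp32 = peak_gflops(256, 1.0, 2)  # ~512 GFLOPS single precision
fp16 = peak_gflops(256, 1.0, 4)  # ~1024 GFLOPS half precision (the marketing number)
print(fp32, fp16)
```

So quoting the FP16 figure against a competitor's FP32 peak overstates X1 by a factor of two before any real workload is run.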
I'm really interested to see what kind of performance improvement can be seen over my current Ivy Bridge i5-3317U. I really want to upgrade my laptop for numerous reasons, but after 2 years I'm looking for more than just a marginal increase in performance. I was very disappointed when Lenovo went with the Core M for the Yoga 3.
I have this same Core i5-3317U in Surface Pro 1 (1st gen, it works without throttling there, but two CPU fans are audible under heavy load). I believe, CPU-wise, you won't get really big improvements with Broadwell-U over Ivy-U. GPU-wise, however, Broadwell-U will be way faster. So, it depends on what you are interested in more: CPU progress, GPU progress, or both.
I'm in the same boat. Think I have the same processor, too.
I am looking at getting the inevitable Surface Pro 4. It looks like I'll be able to get the same CPU performance (assuming it's Broadwell-Y), and a fair bit better GPU performance, with much improved battery life. Should be good for my needs... Don't need too much number crunching power on the thing, and the graphics are a real sore spot on my Asus S400CA.
I am concerned about the Broadwell Pentiums and Celerons overlapping with the new Cherry Trail.
Up until now, Bay Trail and Ivy Bridge Celerons have been sold in similarly priced systems. The Celeron might use a bit more power, but in a cheap laptop, or especially a little ITX box, it was worth it for the extra performance. The 1.5 GHz Ivy Bridge cores still hold a noticeable lead over a Bay Trail core at 2.6 GHz (boost). They shared the same GPU generation, but the Celeron had 6 EUs to the Bay Trail's 4 EUs, plus ran at a higher clock.
Now, apparently CherryTrail is going to have 16 EUs to Broadwell Celeron's 12 EUs. They both will have dual-channel LPDDR3-1600 interfaces, so unless the GT1 is clocked significantly higher, this will be the first time an Atom has a better GPU than its mainstream contemporary.
As for the CPU portion, for years the cheapest Celeron U has hardly budged in clock rate, from the 1.4 GHz Sandy Bridge to the new 1.5 GHz Broadwell. So these chips' upgrades have depended almost entirely on process improvements. The process improvements from Bay Trail to Cherry Trail sound a bit more aggressive than those from Haswell to Broadwell. So, if Intel ends up clocking the Cherry Trail higher too (say, 2.8 GHz boost), we may also have a situation where the Cherry Trail has near CPU performance parity with the Broadwell Celeron.
I'm still not seeing any Broadwell desktop parts. This continues to make me believe that Intel is having the same kind of issues with its 14nm die shrink as TSMC is with their 20nm process, despite Intel's assertions otherwise.
How definitive! ;-) Really, this was supposed to be out early to middle of last year. This is just the latest claim of a release date from Intel. I will believe it when they are shipping them in volume. Until then it is vaporware, not hardware.
I agree with you, they are long overdue, in terms of previous Intel tick-tock pace. Yield problems, I guess. However, for some people like me it does not matter, because I already have Z87-based desktop since June 2013, which won't be compatible with Broadwell-K (so I'll just stick with my Z87-based setup for a while in the future).
What problems is TSMC having with 20nm? Apple probably sold around 100M A8/A8X devices in Q4 2014. Intel sells around 100M CPUs in a quarter, but only a REALLY tiny fraction of those were Broadwell last quarter, and even this quarter only a fraction will be Broadwell.
As far as I can see, TSMC 20nm basically hit its targets at the expected times. Intel 14nm, on the other hand has been delayed by around 6 to 9 months or so from what everyone expected. The weird slowness of the Broadwell rollout (with M then Y/U and eventually desktop) compared to previous rollouts also makes you wonder how healthy 14nm is, even today.
"32b INT GFLOPs" – wut? "L2 transaction lookaside buffer (TLB)" – another great one. How illiterate can you be to write that? Oh, it's Ian's article again...
I just love that every new CPU release continues to advertise some form of "now fast enough for speech recognition!"
I recall seeing a poster on the wall at an Intel facility in 1994 documenting features of the then-new Pentium Processor, including "Real-time voice recognition!"
Intel sucks. We already live in 2015, not in 1995. There are now many apps which need more than 3-4-6 threads. None of these processors deserves in the least to be described as i7 series. They are all cut to hell in the number of cores, the number of transistors, and the cache volume. These are low-grade junk.
I say this! I'm The Pork@III! You're just some Michael Bay :D Intel can take the "GPU" and put it where the sun does not shine. Is this a CPU or what? Some pathetic mixture that resembles neither a CPU nor a video card. The only meaningful line of Intel processors is the enthusiast class, represented recently by the socket LGA 2011-3 processors.
i'm glad broadwell u is here at last, but at the same time i'm missing some of the bigger things. where is ddr4 for improved memory performance and lower power use? where is pcie3 so we can say goodbye to the limitations of sata3 at last? where is usb3.1 and hdmi2.0 so we're not limited by ancient interfaces in times of 4k displays and superslim devices?
do we have to wait for skylake to get all those things? or maybe even longer?
i like intel as much as the next guy and haswell as well as broadwell are very nice mobile platforms as is, but i still can't shake the feeling of stagnation when it comes to i/o and interfaces.
Broadwell (-K, -M, and all other versions) is a 14nm Haswell shrink. Same architecture, same memory controller (only the future Broadwell-E in the enthusiast class will have quad-channel DDR4 support, like Haswell-E). DDR4 support in the normal high-end class comes with Intel Skylake-S and Skylake-H.
thanks for clearing that up, i don't know why i didn't make the connection that shrink != new architecture. i should have known better. still boring though, that we have to wait another generation to see some real innovations again.
Yes, with Skylake we get a few new instructions, a new memory controller, eDRAM in the die area, and a few other architectural changes. With all these changes together, Intel promises significant progress in the computational capacity of this generation of processors. Intel has never once promised more than what its customers actually imagined. But we still have hope.
Intel should think about producing a mid-range quad core without HT and without the HD graphics GPU specifically for gaming laptops that come with discrete Nvidia GPUs. Before, it made sense to have the iGPU for non-gaming situations to keep the battery from draining. The laptop would switch back and forth automatically. Now, though, with Maxwell being designed as a mobile architecture from the start, there has to be some way that Nvidia can disable most of the GPU so that it can operate at extremely low power consumption when browsing or doing light tasks. I don't think Intel has ever had a mobile core-i CPU with 4 physical cores and no HT. And I'm pretty sure there never was a mobile core-i CPU that did not have an iGPU. Right now, gamers have the choice of a 4c/8t i7 or a 2c/4t i5 or i7 and nothing in between. Giving us a 4c/4t i5 would bring the cost of halfway decent gaming laptops down to under $1,000 depending on the GPU installed. I'm sure there are plenty of other applications for a mobile CPU like this other than gaming, but this would be the ideal gaming laptop CPU.
I have the Iris Pro 5200, I'll be interested to see where that 6100 falls in comparison to it. More EUs, and the other benefits to the small GPU-level caches, but no eDRAM. I think the 5200 should still beat it, but I wonder how close it can come without eDRAM.
fallaha56 - Monday, January 5, 2015 - link
then let's hope for a Skylake Surface 4 - too little change here
Walkop - Monday, January 5, 2015 - link
Brianandforce? ;)
aegisofrime - Monday, January 5, 2015 - link
Any word on desktop parts?
kspirit - Monday, January 5, 2015 - link
Does Intel update architecture at all in the year of the die shrink? That's news to me, I didn't know any GPU rework was planned. :o
DigitalFreak - Monday, January 5, 2015 - link
They do, but it's not usually a major update.
R3MF - Monday, January 5, 2015 - link
the 15W i5 chips will be the ones found in the vast majority of ultrabook models released by the vast majority of the ultrabook manufacturers.
to me, the interesting question is whether the 48EU model will be popular, or whether the bulk of the above will be taken up by the 24EU models...?
will a Core i5-5300U** be sufficient to run Total War: Attila at low settings at 1600x900?
** 2 / 4 2.3 2.9 2.7 24 300/900 1600/1600 3MB 7.5W Yes $281
drothgery - Monday, January 5, 2015 - link
Mobile i7s have been dual core from the very beginning. Real quad-core mobile parts didn't show up until the highest-wattage mobile Haswell parts ...
RussianSensation - Monday, January 5, 2015 - link
What do you mean? There have been mobile quad-core HT i7s for 4 years now, with Sandy and Ivy:
1. Core i7 2630QM - Jan 3, 2011
http://www.notebookcheck.net/Intel-Core-i7-2630QM-...
2. Core i7 3635QM - Sept 30, 2012
http://www.notebookcheck.net/Intel-Core-i7-3635QM-...
TiGr1982 - Monday, January 5, 2015 - link
Even earlier than that, for example, the first gen Nehalem (45 nm) Core i7-720QM was released in Q3 2009 and was a 4 core and 8 thread CPU:
http://ark.intel.com/products/43122/Intel-Core-i7-...
However, these were 45 W parts; full wattage parts are a different story.
LukaP - Friday, February 20, 2015 - link
Those are all QM parts. Those have always been 4-core. This article is about the Broadwell-U parts, which are made on a 2-core die.
psychobriggsy - Monday, January 5, 2015 - link
Very small dies, even for the 48 EU chip.
III-V - Monday, January 5, 2015 - link
Quads are Broadwell-H. I think those come in Q2.
yvizel - Monday, January 5, 2015 - link
Oh NO! What would Intel do without your portfolio subscription?? Someone PLEASE call Intel ASAP!
p1esk - Monday, January 5, 2015 - link
You made me laugh :-)
maroon1 - Monday, January 5, 2015 - link
LOL. Can you name any other CPU with a 15W TDP that can match the Core i7-5650U?
In fact, the Core i7-5650U should probably even beat most of AMD's 35W chips. The number of cores doesn't really matter.
III-V - Monday, January 5, 2015 - link
It's their U series. Those SKUs target lower power -- quad cores aren't suitable for that. It's likely that with the power constraints, a quad core design wouldn't offer a whole lot more performance than a dual core.kenansadhu - Monday, January 5, 2015 - link
U-class of Intel mobile processors never had quad core, if I'm not mistakenaratosm - Monday, January 5, 2015 - link
HD6100 still lacks eDRAM. The performance improvement from Haswell will be marginal, since the biggest problem with HD5100 has been the memory bottleneck.
TiGr1982 - Monday, January 5, 2015 - link
Maybe DDR3-1866 may help a little bit for 48 EU GPUs (vs DDR3-1600 before). But indeed, without eDRAM ("Crystalwell"), 1866 is still not enough, presumably.
kpb321 - Monday, January 5, 2015 - link
We will have to wait and see. There might be more of a performance difference for Broadwell than in the past, because they decreased the number of EUs per slice from 10 down to 8 and increased the cache size. That should mean a lot more cache available per EU, which should help keep it from being as bandwidth-limited as in the past. It will probably still be bandwidth-limited, but hopefully not as much, making the GT3 version without eDRAM more reasonable.
With that said, integrated GPUs will always be behind dedicated GPUs in performance, because graphics is so parallel that it scales easily with more units, but those additional units mean higher power and bandwidth requirements. That's why you see high-end GPUs using 200+ watts and very wide/fast memory interfaces, both of which are much higher than can reasonably be handled in an integrated GPU setting.
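For a rough sense of how tight the memory budget is, here's a back-of-envelope sketch. The DIMM speeds are the ones discussed in this thread; the discrete-GPU figure is only an illustrative assumption (a generic 256-bit GDDR5 card at 7 GT/s), not a measured number:

```python
# Peak DRAM bandwidth in GB/s: channels x bus width (bytes) x transfer rate (MT/s).
def peak_bw_gbs(channels: int, bus_bytes: int, mts: int) -> float:
    return channels * bus_bytes * mts / 1000

igpu_1600 = peak_bw_gbs(2, 8, 1600)   # dual-channel DDR3-1600: 25.6 GB/s
igpu_1866 = peak_bw_gbs(2, 8, 1866)   # dual-channel DDR3-1866: ~29.9 GB/s
dgpu      = peak_bw_gbs(1, 32, 7000)  # hypothetical 256-bit GDDR5 @ 7 GT/s: 224 GB/s

print(igpu_1600, igpu_1866, dgpu)
```

The jump from DDR3-1600 to DDR3-1866 buys only about 17% more bandwidth, which is why the eDRAM question keeps coming up.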
III-V - Monday, January 5, 2015 - link
Gen8 actually makes a lot of changes that reduce its reliance on external memory. Take a look at the bit on caches in this article. It'll still be constrained by bandwidth, but not as much as you seem to be expecting.
texasti89 - Monday, January 5, 2015 - link
It is nice to see an audio DSP element integrated into the PCH. I hope to see more and more integration in the near future. The power charts show clearly that the display panel still makes the major contribution to overall platform power consumption. I think Intel and other SoC players have reached the point where SoCs can no longer provide pronounced improvements in overall power saving, given the demand for higher display resolutions. IGZO display technology can cut display power by at least half, which will give SoC designers further opportunity to effectively improve efficiency.
thunderising - Monday, January 5, 2015 - link
So, the fastest Intel Core i7, which costs a lot of $$ and spends nearly 70% of its die space on graphics, produces 844.8 GFlops. Whereas NVIDIA's Tegra X1 outputs 1024 TFlops.
*Claps*
Pork@III - Monday, January 5, 2015 - link
"NVIDIA's Tegra X1 outputs 1024 TFlops" >(in FP16)< But we already live in 2015 and work with FP32 and FP64 mostly
TiGr1982 - Monday, January 5, 2015 - link
Talking FP64, Tegra X1 may not even have it at all, or, at best, I suppose, it may have it at the same ratio as GM204, which is just 1/32. So, I bet, FP64 capability does not really apply seriously to Tegra X1. FP16 and FP32 are to be used there.
III-V - Monday, January 5, 2015 - link
I'm sure it'll have some FP64 support... Probably at a 1/32, 1/48, or 1/64 rate. It'd be ludicrous for it to not support it at all.
TiGr1982 - Monday, January 5, 2015 - link
I suppose FP64 can be at 1/32, which, like I said, is the case for GM204. But that's not a lot, certainly.
TiGr1982 - Monday, January 5, 2015 - link
The X1 gives those flops for FP16 (half precision). Don't be fooled by the usual nV marketing, and compare "apples to apples". However, this is not to say that this Broadwell-U is very impressive. To me, it looks like just one more evolutionary step over Haswell-U. Nothing special, I would say. Still dual-core x86, as a lot of people complain here - for some reason Intel strongly believes quad core is not needed in the -U segment. Instead, they beef up only the GPU, which may be bottlenecked by DDR3 anyway, just as in AMD's Kaveri case.
And all of these Broadwell-U i5 and i7 are offered for big $$$, as usual in Intel's case. Somewhat disappointing - I agree with some other posters in this thread.
DigitalFreak - Monday, January 5, 2015 - link
It is a node shrink, so you shouldn't expect anything major over Haswell. Now if Skylake doesn't bring the goods, then they'll have an issue.
Jaybus - Monday, January 5, 2015 - link
Huh? Peak FP32 for X1 is 512 GFlops, substantially less than 48 EU Broadwell-U.
name99 - Tuesday, January 6, 2015 - link
BUT Intel offer a coherent single address space for GPU and CPU compute. This is an important step on the path to practical, commonplace heterogeneous computing. As far as I know, nV aren't offering that in X1.
(I'd be curious to know if Imagination are offering it in Rogue 7, because Apple are probably the company that's best set to put all the pieces of this together, from the hardware through the OS to the language+dev tools to the frameworks; but if IMG aren't on track, Apple are kinda screwed. Of course Apple COULD be preparing their mythical self-designed GPU...)
There's plenty to be irritated about with Broadwell, but let's praise the improvements that really are a good foundation for the future, and this is primary among them.
stephenbrooks - Tuesday, January 6, 2015 - link
GFlops not TFlops! And at FP32 the Tegra X1 peaks at 512 GFlops (source: http://anandtech.com/show/8811/nvidia-tegra-x1-pre... ).
tipoo - Sunday, January 18, 2015 - link
*Twitch* Argh. The X1 gflop number you just gave is for half precision; it's their double-speed FP16 mode. And then you're going and comparing that to a full-precision number.
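To untangle the numbers being thrown around in this subthread, here's a hedged sketch of the peak-FLOPS arithmetic. The EU/core counts and clocks are the ones discussed above; the per-clock throughputs (16 FP32 FLOPs per Gen8 EU, FMA counted as 2 ops per CUDA core, double-rate FP16 on X1) are assumptions based on the public Gen8 and Maxwell descriptions:

```python
# Gen8 EU: 2 x SIMD-4 FPUs, FMA counted as 2 ops -> 16 FP32 FLOPs per EU per clock.
def gen8_gflops(eus: int, ghz: float) -> float:
    return eus * 16 * ghz

# Maxwell CUDA core: 1 FMA = 2 FP32 ops per clock; Tegra X1 doubles the rate for FP16.
def x1_gflops(cores: int, ghz: float, fp16: bool = False) -> float:
    return cores * 2 * ghz * (2 if fp16 else 1)

print(round(gen8_gflops(48, 1.1), 1))  # 844.8 GFLOPS FP32, the Broadwell GT3 figure
print(x1_gflops(256, 1.0))             # 512.0 GFLOPS FP32
print(x1_gflops(256, 1.0, fp16=True))  # 1024.0 GFLOPS, but only at half precision
```

So 1024 vs 844.8 is FP16 against FP32; at matching FP32 precision the comparison is 512 vs 844.8.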
MattCoz - Monday, January 5, 2015 - link
I'm really interested to see what kind of performance improvement can be seen over my current Ivy Bridge i5-3317U. I really want to upgrade my laptop for numerous reasons, but after 2 years I'm looking for more than just a marginal increase in performance. I was very disappointed when Lenovo went with the Core M for the Yoga 3.
TiGr1982 - Monday, January 5, 2015 - link
I have this same Core i5-3317U in Surface Pro 1 (1st gen, it works without throttling there, but two CPU fans are audible under heavy load). I believe, CPU-wise, you won't get really big improvements with Broadwell-U over Ivy-U. GPU-wise, however, Broadwell-U will be way faster. So, it depends on what you are interested in more: CPU progress, GPU progress, or both.
III-V - Monday, January 5, 2015 - link
I'm in the same boat. Think I have the same processor, too. I am looking at getting the inevitable Surface Pro 4. It looks like I'll be able to get the same CPU performance (assuming it's Broadwell-Y), and a fair bit better GPU performance, with much improved battery life. Should be good for my needs... Don't need too much number crunching power on the thing, and the graphics are a real sore spot on my Asus S400CA.
nwarawa - Monday, January 5, 2015 - link
I am concerned about the Broadwell Pentiums and Celerons overlapping with the new Cherry Trail. Up until now, Bay Trail and Ivy Bridge Celerons have been sold in similarly priced systems. The Celeron might use a bit more power, but in a cheap laptop, or especially a little ITX box, it was worth it for the extra performance. The 1.5 GHz Ivy Bridge cores still hold a noticeable lead over a Bay Trail core at 2.6 GHz (boost). They shared the same GPU generation, but the Celeron had 6 EUs to the Bay Trail's 4 EUs, plus ran at a higher clock.
Now, apparently Cherry Trail is going to have 16 EUs to the Broadwell Celeron's 12 EUs. They will both have dual-channel LPDDR3-1600 interfaces, so unless the GT1 is clocked significantly higher, this will be the first time an Atom has a better GPU than its mainstream contemporary.
As for the CPU portion, for years the cheapest Celeron U has hardly budged in clock rate, from the 1.4 GHz Sandy Bridge to the new 1.5 GHz Broadwell. So these chips' upgrades have depended almost entirely on process improvements. The process improvements from Bay Trail to Cherry Trail sound a bit more aggressive than those from Haswell to Broadwell. So, if Intel ends up clocking Cherry Trail higher too (say, 2.8 GHz boost), we may also have a situation where Cherry Trail has near CPU-performance parity with the Broadwell Celeron.
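Assuming both GPU blocks use Gen8 EUs with identical per-EU throughput (an assumption; Intel hasn't detailed Cherry Trail's graphics at this point), the clock the 12 EU Broadwell GT1 would need for paper parity with a 16 EU Cherry Trail part is a one-liner:

```python
# Clock (GHz) a smaller GPU needs to match a bigger one, assuming identical EUs.
def parity_clock(eus_small: int, eus_big: int, big_ghz: float) -> float:
    return eus_big * big_ghz / eus_small

# Hypothetical example: if Cherry Trail's 16 EUs ran at 0.6 GHz,
# the 12 EU GT1 would need roughly 16/12 = 1.33x that, i.e. about 0.8 GHz.
print(round(parity_clock(12, 16, 0.6), 2))
```

Either way, both parts feed from the same LPDDR3-1600, so the bandwidth ceiling may matter more than the EU count.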
naloj - Monday, January 5, 2015 - link
Looks like Intel has published the specs of the Broadwell-U NUC line:
http://www.intel.com/content/www/us/en/nuc/product...
ayejay_nz - Monday, January 5, 2015 - link
Thanks a lot for this link!
danjw - Monday, January 5, 2015 - link
I'm still not seeing any Broadwell desktop parts. This continues to make me believe that Intel is having the same kind of issues with its 14nm die shrink as TSMC is with their 20nm process, despite Intel's assertions otherwise.
TiGr1982 - Monday, January 5, 2015 - link
Broadwell desktop parts are supposed to be released "Mid 2015" (May-July, maybe). There is one more Intel slide stating this (see arstechnica):
http://arstechnica.com/gadgets/2015/01/broadwell-u...
danjw - Monday, January 5, 2015 - link
How definitive! ;-) Really, this was supposed to be out early to middle of last year. This is just the latest claim of a release date from Intel. I will believe it when they are shipping them in volume. Until then it is vaporware, not hardware.
TiGr1982 - Monday, January 5, 2015 - link
I agree with you, they are long overdue in terms of Intel's previous tick-tock pace. Yield problems, I guess.
However, for some people like me it does not matter, because I have had a Z87-based desktop since June 2013, which won't be compatible with Broadwell-K (so I'll just stick with my Z87-based setup for a while).
III-V - Monday, January 5, 2015 - link
Broadwell desktop has been slated for a summer release for a good 6+ months now... Where have you been?
name99 - Tuesday, January 6, 2015 - link
What problems is TSMC having with 20nm? Apple probably sold around 100M A8/A8X devices in Q4 2014.
Intel sells around 100M CPUs in a quarter, but only a REALLY tiny fraction of those were Broadwell last quarter, and even this quarter only a fraction will be Broadwell.
As far as I can see, TSMC 20nm basically hit its targets at the expected times. Intel 14nm, on the other hand has been delayed by around 6 to 9 months or so from what everyone expected. The weird slowness of the Broadwell rollout (with M then Y/U and eventually desktop) compared to previous rollouts also makes you wonder how healthy 14nm is, even today.
jjjia - Wednesday, January 7, 2015 - link
100M? 20M at most.
Senti - Monday, January 5, 2015 - link
"32b INT GFLOPs" – wut? "L2 transaction lookaside buffer (TLB)" – another great one. How illiterate can you be to write that? Oh, it's Ian's article again...
toyotabedzrock - Monday, January 5, 2015 - link
I'm sorry, but whoever is deciding specs and model numbers for Intel is partaking in too much medicinal weed. It gets worse every year.
WaitingForNehalem - Monday, January 5, 2015 - link
So no HDMI 2.0, huh?
CharonPDX - Monday, January 5, 2015 - link
I just love that every new CPU release continues to advertise some form of "now fast enough for speech recognition!" I recall seeing a poster on the wall at an Intel facility in 1994 documenting features of the then-new Pentium processor, including "Real-time voice recognition!"
sonofgodfrey - Monday, January 5, 2015 - link
TLB = Translation Lookaside Buffer (not transaction).
Pork@III - Tuesday, January 6, 2015 - link
Intel sucks. We already live in 2015, not in 1995. There are now many apps which need more than 3-4-6 threads. None of these processors deserves to be described as i7 series. They are all heavily cut down - in number of cores, in number of transistors, in cache volume. These are low-grade junk.
darkich - Tuesday, January 6, 2015 - link
Wow, Intel GPUs are laughable
Michael Bay - Tuesday, January 6, 2015 - link
Says who, asspained amd sucker? Your whole pathetic gaggle down here is always a good reason to read comments.
Pork@III - Tuesday, January 6, 2015 - link
I say this! I'm The Pork@III! You're just some Michael Bay :D
Intel can take its "GPU" and put it where the sun does not shine. Is this a CPU or what? Some pathetic mixture that resembles neither a CPU nor video. The only meaningful line of Intel processors is the enthusiast class, represented recently by the socket LGA 2011-3 processors.
jed22281 - Thursday, January 8, 2015 - link
sub.
fokka - Thursday, January 8, 2015 - link
i'm glad broadwell u is here at last, but at the same time i'm missing some of the bigger things. where is ddr4 for improved memory performance and lower power use? where is pcie3 so we can say goodbye to the limitations of sata3 at last? where is usb3.1 and hdmi2.0 so we're not limited by ancient interfaces in times of 4k displays and superslim devices? do we have to wait for skylake to get all those things? or maybe even longer?
i like intel as much as the next guy and haswell as well as broadwell are very nice mobile platforms as is, but i still can't shake the feeling of stagnation when it comes to i/o and interfaces.
Pork@III - Saturday, January 10, 2015 - link
Broadwell (-K, -M, and all other versions) is a 14nm Haswell shrink. Same architecture, same memory controller (only the future Broadwell-E in the enthusiast class will have quad-channel DDR4 support, like Haswell-E). We get DDR4 support in the normal high-end class with Intel Skylake-S and Skylake-H.
fokka - Wednesday, January 14, 2015 - link
thanks for clearing that up, i don't know why i didn't make the connection that shrink != new architecture. i should have known better. still boring though, that we have to wait another generation to see some real innovation again.
Pork@III - Friday, January 16, 2015 - link
Yes, with Skylake we get a few new instructions, a new memory controller, eDRAM on the die, and a few other architectural changes. With all these changes together, Intel promises significant progress in the computational capacity of this generation of processors. Intel has more than once promised more than its customers actually got. But we still have hope.
jman9295 - Sunday, January 18, 2015 - link
Intel should think about producing a mid-range quad core without HT and without the HD graphics GPU specifically for gaming laptops that come with discrete Nvidia GPUs. Before, it made sense to have the iGPU for non-gaming situations to keep the battery from draining. The laptop would switch back and forth automatically. Now, though, with Maxwell being designed as a mobile architecture from the start, there has to be some way that Nvidia can disable most of the GPU so that it can operate at extremely low power consumption when browsing or doing light tasks. I don't think Intel has ever had a mobile core-i CPU with 4 physical cores and no HT. And I'm pretty sure there never was a mobile core-i CPU that did not have an iGPU. Right now, gamers have the choice of a 4c/8t i7 or a 2c/4t i5 or i7 and nothing in between. Giving us a 4c/4t i5 would bring the cost of halfway decent gaming laptops down to under $1,000 depending on the GPU installed. I'm sure there are plenty of other applications for a mobile CPU like this other than gaming, but this would be the ideal gaming laptop CPU.
tipoo - Sunday, January 18, 2015 - link
Why does each EU handle 7 threads, when they have 8 "shaders" each?
tipoo - Sunday, January 18, 2015 - link
I have the Iris Pro 5200; I'll be interested to see where the 6100 falls in comparison to it. More EUs, and the other benefits of the small GPU-level caches, but no eDRAM. I think the 5200 should still beat it, but I wonder how close it can come without eDRAM.
boe - Wednesday, February 4, 2015 - link
I'd certainly like to know more about the onboard GPU for HTPCs. Will it support 4K@60? 4K 3D specs etc.