eggimage - Tuesday, August 12, 2014 - link
I'm seriously hoping Broadwell-Y lands on the Retina MacBook Air.
iwod - Tuesday, August 12, 2014 - link
Well, I think it is all but confirmed: fanless MBA, Retina MBA (4K).
willis936 - Tuesday, August 12, 2014 - link
4K in an MBA? Get real.
nathanddrews - Tuesday, August 12, 2014 - link
There are already several 15" 4K laptops on the market, so it's not unrealistic. I think Apple will probably reserve 4K screens for the rMBP line, but maybe there will be an rMBA. If not this next lineup, the one after. It's going to happen at some point.
SirKnobsworth - Tuesday, August 12, 2014 - link
Unlikely. "Retina" might be Apple's term but they're not buying into the pixel density space race that the rest of the industry is in right now. They go up to what you will be able to actually see in normal usage and then stop. The 13" rMBP has 2560x1600 so that's more likely.RussianSensation - Tuesday, August 12, 2014 - link
4K on a 12-13 inch laptop is a marketing gimmick, especially with existing Windows 8.1. If you use modern laptops with resolution above 1080p, it's already plenty. There is no need at all to include that many pixels on a 12-13 inch laptop as it will severely degrade battery life and GPU performance.
This Guy - Wednesday, August 13, 2014 - link
It's important to people who read scripts with complex characters, people who like small text/large work spaces, and those who hate visible pixels.
Yes, HD screens drain the battery. But we have gained 5W from the CPU.
GPU performance shouldn't matter too much in OS X or Windows for a non-gamer. And MBAs aren't really a gamer's first choice.
IUU - Tuesday, August 19, 2014 - link
"It's important to people who read with complex characters, people who like small text/large work spaces ..."You mean 4k in 12-13 inch laptops ?
Because (and please excuse the inevitable sarcasm), if you do, those people should get checked out at advanced research centers if they are able to discern "visible pixels" at 1080p or even 720p.
Who knows, they may be able to see in ultraviolet or hear beyond 20 kHz.
In a more "serious" tone: if you are able to discern pixels on 5-13 inch screens at HD resolutions, imagine how bad it is on 22 or 30 inch screens. So the manufacturers should, first of all, increase the pixel density at those sizes.
It's like putting 100-litre tanks on motorcycles because you think 70 litres is not enough, and then saying 50 litres is OK for trucks.
Visual - Friday, October 10, 2014 - link
You do not need to discern individual pixels in order to appreciate the improved sharpness and detail or to be able to work with smaller font sizes.
eggimage - Tuesday, August 12, 2014 - link
My guess is Apple will stick with a 16:9 aspect ratio for this Retina MacBook Air to fit a similar design with a slimmer bezel without having to increase the depth of the chassis. As for the resolution, I'd say it'll be 2732x1536 (doubling 1366x768). A 4K resolution would be too power hungry to run smoothly or to provide sufficient battery life for all-day use, especially if they do go with the Y-series processors and try to reduce the overall thickness and weight for a compact design.
OrphanageExplosion - Tuesday, August 12, 2014 - link
My concern is that performance would significantly regress compared to the current Haswell version, which may have low base clocks but can turbo all day long.
eggimage - Tuesday, August 12, 2014 - link
I wouldn't be too concerned, since the distinction between the Pro & Air notebook product lines will be even better defined: the Air can target primarily low-end users with basic needs and provide the ultimate mobility in terms of light weight and long-lasting battery life, while the Pro series laptops focus on people with professional needs, since the TDPs of both the U and H series processors will remain the same. Not to mention the high possibility that Apple will still keep the non-Retina Airs with the U processors on the market.
zainichia - Monday, August 18, 2014 - link
If I'm not wrong, the current MacBook Airs are using the 15W part, while the 13" Retina MacBook Pro uses a 28W part. So if Apple does indeed use Broadwell-Y (which is around 4.5W, if I'm not wrong) for the new MacBook Air, don't you think the gap between the two would be too huge...?
Also, as a side note: can anyone share what kind of performance differences we are looking at between the Y processors and the U processors...?
nevertell - Tuesday, August 12, 2014 - link
Except that it can't turbo all day long.
protomech - Tuesday, August 12, 2014 - link
It can't stay at the maximum turbo clock all day long, but it can run significantly faster than base clocks all day long, particularly if you're loading only the CPUs.
CaedenV - Tuesday, August 12, 2014 - link
A few thoughts on the matter:
1) This makes the release of the Surface Pro 3 even stranger than it already was. I mean, it was supposed to be all about the SP Mini, and the SP3 was going to be a side note which then took center stage at the last minute. Maybe they saw these new chips coming and decided to purge inventory to do a quick turnaround with a fanless SP4 (similar to the release of the SP and SP2) so that they do not get stuck with the bill again? Or perhaps they know something that we don't about yields, costs, and availability which will push the release of a Broadwell product down the line a bit?
2) I seriously wonder about the Y-series parts being Atom in disguise. We have already seen Atom being sold as Celeron (dual core) and Pentium (quad core) parts with this last generation. Perhaps with the die shrink and some added attention they were able to get the IPC up to a point where they feel they can sell them as i3/i5/i7 Y parts without anyone noticing? That would explain why nobody was allowed to look at Control Panel or Task Manager yesterday (probably still listing the part as Atom), and it would also fit with the way Intel has been bringing Atom into the fold of their more mainstream product lines, being built on the same process, etc. This is not something I would personally take issue with, as I have had really good experiences with Atom products (especially the last 2 generations of them), but Atom is marketing poison and a lot of people would be very annoyed if Intel took this path.
3) Obligatory comment about how this still offers no real upgrade value for desktop systems. Lower power is great, but outside of upgrading to a 6-8 core part, there is still no real replacement for my Sandy Bridge system. To be honest my wallet is thankful for it, but typically at this point I start drooling over new hardware, and so far I have nothing to look forward to any time soon. By the time there is a proper replacement, my whole computer (processor, RAM, SSD storage, IO, etc.) will just be a tiny SoC that ends up mounting to the back of my GPU. That in itself would be neat... but my next CPU upgrade will probably be to get support for newer external technologies rather than for the CPU itself (DDR4, PCIe 4.0, M.2/PCIe storage, etc.).
Black Obsidian - Tuesday, August 12, 2014 - link
The sad (or happy) reality is that, for mainstream desktop users, more CPU performance is utterly pointless. The CPU is already nowhere near being the performance bottleneck, and with four cores already being prevalent on the desktop, the state of application multi-threading is going to have to advance significantly before there's any point in adding more cores, either.
If you're an enthusiast, well, that's why Intel has an enthusiast-oriented platform, in S2011. Whether you need faster cores, more cores, more cache, more PCIe lanes, etc., S2011 is where you want to be.
Yeah, it means that people like you and me are still sitting on Sandy Bridge, because the mainstream platform hasn't advanced enough to be worth spending money on, and the enthusiast platform plain isn't worth the cost for our uses. But that's not a bad thing; I don't know about you, but I don't feel that my 2500K is inadequate in any way. If I did, I'd flip it from stock to its highest 24/7 stable OC (4.6GHz) and that would tide me over until Skylake and its raft of new technologies.
I'm a huge PC hardware geek, but I'm not complaining that my main desktop rig is going to last 5 years without breaking a sweat. It's just more money to spend elsewhere (HTPCs, laptops, 4K or 21:9 LCDs, etc.)
ZeDestructor - Tuesday, August 12, 2014 - link
I wonder if things will finally be power-efficient enough that we can have a quad-core chip as a mainstream laptop chip...
If that happens, the desktop platform should start cracking forward again as programmers start taking advantage of more cores. Then again, the Steam hardware survey says that 48.63% of users have 2 CPUs and 43.49% have 4 CPUs (and Steam can see the difference between hyperthreaded cores and real cores)...
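On that last point, here is a rough, Linux-only sketch of my own (not Steam's actual method) of how a survey tool can tell real cores from hyperthreaded siblings: count distinct (physical id, core id) pairs in /proc/cpuinfo versus the number of logical "processor" entries.

```cpp
// Illustrative only: counts logical CPUs vs. physical cores on Linux.
#include <fstream>
#include <iostream>
#include <set>
#include <string>
#include <utility>

int main() {
    std::ifstream cpuinfo("/proc/cpuinfo");
    std::string line;
    int logical = 0;
    int phys_id = 0, core_id = 0;
    std::set<std::pair<int, int>> physical_cores;

    while (std::getline(cpuinfo, line)) {
        if (line.rfind("processor", 0) == 0) {
            ++logical;                      // one entry per hardware thread
        } else if (line.rfind("physical id", 0) == 0) {
            phys_id = std::stoi(line.substr(line.find(':') + 1));
        } else if (line.rfind("core id", 0) == 0) {
            core_id = std::stoi(line.substr(line.find(':') + 1));
            physical_cores.insert({phys_id, core_id});  // dedupes HT siblings
        }
    }
    // A 2-core/4-thread laptop chip reports 4 logical CPUs, 2 physical cores.
    std::cout << "logical CPUs: " << logical
              << ", physical cores: " << physical_cores.size() << "\n";
}
```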
Maybe the 8-core consoles will help? Then again, current quads are easily as fast in pure compute...
Damnit, now I'm sad. We need SOMETHING that isn't server or HPC crap to push performance forward!
name99 - Tuesday, August 12, 2014 - link
"If that happens, the desktop platform should start cracking forward again as programmers start moving to taking advantage of more cores."Isn't the mainstream laptop core TODAY a quad-thread core (2 cores, each hyperthreaded)? To imagine that going to four cores will change programmer behavior strikes me as extremely optimistic. Until better programming abstractions are made available, it seems unlikely anything will change.
Such abstractions MAY arise from the C++ atomics work (which will build up a body of compiler knowledge, vocabulary, and ideas which will eventually make its way through the whole developer community, not just C++).
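For readers who haven't touched this: a minimal sketch (my own illustration, nothing from the thread) of the kind of C++11 atomics primitive being referred to; four threads bump a shared counter with no mutex involved.

```cpp
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    std::atomic<long> counter{0};   // lock-free on mainstream hardware

    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) {
        workers.emplace_back([&counter] {
            for (int j = 0; j < 100000; ++j) {
                // Atomic read-modify-write; relaxed ordering suffices for a pure counter.
                counter.fetch_add(1, std::memory_order_relaxed);
            }
        });
    }
    for (auto& t : workers) t.join();

    std::cout << counter.load() << "\n";  // prints 400000
}
```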
Or they may arise from widespread HW TM which allows for a different type of programming language abstraction, one that is more composable. This doesn't require widespread quad-cores --- but it does require HW TM to make the leap from x86 (and POWER and z CPUs) to ARM. (And then, of course, enough time to percolate through the ARM ecosystem, from the high-end down to the middle.)
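And to make "HW TM" concrete: a hedged sketch using Intel's RTM intrinsics from <immintrin.h> (the _xbegin/_xend/_xabort intrinsics are real; the lock-elision wrapper around them is just an illustration, not any vendor's reference code). It needs a TSX-capable CPU, a runtime CPUID check in real code, and something like g++ -mrtm to build.

```cpp
#include <immintrin.h>
#include <atomic>

static std::atomic<bool> fallback_locked{false};  // slow-path spinlock
static long shared_value = 0;

void add_to_shared(long delta) {
    unsigned status = _xbegin();                   // try to start a hardware transaction
    if (status == _XBEGIN_STARTED) {
        if (fallback_locked.load(std::memory_order_relaxed))
            _xabort(0xff);                         // someone holds the lock: bail out
        shared_value += delta;                     // speculative update, committed atomically
        _xend();
        return;
    }
    // Transaction aborted (conflict, capacity, interrupt, lock held): take the slow path.
    while (fallback_locked.exchange(true, std::memory_order_acquire)) { /* spin */ }
    shared_value += delta;
    fallback_locked.store(false, std::memory_order_release);
}
```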
Personally that's my theory as to how this will play out: Apple will add HW TM to their ARM CPUs, and will then announce Swift parallel programming constructions that require HW TM. Apple can do this because they can push ARM fast (everyone else is still waiting for A57s, let alone HW TM), and they're willing to obsolete the past --- they could announce that iOS and OSX for four years from now will only run on HW TM machines and no-one would be surprised. MS can't force their ecosystem to move fast (or at all *cough* XP *cough*). Google likewise have limited control over the CPUs their vendors use, and it would be a hell of a change in how they do things to ditch Java for a custom language.
name99 - Tuesday, August 12, 2014 - link
I'd put it differently. Both MS and Intel seem to be flailing wildly, both without any idea of what will actually sell, both without any interest in investigating reality.
To me the strangest thing is the REAL backstory behind the Broadwell delay. Let's consider.
The official story is that there were a few minor problems, now fixed, getting 14nm up and running. OK, let's assume that's true. Then why the ridiculous staggering of the separate Broadwell parts? We're supposed to get the laptop parts around 4 months after the Y parts, then the desktop parts about 5 months after the laptop parts. WHY?
- there's supposedly a fab in AZ going empty, so it can't be capacity. (And, face the truth, damn few of these Y parts are ever going to be bought.)
- if the process is debugged as claimed, then that shouldn't affect the laptop and desktop parts. Their delay can't be blamed on process.
- why make the centerpiece of the intro, the part that's going to get all the press, a low power part that, honestly (as AdTech states above), just doesn't meet any real needs? The laptop crowd don't want it because it's too low powered; the tablet crowd don't want it because it's too expensive.
The rational thing to do (in terms of both income and making a splash) would have been to ship the laptop parts first.
There's something very strange going on here that we're not being told. My GUESS is that something went terribly wrong in the design process (basically the ever-increasing x86 complexity has finally hit the breaking point) and a bunch of (more or less desperate) decisions that may have made sense for an ultra-low-power part have ended up substantially hurting the performance of anything above Y class, so Intel is now desperately scrambling to undo those changes...
Anyone have a better theory? It strikes me that this is THE tech story of 2014, for any real journalist out there.
p1esk - Tuesday, August 12, 2014 - link
The Broadwell design does not seem that much more complex than Haswell. It's just a few small tricks on top of the existing architecture. I don't think the transistor count went up significantly.
The reason behind the delay is more likely to be poor yields. They are pushing the lowest power chips first because that's where ARM is beating them in the battle for smartphones and tablets. The laptop/desktop market does not demand the same urgency to mass produce chips at low yield.
JarredWalton - Tuesday, August 12, 2014 - link
I think the most likely cause for the delay of Broadwell is that there's a huge inventory of Haswell that people aren't exactly gobbling up, and really there's not much competition for Haswell. AMD is slower and/or more power hungry, depending on which parts you look at. Notebooks and desktops might see a 5-10% boost in performance, but that's not particularly important in the grand scheme of things.
But look at the Y-series parts. Tablets are not using Haswell-Y for the most part, and it's a growing market. There's little cannibalizing of existing sales by working on Broadwell-Y (BDW-Y), so Intel can focus on a design catering specifically to the needs of the extreme low power market and perhaps make some inroads in the tablet sector.
BDW-Y and Core M are a bit of a long shot, but really if Intel were to release BDW for desktops today, what would most people say? "Hmm... a bit faster than HSW? I think I can skip this upgrade." And that's what many have been saying ever since Sandy Bridge came out. Convincing people to upgrade to the next Core processor is a difficult task when everything still runs plenty fast.
p1esk - Tuesday, August 12, 2014 - link
Intel claims double the performance per watt for the BDW-Y line compared to HSW-Y. Why do you think they can't release a BDW desktop chip that is more than "a bit faster" than the equivalent Haswell? From the presentation, it looks like 14nm transistors are 30% better in every way than 22nm ones, so why not shoot for a 30% faster clock at the same TDP?
Spunjji - Thursday, August 14, 2014 - link
You're making some serious assumptions there about how well the process scales at the high-end. It may well be that as you get up to the same clock speeds, that TDP advantage shrinks.
The fact is that Intel are competing with their own past successes. As Jarred said, BDW-Y is the only area they can release a chip that doesn't have to deal with upgrade inertia over previous products.
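A back-of-the-envelope way to see why (illustrative numbers only, not Intel's figures): dynamic power scales roughly as

    P_dyn ≈ α · C · V² · f

and reaching desktop clocks generally means raising voltage along with frequency. If hitting a 30% higher clock needs, say, ~15% more voltage, power rises by about 1.15² × 1.3 ≈ 1.72×, so a process that looks twice as efficient at a 4.5W operating point can end up with a much thinner TDP advantage at the top of the voltage/frequency curve.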
Speedfriend - Wednesday, August 13, 2014 - link
@name99"there's supposedly a fab in AZ going empty, so it can't be capacity." - they won't roll-out a new process to another fab until it reach an acceptable level of production yield, and then when it does, it will only be rolled out to one fab at a time , so it will take until early/mid 2015 to get three fabs online.
" The laptop crowd don't want it because it's too low powered; the tablet crowd don't want it because it's too expensive." - our company looked at rolling out tablets to some of our employee, turned out that iPad was not feasible due to the legacy systems (and staff not wanting to carry a laptop as well), so we have gone for hybrids. We would love a Broadwell Y based tablet that has better battery life, was thinner and lighter and would happily pay over $1,000 for it. In fact I would love one personally to replace my iPad Air, which I only ever use as a toy.
JarredWalton - Tuesday, August 12, 2014 - link
Your point #2 is, sorry to say, crazy talk. Intel is very clear about Core M being Broadwell-Y, and to date there has never been an Atom chip branded as Core. Maybe at some point Core will see a chip derived from Atom, but I seriously doubt it. Pentium and Celeron are the budget Intel brands, and so if Atom has a stigma attached to it that's the place to go. Core is for performance, which is why slower Haswell and Ivy Bridge parts are still relegated to the Celeron and Pentium family.
lilmoe - Tuesday, August 12, 2014 - link
Core M seems to be nice and all... But $300+ is a complete rip off (if previous Y series is any indication)... We need more OEMs to compete in that performance/Watt profile for prices to go down...
Thermogenic - Tuesday, August 12, 2014 - link
My opinion: no one can compete with Intel right now. The only fab with the resources to try is Samsung, and I doubt their willingness to commit the resources for relatively low-margin ARM chips. AMD almost went bankrupt trying, GlobalFoundries has now thrown in the towel (signing an agreement with Samsung), and TSMC is showing signs of wavering.
FinFET (Intel's implementation is called Tri-Gate) is hard. 14nm is even harder, based on what we've seen out of Intel. Samsung & TSMC are going to try to go to 14nm and start using FinFET for the first time, at the same time. More power to them, but I feel like they will have major pains trying to.
DISCLOSURE: I own INTC shares, mostly after forming the opinion above.
Spunjji - Thursday, August 14, 2014 - link
As someone who has been following this industry for a while now, I see no reason to disagree with any of what you just said. Intel have been gradually eking out a lead in process technology and that lead is only getting stronger as successive die shrinks become more difficult to achieve.
If there are going to be any serious challenges, I see them coming from Samsung. They have the capital and the vertical integration to justify the expense of the investment... whether they have the engineering skill to match Intel remains to be seen though.
SanX - Tuesday, August 12, 2014 - link
- My Hope #1 is that with no fan to cool, we will not see 94C core temperatures again like in the Yoga 2 Pro.
- Hope #2 is to reduce the weight of future Yogas and Microsoft Surfaces, since after 2+ hours of reading on those in bed I do not need to go to the gym anymore.
- Hope #3 is that with 14nm Core M or Atom we will finally see 6" phones based on a true Windows OS.
Hurry up. If not, then the 14nm shrinks from all of Intel's competitors will make Wintel on mobile irrelevant. Remote desktop apps like TeamViewer on any phone, tablet or laptop, for example, already do the job of giving you your desktop Windows PC with all its apps anywhere you go, probably even better, as long as a data network is available, which lately is rarely not the case.
Spunjji - Thursday, August 14, 2014 - link
Braswell is where we might finally see Windows in a 6" device. The current Atom chips are so close already but that should seal the deal.
Whether you'd want to actually use that device for anything, on the other hand... :D