27 Comments
FreckledTrout - Friday, October 25, 2019
I would have thought not getting onto 10nm would have hurt sales, but nope, not a bit. Intel missed the mark on 10nm, so they are suffering shortages, since they had planned to be able to make more chips thanks to 10nm producing more chips per wafer. It's a best-case scenario for Intel.
imaheadcase - Friday, October 25, 2019
The average consumer does not follow or care about what process it is made on; they just want the product.
FreckledTrout - Friday, October 25, 2019
I'm aware. I just thought the enthusiast and data center sales, where AMD is doing very well, would have hurt a little. Not sure if companies are stockpiling chips or what, but Intel and AMD are both doing just fine in this market. Guess it's not as much of a yin-yang relationship as I thought.
sharath.naik - Saturday, October 26, 2019
This is the reason why Intel has not moved to 10nm. It does not make economic sense to pay for the transition when demand for the existing node is still so high. This will hurt them in the server space, but on the consumer end, even on 12nm, AMD simply is not matching Intel in power consumption (battery life, not raw performance). They may barely match them even with 7nm. Given that performance has reached a good-enough state for most people, it is battery life that drives buying decisions these days.
Korguz - Sunday, October 27, 2019
" This is the reason why Intel has not moved to 10nm " um no... they are not at 10nm, because of various things, but it wasnt for that you say....bananaforscale - Sunday, October 27, 2019 - link
Intel hasn't moved to 10nm because they don't have the necessary capacity. Hell, they have been trying to get the process going for *years*, and it's only relatively recently that they released *anything* on 10nm. They would do 10nm if they had the yields and capacity, but they don't.
TristanSDX - Friday, October 25, 2019
Intel created the shortage problem through its own greed. When the first shortages appeared, Intel moved to bigger chips with more cores to make higher profits, which worsened the shortage as clients followed that trend rather than stop buying PCs. Now they will have huge 14nm capacity that is already obsolete, and they paid far more for it than they gained from those higher profits.
Gondalf - Friday, October 25, 2019
Nope, the new scanners purchased are perfect for ramping 10nm, so they are in the great situation of 100% fab utilization, which means great profits. The equipment can be used to manufacture on 10nm without any issue in 2020-2022.
As for demand, OEMs are only stockpiling chips to avoid new tariffs from Trump in 2020.
Korguz - Friday, October 25, 2019
Sources for this speculation on your part, Gondalf???
III-V - Friday, October 25, 2019
More cores = greed? Wow, that's a new one. How many times were you held back in school?
TristanSDX - Saturday, October 26, 2019
More cores in a CPU = higher profit per CPU. Think before writing.
FreckledTrout - Monday, October 28, 2019
You are correct, but inversely you have fewer chips to sell as well.
Marlin1975 - Friday, October 25, 2019
This is a double-edged sword, though. They ramp 14nm up, but what happens if they have a breakthrough on 10nm+++, or whatever they are on now, and yields and performance look good? Then they will have a large glut of 14nm parts to dump, which may hurt the newer chips' sales.
I doubt that will happen, but one may read into this that Intel has less faith that 10nm will be truly viable anytime soon. Looks like they are putting all their eggs into 7nm now. I hope it works better than their 10nm designs.
drothgery - Friday, October 25, 2019
If there's anything to Intel's claims that future cores are going to be designed to be less process dependent, they could make chips with lower core counts for the target power envelope on 14nm pretty much indefinitely and just use 10nm for higher-density CPUs. They'd certainly be doing that now if they could.
brantron - Friday, October 25, 2019
Intel chipsets have moved back and forth between 14nm and the newer low-power 22nm over the past several years.
SSDs with PCIe 4.0+ bandwidth need faster controllers. Optane memory can be scaled down further.
They have also shown packaging examples which combine everything from 22nm to 7nm. The DDR5 and PCIe 5.0 controllers of future Xeons likely are somewhere in the middle.
And then there are wild cards, like MRAM and ReRAM. 14nm CPUs may be boring, but this will be fun.
Zizy - Friday, October 25, 2019
There is no magic bullet to make 10nm work. Like most such problems, it requires many little fixes to become viable. It isn't as if chips are just fine until one last step kills them all. It is not impossible to make it work eventually, but even a massive improvement would take at least three months to show up in chips, and nobody plans for a best case that has failed to materialize for several years now.
lejeczek - Friday, October 25, 2019
What shortages? Who cares??
PeachNCream - Friday, October 25, 2019
"...it's worth noting that 25% more wafers does not necessarily mean 25% more CPUs."This statement pretty much captures the problem in a nutshell. Intel, like any other manufacturing and production line company, predicts demand and uses that to make judgement calls about expanding or shrinking production capacity relatively far in advance. Although there may have been poor planning, I think the problem was mainly unexpected competition. AMD's abrupt return to competition using higher core count processors forced Intel to increascecore counts outside of its own predictions whcih in turn reduced yield per wafer and decreased the number of sellable items that a single wafer produces. Other factors like the technical problems associated with 10nm development and the high demand for EUV fab units from other companies may have been factors that planning didn't predict either, but I feel that increasing core counts by necessity was not something people within Intel we able to forecast.
sgeocla - Friday, October 25, 2019
25% might seem like a lot, but if you consider that Intel had to increase core counts by 50% to 100% to compete with AMD over the last two years, it's actually quite insufficient. So now that Intel cannot actually increase physical cores, they are finally allowing hyper-threading across the whole CPU stack. They should have done it sooner.
eddman - Friday, October 25, 2019
Are you referring to the desktop Comet Lake rumors? They actually do seem to plan to launch a 10-core part, but you are probably right.
If the shortage is really that bad, they'd limit that part to the i9 line only, meaning that for the first time there won't be a core-count overlap among the tiers, so each tier can have HT without ending up with more threads than the next one up.
Still, it won't do anything to counter AMD's physical core count advantage. At least i3 to i7 won't be so crap for productivity workloads anymore.
sgeocla - Saturday, October 26, 2019
Just look at the 7700K (4 cores, $350), the 8700K (6 cores, $370), and the 9700K (8 cores, $385); also consider the HEDT line run over by Threadripper, and the notebook chips that also had to increase core counts.
And also, 14nm++ is 20% less dense than 14nm to allow for higher clocks.
eddman - Saturday, October 26, 2019
Uh, I was agreeing with you on the shortage part.
14++ has a different gate pitch, but the dies aren't bigger. Both Kaby and Coffee quad-cores have the same ~126 mm² size, at least according to WikiChip.
brantron - Friday, October 25, 2019
8-core Coffee Lake R isn't much bigger than dual-core Haswell. Desktop sales have also declined since Intel increased production, but the shortage continues.
The trouble is that the largest Cascade Lake die is about 700 mm². While that is unchanged over the last two years, the Amazons and Googles of the world haven't stopped gobbling them up.
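For a rough sense of why a die that size hurts capacity, here is a minimal Poisson yield sketch; the defect density used is purely illustrative, not a real Intel figure:

    import math

    def poisson_yield(die_area_mm2, defects_per_cm2=0.1):
        # Poisson model: probability that a die has zero killer defects.
        return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

    print(round(poisson_yield(126) * 100))  # ~88% of ~126 mm^2 dies are good
    print(round(poisson_yield(700) * 100))  # ~50% of ~700 mm^2 dies are good

At the same defect density, the big server die loses about half its candidates outright, and every lost 700 mm² die takes far more wafer area with it than a lost quad-core does.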
sgeocla - Saturday, October 26, 2019
You know 14nm++ is 20% less dense than the original 14nm to allow higher clocks, right?
The "gobbling up," as you call it, is due to all the security vulnerabilities that forced datacenter operators to place large orders to regain lost capacity.
Intel may have screwed them in the short run, but those relationships have deteriorated, and Google, MS, and Amazon see that Intel is taking all the profits and buying back shares and paying out dividends instead of investing in making its products more efficient (new nodes), more secure, and more performant.
Ashinjuka - Friday, October 25, 2019
I can't wait for the tell-all memoirs about the 10nm debacle to start coming out. I feel like years from now, once all the details come to light, this is going to be a huge cautionary tale for the tech industry, and a punchline for the ages.
scholztec - Friday, October 25, 2019
Adding all those plus signs to 14nm is hard... 14++++++++++++++...
TheJian - Tuesday, October 29, 2019
25% more wafers at 14nm. I'm confused, as I just read another Intel article here saying Intel has basically EOL'd 14nm... Hmm... Reporting these days... I'd say 14nm has a few years left, and NOT on purpose from Intel... ROFL. You don't tool up just to shut it down, and 10nm still has issues, as shown by the absence of any large dies or high-wattage parts yet.
And I think the average consumer is a lot smarter today. It is EASY to read a review and find the watts/heat/noise to your liking without doing anything more than reading a few pages. It used to be hard to get data on whatever you're buying. Not today; the data you want is on dozens of sites reviewing product X. I may read a dozen or more before buying, but I fully believe even the average consumer reads at least ONE review of anything they consider pricey. I consider the HEAT of everything before I buy because of where I live. It is not because of how advanced I am in tech; it is simply because I'd like to game in the summer too without dehydrating at the desk... LOL.