Stochastic - Saturday, February 11, 2017 - link
If GF moves forward with 7 nm in 2018 as planned while Intel takes its time rolling out 10 nm, perhaps AMD actually has a chance and can begin to close the efficiency gap with Intel over the next few years. At least one can hope.
SaturnusDK - Saturday, February 11, 2017 - link
You probably mean AMD will widen their efficiency lead. RyZen coming out early March is going to be on par with or better than Skylake.
Michael Bay - Sunday, February 12, 2017 - link
Only if you're gullible enough to believe AMD marketing still, after all those years of failure. Why should intel worry about another dud?
ddriver - Sunday, February 12, 2017 - link
It is not all that hard to make a chip, and it involves no luck. It will perform the way it is designed. AMD sucked for a long time because their designs sucked. They didn't really target performance, they didn't widen the architecture, they didn't increase throughput. I have no idea why they didn't; they easily could have done it at any point, but they didn't. Even well before you put it into silicon, you can accurately simulate how it will perform. Naturally, process is important too, and sure, intel has traditionally had an ample process advantage, but process was never amd's biggest problem. Their designs just weren't ambitious.
Most likely they deliberately ran the company into the ground, to make its shares as cheap and undesired as possible. I am willing to bet certain people bought a LOT of amd stock prior to the announcements and demos of zen. The share price fluctuation will likely move a lot more money than selling actual products would. After all, it does seem that amd's sole purpose of existence is to make intel look less like the monopoly it de facto is. If that is the case, it is understandable that their designs would be intentionally subpar, with a good design only every once in a while to create the illusion of hope and competition, while secretly using those to make tons of money fast and easy by manipulating share prices.
Nagorak - Sunday, February 12, 2017 - link
I think it was just bad management. Bulldozer was a bad architecture, but rather than going back to the drawing board they milked it for four years. That was what almost ran them into the ground. They should have just gone ahead with a new processor immediately, rather than trying to refine something that hadn't turned out to be very good.
Michael Bay - Monday, February 13, 2017 - link
I'm pretty sure they had a lot of healthy ambition back in the P4 days. But where did it go after that?
psychobriggsy - Monday, February 13, 2017 - link
AMD tends to be over-optimistic in predicting new trends.
Bulldozer, for example, was not a bad design, but to make use of it you needed software that could make full use of all the cores/threads it made available, because single-threaded performance was sacrificed for this. It also needed a better process. Zen's timing for octo-core in the mainstream is far better.
Another thing was cutting FP throughput because they had GPUs. It's taken a lot longer than expected for FP workloads to move to GPUs, and Bulldozer's weak FPU (at least on a per-core basis, it wasn't so bad on a per-module basis in later iterations) lost it benchmarks.
And finally, the total failure of 20nm to happen.
Despite all this, and some poor management before Su got in, AMD made Carrizo and Bristol Ridge, which are excellent general-purpose APUs. And being stuck on 28nm made AMD develop their power saving technologies, which surely contributes to having a 65W octo-core Zen offering now they've hit 14nm.
And they recognised the issues with Bulldozer fairly early, bringing in Keller for Zen.
ddriver - Tuesday, February 14, 2017 - link
Nope, bulldozer was plain out dumb. Sharing SIMD cores is DUMB. That is where 99% of the performance that matters is: graphics, multimedia, pretty much anything new since the days of DOS. The ALU units are just for control, and you don't benefit from having more of those slow control threads when they don't have the capacity to push data to the SIMD units.
You need a wider processor in order to push more data through it, and how fast a processor is depends on how much data you can push through it. Adding "cores" that don't have their own dedicated SIMD units gives you little to no increase in CPU throughput.
Bulldozer's design was stupid, and I personally have my doubts that engineers can be that stupid by accident. It is far more likely it was a deliberate decision. You don't even have to be a digital logic engineer to be able to tell that it would have sucked. A basic understanding of how processors work and how data is crunched suffices. It would be generous to even call it a half-assed design; it was really more like a hundredth-assed design.
There is probably a pattern to when amd is allowed/supposed to make a good design, maybe even a formula too. Something about intel making n amount of money over x years on mediocre, barely incremental products, adjusted for inflation, until amd gets to be competitive. Someone should definitely look into it. Don't be so gullible; the industry is far less about competing against each other than it is about cooperating to suck money out of the chumps. And guess what, something tells me them "regulatory bodies" will not be looking into it, ever! Oh that's right, who they are, who made them and who pays them explains it all.
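A back-of-the-envelope sketch of the shared-FPU point argued above: per-thread peak floating-point throughput when one SIMD/FMA unit is shared between two integer cores versus when each core has its own. The pipe counts, widths, and clock below are rough assumptions loosely modelled on public descriptions of a Bulldozer module and a Zen core, not vendor specifications.

```python
# Back-of-the-envelope peak double-precision FLOP/s per thread.
# Pipe counts, widths and clock are rough assumptions loosely modelled on
# public descriptions of a Bulldozer module vs a Zen core, not vendor specs.

def peak_dp_gflops(simd_bits, fma_pipes, ghz, threads_sharing_fpu):
    """Peak DP GFLOP/s available to one thread of a (possibly shared) FP unit."""
    lanes = simd_bits // 64                    # 64-bit doubles per SIMD register
    flops_per_cycle = lanes * 2 * fma_pipes    # an FMA counts as 2 FLOPs per lane
    return flops_per_cycle * ghz / threads_sharing_fpu

# Module-style: two integer cores share one FPU with ~2 x 128-bit FMAC pipes
shared  = peak_dp_gflops(simd_bits=128, fma_pipes=2, ghz=4.0, threads_sharing_fpu=2)
# Zen-style: each core gets its own ~2 x 128-bit FMA throughput
private = peak_dp_gflops(simd_bits=128, fma_pipes=2, ghz=4.0, threads_sharing_fpu=1)

print(f"per-thread peak, shared FPU : {shared:.0f} GFLOP/s")
print(f"per-thread peak, private FPU: {private:.0f} GFLOP/s")
```

At equal clocks the shared arrangement simply halves the per-thread peak; real workloads land somewhere in between, since two threads rarely both saturate the FPU at once, but it gives a sense of the gap FP-heavy benchmarks exposed.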
BurntMyBacon - Monday, February 13, 2017 - link
@ddriver: "... sure intel has traditionally had ample process advantage, but process was never amd's biggest problem."The original Phenom architecture was hamstrung with too little cache and competed poorly against Intel's Core 2 architecture. Phenom II was built on a better process with a more appropriate amount of cache and some other related optimizations, but was otherwise remarkably similar to Phenom. It competed quite well with Core 2, but not the Core i series. This case is a counterexample that disproves your statement.
I will, however, concede that there are several periods of time (notably the era of bulldozer and derivatives) where design and/or management presented themselves as greater failings than their process technologies. That said, there are also periods where the largest failing is uncertain. For instance, what would AMD look like today if they had launched K8 on a higher performing or, more importantly, better yielding process. If they were better able to meet demand when they had the clear lead over Intel, they would have had more funding for R&D and investments and may not have made some of the seemingly desperate and possibly short sighted decisions that they've had to make. Of course, other factors came into play, but it's hard to say what the biggest factor was there.
TheMightyVoice - Tuesday, May 26, 2020 - link
This comment aged well.
vladx - Saturday, February 11, 2017 - link
Samsung/GF 7nm is only comparable with Intel's 10nm.
lilmoe - Saturday, February 11, 2017 - link
Go ahead and keep saying that if it makes you feel better. Now, did Intel decide to go EUV with their "10nm" yet?
vladx - Saturday, February 11, 2017 - link
Well Intel said EUV will be used only for <=7nm, but that doesn't change the fact that Intel's process is denser than the competition.
Wilco1 - Saturday, February 11, 2017 - link
Wrong. TSMC 10nm is 20% denser than Intel 14nm. GF 7nm is denser than Intel 10nm. See http://m.eet.com/images/eetimes/2017/01/1331136/1-...
Even in the case where Intel does have a raw transistor density advantage it turns out actual designs from TSMC and GF have a higher density (this was widely reported both for AMD Ryzen and Apple Fusion SoC - Intel even came out with a marketing slide that claimed Apple was using too many small transistors...).
vladx - Saturday, February 11, 2017 - link
Read again, I said Intel's 10nm is only comparable with GF's 7nm, not better. When I said Intel's process is denser I was comparing 7nm to 7nm, 10nm to 10nm, etc. And your own image confirms it.
vladx - Saturday, February 11, 2017 - link
And you can't compare AMD's upcoming Ryzen CPU's size with Intel's, because as you probably know Ryzen doesn't have integrated graphics anymore like previous generations, so it's not an apples-to-apples comparison.
Wilco1 - Sunday, February 12, 2017 - link
Nothing is ever a perfect comparison; it still wouldn't be even if Ryzen had a GPU. But the fact is Intel's designs don't use the small transistors as much as everyone else's (certainly not now that their new 14nm process is even less dense).
BurntMyBacon - Monday, February 13, 2017 - link
@Wilco1
Yes, Intel only uses their smallest transistors in critical paths. The smallest transistor isn't always the best option depending on your goals. Higher leakage currents sometimes make them less desirable for power efficiency. High capacitance loads require larger transistors to drive them; smaller transistors struggle here and reduce the speed at which the transistors can switch. Adding more logic paths that use a single signal, for instance, increases the load that its associated output transistor has to drive. It isn't surprising that a feature-heavy, complex design like Intel's would need some larger transistors to maintain performance goals. A different architecture might allow for smaller transistors and denser arrangements, but there is a trade-off in performance. We'll have to wait and see who made the smarter trade-offs.
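To make that sizing trade-off concrete, here is a toy RC-delay sketch: a wider driver has lower on-resistance, but the load it must drive grows with fan-out, so high fan-out nets push you toward larger transistors even when density suffers. Every constant below is an arbitrary illustrative value, not process data.

```python
# Toy RC model of why high fan-out nets get wide (larger) driver transistors.
# All constants are arbitrary illustrative values, not process data.

R_UNIT = 10e3      # on-resistance of a minimum-width driver, ohms (illustrative)
C_GATE = 0.1e-15   # input capacitance of one downstream gate, farads (illustrative)
C_WIRE = 0.5e-15   # fixed wire capacitance on the net, farads (illustrative)

def delay_ps(drive_width, fanout):
    """RC delay: driver resistance falls with width, load grows with fan-out."""
    r = R_UNIT / drive_width
    c = C_WIRE + fanout * C_GATE
    return r * c * 1e12

for w in (1, 2, 4, 8):
    print(f"driver width {w}x: fan-out 2 -> {delay_ps(w, 2):5.2f} ps, "
          f"fan-out 16 -> {delay_ps(w, 16):5.2f} ps")
```

Shrinking the driver in this toy model makes the high fan-out net proportionally slower, which is the basic reason critical paths get the big transistors.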
ddriver - Sunday, February 12, 2017 - link
Barely anyone willing to buy a high-end CPU cares about integrated graphics. It is trash. As far as I am concerned, it is just wasted space and power. I'd take 50% more cores instead of the lousy igpu any day.
eldakka - Sunday, February 12, 2017 - link
Agreed - mostly.
There are many situations (headless media servers, or other types of headless servers, or servers that just don't have room and/or need for a dGPU) where having an iGPU would be useful for being able to whack a monitor on occasionally if you need direct access/console-type access.
But in that case, the iGPU could be about 1/16th of what they have in there currently; hell, it could probably be equivalent to a late-90s card like a Matrox Millennium G200 (with its huge 16MB of VRAM).
psychobriggsy - Monday, February 13, 2017 - link
Well we know one thing: 8MB L3 SRAM on Intel is 19mm^2, 8MB L3 SRAM on AMD is 16mm^2.
It's unlikely that they differ much in configuration; it's either 6T or 8T SRAM for both of them, given the frequencies it has to run at (we do have some evidence that Intel's L3 SRAM throttles at a certain level though; maybe AMD's will too, we'll find out).
BurntMyBacon - Monday, February 13, 2017 - link
@psychobriggsy: "Well we know one thing: 8MB L3 SRAM on Intel is 19mm^2, 8MB L3 SRAM on AMD is 16mm^2. It's unlikely that they differ much in configuration, ..."
http://www.eetimes.com/document.asp?doc_id=1331317...
Standard 6T SRAM cell:
Zen - 0.0806 µm^2
Competitor A - 0.0588 µm^2
Given Zen's standard 6T SRAM cell is 37% larger than Competitor A's (Skylake), I'd say it is in fact very likely that they differ in configuration. There is almost certainly a design trade-off going on here, and we'll have to wait and see whether Intel or AMD made the smarter trade-off. That does not, however, make the processes equal. All feature sizes listed in the chart are smaller on the Intel process (CPP, Fin Pitch, Metal Pitch, and 6T SRAM).
That all said, process advantage doesn't have the same meaning that it once did. The power advantage is offset somewhat by higher leakage as transistors get smaller. The frequencies seem to have hit a brick wall. The space (cost) savings are offset a fair bit by the more expensive processes and poor yields. Sure, it can be painful being several processes behind (see the AMD FX series), but being half a node or a full node behind isn't nearly so meaningful.
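A quick sanity check on how those bit-cell figures relate to the 16/19 mm^2 L3 macro areas quoted a few comments up; this is just arithmetic on the numbers already in the thread (treating 8 MB as 2^23 bytes and ignoring tags/ECC), not die-shot data.

```python
# Sanity check: how much of the quoted 8 MB L3 macro area is raw 6T bit cells?
# Cell sizes are the chart's figures (um^2 per bit); macro areas are the
# 16/19 mm^2 numbers quoted earlier in the thread.

BITS_8MB = 8 * 1024 * 1024 * 8            # 67,108,864 bit cells

for name, cell_um2, macro_mm2 in [("Zen",     0.0806, 16.0),
                                  ("Skylake", 0.0588, 19.0)]:
    array_mm2 = BITS_8MB * cell_um2 / 1e6  # 1 mm^2 = 1e6 um^2
    print(f"{name}: raw cell array ~{array_mm2:.1f} mm^2 of a {macro_mm2} mm^2 macro "
          f"({100 * array_mm2 / macro_mm2:.0f}% bit cells)")
```

By this rough count only about a fifth to a third of each macro is raw bit cells, with the rest going to tags, ECC, sense amps, and routing, which is one reason bit-cell size alone doesn't settle whose L3 is "denser".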
Wilco1 - Sunday, February 12, 2017 - link
Well you replied to a post that suggested that AMD was catching up - "closing the gap" - so "only comparable" suggests that GF 7nm is still worse than Intel 10nm, when in reality it is leapfrogging Intel by 30%. And if Intel is as slow with rolling out 7nm as it was with 10nm, GF will enjoy that lead for several years.
There is no point in comparing 7nm to 7nm given that Intel will be last with 7nm. So who has the best (densest) process right now? Today both Samsung/TSMC are well ahead of Intel. Later this year Intel and TSMC will be almost equal. That's the reality.
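For readers wondering where percentage claims like these come from: a common rough proxy is that logic density scales with 1 / (contacted poly pitch x minimum metal pitch). The sketch below uses widely reported Intel 14nm and 10nm pitches; the foundry row is deliberately left as a placeholder to be filled in from the chart linked above, and none of it should be read as vendor data.

```python
# Crude density proxy: logic density scales very roughly with
# 1 / (contacted poly pitch x minimum metal pitch). The Intel pitches are
# widely reported public figures; the foundry row is a placeholder to fill in
# from the linked chart. None of this is vendor data.

nodes = {
    "Intel 14nm":  (70, 52),        # (CPP nm, MMP nm)
    "Intel 10nm":  (54, 36),
    "Foundry 7nm": (None, None),    # fill in CPP/MMP from the chart
}

baseline = 1.0 / (70 * 52)          # normalise to Intel 14nm
for name, (cpp, mmp) in nodes.items():
    if cpp is None or mmp is None:
        print(f"{name}: fill in pitches to compare")
        continue
    rel = (1.0 / (cpp * mmp)) / baseline
    print(f"{name}: ~{rel:.2f}x Intel 14nm areal density (by this proxy)")
```

Actual achieved density also depends on cell track heights, fins per cell, and how aggressively a design uses the minimum pitches, which is why die-level comparisons (like the Ryzen vs Skylake SRAM numbers above) can diverge from simple pitch math.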
BurntMyBacon - Monday, February 13, 2017 - link
If I'm reading that correctly, the chart shows Intel 10nm = TSMC 7nm and Intel 7nm = TSMC 5nm. They are not just close, they are right on top of each other. That said, there are a lot more physical size features to compare than just metal pitch and poly pitch. Beyond size, there are also doping consistency across the wafer, power leakage (a combination of individual element properties), defect rate, and a large list of other intrinsic properties that differ between processes and can make one process more desirable than another. For example, the Apple A9 SoC is dual-sourced from TSMC (16nm) and Samsung (14nm). Despite being marginally larger on the TSMC 16nm node, the chip apparently has (marginally?) better power characteristics than the Samsung equivalent. Samsung has since released an updated 14nm node that likely eliminates or possibly even reverses the disparity.
SaturnusDK - Saturday, February 11, 2017 - link
If you mean in terms of not existing in a product yet... you're right.
vladx - Saturday, February 11, 2017 - link
Everyone wants to delay the inevitable, not just Intel or GF.
Nagorak - Sunday, February 12, 2017 - link
That's what has been claimed, but if AMD is really catching up to Intel with Ryzen, then it's not looking like it's accurate. Supposedly Intel's 14nm is also better, but then how can Ryzen be on par (granted we haven't seen it yet)? I say if Ryzen is basically on par with Skylake then the process advantage will be revealed to be false. And if Intel has lost the process advantage they could be in for a rough time.
psychobriggsy - Monday, February 13, 2017 - link
In some aspects, maybe. But Intel's 14nm has its own quirks. In the end, it appears that GF 14nm and Intel 14nm achieve similar overall transistor densities at the die level. Intel has smaller transistors; GF has many 2D low-level metal layers that enable better packing of their larger transistors.
Robert Pankiw - Saturday, February 11, 2017 - link
I understand that a larger node can mean cost savings for clients not needing the most cutting-edge technology, and as the author mentioned somewhere, it is easier to make planar than it is to make FinFET. I can't find the exact quote on my phone. But what gets me is making a new fab on 180/130nm. Surely a smaller node, but still planar, would be better. More chips per wafer must mean more capacity and higher revenue, not to mention the benefits to the client of getting something more energy efficient. What is it that I am not quite getting?
Disclaimer: I am not in any way deeply knowledgeable about this industry. I don't even know what many of the acronyms mean.
vladx - Saturday, February 11, 2017 - link
Because there's a lot of stuff like embedded ICs, sensors, and probably even industrial robots and PLCs running on 130/180nm nodes. Not every device in the computing world needs the latest tech available.
SaturnusDK - Saturday, February 11, 2017 - link
And in a fair few applications it's unwelcome. 130/180nm is for applications requiring higher voltage, higher amperage, a larger temperature tolerance range, etc.
name99 - Saturday, February 11, 2017 - link
Apart from the issues raised above, there is also the issue that design and mask costs rise as you go smaller. If you're creating something that you don't expect to sell in the millions, then your total cost (including the one-time costs of design and masks) may be lower, even on a less dense process.
I have a friend who works on a system that is essentially like a dot-matrix printer for chip manufacturing. Rather than using masks for lithography, it scans a laser over an array of micro-mirrors that each control one 100um^2 or so "pixel" on the substrate. It's slow as heck (six hours or so for one pass over the wafer) BUT it's ideal for certain types of prototyping and certain situations (think e.g. military, or space, or manufacturing something to control a scientific experiment) where all you care about is manufacturing one or five or maybe fifty chips and that's it.
It's a big world, with a MASSIVE range in product volumes, from hundreds of millions at the iPhone SoC level, down to single digits. Those less popular chips also need to be fabbed somehow!
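The volume economics described in the comment above can be sketched in a few lines: amortize the one-time design and mask (NRE) cost over unit volume, and the older node wins easily at small runs. Every dollar figure, die count, and yield below is a made-up illustrative magnitude, not real foundry pricing.

```python
# Amortised cost per chip at a mature node vs a leading-edge node.
# Every figure here is a made-up illustrative magnitude, not real pricing.

def cost_per_chip(nre, wafer_cost, dies_per_wafer, yield_frac, volume):
    """Total cost per good chip: one-time NRE plus wafer spend, over volume."""
    good_dies_per_wafer = dies_per_wafer * yield_frac
    wafers_needed = volume / good_dies_per_wafer
    return (nre + wafers_needed * wafer_cost) / volume

mature_node  = dict(nre=2e6,  wafer_cost=3_000, dies_per_wafer=400,  yield_frac=0.9)
leading_edge = dict(nre=50e6, wafer_cost=8_000, dies_per_wafer=1600, yield_frac=0.7)

for volume in (10_000, 1_000_000, 100_000_000):
    old = cost_per_chip(volume=volume, **mature_node)
    new = cost_per_chip(volume=volume, **leading_edge)
    print(f"{volume:>11,} units: mature node ${old:8.2f}/chip, "
          f"leading edge ${new:8.2f}/chip")
```

With these placeholder numbers the mature node is more than 20x cheaper per chip at 10,000 units, and the leading edge only pulls ahead once volume reaches the tens of millions, which is the basic argument for keeping 180/130nm fabs busy.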
Alexvrb - Saturday, February 11, 2017 - link
Yeah, some of these processes are better for building "tough" chips. Automotive and industrial applications have to be more robust and resistant to high temps, voltage fluctuations, etc.
They may also be using these for hardened chips for military and aerospace applications. I believe GF also has a SiGe process which may be better suited for these types of applications.
jimjamjamie - Friday, February 17, 2017 - link
This is what I came to the comments for. Thanks.
SquarePeg - Saturday, February 11, 2017 - link
Very basic products like children's toys, cash registers, credit card readers, etc. will use these. As I understand it, even an older process node such as 28nm makes these basic types of processors so small that they become very difficult to work with during product manufacturing. It's easier/cheaper to work with a chip that's a quarter the size of a postage stamp than one the size of a grain of sand.
Cygni - Saturday, February 11, 2017 - link
I'm guessing here, but I imagine that much like a lot of other Chinese production stories (i.e. cars, planes, etc.), the 180/130nm gear may be old equipment and tooling that GloFo is going to move, as opposed to brand new equipment they will be buying.
tygrus - Saturday, February 11, 2017 - link
What is GF doing holding onto all the old equipment for >28nm? You may need some for non-CPU/non-GPU parts, but that doesn't seem like much capacity for the small stuff.
Alexvrb - Saturday, February 11, 2017 - link
Because there are applications for it. Cheap chips, tough chips, small production runs, etc. See various posts above.
Nagorak - Sunday, February 12, 2017 - link
Obviously there is demand for it or they wouldn't be doing it. They aren't completely dumb.
psychobriggsy - Monday, February 13, 2017 - link
These older processes are massively popular, and can form a large part of a fab company's profits. TSMC, GF, Samsung, UMC, etc. - they all have masses of older-node fabs because they make money.
Leading edge fabs are actually a minority of production - they just happen to fab what we are interested in, which is why we're reading this article. A third-party fab can keep old processes going, and keep making money on fully-paid-for equipment, for quite some time - new processes these days tend to need new fab buildings (Intel's 7nm in Texas, for example) anyway.
Haawser - Sunday, February 12, 2017 - link
22FDX is going to be huge. 14nm performance for 28nm cost, with lower power than 14nm and better analog/RF than FF? Give it a couple of years and every IoT device, wearable, and low- to mid-range phone/tablet will be using it. That's why it's being built in China.
Intel's dreams of becoming a 3rd-party fab supplier for those ^ are just that, dreams. Because nobody is going to buy expensive-to-produce (and even more expensive to design for) 10/14nm FF when they can get cheap 22FDX instead.
Michael Bay - Sunday, February 12, 2017 - link
You have to be crazy at this point to take anything that GloFo says at face value. It's always going to bury everything else, but then somehow fails to materialize, time after time.
prisonerX - Sunday, February 12, 2017 - link
On that basis we can ignore you then.
Michael Bay - Monday, February 13, 2017 - link
You're just like GloFo, I love it.
Nagorak - Sunday, February 12, 2017 - link
The fact that they're spending serious money to expand their fabs suggests they actually are making quite a bit of money, and are selling plenty of chips.
psychobriggsy - Monday, February 13, 2017 - link
So far, the promises regarding FD-SOI appear to be holding up. Multiple companies are doing it (Samsung 28nm FD-SOI, GlobalFoundries 22nm FD-SOI, ST, ...).
And there are basic reasons why - it's planar (cheaper), it has fewer mask steps (cheaper), FD-SOI is low-power, loads of RF works on it, and so on. It really appears to be the high-end IoT dream process, and that's why the customers are lining up. They've clearly seen enough evidence to go for it - all you have for your judgement is a general dislike of GlobalFoundries.
Haawser - Thursday, February 16, 2017 - link
Yep. Companies interested in GloFo 22FDX = ~60 and counting. Probably more if you count Samsung's FD-SOI too.
Companies queuing up to use Intel's ridiculously expensive 10/14nm FF = none?
So no, Michael Bay, Intel are *not* going to win any contracts for IoT, wearables, mainstream mobile, ULP, thin-and-light, etc. You are simply unaware of what's actually going on. Because you're a troll, not an expert.
mmusto - Tuesday, February 14, 2017 - link
Just curious about this statement in the article stating GF is "preparing to start high-volume manufacturing (HVM) of chips using its 7 nm FinFET technology in the second quarter of next year (so, several months ahead of the plan)". This is the first I have heard about a move in their roadmap. Can you elaborate on the updated timeline? Is risk production still planned for 1H 2018 as well (it seems mutually exclusive to be in risk production and HVM simultaneously)? I didn't think they were even taping out this year, as the PDK is only getting to v0.7 this summer. Any insight into what you learned, and how, would be helpful.