TSMC is telling Intel that there is no way Intel can catch them. (I wonder how the performance of AMD CPUs on N3 will compare to Intel's CPUs on 14/10/7 nm.)
I used to consider ARM not competitive with Intel and AMD, but ARM is running around unchecked, rapidly moving towards desktops while maintaining their iron grip on mobile and embedded.
Right. As if AMD or Intel doesn't produce or use any ARM CPUs. Every Ryzen CPU utilizes an ARM core (for security). Intel uses ARM SoCs in their FPGA products.
Agreed. They are losing market share fast in servers and in mobile applications like notebooks. Desktop is in x86 hands so far, but it is only a minor market.
Wonder if AMD and Intel would work together to create a "Lean86" version of x86 that gets rid of legacy features or only emulates them. It would free up a lot of die space. Also, I would love to see SVE on x86.
Lean x86 + SVE + bigLITTLE could help x86 extend its life or survive, but for now I see it dead by 2030.
What might save them too is Nvidia actually being able to buy ARM, pushing other ARM customers to transition to RISC-V instead. At least that helps until RISC-V kills both ARM and x86!
ARM has almost zero presence in both desktops and notebooks. WTF are you talking about? ARM has never been able to successfully break into high-power markets; they've had roughly the same luck as Intel getting into ultra-low-power markets.
"Lean86" is already here. The x86 decode logic, as has been discussed many times on many pages, is a tiny portion of modern core design, and instructions are already digested in small chunks (micro-ops) like on any other architecture. It isn't some big monolith that dominates cores like in the 486 days.
It's funny, I remember reading in 2010 how by 2020 ARM would be a major industry player. Qualcomm was making some noise about entering the server market, AMD bought SeaMicro and developed the K12 platform, then did nothing with it. Now the line is "x86 will be dead by 2030"? I can't wait to see how utterly wrong that prediction is as well.
Yeah, sure. Just like x86 couldn't do any useful work in the workstation days. What happened there? Just like laptops could never do any useful work. There too. Now we see Apple's M1, the worst chip Apple will ever produce for desktops and high-end tablets, run rings around most x86 laptops and even many desktops. It will only get worse for x86 as other manufacturers look at that and figure they need to produce better chips too. Qualcomm isn't sitting still either.
Ah melgross, I remember you making the same argument last time. Coming to your points: the M1 gets crushed by Ryzen processors with SMT. Granted, its one advantage is thin-and-light designs, but when the workload increases the M1 cannot compete.
"Run rings around most x86 laptops and even many desktops"? Many desktops, lol, what a massive pile of bs. Which desktop processor got beaten by the M1? Can you point out the benchmarks and real-world test cases? Will you say it beats a 7700K in real-world performance, or a 4790K in gaming? It doesn't even have a real GPU; an iGPU cannot do anything against a PCIe GPU.
The M1 has dedicated blocks for video encode and decode, which is why it gets good encoding performance against x86 processors that lack that fixed-function hardware. But that's nothing once NVENC comes into play; then it's game over. And if we add MATLAB and other consumer compute workloads, HT/SMT-based x86 processors will melt the M1, which is only fit for Apple first-party workloads tuned by Adobe, MS and other companies, for Final Cut Pro, and for running its joke of an iOS app ecosystem with jank (see LTT's new video on iOS apps running on the M1).
Qualcomm is not going to do anything; they abandoned the server idea when Centriq was axed and their custom uArch team was axed with it. The Snapdragon 888 still falls behind in AT's great SPEC graphs, and in real-world scenarios the gap has no impact at all: Snapdragon 888 phones versus iPhones have bog-standard identical response times when opening and using applications, which is what "real-world" workloads are.
"x86 couldn't do any useful work in workstation days"? In HEDT or enterprise workloads? Is that a joke, lol.
Keep shilling for Apple when they have no absolute advantage in the consumer space, especially given that the Mac makes up only a pathetic 10% of Apple revenue and the OS market share sits at 13%.
And ARM processors are custom, enterprise-centric designs with no use case for end users. What use do you have for an ARM processor when there isn't even one positioned to replace the x86 desktop processor? As for IT, the EPYC Milan 7763 is the king of the hill, and even a normal person can buy one and build a homelab. Speaking of home server setups, which processor would you buy? The M1? Haha.
And Xilinx / FPGAs are not going to sit idle; that's going to be AMD's biggest bet. Intel already has FPGAs, and that compute block can host customized workloads, which is how AI and other accelerators will be added to x86 CPUs.
I don't really understand why people shill for Apple and claim some magical bs. Before the M1 there was nothing for consumer ARM, yet I saw so many posts making BS claims that x86 is dead, written on x86 machines. Peak comedy.
What does the GPU have to do with it - It is not like you can't design a ARM CPU that connects to GPU over PCIe... Nvidia will do so and Imagination Technologies might also try to get into that space.
Even AMD might be ok with RDNA/CDNA + ARM combination. They will milk x86 but have less issues moving to ARM compared to Intel.
Intel is the one with the most to lose from ARM / RISC-V, as they hold the x86 keys and dominate the market.
Sorry, but x86 is past its prime. When I wrote "dead" I did not mean it would no longer exist, but that by then it will have lost significant market share and will keep losing it, basically marking its long-term decline. Why? Well:
Server / HPC: We can already see multiple companies pushing into the market, and ARMv9 provides quite a solid hardware base. From there, once software support increases, it is set. Look at how many companies have already started their own designs!
Mobile: How many users today use only a tablet and no real notebook anymore? How many popular games [especially in Asia] do not need massive GPUs / CPUs? Performance will be enough for both, and those users prefer battery life, and ARM. Once Microsoft stops screwing it up, its share will also increase fast.
Desktop: Yes, the share is low, and desktop will probably stay x86-dominated the longest. But the market is nothing compared to mobile and server.
And how many people will switch to the newest MacBook Airs and iPad Pros, taking lots of x86 share away, just because they like the battery life and performance of the newest M1 devices combined with an easy-to-use OS? It is simply enough for most everyday users and prosumers, apart from gamers!
x86 is anything but lean and smart if you compare SSE, AVX, AVX2 and AVX-512 to SVE... K12 might not have worked because the time wasn't ready for it yet, and AMD was starved for cash and couldn't push it. That is not the situation now.
Also, one thing about market share: we talk about desktop, server, and mobile.
With mobile we mean notebooks and tablets; smartphones are forgotten!
If you include those and check CPU / APU market share, you can already see quite a big potential share loss... And as I said, the performance of those tablets and phones is enough for most people who are not gaming- or professionally-oriented!
Lmao, server and mobile? Are you joking? The server space is over 90% x86-dominated by Intel, with AMD clawing at that share, and ARM restricted to AWS ONLY; nobody else offers ARM server hardware at the moment. There will be others in the future, yes, as of today's ARM Neoverse news. But once ARM has a solid position AND is beating x86, then we will talk again.
Oh, mobiles!! Amazing pieces of trash HW outselling desktops, what a revelation. Dude, we are talking about x86 performance and what x86 can do. Your iPhone or my Android phone cannot do anything remotely comparable to what x86 notebooks and desktops can do. You don't even have a filesystem to control, forget running applications, scientific or machine-learning workloads, or physics. Basically browsing the internet and using social media crap, and you are using that to define ARM dominance? What a joke.
"How many people will switch"? Lol, the Mac has 10% market share and about 10% worldwide OS share. What are you babbling about, the world flipping to Apple HW and SW in a fortnight or some magical moment? And the iPad cannot translate x86 code; it doesn't and can't, since it's not running macOS, period. Only M1 Macs can do that. Everything is in the future; your talk is not just dumb but a waste of time. Idk where all these Apple shills come from claiming x86 is dead, damn it.
What a load of bs is this "It is not like you can't design a ARM CPU that connects to GPU over PCIe... Nvidia will do so and Imagination Technologies might also try to get into that space.
Even AMD might be ok with RDNA/CDNA + ARM combination. They will milk x86 but have less issues moving to ARM compared to Intel."
Sigh, what is that? Imagination's IP was stolen by Apple, and the company was sold out to a Chinese firm. So far no eGPU for ARM exists, and why is Apple the only one? What about others? You have nothing else; before the M1 it was the same argument, and nothing came of it, and it's the same now. There is no product on the market offering consumer ARM CPUs paired with PCIe expansion slots; there's no PCIe ecosystem for consumer ARM CPUs, only the soldered M1, so how are you even thinking about a GPU, lol? And AMD is moving to ARM? WTH. Last time, Lisa Su said they have no plans even for big.LITTLE, forget ARM. There's one rumor today about Zen 5 APUs using big.LITTLE; that's a rumor, and it is aimed at competing with Intel's x86 big.LITTLE Alder Lake, not the M1 with its pathetic market share.
That SVE is going to be an earth-shattering moment of truth. Ah, I see, maybe it will make PS3 Cell emulation go nuts and wreak havoc on x86; let's see. What about Xilinx FPGAs in the x86 market? Nah, that's dumb since it's x86. And even though x86 is CISC, it uses RISC-like micro-ops underneath, just like ARM; the difference is that ARM processors have a wide front end, whereas x86 relies on high clock speeds and SMT. But let's ignore all that: x86 is junk, even while you rely on it. I bet you typed this on an x86 machine.
What do you think about homelabs? Nah, junk, M1 ftw, right? Better read up on what people can do on their old x86 Xeon hardware, not even new hardware. While for ARM, what do we have? The Pi, that's all: the only consumer-centric device that can run Linux natively and handle the same small home-server workloads. Nah, that's too much logic; let's only talk Apple and M1 and the never-ending bs of "x86 is dead, ARM is the future".
TSMC's 5nm and 3nm, although they call them full node advances, are more like between half and 3/4 nodes. Intel made one misstep, trying to bite off too much without EUV. So far all its manufacturing problems can be ultimately traced back to that. So we need to see what Intel can do with EUV - what will be the quality of its 7 nm process? - before declaring that TSMC has lapped Intel. Intel doubled its usage of EUV in its 7 nm process. It seems to have the pellicles it's been waiting for. We'll have a better idea in 2023 of where Intel stands.
Intel made a lot more than one misstep with 10nm. TSMC doesn't use EUV at all on N7, so if EUV were the only problem then TSMC ought to have had similar issues, given their comparable densities in practice.
That's a good point. I'm curious to see AMD's gross margins for this past quarter, which will be released this afternoon along with their earnings. Their Q4 2020 results showed no gross margin improvement over Q4 2019 despite a Y/Y revenue increase of 53% and an improved competitive landscape. This is very strange, and I assume has to do with the cost they are paying, and have been paying, for the 7 nm node. To me that implies that the 7 nm node is not all that great for high powered chips. But the foundry customers don't have a choice but to move to the newer node because TSMC's older nodes are stale. TSMC isn't developing them the way Intel chose to develop 14 nm. As for AMD, they still benefit greatly from picking up market share and having the same profitability as they would otherwise have with lower market share. Intel, on the other hand, which already has high market share, needs to maintain its margins and has a lot to lose financially by switching to a less profitable node that keeps them at 88% market share instead of 85%, or what have you. That's just the way the business and financials shake out for the two companies.
The situation is a lot more nuanced than "Intel's 10 nm is trash and TSMC's 7 nm is good." There are a lot of factors in play here. To me it's telling that NVIDIA chose Samsung's 8 nm for their gaming chips; they were able to leverage their architectural superiority to maintain their margins by avoiding TSMC's 7 nm node. We need to consider the total volume of high-powered chips being produced on the 7 nm node and their margins, compared with Intel's demand for high-powered chips and its margin requirements. We don't know how much more Intel could have charged for a 10 nm chip versus how much more it would have cost to make. And we don't know the HPC-only margin of TSMC's nodes. TSMC's gross margins are not extraordinarily high considering the capacity shortage, and they actually dropped a bit in 2019 before the shortage began.
Agreed. Intel will be back on track with EUV at 7nm and catch up with 5nm GAA. Their 10nm DUV process can compete with TSMC / Samsung 7nm. And if the data is correct, Intel's 7nm is closer to Samsung 2nm / TSMC 3nm than to TSMC 5nm. Samsung 5nm is totally off (don't know what happened there; probably because it is more of a 7nm LPP+ than a new design?). They might have focused on GAA and could come back big...
However, I see one issue: Intel may not get enough EUV scanners from ASML and may not have enough capacity.
TSMC is far ahead of Samsung and Intel in EUV orders and installed scanners. And now even Hynix and Micron will increase orders, so supply will be tight for years.
I guess EUV capacity, besides the maturity of 7nm, is one reason for Intel to bring a 10nm ESF (10nm+++) Raptor Lake as the Alder Lake successor.
With ASML banned from exporting to China though (a situation looking likely to only get 'worse'), that should at least relieve some of the stress that the shortage of ASML EUV scanners has caused
Chinese semiconductor manufacturers weren't yet ready to use EUV for production. They had bought 2 machines for development purposes. It's not that big of a difference. It's thought that SMIC's 16/14 nm still isn't that good at this point.
There are a lot of ways to calculate density... Believe whatever you want; one thing is for sure: Intel's 10nm is neither as high-volume as they want nor high-performance, and 7nm is nowhere to be found any time soon.
So perhaps read up on real sizes and differences in logic density, and make a proper calculation from a good site: https://en.wikichip.org/wiki/WikiChip
Can we be sure about Intel's density projections for 7nm? We still don't know much for sure about their density at 10nm - the 100Mtr/mm^2 was their original projection, and it's abundantly clear that the process never met those initial expectations. In practice they seem to be closer to 70Mtr/mm^2.
If 7nm is a full node over their *working* 10nm process then it would roughly match the density of TSMC's N5, which would be in keeping with Intel's historical advantage over TSMC at comparable node names (i.e. one full node ahead), but still leave them far behind TSMC's 3nm in practice.
Intel is not going to catch them; it's over. They lost their years-long advantage in all aspects, be it through Intel management problems, company culture, or whatever else.
Intel is simply going to use/steal their IP and build Intel parts, per their latest disclosure. So far their 10nm consumer processors aren't out yet to show performance, and granted, "N7" and these marketing terms don't do the actual technology much justice.
All in all, the shipping processors will show the real king, which right now is the EPYC 7763. I think we all simply have to wait and see what happens.
What happened to SOI (silicon-on-insulator)? Did it die because IBM and Global Foundries focused on it? Or, is there an unavoidable technical impediment that makes it too uncompetitive? I suppose I should do a web search to answer the question.
‘As far as capacity is concerned, TSMC is unchallenged and is not going to be for years to come.’
It's an odd sentence, but everyone else is so far behind, that it's probably true. Ramp up will take years for TSMC competitors, and that's to be where they are today, if they keep investing, the target moves further into the future.
It is 'odd' because a word is missing, not for its truthfulness: 'As far as capacity is concerned, TSMC is unchallenged and is not going to be <b>[challenged]</b> for years to come.' Probably it is what the intended meaning of the sentence was.
I also found it odd, but I think adding a second '--challenged' in the same sentence is problematic and kind of ugly. I think it would be cleaner to rephrase it as : 'As far as capacity is concerned, TSMC has no challenge and it is not going to have for years to come' or : 'As far as capacity is concerned, TSMC is unchallenged and that will remain the case for years to come'.
Silicon-on-insulator was a way to reduce power at the same frequency (lower capacitance hence lower current necessary to switch bits). Strained Silicon was a way to reduce resistance (same current needed to switch bits but lower heat losses). Both are useful, but they don't allow you to put more "bits" on the same area. And in the leading edge, he who has the most bits wins (as in, Intel's Rocket Lake has 8 cores on 14 nm whereas its equivalent Intel 10nm processor has 10 cores). More "bits" (logic area) allows you to - for example - run a larger GPU at a lower speed for similar performance at lower power (or at higher performance and higher power). Or fit - in the same area - your smartphone processor but with more cache, more "AI" cores, more GPU cores, ....
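The capacitance argument above is just the standard CMOS dynamic-power relation, P = α·C·V²·f. A minimal sketch with made-up illustrative numbers (not measured values for any real process or chip):

```python
# Standard CMOS dynamic (switching) power: P = alpha * C * V^2 * f
# alpha = activity factor, C = switched capacitance (F),
# V = supply voltage (V), f = clock frequency (Hz).
def dynamic_power(alpha, cap_farads, volts, freq_hz):
    return alpha * cap_farads * volts ** 2 * freq_hz

# Illustrative only: suppose SOI cuts switched capacitance by ~20%.
bulk = dynamic_power(alpha=0.1, cap_farads=1.0e-9, volts=1.0, freq_hz=3e9)
soi = dynamic_power(alpha=0.1, cap_farads=0.8e-9, volts=1.0, freq_hz=3e9)
print(bulk, soi, soi / bulk)  # power scales linearly with capacitance (~0.8x)
```

The same relation shows why SOI and node shrinks are complementary, as noted above: lower capacitance cuts power at a given frequency, but says nothing about how many transistors fit in a given area.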
Yes, but is it somehow incompatible with the leading-edge stuff? High density hasn't always been the most important factor. I remember reading that the choice between a high-density library and a high-performance library wasn't because the former is simply better for all CPUs, but because some CPUs (like Excavator, as I recall) were specifically designed to be cheap, small, lower-end parts.
The density improvement from a node shrink does not seem to be the same thing as the SOI benefit you describe. As you describe them, they seem complementary.
So, again I ask: is there something about SOI that inherently keeps it from being used these days at the leading edge? Remember that back in 2011 Global Foundries was using SOI for its leading-edge 32nm, and IBM did as well, as far as I know. So what happened? Did FinFET kill the tech by being incompatible and/or simply better, making SOI obsolete?
Odd to see a TSMC article that mentions Apple as a customer but not AMD, which is now as large as or an even larger TSMC customer than Apple. Strange omission, imo.
Although to us AMD seems big, it is still not even close to Apple. AMD sold 9-10 million 7-nm chips total in Q4 2020 - that includes PS5, Xbox Series S/X, Zen 2, Zen 3, and RDNA 1+2. Apple, on the other hand? They sold 13+ million iPhone 12s IN THE FIRST WEEKEND. Not sure what their total Q4 2020 sales were, but obviously far, far ahead of AMD.
You mean how AMD's chips are larger in die size, on average, than Apple's? Yeah, I would guess that closes the gap somewhat, but it still doesn't make much of a difference. If you were to combine all of Apple's 7-nm chips (iPhone XR/XS/11/11 Pro/SE 2nd gen, iPad Mini 5th, iPad Air 3rd, iPad 8th) and 5-nm chips (iPhone 12/12 Pro, M1-based Macs) for Q4 2020, it has to be over 100 million. If you assume each Apple chip averages 1/3 the size of AMD's, a reasonable estimate, that would still put them at over a 3:1 ratio in favor of Apple. There is just so much volume in the iPhone. All of AMD's businesses combined are about the volume of the iPad business for Apple (at a chip level).
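The 3:1 claim is easy to sanity-check with the thread's own rough figures; note that the unit counts and the 1/3 die-size ratio below are the commenter's assumptions, not official numbers:

```python
# Back-of-envelope check: Apple vs AMD leading-edge silicon area per quarter.
apple_units = 100e6  # assumed Apple 7nm + 5nm chips in Q4 2020 (thread estimate)
amd_units = 10e6     # assumed AMD 7nm chips in Q4 2020 (thread estimate)
apple_die_vs_amd = 1 / 3  # assumed average Apple die area relative to AMD's

ratio = (apple_units * apple_die_vs_amd) / (amd_units * 1.0)
print(ratio)  # ~3.33 -> "over 3:1" in Apple's favor, by area rather than units
```

As the replies below point out, wafer starts purchased (not chips shipped) are what actually matter to TSMC, so this is only an area-based illustration.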
The A14 is 88mm^2, the Zen3 compute die is 80mm^2. You would need a 24-core AMD CPU to get to your 1/3 number. Do we have stats on the average number of compute dies in AMD's CPUs?
I was referring to all 7-nm/5-nm IP from each. I think the majority of AMD's 7-nm production is going to PS5 and Xbox Series X consoles - each over 300mm². The next biggest is RDNA-2 cards - again over 300mm². After that will be enthusiast CPUs, which have two CPU complexes in them, or server EPYC processors, which have 2 to 4 complexes in them. And then finally after that will be things like mobile CPUs and lower-end processors with one complex.
These numbers are interesting, even plausible, but they have no meaning in this context. What counts are the wafer starts each company has bought. Even if you think Apple pays more than three times what AMD pays per wafer, Apple still gives TSMC more than three times the money AMD does. That is these companies' value as TSMC customers.
Apple sold around 200 million plus phones last year, plus about 55 million iPads and about 10 million Apple TV’s, plus about 10 million iPod Touches. Then they sold about 35 million Apple watches, and about 150 million other devices with Apple chips in them such as the various AirPods and Beats headphones. Also, at the end of the year, about 6 million new Macs with the M1 chips and other assorted Apple chips.
I’m likely missing a lot of other small chips TSMC produces for them such as video controllers, sensors and the like.
Comparing chips produced doesn't make much sense - AMD's chips are, reliably, significantly larger than Apple's. You need to compare wafer starts reserved, as that's what TSMC's customers are actually purchasing.
Probably because AMD is (still) a small customer for TSMC. You may be surprised to know that despite the quantities of CPUs/SoCs/GPUs/APUs AMD made in 2021, Intel's contribution to TSMC's revenue is only a bit smaller; in 2019 it was even bigger. AMD is 7% of TSMC's revenue, while Intel is 6%. Against Apple's 24% contribution, they are both dwarfs. That is why AMD is not named here. Ahead of it are Broadcom, Qualcomm and even Nvidia...
This year AMD is forecast to become TSMC's second-largest customer, thanks partly to the plan to produce Xilinx (now AMD) FPGAs on TSMC's advanced nodes. But that just means AMD will go from 7% to a bit more than 9% of TSMC's total revenue, which is still a dwarf against Apple's 25%+ and only a bit more than Intel (about 7%). It will, though, finally surpass Nvidia in the ranks of big TSMC customers, as it appears Nvidia will move even more production to Samsung than before (probably because TSMC's supply is going to be really too tight).
Apart from the numbers given here, what is missing is the forecast number of wafers each customer has requested from TSMC and how many TSMC is ready to supply. Having a good process (even the best one) but being unable to satisfy customers' supply needs means some of them will be disappointed nonetheless (a lost sale is lost money), and some may resort to a worse process with larger supply. See Intel and its immortal 14nm process: it may be behind TSMC 7nm, but it lets Intel satisfy any mass-volume request for its chips, something AMD cannot do. We have seen what this meant for AMD's notebook line (where are the 4000-series APUs and the 6000, and even 5000, mobile GPUs?). They may be good, even marvelous on paper, but if AMD can't supply the OEMs, those OEMs can only build and sell devices with Intel chips, leaving AMD waiting for TSMC to make new chips when it can (and losing money and potential market share).
And on top of this, given the capital investment TSMC is making in these new processes, I doubt they are going to be cheap. The time when AMD could fight Intel with the weapon of price dumping (losing money every quarter) has ended, and given AMD's much smaller gross profit compared to Intel's, I see a difficult period ahead for the former in trying to improve profitability while using the latest, most advanced process to stay ahead in all markets (CPUs and GPUs). Nvidia produces more while paying less. And Intel is going to run its 10nm Enhanced SuperFin process at full capacity at the end of the year. We will see how this race toward the best and most advanced node, just to stay afloat, pays off.
I guess it is less about capacity than about early adoption and price. Or a mix of both, as AMD might be the 2nd-biggest customer but is not as big as Apple [yet].
Also, Apple will always be the first adopter, as they can start a new node with their A-series mobile chips, which are small and do not need high frequencies, making them perfect for a less mature node. AMD, even with chiplets, might not be able to do that and can only follow...
Also, early capacity is low and prices are high, so Apple outpays AMD on new nodes, while AMD follows a year or two after Apple has moved on.
Remember, we are talking about the newest nodes, not about who is TSMC's most important customer.
No. Apple is, by far, their largest customer. Additionally, it's been mentioned in various places that Apple pretty much funded TSMC's 5nm node. That is likely to continue with the smaller nodes.
"and earning our customers' trust. The last point, customers' trust, is fairly important because we do not have internal products that compete with customer."
Now why on earth would he feel it necessary to push that point... (Will be interesting to see if this becomes a recurring theme in TSMC's communication.)
Because with Samsung mostly producing its own chips, and Intel opening its foundry to most others, there are questions as to who is really going to get the best technology from either: their own designs, which are therefore the most profitable for them, or a customer's, where they profit only from the manufacturing?
There is a question about Nvidia and their purchase of ARM over this very same concern, and it is one of the reasons why Apple sold its one-third share of ARM stock in the early 2000s (the other major one was financial).
"Following Intel's announced foundry comeback in March, TSMC’s willingness to set a 3-year $100 billion CapEx/R&D investment plan, starting from 2021, indicates its confidence to widen its foundry leadership,"
That $100B sounds impressive. Which is why I present to you: https://www.apple.com/newsroom/2021/04/apple-commi... WTF will Apple be spending all that money on? The items they actually list are chicken feed compared to that total. Even a few data centers barely move the needle.
Two obvious answers are - Apple car... I always assumed contract manufacturing in China, but maybe Apple has concluded that's now just too much risk?
- (relevant to this article) Are they fronting much of the money for TSMC's expansion? They did this years ago with Foxcon, buying masses of things like aluminum milling machines. I could see Apple providing the money for new fabs and EUV machines, in return for guaranteed first crack at them for many years, with some sort of gradual reversion of ownership to TSMC over time.
I don't know the numbers, but Apple has financed TSMC's 7nm and 5nm nodes. Not just with money directly but, as with other companies including Samsung, Sharp, Foxconn and a number of others both large and small, with actual machinery, worker training and more.
In return, Apple receives guaranteed first access, lower pricing and a partial say in where those companies take their R&D and production. This works out well as long as the companies can fulfill their part of the deal. They can use the machinery for other customers as long as Apple's needs are met.
Imagine a 2nm process where the transistor really is 2nm. It would be less than 10 atoms across. I wonder when quantum effects will kick in and make further shrinkage impossible...
It kicked in starting at 90nm. That’s why Intel and others rethought their approaches to chip design and went for more cores instead of trying for the 10-20GHz cores they were promising in a few years.
Now though, they’re really against the wall. It’s amazing they’ve been able to get as far as they have.
Transistors stopped shrinking at around 24nm. That's what FinFET and other tricks are about, raising a 3D transistor out of a much smaller trace. Most of these transistors are around 37nm wide at the throat.
But given that, TSMC's 7nm process does have its smallest traces at 7nm. They can't be very long or quantum effects destroy conductivity, but they are 7nm wide. The problem comes as you get smaller and smaller: a silicon atom is about 0.2nm wide, so each shrink extracts more and more quantum effects from the surrounding material, and ever more tricks like FinFET will be needed to negate them.
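Using the ~0.2 nm silicon-atom figure quoted above, the atom counts behind these feature sizes work out as follows (order-of-magnitude only; actual lattice spacing and drawn-versus-physical dimensions differ):

```python
# Roughly how many silicon atoms span a given feature width,
# using the ~0.2 nm atom diameter mentioned above (order of magnitude only).
SI_ATOM_NM = 0.2

def atoms_across(feature_nm):
    return feature_nm / SI_ATOM_NM

# "throat" width, smallest N7 trace, and a hypothetical true-2nm feature
for width in (37, 7, 2):
    print(f"{width} nm -> ~{atoms_across(width):.0f} atoms")
```

A hypothetical true-2nm feature would indeed be only about 10 atoms wide, which is why marketing node names have long since decoupled from any physical dimension.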
More importantly, each process shrink now doubles the cost of the fab, and costs are growing geometrically. I would be willing to bet that Samsung drops out of the cutting edge after the next phase. And then it will simply be a question of how much money Intel and TSMC have to keep advancing.
We are rapidly approaching a critical moment in the Semiconductor industry.
TSMC has no chill while Intel is still limping on 10nm. I think it's the start of the end for Intel manufacturing. Intel's foundry effort is just too late unless they choose to spend like crazy, which is unlikely, and TSMC can easily outspend Intel.
Intel won't overcome TSMC and in the longer term will be forced to go fabless. TSMC is pumping in $30B per year (and increasing), while Intel plans (with government help) to spend $20B over three years. Intel is too little, too late to compete with TSMC and will be increasingly forced to use TSMC, while its inferior fabs are used for less demanding ASICs and low-profit foundry services.
Do you also think Samsung is incapable of competing?
Between the US and EU, there is a lot of funding Intel can ask for — on the basis of not being entirely at the mercy of two firms in Asia, one that’s vulnerable to possible takeover by China.
Now that the US Government has considered fabs to be a strategic asset and is subsidizing Intel's domestic fab build out and outsourced foundry service, it's a good thing Intel didn't listen to investors calling for them to go fabless.
Duncan Macdonald - Monday, April 26, 2021 - link
TSMC telling Intel that there is no way that Intel can catch them.(I wonder how the performance of AMD CPUs on N3 will compare to Intel's CPUs on 14/10/7 nm.)
EthiaW - Monday, April 26, 2021 - link
Intel would better pray for China to launch the attack ASAP. Nothing else can save it from the AMD+TSMC juggernaut.danjw - Monday, April 26, 2021 - link
It seems to me, that the greatest threat to both Intel and AMD is ARM. Not either to each other.dullard - Monday, April 26, 2021 - link
Exactly. People here don't realize that AMD and Intel are on the same team. The true struggle is AMD + Intel vs ARM.bcronce - Tuesday, April 27, 2021 - link
I used to consider ARM not competitive with Intel and AMD, but ARM is running around unchecked, rapidly moving towards desktops while maintaining their iron grip on mobile and embedded.icedeocampo - Wednesday, May 5, 2021 - link
But if AMD does what it did with x64 on x86, we're all in for a treat - AMD can probably do a hybrid ARM/x64 chip.
MiauwMing - Wednesday, May 5, 2021 - link
Right. As if AMD or Intel doesn't produce or use any ARM CPUs. Every Ryzen CPU utilizes an ARM core (for security). Intel uses an ARM SoC for their FPGA products.
https://www.intel.in/content/www/in/en/products/sk...
Matthias B V - Tuesday, April 27, 2021 - link
Agreed. They are losing market share fast in servers and in mobile applications like notebooks. Desktop is in x86 hands so far, but it is only a minor market.
I wonder if AMD and Intel would work together to create a "Lean86" version of x86 that gets rid of legacy or only emulates it. It would free up a lot of die space. Also, I would love to see SVE on x86.
Lean x86 + SVE + big.LITTLE could help x86 extend its life or survive, but for now I see it dead by 2030.
What might save them too is Nvidia really being able to buy ARM, and other customers then transitioning to RISC-V. At least it helps until RISC-V kills both ARM and x86!
TheinsanegamerN - Tuesday, April 27, 2021 - link
ARM has almost 0 presence in both desktops and notebooks. WTF are you talking about? ARM has never been able to successfully break into high-power markets; they've had roughly the same luck as Intel getting into ultra-low-power markets.
"Lean86" is already here. The x86 decode, as has been discussed many times on many pages, is a tiny portion of modern core design and is already digested in small chunks like all other instructions. It isn't some big monolith that dominates cores like in the 486 days.
It's funny, I remember reading in 2010 how by 2020 ARM would be a major industry player. Qualcomm was making some noise about entering the server market, AMD bought SeaMicro, developed the K12 platform, then did nothing with it. Now the line is "x86 will be dead by 2030"? I can't wait to see how utterly wrong that prediction is as well.
bcronce - Tuesday, April 27, 2021 - link
I guess you haven't heard of Apple's M1 CPU that is going to be in nearly all of their devices? It uses ARM for the compute cores.
melgross - Tuesday, April 27, 2021 - link
Yeah, sure. Just like x86 couldn't do any useful work in the workstation days. What happened there? Just like laptops could never do any useful work. There too. Now we see Apple's M1, the worst chip Apple will ever produce for desktops and high-end tablets, run rings around most x86 laptops and even many desktops. It will just get worse for x86 as manufacturers continue to look at that and figure that they need to produce better chips too. Qualcomm isn't sitting still either. Keep living in your made-up world.
Silver5urfer - Tuesday, April 27, 2021 - link
Ah melgross, I remember you in the same context last time. Now coming to your points: M1 gets crushed by Ryzen processors with SMT. Granted, the only advantage is the thin-and-light BS, but when the workload increases that M1 cannot compete.
"Run rings around most x86 laptops and even many desktops"? Many desktops, lol, what a massive pile of bs. Which desktop processor got beaten by M1? Can you point out the benchmarks and real-world test cases? Will you say it beat a 7700K in real-world performance, or any 4790K in gaming or such? It doesn't even have a damn GPU, lol. A stupid iGPU cannot do anything vs a PCIe PEG GPU.
M1 has dedicated blocks for encode and decode, which is why they got good performance in encoding vs x86 processors, since those do not have that functionality built in. But that's nothing once NVENC comes into play; it's game over. And if we add Matlab and other compute workloads for consumers, the HT/SMT-based x86 processors will melt the M1 garbage, which is only fit for Apple first-party workloads made by Adobe, MS and other companies, their Final Cut Pro, and running their joke of iOS apps on it with jank (see LTT's new video on his iOS apps running on the M1).
Qualcomm is not going to do anything; they abandoned the idea of servers when Centriq was axed, and their custom uArch team was axed with it. The SD888 still falls behind in AT's great SPEC graphs, but in real-world scenarios it doesn't even have any impact. There's absolutely none: Qcomm 888 phones vs the iPhone have the same bog-standard response times in opening and using applications, which is what "real-world" workloads means.
"x86 couldn't do any useful work in Workstation days" HEDT or the Enterprise workloads ? Is that a joke lol.
Keep shilling for Apple when they have no absolute advantage in the consumer space. Especially given the fact that the Mac makes up only a pathetic 10% of Apple revenue AND the OS market share is at 13%.
And the fact is that ARM processors are simply custom and Enterprise-centric, with no use case for end users. What use do you have for an ARM processor when it's not even there to replace the x86 desktop processor? As for IT, the EPYC Milan 7763 is the king of the hill, which even a normal person can buy to build a homelab. Speaking of home server setups, which processor would you buy? M1? Haha.
And Xilinx / FPGAs are not going to sit idle; that's going to be the biggest bet for AMD. Intel already has FPGAs, and those compute blocks will handle customized workloads, which is where much AI and other functionality will be added to x86 CPUs.
I don't really understand why people shill for Apple and claim some magical bs. Before M1 there was nothing for consumer ARM, yet I saw so many posts with those BS claims that x86 is dead, written while using it daily. Peak comedy.
Matthias B V - Tuesday, April 27, 2021 - link
What does the GPU have to do with it? It is not like you can't design an ARM CPU that connects to a GPU over PCIe... Nvidia will do so, and Imagination Technologies might also try to get into that space.
Even AMD might be ok with an RDNA/CDNA + ARM combination. They will milk x86 but have fewer issues moving to ARM compared to Intel.
Intel is the one with the most to lose on ARM / RISC-V, as they hold the x86 keys and dominate the market.
Matthias B V - Tuesday, April 27, 2021 - link
Sorry, but x86 is beyond its prime. When I wrote "dead" I did not mean it would not exist, but that by that time it will have lost significant market share and will continue losing, basically marking its long-term decline. Why? Well:
Server / HPC: We can already see multiple companies pushing into the market, and with ARMv9 a quite solid hardware base is provided. From there, once software support increases, it is set. How many companies have already started their own designs!
Mobile: How many users even today only use a tablet and no real notebook anymore? How many popular games [especially in Asia] do not need massive GPUs / CPUs? Performance will be enough for both, and they prefer battery lifetime and ARM. Once Microsoft stops screwing it up, it will also increase share fast.
Desktop: Yes, share is low, and it will probably remain x86-dominated the longest. But the market is nothing compared to mobile and server.
And how many people will switch to the newest MacBook Airs and iPad Pros, taking lots of x86 away, just because they like the battery life and performance of the newest M1 devices combined with an easy-to-use OS? It is just enough for most everyday users and prosumers, gamers aside!
x86 is everything but lean and smart if you compare SSE, AVX, AVX2 and AVX-512 to SVE... K12 might not have worked because the time wasn't ready for it yet. But now it is, and back then AMD was also starved for cash and couldn't push it. That is not the situation now.
Matthias B V - Tuesday, April 27, 2021 - link
Also, one thing about market share: we talk about desktop, server, and mobile. With mobile we talk notebooks and tablets; however, smartphones are forgotten!
If you include those and check CPU / APU market share, then you can already see quite a big possible share loss... And as I said, the performance of those tablets and phones is enough for most people who are not gaming- or professionally-oriented!
Silver5urfer - Tuesday, April 27, 2021 - link
Lmao, server and mobile, are you joking? The server space is over 90% x86-dominated by Intel, with AMD clawing at that, and ARM is restricted to AWS ONLY. There are no others offering ARM HW at the moment; there will be in the future, yes, as of today's new ARM Neoverse news. But once ARM has a solid position AND is beating x86, then we will talk again.
Oh, mobiles!! Amazing pieces of trash HW outselling desktops. What a revelation. Dude, we are talking about x86 performance and the capabilities of what it can do. Your stupid iPhone or my Android phone cannot do anything remotely comparable to what x86 notebooks and desktops can do. You don't even have a filesystem to control, forget running applications and scientific or machine learning workloads or physics either. Basically browsing the Internet and using social media crap, and you are using that to define ARM dominance? What a fcking joke.
"How many people will switch" lol, Mac has 10% marketshare and 10% world wide OS netmarketshare, what are you babbling about the world flipping to Apple HW and SW in a fortnight or magical moment. And iPad cannot translate x86 code it doesn't and cannot since it's not running Mac OS period. Only M1 Mac can do that. Everything is in the future your talk is not just dumb but really a time waste, idk from where all these Apple shills come from and claim the x86 is dead, damn it.
What a load of bs is this "It is not like you can't design a ARM CPU that connects to GPU over PCIe... Nvidia will do so and Imagination Technologies might also try to get into that space.
Even AMD might be ok with RDNA/CDNA + ARM combination. They will milk x86 but have less issues moving to ARM compared to Intel."
Sigh, what is that? Imagination's IP was stolen by Apple, and they are sold out to a Chinese company; so far eGPU doesn't exist, and why is Apple the only one? What about others? Yeah, you don't have anything else; before M1 it was the same argument but nothing, and now it's the same. And there's no product on the market offering ARM CPUs paired with PCIe slots. There's no PCIe ecosystem for consumer ARM CPUs, which is the soldered M1 POS; how are you even thinking about a GPU, lol? And AMD is moving to ARM? WTH. Last time, Lisa Su said they do not have any plans for big.LITTLE, forget ARM. There's one rumor today about Zen 5 APUs using big.LITTLE; that is a rumor, and they are competing against Intel's x86 big.LITTLE ADL rather than the stupid M1, which has pathetic market share.
That SVE is going to be a ground-shattering moment of truth. Ah, I see, maybe it will make PS3 Cell emulation go nuts and wreak havoc on x86, let us see. What about the x86 Xilinx FPGA market? Nah, that's dumb since it's x86. And x86, even though it's CISC, uses RISC-like micro-ops underneath just like ARM; the thing is, ARM processors have a wide front end, whereas x86 uses high clock speed and SMT. But let's ignore all that; x86 is junk, all while you rely on it. I bet you typed this on an x86 machine.
What do you think about homelabs? Nah, it's junk, M1 ftw, right... Better read up on what people can do on their old x86 Xeon hardware, not even new. While for ARM, what do we have? The Pi, that's all: the only consumer-centric device that can run Linux code natively and do the same home-server-type compute for small workloads. Nah, that's too much logic; let's only talk Apple and M1 and the never-ending bs of "x86 is dead, ARM is the future".
29a - Wednesday, April 28, 2021 - link
Go away, douche bag, no one is going to read your wall of idiot text.
0razor1 - Tuesday, May 4, 2021 - link
I actually found it weirdly entertaining.
melgross - Tuesday, April 27, 2021 - link
Legacy is what keeps Microsoft on x86. Losing legacy is why Microsoft is having so many problems moving to ARM.
Arsenica - Monday, April 26, 2021 - link
To the layman it may seem so, but marketing "nanometers" have really muddied the waters.
TSMC's N5 has a density of 170Mtr/mm^2, while N3 is projected to do around 285Mtr/mm^2 in 2023.
By then Intel's 7nm (P1278) will do around 235Mtr/mm^2.
So TSMC will have a density advantage but not by nearly as much as marketing nanometers make it appear.
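A quick back-of-the-envelope check of those figures (a sketch using only the Mtr/mm^2 numbers quoted in this comment; the values are the commenter's projections, not official specs):

```python
# Density figures (Mtr/mm^2) as quoted in the comment above.
densities = {
    "TSMC N5": 170,
    "TSMC N3 (projected, 2023)": 285,
    "Intel 7nm (P1278, projected)": 235,
}

# Ratio of TSMC N3 to Intel 7nm: the actual gap, vs. what the
# "3nm vs 7nm" marketing names would suggest (7/3 ~ 2.33x).
ratio = densities["TSMC N3 (projected, 2023)"] / densities["Intel 7nm (P1278, projected)"]
print(f"N3 / Intel 7nm density ratio: {ratio:.2f}x")
```

On these numbers, TSMC's density lead at that point would be roughly 21%, far smaller than the node names imply.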
Yojimbo - Monday, April 26, 2021 - link
TSMC's 5nm and 3nm, although they call them full node advances, are more like between half and 3/4 nodes. Intel made one misstep, trying to bite off too much without EUV. So far all its manufacturing problems can be ultimately traced back to that. So we need to see what Intel can do with EUV - what will be the quality of its 7 nm process? - before declaring that TSMC has lapped Intel. Intel doubled its usage of EUV in its 7 nm process. It seems to have the pellicles it's been waiting for. We'll have a better idea in 2023 of where Intel stands.
Spunjji - Tuesday, April 27, 2021 - link
Intel made a lot more than one misstep with 10nm. TSMC don't use EUV at all on N7, so if that were the only problem then TSMC ought to have had similar issues, given their comparable densities in practice.
Yojimbo - Tuesday, April 27, 2021 - link
That's a good point. I'm curious to see AMD's gross margins for this past quarter, which will be released this afternoon along with their earnings. Their Q4 2020 results showed no gross margin improvement over Q4 2019 despite a Y/Y revenue increase of 53% and an improved competitive landscape. This is very strange, and I assume it has to do with the cost they are paying, and have been paying, for the 7 nm node. To me that implies that the 7 nm node is not all that great for high-powered chips. But foundry customers don't have a choice but to move to the newer node, because TSMC's older nodes are stale; TSMC isn't developing them the way Intel chose to develop 14 nm. As for AMD, they still benefit greatly from picking up market share while keeping the same profitability they would otherwise have at lower market share. Intel, on the other hand, which already has high market share, needs to maintain its margins and has a lot to lose financially by switching to a less profitable node that keeps them at 88% market share instead of 85%, or what have you. That's just the way the business and financials shake out for the two companies.
The situation is a lot more nuanced than "Intel's 10 nm is trash and TSMC's 7 nm is good." There are a lot of filters in play here. To me it's telling that NVIDIA chose Samsung's 8 nm for their gaming chips. They were able to leverage their architectural superiority to maintain their margins by avoiding TSMC's 7 nm node. We need to consider the total volume of high-powered chips being produced on the 7 nm node and what their margins are, compared to Intel's demand for high-powered chips and what its margin requirements are. We don't know how much more Intel could have charged for a 10 nm chip compared with how much more it would have cost to make, and we don't know the HPC-only margin of TSMC's nodes.
TSMC's gross margins are not extraordinarily high considering the capacity shortage, and they actually dropped a bit in 2019 before the shortage began.
Matthias B V - Tuesday, April 27, 2021 - link
Agreed. Intel will be back on track with EUV at 7nm and catch up with 5nm GAA. Their 7nm DUV can compete with TSMC / Samsung 7nm. And if the data is correct, Intel's 7nm is closer to Samsung 2nm / TSMC 3nm than to TSMC 5nm. Samsung 5nm is totally off (I don't know what happened there; probably because it is more of a 7nm LPP+ instead of a new design?). They might have focused on GAA and come back big...
However, I see one issue: Intel does not get enough EUV scanners from ASML and does not have enough capacity.
TSMC is far ahead of Samsung and Intel in EUV orders and installed scanners. And now even Hynix and Micron will increase order so supply will be tight for years.
I guess EUV capacity is one reason besides maturity of 7nm for Intel to bring a 10nm ESF (10nm+++) RaptorLake as AlderLake successor.
Tams80 - Wednesday, April 28, 2021 - link
With ASML banned from exporting to China though (a situation looking likely to only get 'worse'), that should at least relieve some of the stress that the shortage of ASML EUV scanners has caused.
Yojimbo - Friday, April 30, 2021 - link
Chinese semiconductor manufacturers weren't yet ready to use EUV for production. They had bought 2 machines for development purposes. It's not that big of a difference. It's thought that SMIC's 16/14 nm still isn't that good at this point.
duploxxx - Tuesday, April 27, 2021 - link
There are a lot of ways to calculate density... believe whatever you want. One thing is for sure: Intel's 10nm is not as high-volume as they want and certainly not high-performance, and 7nm is nowhere to be found any time soon.
so perhaps read something about real sizes and differences comparing logic and make a good calculation from a good site:
https://en.wikichip.org/wiki/WikiChip
TheinsanegamerN - Tuesday, April 27, 2021 - link
Wow, that was really helpful and insightful.
https://en.wikipedia.org/wiki/Sarcasm
Spunjji - Tuesday, April 27, 2021 - link
Can we be sure about Intel's density projections for 7nm? We still don't know much for sure about their density at 10nm - the 100Mtr/mm^2 was their original projection, and it's abundantly clear that the process never met those initial expectations. In practice they seem to be closer to 70Mtr/mm^2.
If 7nm is a full node over their *working* 10nm process then it would roughly match the density of TSMC's N5, which would be in keeping with Intel's historical advantage over TSMC at comparable node names (i.e. one full node ahead), but still leave them far behind TSMC's 3nm in practice.
Silver5urfer - Tuesday, April 27, 2021 - link
Intel is not going to catch them; it's over. They lost the years-long advantage they had, be it due to Intel management problems or that stupid CA state demographics, etc.
Intel is simply going to use/steal their IP and build Intel parts, as per their latest disclosure. So far their 10nm consumer processor is not out yet to show performance, and granted, 7N and these marketing terms do not do the actual technology great justice.
All in all, the end processors will show the real king; the EPYC 7763 is the one right now. I think we all have to simply wait and see what's going to happen.
sharath.naik - Wednesday, May 5, 2021 - link
You mean Intel on 10nm at best... or most of it will still be on 14nm. Intel will still be using that until they need to move.
Oxford Guy - Monday, April 26, 2021 - link
What happened to SOI (silicon-on-insulator)? Did it die because IBM and Global Foundries focused on it? Or is there an unavoidable technical impediment that makes it too uncompetitive? I suppose I should do a web search to answer the question.
'As far as capacity is concerned, TSMC is unchallenged and is not going to be for years to come.'
That’s an odd sentence.
juancn - Monday, April 26, 2021 - link
It's an odd sentence, but everyone else is so far behind that it's probably true. Ramp-up will take years for TSMC's competitors, and that's just to reach where TSMC is today; if they keep investing, the target moves further into the future.
CiccioB - Monday, April 26, 2021 - link
It is 'odd' because a word is missing, not for its truthfulness:
'As far as capacity is concerned, TSMC is unchallenged and is not going to be <b>[challenged]</b> for years to come.'
That is probably what the intended meaning of the sentence was.
Santoval - Monday, April 26, 2021 - link
I also found it odd, but I think adding a second 'challenged' in the same sentence is problematic and kind of ugly. I think it would be cleaner to rephrase it as: 'As far as capacity is concerned, TSMC has no challenge and it is not going to have one for years to come', or: 'As far as capacity is concerned, TSMC is unchallenged and that will remain the case for years to come'.
BQP - Monday, April 26, 2021 - link
SOI is going strong in the RF/analog market. There is no one-size-fits-all in the IC industry.
Calin - Tuesday, April 27, 2021 - link
Silicon-on-insulator was a way to reduce power at the same frequency (lower capacitance, hence lower current necessary to switch bits). Strained silicon was a way to reduce resistance (same current needed to switch bits but lower heat losses).
Both are useful, but they don't allow you to put more "bits" on the same area. And on the leading edge, he who has the most bits wins (as in, Intel's Rocket Lake has 8 cores on 14 nm whereas its equivalent Intel 10nm processor has 10 cores).
More "bits" (logic area) allows you to - for example - run a larger GPU at a lower speed for similar performance at lower power (or at higher performance and higher power). Or fit - in the same area - your smartphone processor but with more cache, more "AI" cores, more GPU cores, ....
Oxford Guy - Tuesday, April 27, 2021 - link
Yes, but is it somehow incompatible with the leading-edge stuff? High density hasn't always been the most important factor. I remember reading about how the choice of a high-density library vs. a high-performance library was not due to the former being simply better for all CPUs, but because those CPUs (like Excavator, as I recall) were specifically designed to be cheap + small lower-end parts.
Having the density improvement from the node shrinkage does not seem to be the same problem thing as leveraging what you describe SOI's benefit is. As you described them they seem to be complementary.
So, again I ask if there is something about SOI that's an inherent impediment keeping it from being used these days for the leading edge. Remember that back in 2011 Global Foundries was using SOI for its leading-edge 32nm. There was also IBM, as far as I know. So, what happened? Did FinFET kill the tech by being incompatible and/or simply better, making SOI obsolete?
Oxford Guy - Tuesday, April 27, 2021 - link
‘the same problem thing’
I didn't write that. It was the groovy auto-defect in the phone. Anyway, the sentence should be intelligible. Just substitute ‘type’ for ‘problem’.
grant3 - Saturday, May 1, 2021 - link
According to the internets: SOI is used in some manufacturing but not much, because it's expensive.
WaltC - Monday, April 26, 2021 - link
Odd to see a TSMC article that mentions Apple as a customer but not AMD, which is now as large as or an even larger TSMC customer than Apple. Strange omission, imo.
NextGen_Gamer - Monday, April 26, 2021 - link
Although to us AMD seems big, it is still not even close compared to Apple. AMD sold 9-10 million 7-nm chips total in Q4 2020 - that is including PS5, Xbox Series S/X, Zen 2 and Zen 3, and RDNA 1+2. Apple on the other hand? They sold 13+ million iPhone 12's IN THE FIRST WEEKEND. Not sure what total Q4 2020 sales would be, but obviously it's going to be far, far ahead of AMD.
ingwe - Monday, April 26, 2021 - link
I would be curious how they compare on a die-area basis. I would expect that the gap closes between the two (at least somewhat?)
NextGen_Gamer - Monday, April 26, 2021 - link
You mean how AMD's chips are larger in die size, on average, than Apple's? Yeah, I would guess that closes the gap, but still not by much. If you were to combine all of Apple's 7-nm chips (iPhone XR/XS/11/11 Pro/SE 2nd, iPad Mini 5th, iPad Air 3rd, iPad 8th) and 5-nm chips (iPhone 12/12 Pro, M1-based Macs) for Q4 2020, it has to be over 100 million? If you were to say each Apple chip is on average 1/3 the size of AMD's, a good estimate, that would still put them at over a 3:1 ratio in favor of Apple. There is just so much volume in iPhone. All of AMD's businesses combined are like the volume of the iPad business for Apple (at a chip level).
ajp_anton - Monday, April 26, 2021 - link
Huh, Apple's chips being 1/3 of AMD's?
The A14 is 88mm^2, the Zen3 compute die is 80mm^2. You would need a 24-core AMD CPU to get to your 1/3 number. Do we have stats on the average number of compute dies in AMD's CPUs?
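The volume-vs-die-size argument in this sub-thread can be sketched numerically (all inputs are the commenters' rough estimates: ~10M AMD 7-nm chips per quarter, ~100M Apple chips, Apple dies averaging ~1/3 the size of AMD's; none of these are confirmed figures):

```python
# Rough total-silicon-area comparison using the thread's own estimates.
amd_units = 10e6        # AMD 7nm chips per quarter (commenter's figure)
apple_units = 100e6     # Apple 7nm+5nm chips per quarter (commenter's guess)
apple_rel_die = 1 / 3   # Apple die ~1/3 of AMD's average size (assumed)

# Normalize AMD's average die to 1.0 area unit.
area_ratio = (apple_units * apple_rel_die) / (amd_units * 1.0)
print(f"Apple : AMD total silicon area ~ {area_ratio:.1f} : 1")
```

That reproduces the "over 3:1 in favor of Apple" claim, though wafer starts and per-wafer pricing are what actually matter to TSMC.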
NextGen_Gamer - Monday, April 26, 2021 - link
I was referring to all 7-nm/5-nm IP from each. I think the majority of AMD's 7-nm production is going to PS5 and Xbox Series X consoles - each over 300mm². The next biggest is RDNA 2 cards - again over 300mm². After that will be enthusiast CPUs, which have two CPU complexes in them, or server EPYC processors, which have 2 to 4 complexes in them. And then finally, after that, will be things like mobile CPUs and lower-end processors with one complex.
CiccioB - Monday, April 26, 2021 - link
These numbers are interesting, even plausible, but have no meaning at all in this context.
What counts are the wafer starts each company has bought.
Despite my thinking that Apple pays more than 3 times what AMD pays for its wafers, Apple gives TSMC more than 3 times the money AMD does.
That's the value of these companies as TSMC customers.
melgross - Tuesday, April 27, 2021 - link
Apple sold around 200 million plus phones last year, plus about 55 million iPads and about 10 million Apple TVs, plus about 10 million iPod Touches. Then they sold about 35 million Apple Watches, and about 150 million other devices with Apple chips in them, such as the various AirPods and Beats headphones. Also, at the end of the year, about 6 million new Macs with the M1 chips and other assorted Apple chips.
I'm likely missing a lot of other small chips TSMC produces for them, such as video controllers, sensors and the like.
Spunjji - Tuesday, April 27, 2021 - link
Comparing chips produced doesn't make much sense - AMD's chips are, reliably, significantly larger than Apple's. You need to compare wafer starts reserved, as that's what TSMC's customers are actually purchasing.
ajp_anton - Monday, April 26, 2021 - link
By revenue, TSMC's biggest customers for 2021 are:
Apple: 25%
AMD: 9%
Mediatek, Broadcom, Qualcomm: 8%
So AMD is far, far behind Apple, and not that far ahead of the others.
CiccioB - Monday, April 26, 2021 - link
Probably because AMD is (still) a small customer for TSMC.
You may be surprised to know that, despite the quantities and numbers of CPUs/SoCs/GPUs/APUs AMD has made in 2021, Intel's contribution to TSMC's revenue is just a bit smaller. In 2019 it was even bigger.
AMD is 7% of TSMC's revenue, while Intel is 6%. Against Apple's 24% contribution to TSMC's revenue, they are both dwarfs. That is why AMD is not named here. Ahead of it there are Broadcom, Qualcomm and even Nvidia...
This year AMD is foreseen to make the jump to become TSMC's second-largest customer, thanks also to the plan to produce Xilinx (now AMD) FPGAs on TSMC's advanced nodes. But that just means AMD will go from 7% to a bit more than 9% of TSMC's total revenue. Which is still a dwarf against Apple's 25%+ and just a bit more than Intel (about 7%).
But it will then finally surpass Nvidia in the race for biggest TSMC customer, as it appears Nvidia will move to Samsung for production even more than before (probably TSMC is going to be really too tight on supply quantities).
Apart from the numbers given here, what is missing is the foreseen number of wafers each customer has requested from TSMC and how many TSMC is ready to supply.
Because having a good PP (even the best one) but not being able to satisfy customers' supply needs means that some of them will be disappointed nonetheless (a lost sale is a loss of money), and some may resort to a worse PP with larger supply.
See Intel and its immortal 14nm PP: it may be behind TSMC's 7nm, but it guarantees Intel can satisfy any mass-volume request for its chips, something AMD cannot, and we have seen what this meant for their notebook series (where are the 4000-series APUs and 6000 (but also 5000) mobile GPUs?). They may be good, even marvelous on paper, but if they can't supply the OEMs, those OEMs can only produce and sell Intel-based devices, leaving AMD waiting for TSMC to make new chips when it can (and losing money and potential market share).
And on top of this, given the capital investments TSMC is going to make for these new PPs, I doubt they are going to be cheap. The time when AMD could fight Intel with the weapon of price dumping (losing money every quarter) has ended, but given AMD's much smaller gross profit than Intel's, I see a quite difficult period ahead for the former to improve profitability while using the latest and most advanced PP to stay ahead in all markets (CPUs and GPUs).
Nvidia produces more while paying less. And Intel is going to use its 10nm SuperFin PP at full power at the end of the year. We will see how this race towards the best and most advanced node to stay afloat will pay off.
Matthias B V - Tuesday, April 27, 2021 - link
I guess it is less about capacity than early adoption and price. Or a mix of both, as AMD might be the 2nd biggest but not as big as Apple [yet].
Also, Apple will always be the first adopter, as they can start using a new node on their A-series mobile chips, which are small and do not need high frequencies and are therefore perfect for a less mature node. AMD, even with chiplets, might not be able to do that and only follows...
Also, early capacity is low and prices are high, so Apple outpays AMD on new nodes, while AMD follows a year or two after Apple moves on.
Remember, we are talking about the newest nodes and not about who is the most important TSMC customer.
melgross - Tuesday, April 27, 2021 - link
No. Apple is by far their largest customer. Additionally, it's been mentioned in various places that Apple pretty much funded TSMC's 5nm node. That is likely to continue with the smaller nodes.
name99 - Monday, April 26, 2021 - link
"and earning our customers' trust. The last point, customers' trust, is fairly important because we do not have internal products that compete with customer."Now why on earth would he feel it necessary to push that point...
(Will be interesting to see if this becomes a recurring theme in TSMC's communication.)
Rudde - Tuesday, April 27, 2021 - link
Because it sets TSMC apart from Samsung and Intel.
name99 - Wednesday, April 28, 2021 - link
Oh dude, can you not spot sarcasm in a post?
Oxford Guy - Tuesday, April 27, 2021 - link
For the same reason PC gamers need to wise up and buy a clue when it comes to how AMD competes directly against them via the console scam.
melgross - Tuesday, April 27, 2021 - link
Because with Samsung mostly producing their own chips, and Intel opening its foundry to most others, there are questions as to who is really going to get the best technology from either: their own designs, which are therefore the most profitable for them? Or a customer's, where they profit only from the manufacturing?
There is a question about Nvidia and their purchase of ARM over this very same reason, and it was one of the reasons why Apple sold its one-third share of ARM stock in the early 2000s (the other major one was financial).
name99 - Monday, April 26, 2021 - link
"Following Intel's announced foundry comeback in March, TSMC’s willingness to set a 3-year $100 billion CapEx/R&D investment plan, starting from 2021, indicates its confidence to widen its foundry leadership,"That $100B sounds impressive. Which is why I present to you:
https://www.apple.com/newsroom/2021/04/apple-commi...
WTF will Apple be spending all that money on? The items they actually list are chicken feed compared to that total. Even a few data centers barely move the needle.
Two obvious answers are
- Apple car... I always assumed contract manufacturing in China, but maybe Apple has concluded that's now just too much risk?
- (relevant to this article) Are they fronting much of the money for TSMC's expansion? They did this years ago with Foxconn, buying masses of things like aluminum milling machines. I could see Apple providing the money for new fabs and EUV machines, in return for guaranteed first crack at them for many years, with some sort of gradual reversion of ownership to TSMC over time.
melgross - Tuesday, April 27, 2021 - link
I don't know the numbers, but Apple has financed TSMC's 7nm and 5nm nodes. Not just with money directly but, as with other companies, including Samsung, Sharp, Foxconn and a number of others, both large and small, with actual machinery, worker training and more.
Apple receives as a result guaranteed first access, lower pricing, and a partial say in where the companies are going in their R&D and production. This works out well as long as the companies can fulfill their part of the deal. They can use the machinery for other customers as long as Apple's needs are met.
Pretty good deal, really.
Desierz - Tuesday, April 27, 2021 - link
Imagine a 2nm process where the transistor is actually 2nm. It would be less than 10 atoms across. I wonder when quantum effects will kick in so as to make further shrinkage impossible.
melgross - Tuesday, April 27, 2021 - link
It kicked in starting at 90nm. That’s why Intel and others rethought their approaches to chip design and went for more cores instead of trying for the 10-20GHz cores they were promising in a few years. Now, though, they’re really up against the wall. It’s amazing they’ve been able to get as far as they have.
rahvin - Tuesday, April 27, 2021 - link
Transistors stopped shrinking at around 24nm. That's what FinFET and other tricks are about: raising a 3D transistor out of a much smaller trace. Most of these transistors are around 37nm wide at the throat. But given that, the 7nm process at TSMC does have its smallest traces at 7nm. They can't be very long or quantum effects destroy conductivity, but they are 7nm wide. The problem comes as you get smaller and smaller. A silicon atom is 0.2nm wide; each shrink is going to extract more and more quantum effects from the surrounding material, and ever more elaborate tricks like FinFET will be needed to negate them.
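To put the atom counts above in perspective, here is a quick back-of-envelope sketch using the ~0.2nm silicon atom width quoted in the comment (illustrative only; node names like "7nm" are marketing labels, not literal feature sizes):

```python
# Rough count of silicon atoms spanning a feature, using the
# ~0.2 nm atom width mentioned above (a back-of-envelope figure).
SILICON_ATOM_NM = 0.2

def atoms_across(feature_nm: float) -> int:
    """Approximate number of silicon atoms spanning a feature of the given width."""
    return round(feature_nm / SILICON_ATOM_NM)

for width in (37, 24, 7, 2):
    print(f"{width} nm feature ~= {atoms_across(width)} atoms wide")
```

A 7nm-wide trace works out to roughly 35 atoms, and a literal 2nm feature to about 10, which is where the "less than 10 atoms across" worry in the earlier comment comes from.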
More importantly, each process shrink now doubles the cost of the fab, and costs are growing geometrically. I would be willing to bet that Samsung drops out of the cutting edge after the next phase. And then it will simply be a question of how much money Intel and TSMC have to keep advancing.
We are rapidly approaching a critical moment in the Semiconductor industry.
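If fab cost really does double with each shrink, the compounding is brutal. A minimal sketch (the $10B starting figure and the node list are assumptions for illustration, not sourced numbers):

```python
# Illustrative compounding of fab cost under a doubling-per-node assumption.
# The $10B base cost and node labels are hypothetical, chosen only to
# show how quickly geometric growth outruns any fixed budget.
cost = 10.0  # billions of dollars, assumed starting point
for node in ("7nm", "5nm", "3nm", "2nm"):
    print(f"{node}: ~${cost:.0f}B per fab")
    cost *= 2  # "each process shrink now doubles the cost"
```

Four nodes at a doubling per node is a 16x increase, which is why only a couple of players can afford to stay at the leading edge.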
Teckk - Tuesday, April 27, 2021 - link
Any reason there is zero coverage on Intel earnings or the Apple Spring Loaded event with multiple product launches?
Desierz - Tuesday, April 27, 2021 - link
Probably hasn't gotten around to it yet. But there are many, many other sites that do cover these. There's no need for you to wait for it here.
Teckk - Tuesday, April 27, 2021 - link
Yeah I did church those elsewhere but noticed the lack of content a bit these days. Anandtech is my preferred source 😅
Teckk - Tuesday, April 27, 2021 - link
*check ... still no edit
Silver5urfer - Tuesday, April 27, 2021 - link
Yeah, even the Sony announcement too. Esp the HW it packs.
zodiacfml - Tuesday, April 27, 2021 - link
TSMC got no chill while Intel is still limping on 10nm. I think it's the start of the end for Intel manufacturing. Intel foundry is just too late unless they choose to spend like crazy, which is unlikely, and TSMC can easily outspend Intel.
melgross - Tuesday, April 27, 2021 - link
“It ain’t over ‘til it’s over.”
TristanSDX - Tuesday, April 27, 2021 - link
Intel won't overcome TSMC and in the longer term will be forced to go fabless. TSMC is pumping $30B per year into fabs (and this is increasing), while Intel plans (with government help) to spend $20B over three years. Intel is too little, too late to compete with TSMC, and will be increasingly forced to use TSMC, while its inferior fabs will be used for less demanding ASICs and for low-profit foundry services.
Oxford Guy - Wednesday, April 28, 2021 - link
Do you also think Samsung is incapable of competing?
Between the US and EU, there is a lot of funding Intel can ask for on the basis of not being entirely at the mercy of two firms in Asia, one of which is vulnerable to possible takeover by China.
kwohlt - Thursday, October 28, 2021 - link
Now that the US Government has deemed fabs a strategic asset and is subsidizing Intel's domestic fab build-out and outsourced foundry service, it's a good thing Intel didn't listen to investors calling for them to go fabless.
JoeDuarte - Thursday, May 6, 2021 - link
Strange that 3nm only brings a 10-15% performance boost. It seems like performance gains are shrinking with each shrink.