58 Comments
alufan - Monday, August 24, 2020 - link
Everyone, myself included, poured cold water on the idea that AMD may have 5nm out soon, but honestly, the way TSMC are going, it would not surprise me if it showed up sooner rather than later.
Rudde - Monday, August 24, 2020 - link
With AMD at least one year, likely two years, away from releasing Zen 4 based processors, them utilising N5 for those seems like a given. On the graphics side, AMD might very well release CDNA 2 on N5 (marked 'advanced node' on their road map). Both are expected in 2022.
Gondalf - Monday, August 24, 2020 - link
I don't know; N5 doesn't give many advantages over N7. It is more expensive, and the area scaling is disappointing on CPUs and GPUs. Not to mention the peak clock speed regression at the same power that Qualcomm noted some time ago.
The advantage of new nodes is becoming very, very small. Architecture and higher IPC will become much more important in the future, and the node shrink will slowly slide into irrelevance.
name99 - Monday, August 24, 2020 - link
"N5 doesn't give many advantages over N7"
And yet I expect Apple will ship cores on it that are ~30% faster than their current cores. Cores that are good for phones, watches, and desktops. And that have an accompanying GPU perhaps ~40% faster than the current GPU.
We heard this same story about 7nm being no big deal compared to 10nm.
If your CPU design flow is based around the idea that future processes get more and more dense, you can do pretty well. On the other hand, if your CPU design flow is based around the idea that you can keep cranking up the GHz...
More interesting is the way SRAM doesn't shrink much. TSMC being what they are, they won't tell us until this is ready, but one path forward is 3D. It's just a matter of time before someone ships parts like that in mass quantities. (Sure, Intel will be first to announce, like always... But I'm more interested in who's first to actually ship in bulk.)
The most trivial version of this is an SRAM chip stacked on a logic chip, but more interesting is 3D bulk fabrication, either double decker (n/p stacking) transistors, or run the wafer through the line twice.
Tomatotech - Monday, August 24, 2020 - link
How hot does SDRAM run vs logic?
If it runs cool, then yeah, stack it. If it runs hot because of high use, then stacking it will be problematic.
We might end up with areas of fast, hot, well-cooled, single layer SRAM and areas of cool stacked slower SDRAM.
Tomatotech - Monday, August 24, 2020 - link
Edit: SRAM, not bloody autocorrect SDRAM.
nandnandnand - Monday, August 24, 2020 - link
There was the Samsung 3D TSV SRAM story recently. I don't think they gave any details about cooling, and it might be mobile-focused.
Santoval - Sunday, January 31, 2021 - link
The main hurdle to 3D stacking has always been how to keep the stacks cool. Intel's Lakefield has a mere 7W of TDP but it cannot operate its "big" Sunny Cove core continuously, only in short bursts for "fast responsiveness" (look up its review here if you have not read it). It is prohibited by firmware from running single-threaded code for more than a few moments at a time due to excessive heat generation.
This is how the TDP was limited to 7W, among other power optimizations. And that's despite "kneecapping" the Sunny Cove core to have "feature parity" with the small cores. Both AVX-512 *and* AVX-256 were disabled because the small Tremont cores lack them (this is why AVX-256 support is to be added to Gracemont; so that the Golden Cove cores of Alder Lake will have only AVX-512 disabled ... right when AMD adds such support to Zen 4).
Imagine, if 7W of TDP is that much thermal trouble, how infernally hot a 45W TDP 3D stacked chip would run if it lacked active cooling between the dies (microfluidic cooling is unfortunately still confined to the lab). It is simply beyond the bounds of reality, and even more so for higher TDP CPUs. Maybe it's possible up to 25W of TDP, but those 25W parts would require cooling equivalent to what's normally needed for 65W non-stacked parts (or higher) to compensate. 15W should be less of a problem, but preferably the TDP should be limited to 10-12W. No more.
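The intuition behind this can be sketched with a toy thermal-resistance model. All numbers below (heatsink and die-to-die resistances, ambient temperature) are illustrative assumptions, not measured Lakefield values; the point is only that junction temperature rises linearly with power through whatever thermal resistance sits between a buried die and the heatsink.

```python
# Toy steady-state thermal model for a stacked die (illustrative numbers only).
# T_junction = T_ambient + P * R_total, where R_total is the junction-to-ambient
# thermal resistance in K/W. Stacking adds die-to-die resistance on top of the
# heatsink path for the buried die.

def junction_temp(power_w, r_heatsink=1.5, r_die_to_die=0.0, t_ambient=25.0):
    """Return junction temperature (deg C) for a given power and thermal path."""
    return t_ambient + power_w * (r_heatsink + r_die_to_die)

# Flat die at 7 W vs. a buried die in a 3D stack with 3 K/W extra resistance:
flat_7w = junction_temp(7)                          # 25 + 7*1.5  = 35.5 C
stacked_7w = junction_temp(7, r_die_to_die=3.0)     # 25 + 7*4.5  = 56.5 C
stacked_45w = junction_temp(45, r_die_to_die=3.0)   # 25 + 45*4.5 = 227.5 C
print(flat_7w, stacked_7w, stacked_45w)
```

Even a modest few K/W of die-to-die resistance pushes a 45W stack far past any survivable junction temperature, which is the commenter's point about needing cooling between the dies.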
brantron - Monday, August 24, 2020 - link
Intel already showed plans for multi-chip packages using several variations of 10nm. That could become the norm, with a dedicated manufacturing process for particular circuits.
If a new node overall is cost prohibitive, it would likely still be used right away for new iPhones. Apple could just do what everyone else already does, with different SoCs for the regular vs. Pro models.
Costs will go down over time, so there would just be a delay before widespread adoption.
Rudde - Tuesday, August 25, 2020 - link
+15% performance or -30% power is not to be disregarded. It's not as great as the halved power between GlobalFoundries 12nm and TSMC N7. That halved power consumption allowed AMD to pack 8 cores into a low-power (15W) processor and 16 cores at 3.5GHz base clock into a 105W processor. Neither would have been possible at 12nm.
N5 will further increase performance while keeping power consumption down. It will not necessarily manifest in higher clocks, but in increased instructions per clock (IPC) and core counts.
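The core-count arithmetic here can be sketched roughly. The per-core and uncore wattages below are invented for illustration (AMD does not publish such figures); only the "halved power" claim and the 105W TDP come from the comment, and the result lands in the same ballpark as the 16-core part it mentions.

```python
# Back-of-envelope core-count scaling under a fixed power budget.
# Assumed (illustrative) numbers: a 12nm core at base clock draws ~12 W;
# halving per-core power on N7 lets roughly twice as many cores fit into
# the same TDP once a fixed uncore/IO budget is subtracted.

def max_cores(tdp_w, uncore_w, core_w):
    """Cores that fit once uncore/IO power is subtracted from the TDP."""
    return int((tdp_w - uncore_w) // core_w)

core_12nm = 12.0           # assumed per-core power at base clock on 12nm
core_n7 = core_12nm / 2    # the "halved power" claim for N7

print(max_cores(105, 20, core_12nm))  # 12nm: (105-20)//12 -> 7 cores
print(max_cores(105, 20, core_n7))    # N7:   (105-20)//6  -> 14 cores
```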
yankeeDDL - Tuesday, August 25, 2020 - link
30% power reduction and 1.8x density is quite decent, in my opinion.
psychobriggsy - Tuesday, August 25, 2020 - link
It appears you've been reading the Intel presentations.
The reality is that Marvell have shown a 40% average density improvement and 40% power savings (at iso-performance) from N5 compared to N7 - https://semiaccurate.com/2020/08/25/marvell-talks-...
Analogue and I/O always scale poorly - it's why, as we enter a chiplet age, these are on older processes.
N3's poor SRAM scaling is likely going to result in SRAM moving to another die that is 3D assembled via TSVs to the dense logic die.
cb88 - Tuesday, August 25, 2020 - link
The advantage would be increased fab throughput... since some of the quad and double patterning is eliminated in favor of EUV, they can churn out a LOT more chips. Probably 2.5x as many transistors produced per unit time, even though the density increase isn't that much.
Spunjji - Wednesday, August 26, 2020 - link
Useless comment as ever, Gondalf. The contradictory information is just a few inches above your comment.
eek2121 - Monday, August 24, 2020 - link
Well no, Zen 4 is currently scheduled for a 2021 release.
nandnandnand - Monday, August 24, 2020 - link
Zen 3+ "Warhol" in 2021, in between Zen 3 "Vermeer" and Zen 4 "Raphael".
scineram - Tuesday, August 25, 2020 - link
Well no, it's not.
MarcusMo - Monday, August 24, 2020 - link
Crazy to think that the battle for the performance crown in the next couple of years will likely be fought by AMD and Apple...
Azix - Monday, August 24, 2020 - link
How do you figure? If you mean on the extra-high core count level, sure. Core for core, though, you'd have to limit it to performance/watt for AMD to come out ahead of Intel.
michael2k - Monday, August 24, 2020 - link
Maybe they mean AMD and Apple on 3nm vs Intel on 7nm?
Kevin G - Tuesday, August 25, 2020 - link
The reason Intel had such good performance/watt compared to competitors over the past decade was that they were able to flex their lead in manufacturing. Now the tables have turned, with TSMC approaching a full generation ahead of Intel. This puts Intel at a clear disadvantage in terms of technology. Leveraging TSMC for some manufacturing does put Intel at the same level as their competitors and is great for design analysis, as the node variable might end up normalized between players.
Santoval - Sunday, January 31, 2021 - link
In terms of density, TSMC's 5nm process node is going to be slightly less dense than Intel's 7nm process node (whenever that might be released..). TSMC's 3nm node is definitely going to be quite a bit denser than Intel's 7nm node, but Intel's 5nm node (in 2026+, when it will be released..) should in turn be a bit denser than TSMC's 3nm node.
However, density is not the be-all and end-all. A less dense, well designed GAA-FET node should perform better than a denser FinFET node due to lower leakage (and thus the ability to clock higher). I doubt Intel are switching to GAA transistors with their 7nm node, but they might switch to GAA with their 5nm node. They should, actually, because FinFET can only go so far.. It is kind of disappointing that TSMC are retaining FinFETs at least until their 3nm node..
name99 - Monday, August 24, 2020 - link
Until "performance crown" is defined, the very concept is meaningless.
Fastest single threaded? Highest throughput? Power restrictions? GPU performance?
Anyone can (and will) define "performance crown" to mean whatever gives the result they want.
Nicon0s - Tuesday, August 25, 2020 - link
Highest overall performance.
Currently the Xbox One X's chip packs the fastest overall performance on a single package (both CPU and GPU). A recent analysis I saw suggested it could be used in Surface products as well.
ironargonaut - Tuesday, August 25, 2020 - link
So, using that metric, whatever is currently the #1-ranked supercomputer in the world has the performance crown. Seems a bit overkill for playing Minecraft, though.
yeeeeman - Tuesday, August 25, 2020 - link
Pour more cold water, cause it ain't happening. AMD will not be one of the first clients to get 5nm tech. The first clients are mobile clients that have an old and established partnership and are also RICH enough to afford paying the premium. AMD is still a relatively small company, so it is not able to do what Apple does, for example, where it bought an EUV machine for TSMC to get first access to 7nm and 5nm...
AMD will be out with 5nm products late next year or in 2022.
Tams80 - Tuesday, August 25, 2020 - link
Cost isn't really the problem with EUV machines (though they are very expensive); it's supply. The EUV machine manufacturers have completely full order books.
Spunjji - Wednesday, August 26, 2020 - link
AMD have a pretty old and established relationship with TSMC - or have you forgotten where they've been getting their GPUs manufactured all this time? 😁
Agreed about the time-frame for 5nm, though. They don't seem to be in any rush to push that out.
Quantumz0d - Monday, August 24, 2020 - link
At this rate of yearly cadence for those Apple processors, Moore's Law is going to die. The biggest issue with these too-fast refreshes, in my mind, is that GaaS / SaaS / IaaS is going to bleed into the consumer space as well: the Stadia abomination started it, xCloud is coming soon, Amazon is also rumored to be doing something, Office is already a service, Windows as well.
I just hope we do not lose the ownership and DIY market once this madness reaches 2025+, citing too-powerful computers, so that you have to bend to their infrastructure and services to use anything: a thin-client ARM processor phone with the OS running on their servers. Apple will be the first to do so if that happens, and Android will copy as well.
Long gone are the days of CD ripping and the like. Thankfully Bandcamp is still alive in this sea of BS services for music. Movies are also mostly streaming now; BDs are still there, and biggest thanks to the new consoles. Damn, I hate them for cheaping out last gen, but they still have the Blu-ray option, physical discs. Past this gen I doubt it, though. The market will decide.
xenol - Monday, August 24, 2020 - link
That just means more room for Linux and the FOSS community to fill in the gaps. Something-as-a-Service relies too heavily on an internet connection that, frankly, the US just doesn't have in a form workable for mass adoption.
But they'll try to force it on us anyway, for whatever reason.
nandnandnand - Monday, August 24, 2020 - link
We haven't even seen monolithic 3D yet. There's more performance per dollar/Watt to be realized, and that doesn't help SaaS. There will be an enthusiast/DIY market in 2025+. Probably with x86, ARM, and RISC-V options. The market won't exist by 2050 because everyone will be dead.
Spunjji - Wednesday, August 26, 2020 - link
Nailed it on future projections.
brucethemoose - Monday, August 24, 2020 - link
In the US, it's not just node scaling that's slowing, though. Stuff-as-a-service requires internet access to grow exponentially as well.
Line-of-sight mmWave and such are not going to work with everyone moving out of cities now, and unless the FCC does something radical, we're running out of longer-range, longer-wavelength bands to commandeer.
Meanwhile, landline ISPs have seemingly captured their regulatory bodies, and stalled.
defferoo - Monday, August 24, 2020 - link
There's no way Apple is moving to an OS as a service. They are firmly planted in the edge computing camp. Going over a network for everything is a bad user experience from a performance, latency, bandwidth, power consumption, and privacy perspective. Not to mention, the device stops functioning when you lose the network? This is literally why they build their own processors: so that nearly everything can be done locally instead of on a server.
Spunjji - Wednesday, August 26, 2020 - link
Yeah, I was confused by that "inference" too. Apple have consciously been at the forefront of pushing CPU and GPU performance in phones for at least half of the last decade. It would be extremely weird for them to have pushed that hard and then suddenly decide not to bother anymore.
Tams80 - Tuesday, August 25, 2020 - link
Not so soon. Fast internet connections are just not there. In urban areas there are just too many users, and rural areas are expensive to roll out to.
Unfortunately, I don't think latency will be cared about much, even though there's no real fix for it other than AI prediction. There's just not enough demand for extremely low latency.
nandnandnand - Monday, August 24, 2020 - link
"Modern chip designs are very SRAM-heavy with a rule-of-thumb ratio of 70/30 SRAM to logic ratio, so on a chip level the expected die shrink would only be ~26% or less."
How was the SRAM scaling between N7 and N5? N3 might be good enough for a core count doubling in the same area from N7.
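The quoted ~26% figure can be reproduced with a weighted average of per-block area scale factors. The logic factor (~0.58x area, i.e. ~1.7x density) matches the numbers discussed in the thread; the SRAM factor of ~0.8x area is an assumption chosen to illustrate the quote, since SRAM cells shrink far less than logic on N3. Only the 70/30 weighting comes from the article.

```python
# Chip-level area shrink from per-block scale factors (weighted average).
# Assumptions: logic scales to ~0.58x area (from a ~1.7x density gain),
# SRAM only to ~0.8x area; the 70/30 SRAM-to-logic split is the article's
# rule of thumb.

def chip_area_scale(sram_frac, sram_scale, logic_scale):
    """New chip area as a fraction of the old, weighted by block mix."""
    return sram_frac * sram_scale + (1 - sram_frac) * logic_scale

scale = chip_area_scale(sram_frac=0.70, sram_scale=0.80, logic_scale=0.58)
print(f"die shrink: {(1 - scale) * 100:.0f}%")  # -> ~27%, near the quoted ~26%
```

With these inputs the SRAM-heavy mix drags the chip-level shrink well below the headline logic shrink, which is exactly the article's point.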
Gondalf - Monday, August 24, 2020 - link
Definitely it is not a full node scaling, combining logic with SRAM..... this is a half node, nothing more. Obviously, outside very low power SoC devices, the shrink in logic will be a lot lower than their 0.58X.
In summary, these are nodes aimed at phone makers and NOT at PC/server makers.
Right now N5 is a step behind compared to the goodies of N7 (the advantages are very questionable; surely not at the level of N7 over 12/16nm or 10nm), which will remain the real golden node for TSMC.
Both N5 and N3 are a little disappointing under many parameters. The lack of SRAM scaling will be a huge issue for CPU and GPU makers.
But TSMC was fair in saying 7nm will remain the right horse for some years to come.
eek2121 - Monday, August 24, 2020 - link
It doesn’t matter, they are beating everyone else. The node name is just a name. Nothing more.
name99 - Monday, August 24, 2020 - link
Oh JFC!
You do realize that no-one on earth cares about your "full node scaling" anymore except a few shills in Intel marketing?
People like you have been beating the "not a full node scaling" drum since TSMC 20nm; how's that working out for you?
psychobriggsy - Tuesday, August 25, 2020 - link
TSMC's generations are "what can we achieve to hit a 2-year cadence reliably". For 5nm that was 1.8x density. For 3nm it is 1.7x density. 2nm might be 1.6x density. 1.4nm might be 1.5x density.
What matters for TSMC and their customers is that every 2 years there is a measurable improvement in the process. Other technology can pick up the slack - chiplets, poorly scaling logic on older-process dies, and so on.
Intel went berserk with their 2.7x density target, and it's cost them 4 years (from 2 years ahead to 2 years behind). They don't talk about what the real-world scaling of their working 10nm is now, but it isn't 2.7x, looking at the achieved transistor densities of Lakefield, ICL, and TGL.
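Compounding those per-generation factors shows what even a tapering cadence delivers cumulatively. The 1.8x and 1.7x figures are TSMC's claims; the 1.6x and 1.5x values are the comment's own speculation, carried through here as assumptions.

```python
# Cumulative logic density vs. N7 if each generation multiplies density by
# the quoted factor (1.8x and 1.7x are TSMC's claims; 1.6x and 1.5x are
# the comment's guesses for future nodes).

gens = [("N5", 1.8), ("N3", 1.7), ("N2", 1.6), ("N1.4", 1.5)]

density = 1.0  # N7 baseline
for name, factor in gens:
    density *= factor
    print(f"{name}: {density:.2f}x N7 density")
# N5: 1.80x, N3: 3.06x, N2: 4.90x, N1.4: 7.34x
```

Even with shrinking per-step gains, four steady 2-year steps compound to roughly 7x the N7 logic density, which is the "reliable cadence beats moonshot targets" argument in numbers.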
Spunjji - Wednesday, August 26, 2020 - link
Solid summary.
Gondalf and the other Intel shills are still operating on that 2.7X figure. Apparently, because Intel refuse to talk about density now, that means we can't infer that they haven't achieved it... or something.
Spunjji - Wednesday, August 26, 2020 - link
Lack of SRAM scaling will affect GPU makers the least, surely? They're more constrained by how densely you can pack the ALUs.
I will never tire of observing the different ways in which you manage to be wrong.
shabby - Monday, August 24, 2020 - link
Intel: hey TSMC, slow down, please!
Spunjji - Wednesday, August 26, 2020 - link
I have a rock in my shoe! 😂
s.yu - Monday, August 24, 2020 - link
N7P vs N7 and N7+ vs N7 read exactly the same in the chart; I presume that's wrong?
brucethemoose - Monday, August 24, 2020 - link
What about eDRAM, the stuff that IBM seems to like so much?
If last-level SRAM cache is going to take up so much space on N3, shrinking it down by replacing it with eDRAM may be worth the performance hit.
Kevin G - Tuesday, August 25, 2020 - link
IBM hasn't indicated either way whether POWER10 will be using eDRAM, but it looks like they've moved back to SRAM. It is interesting that the L3 cache capacity hasn't changed much: POWER9 had 120 MB, whereas POWER10 shifts to 128 MB. Of note is that the slice of L3 per core has moved from 15 MB down to 8 MB due to the changing number of cores and layout organization. Shifting back to SRAM is likely why IBM hasn't radically increased core counts vs. POWER9. Die size is ~90 mm^2 smaller than the previous generation, though.
eek2121 - Monday, August 24, 2020 - link
TSMC is on fire!
albertmamama - Monday, August 24, 2020 - link
Looks like Intel@22nm, a full node ahead of all other players.
Arbie - Monday, August 24, 2020 - link
3nm is getting pretty close to zero.
haukionkannel - Tuesday, August 25, 2020 - link
Heh. They'll have to invent a new marketing term... in reality these "3nm" features are still about 40 to 54nm...
Rudde - Tuesday, August 25, 2020 - link
They have 2nm, 1.5nm, 1nm, 800pm, 600pm, 350pm, 250pm, 180pm, 130pm, 90pm, 65pm, 55pm, 40pm, 28pm, 22pm, 16pm, 10pm, 7pm, 5pm and 3pm left. Of course, these don't make any sense realistically, as atomic bonds are typically 100-200pm (1-2 Ångströms), but has that ever stopped the marketing department? A better way might be to count the width in (silicon) atoms.
Zizy - Tuesday, August 25, 2020 - link
Width of what? Node names were sensible as long as traditional scaling worked. They have been misleading ever since HKMG.
Rudde - Tuesday, August 25, 2020 - link
The imaginary width. The marketing jargon. The feature size. Maybe with a width in atoms they'd define it as some real width, although probably not.
Oxford Guy - Monday, August 24, 2020 - link
Waiting for -1 nm.
If you jump enough times, just right, you'll make it through the brick wall.
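The suggestion upthread of counting node width in silicon atoms can be made concrete. This is a toy conversion using the Si-Si bond length (~235 pm) as a rough atom-to-atom pitch; node names no longer correspond to any physical feature width, so the output is illustrative only.

```python
# How many silicon atoms wide would a "node name" be if it were a real width?
# Uses the Si-Si bond length (~235 pm) as an approximate atom-to-atom pitch.
# Names below ~1nm are marketing labels, not physical dimensions.

SI_SI_BOND_PM = 235  # approximate Si-Si bond length in picometres

def atoms_wide(width_pm):
    """Approximate number of silicon atom spacings across a given width."""
    return width_pm / SI_SI_BOND_PM

for name, width_pm in [("3nm", 3000), ("1nm", 1000), ("235pm", 235)]:
    print(f"{name}: ~{atoms_wide(width_pm):.1f} atoms wide")
# 3nm: ~12.8, 1nm: ~4.3, 235pm: ~1.0
```

An "atoms wide" label would at least bottom out at 1 by construction, unlike the picometre roadmap above.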
back2future - Tuesday, August 25, 2020 - link
Seems there can be space for improving thread scheduling for multi-core sockets?
evanh - Tuesday, August 25, 2020 - link
SRAM is what's known as "dark silicon". So called because it's comparatively cold vs the surrounding logic circuits when both are active.