8 Comments
nicolaim - Wednesday, February 14, 2024
I don't think the proofreader made it to the end... "Meanwhile, uncertainties about plance of chipmaker"
Threska - Wednesday, February 14, 2024
A plance shortage.

Ryan Smith - Wednesday, February 14, 2024
Thanks!

bob27 - Monday, February 19, 2024
Now it says "chipmarkers".

Threska - Wednesday, February 14, 2024
Important takeaway is that expensive tools and processes mean more expensive chips. So remember that next time we're complaining about GPU prices.

FunBunny2 - Thursday, February 15, 2024
ah, the Capitalist's dilemma: automate further/better/cheaper (on a per-widget basis) is profitable only if excess output from further/better/cheaper can be shifted at a nice price. mostly, so far, it's resulted in consolidation of function from what-used-to-be multiple chips into ginormous monoliths. the GPU situation is a bit different, in that it always benefits from further/better/cheaper just because what it does is just brute-force computing.

now that bitcoin is at/near the next halving (only ~1.5 million coins remain to be mined), that lode is about shot.
will AI suck up the new-found largesse? only The Shadow knows. hallucination be damned.
FullmetalTitan - Saturday, February 17, 2024
High-NA tools won't be used for GPU dies anytime soon. The increase in feature fidelity comes at the cost of maximum shot area, so a single exposure drops from 858mm^2 (26x33mm) to 429mm^2 (26x16.5mm).

That is a necessity of the new optics required for high-NA machines: they use anamorphic lenses to compensate for changes to the light path that would otherwise lead to interference effects that degrade image quality. The new lenses stretch the mask pattern in only one direction, so the printable area per shot is halved in that direction.
Eventually designers can work around this with mask stitching, but that would still require two exposures and two masks, both of which are pretty expensive (a quick sketch of the reticle math follows this comment).
Most likely we will see GPU designs land at 3/4/5nm design nodes and stay there for a couple generations, but this has historically been when the best and most innovative changes to designs come about.
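To put numbers on the reticle limit described above, here is a minimal sketch in Python. The field dimensions (26x33mm low-NA, 26x16.5mm high-NA) are the figures cited in the comment; the example die sizes are hypothetical, chosen only to illustrate which designs would need mask stitching.

```python
# Hypothetical sketch: checking whether a die fits in a single exposure
# under low-NA vs. high-NA EUV reticle limits. Field dimensions are the
# figures cited above; the example die sizes are made up for illustration.

LOW_NA_FIELD = (26.0, 33.0)    # mm, full-field reticle (858 mm^2)
HIGH_NA_FIELD = (26.0, 16.5)   # mm, anamorphic half-field (429 mm^2)

def fits(die_w: float, die_h: float, field: tuple[float, float]) -> bool:
    """True if the die fits within the exposure field in either orientation."""
    fw, fh = field
    return (die_w <= fw and die_h <= fh) or (die_h <= fw and die_w <= fh)

# A large monolithic GPU die (hypothetical ~26 x 31 mm, ~800 mm^2):
print(fits(26.0, 31.0, LOW_NA_FIELD))   # True  -- one low-NA exposure suffices
print(fits(26.0, 31.0, HIGH_NA_FIELD))  # False -- needs stitching: 2 masks, 2 exposures

# A chiplet-sized die (hypothetical ~15 x 20 mm) fits the high-NA field:
print(fits(15.0, 20.0, HIGH_NA_FIELD))  # True
```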
Zoolook - Sunday, February 25, 2024
There is a reason both Intel and AMD have been working on GPU chiplets; Nvidia seems to be trying to catch up with their next design.

Don't worry, GPUs will not be stuck on older process nodes, at least not due to shrinking aperture size. More likely the cost of new nodes compared to old will continue to be the arbiter of progress.
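As a hypothetical follow-up to the sketch above: if the reticle limit does bite, here is roughly how a large monolith would split under the high-NA half-field. The ~800mm^2 die area and the 10% overhead budgeted for inter-chiplet interfaces are assumptions for illustration, not figures from the thread.

```python
# Hypothetical estimate: minimum chiplet count so each piece fits the
# high-NA half-field (26 x 16.5 mm). All die figures are assumptions.

import math

HIGH_NA_FIELD_AREA = 26.0 * 16.5  # mm^2, 429 mm^2 per exposure

def min_chiplets(total_area_mm2: float, utilization: float = 0.9) -> int:
    """Lower bound on chiplet count, given a packing/overhead factor."""
    return math.ceil(total_area_mm2 / (HIGH_NA_FIELD_AREA * utilization))

# An ~800 mm^2 monolith (hypothetical) needs at least 3 chiplets once
# ~10% of each die is budgeted for inter-chiplet interfaces:
print(min_chiplets(800.0))  # 3
```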