12 Comments
waterdog - Thursday, March 30, 2023
Was this article supposed to be tagged as a sponsored post or press release?

Ryan Smith - Thursday, March 30, 2023
No, it is not. Though I did alter the title shortly after it was posted, precisely because the original title sounded a little too PR-y at first glance.

This article is based on a briefing and follow-up research done by Anton. Synopsys reached out to us ahead of their SNUG event this week.
Ryan Smith - Thursday, March 30, 2023
(It would seem I accidentally deleted a valid user comment while cleaning up some spam here. Sorry about that!)

Ankistein - Thursday, March 30, 2023
The article doesn't provide insight into the type of AI used or how it works.

1. How is the training data handled at Synopsys, and what data security does the end user have? We may not want other companies to know our in-house chip design strategies and projects.

2. In designing the microarchitecture, does the tool give the architect suggestions about which architectural choices or strategies might work well, or does it design the architecture itself? That requires performance data for each node to be known to your AI engine. How does it evaluate a new IP, or IPs that are, again, in-house? Does Synopsys share IP data with all customers? Or, if the data is private to a customer, does the customer first need to train the network on all the IPs they have made and currently use?
moarpwr - Saturday, April 1, 2023
Disclaimer: I work for Synopsys. As the article says, the algorithms use Reinforcement Learning specifically to keep each customer's data private. The training happens during deployment, by running simulations that sample your model and then reward or penalize the control algorithm. This is not a new idea; DeepMind does the same thing, and you're welcome to read up on it. About the architecture exploration: you mostly get feedback on your choices and can run a test model of your workload, then analyze the bottlenecks and adjust, iteratively getting closer to your design goals. That includes data from the target node's PDK, which can be replaced. That's also, by the way, how one of our other products works to enable node retargeting of your IP. Hope this explanation clears things up.
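To illustrate the reward/penalize loop described above, here is a minimal sketch of deployment-time reinforcement learning, framed as an epsilon-greedy bandit over tool settings. Everything in it (the candidate settings, the run_simulation stub, and the reward values) is a hypothetical illustration, not Synopsys's actual algorithm:

```python
import random

# Hypothetical candidate tool configurations ("arms" of a bandit).
# In a real flow these would be synthesis/place-and-route parameters.
CANDIDATE_SETTINGS = [
    {"effort": "low",    "target_util": 0.60},
    {"effort": "medium", "target_util": 0.70},
    {"effort": "high",   "target_util": 0.80},
]

def run_simulation(settings):
    """Stub for an on-premises simulation of the customer's private design.
    Returns a reward (higher is better); here it is just made-up noise."""
    base = {"low": 0.5, "medium": 0.7, "high": 0.6}[settings["effort"]]
    return base + random.gauss(0, 0.05)

def epsilon_greedy(episodes=200, epsilon=0.1):
    """Epsilon-greedy bandit: explore a random setting with probability
    epsilon, otherwise exploit the best average reward seen so far."""
    totals = [0.0] * len(CANDIDATE_SETTINGS)
    counts = [0] * len(CANDIDATE_SETTINGS)
    for _ in range(episodes):
        if random.random() < epsilon or 0 in counts:
            arm = random.randrange(len(CANDIDATE_SETTINGS))
        else:
            arm = max(range(len(CANDIDATE_SETTINGS)),
                      key=lambda i: totals[i] / counts[i])
        reward = run_simulation(CANDIDATE_SETTINGS[arm])  # reward or penalize
        totals[arm] += reward
        counts[arm] += 1
    best = max(range(len(CANDIDATE_SETTINGS)),
               key=lambda i: totals[i] / counts[i])
    return CANDIDATE_SETTINGS[best]

print(epsilon_greedy())
```

The key property this mirrors is that only rewards from the customer's own simulation runs ever reach the control algorithm, so no training data leaves the deployment.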
erinadreno - Friday, March 31, 2023

From my personal experience with EDA tools, specifically the Synopsys ones, they need a lot of constraints (rules written by humans) to produce a somewhat usable result. The most critical part comes at the very beginning, with their Design Compiler, which, once you've written a script, is mostly automated anyway. I doubt AI would be able to help much, nor would companies trust AI enough for sign-off of advanced-node designs.

Dante Verizon - Friday, March 31, 2023
This is one of the rare examples of a positive use of AI. It's scary how production costs are rising; if something doesn't happen, mid-range GPUs will soon cost over $1000.
back2future - Saturday, April 1, 2023
"At 3 nm, a complex GPU will cost about $1.5 billion to develop, including circa 40% for software."with selling 1-10M GPU (licenses for IP&)devices (smth. equivalent to a RTX3090(7-8nm) or RTX4090(TSMC 4 nm)?) this would add $150-1500 on materials, production, sales network
back2future - Saturday, April 1, 2023
Maybe interesting to add sales numbers for Nvidia's 30 series and 2021 GPUs overall:

"At the 19th Annual JP Morgan Tech Conference, NVIDIA CFO, Colette Kress shared some details regarding the adoption of the GeForce RTX graphics cards, including the relative sales of the latest 30 series (Ampere) lineup. According to Kress, about 10% of all PC gamers are presently using an RTX graphics card which means a total of 20 million Turing and Ampere cards have been sold to date."
"John Peddie Research has some numbers to share, with 119 million graphics cards shipped in Q1 2021 -- a 39% increase over Q1 2020."
Share of AMD, Intel, Nvidia for PC GPUs (probably integrated and discrete combined): roughly 1/6, 2/3, 1/6.
back2future - Saturday, April 1, 2023
Share of discrete GPUs for AMD, Nvidia (2021): roughly 19%, 81% (Intel in 2021: Xe DG2 with 512 EUs and 16 GB VRAM, roughly RTX 3070/3080 level?).

PeachNCream - Friday, March 31, 2023
It's like any other automation. It just shifts costs from labor that does the work to labor that supports the technology doing the work, and then, silently, costs balloon behind the scenes and nothing changes, with the exception that an AI company hawking rudimentary ML cloaked under AI, the buzzword of the moment, profits from weak-minded C-level people desperately rubber-stamping acquisitions in the hopes of pleasing the board. Wash, rinse, repeat.

FunBunny2 - Tuesday, April 4, 2023
"costs balloon behind the scenes "and the reason for that is The Tyranny of Fixed Cost. viz, unless that automation runs full blast 24/7/365, average cost isn't minimized; there's way too much capital to be amortized and depreciated across the units. aka, if demand for your widgets slackens (does that ever happen?), you've little to no labour to make redundant as you decrease output. there's a reason China was able to steal American manufacturing: a billion pairs of hands at subsistence wages, which could be hired and fired as output oscillated. to the extent that semi-conductor bidnezz moves ever deeper toward 100% capital production function, the more desperate the industry becomes.
"shifts costs from labor to do the work to labor that supports the technology doing the work"
and, no surprise, the number of labour units, aka humans, gets smaller as automation becomes ever more embedded in the production function. OTOH, non-production labour (R&D, Suits, etc.) will increase, adding yet more to fixed cost. in due time (thanks to AI), no humans will be paid. then what do producers do?? only The Shadow Knows.
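To put the fixed-cost argument above in numbers, here is a toy average-cost model; every figure in it is invented purely for illustration:

```python
# Toy average-cost model for the Tyranny of Fixed Cost.
# All numbers are invented for illustration.
FIXED_COST_PER_YEAR = 100_000_000  # capital amortization/depreciation, $
VARIABLE_COST_PER_UNIT = 20        # materials, energy, etc., $
FULL_CAPACITY_UNITS = 10_000_000   # output running "full blast 24/7/365"

for utilization in (1.00, 0.75, 0.50, 0.25):
    units = int(FULL_CAPACITY_UNITS * utilization)
    avg_cost = FIXED_COST_PER_YEAR / units + VARIABLE_COST_PER_UNIT
    print(f"{utilization:4.0%} utilization -> average cost ${avg_cost:,.2f}/unit")
# 100% -> $30.00/unit; 50% -> $40.00/unit; 25% -> $60.00/unit:
# with little labour left to shed, slack demand drives average cost straight up.
```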