17 Comments
Mr Perfect - Thursday, October 20, 2022 - link
Personally, this is the launch I'm watching. Arc was 2020 midrange and RTX 4000 is expensive and power hungry, so a 2022 high end card with an eye on power is tempting.

Any chance Anandtech will be reviewing them?
Ryan Smith - Thursday, October 20, 2022 - link
TBD
saylick - Thursday, October 20, 2022 - link
Ryan, I think you have a typo at the end of the article. You wrote that the presentation is slated to air on Nov 29th, but it clearly is for Nov 3rd. Also, do you think you will have a GPU review this time? :)
Mr Perfect - Thursday, October 20, 2022 - link
Fair enough. Hopefully your staffing issues get sorted out, because your GPU articles are sorely missed.
PeachNCream - Wednesday, October 26, 2022 - link
I would actually agree. Modern video-based reviews are not as effective at presenting the graphs and charts that benchmark-heavy scenarios like these call for. Though if all else fails, Tom's Hardware is owned by the same company and will likely have a review. It also has some ex-AT staff, so if you were a fan of someone, you might find their familiar typos not too far away.
kpb321 - Thursday, October 20, 2022 - link
It will be interesting to see what they do here with chiplets. Looking at the 6000 series, you could make the base 6400/6500 a single chiplet with a 64-bit memory bus. Then the 6600 would be 2 chiplets, the 6700s would be 3 chiplets, and the 6800/6900 would be 4 chiplets. That would pretty much line up the memory bus scaling, but not quite the compute units or cache. I'm not sure if you could present 4 chiplets with a PCI-E 4x link each as a single PCI-E 16x device, or if a 12x PCI-E link is even valid. A central IO die with the PCI-E link, Infinity Cache, and the video encode/decode blocks could make a lot of sense too. You'd avoid having 2 or 3 unused chiplet links on the base card and the possibility of non-uniform access if you had 2 hops to memory/cache on another chiplet.

They could also use chiplets only in the highest tier or two, keeping more of the low/mid range cards on single chips. Maybe the 7500 is a single chip with no multi-chip support, the 7600 is a single chip with multi-chip support, the 7700 is two partially enabled 7600s, and the 7800/7900 are two 7600s.
Either way it will be interesting to see how they pull it off and how they keep the power costs manageable for the high bandwidth interconnects between the chiplets.
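A quick back-of-envelope sketch of the scaling speculated above; every figure in it is a hypothetical illustration of that speculation, not a leaked or confirmed spec. It also shows where the odd x12 PCI-E link for a three-chiplet part would come from.

```python
# Hypothetical chiplet scaling, mirroring the speculation in the comment above.
# None of these figures are confirmed specs.

CHIPLET_BUS_BITS = 64    # assumed memory bus width per graphics chiplet
CHIPLET_PCIE_LANES = 4   # assumed PCIe lanes exposed per chiplet

hypothetical_lineup = {
    "6400/6500-class": 1,  # single chiplet
    "6600-class": 2,
    "6700-class": 3,
    "6800/6900-class": 4,
}

for tier, chiplets in hypothetical_lineup.items():
    bus_width = chiplets * CHIPLET_BUS_BITS
    pcie_lanes = chiplets * CHIPLET_PCIE_LANES
    print(f"{tier}: {chiplets} chiplet(s), {bus_width}-bit bus, x{pcie_lanes} PCI-E link")
```

With those assumptions, the three-chiplet tier ends up with a 192-bit bus and an x12 link, which is exactly the awkward case raised above.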
Mr Perfect - Thursday, October 20, 2022 - link
The rumor mill has us covered on this one, assuming their sources are correct, of course.

The biggest die, Navi31, is rumored to be a large graphics die with six smaller memory dies clustered around it. Navi32 should be a mid-sized graphics die surrounded by four memory dies. The smallest die, Navi33, is still monolithic.
https://videocardz.com/newz/amd-navi-3x-gpus-expos...
MooseMuffin - Thursday, October 20, 2022 - link
I wonder which cards they'll choose to launch. Nvidia has clearly decided they want to sell through their inventory of the 3xxx cards and only launched their 4xxx cards at the highest end that the older cards can't reach. Presumably AMD is in a similar situation with last gen inventory, but they do have an opening if the new stuff resets the price/performance curve.
webdoctors - Thursday, October 20, 2022 - link
I think you'll be disappointed on the price/perf curve. Looking at the MSRPs for AMD CPUs in terms of price per transistor, it's unlikely that GPUs with 10X the transistor count will cost much less than 10X as much. Fab space is limited, and I'd imagine they'd want to prioritize fabbing CPUs, which make them much more money, over costly GPUs.

With the quality and demand of their CPUs, it might make sense to run a very limited batch of GPUs until fab pricing comes down.
mcnabney - Thursday, October 27, 2022 - link
Don't count transistors (and that 10x is wrong).

Navi 31 is using the cheaper N6 process. Total die size is under 350mm^2, which is going to save them a ton of money. Navi 33 is monolithic and is around 200mm^2. Don't worry about GPU production; if there is any limit, it will be GDDR6 availability. There won't even be any real crippled-core binning, only binning for chiplets that can clock higher, for selling MCDs that go into hot-clocked boards with their partners (or the RX 7900/7950).
If they have a low volume of RX 6xxx boards in the channel, they can undercut Nvidia on both price and power demand, which can drive up their market share (they already own the consoles). Nvidia's 12-pin connector on their 40 series is not well timed for the green team.
haukionkannel - Thursday, October 20, 2022 - link
The high end. AMD also has to sell old stuff, so expect to see the high end priced about the same as Nvidia's.
nandnandnand - Thursday, October 20, 2022 - link
They will have to undercut Nvidia's prices substantially to sell. Watch the 7900 XT cost $400-600 less than the 4090.

People are choosing to buy high end instead of mid range cards just to make AI images, and Nvidia GPUs just work for that. That's a disaster years in the making.
Qasar - Thursday, October 20, 2022 - link
" Watch the 7900 XT cost $400-600 less than the 4090." that would be good , here at least, the 4090 here, starts @ $2220 and top out at $2780scineram - Friday, October 21, 2022 - link
Bullshit. The 4080 is $400 less than the 4090. They will easily beat that.
HarryVoyager - Monday, October 24, 2022 - link
From what their BOM costs are expected to be, they should be able to do that and still make a good profit margin.

The chiplet tech we're looking at is expected to let them make a card that is raster-competitive with the 4080/4090 using the 6nm and 5nm nodes, with smaller individual dies.
Material costs should be much lower and yields should be much higher than Lovelace on the 4N node.
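For anyone curious why smaller dies should yield so much better, here is a minimal sketch using the textbook dies-per-wafer approximation and a Poisson yield model. The defect density and the ~600 mm^2 "large monolithic die" figure are illustrative assumptions, not actual TSMC numbers; the ~200 mm^2 figure echoes the Navi 33 estimate mentioned earlier in the thread.

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> float:
    """Textbook approximation; ignores scribe lines and edge exclusion."""
    radius = wafer_diameter_mm / 2
    return (math.pi * radius ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2: float, defects_per_cm2: float = 0.1) -> float:
    """Probability that a die picks up zero random defects."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

for label, area in [("~200 mm^2 chiplet-sized die", 200.0),
                    ("~600 mm^2 large monolithic die", 600.0)]:
    good_dies = dies_per_wafer(area) * poisson_yield(area)
    print(f"{label}: roughly {good_dies:.0f} good dies per 300 mm wafer "
          f"(assuming 0.1 defects/cm^2)")
```

With those assumed numbers, the smaller die nets roughly five times as many good dies per wafer, which is the kind of yield advantage being described here.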
Chaser - Monday, October 24, 2022 - link
I'm not expecting AMD to present a game changer, except for faster rasterization performance and competitive pricing. Same thing as last gen, just faster. And I do not believe they will equal or surpass Nvidia's dedicated RT performance. But I could be wrong, and I hope I am wrong.
meacupla - Tuesday, October 25, 2022 - link
From the rumors I have seen, AMD thinks their 7900XT will be competitive.