CityBlue - Thursday, August 13, 2020
Is this Intel GPU glued together? Asking for a friend.

dullard - Thursday, August 13, 2020
Are you referring to AMD complaining that Intel's chips are glued together?

Sahrin - Thursday, August 13, 2020
I think he’s referring to Intel complaining that AMD’s chips are ‘glued together.’
https://www.techpowerup.com/235092/intel-says-amd-...

dullard - Thursday, August 13, 2020
Which was a response to AMD and AMD supporters saying it first back in 2006: https://www.computerworld.com/article/2818842/amd-...
I'm just wondering if it is okay when AMD says it but not Intel.

dotjaz - Friday, August 14, 2020
There's a big difference between dies talking to each other through an I/O die on the same package and cores in the same package talking through an off-package northbridge via the FSB. It's insane to pass information off the package and back again just so the cores can talk to each other.
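To put rough numbers on why the off-package round trip hurts, here's a minimal latency sketch; the per-hop costs are purely illustrative assumptions, not measured figures:

```python
# The hop costs below are illustrative assumptions, not measurements.
ON_PACKAGE_HOP_NS = 30    # assumed one-way hop: chiplet -> on-package I/O die
OFF_PACKAGE_HOP_NS = 90   # assumed one-way hop: die -> off-package northbridge (FSB)

def die_to_die_ns(hop_ns: float) -> float:
    """One die-to-die transfer: out to the hub, then back to the target die."""
    return 2 * hop_ns

print(f"On-package I/O die (Zen 2 style):    ~{die_to_die_ns(ON_PACKAGE_HOP_NS):.0f} ns")
print(f"Off-package NB via FSB (Core 2 era): ~{die_to_die_ns(OFF_PACKAGE_HOP_NS):.0f} ns")
```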
"Which was a response"Not so much a response as a desperate callback to a diss laid out 11 years prior that referred to an entirely different set of technologies.
AMD's product that Intel accused of using "glue" was designed from the ground-up to be used with multiple dies on a single package communicating over a dedicated interconnect. Meanwhile, Intel's product that AMD had previously accused of using "glue" effectively took a dual-socket FSB-based-communication configuration and crammed it onto a single package, something they had done before with the Pentium D to no great acclaim.
So that's why some folks find it "okay" when AMD says it and "not okay" when Intel did - because there are material differences between the solutions being described this way. You *could* argue that there's similarity in the reasons *behind* the trash-talking, though. In both cases, it was a comment made by a company that wasn't able to compete on even terms with the product it was trying to demean.

dullard - Thursday, August 13, 2020
Edit: that was even said back in 2005 by our own Anand: https://www.anandtech.com/show/1665/2

PaulHoule - Thursday, August 13, 2020
The disadvantage of "glued together" versus "smaller transistors" is that in the long term cost is proportional to die area, so it should be cheaper to deliver more transistors via a process shrink than by sticking together dies built on a larger process.

Advanced packaging is usually going to be expensive. Some cases might be simple (maybe you can come out ahead with 8 chiplets as opposed to 1 chip that is 8 times as large) and others offer high performance (HBM) which could translate to relative economy. But advanced packaging shines for low-volume, expensive parts such as the brains for a Stinger missile.
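The "8 chiplets versus one 8x-size chip" case is easy to see with a toy yield model. Here's a minimal sketch using a simple Poisson defect model; the defect density and die area are made-up illustrative numbers, and packaging costs are ignored:

```python
import math

DEFECT_DENSITY = 0.1   # defects per cm^2 (assumed)
CHIPLET_AREA = 0.8     # cm^2 per chiplet (assumed)
N_CHIPLETS = 8

def poisson_yield(area_cm2: float, d0: float = DEFECT_DENSITY) -> float:
    """Fraction of dies with zero defects under a Poisson defect model."""
    return math.exp(-d0 * area_cm2)

# Monolithic: one die 8x the chiplet area; any defect kills the whole chip.
mono_yield = poisson_yield(N_CHIPLETS * CHIPLET_AREA)

# Chiplets: small dies yield independently and bad ones are binned out
# before packaging, so wasted silicon scales with the small-die yield.
chiplet_yield = poisson_yield(CHIPLET_AREA)

total_area = N_CHIPLETS * CHIPLET_AREA
mono_silicon_per_good = total_area / mono_yield        # cm^2 per good product
chiplet_silicon_per_good = total_area / chiplet_yield  # ignores packaging cost

print(f"Monolithic yield: {mono_yield:.1%}, silicon per good part: {mono_silicon_per_good:.2f} cm^2")
print(f"Per-chiplet yield: {chiplet_yield:.1%}, silicon per good part: {chiplet_silicon_per_good:.2f} cm^2")
```

With these assumed numbers the monolithic die yields ~53% against ~92% per chiplet, so the chiplet route spends roughly 40% less silicon per good part - that's the "come out ahead" case, before the advanced-packaging cost eats into it.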
Lord of the Bored - Friday, August 14, 2020
Truly, our man Anand was a trendsetter.

Null666666 - Thursday, August 13, 2020
Glue is the firmware and software that negotiate the interconnects between the chip modules.

JayNor - Thursday, August 13, 2020
Several GPU products coming in 2021, which is a bit of a surprise. Especially the hardware-accelerated ray tracing, so soon.

Krysto - Thursday, August 13, 2020
Paper launches, most likely. It doesn't matter, as Intel's tactic of "getting people to wait for its GPUs" will not work. By the time these are on the market, people will be so blown away by Ampere/its refresh and RDNA 3 that they'll forget all about Intel's offerings, or find them boring.

whatthe123 - Thursday, August 13, 2020
Everyone is excited about Ampere. Nobody is excited about RDNA 3, including AMD, who can't even confirm whether they've locked down a node for it. RDNA 2 in the Series X is running at around RTX 2080 spec yet in roughly the same power envelope, even though it's on a superior node, with the Series X eating up to 300W. Not exactly a good sign for RDNA 3. AMD hit a home run with their chiplet CPUs but are still just undercutting Nvidia instead of competing in the GPU department.
Intel, well, if it's even vaguely performant, that would be a huge surprise to everyone, so they've got a really low bar to work with.

FreckledTrout - Thursday, August 13, 2020
I think you need to wait and see on RDNA 2. You might be right. However, I would not assume you can extrapolate from the custom ASIC in the Xbox to the dedicated GPUs.

Spunjji - Friday, August 14, 2020
I don't think this is accurate at all. The OG RTX 2080 draws ~225W, and I haven't actually seen any solid specs for Series X power consumption yet - that 300W "estimate" appears to have originally come from Digital Foundry and was apparently based on the assumption that it would use RDNA. RDNA is already at approximate PPW parity with Turing, so seeing no improvement at all from RDNA 2 would be... unexpected.

Personally I'd be surprised if the Series X pulls more than 250W total, given what we know about the thermal design and the clock speeds at which they're planning to run it; that would also be more in line with the claimed PPW improvements from RDNA to RDNA 2.

I'm not personally expecting RDNA 2 to take any performance crowns, but I am expecting it to compete *meaningfully* with Ampere at everything but the high-end.
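A quick back-of-the-envelope check of that power argument; all inputs are assumptions taken from this thread (2080 board power, RDNA/Turing PPW parity, AMD's claimed +50% PPW for RDNA 2), not measurements:

```python
# All inputs are assumptions taken from the discussion, not measured figures.
RTX_2080_W = 225        # rough board power of the OG RTX 2080
RDNA2_PPW_GAIN = 1.5    # AMD's claimed perf-per-watt gain from RDNA to RDNA 2

# If RDNA is at rough PPW parity with Turing, a 2080-class RDNA GPU also
# lands near 225 W; the same performance on RDNA 2 would then need about
# 1/1.5 of that power.
rdna2_2080_class_w = RTX_2080_W / RDNA2_PPW_GAIN

print(f"2080-class RDNA 2 GPU: ~{rdna2_2080_class_w:.0f} W")
# ~150 W for the GPU leaves room for the CPU, memory, and storage inside
# a sub-250 W console power budget.
```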
PaulHoule - Thursday, August 13, 2020
For a long time Intel has not even tried to have a public roadmap that makes sense. They have been so cagey at times that it has been a threat to their business continuity.

They might not hit their goals, but it is good to see that they have plans and goals. Also, hearing that they are looking to foundries for 7nm means they have a plan B in case Intel 7nm ends up like Intel 10nm. That makes it much more likely we will have Intel around in 10 years.

damianrobertjones - Friday, August 14, 2020
I wouldn't wish to mess with 'Rambo Cache'.

JayNor - Friday, August 14, 2020
Intel didn't say their 7nm process is delayed by 6 months. They said their 7nm yields were low and their 7nm CPU schedules will be shifted by 6 months. Here is the quote from the Q2 conference call. Their 7nm CPUs were already on the roadmap for 2022. Their Sapphire Rapids CPU in 2021 is 10nm.

"We are seeing an approximate six-month shift in our seven-nanometer-based CPU product timing relative to prior expectations. The primary driver is the yield of our seven-nanometer process"

JayNor - Friday, August 14, 2020
Intel's announced DG1 is already available in DevCloud.

They've already demoed the ability to stitch together 4 HP tiles, so I'm going to go out on a limb and guess that they can stitch together four LP tiles to create the SG1.
They stated SG1 will be available in 2020.