37 Comments
TheJian - Thursday, July 2, 2020 - link
I really hope this doesn't end up on consumer cards unless it can be produced EASILY (meaning it won't be in such a shortage that it kills multiple flagships for AMD...LOL), and CHEAPER than GDDR6 (or whatever). Win the price war or feck off, UNLESS you can PROVE it makes my games faster at 30fps or better. Meaning, there's no point in showing me it winning at 32K (yeah, that's about how dumb this crap gets) when it's running 0.1fps doing it. "It's 10 times faster than GDDR75, which only gets 0.01fps, so it sucks, get HBMblahblahBS now!"

Wake me when it wins something, does it cheaper, and is EASY to produce. Not sure why AMD ever got on this dumb bandwagon that destroyed flagships (2 so far? 3 soon?). There are very few situations that can exploit this memory, so don't waste your effort designing for it unless you are IN THAT USE CASE.

Let's hope AMD stops making mistakes that kill NET INCOME, and raises prices overall to finally get some REAL NET INCOME. Quit the discounts. Charge like you are winning when you ARE.
brucethemoose - Thursday, July 2, 2020 - link
I didn't know HBM was part of GPU politics...

But now that the compute lineup is splitting from the graphics lineup, I suspect you won't have to worry about that.
FreckledTrout - Thursday, July 2, 2020 - link
AMD was looking 5-10 years out :) The entire point of HBM was to stack it on die; the holy grail was for APUs. There was a lot of marketing discussing this a few years ago. I suspect the endgame to kill Nvidia in the dedicated GPU segment is an APU that has enough CPU for anything the average person might do (say, 8 cores), a GPU that can play 1080p flawlessly and is OK at 4K, and stacked HBM memory, all on die. Oh, it's coming, and once AMD is on TSMC's 3nm in a couple of years, densities will be such that an 8-16 core chiplet will be tiny, so there is plenty of space for a decently high-end GPU on die.

Deicidium369 - Sunday, July 5, 2020 - link

Man, I grow some amazing cannabis - but whatever you are smoking has mine beat. AMD will NEVER kill Nvidia. Won't happen. Ever.
dotjaz - Sunday, July 5, 2020 - link
Your weed must be trash. Nvidia's dGPU business is dying, that's a fact; as a matter of fact, the entire dGPU segment is dying.

HBM+iGPU will kill any remaining dGPU on laptops soon enough. That's not up for debate, just a fact.

Desktop PC is dying fast, I mean really fast; it's shrinking every year. PC gaming is shrinking even faster. With AMD commanding both consoles, there's no escaping the fact that iGPU+HBM can match PS5/XBSX performance within 2 years, rendering dGPUs below $400 useless. How long do you think Nvidia can remain profitable without any iGPU or console?
FreckledTrout - Monday, July 6, 2020 - link
Yeah, people have blinders on if they don't see what Intel and AMD are doing with APUs.

Tams80 - Tuesday, July 7, 2020 - link

Nvidia, to their credit, are also aware of this, and it isn't as if they aren't diversifying. They are, and are doing so very successfully in most cases. Plus, they still have their ARM developments to fall back on if they have to.

AMD appear to have set themselves up nicely.

As for Intel, well, they are a slow, likely quite corrupt, behemoth that is turning in the right direction. And they do have the advantage of having their own fabs (although that hasn't been so great for the last few years, they are still an asset).
FreckledTrout - Tuesday, July 7, 2020 - link
I agree with you, Tams80. Nvidia are doing a great job moving into AI and the data center. People arguing that AMD and Intel won't make APUs powerful enough to all but eliminate the very high-end GPUs just don't see what is going on. We will easily have enough CPU cores and GPU power in a single APU once manufacturing is on TSMC's 3nm or a comparable gate-all-around process. When this happens, only x86 APU makers will be relevant, which leaves only Intel and AMD standing. That is, unless Nvidia does something crazy like buy VIA, which is the only other x86 license holder (well, Zhaoxin as well, but they are under VIA).

RSAUser - Wednesday, July 15, 2020 - link

dGPU is not going to die any time soon, and Nvidia is diversifying because they make a lot of money doing so, and it's a good thing to do in business anyway.

As long as dGPUs keep getting better, Nvidia is going to be fine: people will move up to higher resolutions, and the industry will take advantage of the fact that GPUs are getting stronger, so you'll need a stronger GPU.
FreckledTrout - Monday, July 6, 2020 - link
Intel and AMD will push Nvidia out of the dedicated GPU market in under a decade. At this point it's not if, it's when. Nvidia will move into the data center for AI, etc.

Strunf - Monday, July 6, 2020 - link

Right, let's stack one bad thermal conductor on top of a power-hungry device, makes sense...

nVIDIA will never die on the dGPU; their dGPU is today an extension of their other business, and creating a dGPU costs them little since it shares most of its technology with their other products.

Anyway, Intel would kill AMD faster than AMD would kill NVIDIA. NVIDIA owns the GPU high end and nothing seems to stop them; if Intel comes up with a decent iGPU it will eat AMD market share on the low end, and AMD will be left with little to nothing.
Ej24 - Thursday, July 2, 2020 - link
Their GCN architecture required huge memory bandwidth to be efficient and reach high performance. Reaching that speed with GDDR5 would have required a 512-bit bus, which would consume something like 75W of power for the memory alone. HBM could provide more bandwidth with only ~15W of power. It was out of necessity, to make the GCN architecture perform competitively while staying within a reasonable power limit.
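To put rough numbers on that trade-off, here is a back-of-the-envelope sketch. The bus widths and transfer rates are illustrative (roughly a Hawaii-class 512-bit GDDR5 setup versus a Fiji-class 4-stack HBM1 setup), and the power figures are simply the ballpark ones quoted above, not measured values:

```python
# Rough comparison of the two memory configurations discussed above.
# Peak bandwidth (GB/s) = bus width (bits) * data rate (GT/s) / 8.

def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
    return bus_width_bits * data_rate_gtps / 8

# 512-bit GDDR5 at ~5 GT/s (Hawaii-class) vs 4 stacks of 1024-bit HBM1 at 1 GT/s (Fiji-class)
configs = [
    ("GDDR5, 512-bit @ 5 GT/s", peak_bandwidth_gbs(512, 5.0), 75),      # ~320 GB/s, ~75 W (quoted estimate)
    ("HBM1, 4 x 1024-bit @ 1 GT/s", peak_bandwidth_gbs(4096, 1.0), 15), # ~512 GB/s, ~15 W (quoted estimate)
]

for name, bw, watts in configs:
    print(f"{name}: {bw:.0f} GB/s at ~{watts} W -> {bw / watts:.1f} GB/s per watt")
```

Run as-is, the sketch shows why the comment frames HBM as a necessity: more bandwidth for roughly a fifth of the memory power budget.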
Quantumz0d - Friday, July 3, 2020 - link

THIS. So many miss this aspect of the GCN-HBM relationship.

Jorgp2 - Thursday, July 2, 2020 - link

Fuck that. HBM is amazing for water-cooling and small form factor GPUs.
zodiacfml - Saturday, July 4, 2020 - link
One of AMD's strategies to compete with Nvidia back then was to use better memory hardware across their products. Nvidia should be using HBM on consumer gaming cards to improve volumes and make HBM standard, at least on the high end.

I like the Sapphire Vega 56 design so much that I bought it while it was expensive during the cryptomining craze. Too bad for me that I game so little.
Deicidium369 - Sunday, July 5, 2020 - link
Nvidia is the biggest user of HBM - it is standard on their high end - their high end is compute cards.

Dribble - Monday, July 6, 2020 - link

It was more of a strategy to allow them to continue using an inefficient GPU design - Vega/Polaris were way behind Nvidia on efficient usage of memory bandwidth. AMD couldn't afford to fix this, so they sticking-plastered over it by attaching very fast HBM memory, which made their cards expensive and harder to make but was the only way they could stay competitive.

quorm - Thursday, July 2, 2020 - link

Hoping for more APUs with integrated HBM.

quorm - Thursday, July 2, 2020 - link

Maybe when AMD gets around to putting RDNA in their APUs they can glue in some HBM, too.
It's AMD. I'd venture AMD will wait until after Intel starts offering Intel "APUs" with HBM2E (or later) before even considering it for their own AMD laptop APUs.

TheReason8286 - Thursday, July 2, 2020 - link

I'm sorry, but that doesn't even sound right. You really think Intel is about to push the industry forward like that? I'd wager AMD does it first. Intel is all about the status quo more than the others.

jeremyshaw - Friday, July 3, 2020 - link

It's AMD. HBM + APU has been a blindingly obvious thing for a long, long time now. Intel even coaxed AMD to make a special HBM GPU for them, on Kaby Lake G (quad-core Intel SoC + AMD "Vega-not-Vega" GPU with HBM2 memory, all three on a single package substrate). Since then, Apple has made a habit of collecting special HBM2 GPUs from AMD that never see the market in non-Apple laptops.

Yet AMD refuses to release any HBM APU. We are on the third generation of Zen + Vega APUs. They were even willing to make a Zen + Vega APU with faster GDDR5 memory controllers for a small Chinese company, so that company could make a custom PC exclusively for the Chinese domestic market. That special APU had a 256-bit GDDR5 memory bus and more than twice the number of CUs of any other Zen + Vega APU (24 in this one). It also had more than 3 times the memory bandwidth of any Zen + Vega APU on the market (256GB/s vs Renoir's 68.3GB/s).
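Those bandwidth figures follow directly from bus width times data rate. A quick sketch, assuming the custom APU ran its GDDR5 at 8 GT/s (inferred from the quoted 256GB/s) and that Renoir's 68.3GB/s comes from LPDDR4X-4266 on a 128-bit bus:

```python
# Peak memory bandwidth (GB/s) = bus width (bits) / 8 * data rate (GT/s).
# The data rates are assumptions chosen to reproduce the figures quoted above,
# not official spec-sheet values.

def peak_bw_gbs(bus_bits: int, data_rate_gtps: float) -> float:
    return bus_bits / 8 * data_rate_gtps

custom_apu = peak_bw_gbs(256, 8.0)    # 256-bit GDDR5 @ 8 GT/s -> 256 GB/s
renoir = peak_bw_gbs(128, 4.266)      # 128-bit LPDDR4X-4266   -> ~68.3 GB/s

print(f"Custom APU: {custom_apu:.0f} GB/s")
print(f"Renoir:     {renoir:.1f} GB/s ({custom_apu / renoir:.2f}x difference)")
```

256 / 68.3 is roughly 3.75, which is where the "more than 3 times" figure comes from.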
Renoir finally provides an uplift in the memory department, only to see AMD regress in CU count. "Oh, but the individual CUs are faster," yet all we get are minimal GPU gains (sometimes minimal regressions, too) over the previous gen. Why? Fewer CUs, that's why. Having faster CUs doesn't mean much if their total number is cut down to the point where Nvidia's old MX150, which predates Zen/Ryzen altogether, is still a legitimate competitor against it. 14nm tech from 2017 should never be competitive against the 3rd/4th iteration of AMD 7nm products from 2020 (AMD had 7nm Vega in 2018, 7nm Zen2 in 2019, 7nm RDNA in 2019, and now another 7nm Vega [APU] in 2020).
Never had I seen a company with such a collection of talent, leadership, and knowledge, fight so hard to eke out critical advantages in key areas, just to piss them away, time and time again.
brantron - Friday, July 3, 2020 - link
You are pretty much just complaining that people have uses for integrated graphics and graphics cards. Good luck with that one.

If the combined efforts of Intel, AMD, Dell, HP, et al. couldn't turn Kaby Lake G into a de facto standard, that should say all you need to know about the odds of AMD pulling that off all by their lonesome.
jeremyshaw - Friday, July 3, 2020 - link
@brantron: "Combined efforts" = no driver support whatsoever.

I highly doubt your POV when it's clear as day it was not a combined effort. I stand by my view of Intel coaxing AMD into making a custom GPU for them. AMD didn't care that they had just proved the idea of HBM in mobile installations. Intel wasn't about to share the love on their platform. Dell and HP both went out of their way to design new laptops altogether for that package.

HP proved the laptop they designed could handle a small-footprint dGPU. So what did AMD do? Wait until Apple wanted one, to make sure it would only stay exclusive to Apple. Another wasted opportunity to take point and lead, instead of following Apple and staying a subservient dog. Problem is, Apple is eyeing their own newborn, and they will put down their old dog sooner or later.

Again, AMD has all of the necessary technology and has been part of taking the lead in many critical partnerships. They are just wasting away their tech lead, letting others take the credit, and sapping away their advantages. It wouldn't surprise me if Intel made it to market first with an HBM APU.
Deicidium369 - Sunday, July 5, 2020 - link
By coaxing, you mean asking them to produce a custom part that would be sold in quantities AMD has never sold before.

quorm - Friday, July 3, 2020 - link

I think part of the reason AMD reduced the CUs in Renoir is limited memory bandwidth.

As far as laptops go, I wonder how big of an advantage physically separating the two major heat sources via discrete graphics is. It seems that even if the APU bandwidth issue were addressed via HBM, this would be a limiting factor. So perhaps this is yet another reason such APUs are still not being made.
Fulljack - Friday, July 3, 2020 - link
It's not that simple in the mobile space. Currently AMD's strategy for APUs is to focus on mobile first, then bin them for the desktop space. That makes them limited by die size, PPA, and efficiency, which are the design rules for laptops with limited thermal headroom, but not really a problem on desktop.

Also, increasing CU count doesn't always show a consistent performance gain, especially when Vega is bandwidth-starved on DDR4 memory.

If you look at the Renoir die, the NGCUs are smaller than the CCXs, unlike Picasso where it's the other way around. I think it shows that AMD is focusing here on their 25x20 goal, making it far more efficient than its predecessor rather than brute-forcing performance gains through clock speed like Intel's 14nm+++.
Deicidium369 - Sunday, July 5, 2020 - link
Mid-range cards should not get HBM. AMD produces mid-range cards.

"Never had I seen a company with such a collection of talent, leadership, and knowledge, fight so hard to eke out critical advantages in key areas, just to piss them away, time and time again." I guess you are too young to remember Atari and Commodore.
Deicidium369 - Sunday, July 5, 2020 - link
When you are the goat at the top of the hill, the status quo is fine - it's called Winning.

bananaforscale - Thursday, July 2, 2020 - link

"This in turn has led to Samsung becoming the principle memory partner"

Principal. As in main. Principle is something you follow in actions.
Ryan Smith - Thursday, July 2, 2020 - link
Well, that's embarrassing for a professional writer...

Thanks!
eastcoast_pete - Friday, July 3, 2020 - link
Good! HBM memory (as HBM2, 2E) allows for more robust dGPUs in portables such as laptops, due to its lower power draw. Just look at the most recent MacBook Pro with a dGPU. AFAIK, it also has lower latencies than other VRAM, which makes it interesting for CPU use as well, although that will probably stay reserved for server and high-end workstation use.

anad0commenter - Friday, July 3, 2020 - link

Yes! This is what people don't get. HBM memory would be amazing for laptop-sized GPUs. Nvidia produces multiple laptop SKUs of the same chip for different cases (there are at least 4 different mobile RTX 2080 variants). There are the Max-Q versions, which are about a tier below their desktop counterparts, and then there are the full-powered versions, which are much closer to their desktop monikers (there are also multiple in-between versions that are power-starved by OEMs for thermal reasons and thus perform nowhere near their tier, but still cost the same)!

HBM on mobile GPUs, on the die itself, would save on space in the laptop, heat generation, as well as power usage! The only downside is the cost of the chips... and unfortunately, that's a big, big downside, especially for people who know nothing about laptops and are not ready to pay through the nose for something they don't understand.
Deicidium369 - Sunday, July 5, 2020 - link
Or it's a horrible idea and people know what they want. I would vote for the horrible-idea theory.

Smell This - Sunday, July 5, 2020 - link

Makes me wonder if *Deicidium369* is the same troll as *H-Stewart* and *Phynez* ...
JoeDuarte - Tuesday, July 7, 2020 - link
Anyone have a ballpark of how much this stuff costs, either from Samsung or SK Hynix? Or what regular HBM2 goes for these days?

What about latency? How does HBM2(E) latency compare to DDR4 or GDDR6, from processor to memory? Does this stack configuration reduce latency?
Oxford Guy - Wednesday, July 8, 2020 - link

"What about latency?"

I was going to ask the same thing.