28 Comments
osxandwindows - Sunday, April 30, 2017 - link
So, what about HBM?
lilmoe - Sunday, April 30, 2017 - link
Soon (TM).
ImSpartacus - Sunday, April 30, 2017 - link
HBM2 is already made by SK Hynix and Samsung. Sammy's HBM2 is used in Nvidia's GP100 and SK Hynix's HBM2 is used in Vega 10.
But it looks like GDDR6 will provide 50% more bandwidth than the fastest GDDR5X sold today (which, itself, isn't even used at full speed in any of today's GPUs). So there won't be much of a reason for HBM2 to be used unless the exceptional efficiency is necessary. Otherwise, cheaper GDDR6 is probably preferred.
Flunk - Sunday, April 30, 2017 - link
AMD's RX Vega will have HBM2 and they still claim it will be released this quarter (Q2 2017).
Frenetic Pony - Sunday, April 30, 2017 - link
HBM2 will show up in Vega "soon," whenever that is. Theoretically HBM can still scale beyond GDDR6, because its bandwidth is per stack, with multiple stacks on a single GPU, while GDDR goes through its one bus. But adding more and more stacks is also costly, so GDDR6 is still quite competitive.
At least until HBM3, which doubles the bandwidth again, comes along. Though who knows when that'll be out.
ImSpartacus - Monday, May 1, 2017 - link
Theoretically, can't GDDR6 scale further as well?
I mean, there's no physical law preventing a 512-bit setup, or, hell, a 768-bit setup.
Just like you can add more stacks in parallel, you can add more GDDR6 die in parallel.
fanofanand - Monday, May 1, 2017 - link
I could be mistaken, but I think enlarging the bus is far more expensive than adding another die to the stack, and uses up a lot of power. There is a reason why Nvidia has almost always tried to go with a smaller bus.
JasonMZW20 - Monday, May 1, 2017 - link
No physical law? Memory controllers take up die space, and you need extra transistors to run higher speeds. It's expensive. AMD used a 512-bit GDDR5 bus on the 290(X)/390(X) to feed the bandwidth-hungry areas of the GPU (Hawaii), and AMD did manage to shrink the controllers in-die (vs. Tahiti's 384-bit bus), albeit with a penalty to maximum RAM speed (5.5-6.0 Gbps).
On top of that, you need to use more memory chips. Memory controllers are sets of dual-channel 64-bit (2x32-bit) interfaces, and each GDDR chip connects over 32 bits, so to get the number of RAM chips, divide the bus width by 32. A 512-bit bus therefore needs 16 RAM ICs on the board.
512-bit = 16 chips
384-bit = 12 chips
256-bit = 8 chips
768-bit? That's 24 RAM chips. You'd have to clamshell them (front and back of the PCB) in sets of 12. Still, that means a large PCB and a power-hungry board, even with GDDR6's reductions in power consumption.
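A quick sanity check of those chip counts, as a minimal Python sketch. It simply assumes the standard 32-bit interface per GDDR package and, as a rough rule of thumb rather than a hard spec, treats anything beyond 16 packages as needing a clamshell layout:

```python
# Illustrative only: each GDDR5/5X/6 package exposes a 32-bit interface,
# so the package count is just bus_width / 32.
def gddr_chip_count(bus_width_bits: int, bits_per_chip: int = 32) -> int:
    return bus_width_bits // bits_per_chip

for bus in (256, 384, 512, 768):
    chips = gddr_chip_count(bus)
    # Assumption: more than 16 packages means clamshelling both sides of the PCB.
    note = " -> clamshell PCB" if chips > 16 else ""
    print(f"{bus}-bit bus: {chips} chips{note}")
```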
ImSpartacus - Sunday, April 30, 2017 - link
It sounds like GV102 will be using that 16-ish Gbps GDDR6 on a standard 384-bit bus. That'll yield plenty of bandwidth. I wouldn't be surprised if actual speeds end up at something like 14 Gbps initially. That'd still be plenty.
And it's curious that they don't mention GV104. I wonder if it'll use 12 Gbps GDDR5X. That'd sync up with the recent rumors that GV104 is coming in Q3 2017 (too soon for GDDR6).
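For a rough sense of those numbers, peak theoretical bandwidth is just bus width times per-pin data rate. A minimal sketch, assuming the speculated 384-bit bus (the data rates are the commenter's guesses, not confirmed specs):

```python
# Peak theoretical bandwidth in GB/s = bus width (bits) * per-pin rate (Gbps) / 8.
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

print(peak_bandwidth_gbs(384, 16))  # 768.0 GB/s with 16 Gbps GDDR6
print(peak_bandwidth_gbs(384, 14))  # 672.0 GB/s if launch speeds land at 14 Gbps
```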
Dr. Swag - Sunday, April 30, 2017 - link
Yeah, Nvidia seems to like 384-bit buses on their high-end GPUs, so my guess would be that the product they're referring to is GV102.
Frenetic Pony - Sunday, April 30, 2017 - link
Nah, Volta seems to be delayed till 2018, which isn't surprising. But the card is pretty much certainly a new high-end Nvidia one. A 384-bit bus fits right in with their recent past offerings, while AMD seems to have HBM2 to use for that category.
Yojimbo - Sunday, April 30, 2017 - link
What reason do you have to believe it will be delayed other than this GDDR6 information? That information can be explained without any Volta delay if NVIDIA uses GDDR6 for GV102 and not for GV104 and below, as ImSpartacus pointed out.
Meteor2 - Monday, May 1, 2017 - link
Volta has always been 2018 for consumer. All production in 2017 is going into HPC.
Yojimbo - Sunday, April 30, 2017 - link
That sounds like a good analysis to me, ImSpartacus. I can't imagine NVIDIA would delay consumer Volta 6 months to save what looks to me like 10 watts or less (possibly along with some cost from using a wider bus, but if GDDR6 is more expensive than GDDR5X that's not necessarily the case). The GV102 cards could come out 6 months later without causing much of an issue. Although, I wouldn't be surprised to see GV104 in early Q4 instead of Q3.
Who is mentioning GV102 and GV104, by the way?
ImSpartacus - Monday, May 1, 2017 - link
WCCFTech compiled a rumor that listed some upcoming Volta chip codenames and also suggested that GV104 would arrive in Q3 2017 (no comment on GV102, etc.).
http://wccftech.com/nvidia-geforce-20-volta-graphi...
To keep up with the traditional pace of the new G@104 part matching the performance of the outgoing G@100/102 part, GV104 needs roughly 30-35% better performance than the 1080, which uses 10 Gbps GDDR5X. 12 Gbps GDDR5X gets a 20% bandwidth bump, which is close, maybe close enough with whatever architectural improvements come with Volta.
Micron's 12 Gbps GDDR5X is already available today (used in the Titan Xp), so it's also possible that they could have 13 Gbps parts ready later in the year, enabling a 30% bump in bandwidth. That's probably not going to happen, as it looks like everyone is sprinting to GDDR6 (letting GDDR5X stagnate), but who knows. GDDR5 got pushed to 9 Gbps absurdly late in its lifetime.
Either way, it looks like GV104 might arrive as soon as Q3 and it might not use GDDR6.
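For context on those percentages: on the GTX 1080's 256-bit bus, 10 Gbps GDDR5X works out to 320 GB/s, 12 Gbps to 384 GB/s (the 20% bump), and a hypothetical 13 Gbps part would give 416 GB/s (the 30% bump), using the same bus-width-times-data-rate formula as above.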
HomeworldFound - Sunday, April 30, 2017 - link
I love how quickly these companies have increased the amount of VRAM on graphics cards. It seems like it'll continue.
mdriftmeyer - Sunday, April 30, 2017 - link
So much for Nvidia's fandom waxing on about GDDR6 on Volta. Or is Volta not coming until 2018?
Flunk - Sunday, April 30, 2017 - link
It's pretty likely that it won't come before 2018. Pascal hasn't even been out for a year yet.
haukionkannel - Monday, May 1, 2017 - link
I am quite sure that Nvidia did say that there will not be a new architecture this year. There are fewer and fewer reasons to bring out a new architecture every year.
2017 for HPC, 2018 for consumer.yhselp - Monday, May 1, 2017 - link
Sounds a lot like NVIDIA, and possibly AMD as well, has decided against HBM for GeForce, which is a shame.
MrSpadge - Monday, May 1, 2017 - link
No, they'll use it when it makes sense. For consumer cards GDDR5/X/6 can provide enough bandwidth without major additional costs. I'm sure they will continue to use HBM or some later form of it at least for the highest-end chips, where:
- the additional cost is no problem
- the additional bandwidth is actually needed or helpful
- the improved power efficiency allows them to fit more performance into the highest TDP bin (250 W)
yhselp - Tuesday, May 2, 2017 - link
If they were to pass those cost savings onto the consumer, I'd be okay with it, but they're charging ludicrous premiums - no amount of inflation or R&D can account for the spike from $250 in 2011 (GF114, 360 mm2) to $600 in 2016 (GP104, 314 mm2) for a video card based on a medium-sized GPU. The same goes for the entire lineup; not to mention the delay of the real flagship based on the large-sized GPU, which used to come out at the beginning of every cycle.
So not only are they super greedy, seeking enterprise margins from consumers, but they won't even release the latest technologies, and we're okay with that? Not cool.
TheJian - Wednesday, May 3, 2017 - link
ROFL. It's taken Nvidia 10 YEARS to get back to 2007 profits. BTW, the job of EVERY company is to make as much money as possible. The goal of any company is also to price products as high as the market will bear. PERIOD. Failing that, you end up like AMD: LOSS after loss, year after year.
Note that each shrink is now becoming VERY costly, and GPU sales are WAY down from their highs years ago. With much less market to fight over, dwindling PC sales for ages, R&D costs exploding, die-shrink costs exploding, etc., you should expect prices to go up on the latest tech. We used to sell ~390 million PCs a year, and now that's down to ~300 million. Down again this quarter. That's a quarter of the market dead. Thank god gaming PCs have taken off, or you'd really be getting sticker shock from GPUs.
Newsflash, genius: NV only has 58% margins, and most of that comes from cards at the pro end where margins are really high. Also note, the GPU is not $600; that's for the whole card. NV/AMD don't walk off with hundreds of dollars on your typical gamer cards. On top of the shrinking market here, Nvidia went from $698 million on R&D in 2008 to $1.5 billion yearly now (and it keeps going up). So give it a rest, or go get a better job if you can't afford new toys. You get an amazing amount of performance from all sides (AMD, Intel, NV) on PCs today for the money.
My first PC when I was a kid (ok, an Apple //e, whatever, it was a PC) cost my parents over $3K (they gave up a vacation for it) and didn't even have a multi-color monitor at 13in... LOL. Today I can get a pretty great PC for gaming with a huge monitor etc. for under a grand, easy. You choose what you buy. Nobody is forcing you to buy 1080 Tis etc. Be thankful some people are buying at ridiculous prices so the majority of us can get some really great tech at reasonable pricing. You complain about higher-end stuff but seem to fail to realize those sales are exactly what pays for really great mid-range everything (CPUs, GPUs, SSDs, etc.).
AMD had better start getting more "greedy" or they're dead. They just had a full quarter's worth of Ryzen sales (brand new CPU tech) and couldn't make a profit, and margins are a paltry 34%. Vega is going to have a real challenge with the 1080 Ti etc. (Volta can come out by Christmas if NV really wants), so I don't expect that to help. The only things AMD has coming that will pull in some real margins are 16-core and up for desktop (HEDT, whenever they hit) and server.
I really hope they told MSFT/Sony we now want 25% margins on consoles, period, or go fly a kite. The single-digit margins they started with on Xbox One/PS4 were stupid and should have been passed on like NV did (not worth blowing R&D on that when it should be spent on CORE tech, Jen said... He was right). To date I haven't heard AMD made it above 15% margins on consoles (they said mid-teens last I heard), and sales are nowhere near enough to be worth costing them the CPU race, and basically the GPU race for ages now. Not saying they have a bad GPU, just that NV always has something better (either in watts or perf or both), which gives them PRICING power. Same for AMD and Intel CPUs.
That said, I can't wait for AMD's server stuff, as I think that is the real cash cow that can get them back to profits for a few years perhaps, and maybe kill some debt (badly needed). If Vega really is good, I REALLY hope they price it accordingly and get every dollar they can from it. If it BEATS a 1080 Ti in more games than it loses, price it like or better than the 1080 Ti! No favors, no discounts. Quit the stupid price war that you can't win with NV. Perf should be there; it's just a question of drivers and watts IMHO. If they are close to NV watts and charge accordingly, I don't expect NV to be price cutting. Jen has said for years he wishes they'd stop their stupid pricing. Intel, on the other hand, never avoids a price war with AMD, so better to price right on GPUs, where NV would rather NOT have one.
Don't get me wrong, I love a great price too, but AMD can't afford any favors. I have zero complaints about NV pricing either. Whatever price I decide to pay for my next product (from either side), I'll be getting a great deal. It's up to you how good that bargain is. Is that last 10% perf worth it or not? It all depends on your wallet, I guess. Or do you draw that line at the last 20%? Whatever, same story. Great 1080p gaming can be had by even poor people today. Above that, YMMV, and you get what you pay for, for the most part. Steam and GOG can fill a poor person's HD yearly with entertainment to run on that PC. Dirt poor? Don't buy those games until they're "last year's" games or even further back. It's not hard to have a great time at a decent price (hardware or games). Again, enhance your paycheck if you can't afford what you want. It's not their job to fulfill your heart's desires at prices you love. :)
fanofanand - Monday, May 1, 2017 - link
I would assume it will be AMD; Nvidia rarely takes a risk with new tech. This will be nice for midrange and lower-end cards when the true high-end cards are all HBM.
iwod - Monday, May 1, 2017 - link
The amount of bandwidth required pretty much scales linearly with resolution and GPU die space. The more GPU units, the more data is required to feed them.
The current state-of-the-art 14nm GPUs already approach this memory bandwidth limit at 384-bit with GDDR5X. GDDR6 basically allows Nvidia to scale the Titan Xp to 7nm (TSMC 10nm is a short-lived node practically made for Apple only).
Which again delays HBM2's appearance.
So GDDR6 will be cheaper than HBM2, provide much of the bandwidth needed for next-gen GPUs, and require fewer complications/changes to GPU design. Apart from lower power per unit of bandwidth, what is the advantage of HBM? I am pretty sure those using a 250W GPU don't care about an extra 10 or 20W.
Haawser - Thursday, June 8, 2017 - link
Pseudo channels that can do simultaneous R/W, lower tEAW, temp-based refresh, smaller packaging, optional ECC, to name a few.
In comparison, faster GDDRX is a bit like putting a new shade of lipstick on a pig. Different, but not really much more attractive.
blaktron - Monday, January 15, 2018 - link
Small quibble: GDDR5 replaced GDDR3, not GDDR4. Almost everyone skipped GDDR4.
Funny how quickly we forget our history.