JoeyJoJo123 - Wednesday, March 20, 2019 - link
Pretty cool. But it'll be a while before memory like this shows up in consumer GPUs.
Targon - Wednesday, March 20, 2019 - link
It will probably be ready for the high-end Navi video cards that will come with HBM2 memory (November 2019-January 2020, I expect).
FreckledTrout - Wednesday, March 20, 2019 - link
I really hope you're wrong, because then we're saying Navi is just a tweak of Vega. I absolutely hope they redesigned Navi to use GDDR6 and that GCN is gone.
SaturnusDK - Wednesday, March 20, 2019 - link
Not to worry. AMD has already stated Navi can use both GDDR and HBM2.
darkswordsman17 - Friday, March 22, 2019 - link
@SaturnusDK: Vega supported both GDDR and HBM, but I think we only saw it with GDDR5 in that one Chinese console. Everywhere else Vega used HBM (except in the small APUs, where it used the CPU's DDR4). I expect Navi 10 to use GDDR6, and then Navi 20 to use HBM2. I won't be surprised if AMD goes with just two stacks for the consumer versions of Navi 20. I think 750-800GB/s would be enough for gaming GPUs: Vega 20 doesn't seem to benefit much in most gaming scenarios from having more than double the bandwidth of Vega 10, and it'd still be ~50% more bandwidth than I'd expect Navi 10 to have with GDDR6.
StevoLincolnite - Wednesday, March 20, 2019 - link
Navi is likely a Polaris replacement, not a Vega replacement, i.e. designed for the mainstream and not the high end.
Santoval - Thursday, March 21, 2019 - link
Quite a few Navi versions are rumored, so it might be a replacement for both Polaris and Vega.
darkswordsman17 - Friday, March 22, 2019 - link
It's hard to know what is what since AMD overhauled their roadmaps after Vega 10's release (and with Raja no longer being there). I think the plan was somewhat similar to Vega's: an initial Navi 10 release that targets consumers (and in this case things have pointed to Polaris' mainstream market segment), and then a Navi 20 that really targets the pro markets but will also spawn enthusiast-level cards in 2020. With the rumors of APUs that pair Zen 2 modules with Navi-based GPU modules, I'd guess there are going to be at least three main versions of the GPU for the PC consumer markets, with sub-versions - like Polaris 10 and 11, or the Vega versions, where it's 10, 11, (I think) 12, and 20.
darkswordsman17 - Friday, March 22, 2019 - link
Oh, and I think it is. Navi 10 is replacing Polaris, and Navi 20 is likely to replace Vega sometime next year. Some are expecting something between them, but I have a suspicion it's actually going to be a resurgence of multi-GPU, where AMD relies on two (or maybe more) smaller, efficient GPUs to offer enthusiast-level performance. I'm not sure AMD will stick with the Crossfire branding, as I think this is the beginning of their shift to multi-die mGPU, where the idea is to get the dies operating so that applications see them as a single GPU. I think Navi 20 will target the HPC/pro market more, so it will probably be more compute-heavy. It'll still offer more typical GPU power than Navi 10, making it viable as an enthusiast-level single dGPU, but unless you can utilize the compute capability, you might be better off with, say, 2-3 Navi 10 cards for gaming.
Santoval - Thursday, March 21, 2019 - link
Navi is 100% GCN-based, apparently the last GCN-based GPU. The first post-GCN AMD GPU is not Navi, it is Arcturus, which is to be released in late 2020 or 2021.
Arcturus is rumoured to be the GPU inside the Xbox Two.
marees - Thursday, March 21, 2019 - link
Here is the source of the rumour. Arcturus is a GPU, not an architecture.
It may or may not be post GCN
If it is inside XBOX it is extremely unlikely to have HBM due to cost reasons
https://www.reddit.com/r/Amd/comments/ag3ufk/radeo...
darkswordsman17 - Friday, March 22, 2019 - link
There have been a lot of rumors. If I remember correctly, some AMD people have said Navi is a new architecture, but supposedly the Navi we'll be getting in dGPUs as Navi 10 isn't the "real Navi" that was developed for Sony for the PS5.

So it seems likely that Navi 10 will be an evolution of Vega, designed solely for 7nm and featuring significant changes. I'm not sure "GCN based" is that bad if they fix some of the issues they're alleged to: improving geometry throughput, both through the base hardware units (I think there are patents suggesting a 50% improvement per SP, going from 4 to 6) and in software, with Navi allegedly enabling the "NGG Fastpath" that was slated for Vega. There have also been suggestions that the component ratios that have defined GCN won't be there either, so we should probably see more ROPs relative to shader count than GCN had. But I'd expect that, to some extent, future GPUs will be built on previous ones. That's not to say they can't make big improvements, but I think the talk of "new architecture vs GCN" is somewhat meaningless, as some GCN changes were big enough to be considered new architectures. Vega, for instance, was quite a big change and featured a lot more programmability than previous GCN, but was built around the GCN ratios - which I think was a mistake - and it seems Navi is going to be Vega but not built around the GCN ratios.
Xajel - Wednesday, March 20, 2019 - link
I wonder when Samsung will release the long-announced LCHBM chips? Are they targeting HBM3 with them as well?
Teckk - Wednesday, March 20, 2019 - link
Curious: won't the spec for HBM2E have the voltage requirements, or is that not its place?
wumpus - Wednesday, March 20, 2019 - link
Nice. But nothing to indicate that it isn't limited to high-price systems like Nvidia's HPC units.

If Intel/Micron ever get around to trying to make Optane a DRAM replacement, you'll likely want something like this as cache.
tmnvnbl - Wednesday, March 20, 2019 - link
Doubling the density is a huge feat for HBM memories. Because of the close integration with the GPU on the interposer, you are limited to 4 stacks of HBM for space reasons. Because of the TSVs, you are limited to stacking 8 dies for now. So any capacity boost needs to come from the dies themselves.

I am wondering which process they used, and if the dies are a lot larger now.
Also, it seems they improved the IO speed without any architectural changes. That must mean they also increased the internal core memory clock to fairly high speeds, pushing into GDDR core clock territory. So where HBM could relax the need for high internal clock speeds, I guess Samsung just wants more bandwidth at any cost. I wonder what this means for memory core energy use and temperature in the HBM dies.
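A quick sketch of the capacity arithmetic those limits imply. The 16 Gb per-die figure is the density Samsung announced for this part; the stack and interposer limits are the ones described above:

```python
# Back-of-the-envelope HBM capacity: interposer space caps the stack
# count, TSV stacking caps the dies per stack, so total capacity can
# only grow further through per-die density.
GBIT_PER_DIE = 16    # announced 16 Gb dies (doubled from 8 Gb)
DIES_PER_STACK = 8   # 8-Hi is the current TSV stacking limit
STACKS = 4           # interposer space limits a GPU to ~4 stacks

stack_gb = GBIT_PER_DIE * DIES_PER_STACK / 8   # gigabits -> gigabytes
total_gb = stack_gb * STACKS
print(f"{stack_gb:.0f} GB per stack, {total_gb:.0f} GB per GPU")
# -> 16 GB per stack, 64 GB per GPU
```

So the headline capacity follows directly from the doubled die density, with the stack count and height unchanged.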
mczak - Wednesday, March 20, 2019 - link
I think all their 16Gb DRAM chips use their 18nm process. And yes, density for this increased by 33% or so over the older 20nm tech, whereas capacity doubled, so the dies should be a fair bit larger. Although I thought the HBM2 die size was actually quite a bit larger than the Samsung DRAM die size for some reason, so maybe it didn't grow that much...
Santoval - Thursday, March 21, 2019 - link
Huge feat, and frankly quite unexpected, as that capacity was expected for HBM3. It doesn't appear that HBM2E is even an official JEDEC spec though; I couldn't find anything about it after a quick googling.
ksec - Wednesday, March 20, 2019 - link
Is this the first time we broke 1TB/s of memory bandwidth? Or has this been done before, theoretically?

And what happened to HBM3 and HBM4? I think both were announced (not shipping, but intention to develop) quite a while ago.
Ryan Smith - Wednesday, March 20, 2019 - link
The previous 2.4Gbps/pin "Aquabolt" and 2.0Gbps/pin "Flarebolt" HBM2 could get you over 1TB/sec in a 4-stack configuration. AMD's Radeon Instinct MI60 in fact does just that, its 2Gbps memory clock giving it 1TB/sec on the dot.
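The arithmetic behind those figures: each HBM2 stack has a 1024-bit interface, so peak bandwidth is just pin rate times width times stack count. A quick check in Python (the helper name is mine, just for illustration):

```python
def hbm_bandwidth_gb_s(pin_gbps, stacks=4, width_bits=1024):
    """Peak HBM bandwidth in GB/s: per-pin rate (Gbps) * interface
    width (bits) / 8 bits-per-byte, summed over all stacks."""
    return pin_gbps * width_bits / 8 * stacks

print(hbm_bandwidth_gb_s(2.0))  # "Flarebolt" / MI60 -> 1024.0 GB/s, 1TB/s on the dot
print(hbm_bandwidth_gb_s(2.4))  # "Aquabolt" -> 1228.8 GB/s
```

The same formula with the new 3.2Gbps pin rate is where the HBM2E numbers come from.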
SaturnusDK - Wednesday, March 20, 2019 - link
Same as Radeon VII so we already have it on a gaming GPU as well.ksec - Thursday, March 21, 2019 - link
Thanks. I'm not really keeping up with this tech, as it's mostly out of my budget :/
TheJian - Wednesday, March 27, 2019 - link
For the love of GOD, AMD (and my stock price... ROFL), DO NOT PUT THIS ON A CONSUMER CARD! You will blow your income (NET) yet again. Unless you can explain how it makes card X FASTER, and PROVE IT with REAL benchmarks smoking NVDA, don't waste your time killing your new cards. So far HBM has been absolutely useless to gamers and has killed every card AMD put it on for gamers. Quit doing it. HBM has its use cases, but gaming isn't one of them. It only raises prices and kills margins, and it usually causes shortages too, killing any chance of your new card's profit anyway. If you can't sell it (because you can't make enough of it... LOL), you can't profit from it. This is not rocket science. Leave HBM to cards well over $1000 (just a reference here, you get the point, $1299+?).
mdriftmeyer - Thursday, August 8, 2019 - link
Boy, these comments about just another Vega variant didn't age well, and yes, big Navi will soon be upon us, blowing the doors off of Vega/Vega II, not to mention Nvidia.

How's that ROME love so far? Intel's getting spanked.