"Another note is the revision of GTX 1060 6GB models to include GDDR5X memory, though clocked the same as the original GDDR5. Providing the same amount of memory bandwidth, the logic points to a supply/inventory reason rather than a performance refresh."
Basically NVidia is making these cards to use up GP104 dies that either had at least 6 bad groups of cores or 1 bad memory controller and thus were unsuitable to make into GTX1070s. It's the same deal as with the 5GB GT1060 that came out earlier in the year. It'll be interesting to see how available these end up being, AFAIK the 5GB 1060's only showed up in a few price sensitive Asian markets. As a higher performing variant these might show up in the west instead; OTOH since they're being primarily made as a way to use up dud parts there's no guarantee they'll have a lot of them to sell.
No, probably not. GDDR5 is backwards-compatible with GDDR5 controllers (although it only runs in double-data rate mode).Yields on the GP104 have to be pretty good or Nvidia wouldn't have launched the 1070 TI.
Product refreshes every 12 months were nice, but when the industry was stuck on 28nm, those new GPUs were often mild tweaks of existing chips or just rebranded models. Those models that did offer performance increases often came from bigger dies that increased TDP and costs. Thus the dual slot blower and vapor chamber cooling that are status quo in the present day became normalized across all products rather than positioned at the upper end of the stack (though I would be remiss not to accept the idea that there were competitive market forces and demand-based pull from consumers that also played a role).
True, but even then going from the 680 to the 780, then the 780ti and then the 980ti, over the course of three years, offered great performance improvements with just tweaking.
All good points. I often overlook the highest end graphics cards since gaming hasn't been a priority in over a decade or so. The rebrands were mainly a thing in the mobile space and in the lower end. I recall seeing multiple iterations of the same Radeon GPU. For instance, the HD 6450 is basically identical to the HD 7450 despite the implied higher performance from the increased model number. It was much the same situation with the chipset integrated HD 3200 and 4200 IIRC. At the top end, I would imagine that significant engineering effort was invested because the halo effect of having higher performance was helpful across a given graphics generation as was the trickle-down nature of developing a good GPU that could be later incorporated into subsequent lesser models.
Hows about you compare the GTX 480 and GTX 680, released a whopping 3 years apart, and get back to me bud? Or hos about the 780ti_>980TI->1080ti. Seems there were plenty of improvements on those "complex GPUs".
You didn't understand a thing from what the guy said. GTX680 GPU was a pretty small GPU for being the high end model - it was only ~300mm2. This was because AMD highend part was pretty poor, so nvidia just sold something small and cheap for big money. GTX780TI is much bigger because it had where to grow. Starting from a 300mm2 GPU, you can work your way up until ~600mm2 on the same node. So this GPU was 550mm2, basically the highend GPU that nvidia was planning for the 600 series that never happened. Now, maxwell's 980TI was over 600mm2 in size and it got faster just because they improved by a big margin the memory compression algorithms and stripped the GPU of the FP units. They used that new space for more graphics resources. From Pascal onwards, things have changed. 1080TI is on 16nm, but it is still pretty big at 470mm2. GP100, which is the full fat GPU is very big in size (610mm2) so as big as the 980Ti which was the last increase in size for 28nm. It had HBM2, which was very expensive at the time, so they only sold it in Quadro line-up. Volta is another beast, over 800mm2. That is a HUGE chip and they could launch it on the market but it would cost over 2k. Now, with the RTX line-up, the process is again pushed to the limits. The 2080ti is a huge chip at 750mm2 and considering the price, I think it is quite cheap for the amount of silicon it has.
So bottom line is that nvidia released better cards than the 1080Ti, but they would be so expensive that almost nobody could afford to buy them.
Why does this strike me mostly as way to avoid lowering the prizes of these Pascal cards now that Turing is out? "Yes, we're still charging high prices for our previous gen GPUs, but look at the nice game that we bundled with it" or similar. Thanks, but no thanks! I for one would prefer no bundling and lower priced cards instead; that would drive sales by really improving the price/performance ratio.
This WILL tip a lot of people who were thinking of getting a 1050ti or 3gb 1060 into getting the higher priced 6gb 1060. I could do with one myself but can hang on to see what pops out of the Black friday box or even the new years sales. Then again the rx590 is lurking in the background ....
We’ve updated our terms. By continuing to use the site and/or by logging into your account, you agree to the Site’s updated Terms of Use and Privacy Policy.
12 Comments
DanNeely - Thursday, October 25, 2018
"Another note is the revision of GTX 1060 6GB models to include GDDR5X memory, though clocked the same as the original GDDR5. Providing the same amount of memory bandwidth, the logic points to a supply/inventory reason rather than a performance refresh."Basically NVidia is making these cards to use up GP104 dies that either had at least 6 bad groups of cores or 1 bad memory controller and thus were unsuitable to make into GTX1070s. It's the same deal as with the 5GB GT1060 that came out earlier in the year. It'll be interesting to see how available these end up being, AFAIK the 5GB 1060's only showed up in a few price sensitive Asian markets. As a higher performing variant these might show up in the west instead; OTOH since they're being primarily made as a way to use up dud parts there's no guarantee they'll have a lot of them to sell.
Flunk - Thursday, October 25, 2018
No, probably not. GDDR5X is backwards-compatible with GDDR5 controllers (although it only runs in double-data-rate mode). Yields on the GP104 have to be pretty good or Nvidia wouldn't have launched the 1070 Ti.

ImSpartacus - Thursday, October 25, 2018
They probably are excellent by now, but that doesn't mean Nvidia hasn't spent the past three years stockpiling "broken" GP104s. This is a common practice for GPU makers. We know how this works.
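To make the salvage logic in this subthread concrete, here is a toy restatement of DanNeely's rule. The totals are GP104's public configuration (20 SMs, eight 32-bit memory controllers; the 1070 enables 15 SMs and the full 256-bit bus, the 1060 needs 10 SMs and 192 bits), but the function itself is only an illustration - Nvidia's actual binning criteria aren't public:

```python
# Toy model of the binning rule described above. SM and memory-controller
# totals are GP104's public configuration; the thresholds restate the
# comments' logic and are illustrative, not Nvidia's real criteria.

def bin_gp104_die(bad_sms: int, bad_mcs: int) -> str:
    if bad_sms <= 5 and bad_mcs == 0:
        # GTX 1070 enables 15 of 20 SMs and all eight controllers (256-bit)
        return "GTX 1070"
    if bad_sms <= 10 and bad_mcs <= 2:
        # GTX 1060 needs only 10 SMs and six controllers (192-bit)
        return "GTX 1060 GDDR5X"
    return "scrap"

print(bin_gp104_die(bad_sms=6, bad_mcs=0))  # one dead SM too many for a 1070
print(bin_gp104_die(bad_sms=0, bad_mcs=1))  # one dead controller also demotes it
```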
TheinsanegamerN - Thursday, October 25, 2018
Two and a half years later, STILL pushing Pascal. Remember when we had a new generation of GPUs every 12 months?
PeachNCream - Thursday, October 25, 2018
Product refreshes every 12 months were nice, but when the industry was stuck on 28nm, those new GPUs were often mild tweaks of existing chips or just rebranded models. The models that did offer performance increases often came from bigger dies that raised TDP and costs. Thus the dual-slot blowers and vapor-chamber coolers that are the status quo today became normalized across the whole product stack rather than being positioned only at the upper end (though I would be remiss not to acknowledge that competitive market forces and demand-based pull from consumers also played a role).

TheinsanegamerN - Thursday, October 25, 2018
True, but even then, going from the 680 to the 780, then the 780 Ti, and then the 980 Ti over the course of three years offered great performance improvements with just tweaking. Same as the 7970 to the beastly 290X.
PeachNCream - Thursday, October 25, 2018
All good points. I often overlook the highest-end graphics cards since gaming hasn't been a priority for me in a decade or so. The rebrands were mainly a thing in the mobile space and at the lower end. I recall seeing multiple iterations of the same Radeon GPU: the HD 6450, for instance, is basically identical to the HD 7450 despite the higher performance implied by the bigger model number. It was much the same situation with the chipset-integrated HD 3200 and 4200, IIRC. At the top end, I would imagine significant engineering effort was invested, since the halo effect of having higher performance helped across a given graphics generation, as did the trickle-down nature of developing a good GPU that could later be incorporated into subsequent lesser models.

Cellar Door - Thursday, October 25, 2018
What you are missing here is how complex GPUs have become. Go ahead and compare the transistor counts, then think about your comment.

TheinsanegamerN - Thursday, October 25, 2018
How about you compare the GTX 480 and GTX 680, released a whopping two years apart, and get back to me, bud? Or how about the 780 Ti -> 980 Ti -> 1080 Ti? Seems there were plenty of improvements on those "complex GPUs".

yeeeeman - Friday, October 26, 2018
You didn't understand a thing from what the guy said. The GTX 680's GPU was pretty small for a high-end model - only ~300mm2. This was because AMD's high-end part was pretty weak, so Nvidia could sell something small and cheap for big money.

The GTX 780 Ti is much bigger because there was room to grow: starting from a ~300mm2 GPU, you can work your way up to ~600mm2 on the same node. So that GPU was 550mm2 - basically the high-end GPU Nvidia had been planning for the 600 series that never happened.

Maxwell's 980 Ti was over 600mm2, and it got faster largely because the memory compression algorithms improved by a big margin and the GPU was stripped of its FP64 units, freeing space for more graphics resources.

From Pascal onwards, things changed. The 1080 Ti is on 16nm but still pretty big at 470mm2. GP100, the full-fat GPU, is very big (610mm2) - as big as the 980 Ti, which marked the last size increase on 28nm. It used HBM2, which was very expensive at the time, so it was sold only in the professional lineup. Volta is another beast at over 800mm2. That is a HUGE chip; they could have launched it on the consumer market, but it would have cost over $2K.

Now, with the RTX lineup, the process is again pushed to the limits. The 2080 Ti is a huge chip at 750mm2, and considering the price, I think it is quite cheap for the amount of silicon it has.

So the bottom line is that Nvidia had better chips than the 1080 Ti, but cards built on them would have been so expensive that almost nobody could afford to buy them.
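To put rough numbers on the die-size argument, here is a back-of-the-envelope sketch. The dies-per-wafer formula is a standard geometric approximation for a 300mm wafer; the 0.1 defects/cm2 density and the simple Poisson yield model are illustrative assumptions (real foundry figures are proprietary, and salvage binning recovers many partially defective dies, so "fully good" understates what is sellable):

```python
import math

# Back-of-the-envelope dies-per-wafer and yield for the die sizes quoted
# above. The geometric formula is a standard approximation; the defect
# density and Poisson yield model are illustrative assumptions only.

WAFER_DIAMETER_MM = 300
DEFECT_DENSITY_PER_CM2 = 0.1  # assumed; real foundry numbers are proprietary

def dies_per_wafer(die_area_mm2: float) -> int:
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2: float) -> float:
    # Fraction of dies with zero defects, given a uniform defect density.
    return math.exp(-DEFECT_DENSITY_PER_CM2 * die_area_mm2 / 100)

for name, area in [("GTX 680 (GK104)", 300), ("1080 Ti (GP102)", 470),
                   ("980 Ti (GM200)", 600), ("2080 Ti (TU102)", 750),
                   ("Volta (GV100)", 815)]:
    n = dies_per_wafer(area)
    print(f"{name:>16} ~{area} mm^2: {n:3d} dies, ~{n * poisson_yield(area):.0f} fully good")
```

Under these assumptions, fully good dies per wafer fall roughly five-fold going from ~300mm2 to ~815mm2, which is the cost cliff being described.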
eastcoast_pete - Friday, October 26, 2018
Why does this strike me mostly as a way to avoid lowering the prices of these Pascal cards now that Turing is out? "Yes, we're still charging high prices for our previous-gen GPUs, but look at the nice game we bundled with it," or similar. Thanks, but no thanks! I for one would prefer no bundling and lower-priced cards instead; that would drive sales by really improving the price/performance ratio.

dromoxen - Monday, October 29, 2018
This WILL tip a lot of people who were thinking of getting a 1050 Ti or 3GB 1060 into getting the higher-priced 6GB 1060. I could do with one myself but can hang on to see what pops out of the Black Friday box or even the New Year's sales. Then again, the RX 590 is lurking in the background...