Comments

  • Pork@III - Thursday, January 18, 2018 - link

    "Theoretical GDDR6 256-bit memory sub-system" / "Theoretical GDDR6 384-bit memory sub-system"

    That is awesome! But... why don't NVIDIA and AMD make 512-bit and 448-bit cards today?
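
    For reference, a minimal sketch (not from the article or the comment) of the theoretical peak bandwidth each bus width would offer, using the 18 Gbps per-pin rate mentioned in the article; the 448-bit and 512-bit configurations are purely hypothetical:

        # Peak bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps.
        def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
            return bus_width_bits / 8 * data_rate_gbps

        for width in (256, 384, 448, 512):
            print(f"{width}-bit @ 18 Gbps: {peak_bandwidth_gbs(width, 18):.0f} GB/s")
        # 256-bit ->  576 GB/s
        # 384-bit ->  864 GB/s
        # 448-bit -> 1008 GB/s
        # 512-bit -> 1152 GB/s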
  • T1beriu - Thursday, January 18, 2018 - link

    Overkill? Crazy power consumption? Increased die sizes?
  • nevcairiel - Thursday, January 18, 2018 - link

    Because it costs a whole lot of money and die space, and while more memory bandwidth can help various workloads, there are also limits to how much it can help.
  • Pork@III - Thursday, January 18, 2018 - link

    Okay, but 10+ years ago it was not a problem to offer 512-bit video cards. Was it easier and cheaper to manufacture back then? Even so, prices were not higher than the current flagship models (from before the mining speculation). I do not accept explanations like yours as a good enough argument.
  • DanNeely - Thursday, January 18, 2018 - link

    No, they did it because they desperately needed the extra bandwidth or total capacity for top-end GPUs. Doing so added a lot to manufacturing costs: more RAM chips, more PCB layers to connect them, and higher manufacturing failure rates. Since then, RAM makers have managed to provide RAM that is fast enough and high enough in capacity that going that wide hasn't been necessary.

    Adding more RAM or a wider bus than the GPU can utilize wouldn't do anything but make the card more expensive; and other than AMD's first-generation HBM1 cards, nothing at the high end in the last decade has been RAM-limited.
  • MrSpadge - Thursday, January 18, 2018 - link

    And back then the wider memory buses also increased chip cost (area) and power consumption over narrower, similarly clocked alternatives. Chip area was cheaper back then, though.
  • Dragonstongue - Thursday, January 18, 2018 - link

    Because an extra-wide memory bus takes a LOT of die space, which drives up cost and complexity a great deal. Also, "modern" designs seem to favor power-of-two widths (128, 256, 512); most of the odd-width cards, while they can be fast, don't seem to scale properly in some workloads. The maker (AMD, Nvidia, or whoever) needs to "feed the card" properly, so a massive or tiny bus that isn't fed with fast enough memory makes for a useless and expensive proposition.

    I'm sure sometimes they do this on purpose so they can market the card at a higher price than they should, but usually they are building them to effectively use a "narrow" but fast memory bus. There are always exceptions to the rule, of course, such as the many, many cards loaded with a whole whack of GB of memory they can never use because of a narrow bus, or simply because there is not enough grunt behind the design.
  • Pork@III - Thursday, January 18, 2018 - link

    I think they just cut costs in order to maximize their net profit, and the result is that we are receiving less and less benefit for the money we spend.
  • MrSpadge - Thursday, January 18, 2018 - link

    If they introduced a card that lacked memory bandwidth, it would underperform and could not be sold at the target price. Hint: any company doing this is already bankrupt.

    And yes, the narrower buses increase the company's profit, but not at the cost of performance. That's why overclocking GPU memory helps performance a bit, while overclocking the chip yields far greater gains, percentage-wise. That means the cards are well balanced and have approximately enough bandwidth.
  • Frenetic Pony - Thursday, January 18, 2018 - link

    Nvidia doesn't have a design for it, and it would be massive overkill anyway; you'd also have to fill the rest of the die with enough horsepower to make use of it. AMD did at one point, but switched to HBM, which Nvidia has done as well, at least for their big cards.

    Thing is, while HBM is more efficient in terms of die size and energy used for the bandwidth it delivers, it has also turned out to be too limited and expensive right now. So here's GDDR6: less efficient than HBM2, and far less so than HBM3 (whenever that comes out), but easier to make, and therefore cheaper and more available. Considering the ridiculous price you have to pay for a GPU today, assuming you can even get one at all, cheaper and more available is definitely preferable right now.
  • boeush - Thursday, January 18, 2018 - link

    "...so the company has managed to make its 16 Gb ICs smaller than its previous-gen 8 Gb ICs. The company does not elaborate on its achievement, but it looks like the new chips are not only made using a thinner process technology, but have other advantages over predecessors, such as a new DRAM cell structure, or an optimized architecture."

    Maybe my math is naive or somehow wrong, but to a first approximation, going from a 20 nm-class process down to a 10 nm-class one increases the number of circuit elements per unit area by roughly a factor of 4.

    So if the density went up by 4x, but the memory capacity only went from 8 Gb to 16 Gb (2x), then why wouldn't we expect roughly 2x the number of dies per wafer, even with no other changes?
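
    A minimal back-of-the-envelope sketch of that reasoning (all die sizes hypothetical, ideal scaling assumed, edge dies and defects ignored):

        WAFER_AREA_MM2 = 70_685  # usable area of a 300 mm wafer, roughly pi * 150^2

        def dies_per_wafer(die_area_mm2: float) -> int:
            # Naive count: wafer area divided by die area.
            return int(WAFER_AREA_MM2 / die_area_mm2)

        old_die_area = 60.0                  # made-up 8 Gb die on a 20 nm-class process
        new_die_area = old_die_area * 2 / 4  # 2x the bits at 4x the density -> half the area

        print(dies_per_wafer(old_die_area))  # ~1178 dies of 8 Gb each
        print(dies_per_wafer(new_die_area))  # ~2356 dies of 16 Gb each, i.e. ~4x the bits per wafer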
  • limitedaccess - Thursday, January 18, 2018 - link

    All the DRAM/NAND manufacturers are obfuscating the actual designation of their sub-20 nm processes, hence terms like "10 nm class" or "1x nm" (http://www.samsung.com/semiconductor/dram/gddr6/) instead of outright stating 10 nm. On top of that, the process designations from foundries are effectively arbitrary and not subject to any standard independent measurement of the actual density of the end products.
  • MrSpadge - Thursday, January 18, 2018 - link

    The first "10 nm class" NAND processes where 2 generations of 19 nm processes (e.g. Toshiba, if I remember corrctly).

    Regarding the scaling discussed here: moving from 8 Gbit to 16 Gbit dies will reduce the fraction of area spent on fixed control logic and decrease the area lost to die sawing. Both increase "productivity" on top of the new process. Besides, "30% higher productivity" by no means has to mean "the new chip is smaller"; otherwise the increase would have to exceed roughly 100%. The new chip will be larger, but they now get ~30% more DRAM capacity per wafer, i.e. fewer dies, each with twice the capacity.
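
    A minimal sketch of that arithmetic with a made-up baseline (1000 old dies per wafer is purely illustrative):

        old_dies_per_wafer = 1000                       # hypothetical 8 Gb dies per wafer
        old_gbit_per_wafer = old_dies_per_wafer * 8

        new_gbit_per_wafer = old_gbit_per_wafer * 1.3   # the quoted ~30% productivity gain
        new_dies_per_wafer = new_gbit_per_wafer / 16    # each new die holds 16 Gb

        print(new_dies_per_wafer)                       # 650.0 -> fewer dies per wafer
        print(old_dies_per_wafer / new_dies_per_wafer)  # ~1.54 -> each new die is larger, not smaller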
  • martinpw - Thursday, January 18, 2018 - link

    Yes indeed, "class" is a weasel word here. "10nm class" just means the first digit is a 1. So anywhere between 10nm and 19nm.
  • FreckledTrout - Thursday, January 18, 2018 - link

    Like limitedaccess said, only the marketing density went up by 4x; the actual density is likely closer to 2x.
  • webdoctors - Thursday, January 18, 2018 - link

    This is great news. You'd be able to get HBM-level or better bandwidth for a much lower BOM cost (no interposer or HBM memory prices to pay), plus better yields, since there is no interposer issue.
  • SunnyNW - Thursday, January 18, 2018 - link

    Am I the only one who is extremely impressed by the speed of these ICs, especially this early in the life cycle? The 18 Gbps is even higher than the initially announced 16 Gbps, and if I'm not mistaken, even the JEDEC spec lists 16 Gbps as the fastest speed for GDDR6.
  • MrSpadge - Friday, January 19, 2018 - link

    +1
