24 Comments

  • Dragonstongue - Monday, February 3, 2020 - link

    problem is, does anyone or anything use HBM anymore, after Radeon and the few Ngreeedia err Nvidia GPUs that used it?

    shame, as it never quite seemed to live up to the "promises" even though there was no denying it had "great potential". Somewhere along the line Raja "screwed the pooch" when it came to Radeon overall...great intentions led to seemingly poor "in the real world" results.

    That being said, the various Vega, R9 Fury etc. once "tuned" supposedly worked outstandingly; beyond this, they seemingly chewed up a bunch of power (real or imagined) vs more standard designs...I suppose it was a very good thing they did use HBM memory ^.^

    I wonder what AMD is planning in the future to keep power levels to a more "sane" amount and/or to have some sort of properly done "power down when not needed" design, seeing as modern GPUs are for sure computers in their own right (they have a buttload of shaders and such, why can they not help to "on the ms" adjust timings/power use?)

    finally, all is well and good, but knowing Samsung they are likely to price this very high indeed, and maybe, maybe not have it running at the best possible speed with the lowest possible power (and therefore heat) produced.....still, this would lead to some serious, serious "firepower" on a fully fledged-out product, I am sure of it (would hate to see the cost however)

    not likely to be in any $200 GPU (not just USD pricing), but it sure would be nice if they could make a "lower cost $ and TDP/watt, but stupidly quick gamer's card"
  • Cooe - Monday, February 3, 2020 - link

    Yeah. A metric crapton of high-end ASICs & FPGAs. Also, it's still the only game in town for the highest-end data center compute GPUs with seriously beefy double precision compute & whatnot (V100, Vega 20). HBM is doing just fine.
  • Kevin G - Tuesday, February 4, 2020 - link

    Of those ASICs, there are several high-end networking chips that leverage this due to the very nice bandwidth with relatively low latency it provides. The latency side of HBM (vs. GDDR5/6 in particular) is one of its most undervalued aspects.

    (Though for lowest latency, large SRAM and TCAM are still necessary for switches but those remain on-die.)
  • extide - Monday, February 3, 2020 - link

    I bet we will continue to see it in the occasional high-end gaming GPU -- and this stuff will almost certainly show up in Nvidia's next high-end Tesla product. If they decided to make an 8GB stack, you could make a nice product with only 2 stacks -- especially if you were willing to run it in > 3.2Gb/sec mode -- you could get Radeon VII b/w numbers with half the stacks. That's progress!
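
A rough back-of-envelope sketch of that bandwidth claim, assuming the standard 1024-bit interface per HBM2/HBM2E stack, the 2.0 Gb/s per pin that Radeon VII's HBM2 runs at, and the 3.2 Gb/s per pin Samsung quotes for Flashbolt:

```python
# Peak HBM bandwidth estimate: stacks x bus width (bits) x per-pin rate (Gb/s) / 8 -> GB/s
def hbm_bandwidth_gb_s(stacks, pin_rate_gbps, bus_bits_per_stack=1024):
    return stacks * bus_bits_per_stack * pin_rate_gbps / 8

print(hbm_bandwidth_gb_s(4, 2.0))  # Radeon VII: 4 stacks @ 2.0 Gb/s -> 1024 GB/s
print(hbm_bandwidth_gb_s(2, 3.2))  # 2 Flashbolt stacks @ 3.2 Gb/s -> ~819 GB/s
print(hbm_bandwidth_gb_s(2, 4.0))  # 2 stacks pushed to 4.0 Gb/s -> 1024 GB/s, matching Radeon VII
```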
  • lilkwarrior - Tuesday, February 4, 2020 - link

    You're really not accounting for the fact that the most valuable GPU consumers for GPU companies—pros, not gamers looking for $200 cards—want GPUs w/ optimized capabilities for 4K, deep learning, and of course ray tracing.

    Nvidia definitely hasn't been greedy, which is how they ended up with 70% of the workstation market & a similar share of the gaming market. Their RTX cards are next-gen cards for gamers while being coveted cards for pros—especially for deep learning, their core competency.
  • Retycint - Tuesday, February 4, 2020 - link

    This whole concept of Nvidia being "greedy" is laughable, considering that both AMD and Nvidia are billion-dollar, profit-maximizing corporations that answer to their shareholders and boards of directors. The pricing strategies of both firms represent their best attempt at maximizing profit. Nvidia didn't price their products high because they were "greedy"; it's because they had no competition in the high-end market. AMD didn't price their products lower because they were benevolent demi-gods who wanted to "save consumers", but because they were competing against the market leader Nvidia and needed to undercut Nvidia's products to persuade consumers to switch.

    Had the market situation in the past few years been reversed, AMD would probably have priced their products similarly, because why would they voluntarily give up profit?
  • Spunjji - Wednesday, February 5, 2020 - link

    But the situation *was* reversed in prior years (Radeon 9800 Pro, HD4870 / 5870 series), and yet AMD never mysteriously added 50%+ to the cost of their high-end products in a single generation.

    Yes, it's silly to refer to a corporation as "greedy". It's also true, though, that Nvidia's version of fierce competition has involved leveraging their market position to introduce proprietary offerings of dubious value to tie customers to their product stack, then cranking prices up in cycles where competition is lighter.

    They're basically the Apple of the GPU market. The iPhone 11 Pro may be an excellent phone and arguably one of the best on the market, but there's no damn way it's worth $1000+ for any reason other than that's what Apple chose to charge for it.
  • PEJUman - Wednesday, February 5, 2020 - link

    I am wondering what fraction of ppl buy the 11 Pro at the full retail asking price. Most US carriers brought back 2-year contract subsidies during the holiday season last year, effectively dropping the 11 Pros by $300-400. And I think there is a healthy '24-month lease' user base on the '0% financing' from carriers and Apple themselves.

    TLDR: I think, just like car leasing, the ownership business model for phones might be ending soon. See: https://www.fool.com/investing/2018/06/08/why-risi...

    My speculation is that Apple has been inflating its phone prices since 2017, with the goal of slowly building up the residual value of its phones to support a 12-24 month lease business model in the 2022-23 time-frame, ultimately transforming itself into a 'hardware&software ecosystem-as-service' behemoth.
  • Cullinaire - Friday, February 7, 2020 - link

    It's worth whatever Apple decided to charge for it, because people chose to BUY IT at that price.

    Quite simple really.
  • ksec - Tuesday, February 4, 2020 - link

    In the realm of HPC, HBM is cheap. And as we move to more custom processors in HPC, hopefully HBM will pick up speed with even higher bandwidth.

    But we don't seem to be severely bandwidth limited in high-end gaming GPUs.
  • azfacea - Wednesday, February 5, 2020 - link

    Not in high-end GPUs with 256-384 bit GDDR6, but APUs and gaming laptops are severely bandwidth limited. Even ignoring price, premium gaming laptops that are TDP limited could immensely benefit from HBM, but the volume doesn't seem to exist for some reason.
  • Valantar - Wednesday, February 5, 2020 - link

    A bit OT first: please stop using quotation marks in that way, it makes everything you write seem intended to be ironic in a way that makes no sense whatsoever. If you say "great potential" in quotation marks like that it looks like you mean the opposite. It makes reading your posts genuinely confusing and annoying.

    Beyond that: HBM and HBM2 have been a significant factor in _reducing_ power draw for GPUs using it. An 8GB 256-bit high-speed GDDR5 setup - which was the main competitor in the Fiji/Vega days - uses somewhere around 40-50W. HBM/HBM2 halves that if not more. That these were relatively inefficient GPU architectures has no relation to the memory used - quite the opposite, as the more efficient memory allowed more power to go to the core, improving performance for these cards.

    As for AMD cards powering down when not needed - they do. Have you read a GPU review in the past ... few years? AMD has been a few watts behind Nvidia in idle power, but not much, and currently they are a few watts ahead. As for in-use voltage and clock adjustments: all modern GPUs do those too, from both vendors. AMD just happens to have a less efficient design than Nvidia (though RDNA has made major strides in efficiency, they still only manage to sort of match Nvidia with a significant node advantage).
  • Brane2 - Monday, February 3, 2020 - link

    These things seem like a flop to me. At least in their current form.

    They were marketed as having practically infinite bandwidth compared to conventional RAM and ended up at about the same. That's a flop through a lot of orders of magnitude - it looks awkward even on a logarithmic scale.

    Also, all those bits have to be driven, which demands separate area for a driver and connecting pad on the die. Sure, each driver is smaller, but there are a lot more of them.

    HBM will need changes in the underlying tech to be effective:

    - DRAM cell -- that capacitor stuff is getting really old by now
    - transfer signalling and protocol

    If/when they solve that, HBM might be very interesting for APUs with onboard memory, extra-large low-latency caches, etc...
  • A5 - Tuesday, February 4, 2020 - link

    Gotta love armchair EEs. All they have to do is completely abandon the architecture? Gee, how simple!
  • azfacea - Tuesday, February 4, 2020 - link

    The combination of much higher bandwidth and lower power consumption **should** in theory make flagship smartphones very interested in HBM, much more so than AMD was for Vega 56 and 64. Apple and Samsung have pretty deep pockets as well, so why aren't they doing it? What is the problem? Defect rate??
  • azfacea - Tuesday, February 4, 2020 - link

    Z-height?? Integrity/durability??
  • Spunjji - Wednesday, February 5, 2020 - link

    Cost, plus the fact that a flagship phone would blow past its power budget doing anything that might need this much memory bandwidth.
  • Soulkeeper - Tuesday, February 4, 2020 - link

    Too bad they can't slap 2 of these on a Ryzen interposer.
    Bandwidth per core/thread has taken a big hit over the years.
  • PeachNCream - Tuesday, February 4, 2020 - link

    That would be nice. Giving a modern CPU one or two HBM stacks as RAM would provide sufficient bandwidth for not only the processor cores but also any iGPU needs. Integrated graphics have been held back in part by sharing bandwidth to RAM for as long as they have existed. It doesn't solve the problem of cramming all that power demand and heat into a comparably smaller space than having discrete components, but it does at least get rid of the unified memory bottleneck.
  • FreckledTrout - Tuesday, February 4, 2020 - link

    I feel like this is the end game for AMD. My guess is that we will see this around the time they move to TSMC's 3nm node. That should make for plenty of die space for 4 stacks of HBM, an 8-core CPU chiplet and a very decent GPU chiplet. Frankly I think the entire low-end GPU segment for gaming will disappear around this time as well, as the APUs will be good enough.
  • PeachNCream - Tuesday, February 4, 2020 - link

    I hope you're right and that AMD is heading in that direction. HBM costs will need to come down quite a bit though, since APUs represent a portion of AMD's lower-priced computing segment. The other problem is the inevitable improvement of technology. If HBM is cheap enough to be added to the APU package, what will higher-performing products offer in terms of capabilities on a relative scale? Will a hypothetical HBM-equipped APU have enough performance to get relatively closer to the low- to mid-range discrete GPUs on sale if/when it goes on sale, and what does APU performance end up looking like when running games and other software available at that time? Hopefully we'll see the gap close, but it is hard to predict how the cards will fall.
  • Xajel - Tuesday, February 4, 2020 - link

    Almost 3 years ago, Samsung announced an LC-HBM2/3 solution which uses a narrower bus; IIRC it was something like 448~512-bit compared to the 1024-bit that current HBM stacks have.

    This narrower bus allows the use of much less expensive organic interposer packaging rather than the silicon interposer required by the wider 1024-bit bus; in theory it can be used like any chip/chiplet in an MCM package.

    But Samsung went silent after that; nothing was released, so it seems Samsung didn't find enough interest in the product. Sad if that's the case, as it has a lot of potential for a variety of products.
  • 29a - Tuesday, February 4, 2020 - link

    I'd love to see how an APU with 4 stacks of this would perform.
  • ballsystemlord - Friday, February 7, 2020 - link

    Grammar error (wrong suffix):

    "Of particular note, Samsung is only announcement 16GB stacks at this time,..."

    Should be:

    "Of particular note, Samsung is only announcing 16GB stacks at this time,..."
