Comments Locked

10 Comments

  • ballsystemlord - Friday, March 1, 2024 - link

    Not to complain about AT, but that press release is completely devoid of information.

    "To service AI workloads, HBM-something is going to be manufactured sometime for someone somewhere."

    is all that really needs to be printed in this article.
  • nandnandnand - Friday, March 1, 2024 - link

    How about HBM for consumer products hahahhhahhoohhoohehehehe
  • BZD - Sunday, March 3, 2024 - link

    Worked well with the Rage Fury.
  • ballsystemlord - Sunday, March 3, 2024 - link

    It's the HBCC, not so much the HBM, that really made Vega a good card.
  • boozed - Sunday, March 3, 2024 - link

    High Bandwidth Memory Memory
  • nandnandnand - Monday, March 4, 2024 - link

    AnandTech really didn't like the discussion about how it's dying. Sad!
  • Ryan Smith - Tuesday, March 5, 2024 - link

    It's a discussion better suited for our forums than the top of a news post.
  • PeachNCream - Thursday, March 7, 2024 - link

    It is their site, and if they don't want comments discussing its decline in readership and content quality, they have every right to delete those comments to control the damage.
  • Kevin G - Wednesday, March 6, 2024 - link

    I wonder, for custom solutions, whether more exotic implementations are possible. For example, independent read and write buses to remove turn-around latencies. Another radical idea would be to implement custom HBM as SRAM using a leading-edge node (<5 nm). Despite a single SRAM cell needing multiple transistors, SRAM is about to eclipse the densities of DRAM with its single-transistor, single-capacitor cell. DRAM process nodes simply have not kept up with logic.
  • bananaforscale - Wednesday, March 13, 2024 - link

    There's apparently something about SRAM that doesn't shrink well with node improvements.
