  • thestryker - Wednesday, July 19, 2023 - link

    If we're going to keep seeing memory bus widths shrink, then we really need to see 32Gb capacity chips.
  • Ryan Smith - Wednesday, July 19, 2023 - link

    We're not currently expecting to see smaller memory buses. Both AMD and NVIDIA were able to enjoy one-off benefits of larger caches. That's not a trick that either can repeat any time soon, given the transistor cost.
  • thestryker - Wednesday, July 19, 2023 - link

    It's not so much that I'm worried about buses getting even smaller as that they've already gotten small, and VRAM capacity is directly limited by bus width. You'd be looking at 4080 bandwidth on a 192-bit bus, but 12GB of VRAM is already questionable for the 4070 Ti and would be utterly unacceptable for something with 4080-level performance. Maybe they'll increase bus widths to grow the VRAM pool, but I'm just not willing to trust that to happen, especially with NVIDIA.
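    To put rough numbers on it, here's a quick Python sketch (assuming the standard GDDR layout of one memory chip per 32-bit channel, which is how current cards are wired):

        def vram_gb(bus_width_bits, chip_density_gbit):
            # One GDDR chip per 32-bit slice of the bus, so chip count
            # is fixed by the bus width.
            chips = bus_width_bits // 32
            # Chip density is quoted in gigabits; divide by 8 for GB.
            return chips * chip_density_gbit / 8

        print(vram_gb(192, 16))  # 12.0 -> 192-bit bus, today's 16Gb chips (4070 Ti)
        print(vram_gb(192, 32))  # 24.0 -> same bus with hypothetical 32Gb chips

    So capacity is locked to bus width until denser chips actually ship.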
  • brucethemoose - Wednesday, July 19, 2023 - link

    Yeah, this is worrying me.

    GPU VRAM is my #1 priority for messing with AI.
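    For anyone wondering why: model weights alone set a hard floor. A rough sketch (weights only; activations and KV cache push the real requirement higher):

        def model_vram_gb(params_billions, bytes_per_param=2):
            # fp16/bf16 weights take 2 bytes per parameter; this ignores
            # activations, optimizer state, and KV cache entirely.
            return params_billions * bytes_per_param

        print(model_vram_gb(7))   # 14 -> a 7B model barely fits in 16GB
        print(model_vram_gb(13))  # 26 -> a 13B model needs 24GB+ or quantization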
  • osmarks - Wednesday, July 19, 2023 - link

    NVIDIA not adding more VRAM is clearly deliberate. Even if bigger memory chips were available, they wouldn't use them. Note that their workstation/professional cards have more VRAM than gaming cards with equivalent compute.
  • brucethemoose - Saturday, July 22, 2023 - link

    Well, I'm happy to buy from a competitor (probably Intel, from the look of things).
  • sheh - Wednesday, July 19, 2023 - link

    Maybe GPUs could also start to adopt non-power-of-2 memory chip sizes:
    https://www.tomshardware.com/news/intel-alder-lake...
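    For illustration, keeping the same one-chip-per-32-bit-channel layout as above, hypothetical 24Gb (3GB) chips would land neatly between the power-of-two options:

        # Hypothetical 24Gb (3GB) chips across common GPU bus widths,
        # one chip per 32-bit channel:
        for bus_bits in (128, 192, 256):
            chips = bus_bits // 32
            print(f"{bus_bits}-bit bus: {chips * 3}GB")  # 12GB, 18GB, 24GB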
  • iwod - Wednesday, July 19, 2023 - link

    Yes, we need at least double the capacity. For CUDA workloads we need even more.
  • meacupla - Friday, July 21, 2023 - link

    Can't the memory chips be stacked onto the same lanes?
    I'm pretty sure the 4060 Ti 16GB model reaches 16GB by placing another four 2GB chips on the back of the PCB, rather than using four 4GB chips.
  • thestryker - Friday, July 21, 2023 - link

    Yes they can, and that's part of why the 16GB version costs so much more. The PCB and manufacturing likely cost more than the additional VRAM does. This isn't a viable option for lower-end cards, since the additional cost either eats into margins (which companies aren't going to give up) or raises prices a stupid amount.
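    For the curious, that's GDDR "clamshell" mode: two chips share each 32-bit channel at 16 bits apiece, so capacity doubles while bandwidth stays put. A quick sketch of the 4060 Ti case:

        def clamshell_vram_gb(bus_width_bits, chip_density_gbit, clamshell=False):
            # Normal mode: one chip per 32-bit channel. Clamshell: two
            # chips share a channel at 16 bits each, doubling capacity
            # without adding any bandwidth.
            chips = (bus_width_bits // 32) * (2 if clamshell else 1)
            return chips * chip_density_gbit / 8

        print(clamshell_vram_gb(128, 16))                  # 8.0  -> 4060 Ti 8GB
        print(clamshell_vram_gb(128, 16, clamshell=True))  # 16.0 -> 4060 Ti 16GB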
  • brucethemoose - Wednesday, July 19, 2023 - link

    > AI and networking accelerators

    But which AI accelerators?

    Tenstorrent is switching to LPDDRX last I checked, and... I don't know of others that ever had plans to use GDDRX?
