2 Comments
mczak - Tuesday, June 11, 2024 - link
Not sure if faster speeds would be immediately useful (the memory controllers of the GPUs / AI chips or whatever need to be ready for it as well), but if Hynix can ship larger chips earlier than the competition, that would probably be quite a big deal. But I remain sceptical...

Kevin G - Wednesday, June 12, 2024 - link
These align with various rumors of what next-generation graphics cards are using, and with similarly rumored timetables for their refreshes: 16 Gbit GDDR7 at 28 GT/s this year and 24 Gbit GDDR7 at >32 GT/s a little over a year later.

This may fare well for the various AI startups that can't source HBM due to being edged out by the larger players (NVIDIA, AMD, Intel, etc.). A 512-bit wide interface at 32 GT/s would provide 2 TB/s of bandwidth, on par with the PCIe version of the H100 but with more memory (128 GB vs. 80 GB). A boost to 40 GT/s and 24 Gbit capacity GDDR7 parts would rival the full H100 at much, much higher memory capacity. With the trend being toward massive models, capacity matters more than raw compute and bandwidth, so a smaller startup can be competitive here with enough interconnect IO to link things together and scale up.
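As a quick sanity check of the bandwidth figures in the comment above, peak GDDR bandwidth is just bus width times per-pin data rate, divided by 8 to convert bits to bytes. This is a back-of-the-envelope sketch (the function name and interface are made up for illustration):

```python
def gddr_bandwidth_gbps(bus_width_bits: int, data_rate_gts: float) -> float:
    """Peak memory bandwidth in GB/s for a GDDR-style interface.

    bus_width_bits: total interface width in bits (e.g. 512)
    data_rate_gts: per-pin data rate in GT/s (e.g. 32 for GDDR7)
    """
    return bus_width_bits * data_rate_gts / 8

# 512-bit bus at 32 GT/s -> 2048 GB/s, i.e. the ~2 TB/s cited above
print(gddr_bandwidth_gbps(512, 32))
# The faster 40 GT/s grade on the same bus -> 2560 GB/s
print(gddr_bandwidth_gbps(512, 40))
```

For comparison, 2048 GB/s matches the commenter's 2 TB/s figure for a 512-bit GDDR7 configuration.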