12 Comments
ballsystemlord - Thursday, June 29, 2023
If it doesn't transfer 2 bits per clock then it's not DDR RAM (Double Data Rate).
Yojimbo - Thursday, June 29, 2023
I believe the article is imprecise. It should read that GDDR6X transmits 2 bits per clock edge and GDDR7 transmits 1.5 bits per clock edge, rather than bits per clock. Both standards are DDR, as both transmit on the rising and falling edges of the clock cycle.
Ryan Smith - Thursday, June 29, 2023
This is correct; it was meant to be a discussion on clock edges. I've updated the article to clarify.
Zoolook - Tuesday, July 11, 2023
This sentence in the 5th paragraph should be updated as well: "which allows it to transfer three bits of data over a span of two cycles." Two cycles would be four edges and thus 6 bits.
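A minimal sketch of the bits-per-edge arithmetic in this exchange, assuming the two-edges-per-clock (DDR) framing used above; the helper function and names are illustrative only, not from the article:

```python
import math

def bits_per_cycle(bits_per_edge: float, edges_per_cycle: int = 2) -> float:
    """Data bits transferred in one clock cycle (DDR: two edges per cycle)."""
    return bits_per_edge * edges_per_cycle

# GDDR6X uses PAM4 signaling: 4 levels per symbol -> log2(4) = 2 bits per edge.
gddr6x_bits_per_edge = math.log2(4)              # 2.0
# GDDR7 uses PAM3 signaling: two 3-level symbols carry 3 bits (3**2 = 9 >= 2**3 = 8),
# i.e. 1.5 bits per edge.
gddr7_bits_per_edge = 3 / 2                      # 1.5

print(bits_per_cycle(gddr6x_bits_per_edge))      # 4.0 bits per clock cycle
print(bits_per_cycle(gddr7_bits_per_edge))       # 3.0 bits per clock cycle
print(2 * bits_per_cycle(gddr7_bits_per_edge))   # 6.0 bits over two cycles (four edges)
```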
nandnandnand - Thursday, June 29, 2023
I'd like to see HBM become standard in consumer products. I also want a free pony and no more existential dread.
Kurosaki - Thursday, June 29, 2023
It was, for a period... I have owned an HBM card; it was top tier and cost me $300 new from the store.
This GDDR7 tech, you know what it will do, right? It will make Nvidia and AMD lower their bus widths to 96 bits and maintain performance while gouging the price a bit more. The next 5060 will perform worse than the by-then very old 2060 and cost twice as much.
Just guessing here based on the previous 8 years of GPU development...
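A rough sketch of the bandwidth arithmetic behind that narrower-bus scenario; the per-pin data rates below (18 Gbps for GDDR6, 32 Gbps for GDDR7) and the function name are assumptions for illustration, not figures from the article:

```python
def peak_bandwidth_gbs(bus_width_bits: int, per_pin_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: data pins * per-pin rate (Gb/s) / 8 bits per byte."""
    return bus_width_bits * per_pin_gbps / 8

print(peak_bandwidth_gbs(128, 18))  # 288.0 GB/s -- a 128-bit GDDR6 card at an assumed 18 Gbps
print(peak_bandwidth_gbs(96, 32))   # 384.0 GB/s -- a hypothetical 96-bit GDDR7 card at an assumed 32 Gbps
```

So, on paper, a narrower 96-bit GDDR7 bus could match or exceed today's 128-bit GDDR6 bandwidth, which is the trade-off Kurosaki is describing.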
GreenReaper - Thursday, June 29, 2023
But think of the power efficiency!
sheh - Thursday, June 29, 2023
The 4060 is 50% faster than the 2060, and with 33% more RAM: https://www.techpowerup.com/review/galax-geforce-r...
The 5060 isn't likely to be slower.
StevoLincolnite - Saturday, July 1, 2023
The 2070 was released in 2018. It's 5 years old and the 4060 is what... 10-20% faster? What a joke.
sonny73n - Monday, July 3, 2023
https://m.youtube.com/watch?v=WLk8xzePDg8
I completely agree with Kurosaki.
Yojimbo - Thursday, June 29, 2023
You should be thanking your lucky stars that consumer GPUs don't use HBM right now. If they did, there would be a shortage of them because of the AI frenzy going on. TSMC only has so much CoWoS capacity, and people making consumer GPUs with it would have a hard time getting capacity over the big data center GPUs that sell for much, much more money.
ballsystemlord - Thursday, June 29, 2023
You have to admit, he has a point. Though I too think it best that the top-tier cards utilize HBM. In particular, AMD's LLC makes GPU RAM usage much more efficient.