15 Comments
Dante Verizon - Friday, June 7, 2024 - link
I loved the clueless parallel between GPU performance and the memory technology used, haha.

Yojimbo - Friday, June 7, 2024 - link
They are not talking about GPU performance, but workload performance, in particular LLM inference. When a workload is memory-bandwidth bound, you get more performance by increasing memory bandwidth. As GDDR7 has higher bandwidth than GDDR6, and LLM inference is often memory-bandwidth bound, it's not hard to accept their slide.
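To make the bandwidth-bound argument concrete, here is a rough back-of-the-envelope sketch in Python; the model size and bandwidth figures are illustrative assumptions, not numbers from Micron's slide.

```python
# Roofline-style estimate: in single-batch LLM token generation, every
# weight must be streamed from memory once per generated token, so memory
# bandwidth, not compute, usually sets the ceiling on decode speed.

def max_tokens_per_sec(model_bytes: float, bandwidth_gb_s: float) -> float:
    """Upper bound on tokens/s when decoding is memory-bandwidth bound."""
    return bandwidth_gb_s * 1e9 / model_bytes

# Illustrative: a 7B-parameter model in FP16 is ~14 GB of weights.
model_bytes = 7e9 * 2

for name, bw in [("GDDR6-class board, ~700 GB/s", 700),
                 ("GDDR7-class board, ~1100 GB/s", 1100)]:
    print(f"{name}: at most ~{max_tokens_per_sec(model_bytes, bw):.0f} tokens/s")
```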
Dante Verizon - Friday, June 7, 2024 - link

https://images.anandtech.com/galleries/9483/Micron...
They're talking about games too, and appropriating the performance of GPUs as if it were a perfect parallel to memory tech.
Adramtech - Friday, June 7, 2024 - link
It's not a stretch to say that GDDR6X with PAM4 is one reason the 3080, 3090, 4080, and 4090 are flagship GPUs, and they don't use GDDR6.

BvOvO - Saturday, June 8, 2024 - link
That's not why the picture makes no sense, though. These GDDR6 and 6X boards also have different GPUs, and as such you cannot isolate GDDR performance.

Dante Verizon - Saturday, June 8, 2024 - link
Finally, someone with common sense.

Adramtech - Saturday, June 8, 2024 - link
They never claim it's just the memory in isolation; they say the best GDDR6 and 6X applications: the best GDDR6 application versus the best GDDR6X application. It makes sense as they displayed it.

Terry_Craig - Saturday, June 8, 2024 - link
This doesn't make sense in any parallel reality.

BvOvO - Sunday, June 9, 2024 - link
No, it doesn't make sense, as it doesn't provide any information at all, neither about the memory nor about the GPUs used. Why are you defending this, anyway?

Terry_Craig - Saturday, June 8, 2024 - link
Worst of all is the RT data: as we know, the XTX (which was visibly used there) does not have dedicated ASICs for this. This adds another layer of distortion to numbers that already say very little about Micron's product itself.
CiccioB - Tuesday, June 11, 2024 - link
It's not clueless. If you want a fast GPU you need fast RAM, or your GPU will starve for data whatever the size of the cache you put on it.
RAM speed depends on the technology used.
PAM encoding is a step beyond the now-insufficient NRZ coding mechanism. If you want fast transfers with decent current draw, you need PAM.
Yes, you can obtain the same performance using NRZ encoding, but you'll need a higher frequency, and thus higher signal levels with higher power consumption.
So the technology used for the RAM determines features such as speed and efficiency, and really fast GPUs need them, or they would not be so fast.
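A small sketch of the NRZ-versus-PAM arithmetic; the 4-level PAM4 used by GDDR6X serves as the example, and the symbol rate is an illustrative assumption.

```python
import math

# NRZ signals with 2 voltage levels (1 bit per symbol); PAM4 signals with
# 4 levels (2 bits per symbol). At an equal symbol rate, PAM4 carries twice
# the data; equivalently, NRZ must run at twice the frequency to keep up.

def data_rate_gbps(symbol_rate_gbaud: float, levels: int) -> float:
    return symbol_rate_gbaud * math.log2(levels)

baud = 10.0  # illustrative per-pin symbol rate in Gbaud
print(f"NRZ  @ {baud:.0f} Gbaud: {data_rate_gbps(baud, 2):.0f} Gb/s per pin")
print(f"PAM4 @ {baud:.0f} Gbaud: {data_rate_gbps(baud, 4):.0f} Gb/s per pin")
# Matching PAM4's 20 Gb/s per pin with NRZ needs 20 Gbaud signaling,
# i.e. the higher frequency and power cost the comment describes.
```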
Papaspud - Saturday, June 8, 2024 - link
Basically, they will be shipping new, faster products, and they expect to make significant inroads into the HBM market. I live near their HQ in Boise, Idaho; they have been building a $15 billion expansion to the complex. I would assume if they are/were already investing that much money, they are pretty confident about the products coming down the pipeline. It looks like a pretty decent upgrade, but everything is all about AI... I personally, at this point, don't care about AI.

charlesg - Sunday, June 9, 2024 - link
AI is the latest hype to funnel gazillions of dollars into. IMHO it should be called FAI - fake artificial intelligence. So far it's not panning out to be very intelligent.
Regurgitating data with no concept of reality is not intelligence.
If I were a betting man, I'd say this FAI will become the new search engine.
Better, but still a search engine.
FunBunny2 - Monday, June 10, 2024 - link
"If I were a betting man, I'd say this FAI will become the new search engine."

* another vote for higher-order thinking - something 'college' used to instill in students, until the evangelical radical right wingnuts insisted on religious purity and vocational training as the only vectors that should exist.
AI, as currently implemented, is just a massive correlation matrix, and as such, benefits from massive amounts of memory. QED.
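As a rough illustration of the memory appetite the comment describes, here is a small sketch of model weight footprints; the parameter counts and precisions are illustrative assumptions.

```python
# A model's weight footprint is parameter count times bytes per parameter.
# Large models quickly exceed the capacity of a single consumer GPU, which
# is part of why HBM and bigger GDDR stacks are in demand.

BYTES_PER_PARAM = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_gb(params_billions: float, precision: str) -> float:
    """Weight memory in GB: billions of params * bytes per param."""
    return params_billions * BYTES_PER_PARAM[precision]

for params in (7, 70):
    for precision in ("fp16", "int8"):
        print(f"{params}B params @ {precision}: "
              f"~{weight_gb(params, precision):.0f} GB of weights")
```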
haukionkannel - Monday, June 10, 2024 - link
Let's see how much GDDR7 increases GPU prices!