82 Comments

  • Slash3 - Thursday, December 23, 2021 - link

    Process Lasso, not Project Lasso. ;)

    (This has happened in previous articles, too)
  • futrtrubl - Thursday, December 23, 2021 - link

    Interesting that 5800 seems consistently better in these tests. I wonder if there is a timing/ratio related reason for that.
  • evilspoons - Thursday, December 23, 2021 - link

    Consistently worse, no? The times are longer and the frame rates are lower.
  • meacupla - Thursday, December 23, 2021 - link

    I would like to see iGPU performance at the various speeds.

    Now, I know the iGPU on desktop Alder Lake is poor, but AMD 6000 APUs are right around the corner, and I would like to see how well the iGPU scales on DDR5 in general.
  • gagegfg - Thursday, December 23, 2021 - link

    There is almost no difference. The UHD 770 has very poor performance and does not scale as well with higher bandwidth as AMD's iGPUs do.
  • meacupla - Thursday, December 23, 2021 - link

    well that's a shame
  • Wrs - Thursday, December 23, 2021 - link

    AMD's iGPU hardly scales on DDR4 - can't really tell it apart from CPU scaling or even run-to-run variance.
  • praeses - Thursday, December 23, 2021 - link

    They typically see a 10% performance increase going from 3200 to 4000 with similar timings; the delta grows if you're comparing loose 3200 and tight 4000 timings.
  • Samus - Thursday, December 23, 2021 - link

    I suspect the memory bus is the limiting factor with AMD iGPUs, as all of their recent memory architectures (going back to at least Polaris) were 128-bit+ GDDR5 or HBM.

    The rare, nerfed examples paint a clearer picture: the only Polaris desktop GPU on a 64-bit bus was the R7 435 (I think), and it had DDR3 at 2GHz. It was slower than most APUs at the time and remains one of those recent desktop cards that shouldn't ever have existed for desktop PCs. There just aren't many 64-bit cards, especially on DDR3, that return reasonable gains over integrated graphics; both are going to be so underperforming that neither will play games reasonably well.
  • TheinsanegamerN - Tuesday, December 28, 2021 - link

    I think that mostly comes down to AMD's memory controller then, considering those APUs are running on dual-channel 128-bit memory buses.

    DDR4 just isn't that good for GPUs, easily demonstrated with the GT 1030 which, despite its lack of power, was severely hamstrung on DDR4 vs 5.
  • bananaforscale - Wednesday, December 29, 2021 - link

    That's DDR4 vs GDDR5, not DDR4 vs DDR5. GDDR is dual ported and can be read from and written to at the same time.
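
    For context on why the DDR4 GT 1030 fell so far behind its GDDR5 sibling: most of the gap is raw peak bandwidth. A rough sketch of the arithmetic, assuming the commonly cited figures of a 64-bit bus at roughly 6 Gbps per pin for the GDDR5 card and roughly 2.1 Gbps for the DDR4 card (illustrative numbers, not measurements from this article):

        # Peak memory bandwidth = bus width in bytes x transfers per second.
        # The data rates below are commonly cited figures for the two GT 1030
        # variants and are used purely for illustration.
        def peak_bandwidth_gbs(bus_width_bits: int, data_rate_mtps: float) -> float:
            return (bus_width_bits / 8) * data_rate_mtps / 1000  # GB/s

        print(peak_bandwidth_gbs(64, 6000))  # GDDR5 variant: ~48 GB/s
        print(peak_bandwidth_gbs(64, 2100))  # DDR4 variant:  ~16.8 GB/s

    Roughly a 3x bandwidth deficit, which lines up with how badly the DDR4 card underperformed.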
  • 29a - Sunday, January 2, 2022 - link

    Do you have a source for that, for those of us who don't want to take your word? Also, this article is about DDR5 memory scaling, so I don't know why they half-assed the article and didn't do iGPU testing.
  • gagegfg - Thursday, December 23, 2021 - link

    I do not understand the gaming tests: a top-of-the-range processor and latest-generation memory combined with a GPU that is almost 5 years old, at 4K and 1440p resolutions. That analysis is really meaningless; there is a terrible bottleneck.
  • HammerStrike - Thursday, December 23, 2021 - link

    Yeah, it’s baffling. I get that the GPU shortage situation is an issue (although you can buy GPUs, just at inflated prices, not to mention the industry contacts they have), but to test at 1440p / 4K at high settings on a 1080 is just… ignorant?

    Worst case, if you are stuck with the 1080, test at 720p low settings to make the CPU and RAM as much of a bottleneck as possible. What they did is just a waste of time.
  • Ian Cutress - Thursday, December 23, 2021 - link

    Ever tried going to a GPU vendor, asking for 2-4+ of the same high-end GPUs (for concurrent testing), during a shortage, saying you can't promise them a review, just for testing? Even with the CPU stuff, it happens once every two generations, maybe, so trying to get one for our motherboard guy is nearly impossible. No, we don't have the budget. People complain that I'm running RTX 2080 Ti cards on my CPU reviews. Either we run what we have and the article is written in that context, as mentioned right there on page one no less, or we don't run anything at all.
  • andr_gin - Thursday, December 23, 2021 - link

    I understand your problem about not having high end GPUs, but why not at least reduce resolution to 720p?
  • Ryan Smith - Thursday, December 23, 2021 - link

    That's an unforced error on our part. We're going to run the numbers for 720p and update the article. Thank you for the feedback.
  • Oxford Guy - Friday, December 24, 2021 - link

    More important is DDR-4 data for comparison, particularly at low latencies and nothing below 3200 speed.
  • shabby - Saturday, December 25, 2021 - link

    I find it hard to believe you can't source a video card; no other reviewers have this issue, and I bought two myself this year.
  • Ooga Booga - Tuesday, December 28, 2021 - link

    1) Go to Oxford for transistor studies
    2) See moron youtubers become millionaires running canned benchmarks
    3) Still refuse to leave Anandtech
  • TheinsanegamerN - Tuesday, December 28, 2021 - link

    What is the holdup on video card reviews? I know there was that California fire last year, but that was over a year ago now.
  • gagegfg - Thursday, December 30, 2021 - link

    The chip shortage is global and not just GPUs. A hardware tester like you should not have problems acquiring a GPU to do these tests; otherwise, you have a serious public relations problem.
    And returning to the point of my criticism: those who understand why CPU scaling is tested in games know it is not only about FPS, but also about evaluating longevity for future GPU upgrades and which CPU will become a bottleneck sooner than another. Today all CPUs can run games at 4K, and that's not news to anyone.
    In short, if you cannot get a decent GPU to test a top-of-the-range CPU and its limitations with different hardware combinations, try to eliminate the GPU bottleneck with 720p/1080p resolutions or by lowering scene detail. That is the correct way to test the other factors and not the GPU itself.
    This criticism is constructive and is not intended as an attack.
  • TheinsanegamerN - Monday, January 3, 2022 - link

    Erm, you do know that other reviewers have been able to get ahold of GPUs for testing, right? If you don't have the budget for the stuff you need to do your job, can't get the stuff you need to do your job, and find excuses not to do the reviews your viewer base wants to see, that says to a lot of people that AnandTech is being mismanaged into the ground.

    Come to think of it, these same excuses were used when the 3080 was never reviewed, alongside "well we have one guy who does it and he lives near the fires in Cali". That was a year and a half ago.

    Perhaps AnandTech presenting excuses instead of articles is why you can't get companies to send you hardware? Just a thought.
  • Azix - Monday, January 10, 2022 - link

    I can understand manufacturers being less likely to send out a GPU if they aren't guaranteed publicity. The key is that he said just for testing, not necessarily for a review. Most other reviewers are given hardware for marketing purposes.
  • zheega - Thursday, December 23, 2021 - link

    I didn't even notice that at first; I just assumed that they would get rid of the GPU bottleneck. How amazingly weird.
  • thestryker - Thursday, December 23, 2021 - link

    The vast majority of people play at the highest playable resolution for the hardware they have, which means they're GPU bound no matter what their GPU is. The frame rates in the review are perfectly playable and indicate the amount of variation one could expect for a mostly GPU-bound situation. None of the titles are esports/competitive games where you'd need to be maxing out frame rate, so even if they were using a 3090 it'd be a pointless reflection for that.

    So while the metrics aren't perfect from a scaling-under-ideal-circumstances perspective, they're perfectly fine for practicality.
  • haukionkannel - Friday, December 24, 2021 - link

    True. I don't have the latest PC hardware and still play at 1440p at the highest settings. So I can confirm that this is the way to test to see what we see in real-world situations...
  • Ooga Booga - Tuesday, December 28, 2021 - link

    Because they haven't done meaningful GPU stuff in years; it all goes to Tom's Hardware. Eventually the card they use will be 10 years old, if this site is even still around.
  • TimeGoddess - Thursday, December 23, 2021 - link

    If you're gonna use a GTX 1080, at least try to do the gaming benchmarks at 720p so that there is actually a CPU bottleneck.
  • Ian Cutress - Thursday, December 23, 2021 - link

    You know how many people complain when I run our CPU reviews at 720p resolutions? 'You're only doing that to show a difference'.
  • milli - Thursday, December 23, 2021 - link

    Are you going to start doing things the way idiots like it? Even when it's clearly wrong?
    Just because they don't understand it, doesn't mean you have to adhere to their wishes.
    Anybody can go to Linus if they want just a show.
  • Oxford Guy - Friday, December 24, 2021 - link

    I’m with Dr. Cutress on this. Anything below 1080 is silly to bother testing in 2021.
  • TheinsanegamerN - Tuesday, December 28, 2021 - link

    Unless you're testing memory scaling or CPU scaling, which is what is being tested here.

    720p/768p has a much larger pool of users than 4K does, so if we're going by popularity, then why ever bother testing at 4K?
  • Oxford Guy - Tuesday, December 28, 2021 - link

    Mobile casual gaming has a larger pool of users than 1080 on PCs. That doesn't mean tests that are relevant for mobile devices are worth doing on PCs with decent discrete GPUs.

    4K testing has always been relevant as a demonstration of the wall one hits due to diminishing returns.
  • TheinsanegamerN - Monday, January 3, 2022 - link

    And 720p testing is also relevant as a demonstration of theoretical CPU scaling in games, yet people seem to have a hard-on for demanding its removal from CPU scaling reviews, for some reason.
  • YukaKun - Thursday, December 23, 2021 - link

    What if you disable the E-cores and check the ring bus speeds relative to the RAM and the benchmark numbers?

    I have a feeling this may be a similar pain point to Zen and the IF.

    Regards.
  • thestryker - Thursday, December 23, 2021 - link

    This would be very interesting to check out, and if not on the whole suite, maybe just something like WinRAR, where we're seeing a lot of scaling already.
  • haukionkannel - Friday, December 24, 2021 - link

    HU did test that and disabling E-cores did not help. 1% difference...
  • YukaKun - Friday, December 24, 2021 - link

    They didn't do RAM scaling though. And only focused on games.

    Regards.
  • mikk - Thursday, December 23, 2021 - link

    What the hell, 4K and 1440p with a GTX 1080! This is all heavily GPU bound; this RAM scaling test is so useless. What a bad test, I'm speechless.
  • Makaveli - Thursday, December 23, 2021 - link

    I'd have to agree; I'm not sure about the component choices here. TechSpot's article today on the same subject provides a little more useful information when it comes to games.

    https://www.techspot.com/review/2387-ddr4-vs-ddr5/
  • Notmyusualid - Thursday, December 23, 2021 - link

    I'd have to disagree.

    7 of the top 10 GPUs by usage in this month's Steam H/W survey are based on Nvidia's Pascal architecture.

    So clearly, for most gamers, the results contained here are relevant.

    As to your other comment - my buddy runs 3440x1440 on his Seahawk (w/c) 1080, and it runs very well indeed.
  • Makaveli - Thursday, December 23, 2021 - link

    3440x1440 with a 1080 at ultra settings?

    At 144Hz? And what games? That comment doesn't have any details.
  • mikk - Thursday, December 23, 2021 - link

    By this logic they shouldn't even bother testing a 12900K or any other ADL-S; the market share on Steam is even lower, much lower. What you are saying makes no sense. The chance someone combines a new 12900K with an old GTX 1080 is rather slim; only AnandTech does this. This RAM test under a heavy GPU limit is a complete waste of time. A typical AnandTech test.
  • Oxford Guy - Friday, December 24, 2021 - link

    I have a Fury X that I obtained used for a low price, coupled with a 9700K. Not everyone is willing to pay obscene prices for PC GPUs.
  • TheinsanegamerN - Monday, January 3, 2022 - link

    The price people are willing to pay has absolutely nothing to do with scaling performance reviews. Talk about a red herring argument.
  • TheinsanegamerN - Tuesday, December 28, 2021 - link

    Yes, but this is not a GPU test; it is a RAM scaling test. Running GPU-limited scenarios is not going to provide useful data for RAM scaling.
  • Oxford Guy - Friday, December 24, 2021 - link

    It’s fine data, hardly useless. It’s incomplete, though.
  • dicobalt - Thursday, December 23, 2021 - link

    DDR5's latency is higher than 1970s Ozzy Osbourne.
  • bananaforscale - Thursday, December 30, 2021 - link

    Those are cycles, not time.
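
    To put numbers on the cycles-versus-time point: first-word latency in nanoseconds is the CAS latency in cycles divided by the memory clock, where the memory clock is half the quoted data rate. A minimal sketch with a few illustrative kits (common retail speed/CL pairings, not necessarily the kits tested in the article):

        # CAS latency in ns = CL cycles / memory clock (MHz) * 1000,
        # where the memory clock is half the data rate (DDR transfers twice per clock).
        def cas_latency_ns(data_rate_mtps: int, cl: int) -> float:
            return cl / (data_rate_mtps / 2) * 1000

        print(cas_latency_ns(3200, 16))  # DDR4-3200 CL16 -> 10.0 ns
        print(cas_latency_ns(4800, 40))  # DDR5-4800 CL40 -> ~16.7 ns
        print(cas_latency_ns(6000, 36))  # DDR5-6000 CL36 -> 12.0 ns

    So DDR5's bigger CL numbers are partly offset by the higher clock; the absolute latency penalty is real, but smaller than the raw cycle counts suggest.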
  • Targon - Thursday, December 23, 2021 - link

    Looking at these numbers, the fact that DDR5-5800 and DDR5-5000 both seem to have a performance penalty while the other numbers aren't all that different implies that something in the Alder Lake design, or the BIOS, or SOMETHING isn't making very good use of the memory.

    Some may just chalk it up to the RAM, but I suspect it has more to do with Alder Lake itself supporting both DDR4 and DDR5 memory. At that point, I suspect the memory controller on the CPU and how it links to the CPU cores is at fault. If the memory controller were actually making better use of the changes in DDR5 compared to DDR4 (dual 32-bit channels per clock eliminating certain wait states, as one example), then we SHOULD see a definite improvement with better memory and not this trivial difference.

    It will be interesting to see if AMD's Zen 4 Ryzen shows better scaling between memory speeds, because if it does, that will show very clearly how poorly Intel implemented DDR5 memory support. Just getting it working isn't the same as taking advantage of the benefits.
  • mikk - Thursday, December 23, 2021 - link

    They didn't test the RAM; it's a GPU test. It's called being GPU limited.
  • haukionkannel - Friday, December 24, 2021 - link

    AMD most likely uses more cache, so the difference will be smaller.
  • felixbrault - Thursday, December 23, 2021 - link

    Why is AnandTech still using Windows 10 for testing?!?
  • Ryan Smith - Thursday, December 23, 2021 - link

    Windows 11 has been very, er, "quirky" to put it politely. We're keeping an eye on it and running it internally, but thus far we've found it to be rather inconsistent on performance benchmarks.

    In the interim, even when it behaves itself and doesn't halve our results for no good reason, we just end up with results similar to Windows 10. So there's no net benefit to using it right now.
  • Oxford Guy - Friday, December 24, 2021 - link

    Probably because Windows 11 is worse than 10.
  • Samus - Thursday, December 23, 2021 - link

    So basically the same story as ever: good timings mean almost as much as frequency, and paying ultra premiums for frequency still doesn't net reasonable returns on the investment.

    As always, just stick with reliable, quality memory at JEDEC speeds and invest the savings elsewhere.
  • Oxford Guy - Friday, December 24, 2021 - link

    ‘quality memory at JEDEC speeds’

    Lol, no. Certainly not with a CPU like Zen 1. There was a huge difference between JEDEC and 3200-speed DDR-4, without a huge cost increase. Zen 3 is optimal at 3600. I would think you’re aware of the fabric speed being tied to the RAM speed, hence there being a price-performance sweet spot. That spot hasn’t been JEDEC for a long time.

    Your point may apply to early-adopter RAM at most. Once the DDR-5 market is more mature... In this situation it looks like getting Alder Lake with DDR-4 capable of low latencies is most optimal. Too bad there’s no data on that here.
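
    For readers unfamiliar with the fabric/RAM coupling mentioned above, here is a quick sketch of why DDR4-3600 is usually quoted as the Zen 3 sweet spot. It assumes the often-quoted ~1900 MHz FCLK ceiling for typical samples, which is a rule of thumb rather than a guaranteed spec:

        # Zen 2/3 run best with FCLK : UCLK : MCLK at 1:1:1, where MCLK is half
        # the DDR4 data rate. Once MCLK exceeds the attainable FCLK, the coupled
        # 1:1 mode is lost and the controller typically falls back to a 2:1 ratio,
        # which adds a latency penalty.
        FCLK_CEILING_MHZ = 1900  # assumed typical ceiling; varies by sample

        def fabric_mode(ddr4_data_rate: int) -> str:
            mclk = ddr4_data_rate // 2
            ratio = "1:1" if mclk <= FCLK_CEILING_MHZ else "2:1 (latency penalty)"
            return f"DDR4-{ddr4_data_rate}: MCLK {mclk} MHz, fabric {ratio}"

        for rate in (3200, 3600, 4000):
            print(fabric_mode(rate))

    Under that assumption, DDR4-3600 is the highest common speed that still keeps the fabric coupled 1:1, which matches the sweet spot described above.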
  • TheinsanegamerN - Monday, January 3, 2022 - link

    JEDEC DDR3 was 1066, and later updated to 1333, when everyone was running 1866 or 2133.
    JEDEC DDR4 is 2133, up to 2666, when 3200-3600 is the performance sweet spot.
  • haukionkannel - Friday, December 24, 2021 - link

    Well, it seems that we are not memory-speed bottlenecked in the current state...
    And timings seem to matter more than pure bandwidth.
  • Spunjji - Friday, December 24, 2021 - link

    Not worth it yet, then. Go go DDR4!
  • mirancar - Friday, December 24, 2021 - link

    Hello guys,

    Can we also test some web browser (JavaScript) performance? I think most people are either running games or using the web browser all the time. (Sadly) even a lot of applications are now made with a web browser engine.

    Also, the SPEC int/fp suite could be useful, or even just Geekbench 5.

    The gaming benchmarks also seem a bit flawed, the way they were run with the GTX 1080; would it even make a difference even if using some older hardware? I think it would be better to find games which have a CPU bottleneck and scale well with the actual CPUs used. Can you get the same performance, for example, with a 12700K + DDR5-6000 vs a 12900K + DDR5-4800?
  • Oxford Guy - Friday, December 24, 2021 - link

    ‘What's very interesting from our testing is when we went low latency at DDR5-4800 with CL32 latencies. Tightening up the primary latencies at the same frequency netted us an additional 6.4% jump in performance, which shows there that increasing frequency isn't the only way to improve overall performance.’

    So that DDR-5 7000 mentioned may not be ‘extremely fast’ in the real world.

    The lack of DDR-4 with low latencies at various speeds here seems to be an oversight, in terms of providing necessary context. Consider a follow-up that includes the DDR-4 data.
  • ruthan - Friday, December 24, 2021 - link

    Nice, but without DDR4 for comparison, and the price/performance ratio is not great.
  • Tchamber - Saturday, December 25, 2021 - link

    I remember when they used to make low-latency chips. My first DDR3 was for an i7 920, and I had some super, super low latency memory that was expensive, but at least it was an option. It would be nice to see them make that available.
  • TheinsanegamerN - Monday, January 3, 2022 - link

    It is available. DDR4-4000 CL15 memory is a thing.
  • Silver5urfer - Saturday, December 25, 2021 - link

    Skip the entire ADL generation, tbh. If you are running X470 or Z390 and up, there's no need to upgrade to this experimental platform. You pay more for single-digit gains. Not worth it. Skip this and Zen 4, get Zen 5 and then Intel. By that time PCIe 5.0 and DDR5 will have 2 years of maturity.
  • Wereweeb - Sunday, December 26, 2021 - link

    A comment by Silver5urfer that is actually reasonable; thank you for this Christmas gift.
  • Kvaern1 - Tuesday, December 28, 2021 - link

    This just confirms what has always been true:
    Expensive high-end RAM has no noticeable effect on most use cases.
  • Oxford Guy - Tuesday, December 28, 2021 - link

    That depends. If one is comparing JEDEC 2133 CL15 DDR4 with B-die running at 3200 CL14 on Zen, there is a big performance difference in games. For example, take a look at the Reddit topic 'I compared 3200Mhz ram vs 2133 in 12 games'.

    Going after MHz sees diminishing returns set in quickly after a certain point. With Zen 3, the optimum MHz is 3600. With Zen 2 it's probably 3200 with that low command rate of 14. Zen 1 also paid dividends via running 3200 RAM instead of slower RAM.
  • bananaforscale - Thursday, December 30, 2021 - link

    Command rate of 14? Shirley you mean latency of 14.
  • Oxford Guy - Saturday, January 1, 2022 - link

    Yes. Posting with the flu leads to mistakes.
  • Kvaern1 - Sunday, January 9, 2022 - link

    I wouldn't call DDR4 3200/3600 expensive high-end RAM. 3200 wasn't even expensive when I put it in my old Skylake build in early 2016. DDR5, OTOH, currently costs about twice as much as DDR4 for basically no gains outside of massively parallel computing realms.
  • Oxford Guy - Wednesday, January 12, 2022 - link

    It depends upon whether it's top-grade B-die or not. 3600 CL14 can be quite pricey.
  • throAU - Thursday, December 30, 2021 - link

    Uh.... why benchmark game performance at 4K with a GTX 1080 from ... 2016?

    Surely to benchmark CPU vs. memory performance with DDR5 you'd want a relevant GPU to pair with it? One that isn't being strangled at 4K, etc.
  • throAU - Thursday, December 30, 2021 - link

    Saw an earlier comment from Ian - no GPUs available.

    Well then, I guess don't run GPU-limited benchmarks as part of the memory scaling analysis. If you can't run the numbers legitimately, then don't run them. Have to say this is just not up to the usual AnandTech high standard we've come to expect over the past 2 decades.
  • Oxford Guy - Sunday, January 2, 2022 - link

    AMD and Nvidia are selling plenty of video cards for high prices that have less performance than a 1080. Cards with less performance than a 1080 and cards with equivalent performance continue to be brought to market.

    People have been conditioned to assume the only valid tests involve the most expensive cutting-edge consumer hardware because that hardware is typically provided by companies to try to get sales. I remember when this place had chart data points featuring triple-SLI GTX setups with the top Nvidia cards of the time. Who could afford that, and who could tolerate the noise and heat? Other sites have routinely done GPU tests with Intel CPUs that were really expensive. They say it's to eliminate bottlenecks, but when the only data involves luxury hardware, it can be less informative to the majority of buyers who aren't going to spend that kind of money. The same goes for using a video card like a Titan instead of one at a reasonable price point. Not only were cards in that line overpriced, they had a short market lifespan as I recall. Buying one was more about bragging rights than value for one's money.

    If the video card being used were a GTX 580 or something else that's totally irrelevant to contemporary gaming, then you could legitimately call the data illegitimate. What it actually is, is legitimate data that's not as complete as you'd like. I would particularly like to see DDR-4 performance in any article's charts about DDR-5's performance. Not having that doesn't make the data invalid, just less convenient.

    Many would be happy to have 1080-level performance given the current situation. Extremetech was actually recommending that people look at the ancient 290X due to the GPU pricing situation.

    Vega cards, which weren't so impressive (especially in performance-per-watt but also in performance-per-decibel) when they were new, are now in their second round of mining-induced gouging. The pricing for those used is preposterous, and yet that's the situation we're in.

    Unless you have quite a bit of disposable income for gaming, you're unlikely to have a 3080/3090 now or in the near future. It may be that testing with a 3090 would be more irrelevant than with a 1080, simply due to the small percentage of people who will own a card with that level of performance now or in the near future.
  • TheinsanegamerN - Monday, January 3, 2022 - link

    "People have been conditioned to assume the only valid tests involve the most expensive cutting-edge consumer hardware"

    People are smart enough to use their brains, and realize that to test scaling of a component you must remove every possible bottleneck not related to said component. Running a GPU-limited test on a memory scaling benchmark is utterly pointless.

    "AMD and Nvidia are selling plenty of video cards for high prices that have less performance than a 1080"

    Intel sells lots of CPUs that are slower than a 12900K. Should they have used a Pentium G6400 for these tests? How about 2133 MHz DDR4?

    See, sales numbers are not relevant to scaling tests. We are concerned with how well a certain part scales, not how well it sells. Why is this so hard for people to understand?
  • Oxford Guy - Tuesday, January 4, 2022 - link

    'See, sales numbers are not relevant to scaling tests.'

    If the tests don't match the product usage, the tests aren't relevant.
  • Tom Sunday - Tuesday, January 4, 2022 - link

    All said and done, I am still running DDR3 memory on my used and cobbled-together Dell XPS 730X from 2008. If I had the money now and a meaningful job, I would most certainly buy DDR5 and an Alder Lake setup and be happy.
  • rexocex - Tuesday, January 4, 2022 - link

    So for gamers, a DDR5-4800 CL32 kit is the best choice due to availability and pricing.
    One pain point remains: kits larger than 2x16GB simply do not exist, and across the motherboard lineup, 4-DIMM configurations are terrible for stability and can't even run the kits' stock XMP settings.
  • brunis.dk - Monday, January 24, 2022 - link

    Obviously low timings are king, and the slowest memory won out almost everywhere... especially on value for money. Bring on low timings at higher speeds!
