74 Comments

  • iwod - Thursday, June 28, 2018 - link

    When are we going to get faster memory? Haven't we been stuck with DDR4 for quite a while? Even DDR5 doesn't seem to scale well with an APU's needs.
  • Chaitanya - Thursday, June 28, 2018 - link

    The biggest problem is that most modules on the market are still stuck at JEDEC 2133 MHz, and there are hardly a handful of JEDEC 2666 MHz kits available. XMP sucks and needs to die soon. DDR5, it seems, is a capacity solution to DDR4 rather than a speed solution.
  • Maxiking - Thursday, June 28, 2018 - link

    I don't think you understand what's going on here. JEDEC is just the standard: a set of predetermined memory speed settings the BIOS will run RAM at. It doesn't matter whether the RAM is JEDEC 2133 or JEDEC 2666 MHz; both specifications are painfully slow. When you run above those JEDEC specs, you use XMP profiles. Nothing more, nothing less. If you set up RAM via XMP and copy the JEDEC specs, it will offer the same performance.

    If anything, it is JEDEC that sucks. It takes them years to update and create new memory standards, so DIMM manufacturers have to overclock on their own via XMP.
  • Samus - Thursday, June 28, 2018 - link

    DDR4 for too long? It’s only been around for 4 years!
  • Stuka87 - Thursday, June 28, 2018 - link

    It was released 4 years ago, but very little hardware used it. As I recall, only Haswell-E used it in 2014. 2016 is when DDR4 pretty much became the standard and mainstream chips were using it.
  • Andy Chow - Friday, June 29, 2018 - link

    How is JEDEC 2133 slow? It's 17 GB/s. If you run it in quad channel, it's 68 GB/s. I seriously doubt that bottlenecks most workloads. Just look at the 3DPM results: when you're actually doing calculations, faster memory actually decreases performance, probably because above the defined JEDEC standards the RAM and the memory controller aren't behaving perfectly with random I/O read/write queues. And I bet if you used registered 2133 DDR4, you would actually see a performance increase, even though there are 2-3 more controllers in the way.

    JEDEC is obviously very prudent and conservative when defining their specs, but by no means are their current specs slow. If your workload is simple and linear (GPU, compression, encryption), then DDR4 isn't the recommended RAM type, HBM is, and the HBM JEDEC specs pre-date the DDR4 ones. DDR4 is optimized for low random I/O latency, whereas HBM is optimized for sequential I/O bandwidth. Most datacenter and server workloads are I/O latency bottlenecked, not bandwidth bottlenecked, so I doubt the next DDR generations will increase prefetch sizes above what they already are, regardless of how that would benefit games, encryption, or compression (the last two are ASIC-solved in the corporate world).
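
    Where those figures come from - a rough back-of-envelope sketch, assuming the standard 64-bit (8-byte) DDR4 channel:

        # Peak theoretical DDR4 bandwidth, assuming a standard 64-bit (8-byte) channel.
        def ddr_bandwidth_gbs(transfer_rate_mts, channels=1, bus_bytes=8):
            """Peak bandwidth in GB/s for DDR-style memory."""
            return transfer_rate_mts * 1e6 * bus_bytes * channels / 1e9

        print(ddr_bandwidth_gbs(2133))              # ~17.1 GB/s, one channel of DDR4-2133
        print(ddr_bandwidth_gbs(2133, channels=4))  # ~68.3 GB/s in quad channel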
  • bananaforscale - Saturday, June 30, 2018 - link

    This. There are *very* few workloads that benefit from more than two channels. Heck, Ryzen 2 is less memory speed dependent than Ryzen was. Now, *latency* could go way down and it would benefit stuff.
  • invasmani - Tuesday, July 10, 2018 - link

    Not accurate. 2666 MHz CAS 9 on a good memory kit, for example, has an amazing performance index of 296. I run 2000 MHz CAS 7 with a performance index of 285; it's actually better overall than what the kit is rated for at DDR4-3200 CAS 16, by a long shot.
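
    The "performance index" here appears to be the common enthusiast rule of thumb of data rate divided by CAS latency; a rough sketch, not an official metric:

        # Enthusiast "performance index": effective data rate (MT/s) / CAS latency (cycles).
        def performance_index(data_rate_mts, cas_latency):
            return data_rate_mts / cas_latency

        print(performance_index(2666, 9))   # ~296
        print(performance_index(2000, 7))   # ~286 (the ~285 quoted above)
        print(performance_index(3200, 16))  # 200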
  • Ket_MANIAC - Friday, June 29, 2018 - link

    How long exactly have you been stuck with DDR4 and what would you do with DDR5 on a desktop, if I may?
  • Andy Chow - Friday, June 29, 2018 - link

    No. We had DDR3 for 11 years before DDR4 came out. DDR4 came out in 2014. DDR5 won't come out before 2019 at the earliest, the specs aren't even final as of today, so you don't really know what you are talking about.

    The reason that DDR ram will never be good with graphics compared to GDDR is because DDR is optimized for cisc operations while GDDR is optimized for risc operations. You could run GDDR on an apu (the ps4 does this), but this makes the CPU run slower.

    An apu will always be a sub-standard solution performance wise. It's a solution that aims to be good for low-cost, low-power consumption, or small form factor. It will always deliver mediocre performance.
  • GreenReaper - Saturday, June 30, 2018 - link

    Not sure CISC vs. RISC is right here - SIMD, sure, since that operates on large blocks of memory and so should be more suitable for GDDR's larger bus size.
  • peevee - Tuesday, July 3, 2018 - link

    Type of memory does not determine bus size.
    128-bit GDDR5 is exactly as wide as 2-channel DDR4 in all the cheap CPUs.
    But it is a little bit smarter - for example, it contains hardware clear operation - no need to write a whole lot of zeros...
  • close - Saturday, June 30, 2018 - link

    DDR3 has been in use since 2007. Adoption rate aside, the cycle reached a peak with DDR3's 7 year reign and it might come back down if DDR5 comes soon.

    DDR1 was announced in 2000, DDR2 in 2003, DDR3 in 2007, DDR4 in 2014. DDR5 is rumored for next year.
  • peevee - Tuesday, July 3, 2018 - link

    " DDR is optimized for cisc operations while GDDR is optimized for risc operations"

    What a load of BS... Learn, people, before writing.
  • niva - Tuesday, July 3, 2018 - link

    I always thought it was that GDDR was faster memory that can't be mass produced in quantities to satisfy the DRAM market, not that there was something fundamentally different about the memory. I also questioned that RISC vs. CISC statement, but simple Google searching reveals this: https://www.quora.com/What-is-the-difference-betwe...

    So perhaps that wasn't way off base.
  • Dragonstongue - Tuesday, July 3, 2018 - link

    The G in GDDR means GRAPHICS. DDR and GDDR are the "same thing" in theory, "however":
    GDDR is not the same as DDR. Overall, GDDR is built for much higher bandwidth, thanks to a wider memory bus.
    GDDR has lower power and heat dissipation requirements compared to DDR, allowing for higher performance modules with simpler cooling systems.
    DDR1, DDR2, and DDR3 have a 64-bit bus (or 128-bit in dual channel). GDDR3, comparatively, commonly uses between a 256-bit and 512-bit bus, or interface (across 4-8 channels).
    GDDR3 has a 4-bit prefetch and GDDR5 has an 8-bit prefetch, making GDDR5 twice as fast as GDDR3 in apples-to-apples comparisons.
    GDDR can request and receive data on the same clock cycle, where DDR cannot.
    DDR1 chips send 8 data bits for every cycle of the clock; GDDR1 sends 16 data bits.

    Things get extra "confusing" with GDDR5 because whatever the "rating" is, for example a GDDR5 900 "clock", you take that number and quadruple it to get the "effective speed", so 900 becomes 3600. It also has a wider bus available to it, and GDDR can send and receive data at the same time on the same clock cycle (normal DDR cannot, from what I have read).

    Also, GDDR is a chunk more expensive than "normal" DDR RAM, though it does have multiple benefits.

    I suppose one can look at it as: DDR SDRAM is optimised to handle data packets from various sources in small bits with very low latency, e.g. browsers, programs, anti-virus scans, messengers.

    GDDR, on the other hand, can send and receive massive chunks of data on the same clock cycle.

    (source) http://www.dignited.com/27670/gddr-vs-ddr-sdram-gd...
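
    A rough sketch of the "quadruple the clock" arithmetic described above, plus the resulting peak bandwidth; the 256-bit bus width is only an illustrative assumption:

        # GDDR5 transfers on both edges of two data clocks, so the effective rate
        # is roughly 4x the quoted base memory clock.
        def gddr5_effective_mts(base_clock_mhz):
            return base_clock_mhz * 4

        def peak_bandwidth_gbs(effective_mts, bus_width_bits):
            return effective_mts * 1e6 * (bus_width_bits / 8) / 1e9

        eff = gddr5_effective_mts(900)       # 900 MHz base clock -> 3600 MT/s effective
        print(eff)
        print(peak_bandwidth_gbs(eff, 256))  # ~115 GB/s on an assumed 256-bit bus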
  • bananaforscale - Saturday, June 30, 2018 - link

    You are complaining about DDR4 because APUs struggle with it? You're barking up the wrong tree. The issue is in using shared memory.
  • peevee - Tuesday, July 3, 2018 - link

    Or they could have supported 4 channels, given that they support 4 DIMMS anyway. Would be useful for CPU operations too, given that they run 8 threads in parallel...
  • Dragonstongue - Tuesday, July 3, 2018 - link

    AMD's desktop memory controller is NOT built nor designed for quad channel usage; the cost is "not worth it". There is no way you can keep costs down on a "simple" APU for those looking for a computer on a budget while also providing quad channel memory. And very, very few things the average everyday consumer does with their computer need, or can effectively use, more than what AMD CPUs already provide over dual channel with their HT (HyperTransport) or IF (Infinity Fabric for Ryzen).

    More is not always better; most of the time it becomes chasing unicorns vs actually "needing it", you know, for those who have a massive wallet and buy it just to say they have it AH HAHA
  • Dragonstongue - Tuesday, July 3, 2018 - link

    We have not had DDR4 "that long" compared to, say, DDR3 or DDR2, which were out far, far longer:
    DDR (2000), DDR2 (2003), DDR3 (2007), DDR4 (2014)....if you are "bad at maths" ^..^
    18+ years........15+ years.....11+ years.....4+ years

    DDR5 should arrive towards the end of 2018, though JEDEC is saying 2020 for end consumer (me and you) purchases.

    It is not the raw "speed" holding things back, FYI. Latencies, cycle speed, available bandwidth, the power required to keep them running, and all the subtimings ALL matter in their own fashion (depending on the task they are being used for). I remember many DDR2 sticks that you could heavily overclock and they ran crazy fast, but they also got crazy hot and died early deaths (suicide runs).

    I never hear of this happening with DDR3 or DDR4 (lower volts, and the chip makers such as Intel do their damndest to monitor/control memory controller speeds and volts to avoid killing things; back in the day these same safeguards were not in place).

    (Max JEDEC certified specs, best I can tell:)
    DDR 400, DDR2 1066, DDR3 2133, DDR4 3200
    The best "jump" on a percentage basis seems to have been from DDR to DDR2 (166.5%), then DDR2 to DDR3 (100.1%), then DDR3 to DDR4 (50%).

    The current spec (not finalized as of yet) for DDR5 is "up to" approximately double what the fastest current DDR4 modules are rated for: 4266-6400 (vs 3200), i.e. a 33.3% "gain" at the low end, or 100% at "best".

    I hardly call the former "double", but I am a simple man ^.^

    SOOOOOO the major jump absolutely was DDR to DDR2 when comparing the official "spec" of the fastest rated memory. Obviously one can manage even faster speeds with overclocking or whatever, but that is not a guarantee either; ratings and specs are ratings and specs.
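
    The generation-to-generation jumps quoted above, sketched from the peak JEDEC data rates listed in the comment:

        # Relative jump between the fastest JEDEC-rated speed of each DDR generation.
        peak_rates = {"DDR": 400, "DDR2": 1066, "DDR3": 2133, "DDR4": 3200}

        gens = list(peak_rates.items())
        for (prev_name, prev), (name, rate) in zip(gens, gens[1:]):
            print(f"{prev_name} -> {name}: +{(rate - prev) / prev * 100:.1f}%")
        # DDR -> DDR2: +166.5%, DDR2 -> DDR3: +100.1%, DDR3 -> DDR4: +50.0%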

    Now, as far as "when are we going to get faster memory": that depends. Can your CPU or motherboard "handle it"? IMHO, nope, not at this point anyway. Very few can "handle", say, DDR4 4700 (G.Skill), and generally speaking the extreme "fastest" kits also suffer from far looser timings and subtimings and a marked increase in the power required to "make it happen".

    RAM is not a "simple" thing where you can crank up the speeds and everything gets a "nitrous boost" like you could with a car engine ^.^
  • invasmani - Tuesday, July 10, 2018 - link

    We really need quad channel memory to become mainstream on low end CPUs, ideally, and octa channel or even six channel, for example, to become more affordable on mid range ones. Performance gains from the RAM itself are coming slowly, so we need to run more memory channels in parallel, similar to what Ryzen has done well with CPU cores.
  • peevee - Thursday, June 28, 2018 - link

    "With the aim being to procure a set of consistent results, the G.Skill Ripjaws V DDR4-3600 kit was set to latencies of 17-18-18-38 throughout"

    By doing that, you essentially invalidated the test, as in real life lower frequencies mean lower values for those latency-inducing parameters, reducing the impact of frequency.
  • edzieba - Thursday, June 28, 2018 - link

    Setting a constant CAS actually increases the impact on latency compared to most RAM available. CAS is a clock-relative measure, and normally you will see CAS latencies rise as clockspeeds rise. This means real-time latencies (nanoseconds) stay surprisingly flat across clock frequencies. Only if you pay the excruciating prices for low-CAS high-clock DIMMs (in which case, why on earth are you using them with a budget APU?) do you start to see appreciable reductions in access latency.

    Of course, higher clock rates mean more access /granularity/ even if latency is static, so it depends on what your workload benefits from.
  • peevee - Thursday, June 28, 2018 - link

    That was exactly my point. If you can run CAS 17 at 3466, you can run CAS 12 or 13 at 2133, even on much less expensive memory. And then the difference in the results would be much smaller.
    As surprising as it might seem, memory latencies hit the limits of the speed of the electromagnetic wave in the wires a while ago, and the only solution (both for latencies and energy consumption) with those outdated von Neumann-based CPU architectures is to do PIP, like in the phones, instead of DIMMs/SODIMMs.
  • mode_13h - Thursday, June 28, 2018 - link

    I've been hoping to see HBM2 appear in phones and (at least laptop) CPUs for a while, now.
  • peevee - Tuesday, July 3, 2018 - link

    HBM2 has nothing to do with it.
    Phones actually get it right: they usually have memory in a stacked configuration with the CPU, reducing the length of the wires to the minimum, or at least place the memory chips close to the CPU. Desktop CPUs using DIMMs are hopeless. They could have improved things with a special CPU socket which would accept DIMMs (or better yet SODIMMs) in a square configuration, like this:

            *DIMM*
        D |--C--| D
        I |--P--| I
        M |--U--| M
        M |-----| M
            *DIMM*
  • peevee - Tuesday, July 3, 2018 - link

    The best arrangement would be for the DIMMs to lie flat, with the CPU having 4 memory channels and pins on each side for each channel. Best of all would be LPDDR4X, with its 0.6 V signal voltage; that would save a bunch for the whole platform. Something like 4-channel CL4 1866 or 2133 would have plenty of bandwidth and properly low latency; maybe an LLC would not even be needed for 4 cores or fewer.
  • peevee - Tuesday, July 3, 2018 - link

    ..."would save a bunch" of power...
  • Lolimaster - Friday, June 29, 2018 - link

    The thing is, DDR4-3000 CL15 costs barely a few bucks more than 2400 CL15.
  • Lolimaster - Friday, June 29, 2018 - link

    All memory kits, even cheapo 2133, are in the $165-180 range. Only CL14 3200 is above $200.
  • BedfordTim - Thursday, June 28, 2018 - link

    To illustrate your point, even the cheapest DDR4 2133 memory is CL15
  • JasonMZW20 - Thursday, June 28, 2018 - link

    I think it's better to have the same timings as a testing reference. Yes, typically memory latencies are lower with lower clockspeeds, but really, AT are testing for memory bandwidth limitations and performance increases associated with increased memory bandwidth alone (not factoring latencies, since they're static across the range). In that sense, it's a successful test.

    There is a bit of performance left out for lower clockspeed kits, but that's outside the scope of this test which singled out bandwidth only.
  • FullmetalTitan - Thursday, June 28, 2018 - link

    Seems like an effective control to me, pretty standard experimental control.

    This was clearly not a real-world testing scenario, but rather an artificial one where they fixed a common variable (CAS latency) in order to highlight the direct correlation of mem frequency to GPU performance. Anyone interested in this type of information should understand that these tests are very much NOT a real-world scenario.
  • Lonyo - Thursday, June 28, 2018 - link

    They fixed the variable without really fixing it. Fixing the CAS cycle count doesn't fix the CAS delay; adjusting the CAS cycles per speed would give a fixed CAS delay. The latency is measured in cycles, but faster RAM has faster cycles, so the time-based latency decreases with the same CAS setting.

    e.g. 2000 MHz CAS 10 has the same ns latency as 4000 MHz CAS 20.

    DDR4-3466 at CAS 17 is 9.81 ns latency. DDR4-2133 at CAS 12 would be 11.26 ns. At CAS 17 it's 15.95 ns.

    Intel's XMP profiles for 2133 MHz go from CAS 9 to 13. For 3466 MHz it's CAS 16 or 18. That's 8.44 ns vs 9.23 ns.

    If the increase in framerate is partly due to changes in latency (99th percentile may well be), then this test isn't reflective of real world performance with reasonable variables.
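
    Those nanosecond figures follow from the usual conversion: CAS cycles divided by the actual memory clock, which is half the DDR data rate. A quick sketch:

        # First-word latency in nanoseconds: CAS cycles / actual clock (half the DDR data rate).
        def cas_latency_ns(data_rate_mts, cas_cycles):
            clock_mhz = data_rate_mts / 2
            return cas_cycles / clock_mhz * 1000

        print(cas_latency_ns(3466, 17))  # ~9.81 ns
        print(cas_latency_ns(2133, 12))  # ~11.25 ns
        print(cas_latency_ns(2133, 17))  # ~15.94 ns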
  • gglaw - Saturday, June 30, 2018 - link

    This test was definitely very flawed and misleading for most average computer users. Most techies can read into the details and understand it was purely a demonstration of what happens when you lock the same timings and increase only the frequency of the RAM, but many will feel that this is an indirect review of how different memory kits perform. The reality is that the cheaper, low frequency kits are a lot closer to the high-end ones than the graphs show. A much better review would include both comparisons with and without locked timings, and if not including the real world part where the timings are also varied it should be noted in bold multiple times, perhaps even as a disclaimer below every chart. "These graphs do not demonstrate real world performance of the memory kits as they were not allowed to operate at the recommended timings." (That the huge majority of users would let them run at.)

    You go as far as making purchasing recommendations further indicating that the performance displayed in your charts are indicative of specific product reviews and finding a sweet spot for price/performance. This is the most flawed part. You neutered the very popular 2133-2400 segment by exaggerating how much faster the 3000+ kits are. If you allowed the lower frequencies to operate at their recommended/default timings, the price/performance recommendations would be entirely different.
  • mode_13h - Thursday, June 28, 2018 - link

    I agree with this, but I do see value in a set of tests which varied only clock speed.

    What I was thinking they should've done was a second set of tests at the lowest latency usable with each speed. Now, that would lend the real-world applicability you want *and* show the impact of memory latency.
  • peevee - Tuesday, July 3, 2018 - link

    They actually VARIED the latencies, as properly measured in time. They artificially forced longer latencies when running the memory at lower frequencies. That is the problem. Maybe they have memory advertisers. Or it was gross incompetence.
  • peevee - Thursday, June 28, 2018 - link

    " specifically up to version 1.0.0.6, but now preceded by 1.1.0.1."

    You mean "superseded"?
  • .vodka - Thursday, June 28, 2018 - link

    No. AGESA version number got reset after Raven Ridge's. Pinnacle's is back to 1.0.0.x.

    This is why tools report SummitPI/RavenPI/PinnaclePI and the version number. One without the other is a recipe for confusion.
  • mooninite - Thursday, June 28, 2018 - link

    This has really made me reconsider a 2200G instead of a 2400G for my HTPC. The 2200G is currently $99 vs $160 for the 2400G. Why pay $60 more for ~2-4 more FPS?
  • Meat Hex - Thursday, June 28, 2018 - link

    I would not read too much into these graphs and tests, as they only show the % gained from higher speed memory and not the actual FPS. You're still going to have a higher FPS on the 2400G vs the 2200G.
  • PeachNCream - Thursday, June 28, 2018 - link

    The 2200G is a reasonable CPU and the price is good for the performance you get back. If you do upgrade to a dedicated graphics card later (unlikely given the HTPC role you're aiming for due to heat, power, noise, and space concerns) the dGPU benchmarks show most of the games measured demonstrate the 2200G is relatively close in performance to the 2400G so there's that as well.
  • sing_electric - Thursday, June 28, 2018 - link

    Especially for an HTPC, "good enough" performance is often EXACTLY what you want, particularly when you're considering chips on the same architecture/process, since the 2200G will make less heat, making the entire system run cooler, which, in turn, can mean a quieter system.
  • Lolimaster - Friday, June 29, 2018 - link

    It's all about the extra threads, which will minimize stuttering compared to a 4c/4t CPU and will better handle a dGPU above 1060 class.
  • GreenReaper - Friday, June 29, 2018 - link

    To be honest it might make the most sense to buy the APU, then the dGPU later, then a later-model CPU using 7nm architecture to replace the APU.
  • drzzz - Thursday, June 28, 2018 - link

    After reading the article, I was expecting the conclusion to talk heavily about DDR4-3333 and how it performed the best overall, to clearly point out that speeds above it seemed to fall off, and maybe to offer some insightful thoughts on why that was seen. Given the spikes at 2933 and 3333 compared to 2400, it would seem there is some steady-state synergy between the IF, the memory, and the caching mechanics that only manifests at specific bounds in the frequency increase. I expected more about these two points in the conclusion rather than the 2133 to 3466 differences that were talked about.

    The performance at 2933 and 3333 makes me curious whether there is some hard design choice AMD made that would make memory selection easier for us once we identify it, and whether the same factors play into all the Zen-based CPUs. I find it interesting that the xx33 speeds seem to be the strong points, so I am curious what 2533 and 3733 would look like (I know 3733 is not a realistic option). If the xx33 speeds are the best performing across the spectrum, I would seriously love to know why, and whether the same holds for other Zen-based CPUs.
  • peevee - Tuesday, July 3, 2018 - link

    "Given the spikes at 2933 and 3333"

    No spikes; the steps are just different sizes.
    And of course there's the simply misleading slowing of latencies at lower frequencies by setting CL etc. to the same value.
  • zodiacfml - Thursday, June 28, 2018 - link

    Do a review of the GT 1030 with DDR4.
  • PeachNCream - Thursday, June 28, 2018 - link

    Do you mean the GT 1030 that uses DDR4 on the card as opposed to the GDDR5 model? If that's the case, I'd like to second that. The performance difference between the two memory types would be worth analyzing. As the GDDR5 model is a bit ahead of AMD's APUs, I'd imagine the DDR4 model would cost enough performance to hand the performance advantage back to a 2400g, but it'd be useful to see that play out in Anandtech's benchmarks.
  • TheWereCat - Thursday, June 28, 2018 - link

    Gamers Nexus released their review yesterday.
    Apparently the GT1030 DDR4 is so starved for memory bandwidth that at best it performs "only" 50% worse than the GDDR5 version and at worst it falls 2x short.
    That makes it worse than a 2200G paired with a single channel 2400MHz DDR4.
  • PeachNCream - Thursday, June 28, 2018 - link

    Oh wow, that's a terrible result! Thanks for sharing the information. I was expecting something more like 25% less performance in worst case scenarios, but that was clearly optimistic.
  • Lolimaster - Friday, June 29, 2018 - link

    The GT 1030 with 2133 DDR4 has basically 3x less bandwidth than the GDDR5 version, which is like running an APU on single-channel DDR4-2133.
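
    A rough sketch using the commonly listed GT 1030 specs (both variants on a 64-bit bus, the DDR4 card at about 2100 MT/s and the GDDR5 card at 6000 MT/s effective; exact board figures vary):

        # Peak memory bandwidth of the two GT 1030 variants, both on a 64-bit bus.
        # Data rates are the commonly listed figures; actual boards may differ.
        def bandwidth_gbs(data_rate_mts, bus_width_bits=64):
            return data_rate_mts * 1e6 * (bus_width_bits / 8) / 1e9

        print(bandwidth_gbs(2100))  # ~16.8 GB/s for the DDR4 card
        print(bandwidth_gbs(6000))  # ~48.0 GB/s for the GDDR5 card, roughly 3x more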
  • TheWereCat - Saturday, June 30, 2018 - link

    The funny thing is that even if it had "only" 25% less performance, the price difference (at least in my country) is only €4, so it wouldn't make sense anyway.
  • eastcoast_pete - Thursday, June 28, 2018 - link

    Yes, thanks for that. I looked it up. That was REALLY BAD. The 1030 DDR4 card paired with an i7-7700K got a serious butt kicking from the stock Ryzen 2200G with dual channel memory. The only thing slightly slower than that 1030 setup was when they hogtied the 2200G with single channel memory only, which one would only do if truly desperate. Eye opening, really. Much better off with the 2200G (CPU+iGPU) for about the price of the 1030 DDR4 dGPU alone. Nvidia really made one stinker of a card here.
  • Lolimaster - Friday, June 29, 2018 - link

    With DDR4 the GT 1030 loses so much performance that even the 2200G walks all over it.
  • Lolimaster - Friday, June 29, 2018 - link

    Simple: 64-bit DDR4 on the card vs dual channel DDR4 on the APU.
  • 29a - Thursday, June 28, 2018 - link

    Please remove 3DPM from the benchmark suite, it would have been much better to include video encoding benchmarks. Seems the 3DPM benchmark is only included for ego purposes because as usual it offered no beneficial information.
  • boeush - Thursday, June 28, 2018 - link

    Yeah, for budget/consumer CPUs/APUs such science/engineering oriented benchmarks seem to be off-topic. Conversely, if you're going to test STEAM aspects, then commit all the way and include the Chrome compilation benchmark, and maybe also include something about molecular dynamics and electronic circuit simulation...
  • lightningz71 - Thursday, June 28, 2018 - link

    First of all, thank you for putting together this series of articles. I really respect the time and effort that went into all of this. I don't know if you have any more articles planned, but I would like to offer some constructive criticism.

    My first point is that you claim to focus on real world scenarios and expectations for the two processors, then proceeded to go right into a scenario that was anything but. The vast majority of the benchmarks that were run were set to 1080p at high detail. It is widely accepted that the performance target of the two processors is either 720p at mid-high detail or 1080p at very low detail. Most people aim for about 60fps for these budget solutions to be considered well playable. Most of your gaming benchmarks never made it north of 30fps.

    My second point is that this review neglected to show what the processors would achieve with memory scaling while the iGPU is also overclocked. If a user is going to go to the effort of turning their memory clocks up to 3400 MT/s, they are also very likely to overclock their iGPU at the same time. Another point in support of this is the unusual memory scaling seen between the two highest memory clock settings. The higher clocked iGPU case might have made better use of the higher clocked memory and shown more linear scaling.

    I suggest a follow-on article that focuses on the gaming and content creation aspects of both processors, run at both 720p high and 1080p low, with the iGPU at stock and overclocked to 1500 MHz. I suspect that the numbers would be very enlightening.

    Again, thank you. I hope that you can take the time to look at my suggestions and try a follow on article.
  • eastcoast_pete - Thursday, June 28, 2018 - link

    Yes, thanks to Gavin for this continued exploration of the Ryzen chips with Vega iGPUs. Glad to see that your hand has healed up.

    About Lightningz71's points: +1. I fully agree, the real world scenario for anyone who goes through the trouble to boost the memory clock would be to OC the iGPUs to 1500 MHz and maybe even higher. A German site had a test where they sort of did that, found some strange unstable behavior at mild overclock of the iGPUs, but regained stability once they overvolted a bit more and hit 1450 MHz and up. They did find quite substantial increases in frame rates, making some games playable at settings that previously failed to get even close to 30fps. But, their 2200G and 2400G Ryzens were on stock heatsinks and NOT delidded, so I have even higher hopes for Gavin's setups.
    Lastly, given the current crazy high prices for memory, I'd love to see at least some data for 2x4 Gb, representing the true ~$300 potato with a 2200G.
  • gavbon - Monday, July 2, 2018 - link

    I really appreciate it. I have said all along that iGPU testing is coming; I'm working on it now and currently looking at which variables to consider during the testing. As for memory, capacity doesn't have any bearing on performance other than in situations where memory is running at maximum capacity.

    As soon as I work out the tests I'm going to run, I can crack on! :D
  • hansmuff - Thursday, June 28, 2018 - link

    Please someone tell me how to get that particular recommended G.Skill set to work at 2933 on a 2400G CPU. I haven't been able to get it past 2100 with Prime95 stability. What am I doing wrong?

    Gigabyte AB350N motherboard latest BIOS

    Thanks
  • DennisSmith - Thursday, July 5, 2018 - link

    Hi hansmuff, I just had a similar problem with a 2400G and ASRock AB350M build. Reading the motherboard manual, I noticed they recommend using DIMM slots 2 & 4 rather than 1 & 3 when installing only 2 sticks. After making that change, I reached the rated 3200 speed after enabling XMP in the UEFI/BIOS, and I am stable at 3400 now using the Ryzen Master software to set the frequency (3400 is not an option in the UEFI menu).
  • Lolimaster - Friday, June 29, 2018 - link

    Again, the review is being done the wrong way: 3466 with too-high latencies, and for the sake of being "consistent" all the slower DRAM settings get injured by that "global" latency.

    Just use CL15 across all kits.
  • Vatharian - Friday, June 29, 2018 - link

    To some degree he is right - JEDEC's specification says that modules, regardless of speed, must support 2133 or, more recently, 2666 MHz as a base. Faster modules accelerate only by manually inputting over 40 parameters or loading them from XMP profiles. And as a matter of fact I have only seen two modules that support a 'base' speed of 2666 instead of 2133 MHz.

    Truth is, XMP is a stop-gap; JEDEC should have prepared something universal that does XMP's job. Otherwise the only thing left is a memory-ladder-like ramp-up that is not fault proof, and good luck with eight modules on four channels running at 4200+ MHz - you'll get boot times in the range of five minutes.
  • plonk420 - Saturday, June 30, 2018 - link

    i hope this doesn't get buried, but could you do some tests with the CPU on just 1 stick of RAM? i built a distributed computing box and, as my budget wouldn't allow more at the time, just got one 4GB stick of DDR4-2400. i'm still only at ~66-75% memory use with some of World Community Grid's more memory intensive programs PLUS 3 instances of MilkyWay@Home on the GPU
  • GreenReaper - Saturday, June 30, 2018 - link

    If you don't want to get buried, don't call yourself plonk - it's the sound a user makes when they're added to a Usenet killfile!
  • peevee - Tuesday, July 3, 2018 - link

    "just got 1 4GB stick of DDR4-2400. "

    Why even bother to build such a joke?
  • SanX - Sunday, July 1, 2018 - link

    On a different note, regarding memory speed gains not tied to higher frequency per single channel: there exist dual channel, quad channel, even eight channel memory architectures. Can anyone show the applications which benefit from this the most? For example, are Gauss elimination matrix equation solvers, particle movement or PIC methods, or even some games actually memory bandwidth bound? If yes, do we get 2x, 4x or 8x increases in performance with such architectures? That is where people may actually say WOW.

    Ian?
  • Cloakstar - Monday, July 2, 2018 - link

    This is not a full memory scaling test. Where is the testing for scaling out to 4 sticks of RAM and bank/channel interleaving? I have found this to be critical to getting the most FPS out of prior-generation AMD APUs.
  • maroon1 - Sunday, July 8, 2018 - link

    DDR4-3466 costs 2.5x as much as DDR4-2400 and almost 3x as much as DDR4-2133.

    What is the point of a budget build if you are going to spend that much on memory?
  • msroadkill612 - Tuesday, July 17, 2018 - link

    It's a budget build because there's no dGPU cost.

    You are just well advised to invest a little ($20-40) of your savings in better RAM.
  • invasmani - Tuesday, July 10, 2018 - link

    I'm running DDR4-2000 at CAS 7 (a 285 performance index) at 1.35 V on my G.Skill 64GB kit; it's actually two unmatched 32GB kits, but it works fine once set up right. The RAS to CAS / RAS precharge timings are the most finicky for stability, I've noticed, at least with Skylake's IMC and large density kits.
  • Oxford Guy - Saturday, February 23, 2019 - link

    This article should have had the best stable latencies for each of the speeds tested, as well as the same CAS settings as shown here.

    It also should have had 3200 speed.
