65 Comments
Drumsticks - Wednesday, September 27, 2017 - link
Interesting findings. I've seen Ryzen hailed on other simple forums like Reddit as having great scaling. There's definitely some at play, but not as much as I'd have thought.
How does this compare to Intel? Are there any plans to do an Intel version of this article?
ScottSoapbox - Thursday, September 28, 2017 - link
2nd!
I'd like to see how much quad channel helps the (low end) X299 vs the dual channel Z370. With overlapping CPUs in that space it could be really interesting.
blzd - Thursday, September 28, 2017 - link
Yes, please compare to Intel memory gains; would be very interested to see if it sees less/more boost from higher speed memory.
Great article BTW.
jospoortvliet - Saturday, September 30, 2017 - link
While I wouldn't mind another test, there have been plenty over the last year, as the authors also pointed out in the opening of the article, and the results were simple - it makes barely any difference, far less even than for Ryzen.
.vodka - Wednesday, September 27, 2017 - link
Default subtimings in Ryzen are horribly loose, and there's lots of performance left on the table apart from IF scaling through memory frequency and more bandwidth. You've got B-die here, you could try these, thanks to The Stilt:
http://www.overclock.net/t/1624603/rog-crosshair-v...
This has also been explored by AMD in one of their community updates, at least in games:
https://community.amd.com/community/gaming/blog/20...
Ian Cutress - Wednesday, September 27, 2017 - link
The sub-timings are determined by the memory kit at hand, and how aggressive the DRAM module manufacturer wants to make their ICs. So when you say 'default subtimings on Ryzen are horribly loose', that doesn't make sense: it's determined by the DRAM here. Sure, there are adjustments that could be made to the kit. We'll be tackling sub-timings in a later piece, as I wanted Gavin's first analysis piece for us to be a reasonable task but not totally off the deep end (as our Haswell scaling piece showed, 26 different DRAM/CL combinations can take upwards of a month of testing). I'll be working with Gavin next week, when I'm back in the office from an industry event on the other side of the world and I'm not chasing my own deadlines, to pull percentile data from his numbers and bring parity with some of our other testing.
xTRICKYxx - Wednesday, September 27, 2017 - link
.vodka is right. Please investigate!
looncraz - Wednesday, September 27, 2017 - link
AMD sets its own subtimings, as memory kits were designed for Intel's IMC and the subtimings are set accordingly.
The default subtimings are VERY loose... sometimes so loose as to even be unstable.
.vodka - Wednesday, September 27, 2017 - link
Sadly, that's the situation right now. We'll see if the upcoming AGESA 1.0.0.7 does anything to get things running better at default settings.
This article, as is, isn't showing the entire picture.
notashill - Wednesday, September 27, 2017 - link
There's a new AGESA 1.0.0.6b but AMD has said very little about what changed in it.
Arbie - Wednesday, September 27, 2017 - link
While you're at it, reinstall the spellchecker on his PC. Looks like the DRAM testing broke it.
lagittaja - Friday, September 29, 2017 - link
Ian, .vodka is right about this, you should take a closer look at the sub-timings. Maybe get in touch with The Stilt?
.vodka - Sunday, October 1, 2017 - link
As luck would have it, someone did an excellent piece on what this article tried to explore.
https://www.youtube.com/watch?v=S6yp7Pi39Z8
Here's a proper look at how timings and memory speed improve Ryzen performance, on a 3.9GHz 1700X vs a 5GHz 7700K.
He's using a Vega (much faster than a GTX 980), comparing 2666C16 auto timings, The Stilt's timings for 3200C14 and 3466C14, and 3600C16 with auto subtimings. All B-die.
Screenshots of the results in five games: https://imgur.com/a/EapgO
Unsurprisingly, auto subtimings are a disaster with 2666C16 and 3600C16 performing mostly the same in these five games (what you've found in your article), and Ryzen's true performance is hidden in tight subtimings that you have to manually configure and test for. The results are more than worth it.
Please, have a look in this direction. Get a Crosshair VI Hero and some proper high speed B-die memory capable of those timing sets. Make a follow up article...
Hopefully future Ryzen iterations will not be as reliant on fast memory to perform like that.
Zeed - Thursday, September 28, 2017 - link
Ooh, you were faster. Maybe they should test Ryzen memory kits like this one?
https://www.overclockers.co.uk/team-group-dark-pro...
Or maybe a G.Skill Ryzen kit?
peevee - Wednesday, September 27, 2017 - link
"he DDR4-2600 value can certainly be characterized as the lowest number near to 45-46% FPS"Nonsense alert.
Ian Cutress - Wednesday, September 27, 2017 - link
My mistake, edited the sentence one way, then changed my mind and went another route and forgot to remove the %. Updated.
Ken_g6 - Wednesday, September 27, 2017 - link
And shouldn't that have been DDR4-2400?
Jacerie - Wednesday, September 27, 2017 - link
Why would you only use an Nvidia gfx card in the test bed if the Infinity Fabric is designed to integrate with AMD GPUs as well? Looks like you need to go back to the bench and run these tests again with AMD gfx to get the true results.
Dr. Swag - Wednesday, September 27, 2017 - link
Memory clock speed doesn't affect the IF clock speed on AMD GPUs.
ZeDestructor - Wednesday, September 27, 2017 - link
GPUs don't talk to the CPU using IF, only PCIe. Well, on consumer desktops, anyway - presumably AMD has some crazy IF-IF under testing internally to compete against NVLink, CAPI, OmniPath and InfiniBand.
Threska - Saturday, September 30, 2017 - link
That could potentially be VERY interesting, since GPUs are one of the few things that need high bandwidth.
Thefinaleofseem - Wednesday, September 27, 2017 - link
Pity that latency wasn't tested as well as clocks. It would be interesting to see how Ryzen scales with both factors.
germz1986 - Wednesday, September 27, 2017 - link
I would really like to see a review of this G.Skill kit, F4-3200C14D-16GFX. It seems it was the first Ryzen-optimized set for 3200 at fairly low timings out of the box.
kpb321 - Wednesday, September 27, 2017 - link
Small nitpick:
DDR4-2933 16-18-18 (Nearest to memory kit rating)
DDR4-3066 16-18-18
DDR4-3066 is actually closer to the memory kit's rated 3000 speed. 2933 is the max speed supported by the processor that is below the rated speed of the kit; 3066 would be a slight overclock.
Gavin Bonshor - Wednesday, September 27, 2017 - link
It probably needs re-wording, as 2933MHz CL16 is what the XMP profile runs at on Ryzen with this particular kit.
nismotigerwvu - Wednesday, September 27, 2017 - link
Perhaps he was going by "The Price Is Right" rules :)
Dr. Swag - Wednesday, September 27, 2017 - link
You guys should've lowered timings along with frequency to keep latency constant while increasing bandwidth/IF clock speeds.
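To make the iso-latency idea concrete: effective CAS latency in nanoseconds is the CL count divided by the memory clock (half the transfer rate), so a latency-constant test matrix can be sketched in a few lines. This is only an illustration, using a DDR4-2400 CL15 baseline rather than the kits from the article:

```python
# Effective CAS latency in nanoseconds: CL cycles divided by the memory clock,
# where the memory clock is half the DDR transfer rate (MT/s).
def cas_ns(mt_s, cl):
    return 2000.0 * cl / mt_s

baseline = cas_ns(2400, 15)                 # DDR4-2400 CL15 -> 12.5 ns
for mt_s in (2400, 2666, 2933, 3200, 3466):
    cl = int(baseline * mt_s / 2000)        # largest CL that keeps latency at or below 12.5 ns
    print(f"DDR4-{mt_s} CL{cl}: {cas_ns(mt_s, cl):.2f} ns")
```

With those illustrative picks, every speed lands between roughly 12.0 and 12.5 ns of CAS latency while bandwidth (and the IF clock) keeps rising.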
kpb321 - Wednesday, September 27, 2017 - link
Those Ashes results are interesting. They are the only ones that don't show a fairly straightforward improvement as memory speed increases. For the tested kit you'd actually get better performance dropping the speed down to DDR4-2800 instead of DDR4-2933. Same thing if you are OCing the memory: 3200 is faster than 3333.
Overall this makes me happy that I decided to spend an extra buck or two when I put together my Ryzen system to grab a 3000 kit (which happened to be from Team Group also) over the typical 2400/2666 kits around the same price. I hadn't typically seen the value in paying a premium for faster memory kits, but even the early indications showed it was more important for Ryzen systems, and this shows how important it can be.
DanNeely - Wednesday, September 27, 2017 - link
That result makes me suspect the dominant effect we're seeing is something random, not memory related.
SpartanJet - Wednesday, September 27, 2017 - link
Really disappointing results; all people talked about was how Ryzen scaled with memory. I guess I'm going with the Intel 8700K after all.
HStewart - Wednesday, September 27, 2017 - link
I think of it this way: AMD was desperate to get back into the CPU business, but financially they had some issues really thinking it through. So they created an 8-core Zen and then clumped them together so that they could claim a higher core count. This design is likely the primary reason why it does not scale.
But they did something that they probably didn't want to here - they ignored that Intel has been making higher-core-count CPUs in the Xeon line, and that it is quite simple for Intel to place them in gaming machines. This has a good side effect for Intel owners, because it keeps Intel on its toes - but the bad news, I am afraid, is that AMD will not be financially able to keep up with the core wars and will eventually have to drop out - also, purchasing ATI has alienated potential buyers - who in their right mind would purchase an AMD GPU on an Intel CPU?
One thing that is interesting is that Intel and the industry are moving in a different direction. Mobile is where the industry is going, not huge fat desktops. This is a place where AMD is missing the mark and could possibly lose out completely if the entire company bases its efforts solely on the desktop industry.
duploxxx - Thursday, September 28, 2017 - link
lol dude what have you been smoking?
A) It's Intel that responded to AMD's core count.
B) Zen's 8-core, multi-die approach was in the design from the start to keep costs low.
C) Xeon v2 and v3 both had issues with scaling out on cores, hence the reason for the new grid, which is sub-optimal on caching.
D) Intel has a way more expensive die; you forget that they ask 2500+ euro up to 14000 euro for their 16+ core parts, while AMD charges 4000 euro for 32 cores. The Gold series doesn't even come close to AMD's offerings in cores.
E) Intel is not moving at all; they own the biggest part of the industry on x86 and that is what they try to keep. They lost the low-power war vs ARM, and they sure try to get into IoT with lots of money, but it ain't that easy.
F) AMD has a low budget, so they infiltrate markets where they believe they can gain.
cap87 - Friday, September 29, 2017 - link
Intel didn't respond to AMD with higher core counts; processors are designed years in advance, and suggesting otherwise is just plain ignorance. What they did do was push forward the release date of Coffee Lake thanks to AMD's pressure.
jospoortvliet - Saturday, September 30, 2017 - link
Their design was meant for servers. Bringing the 18 cores suddenly to the high-end desktop was most certainly something they kept as an ace option, but it wasn't their original plan, and that is obvious from the way it was rushed to market: it came months later than the 10-core model, and many of the earlier motherboards were barely, or simply not, able to handle the load. They are also obviously clocked very high with barely any room for overclocking, breaking their TDP and throttling under heavy use on many boards even without an OC.
Hixbot - Monday, October 9, 2017 - link
I'm tired of hearing this. What you are suggesting is ignorance.
Intel had loads of time for R&D. Yes, designs take years, but they've had those years to design Coffee Lake with 4 cores, with 8 cores. They design and design; they could have designed a Sandy Bridge as an 8-core and not released it. You think they need to take years to respond to a competitive push? Let me tell you, they can design all sorts of options "years in advance" and only bring to market what they choose. So if it weren't for Zen, we might be staring at a 4-core (max) Coffee Lake. OMG, it's hilarious to see this "design takes years" argument. They can and do take years to design all sorts of potential processors; they can then choose what to bring to market in a much shorter time.
Arbie - Wednesday, September 27, 2017 - link
For what? Some small and very expensive ST performance increase? Consider what AMD has done for us in reigniting competition and moving the tech envelope forward. Think what that took, and whether they could possibly do it again. Anyone who doesn't absolutely have to buy Intel this time around should give the nod to AMD. They've earned it, where Intel has not. Really, the tech is almost equal and in most regards AMD gives you more for the dollar. If we as consumers don't respond to that, vigorously, they may give up. How would you like an Intel-only future?
Nagorak - Thursday, September 28, 2017 - link
Plenty of other tests have shown significant scaling. This is with loose subtimings. You can get even more performance from tight subtimings on top of faster memory speed. Remember Ryzen was only about 8% slower clock for clock than Kaby Lake. Faster memory speeds make up most of that difference, albeit Ryzen can't run at as high a frequency as KL.
notashill - Wednesday, September 27, 2017 - link
I'm curious if the higher-clocked parts scale any better; presumably they were spending more time waiting on memory in the first place. The tests were done with a 1700; the 1800X has a 20% higher all-core clock.
willis936 - Wednesday, September 27, 2017 - link
It would seem that 2 channels of DDR4 is not enough to keep 8 cores fed. It will be interesting to see if it's enough to keep 6 cores fed on Coffee Lake, since Intel's memory subsystem is higher performance, but they also have higher single-threaded performance (and may need more memory throughput as a result).
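As a rough sanity check on the bandwidth-per-core question: theoretical peak bandwidth is just channels × 8 bytes per transfer × transfer rate. The even per-core split in this sketch is a simplification that ignores caches and the Infinity Fabric, so treat the numbers as a ceiling rather than a measurement:

```python
# Back-of-the-envelope peak DRAM bandwidth: channels * 8 bytes per transfer * transfer rate.
def peak_gb_s(channels, mt_s):
    return channels * 8 * mt_s / 1000   # decimal GB/s

for mt_s in (2400, 2933, 3200):
    total = peak_gb_s(2, mt_s)          # dual channel, as tested in the article
    print(f"DDR4-{mt_s}: {total:.1f} GB/s total, ~{total / 8:.1f} GB/s per core over 8 cores")
```

Dual-channel DDR4-3200 tops out around 51.2 GB/s, or roughly 6.4 GB/s per core if all eight cores are streaming at once.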
sor - Wednesday, September 27, 2017 - link
“AGESA 1.0.0.6 BIOS updates were introduced several weeks ago”
Shouldn’t that be several *months* ago, or was there some more recent AGESA release from the one being discussed in April/May?
notashill - Wednesday, September 27, 2017 - link
There's a new AGESA 1.0.0.6b but AMD has said very little about what changed in it.
JocPro - Wednesday, September 27, 2017 - link
According to page 3, how come 2933 MT/s (67 MT/s away from the rated bandwidth) is *nearest* to the kit's rating, if 3066 MT/s is just 66 MT/s away from the kit's rating?
DanNeely - Wednesday, September 27, 2017 - link
Because rounding. They're 2933.33333... and 3066.66666... Both are 66.6666... off, and XMP (which is how the DIMM maker specifies what to do) rounds to the lower one, not the higher one.
DanNeely - Wednesday, September 27, 2017 - link
In theory, anyway. In practice, manufacturing variance (not sure if CPU or mobo) means the step size won't be exactly 133.3333... but rather slightly higher or lower than that value.
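The arithmetic behind both replies can be written out directly. This sketch assumes the nominal 133.33 MT/s step discussed above; as noted, real boards drift slightly from it:

```python
from fractions import Fraction

# Memory data rates land on ~133.33 MT/s steps (in theory; see the variance caveat above).
step  = Fraction(400, 3)            # 133.333... MT/s per step
below = (3000 // step) * step       # 22 steps -> 2933.33 MT/s
above = below + step                # 23 steps -> 3066.67 MT/s
print(float(3000 - below), float(above - 3000))   # both ~66.67 MT/s from the kit's 3000 rating
# XMP rounds down to the step at or below the rating, hence the DDR4-2933 label.
```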
FreckledTrout - Wednesday, September 27, 2017 - link
For those prices I would rather pick up the G.Skill Flare X running at 3200MHz and CAS 14 ($190 on Newegg).
While it is an older game, it would have been interesting to see Fallout 4 included here, as it is notorious for its memory scaling.
Outlander_04 - Wednesday, September 27, 2017 - link
I am curious about the use of such an old graphics card. Surely an nVidia 10xx card or an RX Vega was available.
Lolimaster - Wednesday, September 27, 2017 - link
Come on, people have been running 3200 CL14 on Ryzen for many months; why test with a puny CL16?
This should also include DDR4 3600-4000 with many brands.
Nagorak - Thursday, September 28, 2017 - link
Few have even managed to get 3600 MHz stable with Ryzen, let alone anything more than that. Even 3466 isn't a given for many boards/processors.
CheapSushi - Thursday, September 28, 2017 - link
I think it is time for RAM to go the QDR route (quad data rate) instead for the upcoming DDR5. It's already proven and workable in SRAM and GDDR5X (it's QDR despite the name). This would be a MUCH more significant improvement in latency and I/O than the paltry MHz bump DDR5 will do. I think AMD's Zen architecture would benefit and go even further with QDR for next gen.
Image of QDR vs DDR: https://upload.wikimedia.org/wikipedia/commons/thu...
Image of QDR vs DDR: http://image.slideserve.com/1303208/qdr-class-vs-d...
willis936 - Thursday, September 28, 2017 - link
QDR is the same thing as DDR with the clock running at half the frequency. It's not a magical way to make your data rates higher. The same paltry MHz increase would be seen on QDR, just with tighter jitter requirements. I don't see the benefit since DDR isn't running into a power limit.
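A quick illustration of that equivalence, with a hypothetical 800 MHz QDR part chosen purely to mirror DDR4-3200:

```python
# Data rate = I/O clock * transfers per clock cycle.
# QDR at half the clock delivers exactly the same MT/s as DDR at the full clock.
def data_rate_mt_s(io_clock_mhz, transfers_per_clock):
    return io_clock_mhz * transfers_per_clock

ddr = data_rate_mt_s(1600, 2)   # e.g. DDR4-3200: 1600 MHz I/O clock, 2 transfers per clock
qdr = data_rate_mt_s(800, 4)    # hypothetical QDR part at 800 MHz
print(ddr, qdr)                 # 3200 3200 -- same throughput, different clocking scheme
```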
NeatOman - Thursday, September 28, 2017 - link
Now that I don't play too many games I'm OK with my 5-year-old [email protected] and R9 280X. Although I find that it does keep up with heavy multi-tasking, like having 20-50 tabs open while playing a FHD YouTube video and working in SketchUp on a 40" 4K monitor. It also runs a file server and a media server that real-time transcodes 1080p in high quality, and I won't really notice while browsing and watching videos, other than the lights getting brighter inside the case because the fans ramp up a bit.
Well poor test in my eyes... Gyuess You dont know that pass 3200 its TIMINGS ALL THE WAY !!!! Join us at Overclockers.net for PROPER numbers and tests with carious timings ect.BrokenCrayons - Thursday, September 28, 2017 - link
I hope your comment isn't an example of Overclockers.net writing quality. Proper numbers and tests aren't very useful when the supporting writing is almost incoherent.
chikatana - Thursday, September 28, 2017 - link
I'm more interested in how the system will perform when all DIMMs are fully loaded.
TAspect - Thursday, September 28, 2017 - link
All gaming tests are GPU bound, and that is why the CPU shows little to no scaling. The GTX 980 is clearly the bottleneck here. Either test with a GTX 1080/Ti or lower settings until the GPU is not a bottleneck.
Tests only show average fps, which is a mistake, as faster RAM affects minimum fps more than average. You should add 99% and 99.9% minimum fps to the graphs.
You should also include G.Skill Flare X 3200 CL14 RAM with the Stilt's 3200 fast OC profile found in the Crosshair VI Hero UEFI. On other MBs the settings are relatively simple to configure and you only have to test stability once instead of tuning all subtimings for days.
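For anyone unsure what the suggested 99%/99.9% figures measure, they fall straight out of per-frame times: find the frame time that only 1% (or 0.1%) of frames exceed and convert back to fps. A minimal sketch with synthetic frame-time data (not numbers from the article):

```python
import numpy as np

# Per-frame times in milliseconds; the gamma draw below is a synthetic stand-in for a real capture.
frame_times_ms = np.random.gamma(shape=9.0, scale=1.8, size=10_000)

avg_fps  = 1000 / frame_times_ms.mean()
p99_fps  = 1000 / np.percentile(frame_times_ms, 99)    # only 1% of frames are slower than this
p999_fps = 1000 / np.percentile(frame_times_ms, 99.9)  # only 0.1% of frames are slower
print(f"avg {avg_fps:.1f} fps | 99% {p99_fps:.1f} fps | 99.9% {p999_fps:.1f} fps")
```

Memory changes that barely move the average often show up much more clearly in these tail figures, which is the point the comment is making.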
BrokenCrayons - Thursday, September 28, 2017 - link
Agreed on this. Game testing at more modest resolutions and settings would remove potential GPU bottlenecks from the results. Then again, there is a little bit of support for testing at settings closer to those an end user would realistically use on a daily basis. It does at least demonstrate the lack of change memory timings would have in a real-world gaming scenario. It'd be optimal to do both, really, so readers could see results free of GPU concerns AND see how memory performance will impact their day-to-day gaming.
lyssword - Friday, September 29, 2017 - link
I think AT is one of the worst sites to get an idea of CPU gaming performance: always GPU limited, or a scripted part of the game with low CPU demand. Really, the only time you see a difference is 10% on Bulldozer vs an i7, whereas in the real world the difference is 40%. Most of the time AT tests show almost no difference between a Core i3 and an i7 because of that testing methodology.
DabuXian - Thursday, September 28, 2017 - link
Trying to find a CPU bottleneck while using an old GeForce 980? Seriously? I'd expect some basic hardware knowledge from AnandTech.
r3loaded - Friday, September 29, 2017 - link
I'd like to see what the effects are on Threadripper, considering that the IF spans two dies and the platform is geared towards maximising memory bandwidth.
lyssword - Friday, September 29, 2017 - link
Seems these tests are GPU-limited (a GTX 980 is about a 1060 6GB), thus they may not show the true gains you'd see with something like a 1080 Ti, and they're also not the most demanding CPU-wise, except maybe Warhammer and Ashes.
Alexvrb - Sunday, October 1, 2017 - link
Some of the regressions don't make sense. Did you double-check timings at every frequency setting, perhaps also with the Ryzen Master software (the newer versions don't require HPET either, IIRC)? I've read on a couple of forums that above certain frequencies, the BIOS would bump some timings regardless of what you selected. Not sure if that only affects certain AGESA/BIOS revisions and if it was only certain board manufacturers (bug) or widespread. That could reduce/reverse gains made by increasing frequency, depending on the software.
Still, there is definitely evidence that raising memory frequency enables decent performance scaling, for situations where the IF gets hammered.
ajlueke - Friday, October 6, 2017 - link
As others have mentioned here, it is often extremely useful to employ modern game benchmarks that will report CPU results regardless of GPU bottlenecks. Case in point, I ran a similar test to this back in June utilizing the Gears of War 4 benchmark. I chose it primarily because the benchmark will display CPU (game) and CPU (render) fps regardless of the GPU frames generated.
https://community.amd.com/servlet/JiveServlet/down...
At least in Gears of War 4, the memory scaling on the CPU side was substantial. But to be fair, I was GPU bound in all of these tests, so my observed fps would have been identical every time.
https://community.amd.com/servlet/JiveServlet/down...
Really curious if my results would be replicated in Gears 4 with the hardware in this article. That would be great to see.
farmergann - Wednesday, October 11, 2017 - link
For gaming, wouldn't it be more illuminating to look at frame-time variance and CPU-induced minimums to get a better idea of the true benefit of the faster RAM?
JasonMZW20 - Tuesday, November 7, 2017 - link
I'd like to see some tests where lower subtimings were used on, say, 3066 and 3200, versus higher subtimings at the same speeds (more speeds would be nice, but it'd take too much time). I'd think gaming is more affected by latency, since games are computing and transferring datasets immediately.
I run my Corsair 3200 Vengeance kit (Hynix ICs) at 3066 using 14-15-15-34-54-1T at 1.44v. The higher voltage is to account for tighter subtimings elsewhere, but I've tested just 14-15-15-34-54-1T (auto timings for the rest) in Memtest86 at 1.40v and it threw 0 errors after about 12 hours. Geardown mode disabled.