Araa - Friday, May 26, 2017 - link
They are doing a surprisingly good job supporting their hardware.
RaistlinZ - Friday, May 26, 2017 - link
I would hope so, it's only been out a few months.
Chaitanya - Friday, May 26, 2017 - link
Intel never supported their hardware to this level.
vladx - Friday, May 26, 2017 - link
Did you even read the article? Intel motherboards already support all of that and more, and Intel never had such restrictions on their platforms.
Alexvrb - Saturday, May 27, 2017 - link
They have other, different restrictions instead, such as locking most of their lineup and preventing overclocking on non-Z chipsets. Some OEMs tried to bypass the second limitation... it didn't work out in the long run, especially if you want an updated BIOS that supports the latest chips.
nottatard - Saturday, May 27, 2017 - link
One of the reasons AMD's stock tanked recently was that they didn't lock up their lineup. Either way, that's hardly a defense for not having basic BIOS options. Inb4 the new-platform cop-out.
TiberiusJonez - Saturday, May 27, 2017 - link
Are you on crack? I suggest you go back and read some history on the first year of the Intel X99 platform. It was a disaster. Memory support was horrendous, MUCH worse than this, and BIOS updates didn't come anywhere near as fast as this. Stop being a fanboy hater; it makes you arrogant at worst and ignorant at best.
Scannall - Saturday, May 27, 2017 - link
When X58 came out that was pretty awful as well. It took a while to get that cleaned up and running well.
Lord of the Bored - Monday, May 29, 2017 - link
New Intel platforms and crap memory immediately make me think of ye olde Pentium 4/Rambus debacle back at the turn of the century. Too bad that couldn't be fixed with a BIOS update.
Lolimaster - Saturday, May 27, 2017 - link
Remember that current Intel CPUs are the legacy of Nehalem and Sandy Bridge.
TiberiusJonez - Saturday, May 27, 2017 - link
Same as below... Get some knowledge and perspective. Go back and read some history on the first year of the Intel X99 platform. WAAAAAY more problems than this.
ddriver - Friday, May 26, 2017 - link
So they finally hacked it.
Samus - Friday, May 26, 2017 - link
So is this a microcode update?
edlee - Friday, May 26, 2017 - link
Wow, is it just me, or did it feel like AMD dropped the mic and left the room? They absolutely crushed it with this one.
TristanSDX - Friday, May 26, 2017 - link
Do CPUs use memory channel interleaving to reduce access time? I mean, copy some data to two DIMMs that work on different phases, so access time is (ideally) cut in half?
willis936 - Friday, May 26, 2017 - link
No. DRAM has a fixed latency (around 50 ns, which hasn't changed much in the past 30 years). After that it can bulk transfer the addresses listed (sequential from the starting address). So if you expect you'll need data, you can load it into cache early (prefetch, done by the programmer or compiler). Using two channels just doubles the throughput and also doubles the number of concurrent addresses you can access. Interleaving channels does effectively nothing for latency and throughput. It has been measured to increase latency very slightly, likely from the added work the CPU memory controller has to do.
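(A minimal sketch of the software prefetch mentioned above, using the GCC/Clang __builtin_prefetch intrinsic; the 16-element lookahead is an arbitrary illustrative distance, and hardware prefetchers already cover simple sequential loops like this one.)

```c
/* Minimal sketch of software prefetch (GCC/Clang builtin assumed): ask the
 * CPU to start fetching data a few iterations ahead so the DRAM access
 * latency overlaps with useful work. */
#include <stddef.h>

long sum_with_prefetch(const long *data, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 16 < n)
            __builtin_prefetch(&data[i + 16], 0, 1); /* 0 = read, 1 = low temporal locality */
        sum += data[i];
    }
    return sum;
}
```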
emn13 - Friday, May 26, 2017 - link
Most memory has a latency of below 15ns. Even museum pieces such as pre-DDR PC100 SDR DRAM had a 20ns latency. A fast normal kit is around 9ns. Sub-5ns memory seems to exist, but I've never used or seen any, so I'm not sure if there's some gotcha. That's just the memory; the processor imposes latency too. See e.g. http://www.anandtech.com/show/9482/intel-broadwell... for a few CPU-level benchmarks using DDR3-1866 CL9 (9.6ns) memory. The TL;DR is that the processor observes around 20ns of latency. I don't believe this varies hugely depending on processor, but I'm not sure. Using slower memory might add a few ns, but it's going to be faster than 50ns by a large margin.
But you're right that this number hasn't changed much in years.
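(A rough sanity check of those figures: first-word CAS latency in ns is the CL count divided by the memory clock, and for DDR the clock in MHz is half the data rate in MT/s.)

```c
/* Rough sanity check: for DDR, CAS latency in ns = CL * 2000 / (data rate
 * in MT/s), since DDR transfers twice per clock cycle. */
#include <stdio.h>

static double cas_ns(double cl, double mt_per_s)
{
    return cl * 2000.0 / mt_per_s;
}

int main(void)
{
    printf("DDR3-1866 CL9 : %.2f ns\n", cas_ns(9, 1866));  /* ~9.65 ns */
    printf("PC100 SDR CL2 : %.2f ns\n", 2 * 1000.0 / 100); /* 20 ns, single data rate */
    return 0;
}
```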
Dolda2000 - Friday, May 26, 2017 - link
15 ns would be the CAS latency. He most likely meant the row-cycle time, which is often the relevant timing for truly random accesses.
Dolda2000 - Friday, May 26, 2017 - link
Oops, meant to reply to emn13, not to willis936.
CaedenV - Friday, May 26, 2017 - link
Pretty much any time you have interleaved storage (be it dual/quad-channel RAM, or HDDs in RAID) there is still a single host clock, which means all the latency-related issues remain the same as working with a single device. The advantage is typically in bulk transfer after the latency, so dual-channel RAM gets a ~30% file speed increase, and RAID is typically ~60% per drive... but that is only on file transfer speed. If you are doing lots of little stuff then the latency issue is going to make those transfer speed advantages disappear.
In fact, having RAID (not so sure about dual/quad channel) generally adds extra complexity to the system, which will increase latency. So for bulk transfers (say... video editing) you end up with huge speed increases, but for lots of small transfers (such as a hammered database) you will actually get lower performance.
Just goes to show that not everything is cut-and-dried 'better'. What is 'best' can change rather dramatically depending on what kinds of workloads you are doing.
Alexvrb - Saturday, May 27, 2017 - link
Well, when talking about RAM, whether or not interleaving benefits you depends on whether what you're doing is latency-dependent or bandwidth-dependent (and whether it's bottlenecked). So it's not so much that there's an inherent latency "issue", but rather that a lot of tasks simply don't benefit for one reason or another. Meanwhile, on the other end of the spectrum, bandwidth-dependent tasks (an APU rendering 3D graphics) benefit massively from the second channel.
For mass storage systems you're definitely on to something. There are layers of added complexity. Granted, the controller, drives, software, and so forth all have tremendous impact. Especially the controller... finding a really good RAID controller is reason enough for me to just never use RAID ever again. SSDs are plenty fast for me.
TiberiusJonez - Saturday, May 27, 2017 - link
Aye yi yi... This is complicated. Tom's Hardware has a good primer on memory you can read that explains it. Also, I believe Linus Tech Tips has a pretty short memory video you can watch.
Gothmoth - Friday, May 26, 2017 - link
OK, I'm a bit out of the loop when it comes to memory speed and performance.
But how much is the performance difference, at maximum, between DDR4-2666 and DDR4-4000?
I normally buy the third-best memory speed that is available, because I never noticed any difference (video editing, 3D rendering, Photoshop). So for me it looks like a waste of money to buy the fastest memory.
Gothmoth - Friday, May 26, 2017 - link
Addon: why does faster memory make overclocking better? I have my Sandy Bridge OC'd to 4.7 GHz and my 2666 MHz memory stays at default.
How would faster memory help me achieve 4.9 GHz?
Gothmoth - Friday, May 26, 2017 - link
Sorry, I meant 1600 MHz memory... it's my DDR3 system.
Gasaraki88 - Friday, May 26, 2017 - link
This is for AMD Ryzen, where memory speed and CPU performance are really tied together. The CPU uses the memory clock for communication between the CCXes in the CPU. This has high latency, but it can be made up for with higher memory bandwidth.
VeauX - Friday, May 26, 2017 - link
No, you won't be able to overclock further due to that. Your system might perform better, though, as the communication speed between the CPU and memory would increase. Now, if I remember correctly, the sweet spot for Sandy Bridge was indeed 1600 MHz. You can overclock your current memory to, say, 1866 MHz to see if it performs better.
Samus - Friday, May 26, 2017 - link
That's totally true; it wasn't until Haswell that 1866 made sense. 1600 with aggressive timings was best for Sandy Bridge, and it's still entirely competent to this day in almost everything except energy consumption.
I'm still running a 130W Nehalem i7-950 as my server, and it's most comfortable at DDR3-1333 7-7-7-18.
The real problem with that platform (X58) is no native USB 3.0, no native SATA 3, only PCIe 2.0, high power consumption, and a lot of heat.
Samus - Friday, May 26, 2017 - link
Well, there is a substantial speed difference between DDR3 and DDR4. I think a fair comparison is DDR3-1600 versus DDR4-2133 when run at their stock timings (9-9-9-24 vs. 12-12-12-35).
Beyond DDR4-2133, the sky is the limit. Yes, there are unofficial-spec DDR3-1866 and above kits, and yes, they add some mild performance, but the timings for those kits are in DDR4 territory, so clock for clock they are at best comparable to DDR4 at the same speed.
For example, DDR3-2133 memory usually runs around 11-11-11-31, at a higher voltage than DDR4. The DDR3 might edge it out, but now that DDR4 is available at almost DOUBLE the clock speed of that spec, something DDR3 can't even come within 1000 MHz of, we finally have a clear reason to migrate to DDR4 other than energy efficiency and density.
Alexvrb - Saturday, May 27, 2017 - link
The great part about building in the DDR3 era was that you could get both cheap AND fast for quite some time. An equivalent faster-than-JEDEC-by-a-couple-steps DDR4 kit might well cost me more today even at the same capacity, let alone when you factor in the normal RAM capacity increases when looking at a new build.
Samus - Saturday, May 27, 2017 - link
Well, part of that was that DDR3 base-speed kits were pretty expensive. My first DDR3 triple-channel kit (2GBx3), a G.Skill DDR3-1333, was nearly $200.
Gasaraki88 - Friday, May 26, 2017 - link
Don't know about better, but Intel overclocking is the CPU multiplier times the BCLK (base clock), which the memory clock is also derived from. So you can lower the CPU multiplier but raise the BCLK (and with it the memory speed) to get the same CPU clock, or keep the same multiplier and raise the BCLK to get a higher overclock.
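(A quick sketch of that arithmetic; the 100 MHz base clock, x47 multiplier, and DDR3-1600 starting point are illustrative numbers, not a tuning recommendation.)

```c
/* Quick sketch of BCLK overclocking arithmetic: the CPU clock is the
 * multiplier times BCLK, and the memory data rate scales with BCLK at a
 * fixed ratio. Example numbers are illustrative only. */
#include <stdio.h>

int main(void)
{
    const double cpu_mult = 47.0;    /* CPU multiplier */
    const double mem_rate = 1600.0;  /* memory data rate in MT/s at 100 MHz BCLK */

    for (double bclk = 100.0; bclk <= 104.0; bclk += 2.0) {
        printf("BCLK %.0f MHz -> CPU %.0f MHz, RAM ~%.0f MT/s\n",
               bclk, bclk * cpu_mult, mem_rate * bclk / 100.0);
    }
    return 0;
}
```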
FreckledTrout - Friday, May 26, 2017 - link
For Ryzen, 4000 MHz should be a huge boost because the CCX interconnect, Infinity Fabric (the communication channel that talks between the two quad-core CCXes), runs at 1/2 the memory speed. This interconnect is the highest-latency thing inside of Ryzen, so when you go from 2666 MHz to 4000 MHz memory you are bumping the interconnect speed from 1,333 MHz to 2,000 MHz, which has a profound effect on the latency, let alone the normal benefits of having faster memory. Going from 2133 MHz to 3200 MHz RAM, most testers were seeing around a 15% improvement in 1080p games, so we might see another 10-12% moving up to 4000Mahz from 3200Mhaz.
"By going to 3200 MHz over 2133 MHz we were seeing near 50% performance improvement in simple memory bandwidth tests and a 16% performance jump in our 1080P gaming test."
Read more at http://www.legitreviews.com/ddr4-memory-scaling-am...
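(A rough sketch of the relationship described above, assuming the Zen 1 fabric clock simply equals the real memory clock, i.e. half the DDR data rate.)

```c
/* Rough sketch: assume the Zen 1 Infinity Fabric clock equals the real
 * memory clock, i.e. half the DDR data rate in MT/s. */
#include <stdio.h>

int main(void)
{
    const int ddr_rates[] = { 2133, 2666, 3200, 4000 };  /* MT/s */
    for (int i = 0; i < 4; i++) {
        printf("DDR4-%d -> fabric clock ~%d MHz\n", ddr_rates[i], ddr_rates[i] / 2);
    }
    return 0;
}
```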
Gothmoth - Friday, May 26, 2017 - link
OK, so I guess the situation with Ryzen and memory speed is much different from Intel's Core CPUs. Is that right?
DanNeely - Friday, May 26, 2017 - link
It is, with the caveat that it's not really the faster memory that helps; it's that part of the CPU's internal communications runs at the speed of the memory. Speeding that internal connection up can give significant real-world gains. Since AMD has said they're aware of low-hanging fruit for performance improvements in future Zen generations, it's possible that Zen 2 or Zen 3 will allow clocking the internal bus separately from memory, at which point its performance will be more Intel-like in that memory speed doesn't matter much in the real world. (Memory-starved APU graphics will presumably continue to be an exception unless HBM ever gets cheap enough to put on the CPU package.)
ddriver - Friday, May 26, 2017 - link
This improvement, however, only shows up in situations that were already bottlenecked by that design, such as 1080p gaming.
ddriver - Friday, May 26, 2017 - link
Also, what the fudge is Mahz and Mhaz?
FreckledTrout - Friday, May 26, 2017 - link
You want me to go edit those typos?
CaedenV - Friday, May 26, 2017 - link
It all depends on what you are doing. Go look up RAM speed calculators and you will get a feel for how this works, but here is a rough description:
There are two basic parts to RAM speed: one is the MHz rating (how quickly the clock ticks), and the other is operation latency. With each new generation of RAM the clock speed doubles (DDR to DDR2, then DDR2 to DDR3, and now 4). This means that with each successive generation there are twice as many time slots in which a RAM module has the opportunity to do something (i.e. start, move, or complete a process). The higher the frequency, the more quickly it can generally transfer data between the RAM and the CPU. For most workloads this is the bit that matters, because RAM is typically there as a giant buffer between the HDD, CPU, and GPU, but does not do a whole lot of heavy lifting itself.
But there are some workloads where RAM does do a lot of heavy lifting: lots of content creation that renders in RAM, increasingly more video games, and also things like databases make lots of small change calls to the RAM itself. This is qualitatively different from simply loading a texture from the HDD and using the RAM to shove it off to the CPU or GPU. It relies more on the RAM's internal ability to change things quickly, and that is all based on timings.
RAM timing is basically how many 'ticks' of the clock it takes to make an internal change. The smaller these numbers are, the faster RAM can deal with these small but difficult workloads. But this comes up against much more fundamental physical barriers of electronics than mere MHz of the clock. No matter what, it is still going to take some number of nanoseconds to make an internal change on the die of the RAM, so as the clock speed has doubled with each successive generation, the internal latency (in clock ticks) has also doubled. As you can imagine, this generally means that DDR3 at 1600 MHz and a timing of (let's just say) 10 is going to have about the same internal speed as DDR4 at 3200 MHz and a timing of 20. Again, there are other throughput advantages to having a much faster clock speed, but the internal processing speed of the RAM is going to be about the same. This generally means that things like textures will load much faster in your hot new game, but performing a CPU render or database transaction (both largely RAM-constrained processes) will perform about the same on either platform.
To get a full overall performance increase you need to have a higher clock speed AND keep the latency low. So DDR4-3200 at CL18 would stomp all over DDR3-1600 at CL12 on just about any task (double the 12 to get 24, subtract the 18, and you are left with an advantage of 6 'clock ticks').
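(The same comparison expressed in nanoseconds, as a quick sketch using only the example CAS numbers above; real kits also differ in their other timings.)

```c
/* Same comparison in nanoseconds, using only the example CAS numbers above.
 * First-word latency in ns = CL * 2000 / (data rate in MT/s). */
#include <stdio.h>

int main(void)
{
    printf("DDR3-1600 CL12: %.2f ns\n", 12 * 2000.0 / 1600);  /* 15.00 ns */
    printf("DDR4-3200 CL18: %.2f ns\n", 18 * 2000.0 / 3200);  /* 11.25 ns */
    printf("DDR4-3200 CL24: %.2f ns\n", 24 * 2000.0 / 3200);  /* 15.00 ns, the break-even point */
    return 0;
}
```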
What's more, the platform itself has a lot to say about CPU performance in conjunction with RAM performance. On an Intel platform the RAM controller is very well separated from the CPU, so you can often see benchmarks where a RAM bump has little to no impact on CPU performance, or a CPU overclock can still be pretty effective even when paired with slow-ish RAM. On the plus side it has made for some fantastic stability on Intel platforms... but it also means that there are fewer synergistic benefits to combining super-fast RAM with a beefy CPU on Intel platforms.
AMD does things differently. On AMD the RAM shares the same communications fabric as the two halves of the CPU die. This generally means that small changes in RAM performance can make some noticeable changes on the CPU. On the plus side, this means that you get more bang for your buck when buying insanely fast RAM modules... but the negative is that budget systems will be more constrained with cheap consumer-grade RAM (or, conversely, budget builders will need to think more about budgeting money towards RAM or a GPU).
So ya, this is a pretty big deal for AMD. Being able to support faster RAM, and have more fine-grained control over said RAM, will mean that AMD users will be able to push their systems further, which is awesome!
bigboxes - Friday, May 26, 2017 - link
That's why I went with DDR3-1600 CL7 for my Haswell build two years ago. Haswell didn't benefit from higher clock speeds, so I just went with the lowest latency I could find. That all changed with Broadwell. Good to see AMD getting this improvement early on in the platform's life cycle.
haukionkannel - Friday, May 26, 2017 - link
OK. Soon we will be in a situation where we really need a good, big test where Ryzen is tested with different memory speeds.
Haawser - Friday, May 26, 2017 - link
Kudos to AMD for doing this. TBH I was a bit worried about buying a Ryzen 1600 because of all the stuff about memory compatibility. But now I'm just itching to buy one and get stuck in :-)
CaedenV - Friday, May 26, 2017 - link
LOL, Level 1 Techs is going to be soooo happy. All they talk about are the issues they run into with IOMMU groupings on the new AMD chips.
bcronce - Friday, May 26, 2017 - link
Unfortunately their Infinity Fabric runs at the memory speed, which means inter-module communication is directly affected by the memory speed. This will be an issue for servers, where you can only get DDR4-2333 with ECC: 32 cores with very slow intercommunication.
CaedenV - Friday, May 26, 2017 - link
Well... yes and no.
The attractive thing about AMD's design in servers is that you have the same internal latency no matter how many cores you throw at the design. So for small core-count designs Intel is going to eat AMD's lunch. But for high core counts AMD systems will hum along nicely, while Intel chips will get slower and slower.
bcronce - Friday, May 26, 2017 - link
Ohh, better scaling.
Drumsticks - Friday, May 26, 2017 - link
@anyone from AnandTech: would y'all consider revisiting Ryzen and Core performance in the context of higher-frequency (say, 3200 MHz) memory? I don't know if I've ever seen a piece from y'all about this, and I'd love to get one from a high-quality source on precisely how much each uArch enjoys the extra memory speed.
Hurn - Friday, May 26, 2017 - link
"even a command rate option, which was natively locked to the most aggressive 1T setting"Not quite true. While the Command Rate option could not be changed by the user, it did change, automatically, based on multiplier. Example (AsRock 1.5x-1.9x BIOS versions): at 2400, 2T was used, while at 2667, 1T was used. It's possible this changed by OEM.
HomeworldFound - Friday, May 26, 2017 - link
Excellent, this is good for everyone. It shows that the platform support is there too, which always feels better than buying something and hoping a company does something... anything, even.
mnpoorsadegh - Friday, May 26, 2017 - link
Ryzen - AMD we love you!!! kisssssss
AMD is incredible!!!
Lolimaster - Saturday, May 27, 2017 - link
AMD updates Ryzen so fast that any review prior to mid say is basically obsolete.
Lolimaster - Saturday, May 27, 2017 - link
mid May*
Intel999 - Saturday, May 27, 2017 - link
Yes, this is true.
Gothmoth - Tuesday, May 30, 2017 - link
So when will we see new benchmarks from AnandTech? :-)
machetex1 - Saturday, December 2, 2017 - link
A new AGESA was released for the GA-AB350M-Gaming 3 on 12/01/2017. Not much news on how it is working. Also heard on forums about some issues with it. Here is the link: http://forum.gigabyte.us/thread/1542/am4-beta-bios...