Original Link: https://www.anandtech.com/show/12516/amd-ryzen-2200g-2400g-cpu-core-frequency-scaling



When AMD launched their first generation of Ryzen-based APUs with Zen cores and Vega graphics, the two new parts entered the market at very different budget-focused price points. The Ryzen 3 2200G, sitting at $99 for a quad-core CPU with Vega graphics, was an amazing feat, and the Ryzen 5 2400G, coming in at $169, became the new integrated graphics champion. In our run of performance analysis articles, the question being asked today is a relatively simple one: how well do the new AMD Ryzen 2000-series APUs scale with core frequency? We tested our APUs for standard benchmark performance, discrete gaming performance, and integrated graphics performance.

Core Frequency Scaling on The Ryzen 2000 Series

The perception when overclocking a CPU, or any other component for that matter, is that the increase in clock speed will translate directly into better performance. The theory is pretty simple on paper, but the relationship between a rise in clock rate and a rise in performance can be a somewhat different story, depending on the rest of the system and how the program executes.

As a result, a 25% increase in clock speed only really translates into a 25% jump in performance for the simplest of programs, as there are many other limiting factors to consider, such as graphics bottlenecks, memory performance, or stalls in the compute pipeline.
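
To make that concrete, here is a minimal sketch of a two-component scaling model, loosely in the spirit of Amdahl's law. The 'bound_fraction' values are assumed purely for illustration: they represent the share of runtime stuck behind memory, graphics, or pipeline stalls that does not speed up with the cores.

```python
# Minimal model of frequency scaling against a fixed bottleneck.

def expected_speedup(freq_gain: float, bound_fraction: float) -> float:
    """Overall speedup for a core-frequency gain (e.g. 0.25 for +25%)
    when only (1 - bound_fraction) of runtime scales with the clock."""
    core_time = (1.0 - bound_fraction) / (1.0 + freq_gain)  # scales with clock
    other_time = bound_fraction                             # fixed bottleneck
    return 1.0 / (core_time + other_time)

for bound in (0.0, 0.25, 0.5):
    gain = expected_speedup(0.25, bound) - 1.0
    print(f"+25% clock, {bound:.0%} bottlenecked -> +{gain:.1%} performance")
# +25% clock, 0% bottlenecked -> +25.0% performance
# +25% clock, 25% bottlenecked -> +17.6% performance
# +25% clock, 50% bottlenecked -> +11.1% performance
```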

In our testing for this article, we aim to evaluate the differences in performance scaling at each frequency step on our APUs.

AMD Ryzen 2000-Series APUs
                        Ryzen 5 2400G        Ryzen 3 2200G
                        with Vega 11         with Vega 8
CPU Cores/Threads       4 / 8                4 / 4
Base CPU Frequency      3.6 GHz              3.5 GHz
Turbo CPU Frequency     3.9 GHz              3.7 GHz
TDP @ Base Frequency    65 W                 65 W
Configurable TDP        46-65 W              46-65 W
L2 Cache                512 KB/core          512 KB/core
L3 Cache                4 MB                 4 MB
Graphics                Vega 11              Vega 8
Compute Units           11 CUs               8 CUs
Streaming Processors    704 SPs              512 SPs
Base GPU Frequency      1250 MHz             1100 MHz
DRAM Support            DDR4-2933            DDR4-2933
                        Dual Channel         Dual Channel
OPN PIB                 YD2400C5FBBOX        YD2200C4FBBOX
OPN Tray                YD2400C5M4MFB        YD2200C4M4MFB
Price                   $169                 $99
Bundled Cooler          AMD Wraith Stealth   AMD Wraith Stealth

Our previous articles covering the APU performance include a pure overclock analysis, as well as a detailed guide to delidding the processor for extra performance. We have a future article planned on memory performance.


After delidding the processor for better thermal performance

For the testing, we took each of our APUs from 3.5 GHz to 4.0 GHz on the core frequency in 100 MHz jumps and ran our testing suite throughout. This correlates to a 14.3% frequency jump overall, and matches the frequencies we saw in our overclocking articles. At each point we will compare the results to see if the performance uplift is even loosely correlated to CPU speed.
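
For reference, the headline number falls straight out of the frequency ratio; a quick sketch in Python (the values here are just the test points above, nothing measured):

```python
# The overall gain is simply the ratio of the top and bottom frequencies.
freqs = [3.5 + 0.1 * i for i in range(6)]   # 3.5 GHz to 4.0 GHz in 100 MHz steps
print(f"Overall frequency gain: +{freqs[-1] / freqs[0] - 1.0:.1%}")   # +14.3%

# Each 100 MHz step is a slightly smaller relative jump than the last:
for lo, hi in zip(freqs, freqs[1:]):
    print(f"{lo:.1f} -> {hi:.1f} GHz: +{hi / lo - 1.0:.2%}")
```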

Test Bed Setup

As per our testing policy, we take a premium category motherboard suitable for the socket, and equip the system with a suitable amount of memory. With this test setup, we are using the BIOS to set the CPU core frequency using the provided straps on the MSI B350I Pro AC motherboard. The memory is set to the maximum supported frequency of DDR4-2933 with CAS latency timings of 18-18-18 within the BIOS to provide consistency throughout the different frequencies tested.

Test Setup
Processors          AMD Ryzen 3 2200G, AMD Ryzen 5 2400G
Motherboard         MSI B350I Pro AC
Cooling             Thermaltake Floe Riing RGB 360
Power Supply        Thermaltake Toughpower Grand 1200 W Gold PSU
Memory              G.Skill Ripjaws V, DDR4-3600 17-18-18, 2x8 GB, 1.35 V
Integrated GPU      Vega 8 (1100 MHz) / Vega 11 (1250 MHz)
Discrete GPU        ASUS GTX 1060 Strix, 1620 MHz Base / 1847 MHz Boost
Hard Drive          Crucial MX300 1 TB
Case                Open Test Bed
Operating System    Windows 10 Pro


We must thank the following companies for kindly providing hardware for our multiple test beds.

Thank you to Crucial for providing us with MX300 SSDs. Crucial stepped up to the plate as our benchmark list grows larger with newer benchmarks and titles, and the 1TB MX300 units are strong performers. Based on Marvell's 88SS1074 controller and using Micron's 384Gbit 32-layer 3D TLC NAND, these are 7mm high, 2.5-inch drives rated for 92K random read IOPS and 530/510 MB/s sequential read and write speeds.

The 1TB models we are using here support TCG Opal 2.0 and IEEE-1667 (eDrive) encryption and have a 360TB rated endurance with a three-year warranty.

Further Reading: AnandTech's Crucial MX300 (750 GB) Review



CPU Performance

As stated on the first page, here we take both APUs from 3.5 GHz to 4.0 GHz in 100 MHz increments and run our testing suite at each stage. This is a 14.3% increase in clock speed, and it is our CPU testing that is likely to show the best linearity in improvement.

Rendering - Blender 2.78: link

For a renderer that has been around for what seems like ages, Blender is still a highly popular tool. We managed to wrap up a standard workload into the February 5 nightly build of Blender and measure the time it takes to render the first frame of the scene. Being one of the bigger open source tools out there, it means both AMD and Intel work actively to help improve the codebase, for better or worse, for their respective microarchitectures.

Blender 2.78

The Ryzen 5 2400G scored a +12.1% increase in throughput, while the Ryzen 3 2200G did a bit better at +13.1%.

Rendering – POV-Ray 3.7: link

The Persistence of Vision Ray Tracer, or POV-Ray, is a freeware package for, as the name suggests, ray tracing. It is a pure renderer, rather than modeling software, but the latest beta version contains a handy benchmark for stressing all processing threads on a platform. We have been using this test in motherboard reviews to test memory stability at various CPU speeds to good effect – if it passes the test, the IMC in the CPU is stable for a given CPU speed. As a CPU test, it runs for approximately 1-2 minutes on high-end platforms.

POV-Ray 3.7 Render Benchmark (Multi-Threaded)

The Ryzen 5 2400G gets a +14.9% bump in POV-Ray, compared to the +14.3% we get with the 2200G, which is spot on with the frequency gain.

Compression – WinRAR 5.4: link

Our WinRAR test from 2013 is updated to the latest version of WinRAR at the start of 2014. We compress a set of 2867 files across 320 folders totaling 1.52 GB in size – 95% of these files are small typical website files, and the rest (90% of the size) are small 30-second 720p videos.

WinRAR 5.0.1 Compression Test

For this test, the Ryzen 5 2400G scaled at least in part (+4.7%) across the frequency gain, however the Ryzen 3 2200G was jumping all over the place. WinRAR is highly memory sensitive, which is partly why the 2400G only scored a smaller gain, but it would seem that other factors came into play with the 2200G.

Synthetic – 7-Zip 9.2: link

As an open source compression tool, 7-Zip is a popular tool for making sets of files easier to handle and transfer. The software offers up its own benchmark, to which we report the result.

7-Zip 9.2 Compress/Decompress Benchmark

7-Zip is another benchmark that can have other bottlenecks, like memory, and as a result we see only a +8.7% gain on the 2400G; the 2200G, however, gets a full +14.6% gain in performance.

Point Calculations – 3D Movement Algorithm Test: link

3DPM is a self-penned benchmark, taking basic 3D movement algorithms used in Brownian Motion simulations and testing them for speed. High floating point performance, MHz, and IPC win in the single thread version, whereas the multithread version has to handle the threads and loves more cores. For a brief explanation of the platform agnostic coding behind this benchmark, see my forum post here.
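
3DPM itself is self-penned and documented in the forum post linked above, but for illustration, a toy 3D random-walk kernel in the same spirit might look like the sketch below. The particle and step counts are arbitrary, and this is not the actual 3DPM code.

```python
import math
import random
import time

def random_walk_3d(particles: int, steps: int) -> float:
    """Move each particle one unit step in a uniformly random 3D direction
    and return the mean distance from the origin. Raw floating-point
    throughput dominates, so this scales almost linearly with clock speed."""
    total = 0.0
    for _ in range(particles):
        x = y = z = 0.0
        for _ in range(steps):
            # Uniform direction on the unit sphere.
            theta = random.uniform(0.0, 2.0 * math.pi)
            cos_phi = random.uniform(-1.0, 1.0)
            sin_phi = math.sqrt(1.0 - cos_phi * cos_phi)
            x += sin_phi * math.cos(theta)
            y += sin_phi * math.sin(theta)
            z += cos_phi
        total += math.sqrt(x * x + y * y + z * z)
    return total / particles

start = time.perf_counter()
random_walk_3d(particles=1000, steps=1000)
print(f"{time.perf_counter() - start:.2f} s")   # time drops ~1:1 with clock gains
```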

3DPM: Movement Algorithm Tester (Multi-threaded)

3DPM scales very well over cores and threads, being more compute dependent than anything else. The 2400G nets a +13.4% gain in performance up to 4.0 GHz, and the 2200G gets a similar +13.2% gain as well.

Neuron Simulation - DigiCortex v1.20: link

The newest benchmark in our suite is DigiCortex, a simulation of biologically plausible neural network circuits that simulates the activity of neurons and synapses. DigiCortex relies heavily on a mix of DRAM speed and computational throughput, indicating that systems which apply memory profiles properly should benefit and those that play fast and loose with overclocking settings might get some extra speed up. Results are taken during the steady state period in a 32k neuron simulation and represented as a function of the ability to simulate in real time (1.000x equals real-time).

DigiCortex 1.20 (32k Neuron, 1.8B Synapse)

DigiCortex is almost all about memory performance, although it can sometimes be CPU bottlenecked. The 2400G ultimately hovers around 0.63x-0.65x simulation speed, while the 2200G does see a small gain of up to 4% from increasing the core frequency.

HandBrake v1.0.2 H264 and HEVC: link

Video transcoding (both encode and decode) is a hot topic in performance metrics as more and more content is being created. The first consideration is the standard in which the video is encoded, which can be lossless or lossy, trading performance for file size, quality for file size, or all of the above. Alongside Google's favorite codec, VP9, there are two others that are taking hold: H264, the older codec, is practically everywhere and is optimized for 1080p video, while HEVC (or H265) aims to provide the same quality as H264 but at a lower file size (or better quality for the same size). HEVC is important as 4K is streamed over the air, meaning fewer bits need to be transferred for the same quality content.

Handbrake is a favored tool for transcoding, and so our test regime takes care of three areas.

Low Quality/Resolution H264: Here we transcode a 640x266 H264 rip of a 2 hour film, and change the encoding from Main profile to High profile, using the very-fast preset.

Handbrake v0.9.9 H.264: LQ

High Quality/Resolution H264: A similar test, but this time we take a ten-minute double 4K (3840x4320) file running at 60 Hz and transcode from Main to High, using the very-fast preset.

Handbrake v0.9.9 H.264: HQ

HEVC Test: Using the same HQ video, we change the codec of the original video from 4K60 H264 into 4K60 HEVC.

Handbrake v0.9.9 H.264: 4K60
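
For readers wanting to approximate these tests at home, HandBrake's command-line build can be scripted; a rough sketch is below. The input filename and the quality target are placeholders, and exact flags can vary between HandBrake versions, so treat this as a starting point rather than our exact test harness.

```python
import subprocess

# Placeholder input file; HandBrakeCLI must be installed and on PATH.
SOURCE = "input_4k60_h264.mp4"

jobs = {
    "output_h264_hq.mp4": "x264",   # H264 pass, mirroring the HQ test
    "output_hevc.mp4":    "x265",   # HEVC pass on the same source
}

for output, encoder in jobs.items():
    subprocess.run([
        "HandBrakeCLI",
        "-i", SOURCE,
        "-o", output,
        "-e", encoder,                   # video encoder to use
        "--encoder-preset", "veryfast",  # mirrors the 'very-fast' preset above
        "-q", "20",                      # constant quality target (placeholder)
    ], check=True)
```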



Integrated Graphics Performance

As stated on the first page, here we take both APUs from 3.5 GHz to 4.0 GHz in 100 MHz increments and run our testing suite at each stage. This is a 14.3% increase in clock speed, however when it comes to gaming it can be unpredictable where those gains are going to come from. 

Thief

Thief has been a long-standing title in the hearts of PC gamers since the introduction of the very first iteration back in 1998 (Thief: The Dark Project). Thief is the latest reboot in the long-standing series and renowned publisher Square Enix took over the task from where Eidos Interactive left off back in 2004. The game itself uses the UE3 engine and is known for optimised and improved destructible environments, large crowd simulation and soft body dynamics.

Thief on iGPU - Average Frames Per Second
Thief on iGPU - 99th Percentile

Increasing the core frequency does little for the average frame rates in Thief on integrated graphics, however the 99th percentiles clearly increase for both processors. Those increases are not particularly linear though, making the overall result hard to predict.

Shadow of Mordor

The next title in our testing is a battle of system performance with the open world action-adventure title, Middle Earth: Shadow of Mordor (SoM for short). Produced by Monolith and using the LithTech Jupiter EX engine and numerous detail add-ons, SoM goes for detail and complexity. The main story itself was written by the same writer as Red Dead Redemption, and it received Zero Punctuation’s Game of The Year in 2014.

Shadow of Mordor on iGPU - Average Frames Per Second
Shadow of Mordor on iGPU - 99th Percentile

With the Ryzen 3 2200G, we see a clear gain in frame rates as the frequency is increased, around +7.6%, and similarly in the 99th percentile numbers. The 2400G isn't affected in the same way.

F1 2017

Released in the same year as the title suggests, F1 2017 is the ninth variant of the franchise to be published and developed by Codemasters. The game is based around the F1 2017 season and has been fully licensed by the sport's official governing body, the Federation Internationale de l'Automobile (FIA). F1 2017 features all twenty racing circuits, all twenty drivers across ten teams and allows F1 fans to immerse themselves into the world of Formula One with a rather comprehensive world championship season mode.

F1 2017 on iGPU - Average Frames Per Second
F1 2017 on iGPU - 99th Percentile

The Codemasters EGO engine has historically been an engine that has benefited from an increase in anything: CPU, memory, graphics, the lot. F1 2017 is using EGO 4.0, which seems to have removed some of the CPU bottleneck, as we're getting no difference in our integrated gaming results.



Integrated Graphics Performance, Cont

As stated on the first page, here we take both APUs from 3.5 GHz to 4.0 GHz in 100 MHz increments and run our testing suite at each stage. This is a 14.3% increase in clock speed, however when it comes to gaming it can be unpredictable where those gains are going to come from. 

Civilization 6

First up in our APU gaming tests is Civilization 6. Originally penned by Sid Meier and his team, the Civ series of turn-based strategy games are a cult classic, and many an excuse for an all-nighter trying to get Gandhi to declare war on you due to an integer overflow. Truth be told, I never actually played the first version, but I have played every edition from the second to the sixth, including the fourth as voiced by the late Leonard Nimoy. It is a game that is easy to pick up, but hard to master.

Civilization 6 on iGPU - Average Frames Per Second
Civilization 6 on iGPU - 99th Percentile

For a turn-based game, frame rate is not as vital for Civ 6, so we run our settings at a standard 'real-world' level of detail. At this level, the CPU frequency does not seem to matter so much.

Ashes of the Singularity (DX12)

Seen as the holy child of DX12, Ashes of the Singularity (AoTS, or just Ashes) has been the first title to actively go and explore as many of the DX12 features as it possibly can. Stardock, the developer behind the Nitrous engine which powers the game, has ensured that the real-time strategy title takes advantage of multiple cores and multiple graphics cards, in as many configurations as possible.

Ashes of The Singularity on iGPU - Average Frames Per Second
Ashes of The Singularity on iGPU - 99th Percentile

AoTS seems to get a small uplift in percentile numbers coming off the 3.5 GHz base, although the performance gain does not scale particularly well all the way up to 4.0 GHz.

Rise Of The Tomb Raider (DX12)

One of the newest games in the gaming benchmark suite is Rise of the Tomb Raider (RoTR), developed by Crystal Dynamics, and the sequel to the popular Tomb Raider which was loved for its automated benchmark mode. But don’t let that fool you: the benchmark mode in RoTR is very much different this time around.

Visually, the previous Tomb Raider pushed realism to the limits with features such as TressFX, and the new RoTR goes one stage further when it comes to graphics fidelity. This leads to an interesting set of requirements in hardware: some sections of the game are typically GPU limited, whereas others with a lot of long-range physics can be CPU limited, depending on how the driver can translate the DirectX 12 workload.

Rise of the Tomb Raider on iGPU - Average Frames Per Second
Rise of the Tomb Raider on iGPU - 99th Percentile



Discrete Graphics Performance

As stated on the first page, here we take both APUs from 3.5 GHz to 4.0 GHz in 100 MHz increments and run our testing suite at each stage. This is a 14.3% increase in clock speed, however when it comes to gaming it can be unpredictable where those gains are going to come from. 

For our gaming tests, we are only concerned with real-world resolutions and settings for these games. It would be fairly easy to adjust the settings in each game to a CPU limited scenario, however the results from such a test are mostly pointless and non-transferable to the real world in our view. Scaling takes many forms, based on GPU, resolution, detail levels, and settings, so we want to make sure the results correlate to what users will see day-to-day.

Civilization 6

First up in our CPU gaming tests is Civilization 6. Originally penned by Sid Meier and his team, the Civ series of turn-based strategy games are a cult classic, and many an excuse for an all-nighter trying to get Gandhi to declare war on you due to an integer overflow. Truth be told, I never actually played the first version, but I have played every edition from the second to the sixth, including the fourth as voiced by the late Leonard Nimoy. It is a game that is easy to pick up, but hard to master.

Civilization 6 on ASUS GTX 1060 Strix 6GB - Average Frames Per Second
Civilization 6 on ASUS GTX 1060 Strix 6GB - 99th Percentile

Despite not showing any great scaling for integrated graphics, the minute we bump up to our discrete GPU we can see that Civilization gets a good bump from frequency scaling. The average frame rates climb up +7.0% for the 2400G and +9.7% for the 2200G. Percentile numbers seem to vary on the 2400G, but the 2200G gets a distinct +10.6% gain.

Ashes of the Singularity (DX12)

Seen as the holy child of DX12, Ashes of the Singularity (AoTS, or just Ashes) has been the first title to actively go and explore as many of the DX12 features as it possibly can. Stardock, the developer behind the Nitrous engine which powers the game, has ensured that the real-time strategy title takes advantage of multiple cores and multiple graphics cards, in as many configurations as possible.

Ashes of The Singularity on ASUS GTX 1060 Strix 6GB - Average Frames Per Second
Ashes of The Singularity on ASUS GTX 1060 Strix 6GB - 99th Percentile

AoTS is again a little all over the place: technically there's an 8% gain in frame rates for the 2400G, however the 2200G seems to fluctuate a bit more. The better performance on the 2200G seems a bit startling too: with only four threads, each thread has more memory bandwidth and more core resources to itself than when eight threads run together. This might improve certain latencies in the instruction stream, although it is surprising to see such a big change.

Rise Of The Tomb Raider (DX12)

One of the newest games in the gaming benchmark suite is Rise of the Tomb Raider (RoTR), developed by Crystal Dynamics, and the sequel to the popular Tomb Raider which was loved for its automated benchmark mode. But don’t let that fool you: the benchmark mode in RoTR is very much different this time around.

Visually, the previous Tomb Raider pushed realism to the limits with features such as TressFX, and the new RoTR goes one stage further when it comes to graphics fidelity. This leads to an interesting set of requirements in hardware: some sections of the game are typically GPU limited, whereas others with a lot of long-range physics can be CPU limited, depending on how the driver can translate the DirectX 12 workload.

Rise of the Tomb Raider on ASUS GTX 1060 Strix 6GB - Average Frames Per Second
Rise of the Tomb Raider on ASUS GTX 1060 Strix 6GB - 99th Percentile

RoTR sees a small +3% gain in average frame rates going up to 4.0 GHz, however the percentiles get the biggest boost, showing +17.9% on the 2400G.



Discrete Graphics Performance, Cont

As stated on the first page, here we take both APUs from 3.5 GHz to 4.0 GHz in 100 MHz increments and run our testing suite at each stage. This is a 14.3% increase in clock speed, however when it comes to gaming it can be unpredictable where those gains are going to come from. 

For our gaming tests, we are only concerned with real-world resolutions and settings for these games. It would be fairly easy to adjust the settings in each game to a CPU limited scenario, however the results from such a test are mostly pointless and non-transferable to the real world in our view. Scaling takes many forms, based on GPU, resolution, detail levels, and settings, so we want to make sure the results correlate to what users will see day-to-day.

Thief

Thief has been a long-standing title in the hearts of PC gamers since the introduction of the very first iteration back in 1998 (Thief: The Dark Project). Thief is the latest reboot in the long-standing series and renowned publisher Square Enix took over the task from where Eidos Interactive left off back in 2004. The game itself uses the UE3 engine and is known for optimised and improved destructible environments, large crowd simulation and soft body dynamics.

Thief on ASUS GTX 1060 Strix 6GB - Average Frames Per Second
Thief on ASUS GTX 1060 Strix 6GB - 99th Percentile

Shadow of Mordor

The next title in our testing is a battle of system performance with the open world action-adventure title, Middle Earth: Shadow of Mordor (SoM for short). Produced by Monolith and using the LithTech Jupiter EX engine and numerous detail add-ons, SoM goes for detail and complexity. The main story itself was written by the same writer as Red Dead Redemption, and it received Zero Punctuation’s Game of The Year in 2014.

Shadow of Mordor on ASUS GTX 1060 Strix 6GB - Average Frames Per Second
Shadow of Mordor on ASUS GTX 1060 Strix 6GB - 99th Percentile

F1 2017

Released in the same year as the title suggests, F1 2017 is the ninth variant of the franchise to be published and developed by Codemasters. The game is based around the F1 2017 season and has been fully licensed by the sport's official governing body, the Federation Internationale de l'Automobile (FIA). F1 2017 features all twenty racing circuits, all twenty drivers across ten teams and allows F1 fans to immerse themselves into the world of Formula One with a rather comprehensive world championship season mode.

F1 2017 on ASUS GTX 1060 Strix 6GB - Average Frames Per Second
F1 2017 on ASUS GTX 1060 Strix 6GB - 99th Percentile



APU Core Frequency Scaling

One of the holy grails in processor performance is a higher clock speed. All things considered, if we exclude power consumption from the mix, most performance tools will prefer, in order, a high IPC, then a high frequency, and finally more cores. Building a magical processor with double the IPC or double the frequency (at the same power) would be a magnificent thing, and would usually be preferred to simply adding cores, as core counts can scale; IPC and frequency often cannot.

The main issues with driving frequency are power consumption and process. We all remember the days of the Pentium 4, where chasing frequency led to disastrous heat output and power consumption, and Intel went back to the drawing board to work more on IPC. Processors today are often built around the notion of a peak efficiency point, and the design is geared towards that specific level of performance and power. Moving the frequency outside of that peak area can lead to drastic increases in power consumption, so a balance is struck. We get cores down at 1 W each, or large CPUs consuming 300-500 W, depending on the industry the chip is designed for.

Once a CPU is designed and built, the main dial for adjusting performance is frequency. For the whole chip, this might be the frequency of the cores, the memory/memory controller, or the graphics. For this article, we are concerned purely with core frequency and performance. Predicting how well core performance scales with frequency requires an intimate knowledge of how the software processes instructions: if the program is all about raw compute throughput, then as long as the memory bandwidth can keep up, a direct scaling factor can usually be observed. When the memory cannot keep up, or the storage is lacking, or other pathways are blocked, CPU performance looks identical no matter what the frequency.
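
As a toy illustration of that last point, think of performance as capped by whichever of two ceilings binds first: a compute ceiling that rises with core clock, and a memory ceiling that does not. The units and cap values below are made up purely to show the shape of the behavior, not measured from any system.

```python
# Toy roofline-style model: performance is capped by whichever of the
# compute ceiling or the memory ceiling is hit first.

def perf(core_ghz: float, compute_per_ghz: float, memory_cap: float) -> float:
    """Achievable throughput, in arbitrary units."""
    return min(core_ghz * compute_per_ghz, memory_cap)

for ghz in (3.5, 3.6, 3.7, 3.8, 3.9, 4.0):
    compute_bound = perf(ghz, compute_per_ghz=10.0, memory_cap=1e9)   # memory never binds
    memory_bound = perf(ghz, compute_per_ghz=10.0, memory_cap=36.0)   # memory binds early
    print(f"{ghz:.1f} GHz: compute-bound {compute_bound:.0f}, "
          f"memory-bound {memory_bound:.0f}")

# The compute-bound case scales 1:1 with frequency (35 -> 40, i.e. +14.3%);
# the memory-bound case saturates at 36 from 3.6 GHz onwards.
```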

In our CPU tests, we adjusted the frequency from 3.5 GHz to 4.0 GHz, testing at each 100 MHz step, for a total +14.3% frequency gain. In the results, the raw throughput tests such as Blender, POV-Ray, 3DPM, and Handbrake all had performance gains around the 13-14% mark, as you would expect with a direct frequency uplift, showing that these tests scale well. Our memory-limited benchmarks, such as WinRAR and DigiCortex, only saw smaller gains, of 5% at best, and often none at all.
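
Dividing each observed gain by the 14.3% frequency gain gives a simple 'scaling efficiency' that makes the split obvious. The figures below are the Ryzen 5 2400G results reported on the previous pages; nothing new is measured here.

```python
# Scaling efficiency = observed performance gain / frequency gain (+14.3%).
FREQ_GAIN = 4.0 / 3.5 - 1.0   # +14.3%

gains_2400g = {     # observed gains from the CPU tests above
    "POV-Ray": 0.149,
    "3DPM":    0.134,
    "Blender": 0.121,
    "7-Zip":   0.087,
    "WinRAR":  0.047,
}

for test, gain in gains_2400g.items():
    print(f"{test:8s} +{gain:.1%} -> {gain / FREQ_GAIN:.0%} of the clock gain")
# POV-Ray and 3DPM land near 100% (compute bound); WinRAR lands near a
# third of the clock gain (memory bound), matching the commentary above.
```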

For gaming, predicting how the performance will change is quite difficult. Moving data over the PCIe bus is one thing, but it will come down to draw calls and the ability of the CPU to move textures and perform in-scene adjustments. We ran tests on both the integrated graphics and a discrete graphics card, the GTX 1060, which is around the price point at which someone buying an APU is likely to invest in a discrete graphics card.

For our integrated graphics testing, where we normally expect to see improvements from overclocking the CPU, almost nothing happened (or nothing beyond testing variance). Shadow of Mordor saw an uptick on the Ryzen 3 2200G, but that was more of an anomaly than a rule. The 99th percentiles fared a bit better.

Thief was the best recipient of an increased core clock in its percentiles, with Ashes not far behind. Shadow of Mordor saw the same gain of around 7% with the 2200G.

On our discrete GPU, the improvements were more obvious:

Most combinations saw at least a 3% increase, although only Civilization 6 and Ashes on the 2200G approached a 13% increase commensurate with the frequency uplift.

The scaling worked best with the 99th percentile frame rates, with almost every game seeing at least a 5% gain in performance, and a good number seeing 10-13% gains as well. Rise of the Tomb Raider, which loves being the oddball in these cases, actually saw a benefit above and beyond the 14.3% frequency increase.

Overall, it would seem that overclocking an APU with just the core frequency in mind works best for pure CPU tests, and in the percentiles when using a discrete graphics card. Users more focused on integrated graphics should look to the IGP frequency and the memory instead.
