Original Link: https://www.anandtech.com/show/17078/intel-alder-lake-ddr5-memory-scaling-analysis



One of the most agonizing elements of Intel's launch of its latest 12th generation Alder Lake desktop processors is its support of both DDR5 and DDR4 memory. Motherboards are either one or the other, while we wait for DDR5 to take hold in the market. While DDR4 memory isn't new to us, DDR5 memory is, and as a result, we've been reporting on the release of DDR5 since last year. Now that DDR5 is here, albeit difficult to obtain, we know from our Core i9-12900K review that DDR5 performs better at baseline settings when compared to DDR4. To investigate the scalability of DDR5 on Alder Lake, we have used a premium kit of DDR5 memory from G.Skill, the Trident Z5 DDR5-6000. We test the G.Skill Trident Z5 kit from DDR5-4800 to DDR5-6400 at CL36, as well as DDR5-4800 with the tightest timings we could manage, to see whether latency also plays a role in enhancing performance.

DDR5 Memory: Scaling, Pricing, Availability

In our launch day review and analysis of Intel's latest Core i9-12900K, we tested many variables that could impact performance on the new platform. This includes the performance variation when using Windows 11 versus Windows 10, performance with both DDR5 and DDR4 at official speeds, and the impact of the new hybrid Performance and Efficiency cores.

With all of the different variables in that review, the purpose of this article is to evaluate and analyze the impact that DDR5 memory frequency has on performance. In our past memory scaling articles, we've typically focused on the effects of frequency alone, but this time we also wanted to see how tighter latencies can impact overall performance.


ASUS ROG Maximus Z690 Hero motherboard with G.Skill Trident Z5 DDR5-6000 memory

Touching on the pricing and availability of DDR5 memory at the time of writing, the TLDR is that it's currently hard to find any in stock, and when it is in stock, it costs a lot. With a massive global chip shortage that many put down to the Coronavirus pandemic, the drought has bumped prices above MSRP on many components. Interestingly enough, it's not the DDR5 DRAM itself causing the shortage; rather, the power management controllers that each DDR5 module requires are in short supply. As a result, the increased cost can be likened to a sort of early adopter's fee, where users wanting the latest and greatest will have to pay through the nose to own it.

Another variable to consider with DDR5 memory is pricing between tiers: a 32GB (2x16) kit of G.Skill Ripjaws DDR5-5200 can be found at retailer MemoryC for $390. In contrast, a more premium and faster kit such as the G.Skill Trident Z5 DDR5-6000 has a price tag of $508, an increase of around 30%. One thing to bear in mind is that a price increase isn't linear to the performance increase, and that goes for pretty much every component, from memory to graphics cards and even processors. The more premium a product, the more it costs.

Enabling X.M.P 3.0: It's Technically Overclocking

In March 2021, we reported that Intel had effectively discontinued its 'Performance Tuning Protection Plan.' This was essentially an extended warranty for users planning to overclock Intel's processors, which could be purchased at an additional cost. One of the main benefits was that if users somehow damaged the silicon with higher than typical voltages (CPU VCore and memory-related voltages), they could RMA the processor back to Intel on a like-for-like replacement basis. Intel stated that too few people took advantage of the plan to justify continuing it.

One of the variables to note when running Intel's Xtreme Memory Profiles (X.M.P 3.0) on DDR5 memory is that Intel classes this as overclocking. That means when RMA'ing a faulty processor, running the CPU at stock settings but with an X.M.P 3.0 memory profile enabled, such as DDR5-6000 CL36, is something Intel considers an overclock. This could inherently void the warranty of the CPU. All processor manufacturers adhere to JEDEC specifications for the memory settings they officially recommend with any given processor; for Intel, that's DDR4-3200 for its 11th generation (Rocket Lake) and DDR5-4800/DDR4-3200 for its 12th generation (Alder Lake) processors.

When it comes to overclocking DDR5 memory on the ASUS ROG Maximus Z690 Hero, we did all of our testing with Intel's Memory Gear at the 1:2 ratio. We did test the 1:1 and 1:4 ratio but without any great success. When enabling X.M.P on the G.Skill kit, it automatically sets the 1:2 ratio, with the memory controller running at half the speed of the memory kit.
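To put those ratios into concrete numbers, below is a quick illustrative sketch (our own, not vendor tooling) of how the memory controller clock falls out of the data rate and the selected gear mode; the function name is ours.

```python
# Illustrative sketch: effective clocks for Intel's memory Gear ratios.
# DDR transfers data twice per clock, so the real memory clock is half
# the MT/s rating; the controller then runs at 1:1, 1:2, or 1:4 of that.

def gear_clocks(data_rate_mts: int, gear: int) -> tuple[float, float]:
    """Return (memory clock MHz, memory controller clock MHz)."""
    mem_clock = data_rate_mts / 2   # DDR: two transfers per cycle
    uclk = mem_clock / gear         # Gear 1 = 1:1, Gear 2 = 1:2, Gear 4 = 1:4
    return mem_clock, uclk

for rate in (4800, 6000, 6400):
    mclk, uclk = gear_clocks(rate, gear=2)
    print(f"DDR5-{rate}: memory clock {mclk:.0f} MHz, controller {uclk:.0f} MHz")
# DDR5-6000 in Gear 2 -> the controller runs at 1500 MHz,
# half of the 3000 MHz memory clock, as XMP sets it here.
```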

Issues Within Windows 10: Priority and Core Scheduling

As we highlighted in our review of the Intel Core i9-12900K processor, in certain software environments there can be unexpected performance behavior. When a thread starts, the operating system (Windows 10) will assign the task to a specific core. As the P-cores (performance) and E-cores (efficiency) in the hybrid Alder Lake design offer different levels of performance and efficiency, it is up to the scheduler to make sure the right task is on the right core. Intel's intended use case is that the in-focus software gets priority, and everything else is moved to background tasks. However, on Windows 10 there is an additional caveat: any software set to below normal (or lower) priority will also be considered background, and be placed on the E-cores, even if it is in focus. Some high-performance software sets itself to below normal priority in order to keep the system running it responsive, so there's a clash of ideology between the two.

Various solutions to this exist. Intel stated to us that users could either run dual monitors or change the Windows Power Plan to High Performance. To sidestep the issue, all of our testing in this article was done with the Windows Power Plan set to High Performance (as I do for motherboard reviews).

In addition to this, I also used a third-party scheduler, the Process Lasso software, to check for performance variations. I can safely and confidently say that there was around a 0.5% margin of variance between using the High-Performance Power Plan and setting the affinities and priorities to high using the Process Lasso software.
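For readers who want to replicate the same effect without third-party software, the sketch below shows the general idea using Python's psutil package. The PID, the core list, and the assumption that the 12900K's sixteen P-core threads enumerate as logical CPUs 0-15 are ours; this is not how Process Lasso works internally.

```python
# A minimal sketch (not Process Lasso) of pinning a process to the P-cores
# and raising its priority on Windows, using the psutil package.
import psutil

def prefer_p_cores(pid: int, p_core_cpus: list[int]) -> None:
    proc = psutil.Process(pid)
    proc.cpu_affinity(p_core_cpus)         # restrict the process to P-core threads
    proc.nice(psutil.HIGH_PRIORITY_CLASS)  # Windows-specific priority class

# Hypothetical usage: on a Core i9-12900K the eight hyper-threaded P-cores
# are assumed to enumerate first, as logical CPUs 0-15.
# prefer_p_cores(pid=1234, p_core_cpus=list(range(16)))
```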

It should also be noted that users running Windows 11 shouldn't experience any of these issues. When set correctly, we saw no difference between Windows 10 and Windows 11 in our original Core i9-12900K review, and so to keep things consistent with our previous testing for now, we're sticking with Windows 10 with our fix applied.

Test Bed, Setup, and Hardware

As this article focuses on how well DDR5 memory scales, we have used a premium Z690 motherboard, the ASUS ROG Maximus Z690 Hero, and a premium ASUS ROG Ryujin II 360 mm AIO CPU cooler. In terms of settings, we've left the Intel Core i9-12900K at its firmware defaults, with the only changes made to the memory settings.

DDR5 Memory Scaling Test Setup (Alder Lake)
Processor: Intel Core i9-12900K, 125 W, $589
           8+8 Cores, 24 Threads, 3.2 GHz (5.2 GHz P-Core Turbo)
Motherboard: ASUS ROG Maximus Z690 Hero (BIOS 0803)
Cooling: ASUS ROG Ryujin II 360 360mm AIO
Power Supply: Corsair HX850 80Plus Platinum 850 W
Memory: G.Skill Trident Z5 2 x 16 GB, DDR5-6000 CL 36-36-36-76 (XMP)
Video Card: MSI GTX 1080 (1178/1279 Boost)
Hard Drive: Crucial MX300 1TB
Case: Open Benchtable BC1.1 (Silver)
Operating System: Windows 10 Pro 64-bit, Build 21H2

For the operating system, we've used the most widely available and latest build of Windows 10 64-bit (21H2) with all of the current updates at the time of testing. (For those wondering about our selection of GPU, the truth is that all our editors are in different locations around the world and we do not have a singular pool of resources. This is Gavin's regular testing GPU until we can get a replacement, which in this current climate is unlikely. - Ian)

DDR5 Memory Frequencies/Latencies Tested
Memory: G.Skill Trident Z5 (2 x 16 GB), Samsung ICs
Frequency/Timings:
DDR5-4800 CL 32-32-32-72
DDR5-4800 CL 36-36-36-76
DDR5-5000 CL 36-36-36-76
DDR5-5200 CL 36-36-36-76
DDR5-5400 CL 36-36-36-76
DDR5-5600 CL 36-36-36-76
DDR5-5800 CL 36-36-36-76
DDR5-6000 CL 36-36-36-76
DDR5-6200 CL 36-36-36-76
DDR5-6400 CL 36-36-36-76

Above are all of the frequencies and latencies we've tested in this article. For scaling, we selected the G.Skill Trident Z5 memory kit as it had the best overclocking ability of all the DDR5 kits we received at launch. Out of the box it was rated the highest for frequency, and we pushed it even further. The G.Skill Trident Z5 memory was tested from DDR5-4800 CL36 up to and including DDR5-6400 CL36, along with a special case of DDR5-4800 CL32 to examine lower CAS latencies. Details on our overclocking exploits come later in the review.
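As a sketch of why that CL32 data point is interesting: CAS latency in nanoseconds is the cycle count multiplied by the clock period, and the period of a DDR kit is 2000 divided by its MT/s rating. The short calculation below (our own illustration) converts a few of the tested configurations.

```python
# First-word latency in ns: CL cycles x clock period, where the clock
# period of a DDR kit is 2000 / (data rate in MT/s) nanoseconds.

def cas_ns(data_rate_mts: int, cl: int) -> float:
    return cl * 2000 / data_rate_mts

for rate, cl in [(4800, 36), (4800, 32), (6000, 36), (6400, 36)]:
    print(f"DDR5-{rate} CL{cl}: {cas_ns(rate, cl):.2f} ns")
# DDR5-4800 CL36 = 15.00 ns, DDR5-4800 CL32 = 13.33 ns,
# DDR5-6000 CL36 = 12.00 ns, DDR5-6400 CL36 = 11.25 ns.
```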

Read on for more information about G.Skill's Trident Z5 DDR5-6000, as well as our analysis on the scalability of DDR5 memory on Intel's Alder Lake. In this article, we cover the following:

  • 1. Overview and Test Setup (this page)
  • 2. A Closer Look at the G.Skill Trident Z5 DDR5-6000 CL36
  • 3. CPU Performance
  • 4. Gaming Performance: Low Resolution
  • 5. Gaming Performance: High Resolution
  • 6. Conclusion


G.Skill Trident Z5 Memory (F5-6000U3636E16G)

2x16GB of DDR5-6000 CL36

For the purposes of this article and to investigate scaling performance on Alder Lake, G.Skill supplied us with a kit of its latest Trident Z5 DDR5-6000 CL36 memory. The G.Skill Trident Z series has been its flagship model for many years, focusing on performance but blending in a premium and clean-cut aesthetic. G.Skill offers two types of its Trident Z5 memory, some without RGB LEDs such as the kit we are taking a look at today (Z5), and the Trident Z5 RGB, which includes an RGB LED light bar along the top of each memory stick.

Focusing on the non-RGB variants, the G.Skill Trident Z5 is available in various 32 GB (2x16) configurations starting at DDR5-5600 CL36 and ranging up to DDR5-6000 CL36. G.Skill has also unveiled a Trident Z5 RGB DDR5-7000 CL40 kit, which is extremely fast, and when it is released, it will ultimately be one of the most expensive, if not the most expensive, DDR5 memory kits on the market.

Looking at the design, the G.Skill Trident Z5 DDR5 memory uses a 42 mm tall (at the highest point) heatsink, with G.Skill offering a two-tone contrasting matte black kit, as well as a black and metallic silver kit. The kit supplied to us by G.Skill uses the two-tone matte black heatsinks. The heatsinks are constructed from aluminum, and G.Skill states that it uses a newer and more 'streamlined' design. They are quite pointy, and as with previous G.Skill memory kits, they can feel sharp when installing the modules.

Looking at what CPU-Z reports, we can see that the X.M.P 3.0 profile matches the advertised specifications, with this particular kit running DDR5-6000 at latency timings of 36-36-36-76. The kit operates at 1.3 V, a 0.2 V bump over its JEDEC SPD rating of DDR5-4800 at 1.1 V.

Checking the more intricate details of the G.Skill Trident Z5 DDR5-6000 memory, CPU-Z reports that the kit uses Samsung ICs, with a 1Rx8 array of 16 Gb ICs employed on each module. While CPU-Z doesn't explicitly report it, we reached out to G.Skill, who confirmed that this kit uses a single-rank design.
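As a quick sanity check of that 1Rx8 layout (our own arithmetic, not from G.Skill): one rank of eight x8 chips covers the 64-bit data bus, so eight 16 Gb ICs work out to exactly the module's 16 GB capacity.

```python
# Module capacity from the organisation CPU-Z reports: a 1Rx8 module is
# one rank of eight x8 DRAM chips filling a 64-bit data bus.
ranks = 1              # "1R": single rank
chips_per_rank = 8     # "x8": eight 8-bit-wide chips per 64-bit rank
ic_density_gbit = 16   # 16 Gb per Samsung IC

module_gb = ranks * chips_per_rank * ic_density_gbit / 8  # gigabits -> gigabytes
print(f"{module_gb:.0f} GB per module")  # 16 GB, matching the 2x16 GB kit
```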



CPU Performance, Short Form

To show the performance and scaling of DDR5 memory, we've opted for a short-form selection of benchmarks from our test suite.

Compression – WinRAR 5.90

Our WinRAR test dates back to 2013 and has since been updated to the latest version of WinRAR, here 5.90. We compress a set of 2867 files across 320 folders totaling 1.52 GB in size – 95% of these files are small typical website files, and the rest (90% of the size) are small 30-second 720p videos.

WinRAR 5.90
Blue is XMP; Orange is JEDEC at Low CL

Our WinRAR 5.90 benchmark is where we saw the most conclusive performance scaling. From DDR5-4800 CL36 to DDR5-6400 CL36, we saw an impressive 14.1% increase in throughput. Even at the DDR5-6000 CL36 XMP settings, there was a 9.4% jump in performance over the baseline.

The DDR5-4800 CL32 configuration also provided a good uplift in performance here. It should also be noted that WinRAR 5.90 performance can be very memory dependent, and it shows in our results.

Rendering - Blender 2.79b: 3D Creation Suite

A high-profile rendering tool, Blender is open source, allowing for massive amounts of configurability, and is used by a number of high-profile animation studios worldwide. The organization recently released a Blender benchmark package, a couple of weeks after we had narrowed down our Blender test for our new suite; however, their test can take over an hour. For our results, we run one of the sub-tests in that suite through the command line - a standard 'bmw27' scene in CPU-only mode - and measure the time to complete the render.
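The sketch below shows the general shape of how such a headless render can be timed; the scene filename is hypothetical, and we assume CPU rendering is configured inside the .blend file itself, as was typical for Cycles in Blender 2.79.

```python
# Timing a background (headless) Blender render of a single frame.
# -b runs Blender without the UI; -f 1 renders frame 1 of the scene.
import subprocess
import time

start = time.perf_counter()
subprocess.run(
    ["blender", "-b", "bmw27_cpu.blend", "-f", "1"],  # hypothetical scene file
    check=True,
)
print(f"Render completed in {time.perf_counter() - start:.1f} s")
```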

Blender 2.79b bmw27_cpu Benchmark
Blue is XMP; Orange is JEDEC at Low CL

In terms of scaling performance in our Blender benchmark, we saw very little variation in performance from top to bottom. Although the Trident Z5 at DDR5-6400 CL36 did perform best, it was a modest 0.6% jump in performance from our lowest result to the best.

Rendering - Cinebench R23

Maxon's real-world and cross-platform Cinebench test suite has been a staple of benchmarking and rendering performance for many years. Its latest installment is the R23 version, based on the latest code and updated compilers. It acts as a real-world system benchmark that incorporates common tasks and rendering workloads, as opposed to less diverse benchmarks that only take measurements of certain CPU functions. Cinebench R23 can measure both single-threaded and multi-threaded performance.

Cinebench R23 CPU: Single Thread
Cinebench R23 CPU: Multi Thread
Blue is XMP; Orange is JEDEC at Low CL

Looking at performance in Cinebench R23, the results were a little sporadic, in both the single-threaded and multi-threaded testing. All of the results in the single-threaded test were within a margin of 1.8%, with the multi-threaded results within a 1.6% level of variation from top to bottom.

3DPMv2.1 – 3D Movement Algorithm Test

3DPM is a self-penned benchmark, taking basic 3D movement algorithms used in Brownian motion simulations and testing them for speed. High floating-point performance, MHz, and IPC win in the single-threaded version, whereas the multi-threaded version has to handle the threads and benefits from more cores. For a brief explanation of the platform-agnostic coding behind this benchmark, see my forum post.
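3DPM itself isn't public, but the flavor of work it measures looks something like the sketch below (our own illustration, not the benchmark's actual code): repeated random unit steps in 3D, which lean on floating-point throughput rather than memory bandwidth.

```python
# A toy version of the 3DPM idea: move a particle through n random unit
# steps in 3D (Brownian-style motion) and return its final displacement.
import math
import random

def random_walk_3d(n_steps: int) -> float:
    x = y = z = 0.0
    for _ in range(n_steps):
        theta = math.acos(random.uniform(-1.0, 1.0))  # polar angle, uniform on the sphere
        phi = random.uniform(0.0, 2.0 * math.pi)      # azimuthal angle
        x += math.sin(theta) * math.cos(phi)
        y += math.sin(theta) * math.sin(phi)
        z += math.cos(theta)
    return math.sqrt(x * x + y * y + z * z)

print(f"Displacement after 1M steps: {random_walk_3d(1_000_000):.1f}")
```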

3D Particle Movement v2.1
Blue is XMP; Orange is JEDEC at Low CL

Similar to what we saw in both Cinebench R23 and in our Blender benchmarks, performance in our 3DPM v2.1 testing shows little to no improvement with faster memory across the range of results. The level of variation between the best result and the worst result was around 0.3%.



Scaled Gaming Performance: Low Resolution

Civilization 6

Originally penned by Sid Meier and his team, the Civilization series of turn-based strategy games are a cult classic, and many an excuse for an all-nighter trying to get Gandhi to declare war on you due to an integer underflow. Truth be told I never actually played the first version, but I have played every edition from the second to the sixth, including the fourth as voiced by the late Leonard Nimoy, and it is a game that is easy to pick up, but hard to master.

Benchmarking Civilization has always been somewhat of an oxymoron – for a turn-based strategy game, the frame rate is not necessarily the important thing here, and even in the right mood, something as low as 5 frames per second can be enough. With Civilization 6, however, Firaxis went hardcore on visual fidelity, trying to pull you into the game. As a result, Civilization can be taxing on graphics and CPUs as we crank up the details, especially in DirectX 12.

GTX 1080: Civilization VI, Average FPS
GTX 1080: Civilization VI, 95th Percentile
Blue is XMP; Orange is JEDEC at Low CL

Performance in Civ VI shows there is some benefit to be had by going from DDR5-4800 to DDR5-6400. The results also show that Civ VI can benefit from lower latencies, with DDR5-4800 at CL32 performing similarly to DDR5-5400 CL36.

Shadow of the Tomb Raider (DX12)

The latest installment of the Tomb Raider franchise does less rising and lurks more in the shadows with Shadow of the Tomb Raider. As expected, this action-adventure follows Lara Croft, the main protagonist of the franchise, as she muscles through the Mesoamerican and South American regions looking to stop a Mayan apocalypse she herself unleashed. Shadow of the Tomb Raider is the direct sequel to the previous Rise of the Tomb Raider; it was developed by Eidos Montreal and Crystal Dynamics, published by Square Enix, and hit shelves across multiple platforms in September 2018. This title effectively closes the Lara Croft Origins story and received critical acclaim upon its release.

The integrated Shadow of the Tomb Raider benchmark is similar to that of the previous game Rise of the Tomb Raider, which we have used in our previous benchmarking suite. The newer Shadow of the Tomb Raider uses DirectX 11 and 12, with this particular title being touted as having one of the best implementations of DirectX 12 of any game released so far.

GTX 1080: Shadow of the Tomb Raider, Average FPS
GTX 1080: Shadow of the Tomb Raider, 95th Percentile
Blue is XMP; Orange is JEDEC at Low CL

In Shadow of the Tomb Raider, we saw a consistent bump in performance in both average and the 95th percentiles as we tested each frequency. Testing at DDR5-4800 CL32, we saw decent gains in performance over DDR5-4800 CL36, with the lower latencies outperforming DDR5-5800 CL36 in both average and 95th percentile.

Strange Brigade (DX12)

Strange Brigade is set in 1903 Egypt and follows a story very similar to that of the Mummy film franchise. This particular third-person shooter is developed by Rebellion Developments, which is more widely known for games such as the Sniper Elite and Alien vs Predator series. The game follows the hunt for Seteki the Witch Queen, who has arisen once again, and the only 'troop' who can ultimately stop her. Gameplay is cooperative-centric, with a wide variety of levels and many puzzles which need solving by the British colonial Secret Service agents sent to put an end to her reign of barbarity and brutality.

The game supports both the DirectX 12 and Vulkan APIs and houses its own built-in benchmark which offers various options up for customization including textures, anti-aliasing, reflections, draw distance and even allows users to enable or disable motion blur, ambient occlusion and tessellation among others. AMD has boasted previously that Strange Brigade is part of its Vulkan API implementation offering scalability for AMD multi-graphics card configurations. For our testing, we use the DirectX 12 benchmark.

GTX 1080: Strange Brigade DX12, Average FPS
GTX 1080: Strange Brigade DX12, 95th Percentile
Blue is XMP; Orange is JEDEC at Low CL

Performance in Strange Brigade shows there is some benefit to higher frequencies, with DDR5-6400 CL36 consistently outperforming DDR5-4800 to DDR5-6200 CL36. The biggest benefit came in 95th percentile performance, with DDR5-4800 at lower latencies of CL32 coming close to DDR5-5800 performance.



Scaled Gaming Performance: High Resolution

Civilization 6

GTX 1080: Civilization VI, Average FPS
GTX 1080: Civilization VI, 95th Percentile
Blue is XMP; Orange is JEDEC at Low CL

Performance in Civ VI shows there is very little benefit to be had by going from DDR5-4800 to DDR5-6400. The results also show that Civ VI actually benefits from lower latencies, with DDR5-4800 at CL32 outperforming all the other frequencies tested at CL36.

Shadow of the Tomb Raider (DX12)

GTX 1080: Shadow of the Tomb Raider, Average FPS
GTX 1080: Shadow of the Tomb Raider, 95th Percentile
Blue is XMP; Orange is JEDEC at Low CL

Looking at our results in Shadow of the Tomb Raider, we did see some improvements in performance scaling from DDR5-4800 to DDR5-6400. The biggest improvement came when testing DDR5-4800 CL32, which performed similarly to DDR5-6000 CL36.

Strange Brigade (DX12)

GTX 1080: Strange Brigade DX12, Average FPS
GTX 1080: Strange Brigade DX12, 95th Percentile
Blue is XMP; Orange is JEDEC at Low CL

Average frame rates in Strange Brigade weren't influenced by the frequency or latency of the G.Skill Trident Z5 DDR5 memory. We do note, however, that 95th percentile performance does, for the most part, improve as we increase the memory frequency.



DDR5 Memory Scaling on Alder Lake Conclusion

The launch of Intel's Alder Lake has given users new options to consider. One of them is the Intel Z690 chipset, which supports either DDR4 memory or the latest DDR5 memory, albeit not on the same motherboard. The most premium Z690 models support DDR5 only, on the premise that the best hardware pairs with the newest components. The more 'value-orientated' Z690 models typically come in variants for either DDR5 or DDR4 support, with DDR4 still widely available to purchase.

Ultimately, when it comes down to performance, as per our Core i9-12900K review, DDR5 memory has a clear advantage over DDR4, mostly in heavy multithreaded scenarios. Beyond the uplift in overall memory bandwidth, there is potential for even faster memory: SK Hynix announced last year that it plans to produce up to DDR5-8400 memory. The sky is the limit. We've reached the pinnacle of DDR4 in terms of performance across multiple AMD and Intel platforms, but now it's time for DDR5 to make its mark, despite the fact that it's only supported on Intel's Alder Lake platform at the time of writing.


The Intel Core i9-12900K processor (Alder Lake)

The biggest question we wanted to address in this article is: how does DDR5 scale with frequency?

Increasing Memory Frequency, The Performance Isn't Linear

Deciphering the bigger picture from the variety of tests and benchmarks in our suite shows that increasing memory frequency from DDR5-4800 to DDR5-6400 doesn't, for the most part, play as critical a role as first thought. Going from DDR5-4800 to DDR5-6400 in terms of raw MT/s is a 33.3% jump, yet that increase in frequency doesn't translate into an equivalent real-world performance increase, if there is any increase at all.
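To put numbers behind that, the short calculation below (our own arithmetic) converts the data rates into theoretical peak dual-channel bandwidth, which scales by exactly the same 33.3%, far more than almost any of our benchmarks moved.

```python
# Theoretical peak bandwidth: a 64-bit DIMM moves 8 bytes per transfer,
# so dual-channel peak GB/s = MT/s x 8 bytes x 2 channels / 1000.
def peak_bw_gbs(data_rate_mts: int, channels: int = 2) -> float:
    return data_rate_mts * 8 * channels / 1000

low, high = peak_bw_gbs(4800), peak_bw_gbs(6400)
print(f"{low:.1f} GB/s -> {high:.1f} GB/s, +{(high / low - 1) * 100:.1f}%")
# 76.8 GB/s -> 102.4 GB/s, +33.3%: the raw bandwidth jump our
# benchmarks mostly failed to turn into real-world gains.
```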

This is down to the design of Intel's Alder Lake: the bottleneck is usually somewhere else in the system. Even the relative infancy of the operating system and its new scheduler might play a bigger role than the memory. Our results paint quite a simple picture for the most part, with three main points to take away:

  • 1. CPU intensive benchmarks with smaller memory workloads provided zero uplift
  • 2. Benchmarks where memory workloads were higher show benefit from higher frequencies/tighter latencies
  • 3. Games we tested benefitted from increased memory speed and tighter timings, but we're more likely to see a bigger improvement from an increase in CPU frequency or GPU frequency.

Taking our results from our Shadow of the Tomb Raider benchmark testing at 1440p, we saw an increase of just 3.7% in average framerate from DDR5-4800 CL36 to DDR5-6400 CL36, with a 4% increase in 95th percentile performance. There was a small uplift, even with the somewhat noisy results.

The TLDR of it comes down to one main point: there are a lot of titles out there, some more CPU intensive, some more reliant on graphical power. Memory frequency does play a part in uplifting that performance slightly. It might be a small part of the overall grand scheme of things, but it's still increasing performance whichever way you look at it.

Looking at the results of our WinRAR 5.90 testing, this is where we saw the most significant variation in performance from top to bottom. Going from DDR5-4800 CL36 to DDR5-6400 CL36, we saw an uplift in performance of just over 14%. What's very interesting from our testing is what happened when we went low latency at DDR5-4800 with CL32 timings. Tightening up the primary latencies at the same frequency netted us an additional 6.4% jump in performance, which shows that increasing frequency isn't the only way to improve overall performance.

DDR5 Memory Pricing and Availability

One of the most frustrating aspects of building a new system or upgrading to the latest generation is availability. The current global chip shortage has made things very difficult, not just for consumers but for manufacturers too. This has resulted in low worldwide stock of computer components, from processors to memory ICs and graphics cards; yes, the mining craze has also played a massive part in gobbling up all of those precious high-performance GPUs. For memory, the issue isn't the DDR5 chips themselves, but the power management controllers each module needs; there aren't enough to meet market demand.


The G.Skill Trident Z5 DDR5 memory in silver and black

Looking at the pricing and availability of the G.Skill Trident Z5 DDR5-6000 32GB (2x16) memory kit, it's available for around $430 at the time of writing. Trying to find any value in DDR5 at the moment is difficult to justify, and it's even harder to get a solid baseline on pricing, given demand outweighs the current supply. This inherently pushes pricing up to uncomfortable levels.

Compared to a G.Skill Trident Z Neo DDR4-4000 CL18 32 GB (2x16) kit, which currently costs around $169, the G.Skill Trident Z5 DDR5-6000 CL36 kit of the same capacity costs roughly 154% more. This large hike in price doesn't come close to matching the increase in performance, and as we've mentioned previously, performance doesn't scale linearly with price, so it's simply not an area where fair value comparisons can be made. Value in DDR5 pricing at the time of writing is sadly nonexistent.

Also yes, we've seen DDR5 memory kits on eBay going for over $1000. That is somewhat insane.

Final Thoughts

Since the launch of Intel's 12th generation Alder Lake processors, the availability of the processors themselves has been relatively decent for a new launch. The biggest barrier to unlocking many of the memory performance benefits is that DDR5 stock hasn't been available, unless you're willing to pay almost double the equivalent DDR4 cost. This means that users looking to adopt Alder Lake have had to either wait for DDR5 stock or opt for DDR4 and a corresponding DDR4-based Z690 motherboard.

While there is still value to be had from DDR4 on Alder Lake, as it's much more cost-effective, we typically recommend holding out for the newer memory. It means the hardware has better resale value and provides a platform for future improvement; buying into DDR4 now means investing in a platform that has reached its ceiling. But in this market, perhaps just being able to buy what is available matters more. Opting for DDR4 memory, Alder Lake, and a compatible Z690 motherboard will still yield benefits over previous generations, but it's clear from our testing so far that going for DDR5 does perform better. Although the scalability of DDR5 is not as large in our testing as a user might first think, there's still much more room in terms of raw MT/s for manufacturers to eke out. The question is whether any of our usual software actually sees memory as a bottleneck these days.

We've got some additional vendor memory kits in for testing, which we'll put into a review early next year. Stay tuned for that. 
