Original Link: https://www.anandtech.com/show/13748/the-intel-xeon-w-3175x-review-28-unlocked-cores-2999-usd



Intel has always ensured that its high-end server platforms, where multiple CPUs can act as a single system, get the highest core count processors. These servers go into the most expensive deployments, which can absorb the cost of the most expensive silicon to produce. The consumer market, by contrast, is very price sensitive, so consumers get fewer cores. However, consumers have always asked for a way of getting all of those cores, preferably in an overclockable chip, at a more reasonable price. Intel has answered that call with the Xeon W-3175X. All 28 cores, all the time. This is our review.

Intel’s Biggest and Fastest Chip Ever

The Xeon W-3175X is a behemoth processor. Using Intel’s biggest x86 Skylake silicon design, it has a full 28 cores and 56 threads. These cores are rated at a 3.1 GHz base frequency, with the chip having a peak turbo frequency of 4.5 GHz. These cores are fed with six channels of DDR4-2666 memory, and are supported by 44 PCIe 3.0 lanes for add-in cards. All of this is rated at a 255W thermal design power.

Intel at 28 Cores
AnandTech      Cores    Base Freq  Turbo Freq  PCIe 3.0  DRAM       TDP
Xeon W-3175X   28 (56)  3.1 GHz    4.5 GHz     44 lanes  DDR4-2666  255 W

Normally Intel will cite the price for a processor as the ‘tray’ price: the price it gives to system integrators that buy batches of 1000 CPUs. As a result, the price at retail for customers buying off the shelf was often 5-15% higher, depending on retailer margins and demand. For this product, Intel is actually providing an RCP or ‘recommended consumer price’ of $2999, rather than a tray price. Does this mean Intel doesn’t expect to sell 1000 units to any individual OEM? Either way, this is well below the $8k that OEMs were originally quoted when the product was first discussed under NDA, and below the $4k prices predicted by retailers in the last few weeks.

The new CPU will require an LGA3647 motherboard, of which only two will be validated: ASUS’ Dominus Extreme, and an as-yet-unnamed GIGABYTE product. As the chip is a Xeon processor, it supports both consumer grade memory and registered memory (RDIMMs) with ECC support, up to 512GB of RDIMMs.

Normally a Xeon processor is a locked processor, however for this model Intel has unlocked the multiplier. This allows users to adjust the frequencies of the cores for normal code, for AVX2 code, and for AVX512 code. It is worth noting that this processor is rated at 3.1 GHz for 255W TDP, so trying to push 4.5 GHz across all cores will start to draw some serious power. This is why the ASUS motherboard has 32 phases, and we were provided with a 1600W power supply with our review sample. More on that in the following pages.

Aiming to Retake the CPU Crown

There is competition for the Xeon W-3175X from both Intel’s own product stack as well as AMD.

On the team blue side, the 28-core Xeon W-3175X is the bigger brother to the recently released $2000 18-core Core i9-9980XE, which has fewer cores and lower frequencies, but also a much lower TDP. By comparison, Intel also has a 14-core Core i9-9990XE, an odd hybrid of fewer cores but a higher frequency, supposedly offering 5.0 GHz on all cores. That processor is being sold at auction to OEMs only, with no warranty, so the odds of seeing one in the wild are very slim.

Intel also has its server parts to offer as competition. The analogue to the new W-3175X is the Xeon Platinum 8180, Intel’s most expensive processor to date. This processor has 28 cores, like the W-3175X, but runs at a lower power (205W) and lower frequencies (2.5-3.8 GHz), while supporting up to eight sockets; the W-3175X is a single socket processor. With a tray price of $10k, the Xeon Platinum is a lot more expensive, but the markets that buy those processors very easily amortize the cost over the lifetime of the servers. Unfortunately we were not able to secure one to compare against the W-3175X, however we are trying to get one for a future review.

Comparison to the Xeon W-3175X
Xeon       Core       Xeon                      TR        TR        EPYC
W-3175X    i9-9980XE  8180      AnandTech       2950X     2990WX    7601
28 (56)    18 (36)    28 (56)   Cores           16 (32)   32 (64)   32 (64)
3.1 GHz    3.0 GHz    2.5 GHz   Base Freq.      3.5 GHz   3.0 GHz   2.2 GHz
4.5 GHz    4.5 GHz    3.8 GHz   Turbo Freq.     4.4 GHz   4.2 GHz   3.2 GHz
44         44         48        PCIe            64        64        128
255 W      165 W      205 W     TDP             180 W     250 W     180 W
6 x 2666   4 x 2666   6 x 2666  DDR4            4 x 2933  4 x 2933  8 x 2666
$2999      $1999      $10009    Price           $899      $1799     $4200
~$1500     ~$300      ~$600     1P MB Price     ~$300     ~$300     ~$600
1P         1P         1-8P      Multi-Socket    1P        1P        2P

While Intel moved from 18 cores to 28 cores on its consumer line, AMD released its 32-core processor for the high-end desktop market back in August 2018. The $1799 Threadripper 2990WX uses AMD’s multi-die strategy to pair four Zeppelin silicon dies, each with eight cores, to offer a total of 32 cores, 64 threads, and four memory channels. This processor has a 250W TDP, similar to the 255W of the W-3175X, but a slightly lower set of frequencies at 3.0-4.2 GHz, and it uses AMD’s Zen core, which has a previous-generation level of clock-for-clock performance. This processor also has 64 PCIe lanes, compared to 44. The Threadripper is some $1200 cheaper, widely available, and has over a dozen motherboards to choose from. This will be an interesting comparison.

On AMD’s server side, the nearest comparison point is the EPYC 7601. The design of this processor is similar to the Threadripper, but it offers 128 PCIe 3.0 lanes, eight memory channels, and support for 2TB of DDR4 per processor. The rated power is lower (180W) and so the frequencies are lower (2.2-3.2 GHz), but it supports dual socket configurations. Accordingly, the EPYC 7601 is listed at $4200.


Top Row: Opteron 6127 (G34), Opteron 180 (S939), Threadripper (TR4), Duron 900 (S462)
Middle Row: Ryzen 2700X (AM4), Core i9-9900K (LGA1151v2)
Bottom Row: EPYC 7551 (SP3), Xeon W-3175X (LGA3647), i9-9980XE (LGA2066), i7-930 (LGA1366)

Almost every one of the W-3175X’s comparison points wins on at least one functional specification, yet on paper each is expected to lose to it in performance. For this review, we’re directly comparing the new W-3175X against the Core i9-9980XE and the Threadripper 2990WX, with follow-up reviews for the EPYC and Xeon Platinum.

How Intel Makes This Chip

The way that Intel creates its enterprise processor portfolio has been relatively consistent for several generations – it builds three processors with different core counts to cover every server market. At present it has the Xeon Scalable ‘Low Core Count’ (LCC) design which goes up to 10 cores, a Xeon Scalable ‘High Core Count’ (HCC) design that goes up to 18 cores, and a Xeon Scalable ‘eXtreme Core Count’ (XCC) design that goes up to 28 cores. Each design can have cores disabled (through yield or product placement) to create a variety of core counts, power targets, or cache amounts.

In the latest server generation, based on the Skylake-SP microarchitecture, these three chips make up Intel’s entire Xeon Scalable platform. The Xeon Bronze and Silver processors are mostly LCC parts capable of up to dual socket systems, Xeon Gold is mostly HCC with some cut-down XCC good for 2-4 socket systems, and Xeon Platinum is mostly XCC for up to 8 socket systems.

In order to supply Intel’s ‘High-End Desktop’ (HEDT) market of enthusiasts and prosumers that want a workstation but not a server, Intel brings these same designs down the stack. For consumer parts, Intel removes ECC support but enables overclocking; some of the frequencies are adjusted, and all processors are limited to single socket implementations.

In the past, Intel only used to bring the LCC processor to these ‘price sensitive’ markets. For the Skylake platform, Intel brought its 18-core HCC processors down to consumers as well, helping the company to compete against AMD’s consumer offerings. Now, with the W-3175X, Intel is bringing that XCC design into the hands of enthusiasts and prosumers.

For a number of generations, many enthusiasts have requested Intel’s highest core count processor in an overclockable format. When the previous generation processors were launched, one of Intel’s employees at the time polled the community on Twitter about their opinions for this part – and it was a strong ‘yes, please!’. The only downside here is that releasing a consumer high-core count processor, unlocked for maximum frequency, had the potential to eat into Intel’s Xeon server market. Without a consumer equivalent, users that wanted the top core count processor had to invest in the server version at a much higher cost. It would appear that Intel is now ready to make that gamble.

Intel’s Rocky Road to the Xeon W-3175X

At Computex 2018, Intel first demoed this new processor. The company used its keynote to demonstrate ‘a 28-core processor running at 5.0 GHz, coming in Q4’. This in itself is an astounding feat; however, it became apparent very quickly that Intel was using additional cooling methods to make this happen.

Using the GIGABYTE motherboard and a 1700W industrial grade water chiller, Intel had hidden the fact that it needed extreme measures to overclock this hard. We saw the water chiller setup the next day, and Intel later clarified that it had intended to say on stage that the processor was overclocked, but the speaker forgot to do so during the presentation.

We were later told by other sources that this chip was not even a Skylake-based processor, however we cannot confirm that report.

Nonetheless, Intel officially announced the naming of the new processor and the specifications at its Fall PC event in October. Some details were kept under wraps, such as the price, but the ASUS motherboard was also on display and we were told to expect it at retail in Q4.

Q4 came and went, without much of a peep from Intel. The company did not even acknowledge the launch in its presentations at CES. However we did see a number of system builds using the ASUS motherboard and awesome liquid cooling setups from ASUS, Phanteks, and Digital Storm at the show. It was clear that the launch was close, and within a few days of coming back from CES, one of Intel’s pre-built systems for product reviews arrived on our doorstep. We’ll be detailing that system on the next page.

How The W-3175X Will Be Sold

When we think of servers or workstations, almost every one sold in the market will have been built by an OEM or a system integrator (SI) for the customer. It is up to the OEM or SI to enable the system based on power consumption, thermal limits, and customer requirements. The consumer market is different, in that there is a mix of pre-built systems and do-it-yourself systems where users build their own PC after buying the components individually.

Because of the extreme power nature of this processor, Intel is taking the view that it should only be sold by OEMs and SIs that have the wherewithal to cool it properly and to provide technical support. As a result, users that want this chip will have to invest in a pre-built system.

Users might remember that a similar processor, the 220W AMD FX-9590, was sold in this way – it wasn’t until twelve months later that we actually saw retail boxes with just the processor. By contrast, today AMD happily sells 250W Threadripper processors off the shelf at major retailers.

With all this being said, even going for an OEM system, it might be difficult to get one. Based on rumors flying around at CES, we were told by various sources that Intel only intends to make around 1500 of these W-3175X processors, worldwide. This might explain why Intel gives a consumer price, rather than a 1k unit tray price. We were also told that even though there are two motherboard manufacturers making motherboards, one of them only has plans to make a single run of 500 retail boards for OEMs, with the other expected to make up the deficit. The reason for this was simple: ‘Intel only ordered 500 from us’. These motherboards are expected to be around $1500 apiece, but I still wonder if ASUS/GIGABYTE will break even designing these products.

This Review

In this review, we are going to take a look at the Xeon W-3175X processor in our benchmark suite. Our main comparison points are the consumer competition: Intel’s own Core i9-9980XE, and AMD’s Threadripper 2990WX (and 2950X). We will follow up with later reviews comparing the Xeon W-3175X to both AMD EPYC and Xeon Scalable. We also have some power analysis and a quick look at overclocking, with the latter likely getting a dedicated article as well. We have had the system for less than a week, which has limited what we can do.

On the next page, I want to go over the system that Intel sent us for review. Then we’ll go into the benchmarks and data. Due to the unique way that ASUS runs its motherboards, we actually have two sets of data for the chip, one on Intel specifications, and one with Multi-Core Enhancement enabled. This is somewhat related to our TDP discussions previously, however I will cover this in the Power Analysis section of the review.

 

Pages In This Review

  1. Intel Xeon W-3175X Detailed
  2. Intel’s Prebuilt Test System: A $7000 Build
  3. Power Consumption
  4. Test Bed and Setup
  5. 2018 and 2019 Benchmark Suite: Spectre and Meltdown Hardened
  6. CPU Performance: System Tests
  7. CPU Performance: Rendering Tests
  8. CPU Performance: Office Tests
  9. CPU Performance: Encoding Tests
  10. CPU Performance: Web and Legacy Tests
  11. Gaming: World of Tanks enCore
  12. Gaming: Final Fantasy XV
  13. Gaming: Shadow of War
  14. Gaming: Civilization 6
  15. Gaming: Ashes Classic
  16. Gaming: Strange Brigade
  17. Gaming: Grand Theft Auto V
  18. Gaming: Shadow of the Tomb Raider
  19. Gaming: F1 2018
  20. Conclusions and Final Words


Intel’s Prebuilt Test System: A $7000 Build

How we receive test units for review has varied greatly over the years. The company providing the review sample has a range of choices and hands-on solutions.

For a regular run of the mill launch, such as Kaby Lake/Coffee Lake/Coffee Lake gen 2, which are second generation launches on the same mature platform as the last generation, we get just the CPU and a set of ‘expected test result notes’ to help guide our testing. The reviewers are expected to know how to use everything, and the vendor has confidence in the reviewer’s analysis. This method allows for the widest range of sampling and the least work at the vendor level, although it relies on the journalist having the relevant contacts with motherboard and memory companies as well as the ability to apply firmware updates as needed.

For important new launches, such as Ryzen and AM4, or Threadripper and TR4, or Skylake-X and X299, the vendor supplies the CPU(s), a motherboard, a memory kit, and a suitable CPU cooler. Sometimes there’s a bit of paper from the FAE tester confirming the set worked together through some basic stress tests. This puts less work in the hands of the reviewer, knowing that none of the kit should be dead on arrival and that it should at least get to the OS without issue.

For unique launches, where only a few samples are being distributed, or there is limited mix-and-match support ready for day one, the option is the full system sample. This means case, motherboard, CPU, CPU cooler, memory, power supply, graphics card, and storage are all shipped as one, sometimes directly from a system integrator partner, but with the idea that the system has been pre-built, pre-tested, and ready to go. This should give the reviewer the least amount of work to do (in practice it’s usually the opposite), but it puts a lot of emphasis on the vendor to plan ahead, and limits the scope of sampling. It is also the most expensive option for the vendor to implement, but usually the tradeoff is perceived as worth it.

Usually we deal with options one or two for every modern platform to date. Option three is only ever taken if the CPU vendor aims to sell the processor to OEMs and system integrators (SI) only. This is what Intel has done with the Xeon W-3175X, however they built the systems internally rather than outsourcing. After dispatch from the US to the UK, via the Netherlands, an 80 lb (36 kg) box arrived on my doorstep.

This box was huge. I mean, I know the motherboard is huge, I’ve seen it in the flesh several times, but Intel also went and super-sized the system too. This box was 33 inches tall (84 cm), and inside that was a set of polystyrene spacers for the actual box for the case, which again also had polystyrene spacers. Double spacey.

Apologies for taking these photos in my kitchen – it is literally the only room in my flat in which I had enough space to unbox this thing. Summer wanted to help, and got quite vocal.

The case being used is the Anidees AI Crystal XL AR, listed on the company’s website as ‘all the space you need for your large and heavy loaded components’, including support for HPTX, XL-ATX, E-ATX, and EEB sized motherboards, along with a 480mm radiator on top and a 360mm radiator on front, and comes with five 120mm RGB fans as standard. It’s a beast, surrounded with 5mm tempered glass on every side that needs it.

The case IO has a fan control switch (which didn’t work), two audio jacks, an LED power button, a smaller LED reset button, two USB 3.0 Type-A ports, and two USB 2.0 Type-A ports. These were flush against the chassis, making for a very straight-edged design.

This picture might show you how tall it is. Someone at Intel didn’t install the rear IO plate, leaving an air gap; the system airflow, however, was designed for the rear of the chassis to be the intake and the front to be the exhaust. There are 10 PCIe slot gaps here, along with two vertical ones for users that want to mount a card that way. There is sufficient ‘case bezel’ on all sides, unlike some smaller cases that minimize this.

Users may note the power supply has an odd connector. This is a C19 connector, usually used for high-wattage power supplies, and Intel supplied a power cable strapped to the box.

This bad boy is thick. It is a US cable, and the earth pin is so large that it would only fit in one of my adaptors; even nudging the cable caused the machine to restart, so I bought a UK cable that worked great. The unit seems designed for the lower-voltage US market: it has to be able to deliver up to 13A of current on a 120V line, or potentially more, and is built as such. It is obviously recommended that no socket extenders are used, and that this goes directly into the wall.


About to take the side panels off. This little one wants to play.

Both of the tempered glass side panels are held on by nine thumb screws each, which sit on rubber stands on the inside of the case. Unscrewing these was easy enough to do, however it’s one of the slowest ways to open a case I’ve ever come across.

Now inside the system at hand. The LGA3647 socket holds the Xeon W-3175X processor, which is capped with an Asetek 690LX-PN liquid cooler specifically designed for the workstation market. This goes to a 360mm liquid cooling radiator, paired with three high power (I’m pretty sure they’re Delta) fans that sound like a jet engine above 55ºC.

Intel half populated the memory with 8GB Samsung DDR4-2666 RDIMMs, making for a total of 48 GB of memory, which is likely the lowest configuration one of these CPUs will ever be paired with. The graphics card is a GIGABYTE GTX 1080, specifically the GV-N1080TTOC-8GD, which requires one 8-pin power connector.

We have detailed the motherboard, the ASUS Dominus Extreme, in previous coverage; however, it is worth noting that the big thing at the top of this motherboard is actually the heatsink for the 32-phase VRM. It’s a beast. Here is an ASUS build using this motherboard with a liquid cooler on the CPU and VRM:


The build at ASUS’ suite at CES 2019

There’s a little OLED display to the left, which is a full color display useful for showing BIOS codes and CPU temperatures when in Windows. When the system is off, it goes through a short 15 second cycle with the logo:

I’m pretty sure users can put their own gifs (perhaps within some limits) on the display during usual run time using ASUS software.

The rear of the case is quite neat, showing part of the back of the motherboard and the fan controller. At the bottom we have an EVGA 1600W T2 80PLUS Titanium power supply, which is appropriate for this build. Unfortunately Intel only supplied the cables that they actually used with the system, making it difficult to expand to multiple GPUs, which is what a system like this would ultimately end up with.

For storage, Intel provided an Optane 905P 480GB U.2 drive, which unfortunately had so many issues with the default OS installation (and then failing my own OS installation) that I had to remove it and debug it another day. Instead I put in my own Crucial MX200 1TB SATA SSD which we normally use for CPU testing and installed the OS directly on that. ASUS has a feature in the BIOS that will automatically push a software install to initiate driver updates without the need for a driver DVD – this ended up being very helpful.

Overall, the system cost is probably on the order of $7000:

Intel Reference System
Item          Component                          List Price
CPU           Intel Xeon W-3175X                 $2999
CPU Cooler    Asetek 690LX-PN                    $260
Motherboard   ASUS Dominus Extreme               $1500 ?
Memory        6 x 8GB Samsung DDR4-2666 RDIMM    $420
Storage       Intel Optane 905P 480 GB U.2       $552
Video Card    GIGABYTE GTX 1080 OC 8GB           $550
Chassis       Anidees AI Crystal XL AR           $300
Power Supply  EVGA 1600W T2 Titanium             $357
Total                                            $6938

However, this is with a minimum amount of memory, only one GTX 1080, and a mid-sized U.2 drive. If we add in liquid cooling, a pair of RTX 2080 Ti graphics cards, 12x16GB of DDR4, and some proper storage, the price could easily creep over $10k-$12k, then add on the system builder additions. The version of this system we saw at the Digital Storm booth at CES, the Corsa, was around $20k.



Power Consumption and Overclocking

When Intel did a little demo at Computex 2018, with 28 cores all running at 5.0 GHz, we eventually found out that the system needed a 1700W water chiller to stay cool. Even at that point, people were wondering exactly how much power this CPU would draw. Later in the year, Intel declared that the newly named Xeon W-3175X would be rated at 3.1 GHz for a 255W TDP. That makes it the highest TDP Intel has ever assigned to a non-server focused processor. Just don’t ignore the fact that it has a 3.8 GHz all-core turbo frequency, which will push power well beyond that 255W TDP.

Speaking with Intel before this review, we were given the two ‘power limits’ it defines for this processor: the PL1 or ‘sustained’ power limit, at 255W, and the PL2 or ‘turbo’ power limit, at 510W. Normally Intel sets the PL2 only 25% higher than PL1, but this time around it is a full 100% higher. Ouch.

This is only a limit though – processors can (and have) run well below this power limit, so we actually need to do some testing.
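To make the mechanism concrete, here is a minimal Python sketch of how a PL1/PL2 scheme bounds package power. It assumes a simple exponentially-weighted average over an assumed one-second window; the real firmware behavior (including the actual averaging window) is more involved, so treat this purely as an illustration.

    # Minimal sketch of PL1/PL2 power limiting. The one-second window (TAU)
    # is an assumption for illustration; firmware behavior is more complex.
    PL1, PL2 = 255.0, 510.0   # watts: sustained and turbo limits
    TAU, DT = 1.0, 0.01       # assumed averaging window and loop interval (s)

    def allowed_power(requested, avg):
        """Clamp instantaneous draw to PL2; fall back to PL1 once the
        sustained average exhausts the turbo budget."""
        power = min(requested, PL2)
        return min(power, PL1) if avg > PL1 else power

    avg = 0.0
    for _ in range(1000):               # simulate 10 seconds of a 450W request
        p = allowed_power(450.0, avg)
        avg += (p - avg) * (DT / TAU)   # exponentially-weighted moving average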

Per Core Turbos

As always with new Intel processors, we ask the company how the turbo ratios change as more cores are loaded. Intel used to give this information out freely, but in recent consumer launches no longer offers it, despite it being available directly from the chip if you have one to put in a system. As a result, we have the following turbo values:

Intel Per Core Turbo Values (SSE)
Cores            2        4        8        16       18       24       28
Xeon W-3175X     4.3 GHz  4.1 GHz  4.0 GHz  4.0 GHz  4.0 GHz  4.0 GHz  3.8 GHz
Core i9-7980XE   4.4 GHz  4.0 GHz  3.9 GHz  3.5 GHz  3.4 GHz  -        -

The top turbo frequency is 4.3 GHz, which drops to 4.0 GHz by the time eight cores are loaded. That frequency is kept all the way until more than 24 cores are loaded, where it sits at 3.8 GHz. With these big chips, a system usually needs either a few cores or all the cores, so expect to sit around 3.8-4.0 GHz most of the time.
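As noted above, these ratios are available directly from the chip. A hedged sketch of reading them on Linux follows; it assumes the ‘msr’ kernel module is loaded (and root access), and on Skylake-SP parts the core-count buckets attached to each ratio live in a companion register, so this is illustrative rather than definitive.

    # Read MSR_TURBO_RATIO_LIMIT (0x1AD) via the Linux msr interface.
    # Each byte is a turbo ratio (x 100 MHz); which core counts each byte
    # maps to varies by product, so the field labels here are assumptions.
    import struct

    MSR_TURBO_RATIO_LIMIT = 0x1AD

    with open("/dev/cpu/0/msr", "rb") as f:
        f.seek(MSR_TURBO_RATIO_LIMIT)
        value = struct.unpack("<Q", f.read(8))[0]

    for field in range(8):
        ratio = (value >> (8 * field)) & 0xFF
        print(f"ratio field {field}: {ratio * 100} MHz")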

Intel did give us all-core ratios for AVX2 and AVX512 as well, at 3.2 GHz and 2.8 GHz respectively, however the ASUS motherboard we used had other ideas, setting these values at 3.5 GHz and 3.4 GHz which it said was ‘Intel POR (specification)’.

If you want to read our discussion on what Intel’s TDP values actually mean, here’s a handy guide we wrote late last year.

The ASUS BIOS: The Key to Power and Overclocking

One of the issues stemming from last year’s high-powered CPU reviews was the matter of Intel specifications. Simply put, while Intel has a list of suggested values for certain settings, motherboard manufacturers can, and often do, do what they want for consumer systems, including lots of turbo, higher power consumption, and higher-than-expected defaults. Motherboard vendor features like Multi-Core Acceleration and Multi-Core Turbo are sometimes enabled by default, making testing a chip all the more tricky – should we test out-of-the-box performance, or Intel specification performance (which isn’t always fixed anyway)?

For this new platform, ASUS has made it simpler, yet more confusing. They are still using the Multi-Core Enhancement option in their BIOS, or MCE for short, however the way it works has changed.

It offers two modes: Disabled, or Auto. When in Disabled mode, it puts all the options in ‘Intel POR’ mode, or Intel’s recommended settings. This includes voltages, frequencies, current limits, and removes all of ASUS’ independent tweaks for stability and performance. When in Auto mode, it opens up the power limits and the current limits, and sets the system up for overclocking. It doesn’t actually change any of the frequencies of the system, but just opens a few doors.

We spoke with Intel about this. They said ‘we recommend Intel specifications’, however despite this the company sent me this system with ASUS’ additional tweaks and geared for overclocking. If that isn’t confusing, I don’t know what is.

ASUS’ MCE setting, among other things, makes two very important changes:

  1. Changes the maximum temperature from 85ºC to 110ºC
  2. Changes the reported current

The first change gives the CPU some headroom before the system thermally throttles. Most Intel CPUs have a temperature limit of 95ºC, however this chip has a limit of 120ºC, so this can make a lot of sense, especially as a system ages and dust gets everywhere, reducing performance.

The second change might seem a little odd. Why does the reported current need changing? The issue here is that for the firmware, the underlying Intel system is relying on some older reporting code when dealing with current limits. In order for this high current processor to not be automatically throttled by this code, a divider is put in place.

Intel’s ‘recommended’ divider is 1.28, however ASUS’ tweaked setting puts this divider at 4, which opens up some headroom for overclocking. One of the downsides is that it causes confusion for any software that reports power numbers, such as Intel’s Power Gadget and AIDA64. (AI Suite automatically corrects for this.) ASUS states that when the setting is at 4, the actual power drawn by the processor is 2.25x the value it reports. Thus if the processor says 100W, it is actually drawing 225W. This corrective factor has been applied in all our subsequent graphs.
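For clarity, the correction we apply to logged numbers amounts to the following; the function name is our own, and the 2.25x factor is as stated by ASUS for the divider-of-4 setting.

    # Correct package power read from tools like Intel Power Gadget or AIDA64
    # when ASUS' current divider is set to 4 (per ASUS, real draw = 2.25x).
    REPORT_SCALE = 2.25

    def corrected_package_power(reported_watts, divider=4):
        # With Intel's recommended 1.28 divider the reading is taken as-is.
        return reported_watts * REPORT_SCALE if divider == 4 else reported_watts

    print(corrected_package_power(100.0))   # a reported 100 W is really 225 W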

Power Consumption

So here we go into actual hard power numbers. For this test we run our affinity scaling script to test the power consumption as we increase the threads. We’re using MCE enabled here, which doesn’t affect frequency but should allow for a full turbo, as we normally see on consumer processors.

At full all-core frequency in that higher power mode, we don’t reach 510W, but we certainly go well beyond 255W, scoring about 380W maximum. If we apply this to the Intel Spec version, and compare to other CPUs, we get the following:

Power (Package), Full Load

Overall, that’s a lot of power. But that’s what we expected, right? The cooling used on this system has an apparent rating of 500W, so we’re just about happy with that.

Overclocking

So how do you push the limits on a system where the limits are already being pushed? Easy, push harder – as long as you don’t break it.

We haven’t had time for a full run of our benchmark suite in overclocked mode yet, however we were able to record some results and some power values. The key part with chips like this is how we manage the AVX2 and AVX512 ratios – normally users just set an ‘all-core turbo’ to some value as an overclock, but for this chip the AVX ratios need to be systematically lower in order to keep the system stable, based on how much extra current those instructions need.

So starting with MCE enabled to open up the power limits, the current limits, and the temperature limits, I probed the standard all-core turbo and the AVX2 turbo separately. In each instance, I didn’t change any setting other than the CPU multiplier, increasing the values step by step. When the system booted, I ran Cinebench R15 for non-AVX and POV-Ray for AVX2, using Intel’s Power Gadget to take power, frequency, and temperature values.

Starting with non-AVX testing, I raised the frequency from 4.0 GHz up to 4.4 GHz. The benchmark result scaled from stock frequencies up to 4.3 GHz, however it was clear that we were hitting thermal limits as the sensor was reading 110ºC, which felt really uncomfortable. Here are the power traces for those tests, along with the score:

At 4.3 GHz, we were hitting almost 600W peak load (confirmed by wall meter), which is the limit of the cooling setup provided. Compared to the 4.0 GHz result, we calculated that the CPU actually used 17% more power overall to get a 7% increase in performance.

With AVX2, we started much lower, at 3.6 GHz, again raising the frequency by 100 MHz at a time and recording the POV-Ray run with our software tools.

Here the power is overall a bit lower, but we can see that the score isn’t rising much at 4.0 GHz, again due to our CPU temperature sensor showing 110ºC very easily. In this instance, the power consumption between 3.9 GHz and 3.6 GHz increased by 14%, while the score rose 10%.

Intel sent an EKWB Phoenix cooler which is rated for much higher power consumption, but arrived too late for our testing. We’re planning on doing an overclocking review, so this should help. But what our results show is that when Intel showed that 5.0 GHz demonstration using a water chiller they really did need it. Users might look into investing in one themselves if they want this chip.

But What About That 5.0 GHz? How Much Power?

We took some of our benchmark values for power and frequency, extrapolated them with a power curve, and we estimate that at 5.0 GHz, this chip is likely to be drawing in excess of 900W, perhaps as high as 1200W. Yes, Intel really did need that 1700W water chiller.
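For the curious, the extrapolation works along these lines: fit a power-law curve P = a·f^b to measured (frequency, power) points on a log-log scale, then evaluate it at 5.0 GHz. The data points in this sketch are stand-ins of the right magnitude, not our exact measurements.

    # Fit P = a * f^b to (frequency, package power) samples and extrapolate.
    # The sample values below are illustrative, not our measured data.
    import numpy as np

    freq = np.array([3.8, 4.0, 4.1, 4.2, 4.3])    # GHz
    power = np.array([380, 475, 520, 560, 600])   # watts

    b, log_a = np.polyfit(np.log(freq), np.log(power), 1)
    print(f"estimated draw at 5.0 GHz: {np.exp(log_a) * 5.0 ** b:.0f} W")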



Test Bed and Setup

As per our processor testing policy, we take a premium category motherboard suitable for the socket, and equip the system with a suitable amount of memory running at the manufacturer's maximum supported frequency. This is also typically run at JEDEC subtimings where possible. It is noted that some users are not keen on this policy, stating that sometimes the maximum supported frequency is quite low, or faster memory is available at a similar price, or that the JEDEC speeds can be prohibitive for performance. While these comments make sense, ultimately very few users apply memory profiles (either XMP or other) as they require interaction with the BIOS, and most users will fall back on JEDEC supported speeds - this includes home users as well as industry who might want to shave off a cent or two from the cost or stay within the margins set by the manufacturer. Where possible, we will extend our testing to include faster memory modules either at the same time as the review or at a later date.

We changed Intel's reference system slightly from what they sent us, for parity. We swapped out the storage for our standard SATA drive (mostly due to issues with the Optane drive supplied), and put in our selection of GPUs for testing.

Xeon W-3175X System As Tested
Item          Component
CPU           Intel Xeon W-3175X
CPU Cooler    Asetek 690LX-PN
Motherboard   ASUS Dominus Extreme
Memory        6 x 8GB Samsung DDR4-2666 RDIMM
Storage       Crucial MX200 1TB
Video Card    Sapphire RX 460 2GB (CPU tests)
              MSI GTX 1080 Gaming 8GB (gaming tests)
Chassis       Anidees AI Crystal XL AR
Power Supply  EVGA 1600W T2 Titanium

Other systems tested followed our usual testing procedure.

Test Setups
Platform    CPUs                    Motherboard              BIOS   Cooler               Memory
Intel HEDT  i9-9980XE, i9-7980XE    ASRock X299 OC Formula   P1.40  TRUE Copper          Crucial Ballistix 4x4GB DDR4-2666
AMD TR4     TR2 2970WX, TR2 2920X   ASUS ROG X399 Zenith     1501   Enermax Liqtech TR4  Corsair Vengeance RGB Pro 4x8GB DDR4-2933
AMD TR4     TR2 2990WX, TR2 2950X   ASUS ROG X399 Zenith     0508   Enermax Liqtech TR4  G.Skill FlareX 4x8GB DDR4-2933
EPYC SP3    EPYC 7601               GIGABYTE MW51-HP0        F1     Enermax Liqtech TR4  Micron LRDIMMs 8x128GB DDR4-2666

GPU   Sapphire RX 460 2GB (CPU tests); MSI GTX 1080 Gaming 8G (gaming tests)
PSU   Corsair AX860i / Corsair AX1200i
SSD   Crucial MX200 1TB
OS    Windows 10 x64 RS3 1709, Spectre and Meltdown patched
VRM   Supplemented with SST-FHP141-VF 173 CFM fans

 



Our New Testing Suite for 2018 and 2019

Spectre and Meltdown Hardened

In order to keep up to date with our testing, we have to update our software every so often to stay relevant. In our updates we typically implement the latest operating system, the latest patches, the latest software revisions, the newest graphics drivers, as well as add new tests or remove old ones. As regular readers will know, our CPU testing revolves around an automated test suite, and depending on how the newest software works, the suite either needs to change, be updated, have tests removed, or be rewritten completely. Last time we did a full re-write, it took the best part of a month, including regression testing (testing older processors).

One of the key elements of our testing update for 2018 (and 2019) is the fact that our scripts and systems are designed to be hardened for Spectre and Meltdown. This means making sure that all of our BIOSes are updated with the latest microcode, and all the steps are in place with our operating system with updates. In this case we are using Windows 10 x64 Enterprise 1709 with April security updates, which enforce Smeltdown (our combined name) mitigations. Users might ask why we are not running Windows 10 x64 RS4, the latest major update – this is due to some new features which are giving uneven results. Rather than spend a few weeks learning to disable them, we’re going ahead with RS3, which has been widely used.

Our previous benchmark suite was split into several segments depending on how the test is usually perceived. Our new test suite follows similar lines, and we run the tests based on:

  • Power
  • Memory
  • Office
  • System
  • Render
  • Encoding
  • Web
  • Legacy
  • Integrated Gaming
  • CPU Gaming

Depending on the focus of the review, the order of these benchmarks might change, or some may be left out of the main review. All of our data will reside in our benchmark database, Bench, for which there is a new ‘CPU 2019’ section for all of our new tests.

Within each section, we will have the following tests:

Power

Our power tests consist of running a substantial workload for every thread in the system, and then probing the power registers on the chip to find out details such as core power, package power, DRAM power, IO power, and per-core power. This all depends on how much information is given by the manufacturer of the chip: sometimes a lot, sometimes not at all.

We are currently running POV-Ray as our main test for Power, as it seems to hit deep into the system and is very consistent. In order to limit the number of cores for power, we use an affinity mask driven from the command line.
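A rough sketch of that scaling idea is below; the POV-Ray invocation is a placeholder for whatever the workload binary expects, and psutil’s cross-platform affinity call stands in for the command-line mask we actually drive.

    # Launch a heavy workload pinned to the first N logical cores, stepping
    # N up between runs while power registers are sampled. The povray
    # invocation here is a placeholder, not our exact command line.
    import subprocess
    import psutil

    for n in (1, 2, 4, 8, 16, 28):
        proc = subprocess.Popen(["povray", "--benchmark"])
        psutil.Process(proc.pid).cpu_affinity(list(range(n)))  # pin to N cores
        proc.wait()
        # ...sample core/package power registers while each run is in flight...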

Memory

These tests involve disabling all turbo modes in the system, forcing it to run at base frequency, and then implementing both a memory latency checker (Intel’s Memory Latency Checker works equally well for both platforms) and AIDA64 to probe cache bandwidth.

Office

  • Chromium Compile: Windows VC++ Compile of Chrome 56 (same as 2017)
  • PCMark10: Primary data will be the overview results – subtest results will be in Bench
  • 3DMark Physics: We test every physics sub-test for Bench, and report the major ones (new)
  • GeekBench4: By request (new)
  • SYSmark 2018: Recently released by BAPCo, currently automating it into our suite (new, when feasible)

System

  • Application Load: Time to load GIMP 2.10.4 (new)
  • FCAT: Time to process a 90 second ROTR 1440p recording (same as 2017)
  • 3D Particle Movement: Particle distribution test (same as 2017) – we also have AVX2 and AVX512 versions of this, which may be added later
  • Dolphin 5.0: Console emulation test (same as 2017)
  • DigiCortex: Sea Slug Brain simulation (same as 2017)
  • y-Cruncher v0.7.6: Pi calculation with optimized instruction sets for new CPUs (new)
  • Agisoft Photoscan 1.3.3: 2D image to 3D modelling tool (updated)

Render

  • Corona 1.3: Performance renderer for 3dsMax, Cinema4D (same as 2017)
  • Blender 2.79b: Render of bmw27 on CPU (updated to 2.79b)
  • LuxMark v3.1 C++ and OpenCL: Test of different rendering code paths (same as 2017)
  • POV-Ray 3.7.1: Built-in benchmark (updated)
  • CineBench R15: Older Cinema4D test, will likely remain in Bench (same as 2017)

Encoding

  • 7-zip 1805: Built-in benchmark (updated to v1805)
  • WinRAR 5.60b3: Compression test of directory with video and web files (updated to 5.60b3)
  • AES Encryption: In-memory AES performance. Slightly older test. (same as 2017)
  • Handbrake 1.1.0: Logitech C920 1080p60 input file, transcoded into three formats for streaming/storage:
    • 720p60, x264, 6000 kbps CBR, Fast, High Profile
    • 1080p60, x264, 3500 kbps CBR, Faster, Main Profile
    • 1080p60, HEVC, 3500 kbps VBR, Fast, 2-Pass Main Profile

Web

  • WebXPRT3: The latest WebXPRT test (updated)
  • WebXPRT15: Similar to 3, but slightly older. (same as 2017)
  • Speedometer2: Javascript Framework test (new)
  • Google Octane 2.0: Deprecated but popular web test (same as 2017)
  • Mozilla Kraken 1.1: Deprecated but popular web test (same as 2017)

Legacy (same as 2017)

  • 3DPM v1: Older version of 3DPM, very naïve code
  • x264 HD 3.0: Older transcode benchmark
  • Cinebench R11.5 and R10: Representative of different coding methodologies

Linux (when feasible)

When in full swing, we wish to return to running LinuxBench 1.0. This was in our 2016 test, but was ditched in 2017 as it added an extra complication layer to our automation. By popular request, we are going to run it again.

Integrated and CPU Gaming

We have recently automated around a dozen games at four different performance levels. A good number of games will have frame time data, however due to automation complications, some will not. The idea is that we get a good overview of a number of different genres and engines for testing. So far we have the following games automated:

AnandTech CPU Gaming 2019 Game List
Game                        Genre                Release Date  API          IGP             Low             Med              High
World of Tanks enCore       Driving / Action     Feb 2018      DX11         768p Minimum    1080p Medium    1080p Ultra      4K Ultra
Final Fantasy XV            JRPG                 Mar 2018      DX11         720p Standard   1080p Standard  4K Standard      8K Standard
Shadow of War               Action / RPG         Sep 2017      DX11         720p Ultra      1080p Ultra     4K High          8K High
F1 2018                     Racing               Aug 2018      DX11         720p Low        1080p Med       4K High          4K Ultra
Civilization VI             RTS                  Oct 2016      DX12         1080p Ultra     4K Ultra        8K Ultra         16K Low
Car Mechanic Simulator '18  Simulation / Racing  July 2017     DX11         720p Low        1080p Medium    1440p High       4K Ultra
Ashes: Classic              RTS                  Mar 2016      DX12         720p Standard   1080p Standard  1440p Standard   4K Standard
Strange Brigade*            FPS                  Aug 2018      DX12/Vulkan  720p Low        1080p Medium    1440p High       4K Ultra
Shadow of the Tomb Raider   Action               Sep 2018      DX12         720p Low        1080p Medium    1440p High       4K Highest
Grand Theft Auto V          Open World           Apr 2015      DX11         720p Low        1080p High      1440p Very High  4K Ultra
Far Cry 5                   FPS                  Mar 2018      DX11         720p Low        1080p Normal    1440p High       4K Ultra
*Strange Brigade is run in DX12 and Vulkan modes

For our CPU Gaming tests, we will be running on an NVIDIA GTX 1080. For the CPU benchmarks, we use an RX460 as we now have several units for concurrent testing.

In previous years we tested multiple GPUs on a small number of games – this time around, due to a Twitter poll I did which turned out exactly 50:50, we are doing it the other way around: more games, fewer GPUs.

Scale Up vs Scale Out: Benefits of Automation

One comment we get every now and again is that automation isn’t the best way of testing – there’s a higher barrier to entry, and it limits the tests that can be done. From our perspective, despite taking a little while to program properly (and get it right), automation means we can do several things:

  1. Guarantee consistent breaks between tests for cooldown to occur, rather than variable cooldown times based on ‘if I’m looking at the screen’
  2. It allows us to simultaneously test several systems at once. I currently run five systems in my office (limited by the number of 4K monitors, and space) which means we can process more hardware at the same time
  3. We can leave tests to run overnight, very useful for a deadline
  4. With a good enough script, tests can be added very easily

Our benchmark suite collates all the results and spits them out to a central storage platform as the tests are running, which I can probe mid-run to update data as it comes through. This also acts as a sanity check in case any of the data is abnormal.

We do have one major limitation, and that rests on the side of our gaming tests. We are running multiple tests through one Steam account, some of which (like GTA) are online only. As Steam only lets one system play on an account at once, our gaming script probes Steam’s own APIs to determine if we are ‘online’ or not, and to run offline tests until the account is free to be logged in on that system. Depending on the number of games we test that absolutely require online mode, it can be a bit of a bottleneck.
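As a hedged sketch of that idea (not our actual script), the Steam Web API’s GetPlayerSummaries call reports a persona state that can gate online-only runs; the API key and Steam ID below are placeholders.

    # Poll the Steam Web API and wait until the shared account shows as
    # offline before starting an online-only game test. Key/ID are placeholders.
    import time
    import requests

    URL = "https://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/"

    def account_free(api_key, steam_id):
        r = requests.get(URL, params={"key": api_key, "steamids": steam_id})
        player = r.json()["response"]["players"][0]
        return player.get("personastate", 0) == 0   # 0 = offline

    while not account_free("YOUR_API_KEY", "7656119XXXXXXXXXX"):
        time.sleep(60)   # run offline-capable tests in the meantime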

Benchmark Suite Updates

As always, we do take requests. It helps us understand the workloads that everyone is running and plan accordingly.

A side note on software packages: we have had requests for tests on software such as ANSYS, or other professional grade software. The downside of testing this software is licensing and scale. Most of these companies do not particularly care about us running tests, and state it’s not part of their goals. Others, like Agisoft, are more than willing to help. If you are involved in these software packages, the best way to see us benchmark them is to reach out. We have special versions of software for some of our tests, and if we can get something that works and is relevant to the audience, then we shouldn’t have too much difficulty adding it to the suite.



CPU Performance: System Tests

Our System Test section focuses significantly on real-world testing and user experience, with a slight nod to throughput. In this section we cover application loading time, image processing, simple scientific physics, emulation, neural simulation, optimized compute, and 3D model development, with a combination of readily available and custom software. For some of these tests, the bigger suites such as PCMark do cover them (we publish those values in our Office section), although multiple perspectives are always beneficial. In all our tests we will explain in-depth what is being tested, and how we are testing.

All of our benchmark results can also be found in our benchmark engine, Bench.

For our graphs, some of them have two values: a regular value in orange, and one in red called 'Intel Spec'. ASUS offers the option to 'open up' the power and current limits of the chip, so the CPU is still running at the same frequency but is not throttled. Despite Intel saying that they recommend 'Intel Spec', the system they sent to us to test was actually set up with the power limits opened up, and the results they provided for us to compare to internally also correlated with that setting. As a result, we're providing both sets of results for our CPU tests.

Application Load: GIMP 2.10.4

One of the most important aspects of user experience and workflow is how fast a system responds. A good test of this is to see how long it takes for an application to load. Most applications these days, when on an SSD, load fairly instantly, however some office tools require asset pre-loading before being available. Most operating systems employ caching as well, so when certain software is loaded repeatedly (web browser, office tools), it can be initialized much quicker.

In our last suite, we tested how long it took to load a large PDF in Adobe Acrobat. Unfortunately this test was a nightmare to program for, and didn’t transfer over to Win10 RS3 easily. In the meantime we discovered an application that can automate this test, and we put it up against GIMP, a popular free and open-source photo editing tool, and the major alternative to Adobe Photoshop. We set it to load a large 50MB design template, and perform the load 10 times with 10 seconds in-between each. Because of caching, the first 3-5 results are often slower than the rest, and the time to cache can be inconsistent, so we take the average of the last five results to show CPU processing on cached loading.
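The reported figure therefore reduces to something like the following sketch (the timings are illustrative, not real data):

    # Ten recorded load times: discard the first five (cache warm-up) and
    # average the rest to get the cached-load figure. Values are illustrative.
    load_times = [4.1, 3.2, 2.9, 2.5, 2.4, 2.3, 2.3, 2.4, 2.3, 2.3]  # seconds

    cached = load_times[5:]
    print(f"cached load average: {sum(cached) / len(cached):.2f} s")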

AppTimer: GIMP 2.10.4


FCAT: Image Processing

The FCAT software was developed to help detect microstuttering, dropped frames, and runt frames in graphics benchmarks when two accelerators were paired together to render a scene. Due to game engines and graphics drivers, not all GPU combinations performed ideally, which led to this software affixing colors to each rendered frame and recording the raw data dynamically using a video capture device.

The FCAT software takes that recorded video, which in our case is 90 seconds of a 1440p run of Rise of the Tomb Raider, and processes that color data into frame time data so the system can plot an ‘observed’ frame rate, and correlate that to the power consumption of the accelerators. This test, by virtue of how quickly it was put together, is single threaded. We run the process and report the time to completion.

FCAT Processing ROTR 1440p GTX980Ti Data


3D Particle Movement v2.1: Brownian Motion

Our 3DPM test is a custom built benchmark designed to simulate six different particle movement algorithms of points in a 3D space. The algorithms were developed as part of my PhD, and while they ultimately perform best on a GPU, they provide a good idea of how instruction streams are interpreted by different microarchitectures.

A key part of the algorithms is the random number generation – we use relatively fast generation which ends up implementing dependency chains in the code. The upgrade over the naïve first version of this code solved for false sharing in the caches, a major bottleneck. We are also looking at AVX2 and AVX512 versions of this benchmark for future reviews.

For this test, we run a stock particle set over the six algorithms for 20 seconds apiece, with 10 second pauses, and report the total rate of particle movement, in millions of operations (movements) per second. We have a non-AVX version and an AVX version, with the latter implementing AVX512 and AVX2 where possible.

3DPM v2.1 can be downloaded from our server: 3DPMv2.1.rar (13.0 MB)

3D Particle Movement v2.1


3D Particle Movement v2.1 (with AVX)


Dolphin 5.0: Console Emulation

One of the most popularly requested tests in our suite relates to console emulation. Being able to pick up a game from an older system and run it as expected depends on the overhead of the emulator: it takes a significantly more powerful x86 system to accurately emulate an older non-x86 console, especially if code for that console was made to abuse certain physical bugs in the hardware.

For our test, we use the popular Dolphin emulation software, and run a compute project through it to determine how our processors compare to a standard console. In this test, a Nintendo Wii would take around 1050 seconds.

The latest version of Dolphin can be downloaded from https://dolphin-emu.org/

Dolphin 5.0 Render Test


DigiCortex 1.20: Sea Slug Brain Simulation

This benchmark was originally designed for simulation and visualization of neuron and synapse activity, as is commonly found in the brain. The software comes with a variety of benchmark modes, and we take the small benchmark which runs a 32k neuron / 1.8B synapse simulation, equivalent to a Sea Slug.

Example of a 2.1B neuron simulation

We report the results as the ability to simulate the data as a fraction of real-time, so anything above a ‘one’ is suitable for real-time work. Out of the two modes, a ‘non-firing’ mode which is DRAM heavy and a ‘firing’ mode which has CPU work, we choose the latter. Despite this, the benchmark is still affected by DRAM speed a fair amount.

DigiCortex can be downloaded from http://www.digicortex.net/

DigiCortex 1.20 (32k Neuron, 1.8B Synapse)


y-Cruncher v0.7.6: Microarchitecture Optimized Compute

I’ve known about y-Cruncher for a while, as a tool to help compute various mathematical constants, but it wasn’t until I began talking with its developer, Alex Yee, a researcher from NWU and now software optimization developer, that I realized that he has optimized the software like crazy to get the best performance. Naturally, any simulation that can take 20+ days can benefit from a 1% performance increase! Alex started y-cruncher as a high-school project, but it is now at a state where Alex is keeping it up to date to take advantage of the latest instruction sets before they are even made available in hardware.

For our test we run y-cruncher v0.7.6 through the different optimized variants of the binary, including the AVX-512 optimized binaries, calculating 250m digits of Pi in both single-threaded and multi-threaded modes.

Users can download y-cruncher from Alex’s website: http://www.numberworld.org/y-cruncher/

y-Cruncher 0.7.6 Single Thread, 250m Digits
y-Cruncher 0.7.6 Multi-Thread, 250m Digits


Agisoft Photoscan 1.3.3: 2D Image to 3D Model Conversion

One of the ISVs that we have worked with for a number of years is Agisoft, who develop software called PhotoScan that transforms a number of 2D images into a 3D model. This is an important tool in model development and archiving, and relies on a number of single threaded and multi-threaded algorithms to go from one side of the computation to the other.

In our test, we take v1.3.3 of the software with a good sized data set of 84 x 18 megapixel photos and push it through a reasonably fast variant of the algorithms, which is still more stringent than our 2017 test. We report the total time to complete the process.

Agisoft’s Photoscan website can be found here: http://www.agisoft.com/

Agisoft Photoscan 1.3.3, Complex Test




CPU Performance: Rendering Tests

Rendering is often a key target for processor workloads, lending itself to a professional environment. It comes in different formats as well: from 3D rendering through rasterization, as in games, to ray tracing, and it invokes the ability of the software to manage meshes, textures, collisions, aliasing, physics (in animations), and discarding unnecessary work. Most renderers offer CPU code paths, while a few use GPUs and select environments use FPGAs or dedicated ASICs. For big studios however, CPUs are still the hardware of choice.

All of our benchmark results can also be found in our benchmark engine, Bench.

For our graphs, some of them have two values: a regular value in orange, and one in red called 'Intel Spec'. ASUS offers the option to 'open up' the power and current limits of the chip, so the CPU is still running at the same frequency but is not throttled. Despite Intel saying that they recommend 'Intel Spec', the system they sent to us to test was actually set up with the power limits opened up, and the results they provided for us to compare to internally also correlated with that setting. As a result, we're providing both sets of results for our CPU tests.

Corona 1.3: Performance Render

An advanced performance based renderer for software such as 3ds Max and Cinema 4D, the Corona benchmark renders a generated scene as a standard under its 1.3 software version. Normally the GUI implementation of the benchmark shows the scene being built, and allows the user to upload the result as a ‘time to complete’.

We got in contact with the developer who gave us a command line version of the benchmark that does a direct output of results. Rather than reporting time, we report the average number of rays per second across six runs, as the performance scaling of a result per unit time is typically visually easier to understand.

The Corona benchmark website can be found at https://corona-renderer.com/benchmark

Corona 1.3 Benchmark


Blender 2.79b: 3D Creation Suite

A high profile rendering tool, Blender is open-source allowing for massive amounts of configurability, and is used by a number of high-profile animation studios worldwide. The organization recently released a Blender benchmark package, a couple of weeks after we had narrowed our Blender test for our new suite, however their test can take over an hour. For our results, we run one of the sub-tests in that suite through the command line - a standard ‘bmw27’ scene in CPU only mode, and measure the time to complete the render.

Blender can be downloaded at https://www.blender.org/download/
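For reference, a scripted run of a single CPU frame can look roughly like this; the scene filename is a placeholder, and our exact command line may differ.

    # Render one frame of the bmw27 scene in background mode and time it.
    # The .blend filename is a placeholder for the benchmark scene file.
    import subprocess
    import time

    start = time.time()
    subprocess.run(["blender", "-b", "bmw27_cpu.blend", "-f", "1"], check=True)
    print(f"render time: {time.time() - start:.1f} s")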

Blender 2.79b bmw27_cpu Benchmark


LuxMark v3.1: LuxRender via Different Code Paths

As stated at the top, there are many different ways to process rendering data: CPU, GPU, Accelerator, and others. On top of that, there are many frameworks and APIs in which to program, depending on how the software will be used. LuxMark, a benchmark developed using the LuxRender engine, offers several different scenes and APIs.


Taken from the Linux Version of LuxMark

In our test, we run the simple ‘Ball’ scene on both the C++ and OpenCL code paths, but in CPU mode. This scene starts with a rough render and slowly improves the quality over two minutes, giving a final result in what is essentially an average ‘kilorays per second’.

LuxMark v3.1 C++


POV-Ray 3.7.1: Ray Tracing

The Persistence of Vision ray tracing engine is another well-known benchmarking tool, which was in a state of relative hibernation until AMD released its Zen processors, to which suddenly both Intel and AMD were submitting code to the main branch of the open source project. For our test, we use the built-in benchmark for all-cores, called from the command line.

POV-Ray can be downloaded from http://www.povray.org/
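As a sketch, the Unix builds of POV-Ray 3.7 accept a benchmark switch that can be timed from a script; Windows builds expose the benchmark differently, so treat this as an assumption-laden example rather than our exact invocation.

    # Time POV-Ray's built-in all-core benchmark (Unix-style invocation).
    import subprocess
    import time

    start = time.time()
    subprocess.run(["povray", "--benchmark"], check=True)
    print(f"benchmark wall time: {time.time() - start:.1f} s")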

POV-Ray 3.7.1 Benchmark


 



CPU Performance: Office Tests

The Office test suite is designed around more industry-standard tests that focus on office workflows, system meetings, and some synthetics, but we also bundle compiler performance in with this section. For users that have to evaluate hardware in general, these are usually the benchmarks that most people consider.

All of our benchmark results can also be found in our benchmark engine, Bench.

For our graphs, some of them have two values: a regular value in orange, and one in red called 'Intel Spec'. ASUS offers the option to 'open up' the power and current limits of the chip, so the CPU is still running at the same frequency but is not throttled. Despite Intel saying that they recommend 'Intel Spec', the system they sent to us to test was actually set up with the power limits opened up, and the results they provided for us to compare to internally also correlated with that setting. As a result, we're providing both sets of results for our CPU tests.

PCMark 10: Industry Standard System Profiler

Futuremark, now known as UL, has developed benchmarks that have become industry standards for around two decades. The latest complete system test suite is PCMark 10, upgrading over PCMark 8 with updated tests and more OpenCL invested into use cases such as video streaming.

PCMark splits its scores into about 14 different areas, including application startup, web, spreadsheets, photo editing, rendering, video conferencing, and physics. We post all of these numbers in our benchmark database, Bench, however the key metric for the review is the overall score.

PCMark10 Extended Score


Chromium Compile: Windows VC++ Compile of Chrome 56

A large number of AnandTech readers are software engineers, looking at how the hardware they use performs. While compiling a Linux kernel is ‘standard’ for reviewers who often compile, our test is a little more varied – we are using the Windows instructions to compile Chrome, specifically a Chrome 56 build from March 2017, as that was when we built the test. Google quite handily gives instructions on how to compile with Windows, along with a 400k file download for the repo.

In our test, using Google’s instructions, we use the MSVC compiler and ninja developer tools to manage the compile. As you may expect, the benchmark is variably threaded, with a mix of DRAM requirements that benefit from faster caches. Data procured in our test is the time taken for the compile, which we convert into compiles per day.
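
A sketch of the timing and conversion is below; the ninja target follows Google's Windows build instructions, though the output directory name is an assumption on our part.

```python
# A sketch of the timing and conversion; the ninja target follows Google's
# build instructions, and the output directory name is an assumption.
import subprocess
import time

start = time.perf_counter()
subprocess.run(["ninja", "-C", "out/Default", "chrome"], check=True)
elapsed = time.perf_counter() - start

# 86400 seconds in a day: longer compiles -> fewer compiles per day
print(f"{elapsed:.0f} s per compile = {86400 / elapsed:.2f} compiles/day")
```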

Compile Chromium (Rate)


3DMark Physics: In-Game Physics Compute

Alongside PCMark is 3DMark, Futuremark’s (UL’s) gaming test suite. Each gaming test consists of one or two GPU-heavy scenes, along with a physics test that is indicative of when the test was written and the platform it is aimed at. The main overriding tests, in order of complexity, are Ice Storm, Cloud Gate, Sky Diver, Fire Strike, and Time Spy.

Some of the subtests offer variants, such as Ice Storm Unlimited, which is aimed at mobile platforms with an off-screen rendering, or Fire Strike Ultra which is aimed at high-end 4K systems with lots of the added features turned on. Time Spy also currently has an AVX-512 mode (which we may be using in the future).

3DMark Physics - Time Spy


GeekBench4: Synthetics

A common tool for cross-platform testing between mobile, PC, and Mac, GeekBench 4 is an exercise in synthetic testing across a range of algorithms looking for peak throughput. Tests include encryption, compression, fast Fourier transforms, memory operations, n-body physics, matrix operations, histogram manipulation, and HTML parsing.

I’m including this test due to popular demand; the results do come across as overly synthetic, although many users put a lot of weight behind the test because it is compiled across different platforms (albeit with different compilers).

We record the main subtest scores (Crypto, Integer, Floating Point, Memory) in our benchmark database, but for the review we post the overall single and multi-threaded results.

Geekbench 4 - ST Overall
Geekbench 4 - MT Overall

 



CPU Performance: Encoding Tests

With the rise of streaming, vlogs, and video content as a whole, encoding and transcoding tests are becoming ever more important. Not only are more home users and gamers needing to convert video files into something more manageable, for streaming or archival purposes, but the servers that manage the output also deal with data and log files using compression and decompression. Our encoding tasks are focused around these important scenarios, with input from the community for the best implementation of real-world testing.

All of our benchmark results can also be found in our benchmark engine, Bench.

For our graphs, some of them have two values: a regular value in orange, and one in red called 'Intel Spec'. ASUS offers the option to 'open up' the power and current limits of the chip, so the CPU is still running at the same frequency but is not throttled. Despite Intel saying that they recommend 'Intel Spec', the system they sent to us to test was actually set up with the power limits opened up, and the results they provided for us to compare to internally also correlated with that setting. As a result, we're providing both sets of results for our CPU tests.

Handbrake 1.1.0: Streaming and Archival Video Transcoding

A popular open source tool, Handbrake is the anything-to-anything video conversion software that a number of people use as a reference point. The danger always lies in version numbers and optimization; for example, the latest versions of the software can take advantage of AVX-512 and OpenCL to accelerate certain types of transcoding and algorithms. The version we use here is a pure CPU play, with common transcoding variations.

We have split Handbrake up into several tests, using a Logitech C920 1080p60 native webcam recording (essentially a streamer recording), and convert it into two types of streaming formats and one for archival. The output settings used are as follows, with a sketch of an equivalent command-line invocation after the list:

  • 720p60 at 6000 kbps constant bit rate, fast setting, high profile
  • 1080p60 at 3500 kbps constant bit rate, faster setting, main profile
  • 1080p60 HEVC at 3500 kbps variable bit rate, fast setting, main profile
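
As a rough guide, the first of those settings corresponds to a HandBrakeCLI invocation along these lines; the filenames and the exact flag mapping are our assumptions, not the test script itself.

```python
# A hypothetical sketch of the 720p60 x264 streaming conversion; filenames
# and the exact flag mapping are our assumptions.
import subprocess

subprocess.run([
    "HandBrakeCLI",
    "-i", "c920_1080p60_recording.mp4",  # source webcam capture (placeholder)
    "-o", "out_720p60.mp4",
    "-e", "x264",                        # x264 encoder
    "-b", "6000",                        # average bitrate, kbps
    "-r", "60",                          # 60 fps output
    "-w", "1280", "-l", "720",           # scale to 720p
    "--encoder-preset", "fast",
    "--encoder-profile", "high",
], check=True)
```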

Handbrake 1.1.0 - 720p60 x264 6000 kbps Fast
Handbrake 1.1.0 - 1080p60 x264 3500 kbps Faster
Handbrake 1.1.0 - 1080p60 HEVC 3500 kbps Fast


7-zip v1805: Popular Open-Source Encoding Engine

Out of our compression/decompression tool tests, 7-zip is the most requested and comes with a built-in benchmark. For our test suite, we’ve pulled the latest version of the software and we run the benchmark from the command line, reporting the compression, decompression, and a combined score.
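
Since the benchmark is built into the binary, reproducing the run is straightforward; a sketch is below, where pinning the thread count to 56 (to match the W-3175X) is our assumption.

```python
# A sketch: '7z b' runs the built-in benchmark; the -mmt switch pins the
# thread count (56 here, matching the W-3175X, is our assumption).
import subprocess

result = subprocess.run(
    ["7z", "b", "-mmt56"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # report includes compression/decompression ratings
```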

It is noted in this benchmark that the latest multi-die processors have very bimodal performance between compression and decompression, performing well in one and badly in the other. There are also discussions around how the Windows scheduler is placing threads. As we get more results, it will be interesting to see how this plays out.

Please note, if you plan to share out the Compression graph, please include the Decompression one. Otherwise you’re only presenting half a picture.

7-Zip 1805 Compression
7-Zip 1805 Decompression
7-Zip 1805 Combined


WinRAR 5.60b3: Archiving Tool

My compression tool of choice is often WinRAR, having been one of the first tools a number of my generation used over two decades ago. The interface has not changed much, although the integration with Windows right click commands is always a plus. It has no in-built test, so we run a compression over a set directory containing over thirty 60-second video files and 2000 small web-based files at a normal compression rate.

WinRAR is variably threaded but also susceptible to caching, so in our test we run it 10 times and take the average of the last five runs, leaving the test to measure raw CPU compute performance from a warm cache.
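
A sketch of that run-ten, average-the-last-five methodology is below; the rar command line, archive name, and source directory are all illustrative.

```python
# A sketch of the run-10, average-the-last-5 methodology; the rar command
# line, archive name, and source directory are illustrative only.
import os
import subprocess
import time

times = []
for _ in range(10):
    if os.path.exists("test.rar"):
        os.remove("test.rar")            # start every run from a clean slate
    start = time.perf_counter()
    subprocess.run(["rar", "a", "-m3", "test.rar", "testdir/"], check=True)
    times.append(time.perf_counter() - start)

# The first five runs warm the cache; only the last five count.
print(f"Warm average: {sum(times[5:]) / 5:.1f} s")
```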

WinRAR 5.60b3


AES Encryption: File Security

A number of platforms, particularly mobile devices, are now offering encryption by default with file systems in order to protect the contents. Windows based devices have these options as well, often applied by BitLocker or third-party software. In our AES encryption test, we used the discontinued TrueCrypt for its built-in benchmark, which tests several encryption algorithms directly in memory.

The data we take for this test is the combined AES encrypt/decrypt performance, measured in gigabytes per second. The software does use AES-NI instructions on processors that offer hardware acceleration, however not AVX-512.
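
TrueCrypt's own benchmark is GUI-driven, but for readers who want a quick in-memory sanity check of AES throughput, a sketch using Python's cryptography package (our substitution for illustration, not the tool we test with) might look like this:

```python
# A sketch of an in-memory AES throughput measurement using the Python
# 'cryptography' package - our substitution for illustration, not the
# TrueCrypt benchmark we actually test with. AES-NI is used when available.
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(32), os.urandom(16)
data = os.urandom(256 * 1024 * 1024)     # 256 MB in-memory working set

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
start = time.perf_counter()
ciphertext = encryptor.update(data) + encryptor.finalize()
elapsed = time.perf_counter() - start

print(f"AES-256-CBC encrypt: {len(data) / elapsed / 1e9:.2f} GB/s")
```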

AES Encoding




CPU Performance: Web and Legacy Tests

While more of a focus for low-end and small form factor systems, web-based benchmarks are notoriously difficult to standardize. Modern web browsers are frequently updated, with no recourse to disable those updates, and as such there is difficulty in keeping a common platform. The fast-paced nature of browser development means that version numbers (and performance) can change from week to week. Despite this, web tests are often a good measure of user experience: a lot of office work today revolves around web applications, particularly email and office apps, but also interfaces and development environments. Our web tests include some of the industry standard tests, as well as a few popular but older tests.

We have also included our legacy benchmarks in this section, representing a stack of older code for popular benchmarks.

All of our benchmark results can also be found in our benchmark engine, Bench.

For our graphs, some of them have two values: a regular value in orange, and one in red called 'Intel Spec'. ASUS offers the option to 'open up' the power and current limits of the chip, so the CPU is still running at the same frequency but is not throttled. Despite Intel saying that they recommend 'Intel Spec', the system they sent to us to test was actually set up with the power limits opened up, and the results they provided for us to compare to internally also correlated with that setting. As a result, we're providing both sets of results for our CPU tests.

WebXPRT 3: Modern Real-World Web Tasks, including AI

The company behind the XPRT test suites, Principled Technologies, has recently released the latest web test, and rather than attach a year to the name has just called it ‘3’. This latest test (the current version as we started the suite) builds upon and develops the ethos of previous tests: user interaction, office compute, graph generation, list sorting, HTML5, image manipulation, and even some AI testing.

For our benchmark, we run the standard test which goes through the benchmark list seven times and provides a final result. We run this standard test four times, and take an average.

Users can access the WebXPRT test at http://principledtechnologies.com/benchmarkxprt/webxprt/

WebXPRT 3 (2018)

WebXPRT 2015: HTML5 and Javascript Web UX Testing

The older version of WebXPRT is the 2015 edition, which focuses on a slightly different set of web technologies and frameworks that are still in use today. This is still a relevant test, especially for users interacting with not-the-latest web applications, of which there are a lot. Web framework development is often very quick but with high turnover: frameworks are quickly developed, built upon, and used, and then developers move on to the next one. Adjusting an application to a new framework is an arduous task, especially with rapid development cycles. This leaves a lot of applications ‘fixed in time’, and relevant to user experience for many years.

Similar to WebXPRT3, the main benchmark is a sectional run repeated seven times, with a final score. We repeat the whole thing four times, and average those final scores.

WebXPRT15

Speedometer 2: JavaScript Frameworks

Our newest web test is Speedometer 2, which is an aggregated test over a series of JavaScript frameworks doing three simple things: build a list, enable each item in the list, and remove the list. All the frameworks implement the same visual cues, but obviously apply them from different coding angles.

Our test goes through the list of frameworks, and produces a final score indicative of ‘rpm’, one of the benchmark’s internal metrics. We report this final score.

Speedometer 2

Google Octane 2.0: Core Web Compute

A popular web test for several years, though now no longer updated, is Octane, developed by Google. Version 2.0 of the test performs the best part of two dozen compute-related tasks, such as regular expressions, cryptography, ray tracing, emulation, and Navier-Stokes physics calculations.

The test gives each sub-test a score and produces a geometric mean of the set as a final result. We run the full benchmark four times, and average the final results.
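
As a quick illustration of how a geometric mean condenses sub-test scores into one number (the scores below are made up):

```python
# How a geometric mean condenses sub-test scores into one figure; the
# scores below are made up for illustration.
from math import prod

scores = [35000, 41000, 28000, 52000]    # illustrative sub-test scores
geomean = prod(scores) ** (1 / len(scores))
print(f"Overall score: {geomean:.0f}")
```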

Google Octane 2.0

Mozilla Kraken 1.1: Core Web Compute

Even older than Octane is Kraken, this time developed by Mozilla. This is an older test that does similar computational mechanics, such as audio processing or image filtering. Kraken seems to produce a highly variable result depending on the browser version, as it is a test that browsers keenly optimize for.

The main benchmark runs through each of the sub-tests ten times and produces an average time to completion for each loop, given in milliseconds. We run the full benchmark four times and take an average of the time taken.

Mozilla Kraken 1.1

3DPM v1: Naïve Code Variant of 3DPM v2.1

The first legacy test in the suite is the first version of our 3DPM benchmark. This is the ultimate naïve version of the code, as if it were written by a scientist with no knowledge of how computer hardware, compilers, or optimization work (which, at the start, it was). This represents a large body of scientific simulation out in the wild, where getting the answer is more important than getting it fast (a correct result in 4 days is acceptable, rather than sending someone away for a year to learn to code and getting the result in 5 minutes).

In this version, the only real optimizations were in the compiler flags (-O2, -fp:fast), compiling in release mode, and enabling OpenMP in the main compute loops. The loops were not tuned for function size, and one of the key slowdowns is false sharing in the cache. It also has long dependency chains based on the random number generation, which leads to relatively poor performance on certain compute microarchitectures.

3DPM v1 can be downloaded with our 3DPM v2 code here: 3DPMv2.1.rar (13.0 MB)

3DPM v1 Single Threaded
3DPM v1 Multi-Threaded

x264 HD 3.0: Older Transcode Test

This transcoding test is very old, and was used by Anand back in the days of the Pentium 4 and Athlon II. Here a standardized 720p video is transcoded with a two-pass conversion, with the benchmark showing the frames per second of each pass. This benchmark is single-threaded, and between some microarchitectures we seem to hit an instructions-per-clock wall.

x264 HD 3.0 Pass 1
x264 HD 3.0 Pass 2



Gaming: World of Tanks enCore

Albeit different from most other commonly played MMO (massively multiplayer online) games, World of Tanks is set in the mid-20th century and allows players to take control of a range of military-based armored vehicles. World of Tanks (WoT) is developed and published by Wargaming, who are based in Belarus, with the game’s soundtrack primarily composed by Belarusian composer Sergey Khmelevsky. The game offers multiple entry points, including a free-to-play element, as well as allowing players to pay a fee to open up more features. One of the most interesting things about this tank-based MMO is that it achieved eSports status when it debuted at the World Cyber Games back in 2012.

World of Tanks enCore is a demo application for a new and unreleased graphics engine penned by the Wargaming development team. Over time the new core engine will be implemented into the full game, upgrading the game’s visuals with key elements such as improved water, flora, shadows, and lighting, as well as other objects such as buildings. The World of Tanks enCore demo app not only offers insight into the impending game engine changes, but allows users to check system performance to see if the new engine runs optimally on their system.

AnandTech CPU Gaming 2019 Game List
Game: World of Tanks enCore | Genre: Driving / Action | Release: Feb 2018 | API: DX11
IGP: 768p Minimum | Low: 1080p Medium | Med: 1080p Ultra | High: 4K Ultra

All of our benchmark results can also be found in our benchmark engine, Bench.

World of Tanks enCore - Average FPS and 95th Percentile (IGP / Low / Medium / High)




Gaming: Final Fantasy XV

Upon arriving on PC in early 2018, Final Fantasy XV: Windows Edition was given a graphical overhaul as it was ported over from console, the fruits of Square Enix’s successful partnership with NVIDIA, with hardly any hint of the troubles during Final Fantasy XV’s original production and development.

In preparation for the launch, Square Enix opted to release a standalone benchmark that they have since updated. Using the Final Fantasy XV standalone benchmark gives us a lengthy standardized sequence to record, although it should be noted that its heavy use of NVIDIA technology means that the Maximum setting has problems - it renders items off screen. To get around this, we use the standard preset which does not have these issues.

Square Enix has patched the benchmark with custom graphics settings and bug fixes to be much more accurate in profiling in-game performance and graphical options. For our testing, we run the standard benchmark with a FRAPS overlay, taking a 6-minute recording of the test.

AnandTech CPU Gaming 2019 Game List
Game: Final Fantasy XV | Genre: JRPG | Release: Mar 2018 | API: DX11
IGP: 720p Standard | Low: 1080p Standard | Med: 4K Standard | High: 8K Standard

All of our benchmark results can also be found in our benchmark engine, Bench.

Final Fantasy XV - Average FPS and 95th Percentile (IGP / Low / Medium / High)




Gaming: Shadow of War

Next up is Middle-earth: Shadow of War, the sequel to Shadow of Mordor. Developed by Monolith, whose last hit was arguably F.E.A.R., Shadow of Mordor returned the studio to the spotlight with an innovative NPC rival generation and interaction system called the Nemesis System, along with a storyline based on J.R.R. Tolkien's legendarium, all running on a heavily modified version of the engine that originally powered F.E.A.R. in 2005.

Using the new LithTech Firebird engine, Shadow of War improves on the detail and complexity, and with free add-on high-resolution texture packs, offers itself as a good example of getting the most graphics out of an engine that may not be bleeding edge. Shadow of War also supports HDR (HDR10).

AnandTech CPU Gaming 2019 Game List
Game: Shadow of War | Genre: Action / RPG | Release: Sep 2017 | API: DX11
IGP: 720p Ultra | Low: 1080p Ultra | Med: 4K High | High: 8K High

All of our benchmark results can also be found in our benchmark engine, Bench.

Shadow of War - Average FPS (IGP / Low / Medium / High)




Gaming: Civilization 6 (DX12)

Originally penned by Sid Meier and his team, the Civ series of turn-based strategy games are a cult classic, and many an excuse for an all-nighter trying to get Gandhi to declare war on you due to an integer overflow. Truth be told, I never actually played the first version, but every edition from the second to the sixth, including the fourth as voiced by the late Leonard Nimoy, is a game that is easy to pick up, but hard to master.

Benchmarking Civilization has always been somewhat of an oxymoron – for a turn-based strategy game, the frame rate is not necessarily the important thing, and even in the right mood, something as low as 5 frames per second can be enough. With Civilization 6, however, Firaxis went hardcore on visual fidelity, trying to pull you into the game. As a result, Civilization can be taxing on graphics and CPUs as we crank up the details, especially in DirectX 12.

Perhaps a more poignant benchmark would be during the late game, when in older versions of Civilization it could take 20 minutes to cycle around the AI players before the human regained control. The new version of Civilization has an integrated ‘AI Benchmark’, although it is not yet part of our benchmark portfolio, due to technical reasons we are trying to solve. Instead, we run the graphics test, which provides an example of a mid-game setup at our settings.

AnandTech CPU Gaming 2019 Game List
Game: Civilization VI | Genre: RTS | Release: Oct 2016 | API: DX12
IGP: 1080p Ultra | Low: 4K Ultra | Med: 8K Ultra | High: 16K Low

All of our benchmark results can also be found in our benchmark engine, Bench.

Civilization VI - Average FPS and 95th Percentile (IGP)

We had issues running Civilization beyond IGP; we're looking into exactly why.



Gaming: Ashes Classic (DX12)

Seen as the poster child of DirectX 12, Ashes of the Singularity (AoTS, or just Ashes) was the first title to actively explore as many of the DirectX 12 features as it possibly could. Stardock, the developer behind the Nitrous engine which powers the game, has ensured that the real-time strategy title takes advantage of multiple cores and multiple graphics cards, in as many configurations as possible.

As a real-time strategy title, Ashes is all about responsiveness, during both wide-open shots and concentrated battles. With DirectX 12 at the helm, the ability to issue more draw calls per second allows the engine to work with substantial unit depth and effects that other RTS titles had to rely on combined draw calls to achieve, making some combined unit structures ultimately very rigid.

Stardock clearly understood the importance of an in-game benchmark, ensuring that such a tool was available and capable from day one; with all the additional DX12 features in use, being able to characterize how they affected the title was important for the developer. The in-game benchmark performs a four-minute, fixed-seed battle environment with a variety of shots, and outputs a vast amount of data to analyze.

For our benchmark, we run Ashes Classic: an older version of the game, before the Escalation update. The reason for this is that it is easier to automate, having no splash screen, but it still has strong visual fidelity to test.

AnandTech CPU Gaming 2019 Game List
Game: Ashes: Classic | Genre: RTS | Release: Mar 2016 | API: DX12
IGP: 720p Standard | Low: 1080p Standard | Med: 1440p Standard | High: 4K Standard

Ashes has dropdown options for MSAA, Light Quality, Object Quality, Shading Samples, Shadow Quality, Textures, and separate options for the terrain. There are several presets, from Very Low to Extreme; we run our benchmarks at the settings above, and take the frame-time output for our average and percentile numbers.
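
As an illustration of how the average FPS and 95th percentile figures used throughout these pages are derived from frame-time data (the numbers here are made up):

```python
# Deriving average FPS and a 95th-percentile figure from per-frame times in
# milliseconds; the data here is made up for illustration.
frame_times_ms = [16.2, 17.1, 15.8, 33.4, 16.5, 16.9]

avg_fps = 1000 / (sum(frame_times_ms) / len(frame_times_ms))
p95_time = sorted(frame_times_ms)[int(0.95 * len(frame_times_ms))]
p95_fps = 1000 / p95_time                # FPS at the 95th-percentile frame
print(f"Average: {avg_fps:.1f} FPS, 95th percentile: {p95_fps:.1f} FPS")
```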


All of our benchmark results can also be found in our benchmark engine, Bench.

Ashes: Classic - Average FPS and 95th Percentile (IGP / Low / Medium / High)




Gaming: Strange Brigade (DX12, Vulkan)

Strange Brigade is set in Egypt in 1903, and follows a story very similar to that of the Mummy film franchise. This particular third-person shooter is developed by Rebellion Developments, which is more widely known for games such as the Sniper Elite and Alien vs Predator series. The game follows the hunt for Seteki the Witch Queen, who has arisen once again, and the only ‘troop’ who can ultimately stop her. Gameplay is cooperative-centric, with a wide variety of levels and many puzzles that need solving by the British colonial Secret Service agents sent to put an end to her reign of barbarism and brutality.

The game supports both the DirectX 12 and Vulkan APIs and houses its own built-in benchmark which offers various options up for customization including textures, anti-aliasing, reflections, draw distance and even allows users to enable or disable motion blur, ambient occlusion and tessellation among others. AMD has boasted previously that Strange Brigade is part of its Vulkan API implementation offering scalability for AMD multi-graphics card configurations.

AnandTech CPU Gaming 2019 Game List
Game: Strange Brigade* | Genre: FPS | Release: Aug 2018 | API: DX12, Vulkan
IGP: 720p Low | Low: 1080p Medium | Med: 1440p High | High: 4K Ultra
*Strange Brigade is run in DX12 and Vulkan modes

All of our benchmark results can also be found in our benchmark engine, Bench.

Strange Brigade (DX12) - Average FPS and 95th Percentile (IGP / Low / Medium / High)

Strange Brigade (Vulkan) - Average FPS and 95th Percentile (IGP / Low / Medium / High)


Gaming: Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise hit the shelves on April 14th 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA doesn’t provide graphical presets, but opens up the options to users and extends the boundaries by pushing even the hardest systems to the limit using Rockstar’s Advanced Game Engine under DirectX 11. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, when cranked up to maximum it creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark. The in-game benchmark consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence that lasts around 90 seconds. We use only the final part of the benchmark, which combines a flight scene in a jet followed by an inner city drive-by through several intersections followed by ramming a tanker that explodes, causing other cars to explode as well. This is a mix of distance rendering followed by a detailed near-rendering action sequence, and the title thankfully spits out frame time data.

AnandTech CPU Gaming 2019 Game List
Game: Grand Theft Auto V | Genre: Open World | Release: Apr 2015 | API: DX11
IGP: 720p Low | Low: 1080p High | Med: 1440p Very High | High: 4K Ultra

There are no presets for the graphics options in GTA: the user can adjust options such as population density and distance scaling on sliders, while others, such as texture/shadow/shader/water quality, run from Low to Very High. Other options include MSAA, soft shadows, post effects, shadow resolution, and extended draw distance. There is a handy option at the top which shows how much video memory the settings are expected to consume, with obvious repercussions if a user requests more video memory than is present on the card (although there’s no obvious indication if you have a low-end GPU paired with lots of video memory, like an R7 240 4GB).

All of our benchmark results can also be found in our benchmark engine, Bench.

GTA V - Average FPS and 95th Percentile (IGP / Low / Medium / High)




Gaming: Shadow of the Tomb Raider (DX12)

The latest instalment of the Tomb Raider franchise does less rising and lurks more in the shadows with Shadow of the Tomb Raider. As expected, this action-adventure follows Lara Croft, the main protagonist of the franchise, as she muscles through the Mesoamerican and South American regions looking to stop a Mayan apocalypse she herself unleashed. Shadow of the Tomb Raider is the direct sequel to the previous Rise of the Tomb Raider, was developed by Eidos Montreal and Crystal Dynamics, was published by Square Enix, and hit shelves across multiple platforms in September 2018. This title effectively closes the Lara Croft Origins story, and received critical acclaim upon its release.

The integrated Shadow of the Tomb Raider benchmark is similar to that of the previous game, Rise of the Tomb Raider, which we used in our previous benchmark suite. The newer Shadow of the Tomb Raider supports both DirectX 11 and 12, with this particular title being touted as having one of the best implementations of DirectX 12 of any game released so far.

AnandTech CPU Gaming 2019 Game List
Game: Shadow of the Tomb Raider | Genre: Action | Release: Sep 2018 | API: DX12
IGP: 720p Low | Low: 1080p Medium | Med: 1440p High | High: 4K Highest

All of our benchmark results can also be found in our benchmark engine, Bench.

Shadow of the Tomb Raider - Average FPS and 95th Percentile (Low / Medium / High)




Gaming: F1 2018

Aside from keeping up to date with the Formula One world, F1 2017 added HDR support, which F1 2018 has maintained; otherwise, we should see any newer versions of Codemasters' EGO engine find their way into F1. Graphically demanding in its own right, F1 2018 keeps a useful racing-type graphics workload in our benchmarks.

We use the in-game benchmark, set to run on the Montreal track in the wet, driving as Lewis Hamilton from last place on the grid. Data is taken over a one-lap race.

AnandTech CPU Gaming 2019 Game List
Game: F1 2018 | Genre: Racing | Release: Aug 2018 | API: DX11
IGP: 720p Low | Low: 1080p Med | Med: 4K High | High: 4K Ultra

All of our benchmark results can also be found in our benchmark engine, Bench.

F1 2018 - Average FPS and 95th Percentile (IGP / Low / Medium / High)




Conclusion: Price Makes Perfect

When you buy a system, ask yourself – what matters most to you?

Is it gaming performance?
Is it bang-for-buck?
Is it all-out peak performance?
Is it power consumption?
Is it performance per watt?

I can guarantee that out of the AnandTech audience, we will have some readers in each of these categories. Some will be price sensitive, while others will not. Some will be performance sensitive, others will be power (or noise) sensitive. The point here is that the Xeon W-3175X only caters to one market: high performance.

We tested the Xeon W-3175X in our regular suite of tests, and it performs much as we would expect – it is a 28-core version of the Core i9-9980XE, so in single-threaded tests it is about the same, but in raw multi-threaded tests it performs up to 50% better. For rendering, that’s great. For our variable-threaded tests, the gains are not as big, ranging from no gain at all to around 20% or so. This is the nature of increasing threads – at some point, software hits Amdahl’s law of scaling and more threads do nothing. However, for software that isn’t at that point, the W-3175X comes in like a wrecking ball.
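
For a back-of-the-envelope feel for that scaling wall, Amdahl's law says the speedup on n cores with a parallel fraction p is 1 / ((1 - p) + p / n); the sketch below uses an assumed p of 0.95, an illustrative figure rather than a measured one.

```python
# Amdahl's law: with parallel fraction p, n cores give a maximum speedup of
# 1 / ((1 - p) + p / n). The p = 0.95 value is an illustrative assumption.
def amdahl_speedup(p: float, n: int) -> float:
    return 1 / ((1 - p) + p / n)

for cores in (18, 28):
    print(f"{cores} cores: {amdahl_speedup(0.95, cores):.2f}x over 1 core")
# ~9.7x at 18 cores vs ~11.9x at 28: ten extra cores buy ever less.
```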

Corona 1.3 Benchmark

For our graphs, some of them had two values: a regular value in orange, and one in red called 'Intel Spec'. ASUS offers the option to 'open up' the power and current limits of the chip, so the CPU is still running at the same frequency but is not throttled. Despite Intel saying that they recommend 'Intel Spec', the system they sent to us to test was actually set up with the power limits opened up, and the results they provided for us to compare to internally also correlated with that setting. As a result, we provided both sets of results for our CPU tests.

For the most part, the 'opened up' results scored better, especially in multithreaded tests; however, Intel Spec did excel in memory-bound tests. This is likely because when 'opened up', there is no limit keeping the turbo in check, which means there can be additional stalls for memory-based workloads. In the slower 'Intel Spec' environment, there's plenty of power headroom for the mesh and the memory controllers to deal with requests as they come.

Power, Overclockability, and Availability

Two-and-a-half questions hung over Intel during the announcement and launch of the W-3175X: the first was power, the second was overclockability, and question two-point-five was availability.

On the power side of the equation, again the W-3175X comes in like a wrecking ball, and this baby is on fire. While the chip has a 255W TDP, its turbo max power value is 510W – we don’t hit that at ‘stock’ frequency, which sits more around the 300W mark, but the power draw really climbs once we start overclocking.

This processor has a regular all-core frequency of 3.8 GHz, with AVX2 at 3.2 GHz and AVX-512 at 2.8 GHz. In our testing, just by adjusting multipliers, we achieved an all-core turbo of 4.4 GHz and an AVX2 turbo of 4.0 GHz, with the system drawing 520W and 450W respectively. At these frequencies, our CPU was reporting temperatures in excess of 110ºC! This processor is actually rated with a thermal shutoff at 120ºC, well above the 105ºC we see with regular desktop processors, which suggests that Intel had to bin these chips hard enough that the higher temperature ceiling was required.

On the question of availability, the road is not so clear. Intel intends to sell these processors only through OEMs and system integrators, as part of pre-built systems, for now. We’ve heard some numbers about how many chips will be made (it’s a low four-digit number), but we can only approximately confirm those numbers, given that one motherboard vendor also disclosed how many boards they were building.

One of Anand’s comments I will always remember during our time together at AnandTech was this:

“There are no bad products, only bad prices.”

According to OEMs we spoke to, this processor was initially going to be $8k. The idea here is that, being 28-core and unlocked, Intel did not want it to cannibalize its $10k Xeon market. Since then, distributors told us that the latest information they were getting was around $4500, and now Intel is saying that the recommended consumer price is $3000. That’s not Intel’s usual ‘per-1000 units’ definition; that’s the actual end-user price. Intel isn’t even quoting a per-1000 unit price, which just goes to substantiate the numbers we heard about volume.

At $8000, this CPU would be dead in the water, only suitable for high-frequency traders who could absorb the cost within a few hours of trading. At $4500, it would be a stretch, given that Intel’s 18-core chip is only $2099, and AMD offers the 32-core 2990WX for $1799, which surpasses it in performance per dollar on any rendering task.

At $2999, Intel has probably priced this one just right.

At $2999, it's not the hideous monstrosity that some worried it would be, but instead becomes a very believable progression from the Core i9-9980XE. Just don’t ask about the rest of the system, as an OEM is probably looking at a $7k minimum build, or a $10k end-user shelf price.
