Original Link: https://www.anandtech.com/show/13526/intel-xeon-e-review-e2186g-and-more-tested



Despite having officially launched back in July, Intel’s Xeon E desktop platform has yet to see the light of day in systems readily available to users or small businesses. This should change today, with the official embargo lift for reviews of the parts, as well as the announcement that SGX-enabled versions are coming for server use. The Xeon E platform is the replacement for what used to be called the E3-1200 family, using Intel’s new nomenclature, and these parts are based on Intel’s Coffee Lake (not Coffee Lake Refresh) microarchitecture. We managed to get a few processors in to test, and today we’ll start by examining most of the six-core family.

Xeon E3 becomes Xeon E

Ever since the launch of Intel’s Xeon Scalable platform, most of the Xeon product stack has gone through a naming transformation. The E5 and E7 families were rolled into the Xeon Scalable line with names like Platinum, Gold, Silver, and Bronze, while the workstation-focused Xeon E5-1600 parts are now called Xeon W. Following suit, Intel has changed the Xeon E3-1200 family into Xeon E, with the E standing for ‘Entry’.

Intel Xeon Naming Strategy
                          SNB    IVB    HSW    BDW    SKL-SP onwards
Servers        E7-8000    'v1'   v2     v3     v4     Xeon SP (Platinum / Gold /
               E7-4000    'v1'   v2     v3     v4     Silver / Bronze)
               E7-2800    'v1'   v2     -      -
               E5-4600    'v1'   v2     v3     v4
               E5-2600    'v1'   v2     v3     v4
               E5-2400    'v1'   v2     v3     -
Workstations   E5-1600    'v1'   v2     v3     v4     Xeon W
               E5-1400    'v1'   v2     v3     -      -

                          SNB    IVB    HSW    BDW    SKL     KBL    CFL
Mobile         E3-1500M   -      -      -      -      v5      v6     Xeon E
Consumer       E3-1200    'v1'   v2     v3     v4     v5      v6     Xeon E
Comms          E3-1100    'v1'   v2     -      -      -       -      -
Network        Xeon-D     -      -      -      D-1500 D-2100  -      -

The target market for these processors is still the same: ECC-enabled versions of consumer Core family parts, with slightly different base frequencies, turbos, pricing, and TDP values. Users who follow the Xeon E product line will have noticed that some of the processors in this family have the integrated graphics disabled, and that the pricing structure is often a notch above the consumer parts.

Despite the fact that the consumer product line recently launched the 9th Gen Core, known as ‘Coffee Lake Refresh’, the Xeon E-2100 family being reviewed today belongs to the non-refresh version of Coffee Lake. This is mostly down to the timing – the market that Xeon E is after often requires additional testing and qualification. It still doesn’t explain why the launch was in July but the performance embargo lift is today. But here we are.

Xeon E-2100 Family

As we reported back in July, Intel is coming out with a range of quad-core and six-core parts for Xeon E. Those labeled with ‘G’ at the end of the name will have integrated graphics.

Sitting at the top is the Xeon E-2186G, a six core processor with a TDP of 95W, a base frequency of 3.8 GHz, and a turbo frequency of 4.7 GHz. In the past, the top processor was often called the ‘E3-1285’, so the naming scheme follows. The top Xeon E/E3 bin was always an odd processor in general: for most of the previous generations, it offered a slight bump (often +100 MHz) over the second best bin, but the price increased by 40-60% to around $620. In this case we see that the E-2186G is indeed a 100 MHz increase in base frequency over the E-2176G, from 3.7 GHz to 3.8 GHz, although the price increase is a more modest 24% this time. Well, there’s always more room at the top, should Intel ever decide to release 8-core variants.

The E-2186G is the only part listed with a TDP of 95W, while the rest are 80W for the six-core parts and 71W for the four-core parts. Every chip has hyperthreading, except for those with a third digit of ‘2’, such as the E-2126G, E-2124G, and E-2124. The processors with integrated graphics, the ‘G’ processors, are using the ‘Pro’ equivalent of the UHD 630 graphics we see on consumer Coffee Lake. In this instance, the iGPU is called UHD P630. Frequencies of the iGPUs, as well as DRAM frequencies, match the consumer parts as well.

Aside from the official parts listed, Intel also has two off-roadmap SKUs that we can find: the Xeon E-2106 and Xeon E-2104. These are not normally searchable on Intel’s database if we look at the ‘overall Xeon E family’ specifications, but users can find them by putting in the exact name into the search. These parts will be OEM only, and Intel often makes them with specific customers in mind.

If we do some direct comparisons with the consumer variants of the processors, we get the following:

             Xeon E-2176G   Core i7-8700K        Xeon E-2174G   Core i5-8600K
Cores        6 / 12         6 / 12               4 / 8          6 / 6
Base Freq    3700 MHz       3700 MHz             3800 MHz       3600 MHz
Turbo Freq   4700 MHz       4700 MHz             4700 MHz       4300 MHz
TDP          80 W           95 W                 71 W           95 W
IGP          UHD P630       UHD 630              UHD P630       UHD 630
DRAM         DDR4-2666      DDR4-2666            DDR4-2666      DDR4-2666
ECC          Yes            No                   Yes            No
Price        $362           $359                 $328           $257

In both cases the TDP listed for the consumer parts is higher. For the six-core parts with hyperthreading the rest of the specifications are identical, while the quad-core vs six-core comparison comes with frequency differences as well. The consumer parts support overclocking, while the Xeon parts can use ECC memory. Pricing is also higher on the Xeon E parts, as they also support vPro.

Using a Xeon E-2100 Processor: Choosing a Motherboard

When Intel launched the E3-1200 v5 family two generations back, the company made a conscious decision to split the interoperability between the consumer Core processors and the commercial Xeon E3 processors. This means that there are specific chipsets and motherboards for each, and users could no longer mix-and-match, at least as far as putting a Xeon E3 into a consumer 200-series motherboard. Instead, users had to look to the C-series chipset enabled motherboards to get them to work. Intel is carrying that policy over for the Xeon E family as well, which requires new motherboards compared to the E3-1200 v5/v6 generations.

For example, in our review today we are using one of Supermicro’s Xeon E motherboards, the X11SCA-W. This motherboard uses the C246 chipset, which is the commercial/enterprise version of the Z390 chipset without the overclocking and with ECC memory support.

Because of this C246 requirement, there will be very few motherboards on the shelves for users that want to build their own Xeon E machine, as most of the primary motherboard manufacturers make one or two retail models and that’s about it. We’ve seen GIGABYTE and Supermicro show a single Xeon E motherboard at a trade show each, but for the others it is anyone’s guess.

Gavin Bonshor, our motherboard reviewer, will have a full review of the X11SCA-W ready in the next couple of weeks.

Intel’s Announcement Today, and This Review

Today Intel is officially announcing Xeon E for server customers. Up until this point it would seem that Intel was focusing more on professional desktop environments, but now the Xeon E platform is going to the datacenter. Intel is keen to point out that these parts for the server market support security features such as SGX.

For this review, we managed to obtain several of the processors in advance, several six-core and four-core units. Due to time constraints, we have only been able to test the six-core processors in our CPU benchmark suite so far, so we will follow up with another piece in a few weeks with the four-core variants. If we test them all, I’ll write a fresh article with the full analysis.

In the interest of disclosure, Intel did not sample any of the processors tested today. Intel contacted us about a week ago on sampling, and is set to sample us one of the parts we do not yet have, though it was not available in time for the embargo today. I’m on the lookout to borrow the others from our partners. Please get in touch if that could be you!

In this review, we are taking the E-2186G, the E-2176G, the E-2146G and the E-2136 through our workflow to gauge performance. The focus will be on the CPU metrics in the first few pages, however we are also including the gaming performance with a GTX 1080 as well as the integrated graphics performance. The main comparison points will be the equivalent consumer processors, and previous generation Xeon E3 processors that we could acquire.

Pages In This Review

  1. Xeon E3-1200 Becomes Xeon E-2100
  2. Test Bed and Setup
  3. 2018 and 2019 Benchmark Suite: Spectre and Meltdown Hardened
  4. CPU Performance: System Tests
  5. CPU Performance: Rendering Tests
  6. CPU Performance: Office Tests
  7. CPU Performance: Encoding Tests
  8. CPU Performance: Web and Legacy Tests
  9. Gaming: Integrated Graphics
  10. Gaming: World of Tanks enCore
  11. Gaming: Final Fantasy XV
  12. Gaming: Shadow of War
  13. Gaming: Civilization 6
  14. Gaming: Ashes Classic
  15. Gaming: Strange Brigade
  16. Gaming: Grand Theft Auto V
  17. Gaming: Far Cry 5
  18. Gaming: Shadow of the Tomb Raider
  19. Gaming: F1 2018
  20. Power Consumption
  21. Conclusions and Final Words


Test Bed and Setup

As per our processor testing policy, we take a premium category motherboard suitable for the socket, and equip the system with a suitable amount of memory running at the manufacturer's maximum supported frequency. This is also typically run at JEDEC subtimings where possible. It is noted that some users are not keen on this policy, stating that sometimes the maximum supported frequency is quite low, or faster memory is available at a similar price, or that the JEDEC speeds can be prohibitive for performance. While these comments make sense, ultimately very few users apply memory profiles (either XMP or other) as they require interaction with the BIOS, and most users will fall back on JEDEC supported speeds - this includes home users as well as businesses that might want to shave off a cent or two from the cost or stay within the margins set by the manufacturer. Where possible, we will extend our testing to include faster memory modules either at the same time as the review or at a later date.

Test Setup
Processors                             Motherboard                BIOS    Cooler                   Memory
Intel Xeon E-2186G / E-2176G /         Supermicro X11SCA-W        v1      TRUE Copper              Crucial Ballistix
E-2146G / E-2136                                                                                   4x4GB DDR4-2666
Intel Xeon E3-1280 v5 / E3-1275 v5 /   GIGABYTE X170-Extreme ECC  F21e    Silverstone AR10-115XS*  G.Skill RipjawsV
E3-1270 v5                                                                                         2x16GB DDR4-2133
Intel i9-9900K / i7-9700K / i5-9600K   ASRock Z390 Gaming i7      P1.70   TRUE Copper              Crucial Ballistix
                                                                                                   4x4GB DDR4-2666
Intel i7-8086K / i7-8700K /            ASRock Z390 Gaming i7      P1.70   TRUE Copper              Crucial Ballistix
i5-8600K / i5-8400                                                                                 4x4GB DDR4-2666
AMD Ryzen 7 2700X / Ryzen 5 2600X      ASRock X370 Gaming K4      P4.80   Wraith Max*              G.Skill SniperX
                                                                                                   2x8GB DDR4-2933

GPU     Sapphire RX 460 2GB (CPU Tests)
        MSI GTX 1080 Gaming 8G (Gaming Tests)
PSU     Corsair AX860i / Corsair AX1200i
SSD     Crucial MX200 1TB
OS      Windows 10 x64 RS3 1709, Spectre and Meltdown patched

*VRM supplemented with SST-FHP141-VF 173 CFM fans

Many thanks to...

We must thank the following companies for kindly providing hardware for our multiple test beds. Some of this hardware is not in this test bed specifically, but is used in other testing.

Hardware Providers

  • Sapphire RX 460 Nitro
  • MSI GTX 1080 Gaming X OC
  • Crucial MX200 + MX500 SSDs
  • Corsair AX860i + AX1200i PSUs
  • G.Skill RipjawsV, SniperX, FlareX
  • Crucial Ballistix DDR4
  • Silverstone Coolers
  • Silverstone Fans


Our New Testing Suite for 2018 and 2019

Spectre and Meltdown Hardened

In order to keep up to date with our testing, we have to update our software every so often to stay relevant. In our updates we typically implement the latest operating system, the latest patches, the latest software revisions, the newest graphics drivers, as well as add new tests or remove old ones. As regular readers will know, our CPU testing revolves around an automated test suite, and depending on how the newest software works, the suite either needs to change, be updated, have tests removed, or be rewritten completely. Last time we did a full re-write, it took the best part of a month, including regression testing (testing older processors).

One of the key elements of our testing update for 2018 (and 2019) is that our scripts and systems are designed to be hardened for Spectre and Meltdown. This means making sure that all of our BIOSes are updated with the latest microcode, and that our operating system has all the relevant updates in place. In this case we are using Windows 10 x64 Enterprise 1709 with April security updates, which enforces ‘Smeltdown’ (our combined name) mitigations. Users might ask why we are not running Windows 10 x64 RS4, the latest major update – this is due to some new features which are giving uneven results. Rather than spend a few weeks learning to disable them, we’re going ahead with RS3, which has been widely used.

Our previous benchmark suite was split into several segments depending on how the test is usually perceived. Our new test suite follows similar lines, and we run the tests based on:

  • Power
  • Memory
  • Office
  • System
  • Render
  • Encoding
  • Web
  • Legacy
  • Integrated Gaming
  • CPU Gaming

Depending on the focus of the review, the order of these benchmarks might change, or some may be left out of the main review. All of our data will reside in our benchmark database, Bench, for which there is a new ‘CPU 2019’ section for all of our new tests.

Within each section, we will have the following tests:

Power

Our power tests consist of running a substantial workload for every thread in the system, and then probing the power registers on the chip to find out details such as core power, package power, DRAM power, IO power, and per-core power. This all depends on how much information is given by the manufacturer of the chip: sometimes a lot, sometimes not at all.

We are currently running POV-Ray as our main test for Power, as it seems to hit deep into the system and is very consistent. In order to limit the number of cores for power, we use an affinity mask driven from the command line.
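As a sketch of how such a mask can be built (the mask arithmetic is ours for illustration; the exact launch mechanism differs per OS):

```python
def affinity_mask(n_cores: int) -> int:
    # Bitmask selecting logical cores 0 .. n_cores-1:
    # each set bit permits the process to run on that core.
    return (1 << n_cores) - 1

# On Windows the mask can be passed in hex when launching a process, e.g.:
#   start /affinity 3F povray.exe ...   (3F = first six cores)
print(f"{affinity_mask(6):X}")  # 3F
```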

Memory

These tests involve disabling all turbo modes in the system, forcing it to run at base frequency, and then using both a memory latency checker (Intel’s Memory Latency Checker works equally well for both platforms) and AIDA64 to probe cache bandwidth.

Office

  • Chromium Compile: Windows VC++ Compile of Chrome 56 (same as 2017)
  • PCMark10: Primary data will be the overview results – subtest results will be in Bench
  • 3DMark Physics: We test every physics sub-test for Bench, and report the major ones (new)
  • GeekBench4: By request (new)
  • SYSmark 2018: Recently released by BAPCo, currently automating it into our suite (new, when feasible)

System

  • Application Load: Time to load GIMP 2.10.4 (new)
  • FCAT: Time to process a 90 second ROTR 1440p recording (same as 2017)
  • 3D Particle Movement: Particle distribution test (same as 2017) – we also have AVX2 and AVX512 versions of this, which may be added later
  • Dolphin 5.0: Console emulation test (same as 2017)
  • DigiCortex: Sea Slug Brain simulation (same as 2017)
  • y-Cruncher v0.7.6: Pi calculation with optimized instruction sets for new CPUs (new)
  • Agisoft Photoscan 1.3.3: 2D image to 3D modelling tool (updated)

Render

  • Corona 1.3: Performance renderer for 3dsMax, Cinema4D (same as 2017)
  • Blender 2.79b: Render of bmw27 on CPU (updated to 2.79b)
  • LuxMark v3.1 C++ and OpenCL: Test of different rendering code paths (same as 2017)
  • POV-Ray 3.7.1: Built-in benchmark (updated)
  • CineBench R15: Older Cinema4D test, will likely remain in Bench (same as 2017)

Encoding

  • 7-zip 1805: Built-in benchmark (updated to v1805)
  • WinRAR 5.60b3: Compression test of directory with video and web files (updated to 5.60b3)
  • AES Encryption: In-memory AES performance. Slightly older test. (same as 2017)
  • Handbrake 1.1.0: Logitech C920 1080p60 input file, transcoded into three formats for streaming/storage:
    • 720p60, x264, 6000 kbps CBR, Fast, High Profile
    • 1080p60, x264, 3500 kbps CBR, Faster, Main Profile
    • 1080p60, HEVC, 3500 kbps VBR, Fast, 2-Pass Main Profile

Web

  • WebXPRT3: The latest WebXPRT test (updated)
  • WebXPRT15: Similar to 3, but slightly older. (same as 2017)
  • Speedometer2: Javascript Framework test (new)
  • Google Octane 2.0: Deprecated but popular web test (same as 2017)
  • Mozilla Kraken 1.1: Deprecated but popular web test (same as 2017)

Legacy (same as 2017)

  • 3DPM v1: Older version of 3DPM, very naïve code
  • x264 HD 3.0: Older transcode benchmark
  • Cinebench R11.5 and R10: Representative of different coding methodologies

Linux (when feasible)

When in full swing, we wish to return to running LinuxBench 1.0. This was in our 2016 test, but was ditched in 2017 as it added an extra complication layer to our automation. By popular request, we are going to run it again.

Integrated and CPU Gaming

We have recently automated around a dozen games at four different performance levels. A good number of games will have frame time data, however due to automation complications, some will not. The idea is that we get a good overview of a number of different genres and engines for testing.

For our CPU Gaming tests, we will be running on an NVIDIA GTX 1080. For the CPU benchmarks, we use an RX460 as we now have several units for concurrent testing.

In previous years we tested multiple GPUs on a small number of games – this time around, due to a Twitter poll I did which turned out exactly 50:50, we are doing it the other way around: more games, fewer GPUs.

Scale Up vs Scale Out: Benefits of Automation

One comment we get every now and again is that automation isn’t the best way of testing – there’s a higher barrier to entry, and it limits the tests that can be done. From our perspective, despite taking a little while to program properly (and get it right), automation means we can do several things:

  1. Guarantee consistent breaks between tests for cooldown to occur, rather than variable cooldown times based on ‘if I’m looking at the screen’
  2. It allows us to simultaneously test several systems at once. I currently run five systems in my office (limited by the number of 4K monitors, and space) which means we can process more hardware at the same time
  3. We can leave tests to run overnight, very useful for a deadline
  4. With a good enough script, tests can be added very easily

Our benchmark suite collates all the results and spits out data as the tests are running to a central storage platform, which I can probe mid-run to update data as it comes through. This also acts as a mental check in case any of the data might be abnormal.

We do have one major limitation, and that rests on the side of our gaming tests. We are running multiple tests through one Steam account, some of which (like GTA) are online only. As Steam only lets one system play on an account at once, our gaming script probes Steam’s own APIs to determine if we are ‘online’ or not, and to run offline tests until the account is free to be logged in on that system. Depending on the number of games we test that absolutely require online mode, it can be a bit of a bottleneck.
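Steam’s GetPlayerSummaries Web API exposes a `gameid` field when an account is in-game; a minimal sketch of that decision step (our assumption of what the script checks — the actual probing may differ):

```python
import json

def account_busy(player_summary_json: str) -> bool:
    # GetPlayerSummaries returns {"response": {"players": [...]}};
    # a present 'gameid' means the account is currently in a game.
    players = json.loads(player_summary_json)["response"]["players"]
    return any("gameid" in p for p in players)

sample = '{"response": {"players": [{"personastate": 1, "gameid": "271590"}]}}'
print(account_busy(sample))  # True
```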

Benchmark Suite Updates

As always, we do take requests. It helps us understand the workloads that everyone is running and plan accordingly.

A side note on software packages: we have had requests for tests on software such as ANSYS, or other professional grade software. The downside of testing this software is licensing and scale. Most of these companies do not particularly care about us running tests, and state it’s not part of their goals. Others, like Agisoft, are more than willing to help. If you are involved in these software packages, the best way to see us benchmark them is to reach out. We have special versions of software for some of our tests, and if we can get something that works, and relevant to the audience, then we shouldn’t have too much difficulty adding it to the suite.



CPU Performance: System Tests

Our System Test section focuses significantly on real-world testing and user experience, with a slight nod to throughput. In this section we cover application loading time, image processing, simple scientific physics, emulation, neural simulation, optimized compute, and 3D model development, with a combination of readily available and custom software. For some of these tests, the bigger suites such as PCMark do cover them (we publish those values in our office section), although multiple perspectives is always beneficial. In all our tests we will explain in-depth what is being tested, and how we are testing.

All of our benchmark results can also be found in our benchmark engine, Bench.

Application Load: GIMP 2.10.4

One of the most important aspects of user experience and workflow is how fast a system responds. A good test of this is to see how long it takes for an application to load. Most applications these days, when on an SSD, load almost instantly, however some office tools require asset pre-loading before being available. Most operating systems employ caching as well, so when certain software is loaded repeatedly (web browser, office tools), it can be initialized much quicker.

In our last suite, we tested how long it took to load a large PDF in Adobe Acrobat. Unfortunately this test was a nightmare to program for, and didn’t transfer over to Win10 RS3 easily. In the meantime we discovered an application that can automate this test, and we put it up against GIMP, a popular free open-source photo editing tool and the major alternative to Adobe Photoshop. We set it to load a large 50MB design template, and perform the load 10 times with 10 seconds in-between each. Because the first 3-5 results are often slower than the rest due to caching, and time to cache can be inconsistent, we take the average of the last five results to show CPU processing on cached loading.
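The reduction step described above is simple; as a sketch:

```python
def cached_load_time(load_times):
    """Average of the last five of ten runs, discarding cache-warming runs."""
    return sum(load_times[-5:]) / 5.0

# e.g. first runs slowed by cold caches, the rest steady:
print(cached_load_time([5.1, 4.0, 3.2, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0]))  # 2.0
```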

AppTimer: GIMP 2.10.4

 

FCAT: Image Processing

The FCAT software was developed to help detect microstuttering, dropped frames, and runt frames in graphics benchmarks when two accelerators were paired together to render a scene. Due to game engines and graphics drivers, not all GPU combinations performed ideally, which led to this software fixing colors to each rendered frame and dynamic raw recording of the data using a video capture device.

The FCAT software takes that recorded video, which in our case is 90 seconds of a 1440p run of Rise of the Tomb Raider, and processes that color data into frame time data so the system can plot an ‘observed’ frame rate, and correlate that to the power consumption of the accelerators. This test, by virtue of how quickly it was put together, is single threaded. We run the process and report the time to completion.
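The core of the color-to-frame-time conversion can be sketched as follows (a simplified illustration under our assumptions, not FCAT’s actual implementation): each rendered frame carries one color, so the length of each run of identical colors in the capture gives that frame’s display time.

```python
from itertools import groupby

def observed_frame_times(bar_colors, capture_fps=60.0):
    # A frame whose color persists for N captured frames was displayed
    # for N / capture_fps seconds.
    return [len(list(run)) / capture_fps for _, run in groupby(bar_colors)]
```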

FCAT Processing ROTR 1440p GTX980Ti Data

 

3D Particle Movement v2.1: Brownian Motion

Our 3DPM test is a custom built benchmark designed to simulate six different particle movement algorithms of points in a 3D space. The algorithms were developed as part of my PhD., and while they ultimately perform best on a GPU, they provide a good idea of how instruction streams are interpreted by different microarchitectures.

A key part of the algorithms is the random number generation – we use relatively fast generation which ends up implementing dependency chains in the code. The upgrade over the naïve first version of this code solved for false sharing in the caches, a major bottleneck. We are also looking at AVX2 and AVX512 versions of this benchmark for future reviews.

For this test, we run a stock particle set over the six algorithms for 20 seconds apiece, with 10 second pauses, and report the total rate of particle movement, in millions of operations (movements) per second. We have a non-AVX version and an AVX version, with the latter implementing AVX512 and AVX2 where possible.
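The scoring arithmetic implied above can be sketched like so (our reconstruction of the reported metric, not the benchmark’s own code):

```python
def movement_rate_mops(total_movements: int, seconds: float) -> float:
    # Millions of particle movements (operations) per second.
    return total_movements / seconds / 1e6

def suite_score(movements_per_algorithm, seconds_per_algorithm=20.0):
    # Six algorithms at 20 seconds apiece; score is the combined rate.
    total = sum(movements_per_algorithm)
    return movement_rate_mops(total, seconds_per_algorithm * len(movements_per_algorithm))
```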

3DPM v2.1 can be downloaded from our server: 3DPMv2.1.rar (13.0 MB)

3D Particle Movement v2.1

3D Particle Movement v2.1 (with AVX)

 

Dolphin 5.0: Console Emulation

One of the most requested tests for our suite involves console emulation. Being able to pick up a game from an older system and run it as expected depends on the overhead of the emulator: it takes a significantly more powerful x86 system to be able to accurately emulate an older non-x86 console, especially if code for that console was made to abuse certain physical bugs in the hardware.

For our test, we use the popular Dolphin emulation software, and run a compute project through it to determine how quickly our processors can emulate a standard console system. In this test, a Nintendo Wii would take around 1050 seconds.

The latest version of Dolphin can be downloaded from https://dolphin-emu.org/

Dolphin 5.0 Render Test

 

DigiCortex 1.20: Sea Slug Brain Simulation

This benchmark was originally designed for simulation and visualization of neuron and synapse activity, as is commonly found in the brain. The software comes with a variety of benchmark modes, and we take the small benchmark which runs a 32k neuron / 1.8B synapse simulation, equivalent to a Sea Slug.

Example of a 2.1B neuron simulation

We report the results as the ability to simulate the data as a fraction of real-time, so anything above a ‘one’ is suitable for real-time work. Out of the two modes, a ‘non-firing’ mode which is DRAM heavy and a ‘firing’ mode which has CPU work, we choose the latter. Despite this, the benchmark is still affected by DRAM speed a fair amount.
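The reported metric reduces to a simple ratio; as a sketch:

```python
def realtime_fraction(simulated_seconds: float, wall_clock_seconds: float) -> float:
    # > 1.0 means the simulation runs faster than real time,
    # i.e. the system is suitable for real-time work.
    return simulated_seconds / wall_clock_seconds

print(realtime_fraction(30.0, 60.0))  # 0.5, i.e. half of real-time speed
```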

DigiCortex can be downloaded from http://www.digicortex.net/

DigiCortex 1.20 (32k Neuron, 1.8B Synapse)

 

y-Cruncher v0.7.6: Microarchitecture Optimized Compute

I’ve known about y-Cruncher for a while, as a tool to help compute various mathematical constants, but it wasn’t until I began talking with its developer, Alex Yee, a researcher from NWU and now software optimization developer, that I realized that he has optimized the software like crazy to get the best performance. Naturally, any simulation that can take 20+ days can benefit from a 1% performance increase! Alex started y-cruncher as a high-school project, but it is now at a state where Alex is keeping it up to date to take advantage of the latest instruction sets before they are even made available in hardware.

For our test we run y-cruncher v0.7.6 through all the different optimized variants of the binary, single threaded and multi-threaded, including the AVX-512 optimized binaries. The test is to calculate 250m digits of Pi, and we use the single threaded and multi-threaded versions of this test.

Users can download y-cruncher from Alex’s website: http://www.numberworld.org/y-cruncher/

y-Cruncher 0.7.6 Single Thread, 250m Digits
y-Cruncher 0.7.6 Multi-Thread, 250m Digits

 

Agisoft Photoscan 1.3.3: 2D Image to 3D Model Conversion

One of the ISVs that we have worked with for a number of years is Agisoft, who develop software called PhotoScan that transforms a number of 2D images into a 3D model. This is an important tool in model development and archiving, and relies on a number of single threaded and multi-threaded algorithms to go from one side of the computation to the other.

In our test, we take v1.3.3 of the software with a good sized data set of 84 x 18 megapixel photos and push it through a reasonably fast variant of the algorithms, but is still more stringent than our 2017 test. We report the total time to complete the process.

Agisoft’s Photoscan website can be found here: http://www.agisoft.com/

Agisoft Photoscan 1.3.3, Complex Test

 



CPU Performance: Rendering Tests

Rendering is often a key target for processor workloads, lending itself to a professional environment. It comes in different formats as well, from 3D rendering through rasterization, such as games, or by ray tracing, and invokes the ability of the software to manage meshes, textures, collisions, aliasing, physics (in animations), and discarding unnecessary work. Most renderers offer CPU code paths, while a few use GPUs and select environments use FPGAs or dedicated ASICs. For big studios however, CPUs are still the hardware of choice.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: Performance Render

An advanced performance based renderer for software such as 3ds Max and Cinema 4D, the Corona benchmark renders a generated scene as a standard under its 1.3 software version. Normally the GUI implementation of the benchmark shows the scene being built, and allows the user to upload the result as a ‘time to complete’.

We got in contact with the developer who gave us a command line version of the benchmark that does a direct output of results. Rather than reporting time, we report the average number of rays per second across six runs, as the performance scaling of a result per unit time is typically visually easier to understand.

The Corona benchmark website can be found at https://corona-renderer.com/benchmark

Corona 1.3 Benchmark

 

Blender 2.79b: 3D Creation Suite

A high profile rendering tool, Blender is open-source allowing for massive amounts of configurability, and is used by a number of high-profile animation studios worldwide. The organization recently released a Blender benchmark package, a couple of weeks after we had narrowed our Blender test for our new suite, however their test can take over an hour. For our results, we run one of the sub-tests in that suite through the command line - a standard ‘bmw27’ scene in CPU only mode, and measure the time to complete the render.

Blender can be downloaded at https://www.blender.org/download/

Blender 2.79b bmw27_cpu Benchmark

We had a small issue with Blender - after installing Intel IGP drivers, this version didn't want to work. We'll run through the tests that were affected when time permits.

LuxMark v3.1: LuxRender via Different Code Paths

As stated at the top, there are many different ways to process rendering data: CPU, GPU, Accelerator, and others. On top of that, there are many frameworks and APIs in which to program, depending on how the software will be used. LuxMark, a benchmark developed using the LuxRender engine, offers several different scenes and APIs.

Taken from the Linux Version of LuxMark

In our test, we run the simple ‘Ball’ scene on both the C++ and OpenCL code paths, but in CPU mode. This scene starts with a rough render and slowly improves the quality over two minutes, giving a final result in what is essentially an average ‘kilorays per second’.

LuxMark v3.1 C++
LuxMark v3.1 OpenCL

 

POV-Ray 3.7.1: Ray Tracing

The Persistence of Vision ray tracing engine is another well-known benchmarking tool, which was in a state of relative hibernation until AMD released its Zen processors, to which suddenly both Intel and AMD were submitting code to the main branch of the open source project. For our test, we use the built-in benchmark for all-cores, called from the command line.

POV-Ray can be downloaded from http://www.povray.org/

POV-Ray 3.7.1 Benchmark

 



CPU Performance: Office Tests

The Office test suite is designed around more industry-standard tests covering office workflows, video conferencing, and some synthetics, but we also bundle compiler performance in with this section. For users that have to evaluate hardware in general, these are usually the benchmarks that most consider.

All of our benchmark results can also be found in our benchmark engine, Bench.

PCMark 10: Industry Standard System Profiler

Futuremark, now known as UL, has developed benchmarks that have become industry standards for around two decades. The latest complete system test suite is PCMark 10, upgrading over PCMark 8 with updated tests and more OpenCL invested into use cases such as video streaming.

PCMark splits its scores into about 14 different areas, including application startup, web, spreadsheets, photo editing, rendering, video conferencing, and physics. We post all of these numbers in our benchmark database, Bench, however the key metric for the review is the overall score.

PCMark10 Extended Score

There's a slight issue running PCMark on systems that had IGP drivers installed before being restarted with a discrete GPU in place. We'll come back and rerun when time permits.

Chromium Compile: Windows VC++ Compile of Chrome 56

A large number of AnandTech readers are software engineers, looking at how the hardware they use performs. While compiling a Linux kernel is ‘standard’ for reviewers who often compile, our test is a little more varied – we are using the Windows instructions to compile Chrome, specifically a Chrome 56 build from March 2017, as that was when we built the test. Google quite handily gives instructions on how to compile on Windows, along with a 400k-file download for the repo.

In our test, using Google’s instructions, we use the MSVC compiler and ninja developer tools to manage the compile. As you may expect, the benchmark is variably threaded, with a mix of DRAM requirements that benefit from faster caches. Data procured in our test is the time taken for the compile, which we convert into compiles per day.
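The conversion from an elapsed compile time into the chart's 'compiles per day' rate is a simple ratio. As a sketch (the 90-minute example duration is illustrative, not a measured result):

```python
# Convert a measured compile duration into a 'compiles per day' rate,
# as used for the Chromium compile chart. Example duration is illustrative.
SECONDS_PER_DAY = 24 * 60 * 60  # 86400

def compiles_per_day(compile_seconds: float) -> float:
    """Higher is better: how many full compiles fit into one day."""
    return SECONDS_PER_DAY / compile_seconds

# e.g. a hypothetical compile that takes 90 minutes
rate = compiles_per_day(90 * 60)
print(f"{rate:.2f} compiles/day")  # 16.00 compiles/day
```

Framing the result as a rate rather than a time makes 'higher is better' consistent with the rest of the charts.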

Compile Chromium (Rate)

3DMark Physics: In-Game Physics Compute

Alongside PCMark is 3DMark, Futuremark’s (UL’s) gaming test suite. Each gaming test consists of one or two GPU-heavy scenes, along with a physics test that is indicative of when the test was written and the platform it is aimed at. The main overriding tests, in order of complexity, are Ice Storm, Cloud Gate, Sky Diver, Fire Strike, and Time Spy.

Some of the subtests offer variants, such as Ice Storm Unlimited, which is aimed at mobile platforms with an off-screen rendering, or Fire Strike Ultra which is aimed at high-end 4K systems with lots of the added features turned on. Time Spy also currently has an AVX-512 mode (which we may be using in the future).

For our tests, we report in Bench the results from every physics test, but for the sake of the review we keep it to the most demanding variant of each scene: Ice Storm Unlimited, Cloud Gate, Sky Diver, Fire Strike Ultra, and Time Spy.

3DMark Physics - Cloud Gate3DMark Physics - Sky Diver3DMark Physics - Fire Strike Ultra3DMark Physics - Time Spy

GeekBench4: Synthetics

A common tool for cross-platform testing between mobile, PC, and Mac, GeekBench 4 is an ultimate exercise in synthetic testing across a range of algorithms looking for peak throughput. Tests include encryption, compression, fast Fourier transform, memory operations, n-body physics, matrix operations, histogram manipulation, and HTML parsing.

I’m including this test due to popular demand, although the results do come across as overly synthetic. Many users put a lot of weight behind the test because it is compiled across different platforms (albeit with different compilers).

We record the main subtest scores (Crypto, Integer, Floating Point, Memory) in our benchmark database, but for the review we post the overall single and multi-threaded results.

Geekbench 4 - ST OverallGeekbench 4 - MT Overall



CPU Performance: Encoding Tests

With the rise of streaming, vlogs, and video content as a whole, encoding and transcoding tests are becoming ever more important. Not only do more home users and gamers need to convert video files into something more manageable, for streaming or archival purposes, but the servers that manage the output also deal with data and log files through compression and decompression. Our encoding tasks are focused on these important scenarios, with input from the community on the best implementation of real-world testing.

All of our benchmark results can also be found in our benchmark engine, Bench.

Handbrake 1.1.0: Streaming and Archival Video Transcoding

A popular open source tool, Handbrake is the anything-to-anything video conversion software that a number of people use as a reference point. The danger always lies in version numbers and optimization; for example, the latest versions of the software can take advantage of AVX-512 and OpenCL to accelerate certain types of transcoding and algorithms. The version we use here is a pure CPU play, with common transcoding variations.

We have split Handbrake up into several tests, using a Logitech C920 1080p60 native webcam recording (essentially a streamer recording), and convert it into two types of streaming formats and one for archival. The output settings used are:

  • 720p60 at 6000 kbps constant bit rate, fast setting, high profile
  • 1080p60 at 3500 kbps constant bit rate, faster setting, main profile
  • 1080p60 HEVC at 3500 kbps variable bit rate, fast setting, main profile

Handbrake 1.1.0 - 720p60 x264 6000 kbps FastHandbrake 1.1.0 - 1080p60 x264 3500 kbps FasterHandbrake 1.1.0 - 1080p60 HEVC 3500 kbps Fast

7-zip v1805: Popular Open-Source Encoding Engine

Out of our compression/decompression tool tests, 7-zip is the most requested and comes with a built-in benchmark. For our test suite, we’ve pulled the latest version of the software and we run the benchmark from the command line, reporting the compression, decompression, and a combined score.

It is notable in this benchmark that the latest multi-die processors have very bimodal performance between compression and decompression, performing well in one and badly in the other. There are also discussions around how the Windows scheduler is placing each thread. As we get more results, it will be interesting to see how this plays out.

Please note, if you plan to share out the Compression graph, please include the Decompression one. Otherwise you’re only presenting half a picture.

7-Zip 1805 Compression7-Zip 1805 Decompression7-Zip 1805 Combined

 

WinRAR 5.60b3: Archiving Tool

My compression tool of choice is often WinRAR, having been one of the first tools a number of my generation used over two decades ago. The interface has not changed much, although the integration with Windows right click commands is always a plus. It has no in-built test, so we run a compression over a set directory containing over thirty 60-second video files and 2000 small web-based files at a normal compression rate.

WinRAR is variable threaded but also susceptible to caching, so in our test we run it 10 times and take the average of the last five, leaving the test purely for raw CPU compute performance.
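The run-it-ten-times, average-the-last-five approach can be sketched as follows (the per-run timings below are hypothetical; in practice each entry would be a measured WinRAR run):

```python
def steady_state_average(timings: list[float], warmup: int = 5) -> float:
    """Average only the runs after the warm-up period, so OS file
    caching settles and the result reflects raw CPU compute."""
    steady = timings[warmup:]
    return sum(steady) / len(steady)

# Hypothetical per-run times in seconds: the first runs are slower
# while the file cache warms up, later runs converge.
runs = [41.2, 35.0, 33.1, 32.8, 32.7, 32.6, 32.5, 32.6, 32.5, 32.6]
print(f"{steady_state_average(runs):.2f} s")
```

Discarding the warm-up runs keeps disk and cache effects out of a test that is meant to measure the CPU.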

WinRAR 5.60b3

 

AES Encryption: File Security

A number of platforms, particularly mobile devices, are now offering encryption by default with file systems in order to protect the contents. Windows based devices have these options as well, often applied by BitLocker or third-party software. In our AES encryption test, we used the discontinued TrueCrypt for its built-in benchmark, which tests several encryption algorithms directly in memory.

The data we take for this test is the combined AES encrypt/decrypt performance, measured in gigabytes per second. The software does use AES instructions on processors that offer hardware acceleration, however not AVX-512.

AES Encoding

 



CPU Performance: Web and Legacy Tests

While more the focus of low-end and small form factor systems, web-based benchmarks are notoriously difficult to standardize. Modern web browsers are frequently updated, with no recourse to disable those updates, and as such there is difficulty in keeping a common platform. The fast paced nature of browser development means that version numbers (and performance) can change from week to week. Despite this, web tests are often a good measure of user experience: a lot of what most office work is today revolves around web applications, particularly email and office apps, but also interfaces and development environments. Our web tests include some of the industry standard tests, as well as a few popular but older tests.

We have also included our legacy benchmarks in this section, representing a stack of older code for popular benchmarks.

All of our benchmark results can also be found in our benchmark engine, Bench.

WebXPRT 3: Modern Real-World Web Tasks, including AI

The company behind the XPRT test suites, Principled Technologies, has recently released the latest web test, and rather than attaching a year to the name has just called it ‘3’. This latest test (as of when we built the suite) has built upon and developed the ethos of previous tests: user interaction, office compute, graph generation, list sorting, HTML5, image manipulation, and even goes as far as some AI testing.

For our benchmark, we run the standard test which goes through the benchmark list seven times and provides a final result. We run this standard test four times, and take an average.

Users can access the WebXPRT test at http://principledtechnologies.com/benchmarkxprt/webxprt/

WebXPRT 3 (2018)

WebXPRT 2015: HTML5 and Javascript Web UX Testing

The older version of WebXPRT is the 2015 edition, which focuses on a slightly different set of web technologies and frameworks than those in use today. This is still a relevant test, especially for users interacting with not-the-latest web applications on the market, of which there are a lot. Web framework development is often very quick but has high turnover: frameworks are quickly developed, built upon, used, and then developers move on to the next, and adjusting an application to a new framework is an arduous task, especially with rapid development cycles. This leaves a lot of applications ‘fixed in time’, and relevant to the user experience for many years.

Similar to WebXPRT3, the main benchmark is a sectional run repeated seven times, with a final score. We repeat the whole thing four times, and average those final scores.

WebXPRT15

Speedometer 2: JavaScript Frameworks

Our newest web test is Speedometer 2, an aggregate test over a series of JavaScript frameworks that each do three simple things: build a list, enable each item in the list, and remove the list. All the frameworks implement the same visual cues, but obviously apply them from different coding angles.

Our test goes through the list of frameworks and produces a final score indicative of ‘rpm’, one of the benchmark’s internal metrics. We report this final score.

Speedometer 2

Google Octane 2.0: Core Web Compute

A popular web test for several years, but now no longer being updated, is Octane, developed by Google. Version 2.0 of the test performs the best part of two-dozen compute related tasks, such as regular expressions, cryptography, ray tracing, emulation, and Navier-Stokes physics calculations.

The test gives each sub-test a score and produces a geometric mean of the set as a final result. We run the full benchmark four times, and average the final results.
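Combining sub-test scores with a geometric mean, as Octane does, can be sketched like so (the sample sub-test scores are made up for illustration; they are not Octane results):

```python
import math

def geometric_mean(scores: list[float]) -> float:
    """Nth root of the product of N scores, computed in log space
    for numerical safety with large values."""
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

# Hypothetical sub-test scores (e.g. regex, crypto, ray tracing)
subtests = [30000.0, 45000.0, 22500.0]
print(round(geometric_mean(subtests)))
```

A geometric mean is the usual choice here because it stops one outlier sub-test from dominating the overall score the way an arithmetic mean would.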

Google Octane 2.0

Mozilla Kraken 1.1: Core Web Compute

Even older than Octane is Kraken, this time developed by Mozilla. This is an older test that does similar computational mechanics, such as audio processing or image filtering. Kraken seems to produce a highly variable result depending on the browser version, as it is a test that is keenly optimized for.

The main benchmark runs through each of the sub-tests ten times and produces an average time to completion for each loop, given in milliseconds. We run the full benchmark four times and take an average of the time taken.

Mozilla Kraken 1.1

3DPM v1: Naïve Code Variant of 3DPM v2.1

The first legacy test in the suite is the first version of our 3DPM benchmark. This is the ultimate naïve version of the code, as if it were written by a scientist with no knowledge of how computer hardware, compilers, or optimization works (which, in fact, it was at the start). This represents a large body of scientific simulation out in the wild, where getting the answer is more important than getting it fast (a correct result in 4 days is acceptable, rather than sending someone away for a year to learn to code and getting the result in 5 minutes).

In this version, the only real optimization was in the compiler flags (-O2, -fp:fast), compiling it in release mode, and enabling OpenMP in the main compute loops. The loops were not configured for function size, and one of the key slowdowns is false sharing in the cache. It also has long dependency chains based on the random number generation, which leads to relatively poor performance on specific compute microarchitectures.
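To illustrate what such naïve simulation code looks like, here is a minimal sketch of 3D particle random movement in the spirit of 3DPM. This is not the actual benchmark code; the particle count, step count, and direction sampling are arbitrary placeholders:

```python
import math
import random

def move_particles(n_particles: int, n_steps: int, seed: int = 42) -> list[tuple]:
    """Naive 3D random walk: each particle takes unit-length steps in a
    random direction. The serial dependency on the RNG between steps
    mirrors the long dependency chains described above."""
    rng = random.Random(seed)
    final_positions = []
    for _ in range(n_particles):
        x = y = z = 0.0
        for _ in range(n_steps):
            # Naively pick a direction from two random angles.
            theta = rng.uniform(0.0, math.pi)
            phi = rng.uniform(0.0, 2.0 * math.pi)
            x += math.sin(theta) * math.cos(phi)
            y += math.sin(theta) * math.sin(phi)
            z += math.cos(theta)
        final_positions.append((x, y, z))
    return final_positions

positions = move_particles(n_particles=4, n_steps=1000)
print(positions[0])
```

Even this toy version shows the structural issue: each step depends on the previous RNG draw, so the inner loop cannot be vectorized without rethinking the algorithm.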

3DPM v1 can be downloaded with our 3DPM v2 code here: 3DPMv2.1.rar (13.0 MB)

3DPM v1 Single Threaded3DPM v1 Multi-Threaded

x264 HD 3.0: Older Transcode Test

This transcoding test is super old, and was used by Anand back in the day of Pentium 4 and Athlon II processors. Here a standardized 720p video is transcoded with a two-pass conversion, with the benchmark showing the frames-per-second of each pass. This benchmark is single-threaded, and between some micro-architectures we seem to actually hit an instructions-per-clock wall.

x264 HD 3.0 Pass 1x264 HD 3.0 Pass 2



Gaming: World of Tanks enCore

Unlike most other commonly played MMOs, or massively multiplayer online games, World of Tanks is set in the mid-20th century and allows players to take control of a range of military-based armored vehicles. World of Tanks (WoT) is developed and published by Wargaming, who are based in Belarus, with the game’s soundtrack primarily composed by Belarusian composer Sergey Khmelevsky. The game offers multiple entry points, including a free-to-play element, as well as allowing players to pay a fee to open up more features. One of the most interesting things about this tank-based MMO is that it achieved eSports status when it debuted at the World Cyber Games back in 2012.

World of Tanks enCore is a demo application for a new and unreleased graphics engine penned by the Wargaming development team. Over time the new core engine will be implemented into the full game, upgrading the game’s visuals with key elements such as improved water, flora, shadows, and lighting, as well as other objects such as buildings. The World of Tanks enCore demo app not only offers insight into the impending game engine changes, but also allows users to check system performance to see if the new engine runs optimally on their system.

AnandTech CPU Gaming 2019 Game List
Game | Genre | Release Date | API | IGP | Low | Med | High
World of Tanks enCore | Driving / Action | Feb 2018 | DX11 | 768p Minimum | 1080p Medium | 1080p Ultra | 4K Ultra

All of our benchmark results can also be found in our benchmark engine, Bench.

World of Tanks enCore: Average FPS and 95th Percentile (IGP / Low / Medium / High)


Gaming: Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise hit the shelves on April 14th 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA doesn’t provide graphical presets, but opens up the options to users and extends the boundaries by pushing even the hardest systems to the limit using Rockstar’s Advanced Game Engine under DirectX 11. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, when cranked up to maximum it creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark. The in-game benchmark consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence that lasts around 90 seconds. We use only the final part of the benchmark, which combines a flight scene in a jet, an inner-city drive-by through several intersections, and finally ramming a tanker that explodes, causing other cars to explode as well. This is a mix of distance rendering followed by a detailed near-rendering action sequence, and the title thankfully spits out frame time data.

AnandTech CPU Gaming 2019 Game List
Game | Genre | Release Date | API | IGP | Low | Med | High
Grand Theft Auto V | Open World | Apr 2015 | DX11 | 720p Low | 1080p High | 1440p Very High | 4K Ultra

There are no presets for the graphics options in GTA: the user adjusts options such as population density and distance scaling on sliders, while others such as texture/shadow/shader/water quality go from Low to Very High. Other options include MSAA, soft shadows, post effects, shadow resolution, and extended draw distance. There is a handy option at the top which shows how much video memory the chosen options are expected to consume, with obvious repercussions if a user requests more video memory than is present on the card (although there’s no obvious indication for the opposite case, a low-end GPU with lots of video memory, like an R7 240 4GB).

All of our benchmark results can also be found in our benchmark engine, Bench.

Grand Theft Auto V: Average FPS and 95th Percentile (IGP / Low / Medium / High)

 



Gaming: F1 2018

Aside from keeping up to date with the Formula One world, F1 2017 added HDR support, which F1 2018 has maintained; beyond that, we should see newer versions of Codemasters' EGO engine find their way into future F1 titles. Graphically demanding in its own right, F1 2018 keeps a useful racing-type graphics workload in our benchmarks.

We use the in-game benchmark, set to run on the Montreal track in the wet, driving as Lewis Hamilton from last place on the grid. Data is taken over a one-lap race.

AnandTech CPU Gaming 2019 Game List
Game | Genre | Release Date | API | IGP | Low | Med | High
F1 2018 | Racing | Aug 2018 | DX11 | 720p Low | 1080p Med | 4K High | 4K Ultra

All of our benchmark results can also be found in our benchmark engine, Bench.

F1 2018: Average FPS and 95th Percentile (IGP / Low / Medium / High)


Power Consumption

How TDP Works

Over the last decade, while the use of the term TDP (thermal design power) has not changed much, the way that processors use a power budget has. Inside each processor, Intel defines several power levels based on its capabilities and expected operating environments. This sounds all well and good; however, these power levels and capabilities can be adjusted at the firmware level, allowing OEMs to decide how they want the processors to perform in their systems. Ultimately this gives a really fuzzy reading of exactly what the power consumption of a processor will be when it is in a system.

To simplify, there are three numbers to be aware of. Intel calls these numbers PL1 (power level 1), PL2 (power level 2), and T (or tau).

  1. PL1 is the effective long-term expected steady state power consumption of a processor. For all intents and purposes, the PL1 is usually defined as the TDP of a processor. So if the TDP is 80W, then PL1 is 80W.
  2. PL2 is the short-term maximum power draw for a processor. This number is higher than PL1, and the processor goes into this state when a workload is applied, allowing the processor to use its turbo modes up to the maximum PL2 value. This means that if Intel has defined a processor with a series of turbo modes, they will only work when PL2 is the driving variable for maximum power consumption. Turbo does not work in PL1 mode.
  3. Tau is a timing variable. It dictates how long a processor should stay in PL2 mode before hitting a PL1 mode. Note that Tau is not dependent on power consumption, nor is it dependent on the temperature of the processor (it is expected that if the processor hits a thermal limit, then a different set of super low voltage/frequency values are used and PL1/PL2 is discarded).

So let us go on a journey where a large workload is applied to a processor.

Firstly, it starts in PL2 mode. If a single-threaded workload is used, then we should hit the top turbo value as listed in the spec sheet. Normally a single thread will consume nowhere near the PL2 power limit. As we load up the cores, the processor reacts by reducing the turbo frequency in line with the per-core turbo values dictated by Intel. If the power consumption of the chip hits the PL2 value, then the frequency is adjusted so PL2 is never exceeded.

When the system has a substantial workload applied for a fixed time, in this case ‘tau’ seconds, the firmware should immediately invoke PL1 as the new power limit. The turbo tables no longer apply.

If the workload applied produces a power consumption above PL1, then the frequency and voltages are adjusted such that the overall power consumption of the chip is within the PL1 value. This usually means that the whole processor reduces in frequency, often to the base frequency, for the duration of the workload. This means that temperatures on the processor should decrease, increasing the longevity of the processor.

PL1 stays in place until the workload is removed and a CPU core hits an idle state for a fixed amount of time (usually sub 5-seconds). After this, the system can re-enable PL2 again if another workload is applied.

So here are some example numbers – Intel lists several in its specification sheets for the different processors. In this case, I will take a consumer-grade Core i7-8700K. For this processor, we have the following:

  • PL1 = TDP = 95 W
  • PL2 = TDP * 1.25 = 118.75 W
  • Tau = 8 seconds

In this case, the system should be able to boost up to ~119W for eight seconds, before being pulled back down to 95W. This has actually been in place for a number of generations of processors, and most of the time it didn’t actually matter, as the power draw for the full chip was often well below the PL1 value even at full load.
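As a worked sketch, the defaults follow directly from the TDP (the 1.25 multiplier is the one quoted above; remember that OEM firmware can override all three values):

```python
def intel_default_limits(tdp_w: float, tau_s: float = 8.0) -> dict:
    """Default power levels as described for the Core i7-8700K:
    PL1 equals TDP, PL2 is TDP * 1.25, tau is the boost window."""
    return {"PL1_W": tdp_w, "PL2_W": tdp_w * 1.25, "tau_s": tau_s}

print(intel_default_limits(95.0))  # i7-8700K: PL1=95, PL2=118.75, tau=8
print(intel_default_limits(80.0))  # 80W parts: PL1=80, PL2=100
```

Note how the 80W case lands on a 100W PL2, which matches the short-term power behaviour measured later in this section.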

However, this is where it gets really stupid: the motherboard vendors got involved, because PL1, PL2, and Tau are configurable in firmware. Motherboard manufacturers can do what they like, and that gets frustrating.

With a commercial product like Xeon E, we would expect a motherboard to adhere to Intel’s standards more closely than a high-end consumer motherboard would, right? Well, this one does.

Power (Package), Full Load

The high-end Xeon E-2186G has a TDP of 95W and a PL2 of 119W, and if we record the load power after a few seconds, it hits 116W. The long-term power, measured about 20 seconds into the load, is 95W. The other chips all have a TDP of 80W and a PL2 of 100W, and we record the short load power at 95-97W and the long load power at 80W. These numbers held true even when we ran a workload for a considerable time, showing that PL2 and Tau are both set to Intel's recommended values. No other desktop processor we've tested has ever done this, unless it was bundled in an SFF system.



Xeon E Six Core Conclusion

The Xeon E family is actually a niche element of Intel’s portfolio. For the datacenter, when costs are amortized at scale, it usually becomes beneficial to invest in the big-iron Xeon-SP processors to take advantage of more performance, more PCIe lanes, additional connectivity, and the support that OEMs provide. Xeon E ultimately ends up in commercial systems where some form of grunt is needed, but also where ECC is required or an IT department is looking for strict control in a company-wide deployment. That said, there are a number of home users who may wish to invest in this platform for their personal setups.

Intel’s game here is features for money: the Xeon E parts do ECC and advanced management, some parts come with graphics, and this tacks a few more dollars onto a system build. The only questions are which processor to get, and how much it will cost.

The Xeon E family is currently split into six-core processors and four-core processors, varying mostly in frequency rather than power, with a couple of low-end processors without hyperthreading. All of the E-2100 parts come from Intel’s Coffee Lake microarchitecture, similar to Intel's 8th Gen Core processors, and in fact we’re pretty sure it’s the same silicon die with a few features in silicon enabled/disabled (as is usually the case). In this review, we tested most of the six-core offerings that should be available in the next few weeks.

Even on paper, there is not much to separate the six-core offerings. The E-2126G starts at $255 tray price, with six cores and no hyperthreading, and the range goes up to the E-2186G at $450 tray price with six cores, hyperthreading, and higher base and turbo frequencies. With all of the six-core processors here varying from 3.3 GHz to 3.8 GHz base frequency and 4.5 GHz to 4.7 GHz turbo frequency, they should perform roughly the same.

And indeed, in our testing this is what we find. These parts are so eerily similar to each other that the results form a super tight grouping, with run-to-run variation deciding which processor takes the top spot. The extra TDP of the E-2186G at 95W, rather than the 80W of the other processors, doesn’t mean much, as these processors always go into a TDP-limited mode.

In this case, the recommendation is simple: get the E-2136 if integrated graphics isn’t needed, and the E-2146G if it is. These two Xeon E processors both hit the price/performance sweet spot by having the same practical performance as the top-end E-2186G, while costing 33%+ less.

There is no real reason to recommend the E-2186G, unless you want to say you have the top Xeon E – and who really is going to boast about that?

Upgrading from Xeon E3-1200 v5

For this review, we were able to rustle up two of the top Xeon E3-1200 v5 parts for comparison. Back in the day, the E3-1280 v5 was the top end processor and was offered at a cool list price of $612, or the smarter option was the E3-1270 v5 for over $200 less for 100-200 MHz lower frequencies.

For users on the older v5 parts, an upgrade to the Xeon E family would mean a complete system refit: motherboard, CPU, and memory. This means that the upgrade cycle would need to offer a good deal of performance in order to be worthwhile. If we compare the E3-1270 v5 (without graphics) to our recommended E-2136, then the performance uplift is substantial.

With the E-2136, there are two additional cores to play with, as well as a much higher turbo frequency (the list price is also lower, for what it is worth, potentially lowering the capex of the v5 upgrade). In our benchmarks, some of the quick tests saw a 10-20% speedup, while the larger throughput tests saw a 50-75% speedup, such as our compile test, which could process 70% more data in the same time. Encoding was up 53%, rendering up 68%, and emulation up 19%.
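Converting measured times into throughput uplifts like these is a simple ratio. A sketch, using illustrative numbers rather than the raw test data:

```python
def throughput_uplift(old_seconds: float, new_seconds: float) -> float:
    """Percent more work done per unit time by the newer part, derived
    from how long the same fixed workload takes on each."""
    return (old_seconds / new_seconds - 1.0) * 100.0

# e.g. a hypothetical compile that drops from 170 minutes to 100 minutes
print(f"{throughput_uplift(170, 100):.0f}% more data in the same time")
```

This is why a task taking 41% less time reads as a 70% throughput gain: the percentage depends on which quantity (time or rate) you put in the denominator.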

The only downside is that even though both processors are listed at 80W TDP, we measured the E3-1270 v5 at 63W peak power consumption, and the E-2136 at 95W peak power. It’s a calculation to make: higher power costs for faster workflows.

For users that don’t need ECC, options will stem from the consumer line of processors. The Core i7-8700 family of processors also has six cores and twelve threads, but pushes the base frequency up higher, which also increases the power and the price. However, there are a larger number of motherboards to choose from, which may be cheaper.

Unfortunately AMD hasn’t engaged us in testing its Ryzen Pro processor line, despite my requests, so it’s hard to get a read here on how those parts might fare in these target markets. The six-core, twelve-thread Ryzen 5 2600X is the consumer version of the Ryzen 5 Pro 2600X, and costs $50 less than the E-2136. The battle between the two in benchmarks can be close at times; however, Intel tends to win most of the time, by up to 10%. That might not be worth the $50, but it would be interesting to see these two processors in a head-to-head review.

Six Core Xeon E: Almost Identical Parts, So Buy The Cheap One

I’ve wanted to get to grips with the Xeon E/Xeon E3 family for a while. We managed to get in a set of v4/v5 processors a while back, but close to the v6 launch, and unfortunately Intel never sampled the v6. This time around we took the initiative with alternative sourcing, and the goods look good – in fact, when you put all the six-core parts together, it’s a bowl of identical fruit with different price stickers.

Intel’s strategy is a little odd here, announcing the Xeon E parts in July but then having a review embargo in November. While there have been parts listed at retailers, none have actually ever been in stock, with no ETAs. Technically today is a ‘secondary announcement’ around server deployment for Xeon E, although given the lack of retail availability, this is probably more like the ‘real’ launch than the July announcement was.

With the launch being today, there is perhaps a question of whether this has anything to do with the recent increase in demand for Intel’s server platforms, which puts pressure on Intel’s manufacturing resources and may have left it unable to make enough Xeon E parts – Xeon E is a small part of the Xeon business, after all. That being said, Intel still has strong demand for the Xeon-SP parts, which doesn’t look like it is subsiding, with reports that it might be unable to fill sales orders. As a result, I wonder what bar has been reached in the manufacturing process that has enabled a sufficient quantity (whatever that means) of Xeon E to be made available.

There’s actually a funny story here: after hearing nothing from Intel about Xeon E since July, without prompting, two users in the last week emailed me about the availability of Xeon E. Sorry to the both of you, I couldn’t say anything due to NDAs. But kudos on timing! I hope they hit retail soon for the readers that are interested in getting their hands on them. In the meantime, I’ll try and rustle up some v6 parts for a generational comparison.
