Original Link: https://www.anandtech.com/show/12945/the-intel-core-i7-8086k-review



Intel announced it, we asked for one, we were told there would be no press samples, so we went and got one for ourselves. This year marks the 40th anniversary of the 8086, the processor that started the x86 architecture, and to celebrate Intel released the Core i7-8086K, based on Coffee Lake with a turbo up to 5.0 GHz. With only 50,000 parts to be sold worldwide, 8,086 of which are given away for free in regional sweepstakes, Intel is being bold in trying to limit supply and actively promote its new mainstream headline act. As with most processors, it is all in the binning, and this is truly Intel’s highest performance mainstream processor ever. However, the 8086K may not be all it's cracked up to be.

The Headline Act: 40 Years of x86

Anniversary edition processors, or limited edition processors, have been hit or miss through the years. Back in June 2014, Intel launched the Pentium Anniversary Edition G3258 – an overclockable dual-core processor – to much fanfare, but no matter how much the CPU was overclocked it never performed close to a full quad core. In 2009 AMD launched a limited 100-part run of the Phenom II X4 TWKR – a part highly binned for performance world records – to a select bunch of extreme overclockers. It was never available at retail. However, for Intel’s 50th anniversary and the 40th anniversary of the x86 architecture, the Core i7-8086K is now available to a ‘lucky’ 50,000 users.

Plaudits go to David Schor of Wikichip, who originally suggested an 8th generation six-core Coffee Lake at 5.0 GHz called the i7-8086K for June 8th, the anniversary date.

Either David has moles in Intel, access to a time machine, or has the incredible ability to influence Intel’s manufacturing decisions – to request/predict the product as far back as January 18th is quite a feat.

The Core i7-8086K is, as David says, a six-core Coffee Lake based processor that runs at a 5.0 GHz turbo. The part is, for all intents and purposes, a higher binned Core i7-8700K.

Intel Core i7 Coffee Lake
AnandTech       Price   Cores    TDP    Freq (Base/Turbo)   L3     vPro   DRAM        iGPU     iGPU Turbo
Core i7-8086K   $425    6 / 12   95 W   4.0 / 5.0 GHz       12 MB  No     DDR4-2666   24 EUs   1200 MHz
Core i7-8700K   $359    6 / 12   95 W   3.7 / 4.7 GHz       12 MB  No     DDR4-2666   24 EUs   1200 MHz
Core i7-8700    $303    6 / 12   65 W   3.2 / 4.6 GHz       12 MB  Yes    DDR4-2666   24 EUs   1200 MHz
Core i7-8700T   $303    6 / 12   35 W   2.4 / 4.0 GHz       12 MB  Yes    DDR4-2666   24 EUs   1200 MHz

The key thing to note is that the processor is not 5.0 GHz on all cores. The CPU still has an official rated TDP of 95 W, so rather than raising the TDP to give something extra, Intel is still playing within the limits of its usual TDP range. We have seen that the recent move to six-core processors on the mainstream platform is causing Intel’s processors to bump up against and exceed their usual TDP ratings, causing confusion for consumers and the press. It is important to note that Intel’s TDP rating is only valid at the base frequency – in this case, 95 W at 4.0 GHz all-core – and any turbo mode can draw as much power as required. This is something we’ll come back to in this review.

So one of the key questions for this ‘faster’ processor is exactly how much faster it will be over the Core i7-8700K. Looking back several years, Intel launched the Devil’s Canyon range of processors as ‘better overclocking’ versions of the mainstream Haswell processors, which afforded more frequency and a better thermal response to frequency and voltage. If you were expecting the same thing for the Core i7-8086K, I’m afraid you might come away disappointed.

The Intel Core i7-8086K has exactly the same per-core turbo ratios as the Core i7-8700K, except for the single-core turbo, which is boosted from 4.7 GHz to 5.0 GHz. That is about it.

Moving only the single core turbo has limited benefits. For all but the purest single-core tests, on paper, there should be no difference at all. The problem is that very few situations on modern machines load only a single core: almost every user has programs running in the background, such as other browser tabs, virus scanners, or unscrupulous updates. Gamers, on the face of it, should expect to see little-to-no benefit.

Technically the base frequency has risen from 3.7 GHz to 4.0 GHz, which means that Intel is guaranteeing a higher minimum frequency within the 95 W TDP. Practically, that means very little for users who just drop the chip into a system, as the processor should never have to fall back to that base frequency. There is an argument that a 4.0 GHz base clock means the processor comes from a better bin, and so should overclock better; that will be an interesting part of this review.

However, as we will see in the results, the sole increase in single core turbo means that this processor is not as big of a jump as the ‘5.0 GHz’ number on the side of the box suggests. This is especially galling when considering the price of the new processor: Intel has the MSRP of the Core i7-8086K listed as $425, compared to the $350 retail price for the Core i7-8700K. This is a $75+ price difference, or even more when you consider that the 8700K periodically goes on sale for less than that.

Some Testing Context

The CPU being tested here was not sampled by Intel; we were told while at Computex that Intel has no plans to seed samples to the media. This is somewhat disappointing, given how much fanfare Intel is giving to these new parts. However, we assume Intel will be able to sell every single one anyway, so additional media coverage to help drive sales might not be on the cards. As we’ll see in this review, there might also be other reasons for not sampling the media.

At the moment, I’m still here in Taiwan for some post-Computex meetings, and will not be back home in my office for a few days. So when Intel said they were not sampling the CPU, we had two choices: try to win one in the sweepstakes and wait eight weeks, or wait until I get home and buy one. Or we could make our own third option – the AnandTech option: get hold of one from the local shops and test it here.


Some guy with a CPU box

We reached out to partners to borrow a system for a couple of days to test in the hotel room, and ASRock beautifully stepped up to the plate on a Saturday when their office was closed. I bumped into my contacts on the show floor (after buying the CPU that morning) and they promptly arrived later that night at my hotel with one of the fully-built systems from the show floor and the few cables I needed.

Big kudos to ASRock for enabling this testing; they also provided a Core i7-8700K. This test system is obviously different to my testing system back in the office, and normally we keep the system constant between CPUs for consistency. For the sake of expediency, only the 8700K and 8086K numbers will have true parity here. It’s also a good thing I brought my review OS and benchmarks with me to Taiwan. Never leave home without them, folks.

Test System

As mentioned, a big shout-out to ASRock for the system loan at such short notice. Without them this review would not have been possible within the time frame we wanted.

It should be noted that as with previous reviews, our 'stock' CPU settings include setting the memory to the maximum supported frequency of the processor. In this case, we use DDR4-2666 for Intel Coffee Lake CPUs.

Test Setup
Processors        Intel Core i7-8086K (LGA1151): 6C / 12T, 4.0 / 5.0 GHz, 95 W, $425
                  Intel Core i7-8700K (LGA1151): 6C / 12T, 3.7 / 4.7 GHz, 95 W, $350
Motherboard       ASRock Z370 Taichi (BIOS P1.80)
Spectre/Meltdown  Applied
Cooling           Cooler Master CLC
Power Supply      Cooler Master V1000
Memory            Team Group DDR4-3200
Memory Settings   Stock: DDR4-2666 16-18-18 2T / OC: DDR4-3466 16-18-18 2T
GPU               ASRock RX 580 Gaming
Hard Drive        Crucial MX200 1TB
Case              Cooler Master H500
OS                Windows 10 Enterprise (1803) with OS patches

It should be noted that ASRock were not able to loan us the exact GPU that I normally use for our gaming testing. Instead we were able to source an RX 580, so this means that our gaming testing data will only have two data points: a Core i7-8700K and a Core i7-8086K. We will get some more data next week when we are back in the office.

As mentioned, one of the key differences for this test is the motherboard. Back in the office we have used an ASRock Z370 Gaming i7 (P1.70 BIOS) for our Coffee Lake testing, while here we are using an ASRock Z370 Taichi (P1.80 BIOS). Different motherboards, even from the same company, use different methods of controlling the internal frequencies on the board (such as Uncore) or power limits (PL2) which can vary from BIOS to BIOS. It is hard to keep these consistent across systems, so there will be some differences in play.

In the short time we have had with the processor, we have the following pages to tantalize your eyes:

  1. Intel Core i7-8086K: 40 Years of x86
  2. Thermal Interface and Extreme Overclocking with lucky_n00b
  3. Ambient Overclocking and Power Scaling Analysis
  4. Benchmarking Performance: CPU System Tests
  5. Benchmarking Performance: CPU Rendering Tests
  6. Benchmarking Performance: CPU Encoding Tests
  7. Benchmarking Performance: CPU Office Tests
  8. Benchmarking Performance: CPU Legacy Tests
  9. Gaming Performance: Civilization 6
  10. Gaming Performance: Shadow of Mordor
  11. Gaming Performance: Rise of the Tomb Raider
  12. Gaming Performance: Rocket League
  13. Gaming Performance: Grand Theft Auto V
  14. Overclocking Performance at 5.0 GHz: CPU Tests
  15. Overclocking Performance at 5.0 GHz: GPU Tests
  16. Conclusions


Thermal Interface and Extreme Overclocking

(with Alva Jonathan)

One of the big questions surrounding the new CPU is whether Intel has decided to make changes to the way the CPU and the heatspreader make contact. The best way to make contact is to use an indium-tin solder, or a liquid metal, ensuring that the thermal load from the CPU is taken directly to the CPU cooler. The cheaper (but more reliable) method is a thermal paste, which is more resilient to thermal expansion over the lifecycle of the processor. In a perfect world, we'd expect the highest performance processors to use the solder method while cheaper processors use a thermal paste. However, Intel has been making its processors solely with thermal paste of late, causing extreme enthusiasts to resort to delidding and replacing the thermal paste with liquid metal. AMD uses thermal paste in its APUs, and we did a delidding guide a few weeks back:

Delidding The AMD Ryzen 5 2400G APU: How To Guide and Results

The process for Intel chips is much the same. However, the question for this review was whether Intel would change from the thermal paste used on the Core i7-8700K to a more overclocking- and thermally-friendly solder for the Core i7-8086K. If Intel is gearing this part towards enthusiasts, solder should be used, right?

Making It Possible

For this page, we are extremely thankful to Alva Jonathan, aka ‘Lucky_n00b’, a fellow overclocker and journalist for Jagat Review. I've known Alva for almost 10 years, and like me, he also purchased his Core i7-8086K during Computex this week, except he went all-in with delidding and liquid nitrogen. He is allowing us to share his results with our audience, so a big thank you to Alva!

 

Alva does some impressive overclocking coverage on all the new platforms at Jagat Review (in Indonesian), as well as doing exceedingly well at overclocking competitions around the world. This week he scored third place at G.Skill’s live overclocking event at Computex, scoring some nice hardware and a cash prize.

Alva’s Core i7-8086K OC and analysis can be found here (in Indonesian).

Opening Up The Chip

Suffice it to say, Intel made zero changes to the thermal interface on the Core i7-8086K. It is completely identical to the Core i7-8700K, using the same thermal goop as in previous generations of chips. For current Coffee Lake processors, removing the thermal goop and replacing it with a liquid metal implementation is generally good for lowering temperatures by 5-15ºC (depending on the quality of the application) or gaining another 100-300 MHz, depending on the voltage response of the chip.

Alva recommends delidding the processor for more frequency or better thermals only if you intend to put more than 1.30 volts through the CPU. At this voltage, with a good ambient cooler, users will start to hit around 80ºC when running the CPU at full load (we can confirm, our sample was similar), which is a good point for anyone considering a delid.

With his CPU, Alva achieved 5.0 GHz at 1.20 volts, which was stable enough to run CineBench R15 for a score of 1627 (compared to 1424 at stock with fast memory). The CPU also managed 5.2 GHz at 1.35 volts for a few more points at 1692. He used KingpinCooling KPX as the replacement thermal interface material.

Going Beyond with Liquid Nitrogen (LN2)

Extreme overclocking is an interesting pastime, and for users on the extreme edge of the sport, every MHz counts. It is not only about cooling: systems are physically modified to add better power delivery or to adjust voltages manually rather than through software. For those that can, it creates a thrill or two.

In Alva’s testing notes, he started with MSI’s Z370 Godlike Gaming motherboard prepped for sub-zero cooling, and used a heavy LN2 copper pot to manage temperatures with the liquid nitrogen. After bringing the system down to -100ºC, he booted with BIOS settings that put the CPU at 6.0 GHz (60x100), with an uncore of 5.0 GHz and a CPU voltage of 1.70 volts. Don’t try this without sub-zero cooling (!). Other voltages were as follows:

  • SA/IO Voltage: 1.35 V
  • DMI Voltage: 1.80 V
  • CPU PLL Voltage: 2.20 V
  • CPU PLL OC Voltage 2.20 V
  • CPU ST Voltage: 1.35 V
  • CPU ST V6 Voltage: 1.35 V

The CPU was kept in its full 6C/12T mode.

After booting into the OS, MSI Command Center Lite was used to adjust the processor variables (multiplier, base clock, voltage) in real time. The system was cooled down further to its limit, known as ‘full-pot’ liquid nitrogen benchmarking, and the multiplier was raised to find the absolute processor frequency limit for a no-holds barred validation.

The final result? 7309 MHz: http://valid.x86.fr/2tx32n

In general, Skylake-based processors tend to see peak liquid nitrogen frequencies around 7.1-7.4 GHz, so this new processor is nothing out of the ordinary. Alva said that he was quite happy with this single chip, however he will need to test a few more to see if there is variation in the wafer/batch from Intel. When Alva posts his full sub-zero overclocking article, I will link to it here.

Edit: Here is Alva's article - http://oc.jagatreview.com/2018/06/intel-core-i7-8086k-extreme-overclocking-7-3ghz-on-msi-z370-godlike-gaming/



Ambient Overclocking and Power Scaling Analysis

For 24/7 overclocking, we used our hotel-room system to get a good look at how the Core i7-8086K performs on a closed-loop liquid cooler, going through the multipliers one by one. For this we used a variation of our standard overclocking technique.

Home Overclocking, Step by Step

Due to timing and location, our overclocking method was as follows (expressed as a code sketch after the list):

  1. Start with the CPU at a 40x multiplier and 1.05 volts
  2. Set Load Line Calibration to Level 1 (ASRock Z370 Taichi)
  3. Load up the OS
  4. Run our Blender test, taking power and temperature data from AIDA
  5. If the CPU goes over 95ºC, stop testing
  6. If the system fails, add +0.025 volts and go to step 3
  7. If the test passes, note down the Blender result, add a multiplier, and go back to step 3
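Expressed as code, the decision logic of that loop looks like the sketch below. In practice the multiplier and voltage changes are manual BIOS steps; the three callables here are hypothetical placeholders for those actions:

```python
def find_max_overclock(set_multiplier, set_vcore, run_blender,
                       start_mult=40, start_vcore=1.05,
                       vcore_step=0.025, temp_limit_c=95):
    """Walk the multiplier up, adding voltage on instability, until the
    chip either exceeds the temperature limit or cannot be stabilized.
    The three callables stand in for manual BIOS/benchmark steps."""
    mult, vcore = start_mult, start_vcore
    results = {}
    while True:
        set_multiplier(mult)                      # steps 1/7: set CPU ratio
        set_vcore(vcore)                          # steps 1/6: set CPU voltage
        passed, seconds, temp_c, watts = run_blender()   # steps 3-4
        if temp_c >= temp_limit_c:
            break                                 # step 5: too hot, stop
        if not passed:
            vcore += vcore_step                   # step 6: add voltage, retry
            continue
        results[mult] = (seconds, temp_c, watts)  # step 7: record result
        mult += 1                                 # ...and try the next ratio
    return results
```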

Blender gives a good mix of hardcore CPU load and memory accesses, and as a result, power draw. Any issues that required additional voltage for stability were found relatively quickly after starting the test.

The Blender test lasts around five minutes on the Core i7-8086K, which for our quick overclock testing is sufficient. For users who insist on 24/7 rock solid stability, it isn’t the test that you might like to see, however it still marks a good attack on the system.

Results

Using this methodology, we achieved the following results:

At default, our system would hit an all-core turbo of 4.3 GHz and scored 311 seconds on Blender, with the CPU at 62 degrees C and consuming 115 W. We also tested the system ‘at auto’ but with the CPU set to 5.0 GHz on all cores. This gave a Blender score of 268 seconds, but much higher temperatures (82 C) and power consumption (175 W).

When going up from 4.0 GHz manually, we can see that there is a disconnect in how the voltage is reported in the OS: AIDA64 reported a voltage that slowly increased as the multiplier increased, even if the voltage setting in the BIOS did not change. It also appeared to hit a wall at 1.364 volts, even though raising the voltage in the BIOS still helped with the higher multipliers. This was odd, but I think the pertinent results here are Blender, temperature, and power.

I’m going to convert the Blender results into ‘renders per hour’, which is easier to visualize in a graph.
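The conversion is simple – a render taking t seconds equates to 3600/t renders per hour – applied here to the two stock numbers above as a quick check:

```python
def renders_per_hour(render_seconds: float) -> float:
    """Convert a single Blender render time into renders per hour."""
    return 3600.0 / render_seconds

print(f"{renders_per_hour(311):.1f} renders/hour at stock (4.3 GHz all-core)")
print(f"{renders_per_hour(268):.1f} renders/hour at 5.0 GHz all-core")
# ~11.6 vs ~13.4 renders/hour: the same +16% seen in the raw times
```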

The key result here is going to be 5.0 GHz, which is a nice medium for power and performance but also temperatures and voltage. At this level, the system gives +16% performance for an additional +16% frequency. The problem though is the power.

Comparing a 5.0 GHz manual overclock to the ‘stock’ operation of the processor shows a 32% increase in power. But when compared to an equivalent 4.3 GHz manual overclock, the power gain is now a whopping +68%. We really are stretching the microarchitectural design at this stage.

What should be noted is that at default, the system drew 115 W, which is 20 W above TDP. As mentioned before, TDP is defined at the base frequency, in this case 4.0 GHz. We saw a power consumption of 80 W at the base frequency, showing that the processor is still technically under that TDP value, at least when the user optimizes the voltage. At the 95 W level, if we were maximizing frequency for the TDP, we should have seen a base frequency of 4.4 GHz with this chip.
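As a rough rule of thumb (not an Intel-published figure), dynamic CPU power scales linearly with frequency but with the square of voltage, which is why power races ahead of frequency once extra voltage is needed. With illustrative voltages of 1.10 V at 4.3 GHz and 1.30 V at 5.0 GHz:

    P_dyn ≈ C × f × V²

    P(5.0 GHz) / P(4.3 GHz) ≈ (5.0 / 4.3) × (1.30 / 1.10)² ≈ 1.62

That is in the same ballpark as the +68% jump we measured; it is the extra voltage, more than the frequency itself, that blows the power budget.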

However, consider what might have been if Intel had decided to increase the TDP by +10W or +15W, up to 110W. In that case, we could have been playing with a chip that had a base frequency of around 4.6 GHz, depending on how other chips perform. As we will see in the results over the next few pages, Intel really did miss a trick here by not going down an increased TDP route.

Going for Gold

For anyone interested in the upper limits of our chip, 5.1 GHz was the realistic maximum. I could not get 5.2 GHz to be stable with Blender for more than about 30 seconds without it throwing an error, and as the voltage in the BIOS rose up to 1.425 volts, the system was showing peak temperatures of 100ºC, well beyond a comfortable limit. Speaking with Alva, who has a nicer chip, he stated that with a delid 5.2 GHz should be possible, although going beyond that might be tough given how quickly the voltage seems to ramp in our sample.

As for the absolute maximum we could boot into Windows with, I was able to see 5.4 GHz. No load was applied for fear of the temperatures, and 5.5 GHz did not want to play ball.

Testing at 5.0 GHz

As part of our testing, we were able to run through a few benchmarks with both a high overclock and fast memory (and we tried both). Many thanks again to ASRock for the system loan.



Benchmarking Performance: CPU System Tests

Our first set of tests is our general system tests. This set of tests is meant to emulate what people usually do on a system, like opening large files or processing small stacks of data. This is a bit different to our office testing, which uses more industry-standard benchmarks, and a few of the benchmarks here are relatively new and different.

All of our benchmark results can also be found in our benchmark engine, Bench.

FCAT Processing: link

One of the more interesting workloads that has crossed our desks in recent quarters is FCAT - the tool we use to measure stuttering in gaming due to dropped or runt frames. The FCAT process requires enabling a color-based overlay onto a game, recording the gameplay, and then parsing the video file through the analysis software. The software is mostly single-threaded, however because the video is basically in a raw format, the file size is large and requires moving a lot of data around. For our test, we take a 90-second clip of the Rise of the Tomb Raider benchmark running on a GTX 980 Ti at 1440p, which comes in around 21 GB, and measure the time it takes to process through the visual analysis tool.

System: FCAT Processing ROTR 1440p GTX980Ti Data

FCAT is single threaded, however in this test the full 5.0 GHz did not kick in.

Dolphin Benchmark: link

Many emulators are often bound by single thread CPU performance, and general reports tended to suggest that Haswell provided a significant boost to emulator performance. This benchmark runs a Wii program that ray traces a complex 3D scene inside the Dolphin Wii emulator. Performance on this benchmark is a good proxy of the speed of Dolphin CPU emulation, which is an intensive single core task using most aspects of a CPU. Results are given in minutes, where the Wii itself scores 17.53 minutes.

System: Dolphin 5.0 Render Test

For a test that did have 5.0 GHz kick in, the 8086K takes the record in our Dolphin test.

3D Movement Algorithm Test v2.1: link

This is the latest version of the self-penned 3DPM benchmark. The goal of 3DPM is to simulate semi-optimized scientific algorithms taken directly from my doctorate thesis. Version 2.1 improves over 2.0 by passing the main particle structs by reference rather than by value, and decreasing the amount of double->float->double recasts the compiler was adding in. It affords a ~25% speed-up over v2.0, which means new data.

System: 3D Particle Movement v2.1

On 3DPM, the 8086K shows that the 4.3 GHz all-core is on par with the 8700K.

DigiCortex v1.20: link

Despite being a couple of years old, the DigiCortex software is a pet project for the visualization of neuron and synapse activity in the brain. The software comes with a variety of benchmark modes, and we take the small benchmark which runs a 32k neuron/1.8B synapse simulation. The results on the output are given as a fraction of whether the system can simulate in real-time, so anything above a value of one is suitable for real-time work. The benchmark offers a 'no firing synapse' mode, which in essence detects DRAM and bus speed, however we take the firing mode which adds CPU work with every firing.

System: DigiCortex 1.20 (32k Neuron, 1.8B Synapse)

Despite the faster single core frequency, this DRAM-limited test seems to load up another core and stops the 8086K from reaching 5.0 GHz.

Agisoft Photoscan 1.3.3: link

Photoscan stays in our benchmark suite from the previous version, however now we are running it on Windows 10, so features such as Speed Shift on the latest processors come into play. The concept of Photoscan is translating many 2D images into a 3D model - so the more detailed the images, and the more you have, the better the model. The algorithm has four stages, some single threaded and some multi-threaded, along with some cache/memory dependency in there as well. For the more variably threaded workloads, features such as Speed Shift and XFR are able to take advantage of CPU stalls or downtime, giving sizeable speedups on newer microarchitectures.

System: Agisoft Photoscan 1.3.3 (Large) Total Time

Agisoft is variably threaded, but the 8086K is still only a small stone's throw from the 8700K.



Benchmarking Performance: CPU Rendering Tests

Rendering tests are a long-time favorite of reviewers and benchmarkers, as the code used by rendering packages is usually highly optimized to squeeze every little bit of performance out. Sometimes rendering programs end up being heavily memory dependent as well - when you have that many threads flying about with a ton of data, having low latency memory can be key to everything. Here we take a few of the usual rendering packages under Windows 10, as well as a few new interesting benchmarks.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: link

Corona is a standalone package designed to assist software like 3ds Max and Maya with photorealism via ray tracing. It's simple - shoot rays, get pixels. OK, it's more complicated than that, but the benchmark renders a fixed scene six times and offers results in terms of time and rays per second. The official benchmark tables list user submitted results in terms of time, however I feel rays per second is a better metric (in general, scores where higher is better seem to be easier to explain anyway). Corona likes to pile on the threads, so the results end up being very staggered based on thread count.

Rendering: Corona Photorealism

Corona is a fully multi-threaded test, so it is surprising to see the 8086K lag behind the 8700K here. This is likely a scenario where the fact that our borrowed testbed setup doesn't perfectly match our standard testbed is playing a factor.

Blender 2.78: link

For a render that has been around for what seems like ages, Blender is still a highly popular tool. We managed to wrap up a standard workload into the February 5 nightly build of Blender and measure the time it takes to render the first frame of the scene. Being one of the bigger open source tools out there, it means both AMD and Intel work actively to help improve the codebase, for better or for worse on their own/each other's microarchitecture.

Rendering: Blender 2.78

Blender also likes to load up the threads, and the 8086K is behind again.

LuxMark v3.1: Link

As a synthetic, LuxMark might come across as somewhat arbitrary as a renderer, given that it's mainly used to test GPUs, but it does offer both an OpenCL and a standard C++ mode. In this instance, aside from seeing the comparison in each coding mode for cores and IPC, we also get to see the difference in performance moving from a C++ based code-stack to an OpenCL one with a CPU as the main host.

Rendering: LuxMark CPU C++
Rendering: LuxMark CPU OpenCL

POV-Ray 3.7.1b4

Another regular benchmark in most suites, POV-Ray is another ray-tracer but has been around for many years. It just so happens that during the run up to AMD's Ryzen launch, the code base started to get active again with developers making changes to the code and pushing out updates. Our version and benchmarking started just before that was happening, but given time we will see where the POV-Ray code ends up and adjust in due course.

Rendering: POV-Ray 3.7

Virtually identical scores between the 8086K and 8700K in POV-Ray.

Cinebench R15: link

The latest version of CineBench has also become one of those 'used everywhere' benchmarks, particularly as an indicator of single thread performance. High IPC and high frequency gives performance in ST, whereas having good scaling and many cores is where the MT test wins out.

Rendering: CineBench 15 SingleThreaded
Rendering: CineBench 15 MultiThreaded

The 8086K gets a new fastest single core score in CineBench R15 ST, but falls slightly behind the 8700K in MT.



Benchmarking Performance: CPU Encoding Tests

One of the interesting elements of modern processors is encoding performance. This includes encryption/decryption, as well as video transcoding from one video format to another. In the encrypt/decrypt scenario, this remains pertinent to on-the-fly encryption of sensitive data - a process that more modern devices are leaning on for software security. Video transcoding as a tool to adjust the quality, file size and resolution of a video file has boomed in recent years, such as providing the optimum video for devices before consumption, or for game streamers who want to upload the output from their video camera in real-time. As we move into live 3D video, this task will only get more strenuous, and it turns out that the performance of certain algorithms is a function of the input/output of the content.

All of our benchmark results can also be found in our benchmark engine, Bench.

7-Zip 9.2

One of the freeware compression tools that offers good scaling performance between processors is 7-Zip. It is open source, fast, and an easy tool for power users. We run its benchmark mode via the command line for four loops and take the output score.
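Reproducing this is a one-liner; a minimal sketch, assuming 7z is on the PATH and that the benchmark command accepts an iteration count (as recent versions do):

```python
import subprocess

# Run 7-Zip's built-in benchmark for four loops and print the output;
# the combined rating we record comes from the totals at the end.
result = subprocess.run(["7z", "b", "4"], capture_output=True, text=True)
print(result.stdout)
```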

Encoding: 7-Zip Combined Score

Encoding: 7-Zip Compression

Encoding: 7-Zip Decompression

Again, trading blows with the 8700K, but falling behind a little bit.

WinRAR 5.40

For the 2017 test suite, we move to the latest version of WinRAR in our compression test. WinRAR in some quarters is more user friendly than 7-Zip, hence its inclusion. Rather than use a benchmark mode as we did with 7-Zip, here we take a set of files representative of a generic stack (33 video files in 1.37 GB, 2834 smaller website files in 370 folders in 150 MB) of compressible and incompressible formats. The results shown are the time taken to encode the file set. Due to DRAM caching, we run the test 10 times and take the average of the last five runs when the benchmark is in a steady state.
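The steady-state averaging is the key methodological detail, and it is easy to reproduce; a minimal sketch, with the archiving command as a stand-in rather than our exact file set:

```python
import statistics
import subprocess
import time

def steady_state_time(cmd, runs=10, keep_last=5):
    """Time a command repeatedly and average only the last few runs,
    by which point the source files are served from the OS DRAM cache."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        times.append(time.perf_counter() - start)
    return statistics.mean(times[-keep_last:])

# Stand-in invocation: archive a mixed file set with WinRAR's CLI tool.
# print(steady_state_time(["rar", "a", "test.rar", "testset"]))
```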

Encoding: WinRAR 5.40

The 8086K sits behind the 8700K in another benchmark.

AES Encoding

Algorithms using AES coding have spread far and wide as a ubiquitous tool for encryption. Again, this is another CPU limited test, and modern CPUs have special AES pathways to accelerate their performance. We often see scaling in both frequency and cores with this benchmark. We use the latest version of TrueCrypt and run its benchmark mode over 1GB of in-DRAM data. Results shown are the GB/s average of encryption and decryption.

Encoding: AES

Under AES encoding we get literally identical results.

HandBrake v1.0.2 H264 and HEVC: link

As mentioned above, video transcoding (both encode and decode) is a hot topic in performance metrics as more and more content is being created. The first consideration is the standard in which the video is encoded, which can be lossless or lossy, trade performance for file size, trade quality for file size, or all of the above. Alongside Google's favorite codec, VP9, there are two others taking hold: H264, the older codec, is practically everywhere and is designed to be optimized for 1080p video, while HEVC (or H265) aims to provide the same quality as H264 but at a lower file size (or better quality for the same size). HEVC is important as 4K video is streamed over the air, meaning fewer bits need to be transferred for the same quality content.

Handbrake is a favored tool for transcoding, and so our test regime takes care of three areas.

Low Quality/Resolution H264: Here we transcode a 640x266 H264 rip of a 2 hour film, and change the encoding from Main profile to High profile, using the very-fast preset.

Encoding: Handbrake H264 (LQ)

High Quality/Resolution H264: A similar test, but this time we take a ten-minute double 4K (3840x4320) file running at 60 Hz and transcode from Main to High, using the very-fast preset.

Encoding: Handbrake H264 (HQ)

HEVC Test: Using the same video in HQ, we change the resolution and codec of the original video from 4K60 in H264 into 4K60 HEVC.

Encoding: Handbrake HEVC (4K)
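As a rough illustration of this kind of transcode (not our exact harness), the equivalent HandBrake command-line invocation would look something like the sketch below; the file names are placeholders, and the flags follow HandBrake 1.x CLI conventions:

```python
import subprocess

# Illustrative equivalent of the HEVC test: take a 4K60 H264 source and
# re-encode it as 4K60 HEVC via x265. The preset mirrors the 'very-fast'
# setting used in the H264 tests.
subprocess.run([
    "HandBrakeCLI",
    "-i", "source_4k60_h264.mp4",    # placeholder input
    "-o", "output_4k60_hevc.mp4",    # placeholder output
    "-e", "x265",                    # HEVC via the x265 encoder
    "--encoder-preset", "veryfast",
], check=True)
```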



Benchmarking Performance: CPU Office Tests

The office programs we use for benchmarking aren't specific programs per se, but industry-standard tests that hold weight with professionals. The goal of these tests is to use an array of software and techniques that a typical office user might encounter, such as video conferencing, document editing, architectural modelling, and so on and so forth.

All of our benchmark results can also be found in our benchmark engine, Bench.

Chromium Compile (v56)

Our new compilation test uses Windows 10 Pro, VS Community 2015.3 with the Win10 SDK to compile a nightly build of Chromium. We've fixed the test to a build from late March 2017, and we run a fresh full compile in our test. Compilation is the typical example given of a variably threaded workload - some of the compiling and linking is linear, whereas other parts are multithreaded.
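For a flavor of how such a test can be timed, here is a minimal sketch; it assumes an already-configured Chromium checkout, with 'chrome' as the conventional top-level ninja target (our actual harness differs in the details):

```python
import subprocess
import time

# Time a full build of the 'chrome' target. A real run would clean the
# output directory first so that no stale object files are reused.
start = time.perf_counter()
subprocess.run(["ninja", "-C", "out/Default", "chrome"], check=True)
print(f"Full compile time: {time.perf_counter() - start:.1f} s")
```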

Office: Chromium Compile (v56)

This is another case where I think our improvised testbed is playing a bigger part, and I'd like to eventually re-run this on my standard testbed. Especially as compiling heavily hits more than just the CPU.

GeekBench4: link

Due to numerous requests, GeekBench 4 is now part of our suite. GB4 is a synthetic test using algorithms often seen in high-performance workloads along with a series of memory focused tests. GB4’s biggest asset is a single-number output which its users seem to love, although it is not always easy to translate that number into real-world performance comparisons.

Office: Geekbench 4 - Single Threaded Score (Overall)

Office: Geekbench 4 - MultiThreaded Score (Overall)

Like CineBench, the Core i7-8086K does well on the synthetic single threaded test.

PCMark8: link

Despite originally coming out in 2013, Futuremark has maintained PCMark8 to remain relevant in 2017. On the scale of complicated tasks, PCMark focuses more on the low-to-mid range of professional workloads, making it a good indicator for what people consider 'office' work. We run the benchmark from the command line in 'conventional' mode, meaning C++ over OpenCL, to remove the graphics card from the equation and focus purely on the CPU. PCMark8 offers Home, Work and Creative workloads, with some software tests shared and others unique to each benchmark set.

Office: PCMark8 Home (non-OpenCL)

Here the 8086K does eke out a win over the 8700K, although just barely.



Benchmarking Performance: CPU Legacy Tests

Our legacy tests represent benchmarks that were once at the height of their time. Some of these are industry standard synthetics, and we have data going back over 10 years. All of the data here has been rerun on Windows 10, and we plan to go back several generations of components to see how performance has evolved.

All of our benchmark results can also be found in our benchmark engine, Bench.

3D Particle Movement v1

3DPM is a self-penned benchmark, taking basic 3D movement algorithms used in Brownian Motion simulations and testing them for speed. High floating point performance, MHz and IPC wins in the single thread version, whereas the multithread version has to handle the threads and loves more cores. This is the original version, written in the style of a typical non-computer science student coding up an algorithm for their theoretical problem, and comes without any non-obvious optimizations not already performed by the compiler, such as false sharing.

Legacy: 3DPM v1 Single Threaded
Legacy: 3DPM v1 MultiThreaded

CineBench 11.5 and 10

Cinebench is a widely known benchmarking tool for measuring performance relative to MAXON's animation software Cinema 4D. Cinebench has been optimized over a decade and focuses on purely CPU horsepower, meaning if there is a discrepancy in pure throughput characteristics, Cinebench is likely to show that discrepancy. Arguably other software doesn't make use of all the tools available, so the real world relevance might purely be academic, but given our large database of data for Cinebench it seems difficult to ignore a small five minute test. We run the modern version 15 in this test, as well as the older 11.5 and 10 due to our back data.

Legacy: CineBench 11.5 MultiThreaded
Legacy: CineBench 11.5 Single Threaded
Legacy: CineBench 10 MultiThreaded
Legacy: CineBench 10 Single Threaded

x264 HD 3.0

Similarly, the x264 HD 3.0 package we use here is also kept for historic regression data. The latest version is 5.0.1, and encodes a 1080p video clip into a high quality x264 file. Version 3.0 performs the same test on a 720p file, and in most circumstances the software hits its performance limit on high-end processors, but it still works well for mainstream and low-end chips. Also, this version only takes a few minutes, whereas the latest can take over 90 minutes to run.

Legacy: x264 3.0 Pass 1
Legacy: x264 3.0 Pass 2



Civilization 6

First up in our CPU gaming tests is Civilization 6. Originally penned by Sid Meier and his team, the Civ series of turn-based strategy games are a cult classic, and many an excuse for an all-nighter trying to get Gandhi to declare war on you due to an integer overflow. Truth be told I never actually played the first version, but every edition from the second to the sixth, including the fourth as voiced by the late Leonard Nimoy, is a game that is easy to pick up, but hard to master.

Benchmarking Civilization has always been somewhat of an oxymoron – for a turn-based strategy game, the frame rate is not necessarily the important thing here, and even in the right mood, something as low as 5 frames per second can be enough. With Civilization 6 however, Firaxis went hardcore on visual fidelity, trying to pull you into the game. As a result, Civilization can be taxing on graphics and CPUs as we crank up the details, especially in DirectX 12.

Perhaps a more poignant benchmark would be during the late game, when in older versions of Civilization it could take 20 minutes to cycle around the AI players before the human regained control. The new version of Civilization has an integrated ‘AI Benchmark’, although it is not part of our benchmark portfolio yet, due to technical reasons we are trying to solve. Instead, we run the graphics test, which provides an example of a mid-game setup at our settings.

At both 1920x1080 and 4K resolutions, we run the same settings. Civilization 6 has sliders for MSAA, Performance Impact and Memory Impact. The latter two refer to detail and texture size respectively, and are rated between 0 (lowest) to 5 (extreme). We run our Civ6 benchmark in position four for performance (ultra) and 0 on memory, with MSAA set to 2x.

For reviews where we include 8K and 16K benchmarks (Civ6 allows us to benchmark extreme resolutions on any monitor) on our GTX 1080, we run the 8K tests similar to the 4K tests, but the 16K tests are set to the lowest option for Performance.

As a reminder, ASRock were not able to loan us the exact GPU that I normally use for our gaming testing. Instead we were able to source an RX 580, so this means that our gaming testing data will only have two data points: a Core i7-8700K and a Core i7-8086K. We will get some more data next week when we are back in the office.

All of our benchmark results can also be found in our benchmark engine, Bench.

ASRock RX 580 Performance

Civilization 6 (1080p, Ultra)
Civilization 6 (1080p, Ultra)

Civilization 6 (4K, Ultra)
Civilization 6 (4K, Ultra)

Almost zero difference for Civilization between the two. The 8086K is never in a situation to fire up to 5.0 GHz.



Shadow of Mordor

The next title in our testing is a battle of system performance with the open world action-adventure title, Middle Earth: Shadow of Mordor (SoM for short). Produced by Monolith and using the LithTech Jupiter EX engine and numerous detail add-ons, SoM goes for detail and complexity. The main story itself was written by the same writer as Red Dead Redemption, and it received Zero Punctuation’s Game of The Year in 2014.

A 2014 game is fairly old to be testing now, however SoM has a stable code and player base, and can still stress a PC down to the ones and zeroes. At the time, SoM was unique, offering a dynamic screen resolution setting allowing users to render at high resolutions that are then scaled down to the monitor. This form of natural oversampling was designed to let the user experience a truer vision of what the developers wanted, assuming you had the graphics hardware to power it but had a sub-4K monitor.

The title has an in-game benchmark, which we run with an automated script that implements the graphics settings, selects the benchmark, and parses the frame-time output that is dumped to the drive. The graphics settings include standard options such as Graphical Quality, Lighting, Mesh, Motion Blur, Shadow Quality, Textures, Vegetation Range, Depth of Field, Transparency and Tessellation. There are standard presets as well.

We run the benchmark at 1080p and a native 4K, using our 4K monitors, at the Ultra preset. Results are averaged across four runs and we report the average frame rate, 99th percentile frame rate, and time under analysis.
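From the raw frame-time data, the three reported metrics reduce to a few lines of arithmetic; a sketch of that reduction (the 'time under' threshold here is illustrative):

```python
def frame_metrics(frame_times_ms, threshold_fps=30.0):
    """Reduce a list of per-frame render times (in ms) to average FPS,
    99th percentile FPS, and seconds spent below a frame-rate threshold."""
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s
    # The 99th percentile frame time marks the slowest 1% of frames;
    # inverting it gives the '99th percentile' frame rate.
    slowest = sorted(frame_times_ms)
    p99_ms = slowest[int(0.99 * len(slowest)) - 1]
    p99_fps = 1000.0 / p99_ms
    # 'Time under': total seconds spent on frames below the threshold.
    time_under_s = sum(t for t in frame_times_ms
                       if 1000.0 / t < threshold_fps) / 1000.0
    return avg_fps, p99_fps, time_under_s
```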

All of our benchmark results can also be found in our benchmark engine, Bench.

 

ASRock RX 580 Performance

Shadow of Mordor (1080p, Ultra)
Shadow of Mordor (1080p, Ultra)

Shadow of Mordor (4K, Ultra)
Shadow of Mordor (4K, Ultra)



Rise of the Tomb Raider

One of the newest games in the gaming benchmark suite is Rise of the Tomb Raider (RoTR), developed by Crystal Dynamics, and the sequel to the popular Tomb Raider which was loved for its automated benchmark mode. But don’t let that fool you: the benchmark mode in RoTR is very much different this time around.

Visually, the previous Tomb Raider pushed realism to the limits with features such as TressFX, and the new RoTR goes one stage further when it comes to graphics fidelity. This leads to an interesting set of requirements in hardware: some sections of the game are typically GPU limited, whereas others with a lot of long-range physics can be CPU limited, depending on how the driver can translate the DirectX 12 workload.

Where the old game had one benchmark scene, the new game has three different scenes with different requirements: Geothermal Valley (1-Valley), Prophet’s Tomb (2-Prophet) and Spine of the Mountain (3-Mountain) - and we test all three. These are three scenes designed to be taken from the game, but it has been noted that scenes like 2-Prophet shown in the benchmark can be the most CPU limited elements of that entire level, and the scene shown is only a small portion of that level. Because of this, we report the results for each scene on each graphics card separately.

Graphics options for RoTR are similar to other games in this type, offering some presets or allowing the user to configure texture quality, anisotropic filter levels, shadow quality, soft shadows, occlusion, depth of field, tessellation, reflections, foliage, bloom, and features like PureHair which updates on TressFX in the previous game.

Again, we test at 1920x1080 and 4K using our native 4K displays. At 1080p we run the High preset, while at 4K we use the Medium preset which still takes a sizable hit in frame rate.

It is worth noting that RoTR is a little different to our other benchmarks in that it keeps its graphics settings in the registry rather than a standard ini file, and unlike the previous TR game the benchmark cannot be called from the command-line. Nonetheless we scripted around these issues to automate the benchmark four times and parse the results. From the frame time data, we report the averages, 99th percentiles, and our time under analysis.

All of our benchmark results can also be found in our benchmark engine, Bench.

ASRock RX 580 Performance

Rise of the Tomb Raider (1080p, Ultra)

Rise of the Tomb Raider (1080p, Ultra)



Rocket League

Hilariously simple pick-up-and-play games are great fun. I'm a massive fan of the Katamari franchise for that reason — pressing start on a controller and rolling around, picking up things to get bigger, is extremely simple. Until we get a PC version of Katamari that I can benchmark, we'll focus on Rocket League.

Rocket League combines the elements of pick-up-and-play, allowing users to jump into a game with other people (or bots) to play football with cars with zero rules. The title is built on Unreal Engine 3, which is somewhat old at this point, but it allows users to run the game on super-low-end systems while still taxing the big ones. Since the release in 2015, it has sold over 5 million copies and seems to be a fixture at LANs and game shows. Users who train get very serious, playing in teams and leagues with very few settings to configure, and everyone is on the same level. Rocket League is quickly becoming one of the favored titles for e-sports tournaments, especially when e-sports contests can be viewed directly from the game interface.

Based on these factors, plus the fact that it is an extremely fun title to load and play, we set out to find the best way to benchmark it. Unfortunately for the most part automatic benchmark modes for games are few and far between. Partly because of this, but also on the basis that it is built on the Unreal 3 engine, Rocket League does not have a benchmark mode. In this case, we have to develop a consistent run and record the frame rate.

Read our initial analysis on our Rocket League benchmark on low-end graphics here.

With Rocket League, there is no benchmark mode, so we have to perform a series of automated actions, similar to a racing game having a fixed number of laps. We take the following approach: Using Fraps to record the time taken to show each frame (and the overall frame rates), we use an automation tool to set up a consistent 4v4 bot match on easy, with the system applying a series of inputs throughout the run, such as switching camera angles and driving around.

It turns out that this method is nicely indicative of a real bot match, driving up walls, boosting and even putting in the odd assist, save and/or goal, as weird as that sounds for an automated set of commands. To maintain consistency, the commands we apply are not random but time-fixed, and we also keep the map the same (Aquadome, known to be a tough map for GPUs due to water/transparency) and the car customization constant. We start recording just after a match starts, and record for 4 minutes of game time (think 5 laps of a DIRT: Rally benchmark), with average frame rates, 99th percentile and frame times all provided.

The graphics settings for Rocket League come in four broad, generic settings: Low, Medium, High and High FXAA. There are advanced settings in place for shadows and details; however, for these tests, we keep to the generic settings. For both 1920x1080 and 4K resolutions, we test at the High preset with an unlimited frame cap.

All of our benchmark results can also be found in our benchmark engine, Bench.

ASRock RX 580 Performance

Rocket League (1080p, Ultra)
Rocket League (1080p, Ultra)



Grand Theft Auto V

The highly anticipated iteration of the Grand Theft Auto franchise hit the shelves on April 14th 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA doesn’t provide graphical presets, but opens up the options to users and extends the boundaries by pushing even the hardest systems to the limit using Rockstar’s Advanced Game Engine under DirectX 11. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, when cranked up to maximum it creates stunning visuals but hard work for both the CPU and the GPU.

For our test we have scripted a version of the in-game benchmark. The in-game benchmark consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence that lasts around 90 seconds. We use only the final part of the benchmark, which combines a flight scene in a jet followed by an inner city drive-by through several intersections followed by ramming a tanker that explodes, causing other cars to explode as well. This is a mix of distance rendering followed by a detailed near-rendering action sequence, and the title thankfully spits out frame time data.

There are no presets for the graphics options on GTA, allowing the user to adjust options such as population density and distance scaling on sliders, but others such as texture/shadow/shader/water quality from Low to Very High. Other options include MSAA, soft shadows, post effects, shadow resolution and extended draw distance options. There is a handy option at the top which shows how much video memory the options are expected to consume, with obvious repercussions if a user requests more video memory than is present on the card (although there’s no obvious indication if you have a low end GPU with lots of GPU memory, like an R7 240 4GB).

To that end, we run the benchmark at 1920x1080 using an average of Very High on the settings, and also at 4K using High on most of them. We take the average results of four runs, reporting frame rate averages, 99th percentiles, and our time under analysis.

All of our benchmark results can also be found in our benchmark engine, Bench.

ASRock RX 580 Performance

Grand Theft Auto (1080p, VHigh)
Grand Theft Auto (1080p, VHigh)



Overclocking Performance: CPU Tests

In the third page of the review we showed our overclocking results, with our CPU managing to hit 5.1 GHz stable with a sizeable increase in voltage. Running at 5.1 GHz incurred rather high temperatures however, so for our benchmark suite we dialed back to 5.0 GHz and ran a number of our tests again at this speed. We also ran some benchmarks at stock frequency but with increased DRAM frequencies, running the DRAM in our ASRock-provided system at DDR4-3466, slightly overclocked beyond its DDR4-3200 sticker value.

For this page (and the next), we’ll show the overclocked results of the Core i7-8086K using the fast memory kits as well as the 5.0 GHz overclocked setting (at base memory). The Core i7-8700K numbers are also included for reference.

FCAT Processing

System: FCAT Processing ROTR 1440p GTX980Ti Data

3DPM v2.1

System: 3D Particle Movement v2.1

Dolphin v5

System: Dolphin 5.0 Render Test

DigiCortex v1.20

System: DigiCortex 1.20 (32k Neuron, 1.8B Synapse)

Blender

Rendering: Blender 2.78

POV-Ray

Rendering: POV-Ray 3.7

Cinebench R15 ST

Rendering: CineBench 15 SingleThreaded

Cinebench R15 MT

Rendering: CineBench 15 MultiThreaded

7-zip

Encoding: 7-Zip Combined Score

TrueCrypt

Encoding: AES

GeekBench 4 ST

Office: Geekbench 4 - Single Threaded Score (Overall)

GeekBench 4 MT

Office: Geekbench 4 - MultiThreaded Score (Overall)

For everything except the most lightly threaded workloads, overclocking the 8086K to a flat-out 5 GHz shows some reasonable gains. These results aren't anything you couldn't already extrapolate from the clockspeeds, but it's nice to put theory into practice. It also highlights the unfortunate shortcoming of the CPU: being able to turbo one thread to 5 GHz just isn't that useful, since you'll very rarely have a complete system workload that allows it, even if the heaviest workload is single-threaded. The 8086K simply begs to be run at a flat-out 5 GHz to get the most out of its capabilities.



Overclocking Performance: GPU Tests

In the third page of the review we showed our overclocking results, with our CPU managing to hit 5.1 GHz stable with a sizeable increase in voltage. 5.1 GHz also ran hot, so for our benchmark suite we dialed back to 5.0 GHz and ran a number of our gaming tests again at this speed. We also ran some benchmarks at stock frequency but with increased DRAM frequencies, with the DRAM in our ASRock-provided system at DDR4-3466, slightly overclocked beyond its DDR4-3200 sticker value.

For this page, we’ll show the overclocked results of the Core i7-8086K using the fast memory kit as well as the 5.0 GHz overclocked setting (at base memory). The Core i7-8700K numbers are also included for reference.

Civilization 6

Civilization 6 (1080p, Ultra)

Shadow of Mordor

Shadow of Mordor (1080p, Ultra)

Rise of the Tomb Raider

Rise of the Tomb Raider (1080p, Ultra)

Grand Theft Auto V

Grand Theft Auto (1080p, VHigh)

There's not much to say with our GPU testing since we ended up being GPU-bound most of the time against the Radeon RX 580. In a more CPU-limited scenario overclocking should help, but these aren't it. Though at some point I'd like to dig into Civilization 6 turn times with the 8086K, as that stands to prove more impactful.



Conclusions: Save Your Money

Intel launching the Core i7-8086K as a 40th Anniversary part took us by surprise. On paper, the processor is a slightly higher binned version of the Core i7-8700K, with a +300 MHz bump on the base frequency and the single-core turbo frequency, allowing Intel to announce the 8086K as its first 5 GHz processor on the market.

We must give a BIG thanks to ASRock for letting us borrow a system in Taipei at such short notice.

Our Final Analysis comes in three parts, depending on how you are planning to use the processor.

1) Running at Stock

On paper, the change in specifications is a little underwhelming, to be honest. At stock frequencies, the per-core turbo of the CPU is identical to the i7-8700K from two cores of load up to full load. Meanwhile, the processor will almost never shift out of turbo and drop to its improved base frequency, thanks to the ample power and cooling capabilities of desktop PCs. This means that the only real performance benefit users will see is when the CPU is under a single-core stress.

Given the nature of PCs having multiple applications open at once or running in the background, a truly isolated single-core load almost never happens: in fact, with our processor we were only able to trigger a core to 5.0 GHz when we set the affinity to a single core. In that respect, the Core i7-8086K is very limited, especially when it commands a premium price ($425) over its nearest rival, which is often sold for much less (the 8700K at $350 or below).
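For anyone who wants to see the 5.0 GHz bin engage at stock, pinning the workload is the trick we used; a minimal sketch using the third-party psutil package, with the workload itself left to the reader:

```python
import psutil

# Pin the current process (child processes inherit this by default) to
# one logical core, so the scheduler cannot spread the load and the
# single-core turbo bin can actually engage.
psutil.Process().cpu_affinity([0])

# ... launch the single-threaded workload from here ...
```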

In our ‘stock’ results, this analysis bore fruit. In most benchmarks, the 8086K was on par with the 8700K. In a few, like CineBench R15 ST, it took a lead and afforded a new record due to the high frequency, but in others it seemed to perform worse, such as Blender and WinRAR, likely due to the thermal performance and response of our specific chip.

For anyone looking to buy the Core i7-8086K to run it at stock frequencies, save your money. There are better deals elsewhere.

2) Going to Overclock

In our testing, and corroborated by extreme overclocker Alva Jonathan, the Core i7-8086K seems to be a nice part if you want all six cores running at 5.0 GHz. Our chip ran a full flat 5.0 GHz with only the CPU multiplier adjusted, the motherboard sorting out the rest, and with a little care we could get 5.1 GHz under our Blender-stable test.

High performance is great, and a Core i7-8086K at 5.0 GHz offers a nice level of performance. It is possible to win the silicon lottery and get a Core i7-8700K that overclocks just as well, but these 8086K processors are binned slightly better, so the chances of a full flat 5.0 GHz are much higher. If removing that risk is worth another $65+, then the Core i7-8086K should be on your list.

As shown by Alva, going beyond 1.3 volts can benefit from delidding the processor, as it uses the same thermal goop found in the previous Coffee Lake processors. Intel did not make any change to the thermal interface, which I know is not what enthusiasts want to hear. Intel constantly states its commitment to its enthusiasts, and moving away from the very average thermal interface would be the easiest way to show that commitment.

A downside to going up to and over 5.0 GHz is the power consumption. Overclocking to 5.0 GHz improved performance by 16% in CPU-bound scenarios, thanks to the matching frequency gain, but it increased the chip's power consumption by 68%. Intel's 14++ manufacturing process can indeed turn out a Skylake-class CPU core capable of these kinds of high frequencies, but it's clear that it's well past the knee in terms of energy efficiency. The upshot is that while energy efficiency does take a dive when overclocking, the performance can still easily be yours, as these kinds of voltages and wattages are exactly what high-end CPU coolers are designed for.

3) For CPU Collectors

Intel is set to only produce 50,000 of these processors, so it is likely a must-buy for any collectors, and you probably have your orders in already. If not, our Amazon link is below.

Or fingers crossed that you entered the sweepstakes and might win. Those processors are likely to arrive in 6-8 weeks.

Is This Launch Just a Stunt?

Ultimately Intel did not need to launch a 40th Anniversary processor. While it is a multiple of ten, 40th anniversaries are not especially notable for corporations. Still, for a product that was seemingly spurred by a Twitter joke, the 8086K is not entirely without merit.

At this point Intel is between a rock and a hard place: the Core i7-8700K competes directly against the Ryzen 7 2700X, winning in single threaded performance and low resolution gaming, losing in multi-threaded performance, and equal at GPU-limited high resolution gaming. We know that the 8-core Coffee Lake processor is due out later this year, but that is in Q4, which is a while away. In that time, AMD will launch a 32-core Threadripper 2 product available to everyone. In terms of actual launches, this is more a hold-over.

Intel had many options for a new Coffee Lake processor, and on paper the Core i7-8086K does not look that great compared to the Core i7-8700K. The best thing Intel could have done here is give the top SKU a little extra TDP to play with, so that the per-core turbo values across the range would get a sizeable bump. Sure, going to 105 W might seem like Intel copying AMD, but the performance would speak for itself. Otherwise, users are paying +$75 for a better binned part.

The danger of offering a significantly higher performance processor in that way is that it might be seen as cannibalizing future sales. If the 9th Generation ‘i7-9700K’ does not have a 105 W TDP, it might not sell. But Intel is promising that these 8086K processors are a limited edition and a limited run, meaning that Intel controls the flow of product in the market. If there are no 8086K processors left to buy, then the 9700K takes the top spot. Intel could have done this, and made the Core i7-8086K a better step up worthy of an anniversary edition, but the company decided not to. It's clear why Intel did not seed the press for the launch of this part: the uptick in frequencies is minor, and Intel expects to sell the whole lot anyway, so additional media coverage wasn't really needed.

The flip side of that, however – as Ryan and I have been debating internally – is whether this is a processor even meant for stock usage. Intel’s binning process means that for one core to be able to run at 5.0 GHz at a reasonable voltage, all of the other cores are practically guaranteed to do so as well. Which is to say that this chip is incredibly trivial to overclock to 5.0 GHz, even more so than the 8700K. In fact, it feels like Intel really wanted to release a true all-core 5 GHz CPU – damn the power requirements – but chickened out at the last moment and decided to require end-users to press the magic awesome button to unlock its full potential.

Ryan says that he can't imagine anyone well-read on the subject of CPUs not overclocking this chip to a flat 5.0 GHz, and after discussion I’m inclined to agree with him. Which makes the 8086K’s merits all about how it’s framed. Is it a one-off CPU whose stock performance improvement is too low to matter, or is it a backdoor attempt by Intel to release a highly-binned, high wattage Coffee Lake processor for customers who want the highest clocked Intel CPU out there?

Ultimately the question becomes whether the Core i7-8086K is a good buy or not. The Core i7-8086K is, without a doubt, Intel’s best performing mainstream desktop processor ever, and benefits in some tests from the additional single core turbo frequency, although only in a few select tests. The best benefits of the processor come in its overclocking, with the two units mentioned in this review easily hitting 5.0 GHz across all cores. It is nice performance, and if you want the best it makes sense that you have to pay the most (or get lucky), but it is a hard sell for most users.

Intel Could Have Done More

My advice? If you are truly deciding between the Core i7-8700K and the Core i7-8086K, then get the i7-8700K. While having an anniversary edition might make you feel proud in the short term – being able to put it in your forum signature or reddit flair for a few years, or having that higher overclock put a grin on your face – the ultimate difference is minimal and down to perception and placebo effect. Spend the extra on a bigger SSD or more memory. It’s a nice part, but Intel could have done more.
