Original Link: https://www.anandtech.com/show/10127/overclockable-core-i3-supermicro-c7h170-m-intel-core-i3-6100te-review
Why an Overclockable Core i3 Might Not Exist: The Supermicro C7H170-M and Intel Core i3-6100TE Review
by Ian Cutress on March 17, 2016 10:30 AM EST
When a new Intel platform hits the market, there are two primary product chains: processors and chipsets. For the most part, at least in the consumer space, any processor should work with any chipset, with the higher end chipsets offering more functionality at an added cost. For Skylake, Z170 is the top model, with H170, B150, Q150 and H110 filling out the rest, the Q-series being more business focused. The Supermicro C7H170-M reviewed here has the H170 chipset, but was also the first to come with BCLK overclocking for non-K processors. This is our first proper look at a Supermicro motherboard in a long time, as well as a probe into the brief window of H170 and non-K overclocking, which provides some deep indicators into the current processor lineup.
Other AnandTech Reviews for Intel’s 6th Generation CPUs and 100-Series Motherboards
Skylake-K Review: Core i7-6700K and Core i5-6600K - CPU Review
Comparison between the i7-6700K and i7-2600K in Bench - CPU Comparison
Overclocking Performance Mini-Test to 4.8 GHz - Overclocking
Skylake Architecture Analysis - Microarchitecture
Z170 Chipset Analysis and 55+ Motherboards - Motherboard Overview
Discrete Graphics: An Update for Z170 Motherboards - PCIe Firmware Update
Price Check: Intel Skylake i7-6700K and i5-6600K - Skylake Price Check (2/16)
100-Series Motherboard Reviews:
Prices Correct at time of each review
($500) The GIGABYTE Z170X-Gaming G1 Review
($250) The ASUS Maximus VIII Impact
($240) The ASRock Z170 Extreme7+ Review
($230) The MSI Z170 Gaming M7 Review
($208) The GIGABYTE Z170-UD5 TH Review
($165) The ASUS Z170-A Review
($128) The Supermicro C7H170-M Review (this review)
To read specifically about the Z170 chip/platform and the specifications therein, our deep dive into what it is can be found at this link.
A Side Note
This piece is going to play out a little differently to our regular CPU/motherboard coverage, purely due to the story of what is happening around the industry. I want to start by bringing everyone up to speed on what we know…
A Brief History of Skylake Overclocking
The story of how to overclock on Intel’s latest consumer focused platform reads like a novella on envy. The 6th generation platform, codenamed Skylake, introduced a number of features in the base design to enhance a particular way of overclocking: adjusting the ‘base frequency’ of the processor. Historically (pre-2009), most overclocking was performed this way, but with the introduction of Nehalem and especially Sandy Bridge, Intel changed the fundamental way different parts of the processor were synchronized, and users were limited to ‘multiplier’ overclocking, and only on certain CPU parts that enabled the feature.
So for example, here is our i7-5960X processor, with an out-of-the-box frequency of 3.3 GHz (33x100) being overclocked to 4.6 GHz (46x100):
Core Speed ~3300 MHz on left and ~4600 MHz on right
Nominally the frequency of a processor is the base frequency (typically 100 MHz) multiplied by the CPU multiplier (anywhere from 12-42 depending on the model). Thus a processor like the i3-6100TE in this review runs at a base frequency of 100 MHz with a multiplier of 27x, giving 2700 MHz (or 2.7 GHz) as the frequency. Since Nehalem/Sandy Bridge, as more functionality was integrated into the processor, more features were tied to the base frequency of the processor. As a result, PCIe frequency (the speed at which the root complex of the processor communicates with graphics cards), chipset/DMI frequency (between the CPU and IO ports) and DRAM frequency (CPU to memory) were all tied into this value. Minor adjustments in the base frequency could take place (110-113 MHz on Sandy Bridge, 107 MHz on Ivy Bridge/Haswell, 104 MHz on Broadwell), but when pushed too far either the PCIe communication or the chipset communication would fail – the latter being important as it could lead to corruption of stored data.
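As a quick sanity check, the arithmetic above can be expressed in a few lines (a minimal sketch in Python; the multipliers and base clock values are the ones quoted in this article, and the function name is our own):

```python
# Effective CPU frequency is simply base clock (BCLK) times multiplier.
def cpu_frequency_mhz(bclk_mhz, multiplier):
    """Return the effective core frequency in MHz."""
    return bclk_mhz * multiplier

# i3-6100TE at stock: 100 MHz BCLK x 27 = 2700 MHz (2.7 GHz)
print(cpu_frequency_mhz(100, 27))  # 2700

# i7-5960X overclocked via multiplier alone: 100 MHz x 46 = 4600 MHz
print(cpu_frequency_mhz(100, 46))  # 4600

# Pre-Skylake BCLK headroom was small (e.g. ~107 MHz on Haswell),
# because PCIe, DMI and DRAM clocks all rose with the base clock.
print(cpu_frequency_mhz(107, 27))  # 2889
```

The point of the decoupling discussed below is that on Skylake the first argument can be raised without dragging the PCIe and chipset clocks along with it.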
For Skylake, the layout of the CPU frequency generation was adjusted. Either as a fundamental design choice or due to other factors, Intel decided to decouple a large number of aspects of the platform from the fundamental base frequency, either providing transition segments or having separate base frequency generators. This, at a high level, means that different parts of the system can operate at different base frequencies as well as different multipliers, and the decoupling mechanisms would handle how data is moved around.
This looks like a complex diagram, and I won’t fool you: implementing this is a fundamentally complex problem. In the same way that targeting 60 FPS alone makes it a lot easier to create a game, a single constant base frequency would make the platform easier to design. But in a nutshell, there are three frequency domains: the core/memory/uncore, the PCIe communication and the chipset. This means we can adjust one of these three without affecting the others. When we normally overclock a processor, it is the first one, core/memory/uncore, that matters the most.
Supermicro is First to Fight
For the Skylake processors, this was supposed to be limited to certain high-cost CPU models, such as the i7-6700K, the i5-6600K and the mobile i7-6820HK – but Supermicro was the first to find a way to enable this method along two different vectors. Firstly, as a function of feature set: base clock overclocking was only designed to operate on Z170 chipsets – Supermicro enabled it on an H170 chipset. Secondly, as a function of processor: base clock overclocking was only designed to operate on K/HK processors – Supermicro enabled it on the other processors from Intel.
We reported this when it was first becoming known to the media. The overclocker Dhenzjhen, an employee of Supermicro, used a Core i3-6320 and pushed it from 100 MHz up to 127 MHz by increasing the base clock on his C7H170-M motherboard using custom firmware.
At the time, the method of making this change was not being promoted. As we have since learned, it requires a separate external clock generator for parts of the processor so that they do not change when the CPU core is moved above 100 MHz. These clock generators are standard on Z170 motherboards, but moving one down to an H170 series motherboard, along with custom firmware to manage the change, opened up non-K and non-Z overclocking on the Supermicro motherboard.
Other motherboard manufacturers quickly followed suit to include the method. ASRock were the loudest, stating that the adjustment was easy to make on motherboards that already had the secondary clock generator (because the same layouts are sometimes used for both Z and H series chipsets), and from CES they were poised to release a series of motherboards that enabled overclocking in this non-Z and non-K methodology, even going as far as showing enterprise level motherboards with the feature, potentially allowing Xeon based processors to be overclocked. The other three major motherboard manufacturers were fairly tight-lipped on the matter by comparison.
There are obvious avenues where having overclockable processors would be helpful to parts of the industry, especially as it was the norm back in the early part of the century. On the Xeon side, it would aid enterprise customers who prioritize response times (such as the financial industry) and revamp the supposed ‘high response time’ products that currently exist with a 3% overclock. For consumers and regular customers, it would enable the mythical ‘overclockable dual core i3’ processor that has been on the lips of enthusiasts. After the overclockable Pentium G3258, launched in 2014, failed to offer great performance when overclocked, enthusiasts are interested in overclocked i3 performance metrics. Also, if all CPUs are overclockable, it means that perhaps the highest frequency part isn’t what you need, and more money can be spent elsewhere.
Pulling the Carpet
However, the ‘feature’ seems to have been taken away as quickly as it was introduced. We reported that only sixty days after our initial coverage on the feature, ASRock had removed all advertising and references to their upcoming line of motherboards designed for base clock overclocking. Every ASRock motherboard on the market that had an updated BIOS for this feature had another update which removed the feature.
ASRock were unwilling to say on the record why the feature was removed and why the advertising had been pulled. We noted that because the other motherboard manufacturers had not been as headstrong, there was less of an effect and the removal focused around ASRock. We also contacted Supermicro at the time, and were told they were monitoring the situation, but they did not mention whether they had to push an update.
Of course, there has been immense speculation as to what had happened, if there was a push from a certain direction, or if the microcode update (to 0x76 as above) from Intel had purposefully broken or locked the feature. Considering the potential adjustment in CPU/system sales that might occur as a result of this feature, a number of possible theories emerged.
It should be noted that the BIOS versions that support non-K overclocking on ASRock motherboards are still available for download, but due to the microcode update it may not be possible to roll back to a previous version. If someone buys an ASRock motherboard with a BIOS version from before the update, it could be flashed to enable the feature, though your mileage may vary as to which BIOS version the motherboard will have out of the box. The planned motherboards that ASRock had with this feature out of the box were all mothballed when the decision to pull the feature was made. Conceivably a user could get a working overclocking motherboard and keep it as is, without updating the BIOS – and not updating the BIOS accounts for most motherboards sold anyway.
The motherboard that started the ball rolling on this feature, the Supermicro C7H170-M (which is the motherboard in this review) is still for sale at Newegg, currently at $128 on 3/17. This doesn’t mention the BIOS revision however, and the sole BIOS on the Supermicro website does not mention a date or a changelog, so at this point it is hard to tell the state of play. The motherboard has not been removed from sale at least.
So where does this leave us? We had the motherboard and Core i3 overclocked and tested, but the series of events throws up a number of questions as to whether the data is even relevant any more. Thankfully, the past week has changed this. Under the radar, it seems ASRock is still trying to release motherboards with the feature. Various tech websites have picked up new listings for the H170 Performance/Hyper and B150 Gaming K4/Hyper, with base clock overclocking as a core feature, although we suspect that ASRock will not put out a formal press release announcing these products so as not to be seen promoting a non-Intel-supported feature. It is unclear if ASRock has found another way to implement the feature, or is purposefully holding back the microcode for these models, and I’m sure if we asked they wouldn’t tell us.
But this brings me back to this review. It sounds as if the base clock overclocking saga is being played out like a panto and we’re not sure who is in the lead role, who is the panto dame, the comic lead or the villain in this context, purely due to a lack of transparency. Either way, here’s what we’re going to do here.
In a Nutshell
In this review we will test:
- The Supermicro C7H170-M motherboard, with our regular i7-6700K
- The Intel Core i3-6100TE at stock frequency
- The effect of base clock overclocking on the Core i3-6100TE
- The threat of an overclocked Core i3 and why there isn’t one
Quick Links to Review Pages
Brief History of Skylake Overclocking
Supermicro C7H170-M Overview
Motherboard Features and Visual Inspection
Supermicro C7H170-M BIOS
Supermicro C7H170-M Software
Motherboard System Performance
Motherboard Processor Performance
Motherboard Gaming Performance
The Core i3-6100TE: An Unlikely Candidate?
Core i3-6100TE Office and Web Performance
Core i3-6100TE Professional Performance: Windows
Core i3-6100TE Professional Performance: Linux
Core i3-6100TE Gaming Performance: High End GTX 980/R9 290X
Base Clock Overclocking the Core i3-6100TE: Scaling
Base Clock Overclocking the Core i3-6100TE: Competition
Supermicro Going Consumer
When it comes to consumer grade motherboards, at least on the enthusiast side, 90%+ of our coverage consists of the top four manufacturers – mostly due to sales figures and reader interest. Every so often we get in a sample from the next tier of vendor, which can throw us a curveball based on price, software and/or utility. Arguably Supermicro is in this latter crowd, purely in terms of consumer volume, but they have been a primary Intel partner for two decades and make most of their revenue in the enterprise space. Back at Computex 2015, I sat down with one of the CEO’s main advisors and we spoke about the consumer motherboard space, and how/if/whether Supermicro should launch into the area. At the time I mentioned three points to them:
- The base quality of consumer motherboards is a relatively high bar to match. The four main companies going at it have had multiple generations of learning, updating, fixing and tweaking their design. Customers expect a lot, even at the bottom end of the market.
- The motherboard market is declining in volume. Each manufacturer is redoubling efforts to maintain their sales volume, let alone their market share. This means having engineers, good marketing, and a clear working relationship with customers on all levels, some of which Supermicro may not be familiar with.
- Brand presence and technical prowess are the main avenues to get people talking about your product. Having both the correct stack of parts for your customers as well as something new and innovative (either livery or active feature) is how users will understand your parts, and simple gimmicks are easy to see through.
At the time, Supermicro were quietly confident. They have large technical teams, albeit server based, and a large number of enterprise customers that would appreciate the server touch in a consumer grade sale. However, I would argue that from my perspective, 2014 and 2015 were relatively dull from Supermicro. We technically had their Z87 overclocking motherboard in for review, for example, but I read several reviews where the BIOS needed a lot of work, the software was non-existent, and enthusiasts wanting to push the boat out were going nowhere.
We never got around to reviewing the motherboard, due to time constraints with other reviews, but Supermicro was willing to listen to my feedback last year on the state of the industry. They have since moved to selling motherboards through the regular retail channels to get a semblance of market share, and are also trying to build a brand around the SuperO name, which has seen several motherboards launched for Skylake including this green one we reported on late last year. But by some swift engineering, Supermicro managed to be at the center of one of the most interesting overclocking stories in a number of years.
The OC story started with the motherboard we are reviewing in this piece, the C7H170-M. On the previous page, we went through the trials and tribulations of how base frequency overclocking on Intel non-K processors is fundamentally enabled by the base CPU design but locked by default, was then unlocked when certain hardware changes were made, then locked again by firmware, but might be re-enabled in certain circumstances. Throughout the debacle, Supermicro has held firm and not removed any product from the market, but is also being tight-lipped on their updates.
Supermicro C7H170-M Overview
At $128 as the current retail price over at Newegg, the Supermicro C7H170-M is the cheapest motherboard we have tested on the Skylake platform so far, but also uses the cheaper H series chipset in a microATX sized motherboard. The H170 chipset is the first step down from the high end Z170, and as such comes with a few more restrictions. H series chipsets, for example, are designed for systems that incorporate a single discrete graphics card (which fundamentally covers most PC users), and have a lower number of high speed ports for PCIe based RAID storage or extra controllers connected to the chipset.
As for the motherboard, it's clear that Supermicro are taking things like livery a bit more seriously. The board is busy – lots of contact pads, pin-connection switches and new sizes/combinations of push buttons. This is mixed with the new color scheme, which can be a bit off-putting. But for $128, there are a number of points, both positive and negative, on the bill of materials.
At this price point I was glad to see an Intel I219-V network controller as well as the high end Realtek ALC1150 audio codec. Typically with a cheaper motherboard, audio and networking are the first to be downgraded, but Supermicro has kept them here. We have no USB 3.1 unfortunately, which is atypical of our 100-series coverage so far, but the board supports all the USB 3.0 ports that the chipset offers. Befitting a server motherboard company, we get a trusted platform module header as well as a power switch, but not a two-digit debug display for error codes. At this price point and board size there is a full complement of memory slots, supporting JEDEC speeds up to 16GB per module of DDR4. This is a motherboard that isn't really built for overclocking, despite the nature of this review, so as a result we get a five phase power delivery design using standard server-grade VRMs and chokes.
On the BIOS and software side, it is clear that Supermicro has a lot of work still to do in terms of user experience. They have transitioned from a bland BIOS interface to something graphical, though it is significantly clunky in both mouse speed and the ease of moving into certain sections with the keyboard. There’s also the utility aspect, such as fan controls, which have been reduced to optimal or full-speed only. I would say that the overclocking options, although basic, give most people an easy way to overclock by offering a simple drop-down of base clock values in 5 MHz increments.
The software stack centers on monitoring software, oddly through an HTML interface, which is probably indicative of how server systems are usually controlled (even though we don’t have an IPMI connection here). That being said, the tool does provide a lot of information, even though it is not as extensive as what the regular consumer motherboard manufacturers provide.
Performance was a mixed bag in the grand scheme of things, albeit with a few interesting segments above the price band: there’s no Multi-Core Turbo here, the DPC Latency was high and POST times are beyond 30 seconds, but the power consumption between idle and load is decent enough and the audio results put the solution as one of the best we’ve tested so far on Skylake.
At this point, for $128, the C7H170-M comes across as a nice motherboard to have, but only if it comes with the overclocking feature and/or retains its position as the only motherboard capable of non-Z and non-K base frequency overclocking. That’s where the true value lies, mostly because there are other motherboards in this price range that have more features. As it currently stands, base clock overclocking is still listed on retailers as its main feature (3/17), so if it still says that when purchased but is removed at a later date, I would assume it could be returned.
Quick Board Feature Comparison
| Motherboard Comparison | Supermicro C7H170-M | MSI Z170 Gaming M7 |
|---|---|---|
| Socket | LGA1151 | LGA1151 |
| MSRP at Review | $128 | $230 |
| DRAM | 4 x DDR4 | 4 x DDR4 |
| PCIe Layout | x16 | x8/x8 |
| BIOS Version Tested | v1.0c | 142 |
| MCT Enabled Automatically? | No | Yes |
| USB 3.1 (10 Gbps) | No | ASMedia ASM1142, 1 x Type-A, 1 x Type-C |
| M.2 Slots | 1 x PCIe 3.0 x4 | 2 x PCIe 3.0 x4 |
| U.2 Ports | No | No |
| Network Controller | 1 x Intel I219-V | 1 x Killer E2400 |
| Audio Controller | Realtek ALC1150 | Realtek ALC1150 |
| HDMI 2.0 | No | No |
Board Features
For $128, it is perhaps odd that we’re not seeing USB 3.1 here as it is one of the primary reasons for users to upgrade to a Skylake based system. The audio and networking portion are good for the price, and there are certainly plenty of USB 3.0 ports/headers to make up for the deficit. The lack of fan controls is somewhat of an issue, especially with all the headers, and for this price we would have also liked to have seen a two-digit debug to help with errors.
| Supermicro C7H170-M | |
|---|---|
| Warranty Period | 3 Years |
| Product Page | Link |
| Price | Amazon US |
| Size | mATX |
| CPU Interface | LGA1151 |
| Chipset | Intel H170 |
| Memory Slots (DDR4) | Four DDR4, supporting 64GB, dual channel, up to 2133 MHz |
| Memory Slots (DDR3L) | None |
| Video Outputs | HDMI, DisplayPort, DVI-D |
| Network Connectivity | Intel I219-V |
| Onboard Audio | Realtek ALC1150 |
| PCIe Slots for Graphics (from CPU) | 1 x PCIe 3.0 (x16) |
| PCIe Slots for Other (from PCH) | 1 x PCIe 3.0 x4, 1 x PCIe 3.0 x1 |
| Onboard SATA | Six, RAID 0/1/5/10 |
| Onboard SATA Express | None |
| Onboard M.2 | 1 x PCIe 3.0 x4, RAID 0/1/5/10 |
| Onboard U.2 | None |
| USB 3.1 | None |
| USB 3.0 | 4 x rear panel, 4 via headers |
| USB 2.0 | 2 x rear panel, 2 via headers |
| Power Connectors | 1 x 24-pin ATX, 1 x 8-pin CPU, 1 x 4-pin |
| Fan Headers | 1 x CPU (4-pin), 4 x Fan (4-pin) |
| IO Panel | 1 x combination PS/2, 2 x USB 2.0, 4 x USB 3.0, 1 x network RJ-45, HDMI, DisplayPort, DVI-D, audio jacks |
| Other Features | Thunderbolt header, Power/Clear CMOS buttons, BIOS Restore button, front panel header, front audio header, COM header |
In The Box
We get the following:
Quick List
Rear IO Shield
Driver DVD
M.2 Screws
Four double-length SATA cables
The C7H170-M certainly comes in an interesting box shape, but inside there isn’t much to talk about – this is to be expected for a $128 motherboard. The double length SATA cables are interesting though, a first on any motherboard I’ve tested. Perhaps Supermicro’s customer research teams flagged SATA cable length as one of their customers’ primary concerns? Not sure there.
Visual Inspection
Supermicro’s gaming line is designated ‘SuperO’, and much like almost all of the gaming motherboard lines on the market it comes in a red and black livery. Whether or not you like the red and black combination, it seems that Supermicro is running into a few aesthetic issues that the other motherboard manufacturers have already come up against: the white boxes printed around every part for the automated placement machines detract from the overall look. This will probably be looked at, but it will take time to adjust, as it did with the other manufacturers.
By virtue of this being an H series chipset, there is not much of the over-engineering we see on Z series chipset based motherboards. The power delivery has a small heatsink over one part of it rather than the whole set, and this doesn’t extend over other areas or to the chipset heatsink because it doesn’t really need to. The socket area around the CPU bracket is very busy, with plenty of standard filter caps and resistors, which also takes away from the look. I would also point out that the DRAM spacing between the slots is irregular, which looks odd, and given this I would assume that Supermicro is not implementing a T-topology memory design.
For users keeping track of fan header placement, the socket has immediate access to all five – three 4-pin headers along the top (the CPU one is the one in the middle), one 4-pin on the left near the 4-pin power, and one to the right above the SATA ports.
On the top right side of the motherboard, we have three buttons. In most motherboard designs, these would be power, reset and reset BIOS – but here they are power, reset BIOS and BIOS restore. The last one is to restore the BIOS in the event of a corruption – because Supermicro does not have an easy way to update the BIOS yet, any attempt to flash the BIOS is more risky than the other manufacturers right now, so this button may be vital in some circumstances.
At this point I want to talk about the excessive amount of jumpers on this motherboard.
In total I count ten, most of which are not labelled in the materials which leads me to believe that they’re just for internal testing when designing the motherboard. These are typically removed in the final design, but for whatever reason they are kept here.
Most of them are on the bottom of the motherboard, and this is where we find the second USB 3.0 header (the first being above the SATA slots), a TPM header, the front panel header, a COM header, a USB header and a Thunderbolt card header.
There is no special shielding here in play for the Realtek ALC1150 audio codec on the left hand side, but it seems to perform well in our tests. The PCIe layout gives a single PCIe 3.0 x16 from the CPU – the H series chipset means that Supermicro has to play by the rules and only offer a single PCIe slot from the CPU when the H-series is in use. The other two PCIe slots are an x4 and x1 from the chipset – I would have preferred if these were open ended, as this would allow other x8 or x16 cards to be used, albeit with limited bandwidth. Above the PCIe slots is our M.2 slot, supporting PCIe 3.0 x4 M.2 drives.
The rear IO panel has a combination PS/2 port, two USB 2.0 ports, three video outputs (DVI-D, HDMI, DisplayPort), four USB 3.0 ports, a network port and the audio jacks.
Test Setup
| Test Setup | |
|---|---|
| Processor | Intel Core i7-6700K (ES, Retail Stepping), 91W, $350, 4 cores, 8 threads, 4.0 GHz (4.2 GHz Turbo) |
| | Intel Core i3-6100TE, 35W, $117, 2 cores, 4 threads, 2.7 GHz |
| Motherboard | Supermicro C7H170-M |
| Cooling | Cooler Master Nepton 140XL |
| Power Supply | OCZ 1250W Gold ZX Series, Corsair AX1200i Platinum |
| Memory | Corsair DDR4-2133 C15 2x8 GB 1.2V or G.Skill Ripjaws 4 DDR4-2133 C15 2x8 GB 1.2V |
| Memory Settings | JEDEC @ 2133 |
| Video Cards | ASUS GTX 980 Strix 4GB, MSI GTX 770 Lightning 2GB (1150/1202 Boost), ASUS R7 240 2GB |
| Hard Drive | Crucial MX200 1TB |
| Optical Drive | LG GH22NS50 |
| Case | Open Test Bed |
| Operating System | Windows 7 64-bit SP1 |
Readers of our motherboard review section will have noted the trend in modern motherboards to implement a form of MultiCore Enhancement / Acceleration / Turbo (read our report here) on their motherboards. This does several things, including better benchmark results at stock settings (not entirely needed if overclocking is an end-user goal) at the expense of heat and temperature. It also gives in essence an automatic overclock which may be against what the user wants. Our testing methodology is ‘out-of-the-box’, with the latest public BIOS installed and XMP enabled, and thus subject to the whims of this feature. It is ultimately up to the motherboard manufacturer to take this risk – and manufacturers taking risks in the setup is something they do on every product (think C-state settings, USB priority, DPC Latency / monitoring priority, overriding memory sub-timings at JEDEC). Processor speed change is part of that risk, and ultimately if no overclocking is planned, some motherboards will affect how fast that shiny new processor goes and can be an important factor in the system build.
For reference, on the Supermicro C7H170-M with our testing BIOS 1.0c, MCT was not enabled by default. Also, the FCLK 10x ratio was not present in the BIOS at the time of testing.
Many thanks to...
We must thank the following companies for kindly providing hardware for our test bed:
Thank you to AMD for providing us with the R9 290X 4GB GPUs.
Thank you to ASUS for providing us with GTX 980 Strix GPUs and the R7 240 DDR3 GPU.
Thank you to ASRock and ASUS for providing us with some IO testing kit.
Thank you to Cooler Master for providing us with Nepton 140XL CLCs.
Thank you to Corsair for providing us with an AX1200i PSU.
Thank you to Crucial for providing us with MX200 SSDs.
Thank you to G.Skill and Corsair for providing us with memory.
Thank you to MSI for providing us with the GTX 770 Lightning GPUs.
Thank you to OCZ for providing us with PSUs.
Thank you to Rosewill for providing us with PSUs and RK-9100 keyboards.
Supermicro C7H170-M BIOS
As noted in many previous motherboard reviews, some users care deeply about the BIOS interface, whereas others might not care at all. In all honesty, it only gets accessed by a few percent of all users, and usually at most just to set defaults or due to an accidental BIOS reset. Despite this, manufacturers need to spend time on it for two reasons – enthusiasts and optimization. Because the C7H170-M is being advertised as an overclocking motherboard, this means the overclocking tools should be easy to use.
One of the issues of using BIOSes outside the main four motherboard manufacturers is that they tend to be 2-3 years behind in terms of interface, implementation and options. Supermicro is in this space – while there is a graphical interface, it is a bit of a handful to use and doesn’t open up as many options as I would like. One example is that the BIOS does not have a screenshot mode, so apologies for the following photos of a screen. One big aspect I should point out here is that the BIOS does not have an update tool – in order to update the BIOS, the user needs to have a DOS bootable USB with the required files already in place or use the HTML interface after already installing an OS.
The first screen on entry is a basic display showing the time, the board name and the BIOS version. Typically we want to see a lot more in the opening screen – the CPU installed, the CPU speed, voltage, temperature, the DRAM installed, the DRAM speed, the storage drives installed, the fans installed, the fan speeds, the boot order, and basically everything that could facilitate an easy fix for 85% of all problems without entering any other menus. In time, Supermicro will learn to add this, and should see that other motherboard manufacturers typically do this via an Easy Mode.
Regarding the controls of the BIOS, it can be rather confusing to get to grips with. There is mouse movement and selection, however the use of a high-DPI implementation of the BIOS and my usual DPI setting on my mouse meant that it takes a while to scroll over to any of the options. Normally I would use a keyboard anyway, but that can be confusing too. Selecting an option on the far left automatically moves the cursor to the new menu on the right, but it is not always obvious whether it is the first option in the secondary tabs or the list of options at all. Normally most BIOS implementations would leave the cursor on the far left and not move it at all, so you can go through each of the main tabs without any forced movement. I suspect that Supermicro hasn’t done much QA or market research on their BIOS implementation beyond the small group of engineers that coded it.
As for the BIOS options themselves, we have a few options worth talking about. The first set of menus are the System Information screens, which as mentioned above should all be placed in a single entry screen rather than split apart.
The Processor/CPU tab is the list of standard options we typically get relating to CPUs: hyperthreading, ratio, power states, C-states, turbo mode and so on.
As with other BIOS implementations, the actual overclocking options are in a different menu. Supermicro has them here under ‘Extra Performance’, which gives a single menu with a drop down for base clock frequency adjustment (BCLK Clock Frequency, that second clock being grammatically redundant). Here, with our i3-6100TE processor sample, it offered 100 MHz to 150 MHz in 5 MHz increments.
It is worth noting here that this isn’t an automatic overclock look-up table as with some other motherboard vendors – this is simply a MHz adjustment, and users will have to manage their own processor voltages. In this case we have a CPU core offset, rather than an absolute value. This can play havoc if the DVFS table decides that the stock voltage needs to be high, and is why we typically request absolute value adjustments (such as the System Agent voltage shown in the screenshot). We also like to see load line calibration options, but they are not enabled here. Nevertheless, our overclocking tests showed that without touching the voltages, we were able to move up to 135 MHz without issue.
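Because the multiplier on locked parts is fixed, the BCLK adjustment maps directly onto the final core frequency. A quick sketch of that arithmetic, assuming the i3-6100TE’s stock 27× multiplier (2.7 GHz at the default 100 MHz BCLK):

```python
# Sketch: effective core frequency from BCLK on a locked Skylake CPU.
# Assumes the i3-6100TE's 27x multiplier, which cannot be raised on locked parts.
MULTIPLIER = 27

def core_frequency_mhz(bclk_mhz: float, multiplier: int = MULTIPLIER) -> float:
    """Locked parts can only scale via BCLK: f_core = multiplier * BCLK."""
    return multiplier * bclk_mhz

# The C7H170-M offers 100 MHz to 150 MHz in 5 MHz steps:
for bclk in range(100, 155, 5):
    print(f"BCLK {bclk} MHz -> {core_frequency_mhz(bclk) / 1000:.3f} GHz")
# e.g. a 135 MHz BCLK gives 27 * 135 = 3645 MHz, i.e. 3.645 GHz
```

Note that BCLK also feeds the memory and uncore clocks, so a higher base clock raises more than just the cores.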
Memory options are relatively limited here – no option to change any sub-timings, but the motherboard is only rated at DDR4-2133 anyway. The maximum memory frequency and fast boot options are the primary ones people may use.
The booting menu is similar to other vendors, offering a complete list for boot options. There is no boot-override option here though, to boot from a device in a single instance. By default the boot mode is set to Legacy, which users may want to change when booting via UEFI.
The Input/Output tab is where we see the majority of the options we normally see in a BIOS, such as AHCI/RAID with the chipset ports as above, or enabling/disabling controllers as shown below.
Ideally we would like to see an image of the board and a list of everything that is user installed, such as memory, PCIe devices, USB ports. Both ASRock and MSI do this as a handy aid when hardware might have an issue or is not detected properly.
The monitoring tab is the usual array of temperatures, fan speeds and voltages, although they are split up somewhat and could have been displayed on the same screen. For users that are into their fan controls, unfortunately Supermicro only offers ‘standard’ and ‘full speed’, which is extremely limited. We typically suggest that a motherboard vendor implements an on-screen point-and-click multi-point gradient in a graphical interface, ideally with hysteresis so the fans stay on for a short while when coming out of an intense workload to help with cooling.
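The sort of control we have in mind can be sketched as a multi-point temperature-to-duty curve with hysteresis on the way down. The point values and the 5°C threshold here are illustrative, not anything Supermicro exposes:

```python
# Sketch: a multi-point fan curve with hysteresis (illustrative values).
# Points are (temperature C, fan duty %); between points we interpolate linearly.
CURVE = [(30, 20), (50, 40), (70, 80), (85, 100)]
HYSTERESIS_C = 5  # when cooling, hold the higher duty until temp falls 5C further

def duty_from_temp(temp_c: float) -> float:
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    if temp_c >= CURVE[-1][0]:
        return CURVE[-1][1]
    for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
        if t0 <= temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)

class FanController:
    def __init__(self):
        self.peak_temp = 0.0
    def update(self, temp_c: float) -> float:
        # Coming out of a hot workload, track the recent peak and only relax
        # the duty once the sensor has cooled HYSTERESIS_C below it.
        if temp_c >= self.peak_temp - HYSTERESIS_C:
            self.peak_temp = max(self.peak_temp, temp_c)
        else:
            self.peak_temp = temp_c + HYSTERESIS_C
        return duty_from_temp(self.peak_temp)
```

With this scheme a brief dip from 80°C to 78°C leaves the fans running at their higher duty, avoiding the audible hunting that a bare look-up table produces.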
Nothing else is worth discussing in the BIOS, aside from the ability to save a couple of profiles. For enthusiasts it is worth noting that the button to save and exit the BIOS is F4, rather than the F10 we see on consumer platforms.
Ultimately Supermicro has a long way to go in order to make the BIOS as consumer friendly as their competition, and as mentioned above it is eerily similar to the implementations we saw from the big four when they first started going graphical in the BIOS. Hopefully the Supermicro engineers can have a look at their competition in detail and take some feedback as to how to move forward with their design.
Supermicro C7H170-M Software
Historically, asking a motherboard manufacturer outside of the big four about their software package usually throws up some horror stories or completely blank looks. I have had manufacturers provide just the basic drivers on a disk before, or that plus a basic tool with a poor GUI that ends up crashing when a few options are selected. As with every review, I set aside my predispositions and was looking forward to Supermicro’s software attempts, especially given our previous discussions about how interested they are in the consumer market. In our meetings together back at Computex I laid out the fact that their competitors have many years of experience in this, so it may take some time to match their quality. In a surprising twist, it seems that Supermicro has fallen back on their historic experience and gone not so much with a software GUI, but a web interface for their analysis tools. They call it Super Doctor 5.
I should explain. Supermicro, as a server company, has for many years dealt with management chips (such as ASPEED variants) that allow users to access some system controls and monitoring tools via a web interface even when the system is powered but not turned on. Having created their own IPMI interfaces for these management chips over a number of years, these tools have now been turned to the consumer crowd. Typically a consumer motherboard will not have a management chip, so this is more just an interface for the user to see the current status or adjust some minor aspects of the system.
As shown in the screenshot above, after an initial install, we get the standard motherboard monitoring metrics: fan speed, voltages and temperatures. There is the motherboard name listed, but we don’t have the usual series of information I would want: CPU installed, CPU frequency, memory installed, memory speed, storage devices attached, boot order, fan speeds, and/or perhaps even a picture of the motherboard in use. There are a lot of things possible, but this is a basic list I would expect.
Perhaps unsurprisingly then, all the information I want is in the System Info tab along the top. The interface is easy to use, for sure: a summary with separate options on the left, with the results on the right. The problem for me is that the interface is lifeless – there’s no user experience here. A user in 2016 normally requests a tailored and styled experience, but here there is none to speak of.
Some of the sub menus, like this Disk Drive one, are more along the lines of what looks good, although the decision to set the width at 100% makes everything look stretched.
The configuration tab links in with the hardware monitoring to give a series of alerts should the system get too hot or something fail and voltages drop. A user can set up an SMTP email server which can be used to send an email when it happens – a standard thing in the server industry, but you rarely see it in the consumer part because if the system gets a low voltage point, it is more likely to shut off rather than have something always-on to send an email. There is also an option here for users to flash the BIOS.
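The alert mechanism itself amounts to the standard SMTP flow. A minimal sketch of the sort of email a tool like Super Doctor might send when a sensor trips a threshold; the server name, addresses and credentials here are hypothetical placeholders, not anything Supermicro ships:

```python
# Sketch: sending a hardware alert over SMTP, in the spirit of Super Doctor 5's
# alert option. Server, accounts and thresholds are hypothetical placeholders.
import smtplib
from email.message import EmailMessage

def build_alert(sensor: str, value: float, limit: float) -> EmailMessage:
    msg = EmailMessage()
    msg["Subject"] = f"ALERT: {sensor} = {value} (limit {limit})"
    msg["From"] = "monitor@example.com"  # hypothetical addresses
    msg["To"] = "admin@example.com"
    msg.set_content(f"Sensor '{sensor}' reported {value}, outside the limit of {limit}.")
    return msg

def send_alert(sensor: str, value: float, limit: float) -> None:
    # Hypothetical SMTP server and credentials, as a user would configure them
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.login("monitor@example.com", "app-password")
        server.send_message(build_alert(sensor, value, limit))

# Example: a CPU Vcore reading below a configured minimum
# send_alert("CPU Vcore", 0.95, 1.05)
```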
The monitoring tab shows the boundaries for each of the components the user can monitor, and makes them adjustable via text boxes.
One interesting addition is this tab on power control, to turn the system off. Again, this is more of a server feature – log in through the management chip in order to restart a system that isn’t responding. There isn’t much demand for this in a management chip-less consumer based system.
Overall, Supermicro’s software package is interesting, if a bit light. Typically for a consumer product we get full fan controls, or if overclocking is enabled, the ability to adjust base frequency and voltages on the fly. There’s also the lack of added software features, which we see on other vendors, such as audio packages or gaming focused software which Supermicro doesn’t have (macros, sniper features, network management).
System Performance
Not all motherboards are created equal. On the face of it, they should all perform the same and differ only in the functionality they provide – however this is not the case. The obvious differentiator is power consumption, but the manufacturer’s ability to optimize USB speed, audio quality (based on the audio codec), POST time and latency also matters. This can come down to manufacturing process and prowess, so these are tested.
Power Consumption
Power consumption was tested on the system while in a single MSI GTX 770 Lightning GPU configuration with a wall meter connected to the OCZ 1250W power supply. This power supply is Gold rated, and as I am in the UK on a 230-240 V supply, leads to ~75% efficiency under 50W load, and 90%+ efficiency at 250W, suitable for both idle and multi-GPU loading. This method of power reading allows us to compare the power management of the UEFI and the board to supply components with power under load, and includes typical PSU losses due to efficiency. These are the real world values that consumers may expect from a typical system (minus the monitor) using this motherboard.
While this method for power measurement may not be ideal, and some may feel these numbers are not representative due to the high wattage power supply being used (we use the same PSU to remain consistent over a series of reviews, and the fact that some boards on our test bed get tested with three or four high powered GPUs), the important point to take away is the relationship between the numbers. These boards are all under the same conditions, and thus the differences between them should be easy to spot.
The C7H170-M pulled in some good low numbers for idling and load, which should be expected for a smaller motherboard without too many controllers. The power delta from long idle to load was 86W, which is one of the best of all the 100-series systems we’ve tested.
Non UEFI POST Time
Different motherboards have different POST sequences before an operating system is initialized. A lot of this is dependent on the board itself, and POST boot time is determined by the controllers on board (and the sequence of how those extras are organized). As part of our testing, we look at the POST Boot Time using a stopwatch. This is the time from pressing the ON button on the computer to when Windows 7 starts loading. (We discount Windows loading as it is highly variable given Windows specific features.)
One drawback of systems outside of the normal big four vendors has historically been POST times, and the C7H170-M continues this trend, being over 30 seconds from power on to seeing Windows 7 being loaded. Cutting out the audio and network controllers for a stripped POST time reduced it to just under 30 seconds, but that is still twice as long as the best 100-series motherboards.
Rightmark Audio Analyzer 6.2.5
Rightmark:AA indicates how well the sound system is built and isolated from electrical interference (either internally or externally). For this test we connect the Line Out to the Line In using a short six inch 3.5mm to 3.5mm high-quality jack, turn the OS speaker volume to 100%, and run the Rightmark default test suite at 192 kHz, 24-bit. The OS is tuned to 192 kHz/24-bit input and output, and the Line-In volume is adjusted until we have the best RMAA value in the mini-pretest. We look specifically at the Dynamic Range of the audio codec used on board, as well as the Total Harmonic Distortion + Noise.
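As a reference point for the dynamic-range numbers, the theoretical ceiling of any digital audio path is set by its bit depth; real codecs sit well below it. A quick sketch of that relationship (the ~115 dB figure for the ALC1150 is Realtek’s quoted DAC SNR, not something we measured here):

```python
# Sketch: the ideal dynamic range of an N-bit quantizer, as an upper bound
# for RMAA results. Full-scale signal vs one LSB: 20*log10(2^N) ~= 6.02*N dB.
import math

def ideal_dynamic_range_db(bits: int) -> float:
    return 20 * math.log10(2 ** bits)

print(f"16-bit ceiling: {ideal_dynamic_range_db(16):.1f} dB")  # ~96.3 dB
print(f"24-bit ceiling: {ideal_dynamic_range_db(24):.1f} dB")  # ~144.5 dB
# Realtek quotes ~115 dB SNR for the ALC1150's DAC, far below the 24-bit
# ceiling, and board layout/isolation lowers the measured figure further.
```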
Using the ALC1150 codec means that the C7H170-M should have some potential, although the board comes without most of the enhancements we typically see with souped up versions of the codec. Perhaps surprisingly we get the best THD+N result out of any codec we’ve ever tested on 100-series motherboards.
USB Backup
For this benchmark, we transfer a set size of files from the SSD to the USB drive using DiskBench, which monitors the time taken to transfer. The files transferred are a 1.52 GB set of 2867 files across 320 folders – 95% of these files are small typical website files, and the rest (90% of the size) are small 30 second HD videos. In an update to pre-Z87 testing, we also run MaxCPU to load up one of the threads during the test which improves general performance up to 15% by causing all the internal pathways to run at full speed.
Due to the introduction of USB 3.1, as of June 2015 we are adjusting our test to use a dual mSATA USB 3.1 Type-C device which should be capable of saturating both USB 3.0 and USB 3.1 connections. We still use the same data set as before, but now use the new device. Results are shown as seconds taken to complete the data transfer.
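At its core the DiskBench run boils down to timing a bulk copy and dividing size by time. A simple sketch of that measurement (the paths are placeholders, and DiskBench itself is a Windows GUI tool rather than a script):

```python
# Sketch: timing a file-set copy and reporting throughput, in the spirit of
# our DiskBench USB test. Source/destination paths are placeholders.
import shutil
import time
from pathlib import Path

def copy_and_time(src: str, dst: str) -> tuple[float, float]:
    """Copy a directory tree; return (seconds elapsed, MB/s)."""
    total_bytes = sum(p.stat().st_size for p in Path(src).rglob("*") if p.is_file())
    start = time.perf_counter()
    shutil.copytree(src, dst)
    elapsed = time.perf_counter() - start
    return elapsed, (total_bytes / 1e6) / elapsed

# elapsed, rate = copy_and_time("C:/testset", "E:/testset")  # E: = USB device
# print(f"{elapsed:.1f} s, {rate:.1f} MB/s")
```

Small-file sets like ours stress per-transfer overhead far more than raw link bandwidth, which is why BIOS/driver tuning shows up so clearly in this test.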
Using the default Intel drivers, the USB 3.0 ports for the C7H170-M gave our worst result so far. This may be down to some BIOS tuning which the other motherboard manufacturers have been doing for many years.
DPC Latency
Deferred Procedure Call latency is a way in which Windows handles interrupt servicing. In order to wait for a processor to acknowledge the request, the system will queue all interrupt requests by priority. Critical interrupts will be handled as soon as possible, whereas lesser priority requests such as audio will be further down the line. If the audio device requires data, it will have to wait until the request is processed before the buffer is filled.
If the device drivers of higher priority components in a system are poorly implemented, this can cause delays in request scheduling and process time. This can lead to an empty audio buffer and characteristic audible pauses, pops and clicks. The DPC latency checker measures how much time is taken processing DPCs from driver invocation. The lower the value will result in better audio transfer at smaller buffer sizes. Results are measured in microseconds.
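To put the numbers in context, the audio buffer sets the margin a DPC spike has to fit inside. A quick sketch of that relationship; the buffer sizes are typical examples, not what any specific driver uses:

```python
# Sketch: how DPC latency relates to audio buffer headroom.
# Buffer sizes are illustrative; durations are in microseconds.
def buffer_duration_us(samples: int, sample_rate_hz: int) -> float:
    return samples / sample_rate_hz * 1e6

def dropout_risk(dpc_latency_us: float, samples: int, rate: int) -> bool:
    # If servicing DPCs takes longer than the buffer lasts, the buffer runs
    # dry before it can be refilled -> an audible pop or click.
    return dpc_latency_us > buffer_duration_us(samples, rate)

# A 256-sample buffer lasts ~5333 us at 48 kHz but only ~1333 us at 192 kHz,
# so the same DPC spike that is harmless at 48 kHz can cause a dropout at 192 kHz.
for rate in (48000, 192000):
    print(rate, f"{buffer_duration_us(256, rate):.0f} us",
          "at risk" if dropout_risk(1500, 256, rate) else "ok")
```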
DPC Latency is still an odd discussion point on 100-series. We’ve seen ASUS get it right, MSI not too far behind but the others are playing catchup.
CPU Performance, Short Form
For our motherboard reviews, we use our short form testing method. These tests usually focus on if a motherboard is using MultiCore Turbo (the feature used to have maximum turbo on at all times, giving a frequency advantage), or if there are slight gains to be had from tweaking the firmware. We leave the BIOS settings at default and memory at JEDEC (DDR4-2133 C15) for these tests, making it very easy to see which motherboards have MCT enabled by default.
Video Conversion – Handbrake v0.9.9: link
Handbrake is a media conversion tool that was initially designed to help convert DVD ISOs and Video CDs into more common video formats. For HandBrake, we take two videos (a 2h20 640x266 DVD rip and a 10min double UHD 3840x4320 animation short) and convert them to x264 format in an MP4 container. Results are given in terms of the frames per second processed, and HandBrake uses as many threads as possible.
Compression – WinRAR 5.0.1: link
Our WinRAR test from 2013 is updated to the latest version of WinRAR at the start of 2014. We compress a set of 2867 files across 320 folders totaling 1.52 GB in size – 95% of these files are small typical website files, and the rest (90% of the size) are small 30 second 720p videos.
Point Calculations – 3D Movement Algorithm Test: link
3DPM is a self-penned benchmark, taking basic 3D movement algorithms used in Brownian Motion simulations and testing them for speed. High floating point performance, MHz and IPC wins in the single thread version, whereas the multithread version has to handle the threads and loves more cores. For a brief explanation of the platform agnostic coding behind this benchmark, see my forum post here.
Rendering – POV-Ray 3.7: link
The Persistence of Vision Ray Tracer, or POV-Ray, is a freeware package for, as the name suggests, ray tracing. It is a pure renderer, rather than modeling software, but the latest beta version contains a handy benchmark for stressing all processing threads on a platform. We have been using this test in motherboard reviews to test memory stability at various CPU speeds to good effect – if it passes the test, the IMC in the CPU is stable for a given CPU speed. As a CPU test, it runs for approximately 2-3 minutes on high end platforms.
Synthetic – 7-Zip 9.2: link
As an open source compression tool, 7-Zip is a popular tool for making sets of files easier to handle and transfer. The software offers up its own benchmark, to which we report the result.
Gaming Performance 2015
Our 2015 gaming results are still relatively new, but the issue of FCLK settings might play a big role here. At launch, the default setting for the communication buffer between the CPU and PCIe stack was 800 MHz, even though Intel suggested 1000 MHz, but this was because of firmware limitations from Intel. Since then, there is firmware to enable 1000 MHz, and most motherboard manufacturers have this - but it is unclear if the motherboard will default to 1000 MHz and it might vary from BIOS version to BIOS version. As we test at default settings, our numbers are only ever snapshots in time, but it leads to some interesting differences in discrete GPU performance.
Alien: Isolation
If first person survival mixed with horror is your sort of thing, then Alien: Isolation, based off of the Alien franchise, should be an interesting title. Developed by The Creative Assembly and released in October 2014, Alien: Isolation has won numerous awards from Game Of The Year to several top 10s/25s and Best Horror titles, ratcheting up over a million sales by February 2015. Alien: Isolation uses a custom built engine which includes dynamic sound effects and should be fully multi-core enabled.
For low end graphics, we test at 720p with Ultra settings, whereas for mid and high range graphics we bump this up to 1080p, taking the average frame rate as our marker with a scripted version of the built-in benchmark.
Total War: Attila
The Total War franchise moves on to Attila, another The Creative Assembly development, and is a stand-alone strategy title set in 395AD where the main story line lets the gamer take control of the leader of the Huns in order to conquer parts of the world. Graphically the game can render hundreds/thousands of units on screen at once, all with their individual actions and can put some of the big cards to task.
For low end graphics, we test at 720p with performance settings, recording the average frame rate. With mid and high range graphics, we test at 1080p with the quality setting. In both circumstances, unlimited video memory is enabled and the in-game scripted benchmark is used.
Grand Theft Auto V
The highly anticipated iteration of the Grand Theft Auto franchise finally hit the shelves on April 14th 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA doesn’t provide graphical presets, but opens up the options to users and extends the boundaries by pushing even the hardest systems to the limit using Rockstar’s Advanced Game Engine. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, when cranked up to maximum it creates stunning visuals but hard work for both the CPU and the GPU.
For our test we have scripted a version of the in-game benchmark, relying only on the final part which combines a flight scene along with an in-city drive-by followed by a tanker explosion. For low end systems we test at 720p on the lowest settings, whereas mid and high end graphics play at 1080p with very high settings across the board. We record both the average frame rate and the percentage of frames under 60 FPS (16.6ms).
GRID: Autosport
No graphics tests are complete without some input from Codemasters and the EGO engine, which means for this round of testing we point towards GRID: Autosport, the next iteration in the GRID and racing genre. As with our previous racing testing, each update to the engine aims to add in effects, reflections, detail and realism, with Codemasters making ‘authenticity’ a main focal point for this version.
GRID’s benchmark mode is very flexible, and as a result we created a test race using a shortened version of the Red Bull Ring with twelve cars doing two laps. The car in focus starts last and is quite fast, but usually finishes second or third. For low end graphics we test at 1080p medium settings, whereas mid and high end graphics get the full 1080p maximum. Both the average and minimum frame rates are recorded.
Middle-Earth: Shadow of Mordor
The final title in our testing is another battle of system performance with the open world action-adventure title, Shadow of Mordor. Produced by Monolith using the LithTech Jupiter EX engine and numerous detail add-ons, SoM goes for detail and complexity to a large extent, despite having to be cut down from the original plans. The main story itself was written by the same writer as Red Dead Redemption, and it received Zero Punctuation’s Game of The Year in 2014.
For testing purposes, SoM gives a dynamic screen resolution setting, allowing us to render at high resolutions that are then scaled down to the monitor. As a result, we get several tests using the in-game benchmark. For low end graphics we examine at 720p with low settings, whereas mid and high end graphics get 1080p Ultra. The top graphics test is also redone at 3840x2160, also with Ultra settings, and we also test two cards at 4K where possible.
The Core i3-6100TE: An Unlikely Candidate?
Because of Supermicro’s big story regarding base clock frequency overclocking on non-K processors with non-Z platforms, it was imperative that we also get a non-K processor in to test with it. Typically Intel only seeds the top processors for review, and we had not had a chance to get other processors in to test when this motherboard arrived, so Supermicro also seeded us a Core i3 processor.
The Core i3-6100TE is an unlikely candidate for this testing. It’s not a processor that a user can go out and buy. The TE designation is a variant of Intel’s low power processors, whereby a T processor is typically 45W and the TE models are even lower - in this case 35W. These processors are typically for larger customers only, or the bigger OEMs, so you are more likely to see them in mini-PCs or all-in-ones rather than custom builds. If you are lucky, a big system distributor (think Dell or Lenovo) or even a large system integrator might have access to them and offer them for sale as part of a system. But by and large, aside from eBay, you would be lucky to find one for sale on its own unless you have a distributor nearby that sells OEM parts.
There are several angles to testing the CPU as well. Firstly, as a processor in its own right – where does it sit in the stack, and does the price merit the performance and power characteristics? Secondly, as a tool for overclocking – can we verify that the changes Supermicro has made to the C7H170-M to enable base clock overclocking on processors like this actually work? Then the third angle, which is perhaps the biggest: how well does an overclocked i3 processor actually perform, and why does Intel not offer an i3-K equivalent?
We will be addressing each of these questions as part of this review.
The Market
For those who are not keeping many tabs on the processor market, Intel’s mainstream desktop processor line comes in five flavors:
Flavor | Cores/Threads | Power | Price | Notes
Core i7 | 4 Cores, 8 Threads | 35W to 95W | $300-$340 | High performance, 2 MB L3 cache per core, enthusiast focused
Core i5 | 4 Cores, 4 Threads | 65W to 91W | $180-$242 | More palatable price, no Hyperthreading, 1.5 MB L3 cache per core, still for enthusiasts
Core i3 | 2 Cores, 4 Threads | 35W to 54W | $117-$150 | Mid-range CPU performance, all except -P with HD 530 graphics, no turbo mode
Pentium | 2 Cores, 2 Threads | 35W to 54W | $64-$86 | Lower CPU performance, 1.5 MB L3 cache per core, no turbo mode
Celeron | 2 Cores, 2 Threads | 35W to 54W | $42-$52 | Low CPU/GPU performance, low-cost option, 1 MB L3 cache per core, no turbo mode
Within each of these flavors, processors will have a number that indicates their position in the stack (e.g. i7-6700, i3-6300), and some will also have a letter that indicates the segment they are in. The several types, for Skylake, are:
Type | Example | Meaning
-K | i7-6700K | Overclocking processor, multiplier unlocked, 91W
(no letter) | i5-6500 | Standard processor, locked, 51W-65W
-T | i3-6100T | Lower power processor, 35W
-TE | i3-6100TE | Similar to -T but with a lower base frequency, aimed at OEMs/embedded, 35W
-P | i3-6098P | Special part for specific OEMs, typically high CPU and low IGP, 54W/65W
Not currently used in Skylake:
-S | i5-4690S | Lower power processor, ~65W
-R | i5-5675R | Uses eDRAM, soldered down
-C | i5-5675C | Uses eDRAM, socketed CPU
Not all processor segments (C/P/i3/i5/i7) combine with every type (K/S/T/TE), and it mostly ends up being a pick and choose depending on how Intel sees the market. So for example, for desktop processors, Skylake has three Core i7 (one K, one T), five Core i5 (one K, one P), seven Core i3 (two T, one TE, one P), six Pentium (two T, one TE) and four Celeron (one T, one TE) parts.
Choosing the CPU, and the Overclocking Conundrum
When a user, or an OEM/SI, needs a processor, several factors come into play. Assuming that they definitely need a Skylake part, the three things most people focus on are performance, cost and power. Depending on which one is the most vital automatically limits the choices – if a user needs the most performance, then a Core i5 or Core i7 is on the cards, or if the user needs something under $120, then the low-model Core i3 parts are as high as you go.
Most enthusiasts who want to overclock have a different set of requirements. At current, only two Skylake processors allow multiplier overclocking – the Core i5-6600K and Core i7-6700K, which we reviewed and looked into overclocking scaling last year. These are 91W parts that start at $242 for the i5, making entry into this market for mainstream enthusiasts only.
It wasn't always like this. Several generations ago, overclocking (via the base frequency) was possible on every CPU on sale, and users would regularly go after the mid-range part with a good cooler and overclock it to be the equivalent of a high-performance processor. It made computing fun, and got me into the world of competitive overclocking which actually ended up with me working for AnandTech, so I’m a big advocate for it. To reach into the nostalgia stakes: back in 2014, with the launch of Haswell’s Devil’s Canyon parts, Intel also launched an overclockable Pentium processor, the Pentium G3258.
The idea behind the G3258 was to offer a cheaper processor (~$72) that could be overclocked and offer a low cost entry into the world of overclocking. As with every review website, we tested the Pentium G3258 in both default and overclocked mode. There were two main conclusions. Firstly, the single core performance at 4.2 GHz was great and it felt like a high-end processor for day-to-day tasks like browsing the web and email. Secondly, because it was physically still a dual-core Pentium processor, overclocking it did not elevate it to the status of a coveted Core i5 at a third of the cost. So despite the price, enthusiasts looking at some interesting cheap overclocking and performance were not impressed, and went back to the Core i5/i7 processors because of the fundamental performance difference.
Intel did not release a Pentium G3258 equivalent for Skylake, so we cannot probe that segment. But one thing that did come out of the G3258 testing was a question on a lot of people’s lips: would an overclockable Core i3 provide enough performance to go after some of the big guns?
Intel has never expressed much interest in an unlocked Core i3. Some users might argue that the G3258 felt like more of a forced part because it was never given a name with the ‘K’ unlocked designation, such as the G3240K (the base processor was a G3240 underneath). Despite Intel’s PR enthusiasm for overclocking, it seems they only want it at the high end of their product stack. An astute observer might point out that offering a cheaper overclockable part might cut into sales, especially average selling price, and Intel has no competition beyond an i3 right now, so it makes sense they do not want to talk about it. But everyone wants to know whether an i3 can branch out in performance.
So this is where Supermicro’s C7H170-M motherboard, our Core i3-6100TE sample, and this review comes in.
It also makes the story regarding base clock overclocking being enabled, then removed, then kind of enabled again interesting to follow.
Results then Overclocking
The next few pages will showcase our usual CPU benchmark suite. Alongside the Core i3-6100TE at stock frequencies, we will also put in our numbers at a stable 135 MHz base clock (a 35% overclock, moving from 2.7 GHz to 3.645 GHz), as well as results from processors in that range for which we have data. After the results, we will discuss the actual process of overclocking, and the results of scaling the base frequency from 100 MHz to 145 MHz. Then we will take a page to answer the question: is overclocking a Core i3 actually worth it?
All of our benchmark results can also be found in our benchmark engine, Bench.
Office Performance
The dynamics of CPU Turbo modes, both Intel and AMD, can cause concern during environments with a variable threaded workload. There is also an added issue of the motherboard remaining consistent, depending on how the motherboard manufacturer wants to add in their own boosting technologies over the ones that Intel would prefer they used. In order to remain consistent, we implement an OS-level unique high performance mode on all the CPUs we test which should override any motherboard manufacturer performance mode.
All of our benchmark results can also be found in our benchmark engine, Bench.
Dolphin Benchmark: link
Many emulators are often bound by single thread CPU performance, and general reports tended to suggest that Haswell provided a significant boost to emulator performance. This benchmark runs a Wii program that raytraces a complex 3D scene inside the Dolphin Wii emulator. Performance on this benchmark is a good proxy of the speed of Dolphin CPU emulation, which is an intensive single core task using most aspects of a CPU. Results are given in minutes, where the Wii itself scores 17.53 minutes.
Dolphin loves single threaded performance, and got a big boost when Haswell was introduced. The overclock puts it within spitting distance of a few i7 parts, and comfortably above the lower clocked i5 processors.
WinRAR 5.0.1: link
Our WinRAR test from 2013 is updated to the latest version of WinRAR at the start of 2014. We compress a set of 2867 files across 320 folders totaling 1.52 GB in size – 95% of these files are small typical website files, and the rest (90% of the size) are small 30 second 720p videos.
Being a variable threaded workload, the 6100TE gains some benefit with an overclock but is still behind the true quad core parts. This is most likely due to cache contention on the hyperthreads.
3D Particle Movement
3DPM is a self-penned benchmark, taking basic 3D movement algorithms used in Brownian Motion simulations and testing them for speed. High floating point performance, MHz and IPC wins in the single thread version, whereas the multithread version has to handle the threads and loves more cores.
For the single threaded test, similarly to the G3258 when overclocked, the frequency and architecture make a big difference. In the multithreaded test, the Core i3-6100TE when overclocked starts to play with the i5 parts.
3D Particle Movement v2.0 beta-1
I am in the process of updating the 3DPM benchmark, and you can follow the progress with source code and files in this thread on our forums. It was pointed out that the original code, written under the naivety of a chemist rather than a computer scientist, might have a rough time with a phenomenon called false sharing, which seems to affect low-cache AMD processors more than Intel processors (but both get a big increase in performance once it is addressed). The software is currently in the beta phase, with the core algorithms in place, but to showcase the difference we ran it on a few processors.
For an upcoming review with AMD Carrizo, we see some interesting results with this new version of 3DPM.
Web Benchmarks
On the lower end processors, general usability is a big factor of experience, especially as we move into the HTML5 era of web browsing. For our web benchmarks, we take four well known tests with Chrome 35 as a consistent browser.
Professional Performance: Windows
Agisoft Photoscan – 2D to 3D Image Manipulation: link
Agisoft Photoscan creates 3D models from 2D images, a process which is very computationally expensive. The algorithm is split into four distinct phases, and different phases of the model reconstruction require either fast memory, fast IPC, more cores, or even OpenCL compute devices to hand. Agisoft supplied us with a special version of the software to script the process, where we take 50 images of a stately home and convert it into a medium quality model. This benchmark typically takes around 15-20 minutes on a high end PC on the CPU alone, with GPUs reducing the time.
The variable workload of Agisoft means there are parts where the code path is single threaded and others where multithreading is near perfect. As a result we see the overclocked i3 march past almost all the AMD parts but not up to the Core i5s.
Cinebench R15
Cinebench is a benchmark based around Cinema 4D, and is fairly well known among enthusiasts for stressing the CPU for a provided workload. Results are given as a score, where higher is better.
HandBrake v0.9.9: link
For HandBrake, we take two videos (a 2h20 640x266 DVD rip and a 10min double UHD 3840x4320 animation short) and convert them to x264 format in an MP4 container. Results are given in terms of the frames per second processed, and HandBrake uses as many threads as possible.
At double 4K, Handbrake requires enough cache and buffers to keep supplying frames. The Core i3 line has a reduced L3 compared to the Core i5s, but also has to share it between threads (1.5MB/thread compared to 2MB per core) which can cause bottlenecks.
Linux Performance
Built around several freely available benchmarks for Linux, Linux-Bench is a project spearheaded by Patrick at ServeTheHome to streamline about a dozen of these tests in a single neat package run via a set of three commands using an Ubuntu 11.04 LiveCD. These tests include fluid dynamics used by NASA, ray-tracing, OpenSSL, molecular modeling, and a scalable data structure server for web deployments. We run Linux-Bench and have chosen to report a select few of the tests that rely on CPU and DRAM speed.
C-Ray: link
C-Ray is a simple ray-tracing program that focuses almost exclusively on processor performance rather than DRAM access. The test in Linux-Bench renders a complex scene, offering a large, scalable scenario.
C-Ray doesn't care much for the overclock, indicating that the bottleneck is elsewhere.
NAMD, Scalable Molecular Dynamics: link
Developed by the Theoretical and Computational Biophysics Group at the University of Illinois at Urbana-Champaign, NAMD is a set of parallel molecular dynamics codes for extreme parallelization up to and beyond 200,000 cores. The reference paper detailing NAMD has over 4000 citations, and our testing runs a small simulation where the calculation steps per unit time is the output vector.
The Molecular Dynamics module of the test certainly prefers more physical cores, with the overclock giving the result a small rise but still lagging behind the Core i5 parts.
NPB, Fluid Dynamics: link
Aside from LINPACK, there are many other ways to benchmark supercomputers in terms of how effective they are for various types of mathematical processes. The NAS Parallel Benchmarks (NPB) are a set of small programs originally designed for NASA to test their supercomputers in terms of fluid dynamics simulations, useful for airflow reactions and design.
Fluid Dynamics appreciates the overclock, and we sit in the middle of the Core i5 parts and well above the previous generation Core i7s.
Redis: link
Many of the online applications rely on key-value caches and data structure servers to operate. Redis is an open-source, scalable web technology with a strong developer base, but also relies heavily on memory bandwidth as well as CPU performance.
With Redis, single-thread speed and IPC are king, so an overclocked Skylake does rather well.
Gaming Benchmarks: High End
On this page are our 2015 high-end results with the top models at their respective release dates – the GTX 980 and R9 290X. Results for the R7 240, GTX 770 and R9 285 can be found in our benchmark database, Bench.
Alien: Isolation
If first person survival mixed with horror is your sort of thing, then Alien: Isolation, based off of the Alien franchise, should be an interesting title. Developed by The Creative Assembly and released in October 2014, Alien: Isolation has won numerous awards from Game Of The Year to several top 10s/25s and Best Horror titles, ratcheting up over a million sales by February 2015. Alien: Isolation uses a custom built engine which includes dynamic sound effects and should be fully multi-core enabled.
For low end graphics, we test at 720p with Ultra settings, whereas for mid and high range graphics we bump this up to 1080p, taking the average frame rate as our marker with a scripted version of the built-in benchmark.
Total War: Attila
The Total War franchise moves on to Attila, another The Creative Assembly development, and is a stand-alone strategy title set in 395AD where the main story line lets the gamer take control of the leader of the Huns in order to conquer parts of the world. Graphically the game can render hundreds/thousands of units on screen at once, all with their individual actions and can put some of the big cards to task.
For low end graphics, we test at 720p with performance settings, recording the average frame rate. With mid and high range graphics, we test at 1080p with the quality setting. In both circumstances, unlimited video memory is enabled and the in-game scripted benchmark is used.
Grand Theft Auto V
The highly anticipated iteration of the Grand Theft Auto franchise finally hit the shelves on April 14th 2015, with both AMD and NVIDIA in tow to help optimize the title. GTA doesn’t provide graphical presets, but opens up the options to users and extends the boundaries by pushing even the hardest systems to the limit using Rockstar’s Advanced Game Engine. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, when cranked up to maximum it creates stunning visuals but hard work for both the CPU and the GPU.
For our test we have scripted a version of the in-game benchmark, relying only on the final part which combines a flight scene along with an in-city drive-by followed by a tanker explosion. For low end systems we test at 720p on the lowest settings, whereas mid and high end graphics play at 1080p with very high settings across the board. We record both the average frame rate and the percentage of frames under 60 FPS (16.6ms).
GRID: Autosport
No graphics tests are complete without some input from Codemasters and the EGO engine, which means for this round of testing we point towards GRID: Autosport, the next iteration in the GRID and racing genre. As with our previous racing testing, each update to the engine aims to add in effects, reflections, detail and realism, with Codemasters making ‘authenticity’ a main focal point for this version.
GRID’s benchmark mode is very flexible, and as a result we created a test race using a shortened version of the Red Bull Ring with twelve cars doing two laps. The car in focus starts last and is quite fast, but usually finishes second or third. For low end graphics we test at 1080p medium settings, whereas mid and high end graphics get the full 1080p maximum. Both the average and minimum frame rates are recorded.
Middle-Earth: Shadow of Mordor
The final title in our testing is another battle of system performance with the open world action-adventure title, Shadow of Mordor. Produced by Monolith using the LithTech Jupiter EX engine and numerous detail add-ons, SoM goes for detail and complexity to a large extent, despite having to be cut down from the original plans. The main story itself was written by the same writer as Red Dead Redemption, and it received Zero Punctuation’s Game of The Year in 2014.
For testing purposes, SoM gives a dynamic screen resolution setting, allowing us to render at high resolutions that are then scaled down to the monitor. As a result, we get several tests using the in-game benchmark. For low end graphics we examine at 720p with low settings, whereas mid and high end graphics get 1080p Ultra. The top graphics test is also redone at 3840x2160, also with Ultra settings, and we also test two cards at 4K where possible.
Base Clock Overclocking the Core i3-6100TE: Scaling
As mentioned at several points in this overall piece, overclocking using the Supermicro C7H170-M was actually really easy. There is a single option in the BIOS under ‘Extra Performance’ where you can change the base frequency from 100 MHz to 150 MHz in 5 MHz increments. This doesn’t adjust the processor voltage, and we have no load line calibrations, but that didn’t seem to matter much.
There is an option for Core Voltage Offset, although I’m not much of a fan of offsets ever since I saw a motherboard a couple of years ago apply a double offset, and I freaked out in case it burned out that $999 CPU. In this case though, we did not have much trouble.
Adjusting the base frequency will adjust the memory speed as well, so the two main limitations will be the processor itself (either physical limits, temperature or voltage) and the memory (also limits, temperature and voltage). This can become tricky to manage as a 35% overclock on DDR4-2133 memory can instantly push it to DDR4-2880. There is an option to reduce the memory multiplier if needed in the BIOS.
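As a rough sketch of the arithmetic involved (the 27x CPU ratio and the DDR4-2133 baseline are taken from this review; the function name is just for illustration):

```python
# Sketch of how BCLK overclocking scales the CPU and DRAM clocks together.
CPU_MULTIPLIER = 27     # 27 x 100 MHz = 2.7 GHz stock for the i3-6100TE
DRAM_STOCK_MTS = 2133   # DDR4-2133 at the stock 100 MHz base clock

def clocks_at_bclk(bclk_mhz):
    """Return (CPU GHz, DRAM MT/s) at a given base clock frequency."""
    cpu_ghz = bclk_mhz * CPU_MULTIPLIER / 1000
    dram_mts = DRAM_STOCK_MTS * bclk_mhz / 100
    return cpu_ghz, dram_mts

for bclk in (100, 135, 145):
    cpu, dram = clocks_at_bclk(bclk)
    print(f"BCLK {bclk} MHz -> CPU {cpu:.3f} GHz, DDR4-{dram:.0f}")
```

At 135 MHz this lands on the 3.645 GHz and DDR4-2880 figures used in this review; dropping the memory multiplier in the BIOS effectively lowers the DRAM baseline in the same calculation.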
As for testing the limits of overclocking, we employed our regular methodology. Increase the frequency, run a pass of our POV-Ray benchmark followed by five minutes of OCCT, noting the voltage, temperature and power where possible. If it passes these two tests, we reboot into the BIOS and increase the frequency. If we fail the test, we would typically go back and reduce the voltage, however this wasn’t needed here. Our results are as follows:
In this case our CPU worked well all through our tests until 145 MHz, or 3.915 GHz for the processor that starts as a 2.7 GHz part. Here when we applied a strong load, it caused the system to blue screen. We tried with other benchmarks, and confirmed that the system was sort of stable, except for gaming and video editing. So as a result we moved back down the scale and found 135 MHz a reasonable compromise for the testing for the full benchmark suite.
But for testing the scaling of the overclock, we ran our short form benchmark suite at 100, 130, 135 and 140 MHz to see if performance in these benchmarks scales accurately. As one of the bigger questions, we of course also did some gaming benchmark testing, using our GTX 980 at each of those points. For users interested in R9 285, R9 290 and GTX 770 data at 135 MHz, please look at our benchmark database.
Testing the Scaling – CPU Short Form
In all of our short form tests, the scaling from 2.7 GHz to 3.78 GHz was near perfect, particularly in 3DPM single threaded. The more you put in, the more you get out, in proportion.
Testing the Scaling – CPU Extra
We also had some extra testing on hand. A few benchmarks showed an odd jump between 135 MHz and 140 MHz, such as Agisoft and Cinebench 10 single thread. Dolphin saw some odd regression at 140 MHz, but the general trend still stood.
Testing the Scaling – GTX 980 Gaming
In our gaming tests, every title showed proportional gains, such that moving from 130 to 135 MHz gave the same increase as moving from 135 to 140 MHz, although in some cases it was really, really minor. The best way to look at it is to plot frame rate against CPU frequency, find where the line cuts frequency = 0, and note the gradient. Two things matter here: if the intercept (the value at frequency = 0) is high, the title offers good performance no matter what; if the gradient is high, you get a better response per adjustment in frequency.
| Benchmark (1080p Ultra) | Intercept | Gradient |
|---|---|---|
| Alien Isolation, Average FPS | 86.59 FPS | 24.1 FPS per GHz |
| Total War: Attila, Average FPS | 10.86 FPS | 6.1 FPS per GHz |
| Grand Theft Auto, Average FPS | 33.11 FPS | 8.6 FPS per GHz |
| Grand Theft Auto, % Frames >16.6ms | - | -20.4% per GHz |
| GRID, Average FPS | 67.71 FPS | 24.9 FPS per GHz |
| GRID, Minimum FPS | 28.00 FPS | 26.0 FPS per GHz |
| Shadow of Mordor 4K, Average FPS | 39.31 FPS | 0.3 FPS per GHz |
| Shadow of Mordor 4K, Minimum FPS | 16.18 FPS | 3.2 FPS per GHz |
From these results, essentially everything except Mordor seems to get really nice gains (proportionally) from increasing the frequency.
As an exercise in stupid numbers, here’s a calculation. Using the intercept and gradient, and assuming a perpetual linear relationship, calculate the frequency needed for 60, 120 or 240 FPS average. The results are:
| Benchmark (1080p Ultra) | Frequency Needed for 60 FPS | Frequency Needed for 120 FPS | Frequency Needed for 240 FPS |
|---|---|---|---|
| Alien Isolation, Average FPS | Always | 1.38 GHz | 6.37 GHz |
| Total War: Attila, Average FPS | 8.06 GHz | Stupid | Even more stupid |
| Grand Theft Auto, Average FPS | 3.13 GHz | 10.10 GHz | Stupid |
| Grand Theft Auto, % Frames >16.6ms | For all frames below 16.6ms: 5.98 GHz | | |
| GRID, Average FPS | Always | 2.10 GHz | 6.92 GHz |
| GRID, Minimum FPS | 1.23 GHz | 3.54 GHz | 8.15 GHz |
| Shadow of Mordor 4K, Average FPS | 69.67 GHz | Stupid | Even more stupid |
| Shadow of Mordor 4K, Minimum FPS | 13.69 GHz | Stupid | Even more stupid |
Because of the titles that scale, I’m inclined to believe some of these numbers, such as Alien Isolation and GRID, but Mordor is just amusing as the minimum scales faster than the average in our small test. Give me a ring if we ever hit 70 GHz.
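These extrapolations are just the fitted line inverted: frequency = (target FPS − intercept) / gradient. A quick sketch using the GRID and GTA rows of the table, under the same dubious assumption that the relationship stays linear forever:

```python
def ghz_for_fps(target_fps, intercept_fps, gradient_fps_per_ghz):
    """Invert FPS = intercept + gradient * GHz for the frequency needed."""
    return (target_fps - intercept_fps) / gradient_fps_per_ghz

# GRID average: intercept 67.71 FPS, gradient 24.9 FPS per GHz
print(f"GRID at 120 FPS needs {ghz_for_fps(120, 67.71, 24.9):.2f} GHz")

# GTA average: intercept 33.11 FPS, gradient 8.6 FPS per GHz
print(f"GTA at 60 FPS needs {ghz_for_fps(60, 33.11, 8.6):.2f} GHz")
```

Sanity check: the GRID average needs (120 − 67.71) / 24.9 = 2.10 GHz, matching the table.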
A Word on Power Consumption
It turns out that power consumption becomes a byline in this test. Between stock frequency and the +35% overclock, the power consumption of this 35W part moved from 32W to 38W, which is pretty much what was to be expected.
Base Clock Overclocking the Core i3-6100TE: The i5 Competition
Now that we have the data, I want to pit the overclocked Core i3-6100TE against the most likely contenders already in our database. Sitting at $117 base cost, and ignoring the fact that it is almost impossible to buy because it is a TE model, we’ll look purely at the overclocked results against an equivalent i5 to see where having four physical cores (and more L3 cache per core) beats a dual core with hyperthreading. We’ve also added in the Pentium G3258 results, overclocked to 4.7 GHz, to see where that sits. The i5 in this comparison is the Core i5-6600, which sits at 3.3-3.9 GHz; we have tested it but not yet written up the review, and the results are included here.
CPU Short Form
Handbrake with a low quality file relies mainly on pure frequency and floating point performance, hence why the overclocked Pentium at 4.7 GHz beats the i3-6100TE at 3.65 GHz.
When we move up to large frame conversion, the benchmark is more in line with the number of threads available as well as frequency, so the i5 takes more of a lead at the top and the Pentium comes down. The overclocked Core i3 holds station at mid-field, and in our benchmark database it sits at the top of the i3 parts, but significantly behind the Core i5s.
Dolphin likes single core performance and high IPC, but also gets a boost from Haswell and beyond in terms of CPU architecture. This is why the G3258 when overclocked can beat almost everything else at stock.
Photoscan is a mixed bag of threading, where at some points high frequency wins the day but at others it is a combination of cores and threads. Here, the lack of true cores (and in turn, L3 cache per thread) is the issue.
While WinRAR is a variable threaded load, it sits more comfortably with more cache, faster memory and more threads. There is still a big gap between the Core i3 parts and the Core i5 parts, even when the Core i3 is overclocked.
Cinebench in single threaded mode is all about frequency and IPC, hence the i3-6100TE OC can beat the older i5 parts. The Pentium G3258 at 4.7 GHz storms ahead here as a result.
However, the lack of true cores brings it down to earth in the multithreaded test. The difference between the overclocked i3-6100TE and the Core i5-6600 is a big 50%, which is hard to make up on frequency alone.
3DPM v1 in single thread mode loves frequency and IPC, hence why the overclocked i3 sits at the bottom of this small graph but in the middle of the older i7 parts in our benchmark database.
In multithreaded mode, while the i3 and i5 parts can spawn similar amounts of threads, the 3.6 GHz overclock on the i3-6100TE isn't enough to bring the fight to the Core i5s.
WebXPRT is a big fan of responsiveness, and having an overclocked system seems to help here. This means both the i3-6100TE OC and G3258 OC storm ahead.
Octane is more multithreaded than WebXPRT, relying more on synthetic testing. In our benchmark database the overclocked i3 pushes above some of the older Core i5s, but the Skylake i5-6600 is still on top.
For AES encryption, the Pentium parts drop out due to the lack of AES-NI instructions, but it does become a case of threads and frequency here.
Overall, the pure CPU results put the stock Core i3 at the bottom end of our table in most tests, but overclocking it by 35% turns it into a very average performer. In single threaded tests, depending on the memory footprint, it either handily beats the older parts or goes toe-to-toe with the Core i5s, usually sitting a pace behind. When the threads come out to play, though, there is still a gap between the Core i3 and Core i5 segments, by virtue of hyperthreads compared to real cores. This makes the issue more to do with cache per thread, and more trips out to higher latency memory to fetch data - typically highly threaded environments are processing a lot of data anyway, making it a compound effect.
GPU Tests on R9 290X
Alien Isolation gets a good +12% boost in frame rates from that 35% overclock, pushing it above the Sandy Bridge i7 when the i7 runs at stock speed, but still behind an i5.
Total War rises to an asymptotic peak in frame rates as cores and frequency increase, and while the overclocked i3 can’t match the i5s, it gets very close, as shown above.
Similarly with GTA, we get a good 20% rise in frame rates from the overclock but it still isn't enough for the last 1-8% or so to the old i7s or newer Core i5s.
GRID responds to a number of benefits, especially relating to DRAM speed, IPC and frequency. Using DDR4 seems to help the Core i3 here, with the overclock giving a good 30% push in frame rates and putting the i3 and i5 within a margin of error.
Mordor is relatively flat on CPU performance.
With the AMD GPU tests, the overclocked Core i3 sits very much in mid table when looking at the big picture. The overclock doesn't really pull any of the games out of the gutter, but the use of DDR4 seems to help in games like GRID which love it when any component is upgraded. In games like Mordor, the GPU is the bottleneck so everyone seems to perform the same.
GPU Tests on GTX 980
In everything except Mordor, the overclocked i3 is anywhere from 10-15% behind the Core i5 in frame rates, but mid-table overall.
Conclusions
Everyone has been wondering for a while just how good an overclocked Core i3 part is. Well, here is our data, and the answer is perhaps somewhat surprising: a faster Core i3 moves itself into a mid-table position. In most cases it sits behind the Core i5 parts, unable to get over that hump of using two threads per core and having to share cache resources between hyperthreads. Having real cores in this instance makes a big difference. In a number of cases, the overclocked Core i3 sits above the older Core i7s, especially when improvements to the architecture have a profound impact on the performance of the processor.
But is an overclocked Core i3 going to feel like a part of higher value?
So Why Do We Not See an Overclockable i3 CPU?
A lot of users interested in the story of whether overclocking is going to come back to the cheaper end of the Intel spectrum are users who were part of the culture that did the overclocking thing back in the Pentium 4 and Core 2 Duo era. Back then it was a miasma of anything goes, with competitions involving extreme overclocking with liquid nitrogen starting to get a foothold within the industry. I remember the reason I got into overclocking: I wanted the performance of the expensive part but only wanted to pay the price of the cheaper part. My gaming system at the time, an X2 4400+, got a severe beating. My graphics card, an X1900XTX, ran hot and eventually died a couple of years later. It wasn’t until I spent some of that student loan money on a Core i7 920 system that I felt I had performance under my belt, but I still overclocked it to within an inch of its life, along with my shiny new 4850 graphics cards.
Giving those processors a 30-50% boost in pure frequency felt like a major accomplishment – I had done something my LAN party friends had not done (well, most of them), and I was blitzing around as if I had spent 3x on my gaming PC. This was the heyday of overclocking, when you could buy cheaper components and feel like a champion, as well as spot a noticeable difference in actual performance.
Fast forward ten years or so: I’m older, I’m wiser, I have a regular income, a family, and limited free time. Suddenly spending $50-$100 to jump from a low end to a mid-range processor, if it merits a return on my free time, sounds like a good investment. If I was in any professional career apart from this one, I might not have time to tinker with an overclock so getting something that works is more important.
However, there are two immediate people in my life that can directly benefit from overclocking, and I’ve mentioned them in previous reviews.
First is my younger brother: in his mid-late 20s, in the games industry as a QA, moving through his career track. He has some money to spend (he bought two pre-overclocked GTX 980s recently), but still wants to push a quad-core mainstream CPU beyond 4.4 GHz for a quicker response time so he can play, watch Twitch, and have a massive amount of background processes. As long as it is stable.
Second is my cousin-in-law: a 16-year old who splits his time between CS:Go and DOTA2 for the most part, as well as some free-to-play MMOs. His system is budget restricted, even when I threw a few parts his way. If overclocking a Core i3 was a thing when we put his system together, I may have put him on one, but I wasn’t keen on going the Pentium route just because of what he does with the system and the results in our Pentium review. In the end he has an AMD setup, slightly overclocked. No doubt it will be upgraded by the time he graduates; depending on if he wants a laptop or a new desktop.
While my younger brother could afford and sit happy with an i5-K, my cousin-in-law is in that ideal spot to start looking for something cheap he can push to the limit, and is about the same age as when I started getting interested in hardware rather than just the games being played on it. Starting to overclock and learning about how hardware works became a quintessential part of who I am today – I wanted to understand exactly what was happening, what tools I needed, and why those tools worked. One poignant question to ask is whether this entry point into technology still exists without a price barrier, and whether it needs to exist.
The purpose of this review boils down to one question: can an overclocked i3 compare to a higher class of processor and save money? The answer to that is no… and yes.
Why Yes
One of the benefits of a highly overclocked Intel processor is responsiveness. As seen with both the overclocked benchmark scores of the Pentium G3258 and the Core i3-6100TE, when single thread performance is paramount it doesn’t matter if the CPU costs $72 or $472, as long as the IPC and frequency is there. The only problem is that the same argument can be made for the cheaper Pentium being overclocked over 4.4 GHz – if all you need is single threaded performance, then there’s already a product on the market.
Why Yes, sort of
Our gaming tests show that the overclocked Core i3 is within spitting distance of Core i5 performance. At worst, with the GTX 980, we saw a 10-15% gap in the average frame rates in our high-end titles. For the cost of an extra $50-100, which equates to almost 100% of the value of a second i3, is it really worth going from 63 FPS in Grand Theft Auto with the i3-6100TE overclocked to 71 FPS on the stock i5-6600? Perhaps that money is better spent on moving up a GPU class. Also, the overclock pushed the minimum frame rate for GRID on the GTX 980 from 99 to 123 FPS, hitting a 120 Hz marker for minimum frame rates. That’s something to be interested in. When we moved to AMD and the R9 290X, the results were even closer between the overclocked i3 and the stock i5. The Core i3 part never truly matched the Core i5, though: if there was a 20% gap between the i3 and the i5 at stock, the overclocked Core i3 would fit in the middle between the two.
Why No, sort of
A big question mark in gaming is DX12, and because of the lack of real world DX12 benchmarks in play right now (AoTS is still beta, 3DMark is synthetic), we’re not sure how it would play out. Using DX12 has both a pull and a push factor. With the right code and engine, using DX12 properly can pull low-performance hardware into something reasonable, and it can push high-performance hardware into something magical. While both sides of the coin are raised, it is difficult to determine if the gap (measuring FPS vs quality) between the two will close or widen. It will mean that low-end overclocked systems can do more, but it will depend on whether CPU throughput is the limiting factor in the game engine. One of the benefits of having more cores, and more cache per core, is that when CPU load is high there is less time spent fetching data from main memory, which would otherwise put a serious drag on performance.
Why No
This point latches on to the last sentence: there are some situations where having more physical cores is a tough barrier to beat. DX12 might expose this a bit more in time, but a large number of our CPU benchmarks with some form of variable threaded workload easily favored the i5, as the threads did not have to share registers or cache within an individual core, allowing the instruction flow to be singularly focused. When a single thread can fill the instruction buffers on its own, HyperThreading can give a negative response due to increased transactions out to slower memory.
The Good Old Days
So while I may wistfully talk about the ‘good old days’ of overclocking, it is clear that the landscape has changed significantly. A decade ago we were overclocking processors with one or two cores, and doing so gave a significant improvement in many areas. Now we have four threads on tap, and with the nature of software changing as well, background processes no longer interfere with single-threaded code, and multi-threaded software can take advantage of the new microarchitectures and hide the older bottlenecks.
Overclocking clearly has its place, but it is obvious that the place it held a decade ago has shifted. Previously it enabled better multi-tasking and raised the ceiling on basic tasks, allowing users to save money on their purchase if they were willing to tweak some options and keep an eye on cooling. Today that ceiling is already high, and given that Intel only sells two high-end K processors for the mainstream segment, overclocking is relegated to the performance junkie with a large budget, unless you want to try your luck with a motherboard that might enable non-K overclocking. But that’s the thing – anyone interested in peak performance wants to take the best and go further, not necessarily take the low end and push it into mid-range.
The Gaming Market, and Overclockers
I’ve always been a big advocate of focal marketing. If you want to sell a product, there is no point in marketing it to the wrong audience (the whole concept of influencers or halo products/Top Gear is another topic I won’t discuss here). One metric I like to showcase is derived from market observations: PC gaming can be broadly defined into two categories.
The first is the under-25s: either still in school or starting their first job, they might not be earning much or saving an allowance. These users are more likely to spend sub-$1000 on a gaming PC and play eSports titles (CS:Go, DOTA, LoL, Rocket League), sometimes looking at the triple-A titles at medium settings with their 1080p panel.
The second is the over-25s: In their first graduate position, or on their second or third job or promotion, perhaps with a family, but like to spend their spare earnings or bonuses on their hobbies. These are the users that might spend over $1500 on a gaming PC, and transition from the sub-$1000 crowd and want something that will play triple-A titles bigger and better. They will go in at the i5 or i7, perhaps a GPU or two and an SSD with a high end monitor. This is also the crowd currently looking at PC-based virtual reality.
Currently, it is the latter crowd that can afford the hardware to overclock, however most of these users would have been in the first crowd when they learned about overclocking. While the Pentium exists, the lack of an overclockable Core i3 might mean that fewer younger gamers venture into hardware enthusiasm, and instead just buy a pre-overclocked system when it comes time to get a PC for VR.
Core i3-6100TE at stock vs. Core i3-6100TE at 30% overclock
There’s something to be said about building a loyal consumer base, and an ideal way for Intel would be to rope in the younger crowd at Day 1, pushing them through until the big sales. That is how some of us late-20 and early-30 somethings, who are interested in overclocking and performance, came through the industry. But without competition, one might think that it’s not needed and you will buy the medium performance part anyway.
Non-K Overclocking: Xeons
For users reading this who are more interested in the professional side of the equation, it is time to talk Xeon. Having the ability to open up base clocking on casual Intel consumer parts would allow motherboard manufacturers to build the same technology into their Xeon platforms. Because Xeons are high margin, professional and have a focus on stability, it would perhaps be understandable why Intel would want to restrict overclocking such that the Xeons were not affected. We’ve spoken to several professional companies over the past year that say they have customers who want faster parts, they want an extra 10% on the base frequency, and they’re willing to pay increased service fees for it. The professional market (namely finance) still wants performance, but again there is no competition, so Intel already has your sale in the market which keeps their operating margins high. The question is at what price point you will have to enter as a result, and whether your software would have benefitted from a 4.2 GHz 10-core rather than a higher cost part with more cores but a slower frequency. If base clock overclocking transitioned into the Xeon space, I wonder how easily Intel would be able to close the gates if they wanted to.
There’s also the upgrade argument, which we are seeing play out in the consumer space. If one platform allowed overclocking, and you invested at that point, why would you need the latest two or three platforms if they didn’t allow you a performance advantage? This is potential lost sales.
Loop it back: Why Do We Not See an Overclockable i3 CPU?
The 1080p gaming tests show that an overclocked Core i3 can easily knock on the door of a stock Core i5 for $100 less, or the rough equivalent of another Core i3 sale. The situation is a little muddier in the CPU benchmarks: single-thread responsiveness gets a benefit, but many workload-based tests showed you need real cores to see one. It doesn’t matter much at the higher end, where it won’t cannibalize sales, and it didn’t matter much on the overclockable Pentium, where two threads and low cache were bottlenecks you can’t overcome.
So if you want that performance, you need to spend the extra money.
If it was a question of market share, we would see it added very quickly. But as it is not, it ends up being the difference between buying two chips or one from the same vendor – they would rather you buy two (or the equivalent of two) when there's no alternative.