
  • Zoeff - Friday, August 28, 2015 - link

    As an owner of a 6700K that's running at 4.8GHz, this is a very interesting article for me. :)

    I've currently entered 1.470V in the UEFI and it reads up to 1.5V in CPU-Z. Anything lower and it becomes unstable, so I guess I'm probably on the high side voltage-wise...
  • zepi - Friday, August 28, 2015 - link

    Sounds like a scorching voltage for 24/7 operation considering it's a 14nm process... But obviously, we don't really know if this is detrimental over the longer term.
  • 0razor1 - Friday, August 28, 2015 - link

    I believe it is. Ion shift. High voltage = breakdown at some level. Enough damage and things go amiss.
    When one considers 1.35V+ high for 22nm, I wonder why we're doing this (1.35V+) at 14nm.

    If it's OK, then can someone illustrate why one should not go over say 1.6V on the DRAM in 22nm, why stick to 1.35V for 14nm? Might as well use standard previous generation voltages and call it a day?

    Further, where are the AVX-stable loads? Sorry, but no P95 small in-place FFTs with AVX = NOT stable enough for me. It's not the temps (I have an H100i), for sure. For example, my 4670K takes 1.22 VCore for 4.6GHz, but 1.27 VCore when I stress with AVX loads (P95 being one of them).

    It's *not* OK to say "hey, that synthetic is too much of a stress", etc. I've used nothing but P95 since K10 and haven't found a better error catcher.
  • 0razor1 - Friday, August 28, 2015 - link

    To add to the above, downclocking the core on GPUs and running memcheck in OCCT is *it* for my VRAM stability tests when I OC my graphics cards. I wonder how people just 'look' for corruption in benchmarks like Fire Strike and call their OCs stable. It doesn't work.

    Run a game and leave it idle for ~10 hours and come back. You will find glitches all over the place on your 'stable' OC.

    Just sayin', OC stability testing has fallen to new lows recently, be it graphics cards or processors.
  • Zoeff - Friday, August 28, 2015 - link

    I tend to do quick tests such as Cinebench R15 and HandBrake, then if that passes I just run it for a week with regular usage such as gaming and streaming. If it blue screens or I get any other oddities, I raise the voltage by 0.01V. I had to do that twice in the space of one week (started at 1.45V, 4.8GHz).
  • Oxford Guy - Saturday, August 29, 2015 - link

    That's a great way to corrupt your OS and programs.
  • Impulses - Saturday, August 29, 2015 - link

    Yeah, I do all my strenuous testing first; if I have to simulate real-world conditions by leaving two tests running simultaneously, I do that too... like running an encode with Prime in the background, or stressing the CPU, GPU, AND I/O simultaneously.

    AFTER I've done all that, THEN I'll restore a pre-tinkering OS image, unless I had already restored one after my last BSOD or crash... which I'll do sometimes mid-testing if I think I've pushed the OC far enough that anything might be hinky.

    It's so trivial to work with backups like that, it should be SOP.
  • Oxford Guy - Sunday, August 30, 2015 - link

    If a person is using an unstable overclock for daily work it may be hard to know if stealth corruption is happening.
  • kuttan - Sunday, August 30, 2015 - link

    haha that is funny.
  • kmmatney - Saturday, September 19, 2015 - link

    I do the same as the OP (but use Prime95 and Handbrake). If it passes a short test there (say, one movie in Handbrake) I just start using the machine. I've had blue screens, but never any corruption issues. I guess corruption could happen, but the odds are pretty low. My computer gets backed up every night to a WHS server, so I can be fearless...
  • MikeMurphy - Saturday, January 30, 2016 - link

    Tragically, few stress testing programs cycle power states during testing, which is important. Most just place the CPU under continuous load.
  • 0razor1 - Friday, August 28, 2015 - link

    Didn't mean memcheck - I meant the error check in the regular OCCT GPU test.
  • Xenonite - Saturday, August 29, 2015 - link

    Different parts of the CPU are fabricated to withstand different voltages. The DRAM controller is optimised purely for the lowest possible power, so the silicon is designed with low leakage in mind.

    As another example, the integrated PLL runs at 1.8V.

    Electromigration is indeed the main failure mechanism in most modern processors; however, the metal interconnect fabric has not been shrinking by the same factors as the CMOS logic has. That means these 14nm processors can take more voltage than you would expect from a simple linear geometric relationship.

    What exactly the maximum safe limits are will probably never be known to those outside of Intel, but just as with Haswell, I've been running at a 24/7 1.4V core voltage, which I don't believe will significantly shorten the life of the CPU (especially if you have the cooling capacity to up the voltage to around 1.55V as the CPU degrades over the following decade).

    In any case, NOT running the CPU at at least 4.6GHz would mean that it wasn't a worthwhile upgrade from my 5960X, so the safety question is pretty much moot in my case.
  • Oxford Guy - Saturday, August 29, 2015 - link

    Unless that worthwhile upgrade burns up, in which case it's a non-worthwhile downgrade.
  • 0razor1 - Sunday, August 30, 2015 - link

    Hey @Xenonite, sorry to quote you directly...
    'Electromigration is indeed the main failure mechanism in most modern processors; however, the metal interconnect fabric has not been shrinking by the same factors as the CMOS logic has'
    -> I'd say spot on. But I thought that's what Intel's 14nm was all about - they shrank the metal down to 14nm as well, as opposed to Samsung's pseudo-14nm (with a 20nm metal interconnect).

    just as with Haswell, I've been running at a 24/7 1.4V core voltage
    -> Intel has specified 1.3VCore as being the max safe voltage. I'd pay heed :)

    NOT running the CPU at at least 4.6GHz would mean that it wasn't a worthwhile upgrade from my 5960X, so the safety question is pretty much moot in my case.
    -> You're right. But then not everyone can afford upgrades. I've just come from a Phenom II @ 4GHz (1.41 VCore) to a 4670K @ 4.6GHz (1.29/1.9 core/VDDIN). What I did do was, once the Ph II was out of warranty, lap it and OC it as far as it would go. Tried my hand at an FX-6300 for a week and was disappointed, to say the least.

    Long story short and back to my point, P95.

    If it doesn't P95, it corrupts. /motto
  • varg14 - Friday, September 4, 2015 - link

    I agree - if you have good cooling, like my Cooler Master Nepton 140XL on my 4.6-5.1GHz 2600K (which uses motherboard VRMs), and always keep temps under 70C, I see no reason to expect any chip degradation or failures. I really only heard of chip degradation and failures when they started putting the VRMs on the CPU die itself with Haswell, adding more heat to the CPU. Now that Skylake uses motherboard VRMs again, everything should be peachy keen.
  • StevoLincolnite - Saturday, August 29, 2015 - link

    Electromigration.

    I was running those volts at 40nm...
  • 0razor1 - Sunday, August 30, 2015 - link

    Lol, I topped out @ 1.41 Vcore on a 45nm Phenom II (with max core temps of 60C @ 4GHz).

    Earlier 24x7 for three years was 1.38Vcore for 3.8GHz.
  • JKflipflop98 - Sunday, September 6, 2015 - link

    "If it's OK, then can someone illustrate why one should not go over say 1.6V on the DRAM in 22nm, why stick to 1.35V for 14nm? Might as well use standard previous generation voltages and call it a day?"

    Because the DRAM's line voltage goes straight into the integrated memory controller within the CPU. While the chunky circuits in your RAM modules can handle 1.6V, the tiny little logic transistors in the CPU can only handle 1.35V before vaporizing.
  • Zoeff - Friday, August 28, 2015 - link

    Yeah, that's what I thought as well, but apparently the voltage in the silicon is lower than the input voltage, which is what you can control as the user. At least, that's what I read on overclock.net. Right now CPU-Z reports ~1.379V (fluctuating +/- 0.01V), and that's with EIST, C-states and SVID Support disabled. Different monitoring software sometimes reports different voltages too, so I find it hard to check what my CPU is actually doing.
  • bill.rookard - Friday, August 28, 2015 - link

    I wonder if not having the FIVR on-die has something to do with the difference between the Haswell voltage limits and the Skylake limits?
  • Communism - Friday, August 28, 2015 - link

    Highly doubtful, as Ivy Bridge has roughly the same voltage limits.
  • Morawka - Saturday, August 29, 2015 - link

    Yeah, that's a crazy high voltage... that was high even for 65nm i7 920s.
  • kuttan - Sunday, August 30, 2015 - link

    The i7 920 is 45nm, not 65nm.
  • Cellar Door - Friday, August 28, 2015 - link

    Ian, so it seems like the memory controller - even though it's capable of driving DDR4 to some insane frequencies - errors out with large data sets?

    It would be interesting to see this behavior with Skylake and DDR3L.

    Also, it would be interesting to see if the i5-6600K, lacking Hyper-Threading, would run into the same issues.
  • Communism - Friday, August 28, 2015 - link

    So your sample definitively wasn't stable above 4.5GHz after all, then...

    Haswell/Broadwell/Skylake dud confirmed. Waiting for Skylake-E, where the "reverse hyperthreading" will be best leveraged by the 6/8-core variants with proper quad-channel memory bandwidth.
  • V900 - Friday, August 28, 2015 - link

    Nope, it was stable above 4.5GHz...

    And no dud confirmed in Broadwell/Skylake.

    There is just one specific scenario (4K/60 encoding) where the combination of the software and the design of the processor makes overclocking unfeasible.

    Not really a failure on Intel's part, since it's not realistic to expect them to design a mass-market CPU according to the whims of the 0.5% of their customers who overclock.
  • Gigaplex - Saturday, August 29, 2015 - link

    If you can find a single software load that reliably works at stock settings, but fails at OC, then the OC by definition is not 100% stable. You might not care and are happy to risk using a system configured like that, but I sure as hell wouldn't.
  • Oxford Guy - Saturday, August 29, 2015 - link

    Exactly. Not stable is not stable.
  • HollyDOL - Sunday, August 30, 2015 - link

    I have to agree... While we are not talking about server-stable with ECC and such, you are either rock stable for desktop use or not stable at all. Already failing on one of the test scenarios is not good at all. I wouldn't be happy if there were hidden issues occurring during compilations, or after a few hours of rendering a scene... or, let's be honest, in the middle of a gaming session with my online guild. As such, I am running my 2500K half a GHz lower than what stability testing showed as errorless. Maybe that's excessive, but I like to be on the safe side with my OC, especially since the machine is used for a wide variety of purposes.
  • StrangerGuy - Sunday, August 30, 2015 - link

    If we keep dropping the OC multi on Skylake, we're getting into single-digit clock increase territory from the 4GHz stock :)

    Yeah, I can see why AT mentioned in their Skylake review that people are losing interest in OCing despite Intel's claims of catering to it. From the looks of it, their 14nm process simply isn't tuned for 4GHz+ operation, but towards the lower-clocked and much more lucrative chips for the server and mobile segments.
  • qasdfdsaq - Wednesday, September 2, 2015 - link

    Then you are deluded. There are edge cases and scenarios that will cause a hardware crash on a Xeon server with ECC RAM at stock speeds, so by your reckoning *nothing* is ever 100% stable.
  • danjw - Friday, August 28, 2015 - link

    When can we expect a platform overview? You reviewed the i7-6700K, but you didn't have much in the way of detail about the platform itself. You were expecting that from IDF. IDF is over, so is there an ETA?
  • MrBowmore - Friday, August 28, 2015 - link

    +1
  • hansmuff - Friday, August 28, 2015 - link

    I assume the POV-Ray score is the "Render averaged PPS"?
    My 2600K @4.4 gets 1497 PPS, so a 35% improvement compared to the 6700K @4.4
  • hansmuff - Friday, August 28, 2015 - link

    And of course I mean the 6700K seems to be 35% faster in POV... sigh, this needs an edit button
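
    For what it's worth, a quick check of the implied score (a sketch; the 6700K figure here is back-calculated from the quoted 35% difference, not taken from the article):

        # Back-calculate the implied 6700K @ 4.4GHz POV-Ray score from the 35% delta.
        sandy_bridge_pps = 1497  # 2600K @ 4.4GHz, as reported above
        implied_skylake_pps = sandy_bridge_pps * 1.35
        print(f"Implied 6700K @ 4.4GHz score: ~{implied_skylake_pps:.0f} PPS")  # ~2021 PPS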
  • looncraz - Saturday, August 29, 2015 - link

    POV-Ray has been seeing outsized performance improvements on Intel.

    Going from Sandy Bridge to Haswell sees a 20% improvement, when the overall improvement is closer to 13%.

    HandBrake improved even more - a whopping 29% from Sandy Bridge to Haswell.

    And, of course, I'm talking core-for-core, clock-for-clock.

    I suspect much of this improvement is related to the AVX/SIMD improvements.

    I just hope AMD focused on optimizing for the big benchmark programs as well as their server target market with Zen (past tense, since Zen has been taped out and is currently being prototyped... rumors and some speculation, of course, but probably pretty accurate).
  • zepi - Sunday, August 30, 2015 - link

    One has to remember that "HandBrake" isn't really what's being benchmarked at all. The process actually being benchmarked is the x264 codec running with certain settings, made easily accessible by a GUI called HandBrake.

    If the x264 or x265 programmers create new code paths inside the codecs that take advantage of a new architecture, it receives huge performance gains. But what this actually means is that Sandy Bridge and Skylake are effectively running different benchmarks, with different instructions fed to the processors.

    Do I care? No, because I just want my videos to be transcoded as quickly as possible, but one should still remember that these kinds of real-world benchmarks don't necessarily run the same workloads on different processors.
  • MrBowmore - Friday, August 28, 2015 - link

    When are you going to publish the run-through of the architecture?! Waiting impatiently! :)
  • NA1NSXR - Friday, August 28, 2015 - link

    Sigh, still no BCLK comparisons at the same clocks. What would really answer some open questions would be comparing 100 x 40 to 200 x 20, for example.
  • Impulses - Saturday, August 29, 2015 - link

    Why would it make a difference? The BCLK is now decoupled from anything that would matter... It's just another tool like the ratio, one that could let you eke out an extra 50MHz or whatever if you really care to take it to the edge.
  • Khato - Friday, August 28, 2015 - link

    Two inquiries regarding future Skylake testing:

    1. While the initial review was intriguing in terms of actually exploring the DDR3L vs DDR4 2133 performance difference, higher DDR4 frequency testing is still absent. Will there be a memory scaling article at some point?

    2. What's the point of even including the discrete gaming benchmarks? Is there a plan to revamp this category of testing to provide meaningful data - inclusion of minimum frame rates, exploring different settings, using different games?
  • ImSpartacus - Saturday, August 29, 2015 - link

    Yeah, it would be nice if we could get some proper frame time benchmarking.
  • varg14 - Saturday, September 5, 2015 - link

    I too would love to see high-end DDR3L compared to DDR4 on Skylake, and whether the tight timings of DDR3L are beneficial, and in what areas, if at all.
  • ImSpartacus - Friday, August 28, 2015 - link

    I feel ridiculously shallow for asking this, but could we see fewer tables that look straight out of Excel going forward?

    Anandtech has a glassy table & graph design language. While it might be a bunch of excel templates, it still lets me suspend my disbelief a little bit more.

    I can't justify my request with any tangible argument other than something "feels" off. I apologize as I understand how frustrating such feedback can be. I trust Anandtech to always be improving & setting the standard on all fronts.
  • ImSpartacus - Friday, August 28, 2015 - link

    classy*
  • garbagedisposal - Saturday, August 29, 2015 - link

    They've used the same format a number of times before and it's pretty damn clear and easy to understand. Prettifying the Excel tables in a mini article is a waste of time.
  • ImSpartacus - Saturday, August 29, 2015 - link

    You're right, this isn't an isolated issue. I didn't comment on it the first time or the second time.

    And it's hard to tell someone who exhaustively tested numerous scenarios that they oughta spend even more time making sure they follow style guides, when the extra time spent won't even affect the utilitarian value of the results.
  • V900 - Friday, August 28, 2015 - link

    Nice overview.

    But isn't overclocking not really relevant anymore, in reality? A remnant of days gone by?

    Don't get me wrong, I was an eager overclocker myself back in the day. But back then, you could make a $200-300 part perform like a CPU that cost twice as much, if not more.

    Today, processors have gotten so fast that even the cheap $200 CPUs are "fast enough" for most tasks.

    And when you do overclock a 4GHz CPU by 600MHz or more, is the 5-10% speed increase really worth it? Most people would have been better off taking the hundreds of dollars they invested in coolers, OC-friendly motherboards, etc., and putting them towards a better CPU instead.
  • Impulses - Friday, August 28, 2015 - link

    There are a lot of people that just do it for fun, the same way people mess with their cars for often negligible gains... Not everyone spends a lot on it either; I'd buy the same $130-170 mobo whether I was OCing or not, and I'd run the same $65 cooler for the sake of noise levels (vs. something like a $30 212).
  • kmmatney - Friday, August 28, 2015 - link

    The G3258 is fun to overclock. The overclock on my Devil's Canyon i5 made a difference on my server, which runs 3 Minecraft servers at the same time. I needed the overclock to make up for the lousy optimization of Java on the WHS 2011 OS.
  • StrangerGuy - Saturday, August 29, 2015 - link

    Yeah, spend $340 on a 6700K, $200 on a mobo, and $100 on a cooler for a measly 15% CPU OC, all for next to zero real-world benefit in single-GPU gaming loads compared to a $250 locked i5 on a budget mobo.

    Who cares how easily you can perform the OC when the value for money is rubbish?
  • hasseb64 - Saturday, August 29, 2015 - link

    You nailed it stranger!
  • Beaver M. - Saturday, August 29, 2015 - link

    You should have known beforehand that even without an overclock your CPU would be so fast that you won't see any difference in most games when overclocking.
  • Deders - Saturday, August 29, 2015 - link

    If you intend to keep the hardware for a long period of time it can help. My previous i5-750's 3800MHz overclock made it viable as a gaming processor for the 5 or so years I was using it.

    For example, it allowed me to play Arkham City with full PhysX on a single 560 Ti; at stock speeds it wasn't playable with these settings. The most recent Batman game was no problem for it even though many people were having issues; same goes for Watch Dogs.
  • qasdfdsaq - Wednesday, September 2, 2015 - link

    Sure, and my 50% overclock on my i7 920 made it viable for my main gaming PC for a few years longer than it otherwise would have been, but a 10-15% overclock? With a <1% gaming performance increase? Meh.
  • Impulses - Saturday, August 29, 2015 - link

    You're exaggerating the basic requirements, though. I'm sure some do that, but I've never paid $200 for a mobo or $100 for a cooler ($160/$65 tops)... And if you spent more on the i7, it had damn well better have been for HT, or you've no clue what you're doing...
  • Xenonite - Saturday, August 29, 2015 - link

    @V900: "Today, processors have gotten so fast that even the cheap $200 CPUs are "fast enough" for most tasks."

    For almost all non-gaming tasks (except realtime frame interpolation) this is most certainly true. The thing is, CPUs are NOT even nearly fast enough to game at 140+ fps with the 99% frame latency at a suitable <8ms value.

    I realize that no one cares about >30fps gaming and that most people even condemn it as "looking too real" (in the same sentence as "your eyes can't even see a difference anyway"), therefore games aren't being optimised for low 99% frame latencies, and neither are modern CPUs.

    But for the few of us who are sensitive to 1ms frametime variances in sub-10ms average frame latency streams, overclocking to high clock speeds is the only way to approach relatively smooth frame delivery.
    On the same note, I would really have loved to see an FCAT or at least a FRAPS comparison of 99% frametimes between the different overclocked states, with a suitably overclocked GTX 980 Ti and some high-speed DDR4 memory, along with the in-game settings being dialed back a bit (at least for the 1080p test).
  • EMM81 - Saturday, August 29, 2015 - link

    "CPUs are NOT even nearly fast enough to game at 140+ fps" "But for the few of us who are sensitive to 1ms frametime variances in sub-10ms average frame latency streams"

    1) People may care about 30-60+fps but where do you get 140 from???
    2) You are not sensitive to 1ms frametime variations... going from 33.3ms to 16.7ms (30 to 60fps) makes only a very subtle difference to most people, and that is a 16.6ms (0.5x) delta. There is zero possible way you can perceive anywhere near that level of difference. Even if we were talking about running at 60fps with a variation of 2ms, and you could somehow stare at a side-by-side comparison until you maybe were able to pick out the difference, why does it matter??? You care more about what the numbers say and how cool you think it is...
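
    For reference, the frame-time arithmetic behind the figures in this thread (a minimal sketch in Python, using only the numbers quoted above):

        # Convert frame rates to per-frame times in milliseconds.
        def frame_time_ms(fps):
            return 1000.0 / fps

        for fps in (30, 60, 140):
            print(f"{fps} fps -> {frame_time_ms(fps):.1f} ms per frame")
        # 30 fps -> 33.3 ms and 60 fps -> 16.7 ms (the 16.6 ms delta cited above);
        # 140 fps -> ~7.1 ms, which is where the sub-8ms 99th-percentile target comes from.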
  • Communism - Saturday, August 29, 2015 - link

    Take your pseudo-intellectualism elsewhere.
  • Oxford Guy - Tuesday, September 1, 2015 - link

    (says the guy with a name like Communism)
  • HollyDOL - Sunday, August 30, 2015 - link

    Well, you might have a point with something here. Even though the eye itself can take in information very fast and the optic nerve can transmit it very fast, the electro-chemical bridge (Na-K bridge) between them needs a "very long" time to stabilise its chemical levels before it can transmit another impulse between two nerves. Afaik it takes about 100ms to get the levels back (though I currently have no literature to back that value up) before the next impulse can be transmitted. I suspect there are multitudes of lanes that are cycled to get a better "frame rate", plus other goodies that make up for it (the tunnel effect, for example - a narrow field of vision to get more frames with the same bandwidth?)...
    Actually, I would like to see a science-based article on that topic that would set things straight on this playing field. Maybe AT could put together an article (with some ophthalmologist/neurologist) to clear that up?
  • Communism - Monday, August 31, 2015 - link

    All latencies between the input and the output directly to the brain add up.

    Any deviation is an error rate added on top of that.

    Your argument might as well be "Light between the monitor and your retina travels very fast, so why would it matter?"

    One must take everything into consideration when talking about latency and temporal error.
  • qasdfdsaq - Wednesday, September 2, 2015 - link

    Not to mention, even the best monitors have more than 2ms variance in response time depending on what colours they're showing.
  • Nfarce - Sunday, August 30, 2015 - link

    As one who has been building gaming rigs and overclocking since the Celeron 300->450MHz days of the late 1990s, I'd +1 that. Over the past 15+ years, every new build I did with a new chipset (about every 2-3 years) has shown diminishing returns in overclock performance for gaming. And my resolutions have increased over that period as well, further putting more demand on the GPU than the CPU (going from 1280x1024 in 1998 to 1600x1200 in 2001 to 1920x1080 in 2007 to 2560x1440 in 2013). So here I am today with an i5 4690K which has successfully been gamed on at 4.7GHz, yet I'm only running it at stock speed because there is ZERO improvement in frames in my benchmarked games (Witcher III, BF4, Crysis 3, Alien Isolation, Project Cars, DiRT Rally). It's just a waste of power, heat, and wear and tear. I will overclock it, however, when running video editing software and other CPU-intensive apps, where it noticeably helps.
  • TallestJon96 - Friday, August 28, 2015 - link

    Scaling seems pretty good; I'd love to see analysis of the i5-6600K as well.
  • vegemeister - Friday, August 28, 2015 - link

    Not stable for all possible inputs == not stable. And *especially* not stable when problematic inputs are present in production software that actually does something useful.
  • Beaver M. - Saturday, August 29, 2015 - link

    Exactly. The fact of the matter is that proper overclocking takes a LONG, LONG time to get stable, unless you get extremely lucky. I sometimes spend months getting it stable. Even when testing with Prime95 like there's no tomorrow, it still won't prove that the system is 100% stable. You also have to test games for hours over several days, and of course other applications. But you can't really play games 24/7, so it takes quite some time.
  • sonny73n - Sunday, August 30, 2015 - link

    If you have all power saving features disabled, you only have to worry about stability under load. Otherwise, as CPU voltage and frequency fluctuate depending on the application, it may be a pain. Also, most mobos have issues with RAM when the CPU is OCed past a certain point.
  • V900 - Saturday, August 29, 2015 - link

    That's an extremely theoretical definition of "production software".

    No professional or production company would ever overclock their machines to begin with.

    For the hobbyist overclocker who on a rare occasion needs to encode something in 4K60, the problem is solved by clicking a button in his settings and rebooting.

    I really don't see the big deal here.
  • Oxford Guy - Saturday, August 29, 2015 - link

    The problem is that overclocks should NEVER be called stable if they aren't.

    And I don't like the way Anandtech pumps ridiculous amounts of voltage into chips (like they did with the 8320E).
  • Gigaplex - Sunday, September 27, 2015 - link

    Production software, in my book, is any released software that completes a useful task, rather than just running synthetic tests.
  • hyno111 - Saturday, August 29, 2015 - link

    Is there a temperature chart for overclocking?
  • sonny73n - Sunday, August 30, 2015 - link

    Ian seems to have missed the most important part of OCing.
  • MrSpadge - Thursday, September 3, 2015 - link

    The temperature depends strongly on your cooling, TIM application etc. If Ian included those numbers, people would be shouting "but I get different values with..."
  • kneelbeforezod - Saturday, August 29, 2015 - link

    12% better performance for a 32% power increase. uha.
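
    Taking the numbers in this comment at face value, the efficiency hit works out roughly as follows (a quick sketch):

        # Relative performance per watt for a 12% perf gain at 32% more power.
        perf_gain = 1.12
        power_gain = 1.32
        efficiency = perf_gain / power_gain
        print(f"Relative perf/W: {efficiency:.2f} (about {(1 - efficiency) * 100:.0f}% worse)")
        # Prints ~0.85, i.e. roughly 15% worse performance per watt.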
  • StrangerGuy - Saturday, August 29, 2015 - link

    I OCed a cheapo AXP1700 by 25% on a budget nForce 2 board with stock cooling, simply by taking the FSB from 266 to 333. I OCed my $183 E6300 so that it surpassed a $1000 X6800 in performance.

    Now Intel, Asus et al. think they're doing us a favor with top-end mainstream CPUs that are barely overclockable even on the most pricey of mobos, and hardly anyone is calling out their bullshit, just because of unlocked multipliers? Gimme a break.

    Am I the only sane guy here or what?
  • jihe - Monday, August 31, 2015 - link

    That's why I'm still on Nehalem; overclocking an X5650 is much more fun than this pay-a-premium-to-overclock crap that Intel has been feeding us.
  • SanX - Saturday, August 29, 2015 - link

    Ian, add at least a 4790K at 4.5-4.8GHz for us to see how bad the new processors actually are.
  • V900 - Saturday, August 29, 2015 - link

    Isn't this right about the time an AMD troll usually jumps in to tell us how you can overclock Kaveri to 5GHz, and you don't even need an air cooler or anything?
  • Oxford Guy - Sunday, August 30, 2015 - link

    Who is the one trolling?
  • SanX - Saturday, August 29, 2015 - link

    If Intel, by moving to 14nm (with its potentially half the surface area versus 22nm), made mainstream octa-cores as overclockable as the 4770K/4790K, I'd be interested. Otherwise it's hard to see any progress at all. Shame, even cellphones are octa-cores.
  • Oxford Guy - Sunday, August 30, 2015 - link

    You can get AMD FX to run at over 4.5 GHz with generally only decent cooling. As for seeing progress, overclocking may be less and less viable as process nodes shrink.
  • Oxford Guy - Sunday, August 30, 2015 - link

    We seem to be seeing this borne out in the review, too. Voltages remain too high. Then, when that fails to do the trick, reviewers try to claim that unstable settings are good enough.
  • stephenbrooks - Sunday, August 30, 2015 - link

    I wonder how much of this is down to the choice of "optimum" frequency for the process? I think Intel only has two processes per node, a standard and a low-voltage/low-power one. If the standard one has been stretched to pretty low power (10W?), the top end suffers from high voltages because it's not optimal there. They'd end up having to commission a third "high frequency" process to get these top-end chips right. There's no incentive until they have a real competitor there, but at this glacial rate of progress perhaps AMD really will catch up in a few years.
  • SanX - Saturday, August 29, 2015 - link

    Isn't the true reason behind this zero progress that, having no competition at all, Intel wants us to pay $3-4K for an octa-core chip which probably costs just $100-200 to produce?
  • MrSpadge - Thursday, September 3, 2015 - link

    Of course they'd want us to pay $3-4k for an octa-core, but they don't expect us to do so. That's why they sell it for $1k.
  • IUU - Sunday, August 30, 2015 - link

    So, obviously the benefit is not worth the risk you take to overclock it. Which is somewhat understandable. Overclocking as a hobby is always nice, naturally.
    Of course, real progress would be being able to overclock to 5.5GHz, especially as 4.8-5.0GHz has already been achievable for several years. But you can't force nature.
    On the other hand, I am just curious about the theoretical FP performance and the impact of the new instruction sets on it. The fact that the market chooses to ignore them does not negate the potential of these processors.
    I'm also curious how well they can run narrow AI apps, or multitask several "heavy" apps. Sorry, but the fixation on office, "professional" apps, and first-person shooter (or moving camera) games doesn't cut it anymore.
    Would you have been so interested to see how quickly your PC played Pac-Man in 2000? This is something like that.
    I myself have witnessed the effect of new instruction sets, up to Haswell, on niche apps. And I have to say, impressive is an understatement. We are talking a 40% improvement clock for clock. I just hope programmers will be in a position to take advantage of this untapped potential some day.
    Also, algorithmic optimizations do perform miracles. I have seen them increase performance in the same app on the same chip by up to 50%, instruction sets excluded.
    And finally, seeing sites so often examine the performance of these chips in browsers is, in my opinion, the peak of decadence (no matter how many useless addons we've added, or how unnecessarily clunky we've made the code).
  • callous - Sunday, August 30, 2015 - link

    I don't think most people test their overclocks properly. You always need to run Prime95 with some 3D component looping for at least 24 hours; Prime95 stable does not mean 3D stable. Only by testing both at the same time - Prime95 plus 4 instances of the Heaven benchmark - can you test most of the components inside the CPU while the system is being stressed.

    I would go further and say that if you can do this for 24 hours, then you should run some VMware and see if bad things happen, like BSODs or weird crashes of programs running in the background.
  • sonny73n - Sunday, August 30, 2015 - link

    I got excited when I first saw the title of this article, but I'm a little disappointed now after finishing it.

    1.52V for only 4.8GHz? Does 14nm need that much voltage? My 32nm 2500K only needs 1.42V for 48x. What about temps under load? Regardless of what settings HandBrake uses or whether it's encoding 4K60fps, if it didn't cause a BSOD with the CPU at stock speed but did once the CPU was OCed, then the OC isn't stable. Since you OCed this chip with boards from the 2 best OCing mobo brands, it's the chip's fault. Why did you use HB for the stability test? P95 for 6 hours would do. Besides, IBT runs about 8C hotter than P95, but HB runs about 10C cooler than P95. HB is hardly a stress test.

    I might've missed it, but did you have power saving features disabled or enabled? If you let the MB auto-OC, what are the settings?
  • MrSpadge - Thursday, September 3, 2015 - link

    If HB exposes errors which the other programs do not find, it is a stress test. Just a different one. It's not about the highest power draw & temperature, but about a code path which apparently takes a bit longer to complete than others and hence can't be pushed to such high frequencies.
  • Dr_Orgo - Sunday, August 30, 2015 - link

    The conditionally stable overclocking results were pretty interesting. When I overclocked my GTX 970, I primarily used Unigine Heaven to stress test. Got to 1500MHz stable with the voltage maxed in Precision X. Used it in a number of games with zero crashing even with sustained 100% usage; it seemed completely stable. But running the unit preloader (which loads all units/animations) in StarCraft 2 would make the game crash every time. Dropping the overclock to 1460MHz made it stable. I'm not sure what specifically makes that unit preloader less overclock-friendly.
  • LemmingOverlord - Monday, August 31, 2015 - link

    I think the premise behind the discrete graphics tests is incorrect. If you max out the settings, you are capping the performance of the system at the graphics card. If you lower the settings just a bit, you'll definitely see how the CPU influences overall game performance. I know this is a mini-test, but these discrete tests prove absolutely nothing about how the overclock impacted game performance.

    Either lower the detail in these tests, or test with a game that is not GPU-intensive. Civ V is an excellent benchmark for CPU tests, because it really is CPU-intensive...
  • dimonakid - Sunday, September 6, 2015 - link

    In the past couple of months, we've seen a lot of BSODs, freezes and whatnot from media encoding software.
    Some of our friends mentioned (while they were testing) that this may be a global XMP issue.
    The same results with HandBrake were showing up on Z77 and Z68.
    Just to comment.
  • SeanJ76 - Monday, September 14, 2015 - link

    Not impressed; my 4690K does 4.8GHz with a $29 Hyper Evo...
  • InstinctXIV - Friday, November 20, 2015 - link

    I would love to see your 4690K do this http://imgur.com/U6ZZ1Ll (It is my PC)
  • gravy1958 - Monday, October 19, 2015 - link

    I have a 6700K with an ASUS Maximus VIII Ranger MB, and an hour's gaming produces regular CLOCK_WATCHDOG_TIMEOUT errors if I use the overclocking... and frequent boot failures with the random "overclocking failed, press F1 to enter setup" 8^(
  • gravy1958 - Monday, October 19, 2015 - link

    Should add that it is set at 4.6GHz, and all advice points to the voltage being too low.
  • jonainpdx - Thursday, May 5, 2016 - link

    It's pretty obvious that overclocking a new, state of the art CPU is nothing more than a waste of money for a gamer.
