What you should do from now on: for all CPU tests (both Intel and AMD), find out the official spec of the processor and force the motherboard you're using to run the CPU at that spec. If you're testing motherboards, you can run at default spec and then at auto settings and see the difference between the different brands.
I understand that Out-Of-The-Box is easier to test, and will better represent what a user will get, but it's also WHY the motherboard manufacturers are using ludicrous Out-Of-The-Box settings. They know it's going to win them the review and the resulting sales.
Ultimately, the manufacturers are going to push as hard as reviewers let them.
Well that seems fine to me... stick with out of the box settings - the vendors push as far as their hardware goes which is exactly as it should be.
What I would love to see is an analysis of the higher core counts and their power usage. Given the lack of 24-28 core Intel parts on the market, I wonder how useful they really are...
The vendors do have OC profiles on their boards. If people want to push the hardware, that's exactly what those are for.
For people who don't want a 180 watt power draw from a 95 watt CPU, there really should be a standard Intel-spec profile. It would also be nice if it were the default profile, so people don't get caught unaware as they have been lately. Surely users should be making the choice, not some OEM who wants a couple of extra points on a benchmark.
To me, personally, a desktop's max possible sustained boost should only be limited by temperature. If 9900k's official all-core turbo frequency is 4.7 GHz, then I expect it to be able to hold that frequency virtually indefinitely if the PSU, mainboard's power circuit and cooler are up to the task.
> If 9900k's official all-core turbo frequency is 4.7 GHz
This is obviously not the case - it's sold as a 3.6 GHz CPU, and if it officially supported running all cores at 4.7 GHz while staying in spec (95 watt), they'd be insane not to sell it as a 4.7 GHz CPU.
Intel's Ark says it has a single core turbo at 5 GHz, and if they can't run two at 5 GHz while staying below 95 watt, there's absolutely no way they'll have eight cores at 4.7 GHz below 95 watt.
4.7 GHz is the all-core TURBO. Turbo is never guaranteed, and certainly not within the TDP. That's what differentiates turbo from the base frequency. They guarantee you can run base within TDP. No such guarantees for any turbo mode.
All I can say is that current CPUs run too hot. I've gone to the trouble of disabling turbo in the BIOS of my laptops (or SpeedStep in a very old laptop that's still alive) and, when possible, swapping out the HSF of laptops that have only an iGPU with the HSF designed for the same model with a dGPU, to keep fan noise down and temperatures in check. Generally that prevents thermal throttling and gives more consistent performance, but I also sometimes go through the trouble of telling Intel's GPU driver software to run in max-battery mode even if a laptop is plugged into AC power, and limiting CPU max clocks to 80% in Windows' power settings. That sort of effort usually tames a laptop to the point where it can keep itself cool without kicking the fan on very much, while holding skin temps down so it can actually sit on my lap.
Modern CPUs do not run "too hot"; they run "as hot as the specified maximum", i.e. if the CPU is running any cooler than that, it will increase power (and thus performance) until it hits that temperature (in the case of Core) or max voltage (in the case of Ryzen). If they ran any cooler, they would be leaving headroom (and thus performance) on the table.
Yes, but you also need to include time in that efficiency equation. Drawing the absolute maximum amount of power might be the most efficient point in many applications.
I don't. Throwing down more than $100 on any computer is something I refuse to do these days (and I don't need to, now that I don't feel the compulsion to keep up with current-generation video games). My toy of the moment is a $20.50 laptop off eBay running a P8400 C2D. It's just hard to switch from using a Bay Trail with no cooling fan whatsoever to a noisy box that not only has a cooling fan, but is also running a second, unfortunately mechanical, hard drive in the optical drive bay so it can act as a small file server for my home network.
I certainly didn't mean to offend anyone or make someone else feel threatened because I spend less of my income on computer parts. I'm sorry about making you feel like an attack was warranted because I was calling your purchasing decisions into question, but it's difficult to know when someone will feel like one of their nerves has been hit over a comment.
I think everyone already knows that more battery life is better, and they aren't pointing out that others are fools for not realising this, even though everyone realises it.
If you're able to find a reliable undervolt, it's likely to save you significant power and so reduce the temperature for a given level of performance. The downside is that it can take significant time, with occasional freezes, to figure out the actual minimum over a range of conditions (and potentially it can degrade over time, although this seems less likely after a certain amount of burn-in time).
I do think Intel really needs to change how they rate their CPUs' TDP, as it is totally meaningless with today's CPUs, released when Intel was forced to add more cores because of AMD's Ryzen CPUs having more cores. Intel's TDP numbers never really changed when they upped the core count.

So yes, they need to change how the TDP is rated; measuring it from the feeble base clock has to go away, and soon. They need to list TDP based on how the CPU will actually run at its stock config with proper cooling. So if it says 5.0 GHz max turbo on, let's say, 2 cores with the rest running at their rated speed, then list that as the TDP of the chip. The numbers will be higher of course, but it also shows the end user just how much cooling he/she is going to need to keep the chip's temps under control. The way Intel is doing it now is very misleading: if a chip is listed at 95 watts, the customer buys a cooler that is good for, let's say, 100 watts, and the chip under full load draws 150-200 watts, then it is going to run very hot and shorten the life of the chip as well.

This brings up another point: it is not just Intel at fault here. Yes, their part in this is listing a misleading TDP, but you also have to look at the board makers as well. They are taking a lot of liberties with auto settings that cause the CPU to run a lot faster than what the 95 W TDP says it should, getting it into the 125-160 watt range straight out of the box. GN did a video on this, and it was found that the boards he tested from different makers were all pushing things out of spec just to look faster than their competition, which in my books is not right at all. Now, if Intel listed the TDP as it should be for any given CPU, then maybe this would not be in such a huge grey area.

As it sits, you cannot trust too many reviews, not because the reviewers are biased, but because they may not know any better and set everything to default, not knowing those default settings are actually running the CPU way outside the rated/listed specs. This also explains why, if you go from review to review for the same CPU model, the numbers are all over the place: one site lists benches with one set of numbers, and another lists numbers that are not even close. It depends on the board being used and just how honest the board maker was when configuring their UEFI BIOS. GN found that on the ASUS board the default did increase performance a little, but not as badly as the rest he tested, and that when he turned the auto settings off on the ASUS board it did change the performance, and the CPU then ran within Intel's listed specs. I think EVGA also complied: when he turned off all the extra performance settings, the CPU ran well under spec.

Sorry for the wall of text; I should have looked at it more before posting, or at least broken it up into smaller paragraphs so it would be easier to read.
I haven't tested in quite a few years, but from the Pentium II through my brother's i7 Nehalem OC'd from 3 GHz to 4 GHz, the actual draw has been lower than listed.
I had a P3 that was rated for N watts, but when I measured at the wall, the entire system under load drew less than the listed TDP.
My brother's Nehalem build would tell you how many joules the CPU had consumed in 1-second intervals. Even at 4 GHz, with a listed stock TDP of around 100 watts, the CPU was maxing out around 60 watts during burn-in.
Things may have changed, but I generally assume the actual max practical power draw of Intel desktop CPUs is less than the TDP. In my experience, this is where AMD and Intel have generally differed a lot.
I have not seen actual power draws of either in a long time and now I am interested. Time for an article!
My i7-7700K can draw roughly 30 watts per core at its full 4.5 GHz turbo frequency; with multi-core enhancement enabled I can get the whole package to 115 W with all 4 cores going at 4.5 GHz on a power virus like Prime95 AVX. Outside of that, it is actually a challenge to even get the chip much past 70 W on the package. But the 9900K has literally double the core count, more cache, and clocks 200 MHz higher even without multi-core enhancement enabled. I can totally understand the 9900K hitting 180 W from that.
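To put rough numbers on that scaling argument, here is a back-of-envelope sketch using the simplified CMOS dynamic-power relation P ≈ k·n·f·V². The only grounded input is the 115 W / 4 cores / 4.5 GHz figure quoted above; the voltage is an assumption, and the model ignores leakage, uncore power and process tweaks, so treat the output as an order-of-magnitude check only.

# Back-of-envelope scaling check. Dynamic power grows roughly with
# cores * frequency * voltage^2; leakage and uncore power are ignored.

def dynamic_power(k, cores, freq_ghz, volts):
    """Simplified CMOS dynamic power model: P = k * n * f * V^2."""
    return k * cores * freq_ghz * volts ** 2

v_load = 1.25   # assumed load voltage for both chips (a guess)

# Calibrate k from the 7700K power-virus figure above: 4c @ 4.5 GHz ~ 115 W.
k = 115 / (4 * 4.5 * v_load ** 2)

# Project 8 cores at 4.7 GHz with the same assumed voltage.
est = dynamic_power(k, 8, 4.7, v_load)
print(f"estimated 8c/4.7GHz power-virus draw: {est:.0f} W")   # ~240 W

Since the calibration point is a Prime95-class power virus, the ~240 W output is a worst-case projection; lighter real-world loads landing around 180 W is entirely consistent with it.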
I agree, and that matches my experience. I'd be very surprised by a different outcome, because that's physics: clocks and cores cost power, and you can't increase both on the same budget.
AT should have paid attention earlier, but should now differentiate their reporting across scenario settings, which they may have to define themselves if none of the vendor defaults are good.
The real problem is that there is really only one motherboard that lets you run the 9900K at 95 W at all. There was a round-up of 12 Z390 boards, and all except one showed the 9900K consuming 140 W in even simple gaming. Only one motherboard, the ASUS Maximus Hero, even offers a setting that limits the 9900K to 95 W, and that's not a default setting but one you have to look for.
You shouldn't force "Intel Specifications", because those aren't really specifications; they're starting-point recommendations for system builders. You should generally first do "out of the box", while taking note of power consumption, frequency response, VRM design, etc. to place performance numbers in context, with that context being informed by an expert understanding of how relevant those performance numbers are to the target consumer and how they might change. Then do "pedal to the metal until the smoke comes out" to see where the limits are, if you've got the time. You know, like an expert professional reviewer and stuff.
Let the youtube kiddies do their "reviews" and get "outraged" at TDP "scandals" or whatever, and stick with what you're good at.
I was under the impression that it was always a bad thing if a manufacturer provides specifications that have no relation to the product's actual performance, but evidently YMMV.
You should test it like the users will use it: out of the box settings using mainstream motherboards.
I think power is an overrated concern anyway for desktop users. As long as it doesn’t crash or seriously thermal throttle I don’t care how much power is being used.
"the user" is too much of a simplification for this publication, I think. The "typical user" doesn't read AT, but the typicall AT reader may want to do more and follow good advice, tested by AT staff, for the scenario that fits his use case best.
200 watts of peak CPU power is less of a concern when your GPUs use 250 W each while gaming. But a desktop that starts howling when you do something non-trivial may be an issue in a workplace.
You must be joking about our "user" indifference to electricity consumption or heat generated. But then again, you continue with "I don't care" :) You don't care; we care. WE are better for caring.
They need to stop this base frequency nonsense, or at least post it on the box. If they advertise the all-core boost clock, they need to show TDP numbers for that; anything else reeks of false advertising. If TDP equals power consumption, then the real number needs to be known for end consumers to choose proper cooling and power for the chips. Motherboard vendors should also publish these numbers for any optimizations they provide. They should tell us how much power a chip uses under these auto settings.
The Federal Trade Commission has broad authority to prohibit unfair or deceptive acts or practices for interstate advertisements under the Federal Trade Commission Act. If it’s proved that Intel has deceived consumers with their false advertisements, can the Feds do anything for the victims? (Since their piles of rules and regulations couldn’t protect anyone).
> This means that if Intel has defined a processor with a series of turbo modes, they will only work when PL2 is the driving variable for maximum power consumption. Turbo does not work in PL1 mode.
This part is misleading. It makes it sound like the CPU always reverts back to the base frequency when PL1 is hit, but the frequency is actually only reduced as much as necessary to keep power draw at PL1. There's also no special relationship between PL2 and turbo frequencies. The turbo frequency table and PL2 are just two of many limits which might constrain the CPU's operating frequency. All of these limits are orthogonal to each other; the processor simply selects the maximum frequency which allows it to satisfy all of the enabled limits. If you set PL2 low enough, it will constrain operating frequency below the max turbo just like PL1 does. The turbo frequency is just the maximum frequency when no other limits are exceeded.
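To make that "orthogonal limits" description concrete, here is a toy model of the selection logic described above. The structure (maximum frequency satisfying every enabled limit) follows the comment; all the numbers and the set of limits are invented for illustration, and real silicon evaluates far more constraints than this.

# Toy model: each enabled limit independently caps frequency, and the CPU
# simply runs at the highest frequency that satisfies all of them at once.

def operating_frequency(turbo_table_ghz, other_caps_ghz):
    """Max frequency allowed by the turbo table and every other limit."""
    return min([turbo_table_ghz] + list(other_caps_ghz))

# Hypothetical per-limit frequency caps at one instant in time:
caps = {
    "PL1 budget":             4.0,   # sustained power budget nearly spent
    "PL2 budget":             4.6,
    "thermal limit":          4.9,
    "current limit (ICCmax)": 5.0,
}
turbo_table = 4.7                    # all-core turbo for the active cores

f = operating_frequency(turbo_table, caps.values())
print(f"operating frequency: {f} GHz")  # 4.0 GHz: PL1 is the binding limit

Note that nothing here "reverts to base": if the PL1-derived cap were 4.3 GHz instead, the chip would run at 4.3 GHz, between base and max turbo.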
If Tau is 1s, and turbo only works during PL2, this means that there is practically no turbo at all and the CPU runs at its base clock all the time. Which *should* never go above 95W, because that's how the base clock has been chosen.
So how is Constant 165W so much faster than Constant 95W (94% vs 71%)? It can't be a very long benchmark, if 1 second turbo affects the score that much.
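One plausible reading of that table row is that the "Constant 165W" configuration sets PL1 = PL2 = 165 W, in which case Tau never comes into play and the chip simply sustains 165 W; there is no fallback to a 95 W budget at all. The sketch below simulates that reading with a simplified moving-average model of the PL1/PL2/Tau interaction (real hardware uses a weighted algorithm Intel doesn't fully document, so this is illustrative only).

# Simplified model: PL1 is enforced on an exponentially weighted moving
# average of package power with time constant tau; PL2 caps instant power.

def sustained_power(pl1, pl2, tau, demand_w, seconds, dt=0.01):
    avg = power = 0.0
    for _ in range(int(seconds / dt)):
        # Burst to PL2 while the moving average is under PL1, else hold PL1.
        power = min(demand_w, pl2) if avg < pl1 else float(pl1)
        avg += (power - avg) * (dt / tau)        # EWMA update
    return power

# PL1 = PL2 = 165 W: tau is irrelevant and the chip holds 165 W forever.
print(sustained_power(pl1=165, pl2=165, tau=1, demand_w=200, seconds=60))
# Intel-recommended style for a 95 W part: drops to 95 W after ~13 s here.
print(sustained_power(pl1=95, pl2=118.75, tau=8, demand_w=200, seconds=60))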
Isn't this all moot? Who's going to be buying Intel anyway?
[yeah OK; couldn't resist; great article BTW thanks for tackling the subject ... which will matter to Intel buyers ... if there are any... not me, for a long time]
I was about to get a Dell i7 laptop from Best Buy ($240 discount), but I guess I'll stick with my old laptop for now, until AMD straightens out their Ryzens.
Laptops running Intel CPUs are indeed the most egregious examples of the problems this article describes. It used to be that an unusually poor cooling solution might hamper turbo performance, but we've reached the absurd situation where Intel's CPUs are so "configurable" by the OEM, and so abusive with their power consumption, that you just can't tell how they will perform from the model number alone.
Apple's notebooks are a great example of one extreme, whereby their power limits and Tau are set to absurdly high levels but sustained performance is crappy because the chips rapidly overheat thanks to the cooling system being designed to Intel's misleading specifications.
Lenovo take the opposite approach - on a lot of their products they set such a low power limit that the thermal limits never come into play, even with their awful heat-sinks. CPU performance just sucks all day long and you can't fix it through cooling improvements.
Either way, you can throw down $300 on a "better" CPU and see no real-world benefit. It's a complete nonsense.
They need to advertise the maximum power draw of the CPU, as this is necessary to determine the rating of the PSU. If I'm building a mITX system with a 160 watt picoPSU, then I need to know that the system doesn't exceed this power limit. It's astonishing that manufacturers get away with this kind of false advertising and limited availability of real numbers.
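For anyone sizing a picoPSU build like this, the arithmetic is simple once you have the burst figure rather than the TDP; a minimal sketch with placeholder component estimates (substitute your own measured or datasheet numbers):

# Power-budget check for a small-form-factor build. Every figure here is a
# placeholder estimate; the point is to size against PL2, not the TDP sticker.

psu_limit_w = 160
cpu_pl1_w, cpu_pl2_w = 65, 95     # sustained vs. burst package power
rest_of_system_w = 35             # board, RAM, SSD, fans (rough guess)

worst_case_w = cpu_pl2_w + rest_of_system_w
print(f"worst case {worst_case_w} W of {psu_limit_w} W budget "
      f"({psu_limit_w - worst_case_w} W headroom)")   # 130 W, 30 W spare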
I thought picoPSUs could deliver more than their rated power for long periods? I am currently trying to overclock an i5-8600K in an ITX case with a Noctua 65 W cooler. The only "real" load I use is HandBrake, and even then every overclocking tutorial says to set the AVX offset to -2 (or is that plus 2); that maxes out the CPU for 90-100 minutes. All info is good, though...
The only thing I am surprised about is your being surprised at this "factory overclock" game going on.
You are a leading publication and disclose your process. So everyone in the hobbyist business knows how to appear best.
And the fact that adding cores to an existing die at a given process size and TDP can only mean either extra heat or lower clocks is fixed by physics, so you should have tested for that long ago: this "surprise" comes a little late, I think.
As for the strategy going forward: I understand that "auto" is comfortable, or even what most people would do. But I'd argue that most people who read your stuff carefully don't actually run "auto"; they'll follow your advice if they think it suits their use case.
There are those who go to AT for hints on overclocking, and they will happily activate "auto" knowing that a CPU overclock is what they get. They would even appreciate you operating overclocker RAM at its overclocked specification rather than MB defaults, because it makes a difference. They'd also want to know which cooler allows them to run their newest 200 watt monster CPU at tolerable noise, and where MB hotspots might appear and require special treatment. That sort of thing used to be a specialty of AT, I believe.
And there are those who go to AT because they want something rock-solid that fits their informed expectations; for those you should set the BIOS settings to AMD/Intel specifications. Typically "manual" and "100%" will give you something like that, easily enough verified via monitoring tools like HWiNFO, CPU-Z or similar. Again, here I'd recommend using DRAM specs, not MB auto, unless you're testing an entry-level system configuration for those who want the quick-n-easy according to AT, which could be a third audience.
I doubt that Intel will change very much what they put on the box. The current situation obviously benefits them in the benchmarks that are spread around everywhere, and the CPUs are simply so individual at high clocks and power that it would be too many bins and numbers to handle meaningfully in marketing.
CPUs are becoming ever more dynamic, and on one hand we expect manufacturers to exploit the full potential of the piece of silicon we have in our system, while on the other, many use cases also require reliable and sustainable performance, which in turn requires conservative power settings. I see little chance of having both at a single setting, so there need to be alternatives to "auto", and perhaps you can define some?
But do you really need to be _that_ conservative to get a rock solid system? I haven't run into issues with a 600 MHz overclock over the turbo frequency running on a relatively modest cooler. At least the turbo frequency should be rock solid, shouldn't it?
The main question to me is... why? Why release CPUs with a 35W or 65W TDP, when they really would rather use 200W? What's the point in printing those numbers?
I have an i5-8400 and I have now tried to adjust these BIOS settings. I use 82 watts for 1 s and 65 watts continuously. I am testing with HandBrake, so 98% load on all six cores.
After an hour or so, CPUID HWMonitor reads 65 watts as Package Power, so this is all fine. The clock is (to my surprise) at 3.6 to 3.7 GHz the whole time. That's close to full turbo and nowhere near the 2.8 GHz base frequency. I was expecting the 65 watt limit to put it at around 2.8 GHz.
That does not seem to fit with what I read in this article?
The base frequency represents the minimum guaranteed frequency when running a worst case workload (e.g. Prime95 small FFT). If your workload is not completely saturating the processor's AVX units then it will run at somewhere between the base and the max turbo frequency depending on how demanding the workload is.
Most workloads do not fully saturate the CPU's resources, even when the CPU reports 100% utilization, so it is actually quite rare to see the base frequency outside of running specific benchmarks.
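Liltorp's HWMonitor reading can be cross-checked on Linux through the powercap interface, which also exposes the PL1/PL2/Tau values the board actually programmed. A minimal sketch, assuming the intel_rapl driver is loaded and the usual sysfs layout (paths differ on some systems, and reading them typically needs root):

# Read the programmed power limits and measure package power via Linux RAPL.
import time

RAPL = "/sys/class/powercap/intel-rapl:0"   # package-0 domain (usual path)

def read_int(name):
    with open(f"{RAPL}/{name}") as f:
        return int(f.read())

pl1 = read_int("constraint_0_power_limit_uw") / 1e6   # long-term limit (PL1)
pl2 = read_int("constraint_1_power_limit_uw") / 1e6   # short-term limit (PL2)
tau = read_int("constraint_0_time_window_us") / 1e6   # Tau
print(f"PL1={pl1:.0f} W  PL2={pl2:.0f} W  Tau={tau:.2f} s")

# Average package power over one second from the cumulative energy counter
# (counter wraparound is ignored in this sketch).
e0 = read_int("energy_uj")
time.sleep(1.0)
e1 = read_int("energy_uj")
print(f"package power: {(e1 - e0) / 1e6:.1f} W")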
Hey! Great article, thanks for that! The diagrams are a bit weird to me, though. The time is in 100 ms units and labeled from 0 (/5) to 50/80 in steps of 10/20. So a "20" on the 100 ms graph is 2000 ms, or 2 s, right? If that is the case and I understand it correctly, why not just reduce the steps by one magnitude and use 1000 ms or 1 s as the axis unit? Then you have 0 to 5/8 s in 1 or 2 s steps. I hope that makes sense; English isn't my native language, and talking about proper technical phrases isn't easy for me.
This is interesting. If the 8700 chip in the new Mac mini pulls a lot of power, then the 150 W power supply will probably be very limiting. The heatsink is a different issue altogether... Maybe compare benchmarks of the MBP chip against the Mac mini chip: both have the same cores/threads but different power draw.
This is all welcome information, but I'm not sure why nobody offers the simple explanation.
At these power draws, metal doesn't heat instantly. If the CPU has been idling for a while under a heatsink designed to dissipate 95W, the heatsink is cool enough so it can handle something like 120W+ for a short period. That's it. That's what clever overclocking is putting to good use.
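That point is easy to quantify with a lumped thermal model: treat the heatsink as a thermal capacitance draining through a thermal resistance to ambient. The mass, resistance and power figures below are illustrative guesses for a mid-range tower cooler, and the model tracks only the heatsink body (the die itself heats much faster):

# Lumped RC model of a heatsink absorbing more power than its sustained rating.

ambient_c = 25.0
r_th = 0.6      # heatsink-to-air resistance, K/W (~95 W at a 57 K rise)
c_th = 450.0    # thermal capacitance, J/K (roughly 500 g of aluminium)
power_w = 125.0 # sustained turbo draw, well above the "95 W" rating
temp_c, dt = ambient_c, 0.1

for step in range(1, int(60 / dt) + 1):       # simulate one minute
    shed_w = (temp_c - ambient_c) / r_th      # heat currently leaving to air
    temp_c += (power_w - shed_w) * dt / c_th
    if step % 100 == 0:                       # report every 10 s
        print(f"{step * dt:4.0f} s: heatsink at {temp_c:5.1f} C")

# Steady state would be 25 + 125*0.6 = 100 C, but after 10 s the sink has
# only warmed ~3 K: ample thermal mass to soak up a PL2 burst.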
Here's a way to be representative for most: use the most popular cooler in the world, the Cooler Master Hyper 212, for both Intel and AMD, in a closed case. Otherwise, say upfront that the results are just the best possible, and that unless you run an open case in an air-conditioned room with the most expensive water cooler, you won't see such performance.
It's funny how, after reading this article, I doubt every CPU review I've read lately. The Intel performance seemed too good to be true: double the cores at the same power on the same litho node. Not likely.
So this means (if you’re testing “out of the box”) that *all* Intel CPU benchmarks are influenced heavily by the choice of motherboard for testing? If you benchmark all Intel CPUs on the same motherboard I suspect the results would still be useful relative to each other, but a motherboard from one manufacturer could show Intel performance above a comparable Ryzen part, while a differently configured motherboard could have performance below it, using the same CPU and other system parts.
This makes it seem like reviews comparing same-chipset Intel motherboards are more necessary than we thought. The performance delta could potentially be huge, especially if it was different enough on Supermicro motherboards to notice and prompt further testing.
Question: how much power draw is safe, or even just okay, for the motherboard? If a processor like the 9900K has liquid cooling etc. and is allowed to draw unlimited power up to the thermal limit, it can draw as much as some higher-end graphics cards. So, are all the motherboards out there able to deliver hundreds of watts to the CPU socket without a problem? Has anybody looked into this?
Unfortunately the answer to that depends entirely on the board in question. Most enthusiast motherboards are over-engineered on the power-delivery front, so you should be okay, however this very problem reared its extremely ugly head with the early boards supporting the i9-7900X. OEMs slapped attractive-but-ineffectual heat-sinks on the VRMs and peak performance instability was the end result.
The 9900K is literally the brand-new hotness in this area, so you're not going to get an answer without buying the board yourself and waiting to see. :/
I agree with your solution. PL2 should be on the box. If they can, also include the sub-second TDPs. This can be valuable for consumers buying CPU coolers who don't want to oversize but still want the guaranteed PL2 boost.
As you mention, you are going to run into problems with reviews. I'd suggest three configurations to cover the most realistic scenarios in which they'll be used (this goes for all CPUs):
1) Everything within spec. Simulate the performance you'd get in an OEM computer bought from somewhere like HP, Dell etc. This means (for the 9900K) DDR4-2666 RAM at default timings and no XMP, a cheap 95 W CPU cooler, a high mid-range to low high-end GPU (Dell's gaming PCs with the 8700K come with a 1080, for reference), and a PSU that is just good enough to cover requirements (the 1080's minimum requirement says a 500 watt PSU).
2) Same hardware as 1), but everything tweaked and OC'ed for maximum performance. This simulates the same machine in the hands of a hardware enthusiast, or what a hardware enthusiast on a tight budget can achieve.
3) Balls to the wall: dual 1,500 watt PSUs if needed, quad 360 mm radiators if needed, dual RTX 2080 Tis, the fastest possible memory at the tightest possible timings. What can this thing do if the only limits are itself and what you can do with air cooling or advanced water cooling?
@Ian Cutress, thanks for the great article. Can you please do a similar roundup on the AMD side (and perhaps a few other popular platforms, i.e. non-x86: ARM, SPARC, POWER)? How are they managing their TDP, turbo, etc.?
Today most, if not all, processor vendors have some kind of turbo modes and ways to manage TDP. I understand it might not be possible to cover all vendors, but something would still be nice :)
Why not also measure total energy consumed (dissipated) on a test run, as well as average power dissipation for that test (total energy consumed divided by test duration)? Maybe also include peak momentary power requirements. That way, people can judge:
a) how energy efficient the whole CPU actually is while performing a given workload
b) what kind of cooling solution they might need in order to replicate the benchmark results
c) what PSU rating they need, so their computer does not crash from a PSU overload (voltage drop)
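All three of those numbers fall out of a single log of power samples taken during the run; a minimal sketch with invented sample data (in practice you would log from a wall meter or the CPU's energy counters):

# Derive total energy, average power, and peak power from a power-sample log.

samples_w = [45, 118, 150, 148, 95, 95, 94, 96]   # one reading per second
dt_s = 1.0

energy_j = sum(p * dt_s for p in samples_w)       # simple rectangle sum
avg_w = energy_j / (len(samples_w) * dt_s)
peak_w = max(samples_w)

print(f"energy {energy_j:.0f} J ({energy_j / 3600:.3f} Wh), "
      f"average {avg_w:.1f} W, peak {peak_w} W")
# -> energy 841 J (0.234 Wh), average 105.1 W, peak 150 W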
This seems like déjà vu. If I recall, AMD was doing a similar thing in the days of Conroe and Penryn, when AMD got caught with its pants down: AMD started releasing high-TDP chips and came up with a new way of measuring their power dissipation that was less conservative (and more misleading) than Intel's. Now that the shoe is on the other foot, Intel is the one playing games with misleading TDPs, because Intel has the less efficient processors.
And iirc, this happened previously to the above as well, when Intel was hanging on to the P4 Netburst space heaters, and AMD was riding roughshod over them with the Athlon 64 X2.
Thank you for the article Ian. I've been blaming Intel for the last three years about how much over the power budget our servers are. Maybe it's SuperMicro's fault.
Power levels and milliseconds of turbo boost are good design ideas. However, on the macro scale, when I am trying to work out how many computers I can put on a 30 amp circuit, total power consumed and total power dissipated are, per physics, equal. It is unpleasant when the Kill A Watt confirms that the computers consume 160% of what they claim they dissipate.
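That 160% gap translates directly into how many boxes fit on a circuit. A quick sketch, assuming a 120 V / 30 A branch circuit derated to 80% for continuous load (the usual North American practice; adjust the numbers for your own voltage and local code):

# How many machines fit on one circuit, claimed vs. measured draw.

usable_va = 120 * 30 * 0.80     # 2880 VA after the 80% continuous derating
claimed_w = 140                 # per-box draw implied by the spec sheet
measured_w = claimed_w * 1.60   # what the Kill A Watt actually reports

print(f"by claimed draw:  {int(usable_va // claimed_w)} machines")   # 20
print(f"by measured draw: {int(usable_va // measured_w)} machines")  # 12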
To me this is on par with Volkswagen cheating on their emissions tests. It's just plain lying.
Intel decided to artificially delay its lithography to avoid monopoly laws eating its corporate flesh. So the fact that it lets AMD gain an advantage at 7 nm is to let itself appear barely competitive; otherwise, with its superior IPC and quality dies, it would again smash the competition if it moved to smaller transistor technologies. That is, its advertised weakness in getting to 10 nm or smaller is a hilarious scam. And because of this, it needs to clock its processors at frequencies that consume more power, so that it competes with AMD despite its aforementioned advantages. And this necessarily results in higher TDPs. But at headquarters, it does not sit well to let it appear that they compete at the cost of energy efficiency. Hence their little tricks to show that, although they raised frequencies, the energy consumption is about the same, thanks to what they call an "improved" 14 nm process after so many months of refinement. But you can only improve a process node so much... Figures like 14++ and similar are a joke, but one can understand Intel's position and tactics.
Intel could and should already be at 7 nm right now, preparing for the end of Moore's law in 2019 by issuing a highly questionable 5 nm or a replacement of the IF. It may well be on a new hardware paradigm, and it simply sees the futility of walking the few meters at the end of this road, preparing to march on a new avenue that we know little about.
The package power control settings of PL1 ... PL4 and Tau allow the designer to configure Intel Turbo ... to match the platform power delivery and package thermal solution limitations.
Power Limit 1 (PL1): ... - recommended to set equal to TDP power. PL1 should not be set higher than thermal solution cooling limits.
This means Intel only says that "the platform power delivery and package thermal solution limitations" have to be followed.
According to this Intel document, the designer (the motherboard manufacturer, in effect) has free hands to set PL1-4 and Tau anywhere, as long as Intel's "power delivery limitations" and "package thermal solution limitations" are followed, which cover fine details of jitter, amperes, thermals and such.
I personally would understand if a hardware review would differentiate between:
- Intel's own advertised TDP and recommended PL1, PL2 and Tau. It's not purely the advertised TDP, but the 8 seconds in PL2 are published processor-manufacturer data. The motherboard manufacturer's interpretation of what their designs can sustain is not published processor-manufacturer data, and those settings should not be used for pure CPU comparisons.
- Maximum performance of a CPU (OC etc.). Here everything is allowed, as long as it is apples to apples and fair.
- Besides the above, a mainboard shootout is okay, but it's tricky to stay apples to apples, as some run full throttle and others don't. Overclocking those boards is okay too, as long as it stays apples to apples.
- I think a little differently about laptops or other prebuilt systems, as these are 100% preset; if a system builder goes its own way, it should be reviewed 100% as-is. No apples to apples here. If those prebuilds are tested with OC too, I'm fine with that.
It's all within the latest trends. If the CPU burns 200 or 4096 watts, let it burn! If it melts in the process, just buy 10 more! What, you guys don't have money? ;)
I know this article's over a year old, but it can still turn up in searches, so comments are still relevant.
There's nothing at all deceptive about Intel's method of rating and specifying its TDP.
Recall that TDP stands for thermal design power, which simply means that you need to use a cooler capable of dissipating the specified amount of heat continuously. A 95 watt TDP CPU is designed to work with a cooler that can dissipate 95 or more watts continuously, and that means the system OEM or integrator needs to make sure the cooler they use meets that specification if the CPU is expected to perform as designed. That's all the TDP rating means.
TDP is not, and never has been, an estimate of how much power the CPU is going to consume under load! It has often been mistaken for exactly that, but there's a reason they call the stat "thermal design power" and not "maximum power consumption." It relates to the thermal design of the CPU and the cooler used with it. Intel, AMD, or any other chip maker can't be blamed if people think "thermal design power" means something other than what it actually means.
We have a Dell Mobile Precision 3541 XCTO, with Dell tag 8ZFN3X2... SR 1026836654... HAC 42697875... heating issue. [ref:_00D0bGaMp._5002R19]
The system goes to 99 degrees Celsius even when just playing Windows Media Player, and thermal and power limit throttling is applied, which we can see through the Intel XTU application. I can see the CPU utilization is less than 25%, and still the throttling TDPs are applied. Dell customer service says that a processor temperature of 110 deg C is normal, and they won't tell us the designed TDP numbers for the system.
We have a very hot surface on the left side of the keypad, which is very uncomfortable to work with, and the system hangs.
I feel these people are selling high-configuration laptops, but the motherboards are not designed to handle either the bus speeds or the thermals. What good is turbo boost if the system does not have the design in place to cope with the processor?
Gasaraki88 - Friday, November 9, 2018 - link
What you should do from now on is that for all CPU tests (both Intel and AMD) find out the official spec of the processor and force the motherboard you are using to test the CPUs to run at the official spec. If you are testing motherboards, you can run at default spec then auto settings and see the difference between the difference brands.Mr Perfect - Saturday, November 10, 2018 - link
I second this.I understand that Out-Of-The-Box is easier to test, and will better represent what a user will get, but it's also WHY the motherboard manufacturers are using ludicrous Out-Of-The-Box settings. They know it's going to win them the review and the resulting sales.
Ultimately, the manufacturers are going to push as hard as reviewers let them.
jospoortvliet - Sunday, November 11, 2018 - link
Well that seems fine to me... stick with out of the box settings - the vendors push as far as their hardware goes which is exactly as it should be.What i would love to see is an analysis of the higher core counts and their power usage. Given the lack of 24-28 Intel cores on the market I wonder how useful they really are...
Mr Perfect - Tuesday, November 13, 2018 - link
The vendors do have OC profiles on their boards. If people want to push the hardware, that's exactly what those are for.For people who don't want a 180watt power draw from a 95 watt CPU, there really should be a standard Intel spec profile. It would also be nice if it was the default profile, so people don't get caught unaware as they have been lately. Surely users should be making the choice, not some OEM who wants a couple extra points on a benchmark.
eddman - Friday, November 9, 2018 - link
To me, personally, a desktop's max possible sustained boost should only be limited by temperature. If 9900k's official all-core turbo frequency is 4.7 GHz, then I expect it to be able to hold that frequency virtually indefinitely if the PSU, mainboard's power circuit and cooler are up to the task.ImSpartacus - Friday, November 9, 2018 - link
You do need a "PL4" though. It's not good to request more power than what the power delivery can safely provide.But yes, I agree that the ideal setup is more temperature-based.
Martin_Schou - Sunday, November 11, 2018 - link
> If 9900k's official all-core turbo frequency is 4.7 GHzThis is obviously not the case - it's sold as a 3.6 GHz CPU, and if it officially supported running all cores at 4.7 GHz while staying in spec (95 watt), they'd be insane not to sell it as a 4.7 GHz CPU.
Intel's Ark says it has a single core turbo at 5 GHz, and if they can't run two at 5 GHz while staying below 95 watt, there's absolutely no way they'll have eight cores at 4.7 GHz below 95 watt.
doggface - Sunday, November 11, 2018 - link
RtfaThe whole point is that it only stays within tdp for the base clock. Turbo takes it outside it's tdp range.
nevcairiel - Monday, November 12, 2018 - link
You should really read the article.4.7Ghz is the all core TURBO. Turbo is never guaranteed, and certainly not within the TDP. Thats what differentiates the Turbo from base frequency. They guarantee you can run base within TDP. No such guarantees for any Turbo mode.
PeachNCream - Friday, November 9, 2018 - link
All I can say is that current CPUs run too hot. I've gone to the trouble of disabling turbo in the BIOS of my laptops (or speedstep in a very old laptop that's still alive) and, when possible swapping out the HSF of laptops that have only an iGPU with the HSF designed for the same model with a dGPU to keep fan noise down and temperatures in check. Generally that prevents thermal throttling and gives more consistent performance, but I also sometimes go through the trouble of telling Intel's GPU driver software to run in max battery mode even if a laptop is plugged into AC power and limiting CPU max clocks to 80% in Window's power settings. That sort of effort usually tames a laptop to the point where it can keep itself cool without kicking the fan on very much while holding skin temps down so it can actually sit on my lap.edzieba - Friday, November 9, 2018 - link
Modern CPUs do not run "too hot" they run "as hot as the specified maximum", i.e. if the CPU is any court than that it will increase power (and thus performance) until it hits that temperature (in the case of Core) or max voltage (in the case of Ryzen). If they ran any cooler, they would be leaving headroom (and thus performance) on the table.PeachNCream - Friday, November 9, 2018 - link
It's foolish to push something to the absolute maximum when a more efficient place on the work accomplished vs input required curve exists.Darkknight512 - Friday, November 9, 2018 - link
Yes, but you also need to include time in that efficiency equation. Drawing the absolute maximum amount of power might be the most efficient point in many applications.Mr Perfect - Saturday, November 10, 2018 - link
Why spend more on a high-speed processor and then down clock it? You'd save money and have the same performance buying a cooler, lower-clocked part.PeachNCream - Saturday, November 10, 2018 - link
I don't. Throwing down more than $100 on any computer is something I refuse to do these days (and don't need to now that I don't feel the compulsion to keep up with playing current generation video games). My toy of the moment was $20.50 USD laptop off ebay running a P8400 C2D. It's just hard to switch from using a Bay Trail with no cooling fan whatsoever to a noisy box that has not only a cooling fan, but is also a running second, unfortunately mechanical, hard drive in the ROM bay so it can act as a small file server for my home network.Spunjji - Monday, November 12, 2018 - link
You're not even talking about a modern processor at that point, so I'm not sure what the relevance to this article is.PeachNCream - Monday, November 12, 2018 - link
I certainly didn't mean to offend anyone or make someone else feel threatened because I spend less of my income on computer parts. I'm sorry about making you feel like an attack was warranted because I was calling your purchasing decisions into question, but it's difficult to know when someone will feel like one of their nerves has been hit over a comment.Diji1 - Sunday, November 11, 2018 - link
I think everyone already knows that more battery life is better and they aren't pointing that others are fools for not realising this even though everyone realises it.edzieba - Friday, November 9, 2018 - link
S/court/lower. Fscking autocorrect!GreenReaper - Friday, November 9, 2018 - link
If you're able to find a reliable undervolt, it's likely to save you significant power and so reduce the temperature for a given level of performance. The downside is that it can take significant time with occasional freeze sto figure out the actual minimum over a range of conditions (and potentially it can degrade over time, although this seems less likely after a certain amount of burn-in time).rocky12345 - Friday, November 9, 2018 - link
Nice write thanks.I do think Intel really needs to change how they rate their CPU's TDP as it is totally meaningless with today's CPU's released from them when they were forced to release CPU's with more cores because of AMD's Ryzen CPU's having more cores. Intels TDP numbers never really changed when they upped the core count.
So yes they need to change how the TDP is rated this measuring it from the feeble base clock has to go away and soon. They need to list TDP on how the CPU will actually run at it's stock config with the proper cooling. So if it says 5.0GHz max turbo on lets say 2 cores and the rest are running at their rated speed as well then list that as the TDP of the chip the numbers will be higher of coarse but it also shows the end user just how much cooling he/she is going to need to keep the chips temps under control. The way Intel is doing it now is very misleading for sure and listing a chip that says it is 95 watts and the customer buys a cooler that is good for lets say 100 watts and the chip under full load is drawing 150-200 watts then it is going to run very hot and shorten the life of the chip as well.
This brings up another point and that is this. It is not just Intel at fault here yes their part in this is listing a misleading TDP but you also have to look at board makers as well here. They are taking a lot of liberties with settings auto set that cause the CPU to run at a lot faster speeds than what the 95 TDP says it should and getting them into the 125-160 watt range straight out of the box. GN did a video on this and it was found that from the boards he tested from different board makers that they all were pushing things out of spec just to look faster than their competition which in my books is not right at all. Now if Intel list the TDP as it should be for any given CPU then maybe this would not be in such a huge grey area. As it sits you can not trust to many reviews not because the reviewers are being biased because it could be that they do not know any better and set everything to default not knowing those default settings are actually running the CPU way out of rated/listed specs. This also explains why if you go from review to review for the same CPU model the numbers are all over the place. Where you have one site listing benches with one set of numbers and another listing numbers that are not even close to the same. It depends on the board being used and just how honest the board maker was when configuring their UEFI bios. GN found that on the ASUS board yes default was to increase performance a little bit but not as bad as the rest that he tested. and that when he turned stuff off from auto on the ASUS board it did change the performance and the CPU was running within Intel's listed specs then. I think EVGA also complied when he turned off all extra performance settings and ran the CPU way under spec.
rocky12345 - Friday, November 9, 2018 - link
Sorry for the wall of text I should have looked at it more before posting this or at least broke it up into smaller paragraphs so it would be easier to read.bcronce - Friday, November 9, 2018 - link
I haven't tested in quite a few years, but ever since the Pentium2 and through my brother's i7 Nehalem OC'd from 3ghz to 4ghz, the actual draw has been lower than listed.I had a P3 that was rated for one N watts, but when I measured at the wall, the entire system under load was less than the listed TDP.
My brother's Nehalem build would tell you how many joules the CPU has consumed in 1 second intervals. Even at 4ghz with a listed a stock TDP of around 100watts, the CPU was maxing out around 60watts during burn-in.
Things may have changed, but I generally assume the actual max practical power draw of Intel desktop CPUs is less than the TDP. In my experience, this is where AMD and Intel have generally differed a lot.
I have not seen actual power draws of either in a long time and now I am interested. Time for an article!
IndianaKrom - Friday, November 9, 2018 - link
My i7-7700k can draw roughly 30 watts per core at its full 4.5 GHz turbo frequency, with multi-core enhancement enabled I can get the whole package to 115w with all 4 cores going at 4.5 on a power virus like prime 95 AVX. Outside of that it is actually a challenge to even get the chip much past 70w on the package, but the 9900k is literally double the core count, more cache, and clocks 200 MHz higher even without multi-core enhancement enabled. I can totally understand the 9900k hitting 180w from that.abufrejoval - Saturday, November 10, 2018 - link
I agree and my very same experience: I'd be very surprised by a different outcome, because that's physics: Clocks and cores cost power, can't increase both on the same budget.AT should have paid attention earlier, but now differentiate their reporting better on scenario settings they may have to define themselves, if none from vendors are good.
SaturnusDK - Saturday, November 10, 2018 - link
The real problem is that there is really only one MB that lets you do 95W on the 9900K at all. There was a round up with 12 z390 and all except one showed the 9900K to consume 140W on even simple gaming. Only one motherboard, the ASUS Maximus Hero lets you even get a setting that limits the 9900K to 95W, and that's not a default setting but one you have to look for.Supercell99 - Tuesday, November 20, 2018 - link
"power virus" bahahahaEl Sama - Friday, November 9, 2018 - link
Seems like GamersNexus is ahead of you guys.DanNeely - Friday, November 9, 2018 - link
It's the curse of too much travel and other higher priorities.https://twitter.com/IanCutress/status/106094257874...
... without saying what it was at the time, Ian used the same description a few days ago combined with a comment about getting scooped.
https://twitter.com/IanCutress/status/106023964929...
And you're the inevitable comment. :-/
GeorgeH - Friday, November 9, 2018 - link
You shouldn't force "Intel Specifications", because those aren't really specifications, they're starting point recommendations for system builders. You should generally first do "out of the box", while taking note of power consumption, frequency response, VRM design, etc. etc. to place performance numbers in context, with that context being informed by an expert understanding of how relevant those performance numbers are to the target consumer and how they might change. Then do "petal to the metal until the smoke comes out" to see where the limits are if you've got the time. You know, like an expert professional reviewer and stuff.Let the youtube kiddies do their "reviews" and get "outraged" at TDP "scandals" or whatever, and stick with what you're good at.
Spunjji - Monday, November 12, 2018 - link
I was under the impression that it was always a bad thing if a manufacturer provides specifications that have no relation to the product's actual performance, but evidently YMMV.deathBOB - Friday, November 9, 2018 - link
You should test it like the users will use it: out of the box settings using mainstream motherboards.I think power is an overrated concern anyway for desktop users. As long as it doesn’t crash or seriously thermal throttle I don’t care how much power is being used.
abufrejoval - Saturday, November 10, 2018 - link
"the user" is too much of a simplification for this publication, I think. The "typical user" doesn't read AT, but the typicall AT reader may want to do more and follow good advice, tested by AT staff, for the scenario that fits his use case best.200 Watts of peak CPU power are less of a concern when your GPUs use 250 each while gaming: But a desktop that starts howling when you do something non-trivial may be an issue in a workplace.
Gastec - Sunday, April 28, 2019 - link
You must be joking about our "user" indifference for electricity consumption or heat generated. But then again you continue with "I don't care" :)You don't care, we care. WE are better for caring.
znd125 - Friday, November 9, 2018 - link
I am starting to respect Supermicro a lot ...Bp_968 - Friday, November 9, 2018 - link
Me too. I especially love how they included those extra chinese chips on their motherboards for us. ;)ipkh - Friday, November 9, 2018 - link
They need to stop this base frequency nonsense. Or at least post it on the box. If they advertise the base boost clock on all cores they need to show TDP numbers for that. Anything else reeks if false advertising. If tdp equals power consumption then the real number needs to be known for end consumers to choose proper cooling and power for the chips.Motherboard vendors should also publish these numbers for any optimizations they provide. They should tell us how much power a chip uses in these auto settings.
sonny73n - Sunday, November 11, 2018 - link
The Federal Trade Commission has broad authority to prohibit unfair or deceptive acts or practices for interstate advertisements under the Federal Trade Commission Act. If it’s proved that Intel has deceived consumers with their false advertisements, can the Feds do anything for the victims? (Since their piles of rules and regulations couldn’t protect anyone).magila - Friday, November 9, 2018 - link
> This means that if Intel has defined a processor with a series of turbo modes, they will only work when PL2 is the driving variable for maximum power consumption. Turbo does not work in PL1 mode.This part is misleading. It makes it sound like the CPU always reverts back to the base frequency when PL1 is hit, but frequency actually only reduced as much as necessary to keep power draw at PL1. There's also no special relationship between PL2 and turbo frequencies. The turbo frequency table and PL2 are just two of many limits which might constrain the CPU's operating frequency. All of these limits are orthogonal to each other, the processor simply selects the maximum frequency which allows it to satisfy all of the enabled limits. If you set PL2 low enough it will constrain operating frequency below the max turbo just like PL1 does. The turbo frequency is just the maximum frequency when no other limits are exceeded.
ajp_anton - Friday, November 9, 2018 - link
How does that Constant setting work?"Constant 165W 165W 1s 165W 94%"
If Tau is 1s, and turbo only works during PL2, this means that there is practically no turbo at all and the CPU runs at its base clock all the time. Which *should* never go above 95W, because that's how the base clock has been chosen.
So how is Constant 165W so much faster than Constant 95W (94% vs 71%)? It can't be a very long benchmark, if 1 second turbo affects the score that much.
magila - Friday, November 9, 2018 - link
Ian's description of how turbo frequencies, PL2, and PL1 interact is misleading. See my comment above.Arbie - Saturday, November 10, 2018 - link
Isn't this all moot? Who's going to be buying Intel anyway?[yeah OK; couldn't resist; great article BTW thanks for tackling the subject ... which will matter to Intel buyers ... if there are any... not me, for a long time]
sonny73n - Sunday, November 11, 2018 - link
I was about to get a Dell i7 laptop from Best Buy ($240 discount), but I guess I’ll stick to my old laptop for now until AMD straighten out their Ryzen’s.Spunjji - Monday, November 12, 2018 - link
Laptops running Intel CPUs are indeed the most egregious examples of the problems this article describes. It used to be that an unusually poor cooling solution might hamper turbo performance, but we've reached the absurd situation where there Intel's CPUs are so "configurable" by the OEM and so abusive with their power consumption that you just can't tell how they will perform from the model number alone.Apple's notebooks are a great example of one extreme, whereby their power limits and Tau are set to absurdly high levels but sustained performance is crappy because the chips rapidly overheat thanks to the cooling system being designed to Intel's misleading specifications.
Lenovo take the opposite approach - on a lot of their products they set such a low power limit that the thermal limits never come into play, even with their awful heat-sinks. CPU performance just sucks all day long and you can't fix it through cooling improvements.
Either way, you can throw down $300 on a "better" CPU and see no real-world benefit. It's a complete nonsense.
jrs77 - Saturday, November 10, 2018 - link
They need to advertise maximum power draw of the CPU, as this is necessary to determine the power of the PSU. If I am to built a mITX-System with a 160 Watt picoPSU, then I need to know that the system doesn't exceed this powerlimit.It's astonishing, that manufacturers get away with this kind of false advertisements and limited availability of real numbers.
sonny73n - Sunday, November 11, 2018 - link
I’d like to see a class action lawsuit for this deceptive practice so I can get some of my money back and switch team.Spunjji - Monday, November 12, 2018 - link
Agreed here. I don't mind a "typical" value as long as I can get an honest appraisal of a maximum value.dromoxen - Tuesday, November 13, 2018 - link
I thought picopsu could deliver more than their rated power for long periods ? . I am currently trying to overclock i5-8600k with itx case and Noctua 65w cooler.. the only "real" load I use is handbarke , and even then every overclocking tut says set avx to -2 (or is that plus2) maxes out the cpu for 90-100mins. All info is good tho ...abufrejoval - Saturday, November 10, 2018 - link
The only thing I am surprised about is your being surprised at this "factory overclock" game going on.You are a leading publication and disclose your process. So everyone in the hobbyist business knows how to appear best.
And that adding cores to an existing die at a given process size and TDP can only mean either extra heat or lower clocks is fixed in physics, so you should have tested for that long ago: This "surprise" comes a little late, I think.
As for the strategy going forward: I understand that "auto" is comfortable or even what most people would do. But I'd argue that most people who read your stuff carefully, actually don't, but will follow your advice, if they think it suits their use case.
There are those, who go to AT for hints on overclocking and they will happily activate "auto" knowing, that CPU overclock is what they get. They would even appreciate you operating overclocker RAM at its overclocked specification, rather MB defaults, because it makes a difference. They'd also want to know, which cooler allows them to run their newest monster 200Watt CPU monster at tolerable noise and where MB hotspots might appear and require special treatment. That sort of used to be a specialty of AT, I believe.
And there are those who go to AT, because they want something rock-solid that fits their informed expectations, and for those you should set the BIOS settings to AMD/Intel specifications, typically "manual" and "100%" will give you something like that, easily enough observed via monitoring tools like HWinfo, CPU-Z or similar. Again, here I'd recommend using DRAM specs not MB auto, unless you're testing an entry level system configuration for those who want the quick-n-easy according to AT, which could be a third audience.
I doubt that Intel will change very much, what they put on the box. The current situation obviously benefits them in the benchmarks that are spread around everywhere and then the CPUs are simply so individual at high clocks and power, that it will be too many bins and numbers to handle meaningfully in marketing.
CPUs are becomming ever more dynamic and on one hand we expect manufacturers to exploit the full potential of the piece of silicon we have in our system, while on the other many use cases also require reliable and sustainable performance, which in turn requires conservative power settings. I see little chance of having both at a single setting, so there need to be alternatives to "auto" and perhaps you can define some?
kadajawi - Thursday, April 18, 2019 - link
But do you really need to be _that_ conservative to get a rock solid system? I haven't run into issues with a 600 MHz overclock over the turbo frequency running on a relatively modest cooler. At least the turbo frequency should be rock solid, shouldn't it?The main question to me is... why? Why release CPUs with a 35W or 65W TDP, when they really would rather use 200W? What's the point in printing those numbers?
Liltorp - Saturday, November 10, 2018 - link
I have an i5-8400 and have now tried adjusting these BIOS settings: 82 W for 1 second and 65 W continuously.
I am testing with HandBrake, so 98% load on all six cores.
After an hour or so, CPUID HWMonitor reads 65 W as package power, so this is all fine.
The clock is (to my surprise) at 3.6 to 3.7 GHz the whole time, so close to full turbo and nowhere near the 2.8 GHz base frequency.
I expected the 65 W limit to put it at around 2.8 GHz.
That does not seem to fit with what I read in this article?
magila - Saturday, November 10, 2018 - link
The base frequency represents the minimum guaranteed frequency when running a worst-case workload (e.g. Prime95 small FFT). If your workload is not completely saturating the processor's AVX units, then it will run somewhere between the base and the max turbo frequency, depending on how demanding the workload is.

Most workloads do not fully saturate the CPU's resources, even when the CPU reports 100% utilization, so it is actually quite rare to see the base frequency outside of running specific benchmarks.
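A rough way to see the scale of this effect: dynamic power scales roughly with f * V^2, and voltage rises roughly with frequency, so power drops steeply as clocks come down. Below is a minimal Python sketch of that scaling; the per-workload power figures are illustrative guesses, not measurements of any real chip.

```python
# Estimate the highest sustainable frequency under a power cap, assuming
# power ~ f * V^2 with V roughly proportional to f (so power ~ f^3).
# All wattages here are illustrative guesses.

def sustained_freq(cap_w, power_at_turbo_w, f_turbo, f_base, step=0.1):
    """Walk down from max turbo until estimated power fits under the cap."""
    f = f_turbo
    while f > f_base:
        est_w = power_at_turbo_w * (f / f_turbo) ** 3  # cubic scaling
        if est_w <= cap_w:
            return f
        f -= step                                      # drop one 100 MHz bin
    return f_base

# A HandBrake-like load might draw ~70 W at the i5-8400's 3.8 GHz all-core
# turbo, so a 65 W cap barely bites:
print(sustained_freq(65, 70, 3.8, 2.8))   # ~3.7 GHz
# An AVX torture test might draw ~95 W at full turbo; now the cap matters:
print(sustained_freq(65, 95, 3.8, 2.8))   # ~3.3 GHz
```

Under those assumptions a light load loses one bin while an AVX-heavy load loses five, which matches the behavior Liltorp describes above.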
Death666Angel - Saturday, November 10, 2018 - link
Hey! Great article, thanks for that! The diagrams are a bit weird to me though. The time is in 100 ms units and labeled from 0 (/ 5) to 50 / 80 in 10/20 steps. So a "20" in the 100 ms graph is 2000 ms or 2 s, right? If that is the case and I understand it correctly, why not just reduce the steps by one magnitude and use 1000 ms or 1 s as the axis unit? So you have 0 to 5 / 8 s in 1 or 2 s steps? I hope that makes sense, English isn't my native language and talking in proper technical phrases isn't easy for me.

coreai - Saturday, November 10, 2018 - link
This is interesting. If the 8700 chip in the new MacMini pulls a lot of power, then the 150 W power supply will probably be very limiting. The heatsink is a different issue altogether... Maybe compare benchmarks of the MBP chip against the MacMini chip. Both have the same cores/threads but different power draw.

bug77 - Saturday, November 10, 2018 - link
This is all welcome information, but I'm not sure why nobody offers the simple explanation.

At these power draws, metal doesn't heat instantly. If the CPU has been idling for a while under a heatsink designed to dissipate 95 W, the heatsink is cool enough that it can handle something like 120 W+ for a short period. That's it. That's what clever overclocking is putting to good use.
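That intuition drops straight out of a lumped thermal model: treat the heatsink as a single thermal capacitance with one thermal resistance to ambient. A minimal sketch, with assumed round-number values rather than data for any real cooler:

```python
# Lumped RC model of a heatsink: thermal capacitance c (J/K) and thermal
# resistance to ambient r (K/W). Values are assumed, not measured.

def heatsink_temp(power_w, seconds, c=350.0, r=0.6, t_amb=25.0, dt=0.1):
    """Heatsink temperature in C after dissipating power_w for `seconds`."""
    temp = t_amb
    for _ in range(int(seconds / dt)):
        shed_w = (temp - t_amb) / r           # heat leaving into the air
        temp += (power_w - shed_w) * dt / c   # net heat warms the metal
    return temp

# Starting from idle, 130 W for ten seconds barely warms the sink:
print(heatsink_temp(130, 10))    # ~28.6 C
# Sustained, the same 130 W heads toward 25 + 130*0.6 = 103 C:
print(heatsink_temp(130, 600))   # ~98.5 C and still climbing
```

With a time constant of several minutes, a sink sized for 95 W really can soak up an extra 25 W or more for the few seconds a stock turbo window lasts.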
dragosmp - Saturday, November 10, 2018 - link
Is this an Intel-only problem?

Here's a way to be representative for most: use the most popular cooler in the world, the CM 212, for both Intel and AMD, in a closed case. Otherwise, state up front that the results are just the best possible, and that unless you run an open case in an air-conditioned room with the most expensive water cooler, you won't see such performance.

It's funny how after reading this article I doubt every CPU review I've read lately. The Intel performance seemed too good to be true: double the cores at the same power on the same litho node. Not likely.
shelbystripes - Saturday, November 10, 2018 - link
So this means (if you’re testing “out of the box”) that *all* Intel CPU benchmarks are influenced heavily by the choice of motherboard for testing? If you benchmark all Intel CPUs on the same motherboard, I suspect the results would still be useful relative to each other, but a motherboard from one manufacturer could show Intel performance above a comparable Ryzen part, while a differently configured motherboard could have performance below it, using the same CPU and other system parts.

This makes it seem like reviews comparing same-chipset Intel motherboards are more necessary than we thought. The performance delta could potentially be huge, especially if it was different enough on Supermicro motherboards to notice and prompt further testing.
eastcoast_pete - Saturday, November 10, 2018 - link
Question: How much power draw is safe, or even just okay, for the motherboard? If a processor like the 9900K has liquid cooling etc. and is allowed to draw unlimited power to the thermal limit, it can draw as much as some higher-end graphics cards. So, are all the motherboards out there able to deliver hundreds of watts to the CPU socket without a problem? Has anybody looked into this?

Spunjji - Monday, November 12, 2018 - link
Unfortunately, the answer to that depends entirely on the board in question. Most enthusiast motherboards are over-engineered on the power-delivery front, so you should be okay; however, this very problem reared its extremely ugly head with the early boards supporting the i9-7900X. OEMs slapped attractive-but-ineffectual heatsinks on the VRMs, and peak performance instability was the end result.

The 9900K is literally the brand-new hotness in this area, so you're not going to get an answer without buying the board yourself and waiting to see. :/
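For a sense of scale, a back-of-the-envelope sketch of what that kind of socket power means for the VRMs; the core voltage, phase count and efficiency below are assumed typical values, not specs for any particular board:

```python
# Rough VRM load for a CPU drawing heavily through the socket.
# All inputs are assumed typical values for illustration.

power_w = 200.0          # package power, in the range discussed above
vcore = 1.3              # assumed core voltage under load
phases = 8               # assumed number of VRM phases
efficiency = 0.90        # assumed VRM conversion efficiency

current_a = power_w / vcore                  # ~154 A into the socket
per_phase_a = current_a / phases             # ~19 A through each phase
vrm_heat_w = power_w * (1 / efficiency - 1)  # ~22 W lost as heat in the VRM

print(f"{current_a:.0f} A total, {per_phase_a:.0f} A per phase, "
      f"{vrm_heat_w:.0f} W of VRM heat to remove")
```

That last ~22 W is what the decorative VRM heatsinks Spunjji mentions have to shed; when they can't, the board throttles no matter how good the CPU cooler is.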
zodiacfml - Saturday, November 10, 2018 - link
I agree with your solution. PL2 should be on the box. If they can, also include the sub-second TDPs. This can be valuable for consumers buying CPU coolers who don't want to oversize but do want the guaranteed PL2 boost.

marees - Sunday, November 11, 2018 - link
Agree with your concluding statement. Since you conducted all tests 'out of the box', the cooler should also be the out-of-the-box stock cooler (if one came with the CPU).
Martin_Schou - Sunday, November 11, 2018 - link
As you mention, you are going to run into problems with reviews. I'd suggest three configurations to cover the most realistic scenarios in which they'll be used (this goes for all CPUs):

1) Everything within spec. Simulate the performance you'd get in an OEM computer bought from somewhere like HP, Dell etc. This means (for the 9900K) DDR4-2666 RAM at default timings and no XMP, a cheap 95 W CPU cooler, a high mid-range to low high-end GPU (Dell's gaming PCs with the 8700K come with a 1080, for reference), and a PSU that is just good enough to cover requirements (the 1080's minimum requirement says a 500 watt PSU).
2) Same hardware as 1), but everything tweaked and OC'ed for maximum performance. This simulates the same as 1, but in the hands of a hardware enthusiast, or what a hardware enthusiast on a tight budget can achieve.
3) Balls to the wall; dual 1,500 watt PSUs if needed, quad 360 mm radiators if needed, dual RTX 2080Tis, fastest possible memory and tightest possible timings. What can this thing do if the only limits are itself and what you can do with air-cooling or advanced water cooling?
wolfesteinabhi - Monday, November 12, 2018 - link
@Ian Cutress: thanks for the great article. Can you please do a similar roundup on the AMD side (and perhaps a few other popular platforms, i.e. non-x86: ARM, SPARC, Power)? How are they managing their TDP and turbo, etc.?
Today most, if not all, processor vendors have some kind of turbo modes and ways to manage TDP. I understand it might not be possible to cover all vendors, but something would still be nice :)
shtldr - Monday, November 12, 2018 - link
Why not also measure total energy consumed (dissipated) on a test run, as well as average power dissipation for that test (total energy consumed divided by test duration)? Maybe also include peak momentary power requirements.

That way, people can judge (see the sketch after this list):
a) how energy efficient the whole CPU actually is while performing a given workload
b) what kind of cooling solution they might need in order to replicate the benchmark results
c) what PSU rating they need, so their computer does not crash from a PSU overload (voltage drop)
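All three numbers fall out of the same series of power samples. A minimal sketch, assuming one wall-meter reading per second; the trace itself is made up for illustration:

```python
# Derive (a) energy, (b) average power, (c) peak power from power samples.
# One assumed sample per second; the numbers are invented for illustration.

samples_w = [45, 180, 175, 172, 120, 118, 119, 117, 116, 115]  # watts
interval_s = 1.0

total_energy_j = sum(p * interval_s for p in samples_w)  # joules = W * s
duration_s = len(samples_w) * interval_s
average_power_w = total_energy_j / duration_s
peak_power_w = max(samples_w)

print(f"(a) energy:  {total_energy_j / 3600:.3f} Wh")  # efficiency per run
print(f"(b) average: {average_power_w:.1f} W")         # sizes the cooler
print(f"(c) peak:    {peak_power_w} W")                # sizes the PSU
```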
magreen - Monday, November 12, 2018 - link
This seems like deja vu. If I recall, AMD was doing a similar thing in the days of Conroe and Penryn, when AMD got caught with its pants down. AMD started releasing high TDP chips and came up with a new way of measuring their power dissipation that was less conservative (and more misleading) than Intel's. Now that the shoe is on the other foot, Intel is the one playing games with misleading TDPs, because Intel's got the less efficient processors.

And iirc, this happened previously to the above as well, when Intel was hanging on to the P4 Netburst space heaters, and AMD was riding roughshod over them with the Athlon 64 X2.
UrQuan3 - Tuesday, November 13, 2018 - link
Thank you for the article, Ian. I've been blaming Intel for the last three years about how much over the power budget our servers are. Maybe it's SuperMicro's fault.

Power levels and milliseconds of turbo boost are good design ideas. However, on the macro scale, when I am trying to identify how many computers I can put on a 30 amp circuit, total power consumed and total power dissipated are, as per physics, equal. It is unpleasant when the kill-a-watt confirms that the computers consume 160% of what they claim they dissipate.
To me this is on par with Volkswagen cheating on their emissions tests. It's just plain lying.
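The 30 amp arithmetic is worth spelling out. A quick sketch with assumed US-typical values (120 V branch circuit, 80% derating for continuous loads, and hypothetical per-server wattages):

```python
# Servers per 30 A / 120 V circuit, derated to 80% for continuous loads.
# Per-server wattages are hypothetical, chosen to match the 160% figure.

usable_w = int(30 * 120 * 0.8)       # 2880 W of continuous budget
claimed_w = 300                      # hypothetical spec-sheet figure
measured_w = int(claimed_w * 1.6)    # 480 W measured at the wall

print(usable_w // claimed_w)    # 9 servers if you trust the label
print(usable_w // measured_w)   # 6 servers once you measure
```

A third of the rack capacity evaporates, which is exactly the kind of planning failure the comment is objecting to.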
Gastec - Tuesday, November 13, 2018 - link
Nice, three cheers for MCE and unwanted auto overclocking! What's next, a (hidden) power virus as standard motherboard software?

magreen - Tuesday, November 13, 2018 - link
You mean Windows 10? ;)

IUU - Wednesday, November 14, 2018 - link
Intel decided that it will artificially delay its lithography, to avoid monopoly laws eating its corporate flesh. So the fact that it lets AMD gain an advantage at 7 nm is to let AMD become barely competitive; otherwise, with its superior IPC and quality dies, Intel would again smash the competition if it moved to smaller transistor technologies. That is, its advertised weakness in going to 10 nm or smaller is a hilarious scam.

And because of this, it needs to clock its processors at frequencies that consume more power, so that it competes with AMD despite the aforementioned advantages. And this necessarily results in higher TDPs. But at headquarters, it does not sit well to let it appear that they compete at the cost of energy efficiency. Hence their little tricks to show that although they raised frequencies, the energy consumption is about the same, because, as they say, of an 'improved' process at 14 nm after so many months of refinement.

But you can only improve a process node so much... Figures like 14++ and similar are a joke, but one can understand Intel's position and tactics.
IUU - Wednesday, November 14, 2018 - link
Intel could and should already be at 7 nm right now, preparing for the end of Moore's law in 2019 by issuing a highly questionable 5 nm, or a replacement of the IF. It may well be onto a new hardware paradigm; it simply sees the futility of walking the few meters at the end of this road and is preparing to march on a new avenue that we know little about.

IUU - Wednesday, November 14, 2018 - link
*edit replacement of the IC.peevee - Tuesday, November 20, 2018 - link
"If we should be following Intel's specifications to the letter, should we go one step further, and only use stock coolers too?"yes.
mamboman - Tuesday, February 12, 2019 - link
The package power control settings of PL1 ... PL4 and Tau allow the designer to configure Intel Turbo ... to match the platform power delivery and package thermal solution limitations.

Power Limit 1 (PL1):
...- recommend to set to equal TDP power.
PL1 should not be set higher than thermal solution cooling limits.

This means Intel only says that "the platform power delivery and package thermal solution limitations" have to be followed.

Per this Intel document, the designer (the motherboard manufacturer, in some sense) has a free hand to set PL1-4 and Tau anywhere, as long as Intel's "power delivery limitations" and "package thermal solution limitations" are followed, which hold data about fine details of jitter, amperes, thermals and such.
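To illustrate how those knobs interact, here is a minimal sketch of the running-average power limit as it is publicly described: PL2 caps instantaneous draw, while PL1 caps an exponentially weighted moving average with time constant Tau. The PL1/PL2/Tau values follow Intel's published 9900K guidance (PL1 = 95 W, PL2 = 1.25 × PL1 ≈ 119 W, Tau = 8 s); the controller itself is a simplification and the 150 W demand figure is invented.

```python
# Simplified running-average power limit: PL2 caps instantaneous power,
# PL1 caps an exponentially weighted moving average with time constant Tau.

PL1, PL2, TAU = 95.0, 118.75, 8.0    # watts, watts, seconds (9900K guidance)
dt = 0.1                             # simulation timestep in seconds
demand = 150.0                       # assumed unconstrained load

avg = 0.0                            # moving average of package power
for step in range(201):              # 20 seconds of sustained load
    # Largest draw this step that keeps the average at or below PL1:
    budget = avg + (PL1 - avg) * TAU / dt
    draw = min(demand, PL2, budget)
    avg += (draw - avg) * dt / TAU   # EWMA update
    if step % 50 == 0:
        print(f"t={step * dt:4.1f}s  draw={draw:6.1f} W  avg={avg:5.1f} W")

# Draw holds at PL2 for on the order of Tau seconds (about 13 s in this
# crude model; Intel's exact weighting differs), then settles at PL1.
```

Set Tau to thousands of seconds, as many boards do, and the average never catches up: the CPU simply sits at PL2 indefinitely, which is the behavior the article documents.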
I personally would understand it if a hardware review would differentiate between:
- Intel's own advertised TDP and recommended PL1, PL2 and Tau.
It's not purely the advertised TDP, but the 8 seconds in PL2 are published processor-manufacturer data.
The motherboard manufacturer's interpretation of what their designs can sustain is not published processor-manufacturer data,
and those settings should not be used for pure CPU comparisons.
- Maximum performance of a CPU (OC etc.).
Here everything is allowed as long as it is apples to apples and fair.
- Besides the above, a mainboard shootout is okay, but it's tricky to stay apples to apples, as some run full throttle and others don't.
Overclocking those boards is okay too, as long as it stays apples to apples.
- I think a little differently about laptops or other prebuilt systems, as these are 100% preset;
if a system builder goes its own way, it should be reviewed 100% as-is. No apples to apples here.
If those prebuilts are tested with OC too, I'm fine with that.
Gastec - Thursday, April 25, 2019 - link
It's all within the latest trends. If the CPU burns 200 or 4096 W, let it burn! If it melts in the process, just buy 10 more! What, you guys don't have money? ;)

sleepeeg3 - Saturday, May 11, 2019 - link
Great article, Ian. Thanks for the research!

Ascaris - Thursday, April 2, 2020 - link
I know this article's over a year old, but it can still turn up in searches, so comments are still relevant.

There's nothing at all deceptive about Intel's method of rating and specifying its TDP.
Recall that TDP stands for thermal design power, which simply means that you need to use a cooler capable of dissipating the specified amount of heat continuously. A 95 watt TDP CPU is designed to work with a cooler that can dissipate 95 or more watts continuously, and that means the system OEM or integrator needs to make sure the cooler they use meets that specification if the CPU is expected to perform as designed. That's all the TDP rating means.
TDP is not, and never has been, an estimate of how much power the CPU is going to consume under load! It has often been mistaken for exactly that, but there's a reason they call the stat "thermal design power" and not "maximum power consumption." It relates to the thermal design of the CPU and the cooler used with it. Intel, AMD, or any other chip maker can't be blamed if people think "thermal design power" means something other than what it actually means.
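Read that way, a TDP maps directly onto a cooler requirement through the usual thermal-resistance rule of thumb. A hedged sketch, with an assumed maximum case temperature and chassis air temperature (the real limits vary by SKU):

```python
# Cooler requirement implied by a TDP rating. Temperatures are assumed
# illustrative values; check the actual SKU's thermal spec.

tdp_w = 95.0         # rated thermal design power
t_case_max = 72.0    # assumed maximum allowed case temperature, C
t_air = 35.0         # assumed worst-case air temperature in the chassis, C

# The cooler qualifies if its case-to-ambient thermal resistance is low
# enough to move the full TDP across the available temperature delta:
max_r_theta = (t_case_max - t_air) / tdp_w
print(f"cooler must be <= {max_r_theta:.2f} C/W case-to-ambient")  # ~0.39
```

Note what the calculation never mentions: how much power the CPU may draw during turbo. That omission is exactly the distinction the comment is drawing.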
jbautomation - Tuesday, July 21, 2020 - link
We have a Dell Mobile Precision 3541 XCTO (Dell Tag 8ZFN3X2, SR 1026836654, HAC 42697875) with a heating issue. [ref:_00D0bGaMp._5002R19] The system hits 99 degrees Celsius even when just playing Windows Media Player, and thermal and power limit throttling is applied, which we can see through the Intel XTU application. CPU utilization is less than 25%, yet the throttling still kicks in. Dell customer service says that a processor temperature of 110 deg C is normal, and they won't tell us the designed TDP numbers for the system.
The surface on the left side of the keyboard gets very hot, which is very uncomfortable to work with, and the system hangs.
I feel these people are selling high-configuration laptops, but the motherboards are not designed to handle either the bus speeds or the thermal load. What good is turbo boost if the system does not have the design in place to cope with the processor?