Interesting how many people always believe in benchmarking, as if in the real world all cores are always idle....
The world of wonders: artificial TDP, turbo modes, and decreased frequency when running multiple cores. All to fool consumers and benchmark believers.
Very nice review. Now the question: can this also be tested on a Ryzen 2700, an 8700K, and a 9900K? Put all three, albeit in different setups, on a stock or even reduced cooling device and see how they behave....
I see why you are interested, but both the 2700 and 8700K are actually quite close in power use to their rated TDP. The issue was that the 9900K wasn't at all. If you look at the power/performance graph on the last page, I think you have your answer ;)
It's almost a very nice graph but could really stand to have a few more CPUs labeled. I mean even the literal headlining CPU that the entire article is about isn't labeled.
And trying to compare to the POV-Ray results earlier in the article either a bunch of the CPUs are missing or the scale on the chart does not actually match the labels.
Unless a game pushed those TDPs up. Games that can use many cores at once, like Civ and Battlefield. You know, two minor franchises nobody would notice.....
There's a big difference between starting to use 6-8 cores (like Civ & BF) and hitting all those cores with a heavy load for a sustained period. Show me a game benchmark that has the 9900K literally doubling the performance of a 7700K and then you'll have a game that can push the 9900K well past its 95W tdp.
Game streaming from a single PC would certainly do that, but hopefully streamers are doing some research and choosing hardware carefully.
To be clear, I'm not defending Intel here, the tdp figure has become a joke, but we're a long way from this being a widespread issue for gaming workloads.
You are assuming no game uses all the cores (or enough that they go above TDP). The assumption is incorrect now and it will become more incorrect as quad core becomes the minimum.
I think it's totally insane a CPU can use 25-27% more power than its advertised rating. Sure, that includes more performance, but as a system builder this has got to be a liability if you are putting together, say, a little 1U rack for video encoding security camera feeds. You would use a specified CPU based on its performance AND advertised TDP rating, only to find out that to GET that performance, it needs to go well beyond its TDP rating, which likely won't be possible in a tiny rack with a 1U cooler (I don't believe they make 1U coolers rated beyond 105 W - and those are incredibly rare; most are 73-88 W).
I wanted to build a new PC on Black Friday, and I bought an i9-9900k. I never overclock and typically buy a locked/non-k CPU but couldn't wait until next year. I also always use a SFF case (Cooler Master Elite 130).
The real question is real world performance. If the goal is a SFF machine where you don't have closed loop coolers, and you have a small ITX motherboard and a small case, what will happen to temperatures in those cases. That is where you get heat related issues with performance.
We know that the 2700X hits 4.3GHz, 4.4GHz in some situations, but put it in an ITX case, benchmark it. Will the i9-9900k end up being all that much faster when you are pushing your machine, not just in games, but when you are using your system as an 8 core system where you have web browsers, mail, MS Word, plus other things open at the same time? With all of this running, then go to it with your benchmarks. Compare how well the 2700 and 2700X perform without overclocking and just use the defaults to allow boost/turbo to operate. Is the 9900k all that much faster when playing games with that other stuff still running in the background? Push it for an hour of nonstop use to make sure that you are seeing how well the chip will work in the real world(when used by enthusiasts).
At that point, will we see the average CPU speed be 4GHz, or will it be down in the 3.6-3.7GHz range? Would the Ryzen chips at that point be faster in a SFF case than the i9-9900k?
Another factor here is that it's not only the CPU that uses power; one must also include the power consumption of the GPU, which is often significantly higher than the CPU's.
But in normal people's real-world usage, the cores are not running as much. It requires software designed to be multithreaded, or multiple applications running at the same time - the major problem is that video work often has to be single-threaded. In the real world, not everyone is a hardcore gamer.
One should also remember that previously we had more desktops, and all had external GPUs - but now most of the market, especially the business market, is mobile, so the desire for high-performance, high-power systems is not as important. Power-saving modes matter to customers.
This is not just important for PCs - just this morning, I got a message on my Samsung Note 8 that my settings were causing my phone to use more battery.
It really must be taken in the perspective of users' needs - for hardcore gamers, more cores and external GPUs are important. But for most users running Office and such, an internal GPU and dual core is fine.
@HStewart: "It really must be taken in the perspective of users' needs - for hardcore gamers, more cores and external GPUs are important. But for most users running Office and such, an internal GPU and dual core is fine."
Which group of users that you defined do you suppose is the target audience for the i9-9900K tested in this article?
"Which group of users that you defined do you suppose is the target audience for the i9-9900K tested in this article?"
Yes, I realize that - but it appears that people in this category tend to believe they are the only category. Also, not all hardcore gamers are overclockers. I would say I have done a lot of gaming in my life, and even at 57 I still do. But in all that time, unless it was done by the manufacturer, I have really not overclocked. I believe both my XPS 13 2-in-1 and XPS 15 2-in-1 have some built-in overclocking, but it is controlled by the system.
All I am saying is that not everyone overclocks or games hardcore.
You don't need to manually overclock to enjoy the benefits of how long the processor can run at turbo speeds, vs. base speeds. If a chip can turbo to 5GHz all the time due to good cooling, then that will mean that even without manually overclocking, that CPU will have a much higher performance than lower tier chips. On the other hand, if the cooling isn't very good, then it will stay at base speeds most of the time.
Small Form Factor....the beauty of having a small machine. If it also means that the performance will be limited due to cooling, then why bother paying for a faster processor when a slower processor will be almost as fast at half the price?
What many want to see are real world situations. People do not buy a 9900k if they don't want high performance, even if they do not manually overclock. So, 8 core/16 thread, because why pay for that if 4 core/8 thread, or 6 core/12 thread will perform just as well if not better? Same case size, will the 9900k be faster than a Ryzen 7 2700X in the same SFF case if the 9900k can't be cooled well enough to keep the chip running faster than base speeds? What would you do if the 2700X, which doesn't bench as well, were actually better at holding turbo/boost speeds in a SFF environment? Do you expect a SFF machine to have a discrete video card(which Intel chips don't necessarily need, even if the people who buy a 9900k will almost always put one in)?
Laptops are not the target of this article (no 9900K has ever been put into a laptop), so laptop boost/turbo results will be a bit more difficult, because the design of the laptop itself won't allow a fair apples-to-apples comparison unless you could swap the motherboards/processors while keeping the same chassis/cooling.
I understand laptops are not the target of this article - but some crazy laptop makers like to put desktop components into perverted laptops.
Like it or not, this industry is moving away from desktop components, and not just to laptops - all-in-ones are a perfect example. The closest thing Apple has to a desktop is the Mac Mini. In some ways even servers are changing - blades are a good example.
As far as SFF is concerned, mobile chips are ideal for it - and a solution like EMIB is perfect for increasing graphics performance - except the GPU in my Dell XPS 15 2-in-1 is just not on par with NVIDIA's; Intel made a bad choice teaming up with AMD on it. Don't get me wrong, I'm not against the iGPU - it is awesome and better than older-generation NVIDIA parts like the 860M.
When you are not using the iGPU, it is power-gated off. It isn't using any power, or if it is, it is minute to the point where it doesn't matter.
People have been saying, for years, that the iGPU was a detriment to OCing and power usage. The existence of HEDT has proven that idea wrong many, many, many times over.
How does HEDT prove that an iGPU isn't a detriment to OCing or power usage? One might be able to argue that the dead silicon provides some heatsinking and surface area.
Yes yes, all those office workers running a 9900K would barely notice. Seriously, man? The whole reason this issue came to light was because gamers and other demanding users complained that the processor at DEFAULT settings on pretty much any retail board was annihilating TDP, even more noticeably so than their previous flagship. Journalists are lagging well behind, and when they DO bother looking into it, it's very much a "yeah it's true, shrug" article.
Well said, "shrugging". Automagically overclocked CPUs do make benchmark graphs look better and CPUs sell better. Reminds me of one of AngryJoe's videos about a certain MMO: "Oh, you want more? That would be $15 please! $15 more, please! That would be another $15!" Same with Intel: you want more performance, that would be 75% to 100% more watts, please! Yeah, I get it. Want 10 GHz on all cores? 1000 W, what's the problem?
It would be interesting, though perhaps not entirely related to this article in particular, to get a comparison on actual power draw and load temperatures as well. Similar to what you provide in your usual CPU reviews, to get a fair comparison to both the "unlocked" 9900K as well as the other slew of processors in the bench.
I'd imagine the 9900K would look much better on those numbers, though obviously worse on performance, when actually adhering to TDP.
You are missing the entire point of the article. This is a follow-up to how Intel rates TDP for their CPUs. Intel's TDP is for the base clock only, and this was to show what the performance would be if TDP meant the absolute max power draw of the CPU. Right now the i9-9900K uses over 160 W of power in its out-of-box configuration that most people use. If you buy a CPU cooler rated for, say, 125 W, thinking you will be covered since it is a "95W" CPU, you will not be getting the performance that you are seeing in professional benchmarks. AMD, on the other hand, has their TDP be the max power draw of the CPU - the exception being the 2700X, which hits like 110 W in reviews I have seen. Therefore, if you buy a 125 W cooler for the 2700X, you will get the performance you are expecting.
It's not about OC, but the experience out of the box.
Out of the box, AMD very closely follows TDP, going over by 5 - 10 W at the most.
Intel motherboard manufacturers ignore Intel guidelines and allow the CPU to boost ad infinitum (instead of the Intel spec 8 seconds). This means that *out of the box*, a CPU rated 95 W will require a 145 - 160 W cooler when running 100% on all cores, or it will throttle.
That's the main point. The reviews and benches all are testing it on "unlimited", which makes it look better than it actually IS when you're TDP-limited.
A lesser issue is that when you're NOT TDP limited, it eats a crapton more power, runs hotter, and dumps more heat into your system than you were anticipating based on TDP.
I would think that people who overclock a system would understand that running at higher than base clock means you need a more powerful power supply - plus they likely have an external GPU that uses a lot of power, in many cases more than the CPU itself.
Problem here is that it's not the user overclocking the system - it's the motherboard with default UEFI settings increasing Tau to (close to) infinity, thereby allowing the CPU to boost for hours.
Beginners won't even be aware that they're not getting the most of their expensive CPU, since there is no way for them to know to anticipate 145 - 160 W of thermal dissipation.
ASUS is the only motherboard manufacturer whose Z390 boards can be configured to obey the TDP and even there you first need to enable XMP and then select "Intel" instead of "ASUS" in the prompt that appears. If you don't touch XMP (as many beginners are likely to), you'll run with grossly extended Tau out of the box.
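To make the Tau discussion concrete, here's a loose sketch of the power-limit mechanism. This is a simplification of my own (the real silicon tracks an exponentially weighted moving average of package power, and the PL2 ≈ 1.25 × PL1 relationship and 8-second window are just the figures discussed in this thread, not guaranteed values for any given board):

```python
def allowed_package_power(t_seconds, pl1=95.0, tau=8.0):
    """Simplified model: the CPU may draw up to PL2 (~1.25 x PL1)
    for roughly tau seconds after a load begins, then must drop
    back to the sustained PL1 limit. A board that sets tau to a
    huge value effectively never enforces PL1 for real workloads."""
    pl2 = 1.25 * pl1
    return pl2 if t_seconds < tau else pl1

# Spec-ish settings: the boost window expires quickly
print(allowed_package_power(5))    # 118.75 (within the boost window)
print(allowed_package_power(60))   # 95.0   (back at sustained PL1)

# "Unlimited" board default: tau set absurdly high
print(allowed_package_power(3600, tau=4096))  # 118.75, still boosting an hour in
```

That's why out of the box the same "95 W" chip behaves so differently from board to board: the vendors aren't changing PL1, they're changing how long PL2 is allowed to last.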
I would expect that if the motherboard company is making the settings higher than recommended by the processor company, they should inform the customer that they recommend a larger power supply. This assumes I understand the motherboard settings of desktop machines lately - it has been slightly over 10 years since I built a desktop machine, and it was a Supermicro dual Xeon.
The fact that all motherboard vendors do the exact same thing could lead one to draw the conclusion that the practice is actually mandated and suggested by Intel - unofficially of course.
Higher benchmark results will look good especially for casual readers (who only look at certain performance graphs and skip the power consumption numbers), all the while allowing Intel to market them as "95 W" parts.
If Intel didn't like this practice they could hardcode behavior in the CPU itself. Oh wait, they DO... and they allow this because it makes them bench better. Meanwhile look at their cheaper locked "95W" models, I bet you won't see them auto-overclocking to 150W+ even with the board defaulting to "unlimited" TDP.
It should be ILLEGAL for motherboard makers to go out of Intel's specifications by default. All overclocking should be entirely the responsibility of the user.
It's not capping, it's running the CPU according to the Intel datasheet specification.
Operating the component beyond specification is usually called overclocking which is nice and all but doesn't allow an unbiased comparison of the different products.
With Intel's blessing. If Intel wasn't onboard, they'd clamp the behavior on-chip, and you'd have to manually overclock to override TDP for any length of time (for unlocked chips, anyway).
Anyway my prediction is that if Intel continues this practice, AMD just starts following suit more and more as time goes on. We'll see.
What I find interesting about all of this is that with mobile ARM chips the exact same characteristics are called throttling instead. Possibly we should get these naming conventions together? Either x86 chips throttle, as mobile ARM chips do, or mobile ARM chips have turbo mode too.
The difference is that the ARM chips are labelled with the short-term frequencies and performance, while Intel puts the steady-state values on the box. Motherboard manufacturers throw the box values right out the window, but if Intel were to dictate /those/, the wailing and gnashing of teeth from the peanut gallery would be cacophonous.
There are actually three primary states. Base clocks, boost or turbo speeds, and then you can get thermal throttle which will actually lower the speed below the base clock speed. If the i9-9900k has a base of 3.6GHz, a turbo that goes up to 5GHz, but you have poor cooling, you may be seeing the CPU sticking to that 3.6GHz, or even below it if the temperatures get too high.
This is where those very thin laptops may have Ryzen versions performing better than Intel, because of the temperatures keeping the chip running at or even below base speeds. For a small form factor machine, will the 9900k be running at base speeds ALL THE TIME due to temperatures/TDP/cooling? In the same small form factor case, would a Ryzen 7 2700X end up having a similar level of performance after several hours(to allow the heat generation to stabilize)? If you start when things are COLD, you could turn the machine on and run benchmarks, and see better numbers than if the machine were already on and you had been running intensive applications for several hours prior to running the benchmarks.
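Those three states can be sketched as a toy model, assuming the 9900K's 3.6 GHz base and 5.0 GHz turbo from above (the temperature thresholds here are invented purely for illustration):

```python
def effective_clock_ghz(temp_c, base=3.6, turbo=5.0):
    """Toy model of the three clock states: turbo while there is
    thermal headroom, base clock when warm, and below-base thermal
    throttling once the chip gets too hot. Thresholds are made up."""
    if temp_c >= 100:              # hypothetical Tjmax-style limit
        return round(base * 0.75, 2)   # thermal throttle: below base
    if temp_c <= 70:               # hypothetical headroom threshold
        return turbo
    return base

print(effective_clock_ghz(60))   # 5.0 - cool SFF case, fresh boot
print(effective_clock_ghz(85))   # 3.6 - heat-soaked after hours of load
print(effective_clock_ghz(105))  # 2.7 - cooling can't keep up
```

Which is exactly the "cold start vs. heat-soaked" benchmarking concern: the same chip in the same case can land in a different state depending on how long it has been under load.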
@Ian: Thanks for this informative test and review. One comment, one question/request. Comment: I continue to be struck by Intel's prowess when AVX-512/AVX2 comes into play. I am also (negatively) impressed by the thermal load the use of these instructions causes. The reduction in performance when using AVX-512/AVX2 under strict adherence to a TDP of 95 W speaks volumes. Did you ever have a chance to ask Intel why running AVX makes their chips so power-hungry? Even if not, I'd appreciate your thoughts on why AVX makes Intel's chips run so hot.
Here's my question/request: I know that you/Anandtech have a large dataset on x264 video encoding speeds. However, especially for i7s and AMD's six-core and up Zen chips, I'd like to know how they fare when encoding/transcoding a 2160p 10-bit video, as that is now in increasing demand, and really makes the processor sweat (and slow down, a lot). Any chance you and your colleagues can add that to the encoding tests? If space is an issue, I suggest dumping the x264 720p speed test; even a lowly Athlon or Celeron chip does that quite well, and at good speed.
I believe you can turn off AVX-512 in the BIOS - it is used in special applications that need the speed.
Also, I would think that external GPUs are another major factor in considering power requirements for a system.
I don't believe there are any extra power needs or reductions in top speed for AVX2, only that AVX-512 uses extra power and top frequencies are reduced while it is being used.
One thing about AVX2 - on Intel it is 256-bit, and AMD currently has dual 128-bit units - not sure about the new Zens coming out next year. But at least with PowerDirector, it gives you a significant performance increase.
It's pretty simple, really: the more data the CPU has to process in parallel, the more horsepower it uses. It's like doubling or quadrupling the number of active cylinders in an engine - you gain performance, but it requires more power and produces more heat. That's why they're off if not in use.
Dedicated GPU blocks for video coding will also use more power, but are likely to be far more efficient than doing the operations with general code - as long as it's within their defined capabilities. (Similarly, if you had to do the equivalent of the AVX operations without the relevant hardware, it would probably use even more power than it currently does, at least over the extra time it took.)
I have found much confusion among the readers on hardware review websites when it comes to this issue. So I would like to present some information from Anandtech's Bench tool in order to clarify the situation for me and others hopefully:
The following two processors have these results under full package, full load:
- i7-6700K: 82.55 W - first mainstream desktop 14 nm processor, 95 W TDP according to Intel
- i9-9900K: 168.48 W - latest mainstream desktop 14 nm processor, 95 W TDP according to Intel
I assume that these two values were measured in unlimited mode. If this is the case, this means that the power listed above is when all cores/threads are loaded at full max turbo mode. So if you are expecting a certain level of performance given that Intel advertises 95W for both CPUs, then you are being misled and may not get the performance you are expecting when upgrading the CPU but not your cooling.
This is a CHANGE from the past in how Intel uses TDP, without telling the customer. It also highlights that Intel used to be conservative with cores/clocks/turbo when they had no competition and were able to shrink nodes between Nehalem and Skylake. Now they are PRETENDING that they can just double the cores and raise clocks on the same node and not increase power. Please correct me if I'm wrong, but it doesn't look like this is the case anymore.
That was a power-unlimited 2700X. Also, this article doesn't include any games. If it did, you'd see the 9900K still does much better, because games don't use all 8 cores.
2700X - 117.18 W max = 11.6% over stated TDP
9900K - 168.48 W max = 77.3% over stated TDP
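Those percentages follow directly from the Bench figures quoted above:

```python
# Percent over stated TDP, using the measured full-load package
# power numbers quoted in this thread (watts).
stated_tdp = {"2700X": 105.0, "9900K": 95.0}
measured_w = {"2700X": 117.18, "9900K": 168.48}

for chip in stated_tdp:
    over_pct = (measured_w[chip] - stated_tdp[chip]) / stated_tdp[chip] * 100
    print(f"{chip}: {over_pct:.1f}% over stated TDP")
# 2700X: 11.6% over stated TDP
# 9900K: 77.3% over stated TDP
```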
Don't forget, not everyone views gaming as the end-all-be-all of benchmarking. Would it be interesting to see if it affects gaming? Sure. It most certainly would affect those who game and stream at the same time.
Yeah, the 9900K is the latest and greatest when it comes to mainstream CPU's, but at its current pricing you're better off purchasing a more value oriented CPU such as the 8700K to get more bang for your buck. Intel has been losing their sh!t as of late ever since AMD's Ryzen has them on their heels.
The 9900K is awesome if you build a rig that can keep it cool when overclocked. But otherwise yeah an 8700K is a better choice or for bang for buck the 2700X is good
If you want value, stop playing games and get a part-time job so you can have more money in your pocket. The 9900K is only a few hundred dollars more. That's not much spread over years.
Found the Intel shill/apologist. Not everyone has unlimited funds to spend like that, and even if they could afford the CPU, what about the power to run it? This CPU is very power-hungry and expensive, and insulting users that don't have a lot of money (or have a better sense of how to properly budget it) won't change that fact.
Very true. He has to justify the reason for spending an arm and a leg on his/her new space heater. @4800z: while the CPU is $200 more expensive, you also need the $70+ cooler, you have to have the expensive Z390 motherboard that adds another $70 onto the build, your room is going to be warmer so that will increase your cooling costs, and drawing 77% more power means you are going to increase your electric bill. All these costs add up, and unless you are able to purchase this on a credit card with 0% interest or have saved up for a long time, the extra $400 is going to make a huge difference. Also, someone could go with the 2700X or 8700K and use the extra $400 for a nice upgrade on their build - where said person would have gotten an RTX 2080, now s/he can afford an RTX 2080 Ti.
He's not wrong. The 9900K is closer to Threadripper in many benchmarks than Ryzen is to the 9900K, and at the same time closer to Ryzen in cost. While billed as a gaming processor, the 9900K is great for content creators. Unless you have a specific need for HEDT platform capabilities (RAM or PCIe lanes), the 9900K would get the job done for less money.
And if you just wait until March of 2019, Ryzen 3rd generation will probably meet or beat the performance of the 9900k for $330. So your $200+ higher price will only be for 4-5 months of having superior performance before the new AMD will be considered faster.
"This rises to 44.2 if the processor is fixed to 95W" but there is no data point on the plot at that spot. A mouse-over labeling of that plot would be very-helpful.
I don't understand - the article title says "Fixing the Power for SFF", and yet no motherboards with the form factor typically used in SFF systems were actually tested. The motherboards listed were all ATX; no mini-ITX or even micro-STX boards were used.
Why not? Wouldn't this have provided valuable insight for those looking to purchase a SFF system, custom or DIY, to see which mfgs cap the TDP usage or let it go full range?
The author said he tested a MSI Vortex G3 small form factor desktop last year. Well, why not get some comments from ASRock, Gigabyte, ASUS, and MSI as to whether it's standard practice for them to limit CPUs to a specific power limit in their BIOS for those SFF boards.
For example, I'd love to know if that sweet-looking ASRock DeskMini GTX Z390 that was recently reviewed can take the i9-9900K rated at 95 W to the full "unlimited" power settings. I can put 450-600 W SFX/SFX-L PSUs into a SFF system, so I'd like to know if I can get the full performance out of the CPU or if the mfg locks the power draw in the BIOS.
Why is this article, and Anandtech in general, using 1000-unit OEM prices for Intel products, which are typically 15-20% less than the lowest retail price you can find, but using the highest retail prices you can find for AMD products? It seems like Anandtech is deliberately trying to make people think Intel products have any value when the reality is that they don't.
Good re-review. Although, Ian doesn't seem to want to call Intel out. This is OBVIOUSLY something initiated by Intel. If the 9900k were to run in spec it would be slower than the 2700x in a LOT of benchmarks. Intel couldn't have that for such a massive hot monolithic die. That's why all the shady sponsored benchmarks and having the processor way out of spec.
It's obvious Intel is hurting. Let's hope this brings about a competitive landscape again.
How do motherboards treat the non-K versions of these CPUs? When I built my mITX machine, I bought the non-K processor since there wouldn't be any overclocking going on. Just how locked is a locked CPU? Technically, this could be considered turboing rather than overclocking, and could be applied to the non-Ks.
It is possible that Intel won't release a non-k version of these chips, just because there won't be a significant enough performance benefit vs. the AMD 2700X if the chips were not being pushed to their absolute limit.
An interesting point that you make is that a 9900K constrained to 95W performs like an unconstrained 9900K for single threaded loads and an unconstrained 9700K for multithreaded loads.
The 9700K has half the threads, so that is an interesting claim, and I think the key is how does the 9700K perform when constrained to 95W.
Hyperthreading is supposed to be a big win to perf/W, thus I’d expect 9900K at 95W to be more efficient than the 9700K for the same perf, which is a definitive win too.
How does the 9700K at 95W perform in the multi threaded benchmarks?
I'm not so sure it's as big a win in most cases as it's cracked up to be, especially now that new security measures are required to prevent threads on the same core from using Spectre-class attacks to divulge secrets from timing based on data accessed by the other thread.
I'm thinking about replacing my proprietary Lenovo ThinkStation P300 motherboard. It is so limited, and Lenovo does not update their PC BIOS like other manufacturers to keep the PC up to date with new hardware. Lenovo's answer is to buy a new Lenovo PC! I just have to find a new one about the same size and I will jerry-rig it in.
So, basically, with the power limit in place, an 8C/16T i9-9900K is an 8C/16T i7-7820X in a different t-shirt riding in a different car; being uncapped is like giving it a pass to run on the autobahn.
Meanwhile, we know that the Ryzen 1800X had blown the 7820X out of the water. No more IPC increases for Intel, it seems.
Ian, the most amazing thing you have revealed in your benchmarks over the last few months was the crazy speedup of AVX-512 on 3D Particle Movement, which put all recent HEDT incarnations, both from Intel and AMD, deep into the mud. And in this piece you removed that test. Where is the 7900X in the second plot? Or was your test buggy, showing those crazy 5x improvements of even 7th gen over 9th gen when AVX-512 was on?
That would be interesting to see. While AVX can do wonders if the workload is suitable, it IS power hungry. I guess you would still end up with better performance ("tasks per kWh") even with the power limit, but hard to say by how much. I can see on my 8700 how much power at wall and core temp rises when it gets loaded with something AVX heavy.
Would you be so kind as to change the price of the 9900K in your graphs to the list prices for which it can actually be bought at Amazon and Newegg? Those prices are $579 and $569 respectively when not on sale. It is deceptive to keep listing it at $488.
" Alex Yee, a researcher from NWU and now software optimization developer, that I realized that he has optimized the software like crazy to get the best performance."
What CPU he optimized it for? Let me guess... the one he has in his room.
I think people are confusing WATTS USED with TDP (the amount of HEAT a chip puts off that your HSF or case etc. has to be able to accommodate to cool said chip). They are telling manufacturers of laptops, PCs, etc. how good their cooling design needs to be to keep the chip from heating up.
THERMAL DESIGN POWER (point might be more accurate, as some use it), is just as it sounds. THERMAL, er, uh, HEAT. Get it? I'm confused by everyone's confusion...LOL.
"TDP ≠ power draw?" "Not quite, no. TDP doesn't equate to how much power will be drawn by the component in question, but that doesn't mean you can't use the value provided as an estimation."
"TDP is not — however — a direct measure of how much power a component will draw, but it can be a good indicator."
So, don't expect watts PULLED from a wall to equal a quoted TDP. That isn't what it is, although it may come close to meaning it...ROFL.
If you had a 100% efficient chip (as someone else noted isn't possible...yet?), your chips TDP rating would be ZERO. It would not require anything to cool it. See the point?
https://en.wikipedia.org/wiki/Thermal_design_power "The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by a computer chip or component (often the CPU or GPU) that the cooling system in a computer is designed to dissipate under any workload."
Not exactly watts used right?
https://fullforms.com/TDP "What does TDP mean? Thermal Design Power (TDP), sometimes called Thermal Design Point, is a metric that is expressed in watts. TDP refers to the amount of power/heat a cooling system (like fan, heatsink) is expected to dissipate to prevent overheating."
Again, not watts used. I could point to another dozen, but people should get the point. Despite whatever Intel/AMD think it means year to year (ROFL), it's heat.
https://www.overclockers.com/forums/showthread.php... Same story from OC people. To each his own, I guess, but many seem confused about why things blow past TDP (because it's not WATTS). What is the chip's temp when it blows past those TDP numbers at stock settings? Is it 150 instead of 95, or whatever? I mean, if Dell or someone designs their slim PCs for 95 W, it likely won't work too well if it's going to 150 in a box that is designed to cool 95-100 W, right? Again, the definition used here really doesn't work IMHO (nor for everyone else I seem to look up...LOL). But hey, maybe my old A+ test was wrong (I'm old, maybe I'm just not recalling things correctly, and all the web is wrong too) :) I doubt it ;)
Perpetuum mobile IS impossible. And I don't want a CPU that's advertised as consuming 95 W to 110 W (give or take the PSU inefficiency and other losses along the way) to automatically overclock to 170 W because of review benchmarks. I want it to be set BY DEFAULT at max 95-110 W, and I also want it to do 5 GHz on all cores @ 95-110 W, as advertised :) Then I would pay 490€ for it.
Great article! I've been guessing about turbo values for years and this article answered it all!!
Of course we need more transparency from Intel, I suppose this info is left for marketers to release and they think we'd not understand, so they just leave it hidden.
It's great how the same chip can be used on a small form factor and on a big E-ATX case. Modern turbo makes manual overclocking almost not needed, left for watercooling or maybe some manual Vcore setting.
It's basically a matter of having a good case, a great cooler, and living in Europe, to be able to keep 4700 MHz all the time!
I wish Intel would release a top performing CPU with 4 core and no IGP, that would do 4.5GHz base and 5.5GHz All Core Turbo without watercooling. We don't need more than 4 cores.
duploxxx - Thursday, November 29, 2018 - link
Interesting in a way that there are so many people that always believe in benchmarking and that in real world all cores are always idle....The world of wonders. Artificial TDP, turbo modes and decreased frequency when running multiple cores. All to fool consumers and benchmark believers.
Very nice review. Now the question:
can this also be tested on a Ryzen 2700 and a 8700K and a 9900. Put all 3 albeit in a different setup on a stock or even reduced cooling device and see how they behave....
olde94 - Thursday, November 29, 2018 - link
I see why you are interested, but both the 2700 and 8700K are actually quite close in power use to their rated TDP. The issue was that the 9900K wasn't at all. If you see the power/performance graph on the last page, I think you have your answer ;)
notashill - Thursday, November 29, 2018 - link
It's almost a very nice graph but could really stand to have a few more CPUs labeled. I mean, even the literal headlining CPU that the entire article is about isn't labeled.
And trying to compare to the POV-Ray results earlier in the article, either a bunch of the CPUs are missing or the scale on the chart does not actually match the labels.
duploxxx - Thursday, November 29, 2018 - link
According to AnandTech's measurements:
2700X: 105W rated, burning 117.18W
8700K: 95W rated, burning 145.71W
9900K: 95W rated, burning 168.45W
So no, I am not kidding. Even the 8700K will have reduced performance with a real TDP limit vs. glorious benchmarking with the best-of-the-best mobo and cooling.
4800z - Thursday, November 29, 2018 - link
No, the 9900k and 8700k would have no lower performance in games. This only comes up when maxing out all cores for things like Cinebench.
TheinsanegamerN - Thursday, November 29, 2018 - link
Unless a game pushed those TDPs up. Games that can use many cores at once, like CIV and battlefield. You know, two minor franchises nobody would notice.....
rhysiam - Friday, November 30, 2018 - link
There's a big difference between starting to use 6-8 cores (like Civ & BF) and hitting all those cores with a heavy load for a sustained period. Show me a game benchmark that has the 9900K literally doubling the performance of a 7700K and then you'll have a game that can push the 9900K well past its 95W TDP.
Game streaming from a single PC would certainly do that, but I'd hope streamers are doing some research and choosing hardware carefully.
To be clear, I'm not defending Intel here, the tdp figure has become a joke, but we're a long way from this being a widespread issue for gaming workloads.
mr_fokyou - Thursday, November 29, 2018 - link
Not if you are streaming while gaming - then you are very much bottlenecking the 9900k if you force TDP limits.
bananaforscale - Saturday, December 1, 2018 - link
You are assuming no game uses all the cores (or enough that they go above TDP). The assumption is incorrect now and it will become more incorrect as quad core becomes the minimum.
Samus - Saturday, December 1, 2018 - link
I think it's totally insane a CPU can use 25-27% more power than its advertised rating. Sure, that includes more performance, but as a system builder this has got to be a liability if you are putting together, say, a little 1U rack for video encoding security camera feeds. You would use a specified CPU based on its performance AND advertised TDP rating, only to find out to GET that performance, it needs to go well beyond its TDP rating, which likely won't be possible in a tiny rack with a 1U cooler (I don't believe they make 1U coolers rated beyond 105W - and those are incredibly rare, most are 73w-88w)
DennisBaker - Tuesday, December 4, 2018 - link
I wanted to build a new PC on Black Friday, and I bought an i9-9900k. I never overclock and typically buy a locked/non-k CPU but couldn't wait until next year. I also always use a SFF case (Cooler Master Elite 130).
This is a great article, but I'm not sure how to actually set the bios for a 95w max cpu setting.
I have the Asrock z390 phantom gaming-itx/ac motherboard:
http://asrock.pc.cdn.bitgravity.com/Manual/Z390%20...
I've been googling without success and figured I would just ask here if there is a general guide for this.
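Not an answer for that specific ASRock BIOS (look for settings named along the lines of "Long/Short Duration Power Limit" under the CPU or OC menus), but for anyone on Linux the same cap can be approximated from the OS via the kernel's intel_rapl powercap interface. A rough sketch — the sysfs path below is the usual location for package 0, but it's an assumption here and worth verifying on your own system:

```python
# Illustration only: RAPL power limits are written in microwatts. The path
# below is the common location for CPU package 0, but may differ per system.
RAPL = "/sys/class/powercap/intel-rapl:0"
PL1_PATH = f"{RAPL}/constraint_0_power_limit_uw"  # long-term limit (PL1)

def watts_to_microwatts(watts):
    """Convert a watt figure to the microwatt integer RAPL expects."""
    return int(watts * 1_000_000)

# To cap the package at the rated 95 W (needs root):
#   echo 95000000 | sudo tee /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw
print(watts_to_microwatts(95))  # 95000000
```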
Targon - Thursday, November 29, 2018 - link
The real question is real world performance. If the goal is a SFF machine where you don't have closed loop coolers, and you have a small ITX motherboard and a small case, what will happen to temperatures in those cases. That is where you get heat related issues with performance.
We know that the 2700X hits 4.3GHz, 4.4GHz in some situations, but put it in an ITX case, benchmark it. Will the i9-9900k end up being all that much faster when you are pushing your machine, not just in games, but when you are using your system as an 8 core system where you have web browsers, mail, MS Word, plus other things open at the same time? With all of this running, then go to it with your benchmarks. Compare how well the 2700 and 2700X perform without overclocking and just use the defaults to allow boost/turbo to operate. Is the 9900k all that much faster when playing games with that other stuff still running in the background? Push it for an hour of nonstop use to make sure that you are seeing how well the chip will work in the real world (when used by enthusiasts).
At that point, will we see the average CPU speed be 4GHz, or will it be down in the 3.6-3.7GHz range? Would the Ryzen chips at that point be faster in a SFF case than the i9-9900k?
HStewart - Thursday, November 29, 2018 - link
Another factor here is that it's not just the CPU that uses the power; one must also include the power consumption of the GPU, which a lot of the time uses significantly more power than the CPU.
But in normal people's usage in the real world, the cores are not running as much. It requires the software to be designed multithreaded, or multiple applications running at the same time - the major problem is that video often has to be single threaded. In the real world not everyone is a hard core gamer.
One must also remember that previously we had more desktops and all had external GPUs - but now with most of the market - especially the business market - being mobile, the desire for high performance, high power systems is not as important. So power saving modes are important to customers.
This is not just important for PCs - just this morning, I got a message on my Samsung Note 8 that my settings were causing my phone to use more battery.
It really must be taken in the perspective of users' needs - for hard core gamers, more cores and external GPUs are important. But for most users using Office and such, an internal GPU and dual core is fine.
BurntMyBacon - Thursday, November 29, 2018 - link
@HStewart: "It really must be take in perspective of users needs - for hard core gamers - more cores, external GPUs are important. But for most users using Office and such, Internal GPU and dual core is fine."
Which group of users that you defined do you suppose is the target audience for the i9-9900K tested in this article?
HStewart - Thursday, November 29, 2018 - link
"Which group of users that you defined do you suppose is the target audience for the i9-9900K tested in this article?"
Yes I realize that - but it appears that people in this category tend to believe they are the only category. Also not all hardcore gamers are overclockers. I would say I've done a lot of gaming in my life and even at 57 I still do. But all that time, unless done by the manufacturer, I have really not done it. I believe both my XPS 13 2-in-1 and XPS 15 2-in-1 have some built-in overclocking but it is controlled by the system.
All I am saying is not everyone overclocks or is a hardcore gamer.
Targon - Thursday, November 29, 2018 - link
You don't need to manually overclock to enjoy the benefits of how long the processor can run at turbo speeds, vs. base speeds. If a chip can turbo to 5GHz all the time due to good cooling, then that will mean that even without manually overclocking, that CPU will have a much higher performance than lower tier chips. On the other hand, if the cooling isn't very good, then it will stay at base speeds most of the time.
Small Form Factor....the beauty of having a small machine. If it also means that the performance will be limited due to cooling, then why bother paying for a faster processor when a slower processor will be almost as fast at half the price?
What many want to see are real world situations. People do not buy a 9900k if they don't want high performance, even if they do not manually overclock. So, 8 core/16 thread, because why pay for that if 4 core/8 thread, or 6 core/12 thread will perform just as well if not better? Same case size, will the 9900k be faster than a Ryzen 7 2700X in the same SFF case if the 9900k can't be cooled well enough to keep the chip running faster than base speeds? What would you do if the 2700X, which doesn't bench as well, were actually better at holding turbo/boost speeds in a SFF environment? Do you expect a SFF machine to have a discrete video card(which Intel chips don't necessarily need, even if the people who buy a 9900k will almost always put one in)?
Laptops are not the target of this article(no 9900k has ever been put into a laptop), so laptop boost/turbo results will be a bit more difficult because the design of the laptop itself won't allow a fair apples to apples comparison, unless you could swap the motherboards/processors while keeping the same motherboard/cooling.
HStewart - Thursday, November 29, 2018 - link
I understand laptops are not the target of this article - but some crazy laptop makers like to put desktop components into perverted laptops.
Like it or not, this industry is moving away from desktop components, and not just laptops - all-in-ones are a perfect example. The closest thing that Apple has to a desktop is the Mac Mini. In ways even servers are changing - blades are a good example.
As far as SFF is concerned, mobile chips are ideal for it - and a solution like EMIB is perfect for increasing graphics performance - except the GPU in my Dell XPS 15 2-in-1 is just not on par with Nvidia - Intel made a bad choice teaming up with AMD on it - don't get me wrong about the iGPU - it is awesome and better than older generation Nvidias like the 860m
Manch - Friday, November 30, 2018 - link
And there it is. I was wondering how you were going to steer towards bashing AMD. LOL
TheinsanegamerN - Thursday, November 29, 2018 - link
When you are not using the iGPU, it is powergated off. It isn't using any power, or if it is, it is minute to the point where it doesn't matter.
People have been saying, for years, that the iGPU was a detriment to OCing and power usage. The existence of HEDT has proven that idea wrong many, many, many times over.
Icehawk - Thursday, November 29, 2018 - link
How does HEDT prove that an iGPU isn't a detriment to OCing or power usage? One might be able to argue that the dead silicon provides some sinking & surface area.
Alexvrb - Thursday, November 29, 2018 - link
Yes yes, all those office workers running a 9900K would barely notice. Seriously, man? The whole reason this issue came to light was because gamers and other demanding users complained that the processor at DEFAULT settings on pretty much any retail board was annihilating TDP, even more noticeably so than their previous flagship. Journalists are lagging well behind, and when they DO bother looking into it, it's very much a "yeah it's true, shrug" article.
Gastec - Wednesday, June 19, 2019 - link
Well said, "shrugging". Automagically overclocked CPUs do make benchmark graphs look better and CPUs sell better. Reminds me of one of AngryJoe's videos about a certain MMO: "Oh, you want more? That would be $15 please! $15 more, please! That would be another $15!" Same with Intel: you want more performance, that would be 75% to 100% more watts, please! Yeah, I get it. Want 10 GHz on all cores? 1000 W, what's the problem?
koaschten - Thursday, November 29, 2018 - link
Check https://www.youtube.com/watch?v=kmAWqyHdebI
LindseyLopez - Wednesday, December 19, 2018 - link
no
nicolaim - Thursday, November 29, 2018 - link
Missing link to TDP article on first page.
Exodite - Thursday, November 29, 2018 - link
Thanks for this!
It would be interesting, though perhaps not entirely related to this article in particular, to get a comparison on actual power draw and load temperatures as well. Similar to what you provide in your usual CPU reviews, to get a fair comparison to both the "unlocked" 9900K as well as the other slew of processors in the bench.
I'd imagine the 9900K would look much better on those numbers, though obviously worse on performance, when actually adhering to TDP.
Wingartz - Thursday, November 29, 2018 - link
*You can read it all [here], although what it boils down to is this diagram*
[here] doesn't have a link to the article
:nudge> - Thursday, November 29, 2018 - link
For easy comparisons it's great to see Geekbench making an appearance.
Alexvrb - Friday, November 30, 2018 - link
*worthless comparisons, barely better than Dhrystone and Whetstone
SirMaster - Thursday, November 29, 2018 - link
How many people are actually buying a $500 CPU and capping its power limit to 95W?
Can't be many. How about buying a different CPU if you plan on capping its power limit like that.
I have an SFF mini-ITX build and my old overclocked haswell CPU is running at like 180W and staying cool just fine.
schujj07 - Thursday, November 29, 2018 - link
You are missing the entire point of the article. This is a follow-up to how Intel rates TDP for their CPUs. Intel's TDP is for the base clock only, and this was to show what the performance would be if they had TDP meaning the absolute max power draw of the CPU. Right now the i9-9900k uses over 160W of power in its out-of-box configuration that most people use. If you buy a CPU cooler that is rated for say 125W thinking you will be covered since it is a "95W" CPU, you will not be getting the performance that you are seeing in professional benchmarks. AMD on the other hand has their TDP being the max power draw of the CPU. Exception being the 2700X that hits like 110W in reviews I have seen. Therefore if you buy a 125W cooler for the 2700X you will get the performance you are expecting.
4800z - Thursday, November 29, 2018 - link
The 2700x can't go faster even if you gave it more power and a more expensive cooler. No one has been able to materially overclock the 2700x.
Hul8 - Thursday, November 29, 2018 - link
It's not about OC, but the experience out of the box.
Out of the box, AMD very closely follows TDP, going over by 5 - 10 W at the most.
Intel motherboard manufacturers ignore Intel guidelines and allow the CPU to boost ad infinitum (instead of the Intel spec 8 seconds). This means that *out of the box*, a CPU rated 95 W will require a 145 - 160 W cooler when running 100% on all cores, or it will throttle.
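To make the Tau mechanic concrete, here's a toy simulation of the PL1/PL2/Tau scheme. The numbers are Intel's published defaults for the 9900K, but the moving-average model is a simplification of the real exponentially weighted power budget — treat it as a sketch of the shape, not the silicon:

```python
# Toy model of Intel's power limits: the chip may draw up to PL2 while the
# running average power (time constant TAU) is still below PL1; once the
# average catches up, it must drop back to PL1. Simplified illustration only.

PL1, PL2, TAU = 95.0, 119.0, 8.0  # watts, watts, seconds (Intel defaults)

def simulate(requested, dt=1.0, tau=TAU):
    """Return the per-interval actual power for a list of requested draws."""
    avg, out = 0.0, []
    for want in requested:
        power = min(want, PL2) if avg < PL1 else min(want, PL1)
        alpha = dt / tau                       # exponential moving average
        avg = (1 - alpha) * avg + alpha * power
        out.append(power)
    return out

trace = simulate([160.0] * 30)  # CPU "wants" 160 W for 30 seconds
print(trace[0], trace[-1])  # 119.0 95.0 -- turbos at first, settles to TDP
```

Crank `tau` up to effectively infinity, as the boards ship, and the average never catches up within any realistic window — the chip sits at PL2 for hours instead of 8 seconds, which is exactly the 145 - 160 W behavior described above.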
Hul8 - Thursday, November 29, 2018 - link
Obviously once you run an i9-9900K at 150 W, you will definitely get much better performance, but that is contingent on good cooling.
Targon - Thursday, November 29, 2018 - link
And you won't get great cooling in a SFF machine.Alexvrb - Friday, November 30, 2018 - link
That's the main point. The reviews and benches all are testing it on "unlimited", which makes it look better than it actually IS when you're TDP-limited.
A lesser issue is that when you're NOT TDP limited, it eats a crapton more power, runs hotter, and dumps more heat into your system than you were anticipating based on TDP.
The cake is a lie. I mean TDP.
HStewart - Thursday, November 29, 2018 - link
I would think that people that overclock a system would understand that running at higher than base clock means that you need a more powerful power supply - plus they likely have an external GPU that uses a lot of power, and in a lot of cases more than the CPU itself.
Hul8 - Thursday, November 29, 2018 - link
Problem here is that it's not the user overclocking the system - it's the motherboard with default UEFI settings increasing Tau to (close to) infinity, thereby allowing the CPU to boost for hours.
Beginners won't even be aware that they're not getting the most of their expensive CPU, since there is no way for them to know to anticipate 145 - 160 W of thermal dissipation.
Hul8 - Thursday, November 29, 2018 - link
ASUS is the only motherboard manufacturer whose Z390 boards can be configured to obey the TDP, and even there you first need to enable XMP and then select "Intel" instead of "ASUS" in the prompt that appears. If you don't touch XMP (as many beginners are likely to), you'll run with grossly extended Tau out of the box.
HStewart - Thursday, November 29, 2018 - link
I would expect that if the motherboard company is making the settings higher than recommended by the processor company, they should inform the customer that they recommend a larger power supply. This assumes I understand the entire motherboard settings of desktop machines lately - it's been slightly over 10 years since I built a desktop machine, and it was a Supermicro dual Xeon.
Hul8 - Friday, November 30, 2018 - link
The fact that all motherboard vendors do the exact same thing could lead one to draw the conclusion that the practice is actually mandated and suggested by Intel - unofficially of course.
Higher benchmark results will look good especially for casual readers (who only look at certain performance graphs and skip the power consumption numbers), all the while allowing Intel to market them as "95 W" parts.
Alexvrb - Friday, November 30, 2018 - link
If Intel didn't like this practice they could hardcode behavior in the CPU itself. Oh wait, they DO... and they allow this because it makes them bench better. Meanwhile look at their cheaper locked "95W" models, I bet you won't see them auto-overclocking to 150W+ even with the board defaulting to "unlimited" TDP.
Gastec - Wednesday, June 19, 2019 - link
It should be ILLEGAL for motherboard makers to go out of Intel's specifications by default. All overclocking should be entirely the responsibility of the user.rsandru - Thursday, November 29, 2018 - link
It's not capping, it's running the CPU according to the Intel datasheet specification.
Operating the component beyond specification is usually called overclocking, which is nice and all but doesn't allow an unbiased comparison of the different products.
LTC8K6 - Thursday, November 29, 2018 - link
Why not clamp it to the Intel spec?TheinsanegamerN - Thursday, November 29, 2018 - link
Because motherboards don't do that; they are letting the 9900k run wild.
Alexvrb - Friday, November 30, 2018 - link
With Intel's blessing. If Intel wasn't onboard, they'd clamp the behavior on-chip, and you'd have to manually overclock to override TDP for any length of time (for unlocked chips, anyway).
Anyway, my prediction is that if Intel continues this practice, AMD just starts following suit more and more as time goes on. We'll see.
djayjp - Thursday, November 29, 2018 - link
So many of these tests would run better (faster and with much greater efficiency) on a highly parallel GPU instead.
PeachNCream - Thursday, November 29, 2018 - link
You may have missed the point of the article.
melgross - Thursday, November 29, 2018 - link
What I find interesting about all of this is that with mobile ARM chips the exact same characteristics are called throttling instead. Possibly we should get these naming conventions together? Either x86 chips throttle, as mobile ARM chips do, or mobile ARM chips have turbo mode too.
edzieba - Thursday, November 29, 2018 - link
The difference is that the ARM chips are labelled with the short-term frequencies and performance, while Intel puts the steady state values on the box. Motherboard manufacturers throw the box values right out the window, but if Intel were to dictate /those/, the wailing and gnashing of teeth from the peanut gallery would be cacophonous.
melgross - Thursday, November 29, 2018 - link
It's just a matter of semantics. It doesn't have to be spelled out.
Targon - Thursday, November 29, 2018 - link
There are actually three primary states. Base clocks, boost or turbo speeds, and then you can get thermal throttle, which will actually lower the speed below the base clock speed. If the i9-9900k has a base of 3.6GHz, a turbo that goes up to 5GHz, but you have poor cooling, you may be seeing the CPU sticking to that 3.6GHz, or even below it if the temperatures get too high.
This is where those very thin laptops may have Ryzen versions performing better than Intel, because of the temperatures keeping the chip running at or even below base speeds. For a small form factor machine, will the 9900k be running at base speeds ALL THE TIME due to temperatures/TDP/cooling? In the same small form factor case, would a Ryzen 7 2700X end up having a similar level of performance after several hours (to allow the heat generation to stabilize)? If you start when things are COLD, you could turn the machine on and run benchmarks, and see better numbers than if the machine were already on and you had been running intensive applications for several hours prior to running the benchmarks.
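Those three states can be sketched as a few lines of pseudo-logic. The thresholds and the throttle offset below are invented for illustration — real parts use firmware-defined limits, not these numbers:

```python
# Illustrative only: map temperature and power budget to a clock state.
BASE, TURBO = 3.6, 5.0  # GHz, the i9-9900K figures quoted above

def effective_clock(temp_c, thermal_limit_c=100.0, power_headroom=True):
    if temp_c >= thermal_limit_c:   # thermal throttle: below base clock
        return round(BASE - 0.4, 1)
    if power_headroom:              # boost/turbo while the power budget lasts
        return TURBO
    return BASE                     # sustained load: the rated base clock

print(effective_clock(70))                        # 5.0 (cool: full turbo)
print(effective_clock(85, power_headroom=False))  # 3.6 (budget spent: base)
print(effective_clock(101))                       # 3.2 (too hot: throttled)
```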
eastcoast_pete - Thursday, November 29, 2018 - link
@Ian: Thanks for this informative test and review. One comment, one question/request. Comment: I continue to be struck by Intel's prowess when AVX512/AVX2 comes into play. I am also (negatively) impressed by the thermal load the use of these instructions causes. The reduction in performance when using AVX512/AVX2 under strict adherence to a TDP of 95 W speaks volumes. Did you ever have a chance to ask Intel why running AVX makes their chips so power-hungry? Even if not, I'd appreciate your thoughts on why AVX makes Intel's chips run so hot.
Here's my question/request: I know that you/Anandtech have a large dataset on x264 video encoding speeds. However, especially for i7s and AMD's six-core and up Zen chips, I'd like to know how they fare when encoding/transcoding a 2160p 10bit video, as that is now in increasing demand, and really makes the processor sweat (and slow down, a lot). Any chance you and your colleagues can add that to the encoding tests? If space is an issue, I suggest dumping the x264 720p speed test; even a lowly Athlon or Celeron chip does that quite well, and at good speed.
HStewart - Thursday, November 29, 2018 - link
I believe you can turn off AVX512 in the BIOS - it is used in special applications that need the speed.
Also, I would think that external GPUs are another major factor in considering power requirements on a system.
I don't believe there is any extra power need or reduction in top speed for AVX2 - only that AVX512 uses extra power on the system, and top frequencies are reduced if it is being used.
One thing about AVX2 - on Intel it is 256-bit and AMD currently has dual 128-bit - not sure about the new Zens coming out next year. But at least with PowerDirector, it gives you a significant performance increase.
GreenReaper - Saturday, December 1, 2018 - link
It's pretty simple, really: the more data the CPU has to process in parallel, the more horsepower it uses. It's like doubling or quadrupling the number of active cylinders in an engine - you gain performance, but it requires more power and produces more heat. That's why they're off if not in use.
Dedicated GPU blocks for video coding will also use more power, but are likely to be far more efficient than doing the operations with general code - as long as it's within their defined capabilities. (Similarly, if you had to do the equivalent of the AVX operations without the relevant hardware, it would probably use even more power than it currently does, at least over the extra time it took.)
Davenreturns - Thursday, November 29, 2018 - link
I have found much confusion among the readers on hardware review websites when it comes to this issue. So I would like to present some information from Anandtech's Bench tool in order to clarify the situation for me and others, hopefully:
Looking at the CPU Power Bench
https://www.anandtech.com/bench/CPU-2019/2194
The following two processors have these results under full package, full load:
i7-6700k 82.55W First mainstream desktop 14 nm processor, 95W TDP according to Intel
i9-9900k 168.48W Latest mainstream desktop 14 nm processor, 95W TDP according to Intel
I assume that these two values were measured in unlimited mode. If this is the case, this means that the power listed above is when all cores/threads are loaded at full max turbo mode. So if you are expecting a certain level of performance given that Intel advertises 95W for both CPUs, then you are being misled and may not get the performance you are expecting when upgrading the CPU but not your cooling.
This is a CHANGE from the past in how Intel uses TDP without telling the customer. It also highlights that Intel used to be conservative with cores/clocks/turbo when they had no competition and were able to shrink nodes between Nehalem and Skylake. Now they are PRETENDING that they can just double the cores and raise clocks on the same node and not increase power. Please correct me if I'm wrong, but it doesn't look like this is the case anymore.
AlyxSharkBite - Thursday, November 29, 2018 - link
Really interesting how when you limit it to 95W it’s really close to the 2700X.
4800z - Thursday, November 29, 2018 - link
A power unlimited 2700x. Also this article doesn't include any games. If it did you'd see the 9900k still does much better, because games don't use all 8 cores.
schujj07 - Thursday, November 29, 2018 - link
2700X - 117.18W Max = 11.6% over stated TDP
9900K - 168.48W Max = 77.3% over stated TDP
Don't forget, not everyone views gaming as the end-all-be-all form of benchmarking. Would it be interesting to see if it affects gaming? Sure. It most certainly would affect those who game and stream at the same time.
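For what it's worth, those overshoot percentages are easy to reproduce from the Bench figures quoted earlier in the thread:

```python
# Recompute the quoted overshoots: measured full-load package power vs. the
# vendor's rated TDP (figures from AnandTech's Bench, quoted above).
rated    = {"2700X": 105.0, "9900K": 95.0}
measured = {"2700X": 117.18, "9900K": 168.48}

for chip in rated:
    over = (measured[chip] / rated[chip] - 1) * 100
    print(f"{chip}: {over:.1f}% over rated TDP")
# 2700X: 11.6% over rated TDP
# 9900K: 77.3% over rated TDP
```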
urbanman2004 - Thursday, November 29, 2018 - link
Yeah, the 9900K is the latest and greatest when it comes to mainstream CPUs, but at its current pricing you're better off purchasing a more value-oriented CPU such as the 8700K to get more bang for your buck. Intel has been losing their sh!t as of late ever since AMD's Ryzen has them on their heels.
AlyxSharkBite - Thursday, November 29, 2018 - link
The 9900K is awesome if you build a rig that can keep it cool when overclocked. But otherwise yeah, an 8700K is a better choice, or for bang for buck the 2700X is good.
4800z - Thursday, November 29, 2018 - link
It's not that hard to cool it. Just need a big noctua or an AIO.TheinsanegamerN - Thursday, November 29, 2018 - link
It still runs high 90s C under OC on such coolers. The only way to keep it running at a 5 GHz OC is with a 360mm rad, and even then just barely.
The 9900k is the hardest Intel to cool since the Pentium D.
4800z - Thursday, November 29, 2018 - link
If you want value, stop playing games and get a part time job so you can have more money in your pocket. The 9900k is only a few hundred dollars more. That's not much spread over years.
sa666666 - Thursday, November 29, 2018 - link
Found the Intel shill/apologist. Not everyone has unlimited funds to spend like that, and even if they could afford the CPU, what about the power to run it? This CPU is very power-hungry and expensive, and insulting users that don't have a lot of money (or have a better sense of how to properly budget it) won't change that fact.schujj07 - Thursday, November 29, 2018 - link
Very true. Has to justify the reason for spending an arm and a leg on his/her new space heater. 4800z, while the CPU is $200 more expensive, you also need the $70+ cooler, you have to have the expensive Z390 motherboard that adds another $70 onto the build, your room is going to be warmer so that will increase your cooling costs, and drawing 77% more power means you are going to increase your electric bill. All these costs add up, and unless you are able to purchase this on a credit card with 0% interest or have saved up for a long time, the extra $400 is going to make a huge difference. Also, someone could go with the 2700X or 8700K and use the extra $400 for a nice upgrade on their build: where said person would have gotten an RTX2080, now s/he can afford an RTX2080Ti.
TEAMSWITCHER - Thursday, November 29, 2018 - link
He's not wrong. The 9900K is closer to Threadripper in many benchmarks than Ryzen is to the 9900K, and at the same time closer to Ryzen in cost. While billed as a gaming processor, the 9900K is great for content creators. Unless you have a specific need for HEDT platform capabilities (RAM or PCIe lanes) the 9900K would get the job done for less money.
Targon - Thursday, November 29, 2018 - link
And if you just wait until March of 2019, Ryzen 3rd generation will probably meet or beat the performance of the 9900k for $330. So your $200+ higher price will only be for 4-5 months of having superior performance before the new AMD will be considered faster.
HollyDOL - Saturday, December 1, 2018 - link
I know a person holding to a similar philosophy. He still runs Athlon XP, always waiting for the next generation beating the current one.
Rukur - Monday, December 3, 2018 - link
The 9900K comes out of the box with 5GHz so it's going to win on games. The price is a game stopper, though.
woggs - Thursday, November 29, 2018 - link
"This rises to 44.2 if the processor is fixed to 95W" but there is no data point on the plot at that spot. A mouse-over labeling of that plot would be very helpful.
romrunning - Thursday, November 29, 2018 - link
I don't understand - the article title says "Fixing the Power for SFF", and yet no motherboards with the form factor typically used in SFF systems were actually tested. The motherboards listed were all ATX; no mini-ITX or even micro-STX boards were used.
Why not? Wouldn't this have provided valuable insight for those looking to purchase a SFF system, custom or DIY, to see which mfgs cap the TDP usage or let it go full range?
The author said he tested a MSI Vortex G3 small form factor desktop last year. Well, why not get some comments from ASRock, Gigabyte, ASUS, and MSI as to whether it's standard practice for them to limit CPUs to a specific power limit in their BIOS for those SFF boards.
For example, I'd love to know if that sweet-looking ASRock DeskMini GTX Z390 that was recently reviewed can take the i9-9900k rated at 95W to the full "unlimited" power settings. I can put 450-600W SFX/SFX-L PSUs into a SFF system, so I'd like to know if I can get the full performance out of the CPU or if the mfg locks the power draw in the BIOS.
SaturnusDK - Thursday, November 29, 2018 - link
Why is this article, and Anandtech in general, using 1000-unit OEM prices for Intel products, which are typically 15-20% less than the lowest retail price you can find, but using the highest retail prices you can find for AMD products? It seems like Anandtech is deliberately trying to make people think Intel products have any value when the reality is that they don't.
Rezurecta - Thursday, November 29, 2018 - link
Good re-review. Although, Ian doesn't seem to want to call Intel out. This is OBVIOUSLY something initiated by Intel. If the 9900k were to run in spec it would be slower than the 2700x in a LOT of benchmarks. Intel couldn't have that for such a massive hot monolithic die. That's why all the shady sponsored benchmarks and having the processor way out of spec.
It's obvious Intel is hurting. Let's hope this brings about a competitive landscape again.
kernel-panic - Thursday, November 29, 2018 - link
It would be nice if somewhere you let readers know what TDP, PL1 and PL2 mean. I enjoy this kind of article but I'm not familiar with the terminology.
Icehawk - Thursday, November 29, 2018 - link
It's in the (by now) linked article at the very beginning.
Mr Perfect - Thursday, November 29, 2018 - link
How do motherboards treat the non-K versions of these CPUs? When I built my mITX machine, I bought the non-K processor since there wouldn't be any overclocking going on. Just how locked is a locked CPU? Technically, this could be considered turboing rather than overclocking, and so could be applied to the non-Ks.
Targon - Sunday, December 2, 2018 - link
It is possible that Intel won't release a non-K version of these chips, just because there won't be a significant enough performance benefit vs. the AMD 2700X if the chips were not being pushed to their absolute limit.
stux - Thursday, November 29, 2018 - link
An interesting point that you make is that a 9900K constrained to 95W performs like an unconstrained 9900K for single-threaded loads and an unconstrained 9700K for multithreaded loads. The 9700K has half the threads, so that is an interesting claim, and I think the key question is how the 9700K performs when constrained to 95W.
Hyperthreading is supposed to be a big win for perf/W, thus I'd expect the 9900K at 95W to be more efficient than the 9700K at the same performance, which would be a definitive win too.
How does the 9700K at 95W perform in the multithreaded benchmarks?
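The comparison stux is asking for boils down to performance per watt. A minimal sketch of the arithmetic, using made-up placeholder scores (not measured results from the article):

```python
# Hypothetical perf/W comparison sketch. The scores below are invented
# placeholders purely to illustrate the calculation.

def perf_per_watt(score: float, package_watts: float) -> float:
    """Benchmark score per watt of sustained package power."""
    return score / package_watts

# Placeholder multithreaded scores, both chips capped at 95 W.
chips = {
    "9900K @ 95W (8C/16T)": (1950.0, 95.0),  # hypothetical score
    "9700K @ 95W (8C/8T)":  (1800.0, 95.0),  # hypothetical score
}

for name, (score, watts) in chips.items():
    print(f"{name}: {perf_per_watt(score, watts):.1f} points/W")
```

Since both parts would sit at the same 95W cap, the perf/W comparison reduces to comparing raw scores, which is exactly why the capped 9700K number would answer the hyperthreading question.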
GreenReaper - Saturday, December 1, 2018 - link
I'm not so sure it's as big a win in most cases as it's cracked up to be, especially now that new security measures are required to prevent threads on the same core from using Spectre-class attacks to divulge secrets via timing based on data accessed by the other thread.
stux - Thursday, November 29, 2018 - link
Apparently a post with an 'at mark' in it is spam...
Harry_Wild - Thursday, November 29, 2018 - link
I'm thinking about replacing my proprietary Lenovo ThinkStation P300 motherboard. It is so limited, and Lenovo does not update their PC BIOSes like other manufacturers do to keep the PC up to date with new hardware. Lenovo's answer is to buy a new Lenovo PC! I just have to find a new board about the same size and I will jury-rig it in.
bairlangga - Thursday, November 29, 2018 - link
So, basically, with the power limit in place the 8C/16T i9-9900K is an 8C/16T i7-7820X in a different t-shirt riding a different car; being uncapped is like giving it a pass to run on the autobahn. And we've known that the Ryzen 1800X blew the 7820X out of the water. No more IPC increment for Intel, it seems.
SanX - Thursday, November 29, 2018 - link
Ian,
The most amazing thing you have revealed in your benchmarks over the last few months was the crazy speedup from AVX-512 on 3D Particle Movement, which put all recent HEDT incarnations from both Intel and AMD deep into the mud. And in this article you removed that test. Where is the 7900X in the second plot? Or was your test buggy, showing those crazy 5x improvements of even the 7th gen over the 9th gen when AVX-512 was on?
HollyDOL - Friday, November 30, 2018 - link
That would be interesting to see. While AVX can do wonders if the workload is suitable, it IS power hungry. I guess you would still end up with better efficiency ("tasks per kWh") even with the power limit, but it's hard to say by how much.
I can see on my 8700 how much the power at the wall and the core temperature rise when it gets loaded with something AVX-heavy.
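For anyone wanting to see the same effect without a wall meter: on Linux, average package power can be estimated from the RAPL energy counters the kernel exposes under powercap. A rough sketch, assuming the standard sysfs paths (availability varies by CPU and kernel, so it is guarded):

```python
# Sketch: estimate average CPU package power from the intel-rapl powercap
# interface by sampling the cumulative energy counter twice.
import time
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package 0 domain

def read_energy_uj(rapl_dir: Path = RAPL) -> int:
    """Cumulative package energy in microjoules (wraps eventually)."""
    return int((rapl_dir / "energy_uj").read_text())

def avg_power_w(e_start_uj: int, e_end_uj: int, seconds: float) -> float:
    """Average power in watts between two energy readings."""
    return (e_end_uj - e_start_uj) / 1e6 / seconds

if RAPL.exists():
    e0 = read_energy_uj()
    time.sleep(1.0)  # run your AVX-heavy workload here instead of sleeping
    e1 = read_energy_uj()
    print(f"package power: {avg_power_w(e0, e1, 1.0):.1f} W")
else:
    print("intel-rapl powercap interface not available on this system")
```

Sampling this before and during an AVX-512 load makes the power cost HollyDOL describes directly visible, without external hardware.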
xTRICKYxx - Friday, November 30, 2018 - link
Once the 9900K is at 95W, the 2700X is looking far more competitive.
sharath.naik - Friday, November 30, 2018 - link
You missed some Cinebench scores.
Death666Angel - Friday, November 30, 2018 - link
Wouldn't mind some tuned undervolted tests for the top consumer processors out right now. :)
Consumer1 - Friday, November 30, 2018 - link
Would you be so kind as to change the price of the 9900K in your graphs to the prices for which it can actually be bought at Amazon and Newegg? Those prices are $579 and $569 respectively when not on sale. It is deceptive to keep listing it at $488.
TechSideUp - Sunday, December 2, 2018 - link
Can you show me where you're getting this i9-9900K for $488? Lol
peevee - Monday, December 3, 2018 - link
"Alex Yee, a researcher from NWU and now software optimization developer, that I realized that he has optimized the software like crazy to get the best performance."
Which CPU did he optimize it for? Let me guess... the one he has in his room.
tviceman - Monday, December 3, 2018 - link
I'd like to see what kind of performance gains may be had with an undervolt when TDP limited.
TheJian - Tuesday, December 4, 2018 - link
I think people are confusing WATTS USED with TDP (the amount of HEAT a chip puts off that your HSF, case, etc. has to be able to accommodate to cool said chip). They are telling manufacturers of laptops, PCs, etc. how good their cooling design needs to be to keep the chip from overheating.
THERMAL DESIGN POWER ("point" might be more accurate, as some use it) is just what it sounds like. THERMAL, er, uh, HEAT. Get it? I'm confused by everyone's confusion... LOL.
https://www.windowscentral.com/what-tdp-and-why-sh...
Perhaps a bit better explanation than AnandTech is providing. Maybe they need an A+ course?
"TDP ≠ power draw?"
"Not quite, no. TDP doesn't equate to how much power will be drawn by the component in question, but that doesn't mean you can't use the value provided as an estimation."
"TDP is not — however — a direct measure of how much power a component will draw, but it can be a good indicator."
So, don't expect watts PULLED from a wall to equal a quoted TDP. That isn't what it is, although it may come close to meaning it...ROFL.
If you had a 100% efficient chip (which, as someone else noted, isn't possible... yet?), your chip's TDP rating would be ZERO. It would not require anything to cool it. See the point?
https://en.wikipedia.org/wiki/Thermal_design_power
"The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by a computer chip or component (often the CPU or GPU) that the cooling system in a computer is designed to dissipate under any workload."
Not exactly watts used right?
https://fullforms.com/TDP
"What does TDP mean?
Thermal Design Power (TDP), sometimes called Thermal Design Point, is a metric that is expressed in watts. TDP refers to the amount of power/heat a cooling system (like fan, heatsink) is expected to dissipate to prevent overheating."
Again, not watts used. I could point to another dozen, but people should get the point. Despite whatever Intel/AMD think it means year to year (ROFL), it's heat.
https://www.overclockers.com/forums/showthread.php...
Same story from the OC people. To each his own, I guess, but many seem confused about why things blow past TDP (because it's not WATTS). What is the chip's temp when it blows past those TDP numbers at stock settings? Is it 150 instead of 95, or whatever? I mean, if Dell or someone designs their slim PCs for 95W, it likely won't work too well if the chip is putting out 150W in a box that is designed to cool 95-100W, right? Again, the definition used here really doesn't work IMHO (nor per everyone else I seem to look up... LOL). But hey, maybe my old A+ test was wrong (I'm old, maybe I'm just not recalling things correctly, and all the web is wrong too) :) I doubt it ;)
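TheJian's cooling-budget point can be put in numbers with a back-of-the-envelope steady-state estimate: case/die temperature rises roughly linearly with dissipated power through the cooler's thermal resistance. The figures below are illustrative assumptions, not measurements from the article:

```python
# Back-of-the-envelope steady-state temperature estimate:
#   T ≈ T_ambient + P * R_theta
# where R_theta is the cooling solution's thermal resistance in °C/W.
# All numbers here are hypothetical, chosen only to illustrate the point.

def steady_state_temp(t_ambient_c: float, power_w: float,
                      r_theta_c_per_w: float) -> float:
    """Approximate steady-state temperature for a given heat load."""
    return t_ambient_c + power_w * r_theta_c_per_w

R_THETA = 0.5   # hypothetical SFF cooling solution sized for ~95 W
AMBIENT = 25.0  # °C

print(steady_state_temp(AMBIENT, 95.0, R_THETA))   # 72.5 °C: comfortable
print(steady_state_temp(AMBIENT, 150.0, R_THETA))  # 100.0 °C: throttle territory
```

This is exactly why a chassis whose cooling was specced against a 95W TDP runs into trouble when the chip sustains 150W+: the extra 55W goes straight into temperature rise.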
Gastec - Wednesday, June 19, 2019 - link
A perpetuum mobile IS impossible. And I don't want a CPU that's advertised as consuming 95W to 110W (give or take the PSU inefficiency and other losses along the way) to automatically overclock to 170W because of review benchmarks. I want it to be set BY DEFAULT at a max of 95-110W, and I also want it to do 5GHz on all cores @ 95-110W, as advertised :) Then I would pay 490€ for it.
DennisBaker - Tuesday, December 4, 2018 - link
I wanted to build a new PC on Black Friday, and I bought an i9-9900K. I never overclock and typically buy a locked/non-K CPU, but I couldn't wait until next year. I also always use a SFF case (Cooler Master Elite 130).
This is a great article, but I'm not sure how to actually set the BIOS for a 95W max CPU setting.
I have the ASRock Z390 Phantom Gaming-ITX/ac motherboard:
http://asrock.pc.cdn.bitgravity.com/Manual/Z390%20...
I've been googling without success and figured I would just ask here if there is a general guide for this.
DennisBaker - Tuesday, December 4, 2018 - link
Set to:
Long Duration Power Limit: 95
Long Duration Maintained: Auto
Short Duration Power Limit: 95
Seems like that should work.
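One way to double-check that firmware settings like these actually took effect is to read back the limits the OS sees. On Linux, the kernel exposes the package power limits through the powercap interface. A hedged sketch, assuming the standard intel-rapl sysfs layout (not present on every system, so it is guarded):

```python
# Sketch: read back the long-term (PL1) and short-term (PL2) package
# power limits from the Linux powercap interface, to verify that BIOS
# power-limit settings actually took effect.
from pathlib import Path

RAPL = Path("/sys/class/powercap/intel-rapl:0")  # package 0 domain

def uw_to_w(microwatts: int) -> float:
    """Convert microwatts (the sysfs unit) to watts."""
    return microwatts / 1e6

def read_limits(rapl_dir: Path = RAPL) -> dict:
    """Return {constraint name: limit in W} for the constraints present."""
    limits = {}
    for n in (0, 1):  # constraint 0 = long_term (PL1), 1 = short_term (PL2)
        limit_f = rapl_dir / f"constraint_{n}_power_limit_uw"
        name_f = rapl_dir / f"constraint_{n}_name"
        if limit_f.exists() and name_f.exists():
            limits[name_f.read_text().strip()] = uw_to_w(int(limit_f.read_text()))
    return limits

if RAPL.exists():
    for name, watts in read_limits().items():
        print(f"{name}: {watts:.0f} W")
else:
    print("intel-rapl powercap interface not available on this system")
```

If the BIOS settings above stuck, both constraints should read back at 95 W; a sustained all-core load staying near 95 W at the wall (minus PSU losses) is the final confirmation.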
0ldman79 - Thursday, December 6, 2018 - link
I guess the 95W limit prevented whatever resource snag or thermal throttling issue was happening with the unlimited version. That would explain the benches where it won vs. the unlimited 9900K.
HikariWS - Thursday, December 13, 2018 - link
Great article! I've been guessing about turbo values for years and this article answered it all!!
Of course we need more transparency from Intel. I suppose this info is left for marketers to release, and they think we'd not understand it, so they just leave it hidden.
It's great how the same chip can be used on a small form factor and on a big E-ATX case. Modern turbo makes manual overclocking almost not needed, left for watercooling or maybe some manual Vcore setting.
It's basically a matter of having a good case, a great cooler, and living in Europe to be able to keep 4700MHz all the time!
I wish Intel would release a top-performing CPU with 4 cores and no IGP that would do a 4.5GHz base and 5.5GHz all-core turbo without watercooling. We don't need more than 4 cores.