Building an i9-9900K and RTX 2080 Ti system will cost a minimum of roughly USD $3,000. Computers are starting to get back to late-80s/early-90s prices, LOL (only the 35+ crowd will remember those days).
Yup. The 2080 Ti is, adjusted for inflation, considerably more expensive than the Matrox Millennium was in 1994, and the Millennium was the highest end for everything at the time, not only gaming. So yeah, what happened is that everything did get much cheaper, Chinese sweatshops & economies of scale and all, but it's the big corporations that reap it all.
What happened was that GPUs started using bleeding-edge processes and taking bigger slices of the wafer, with the associated lower yields and more waste. Voodoo was a toy using just over a million transistors on a half-micron process. On a die-size basis it was like fabricating a 486 DX4 in late '96. You could use all the spare capacity that was sitting around after the bleeding-edge guys matured the process and moved on.
A 2080 Ti is 700-something mm2. That's ridiculous. Of course it's going to be an expensive-ass chip, and Nvidia still needs its cut.
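To put rough numbers on why a huge die costs so much more per chip, here's a minimal sketch using a simple Poisson yield model; the defect density is an illustrative assumption, not a real foundry figure, and 754 mm2 is just the commonly reported TU102 size.

    import math

    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
        # Gross die estimate: wafer area / die area, minus an edge-loss correction.
        r = wafer_diameter_mm / 2
        return int(math.pi * r**2 / die_area_mm2
                   - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

    def poisson_yield(die_area_mm2, defects_per_cm2=0.1):
        # Fraction of candidate dies with zero killer defects (assumed defect density).
        return math.exp(-(die_area_mm2 / 100) * defects_per_cm2)

    for area in (100, 754):  # a small mainstream die vs. a TU102-class die
        good = dies_per_wafer(area) * poisson_yield(area)
        print(f"{area} mm2: {dies_per_wafer(area)} candidates, ~{good:.0f} good dies per wafer")

Even with an optimistic defect density, the big die yields an order of magnitude fewer good chips per wafer, before any binning.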
The question is more who really needs stronger CPUs anymore. With a high-end GPU you already get a lot of (CUDA) power for transcoding, and the GPU is still the limiting factor for gaming. Office doesn't need it either. In the past it was always video editing pushing me to a new CPU, but now?
It still is CPU power that is holding you back. In DaVinci Resolve you won't see any performance change between a 1060 and a 2080 Ti; they are equally fast. What is holding it all back is CPU power, all the way up to 6K. At 8K you begin to see a little difference, but not much. Resolve likes cores, and as such you can use a 32-core Threadripper and get fast performance, at least in rendering.
With Premiere Pro it is the same story, but with the 1070 instead. Pr likes frequency more than just plain cores, so it is a mixture, and for that you need fast cores, not many. If I remember correctly it won't benefit from anything over 12 cores. So I think the i9-9900K with 8 cores at good speed is a good way to go. But many tasks, like Warp Stabilizer, only work on one core per effect.
But what I wanted to say is that it is the CPU that is holding us back, not the GPUs, at this point.
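That single-core-per-effect point is really just Amdahl's law at work; here's a quick sketch, where the parallel fractions are made-up illustrative values, not measured Resolve or Premiere numbers.

    def amdahl_speedup(parallel_fraction, cores):
        # Amdahl's law: the serial fraction caps total speedup no matter how many cores you add.
        return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

    for p in (0.95, 0.75):          # hypothetical parallel fractions of an export job
        for n in (6, 8, 12, 32):    # 8700K-ish, 9900K-ish, and Threadripper-ish core counts
            print(f"p={p:.2f}, {n:2d} cores -> {amdahl_speedup(p, n):.1f}x speedup")

Once a chunk of the pipeline is single-threaded, the jump from 8 to 32 cores buys far less than the core count suggests, which is why fast cores matter so much for Premiere-style workloads.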
Can't agree. GPUs today aren't any more bleeding-edge tech than they were in 1994. What is considerably more bleeding edge these days is the margins, and it's not just GPUs; it's pretty much all tech where the competition has been narrowed down to just a few giants, and they reap it all.
It's funny that I had a Millennium when I was a kid making 3k a year part-time at McDonald's, but now that I'm an adult I look at these prices and feel I can't afford it, even though I'm well into the six figures.
It's been that way for me as well, but I think adults look down the long road of life and realize that saving for a work-free future, or paying off the debt incurred during youthful, mistake-filled days of spending, has to take priority over the short-lived gratification of owning a desktop PC that can run a few video games we don't have the time or desire to play.
Depending on where you live this might not be as much as it sounds. If it's US dollars, in California it doesn't buy you that much. And in rupees it is great to earn that in India, but it won't help you on holiday in Europe ;-)
Okay, but rewind 10 years. How much was a build with a Q6600 and a 9800 GTX? $800. I know because that was my first build. If you wanted to build a similar middle-of-the-road (albeit on the high side) gaming rig today, you're looking at a $400 CPU and a $500 GPU. Those parts alone cost more than the entire equivalent build from ten years ago. I don't think inflation has been that aggressive.
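It hasn't been; as a rough sanity check, using an approximate ~17% cumulative US CPI rise for 2008-2018 (a rounded assumption, not an official series):

    build_2008 = 800           # the Q6600 + 9800 GTX build quoted above
    cpi_factor = 1.17          # assumed ~17% cumulative US inflation, 2008 -> 2018
    cpu_gpu_2018 = 400 + 500   # the $400 CPU + $500 GPU quoted above

    print(f"${build_2008} in 2008 is roughly ${build_2008 * cpi_factor:.0f} in 2018 dollars")
    print(f"CPU + GPU alone today: ${cpu_gpu_2018}, before the rest of the build")

General inflation only moves that $800 into the low $900s, so the rest of the gap is the parts themselves getting pricier.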
That is true, but at what resolution and what framerate? If you compare what you need to build in order to lock 1080p at 60 fps, it is actually quite cheap by comparison, especially adjusted for inflation. And I don't know about the rest of you, but my Q6600 build was only shooting for 60 fps at 900p.
The Q6600 wasn't top of the line 10 years ago, not even when it released in 2007. There was the QX6700 for $1000, with an unlocked multiplier and higher clocks.
Neither was the 9800 GTX because you had the 9800GX2 (SLI was pretty big back then).
Sure, building a system with those two wouldn't cost $3000, but it wouldn't cost $800 either.
I dunno, guys. I've been building computers for 20 years, and aside from the bump this generation, I think your memories are a little off. I paid maybe $20-30 less for my Q6600 than for a 4790K, a 2600K, an 8700K, etc.; there was a bump with the Q8400s, and as for GPUs (CAD pricing)...
AMD has always liked to play in the $400 range for high mid-range, but Nvidia has always flirted with higher prices; I remember paying almost $500 for a GeForce2 and $875 for a GeForce3. Still, pricing-wise, if you wanted a high mid-range machine with zero corners cut you were always looking at the $1,800 range (guessing that's about $1,300 American). Pricing today still seems fairly consistent with that.
Just a refresher that not everything was cheap in the time frame you provided. The 8800 Ultra in 2007 was $829; the 8800 GTX was $600. The Q6600 launched at $851, dropping to $531 that year. Before that you were paying $400-1,000 for a dual-core Athlon 64. It just felt different back then because performance would double every couple of years. I don't think the entry point to PC gaming has ever been this cheap; you can do 1080p at 60 fps for really cheap these days. Back then there was more of a divide between enjoyable performance and a slide show.
Not really - people using CRTs during that time could run circles around LCD resolutions and refresh rates. Until the more common gaming-centric LCDs (1440p 120Hz) arrived, LCD had been a major regression in display capability. I played an awful lot of UT99, TF, and CS at resolutions and refresh rates over 1080p60 on my CRTs over the years.
I used 2048x1536 for a long time on a 22" Dell P1130 CRT; it was great for Stalker, Far Cry 2, etc. When I finally switched (the Dell was losing its colour), it had to be IPS (the off-axis colour shifting of TN was just too annoying), and the lower vertical height of 1080p really grated on my brain, so I bought a 1920x1200 24" IPS (HP LP2475W), which was pricey but good, though I did miss the extra height from 1536, which was handy for documents, web pages, etc. I had looked at 2560x1600 (the standard "high end" gaming resolution of the day) but the prices were way too high. People forget, but back then the typical tested gaming resolutions were 1024x768, 1280x1024, 1600x1200 or 1920x1200, and 2560x1600. I think the first newer aspect ratio to come along was 1680x1050. How it's changed; nowadays almost everyone is used to screens with a wider aspect ratio, and a decent IPS 27" 1440p is only 200 UKP.
You may have played at over 60 Hz, but you didn't play at a higher resolution than 1080p, and especially not higher than 1080p with a refresh rate over 60 Hz. And the refresh rate isn't apples to apples; LCD refreshes per pixel.
If you are ready to aim just a little lower: I bought an Acer 17" notebook with a GTX 1050 Ti and a 35 W TDP non-HT i5 Kaby Lake for €800. It turned out to be quite a solid gaming machine, even at 1920x1080.
The 2080 Ti is about 4K, inference, and a novel way to do more realistic graphics: simply another ball game.
It's not remotely about 4K *and* realistic graphics, though; in that regard NVIDIA has very much pushed the focus back down to 1080p and sub-60Hz frame rates, because the tech just isn't fast enough for anything better. A lot of gamers who spend that kind of money are far more likely to just turn off the relevant features in exchange for better frame rates, especially if they're gaming with 1440p/4K and/or high-frequency monitors (the latter being something that's very hard to pull back from once one has gotten used to it; the brain's vision system gets retrained over time, and New Scientist had an article about this last year). As for more realistic: visually speaking yes (though NVIDIA's demos were grud awful and often highly misleading), but not a single thing they talked about in any way implied improved game worlds with respect to world/object functionality, which IMO is far more important.
For example, many people moaned about the changed water effect in the Spiderman game, but my first thought was, is it wet? Can I use it in the manner implied by its appearance? Drink it? Use it to fill a bottle? Put out a fire? Ditto the terrible-looking fire effect in the NVIDIA demo: is it hot? Does it radiate heat? Does it make the gun barrel hot to touch afterwards? Does it melt the glass of the carriage nearby if too close? (no) Similar questions about the reflective glass: can I break it, use a shard as a weapon? Do the pieces persist, acting as an audible warning if enemies step on them? Can I throw a grenade into the carriage, make the glass a weapon against passing enemies via the explosion? Can I cut down trees to block a road? Dig a pit as a trap? Remove a door?
All of these questions relate to interactivity with the world, and it's this which makes for a genuinely more engaging and immersive environment, not pure visual realism. Our environment is interesting because of the properties of materials and objects around us and how they can interact, with us and with each other. If anything, the more visually realistic a game world appears to be, the more jarring it is when an object cannot be used in the manner implied by how it looks. At least years ago a fake door was kinda obvious, one could see the way the texture was part of the frame surround, but these days it's all too easy to see something, assume one can interact with it, only to then be disappointed. Nope, can't open that door, can't pick up that object (or if you can it's just a generic object, it has no real functionality), the ground surface cannot be changed (oh how I miss the original Red Faction), substances are just visual effects, they don't affect the world in a meaningful manner, etc.
I want functional game worlds; if they look great as well then that's icing on the cake, but it's less important overall. Elite Dangerous can look amazing, but it lacks functional depth, whereas Subnautica has far more basic visuals but is functionally fascinating and very engaging indeed.
Put bluntly, Turing isn't a gaming technology at all, it's a table scrap spinoff from Enterprise compute, packaged in a manner designed to make gamers believe there's this great New Thing they never knew they always wanted, launched using highly deceptive PR, with demos that looked generally terrible anyway (I mean really, the fire effect was just a bunch of sprites, the box scene was b.s. from start to finish, etc.) The demos made out like without RTX these effects would look awful, but games already do things that look really good without RTX, in many cases better than the way the demos looked. Their focus on such irrelevance as the reflections in a soldier's eye was also stupid; who cares? Just make the bullets go into the bad guy! Too damn busy to think about anything else; oh wow, look at that amazing raytraced effect, it looks just - oh dear, I got shot.
NVIDIA helped create the market for high-res, high-frequency gaming and VR, yet now they've about-faced and are trying to drop back to 1080p because their baseline Enterprise tech isn't pushing for higher rasterisation anymore. These are not gaming cards now, they're spinoffs from compute.
As I've posted before, a teapot can look as amazing as you like, with all sorts of fancy reflective surface effects & suchlike, but unless I can use it to make tea then it isn't a teapot, a concept explored in more detail in the following old but now even more relevant article: http://www.sgidepot.co.uk/reflections.txt
All of this, along with the crazy pricing, RAM capacity stagnation and other issues (what's the point of preorders if they're delivered late?), shows very clearly where this industry is headed. I think gamers would be far better served if devs just made better, more engaging, more interactive and more functional game worlds. Sure, there'll always be a market for the FPS titles where people largely couldn't give a hoot about such things, but then again, Crysis was so impressive at launch partly because it was also functionally interactive in a manner that hadn't been done before.
What you are asking for has nothing to do with graphics and video cards though. These are game logic related functionalities that run on the CPU. I don't think CPUs are yet powerful enough to run something that is basically a reality simulator.
Besides, coding such games would require so much money and time that I don't think any game studio would even want to do it. They want to create a game and not a simulator.
Do you run it oc'd at all? If you need any comparison data against modern reviews, I've tested a number of 3930Ks and other X79 CPUs (for Firefox, set Page Style to No Style from the View menu): http://www.sgidepot.co.uk/misc/tests-jj.txt (other data here: http://www.sgidepot.co.uk/sgi.html#PC)
The i9 and 2080 Ti are categories of product that just didn't exist before; they are basically prosumer-level, used by the ultra-rich or for productivity/work tasks.
It would be more realistic to compare the i7-9700K and 2080, which are the mainstream high-end parts.
The i9-9900K is an excuse to justify not offering HT and the full cache with the i7 anymore, and just because some people want to think the 2080 Ti is a new category doesn't make it so, unless Nvidia explicitly says so.
It's always been like that. The top of the top of the line always cost $3,000. Thing is, you can still play with more than sufficient quality on a $200 CPU and a 1070.
Another difference is that back then you spent $3,000 on a PC just so you could run Windows smoothly. The $3,000 that an RTX 2080 Ti system commands does not get you as much of a delta in user experience, not even versus a couple of generations ago, let alone the 80s and 90s.
The RTX 2080 Ti offers better performance than the Titan Xp at the same price. So if you are willing to spend that much, you are still getting better performance than what came before at the same price.
Otherwise, you can just buy a regular RTX 2080 or GTX 1080 Ti and get great performance. It is not like you need an RTX 2080 Ti to play games.
False reasoning. The 2080 Ti is not the replacement for the Titan: 1) it is branded 2080 Ti, and 2) it doesn't even have the fully enabled TU102 chip, meaning a Turing-based Titan is incoming.
So you're saying that if you own a Titan Xp, you are not allowed to upgrade it to a 2080 Ti, because the 2080 Ti is "not the replacement for the Titan"? I think the ridiculousness of that argument is self-evident.
No, I didn't. You can upgrade to whatever you want, but the 2080 Ti is still not the successor to the Titan. I think you should read comments more carefully before jumping to conclusions.
Actually, that's pretty much exactly what you said.
To summarize: the OP said that the RTX 2080 Ti offers better performance than a Titan Xp for the same price, so if you were comfortable spending Titan Xp money on a video card, you can upgrade to a 2080 Ti for the same price and better performance. Or, if you are not willing to spend Titan Xp money, you can buy a 2080 and get great performance for games.
You responded with some drivel about how the 2080 Ti is "not a replacement for the titan" where I guess in your mind a "replacement" is only a "replacement" if it meets some specific product categorization criteria including "having a fully enabled TU102" chip and having a specific branding name. Which implies that you don't think that a 2080 Ti is a valid replacement for a Titan Xp for the same price, because otherwise, what is the point of your post at all?
So your post makes no sense, which I pointed out.
Who defines what is a "successor"? You? And who even cares what card is a successor to what card? The OP didn't even mention 'successor' as an important criterion in deciding whether or not to upgrade a Titan Xp to a 2080 Ti. You brought that drivel in.
Let's see what happens when you take your used Honda Accord in for a trade ...
Dealer: You are in luck! We have just gotten in a new shipment of this year's Lexus sedan that is better than your car in every way! And it costs the same as your Accord did new!
You: Are you kidding me? That's not a proper 'successor' to the Honda Accord, and everyone knows that. It doesn't even have the same model name! What kind of trick are you trying to pull here? Bring me the Accord Mark II please or nothing at all.
@bji, honestly you are trolling. If you look back at past generations, the next-gen Ti beats the prior-gen Titan. If people were willing to pay for every percent increase in performance over Nvidia's 20 years, you would easily be paying $20K per gaming card. A generational performance increase should not demand more dollars, except for a small bump for inflation. The pricing of the RTX 2080 Ti is a complete rip-off and everyone will agree in time, even you, bji.
It's stupid to call a post 'trolling' just because you don't agree with it. Please look up what 'trolling' actually means before making such baseless accusations. Then please remember what you read before wasting anyone's time with a similar post. Thank you.
I do think it's amusing how many posters think that companies should create products and pricing based on their own personal wishes and desires. It's generally funny to watch people get all bent out of shape because their own personal belief system about products doesn't actually correspond to reality, and then rather than adjusting their belief system, claim that the companies are wrong to make products at prices that they don't like.
I suspect that making a new generation of video card is an intensely expensive and difficult process from an R&D perspective, and thus priced accordingly. But I guess a lot of basement dwellers think that everything should just be given to them for cheap because ... well, I don't know. Just because, I guess.
"I suspect that making a new generation of video card is an intensely expensive and difficult process from an R&D perspective, and this priced accordingly."
Those things factor in, yes, but if you think that companies never overprice their products when there is no competition, then you are extremely naive.
Reality? Overpricing is also a reality. It happens all the time and people are allowed to criticize it.
Is it really not obvious what I meant by "replacement"? Nvidia replaces a Titan card in the lineup with another Titan card, not with a numbered GeForce card, unless they specifically state that the 2080 Ti is replacing the Titan Xp and the 2080 is replacing the 1080 Ti, which they haven't.
Um, yes, branding exists for a reason. If they branded it "2080 Ti" then there is no reason to think it's not the successor to the 1080 Ti.
Who defines? Nvidia. It's an x080 Ti, therefore it's safe to assume it replaces the previous x080 Ti in the lineup. Simple as that.
Again, I have no problem with people upgrading from a Titan to a 2080 Ti. It's their money. It still doesn't mean the cards are in the same category.
Also, it's kinda nuts that the 2080 Ti didn't even increase the RAM over the 1080 Ti, while the 2080 actually goes down compared to the previous card it's currently competing with at the equivalent price point. The 2080 Ti should have had 16GB; I can only think NVIDIA didn't bother because they figured that if they can push people back to accepting 1080p as the norm then it won't need so much.
1080p is still the norm; we haven't pushed beyond that yet. The Steam hardware survey shows 1080p is by far the dominant resolution, followed by a distant ~13.5% for 1366x768, which is most likely the dominant notebook resolution. Higher resolutions are still a tiny fraction of the market:
Primary Display Resolution
1366 x 768   13.51%  -0.67%
1920 x 1080  62.06%  +1.40%
2560 x 1080   0.96%  +0.01%
2560 x 1440   3.59%  -0.03%
3440 x 1440   0.43%  +0.01%
3840 x 2160   1.32%  -0.01%
Other         1.56%   0.00%
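Summing the above-1080p rows from that snapshot puts a number on "tiny fraction":

    # Steam survey shares quoted above (percent of primary displays)
    above_1080p = {
        "2560 x 1080": 0.96,
        "2560 x 1440": 3.59,
        "3440 x 1440": 0.43,
        "3840 x 2160": 1.32,
    }
    print(f"Above 1080p combined: {sum(above_1080p.values()):.2f}%")  # about 6.3%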
I remember my parents purchasing me an Acer system in 1993. It came with a 15" CRT monitor, a 66 MHz Pentium, 8 MB RAM, a 500 MB hard drive, a 1 MB video card, a SoundBlaster-compatible sound card, a 14.4k modem, a 2x CD-ROM, and an HP 360 dpi color inkjet printer, all for $3,500.
Pretty sure integrated WiFi addresses more users than RST support. RST is a niche of a niche (and I'm a big RST fan, and my X99 chipset never had an actual Intel version).
You are right, but it's still about corporate greed. Integrated WiFi adds stuff to the Intel BOM at 60+% margins, while enabling full RST (at no cost) on their prosumer boards might lose them a few bigger sales.
An i7 without HT is just bizarre (the 9700K cannot be a direct upgrade over the 8700K, not when it's losing HT in the process). And in pure marketing terms, is it just me or does i9 just not sound or look as cool as i7? There's always been something better in the brain about 7, hence 007 for Bond (009 just sounds like dialing a wrong number in a phone box). Plus of course Intel is making the whole thing a naming mess by using i9 for HEDT parts as well. Oh dear...
Yeah, it's funny: two more cores vs. six hyperthreads, more cache, and a cheaper price. I guess both these chips are at performance parity, although the 8700K will definitely be faster in encoding. The 8700K may be faster than the 9700K all around. This is definitely a great opportunity for AMD.
The point is that the whole segmentation thing muddles the market. Remember that for a normal user, 6 cores with HT shows up in Windows as 12 apparent cores. The cost difference also makes it confusing, and the gain is often a lot better than 20%, as was very clear back in the days of SB. More than that though, I don't care how much faster 8 cores is than 6 with HT when a 2700X is half the price. :D That's enough to cover the cost of an entire motherboard and more.
The funny thing is we already have scores for the 9700K, so we don't even need to speculate on how the 8700K compares to the 9700K. See this HardOCP video of the MSI Z390 ACE board: https://youtu.be/1JgQxazSAGc - at 5:26 there's a test sheet provided by MSI running the 9700K on the board in question. Interesting takeaway scores: Cinebench R15 comes in at 1488 points; compare that to stock 8700K scores usually landing in the 1420 range, so about 5% faster. Firestrike Extreme scores 13587 on the 9700K; compare that to a stock 8700K typically scoring in the 19100s and you'll see the 9700K is about 30% _slower_ than the 8700K in a typical gaming benchmark.
While I share the disappointment, I believe Intel has been assuming hyperthreads to have 0.33x the performance of a native core in a typical multithreaded task, and has spaced its i-family CPU steps in even (or close to even) increments based on that. The math works out and is very sensible.
Kaby Lake and prior: 2/4 (2.66) — 4/4 (4) — 4/8 (5.33). 1.33 "cores" per step.
Coffee Lake: 4/4 (4) — 6/6 (6) — 6/12 (8). 2 "cores" per step.
CL Refresh: 4/4 (4) — 6/6 (6) — 8/8 (8) — 8/16 (10.66). 2 "cores" per step between i3 and i7, 2.66 from i7 to i9. As close as it gets.
That said, 9700K will certainly outperform 8700K on tasks that care about the performance of threads more so than their raw number (e.g. actually fully loading the cores so there isn't much room for HT to take advantage of), as you'd be very unlikely to find HT to add more than 15–20% performance in such a scenario. Frequency-wise, where a stock 8700K would have 6 cores running at 4.3 GHz, a 9700K will have 8 cores at 4.6 GHz. I do believe this will prove to be an improvement in the vast majority of cases. That being said, it will also be a marginal improvement on the order of 7–8% at best, and certainly not any sort of an impetus to upgrade. But then again Coffee Lake was kind of an outlier in that regard; between Sandy Bridge and Kaby Lake it had been the same 7–8% from one generation to another. So consider this to be more of a return to the status quo.
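Spelling out that ~0.33x-per-hyperthread bookkeeping (the weighting itself is the assumption described above, not an official Intel figure):

    def effective_cores(physical, threads, ht_weight=1/3):
        # Physical cores count as 1.0; each extra SMT thread counts as ht_weight (~0.33).
        return physical + (threads - physical) * ht_weight

    lineup = {                       # cores/threads for the CL Refresh tiers listed above
        "i3 (4/4)":        (4, 4),
        "i5 (6/6)":        (6, 6),
        "i7-9700K (8/8)":  (8, 8),
        "i9-9900K (8/16)": (8, 16),
    }
    for name, (c, t) in lineup.items():
        print(f"{name}: {effective_cores(c, t):.2f} effective cores")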
Is that why everyone in the enterprise sector was scrambling for cover at the start of the year? Or, you know, the people who complain about China or the NSA: do they also not care?
Not that I really agree with downplaying these completely, but these are consumer parts, and they aren't destined for any of those scenarios. Intel launched fixed parts for the Enterprise already, too.
People do still have a reason for wanting fixes, though. A hardware designed fix is likely to lift performance up higher than doing the equivalent thing in software.
Correct; if these are hardware fixes, there's less effect on performance. And now, from the slides, I've learnt there are two hardware fixes, which is partially good (something is better than nothing).
I didn't mean to troll, but it looks like you are hellbent on it. Have you been living under a rock for the past few months? My genuine question is whether there is any meaningful advantage, other than the frequency increase, to consider the new processors for. As I am interested in purchasing a Core i5 and the article is mostly talking about the new i9 processor, I was asking something specific, since I can get the slightly cheaper 8th-gen processor if there is no meaningful improvement.
Thank you, R0H1T, for the support and for covering me...
I care about a ~15% performance hit, yeah. You don't? Why are you posting on an enthusiast technology site, then? Did you get lost on your way to Instagram?
They are the same. The patches on the OS and microcode have affected performance for all of us on Intel systems because they're trying to mitigate it for everyone with the chip, not just Enterprise.
Or a better question: what are the changes for the Core i5 and below processors, other than the frequency bump? Security fixes, or the full cache available to all cores instead of per core?
Only a couple of "hardware" fixes, according to the slides, namely Meltdown (v3) and L1TF, while the rest of the possible mitigations will be enabled via microcode and/or software updates.
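If you want to see which of these end up mitigated in hardware, microcode or software on a given box, recent Linux kernels expose the status under sysfs; a small sketch (Linux-only):

    import glob, os

    # Recent Linux kernels report per-vulnerability mitigation status in these files
    # (e.g. meltdown, l1tf, spectre_v1, spectre_v2).
    for path in sorted(glob.glob("/sys/devices/system/cpu/vulnerabilities/*")):
        with open(path) as f:
            print(f"{os.path.basename(path):20s} {f.read().strip()}")

Items fixed in silicon typically show up as "Not affected", while the microcode/OS ones show a "Mitigation:" line.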
Intel was notified about Meltdown and Spectre on (or possibly before) February 1, 2017.
So seriously, what the hell, Intel. I get that Spectre is a huge problem inherent in the design of modern CPUs, but why isn't Meltdown fixed yet? You've had almost 2 YEARS.
Meltdown will also require a fair bit of redesign of their existing uarch. Now, as much as I'd like "hardware" fixes to just appear, they will take some time, considering a botched job would be much worse in terms of security and performance.
Welcome to the madness of turbo speed being reduced depending on the number of active cores... Intel is trying to push the nice 5 GHz uber benchmark score, but the CPU fails to deliver much more than the previous generation due to the mixed load characteristics of a daily system (OS, AV, etc.). Got to love the fact they called it i9, reduced the cache and removed some HT... Really, Intel, trimming the consumer CPU stack just out of concern for consumers??? You have about 46 SKUs in the server space... how about that?
Nah, it's reduced. If you really believe that you only ever have one active core at a time, go figure... In many cases several cores are active at the same time, and you will never reach the advertised speed outside of some fancy benchmarks running on an idle system with nothing installed.
The original concept of why chips turbo has been taken over by fancy marketing slides.
I think you guys are on the same page, just looking at it from opposite directions, ie. lower turbo levels as the thread load goes up, vs. higher turbo levels as the thread load goes down. Same thing really.
Either way though, it's still a valid point: the PR focuses on 5 GHz, but most of the time it won't be running that way. I bought a 10-core Xeon which in theory can turbo to 3.6, but in a real system not once have I ever seen it go above 3.1. Even more ridiculous is the way people are getting so hyped about the 9900K, yet for some time now the max OCs one can achieve are really not that much higher than the built-in max turbo; in other words, oc'ing has become boring, it really doesn't achieve much anymore, not like the days of SB when a 5 GHz OC (utterly trivial with a 2700K) was a huge bump over the max turbo of 3.9.
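The per-active-core turbo behaviour being complained about is essentially a lookup table; here's a sketch using the 9900K's advertised 5.0 GHz two-core and 4.7 GHz all-core points, with the 3-7 core steps as placeholder guesses rather than Intel's actual bins:

    # Active cores -> turbo bin in GHz. The 1-2 core and 8-core values are the advertised
    # figures discussed in this thread; the intermediate steps are illustrative guesses.
    turbo_table = {1: 5.0, 2: 5.0, 3: 4.8, 4: 4.8, 5: 4.7, 6: 4.7, 7: 4.7, 8: 4.7}

    def turbo_ghz(active_cores):
        return turbo_table[min(max(active_cores, 1), 8)]

    for n in (1, 2, 4, 8):
        print(f"{n} active core(s): {turbo_ghz(n)} GHz")

With an OS, AV and background tasks keeping several cores awake, the realistic ceiling is the right-hand side of that table, not the headline 5 GHz.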
I have my 2080 Ti's preordered. Was going to wait another year or so before upgrading my i7-6950X. Since this is another refresh, I'm debating if I should just get it or still wait.
Thanks Ian! Did Intel or anybody else there comment on availability of both the new CPUs and new chipsets in Q4 2018 and Q1 2019? Are they shipping in quantity? The new i7s look interesting, but with recent manufacturing problems at Intel and ensuing price increases at retail, AMD's Ryzen might still be "it" for many who want eight cores.
For gaming, with the 2700X being so much cheaper, the price difference is enough to cover the cost of a better GPU anyway. It's really only more complicated if one is using 1080p in certain specific scenarios. For anyone else, at 1440p or higher, a better GPU with a cheaper 2700X is far more sensible.
What we'll see as usual is Intel winning plenty of benchmarks with the 9900K, especially gaming, but at a cost which makes no sense. The kind of people who care about the upper tier of mainstream tech tend to be those gaming at 1440p or higher, in which case the IPC/clock advantage is less relevant anyway, Ryzen holds its own there, and offers good productivity performance at lower cost. When the reviews hit, I'm sure we'll see the usual tech site responses where they fawn over enormous frame rates at 1080p which only a minority of gamers actually care about (which is fine for those that do, but it's just annoying when most mainstream gamers don't, doubly so when reviews pair the tested CPU with the best GPU available - the rationale is understandable, but it kinda misses the point for those faced with real-world upgrade decisions).
Unless it's bringing something new to the table, it's a waste to bother with a 2800/X; the 2700/X processors are already pretty friggin' awesome with decent coolers, and a 2800 won't be the incentive that makes people pull the trigger on a purchase. Unless maybe it came with a water cooling kit... maybe... Doubt it tho. Besides, anything more expensive on their part and you'd be eyeing those Threadrippers instead.
Well, I pre-ordered the i7-9700K. Came here and was disappointed to read it will have less L3 cache than the i7-9900K. Hopefully it won't make that much of a difference, as most of what I have seen shows that games and apps rarely utilize 16 threads and 8 is the sweet spot. Also sad that no reviews will be available until after my pre-order ships, so no time to change my mind.
You could always cancel your pre-order and wait until the reviews come in... not like they'll run out.
As a practical matter you will have saved 23.4% of the price for a 25% reduction in cache and probably about the same in maximum CPU performance - but if your task fits within the cache and eight threads, it's more like a 2% reduction. As you say, most games currently don't use more than eight cores.
In other software the Hyper-Threading impact is mixed, in some cases up to a 30-40% benefit - but *only* if you're using the CPU full-throttle already, and in a parallel workload where cache is not so significant: https://www.phoronix.com/scan.php?page=article&...
If you're doing two different tasks on the same core, HT can actually make things slower due to cache thrashing and contention for the un-shared portions of the execution pipeline.
What a product segmentation hell. Typical of Intel.
The really funny part here is that the Core i7-8700K at 6/12 with HT is arguably a faster processor for multitasking than the new Core i7-9700K 8/8 with less cache. GG guys.
And in true Intel fashion, they place an 8/16 CPU on top and disable hyperthreading for the 8/8 just below! WTF, Intel? Core i7 was always about hyperthreading. You are aware there is an AMD now that has SMT enabled from bottom to top, offers more cores for cheaper, and lags only in single-threaded games?
It's nice to see competition in the CPU market again. We've waited a long time for an 8-core mainstream part from Intel, and it looks like the Ryzen line is pushing them to move to 6 and then 8 cores to compete. I actually thought they'd do one with a fancy cooler too, but no. Still, I like this as the new norm (not thrilled with the price on the i9 though; going to have to take a pass on this gen, besides, I already have an 8700K and a Ryzen 2700X :)
People forget that the 3930K was actually an 8 core, but with 2 cores disabled. Intel could have released an 8 core a loooong time ago, but they just didn't have to (back then, AMD could barely compete with the 2500K). With IB, mainstream stagnation began.
I am surprised they keep the iGPU at the high end. In a notebook, that makes sense, because a zero-watt dGPU simply allows longer battery life away from gaming. In a desktop, I cannot see it being a selling point, while the die area spent on the iGPU should easily have yielded two more cores, even four.
In fact, I've regarded it as a stroke of 'genius' by AMD to trade the iGPU nobody used on high-end Ryzen for 2-4 extra cores 'for free' vs. Intel; payback for how Intel used the free iGPU to deprive Nvidia and ATI of the volume revenues they had come to rely on in the Core 2 days.
Hyper-threading: it does cost heat. It does create hot spots on the chip, and disabling it gives you another shot at better binning, etc. There are plenty of technical reasons to disable it on any chip less than perfect, before resorting to side-channel-attack speculation: I can't see those being so important in the gamer market the 8-cores address, and I thought they fixed the issues regardless of HT. (BTW: still no word on the Control Flow Integrity (CFI) extensions?)
I guess another reason is that life as a game engine developer isn't all that easy to begin with. Creating a better user experience by seamlessly scaling across thousands of GPU cores, and now a handful or a dozen CPU cores, isn't easy when you also have dynamic frequency scaling, TDP or outright physical cooling limits, and increasingly people playing on laptops. And now there is NUMA, and CPU cores which don't have direct access to memory, etc., etc.
Eliminating HT from the already rather complex field of CPU options may also play well with the game engine designers, who are currently trying to make use of the additional real CPU cores the new Ryzen and Intel desktops deliver. When there is good reason to believe that the extra wattage required by HT will rather soon slow down turbos on cooling-constrained CPUs anyway, there is nothing to gain and only complexity to pay for.
The reason they have an iGPU on the 9900K is that it's the biggest base die, and every smaller chip in the stack is made by cutting it down.
It would cost MORE to produce an iGPU-less die, not to mention many people appreciate having a backup GPU when they are waiting for a replacement to arrive or the discrete GPU fails to start. Also, some people don't need a discrete GPU at all.
"Hyper-threading: It does cost heat. It does create hot-spots on the chip, disabling it gives you another round at binning better etc."
They could've branded the best chips i7-9700K as usual, without disabling HT or reducing cache, and then used the chips that didn't make the cut for a 9700 non-K with reduced clocks to keep the heat at bay.
Why do that though when they can artificially create a new category, price it higher, disable features on the lower category, and make easy money.
I'll freely admit, that defending Intel doesn't come naturally to me...
But I feel that the heat they get on the TIM isn't wholly deserved.
Please consider that thermal expansion is very much a challenge when you go and cycle between 100 and 1 Watts of power potentially several times a second. That is already quite hard on the solder ball grid array that points toward the motherboard. When you add rigidity on the other side using solders rather than pastes, the stress on the die can only get worse.
Intel has tended to keep turbo clocks on high-core-count CPUs rather low, not because those cores weren't capable of reaching high clocks for binning reasons; I believe they went conservative because of the physical stress this would cause within the chip and on the top and bottom connections.
When they binned high-end parts for higher frequency, they disabled intermediate cores to create "dark silicon" islands, to carefully spread the heat horizontally first and reduce the stress vertically.
These days they are so eager to please, they offer extreme clocks *and* soldered heat spreaders, but I cannot help but think that they are sacrificing longevity and reliability with it.
But by reducing the thermal resistance, less heat will accumulate in the die, and a lower temperature means less expansion. Also, the lower the temperature, the longer the life.
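That's just the steady-state thermal resistance chain; a toy calculation below, where every number is an illustrative assumption rather than a measured Intel figure:

    def die_temp(power_w, r_die_to_ihs, r_ihs_to_ambient=0.15, t_ambient=25.0):
        # Steady state: T_die = T_ambient + P * (sum of thermal resistances in C/W).
        return t_ambient + power_w * (r_die_to_ihs + r_ihs_to_ambient)

    power = 150  # hypothetical all-core load in watts
    for name, r in (("polymer TIM", 0.30), ("solder", 0.10)):  # assumed C/W values
        print(f"{name}: ~{die_temp(power, r):.0f} C at {power} W")

Cutting the die-to-IHS resistance drops the die temperature at the same wattage, which is the "less expansion, longer life" argument; the counter-argument above is about the mechanical stress that the stiffer solder joint itself adds.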
What an underwhelming product announcement. So a year later we have:
Z390 is Z370 with better USB
9000 series CPUs are basically the same as 8000 series CPUs, just more expensive.
I'm willing to bet an 8000-series i7 beats the 9000-series i7 in a number of benchmarks due to the overall thread advantage the i7-8700K has (6/12) over the 8 physical cores in the i7-9700K.
Hyperthreading obviously isn't as good as physical cores, but replacing it entirely with 2 additional physical cores and charging more for the resulting product likely isn't going to make it a good dollar-for-dollar improvement over the 8th gen.
He said 10% faster, not 50% as Intel claim. 10% faster in most games, and on par with the 2700X in professional workloads seems fairly reasonable to expect. At double the price at retail of the 2700X one would have to be completely insane to buy a 9900K though. For any reason.
I don't know about that. Mazda is actually going to market a compression-ignition gasoline engine for the 2020 model year. It increases gasoline engine efficiency by around 30%, the most significant jump in efficiency in the history of petrol engine design, by compressing the fuel-air mix much like a diesel engine does, to the point of combustion. It will still use spark plugs during cold starts, when the cylinder chamber isn't warm enough to ignite the fuel under pressure alone.
That said, the analogy isn't exactly suitable for the Core architecture, which really has run out of steam.
"If you believe everything the motherboard manufacturers tell me, most of them have been ready for this release for several months, hence why we’re going to see about 55+ new models of motherboard hit the market over the next few weeks."
Please compare the 8th-gen i5/i7 to the 9th-gen i5/i7/i9 for performance on compression tasks and the other common workloads you do scripted comparisons with. I REALLY want to see some heat comparisons of 8th vs 9th gen since they have it soldered. Does anyone do air cooling for these, or just liquid? Do any of these CPUs come with a heatsink? That'd tell us something about how Intel thinks they can perform on air cooling. I'd also like to see some motherboard comparisons; grab low-end and mid/high boards from manufacturers like Asus, ASRock, etc. Also FPS in games for 8th vs 9th gen: PUBG, CS:GO, etc. Thank you! PS: people need to chill about the Spectre/Meltdown fixes; there are two hardware fixes on the 9th-gen CPUs and the others are fixed with microcode, firmware or software, so what's the problem?
I'm still confused. I have been an Intel fan for decades, but AMD has made a big impact on the market, so I have to consider them as an option for my next HEDT build for 4K video editing and some compositing/graphics work using DaVinci Resolve. Budget is limited. I was going for the i7-8700K but decided to wait for the 8-core 9900K. Planned build: Asus Z390 motherboard, i9-9900K, 32GB DDR4-3200, 500GB M.2 system drive, 1TB SSD, 4x HDDs in RAID, plus a GTX 1080 Ti GPU. Water cooled, so hopefully close to 5 GHz on all cores. I'm not a gamer and can't wait till 2020.
1) In reality, 4 cores is enough for most single-user apps; Moore's Law died, silicon CPU technology hit the wall many years ago and won't exceed 4-5 GHz in the foreseeable future. AMD has confused the consumer market with their hype: "If 4 cores is good, 8, 16, or 32 must be better." Benchmarks confuse the issue, as they are designed to max-load all cores to score, which is far from the single-user real world.
2) Intel beats AMD on clock speed (Ryzens struggle to exceed 4 GHz), which is important to me.
3) AMD beats Coffee Lake on PCIe lanes. Intel states 16 lanes but AnandTech here shows 24. Why? The 8-core Ryzen 1900X has a massive 64 lanes. How many do I need if I expand to 2x PCIe M.2 SSDs and 2x 1080 Ti GPUs, and maybe add TB3 external storage later? What sort of desktop uses 64 lanes? Price: 9900K $530 vs 1900X $336, with the 12-core Ryzen 1920X at $499.
Any thoughts/advice appreciated. Thanks
4) Ryzen (the Threadripper 1900X) supports quad-channel memory plus ECC vs. Intel's dual channel and no ECC. How does quad-channel memory affect performance, and is ECC needed?
CPUs didn't hit a brick wall; Intel simply had a monopoly and exploited it for maximum profit. We've gotten more progress in the CPU space in the last 2 years than we did in the previous 8 before that.
Intel's clock speed figures are only for single-core turbo. The all-core turbo of the 8700K, for example, is only 4.3 GHz, compared to Ryzen's all-core turbo of 4.0 GHz. This difference is negligible. The real advantage of the Intel processor is overclocking, but for professional use such as what you want to do, I would highly recommend against that, as even slight instability can cause loss or corruption of work.
As for PCIe lanes, given that you want 2 PCIe SSDs, 2 1080 Ti GPUs, and TB3 external storage, you definitely need the 64 lanes that the Threadripper platform offers. You simply cannot run that setup on the 24 lanes that Intel offers. Two GPUs alone will saturate that, let alone your PCIe SSDs, external storage, and your M.2 boot SSD, which takes x4.
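Tallying the lanes that build would want makes the point; the per-device widths below are the usual ones (x16 per GPU, x4 per NVMe drive, x4 for Thunderbolt 3), used here as typical assumptions rather than figures from the article:

    # Typical lane widths for the parts in the build described above
    build = {
        "2x GPU (x16 each)":     2 * 16,
        "2x M.2 NVMe (x4 each)": 2 * 4,
        "Thunderbolt 3 (x4)":    4,
    }
    total = sum(build.values())
    print(f"Lanes wanted: {total} (vs. 24 on mainstream Intel, 64 on Threadripper)")

Even before the RAID HDDs and other chipset devices, that's 44 lanes of demand, which is why the HEDT platforms exist.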
Technically, there was an 8-core chip for the X79 platform that was released Q3/13. Since it was running on X79 I would consider it "consumer" as well. This chip was the Xeon E5-1680v2 (Ivy Bridge).
People will pay even more for this CPU that is still broken (Spectre/Meltdown). We bought before, but we did not know they were selling us a CPU that had security problems. To me it takes some nerve to keep building a broken CPU after finding out about it and then charge even more for it. How does Intel get away with this? They wouldn't if people stopped buying them until they fix it. You would not pay full price for a damaged car, and most places discount stuff like that, but no, Intel charges more and even gives you less.
Paul's Hardware has posted a review of the MSI Z390 MEG Godlike: https://youtu.be/YF-aIboiuSs - check at 3:20, there are some benchmark results for the i7-9700K: Cinebench 1476, Firestrike Extreme 13611, ...
Interesting. The Cinebench score is one that should have reasonable consistency, so we can use that as a rough guide. The 9700K scores 1488, or under 5% higher than a typical stock 8700K score of 1420-ish points. A stock Ryzen 2700X scores around 1800, though, so a stock 2700X is about 21% faster than a 9700K.
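Those percentages are easy to sanity-check from the scores quoted above:

    # Cinebench R15 multi-core scores quoted in the comments above
    scores = {"i7-8700K (stock)": 1420, "i7-9700K": 1488, "Ryzen 2700X (stock)": 1800}
    base = scores["i7-9700K"]
    for name, s in scores.items():
        print(f"{name}: {s} ({(s / base - 1) * 100:+.0f}% vs. the 9700K)")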
We really don't know yet, but the Intel-commissioned benchmarks, aside from being the genesis of Game Mode Gate, show that the 9900K is barely 5% faster than the 8700K in games on average at 1080p, where the difference should be the largest. And it's 50% more expensive, so it doesn't look promising for the 9th gen, to be honest. I look forward to seeing actual reviews by credible sources, but I for one foresee the 9000 series being a massive belly flop.
Atm I wouldn't touch Intel-commissioned benchmarks with a 10m cattle prod. Best wait for proper reviews, but the pricing is already enough to persuade me to go AMD, 600 UKP in the UK for a 9900K (that's approaching $800 US).
So what's the consensus: buy 9th generation, or wait a bit more for the 10 nm parts next year? I think I may have to jump in this generation, as I'm limping along with an i5-6600K. Although it does pretty darn well; coupled with a 1080 Ti I'm playing every single PC game release at 4K with most settings maxed and still getting 30-50 fps. But I know my CPU is the bottleneck, and I won't be able to even consider the 2080 Ti unless I upgrade the CPU first. But if the 10 nm ones next year are that much better, I have no issue waiting another year.
Definitely wait for reviews from credible sources on the 9000 series. Also might want to see what AMD brings to the market with their 7nm Ryzen 3000 series launching in March/April next year.
What he said. The elephant in the room is that TSMC’s shrink is proven to work and so far, Intel’s is proven not to. And who’s going to be using that TSMC shrink next spring? AMD.
That's perfect! Now I can return my i7-8086K for a refund and then immediately pre-order an i9-9900K at B&H Photo Video for $530! Plus, it gives me the advantage of reducing the CPU bottleneck, especially when running an RTX 2080 Ti video card.
The i9-9900K is definitely going to be an interesting product given the fact that now it can turbo itself up to 5 GHz with TWO cores active, as opposed to just one.
The fact that this is also an 8-core processor and has an 11-bin self-turbo range (3.6 GHz base, 4.7 GHz all-core turbo) makes this a VERY tempting product/platform.
Hmm, in my country it comes to 640 dollars, and the latest tests show gaming performance 16% higher than the R7 2700X; sure, depending on the game that amazing figure works out to between 5 and 12 fps, but absolutely amazing, worth every cent paid. So why didn't they test it in a professional setting?????
How about you try dropping some of these Coffee Lake Refresh parts into Z370 motherboards with older BIOSes and see if they boot or not? This generation is a refresh and not really all that different (some leaked documents even said they have the same stepping codes as 8th-gen parts).
I received a free EVGA Z370 FTW motherboard that shipped about 4 weeks after their latest BIOS update (which had notes of "Support new Coffee Lake-S Processors", "Add new NVIDIA USB 3.1 Support", "Improves Intel SGX compatibility"). It's possible my board has the new BIOS, or the prior one from back in March. I have hopes that a part like a 9600K (with only 6 cores) would boot no matter what BIOS is on the board. The FTW board's supported CPU list is pretty abysmal actually, listing no 8086K or any 300-series CPUs cheaper than the i3-8100 (although plenty of folks say it runs the 8086K even on the March BIOS).
wingless - Monday, October 8, 2018 - link
Building a, i9-9900K and RTX 2080 Ti system will cost a minimum of roughly USD $3000. Computers are starting to get back to late 80s/early 90s prices, LOL (only the 35+ crowd will remember those days)Kvaern1 - Monday, October 8, 2018 - link
Yup. The 2080 Ti is, adjusted for inflation, considerably more expensive than the Matrox Millenium was in 1994 and the Millenium was the highest end for everything at the time, not only gaming.So yea what happened is that everything did get much cheaper, chinese sweatshops & economics of scale and all, but it's big corporation which reaps it all.
Veliladon - Monday, October 8, 2018 - link
What happened was that GPUs started using bleeding edge processes and taking bigger slices of the wafer with its associated lower yields and more waste. Voodoo was a toy using just over a million transistors on a half micron. On a die equivalent it was like fabricating a 486 DX4 in late '96. You could use all the spare capapcity that was sitting around after the bleeding edge guys matured the process and moved on.A 2080 Ti is 700-something mm2. That's ridiculous. Of course it's going to be an expensive ass chip and Nvidia still needs their cut.
bebby - Tuesday, October 9, 2018 - link
The question is more who really needs stronger CPUs anymore. With a high end GPU you already get a lot of (cuda) power for transcoding and GPU is still the limiting factor for gaming. Office does not need that neither. In the past if was always the video editing pushing me for a new CPU but now?gamingkingx - Tuesday, October 9, 2018 - link
It still is CPU power that is holding you back.. In Davinchi Resolve you wont see any performance change between a 1060 and a 2080 ti. They are equal fast.. What is holding it all back is CPU power.. That is all the way up to 6K.. At 8K you begin to see a little difference. But not much.Resolve likes cores, and as such you can use 32-core threadripper and get fast performance. Atleast in rendering.
With Premiere Pro it is the same story but with the 1070 instead. But Pr likes frequency more than just plain cores.. So it is a mixture and for that you need fast cores. Not many. If I remember correctly it wont benefit of anything over 12 cores. So I think the i9 9900K with 8 cores of good speed is a good way to go. But many takes like Warp stabilizer only works on one core pr. effect.
But what I wanted to say is that it is the CPU that is holding us back.. Not the GPUs at the time.
Meteor2 - Tuesday, October 9, 2018 - link
That’s just not true, as any number of benchmarks and test reports demonstrate.umano - Friday, October 12, 2018 - link
Agreed, daVinci uses gpu a lot, that's one of the reasons why it's awesomeKvaern1 - Tuesday, October 9, 2018 - link
Can't agree. GPU's today aren't anymore bleeding edge tech than they were in 1994. What is considerably more bleeding edge these days is margins and it's just not GPU's. It's pretty much all tech where the competition has been narrowed down to just a few giants, and they reap it all.SlyNine - Tuesday, October 9, 2018 - link
Way to ignore the argument and see what you want to see.. You're preaching from a narrative and won't accept anything out of it.If he's wrong, in his very specific and detailed argument. Then detail how you know it's wrong.
euler007 - Tuesday, October 9, 2018 - link
It's funny that I had a Millenium when I was a kid making 3k a year part time at McDonald's but now that I'm an adult I look at these prices and feel I can't afford it even though I'm well into the six figures.PeachNCream - Tuesday, October 9, 2018 - link
It's been that way for me as well, but I think adults look down the long road of life and realize that saving for a work-free future or paying for the debt incurred during youthful, mistake-filled days of spending has to take priority over the short-lived gratification of owning a desktop PC that can run a few video games we don't have the time to or desire to play.Gastec - Wednesday, October 10, 2018 - link
"well into the six figures" as in more than 100,000 per year? Are we to support shameless trolling from rich people now?jospoortvliet - Wednesday, October 17, 2018 - link
depending on where you live this might no be as much as it sounds. If it is US dollar, in California it doesn't buy you that much. And in rupee it is great to earn it in India but won't help you on holiday in Europe ;-)Gasaraki88 - Monday, October 8, 2018 - link
No one needs that though. Top of the line gaming rigs have always been expensive.willis936 - Monday, October 8, 2018 - link
Okay but rewind 10 years. How much was a build with a Q6600 and a 9800 GTX? $800. I know because that was my first build. If you wanted to build a similar middle of the road (albeit on the high side) gaming rig today you're looking at a $400 CPU and a $500 GPU. Those parts alone cost more than the entire equivalent build from ten years ago. I don't think inflation has been that aggressive.mapesdhs - Monday, October 8, 2018 - link
Correct, but people are going to argue the toss anyway in order to defend their purchasing decisions.megadirk - Monday, October 8, 2018 - link
That is true, but at what resolution with what framerate? If you compare what you need to build in order to lock 1080p at 60fps it is actually quite cheap comparably, especially if adjusted for inflation. And I don't know about the rest of you but my Q6600 build was only shooting for 60fps at 900p.ToTTenTranz - Monday, October 8, 2018 - link
The Q6600 wasn't top of the line 10 years ago, not even when it released in 2007.There was the QX6700 for $1000, with an unlocked multiplier and higher clocks.
Neither was the 9800 GTX because you had the 9800GX2 (SLI was pretty big back then).
Sure, building a system with those two wouldn't cost $3000, but it wouldn't cost $800 either.
just4U - Monday, October 8, 2018 - link
I dunno guys.. I've been building computers for 20 years and aside from the bump this generation... I think your memories are a little off. maybe 20-30 less for my Q6600 over a 4790k a 2600k a 8700k.. (etc..) there was a bump with Q8400s and as far as GPU's... Cad pricing...Amd has always liked to play in the 400 range for high mid range but Nvidia has always flirted with higher price.. I remember paying almost 500 for a Geforce2 and 875 for a GeForce3. Still pricing wise if you wanted a high mid range machine with zero corners cut you were always looking in the $1800 range (guessing that's about 1300 American..) Pricing today still seems fairly consistent with that.
meownayze - Monday, October 8, 2018 - link
Just a refresher that not everything was cheap for the time frame you provided. 8800Ultra in 2007 was $829, the 8800GTX was $600. The Q6600 launched at $851 dropping to $531 that year. Before that you were paying $400-1000 for a dual core Athlon 64. It's just a different feeling back then because performance would double every couple years. I don't think the entry to pc gaming ever been this cheap. You can 1080p 60fps for really cheap these days. Back then there was more of a divide between enjoyable performance to just having a slide show.ziofil - Monday, October 8, 2018 - link
Also, screen resolution used to be less demanding.nathanddrews - Tuesday, October 9, 2018 - link
Not really - people using CRTs during that time could run circles around LCD resolutions and refresh rates. Until the more common gaming-centric LCDs (1440p 120Hz) arrived, LCD had been a major regression in display capability. I played an awful lot of UT99, TF, and CS at resolutions and refresh rates over 1080p60 on my CRTs over the years.mapesdhs - Wednesday, October 10, 2018 - link
I used 2048x1536 for a long time on a 22" Dell P1130 CRT, was great for Stalker, Far Cry 2, etc. When I finally switched (the Dell was losing its colour), it had to be IPS (the off-axis colour shifting of TN was just too annoying) and the lower vertical height of 1080p really grated my brain, so I bought a 1920x1200 24" IPS (HP LP2475W), which was pricey but good, though I did miss the extra height from 1536 which was handy for documents, web pages, etc. I had looked at 2560x1600 (the standard "high end" gaming resolution of the day) but the prices were way too high. People forget, but back then the typical tested gaming resolutions were 1024x768, 1280x1024, 1600x1200 or 1920x1200, and 2560x1600. I think the first newer aspect ratio to come along was 1680x1050. How it's changed, nowadays almost everyone is used to screens with a wider aspect ratio, and a decent IPS 27" 1440p is only 200 UKP:https://www.cclonline.com/product/benq-gw2765he-mo...
Flying Aardvark - Wednesday, October 10, 2018 - link
You may have played at over 60hz, but you didn't play at a higher resolution than 1080P and especially not higher than 1080P with a refresh rate over 60hz. And the refresh rate isn't apples to apples. LCD refreshes per pixel.willis936 - Tuesday, October 9, 2018 - link
I paid less than $300 for my Q6600 in 2007.Hrel - Monday, October 8, 2018 - link
Moore's law is dead, computers are lasting WAY longer than ever before. These combine to shrink both the market and the margins.$1000 for a good computer is going to become a pipe dream.
abufrejoval - Tuesday, October 9, 2018 - link
If you are ready to aim just a little lower, I bought an Acer 17" notebook with a GTX1050ti and a 35Watt TCP non-HT i5 Kabel Lake for €800: Turned out to be quite a solid gaming machine even at 1920x1080.2080ti is about 4k, inference and a novel way to do more realistic graphics: Simply another ball game.
mapesdhs - Wednesday, October 10, 2018 - link
It's not remotely about 4K *and* realistic graphics, though; in that regard NVIDIA has very much pushed the focus back down to 1080p and sub-60Hz frame rates, because the tech just isn't fast enough for anything better. A lot of gamers who spend that kind of money are far more likely to just turn off the relevant features in exchange for better frame rates, especially if they're gaming with 1440p/4K and/or high-frequency monitors (the latter being something that's very hard to pull back from if one has gotten used to it; the brain's vision system gets retrained over time, New Scientist had an article about this last year). As for more realistic, visually speaking yes (though NVIDIA's demos were grud awful and often highly misleading), but not a single thing they talked about in any way implied improved game worlds with respect to world/object functionality, which IMO is far more important.
For example, many people moaned about the changed water effect in the Spiderman game, but my first thought was, is it wet? Can I use it in the manner implied by its appearance? Drink it? Use it to fill a bottle? Put out a fire? Ditto the terrible-looking fire effect in the NVIDIA demo: is it hot? Does it radiate heat? Does it make the gun barrel hot to touch afterwards? Does it melt the glass of the carriage nearby if too close? (no) Similar questions about the reflective glass: can I break it, use a shard as a weapon? Do the pieces persist, acting as an audible warning if enemies step on them? Can I throw a grenade into the carriage, make the glass a weapon against passing enemies via the explosion? Can I cut down trees to block a road? Dig a pit as a trap? Remove a door? All of these questions relate to interactivity with the world, and it's this which makes for a genuinely more engaging and immersive environment, not pure visual realism. Our environment is interesting because of the properties of materials and objects around us and how they can interact, with us and with each other. If anything, the more visually realistic a game world appears to be, the more jarring it is when an object cannot be used in the manner implied by how it looks. At least years ago a fake door was kinda obvious, one could see the way the texture was part of the frame surround, but these days it's all too easy to see something, assume one can interact with it, only to then be disappointed. Nope, can't open that door, can't pick up that object (or if you can it's just a generic object, it has no real functionality), the ground surface cannot be changed (oh how I miss the original Red Faction), substances are just visual effects, they don't affect the world in a meaningful manner, etc.
I want functional game worlds; if they look great as well then that's icing on the cake, but less important overall. Elite Dangerous can look amazing, but it lacks functional depth, whereas Subnautica has far more basic visuals but is functionally fascinating and very engaging indeed.
Put bluntly, Turing isn't a gaming technology at all, it's a table scrap spinoff from Enterprise compute, packaged in a manner designed to make gamers believe there's this great New Thing they never knew they always wanted, launched using highly deceptive PR, with demos that looked generally terrible anyway (I mean really, the fire effect was just a bunch of sprites, the box scene was b.s. from start to finish, etc.) The demos made out like without RTX these effects would look awful, but games already do things that look really good without RTX, in many cases better than the way the demos looked. Their focus on such irrelevance as the reflections in a soldier's eye was also stupid; who cares? Just make the bullets go into the bad guy! Too damn busy to think about anything else; oh wow, look at that amazing raytraced effect, it looks just - oh dear, I got shot.
NVIDIA helped create the market for high-res, high-frequency gaming and VR, yet now they've about-faced and are trying to drop back to 1080p because their baseline Enterprise tech isn't pushing for higher rasterisation anymore. These are not gaming cards now, they're spinoffs from compute.
https://www.youtube.com/watch?v=PkeKx-L_E-o
As I've posted before, a teapot can look as amazing as you like, with all sorts of fancy reflective surface effects & suchlike, but unless I can use it to make tea then it isn't a teapot, a concept explored in more detail in the following old but now even more relevant article:
http://www.sgidepot.co.uk/reflections.txt
All of this, along with the crazy pricing, RAM capacity stagnation and other issues (what's the point of preorders if they're delivered late?) shows very clearly where this industry is headed. I think gamers would be far better served if devs just made better, more engaging, more interactive and more functional game worlds. Sure there'll always be a market for the FPS titles where people largely couldn't give a hoot about such things, but then again, Crysis was so impressive at launch partly because it was also functionally interactive in a manner that hadn't been done before.
Ian.
eddman - Wednesday, October 10, 2018 - link
What you are asking for has nothing to do with graphics and video cards though. These are game logic related functionalities that run on the CPU. I don't think CPUs are yet powerful enough to run something that is basically a reality simulator.
Besides, coding such games would require so much money and time that I don't think any game studio would even want to do it. They want to create a game and not a simulator.
cjl - Monday, October 8, 2018 - link
A QX6800 and 8800 Ultra would set you back almost $2000 around that time.
StevoLincolnite - Monday, October 8, 2018 - link
I spent $400 AUD on my 3930K. It still handles every game I throw at it. Don't see a reason to upgrade yet to be honest... Not until prices are lower.
mapesdhs - Wednesday, October 10, 2018 - link
Do you run it oc'd at all? If you need any comparison data to modern reviews, I've tested a number of 3930Ks and other X79 CPUs (for Firefox, set Page Style to No Style from the View menu):
http://www.sgidepot.co.uk/misc/tests-jj.txt
Other data here:
http://www.sgidepot.co.uk/sgi.html#PC
mitchel09 - Monday, October 8, 2018 - link
The i9 and 2080 Ti are categories of product that just didn't exist before - they are basically prosumer level, used by the ultra-rich or for productivity/work tasks.
It would be more realistic to compare the i7-9700K and 2080, which are the mainstream high-end parts.
eddman - Monday, October 8, 2018 - link
The i9 9900K is an excuse to justify not offering HT and full cache with the i7 anymore, and just because some people want to think the 2080 Ti is a new category doesn't make it so, unless Nvidia explicitly says so.
Da W - Monday, October 8, 2018 - link
It's always been like that. The top of the top of the line always cost $3000. Thing is, you can still play with more than sufficient quality with a $200 CPU and a 1070.
abufrejoval - Monday, October 8, 2018 - link
Hmm, I paid $4000 for my first Apple ][ (two floppy disk drives), $20,000 for my first 80286 (640K DRAM, EGA graphics, 20MB disk drive). I chose not to drive a Porsche or Mercedes but to own a computer; today that choice is much easier.
wr3zzz - Monday, October 8, 2018 - link
Another difference is that back then you spent $3000 on a PC just so you could run Windows smoothly. The $3000 that the RTX 2080 Ti commands does not get you as much delta in user experience, not even from a couple of generations ago, let alone the 80s and 90s.
maroon1 - Monday, October 8, 2018 - link
The RTX 2080 Ti offers better performance than the Titan Xp at the same price. So, if you are willing to spend that much, you are still getting better performance than what was before at the same price. Otherwise, you can just buy a regular RTX 2080 or GTX 1080 Ti and get great performance. It is not like you need an RTX 2080 Ti to play games.
eddman - Monday, October 8, 2018 - link
False reasoning. The 2080 Ti is not the replacement for the Titan: 1) it is branded 2080 Ti, 2) it doesn't even have the fully enabled TU102 chip, meaning a Turing-based Titan is incoming. People really seem to like defending high prices.
bji - Monday, October 8, 2018 - link
So you're saying that if you own a Titan Xp, you are not allowed to upgrade it to a 2080 Ti, because the 2080 Ti is "not the replacement for titan"? I think the ridiculousness of that argument is self evident.
eddman - Tuesday, October 9, 2018 - link
No, I didn't. You can upgrade to whatever you want, but the 2080 Ti is still not the successor to the Titan. I think you should read comments better before jumping to conclusions.
bji - Tuesday, October 9, 2018 - link
Actually, that's pretty much exactly what you said.
To summarize: O.P. said that the RTX 2080 Ti is better performance than a Titan Xp for the same price, so if you were comfortable spending Titan Xp costs for a video card, you can upgrade to a 2080 Ti for the same price and better performance. Or if you are not willing to spend Titan Xp price you can buy a 2080 and get great performance for games.
You responded with some drivel about how the 2080 Ti is "not a replacement for the titan" where I guess in your mind a "replacement" is only a "replacement" if it meets some specific product categorization criteria including "having a fully enabled TU102" chip and having a specific branding name. Which implies that you don't think that a 2080 Ti is a valid replacement for a Titan Xp for the same price, because otherwise, what is the point of your post at all?
So your post makes no sense, which I pointed out.
Who defines what is a "successor"? You? And who even cares what card is a successor to what card? The O.P. didn't even mention 'successor' as an important criteria in deciding whether or not to upgrade a Titan Xp to a 2080 Ti. You brought that drivel in.
Let's see what happens when you take your used Honda Accord in for a trade ...
Dealer: You are in luck! We have just gotten in a new shipment of this year's Lexus sedan that is better than your car in every way! And it costs the same as your Accord did new!
You: Are you kidding me? That's not a proper 'successor' to the Honda Accord, and everyone knows that. It doesn't even have the same model name! What kind of trick are you trying to pull here? Bring me the Accord Mark II please or nothing at all.
FreckledTrout - Tuesday, October 9, 2018 - link
@bji, honestly you are trolling. If you look back at generations, the next gen Ti beats the prior gen Titan. If people were willing to pay for every percent increase in performance over Nvidia's 20 years you would easily be paying $20K per gaming card. A generational performance increase should not demand more dollars except for a small change for inflation. The pricing of the RTX 2080 Ti is a complete rip off and everyone will agree in a year's time, even you bji.
bji - Tuesday, October 9, 2018 - link
It's stupid to dismiss a post as 'trolling' just because you don't agree with it. Please look up what 'trolling' actually means before making such baseless accusations. Then please remember what you read before wasting anyone's time with a similar post. Thank you.
I do think it's amusing how many posters think that companies should create products and pricing based on their own personal wishes and desires. It's generally funny to watch people get all bent out of shape because their own personal belief system about products doesn't actually correspond to reality, and then rather than adjusting their belief system, claim that the companies are wrong to make products at prices that they don't like.
I suspect that making a new generation of video card is an intensely expensive and difficult process from an R&D perspective, and thus priced accordingly. But I guess a lot of basement dwellers think that everything should just be given to them for cheap because ... well, I don't know. Just because, I guess.
eddman - Wednesday, October 10, 2018 - link
"I suspect that making a new generation of video card is an intensely expensive and difficult process from an R&D perspective, and this priced accordingly."Those things factor in, yes, but if you think that companies never overprice their products when there is no competition, then you are extremely naive.
Reality? Overpricing is also a reality. It happens all the time and people are allowed to criticize it.
eddman - Tuesday, October 9, 2018 - link
Is it really not obvious what I meant by "replacement"? Nvidia replaces a titan card in the lineup with another titan card, not with a numbered geforce card, unless they specifically state that 2080 Ti is replacing titan xp and 2080 is replacing 1080 Ti, which they haven't.
Um, yes, branding exists for a reason. If they branded it "2080 Ti" then there is no reason to think it's not the successor to 1080 Ti.
Who defines? Nvidia. It's x080 Ti, therefore it's safe to assume it replaces the previous x080 Ti in the lineup. Simple as that.
Again, I have no problem with people upgrading from a titan to 2080 Ti. It's their money. It still doesn't mean the cards are in the same category.
mapesdhs - Wednesday, October 10, 2018 - link
Also, it's kinda nuts that the 2080 Ti didn't even increase the RAM over the 1080 Ti, while the 2080 actually goes down compared to the previous card it's currently competing with at the equivalent price point. The 2080 Ti should have had 16GB; I can only think NVIDIA didn't bother because they thought if they can push people back to accepting 1080p as the norm then it wouldn't need so much.
PopinFRESH007 - Sunday, October 14, 2018 - link
1080p is the norm still; we haven't pushed beyond that yet. The Steam hardware survey shows 1080p is still by far the dominant resolution, followed by a distant ~13.5% for 1366 x 768, which is most likely the dominant notebook resolution. Higher resolutions are still a tiny fraction of the market.
Primary Display Resolution
1366 x 768 13.51% -0.67%
1920 x 1080 62.06% +1.40%
2560 x 1080 0.96% +0.01%
2560 x 1440 3.59% -0.03%
3440 x 1440 0.43% +0.01%
3840 x 2160 1.32% -0.01%
Other 1.56% 0.00%
https://store.steampowered.com/hwsurvey/Steam-Hard...
Lolimaster - Monday, October 8, 2018 - link
Get the 2600X, upgrade to the 3700X next year if you will really need extra cores.
Lolimaster - Monday, October 8, 2018 - link
Why would you get an obsolete 2080 Ti which is mere months from true next-gen 7nm GPUs?
Shlong - Monday, October 8, 2018 - link
I remember my parents purchasing me an Acer system in 1993. It came with a 15" CRT monitor, Pentium 66 MHz, 8MB RAM, 500MB hard drive, 1MB video card, SoundBlaster-compatible sound card, 14.4k modem, 2x CD-ROM, and an HP 360dpi color inkjet printer, all for $3,500.
Ironchef3500 - Tuesday, October 9, 2018 - link
Totally agree, and this isn't good.
Kvaern1 - Monday, October 8, 2018 - link
How about no integrated wifi and proper RST support instead? Yea, guess not, greedy ¤"#"! corporate people.
crimsonson - Monday, October 8, 2018 - link
Pretty sure integrated wifi addresses more users than RST support. RST is a niche of a niche (and I'm a big RST fan and my X99 chipset never had an actual Intel version).
Kvaern1 - Monday, October 8, 2018 - link
You are right, but it's still about corporate greed. Integrated wifi adds stuff to the Intel BOM @ 60+% margins, and enabling full RST (at no cost) on their prosumer boards might lose them a few bigger sales.
kulareddy - Monday, October 8, 2018 - link
No hyperthreading and 3.6 base frequency? No thank you, I will wait for the next iteration.
mapesdhs - Monday, October 8, 2018 - link
An i7 without HT is just bizarre (the 9700K cannot be a direct upgrade over the 8700K, not when it's losing HT in the process). And in pure marketing terms, is it just me or does i9 just not sound or look as cool as i7? There's always been something better in the brain about 7, hence 007 for Bond (009 just sounds like dialing a wrong number in a phone box). Plus of course Intel is making the whole thing a naming mess by using i9 for HEDT parts as well. Oh dear...
ianmills - Monday, October 8, 2018 - link
Yeah it's funny. 2 more cores vs 6 hyperthreads, more cache, and a cheaper price. I guess both these chips are at performance parity, although the 8700K will definitely be faster in encoding. The 8700K may be faster than the 9700K all around. This is definitely a great opportunity for AMD.
gamingkingx - Tuesday, October 9, 2018 - link
The 8 cores without HT are going to be faster than the 6 cores with HT. HT does not double performance. At best it does 50% (synthetic bench only). It gets you maybe 10-20% realistically (less in some cases).
mapesdhs - Wednesday, October 10, 2018 - link
The point is that the whole segmentation thing muddles the market. Remember that for a normal user, 6 with HT shows up in Windows as 12 apparent cores. The cost difference also makes it confusing, and the gain is often a lot better than 20%, as was very clear back in the days of SB. More than that though, I don't care how much faster 8 cores is than 6 with HT when a 2700X is half the price. :D That's enough to cover the cost of an entire motherboard and more.
SaturnusDK - Wednesday, October 10, 2018 - link
The funny thing is we already have scores for the 9700K, so we don't even need to speculate on how the 8700K compares to the 9700K. See this HardOCP video of the Z390 MSI ACE MB https://youtu.be/1JgQxazSAGc . At 5:26 there's a test sheet provided by MSI running the 9700K on the MB in question. Interesting takeaway scores: Cinebench R15 scores 1488 points. Compare that to stock 8700K scores usually landing in the 1420 range, so 5% faster. Firestrike Extreme scores 13587 on the 9700K. Compare that to a stock 8700K typically scoring in the 19100s and you'll see the 9700K is 30% _slower_ than the 8700K in a typical gaming benchmark.
moozooh - Wednesday, October 10, 2018 - link
While I share the disappointment, I believe Intel has been assuming hyperthreads to have 0.33x the performance of a native core in a typical multithread task, and has spaced their i-family CPU steps in even increments (or close to even) based on that. The math works out and is very sensible.
Kaby Lake and prior: 2/4 (2.66) — 4/4 (4) — 4/8 (5.33). 1.33 "cores" per step.
Coffee Lake: 4/4 (4) — 6/6 (6) — 6/12 (8). 2 "cores" per step.
CL Refresh: 4/4 (4) — 6/6 (6) — 8/8 (8) — 8/16 (10.66). 2 "cores" per step between i3 and i7, 2.66 from i7 to i9. As close as it gets.
That said, the 9700K will certainly outperform the 8700K on tasks that care about the performance of threads more so than their raw number (e.g. actually fully loading the cores so there isn't much room for HT to take advantage of), as you'd be very unlikely to find HT adding more than 15–20% performance in such a scenario. Frequency-wise, where a stock 8700K would have 6 cores running at 4.3 GHz, a 9700K will have 8 cores at 4.6 GHz. I do believe this will prove to be an improvement in the vast majority of cases. Still, it will also be a marginal improvement on the order of 7–8% at best, and certainly not any sort of an impetus to upgrade. But then again Coffee Lake was kind of an outlier in that regard; between Sandy Bridge and Kaby Lake it had been the same 7–8% from one generation to another. So consider this to be more of a return to the status quo.
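A minimal sketch of that arithmetic, assuming (as the comment above does) that a hyperthread is worth roughly a third of a physical core; the 1/3 weighting is the commenter's estimate, not an Intel figure:
```python
# Effective-core weighting from the comment above: a hyperthread counted as
# roughly 1/3 of a physical core. Purely illustrative.
def effective_cores(cores: int, threads: int, ht_weight: float = 1 / 3) -> float:
    return cores + (threads - cores) * ht_weight

lineup = [("i3 (4/4)", 4, 4), ("i5 (6/6)", 6, 6),
          ("i7-9700K (8/8)", 8, 8), ("i9-9900K (8/16)", 8, 16)]
for name, cores, threads in lineup:
    print(f"{name}: {effective_cores(cores, threads):.2f} 'cores'")
# Prints 4.00, 6.00, 8.00, 10.67 - the near-even steps described above.
```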
Srikzquest - Monday, October 8, 2018 - link
Any hardware Spectre/Meltdown fixes? Like Whiskey Lake?
imaheadcase - Monday, October 8, 2018 - link
So you are that .0000000000000000000000000000000000001% of people in the world that worry about it?
R0H1T - Monday, October 8, 2018 - link
Is that why everyone in the enterprise sector was scrambling for cover at the start of the year? Or, you know, the people who complain about China or the NSA, do they also not care?
Drumsticks - Monday, October 8, 2018 - link
Not that I really agree with downplaying these completely, but these are consumer parts, and they aren't destined for any of those scenarios. Intel launched fixed parts for the enterprise already, too.
People do still have a reason for wanting fixes, though. A hardware-level fix is likely to lift performance up higher than doing the equivalent thing in software.
Srikzquest - Monday, October 8, 2018 - link
Correct, if these are hardware fixes, there's less effect on performance, and now, from the slides, I learnt there are 2 hardware fixes, which is partially good (something better than nothing).
Srikzquest - Monday, October 8, 2018 - link
I didn't mean to troll but it looks like you are hellbent on it. Have you been living under a rock for the past few months? My genuine question is whether there is any meaningful advantage other than the frequency increase to consider the new processors. As I am interested in purchasing a Core i5 and the article is mostly talking about the new i9 processor, I was asking something specific, as I can get the slightly cheaper 8th gen processor if there is no meaningful improvement.
Thank you R0H1T for giving the support and covering me...
schizoide - Monday, October 8, 2018 - link
Sure. You also get more cores at the high-end.
schizoide - Monday, October 8, 2018 - link
I care about a ~15% performance hit, yeah. You don't? Why are you posting on an enthusiast technology site, then? Did you get lost on your way to Instagram?
Kvaern1 - Monday, October 8, 2018 - link
What you care about is performance then, not Spectre/Meltdown "fixes". It's not the same.
schizoide - Monday, October 8, 2018 - link
They are functionally *exactly* the same.
Kvaern1 - Tuesday, October 9, 2018 - link
They are absolutely not the same. Speed is what you're asking for and speed does not fix the bug. Period.
CheapSushi - Monday, October 8, 2018 - link
They are the same. The patches on the OS and microcode have affected performance for all of us on Intel systems because they're trying to mitigate it for everyone with the chip, not just enterprise.
SirMaster - Monday, October 8, 2018 - link
Well, you could choose to disable the patches.
schizoide - Monday, October 8, 2018 - link
Yes, and in certain controlled enterprise non-virtualized environments that makes sense to do. But less so on the desktop.
Gastec - Wednesday, October 10, 2018 - link
You literally just made the human population of Earth larger than the number of known stars in the Universe :)
Srikzquest - Monday, October 8, 2018 - link
Or a better question is: what are the changes for the Core i5 and below processors other than the frequency change? Security fixes, or full cache available for all cores instead of per core?
schizoide - Monday, October 8, 2018 - link
Here's what I care about: Did they fix Meltdown? Are there any new Spectre mitigations?
schizoide - Monday, October 8, 2018 - link
Well to be fair I also care about IPC a great deal, but of course that'll have to wait on the embargo.
R0H1T - Monday, October 8, 2018 - link
Only a couple of "hardware" fixes, according to the slides, namely Meltdown v3 & L1TF, while the rest of the possible mitigations will be enabled via microcode &/or software updates.
schizoide - Monday, October 8, 2018 - link
Intel was notified about Meltdown and Spectre on (or possibly before) February 1, 2017.
So seriously, what the hell, Intel. I get that Spectre is a huge problem inherent in the design of modern CPUs, but why isn't Meltdown fixed yet? You've had almost 2 YEARS.
R0H1T - Monday, October 8, 2018 - link
Meltdown will also require a fair bit of redesign of their existing uarch. Now as much as I'd like "hardware" fixes to just appear ~ they will take some time, considering a botched job would be much worse in terms of security & performance.
schizoide - Monday, October 8, 2018 - link
Two years, man. They had two years.
Namisecond - Monday, October 8, 2018 - link
How long does it take to design a new microarchitecture? About 5 years.
duploxxx - Monday, October 8, 2018 - link
Welcome to the madness of reduced turbo speed depending on the used core count.... Intel trying to push the nice 5GHz uber benchmark score but a CPU failing to deliver anything more than the previous generation due to the mixed load characteristics of a daily system (OS, AV, etc.).
Got to love the fact they called it i9, reduced cache and removed some HT... really, Intel, reducing the CPU stack count just for consumer concerns??? You have about 46 SKUs in the server space... how about that?
Kvaern1 - Monday, October 8, 2018 - link
It's not reduced turbo speed based on core count. It's increased turbo speed based on core count...
duploxxx - Tuesday, October 9, 2018 - link
Nah, it's reduced. If you really believe that you are only running 1 active core at a time, go figure.... in many cases there are several cores active at the same time and you will never reach any advertised speed besides some fancy benchmarks running on an idle system with nothing installed. The concept of why chips turbo in the early designs has been taken over by fancy marketing slides.
mapesdhs - Wednesday, October 10, 2018 - link
I think you guys are on the same page, just looking at it from opposite directions, i.e. lower turbo levels as the thread load goes up, vs. higher turbo levels as the thread load goes down. Same thing really.
Either way though, still a valid point: the PR focuses on 5GHz but most of the time it won't be running that way. I bought a 10-core Xeon which in theory can turbo to 3.6, but in a real system not once have I ever seen it go above 3.1. Even more ridiculous is the way people are getting so hyped about the 9900K, yet for some time now the max OCs one can achieve are really not that much higher than the built-in max turbo. In other words, OCing has become boring, it really doesn't achieve much anymore, not like the days of SB when a 5GHz OC (utterly trivial with a 2700K) was a huge bump over the max turbo of 3.9.
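To make that concrete, per-active-core turbo is just a table lookup; the rough sketch below uses the bin values commonly reported for the 9900K (treat them as assumptions rather than spec-sheet data):
```python
# Illustrative per-active-core turbo table (values as commonly reported for
# the i9-9900K; assumptions, not official spec). The headline 5.0 GHz only
# applies while no more than two cores are loaded.
TURBO_GHZ = {1: 5.0, 2: 5.0, 3: 4.8, 4: 4.8, 5: 4.7, 6: 4.7, 7: 4.7, 8: 4.7}

def turbo_ceiling(active_cores: int) -> float:
    """Advertised turbo limit for the given number of busy cores."""
    return TURBO_GHZ[min(max(active_cores, 1), 8)]

# A desktop with the OS, AV scanner and a browser keeping several cores busy:
print(turbo_ceiling(2), turbo_ceiling(4), turbo_ceiling(8))  # 5.0 4.8 4.7
```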
crimsonson - Monday, October 8, 2018 - link
"Welcome to the madness of reduced turbo speed depending on the used core count...."Wasn't this always the case outside of modified BIOS/EFI?
Meaker10 - Tuesday, October 9, 2018 - link
*Whoosh*
There goes the entire concept of why chips turbo, flying straight over your head.
Try thinking about what you just said.
Adm_SkyWalker - Monday, October 8, 2018 - link
I have my 2080 Ti's preordered. I was going to wait another year or so before upgrading my i7-6950X. Since this is another refresh, I'm debating if I should just get it or still wait.
mapesdhs - Monday, October 8, 2018 - link
Oh well, it's your money I guess. ;D
benedict - Tuesday, October 9, 2018 - link
A fool and his money are easily parted.
eastcoast_pete - Monday, October 8, 2018 - link
Thanks Ian! Did Intel or anybody else there comment on availability of both the new CPUs and new chipsets in Q4 2018 and Q1 2019? Are they shipping in quantity? The new i7s look interesting, but with recent manufacturing problems at Intel and ensuing price increases at retail, AMD's Ryzen might still be "it" for many who want eight cores.
mapesdhs - Wednesday, October 10, 2018 - link
For gaming, with the 2700X being so much cheaper, the price difference is enough to cover the cost of a better GPU anyway. It's really only more complicated if one is using 1080p in certain specific scenarios. For anyone else, at 1440p or higher, a better GPU with a cheaper 2700X is far more sensible.
Falloutboy - Monday, October 8, 2018 - link
Wonder how long AMD will wait to announce the 2800 now. And will it just be the 10 cores rumored, or a faster 8 core chip?
mapesdhs - Monday, October 8, 2018 - link
What we'll see as usual is Intel winning plenty of benchmarks with the 9900K, especially gaming, but at a cost which makes no sense. The kind of people who care about the upper tier of mainstream tech tend to be those gaming at 1440p or higher, in which case the IPC/clock advantage is less relevant anyway, Ryzen holds its own there, and offers good productivity performance at lower cost. When the reviews hit, I'm sure we'll see the usual tech site responses where they fawn over enormous frame rates at 1080p which only a minority of gamers actually care about (which is fine for those that do, but it's just annoying when most mainstream gamers don't, doubly so when reviews pair the tested CPU with the best GPU available - the rationale is understandable, but it kinda misses the point for those faced with real world upgrade decisions).
just4U - Monday, October 8, 2018 - link
Unless it's bringing something new to the table it's a waste to bother with a 2800/X; the 2700/X processors are already pretty friggin awesome with decent coolers.. and a 2800 won't be the incentive people would jump to pull the trigger on a purchase.. Unless maybe it came with a water cooling kit.. maybe.. Doubt it tho.. Besides, anything more expensive on their part and you'd be eyeing those Threadrippers instead.
mapesdhs - Wednesday, October 10, 2018 - link
I've been pondering the 2920X, looking quite decent.
ThaSpacePope - Monday, October 8, 2018 - link
Well I pre-ordered the i7-9700K. Came here, and was disappointed to read there will be lower L3 cache than the i7-9900K. Hopefully it won't make that much of a difference, as most of what I have seen shows that games and apps rarely utilize 16 threads and 8 is the sweet spot. Also sad that no reviews will be available until after my pre-order ships, so no time to change my mind.
GreenReaper - Monday, October 8, 2018 - link
You could always cancel your pre-order and wait until the reviews come in... not like they'll run out.
As a practical matter you will have saved 23.4% of the price for a 25% reduction in cache and probably about the same in maximum CPU performance - but if your task fits within cache and eight threads, more like a 2% reduction. As you say, most games currently don't use more than eight cores.
In other software HyperThreading impact is mixed, in some cases up to 30-40% benefit - but *only* if you're using the CPU full-throttle already, and in a parallel case where cache is not so significant: https://www.phoronix.com/scan.php?page=article&...
If you're doing two different tasks on the same core, HT can actually make things slower due to cache thrashing and contention for the un-shared portions of the execution pipeline.
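For what it's worth, the 23.4% / 25% figures above work out as follows, assuming the commonly quoted $374 / $488 tray prices and the 12 MB / 16 MB L3 sizes (the prices are my assumption, not something stated in the article):
```python
# Price-vs-cache arithmetic behind the comment above. Tray prices are assumed
# launch figures: $374 for the i7-9700K, $488 for the i9-9900K.
price_9700k, price_9900k = 374, 488
l3_9700k_mb, l3_9900k_mb = 12, 16

price_saving = 1 - price_9700k / price_9900k   # ~23.4% cheaper
cache_cut = 1 - l3_9700k_mb / l3_9900k_mb      # 25% less L3
print(f"{price_saving:.1%} cheaper for {cache_cut:.0%} less L3")

# Per-thread cache is another way to slice it:
print(l3_9700k_mb / 8, "MB/thread vs", l3_9900k_mb / 16, "MB/thread")  # 1.5 vs 1.0
```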
Great_Scott - Monday, October 8, 2018 - link
What a product segmentation hell. Typical of Intel.The really funny part here is that the Core i7-8700K at 6/12 with HT is arguably a faster processor for multitasking than the new Core i7-9700K 8/8 with less cache. GG guys.
shabby - Monday, October 8, 2018 - link
Shhhh maybe no one will notice!
RaistlinZ - Monday, October 8, 2018 - link
The 8700K also has more L3 cache per core than the 9700K. I don't think the 9700K is a good value.
Da W - Monday, October 8, 2018 - link
And in true Intel fashion, they place an 8/16 CPU on top and disable hyperthreading for the 8/8 just below!
WTF Intel? Core i7 was always about hyperthreading. You are aware there is an AMD now that has full HT from bottom to top, offers more cores for cheaper and lags only in single-threaded games?
just4U - Monday, October 8, 2018 - link
Could be worse.. they could have come out with a new socket for these chips. That would really piss me off.
mapesdhs - Wednesday, October 10, 2018 - link
Give them time. ;D
wow&wow - Monday, October 8, 2018 - link
“Foreshadow” security problem fixed? Otherwise, just more faulty chips with “Spec Violation Inside” and “Security Risk Inside”!
Dug - Monday, October 8, 2018 - link
Glad to see Thunderbolt 3 on the ASRock ITX. I'll definitely be picking this up.
just4U - Monday, October 8, 2018 - link
It's nice to see competition in the CPU market again.. We've waited a long time for an 8-core mainstream part from Intel and it looks like the Ryzen line is pushing them to move to 6 and then 8 to compete. I actually thought they'd do one with a fancy cooler too, but no.. Still, I like this as the new norm.. (not thrilled with the price on the i9 tho.. going to have to take a pass on this gen; besides, I already have an 8700K and a Ryzen 2700X :)
mapesdhs - Wednesday, October 10, 2018 - link
People forget that the 3930K was actually an 8 core, but with 2 cores disabled. Intel could have released an 8 core a loooong time ago, but they just didn't have to (back then, AMD could barely compete with the 2500K). With IB, mainstream stagnation began.
abufrejoval - Monday, October 8, 2018 - link
I am surprised they keep the iGPU at the high-end. In a notebook, that makes sense, because a zero Watt dGPU simply allows longer battery life away from gaming. In a desktop, I cannot see it selling, while the surface area spent on the iGPU should have easily yielded two extra cores, even four.
In fact I've regarded it as the stroke of 'genius' of AMD to trade the iGPU nobody used on the Ryzen high-end for 2-4 extra cores 'free' vs. Intel, payback for how Intel used the free iGPU to deprive Nvidia and ATI of the volume revenues they had come to rely on in the times of Core 2.
Hyper-threading: It does cost heat. It does create hot-spots on the chip, and disabling it gives you another round at binning better etc. There are plenty of technical reasons to disable it on any chip less than perfect, before resorting to side channel attack speculations: I can't see them being so important in the gamer market the 8-cores address, and I thought they fixed the issues regardless of HT (BTW: still no word on the Control Flow Integrity (CFI) extensions?)
I guess another reason is that life as a games engine developer isn't all that easy already. Creating a better user experience by seamlessly scaling thousands of GPU cores and now a handful or a dozen CPU cores isn't easy when you also have dynamic frequency scaling, TDP or downright physical cooling limits, and increasingly people playing on laptops. And now there is NUMA and CPU cores which don't have direct access to memory etc., etc.
Eliminating HT from the already rather complex field of CPU options may also play well with the game engine designers, who are currently trying to make use of the additional real CPU cores the new Ryzen and Intel desktops deliver. When there is good reason to believe that the extra wattage required by HT will rather soon slow down turbos on cooling-constrained CPUs anyway, there is nothing to gain and only complexity to pay.
Just my 2 cents of opinion...
IntelUser2000 - Tuesday, October 9, 2018 - link
The reason they have an iGPU on the 9900K is because that's the biggest base die, and every smaller chip is made from cutting the thing.
It will cost MORE to have the iGPU disabled, not to mention many people will appreciate having a backup GPU when they are waiting for a replacement to arrive, or the discrete GPU fails to load. Also, some don't require the discrete GPU.
eddman - Tuesday, October 9, 2018 - link
"Hyper-threading: It does cost heat. It does create hot-spots on the chip, disabling it gives you another round at binning better etc."They could've branded the best chips i7 9700k as usual without disabling HT and reducing cache and then used the chips that didn't make it to the top for 9700 non-k with reduced clocks to keep the heat at bay.
Why do that though when they can artificially create a new category, price it higher, disable features on the lower category, and make easy money.
abufrejoval - Monday, October 8, 2018 - link
I'll freely admit that defending Intel doesn't come naturally to me... But I feel that the heat they get on the TIM isn't wholly deserved.
Please consider that thermal expansion is very much a challenge when you go and cycle between 100 and 1 Watts of power potentially several times a second. That is already quite hard on the solder ball grid array that points toward the motherboard. When you add rigidity on the other side using solders rather than pastes, the stress on the die can only get worse.
Intel has tended to keep turbo clocks on high-core-count CPUs rather low, not because those cores weren't capable of achieving high clocks for binning reasons. I believe they went conservative because of the physical stress this would cause within the chip and against the top and bottom connections.
When they binned high-end parts for higher frequency, they disabled intermediate cores to create "dark-silicon" islands to carefully spread the heat horizontally first and reduce the stress vertically.
These days they are so eager to please, they offer extreme clocks *and* soldered heat spreaders, but I cannot help but think that they are sacrificing longevity and reliability with it.
vlado08 - Monday, October 8, 2018 - link
But by reducing the thermal resistance, less heat will accumulate in the die, and less temperature means less expansion. Also, the lower the temperature, the longer the life.
Tyler_Durden_83 - Monday, October 8, 2018 - link
I wanna see a Core i7 9700K vs a Nehalem Core i7 to see how much nine generations have really brought to the table.
mapesdhs - Wednesday, October 10, 2018 - link
GamersNexus and other channels have done some tests on this; you can compare their results to the upcoming reviews.
Gunbuster - Monday, October 8, 2018 - link
Looking forward to Intel burning tons of cash on giveaways/promotion for these like they did the last round of chips...
Samus - Monday, October 8, 2018 - link
What an underwhelming product announcement. So a year later we have:
Z390 is Z370 with better USB
9000 series CPUs are basically the same as 8000 series CPUs, just more expensive.
I'm willing to bet an 8000 series i7 beats a 9000 series i7 in a number of benchmarks due to the overall thread advantage the i7-8700K has (6/12) over the physical 8 cores in the i7-9700K.
Hyperthreading obviously isn't as good as physical cores, but replacing it entirely with 2 additional physical cores and charging more for the resulting product likely isn't going to make it a good dollar-for-dollar improvement over the 8th gen.
Lolimaster - Monday, October 8, 2018 - link
9900K
A 10% faster 2700X for 50%+ the price, yay.
SaturnusDK - Tuesday, October 9, 2018 - link
9900K price is double the 2700X.
mapesdhs - Wednesday, October 10, 2018 - link
Absolute nonsense, or did Principled Technologies pay you? :D
https://www.youtube.com/watch?v=D1mJMI_uaa8
SaturnusDK - Wednesday, October 10, 2018 - link
He said 10% faster, not 50% as Intel claims. 10% faster in most games, and on par with the 2700X in professional workloads, seems fairly reasonable to expect. At double the retail price of the 2700X, one would have to be completely insane to buy a 9900K though. For any reason.
bigi - Monday, October 8, 2018 - link
This architecture has been a dead horse for a while. It just can't go any further really. Very much like gasoline engines.
Samus - Tuesday, October 9, 2018 - link
I don't know about that. Mazda is actually going to market a compression-ignition gasoline engine for the 2020MY. It increases gasoline engine efficiency 30%, the most significant jump in efficiency in the history of petrol engine design, by compressing the fuel much like a diesel engine does, to the point of combustion. It will still use spark plugs during cold starts when the cylinder chamber isn't warm enough to ignite the fuel under pressure alone.
That said, the analogy isn't exactly suitable for the Core architecture, which literally has run out of steam.
master381 - Monday, October 8, 2018 - link
How do you get the data for the turbo speeds at each number of cores active? Is it online somewhere besides this article?
Thanks.
GreenReaper - Monday, October 8, 2018 - link
Ask Intel? If you *have* a CPU you can run turbostat from the linux-cpupower package on Debian/Ubuntu.
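If installing turbostat is a hassle, a quick-and-dirty sketch like the one below (Linux only, polling /proc/cpuinfo) will also show per-core clocks; it's just an instantaneous sample, nothing as rigorous as turbostat:
```python
# Poll the per-core "cpu MHz" readings from /proc/cpuinfo (Linux only).
# Rough sanity check of turbo behaviour, not a substitute for turbostat.
import re
import time

def core_mhz():
    with open("/proc/cpuinfo") as f:
        return [float(x) for x in re.findall(r"cpu MHz\s*:\s*([\d.]+)", f.read())]

for _ in range(5):
    print(" ".join(f"{mhz:6.0f}" for mhz in core_mhz()))
    time.sleep(1)
```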
master381 - Tuesday, October 9, 2018 - link
Thanks. I was hoping there's a spec from Intel itself that's publicly available already. The linux package is useful to know about.
boozed - Monday, October 8, 2018 - link
"If you believe everything the motherboard manufacturers tell me, most of them have been ready for this release for several months, hence why we’re going to see about 55+ new models of motherboard hit the market over the next few weeks."https://brians.wsu.edu/2016/05/19/hence-why/
mapesdhs - Wednesday, October 10, 2018 - link
Reminds me of the way people confuse less/fewer. :D
https://www.youtube.com/watch?v=QcnS95yQ2do
Darcey R. Epperly - Monday, October 8, 2018 - link
No Spectre or Meltdown fixes? They are releasing unsafe CPUs. I'm glad Intel is not producing cars.
SanX - Tuesday, October 9, 2018 - link
Same maximum 64GB RAM limit ripoff to push everyone to buy the new processor every few years?
NAM27 - Tuesday, October 9, 2018 - link
Please compare the 8th gen i7/i5 to the 9th gen i5/i7/i9 for performance on compression tasks and the other common workloads you do comparisons on with scripts.
I REALLY want to see some heat comparisons of 8th vs 9th gen since they have it soldered. Does anyone do air cooling for these or just liquid? Do any of these CPUs come with heatsinks? That'd be telling some of how Intel thinks they can perform on air cooling.
I'd also like to see some motherboard comparisons: grab low-end and mid/high-end boards from manufacturers like Asus, ASRock, etc.
Also FPS on games for 8th vs 9th gen. PUBG, CSgo, etc.
Thank you!
PS: people need to chill about the Spectre/Meltdown fixes. They had 2 hardware fixes in the 9th gen CPUs, the others are fixes w/ microcode, firmware or software. What's the problem?
Cud0s - Tuesday, October 9, 2018 - link
V1 of Spectre is not solved in hardware. The software fix causes a 30-40% slowdown in some applications.
lashek37 - Tuesday, October 9, 2018 - link
Famed overclocker Splave pushed his Core i9-9900K to 6.9 GHz, and to 7.1 on some samples.
SaturnusDK - Tuesday, October 9, 2018 - link
The current 8700K OC record is 7.4GHz, so the 9900K isn't a very good overclocker. Not that either is relevant in any way.
Achtung_BG - Tuesday, October 9, 2018 - link
Bad Intel:
https://youtu.be/6bD9EgyKYkU
AGS3 - Tuesday, October 9, 2018 - link
I'm still confused. I have been an Intel fan for decades but AMD has made a big impact on the market, so I have to consider them as an option for my next HEDT build for 4K video editing and some compositing/graphics work using DaVinci Resolve. Budget is limited.
I was going for the i7-8700K but decided to wait for the 8-core 9900K.
Planned build is: Asus Z390 motherboard, i9-9900K, 32GB DDR4-3200, 500GB M.2 system drive, 1TB SSD, 4x HDDs in RAID, plus a GTX 1080 Ti GPU. Water cooled, so hopefully close to 5GHz on all cores. I'm not a gamer and can't wait till 2020.
1) In reality, 4 cores is enough for most single-user apps. Moore's Law died and silicon CPU technology hit the wall many years ago and won't exceed 4-5GHz in the foreseeable future. AMD has confused the consumer market with their hype: "If 4 cores is good - 8, 16, or 32 must be better". Benchmarks confuse the issue as they are designed to max load all cores to score - which is far from the single-user real world.
2) Intel beats AMD on clock speed (Ryzens struggle to exceed 4GHz), which is important to me.
3) AMD beats Coffee Lake on PCIe lanes. Intel states 16 lanes but AnandTech here shows 24. Why? The 8-core Ryzen 1900X has a massive 64 lanes. How many do I need if I expand to 2x PCIe M.2 SSDs and 2x 1080 Ti GPUs, and maybe add TB3 external storage later? What sort of desktop uses 64 lanes?
Price - 9900K $530 vs 1900X $336 - Ryzen 1920X 12 core $499
Any thoughts/advice appreciated. Thanks
4) Ryzen supports quad channel memory plus ECC vs Intel dual channel and no ECC. How does quad channel memory affect performance and is ECC needed?
evernessince - Tuesday, October 9, 2018 - link
CPUs didn't hit a brick wall; Intel simply had a monopoly and exploited it for maximum profit. We've gotten more progress in the CPU space in the last 2 years than we did in the previous 8 before that.
Intel's clock speed figures are only for single core turbo. The all-core turbo of the 8700K, for example, is only 4.3 GHz compared to Ryzen's all-core turbo of 4.0 GHz. This difference is negligible. The real advantage of the Intel processor is overclocking, but for professional use such as what you want to do, I would highly recommend against that, as even slight instability can cause a loss or corruption of work.
As for PCIe lanes, given that you want 2 PCIe SSDs, 2 1080 Ti GPUs, and TB3 external storage, you definitely need the 64 lanes that the Threadripper platform offers. You simply cannot run that setup on the 24 lanes that Intel offers. Two GPUs alone will saturate that, let alone your PCIe SSDs, external storage, and your M.2 boot SSD which takes x4.
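A back-of-the-envelope lane budget for that build makes the point; the per-device widths below are the usual electrical link widths and are assumptions for illustration, not a spec sheet:
```python
# Rough PCIe lane budget for the build discussed above (widths assumed).
devices = {
    "GPU #1": 16,
    "GPU #2": 16,
    "NVMe SSD #1": 4,
    "NVMe SSD #2": 4,
    "Thunderbolt 3 controller": 4,
}
wanted = sum(devices.values())
print(f"{wanted} lanes wanted")  # 44
# vs ~16 CPU lanes (plus chipset lanes behind the DMI link) on the mainstream
# platform, vs 64 CPU lanes on Threadripper.
```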
Gastec - Wednesday, October 10, 2018 - link
MAXIMUM PROFIT! But can it play Crysis? :)
Breit - Tuesday, October 9, 2018 - link
Technically, there was an 8-core chip for the X79 platform that was released in Q3/13. Since it was running on X79 I would consider it "consumer" as well.
This chip was the Xeon E5-1680 v2 (Ivy Bridge).
mapesdhs - Wednesday, October 10, 2018 - link
The 3930K was an 8-core as well, but with 2 cores disabled. Intel didn't need to offer anything more back then.
thejoe - Tuesday, October 9, 2018 - link
People will pay even more for this CPU that is still broken, Spectre/Meltdown-wise.
We bought before but did not know they were selling us a CPU that had security problems. To me it takes some nerve to build a broken CPU after finding out about it and then charge even more for it. How does Intel get away with this? They wouldn't if people would stop buying them till they fix it. You would not pay full price for a damaged car, and most places discount stuff like that, but no, Intel charges more and is even giving you less.
Gratin - Tuesday, October 9, 2018 - link
Paul's Hardware is posting a review of the MSI Z390 MEG Godlike: https://youtu.be/YF-aIboiuSs
Check at 3:20, there are some benchmark results for the i7-9700K.
Cinebench 1476
Firestrike Extreme 13611 ....
Gratin - Tuesday, October 9, 2018 - link
HardOCP TV is also sharing that info: https://youtu.be/1JgQxazSAGc at 5:26, for the i7-9700K at 3.6 GHz base clock.
SaturnusDK - Tuesday, October 9, 2018 - link
Interesting. The Cinebench score is one that should have reasonable consistency, so we can use that as a rough guide. The 9700K scores 1488, or under 5% higher than a typical stock 8700K score at 1420ish points. A stock Ryzen 2700X scores around 1800 though, so a stock Ryzen 2700X is 21% faster than a 9700K.
evernessince - Tuesday, October 9, 2018 - link
Just goes to show you that HT definitely impacts performance.
SaturnusDK - Tuesday, October 9, 2018 - link
We really don't know yet, but going by the Intel-commissioned benchmarks, aside from being the genesis of Game Mode Gate, the 9900K is barely 5% faster than the 8700K in games on average at 1080p, where the difference should be the highest. And it's 50% more expensive, so it doesn't look promising for the 9th gen to be honest. I look forward to seeing actual reviews by credible sources, but I for one foresee the 9000 series being a massive belly flop.
mapesdhs - Wednesday, October 10, 2018 - link
Atm I wouldn't touch Intel-commissioned benchmarks with a 10m cattle prod. Best wait for proper reviews, but the pricing is already enough to persuade me to go AMD: 600 UKP in the UK for a 9900K (that's approaching $800 US).
spinedoc777 - Tuesday, October 9, 2018 - link
So what's the consensus, buy 9th generation or wait a bit more until the 10nm parts next year? I think I may have to jump in this generation as I'm limping along with an i5-6600K. Although it does pretty darn well; coupled with a 1080 Ti I'm playing every single PC game release at 4K with most settings maxed and still getting 30-50fps. But I know my CPU is the bottleneck and I won't be able to even consider the 2080 Ti unless I upgrade the CPU first. But if the 10nm ones next year are that much better I have no issue waiting another year.
SaturnusDK - Tuesday, October 9, 2018 - link
Definitely wait for reviews from credible sources on the 9000 series. You also might want to see what AMD brings to the market with their 7nm Ryzen 3000 series launching in March/April next year.
Meteor2 - Wednesday, October 10, 2018 - link
What he said. The elephant in the room is that TSMC’s shrink is proven to work and so far, Intel’s is proven not to. And who’s going to be using that TSMC shrink next spring? AMD.
Gastec - Wednesday, October 10, 2018 - link
Wait long enough to see the prices of components become 3x what they are now :)
sammy389 - Wednesday, October 10, 2018 - link
That's perfect! Now I can return the i7-8086K for a refund, and then immediately pre-order the i9-9900K at BHphotovideo for $530! Plus, it gives me the advantage of reducing the CPU bottleneck, especially when running an RTX 2080 Ti video card.
mapesdhs - Wednesday, October 10, 2018 - link
What, no Optane for the C-drive? Bit of a lame system then.
AshlayW - Wednesday, October 10, 2018 - link
Huh. I had a 'mainstream' 8-core 16-thread processor early last year. You're late, Intel.
HikariWS - Wednesday, October 10, 2018 - link
It's really sad they are another Coffee Lake and probably won't have an on-die fix. The only interesting thing then is the higher clock.
I'd like to see comparisons of 9900K, 9700K and 8700K turbo clocks for 4C and max-C.
El Sama - Friday, October 12, 2018 - link
What? They're ditching this? I really wanted to see 14nm+++++ in four years after their 10nm node fails again and again.
alpha754293 - Friday, October 12, 2018 - link
Ooooh..... this is going to be really interesting.
The i9-9900K is definitely going to be an interesting product given the fact that it can now turbo itself up to 5 GHz with TWO threads, as opposed to just one.
The fact that this is also an 8-core processor with an 11-bin turbo range (3.6 GHz stock, 4.7 GHz all-core turbo) makes this a VERY tempting product/platform.
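The "11 bin" remark checks out if you assume the usual 100 MHz bin size:
```python
# 3.6 GHz base to 4.7 GHz all-core turbo, assuming 100 MHz turbo bins.
base_ghz, all_core_ghz = 3.6, 4.7
print(round((all_core_ghz - base_ghz) / 0.1), "bins")  # 11
```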
r13j13r13 - Monday, October 15, 2018 - link
Humm, in my country it reaches 640 dollars, and the latest tests show performance in games 16% higher than the R7 2700X. Clearly, depending on the game, this amazing figure would be between 5 and 12 fps, but it's absolutely amazing and worth every cent paid. But why did they not test in a professional setting?????
ChefJoe - Tuesday, October 16, 2018 - link
How about you try dropping some of these Coffee Lake Refreshes into Z370 motherboards with older BIOSes and see if they boot or not? This generation is a refresh and not really all that different (some leaked documents even said they have the same stepping codes as 8th gen parts).
I received a free EVGA Z370 FTW motherboard that shipped about 4 weeks after their latest BIOS update (that had notes of "Support new Coffee Lake-S Processors, Add new NVIDIA USB 3.1 Support, Improves Intel SGX compatibility"). It's possible my board has the new BIOS, or the prior one from back in March. I have hopes that a part like a 9600K (with only 6 cores) would boot no matter what BIOS is on the board. The FTW board's supported CPU list is pretty abysmal actually, listing no 8086K or any 300-series CPUs cheaper than the i3-8100 (although plenty of folks say it runs the 8086K even on the March BIOS).
Heilwiga - Friday, March 1, 2019 - link
Is the performance comparable to the new AMD Ryzen? Is the comparison correct?
https://www.linkedin.com/company/seo-company-in-gr...