Unofficially, Threadripper supports ECC. Do you have plans to look into it?
P.S. I sent an email to AnandTech support about abusive ads redirecting to some questionable websites. I am in the EU and have been seeing these ads for a long time now.
"p.s. I sent an email to Anandtech support about abusive ads directing to some questionable websites. I am in EU and I see these ads for a long time now."
Er, we don't have a support email address. So I'm not sure who you sent that to.
Anyhow, we're always trying to squash malvertising. It comes in on programmatic ads, which does make the process tricky. But if you can get it to reliably and repeatedly trigger, please contact me. If we can get network logs collected, then we can isolate the source and get said ads pulled.
I sent it to the advertising link I found on the "contact us" page. Sorry for saying support; that's what we call it in our organization. Thanks for the reply.
I get inappropriate ads as well. I'm not sure where I can send a screenshot, but the one I have on this page under the article is "This Is Better Than Adderall, According to US College Students. Try It!"
Like what? This is a professional tech site and ads like that have no business being on here. Banner ads for tech companies? Good. Side ads for relevant products? Good. This BS that's always underneath every article? Absolutely unacceptable.
"maybe when anad was still here" LOL, if he didn't care about money, he'd not sell the site to money makers for money. That's the N1 business model, and the sole motivation for doing anything - get it to get popular, then sell it out, and all its users with it.
lol, I was going to say that too. Anand had (in my opinion) a clear apple bias at the end and then went to work for them. That's not to say apple wasn't making good products or not doing interesting things - they were one of the few tech companies doing anything interesting.
I mean, imo he was pretty fair about them, he liked them and didn't say they were utter garbage because they tend not to make utter garbage. He did point out flaws fairly.
Some of the podcasts with Anand and Brian Klug were embarrassing, they had a third guy but they would just talk over him. Brian was this really obnoxious guy who made fun of people who want removable batteries and microSD cards, he said "You got what you got!"
lmao... industry shills.. wants to save the companies 10 cents for a microSD slot, and force people to overpay for 12GB space plus data usage.. How are you supposed to shoot 4k video and keep a movie/TV database with that. 128gb microSD card is perfect. Meanwhile they add ridiculous nonsense like taptic engine and face scanning instead of making the battery a bit thicker
I do, because they know a disproportionate amount of their user base is tech savvy and, with one click, will en masse block ads with ad blockers. Keep the ads clean and we leave the blockers off... we help each other, but it is a give and take.
Workstation without ECC... that's a bad joke right there. Or at best, a very casual workstation. But hey, if you like losing data, time and money - be my guest. Twice the memory channels, and usually all DIMMs would be populated in a workstation scenario; that's plenty of RAM to go faulty and ruin tons of potentially important data.
Also, what ads? Haven't you heard of uBlock :)
"Explaining the Jump to Using HCC Silicon" - basically the only way for intel to avoid embarrassment. Which they did in a truly embarrassing way - by gutting the ECC support out of silicon that already has it.
AVX512 - all good, but it will take a lot of time before software catches up. Kudos to intel for doing the early pioneering for once.
At that price - thanks but no thanks. At that price point, you might as well skip TR and go EPYC. Performance advantages, where intel has them, are hardly worth the price premium. You also get more IO, on top of not supporting a vile, greedy, anticompetitive monopoly that has held progress back for decades so it can milk it. But hey, as AT seems to hint, you have got to buy intel not to be considered a poor peasant who can't afford it. I guess being dumb enough to not value your money is a good thing if it sends your money into intel's pocket.
Don't worry, they get plenty of support from the big boys for all those shamelessly biased reviews. And don't act like your pennies will go to feed someone's starving children. So yeah, uBlock FTW.
Just curious, but do you have any evidence, even a small bit, to support your claim that “they get plenty of support from the big boys for all those...”? I think it is possible, and you seem a man of science - evidence can support any statement no matter how outlandish — so please present such if you would. Thanks
Obvious stuff is obvious, as are you ;) Nice try though. Are you the one who is going to pay for an evidence-gathering investigation? I personally don't feel like obvious things need evidence, but if you do, go ahead and investigate.
It seems you've finally met the 'village idi0t'. He will provide no evidence, as you likely expected, and we have to endure his bizarre views on every major article.
He's 2017's version of LordRaiden from around 2004 in these forums. Knows just enough to sound knowledgeable to those not in the industry, but is incapable of supporting his assertions because he is running on the theory that if he believes it in his mind it must be true.
Look up LordRaiden in the AT forums if you want to see when this last happened.
Yeah, it's hilarious. Anyone who's actually in the IT industry knows he's talking out of his lower orifices and it's always funny to watch him huff about like anyone actually takes him seriously.
It's like the cat tax. No article is complete without a good laugh at ddriver.
Oh wow, the fanclub is sure gradually moving down, and just when it seemed it had already hit the bottom. But hey, if pretending that you are not completely clueless wannabes works for you, by all means, knock yourselves out :)
You know, I completely agree, however you have mistyped "lame suckers" and typed "IT industry" instead.
Here is a hint - you cannot take seriously that which you don't have the capacity to understand. Your "best" boils down to clapping and cheering at the mainstream mediocrity to cultivate the illusion that you are smart. And when someone comes along and tears that illusion down, you are sore to realize the reality about you. And you are only left with denial in the form of those pathetically anemic attempts at intimidation through ridicule. But suckers will be suckers, and as such, always failing to make a valid argument in their favor :)
I'd ask you to post your credentials, but seriously your statements long ago precluded you from being anyone either in the industries you are opinionated about, or with the education to question anyone in those industries.
And did you notice that Big Blue has actually lost its marbles and its neurons are misfiring? Both the Core i9-7980XE and the Core i9-7960X have a TDP rating of 165W. However, while the latter meets this TDP, the Core i9-7980XE draws 190W at full load. That is a big no thanks also, when you consider 165W coolers are likely to be installed on the basis of the 165W TDP rating. We haven't even started overclocking yet, and it is likely this CPU will draw in excess of 350W, and one can only pray that the thermal paste under the lid will play nice. Or did they really do something different this time around?
Those are intel lies. Totally justified, because intel is rich. Not only are intel lies not bad, they are actually good. It makes you more intelligent if you believe in them. Only very intelligent people can get it.
Curiously, no word of intel's AMAZING DUAL CORE HEDT i3-7360X here at AT. Lagging behind the cutting edge here :)
Now that's a real game changer for intel. Although I wish they could launch a single core HEDT processor too. That's really where their portfolio is left gaping.
Big blue is IBM BTW, intel is just intel, or if you want to call them anything else, go with "money grubbing, cheating, anti competitive, bastards who will screw everyone over for a buck in a heart beat". For short.
I've created countless videos and processed a lot of documents, but have never, ever lost anything due to using standard non-ECC RAM. Sure, at work ALL of the servers use ECC, but there's not even one standard desktop with the stuff. STILL no data loss. 32GB at home and 64GB at work.
Yes, okay, I understand that ECC is for x and y, but is it 'really', REALLY, that important?
Run Unbound on a Pi or other Linux VM and block all those adverts at the DNS level for all the devices on your LAN. I haven't seen a site ad anywhere in years from my home.
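For anyone who wants to try it, here is a rough Python sketch of the idea - it just turns a plain-text domain blocklist into Unbound local-zone rules (the file names are placeholders, and it assumes a reasonably recent Unbound build that supports the always_nxdomain zone type):

# Sketch: turn a plain-text domain blocklist into Unbound "local-zone" rules.
# File names are placeholders; include the generated file from unbound.conf.
def make_unbound_blocklist(infile="ad-domains.txt", outfile="adblock.conf"):
    with open(infile) as f:
        domains = {line.strip().lower() for line in f
                   if line.strip() and not line.startswith("#")}
    with open(outfile, "w") as out:
        out.write("server:\n")
        for d in sorted(domains):
            # always_nxdomain answers NXDOMAIN for the domain and everything
            # under it, so the ad or tracker simply never resolves.
            out.write(f'    local-zone: "{d}" always_nxdomain\n')

if __name__ == "__main__":
    make_unbound_blocklist()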
Interesting - But that won't work for me - I'm a frequent traveller, and thus on different LANs all the time.
But what works for me, is PeerBlock, then iblocklist.com for the Ad-server & Malicious lists and others, add Microsoft and any other entity I don't want my packets broadcast to (my Antivirus alerts me when I need updates anyway - and thus I temporarily allow http through the firewall for that type of occasion).
On the contrary, single-threaded performance is largely a dead end until we hit quantum computing due to instability inherent to extremely high clock speeds. The core wars is exactly what we need to incentivize developers to improve multi-core scaling and performance: it represents the future of computing.
Some things just can't be split up into multiple threads -- it's not a developer skill level or laziness issue, it's just the way it is. Single threaded speed will always be important.
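That is basically Amdahl's law. A quick illustrative calculation (the parallel fractions and core counts below are picked purely as examples) shows why the serial part sets the ceiling:

# Amdahl's law: if a fraction p of the work can run in parallel, the best-case
# speedup on n cores is 1 / ((1 - p) + p / n).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):
    print(f"parallel fraction {p:.0%}: "
          f"8 cores -> {amdahl_speedup(p, 8):.2f}x, "
          f"18 cores -> {amdahl_speedup(p, 18):.2f}x")
# Even 99% parallel code tops out around 15x on 18 cores, and at 50% you can
# never even reach 2x - the serial part, i.e. single-thread speed, is the limit.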
As a developer for 30 years, I can say this is absolutely correct - especially for user interface logic, which includes graphics. Until technology is truly able to multi-thread the display logic and display hardware, it is very important to have single-thread performance. I would think this is critically important for games, since they deal a lot with the screen. Intel has also done something very wise - and I believe they realize how important this is - by allowing some cores to go faster than others. Multi-core is basically hardware-assisted multi-threading of applications, which is very dependent on application design - most of the time threads are used for background tasks. Another critical area is database logic - unless the database core logic is designed to be multithreaded, you will need a single point of entry, and in some cases the database must be on the screen thread. Of course, with advancements in hardware to handle threading and such, it might be possible to overcome these limitations. But in NO WAY is this laziness on the developers' part - keep in mind that a lot of software has years of development behind it, and completely rewriting the technology is a major and costly effort.
There are lots of instances where I'd need summation and other complex algorithm results from millions of records in certain tables. If I'm going the traditional SQL route, it would take ages for the computation to return the desired values. I instead divide the load over multiple threads to get a smaller set, on which I perform some cleanup and final arithmetic. Lots of extra work? Yup. More RAM per transaction in total? Oh yeah. Faster? Yes, dramatically faster.
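Roughly, the pattern looks like this (a minimal Python sketch - the chunking, worker count and the fetch_total() helper are all stand-ins for whatever you would actually run against the database):

# Map-reduce style aggregation: each worker totals its own slice of record IDs
# and the small partial results are combined at the end.
from concurrent.futures import ThreadPoolExecutor

def fetch_total(id_range):
    lo, hi = id_range
    # placeholder for e.g. SELECT SUM(amount) FROM records WHERE id >= lo AND id < hi
    return sum(range(lo, hi))

def parallel_total(max_id, workers=8):
    step = max_id // workers + 1
    chunks = [(i, min(i + step, max_id)) for i in range(0, max_id, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(fetch_total, chunks)
    return sum(partials)   # final cleanup/arithmetic happens on the small result set

print(parallel_total(1_000_000))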
WPF was the first attempt by Microsoft to distribute UI load across multiple cores in addition to the GPU; it was so slow in its early days due to lots of inefficiencies and premature multi-core hardware. It's a lot better now, but much more work than WinForms, as you'd guess. UWP UI is also completely multithreaded.
Android is inching closer to having its UI completely multithreaded and separate from the main worker thread. We're getting there.
Both you and sonich are correct, but it's also a fact that developers are taking their sweet time to get familiar with and/or use these technologies. Some don't want to go that route simply because of technology bias and lock-in.
"Both you and sonich are correct, but it's also a fact that developers are taking their sweet time to get familiar with and/or use these technologies. Some don't want to that route simply because of technology bias and lock-in."
That is not exactly what I was saying - it completely understandable to use threads to handle calculation - but I am saying that the designed of hardware with a single screen element makes it hard for true multi-threading. Often the critical sections must be lock - especially in a multi-processor system.
The best use of multi-threading and multi-CPU systems is actually in 3D rendering; this is where multiple threads can be used to distribute the load. It's been a while since I worked with Lightwave 3D and Vue, but in those days I would create a render farm - one of the reasons I purchased a dual Xeon 5160 ten years ago. But nowadays processors like the ones here could do the work of 10 or more normal machines on my farm (the Xeon was significantly more powerful than the P4s - it could pretty much do the work of 4 or more P4s back then).
You are living in a world of mainstream TV functional BS.
Quantum computing will never replace computers as we know and use them. QC is very good at a very few tasks which classical computers are notoriously bad at. The same goes vice versa - QC sucks for regular computing tasks.
Which is OK, because we already have enough single-thread performance. And all the truly demanding tasks that require more performance due to their time-consuming nature scale very well, often perfectly, with the addition of cores, or even nodes in a cluster.
There might be some wiggle room in terms of process and material, but I am not overly optimistic seeing how we are already hitting the limits on silicon and there is no actual progress made on superior alternatives. Are they like gonna wait until they hit the wall to make something happen?
At any rate, in 30 years, we'd be far more concerned with surviving war, drought and starvation than with computing. A problem that "solves itself" ;)
Yes, as alluded to by IEEE. But I've not looked at it in a couple of years or so, and I think they were still struggling with an optical DRAM of sorts.
Can you add Monero(Cryptonight) performance? Since Cryptonight requires at least 2MB of L3 cache per core for best performance, it would be nice to see how these compare to Threadripper.
I'd really like it if Enthusiast ECC RAM was a thing.
I used to always run ECC on Athlons back in the Pentium III/4 days. Now, with 32-128x more memory that's running 30x faster, it doesn't seem like it would be a bad thing to have...
Despite the article clearly mentioning it in a proper and professional way, the calm tone of the conclusion seems to legitimize and make it acceptable that Intel basically deceives its customers and ships a CPU that consumes almost 16% more power than its stated TDP.
THIS IS UNACCEPTABLE and UNPROFESSIONAL from Intel.
I'm not "shouting" this :) , but I'm trying to underline this fact by putting it in caps.
People could burn their systems if they design workstations and use cooling solutions for 165W TDP.
If AMD had done anything remotely similar, we would have seen titles like "AMD's CPU can fry eggs / system killer / motherboard breaker" and so on...
On the other hand, when Intel does this, it is silently, calmly and professionally deemed acceptable.
It is my view that such a thing is not acceptable and these products should be banned from the market UNTIL Intel corrects its documentation or the power consumption.
The i9-7960X fits perfectly within its TDP of 165W; how come the i9-7980XE is allowed to run wild and consume 16% more?!
This is similar to the way people accepted every crappy design and driver failure from nVIDIA, even DEAD GPUs, while complaining about AMD's "bad drivers" that never destroyed a video card like nVIDIA did. See link: https://www.youtube.com/watch?v=dE-YM_3YBm0
This is not cutting Intel "some slack"; this is accepting shit, lies and mockery and paying 2000 USD for it.
For 2000$ I expect the CPU to run like a Bentley for life, not like a modded Mustang which will blow up if you expect it to work as reliably as a stock model.
What a load of ignorance. Intel TDP is *average* power at *base* clocks; it uses more power at all-core turbo clocks here. Disable turbo if that's too much power for you.
This is used to design the motherboard and the cooling system; it gives designers a clear limit over which the system doesn't go unless it is purposely overclocked.
Wikipedia : "The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by a computer chip or component (often the CPU or GPU) that the cooling system in a computer is designed to dissipate under any workload."
Intel : "TDP (Thermal Design Power) Intel defines TDP as follows: The upper point of the thermal profile consists of the Thermal Design
Power (TDP) and the associated Tcase value. Thermal Design Power (TDP) should be used for processor thermal solution design targets. TDP is not the maximum power that the processor can dissipate. TDP is measured at maximum TCASE.1"
Intel : "Due to normal manufacturing variations, the exact thermal characteristics of each individual processor are unique. Within the specified parameters of the part, some processors may operate at a slightly higher or lower voltage, some may dissipate slightly higher or lower power and some may draw slightly higher or lower current. As such, no two parts have identical power and thermal characteristics.
However the TDP specifications represent a “will not exceed” value. "
This is what we've understood by TDP for the past 21 years in the IT hardware industry.
If you have a different definition, then perhaps we're talking about different things.
Specification for 7980xe says "Thermal Design Power (TDP) represents the average power, in watts, the processor dissipates when operating at Base Frequency with all cores active under an Intel-defined, high-complexity workload. Refer to Datasheet for thermal solution requirements." There's a different specification for electrical design. This is not your ancient Xeon TDP.
You mean the definition of TDP should change every year to suit Intel's marketing ?! :)
"Ancient" Xeon TDP ?! :)
I've quoted Intel's own definition.
If the company just came up with a NEW and DIFFERENT definition just for the Core i9 series, then that's just plain deceiving marketing, changing with the wind (read : new generation of products) .
Plus, why the heck are they calling it TDP ?!
If they now claim that TDP "represents the average power, in watts, the processor dissipates when operating at Base Frequency with all cores active " then they basically use AMD's ACP from 2011.
You may be unhappy with what Intel promised you, but to claim that you could burn a system with the increased power usage from turbo clocks is ridiculous; thermal throttling is not fire, and it's ridiculous to argue about a CPU that can run overclocked at >400W power consumption.
You can't talk rationally with a loyalist sympathizer. TDP is a set definition in the industry and one Intel seems to be misleading about with their extreme HEDT CPU. That seems to be a fact clearly made among almost all reviews of the 7980XE.
I think I read a few articles yesterday talking about how the 7980XE was having major issues and wasn't boosting correctly but showing high power draw. But yesterday was a long time ago and I can't remember where I read that.
Yes, it's total bullshit that they are misinterpreting what TDP is. I imagine this is how they'll get away with claiming a lower TDP than the real one on the 8700K chip too, which has a low base clock speed but a super-high Turbo Boost, which probably means the REAL TDP will go through the roof when that Turbo Boost is maximized.
This is how Intel will get to claim that its chips are still faster than AMD "at the same TDP" (wink wink, nudge nudge).
"What a load of ignorance. Intel tdp is *average* power at *base* clocks, uses more power at all core turbo clocks here. Disable turbo if that's too much power for you."
I find it ironic that you would call someone ignorant, then reveal your own ignorance about the TDP and turbo clocks.
I'm quite curious what happens if your system cooling simply can't handle it. I suspect if you designed a cooling solution which only supported 165W the CPU would simply throttle itself, but I'm curious by how much.
Strictly speaking, all forms of Turbo boost are a form of vendor-sanctioned overclocking. The fact that measured power goes beyond TDP when at max all-core turbo should really not be all that surprising. The ~36% increase in power for ~31% increase in clocks is pretty reasonable and inline when you keep that in mind. Especially when you factor that there has to have been a bit of extra voltage added for stability reasons (power scales linearly with clocks and current, and quadratically to exponentially with voltage).
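As a rough back-of-the-envelope check of that scaling (illustrative numbers only, not measurements of this chip):

# First-order rule of thumb: dynamic power scales roughly linearly with
# frequency and quadratically with voltage.
def power_scale(freq_ratio, volt_ratio):
    return freq_ratio * volt_ratio ** 2

# ~31% higher clocks plus ~2% more voltage lands close to the ~36% power
# increase mentioned above.
print(f"{power_scale(1.31, 1.02):.2f}x")   # prints 1.36x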
I agree. Everything looked good until that page. 190 watts is unacceptable, and Intel needs to correct this right away - either make the CPU run within the TDP limit, or update the TDP to 190 watts in the specs.
They most certainly do; that is one of the biggest gripes against Vega 64. People do seem to have short memories about how high GPU TDPs used to be, however.
On a video card, the same manufacturer takes responsibility for the GPU, cooling system, design, PCB, components and warranty.
On the CPU, you have somebody else designing the cooling system, the motherboard, the power lines and they all have to offer warranty for their components while Intel is only concerned with the CPU.
If the CPU is throttling or burnt out, they will say "sufficient cooling was not provided" and so on ...
Thermal throttling is not a burnout and not a warranty event; you don't get to warranty your GPU when it throttles under load, a cooler's warranty does not cover CPU/GPU chip performance, and Intel designed the ATX specification and the electrical specification for the boards.
You clearly don't know the things you're talking about.
It's really no different than if a car was sold with inadequate cooling.
"Average" heat production at normal speeds is fine, but if you actually come close to using the 300HP the engine produces by, I dunno, pulling a trailer at those same speeds it will overheat and you'll have to pull over and let it cool.
I have a still-running dual Intel Xeon 3GHz 5160 and my biggest complaint is that the box is huge. This machine is 10 years old, has 8GB of memory and about 5TB of storage. Its CPUs alone cost around $2000, and in your terms it's like the Bentley, or my 2000 Toyota Tundra with the Lexus engine and 240,000 miles on it. In essence you get what you pay for.
Super relevant, because they indicate how badly thermally limited the CPU is - which is hella good info to have if you're, say, considering delidding a $1999 processor because the manufacturer used toothpaste under the IHS.
Poor AMD... No chance they are going to supply (even more) CPU demand after posting this article... I am trying to purchase at least 7 systems for my customers in my country but there's nowhere I can find these beasts here...
I wish someone could do an article on that too. GF doesn't seem to be the limitation here. GF should, in theory, have more than enough capacity in their Fab 8 for AMD. Unless GF has some other big customers, AMD should really be pumping out as many units as possible.
Threadripper delivers 80+% of the performance for less than 50% of the price... you don't have to be a genius to see which is the better deal (prices in Germany: TR 1950X = 950 euro, i9-7980XE = 2300 euro).
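The arithmetic really is that blunt - a quick sketch using the German prices above and an assumed ~80% relative performance for the 1950X (both rough numbers):

# Quick performance-per-euro comparison with assumed, rounded inputs.
tr_price, i9_price = 950, 2300          # EUR
tr_perf, i9_perf = 0.80, 1.00           # normalized to the 7980XE
print(f"TR 1950X:  {tr_perf / tr_price * 1000:.2f} perf per 1000 EUR")
print(f"i9-7980XE: {i9_perf / i9_price * 1000:.2f} perf per 1000 EUR")
# roughly 0.84 vs 0.43 - about twice the performance per euro for the 1950X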
Don't let that stop them equivocating about how companies who need that power yet somehow have no need for ECC don't care about cost because something something software TCO blah blah.
I'm trying really, really hard to think of a company that, at some point or another, doesn't say, "Equipment X may outperform Equipment Y, but the extra cost to buy Equipment X is too much, we'll just make-do with Y instead." Especially since 100% of companies have a limit on their budgets.
What's that, you say? Multi-billion dollar corporations don't have to worry about the money they spend? Someone apparently didn't pay attention in their Econ 200 class, or their Introduction to Accounting coursework.
By definition, every business has a *finite* amount of money they can spend, based on a) how much money they collect from their customers, b) how much they can recoup on the sale of assets (tangible or intangible), & c) how much they can get from "other sources" (mostly bank loans or by selling stock shares, or sometimes government grants, but you might find the occasional situation where a generous benefactor just bequeaths money to a company...but I doubt you'll even see that happen to 1% of the companies out there -- & no, venture capitalists pouring money into a company is *not* a situation where they "give the money away", they're getting something for their money, usually stock shares or guarantees of repayment of the loans). Of that money, some of it is earmarked for employee compensation (not just the executives, but the office drones & lower-level employees that do 99% of the actual work), some of it goes towards taxes, some of it pays for rental payments, some for loan payments, some for utilities (telephone, Internet, electricity, gas, water, etc.), some of it may get set aside for "emergencies", some gets earmarked for dividends to the shareholders, etc. That means that a (relatively) small portion is set aside for "equipment replacement". Now, if the company is lucky, the lion's share of that budget is for IT-related equipment...but that covers more than just the office drones' machines, that covers everything: server racks, storage services, cloud vendor payments, etc.
And that is where the price comes into play. For probably 90% of office users out there, not only is Threadripper an overpowered product, so are these products. Heck, we're in the middle of an upgrade from Windows 7 to Windows 10, & they're taking the opportunity to replace our old Sandy Bridge i5 machines with Skylake i7 machines. Sure, they're running faster now...but the main reason they're running faster is because we went from 32-bit Windows to 64-bit Windows, so our PCs now have 8GB of RAM instead of 4GB. That helps with our workload...which primarily revolves around MS Office & using browsers to access & modify a number of massive databases. Having an 8C/16T CPU, let alone a 16C/32T CPU, wouldn't provide any boost for us, since the primary slowdown is on the server side.
These are going to be expensive systems for specialized purposes...& those individual companies are going to look at their budgets very closely, as well as the performance benchmarks, before deciding to purchase these systems. Sure, they may hold the performance crown...but not by that big of a margin, & especially when compared to the margin that gives them the "most expensive price" crown.
Human labor is more expensive than hardware. The 20% additional performance for $1000 more can be earned back quickly by the increased productivity of your workforce (assuming your management staff is effective enough to keep the employees gainfully employed of course and that's certainly not always the case).
I have a big issue with the latest performance results - especially those dealing with multi-core performance. What is most important is single-core performance - this is primarily because real applications, not benchmark applications, use the primary thread more than secondary threads. Yes, the secondary threads do help in calculations and such - but what matters most, especially in graphical applications, is the primary thread. Plus, quality is important - I'm not sure AMD is going to last in this world, because they seem to have a very limited focus.
On the contrary, these chips will sell very well since they aren't geared towards "prosumers" but big businesses where every minute wasted could mean thousands $$ lost.
That performance per dollar page is amazing. I could look at graphs like that, comparing all types of components and configurations against different workloads, all day.
If we factor in the price of the whole system, rather than just the CPU (AMD's motherboards tend to be cheaper), then AMD is doing pretty well here. I am looking forward to next year's 12nm Zen+.
Can an IPC comparison be done between this and Skylake-S? Skylake-X LCC lost in some cases to Skylake, but is it due to the lack of L3 cache or is it because the L3 cache is slower?
There will never be an IPC comparison of Intel's new processors, because all it would do is showcase how Intel's IPC actually went down from Broadwell and further down from KabyLake.
Intel's IPC is on a downtrend, and that is not really good for clicks and internet traffic.
Even worse: it would probably upset Intel's PR, and that website would surely not be receiving any early review samples.
Great review, thank you. This is how a proper review is done. Those benchmarks we saw of the 18-core i9 last week were a complete joke, since the guy had the chip overclocked to 4.2GHz on all cores, which really inflated the scores vs a stock Threadripper 16/32 CPU. That was very unrealistic from a cooling standpoint for end users.
This review was stock for stock, and we got to see how both CPU camps performed in their out-of-the-box states. I was a bit surprised the mighty 18-core CPU did not win more of the benches, and when it did, it was not by very much most of the time. So a $1K CPU vs a $2K CPU, and the mighty 18-core did not perform like it was worth $1K more than the AMD 1950X, or the 1920X for that matter. Yes, the mighty i9 was a bit faster, but not $1000 faster, that is for sure.
I too am interested in 'out of the box' performance.
But if you think ANYONE would buy this and not overclock - you'd have to be out of your mind.
There are people out there running 4.5GHz on all cores, if you look for it.
And what is with all this 'unrealistic cooling' I keep hearing about? You fit the cooling that fits your CPU. My 14C/28T CPU runs 162W 24/7 running BOINC, and is attached to a 480mm 4-fan all copper radiator, and hand on my heart, I don't think has ever exceeded 42C, and sits at 38C mostly.
If I had this 7980XE, all I'd have to do is increase pump speed I expect.
Personally, I think the comments about people that spend $10K on licenses having the money to go for the $2K part are not necessarily correct. Companies will spend that much on a license because they really do not have any other options. The high end Intel part in some benchmarks gets 30 to may be 50 percent more performance on a select few benchmarks. I am not going to debate that that kind of performance improvement is significant even though it is limited to a few benchmarks; however, to me that kind of increased performance comes at an extreme price premium, and companies that do their research on the capabilities of each platform vs price are not, IMO, likely to throw away money on a part just for bragging rights. IMO, a better place to spend that extra money would be on RAM.
In my last job, they spent over $100k on a software versioning system.
In the workstation/server world they are looking for reliability, and this typically means Xeon.
Gaming computers are different; usually kids want them and have less money, and they always need the latest and greatest and don't care about reliability - a new graphics card comes out, they replace it. AMD is focusing on that market - which includes the Xbox One and PS4.
For me, I'm looking for something I can depend on and know will be around for a while. Not something that slaps multiple dies together to claim bragging rights for more cores.
Competition is good, because it keeps Intel on its feet. I think if AMD had not purchased ATI, there would be no competition for Intel at all in the x86 market. But it's not all smart either - would anybody seriously consider placing an AMD graphics card on an Intel CPU?
Hate to burst your foreign bubble, but companies are cheap in terms of staying within budgets. Especially up-and-coming corporations. I'll use the company I work for as an example: a fairly large print shop with 5 locations along the US West coast that's been in existence since the early 70's, about 400 employees in total. Servers, PCs, and general hardware only see an upgrade cycle once every 8 years (not all at once, it's spread out). Computer hardware is a big deal in this industry, and the head of IT for my company has done pretty well with this kind of hardware life cycle. First off, Macs rule here for preprocessing; we will never see a Windows-based PC for anything more than accessing the Internet. But when it comes to our servers, they're running some very old Xeons.
As soon as the new fiscal year starts, we are moving to an epyc based server farm. They've already set up and established their offsite client side servers with epyc servers and IT absolutely loves them.
But why did I bring up macs? The company has a set budget for IT and this and the next fiscal year had budget for company wide upgrades. By saving money on the back end we were able to purchase top end graphic stations for all 5 locations (something like 30 new machines). Something they wouldn't have been able to do to get the same layout with Intel. We are very much looking forward to our new servers next year.
I'd say AMD is doing more than keeping Intel on their feet, Intel got a swift kick in the a$$ this year and are scrambling.
Ian, thanks for the great review! Very much appreciate the initial focus on productivity tasks, encoding, rendering, etc., instead of games. One thing though, something that's almost always missing from reviews like this (ditto here), how do these CPUs behave for platform stability with max RAM, especially when oc'd?
When I started building oc'd X79 systems for prosumers on a budget, they often wanted the max 64GB. This turned out to be more complicated than I'd expected, as reviews and certainly most oc forum "clubs" achieved their wonderful results with only modest amounts of RAM, in the case of X79 typically 16GB. Mbd vendors told me published expectations were never with max RAM in mind, and it was "normal" for a mbd to launch without stable BIOS support for a max RAM config at all (blimey). With 64GB installed (I used two GSkill TridentX/2400 4x8GB kits), it was much harder to achieve what was normally considered a typical oc for a 3930K (mab was the ASUS P9X79 WS, basically an R4E but with PLEX chips and some pro features), especially if one wanted the RAM running at 2133 or 2400. Talking to ASUS, they were very helpful and advised on some BIOS tweaks not mentioned in their usual oc guides to specifically help in cases where all RAM slots were occupied and the density was high, especially a max RAM config. Eventually I was able to get 4.8GHz with 64GB @ 2133. However, with the help of an AE expert (this relates to the lack of ECC I reckon), I was also able to determine that although the system could pass every benchmark I could throw at it (all of toms' CPU tests for that era, all 3DMark, CB, etc.), a large AE render (gobbles 40GB RAM) would result in pixel artefacts in the final render which someone like myself (not an AE user) would never notice, but the AE guy spotted them instantly. This was very interesting to me and not something I've ever seen mentioned in any article, ie. an oc'd consumer PC can be "stable" (benchmarks, Prime95 and all the rest of it), but not correct, ie. the memory is sending back incorrect data, but not in a manner that causes a crash. Dropping the clock to 4.7 resolved the issue. Tests like P95 and 3DMark only test parts of a system; a large AE render hammered the whole lot (storage, CPU, RAM and three GTX 580s).
Thus, could you or will you be able at some point to test how these CPUs/mbds behave with the max 128GB fitted? I suspect you'd find it a very different experience compared to just having 32GB installed, especially under oc'd conditions. It stresses the IMCs so much more.
I note the Gigabyte specs page says the mbd supports up to 512GB with Registered DIMMs; any chance a memory corp could help you test that? Mind you, I suspect that without ECC, the kind of user who would want that much RAM would probably not be interested in such a system anyway (XEON or EPYC much more sensible).
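To be clearer about what I mean by "stable but not correct" above: the crude kind of check below (a Python/numpy sketch, nothing like a real AE render) is the sort of thing an oc'd system can fail while still passing every ordinary benchmark:

# Crude memory-integrity soak: fill a large buffer with a known pattern, move
# it around in RAM repeatedly, and verify the checksum still matches.
import numpy as np

def soak_test(gib=1, passes=4, seed=1234):
    n = gib * (1 << 30) // 8                  # number of 64-bit words
    rng = np.random.default_rng(seed)
    data = rng.integers(0, 2**63, size=n, dtype=np.int64)
    checksum = int(data.sum())
    for i in range(passes):
        data[:] = data[::-1].copy()           # shuffle the buffer in memory
        data[:] = data[::-1].copy()           # and restore the original order
        if int(data.sum()) != checksum:
            return f"pass {i}: corruption detected"
    return "no corruption detected"

print(soak_test())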
"256 KB per core to 1 MB per core. To compensate for the increase in die area, Intel reduced the size of the size of the L3 from 2.5 MB per core to 1.375 MB per core, keeping the overall L2+L3 constant"
Maybe Intel saw the AMD TR numbers and had to add 10-15% to their expected freqs. Sure, there is some power that goes to the CPU which ends up in RAM et al., but these are expensive room heaters. Intel marketing bunnies thought 165W looked better than 180W to fool the customers.
Wow! Another pro-Intel review. I was expecting this, but having graphs displaying Intel's perf/$ advantage - just wow, you've really outdone yourselves this time.
Of course, I wanted to see how long you are gonna keep delaying the gaming benchmarks of Intel's Core i9 due to the mesh arrangement's horrid performance. I guess you're expecting game developers to fix what can be fixed. It's been several months already, but for Ryzen you were displaying its few issues since day 1.
You tested AMD with 2400MHz RAM, when you know that performance is affected by anything below 3200MHz.
Several different Intel CPUs come and go in your graphs, only to show that a different Intel CPU is better whenever the Core i9 lacks performance and an AMD CPU is better.
You didn't even mention the negligible performance difference between the 7960X and 7980XE. Just take a look at the Phoronix review.
Can this site even get any lower? Anand's name is the only thing keeping it afloat.
Erm, there are five graphs on the performance/$ page, and three of them show AMD with a clear performance-per-dollar advantage in everything except the very top end and the very bottom end (and one of the other two is pretty much a tie).
...how can you possibly call that a pro-Intel review?
And why the heck would you want game reviews on these CPUs anyway? By now we KNOW what the results are gonna be and they won't be astonishing. More than likely they will be under a 7700K. Game benchmarks are utterly worthless for these CPUs, and any kind of surprise by the reader at their lack of overall performance in games is the reader's fault for not paying attention to previous reviews.
That's a perfectly valid comparison with the exception of the fact that Intel's X299 platform will look completely handicapped next to AMD's EPYC based solution and it will have just half of the computational power.
"Intel also launched Xeon-W processors in the last couple of weeks."
Just where can one purchase these mythical Xeon-W processors? There hasn't been a single peep about them since the "launch" week. I've only heard of two motherboards that will support them. They seem to be total vaporware. On Intel's own site, it says they were "Launched" in 3Q2017. Intel had better hurry up, 3Q will be up in 4 days!
I don't understand why Intel disables ECC on their i9 CPUs; they are losing low-budget workstation buyers who will 100% choose an AMD Threadripper over an Intel i9.
Even if they are doing this to protect their Xeon chips, they could enable unbuffered ECC and not allow registered ECC on the i9 - problem solved. Unbuffered ECC has a size limitation, and people who want more RAM will go for Xeons.
Remember that their i3 has ECC support, but only the i3...
The "low budget workstation buyers" as you call them are a really insignificant percentage of an already really small piece of the huge pie of Intel customers.
Who told you so? Most engineering students at universities need one, and art students who render a lot as well. All these people will buy a Threadripper CPU and avoid Intel, for Intel Xeons are 50% more expensive.
And I don't care about the percentage of the Intel pie... hundreds of thousands of students enter universities around the world each year. Low percentage or not, they are a lot...
How much do you think a low-budget workstation costs? They start from $3000... and with Xeon pricing, it will be very difficult to add a lot of RAM, a good workstation card and a fast SSD.
So I ran the Poisson benchmark on my 6950X. It uses all 10 cores (20 h/w threads), but can be configured to run in different ways: you can set the number of s/w threads per process. It then creates enough processes to ensure there's one s/w thread per h/w thread. Changing the s/w threads per process significantly affects the result:
Each process only uses about 2.5MB of RAM. So the 1-thread per process probably has a low result as this will result in more RAM usage than L3 cache, whereas the others should all fit in.
Would be interesting to see what was used for the 7980/7960. Perhaps the unusual number of cores resulted in a less than optimal process/thread mapping.
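For reference, the process/thread mapping I am describing works roughly like this (a sketch, not the benchmark's actual code):

# With H hardware threads and T software threads per process, the benchmark
# launches ceil(H / T) processes so every hardware thread gets one software thread.
import math
from multiprocessing import cpu_count

def processes_needed(threads_per_process):
    hw_threads = cpu_count()                  # 20 on a 10-core/20-thread 6950X
    return math.ceil(hw_threads / threads_per_process)

for t in (1, 2, 4, 5, 10, 20):
    print(f"{t} s/w thread(s) per process -> {processes_needed(t)} process(es)")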
Hey guys, question... Tom's and others have mentioned that they HAD to use watercooling to keep this thing stable. Did the same happen to your sample? Wouldn't that increase the "cost of ownership" even more than the Intel counterpart?
I mean, the mobo, the ram, the watercooling kit and then the hefty processor?
I'm going to grab another cup of coffee and read it again, but on performance per dollar: AMD costs about half as much as Intel for several comparable models, so how does Intel have better performance per dollar on so many of those graphs?
Admittedly my kids are driving me nuts and I've been reading this for two days now trying to finish...
Regarding the price of the Intel Core i9-7980XE and Core i9-7960X: in my opinion, I cannot justify spending an extra $700~1k on these processors. The performance gains weren't that significant.
Would have been nice to see the Xeon Gold 6154 in the test. 18 cores / 36 threads and apparently an all-core turbo of 3.7GHz, plus the advantage of adding a second one on a dual-socket mobo.
Planning a pair of 6154's on either an Asus WS C621E or a Supermicro X11DPG-QT and Quad GPU set up.
My 5 year old dual E5-2687w system scores 2298 in Cinebench R15, which has served me well and paid for itself countless times over, but having dual 6154's will bring a huge smile to the face for V-ray production rendering.
My alternative is to build two systems on the i9-7980XE, one for content creation, single CPU, single GPU, and the other as a GPU workhorse for V-ray RT, and Iray, single CPU, Quad GPU+ to call on when needed.
So the comparison would have been nice for the various tests performed.
I wonder if AnandTech is considering adding TensorFlow to the CPU and GPU benchmark suite?
I'm a PhD student in computer science and a lot of us are using TensorFlow for research so we are interested in the performance of CPU/GPUs on TensorFlow.
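Even something as simple as the sketch below (TensorFlow 2.x assumed; the matrix size and iteration count are arbitrary) would already separate these chips by core count and AVX-512 support:

# Minimal CPU matmul timing in TensorFlow 2.x - just a sketch of the kind of
# micro-benchmark that would expose core-count and vector-unit differences.
import time
import tensorflow as tf

def time_matmul(n=4096, iters=10):
    a = tf.random.normal((n, n))
    b = tf.random.normal((n, n))
    tf.linalg.matmul(a, b)                    # warm-up
    start = time.perf_counter()
    for _ in range(iters):
        c = tf.linalg.matmul(a, b)
    _ = c.numpy()                             # force the work to finish
    return (time.perf_counter() - start) / iters

with tf.device("/CPU:0"):
    print(f"{time_matmul():.3f} s per 4096x4096 matmul")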
mmrezaie - Monday, September 25, 2017 - link
unofficially threadripper supports ECC. Do you have the plan to look into it?p.s. I sent an email to Anandtech support about abusive ads directing to some questionable websites. I am in EU and I see these ads for a long time now.
Ryan Smith - Monday, September 25, 2017 - link
"p.s. I sent an email to Anandtech support about abusive ads directing to some questionable websites. I am in EU and I see these ads for a long time now."Er, we don't have a support email address. So I'm not sure who you sent that to.
Anyhow, we're always trying to squash malvertising. It comes in on programmatic ads, which does make the process tricky. But if you can get it to reliably and repeatedly trigger, please contact me. If we can get network logs collected, then we can isolate the source and get said ads pulled.
mmrezaie - Monday, September 25, 2017 - link
I sent it to advertisement link I found in the "contact us" page. Sorry by saying support. That's what we call it in our organization. thanks for the reply.snowmyr - Monday, September 25, 2017 - link
Check that link again and you'll see that it's not really an anandtech email address and might not get forwarded to the right people.mmrezaie - Monday, September 25, 2017 - link
I sent it to advertisement link I found in the "contact us" page. Sorry by saying support. That's what we call it in our organization. thanks for the reply.AdditionalPylons - Monday, September 25, 2017 - link
Just sent you a tweet with a screenshot, Ryan. I've been very annoyed with these clickbait ads for avery long time as well.
hughc - Monday, September 25, 2017 - link
Wasn't sure what you were referring to. I have AdBlock whitelisting the domain, so I see all the display advertising.I'm also using ClickbaitKiller. Disabled it, and now I can see the unit in question - very happy to hide this trash.
thesavvymage - Monday, September 25, 2017 - link
I get inappropriate ads as well. I'm not sure where I can send a screenshot, but the one I have on this page under the article is "This Is Better Than Adderall, According to US College Students. Try It!"Like what? This is a professional tech site and ads like that have no business being on here. Banner ads for tech companies? Good. Side ads for relevant products? Good. This BS thats always underneath every article? Absolutely unacceptable.
Gothmoth - Monday, September 25, 2017 - link
do you really think anandtech cares how they make money.. maybe when anad was still here.i see these ads too, a website who cares about it´s reputation would distance itself from such crap.. but not anandtech.
ddriver - Monday, September 25, 2017 - link
"maybe when anad was still here" LOL, if he didn't care about money, he'd not sell the site to money makers for money. That's the N1 business model, and the sole motivation for doing anything - get it to get popular, then sell it out, and all its users with it.Gothmoth - Monday, September 25, 2017 - link
well i did not notice as much bias and other stuff when anand was still here.Spunjji - Monday, September 25, 2017 - link
Seriously..? Ever read any of the Apple product reviews? :Dandrewaggb - Monday, September 25, 2017 - link
lol, I was going to say that too. Anand had (in my opinion) a clear apple bias at the end and then went to work for them. That's not to say apple wasn't making good products or not doing interesting things - they were one of the few tech companies doing anything interesting.Notmyusualid - Tuesday, September 26, 2017 - link
+1tipoo - Tuesday, September 26, 2017 - link
I mean, imo he was pretty fair about them, he liked them and didn't say they were utter garbage because they tend not to make utter garbage. He did point out flaws fairly.flyingpants1 - Tuesday, September 26, 2017 - link
Yes that is the general consensus around here.Some of the podcasts with Anand and Brian Klug were embarrassing, they had a third guy but they would just talk over him. Brian was this really obnoxious guy who made fun of people who want removable batteries and microSD cards, he said "You got what you got!"
lmao... industry shills.. wants to save the companies 10 cents for a microSD slot, and force people to overpay for 12GB space plus data usage.. How are you supposed to shoot 4k video and keep a movie/TV database with that. 128gb microSD card is perfect. Meanwhile they add ridiculous nonsense like taptic engine and face scanning instead of making the battery a bit thicker
FreckledTrout - Monday, September 25, 2017 - link
I do because they know a disproportionate amount of their user base is tech savvy and run ad blockers with one click will en mass black block adds. Keep the adds clean and we leave the blockers off....we help each other but it is a give and take.damianrobertjones - Saturday, September 30, 2017 - link
Did you know that capitals can be your friend!ddriver - Monday, September 25, 2017 - link
Workstation without ECC... that's a bad joke right there. Or at best, some very casual workstation. But hey, if you like losing data, time and money - be my guest. Twice the memory channels, and usually all dims would be populated in a workstation scenario, that's plenty of ram to get faulty and ruin tons of potentially important data.Also, what ads? Haven't you heard of uBlock :)
"Explaining the Jump to Using HCC Silicon" - basically the only way for intel to avoid embarrassment. Which they did in a truly embarrassing way - by gutting the ECC support out of silicon that already has it.
AVX512 - all good, but it will take a lot of time before software catches up. Kudos to intel for doing the early pioneering for once.
At that price - thanks but no thanks. At that price point, you might as well skip TR and go EPYC. Performance advantages, where intel has them, are hardly worth the price premium. You also get more IO on top of not supporting a vile, greedy, anticompetitive monopoly that has held progress back for decades so it can milk it. But hey, as AT seems to hint it, you have got to buy intel not to be considered a poor peasant who can't afford it. I guess being dumb enough to not value your money is a good thing if it sends your money in intel's pocket.
nowayandnohow - Monday, September 25, 2017 - link
"Haven't you heard of uBlock :)"Haven't you heard that this site isn't free to run, and some of us support anandtech by letting them display ads?
ddriver - Monday, September 25, 2017 - link
Don't worry, they get plenty of support from the big boys for all those shamelessly biassed reviews. And don't act like your pennies will go to feed someone's starving children. So yeah, uBlock FTW.pedrostee - Monday, September 25, 2017 - link
just curious but do you have any evidence, even a small bit, to support your claim that “they get plenty of support from the big boys for all those...”i think it is possible, and you seem a man of science - evidence can support any statement no matter how outlandish — so please present such if you would.
thnks
ddriver - Monday, September 25, 2017 - link
Obvious stuff is obvious, as are you ;) Nice try thou. Are you the one who is going to pay for evidence searching investigation? I personally don't feel like obvious things need evidence, but if you do, go ahead and investigate.ddriver - Monday, September 25, 2017 - link
But the ugliest part is intel went cheap even on a 2000$ CPU, taking a literal dump on it by going for the same old lousy TIM implementation.After this reveal from intel, TR looks even better than it did before.
Notmyusualid - Monday, September 25, 2017 - link
@ pedrosteeIt seems you've finally met the 'village idi0t'. He will provide no evidence, as you likely expected, and we have to endure his bizarre views on ever major article.
Reflex - Monday, September 25, 2017 - link
He's 2017's version of LordRaiden from around 2004 in these forums. Knows just enough to sound knowledgeable to those not in the industry, but is incapable of supporting his assertions because he is running on the theory that if he believes it in his mind it must be true.Look up LordRaiden in the AT forums if you want to see when this last happened.
mkaibear - Monday, September 25, 2017 - link
Yeah, it's hilarious. Anyone who's actually in the IT industry knows he's talking out of his lower orifices and it's always funny to watch him huff about like anyone actually takes him seriously.It's like the cat tax. No article is complete without a good laugh at ddriver.
Reflex - Monday, September 25, 2017 - link
It is unfortunate however because he often derails actually interesting conversations.ddriver - Monday, September 25, 2017 - link
Oh wow, the fanclub is sure gradually moving down, and just when it seemed it already hit the bottom. But hey, if pretending that you are not a completely clueless wannabes works for you, by all means, know yourselves out :)ddriver - Monday, September 25, 2017 - link
You know, I complete agree, however you have mistyped "lame suckers" and typed "IT industry" instead.Here is a hint - you cannot take seriously that which you don't have the capacity to understand. Your "best" boils down to clapping and cheering at the mainstream mediocrity to cultivate the illusion that you are smart. And when someone comes along and tears that illusion down, you are sore to realize the reality about you. And you are only left with denial in the form of those pathetically anemic attempts at intimidation through ridicule. But suckers will be suckers, and as suck, always failing to make a valid argument in their favor :)
Reflex - Monday, September 25, 2017 - link
I'd ask you to post your credentials, but seriously your statements long ago precluded you from being anyone either in the industries you are opinionated about, or with the education to question anyone in those industries.Notmyusualid - Tuesday, September 26, 2017 - link
@ ReflexAaaand... check mate.
Well done.
[email protected] - Monday, September 25, 2017 - link
And did you notice that Big Blue has actually lost its marbles and its neurons are misfiring? Both the Core i9-7980XE and the Core i9-7960X have a TDP rating of 165W. However while the latter meets this TDP, the TCore i9-7980XE draws 190W at full load. That is a big no thanks also, when you consider 165W coolers are likely to be installed on the basis of the 165W TDP rating. We haven't even started over clocking yet, and it is likely this CPU will draw in excess of 350W, and one can only pray that thermal paste under the lid will play nice. Or did they really do something different this time around?ddriver - Monday, September 25, 2017 - link
Those are intel lies. Totally justified, because intel is rich. Not only are intel lies not bad, they are actually good. It makes you more intelligent if you believe in them. Only very intelligent people can get it.ddriver - Monday, September 25, 2017 - link
Curiously, no word of intel's AMAZING DUAL CORE HEDT i3-7360X here at AT. Lagging behind the cutting edge here :)Now that's a real game changer for intel. Although I wish they could launch a single core HEDT processor too. That's really where their portfolio is left gaping.
artk2219 - Monday, September 25, 2017 - link
Big blue is IBM BTW, intel is just intel, or if you want to call them anything else, go with "money grubbing, cheating, anti competitive, bastards who will screw everyone over for a buck in a heart beat". For short.[email protected] - Tuesday, September 26, 2017 - link
Sorry I meant to say Big Blue IIAndrewJacksonZA - Tuesday, September 26, 2017 - link
Or, you know, "Chipzilla."Just sayin'.
artk2219 - Friday, September 29, 2017 - link
Lol, chipzilla would also workdamianrobertjones - Saturday, September 30, 2017 - link
I've created countless videos, processed a lot of documents, but have never, ever, lost anything due to using standard non-ecc ram. Sure, in work, ALL of the servers use ecc but there's not even one standard desktop with the stuff. STILL no data loss. 32Gb at home and 64Gb in work.Yes, okay, I understand that ECC is for x and y, but is it 'really', REALLY, that important?
mapesdhs - Monday, September 25, 2017 - link
Just curious mmrezaie, why do you say "unofficially"? ECC support is included on specs pages for X399 boards.frowertr - Tuesday, September 26, 2017 - link
Run Unbound on a Pi or other Linux VM and block all thise adverts at the DNS level for all the devices on your LAN. I havent seen a site add anywhere in years from my home.Notmyusualid - Thursday, September 28, 2017 - link
@frowertrInteresting - But that won't work for me - I'm a frequent traveller, and thus on different LANs all the time.
But what works for me, is PeerBlock, then iblocklist.com for the Ad-server & Malicious lists and others, add Microsoft and any other entity I don't want my packets broadcast to (my Antivirus alerts me when I need updates anyway - and thus I temporarily allow http through the firewall for that type of occasion).
realistz - Monday, September 25, 2017 - link
This is why the "core wars" won't be a good thing for consumers. Focus on better single thread perf instead quantity.[email protected] - Monday, September 25, 2017 - link
On the contrary, single-threaded performance is largely a dead end until we hit quantum computing due to instability inherent to extremely high clock speeds. The core wars is exactly what we need to incentivize developers to improve multi-core scaling and performance: it represents the future of computing.extide - Monday, September 25, 2017 - link
Some things just can't be split up into multiple threads -- it's not a developer skill level or laziness issue, it's just the way it is. Single threaded speed will always be important.PixyMisa - Monday, September 25, 2017 - link
Maybe, but it's still a dead end. It's not going to improve much, ever.
HStewart - Monday, September 25, 2017 - link
As a developer for 30 years, I can say this is absolutely correct - especially for user interface logic, which includes graphics. Until the technology is truly able to multi-thread the display logic and display hardware, it is very important to have single-thread performance. I would think this is critically important for games, since they deal a lot with the screen. Intel has also done something very wise - and I believe they realize how important this is - by allowing some cores to go faster than others. Multi-core is basically hardware-assisted multi-threading, which is very dependent on application design - most of the time threads are used for background tasks. Another critical area is database logic - unless the database core logic is designed to be multi-threaded, you will need a single point of entry, and in some cases the database must be on the screen thread. Of course, with advancements in hardware to handle threading and such, it might be possible to overcome these limitations. But in NO WAY is this laziness on the developer's part - keep in mind a lot of software has years of development behind it, and completely rewriting the technology is a major and costly effort.
lilmoe - Monday, September 25, 2017 - link
There are lots of instances where I'd need summation and other complex algorithm results from millions of records in certain tables. If I'm going the traditional SQL route, it would take ages for the computation to return the desired values. I instead divide the load onto multiple threads to get a smaller set, on which I then perform some cleanup and final arithmetic. Lots of extra work? Yup. More RAM per transaction total? Oh yea. Faster? Yes, dramatically faster.
WPF was the first attempt by Microsoft to distribute UI load across multiple cores in addition to the GPU; it was so slow in its early days due to lots of inefficiencies and premature multi-core hardware. It's a lot better now, but much more work than WinForms, as you'd guess. UWP UI is also completely multithreaded.
Android is inching closer to having its UI completely multithreaded and separate from the main worker thread. We're getting there.
Both you and sonich are correct, but it's also a fact that developers are taking their sweet time to get familiar with and/or use these technologies. Some don't want to go that route simply because of technology bias and lock-in.
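A minimal sketch of the chunk-and-aggregate pattern described above, using Python's process pool purely for illustration (the original workload was SQL/.NET, so the data and names here are made up):

```python
# Sketch: split a large record set into chunks, aggregate each chunk on its own
# core, then combine the partial results. Illustrative stand-in for the
# SQL/.NET workload described above.
from concurrent.futures import ProcessPoolExecutor
from random import random

def partial_sum(chunk):
    # per-chunk "cleanup + arithmetic" stage
    return sum(value for value in chunk if value >= 0.0)

def parallel_total(records, workers=8):
    chunk_size = max(1, len(records) // workers)
    chunks = [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    data = [random() - 0.1 for _ in range(2_000_000)]  # fake "records"
    print(parallel_total(data))
```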
HStewart - Monday, September 25, 2017 - link
"Both you and sonich are correct, but it's also a fact that developers are taking their sweet time to get familiar with and/or use these technologies. Some don't want to that route simply because of technology bias and lock-in."That is not exactly what I was saying - it completely understandable to use threads to handle calculation - but I am saying that the designed of hardware with a single screen element makes it hard for true multi-threading. Often the critical sections must be lock - especially in a multi-processor system.
The best use of multi-threading and multi-CPU systems is actually in 3D rendering; this is where multiple threads can be used to distribute the load. It's been a while since I worked with Lightwave 3D and Vue, but in those days I would create a render farm - one of the reasons I purchased a dual Xeon 5160 ten years ago. But nowadays processors like the ones here could do the work of 10 or more normal machines on my farm (the Xeon was significantly more powerful than the P4s - it could pretty much do the work of 4 or more P4s back then).
ddriver - Monday, September 25, 2017 - link
You are living in a world of mainstream TV functional BS.
Quantum computing will never replace computers as we know and use them. QC is very good at a very few tasks, which classical computers are notoriously bad at. The same goes vice versa - QC sucks for regular computing tasks.
Which is OK, because we already have enough single-thread performance. And all the truly demanding tasks that require more performance due to their time-consuming nature scale very well, often perfectly, with the addition of cores, or even nodes in a cluster.
There might be some wiggle room in terms of process and material, but I am not overly optimistic seeing how we are already hitting the limits on silicon and there is no actual progress made on superior alternatives. Are they like gonna wait until they hit the wall to make something happen?
At any rate, in 30 years, we'd be far more concerned with surviving war, drought and starvation than with computing. A problem that "solves itself" ;)
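Both sides of this exchange can be put in numbers with Amdahl's law; a small sketch follows, with the serial fraction as a purely illustrative assumption rather than a measurement of any real workload:

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the fraction of
# the work that parallelizes. The values of p below are assumptions for
# illustration only.
def speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for p in (0.50, 0.95, 0.99):
    print(f"p = {p:.2f}: " + ", ".join(
        f"{n} cores -> {speedup(p, n):.1f}x" for n in (2, 8, 18, 32)))
```

A 95%-parallel task gets under 10x out of 18 cores, while a near-perfectly-parallel render scales almost linearly - which is roughly what both commenters are saying.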
SharpEars - Monday, September 25, 2017 - link
You are absolutely correct regarding quantum computing, and it is photonic computing that we should be looking towards.
Notmyusualid - Monday, September 25, 2017 - link
@SharpEars
Yes, as alluded to by IEEE. But I've not looked at it in a couple of years or so, and I think they were still struggling with an optical DRAM of sorts.
Gothmoth - Monday, September 25, 2017 - link
And what have they done for the past 6 years? I am glad that I get more cores instead of 5-10% performance per generation.
Krysto - Monday, September 25, 2017 - link
They would if they could. Improvements in IPC have been negligible since Ivy Bridge.
kuruk - Monday, September 25, 2017 - link
Can you add Monero (Cryptonight) performance? Since Cryptonight requires at least 2MB of L3 cache per core for best performance, it would be nice to see how these compare to Threadripper.
evilpaul666 - Monday, September 25, 2017 - link
I'd really like it if Enthusiast ECC RAM was a thing. I used to always run ECC on Athlons back in the Pentium III/4 days. Now, with 32-128x more memory that's running 30x faster, it doesn't seem like it would be a bad thing to have...
someonesomewherelse - Saturday, October 14, 2017 - link
It is. Buy AMD.
IGTrading - Monday, September 25, 2017 - link
I think we're being too kind to Intel.
Despite the article clearly mentioning it in a proper and professional way, the calm tone of the conclusion seems to legitimize and make it acceptable that Intel basically deceives its customers and ships a CPU that consumes almost 16% more power than its stated TDP.
THIS IS UNACCEPTABLE and UNPROFESSIONAL from Intel.
I'm not "shouting" this :) , but I'm trying to underline this fact by putting it in caps.
People could burn their systems if they design workstations and use cooling solutions for 165W TDP.
If AMD had done anything remotely similar, we would have seen titles like "AMD's CPU can fry eggs / system killer / motherboard breaker" and so on ...
On the other hand, when Intel does this, it is silently, calmly and professionally deemed acceptable.
It is my view that such a thing is not acceptable and these products should be banned from the market UNTIL Intel corrects its documentation or the power consumption.
The i9-7960X fits perfectly within its TDP of 165W; how come the i9-7980XE is allowed to run wild and consume 16% more?!
This is similar to the way people accepted every crappy design and driver failure from nVIDIA, even DEAD GPUs, while complaining about AMD's "bad drivers" that never destroyed a video card like nVIDIA did. See link : https://www.youtube.com/watch?v=dE-YM_3YBm0
This is not cutting Intel "some slack"; this is accepting shit, lies and mockery and paying 2000 USD for it.
For 2000$ I expect the CPU to run like a Bentley for life, not like a modded Mustang which will blow up if you expect it to work as reliably as a stock model.
whatevs - Monday, September 25, 2017 - link
What a load of ignorance. Intel TDP is *average* power at *base* clocks; it uses more power at all-core turbo clocks here. Disable turbo if that's too much power for you.
extide - Monday, September 25, 2017 - link
No, TDP should include Turbo as that is part of the base/stock operation mode of the CPU.
IGTrading - Monday, September 25, 2017 - link
TDP = Thermal Design Power by definition.
This is used to design the motherboard and the cooling system, to give designers a clear limit over which the system doesn't go unless it is purposely overclocked.
Wikipedia : "The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by a computer chip or component (often the CPU or GPU) that the cooling system in a computer is designed to dissipate under any workload."
Intel : "TDP (Thermal Design Power) Intel defines TDP as follows: The upper point of the thermal profile consists of the Thermal Design
Power (TDP) and the associated Tcase value. Thermal Design Power (TDP) should be used for
processor thermal solution design targets. TDP is not the maximum power that the processor can
dissipate. TDP is measured at maximum TCASE.1"
Intel : "Due to normal manufacturing variations, the exact thermal characteristics of each individual processor are unique. Within the specified parameters of the part, some processors may operate at a slightly higher or lower voltage, some may dissipate slightly higher or lower power and some may draw slightly higher or lower current. As such, no two parts have identical power and thermal characteristics.
However the TDP specifications represent a “will not exceed” value. "
This is what we've understood by TDP in the past 21 years while in IT hardware industry.
If you have a different definition, then perhaps we're talking about different things.
whatevs - Monday, September 25, 2017 - link
The specification for the 7980XE says: "Thermal Design Power (TDP) represents the average power, in watts, the processor dissipates when operating at Base Frequency with all cores active under an Intel-defined, high-complexity workload. Refer to Datasheet for thermal solution requirements."
There's a different specification for electrical design. This is not your ancient Xeon TDP.
IGTrading - Monday, September 25, 2017 - link
You mean the definition of TDP should change every year to suit Intel's marketing?! :)
"Ancient" Xeon TDP?! :)
I've quoted Intel's own definition.
If the company just came up with a NEW and DIFFERENT definition just for the Core i9 series, then that's just plain deceiving marketing, changing with the wind (read : new generation of products) .
Plus, why the heck are they calling it TDP ?!
If they now claim that TDP "represents the average power, in watts, the processor dissipates when operating at Base Frequency with all cores active " then they basically use AMD's ACP from 2011.
What a load of nonsense from Intel ...
https://www.intel.com/content/dam/doc/white-paper/...
whatevs - Monday, September 25, 2017 - link
You have quoted a 6-year-old Xeon definition; different products have different operating conditions, deal with it.
Spunjji - Monday, September 25, 2017 - link
Your name suggests that you're kind of a dick and your comments confirm it. Your point is weak and doesn't at all do the work you think it does.
whatevs - Monday, September 25, 2017 - link
You may be unhappy with what Intel promised you, but to claim that you could burn a system with the increased power usage from turbo clocks is ridiculous. Thermal throttling is not fire, and it's ridiculous to argue this about a CPU that can run overclocked at >400W power consumption.
Notmyusualid - Monday, September 25, 2017 - link
+1
wolfemane - Tuesday, September 26, 2017 - link
You can't talk rationally with a loyalist sympathizer. TDP is a set definition in the industry, and one Intel seems to be misleading about with their Extreme HEDT CPU. That seems to be a fact clearly made among almost all reviews of the 7980xe.
I think I read a few articles yesterday talking about how the 7980xe was having major issues and wasn't boosting correctly but was showing high power draw. But yesterday was a long time ago and I can't remember where I read that.
someonesomewherelse - Saturday, October 14, 2017 - link
So why not call it 'Average Design Power - ADP'?
Krysto - Monday, September 25, 2017 - link
Yes, it's total bullshit that they are misinterpreting what TDP is. I imagine this is how they'll get away with claiming a lower TDP than the real one in the 8700K chip, too, which has a low base clock speed but super-high Turbo Boost, which probably means the REAL TDP will go through the roof when that Turbo Boost is maximized.
This is how Intel will get to claim that its chips are still faster than AMD "at the same TDP" (wink wink, nudge nudge).
Demigod79 - Monday, September 25, 2017 - link
"What a load of ignorance. Intel tdp is *average* power at *base* clocks, uses more power at all core turbo clocks here. Disable turbo if that's too much power for you."I find it ironic that you would call someone ignorant, then reveal your own ignorance about the TDP and turbo clocks.
Spunjji - Monday, September 25, 2017 - link
It is now, it wasn't before. Wanna bet on how many people noticed?
SodaAnt - Monday, September 25, 2017 - link
I'm quite curious what happens if your system cooling simply can't handle it. I suspect if you designed a cooling solution which only supported 165W the CPU would simply throttle itself, but I'm curious by how much.
ZeDestructor - Monday, September 25, 2017 - link
Strictly speaking, all forms of Turbo boost are a form of vendor-sanctioned overclocking. The fact that measured power goes beyond TDP when at max all-core turbo should really not be all that surprising. The ~36% increase in power for a ~31% increase in clocks is pretty reasonable and in line when you keep that in mind. Especially when you factor in that there has to have been a bit of extra voltage added for stability reasons (power scales linearly with clocks and current, and quadratically to exponentially with voltage).
Demigod79 - Monday, September 25, 2017 - link
I agree. Everything looked good until that page. 190 watts is unacceptable, and Intel needs to correct this right away - either make the CPU run within the TDP limit, or update the TDP to 190 watts in the specs.
HStewart - Monday, September 25, 2017 - link
It's funny that people complain about CPU watts but never about external GPU watts. Keep in mind the GPU is a smaller amount of area.
artk2219 - Monday, September 25, 2017 - link
They most certainly do; that is one of the biggest gripes against Vega 64. People do seem to have a short memory of how high GPU TDPs used to be, however.
IGTrading - Tuesday, September 26, 2017 - link
On a video card, the same manufacturer takes responsibility for the GPU, cooling system, design, PCB, components and warranty.
On the CPU, you have somebody else designing the cooling system, the motherboard, the power lines, and they all have to offer warranty for their components while Intel is only concerned with the CPU.
If the CPU is throttling or burnt out, they will say "sufficient cooling was not provided" and so on ...
It is a whole lot different.
whatevs - Tuesday, September 26, 2017 - link
Thermal throttling is not a burn-out and not a warranty event; you don't get to warranty your GPU when it throttles under load; cooling warranty does not include CPU/GPU chip performance; and Intel designed the ATX specification and the electrical specification for the boards.
You clearly don't know the things you're talking about.
IGTrading - Tuesday, September 26, 2017 - link
Thanks man, after 21 years in IT hardware I don't know ;)
Have a fun life and enjoy your "wisdom" :)
whatevs - Tuesday, September 26, 2017 - link
Seeing these new cpus released, sold and used, I think Intel has a better idea of what it is doing than you.
Good luck competing with Intel in your "experience in the industry" category.
0ldman79 - Wednesday, September 27, 2017 - link
I'm sure he'll be fine.
He was here before the "165W" chip and I'm sure he'll be here long after it is gone, same as me.
ZeDestructor - Monday, September 25, 2017 - link
Laptops and tablets break TDP all the time under Turbo loads. I don't see anyone bitching there...
0ldman79 - Wednesday, September 27, 2017 - link
It's really no different than if a car was sold with inadequate cooling.
"Average" heat production at normal speeds is fine, but if you actually come close to using the 300HP the engine produces by, I dunno, pulling a trailer at those same speeds, it will overheat and you'll have to pull over and let it cool.
But sure, it's Intel, so it's cool...
HStewart - Monday, September 25, 2017 - link
I have a still-running dual Intel Xeon 3GHz 5160 and my biggest complaint is that the box is huge. This machine is 10 years old, has 8GB of memory and about 5TB of storage. Its CPUs alone cost around $2000, and in your terms it's like the Bentley, or my 2000 Toyota Tundra with the Lexus engine and 240,000 on it. In essence you get what you pay for.
wolfemane - Tuesday, September 26, 2017 - link
Hate to break it to ya but that Lexus motor IS a Toyota motor. And by going Lexus you way overpaid for a Toyota.
Garf75 - Monday, September 25, 2017 - link
Ian, why are there no temperatures posted?
extide - Monday, September 25, 2017 - link
Probably because they are highly dependent on the cooler used and the environment it is in. Not really relevant to an article like this.
Garf75 - Monday, September 25, 2017 - link
Seriously? As a customer I would want to know if my cooling system is adequate for the job if I'm pushing the CPU.
Spunjji - Monday, September 25, 2017 - link
Super relevant, because they indicate how badly thermally limited the CPU is - which is hella good info to have if you're, say, considering delidding a $1999 processor because the manufacturer used toothpaste under the IHS.
tricomp - Monday, September 25, 2017 - link
Poor AMD... No chance they are going to keep up with (even more) CPU demand after posting this article..
I am trying to purchase at least 7 systems for my customers in my country, but I can't find those beasts anywhere here..
iwod - Monday, September 25, 2017 - link
I wish someone could do an article on that too. GF doesn't seem to be the limitation here. GF should, in theory, have more than enough capacity in their Fab 8 for AMD. Unless GF has some other big customers, AMD should really be pumping out as many units as possible.
Atom11 - Monday, September 25, 2017 - link
Can we please see one test (!), if you could possibly manage it, that shows the advantage of AVX-512 compared to AVX2 when doing the following (see the sketch after this list):
1.) matrix multiply
2.) FFT
3.) convolution
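In the absence of such a test in the article, here is a rough, hedged sketch of how one could compare the two with NumPy. Whether AVX-512 is actually used depends entirely on the BLAS/FFT library the interpreter is linked against; MKL_ENABLE_INSTRUCTIONS is an Intel MKL knob and only applies to MKL builds, so treat the whole thing as illustrative rather than a rigorous benchmark:

```python
# Rough sketch: time matmul, FFT and convolution with NumPy. Run once normally
# and once with the ISA capped (e.g. MKL_ENABLE_INSTRUCTIONS=AVX2 for MKL
# builds) and compare. Sizes are arbitrary; results depend on the backend.
import time
import numpy as np

def timeit(fn):
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

def bench(label, fn, repeats=5):
    best = min(timeit(fn) for _ in range(repeats))
    print(f"{label:12s} {best * 1e3:8.1f} ms")

a = np.random.rand(4096, 4096).astype(np.float32)
b = np.random.rand(4096, 4096).astype(np.float32)
sig = np.random.rand(1 << 22).astype(np.float32)
kern = np.random.rand(255).astype(np.float32)

bench("matmul", lambda: a @ b)
bench("fft", lambda: np.fft.rfft(sig))
bench("convolve", lambda: np.convolve(sig[:1 << 18], kern, mode="same"))
```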
ZeDestructor - Monday, September 25, 2017 - link
Give us a comparison to AVX1 and SSE4 too!
Gothmoth - Monday, September 25, 2017 - link
Threadripper delivers 80+% of the performance for less than 50% of the price... you don't have to be a genius to see what the better deal is (price Germany: TR 1950X = 950 euro, 7980XE = 2300 euro).
Spunjji - Monday, September 25, 2017 - link
Don't let that stop them equivocating about how companies who need that power yet somehow have no need for ECC don't care about cost because something something software TCO blah blah.
spdragoo - Monday, September 25, 2017 - link
I'm trying really, really hard to think of a company that, at some point or another, doesn't say, "Equipment X may outperform Equipment Y, but the extra cost to buy Equipment X is too much, we'll just make do with Y instead." Especially since 100% of companies have a limit on their budgets.
What's that, you say? Multi-billion dollar corporations don't have to worry about the money they spend? Someone apparently didn't pay attention in their Econ 200 class, or their Introduction to Accounting coursework.
By definition, every business has a *finite* amount of money they can spend, based on a) how much money they collect from their customers, b) how much they can recoup on the sale of assets (tangible or intangible), & c) how much they can get from "other sources" (mostly bank loans or by selling stock shares, or sometimes government grants, but you might find the occasional situation where a generous benefactor just bequeaths money to a company...but I doubt you'll even see that happen to 1% of the companies out there -- & no, venture capitalists pouring money into a company is *not* a situation where they "give the money away", they're getting something for their money, usually stock shares or guarantees of repayment of the loans). Of that money, some of it is earmarked for employee compensation (not just the executives, but the office drones & lower-level employees that do 99% of the actual work), some of it goes towards taxes, some of it pays for rental payments, some for loan payments, some for utilities (telephone, Internet, electricity, gas, water, etc.), some of it may get set aside for "emergencies", some gets earmarked for dividends to the shareholders, etc. That means that a (relatively) small portion is set aside for "equipment replacement". Now, if the company is lucky, the lion's share of that budget is for IT-related equipment...but that covers more than just the office drones' machines, that covers everything: server racks, storage services, cloud vendor payments, etc.
And that is where the price comes into play. For probably 90% of office users out there, not only is Threadripper an overpowered product, so are these products. Heck, we're in the middle of an upgrade from Windows 7 to Windows 10, & they're taking the opportunity to replace our old Sandy Bridge i5 machines with Skylake i7 machines. Sure, they're running faster now...but the main reason they're running faster is because we went from 32-bit Windows to 64-bit Windows, so our PCs now have 8GB of RAM instead of 4GB. That helps with our workload...which primarily revolves around MS Office & using browsers to access & modify a number of massive databases. Having an 8C/16T CPU, let alone a 16C/32T CPU, wouldn't provide any boost for us, since the primary slowdown is on the server side.
These are going to be expensive systems for specialized purposes...& those individual companies are going to look at their budgets very closely, as well as the performance benchmarks, before deciding to purchase these systems. Sure, they may hold the performance crown...but not by that big of a margin, & especially when compared to the margin that gives them the "most expensive price" crown.
BrokenCrayons - Monday, September 25, 2017 - link
Human labor is more expensive than hardware. The 20% additional performance for $1000 more can be earned back quickly by the increased productivity of your workforce (assuming your management staff is effective enough to keep the employees gainfully employed of course, and that's certainly not always the case).
vladx - Tuesday, September 26, 2017 - link
Indeed, the difference in price is pretty much negligible in a professional setting.
Notmyusualid - Tuesday, September 26, 2017 - link
@vladx
Try telling that to the fanbois here...
For some of us, price is not the main consideration.
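The labor-cost point made above (20% more performance for $1000 more) is easy to sanity-check; a back-of-the-envelope sketch with purely illustrative salary and utilization numbers:

```python
# Illustrative payback estimate for spending $1000 more on a ~20% faster CPU.
# Every number here is an assumption for the sake of the example.
extra_cost = 1000.0          # price premium, USD
speedup = 0.20               # assumed throughput gain on CPU-bound work
loaded_salary = 120_000.0    # assumed fully loaded cost of the employee, USD/year
cpu_bound_share = 0.25       # assumed fraction of the workday spent waiting on the CPU

hours_per_year = 2000.0
hourly_cost = loaded_salary / hours_per_year
hours_saved_per_year = hours_per_year * cpu_bound_share * (speedup / (1 + speedup))
value_per_year = hours_saved_per_year * hourly_cost
print(f"~${value_per_year:,.0f} of time recovered per year; "
      f"payback in ~{extra_cost / value_per_year * 12:.1f} months")
```

With these made-up inputs the premium pays back in a few months; with a workload that rarely waits on the CPU, it never does - which is exactly the disagreement in this sub-thread.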
HStewart - Monday, September 25, 2017 - link
I have a big issue with the latest performance results - especially dealing with multi-core performance. What is most important is single-core performance - this is primarily because real applications, and not benchmark applications, use the primary thread more than secondary threads. Yes, the secondary threads do help in calculations and such - but what matters most, especially in graphical applications, is using the primary thread. Plus quality is important - I'm not sure AMD is going to last in this world, because they seem to have a very limited focus.
HStewart - Monday, September 25, 2017 - link
Especially GeekBench - I don't trust it at all - realistically, can an ARM processor beat a huge Xeon processor? Give me a break - let's be realistic.
Krysto - Monday, September 25, 2017 - link
Intel is failing hard at competing, if it can only release chips that cost twice as much as AMD's for only a slight improvement in performance.
vladx - Tuesday, September 26, 2017 - link
On the contrary, these chips will sell very well since they aren't geared towards "prosumers" but big businesses, where every minute wasted could mean thousands of $$ lost.
willis936 - Monday, September 25, 2017 - link
That performance per dollar page is amazing. I could look at graphs like that comparing all types of components and configurations against different workloads all day.
Judas_g0at - Monday, September 25, 2017 - link
I didn't see anything about temps and thermals. For 2k, does Intel give you crappy thermal compound, or solder?
Spunjji - Monday, September 25, 2017 - link
Spoiler alert: I hope you like cleaning toothpaste off your $1999 CPU.
shabby - Monday, September 25, 2017 - link
Lol platform pcie lanes, good one intel, how many does threadripper have in this case?
DanNeely - Monday, September 25, 2017 - link
TR has 60 platform PCIe3 lanes, 68 total if you count the 8 half-speed PCIe2 lanes on the X399 chipset.
mapesdhs - Tuesday, September 26, 2017 - link
In that case, using Intel's MO, TR would have 68. What Intel is doing here is very misleading.iwod - Monday, September 25, 2017 - link
If we factor in the price of the whole system, rather than just the CPU (AMD's MB tends to be cheaper), then AMD is doing pretty well here. I am looking forward to next year's 12nm Zen+.
peevee - Monday, September 25, 2017 - link
From the whole line, only the 7820X makes sense from a price/performance standpoint.
boogerlad - Monday, September 25, 2017 - link
Can an IPC comparison be done between this and Skylake-S? Skylake-X LCC lost in some cases to Skylake, but is it due to lack of L3 cache or is it because the L3 cache is slower?
IGTrading - Monday, September 25, 2017 - link
There will never be an IPC comparison of Intel's new processors, because all it would do is showcase how Intel's IPC actually went down from Broadwell and further down from Kaby Lake.
Intel's IPC is on a downtrend, and this is not really good for clicks and internet traffic.
Even worse: it would probably upset Intel's PR, and that website would surely not be receiving any early review samples.
rocky12345 - Monday, September 25, 2017 - link
Great review, thank you. This is how a proper review is done. Those benchmarks we saw of the 18-core i9 last week were a complete joke, since the guy had the chip overclocked to 4.2GHz on all cores, which really inflated the scores vs a stock Threadripper 16/32 CPU. Which was very unrealistic from a cooling standpoint for the end users.
This review was stock for stock, and we got to see how both CPU camps performed in their out-of-the-box states. I was a bit surprised the mighty 18-core CPU did not win more of the benches, and when it did it was not by very much most of the time. So a 1K CPU vs a 2K CPU, and the mighty 18-core did not perform like it was worth 1K more than the AMD 1950X, or the 1920X for that matter. Yes, the mighty i9 was a bit faster, but not $1000 faster, that is for sure.
Notmyusualid - Thursday, September 28, 2017 - link
I too am interested in seeing 'out of the box' performance.
But if you think ANYONE would buy this and not overclock - you'd have to be out of your mind.
There are people out there running 4.5GHz on all cores, if you look for it.
And what is with all this 'unrealistic cooling' I keep hearing about? You fit the cooling that fits your CPU. My 14C/28T CPU runs 162W 24/7 running BOINC, and is attached to a 480mm 4-fan all-copper radiator, and hand on my heart, I don't think it has ever exceeded 42C, and it sits at 38C mostly.
If I had this 7980XE, all I'd have to do is increase pump speed I expect.
wiyosaya - Monday, September 25, 2017 - link
Personally, I think the comments about people that spend $10K on licenses having the money to go for the $2K part are not necessarily correct. Companies will spend that much on a license because they really do not have any other options. The high-end Intel part gets 30 to maybe 50 percent more performance on a select few benchmarks. I am not going to debate that that kind of performance improvement is significant, even though it is limited to a few benchmarks; however, to me that kind of increased performance comes at an extreme price premium, and companies that do their research on the capabilities of each platform vs price are not, IMO, likely to throw away money on a part just for bragging rights. IMO, a better place to spend that extra money would be on RAM.
HStewart - Monday, September 25, 2017 - link
In my last job, they spent over $100k for a software versioning system.
In the workstation/server world they are looking for reliability; this typically means Xeon.
Gaming computers are different: usually kids want them and have less money, and they always need the latest and greatest without caring about reliability - a new graphics card comes out, they replace it. AMD is focusing on that market - which includes the Xbox One and PS4.
For me, I'm looking for something I can depend on and know will be around for a while. Not something that slaps multiple dies together to claim bragging rights for more cores.
Competition is good, because it keeps Intel on its feet. I think if AMD had not purchased ATI there would be no competition for Intel at all in the x86 market. But it's not all smart either - would anybody be serious about placing an AMD graphics card on an Intel CPU?
wolfemane - Tuesday, September 26, 2017 - link
Hate to burst your foreign bubble, but companies are cheap in terms of staying within budgets. Especially up-and-coming corporations. I'll use the company I work for as an example. Fairly large print shop with 5 locations along the US West coast that's been in existence since the early 70's. About 400 employees in total. Servers, PCs, and general hardware only see an upgrade cycle once every 8 years (not all at once, it's spread out). Computer hardware is a big deal in this industry, and the head of IT for my company has done pretty well with this kind of hardware life cycle. First off, Macs rule here for preprocessing; we will never see a Windows-based PC for anything more than accessing the Internet. But when it comes to our servers, they're running some very old Xeons.
As soon as the new fiscal year starts, we are moving to an EPYC-based server farm. They've already set up and established their offsite client-side servers with EPYC servers, and IT absolutely loves them.
But why did I bring up macs? The company has a set budget for IT and this and the next fiscal year had budget for company wide upgrades. By saving money on the back end we were able to purchase top end graphic stations for all 5 locations (something like 30 new machines). Something they wouldn't have been able to do to get the same layout with Intel. We are very much looking forward to our new servers next year.
I'd say AMD is doing more than keeping Intel on their feet, Intel got a swift kick in the a$$ this year and are scrambling.
mapesdhs - Monday, September 25, 2017 - link
Ian, thanks for the great review! Very much appreciate the initial focus on productivity tasks, encoding, rendering, etc., instead of games. One thing though, something that's almost always missing from reviews like this (ditto here): how do these CPUs behave for platform stability with max RAM, especially when oc'd?
When I started building oc'd X79 systems for prosumers on a budget, they often wanted the max 64GB. This turned out to be more complicated than I'd expected, as reviews and certainly most oc forum "clubs" achieved their wonderful results with only modest amounts of RAM, in the case of X79 typically 16GB. Mbd vendors told me published expectations were never with max RAM in mind, and it was "normal" for a mbd to launch without stable BIOS support for a max RAM config at all (blimey). With 64GB installed (I used two GSkill TridentX/2400 4x8GB kits), it was much harder to achieve what was normally considered a typical oc for a 3930K (mbd was the ASUS P9X79 WS, basically an R4E but with PLEX chips and some pro features), especially if one wanted the RAM running at 2133 or 2400. Talking to ASUS, they were very helpful and advised on some BIOS tweaks not mentioned in their usual oc guides to specifically help in cases where all RAM slots were occupied and the density was high, especially a max RAM config. Eventually I was able to get 4.8GHz with 64GB @ 2133. However, with the help of an AE expert (this relates to the lack of ECC I reckon), I was also able to determine that although the system could pass every benchmark I could throw at it (all of toms' CPU tests for that era, all 3DMark, CB, etc.), a large AE render (gobbles 40GB RAM) would result in pixel artefacts in the final render which someone like myself (not an AE user) would never notice, but the AE guy spotted them instantly. This was very interesting to me and not something I've ever seen mentioned in any article, ie. an oc'd consumer PC can be "stable" (benchmarks, Prime95 and all the rest of it), but not correct, ie. the memory is sending back incorrect data, but not in a manner that causes a crash. Dropping the clock to 4.7 resolved the issue. Tests like P95 and 3DMark only test parts of a system; a large AE render hammered the whole lot (storage, CPU, RAM and three GTX 580s).
Thus, could you or will you be able at some point to test how these CPUs/mbds behave with the max 128GB fitted? I suspect you'd find it a very different experience compared to just having 32GB installed, especially under oc'd conditions. It stresses the IMCs so much more.
I note the Gigabyte specs page says the mbd supports up to 512GB with Registered DIMMs; any chance a memory corp could help you test that? Mind you, I suspect that without ECC, the kind of user who would want that much RAM would probably not be interested in such a system anyway (XEON or EPYC much more sensible).
Ian.
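On the "stable but not correct" point above: a crude sketch of a write-then-verify pass over a large buffer is below. It is far weaker than the 40GB After Effects render described (it exercises nowhere near the same mix of CPU, GPU and storage load), and the buffer size is just an illustrative placeholder:

```python
# Crude memory write/verify pass: fill a large buffer with a seeded pseudo-random
# pattern, regenerate the same pattern from the same seed, and compare. A mismatch
# on an otherwise "stable" overclock is the kind of silent corruption described above.
import numpy as np

def verify_pass(gib: int = 2, seed: int = 1234) -> bool:
    n_bytes = gib * (1 << 30)
    pattern = np.random.default_rng(seed).integers(0, 256, size=n_bytes, dtype=np.uint8)
    check = np.random.default_rng(seed).integers(0, 256, size=n_bytes, dtype=np.uint8)
    return bool(np.array_equal(pattern, check))   # any bit flip -> False

if __name__ == "__main__":
    print("pattern verified OK" if verify_pass()
          else "MISMATCH: possible memory error")
```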
peevee - Monday, September 25, 2017 - link
"256 KB per core to 1 MB per core. To compensate for the increase in die area, Intel reduced the size of the size of the L3 from 2.5 MB per core to 1.375 MB per core, keeping the overall L2+L3 constant"You might want to check your calculator.
tygrus - Monday, September 25, 2017 - link
Maybe Intel saw the AMD TR numbers and had to add 10-15% to their expected freqs. Sure, there is some power that goes to the CPU which ends up in RAM et al., but these are expensive room heaters. Intel marketing bunnies thought 165W looked better than 180W to fool the customers.
eddieobscurant - Monday, September 25, 2017 - link
Wow! Another pro-Intel review. I was expecting this, but having graphs displaying Intel's perf/$ advantage, just wow, you've really outdone yourselves this time.
Of course, I wanted to see how long you are gonna keep delaying the gaming benchmarks of Intel's Core i9 due to the mesh arrangement's horrid performance. I guess you're expecting game developers to fix what can be fixed. It's been several months already, but on Ryzen you were displaying a few issues since day 1.
You tested AMD with 2400MHz RAM, when you know that performance is affected by anything below 3200MHz.
Several different intel cpus come and go into your graphs only to show that a different intel cpu is better when core i9 lacks in performance and an amd cpu is better.
Didn't even mention the negligible performance difference between the 7960X and 7980XE. Just take a look at the Phoronix review.
Can this site even get any lower? Anand's name is the only thing keeping it afloat.
mkaibear - Tuesday, September 26, 2017 - link
Erm, there are five graphs on the performance/$ page, and three of them show AMD with a clear price/$ advantage in everything except the very top end and the very bottom end (and one of the other two is pretty much a tie)....how can you possibly call that a pro-Intel review?
wolfemane - Tuesday, September 26, 2017 - link
And why the heck would you want game reviews on these CPUs anyways? By now we KNOW what the results are gonna be and they won't be astonishing. And more than likely they will be under a 7700K. Game benchmarks are utterly worthless for these CPUs, and any kind of surprise by the reader at their lack of overall in-game performance is the reader's fault for not paying attention to previous reviews.
Notmyusualid - Tuesday, September 26, 2017 - link
Sorry to distract gents (and ladies?), and even though I am not a fan of liquid nitrogen, here:
http://www.pcgamer.com/overclocked-core-i9-7980xe-...
gagegfg - Tuesday, September 26, 2017 - link
EPYC 7551P vs Core i9-7980XE
That is the true comparison, or not?
$2000 vs $2000
IGTrading - Tuesday, September 26, 2017 - link
That's a perfectly valid comparison, with the exception of the fact that Intel's X299 platform will look completely handicapped next to AMD's EPYC-based solution and it will have just half of the computational power.
CrazyHawk - Tuesday, September 26, 2017 - link
"Intel also launched Xeon-W processors in the last couple of weeks."Just where can one purchase these mythical Xeon-W processors? There hasn't been a single peep about them since the "launch" week. I've only heard of two motherboards that will support them. They seem to be total vaporware. On Intel's own site, it says they were "Launched" in 3Q2017. Intel had better hurry up, 3Q will be up in 4 days!
samer1970 - Tuesday, September 26, 2017 - link
I don't understand why Intel disables ECC on their i9 CPUs; they are losing low-budget workstation buyers who will 100% choose AMD Threadripper over an Intel i9.
Even if they are doing this to protect their Xeon chips, they can enable unbuffered ECC and not allow registered ECC on the i9 - problem solved. Unbuffered ECC has size limitations, and people who want more RAM will go for Xeons.
Remember that their i3 has ECC support, but only the i3...
Intel, you are stupid.
vladx - Wednesday, September 27, 2017 - link
Newsflash: these chips don't target "low budget workstation buyers". The golden rule is always: "If you can't afford it, you're not the target customer."
samer1970 - Wednesday, September 27, 2017 - link
That's not a golden rule anymore with the Threadripper chips around. It is called the "stupid rule"... They are allowing AMD to steal the low-budget workstation buyers by not offering them an alternative to choose from.
vladx - Wednesday, September 27, 2017 - link
The "low budget workstation buyers" as you call them are a really insignificant percentage of an already really small piece of the huge pie of Intel customers.samer1970 - Wednesday, September 27, 2017 - link
Who told you so? Most engineering students at universities need one, and art students who render a lot as well. All these people will buy a Threadripper CPU and avoid Intel, for Intel Xeons are 50% more expensive.
And I don't care about the percentage of Intel's pie... hundreds of thousands of students enter universities around the world each year. Low percentage or not, they are a lot...
How much do you think a low-budget workstation costs? They start from $3000... and with Xeon pricing, it will be very difficult to add a lot of RAM, a good workstation card and a fast SSD.
esi - Wednesday, September 27, 2017 - link
What's the explanation for some of the low scores of the 7980XE on the SPECwpc benchmarks? Particularly Poisson, where the 6950X is 3.5X higher.
ZeDestructor - Wednesday, September 27, 2017 - link
Most likely cache-related
esi - Wednesday, September 27, 2017 - link
Maybe. But one that really makes no sense is the Dolphin 5.0 render test. How can the 7980XE take nearly twice as long as the 7960X?
esi - Wednesday, September 27, 2017 - link
So I ran the Poisson benchmark on my 6950X. It uses all 10 cores (20 h/w threads), but can be configured to run in different ways: you can set the number of s/w threads per process. It then creates enough processes to ensure there's one s/w thread per h/w thread. Changing the s/w threads per process significantly affects the result:
20 - 1.34
10 - 2.5
5 - 3.31
4 - 3.47
2 - 3.67
1 - 0.19
Each process only uses about 2.5MB of RAM. So the 1-thread per process probably has a low result as this will result in more RAM usage than L3 cache, whereas the others should all fit in.
Would be interesting to see what was used for the 7980/7960. Perhaps the unusual number of cores resulted in a less than optimal process/thread mapping.
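The cache explanation above checks out arithmetically; a quick sketch of the working-set numbers (the 2.5MB-per-process figure is the one measured above, and 25MB is the 6950X's L3 spec):

```python
# Working set vs. L3 for the Poisson run on a 10C/20T 6950X (25 MB L3),
# assuming ~2.5 MB per process as measured above.
HW_THREADS, L3_MB, MB_PER_PROC = 20, 25.0, 2.5

for threads_per_proc in (20, 10, 5, 4, 2, 1):
    processes = HW_THREADS // threads_per_proc
    working_set = processes * MB_PER_PROC
    fits = "fits in L3" if working_set <= L3_MB else "spills to RAM"
    print(f"{threads_per_proc:2d} threads/process -> {processes:2d} processes, "
          f"{working_set:5.1f} MB ({fits})")
```

Only the 1-thread-per-process case (20 processes, 50 MB) overflows the 25 MB L3, which lines up with it being the outlier score.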
tamalero - Wednesday, September 27, 2017 - link
Hey guys, question.. Toms and others have mentioned that they HAD to use watercooling to keep this thing stable.
Did the same happen to your sample? Wouldn't that increase the "cost of ownership" of the Intel part even more?
I mean, the mobo, the ram, the watercooling kit and then the hefty processor?
samer1970 - Wednesday, September 27, 2017 - link
Water cooling is for overclocking only... you will be okay using a 170W-TDP-rated air cooler if you don't OC.
0ldman79 - Wednesday, September 27, 2017 - link
I'm going to grab another cup-o-coffee and read it again, but on the performance per dollar: AMD costs about half as much as Intel for several comparable models, so how does Intel have better performance per dollar on so many of those graphs?
Admittedly my kids are driving me nuts and I've been reading this for two days now trying to finish...
silvertooth82 - Thursday, September 28, 2017 - link
If this is all true... let's say thanks to AMD for poking Intel
AnnonymousCoward - Friday, September 29, 2017 - link
Very nice review. So compared to a 6700K/7700K, the 18-core beast is marginally slower in single-thread, and only 2-3x faster in multi-thread.
I found the time difference when opening the big PDF to be the most interesting chart. 65W Ryzens take a noticeable extra second.
Exceeding the published TDP sounds like lawsuit territory.
nufear - Monday, October 2, 2017 - link
Price for the Intel Core i9-7980XE and Core i9-7960X:
In my opinion, I cannot justify spending an extra $700~1k on these processors. The performance gains weren't that significant.
rwnrwnn7 - Wednesday, October 4, 2017 - link
AVX-512 - what software works with it? What is it used for today?
DoDidDont - Friday, October 27, 2017 - link
Would have been nice to see the Xeon Gold 6154 in the test. 18 cores / 36 threads and apparently an all-core turbo of 3.7GHz, plus the advantage of adding a second one on a dual-socket mobo.
Planning a pair of 6154's on either an Asus WS C621E or a Supermicro X11DPG-QT and a quad-GPU setup.
My 5 year old dual E5-2687w system scores 2298 in Cinebench R15, which has served me well and paid for itself countless times over, but having dual 6154's will bring a huge smile to the face for V-ray production rendering.
My alternative is to build two systems on the i9-7980XE, one for content creation, single CPU, single GPU, and the other as a GPU workhorse for V-ray RT, and Iray, single CPU, Quad GPU+ to call on when needed.
So the comparison would have been nice for the various tests performed.
sharath.naik - Sunday, December 3, 2017 - link
Isn't there supposed to be part 2!!!
lewipro - Friday, March 23, 2018 - link
I wonder if AnandTech is considering adding TensorFlow as part of the CPU and GPU benchmark suite?
I'm a PhD student in computer science and a lot of us are using TensorFlow for research, so we are interested in the performance of CPUs/GPUs on TensorFlow.
Thanks!