It used to be that AMD would release specs, and the chips they sent out would not come close to them. After Bulldozer, they stopped sending out pre-release chips for testing.
These are going to perform like Threadripper with Zen+. There aren't going to be any major surprises. The architecture is already well-known. It looks like the only question about these is how the memory allocation will affect certain benchmarks.
First page has 2800X in the AMD SKUs table, with the specs of a 2700X; and what appears to be the price of the 1800X ($419; the URL has 1800x) from last year listed for the 2700X in the "Threadripper 2 vs Skylake-X: The Battle (Sorted by Price)" table.
Putting an asterisk on the AMD supplied number in the chart is of only limited value if you don't have a matching footnote. I'm pretty sure I've seen the same problem on some of your other charts/graphs before.
I initially put the asterisk in to signify it was different, then added "AMD result" in case people took the graph without the context, and forgot to remove the asterisk. Should be fixed now.
I'm still a bit confused about what you did and why, because I think an unverified result from the OEM (or at least one you can't say is or isn't representative of your internal testing for a few more days) should be called out if it's put in a chart alongside results you and/or other testers generated directly.
Cinebench is close to a best-case scenario for AMD's 32 cores.
Not saying that's a bad thing, but you should expect the gap to shrink once you use other benchmarks. Cinebench scales very well with cores, and not only that: it also likes Ryzen more than Intel, unlike most other benchmarks. It's no surprise that AMD started using that benchmark in their advertising after they released Ryzen.
Every company will use a benchmark that shows their product in the best light. However, I'm interested in knowing what the average turbo speeds will be at each core count, and what the difference will be between that 500 W AIO and stock cooling.
Not sure why you've concluded that CB R15 favors Ryzen: the 16-core/32-thread 7960X (barely) beats the 1950X, which seems about right to me. With all 16 cores (32 threads) saturated, the 1950X tops out at 3.4 GHz turbo, and the 7960X is somewhere around 3.6 GHz.
You should expect things to be even worse if you use something like H.265 encoding, because it does not scale with cores the way rendering benchmarks do.
While the 2990WX is going to be faster than Intel's 7980XE overall, the gap is not going to be 50% or even close. I expect it to be maybe 30% faster (if not less) on average.
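For a rough sense of why a ~30% average gap is plausible even with 78% more cores, here is a back-of-the-envelope Amdahl's-law sketch. The 95% parallel fraction and equal per-core throughput are assumptions, purely for illustration:

```python
# Rough Amdahl's-law sketch of the 2990WX (32C) vs 7980XE (18C) gap.
# The 0.95 parallel fraction and equal per-core throughput are assumptions,
# not measured numbers.
def amdahl_speedup(cores, parallel_fraction):
    """Speedup over one core for a workload with the given parallel fraction."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

p = 0.95
s32 = amdahl_speedup(32, p)
s18 = amdahl_speedup(18, p)
# Even with 78% more cores, the serial 5% caps the gap at roughly 29%.
print(f"32 cores: {s32:.1f}x, 18 cores: {s18:.1f}x, gap: {(s32 / s18 - 1) * 100:.0f}%")
```

With a less parallel workload the gap shrinks further, which is the point being made above.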
R15 is very well multithreaded, if not the ideal indicator of real-world performance. So would we really expect an 8700K to dominate a 2600X in an ideally multithreaded benchmark?
Yes, every company puts their best foot/benchmark forward. The 7920X/7940X/7960X chips from Intel will still have a serious advantage in any application that can really utilize AVX-512, as that provides a huge performance boost. Unfortunately for Intel, using AVX-512 also makes their chips run really, really hot (might be time for Intel to invest in some better thermal solutions for their pricey chips). Ultimately, it still boils down to: what are you using your workstation for?
Even if it was barely more than half the cores that are less than 15 percent stronger? Come on, even your 35-year-old scalar, nonpipelined processor with no branch prediction will quickly tell you that you will be far behind on nearly every workstation task.
Make a list of the top 15 reasons people who actually do work could use a high-end workstation to take care of business. Now the question: will the Core i9-7980XE be faster at any single one?
People here, and on most tech sites for that matter, keep thinking of these processors in terms of gaming. That's obviously not what they're designed for.
@digitalfreak That is quite correct and true. Even the 1st-gen Threadripper was not marketed for pure gaming; it was aimed more at content creators and those who want to stream games professionally. On another note, I have cut back on visiting the sister site Tom's, because AnandTech has a less FPS-centric editorial outlook.
This is 100% true. I blame a lot of it on the tech sites (not AnandTech, of course) that try to focus on gaming with CPUs that are clearly made for more than just gaming. Yes, they can game, but CPUs like this are designed for so much more. When AnandTech's review comes out, I am sure they will have the proper tests done for CPUs like this, and they may also throw in a few games just to show that these CPUs can do a bit of gaming, which is fine.
"Make a list of the top 15 reasons people who actually do work could use a high-end workstation to take care of business. Now the question: will the Core i9-7980XE be faster at any single one?"
I'm in the middle of a rush rollout of quad-core machines to replace several tens of thousands (worldwide; only a 'few' thousand in this building) of dual-octa-core-CPU workstations, because before purchase and rollout, nobody bothered to look at users' actual workloads. Turns out threaded workloads were exceptionally rare, so all the monster workstations were utterly worthless in real-world performance compared to the 'low spec' machines the back-office staff were using.
Pretty much any highly threaded workload has already been offloaded to a GPU (or Phi) coprocessor, or moved entirely to a remote HPC cluster. For desktop workstations, threaded workloads are the exception rather than the rule.
And this is at a Fortune 5 company, who not only should know better but were repeatedly told their purchasing decision was a terrible mistake. But it's hard to fight simple "more cores is more better!" marketing with specific-use-case benchmarking numbers; eyes start to glaze over.
Also shows that the very people employed to provide proper advice on such things are often the first to be ignored. Been through that lunacy several times when I was a sysadmin.
Oh man -- I know this may sound quite unethical and downright sketch, but hopefully you, as an enthusiast, can get a few of the older machines sent your way to noodle around with ... or build your own (somewhat TDP/performance obsolete) data centre! :-)
Nope, they take Data Remanence very seriously (and a good chunk of the drives pass through my hands anyway). A machine that went walking out the building without being processed through bag & tag and scanned by the disposal service would make a lot of people very upset and generally be considered a bad move.
Yeah, they would rather have them sent to the recycling plant and destroyed, most likely once the hard drives are removed, of course. I am just guessing that they send them to the recycling plant to get destroyed; maybe they send them off as donations, without the hard drives, for all I know.. lol
edzieba: "Turns out threaded workloads were exceptionally rare, so all the monster workstations were utterly worthless in real world performance compared to the 'low spec' machines the back office staff were using."
Certain industries benefit greatly. I worked in software development, and many-core workstations are a great benefit. Developers typically run the entire stack locally: database, app/web server, and client, so they can find where the problems are without affecting coworkers. Each one of those platforms is multi-threaded (or multi-process), so 40+ threads is common.
Your general point is true, and has been for decades: be aware of your runtime environment, and allocate resources which reflect those realities.
I must say it surprised me to discover even Excel and other office apps are slowly going multithreaded... as are browsers, with Chrome earlier and now Firefox leading. If you can do even CSS and JavaScript multithreaded, every normal computer user suddenly benefits. I doubt they will benefit beyond 16 threads soon, but a hyperthreaded octa-core is finally useful for a normal user, and I can imagine a heavy multitasking desktop office worker keeping 16 real/32 logical cores busy. I know I ran out of headroom on my quad-core years ago, and I hope AMD brings more than 8 cores to the mainstream soon, as Threadripper is a tad expensive...
1. Multithreaded applications are INSANELY hard to write CORRECTLY. (That is why we have Rust.)
2. There is still a lot of performance to be squeezed out of parallelism, as proven by Servo.
3. Because software has to target the lowest common denominator, no one is optimising for 8 cores yet.
If we could push the bottom of the market to 8 cores, the middle to 16, and the top end to 32, with each segment then differentiated by its all-core speed, we might see software optimised for multiple cores sooner.
The only problems are: 1. there is no incentive for them to do so, and 2. the computers we have today are fast enough for the majority of use cases.
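As a toy illustration of why multithreaded code is insanely hard to get right (Python here for brevity; the hazard is identical in any language, and the `sleep` is only there to widen the race window so the bug shows up reliably):

```python
# A classic lost-update bug: an unprotected read-modify-write lets threads
# overwrite each other's increments. The lock makes the same sequence atomic.
import threading
import time

class Counter:
    def __init__(self):
        self.value = 0
        self.lock = threading.Lock()

    def racy_add(self):
        v = self.value           # read
        time.sleep(0.001)        # another thread can sneak in here
        self.value = v + 1       # write back a possibly stale value

    def safe_add(self):
        with self.lock:          # the read-modify-write is now atomic
            v = self.value
            time.sleep(0.001)
            self.value = v + 1

def run(method_name, n_threads=8):
    c = Counter()
    threads = [threading.Thread(target=getattr(c, method_name))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return c.value

# safe_add always yields 8; racy_add usually loses most of the increments
print("racy:", run("racy_add"), "safe:", run("safe_add"))
```

Rust's ownership rules reject the racy version at compile time, which is the point being made above.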
I'm now regularly waiting for Excel to do some number crunching: 3 to 4 minutes at 100% on all 8 threads (Xeon E3-1240). I am wondering if such a Threadripper would make that 20 to 30 seconds. If a 2700X would halve that time, I am going to hit myself in the head for not going the Threadripper route.
Depending on the nature of your formula graph in Excel, the problem may not be easy to parallelize. Excel performs some tricks to try to determine whether formulas can be calculated concurrently, but they can and do fall victim to fragile nodes in the dependency graph. Even if your graph is very flat, they don't always get parallelism right, as maintaining those facts is either 1) hard to do in a scalable manner or 2) pushes a lot of state handling to the graph-editing side of things, which can cause massive slowdowns in the user experience of making simple edits.

Unfortunately, a lot of the programs we use on the desktop aren't just hard to parallelize, they don't parallelize very well (far less than linear scaling). Traversing your graph in the correct order while tracking state (because Excel keeps track of circular dependencies) is just a hard problem, and even though they can pound your CPU by speculatively executing, you probably won't see a huge speedup unless you've taken steps to make your graph as flat as humanly possible. And if you are doing the latter, why not just use Access?
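A sketch of the level-wise recalculation idea described above, with made-up cells and formulas (this is not Excel's actual engine): group cells into topological levels, where everything in a level depends only on earlier levels, so each level can be evaluated concurrently:

```python
# Spreadsheet-style parallel recalculation: compute cells level by level.
from concurrent.futures import ThreadPoolExecutor

# cell -> (formula taking the values dict, list of dependency cells)
sheet = {
    "A1": (lambda v: 2, []),
    "A2": (lambda v: 3, []),
    "B1": (lambda v: v["A1"] + v["A2"], ["A1", "A2"]),
    "B2": (lambda v: v["A1"] * v["A2"], ["A1", "A2"]),
    "C1": (lambda v: v["B1"] + v["B2"], ["B1", "B2"]),
}

def recalc(sheet):
    values, remaining = {}, dict(sheet)
    with ThreadPoolExecutor() as pool:
        while remaining:
            # a "level": cells whose dependencies have all been computed
            ready = [c for c, (_, deps) in remaining.items()
                     if all(d in values for d in deps)]
            if not ready:
                raise ValueError("circular reference")
            results = pool.map(lambda c: sheet[c][0](values), ready)
            values.update(zip(ready, results))
            for c in ready:
                del remaining[c]
    return values

print(recalc(sheet)["C1"])  # B1 = 5, B2 = 6, so C1 = 11
```

A deep, chain-like graph collapses every level to a single cell, at which point the thread pool buys nothing, which is exactly the "make your graph flat" advice above.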
Go AMD, keep holding chipzilla's feet to the fire and their pricing honest (Intel just reported new record earnings, so there is room there). Unrelated, while I assume that the inactive dies in the cheaper TRs may well be dies that binned too low or are just defective, and are locked down better than Fort Knox, just out of interest: Has anybody tried and succeeded to bring back the dead, i.e. reactivate the inactive ones? Anybody? Even trying would, of course, immediately void your warranty, but maybe, just maybe, somebody tried. Would love to hear what happened, successful or not.
I have been thinking about the same thing since der8auer revealed that the inactive dies have also been etched and are not just blank pieces of silicon. I would like to read that review too.
And then somehow, you'll see on Tom's Hardware: "We tested the new CPU with our 1995 suite of games; Intel has superior IPC and shows a 2% advantage in single-threaded games, so Intel is better, buy Intel." :)
Seriously. Tom's hardware has some crazy single threaded benchmarks. I stopped reading them when they refused to remove project cars from their benchmark suite, which was heavily optimized for Nvidia. It's like they don't realize what an outlier is.
The memory configuration is going to be a huge bottleneck.
Just try to use a 32-core EPYC with only 4 channels populated: performance is hindered so badly that you end up making very little use of the additional cores unless you're barely accessing memory at all.
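For a sense of scale, a back-of-the-envelope sketch of per-core memory bandwidth. DDR4-2933, an 8-byte bus per channel, and perfect channel utilisation are assumptions for illustration:

```python
# Theoretical peak memory bandwidth divided evenly across cores.
def per_core_bandwidth_gbs(channels, cores, mt_per_s=2933, bus_bytes=8):
    total_gbs = channels * mt_per_s * bus_bytes / 1000  # decimal GB/s
    return total_gbs / cores

# 8-channel EPYC vs a 4-channel 32-core part: half the bandwidth per core.
print(f"8ch / 32 cores: {per_core_bandwidth_gbs(8, 32):.1f} GB/s per core")
print(f"4ch / 32 cores: {per_core_bandwidth_gbs(4, 32):.1f} GB/s per core")
```

Under 3 GB/s per core is easily saturated by memory-heavy workloads, which is why the extra cores can go underused.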
So you're telling me AMD is shaving features off their more expensive server parts so that there's some market differentiation? For shame! Seriously though, it is annoying that TR4 and SP3 are "two different sockets"; it would have been nice to be able to use EPYCs in TR4.
My "guess" is that while TR4 (SP3r2) and SP3 both have 4094 pins, in TR4 the pins leading to the second pair of dies are just that: pins. They are there for physical support and are not electrically connected to anything. Hence, to maintain backwards compatibility, AMD disabled the memory and PCIe of the second pair of dies.
While I also believe that there is no such thing as too much computing power, the 32 (and 24?) core TRs are the CPU equivalents of a 1,000 HP engine in a car: great for bragging rights, but only useful in very specific situations, and otherwise not faster than mere 8 core chips. In this case, the applications where 32 cores can make a difference are those that are not that dependent on memory speed/access. I would love to see some benchmarks for compiling and complex CAD situations.
Overall, the question is/remains how well AMD executed on this second round of "NUMA on a chip". Lastly, about EPYC vs. TR: AMD learned from the master (Intel). It's not about not letting people run server chips on desktop boards; it's about blocking people from doing the opposite: using much less expensive desktop CPUs in server boards and for server applications. That is also why desktop CPUs and chipsets basically never support ECC RAM, which is a requirement for many servers. TR is almost EPYC, but not quite, so you still have to buy EPYC and pay epic prices for your servers. But then, Intel does the same, and gouges us even worse.
Not sure how these are about blocking people from doing the opposite, since they do support ECC, so surely one could use these CPUs just as they are with a good-quality consumer motherboard and they'd do just fine for a wide range of server tasks, using ECC memory if desired. If companies cared about cost that much, then this is an option. Most, though, won't do that. There's a belief that companies will cram a consumer chip onto a pro board if they can, but really that's very rare, as most bulk buyers of workstations and servers get them from OEMs; very few build their own.
Nobody's gouging anyone btw, it's still a free market choice whether to buy Intel or not.
In theory TR boards can support ECC, but I've heard reports that validation of ECC RAM is not exactly a priority, and with all the work Ryzen boards required regarding RAM, that's not a surprise. So, anybody here built a TR ECC system, and how did you get on? First-hand reports are always better.
ECC RAM is sold at slower speeds than typical enthusiast RAM. I fail to see why validation would be necessary. The fastest ECC RAM I know of is only 2666. If there is anything faster it should still fit within the TR2 spec.
So why did the CPU race slow to a crawl for years now? Have we actually reached a "safe" limit for CPUs until some new tech can make them faster? I know the need isn't as great as it used to be, but remember the days when CPU speed leaped so much each generation, like 500 MHz jumps with each new CPU, it seemed. Now we are seeing boost clocks, which is basically like saying "We can go this high, but it's just a limit because we're not sure of ourselves."
Two reasons come to mind - technology and competition. It's becoming increasingly difficult to go to smaller process nodes (see Intel 10nm) which are necessary to make faster chips. As to competition, Intel hasn't had any until AMD's Zen architecture. They're not going to put a lot of money into R&D if they don't have to. Unfortunately for them, AMD caught them with their pants down, and their 10nm process has had nothing but problems.
The gate thickness limit was hit around Sandy Bridge time and has stuck even with process node scaling. "Moar Cores" scaling was chopped off at the knees by GPGPU. There's just not many places to go to gain performance without massive power consumption increases (and even that hits areal power density limits as overall process scale shrinks).
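The power-density point above can be sketched numerically: dynamic power scales roughly as C·V²·f, and near the limit voltage has to rise with frequency, so power grows far faster than clocks. The 20% clock / 10% voltage figures below are illustrative, not measured:

```python
# Relative dynamic power for a clock bump that also needs a voltage bump:
# P ~ C * V^2 * f, so P_new / P_old = (V_ratio ** 2) * f_ratio.
def relative_power(f_ratio, v_ratio):
    return v_ratio ** 2 * f_ratio

# A +20% clock needing +10% voltage costs roughly 45% more power.
print(f"{(relative_power(1.2, 1.1) - 1) * 100:.0f}% more power for 20% more clock")
```

That cubic-ish blowup is why vendors stopped chasing frequency and why "moar cores" looked attractive until it hit its own wall.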
The irony of all this is that threaded support within application software is generally still pretty terrible, with many pro apps still only using one core. If anything there's much more to gain with better written software, but good programmers are expensive, and these days grud knows where they'd come from given the woeful education standards of many modern edu places, at least in the West anyway. Probably have to poach them from south east Asia, Israel, etc.
It's not really a case of 'just program better', dual cores have been commonplace for a decade now: any workload that could be easily threaded has long ago taken those double-performance gains (and quadruple for the now ubiquitous quad-cores). Many tasks simply do not subdivide easily in a way conducive to threading (no good splitting into a bunch of sub-tasks if all depends on results of the previous task). Unlike HPC workloads that fall under Gustafson's Law scaling, desktop workloads are firmly in Amdahl's Law territory.
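The distinction drawn above between the two scaling regimes can be written out directly (the 90% parallel fraction is illustrative):

```python
# Amdahl: fixed problem size, so the serial fraction caps speedup.
# Gustafson: the parallel part of the problem grows with core count,
# so scaled speedup keeps climbing.
def amdahl(n, p):
    """Speedup on n cores for a fixed workload with parallel fraction p."""
    return 1 / ((1 - p) + p / n)

def gustafson(n, p):
    """Scaled speedup when the parallel portion grows with n."""
    return (1 - p) + p * n

for n in (4, 32):
    print(n, "cores:",
          "Amdahl", round(amdahl(n, 0.9), 1),
          "Gustafson", round(gustafson(n, 0.9), 1))
```

At 32 cores the fixed-size (desktop) workload tops out under 8x while the scaled (HPC) workload approaches 29x, which is why the same chip looks brilliant in a cluster and pointless on a desk.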
I would say part of the issue is the tools; most programming languages still have not added many multithreading tools. Rust and Go are of course designed for it, but they will take time to be adopted. Nice to see Firefox leading here!
1. They call the TR 2990WX a "for workstation" solution, yet it doesn't have even a shred of remote management, neither in the chipset nor on any motherboard... 2. Pre-sales start today, yet performance benchmarks are not allowed to be published today, so buy those CPUs based on... what? Hype?
While early adopters have been known to buy based on hype in the past, they only need to use common sense to pull the trigger on the 2990WX.
Only someone as dense as a rock won't be able to see that they will be getting double digit percentage increases over an Intel alternative that still, comically, costs $200 more.
Wow, come on people. We should all be praising AMD and hating INTEL for juicing us up for all these years. We should welcome competition and purchase AMD to show Intel that what they have done in the past is not right!
Buy AMD if it's a better solution for your problem, not because doing so somehow conveys some emotional concept of which Intel will be completely unaware. Buying things in that way is no less daft than buying Intel just because it's Intel. Steve at GN describes this whole thing best, in this case with regard to GPU flame wars, but the same thing applies to CPU arguments:
And what does "hating Intel" even mean? Intel is a company; as such, it isn't an entity with agency and awareness with which it can respond to someone who 'hates' it. So much emotional language with all this. :D Fact is, nobody has been forced to buy an Intel CPU for their gaming PC or whatever, they made a free choice to do that.
This is more to do with the expression of in-group preference, people feeling like they're with one gang or the other, or the need to defend their purchasing decisions.
If you don't like some product strategy that Intel uses, then don't buy their products, or if you still need something better and AMD has nothing to offer, then look at the 2nd-hand market, eg. there's often good value in used XEONs, and even today, old X79 can often hold its own rather well (especially for gaming above 1080p).
In the U.S., the courts have determined that a business can have beliefs with which to avoid laws and discriminate. Hobby Lobby decision. Citizens United decision. Not disagreeing with what you're saying, but the U.S. has some issues when classifying people and for-profit entities.
The new Threadrippers are interesting products, but I'm not really that concerned either way about how they actually perform, since they're not practical products for any of my computing needs. They're too hot to cool passively, too big for a laptop chassis, and far too expensive for web browsing or watching a few videos. It's a shame AnandTech doesn't review much mid- to low-end hardware anymore; while things like a 32-core x86 CPU are interesting, such processors are going to end up in a very tiny portion of even AnandTech's readers' enthusiast-class PCs.
What do you feel is not getting covered? I remember them covering mainstream Ryzen 2, and APU low end Ryzen 2, and the new Core 8086 and a lot of other consumer focused cpus before that.
Right off the cuff, without thinking much about it: the 1050 and 1030 weren't reviewed, a couple of lower-end AMD GPUs were omitted as well, there are few to no networking benchmarks, and the first complete desktop review done in a long time was for a relatively high-end system. It's sort of sad to go to AnandTech to read a review of upper-tier stuff I'll never purchase, but then have to go spelunking with a search engine to find multiple lesser-quality reviews of things I'll actually purchase. Yes, that certainly includes laptops, and I'll toss reasonably priced phones in there too.
Complete desktop reviews and similar are not AnandTech's target market, and never have been, so I wouldn't hold out too much hope for that to change. It's always been a site about à-la-carte PC-building hardware first and foremost; dunno why you'd expect anything different all of a sudden, tbh.
At the top of Anandtech's website, check the bar under the site logo for the word "SYSTEMS" and hover your mouse over it to view the subcategories and browse a few links within. Also check the "SMARTPHONES & TABLETS" category.
This is so cool. Though I don't think I'd recommend that any non-business owners buy it: with 7nm Zen 2-based TR coming next year, I cannot imagine your $1800 expenditure won't feel incredibly foolish in 12 months. If they can fit 32 cores at 12nm, how many can they fit at 7nm, and with superior cores as well!
Don't buy a new sports car either because in 12 months time it will have depreciated and there may be faster models. ;) TR2 pricing goes much higher than TR1 so who knows how much TR3 chips will cost next year and how many cores they will have. These are for hobbyists when not bought for work and some hobbies are expensive if you aim high.
Car analogies don't work very well in this situation as the car industry is relatively slow to change compared to even the maturing and subsequently slowing pace of CPU development.
Until they spin the silicon and you can buy it, it's not real; ask anyone who's been expecting Intel 10nm processors. Process tech is likely to stall out very soon if it hasn't already, because they have hit the quantum limits on the transistor, which have been frozen at 16nm for years.
What I'd be more worried about going forward is all the exploits Intel is suffering with Spectre-class timing attacks. We're already up to Spectre variant 11 now; Intel has been vulnerable to every variant, while AMD has only been partially vulnerable to a few of them. I personally believe this is the strongest reason to migrate away from Intel until they can get some viable silicon that's not vulnerable to every possible timing exploit.
From what I hear from a friend who does hardware vlogging, the TR2 is a great update, and the benchmark numbers he saw are very good. The turbo with good cooling is higher than on paper, and we will not be disappointed when the benchmarks go live around the 15th/16th of August.
My overclocked 1080 Ti draws 350 W when playing games. It is fairly easy to cool, but yes, the room gets hot a lot.
I can't imagine doing both GPU- and CPU-intensive workloads for a few hours with the TR2 and the 1080 Ti and staying in the room (if you don't have AC, like me, it's a huge pain).
Ian, will you be reviewing this with the high-end ‘Wraith Ripper’ cooler? I am curious about it.. Also that cat is awesome looking! More pics with Summer!
Yeah, I can't wait for the reviews of the TR 2990WX and TR 2950X. I hope they let reviewers cover both of the top TRs in their class at the same time. I only say this because, even though the TR 2990WX will most likely be able to game, it will probably not be that great at it, because that is not what it was made for, whereas the TR 2950X, being clocked a bit higher and most likely more overclock-friendly, will be a whole lot better at games.
So if we get some reviewers focusing on just the gaming end of it, and they come to the conclusion that the TR 2990WX sucks at gaming, the whole internet will spread the FUD around like wildfire. I know AnandTech will do the proper workload tests, so this site will be my first stop to get the proper picture.
AMD already created a gaming mode on the top model of previous Threadripper, which disabled cores. If that doesn't tell people these aren't supposed to be gaming chips what will?
These are workstation chips. They're for doing work. They will game okay but that is not their purpose at all, especially the more you move up the stack.
Please, let AMD do well! I'll buy something from them, (my first PC was AMD) just to keep some competition alive. Death to Intel monopolistic practices, and the governments that let them get away with it!
edzieba - Monday, August 6, 2018 - link
I guess "buy before you try" is going to be the standard for AMD going forward.BB-5F-96-D7-AE-26 - Monday, August 6, 2018 - link
Same?Spunjji - Tuesday, August 7, 2018 - link
If you can't keep your wallet in your pants 'til after the reviews, then sure.melgross - Tuesday, August 7, 2018 - link
It used to be that AMD would release specs, and the chips they sent out would not come close to them. After bulldozer, they stopped giving pre released chips for test.Oxford Guy - Tuesday, August 7, 2018 - link
These are going to perform like Threadripper with Zen+. There aren't going to be any major surprises. The architecture is already well-known. It looks like the only question about these is how the memory allocation will affect certain benchmarks.Dug - Thursday, August 9, 2018 - link
The other question is if we can get some good x399 motherboards.IntoGraphics - Friday, August 10, 2018 - link
Don't we already have good X399 motherboards?Or is your current X399 motherboard a lemon?
Mday - Monday, August 6, 2018 - link
First page has 2800x in the AMD SKUs table, with the specs of a 2700x; and what appears to be the price of the 1800x ($419, url has 1800x) from last year for the 2700x in the Threadripper 2 vs Skylake-XThe Battle (Sorted by Price) table.
Ryan Smith - Monday, August 6, 2018 - link
Right you are. Thanks!DanNeely - Monday, August 6, 2018 - link
Putting an asterisk on the AMD supplied number in the chart is of only limited value if you don't have a matching footnote. I'm pretty sure I've seen the same problem on some of your other charts/graphs before.Ian Cutress - Monday, August 6, 2018 - link
I put the asterisk in to initially signify it was different, then put AMD result in just in case people took the graph without taking the context, forgot to remove the asterisk. Should be fixed.DanNeely - Monday, August 6, 2018 - link
I think I'm still a bit confused about what/why you did because I think an unverified (or at least one you can't say if is/isn't representative of what your internal testing is for a few more days) result from an OEM should be called out if put in a chart with results you and/or other testers generated directly.Spunjji - Tuesday, August 7, 2018 - link
It's mentioned in the pre-amble and labelled "AMD Result" in the chart now, which I found pretty unambiguous.DanNeely - Tuesday, August 7, 2018 - link
It's marked as such now, it wasn't when I commented.jcc5169 - Monday, August 6, 2018 - link
Did you mean to show "Ryzen 7 2800X" or is that a typo?Mday - Monday, August 6, 2018 - link
It was a typo. The specs were for the 2700x.maroon1 - Monday, August 6, 2018 - link
Cinebench would be like best case scenario for AMD 32 coresNot saying that it is bad thing. but you should expect the gap to be lower once you use other benchmarks. Cinebench scales very well with cores. but not only that. It also like ryzen more than intel unlike most other benchmarks. And there is no surprise that AMD started using that benchmark for their advertisement after they release ryzen
rUmX - Monday, August 6, 2018 - link
Every company would use a benchmark that would show their product in the best light. However I'm interested in knowing what the average turbo speeds for each clock rate. What will the difference be if one used that 500w AiO compared to stock cooling.blppt - Monday, August 6, 2018 - link
Not sure why you've concluded that CB R15 favors Ryzen---the 16/32 7960X (barely) beats the 1950X, which seems about right to me. With all 16 (32) cores saturated, the 1950X tops out at 3.4ghz turbo, and the 7960X is somewhere around 3.6.maroon1 - Monday, August 6, 2018 - link
8700K only beats 2600X by 6.4% in cinebench from what I've seen. Yet in almost every other non-gaming benchmarks it show more than 6.4%.Even AMD slides themselves show that cinebench favor ryzen, even among other 3d rendering benchmarks
https://screenshotscdn.firefoxusercontent.com/imag...
You should except things to be even worse if you use something like H.265 encoding because does not scale with cores like rendering benchmarks
While 2990WX is going to be faster than intel 7980X overall, the gap is not going to be 50% or even close. I expect to be maybe 30% (if not less) faster on average
blppt - Monday, August 6, 2018 - link
R15 is very well multithreaded, not the ideal indicator of real-world performance, so would we really expect an 8700K to dominate a 2600X in an ideal multithreaded benchmark?mapesdhs - Monday, August 6, 2018 - link
Cinebench as a benchmark may be reaching EOL, unless they update it again somehow. See:https://linustechtips.com/main/topic/815405-cinebe...
A better test in some ways would be c-ray, as LTT mentions, since it can scale to hundreds of threads no problem.
Ian.
eastcoast_pete - Monday, August 6, 2018 - link
Yes, every company puts their best foot/benchmark forward. The 7920/40/60 (all x) chips from Intel will still have a serious advantage in any application that can really utilize AVX 512, as that provides a huge performance boost. Unfortunately (for Intel), using AVX 512 also makes their chips run really, really hot (might be time for Intel to invest in some better thermal solutions for their pricey chips). Ultimately, it still boils down to: What are using your workstation for?[email protected] - Monday, August 6, 2018 - link
Like winrar ??? lelDug - Thursday, August 9, 2018 - link
Yes! All day long, every day. :) I can't stop using it! Have to compress everything!Midwayman - Monday, August 6, 2018 - link
I have no legitimate use for 32 cores, but Hrrrrrgggghhhhhh. Fully torqued for that many cores.HStewart - Monday, August 6, 2018 - link
I would rather have fewer stronger cores than more weaker cores.The Hardcard - Monday, August 6, 2018 - link
Even if it was barely more than half the cores that are less than 15 percent stronger? Come on, even your 35-year-old scalar, nonpipelined processor with no branch prediction will quickly tell you that you will be far behind on nearly every workstation task.Make a list of the top 15 reasons people who actually do work and could use a high-end workstation to take care of business. Now question: will the Core i9-7980XE be faster atany single one?
DigitalFreak - Monday, August 6, 2018 - link
People here, and on most tech sites for that matter, keep thinking of these processors in terms of gaming. That's obviously not what they're designed for.drajitshnew - Monday, August 6, 2018 - link
@digitalfreak It is quite correct & true. Even three 1st gen threadripper was not marketed for pure gaming-- more towards content creaters and those who want to stream games professionally.On an other note I have reduced going to the sister site "Tom's " because Anandtech has a less FPS centric editorial outlook.
gipper51 - Monday, August 6, 2018 - link
It is comical the number of folks who think the only purpose for a high end PC is to play games.rocky12345 - Tuesday, August 7, 2018 - link
This is true 100%. I blame a lot of it on the tech sites (not Anandtech of course) that try to focus on gaming with CPUs that are clearly made for more than just gaming. Yes they can game, but CPUs like this are designed for so much more. When Anandtech's review comes out I am sure they will have the proper tests done for CPUs like this, and yes they may also throw in a few games just to show that these CPUs can do a bit of gaming too, which is fine.
edzieba - Monday, August 6, 2018 - link
"Make a list of the top 15 reasons people who actually do work and could use a high-end workstation to take care of business. Now question: will the Core i9-7980XE be faster at any single one?"
I'm in the middle of a rush rollout of quad-core machines to replace several tens of thousands (worldwide, only a 'few' thousand in this building) of dual-octacore-CPU workstations, because before purchase and rollout nobody bothered to look at users' actual workloads. Turns out threaded workloads were exceptionally rare, so all the monster workstations were utterly worthless in real-world performance compared to the 'low spec' machines the back office staff were using.
Pretty much any highly threaded workload has already been offloaded to a GPU (or Phi) coprocessor, or moved entirely to a remote HPC cluster. For desktop workstations, threaded workloads are the exception rather than the rule.
edzieba - Monday, August 6, 2018 - link
And this is at a Fortune 5 company, which not only should know better but was repeatedly told its purchasing decision was a terrible mistake. But it's hard to fight simple "more cores is more better!" marketing with specific-use-case benchmarking numbers; eyes start to glaze over.
mapesdhs - Monday, August 6, 2018 - link
Also shows that the very people employed to provide proper advice on such things are often the first to be ignored. Been through that lunacy several times when I was a sysadmin.
johnnycanadian - Monday, August 6, 2018 - link
Oh man -- I know this may sound quite unethical and downright sketch, but hopefully you, as an enthusiast, can get a few of the older machines sent your way to noodle around with ... or build your own (somewhat TDP/performance obsolete) data centre! :-)
edzieba - Monday, August 6, 2018 - link
Nope, they take Data Remanence very seriously (and a good chunk of the drives pass through my hands anyway). A machine that went walking out of the building without being processed through bag & tag and scanned by the disposal service would make a lot of people very upset and generally be considered a bad move.
rocky12345 - Tuesday, August 7, 2018 - link
Yeah, they would rather have them sent to the recycling plant and destroyed, most likely once the hard drives are removed of course. I am just guessing that they send them to the recycling plant to get destroyed; maybe they send them off as donations for all I know, without the hard drives.. lol
guyr - Tuesday, August 7, 2018 - link
edzieba: "Turns out threaded workloads were exceptionally rare, so all the monster workstations were utterly worthless in real world performance compared to the 'low spec' machines the back office staff were using."
Certain industries benefit greatly. I worked in software development, and many-core workstations are a great benefit. Developers typically run the entire stack locally: database, app/web server, and client, so they can find where the problems are without affecting coworkers. Each one of those platforms is multi-threaded (or multi-process), so 40+ threads is common.
Your general point is true, and has been for decades: be aware of your runtime environment, and allocate resources which reflect those realities.
jospoortvliet - Tuesday, August 7, 2018 - link
I must say it surprised me to discover even Excel and other office apps are slowly going multithreaded... as are browsers, with Chrome earlier and now Firefox leading. If you can do even CSS and JavaScript multithreaded, every normal computer user suddenly benefits. I doubt they get benefit beyond 16 threads soon, but a hyperthreaded octacore is finally useful for a normal user, and I can imagine a heavy multitasking desktop office worker keeping 16 real/32 logical cores busy. I know I ran out of space on my quad-core years ago, and I hope AMD brings more than 8 cores to mainstream soon, as Threadripper is a tad expensive...
iwod - Tuesday, August 7, 2018 - link
1. Multithreaded applications are INSANELY hard to write CORRECTLY. (That is why we have Rust.)
2. There is still a lot of performance to be squeezed out of parallelism, as demonstrated by Servo.
3. Because software has to care about the lowest common denominator, no one is optimising for 8 cores yet.
If we could push the bottom of the market to 8 cores, the middle to 16, and the top end to 32, with each segment then differentiated by its all-core speed, we might see software optimised for multiple cores sooner.
The only problems are that 1. there is no incentive for them to do so, and 2. the computers we have today are fast enough for the majority of use cases.
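The "optimise for whatever cores exist" idea can be sketched in a few lines of Python; this is a toy illustration only, and the crunch function is a made-up stand-in for real CPU-bound work:

```python
import os
from concurrent.futures import ProcessPoolExecutor

def crunch(chunk):
    # Made-up stand-in for a real CPU-bound task.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data):
    # Size the pool to the machine instead of hardcoding 4, 8, or 32 workers.
    workers = os.cpu_count() or 1
    step = max(1, len(data) // workers)
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(crunch, chunks))
```

The point is only that the pool size comes from the machine rather than the programmer's guess; actual speedup still depends on how well the work divides.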
Foeketijn - Tuesday, August 7, 2018 - link
I'm now regularly waiting for Excel to do some number crunching: 3 to 4 minutes at 100% on all 8 threads (Xeon E3-1240). I am wondering if such a Threadripper would make that 20 to 30 seconds. If a 2700X would halve that time, I am going to hit myself in the head for not going the Threadripper route.
BigDH01 - Tuesday, August 7, 2018 - link
Depending on the nature of your formula graph in Excel, the problem may not be easy to parallelize. Excel performs some tricks to try and determine if formulas can be calculated concurrently, but they can and do fall victim to fragile nodes in their dependency graphs. Even if your graph is very flat, they don't always get parallelism correct, as maintaining those facts is either 1) hard to determine in a scalable manner or 2) pushes a lot of state handling to the graph-editing side of things, which can cause massive slowdowns in the user experience for simple edits. Unfortunately, a lot of programs we use on the desktop aren't just hard to parallelize, but don't parallelize very well (far less than linear scaling). Traversing your graph while tracking state (because Excel keeps track of circular dependencies) in the correct order is just a hard problem, and even though they can pound your CPU by speculatively executing, you probably won't see a huge speedup unless you've taken steps to make your graph as flat as humanly possible. And if you are doing the latter, why not just use Access?
Cooe - Monday, August 6, 2018 - link
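The flat-vs-deep point can be made concrete with a toy sketch (the general idea only, not how Excel's engine actually works): group the cells of a dependency graph into levels, where everything within a level could recalculate concurrently. A flat sheet gives two wide levels; a chain gives one cell per level, fully serial no matter how many cores you have.

```python
def recalc_levels(deps):
    """deps maps each cell to the cells it depends on (assumed acyclic).
    Returns groups of cells that could be recalculated concurrently."""
    remaining = dict(deps)
    done, levels = set(), []
    while remaining:
        ready = sorted(c for c, d in remaining.items() if set(d) <= done)
        if not ready:
            raise ValueError("circular dependency")
        levels.append(ready)
        done.update(ready)
        for c in ready:
            del remaining[c]
    return levels

# A "flat" sheet: B, C and D all depend only on A -> two levels, good parallelism.
flat = {"A": [], "B": ["A"], "C": ["A"], "D": ["A"]}
# A chain: each cell feeds the next -> one cell per level, fully serial.
chain = {"A": [], "B": ["A"], "C": ["B"], "D": ["C"]}
```

Real spreadsheet engines track far more state than this, which is exactly why the speedup rarely materializes.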
*facepalm*
Then you obviously aren't the target market.
cerealspiller - Monday, August 6, 2018 - link
Legitimate is overrated :-)
eastcoast_pete - Monday, August 6, 2018 - link
Go AMD, keep holding chipzilla's feet to the fire and their pricing honest (Intel just reported new record earnings, so there is room there).
Unrelated: while I assume that the inactive dies in the cheaper TRs may well be dies that binned too low or are just defective, and are locked down better than Fort Knox, just out of interest: has anybody tried and succeeded to bring back the dead, i.e. reactivate the inactive ones? Anybody? Even trying would, of course, immediately void your warranty, but maybe, just maybe, somebody tried. Would love to hear what happened, successful or not.
drajitshnew - Monday, August 6, 2018 - link
I have been thinking about the same thing since it was revealed by der8auer that the inactive dies have also been etched -- they are not just blank pieces of silicon.
I would like to read that review too.
Da W - Monday, August 6, 2018 - link
And then somehow, you'll see on Tom's Hardware: ''We tested the new CPU with our 1995 suite of games; Intel has superior IPC and shows a 2% advantage on single-threaded games, so Intel is better, buy Intel.'' :)
Da W - Monday, August 6, 2018 - link
Seriously though, I've been waiting for this AMD for almost 2 decades. Good job!
evernessince - Wednesday, August 8, 2018 - link
Seriously. Tom's Hardware has some crazy single-threaded benchmarks. I stopped reading them when they refused to remove Project Cars from their benchmark suite, which was heavily optimized for Nvidia. It's like they don't realize what an outlier is.
SetiroN - Monday, August 6, 2018 - link
The memory configuration is going to be a huge bottleneck. Just try to use a 32-core Epyc with only 4 channels populated: performance is hindered so badly you end up making very little use of the additional cores unless you're not accessing memory at all.
This all feels like an afterthought.
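For a rough sense of the squeeze, here is a back-of-the-envelope calculation only (assuming quad-channel DDR4-2933 for illustration; real sustained bandwidth is well below peak): each DDR4 channel moves 8 bytes per transfer, so the per-core slice of bandwidth shrinks fast as the core count grows.

```python
def per_core_bandwidth_gbs(channels, mtps, cores):
    # Peak GB/s: channels * megatransfers/s * 8 bytes per transfer.
    total_gbs = channels * mtps * 8 / 1000
    return total_gbs / cores

# Quad-channel DDR4-2933 is ~94 GB/s peak: ~2.9 GB/s per core across
# 32 cores, versus ~11.7 GB/s per core on an 8-core part.
```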
artk2219 - Monday, August 6, 2018 - link
So you're telling me AMD is shaving off features from their more expensive server parts so that there's some market differentiation? For shame! Seriously though, it is annoying that TR4 and SP3 are "2 different sockets"; it would have been nice to be able to use Epycs in TR4.
drajitshnew - Monday, August 6, 2018 - link
My "guess" is that while TR4 (SP3r2) and SP3 are both 4094 pins, in TR4 the pins leading to the second pair of dies are just that -- pins. They are just for physical support and are not electrically connected to anything. Hence, to maintain backwards compatibility, AMD disabled the memory & PCIe of the second pair of dies.
eastcoast_pete - Monday, August 6, 2018 - link
While I also believe that there is no such thing as too much computing power, the 32 (and 24?) core TRs are the CPU equivalents of a 1,000 HP engine in a car: great for bragging rights, but only useful in very specific situations, and otherwise not faster than mere 8-core chips. In this case, the applications where 32 cores can make a difference are those that are not that dependent on memory speed/access. I would love to see some benchmarks for compiling and complex CAD situations.
Overall, the question is/remains how well AMD executed on this second round of "NUMA on a chip".
Lastly, about EPYC vs. TR: AMD learned from the master (Intel). It's not about not letting people run server chips on desktop boards; it's about blocking people from doing the opposite: using much less expensive desktop CPUs in server boards and for server applications. That is also why desktop CPUs and chipsets basically never support ECC RAM, which is a requirement for many servers. TR is almost "EPYC", but just not quite, so you still have to buy EPYC and pay epic prices for your servers. But then, Intel does the same, and gouges us even worse.
mapesdhs - Monday, August 6, 2018 - link
Not sure how these are about blocking people from doing the opposite, since they do support ECC, so surely one could use these CPUs just as they are with a good quality consumer motherboard and they'd do just fine for a wide range of server tasks, using ECC memory if desired. If companies cared about cost that much then this is an option. Most though won't do that. There's a belief that companies will cram a consumer chip onto a pro board if they can, but really that's very rare, as most bulk buyers of workstations and servers get them from OEMs; very few build their own.
Nobody's gouging anyone btw; it's still a free market choice whether to buy Intel or not.
smilingcrow - Monday, August 6, 2018 - link
In theory TR boards can support ECC, but I've heard reports that validation of ECC RAM is not exactly a priority, and with all the work Ryzen boards required regarding RAM that's not a surprise.
So anybody here built a TR ECC system, and how did you get on? 1st-hand reports are always better.
Oxford Guy - Tuesday, August 7, 2018 - link
ECC RAM is sold at slower speeds than typical enthusiast RAM. I fail to see why validation would be necessary. The fastest ECC RAM I know of is only 2666. If there is anything faster it should still fit within the TR2 spec.
imaheadcase - Monday, August 6, 2018 - link
So why has the CPU race slowed to a crawl for years now? Have we actually reached a "safe" limit for CPUs until some new tech can make them faster? I know the need isn't as great as it used to be, but remember the days when CPU speed leaped so much each generation... like 500MHz jumps with each new CPU, it seemed. Now we are seeing boosts... which is basically like saying "We can go this high, but it's just a limit because we're not sure of ourselves".
DigitalFreak - Monday, August 6, 2018 - link
Two reasons come to mind: technology and competition. It's becoming increasingly difficult to go to smaller process nodes (see Intel 10nm), which are necessary to make faster chips. As to competition, Intel hasn't had any until AMD's Zen architecture. They're not going to put a lot of money into R&D if they don't have to. Unfortunately for them, AMD caught them with their pants down, and their 10nm process has had nothing but problems.
DigitalFreak - Monday, August 6, 2018 - link
*which are necessary to make faster chips
Faster chips without crazy heat output and power requirements, or huge die sizes.
edzieba - Monday, August 6, 2018 - link
The gate thickness limit was hit around Sandy Bridge time and has stuck even with process node scaling. "Moar Cores" scaling was chopped off at the knees by GPGPU. There's just not many places to go to gain performance without massive power consumption increases (and even that hits areal power density limits as overall process scale shrinks).
mapesdhs - Monday, August 6, 2018 - link
The irony of all this is that threaded support within application software is generally still pretty terrible, with many pro apps still only using one core. If anything there's much more to gain with better written software, but good programmers are expensive, and these days grud knows where they'd come from given the woeful education standards of many modern edu places, at least in the West anyway. Probably have to poach them from south east Asia, Israel, etc.
Alaa - Monday, August 6, 2018 - link
Never heard that good programmers exist in Israel.
edzieba - Monday, August 6, 2018 - link
It's not really a case of 'just program better'; dual cores have been commonplace for a decade now: any workload that could be easily threaded has long ago taken those double-performance gains (and quadruple for the now ubiquitous quad-cores). Many tasks simply do not subdivide easily in a way conducive to threading (no good splitting into a bunch of sub-tasks if each depends on the results of the previous one). Unlike HPC workloads that fall under Gustafson's Law scaling, desktop workloads are firmly in Amdahl's Law territory.
jospoortvliet - Tuesday, August 7, 2018 - link
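The Amdahl-vs-Gustafson contrast is easy to put numbers on (illustrative values only; p is the parallel fraction of the work, n the core count):

```python
def amdahl_speedup(p, n):
    # Fixed problem size: the serial fraction (1 - p) caps the gain.
    return 1 / ((1 - p) + p / n)

def gustafson_speedup(p, n):
    # Problem scales with the core count: speedup stays near-linear.
    return (1 - p) + p * n

# A 95%-parallel desktop task tops out around 12.5x on 32 cores,
# while a scaled HPC-style task would see roughly 30x.
```

Even a generously threaded desktop workload leaves most of a 32-core chip waiting on its serial fraction.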
I would say part of the issue is the tools; most programming languages still have not added much multithreading support. Rust and Go are of course designed for it, but they will take time to be adopted. Nice to see Firefox leading here!
hetzbh - Monday, August 6, 2018 - link
Hmm, let's see...
1. They call the TR 2990WX a "for workstation" solution, yet it doesn't have even a shred of remote management, either on the chipset or on any motherboard...
2. Pre-sales are starting today, yet performance benchmarks are not allowed to be published today, so buy those CPUs based on... what? Hype?
Intel999 - Monday, August 6, 2018 - link
@hetzbh
While early adopters have been known to buy based on hype in the past, they only need to use common sense to pull the trigger on the 2990WX.
Only someone as dense as a rock won't be able to see that they will be getting double digit percentage increases over an Intel alternative that still, comically, costs $200 more.
Oxford Guy - Wednesday, August 8, 2018 - link
Hype? Not really. GPUs often come in around MSRP for preorders; then, when things like Ethereum hit, those who preordered saved money. They also avoided shortages.
Plus, there is Ebay to sell on if the item doesn't measure up to your expectations. People will buy anything on Ebay for high prices.
Cooe - Monday, August 6, 2018 - link
CPUs are ALWAYS released for pre-order before review embargoes lift, AMD AND Intel. Nothing new / worth complaining about here, folks.
Oxford Guy - Wednesday, August 8, 2018 - link
And, people who buy prerelease items can most likely recoup their money by selling on Ebay if the CPUs or GPUs don't turn out to be all that great.
mjz_5 - Monday, August 6, 2018 - link
Wow, come on people. We should all be praising AMD and hating INTEL for juicing us up for all these years. We should welcome competition and purchase AMD to show Intel that what they have done in the past is not right!
mapesdhs - Monday, August 6, 2018 - link
Buy AMD if it's a better solution for your problem, not because doing so somehow conveys some emotional concept of which Intel will be completely unaware. Buying things in that way is no less daft than buying Intel just because it's Intel. Steve at GN describes this whole thing best, in this case with regard to GPU flame wars, but the same thing applies to CPU arguments: https://www.youtube.com/watch?v=ZyAOtQOu2YM
And what does "hating Intel" even mean? Intel is a company; as such, it isn't an entity with agency and awareness with which it can respond to someone who 'hates' it. So much emotional language with all this. :D Fact is, nobody has been forced to buy an Intel CPU for their gaming PC or whatever, they made a free choice to do that.
This is more to do with the expression of in-group preference, people feeling like they're with one gang or the other, or the need to defend their purchasing decisions.
If you don't like some product strategy that Intel uses, then don't buy their products, or if you still need something better and AMD has nothing to offer, then look at the 2nd-hand market, eg. there's often good value in used XEONs, and even today, old X79 can often hold its own rather well (especially for gaming above 1080p).
Ian.
Fujikoma - Monday, August 6, 2018 - link
In the U.S., the courts have determined that a business can have beliefs with which to avoid laws and discriminate. Hobby Lobby decision. Citizens United decision. Not disagreeing with what you're saying, but the U.S. has some issues when classifying people and for-profit entities.
Oxford Guy - Tuesday, August 7, 2018 - link
Certain people have been determined, as usual, to have fewer rights than others. Protected classes can't be discriminated against.
PeachNCream - Monday, August 6, 2018 - link
The new Threadrippers are interesting products, but I'm not really that concerned either way about how they actually perform, since they're not practical products for any of my computing needs. They're too hot to cool passively, too big for a laptop chassis, and far too expensive for web browsing or watching a few videos. It's a shame Anandtech doesn't review much mid- to low-end hardware anymore; while things like a 32-core x86 CPU are interesting, such processors are going to end up in a very tiny portion of even Anandtech's readers' enthusiast-class PCs.
Sttm - Monday, August 6, 2018 - link
What do you feel is not getting covered? I remember them covering mainstream Ryzen 2, and APU low end Ryzen 2, and the new Core 8086 and a lot of other consumer focused CPUs before that.
So more laptop stuff?
PeachNCream - Monday, August 6, 2018 - link
Right off the cuff, without thinking much about it: the 1050 and 1030 weren't reviewed. There were a couple of lower-end AMD GPUs that were omitted as well. There are few to no networking benchmarks, and the first complete desktop review done in a long time was for a relatively high-end system. It's sort of sad to go to Anandtech to read a review about upper-tier stuff I'll never purchase, but then have to go spelunking with a search engine to find multiple lesser-quality reviews for things I'll actually purchase. And yes, that certainly includes laptops; I'll toss reasonably priced phones in there too.
Cooe - Monday, August 6, 2018 - link
Complete desktop reviews & similar are not AnandTech's target market, and never have been, so I wouldn't hold out too much hope for that to change. It's always been a site about a-la-carte PC building hardware 1st & foremost; dunno why you'd expect anything different all of a sudden tbh.
PeachNCream - Tuesday, August 7, 2018 - link
At the top of Anandtech's website, check the bar under the site logo for the word "SYSTEMS" and hover your mouse over it to view the subcategories and browse a few links within. Also check the "SMARTPHONES & TABLETS" category.
Sttm - Monday, August 6, 2018 - link
This is so cool. Though I don't think I'd recommend that any non-business owners buy it, as with 7nm Zen 2-based TR coming next year, I cannot imagine your $1800 expenditure won't feel incredibly foolish in 12 months. If they can fit 32 cores at 12nm, how many can they fit at 7nm, and with superior cores as well!
Sttm - Monday, August 6, 2018 - link
The lack of an edit button strikes me again.
smilingcrow - Monday, August 6, 2018 - link
Don't buy a new sports car either, because in 12 months' time it will have depreciated and there may be faster models. ;)
TR2 pricing goes much higher than TR1, so who knows how much TR3 chips will cost next year and how many cores they will have.
These are for hobbyists when not bought for work and some hobbies are expensive if you aim high.
PeachNCream - Tuesday, August 7, 2018 - link
Car analogies don't work very well in this situation, as the car industry is relatively slow to change compared to even the maturing and subsequently slowing pace of CPU development.
rahvin - Monday, August 6, 2018 - link
Until they spin the silicon and you can buy it, it's not real; ask anyone who's been expecting Intel 10nm processors. Process tech is likely to stall out very soon if it hasn't already, because they have hit the quantum limits on the transistor, which have been frozen at 16nm for years.
What I'd be more worried about going forward is all the exploits Intel is suffering with Spectre-class timing attacks. We're already up to Spectre variant 11 now, and Intel has been vulnerable to every variant, while AMD has only been partially vulnerable to a few of them. I personally believe this is the strongest reason to migrate away from Intel until they can get some viable silicon that's not vulnerable to every possible timing exploit.
Gothmoth - Monday, August 6, 2018 - link
From what I hear from a friend who does hardware vlogging... the TR2s are a great update, and the benchmark numbers he saw are very good. The turbo with good cooling is higher than on paper, and we will not be disappointed when the benchmarks go live around the 15th/16th of August.
cpuaddicted - Monday, August 6, 2018 - link
The price of the 2950X in the first table is listed as $849, whereas everywhere else it is mentioned as $899. Please correct.
BB-5F-96-D7-AE-26 - Monday, August 6, 2018 - link
Just forget about the AMD TR 1950X...
mikk - Monday, August 6, 2018 - link
Pathetic paper launch from AMD
tsk2k - Monday, August 6, 2018 - link
Which Cannon Lake chips are you testing, Ian? The 8121U?
CaedenV - Monday, August 6, 2018 - link
CaedenV - Monday, August 6, 2018 - link
250W CPU?!?!?! That is epic!
Forget about cooling the processor... how do you keep your room cool when you are running an oven 24/7?
Still, seriously cool tech coming from AMD. Happy to see them back in action!
solnyshok - Monday, August 6, 2018 - link
Seriously hot, you meant?
TheWereCat - Tuesday, August 7, 2018 - link
My overclocked 1080 Ti does 350W when playing games. It is fairly easy to cool, but yes, the room gets hot a lot.
I can't imagine doing both a GPU- and CPU-intensive workload for a few hours with the TR2 and the 1080 Ti and staying in the room (if you don't have an AC, like me, it's a huge pain).
Oxford Guy - Wednesday, August 8, 2018 - link
That's the price you pay for gaming and for increased power. A long time ago, CPUs didn't even need heatsinks.
just4U - Tuesday, August 7, 2018 - link
Ian, will you be reviewing this with the high-end 'Wraith Ripper' cooler? I am curious about it. Also, that cat is awesome looking! More pics with Summer!
marsdeat - Tuesday, August 7, 2018 - link
Some quick typos and errors on the first page that I noticed while citing for Wikipedia.
Top table "AMD SKUs":
- 2950X should be $899, not $849
- 2920X base should be 3.5, not 3.4
Middle table "Stacks and Prices":
- 2970WX should be $1299, not $1249
Bottom table "The Battle":
- 2950X should be "180 W", the 'W' needs capitalising
- 2970WX should be $1299, not $1249
Ryan Smith - Tuesday, August 7, 2018 - link
Right you are. Thanks!
rocky12345 - Tuesday, August 7, 2018 - link
Yeah, I cannot wait for the reviews of the TR 2990WX & the TR 2950X. I do hope they let the reviewers do the reviews for both of the two top TRs in their class at the same time. I only say this because, even though the TR 2990WX will most likely be able to game, it will probably not be that great at it, because that is not what it was made for, and the TR 2950X, being a bit higher clocked and most likely more overclock-friendly, will be a whole lot better at games.
So if we get some reviewers focusing more on just the gaming end of it, and then they come to the conclusion that the TR 2990WX sucks at gaming, the whole internet will spread the FUD around like wildfire. I know Anandtech will do the proper workload tests, so this site will be my first go-to to get the proper picture.
Oxford Guy - Tuesday, August 7, 2018 - link
AMD already created a gaming mode on the top model of the previous Threadripper, which disabled cores. If that doesn't tell people these aren't supposed to be gaming chips, what will?
These are workstation chips. They're for doing work. They will game okay, but that is not their purpose at all, especially the more you move up the stack.
IntoGraphics - Friday, August 10, 2018 - link
I wish that those DDR4 prices would come down though. 64GB is the minimum of my Linux KVM DDR4 requirement.
sharath.naik - Monday, August 13, 2018 - link
There is very little reason to buy Intel HCC processors anymore.
Tom_Joad - Wednesday, August 15, 2018 - link
Please, let AMD do well! I'll buy something from them, (my first PC was AMD) just to keep some competition alive. Death to Intel monopolistic practices, and the governments that let them get away with it!