AnandTech is simply wrong regarding Game mode, or "Legacy Compatibility Mode" as you prefer to call it while making jokes about it.
It seems you don't know what ALL the other reviewers are saying: Game mode doesn't switch SMT off, it disables one die.
So Threadripper doesn't become a 16C/16T CPU after enabling Game mode, as you say; it becomes an 8C/16T CPU, like ALL the other reviewers say.
Go read Tom's Hardware, which says that Game mode executes "bcdedit /set numproc XX" to cut eight cores and shrink the CPU to one die (8C/16T). But because that is a software restriction, the memory and PCIe controllers of the second die are still alive, giving quad-channel memory support and the full 60+4 PCIe lanes even in Game mode.
And you thought you were being smart and funny with your Game mode comments...
Real renderers buy EPYC or Xeon. Either they have the money because it's corporate money, or they have the money because it comes from plebs paying someone commission/subscription money, or they have the money because they are plebs buying pre-built workstations.
Here's the real Threadripper review: AMD thrashes the Intel i9 in every possible way, smushes its puny ass into the dirt, and dances on the grave for the coup de grâce. It is very entertaining to watch the paid Intel lackeys here try to paper over what is clearly a superior product. Keep up with the gaming scores, guys, like anyone is buying this for gaming. I for one am looking forward to those delicious 40% faster render times, for the same price as the Intel space heater.
Not trying to nitpick or imply anything, but... is there a logical reason for Threadripper getting five pages of gaming performance review while Skylake-X still doesn't appear on the charts more than a month after it was reviewed?
With all due respect Mr. Cutress, "circumstances beyond our control" and "odd BIOS/firmware gaming results" didn't prevent anyone from bashing Ryzen for its gaming performance on its debut.
Ian did not lie, even by omission. They clearly stated in both the Ryzen and the Skylake-X conclusions why they didn't test gaming.
“You can please some of the people all of the time, you can please all of the people some of the time, but you can't please all of the people all of the time”
Listen to you fanboy crybabies. Tom's and Guru3D did gaming benches too. Go find a Reddit AMD fanboy forum that will give a 100% glowing review of your precious Threadsnapper. You won't find a single credible tech site out there doing it. It's called impartiality. Oh, and one more thing, ladies: you are all aware that AMD sent the major tech review sites the EXACT same hardware kit for review, right?
For those of you considering this CPU, the fact is you will get MUCH better value from one of the Ryzen CPUs - the Ryzen 7 1800X is now around $420 for 8C/16T, and the 1700 (8C/16T again) has been on sale for as little as $299. Now, if you need the high thread counts for work like content creation and still want to run games, it will be competitive (read: not king of the hill) in your games. So if more than 50% of your computing time is gaming, go for an Intel CPU OR one of the Ryzen 5/7 consumer CPUs.
Which would explain why the introduction doesn't mention the Netburst fiasco by name.
"The company that could force the most cycles through a processor could get a base performance advantage over the other, and it led to some rather hot chips, with the certain architectures being dropped for something that scaled better. " is, to my eye, actually attention-grabbing in the way it avoids using any names like Preshott, I mean Prescott and only obliquely references the 1GHz Athlon, the Thunderbirds, Sledgehammer, and the whole Netburst fiasco that destroyed the once-respected Pentium name.
But no, let's just say that "certain architectures" were dropped and there were "some rather hot chips" and keep Intel happy. They need that bone right now, though not as much as they did during the reign of Thunderbird and the 'hammers.
Hey, we were an Athlon house. I didn't suffer through the series of mis-steps that plagued Intel. I just thought the sentence was conspicuous in how hard it tried to not name names.
Is it possible to bench the Intel CPUs (especially the i9-7900X) with Hyper-Threading turned off for those latency/single-thread tests? This would probably give a better comparison to AMD's Game Mode, and hopefully higher numbers too, due to double the cache/registers available to one thread.
Yep, if you use AVX-512 it will downclock to 1.8GHz and draw 400W for the CPU alone and 600W from the wall. See der8auer's video titled "The X299 VRM Disaster (en)": all X299 motherboard VRMs can be run into thermal shutdown under AVX-512 loads with just a small overclock, never mind AVX-512's crazy power consumption. That is why AMD didn't put AVX-512 in Zen; it is a power consumption monster.
Yeah, the discrepancy is huge - converted to AnandTech's compiles-per-day metric, the Ars Technica benchmark maxes out at a little less than 20, which is a far cry from what we see here.
Clearly, the details of the compiler, settings and codebase (and perhaps other things!) matter hugely.
That's unfortunate, because compilation is annoyingly slow, and it would be a boon to know what to buy to ameliorate that.
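(Side note on the unit conversion: "compiles per day" is just the seconds in a day divided by the seconds one full build takes. A minimal sketch in Python, with made-up build times; the ~20/day figure above is what falls out of Ars' numbers:)

```python
# Convert a full-rebuild wall time into a "compiles per day" figure.
SECONDS_PER_DAY = 24 * 60 * 60  # 86400

def compiles_per_day(build_seconds: float) -> float:
    return SECONDS_PER_DAY / build_seconds

# Illustrative build times only, not measured results.
for minutes in (72, 54, 45):
    rate = compiles_per_day(minutes * 60)
    print(f"{minutes} min/build -> {rate:.1f} compiles/day")
```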
This is very compiler dependent. My compiler is blazingly fast on my wimpy hardware because it's blazingly clever. Most compilers seem to crawl no matter what they run on.
Looks like AnandTech's compile benchmark is bunk; it's just way off from all the other benchmarks out there. Not only that, no other test shows a 20% improvement over the 6950X, which is also a 10-core/20-thread CPU. Something tells me the 7900X result is completely wrong, or that system has something faster like a different PCIe SSD.
All I know is, for those of us running Plex, SABnzbd, Sonarr, and Radarr servers (among others) while encoding and gaming, all simultaneously, our day has arrived!
The joke is on you. More cores and more memory bandwidth are always faster for compiling. AnandTech must have botched the benchmark here. Other sites show Threadripper whipping the i9's ass, as expected.
They did, without a doubt, screw up the compile test. The 6950X is a 10-core/20-thread Intel CPU, yet somehow the 7900X shows a 20% improvement when no other test even comes close to that much. The 7900X is basically just a clock speed bump over the 6950X.
Why is it a mess if people choose to buy into this level of tech? It brings formerly enterprise-level tech to the masses, and the very nature of how this stuff works makes it clear there are tradeoffs in design. AMD is forced to start off dealing with a software market that for years has focused on the prevalence of moderately low-core-count Intel CPUs with strong(er) IPC. Offering a simple hardware choice to tailor the performance slant is a nice idea. I mean, what's your problem here? Do you not understand UMA vs. NUMA? If not, you probably shouldn't be buying this level of tech. :D
That will change. Why invest masses of expensive brainpower in aggressively multithreading your game or app when no-one has the hardware to use it? Now they do.
On the verge? All major consoles have had a greater core count than consumer CPUs, not to mention complex memory architectures, since, what, 2005? One suspects the PC market has been benefiting from this for quite some time.
Specifically, the 360 had 3 general-purpose CPU cores, and the PS3 had one general-purpose CPU core plus 7 short-pipeline coprocessors that could only read and write to their local stores and had to be fed by the CPU core. The 360 had unified program and graphics RAM (still not common on PC!) as well as its large high-speed cache. The PS3 had separate program and video RAM. The Xbox One and PS4 were super boring PCs in boxes, but they did have 8-core CPUs. The X1X is interesting: it's got unified RAM that runs at ludicrous speed. Sadly it will only be used for running games at 1800p to 2160p at 30 to 60 FPS :(
Why do people constantly assume this is purely time/market economics?
Not everything can *be* parallelized. Do people really not get that? It isn't just developers targeting a market. There are tasks that *can't be parallelized* because of the practical reality of dependencies. Executing ahead and out of order can only go so far before you have an inverse effect. Everyone could have 40 core CPUs... It doesn't mean that *gaming workloads* will be able to scale out that well.
The work that lends itself best to parallelization is the rendering pipeline, and that's already entirely on the GPU (which is already massively parallel).
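(A quick way to see the ceiling is Amdahl's law: with a serial fraction s, the best speedup on n cores is 1 / (s + (1-s)/n). A minimal sketch; the 80%-parallel figure is illustrative, not measured from any game:)

```python
# Amdahl's law: even a modest serial fraction caps multi-core speedup.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A workload that is 80% parallel never exceeds 5x, no matter the core count.
for cores in (4, 8, 16, 32, 40):
    print(f"{cores:2d} cores -> {amdahl_speedup(0.80, cores):.2f}x")
```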
I think what AMD did here though is fantastic. In my mind, creating a switch to change modes vastly adds to the value of the chip. I can now maximize performance based upon workload and software profile and that brings me closer to having the best of both worlds from one CPU.
I agree it is a mess, and also that it is not AMD's fault.
I've had a 14C/28T Broadwell chip for over a year now, and I cannot launch Tomb Raider with HT on, nor GTA5. But most software is indifferent to the number of cores presented to it, it would seem to me.
Great review, but the word "traditional" is used heavily. Given the short lifespan of computer parts and the nature of consumer electronics, I'd suggest that there isn't enough time or emotional attachment to establish a tradition of any sort. Motherboard sockets and market segments, for instance, might be better described in other ways, unless it's becoming traditional in the review business to call older product designs traditional. :)
It's pretty useless measuring power alone. You need to measure efficiency (performance per watt). So yeah, a 16-core CPU draws more power than a 10-core, but it's also probably doing a lot more work.
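(To make that concrete: what matters is energy per finished job, not instantaneous watts. A minimal sketch using the 177W/140W figures quoted elsewhere in this thread, with made-up run times:)

```python
# Energy per job (kJ) = average power (W) x time to finish (s) / 1000.
# A hungrier chip that finishes sooner can still be the more efficient one.
def energy_kj(watts: float, seconds: float) -> float:
    return watts * seconds / 1000.0

print(energy_kj(177, 100))  # 16-core at 177 W, 100 s job -> 17.7 kJ
print(energy_kj(140, 160))  # 10-core at 140 W, 160 s job -> 22.4 kJ
```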
Just do an AVX-512 benchmark and Intel will jump over 300W, or 400W overclocked, from the CPU alone (Prime95 AVX-512). See der8auer's video "The X299 VRM Disaster (en)".
The Chromium build time results are interesting. AnandTech's results have the 1950X at only 3/4 of the 7900X's performance. Ars Technica gets almost equal results on both CPUs, but at 16 compiles per day vs the 24 or 32 here, is seeing significantly worse numbers all around.
I'm wondering what's different between the two compile benchmarks to see such a large spread.
I think it has a lot to do with the RAM used by AnandTech vs Ars Technica. For all the regular benchmarking AnandTech used DDR4-2400; DDR4-3200 was only used in some overclocking tests. Ars Technica used DDR4-3200 for all benchmarking. Everyone already knows how much faster DDR4 memory helps the Zen architecture.
AnandTech must have misconfigured something. Building Chromium scales practically linearly. You can move jobs all the way across a slow network and compile on another machine, and you still get linear speed-ups as cores are added.
I refrained from posting on the previous article, but now I'm quite sure Anand is being paid by Intel. It is not that I take issue with the benchmarks themselves, but with how they are presented. I was even under the impression that this was an Intel review.
The previous article was headlined "Introducing Intel's Desktop Processor". Huge amounts of marketing research go into how to market products. Just by stating one thing first, or phrasing it differently, quite different messages can be conveyed without lying outright.
By making the "Most Powerful, Most Scalable" Bold, that is what the readers read first, then they read "Desktop Processor" without even reading that is is Intel's. This is how marketing works, so Anand used slanted journalism to favour Intel, yet most people will just not realise it eat it up.
In this review there are so many slanted-journalism problems, it is just sad. If you want, just compare it to other sites' reviews. They just omit certain tests and list others at which Intel excels.
I have lost my respect for AnandTech with these last two articles, and I have followed AnandTech since its inception. Sad to see that you too have now been bought by Intel, even though I suspected this before. Congratulations on making it so clear!!!
Anand hasn't worked at the website for a few years now. The author (me) is clearly stated at the top.
Just think about what you're saying. If I were in Intel's pocket, we wouldn't be sampled by AMD, period. If they had major beef with how we were reporting, I'd either be blacklisted or consistently on a call every time there's an AMD product launch (and there have been a fair few this year).
I've always let the results do the talking, and steered clear of the hype generated by others online. We've gone in-depth into how things are done the way they are, and the positives and negatives of each approach (rather than just ignoring the why). We've run the tests, been honest about our results, and considered the market for the product being reviewed. My background is scientific, and the scientific method is applied rigorously and thoroughly to the product and the target market. If I see bullshit, I point it out, and have done many times in the past.
I'm not exactly sure what your problem is - you state that the review is 'slanted journalism', but fail to give examples. We've posted ALL of the review data that we have, and we have a benchmark database for anyone that wants to go through all the data at any time. That benchmark database is continually being updated with new CPUs and new tests. Feel free to draw your own conclusions if you don't agree with what is written.
Just note that a couple of weeks ago I was being called a shill for AMD. A couple of weeks before that, a shill for Intel. A couple before that... Nonetheless both companies still keep us on their sampling lists, on their PR lists, they ask us questions, they answer our questions. Editorial is a mile away from anything ad related and the people I deal with at both companies are not the ones dealing with our ad teams anyway. I wouldn't have it any other way.
I personally always enjoy reading your reviews Ian. Even though they don't always reach the conclusions I hoped they would reach before reading, you have the evidence and benchmarks to back it up. Keep up the good work!
For me, it isn't about "scientific benchmarking"; it's about which benchmarks are used and what story is being told. I, along with many others, would never buy a Threadripper to open a single .pdf. I could be wrong, but I don't think that's the target audience Intel or AMD is aiming for.
I mean, why not forgo the .pdf and other benchmarks that are really useless for this product and add multi-threaded use cases? For instance, why not test how many VMs it can host and how much I/O they sustain, or launch a couple of VMs, run a SQL DB benchmark, and game at the same time?
It could just be me, but I'm not going to buy a 7900X or 1950X for opening .pdf files or testing SunSpider/Kraken, lol. Hopefully those benchmarks weren't included to tell a story, as mentioned above.
We're going to be compiling, 3D rendering with multiple GPUs, and running multiple VMs, all while multitasking with other apps.
Single threaded use cases aren't why people buy really wide CPUs. But performing badly in them, since they represent a lot of ordinary basic usage, can be a reason not to buy one. Also running the same benches on all products allows for them all to be compared readily vs having to hunt for benches covering the specific pair you're interested in.
VM type benchmarks are more Johan's area since that's a traditional server workload. OTOH there's a decent amount of overlap with developer workloads there too so adding it now that we've got a compile test might not be a bad idea. On the gripping hand, any new benchmarks need to be fully automated so Ian can push an easy button to collect data while he works on analysis of results. Also the value of any new benchmark needs to be weighed against how much it slows the entire benching run down, and how much time rerunning it on a large number of existing platforms will take to generate a comparison set.
It really depends on the use case. 20% slower at PDF opening? I don't care, because that time has reached diminishing returns, and Intel would need to be MUCH faster for it to be a UX problem.
But I think at $999 Intel has a strong case for its i9. Factoring in the motherboard, though, AMD is still cheaper; not sure if that is mentioned in the article.
Also note Intel is on its third iteration of 14nm, against a brand-new 14nm from AMD/GloFo.
I am very excited for 7nm Zen 2 coming next year. I hope all the software, compilers, and optimisations have time to catch up with Zen.
I won't get into an argument, but I, and many of my friends on the developer side of the house, have been waiting for this review, and it doesn't provide me with any useful information. I understand it might be Johan's wheelhouse, but come on... opening a damn .pdf file, and testing SunSpider/Kraken/gaming benchmarks? That won't give anyone interested in either CPU any validation of their purchase. I'm not trying to be salty, I just want some more damn details vs. trying to put both vendors in a good light.
Rather than have 20 different tests for each set of CPUs and very minimal overlap, we have a giant glove that holds all the tests for every CPU in a single script: 80 test points, rather than 4x20. The idea is that there are benchmarks for everyone, so you can ignore the ones that don't matter, rather than expect 100% of the benchmarks to matter (e.g. if you care about five tests, does it matter whether they are published alongside 75 other tests, or do they have to be the only five tests in the review?). It's not a case of trying to put both vendors in a good light; it's a case of this being a universal test suite.
Well, show me a database benchmark, virtual machine benchmark, 3dmax benchmark, blender benchmark and I'll shutty ;)
It's hard for me to look at this review as anything but a gamer's perspective, and I'm not a gamer. Sorry, just the way I see it. I'll wait for more pro-consumer benchmarks?
This is exactly my point as well. Why on earth so much focus on single-threaded tests and games, when we all knew from way back that TR was not going to be a winner there? Where are all the other benches, as you mention? Oh no, those would make Intel look bad!!!!!
The answer to both of you is that this is a high-end PC processor, not a workstation CPU and not a server CPU. That was clearly covered at the start of the article.
If you want raw number-crunching info, other sites will have those reviews, and really, maybe AnandTech will review it in that light in a separate article, since it really is such a powerful CPU for server-type work.
Also, there is a LOT of value in having a standardized set of tests. Even if a few tests here and there are no longer valuable, like PDF opening, using the same tests across the board is important for Bench. You can't compare products if you aren't using the same tools.
Unfortunately AMD is currently ahead of the curve, giving massive SMP to normal consumers at a reasonable price. It will take a little time for devs to catch up and really make use of this amazing CPU.
With the processing power in a CPU like this, imagine the game mechanics that could be created. For those of us more interested in making this a reasonably priced workstation/server build for VMs etc., cool for us; but that isn't where this is being marketed, and it's not really fair to jump all over the reviewer for it.
Yeah, TR doesn't really look like something that's massively aimed at gamers, it has too many capabilities and features which gamers wouldn't be interested in.
It's a HEDT/workstation part; a year ago "workstation" meant a dual 8-core Xeon box, which a single 1950X replicates.
Intel draws a line by not supporting ECC; AMD has supported ECC on all its main CPUs, server or not, all the way back to the Athlon 64.
16 cores/32 threads, ECC, 64 PCIe lanes, and an upgrade path to 32 cores/64 threads with Zen 3. Smells like a workstation to me.
Server CPUs, which EPYC is, are another thing again, with features tailored to that role: massive core counts at low clock speeds to maximize efficiency, and damn expensive mobos without any gamerish gizmos, built to be racked in a building and never looked at. TR can do a bit of that too, but it's optimized for all-around performance and budget-friendliness.
Dan sums it up. Some of these tests are simply check boxes: is it adequate?
Some people do say that an automated suite isn't the way to do things; unfortunately, without spending over two months designing this script, I wouldn't have time for nearly as much data or to test nearly as many CPUs. Automation is a key aspect of testing, and I've spent a good while making sure tests like our Chromium compile are process-consistent across systems.
There's always scope to add more tests (my scripts are modular now), if they can be repeatable and deterministic, but also easy to understand in how they are set up. Feel free to reach out via email if you have suggestions.
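(For the curious, the shape of such a setup is roughly a registry of self-contained, deterministic tests that the harness runs in a fixed order. A minimal sketch of the idea, not the actual scripts:)

```python
# Minimal sketch of a modular benchmark harness: tests register under a
# stable name, run deterministically, and report a comparable timing.
import time

TESTS = {}

def benchmark(name):
    """Decorator that adds a test function to the suite."""
    def register(fn):
        TESTS[name] = fn
        return fn
    return register

@benchmark("int-sum")
def int_sum():
    return sum(range(10_000_000))

@benchmark("string-join")
def string_join():
    return len("-".join(str(i) for i in range(1_000_000)))

def run_suite():
    # Sorted order keeps runs repeatable across systems and sessions.
    for name in sorted(TESTS):
        start = time.perf_counter()
        TESTS[name]()
        print(f"{name}: {time.perf_counter() - start:.3f}s")

run_suite()
```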
Ian, I understand that you see them as checkboxes, but this is not a normal CPU that John Doe is going to buy. It has a very specific audience, and I feel you are missing that audience badly. A guy who buys this for rendering or 3ds Max is not going to worry about games. Yes, it would be a great bonus to also be OK at them. Other sites even tested running a render while playing a game at the same time, and TR shone like a star against Intel. This is something that might actually happen in real life: a guy could start a render and then, while waiting, decide to play a game.
No, but you open things like IDEs and Premiere. A PDF test is a gateway test in that regard, with an abnormally large input. When a workstation is not crunching hard, it's being used to navigate through programs, perhaps with the web and documents in tow, where the UX is going to be indicative of something like PDF opening.
By including useless benchmarks you not only waste the target audience's time, you also waste your own writing up and uploading images for those useless benchmarks instead of making the article more interesting.
How about a "The Destroyer for HEDT/workstation": a typical productivity load plus some gaming? All of a sudden people are getting TWICE the CPU resources; they can do things on one machine that they couldn't before.
They could get a dual-socket mobo with 2x 10-core Xeons, paying a hefty premium for pathetic clock speeds, if they wanted to game a bit while doing work. TR fixed that, offering mass-consumer levels of gaming performance while cutting multicore costs by more than half (core counts + ECC support without paying the Intel tax).
And a few months ago that audience was limited to doing their productivity work on 6-8 cores, or 10 if they paid the huge Intel tax; they probably couldn't game without hurting other things, and kept a secondary PC for killing time.
With TR and its massive 16-core count they can finally do all of that from a single PC, or focus the entire powerhouse where they need it (leaving things to churn while they sleep).
Especially when every CPU right now clocks itself up to ~4GHz on single-threaded tasks. Single thread is just an obsolete metric when only the most basic tasks use it, tasks where speed is the last thing you'll worry about; more likely you'll curse that piece of c*rap for not using 80% of your CPU resources.
I would love to see more VM benchmarking on these types of CPUs. I would also love to see how a desktop performs on top of a Server 2016 hypervisor with multiple servers (Windows and Linux) running on top of the same hypervisor.
I should have made it clear that I loved the review. Ian's reviews are always great!
I would just like to see these types of things in addition. It seems like we are getting to a point where we can have our own home lab and a desktop all on one machine on top of a hypervisor, but this idea may be my own strange dream.
And others would like to know how it works at video editing or as a DAW etc. To add a whole bunch of demanding benchmarks just for HEDT systems is a hell of a lot of work for little return for a site whose main focus is the mainstream. Try looking at more specialised reviews.
This, please! My TR purchase is hinging on the performance of multiple VMWare VMs all running full-out at least 18 hours per day.
Ian, I'd love to see some of your compute-intensive multi-core benches running on a Linux host with Linux-based VMWare VMs (OpenCV analysis, anyone? Send me that 1950x and I'll happily run SIFT and SURF analysis all day long for you :-). I was delighted by the non-gaming benchmarks shown first in this review and hope to see more professional benches on Anand. Leave the gamerkids to Tom's or HardOCP (or at least limit gaming benchmarks to hardware that is built for it): Anandtech has always been more about folks who make their living on HPDC, and I have nothing but the highest respect for the technical staff at this publication.
I don't give a monkey's about RGB lighting, tempered glass cases, 4k gaming or GTAV FPS. How machines like Threadripper perform in a HPC environment is going to keep AMD in this market, and I sincerely hope they prove to be viable.
You're going to spend $1000 on a CPU but have no clue how it handles the tasks you need it for, smh. As a VMware customer, they will tell you which CPUs are certified for a specific task. You don't need a random website to tell you that.
Hi Ian. It's a great review, but I do have some suggestions on the test suite. The suite used for this CPU was not materially different from the suites used for many of the desktop CPUs reviewed earlier. It would be great to see tests which explicitly exercise the multi-threaded capabilities and the insane I/O of the system, e.g. server hosting (how many users can log in), virtual machines, and more productivity suites paired with a multi-GPU setup (running Adobe creative apps or similar). A combination of your EPYC test suite and your high-end GPU test suite would probably be best suited for this.
Also, for the gaming benchmarks, you had 1080, 1060, RX 580 and RX 480 GPUs. I'm not sure whether these were GPU-bottlenecked, with the differences in framerates being marginal rather than a real show of CPU strength. Also, the Civ 6 AI test would be a great addition, as that really stresses the CPU.
I completely understand that there is only so much that can be done in the limited timeframe typically available for these reviews, but it would be great to see such tests in future iterations and updates.
Why did you end all the gaming review sections with something like "Switching to Game mode would have produced better numbers..."? Why didn't you run the benchmarks in Game mode in the first place?
You might want to call that out more clearly in the text. I also missed that you have two sets of 1950X results, and probably wouldn't have figured out what the -G suffix meant without a hint.
If people quick-glance, that's their problem for missing key info. :D When learning about something as new as this, I read everything. Otherwise, it's like the tech equivalent of crossing a road while gawping at a phone. :}
Last time I read so much about a new CPU launch was Nehalem/X58.
Indeed. :D Reminds me of when a long-time eBay seller told me that long item descriptions are pointless, because most bidders only read the first paragraph, often only the first sentence.
The test suite is a global glove: rather than have 20 tests for each segment, it's a single band of 80 tests for every situation. Johan does different tests as his office is several hundred miles away from where I am (and we're thousands of miles away from any other reviewer).
For the gaming benchmarks, there are big differences in 99th percentile frame rates and Time Under analysis. As games become more and more GPU-bottlenecked in average frame rates, this is where the differentiation point is. It's a reason why we still test 1080p as well. With regard to the AI test, I've asked the Civ team repeatedly to make it accessible from the command line so I can rope it into my testing scripts easily (they already do this for the main GPU test). But like many other game studios, getting them to unlock a flag is a frustrating endeavor when they don't even respond to messages.
Thanks for your reply. Hopefully the test suite can be expanded as Intel's CPUs also move to higher core counts and I/O ranges in the future. And I completely understand the frustration of trying to get a third party to change their defaults. Cheers.
Ian, can we get an updated comments section so we can +/- people, and after x number of minuses a post won't show by default? I'm saying this because some of these comments (the one in this chain included) are not meaningful responses. The comments section is by far the weakest link on AnandTech.
Tom's has that; indeed it's kinda handy for blanking out the trolls. Whether it's any useful indicator of "valid" opinion though, well, that kinda varies. :D (There's nowt to stop the trolls from downvoting everything under the sun, though one option would be to auto-suspend someone's ability to vote if their own posts get hidden from downvoting too often, a hands-off way of slapping the trolls.)
Given the choice, I'd much rather just be able to *edit* what I've posted than up/down-vote what others have written. I still smile recalling a guy who posted a followup to apologise for the typos in his o.p., but the followup had typos as well, after which he posted aaaaagh. :D
Ian, thanks for at least responding; I appreciate it. Please compare your review to sites like PCPer and many others. They have no problem also pointing out TR's weak points, yet clearly understand what TR was mostly designed for and focus properly on it; and even though they did not test the 64 PCIe lanes, as an example, they mention they are planning a follow-up to do so, since it is an important point. You do mention these things as well, but could have done more than mention them in passing.
Look at your review, most of it is about games. Are you serious?
I have to give you credit for at least mentioning the problems with SYSmark.
Let me give you an example of slanted journalism. When you do the rendering benchmarks, where AMD is known to shine, at each benchmark you only mention what they do etc., and fail to mention that AMD clearly beats Intel, even though other sites focus more on these benchmarks. In the one benchmark where Intel gets a decent score, you take the time to mention that:
"Though it's interesting just how close the 10-core Core i9-7900X gets in the CPU (C++) test despite a significant core count disadvantage, likely due to a combination of higher IPC and clockspeeds."
Not in one of the rendering benchmarks do you give credit to AMD, yet you found it fitting to end the section with:
"Intel recently announced that its new 18-core chip scores 3200 on Cinebench R15. That would be an extra 6.7% performance over the Threadripper 1950X for 2x the cost."
Not slanted journalism? At least you mention "2x the cost," but for most this will not deter them from buying from the monopoly.
After focusing so much time on game performance, I am not sure you understand TR at all. AMD still has a long way to go in many areas. Why? Because corrupt Intel basically drove them to bankruptcy, but that is a discussion for another day. I lived through those days and experienced it myself.
Maybe I missed it, but where did you discuss the issue of memory speed? You mention memory overclocking at the beginning. Did you test the system at 3200 or 2666? It is important to note. If you ran at 2666, then you are missing a very important point: Ryzen is known to gain a huge amount from memory speed. You should not regard 3200 as an overclock, since that is what the memory is made for, even if 2666 is the standard spec. Most other sites I checked used it that way. If you did use 3200, don't you think you should mention it?
Why is it that your review ends up meh about TR and leaves the reader rather wanting an i9 in almost all respects, yet most other sites give admiration where it is deserved, even though they have criticism as well? Ian, I see that you are clearly disappointed with TR, which is OK; maybe you just like playing games and that is why.
It was clear how much you admire Intel in your previous article. You say that I gave no examples of slanted journalism; maybe you should read my post again. "Most Powerful, Most Scalable." It is well known that people don't read the fine print. This was intentional. If not, you are very unlucky guys for having so many unintended mishaps, in which case I truly need to say I am sorry.
For once, please be a bit excited that there is some competition against the monopoly of Intel, or maybe you are also deluded into thinking they became so without any underhanded methods.
By the way, sorry that I called you Anand. I actually meant to type AnandTech, but left it as it was. This site still carries his name and he should still take responsibility. After I posted, I realised I should have just checked the author, so sorry about that.
"Intel recently announced that its new 18-core chip scores 3200 on Cinebench R15. That would be an extra 6.7% performance over the Threadripper 1950X for 2x the cost."
How do you not understand that is a dig at Intel? He's saying you have to pay twice as much for only a 6.7% improvement.
The memory speed approach taken was clearly explained in the test setup and stated as being consistent with how they always test. I don't take issue with testing at stock speeds on launch day, since running memory out of spec can be evaluated in depth later on.
That is just rubbish. Threadripper has no problem with 3200 memory, and other sites have no problem running it at that speed. 3200 memory is designed to run at 3200; why run it at 2666? There is just no excuse, except being paid by Intel.
Maybe then you can accuse other sites of being unscientific?
""Intel recently announced that its new 18-core chip scores 3200 on Cinebench R15. That would be an extra 6.7% performance over the Threadripper 1950X for 2x the cost."
Not slanted journalism? At least you mention "2x the cost," but for most this will not deter them from buying from the monopoly."
You call Intel the monopoly and call him out for not wording the sentence to dissuade people from buying Intel. Who has the bias here? If he were actively promoting Intel over AMD, or vice versa, you'd be OK with the latter; but for doing neither, he's an Intel shill? Come on. That's unfair. HOW should he have written it to satisfy you?
FYI Anand is gone. He's NOT responsible for anything at Anandtech. Are you going to hold Wozniak's feet to the fire for the lack of ports on a Mac too?
Well, reading the whole review today - 13/08/2017 - I can see that the reviewer did something even worse than not giving us performance numbers with DDR4-3200.
He used DDR4-2400, as he clearly states in the configuration table, to fill up the performance tables, BUT on the power consumption page he added DDR4-3200 results (!) just to inform us that DDR4-3200 consumes 13W more, without providing any performance numbers for that memory speed (!!)
The only thing left is for the reviewer to tell us exactly which department at Intel he works in, because in the first pages he says he wanted to test TR against a 2P Intel system, since Skylake-X has only 10C/20T, but Intel didn't allow him.
Ask your Intel department to permit it next time.
Yeah! You make a great point! Too much emphasis on gaming all the time! These processors aren't GPUs after all! Most people who buy PCs don't play games at all. Even I as a game developer would like to see more real world tests, especially compilation and data-crunching tests that are typical for game content creation and development workloads. Even I as a game developer spend 99% of my time in front of the computer not playing any games.
So Intel made AMD release the underpowered, overheating Bulldozer CPUs? Did Intel also make them sell their US- and EU-based fabs so they'd be wholly dependent on the Chinese to make their chips? Did Intel also make them buy an equally struggling graphics card company? Truth is, AMD lost all the mind share and market share they had because of bad corporate decisions and uncompetitive CPU designs post-Thunderbird. It's no one's fault but their own that it took seven years to produce a competitive replacement. Was Intel supposed to wait until they caught up? And Intel was a monopoly long before AMD started producing competitive CPUs.
You can keep blaming Intel for AMD's screw-ups, but those of us with common sense and the ability to read know the fault lies with AMD's management.
You are not sampled because of your divine objectivity, Ian; you are sampled because you review for a site that is still somewhat popular from its former glory. You can deny it all you want, and understandably, as it is part of your job, but AT is heavily biased towards the rich American boys - Intel, Apple, Nvidia... You are definitely subtle enough for the dumdums, but for better or worse, we are not all dumdums yet.
But hey, it is not all that bad; after all, nowadays there are scores of websites running reviews, so people have a basis for comparison and can extrapolate objective results for themselves.
And some constructive criticism: it would be nicer if these reviews featured more workloads people actually use in practice. Too many synthetics, too many short-running tests, too many tests of software that makes you go "wtf is this and who in the world uses it".
For example, rendering: very few people in the industry actually render with Corona or Blender; Blender is used a lot for modelling and texturing, but not really for rendering. Neither is LuxMark. Neither is POV-Ray, nor is CB.
Most people who render stuff nowadays use 3ds Max and V-Ray, so testing that would be indicative of actual, practical performance for more people than all those other tests combined.
Also, people want render times, not scores. Scores are a very poor indication of the actual performance you will get, because many of those tests are short, so the CPU doesn't operate in the same mode it would if it were sweating under continuous work.
Another rendering-adjacent test that would benefit prosumers is After Effects. A lot of people use After Effects, all the time.
You also don't have a DAW test, something like Cubase or Studio One.
A lot of the target market for HEDT is also interested in multiphysics, for example ANSYS or COMSOL.
The compilation test you run, as already mentioned several times by different people, is not the most adequate either.
Basically, this review has very low informational value for people who are actually likely to purchase TR.
AE would definitely be a good test for TR; it's something that can hammer an entire system, unlike games, which only stress certain elements. I've seen AE renders grab 40GB of RAM in seconds. A guy at Sony told me some of their renders can gobble 500GB of data for a single frame, imposing astonishing I/O demands on their SAN and render nodes. Someone at a London movie company told me they use a 10GB/sec SAN to handle this sort of thing, and the issues surrounding memory access vs. cache vs. cores are very important; e.g. their render management software can disable cores, as some types of render benefit from a larger slice of memory bandwidth per core.
There are all sorts of tasks which impose heavy I/O loads while also needing varying degrees of main CPU power. Some gobble enormous amounts of RAM, like ANSYS, though I don't know if that's still used.
I'd be interested to know how threaded Sparks in Flame/Smoke behave with TR, though I guess that won't happen unless Autodesk/HP sort out the platform support.
Why do you bother replying to these, Ian? I love your enthusiasm about what you do, and am happy that you reply to comments, but as you state yourself, no matter what you say, you'll be called a shill on more than a weekly basis by either side no matter what you do. Intel shill, AMD shill, Apple shill, Nvidia shill and so on. There's no stopping it, because you just can't please the people who go into something wanting a specific result. Well, you can if you give them that result, but sometimes, facts aren't what you want them to be, and some people don't accept that.
@Johan Steyn: while I agree with you that the Intel piece with the PR slide at the top was a little bit lame (I even lolled at the "most scalable" part; isn't something like the "glued-together" Zen the most scalable design?), I think this review is good and also covers the architecture etc. There were a few instances while reading where the wording seemed odd or unnecessarily polite about Intel's shortcomings, but I can't even remember them now.
Though I was surprised by the power numbers, as Tom's measured much higher wattage for the 7900X (160-200W, and with TTF even up to 250-331W), while here the 7800X/7900X drew only ~150W. Also this sentence is odd: "All the Threadripper CPUs hit around 177W, just under the 180W TDP, while the Skylake-X CPUs move to their 140W TDP." Move to? They are above the TDP... why not state it clearly?
Power consumption can vary a lot depending on the type of task and the exact nature of that task. So you should expect a lot of variation across reviews.
No offence, but HardOCP is far more respectable than what we have at AnandTech these days.
But that's not hard. The AT website is pretty much a shell for the forums, which is where most of the traffic is. I'm sure they only do the reviews because 'it's something we have always done'.
You may not understand how wording is used to convey sentiments in a different way. That is what politicians thrive on. You could, for instance, say "I am sorry that you misunderstood me." It gives the impression that you are sorry, but you are not. People also ask for forgiveness like this: "If I have hurt you, please forgive me." It sounds sincere, but it is a hidden lie: it never acknowledges that you actually hurt anybody, and in effect says that you do not think you did.
Well, this is a science and I cannot explain it all here. If you miss it, then it does not mean it is not there.
I thought I'd just comment to say I understand what you're saying and agree. Even if a sentence gives facts, it can sound more positive one way or another way based on how it is stated. The author has to do some reflection sometimes to catch this. I believe him whenever he says he doesn't have much time, and maybe that plays into it. But articles at different sites may not have this bias effect and it can be an important component of a review article.
"Intel recently announced that its new 18-core chip scores 3200 on Cinebench R15. That would be an extra 6.7% performance over the Threadripper 1950X for 2x the cost."
These two sentences give facts, but they sound favorable to Intel until the very end. It's a subtle perception thing, but it's real. The facts in the sentences, however, are massively favorable to AMD: Threadripper delivers only 6.7% less performance than an announced (not yet released) Intel CPU, for half the cost!
Here is another version:
"Intel recently announced that its new 18-core chip scores 3200 on Cinebench R15. So Threadripper, for half the cost of Intel's as-yet unreleased chip, performs only 6.7% slower in Cinebench."
There, that one leads with Threadripper and "half the cost" in the second sentence, and sounds much different.
WCCFtech is a joke, it's nothing but rumors and trolling. If you are seriously going to put WCCFtech above Anandtech then everyone here can immediately disregard all of your comments.
Fantastic review, Ian. I was curious exactly how AMD would handle the NUMA problem with Threadripper. It seems that anybody buying Threadripper for real work is going to have to stay very aware of exactly which configuration gets them the best performance.
One minor correction, at the bottom of the CPU Rendering tests page:
"Intel recently announced that its new 18-core chip scores 3200 on Cinebench R15. That would be an extra 6.7% performance over the Threadripper 1950X for 2x the cost." - this score is for the 16 core i9-7960X, not the 7980XE.
Why did you end all the gaming review sections with something like "Switching to Game mode would have produced better numbers..."? Why didn't you run the benchmarks in Game mode in the first place?
We ran with both and give the data for both. Gaming Mode is not default, and it may surprise you just how many systems are still run at default settings.
Just a thought: might it be possible for AMD to include logic in the design that can tell when the chip is running something that would probably run better in the other mode, and if so notify the user?
Keeping the version constant means you can compare against a huge backlog of old data without having to rerun anything and having to drop any systems you can't get working or were only loaners.
Exactly. We don't test GPU's with Quake 2 only to have comparable benchmark results against Voodoo 3.
And almost no-one running 7-Zip today (be it on a Core 2 Quad OR a Core i9) will be running these ancient versions. Results from those versions are just meaningless in today's environment.
When the developers of Civ finally listen to me and add a command line flag for the AI benchmark, I can script it into my setup. They keep ignoring me. They have a command line for the regular benchmark, but because the AI benchmark was added post-release, no-one thought to add a command line for it (or publish what the flags are). There is an -aibenchmark flag in the disassembled code, but it doesn't do anything, which makes me think it is disabled in release builds.
In the not so distant past - like last year - you'd have to pay Intel some seriously overpriced HEDT money for 6+ cores. Ryzen gave us 8 cores, and most games can't even use that. Threadripper is a kick-ass processor for the workstation market; why anyone would consider it for gaming I have no idea. It's giving you tons of PCIe lanes just as AMD is downplaying CrossFire with Vega, NVIDIA has officially dropped 3-way/4-way support, and even 2-way CF/SLI has been a hit-and-miss experience. I went from a dual-card setup to a single 1080 Ti, and I don't think I'll ever do multi-GPU again.
And then there's GPU acceleration for rendering (e.g. CUDA), where the SLI/CF modes are not needed at all. My old X79 CUDA box ran quad 900MHz GTX 580 3GB cards.
I recall someone who does quantum chemistry saying they make significant use of multiple GPUs, and check out the OctaneBench CUDA test, the top spot has eleven 1080 Tis. :D (PCIe splitter boxes)
There is no such thing as SHED. Ryzen is a traditional desktop part. That it raises the bar in that segment compared to Intel's offering is a good thing--a significant performance and feature boost that we haven't seen in years. Threadripper is a HEDT part. That it raises the bar in that segment compared to Intel's offering is a good thing--a significant performance and feature boost that we haven't seen in years.
Ryzen 7 was set as HEDT, directly against Intel's HEDT competition. This is a new socket and a new tier over and above that; not to mention that Intel will be offering its HCC die on a consumer platform for the first time, increasing the consumer core count by 8 in one generation, which has never happened before. If what used to be HEDT is still HEDT, then this is a step above.
Plus, AMD call it something like UHED internally. I prefer SHED.
I think AMD has the better division of what is and isn't HEDT. Going forward, Intel really should follow suit and make it 8+ cores to get into the HEDT lineup, as what they have done this time around is just confusing and a bit goofy.
"AMD could easily make those two ‘dead’ silicon packages into ‘real’ silicon packages, and offer 32 cores"
That's exactly what the already-announced EPYC parts are doing, is it not?
Great review otherwise. These parts are intriguing, but I don't personally have a workload that would suit them. Excited to see what sort of innovation this brings about though; it's about time Intel had some competition at this end of the market.
Presumably a relevant difference being that such a 32C TR would have the use of all of its I/O connections, instead of having some of them used to connect to other EPYC packages. OTOH, with a 32C TR, how the heck could mobo vendors cram enough RAM slots onto a board to feed eight channels? Either that, or stick with 8 slots and fiddle the channel connections to match the core count in a suitable manner, e.g. one slot per channel for 32C, two per channel for 16C, etc.
Who knows whether AMD would ever release a full 32C TR for the TR4 socket, but at least the option is there, I suppose, if enough people would happily go for a 32C part (depends on the task).
Considering the TDP of just the 16C chip, going 32C would hurt clock speeds badly, unless they were able to keep the turbo speeds when only 16 or fewer of the cores are loaded. The 32C server parts seemingly have much lower max turbo speeds even when lightly loaded.
Here are a few potential benchmark ideas that I'd like to see.
- ZBrush. (High-resolution DynaMesh/projection, ZRemesher, or Decimation Master.)
- Unreal Engine 4. (Lightmap baking on a sample map, perhaps one from Unreal Tournament 4. Perhaps compilation of the engine itself.)
- XNormal. (Ambient occlusion texture baking.)
- Some sort of database benchmark for the poor sods doing web development.
- Some sort of video editor benchmark.
Luxmark OpenCL: "Though it's interesting just how cost the 10-thread Core i9-7900X gets here, likely due to a combination of higher IPC and clockspeeds."
When we initially ran the 7900X and other CPUs, Luxmark was failing for no obvious reason. We narrowed down the reason a few weeks ago - it doesn't like running when a GTX 950 is installed for detection reasons. We have since moved to RX 460s being used during our CPU benchmark runs.
The only thing I might take exception to is the notion that prosumers have never seen NUMA before, since both the Z9-PE and Z10-PE offer it. I myself had the Z9-PE with a pair of Sandy Bridge Xeons.
I've lost count of how often I've read the spec pages for those mobos, etc. I've talked to so many prosumers who would ideally buy one of those boards, but the Xeon costs were prohibitive.
Yes, you are right, TR is not the best gaming rig. Maybe this article misses the mark in even trying to assess TR as a gaming machine. It is fair to point out that Intel will be better, though not when compared on price. But this article made me think TR was meant to be a gaming CPU; Ryzen is meant for that. When games support 32 threads that will change, but not soon. This is a workstation-class machine. It is almost like buying a Xeon to run games.
I hope AMD tailors its PR to make this clear. Focusing any hype on gaming where it's obviously not warranted could miss a lot of potential very suitable buyers.
What does bug me, though, is the absence of reviewers mentioning that while Intel's 4-core CPUs do well for gaming right now, isolated to just that task, they have nothing in reserve to handle rapidly growing areas such as live streaming of games. GN showed a huge difference in viewer experience when streaming games on a 1700 vs. a 7700K.
It's a consumer CPU, which is something AMD emphasized in our briefings and again when we asked them about where they are pitching the processors. If users want Zen for datacenters, EPYC exists. We have benchmarks for those too.
It's a heck of a stretch to outright call it a consumer CPU when it has so many pro-type features such as ECC support. Sure it's aimed at consumers, but it's definitely aimed at prosumers as well, and I'd be amazed if at least a few fully pro shops didn't buy some, even if only to test.
AMD have historically been pretty cool about ECC support - and professionals such as video and rendering types appreciate it, as RAM wobbles on 24hr+ rendering workflows become one less thing to worry about.
It's not that they're subtly targeting server markets or owt; they just know that a substantial minority of their client base appreciates being able to use ECC memory without having to quadruple the cost of the base hardware, as you do with Intel stuff.
I would like to see software like ANSYS Structures or ANSYS Fluent benchmarked, but after talking with ANSYS hardware support, they're still waiting to see how EPYC performs on base hardware. Building systems for ANSYS using Intel parts involves obscene amounts of money, so if you can save any money for the same performance, a myriad of companies would be interested.
That's a pity; as I understand it, ANSYS is a task that gobbles RAM by the truckload, so it'd be an interesting use case for analysing memory/cache behaviour.
Many years ago, one ANSYS user told me his ideal system would be single CPU with 1TB RAM.
What did you use to test max power consumption? Prime95 small FFTs? I'd love to see some perf/watt comparisons to the 7900X in the future, GamersNexus has some interesting results in that regard with the 1950X behaving significantly better at stock than the 7900X, both doing more work for less power.
You did something wrong with the Chromium build benchmark. It has absolutely no cross-core communication and scales almost linearly with core count. So you must have misconfigured something or hit a glitch. I work on Chromium professionally, and we can normally speed it up 2x by distributing compile jobs all the way to another machine, or by 10x by distributing them to 10 other machines. Not scaling to more cores on the same CPU makes no sense.
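For what it's worth, a minimal sketch of how one could check that scaling, assuming a Chromium checkout with an out/Default directory already generated via gn and with ninja on the PATH (the job counts and target are illustrative):

```python
# Time clean Chromium rebuilds at several -j levels to see how close
# the build comes to linear scaling with parallelism.
import subprocess
import time

BUILD_DIR = "out/Default"  # assumed build directory
TARGET = "chrome"

for jobs in (4, 8, 16, 32):
    # Clean first so every run does the same amount of work.
    subprocess.run(["ninja", "-C", BUILD_DIR, "-t", "clean"], check=True)
    start = time.time()
    subprocess.run(["ninja", "-C", BUILD_DIR, f"-j{jobs}", TARGET], check=True)
    elapsed = time.time() - start
    print(f"-j{jobs}: {elapsed / 60:.1f} min ({86400 / elapsed:.1f} compiles/day)")
```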
We're using a late March build based on v56 with MSVC, using the methodology described in the ELI5, and implementing a complete clean rebuild every time. Why March v56? Because when we locked down our suite a few months back to start testing Windows 10 on several generations of processors, that's where it was at. 50 processors in, several hundred to go...
Then again, that's creating obsolete data for the sake of "our benchmark suite". How about running the for-comparison's-sake bench and another run with the latest version? Not that difficult.
I don't know why you're being criticized as an Intel shill, Ian. I'll probably be purchasing Threadripper, and I thought it was a good review.
One thing I would like to see is some kind of audio benchmark. It's pretty well established at this point that there are latency considerations with Threadripper, and it would be useful to know how this affects DAWs with high track counts, for example.
The review is unbalanced, aiming mostly at gamers. You probably understand what TR is about, but not all do. This article does not focus on what TR is good at.
You do realise how many requests we actually got for game tests? This is our regular CPU Gaming test suite, built from readers' suggestions: fast and slow GPUs, AMD and NVIDIA, 1080p and 4K. The data is there because people do request it, and despite your particular use case, it's an interesting academic exercise in itself. The CPU benchmarks are still plentiful: around 80 tests that take 8-10 hours to run in total. If you want to focus purely on those, then go ahead - the data is meant to be for everyone, whatever focus they are interested in.
I think a simple comment before the gaming test suite like...
"We show gaming tests for (the reasons you list above) but if you are looking at buying Threadripper for gaming alone, you are really missing the point of it." would go a long way to allaying concerns. You could cap it with what it would do well: Threadripper can really excel at running multiple VM's, servers, compiling, encoding etc and at the same time running a game while waiting. Or some such.
That's what appears to be missing to me, instead of just dumping tons of gaming results, putting it all into context of the strength of the processor. Just my 2 coppers
A comment like that may have helped prevent criticism, but if included it would also add weight to the suggestion that the review should have included a greater proportion of threaded workloads.
If it's priced in the existing traditional desktop segment, it's a traditional desktop part. If it's priced in the existing HEDT segment, it's an HEDT part.
That suggests there are somehow such things as "traditional" price points, whereas in reality Intel (without competition) has been moving these all over the place (mostly up) for many years. How can such tech have a traditional anything when its base nature evolves so fast? Look at what Intel has done to its own pricing as a result of Ryzen, and now TR, implementing a major price drop at the 10-core level compared to BW-E (Intel's ARK shows the 7900X being 42% cheaper after a gap of just one year).
When disruptive competition occurs, there's no such thing as traditional. To me, traditional is another way of disguising tech stagnation.
An HEDT part is also a workstation, and with the amount of cores/IO AMD also made this CPU a proper server chip for small businesses that don't need exotic things like remote-management LAN or dual 10G NICs.
AMD disrupted the market and erased many lines, same as with EPYC's 32 cores on a single socket, erasing the need for dual sockets for many people (while TR will scale to 32 cores in the future, EPYC will go to 64 cores).
Ian, can you please add a paragraph to the review that describes the "99th percentile" metric for games? I'm having a hard time understanding it. Thanks.
A game benchmark result gives you the amount of time it takes to render each frame - 16ms for one frame, 18ms for the next, etc. In the past people used to quote minimum frame rates, i.e. the absolute minimum, which can sometimes be off due to a sudden spike caused by something else on the system kicking in, and the data would not be representative.
To get around this, we use the 99th percentile. We take all the frame times, put them in numerical order, and take the value at the 99th percentile as our data point. This means that 99% of the frame times / FPS will be better than this value during normal gameplay.
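A minimal sketch of that computation (the frame times are made-up numbers):

```python
# Sketch: derive the 99th-percentile frame time from per-frame render times.
# The numbers below are made-up milliseconds, purely for illustration.
frame_times_ms = [16.1, 16.4, 15.9, 18.2, 16.0, 33.5, 16.2, 16.3, 17.1, 16.0]

ordered = sorted(frame_times_ms)                  # slowest frames at the tail
idx = min(len(ordered) - 1, int(len(ordered) * 0.99))
p99 = ordered[idx]                                # 99% of frames beat this time

print(f"99th percentile: {p99:.1f} ms ({1000.0 / p99:.1f} FPS)")
```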
I can see a use case in an IT lab for a non-mission-critical VM server. I suggest considering a test of whether the CPU is well behaved under commonly used hypervisors.
Why are all the AnandTech results different and inferior to the results listed at TechSpot and Ars Technica if the CPUs and the benchmarks were the same?!
How come AMD aces all the benchmarks on these reputable sites, but the results are all over the place on AnandTech?!
Don't think I'm bashing AnandTech for a second. I've been reading it since 2001 and even if I'd get the impression it is a bit biased, I will continue reading it. Everybody has the right to be biased and I have enough judgement to make my own opinion about a subject.
I suspect there was some issue with the settings or the motherboard, because even the power consumption results are weird. I know that the results listed try to evaluate the chip power consumption, but still the results seem very wrong.
Actually, in these Power Consumption tests the reader will completely get the WRONG IDEA, because the Intel X299 systems consume way more power than AMD's Threadripper X399 platform .
Also, no mention of the difference in how the platforms handle temperatures?! How does X399 compare to the steak grill called X299? This is a very, very serious issue that should be discussed in the review.
If the AMD solution is more power efficient, stable and reliable, the readers should be able to read about it in a review.
Sorry to ask so many questions, I know it was a long week for you Ian.
Thank you for the review and I hope we do get a Part 2 or 2.0 :)
Most of our benchmarks use real-world inputs, aside from the synthetics. Our Chromium compile test, for instance, uses a different code base and a different compiler to Ars. Our WinRAR test and video editing tests use our own datasets. Our game tests use settings that we've chosen and are unlikely to align with others'. That's why we document a lot of our testing.
Also, on the power tests. We're probing the CPU power only - not losses caused by the platform power delivery, DRAM, or power supply. We're not taking the difference between idle and load either, we're going off of the numbers that the CPU is telling itself when it comes to power/frequency management for power states, fan profiles and everything else.
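For the curious, a hedged sketch of what reading a CPU's self-reported power can look like; this assumes Linux and Intel's RAPL sysfs interface, not the actual tooling used for the review:

```python
# Illustrative sketch: read the energy counter the package reports about
# itself (Intel RAPL via sysfs; AMD exposes different interfaces). Needs
# read permission on the powercap node; the counter can wrap on long runs.
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package 0, microjoules

def read_uj():
    with open(RAPL) as f:
        return int(f.read())

e0, t0 = read_uj(), time.time()
time.sleep(5)                     # run the workload of interest here instead
e1, t1 = read_uj(), time.time()

print(f"Average package power: {(e1 - e0) / 1e6 / (t1 - t0):.1f} W")
```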
I think that total system power consumption is more important than chip consumption, IMHO.
The user/buyer/client will never use the CPU without the whole platform consuming power as well, except if he drills a hole into it and uses it as a key chain. :)
In the server business, platform power matters the most, and in the mobile world as well. For the home desktop user, what matters is how much he will spend to enjoy that new productivity/gaming system.
The only niche of the market where chip power alone would be of particularly significant importance is supercomputing, where the platform is usually a custom one with a custom power budget that depends directly on the decisions of the designer and the beneficiary.
These two decision-makers, beneficiary and designer, will then choose which chip they want to use in their project.
Otherwise, at first glance (maybe I'm being superficial), I don't see why chip power consumption would need to be measured so exactly and used for comparison.
To CHECK it and see if it stays within the boundaries declared by the manufacturer or goes over, yes. But to use it for comparison?!
Or maybe I'm just used to the days when everybody was always checking and comparing the total system power. :)
Have you considered that if the cooling solution is not perfect - especially since there are no proper coolers for TR yet, just adapted ones - it could skew the results for most of the benchmarks / power figures? TR has an XFR of 4.2 GHz that will not kick in unless the cooling is perfect. I saw this on Hardware Canucks, I think, where their TR was below the advertised values and they mention it.
GamersNexus even has a segment on YouTube testing the results of different application methods of thermal paste, and it showed that even this matters a lot in the case of this CPU's cooling solutions.
Yes. That is a much more appropriate and comprehensive test.
We often talk about using a VM tool to do our heavy work, despite it reminding us of the mainframe era :) But today it makes sense. Even in a shared work environment, you can share the costs of a Threadripper machine and run 3 or 4 or more VMs.
And then everything is shared: hardware costs, maintenance, upgrades, software, repairs, power consumption and so on.
You just come to the office with your laptop. You plug into the 27" secondary desktop display, connect to your VM and you have 2 to 32 computing threads at your disposal.
So yes, concurrent computing loads in virtual machines make for a very good and comprehensive means of benchmarking, IMHO.
Good review, but I see a lot of tests optimized for 2-4 cores. I also want a test with gaming, rendering and compression (or other intensive tasks) running at the same time; this would clearly differentiate this beast from other 4-6 core CPUs. Unfortunately for Intel, its greed really shows now. Although Core still has about 5-10% more IPC than Ryzen, the power consumption per core is about 5-15% higher (at a lower frequency), and with 10-18 cores this really shows. They had a very competitive tick-tock strategy when they had absolutely no competition, and now, after more than three years, they are still stuck on 14nm. If they had been smarter and built by now just one fab on 8 or 10 nm for the many-core CPUs, things would be simpler for them today. On 8 nm, Skylake-X would have allowed 18 cores at 3.2-3.6 GHz, not the 2.6 they are doing now. So they save 3-4 billion dollars not building an 8nm fab, but will lose more than that when the enthusiast market sides with AMD. Please be smarter in the future, Intel; Samsung and TSMC already have 8 nm fabs while you...
Ryzen 7 has 32 PCIe 3.0 lanes on die, with one 8-lane controller disabled, leaving 24 lanes enabled. Four are then reserved for the chipset, leaving 20 PCIe lanes usable for direct connectivity.
re: page 1's "AMD recommends liquid cooling at a bare minimum" - if liquid cooling is the "bare minimum", what cooling is considered "pretty good"? Are we all supposed to be readying liquid nitrogen setups?
Great review as always :) - So it's effectively a great all-around CPU for streaming, gaming, and rendering in programs which utilise more than 8 cores... I think that's a win, especially with ECC memory support.
I had a lot of hope for Threadripper as a development machine... but when the 16-core TR loses so badly to the 10-core 7900X or even the 8-core 7820X in compilation, there is something seriously wrong with the picture. Too much emphasis on FP performance nobody at home needs all that much (except in games, where it is provided by the GPU and not the CPU anyway)? Maybe AT's tests are wrong; say, did they fail to specify /m for MSBuild?
Hiding the fact that the CPU is NUMA from both the OS and from software is a very bad idea. Thread migration out of a core is a disaster all by itself, but thread migration to different memory and especially a different L3 cache (as big as it is) should never be attempted; see the pinning sketch below.
Basically, at this point I would take the 7820X over the TR 1950X for every task, with similar MT performance in the vast majority of tasks not offloadable to a GPU, better mixed-load performance and much better ST performance. And I would save $400 and electricity costs in the process.
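To make the pinning point concrete, a minimal sketch, assuming Linux and assuming (hypothetically) that logical CPUs 0-15 map to the first die:

```python
# Hedged sketch: keep a process on one die so its working set stays in that
# die's L3 and local memory. Linux-only; the CPU-to-die mapping below is an
# assumption, not a guaranteed Threadripper layout.
import os

die0_cpus = set(range(16))            # hypothetical: die 0 = logical CPUs 0-15
os.sched_setaffinity(0, die0_cpus)    # pin the current process (pid 0 = self)

print("Now restricted to CPUs:", sorted(os.sched_setaffinity(0)))
```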
I know a lot of hard work and long hours went into this, so I want to thank you, Ian, for taking the time. Minus all the bickering and whining in the comments, some good points were made. I've been reading this site since 2000 and appreciate all the knowledge it has given me.
How come WinRAR is faster on the 10-core Broadwell than on the 10-core Skylake? What did they change in Cinebench going from 10 to 11.5? Threadripper is the faster CPU in Cinebench 10, but in the newer one it is not. Then again, Cinebench 15 shows TR as the faster CPU. Is this benchmark reliable?
How come Chromium compilation is so slow? Others have pointed out that they get much better scaling (linear speedup). That makes sense, because compilation basically consists of launching isolated processes (compiler instances). Is this related to the segfaulting problem under GNU/Linux systems?
For encoding I would start to use FFmpeg when benchmarking this many cores. In my brain lies a memory of FFmpeg being faster than HandBrake for the same number of cores. Maybe the GUI loop interrupts the process in a performance-unfriendly way; too much overhead (see the FFmpeg timing sketch below). HPC workloads can suffer even from the network driver raising too many interrupts (hence Linux tickless configurations).
I have read the SYSmark results and I find it strange that TR's media results are slower than its data results, with TR slower than Intel in media and faster than Intel in data. Isn't SYSmark from BAPCo? (http://www.pcworld.com/article/3023373/hardware/am... You already point it out in the article, sorry.
How come the R9 Fury in Shadow of Mordor has AMD and Intel CPUs running consistently at two different frame rates (~95 vs ~103)?
The same but with the GTX 1080. Both cases happen regardless of the Intel architecture (Haswell, Broadwell and Skylake all have the same FPS value).
What happens with the NVIDIA driver in Rocket League? A bad caching algorithm (TR has more cores/threads -> more cache available to store GPU command data)? You say you had issues, but what are your thoughts? How come GTA V has those Under 60 and Under 30 FPS graphs, given that the game is available for PS4 and Xbox One (it has already been optimized for a two-CCX CPU, at least there is a version for that case)? Nevertheless, with NVIDIA cards, 2 seconds out of 90 is not that much.
What I can think, given the bad scaling, is that all these benchmarks are programmed using threading libraries from the "good old times". In some cases there is architecture-specific targeted code. I would also point to the small datasets being used. And I would not make a case out of a benchmark programmed with code that has false sharing (¡:O!)
Currently for gaming, it seems that the easiest way is to have a Virtual Machine with PCIe passthrough pinned to one of the MCM dies.
As a suggestion to Anandtech, I would like to see more free (libre) software being used to measure CPU performance, compiling the benchmarks from source against the target CPU architecture. Something like Phoronix. Maybe you could use PTS (Phoronix Test Suite).
Positive things: Threadripper stays under its TDP; Intel is more power hungry, and the Intel 16-core might go through the roof in power consumption. Good gaming performance: Intel is generally better, but TR still offers a beefy CPU for that too, losing only a few frames. Strong rendering performance. Strong video encoding performance.
When you talk about IPC, it would be useful to measure it with profiling tools, not just "points", "milliseconds" and "seconds". Seeing how these benchmarks do not scale much beyond 10 cores, you might realize software has to get better.
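For reference, a minimal FFmpeg timing sketch along the lines suggested above; the input file name is a hypothetical placeholder, and it assumes ffmpeg is on the PATH:

```python
# Hedged sketch: time a CPU-only x264 transcode with FFmpeg. Writing to the
# null muxer keeps disk I/O out of the measurement. "input.mkv" is a
# hypothetical placeholder file.
import subprocess, time

cmd = ["ffmpeg", "-hide_banner", "-i", "input.mkv",
       "-c:v", "libx264", "-preset", "slow", "-f", "null", "-"]

start = time.time()
subprocess.run(cmd, check=True)
print(f"Encode took {time.time() - start:.1f} s")
```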
Ian, a query about the CPU Legacy Tests: why do you reckon the 1920X beats both the 1950X and the 1950X-G in CB 11.5 MT, yet the latter two win out in CB 10 MT? Is there a max-thread limit in v11.5? Filiprino asked much the same above.
"...and so losing half the threads in Game Mode might actually be a detriment to a workstation implementation."
Isn't that the whole point though? For most workstation tasks, don't use Game Mode. There will be exceptions of course, but in general...
Don't read the comments. Also, a lot of the "complaints" are read by Ryan and he actually addresses them and his articles improve as a result of criticism. He's never been bad, but you can see an ascension in quality over time, along with his partaking in critical commentary. IOW, we don't really need a referee.
Wait a second, according to AMD and all the other articles about the 1950X and Game Mode, game mode disables all the physical cores of one of the CPU clusters and leaves SMT on, so you get 8 cores and 16 threads. It doesn't just turn off SMT for a 16 core / 16 thread setup.
You have written that "This socket is identical (but not interchangeable) to the SP3 socket used for EPYC,". Please, clarify. I was under the impression that if you drop an epyc in a threadripper board, it would disable 4 memory channels & 64 PCIe lanes as those will simply not be wired up.
No, AMD have stated that won't work. It's probably not hardware incompatible, but they probably put microcode on the CPUs so that if it doesn't detect it's a Ryzen CPU it doesn't work. There might also be differences in how the dies are wired up on the fabric, since it's 2 dies instead of 4. Remember, Threadripper has only 2 physical dies that are active; on EPYC all processors are 4 dies with cores on each die disabled, right down to the 8-core part (2 enabled on each physical die).
Wish there was an edit function... but to add to that: if you pop in an EPYC processor, it might go looking for those extra lanes and memory buses that don't exist on Threadripper boards, hence causing it not to function.
This is the second article where you've tried to start an acronym called SHED (Super High End Desktop) in referring to AMD Threadripper systems. You also say that Intel systems are HEDT (High End Desktop) when in all reality both AMD and Intel are HEDT. It is just that Intel has been keeping the core count low on consumer systems for so long you think that anything over a 10 core system is unusual.
AMD is actually producing a HEDT CPU for $1000 and not inflating the price of a HEDT CPU and bleeding their customers like Intel was doing with the i7-6950X CPU for $1750. HEDT CPUs should cost about $1000 and performance should increase with every generation for the same price, not relentlessly jacking the price as Intel has done.
HEDT should be increasing in performance every generation and you prove yourself to be Intel biased when something finally comes along that beats Intel's butt. Just because it beats Intel you want to put it into a different category so it doesn't look like Intel fares as bad. If we start a new category of computers called SHED what comes next in a few years? SDHED? Super Duper High End Desktop?
There's a good reason for that. Intel is not just inflating the cost because they want to. It literally costs them much more to produce their chips because of the monolithic die approach vs AMD's modular approach. AMD's yields are much better than Intel's at the higher core counts. Intel will not be able to match AMD's prices and still make a significant profit unless they also adopt the same approach.
That's not how free markets work. Companies will price any given product at their maximum profit. If they can sell 10 @ $2000 or 100 at $1000 and it costs them $500 to produce, they would make $15,000 selling 10 and $50,000 selling 100 of them. Intel isn't filled with idiots, they priced their chips at whatever they thought would bring the maximum profits. The best way for the consumer to protest prices that we believe are higher than the "right" price is to not buy them. The companies will be forced to reduce their prices to find the market equilibrium. Stop complaining about Intel's gouging, vote with your wallet and buy AMD. Or don't, it's up to you.
Honestly, the review is somewhat disappointing. For a pro-sumer product, there is no MySQL/PostgreSQL benchmark. No compilation test under Linux environment. Really?
"In an ideal world, all software would be NUMA-aware, eliminating any concerns over the matter."
Why? This is an idiotic statement, like saying that in an ideal world all software would be aware of cache topology. In an actual ideal world, the OS would handle page or task migration between NUMA nodes transparently enough that almost no app would even notice NUMA, and even in an non-ideal world, how much does it actually matter? Given the way the tech world tends to work ("OMG, by using DRAM that's overclocked by 300MHz you can increase your Cinebench score by .5% !!! This is the most important fact in the history of the universe!!!") my suspicion, until proven otherwise, is that the amount of software for which this actually matters is pretty much negligible and it's not worth worrying about.
AnandTech's power and compiling tests are completely out of line with other reviewers' results. Still hiding poor Skylake-X gaming results. Most of the tests are completely outside this 16-core CPU's target workloads. 2400 memory used for the tests. Absolutely zero perf/watt and price/perf analysis.
Intel bias is over the roof here. Looks like I'm done with Anandtech.
I don't comment much (if ever), but I have to say one thing... I miss Anand's reviews. What happened to AnandTech?
Whatever happened to IPC testing, when IPC used to be compared on a clock-for-clock basis? I remember the days when IPC meant Instructions Per Clock, and this website and others would even downclock/overclock processors to a nominal clock rate to compare each processor's IPC. Hell, even Bulldozer, a high-clock architecture, was downclocked to compare its "relative IPC" at a nominal clock rate.
And to add to what others are saying about the bias in the review... honestly, I have been feeling the same way for some time now. Must be because AnandTech is at the "MERCY" of their parent company Purch Media... When you are at the mercy of your advertisers, you have no choice but to bend the knee, or even worse, bend over and do as they say "or else"...
Thanks for taking the time to create this review, but AnandTech to me is no longer AnandTech... What others say is true: this place is only good for the Forums and the very technical community that is still sticking around.
Downclocking and overclocking processors to replicate a different processor within the same family can lead to inaccurate results, as IPC can and does rely (at least to a degree) on cache size and structure. I get what you are saying, but I think Ian's work is pretty damn good.
>Move on 10-15 years and we are now at the heart of the Core Wars: how many CPU cores with high IPC can you fit into a consumer processor? Up to today, the answer was 10, but now AMD is pushing the barrier to 16
I don't personally think of Threadripper or parts like Broadwell-E as being consumer level parts.
For me, the parts most consumers have been using for the last decade have been Intel parts with two cores, or four cores at the high end.
It's been a long period of stagnation, with cutting power use on mobile parts being the area that saw the most attention and improvement.
Agreed, the HEDT platforms are not for the average consumer; they are for enthusiasts, professional workstation usage, and some other niche uses.
When the frequency war stopped, the IPC war started. We should have had the core-count competition 5-8 years back, since IPC stagnated to a couple percent gains year on year.
The AMD Ryzen CPU is not fast enough. Apple is not ready for AMD Ryzen CPUs, sorry AMD. I love AMD but I hate Intel, even though I have a Skylake-based MacBook Pro. :(
One small correction: Ryzen has 24 PCIe lanes, not 16. It has 16 for graphics only, but saying only 16 may make people (like me) wonder if you can't run an NVMe drive at x4 and still have the graphics card at x16, which you totally can do.
This is under the Feeding the Beast section, btw, where you said "Whereas Ryzen 7 only had 16 PCIe lanes, competing in part against CPUs from Intel that had 28/44 PCIe lanes,"
He already answered this question/statement to someone else. there are 20 lanes from the CPU, 16 of which are available for graphics. I don't think his way of viewing it seems accurate, but he has stated that this is how PCIe lanes have been counted "for decades"
Nice review, btw! Yes, going all the way back to the Athlon and the triumph of DDR SDRAM over RDRAM, and the triumph of AMD's x86-64 over Itanium (Itanium having been Intel's only "answer" for 64-bit desktop computing post the A64 launch--other than to have actually paid for and *run* an ad campaign stating "You don't need 64-bits on the desktop", believe it or not), and going all the way back to Intel's initial Core 2 designs, the products that *actually licensed x86-64 from AMD* (so that Intel could compete in the 64-bit desktop space it claimed didn't exist), it's really remarkable how much AMD has done to enliven and energize the x86 computing marketplace globally.
Interestingly enough, it's been AMD, not Intel, that has charted the course for desktop computing globally--and it goes all the way back to the original AMD Athlon. The original Pentium designs--I owned 90MHz and 100MHz Pentiums before I moved to AMD in 1999--were the high point of an architecture that Intel would *cancel* shortly thereafter simply because it could not compete with the Athlon and its spin-off architectures like the A64. That which is called "Pentium" today is not...;) Intel has simply continued to use the brand.
All I can say is: TGF AMD...;) I've tried to imagine where Intel would have taken the desktop computing market had consumers allowed the company to lead them around by the nose, and I can't...;) If not for AMD *right now* and all the activity the company is bringing to the PC space once again, there would not be much of a PC market globally. But now that we have some *action* again and Intel is breaking its legs trying to keep up, the PC market is poised to break out of the doldrums! I guess Intel had decided to simply nap for a few decades--"Wake me when some other company does something we'll have to compete with!" Ugh.
Can you test this CPU using Windows Server? This is an MCM CPU that looks like 4 CPUs attached to each other. I think Windows 10 Pro can't get the most out of this CPU unless we get a Windows 10 Pro for Workstations.
Off subject: Having just read the article about nVidia's meteoric rise in profits, some of which directly attributed to high end "gamers" video cards purchased expressly for coin mining, I wonder if it and AMD are going to manufacture CPU's and GPU's specifically for that purpose and how that will affect the price of said parts...
Hi Ian, thanks for doing this article. It's important to see all possible outcomes because in the real world, anything is possible. I do have one question that has me puzzled. Why do you say that Threadripper only has 60 PCI-Express 3.0 lanes when it's been reported several times by everyone, including official AMD releases (and also including by you), that it has 64? I thought it might be just a typo, but you state it in several places and in all of your specs. This is not a new thing, so is there something about Threadripper that we don't know?
"2P" system = two processor system, i.e. a system with two physical CPU sockets and two CPUs installed.
In the past a 2P (or 4P) system was really handy to get more cores especially back when 1 core, 2 core, and eventually 4 core CPUs were high end. In the consumer realm, way back, the Pentium II was the first 2P system I ever built and people even did it with Celerons as well: http://www.cpu-central.com/dualceleron/ the Opterons were also fun for dual or quad processor systems including some SFF options like the ZMAX-DP socket 940 system. https://www.newegg.com/Product/Product.aspx?Item=N...
I think the Cinebench 11.5 benchmarks are incorrect for both Threadrippers. Threadripper is almost equivalent to my quad Opteron (48-core) system, which scores 3229cb in R15... and 39.04 in Cinebench 11.5. If I downclock all cores to approximately 2.9 GHz, I end up with around 3000cb in R15 and in the 36-point range for 11.5.
The fact that you are only scoring in the 18 range makes me wonder if you had the Threadripper set in some mode where it was only using 8 of the 16 cores. Can you verify this... please? Thanks :) I would think you should see scores in the 36 range with 11.5.
Other than this minor detail... great article.
PS: I've had the same issues with software not liking NUMA on my quad Opteron system as well... Cinebench especially does not like it.
Hi, Ian. Thanks for the review. As usual it was in-depth and informative. I'm in the middle of building a 1700X system now based on your review. I wanted to say you handle all the naysayers, gloomy Gusses and negative Nancies with aplomb! I think most people's own slant colors how they see your reviews. I appreciate the consistency of what you do here. I took a look over at Ars, and they could be called AMD shills for all the positive things they say... Keep it up!
Too many plebs complaining about a lack of 3D rendering benches. The fact is a 16-core CPU is still much slower than GPUs at rendering. I'll be getting a 1950X, but it won't even be used for rendering when I know for a fact that my two GPUs will still be much faster in things like Blender. Even a single high-end GPU will still easily beat the 1950X at these tasks.
Seems like immature moron fanboys are crying over this stuff because they just want to see AMD at the top of the charts.
On page 1, does Ryzen use an AMD implementation of SMT or Hyper-Threading (i.e. licensed from Intel)? I've been under the impression it's the former, and referring to SMT as Hyper-Threading in this instance is incorrect. Intel's was not the first or the only way to implement SMT.
Error in the Dolphin benchmark description: "Results are given in minutes, where the Wii itself scores 17.53 minutes." should read that results are given in seconds.
On the last page it states "On the side of the 1920X, users will again see more cores, ECC support, and over double the number of PCIe lanes compared to the Core i7-7820X for $100 difference."
According to the accompanying chart it's a ~$200 difference. Either the chart is wrong or that statement.
I picked up an i9-7900X at a local Micro Center for $899 this week, and it is running stable at 4.6 GHz. How well does the Ryzen overclock? My Blender BMW score was 181 seconds. I just opened the file and clicked Render.
Ian, how about testing mobile CPUs - for games and for office work? Aren't mobile CPUs selling in much larger numbers than desktop ones these days? I can't find a single benchmark comparing the i5-7300HQ vs i7-7700HQ vs i7-7700K showing the difference in productivity workloads - and not just for rendering pretty pictures, but also for more specific tasks such as compiling software etc.
I also would like to see some sort of comparison of the new generation to all generations up to 10 years back in time. I'd like to know how much performance has increased since the age of Nehalem. At least from now on there should be a single test to display the relative performance increase over the last few generations. The average user doesn't upgrade their PC every year; the average user maybe upgrades every 5 years, and it is really difficult to find out how much of a performance increase one would get with an upgrade.
Hey Ian, you've been talking about AnandTech's great database where we can see all the cool info. Well, according to your database the six-core Phenom II X6 1090T is equally powerful compared to the 16-core Threadripper!!!!!!! http://www.anandtech.com/bench/product/1932?vs=146 With those sorts of numbers why would anyone plan an upgrade? (And there is also only one metric displayed, strange!) Not to play the Intel card on you as others do, but this is a serious problem for at least the AMD lineup of processors.
Since two-pass encoding requires both passes to complete to be usable, getting an overall FPS score seems relevant. Alternately, using time to completion would present the same information in a different manner. Though it would be difficult to extrapolate these results to estimate performance in other encodes without also posting the number of frames encoded.
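Roughly, the combined figure is just total frames over total time; a quick sketch with made-up numbers:

```python
# Back-of-envelope: an "overall FPS" for a two-pass encode is total frames
# over total time, which reduces to a harmonic-mean-style combination.
# Frame count and per-pass rates below are made-up illustrative numbers.
frames = 9000
pass1_fps, pass2_fps = 240.0, 80.0

total_time = frames / pass1_fps + frames / pass2_fps
overall_fps = frames / total_time   # == 1 / (1/pass1_fps + 1/pass2_fps)

print(f"Overall: {overall_fps:.1f} FPS over {total_time:.0f} s")
```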
Take all those Intel FPS performance counters and multiply them by .7 and you have what their chips actually run at without a major security flaw in them.
I'm having a hard time trying to swallow the "Threadripper is a consumer focused product" line considering the prices to "consume" it: $550 for the MB, $550 for the TR 1900X ($800 or $1000 for the others is just dreaming), then the RAM. The MB (at least the Asus one) should be $200 less, but I get it, they are trying to squeeze as much as possible from the... consumers. Now don't get me wrong, and I mean no offence to the rich ones among you, but those CPUs are for workstations. WORK, not gamestations. Meaning you would need them to help you make your money, faster.
Idk, I use my 1920x for gaming and working, and... really everything. Second best CPU on the market with 1950x beating it out unless you can't get enough cooling.
HAHAHAHA xD Hope you invested in AMD despite your comment. Looks like my gut instinct in buying AMD since 2009 was right. Intel chips have a security flaw that, when fixed for series 8 and 9, will remove approximately 30% of performance...
So who has the best chip now? Take 30% off any Intel benchmark against its then AMD rival and see which one would have been better.
NUMA appeared in Windows machines in 1998/1999 with the SGI Visual PC (which, yes, was a windows machine) and iirc, a workstation from Intergraph about the same time.
Zoeff - Thursday, August 10, 2017 - link
Yeeeees! Thanks for the review! I was hoping there'd be an embargo lift at this hour. :D
Zingam - Sunday, August 13, 2017 - link
The best CPUs for MineSweeper in 2017 in a single article!!!!
NikosD - Monday, August 14, 2017 - link
Anandtech is simply wrong regarding Game mode or "Legacy Compatibility Mode" as you prefer to call it and make jokes about it. It seems that you don't know what ALL other reviewers say that Game mode doesn't set SMT off, but it disables one die.
So, Threadripper doesn't become a 16C/16T CPU after enabling Game mode as you say, but a 8C/16T CPU like ALL other reviewers say.
Go read Tom's Hardware which says that Game mode executes "bcdedit /set numproc XX" in order to cut 8 cores and shrink the CPU to one die (8C/16T) but because that's a software restriction the memory and PCIe controller of the second die is still alive, giving Quad Channel memory support and full 60+4 PCIe lanes even in Game mode.
And you thought you are smart and funny regarding your Game mode comments...
monglerbongler - Tuesday, July 10, 2018 - link
real renderers buy epyc or xeon. Either they have the money because its corporate money, they have the money because it comes from plebs paying someone comission/subscription money, or they have the money because they are plebs buying pre-built workstations.
craptasticlemon - Wednesday, September 13, 2017 - link
Here's the real Threadripper review: AMD thrashes Intel i9 in every possible way, smushes it's puny ass into the dirt, and dances on the grave for the coup de gras. It is very entertaining to watch the paid Intel lackeys here try to paper over what is clearly a superior product. Keep up with the gaming scores guys, like anyone is buying this for gaming. I for one am looking forward to those delicious 40% faster render times, for the same price as the Intel space heater.
alysdexia - Thursday, April 18, 2019 - link
its, shit-head; swifter
Dr. Swag - Thursday, August 10, 2017 - link
In paragraph two you say Ryzen 3 has double the threads of i3, I think you mean to say double the cores :)
IanHagen - Thursday, August 10, 2017 - link
Not trying to nitpick or imply anything but... There is a logical reason for Threadripper getting five pages of gaming performance review and Skylake-X not even appearing on the charts more than a month after it was reviewed?
Ian Cutress - Thursday, August 10, 2017 - link
Bottom of page one.
IanHagen - Thursday, August 10, 2017 - link
With all due respect Mr. Cutress, "circumstances beyond our control" and "odd BIOS/firmware gaming results" didn't prevent anyone from bashing Ryzen for its gaming performance on its debut.
Ian Cutress - Thursday, August 10, 2017 - link
We didn't post gaming performance for Ryzen at launch either, for similar reasons.
bongey - Thursday, August 10, 2017 - link
Stop lying, you commented on gaming performance in your conclusion, without even benchmarking it in gaming. That is much worse.
Adul - Thursday, August 10, 2017 - link
How is that lying? They did not post gaming benchmarks. That is what he said. What was mentioned in the conclusion was not part of his statement.
Integr8d - Thursday, August 10, 2017 - link
It's called lying by omission...
James S - Friday, August 11, 2017 - link
Ian did not lie even by omission. They clearly stated in the Ryzen conclusion and clearly stated in the Skylake-x conclusion why they didn't test gaming.
“You can please some of the people all of the time, you can please all of the people some of the time, but you can't please all of the people all of the time”
just4U - Saturday, August 12, 2017 - link
I think it's pretty ignorant of someone to state that Ian is lying in his own comments about articles he has written....
alysdexia - Thursday, April 18, 2019 - link
Omission isn't lyging; it's self-censorship.
Gothmoth - Thursday, August 10, 2017 - link
intel pays good money for advertising at anandtech....
Nfarce - Thursday, August 10, 2017 - link
Listen to you fanyboy crybabies. Tom's and Guru3D did gaming benches too. Go find a Reddit AMD fanboy forum that will give a 100% glowing review of your precious Threadsnapper. You won't find a single credible tech site out there doing it. It's called impartiality. Oh and one more thing ladies: you all are aware that AMD sent the major tech review sites the EXACT same hardware kit for review, right?
tuxRoller - Thursday, August 10, 2017 - link
ThreadSNAPPER? If this was intentional, I assume it's meant to be derogatory, but I'm not sure what it is meant to imply.
bigboxes - Friday, August 11, 2017 - link
You're acting just like the fanboi trolls you claim to loathe.
Alexvrb - Sunday, August 13, 2017 - link
Yeah that was definitely a pot<->kettle comment. LOL.
trivor - Saturday, August 12, 2017 - link
For those of you considering this CPU the fact is you are going to get MUCH better value by choosing one of the Ryzen CPUs - Ryzen 7 1800X is now at around $420 for 8/16 and the 7 1700 (8/16 again) has been on sale for as little as $299. Now, if you need the high thread counts for work on things like content creation and you still want to be able to run games it will be competitive (read: not the king of the hill) when you are running your games. So, if you do more than 50% of your computing time is gaming then go for an Intel CPU OR one of the Ryzen 5/7 consumer CPUs.
Lord of the Bored - Friday, August 11, 2017 - link
Which would explain why the introduction doesn't mention the Netburst fiasco by name.
"The company that could force the most cycles through a processor could get a base performance advantage over the other, and it led to some rather hot chips, with the certain architectures being dropped for something that scaled better. " is, to my eye, actually attention-grabbing in the way it avoids using any names like Preshott, I mean Prescott and only obliquely references the 1GHz Athlon, the Thunderbirds, Sledgehammer, and the whole Netburst fiasco that destroyed the once-respected Pentium name.
But no, let's just say that "certain architectures" were dropped and there were "some rather hot chips" and keep Intel happy. They need that bone right now, though not as much as they did during the reign of Thunderbird and the 'hammers.
Hurr Durr - Friday, August 11, 2017 - link
If the unword "NetBurst" triggers you so much, it's not processors you should spend money on, but shrinks.
Hey, we were an Athlon house. I didn't suffer through the series of mis-steps that plagued Intel. I just thought the sentence was conspicuous in how hard it tried to not name names.
mlambert890 - Saturday, August 12, 2017 - link
"name names"? There are 2 companies that make CPUs. Everyone knows Netburst was Intel P4 era. It's not Watergate ok?Conspiracy obsession has become a legitimate mental illness.
fallaha56 - Thursday, August 10, 2017 - link
handy not to show the new Intel chip struggle, eh?
Breit - Friday, August 11, 2017 - link
Is it possible to bench the Intel CPUs (especially the i9-7900X) in those latency/single-thread tests with Hyper-Threading turned off? This would probably give a better comparison to AMD's Game Mode, and hopefully higher numbers too due to double the cache/registers available to one thread.
cheshirster - Friday, August 11, 2017 - link
Skylake-X sucks at gaming. The 7800X is slower than the 1600X.
verl - Thursday, August 10, 2017 - link
"well above the Ryzen CPUs, and batching the 10C/8C parts from Broadwell-E and Haswell-E respectively"??? From the Power Consumption page.
bongey - Thursday, August 10, 2017 - link
Yep, if you use AVX-512 it will downclock to 1.8GHz and draw 400W just for the CPU alone and 600W from the wall. See der8auer's video titled "The X299 VRM Disaster (en)": all X299 motherboards' VRMs can be run into thermal shutdown under AVX-512 loads with just a small overclock, not to mention AVX-512's crazy power consumption. That is why AMD didn't put AVX-512 in Zen; it is a power consumption monster.
TidalWaveOne - Thursday, August 10, 2017 - link
Glad I went with the 7820X for software development (compiling).
raddude9 - Thursday, August 10, 2017 - link
In Ars' review they have the TR 1950X ahead of the i9-7900X for compilation: https://arstechnica.co.uk/gadgets/2017/08/amd-thre...
In short it's very difficult to test compilation, every project you build has different properties.
emn13 - Thursday, August 10, 2017 - link
Yeah, the discrepancy is huge - converted to AnandTech's compiles per day, the Ars Technica benchmark maxes out at a little less than 20, which is a far cry from the numbers we see here.
Clearly, the details of the compiler, settings and codebase (and perhaps other things!) matter hugely.
That's unfortunate, because compilation is annoyingly slow, and it would be a boon to know what to buy to ameliorate that.
prisonerX - Thursday, August 10, 2017 - link
This is very compiler dependent. My compiler is blazingly fast on my wimpy hardware because it's blazingly clever. Most compilers seem to crawl no matter what they run on.
bongey - Thursday, August 10, 2017 - link
Looks like AnandTech's benchmark for compiling is bunk; it's just way off from all the other benchmarks out there. Not only that, no other test shows a 20% improvement over the 6950X, which is also a 10-core/20-thread CPU. Something tells me the 7900X result is completely wrong, or it has something faster like a different PCIe SSD.
Chad - Thursday, August 10, 2017 - link
All I know is, for those of us running Plex, SABnzbd, Sonarr, Radarr servers (and others) while encoding and gaming all simultaneously, our day has arrived! :)
Ian Cutress - Thursday, August 10, 2017 - link
We checked with Ars as to their method.
We use a fixed late March build around v56 under MSVC
Ars use a fixed newer build around v62 via clang-cl using VC++ linking
Same software, different compilers, different methods. Our results are faster than Ars, although Ars' results seem to scale better.
ddriver - Friday, August 11, 2017 - link
Of every review out there, only your "superior testing methodology" presents a picture where TR is slower than SX.
ddriver - Thursday, August 10, 2017 - link
Yeah, if all you do all day is compile Chromium with Visual Studio... Take that result with a big spoon of salt.
Samus - Thursday, August 10, 2017 - link
This thing can also decompress my HD pr0n RARs in record time!
carewolf - Thursday, August 10, 2017 - link
The joke is on you. More cores and more memory bandwidth are always faster for compiling. AnandTech must have botched the benchmark here. Other sites show Threadripper whipping i9 ass as expected.
bongey - Thursday, August 10, 2017 - link
They did without a doubt screw up the compile test. The 6950X is a 10-core/20-thread Intel CPU, but somehow the 7900X shows a 20% improvement, when no other test even comes close to that much of an improvement. The 7900X is basically just a bump in clock speed over the 6950X.
Ian Cutress - Thursday, August 10, 2017 - link
'The 7900X is basically just a bump in clock speed for a 6950X'
L2 cache up to 1MB, L3 cache is a victim cache, mesh interconnect rather than rings.
mlambert890 - Saturday, August 12, 2017 - link
It's basically as far from 'just a bump in clock speed' as any follow-up release short of a full architecture revamp, but yeah, ok.
rtho782 - Thursday, August 10, 2017 - link
The whole Game Mode/Creator Mode, UMA/NUMA thing seems a mess. Games not working with more than 20 threads is a joke, although not AMD's fault....
mapesdhs - Thursday, August 10, 2017 - link
Why is it a mess if people choose to buy into this level of tech? It's bringing formerly enterprise-level tech to the masses; the very nature of how this stuff works makes it clear there are tradeoffs in design. AMD is forced to start off by dealing with a software market that for years has focused on the prevalence of moderately low core count Intel CPUs with strong(er) IPC. Offering a simple hardware choice to tailor the performance slant is a nice idea. I mean, what's your problem here? Do you not understand UMA vs. NUMA? If not, you probably shouldn't be buying this level of tech. :D
prisonerX - Thursday, August 10, 2017 - link
That will change. Why invest masses of expensive brainpower in aggressively multithreading your game or app when no one has the hardware to use it? Now they do.
Hurr Durr - Friday, August 11, 2017 - link
Only in la-la land will HEDT processors occupy any meaningful part of the gaming market. We're bound by consoles, and that is here to stay for years.
mapesdhs - Friday, August 11, 2017 - link
And consoles are on the verge of moving to many-core main CPUs. The inevitable dev change will spill over into PC gaming.
RoboJ1M - Friday, August 11, 2017 - link
On the verge?
All major consoles have had a greater core count than consumer CPUs, not to mention complex memory architectures, since, what, 2005?
One suspects the PC market has been benefiting from this for quite some time.
RoboJ1M - Friday, August 11, 2017 - link
Specifically, the 360 had 3 general-purpose CPU cores.
And the PS3 had one general-purpose CPU core and 7 short-pipeline coprocessors that could only read and write to their caches. They had to be fed by the CPU core.
The 360 had unified program and graphics RAM (still not common on PC!)
As well as its large high-speed cache.
The PS3 had separate program and video RAM.
The Xbox One and PS4 were super boring PCs in boxes. But they did have 8-core CPUs. The X1X is interesting: it's got unified RAM that runs at ludicrous speed. Sadly it will only be used for running games at 1800p to 2160p at 30 to 60 FPS :(
mlambert890 - Saturday, August 12, 2017 - link
Why do people constantly assume this is purely time/market economics?
Not everything can *be* parallelized. Do people really not get that? It isn't just developers targeting a market. There are tasks that *can't be parallelized* because of the practical reality of dependencies. Executing ahead and out of order can only go so far before you have an inverse effect. Everyone could have 40 core CPUs... It doesn't mean that *gaming workloads* will be able to scale out that well.
The work that lends itself best to parallelization is the rendering pipeline and that's already entirely on the GPU (which is already massively parallel)
Magichands8 - Thursday, August 10, 2017 - link
I think what AMD did here though is fantastic. In my mind, creating a switch to change modes vastly adds to the value of the chip. I can now maximize performance based upon workload and software profile and that brings me closer to having the best of both worlds from one CPU.
Notmyusualid - Sunday, August 13, 2017 - link
@rtho782
I agree it is a mess, and also, it is not AMD's fault.
I've had a 14c/28t Broadwell chip for over a year now, and I cannot launch Tomb Raider with HT on, nor GTA5. But most software is indifferent to the number of cores presented to it, it would seem to me.
BrokenCrayons - Thursday, August 10, 2017 - link
Great review, but the word "traditional" is used heavily. Given the short lifespan of computer parts and the nature of consumer electronics, I'd suggest that there isn't enough time or emotional attachment to establish a tradition of any sort. Motherboard sockets and market segments, for instance, might be better described in other ways - unless it's becoming traditional in the review business to call older product designs traditional. :)
mkozakewich - Monday, August 14, 2017 - link
Oh man, but we'll still gnash our teeth at our broken tech traditions!
lefty2 - Thursday, August 10, 2017 - link
It's pretty useless measuring power alone. You need to measure efficiency (performance/watt).
So yeah, a 16-core CPU draws more power than a 10-core, but it is also probably doing a lot more work.
Diji1 - Thursday, August 10, 2017 - link
Er, why don't you just do it yourself? They've already given you the numbers.
lefty2 - Thursday, August 10, 2017 - link
Except that they haven't.
Dr. Swag - Thursday, August 10, 2017 - link
How so? You have the performance numbers, and they gave you power draw numbers...
bongey - Thursday, August 10, 2017 - link
Just do an AVX-512 benchmark and Intel will jump over 300 watts, 400 watts (overclocked) from the CPU alone (Prime95 AVX-512 benchmark). See der8auer's video "The X299 VRM Disaster (en)".
DanNeely - Thursday, August 10, 2017 - link
The Chromium build time results are interesting. AnandTech's results have the 1950X only getting 3/4ths of the 7900X's performance. Ars Technica is getting almost equal results on both CPUs, but at 16 compiles per day vs 24 or 32 is seeing significantly worse numbers all around.
I'm wondering what's different between the two compile benchmarks to see such a large spread.
cknobman - Thursday, August 10, 2017 - link
I think it has a lot to do with the RAM used by AnandTech vs Ars Technica. For all the regular benchmarking AnandTech used DDR4-2400; DDR4-3200 was only used in some overclocking.
Ars Technica used DDR4-3200 for all benchmarking.
Everyone already knows how faster DDR4 memory helps the Zen architecture.
DanNeely - Thursday, August 10, 2017 - link
If RAM were the determining factor, Ars should be seeing faster build times though, not slower ones.
carewolf - Thursday, August 10, 2017 - link
AnandTech must have misconfigured something. Building Chromium scales practically linearly. You can move jobs all the way across a slow network and compile on another machine and you still get linear speed-ups with more added cores.
Ian Cutress - Thursday, August 10, 2017 - link
We're using a late March v56 code base with MSVC.
Ars is using a newer v62 code base with clang-cl and VC++ linking
We locked in our versions when we started testing Windows 10 a few months ago.
supdawgwtfd - Friday, August 11, 2017 - link
Maybe drop it then, as it is not at all useful info.
Johan Steyn - Thursday, August 10, 2017 - link
I refrained from posting on the previous article, but now I'm quite sure Anand is being paid by Intel. It is not the benchmarks I argue against, but how they are presented. I was even under the impression that this was an Intel review.
The previous article was titled "Introducing Intel's Desktop Processor". Huge marketing research is done on how to market products. By just stating one thing first or in a different way, quite different messages can be conveyed without lying outright.
By making "Most Powerful, Most Scalable" bold, that is what the readers read first; then they read "Desktop Processor" without even registering that it is Intel's. This is how marketing works, so Anand used slanted journalism to favour Intel, yet most people will just not realise it and eat it up.
In this review there are so many slanted-journalism problems, it is just sad. If you want, just compare it to other sites' reviews. They just omit certain tests and list others at which Intel excels.
I have lost my respect for AnandTech with these last two articles of theirs, and I have followed AnandTech since its inception. Sad to see that you are also now bought by Intel, even though I suspected this before. Congratulations for making this so clear!!!
Ian Cutress - Thursday, August 10, 2017 - link
Anand hasn't worked at the website for a few years now. The author (me) is clearly stated at the top.
Just think about what you're saying. If I was in Intel's pocket, we wouldn't be being sampled by AMD, period. If they were having major beef with how we were reporting, I'd either be blacklisted or consistently on a call every time there's been an AMD product launch (and there's been a fair few this year).
I've always let the results do the talking, and steered clear from hype generated by others online. We've gone in-depth into the how things are done the way they are, and the positives and negatives as to the methods of each action (rather than just ignoring the why). We've run the tests, and been honest about our results, and considered the market for the product being reviewed. My background is scientific, and the scientific method is applied rigorously and thoroughly on the product and the target market. If I see bullshit, I point it out and have done many times in the past.
I'm not exactly sure what your problem is - you state that the review is 'slanted journalism', but fail to give examples. We've posted ALL of our review data that we have, and we have a benchmark database for anyone that wants to go through all the data at any time. That benchmark database is continually being updated with new CPUs and new tests. Feel free to draw your own conclusions if you don't agree with what is written.
Just note that a couple of weeks ago I was being called a shill for AMD. A couple of weeks before that, a shill for Intel. A couple before that... Nonetheless both companies still keep us on their sampling lists, on their PR lists, they ask us questions, they answer our questions. Editorial is a mile away from anything ad related and the people I deal with at both companies are not the ones dealing with our ad teams anyway. I wouldn't have it any other way.
MajGenRelativity - Thursday, August 10, 2017 - link
I personally always enjoy reading your reviews, Ian. Even though they don't always reach the conclusions I hoped they would reach before reading, you have the evidence and benchmarks to back it up. Keep up the good work!
Diji1 - Thursday, August 10, 2017 - link
Agreed!
Zstream - Thursday, August 10, 2017 - link
For me, it isn't about "scientific benchmarking", it's about what benchmarks are used and what story is being told. I think I, along with many others, would never buy a Threadripper to open a single PDF. I could be wrong, but I don't think that's the target audience Intel or AMD is aiming for.
I mean, why not forgo the PDF and other benchmarks that are really useless for this product and add multi-threaded use cases? For instance, why not test how many VMs and how much I/O it can handle, or launch a couple of VMs, run a SQL DB benchmark, and game at the same time?
It could just be me, but I'm not going to buy a 7900x or 1950x for opening up .pdf files, or test SunSpider/Kraken lol. Hopefully we didn't include those benchmarks to tell a story, as mentioned above.
We're going to be compiling, 3D rendering with multiple GPUs, running multiple VMs, all while multitasking with other apps.
My 2 cents.
DanNeely - Thursday, August 10, 2017 - link
Single-threaded use cases aren't why people buy really wide CPUs. But performing badly in them, since they represent a lot of ordinary basic usage, can be a reason not to buy one. Also, running the same benches on all products allows them all to be compared readily, vs. having to hunt for benches covering the specific pair you're interested in.
VM-type benchmarks are more Johan's area, since that's a traditional server workload. OTOH there's a decent amount of overlap with developer workloads there too, so adding it now that we've got a compile test might not be a bad idea. On the gripping hand, any new benchmarks need to be fully automated so Ian can push an easy button to collect data while he works on analysis of results. Also, the value of any new benchmark needs to be weighed against how much it slows the entire benching run down, and how much time rerunning it on a large number of existing platforms will take to generate a comparison set.
iwod - Thursday, August 10, 2017 - link
It really depends on use case. 20% slower on PDF opening? I don't care, because that time has reached diminishing returns and Intel would need to be MUCH faster for this to be a UX problem.
But I think at $999 Intel has a strong case for its i9. Factoring in the motherboard, though, AMD is still cheaper. Not sure if that is mentioned in the article.
Also note Intel is on their third iteration of 14nm, against a brand-new 14nm from AMD/GloFo.
I am very excited for 7nm Zen 2 coming next year. I hope all the software, compilers and optimisations have time to catch up with Zen.
Zstream - Thursday, August 10, 2017 - link
I won't get into an argument, but I and many of my friends who are on the developer side of the house have been waiting for this review, and it doesn't provide me with any useful information. I understand it might be Johan's wheelhouse, but come on... opening a damn PDF file, and testing SunSpider/Kraken/gaming benchmarks? That won't provide anyone interested in either CPU any validation of purchase. I'm not trying to be salty, I just want some more damn details vs. trying to put both vendors in a good light.
Ian Cutress - Thursday, August 10, 2017 - link
Rather than have 20 different tests for each set of different CPUs and very minimal overlap, we have a giant glove that has all the tests for every CPU in a single script. So 80 test points, rather than 4x20. The idea is that there are benchmarks for everyone, so you can ignore the ones that don't matter, rather than expect 100% of the benchmarks to matter (e.g. if you care about five tests, does it matter to you if the tests are published alongside 75 other tests, or do they have to be the only five tests in the review?). It's not a case of trying to put both vendors in a good light, it's a case of this being a universal test suite.
Zstream - Thursday, August 10, 2017 - link
Well, show me a database benchmark, virtual machine benchmark, 3ds Max benchmark, Blender benchmark and I'll shutty ;)
It's hard for me to look at this review outside of a gamer's perspective, which I'm not. Sorry, just the way I see it. I'll wait for more pro-consumer benchmarks?
Johan Steyn - Thursday, August 10, 2017 - link
This is exactly my point as well. Why on earth so much focus on single-threaded tests and games, since we all knew from way back that TR was not going to be a winner here? Where are all the other benches, as you mention? Oh no, those would make Intel look bad!!!!!
Vorl - Thursday, August 10, 2017 - link
The answer to both of you is that this is a high-end PC processor, not a workstation CPU, and not a server CPU. That was clearly covered at the start of the article.
If you want raw number-crunching info, there will be other sites that are going to have those reviews, and really, maybe AnandTech will review it in that light in another, server-focused review, since it really is such a powerful CPU.
Also, there is a LOT of value in having a standardized set of tests. Even if a few tests here and there are no longer valuable, like PDF opening, the same tests being used across the board are important for Bench. You can't compare products if you aren't using the same tools.
Unfortunately, AMD is currently ahead of the curve, with massive SMP now being given to normal consumers at a reasonable price. It will take a little time for devs to catch up and really make use of this amazing CPU.
With the processing power in a CPU like this, imagine the game mechanics that can be created and used. For those of us more interested in making this a reasonably priced workstation/server build for VMs etc., cool for us, but that isn't where this is being marketed, and it's not really fair to jump all over the reviewer for it.
Zstream - Thursday, August 10, 2017 - link
Utter rubbish. This CPU is designed for a workstation build. So a product labeled Xeon is a workstation CPU, but this isn't?
mapesdhs - Friday, August 11, 2017 - link
Yeah, TR doesn't really look like something that's massively aimed at gamers; it has too many capabilities and features which gamers wouldn't be interested in.
pm9819 - Friday, August 18, 2017 - link
AMD themselves call it a consumer CPU. Is Intel paying them as well?
Lolimaster - Friday, August 11, 2017 - link
It's a HEDT/workstation part. A year ago, people called a dual 8-core Xeon setup a workstation, which a sole 1950X replicates. Intel draws a line by not supporting ECC; AMD supports ECC in all their main CPUs, server or not, all the way back to the Athlon 64.
16 cores/32 threads, ECC, 64 PCIe lanes, an upgrade path to 32 cores/64 threads with Zen 3. Smells like a workstation to me.
Server CPUs, which EPYC is, are another thing, with features tailored to that role: massive core counts with low clock speeds to maximize efficiency, and damn expensive mobos without any gamerish gizmos, built to be racked and left alone. TR can do a bit of that too, but it's optimized for all-around performance and budget-friendliness.
Ian Cutress - Thursday, August 10, 2017 - link
Dan sums it up. Some of these tests are simply check boxes - is it adequate enough?
Some people do say that an automated suite isn't the way to do things: unfortunately, without spending over two months designing this script, I wouldn't have time for nearly as much data or to test nearly as many CPUs. Automation is a key aspect of testing, and I've spent a good while making sure tests like our Chromium compile can be process-consistent across systems.
There's always scope to add more tests (my scripts are modular now), if they can be repeatable and deterministic, but also easy to understand in how they are set up. Feel free to reach out via email if you have suggestions.
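To give a flavour of what 'modular' means in practice: each test is a self-contained entry that a wrapper runs, times, and logs. A minimal sketch, with placeholder commands rather than our actual suite:

import subprocess
import time

# Each entry is self-contained: a label plus the command that runs it.
# Commands here are illustrative placeholders, not the real test list.
TESTS = [
    ("7zip_bench", ["7z", "b"]),
    ("chromium_compile", ["ninja", "-C", "out/Default", "chrome"]),
]

for name, cmd in TESTS:
    start = time.perf_counter()
    proc = subprocess.run(cmd, capture_output=True, text=True)
    elapsed = time.perf_counter() - start
    # Flag failures instead of silently folding them into the data.
    print(f"{name}: {elapsed:.1f}s ok={proc.returncode == 0}")

Anything repeatable and deterministic can slot in as another entry.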
Johan Steyn - Thursday, August 10, 2017 - link
Ian, I understand that you see them as checkboxes, but this is not a normal CPU John Doe is going to buy. It has a very specific audience, and I feel you are missing that audience badly. A guy that buys this for rendering or 3ds Max is not going to worry about games. Yes, it would be a great bonus to also be OK at them. Other sites even did tests of rendering and playing games at the same time; TR shined like a star against Intel. This is actually something that might happen in real life: a guy could begin a render and then, while waiting, decide to play a game.
I would not buy TR to open PDFs, would I?
Ian Cutress - Thursday, August 10, 2017 - link
No, but you open things like IDEs and Premiere. A PDF test is a gateway test in that regard with an abnormally large input. When a workstation is not crunching hard, it's being used to navigate through programs with perhaps the web and documents in tow, where the UX is going to be indicative of something like PDF opening.
Lolimaster - Friday, August 11, 2017 - link
By including useless benches you not only waste the target audience's time, you also have to write up and upload images from those useless benches instead of making the article more interesting.
How about a "The Destroyer for HEDT/Workstation" - a typical productivity load plus some gaming? All of a sudden people get TWICE the CPU resources; they can do things they couldn't before on the same machine.
They could have got a dual-socket mobo with 2x 10-core Xeons, paying a hefty premium for pathetic clock speeds, if they wanted to game a bit while doing work. TR fixed that, with mass-consumer gaming performance while cutting multicore costs by more than half (core counts + ECC support without paying the Intel tax).
Lolimaster - Friday, August 11, 2017 - link
And a few months ago that audience was limited to doing their productivity thing with 6-8 cores, or 10 if they paid the huge Intel tax; they probably couldn't game without hurting other things, and had a secondary PC for killing time.
With TR and its massive 16-core count, they can finally do all of that off a single PC, or focus the entire workhorse where they need it (leaving things to work while they sleep).
Lolimaster - Friday, August 11, 2017 - link
A single 1950X destroyed 80% of the Intel Xeon lineup.
Lolimaster - Friday, August 11, 2017 - link
Any CPU after Nehalem performs well enough at single thread, except in software optimized too heavily for certain brands, like Dolphin and Intel.
Lolimaster - Friday, August 11, 2017 - link
Especially when every CPU right now autoclocks to 4GHz on ST tasks. Single thread is just an obsolete metric when only the most basic of tasks will use it - tasks where the last thing you'll worry about is speed, though you may curse that piece of c*rap for not using 80% of your CPU resources.
ZeroPointEF - Thursday, August 10, 2017 - link
I would love to see more VM benchmarking on these types of CPUs. I would also love to see how a desktop performs on top of a Server 2016 hypervisor with multiple servers (Windows and Linux) running on top of the same hypervisor.
ZeroPointEF - Thursday, August 10, 2017 - link
I should have made it clear that I loved the review. Ian's reviews are always great!
I would just like to see these types of things in addition. It seems like we are getting to a point where we can have our own home lab and a desktop all on one machine on top of a hypervisor, but this idea may be my own strange dream.
smilingcrow - Thursday, August 10, 2017 - link
And others would like to know how it works at video editing or as a DAW etc.
To add a whole bunch of demanding benchmarks just for HEDT systems is a hell of a lot of work for little return for a site whose main focus is the mainstream.
Try looking at more specialised reviews.
johnnycanadian - Thursday, August 10, 2017 - link
This, please! My TR purchase is hinging on the performance of multiple VMware VMs all running full-out at least 18 hours per day.
Ian, I'd love to see some of your compute-intensive multi-core benches running on a Linux host with Linux-based VMware VMs (OpenCV analysis, anyone? Send me that 1950X and I'll happily run SIFT and SURF analysis all day long for you - see the sketch below :-). I was delighted by the non-gaming benchmarks shown first in this review and hope to see more professional benches on Anand. Leave the gamerkids to Tom's or HardOCP (or at least limit gaming benchmarks to hardware that is built for it): AnandTech has always been more about folks who make their living on HPDC, and I have nothing but the highest respect for the technical staff at this publication.
I don't give a monkey's about RGB lighting, tempered glass cases, 4k gaming or GTAV FPS. How machines like Threadripper perform in a HPC environment is going to keep AMD in this market, and I sincerely hope they prove to be viable.
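To be concrete about the OpenCV side, the kind of kernel I mean is trivial to script - something like this, with the image path a placeholder and SIFT assumed present (it needs an opencv-contrib build):

import time
import cv2  # needs an OpenCV build with SIFT (opencv-contrib-python)

# Placeholder input; any large image will do.
img = cv2.imread("sample_4k.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()  # cv2.xfeatures2d.SIFT_create() on older builds

start = time.perf_counter()
keypoints, descriptors = sift.detectAndCompute(img, None)
print(f"{len(keypoints)} keypoints in {time.perf_counter() - start:.3f}s")

Run that across a few thousand frames per VM, in several VMs at once, and you'd have a very TR-shaped workload.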
mapesdhs - Thursday, August 10, 2017 - link
Yes, I was pleased to see the non-gaming tests presented first; makes a change, and it's at least a subtle nod to the larger intended market for TR.
Ian.
pm9819 - Friday, August 18, 2017 - link
You're going to spend $1000 on a CPU but have no clue how it handles the tasks you need it for, smh. As a VMware customer, they will tell you which CPUs have been certified for a specific task. You don't need a random website to tell you that.
nitin213 - Thursday, August 10, 2017 - link
Hi Ian,
It's a great review, but I do have some suggestions on the test suite. The test suite for this CPU was not materially different from the suites used for many of the other desktop CPUs done earlier. I think it would be great to see some tests which explicitly exercise the multi-threaded capabilities and the insane I/O of the system, e.g. server hosting with how many users are able to log in, virtual machines, more productivity suites paired with a multi-GPU setup (running Adobe Creative or similar), etc. I think a combination of your EPYC test suite and your high-end GPU test suite would probably be best suited for this.
Also, for the gaming benchmarks, it seems you had 1080, 1060, RX 580 and RX 480 GPUs. I'm not sure whether these were GPU-bottlenecked, with the differences in framerates being academic and not necessarily a show of CPU strength. Also, the Civ 6 AI test would be a great addition, as that really stresses the CPU.
I completely understand that there is only so much that can be done in the limited timeframe typically available for these reviews, but it would be great to see these tests in future iterations and updates.
launchcodemexico - Thursday, August 10, 2017 - link
Why did you end all the gaming review sections with something like "Switching it to Game mode would have made better numbers..."? Why didn't you run the benchmarks in Gaming mode in the first place?
Ian Cutress - Thursday, August 10, 2017 - link
Gaming mode is not default, and we run Gaming mode alongside the default - there are two sets of values in each gaming test.
DanNeely - Thursday, August 10, 2017 - link
You might want to call that out more clearly in the text. I also missed that you have two sets of 1950X results, and probably wouldn't have figured out what the -G suffix meant without a hint.
Ian Cutress - Thursday, August 10, 2017 - link
I mentioned it in the Game vs Creator mode page, but I'll propagate it through.
lordken - Thursday, August 10, 2017 - link
Read before you complain; it is stated at the beginning of the review that -G is for Game mode...
DanNeely - Thursday, August 10, 2017 - link
Especially during the work day, a lot of people are just taking quick glances at the most interesting parts. I'll read it end to end sometime tonight.
mapesdhs - Thursday, August 10, 2017 - link
If people quick-glance, that's their problem for missing key info. :D When learning about something as new as this, I read everything. Otherwise, it's like the tech equivalent of crossing a road while gawping at a phone. :}
The last time I read so much about a new CPU launch was Nehalem/X58.
Ian.
smilingcrow - Thursday, August 10, 2017 - link
It seemed really clear to me, but for people who didn't read the long text on NUMA etc., maybe not.
The dangers of skimming!
mapesdhs - Friday, August 11, 2017 - link
Indeed. :D Reminds me of when a long-time eBay seller told me that long item descriptions are pointless, because most bidders only read the first paragraph, often only the first sentence.
Ian Cutress - Thursday, August 10, 2017 - link
The test suite is a global glove: rather than have 20 tests for each segment, it's a global band of 80 tests for every situation. Johan does different tests, as his office is several hundred miles away from where I am (and we're thousands of miles away from any other reviewer).
For the gaming benchmarks, there are big differences in 99th percentile frame rates and Time Under analysis. As games become more and more GPU-bottlenecked for average frame rates, this is where the differentiation point is. It's a reason why we still test 1080p as well. With regards to the AI test, I've asked the Civ team repeatedly to make the AI test accessible from the command line so I can rope it into my testing scripts easily (they already do it with the main GPU test). But like many other game studios, getting them to unlock a flag is a frustrating endeavor when they don't even respond to messages.
nitin213 - Thursday, August 10, 2017 - link
Thanks for your reply. Hopefully the test suite can be expanded as Intel's CPUs also move to higher core counts and I/O ranges in future. And I completely understand the frustration of trying to get a third party to change their defaults. Cheers.
deathBOB - Thursday, August 10, 2017 - link
It's clear to me . . . Ian is playing both sides and making out like a bandit! /s
FreckledTrout - Thursday, August 10, 2017 - link
Ian, can we get an updated comments section so we can +/- people, and after X number of minuses they won't show by default? I'm saying this because some of these comments (the one in this chain included) are not meaningful responses. The comments section is by far the weakest link on AnandTech.
Nice review btw.
mapesdhs - Thursday, August 10, 2017 - link
Toms has that; indeed, it's kinda handy for blanking out the trolls. Whether it's any useful indicator of "valid" opinion, though, well, that kinda varies. :D (There's nowt to stop the trolls from voting everything under the sun, though one option would be to auto-suspend someone's ability to vote if their own posts get hidden from down-voting too often - a hands-off way of slapping the trolls.)
Given the choice, I'd much rather just be able to *edit* what I've posted than up/down-vote what others have written. I still smile recalling a guy who posted a followup to apologise for the typos in his o.p., but the followup had typos as well, after which he posted aaaaagh. :D
Ian.
Johan Steyn - Thursday, August 10, 2017 - link
Ian, thanks for at least responding; I appreciate it. Please compare your review to sites like PCPer and many others. They have no problem pointing out the weak points of TR, yet clearly understand what TR was mostly designed for and focus properly on it; and even though they did not test the 64 PCIe lanes, as an example, they mention that they are planning a follow-up to do it, since it is an important point. You do mention these as well, but could have said more than just a mention in passing.
Look at your review: most of it is about games. Are you serious?
I have to give you credit for at least mentioning the problems with SYSmark.
Let me give you an example of slanted journalism. When you do the rendering benchmarks, where AMD is known to shine, you only mention at each benchmark what they do etc., and fail to mention that AMD clearly beats Intel, even though other sites focus more on these benchmarks. In the one benchmark where Intel gets a decent score, you take time to mention that:
"Though it's interesting just how close the 10-core Core i9-7900X gets in the CPU (C++) test despite a significant core count disadvantage, likely due to a combination of higher IPC and clockspeeds."
Not in one of the rendering benchmarks do you give credit to AMD, yet you found it fitting to end the section off with:
"Intel recently announced that its new 18-core chip scores 3200 on Cinebench R15. That would be an extra 6.7% performance over the Threadripper 1950X for 2x the cost."
Not slanted journalism? At least you mention "2x the cost," but for most this will not deter them from buying from the monopoly.
After focusing so much on game performance, I am not sure you understand TR at all. AMD still has a long way to go in many areas. Why? Because corrupt Intel basically drove them to bankruptcy, but that is a discussion for another day. I lived through those days and experienced it myself.
Maybe I missed it, but where did you discuss the issue of memory speed? You mention memory overclocking at the beginning. Did you test the system running at 3200 or 2666? It is important to note. If you ran at 2666, then you are missing a very important point: Ryzen is known to gain a huge amount with memory speed. You should not regard 3200 as an overclock, since that is what that memory is made for, even if 2666 is standard spec. Most other sites I checked used it like that. If you did use 3200, don't you think you should mention it?
Why is it that your review ends up meh about TR and leaves the reader rather wanting an i9 in almost all respects, yet most of the other sites give admiration where deserved, even though they have criticism as well? Ian, I see that you are clearly disappointed with TR, which is OK; maybe you just like playing games and that is why.
It was clear how much you admire Intel in your previous article. You say that I gave no examples of slanted journalism; maybe you should read my post again. "Most Powerful, Most Scalable." It is well known that people don't read the fine print. This was intentional. If not, you are very unlucky guys for having so many unintended mishaps, and then I truly need to say I am sorry.
For once, please be a bit excited that there is some competition against the monopoly of Intel, or maybe you are also deluded into thinking they became so without any underhanded ways.
By the way, sorry that I called you Anand. I actually wanted to type AnandTech, but left it like that. This site still carries his name and he should still take responsibility. After I posted, I realised I should have just checked the author, so sorry about that.
vanilla_gorilla - Thursday, August 10, 2017 - link
"Intel recently announced that its new 18-core chip scores 3200 on Cinebench R15. That would be an extra 6.7% performance over the Threadripper 1950X for 2x the cost."How do you not understand that is a dig at Intel? He's saying you have to pay twice as much for only a 6.7% improvement.
smilingcrow - Thursday, August 10, 2017 - link
The memory speed approach taken was clearly explained in the test and was stated as being consistent with how they always test.
I don't take issue with testing at stock speeds on launch day, as running memory out of spec for the system can be evaluated in depth later on.
Johan Steyn - Friday, August 11, 2017 - link
That is just rubbish. Threadripper has no problem with 3200 memory, and other sites have no problem running it at that speed. 3200 memory is designed to run at 3200; why run it at 2666? There is just no excuse except being paid by Intel.
Maybe then you can accuse the other sites of being unscientific?
fanofanand - Tuesday, August 15, 2017 - link
AnandTech always tests at JEDEC, regardless of the brand.
Manch - Friday, August 11, 2017 - link
""Intel recently announced that its new 18-core chip scores 3200 on Cinebench R15. That would be an extra 6.7% performance over the Threadripper 1950X for 2x the cost."Not slanted journalism? At least you mention "2x the cost," but for most this will not defer them in buying the monopoly."
You call Intel the monopoly and call him out for not wording the sentence to dissuade people from buying Intel. Who has the bias here? If he was actively promoting Intel over AMD or vice versa, you'd be OK with the latter; but he does neither, and he's an Intel shill? Come on, that's unfair. HOW should he have written it so it would satisfy you?
FYI Anand is gone. He's NOT responsible for anything at Anandtech. Are you going to hold Wozniak's feet to the fire for the lack of ports on a Mac too?
NikosD - Sunday, August 13, 2017 - link
Well, reading the whole review today - 13/08/2017 - I can see that the reviewer did something more evil than not using DDR4-3200 to give us performance numbers.
He used DDR4-2400, as he clearly states in the configuration table, filling up the performance tables, BUT on the power consumption page he added DDR4-3200 results (!) just to inform us that DDR4-3200 consumes 13W more, without providing any performance numbers for that memory speed (!!)
The only thing left for the reviewer is to tell us exactly which department of Intel he works in, because in the first pages he wanted to test TR against a 2P Intel system - as Skylake-X has only 10C/20T - but Intel didn't allow him.
Ask your Intel department to permit it next time.
Zingam - Sunday, August 13, 2017 - link
Yeah! You make a great point! Too much emphasis on gaming all the time! These processors aren't GPUs, after all! Most people who buy PCs don't play games at all. Even I, as a game developer, would like to see more real-world tests, especially compilation and data-crunching tests that are typical for game content creation and development workloads. Even I, as a game developer, spend 99% of my time in front of the computer not playing any games.
pm9819 - Friday, August 18, 2017 - link
So Intel made AMD release the underpowered, overheating Bulldozer CPUs? Did Intel also make them sell their US- and EU-based fabs so they'll be wholly dependent on the Chinese to make their chips? Did Intel also make them buy an equally struggling graphics card company? Truth is, AMD lost all the mind and market share they had because of bad corporate decisions and uncompetitive CPU designs post-Thunderbird. It's no one's fault but their own that it took seven years to produce a competitive replacement. Was Intel supposed to wait till they caught up? And Intel was a monopoly long before AMD started producing competitive CPUs.
You can keep blaming Intel for AMD's screw-ups, but those of us with common sense and the ability to read know the fault lies with AMD's management.
ddriver - Thursday, August 10, 2017 - link
You are not sampled because of your divine objectivity, Ian; you are sampled because you review for a site that is still somewhat popular for its former glory. You can deny it all you want, and understandably so, as it is part of your job, but AT is heavily biased towards the rich American boys - Intel, Apple, NVIDIA... You are definitely subtle enough for the dumdums, but for better or worse, we are not all dumdums yet.
But hey, it is not all that bad. After all, nowadays there are scores of websites running reviews, so people have a base for comparison and can extrapolate objective results for themselves.
ddriver - Thursday, August 10, 2017 - link
And some bits of constructive criticism: it would be nicer if these reviews featured more workloads people actually use in practice. Too many synthetics, too many short-running tests, too many tests with software that is like "wtf is it and who in the world is using it".
For example, rendering: very few people in the industry actually use Corona or Blender - Blender is used for modelling and texturing a lot, but not really for rendering. Neither is Luxmark, nor POV-Ray, nor CB.
Most people who render stuff nowadays use 3ds Max and V-Ray, so testing this will actually be indicative of actual, practical performance for more people than all those other tests combined.
Also, people want render times, not scores. Scores are a very poor indication of the actual performance you will get, because many of those tests are short, so the CPU doesn't operate in the same mode it would if it were sweating under continuous work.
Another rendering test that would benefit prosumers is After Effects. A lot of people use After Effects, all the time.
You also don't have a DAW test - something like Cubase or Studio One.
A lot of the target market for HEDT is also interested in multiphysics, for example ANSYS or COMSOL.
The compilation test you run, as already mentioned several times by different people, is not the most adequate either.
Basically, this review has very low informational value for people who are actually likely to purchase TR.
mapesdhs - Thursday, August 10, 2017 - link
AE would definitely be a good test for TR; it's something that can hammer an entire system, unlike games, which only stress certain elements. I've seen AE renders grab 40GB of RAM in seconds. A guy at Sony told me some of their renders can gobble 500GB of data just for a single frame, imposing astonishing I/O demands on their SAN and render nodes. Someone at a London movie company told me they use a 10GB/sec SAN to handle this sort of thing, and the issues surrounding memory access vs. cache vs. cores are very important - e.g. their render management software can disable cores where some types of render benefit from a larger slice of memory bandwidth per core.
There are all sorts of tasks which impose heavy I/O loads while also needing varying degrees of main CPU power. Some gobble enormous amounts of RAM, like ANSYS, though I don't know if that's still used.
I'd be interested to know how threaded Sparks in Flame/Smoke behave with TR, though I guess that won't happen unless Autodesk/HP sort out the platform support.
Ian.
Zingam - Sunday, August 13, 2017 - link
Good points!
Notmyusualid - Sunday, August 13, 2017 - link
...only he WAS sampled. Read the review.
bongey - Thursday, August 10, 2017 - link
You don't have to be paid by Intel, but this is just a bad review.
Gothmoth - Thursday, August 10, 2017 - link
Where there's smoke, there's fire. There are clear indications that AnandTech is a bit biased.
Lolimaster - Friday, August 11, 2017 - link
Adding useless single-thread benches for people that will pay $600+ for many cores is plain stupidity - like that useless open-a-PDF test.
casperes1996 - Saturday, August 12, 2017 - link
Why do you bother replying to these, Ian? I love your enthusiasm about what you do, and am happy that you reply to comments, but as you state yourself, no matter what you say, you'll be called a shill on more than a weekly basis by either side no matter what you do. Intel shill, AMD shill, Apple shill, Nvidia shill and so on. There's no stopping it, because you just can't please the people who go into something wanting a specific result. Well, you can if you give them that result, but sometimes, facts aren't what you want them to be, and some people don't accept that.
Cheers, mate
Diji1 - Thursday, August 10, 2017 - link
You sound like a crazy person.
Notmyusualid - Saturday, August 12, 2017 - link
@ Diji1
You are correct.
He is implying that just because we want HEDT platforms/chips, we care nothing for single-threaded performance.
He is a true AMD fan-boi, as you will see over time.
lordken - Thursday, August 10, 2017 - link
@Johan Steyn: While I agree with you that the Intel piece with the PR slide at the top was a little bit lame - I even lolled at the "most scalable" part (isn't something like "glued" Zen the most scalable design?) - I think this review is good and also covers the architecture etc. There were a few instances during reading where the wording seemed odd or unnecessarily polite toward Intel's shortcomings/deficits, but I can't even remember them now.
Though I was surprised about the power numbers, as Tom's measured much higher wattage for the 7900X - 160-200W, and with TTF even up to 250-331W - but here the 7800X/7900X drew only ~150W.
Also, this sentence is odd:
"All the Threadripper CPUs hit around 177W, just under the 180W TDP, while the Skylake-X CPUs move to their 140W TDP." Move to their? They are above the TDP... why not state it clearly?
tamalero - Thursday, August 10, 2017 - link
I'm scratching my head on power consumption as well. Almost all reviewers show that the i9s consume more than Threadripper.
Could it be the motherboard used? Some used the Zenith and other reviewers used the ASRock retail one.
smilingcrow - Thursday, August 10, 2017 - link
Power consumption can vary a lot depending on the type of task and the exact nature of that task.
So you should expect a lot of variation across reviews.
Johan Steyn - Thursday, August 10, 2017 - link
The number of game tests in this review is unbalanced. Also read my reply to Ian.
Extremely well stated: "unnecessarily polite toward Intel's shortcoming". Sometimes I think these guys think we are all complete mindless drones.
carewolf - Thursday, August 10, 2017 - link
It has always been like this here. This was pretty neutral by AnandTech standards; they even admitted it when it was faster.
Johan Steyn - Thursday, August 10, 2017 - link
Please read my response to Ian; I think you are not looking closely enough at what is happening here.
imaheadcase - Thursday, August 10, 2017 - link
So you lost respect for a website based on how they word titles of articles? I think you don't understand advertising at all. lol
If you want to know a website that lost respect, look at HardOCP - you know why people don't like them, for obvious reasons.
Alexey291 - Thursday, August 10, 2017 - link
No offence, but HardOCP is far more respectable than what we have at ATech these days. But that's not hard. The AT website is pretty much a shell for the forum, which is where most of the traffic is. I'm sure they only do the reviews because 'it was something we have always done'.
Johan Steyn - Thursday, August 10, 2017 - link
You may not understand how wording is used to convey sentiments in a different way. That is what politicians thrive on. You could, for instance, say "I am sorry that you misunderstood me." It gives the impression that you are sorry, but you are not. People also ask for forgiveness like this: "If I have hurt you, please forgive me." It sounds sincere, but it is a hidden lie, not acknowledging that you have actually hurt anybody - actually saying that you do not think you did.
Well, this is a science, and I cannot explain it all here. If you miss it, that does not mean it is not there.
mikato - Monday, August 14, 2017 - link
I thought I'd just comment to say I understand what you're saying and agree. Even if a sentence gives facts, it can sound more positive one way or another based on how it is stated. The author has to do some reflection sometimes to catch this. I believe him whenever he says he doesn't have much time, and maybe that plays into it. But articles at different sites may not have this bias effect, and it can be an important component of a review article.
"Intel recently announced that its new 18-core chip scores 3200 on Cinebench R15. That would be an extra 6.7% performance over the Threadripper 1950X for 2x the cost."
These 2 sentences give facts, but sound favorable to Intel until just the very end. It's a subtle perception thing but it's real. The facts in the sentences, however, are massively favorable to AMD. Threadripper does only 6.7% less performance than an announced (not yet released) Intel CPU for half the cost!
Here is another version:
"Intel recently announced that its new 18-core chip scores 3200 on Cinebench R15. So Threadripper, for half the cost of Intel's as-yet unreleased chip, performs only 6.7% slower in Cinebench."
There, that one leads with Threadripper and "half the cost" in the second sentence, and sounds much different.
Johan Steyn - Thursday, August 10, 2017 - link
HardOCP and PCPer are more respected, in my opinion. Wccftech is unpredictable: sometimes they shine and sometimes they are really odd.
mapesdhs - Thursday, August 10, 2017 - link
I've kinda taken to GamersNexus recently, but I still always read AT and toms to compare.
Ian.
fanofanand - Tuesday, August 15, 2017 - link
WCCFtech is a joke; it's nothing but rumors and trolling. If you are seriously going to put WCCFtech above AnandTech, then everyone here can immediately disregard all of your comments.
Drumsticks - Thursday, August 10, 2017 - link
Fantastic review In. I was curious exactly how AMD would handle the NUMA problem with Threadripper. It seems that anybody buying Threadripper for real work is going to have to continue being very aware of exactly what configuration gets them the best performance.
One minor correction, at the bottom of the CPU Rendering tests page:
"Intel recently announced that its new 18-core chip scores 3200 on Cinebench R15. That would be an extra 6.7% performance over the Threadripper 1950X for 2x the cost." - this score is for the 16 core i9-7960X, not the 7980XE.
Drumsticks - Thursday, August 10, 2017 - link
Ian*. Can't wait for the edit button one day!
launchcodemexico - Thursday, August 10, 2017 - link
Why did you end all the gaming review sections with something like "Switching it to Game mode would have made better numbers..."? Why didn't you run the benchmarks in Gaming mode in the first place?
Ian Cutress - Thursday, August 10, 2017 - link
We ran with both and give the data for both. Gaming Mode is not default, and it may surprise you just how many systems are still run at default settings.
mapesdhs - Thursday, August 10, 2017 - link
Just a thought: might it be possible for AMD to include logic in the design which can tell when it's doing something that would probably run better in the other mode, and if so notify the user?
zepi - Thursday, August 10, 2017 - link
Which 7-Zip version are you actually using? Do you really run version 9.2 as stated in the article?
The latest stable should be something like 16.x, and 17.x builds are also available.
zepi - Thursday, August 10, 2017 - link
Your numbers look somewhat different compared to some sites that have been using 17.x versions.
DanNeely - Thursday, August 10, 2017 - link
Keeping the version constant means you can compare against a huge backlog of old data without having to rerun anything, or having to drop any systems you can't get working or that were only loaners.
Alexey291 - Thursday, August 10, 2017 - link
Yes, and it also means that the results are useless.
tamalero - Thursday, August 10, 2017 - link
Agreed. It's like running a benchmark suite that can't handle more than 8 threads... because "back in the day" there were only dual-core processors.
Johan Steyn - Thursday, August 10, 2017 - link
It's called slanted journalism - just another example.
zepi - Friday, August 11, 2017 - link
Exactly. We don't test GPUs with Quake 2 just to have benchmark results comparable against a Voodoo 3.
And almost no one running 7-Zip today (be it on a Core 2 Quad OR a Core i9) will be running these ancient versions. Results on those versions are just meaningless in today's environment.
zepi - Friday, August 11, 2017 - link
A princess and half a kingdom for a functional edit button.
Notmyusualid - Sunday, August 13, 2017 - link
@ Alexey
Nope - it means comparisons are easier than ever. If that means anything to you.
Alexey291 - Monday, August 14, 2017 - link
Why yes, I can compare performance results in software which is so outdated that it's half a dozen major versions behind...
So, as I was saying: useless information.
Lolimaster - Friday, August 11, 2017 - link
You can add two results: one with the pinned version for comparison purposes, and one with the newest version available.
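It's not even hard to script: 7-Zip ships a built-in benchmark ('7z b'), so something like this would cover both - a rough sketch, with the install paths as placeholders:

import subprocess

# Paths are placeholders: a pinned legacy build for back-comparability,
# plus whatever is currently installed for real-world numbers.
BUILDS = {
    "pinned_9.20": r"C:\bench\7z920\7z.exe",
    "current_17.x": r"C:\Program Files\7-Zip\7z.exe",
}

for label, exe in BUILDS.items():
    # '7z b' runs the built-in compression benchmark and reports MIPS ratings.
    result = subprocess.run([exe, "b"], capture_output=True, text=True)
    print(f"--- {label} ---\n{result.stdout}")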
Alexey291 - Saturday, August 12, 2017 - link
That would involve work, as opposed to just running a macro once in a while.
Typo - Thursday, August 10, 2017 - link
I wonder if the TR 1900X will get its own mode? Something like Game mode but still retaining SMT?
Yojimbo - Thursday, August 10, 2017 - link
It would be cool if you tested time between turns for a few late-game Civilization VI saves.
Ian Cutress - Thursday, August 10, 2017 - link
When the developers of Civ finally listen to me and add a command line for the AI benchmark, I can script it into my setup. They keep ignoring me. They have a command line for the regular benchmark, but because the AI benchmark was added post-release, no one thought to add a command line for it (or publish what the command-line flags are). There is an -aibenchmark flag in the disassembled code, but it doesn't do anything, which makes me think it is disabled for release builds.
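To illustrate the kind of hook I'm asking for, the whole thing reduces to a flag a script can launch and time - a sketch only, with the install path hypothetical and the flag's behaviour exactly what's currently missing:

import subprocess
import time

GAME = r"C:\Games\Civ6\CivilizationVI.exe"  # hypothetical install path

# With a working switch, an automated suite could launch the test,
# wait for it to finish, and harvest the results with zero clicking.
start = time.perf_counter()
subprocess.run([GAME, "-aibenchmark"], check=False)  # currently a no-op in release builds
print(f"wall time: {time.perf_counter() - start:.0f}s")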
rtho782 - Thursday, August 10, 2017 - link
http://www.anandtech.com/show/11685 <--- this link to the motherboard roundup just takes you to the homepage.
Ian Cutress - Thursday, August 10, 2017 - link
It's still a WIP; needs expanding and editing. Will be doing that over the weekend :)
Arbie - Thursday, August 10, 2017 - link
FYI, this sentence needs some repair work:
"Though it's interesting just how cost the 10-thread Core i9-7900X gets here"
Kjella - Thursday, August 10, 2017 - link
In the not-so-distant past - like last year - you'd have to pay Intel some seriously overpriced HEDT money for 6+ cores. Ryzen gave us 8 cores, and most games can't even use that. ThreadRipper is a kick-ass processor in the workstation market. Why anyone would consider it for gaming, I have no idea. It's giving you tons of PCIe lanes just as AMD is downplaying CF with Vega, NVIDIA has officially dropped 3-way/4-way support, and even 2-way CF/SLI has been a hit-and-miss experience. I went from a dual-card setup to a single 1080 Ti; don't think I'll ever do multi-GPU again.
tamalero - Thursday, August 10, 2017 - link
Probably their target is those systems that have tons of cards with SATA RAID ports, or PCIe accelerators like AMD's or Nvidia's?
mapesdhs - Thursday, August 10, 2017 - link
And then there's GPU acceleration for rendering (eg. CUDA) where the SLI/CF modes are not needed at all. Here's my old X79 CUDA box with quad 900MHz GTX 580 3GB:
http://www.sgidepot.co.uk/misc/3930K_quad580_13.jp...
I recall someone who does quantum chemistry saying they make significant use of multiple GPUs; and check out the OctaneBench CUDA test - the top spot has eleven 1080 Tis. :D (PCIe splitter boxes.)
GreenMeters - Thursday, August 10, 2017 - link
There is no such thing as SHED. Ryzen is a traditional desktop part. That it raises the bar in that segment compared to Intel's offering is a good thing--a significant performance and feature boost that we haven't seen in years. Threadripper is a HEDT part. That it raises the bar in that segment compared to Intel's offering is a good thing--a significant performance and feature boost that we haven't seen in years.Ian Cutress - Thursday, August 10, 2017 - link
Ryzen 7 was set as HEDT, directly against Intel's HEDT competition. This is a new socket and a new set over and above that - not to mention that Intel will be offering its HCC die on a consumer platform for the first time, increasing the consumer core count by 8 in one generation, which has never happened before. If what used to be HEDT is still HEDT, then this is a step above.
Plus, AMD calls it something like UHED internally. I prefer SHED.
FreckledTrout - Thursday, August 10, 2017 - link
I think AMD has the better division of what is and isn't HEDT. Going forward, Intel really should follow suit and make it 8+ cores to get into the HEDT lineup, as what they have done this go-around is just confusing and a bit goofy.
ajoy39 - Thursday, August 10, 2017 - link
Small nitpick, but:
"AMD could easily make those two ‘dead’ silicon packages into ‘real’ silicon packages, and offer 32 cores"
That's exactly what the already-announced EPYC parts are doing, is it not?
Great review otherwise. These parts are intriguing, but I don't personally have a workload that would suit them. Excited to see what sort of innovation this brings about, though - about time Intel had some competition at this end of the market.
Dr. Swag - Thursday, August 10, 2017 - link
I assume they're referring to putting 32 cores on TR4.
mapesdhs - Thursday, August 10, 2017 - link
Presumably a relevant difference being that such a 32c TR would have the use of all of its I/O connections, instead of some of them being used to connect to other EPYC units. OTOH, with a 32c TR, how the heck could mbd vendors cram enough RAM slots on a board to feed the 8 channels? Either that or stick with 8 slots and just fiddle around somehow so that the channel connections match the core count in a suitable manner, e.g. one per channel for 32c, two per channel for 16c, etc.
Who knows whether AMD would ever release a full 32c TR for the TR4 socket, but at least the option is there, I suppose, if enough buyers would happily go for a 32c part (depends on the task).
smilingcrow - Thursday, August 10, 2017 - link
Considering the TDP with just a 16C chip, going 32C would hit the clock speeds badly, unless they were able to keep the turbo speeds when ONLY 16 or fewer of the cores are loaded?
The 32C server parts seemingly have much lower max turbo speeds even when lightly loaded.
mapesdhs - Friday, August 11, 2017 - link
I haven't checked - is the TDP of a 32c EPYC a lot higher, with consequently higher clocks?
JedTheKrampus - Thursday, August 10, 2017 - link
Here are a few potential benchmark ideas that I'd like to see:
- ZBrush. (High-resolution DynaMesh/projection, ZRemesher, or Decimation Master.)
- Unreal Engine 4. (Lightmap baking on a sample map, perhaps one from Unreal Tournament 4. Perhaps compilation of the engine itself.)
- XNormal. (Ambient occlusion texture baking.)
- Some sort of database benchmark for the poor sods who are doing web development.
- Some sort of video editor benchmark.
T1beriu - Thursday, August 10, 2017 - link
Luxmark OpenCL: "Though it's interesting just how cost the 10-thread Core i9-7900X gets here, likely due to a combination of higher IPC and clockspeeds."
1. Typo?
2. The 7900X is not in the chart.
Ian Cutress - Thursday, August 10, 2017 - link
When we initially ran the 7900X and other CPUs, Luxmark was failing for no obvious reason. We narrowed down the reason a few weeks ago - it doesn't like running when a GTX 950 is installed, for detection reasons. We have since moved to RX 460s being used during our CPU benchmark runs.
gzunk - Thursday, August 10, 2017 - link
The only thing I might take exception to is the notion that prosumers have never seen NUMA before, since both the Z9-PE and Z10-PE offer it. I myself had the Z9-PE with a pair of Sandy Bridge Xeons.
mapesdhs - Thursday, August 10, 2017 - link
I've lost count of how often I've read the specs pages for those mbds, etc. Talked to so many prosumers who ideally would buy one of those boards, but the Xeon costs were prohibitive.
SpartanJet - Thursday, August 10, 2017 - link
Well, that was disappointing. Guess I'm waiting for Coffee Lake for my new gaming rig.
Total Meltdowner - Thursday, August 10, 2017 - link
Yea, I'm not regretting my 1800X right now. Too bad. I had high hopes for Threadripper.
Let's see how drivers and BIOS updates help it, assuming they will.
Johan Steyn - Thursday, August 10, 2017 - link
Yes, you are right: TR is not the best for a gaming rig. Maybe this article again errs in even trying to assess TR as a gaming machine. It is fine to point out that Intel will be better, though not when compared on price. But this article made it seem like TR is meant to be a gaming CPU; Ryzen is meant for that. When games support 32 threads, that will change, but not soon. This is a workstation-class machine. It is almost like buying a Xeon to run games with.
PS: I would not buy CL, though.
mapesdhs - Thursday, August 10, 2017 - link
I hope AMD tailors its PR to make this clear. Focusing any hype on gaming, where it's obviously not warranted, could miss a lot of very suitable potential buyers.
What does bug me, though, is the absence of reviewers mentioning that while Intel's 4-core CPUs do well for gaming right now, isolated to just that task, they have nothing in reserve to handle rapidly growing areas such as live streaming of games. GN showed a huge difference in viewer experience for game streaming between a 1700 and a 7700K.
Lolimaster - Friday, August 11, 2017 - link
Sadly, AMD was always the good fella, even with known shills like AnandTech.
YazX_ - Thursday, August 10, 2017 - link
It is strange that you haven't mentioned data centers in the conclusion; this CPU will be selling like hot cakes for those, cloud computing, and VMs.
Ian Cutress - Thursday, August 10, 2017 - link
It's a consumer CPU, which is something AMD emphasized in our briefings and again when we asked them about where they are pitching the processors. If users want Zen for datacenters, EPYC exists. We have benchmarks for those too.
mapesdhs - Friday, August 11, 2017 - link
It's a heck of a stretch to outright call it a consumer CPU when it has so many pro-type features such as ECC support. Sure, it's aimed at consumers, but it's definitely aimed at prosumers as well, and I'd be amazed if at least a few fully pro places didn't buy some, even if only to test.
Beany2013 - Friday, August 11, 2017 - link
AMD have historically been pretty cool about ECC support - and professionals such as video and rendering types appreciate it, as RAM wobbles on 24hr+ rendering workflows become one less thing to worry about.
It's not that they're subtly targeting server markets or owt; they just know that a substantial minority of their client base appreciates being able to utilise ECC memory without having to quadruple the cost of the base hardware, as you do with Intel stuff.
Notmyusualid - Saturday, August 12, 2017 - link
@ mapesdhs
Quote: "It's a heck of a stretch to outright call it a consumer CPU"
He didn't just say it was a consumer product - AMD (the manufacturer, remember) TOLD him it was.
That's called logic; that'll help you.
T1beriu - Thursday, August 10, 2017 - link
GTA V: you write "ASUS GTX 1060 Strix 6G Performance", but the charts say "GTX 1080".
Ninhalem - Thursday, August 10, 2017 - link
I would like to see software like ANSYS Structures or ANSYS Fluent benchmarked, but after talking with ANSYS hardware support, they're still waiting to see how EPYC performs on base hardware. Building systems for ANSYS using Intel parts involves obscene amounts of money, so if you can save any money for the same amount of performance, a myriad of companies would be interested.
Ian Cutress - Thursday, August 10, 2017 - link
I've been in contact with ANSYS before to collaborate on a benchmark. They didn't want to get involved.
mapesdhs - Thursday, August 10, 2017 - link
That's a pity; as I understand it, ANSYS is a task that gobbles RAM by the truckload, so it'd be an interesting use case for analysing memory/cache behaviour.
Many years ago, one ANSYS user told me his ideal system would be a single CPU with 1TB of RAM.
T1beriu - Thursday, August 10, 2017 - link
GTA V, Sapphire Nitro R9 Fury 4G Performance with GTX 1060 charts.
GTA V, Sapphire Nitro RX 480 8G Performance with R9 Fury charts.
CleverBullet - Thursday, August 10, 2017 - link
What did you use to test max power consumption? Prime95 small FFTs?
I'd love to see some perf/watt comparisons to the 7900X in the future. GamersNexus has some interesting results in that regard, with the 1950X behaving significantly better at stock than the 7900X: more work for less power.
carewolf - Thursday, August 10, 2017 - link
You did something wrong with the Chromium builds benchmark. It has absolutely no cross-core communication and scales almost linearly with the number of cores. So you must have misconfigured something or hit a glitch. I work on Chromium professionally, and we can normally speed it up 2x by distributing compile jobs all the way to another machine, or by 10x by distributing compile jobs to 10 other machines. Not scaling to more cores on the same CPU makes no sense.
Ian Cutress - Thursday, August 10, 2017 - link
We're using a late-March build based on v56 with MSVC, using the methodology described in the ELI5, and implementing a complete clean rebuild every time. Why March v56? Because when we locked down our suite a few months back to start testing Windows 10 on several generations of processors, that's where it was at. 50 processors in, several hundred to go...
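For anyone curious, the shape of the test is simple - a sketch using the public gn/ninja flow rather than our locked-down MSVC setup, with the target and output directory as typical defaults:

import shutil
import subprocess
import time

OUT = "out/Default"

# Complete clean rebuild every run: wipe outputs, regenerate, compile.
shutil.rmtree(OUT, ignore_errors=True)
subprocess.run(["gn", "gen", OUT], check=True)

start = time.perf_counter()
subprocess.run(["ninja", "-C", OUT, "chrome"], check=True)
print(f"clean build: {time.perf_counter() - start:.0f}s")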
Lolimaster - Friday, August 11, 2017 - link
Then again, creating obsolete data for the sake of "our benchmark suite". How about running the "for comparison's sake" bench and another with the latest version? Not that difficult.
Dave Null - Thursday, August 10, 2017 - link
I don't know why you're being criticized as an Intel shill, Ian. I'll probably be purchasing Threadripper, and I thought it was a good review.
One thing I would like to see is some kind of audio benchmark. It's pretty well established at this point that there are latency considerations with Threadripper, and it would be useful to know how this affects DAWs with high track counts, for example.
schizoide - Thursday, August 10, 2017 - link
The link to the 5GHz space-heater on page 19 goes to your Dropbox as a file:// URL, not http://.
Ian Cutress - Thursday, August 10, 2017 - link
That was an odd error. I've adjusted it.
Johan Steyn - Thursday, August 10, 2017 - link
The review is unbalanced, aiming mostly at gamers. You probably understand what TR is about, but not all do. This article does not focus on what TR is good at.
Ian Cutress - Thursday, August 10, 2017 - link
You do realise how many requests we actually got for game tests? This is our regular CPU Gaming test suite, taken from the suggestions of the readers: fast and slow GPUs, AMD and NVIDIA, 1080p and 4K. The data is there because people do request it, and despite your particular use case, it's an interesting academic exercise in itself. The CPU benchmarks are still plentiful: around 80 tests that take 8-10 hours to run in total. If you want to focus purely on those, then go ahead - the data is meant to be for everyone and whatever focus they are interested in.
mapesdhs - Thursday, August 10, 2017 - link
And at least the non-gaming tests were presented first.
Chad - Thursday, August 10, 2017 - link
I think a simple comment before the gaming test suite like... "We show gaming tests for (the reasons you list above), but if you are looking at buying Threadripper for gaming alone, you are really missing the point of it." would go a long way to allaying concerns. You could cap it with what it does well: Threadripper can really excel at running multiple VMs, servers, compiling, encoding etc., while at the same time running a game while you wait. Or some such.
That's what appears to be missing to me: instead of just dumping tons of gaming results, putting it all into the context of the strengths of the processor. Just my 2 coppers.
mapesdhs - Friday, August 11, 2017 - link
A comment like that may have helped prevent criticism, but if included, it would also add weight to the suggestion that the review should have included a greater proportion of threaded workloads.
pm9819 - Friday, August 18, 2017 - link
No one spending $1000 on a CPU is going to be swayed by its gaming performance. That comment isn't needed.
Notmyusualid - Saturday, August 12, 2017 - link
@ Ian Cutress
I am here for the gaming results, so I thank you for running the benchies.
I think the problem is that fan-bois expected TR to do better than it did in those tests, and well, it didn't.
I, for one, think you are reporting honestly, for what it's worth.
Aristechnica, on the other hand...
Mugur - Sunday, August 13, 2017 - link
If you're here for the games, maybe the 7700K review is waiting for you...
Notmyusualid - Sunday, August 13, 2017 - link
4x 1070s in my main rig. A quad-core wouldn't suffice.
Chad - Sunday, August 13, 2017 - link
Wow, if you are here only for gaming results with Threadripper, you are completely missing the point of it. Just, wow.
GreenMeters - Thursday, August 10, 2017 - link
If it's priced in the existing traditional desktop segment, it's a traditional desktop part. If it's priced in the existing HEDT segment, it's an HEDT part.
mapesdhs - Thursday, August 10, 2017 - link
That suggests that somehow there are such things as "traditional" price points, whereas in reality Intel (without competition) has been moving these all over the place (mostly up) for many years. How can such tech have traditional anything when its base nature is evolving so fast? Look at what Intel has done to its own pricing as a result of Ryzen, and now TR, implementing a major price drop at the 10c level compared to BW-E (Intel's Ark shows the 7900X being 42% cheaper after a gap of just one year).
When disruptive competition occurs, there's no such thing as traditional. To me, "traditional" is another way of disguising tech stagnation.
Lolimaster - Friday, August 11, 2017 - link
An HEDT part is also a workstation part, and with this amount of cores/I/O, AMD also made this CPU a proper server chip for small businesses that don't need exotic things like remote LAN or dual 10G NICs.
AMD disrupted the market and erased many lines - same with EPYC and its 32 cores on a single socket, erasing the need for dual socket for many people (while TR will scale to 32 cores in the future, EPYC will go to 64 cores).
Lolimaster - Friday, August 11, 2017 - link
*Octa-channel on EPYC, absurd amount of cache.
Total Meltdowner - Thursday, August 10, 2017 - link
FIRST OMG!!!
T1beriu - Thursday, August 10, 2017 - link
Yeah buddy, you're a couple of hours late on that one.
T1beriu - Thursday, August 10, 2017 - link
Ian, can you please add a paragraph to the review that describes the "99th percentiles" for games? I'm having a hard time understanding it. Thanks.
Ian Cutress - Thursday, August 10, 2017 - link
A game benchmark gives you the amount of time it takes to render each frame - 16ms for one frame, 18ms for the next, etc. In the past, people used to quote minimum frame rates, i.e. the absolute minimum, which can sometimes be off due to a sudden spike caused by something else on the system kicking in, so the data would not be representative.
To get around this, we use the 99th percentile. We take all the frame times, put them in numerical order, then take the value 99% of the way towards the worst result as our data point. This means that 99% of the frame times / FPS will be better than this value during normal gameplay.
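In code terms, the calculation is just this (the frame times are made-up numbers for illustration):

# Per-frame render times in milliseconds, as a frame-time logger records them.
frame_times_ms = [16.2, 17.8, 16.5, 33.1, 16.9, 18.4, 16.1, 45.0, 17.2, 16.6]

# Sort best-to-worst, then take the value 99% of the way in.
ordered = sorted(frame_times_ms)
idx = min(int(len(ordered) * 0.99), len(ordered) - 1)
p99 = ordered[idx]

print(f"99th percentile: {p99:.1f} ms ({1000.0 / p99:.1f} FPS)")

So 99% of frames rendered faster than the printed value; only the worst 1% were slower.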
T1beriu - Friday, August 11, 2017 - link
I understand now. Thanks for the explanation.
Micha81 - Thursday, August 10, 2017 - link
I can see a use case in an IT lab for a non-mission-critical VM server. I suggest considering a test of whether the CPU is well behaved under typically used hypervisors.
Mugur - Sunday, August 13, 2017 - link
Me too, but you won't find it here. But in case you want to see in how many milliseconds it opens a PDF... you are right on the spot.
IGTrading - Thursday, August 10, 2017 - link
Why are all the AnandTech results different from, and inferior to, the results listed at TechSpot and Ars Technica if the CPUs and the benchmarks were the same?!
How come AMD aces all the benchmarks on these reputable sites, but the results are all over the place on AnandTech?!
Don't think I'm bashing AnandTech for a second. I've been reading it since 2001, and even if I get the impression it is a bit biased, I will continue reading it. Everybody has the right to be biased, and I have enough judgement to form my own opinion about a subject.
I suspect there was some issue with the settings or the motherboard, because even the power consumption results are weird. I know that the results listed try to evaluate the chip power consumption, but still the results seem very wrong.
Actually, in these power consumption tests the reader will get completely the WRONG IDEA, because the Intel X299 systems consume way more power than AMD's Threadripper X399 platform.
Also, no mention of the difference in handling the temperatures of the platform?! How is X399 vs. the steak grill called X299? This is a very, very serious issue that should be discussed in the review.
If the AMD solution is more power efficient, stable and reliable, the readers should be able to read about it in a review.
Sorry to ask so many questions, I know it was a long week for you Ian.
Thank you for the review and I hope we do get a Part 2 or 2.0 :)
Johan Steyn - Thursday, August 10, 2017 - link
See my other posts; maybe they might shed some light.
Ian Cutress - Thursday, August 10, 2017 - link
Most of our benchmarks use real-world inputs, aside from the synthetics. Our Chromium compile test, for instance, uses a different code base and a different compiler to Ars. Our WinRAR test and video editing tests use our own datasets. Our game tests use settings that we've chosen, and these are unlikely to align with others'. That's why we document a lot of our testing.
Also, on the power tests: we're probing the CPU power only - not losses caused by the platform power delivery, DRAM, or power supply. We're not taking the difference between idle and load either; we're going off the numbers the CPU reports to itself for power/frequency management - power states, fan profiles and everything else.
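As an aside, anyone on Linux can approximate the same scope through Intel's RAPL package-energy counter - a minimal sketch; AMD parts report through different interfaces, and the sysfs path assumes a stock powercap setup (usually root-readable only):

import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package 0 energy counter

def energy_uj():
    with open(RAPL) as f:
        return int(f.read())

# Average package power over one second; platform losses, the DRAM
# domain, and PSU efficiency are outside this counter's scope.
e0, t0 = energy_uj(), time.perf_counter()
time.sleep(1.0)
e1, t1 = energy_uj(), time.perf_counter()
print(f"package power: {(e1 - e0) / 1e6 / (t1 - t0):.1f} W")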
IGTrading - Friday, August 11, 2017 - link
I think that total system power consumption is more important than chip consumption, IMHO.
The user/buyer/client will never use the CPU without the whole platform consuming power as well, except if he drills a hole into it and uses it as a key chain. :)
In the server business, platform power matters the most; in the mobile world as well. For the home desktop user, it matters how much he will spend to enjoy that new productivity/gaming system.
The only niche of the market where chip power would be of particularly significant importance is supercomputing, where the platform is usually a custom one with a custom power budget that depends directly on the decisions of the designer and the beneficiary.
These two decision makers, beneficiary and designer, will then choose which chip they want to use in their project.
Otherwise, at first look (maybe I'm being superficial), I don't see why chip power consumption would need to be measured so exactly and used for comparison.
To CHECK it and see if it stays within the boundaries declared by the manufacturer or goes over, yes. But to use it for comparison?!
Or maybe I'm just used to the days when everybody was always checking and comparing the total system power. :)
Mugur - Sunday, August 13, 2017 - link
Have you thought that if the cooling solution is not perfect, especially since there are no proper coolers for TR yet, just adapted ones, it could skew the results for most of the benchmarks / power figures? TR has an XFR of 4.2 GHz that will not kick in unless the cooling is perfect. I saw this on Hardware Canucks, I think, where their TR was below the advertised values and they mentioned it.
GamersNexus even has a segment on YouTube testing different application methods of thermal paste, and it showed that even this matters a lot with this CPU and its cooling solutions.
sbandur - Thursday, August 10, 2017 - link
You should test Chrome compiling in 4 virtual machines at the same time... just for fun...
sbandur - Thursday, August 10, 2017 - link
Great review!
IGTrading - Friday, August 11, 2017 - link
Yes. That is a much more appropriate and comprehensive test. We often talk about using VMs to do our heavy work, even if it reminds us of the mainframe era :) But today it makes sense. Even in a shared work environment, you can share the costs of a Threadripper machine and run 3 or 4 or more VMs.
And then everything is shared: hardware costs, maintenance, upgrades, software, repairs, power consumption and so on.
You just come to the office with your laptop, plug into the 27" secondary desktop display, connect to your VM, and you have 2 to 32 computing threads at your disposal.
So yes, concurrent computing loads in virtual machines make for a very good and comprehensive means of benchmarking, IMHO.
tamalero - Thursday, August 10, 2017 - link
I find it strange that in your reviews the Intel chips consume way less power than in other reviewers' tests.
Ian Cutress - Thursday, August 10, 2017 - link
We're testing the CPUs, not the system level + VRM losses.
Interitus - Thursday, August 10, 2017 - link
Might have missed it in previous comments, but the link on page 1 to the X399 board previews doesn't work?
Ian Cutress - Thursday, August 10, 2017 - link
Still a WIP; I was hoping to have it finished, but it will probably be early next week.
BOBOSTRUMF - Thursday, August 10, 2017 - link
Good review, but I see a lot of tests optimized for 2-4 cores. I also want a test with gaming, rendering and compression (or other intensive tasks) running at the same time; this would clearly differentiate this beast from other 4-6 core CPUs.
Unfortunately for Intel, its greed really shows now. Although Core still has about 5-10% more IPC than Ryzen, the power consumption per core is about 5-15% higher (at a lower frequency), and with 10-18 cores this really shows. They had a very competitive tick-tock strategy when they had absolutely no competition, and now, after more than three years, they are still stuck at 14 nm. If they had been smarter and built by now even one fab at 8 or 10 nm for their many-core CPUs, things would be simpler for them today. At 8 nm, Skylake-X could have run 18 cores at 3.2-3.6 GHz, not the 2.6 GHz they ship now.
So they save 3-4 billion dollars by not building an 8 nm fab, but they will lose more than that when the enthusiast market sides with AMD.
Please be smarter in the future, Intel; Samsung and TSMC already have 8 nm fabs while you...
looncraz - Thursday, August 10, 2017 - link
Page 2: "Whereas Ryzen 7 only had 16 PCIe lanes"
Ryzen 7 has 32 PCIe 3.0 lanes on die, with one 8-lane controller disabled, leaving 24 lanes enabled. Four of those are then reserved for the chipset, leaving 20 PCIe lanes usable for direct connectivity.
Ian Cutress - Thursday, August 10, 2017 - link
Of which only 16 are reserved for GPUs, which is how PCIe lanes from CPUs have been characterized for decades.
bongey - Thursday, August 10, 2017 - link
From the site that tried to tell you a Pentium 4 was a better CPU than an Athlon 64.
amdwilliam1985 - Monday, August 14, 2017 - link
Pentium 4 is better than Athlon 64 as a space heater ;)
brucek2 - Thursday, August 10, 2017 - link
Re: page 1's "AMD recommends liquid cooling at a bare minimum" - if liquid cooling is the "bare minimum", what cooling is considered "pretty good"? Are we all supposed to be readying liquid nitrogen setups?
Hurr Durr - Friday, August 11, 2017 - link
Look at the TDPs across the board. Of course we are!
Gavin Bonshor - Thursday, August 10, 2017 - link
Great review as always :) - So it's effectively a great all-around CPU for streaming, gaming, and rendering in programs which utilise more than 8 cores... I think that's a win, especially with ECC memory support.
peevee - Thursday, August 10, 2017 - link
Why do you need ECC at home?
Makaveli - Thursday, August 10, 2017 - link
Some professionals work from home. Kind of a silly question.
mapesdhs - Thursday, August 10, 2017 - link
Yeah, I just inferred that'd be the case.
prisonerX - Friday, August 11, 2017 - link
"640K should be enough for anyone"peevee - Thursday, August 10, 2017 - link
I had a lot of hope for Threadripper as a development machine... but when the 16-core TR loses so badly to the 10-core 7900X or even the 8-core 7820X in compilation, there is something seriously wrong with the picture. Too much emphasis on FP performance that nobody at home needs all that much (except in games, where it is provided by the GPU and not the CPU anyway)? Maybe the AT tests are wrong; say, they failed to specify /m for MSBuild?
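For what it's worth, the sanity check I have in mind would look something like this (the solution path is made up; /m, alias /maxcpucount, is what lets MSBuild build projects in parallel):

    import subprocess, time

    # Hypothetical timing harness: rebuild the same solution with and
    # without /m. If the /m run isn't dramatically faster on a 16-core
    # part, the compile benchmark isn't really exercising the cores.
    SOLUTION = r"C:\src\big_project\big_project.sln"  # placeholder path

    for flags in ([], ["/m"]):
        t0 = time.time()
        subprocess.run(["msbuild", SOLUTION, "/t:Rebuild"] + flags, check=True)
        print("/m" if flags else "default", "%.0f s" % (time.time() - t0))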
peevee - Thursday, August 10, 2017 - link
Well, there is a good chance that the optimal config for the test would be SMT on (obviously) and NUMA on.
peevee - Thursday, August 10, 2017 - link
Hiding the fact that the CPU is NUMA both from the OS and from software is a very bad idea. Thread migration out of a core is a disaster all by itself, but thread migration to different memory and especially a different L3 cache (as big as it is) should never be attempted.
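Concretely, something like this is what I mean by keeping a workload on one die (a Linux-only sketch; the core numbering is an assumption, so check lscpu on the actual box first):

    import os

    # Pin the current process to die 0 so its threads never migrate to
    # the other die's L3/memory. Assumes cores 0-7 plus SMT siblings
    # 16-23 belong to die 0 -- the real numbering depends on how the
    # BIOS/OS enumerates them.
    die0 = set(range(0, 8)) | set(range(16, 24))
    os.sched_setaffinity(0, die0)  # pid 0 = this process

    print("running on CPUs:", sorted(os.sched_getaffinity(0)))
    # ...launch the latency-sensitive work from here...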
peevee - Thursday, August 10, 2017 - link
Basically, at this point I would take the 7820X over the TR 1950X for every task, with similar MT performance in the vast majority of tasks not offloadable to a GPU, better mixed-load performance and much better ST performance. And I would save $400 and electricity costs in the process.
BOBOSTRUMF - Friday, August 11, 2017 - link
Take your Intel; I'm with the ThreadRipper.
Lolimaster - Friday, August 11, 2017 - link
X299 + CPU consumes and produces way more heat than TR, and that's a fact. Anand is Anand; if you're happy with your blue placebo site, good for ya.
Notmyusualid - Saturday, August 12, 2017 - link
@peevee
I'm sort-of eyeing-up the 7900X myself. But I have the feeling the Mrs. will sh1t if I buy any more new toys, and my 13 Nvidia GPUs.... :)
Notmyusualid - Saturday, August 12, 2017 - link
*after my 13.... FFS - edit button please!
ComputerGuy2006 - Thursday, August 10, 2017 - link
I don't know how many people would care... but I always felt having something like Stockfish tested could be interesting in these types of benchmarks.
Netmsm - Thursday, August 10, 2017 - link
@Ian Cutress: if it's possible, please do some benchmarks of multitasking performance.
Makaveli - Thursday, August 10, 2017 - link
I know a lot of hard work and long hours went into this, so I want to thank you, Ian, for taking the time. Minus all the bickering and whining in the comments, some good points were made. Been reading this site since 2000 and appreciate all the knowledge it has given me.
Ian Cutress - Thursday, August 10, 2017 - link
Thanks! :)
psychickitten - Thursday, August 10, 2017 - link
Any chance of including V-Ray benchmarks (both CPU and GPU) in future reviews? V-Ray has recently released a benchmark which is free to download.
Ian Cutress - Thursday, August 10, 2017 - link
Check some of the comments above. Apparently we have too many rendering benchmarks, according to other users.
fallaha56 - Thursday, August 10, 2017 - link
Why is XFR not turned on here? And what respected rig-builder doesn't turn on XMP profiles...
Come on guys, this is poor.
Ian Cutress - Thursday, August 10, 2017 - link
XFR is enabled by default.
Outlander_04 - Thursday, August 10, 2017 - link
Intel's 140 watt chips pull 149 watts. AMD's 180 watt chips pull 176 watts.
BOBOSTRUMF - Friday, August 11, 2017 - link
Actually, Intel's 140 W chip can consume more than 210 W if you want the top unrestricted performance. Read the Tom's Hardware review.
Filiprino - Thursday, August 10, 2017 - link
How come WinRAR is faster with the 10-core Broadwell than with the 10-core Skylake? And what did they change in Cinebench going from 10 to 11.5? Threadripper is the faster CPU in Cinebench 10, but in the newer one it is not. Then again, Cinebench 15 shows TR as the faster CPU. Is this benchmark reliable?
How come Chromium compilation is so slow? Others have pointed out they get much better scaling (a linear speedup). That makes sense, because compilation basically consists of launching isolated processes (compiler instances). Is this related to the segfaulting problem under GNU/Linux systems?
For encoding, I would start using FFmpeg when benchmarking this many cores. I recall FFmpeg being faster than HandBrake for the same number of cores; maybe the GUI loop interrupts the process in a performance-unfriendly way. Too much overhead. HPC workloads can suffer even from the network driver raising too many interrupts (hence Linux tickless configurations).
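A GUI-free encode benchmark would be as simple as something like this (input clip and settings are placeholders, not AnandTech's actual test parameters):

    import subprocess, time

    # Drive libx264 straight through FFmpeg and discard the output, so
    # only CPU encode throughput is timed -- no GUI loop in the way.
    cmd = ["ffmpeg", "-i", "source_4k.mkv",       # hypothetical input
           "-c:v", "libx264", "-preset", "slow",  # CPU-bound settings
           "-f", "null", "-"]                     # encode, then discard

    t0 = time.time()
    subprocess.run(cmd, check=True)
    print("encode took %.1f s" % (time.time() - t0))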
I have read the SYSmark results, and I find it strange that TR's media results are slower than its data results, TR being slower than Intel in media and faster than Intel in data. Isn't SYSmark from BAPCo (http://www.pcworld.com/article/3023373/hardware/am... You already point it out in the article, sorry.
How come the R9 Fury in Shadow of Mordor has AMD and Intel CPUs running consistently at two different frame rates (~95 vs ~103)?
The same goes for the GTX 1080. Both cases happen regardless of the Intel architecture (Haswell, Broadwell and Skylake all have the same FPS value).
What happens with the NVIDIA driver in Rocket League? A bad caching algorithm (TR has more cores/threads, hence more cache available to store GPU command data)? You say you had issues, but what are your thoughts?
How come GTA V has those under-60 and under-30 FPS graphs, given that the game is available for PS4 and Xbox One (it has already been optimized for a two-CCX CPU, at least in that version)? Nevertheless, with NVIDIA cards, 2 seconds out of 90 is not that much.
What I can think is that all these benchmarks are programmed with threading libraries from the "good old times", given the bad scaling, and in some cases there is architecture-specific targeted code. I'd also point to the small datasets being used. And I would not make a case out of a benchmark whose code has false sharing (¡:O!)
Currently for gaming, it seems that the easiest way is to have a Virtual Machine with PCIe passthrough pinned to one of the MCM dies.
As a suggestion to AnandTech, I would like to see more free (libre) software used to measure CPU performance, compiling the benchmarks from source against the target CPU architecture, something like Phoronix does. Maybe you could use PTS (the Phoronix Test Suite).
Filiprino - Thursday, August 10, 2017 - link
Positive things: ThreadRipper stays under its TDP; Intel is more power hungry. The Intel 16-core might go through the roof in power consumption.
Good gaming performance. Intel is generally better, but TR still offers a beefy CPU for that too, losing only a few frames.
Strong rendering performance.
Strong video encoding performance.
When you talk about IPC, it would be useful to measure it with profiling tools, not just "points", "milliseconds" and "seconds".
Seeing how these benchmarks do not scale much beyond 10 cores, you might realize software has to get better.
Chad - Thursday, August 10, 2017 - link
Seconding the FFmpeg test (pretty please!)
mapesdhs - Thursday, August 10, 2017 - link
Ian, a query about the CPU Legacy Tests: why do you reckon the 1920X beats both the 1950X and 1950X-G in CB 11.5 MT, yet the latter win out in CB 10 MT? Is there a max-thread limit in v11.5? Filiprino asked much the same above.
"...and so losing half the threads in Game Mode might actually be a detriment to a workstation implementation."
Isn't that the whole point though? For most workstation tasks, don't use Game Mode. There will be exceptions of course, but in general...
Btw, where's C-ray? ;)
Ian.
Da W - Thursday, August 10, 2017 - link
ALL OF YOU COMPLAINERS: START A TECH REVIEW WEBSITE YOURSELVES AND STFU!
hansmuff - Thursday, August 10, 2017 - link
Don't read the comments. Also, a lot of the "complaints" are read by Ryan, and he actually addresses them; his articles improve as a result of criticism. He's never been bad, but you can see an ascension in quality over time, along with his partaking in critical commentary.
IOW, we don't really need a referee.
hansmuff - Thursday, August 10, 2017 - link
And of course I mean Ian, not Ryan.
mapesdhs - Friday, August 11, 2017 - link
It is great that he replies at all, and does so to quite a lot of the posts too.
Kepe - Thursday, August 10, 2017 - link
Wait a second: according to AMD and all the other articles about the 1950X and Game Mode, Game Mode disables all the physical cores of one of the CPU dies and leaves SMT on, so you get 8 cores and 16 threads. It doesn't just turn off SMT for a 16-core / 16-thread setup.
AMD's info here: https://community.amd.com/community/gaming/blog/20...
drajitshnew - Thursday, August 10, 2017 - link
You have written that "This socket is identical (but not interchangeable) to the SP3 socket used for EPYC". Please clarify.
I was under the impression that if you drop an EPYC into a Threadripper board, it would disable 4 memory channels and 64 PCIe lanes, as those would simply not be wired up.
Deshi! - Friday, August 11, 2017 - link
No, AMD has stated that won't work. It's probably not a hardware incompatibility, but they probably put microcode on the CPUs so that if it doesn't detect it's a Ryzen CPU, it doesn't work. There might also be differences in how the dies are wired up on the fabric, since it's 2 dies instead of 4. Remember, Threadripper has only 2 physical dies that are active; on EPYC all processors are 4 dies with cores disabled on each die, right down to the 8-core part (2 enabled on each physical die).
Deshi! - Friday, August 11, 2017 - link
Wish there was an edit function... but to add to that: if you pop in an EPYC processor, it might go looking for those extra lanes and memory buses that don't exist on Threadripper boards, hence causing it not to function.
pinellaspete - Thursday, August 10, 2017 - link
This is the second article where you've tried to start an acronym called SHED (Super High End Desktop) in referring to AMD Threadripper systems. You also say that Intel systems are HEDT (High End Desktop) when in all reality both AMD and Intel are HEDT. It is just that Intel has been keeping the core count low on consumer systems for so long that you think anything over a 10-core system is unusual.
AMD is actually producing a HEDT CPU for $1000, not inflating the price of a HEDT CPU and bleeding its customers like Intel was doing with the $1750 i7-6950X. HEDT CPUs should cost about $1000, and performance should increase with every generation at the same price, not with the price relentlessly jacked up as Intel has done.
HEDT should be increasing in performance every generation, and you prove yourself to be Intel-biased when something finally comes along that beats Intel's butt. Just because it beats Intel, you want to put it into a different category so it doesn't look like Intel fares as badly. If we start a new category of computers called SHED, what comes next in a few years? SDHED? Super Duper High End Desktop?
Deshi! - Friday, August 11, 2017 - link
There's a good reason for that. Intel is not just inflating the cost because they want to. It literally costs them much more to produce their chips because of the monolithic die approach vs AMD's modular approach. AMD's yields are much better than Intel's at the higher core counts. Intel will not be able to match AMD's prices and still make a significant profit unless they also adopt the same approach.
fanofanand - Tuesday, August 15, 2017 - link
"HEDT CPUs should cost about $1000 "That's not how free markets work. Companies will price any given product at their maximum profit. If they can sell 10 @ $2000 or 100 at $1000 and it costs them $500 to produce, they would make $15,000 selling 10 and $50,000 selling 100 of them. Intel isn't filled with idiots, they priced their chips at whatever they thought would bring the maximum profits. The best way for the consumer to protest prices that we believe are higher than the "right" price is to not buy them. The companies will be forced to reduce their prices to find the market equilibrium. Stop complaining about Intel's gouging, vote with your wallet and buy AMD. Or don't, it's up to you.
Stiggy930 - Thursday, August 10, 2017 - link
Honestly, the review is somewhat disappointing. For a prosumer product, there is no MySQL/PostgreSQL benchmark. No compilation test under a Linux environment. Really?
name99 - Friday, August 11, 2017 - link
"In an ideal world, all software would be NUMA-aware, eliminating any concerns over the matter."Why? This is an idiotic statement, like saying that in an ideal world all software would be aware of cache topology. In an actual ideal world, the OS would handle page or task migration between NUMA nodes transparently enough that almost no app would even notice NUMA, and even in an non-ideal world, how much does it actually matter?
Given the way the tech world tends to work ("OMG, by using DRAM that's overclocked by 300MHz you can increase your Cinebench score by .5% !!! This is the most important fact in the history of the universe!!!") my suspicion, until proven otherwise, is that the amount of software for which this actually matters is pretty much negligible and it's not worth worrying about.
cheshirster - Friday, August 11, 2017 - link
AnandTech's power and compiling tests are completely out of line with other reviewers' results.
Still hiding the poor Skylake-X gaming results.
Most of the tests are completely outside that 16-core CPU's target workloads.
2400 memory used for the tests.
Absolutely zero perf/watt and price/perf analysis.
The Intel bias is through the roof here.
Looks like I'm done with AnandTech.
Hurr Durr - Friday, August 11, 2017 - link
Here's your pity comment.
Notmyusualid - Sunday, August 13, 2017 - link
Yep, I'll get the door for him.
Jeff007245 - Friday, August 11, 2017 - link
I don't comment much (if ever), but I have to say one thing... I miss Anand's reviews. What happened to AnandTech?
Whatever happened to IPC testing, when IPC used to be compared on a clock-for-clock basis? I remember the days when IPC meant Instructions Per Clock, and this website and others would even downclock/overclock processors to a nominal clock rate to compare each processor's IPC. Hell, even Bulldozer with its high-clock architecture was downclocked to compare its "relative IPC" at a nominal clock rate.
And to add to what others are saying about the bias in the review... honestly, I have been feeling the same way for some time now. It must be because AnandTech is at the "MERCY" of their parent company, Purch Media... When you are at the mercy of your advertisers, you have no choice but to bend the knee, or even worse, bend over and do as they say, "or else"...
Thanks for taking the time to create this review, but AnandTech to me is no longer AnandTech... What others say is true: this place is only good for the forums and the very technical community that is still sticking around.
fanofanand - Tuesday, August 15, 2017 - link
fanofanand - Tuesday, August 15, 2017 - link
Downclocking and overclocking processors to replicate a different processor within the same family can lead to inaccurate results, as IPC can and does rely (at least to a degree) on cache size and structure. I get what you are saying, but I think Ian's work is pretty damn good.
SloppyFloppy - Friday, August 11, 2017 - link
Why did you leave out the i9s from the gaming tests? And why didn't you include the 7700K when you include the 1800X in the gaming tests?
People want to know, if they buy a $1k 7900X or 1950X, whether it's great not only for media creation/compiling but also for gaming.
silverblue - Friday, August 11, 2017 - link
He stated why at the bottom of page 1. Also, he used the 7740X, so there is little to no point in including the 7700K.
Lolimaster - Friday, August 11, 2017 - link
The 1950X is as good at gaming as the 1800X or an OCed 1700, with many more CPU resources to toy with.
Swp1996 - Friday, August 11, 2017 - link
That's the best title I have ever seen... 😂😂😂😂🤣🤣🤣🤣🤣 Steroids 😂😂😂🤣🤣🤣🤣🤣🤣🤣
corinthos - Friday, August 11, 2017 - link
In other words... AMD Ryzen is still the best bet for most people, and the best value. 1700 OC'd all day!
BillBear - Friday, August 11, 2017 - link
>Move on 10-15 years and we are now at the heart of the Core Wars: how many CPU cores with high IPC can you fit into a consumer processor? Up to today, the answer was 10, but now AMD is pushing the barrier to 16
I don't personally think of Threadripper or parts like Broadwell-E as consumer-level parts.
For me, the parts most consumers have been using for the last decade have been Intel parts with two cores, or four cores at the high end.
It's been a long period of stagnation, with cutting power use on mobile parts being the area that saw the most attention and improvement.
James S - Friday, August 11, 2017 - link
Agreed, the HEDT platforms are not for the average consumer; they are for enthusiasts, professional workstation usage, and some other niche uses.
The frequency war stopped and the IPC war started. We should have had the core-count competition 5-8 years back, since IPC stagnated to a couple of percent in gains year on year.
sorten - Friday, August 11, 2017 - link
Swole? Threadripped?
Rottie - Friday, August 11, 2017 - link
The AMD Ryzen CPU is not fast enough. Apple is not ready for AMD Ryzen CPUs, sorry AMD. I love AMD but I hate Intel, even though I have a Skylake-based MacBook Pro. :(
Deshi! - Friday, August 11, 2017 - link
One small correction: Ryzen has 24 PCIe lanes, not 16. It has 16 for graphics only, but saying only 16 may make people (like me) wonder whether you can't run an NVMe drive at x4 and still have the graphics card at x16, which you totally can do.
Deshi! - Friday, August 11, 2017 - link
This is under the "Feeding the Beast" section, btw, where you said "Whereas Ryzen 7 only had 16 PCIe lanes, competing in part against CPUs from Intel that had 28/44 PCIe lanes,"
fanofanand - Tuesday, August 15, 2017 - link
He already answered this question/statement to someone else: there are 20 lanes from the CPU, 16 of which are available for graphics. I don't think his way of viewing it seems accurate, but he has stated that this is how PCIe lanes have been counted "for decades".
WaltC - Friday, August 11, 2017 - link
Nice review, btw! Yes, going all the way back to the Athlon and the triumph of DDR SDRAM over RDRAM, and the triumph of AMD's x86-64 over Itanium (Itanium having been Intel's only "answer" for 64-bit desktop computing post the A64 launch--other than to have actually paid for and *run* an ad campaign stating "You don't need 64-bits on the desktop", believe it or not), and going all the way back to Intel's initial Core 2 designs, the products that *actually licensed x86-64 from AMD* (so that Intel could compete in the 64-bit desktop space it claimed didn't exist), it's really remarkable how much AMD has done to invigorate and energize the x86 computing marketplace globally. Interestingly enough, it's been AMD, not Intel, that has charted the course for desktop computing globally--and it goes all the way back to the original AMD Athlon.
The original Pentium designs--I owned 90 MHz and 100 MHz Pentiums before I moved to AMD in 1999--were the high point of an architecture that Intel would *cancel* shortly thereafter, simply because it could not compete with the Athlon and its spin-off architectures like the A64. That which is called "Pentium" today is not...;) Intel has simply continued to use the brand. All I can say is: TGF AMD...;)
I've tried to imagine where Intel would have taken the desktop computing market had consumers allowed the company to lead them around by the nose, and I can't...;) If not for AMD *right now* and all the activity the company is bringing to the PC space once again, there would not be much of a PC market globally. But now that we have some *action* again and Intel is breaking its legs trying to keep up, the PC market is poised to break out of the doldrums! I guess Intel had decided to simply nap for a few decades--"Wake me when some other company does something we'll have to compete with!" Ugh.
zeroidea - Friday, August 11, 2017 - link
Hi Ian,
On the Civ 6 benchmark page, all results after the GTX 1080 are mislabeled as GTA 6.
Ahmad Rady - Friday, August 11, 2017 - link
Can you try testing this CPU using Windows Server?
This is an MCM CPU that looks like 4 CPUs attached to each other.
I think Windows 10 Pro can't get the most out of this CPU unless we have Windows 10 Pro for Workstations.
Pekish79 - Friday, August 11, 2017 - link
V-Ray has a rendering benchmark too; maybe you could use both.
Pekish79 - Friday, August 11, 2017 - link
I went to check both the V-Ray and Corona benchmark pages. Corona more or less matches the graphs here, and V-Ray shows the following:
AMD 1950X: 00:46-00:48
i9-7900X: 00:54-00:56
i7-6950X: 01:00-01:10
i7-5960X: 01:23-01:33
Pekish79 - Friday, August 11, 2017 - link
V-Ray Bench 1.0.5
SanX - Friday, August 11, 2017 - link
*** AMD, make 2-chip mobos for the upcoming multicore wars; you will double your profit from this at no cost to you +++
vicbee - Friday, August 11, 2017 - link
Off subject: having just read the article about NVIDIA's meteoric rise in profits, some of which is directly attributed to high-end "gamer" video cards purchased expressly for coin mining, I wonder whether it and AMD are going to manufacture CPUs and GPUs specifically for that purpose, and how that will affect the price of said parts...
Avro Arrow - Friday, August 11, 2017 - link
Hi Ian, thanks for doing this article. It's important to see all possible outcomes because in the real world, anything is possible. I do have one question that has me puzzled. Why do you say that Threadripper only has 60 PCI-Express 3.0 lanes when it's been reported several times by everyone, including official AMD releases (and also by you), that it has 64? I thought it might be just a typo, but you state it in several places and in all of your specs. This is not a new thing, so is there something about Threadripper that we don't know?
HotJob - Friday, August 11, 2017 - link
Could someone explain to me what a "2P" system is, from the competition section of the article?
coolhardware - Saturday, August 12, 2017 - link
"2P" system = two processor system, i.e. a system with two physical CPU sockets and two CPUs installed.In the past a 2P (or 4P) system was really handy to get more cores especially back when 1 core, 2 core, and eventually 4 core CPUs were high end. In the consumer realm, way back, the Pentium II was the first 2P system I ever built and people even did it with Celerons as well:
http://www.cpu-central.com/dualceleron/
The Opterons were also fun for dual- or quad-processor systems, including some SFF options like the ZMAX-DP socket 940 system.
https://www.newegg.com/Product/Product.aspx?Item=N...
Now fast forward: Threadripper is already available at Amazon and Newegg:
http://amzn.to/2wDqgWw (URL shortened)
https://www.newegg.com/Product/Product.aspx?Item=N...
I do not think I will ever be building a 2P or 4P system again!!!
:-)
rvborgh - Friday, August 11, 2017 - link
Hi Ian,
I think the Cinebench 11.5 benchmarks are incorrect for both Threadrippers. Threadripper is almost equivalent to my quad-Opteron (48-core) system, which scores 3229 cb in R15... and 39.04 in Cinebench 11.5. If I downclock all cores to approximately 2.9 GHz, I end up with around 3000 cb in R15 and in the 36-point range for 11.5.
The fact that you are only scoring in the 18 range makes me wonder if you had the Threadripper set in some mode where it was only using 8 of the 16 cores. Can you verify this, please? Thanks :) I would think you should see scores in the 36 range with 11.5.
Other than this minor detail... great article.
PS: I've had the same issues with software not liking NUMA on my quad-Opteron system as well... Cinebench especially does not like it.
Tchamber - Saturday, August 12, 2017 - link
Hi, Ian. Thanks for the review. As usual it was in-depth and informative. I'm in the middle of building a 1700X system now based on your review. I wanted to say you handle all the naysayers, gloomy Gusses and negative Nancies with aplomb! I think most people's own slant colors how they see your reviews. I appreciate the consistency of what you do here. I took a look over at Ars, and they could be called AMD shills for all the positive things they say... Keep it up!
Tchamber - Saturday, August 12, 2017 - link
P.S. I loved your Kessel Run reference; it tied in nicely with your Yoda quote.
B3an - Saturday, August 12, 2017 - link
Too many plebs complaining about a lack of 3D rendering benches. The fact is, a 16-core CPU is still much slower than GPUs at rendering. I'll be getting a 1950X, but it won't even be used for rendering when I know for a fact that my two GPUs will still be much faster in things like Blender. Even a single high-end GPU will still easily beat the 1950X at these tasks.
Seems like immature moron fanboys are crying over this stuff because they just want to see AMD at the top of the charts.
coolhardware - Saturday, August 12, 2017 - link
Hi B3an, what will you primarily be using your 1950X for?
I do not really have the workload to justify that CPU, but I wish I did ;-)
Mugur - Sunday, August 13, 2017 - link
I suggest you read other TR reviews. Some tested GPU rendering, and they show that even in that case you need the best CPU you can get.
minde - Saturday, August 12, 2017 - link
I see in the photo that the AMD processor is MADE IN CHINA. Without comment. What difference is there between Intel and AMD in quality and class?
mr_tawan - Saturday, August 12, 2017 - link
TSMC, perhaps?
tuxRoller - Saturday, August 12, 2017 - link
I'm very curious as to how this will perform with SMT enabled and NUMA being exposed.
franzeal - Saturday, August 12, 2017 - link
On page 1: does Ryzen use an AMD implementation of SMT, or Hyper-Threading (i.e. licensed from Intel)? I've been under the impression it's the former, and referring to SMT as Hyper-Threading in this instance is incorrect. Intel's was not the first or the only way to implement SMT.
Oxford Guy - Saturday, August 12, 2017 - link
When you went with 2400-speed RAM to slow down TR, you forgot to make it single-channel.
franzeal - Saturday, August 12, 2017 - link
Error in the Dolphin benchmark description: "Results are given in minutes, where the Wii itself scores 17.53 minutes" should say the results are given in seconds.
franzeal - Saturday, August 12, 2017 - link
On the last page it states: "On the side of the 1920X, users will again see more cores, ECC support, and over double the number of PCIe lanes compared to the Core i7-7820X for $100 difference."
According to the accompanying chart it's a ~$200 difference. Either the chart is wrong or that statement is.
quadi9 - Saturday, August 12, 2017 - link
I picked up an i9-7900X at a local Micro Center for $899 this week, and it is running stable at 4.6 GHz. How well does the Ryzen overclock? My Blender BMW score was 181 seconds; I just opened the file and clicked Render.
blublub - Sunday, August 13, 2017 - link
From what I have read, all TR chips do 3.9 GHz and some even 4-4.1 GHz on all cores.
What are your temps when running all 10 cores @ 4.6 GHz in Prime for 1-2 hrs?
Zingam - Sunday, August 13, 2017 - link
Ian, how about testing mobile CPUs, for games and for office work? Aren't mobile CPUs selling in much larger numbers than desktop ones these days?
I can't find a single benchmark comparing the i5-7300HQ vs i7-7700HQ vs i7-7700K showing the difference in productivity workloads, and not just for rendering pretty pictures but also for more specific tasks such as compiling software.
I would also like to see some sort of comparison of the new generation to all generations up to 10 years back. I'd like to know how much performance has increased since the age of Nehalem. At least from now on there should be a single test to show the relative performance increase over the last few generations. The average user doesn't upgrade their PC every year; the average user maybe upgrades every 5 years, and it is really difficult to find out how much of a performance increase an upgrade would bring.
SanX - Sunday, August 13, 2017 - link
I agree; there should be 5-7 year old processors in the charts.
SanX - Sunday, August 13, 2017 - link
Why does one core of an Apple A10 cost $10, but one core of an Intel 7900X cost 10x more?
oranos - Sunday, August 13, 2017 - link
So it's complete dogsh*t for the segment which is driving the PC market right now: gaming. Got it.
ballsystemlord - Sunday, August 13, 2017 - link
Hey Ian, you've been talking about AnandTech's great database where we can see all the cool info. Well, according to your database, the 6-core Phenom II 1090T is equally powerful compared to the 16-core Threadripper!!!!!!! http://www.anandtech.com/bench/product/1932?vs=146
With those sorts of numbers, why would anyone plan an upgrade?
(And there is also only one metric displayed, strange!)
Not to play the Intel card on you as others do, but this is a serious problem for at least the AMD lineup of processors.
jmelgaard - Monday, August 14, 2017 - link
o.O... I don't know how you derived that conclusion. Do you need a guide on how to read the database?...
BurntMyBacon - Monday, August 14, 2017 - link
For anyone looking for an overall FPS for two-pass encoding, here is your equation (hope my math is correct):
FPS = 2*FPS1*FPS2/(FPS2+FPS1)
No, you can't just average the FPS scores from each pass as the processor will spend more time in the slower pass.
For the x264 encoding test, for example, a few relevant FPS scores end up being:
i9-7900X: 122.56
i7-7820X: 114.37
i7-6900K: 95.26
i7-7740X: 82.74
TR-1950X: 118.13
TR-1950X(g): 117.00
TR-1920X: 111.74
R7-1800X: 100.19
Since two-pass encoding requires both passes to complete, an overall FPS score seems relevant. Alternately, time to completion would present the same information in a different manner, though it would be difficult to extrapolate these results to other encodes without also posting the number of frames encoded.
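In code, with made-up pass speeds just to show the effect (the combined figure is the harmonic mean of the two passes, which is why it sits below the plain average):

    def two_pass_fps(fps1, fps2):
        # Overall FPS across two passes of the same frame count: the
        # harmonic mean, because the slower pass dominates wall time.
        return 2.0 * fps1 * fps2 / (fps1 + fps2)

    print(two_pass_fps(200.0, 80.0))  # 114.3, not the 140.0 plain average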
goldgrenade - Thursday, January 4, 2018 - link
Take all those Intel FPS numbers and multiply them by 0.7, and you have what their chips actually run at without a major security flaw in them.
Let's see, that would be...
i9-7900X: 85.792
i7-7820X: 80.059
i7-6900K: 66.682
i7-7740X: 57.918
And that's at best. It can be up to 50% degradation when rendering or having to do many small file accesses or repeated operations with KAISER.
Gastec - Tuesday, August 15, 2017 - link
I'm having a hard time swallowing the "Threadripper is a consumer focused product" line considering the prices to "consume" it: $550 for the MB, $550 for the TR 1900X ($800 or $1000 for the others is just dreaming), then the RAM. The MB (at least the Asus one) should be $200 less, but I get it: they are trying to squeeze as much as possible from the... consumers. Now don't get me wrong, and I mean no offence to the rich ones among you, but these CPUs are for workstations. WORK, not gamestations. Meaning you would need them to help you make your money, faster.
goldgrenade - Thursday, January 4, 2018 - link
Idk, I use my 1920X for gaming and working, and... really everything. Second-best CPU on the market, with the 1950X beating it out unless you can't get enough cooling.
LOVE this CPU.
rauelius - Thursday, August 17, 2017 - link
I really want to build a 1920x1080 build.
goku4liv - Saturday, August 19, 2017 - link
21/08/2017 INTEL LAUNCH 8 SERIES OF CPU................. AMD DEAD !!
goldgrenade - Thursday, January 4, 2018 - link
HAHAHAHA xD
Hope you invested in AMD despite your comment. Looks like my gut instinct in buying AMD since 2009 was right. Intel chips have a security flaw that, when fixed for series 8 and 9, will remove approximately 30% of performance...
So who has the best chip now? Take 30% off any Intel benchmark against its then-AMD rival and see which one would have been better.
Draven31 - Saturday, August 19, 2017 - link
NUMA appeared in Windows machines in 1998/1999 with the SGI Visual Workstation (which, yes, was a Windows machine) and, IIRC, a workstation from Intergraph around the same time.
halotron - Friday, March 16, 2018 - link
The Chromium Compile benchmark is excellent!
Please do that for the next 2000-series Ryzen/Threadripper as well.
Thanks