104 Comments

  • WoWFishmonger - Thursday, August 17, 2017 - link

    I thought it was "The proof is in the PUDDING"
    All this time I've been eating pudding, looking for proof..... explains why I haven't found any yet. :|

    Nice write-up; it's good to see that even if people won't use this new mode, they do have the choice to enable it.

    Nothing wrong with choice IMO.
  • Ian Cutress - Thursday, August 17, 2017 - link

    Heh, wow. That's a bad typo. Fixed, thanks :)
  • edzieba - Thursday, August 17, 2017 - link

    "I thought it was "The proof is in the PUDDING""

    The phrase is: "the proof of the pudding is in the eating".
  • boozed - Thursday, August 17, 2017 - link

    This
  • Alexvrb - Saturday, August 19, 2017 - link

    I've always heard "the proof is in the pudding". The shorter version's meaning is still pretty apparent. Plus it rolls off the tongue better, so to speak. Mmmm..... pudding.
  • sprockincat - Thursday, August 17, 2017 - link

    While we're on the topic, I think it's "Game Mode as originally envisioned by AMD."
  • NikosD - Thursday, August 17, 2017 - link

    So, you read my comment regarding your mistake in the first TR review - assuming a 16C/16T CPU after enabling Game Mode instead of an 8C/16T - and you corrected that in your new review.

    Now, you only have to repeat your tests with DDR4-3200 and select a different, more "workstation" kind of benchmark suite in order to test these 32-thread monsters - and not PDF opening, of course!

    Mercy !
  • Aisalem - Thursday, August 17, 2017 - link

    For the average person reading most tech sites, more workstation benchmarks don't really make sense.

    What I would like to see is whether you can enable Game Mode and also disable SMT. That would leave the 1950X with 8 cores available to the system, which should still be enough for gaming but might deliver even better results.
  • Zstream - Thursday, August 17, 2017 - link

    For the love of all things... no one buys TR just to play games or open PDFs.
  • Gothmoth - Thursday, August 17, 2017 - link

    well noobs do.

    but i think websites like anandtech should know better.. but well, anand is gone and
    the new generation is obviously no adequate replacement.
  • Aisalem - Thursday, August 17, 2017 - link

    Ok, I'm a noob then. Actually I'm an engineer who does designs in AutoCAD, Creo and SolidWorks, but from time to time I like to play a few games.
    So yes, I'm a NOOB who has some free cash to throw in AMD's direction and would like to know the best settings for playing a game once or twice a week without spending hours on testing them.
  • Gigaplex - Thursday, August 17, 2017 - link

    That makes you a workstation user, not a noob who buys Threadripper just for games.
  • pepoluan - Friday, August 18, 2017 - link

    Why do you want to change to Game Mode anyways? Is playing in Creator Mode not Good Enough for you?
  • Ratman6161 - Friday, August 18, 2017 - link

    Actually, you sound more like the actual target audience for Game Mode. But for your purposes I would think you would want reviews with a heavier emphasis on workstation tasks. Gaming with it is just a sidelight.
  • Greyscend - Saturday, August 19, 2017 - link

    If you really are an engineer you shouldn't need hours to figure out if you can disable SMT while "Game Mode" is active. In fact, you shouldn't even need "hours" to turn on game mode and play a few minutes of your current, favorite game, then turn off SMT (if possible in game mode) and play again. I'm no engineer but I would have to be on Peyote and a bottle of wine to make all of this take longer than 30 minutes. Also, you may find that the bleeding edge isn't the best place for people who need to be told exactly how to configure their own machines.
  • Ratman6161 - Friday, August 18, 2017 - link

    Exactly

    "For the average person reading most of tech sites the more workstation benchmarks doesn't really makes sense."

    Counterpoint: the "more workstation benchmarks" and the tasks they represent are the reason this CPU exists in the first place. If you want a Ryzen and gaming is your primary use, you would be better off with something in the R7 family, since once you disable half the cores you effectively have the equivalent of an 1800X.

    The only reason Game Mode exists is for someone who really needs to do those "more workstation" tasks for work purposes but also wants to use the same machine for games when not doing actual work. IMO, the reviews should really stick even more to workstation use cases, with gaming being an "oh, by the way, you can play games on it too" sort of deal.
  • Gothmoth - Thursday, August 17, 2017 - link

    waiting for anandtech to praise the 8% average performance boost of the 9000-series intel cpu generation.... :-)
  • peevee - Friday, August 18, 2017 - link

    3%
  • peevee - Friday, August 18, 2017 - link

    Been this way for the last 5 generations. Moore's law is over.
  • peevee - Friday, August 18, 2017 - link

    Of course. Work CPUs must be tested at work. Kiddies are fine with i3s.
  • IGTrading - Thursday, August 17, 2017 - link

    It would be nice and very useful to post some power consumption results at the platform level, if we're doing "extra" testing.

    It is very important since we're paying for the motherboard just as much as we pay for a Ryzen 5 or even Ryzen 7 processor.

    And it would allow a correct comparison of the TCO of the X399 platform with that of X299.
  • jordanclock - Thursday, August 17, 2017 - link

    So it looks like AMD should have gone with just disabling SMT for Game Mode. There are way more benefits and it is easier to understand the implications. I haven't seen similar comparisons for Intel in a while; perhaps that could be explored for Skylake-X as well?
  • HStewart - Thursday, August 17, 2017 - link

    I would think disabling SMT would be better, but the reason may lie in the design of the link between the two 8-core dies on the chip.
  • GruenSein - Thursday, August 17, 2017 - link

    I'd really love to see a frame time probability distribution (frame time on the x-axis, rate of occurrence on the y-axis). Especially in cases with very unlikely frames below a 60Hz rate, the difference between TR and TR-GM/1800X seems most apparent. Without the distribution, we will never know whether we are seeing the same distribution just slightly shifted towards lower frame rates, as the slopes of the distribution might be steep. Frames with frame times above a 60Hz rate might be real stutters down to a 30Hz rate, but they might just as well be frames at a 59.7Hz rate. I realize why this threshold was selected, but every threshold is quite arbitrary.
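
    Producing one would be trivial from the raw frame time logs - a minimal sketch, assuming a plain-text log with one frame time in milliseconds per line (the file name is made up):

        // Bin frame times into 1 ms buckets and print each bucket's rate of occurrence.
        #include <fstream>
        #include <iostream>
        #include <map>

        int main() {
            std::ifstream log("frametimes.txt"); // hypothetical per-frame log
            std::map<int, int> bins;             // bucket start (ms) -> frame count
            double ms;
            int total = 0;
            while (log >> ms) { ++bins[static_cast<int>(ms)]; ++total; }
            for (const auto& [bucket, count] : bins)
                std::cout << bucket << "-" << (bucket + 1) << " ms: "
                          << 100.0 * count / total << "% of frames\n";
            // Buckets past 16.7 ms missed a 60Hz refresh; past 33.3 ms, even a 30Hz one.
        }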
  • MrSpadge - Thursday, August 17, 2017 - link

    Does AMD comment on the update? What's their reason for choosing 8C/16T over 16C/16T?

    > One could postulate that Windows could do something similar with the equivalent of hyperthreads.

    They're actually already doing that. Loading half of the logical cores on an SMT machine will result in ~50% average load on every logical core, i.e. all physical cores are only working on 1 thread at a time.

    I know mathematically other schedulings are possible, leading to the same result - but by now I think it's common knowledge that the default Win scheduler works like that. Hence most lightly threaded software is indifferent to SMT. Except games.
  • NetMage - Sunday, August 20, 2017 - link

    Then why did SMT mode show differences from Creator mode in the original review?
  • Dribble - Thursday, August 17, 2017 - link

    No one is ever going to run Game Mode - why buy a really expensive chip and then disable half of it, especially as you have to reboot to do it? Its only use is to make Threadripper look slightly better in reviews. IMO it would be more honest as a reviewer to just run it in Creator Mode all the time.
  • jordanclock - Thursday, August 17, 2017 - link

    The point is compatibility, as mentioned in the article multiple times. AMD is offering this as an option for applications (mainly games) that do not run correctly, if at all, on >16 core CPUs.
  • MrSpadge - Thursday, August 17, 2017 - link

    It's definitely good that reviewers test the game mode and the others, so that we know what to expect from them. If they only tested creator mode the internets would be full of people shouting foul play to bash AMD.
  • deathBOB - Thursday, August 17, 2017 - link

    Ian - why not just enable NUMA and leave SMT on?
  • Ian Cutress - Thursday, August 17, 2017 - link

    The fourth corner of testing :)
  • lelitu - Thursday, August 17, 2017 - link

    I'm looking at setting up something as a home VM host and Linux development workstation, which makes NUMA with SMT the most useful set of benchmarks for my use case.

    I'm particularly interested in TR, because it's brought the price of entry low enough that I can actually consider building such a system.
  • Ratman6161 - Friday, August 18, 2017 - link

    ThreadRipper is big bucks for your purposes, if I'm reading this correctly. For a home lab sort of environment a lot of cores helps, as does a lot of RAM, but you don't necessarily need a boatload of CPU power. For example, in my home ESXi system I've got an FX-8350, which VMware sees as an 8-core CPU. I've also given it 32 GB of DDR3 RAM (purchased when that was cheap). The 990FX motherboards work great for this since they have plenty of PCIe lanes available. In my case, those are used for an ancient ATI video card I happened to have in a drawer, an LSI x8 RAID card and an x4 Intel dual-port gigabit NIC. The RAID card has four 1 TB desktop drives hooked up to it in RAID 5.

    All of the above can be had pretty cheap these days. I'm thinking of upgrading my storage to 4x 2 TB SAS drives - available for $35 each on Amazon... brand new (but old models). The system is running 6 to 7 VMs (Windows Servers mostly) at any given time. But with only two users, I don't run into many cases where more than two VMs are actually doing anything at the same time. Example: a web server and a SQL Server serving up a web app.

    For this environment, having a storage setup where the VMs are not contending for the disks, and also having plenty of RAM, seems to make a lot more difference than the CPU.

    Of course, if you have the bucks and just want to, ThreadRipper would be terrific for this - just way too expensive and overkill for me.
  • lelitu - Monday, August 21, 2017 - link

    That depends a lot on what you want the VMs for. Unfortunately, for the sort of performance testing and development I do, a VM toaster isn't actually good enough. Each VM needs at least 4 uncontended cores and 10 GB of uncontended RAM. Two VMs is the absolute minimum; 3 would be better.

    That's not going to fit into anything less than a Ryzen 7 at minimum, and a Threadripper, *if* it performs as I expect in SMT + NUMA mode, would be almost perfect. Unfortunately, you're right, it's a *lot* of coin to drop on something I don't know will actually do what I need well enough.

    Thus, I wish there were SMT+NUMA workstation and VM benchmarks here.
  • JasonMZW20 - Thursday, August 17, 2017 - link

    Seems like Game Mode should have bumped up the base clocks to 1800X levels, especially for Nvidia cards using a software scheduler that seems to scale with CPU frequency. AMD's hardware scheduler is apparent in overall FPS stability and being mostly CPU agnostic.

    Matching base clocks with 1800X or even 1900X (3.8GHz) might be better on TR for gaming in Game Mode.
  • lordken - Friday, August 18, 2017 - link

    Also, for some weird reason the 1800X is much faster, with higher fps, in Civilization and Tomb Raider?
  • peevee - Thursday, August 17, 2017 - link

    "because the 1920X has fewer cores per CCX, it actually falls behind the 1950X in Game Mode and the 1800X despite having more cores. "

    Sorry, but when 12 cores with twice the memory bandwidth compile slower than 8, you are doing something wrong. Yes, Anandtech, you. I'd seriously investigate. For example, whether the maximum number of threads was set at 24 or something.
  • Ian Cutress - Thursday, August 17, 2017 - link

    When you have a bank of cores that communicate with each other, and replace it with more cores but uneven communication latencies, it makes a difference and can affect code paths.
  • peevee - Friday, August 18, 2017 - link

    Compilation scales even on multi-CPU machines, with much higher communication latencies.
    In general, compilers running in parallel under MSVC (with MSBuild) run in different processes; they don't write into each other's address spaces and so do not need to communicate at all.

    Quit making excuses. You are doing something wrong. I have been doing development for multi-CPU machines, and ON multi-CPU machines, for a very long time. YOU are doing something wrong.
  • peevee - Friday, August 18, 2017 - link

    BTW, when you enable NUMA on TR, does Windows 10 recognize it as one CPU group or 2?
  • gzunk - Saturday, August 19, 2017 - link

    It recognizes it as two NUMA nodes.
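
    You can confirm that from code too - a minimal sketch against the Win32 NUMA API:

        // Report how many NUMA nodes Windows exposes to applications.
        #include <windows.h>
        #include <iostream>

        int main() {
            ULONG highestNode = 0;
            if (GetNumaHighestNodeNumber(&highestNode))
                std::cout << "NUMA nodes: " << (highestNode + 1) << "\n";
            // Should print 2 on TR with NUMA enabled, 1 in the default UMA mode.
        }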
  • Alexey291 - Saturday, September 2, 2017 - link

    They aren't going to do anything.

    All their 'scientific benchmarking' is running the same macro again and again on different hardware setups.

    What you are suggesting requires actual work and thought.
  • Arbie - Thursday, August 17, 2017 - link

    As noted by edzieba, the correct phrase (and I'm sure it has a very British heritage) is "The proof of the pudding is in the eating".

    Another phrase needing repair: "multithreaded tests were almost halved to the 1950X". Was this meant to be something like "multithreaded tests were almost half of those in Creator mode" (?).

    Technically, of course, your articles are really well-done; thanks for all of them.
  • fanofanand - Thursday, August 17, 2017 - link

    Thank you for listening to the readers and re-testing this, Ian!
  • ddriver - Thursday, August 17, 2017 - link

    To sum it up - "game mode" is moronic. It is moronic for amd to push it, and to push TR as a gaming platform, which is clearly neither its peak, nor even its strong point. It is even more moronic for people to spend more than double the money just to have half of the CPU disabled, and still get worse performance than a ryzen chip.

    TR is great for prosumers, and represents a tremendous value and performance at a whole new level of affordability. It will do for games if you are a prosumer who occasionally games, but if you are a gamer it makes zero sense. Having AMD push it as a gaming platform only gives "people" the excuse to whine how bad it is at it.

    Also, I cannot shake the feeling there should be a better way to limit scheduling to half the chip for games without having to disable the rest, so it is still usable to the rest of the system.
  • Gothmoth - Thursday, August 17, 2017 - link

    first coders should do their job.. that is the main problem today. lazy and uncompetent coders.
  • eriohl - Thursday, August 17, 2017 - link

    Of course you could limit thread scheduling at the software level. But it seems to me that there is a perfectly reasonable explanation for why Microsoft and game developers haven't spent much time optimizing for running games on NUMA systems.
  • HomeworldFound - Thursday, August 17, 2017 - link

    You can't call a coder that doesn't anticipate a 16 core 32 thread CPU lazy. The word is incompetent btw. I'd like to see you make a game worth millions of dollars and account for this processor, heck any processor with more than six cores.
  • ddriver - Friday, August 18, 2017 - link

    Why not? We had 16-core CPUs long before W10 was launched, and it has allegedly been heavily updated since then.

    But it is NOT the "coder's" responsibility. Programmers don't get any say; they are paid workers, paid to do as they are told. Not that I don't have the impression that a lot of the code being written is below standard, but the actual decision making is not a product of software programmers but of software architects, and the latter are even more atrocious than the actual programmers.
  • HollyDOL - Friday, August 18, 2017 - link

    Sadly, the reality is much worse... those architects take orders from managers, finance people etc. who, sadly often, don't know more about a computer than where the power button is. And they want products with minimal cost, and 'yesterday was late'.
  • ddriver - Friday, August 18, 2017 - link

    Well, yeah, the higher you go up the ladder, the grosser the incompetence level.
  • BrokenCrayons - Thursday, August 17, 2017 - link

    Interesting test results. I think they demonstrate pretty clearly why Threadripper isn't really a very good option for pure gaming workloads. The big takeaway is that more affordable processors with lower TDPs offer comparable or better performance, without adding settings that few people will realize exist and even fewer will fiddle with enough to determine which ones actually improve performance in their particular software library. The Ryzen 7 series is probably a much better overall choice than TR right now if you don't have specific tasks that require all those cores and threads.
  • Gothmoth - Thursday, August 17, 2017 - link

    "I think they demonstrate pretty clearly why Threadripper isn't really a very good option for pure gaming workloads."

    wow.... what a surprise.
    thanks for pointing that out mr. obvious. :-)
  • Gigaplex - Thursday, August 17, 2017 - link

    These are single-GPU tests. Threadripper has enough PCIe lanes to do large multi-GPU systems. More GPU usually trumps better CPU in the high-end gaming scene, especially at 4K resolution.
  • BrokenCrayons - Friday, August 18, 2017 - link

    Yes, but multi-GPU setups are generally not used for gaming-centric operations. There's been tacit acknowledgement of this as the state of things by NV since the release of the 10x0 series. Features like Crossfire and SLI support are barely a bullet point in marketing materials these days, with good reason, since game support is waning as well and DX12 is positioned to pretty much nail the multi-GPU coffin shut entirely, except in corner cases where it MIGHT be possible to leverage an iGPU alongside a dGPU if a game engine developer bothers to invest time into banging out code to support it. That places TR's generous PCIe lane count and its potential multi-GPU usage in the domain of professional workloads that need GPU compute power.
  • Bullwinkle J Moose - Thursday, August 17, 2017 - link

    I agree with ddriver

    We should not have to fiddle with the settings and reboot to game mode on these things

    Windows should handle the hardware seamlessly in the background for whatever end use we put these systems to

    The problem is getting Microsoft to let the end users use the full potential of our hardware

    If the framework for the hardware is not fully implemented in the O.S., every "FIX" looks a bit like the one AMD is using here

    I think gaming on anything over 4 cores might require a "proper" update from Microsoft working with the hardware manufacturers

    Sometimes it might be nice to use the full potential of the systems we have instead of Microsoft deciding that all of our problems can be fixed with another cloud service
  • Gothmoth - Thursday, August 17, 2017 - link

    but but.. what about linux.

    i mean linux is the savior, not?
    it has not won a 2.2% market share on the desktop for nothing.

    sarcasm off....
  • HomeworldFound - Thursday, August 17, 2017 - link

    What can we expect Microsoft to do prior to a product like this launching? If a processor operates in a manner that requires the operating system to be adjusted, the company selling it needs to approach Microsoft and provide an implementation, and it should be ready for launch. If that isn't possible, then why manufacture something that doesn't work correctly and requires hacky fixes to run?
  • Gigaplex - Thursday, August 17, 2017 - link

    How is Windows supposed to know when a specific app will run better with SMT enabled/disabled, NUMA, or even settings like SLI/Crossfire and PCIe lane distribution between peripheral cards? If your answer is app profiles based on benchmark testing, there's no way Microsoft will do that for all the near infinite configurations of hardware against all the Windows software out there. They've cut back on their own testing and fired most of their testing team. It's mostly customer beta testing instead.
  • peevee - Friday, August 18, 2017 - link

    Windows does not know whether it is a critical gaming thread or not. Setting thread affinity is not rocket science - unless you are some Java "programmer".
  • Spoelie - Friday, August 18, 2017 - link

    And anyone not writing directly in assembly should be shot on sight, right?
  • peevee - Friday, August 18, 2017 - link

    You don't need to write in assembly to set thread affinities.
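
    A minimal sketch of all it takes (which logical CPU the critical thread should live on is up to the game):

        #include <windows.h>

        int main() {
            // Each bit in the mask selects one logical CPU; bit 0 = the first one.
            SetThreadAffinityMask(GetCurrentThread(), 1ull << 0);
            // ... run the latency-critical game loop on this pinned thread ...
        }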
  • Glock24 - Thursday, August 17, 2017 - link

    Seems like the 1800X is a better all-around CPU. If you really need and can use more than 8C/16T, then get TR.

    For mixed workloads of gaming and productivity the 1800X or any of the smaller siblings is a better choice.
  • msroadkill612 - Friday, August 18, 2017 - link

    The decision watershed is PCIe 3.0 lanes, imo. Otherwise, Ryzen is a mighty advance in the mainstream sweet spot over ~6 months ago.

    OTOH, I see lane-hungry NVMe ports as a boon for expanding a PC's abilities later. The premium for an 8-core TR & mobo over Ryzen seems cost-justifiable for the expandability.
  • Luckz - Friday, August 18, 2017 - link

    It seems that the 1800X has the NVIDIA driver spend less time doing weird stuff.
  • franzeal - Thursday, August 17, 2017 - link

    If you're going to reference Intel in your benchmark summaries (Rocket League is one place I noticed it), either include them or don't forget to edit them out of your copy/paste job.
  • Luckz - Friday, August 18, 2017 - link

    WCCFTech-Level writing, eh.
  • franzeal - Thursday, August 17, 2017 - link

    Again, as with the original article, the description for the Dolphin render benchmark incorrectly states that the results are shown in minutes.
  • silverblue - Friday, August 18, 2017 - link

    I'd like to see what happens when you manually set a 2+2+2+2 core configuration instead of enabling Game Mode. From what I've read, Game Mode destroys memory bandwidth but yields better latency; however, that doesn't answer whether Zen cores can really benefit from the extra bandwidth that a quad-channel memory interface affords.

    Alternatively, just clock the 1950X and 1920X identically, and see if the 1920X's per-core performance is any higher.
  • KAlmquist - Friday, August 18, 2017 - link

    “One of the interesting data points in our test is the Compile. Because this test requires a lot of cross-core communication and DRAM, we get an interesting metric where the 1950X still comes out on top due to the core counts, but because the 1920X has fewer cores per CCX, it actually falls behind the 1950X in Game Mode and the 1800X despite having more cores.”

    Generally speaking, compilers are single-threaded, so the parallelism in a software build comes from compiling multiple source files in parallel, meaning the cross-core communication is minimal. I have no idea what MSVC is doing here; can you explain? In any case, while I appreciate you including a software development benchmark, the one you've chosen would seem to provide no useful information to anyone who doesn't use MSVC.
  • peevee - Friday, August 18, 2017 - link

    I use MSVC and it scales pretty well if you are using it right. They are doing something wrong.
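
    For reference, "using it right" mostly comes down to two switches - /m lets MSBuild build projects in parallel, and /MP lets a single cl.exe instance compile multiple files in parallel (the solution and file names below are made up):

        c:\>msbuild MySolution.sln /m:16      (up to 16 projects built in parallel)
        c:\>cl /MP16 /c a.cpp b.cpp c.cpp     (one cl.exe compiling up to 16 files in parallel)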
  • KAlmquist - Saturday, August 19, 2017 - link

    Thanks. It makes sense that MSVC would scale about as well as any other build environment.

    Ars Technica also benchmarked a Chromium build, which I think uses MSVC but uses the Google tools GN and Ninja to manage the build. They get:

    Ryzen 1800X (8 cores) - 9.8 builds/day
    Threadripper 1920X (12 cores) - 16.7 builds/day
    Threadripper 1950X (16 cores) - 18.6 builds/day

    Very good speedup with the 1920X over the 1800X, but not so much going from the 1920X to the 1950X. Perhaps the benchmark is dependent on memory bandwidth and L3 cache.
  • Timur Born - Friday, August 18, 2017 - link

    Thanks for the tests!

    I would have liked to see a combination of both being tested: Game Mode to switch off the second die and SMT disabled. That way 4 full physical cores with low latency memory access would have run the games.

    Hopefully modern titles don't benefit from this, but some more "legacy" ones might like this setup even more.
  • Timur Born - Friday, August 18, 2017 - link

    Sorry, I meant 8 cores, aka 8/8 cores mode.
  • mat9v - Friday, August 18, 2017 - link

    I wish someone had an inclination to test Creator Mode but with games pinned to one module. It is essentially NUMA mode but with all cores active.
    Or just enable the SMT that is disabled in Game Mode - we would then actually get a Ryzen 1800X CPU that overclocks well, but with possibly higher performance due to all system tasks running on the other module (if we configure the system that way) and unencumbered access to more PCIe lanes.
  • peevee - Friday, August 18, 2017 - link

    Yes, that would be interesting.
    c:\>start /REALTIME /NODE 0 /AFFINITY 5555 your_game_here.exe
  • mat9v - Friday, August 18, 2017 - link

    I think I would start it on node 1 if anything, since system tasks would by default be running on node 0.
    Mask 5555? Wouldn't it be AAAA for 8 cores (8 threads), and FFFF for 8 cores (16 threads)?
  • peevee - Friday, August 18, 2017 - link

    The mask 5555 assumes that SMT is enabled. Otherwise it should be FF.

    When SMT is enabled, 5555 and AAAA will allocate threads to the same cores, just different logical CPUs.
    Where system threads will run is system dependent; nothing prevents Windows from running them on node 1. /NODE 0 allows the command to run whether or not you actually have multiple NUMA nodes.

    With /REALTIME, Windows will have a hard time allocating anything on those logical CPUs, but it can use the same cores via the other logical CPUs, so yes, technically it will affect results. But unless you load it with something, the difference should not be significant - things like cache and memory bus contention are more important anyway and don't care which cores you run on.
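
    Spelled out in binary for one 8C/16T die (assuming Windows enumerates SMT siblings adjacently, which is what makes 5555 and AAAA land on the same cores):

        0x5555 = 0101 0101 0101 0101b -> logical CPUs 0,2,4,...,14: one thread on each of the 8 cores
        0xAAAA = 1010 1010 1010 1010b -> logical CPUs 1,3,5,...,15: the SMT siblings of those same cores
        0xFF   =           1111 1111b -> all 8 cores when SMT is disabled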
  • Lieutenant Tofu - Friday, August 18, 2017 - link

    "... we get an interesting metric where the 1950X still comes out on top due to the core counts, but because the 1920X has fewer cores per CCX, it actually falls behind the 1950X in Game Mode and the 1800X despite having more cores. "

    Would you mind elaborating on this? How does the proportion of cores per CCX affect performance?
  • JasonMZW20 - Sunday, August 20, 2017 - link

    The only thing I can think of is CCX cache locality. Given a choice, you want more cores per CCX to keep data on that CCX rather than using cross-communication between CCXes through L2/L3. Once you have to communicate with the other CCX, you automatically incur a higher average latency penalty, which in some cases, is also a performance penalty (esp. if data keeps moving between the two CCXes).
  • rhoades-brown - Friday, August 18, 2017 - link

    This gaming mode intrigues me greatly - the article states that the PCIe lanes and memory controller are still enabled, but the cores are turned off, as shown in this diagram:
    http://images.anandtech.com/doci/11697/kevin_lensi...

    If these are two complete processors on one package (as the diagrams and photos show), what impact does having gaming mode enabled have on a PCIe device connected to the PCIe controller on the 'inactive' side? The NUMA memory latency seems to be about 1.35x; surely this must affect the PCIe devices too. Further, how much bandwidth is there between the two dies? Opteron processors use HyperTransport for communication; do these do the same?

    I work in the server world and am used to NUMA systems - for two separate processor packages in a 2-socket system, cross-node memory access time is normally 1.6x that of local memory access. For ESXi hosts, we also have particular PCIe slots that we place hardware in, to ensure that the different controllers are spread between PCIe controllers, ensuring the highest level of availability in case of hardware issues and peak performance (we are talking HBAs, Ethernet adapters, CNAs here). Although hardware reliability is not a problem in the same way in a Threadripper environment, performance could well be.

    I am intrigued to understand how this works in practice. I am considering building one of these systems out for my own home server environment - I have yet to see any virtualisation benchmarks.
  • versesuvius - Friday, August 18, 2017 - link

    So, what is a "Game"? Uses DirectX? Makes people act stupidly? Is not capable of using what there is? Makes available hardware a hindrance to smooth computing? Looks like a lot of other apps (that are not "Game") can benefit from this "Gaming Mode".
  • msroadkill612 - Friday, August 18, 2017 - link

    A shame no Vega GPU in the mix :(

    It may have revealed interesting synergies between sibling Ryzen & Vega processors as a bonus.
  • BrokenCrayons - Friday, August 18, 2017 - link

    The only interesting synergy you'd get from a Threadripper + Vega setup is an absurdly high electrical demand and an angry power supply. Nothing makes less sense than throwing a 180W CPU plus a 295W GPU at a job that can be done just as well with a 95W CPU and a 180W GPU in all but a few many-threaded workloads (never mind the cost savings on the CPU from buying a Ryzen 7 or a Core i7).
  • versesuvius - Friday, August 18, 2017 - link

    I am not sure if I am getting it right, but apparently if the L3 cache on the first Zen die is full and a core has to go to the second die's L3 cache, there is an increase in latency. But if the second die is power gated and does not take any calls, the increase in latency is reduced. Is it logical to say that the first die has to clear it with the second die before it accesses the second die's cache, and that if the second die is out, that check does not take place and so latency is reduced? Moving on, if the data is not in the second die's cache, then the first die has to go to DRAM, access to which supposedly does not need clearance from the second die. Or does it always need to check first with the second die and then access even the DRAM?
  • BlackenedPies - Friday, August 18, 2017 - link

    Would Threadripper be bottlenecked by dual-channel RAM due to uneven memory access between dies? Is the optimal 2-DIMM setup one channel per die, or both on one die?
  • Fisko - Saturday, August 19, 2017 - link

    Anyone working on a daily basis just viewing and commenting on PDFs won't use Acrobat DC. An exception might be using OCR on PDFs. PDF-XChange Viewer uses more threads and opens PDF files much faster than Adobe DC. I regularly open CAD PDF files of 25 to 80 MB, and the difference is enormous.
  • Ian Cutress - Saturday, August 19, 2017 - link

    Visit https://myhacker.net For Latest Hacking & security updates.
  • Glock24 - Saturday, August 19, 2017 - link

    Has your account been hacked, Ian? This seems like an out-of-place comment from a spam bot.
  • zodiacfml - Saturday, August 19, 2017 - link

    Useless. Why cripple an expensive chip? It was already mentioned that the value of high core counts is mega-tasking, like rendering while gaming. I wouldn't be able to tell a performance increase of 10% or less, but I would notice it when multi-tasking.
  • Greyscend - Saturday, August 19, 2017 - link

    To summarize, I can pay $1000 for a new and crazy powerful CPU that gives me the option to turn $500 of it off so that I can sporadically gain performance in games at a level that is mostly equal to or below the level of standard testing deviations? Worth.
  • Greyscend - Saturday, August 19, 2017 - link

    I want competition in the CPU market so I feel like AMD should consider redistributing funds from what can only be described as the "Gimmicks Department" back to the actual processor R&D department. Although, the Gimmicks Department is getting pretty good at UI development. Look at the software they churned out that turns $500 of your CPU off! It's beautiful! They also seem to be getting bolder since they asked Anandtech to effectively re-write an entire article in order to more succinctly point out how consumers can effectively disable half of the CPU cores they paid for with almost no discernible real world effect. Pretty impressive considering the number of consumers who seem genuinely interested in this type of feature.
  • Oxford Guy - Sunday, August 20, 2017 - link

    "research is paramount"

    Yeah, like the common knowledge that Zen reviews shouldn't be handicapped by only testing them with slow RAM.

    Joel Hruska at ExtremeTech tested Ryzen on day 1 with 3200 speed RAM. Tom's tested the latest batch of consumer Zen (Ryzen 3) with 3200.

    And yet... this site has apparently just discovered why it's so important to not kneecap Zen with slow RAM — as if we're using ECC for enterprise stuff all the time.
  • Gastec - Sunday, August 20, 2017 - link

    Why such abysmal performance in Rise of the Tomb Raider and GTA V for the Sapphire Nitro R9 and RX 480 with Threadripper CPUs?
  • Oxford Guy - Thursday, August 24, 2017 - link

    I can't say I'm an expert on this subject, but it looks like the tested games are generally a list of some of the poorer performers on AMD: Tomb Raider, GTA V, etc.

    Dirt 4, by contrast, shows Vega 56 beating a 1080 Ti at Tech Report.
  • dwade123 - Sunday, August 20, 2017 - link

    Threadripper is a mess. There's always a compromise with AMD.
  • mapesdhs - Sunday, August 20, 2017 - link

    Because of course X299 doesn't involve aaany compromise at all. :D
  • zodiacfml - Monday, August 21, 2017 - link

    I agree with the conclusion: just disable SMT and be done with it. With 16 cores, it is overkill for all desktop tasks except full-tilt rendering/encoding.
  • MrRuckus - Tuesday, August 22, 2017 - link

    Now overclock it with half the cores enabled and do it again?

    That's the only benefit I see in going to TR: the top 5% of Ryzen dies go into Threadripper chips, so it's basically the best-binned cores. What you can reach with half the cores overclocked would be interesting to see. How much better are the top-binned cores compared to, say, an 1800X? [H]ardOCP did an overclocking article on TR, but not with half the cores disabled. They saw better performance by underclocking because of the heat and so many cores. Cut the cores in half and see what it'll do?
  • druuzil - Tuesday, March 6, 2018 - link

    This was quite useful to me. I wasn't aware of the Ryzen Master software prior to this article, and I was having SLI scaling issues/poor performance in gaming (not horrible, but not what I would have expected from a $700 CPU, the 1920X). Using Gaming Mode has helped tremendously. My 3DMark Fire Strike score went up about 4500 points simply by engaging Gaming Mode, and a bit more after a modest overclock. The ability to swap back and forth is pretty handy, as I can re-enable the full set of cores when I want to encode a video, for example, with the push of a button (and a quick reboot).
