172 Comments

  • shabby - Monday, June 10, 2019 - link

    Wow, a 105 watt 16 core CPU... does Intel even have an 8 core that realistically runs at that wattage?
  • fadsarmy - Monday, June 10, 2019 - link

    The value of TDP, or thermal design power, is not a measure of power consumption.
  • arakan94 - Monday, June 10, 2019 - link

    It is for AMD (they set it differently from Intel, whose TDP is defined at base clocks).
  • Santoval - Tuesday, June 11, 2019 - link

    They set it differently in what way? Surely AMD's "TDP" is not that of boost clocks. It is *very* low for that. So what are they doing, do they calculate a mean value between something like all-core base clock and single or dual core boost clock?
  • arakan94 - Tuesday, June 11, 2019 - link

    It's effectively a power limit for AMD (not instantaneous, but averaged over a period of time), meaning that if you use a cooler rated for 105W on a 105W TDP CPU, you won't have any problem with throttling.

    AMD's TDP is well representative of the real power draw. But it is calculated using the cooling capacity of the stock cooler, so you can get better/worse results with a different cooler.

    Here is more detailed explanation: https://www.anandtech.com/show/13124/the-amd-threa...
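
    For reference, the linked article describes AMD's TDP as a cooler-defined number: TDP = (tCase - tAmbient) / θca, where θca is the heatsink's thermal resistance in °C/W. A minimal sketch of that relationship in Python (the temperature and θca values below are illustrative assumptions in the spirit of that article, not official specs):

    def amd_tdp_watts(t_case_max_c, t_ambient_c, theta_ca_c_per_w):
        """AMD-style TDP: the heat a cooler with thermal resistance theta_ca
        must move to keep the lid at t_case_max in a t_ambient environment."""
        return (t_case_max_c - t_ambient_c) / theta_ca_c_per_w

    # Illustrative (assumed) numbers: a ~62C lid limit, 42C in-case ambient,
    # and a 0.189 C/W cooler land close to the advertised 105W figure.
    print(round(amd_tdp_watts(61.8, 42.0, 0.189), 1))  # -> 104.8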
  • Opencg - Thursday, June 13, 2019 - link

    You all are crazy. Power draw depends heavily on workload. No way a 16 core 32 thread cpu only draws 105w with all cores stressed.

    If you want to fanboy out, then measure power draw vs. actual performance in a specific application.

    But yeah, it's a dynamic system and cooling also heavily affects it. Better cooling, less voltage. Less voltage, less power. Less power, less heat. Less heat, better voltage. You can see why some random TDP number means nothing. If you really want to push this or a 9900K, get a cooler rated for 300W+. Water is best.
  • Korguz - Thursday, June 13, 2019 - link

    or just wait till next month and see what the reviews say.... and not bother arguing about how much TDP each side uses.....
  • mode_13h - Friday, June 21, 2019 - link

    > No way a 16 core 32 thread cpu only draws 105w with all cores stressed.

    Sure it can. It really depends on clock speed, though. If they clock down the cores enough, you can easily fit 8 cores / 16 threads in 105 W.
  • mode_13h - Friday, June 21, 2019 - link

    Er, I meant 16 cores / 32 threads.
  • kinerry - Thursday, July 18, 2019 - link


    you must be new

    AMD gives an honest rating at peak sustained power draw, whereas intel uses average power usage while idle

    I know it's confusing when you don't have Intel lying and ripping you off all the time, but it is what it is
  • Irata - Tuesday, June 11, 2019 - link

    Simplified: Different as in Intel TDP being power consumption at base clock, whereas for AMD it is not. AMD's TDP may not represent the worst possible case but it is close.

    But it's really easy to see by checking CPU reviews - the max power consumption for AMD CPUs is usually rather close to their official TDP, whereas for high-turbo Intel CPUs it is very far from their TDP.

    I do hope AMD does not follow Intel's track here.
  • AshlayW - Tuesday, June 11, 2019 - link

    In my testing and experience: the 1700, 2700 and 1600, 2600 are all very close to 65W at stock, measured in software and at the wall. I used to run many PCs for WCG and was obviously very interested in power use. The "X" parts use a bit over the "TDP"; for example, the two 2700Xs I've owned used 110-120W in heavy multi-threaded work, which is over the 105W TDP but still very close. The 2600X was 95-100W. This is fair, as these parts are likely using the extra headroom to boost a bit more aggressively, but you can easily get a cooler rated for this output and use it no problem.

    Yes, Intel on the other hand - 95W for the 9900K is complete malarkey; this CPU is ~150W when it is boosting, from what I have read.
  • schujj07 - Tuesday, June 11, 2019 - link

    Under a torture loop the 9900K uses more than 200W, and at a 5GHz clock it uses just short of 250W.
    https://www.tomshardware.com/reviews/intel-core-i9...
  • Oxford Guy - Tuesday, June 11, 2019 - link

    5 GHz doesn't count for TDP at all unless it comes that way out of the box.
  • Santoval - Wednesday, June 12, 2019 - link

    It actually does. The 9900K officially has a single/dual-core 5 GHz boost clock, "out of the box". Now Intel will release a "special"* version of the 9900K capable of an all-core 5 GHz boost clock. Just imagine the power draw of that thing and what cooler it would require to sustain that clock for more than a few seconds at a time... Even a decent AIO watercooling kit might be insufficient.
    * "special" as in "a poor attempt to respond to AMD by heavily overclocking and overvoltaging their existing CPUs because Ice Lake CPUs for desktop are nowhere close to being ready."
  • Oxford Guy - Friday, June 14, 2019 - link

    Santoval, read what I wrote again, focusing on the word unless.
  • Gastec - Wednesday, June 19, 2019 - link

    Is Intel losing money or, better said, not making enough profit? Or are they holding out the release of the Ice Lakes to milk Coffee Lake as much as possible, because "it just works" :)
  • Spoelie - Wednesday, June 12, 2019 - link

    The X versions actually going slightly over the TDP is an intended feature of XFR/Precision Boost 2 - you can keep it within TDP from the bios.
  • Opencg - Thursday, June 13, 2019 - link

    Yeah, ~150 watts is about what you need to fully bench a 9700K or 9900K.
  • WaltC - Monday, June 17, 2019 - link

    Yes, my current R5 1600 6c/12t x470 system (MSI GPC) overclocks 500Mhz via multiplier setting (to 3.7GHz) @ "auto" stock voltage (~ 1.2v) ROOB and even has a TDP setting in the bios--I can't increase the TDP beyond 65W even if I wanted to...;) But...I can lower it--which is something I have no need to do. AMD shipped it with a 95W air cooling fan--more than adequate. Looking forward to x570 & a R3k in a couple of weeks--I hope! I imagine demand will be fierce--but I'm not paying over MSRP--well, a couple of dollars over, maybe...;)
  • mrsnowman - Sunday, June 16, 2019 - link

    It seems appropriate to have an anandtech article explain this:
    https://www.anandtech.com/show/13544/why-intel-pro...
    You're wrong in disagreeing with the post in that way though. "It is for AMD" can't really be interpreted in any other way than you saying it's power consumption.
  • xrror - Monday, June 10, 2019 - link

    I'm still angry that Intel made TDP a marketing joke =(
  • imaheadcase - Tuesday, June 11, 2019 - link

    It is a joke for people buying high-end CPUs to care about it. No one builds a high-end system wondering how much power it is drawing while playing games or whatever they do. lol
  • Xyler94 - Friday, June 14, 2019 - link

    It does matter for cooling purposes though.
  • Gastec - Wednesday, June 19, 2019 - link

    Only a newbie in computing would write that. Or a "gamer" with more money than common sense.
  • shabby - Monday, June 10, 2019 - link

    Tell me more...
  • xrror - Monday, June 10, 2019 - link

    TDP used to mean TOTAL design power. As in the max the chip could theoretically pull from the socket, if you could light up every transistor on the die at once. It was actually really hard to ever hit this number in real use, but if you built your power delivery and cooling to handle TDPmax you were covered.

    Intel later "clarified" that the T meant "Typical" rather than "Total".

    Thus, TDP went from being a useful, real engineering number to just another whiff out of some marketer's ass.

    R.I.P. real TDP
  • xrror - Monday, June 10, 2019 - link

    My bitterness is that they should have left TDP the heck alone, and then defined a NEW measurement like "Nominal Design Power" or ANYTHING else rather than fubaring TDP as a marketing item.

    But maybe the blame isn't all Intel's, probably some sales exec at an OEM started the trend and it snowballed. Who knows.
  • bubblyboo - Monday, June 10, 2019 - link

    I recall Intel did SDP for their early Y-series processors.
    https://en.wikipedia.org/wiki/Thermal_design_power...
  • xrror - Monday, June 10, 2019 - link

    Lastly, as far as I know (tm) AMD still adheres to the older Total Design Power, so when they say 105w they mean it, unlike Intel.

    I give AMD credit for that, because for Joe/Jane Blow that puts AMD at a disadvantage for being honest.
  • Integr8d - Tuesday, August 13, 2019 - link

    All good. Mouthbreathers won't know the difference. But where AMD plans to make money (enterprise), those ppl know the difference ;)
  • Darkstone - Tuesday, June 11, 2019 - link

    TDP means neither total nor typical design power. It means Thermal Design Power. The TDP indicates requirements for the cooling, not for the power delivery circuit. In general a cooler with a 45W TDP is designed to dissipate 45W of heat, but of course this doesn't mean you can't draw 50W for a short time.

    https://en.wikipedia.org/wiki/Thermal_design_power
  • ats - Wednesday, June 12, 2019 - link

    TDP has always meant Thermal Design Power. The T has never meant Typical or Total. Nor has TDP ever had anything to do with continuous or instantaneous power draw, they are completely orthogonal.
  • Gastec - Wednesday, June 19, 2019 - link

    WORD!
  • Byte - Tuesday, June 11, 2019 - link

    how...how did AMD pull this out of their pants and into intels uranus?
  • Gondalf - Tuesday, June 11, 2019 - link

    You know....right now 8 core Ryzens are rated 105W but draw 145W under load.
    So these are marketing numbers. Bet this 16 core thing will be 180W under heavy tasks.
  • behrouz - Tuesday, June 11, 2019 - link

    That's thermal design power. If a Ryzen is rated at 105W, it doesn't mean it consumes 105W; rather, it means that if you buy a CPU cooler rated at 95W, then the Ryzen will throttle.
  • FMinus - Tuesday, June 11, 2019 - link

    Do they? At stock? I doubt it.
  • Irata - Tuesday, June 11, 2019 - link

    At least not according to Anandtech's (or Tom's) Ryzen 2700x review.

    Tom had 104.7 W total package power consumption for the 2700x in their torture loops (vs. 159.5 for the i7-8700k).

    Anand had 106.38W for the 2700x (vs. 122.29 for the i7-8700) total package power consumption under load.

    Both sites may measure this differently - not sure how THG does it, but AT uses the processor's internal registers to estimate power consumption.
  • RiCHeeGee - Tuesday, June 11, 2019 - link

    My 2700 (non-x) @ 4.2ghz 1.4v vcore and 1.1v SOC draws 135w total with 120w on CPU and 15w on SOC. This is probably higher than usual due to it being a lower binned chip that requires more voltage to operate at these clocks.
  • Oxford Guy - Tuesday, June 11, 2019 - link

    The Stilt said better binning typically means lower leakage which means, counterintuitively, that the chip needs MORE voltage for higher clocks.

    People constantly make the mistake of assuming that good binning (low leakage) translates into less operating voltage needed at higher clocks. No, that would be higher-leakage parts. He said that under air and regular water (non-chilled) conditions it's better to have the lower leakage.
  • Oxford Guy - Tuesday, June 11, 2019 - link

    ASUS apparently capitalized on this mistaken assumption in its Crosshair board for AM3+, where it would underreport the amount of voltage being used, making people assume that they got luckier with the silicon lottery and/or that the board is just so much better.

    In reality, you were more likely to degrade your chip by putting more voltage into it than was safe.
  • Oxford Guy - Tuesday, June 11, 2019 - link

    Another reason for the assumption is that poorer-quality boards tend to have looser power regulation so they need higher operating voltages to stabilize due to the wilder swings/droop.

    Setting the UD3P 970 board, for example, to Medium LLC, resulted in much more uniform voltage than anything higher or lower. Some boards didn't even have LLC.
  • Gastec - Wednesday, June 19, 2019 - link

    I guess that's a normal power draw for a 1.4vcore, 4.2 GHz overclock.
  • ksec - Tuesday, June 11, 2019 - link

    I have yet to see a single report that resembles those numbers. So unless you can prove it, you are effectively spreading fake news.
  • Gastec - Wednesday, June 19, 2019 - link

    I have no need for 16 cores, but I'll take an 8-core CPU that draws 70-80 W for $330.
  • MDD1963 - Wednesday, June 12, 2019 - link

    Intel had a 16 core Atom C3958 (SOC) at 31 watts TDP ...14 months ago...; does that count?
  • Zok - Monday, June 10, 2019 - link

    Time for an upgrade from my i7 7700K! I'm torn between the 3900X and 3950X though. Will wait to see which one results in better OC. Fewer cores vs better binning.
  • 12345 - Monday, June 10, 2019 - link

    I have a feeling PBO will be amazing this generation. 2nd gen is so limited by how fragile 12nm is.
  • anonomouse - Monday, June 10, 2019 - link

    There are plenty of other questions about this thing, the biggest of which I see is how the performance scaling on multithreaded workloads looks, given that the bandwidth is still limited to two channels of DDR4 and frequencies have definitely not doubled here to match the core scaling.
  • arakan94 - Monday, June 10, 2019 - link

    depends on your tasks a lot - only memory intensive tasks will be affected.
  • xrror - Monday, June 10, 2019 - link

    Hopefully AMD would have enough sense to not release it if it was stupidly memory choked.

    But yeah, it puts more pressure on hoping that the rumors of Ryzen 3xxx having the ability to hit DDR4-5000 speeds are true... but it worries me that it might (probably) take a 1/2 multiplier on the Infinity Fabric to do that. Plus on a 32/64virtual processor you'd need those speeds on >32GB minimum. Yikes

    Really looking forward to when the reviewers can get their hands on these and see if memory bandwidth is a concern.
  • xrror - Monday, June 10, 2019 - link

    edit: 16:32virtual
  • Hul8 - Monday, June 10, 2019 - link

    Level1Techs demonstrated that the 32-core Threadripper 2990WX was mostly not memory bandwidth starved with 4 memory channels. Instead the widely reported problems at launch were caused by how (badly) Windows handled situations where some cores didn't have direct memory access. AMD alleged it wasn't the scheduler. Still, something in Windows core and memory allocation / migration misbehaved badly, juggling threads between dies when there was no need.

    Compared to the 2990WX, the 3950X has half the cores and half the memory channels, but it can run both at higher frequencies and the cores have higher IPC. Apart from the most memory-hungry (scientific calculation) applications, it's doubtful memory bandwidth will be a problem.
  • rems - Monday, June 10, 2019 - link

    "The entire lineup will be available on shelves worldwide July 7th"
  • Slash3 - Monday, June 10, 2019 - link

    "Except for that one."
  • godrilla - Monday, June 10, 2019 - link

    Recap: the 16-core Ryzen 3950X is coming in September; everything from AMD's Computex event is launching on 7/7/19, including Navi.
  • rems - Monday, June 10, 2019 - link

    Yeah I completely spaced out the big letters behind her on the product's page :s
  • Ashinjuka - Monday, June 10, 2019 - link

    Squreface!

    Once you see it you will never unsee it.
  • mkaibear - Tuesday, June 11, 2019 - link

    Thanks. Now i can't unsee it.

    *lol*
  • xrror - Monday, June 10, 2019 - link

    I ***really*** hope AMD isn't playing games with that core boost speed. Hopefully that 4.7GHz speed will be good for at least 2 cores (maybe 1 per core "chiplet") and then the full-load/additional-core degradation is graceful enough to give, say, 4 cores at 4.5, 6 at 4.4, or something like that.

    I will be disappointed if that 4.7 is only for 1 core if only since it means that the newer process nodes still suck for scaling.
  • godrilla - Monday, June 10, 2019 - link

    4.7 x 1.15 is 5.405 though.
  • xrror - Monday, June 10, 2019 - link

    where do you get the 1.15 mult from? just curious
  • Hul8 - Monday, June 10, 2019 - link

    AMD is claiming +15% IPC.

    Performance = frequency x IPC.
  • Hul8 - Monday, June 10, 2019 - link

    That 4.7 x 1.15 implies that 4.7 GHz on Ryzen 3rd Gen is equivalent to a theoretical frequency of 5.4 GHz on Ryzen 2nd Gen.
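
    In other words, treating per-core throughput as frequency x IPC, the claimed +15% IPC makes 4.7 GHz on Zen 2 roughly comparable to 5.4 GHz on Zen+. A back-of-the-envelope sketch (ignoring memory, workload and scaling effects):

    zen_plus_ipc = 1.00      # normalize Ryzen 2nd gen IPC to 1.0
    zen2_ipc = 1.15          # AMD's claimed +15% IPC uplift
    zen2_boost_ghz = 4.7

    # Zen+ frequency that would deliver the same per-core throughput
    equivalent_ghz = zen2_boost_ghz * zen2_ipc / zen_plus_ipc
    print(f"{equivalent_ghz:.3f} GHz")  # -> 5.405 GHz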
  • xrror - Tuesday, June 11, 2019 - link

    MEH. Since my main rigs are still socket 1366 Xeons, then let's claim (WAG) numbers of +170% Intel generational advantage.

    But hey, since I run my ancient garbage pre-Spectre/Meltdown Gulftown / Westmere-EP at 4.4GHz w/o the remediation patches...

    I have no idea where I stand.

    But since Intel already disowns me (socket 1366/X58 has no official Windows 10 support) and my system still gets the cold shoulder (the micro-op cache prefetch from the early exploits came with Sandy Bridge and later, not Westmere), when I migrate those rigs from Win 8.1 to 10 they will eat that "punishment/hit" even though their arch pre-dates the speculative permission exploits.

    I whine, but my niche self is nothing compared to all of the Sandy Bridge systems getting the shaft.

    my last two builds were Ryzen 1600 and 1600x

    I know that the only reason that AMD's pre-fetcher hasn't been corn-holed as bad as Intel's is due to popularity, but at least I can hope, with AMD being the underdog still, that they'll at least give me a compelling upgrade option sooner than the 10 years I've seen Intel ignore 1366.
  • zealvix - Monday, June 10, 2019 - link

    Or they can do what Intel did with 8700, provide a cooler that is just sufficient for base all cores speed. :D
  • zealvix - Monday, June 10, 2019 - link

    Wrong reply, ignore the above
  • xrror - Monday, June 10, 2019 - link

    Well ironically, AMD did kinda do that for the 1xxx Ryzens. Additional cooling past the stock coolers really didn't give you much more headroom.

    So maybe there is something to be read into the specs of the OEM cooler. ;)
  • psychobriggsy - Tuesday, June 11, 2019 - link

    That's what I am wondering as well.

    Previously the turbo clock was for 1 core per CCX. The theory is that each 8C Zen 2 die is still 2 CCXs, so there is a chance that 4.7GHz is a quad-core turbo.

    And the maths can be made to work out, assuming AMD isn't playing around with TDP. I will assume that TDP is met both at base clocks and at turbo clocks.

    (105W - ~15W for I/O) / 16 is ~5.6W per core, at 3.5GHz
    As the clocks increase, so will the per-core power consumption for the faster cores. AMD doesn't use tables like Intel any more though, so there is a kind of smooth scaling thing going on.
    It is easy to imagine that at 4.7GHz, a core will use 20-25W of power. So 4 can be at that speed within the TDP.

    (However, we don't know the voltage curves for Zen 2 and TSMC 7nm, it could equally easily be 4C Turbo = 4.4GHz, 2C Turbo = 4.6GHz and 1C Turbo = 4.7GHz). I hope reviews do some investigation here, presumably as part of whatever tweaks AMD has done to XFR, etc.
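
    A rough sketch of that budget math in Python (the 15W I/O-die figure and the 20-25W per boosted core are the assumptions above, not AMD specs):

    tdp_w = 105
    io_die_w = 15            # assumed I/O die + uncore budget
    cores = 16

    base_per_core_w = (tdp_w - io_die_w) / cores
    print(f"{base_per_core_w:.1f} W per core at base clocks")   # -> 5.6 W

    # If a fully boosted core needs ~20-25W at 4.7GHz (a guess, since the
    # Zen 2 / TSMC 7nm voltage-frequency curve isn't public), roughly this
    # many cores could hold that turbo within the package budget:
    boosted_core_w = 22.5
    print(int((tdp_w - io_die_w) // boosted_core_w), "cores at ~4.7 GHz")  # -> 4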
  • Arbie - Monday, June 10, 2019 - link

    Regarding the 3950X... since AMD will be selling it with a cooler, that component has to figure into the guaranteed boost. Coolers can't be binned like silicon, especially since the interface quality will be uncontrolled.

    Meaning that AMD will be forced to leave performance on the table, as margin.

    Meaning that premium cooling could very well enable PBO / XFR to push the boost significantly higher. I'll bet that with top air cooling most 3950X will boost at least 100 MHz higher. And on water, well, maybe 5.0GHz.

    Personally I'd rather they omitted the cooler from the top chips, for exactly this reason. I'd hate to throw it away...
  • xrror - Monday, June 10, 2019 - link

    Don't throw the stock cooler away - put it on ebay, AMD stock coolers are actually some of the first OEM coolers that weren't a joke =)

    Back in the Opteron days, the stock cooler was all copper with heatpipes. They're worth listing ;)
  • zealvix - Monday, June 10, 2019 - link

    Or they can do what Intel did with 8700, provide a cooler that is just sufficient for base all cores speed. :D
  • SaturnusDK - Tuesday, June 11, 2019 - link

    Barely sufficient. It was only sufficient in a 21C ambient room. Above that, thermal throttling.
  • godrilla - Monday, June 10, 2019 - link

    The 16-core Ryzen 3950X is overkill for gaming. 8 cores is probably the minimum recommended for some level of future proofing, especially because next-gen console ports are around the corner. From there, whatever overclocks the best is going to be more attractive for high-end gaming. Who knows, maybe the 12-core part might overclock better than both the 8-core and 16-core parts.
  • xrror - Monday, June 10, 2019 - link

    well duh ;) okay non-jerk mode now:

    the 3600 and 3600x - depending on how they overclock, those are the gamer chips hands down.

    I don't think many games currently know what to do with more than 12 virtual cores?
  • vanilla_gorilla - Monday, June 10, 2019 - link

    Unless you are a gamer who likes to record, edit and encode your gaming.
  • xrror - Monday, June 10, 2019 - link

    fair enough, you got me there. Lets hope those "extra" cores/threads get the boost lovin' too!
  • Metroid - Tuesday, June 11, 2019 - link

    Each chiplet = 8 cores, 2 chiplets = 16 cores; that's the full package. 12 cores = 2 chiplets of 8 cores with 2 cores disabled on each chiplet, so 2 x 6 = 12 cores. So yes, the 12-core is the one that could overclock the most here.
  • Irata - Wednesday, June 12, 2019 - link

    Didn't we have the "four cores and threads are more than enough for gaming" mantra when Ryzen was first launched (saying gamers would not need more cores) ?

    Techspot has an interesting article up, re-benchmarking Ryzen 1600 vs. the 4C/4T core i5-7600k which at the time was seen as the better gaming CPU.

    Now, two years later, it is actually the worse gaming CPU in newer games and really struggling with some:

    "Today when testing with Battlefield V the 1% low performance is a disaster for the 7600K and this means although the R5 1600 was only slightly faster on average the actually gaming experience was worlds better. The Core i5-7600K crash and burned at 1440p, this is a game that simply requires more than four cores and threads, even if they’re clocked at or around 5 GHz."
  • LegionR - Wednesday, June 12, 2019 - link

    The 16-core one is definitely not targeted towards gamers, it's for people who actually perform worthwhile tasks like design, rendering and many more.
  • xrror - Monday, June 10, 2019 - link

    Okay AnandTech, I give you a mission. (pre-post edit: assuming you can get parts/samples of course)
    Whenever the 3950X becomes available... test it on something with good bios support but is "cheap and cheery" in the channel.

    By that I mean something stupid like, say, an ASRock AB350M Pro4, or some random Asus Strix x50 board.

    Because seriously, that sort of combo truly highlights what AMD is offering with AM4 compatibility.

    I... hope there are enough readers on a budget, reading AnandTech for info, who are interested in the possibility of what a 3600 or 3700 on their "old" 3xx and 4xx series AM4 boards would do for them.

    Using a 3950X on a B350 board sure is... the extreme case, but also - if that combo shines, it means AMD executed perfectly. BUT any faults in that combo would also be super informative in terms of things to look out for.

    I wish I could wordcraft better - the fact that the above combo is possible... is awesome!

    We're in a fun time (again!) where AMD is on the cusp of being underdog to leadership (for a bit) and we get things like socket AM4 for more than 2 gens of proc.

    This is a bit of an opportunity, since we don't get things like Core 2 Quad on old Pentium 4 chipsets (975X P5W DH Deluxe) because Intel loves us (the last era Intel really got pushed). A weaker analog on the AMD side is the Thuban Phenom X6 to FX transition across AM2/AM2+/AM3.

    I guess my rambling incoherent hardware love letter is this. Test a 3950X on the nearest-to-$85 AM4 motherboard that isn't a known pile of garbage, and... you know, even if it throttles a bit, if that is a working combo - slamming a $750 proc onto your existing $85 board, and it doesn't explode, and even throttled it is still 364.6% faster as a drop-in upgrade to your self-built computer from 2 years ago...

    That's a huge win for AMD. And why I love PC hardware ;)
  • jeremyshaw - Monday, June 10, 2019 - link

    That's exactly my usecase right now.

    B350 ITX board (ASRock), currently serving HTPC duties. If it has a better future replacing my Z270 Intel setup, I'll upgrade early. If it doesn't, I'll probably wait.
  • Hul8 - Monday, June 10, 2019 - link

    AMD is recommending combining the 3950X only with an X570 motherboard - presumably because of the power delivery quality/headroom.

    I wouldn't put it past them to limit which chipsets are qualified for (overclocking) the 12- and 16-core parts due to power constraints.

    It makes perfect sense, since that way
    1) motherboard vendors don't need to overbuild the power delivery on the lower end boards, saving costs and allowing lower prices (and the budget options that are sorely missing in all vendors' X570 lineups);
    2) the chipsets get differentiated more, since overclocking a high-end CPU will *require* that expensive X570 board.

    PCIe 4 support would be another way to differentiate and cut costs: Support for it may be cut entirely on B550 (for both CPU and chipset). Would anyone want to use the $750 16-core CPU on a motherboard with only PCIe 3? Not being able to add a PCIe 4 SSD a year from now, when they're probably more readily available from most manufacturers?
  • xrror - Tuesday, June 11, 2019 - link

    I don't disagree with AMD's recommendation. I just want to know about AMD's "technically valid but not recommended" combos where, if the mobo maker didn't totally cheap out, you have a surprise winner.

    To be realistic, if any mobo vendor did something nuts like... enable 3950X support on an A320 board. When that combo exploded while overclocked? I'm torn, in that it is awesome that you could use that proc on a board that was made before those demands could be imagined, but it sucks that you're going to blame the maker of your $40 mobo for giving you the ability to explode your system with a $750 proc.

    But I would like to see some runs with the 3600 and 3700 with some budget B350 boards.

    I show my age when I get angry that motherboard makers that had the... cojones to release beta BIOSes letting boards operate "outside of specification" have, over the years, gotten slammed for releasing "bad" BIOSes and/or motherboards, because Joe/Jane Bob can't comprehend that "when this BIOS voltage number is flashing red/white with a skull, that might not be a good thing long term".
  • Hul8 - Tuesday, June 11, 2019 - link

    The TDPs presented for the Ryzen 3rd gen are the same as Ryzen 2nd gen. (Gen 1 didn't have 105W, IIRC.) The TDP only applies to stock operation, though.

    The problem arises from having 50% or 100% more cores than you used to; At that point the power demands from more voltage and frequency stack up quickly and the earlier generation motherboards may not allow much power headroom.

    Up to 8 cores should be fine, and overclocking on B350/450 similar to Ryzen 2000 with the same core count.
  • SaturnusDK - Tuesday, June 11, 2019 - link

    It's not only that. Smaller lithographies mean lower voltage, so in order to deliver the same power you have to supply more current. That's not an easy task, and it's why some B350/X370 MBs in particular might have problems with even the 12-core 3900X. It's much easier to provide low current at high voltage than the reverse. But we shall see in testing.
  • TheUnhandledException - Tuesday, June 11, 2019 - link

    The 3700X has a lower TDP: it is 65W, whereas the 2700X was 105W and the 1700X was 95W. The 3700X is looking like one nice CPU. No clock improvement over the 2700X, but 15% higher IPC and 1/3rd less power? Yes please.
  • jeremyshaw - Tuesday, June 11, 2019 - link

    Eh, we already saw with the 2700X that AMD doesn't always strictly adhere to the classical TDP definition. Anandtech's Ryzen 2 review covers that. System power consumption for CPU-only tasks on the 2700X seems to match Intel's 9900K, and the 2600X system power for CPU-only tasks seems to exceed the 8700K.

    I know Anandtech writers are in love with trusting power readings from internal CPU registers, but in the end, total system power draw doesn't lie. We also know some tools (like HWInfo) are not reliable at measuring AMD Ryzen power draw (the creator of HWInfo believes AMD engineers don't know their own CPU, and HWInfo underreports AMD power draw significantly - by almost 2:1 on my E485 laptop [Ryzen 2500U], vs AMD's own performance profiling tool).
  • Hul8 - Tuesday, June 11, 2019 - link

    Tom's Hardware publishes power figures from their German sister site, which directly measures consumption from the individual power cables and a PCIe riser.

    Gaming loop:
    - Ryzen 7 2700X: 55.7W
    - Core i7-8700K: 66.8W

    Torture loop:
    - 2700X: 104.7W
    - 8700K: 159.5W

    Source:
    Test setup on page 4: https://www.tomshardware.com/reviews/amd-ryzen-7-2... (power measuring hardware near the bottom of the page)
    Power results on page 12: https://www.tomshardware.com/reviews/amd-ryzen-7-2...
  • Hul8 - Tuesday, June 11, 2019 - link

    Sorry, this was meant to answer @jeremyshaw.
  • psychobriggsy - Tuesday, June 11, 2019 - link

    B550, if it exists, will hopefully retain CPU PCIe 4 (x16 gfx, x4 ssd, x4 chipset), but not offer it in the chipset itself (that will reduce the TDP of the chipset down a bit more to allow passive cooling).
  • Itveryhotinhere - Monday, June 10, 2019 - link

    Will try the 12-core on my 2-year-old Asus X370 Strix-F.
  • xrror - Tuesday, June 11, 2019 - link

    I'm going to do the same but on the 350 strix!
  • PixyMisa - Monday, June 10, 2019 - link

    Ouch at the price, but then, where's the competition?

    I'd love to see them position this as a CPU for small servers as well as for high-mid-range desktops.
  • Qasar - Monday, June 10, 2019 - link

    PixyMisa, at current exchange rates this CPU would cost me $994; for the 12-core Intel i9-7920X I would need $1600. I don't want to know what a 16-core i9 would cost... this CPU is still the better buy, even at that price :-)
  • SaturnusDK - Tuesday, June 11, 2019 - link

    What is the right price for the fastest desktop CPU on the market bar none?

    I think it's up to the market leader, in this case AMD, to decide what they want you to pay for the privilege of owning the best there currently is.
  • Phynaz - Tuesday, June 11, 2019 - link

    AMD is a market leader in what market?
  • Korguz - Tuesday, June 11, 2019 - link

    currently.. cpus.. intel has no answer at the price points of zen 2
  • Phynaz - Tuesday, June 11, 2019 - link

    I don’t think you understand the word leader. AMD is a follower. They lead at nothing at all.
  • Phynaz - Tuesday, June 11, 2019 - link

    Leader: an organization or company that is the most advanced or successful in a particular area.

    Nope, that’s not AMD.
  • Cooe - Wednesday, June 12, 2019 - link

    What are you talking about? They are beating Intel in literally every major area there is here. Single-threaded performance? Better on Zen 2. Multi-threaded performance? Also better on Zen 2. Well how about power efficiency? WAAAAAAAAY better on Zen 2. And the price? Leagues cheaper. Even in their worst area, gaming, they are toe-to-toe matching the very best Intel has to offer. How the hell is all of that not "leadership" to you? Please do explain.
  • Korguz - Wednesday, June 12, 2019 - link

    Cooe.. he has no idea what he is talking about.. so he CAN'T explain... he's just a blind, arrogant Nvidia fanboy.... he will happily pay Intel for their 12-core CPU that costs around 1k, vs a 16-core for 300 less... which will still be faster than Intel's $2500-ish 18-core...

    as for leader... AMD has done more than Intel since Zen was released... and ONLY because of Zen has Intel gotten off their butt...
  • Xajel - Wednesday, June 12, 2019 - link

    Leader could also mean that other companies follow it. Right now, Intel is following AMD: the jump in core counts, a finer process technology, higher performance. Intel is trying to keep up but can't; their TDP is rising higher (as clocks go higher) to compensate. It was the same thing for AMD years ago with their Bulldozer and pre-Ryzen stuff, whose names I don't recall.
  • PixyMisa - Tuesday, June 11, 2019 - link

    I was hoping it would be cheaper, but on the other hand I do want AMD to make money so they can bring us more shiny toys in the future.
  • SaturnusDK - Tuesday, June 11, 2019 - link

    Personally, I was hoping for $699 but would be fine with $799.

    Let's remind ourselves this is a CPU that beats the ~$2000 i9 9980XE in practically everything.
    I'm still waiting for the next generation Threadrippers to be announced to replace my 1950X.
  • Irata - Tuesday, June 11, 2019 - link

    +1
  • LegionR - Wednesday, June 12, 2019 - link

    Why is it ouch? It's cheap as fuck.
  • mode_13h - Monday, June 10, 2019 - link

    I wonder if AMD will consolidate some/all of these chiplets, in the next tock. Once 7 nm gets cheap enough, you'd think it would be a win to offer at least the 8-core parts as a single-die.
  • mode_13h - Monday, June 10, 2019 - link

    Especially with respect to power consumption, I might add.
  • xrror - Tuesday, June 11, 2019 - link

    kinda makes you wonder how the newer processes are optimized.

    My guess is everything is geared for mobile/battery life.

    Going for raw perf (GHz scaling) isn't the mass market anymore, so all we get are power-optimized nodes that don't scale for crap.

    The big money isn't in giving John/Julie Doe a faster craptop anymore, it's giving them a phone that they can forget in the car for 3 days while still remembering their facebook dickpics.

    yeay humanity =(
  • mode_13h - Tuesday, June 11, 2019 - link

    I think frequency scaling has slowed due to increasing leakage, as feature size continues to shrink - not dickpics. There is plenty of demand for a high-power process - cloud, HPC, self-driving cars, and robotics, to name a few.
  • Oxford Guy - Tuesday, June 11, 2019 - link

    "There is plenty of demand for a high-power process"

    More explanation needed. I know of high-frequency trading wanting the fastest-possible IPC coupled with the fastest-possible clock. Emulation also tends to like this. Outside of these two small cases, though, I am unfamiliar with how things like cloud computing can't use the chiplet approach. The same goes for cars. Robotics is a maybe. It seems that it would benefit the most from improved AI circuitry.
  • mode_13h - Tuesday, June 11, 2019 - link

    There's something ironic about bemoaning the decline of humanity with a post that sort of exemplifies it, I might add.
  • PixyMisa - Tuesday, June 11, 2019 - link

    I was hoping it would be cheaper, but on the other hand I do want AMD to make money so they can bring us more shiny toys in the future.
  • PixyMisa - Tuesday, June 11, 2019 - link

    Clicked the wrong reply button...
  • psychobriggsy - Tuesday, June 11, 2019 - link

    Why? Using chiplets greatly improves yield, and allows die-matching (good + good die = 3950X).

    Sure, at 5nm maybe a 16C chiplet will make sense, simply because otherwise the chiplets start getting absolutely tiny, but even then there's no real need, a 5nm 8C die will still be around 50mm^2 (allowing for Zen 3 enhancements, more cache, etc).
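
    A back-of-the-envelope check on that ~50mm^2 figure (the ~74mm^2 Zen 2 chiplet size, the logic/SRAM split, and the N7-to-N5 scaling factors are all rough assumptions):

    zen2_ccd_mm2 = 74.0     # approximate 7nm Zen 2 chiplet area
    logic_fraction = 0.55   # assumed share of the die that is logic
    sram_fraction = 1 - logic_fraction

    logic_scaling = 1.8     # assumed N7 -> N5 logic density gain
    sram_scaling = 1.3      # SRAM traditionally shrinks far less

    shrunk = zen2_ccd_mm2 * (logic_fraction / logic_scaling
                             + sram_fraction / sram_scaling)
    print(f"~{shrunk:.0f} mm^2 before any Zen 3 additions or extra cache")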
  • Oxford Guy - Tuesday, June 11, 2019 - link

    The trouble with chiplets is latency. However, considering the bottlenecks elsewhere (like slotted RAM), perhaps it's not a huge problem.

    Beyond chiplets, though, there is the issue of not getting even close to the reticle limit of process nodes (e.g. Radeon VII). While this is good for margins, it's bad for performance, including performance-per-watt. Companies like AMD resort to pushing voltage/clocks well beyond optimal to get more performance out of an unnecessarily small die.
  • scineram - Wednesday, June 12, 2019 - link

    Nonsense.
  • Oxford Guy - Friday, June 14, 2019 - link

    fascinating analysis there
  • mode_13h - Saturday, June 22, 2019 - link

    They have to build products at a price the market will pay. That's why Radeon VII isn't bigger. I think we all know it's more efficient to build bigger dies and clock them lower, but they can't only look at power-efficiency.

    Speaking of Radeon VII, I think they expected it to cost more, but perhaps a 7 nm demand slump allowed them to offer the consumer version at its current price. Their original messaging on Vega 20 did not include a consumer product, however.
  • Xajel - Wednesday, June 12, 2019 - link

    I guess they will offer it with APUs at least; they mentioned something about that, as APUs are designed more for power and efficiency.

    As for CPUs, I don't know; maybe they might do something with 7nm+ as yields get better. But I doubt it, actually, beyond the APUs. AMD's idea is to use as few die designs as possible across as many products as possible. Now the same 8C chiplet is being used across the line; the only difference is the I/O die, which differs between Ryzen and Epyc, and Threadripper might reuse the Epyc I/O die.

    So adding a new CPU die just for 8C isn't feasible for them, at least this time. But 8C APUs are possible now.
  • yankeeDDL - Tuesday, June 11, 2019 - link

    Impressive.
    105W, 16 cores. IPC similar to Core 9-gen.
    The MSRP for the Core i9-7960X is $1700, more than 2X, for similar performance despite a >50% higher TDP (and Intel's TDP is cheating).

    Sounds to me like, starting Q3/Q4, buying Intel for desktops, workstations, or servers makes no sense... Good going! Looking forward to Intel's response.
  • SaturnusDK - Tuesday, June 11, 2019 - link

    No. IPC of Ryzen 2000 series was roughly on par with Core 9th gen. Ryzen 3000 series has higher IPC than any Intel product.

    The R9 3950X competes directly only with the i9 9980XE and beats it easily.
  • Oxford Guy - Tuesday, June 11, 2019 - link

    We also have to take into full account the security problems and the related performance regressions.

    For the best security, is it still mandatory to disable SMT in Intel CPUs? If so, the performance advantage for AMD is massive.
  • SaturnusDK - Wednesday, June 12, 2019 - link

    In the slide footnotes AMD clarifies that the Intel CPUs used for comparison were tested without any vulnerability mitigations. And these are already known to have a quite significant negative performance impact.
    They also note that the AMD CPUs were tested without the latest Windows update when compared against Intel CPUs, and the latest Windows update significantly improves performance on AMD CPUs.
    So none of the usual Intel trickery like hiding an industrial cooler with an illegal coolant, as in the 5GHz 28-core fiasco, or turning off half the CPU cores in Threadripper when comparing to the i9 9900K, or using dual channel DDR4-3700 for their Ice Lake and single channel DDR4-2400 for the Ryzen 3700U when comparing those.
  • scineram - Wednesday, June 12, 2019 - link

    3700U was dual channel. Stop lying!
  • yankeeDDL - Wednesday, June 12, 2019 - link

    I hadn't noticed the footnotes. Then it is massive, wow! I think the embargo lifts on June 7th. Let's hope for a thorough review, taking all of these factors into consideration.
  • yankeeDDL - Wednesday, June 12, 2019 - link

    *July 7th...
  • Gondalf - Tuesday, June 11, 2019 - link

    This move will kill their HEDT line.
    The real question is what AMD's revenue advantage will be from this cheap silicon saga.

    ps. why not a 4.7GHz turbo on the 8-core dies??? 16 cores are useless in games.
    Bad move from AMD
  • Xyler94 - Tuesday, June 11, 2019 - link

    Kill? No. HEDT is more than just core count. Remember, Zen 2 goes up to 64 cores. They still have 24- and 32-core parts to move into. And even if they release a 16-core Threadripper 3, I bet it will sell very well because of quad-channel memory and way more PCIe lanes to work with.

    This is just giving us a new high core count chip on "mainstream"
  • peevee - Wednesday, June 12, 2019 - link

    They absolutely SHOULD release a 2-chiplet 16-core Threadripper if they are still going to limit this platform to 4 channels (while the socket supports 8).
  • haukionkannel - Thursday, June 13, 2019 - link

    They save the best chiplets for Rome and the 3950X! The 6- and 8-core CPUs get the worst chiplets they can find, because it does not matter if they eat more energy. The 16-core has to have reasonably good chiplets or it would run out of power budget too early...
  • ChubChub - Friday, June 14, 2019 - link

    The way AMD boosts clocks is largely based on TDP/motherboard limitations; it's a very clever system that rewards superior cooling solutions. TDP can be adjusted, meaning your available boost clocks will adjust accordingly (assuming you have the proper cooling). The benefit of this system will be clear once the new AMD chips are compared against their Intel counterparts; Intel's heavy reliance on going way out of spec. will become (more) obvious.

    So, the 4.7GHz of the 16C chip is able to be advertised mostly because the base clocks are low, leaving a lot more headroom to boost specific cores to guaranteed 4.7GHz (most chips will likely boost above this, bone stock). If you take a 12C (8C) chip and mess with the chip's limitations, you'll see similar boost clocks.

    TL;DR: Worry not, the 8C chips will be boosting in the same range as the 16C ones are, much like what is happening with the Zen+ parts.
  • mode_13h - Saturday, June 22, 2019 - link

    AMD is probably binning their best dies for the 16-core part.

    I hope they eventually release an 8-core that turbos to 4.7 GHz.
  • Wise Wolf - Tuesday, June 11, 2019 - link

    Wow, you have to go down from a Ryzen 9 3950X to a Ryzen 7 3700X to drop in TDP from 105W to 65W. In Mini-ITX builds, where higher watts = more heat, that can be pretty crucial. I'm surprised that they don't decrease the wattage for all those disabled cores when you go down a CPU level. One other thing is that all the new X570 boards are HUGE! Either full ATX or E-ATX; I haven't seen any Mini-ITX or Micro-ATX motherboards with X570 yet from any manufacturer.
  • FMinus - Tuesday, June 11, 2019 - link

    The 3800X was marked as 95W at this show, but 105W at Computex; I believe 95 might be right.
  • schujj07 - Tuesday, June 11, 2019 - link

    Gigabyte has a mini-ITX.
    https://www.anandtech.com/show/14460/gigabyte-unve...
    The 3950X will create less heat in an ITX space than either the 9700K or 9900K and people put those into ITX cases all the time.
  • Wise Wolf - Tuesday, June 11, 2019 - link

    Thanks for that link, on a side note DAT BACKPLATE! is huge lol
  • GreenReaper - Tuesday, June 11, 2019 - link

    Well sure; instead of decreasing wattage, they're increasing base core clocks for sustained perf. Looks like there's a sweet spot somewhere around 3.5/3.6GHz, which is why they put the 3700X and 3950X base there. The 3600X, 3800X and 3900X have more headroom for a higher base clock. It won't use that all the time, but they only have three cooler models, so no sense in e.g. an 80W TDP.

    Personally I think 3900X is the best value here. Twelve cores should be enough for a while, and you basically pay double to get two 3600Xs in one socket, with no compromise in base clock and +4.5% boost, perhaps thanks to the separation (though I want to see full clock/core progression). Having that extra cache will help keep up as newer software inevitably processes more data. 3950X is compromised by less L3 per core, and lower clocks, butting up against the 105W limit.

    Obviously the 3600 is a great value in cores per dollar, and the clock speed is only 200Mhz lower. All things being equal, you should be able to overclock it to at least 3700X levels as well. But they're not; better-quality chiplets are likely to go to the higher-level CPUs, at least at first. You'll probably have to toss the 65W cooler it comes with to get much more out of it, too.

    The 3800X doesn't make sense to me, unless you *really* want to avoid the potential impact of NUMA. You're paying the price for having a chiplet with all cores working and able to run at a reasonably high frequency, but losing out on 3900X's additional cores, cache, and higher boost.

    Of course ultimately we need to wait for benchmarks. But it's no surprise that CPUs employing less-than-perfect 6-core chiplets offer a significant discount. The value there is unlikely to change.
  • jtd871 - Wednesday, June 12, 2019 - link

    According to the info, all the main memory access goes thru the IO die, so the only NUMA issue should be related to local cache. IIRC, one of the design goals for Zen2-based Ryzen/TR was to make latency more uniform. The increased L3 and new Windows thread allocation strategy should also go a long way to smoothing out latency. I expect AT and some others to investigate this shortly after the embargo lifts.
  • AshlayW - Tuesday, June 11, 2019 - link

    On Zen 2, all communication between chiplets goes through the I/O die: there are no direct links between chiplets. At least this is what I am aware of.
  • FMinus - Tuesday, June 11, 2019 - link

    3800x was marked as a 95W part this show, so which is right?
  • SmCaudata - Tuesday, June 11, 2019 - link

    The 3800X is clearly made from the low-binned 8-core chiplets, with the 3950X using two chiplets from the good bin. Given the difference in power between the 3700X and 3800X, I'm guessing they are basically the same quality, just clocked differently. Looking at the list, I think the 3800X is a bit overpriced.

    All that said, you get effectively double the silicon for less than double the price going from 3800x to 3950x. It's nice to see companies not gouging on the enthusiast level chips.
  • GreenReaper - Tuesday, June 11, 2019 - link

    3950x gives you double chiplets over 3800x, but 179% the all-core performance for 189% the price. And that's assuming there's no PCIe bandwidth limitations or impact from NUMA considerations.

    If you're considering those CPUs, it seems like a no-brainer to go for the 3900X ($41.58/core) instead of the 3800X ($49.88/core) or 3950X ($46.81/core, all-core running slower than either). As noted above, you get a significant discount for going with CPUs which can use defective chiplets.
  • Metroid - Tuesday, June 11, 2019 - link

    I made a table yesterday about it.

    Ryzen 9 3950X (16c, $749) 105w 749/16 = $46.8 usd per core
    Ryzen 9 3900X (12c, $499) 105w 499/12 = $41.5 usd per core
    Ryzen 7 3800X (08c, $399) 095w 399/08 = $49.8 usd per core
    Ryzen 7 3700X (08c, $329) 065w 329/08 = $41.1 usd per core
    Ryzen 5 3600X (06c, $249) 095w 249/06 = $41.5 usd per core
    Ryzen 5 3600 (06c, $199) 065w 199/06 = $33.1 usd per core

    Intel i9 9900k (08c, $499) 095w 499/08 = $62.3 usd per core
    Intel i7 9700k (08c, $379) 095w 379/08 = $47.3 usd per core
    Intel i5 9600k (06c, $262) 095w 262/06 = $32.7 usd per core

    The winner here, hands down, is the 3900X; second place goes to the 3700X and third to the 3950X.
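
    A quick sketch to reproduce the per-core math from that table (prices and core counts as quoted above); note it yields $43.67/core for the 9600K rather than $32.7, in line with the correction a few replies down:

    # (cores, USD MSRP) as quoted in the table above
    cpus = {
        "Ryzen 9 3950X": (16, 749),
        "Ryzen 9 3900X": (12, 499),
        "Ryzen 7 3800X": (8, 399),
        "Ryzen 7 3700X": (8, 329),
        "Ryzen 5 3600X": (6, 249),
        "Ryzen 5 3600":  (6, 199),
        "Core i9-9900K": (8, 499),
        "Core i7-9700K": (8, 379),
        "Core i5-9600K": (6, 262),
    }

    for name, (cores, price) in cpus.items():
        print(f"{name}: ${price / cores:.2f} per core")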
  • shabby - Tuesday, June 11, 2019 - link

    Add a price per thread too, those 9600k/9700k won't look so hot anymore.
  • Oxford Guy - Tuesday, June 11, 2019 - link

    Not to mention the security performance regressions. Let's hope AMD doesn't also have to have SMT disabled, for example.
  • Oxford Guy - Tuesday, June 11, 2019 - link

    How can the 9600K be cheaper per-core than the 3600 if they both have six cores and the 3600 is $63 lower in price? Shouldn't it be $43.7 per core?
  • peevee - Tuesday, June 11, 2019 - link

    How did you come up with:
    Ryzen 5 3600 (06c, $199) 065w 199/06 = $33.1 usd per core
    Intel i5 9600k (06c, $262) 095w 262/06 = $32.7 usd per core

    The latter is $43.67 per core.
  • Metroid - Wednesday, June 12, 2019 - link

    Yes, it is 43.6; it was a miscalculation.
  • peevee - Wednesday, June 12, 2019 - link

    And 3800x is not 95W, it is 105W according to anand.
  • oleyska - Friday, June 14, 2019 - link

    There is no NUMA.
  • mode_13h - Saturday, June 22, 2019 - link

    No, but your cache gets bifurcated. Thread communication is worse between chiplets than within them, or within the same CCX.
  • ChubChub - Friday, June 14, 2019 - link

    The difference in power is performance based; a 95w part only "needs" a 95w cooler. Once you overclock, the limit becomes VRMs / cooler. This is partially why Intel doesn't include a cooler; the enormity of the stock cooler that is required to get the performance Intel claims would increase the chip cost dramatically (the 9900k is often in the 200-250w TDP range for the overclocks people boast about).

    https://twitter.com/Thracks/status/113845497360650...

    AMD claims the boost clock/TDP/EDC/PPT limits can be overridden (essentially, allowing your chip to perform under similar constraints as Intel would... aka, largely without constraints). At that point, your boost clocks basically depend on your cooler; glory.
  • peevee - Tuesday, June 11, 2019 - link

    Wow, I even guessed the name right (not that it was hard).
    At 105W and with the same L3 it is quite useless over 3900X.
  • Cooe - Tuesday, March 23, 2021 - link

    Holy crap has hindsight not treated you well here...
  • Tkan215 - Tuesday, June 11, 2019 - link

    The only thing that surprises me is that Intel's TDP is more like a marketing scheme. Hopefully a real comparison is coming soon.
  • urbanman2004 - Wednesday, June 12, 2019 - link

    3700X FTW, nuff said 😉
  • Oxford Guy - Friday, June 14, 2019 - link

    If Intel's CPUs have hyperthreading disabled to mitigate the latest security problem then AMD is going to clean up in the server benchmarks, and others.
  • mode_13h - Saturday, June 22, 2019 - link

    There'd be no need to disable HT, if OS kernels just wouldn't pair threads from different processes on the same core. I think we'll probably see that, before long.
