
339 Comments


  • 5j3rul3 - Thursday, November 5, 2020 - link

    Rip Intel🤩🤩🤩
  • Smell This - Thursday, November 5, 2020 - link


    Chipzillah has got good stuff ... everyone is "just dandy" for the most part...
    but, AMD has kicked Intel "night in the ruts" in ultimate price/performance with Zen3
  • Kangal - Saturday, November 7, 2020 - link

    True, but the price hikes really hurt.

    For the Zen3 chips, it's only worth getting the:
    - r9-5950X for the maximum best performance
    - r5-5600X for the gaming performance (and decent value).

    The 12-core r9-5900X is a complete no-buy, whilst the r7-5800X is pretty dismal too, so both chips really need to be skipped. Neither of them has an overclocking advantage, and there's just no gaming advantage to them over the 5600X. For more performance, get a 3950X or 5950X. And when it comes to productivity, you're better served with the Zen2 options: you can get the 3700 for much cheaper than the 5800X, or for the same price you can get the 3900X instead.

    Otherwise, if you're looking for the ultimate value, as in something better than the 5600X value... you can look at the 3600, 1600f, 3300X, 3100 chips. They're not great for gaming/single-core tasks, but they're competent and decent at productivity. Maybe even go into the Used market for some 2700X, 2700, 1800X, 1700X, 1700, 1600X, and 1600 chips as these should be SIGNIFICANTLY cheaper. Such aggressive pricing puts these options at better value for gaming (surprising), and better value for productivity (unsurprising).
  • DazzXP - Saturday, November 7, 2020 - link

    The price hike doesn't really hurt that much. AMD was making very little money on their past Ryzens because they had to contend with Intel's mindshare and throw in more cores, as they didn't quite have the IPC and clock speeds; now they have both. It was to be expected, to be honest.
  • Silma - Sunday, November 8, 2020 - link

    Do you have any recommendations for motherboards for either a Zen 3 or a Zen 2 build (depending on availability of processors)? I want to spend as little as possible on it, but it must be compatible with 128 GB of RAM.
  • AdrianBc - Sunday, November 8, 2020 - link

    If you really intend to use 128 GB of RAM at some point in the future, you should use ECC RAM, because the risk of errors is proportional to the quantity of RAM.
    A good motherboard is the ASUS Pro WS X570-ACE (which I use), previously $300, but right now it is only available at much higher prices ($370), for some weird reason.

    If you want something cheap with 128 GB and ECC support, the best you can do is an ASRock micro-ATX board with the B550 chipset. There are several models and you should compare them. For example an ASRock B550M PRO4 is USD 90 at Amazon.
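    To make the scaling concrete, here's a back-of-the-envelope sketch in Python. The error rate used is purely hypothetical (real-world DRAM soft-error rates vary by orders of magnitude between studies); the point is only that expected errors grow linearly with installed RAM:

    ```python
    # Purely illustrative rate, NOT a measured figure:
    ERRORS_PER_GB_PER_YEAR = 0.1

    def expected_errors(ram_gb: float, years: float) -> float:
        """Expected soft-error count for a given RAM size and uptime,
        assuming errors scale linearly with capacity."""
        return ram_gb * years * ERRORS_PER_GB_PER_YEAR

    # Doubling RAM doubles the expected error count:
    print(round(expected_errors(64, 1), 1))   # errors/year at 64 GB
    print(round(expected_errors(128, 1), 1))  # twice that at 128 GB
    ```

    So going from a typical 32 GB gaming build to 128 GB quadruples your exposure, which is why ECC starts to matter at that capacity.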
  • Silma - Wednesday, November 11, 2020 - link

    Thanks for the input! Is ECC really necessary? The primary objective of the PC memory would be loading huge sound libraries in RAM for orchestral compositions. The PC would serve at the same time as gaming PC + Office PC.
  • Spunjji - Sunday, November 8, 2020 - link

    In the context of a whole system? Not really, no.

    In the context of an upgrade? Not at all, if you have a 4xx board you'll be good to go in January without having to buy a new board. That's something that hasn't been possible for Intel for a while, and won't be again until around March, when you'll be able to upgrade from a mediocre power hog of a chip to a more capable power hog of a chip.

    Comparing new to used in terms of value of a *brand new architecture* doesn't really make much sense, but go for it by all means 👍 The fact remains that these have the performance to back up the cost, which you can see in the benchmarks.
  • leexgx - Sunday, November 8, 2020 - link

    I would aim for the 5600X minimum unless you're really trying to save $100, as the 5600X is a good jump over the 3700X/3600X.
  • biostud - Monday, November 9, 2020 - link

    Uhm, no? For me the 5900X would make perfect sense. I game and work with photo/video editing, and would like to keep my computer for a long time. The 5950X costs too much for my needs, the 5900X offers 50% more cores than the 5800X for $100, and the 5600X hasn't got enough cores for video editing. (Although I'm waiting for the next socket before upgrading my 5820K.)
  • jakky567 - Tuesday, November 24, 2020 - link

    Total system, I think the 5950x should be more popular. That being said, the 5900x is still great.
  • mdriftmeyer - Monday, November 9, 2020 - link

    I spend $100 or more per week on extra necessities from Costco. Your price hike concerns are laughable.
  • bananaforscale - Monday, November 9, 2020 - link

    5900X has good binning and the cheapest price per core. For productivity 3900X has *nothing* on 5900X for the 10% price difference and 5950X is disproportionately more expensive. Zen and Zen+ are not an option if you want high IPC, 3300X basically doesn't exist... I'll give you that 3600 makes more sense to most people than 5600X, it's not that much faster.
  • Kangal - Wednesday, November 11, 2020 - link

    "Price per Core".... yeah, that's a pointless metric.
    What you need to focus on is "Price per Performance", and this should be divided into two segments: Gaming Performance and Productivity Performance. You shouldn't be running productivity tools whilst gaming, for plenty of reasons (game crashes, tool errors, attention span, etc). The best use case for a "mixed/hybrid" build would be Twitch gaming; that's still a niche case... but that's where the 5800X and 5900X make sense.

    Now, I don't know what productivity programs you would use, nor would I know which games you would play, or if you plan on becoming a twitcher. So for your personal needs, you would have to figure that out yourself. Things like memory configurations and storage can have big impacts on productivity. Whereas for Gaming the biggest factor is which GPU you use.

    What I'm grasping at is the differences should/will decrease for most real-world scenarios, as there is something known as GPU scaling and being limited or having bottlenecks. For instance, RTX 2070-Super owners would target 1440p, and not 1080p. Or RTX 3090 owners would target 4K, and not for 1440p. And GTX 1650 owners would target 1080p, they wouldn't strive for 4K or 1440p.

    For instance, if you combine a 5600X with a Ultra-1440p-card, and compare the performance to a 3600X, the differences will diminish significantly. And at Ultra/4K both would be entirely GPU limited, so no difference. So if you compare a 5800X to a 3900X, the 3900X would come cheaper/same price but offer notably better productivity performance. And when it comes to gaming they would be equal/very similar when you're (most likely) GPU limited. That scenario applies to most consumers. However, there are outliers or niche people, who want to use a RTX 3090 to run CS GO at 1080p-Low Settings so they can get the maximum frames possible. This article alludes to what I have mentioned. But for more details, I would recommend people watch HardwareUnboxed video from YouTube, and see Steve's tests and hear his conclusions.

    Here is my recommendation for the smart buyer: do not buy the 5600X or 5800X or 5900X. Wait a couple of months and buy then. For pure gaming, get the r5-5600, which should have similar gaming performance but come in at around USD $220. For productivity, get the r7-5700, which should have similar performance to the 5800X but come in at around USD $360. For the absolute best performance, buy the r9-5950X now, don't wait. And what about Twitch streamers? Well, if you're serious, then build one gaming PC and a second streaming PC, as this would allow your game to run fast and your stream to flow fluidly... IF YOU HAVE A GOOD INTERNET CONNECTION (latency, upload, download).
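    The GPU-scaling point above can be shown with a toy model (all numbers are made up for illustration, not benchmark results): the delivered frame rate is capped by whichever of the CPU or GPU is slower.

    ```python
    def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
        """The slower of the two stages caps the frame rate."""
        return min(cpu_fps, gpu_fps)

    # Hypothetical CPU-side frame rates: newer chip vs older chip.
    fast_cpu, slow_cpu = 200.0, 160.0

    # At 1080p the GPU is rarely the limit, so the CPU gap shows:
    print(delivered_fps(fast_cpu, gpu_fps=250.0))  # 200.0
    print(delivered_fps(slow_cpu, gpu_fps=250.0))  # 160.0

    # At 4K the GPU caps both systems, so the gap vanishes:
    print(delivered_fps(fast_cpu, gpu_fps=90.0))   # 90.0
    print(delivered_fps(slow_cpu, gpu_fps=90.0))   # 90.0
    ```

    That's the whole argument in four lines: a 25% CPU advantage at 1080p can become 0% at 4K once the GPU is the bottleneck.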
  • lwatcdr - Monday, November 9, 2020 - link

    "You can get the 3700 for much cheaper than the 5800X. Or for the same price you can get the 3900X instead."
    And if you want both gaming and productivity? They get the 5800X or 5900X. So AMD has something for every segment which is great.
  • TheinsanegamerN - Thursday, November 12, 2020 - link

    The 5900X is within margin of error of the 5950X in games, still shows a small uptick in gaming compared to the 5600X/5800X, offers far better performance than the 5600X/5800X in productivity tasks, and is noticeably cheaper than the 5950X.

    How on earth is that a no-buy?

    The rest may be better value for money, but by that metric a $2 Pentium D 945 is still far better value for money depending on the task. The 5000 series consistently outperforms the 3000 series, offering 20% better performance for 10% more cash.
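    For what it's worth, "20% better performance for 10% more cash" still nets out to better value; a quick sanity check (numbers normalized for illustration, not real benchmark data):

    ```python
    def perf_per_dollar(perf: float, price: float) -> float:
        """Simple value metric: performance units per dollar."""
        return perf / price

    zen2 = perf_per_dollar(100, 100)  # baseline, normalized
    zen3 = perf_per_dollar(120, 110)  # ~20% faster, ~10% pricier

    # Value improves by roughly 9%:
    print(round(zen3 / zen2, 3))  # 1.091
    ```

    So even at the higher launch prices, perf-per-dollar moved in the buyer's favor, just not by as much as the raw performance did.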
  • Kishoreshack - Saturday, November 14, 2020 - link

    AMD has the best products to offer.
    So you expect them to sell them at a cheaper rate than Intel?
  • Threska - Monday, November 16, 2020 - link

    AMD has a good product RANGE, which means something for everyone AND all monies go to AMD regardless of consumer choice.
  • Ninjawithagun - Friday, November 20, 2020 - link

    The price hike is mainly to cover ongoing R&D for the next-gen Ryzen Zen 4 CPUs due out in 2022. The race between Intel and AMD must go on!
  • jakky567 - Monday, November 23, 2020 - link

    I disagree about the 5900x being a no buy.

    I feel like it goes 5950x for absolute performance. 5900x for high tier performance on a budget. And then the 3000 series for people on a budget, except the 3950x.

    The 5900x has all the l3 cache.
  • brunis.dk - Tuesday, November 24, 2020 - link

    It's nothing compared to the price premiums Intel used to charge for their performance leadership.
  • Diggodo - Monday, January 11, 2021 - link

    You might want to rethink what you've just claimed... and I'm very confused why you would think the 5950X is worth it unless you absolutely need the extra cores for work. It's $750 MSRP compared to $550 🤦‍♂️. I'm curious why you say otherwise, because every Intel 10th/11th gen chip has been a dud, really.

    The 5900X is a steal for its price and is a killer chip. The price hike means nothing, because the 3900X was $499 when it came out.
  • Santoval - Monday, November 9, 2020 - link

    Not just in price/performance this time, in performance period.
  • leexgx - Thursday, November 5, 2020 - link

    RIP AnandTech, the server's been overloaded (too many views). I had to reload like 8 times just to get to this page; about to try and use the print view to show all pages, good luck to me trying that so I can read everything.
  • NickOne - Thursday, November 5, 2020 - link

    Yeah, probably Intel server
  • Drkrieger01 - Thursday, November 5, 2020 - link

    Just my $0.02 as a sysadmin, it's likely a limited bandwidth issue, not server access/drive IOPS.
  • lmcd - Thursday, November 5, 2020 - link

    Probably all the other website editors looking for the best one-line quote to include
  • Orkiton - Thursday, November 5, 2020 - link

    Intel will buy TSMC and Rip out Amd :))
  • Hifihedgehog - Thursday, November 5, 2020 - link

    Wishful thinking. That's like a Bulldog trying to eat a Great Dane.
  • fazalmajid - Thursday, November 5, 2020 - link

    Er, TSMC’s market cap is double Intel’s.
  • lmcd - Monday, November 9, 2020 - link

    A great dane weighs twice as much as a bulldog so...
  • Xyler94 - Thursday, November 5, 2020 - link

    Even if Intel could... I highly doubt they'd be able to, legally speaking, since that would literally be snuffing out competition in CPUs, and even in silicon production...
  • Morawka - Friday, November 6, 2020 - link

    Intel would be better served luring TSMC's process engineers over. Most of the good ones have already been scooped up by China though.
  • bmacsys - Monday, November 9, 2020 - link

    Really dude. I suppose you know this firsthand?
  • lmcd - Monday, November 9, 2020 - link

    China's mainland fab efforts would not be as far as they are otherwise.
  • Qasar - Monday, November 9, 2020 - link

    and you have proof of this ? or is it just your opinion ?
  • ze_banned_because_at - Tuesday, November 10, 2020 - link

    Not that hard to google for "tsmc engineers poached by china".
  • RogerAndOut - Thursday, November 5, 2020 - link

    Well before any bid premium, TSMC has a market value of over $400B and so is far larger than Intel's total worth of around $240B. It would be somewhat cheaper for Intel to just buy up all of the TSMC production capacity that it can for a few years. This would allow Intel to limit the production of other players, while also giving them a chance to produce some chips that are worth buying.
  • Thanny - Thursday, November 5, 2020 - link

    TSMC would never allow that while Intel is a competitor. Buy up all their capacity, getting rid of their customers? Then what happens when Intel stops buying that capacity? Unless Intel spins off its fabs (which is extremely unlikely), TSMC will treat them as a competitor. Intel can make some things at TSMC, but not to the extent that it erodes TSMC's customer base.
  • Spunjji - Sunday, November 8, 2020 - link

    Exactly this. Amazing how few pro-Intel commenters can do big-picture thinking.
  • zodiacfml - Friday, November 6, 2020 - link

    Whut?! They were late buying EUV equipment to save money; too much focus on profitability, which will kill Intel slowly over time.
  • PandaBear - Friday, November 6, 2020 - link

    Yup, TSMC bought about 50% of all ASML output for the next couple years while Intel only bought 5%. RIP Intel, you got what you deserve and you are going to be the next Motorola.
  • Threska - Monday, November 16, 2020 - link

    Like it says in the article, AMD almost folded in 2015, and people were writing articles about its demise. Seems no one has learned anything about predicting the future from that experience. The world needs competition. It doesn't need an AMD monopoly, nor an Intel one, and with good fortune RISC-V and maybe other competitors will come on the scene, so we don't keep repeating the history of "Oh, they're dying, and I'm rooting for it".
  • Spunjji - Sunday, November 8, 2020 - link

    Keep on wishing, friend
  • Jasonovich - Wednesday, November 11, 2020 - link

    Hardly likely; TSMC is the bigger fish, with almost twice the market cap of Intel.
  • vais - Wednesday, November 11, 2020 - link

    Luckily there are anti-monopoly laws ;)
  • Threska - Monday, November 16, 2020 - link

    Let's see how the whole ARM acquisition by Nvidia shakes out before we all start quoting monopoly laws.
  • Kurosaki - Thursday, November 5, 2020 - link

    RIP AnandTech, these reviews make it hard to get in without a 504 error or the site crashing.
  • catavalon21 - Thursday, November 5, 2020 - link

    No issues here. Site's working fine.
  • ballsystemlord - Thursday, November 5, 2020 - link

    Same here.
  • just4U - Thursday, November 5, 2020 - link

    There were some issues early on as the review came out (obviously got hammered..) good now tho..
  • MDD1963 - Saturday, November 7, 2020 - link

    The pages were indeed VERY slow to load the hour or two after they were posted....; overloaded, perhaps.
  • NA1NSXR - Thursday, November 5, 2020 - link

    What are you talking about, have you seen the prices? We got a big leap but we also got a value-destroying price hike. 5800X is in line with 10900K throughout the suite, but is newer and no cheaper!
  • catavalon21 - Thursday, November 5, 2020 - link

    Agree. The 10850 hands the 5800X its backside in a great many contests, at about the same price point, yeah.
  • just4U - Thursday, November 5, 2020 - link

    It's just launch prices (..shrug) I'd pay the premium for the 5900x and the 5950x but the 3800? Hmm no.. I'd either opt in for the 3900x or a Intel 10core part first at that price. Needs to be priced $10 cheaper than the 10900 (non K) which brings it closer to the 8core 10700K price.
  • just4U - Thursday, November 5, 2020 - link

    err (should read 5800x) not 3800.
  • yankeeDDL - Friday, November 6, 2020 - link

    The 10850 peaks at 140W *more* than the 5800X. It's literally half as efficient as the 5800X. Running the 10850 on a daily basis can easily cost you much more than the CPU itself over its lifetime.
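    Whether the difference really exceeds the CPU's price depends entirely on usage and electricity rates; a rough sketch (the $/kWh rate and the hours per day are assumptions, not data from the review):

    ```python
    def yearly_energy_cost(extra_watts: float, hours_per_day: float,
                           usd_per_kwh: float) -> float:
        """Annual cost in USD of an extra power draw at a given duty cycle."""
        kwh_per_year = extra_watts / 1000 * hours_per_day * 365
        return kwh_per_year * usd_per_kwh

    # 140 W extra under load, at an assumed $0.15/kWh:
    print(round(yearly_energy_cost(140, 4, 0.15), 2))   # ~$31/yr at 4 h/day
    print(round(yearly_energy_cost(140, 24, 0.15), 2))  # ~$184/yr at 24/7
    ```

    So a few hours of gaming a day adds tens of dollars a year, while only a sustained 24/7 full-load workload over several years approaches the chip's sticker price.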
  • LithiumFirefly - Friday, November 6, 2020 - link

    Especially if you live in a climate that's warm part of the year; you're paying more for AC cuz that Intel chip is hot AF.
  • dagobah123 - Friday, November 6, 2020 - link

    This is so much more important than people realize. I think they should include a cost of ownership when discussing these prices like they do with cars.
  • lmcd - Monday, November 9, 2020 - link

    it wasn't important when AMD was behind so why is it important now?
  • Qasar - Tuesday, November 10, 2020 - link

    simple. if intel/nvidia does it, its ok, and accepted. but if amd does it ? its a crime, and becomes important.
  • TheinsanegamerN - Thursday, November 12, 2020 - link

    The electrical costs from running Intel vs AMD add up to literal cents per month. If you are that concerned... you shouldn't be buying $500 CPUs.

    Cost of ownership really only matters, similarly, on cheap low-end cars. People buying $100K+ Mercedes are not particularly concerned about the price of parts or fuel; if they were, they wouldn't be buying a $100K car.
  • Threska - Monday, November 16, 2020 - link

    Funny thing, my APC UPS keeps track of something like that for things plugged in. The only thing that demonstrates is that everything costs, even FUN.
  • Spunjji - Sunday, November 8, 2020 - link

    Only if you totally ignore performance per watt... You need a cooler capable of dissipating up to 250W to hit that performance, and even then, your characterisation here is garbage. Overall the 5800X is a superior product for the same price, and it's only just been released.

    Let the shitty, bitter takes continue!
  • Gigaplex - Thursday, November 5, 2020 - link

    And when Intel held the performance crown, they priced their parts higher than the competition. This is to be expected. AMD only undercut on price because they couldn't compete on performance previously.
  • LithiumFirefly - Friday, November 6, 2020 - link

    They didn't just price their parts higher. For nearly 25 years they just slapped a $1,000 price tag on their top chip; didn't matter what its performance was, $1,000 is what it was.
  • just4U - Thursday, November 5, 2020 - link

    The 5900X is nice at its price point, at only 3-10 bucks more than the 10900K, which appears to be what it's competing with... and all the 10-core parts, really. The 5800X is in an odd position, and I doubt it's going to be all that popular at that price point.
  • Spunjji - Sunday, November 8, 2020 - link

    5600 and 5700 non-X will be where it's at for value when they roll around.
  • just4U - Monday, November 9, 2020 - link

    Yeah I agree.. plus it's likely that prices will come down on these parts somewhat or be offered on sale or bundled..(saw a bit of that on launch day but they all sold out so whatever)
  • bananaforscale - Monday, November 9, 2020 - link

    "Value destroying price hike"? Sure, for the 3600 vs 5600X (which is *arguably* comparable), but for 3900X vs 5900X there's no contest: the 5900X is demonstrably more than 10% faster in most cases. FWIW, I'd go for the 5800X over the 10900K, performance being equal, because of PCIe 4.0 and lower power draw.
  • Luminar - Thursday, November 5, 2020 - link

    Cache Rules Everything Around Me
  • SIDtech - Thursday, November 5, 2020 - link

    Hi Andrei,

    Excellent work. Do you know how this performance shapes up against the Cortex A77 ?
  • t.s - Friday, November 6, 2020 - link

    Seconded. Want to know how the likes of the Ryzen 3 4350G or 5600 compare versus the Cortex-A77 or A78.
  • Kangal - Saturday, November 7, 2020 - link

    It's hard to say, because it really depends on the instructions/software; it is very situational. It also depends on the type of device it is powering: you can move up from phones, to thin tablets, to thick laptops, to large desktops, and up to a server. Each device offers different thermal constraints.

    The lower-thermal devices will favour the ARM chip, the mid-level will favour AMD, and the higher-thermal devices will favour Intel. That WAS the rule of thumb. In general, you could say Intel's SkyLake has the single-threaded performance crown, then AMD's Zen+ loses to it by a notable margin but beats it in multi-threaded tasks, and then going to an ARM Cortex A76 will have the lowest single-thread but the highest multi-threaded performance.

    Now?
    Well, there's the newly launched 2021 AMD Zen3 processor. And the upcoming 2021 ARM Cortex-X Overclocked Big-core using the new A78 microarchitecture. Lastly there's the 2022 Intel Rocket Lake yet to debut. So it's too early to tell, we can only make inferences.
  • Kangal - Saturday, November 7, 2020 - link

    Here is my personal (yet amateur) take on the future 2020-2022 standpoints between the three racers. Firstly I'll explain what the different keywords and attributes mean
    (from most technical to most real-world implication)

    Total efficiency: (think Full Server / Tractor) how much total calculations versus total power draw
    Multi-threaded: (think Large Desktop / Truck) how much total calculations
    Single-threaded: (think Thick Laptop / Car) how much priority calculations
    IPC performance: (think Thin Tablet / Motorbike) how much priority calculations at desirable frequency/voltage/power-draw

    *Emulating:
    Having a "simple" ARM chip running "complex" x86 instructions, such as running 32-bit or 64-bit OS X or Windows programs, via new techniques of emulation using partial-hardware and hybrid-software solutions. I think the hit to efficiency will be around 3x, instead of the expected 12x degradation.

    So here are the lists (from most technical to most real-world implication)
    Simple Code > Mixed code > Recommended Solution

    Here's how they stack up when running identical new code (ie Modern Apps):
    Total efficiency: ARM >>>> AMD >> Intel
    Multi-threaded: ARM > AMD > Intel
    Single-threaded: Intel = AMD > ARM
    IPC performance: ARM >>> AMD > Intel

    Now what about them running legacy code (ie x86 Program):
    Efficiency + *emulating: AMD > Intel >> ARM
    Multi + *emulating: AMD > Intel >> ARM
    Single + *emulating: Intel = AMD >>> ARM
    IPC + *emulating: AMD > Intel > ARM

    My recommendation?
    Full Server: 60% legacy 40% new code. This makes ARM the best option by a small margin.
    Large Desktop: 80% legacy 20% new code. AMD is the best option with modest margin.
    Thick Laptop: 70% legacy 30% new code. Intel is the best. AMD is very close (tied?) second.
    Thin Tablet: 10% legacy 90% new code. ARM is the best option by huge margin.
  • Tomatotech - Monday, November 9, 2020 - link

    Excellent post, but worth pointing out that *all* modern x86 chips effectively emulate x86 and x64 code. They run a front end that takes x86/x64 machine code and converts it into RISC-like micro-ops, which pass through various microcode and translation layers before being processed by the back end. That black-box structure has allowed swapping out and optimising the back end for decades while maintaining code compatibility on the front end.

    So it’s not as simple to differentiate between the various chips as you make it out to be.
  • Gondalf - Sunday, November 8, 2020 - link

    I don't know. Looking at the SPEC results, we can say AnandTech is absolutely unable to set up a SPEC session correctly. From the review, Zen 2 is slower per GHz than old Skylake in integer, which is absolutely wrong for consumer cores (for server cores, yes); even worse, the Ice Lake core is around as fast as old Skylake per GHz.
    Basically this review is rushed, and very likely they have set all the AMD compiler flags to "fast" to get more clicks and a lot of hype.
    My God, for AnandTech Zen 3 is 35% faster in the overall SPEC values than Zen 2. Not even AMD's worst marketing slide says this. We have Zen 4 here, not Zen 3. Wait, wait, please.
    A really crap review; the author needs to go back to school about SPEC.

    Obviously the article does not say that 28W Tiger Lake is unable to run at 4.8GHz for more than a couple of seconds, after which it throttles down; so the same Willow Cove core on a desktop CPU could destroy Zen 3 without mercy in a CB session. Not to mention the far slower memory subsystem of a mobile CPU.

    Basically looking at games results, Rocket Lake will eclipse this core forever. AMD has nothing new in hand; they need to wait for Zen 4.
  • Qasar - Sunday, November 8, 2020 - link

    Yea, OK Gondalf, trying to find ways that your beloved Intel doesn't lose at everything now?
    Accept it: AMD is faster than Intel across the board.
  • Spunjji - Monday, November 9, 2020 - link

    That's a strange claim about Tiger Lake performance, Gondalf, because I seem to recall Intel seeding all the reviewers with a laptop that could run TGL at 4.8Ghz boost 'til the cows come home - and that's what Anandtech used to get that number. It's literally the best they can do right now. You're right of course - in actual shipping ultrabooks, TGL is a hot PoS that cannot maintain its boost clocks. Maybe by 2022 they'll finally put Willow Cove into a shipping desktop CPU.

    "Basically looking at games results, Rocket Lake will eclipse this core forever"
    If by "eclipse" you mean gain a maximum 5% advantage at higher clock speeds and nearly double the power draw then sure, "eclipse", yeah. 🤭

    I love your posts here. Please, never stop stepping on rakes like Sideshow Bob.
  • macroboy - Saturday, December 12, 2020 - link

    LOL, look at AMD's efficiency and sustained core clocks; Intel runs too hot to stay at 5GHz for very long, meanwhile Zen 3 plows along at 55°C no problem. You're the one who needs to check your facts.
  • Andrew LB - Sunday, December 13, 2020 - link

    5800X @ 3.6-4.7GHz draws 219W and hits 82°C, and locked at 4.7GHz it's 231W and 88°C.

    That's hotter than my i7-10700K @ 5.1GHz all-core locked.

    https://www.kitguru.net/wp-content/uploads/2020/11...
  • Thunder 57 - Monday, April 26, 2021 - link

    This comment didn't age well...
  • AndyMclamb - Tuesday, September 28, 2021 - link

    RIP AMD. One year later, Intel destroys AMD with Alder Lake.
  • jeremyshaw - Thursday, November 5, 2020 - link

    Yes! All I wanted to see was on the Cache and Latency parts - the unified cache allows 6 core and 12 core setups without the penalties of having partial CCXs!
  • JfromImaginstuff - Thursday, November 5, 2020 - link

    Wow, just wow,
    Intel, hang in there you'll get there eventually
  • PandaBear - Friday, November 6, 2020 - link

    In 2023 maybe.
  • Spunjji - Monday, November 9, 2020 - link

    It could be as soon as 2022 that they become properly competitive on power and performance, depending on how TSMC 5nm and Zen 4 shake out for AMD.

    Rocket Lake ought to at least give them a presence in mid-range gaming, if you can stomach the power...
  • 5j3rul3 - Thursday, November 5, 2020 - link

    No Microsoft Flight Simulator 2020 test?
  • 5j3rul3 - Thursday, November 5, 2020 - link

    MFS 2020 is great for testing CPU performance in games.
  • gagegfg - Thursday, November 5, 2020 - link

    https://www.anandtech.com/show/16214/amd-zen-3-ryz...
  • gagegfg - Thursday, November 5, 2020 - link

    upsss:
    https://cdn.mos.cms.futurecdn.net/vnF56By3SL2xWGrc...
  • Spunjji - Sunday, November 8, 2020 - link

    Shame they didn't have the 5800X and 5600X in there, would be interesting to see how they line up too. Strong progress indeed from AMD!
  • 5j3rul3 - Thursday, November 5, 2020 - link

    The BEST moment of AMD👍👍👍
  • Tunnah - Thursday, November 5, 2020 - link

    The eventual 5700X is going to be an absolute sales smasher I reckon.
  • Smell This - Thursday, November 5, 2020 - link

    Is the Zen2 end-of-life?

    The AMD Ryzen 7 3700X at $300 could sure put a really big squeeze on the i7-10700K.
  • haukionkannel - Friday, November 6, 2020 - link

    Most likely Zen 2 is at the end of the line. AMD will produce Zen 3 at TSMC 7nm and Zen+ at GlobalFoundries 12 or 14nm...
  • FireSnake - Thursday, November 5, 2020 - link

    Gold reward.
    Haven't seen this here for quite a while.
  • Ryan Smith - Thursday, November 5, 2020 - link

    We haven't had a CPU worthy of one in quite a while. It's nice to be able to hand out awards like these.=)
  • just4U - Thursday, November 5, 2020 - link

    What's amazing about that, Ryan, is it's an AMD processor. It seemed like you guys really wanted to give the gold award last time around with the 3000 series... but then opted for the silver award, which wasn't too shabby, as awards have become very uncommon even in a good review of a product you're impressed with. Great review by Ian, good job guys.
  • Byte - Saturday, November 7, 2020 - link

    Save your next gold for the radeon 6900!
  • Threska - Monday, November 16, 2020 - link

    Depends upon advantage.

    https://www.fool.com/investing/2020/11/16/nvidia-l...
  • FreckledTrout - Thursday, November 5, 2020 - link

    AMD finally has an Intel-beater on its hands, at least until Rocket Lake arrives. Having actual competition is going to be great for computing. Nice review.
  • duploxxx - Saturday, November 7, 2020 - link

    Nothing confirmed on Rocket Lake...

    Fishy results, with a so-called average turbo GHz which actually shows it was doing 5GHz, and a totally unknown release date, expected at the end of Q1 2021, on a dead platform with some kind of PCIe 4.0. Yeah, really looking forward to it.
  • Spunjji - Sunday, November 8, 2020 - link

    They'd have to get north of 5.3Ghz consistently to beat AMD.

    I just don't think they can, which would make the product pretty hilarious - big die, lots of heat, no performance crown.
  • hbsource - Thursday, November 5, 2020 - link

    Very impressive. I think I'm good with my 3950X until the next socket but the single thread uplift is very tempting.
  • FireSnake - Thursday, November 5, 2020 - link

    @Ian:
    "With AMD taking the performance crown in almost area it’s competing in"
    Should this be:
    "With AMD taking the performance crown in almost every area it’s competing in" ... missing every?
  • charlesg - Thursday, November 5, 2020 - link

    Now to just find the 5950 in stock at NewEgg!
  • faizoff - Thursday, November 5, 2020 - link

    Quick question on encoding with Handbrake: for the 4K encoding, and even the others for that matter, what preset are they run at (fast, medium, slow?), and what RF value are the encodes set to? Sorry if I missed those, I don't see them at a glance. Amazing review as always. Best tech deep dive for me, I love to read the architectural breakdown.
  • GeoffreyA - Monday, November 9, 2020 - link

    I think AT is using Handbrake's presets: (a) Discord Nitro 480p30, (b) Vimeo YouTube 720p30, and (c) HEVC 2160p60. I went through them now and here are the settings:

    A) Medium, CRF = 21
    B) Medium, CRF = 22
    C) Slow, CRF = 24

    If you were looking for the reference frames, they are 3, 1, and 4. And there's a possibility Anandtech might have altered the presets.
  • DigitalFreak - Thursday, November 5, 2020 - link

    Does Purch require you to use at least one bad pun in every article?
  • Spunjji - Sunday, November 8, 2020 - link

    No, that's me
  • yeeeeman - Thursday, November 5, 2020 - link

    Ian, you need to buy some new servers for anandtech.com now that AMD has launched zen 3.
    The site is barely loading.
  • DigitalFreak - Thursday, November 5, 2020 - link

    I wonder if they're still running on the last hardware upgrade Anand did.
  • Ryan Smith - Thursday, November 5, 2020 - link

    Nah, we're a couple of generations past that now.
  • Phiro69 - Thursday, November 5, 2020 - link

    As far as I can tell, it's cloudfront having problems, not Anandtech's backend. I would be surprised if they aren't 100% cloud based at this point, too.
  • gagegfg - Thursday, November 5, 2020 - link

    This is what I expected from AMD. It took 10 years, but it came!!
  • gagegfg - Thursday, November 5, 2020 - link

    Athlon 64 X2 2005 = 15 años
  • Tomatotech - Monday, November 9, 2020 - link

    15 anuses? Surely it’s not *that* bad ;)
  • ahenriquedsj - Thursday, November 5, 2020 - link

    In competitive games it is a massacre.
  • Double Trouble - Thursday, November 5, 2020 - link

    What AMD has been able to achieve over the past few years is definitely impressive, and this 5000 series CPU set is excellent. However, I do wonder if climbing up the price / segment chart is going to take a toll. For me, I've upgraded 5 PC's from older CPU's to Ryzen 5 3600 and 3600X because the price was very reasonable (about $170). With a minimum of $300 for the new 5600X, that's almost double the price, so I won't be buying any for a long time. The 5000 series is impressive, but not worth that kind of a steep price. I wonder if a lot of other buyers might be in the same boat.
  • Smell This - Thursday, November 5, 2020 - link


    AMD discounts old stock until gone __ it is hard to keep up.

    I prefer *less than bleeding edge* __ the example you have given is the Ryzen 5 3600.

    $149.06 at Amazon has me interested {| ;--|)
  • nandnandnand - Thursday, November 5, 2020 - link

    3600X was $250 at launch. You are comparing a discounted 3600 price to a newly released CPU... of a higher tier (non-X vs. X)... during a pandemic with mostly heightened tech prices.

    Prices will come down, and Ryzen 5 5600 is rumored to come in at $220 in early 2021.
  • Smell This - Thursday, November 5, 2020 - link


    You have to back that up ~~~ LOL

    The Ryzen 5 5600X is butting heads with the i7-10700K at $387 (or $88 less). Is this one of your Ass Facts?
  • nandnandnand - Thursday, November 5, 2020 - link

    It will come down just like Ryzen 3000 CPUs went down. Probably in response to Rocket Lake in Q1.
  • Smell This - Thursday, November 5, 2020 - link


    I don't know.
    The AMD product mix is seriously stout, with last gen offering +2 cores/threads. A 3700X is killer and comparable to the new 5600X. There will be a 5600, and at $260 it will slobber-knock Intel's 6-cores.
  • silverblue - Thursday, November 5, 2020 - link

    nandnandnand did say it was a rumour, so there's no need to be rude. A quick search on Google brought up articles on The Guru of 3D, KitGuru, TweakTown, OC3D, NotebookCheck and TechPowerUp, either referring to a Korean translation or a table from VideoCardz.com. One theory is that AMD is waiting for 400-series BIOS updates to be released.
  • Smell This - Thursday, November 5, 2020 - link


    Backed up by WCCF ?? LOL
    ~~ you guys have bumped your heads
  • silverblue - Friday, November 6, 2020 - link

    And you're just a troll with no counter-argument, and nothing of interest to add.
  • Smell This - Friday, November 6, 2020 - link

    Troll? LOL
    Once again, you guys have bumped your heads. It is all a circle-jerk that links back to itself and WCCF

    "Source: @harukaze5719 via Wccftech"
    "Please note that this post is tagged as a rumor."
    "Recently, this article was posted, but I couldn't find the post's source. 😭 My search ability is still low…"

    Bigger LOL __ You included searches that have nothing to do with NotebookCheck and TechPowerUp

    Who is the TROLL??? HA!

    Go away
  • silverblue - Friday, November 6, 2020 - link

    1) The word "rumor" has been emphasised on various occasions. How you're struggling to comprehend that is beyond me.
    2) AMD will launch lower-end parts within one or two quarters. It's what they've done since Zen came out in 2017.
    3) NotebookCheck did indeed make a news post referencing harukaze5719
    4) TechPowerUp did indeed credit the source of their news post to @harukaze5719
  • Badelhas - Friday, November 6, 2020 - link

    I totally agree. I've upgraded from the last true overclocking champion from Intel (i5 2500K @ 4.8GHz from 8 years ago) to the 3600; it was finally worth it, but going from 200 to 300 euros is a bit too much of an increase in price, in my humble opinion.
  • Spunjji - Sunday, November 8, 2020 - link

    They're not really comparable, though. I'm weirded out by how many people are comparing the 3600 to the 5600X. The X is a bit of a giveaway.
  • Kallan007 - Saturday, November 7, 2020 - link

    I just buy new and sell off the old. But if you want a price break then just wait.
  • Spunjji - Sunday, November 8, 2020 - link

    I doubt it will. They'll sell every one they can make, and if not, there's no reason they can't begin to lower prices as supply begins to exceed demand.
  • Threska - Monday, November 16, 2020 - link

    Socket longevity is the important thing here for anyone playing the value game. You may not buy the latest and greatest NOW, but the future allows for it without starting completely over.
  • UNCjigga - Thursday, November 5, 2020 - link

    I suppose the only thing missing is a chipset/IO package with USB 4 support? Not a big deal for desktops--but I hope they have that figured out by the time Zen 3 is ready for mobile parts.
  • Spunjji - Sunday, November 8, 2020 - link

    That would be nice to see. I have a suspicion we won't see it until the new socket arrives on desktop, but would be good to get it with Cezanne on mobile.
  • Machinus - Thursday, November 5, 2020 - link

    Looks like a great set of chips for anyone who gets one mailed to them directly from AMD.

    Good luck buying one in a store.
  • charlesg - Thursday, November 5, 2020 - link

    I have to say I'm disappointed in the availability of the 5900 and 5950. I expected better.
  • lmcd - Thursday, November 5, 2020 - link

    Yea honestly isn't this the whole point of the chiplet model? Or is the IO die different for the 2-chiplet models? I assume it's not packaging constraints because that makes no sense.
  • Spunjji - Sunday, November 8, 2020 - link

    IO die is the same between all of them - they probably just haven't churned enough chiplets out yet. Those top-end chips probably need a high bin to reach their intended clocks and power levels, too.
  • lmcd - Monday, November 9, 2020 - link

    That seems like a mistake then -- should've released a 5890 and 5940 with lower clocks. At some point professionals are buying for IPC, thread count, and base clock speed.
  • Qasar - Tuesday, November 10, 2020 - link

    How is that a mistake? If there's no need to change the IO die yet, why change anything?
  • Spunjji - Sunday, November 8, 2020 - link

    On launch? Not really.

    If they're still unavailable a month or two from now, I'll be greatly disappointed.
  • danbob999 - Thursday, November 5, 2020 - link

    480p Low quality gaming benchmarks? Really? Does anyone really play Civ6 with those settings?
    What's the point? Who cares if CPU X has 454 fps while Y only does 322?
  • Hxx - Thursday, November 5, 2020 - link

    those are unrealistic scenarios just to showcase the IPC gains over prev gen and competition. But yeah normally you would pick the resolution you are playing at and go from there. In this case at 1080p / 1440p it trades blows with Intel in most titles.
  • silverblue - Thursday, November 5, 2020 - link

    I'm not sure why the test revolves around frame rate, and not turn time. To use Gamers Nexus as a source, the 5950X completes a turn in 26.6 seconds, whereas the 10900K does it in 30.9 (29.3 OC to 5.2GHz), and the 3950X in 32.4. So, in this one test, the 10900K takes 16% longer, and the 3950X 22%.
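    To double-check those percentages (a quick back-of-the-envelope calculation, using the Gamers Nexus turn times quoted above):

    ```python
    # Civ VI late-game turn times quoted above (seconds, per Gamers Nexus).
    t_5950x = 26.6
    t_10900k = 30.9
    t_3950x = 32.4

    # How much longer each chip takes relative to the 5950X.
    extra_10900k = (t_10900k / t_5950x - 1) * 100  # ~16%
    extra_3950x = (t_3950x / t_5950x - 1) * 100    # ~22%
    print(f"10900K: +{extra_10900k:.0f}%, 3950X: +{extra_3950x:.0f}%")
    ```
    
    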
  • Spunjji - Sunday, November 8, 2020 - link

    Yeah, I was a bit confused by not seeing turn times for Civ as that's the really big drag in late game scenarios.
  • ExarKun333 - Thursday, November 5, 2020 - link

    Zen 3 feels a lot like Core 2 ~14 years ago. Wow, very impressive.
  • ExarKun333 - Thursday, November 5, 2020 - link

    And 'Hammer' 4-5 years before then, to credit AMD then as well. It has been a long time since we had something this exciting.
  • lmcd - Thursday, November 5, 2020 - link

    IMO Sandy Bridge was this exciting. The IPC on that release was absolutely insane compared to Nehalem.
  • ingwe - Friday, November 6, 2020 - link

    Agreed. That definitely seemed like the last big excitement though. Can't wait to upgrade!
  • Slash3 - Saturday, November 7, 2020 - link

    Yep. My case was a bit different, but I went from a launch date 2600K which had been running at 5GHz to a 3950X last November. It was a pretty solid single core upgrade (although not as dramatic as you'd think since the 2600K was so topped out - CPU-Z SC score went from 478 to 545) but the multi core performance obviously blew it entirely out of the water.

    AMD's 5950X, though? Single core CPU-Z score is ~680. Six eighty! Stock!

    The jump in single core performance between the 5950X and the 3950X is almost -double- what it was in going from my 2600K to the 3950X. That's absolutely monstrous.
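    Those CPU-Z numbers do bear that claim out (a quick check using the approximate scores quoted above):

    ```python
    # CPU-Z single-thread scores quoted above (approximate).
    score_2600k = 478  # i7-2600K @ 5 GHz
    score_3950x = 545  # Ryzen 9 3950X
    score_5950x = 680  # Ryzen 9 5950X, stock (~680)

    jump_8_years = score_3950x - score_2600k  # 67 points, 2600K -> 3950X
    jump_1_gen = score_5950x - score_3950x    # 135 points, 3950X -> 5950X

    # The single-generation jump is roughly double the eight-year one.
    print(jump_8_years, jump_1_gen, jump_1_gen / jump_8_years)
    ```
    
    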
  • Spunjji - Sunday, November 8, 2020 - link

    Fair point there, Slash. This may indeed be the best thing since Sandy, and damn was I excited when that released!
  • citan x - Thursday, November 5, 2020 - link

    Micro center had some in stock at the store even though there was a huge line to enter when I got there 5 minutes before opening. However, they only had 5600x and 5800x in stock. I wanted a 5950x and they said they never got those in stock. I have not found any 5950x in stock anywhere.
  • charlesg - Thursday, November 5, 2020 - link

    Yeah I'm wondering if the 5950x is actually available yet? Or if some bots had insider info on pages to buy them the instant they were available...
  • Holliday75 - Thursday, November 5, 2020 - link

    I am seeing them listed on eBay starting a little over $1,000 and going up to $2,000.
  • nandnandnand - Thursday, November 5, 2020 - link

    The listings say "Locate in Store - Unavailable Online", with a small amount of 5800X and 5600X available at my store. So no bots, you have to show up in person. It also says "Limit 1 per household" although I imagine you could get a couple of friends with different credit cards and get 1 of each model per person.
  • nandnandnand - Thursday, November 5, 2020 - link

    I can't tell you when it will be back in stock, but that's not unusual for day 1.
  • GeoffreyA - Thursday, November 5, 2020 - link

    Battling to load the article for hours, but looks like it's finally working. On page 3 now.
  • GeoffreyA - Friday, November 6, 2020 - link

    Thanks for the excellent analysis, AT. Zen 3 has delivered, even more than expected. Brings back memories of the Athlon 64 FX-51 at the top of charts and later the Core 2 Duo, which left K8 dead on the floor. I am impressed by the IPC's having gone up so much but power remaining roughly the same. And, besides the widening, surprisingly conservative, there are a lot of "intelligent" techniques bringing about improvement (reminiscent of the Pentium M in a way). All in all, outstanding work from AMD. The engineers deserve a round of applause.
  • GeoffreyA - Saturday, November 7, 2020 - link

    Ian, not sure if I missed it, but what version of Windows does the test suite use? The CPU overload article says 1909. According to TechPowerUp, there have been some scheduler changes since 1903, and the difference in performance was a few percent for Zen 3. Thanks.

    https://www.techpowerup.com/review/amd-ryzen-9-590...
  • mjcutri - Thursday, November 5, 2020 - link

    Was pleasantly surprised that I was able to pick up a 5600x from newegg this morning. It'll be a nice upgrade from my i7-3920 that I've been running for 8 years!
  • lmcd - Thursday, November 5, 2020 - link

    The thing that's so funny to me is how well Sandy Bridge E has held up. Nearly every board supports PCIe 3.0 and SATA III, quad-channel memory means it's not memory bound, and it clocks up quite well.

    Obviously performance per watt sucks and it doesn't game as well anymore, but the feature set is way more usable than I'd ever have expected 8 years later.
  • Spunjji - Sunday, November 8, 2020 - link

    It was a really solid platform. Throw in an SSD on PCIe and you wouldn't miss an awful lot from a more modern system, but it looks like that point has finally been reached... 8 years later!
  • lmcd - Monday, November 9, 2020 - link

    I wish both Intel and AMD would bring prosumer platforms back. While obviously SLI and XF are dead, PCIe lanes are nice to have and I/O futureproofing is actually impossible anymore.
  • Qasar - Tuesday, November 10, 2020 - link

    I'd just like to see more PCIe lanes. Maybe 8-16 more?
  • Vitor - Thursday, November 5, 2020 - link

    The craziest thing is that AMD has a very easy upgrade path with 5nm being available.
  • CrystalCowboy - Thursday, November 5, 2020 - link

    5 nm, DDR5, PCIe 5, USB4. These are all obvious future developments. They will do a new socket for that; will be interesting to see what they do with that.
  • Orkiton - Thursday, November 5, 2020 - link

    Please Anandtech, up-Ryzen and up-Epyc your servers; it takes ages here to load a page...
  • PrionDX - Thursday, November 5, 2020 - link

    Mmm nice warm code bath, very relaxing

    > Results for Cinebench R20 are not comparable to R15 or older, because both the scene being used is different, but also the updates in the code bath.
  • prophet001 - Thursday, November 5, 2020 - link

    How you gonna test FFXIV and not WoW.

    -______________-
  • Mr Perfect - Thursday, November 5, 2020 - link

    Guys, could you please define acronyms the first time they are used in an article? Take page three for example: it touches on a BTB, TAGE and ITA, but only ITA is defined. I have no idea what a BTB or TAGE is. If they were defined on page two and I missed them, feel free to ignore me.
  • name99 - Saturday, November 7, 2020 - link

    BTB= Branch Target Buffer. Holds the addresses where a branch will go if it taken.

    TAGE= (tagged geometric something-or-other) name is not important; what matters is that it's currently the most accurate known branch predictor. Comes in a few variants, was 1st published around 2007 by Andre Seznec who has since gone on to show how it can be used for damn well everything! (Value prediction, indirect branch, prefetching, washing windows, you name it.)
    Apple seems to have been first to implement, maybe as early as the A7, certainly very soon in the A series.
    Now everybody uses it, but only in the last year or so has everyone really got on board. (Actually to be precise Seznec suggested that Intel is using TAGE based on their performance characteristics, but I don't think Intel have confirmed this. And ARM probably are but again unconfirmed. IBM is confirmed, and now AMD.)

    Even if you know the basic algorithm for direction prediction is TAGE, that still doesn't make everyone equal. There are MANY extra aspects where everyone is different. The most obvious is how much storage is given to the branch predictor, but other less obvious aspects include
    - how do you predict indirect branches? State of the art is ITTAGE, but that doesn't mean everyone is using it.
    - how do you update your branch prediction storage (ie how fast do corrections get from the backend into the predicting mechanism at the front end)
    - how do you implement your L2 storage and second-stage prediction?
    - what extra "specialist" predictors do you have? (These are things like special-case predictors for loops.)
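    To make the idea concrete, here's a toy Python sketch of the core TAGE mechanism (my own illustration, not AMD's or Seznec's actual design; table sizes, hash functions, and counter widths are all arbitrary): a bimodal base table plus tagged tables indexed by hashes of the PC and geometrically increasing global-history lengths, where the longest-history tag match provides the prediction and mispredictions allocate into a longer-history table.

    ```python
    class TagePredictor:
        """Toy TAGE-style branch predictor (illustrative only)."""

        def __init__(self, n_tables=4, table_bits=10, tag_bits=8):
            self.size = 1 << table_bits
            self.tag_mask = (1 << tag_bits) - 1
            # Base predictor: 2-bit saturating counters, start weakly not-taken.
            self.base = [1] * self.size
            # Tagged tables: entries are [tag, 3-bit counter] or None.
            self.tables = [[None] * self.size for _ in range(n_tables)]
            # Geometrically increasing history lengths: 4, 8, 16, 32 bits.
            self.hist_lens = [4 * (2 ** i) for i in range(n_tables)]
            self.history = 0  # global branch history as a bit vector

        def _index(self, pc, hlen):
            h = self.history & ((1 << hlen) - 1)
            return (pc ^ h ^ (h >> 3)) % self.size  # arbitrary hash

        def _tag(self, pc, hlen):
            h = self.history & ((1 << hlen) - 1)
            return (pc ^ (h >> 1)) & self.tag_mask  # arbitrary hash

        def predict(self, pc):
            # Longest-history table with a tag match provides the prediction.
            for i in reversed(range(len(self.tables))):
                e = self.tables[i][self._index(pc, self.hist_lens[i])]
                if e is not None and e[0] == self._tag(pc, self.hist_lens[i]):
                    return e[1] >= 4, i
            return self.base[pc % self.size] >= 2, None  # fall back to base

        def update(self, pc, taken):
            pred, provider = self.predict(pc)
            if provider is None:
                c = self.base[pc % self.size]
                self.base[pc % self.size] = min(3, c + 1) if taken else max(0, c - 1)
            else:
                e = self.tables[provider][self._index(pc, self.hist_lens[provider])]
                e[1] = min(7, e[1] + 1) if taken else max(0, e[1] - 1)
            # On a mispredict, allocate an entry in a longer-history table.
            if pred != taken:
                start = 0 if provider is None else provider + 1
                for i in range(start, len(self.tables)):
                    idx = self._index(pc, self.hist_lens[i])
                    if self.tables[i][idx] is None:
                        self.tables[i][idx] = [self._tag(pc, self.hist_lens[i]),
                                               4 if taken else 3]
                        break
            # Shift the outcome into the global history.
            self.history = ((self.history << 1) | int(taken)) & ((1 << 64) - 1)
    ```

    A repeating taken/taken/taken/not-taken branch defeats a plain 2-bit counter, but a short-history tagged table learns it quickly; that's the win from correlating on history, and real designs differ exactly in the knobs listed above (storage, history lengths, update policy, extra specialist predictors).
    
    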
  • quantcon - Thursday, November 5, 2020 - link

    Yeah, it's actually kinda nuts, considering Intel convinced us years ago that we've hit the point of diminishing returns and there are hardly any IPC improvements to be had.
  • Spunjji - Sunday, November 8, 2020 - link

    Seems like they needed to believe that...
  • DanD85 - Thursday, November 5, 2020 - link

    This just goes on to prove yet again how crucial a healthy competition benefits everyone. Intel has been stagnating for more than a decade. Imagine where we would have been performance-wise if we had got this ~40% increase every 3 years. Intel only have themselves to blame. They are the chipzillla, the gatekeeper and the choker of the whole industry!
  • lmcd - Monday, November 9, 2020 - link

    40% is a bit disingenuous. Most of the gap in desktop is chiplet design. Notice how mobile, while AMD-favored, is still competitive? It's just a bad bet from Intel going with stacked packaging before same-package flat chiplet, and the packaging techniques for both are very new. There aren't 40% improvements on the table going forward, Bulldozer and Piledriver were both just awful and AMD didn't ever release full desktop Steamroller or Excavator (which were fine, not great). Zen 1 left a lot on the table for such a big increase as well.
  • GeoffreyA - Tuesday, November 10, 2020 - link

    If you place Zen at Haswell's level, it took AMD three years to reach Zen 3 (from the consumer's point of view). On Intel's side, it's taken six years to go from Haswell to Sunny Cove.

    Even in the early tick-tock days, when more massive changes could be put in, it was usually two years apart for microarchitecture: Core (2006), Nehalem (2008), Sandy Bridge (2010/11), etc.

    Whether there's a lot more juice in the tank for Zen remains to be seen. In my opinion I think there is: Z3's out-of-order structures are still quite conservative, compared to Sunny Cove, which it beats, so there's possibility of more widening there. I also think their split scheduler design, inherited from the Athlon, will allow them to scale more easily. Of course, I know the engineers in Haifa must be cooking up something potent too. Either way, exciting stuff.
  • Hifihedgehog - Thursday, November 5, 2020 - link

    Bloodbath
  • gagegfg - Thursday, November 5, 2020 - link

    Intel is preaching Moore's Law, AMD is executing it.
  • FoRealz - Thursday, November 5, 2020 - link

    Wow 5600x beating the 5900x in almost everything?
  • Hul8 - Thursday, November 5, 2020 - link

    Regarding "Why Does AMD Not Promote 5.0 GHz?":

    I'm sure AMD would rather talk to journalists about why some of their CPUs boost +150MHz beyond the advertised boost frequency, and whether that's normal, than about many of the CPUs not reaching the advertised number... They learned that lesson.
  • haukionkannel - Friday, November 6, 2020 - link

    Indeed! Last time they marketed that some golden samples would boost up to xxx... and people's reaction was really angry! Now, under-promising and over-delivering earns them goodwill instead, so yeah, they are learning!
    Because in the end, independent reviews will reveal the real speeds eventually, so it doesn't hurt at all to under-advertise; the reviews will do all the advertising AMD needs!
  • Spunjji - Sunday, November 8, 2020 - link

    100%
  • poohbear - Thursday, November 5, 2020 - link

    Doesn't look like much of a difference between the mainstream 5800X and the 10700K. It'll boil down to price when choosing between them. TBH I'm a bit disappointed, as I was expecting a trouncing of Intel by AMD, but now we see AMD is just matching Intel, which is great, but since the 5xxx AMD CPUs are matching Intel's in price, we haven't really reached new heights of performance or anything. But I guess the silver lining is competition is back, and now performance matters again.
  • WaltC - Friday, November 6, 2020 - link

    In some games, which is what I assume you are talking about since AMD walks away with almost all productivity software, AMD holds a lead of 30% +. Secondly, when game-engine optimization for Zen2/3 begins to mature and become widespread, AMD should walk away with most of it. It's interesting that in the titles where the most expensive Intel CPUs keep pace with the 5600X in terms of frame rates, the game engines were heavily optimized for Intel architectures.
  • haukionkannel - Friday, November 6, 2020 - link

    Well, Intel did cut prices a lot to get even with AMD, so AMD has done its job in that part. Intel still has CPUs that need even more price cuts!
    But that is how the market works. If Intel wants to compete with AMD, it has to do it with prices! And in the end it means you will get both companies' products in about the same price vs performance range! The point is that because of AMD, Intel has to compete on price, and as long as AMD keeps making these advancements Intel has to react. That means there is competition and neither company can get too greedy! If one company has a huge lead, it means higher prices. Now that both companies have good products, the market will take care of the pricing!
  • duploxxx - Saturday, November 7, 2020 - link

    What are you smoking?

    If you want better game performance, you take the 5600X: it's faster everywhere vs the 10700K, cheaper, very close in productivity performance, and has way lower power consumption on top.
    If you want more productivity and gaming, you spend just a bit more and you get both. And still lower power consumption.

    There are 0 reasons to buy an Intel CPU right now in this price range. They will need to drop at least $50-100 to stay competitive. Not to mention their boards lack PCIe 4.

    Let's see what Smart Access Memory does soon in combo with the new Radeons, and there will be even fewer reasons to buy the Intel parts.
  • Spunjji - Sunday, November 8, 2020 - link

    True, that combo will rock
  • Spunjji - Sunday, November 8, 2020 - link

    Why is 100 watts extra power to get that performance suddenly NBD?
  • CrystalCowboy - Thursday, November 5, 2020 - link

    If they would die-shrink that I/O chiplet, they might have room for a third CCD...
  • adt6247 - Thursday, November 5, 2020 - link

    Then most motherboards would likely have a hard time with power delivery for 8 additional cores.

    Also, the IO die only supports 2 CCDs -- it doesn't have the lanes for more. Hence the Threadripper parts with 4 CCDs being NUMA devices: two CCDs with direct access to RAM, and two without.
  • lmcd - Thursday, November 5, 2020 - link

    Also package size constraints have about just as much to do with pin count. Necessary pin count for supporting 2 more CCDs' worth of bandwidth and power (regardless of board's ability to supply it) pushes AMD entirely out of the ITX motherboard market (barring absolutely insane designs that cost $300-400 for minimum features beyond turning on). And any situation using more than 16 cores needs more bandwidth, so the pin count increase -> no ITX is really a no-go.
  • AntonErtl - Thursday, November 5, 2020 - link

    The IO die supports 4 CCXs on the 3900X and 3950X. It seems that a CCD does not get a wider port than a CCX (the ports for Zen 3 do not seem particularly wide at 16B/cycle in write width), so it may be possible to connect 4 CCDs to the IO die. Whether that will physically fit, or will be too RAM-bandwidth limited, is then the question. But given that Intel cannot even match the 16 cores, there is little competitive pressure to put more cores in AM4.
  • schujj07 - Friday, November 6, 2020 - link

    Any single socket Zen 2 device is seen as a single NUMA node. When you talk about 1st & 2nd Gen Threadripper, yes they were seen as 2 or 4 NUMA nodes depending on the number of cores due to how the architecture was made.
  • nandnandnand - Thursday, November 5, 2020 - link

    Zen 4 core chiplets will be shrunk to 5nm, on a new AM5 socket that could have larger dimensions. And maybe the I/O chiplet will shrink to 7nm at the same time.

    I expect 3x 8-core chiplets or 2x 12-core chiplets. A graphics chiplet for all desktop models is possible. If so, maybe 12-core chiplets are the way to go.
  • smilingcrow - Thursday, November 5, 2020 - link

    With only dual channel RAM there will be an issue with performance scaling beyond a certain number of cores.
    It seems as if the sixteen core part already scales poorly with some workloads, so adding fifty percent more is not great.
    Leave that for TR.
  • phoenix_rizzen - Thursday, November 5, 2020 - link

    Yeah, it seems like 1 memory channel for every 8 cores is the sweet spot. At least with DDR4 memory controllers.
  • lightningz71 - Friday, November 6, 2020 - link

    With the switch to DDR5 coming, there is a need to update the IO die anyway. They could move to a previous node bulk process like TSMC 10nm or Samsung's 8nm and reduce pin count for just three CCDs and manage three CCDs and one IO die in a package well enough. Given that the next CCDs will be on N5P, power draw should come down for those as well, enabling them to stay in the same envelope.
  • lmcd - Monday, November 9, 2020 - link

    Samsung's 8nm is provably undesirable for high-volume parts though. I'd argue going the other way, pick a low power node and see if you can get the chiplet architecture in high-end laptop and desktop APU SKUs. That would push their release cadence ahead to same timeframe as desktop and absolutely dominate Intel.
  • eastcoast_pete - Thursday, November 5, 2020 - link

    Thanks Ian and Andrei! The one major fly in the ointment for me is the pricing of the entry-level Zen 3 processor. At least one option under $ 200 would have been nice. But then, both AMD and Intel are about making profit for their shareholders, and I guess there isn't a business reason for AMD to offer an entry-level Zen3 below $ 200.
  • owoeweuwu - Thursday, November 5, 2020 - link

    why no high resolution + max quality?

    lame benchmark
  • Spunjji - Monday, November 9, 2020 - link

    Because you won't see any significant difference between the CPUs; it'll just be a bunch of bars next to each other.

    If that's your use case, then pretty much any of the CPUs in these benchmarks will be enough for you. If you're concerned about how well the CPU you buy now will work with future games then it's a bit of a crap shoot, but these results will give you a better idea than nothing at all.
  • RedOnlyFan - Thursday, November 5, 2020 - link

    So there's not much improvements for gaming. Meh.
  • silverblue - Thursday, November 5, 2020 - link

    CS:GO, Shadow of the Tomb Raider, Death Stranding, Serious Sam 4, Hitman 2, Division 2, Flight Simulator 2020 etc. are all showing large gains at 1080p over Zen 2, particularly CS:GO. Check out videos by LTT/Hardware Unboxed (3950 only today)/Gamers Nexus (again, 3950 only today).
  • silverblue - Tuesday, November 10, 2020 - link

    Sorry, just realised five days later that I meant to put 5950. Anyway, you all knew what I meant.
  • Spunjji - Sunday, November 8, 2020 - link

    It's like you read a different review
  • Paazel - Thursday, November 5, 2020 - link

    Would be great to see a 2600K and a 6700K for reference. These were popular benchmark CPUs that a lot of people have/had!
  • Peskarik - Thursday, November 5, 2020 - link

    Are these even possible to buy? Where I am it was basically a paper launch: sold out within 30 seconds, and those who got lucky and managed to order will receive them in 2 months!

    Same as with the RTX 3000 series.

    Same story will happen with the AMD 6000 GPUs.

    Corona times: people have all this money lying around, and the products have been super-hyped for months.
  • charlesg - Friday, November 6, 2020 - link

    I personally haven't seen evidence the 5900s actually are for sale anywhere, except on eBay with extreme markups and who knows if the sellers are legit? I clicked the NewEgg link on the email as soon as I got it, and it was already "sold out". Amazon doesn't list them at all. "Directly from AMD" doesn't list them.

    Some info from AMD would really be useful!
  • Smell This - Friday, November 6, 2020 - link

    Micro Center Duluth (ATL) listed but sold out -- $20 discount when 'bundled' and a free copy of FC6
  • Super_cereal - Thursday, November 5, 2020 - link

    I'm completely torn, I have an R5 1600 and a B350 motherboard, in terms of upgrade do I get a 3600 cheaply or do I splash for a 5600x and new motherboard? For reference I have an RTX 3070 gpu
  • nandnandnand - Thursday, November 5, 2020 - link

    Get a 3700X or better, ignore Ryzen 5000 entirely, take a look at Zen 4 and AM5 socket when that comes out.
  • lmcd - Thursday, November 5, 2020 - link

    B350 boards generally do not have the power delivery to do better than a 3700X; I would pick that model.
  • just4U - Thursday, November 5, 2020 - link

    Wait till black friday sales. Then go with whatever is priced right.. either a 3700x or the 5600x/mb combo.
  • Smell This - Friday, November 6, 2020 - link


    If you are gaming Hi-Rez/Ultrawide, your R5 1600 should be really interesting for some benchies --- with your RTX 3070 gpu. Love to see what yah got
  • Jhlot - Thursday, November 5, 2020 - link

    Currently running AMD and going with an AMD upgrade soon, but a 28W i7-1185G7 does 595 in R20 single thread while the 5950X does 644. A 120W or better desktop version of that core is going to be good.
  • SNESChalmers - Thursday, November 5, 2020 - link

    I agree, Willow Cove cores at 120w would be pretty fast. Unfortunately Willow Cove (or more likely Golden Cove) won't be out on desktop until 2022 when AMD is pushing out Zen 4. Cypress Cove on 14nm is going to be very power hungry even if it catches back up to Zen 3
  • Spunjji - Sunday, November 8, 2020 - link

    It does that at 50W, mind, and it's not clear it can clock high enough (even on desktop) to overcome the IPC difference
  • JohnnyLose - Thursday, November 5, 2020 - link

    Anyone knows when Zen 3 is coming to laptops?
  • UNCjigga - Thursday, November 5, 2020 - link

    I think it might be a while. My guess is we might actually see Zen 3 come to desktop APUs for the OEM market (OEMs won't have to redesign much as these should be drop-in compatible with their current Ryzen 4000-series designs) and mobile may wait until mid-2021.
  • 69369369 - Thursday, November 5, 2020 - link

    Soon™
  • MojiSama - Thursday, November 5, 2020 - link

    Why is GIMP launch slower on high core-count CPUs vs lower core-count CPUs? They don't explain that. It's almost 2x slower to launch GIMP on the 5900X vs the 5600X (under the Office and Science benchmarks).
  • Icehawk - Thursday, November 5, 2020 - link

    I thought the lack of discussion of that and how the 5600 was faster in a bunch of other benchmarks was strange. Clearly it has to do with being on a single CCX but some detail and probing would be nice.

    Whelp, good to see general performance increased but for gaming looks like we are GPU bound for any higher end workloads (I run 4k or 1440p at worst) so I'll just keep praying I can get one of the new cards from either party and not worry about the CPU.
  • zodiacfml - Friday, November 6, 2020 - link

    ""As it turns out, GIMP does optimizations for every CPU thread in the system, which requires that higher thread-count processors take a lot longer to run.""
  • factual - Thursday, November 5, 2020 - link

    Wow, hopefully there'll be Black Friday deals for this beast!
  • just4U - Thursday, November 5, 2020 - link

    The shop I deal with is offering some nice package deals with some of the new CPU's giving up to $100 off.. Im guessing AMD gave them all some room to play here.
  • San Pedro - Thursday, November 5, 2020 - link

    Looking at 1440p min and 1080p max gaming settings, I'm thinking that for 1440p max gaming, my 2700x is still able to hang in there, even if I were to get a top end graphics card.
  • mrvco - Thursday, November 5, 2020 - link

    Depends upon the game and what else you may be doing while gaming (e.g. streaming, transcoding your pr0nstash, etc.), but yes, GPU performance is going to be far more important than CPU at 1440P. Hence the low resolution benchmarks in CPU reviews and higher resolution benchmarks in GPU reviews.
  • defaultluser - Thursday, November 5, 2020 - link

    Looks like an excellent upgrade over Zen 2 - just a bit confusing because you added the 5900X and 5950X to Bench, but you haven't added the 6-core or 8-core Zen 3 parts.

    Want to do a complete comparison against my 4790K before I jump onboard (but don't want more than a 5800X) :D
  • anactoraaron - Thursday, November 5, 2020 - link

    Did I read that right about the memory used is ddr4 3200 but you are running it at 2133?? Because 'home users' won't change the bios to enable xmp? Did I read this wrong? I simply cannot comprehend that an enthusiast site like this would even consider taking this stance, it's as if you seem to not understand who actually reads these deep dive articles.

    I removed AT from my favorites bar about a month ago, when rtx 3000 was absent and new articles seemingly were just sponsored links or 'hey this thing is on sale' type of content. I was surprised to see this deep dive article and was wondering if I made a poor choice removing AT from my sources of tech content. But it appears you no longer recognize who your faithful old readers are anymore...

    So home users will read the deep dive content and understand the core improvements and latency tradeoffs, but can't flick a box in the bios to enable xmp?

    This reeks of intel fuckery (to keep them artificially relevant in gaming), as it is widely known that ryzen thrives - especially in gaming - using faster ram. And to do that in the desktop space this requires xmp. But hey, don't let me stop you from being the only site to run at jedec. Just know you aren't helping yourselves retain viewership/readers. I'm out, AT. Good luck in the future and Godspeed.
  • Icehawk - Thursday, November 5, 2020 - link

    They repeat this mantra of "no one uses XMP" which I think is patently crazy - it is a one button change and folks who actually care about granular performance of a CPU will use it. At the very least it would be nice to see a selection of benchmarks showing scaling, if any.
  • Spunjji - Sunday, November 8, 2020 - link

    Not really how they ever justified it but go off I guess
  • just4U - Thursday, November 5, 2020 - link

    They gave the Ryzen 5000 series a gold award.. I'd say that alone shows what they think of it..
  • Ryan Smith - Thursday, November 5, 2020 - link

    "Did I read that right about the memory used is ddr4 3200 but you are running it at 2133"

    No. To clarify, we run at the highest JEDEC-rated speed the chip supports. In the case of the Ryzen 5000 series, that's DDR4-3200.
  • Spunjji - Sunday, November 8, 2020 - link

    For everyone's sake, please read the article before posting an ill-informed rant
  • TristanSDX - Thursday, November 5, 2020 - link

    Without doubt Zen 3 is superior, but most of the IPC gain comes from the large L3 cache. Also a bit of a disappointment that it could not reach faster clocks on this third attempt, after Zen 2 and the Zen 2 refresh (XT) before it. And these prices, pretty sad.
  • Spunjji - Sunday, November 8, 2020 - link

    "It's so much faster, but here are some cherry picked reasons to be salty anyway"

    Okay then
  • Qasar - Sunday, November 8, 2020 - link

    " Also bit dissapointment, that then can not reach faster clocks for third trial" So you still believe that clock speed is king? That it's the only way to get performance? Intel is the one that NEEDS the faster clocks, not AMD.
    " And these prices, pretty sad. " How so? Seems reasonable to me, especially given what Intel kept charging for their CPUs before Zen.
  • ahenriquedsj - Thursday, November 5, 2020 - link

    What happened at CS GO? LOL!
  • rogerdpack - Thursday, November 5, 2020 - link

    " in almost area "
  • Thanny - Thursday, November 5, 2020 - link

    "As we scale up this improvement to the 64 cores of the current generation EPYC Rome, any compute-limited workload on Rome should be freed in Naples."

    That would be a neat trick, since Naples is Zen 1. Pretty sure you meant Milan here.
  • tidywickham - Thursday, November 5, 2020 - link

    Researching gaming hardware for the first time. Thanks for this. Very helpful.
  • mark625 - Thursday, November 5, 2020 - link

    Dr. Cutress, this last line has me puzzled: "With +19% IPC on Zen3, Intel has no equal right now - not even Tiger Lake at 4.8 GHz - and has lost that single-threaded crown."

    I think this sentence would make more sense if you use "equivalent" instead of "equal". It is the AMD processors that have no equal. Or you could say "Intel has no equal to Ryzen", which would also make better sense.

    Great article!
  • Tomatotech - Monday, November 9, 2020 - link

    The sentence is fine.

    The meaning is ‘Intel has nothing to offer to equal these AMD chips’.
  • meacupla - Thursday, November 5, 2020 - link

    damn, these chips put my 2700X to shame
  • zodiacfml - Friday, November 6, 2020 - link

    Not at higher resolutions, and not in half of the games. The gains mostly benefit lightly threaded workloads; in heavily threaded ones, older CPUs with high core/thread counts aren't far behind.
  • forextor - Friday, November 6, 2020 - link

    This article is waay too long..., can be summarised to just one sentence... "Ryzen 5950x... BUY BUY BUYY!!!!"
  • deepblue08 - Friday, November 6, 2020 - link

    Wow, I haven't seen this big of a performance jump in a long time! Big Props to AMD!
  • Atom2 - Friday, November 6, 2020 - link

    Again and again, for anything that has to do with science and engineering and is not using Intel Compilers and Libraries, the results are for University folks only.
  • wolfesteinabhi - Friday, November 6, 2020 - link

    "As we scale up this improvement to the 64 cores of the current generation EPYC Rome, any compute-limited workload on Rome should be freed in Naples."

    i think you meant "Milan" and not "Naples" in above line.
  • Body.enhancment.tech.is.subpar - Friday, November 6, 2020 - link

    I saw some comments regarding AMD processors, with people struggling to install Windows, or finding errors more challenging to deal with compared to Intel.

    I also noticed that AMD has no on-board graphics on these chips.

    So could someone please clarify:

    Is AMD, or rather Zen 2 if that's anything to go by, difficult to set up? And are there differences in compatibility compared to Intel in terms of software (including OSes) in general and CPU work?

    Also, since there is no on-board GPU on AMD, does that mean that when reinstalling drivers for the main GPU, there is a complicated and different way to install?

    As you can tell, I have zero experience with AMD, and have been happy with Intel and its stability.

    So can anyone, if they would be so inclined, tell me there is no reason to fret, and to become an AMD fan boy?

    Their new CPUs are looking quite nice, but I am hesitant, since I am not willing to deal with a headache for performance gains when it does work. I would give up a bit of performance for peace of mind.
  • supdawgwtfd - Friday, November 6, 2020 - link

    Uhhh....

    Windows just installs and works.

    No issues.

    You just need a GPU of any type.
  • Qasar - Friday, November 6, 2020 - link

    If you have done this with Intel, it's the same process.
  • CookieBin - Friday, November 6, 2020 - link

    My Threadripper crashed if you opened anything on startup. It didn't like loading games either, would cause a blue screen. After several reinstalls, it sometimes works. My opinion is AMD sucks at drivers, and that's not hard to believe, because radeon cards had the same issues. With AMD, YOU are the beta user :)
  • dagobah123 - Friday, November 6, 2020 - link

    I've had 2 threadrippers and don't have these issues. Which gpu and drivers are you on? What is your current build of Windows? Is your memory compatible? Are your BIOS drivers up to date?
  • Spunjji - Sunday, November 8, 2020 - link

    Smells like FUD
  • Qasar - Sunday, November 8, 2020 - link

    or PEBCAK :-)
  • Qasar - Sunday, November 8, 2020 - link

    ug.... PEBKAC
  • Spunjji - Monday, November 9, 2020 - link

    It works both ways! :D
  • Slash3 - Saturday, November 7, 2020 - link

    The only real snag is for Ryzen/TR users wanting to install on a RAID volume, as doing so requires loading three individual drivers not provided by the Windows boot media (RCBottom, RCRAID, RCCFG). Without these the drives won't be visible, where with Intel's RST they will be visible without additional steps.

    It's not a common configuration for regular users, but worth mentioning as it's not always obvious and nobody reads instructions these days.
  • Tomatotech - Monday, November 9, 2020 - link

    Friends don’t let friends install boot OSes on RAID disks. Anything goes wrong, dead drive etc, you’re fucked.

    Often the specific repair tools required to repair the RAID are on the OS partition that you need to access before repairing the RAID, but you can’t access it until you’ve repaired the RAID... and round and round you go.

    Seen it happen at a couple of businesses that hired shitty IT consultants.
  • Spunjji - Monday, November 9, 2020 - link

    Yup. Only ever worth doing on servers that have a RAID-aware BIOS and, ideally, some sort of integrated lifecycle controller with the drivers available.

    On a consumer-grade desktop system (i.e. not workstation) there is less than no point.
  • dagobah123 - Friday, November 6, 2020 - link

    These are not meant to be CPUs with on-board (integrated) GPUs. AMD has those; they are APUs (e.g. 3400G, 3750G). The 5000 series APUs will come next year. Also, as others have stated above, there's no difference in setting up an AMD vs. Intel system. Microsoft includes the drivers you need to get going, but of course with any build do update them once you're up and running. I've had 10+ Intel and AMD systems over the years and certainly no stability issues ever related to the CPU, Intel or AMD.
  • Kent T - Friday, November 6, 2020 - link

    There seems to be something wrong in the GIMP app-opening chart. Can it really be that all the biggest and most expensive CPUs are the absolute slowest, at more than half a minute? Besides that, I have a 3770 non-K, and on Linux Mint 20 it takes a little less than 3 seconds to open GIMP 2.10 - except the first time after installing, when it took 8 seconds.
  • supdawgwtfd - Friday, November 6, 2020 - link

    Read the article. The answer is right there
  • Kent T - Friday, November 6, 2020 - link

    Yeah, just saw it, my bad
  • LithiumFirefly - Friday, November 6, 2020 - link

    I thought the whole point of a Civilization game benchmark was time to complete a turn, not FPS. Who cares about FPS in a turn-based game?
  • dagobah123 - Friday, November 6, 2020 - link

    The more benchmarks the better. These are general purpose CPUs. Wouldn't it be a shame if you bought a 120hz+ 4k monitor with an expensive graphics card, only to find out your CPU was limiting your frames? Sure the game is playable @ 5 FPS as the author mentioned. However, it's getting harder to make the CPU the bottleneck in a lot of these games at higher resolutions and quality settings, so they have to resort to this. Would anyone play a game @ 360p? No, but if you want to see which CPU is better I say lets include every benchmark we can find.
  • CookieBin - Friday, November 6, 2020 - link

    I find it funny that these huge gains mean literally nothing at 4K. So all these different review sites highlight sky high fps at 1080p because at 4K that huge advantage becomes less than a 0.3% improvement.. keep pounding sand linus tech tips. I've never seen such a big nothing burger. No idiot out there buys a $800 5950X to play video games at 1080p.
  • chuyayala - Friday, November 6, 2020 - link

    The reason they test 1080p is because game processing is CPU-bound at that resolution (they are testing the CPU after-all). The higher the resolution, the more the GPU is working (not the CPU). The reason why there aren't much gains in 4k is because processing is limited by the GPU power. If we assume we get ultra powerful GPUs that can run 4k games at 120+ frames per second, then the CPU becomes more important.
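    The CPU-bound vs GPU-bound split described above can be reduced to a one-line toy model: with the CPU and GPU pipelined, frame rate is limited by whichever takes longer per frame. A sketch with made-up per-frame timings (none of these numbers come from the review):

```python
def fps(cpu_ms, gpu_ms):
    """Toy model: the CPU prepares frame N+1 while the GPU renders
    frame N, so throughput is set by the slower of the two stages."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# Hypothetical fast vs slow CPU needing 4 ms vs 8 ms per frame:
print(fps(4, 5), fps(8, 5))    # light (1080p-ish) GPU load: 200.0 vs 125.0 fps
print(fps(4, 20), fps(8, 20))  # heavy (4K-ish) GPU load: 50.0 vs 50.0 fps
```

    At the heavy GPU load both CPUs produce identical frame rates, which is exactly why CPU reviews drop the resolution until the CPU term dominates.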
  • dagobah123 - Friday, November 6, 2020 - link

    This is simply not true. It only appears to 'mean nothing' if you don't realize the bottleneck in the testing system on most of the benchmarks is the GPU, meaning the GPU is maxed out at 100%. In that case you're right, the difference between many CPUs will not matter, but what about next year when you decide to buy the next high-end GPU, only to find out the CPU you chose couldn't handle much more? This is why 360p, 720p, even 1080p benchmarks are included: to show you just how much further ahead one CPU is over another. Check out the test setup - they are using a 2080 Ti. Come check out the updated reviews after they test all this on 3090s and 6900 XTs.
    Pit a Ferrari and a Ford Model T against one another. Sure, they both keep up with one another in the grocery parking lot @ 15mph. Take em out on the freeway with a 70mph speed limit and you'll have a clear winner, let alone letting em loose on the race track.
    Future-proof yourself a bit: buy a 5600X or 5800X for your 4K gaming. If you don't update your CPU often, you'll be glad you did a couple of years out if you drop in that next GPU.
  • nandnandnand - Saturday, November 7, 2020 - link

    5950X will make your web browsing snappier... so you can load more AnandTech ads. ;)
  • zodiacfml - Sunday, November 8, 2020 - link

    duh? The Steam survey shows 1080p is the most popular resolution for gaming. Aside from that, it is difficult to maintain frame rates for 240Hz/360Hz monitors.
    You might have a point with 720p, though.
  • realbabilu - Friday, November 6, 2020 - link

    First: I think you should compare against the F or KF Intel versions for a price comparison, since they don't have an internal GPU. Also, AMD doesn't include the fan either; beware, good cooling isn't cheap.
    Second: it would be nice to have a coding bench with optimizations on Windows, with AVX2 and some compiler flags. AMD only provides optimized compilers on Linux; I think they should be on Windows too, with an optimized math kernel and compiler.
    Third: the price/performance is justified now. At the Zen 2 release the price was lower than Intel's at the time, which made Intel adjust the price for 10th Gen. For price-sensitive buyers, Intel is still fine on price/performance, even though it needs more power.
  • duploxxx - Saturday, November 7, 2020 - link

    The Ryzens have a base TDP of 105W, peaking toward 140-150W -
    not like the Intels that peak at 200+ W; there you need good cooling.

    A Dark Rock Slim or Shadow Rock can easily handle this, and it will cost you $50-60.

    Go find a cooler for 200+ W that won't throttle all the time for the Intel.
  • realbabilu - Saturday, November 7, 2020 - link

    Great. I think Anand tech should do cooling shootout for 5900x/5950x bench.
    To find the minimum air cooler for this,
    AMD only list noctua and bequiet as air cooler, others as liquid cooler at https://www.amd.com/en/processors/ryzen-thermal-so...

    The slim rock and nh14s maybe the cheapest on the list. It is interesting could more budget double fan tower should enough for 5900x/5950x that has 145 watt max like deepcool gammax 400 pro (double fan), coolermaster ma410p, and shadow rock 2/3, and maybe cheapest aio coolermaster liquid master 120 lite that not listed on amd list.
  • TheinsanegamerN - Tuesday, November 10, 2020 - link

    However, AMD's boost algorithm is very temperature-sensitive. Those coolers may work fine, but if they reach the 70°C range you're losing peak performance to higher temperatures.
  • Andrew LB - Sunday, December 13, 2020 - link

    Blah blah....

    Ryzen 5800X @ 3.6-4.7GHz: 219W and 82°C.
    Ryzen 5800X @ 4.7GHz locked: 231W and 88°C.

    Fractal Celsius+ S28 Prisma 280mm AIO CPU cooler at full fan and pump speed
    https://www.kitguru.net/components/cpu/luke-hill/a...

    If you actually set your voltages on Intel chips they stay cool. My i7-10700K @ 5.0GHz all-core locked never goes above 70°C.
  • Count Rushmore - Friday, November 6, 2020 - link

    It took 3 days... finally the article loaded up.
    AT seriously needs to upgrade their servers (or I need to stop using IE6).
  • name99 - Friday, November 6, 2020 - link

    "AMD wouldn’t exactly detail what this means but we suspect that this could allude to now two branch predictions per cycle instead of just one"

    So imagine you have wide OoO CPU. How do you design fetch? The current state of the art (and presumably AMD have aspects of this, though perhaps not the *entire* package) goes as follows:

    Instructions come as runs of sequential instructions separated by branches. At a branch you may HAVE to fetch instructions from a new address (think call, goto, return) or you may perhaps continue to the next address (think non-taken branch).
    So an intermediate complexity fetch engine will bring in blobs of instructions, up to (say 6 or 8) with the run of instructions terminating at
    - I've scooped up N or
    - I've hit a branch or
    - I've hit the end of a cache line.

    Basically every cycle should consist of pulling in the longest run of instructions possible subject to the above rules.

    The way really advanced fetch works is totally decoupled from the rest of the CPU. Every cycle the fetch engine predicts the next fetch address (from some hierarchy of : check the link stack, check the BTB, increment the PC), and fetches as much as possible from that address. These are stuck in a queue connected to decode, and ideally that queue would never run dry.

    BUT: on average there is about a branch every 6 instructions.
    Now suppose you want to sustain, let's say, 8-wide. That means that you might set N at 8, but most of the time you'll fetch 6 or so instructions because you'll bail out based on hitting a branch before you have a full 8 instructions in your scoop. So you're mostly unable to go beyond an IPC of 6, even if *everything* else is ideal.

    BUT most branches are conditional. And a good half of those are not taken. This means that if you can generate TWO branch predictions per cycle then much of the time the first branch will not be taken, can be ignored, and fetch can continue in a straight line past it. Big win! Half the time you can pull in only 6 instructions, but the other half you could pull in maybe 12 instructions. Basically, if you want to sustain 8 wide, you'd probably want to pull in at least 10 or 12 instructions under best case conditions, to help fill up the queue for the cases where you pull in less than 8 instructions (first branch is taken, or you reach the end of the cache line).

    Now there are some technicalities here.
    One is "how does fetch know where the branches are, to know when to stop fetching". This is usually done via pre-decode bits living in the I-cache, and set by a kinda decode when the line is first pulled into the I-cache. (I think x86 also does this, but I have no idea how. It's obviously much easier for a sane ISA like ARM, POWER, even z.)
    Second, and more interesting, is that you're actually performing two DIFFERENT TYPES of prediction, which makes it somewhat easier from a bandwidth point of view. The prediction on the first branch is purely "taken/not taken", and all you care about is "not taken"; the prediction on the second branch is more sophisticated because if you predict taken you also have to predict the target, which means dealing BTB or link stack.

    But you don't have to predict TWO DIFFERENT "next fetch addresses" per cycle, which makes it somewhat easier.
    Note also that any CPU that uses two level branch prediction is, I think, already doing two branch prediction per cycle, even if it doesn't look like it. Think about it: how do you USE a large (but slow) second level pool of branch prediction information?
    You run the async fetch engine primarily from the first level; and this gives a constant stream of "runs of instructions, separated by branches" with zero delay cycles between runs. Great, zero cycle branches, we all want that. BUT for the predictors to generate a new result in a single cycle they can't be too large.
    So you also run a separate engine, delayed a cycle or two, based on the larger pool of second level branch data, checking the predictions of the async engine. If there's a disagreement you flush whatever was fetched past that point (which hopefully is still just in the fetch queue...) and resteer. This will give you a one (or three or four) cycle bubble in the fetch stream, which is not ideal, but
    - it doesn't happen that often
    - it's a lot better catching a bad prediction very early in fetch, rather than much later in execution
    - hopefully the fetch queue is full enough, and filled fast enough, that perhaps it's not even drained by the time decode has walked along it to the point at which the re-steer occurred...

    This second (checking) branch prediction doesn't ever get mentioned, but it is there behind the scenes, even when the CPU is ostensibly doing only a single prediction per cycle.

    There are other crazy things that happen in modern fetch engines (which are basically in themselves as complicated as a whole CPU from 20 years ago).

    One interesting idea is to use the same data that is informing the async fetch engine to inform prefetch. The idea is that you now have essentially two fetch engines running. One is as I described above; the second ONLY cares about the stream of TAKEN branches, and follows that stream as rapidly as possible, ensuring that each line referenced by this stream is being pulled into the I-cache. (You will recognize this as something like a very specialized form of run-ahead.)
    In principle this should be perfect -- the I prefetcher and branch-prediction are both trying to solve the *exact* same problem, so pooling their resources should be optimal! In practice, so far this hasn't yet been perfected; the best simulations using this idea are a very few percent behind the best simulations using a different I prefetch technology. But IMHO this is mostly a consequence of this being a fairly new idea that has so far been explored mainly by using pre-existing branch predictors, rather than designing a branch predictor store that's optimal for both tasks.
    The main difference is that what matters for prefetching is "far future" branches, branches somewhat beyond where I am now, so that there's plenty of time to pull in the line all the way from RAM. And existing branch predictors have had no incentive to hold onto that sort of far future prediction state. HOWEVER
    A second interesting idea is what IBM has been doing for two or three years now. They store branch prediction in what they call an L2 storage but, to avoid confusion, I'll call it a cold cache. This is stale/far future branch prediction data that is unused for a while but, on triggering events, that cold cache data will be swapped into the branch prediction storage so that the branch predictors are ready to go for the new context in which they find themselves.

    I don't believe IBM use this to drive their I-prefetcher, but obviously it is a great solution to the problem I described above and I suspect this will be where all the performance CPUs eventually find themselves over the next few years. (Apple and IBM probably first, because Apple is Apple, and IBM has the hard part of the solution already in place; then ARM because they're smart and trying hard; then AMD because they're also smart but their technology cycles are slower than ARM's; and finally Intel because, well, they're Intel and have been running on fumes for a few years now.)
    (Note of course this only solves I-prefetch, which is nice and important; but D-prefetch remains as a difficult and different problem.)
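    The two-predictions-per-cycle payoff described above can be sketched as a toy simulation. Everything here (the `fetched_per_cycle` model, the stream encoding, the widths) is invented for illustration; it is not AMD's actual fetch design:

```python
def fetched_per_cycle(instrs, width, preds_per_cycle):
    """Toy fetch engine: each cycle it scoops up a run of instructions,
    stopping at `width` instructions, at a taken branch ('t'), or once
    its branch predictions for the cycle are used up. 'i' is a plain
    instruction, 'n' a not-taken conditional branch."""
    pc, cycles, fetched = 0, 0, 0
    while pc < len(instrs):
        cycles += 1
        preds = group = 0
        while pc < len(instrs) and group < width:
            op = instrs[pc]
            pc += 1
            group += 1
            if op in ('n', 't'):
                preds += 1
                # A taken branch always redirects fetch; a not-taken branch
                # only ends the group once the cycle's predictions run out.
                if op == 't' or preds == preds_per_cycle:
                    break
        fetched += group
    return fetched / cycles

# A branch every 6th slot, all not taken (the "~1 branch per 6
# instructions" average from the text):
stream = list('iiiiin' * 100)
print(fetched_per_cycle(stream, 8, 1))  # 6.0  - capped by the branch
print(fetched_per_cycle(stream, 8, 2))  # 8.0  - sails past the first branch
```

    With one prediction per cycle the engine can never beat ~6 fetched per cycle on this stream, exactly the IPC ceiling described above; the second prediction lets it fill the full 8-wide group.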
  • name99 - Friday, November 6, 2020 - link

    Oh, one more thing. I referred to "width" of the CPU above. This becomes an ever vaguer term every year. The basic points are two:

    - when OoO started, it seemed reasonable to scale every step of the pipeline together. Make the CPU 4-wide. So it can fetch up to 4 instructions/cycle. decode up to 4, issue up to 4, retire up to 4. BUT if you do this you're losing performance every step of the way. Every cycle that fetches only 3 instructions can never make that up; likewise every cycle that only issues 3 instructions.

    - so once you have enough transistors available for better designs, you need to ask yourself what's the RATE-LIMITING step? For x86 that's probably in fetch and decode, but let's consider sane ISAs like ARM. There the rate limiting step is probably register rename. So let's assume your max rename bandwidth is 6 instructions/cycle. You actually want to run the rest of your machinery at something like 7 or 8 wide because (by definition) you CAN do so (they are not rate limiting, so they can be grown). And by running them wider you can ensure that the inevitable hiccups along the way are mostly hidden by queues, and your rename machinery is running at full speed, 6-wide each and every cycle, rather than frequently running at 5 or 4 wide because of some unfortunate glitch upstream.
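    The "run everything else wider than the rate-limiting stage" argument can be shown with a toy queue model (the function, the hiccup pattern, and all widths below are made up for the sketch, not any real pipeline):

```python
def renamed_per_cycle(fetch_groups, rename_width=6, queue_cap=24, cycles=10000):
    """Toy decoupled pipeline: each cycle, fetch pushes a group of
    instructions into a queue (capped at queue_cap, i.e. fetch stalls
    when full), and rename drains up to rename_width from the queue.
    Returns sustained rename throughput in instructions/cycle."""
    queue = renamed = 0
    for c in range(cycles):
        queue = min(queue_cap, queue + fetch_groups[c % len(fetch_groups)])
        take = min(queue, rename_width)
        queue -= take
        renamed += take
    return renamed / cycles

# Fetch hiccups (a 0-wide cycle) every 4th cycle:
print(renamed_per_cycle([6, 6, 6, 0]))  # 4.5 - 6-wide fetch starves rename
print(renamed_per_cycle([8, 8, 8, 0]))  # 6.0 - 8-wide fetch hides the hiccup
```

    With fetch only as wide as rename, every upstream glitch is lost throughput; with fetch wider, the queue absorbs the glitches and rename stays at its full 6-wide rate.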
  • Spunjji - Monday, November 9, 2020 - link

    These were interesting posts. Thank you!
  • GeoffreyA - Monday, November 9, 2020 - link

    Yes, excellent posts. Thanks.

    Touching on width, I was expecting Zen 3 to add another decoder and take it up to 5-wide decode (like Skylake onwards). Zen 3's keeping it at 4 makes good sense though, considering their constraint of not raising power. Another decoder might have raised IPC but would have likely picked up power quite a bit.
  • ignizkrizalid - Saturday, November 7, 2020 - link

    Rip Intel no matter how hard you try squeezing Intel sometimes on top within your graphics! stupid site bias and unreliable if this site was to be truth why not do a live video comparison side by side using 3600 or 4000Mhz ram so we can see the actual numbers and be 100% assured the graphic table is not manipulated in any way, yea I know you will never do it! personally I don't trust these "reviews" that can be manipulated as desired, I respect live video comparison with nothing to hide to the public. Rip Intel Rip Intel.
  • Spunjji - Monday, November 9, 2020 - link

    I... don't think this makes an awful lots of sense, tbh.
  • MDD1963 - Saturday, November 7, 2020 - link

    It would be interesting to also see the various results of the 10900K the way most people actually run them on Z490 boards, i.e, with higher RAM clocks, MCE enabled, etc...; do the equivalent tuning with 5000 series, I'm sure they will run with faster than DDR4-3200 MHz. plus perhaps a small all-core overclock.
  • Jvanderlinde - Saturday, November 7, 2020 - link

    Glad to read the 2700X is taken into account. The 5950X seems like a hell of an upgrade coming from that path. Gotta love AMD for what it's been accomplishing in the last few years.
  • Kallan007 - Saturday, November 7, 2020 - link

    I just want to thank AMD for a lovely way to end 2020! I cannot wait for those AMD 6000 series card reviews! Good job as always AnandTech!
  • Pumpkinhead - Saturday, November 7, 2020 - link

    How is it possible that peak power for the 3700X, with 8 cores, is lower than for the 3600 with 6 cores?
  • nandnandnand - Saturday, November 7, 2020 - link

    "Compared to other processors, for peak power, we report the highest loaded value observed from any of our benchmark tests."

    They selected the highest single wattage value from ANY of their tests. So at the same TDP, they should be very close to each other. I guess the point is to find the worst case scenario for each processor, rather than an average, to determine the power supply needed. Other reviews point to 3600 using less power on average:

    https://www.tomshardware.com/reviews/amd-ryzen-5-3...
  • Pumpkinhead - Saturday, November 7, 2020 - link

    >https://www.tomshardware.com/reviews/amd-ryzen-5-3...
    Interesting that the 3700X, being 1/3 faster in Handbrake, still draws only 13% more power; I always thought that in parallel workloads power consumption scales linearly as you add more cores.
    Even though in the AIDA stress test it draws 35% more power, so peak consumption is different too. Also look at the 5800X - it draws almost twice as much as the 5600X (also 6c vs 8c).
  • Kjella - Sunday, November 8, 2020 - link

    It scales linearly until you hit the TDP; past that, more cores let you use a lower, more efficient frequency. As long as you can keep all cores loaded, 8 cores on a 10W/core power budget will get less done than 16 cores at a 5W/core power budget, even though 8*10 = 16*5. But with disabled cores you might have to run more of the chip, like the whole CCX to support 3 of 4 working cores. That will obviously be less efficient since the "overhead" has to be split by 3 instead of 4.
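    This efficiency argument can be put into a crude model: dynamic power per core goes roughly as f·V², and since voltage scales alongside frequency, per-core power grows roughly with the cube of frequency. A sketch (the `p0`/`f0` constants are arbitrary illustrations, not measured Zen figures):

```python
def throughput_at_budget(cores, budget_w, p0=10.0, f0=4.0):
    """Crude model: a core burns p0 watts at f0 GHz, and per-core power
    scales as (f/f0)**3. Given a package power budget split evenly
    across cores, return aggregate throughput in core-GHz."""
    per_core_w = budget_w / cores
    f = f0 * (per_core_w / p0) ** (1 / 3)
    return cores * f

# Same 80W budget: 8 cores run at 4.0 GHz (32 core-GHz), while 16 cores
# drop to ~3.17 GHz each but deliver ~50.8 core-GHz in aggregate.
print(throughput_at_budget(8, 80))
print(throughput_at_budget(16, 80))
```

    Under this model, doubling the cores at the same budget costs each core about 21% of its clock but still yields ~59% more aggregate throughput, which is the point being made above.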
  • Murilo - Saturday, November 7, 2020 - link

    ..and AMD price? ahahah.
    I want price!
  • psyclist80 - Saturday, November 7, 2020 - link

    I'm glad you guys added in Tiger Lake - a very important data point. If Zen 3 beats Willow Cove in its full implementation, then it's certainly going to beat the watered-down 14nm version, Cypress Cove, and slay it on power efficiency.

    I'm glad AMD has taken the crown back (last held 2003-2006). As has been said in the past, the Empire will strike back; hopefully Golden Cove can do that for Intel. That will face 5nm Zen 4, though. AMD is looking strong for the next year or two!
  • Sushisamurai - Sunday, November 8, 2020 - link

    apparently, looking at other reviews, there appears to be a performance uplift/downgrade (depends how you look at it) with the number of RAM sticks populated on the board (2 vs 4). I wonder how much of a difference it is with RAM speeds and memory stick population on the 5000 series.
  • umano - Sunday, November 8, 2020 - link

    I bought a 3800X and an X570 Creator in June, planning to upgrade to a 5950X after Zen 4 launches, but often, when we have more power, we find a way to crave more. I work in fashion photography; for me the 35% bump in Photoshop compared to my 3800X (bought because the price was the same as the 3700) is a no-brainer. It will also help me with Capture One exports and zip compression. I did not think an upgrade so soon would benefit me so much, but you know, life happens. Good job AMD, you made technology fun again.
  • madymadme - Sunday, November 8, 2020 - link

    Going to buy:
    AMD Ryzen 9 5900X,
    Gigabyte B550 AORUS PRO AC,
    Noctua NH-D15 with dual 140mm fans,
    G.Skill Trident Z RGB Series 16GB (2x8GB) 4000MHz DDR4 memory, F4-4000C18D-16GTZRB.

    Is a Corsair CV550 (550W) OK with the above spec? I have a Quadro K2000D graphics card.
    Is this specification OK, and which RAM should I get? Please help a little, and thanks for reading and replying.
  • Spunjji - Monday, November 9, 2020 - link

    All I can say is your PSU should be more than enough for that setup :)
  • Vik32 - Sunday, November 8, 2020 - link

    AMD is now the leader in single threaded performance!
    When will the iPhone 12 review be up?
  • Spunjji - Sunday, November 8, 2020 - link

    Loving the substantial review detail, as always! Quite the triumph for AMD 😁

    Only one minor criticism - the sum-up of the gaming results buries the lede a little, which is to say that the performance is excellent across AMD's new range, meaning that the 5600X frequently outperforms some of Intel's best processors. I will be *very* interested to see if overclocking makes any difference there - with some relaxed power limits and the potential for higher clocks, it could be THE gaming chip to buy.

    That's a small gripe, though. Just pleased to see a result this unequivocal. Between this and the US election result, it'll be tears before bedtime for several of the trolls on this site 🤭
  • Solidstate20 - Sunday, November 8, 2020 - link

    Zen question: If a CPU has awesome performance but is out-of-stock in every shop, does it really have awesome performance?
  • Spunjji - Monday, November 9, 2020 - link

    lol
  • Agent Smith - Sunday, November 8, 2020 - link

    Where are the new X590 motherboards to support the 5000 series CPUs?

    The B550 boards are good value but are PCIe 4.0 limited and rely on shared ports.
    The older x570 boards are good but are several years old now so lacking newer features like 2.5Gb LAN and front facing USB-C ports for mini & micro ITX.
  • Qasar - Sunday, November 8, 2020 - link

    i dont think there will be unless the mobo makers release them on their own.

    "The older x570 boards are good but are several years old now " huh ? try barely 1.5 years old. x570 was released in July 2019, how is that several years ? the strix e gaming board i have has 2.5g lan, as long as the board has the usb 3 header, wouldnt front facing usbc be more of a case feature then the board ?
  • Spunjji - Monday, November 9, 2020 - link

    I think the USB-C front ports have a different connector at the motherboard end. I still don't get why this is a big deal, though.
  • TheinsanegamerN - Tuesday, November 10, 2020 - link

    It really isn't. I don't know anyone who actually uses front USB-C right now; usually they plug into the back, because the back port will be 10Gb/20Gb/Thunderbolt while the front is only 5Gb
  • TheinsanegamerN - Tuesday, November 10, 2020 - link

    There is no X590 chipset coming. X570 is Ryzen 5000's chipset.

    There's also this miracle of technology: if you have a micro-ATX or full ATX board, you can put in ADD-IN CARDS. Amazing, right? So even if your board does not natively support 2.5G LAN, you can add it for a low price, because 2.5G cards are relatively cheap.
  • TheinsanegamerN - Tuesday, November 10, 2020 - link

    the x570 aorus master and msi x570 unify also have 2.5G lan. And surely there will be newer models next year with newer features and names, gotta keep the model churn going!
  • alhopper - Sunday, November 8, 2020 - link

    Ian and Andrei - 1,000 thank-yous for this awesome article and your fine technical journalism. You guys did amazing work and we (the community) are fortunate to be the beneficiaries.
    Thanks again and keep up the Good Work (TM).
  • Rekaputra - Sunday, November 8, 2020 - link

    Wow, this article is so comprehensive. Glad I always check AnandTech for my reference in computing. I wonder how it stacks against Threadripper on database or Excel compute workloads. I know these are desktop processors, but there is the possibility of using one in a mini workstation for office stuff like accounting and RDBMS development, as it is cheaper.
  • SkyBill40 - Sunday, November 8, 2020 - link

    Once some availability comes back into play... my old and trusty FX 8350 is going to be retired. I've been waiting to rebuild for a long time now, and the wait has clearly paid off, regardless of whether this is the end of the line for AM4 or how well Ryzen 4 does next year. I could wait... but nah.
  • jcromano - Friday, November 13, 2020 - link

    I'm in a similar boat. I'm still running an i5-2500k from early 2011 (coming up on ten years, yikes), and I'll build a new rig, probably 5600X, when the processors become available. I fret a bit over whether I should wait for the next socket to arrive before taking the plunge, but given the infrequency with which I upgrade, I think it's likely that the next socket would also be obsolete by the time it mattered.
  • evilpaul666 - Sunday, November 8, 2020 - link

    I'd love to see some PS3 emulation testing added.
  • abufrejoval - Monday, November 9, 2020 - link

    Control flow integrity (or enforcement) seems to be in, and that was a major criterion for me in getting one (5800X scheduled to arrive tomorrow).

    But what about SEV or per-VM-encryption? From the hints I see this seems enabled in Intel's Tiger Lake and I guess the hardware would be there on all Zen 3 chiplets, but is AMD going to enable it for "consumer" platforms?

    With 8 or more cores around, there are plenty of reasons why people would want to run a couple of VMs on pretty much anything, from a notebook to a home entertainment/control system, even a gaming rig. And some of those VMs we'd rather have secure from phishing and trojans, right?

    Keeping this an EPYC-only or Pro-only feature would be a real mistake IMHO.

    BTW ordered ECC DDR4-3200 to go with it, because this box will run 24x7 and pushes a Xeon E3-1276 v3 into cold backup.
  • lmcd - Monday, November 9, 2020 - link

    Starting to feel like the platform is way too constrained just for the sake of all 6 APUs AMD has released (all with mediocre graphics and most with mediocre CPUs, no less). I hope AMD bifurcates and comes up with an in-between platform that supports ~32-40 CPU PCIe lanes and drops APUs. If APUs can't be on time with everything else, there's so little point.
  • 29a - Monday, November 9, 2020 - link

    "Firstly, because we need an AI benchmark, and a bad one is still better than not having one at all."

    Can't say I agree with that.
  • halcyon - Tuesday, November 10, 2020 - link

    1. The Ryzen 9 5xxx series dominates most gaming benchmarks in CPU-bound games up to 720p.
    2. However, at 1440p/4K Intel, esp. the 10850K, pulls ahead.

    Can somebody explain this anomaly? As games become more GPU-bound at higher resolutions, why does Intel pull ahead (with worse single/multi-thread CPU perf)? Is it a bandwidth/latency issue? If so, where exactly (RAM? L3? somewhere else)? Can't be PCIe, can it?
  • feka1ity - Saturday, November 14, 2020 - link

    RAM. anandtech uses shitty ram for intel systems
  • Makste - Monday, November 16, 2020 - link

    I think the game optimizations for Intel processors become clear at those resolutions. AMD has been a non-factor in gaming for so long. These games have been developed on, and mostly optimised to work better on, Intel machines
  • Silma - Wednesday, November 11, 2020 - link

    At 4K, the 3700X beats the 5600X quite often.
  • Samus - Friday, November 13, 2020 - link

    Considering Intel just released a new generation of CPUs, it's astonishing that, at their current generation-over-generation IPC trajectory, it will take them two more generations to surpass Zen 3. That's almost 2 years.

    Wow.
  • ssshenoy - Tuesday, December 15, 2020 - link

    I don't think this article compares the latest generation from Intel: the Willow Cove core in Tiger Lake, which has launched only for notebooks. The comparison here seems to be with the ancient Skylake generation on 14 nm.
  • abufrejoval - Friday, November 13, 2020 - link

    Got my Ryzen 7 5800X on a new Aorus X570 mainboard and finally working, too.

    It turbos to 4850MHz without any overclocking, so I'd hazard a 150MHz "bonus" is pretty much the default across the line.

    At the wall plug, 210 Watts was the biggest load I observed for pure CPU workloads, with HWiNFO never reporting anything in excess of 120 Watts on the CPU from the internal sensors.

    "finally working": I want ECC with this rig, because I am aiming for 64GB or even 128GB RAM and 24x7 operation. Ordered DDR4-3200 ECC modules from Kingston to go with the board. Those seem a little slow coming so I tried to make do with pilfering some DIMMs from other systems, that could be shut down for a moment. DDR4-2133 ECC and DDR4-2400 ECC modules where candidates, but wouldn't boot...

    Both were 2Rx4, dual-rank, nibble- rather than byte-organized modules, unbuffered and unregistered, but not the byte-organized DIMMs that the Gigabyte documentation seemed to prescribe... Asus, MSI and ASRock don't list such constraints, but I had to go with availability...

    I like to think of RAM as RAM, it may be slower or faster, but it shouldn't be tied to one specific system, right?

    So while I await the DDR4-3200 ECC 32GB modules to arrive, I got myself some DDR4-4000 R1x8 (no ECC, 8GB) DIMMs to fill the gap: But would that X570 mainboard, which might have been laying on shelves for months actually boot a Ryzen 5000?

    No, it wouldn't.

    But yes, it would update the BIOS via Q-Flash Plus-what-shall-we-call-it and then, yes, it did indeed recognize both the CPU and those R1x8 DIMMs just fine after the update.

    I haven't yet tried those R2x4 modules again, because I am still exploring the bandwidth high-end, but I want to report just how much I am impressed by the compatibility of the AM4 platform, fully aware that Zen 3 will be the last generation in this "sprint".

    I vividly remember how I had to get Skylake CPUs in order to get various mainboard ready for Kaby Lake...

    I have been using AMD x86 CPUs since the 80486DX4. I owned every iteration of the K6-II and K6-III, skipped all Slot-A variants, got back in with Socket A, 754 and 939, went single, quad, and hexa-core (Phenom II X4 and X6), skipped Bulldozer, and did almost every APU; but between Kaveri and Zen 3, AMD simply wasn't compelling enough.

    I would have gotten a Ryzen 9 5950x, if it had been available. But I count myself lucky for the moment to have snatched a Ryzen 7 5800X: It sure doesn't disappoint.

    AMD a toast! You have done very well indeed and you can count me impressed!

    Of course I'll nag about missing SEV/MKTME support the day after tomorrow, but in the meantime, please accept my gratitude.
  • feka1ity - Saturday, November 14, 2020 - link

    Interesting, my default 9700K with a 1080 Ti does 225fps avg in Borderlands 3 at 360p, very low settings, and the AnandTech testers poop out 175fps avg with a 10900K and 2080 Ti?!? And this favoritizes amede products. Fake stuff, sorry.
  • Spunjji - Monday, November 16, 2020 - link

    "Fake stuff"

    Thanks for labelling your post
  • feka1ity - Monday, November 16, 2020 - link

    Fake stuff is not a label, it's an epicrisis. Go render stuff, spunji
  • Qasar - Tuesday, November 17, 2020 - link

    no, but fake posts are.
  • feka1ity - Tuesday, November 17, 2020 - link

    sure, everything faster than new amede is fake for fanboiz
  • Iketh - Monday, November 16, 2020 - link

    was there a performance/watt metric anywhere in this article? how many memory controllers on each chip?
  • peevee - Tuesday, November 17, 2020 - link

    As MT vs ST tests clearly show, there is not enough power and/or memory bandwidth on AM4 for 16 cores anymore.

    Hoping for a 4-channel DDR5 mass-market platform next.

    One 8-core chiplet, one graphics chiplet (similar to the 5600 XT, and working together with an additional AMD graphics card), and 4 channels of DDR5 to support that, preferably as SODIMM slots right on the CPU package for the smallest latency and power consumption possible (and making a cheap MB possible)... I can dream, can't I? It should have been this generation; I would have ordered it already.
  • RobJoy - Thursday, November 19, 2020 - link

    Same or better performance than Intel for the same price, with PCIe 4.0 for uber fast drives?
    Where do I sign?
    Bring it on.
  • ssshenoy - Tuesday, December 15, 2020 - link

    How do you conclude that this product line is superior to Tiger Lake when there are no measurements that compare these two? All the Intel to AMD comparisons are the old Skylake core on 14 nm vs. the latest Zen 3 core on 7 nm. Am I missing something here?
  • JSyrup - Wednesday, January 20, 2021 - link

    Is there a reason why the 5800X outperforms both the 5900X and 5950X in some games? Could it have something to do with 1 CCX vs 2 CCXs?
  • JSyrup - Wednesday, April 7, 2021 - link

    *CCDs

    I got it now. For the best of both worlds, go for the 5950X. Then, if you play games, disable 1 CCD in BIOS or leave both CCDs enabled if you do productivity. This is how to maximise performance and prevent unexpected performance drops.
  • Sgtkeebler - Tuesday, May 11, 2021 - link

    On RDR, why do higher resolutions get higher FPS than 1080p?
