24 Comments

  • willis936 - Thursday, May 7, 2020 - link

    This is fascinating. Just multiplying base clock and core count together shows that the 6 core and 8 core parts are theoretically 26% and 36% faster in multithreaded workloads than the 4 core part, respectively. I would be very interested to see if this actually holds true in testing.
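
    A rough back-of-the-envelope sketch in Python, assuming the base clocks in the spec table are 2.5 GHz (4-core), 2.1 GHz (6-core) and 1.7 GHz (8-core); real scaling will also depend on the shared 15W limit and boost behavior:

        # Theoretical multithreaded throughput ~ cores x base clock (core-GHz)
        parts = {
            "4-core @ 2.5 GHz": 4 * 2.5,
            "6-core @ 2.1 GHz": 6 * 2.1,
            "8-core @ 1.7 GHz": 8 * 1.7,
        }
        baseline = parts["4-core @ 2.5 GHz"]
        for name, core_ghz in parts.items():
            print(f"{name}: {core_ghz:.1f} core-GHz, {100 * (core_ghz / baseline - 1):+.0f}% vs 4-core")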
  • Valantar - Thursday, May 7, 2020 - link

    Don't forget the 15% IPC increase over Zen+.
  • GreenReaper - Sunday, May 10, 2020 - link

    It's definitely pushing up against the 15W limit. I'd be a little surprised if *all* of the CPUs aren't power-limited, because four Zen 2 cores should be able to use 15W between them, not even accounting for the graphics, and they should boost above base if they are able to. Of course, that's partly accounted for by the turbo limit, but I'd sure want to see the tests before buying more cores.
  • willis936 - Sunday, May 10, 2020 - link

    The assumption is that these parts do not change convention: TDP is the average power dissipated when all cores are running at base frequency without using overly power-hungry extensions.
  • phoenix_rizzen - Monday, May 11, 2020 - link

    No, that's the Intel definition of TDP, which is very misleading as the actual heat output of the CPU under use will be significantly higher.

    AMD's definition is closer to "maximum heat output when running full-bore across all cores", or pretty much the maximum heat the CPU should put out consistently under sustained use.

    Intel gives you a minimum heat output you need to deal with. (If you want the CPU to go faster than base clocks, you need better cooling.)
    AMD gives you a maximum heat output you need to deal with. (If you want the CPU to go faster than base clocks, you only need this amount of cooling.)
  • ksec - Thursday, May 7, 2020 - link

    Reading this makes me sad that Apple is still using Intel in its latest 13" MacBook.
  • Tomatotech - Thursday, May 7, 2020 - link

    I'm given to understand that AMD is unable to make enough CPUs for Apple. Apple tends to order millions of a single model, and even Intel struggles with this. Apple has been bitten hard by this problem several times in the past (failure of their chip supplier to provide timely supplies), so there are rumours they are moving from Intel to their own in-house ARM-based design.

    Could be this year, could be next year, could be never. As it is, Apple's highly tweaked and customised ARM cores and tightly integrated software stack on their iPhones and iPads are thrashing the Intel chips in many of their own Mac devices.
  • willis936 - Thursday, May 7, 2020 - link

    This doesn't jibe with my current understanding of things. Intel hasn't been able to meet demand for years, and TSMC has slightly less capacity than Intel. AMD's aren't the only chips TSMC makes, but Intel has a much larger share of the market. Apple does not move many products; they can afford to have premium chips.

    https://www.fool.com/investing/2018/11/02/heres-wh...

    https://www.tsmc.com/download/1Q13_installed_capac...

    http://download.intel.com/newsroom/kits/22nm/pdfs/...
  • close - Thursday, May 7, 2020 - link

    They just don't want to redesign their hardware and do significant software development to move to something that, a generation from now, might be in the same position Intel is in today. So they're at least aiming for full vertical integration if they make their own CPUs.
  • Kaggy - Thursday, May 7, 2020 - link

    I believe they sign long-term chip supply agreements; no idea when their current contract expires.
  • Santoval - Friday, May 8, 2020 - link

    "TSMC has slightly less capacity than Intel".

    A *lot* can change in 7 years in the semiconductor manufacturing industry, so posting a ... 7-year-old link (!!) about TSMC to back up the above claim of yours makes no sense at all. The Intel PDF is even older, because it mentions "2013 projected" for two planned Intel fabs and "In addition, by the second half of 2011 we expect..." So it was compiled in the first half of 2011, 9 full years ago!! I think my most surprised, dumbfounded actually, "Really?!?" fully applies here.

    I post a link from early 2020 below to, er, "update" your numbers. The gist is that Intel is no longer even in the top 5 fabs in wafer output per month. They fab 817,000 200mm-equivalent wafers per month as of December 2019. Samsung tops the list with 2.9 million (200mm-equiv.) wafers per month (with ~2/3 of that being DRAM and NAND), and TSMC is second with 2.5 million wafers per month:

    https://www.icinsights.com/news/bulletins/Five-Sem...
  • Santoval - Friday, May 8, 2020 - link

    "so there are rumours they are moving from Intel to their own in-house ARM-based design."
    They are no longer rumors, at least according to Bloomberg. They will start selling -in a gradual transition starting from their lowest powered models- Mac computers with ARM cores based on their upcoming A14 SoC next year. The core count will be 12 cores and it will be fabbed at TSMC's 5nm node. While it *is* an April article it was posted on April 23, so it is certainly not an April fools piece :
    https://www.bloomberg.com/news/articles/2020-04-23...
  • zamroni - Thursday, May 7, 2020 - link

    Some of the reasons:
    1. Thunderbolt port certification is limited by Intel to Intel platforms only.
    2. Power consumption was not good up to and including the 3000U series.
  • phoenix_rizzen - Monday, May 11, 2020 - link

    1 is incorrect, as of about a year ago. Intel released the TB3 specs for anyone to implement, royalty-free. I believe you still need to get the implementation certified by Intel, but there are several AMD-based systems (laptop and desktop) with official TB3 support.
  • LiKenun - Thursday, May 7, 2020 - link

    Though ECC is technically supported, PRO-based products unfortunately don't typically come with ECC memory, going by what's currently being sold by laptop manufacturers using the current generation of PRO parts. I can find ECC RAM easily on Amazon, but getting a CPU to work with it and a motherboard that declares support is rocket science. I don't understand the resistance to making ECC accessible.
  • brucethemoose - Thursday, May 7, 2020 - link

    It's hard to market, and maybe too hard to justify the validation costs and the (small) price premium.

    That sounds silly, but getting OEMs to use symmetric dual channel memory in consumer stuff is already like pulling teeth.
  • PeachNCream - Thursday, May 7, 2020 - link

    Or dual channel at all. A fair number of consumer notebooks are stuck with a single DIMM slot or the soldered down equivalent of a single DIMM and are consequently without the additional bandwidth that the iGPUs they universally use depend on for graphics performance.
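
    To put rough numbers on that, a minimal sketch, assuming plain DDR4-3200 with 64-bit channels (the LPDDR4X configurations these chips also support are faster, but the single- vs dual-channel ratio is the same idea):

        # Theoretical peak bandwidth of a 64-bit DDR4 channel: transfers/s x 8 bytes
        transfers_per_s = 3200e6            # DDR4-3200
        bytes_per_transfer = 8              # 64-bit channel
        single = transfers_per_s * bytes_per_transfer / 1e9
        dual = 2 * single
        print(f"single channel: {single:.1f} GB/s, dual channel: {dual:.1f} GB/s")
        # single channel: 25.6 GB/s, dual channel: 51.2 GB/s, and the iGPU has to
        # share whatever is there with the CPU cores.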
  • rrinker - Thursday, May 7, 2020 - link

    Or a soldered-down 8GB and a single slot, so you can add another 8GB and get symmetrical dual channel but only a total of 16GB, or you can put in a 16GB module and get a total of 24GB that is no longer symmetrical.
  • PeterCollier - Saturday, May 9, 2020 - link

    Erm, speaking of mismatched sizes of RAM... Ever heard of flex mode operation?
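
    For reference, as I understand flex (asymmetric dual-channel) mode, the capacity both channels have in common is interleaved and the remainder runs single-channel, so 8GB + 16GB behaves roughly like 16GB at dual-channel speed plus 8GB at single-channel speed. A minimal sketch of the split:

        def flex_mode_split(channel_a_gb, channel_b_gb):
            """Approximate split under flex/asymmetric dual-channel mode:
            the overlapping capacity is interleaved across both channels,
            the leftover runs single-channel."""
            dual = 2 * min(channel_a_gb, channel_b_gb)
            single = abs(channel_a_gb - channel_b_gb)
            return dual, single

        print(flex_mode_split(8, 16))   # (16, 8): 16GB dual-channel, 8GB single-channel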
  • Zibi - Thursday, May 7, 2020 - link

    I'd guess the reasons against offering ECC might be along the following lines:
    1. ECC DIMMs tend to be slower (higher CL) and more expensive.
    2. ECC DIMMs also tend to visibly show when they're in bad shape, which leads to RMAs. In our data-center environment we recently had about 1 malfunctioning DIMM per 10 servers.
    This is maybe not the best example, but my take is that you may get a surprisingly big percentage of DIMMs that are either DOA or break during the warranty period.
    Vendors need to factor this into the cost, and the margins in this segment are really tight.
  • rrinker - Thursday, May 7, 2020 - link

    Been a few years, but I had a customer purchase a pair of really beefy servers from Dell, each with 192GB of memory, 12 sticks in each server. When I hooked them up and fired them up, both had memory errors. Mixing and matching, I got ONE into a usable state, with 64GB of memory. All the other sticks were bad. The replacements were all fine; they must have had a bad batch, or a box got zapped on the way to the assembly line or something.
    That was a one-off though. I've done installs with a dozen servers, and more than twice that in total memory across them all, and had zero DOA.
  • zamroni - Thursday, May 7, 2020 - link

    Intel itself doesn't offer ECC support for the U series.
    The Xeon E series are the only Intel mobile processors which get ECC support.
  • Brett Howse - Friday, May 8, 2020 - link

    Just to clarify, the Pro part of Ryzen PRO is for enterprise desktop support. ECC is not a requirement. This is a different market than the workstation class you are thinking of.
  • Up2Trix - Monday, May 11, 2020 - link

    Right on, LiKenun.

    I think that ECC DRAM should be mandatory in every device EXCEPT throwaway, read-only devices like tablets. Everyone who loves their data should demand it, especially people who generate any kind of data, and that should be almost all consumers, if they were actually educated about the implications of bit rot. Imagine, say, an important document that becomes unreadable or, even worse, has a silent error that totally inverts the meaning of something due to one or more bits flipping. Not to mention system instability.

    This is the inverse of the conventional argument that ECC DRAM is only a workstation feature. That is merely a historical defect caused by evil Intel price segmentation. I so hope that AMD can break this mind virus and offer ECC DRAM to the masses...
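
    As a toy illustration of how silent a single flipped bit can be (just a sketch, not tied to any particular failure mode):

        value = 1000                    # e.g. a dollar amount stored as an integer
        corrupted = value ^ (1 << 20)   # one high-order bit flips in DRAM
        print(value, "->", corrupted)   # 1000 -> 1049576, and nothing reports an error
        # ECC (SECDED) memory detects and corrects single-bit errors like this in
        # hardware; without it, the bad value is simply read back as if it were correct.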
