60 Comments

  • Jurgen B - Thursday, October 7, 2021 - link

    Love your thorough article and testing. This is some serious firepower from Ampere and makes for great competition with Intel and AMD. I really like the 256T runs on the dual-socket AMD EPYCs (they really are serving me well in floating-point research computing), but it seems the future holds some nice innovations in the field!
  • mode_13h - Thursday, October 7, 2021 - link

    Lack of cache seems to be a serious liability, though. For many, it'll be a deal breaker.
  • Wilco1 - Friday, October 8, 2021 - link

    Yet it still beats AMD's 7763, with its humongous 256MB L3, in all the multithreaded benchmarks. Sure, it would be even faster if it had a 64MB L3 cache, but it doesn't appear to be a serious liability. Doing more with far less silicon at a lower price (and power) is an interesting design point (and apparently one that cloud companies asked for).
  • Jurgen B - Friday, October 8, 2021 - link

    Yes, cache will play a role for many. However, people buying such servers likely have a very specific workload in mind. And so they now have more choice among the manufacturers' options, which is really good to see. Compared to 10 years ago, when AMD was much less competitive, it is wonderful to see the innovation.
  • schujj07 - Friday, October 8, 2021 - link

    That isn't true at all. The SPEC Java benchmarks have the Epyc ahead; in SPECint Base Rate-N Estimated they are almost equal (despite the Epyc having half the cores); in FP Base Rate-N Estimated the Epyc is ahead; in compiling the Epyc is ahead. Anything that taxes the memory subsystem by not fitting into the Altra's small cache sees lower performance on the Altra. Per core performance isn't even close.
  • mode_13h - Saturday, October 9, 2021 - link

    Thanks for correcting the record, @schujj07.

    The whole concept of adding 60% more cores while halving cache is mighty suspicious. In the most charitable view, this is intended to micro-target specific applications with low memory bandwidth requirements. From a more cynical perspective, it's merely an exercise in specsmanship and maybe trying to gin up a few specific benchmark numbers.
  • Wilco1 - Saturday, October 9, 2021 - link

    If you're that cynical, one could equally claim that adding *more* cache is mighty suspicious and just games benchmark numbers. Obviously nobody would spend a few hundred million on a chip just to game benchmarks. The fact is there is a market for chips with lots of cores. Half the SPEC subtests show huge gains from 60% extra cores despite the lower frequency and halved L3. So clearly there are lots of applications that benefit from more cores and don't need a huge L3.
  • Wilco1 - Saturday, October 9, 2021 - link

    The Altra Max wins the more useful critical-jOPS benchmark by over 30%. It also wins the LLVM compile test and SPECINT_rate by a few percent. The 7763 only wins SPECFP by 18% (not Altra's market) and max-jOPS by 13%.

    So yes my point is spot on, the small cache does not look at all like a serious liability. Per-core performance isn't interesting when comparing a huge SMT core with a tiny non-SMT core - you can simply double the number of cores to make up for SMT and still use half the area...
  • mode_13h - Saturday, October 9, 2021 - link

    > Per-core performance isn't interesting when comparing ...

    Trying to change the subject? We didn't mention that. We were talking only about cache.

    > The Altra Max wins the more useful critical-jOPS benchmark by over 30%.

    That's really about QoS, which is a different story. Surely, relevant for some. I wonder if x86 CPUs would do better on that front with SMT disabled.

    > the small cache does not look at all like a serious liability.

    Of course it's a liability! It's just a very workload-dependent one. You need only note the cases where Max significantly underperforms, relative to its 80-core sibling, to see where the cache reduction is likely an issue.

    The reason why there are so many different benchmarks is that you can't just seize on the aggregate numbers to tell the whole story.
  • mode_13h - Saturday, October 9, 2021 - link

    Apologies, I now see where schujj07 mentioned per-core performance. I even searched for "per-core" but not "per core".
  • Wilco1 - Sunday, October 10, 2021 - link

    > You need only note the cases where Max significantly underperforms, relative to its 80-core sibling, to see where the cache reduction is likely an issue.

    There are regressions in 4 of the 10 integer benchmarks - only mcf is significant. However if you look closely, Altra Max still beats/equals the 8380 in 3 out of those 4. Clearly a 40MB L3 is not large enough for these benchmarks, so would you also call that a "major liability"? EPYC beats all by a huge margin in these 4, so clearly 256MB L3 works well, but it's also way too expensive for a monolithic die.

    > The reason why there are so many different benchmarks is that you can't just seize on the aggregate numbers to tell the whole story.

    No. The aggregate result averages out the extremes and is a better prediction for average performance. For example Altra Max is slower than Altra on gcc_r and far behind EPYC. However in LLVM compilation Altra Max beats Altra by ~20% and is pretty much equal to the 7763. So in real world tests EPYC's huge caches don't help nearly as much as the gcc_r subtest suggests.
  • mode_13h - Sunday, October 10, 2021 - link

    > There are regressions in 4 of the 10 integer benchmarks - only mcf is significant.

    When you have 45% more core x GHz, *any* regression is significant! By that token, we should also be marking the xz test as underperforming, since it's only a ~20% improvement.

    It's also convenient to seize on specint, when it suffers regressions on 7 of the 12 specfp tests.

    > Altra Max still beats/equals the 8380 in 3 out of those 4.
    > Clearly a 40MB L3 is not large enough for these benchmarks,
    > so would you also call that a "major liability"?

    This seems like a rather disingenuous point. To say anything about the 8380's cache, we'd need to see a comparison against other Ice Lake CPUs with a different core-to-cache ratio.

    > No. The aggregate result averages out the extremes and is a better
    > prediction for average performance.

    It's a flawed inference to conclude that "the average workload" will match an unweighted average of a set of intentionally disparate workloads.

    Furthermore, people buy hardware for "the average workload" less and less.

    > in LLVM compilation Altra Max beats Altra by ~20% and is pretty much equal to the 7763.

    You can't just cherry-pick the best results of each memory configuration. If you're going to deal in aggregates, then you need to aggregate the results per-configuration.

    > So in real world tests EPYC's huge caches don't help nearly as much as the gcc_r subtest suggests.

    As a matter of fact, the monolithic vs. quadrant results would argue the opposite, in your chosen example of LLVM. Furthermore, what qualifies LLVM compilation as more "real world" than the gcc test?
  • schujj07 - Monday, October 11, 2021 - link

    "The Altra Max wins the more useful critical-jOPS benchmark by over 30%"
    What are you talking about? In both critical-jOPS & max-jOPS the 2S 7763 is at the top of the chart. We can't extrapolate possible 2S performance for the Altra Max because, as the article notes: "Unfortunately, trading in one issue with another, we ran into other issues on the 2-socket test scenario where the test ran into issues at large thread counts. The 2S Q80-33 figures here only stresses 130 cores, while I wasn’t able at all to get 2S M128-30 figures at reasonable core counts, so I completely omitted results here."

    Per-core performance matters a lot. There are A LOT of programs, especially databases, that are licensed on a per-core metric. This means I need 8 cores of Altra Max to equal the performance I get from 4 cores of an Epyc, which will kill my licensing cost. Those added cores could easily double the license cost, and those licenses are oftentimes MUCH more expensive than the server itself. It is obvious you don't work in industry, as this is common knowledge.

    Overall the Altra Max is interesting but nothing more than that. It won't be a player in industry until the per-core performance is at least double what it currently is and there is enterprise software able to take advantage of it. Basically, Altra Max is like IBM Power, and that is niche at best.
  • Wilco1 - Tuesday, October 12, 2021 - link

    Altra Max is still at the top for 1S critical-jOPS - that's not invalidated by missing 2S results.

    If you worked in the industry, you would know that per-core licenses have a multiplier based on CPU type to level out performance differences. In cases where per-core performance really matters and you completely disable SMT (for example for high-frequency trading), you would not consider these many-core servers at all but get 8 or 16 core CPUs with significantly higher bandwidth, cache and power per core.

    It seems you misunderstand the target market completely. You probably also call Graviton 2 a niche even though it is already a significant percentage of AWS and growing fast. And that with just 64 cores and far lower per-core performance than Altra...
  • schujj07 - Tuesday, October 12, 2021 - link

    How about we do some math instead. Compared to the 1S 80-core, the 1S Max gets 42% better performance with THP disabled, or 30% better with THP enabled, for 60% more cores in your beloved critical-jOPS. Compare that to the Epyc 7763, which gains 105% from a second socket with THP disabled and 102% with THP enabled. Even the older 80C only adds 62% despite doubling its cores. Based on that alone, the best-case scenario is that the 2S Altra Max ties the 2S Epyc in critical-jOPS. Sure, it beats the 1S 7763, but it barely beats the 2S 7443, a 24c/48t CPU.

    I do work in industry, as a VMware Admin. Unless you are running Oracle, most of these will be run on systems of up to 32c/64t to max out your VMware license. If you have specific needs you can get the higher-frequency parts, which also go up to 32c for AMD or 28c for Intel. The extra cost of the additional Windows Datacenter core licenses is offset by reducing the number of physical hosts. What software has "per-core licenses have a multiplier based on CPU type to level out performance differences"? That sounds like they are going to charge you X for Xeon Scalable Gen 1 but Y for Gen 2 and Z for AMD. That doesn't happen. MS SQL Server charges per core with a base license of 4 cores. Now if I need 8 cores on the Altra Max to equal the performance of an Epyc at 4 cores, I have doubled my license cost.
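
    To make that arithmetic concrete, here is a rough sketch in Python. The per-core price is a hypothetical placeholder, not a real SQL Server quote; the 4-core minimum and 2-core packs reflect how SQL Server core licensing is sold, as noted above.

        # Sketch of the per-core licensing math above.
        # PRICE_PER_CORE is a placeholder figure, not an actual SQL Server price.
        PRICE_PER_CORE = 7000  # hypothetical USD per core

        def license_cost(cores_needed: int) -> int:
            cores = max(cores_needed, 4)   # 4-core minimum per server/VM
            cores += cores % 2             # round up to whole 2-core packs
            return cores * PRICE_PER_CORE

        print(license_cost(4))  # 28000 for 4 Epyc cores
        print(license_cost(8))  # 56000 for 8 Altra Max cores -- double the license cost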

    Overall, ARM, with under 10% total market share, IS a niche player. They need to get to the same per-core performance & have software available for it to be an actual alternative. Until that happens, companies will play around with it, but nothing serious in the data center environment.
  • mode_13h - Tuesday, October 12, 2021 - link

    > You probably also call Graviton 2 a niche

    Don't put words in people's mouths. If you want to know whether @schujj07 considers it a niche, you can certainly ask.
  • Kangal - Thursday, October 7, 2021 - link

    This is basically a 3GHz Cortex-A76 (Neoverse N1), running in a 128-core tandem, and built with a more efficient/expensive Monolithic Socket based on TSMC's 7nm node. Sounds neat.

    I enjoyed seeing the older generation, which was basically a 2GHz Cortex-A73, running in 64-core tandem, and built on TSMC's 16nm node. It was quite value-for-money, at least in its time.

    Seems like this new version is giving Intel's Core i-series decent competition in single-threaded work, since Intel is having some issues with their own node and can't clock too high, whilst AMD has a clear advantage here. When it comes to total/multi-threaded performance, ARM wins through the sheer grunt of all those extra cores. Overall, it is a competitive choice for today and the next few years.

    What will be interesting is when they bump it up to the Cortex-A78 (Neoverse V1) and use something like TSMC's 5nm node, which should bring it to full parity with Intel on single-threaded performance. Or to the next best thing, ARM v9, using the Cortex-X2 (Neoverse N2) on the same TSMC 5nm node. But I share my previous concern that the first generation of (USA) ARM v9 is going to be quite disappointing; I'm optimistic about the (European) second generation. I think then we should see more tangible benefits, when combined with the TSMC 3nm node, which should bring it to parity with AMD's cores on single-threaded performance. Exciting times ahead. And yes, I know I am over-simplifying things here.
  • SarahKerrigan - Thursday, October 7, 2021 - link

    Previous Ampere parts weren't 64-core, 2GHz, or Cortex-A73. They were a custom (and bad) core, 32 per socket, at 3.3GHz.

    Neoverse V1 is based on the Cortex-X1, not the Cortex-A78. Neoverse N2 is based on the Cortex-A710, not the Cortex-X2.
  • Kangal - Friday, October 8, 2021 - link

    Sorry, by "older generation" I was talking about the Amazon Graviton one, not the previous Ampere Version.

    The proper upgrade from the Cortex-A76 is the Cortex-A78.
    The Cortex-A78 is the base micro-architecture, with the Cortex-X1 being a slightly modified derivative of it, and the Neoverse-V1 a further slightly modified version of that. That's why I worded it in that way. Whilst ARM claims a divergence between the Cortex-A710, Cortex-X2, and Neoverse-N2... I think we will end up seeing them have much more in common than not.
  • SarahKerrigan - Friday, October 8, 2021 - link

    The Graviton1 was 16 Cortex-A72 at 2.3GHz.
  • lemurbutton - Thursday, October 7, 2021 - link

    Before AMD can disrupt Intel in the server, Ampere has disrupted AMD. And now Intel is coming back with Sapphire Rapids. Doesn't look good for AMD.
  • Teckk - Thursday, October 7, 2021 - link

    AMD also has upcoming products, same as other companies :) Competition is good.
  • schujj07 - Thursday, October 7, 2021 - link

    Most likely Sapphire Rapids will only get Intel to Epyc Milan or a little past there. Overall ICL Xeon only caught Intel up to Epyc Rome. Initial tests on Milan were good, showing 5-7% better performance which isn't bad, however, it wasn't like what we saw on the desktop side. Turns out the benchmarks were run on a reference platform that AMD hacked to allow 3rd Gen support. Once benchmarks were done on Milan on platforms designed for 3rd Gen the performance jumped by another 10% or more. Basically that put ICL 15-17% behind Epyc Milan and SPR is only supposed to get about 19% more performance.
  • mode_13h - Thursday, October 7, 2021 - link

    > Initial tests on Milan were good, showing 5-7% better performance which isn't bad

    Initial tests were flawed, due to non-production hardware/firmware. Check out their update:

    https://www.anandtech.com/show/16778/amd-epyc-mila...
  • schujj07 - Thursday, October 7, 2021 - link

    "Initial tests were flawed, due to non-production hardware/firmware."

    I basically said that in my initial comment.

    "Turns out the benchmarks were run on a reference platform that AMD hacked to allow 3rd Gen support. Once benchmarks were done on Milan on platforms designed for 3rd Gen the performance jumped by another 10% or more. "
  • GreenReaper - Saturday, October 9, 2021 - link

    Your initial comment was too long, he didn't read that far before hitting reply.
  • whatthe123 - Thursday, October 7, 2021 - link

    Ice Lake is not great in general. It was an improvement over 14nm, but the core scaling was not there and their 10nm was still struggling to hit competitive boost clocks. I don't think the uplift they saw between 14nm and Ice Lake reflects Sapphire Rapids at all, considering the major design changes and improved node, but if it does, I don't see how Sapphire Rapids would compete with Milan at a lower core count. If it's competing with Milan, then the per-core performance and MT scaling have seen a huge uplift compared to Ice Lake.
  • schujj07 - Thursday, October 7, 2021 - link

    Had ICL come out on time people would have been more impressed. The problem that ICL has is that, since it was soooooo late, Intel had to squeeze every ounce of performance out of SKL. Overall ICL is just a short-term platform, but it sets the baseline for the performance comparison to SPR.
  • mode_13h - Friday, October 8, 2021 - link

    > Had ICL come out on time people would have been more impressed.

    Depends on what you mean by "on time". If it had come in place of Cascade Lake, then probably. However, if it still followed Cascade Lake, then the clockspeed drop and strong competition from Rome & comparison with Graviton 2 are still unflattering.

    If Ice Lake had notched up the clockspeed ladder *and* launched in place of Cascade Lake, then it would've been a very solid entry.

    Anyway, I'm sure Intel is still selling every one they can make. People are quick to point out how AMD benefited from Intel's process woes, but the past 5 years' demand surge has provided Intel a very nice cushion. They basically couldn't have picked a better time to falter.
  • schujj07 - Friday, October 8, 2021 - link

    I believe that ICL was supposed to be 2nd Gen Scalable. When Intel found that it wasn't ready, they released Cascade Lake. Even worse was needing to release Cooper Lake for 4-8S systems in 2H 2020.
  • dullard - Thursday, October 7, 2021 - link

    Far too many people mistakenly think it is AMD vs Intel. In reality it is ARM vs (AMD + Intel together).
  • TheinsanegamerN - Thursday, October 7, 2021 - link

    In reality it's AMD VS INTEL, with ARM the red headed stepchild with 3 extra chromosomes drooling in the corner. x86 still commands 99% of the server market.
  • DougMcC - Thursday, October 7, 2021 - link

    And the reason is price/performance. These chips are pricey for what they deliver, and it shows in Amazon instance costs. We looked at moving to Graviton 2 instances in AWS, and even with the in-house pricing advantage there we would be losing 55+% performance for a <25% price advantage.
  • eastcoast_pete - Thursday, October 7, 2021 - link

    Was/is it really that bad? Wow! I thought AWS was making a value play for their Gravitons; your example suggests that isn't working so great.
  • mode_13h - Thursday, October 7, 2021 - link

    Could be that demand is simply outstripping their supply. Amazon isn't immune from chip shortages either, you know?
  • DougMcC - Thursday, October 7, 2021 - link

    It was for us. Could be that there are workload issues specific to us, though as a pretty basic j2ee app it's somewhat hard for me to imagine that we are unique.
  • lightningz71 - Friday, October 8, 2021 - link

    It is VERY workload dependent.
  • lemurbutton - Friday, October 8, 2021 - link

    Graviton2 is now 50% of all new instances at AWS.
  • DougMcC - Friday, October 8, 2021 - link

    Not super surprising. Even with the massive loss of performance, it's still cheaper. If you don't need performance, why wouldn't you choose the cheapest thing?
  • Wilco1 - Friday, October 8, 2021 - link

    In most cases Graviton is not only cheaper but also significantly faster. It's easy to find various examples:

    https://docs.keydb.dev/blog/2020/03/02/blog-post/
    https://about.gitlab.com/blog/2021/08/05/achieving...
    https://yegorshytikov.medium.com/aws-graviton-2-ar...
    https://www.instana.com/blog/best-practices-for-an...
  • mode_13h - Thursday, October 7, 2021 - link

    > x86 still commands 99% of the server market.

    Depends on what you consider the "server market", but AWS is very rapidly switching over. Others will follow.

    Lots of cloud compute just depends on density and power-efficiency. And here's where ARM has a real advantage.
  • Wilco1 - Thursday, October 7, 2021 - link

    According to https://www.itjungle.com/2021/09/13/the-cacophony-... Arm server revenue has been 4-5% over the last few quarters.
  • schujj07 - Friday, October 8, 2021 - link

    Anything under 10% market share in the server world is basically considered a niche player. Right now AMD is over 10% so they are finally seen as an actual player in the market.
  • Spunjji - Friday, October 8, 2021 - link

    Pointing at current market share that resulted from a lack of viable ARM competition isn't a great argument for your prediction that ARM will not gain market share, especially when you're being presented with evidence of viable ARM competition.
  • mode_13h - Thursday, October 7, 2021 - link

    > Before AMD can disrupt Intel in the server,

    *Before*? This is already happening! You can clearly see it in AMD's server market share, as well as the price structure of Ice Lake.

    > And now Intel is coming back with Sapphire Rapids. Doesn't look good for AMD.

    AMD has Genoa, V-Cache, and who knows what else in the pipeline. Oh, and they can also build an ARM core just as good as anyone (with the possible exceptions of Apple and Nuvia/Qualcomm).
  • yetanotherhuman - Friday, October 8, 2021 - link

    Not even in slight agreement. Different architecture.
  • eastcoast_pete - Thursday, October 7, 2021 - link

    Thanks Andrei, great analysis! IMO, the biggest problem Ampere and other firms that develop server CPUs based on ARM designs have is that their natural customers - large, cloud-type providers - pretty much all have their own in-house-designed ARM-based CPUs, and won't buy thousands of third-party CPUs unless they do something their own can't do, or can't do nearly as well. AWS, Google, MS, and Apple still buy x86 CPUs from Intel or AMD because there is customer demand for those instances, but they also try to shift as much as they can to their own home-grown ARM server systems. In this regard, has anyone heard any updates about the ARM designs supposedly in development at MS? Maybe Ampere can get themselves bought out by them?
  • name99 - Friday, October 8, 2021 - link

    “own house-designed ARM-based CPU’s”?
    We obviously have Graviton. Apple seem a reasonable bet at some point. Maybe a large Chinese player.

    Do we have any evidence (as opposed to hypotheses and rumors) of Google, Facebook, Microsoft, or most of China? Or other smaller but still large players like Yandex or Cloudflare?
  • Sivar - Thursday, October 7, 2021 - link

    This is a proper old-school deep CPU review.
  • vegemeister - Thursday, October 7, 2021 - link

    Text says Intel Xeon 8380 is running at 205 W power limit, but the table says 270 W. Which is it? I assume 270 W like ARK says?
  • Unashamed_unoriginal_username_x86 - Thursday, October 7, 2021 - link

    M112-30 in the table on the first page says 96 cores?
  • Andrei Frumusanu - Friday, October 8, 2021 - link

    That was a typo.
  • Unashamed_unoriginal_username_x86 - Friday, October 8, 2021 - link

    I'm glad we agree
  • nandnandnand - Thursday, October 7, 2021 - link

    Let's all run out and buy the $800 32-core.
  • mode_13h - Friday, October 8, 2021 - link

    Add in platform costs and it's not going to look like much of a bargain compared with a Threadripper or Xeon W-3300 system.
  • mode_13h - Friday, October 8, 2021 - link

    Of course, if you happen to need specifically an ARM-based workstation, I'm not aware of any better option than Altra.
  • Brutalizer - Sunday, October 10, 2021 - link

    The old SPARC T8 CPU, with 32 cores, is still almost faster than all of these CPUs. Here, in SPECjbb2015, a single CPU achieves 153,500 max-jOPS and 90,000 critical-jOPS.
    https://blogs.oracle.com/bestperf/specjbb2015:-spa...
  • Wilco1 - Sunday, October 10, 2021 - link

    Those are results based on weeks/months of tuning, so not at all comparable with this review (the same is true for SPEC scores). In your link a 1S 8180 does 84100 max-jOPS, while the faster 8280 gets 81700 in this review. Similarly the best critical-jOPS is 62600 for the 8180 while the 8280 gets just 47900.
  • Sudharshan Anbarasu - Sunday, October 17, 2021 - link

    How about Monero mining performance...

    Since cache plays a major role on the x86 platform, I'm curious to know how it works on the Arm architecture.
