42 Comments

  • webdoctors - Friday, August 28, 2020 - link

    Makes sense: when there are only about five customers dominating the entire datacenter server market (Alibaba, Google, Amazon, Facebook, Microsoft), you might as well cater to them.

    It would be good to add some numbers to this article. For example, Amazon's internal networking group reportedly produces enough network hardware for Amazon that, if it were a standalone networking company, it would be roughly the fourth biggest in the world. A similar stat probably holds if they sold HDDs. That's how big these customers are.
  • Industry_veteran - Saturday, August 29, 2020 - link

    Actually, it makes no sense. All the big or hyperscale customers today have the money and technological reach to hire their own teams to design and manufacture CPUs to their own requirements. Why would they need Marvell? Besides, the team Marvell has is not really experienced in server processors. It is basically an embedded-CPU team that was inflated to build servers. They never understood the demands of the server market and what it takes to knock off Intel.
  • pifourth - Monday, August 31, 2020 - link

    We'll see how Marvell (Cavium) does with hyperscaler customers.

    Marvell (Cavium) does come from a background in multi-core ARM processors and high-speed crypto operations that goes back many years. Marvell (Cavium) might see some business from the combination of ARM servers and high-speed crypto of various sorts (IPsec, SHA, public-key crypto, PKI, IKE, AES, crypto that is resistant to quantum-computer attack, etc.). Time will tell how much business it gets; a rough sketch of that kind of acceleration follows this comment.

    Tom

    If Intel charges quite a bit extra for server chips that also do high-speed crypto operations, Marvell (Cavium) might get some sales on a much cheaper ARM server that does high-speed crypto operations. On the other hand, if Intel doesn't charge much extra for server chips that do high-speed crypto operations, then Marvell (Cavium) may not see much business for customized server chips. We'll see what develops.
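
    For concreteness, here is a minimal sketch of the kind of hardware crypto acceleration being discussed: one AES round using the ARMv8-A Crypto Extensions that current Arm server cores implement. This is an illustrative fragment, not anyone's production code; the block and key values are dummies, and a real cipher needs the full key schedule and all rounds.

        /* One AES round via the Armv8 Crypto Extensions.
           Build on an Arm machine: gcc -O2 -march=armv8-a+crypto aes_round.c */
        #include <arm_neon.h>
        #include <stdio.h>

        int main(void) {
            uint8x16_t block = vdupq_n_u8(0x3a);  /* dummy 128-bit plaintext block */
            uint8x16_t rkey  = vdupq_n_u8(0x7f);  /* dummy round key */

            /* AESE = AddRoundKey + SubBytes + ShiftRows; AESMC = MixColumns.
               A full AES round is just these two instructions. */
            uint8x16_t state = vaesmcq_u8(vaeseq_u8(block, rkey));

            uint8_t out[16];
            vst1q_u8(out, state);
            printf("first state byte after one round: 0x%02x\n", out[0]);
            return 0;
        }

    IPsec and TLS stacks get their throughput by keeping pipelines of exactly these instructions full, which is why per-chip crypto pricing matters in this segment.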
  • Drivebyguy - Thursday, January 14, 2021 - link

    I'm late to this but I'll comment. I don't think the hyperscale customers have quite the money and reach you have in mind. AWS for example is using an off-the-shelf Arm design (the Neoverse cores) at least at this point. Granted, they've done ML chips (Inferentia, Trainium) but a full server CPU from scratch is a different matter. If Marvell can deliver an advantage vs. the standard (although pretty good, obviously) Arm-created core designs, and put that on die or in package with some of these other capabilities, I'd say they have a pretty fair shot.
  • ksec - Saturday, August 29, 2020 - link

    Well, hyperscalers (the Big 5 you mentioned) only represent half of the market. There are still many smaller players that make up the other 50%.

    But again, you need hyperscalers to kick-start that volume before moving on to the more general market. So if they do succeed, I am sure they will come back with off-the-shelf solutions.
  • rahvin - Saturday, August 29, 2020 - link

    No, it would make sense, because like ThunderX and ThunderX2, they got ThunderX3 to actual production silicon and realized it wasn't competitive, just like every other ARM server design.

    It would seem the ARM server future rests on Amazon's shoulders alone.
  • Wilco1 - Sunday, August 30, 2020 - link

    Arm servers have been competitive at the time they were released. TX2 looked great compared with Skylake: https://uob-hpc.github.io/assets/cug-2018.pdf , and Cloudflare wanted to move to Centriq because it beat their x86 servers in every respect (it looks like they will use Ampere Altra now).

    No, Graviton is definitely not alone. Ampere Altra already beats the fastest EPYC server, and Nuvia and several other companies are designing Arm servers. Arm's next-generation Neoverse will obviously be even faster and more efficient, so the future looks very promising.
  • Industry_veteran - Friday, August 28, 2020 - link

    So after spending hundreds of millions of dollars on false promises, they are finally admitting that the ARM server market was never there. This is no surprise; Broadcom's leadership had already reached this conclusion in 2016.
    This Marvell team is actually the team that spent close to $500 million at Broadcom and never came up with a working CPU despite four years of that spending. Finally, Broadcom's leadership got fed up with this team and offloaded them.
    To my surprise, they found employment at Cavium and spent hundreds of millions more, at Cavium and then Marvell, before reaching the conclusion that there is no demand for ARM in the general-purpose server market. The ARM technology is certainly not good enough to abandon Intel for.
    Makes me wonder how decisions are made in high-tech companies! People without common sense are reaching the top of the corporate ladder due to the inherent buddy system in Silicon Valley.
    The question is why the SEC doesn't investigate these people who supposedly spent hundreds of millions of dollars on technology that never had much promise to begin with.
  • FunBunny2 - Saturday, August 29, 2020 - link

    "People without commonsense are reaching the top of corporate ladder due to inherent buddy system in Silicon Valley."

    not specific to SV. it's the decade long invasion of the MBA. the MBA 'knows' that management is a separable skill, which skill is only truly possessed by the MBA, and such a skill that can be applied by any MBA to any industry. that's what has killed American industry, not China.
  • Industry_veteran - Saturday, August 29, 2020 - link

    While I agree with you about MBAs, the buddy system is not just limited to MBAs, and the people without common sense are not just MBAs but engineers too.
    The chief architect of the team at Broadcom knew full well by 2016 that the ARM server market is all about getting hyperscale customers. That is when Broadcom shut down its ARM server project and let the team go. The same team then went to Cavium, and Cavium was then bought by Marvell, touting the promise of the ARM server market. So how can anyone say that not knowing the realities of the ARM server market is only the fault of MBAs?
  • TomWomack - Saturday, August 29, 2020 - link

    I don't believe it was obvious in 2016 or 2017 that the ARM server market was only hyperscalers; certainly at ARM we were really hoping for somebody to produce motherboards that could go in a standard Supermicro 1U case and be bought by anyone who might have bought a Supermicro Xeon E5 machine.

    But ARM wasn't willing to compete with its customers by providing a reference model, in the way that every Intel server outside the hyperscalers uses an Intel chipset on an Intel motherboard, and ARM didn't have enough pull to get the customers to be broadly compatible with one another - being the first to do the R&D to produce a completely commoditised system is difficult to sell to a company's board.
  • TomWomack - Saturday, August 29, 2020 - link

    ARM made development boards, but they were things you could just about endure having in your continuous-integration process rather than things you could plausibly put on engineers' desks, and they clearly weren't going to be on the same family tree as things you could put on every desk in a company.

    (A distinctly more interesting question is how ARM managed to lose the Chromebook market; I'm inclined to put a lot of blame on Qualcomm's licensing model there, but Broadcom or Nvidia could easily have developed a Chromebook-targeted processor if a business case could be contrived. Nvidia does have a decent-volume cash cow in the Nintendo Switch.)
  • TomWomack - Saturday, August 29, 2020 - link

    If some supplier could have provided ARM in 2017 with two thousand reliable 1U boxes with 32 Cortex-A72 cores, 128GB memory and 10Gbps Ethernet running out-of-the-box RHEL, I suspect ARM would have been delighted to pay 50% over what generic Intel nodes cost, even if they had to lease a fair number of boxes to Cadence and Mentor and Synopsys to get the EDA tools working well on ARM. But that wasn't a capability and price point that anyone was interested in.
  • FunBunny2 - Saturday, August 29, 2020 - link

    "ARM wasn't willing to compete with its customers "

    it's an axiom of compute that C is the universal assembler, which leads to the conclusion that any cpu can (modulo intelligent design) run the same code as any other. perhaps another way to express the Turing machine hypothesis. in particular, ARM remains, at the compiler-writer level, user-visible RISC, while X86 is not. it ought to be a slam dunk that C compilers on ARM can turn X86 C source into equivalent machine code. likely more verbose, of course. so, what, exactly, is it that the ARM ISA can't do (perhaps with more source, of course) that X86 can?

    after all, servers don't need all that GUI stuff. S/360, S/370, S/390, and now z have been supporting very large user bases for nearly 60 years. this is not new stuff.
  • Industry_veteran - Sunday, August 30, 2020 - link

    It was very obvious by 2016. In fact, the three main companies working on ARM server SoCs at that time (Cavium, Broadcom, and Qualcomm) were all competing with each other for the same hyperscale customers. By 2016, all the hyperscale customers were having regular discussions about their technical requirements with all three major ARM server vendors, despite the fact that only Cavium (before the Broadcom Vulcan team merger) had a chip out in the market. However, Cavium's chip was not at all competitive. Broadcom's effort failed to produce a working chip by the middle of 2016, and that is when the company decided to pull out of the general-purpose ARM server market instead of giving the team another chance and millions of dollars more. Qualcomm didn't have an official version ready by 2016 either.
    I know for sure the writing on the wall was clear by 2016: getting hyperscale customers was key to the success of any general-purpose ARM server vendor.
  • TomWomack - Sunday, August 30, 2020 - link

    And all three of those companies thought that building their own ARM core was the way to go, possibly because they thought they could put more resources behind it than ARM itself could and move faster than ARM's roadmap... AMD built something around Cortex-A57 which worked quite reasonably, but they didn't push it.

    Apple almost certainly has put more resources behind ARM core development than ARM has, and has managed to stay consistently a couple of years ahead of the standard ARM cores, but it's abundantly clear that there isn't the money in the commodity server market for someone to spend a billion dollars a year consistently.

    It has been AMD, much more than the ARM device manufacturers, that got to capitalise on Intel's trouble at 10nm. The most obvious sign is that the ARM vendors all implemented eight-channel DDR4 because they thought they'd be competing in 2018 with an eight-channel-DDR4 Ice Lake, and instead they've ended up half a step ahead.
  • demian_thorne - Sunday, August 30, 2020 - link

    Tom,

    Apple has put more resources into the ARM core itself. That is the key difference. The Apple ARM implementation is considerably more beefed up and thus more competitive, but guess what? Considerably more expensive. Do you think QCOM doesn't know how to do that? Of course they know... but they also know the Android market is less premium than Apple's. If you make the ARM core more competitive, you add a good chunk to the BOM. Apple can do that. QCOM cannot in the Android space.

    So look at it from the server-market perspective now. What is your value proposition? You will make a general-purpose ARM core that is considerably beefed up, so the cost will approach the two incumbents', and you will ask the customers to adjust their software? What is the value of all that?

    Change for the sake of change? That is where the approach fails. It is what it is. ARM can be competitive in the server space on performance, but the reason to do it doesn't exist. Sure, there are cases where a run-of-the-mill ARM core makes sense, and that is what Amazon is doing, but that is a limited implementation.

    Respectfully

    DT
  • Industry_veteran - Sunday, August 30, 2020 - link


    I agree with most of what @demian_thorne mentioned. However, there is one more thing to consider besides the cost. If you make the ARM core beefier, it costs more, true; but it also consumes more power. The hyperscale customers who are looking at ARM as an alternative to Intel expect an ARM server chip to consume a lot less power than Intel's. If you make the ARM core beefier to make per-thread performance competitive with Intel, it no longer holds that power advantage over Intel.
  • Wilco1 - Sunday, August 30, 2020 - link

    No, a beefier core doesn't need to burn as much power as x86. Consider this graph showing per-core power and performance of current mobile cores: https://images.anandtech.com/doci/15967/N1.png

    Now, which core has both the best performance and the lowest power? So can you be beefy, fast, and efficient at the same time? Welcome to the world of Arm!
  • Wilco1 - Sunday, August 30, 2020 - link

    Ampere Altra showed that you can beat EPYC using small cores, so you don't even need to make them larger. However, even a beefed-up core remains smaller than x86 cores. EPYC die size is ~1088mm^2 for 64 cores; Graviton 2 needs ~360mm^2 for 64 cores. If you doubled the size of each core, that increases to ~424mm^2, which is still about 2.5x less silicon! TSMC 7nm yield on such dies is ~66%, and if a wafer costs $15K, the cost per CPU is less than $200 (a back-of-envelope check follows this comment).

    So the advantages of Arm are obvious and impossible to ignore. Any company which spends many millions a year on x86 servers would be stupid not to try Arm.
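
    The arithmetic behind that sub-$200 figure can be checked with the numbers quoted above plus two assumptions of mine: a 300mm wafer and the standard dies-per-wafer approximation (usable wafer area divided by die area, minus an edge-loss term). A quick sketch:

        /* Back-of-envelope die cost check. Build: gcc -O2 die_cost.c -lm */
        #include <math.h>
        #include <stdio.h>

        static double dies_per_wafer(double wafer_d_mm, double die_area_mm2) {
            const double PI = 3.14159265358979;
            double r = wafer_d_mm / 2.0;
            /* classic approximation: gross area term minus edge losses */
            return PI * r * r / die_area_mm2
                 - PI * wafer_d_mm / sqrt(2.0 * die_area_mm2);
        }

        int main(void) {
            double die_area = 424.0;   /* mm^2: doubled-core Graviton-2-like die */
            double yield    = 0.66;    /* quoted TSMC 7nm yield at this size     */
            double wafer    = 15000.0; /* USD per wafer, as quoted               */

            double gross = dies_per_wafer(300.0, die_area);
            double good  = gross * yield;
            printf("gross dies: %.0f, good dies: %.0f, cost/CPU: $%.0f\n",
                   gross, good, wafer / good);
            return 0;
        }

    This prints roughly 134 gross dies, 89 good dies, and about $170 per CPU, consistent with the "less than $200" claim.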
  • demian_thorne - Sunday, August 30, 2020 - link


    Wilco1,

    Before getting to the main argument here, let me point out some apparent discrepancies in your post:

    1. The only performance test that I have seen between Graviton 2 and EPYC is with EPYC 1, which is 3 years old. EPYC 3 will be released by the end of 2020; don't you think that makes this an unfair comparison?

    2. The silicon size you mentioned, I believe, refers to EPYC 2, which, like I said, was not part of the comparison test.

    3. One very large part of said chip is the L3 cache; EPYC 2 has basically twice as much.

    Those arguments on tech websites can go on forever, so let me pass to the main point. There is not yet an apples-to-apples comparison test, is there? I mean, these SPECint tests are not irrelevant; I'm not calling them apples-to-oranges, but they are not apples-to-apples either.

    You know why? Because there is no comparable profitable software that has been tuned to run on both platforms. If that were the case, then we could talk about chip sizes (it's not just the size of the core; it is the cache, the memory controllers, the interconnects, etc.). If we could run SAP on both platforms, then we could see how each chip is finally architected, with all the bells and whistles, to perform. And then we could make a fair comparison.

    And that is what you are missing. Nobody will save hardware money only to spend an order of magnitude more on re-architecting software.

    >any company which spends many millions a year on x86 servers would be stupid not to try Arm

    Yet they don't, because they would have to spend billions on the software side to get the benefits out of it.

    See the story of AMD vs. INTC. A much clearer apples-to-apples comparison there, and yet AMD has just reached 10% of the market, and that is of the total market, not just the hyperscalers (say, what, 6% there?).

    Same ISA, and yet the hyperscalers didn't jump at this cost-saving opportunity as much as you would think, did they?

    There is a reason those MBAs are still needed in SV. LMK if you still disagree...

    DT
  • Wilco1 - Monday, August 31, 2020 - link

    1. Where did I compare with EPYC 1? Next year we can compare next-gen EPYC with next-gen Arm, but today we've got EPYC 2, Graviton 2 and Ampere Altra. That's a fair comparison.

    2. EPYC 1, like anything Intel, is so far behind it's not worth looking at.

    3. Actually, EPYC 2 has 8 times as much L3 cache as Graviton 2. It improves performance, but it also carries a huge area penalty. Graviton 2 is able to get similar performance using a fraction of the silicon and power. How is that not a huge advantage?

    A lot of server software has already been ported to and optimized for Arm. You can find many benchmarks using Graviton 2 if you care to look. The general conclusion is that most software works efficiently out of the box, so the old "it's prohibitively expensive to port software to Arm" excuse is simply false (a trivial illustration of why follows this comment). Years ago Cloudflare blogged about how easy it was to port all their software to Arm; it took one guy a few weeks, including full optimization. Claiming it costs billions is ridiculous and only shows how biased you are.

    So yes, I disagree. AMD can't offer the same cost savings as Arm, the same throughput per watt, or more than 64 cores, so it's not nearly as attractive to hyperscalers. Graviton is only the first step; I expect many of the hyperscalers to move to Arm in the coming years.
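
    Why porting is usually cheap: portable C (and most higher-level code) has no ISA dependence, so the same source builds unchanged on x86-64 and aarch64, and only the few spots with hand-written assembly or intrinsics need per-architecture paths. A trivial sketch, using the standard GCC/Clang predefined macros:

        /* The same file compiles natively on either machine: gcc -O2 portable.c */
        #include <stdio.h>

        int main(void) {
        #if defined(__aarch64__)
            puts("built for 64-bit Arm (e.g. Graviton 2)");   /* Arm path */
        #elif defined(__x86_64__)
            puts("built for x86-64 (e.g. EPYC or Xeon)");     /* x86 path */
        #else
            puts("built for some other ISA");
        #endif
            return 0;
        }

    The real porting effort, as in the Cloudflare write-up, goes into the small minority of code with intrinsics, inline assembly, or performance tuning, not into wholesale rewrites.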
  • demian_thorne - Monday, August 31, 2020 - link

    The only comparison I have seen between EPYC and Graviton 2 is on AnandTech, and that was EPYC 1. But do you care to back up your statement about lots of server software being ported to Arm with facts? Either links, or websites, or Google phrases?

    Hell, it is difficult to make an apples-to-apples comparison between Android and iPhone, and you say there is lots to compare between Arm and x86?

    Links, websites, Google phrases, so you can educate me :)
  • Wilco1 - Monday, August 31, 2020 - link

    AnandTech also reviewed the EPYC 7742. The new EPYC 2 instances didn't get much coverage or benchmarking, however.

    Arm has various blogs; a recent one I thought was interesting: https://community.arm.com/developer/tools-software...

    It shows that both Mentor and Cadence EDA tools have been ported to Arm and look fast on Graviton 2. So Arm now uses Arm servers to make better Arm CPUs! A few more:

    https://docs.keydb.dev/blog/2020/03/02/blog-post/
    https://idk.dev/improving-performance-of-php-for-a...
    https://blog.treasuredata.com/blog/2020/03/27/high...
    https://extricate.org/2020/05/13/benchmarking-the-...

    So a wide variety of server software runs faster on Graviton 2 and is cheaper as well.
  • Industry_veteran - Monday, August 31, 2020 - link

    Wilco1,
    Do you have a total power consumption comparison for the entire SoC, server-grade ARM vs. Intel?
    Giving single-core simulated numbers doesn't show the whole picture.
    Also, SPECint numbers are just one parameter. If you ask hyperscale customers, they will give you a long list of applications that they consider for comparison.
    Also note that there is an important difference between having software merely ported to ARM and having software optimized for ARM; these are two different things. Intel has been working with the software community for decades. There is no question ARM can do it, but unfortunately no ARM hardware vendor is willing to stay in the game for decades and sustain big losses year after year in the hope that someday ARM will have equal footing with Intel in the software development community. For this reason, almost all the big companies backed out of ARM servers one by one: AMD, Broadcom, Qualcomm, and now Cavium/Marvell. It makes no sense to spend $100 million a year to make less than $10 million in revenue unless there is a path that clearly shows the company turning it around in the future.
  • Wilco1 - Tuesday, September 1, 2020 - link

    Industry_veteran,

    That graph proves that your assertion that a beefy core must use as much power as Intel's/AMD's is clearly false. A core can be beefy and beat Intel/AMD using far less power. Graviton 2 uses about 110W max, which is about half the power of a 64-core EPYC 2 and about a quarter of what Intel needs for equivalent performance.

    There are lots of Arm developers working on optimizing compilers and tuning software so Arm code works well on Linux, Android, iOS, Windows, etc. This is not something a single company is doing, and it certainly doesn't take decades or cost billions (I already mentioned that Cloudflare ported and tuned all their internal code in a few weeks).

    Besides grossly exaggerating the costs and effort, you're also overestimating the gains from tuning. If, as expected, next-generation Arm servers have significantly higher per-core performance than Intel/AMD, how much does the last 10% from tuning matter? It's the icing on the cake: nice, but not essential.

    It sounds to me like both you and Demian are desperately trying to move the goalposts to spin Intel/AMD in a positive light. AMD and Intel simply cannot compete with Arm's lightning pace of innovation and annual 20-30% performance gains. I bet we'll see 5nm Arm servers with DDR5 before Intel/AMD get there.
  • Industry_veteran - Tuesday, September 1, 2020 - link

    Wilco1,
    I think you are comparing apples with oranges when you compare a so-called custom SoC like Graviton 2 with general-purpose CPUs from Intel and AMD.
    I have already mentioned that custom ARM chips made by a hyperscale customer like Amazon have inherent advantages, because they are made for specific narrow workloads and have a large guaranteed market.
    Another point is that for general-purpose chips you need to see power consumption per SoC under a variety of different workloads. Statements like "Graviton 2 uses about 110W max, which is about half the power of a 64-core EPYC 2 and about a quarter of what Intel needs for equivalent performance" make me wonder: under what conditions, benchmarks, workloads, etc.?
    I do not have any affinity for Intel. However, I have been through this at a major ARM chip maker. I know how these numbers games are played.
    Just think about it for a minute. If general-purpose ARM server SoCs were so good, then customers would be jumping up and down to get them, given the lower costs of ARM CPU SoCs. For hyperscale customers in particular, who buy hundreds of thousands of servers each year, even a few dollars of savings per chip results in massive overall savings. However, you see all the major chip companies, AMD, Broadcom, Qualcomm, and now Marvell, getting out of that market one by one, and all of them had made massive R&D investments in ARM server SoCs. Do you think they were not thinking properly when they got out of that business?
  • Wilco1 - Wednesday, September 2, 2020 - link

    Claims that an Arm SoC isn't suitable for X are literally from Intel's FUD playbook; apparently only x86 can access the internet and is suitable for phones! We know how that one played out...

    Yes, Graviton 2 is custom, but server CPUs are not designed for one specific workload. Neoverse N1 does particularly well on anything you throw at it (as my links show); if anything, it is more general-purpose than any previous microarchitecture! So calling it special-purpose without a shred of evidence is Intel-style FUD.

    We can debate history forever, but the bottom line is that designing a server is expensive and needs volume to be sustainable. Previous attempts used older processes, and the microarchitectures were not as advanced, so Neoverse N1 on 7nm is a huge leap forward. That makes it much easier to get design wins, and hyperscalers are where the volumes, money, and growth are. Once you've got volume, you can branch out into the wider server market.
  • Industry_veteran - Wednesday, September 2, 2020 - link

    Wilco1,
    If you are claiming that general-purpose CPU vendors don't make chips with particular workloads in mind, then I don't know what to say!
    Instead of answering my common-sense question about why all the big-name, deep-pocketed companies got out of the general-purpose ARM server market, and that too after spending significant sums of money, you chose to punt and call it FUD.
    No one is denying hyperscalers are where the volumes are, but that doesn't mean that is where the money is if you are a chip maker like Marvell. As I said, hyperscale customers like Amazon, and perhaps Google or Facebook, can easily put together a team and get their own custom SoCs designed using ARM cores. They do not need the likes of Broadcom, Marvell, etc. That validates everything I said about the lack of interest in general-purpose ARM server SoCs.
  • Wilco1 - Thursday, September 3, 2020 - link

    Let me give you a clear example: A64FX is not a general-purpose server chip. Everything about it is designed around HPC; its microarchitecture implies it will be terrible at SPEC, but give it properly vectorized and optimized FP code and it will beat anything else. Obviously Graviton 2 is not like that at all. Throw any random code at it and it will do well and compete with EPYC 2. So in what way is it not a general-purpose server chip?

    I did answer your question: "the bottom line is that designing a server is expensive and needs volume to be sustainable. Previous attempts used older processes, and the microarchitectures were not as advanced, so Neoverse N1 on 7nm is a huge leap forward. That makes it much easier to get design wins, and hyperscalers are where the volumes, money, and growth are. Once you've got volume, you can branch out into the wider server market."

    Yes, some of the larger hyperscalers may design their own SoCs, since that is now feasible with recent high-end Arm cores and 7nm/5nm TSMC. But smaller hyperscalers will use TX3 or Ampere Altra, and Nuvia in the future. The market is large and growing.
  • Industry_veteran - Thursday, September 3, 2020 - link

    Wilco1,
    Your statement says,
    "Previous attempts used older processes, and the microarchitectures were not as advanced."
    What is the basis of this claim?
    Are you suggesting Marvell's ThunderX2 and ThunderX3 didn't have good microarchitectures, so Marvell decided to pull the plug?
    As far as your claim about Graviton 2 goes, all I will say is that even Amazon is not claiming this chip to be a general-purpose server CPU SoC.
  • Wilco1 - Friday, September 4, 2020 - link

    The AnandTech article says the 16nm TX2 is great on multithreaded code; however, Centriq offered 50% more cores at much lower power by using 10nm. So which got most of the attention? TX3 is 7nm and looks like a huge improvement over TX2. But how does it compare with EPYC 2 or the 80-core Ampere Altra? We don't have reviews, but I bet it's a similar situation.

    In terms of microarchitecture, I'm saying that previous generations of Arm cores didn't get close enough to x86 on single-threaded performance. On SPECint2017, Graviton 2 is within 6% of a 3.5GHz Xeon Platinum. Ampere Altra runs its Neoverse N1 cores at 3.3GHz (rather than 2.5GHz), so its performance is equivalent to a 4.3GHz Skylake (the scaling arithmetic is sketched below). That's the kind of result that gets people excited.
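
    The 4.3GHz figure follows from simple clock scaling, assuming per-core SPECint scales roughly linearly with frequency (an optimistic simplification that ignores memory-bound effects):

        /* Clock-scaling check for the Skylake-equivalent claim above. */
        #include <stdio.h>

        int main(void) {
            double graviton2_vs_xeon = 0.94;       /* "within 6%" of a 3.5GHz Xeon  */
            double altra_clock_scale = 3.3 / 2.5;  /* Altra vs Graviton 2 clocks    */
            double equiv_skylake_ghz = 3.5 * graviton2_vs_xeon * altra_clock_scale;
            printf("Altra per-core ~ Skylake at %.1f GHz\n", equiv_skylake_ghz);
            return 0;
        }

    This prints about 4.3GHz, matching the claim; real scaling would fall somewhat short of linear on memory-heavy subtests.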
  • Wilco1 - Saturday, September 5, 2020 - link

    "As far as your claim about Graviton 2 goes, all I will say is that even Amazon is not claiming this chip to be a general purpose server CPU SoC."

    See https://aws.amazon.com/ec2/graviton/

    "M6g
    General Purpose
    Best price performance for general purpose workloads with balanced compute, memory, and networking"
    Built for: General-purpose workloads such as application servers, mid-size data stores, microservices, and cluster computing."

    So stop making up FUD about Graviton 2 not being general purpose.
  • demian_thorne - Monday, August 31, 2020 - link

    Thank you, Industry_veteran. Perhaps you should add to your list of questions: what percentage of the total BOM for the entire server are the monetary savings? OK, enough said here.
    Let me make some additional comments on the test links provided:

    1. KeyDB – absolutely no reference to what size of DB was tested, so how can I evaluate the results? Then I clicked another link just to see if I could find more info, and I came across other test results where it seems the memory tested was in the double digits of GB. A very meager number.

    2. PHP – WordPress page serving. Sure, not a bad test case, and one where Graviton 2 is most applicable, but I'm sorry, what exactly is the $$$ size of the market for serving simple WordPress pages?

    3. Treasure Data CDP – benchmark spec: “In each case our experiment performs five warm up queries and then measures the average runtime of five queries to determine the performance.”
    Do you have anything with 500 queries? Or maybe 100?

    All in all, we are back where we started. Wilco1, you are providing SPECint-type tests. I am not arguing against that. What I am looking for is, for example:

    Sample 1: Memory Subsystem bandwidth – Anandtech article

    Sizing Up Servers: Intel's Skylake-SP Xeon versus AMD's EPYC 7000 - The Server CPU Battle of the Decade?
    Servers with 256–384 GB of memory

    (not sure if links open but doing my best to provide them)

    https://www.anandtech.com/show/11544/intel-skylake...

    Sample 2: SAP S&D 2-Tier – Anandtech article

    The Intel Xeon E5 v4 Review: Testing Broadwell-EP With Demanding Server Workloads

    https://www.anandtech.com/show/10158/the-intel-xeo...

    Do you have meaningful, practical stuff like that? I am all ears. The tests you provided do not cover what I think the market is; there is no point continuing this if you don't have what I am asking for.
    I am going to let the hyperscalers give you their answer….
  • Wilco1 - Tuesday, September 1, 2020 - link

    AnandTech did measure memory latency and bandwidth on Graviton 2: https://www.anandtech.com/show/15578/cloud-clash-a...

    Latency is on par with EPYC 1 and Xeon Platinum. The memory bandwidth results are incredible: Graviton shows an almost flat line across the memory hierarchy and 4 times the per-core store bandwidth into DRAM! Total memory bandwidth also beats the Intel and AMD chips by a huge margin. (If you want to see how such numbers are produced, a minimal sketch follows this comment.)

    So all the benchmarks are extremely competitive. If you want something more specific that isn't covered today, you can always run benchmarks yourself; Graviton 2 is easy to access and cheap. Also remember that Graviton 2 is the lowest performance level you can expect from Arm servers: Ampere Altra is 30% faster per core and offers 80-128 cores. Hyperscalers will love those, that's for sure!
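
    For anyone who wants to reproduce this kind of measurement on a cloud instance, here is a minimal STREAM-triad-style sketch. It is only a starting point under simple assumptions; published tests are far more careful about timing, thread placement, and vectorization.

        /* Minimal memory-bandwidth probe (triad kernel). Build: gcc -O2 triad.c */
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define N (1u << 26)   /* 64M doubles per array = 512 MB, beyond any cache */

        int main(void) {
            double *a = malloc(N * sizeof *a);
            double *b = malloc(N * sizeof *b);
            double *c = malloc(N * sizeof *c);
            if (!a || !b || !c) return 1;
            for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (size_t i = 0; i < N; i++)
                a[i] = b[i] + 3.0 * c[i];              /* two loads, one store */
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double secs  = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            double bytes = 3.0 * N * sizeof(double);   /* traffic moved by the triad */
            printf("triad: %.1f GB/s (a[1]=%.1f)\n",
                   bytes / secs / 1e9, a[1]);          /* reading a[] keeps loop live */
            free(a); free(b); free(c);
            return 0;
        }

    Run it pinned to one core (e.g. with taskset) for a per-core number, or across many cores for the aggregate.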
  • Quantumz0d - Friday, August 28, 2020 - link

    As always, ARM is custom and will stay custom: on mobiles (without Qualcomm's CAF sources and their proprietary blobs it won't work, and you can't do a thing to fix it; losing OMAP was the biggest shame) and in ultraportables (MS SQ1). People were beating their chests that Intel and x86 were dead and AMD was dead too. The nonsense peaked when Apple announced their own silicon, for less than 10% of global share and a similarly small slice of their own revenue; paying Intel was meaningless when it generated so little and they had poured billions into their A series and TSMC. Today they announced their own search engine. Why? Apple wants to control everything vertically to lock in their consumers further. If their base evaporates, they cannot survive, unlike x86 and Windows, which have mass reach, including DoD, military, and government.

    Qualcomm Centriq was top-tier according to their advertising, and look where it went: they abandoned it and chopped off the entire R&D. Graviton is likewise custom, for AWS; it is not going to beat x86, let alone overtake it.

    x86 will always be superior because of the software and the adoption rate; it is too deeply embedded in the everyday world and, for the time being, too hard to decouple from in semiconductor-based computing. And thankfully the world does not revolve around BS custom silicon: consumers have choice and can mix and match, along with DIY, which is a major aspect of the personal computing space.
  • FunBunny2 - Saturday, August 29, 2020 - link

    "Apple wants to control everything vertically to lock their consumers more"

    Henry Ford built a 'plant', really a whole town, that took in all the raw materials needed to build an auto at one end and spat out autos at the other. the problem with that approach is the one faced these days by CxOs: the more automated (read: capital-intensive) the production process, the less flexible, from a finance point of view, it is with respect to output. by eliminating all that labor, they're left with little way to cut costs if output must decrease. ya still gotta pay for the machines.

    whether Apple is able to maintain output, at a profit, is the big question. yes, at full capacity, Apple or Ford get to reap the profit from making all those intermediate widgets that would otherwise accrue to suppliers. but only if output never flags.
  • rahvin - Saturday, August 29, 2020 - link

    Qualcomm abandoned Centriq because an activist investor wanted quick payouts rather than investment in long-term success. Qualcomm derives 80% of its revenue from cellphone chips; if something happens to that market, the company is gone. The abandonment of Centriq was, IMO, a stupid move.
  • Wilco1 - Sunday, August 30, 2020 - link

    Agreed. They had a great server already, a second generation almost finished, and several big customers willing to commit, and then they just cancelled it all. Now that is a huge waste of money and engineering...
  • brucethemoose - Friday, August 28, 2020 - link

    "I do wonder if the move has anything to do with Arm’s recent rise in the datacentre, and their very competitive Neoverse CPU microarchitectures and custom interconnects, essentially allowing anybody to design highly customizable products in-house, creating significant competition in the market."

    Sounds right to me.

    As the Marvell reps would say, why go in-house when Marvell can do more of the heavy lifting for you?
  • TomWomack - Saturday, August 29, 2020 - link

    My guess is: because Marvell charges a lot, and if you're a hyperscaler, having a team the size of the Raspberry Pi one, plus a bit more load on your general counsel from talking to TSMC, is better than sticking Marvell's profit margin in front of a large chunk of your procurement spend.

    Assembling a system from ARM-provided cores (and recall that ARM plus your EDA vendor will give you a library with everything you want; ARM has a lot of non-core IP available, and the only things I am sure you need the EDA vendor for are the memory controllers and PCIe interfaces) is a lot harder than assembling Lego blocks, but it's a lot easier than implementing and validating your own ARM core.
  • brucethemoose - Saturday, August 29, 2020 - link

    Well yes, exactly, and Marvell wants to get their foot in the door before the hyperscalers come to this conclusion.
