24 Comments
Diogene7 - Wednesday, February 16, 2022 - link
I think that as of 2022 it is too early to chase quantum computing: it is a bit like NASA trying to go to Mars before having successfully sent a satellite into space…

I think it would make much more sense to first allocate the resources to the development of spintronics as a first step, and then build upon the learnings from spintronics to reach quantum computing: this would still take 25 years or more…
lemurbutton - Wednesday, February 16, 2022 - link
100% betting that Ampere Computing will be acquired by AMD, Intel, Nvidia, Microsoft, Google, or Amazon within the next 12 months.

Yojimbo - Wednesday, February 16, 2022 - link
Why, though?

defaultluser - Wednesday, February 16, 2022 - link
Simple: because they got beat on future Neoverse N2 sales by the much bigger players, and at the same time, this sort of desperation push into new markets usually ends up with the company in receivership.

Yojimbo - Wednesday, February 16, 2022 - link
But who really wants to buy them and why? Amazon already has its own Arm-core based CPUs. Google could do the same if they wanted. I doubt Intel wants to get into Arm CPUs. NVIDIA has its own Arm CPU development. AMD did, and likely would make its own again if it wants to get into Arm. I just don't see what Ampere really offers anyone since they abandoned their own chip designs and started tweaking Arm cores. Any company with the ability to buy Ampere can do that on their own. Ampere would be bought for 1) IP or 2) market. Nuvia, for example, had valuable IP. I don't see good evidence that Ampere has either IP or a market position at the moment.

CharmNinja - Thursday, February 17, 2022 - link
Another reason for an acquisition could be human capital or time to market.

Yojimbo - Thursday, February 17, 2022 - link
They can hire the engineers away if they want. Time to market is not so critical here because there isn't really much of a market yet. But Amazon and NVIDIA already have chips either out or coming soon. I think Ampere would need to be willing to sell itself for bargain prices if the reason that, for example, Microsoft or Google would buy them is to save development time.

Wilco1 - Thursday, February 17, 2022 - link
Despite using standard Arm designs, their Altra and Altra Max beat everybody else on the market - not only Intel/AMD's fastest servers, but it looks like they should still outperform Graviton 3 despite being on an older process and using much older cores. Ampere is also working on a custom core that should be released this year. So they have plenty of unique IP and know-how that others can't match.

Yojimbo - Thursday, February 17, 2022 - link
Outperform it on what? Someone's benchmarks? I'm certain AWS greatly prefers Graviton3 to Altra. AWS designed it to be what they want.

Ampere was using custom cores but they abandoned them for standard Arm cores. Yes, they are likely to have to make custom cores in the future because 1) they need to differentiate themselves somehow and 2) Arm is unlikely to be spending the money they did under Softbank, or would have under NVIDIA, on such designs. That doesn't mean Ampere's custom cores are preferable. We have indications that they have not been preferable in the past and no indication there's any value to them in the future. So there's no reason to assume they have much in the way of IP or unique know-how to offer.
Wilco1 - Thursday, February 17, 2022 - link
It doesn't seem like you follow the server reviews on AnandTech - the Altra Max review shows quite the gap between Graviton 2 and Altra (Max): https://images.anandtech.com/graphs/graph16979/122...

That's a lot of differentiation despite using the exact same core! A custom core allows even more differentiation, of course. We know their new core will obviously be better than Altra and must beat N2 in some PPA metrics (e.g. perf/area, so they can add more cores on a chip).
Yojimbo - Friday, February 18, 2022 - link
No offense to Anandtech, but buyers of servers or cloud compute don't care much about Anandtech's server reviews. Anandtech, for example, has no good idea what AWS is looking for or even much idea of what AWS customers are looking for. AWS does.

Also, why are we comparing the Altra Max to the Graviton2? The Graviton2 was open in AWS for user access in early 2020. It looks like the Altra Max started showing up in the wild in Q3 2021. Amazon started allowing preview access to its Graviton3 instances in Q4 2021, just a couple months after Altra Max started showing up. Can I yet find Altra Max public cloud instances now in Q1 2022? Oracle and Equinix have 80-core Altras from what I see and nothing with higher core counts. I don't see anything to suggest that Cloudflare is yet using Altra Max in its services.
Wilco1 - Friday, February 18, 2022 - link
Who exactly reads AnandTech is irrelevant. Look back, your claim was that Ampere has absolutely nothing to offer and that you can't differentiate using standard Arm cores anyway. My link proves you are wrong on both counts - Altra (Max) are able to beat all of the x86 competition, and they do far better than Graviton 2. AWS aims for something different of course, which is why I made the comparison - major differentiation despite using the same standard core.

We're still in the middle of a chip crunch, so things move a bit slower than usual. Either way Altra Max (and likely Altra) will outperform the upcoming Graviton 3 despite being older and not getting the benefit of 5nm and DDR5. They'll likely announce their next generation soon. So claiming Ampere has absolutely zero to offer both now and in the future is just stupid.
Yojimbo - Friday, February 18, 2022 - link
No, Wilco, you missed my point. It's not who reads it, it's the relevance of the tests. Servers are about platforms and roadmaps, not about core performance. And the only core performance that does matter is the performance on the applications customers are running, not on benchmarks.

Altra Max will not outperform Graviton3.
Just count the days until Ampere is bought out, who buys it, and how much is paid.
Wilco1 - Saturday, February 19, 2022 - link
First you ask for benchmarks, then you don't want to look at benchmarks because you don't like who is winning them. Years ago when Intel was winning the SPEC benchmarks everybody was claiming how it was the best benchmark for servers. Today Intel is no longer winning on SPEC and suddenly everybody hates it. Go figure.

mode_13h - Monday, February 21, 2022 - link
> Altra Max beats everybody else on the market - not only Intel/AMD's fastest servers,
> but it looks like it should still outperform Graviton 3 despite being on an older process
> and using much older cores.
Graviton 3 is a 100 W CPU. Altra Max isn't. If Amazon wanted to push for more performance, I'm sure they could've spec'd a higher power envelope. And yet, Graviton 3 still features DDR5 and PCIe 5.
Wilco1 - Friday, February 25, 2022 - link
AWS aims for low power, low cost and high density rather than best performance like Ampere does. Ampere's next-gen will also use 5nm and DDR5, so performance and efficiency should improve.

mode_13h - Monday, February 21, 2022 - link
> don't see what Ampere really offers anyone since they abandoned their own chip designs

They didn't, really. It turns out that Altra was something of a stop-gap measure, to buy time until their Siryn CPUs launch later this year. It was mentioned in this article, but had a separate announcement:
https://www.anandtech.com/show/16684/ampere-roadma...
Also, it turns out that Ampere managed to scale N1 up further than ARM had intended, through a few tricks they pulled. N1 was only designed to scale to 64 nodes. So, even in Altra, Ampere had a certain value-add.
Tilmitt - Wednesday, February 16, 2022 - link
Is this pseudo-science?

GeoffreyA - Thursday, February 17, 2022 - link
What they're implementing isn't certain, but the physics is quite real, as far as real goes in this universe.

mode_13h - Monday, February 21, 2022 - link
I think they like machine learning, as an application, because it's more tolerant of errors and approximate solutions.

GeoffreyA - Thursday, February 17, 2022 - link
According to physicist Paul Davies, because the complexity of an entangled state grows exponentially with the number of qubits, a very large quantum computer, such as a 400-qubit one if I understood him rightly, would come into conflict with a possible information bound in the universe. He reasons that if the universe were finite in resources, a maximum of 10^122 classical bits of information could be processed or contained in any causal region of the universe, and that scaling qubits beyond 400 or 500 would require more information than could fit in that bound. (I think it's related to the Bekenstein bound, which limits how much information can be stored in a region of space.) In short, if his arguments are right, the universe might have a physical limit that precludes practical quantum computing. Already, the struggle to solve many-qubit decoherence seems to smack of this.

https://arxiv.org/ftp/quant-ph/papers/0703/0703041...
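To put rough numbers on that 400-500 qubit figure, here is a minimal back-of-the-envelope sketch in Python (my own illustration, not taken from Davies's paper): a fully general n-qubit state is described by 2^n complex amplitudes, so you can simply compare log10(2^n) against the ~10^122-bit bound.

import math

BOUND_LOG10 = 122  # Davies's ~10^122 classical-bit information bound

# A fully general n-qubit state is described by 2^n complex amplitudes.
for n in (300, 400, 500):
    log10_states = n * math.log10(2)
    verdict = "exceeds" if log10_states > BOUND_LOG10 else "fits within"
    print(f"{n} qubits -> ~10^{log10_states:.0f} amplitudes, {verdict} the 10^{BOUND_LOG10} bound")

# The crossover: the smallest qubit count whose state space outgrows the bound.
print("crossover at", math.ceil(BOUND_LOG10 / math.log10(2)), "qubits")

On this crude counting argument the threshold lands at around 406 qubits, which is consistent with the 400-500 range mentioned above; the paper's actual reasoning about entanglement complexity is more careful than this, of course.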
Also, I remember reading that IBM disputed Google's claim. Apparently, some changes to the classical version could, or did, change it from hundreds of years, or something along those lines, to a few days. I quote the latter from memory, so apologies if I got it wrong.
mode_13h - Monday, February 21, 2022 - link
Thanks for that!

It's funny how most of the discussion of this article seems focused on the Ampere piece, entirely ignoring the QC and machine learning aspects that set it apart from the standard fare.
GeoffreyA - Wednesday, February 23, 2022 - link
Yes, there could've been some fantastic discussion here on the quantum side of computing, which is still so difficult to grasp, much like its parent theory was for decades.

mode_13h - Monday, February 21, 2022 - link
> into machine learning – the idea is that quantum computing can
> assist training or inference to check all possible answers, simultaneously.
Not inference, I think, but training. Inference is much cheaper than training. However, the real allure of applying QC to training is the possibility of finding the globally optimal set of weights, whereas classical training methods can only converge on somewhat locally-optimal configurations. This should enable greater accuracy per node, which in turn allows smaller networks that require less power and memory for inference.
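To make that local-vs-global point concrete, here's a toy Python sketch (purely my own illustration, nothing from the article): plain gradient descent on a small non-convex loss settles into whichever valley it starts near, while an exhaustive search over the same range always finds the true minimum. The hope is that quantum algorithms could do something like the latter for weight spaces far too large to enumerate classically.

import numpy as np

# Toy 1-D "loss landscape" with one local and one global minimum.
def loss(w):
    return (w**2 - 1)**2 + 0.3 * w

def grad(w):
    return 4 * w * (w**2 - 1) + 0.3

def gradient_descent(w, lr=0.01, steps=2000):
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# Classical training: the result depends on the starting point.
for w0 in (+1.5, -1.5):
    w = gradient_descent(w0)
    print(f"start {w0:+.1f} -> w = {w:+.3f}, loss = {loss(w):+.3f}")

# Brute-force search over the whole range always finds the global minimum
# (infeasible for real networks, which is where quantum search is hoped to help).
grid = np.linspace(-2.0, 2.0, 100001)
w_best = grid[np.argmin(loss(grid))]
print(f"global minimum near w = {w_best:+.3f}, loss = {loss(w_best):+.3f}")

Whether quantum hardware can actually deliver that kind of global search at scale is, of course, exactly the open question the article is about.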