If I understand correctly, the A78 has a ~20% performance lead over the A77, while the X1 has ~30%? If that's the case, it seems a rather minor difference, no? Nothing like the 2x lead (in some scenarios) of Apple's cores over the A77. Did I read this wrong?
These comments are tiring. A13 is not the benchmark. A14 will be out before any X1 chip and will trounce this. As an Android user, I'm continually disappointed.
Exactly. And it will outperform most x86 CPUs. Also, while it's the largest ever Arm designed core, it's still the smallest high-end core in the world by a mile.
Well, one of those is measured, and one of those is a projection that also happens to bake in fabrication process improvements. I wouldn't really try to compare anything beyond the performance projection at this point.
Disappointed in what way? Flagship phones have been more than fast enough in the last few years. There is a balance between power consumption and performance - and I think the improved efficiency of Cortex-A78 will be more useful in typical use-cases. It won't win benchmarks, but if you believe iPhone performance is measurably better in real-life use (rather than benchmarks), why not just buy one?
Put it in context. You pay $1500 for a Galaxy S20 Ultra that's slower than a $400 iPhone. If you do a lot of web browsing on JavaScript-heavy pages, nothing beats single-threaded perf. You can't improve it by just throwing more slow cores at it. Discourse did a good writeup that's still valid today. https://meta.discourse.org/t/the-state-of-javascri...
syxbit is right. JavaScript and browsers are not just software - they stress the CPU in different ways than the usual SPEC/Geekbench, and the X1 will not be just a benchmark core.

If you look at the DVFS curve of the A77 vs the A78, the X1 will probably be even lower power than the A78 in the region of performance where they overlap, for the simple reason that to achieve the same performance as the A77/A78, the X1 will need much lower frequency and voltage. This will greatly offset the intrinsic growth in iso-frequency power that the X1 will surely have.

My point is: going wider helps you be more efficient iso-perf vs narrower cores. The power-efficiency hit only comes when you go over the peak perf offered by the narrower core. So you could argue that something like the X1 takes the A78 DVFS curve and pushes it down (lower power), and on top of that extends it to new performance points not even reachable on the A78. Obviously you pay in area for this :) But Apple has clearly shown over the years that this is the winning formula.
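The wide-and-slow argument above can be sketched numerically. All figures below (the IPC uplift, the effective-capacitance penalty, and the linear voltage-frequency relation) are illustrative assumptions, not measured A78/X1 data:

```python
# Toy DVFS model: dynamic power scales roughly as C_eff * V^2 * f.
# All numbers are made up for illustration of the iso-performance argument.

def dynamic_power(c_eff, voltage, freq_ghz):
    """Toy dynamic-power model: P ~ C_eff * V^2 * f."""
    return c_eff * voltage**2 * freq_ghz

# Assume the wider core has ~30% higher IPC but ~25% higher effective
# capacitance (bigger structures switching every cycle).
narrow = {"ipc": 1.00, "c_eff": 1.00}
wide   = {"ipc": 1.30, "c_eff": 1.25}

# Target: match the narrow core running flat out at 3.0 GHz.
target_perf = narrow["ipc"] * 3.0
f_wide = target_perf / wide["ipc"]   # wider core needs ~2.31 GHz

# Assume voltage scales roughly linearly with frequency in this region,
# e.g. 0.75 V at 3.0 GHz (an assumption; real V-f curves are nonlinear).
def voltage_at(freq_ghz):
    return 0.75 * freq_ghz / 3.0

p_narrow = dynamic_power(narrow["c_eff"], voltage_at(3.0), 3.0)
p_wide   = dynamic_power(wide["c_eff"], voltage_at(f_wide), f_wide)

print(f"wide-core frequency for iso-perf: {f_wide:.2f} GHz")
print(f"power ratio (wide/narrow) at iso-perf: {p_wide / p_narrow:.2f}")
```

With these assumed numbers the wider core lands around 0.57x the narrow core's dynamic power at equal performance, despite switching more capacitance - the V²·f term dominates once frequency and voltage drop together.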
You are completely wrong. It's much more about caching than wider cores. The X1 is not 50% faster than the A78, but it is 50% bigger. The best approach would be a wider ISA with the same execution units multiplied in number - RISC-V has already laid foundations for a 256-bit ISA (still a draft) and is finalising a 128-bit one. But there's a catch in tools and compiler support.
X1 does have wider NEON SIMD, twice as wide in fact - so for content that favors SIMD (like dav1d AV1 decoding) you will get a serious jump in performance.
Unfortunately the benchmarks do not really give us much of an idea of real world improvement for something like this, so we'll have to wait for products to get a better idea.
ARM specifically said the A78 was designed to INCREASE EFFICIENCY vs the A77, and a lot of the design decisions concur with that. The X1 was designed to MAXIMIZE PERFORMANCE, sacrificing efficiency and area in the process. When you factor in the leakage caused by the larger die, the X1 would almost certainly be less efficient than the A78 when you drop it below 2GHz.
They are just software. Fun fact: your Android browser is built with -Oz. Yes, the compiler is told to optimize for minimum binary size rather than speed. That's an insanely stupid software decision which makes Android phones appear to be behind iOS when in fact they are not.
It's not an "insanely stupid software decision"... Fun fact: Apple ALSO builds pretty much all their software at either -Oz or -Os! Both Apple and Google (and probably MS) are well aware that the "overall system experience" matters more than picking up a few percentage points in particular benchmarks, and that large app footprints hurt that overall system experience. Apple's recommendation for MOST developer code (followed internally as well) has been to optimize for size for, yikes, at least 20 years, and it hasn't changed in all that time.
Look at the (ongoing) work in LLVM to reduce code size ( "outliner" is one of the relevant keywords); the people involved in that span a range of companies. I've seen a lot of work by Apple people, a lot by Google people, some even by Facebook people.
There is a world of difference between optimizing for performance without regard for code size and optimizing for the smallest possible code size without any regard for performance. -Ofast is the former, -Oz is the latter. Most software, including Linux distros, uses -O2 as the best tradeoff between these extremes. Non-essential applications use -Os (or even -Oz if performance is irrelevant). However a browser is extremely performance sensitive. Saving a few bytes with -Oz loses 10-20% performance, and that means you lose the equivalent of a full CPU generation. I call that insanely stupid; there are no other words to describe it.
It's not about software per se... Java is very memory hungry, and most ARM OEMs cut corners when it comes to CPU caches, bolting on wider supplemental SoC-level implementations which only slow things down even further.
Exactly my point. Aside from the fact that today you cannot get a high-end Android phone, unless you have huge screens, we're at the point where anything below the $700 is substantially slower than anything Apple has come up with in the last 2-3 years. As an Android user, I find it frustrating. I have a Samsung S8 which, granted, is not a high-end phone by today's standard. I just got an S10 for my wife, which feels faster, but still, looking at benchmarks, there's no comparison. For work I have an iPhone XR, which I hardly use. It is heavy, way too large and bulky, but it unlocks reliably in a flash, and it is silky smooth to use.
The Xiaomi Redmi K30 Pro Zoom at $550 has the same LPDDR5, UFS 3.1 and SD865 as the S20 Ultra, plus a nice camera (awaiting a GCam port), no notch, and an AMOLED screen. Maybe search better - Samsung and Huawei devices are fully overpriced.
Talking about unlocking has less to do with the ARM core and more to do with the individual manufacturer SW implementations on top of Android, not to mention Android itself.
It will be interesting to see if Fuchsia changes up that equation some - though I've never found my Mate 10 to be particularly laggy at all.
Depends on the web content - not all of it is JS-bottlenecked. And if you are browsing JS-heavy crap pages on a phone, then it's on you for being a moron begging to suck your battery dry.
Put *that* in context. You pay $1100 more for a device with vastly superior camera systems, a far better display and a larger quantity of faster storage. Most people don't buy a phone for its single-thread CPU performance.
The iPhone SE certainly is great value for money; flagship phones aren't. Your exaggerated comparison is the best you could manufacture - the Redmi K30 Pro has the same SoC as the Galaxy and costs about $500. Not quite so silly now, is it?
Added to that, the vast majority of web-browsing isn't dependent on single-thread performance - especially the sort you'd be doing on a phone. This whole post is off.
It's not actually slower. According to Anandtech the S20 offers: "by far the most responsive and smooth experiences you can get on a mobile phone today".
>It's not actually slower. It is. If Apple allowed animation speed adjustment, it would show. Turning off animations altogether would show it even more.
Yes yes yes, eternal evil Apple not allowing people to do what they want. We all know the story. Meanwhile, in the real world: https://support.apple.com/en-gb/HT202655
I think syxbit's disappointment is rooted in that a current $400 Apple device available right now will probably be faster than this chip, whenever it is available, and will likely only be in much more expensive phones.
That is incredibly disappointing. Especially when you consider Android has a native performance penalty in UI performance and overall optimization due to its broad hardware compatibility requirements. If anything, Android should be getting the faster chips since Apple has the luxury of optimizing their OS around their SoC.
All around, the K30 Pro Zoom runs circles around it as a modern device. My 5.8" S9 can feel quite small for media consumption, browsing and gaming. Can't imagine something well below 5" as a "smartphone" in 2020.
"...Android has a native performance penalty in UI performance..." Somebody has clearly never used a OnePlus device..!
Apple having a theoretically faster CPU makes no difference if: 1) Apple won't sell that CPU to anyone else, 2) Apple won't use anyone else's CPUs, 3) you care about the actual experience you get from the device, not benchmarks.
I currently use a OnePlus; the UI animation does look better on iPhones. OnePlus just uses sped-up animations - slowing them down doesn't make them look better either.
For years now, iPhones have been CONSISTENTLY inferior to the Samsung Galaxy S phones in the best and most objective real-life speed tests. Go watch the PhoneBuff channel and educate yourself.
But yeah, those "real world tests", comparing the animation speeds of completely different applications (both being called "YouTube" doesn't mean they have any code in common), are utterly useless for comparing CPU performance. A 1995-era desktop PC could seem faster than a modern computer in the same kind of comparison...
Why are they irrelevant when they represent actual performance in-use doing things a user actually does?
Like seriously, either the argument is that real-world testing matters or it's that e-peen measurement wins, but you can't claim that your e-peen score represents real-world use when the real-world tests say otherwise.
Honestly, who cares about that? This extra power is useless. Playing games on a mobile device is uncomfortable and bad for eye health. They should shift priority entirely to maximum efficiency.
The critical thing that needs to evolve in smartphones is the battery, and nobody talks about it. Where are the graphene batteries Samsung promised? I miss having a cell phone that lasts a week without charging. :(
Extra power is never useless. Sure, if you just browse Facebook. But iPads and Android tablets are trying to replace laptops. The A13 can probably replace a laptop chip; no Qualcomm chip can do that as well. I suspect when Apple replaces the Intel chips in their laptops with their own Arm chips, they'll be faster with better battery life. But when Microsoft tries to do it with Qualcomm, the results are much worse.
Nah, there is no ARM CPU that can replace a regular x86 CPU for gaming or work, and one won't exist any time soon. You don't port millions of programs and redo years of optimization overnight.
Cross-compiling doesn't mean your software will run at same performance on everything, for that to happen you still need to implement architecture-specific optimizations and even so there's still no guarantees.
A fast ARM chip could brute force x86-32 code translation so it was fast enough. The SQ1 in the Surface Pro X can run x86-32 code without it being snail slow like on older SD850 devices. An X1-based chip could probably do x86-64 translation (coming next year according to Microsoft) at 8th gen i5 levels, and it would fly when running ARM native code.
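As a back-of-envelope sketch of that brute-force claim - the 2.5x translation overhead and every score below are assumptions for illustration, not measured numbers:

```python
# Hypothetical arithmetic: how fast would an X1 be under x86 translation?
# All values here are assumptions, not benchmarks.

native_score_x1 = 47.0       # projected X1 native perf (arbitrary units)
translation_overhead = 2.5   # assumed slowdown for x86 -> ARM translation

emulated_x1 = native_score_x1 / translation_overhead
print(f"X1 under emulation: {emulated_x1:.1f}")

# If a low-power 8th-gen i5 lands around ~19 in the same arbitrary units
# (again an assumption), brute-force translation could be in its ballpark.
assumed_i5_score = 19.0
print(f"relative to assumed i5: {emulated_x1 / assumed_i5_score:.2f}x")
```

The point of the sketch is only that a large enough native lead can absorb a multiplicative translation penalty; the real overhead depends heavily on the workload and the translator.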
By this logic it seems that anyone could port the x86 recompiler from PCSX2 to ARM with a few clicks. Optimizing software for ARM would take a lot of time and money, not to mention that some of this software has no source code available.
It's not like there is a direct comparison, often with high end apps and games there is a serious lag between it being available on iOS and it becoming available on Android, sometimes it never comes at all.
Though the openness of Android means we get some things easily that iOS device owners have to jump through hoops for, namely console emulators.
What's the use having all that power in the Apple Axx SoC's when you are restricted in your freedom to do with it as you wish? All things considered I'll take a performance hit for that freedom any day of the week.
Why? ARM cores have much better efficiency under load. Try gaming on an iPhone and see the battery being completely vaporized in like 2Hrs compared to 4+ on an Android device.
Android phones tend to have larger screens, and yeah, this very website has the normalized tests. iPhones tend to have higher maximum power draw under load (although A13 improved this dramatically from the mess that was A12).
But really, it doesn't matter. You get what you're given with the iPhone - you can't buy one that games for more than 2 hours, so if you want to game for more than 2 hours, no amount of normalized benchmark results will help you out.
I made the switch years ago; moving from a Moto X Pure to an iPhone 6S with Apple A9. More than 2x performance and 3x battery life. I haven't looked back. My current iPhone 11 Pro is snappier than my 7th-gen Core m3 Chromebook in some benchmarks while driving a higher-res screen, and I usually end the day with at least 30% battery still left.
The A13 is a very fast chip. Processor speed doesn't need to be increased for it to stay on the leading edge. On the contrary, the straightforward thing - and the thing that would differentiate the A14 - would be to work almost entirely on lowering power consumption (which has recently got well out of control) and improving the energy efficiency of the SoC, particularly of the performance cores. That would also make sense for Apple's forthcoming desktop CPUs. A lot of power-hungry performance cores that generate a lot of heat and then fail to sustain near-peak performance won't make a compelling case for desktop CPUs based on A14-like performance cores.
Comparisons with the A13 do make a certain amount of sense insofar as the main challenge is to lower power consumption rather than push core performance again, at this point. Peak integer performance of the Lightning cores is rather similar to the best Zen 2 cores out there.
Note: Apple has an impeccable on chip power management system but the more overly power hungry that the cores of a chip are the harder it is to hide power and thermal issues.
OK, thank you. Yes, it is a bit better then. Hopefully, the X1 is only the first iteration of the new strategy, and a bit conservative. Hopefully, it'll close the gap a bit more.
Having gotten to page 4 of the article, the explanation is that ARM's slides as used on the first page suck. The 20% from A77 to A78 is +7% architecture and +13% from 5nm instead of 7nm. The 30% from A77 to X1 is entirely architecture; that in turn implies that upcoming X1 chips should be about 40-45% faster than current A77 ones.
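Those percentages compound multiplicatively, so the arithmetic checks out:

```python
# The stated gains compound multiplicatively, not additively.
arch_gain_a78    = 1.07   # A77 -> A78, architecture only
process_gain_5nm = 1.13   # 7nm -> 5nm contribution
arch_gain_x1     = 1.30   # A77 -> X1, architecture only (iso-process)

a78_total = arch_gain_a78 * process_gain_5nm   # ~1.21, i.e. the quoted 20%
x1_total  = arch_gain_x1 * process_gain_5nm    # ~1.47 over current A77 chips

print(f"A78 vs A77, all-in: +{(a78_total - 1) * 100:.0f}%")
print(f"X1 vs A77, all-in:  +{(x1_total - 1) * 100:.0f}%")
```

The ~47% result sits slightly above the 40-45% quoted, since it assumes the full 13% process contribution carries over to shipping clocks.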
AIUI, it's still going to fall short of what Apple is doing (and not just because the A55 little cores are getting really dated); but it is a badly needed narrowing of the gap.
in terms of engineering prowess certainly; but not in terms of letting Samsung/etc finally design smartphones and tablets that are as fast as their rivals from Apple. Assuming the product plays out in retail, in another 2 or 3 years when I look to replace my S10 I'll probably get something with an X core in it; but I really hope that they'll widen the performance uplift vs their more general purpose cores by then.
The existing A77's are already very impressive in terms of PPA, I would consider them as impressive as Apple's big cores taken as a whole. (The small cores are another story since the major uplift from the A13...) A lot of Android vendors value area in particular since they integrate modems on die whereas Apple does not; this drives a lot of value and cost savings for customers.
If Samsung really wants to create a phone/tablet as fast as an Apple one, it should first concentrate more on SW optimization. Apple puts a lot of effort into that. It's not only a question of who makes the bigger core. See the comparison of Samsung's crappy phones with other Android ones using a much more optimized, less bloated version of the OS. They are good for benchmarking with all those cores and MHz (and tricks boosting turbo speed for benchmark apps), but in real life Samsung phones are slower than they could be due to weak optimization.
Bingo! Adding a big core that does well in benchmarks is not a good solution. Improving browser performance with software optimization can be far more effective.
@Wilco1 I am afraid you only see the SW part of the equation here. Again, the X1 is not only good in benchmarks: being wide helps in that you can achieve the same performance as last gen by running at vastly lower frequency and voltage. Thus all use cases that do not require max peak perf enjoy a huge power saving.
You can't brute-force your way to performance or efficiency. If you can improve performance via software optimization, you take it any time over a faster core that gives the same gain but needs more power to run the unoptimized software.
Obviously you would ideally want both SW optimization and faster CPUs. But again, power will not always be higher, and higher power != higher energy usage.
Absolutely. But the biggest issue in the Android world is software optimization and tuning, not CPU performance. Improving that would easily add up to a new CPU generation. The choice to switch to LLVM was stupid at the time, but even more so today since GCC has since moved further ahead of LLVM...
Note all the evidence points to using smaller cores to improve power efficiency. You can see this in the perf/W estimates for SPEC - the A78 is almost twice as efficient as the A13 while achieving 74% of its performance.
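To make the perf/W claim concrete: the 74% performance figure is from the comment above, but the absolute power draws below are guessed placeholders, chosen only so the ratios illustrate the metric:

```python
# Sketch of a perf/W comparison. The 0.74 relative-performance figure is
# from the comment; the absolute perf units and watt figures are guesses.
a13_perf, a13_power = 100.0, 5.0   # assumed: arbitrary perf units, watts
a78_perf = 0.74 * a13_perf         # 74% of A13 performance (per comment)
a78_power = 1.9                    # assumed active power in watts

eff_a13 = a13_perf / a13_power     # perf per watt
eff_a78 = a78_perf / a78_power

print(f"A78 efficiency advantage: {eff_a78 / eff_a13:.2f}x")
```

With these placeholder watt figures the A78 comes out close to 2x the A13's perf/W, matching the "almost twice as efficient" wording; different assumed power draws would move the ratio accordingly.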
> The choice to switch to LLVM was stupid at the time, but even more so today since GCC has since moved further ahead of LLVM...
GCC's problem is its license. Neither Apple nor Google would be able to integrate it into an IDE like Xcode/Android Studio. In the grand scheme of things, going LLVM is a much better choice, even if it's slower than GCC.
Behind the A13? You missed the estimation chart showing the X1 reaching A13 performance (a bit lower in integer, a bit higher in floating point) at much lower power consumption.
Nope, you are wrong. First of all, given a constant power budget for something that goes into a phone, the A78 will be a rather significant improvement over the A77, with the same performance at half the power. The A77 already had a lead over Apple's big cores on the performance/W metric, and that means more than a brute-force approach. Yes, Apple's big cores are superior, but on something with the power budget of a laptop. On the other hand, the X1 is a direct shot at those Apple cores, and it should be up to 2x faster than the A78 in tasks which are optimised to utilise the FP/SIMD units - basically SMP tasks. This is more relevant to server workloads than to the mobile space. Still, I would like to see more advanced SIMD blocks and their inclusion on smaller cores with SMT, as SIMD units are hard to feed optimally and a front-end expansion is therefore a must - though it can be done in a more elegant manner, like for instance MIPS did with VMT. ARM desperately needs a power-efficient basic OoO core - a successor to the A73, if you like, with DynamIQ integration - as an A55 replacement. There is an A65AE, but we haven't seen any implementation of it anywhere so far.
It is not even an apples-to-apples comparison, since the A78 has +20% *sustained* performance over the A77, while the X1 has +30% *peak* performance. Therefore the sustained performance lead of the X1 over the A77 might be in the +25% ballpark. Is a mere extra 5-10% performance over the A78 really worth a 30% larger die area and a considerably higher TDP? Unless Arm can increase the X1's performance lead over the A78 by at least another 20%, I don't see it being an attractive (or even sane) licensing and purchasing option.
The X1 exhibits 22% performance advantage over the A78 when process and frequency are controlled factors. So, yes, X1 performance is 1.22xA78. The performance improvement of the A78 over the A77 however includes a process node and frequency change, 20% all up. So, the performance of the X1 is: (A77 * 1.2) * 1.22 or 1.46xA77.
Please note Andrei seems to have made assumptions along these lines in his calculations, with A77 SPECspeed performance at 2.6GHz being somewhere in the order of 32 (which seems reasonable).
Apple loves to exaggerate and use perfect scenarios. 2x faster IF you compare a $250 Snapdragon 4xx-series model against a $700 iPhone. I dug through their comparisons, and each of them takes the worst case for others and the best case for themselves. Most iPhones don't even have a 20% lead in raw performance; what they gain is an efficient OS, while Android is bulky and slow. Another example: the M1 was claimed to be 5x faster than the "most popular similarly sized laptop", which per Amazon was a $269 laptop with a Ryzen 3200U - a dated low-end part scoring ~4000 points in benchmarks, when the market already had 4700Us in the same form factor scoring 20000 points. When comparing a $300 laptop against a $900 one, the M1 really might look godlike.
It would've been nice. Remember back in 2016 ARM had a clean three-tier offering:
Cortex A35 - Ultra-low power
Cortex A53 - Low power
Cortex A73 - Medium power
I was hoping for a similar thing to happen. Maybe it will come with a new architecture branch, i.e. ARMv9, maybe in 2022. Perhaps they can look into cores for laptops, desktops, servers. For instance:
ARMv9 Cortex A41 - Ultra-low power, same as A55 perf.
ARMv9 Cortex A61 - Low power, same as A73 perf.
ARMv9 Cortex A81 - Medium power, beyond Apple A14 perf.
Although, a big part will be optimisations and implementing a new InfinityFabric, big.LITTLE, DynamIQ style platform that scales from wearables to desktop.
A78 looks like another A73 - modest perf gains, but improved efficiency. X1 is fascinating; I wasn't expecting to see an aggressive design like that until Matterhorn.
Agree that this is an interesting move, both design- and strategy-wise. While the Hera (X1) core will make a great single top performance core (1x Fast+Big, 3xBig, 4xLittle) for upcoming mobile Snapdragons and Exynos SoCs, I am also curious how Hera will boost the efforts of QC and Samsung for Windows-on-Arm CPUs.
The one big fly in ARM's ointment is that they apparently still believe their A55 remains the greatest little core they've designed to date. Isn't that design getting a bit dated, especially compared with Apple's efficiency cores, which Apple does manage to update quite regularly? What's up with that? Any information or rumors on an A55 successor?
That new little core can't come too soon! @Andrei: do you have any stats on how much time a phone in regular daily use (not benchmarking or gaming) actually spends in little core only? Would be interesting for battery use estimates. Thanks!
True, and I even had a device (with a Snapdragon 808) with two of those. Freudian slip on my part - that thing was awful. Ran hot and ate battery for breakfast...
I bought a new phone two months ago but it arrived faulty. Had to ask for refund and as it was clearance model, now in the market there is no smartphone model in my budget that convinces me. I might end up being one of the last users of that processor.
What killed that for me was just how fast the 808 phone would suck a 3750 mAh battery dry. Forced me to compete with iPhone users for wall outlets to plug my charger in. However, the 808 made for a good hand warmer in the winter. That was probably what it was best at
The main problem with Windows-on-Arm is not the performance of the SoCs - well, at least not when running native ARM apps. It's the poor emulation. And the emulation software itself isn't the issue; it's that the SoCs just aren't powerful enough to absorb the overhead. It's bad enough that I don't think even the X1 will be powerful enough for x86 emulation to be a good experience.
This looks pretty exciting! What if vendors go for 2+4 configs (like Apple), with 2x X1 + 4x A78? Apple has shown that this configuration is really good! That would be a killer combination, as the littles are practically useless in real scenarios, and a down-clocked A78 could very well cover the low part of the cluster DVFS curve for idle. The A55 is super old and doesn't offer any useful performance; I suspect they are only there for the low-power scenarios. But again, Apple has shown that its out-of-order little cores can be super efficient when implemented at low frequency (I think they run at something like 1.7GHz peak).

I didn't read much information on X1 power, but yes, it will for sure be less power efficient than the A78 when both are running flat out at 3GHz. However, I highly suspect that (like A78 vs A77) over the whole DVFS curve, the X1 can be lower power than the A78 in the performance regions where they overlap. It is simply a matter of being wider and slower; this makes you more efficient. That is the Apple way: wide and slow. Iso-performance, the X1 will need much lower frequencies and voltages to reach a middle-of-the-road performance point (something like ~35 SPECint, given that the projection is 47 SPECint flat out). This could easily offset the intrinsic iso-frequency power deficit that the X1 brings.
I might be mistaken, but aren't those wide SVEs a co-development with Fujitsu? ARM might simply not have a blanket license to use that jointly developed tech. I am rooting for wider availability of those for my own reasons (video encoding runs much faster if it can use wide extensions). And there is no such thing as too much oomph for working with videos.
It's just taking a while for SVE to get in. Future ARM Ltd cores are likely to have it; future server chips from Hisilicon are roadmapped to as well (although at this point, all bets are off due to the ongoing difficulties between the American state and Huawei.)
So is Apple finally defeated now? The A11 was 25%, the A12 15%, and the A13 just 20% faster than its predecessor. So the A10 is still quite competitive today.
This is higher gain than Apple for a third year in a row. The question is, how much of that is won by going to the 5nm process ? I heard it is quite advanced compared to 7nm.
30% faster than the A77 will bring X1 closer to, but probably still under the A13. And A14 will be out by then. Wouldn't call Apple defeated yet, it's easier to make larger gains when you were at half their per core performance a few years ago...
30% faster in IPC gains alone (iso-process and frequency). The X1 SOCs will be much faster than that 30% over the current A77 SOCs, much closer to 50% than 30%.
First things first: what's the cost here of the new X1 vs the A78? We already have $1000 smartphones built for planned obsolescence, and now this next-level uber-crazy alien tech is going to push them to obscene $2000 non-user-replaceable-battery junk gadgets?
Going wider and 3GHz? I don't know - maybe, maybe not. Zen doesn't clock higher because of its wider arch, from what I saw, plus the 7nm limitations. Even Intel is going wider next, which is going to hurt pure clock speed.
And next, this is hilarious - " they should outright panic at these figures if they actually materialize – and I do expect them to materialize"
Outright panic? Let's look at the facts: 95% Intel, 4.5% AMD server market share as of Q4 2019 - and wonder where ARM sits here to make both Intel and AMD "panic".
ARM is always about custom this, custom that. Every single thing needs to be made custom for an ARM part; the LGA socket system isn't even a standard for these ARM server CPUs, while x86 is all about sockets. And in the consumer space - mobile and DIY - ARM doesn't exist, thanks to the software, which is a bigger driving force behind any product in this spectrum. Everyone knows Qualcomm's heavily marketed (by Cloudflare) Centriq 2400 10nm server CPU got deleted from existence, and Qualcomm stopped pursuing such goals even after putting the full-custom Kryo SD820 engineers on it; even the guy who was spearheading it moved on.
I will wait to see what's going to happen to the ever bashed x86 by the ARM superiority or the Apple A series Alien processors.
Those are the facts as of now yes. But the rest of the post sounds like someone about to get disrupted. The bulk of x86 vendor profits come from laptops, specifically general use thin, light and cheap laptops, those are about to be disrupted. Which is to say that in 5 years time, x86 on laptops will cease to exist in any meaningful way. Desktops/Enthusiast parts are not financially relevant to any of these companies.
Nah. The servers are the pot of gold, where profit margins are really high.
You will soon see that ARM will have its small space, but it does not pose a danger to the x86 duopoly; something very complex is coming and everything is already sealed with patents.
It's an interesting drop for this year's ARM tech day: I imagine A78 plans were nebulous when the A76 dropped, and they may have downscaled what is now called the A78 and upscaled what's now the X1. There will likely be a 9cx part for Windows on ARM that can leverage the higher end cores and larger caches very well, but really looking forward to Matterhorn and their new smaller core design which will be very impactful for mobile performance.
To me, these stories are always kind of exciting and kind of pointless. I'm no longer buying flagships, and even at the low/mid-range, it's been years since I've had, or have heard, a complaint about performance. The apps we use haven't changed in 5 years. Maybe some games, but VR never took off, and InstaGram/Twitter/Maps/FB... are the same. "As long as it has a Core A7x, it is Delightful." Hopefully the X program will help ARM get into consoles, laptops and desktop, and hopefully Android will start supporting that... even today, it's more of an Android problem than an ARM problem. Maybe Windows will fix what Google fumbled.
The DSU is separate and scales up to 8MB either way; the slides present the envisioned config for each core assuming it's the strongest one in the design. So some SoCs with just A78s will use 4MB, and X1 SoCs with A78s an 8MB design.
Andrei, do you think the Cortex-X1 is technically superior to the A13, or does Apple still have a lead in terms of pure microarchitecture? Do you think the A14 will post the same performance gains as the Cortex-X1, or lower? How much of a lead will the A14 have over the X1, in your opinion?
For the life of me I don't understand why they still haven't embraced socketing their silicon. I'd bet they could capture a significant segment of the DIY market overnight. Sixteen cores with 85%+ the performance of Zen2 at less than half the power? Plus 24-core integrated graphics?
What? AMD is leagues ahead of Intel right now in MT performance, and soon, with the unified L3 on Zen 3, Intel's gaming crown will probably end up going to AMD after a decade.
And the Ryzen 1600AF, which is a Zen+ part, is still a good value CPU as it competes very well in gaming too - a cheap drop-in for any B450 or X470 board.
ARM is garbage class silicon it's all custom, and no x86 DIY computer is going to cater to that garbage Silicon except Anandtech Spec measuring contest where A series is faster than a 9900K or Ryzen 3950X.
The 8cX chips (basically an 855, i.e. two generations behind the X1/A78) are faster in real-world web page loading comparisons than x86 ones. They earned some video editing wins as well (though that workload mostly depends on dedicated hardware).
There were several broken benchmarks which affect the average disproportionately. But ignoring that, getting 66% of the fastest EPYC is keeping up especially since Intel servers aren't getting anywhere near that. And doing that while using half the power is a huge win.
I guess you missed the final chart, where these "shit" cores reach Intel's and AMD's current desktop processors... at a small fraction of their power consumption.
Not always. Phoronix recently did a deep dive comparing a dedicated 64-core Amazon Arm instance against a 64-core Epyc. Across 100+ benchmarks Epyc was 50% faster (geometric mean), but Arm won in several cases.
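A quick sketch of why the geometric mean is the right summary statistic here, and why a few broken benchmarks distort an arithmetic average so badly (the ratios below are made up for illustration, not Phoronix's actual numbers):

```python
from statistics import geometric_mean, mean

# Per-benchmark speedups of one chip over the other (made-up, illustrative).
ratios = [1.5, 1.4, 1.6, 1.5, 1.4]

# Add one broken benchmark where the losing chip pathologically tanks.
with_outlier = ratios + [20.0]

# The arithmetic mean balloons; the geometric mean resists the outlier.
print(mean(ratios), mean(with_outlier))                      # ~1.48 vs ~4.57
print(geometric_mean(ratios), geometric_mean(with_outlier))  # ~1.48 vs ~2.28
```

One 20x outlier triples the arithmetic mean but moves the geomean far less, which is why SPEC-style comparisons summarize ratios this way.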
They were consistently faster at real-world web page loading than x86 chips in their 8cx implementation, which is two generations behind what they announced just now. Apple chips are faster still. For 90% of users, these ARM "toys" would be faster, cheaper and longer-lasting, then.
This is easy to verify: take an iPad Pro and a MacBook Air/Pro and load your top 10 visited websites 10 times over, and watch the iPad consistently beat the Mac (both browsers loading desktop mode).
Why are they 'shit' if they're this close in performance already? Assuming they're much cheaper, wouldn't they be better (performance and value) than everything except perhaps i7/i9 desktop chips?
I would like a socket-like solution in my mobile devices. I have at least one tablet and two phones which I'd love to have an SoC-only upgrade path for; it would cut down on electronic garbage due to forced obsolescence.
No mention of 5G - does this mean a separate chip will still be needed until a new design comes with faster little cores? The fact that they are even discussing use of the server-targeted X1 in mobile tells me they know they are behind and are being pushed to fix it. Maybe the new gen next year will be A8x and A6x.
ARM is not in the modem/modem design business per se; those modems are typically the domain of Qualcomm, Huawei and other SoC designers, often with IP by specialty modem designers, and these designs integrate ARM CPU IP. That being said, several aspects of 5G apparently need significant neural processing/deep learning-type capability, which however also wouldn't be directly provided by the CPU cores.
Exynos 1000 with Cortex X1/78/55 should hopefully catch up to Snapdragon 875. Hopefully the Samsung Galaxy S21/S30 makes up for the disastrous S20 series. Fix the camera issues and price it lower.
This is a microarchitecture announcement - it will be several months before there are physical products that can be tested. Rather than benchmarks, the claims are backed by a set of microarchitecture improvements that are being disclosed here.
For years I've wanted ARM to go high-end and bring us 3GHz+ chips that come close to Apple. Glad to hear it is coming! I'd really like them to target the DIY market; enthusiasts here would love to buy motherboards and build desktop computers around the parts. We could even install Windows on ARM. I really think they are overlooking enthusiast demand in the DIY market. Stop just focusing on laptops and get us the ecosystem.
"plus the vendor has a tendency not always use the latest CPU IPs anyhow"
They were not only using the latest, but were also the first to use the A72, A73 and A76, with the A57/A75 being skipped by them altogether - so no, not true. They do have a tendency to skip certain generations, though.
The high-performance, Apple-rivaling ARM core ALREADY exists. Qualcomm did this a year ago with their 8cx and SQ1 chips (Surface Pro X and Samsung Galaxy Book S tablets).
But they are as fast as the Apple iPad Pro chips, in fact they may be even faster, at a similar power envelope. So my point is: what's with the premise of the supposed Apple supremacy?
I'm talking about RAW performance. Don't tell me you base your claims on the Windows benchmarks? Pretty much every number posted for the SQ1 is emulation-based. But the fact that it packs 2 TFLOPS worth of GPU compute and 70GB/s worth of memory bandwidth in a 15W envelope gives a strong hint of what its true power is.
Just look at the table in this article - the Apple chip from last year (A13) is estimated to be similar to the upcoming X1, meaning ARM is 'just' one generation behind, as the X1 will compete with the A14. That is a HUGE step forward compared to previous Cortex gens, which were much further behind. Why on earth would you think Qualcomm's earlier designs, which were much further behind, are magically ahead of Apple at comparable power?
What are you rambling on about? The SQ1 is about 25% faster than the A12X/Z for GPU, but it will be trounced on the CPU side since the SQ1 runs older A76/A55 ARM cores. Even a regular A12 would easily keep up, and the A12X/Z doubles the number of big cores.
And this is from a processor that’s almost 2 years old.
What kind of TFLOPS? 32bit? 16bit? Ever hear the story of Vega the Wide? It had all of the TFLOPS, but just couldn't defeat the lower specced competition.
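To make the precision question concrete: peak TFLOPS is just ALUs x ops-per-clock x clock, so the same marketing number can describe very different hardware depending on whether FP32 or (often double-rate) FP16 is being quoted. A sketch with entirely hypothetical shader counts and clocks, not any real GPU's specs:

```python
def peak_tflops(alus: int, clock_ghz: float, ops_per_clock: int = 2) -> float:
    # ALUs x ops per clock (2 for a fused multiply-add) x clock (GHz), in TFLOPS
    return alus * ops_per_clock * clock_ghz / 1000

alus, clock_ghz = 1024, 1.0                           # hypothetical mobile GPU
fp32 = peak_tflops(alus, clock_ghz)                   # ~2.0 TFLOPS at FP32
fp16 = peak_tflops(alus, clock_ghz, ops_per_clock=4)  # double-rate FP16: ~4.1

# "2 TFLOPS" could mean either this FP32 figure, or the FP16 figure of a
# GPU with half the hardware - the headline number alone doesn't say.
```

Which is exactly the Vega lesson: peak FLOPS at one precision says little about delivered performance.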
I’d like to know this as well. I’m giving the SQ1 the benefit of the doubt on this aspect, but as I stated above the A12X/Z is substantially more powerful on the CPU side. And it’s nearing 2 years old. Imagine what the A14X (if it releases this year) will do?
No, I'm basing it on the Geekbench ARM-for-Windows benchmark, not the emulated one. Qualcomm's TFLOPS numbers don't show up in benchmarks either; the 855 had around 0.9 TFLOPS according to Qualcomm, yet the iPad Pro scores 3x and more of the 855 in GFXBench. Even using Metal won't give that much of an advantage. Pretty sure an A12X is nowhere near a 15W TDP.
I am replying not only to you, but to everyone who thinks that since Apple or ARM can make such good CPUs or GPUs at 15 watts or X watts, they are therefore better than AMD or Intel or Nvidia, who make chips at the 100-watt level. I am afraid this is not the case, and the short answer is that you can't beat physics. I will continue in a new reply. Just to let you know, I am equally excited at what ARM and Apple have achieved in the mobile space.
Just because you are able to make a GPU run at 2 teraflops at 4 watts does not mean you can scale linearly to 300 watts. By that thinking, Nvidia and AMD should be making 300+ teraflops GPUs, and they can't only because they are incompetent. At 7nm, which only recently Nvidia has begun to implement, 20+ teraflops GPUs are possible; theoretically, combined with a multicore CPU, they make up a high-end "power hungry" desktop or server. A top-of-the-line phone costs between 500 and 2000 dollars. A not-so-top-of-the-line desktop costs hardly 5000 dollars, consumes about 800 to 1000 watts, and is about 10 times or more computationally capable than its mobile flagship competitor. On top of that, most flagship desktop GPUs are one or more process nodes behind, and despite this they maintain the above-mentioned lead.
So, there is no comparison, computationally speaking. At every price point, desktop implementations beat their mobile counterparts hands down, not because they are somehow superior, but because of physics. If ARM or Apple or whoever ever decide to scale to a bigger power envelope, I bet you they are not going to be significantly better power/performance-wise, because... physics. Everyone who tries to promote either the mobile or the desktop sector as superior has an agenda. If you want the best performance possible at the best price point, go desktop. If you want enough performance in a power-limited scenario, go mobile, but you will pay a premium for it. I don't disagree with paying that premium, but I want to make clear that I know what I get, and why I pay the price I pay.
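The "you can't beat physics" point can be made concrete with the standard first-order dynamic-power relation P ≈ C·f·V²: pushing a core to higher clocks also forces higher voltage, so power grows much faster than performance. A sketch with illustrative voltage/frequency numbers, not measured data for any real chip:

```python
def dynamic_power(cap: float, freq_ghz: float, volts: float) -> float:
    # Dynamic CMOS power scales with switched capacitance x frequency x voltage^2
    return cap * freq_ghz * volts ** 2

base = dynamic_power(cap=1.0, freq_ghz=1.0, volts=0.65)   # mobile-class operating point
pushed = dynamic_power(cap=1.0, freq_ghz=2.0, volts=1.0)  # same core at 2x clock, higher V

perf_gain = 2.0             # best case: performance scales with clock
power_gain = pushed / base  # ~4.7x the power for 2x the performance
```

Well over half the perf/W is lost just crossing from the mobile to the desktop operating point, which is why a 4W design doesn't scale linearly into a 300W one - in either direction.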
Would be great if we get 1x X1 + 3x A78 and 4x A55 with 4MB L3 shared between the big cores. Or just 2x X1 and 6x A55 cores with 8MB L3 for the X1 cores (it would be interesting to see the efficiency here compared to the above). 5nm gives a lot of headroom, and even using 1x 3GHz A77 and 3x 2.7GHz A77 is possible under this node.
I'm excited to see what comes of this for Windows on ARM. I know there are some who will find it pointless, but there are millions of office workers, and the IT pros that support them, who would find an all-day, cheaply replaceable, Office-chewing, LTE/5G always-connected device quite useful...
For years Intel tried to make an all-day system, and finally straight-up gave up! Yes, Windows is "heavier" on system calls, but then again, Linux can be as well. ARM seems to have shoehorned in nicely after 4+ years of trial and error (and Law and Order, but...) with Android. While I wouldn't buy a Surface Pro X, it does do 80% of what you'd expect from an all-day Win10 x86 system. That's progress. Let's see if this brings more!
The X1 belongs in a flagship ARM Windows device like the next Surface Pro X. The current model has a Qualcomm SQ1 and it already performs at 8th gen Core i5 levels, with half the power consumption when running ARM code. An X1-based SoC could offer top tier i7 performance at half the power and hopefully a lower price. Competition is good to keep Intel honest.
@Andrei You have a technical error: "...all while reducing power by 4% and reducing area by 4%" In the picture area reduction is 5, not 4 percent. "...all while reducing power by 4% and reducing area by 5%"
So with two tiers of big cores now, and presumably a new small core and supposedly a new middle-ish core to span the ever-increasing gap between big and little... does this mean that in a couple of years Android phones will have to deal with scheduling across 4 different types of cores? bigger.big.middle.little?
This creates the problem that you cannot hyper-focus on any one area of the PPA triangle without making compromises in the other two.
https://images.anandtech.com/doci/15813/A78-X1-cro... Why do the yellow and orange starting points/dots have drift in them? The SPEC performance axis doesn't mandate that one start ahead of the other. If that offset is removed and the two starting points are joined, the performance difference becomes so small that both lines seem to overlap - in fact, the curve between the 2nd and 3rd dots of A77/A78 would make the A78 even slower, the curve between the 3rd and 4th dots would give the A78 some benefit, but the curve between the 4th and 5th dots would again make A77 = A78. What do you say?! Thanks!
A lot of people are saying that with Cortex-X1 ARM is bringing the fight to Apple’s powerhouse CPUs, i.e. the potent custom ARM processors that Apple develops for consumer computing products.
Actually, that isn't exactly what is happening. I had a close look at the performance data (using ARM's own projections) and it looks like it will take until the Makalu generation before a successor to the X1 (very nearly) catches up to the A14 on outright (integer) performance. For some time, Apple has had a 2.5-year lead in the performance stakes over ARM, and no change is on the cards in that regard. Cortex-X1, contrary to ARM's public remarks, continues the existing strategy of winning on energy efficiency, not seeking performance gains at any cost. As a matter of fact, the energy efficiency of the X1 isn't too bad as a starting point. And when modestly clocked A78 cores are also in the mix, energy efficiency improves greatly. With the next generation of SoCs based on A78 and X1 licensed ARM cores, manufacturers will have the opportunity to either sharply reduce power consumption or add new and advanced processing capabilities without raising power budgets. And that can be achieved while offering a good (single-threaded) performance boost of 33% (or more) over existing A77-based processors.
When it comes to outright execution speed, it seems that ARM is pushing harder on floating-point performance than other areas. In that area ARM could conceivably reach performance parity with Apple's SoCs sooner rather than later.
yankeeDDL - Tuesday, May 26, 2020 - link
If I understand correctly, the A78 has a ~20% performance lead over A77, while the X1 a ~30%? If that is the case, it seems a rather minor difference, no? Nothing like the 2x (in some scenarios) of Apple's cores compared to the A77. Did I read this wrong?
SarahKerrigan - Tuesday, May 26, 2020 - link
Anandtech projects a 10-20% delta from A13 for 3GHz X1 on page 4. That's not bad IMO.
syxbit - Tuesday, May 26, 2020 - link
These comments are tiring. A13 is not the benchmark. A14 will be out before any X1 chip and will trounce this.
As an Android user, I'm continually disappointed.
close - Tuesday, May 26, 2020 - link
But how would that sound? "Some percent slower than the future A14"?
syxbit - Tuesday, May 26, 2020 - link
I agree. But it's not an achievement to be slower than a 1-year-old chip.
soresu - Wednesday, May 27, 2020 - link
It is an achievement to close the gap with a company that has a LOT more to spend on R&D.
Spunjji - Wednesday, May 27, 2020 - link
It is if:
1) You're gaining ground instead of losing it.
2) The 1-year-old chip in question happens to be the absolute market leader.
Vince789 - Friday, May 29, 2020 - link
It is a huge achievement, as the X1 delivers 95% of A13 performance at only 65% of the power and 72% of the energy consumption, i.e. about 46% better perf/watt.
Wilco1 - Saturday, May 30, 2020 - link
Exactly. And it will outperform most x86 CPUs. Also, while it's the largest ever Arm-designed core, it's still the smallest high-end core in the world by a mile.
anonomouse - Saturday, May 30, 2020 - link
Well, one of those is measured, and one of those is a projection that also happens to bake in fabrication process improvements. I wouldn't really try to compare anything beyond the performance projection at this point.
Wilco1 - Tuesday, May 26, 2020 - link
Disappointed in what way? Flagship phones have been more than fast enough in the last few years. There is a balance between power consumption and performance - and I think the improved efficiency of Cortex-A78 will be more useful in typical use-cases. It won't win benchmarks, but if you believe iPhone performance is measurably better in real-life use (rather than benchmarks), why not just buy one?
syxbit - Tuesday, May 26, 2020 - link
Put it in context. You pay $1500 for a Galaxy S20 Ultra that's slower than a $400 iPhone. If you do a lot of web browsing on JavaScript-heavy pages, nothing beats single-threaded perf. You can't improve it by just throwing slower cores at it.
Discourse did a good writeup that's still valid today.
https://meta.discourse.org/t/the-state-of-javascri...
Wilco1 - Tuesday, May 26, 2020 - link
You could also get the $699 OnePlus 8 and beat the S20 Ultra on both performance and cost. Where is the difference?
Javascript and browsers depend heavily on software optimization, and that's the real issue.
armchair_architect - Tuesday, May 26, 2020 - link
syxbit is right. Javascript and browsers are not just software. They stress the CPU in different ways than the usual SPEC/Geekbench, and X1 will not be just a benchmark core.
If you look at the DVFS curve of A77 vs A78, X1 will probably be even lower power than A78 in the region of perf in which they overlap.
For the simple reason that to achieve the same performance as A77/A78, X1 will need much lower frequency and voltage. This will greatly offset the intrinsic growth in iso-frequency power that X1 will surely have.
My point would be: going wider helps you be more efficient iso-perf vs narrower cores.
The power efficiency hit only comes when you go over the peak perf offered by the narrower core.
So you could argue that something like X1 takes the A78 DVFS curve and pushes it down (lower power), and on top of that extends it to new performance points not even reachable on A78.
Obviously you pay in area for this :)
But Apple has clearly shown over the years that this is the winning formula.
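The iso-performance argument above can be sketched with the usual first-order dynamic-power relation P ≈ C·f·V². All numbers here are illustrative assumptions (say the wider core has ~30% higher IPC and proportionally more switched capacitance), not ARM's data:

```python
def dynamic_power(cap: float, freq_ghz: float, volts: float) -> float:
    # dynamic power ~ switched capacitance x frequency x voltage^2
    return cap * freq_ghz * volts ** 2

# Narrow core needs its peak clock and voltage to hit the target performance.
narrow = dynamic_power(cap=1.0, freq_ghz=3.0, volts=0.95)

# Wider core (assumed 1.3x IPC, ~1.3x capacitance) hits the same performance
# at 3.0/1.3 GHz, which sits lower on the voltage/frequency curve.
wide = dynamic_power(cap=1.3, freq_ghz=3.0 / 1.3, volts=0.80)

print(wide / narrow)  # ~0.7: roughly 30% less power at the same performance
```

And past the narrow core's peak, the wide core has the curve to itself - the "extends it to new performance points" part of the argument.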
ZolaIII - Wednesday, May 27, 2020 - link
You are completely wrong. It's much more about caching than wider cores. X1 is not 50% faster than A78, but it is 50% bigger. The best approach would be a wider ISA with the same execution units multiplied in number, like RISC-V, which has already laid foundations for a 256-bit ISA (still a scratch) and is finalising a 128-bit one. But there's a catch in tools and compiler support.
soresu - Wednesday, May 27, 2020 - link
X1 does have wider NEON SIMD, twice as wide in fact - so for content that favors SIMD (like dav1d AV1 decoding) you will get a serious jump in performance.
Unfortunately the benchmarks do not really give us much of an idea of real world improvement for something like this, so we'll have to wait for products to get a better idea.
dotjaz - Thursday, May 28, 2020 - link
dotjaz - Thursday, May 28, 2020 - link
ARM specifically said A78 was designed to INCREASE EFFICIENCY vs A77, and a lot of the decisions concur with that.
X1 was designed to MAXIMIZE PERFORMANCE, sacrificing efficiency and area in the process. When you factor in the leakage caused by the larger die, X1 would almost certainly be less efficient than A78 when you drop it below 2GHz.
Wilco1 - Thursday, May 28, 2020 - link
"Javascript and browsers are not just software."
They are just software. Fun fact: your Android browser is built with -Oz. Yes, all optimizations are turned off in order to reduce binary size. That's an insanely stupid software decision which means Android phones appear to be behind iOS when in fact they are not.
name99 - Saturday, May 30, 2020 - link
It's not an "insanely stupid software decision"...
Fun fact: Apple ALSO builds pretty much all their software at either -Oz or -Os! Both Apple and Google (and probably MS) are well aware that the "overall system experience" matters more than picking up a few percentage points in particular benchmarks, and that large app footprints hurt that overall system experience. Apple's recommendation for MOST developer code (and followed internally) has been to optimize for size for, yikes, at least 20 years, and hasn't changed in all that time.
Look at the (ongoing) work in LLVM to reduce code size ("outliner" is one of the relevant keywords); the people involved in that span a range of companies. I've seen a lot of work by Apple people, a lot by Google people, some even by Facebook people.
Wilco1 - Saturday, May 30, 2020 - link
There is a world of difference between optimizing performance without regard for codesize and optimizing for smallest possible codesize without any regard for performance. -Ofast is the former, -Oz is the latter. Most software, including Linux distros, uses -O2 as the best tradeoff between these extremes. Non-essential applications use -Os (or even -Oz if performance is irrelevant). However a browser is extremely performance sensitive. Saving a few bytes with -Oz loses 10-20% performance, and that means you lose the equivalent of a full CPU generation. I call that insanely stupid; there are no other words to describe it.
ZolaIII - Wednesday, May 27, 2020 - link
It's not about software per se... Java is very memory hungry, and most ARM OEMs cut corners when it comes to CPU caches, and use suboptimal wider supplemental SoC-level implementations which only slow things down even more.
ZolaIII - Wednesday, May 27, 2020 - link
Browsers have been SMP2 for a long time, but the worker list can spread across as many cores as you have.
yankeeDDL - Wednesday, May 27, 2020 - link
Exactly my point. Aside from the fact that today you cannot get a high-end Android phone unless you want a huge screen, we're at the point where anything below $700 is substantially slower than anything Apple has come up with in the last 2-3 years.
As an Android user, I find it frustrating.
I have a Samsung S8 which, granted, is not a high-end phone by today's standards. I just got an S10 for my wife, which feels faster, but still, looking at benchmarks, there's no comparison. For work I have an iPhone XR, which I hardly use. It is heavy, way too large and bulky, but it unlocks reliably in a flash, and it is silky smooth to use.
Lolimaster - Wednesday, May 27, 2020 - link
The Xiaomi Redmi K30 Pro Zoom at $550 got the same LPDDR5, UFS 3.1 and SD865 as the S20 Ultra, got a nice camera (waiting on a GCam port), no notch, an AMOLED screen. Maybe search better; Samsung and Huawei are full-on overpriced devices.
soresu - Wednesday, May 27, 2020 - link
Talking about unlocking has less to do with the ARM core and more to do with the individual manufacturer SW implementations on top of Android, not to mention Android itself.
It will be interesting to see if Fuchsia changes up that equation some - though I've never found my Mate 10 to be particularly laggy at all.
soresu - Wednesday, May 27, 2020 - link
Depends on the web content, not all of it is JS-bottlenecked - and if you are browsing JS-heavy crap pages on a phone then it's on you for being a moron begging to suck your battery dry.
Spunjji - Wednesday, May 27, 2020 - link
Put *that* in context. You pay $1100 more for a device with vastly superior camera systems, a far better display and a larger quantity of faster storage. Most people don't buy a phone for its single-thread CPU performance.
The iPhone SE certainly is great value for money; flagship phones aren't. Your exaggerated comparison is the best you could manufacture - the Redmi K30 Pro has the same SoC as the Galaxy and costs about $500. Not quite so silly now, is it?
Added to that, the vast majority of web-browsing isn't dependent on single-thread performance - especially the sort you'd be doing on a phone. This whole post is off.
Nicon0s - Wednesday, May 27, 2020 - link
It's not actually slower.
According to Anandtech the S20 offers: "by far the most responsive and smooth experiences you can get on a mobile phone today".
s.yu - Thursday, May 28, 2020 - link
>It's not actually slower.
It is. If Apple allowed animation acceleration then it would show. Turning off animations altogether would show it even more.
Spunjji - Thursday, May 28, 2020 - link
But Apple don't allow those things, so in reality where we all actually live *it is faster*.
name99 - Saturday, May 30, 2020 - link
Yes yes yes, eternal evil Apple not allowing people to do what they want. We all know the story.
Meanwhile, in the real world:
https://support.apple.com/en-gb/HT202655
Vince789 - Friday, May 29, 2020 - link
Not surprising, as a $1500 Exynos S20 Ultra is slower than $400-500 865 phones too.
Samus - Wednesday, May 27, 2020 - link
I think syxbit's disappointment is rooted in the fact that a current $400 Apple device available right now will probably be faster than this chip, whenever it is available, and it will likely only be in much more expensive phones.
That is incredibly disappointing. Especially when you consider Android has a native performance penalty in UI performance and overall optimization due to its broad hardware compatibility requirements. If anything, Android should be getting the faster chips, since Apple has the luxury of optimizing their OS around their SoC.
Lolimaster - Wednesday, May 27, 2020 - link
A $400 Apple SoC in a 2015 "value" body.
All around, the K30 Pro Zoom runs circles around it as a modern device. My 5.8" S9 can feel quite small for media consumption, browsing and gaming. Can't imagine something well below 5" as a "smartphone" in 2020.
Spunjji - Thursday, May 28, 2020 - link
"...Android has a native performance penalty in UI performance..."
Somebody has clearly never used a OnePlus device!
Apple having a theoretically faster CPU makes no difference if:
1) Apple won't sell that CPU to anyone else
2) Apple won't use anyone else's CPUs
3) You care about the actual experience you get from the device, not benchmarks.
iphonebestgamephone - Friday, May 29, 2020 - link
I currently use a OnePlus; the UI animation does look better on iPhones. OnePlus is just sped-up animations - slowing it down doesn't make it look better either.
darkich - Wednesday, May 27, 2020 - link
For years now, iPhones have been CONSISTENTLY inferior to the Samsung Galaxy S phones in the best and most objective real-life speed tests.
Go see the PhoneBuff channel and educate yourself.
iphonebestgamephone - Wednesday, May 27, 2020 - link
Ah yes, the app open tests. Wonderful indeed.
jospoortvliet - Thursday, May 28, 2020 - link
You missed the sarcasm tag ;-)
But yeah, those "real world tests", comparing the animation speeds of completely different applications (both being called "youtube" doesn't mean they have any code in common), are utterly useless for comparing CPU performance. A 1995-era desktop PC would in the same comparison also seem faster than a modern-day computer...
Spunjji - Thursday, May 28, 2020 - link
Why are they irrelevant when they represent actual performance in use, doing things a user actually does?
Like seriously, either the argument is that real-world testing matters or it's that e-peen measurement wins, but you can't claim that your e-peen score represents real-world use when the real-world tests say otherwise.
Drake H. - Tuesday, May 26, 2020 - link
Honestly, who cares about that? This extra power is useless. Playing on a mobile device is uncomfortable and bad for eye health. They should shift priority to maximum efficiency only.
The critical point that needs to evolve in smartphones is the battery, and nobody talks about it. Where are the graphene batteries Samsung promised? I miss having a cell phone that lasts a week without charging. :(
syxbit - Tuesday, May 26, 2020 - link
Extra power is never useless. Sure, if you just browse Facebook. But iPads and Android tablets are trying to replace laptops. The A13 can probably replace a laptop. No Qualcomm chip can do that as well.
I suspect when Apple replaces their laptop Intel chips with their ARM chips, they'll be faster, with better battery life. But when Microsoft tries to do it with Qualcomm, they're much worse.
skavi - Tuesday, May 26, 2020 - link
Android tablets are dead in the high end. Would like to see some X1 cores in a future Surface tho.
Drake H. - Tuesday, May 26, 2020 - link
Nah, there is no ARM CPU that can replace a regular x86 CPU for gaming or work, and it won't exist any time soon. You don't reverse millions of lines of software and years of optimization overnight.
FunBunny2 - Tuesday, May 26, 2020 - link
"You don't reverse millions of software and years of optimization overnight."
Cross-compile is old news; any C/C++ compiler can do that. OTOH, all those system calls are the major issue.
vladx - Tuesday, May 26, 2020 - link
Cross-compiling doesn't mean your software will run at the same performance on everything; for that to happen you still need to implement architecture-specific optimizations, and even so there are still no guarantees.
serendip - Wednesday, May 27, 2020 - link
A fast ARM chip could brute-force x86-32 code translation so it was fast enough. The SQ1 in the Surface Pro X can run x86-32 code without it being snail-slow like on older SD850 devices. An X1-based chip could probably do x86-64 translation (coming next year according to Microsoft) at 8th-gen i5 levels, and it would fly when running ARM-native code.
Drake H. - Wednesday, May 27, 2020 - link
By this logic it seems that anyone could port the x86 recompiler from PCSX2 to ARM with a few clicks. Optimizing software for ARM would take a lot of time and money, not to mention that some of that software has no source code available.
iphonebestgamephone - Wednesday, May 27, 2020 - link
Not on Android or Windows, but on a console it's pretty good for gaming, as proven by the Switch?
soresu - Wednesday, May 27, 2020 - link
How can you be disappointed?
It's not like there is a direct comparison, often with high end apps and games there is a serious lag between it being available on iOS and it becoming available on Android, sometimes it never comes at all.
Though the openness of Android means we get some things easily that iOS device owners have to jump through hoops for, namely console emulators.
What's the use having all that power in the Apple Axx SoC's when you are restricted in your freedom to do with it as you wish? All things considered I'll take a performance hit for that freedom any day of the week.
iphonebestgamephone - Wednesday, May 27, 2020 - link
After seeing how much better an iPad Pro runs Dolphin, even compared to an 865, it's worth jumping.
Spunjji - Thursday, May 28, 2020 - link
...if you run Dolphin, sure. If you don't want all the attendant ecosystem an Apple device demands, it's not such a clear and obvious jump.
iphonebestgamephone - Friday, May 29, 2020 - link
It's great as a gaming and media device. I wouldn't get an iPhone though, it seems to throttle too fast. Just needs a 3DS emulator as of now.
tkSteveFOX - Wednesday, May 27, 2020 - link
Why? ARM cores have much better efficiency under load.
Try gaming on an iPhone and see the battery being completely vaporized in like 2hrs, compared to 4+ on an Android device.
dudedud - Wednesday, May 27, 2020 - link
Do your metrics leave out screen consumption, are they normalized for battery capacity, and run at the exact same settings?
I bet they don't.
Spunjji - Thursday, May 28, 2020 - link
Android phones tend to have larger screens, and yeah, this very website has the normalized tests. iPhones tend to have higher maximum power draw under load (although the A13 improved this dramatically from the mess that was the A12).
But really, it doesn't matter. You get what you're given with the iPhone - you can't buy one that games for more than 2 hours, so if you want to game for more than 2 hours, no amount of normalized benchmark results will help you out.
UNCjigga - Friday, May 29, 2020 - link
I made the switch years ago, moving from a Moto X Pure to an iPhone 6S with the Apple A9. More than 2x the performance and 3x the battery life. I haven't looked back. My current iPhone 11 Pro is snappier than my 7th-gen Core m3 Chromebook in some benchmarks while driving a higher-res screen, and I usually end the day with at least 30% battery still left.
ChrisGX - Monday, July 6, 2020 - link
The A13 is a very fast chip. Processor speed doesn't need to be increased to stay on the leading edge. On the contrary, if Apple does the straightforward thing, and the thing that would differentiate the A14, it would be working almost entirely on lowering power consumption (which has recently got well out of control) and improving the energy efficiency of the SoC, particularly of the performance cores. That would also make sense for Apple's forthcoming desktop CPUs. A lot of power-hungry performance cores that generate a lot of heat and then fail to sustain performance at near-peak levels won't be the way to make a compelling case for desktop CPUs based on A14-like performance cores.
Comparisons with the A13 do make a certain amount of sense insofar as the main challenge at this point is to lower power consumption rather than push core performance again. Peak integer performance of the Lightning cores is rather similar to the best Zen 2 cores out there.
Note: Apple has an impeccable on-chip power management system, but the more power hungry the cores of a chip are, the harder it is to hide power and thermal issues.
SarahKerrigan - Tuesday, May 26, 2020 - link
Also, the "20%" for A78 is 2.6GHz A77 vs 3.0GHz A78 (so ~7% iso-clock gain). The 30% for X1 is already iso-clock: 3GHz A77 vs 3GHz X1.
yankeeDDL - Tuesday, May 26, 2020 - link
OK, thank you. Yes, it is a bit better then. Hopefully the X1 is only the 1st iteration in the new strategy, and a bit conservative. Hopefully it'll close the gap a bit more.
DanNeely - Tuesday, May 26, 2020 - link
Having gotten to page 4 of the article, the explanation is that ARM's slides as used on the first page suck. The 20% from A77 to A78 is +7% architecture and +13% from 5nm instead of 7nm. The 30% from A77 to X1 is entirely architecture; that in turn implies that upcoming X1 chips should be about 40-45% faster than current A77 ones.
AIUI it's still going to fall short of what Apple's doing (and not just because the A55 little cores are getting really dated), but it is a badly needed narrowing of the gap.
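Checking that arithmetic with ARM's headline numbers as explained in the article (the +13% process figure is ARM's 7nm-to-5nm assumption, not a measurement):

```python
arch_a78, process = 1.07, 1.13  # ~7% uarch gain, ~13% from the 5nm node
arch_x1 = 1.30                  # X1 vs A77 at the same clock and process

a78_total = arch_a78 * process  # ~1.21: the "20%" A78 headline
x1_total = arch_x1 * process    # ~1.47: hence "40-45%" vs shipping A77 phones
print(round(a78_total, 2), round(x1_total, 2))
```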
ichaya - Tuesday, May 26, 2020 - link
40-45% on appropriately less die area than Apple and you've got something competitive, at least.
DanNeely - Tuesday, May 26, 2020 - link
In terms of engineering prowess, certainly; but not in terms of letting Samsung/etc finally design smartphones and tablets that are as fast as their rivals from Apple. Assuming the product plays out in retail, in another 2 or 3 years when I look to replace my S10 I'll probably get something with an X core in it; but I really hope that they'll widen the performance uplift vs their more general purpose cores by then.
Raqia - Tuesday, May 26, 2020 - link
The existing A77s are already very impressive in terms of PPA; I would consider them as impressive as Apple's big cores taken as a whole. (The small cores are another story since the major uplift from the A13...) A lot of Android vendors value area in particular since they integrate modems on die whereas Apple does not; this drives a lot of value and cost savings for customers.
CiccioB - Tuesday, May 26, 2020 - link
If Samsung really wants to create a phone/tablet as fast as an Apple one it should first concentrate more on SW optimizations. Apple puts a lot of effort into that. It's not only a question of who makes the bigger core. See the comparison of Samsung's crappy phones with other Android ones using much more optimized and less bloated versions of the OS. They are good for benchmarking with all those cores and MHz (and tricks on turbo speed for benchmark apps), but in real life Samsung phones are slower than they could be due to poor optimization.
Wilco1 - Tuesday, May 26, 2020 - link
Bingo! Adding a big core that does well in benchmarks is not a good solution. Improving browser performance with software optimization can be far more effective.
armchair_architect - Wednesday, May 27, 2020 - link
@Wilco1 I am afraid you only see the SW part of the equation here. Again, X1 is not only good in benchmarks; being wide helps in that you can achieve the same performance as last-gen by running at vastly lower frequency and voltage.
Thus all use cases that do not require max peak perf enjoy a huge power saving.
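A toy model of this argument, using the classic CMOS dynamic-power relation P ≈ C·V²·f and assuming voltage scales roughly linearly with frequency in the relevant region (the IPC and capacitance numbers below are made up purely for illustration):

```python
def dynamic_power(capacitance, voltage, frequency):
    """Classic CMOS dynamic power: P ~ C * V^2 * f (constants omitted)."""
    return capacitance * voltage**2 * frequency

# Hypothetical cores: the wide one has 30% higher IPC but also 30% more
# switched capacitance (it is simply a bigger core).
narrow = {"ipc": 1.0, "cap": 1.0}
wide   = {"ipc": 1.3, "cap": 1.3}

target_perf = 3.0                        # narrow core flat out at 3 GHz
f_narrow = target_perf / narrow["ipc"]   # 3.0 GHz
f_wide   = target_perf / wide["ipc"]     # ~2.31 GHz for the same throughput

# Toy assumption: voltage tracks frequency, so pass f in for V as well.
p_narrow = dynamic_power(narrow["cap"], f_narrow, f_narrow)
p_wide   = dynamic_power(wide["cap"], f_wide, f_wide)

print(f"wide-core power at iso-perf: {p_wide / p_narrow:.2f}x the narrow core")
```

With these invented numbers the wide core lands around 0.6x the power at the same throughput; the only point is that the f·V² term falls faster than the capacitance penalty grows.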
Wilco1 - Thursday, May 28, 2020 - link
You can't brute-force your way to performance or efficiency. If you can improve performance via software optimization, you take it any time over a faster core that gives the same gain but needs more power to run the unoptimized software. It's as simple as that.
armchair_architect - Thursday, May 28, 2020 - link
Obviously you would ideally need both SW optimization and faster CPUs. But again, power will not always be higher, and higher power != higher energy usage.
Wilco1 - Thursday, May 28, 2020 - link
Absolutely. But the biggest issue in the Android world is software optimization and tuning, not CPU performance. Improving that would easily add up to a new CPU generation. The choice to switch to LLVM was stupid at the time, but even more so today since GCC has since moved further ahead of LLVM...
Note all the evidence points to using smaller cores to improve power efficiency. You can see this in the perf/W estimates for SPEC - the A78 is almost twice as efficient as the A13 while achieving 74% of the performance.
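Spelling out that perf/W claim, using only the two numbers quoted above (both are the commenter's estimates, not measurements):

```python
# A78 vs A13, per the SPEC estimates cited above: 74% of the performance
# at twice the efficiency (perf/W). Relative power then follows directly.
perf_ratio = 0.74        # A78 performance relative to A13
efficiency_ratio = 2.0   # A78 perf/W relative to A13

power_ratio = perf_ratio / efficiency_ratio  # A78 draws ~37% of A13's power

print(f"A78 power vs A13: {power_ratio:.0%}")
```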
Andrei Frumusanu - Wednesday, June 3, 2020 - link
> The choice to switch to LLVM was stupid at the time, but even more so today since GCC has since moved further ahead of LLVM...
GCC's problem is its license. Neither Apple nor Google would be able to integrate it into an IDE like Xcode/Android Studio. In the grand scheme of things, going LLVM is a much better choice, even if it's slower than GCC.
ksec - Tuesday, May 26, 2020 - link
The 40-45% figure assumes the X1 could run at 3GHz within its TDP budget. And even with that in mind, the figures Anandtech put up show it is still behind the A13.
Not bad for the rest of the ARM ecosystem. But still not quite there yet.
MarcGP - Tuesday, May 26, 2020 - link
Behind the A13? You missed the estimation chart where it shows the X1 reaching A13 performance (a bit lower in integer performance and a bit higher in floating point performance) at a much lower power consumption.
ksec - Wednesday, May 27, 2020 - link
Behind in IPC. The chart puts the X1 on a 5nm node with a 15% clock speed increase against a 7nm-node A13 with a non-sustainable 2.63GHz clock. Also worth noting this is 7nm+, not 7nm EUV from TSMC. So if you put the node aside, those numbers would likely still put it under the A13.
dotjaz - Tuesday, May 26, 2020 - link
You understand INCORRECTLY. 30% is for the same frequency and 20% is at the same power. You DID read it wrong.
dotjaz - Tuesday, May 26, 2020 - link
With the same baseline, A77@2.6GHz, then A78@3GHz is +20%, X1@3GHz is +50%.
ZolaIII - Wednesday, May 27, 2020 - link
Nope, you are wrong. First of all, given a constant power delta for something which goes into a phone, the A78 will be a rather significant improvement over the A77, with the same performance at half the power budget. The A77 already had a lead over Apple's big cores regarding the performance/W metric, and this means more than a brute-force approach. Yes, Apple's big cores are superior, but on something that has the power budget of a laptop. On the other hand, the X1 is a direct take on those Apple cores & it should be up to 2x faster than the A78 in tasks which are optimised and utilize FP SIMD, basically SMP tasks. This is more relevant to server tasks and not so much for the mobile space. Still, I would like to see more advanced SIMD blocks and their inclusion on smaller cores with SMT, as SIMDs are hard to feed optimally and front-end expansion therefore is a must, but it can be done in a more elegant manner like, for instance, MIPS did with VMT. ARM desperately needs a power-efficient basic OoO core, a successor of the A73 if you like, with DynamIQ integration as an A55 replacement. There is an A65AE but we haven't seen any implementation of it in any space so far.
Santoval - Friday, May 29, 2020 - link
It is not even an apples-to-apples comparison, since the A78 has +20% *sustained* performance over the A77, while the X1 has +30% *peak* performance. Therefore the sustained performance lead of the X1 over the A77 might be in the +25% ballpark. Is a mere extra 5-10% performance over the A78 really worth a 30% larger die area and a quite higher TDP? Unless Arm can increase the performance lead of the X1 over the A78 by at least another 20%, I don't see the former being an attractive (or even a sane) licensing and purchasing option.
ChrisGX - Monday, July 6, 2020 - link
The X1 exhibits a 22% performance advantage over the A78 when process and frequency are controlled factors. So, yes, X1 performance is 1.22x A78. The performance improvement of the A78 over the A77, however, includes a process node and frequency change, 20% all up. So, the performance of the X1 is: (A77 * 1.2) * 1.22, or 1.46x A77.
ChrisGX - Monday, July 6, 2020 - link
Please note Andrei seems to have made assumptions something like this in his calculations, with A77 SPECspeed performance at 2.6GHz being in the order of 32 (which seems reasonable).
deil - Tuesday, March 30, 2021 - link
Apple loves to exaggerate and use perfect scenarios. 2x faster IF you take a $250 Snapdragon 4xx-series model and a $700 iPhone. I dug through their comparisons and each of them mentions the worst case for others and the best for them. Most iPhones don't even have a 20% lead in raw performance; all it gains is an efficient OS, while Android is bulky and slow. Another example was the M1, mentioned as 5x faster than the "similar sized most popular laptop", which per Amazon was a $269 laptop with a Ryzen 3200U, a dated low-end offering which had ~4000 points in benches when the market already had 4700Us in the same form factor with 20000 points.
When comparing $300 vs $900 laptops, it really might look like the M1 is so godlike.
FreckledTrout - Tuesday, May 26, 2020 - link
Interesting move. Can we assume the choice to keep the pipeline fairly short means ARM are targeting tablets/phones with this design?
skavi - Tuesday, May 26, 2020 - link
Were you expecting it to target desktop?
Kangal - Tuesday, May 26, 2020 - link
It would've been nice. Remember back in 2016 ARM had a clean three-tier offering:
Cortex A35 - Ultra-low power
Cortex A53 - Low power
Cortex A73 - Medium power
I was hoping a similar thing to happen. Maybe it will come on a new architecture-branch, ergo ARMv9, maybe in 2022. Perhaps they can look into cores for laptops, desktops, servers. For instance:
ARM v9 Cortex A41 - Ultra-low power, same as A55 perf.
ARM v9 Cortex A61 - Low power, same as A73 perf.
ARM v9 Cortex A81 - Medium power, beyond Apple A14 perf.
Although, a big part will be optimisations and implementing a new InfinityFabric, big.LITTLE, DynamIQ style platform that scales from wearables to desktop.
SarahKerrigan - Tuesday, May 26, 2020 - link
A78 looks like another A73 - modest perf gains, but improved efficiency. X1 is fascinating; I wasn't expecting to see an aggressive design like that until Matterhorn.
vladx - Tuesday, May 26, 2020 - link
Cortex-A73 had in fact lower IPC than Cortex-A72, which is not the case here with Cortex-A78.
tipoo - Tuesday, May 26, 2020 - link
Page 2's index should read A78 rather than A77, I believe :)
MrCommunistGen - Tuesday, May 26, 2020 - link
Thanks for fixing, Andrei/team!
eastcoast_pete - Tuesday, May 26, 2020 - link
Agree that this is an interesting move, both design- and strategy-wise. While the Hera (X1) core will make a great single top performance core (1x Fast+Big, 3x Big, 4x Little) for upcoming mobile Snapdragons and Exynos SoCs, I am also curious how Hera will boost the efforts of QC and Samsung for Windows-on-Arm CPUs.
The one big fly in ARM's ointment is that they apparently still believe that their A55 remains the greatest Little core they've designed to date. Isn't that design getting a bit dated, especially compared with Apple's efficiency cores, which Apple does manage to update quite regularly? What's up with that? Any information or rumors on an A57 or A58?
SarahKerrigan - Tuesday, May 26, 2020 - link
As far as I know, there's going to be a new little core announced alongside Matterhorn next year.
DanNeely - Tuesday, May 26, 2020 - link
Good news if true, but 4 years between generations is way too long; even the 3 between A53 and A55 was too long.
eastcoast_pete - Tuesday, May 26, 2020 - link
That new little core can't come too soon! @Andrei: do you have any stats on how much time a phone in regular daily use (not benchmarking or gaming) actually spends little-core-only? Would be interesting for battery use estimates. Thanks!
Kamen Rider Blade - Tuesday, May 26, 2020 - link
What happened to the A34 / A35? Shouldn't they bring that back as the lowest-power core and update it?
Andrei Frumusanu - Tuesday, May 26, 2020 - link
We'll see a new little core next year, with some more major updates.
DanNeely - Tuesday, May 26, 2020 - link
It won't be A57, Arm used that for a 2013 big core.eastcoast_pete - Tuesday, May 26, 2020 - link
True, and I even had a device (with a Snapdragon 808) with two of those. Freudian slip on my part; that thing was awful. Ran hot, ate battery for breakfast and by breakfast...
pashhtk27 - Thursday, May 28, 2020 - link
Still using an 808 phone. :DD I bought a new phone two months ago but it arrived faulty. Had to ask for a refund, and as it was a clearance model, now there is no smartphone model on the market in my budget that convinces me. I might end up being one of the last users of that processor.
eastcoast_pete - Thursday, May 28, 2020 - link
What killed that for me was just how fast the 808 phone would suck a 3750 mAh battery dry. Forced me to compete with iPhone users for wall outlets to plug my charger in. However, the 808 made for a good hand warmer in the winter. That was probably what it was best at.
Tams80 - Tuesday, May 26, 2020 - link
The main problem with Windows-on-Arm is not the performance of the SoCs, well at least not when running native ARM apps.
It's the poor emulation. And the software emulation isn't the issue; it's that the SoCs just aren't powerful enough. It's bad enough that I don't think the X1 will be powerful enough for x86 emulation to be a good experience.
Kamen Rider Blade - Tuesday, May 26, 2020 - link
Windows on Arm seems like a pointless endeavor.
Kurosaki - Thursday, June 25, 2020 - link
It's the future!
MarcGP - Tuesday, May 26, 2020 - link
You don't make any sense. You say at the same time:
1) It's not a problem of the SOCs performance but of the poor emulation
2) The emulation software is not the issue but the weak SOCs
Make up your mind, it's one or the other, but it can't be both.
armchair_architect - Tuesday, May 26, 2020 - link
This looks pretty exciting! What if vendors go for 2+4 configs (like Apple) with 2 X1 + 4 A78?
Apple has shown that this configuration is really good!
That would be a killer combination, as the littles are practically useless in real scenarios and a slow implementation of an A78 could very well cover the low part of the cluster DVFS curve, for idle.
The A55 is super old and doesn't offer any useful performance; I suspect they are only there for the low-power scenarios.
But again, Apple has shown that its out-of-order little cores can be super efficient when implemented at low frequency (I think they run at something like 1.7GHz peak frequency).
I didn't read much information on X1 power, but yes, it will for sure be less power efficient than the A78 when both of them are running flat out at 3GHz. But I highly suspect that (like A78 vs A77) on the whole DVFS curve, the X1 can be lower power than the A78 in the performance regions in which they overlap.
It is simply a matter of being wider and slower; this makes you more efficient. That is the Apple way: wide and slow.
Take the iso-performance metric as an example: the X1 will need much lower frequencies and voltages to reach a middle-of-the-road performance point (something like ~35 SpecInt, given that the projection is 47 SpecInt flat-out). This could easily offset the intrinsic iso-frequency power deficit that the X1 brings.
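That iso-performance point can be roughed out numerically, assuming performance scales about linearly with frequency and dynamic power roughly with f³ (voltage tracking frequency) — a crude model, but it shows the size of the offset being claimed:

```python
# Rough illustration of the iso-performance argument above, using the
# numbers from the comment: ~47 SPECint projected flat-out, ~35 as a
# middle-of-the-road target. Assumes perf ~ f and dynamic power ~ f^3.
peak_perf = 47.0   # projected X1 score at peak frequency
target = 35.0      # middle-of-the-road performance point

freq_fraction = target / peak_perf    # ~74% of peak frequency needed
power_fraction = freq_fraction ** 3   # ~41% of peak dynamic power

print(f"frequency needed: {freq_fraction:.0%} of peak")
print(f"dynamic power:    {power_fraction:.0%} of peak")
```

Under these assumptions, backing off to ~74% of the performance costs less than half the peak dynamic power, which is the headroom the comment argues can absorb the X1's iso-frequency power penalty.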
spaceship9876 - Tuesday, May 26, 2020 - link
I'm surprised they are still using Arm v8.2, which was released in Jan 2016.
Kamen Rider Blade - Tuesday, May 26, 2020 - link
I concur. In September 2019, ARMv8.6-A was introduced.
https://en.wikipedia.org/wiki/ARM_architecture#ARM...
There's also these new instructions to add in the future:
In May 2019, ARM announced their upcoming Scalable Vector Extension 2 (SVE2) and Transactional Memory Extension (TME)
eastcoast_pete - Tuesday, May 26, 2020 - link
I might be mistaken, but aren't those wide SVEs a co-development with Fujitsu? ARM might simply not have a blanket license to use that jointly developed tech. I am rooting for wider availability of those for my own reasons (video encoding runs much faster if it can use wide extensions). And there is no such thing as too much oomph for working with videos.
SarahKerrigan - Tuesday, May 26, 2020 - link
It's just taking a while for SVE to get in. Future ARM Ltd cores are likely to have it; future server chips from Hisilicon are roadmapped to as well (although at this point, all bets are off due to the ongoing difficulties between the American state and Huawei.)GC2:CS - Tuesday, May 26, 2020 - link
So Apple is finally defeated now? The A11 was 25%, A12 15%, and A13 just 20% faster than the last gen. So the A10 is still quite competitive today.
This is higher gain than Apple for a third year in a row.
The question is, how much of that is won by going to the 5nm process ? I heard it is quite advanced compared to 7nm.
tipoo - Tuesday, May 26, 2020 - link
30% faster than the A77 will bring the X1 closer to, but probably still under, the A13. And the A14 will be out by then. Wouldn't call Apple defeated yet; it's easier to make larger gains when you were at half their per-core performance a few years ago...
MarcGP - Wednesday, May 27, 2020 - link
30% faster in IPC gains alone (iso-process and frequency). The X1 SoCs will be much faster than that 30% over the current A77 SoCs, much closer to 50% than 30%.
Quantumz0d - Tuesday, May 26, 2020 - link
First things first: what's the cost here of the new X1 vs the 78? We already have $1000 smartphone planned obsolescence, and now this next-level uber crazy alien tech is going to make them go for obscene $2000 non-user-replaceable-battery junk gadgets?
Going wider and 3GHz, I don't know, maybe, maybe not. Zen doesn't clock higher because of its wider arch from what I saw and the 7N limitations. Even Intel is going wider next, which is going to get hit in pure clockspeed.
And next, this is hilarious - " they should outright panic at these figures if they actually materialize – and I do expect them to materialize"
Outright panic? Let's look at the facts: 95% Intel, 4.5% AMD in Q4 2019 server market share. Where does ARM sit here to make both Intel and AMD "panic"?
With ARM it's always custom this, custom that BS. Every single thing needs to be made custom for that crappy ARM part, the LGA socket system is not even a standard for these ARM server CPUs while x86 is all about sockets, and in the consumer space, mobile and DIY, it doesn't exist, thanks to the software which is a bigger driving force behind any product in this spectrum. Esp. everyone knows Qualcomm's heavily marketed (by Cloudflare) Centriq 2400 10nm server CPU got deleted from existence and they stopped pursuing such goals, even after they put the full-custom Kryo SD820 engineers on it, and even the guy who was spearheading it moved on.
I will wait to see what's going to happen to the ever bashed x86 by the ARM superiority or the Apple A series Alien processors.
ah06 - Wednesday, May 27, 2020 - link
Those are the facts as of now, yes. But the rest of the post sounds like someone about to get disrupted. The bulk of x86 vendor profits come from laptops, specifically general-use thin, light and cheap laptops, and those are about to be disrupted. Which is to say that in 5 years' time, x86 on laptops will cease to exist in any meaningful way. Desktops/enthusiast parts are not financially relevant to any of these companies.
Drake H. - Wednesday, May 27, 2020 - link
Nah. The servers are the pot of gold, where profit margins are really high. You will soon see that ARM will have its small space, but it does not pose a danger to the x86 duopoly; something very complex will be coming and everything is already sealed with patents.
Drake H. - Wednesday, May 27, 2020 - link
https://www.phoronix.com/scan.php?page=article&... Here's an example of how ARM outperforms x86. XD
Yojimbo - Tuesday, May 26, 2020 - link
Hera hated Hercules.
vladx - Tuesday, May 26, 2020 - link
We need Zeus next.
jaju123 - Tuesday, May 26, 2020 - link
Disappointed that there's no replacement for the ancient A55 yet.
Kamen Rider Blade - Tuesday, May 26, 2020 - link
I concur, A55/A78 are still on ARMv8.2-A. ARM is already on ARMv8.6-A.
And there are already announced new CPU instructions coming down the pipe.
https://en.wikipedia.org/wiki/ARM_architecture#Fut...
In May 2019, ARM announced their upcoming Scalable Vector Extension 2 (SVE2) and Transactional Memory Extension (TME).
Raqia - Tuesday, May 26, 2020 - link
It's an interesting drop for this year's ARM tech day: I imagine A78 plans were nebulous when the A76 dropped, and they may have downscaled what is now called the A78 and upscaled what's now the X1. There will likely be a 9cx part for Windows on ARM that can leverage the higher end cores and larger caches very well, but really looking forward to Matterhorn and their new smaller core design which will be very impactful for mobile performance.StormyParis - Tuesday, May 26, 2020 - link
To me, these stories are always kind of exciting and kind of pointless. I'm no longer buying flagships, and even at the low/mid-range, it's been years since I've had, or have heard, a complaint about performance.
The apps we use haven't changed in 5 years. Maybe some games, but VR never took off, and Instagram/Twitter/Maps/FB... are the same. "As long as it has a Core A7x, it is Delightful." Hopefully the X program will help ARM get into consoles, laptops and desktops, and hopefully Android will start supporting that... even today, it's more of an Android problem than an ARM problem. Maybe Windows will fix what Google fumbled.
nandnandnand - Tuesday, May 26, 2020 - link
VR needs more time in the oven.
Meteor2 - Friday, June 12, 2020 - link
This.
voicequal - Tuesday, May 26, 2020 - link
How will the X1 support 8MB L3 while operating in the same DynamIQ cluster as the A78 or A55 that support only 4MB L3 cache?
Andrei Frumusanu - Tuesday, May 26, 2020 - link
The DSU is separate and scales up to 8MB either way; the slides are presenting the envisioned config for that core assuming it's the strongest one in the design. So some SoCs with just A78s will have just 4MB, and X1 SoCs with A78s an 8MB design.
Jaianiesh052306 - Tuesday, May 26, 2020 - link
Andrei, do you think the Cortex-X1 is technically superior to the A13, or does Apple still have a lead in terms of pure microarchitecture? Do you think the A14 will post the same performance gains as the Cortex-X1, or lower? How much of a lead will the A14 have over the X1, in your opinion?
vFunct - Tuesday, May 26, 2020 - link
Is the X1 aimed at high-power server processors at all, or are we still in mobile SoC territory? Does ARM have any upcoming server cores?
SarahKerrigan - Tuesday, May 26, 2020 - link
N1, a heavily enhanced A76 variant, is ARM's server core. N2 ("Zeus") should be appearing in the near future.
eastcoast_pete - Thursday, May 28, 2020 - link
Thanks! But, now that "Zeus" is spoken for, what will ARM call the successor of Zeus?
iphonebestgamephone - Friday, May 29, 2020 - link
Kratos
soresu - Sunday, May 31, 2020 - link
It was already roadmapped as Poseidon when N1 and E1 were announced last year.capt3d - Tuesday, May 26, 2020 - link
For the life of me I don't understand why they still haven't embraced socketing their silicon. I'd bet they could capture a significant segment of the DIY market overnight. Sixteen cores with 85%+ the performance of Zen 2 at less than half the power? Plus 24-core integrated graphics? Yes, please.
The_Assimilator - Tuesday, May 26, 2020 - link
No. Nobody wants shit performance cores in a desktop PC.
Deicidium369 - Tuesday, May 26, 2020 - link
Plenty of people buy AMD.
Quantumz0d - Tuesday, May 26, 2020 - link
What? AMD is leagues ahead of Intel right now in MT performance, and soon, with the unified L3 on Zen 3000, Intel's gaming crown will probably end up going to AMD after a decade. And the Ryzen 1600AF, which is a Zen+ part, is still a good CPU for its value as it competes very well in gaming too, a cheap drop-in for any B450 or X470 board.
ARM is garbage-class silicon, it's all custom, and no x86 DIY computer is going to cater to that garbage silicon except Anandtech's Spec measuring contest where the A series is faster than a 9900K or Ryzen 3950X.
ah06 - Wednesday, May 27, 2020 - link
Why is ARM 'garbage'? The 8cx chips (so basically an 855), 2 generations behind X1/A78, are faster in real-world web page loading comparisons than x86 ones. They earned some video editing wins as well (though that workload is mostly dedicated-hardware dependent).
What area is ARM still far behind in?
Drake H. - Wednesday, May 27, 2020 - link
Here: https://www.phoronix.com/scan.php?page=article&...
Wilco1 - Thursday, May 28, 2020 - link
That clearly shows N1 keeping up easily with the fastest EPYC server and beating it in many benchmarks. And your point was?
Zoolook - Thursday, May 28, 2020 - link
I think his point is that on average the EPYC is 50% faster; I don't see how that is "keeping up easily".
Wilco1 - Thursday, May 28, 2020 - link
There were several broken benchmarks which affect the average disproportionately. But ignoring that, getting 66% of the fastest EPYC is keeping up, especially since Intel servers aren't getting anywhere near that. And doing that while using half the power is a huge win.
MarcGP - Tuesday, May 26, 2020 - link
I guess you missed the final chart where these shit cores reached Intel and AMD current desktop processors... at a small fraction of their power consumption.
Drake H. - Tuesday, May 26, 2020 - link
btw, in any real application, these ARM toys are massacred by x86-specific optimizations. :P
PixyMisa - Tuesday, May 26, 2020 - link
Not always. Phoronix recently did a deep dive comparing a dedicated 64-core Amazon Arm instance against a 64-core Epyc. Across 100+ benchmarks Epyc was 50% faster (geometric mean), but Arm won in several cases.
ah06 - Wednesday, May 27, 2020 - link
They were consistently faster at real-world web page loading than x86 chips in their 8cx implementation, which is 2 generations behind what they announced just now. Apple chips are even faster than the above. For 90% of users, these ARM toys would be faster, cheaper, and longer-lasting. This is easy to verify: take an iPad Pro and a Macbook Air/Pro and load your top 10 visited websites 10 times over and watch the iPad consistently beat the Mac (both browsers loading desktop mode)
ah06 - Wednesday, May 27, 2020 - link
Why are they 'shit' if they're this close in performance already? Assuming they're much cheaper, wouldn't they be better (performance and value) than everything except perhaps i7/i9 desktop chips?
Deicidium369 - Tuesday, May 26, 2020 - link
Not sockets, because there will never be a DIY market for it.
Wilco1 - Tuesday, May 26, 2020 - link
Various Arm servers use sockets. You won't ever see mobile phones or laptops with a socket; it simply isn't possible.
eastcoast_pete - Thursday, May 28, 2020 - link
I would like a socket-like solution in my mobile units; I have at least one tablet and two phones which I'd love to have an SoC-only upgrade path for. It would cut down on electronic garbage due to forced obsolescence.
toyotabedzrock - Tuesday, May 26, 2020 - link
No mention of 5G; does this mean a separate chip will still be needed until a new design comes with faster little cores? The fact they are even discussing usage of the server-targeted X1 in mobile tells me they know they are behind and are being pushed to fix it. Maybe the new gen next year will be A8x and A6x.
eastcoast_pete - Tuesday, May 26, 2020 - link
ARM is not in the modem/modem design business per se; those modems are typically the domain of Qualcomm, Huawei and other SoC designers, often with IP by specialty modem designers, and these designs integrate ARM CPU IP.
That being said, several aspects of 5G apparently need significant neural processing/deep learning-type capability, which however also wouldn't be directly provided by the CPU cores.
SarahKerrigan - Tuesday, May 26, 2020 - link
X1 isn't a server core. You're mixing it up with N1, which is a completely different core (based on A76.)
Companies are already doing on-chip 5G with ARMs today.
trivik12 - Tuesday, May 26, 2020 - link
Exynos 1000 with Cortex X1/78/55 should hopefully catch up to Snapdragon 875. Hopefully the Samsung Galaxy S21/S30 makes up for the disastrous S20 series. Fix the camera issues and price it lower.
ZolaIII - Tuesday, May 26, 2020 - link
Indeed, the X1 is by all metrics on the scale of current x86 cores.
pivejasey - Tuesday, May 26, 2020 - link
This piece of propaganda repeats the word "performance" a million times, but does not have a single independent benchmark. It is all empty claims.
Deicidium369 - Tuesday, May 26, 2020 - link
Are you new here? Pretty sure this is normal - hardly propaganda...
PixyMisa - Tuesday, May 26, 2020 - link
The chip isn't out yet. Nobody has one to benchmark.
Wilco1 - Thursday, May 28, 2020 - link
That's not relevant, since you can simulate the RTL. It's slow, but that's how CPU designers test and benchmark before having silicon.
voicequal - Tuesday, May 26, 2020 - link
This is a microarchitecture announcement - it will be several months before there are physical products that can be tested. Rather than benchmarks, the claims are backed by a set of microarchitecture improvements that are being disclosed here.
Alistair - Tuesday, May 26, 2020 - link
For years I've wanted ARM to go high end and bring us 3GHz+ chips that come close to Apple. Glad to hear it is coming! I'd really like them to target the DIY market; enthusiasts here would love to buy motherboards and build desktop computers around the parts. We could even install Windows on ARM. I really think they are overlooking enthusiast demand in the DIY market. Stop just focusing on laptops and get us the ecosystem.
Alistair - Tuesday, May 26, 2020 - link
nVidia GPU with ARM desktop etc.dotjaz - Tuesday, May 26, 2020 - link
"plus the vendor has a tendency not always use the latest CPU IPs anyhow"
They were not only using the latest, but also the first to use the A72, A73 and A76, with the A57/A75 being skipped by them altogether, so no, not true. They do have a tendency to skip certain generations.
vladx - Tuesday, May 26, 2020 - link
Yep, Kirin 1020 is unaffected by the US ban and will use Cortex-A78, which means Huawei will again be the first one using it.
iphonebestgamephone - Wednesday, May 27, 2020 - link
Skipping means they aren't always using the latest Arm cores. They also skipped the A77. So yes, it's true.
darkich - Wednesday, May 27, 2020 - link
This is all quite confusing and perplexing... The high-performance, Apple-rivaling ARM core ALREADY exists. Qualcomm did this a year ago with their 8cx and SQ1 chips (Surface Pro X and Samsung Galaxy Book S tablets).
Andrei Frumusanu - Wednesday, May 27, 2020 - link
Those aren't any faster per-core than the mobile chips.
darkich - Wednesday, May 27, 2020 - link
But they are as fast as the Apple iPad Pro chips; in fact they may be even faster, at a similar power envelope. So my point is... what's with the premise of the supposed Apple supremacy?
iphonebestgamephone - Wednesday, May 27, 2020 - link
The 8cx scores the same as the 855 in Geekbench, which is a lot slower than the iPad Pro.
Andrei Frumusanu - Wednesday, May 27, 2020 - link
I don't know what you're talking about, it's way slower than the A12X/Z.
darkich - Thursday, May 28, 2020 - link
I'm talking about RAW performance... don't tell me you base your claims on the Windows benchmarks? Pretty much every number posted for the SQ1 is emulation-based.
But the fact that it packs 2 TFLOPS worth of GPU compute and 70GB/s worth of memory bandwidth in a 15W envelope gives a strong hint of what its true power is.
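Worth noting that a headline TFLOPS figure is just ALU count × ops per cycle × clock, and it doubles if quoted at half precision, which is why such numbers need qualification. A purely hypothetical sketch (the ALU count and clock below are invented round numbers, not actual 8cx/SQ1 specs):

```python
def peak_tflops(alus, clock_ghz, flops_per_alu_per_cycle=2):
    """Peak throughput: ALUs x 2 flops per FMA x clock. A marketing ceiling,
    not a measured result."""
    return alus * flops_per_alu_per_cycle * clock_ghz / 1000.0

# Hypothetical GPU: 1024 ALUs at ~1 GHz.
fp32 = peak_tflops(1024, 1.0)                             # ~2 TFLOPS at FP32
fp16 = peak_tflops(1024, 1.0, flops_per_alu_per_cycle=4)  # double-rate FP16

print(f"FP32: {fp32:.3f} TFLOPS, FP16: {fp16:.3f} TFLOPS")
```

The same hardware can thus be quoted at either number, which is exactly the ambiguity raised further down the thread.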
jospoortvliet - Thursday, May 28, 2020 - link
Just look at the table in this article - the Apple chip from last year (A13) is estimated to be similar to the upcoming X1 (meaning it is 'just' 1 generation behind, as it will compete with the A14; an improvement over the previous Cortex gens that were much further behind), which is a HUGE step forward for ARM - why on earth would you think their earlier designs, which were much further behind, are magically ahead of Apple at comparable power?
ciderrules - Thursday, May 28, 2020 - link
What are you rambling on about? The SQ1 is about 25% faster than the A12X/Z for GPU, but will be trounced on the CPU side, since the SQ1 runs older A76/A55 ARM cores. Even a regular A12 would easily keep up, and the A12X/Z doubles the number of big cores. And this is from a processor that's almost 2 years old.
SQ1 is nothing special.
Andrei Frumusanu - Thursday, May 28, 2020 - link
Again, I don't know what you're on about. The A12X has more GPU power and just as much bandwidth.
ciderrules - Thursday, May 28, 2020 - link
I thought the A12X/Z had around 1.4-1.6 TFLOPS for GPU, and the SQ1 claims 2.0 TFLOPS? That would make the SQ1 slightly faster at GPU.
jeremyshaw - Thursday, May 28, 2020 - link
What kind of TFLOPS? 32-bit? 16-bit? Ever hear the story of Vega the Wide? It had all of the TFLOPS, but just couldn't defeat the lower-specced competition.
ciderrules - Friday, May 29, 2020 - link
I’d like to know this as well. I’m giving the SQ1 the benefit of the doubt on this aspect, but as I stated above the A12X/Z is substantially more powerful on the CPU side. And it’s nearing 2 years old. Imagine what the A14X (if it releases this year) will do?
iphonebestgamephone - Friday, May 29, 2020 - link
Those numbers don't always relate to performance.
iphonebestgamephone - Friday, May 29, 2020 - link
No, I'm basing it on the Geekbench ARM-for-Windows benchmark, not the emulated one. Qualcomm's TFLOPS numbers don't show up in benchmarks either; the 855 had around 0.9 TFLOPS according to Qualcomm. The iPad Pro scores 3x and more of the 855 in GFXBench. Even using Metal won't give that much of an advantage. Pretty sure an A12X is nowhere near a 15W TDP.
IUU - Thursday, June 11, 2020 - link
I am replying not only to you, but to everyone who thinks that since Apple or ARM can make such good CPUs or GPUs at 15 watts or x watts, ergo they are better than AMD or Intel or Nvidia, who make chips at the 100-watt level. I am afraid this is not the case, and the short answer is you can't beat physics. I will continue in a new reply. Just to let you know, I am equally excited at what ARM or Apple achieved in the mobile space.
IUU - Thursday, June 11, 2020 - link
Just because you are able to make a GPU run at 2 teraflops at 4 watts, this does not mean you can scale linearly to 300 watts. By that thinking, Nvidia and AMD should be making 300+ teraflop GPUs, but they are incompetent, which is why they can't. At 7nm, which Nvidia has only recently begun to implement, 20+ teraflop GPUs are possible; theoretically, combined with a multicore CPU, they make up a high-end "power hungry" desktop or server.

A top-of-the-line phone costs between 500 and 2000 dollars. A not-so-top-of-the-line desktop costs hardly 5000 dollars, consumes about 800 to 1000 watts, and is about 10 times or more computationally capable compared to its mobile flagship competitor. On top of that, most flagship desktop GPUs are one or more process nodes behind, and despite this they maintain the above-mentioned lead. So, there is no comparison, computationally speaking. At every price point, desktop implementations beat their mobile counterparts hands down, not because they are somehow superior, but because of physics. If ARM or Apple or whoever ever decide to scale to a bigger power envelope, I bet you they are not going to be significantly better power/performance-wise, because... physics. Everyone who tries to promote either the mobile or the desktop sector as superior does it because they are on an agenda.
If you want the best performance possible at the best price point, go desktop.
If you want enough performance in a power-limited scenario, go mobile.
But you will pay a premium for this. I don't disagree with paying a premium for this,
but I want to make clear I know what I get, and I know why I pay the price I pay.
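The "you can't beat physics" argument above can be made concrete with the textbook dynamic-power relation P ≈ C·f·V²: along a DVFS curve voltage rises with frequency, so power grows superlinearly with clock. A minimal sketch with invented coefficients (none of these numbers correspond to any real chip):

```python
def dynamic_power(freq_ghz, cap=1.0, v0=0.7, k=0.3):
    """Dynamic power ~ C * f * V^2, with voltage rising linearly with
    frequency along the DVFS curve. All coefficients are made up for
    illustration, not measured silicon values."""
    voltage = v0 + k * (freq_ghz - 1.0)
    return cap * freq_ghz * voltage ** 2

p1 = dynamic_power(1.0)   # power at 1 GHz
p2 = dynamic_power(2.0)   # power at 2 GHz
print(round(p2 / p1, 2))  # doubling the clock costs ~4x the power here
```

In this toy model, doubling the clock costs about 4x the power, which is the basic reason a 4 W mobile design cannot simply be scaled linearly into a 300 W envelope.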
iphonebestgamephone - Monday, June 22, 2020 - link
Sure sir, anything new?
Wilco1 - Wednesday, May 27, 2020 - link
Those are based on Cortex-A76. The X1 is > 50% faster, so will make even better laptops and tablets.
tkSteveFOX - Wednesday, May 27, 2020 - link
Would be great if we get 1x X1 + 3x A78 and 4x A55 with 4MB L3 shared between the big cores. Or just 2x X1 and 6x A55 cores with 8MB L3 cache for the X1 cores (would be interesting to see the efficiency here compared to the above).
5nm gives a lot of headroom, and even using 1x 3GHz A77 and 3x 2.7GHz A77 is possible under this node.
ReverendDC - Wednesday, May 27, 2020 - link
I'm excited to see what comes of this for Windows on ARM. I know there are some that will find it pointless, but there are millions of office workers and IT pros that support them that would find an all-day, cheaply replaceable, Office-chewing, LTE/5G always-connected device to be quite useful... For years Intel has tried to make an all-day system, and finally straight gave up! Yes, Windows is "heavier" on system calls, but then again, Linux can be as well. ARM seems to have shoehorned in nicely after 4+ years of trial and error (and Law and Order, but...) with Android. While I wouldn't buy a Surface Pro X, it does do 80% of what to expect from a full-day Win10 x86 system. That's progress. Let's see if this makes more!
serendip - Wednesday, May 27, 2020 - link
The X1 belongs in a flagship ARM Windows device like the next Surface Pro X. The current model has a Qualcomm SQ1 and it already performs at 8th-gen Core i5 levels, with half the power consumption when running ARM code. An X1-based SoC could offer top-tier i7 performance at half the power and hopefully a lower price. Competition is good to keep Intel honest.
ballsystemlord - Thursday, May 28, 2020 - link
@Andrei You have a technical error: "...all while reducing power by 4% and reducing area by 4%"
In the picture, the area reduction is 5, not 4 percent. It should read:
"...all while reducing power by 4% and reducing area by 5%"
anonomouse - Saturday, May 30, 2020 - link
So with two tiers of big cores now, and presumably a new small core and supposedly a new middle-ish core to span the ever-increasing gap between big and little... does this mean that in a couple of years Android phones will have to deal with scheduling across 4 different types of cores? bigger.big.middle.little?
fozia - Saturday, June 6, 2020 - link
I agree. But it's not an achievement to be slower than a 1-year-old chip. This creates the problem that you cannot hyper-focus on any one area of the PPA triangle without making compromises in the other two.
Peak performance is not performance. "Peak" is really just a value you're guaranteed to never exceed...
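vladpetric's point ties back to the TFLOPS debate above: a "paper" TFLOPS figure is just ALU lanes × clock × 2 (counting a fused multiply-add as two FLOPs), an upper bound that says nothing about sustained throughput or precision. A hedged sketch (lane counts, clocks, and the sustained fraction are all illustrative, not real A12X/SQ1 numbers):

```python
def peak_tflops(fp32_lanes, clock_ghz):
    """'Paper' peak: every lane retires one FMA (2 FLOPs) per cycle.
    Lane and clock numbers below are illustrative, not real GPU specs."""
    return fp32_lanes * clock_ghz * 2 / 1000.0

peak = peak_tflops(1024, 1.0)   # ~2.05 TFLOPS FP32 on paper
fp16_peak = 2 * peak            # many mobile GPUs quote double-rate FP16
achieved = 0.45 * peak          # real kernels sustain only a fraction
print(round(peak, 2), round(fp16_peak, 2), round(achieved, 2))
```

The FP16 line is why "what kind of TFLOPS?" matters: quoting half-precision doubles the headline number without changing the hardware at all.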
mi1400 - Tuesday, October 6, 2020 - link
https://images.anandtech.com/doci/15813/A78-X1-cro...
Why do the yellow and orange starting points/dots have drift in them? The SPEC performance axis doesn't mandate that one start ahead of the other. And if this offset is removed, conjoining both starting points, the difference in performance will be so small that both lines will seem overlapping... In fact, the curves between the 2nd and 3rd dots of A77/A78 will make the A78 even slower. The curves between the 3rd and 4th dots of A77/A78 will give the A78 some benefit, but again the curve between the 4th and 5th dots will make A77 = A78.
What do you say? Thanks!
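One way to reason about offsets between DVFS curves like the ones in that chart: a higher-IPC core reaches a given performance level at a lower frequency and voltage, so under the usual P ≈ C·f·V² dynamic-power model it can sit below the narrower core's curve at the same performance, even with a larger switched capacitance. A toy comparison (every number here is invented, not taken from the article's data):

```python
def power(freq_ghz, cap, v0=0.6, k=0.25):
    # Dynamic power ~ C * f * V^2; voltage rises with frequency (toy model).
    v = v0 + k * freq_ghz
    return cap * freq_ghz * v ** 2

TARGET_PERF = 6.0                  # arbitrary units: IPC * GHz
narrow_ipc, narrow_cap = 2.0, 1.0
wide_ipc, wide_cap = 3.0, 1.5      # +50% IPC bought with +50% capacitance

f_narrow = TARGET_PERF / narrow_ipc   # needs 3.0 GHz
f_wide = TARGET_PERF / wide_ipc       # needs only 2.0 GHz
print(power(f_narrow, narrow_cap), power(f_wide, wide_cap))
```

In this sketch the wider core pays 50% more capacitance but still burns roughly a third less power at iso-performance, because it only needs 2.0 GHz instead of 3.0 GHz to hit the target.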
ChrisGX - Monday, October 12, 2020 - link
A lot of people are saying that with Cortex-X1, ARM is bringing the fight to Apple's powerhouse CPUs, i.e. the potent custom ARM processors that Apple develops for consumer computing products.

Actually, that isn't exactly what is happening. I had a close look at the performance data (using ARM's own projections) and it looks like it will take until the Makalu generation before a successor to the X1 (very nearly) catches up to the A14 on outright (integer) performance. For some time, Apple has had a 2.5-year lead in the performance stakes over ARM, and no change is on the cards in that regard. Cortex-X1, contrary to ARM's public remarks, continues the existing strategy of winning on energy efficiency, not seeking performance gains at any cost. As a matter of fact, the energy efficiency of the X1 isn't too bad as a starting point. And when modestly clocked A78 cores are also in the mix, energy efficiency improves greatly. With the next generation of SoCs based on licensed A78 and X1 ARM cores, manufacturers will have the opportunity to either sharply reduce power consumption or add new and advanced processing capabilities without raising power budgets. And that can be achieved while offering a good (single-threaded) performance boost of 33% (or more) over existing A77-based processors.
When it comes to outright execution speed, it seems that ARM is pushing harder on floating-point performance than other areas. In that area, ARM could conceivably reach performance parity with Apple's SoCs sooner rather than later.
Salman Ahmed - Tuesday, April 6, 2021 - link
Can Cortex-A75 and Cortex-A76 be paired together?
Salman Ahmed - Tuesday, April 6, 2021 - link
Or Cortex A76 with Cortex A77?