Thanks Andrei! While not the focus, or even a focus, of your article (there was no need), I am struck by how stuck ARM big.LITTLE-type designs are when it comes to the efficiency cores. The A55, still? Make no mistake, you/we want capable and efficient small cores in the SoC, so that the big cores don't feast on the battery all the time. 5G modems tend to increase power use (this integrated one hopefully a bit less), so the ability to stay on the low-power CPU cores longer is even more important. Andrei, if you can, I'd appreciate a deeper dive into the state of the art of the small cores, especially a comparison of Apple's designs vs. the stock ARM A55. Thanks!
I imagine that there's only so far you can take an in-order ARMv8 core design when you want to optimise for power consumption. ARM do release updates to their core designs, whilst retaining the same name as well, so they do get small improvements. There are no ISA changes for these consumer cores either, as far as I am aware.
But it does look like the A55 has hardly changed in several years, simply becoming more and more efficient as the process has shrunk down to 5nm while the clocks have remained the same. I think people were expecting an A58 by now; maybe there will be an A59 to go with the A79 next year?
This is what he said about the icestorm cores in the A14: "The performance showcased here roughly matches a 2.2GHz Cortex-A76 which is essentially 4x faster than the performance of any other mobile SoC today which relies on Cortex-A55 cores, all while using roughly the same amount of system power and having 3x the power efficiency."
Not really. Apple's chips are always roughly the same size as QC and Kirin top chips and on the same nodes. Just better architecture and more R&D investment. QC have been doing the same chip for 3 gens now, just updating the ARM tech and gains in AI and ISP, but the CPU config remains the same, it's even clocked the same!
I don’t have all the numbers on me but Apple has never had an on die modem while Qualcomm usually does, so you can’t directly compare die sizes like that. Andrei would perhaps know the measurements of the actual core sizes and how they compare from Apple to Qualcomm.
Nope. The Apple A13 is 98.48 mm² while the Snapdragon 865 is 83.54 mm², according to TechInsights. Both are manufactured on TSMC N7P and feature no integrated modem. So yes, a wider (and thus bigger) core design does improve performance, though not always.
Apple bought P.A. Semi in 2008 to make their own ARM Chips, and evidently, P.A. Semi is a better semiconductor company than Samsung, Qualcomm, and anyone else in the ARM marketplace.
P.A. Semi no longer exists, and a lot has happened since P.A. Semi was bought, like Apple acquiring Intrinsity, which was involved in the creation of the A4 chip.
I remember the headline a long time ago: "Apple buys Israeli-based CPU developer to make their own chips." I was like, "haha, yeah, good luck with that." I. WAS. WRONG.
The edit functionality only exists for the first 15 seconds after posting. This is to prevent people from going back and editing their comments well after the fact to appear less wrong.
I am no expert, but I believe it happened because of very different visions/philosophies and objectives. ARM goes for smaller, less complex cores than Apple, believing they will consume less power and save die space so more cores can fit on the same die, in the hope of higher multithreaded performance. This would also probably be cheaper for other companies to implement.

Apple, on the other hand, bet on bigger cores, maybe already envisioning that the development could in the end be more useful for computers, or at least the iPad. Apple believed that a faster core could consume less by finishing complex tasks sooner. Costs didn't seem to be a big concern for Apple, nor was increasing the core count like crazy (remember when there were SoCs with 10 or more cores in phones?), nor was Apple constrained by what others might need.

I imagine that with these objectives Apple had to solve a lot of problems to optimize power consumption. Having to go through these challenges much sooner than ARM probably helped Apple develop more efficient designs. It seems that Apple is simply far more aggressive in developing its chips, and knows what it needs for its hardware and software.
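The "faster core can consume less by finishing sooner" idea is often called "race to idle"; it can be sketched with toy numbers (all figures below are invented for illustration, not measurements of any real core):

```python
# Illustrative "race to idle" arithmetic. All numbers are made up for the
# sake of the example, not measurements of any actual ARM or Apple core.

def task_energy(power_active_w, active_s, power_idle_w, window_s):
    """Energy in joules to run a task, then idle for the rest of a window."""
    return power_active_w * active_s + power_idle_w * (window_s - active_s)

WINDOW = 10.0   # a 10-second interval in which the task must finish
IDLE = 0.05     # assume both designs idle at ~50 mW

# Small, slow core: draws 0.5 W and needs 8 s for the task.
small = task_energy(0.5, 8.0, IDLE, WINDOW)   # 4.0 J active + 0.1 J idle

# Big, fast core: draws 1.5 W but finishes in 2 s and sleeps the rest.
big = task_energy(1.5, 2.0, IDLE, WINDOW)     # 3.0 J active + 0.4 J idle

print(f"small core: {small:.1f} J, big core: {big:.1f} J")
```

With these (assumed) numbers the hungrier core actually spends less total energy over the window, which is the bet the comment describes; whether it holds in practice depends entirely on how low the idle power really is.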
From the outside, the problem appears to be that ARM is too deferential to its customers (one of the things that nV could fix if they get control...) In particular (compare with Apple) ARM appears unwilling to just do something new and aggressive and hope that it will be picked up when it's ready.
This might seem reasonable, but the problem is that your main customers are companies like Samsung and Qualcomm, the gang that couldn't shoot straight. Both (still!) seem utterly unaware that Moore's law is still a thing, and that the needs of devices evolve with time. So they both insist their priority is the smallest, cheapest cores possible -- until there's a mad scramble to match whatever Apple is doing.
This foolishness has been most obvious with the A55. Sure, maybe in some technical sense the A55 is good enough wrt performance and energy and did not need to be updated for those reasons. BUT refusing to update it locks the ISA at v8.2. (Does anyone know how this is handled, given that in theory the A78 supports some v8.3 instructions? Is the rule just that you don't use those instructions on a DynamIQ A78+A55 system?)
So QC and SS are stuck. Because n years ago they were too stupid to see the big picture (that future small cores would need to track the evolving ARM ISA), they've held back what their large cores can do. A more dynamic ARM should probably have just ignored whatever they said and switched to a model like Apple's, updating the large and small cores in lockstep every year.
The magic is simple: Icestorm is quite likely an out-of-order design, which picks up the performance quite a bit. As for the extra power that comes along with out-of-order, they probably got it down with enough optimisation and clever design (physical register files, micro-op cache, etc.). Reminds me a bit of the comparison between Atom and Bobcat/Jaguar. For my part, I feel that in-order designs aren't really worth the supposed power savings (except in some cases), so I don't know why these companies waste their time.
Does anyone here know for sure whether Apple's efficiency cores are indeed out-of-order designs? That might actually help explain the better performance and performance/Wh of their Ice Storm cores; and raise the question: why hasn't ARM updated its little core design? I guess one answer is because they didn't have to (yet).
Consider that the A77 is more efficient and more powerful at the same frequency than the really old A55. Increase the small core a bit in size, with a big update and extra budget, and you get the Apple "lpA76"-style cores.
Because Apple has been marketing 5nm, and they can't sell "the best [Android] SoC on the market" on 7nm, especially when their Android competitors are also moving to 5nm. The efficiency difference might be small, but the marketing impact is too big to ignore. Plus, most phones running this chip in 2021 will probably have 4000mAh+ batteries anyway, so it wouldn't make any real difference in consumer products.
Andrei/Ian - Any thoughts on the suitability of a 1+3+4 core CPU configuration versus a 2+2+4 config? I recall you guys did a deep dive on the state of threading in Android, and it seemed that you had one or two main threads and the rest were low-performance.
Do you think this is still the case or do you think SoC vendors should be looking at increasing the big core count, given more multitasking on phones.
So the upcoming codec wars (again): AV1, VVC (H.266), and EVC (MPEG-5).
However, most of the "big" hardware & software companies are backing AV1:
NVIDIA, AMD, Intel, Apple, Arm, Facebook, Cisco, Google, Microsoft, Netflix, Samsung (many fingers in many pies), Adobe, Hulu, BBC, Alibaba, Broadcom, Realtek, Vimeo, Xilinx
Qualcomm already looking like chumps here. They literally use Arm's CPU cores on Google's operating system: Qualcomm's only serious work today is non-CPU IP: cameras, AI, etc. They're a little Texas Instruments?
CPUs, even high-performance ones, are now much more commoditized; Qualcomm still does a lot of substantial work on the cache hierarchies and prefetchers, which matter just as much as the core uArch. I'm always curious about CPU uArch and performance, but for a phone the other units matter more for the experience, despite being less testable and less prominent in most tech headlines.
I'm more interested in accelerated encode at this point. We've not had industry-wide buy-in on a new lossy codec since JPEG, and HEVC hasn't quite achieved the ubiquity that H.264 managed after the same time on the market.
While hardware AV1 encode would be quite nice to see, there's a possibility it will lose much of software AV1's gains over software HEVC (that is, one might encode quickly but end up with less compression than x265). Also, leaving aside the Slough of Patents for a moment, VVC will have to be taken into account once x266 comes out. If the studies are right, the reference VVC encoder (not x266) already shows better compression and speed than AV1. Hopefully, it won't inherit HEVC's less than pleasing picture too (to my eyes at least).
That's a great point. In my haste to mention the lack of encoding ability, I'd forgotten about the actual implementation of such a complicated codec. Which of the 30 or so tools, and which combinations of them, provide the most bit savings per mm²? IIRC, VVC owes a lot of its gains to integration with ML (there's at least one commercial AV1 implementation that does this as well, supposedly to great effect). IOW, I'm uncertain how much easier VVC will be to implement in hardware. OTOH, EVC looks quite interesting.
Oh, yes, it will probably make their heads spin implementing this thing in hardware, and when they do, which they will, they're going to make it a marketing point (even if, in practice, it falls behind x265).
Yesterday I was experimenting with libaom-av1 in FFmpeg and discovered a useful parameter: -cpu-used. It controls the compression/encoding-speed trade-off and takes values between 0 and 8: 0 is the slowest, 1 the default, and 8 the fastest. To my surprise, 8 brought encoding speed up to reasonable levels: about 10x slower than x265, if I remember right, which isn't half bad. I was using a video shrunk to 360p, though.
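For reference, an invocation along those lines might look like this (filenames are hypothetical; `-crf` with `-b:v 0` selects libaom's constant-quality mode):

```shell
# Encode to AV1 with libaom via FFmpeg.
# -cpu-used trades compression for speed (0 = slowest/best, 8 = fastest).
ffmpeg -i input.mp4 -c:v libaom-av1 -crf 30 -b:v 0 -cpu-used 8 output.mkv
```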
As for VVC, can't wait to give it a go. Hopefully, it'll deliver and be of AVC's calibre. I wasn't familiar with EVC but took a look at it now, and it does appear to be quite an interesting concept.
You might be interested in the doom9 forums (https://forum.doom9.org/forumdisplay.php?f=17). In the AV1 thread you'll often see people posting updates about the various AV1 en/decode implementations and new settings, and, in general, some interesting thoughts from folks in the industry. BTW, starting from this post (https://forum.doom9.org/showthread.php?p=1929560#p... there's an interesting discussion regarding Qcom and their interest in not pushing AV1. Regarding fast encoders, I'm assuming you've tried SVT-AV1? That's supposed to have nearly caught up with aom's encoder quality while still being a good deal faster. Lastly, thanks for the paper. It looks interesting, and a quick skim didn't reveal any mention of ML-enhanced transforms, or even a new entropy coder(!); they seem to be continuing to iterate on H.264->H.265. However, I've only started reading it and realized I'm not getting through that tonight :)
Thanks for those doom9 threads. Looks like a treasure trove of information on AV1 there. As for SVT-AV1, yes, I have tried it. While the speed was good, the picture didn't seem that impressive. Anyhow, I'll have a crack at it again and see how it stacks up against libaom, now that I've got the latter running faster.
You're right. I remember getting the impression that this was similar to how HEVC improved over H.264. Mostly, extending techniques already laid down. Yet another reason to tip one's hat to the MP3 of video.
Focus on camera and AI, with tiny ~10 percent CPU performance improvements, and the exact same four little cores for the fourth year. I admit I'm a little disappointed. Last time I paid attention it was all about power and the Hercules (A78) cores.
Hopefully we see good $700 flagships from Samsung this year instead of the way overpriced S20 series, maybe the integrated modem will help.
Any thoughts on specific use cases for the hypervisor feature they enabled or any comments from them? It would be nice to just run Linux or Windows apps from my phone attached to a monitor and kb.
To me that is the most exciting feature of the 888 and one where I'm not sure that Google's Android will pick up on.
Today mobile phones (and increasingly desktops) are under the control of the ecosystem vendor, all trust and cryptography tied to Apple or Google (or Microsoft). Of course device vendors also want a piece (DeX) but really it's the owner who should be in charge.
Going forward the number of stakeholders can only increase, there will be governments with vested interests and specific compliance concerns, corporate employers etc.
So the ability to run a flexible number of enclaves that can be guaranteed not to step or spy on each other will eventually become critical, but also allow to break the stranglehold that Apple and Google currently have on the device you own, but don't control.
We already have enclaves inside SIM cards and baseband controllers, but they are completely physical, secure that way, but not flexible and affordable to multiply.
So while I would love to have more details, know if this is like SEV/MKTME on x86 or even better, I don't see how Apple, Google or Microsoft (Pluton!) or even the NSA for that matter, can be motivated to hand the supreme power to you and me, while they can now play in a walled garden we oversee, even if we can't sniff inside.
In my book I should be able to block conversations between enclaves and their cloud controls, while Apple is pushing the envelope in the opposite direction, hiding their device/command & control-center conversations from owners.
IMHO that needs to be made painfully illegal before all the others jump on that bandwagon.
This actually looks like a horrible SoC, which QC didn't bother to improve much. Regarding Samsung's 7nm node, things are actually the other way around. The first gen was based on an HD lib and used EUV, which was both better and more efficient than TSMC's 7nm without EUV, but it cost more. The second gen is 7nm with a UHD lib, while TSMC 5nm is a fully new node with around 2.2x higher density (versus Samsung 7nm UHD) but sadly not significantly more power efficient, while its cost went up more than double. Samsung's cost, on the other hand, went down as density went up by around 50%, and thanks to maturity and good yields it's estimated at twice as many gates for the same price.

I really wanted to see a new generation of Adreno (for 4 SoC generations now); instead, this is a minor rework (a couple of new functions) in the same cluster configuration as its predecessor. Now imagine a 2x increase in logic on the TSMC 5nm process at 30% lower speed. I do think the quoted 35% increase is under ideal conditions (utilising new features we won't see used for the next couple of years), while in reality we will see just a small, insignificant increase.

What's the use of a single high-performance CPU core which outruns the rest by 50% IPC? Sure, the first core in any cluster bears the burden of the thread where everything is started before workers are deployed, but it only needs a couple percent more capacity to balance that. It seems they did nothing to improve cache coherency and efficiency. The DSP has seen a real improvement, but I don't regard QC's Hexagon as something good (stiff, proprietary, hard to get at, and not flexible), to the point of thinking such things shouldn't exist. The biggest gain will be the integration of the 5G modem, which will cut its power consumption in half.
The thing is, you will only get around 20-25% improvement on a single (X1) core, while under full all-core CPU utilisation that will sink to only a couple percent (thanks to bus, topology, and memory-coherence bottlenecks). Early Geekbench results already confirm this. It will be the same regarding the GPU: it will get its 30% advantage, but only when new features are used (which they won't be for a long time).
In addition to integrating the modem, they also appear to be integrating Wi-Fi (FastConnect 6900) and Bluetooth on-die now. Prior-year diagrams seem to have illustrated Wi-Fi and Bluetooth off-chip.
"I really wanted to see a new gen of Adreno's (for 4 SoC generations now) instead of that this is minor rework (couple new functions) in a same cluster configuration as it's predecessor."
Are you really sure about it being a minor rework? Variable rate shading may not be possible, or may not give as much of a performance boost, without some changes to the hardware. The rest is just marketing, much like the moniker 888 is.
Well, based on the QC claims (20% more power efficient = same number of GPU clusters and same layout, with the power saving coming from process improvement), they made a new pipeline to include additional functions, but the fundamental blocks remain unchanged (actually the ALUs haven't changed since the ATI days). This is just based on my assumptions and logic, upon which I don't think it's enough to be called a new generation. I don't have a problem with the numerology and the additional make-believe tied to the name change; at least that's not snake oil like the rest.
Bearing in mind the rest of the story (trade wars), the QC naming thing is actually pathetic.
Whoever makes a real flagship SoC (never mind the die size), even one based on reference IP on TSMC 5nm, has the opportunity to beat this abomination without too much hassle.
Transistor count and die size haven't been revealed yet; I have my doubts many other companies could integrate all those subsystems with the PPA that Qualcomm achieves. Even Apple doesn't get there as they don't integrate a modem which is far far trickier than you might expect.
Well, you are right about that (modem, RF, et cetera), and things won't get better any time soon; call it democracy. The rest is available as licensable IP. If you raise the bar to actual manufacturers (under their own management), the list shrinks to none, as ironically Samsung is by far the most adequate. I guess things will get boring until GAA.
Apple hasn't been allowed to integrate a modem. It's likely that a major reason they bought Intel's modem business is so that they can have their own and do exactly that. But Apple seems to have no problems with efficiency, even with an external modem. I suspect it's the Android OS's known efficiency problems, among others (such as the requirement for double the RAM), that cause these issues, which is why those phones require batteries that are so much larger.
It's a fallacy that an external modem is any less power efficient than an on-SoC one; in fact, the fab process can be further optimized for a totally separate modem die, which really does have different requirements than CPUs and GPUs. The reason Qualcomm, Samsung, Mediatek, and Huawei integrate is to reduce cost and complexity. Apple simply doesn't have the necessary IP, and the purchase of Intel's money-losing unit was primarily for IP and some talent rather than design or implementation; it still won't get them a competitive in-house modem for several years to come.
In fact, it isn't one piece to start with: there's the RF analog/mixed-signal part and the processing part, and only the latter can readily be integrated. The RF part hasn't progressed in its manufacturing processes for a very, very long time, and in the best case is built on SOI. In the world of mobile SoCs, high-density libs are already commonly used for everything else (excluding analogue, MOSFETs, et cetera, of course).
Qualcomm updated their processor the right way. They improved all of the main day to day functions that ordinary people use frequently. Tech savvy people might be disappointed about the raw compute power but the SD865 was already great.
If they wanted to improve the day-to-day, they would have made it much more power efficient and replaced those ageing A55 cores that are used for most simple daily tasks.
I don’t know why people only see great or horrible with nothing in between. This is a good competitive chip - only people desperately aching for someone to overtake Apple can be seriously disappointed.
The single X1 is a solution to having good singlethread speed and the UI responsiveness that comes with that while giving good multicore combined with efficiency. Is it the best solution? I don’t know, but it’s not a bad one.
The clock speed is interesting, however. I have to think there are process limitations there. ARM targeted 3 GHz, which would have given it some floating-point wins over the A13 based on Andrei's estimates. The 5% shortfall will put them pretty squarely behind the A13; I'm sure they wouldn't have accepted that unless they had no choice.
I'd say they design their SoC to suit their usage. And that is a mixed bag full of compromises, matching what people are actually doing on these devices. Very little of that is HPC.
The single X1 is for all that fat single-threaded desktop-class browser code out there, that only gets tolerable response times on a 4GHz Pentium 4, but hopefully won't run longer than a couple hundred milliseconds, because an X1 core simply can't run 24x7 on a mobile power budget.
Mobile games better run on the efficiency cores mainly (apart from the GPU), with perhaps short bursts on the power cores, because otherwise not even an hour worth of game time may be possible on a single charge (or without burning your fingers).
In short, don't expect all of these resources used at full capacity for any extended time. Instead these SoCs become a computing appliance farm with specialists for many different tasks, designed to do very little to nothing most of the time and as aesthetically pleasing inside as any SME server room that evolved with the business for 20 years.
To ask for a revolutionary design on a new process from a different fab is perhaps asking just a little bit too much, especially when they need to sell another generation next year.
For such a bad job, the 888 may still be enough of an upgrade over my current 855 to consider, once they sell these devices at reasonable prices (~€500) and with LineageOS support, late 2021 or early 2022 with the 895's imminent arrival.
Honestly, I've stopped asking for more smartphone computing power since the 820, been perfectly happy with energy efficiency since the 835, and been waiting for a proper desktop mode since the first DeX on an 800.
It's hard to sell more when the need doesn't really grow, or when you can have 500 watts of desktop power any time you sit down for something serious.
As for the choice of Samsung's 5 nm LPE for the manufacturing, I suspect it's not just TSMC's capacity that made QC go Sammy. My guess is that Samsung fabbed it for less - that simple. Sort of why NVIDIA chose Samsung's 8 nm for Ampere; they did it for less.
With China flexing its muscle over Taiwan, Korea may be more attractive in other ways, too.
Then I wonder if the 5nm node at Samsung may actually be faster from start to finish, with EUV replacing all those multi-masking and multi-patterning steps...
Thanks for the detailed article. Did Qualcomm go with Samsung because they will be taking process leadership in the near future with the first GAAFET implementation in 2023? It would be good to see where Samsung 5nm is relative to TSMC.
Looking at improvements from process and architecture, I feel even ARM is close to hitting the wall.
I think ARM wants to keep everyone within reach of Apple's performance dominance. The A55 is vastly inferior to the last three generations of small cores Apple has used. The X1 is decent, but seeing just a single one in there isn't great. Not using 8MB of cache is purely driven by greed: it would make the chips cost more, and that's why QC isn't doing it, even though there are substantial gains from more cache on a CPU in heavy workloads. All these companies want to make money, so they cut the costs of their chips, and that's why Android will always be behind iOS. Apple gives you the most bleeding-edge stuff without sacrificing on the chip.
That is because QC has to sell these chips to OEMs that have to be able to afford them. Apple doesn't have to sell to anyone else. They have the high margins to justify investing in performance beyond the scope of Xiaomi, LG, or Samsung.
ARM doesn’t care about that. They sell designs that are good enough to make them enough money to make a good profit and allow for further work. It’s up to the OEMs to make the changes allowed through the design license to make some improvements. Failing that, companies can get an architectural license as Apple and a few others do, which lets them design cores and subsystems from scratch.
Both Qualcomm and Samsung tried that for a few years, but failed to come up with good designs. So they went back to licensing designs from ARM.
The SD820 was not a failure. It was a full custom design that was no disaster; it was superb, and even had higher IPC than the 835. That's where all this Kryo branding started. Then Qualcomm moved all of their engineering/architecture teams to Centriq, the famously powerful ARM server processor, and then axed it even after putting so much R&D into it, with the Cloudflare marketing. Since then Qualcomm has never made any custom cores. Only Samsung did, with ambitious aims, but failed to optimize them for the smartphone.
And in the end it doesn't even matter, because phones are going to be near parity with the A14. Just looking at gaming performance and application performance tests, which are real-world, shows the 865 is not even that far behind the A14 in some aspects. And Qualcomm is putting money where it matters: GPU and 5G.
The 820 was absolutely a disaster. Its errata list was too long for Windows kernel support, likely at the ISA implementation level, and likely deeply rooted enough to justify dropping the entire endeavor.
How does the Windows kernel come into the picture? This was about Android performance and 64-bit compatibility, after Apple's first move and the 810's ultimate disaster, which even killed HTC entirely. The 820 was very fast and still holds up, just like the 805, though the latter was 32-bit; one can compare it with Apple's A9.
The entire endeavor was dropped because there was no need. Why do you think Qualcomm develops a lot of the radio and so on, with tons of R&D? Patents. That's what Qualcomm is all about, and they tried that with Centriq. But ARM in the DC market is a dead end; after so many years of articles here on AT and STH, so far no one is there on that side, and the only option showing some measure of performance, and only for small loads, is Graviton2. These companies only push when there's a need, which means money. Apple does it because they want to hold that position to justify the iPhone's pricing. Looking at any top Android flagship vs. the iPhone in real-world application performance tests and gaming loads shows why there is no need for Qualcomm to push; they push where the money is: GPU, NPU, ISP, and radio/RF.
Qualcomm and Samsung had different problems with their CPU designs. Qualcomm had a pretty competitive design. Their problem was getting blind-sided by the A7 with 64bit. They didn't have a 64bit design in their pipeline and had to abandon their own work and go back to ARM reference designs just to have something remotely competitive.
Samsung found out the hard way that chip design isn't easy. Making a more powerful chip is one thing, but being energy efficient (and powerful) is quite another thing. They eventually scuttled their custom chip hopes as well.
That leaves ARM. ARM will design what their customers want. It's not clear that customers are complaining to ARM that they want more powerful cores. Maybe the X1 is a step in that direction. However, we can see lots of cost cutting examples in the SD888, so it's not clear that there is an appetite for an Apple like design for Android based SoC vendors.
The A55 is not inferior; it's still the best in-order A-series core ARM ever made. The so-called Apple little cores are simply OoO cores, closest in comparison to the A73. The problem is that ARM never made a newer incarnation of the A73 suitable for DynamIQ clusters. They did make the Neoverse E1 and the A65, both of which, thanks to SMT, aren't exactly suitable for mobile phones, and we haven't seen any actual silicon implementations of them. I don't see an L3 victim cache as the way to go, as it's limiting in many aspects; faster RAM and a bigger L2 cache should be the way to go. Apple just makes your wallet cry.
The A75 is derived from the A73 (3-wide instead of 2-wide), and AFAIK supports DynamIQ. And looking at Andrei's M4 review, the A75 appears to be almost as efficient as the A55 at the A55's lowest voltage, and more efficient if the A55 has to ramp up the voltage (long before it reaches the performance of the A75 at its lowest voltage).
Even better would be a low clocked and slightly cut-down Cortex-A76. According to AnandTech at lower frequencies it is more efficient than Cortex-A55 while being much faster. It has a larger area of course, but you could cut it down a bit, and 4 little cores seems a bit overkill, 1 or 2 would be more than enough for background tasks.
Apple efficiency cores have much better performance and power efficiency versus other chips with a similar design. (High perf/low perf). It’s amazing to see what Apple has achieved since the A9 year over year just dominating performance while keeping power efficiency. It’s not even close with Apple GPUs.
You’ll probably squabble that Metal is more optimized than OpenGL.
When it comes to chip designs, whether you like apple or not, they are the best.
I don't understand why they don't just ditch the A55 cores and use two underclocked A78s instead. The A78 is the most power-efficient CPU on the planet under 2GHz!.. and no, I'm not forgetting Apple's Icestorm.
1x X1 + 3x A78 + 2x A78 should be optimal, in my view.. what am I missing?!
What you're really identifying here is that the paragraph at the end of page 1 is extremely damning. Qualcomm isn't good enough at the intricate work on multiple voltage planes to deliver the best possible SoC.
The 888 has a separate voltage plane for the low-power cores. Having one voltage plane for the big cores looks sensible too. My guess: if you have a single-threaded application, you run it on the X1 at max voltage; but if you have a multi-threaded load, doing that while also running the A78s at their max voltage consumes too much power, so you reduce the X1's voltage and clock to the 2.42GHz of the A78s to keep power consumption in check.
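A quick back-of-the-envelope sketch of why that guess is plausible: dynamic power scales roughly with C·V²·f, so dropping voltage along with clock saves disproportionately. The voltages and clocks below are invented for illustration, not Qualcomm's actual operating points:

```python
# Rough dynamic-power model: P ~ C * V^2 * f.
# C is an arbitrary effective-capacitance constant; voltages are made up.

def dynamic_power(c, volts, ghz):
    return c * volts ** 2 * ghz

C = 1.0  # arbitrary units

# Single-threaded: X1 alone at peak (say 2.84 GHz at an assumed 1.00 V).
p_x1_peak = dynamic_power(C, 1.00, 2.84)

# Multi-threaded: X1 capped at the A78s' 2.42 GHz (assumed 0.85 V).
p_x1_capped = dynamic_power(C, 0.85, 2.42)

print(f"X1 at peak: {p_x1_peak:.2f}, X1 capped: {p_x1_capped:.2f}")
```

Even in this crude model, a ~15% clock reduction paired with a modest voltage drop cuts the core's dynamic power by well over a third, which is exactly the headroom you'd want when three A78s are also running.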
Yes the efficiency results show a low-clocked OoO core would be better overall. Cortex-A78 would be overkill as a little processor - fast CPUs use a lot of area. So a cut-down variant maybe with the micro-op cache removed and limited to 3 or 4-way would make more sense (a bit like Cortex-A76).
"This year’s choice of switching back to a Samsung process for the flagship SoC seems to be a vote of confidence in the new process node- as otherwise Qualcomm likely wouldn’t have made the switch"
Such decisions are made for monetary reasons and monetary reasons only. Samsung 5nm is not as good as TSMC 5nm.
Why a huge single core? The ISP is impressive; it's still baffling that traditional camera makers can't make a camera from smartphone parts. Current mobile camera sensors are already on par with APS-C cameras using a kit zoom lens.
Why a huge single core? Single-core performance is pretty much what matters most, because ALL applications can benefit from it. Also, lots of web apps (JavaScript-based) are single-threaded by nature. That's why iPhones just crush Android phones on all of the JavaScript/web benchmarks.
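Amdahl's law makes the point concrete: if only a small fraction of a workload parallelizes, one faster core beats many slow ones. The 20% parallel fraction below is an assumed, illustrative figure, not a measured one:

```python
# Amdahl's law: overall speedup when only part of a workload parallelizes.

def amdahl_speedup(parallel_fraction, n_cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

# Assume only 20% of a (largely single-threaded) JS workload parallelizes.
eight_cores = amdahl_speedup(0.2, 8)  # throwing 8 cores at it: ~1.21x
faster_core = 1.5                     # one core 50% faster speeds up everything: 1.5x

print(f"8 cores: {eight_cores:.2f}x, one faster core: {faster_core:.2f}x")
```

Under these assumptions, eight cores buy barely a 1.2x speedup while a single 50%-faster core delivers the full 1.5x, which is the big-single-core argument in a nutshell.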
Hello everyone. Excuse my ignorance. What is the difference between the Apple A14's 11 AI "teraflops" and the Snapdragon 888's 26? Even the 865 seems to have more than the A13, but I can't quite understand the difference in normal use. It doesn't seem to me that the various handsets using Snapdragons make great use of AI.
AI bragging usually involves integer operations, since they suffice for inference and ALUs of all types can execute more of them than floating-point operations. So the unit is TOPS, not teraflops.
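For a sense of where such a headline number comes from: vendors usually count a multiply plus an add (one MAC) as two operations per cycle per unit. A minimal sketch, where the unit count and clock are invented for illustration, not any vendor's actual configuration:

```python
# How a "TOPS" headline figure is typically derived: each MAC unit
# performs a multiply and an add per cycle, so
#   TOPS = 2 * MAC_units * clock_Hz / 1e12.
# The unit count and clock below are invented for illustration.

def tops(mac_units, clock_ghz):
    return 2 * mac_units * clock_ghz * 1e9 / 1e12

print(f"{tops(4096, 1.0):.3f} TOPS")  # 8.192 TOPS for 4096 INT8 MACs at 1GHz
# The same silicon exposes far fewer FP16/FP32 units, which is why an
# integer "TOPS" number looks bigger than any "teraflops" figure.
```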
I don’t know if the full comparison is available because Qualcomm’s figure includes every unit on the SoC without breaking down how many TOPS come from the CPU, how many from the GPU, and how many from Hexagon.
Apple’s figure is just from the Neural Engine. They use their GPU, but I haven’t seen them give a public figure. They also brag about putting matrix accelerators in the CPU (big cores only?) but don’t get into how many TOPS the whole SoC can do. Maybe it’s in the developer docs?
It's important to note that these "TOPS" ratings are marketing-driven and not an apples-to-apples comparison. Apple advertises the speed of their Neural Engine only (11 TOPS). Qualcomm's advertised TOPS number represents the theoretical capacity of ALL of their computing units including CPU, GPU, DSP, etc. Apple's number would be much higher as well if they included all of those other things, not to mention their dedicated matrix multiplication units, etc.
"Another rationale for the foundry switch could be manufacturing capacity. As Apple is eating up a lot of TSMC’s early 5nm capacity with the A14 and M1, Qualcomm probably saw Samsung’s 5LPE as the safer choice this year..."
This is the most likely scenario and justification to move to Samsung.
"One interesting capability that Qualcomm was advertising is triple-stream 4K HDR video recording. That’s a bit of an odd-ball use-case as I do wonder about the practical benefits..."
Nothing oddball about it. Apple's A13 has this capability as well. There are pro apps like Filmic Pro which allow you to do things like capture a documentary type of interview on multiple cams at the same time. It may be something of an edge case, but it can be useful. It makes sense that Android phones start to get some parity on this type of feature.
Many things on this 888 are old. As for triple-stream 4K HDR: not only does the MediaTek Dimensity 1000 have it, the 820 and 800 do too. We can say the same of the three ISPs, available since the Helio P60; even Unisoc has SoCs with three ISPs! If the market weren't ruled by Qualcomm, we would have had zooming across three cameras long ago, like Apple does. And still no AV1 hardware support; Qualcomm wants us to pay licences for MPEG-5!
At least the 888 can do 5G carrier aggregation; the 865 couldn't. It means that until mmWave becomes available, in many years and in many countries, even a Dimensity 700 will be able to reach faster 5G speeds than the 865.
Plus it's 5nm from Samsung, not that much better than TSMC's 7nm:
TSMC 7FF: 96.5 MTr/mm²
TSMC 7FFP: 113.9
Samsung 5LPE: 126.5
TSMC N5: 173.1
Efficiency won't be as good on 5LPE!
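Taking those MTr/mm² figures at face value, the relative gaps are easy to check:

```python
# Quick sanity check on the MTr/mm^2 figures quoted above,
# taking the numbers and labels at face value.

densities = {
    "TSMC 7FF": 96.5,
    "TSMC 7FFP": 113.9,
    "Samsung 5LPE": 126.5,
    "TSMC N5": 173.1,
}

base = densities["Samsung 5LPE"]
for node, mtr in densities.items():
    print(f"{node:12s} {mtr / base:.2f}x vs 5LPE")
# Samsung 5LPE is only ~11% denser than the 113.9 MTr/mm^2 TSMC node,
# while TSMC N5 is ~37% denser than 5LPE.
```

So on these numbers, Samsung's "5nm" sits much closer to TSMC's 7nm family than to TSMC's N5.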
7FFP has the same density as 7FF but brings a number of improvements. The dense TSMC process you're referring to is 7FF+ (also known as N7+), which is an EUV node. Somebody seems to have wrongly assumed that the P refers to "plus"; it doesn't. Still, after passing through many hands, the error has become pervasive on the web.
When can we expect the detailed analysis of the SoC and benchmark comparisons? There is nothing out there that is as detailed as yours. Hope you guys already have an early sample to test :)
"multi-frame noise reduction engines inside of the ISPs. It’s said that the quality of the noise reduction has been improved this generation, allowing for even better low-light captures with the native capture mode (no computational photography)"
What you have described IS computational photography.
@Andrei, I can't really understand where you see the 7LPP at a disadvantage in comparison to N7.
If you compare the 765G with the middle cores of the 855, you see similar power draw. The comparison to the Kirin shows higher frequency and higher power for the Kirin. I can't see a winner here. You can't just look at the efficiency score: the performance is higher with the Kirin because of the bigger caches. Double the L2 and 4x the L3, and memory bandwidth and the SL cache are bigger too.
You have a point in connection to the N7 but you are on shakier ground with regard to the N7P and N7+. The Semiwiki article shows the density advantage of N7+ process (otherwise known as 7FF+ and mislabelled in the article as 7FFP) used on the Kirin 990 5G SoC. TSMC data likewise underscores the greater density and energy efficiency of the N7+ process.
As I understand it, Band 53/n53 will make its first appearance in the 888. A great band for aggregation/5G plus it is a global band providing users with maximum flexibility.
On its face, keeping clock rates unaltered for the SD888 compared to the previous generation (SD865) seems like the smart move: the new Samsung 5LPE process gives a good peak performance boost of around 30% without pushing power consumption into problematic territory, and reduced power consumption on the modestly clocked A78 cores should more than compensate for any increase on the X1 core. Yet the overall picture of power consumption for the SD888 doesn't seem to have quite worked out. Early information on the SD888 SoC in real-world situations indicates that while it does a great job of sustaining close to peak performance even under heavy load, it doesn't achieve that feat without producing troublingly elevated thermals. Many reports have recently appeared of engineering samples of new flagship Android smartphones becoming uncomfortably hot and suffering accelerated battery drain.
What are we to make of this? First, it is clear that Qualcomm has tried to achieve a lot in this generation - almost everything in the SD888 has changed and the most impressive improvements are outside the CPU complex. Most notably the 5G modem is now integrated with the Snapdragon SoC and the iGPU has undergone a major upgrade. It seems that the power savings available from the 5LPE process have been insufficient to deliver the full benefit of incorporating these improved components. Qualcomm, no doubt, understood that a feature-rich Snapdragon SoC was going to be essential to reassure the Android faithful that flagship Android phones were keeping pace with the iPhone. The reassurance in this case is partial. A truly compelling case for the SD888 might require an upgraded 5nm silicon process that matches the density and power savings of TSMC's N5 process. The SD888 looks to me like the right SoC in the wrong (Samsung 5LPE) silicon. Where is the 5LPP process that the SD888 (and the Exynos 2100) needs, Samsung?
I really want to own a smartphone with the Snapdragon 888, or even a Snapdragon 888+. Being built on a 5nm process, the chipset should be a lot more powerful and efficient, which is why I want a phone that has it.
eastcoast_pete - Wednesday, December 2, 2020 - link
Thanks Andrei! While not the (or even a) focus of your article (there was no need), I am struck by how stuck ARM bigLittle type designs are when it comes to the efficiency cores. A55, still? Let's make no mistake, you/we want capable and efficient small cores in the SoC, so that the big cores don't feast on the battery all the time. 5G modems tend to increase power use (this integrated one hopefully a bit less), so having the ability to stay on the low power cores of the CPU longer is even more important.
Andrei, if you can, I'd appreciate a deeper dive into the state of the art of the small cores, especially a comparison of Apple's vs. stock ARM A55 designs. Thanks!
psychobriggsy - Wednesday, December 2, 2020 - link
I imagine that there's only so far you can take an in-order ARMv8 core design when you want to optimise for power consumption. ARM do release updates to their core designs, whilst retaining the same name, so they do get small improvements. There are no ISA changes for these consumer cores either, as far as I am aware.
But it does look like the A55 has pretty much not changed in several years, simply becoming more and more efficient as the process has shrunk down to 5nm while the clocks have remained the same. I think people were expecting an A58 by now - maybe there will be an A59 to go with the A79 next year?
Lolimaster - Thursday, December 10, 2020 - link
Probably called A63.
Ppietra - Wednesday, December 2, 2020 - link
This is what he said about the Icestorm cores in the A14: "The performance showcased here roughly matches a 2.2GHz Cortex-A76 which is essentially 4x faster than the performance of any other mobile SoC today which relies on Cortex-A55 cores, all while using roughly the same amount of system power and having 3x the power efficiency."
brucethemoose - Wednesday, December 2, 2020 - link
Makes one wonder what on earth Apple is doing to achieve that. It's not like ARM's CPU architects are underpaid chumps.
Some of it is extra die space, I guess? The LITTLE cores have to be, err, little, while Apple can afford to blow up area for efficiency.
tkSteveFOX - Wednesday, December 2, 2020 - link
Not really. Apple's chips are always roughly the same size as QC and Kirin top chips and on the same nodes. Just better architecture and more R&D investment. QC have been doing the same chip for 3 gens now, just updating the ARM tech and gains in AI and ISP, but the CPU config remains the same, it's even clocked the same!
Ppietra - Wednesday, December 2, 2020 - link
I think Apple CPU cores are significantly bigger than ARM designs
headeffects - Wednesday, December 2, 2020 - link
I don’t have all the numbers on me but Apple has never had an on die modem while Qualcomm usually does, so you can’t directly compare die sizes like that. Andrei would perhaps know the measurements of the actual core sizes and how they compare from Apple to Qualcomm.
Fulljack - Thursday, December 3, 2020 - link
Nope. Apple A13 is 98.48 mm² while Snapdragon 865 is 83.54 mm², according to TechInsights. Both are manufactured on TSMC N7P and feature no integrated modem. So yeah, a wider (and thus bigger) core design does improve performance, but not always.
RSAUser - Wednesday, December 2, 2020 - link
Apple has both an architecture design lead and a process node lead: TSMC 5nm vs Samsung's marketing version of it, which is worse than TSMC 7nm.
MenhirMike - Wednesday, December 2, 2020 - link
Apple bought P.A. Semi in 2008 to make their own ARM chips, and evidently, P.A. Semi is a better semiconductor company than Samsung, Qualcomm, and anyone else in the ARM marketplace.
Ppietra - Wednesday, December 2, 2020 - link
P.A. Semi no longer exists and a lot of things happened since P.A. Semi was bought, like Apple buying Intrinsity which was involved in the creation of the A4 chip.
jordanl17 - Thursday, December 3, 2020 - link
I remember the headline a long time ago: "Apple buys Israeli based cpu developer to make their own chips". I was like, "haha, yeah, good luck with that". I. WAS. WRONG.
jordanl17 - Thursday, December 3, 2020 - link
maybe they weren't Israeli based... (can't edit post?)
Luminar - Thursday, December 3, 2020 - link
The edit functionality only exists for the first 15 seconds after posting. This is to prevent people from going back and editing their comments well after the fact to appear less wrong.
Wilco1 - Friday, December 4, 2020 - link
This is not true - if it was, I could edit this!
trini00 - Saturday, December 5, 2020 - link
https://www.zdnet.com/article/start-up-plans-new-e...
Quite an interesting read; integration on a chip and the cache structure are some of the advantages the M1 has.
headeffects - Wednesday, December 2, 2020 - link
Is this true? I knew Samsung’s 5nm was behind but behind even TSMC 7nm sounds shocking.
Lodix - Thursday, December 3, 2020 - link
No
Ppietra - Wednesday, December 2, 2020 - link
I am no expert but I believe it happened because of very different visions/philosophies and objectives.
ARM goes for smaller and less complex cores than Apple, believing they will consume less and save space so more cores can fit on the same die, hoping for higher multithreaded performance. This would also probably be cheaper for other companies to implement.
Apple on the other hand bet on bigger cores, maybe already envisioning that their development could in the end be more useful for computers, or at least the iPad. Apple believed that a faster core could consume less by finishing more complex tasks sooner. Costs didn’t seem to be a big concern for Apple, nor increasing core counts like crazy (remember when there were SoCs with 10 or more cores in phones?), nor was Apple constrained by what others might need. I imagine with these objectives Apple had to solve a lot of problems to optimize power consumption. Having to go through these challenges much sooner than ARM probably helped Apple develop more efficient designs.
It seems that Apple is just far more aggressive in developing its chips, and knows what it needs for its hardware and software.
name99 - Wednesday, December 2, 2020 - link
From the outside, the problem appears to be that ARM is too deferential to its customers (one of the things that nV could fix if they get control...)
In particular (compare with Apple) ARM appears unwilling to just do something new and aggressive and hope that it will be picked up when it's ready.
This might seem reasonable, but the problem is that your main customers are companies like Samsung and Qualcomm, the gang that couldn't shoot straight. Both (still!) seem utterly unaware that Moore's law is still a thing, and that the needs of devices evolve with time. So they both insist their priorities are smallest cheapest cores possible -- until there's a mad scramble to match whatever Apple is doing.
This foolishness has been most obvious with the A55. Sure, maybe in some technical sense the A55 is good enough wrt performance and energy and did not need to be updated for those reasons. BUT refusing to update it locks the ISA at v8.2.
(Does anyone know how this is handled given that in theory the A78 supports some v8.3 instructions? Is the rule just that you don't use those instructions on a dynamiq A78+A55 system?)
So QC and SS are stuck. Because n years ago they were too stupid to see the big picture, that future small cores would need to track the evolving ARM ISA, they've held back what their large cores can do. A more dynamic ARM should probably just have ignored whatever they said and switched to a model like Apple that updates the large and small cores in lockstep every year.
GeoffreyA - Thursday, December 3, 2020 - link
The magic is simple: Icestorm is quite likely an out-of-order design, which picks up the performance quite a bit. And as for the extra power that comes along with out of order, they probably got it right down with enough optimisation/clever design (physical register files, micro-op cache, etc). Reminds me a bit of the comparison between Atom and Bobcat/Jaguar. For my part, I feel that in-order designs aren't really worth the supposed power savings (except in some cases), so I don't know why these companies waste their time.
GeoffreyA - Thursday, December 3, 2020 - link
Replying to brucethemoose.
eastcoast_pete - Friday, December 4, 2020 - link
Does anyone here know for sure whether Apple's efficiency cores are indeed out-of-order designs? That might actually help explain the better performance and performance/Wh of their Ice Storm cores; and raise the question: why hasn't ARM updated its little core design? I guess one answer is because they didn't have to (yet).
Lolimaster - Thursday, December 10, 2020 - link
Consider that the A77 is more efficient and more powerful at the same frequency than the really old A55. Increase it a bit in size with a big update and extra budget and you get the Apple "lpA76"-style cores.
tuxRoller - Thursday, December 3, 2020 - link
The answer is... the old big cores are replaced by the new X series and become the new little (or mid) core.yeeeeman - Wednesday, December 2, 2020 - link
can't wait for battery life tests, although i suspect they will be a bit worse than tsmc 7nm.
iphonebestgamephone - Thursday, December 3, 2020 - link
Why did they bother with 5nm then? Is this 5nm even cheaper than tsmc 7?
SyukriLajin - Thursday, December 3, 2020 - link
because apple have been marketing 5nm, and they can't sell "the best [android] soc in the market" on 7nm, especially when their android competitors are also moving to 5nm. the efficiency difference might be small, but the marketing impact is too big to ignore. plus most phones running this chip in 2021 will probably have 4000mAh+ batteries anyway, so it wouldn't make much difference in consumer products.
iphonebestgamephone - Thursday, December 10, 2020 - link
Do the general public care more about nm or antutu? Or is it that oems would be more likely to get the 5nm one?
RSAUser - Monday, December 7, 2020 - link
It shouldn't be, it's about 10% denser than TSMC 7FFP (127 vs 114), a far cry from TSMC N5 though (173).
Zeratul56 - Wednesday, December 2, 2020 - link
Any word if/when this could end up in an 8cx platform? While probably short of the M1, it could make a Windows on ARM PC much more compelling.
domboy - Wednesday, December 2, 2020 - link
I'm interested to see what the 8cx replacement based on the 888 will be like...
colinisation - Wednesday, December 2, 2020 - link
Andrei/Ian - Any thoughts on the suitability of a 1+3+4 core CPU configuration as against a 2+2+4 config. I recall you guys did a deep dive on the state of threading in Android and it seemed that you had one or two main threads and the rest were low performance.
Do you think this is still the case or do you think SoC vendors should be looking at increasing the big core count, given more multitasking on phones?
spaceship9876 - Wednesday, December 2, 2020 - link
No AV1 video hardware decoding support? You have to be kidding!
ikjadoon - Wednesday, December 2, 2020 - link
Qualcomm has long loved the MPEG industry. In 2017, Qualcomm was already hating on AV1: https://web.archive.org/web/20170611163031/https:/...
In 2020, Qualcomm & Samsung are pushing for MPEG-5 EVC (presumably H.267?): https://www.qualcomm.com/media/documents/files/mpe...
So the upcoming codec wars (again): AV1, VVC (H.266), and EVC (H.267?).
However, most of the "big" hardware & software companies are backing AV1:
NVIDIA
AMD
Intel
Apple
Arm
Facebook
Cisco
Google
Microsoft
Netflix
Samsung (many fingers in many pies)
Adobe
Hulu
BBC
Alibaba
Broadcom
Realtek
Vimeo
Xilinx
Qualcomm already looking like chumps here. They literally use Arm's CPU cores on Google's operating system; Qualcomm's only serious work today is non-CPU IP: cameras, AI, etc. They're a little Texas Instruments?
Raqia - Wednesday, December 2, 2020 - link
CPUs, even high-performance ones, are now much more commoditized; Qualcomm still does a lot of substantial work on the cache hierarchies and prefetchers, which matter just as much as the core uArch. I'm always curious about CPU uArch and performance, but for a phone the other units matter more for the experience despite being less testable and less prominent in most tech headlines.
brucethemoose - Wednesday, December 2, 2020 - link
"Apple"
Not so sure about that... the M1 skipped AV1 too.
RSAUser - Wednesday, December 2, 2020 - link
No, the M1 has AV1 decode, it's a larger A14.
halcyon - Wednesday, December 2, 2020 - link
Yes, the lack of AV1 decode feels odd.jaj18 - Thursday, December 3, 2020 - link
It will come with adreno 7××🤔
StormyParis - Wednesday, December 2, 2020 - link
"This year although we’re not reporting from Hawaii". Heh heh. I'd feel sorry for you if I wasn't jealous for all the other years ? ;-p
Krysto - Wednesday, December 2, 2020 - link
No AV1 decode support in 2021? Really?
tuxRoller - Thursday, December 3, 2020 - link
I'm more interested in accelerated encode at this point. We've not had industry-wide buy-in of a new lossy codec since JPEG, and HEVC hasn't quite achieved the ubiquity that H.264 managed after the same time in market.
GeoffreyA - Thursday, December 3, 2020 - link
While hardware AV1 encode would be quite nice to see, there's a possibility it will lose much of software AV1's gains over software HEVC (that is, one might encode quickly but end up with less compression than x265). Also, leaving aside the Slough of Patents for a moment, VVC will have to be taken into account once x266 comes out. If the studies are right, the reference VVC encoder (not x266) already shows better compression and speed than AV1. Hopefully, it won't inherit HEVC's less than pleasing picture too (to my eyes at least).
tuxRoller - Friday, December 4, 2020 - link
That's a great point. In my haste to mention the lack of encoding ability I'd forgotten about the actual implementation of such a complicated codec. Which of the 30 or so tools, and their combinations, provide the most bit savings per mm²?
IIRC, VVC owes a lot of its gains to integration with ML (there's at least one commercial AV1 implementation that does this as well, to supposedly great effect). IOW, I'm uncertain how much easier VVC will be to implement in hardware. OTOH, EVC looks quite interesting.
GeoffreyA - Friday, December 4, 2020 - link
Oh, yes, it will probably make their heads spin implementing this thing in hardware, and when they do, which they will, they're going to make it a marketing point (even if, in practice, it fell behind x265).
Yesterday I was experimenting with libaom-av1 in FFmpeg and discovered a useful parameter: -cpu-used. It controls compression/encoding speed and takes values between 0 and 8, 0 being the slowest, 1 the default, and 8 the fastest. To my surprise, 8 brought encoding speed to reasonable levels: about 10x slower than x265, if I remember right, which isn't half bad. I was using a video shrunk to 360p, though.
As for VVC, can't wait to give it a go. Hopefully, it'll deliver and be of AVC's calibre. I wasn't familiar with EVC but took a look at it now, and it does appear to be quite an interesting concept.
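The libaom-av1 experiment described above can be sketched as an FFmpeg command built for subprocess. The filenames are placeholders; -cpu-used trades compression for speed (0 slowest, 8 fastest), and -crf with -b:v 0 selects constant-quality mode:

```python
# Sketch of the libaom-av1 encode described in the comment above,
# expressed as an argument list for subprocess. Filenames are
# placeholders, not files from the original experiment.
import subprocess

def av1_encode_cmd(src, dst, cpu_used=8, crf=30):
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libaom-av1",
        "-cpu-used", str(cpu_used),  # 0 = slowest/best, 8 = fastest
        "-crf", str(crf), "-b:v", "0",  # constant-quality mode
        dst,
    ]

cmd = av1_encode_cmd("in.mp4", "out_360p.mkv")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment with FFmpeg installed
```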
tuxRoller - Saturday, December 5, 2020 - link
You might be interested in the doom9 forums (https://forum.doom9.org/forumdisplay.php?f=17). In the AV1 thread you'll often see people posting updates about the various AV1 en/decode implementations, new settings and, in general, some interesting thoughts from folks in the industry.
BTW, starting from this post (https://forum.doom9.org/showthread.php?p=1929560#p... there's an interesting discussion regarding qcom & their interest in not pushing av1.
Regarding fast encoders, I'm assuming you've tried svt-av1? That's supposed to have nearly caught up with aom's encoder quality but is still a good deal faster.
Lastly, thanks for the paper. It looks interesting and a quick skim didn't reveal any mention of ml enhanced transform, or even a new entropy code(!); they seem to be continuing to iterate on h.264->h.265. However, only started reading it and realized I'm not getting through that tonight:)
GeoffreyA - Saturday, December 5, 2020 - link
Thanks for those doom9 threads. Looks like a treasure trove of information on AV1 there. As for SVT-AV1, yes, I have tried it. While the speed was good, the picture didn't seem that impressive. Anyhow, I'll have a crack at it again and see how it stacks up against libaom, now that I've got the latter running faster.
You're right. I remember getting the impression that this was similar to how HEVC improved over H.264. Mostly, extending techniques already laid down. Yet another reason to tip one's hat to the MP3 of video.
GeoffreyA - Saturday, December 5, 2020 - link
I found this some weeks ago. It goes into some lower-level details of VVC.
https://www.cambridge.org/core/services/aop-cambri...
Alistair - Wednesday, December 2, 2020 - link
Focus on Camera and AI, with tiny ~10 percent CPU performance improvements, and the exact same 4 little cores for the 4th year. I admit I'm a little disappointed. Last time I paid attention it was all about power and Hercules A88 cores.
Hopefully we see good $700 flagships from Samsung this year instead of the way overpriced S20 series, maybe the integrated modem will help.
zeeBomb - Wednesday, December 2, 2020 - link
Awww yissss. Some of the little things as some people said that could be implemented..but hey I'm all in for the luckiest Snapdragon chipset yet.
Raqia - Wednesday, December 2, 2020 - link
Any thoughts on specific use cases for the hypervisor feature they enabled or any comments from them? It would be nice to just run Linux or Windows apps from my phone attached to a monitor and kb.
BedfordTim - Thursday, December 3, 2020 - link
Samsung Dex might help you.abufrejoval - Thursday, December 3, 2020 - link
To me that is the most exciting feature of the 888, and one where I'm not sure Google's Android will pick up on it.
Today mobile phones (and increasingly desktops) are under the control of the ecosystem vendor, all trust and cryptography tied to Apple or Google (or Microsoft). Of course device vendors also want a piece (DeX) but really it's the owner who should be in charge.
Going forward the number of stakeholders can only increase, there will be governments with vested interests and specific compliance concerns, corporate employers etc.
So the ability to run a flexible number of enclaves that can be guaranteed not to step or spy on each other will eventually become critical, but also allow to break the stranglehold that Apple and Google currently have on the device you own, but don't control.
We already have enclaves inside SIM cards and baseband controllers, but they are completely physical, secure that way, but not flexible and affordable to multiply.
So while I would love to have more details, know if this is like SEV/MKTME on x86 or even better, I don't see how Apple, Google or Microsoft (Pluton!) or even the NSA for that matter, can be motivated to hand the supreme power to you and me, while they can now play in a walled garden we oversee, even if we can't sniff inside.
In my book I should be able to block conversations between enclaves and their cloud controls, while Apple is pushing the envelope in the opposite direction, hiding their device/command & control-center conversations from owners.
IMHO that needs to be made painfully illegal, before all the others jump on that bandwagon.
ZolaIII - Wednesday, December 2, 2020 - link
This actually looks like a horrible SoC, which QC didn't bother to improve much. Regarding Samsung's 7nm node, things are actually the other way around: the first gen, based on the HD lib and with EUV, was both better and more efficient than TSMC's non-EUV 7nm, but it cost more. The second gen is actually 7nm with the UHD lib, while TSMC 5nm is a fully new node with around 2.2x higher density (vs. Samsung 7nm UHD) but sadly not significantly more power efficient, while costing more than double. On the other hand, Samsung's cost went down as density went up by around 50%, and thanks to maturity and good yields it's estimated at twice as many gates for the same price.
I really wanted to see a new gen of Adrenos (for 4 SoC generations now); instead this is a minor rework (a couple of new functions) in the same cluster configuration as its predecessor. Now imagine a 2x increase in logic on the TSMC 5nm process at 30% lower speed. I do think the quoted 35% increase is under ideal conditions (utilising new features we won't see used for a couple of years), while in reality we will see just a small, insignificant increase.
What's the use of a single high-performance CPU core which outruns the rest by 50% IPC? Sure, the first core in any cluster will be the one to bear the burden of the thread where everything is started before workers are deployed, but it only needs a couple % more capacity to balance that.
Seems they didn't do anything to improve cache coherence and efficiency.
The DSP has seen a real improvement, but I don't view QC's Hexagon as something good (stiff, proprietary, hard to get at and not flexible), to the point of thinking such things shouldn't exist. The biggest gain will be the integration of the 5G modem, which will cut its power consumption in half.
All in all a rather bad job.
iphonebestgamephone - Wednesday, December 2, 2020 - link
It's good as long as I get the 25% and 35% improvements.
ZolaIII - Wednesday, December 2, 2020 - link
Thing is, you will only get around a 20~25% improvement on the single (X1) core, while under full all-core utilisation that will sink to only a couple % (thanks to bus, topology and memory-coherence bottlenecks). Early Geekbench results already confirm this. It will be the same regarding the GPU: it will get its 30% advantage, but only when new features are used (which they won't be for a long time).
iphonebestgamephone - Wednesday, December 2, 2020 - link
Oh man, i forgot what they did with the 855, 45% cpu improvement over 845. But that was only on geekbench 4 single core. With maybe 15% on multi.
Raqia - Wednesday, December 2, 2020 - link
In addition to integrating the Modem, they also appear to be integrating WIFI (FastConnect 6900) and Bluetooth on die now. Prior year diagrams seem to have illustrated WIFI and Bluetooth off chip.
"I really wanted to see a new gen of Adreno's (for 4 SoC generations now) instead of that this is minor rework (couple new functions) in a same cluster configuration as it's predecessor."
Are you really sure about it being a minor rework? Variable rate shading may be possible or not give as much of a performance boost without some changes to the hardware. The rest is just marketing much like the moniker 888 is.
ZolaIII - Wednesday, December 2, 2020 - link
Well, based on the QC claims, 20% more power efficient = the same number of GPU clusters in the same alignment, with the power saving coming from the process improvement; they made a new pipeline to include additional functions but the fundamental blocks remain unchanged (actually the ALUs haven't changed since the ATI days).
This is just based on my assumptions and logic, upon which I don't think it's enough to be called a new generation.
I don't have a problem with numerology, including the additional make-believe tied to the name change; at least that's not snake oil like the rest.
Bearing in mind the rest of the story (trade wars), the QC naming thing is actually pathetic.
Whoever makes a real flagship SoC (never mind the die size), even one based on reference IP on TSMC 5nm, has the opportunity to beat this abomination without too much hassle.
Raqia - Wednesday, December 2, 2020 - link
Transistor count and die size haven't been revealed yet; I have my doubts many other companies could integrate all those subsystems with the PPA that Qualcomm achieves. Even Apple doesn't get there as they don't integrate a modem which is far far trickier than you might expect.
ZolaIII - Wednesday, December 2, 2020 - link
Well, you are right about that (modem, RF, etcetera) and things won't get better any time soon; call it democracy.
The rest is IP available to license. If you raise the bar to actual manufacturers (under their own management), the list goes to none; ironically, Samsung is by far the most adequate. I guess things will get boring until GAA.
melgross - Wednesday, December 2, 2020 - link
Apple hasn’t been allowed to integrate a modem. It’s likely that a major reason they bought Intel’s work is so that they can have their own and do exactly that. But Apple seems to have no problems with efficiency, even with an external modem. I suspect that it’s the Android OS's known efficiency problems, among others, such as the requirement for double the RAM, that are causing these problems, which is why those phones require batteries that are so much larger.
Raqia - Wednesday, December 2, 2020 - link
It's a fallacy that an external modem is any less power efficient than an on-SoC one, and in fact the fab process can be further optimized for a totally separate modem die, which really does have different requirements than CPUs and GPUs. The reason Qualcomm, Samsung, Mediatek and Huawei integrate it is to reduce cost and complexity. Apple simply doesn't have the IP necessary, and the purchase of Intel's money-losing unit was primarily for IP and some talent rather than design or implementation; it still won't get them a competitive modem in-house for several years to come.
ZolaIII - Thursday, December 3, 2020 - link
It's a fact that it isn't one piece to start with: an RF analog/mixed-signal part, and a processing part which can be integrated. The RF part hasn't progressed in its manufacturing processes in a very, very long time and in the best-case scenario is built on SOI. In the world of mobile SoCs, high-density libraries are already commonly used for everything (excluding analog, MOSFETs, et cetera, of course).
KusheYemi - Wednesday, December 2, 2020 - link
Qualcomm updated their processor the right way. They improved all of the main day-to-day functions that ordinary people use frequently. Tech-savvy people might be disappointed about the raw compute power, but the SD865 was already great.
halcyon - Wednesday, December 2, 2020 - link
If they wanted to improve the day-to-day, they would have made it much more power efficient and replaced those ageing A55 cores that are used for most daily simple tasks.
melgross - Wednesday, December 2, 2020 - link
Great compared to what, other mediocre SoCs?
The Hardcard - Wednesday, December 2, 2020 - link
I don’t know why people only see great or horrible with nothing in between. This is a good, competitive chip - only people desperately aching for someone to overtake Apple can be seriously disappointed.

The single X1 is a solution for having good single-thread speed, and the UI responsiveness that comes with that, while giving good multicore performance combined with efficiency. Is it the best solution? I don’t know, but it’s not a bad one.
The clock speed is interesting, however. I have to think that there are process limitations there. ARM targeted 3GHz, which would’ve given it some floating-point wins over the A13 based on Andrei’s estimates. The 5% shortfall will put them pretty squarely behind the A13; I’m sure they wouldn’t have accepted that unless they had no choice.
dudedud - Wednesday, December 2, 2020 - link
If the Vivo (V2056A) GB scores are legit, this implementation of the X1 will be much closer to the A12 than to the A13.
abufrejoval - Thursday, December 3, 2020 - link
I'd say they design their SoC to suit their usage. And that is a mixed bag full of compromises, matching what people are actually doing on these devices. Very little of that is HPC.

The single X1 is for all that fat single-threaded desktop-class browser code out there, which only gets tolerable response times on a 4GHz Pentium 4, but hopefully won't run longer than a couple hundred milliseconds, because an X1 core simply can't run 24x7 on a mobile power budget.
Mobile games had better run mainly on the efficiency cores (apart from the GPU), with perhaps short bursts on the power cores, because otherwise not even an hour of game time may be possible on a single charge (or without burning your fingers).
In short, don't expect all of these resources used at full capacity for any extended time. Instead these SoCs become a computing appliance farm with specialists for many different tasks, designed to do very little to nothing most of the time and as aesthetically pleasing inside as any SME server room that evolved with the business for 20 years.
To ask for a revolutionary design on a new process from a different fab is perhaps asking just a little bit too much, especially when they need to sell another generation next year.
For such a "bad" job, I am seriously considering that the 888 may be enough of an upgrade over my current 855, once they sell these devices at reasonable prices (~€500) and with LineageOS support in late 2021 or early 2022, when the 895's arrival is imminent.
Honestly, I've stopped asking for more smartphone computing power since the 820, been perfectly happy with energy efficiency since the 835, and been waiting for a proper desktop mode since the first DeX on an 800.
It's hard to sell more when the need doesn't really grow, or when you can have 500 watts of desktop power any time you sit down for something serious.
eastcoast_pete - Wednesday, December 2, 2020 - link
As for the choice of Samsung's 5nm LPE for the manufacturing, I suspect it's not just TSMC's capacity that made QC go with Sammy. My guess is that Samsung fabbed it for less - that simple. Sort of like why NVIDIA chose Samsung's 8nm for Ampere; they did it for less.
abufrejoval - Thursday, December 3, 2020 - link
With China flexing its muscles over Taiwan, Korea may be more attractive in other ways, too.

Then I wonder if the 5nm node at Samsung may actually be faster from start to finish, with EUV replacing all those multi-masking and multi-patterning steps...
trivik12 - Wednesday, December 2, 2020 - link
Thanks for the detailed article. Did Qualcomm go with Samsung because they will be taking process leadership in the near future with the first GAAFET implementation in 2023? It would be good to see where Samsung 5nm is relative to TSMC.

Looking at improvements from process and architecture, I feel even ARM is close to hitting the wall.
brucethemoose - Wednesday, December 2, 2020 - link
IDK about that. The schedule is *far* from set in stone, with how extreme the physics are.
tkSteveFOX - Wednesday, December 2, 2020 - link
I think ARM wants to keep everyone out of reach of Apple's performance dominance. The A55 is vastly inferior to the last 3 generations of small cores Apple has used.
X1 is decent but seeing just a single core in there isn't great.
Not using 8MB of cache is purely driven by greed. It would make the chips cost more, and that's why QC isn't doing it; there are substantial gains from using more cache on a CPU in heavy workloads.
All these companies want to make money, so they cut costs of their chips and that's why Android will always be behind iOS.
Apple gives you the most bleeding edge stuff without sacrificing on the chip.
id4andrei - Wednesday, December 2, 2020 - link
That is because QC has to sell these chips to OEMs that have to be able to afford them. Apple doesn't have to sell to anyone else. They have high margins to justify their investment in performance beyond the scope of Xiaomi, LG, Samsung.
melgross - Wednesday, December 2, 2020 - link
ARM doesn’t care about that. They sell designs that are good enough to make them enough money to turn a good profit and allow for further work. It’s up to the OEMs to make the changes allowed through the design license to make some improvements. Failing that, companies can get an architectural license, as Apple and a few others do, which lets them design cores and subsystems from scratch.

Both Qualcomm and Samsung tried that for a few years, but failed to come up with good designs. So they went back to licensing designs from ARM.
Silver5urfer - Wednesday, December 2, 2020 - link
The SD820 is not a failure; it was a fully custom design, and far from a disaster - it was superb and even had higher IPC than the 835. That's where all this Kryo started. Then Qualcomm moved its engineering architecture teams to Centriq, the famously powerful ARM server processor, which they axed even after putting so much R&D into it, along with the Cloudflare marketing. Since then Qualcomm has never made any custom cores. Only Samsung did, with ambitious aims, but failed to optimize it for the smartphone.

And in the end it doesn't even matter, because phones are going to be at parity with the A14. Just looking at gaming performance and application performance tests, which are real world, shows the 865 is not even that far behind the A14 in some aspects. And Qualcomm is putting money where it matters - GPU and 5G.
lmcd - Wednesday, December 2, 2020 - link
The 820 was absolutely a disaster. Its errata list was too long for Windows kernel support, likely at the ISA implementation level, and likely deeply rooted enough to justify dropping the entire endeavor.
Silver5urfer - Wednesday, December 2, 2020 - link
How does the Windows kernel come into the picture? It was about Android performance and 64-bit compatibility, due to Apple's move first and the 810's ultimate disaster, which even killed HTC entirely. The 820 processor was very fast and still holds up, just like the 805, but the latter was 32-bit; one can compare it with Apple's A9.
The entire endeavor was dropped because there's no need. Why do you think Qualcomm develops so much of the radio etc. and pours in tons of R&D? Patents. That's what Qualcomm is all about, and they tried that with Centriq. But ARM in the DC market is a dead end; after so many years of articles here on AT and STH, hardly anyone is there on that side - the only option showing some metric of performance, and only for small loads, is Graviton2. Only when there's a need do these companies push, which means money. Apple does it because they want to hold that position to leverage the pricing justification of the iPhone. Looking at any Android top flagship vs. iPhone in real-world application performance tests and gaming loads shows why there is no need for Qualcomm to push; they push where there is money: GPU and NPU, ISP, radio/RF.
techconc - Thursday, December 3, 2020 - link
Qualcomm and Samsung had different problems with their CPU designs. Qualcomm had a pretty competitive design; their problem was getting blindsided by the A7 with 64-bit. They didn't have a 64-bit design in their pipeline and had to abandon their own work and go back to ARM reference designs just to have something remotely competitive.

Samsung found out the hard way that chip design isn't easy. Making a more powerful chip is one thing, but being energy efficient (and powerful) is quite another. They eventually scuttled their custom chip hopes as well.
That leaves ARM. ARM will design what their customers want. It's not clear that customers are complaining to ARM that they want more powerful cores. Maybe the X1 is a step in that direction. However, we can see lots of cost-cutting examples in the SD888, so it's not clear that there is an appetite for an Apple-like design among Android-based SoC vendors.
ZolaIII - Thursday, December 3, 2020 - link
The A55 is not inferior; it's still the best in-order A-series core ARM ever made. The so-called Apple little cores are simple OoO cores, closest in comparison to the A73. The problem is ARM never made a newer incarnation of the A73 suitable for DynamIQ clusters. They did make the Neoverse E1 and the A65, which, thanks to SMT, aren't exactly suitable for mobile phones, and we haven't seen any actual silicon implementations of them.

I don't see an L3 victim cache as the way to go, as it's limiting in many aspects. Faster RAM and a bigger L2 cache should be the way to go.
Apple just makes your wallet cry.
AntonErtl - Thursday, December 3, 2020 - link
The A75 is derived from the A73 (3-wide instead of 2-wide), and AFAIK supports DynamIQ. And looking at Andrei's M4 review, the A75 appears to be almost as efficient as the A55 at the A55's lowest voltage, and more efficient if the A55 has to ramp up the voltage (long before it reaches the performance of the A75 at its lowest voltage).
Wilco1 - Friday, December 4, 2020 - link
Even better would be a low-clocked and slightly cut-down Cortex-A76. According to AnandTech, at lower frequencies it is more efficient than the Cortex-A55 while being much faster. It has a larger area of course, but you could cut it down a bit, and 4 little cores seems a bit of an overkill; 1 or 2 would be more than enough for background tasks.
Irish910 - Monday, December 14, 2020 - link
Apple’s efficiency cores have much better performance and power efficiency than other chips with a similar design (high perf/low perf).

It’s amazing to see what Apple has achieved since the A9, year over year, just dominating performance while keeping power efficiency. It’s not even close with Apple GPUs.
You’ll probably squabble that Metal is more optimized than OpenGL.
When it comes to chip designs, whether you like Apple or not, they are the best.
patel21 - Wednesday, December 2, 2020 - link
Was Samsung going to use an AMD GPU this year? If they do, I can see them wearing the Android performance crown easily.
darkich - Wednesday, December 2, 2020 - link
I don't understand why they just don't ditch the A55 cores and use two underclocked A78s instead. The A78 is the most power-efficient CPU on the planet at under 2GHz! And yes, I'm not forgetting Apple's Icestorm.
1x X1 + 3x A78 + 2x underclocked A78 should be optimal, according to me... what am I missing?!
lmcd - Wednesday, December 2, 2020 - link
Die size considerations.

What you're really identifying here is that the paragraph at the end of page 1 is extremely damning. Qualcomm isn't good enough with intricate work on multiple voltage planes to deliver the best possible SoC.
AntonErtl - Thursday, December 3, 2020 - link
The 888 has a separate voltage plane for the low-power cores. Having one voltage plane for the big cores looks sensible, too. My guess is: if you have a single-threaded application, you run it on the X1 at max voltage; but if you have a multi-threaded load, doing that while also running the A78s at their max voltage consumes too much power, so you reduce the X1 voltage and clock to the 2.42GHz of the A78s to keep power consumption in check.
iphonebestgamephone - Thursday, December 3, 2020 - link
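That guess follows directly from dynamic-power scaling (P ≈ C·V²·f). Here's a minimal sketch; the voltages are made-up illustrative values, and only the 2.84GHz and 2.42GHz clocks come from the discussion above:

```python
# Dynamic CPU power scales roughly as P ~ C * V^2 * f.
# Voltages below are hypothetical, NOT Qualcomm specs; only the
# clocks match the SD888 figures discussed in this thread.

def rel_power(v: float, f_ghz: float) -> float:
    """Relative dynamic power (arbitrary units), unit capacitance."""
    return v * v * f_ghz

p_boost = rel_power(1.00, 2.84)  # X1 at its full clock
p_sync  = rel_power(0.85, 2.42)  # X1 pulled down to the A78 clock

# Because a lower clock usually allows a lower voltage, power falls
# much faster than frequency: ~15% less clock, ~38% less power here.
print(f"relative power at 2.42GHz: {p_sync / p_boost:.2f}")
```

Whether the X1 actually shares a rail and steps down this way is the commenter's guess; the sketch only shows why the V² term makes that guess plausible.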
The X1 clock is at 2.84GHz in the SD888 during multi-threaded loads.
iphonebestgamephone - Thursday, December 3, 2020 - link
I'd think even one A78 costs more than 4 A55s.
Wilco1 - Friday, December 4, 2020 - link
Yes, the efficiency results show a low-clocked OoO core would be better overall. The Cortex-A78 would be overkill as a little processor - fast CPUs use a lot of area. So a cut-down variant, maybe with the micro-op cache removed and limited to 3- or 4-wide, would make more sense (a bit like the Cortex-A76).
zamroni - Wednesday, December 2, 2020 - link
And Samsung will continue force-feeding non-US Galaxy S buyers mediocre Exynos.
ArcadeEngineer - Thursday, December 3, 2020 - link
I don't see why Exynos would be any worse than this; it's going to be the same cores on the same process now.
iphonebestgamephone - Thursday, December 3, 2020 - link
There's the Mali GPU.
peevee - Wednesday, December 2, 2020 - link
"This year’s choice of switching back to a Samsung process for the flagship SoC seems to be a vote of confidence in the new process node - as otherwise Qualcomm likely wouldn’t have made the switch"

Such decisions are made for monetary reasons and monetary reasons only. Samsung 5nm is not as good as TSMC 5nm.
zodiacfml - Wednesday, December 2, 2020 - link
Why a huge single core? Impressive ISP; it's still baffling that traditional camera makers can't make a camera from smartphone parts. Current mobile camera sensors are already on par with APS-C cameras with a kit zoom lens.
techconc - Thursday, December 3, 2020 - link
Why a huge single core? Single-core performance is pretty much what matters most, because ALL applications can benefit from it. Also, lots of web apps (JavaScript-based) are single-threaded by nature. That's why iPhones just crush Android phones on all of the JavaScript/web benchmarks.
powermacg5@ - Thursday, December 3, 2020 - link
Hello everyone. Excuse my ignorance. What is the difference between the Apple A14's 11 AI teraflops and the Snapdragon 888's 26 AI teraflops? Even the 865 seems to have more teraflops than the A13, but I can't quite understand the difference in normal use. It does not seem to me that the various devices that use Snapdragon make great use of AI.
The Hardcard - Thursday, December 3, 2020 - link
AI bragging usually involves integer operations, as they work for inference and ALUs of all types can do more of them than floating-point operations. So TOPS, not teraflops.

I don’t know if the full comparison is available, because Qualcomm’s figure includes every unit on the SoC without breaking down how many TOPS come from the CPU, how many from the GPU, and how many from Hexagon.
Apple’s figure is just from the Neural Engine. They use their GPU too, but I haven’t seen them give a public figure. They also brag about putting matrix accelerators in the CPU (big cores only?) but don’t get into how many TOPS the whole SoC can do. Maybe it’s in the developer docs?
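The arithmetic behind these headline numbers is simple: a multiply-accumulate counts as two ops, so peak TOPS ≈ MAC units × 2 × clock. A sketch with invented unit counts - the per-block breakdown below is purely hypothetical, since neither vendor publishes it, and is chosen only to reproduce the 11 vs 26 totals:

```python
# Peak TOPS = MAC units * 2 ops/MAC * clock (GHz) / 1000.
# The MAC counts and clocks below are invented to reproduce the
# marketing totals; neither Apple nor Qualcomm publishes this breakdown.

def peak_tops(mac_units: int, clock_ghz: float) -> float:
    return mac_units * 2 * clock_ghz / 1000.0

# NPU-only accounting (Apple-style): one block.
npu_only = peak_tops(5500, 1.0)

# Whole-SoC accounting (Qualcomm-style): sum every unit that can
# do int8 math, however impractical it is to use them all at once.
whole_soc = sum(peak_tops(m, f) for m, f in [
    (5500, 1.0),   # NPU
    (6000, 1.0),   # GPU ALUs doing int8
    (1500, 1.0),   # DSP
])
print(npu_only, whole_soc)  # same math, very different scope
```

The point is that the two figures measure different scopes, so comparing them head-to-head says little about real inference speed.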
techconc - Thursday, December 3, 2020 - link
It's important to note that these "TOPS" ratings are marketing-driven and not an apples-to-apples comparison. Apple advertises the speed of its Neural Engine only (11 TOPS). Qualcomm's advertised TOPS number represents the theoretical capacity of ALL of its computing units combined - CPU, GPU, DSP, etc. Apple's number would be much higher as well if it included all of those other units, not to mention its dedicated matrix-multiplication units.
iphonebestgamephone - Thursday, December 3, 2020 - link
Better data gathering.
Anymoore - Thursday, December 3, 2020 - link
Qualcomm mentioned the "most advanced 5nm"; that's not Samsung, especially not the early LPE version of it.
kwinz - Thursday, December 3, 2020 - link
That's disappointing. Especially no AV1 hardware decode.
techconc - Thursday, December 3, 2020 - link
"Another rationale for the foundry switch could be manufacturing capacity. As Apple is eating up a lot of TSMC’s early 5nm capacity with the A14 and M1, Qualcomm probably saw Samsung’s 5LPE as the safer choice this year..."

This is the most likely scenario and justification to move to Samsung.
techconc - Thursday, December 3, 2020 - link
"One interesting capability that Qualcomm was advertising is triple-stream 4K HDR video recording. That’s a bit of an odd-ball use-case as I do wonder about the practical benefits..."

Nothing oddball about it. Apple's A13 has this capability as well. There are pro apps like Filmic Pro which allow you to do things like capture a documentary-type interview on multiple cams at the same time. It may be something of an edge case, but it can be useful. It makes sense that Android phones start to get some parity on this type of feature.
Plumplum - Saturday, December 5, 2020 - link
Many things on this 888 are old... Regarding this triple-stream 4K HDR, not only does the Mediatek Dimensity 1000 have it, the 820 and 800 have it too...
We can see the 3 ISPs too... available since... the Helio P60... even Unisoc has SoCs with 3 ISPs!
If the market wasn't ruled by Qualcomm, we would already have had zooming capabilities across 3 cameras for a long time, like Apple does.
Still no AV1 hardware support... Qualcomm wants us to pay licences for MPEG-5!
At least the 888 is able to do 5G carrier aggregation; the 865 wasn't... it means that until mmWave becomes available, in many years and in many countries, even a Dimensity 700 would be able to reach faster 5G speeds than the 865.
Plus it's 5nm from Samsung, not that much better than TSMC's 7nm:
TSMC 7FF: 96.5 MTr/mm²
TSMC 7FFP: 113.9 MTr/mm²
Samsung 5LPE: 126.5 MTr/mm²
TSMC N5: 173.1 MTr/mm²
Efficiency won't be as good on 5LPE!
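Taking the quoted density figures at face value, the ratios are easy to sanity-check; a quick sketch (transistor density is only one input to efficiency, not a direct measure of it):

```python
# Density figures (million transistors per mm^2) as quoted above.
density = {
    "TSMC 7FF":      96.5,
    "TSMC 7FFP":    113.9,
    "Samsung 5LPE": 126.5,
    "TSMC N5":      173.1,
}

# Relative to Samsung 5LPE: N5 packs ~37% more transistors per mm^2,
# while 5LPE itself is only ~11% denser than TSMC's 7FFP.
for name, d in density.items():
    print(f"{name:13s} {d / density['Samsung 5LPE']:.2f}x of 5LPE")
```

So by these numbers 5LPE sits much closer to TSMC's 7nm-class nodes than to N5, which is the commenter's point.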
iphonebestgamephone - Sunday, December 6, 2020 - link
Mediatek gonna trash Qualcomm this year!!
ChrisGX - Sunday, December 13, 2020 - link
7FFP has the same density as 7FF but has a number of improvements. The dense TSMC process is 7FF+ (also known as N7+), an EUV node, and that is the node you are referring to. Somebody seems to have wrongly assumed that the P refers to "plus" - it doesn't. Still, the error, after passing through many hands, has become pervasive on the web.
EMMVIN - Thursday, December 3, 2020 - link
When can we expect the detailed analysis of the SoC and benchmark comparisons? There is nothing out there that is as detailed as yours. Hope you guys already have an early sample to test :)
tuxRoller - Friday, December 4, 2020 - link
I'm curious if the move to Samsung negatively affected clocking. If not, then this is a really cheap move in favor of their later 888+.
vladx - Friday, December 4, 2020 - link
Still no AV1 decoding, at least? Pathetic from Qualcomm.
James5mith - Saturday, December 5, 2020 - link
"allowing for the vastly increased workload handoff time between the different execution engines"

So it's much slower now?
peevee - Thursday, December 10, 2020 - link
"multi-frame noise reduction engines inside of the ISPs. It’s said that the quality of the noise reduction has been improved this generation, allowing for even better low-light captures with the native capture mode (no computational photography)"

What you have described IS computational photography.
KarlKastor - Saturday, December 12, 2020 - link
@Andrei
I can't really understand where you see 7LPP at a disadvantage in comparison to N7.
If you compare the 765G with the middle cores of the 855, you see similar power draw.
The comparison to the Kirin shows higher frequency and higher power for the Kirin. I can't see a winner here. You can't just look at the efficiency score; the performance is higher with the Kirin because of the bigger caches - double the L2 and 4x the L3. Memory bandwidth and system-level cache are bigger too.
ChrisGX - Sunday, December 13, 2020 - link
You have a point in connection with the N7, but you are on shakier ground with regard to the N7P and N7+. The Semiwiki article shows the density advantage of the N7+ process (otherwise known as 7FF+, and mislabelled in the article as 7FFP) used in the Kirin 990 5G SoC. TSMC data likewise underscores the greater density and energy efficiency of the N7+ process.
https://consumer.huawei.com/ae-en/community/detail...
https://semiwiki.com/semiconductor-manufacturers/s...
https://pr.tsmc.com/english/news/2010
Bunny13 - Wednesday, January 6, 2021 - link
The more I know about the 888, the more disappointed I am.
Device Guru - Monday, February 1, 2021 - link
As I understand it, Band 53/n53 will make its first appearance in the 888. A great band for aggregation/5G, plus it is a global band, providing users with maximum flexibility.
ChrisGX - Thursday, February 4, 2021 - link
On its face, going with unaltered clock rates for the SD888 compared to the previous generation (SD865) seems like the smart move: it gives a good peak performance boost of around 30% (when using the new Samsung 5LPE process) without pushing power consumption into problematic territory, and reduced power consumption on the modestly clocked A78 cores should more than compensate for any increase in power consumption on the X1 core. Yet the overall picture of power consumption for the SD888 doesn't seem to have quite worked out. Early information on the performance of the SD888 SoC in real-world situations seems to indicate that while it does a great job of sustaining close-to-peak performance levels even under heavy load, it doesn't achieve that feat without producing troublingly elevated thermals. Many reports of engineering samples of new flagship Android smartphones becoming uncomfortably hot and suffering accelerated battery drain have recently appeared.

What are we to make of this? First, it is clear that Qualcomm has tried to achieve a lot in this generation - almost everything in the SD888 has changed, and the most impressive improvements are outside the CPU complex. Most notably, the 5G modem is now integrated with the Snapdragon SoC and the iGPU has undergone a major upgrade. It seems that the power savings available from the 5LPE process have been insufficient to deliver the full benefit of incorporating these improved components. Qualcomm, no doubt, understood that a feature-rich Snapdragon SoC was going to be essential to reassure the Android faithful that flagship Android phones were keeping pace with the iPhone. The reassurance in this case is partial. A truly compelling case for the SD888 might require an upgraded 5nm silicon process that matches the density and power savings of TSMC's N5 process. The SD888 looks to me like the right SoC in the wrong (Samsung 5LPE) silicon.
Where is the 5LPP process that the SD888 (and the Exynos 2100) needs, Samsung?
Indian Tech Hunter - Saturday, September 4, 2021 - link
I really want to own a smartphone with a Snapdragon 888 or even a Snapdragon 888+. It is a 5nm-based chipset, which means it will be a lot more powerful and efficient too; that's why I want a smartphone with this chipset.