34 Comments
wrkingclass_hero - Tuesday, January 12, 2021 - link
So far Samsung's 5mm seems like a dud... we may have a hot chip on our hands here.
Deicidium369 - Tuesday, January 12, 2021 - link
based on what empirical data? - and it's nm not mm
yeeeeman - Tuesday, January 12, 2021 - link
Based on the not-so-empirical data that is the Snapdragon 888, which consumes a lot of power and has worse thermals vs the SD865+ while performance is not that far off.
s.yu - Wednesday, January 13, 2021 - link
They should've called it 6nm to save some face.
Eliadbu - Saturday, January 16, 2021 - link
7nm, 6nm, 5nm: it's all marketing. The name doesn't tell you anything about the node's capability or any other metric. AFAIK TSMC N5 is ahead of Samsung's 5nm in every single metric.
Makaveli - Tuesday, January 12, 2021 - link
When they release RDNA 2 in this SoC, but in the Snapdragon version (since I'm in NA), that is when I will upgrade from my S10.
Rudde - Tuesday, January 12, 2021 - link
The Snapdragon version isn't going to have a RDNA based GPU. Or am I understanding you wrong?
Makaveli - Tuesday, January 12, 2021 - link
Where did they state the Snapdragon version won't have an RDNA based GPU? Now you've got me curious.
Rudde - Tuesday, January 12, 2021 - link
Snapdragon has used Qualcomm-designed Adreno GPUs from the start. Samsung's Exynos, meanwhile, has used ARM-designed Mali GPUs. The partnership between Samsung and AMD is only for the SoCs designed by Samsung, i.e. Exynos. Including an RDNA-based GPU in Snapdragons would require a partnership with Qualcomm (as the GPU is integrated into the SoC), which would bring the GPUs into competing premium smartphones. Samsung wants an RDNA-based GPU to compete with Qualcomm's Adreno, as the GPU is the main differentiation point between Samsung- and Qualcomm-made SoCs.
Makaveli - Tuesday, January 12, 2021 - link
Well then, I hope this Exynos version actually matches, or comes close to, the Snapdragon version's performance.
Rudde - Tuesday, January 12, 2021 - link
"Samsung will only be able to use AMD’s GPU IP to counter Qualcomm’s Adreno GPUs in the mobile market"
From https://www.anandtech.com/show/14492/samsung-amds-...
yeeeeman - Tuesday, January 12, 2021 - link
Getting a bit bored of this whole RDNA GPU hype. Are the RDNA desktop cards so... amazing? Are they so efficient? What exactly is so amazing about AMD cards that makes you think they could put that architecture in a mobile GPU and fare better than the existing GPUs that are much better tuned for mobile?
I think only the 5-year-olds here are getting wet about the RDNA GPU in a mobile SoC. I don't expect much from it.
phoenix_rizzen - Tuesday, January 12, 2021 - link
Considering the Adreno GPU tech originally came from AMD, everyone is hoping lightning will strike twice, and AMD will have something that can compete with (or even beat) Qualcomm's GPU.
Kangal - Wednesday, January 13, 2021 - link
...maybe.
Though the primary reason is fanboyism. Since the revolution of Ryzen 1, and also RDNA-1, a lot of people have come to like AMD's products. So the hope is that perhaps they can revolutionise the iGPU segment in ARM SoCs. I don't see that happening with GCN, Vega, or RDNA-1, honestly. If, and that's a big IF, they can miniaturise RDNA-2 (or something better), then they possibly MIGHT have a winner on their hands. AMD's GPU technology is the second best, but they don't have any devices in the ultra-portable segment, apart from the yet-to-be-seen Samsung designs.
Hence this hope is not completely unfounded.
For instance, when Nvidia made the switch from their "unified architecture" (Kepler) to their "tile-rasterisation architecture" (Maxwell), they made a pretty big improvement in performance per watt (2014-2016: GTX 750/Ti, GTX 960, GTX 980 Ti), especially in the low-power segment. So much so that they ported the Maxwell GPU to an ARM SoC, and that's how we ended up with the Nvidia Shield TV and the Nintendo Switch (albeit severely handicapped on 20nm planar instead of a 14/16nm FinFET node). Nvidia does have the cutting-edge best technology when it comes to GPUs; however, outside of the above examples they really haven't done much with GPUs in ARM SoCs in recent years.
With that said, there's actually not much competition for iGPUs in the SoC market anyway. Apple is the current market leader, but they don't have to compete, as iOS devices are distinct from Android devices. Still, Apple only makes mediocre improvements yearly, thanks to no competition and their access to and customisation of PowerVR GPU technology. Imagination Technologies used to be the market leader in Android, but they've sunk in the past 6 years or so, and recently had an aggressive buyout from China. Qualcomm uses Adreno tech, which they acquired from Radeon/ATi/AMD long ago. They're good, but they also only introduce modest improvements each year... they do not have a competitor. The next best thing is Intel with their Iris Pro HD iGPUs. They've always been decent from a performance-per-watt standpoint, and lately, with their new "Xe architecture" in the 11th-gen Tiger Lake chips, they're pretty competitive. The only thing is, they're solely in x86 designs, as they do not make ARM SoCs, nor do they have any Atom SoC for phones anymore. Last but not least, ARM's GPU division "Mali", from Norway, produces GPU designs which they license for free or really cheaply, and thus have a big market share. However, the Mali technology is immature compared to the likes of the latest Xe, Adreno, PowerVR, RDNA, or CUDA.
Unashamed_unoriginal_username_x86 - Tuesday, January 12, 2021 - link
Here's a comparison of the two flagship processors, just copied and pasted: https://docs.google.com/spreadsheets/d/1ciPXd5loz1...
A lot of details are missing, but it seems Samsung's trying to edge QC out where they can, QC probably has a good lead in DPU, ISP and GPU.
Unashamed_unoriginal_username_x86 - Tuesday, January 12, 2021 - link
Ignore that last bit, I have no clue what I'm talking about there
dragosmp - Tuesday, January 12, 2021 - link
Andrei, this is the most hopeful and optimistic I remember seeing you about Samsung Exynos products in years. Hope it's well placed, though the claims vs expectations do seem to strain mathematics and common sense quite a bit. One can hope it's at least comparable to Qualcomm's flagship.
My feeling is that the lack of competition made Qualcomm slow down quite a bit; it took Apple's M1 to show what Arm can actually become.
Looking forward to a deep dive
Rudde - Tuesday, January 12, 2021 - link
Isn't Qualcomm just as far from Apple as they have been for the last few years? I too hope Samsung manages to catch up to Qualcomm with this new processor.
abufrejoval - Tuesday, January 12, 2021 - link
As a consumer I take a bit of offence at a statement like this: "While Samsung had been patient with the SARC CPU design team..."
It's not Samsung's management whose patience was tried; it was the consumers, who had a sub-par part forced onto them for a premium price, simply and only because they happened to be living in the wrong part of the world.
The sheer arrogance of that decision speaks volumes about one facet of a company that bribes heads of state as another means of getting what it wants for itself.
s.yu - Wednesday, January 13, 2021 - link
There's very little to no barrier to importing from another region, so don't blame the manufacturer for your own reluctance to buy from those sources.
And since prices differ for other brands/models across regions, the playing field is unique to each region; otherwise you could similarly say Xiaomi "forces a premium price" on every market beyond China.
You're probably living in Europe, but face it, most things are just expensive there. On the "up side", you could say Chinese consumers all have "sub-par parts forced onto them for a premium price" by European car manufacturers?
Silver5urfer - Tuesday, January 12, 2021 - link
This is basically a Qcomm 888 clone in a way. And finally a good SoC to have a BL-unlockable SKU from Samsung. But the big problem is that this garbage phone will have no SD slot, no 3.5mm jack, no charger. But a price increase, with more bullshit OS features, ads and bloat.
grahaman27 - Tuesday, January 12, 2021 - link
You mean Qualcomm and Samsung both share the same ARM platform? When you read into it, the Cortex-X1 is the first core that offers truly custom features... so Qualcomm and Samsung could each customise what they think is best (like the NPU). Not a clone.
AnnaFar1s - Tuesday, January 12, 2021 - link
Actually, customization requests are accepted by ARM before the X1 is finalized, but they release only one version of the X1 to every customer, which means Samsung and Qualcomm get the same X1.
darkich - Tuesday, January 12, 2021 - link
What I hope is that Samsung now really steps it up and makes a tablet/Chromebook Exynos chip.
The time is really due for a true and proper Cortex-A78C implementation, with 6x A78C + 2x X1.
Given the top-notch efficiency of the A78, this chip should offer far better battery life than the latest Intel and AMD based ultraportable systems, and performance should be more than competitive too (probably around 7k+ on Geekbench multi-core).
Ptosio - Tuesday, January 12, 2021 - link
Could somebody explain to me how Samsung's cores, developed based on ARM cores, got worse than the basic ARM cores they were built upon?
It's not based on ARM cores. Samsung developed the M1 from scratch, and the M5 is an iteration of the M1.
lmcd - Tuesday, January 12, 2021 - link
There's an article about it somewhere. From memory, it's something like: their pipeline was long, their core design was wide, their mispredict penalty was worse, and it didn't decode instructions fast enough to saturate the core. Which is a lot of problems to have in a single core design.
Kangal - Sunday, January 17, 2021 - link
Also, the standard scores (performance charts) you get directly from ARM are mostly theoretical or based on prototypes.
Actual consumer devices have other complications and features to worry about. So in practice, the Qualcomm scores are usually slightly lower than the ARM scores. Then you have even lower scores from other companies: Nvidia, HiSilicon, then MediaTek, then Amlogic, then Rockchip, then Allwinner, etc. These companies aren't as good as Qualcomm when it comes to optimising ARM SoCs, and perhaps don't have access to the same quality/grade of lithography.
Samsung I would peg as better than all except Qualcomm when it comes to optimising, if we are comparing standard cores. Their high-performance cores (aka Mongoose) are a very different custom core design, and unfortunately these have been struggling to compete since 2018. Apple also uses custom cores, which they started with the A7 chipset in 2013, and they have been leading the industry (ahead of Qualcomm) since then.
ksec - Tuesday, January 12, 2021 - link
According to "many" previous comments, Samsung was the evil company deliberately not including AV1 decoding so they could support their own MPEG-5 EVC codec.
I wonder what they have to say about it now.
eastcoast_pete - Tuesday, January 12, 2021 - link
The key question, besides the bragging rights for fastest non-Apple mobile SoC, is power consumption. The Mi 11 (SD 888) apparently displayed quite an appetite for those precious mAh, and the previous Exynos flagships weren't exactly power misers. I look forward to Andrei dissecting the SD 888 and the Exynos 2100. Maybe this time, Samsung has a shot at besting QC for once?
phoenix_rizzen - Tuesday, January 12, 2021 - link
In theory, if they split the X1, the A78, and the A55 cores into separate power domains, they have a chance at better power efficiency than the SD888 (where the X1 and A78 are in one power domain).
But with such high clock speeds, and such a beefy 14-core GPU, they're really going to have to tweak the power curves and power control circuitry to prevent it from being a power hog / heat generator. :)
eastcoast_pete - Tuesday, January 12, 2021 - link
Yes, and that (not being a power hog) will probably depend on who can better resist the temptation to just keep maxing out that X1 core in search of higher AnTuTu or whatever scores. The X1 is a very capable big ARM core, but it's designed for speed, not efficiency. So whichever SoC best avoids the "pedal to the metal" mentality might have a chance at being the actual flagship SoC of choice. This might be a "tortoise and the hare" scenario, where the tortoise ends up winning the race in real-world usability. I'll watch for battery life as the key outcome in this comparison.
Kangal - Wednesday, January 13, 2021 - link
This might be alleviated by having three power profiles:
- max efficiency (disable X1/C78, manage A55)
- standard (burst X1, manage C78, max out A55)
- max performance (prioritise C78, manage X1, disregard A55)
When you open a benchmark like AnTuTu, Geekbench, etc., the phone should NOT change power profiles, as that would be cheating. However, it should give a prompt and ask which profile you prefer to use. That way, the onus is on the user. And many consumers, users, and unprofessional reviewers (like 90% of YouTube) won't be able to resist the temptation to run the benchmark in Max Performance mode.
This would give Samsung the crown when it comes to benchmarks in their penis-measuring contest, yet wouldn't be cheating, and would give people balanced efficiency during real-world use.
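The three profiles proposed above map fairly naturally onto per-cluster Linux cpufreq settings (a governor plus a frequency cap for each core cluster). Here's a minimal sketch of that idea in Python; the cluster names, governor choices, and clock caps are illustrative assumptions, not values from any actual Exynos or Snapdragon firmware:

```python
# Hypothetical mapping of the three suggested power profiles onto
# per-cluster cpufreq settings (governor + max clock in MHz).
# All numbers here are made up for illustration.

PROFILES = {
    "max_efficiency": {
        "X1":  {"governor": "powersave",   "max_mhz": 0},     # big core parked
        "A78": {"governor": "powersave",   "max_mhz": 1800},
        "A55": {"governor": "schedutil",   "max_mhz": 2000},
    },
    "standard": {
        "X1":  {"governor": "schedutil",   "max_mhz": 2600},  # short bursts only
        "A78": {"governor": "schedutil",   "max_mhz": 2300},
        "A55": {"governor": "performance", "max_mhz": 2000},
    },
    "max_performance": {
        "X1":  {"governor": "performance", "max_mhz": 2900},
        "A78": {"governor": "performance", "max_mhz": 2800},
        "A55": {"governor": "schedutil",   "max_mhz": 2000},
    },
}

def settings_for(profile: str, cluster: str) -> dict:
    """Look up the cpufreq settings a given profile applies to a cluster."""
    return PROFILES[profile][cluster]

# Example: what "max efficiency" does to the X1 cluster.
x1_eff = settings_for("max_efficiency", "X1")
```

On a real device, applying a profile would mean writing each cluster's governor and cap to its cpufreq policy in sysfs; the benchmark-prompt idea above then just chooses which of these dictionaries gets applied.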
t0mat0 - Tuesday, January 12, 2021 - link
27% more cores and 10 percent better per-core performance: 1.27 * 1.1 = 1.397, so rounded up to the claimed 40% better graphics score.
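The arithmetic in that comment assumes the two gains compound multiplicatively, i.e. more cores and faster cores scale independently (a best-case assumption for GPU workloads, which usually parallelise well). A quick sanity check:

```python
# Compound the two claimed gains multiplicatively:
# 27% more GPU cores, 10% better per-core performance.
cores_gain = 1.27
perf_gain = 1.10

combined = cores_gain * perf_gain  # ~1.397, i.e. roughly a 40% uplift
```

Real scaling would fall short of this if the extra cores are bandwidth- or thermally-limited, which is why marketing figures like "40%" deserve the rounding-up caveat.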