More accurately "upper midrange". Though I'm very curious to see how Samsung's flagship SoC will differ. Will they just use cherry-binned variants of the same dies with slightly faster A78 clocks (e.g. 1 core with 3 GHz and 3 cores with 2.8 GHz) and perhaps an iGPU with 12 or 14 cores (Mali G78 supports up to 24 cores) that have potentially been disabled in this variant for yield reasons?
p.s. One other option might be to use a Cortex-X1 core for the single "biggest" core (with 3 or 4 A78 cores clocked at 2.8 GHz), though that would be a different design and thus quite more costly.
So that means there will be another SoC for the Samsung top SKU ? If I ever own a Samsung I would only get Exynos because of one reason - Bootloader unlock for NA market. And they announced partnership with AMD right, for the RDNA2, not sure when will we be able to see it materialized.
Last year flagship was Exynos 990. This is Exynos 1080 meant to replace last year's Exynos 980. Arm has stated that they are in partnership with Samsung to deliver Cortex-X1 core in their product with Arm's CXC program. So, expect another announcement later about Exynos 1090 for Samsung's 2021 flagship.
And what exactly do you expect to be so special about rdna 2 gpu? Do you think it will beat established mobile vendors in efficiency? I seriously doubt it so I don't understand why people keep harping on it. Other than a curiosity of how it will fare I don't see anything interesting about it.
There are some supposed leaked benchmarks of Exynos/RDNA2 showing it trashing Qualcomm Adreno. That has fed into the hype. Other than that, maybe there will be some benefits to having RDNA2+ across desktop/laptop, consoles, and smartphones.
The real game changer will be when monolithic 3D chips start appearing in smartphones and everything else. 5-10 years from now.
I agree monolithic 3D chips using carbon nanotubes combined with Persistent Memory based on spintronics (MRAM) and/or Nantero carbon nanotube NRAM has the potential to be very disruptive, but unfortunately without Elon Musk style investment, I would think it will appear only somewhere between 2027 - 2035 timeframe :(...
Not sure about NRAM but STT-MRAM has already been used in some products. Still only in small sizes though (up to 256 MB), probably because the latency of STT-MRAM deteriorates fast as it gets bigger (just like SRAM). At very small sizes it is at least as fast as 6T-SRAM, but it is of course persistent and also quite denser, thus takes quite less die space. At larger sizes it has a latency similar to DRAM.
Even 5, 10 or 40 years from now the heat management issue of 3D stacked chips will remain. Lakefield barely managed to work without active cooling between the dies despite having a mere 7W of TDP. As AnandTech's review showed its "big" Sunny Cove core is only used in short bursts for "quick responsiveness", not for sustained single threaded code execution. Why? Because with passive cooling its thermal load cannot be handled.
3D chips, particularly with TDPs above 5W require active cooling with something like microfluidics between the dies to function properly, but this type of cooling has never made it out of labs and beyond the R&D stage (probably due to the high cost and high complexity of implementation). Below 5W 3D chips can probably work with good passive cooling solutions though.
There's no reason to doubt that AMD can produce something good.
After all Qualcomm's Adreno graphics was created by ATI who were bought by AMD, who then sold off their mobile graphics. Adreno is an anagram of Radeon.
So the design chops and pedigree are there at AMD.
The most exciting part about RDNA in mobile SoCs would be, that there are great open source drivers, which could be adapted to mobile (and very likely could be used by default), which would greatly enhance community efforts, such as LineageOS.
"Do you think it will beat established mobile vendors in efficiency" Possibly, yes. ARM's GPU efficiency has always been sub-par; the only real innovators have been Qualcomm (evolved from old ATi tech) and Imagination (via Apple). RDNA 2 looks pretty fierce on the power efficiency front even compared with Nvidia, and AMD have learned a lot about designing for low-power low-bandwidth SOCs recently; the biggest questions are about area efficiency and how low they can scale the design.
Four Big A78 cores sound pretty "grown up" to me; should be a substantial increase over the 2+6 BigLittle 980 with only 2 big cores, plus some generational uplift. This "premium mid-range" SoC might well reach SD865 performance levels, maybe not so on the graphics side, but still competitive - not bad at all. I look forward to your review of a phone with that SoC inside.
Still one fly in that ointment: why no AV1 decode support? Youtube, Netflix etc are supposedly all hot to trott on using AV1 to save on bandwidth. And, while I wouldn't expect it anytime soon, an AV1 encode ASIC SIP in a smartphone SoC would be really nice, especially for live streaming; however, I don't believe such an ASIC even exists all by itself, save as (very expensive) FPGAs, which are really more for prototyping.
If Samsung 5nm doesn't make it, it would need to be another node. Until someone could make it in a cost / performance / power efficient way. Especially on Mobile. Laptop And Desktop dont have as much constrain.
This 1x A78 @ 2.8 + 3x A78 @ -200 MHz is kind of funny description. It's kind of like Intel would describe my W-2265 as: 2x Cascade Lake @ 4.8GHz + 2x Cascade Lake @ 4.7GHz + 8x Cascade Lake @ 4.6GHz. Note, the description is based on a real chip and Linux claim about it: $ for i in `seq 0 11`; do cat /sys/devices/system/cpu/cpu$i/cpufreq/scaling_max_freq; done 4600000 4600000 4600000 4600000 4700000 4700000 4600000 4800000 4600000 4600000 4600000 4800000
Unlike the Intel CPUs with few cores, which support "Turbo 2.0", i.e. all cores have the same maximum turbo frequency, the Intel CPUs with more cores, e.g. your Cascade Lake, support "Turbo 3.0", where each core might have a different maximum turbo frequency, which you see in your example.
Intel invented "Turbo 3.0", as a step back from "Turbo 2.0", because otherwise they would have been force to sell most processors at a lower price, based on the worst core they happened to include. With "Turbo 3.0", they sell the processor at the higher price corresponding to the best core.
Well considering the 8nm LPP is actually a "10nm++", it looks like they advanced not one but two process nodes at once, meaning this new midranger might actually be smaller than the older Exynos 980.
Plus, depending on the memory configuration, this looks like it might be capable of performing better than the Exynos 990.
If Samsung is putting this chip on the higher end Galaxy A smartphones then it's looking like that range will get even more popular.
Isn't the process node (Samsung's 5 nm LPE) the same node Qualcomm's upcoming 875 is supposedly fabbed in? Apparently, Samsung claims their 5 nm is superior to TSMC's 5 nm process, although price probably also played a role in choosing the fab.
One thing I wonder more and more as the structures get smaller is heat dissipation. Power consumption should still be relatively lower compared to 7 nm or larger, but is there a sweet spot for density for each node? I believe Intel ended up lowering the density in one of their 14 nm plus etc nodes to achieve higher frequencies, but maybe someone here can point to a reference for that.
Really? What on earth motivates "anyone" to watch hd and above video on a <8" screen? It's an utter waste of time/CPU/GPU/bandwidth. It's a *tiny* screen, with shit sound. WTF?
@hansellt Why? Who gives a crap on a tiny screen? Video on phones is beyond pointless, just give me something I can read rather than some halfwit bellend blathering on. Codecs are as irrelevant as video is.
Any news on a Samsung SoC with X1 for Windows-on-ARM ultralights? Something like this, but with an X1 and a few more Mali cores might to as well or better than the 865 variant Microsoft uses in its Surface X, and be cheaper. Right now, the two biggest obstacles to Windows-on-ARM are the lack of native apps (supposedly coming) and the high price of the Surface X.
It's actually a Snapdragon 855 variant that MS are using - not 865 - so even this chip would bat it around (though it's unclear whether that would also apply to the GPU). Something with at least one X1 core would slaughter the SQ1 / SQ2 and likely humiliate Intel's Lakefield chips, too.
It is interesting that mmWave 5G has a quite slower download speed than sub-6GHz 5G but (the former) supports fully symmetric download and upload speeds of ~3,7 Gbps. I understand why its upload speed is quite faster (since it operates at a much higher frequency range) but not why its download speed is so much slower - is that a typo or a deliberate scheme by Samsung to make the download & upload speed symmetric?
Not that it is ever going to be used anywhere close to that speed -anywhere in the world- of course. While the SoC supports such speeds the support is partly academic (able to be demonstrated only in labs) and partly marketing driven, since no buyer is ever going to make use of it.
54 Comments
SarahKerrigan - Thursday, November 12, 2020 - link
Quite a substantial core configuration for a midrange processor.
Santoval - Saturday, November 14, 2020 - link
More accurately "upper midrange". Though I'm very curious to see how Samsung's flagship SoC will differ. Will they just use cherry-binned variants of the same dies with slightly faster A78 clocks (e.g. 1 core with 3 GHz and 3 cores with 2.8 GHz) and perhaps an iGPU with 12 or 14 cores (Mali G78 supports up to 24 cores) that have potentially been disabled in this variant for yield reasons?Santoval - Saturday, November 14, 2020 - link
p.s. One other option might be to use a Cortex-X1 core for the single "biggest" core (with 3 or 4 A78 cores clocked at 2.8 GHz), though that would be a different design and thus quite a bit more costly.
Silver5urfer - Thursday, November 12, 2020 - link
So that means there will be another SoC for the Samsung top SKU? If I ever own a Samsung I would only get Exynos for one reason - bootloader unlock for the NA market. And they announced a partnership with AMD, right, for RDNA2; not sure when we will see it materialize.
SarahKerrigan - Thursday, November 12, 2020 - link
I think it's likely the top SKU will have an Exynos with at least one Cortex-X1.
Fulljack - Thursday, November 12, 2020 - link
Last year's flagship was the Exynos 990. This is the Exynos 1080, meant to replace last year's Exynos 980. Arm has stated that they are in partnership with Samsung to deliver a Cortex-X1 core in their products via Arm's CXC program. So expect another announcement later about an Exynos 1090 for Samsung's 2021 flagship.
yeeeeman - Thursday, November 12, 2020 - link
And what exactly do you expect to be so special about the RDNA 2 GPU? Do you think it will beat established mobile vendors in efficiency? I seriously doubt it, so I don't understand why people keep harping on it. Other than curiosity about how it will fare, I don't see anything interesting about it.
nandnandnand - Thursday, November 12, 2020 - link
There are some supposed leaked benchmarks of Exynos/RDNA2 showing it trashing Qualcomm's Adreno. That has fed into the hype. Other than that, maybe there will be some benefits to having RDNA2+ across desktop/laptop, consoles, and smartphones.

The real game changer will be when monolithic 3D chips start appearing in smartphones and everything else. 5-10 years from now.
Diogene7 - Friday, November 13, 2020 - link
I agree monolithic 3D chips using carbon nanotubes, combined with persistent memory based on spintronics (MRAM) and/or Nantero's carbon nanotube NRAM, have the potential to be very disruptive, but unfortunately without Elon Musk style investment, I would think it will appear only somewhere in the 2027 - 2035 timeframe :(...
Santoval - Saturday, November 14, 2020 - link
Not sure about NRAM, but STT-MRAM has already been used in some products. Still only in small sizes though (up to 256 MB), probably because the latency of STT-MRAM deteriorates fast as it gets bigger (just like SRAM). At very small sizes it is at least as fast as 6T-SRAM, but it is of course persistent and also considerably denser, so it takes up less die space. At larger sizes it has a latency similar to DRAM.
Santoval - Saturday, November 14, 2020 - link
Even 5, 10 or 40 years from now, the heat management issue of 3D stacked chips will remain. Lakefield barely managed to work without active cooling between the dies despite having a mere 7W TDP. As AnandTech's review showed, its "big" Sunny Cove core is only used in short bursts for "quick responsiveness", not for sustained single threaded code execution. Why? Because with passive cooling its thermal load cannot be handled.

3D chips with TDPs above 5W in particular require active cooling with something like microfluidics between the dies to function properly, but this type of cooling has never made it out of labs and beyond the R&D stage (probably due to the high cost and complexity of implementation). Below 5W, 3D chips can probably work with good passive cooling solutions though.
Tams80 - Friday, November 13, 2020 - link
There's no reason to doubt that AMD can produce something good.

After all, Qualcomm's Adreno graphics was created by ATI, who were bought by AMD, who then sold off their mobile graphics division. Adreno is an anagram of Radeon.
So the design chops and pedigree are there at AMD.
Ej24 - Friday, November 13, 2020 - link
Wow. Adreno. Radeon. Never noticed. Is this genuinely on purpose or just a happy coincidence?
Spunjji - Monday, November 16, 2020 - link
100% on purpose.
thomasg - Saturday, November 14, 2020 - link
The most exciting part about RDNA in mobile SoCs would be that there are great open source drivers, which could be adapted to mobile (and very likely used by default), which would greatly enhance community efforts such as LineageOS.
Spunjji - Monday, November 16, 2020 - link
"Do you think it will beat established mobile vendors in efficiency"

Possibly, yes. ARM's GPU efficiency has always been sub-par; the only real innovators have been Qualcomm (evolved from old ATi tech) and Imagination (via Apple). RDNA 2 looks pretty fierce on the power efficiency front even compared with Nvidia, and AMD have learned a lot about designing for low-power, low-bandwidth SoCs recently; the biggest questions are about area efficiency and how low they can scale the design.
SydneyBlue120d - Thursday, November 12, 2020 - link
Can we dream of no hi-end Exynos this year? Only Snapdragon 875 worldwide?
ToTTenTranz - Thursday, November 12, 2020 - link
2021 might be seeing Samsung's first SoC with an RDNA GPU. I wouldn't claim clear victory for Qualcomm just yet. It will be Radeon vs. Adreno.
eastcoast_pete - Thursday, November 12, 2020 - link
Four big A78 cores sound pretty "grown up" to me; that should be a substantial increase over the 2+6 big.LITTLE Exynos 980 with only 2 big cores, plus some generational uplift. This "premium mid-range" SoC might well reach SD865 performance levels - maybe not on the graphics side, but still competitive - not bad at all. I look forward to your review of a phone with that SoC inside.
Wilco1 - Thursday, November 12, 2020 - link
It should be similar to SD865+ CPU performance, since a 2.8GHz Cortex-A78 is about the same as a 3.0GHz Cortex-A77.
eastcoast_pete - Thursday, November 12, 2020 - link
Still, one fly in that ointment: why no AV1 decode support? Youtube, Netflix etc. are supposedly all hot to trot on using AV1 to save on bandwidth.

And, while I wouldn't expect it anytime soon, an AV1 encode ASIC SIP in a smartphone SoC would be really nice, especially for live streaming; however, I don't believe such an ASIC even exists all by itself, save as (very expensive) FPGAs, which are really more for prototyping.
brucethemoose - Thursday, November 12, 2020 - link
Yeah. I consider AV1 hw decode a hard requirement for a new smartphone purchase.

5G, faster CPUs and GPUs... that's all nice. But my smartphone's ultimate bottleneck is my ISP's data cap.
nandnandnand - Thursday, November 12, 2020 - link
No AV1, no buy. Throw it into the incinerator. Nice TOPS tho.
ksec - Thursday, November 12, 2020 - link
If Samsung 5nm doesn't make it, it would need to be another node - until someone can make it in a cost / performance / power efficient way. Especially on mobile; laptop and desktop don't have as many constraints.
Tomatotech - Thursday, November 12, 2020 - link
The Exynos 1080 looks good, I'm looking forward to the 1080 Ti release soon next year.
nandnandnand - Thursday, November 12, 2020 - link
Will that one add the Cortex-X1 core this one should have had?
ZoZo - Thursday, November 12, 2020 - link
I hear they've scrapped that in favor of an RTXynos 2080.
kgardas - Thursday, November 12, 2020 - link
This 1x A78 @ 2.8 + 3x A78 @ -200 MHz is kind of a funny description. It's as if Intel described my W-2265 as: 2x Cascade Lake @ 4.8GHz + 2x Cascade Lake @ 4.7GHz + 8x Cascade Lake @ 4.6GHz. Note, the description is based on a real chip and Linux's claim about it:

$ for i in `seq 0 11`; do cat /sys/devices/system/cpu/cpu$i/cpufreq/scaling_max_freq; done
4600000
4600000
4600000
4600000
4700000
4700000
4600000
4800000
4600000
4600000
4600000
4800000
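That per-bin description can be generated mechanically from the readings above; a minimal Python sketch, using the quoted kHz values as a literal list rather than reading sysfs live:

```python
from collections import Counter

# The twelve scaling_max_freq readings (kHz) quoted above for the W-2265.
freqs_khz = [4600000, 4600000, 4600000, 4600000, 4700000, 4700000,
             4600000, 4800000, 4600000, 4600000, 4600000, 4800000]

# Group identical frequency bins and print them fastest-first, Intel-style.
for khz, count in sorted(Counter(freqs_khz).items(), reverse=True):
    print(f"{count}x Cascade Lake @ {khz / 1e6:.1f}GHz")
```

This reproduces the "2x @ 4.8GHz + 2x @ 4.7GHz + 8x @ 4.6GHz" breakdown described in the comment, one bin per line.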
AdrianBc - Thursday, November 12, 2020 - link
I believe that the Linux claim must be correct.

Unlike the Intel CPUs with few cores, which support "Turbo 2.0" (i.e. all cores have the same maximum turbo frequency), the Intel CPUs with more cores, e.g. your Cascade Lake, support "Turbo 3.0", where each core might have a different maximum turbo frequency, which is what you see in your example.
Intel invented "Turbo 3.0", as a step back from "Turbo 2.0", because otherwise they would have been forced to sell most processors at a lower price, based on the worst core they happened to include. With "Turbo 3.0", they sell the processor at the higher price corresponding to the best core.
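Under "Turbo 3.0" (Turbo Boost Max 3.0) the OS can steer hot threads to those best-binned cores. A small sketch of identifying them from per-core max frequencies - here fed the W-2265 readings from the comment above as a plain dict, not live sysfs data:

```python
def favored_cores(freqs_khz):
    """Return core indices whose max turbo matches the chip's best bin."""
    best = max(freqs_khz.values())
    return sorted(core for core, khz in freqs_khz.items() if khz == best)

# kgardas's W-2265 readings, core index -> max frequency in kHz.
w2265 = {0: 4600000, 1: 4600000, 2: 4600000, 3: 4600000,
         4: 4700000, 5: 4700000, 6: 4600000, 7: 4800000,
         8: 4600000, 9: 4600000, 10: 4600000, 11: 4800000}

print(favored_cores(w2265))  # the two 4.8 GHz cores: [7, 11]
```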
ToTTenTranz - Thursday, November 12, 2020 - link
Well, considering that 8nm LPP is actually a "10nm++", it looks like they advanced not one but two process nodes at once, meaning this new midranger might actually be smaller than the older Exynos 980.

Plus, depending on the memory configuration, this looks like it might be capable of performing better than the Exynos 990.
If Samsung is putting this chip on the higher end Galaxy A smartphones then it's looking like that range will get even more popular.
movax2 - Friday, November 13, 2020 - link
Speaking of Samsung, 5nm isn't a full node shrink over 7nm.
https://en.wikichip.org/w/images/e/eb/5nm_densitie...
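The gap can be put in rough numbers. The densities below are approximate MTr/mm² estimates in the spirit of the WikiChip chart linked above - treat the exact figures as assumptions, not vendor data:

```python
# Approximate logic transistor densities (MTr/mm^2); rough assumed figures.
density = {
    "TSMC N7": 91,
    "TSMC N6": 114,
    "Samsung 5LPE": 127,
    "TSMC N5": 171,
}

base = density["TSMC N7"]
for node, mtr in density.items():
    print(f"{node:12s} {mtr:4d} MTr/mm^2 ({mtr / base - 1:+.0%} vs N7)")
```

On these assumed figures Samsung's 5LPE lands roughly 40% above the 7nm class - closer to a half-node step than the near-doubling of TSMC's N5, which is the point being made.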
James5mith - Thursday, November 12, 2020 - link
"It’s possible that previous performance of these “premium” tier SoCs was as well received as there was a large gap in performance..."

That sentence makes no sense.
Did you mean "wasn't as well received"?
movax2 - Thursday, November 12, 2020 - link
No AV1 format support because Samsung promotes MPEG-5 EVC.
ksec - Thursday, November 12, 2020 - link
If that were the case, you would have seen EVC hardware decoding and encoding support on this chip.
movax2 - Friday, November 13, 2020 - link
It is the case. EVC just isn't ready... that's why there is no support yet.

Samsung openly talks about EVC support. You can google it.
dotjaz - Wednesday, November 18, 2020 - link
And Samsung joined AOMedia as a "founding member".
dotjaz - Wednesday, November 18, 2020 - link
Samsung will support AV1 in their mobile chips. No EVC support will be added before that.
dotjaz - Wednesday, November 18, 2020 - link
That's so dumb. You do realise Samsung is a board member of AOMedia, and the Exynos 1080 doesn't support EVC, right?
webdoctors - Thursday, November 12, 2020 - link
Wow, had no idea Samsung was already on 5nm too. Dang, what happened to Intel's dominance? SAD!
eastcoast_pete - Thursday, November 12, 2020 - link
Isn't the process node (Samsung's 5 nm LPE) the same node Qualcomm's upcoming 875 is supposedly fabbed on? Apparently, Samsung claims their 5 nm is superior to TSMC's 5 nm process, although price probably also played a role in choosing the fab.
AlexDaum - Friday, November 13, 2020 - link
For density, Samsung 5nm is a bit better than TSMC 6nm, but much less dense than TSMC 5nm. I don't know anything about performance or price.
eastcoast_pete - Saturday, November 14, 2020 - link
One thing I wonder about more and more as the structures get smaller is heat dissipation. Power consumption should still be relatively lower compared to 7 nm or larger, but is there a sweet spot for density on each node? I believe Intel ended up lowering the density in one of their 14 nm "plus" nodes to achieve higher frequencies, but maybe someone here can point to a reference for that.
hanselltc - Thursday, November 12, 2020 - link
no av1 rip
Whiteknight2020 - Thursday, November 12, 2020 - link
Really? What on earth motivates "anyone" to watch HD and above video on a <8" screen? It's an utter waste of time/CPU/GPU/bandwidth. It's a *tiny* screen, with shit sound. WTF?
RSAUser - Friday, November 13, 2020 - link
If I'm on the bus I'm not going to lug a laptop with me, and often it can be difficult to carry a tablet around. I always have earphones with me.

There are a lot more use cases than just yours.
iphonebestgamephone - Friday, November 13, 2020 - link
You can HDMI out from the phone to your 4K TV.
Whiteknight2020 - Thursday, November 12, 2020 - link
@hanselltc Why? Who gives a crap on a tiny screen? Video on phones is beyond pointless; just give me something I can read rather than some halfwit bellend blathering on. Codecs are as irrelevant as video is.
nandnandnand - Thursday, November 12, 2020 - link
The latest smartphones can be used like desktop computers, particularly Samsung's with DeX.
MetaCube - Friday, November 13, 2020 - link
2/10
eastcoast_pete - Saturday, November 14, 2020 - link
Any news on a Samsung SoC with an X1 for Windows-on-ARM ultralights? Something like this, but with an X1 and a few more Mali cores, might to as well or better than the 865 variant Microsoft uses in its Surface X, and be cheaper. Right now, the two biggest obstacles to Windows-on-ARM are the lack of native apps (supposedly coming) and the high price of the Surface X.
eastcoast_pete - Saturday, November 14, 2020 - link
"do as well, or better..", of course.Spunjji - Monday, November 16, 2020 - link
It's actually a Snapdragon 855 variant that MS are using - not the 865 - so even this chip would bat it around (though it's unclear whether that would also apply to the GPU). Something with at least one X1 core would slaughter the SQ1 / SQ2 and likely humiliate Intel's Lakefield chips, too.
Santoval - Saturday, November 14, 2020 - link
It is interesting that mmWave 5G has a quite slower download speed than sub-6GHz 5G, but (the former) supports fully symmetric download and upload speeds of ~3.7 Gbps. I understand why its upload speed is quite a bit faster (since it operates at a much higher frequency range), but not why its download speed is so much slower - is that a typo, or a deliberate scheme by Samsung to make the download & upload speeds symmetric?

Not that it is ever going to be used anywhere close to that speed - anywhere in the world - of course. While the SoC supports such speeds, the support is partly academic (able to be demonstrated only in labs) and partly marketing driven, since no buyer is ever going to make use of it.