kenansadhu - Monday, February 27, 2017 - link
Are they the only ones releasing an actual A73 core on a 10nm process? It is possible that this will actually be faster than the custom cores (or slightly slower at a significantly lower price). I hope the GPU will be good enough though; any concrete news on the GPU?
Arch_Fiend - Monday, February 27, 2017 - link
For now they are the only ones. Huawei's HiSilicon is apparently launching a K970 chip sometime this year that will also be produced on TSMC's 10nm process and have A73 cores. Hopefully it will have a higher core count than 8 for its Mali G71 GPU as well.
fm13 - Tuesday, February 28, 2017 - link
no: https://www.gizmochina.com/2017/02/24/xiaomi-pinec...
MrSpadge - Tuesday, February 28, 2017 - link
Of course the GPU will be "good enough" for most users most of the time - or what do you think people are doing with their phones where they're bottlenecked by the GPU performance?
Mil0 - Wednesday, March 1, 2017 - link
Gaming.
Lodix - Wednesday, March 1, 2017 - link
The GPU will be a bit behind the Adreno 530 in the SD820, so it's not bad. And from the numbers given by MTK it could be more efficient, meaning more battery life while gaming and less throttling.
Krysto - Thursday, March 9, 2017 - link
GPU is a little disappointing, not because it's PowerVR, but because it's an older PowerVR. It's a little frustrating to see that MediaTek moved away from Mali just when Mali got good, after keeping the bad Mali GPUs for so many years.
However, I have the feeling that they did this on purpose, as a "compromise" that allowed them to have potentially one of the fastest mobile CPUs on the market - Cortex-A73 on 10nm should give very high performance, potentially even higher than the Snapdragons. We'll see; either way it should be very close.
If this is the compromise they had to make to keep their chips at a lower price, I'm okay with that. It's better to have a top-of-the-line CPU and a less-than-top-of-the-line GPU for the position they're in right now in the market.
Perhaps in a generation or two, they'll be able to have the fastest CPU as well as the fastest GPU, while on the most cutting-edge process. But for now, I'm actually quite impressed with the chip. Cortex-A73 and A35 at 10nm should do wonders for efficiency. If I were them I would've probably gotten rid of the A53, though. Not sure what their purpose is there. Was a "mid-step" in performance really necessary? I kind of doubt it.
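Whether the A53 "mid-step" buys anything comes down to performance ~ IPC x clock. A toy sketch of that arithmetic (the IPC ratios and clock speeds below are invented for illustration, not MediaTek's actual figures):

```python
# Relative single-thread performance ~ IPC * clock.
# All IPC ratios and clocks here are illustrative assumptions, not X30 specs.
def rel_perf(ipc, ghz):
    return ipc * ghz

a73 = rel_perf(2.0, 2.5)   # big core: much higher IPC, high clock
a53 = rel_perf(1.0, 2.2)   # hypothetical mid cluster
a35 = rel_perf(0.85, 1.9)  # little cluster: ~85% of A53 IPC at a lower clock

# Under these assumptions the A53 really does sit in a middle band,
# which is the only case where a third cluster makes sense.
print(a35 < a53 < a73)  # True
```

If the A35 and A53 numbers came out nearly equal instead, the middle cluster would indeed be redundant, which is exactly the question being raised here.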
lagittaja - Thursday, March 9, 2017 - link
What would you rather they use from the PowerVR line in place of the Series7XTP?
Series8XE? Nope, not as powerful.
Series8XE Plus? Nope, not as powerful.
Series8XT? Nope, the Furian architecture was announced just two days ago. It's impressive but too new, remember, Mediatek unveiled the Helio X30 last fall...
Also, API-wise the Series8XE/XEP don't offer anything special over the Series7XTP, and the Furian is still so new (and I'm too tired to do further research on it) that I don't know if it has some magical unicorns the 7XTP doesn't have.
Nevertheless, Rogue is still a really good GPU arch in my opinion.
Meteor2 - Tuesday, March 14, 2017 - link
He said he'd rather they stuck with Mali.
zeeBomb - Monday, February 27, 2017 - link
So uh, which phone will have this?
Ian Cutress - Monday, February 27, 2017 - link
None have been announced, though you can bet Mediatek are either talking to partners already or using the announcement to actively solicit more interested parties.
Amandtec - Tuesday, February 28, 2017 - link
Not Apple. Not Samsung. Anything out of Taiwan. Half of phones out of China.vladx - Tuesday, February 28, 2017 - link
I would bet on the next LeEco phone.
zanza - Wednesday, March 1, 2017 - link
Vernee Apollo 2 is the only phone confirmed with this chip. It looks very exciting: a shoulder-to-shoulder competitor to the Snapdragon 835 for a much lower price. I just hope they are friendly with the mod community so they become more popular.
iwod - Monday, February 27, 2017 - link
Finally some PowerVR! No more Mali crap.
Krysto - Tuesday, February 28, 2017 - link
Eh, the new Bifrost Mali architecture looks pretty good. I don't know how they compare, though.
The more annoying thing about Mediatek is that it kept using what was essentially the first Mali architecture for so long in its chips, and that one was indeed quite bad.
lolipopman - Wednesday, March 1, 2017 - link
What an ignorant, laughable comment. This year's Mali is much faster than any of the other GPUs; it's going to be as powerful as the Tegra X1 GPU.
jjj - Monday, February 27, 2017 - link
Weird that they only push to 2.5GHz. Likely they'll do an X35 at 2.8GHz, or something went wrong, although it was claimed at 2.8GHz just weeks ago at ISSCC.
MrSpadge - Tuesday, February 28, 2017 - link
I suspect they're doing it to improve yield, releasing faster versions as the process matures.
jjj - Tuesday, February 28, 2017 - link
Nah, it's marketing: they launch it later, claim it's better, and likely give Meizu exclusivity for a while again.
vladx - Tuesday, February 28, 2017 - link
Meizu MX7 is rumoured to be using Kirin 960, not Mediatek.
jjj - Thursday, March 2, 2017 - link
That's utter nonsense; like most so-called rumors from China, it's from a random guy on Weibo.
It's highly likely they use Mediatek, and there's a 10% chance they use Samsung, but Samsung is likely supply-limited on 10nm, as even the S8 had to be delayed because of it.
WPX00 - Tuesday, February 28, 2017 - link
I'm actually almost certain the CPU config has been entirely redone from the first announced design. Correct me if I'm wrong, but I believe the original config was 4x A73, 6x A53.
Meteor2 - Tuesday, March 14, 2017 - link
Yep! Completely different.
pberger - Monday, February 27, 2017 - link
Are any of you aware of who is first at integrating a neuromorphic chip in an SoC for better rendition/development of deep-learning neural networks on a smartphone?
jjj - Monday, February 27, 2017 - link
Nobody, as there isn't really anything like that beyond early R&D. Doing it on GPU/DSP/VPU is what we'll see for now.
Meteor2 - Tuesday, February 28, 2017 - link
Pretty sure the Snapdragon 835 has something like that.
name99 - Tuesday, February 28, 2017 - link
It's the kind of question that's vague enough that everyone can give whatever answer they want.
QC can say they have such a great library that uses Adreno and Hexagon that you get all the advantages of neuromorphic today. Apple can say they tweaked their customization of PowerVR to be optimized for neuro. And they could all even kinda be true.
Or going the other direction, Apple could announce that the A11 has the N11 neuromorphic computing unit on board, and what does that mean? There are already custom neuro routines in the iOS APIs (part of the Accelerate library) so ANY hardware that's even slightly optimized as a target for those routines (even just something like a specialized cache added to the GPU) could be called the N11 neuromorphic processor and be kinda sorta technically accurate.
On the other hand, this is not exactly a tragedy. On the TECHNICAL side, everyone knows the value of neuro, so we are getting there one improvement at a time across all vendors. And on the marketing side, who cares? The same crowd that says "Siri sux and Google image recognition is best" will fight it out with the crowd that says "Alexa is best and MS doesn't have a clue about image recognition", and details about who is using what hardware won't change any minds...
Mavendependency - Tuesday, February 28, 2017 - link
Last year's 8890 also does.
saratoga4 - Wednesday, March 8, 2017 - link
>Pretty sure the Snapdragon 835 has something like that.
It has a DSP core, which is basically the opposite of a neuromorphic coprocessor :)
It will probably be a very long time, if ever, before you see things like that on a mobile device. You'll probably just see more DSPs/GPUs with libraries to support linear algebra for NNs.
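The "linear algebra for NNs" point is easy to make concrete: a fully connected layer is just a matrix multiply plus a bias, which is exactly the kernel a DSP/GPU math library already accelerates. A minimal NumPy sketch (shapes and random values are made up for illustration):

```python
import numpy as np

def dense_layer(x, w, b):
    # One fully connected NN layer: y = relu(x @ W + b).
    # On a phone SoC, this matmul is the part that would be offloaded
    # to a GPU or DSP via a vendor linear-algebra library.
    return np.maximum(x @ w + b, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 128))   # input activations
w = rng.standard_normal((128, 64))  # layer weights
b = np.zeros(64)                    # bias

y = dense_layer(x, w, b)
print(y.shape)  # (1, 64)
```

Nothing "neuromorphic" is required; whoever multiplies matrices fastest per joule wins this workload.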
beginner99 - Tuesday, February 28, 2017 - link
Does this tri-cluster really make any sense besides marketing? Why not just leave the A53 cores out if the A35s are almost as powerful?
Amandtec - Tuesday, February 28, 2017 - link
On previous Mediatek models it didn't work so well, but if they get it right, including the software that manages the process, then obviously it can make a difference.
R0H1T - Tuesday, February 28, 2017 - link
It might just be the licensing costs and/or die space required for the A53 which makes it (still) competitive with the A35. That, or A53 > A35 in layman's terms, 'cause numbers!
MrSpadge - Tuesday, February 28, 2017 - link
Good question: leaving out the A53 and maybe adding 2 more A73 could be interesting as well.
name99 - Tuesday, February 28, 2017 - link
A more technical issue: is CorePilot (and specifically CorePilot 4) software or hardware?
If it's software, i.e. just yet another slightly tweaked OS governor, that's not especially interesting. But if they have (FINALLY...) moved DVFS control into HW (the way Intel did at Skylake), that has the potential for a substantial improvement in power and responsiveness.
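For reference, a software governor of the kind described above is conceptually just a control loop over load samples. A toy sketch below; the frequency table and thresholds are invented for illustration and are not CorePilot's actual values:

```python
# Toy interactive-style DVFS governor: pick a frequency from recent CPU load.
# The frequency table and the 85%/80% thresholds are illustrative only.
FREQS_MHZ = [600, 1000, 1500, 2000, 2500]

def pick_freq(load_pct, current_mhz):
    if load_pct > 85:
        # On a load burst, ramp straight to max for responsiveness.
        return FREQS_MHZ[-1]
    # Otherwise aim to run the core at ~80% utilization.
    target = load_pct / 80 * current_mhz
    for f in FREQS_MHZ:  # lowest available frequency covering the target
        if f >= target:
            return f
    return FREQS_MHZ[-1]

print(pick_freq(90, 1000))  # burst -> 2500
print(pick_freq(40, 2000))  # light load -> 1000
```

The hardware version moves exactly this decision loop off the OS tick, which is why it can react in microseconds instead of milliseconds.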
haukionkannel - Tuesday, February 28, 2017 - link
Pure marketing! More cores!
Much better with four A35 and two or four A73... But nobody in China would buy an "only six or eight core" CPU...
ppi - Tuesday, February 28, 2017 - link
No H.265?!
eduardor2k - Tuesday, February 28, 2017 - link
H.265 = HEVC
fm13 - Tuesday, February 28, 2017 - link
hey, will Anandtech cover the Xiaomi Pinecone SoC launch today? it's [email protected] + [email protected] with Mali-G71 MP12@900MHz on a 10 nm Samsung process. looks very interesting.
milli - Tuesday, February 28, 2017 - link
Why no mention in the article about the switch to PowerVR?
lucam - Thursday, March 2, 2017 - link
Was gonna ask same question...
Krysto - Tuesday, February 28, 2017 - link
Finally seeing Cortex-A35 in a chip. Although it seems they've still kept Cortex-A53. That's interesting because Cortex-A35 is supposed to have higher IPC than A53, no? Perhaps it can't reach as high of a peak.
Lodix - Tuesday, February 28, 2017 - link
If you read the article you will see that it has 80-100% of the IPC of the A53 while consuming less power.
Meteor2 - Tuesday, February 28, 2017 - link
Yes indeed, its absence has been notable. Perhaps it can't clock as high as the A53, so it's too slow; or, as a previous commenter said, the model number is bad for marketing.
Matt Humrick - Tuesday, February 28, 2017 - link
It has more to do with development cost. If you already have an A53 design, it's more cost-effective to give it a few tweaks and reuse it than to start over with a new core you have no previous experience with.
Krysto - Tuesday, February 28, 2017 - link
Also, I'm glad MediaTek is finally using a competitive process node. This will actually be MediaTek's first time competing toe-to-toe with Qualcomm and Samsung. Let's see how they do.
SharpEars - Tuesday, February 28, 2017 - link
I once again ask the obvious: who the hell wants a 30 Hz refresh at 4K? If you can't do at least 60 Hz, don't advertise the resolution!
extide - Sunday, May 7, 2017 - link
That's only for decode, and most video streams are only 24/25/30 fps anyway, so it's perfectly fine. The refresh rate is not constrained by the decode rate.
abufrejoval - Tuesday, February 28, 2017 - link
This chip looks outstanding, very much like it could compete on par with Samsung's newest Exynos and Qualcomm's 835.
And since Samsung doesn't sell the Exynos to anyone and buys up all available Qualcomm chips, perhaps it's the only choice left for the rest of the crowd.
I'd certainly like to see some larger tablets or RemixOS type notebooks at budget prices with it.
I'm sick and tired of "Edge" designed premium smartphones consuming all high-end ARM SoCs while tablets and (dare I say it?) netbooks get nothing but mid-range or Intel.
I still like my original Asus TF101 Transformer, but with a non-NEON Tegra 2 dual core and 1GB of RAM it's in need of an update, even if it's still holding up very well mechanically and in battery life.
serendip - Tuesday, February 28, 2017 - link
Three clusters and ten cores? Because ten is better than eight?
I don't get Mediatek's reasoning for this design. There's already a performance penalty for moving tasks from one cluster to another; I can't imagine what it's like to move tasks from the first to the second and then to the third cluster. They should have gone with a quad A35 and dual or quad A73 setup. It's funny that the newer X20 in the Redmi Note 4 has less performance and uses more power than a Snapdragon 650 on an ancient process node.
andrewaggb - Tuesday, February 28, 2017 - link
I wish they'd give some real-world examples comparing their design to some competitors. Logically I get it: A35s for idle/low load (low power, low clocks), A53s for mid power and mid clocks, and A73s for high performance.
But there are only 2 A73s, 4 A53s, and 4 A35s. You'd think you should have 4 A73s; and are there scenarios where 4 A53s may match or exceed them? Can you run the A73s together with the A53s?
Why not just 4 A35s and 4 A73s? What is the cluster-switching penalty? Are cache lines and everything the same, or will some software need to be recompiled or be aware of these changes?
Anyway, it's interesting and sounds pretty good; I just wish we had more information. It's also interesting because Intel and AMD haven't used this approach (Atom cores + Core, or Jaguar + Bulldozer) and instead have favored dynamically adjusting clock speeds across a wide range.
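The cluster-switching-penalty question above has a simple break-even framing: migrating is worth it only when the big core's speedup amortizes the one-off migration cost. A toy model (all numbers invented for illustration, not measured X30 figures):

```python
# Is it worth migrating a task from a little cluster to a big cluster?
# migrate_cost_us models the one-off penalty (hotplug latency, cache refill);
# the value used below is an invented placeholder.
def worth_migrating(work_us_on_little, speedup, migrate_cost_us):
    time_stay = work_us_on_little
    time_move = migrate_cost_us + work_us_on_little / speedup
    return time_move < time_stay

# Short burst: the migration penalty dominates, so stay on the little core.
print(worth_migrating(100, speedup=2.0, migrate_cost_us=80))     # False
# Long-running task: the big core's speedup amortizes the penalty.
print(worth_migrating(10_000, speedup=2.0, migrate_cost_us=80))  # True
```

This is the scheduler's whole job in a big.LITTLE design: only long or heavy tasks should ever cross clusters.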
Matt Humrick - Tuesday, February 28, 2017 - link
It's a tradeoff between performance and power/area. The cost of adding additional big cores is not negligible. All 3 clusters (or 2 in conventional big.little) can be online at the same time. The latency penalty for hotplugging cores and migrating threads between clusters is more than offset by the additional performance of the big cores (or lower power going the other way). The use of a high-bandwidth, cache-coherent (at the hardware level) interconnect reduces the migration penalty.
abufrejoval - Tuesday, February 28, 2017 - link
I keep hearing that at least the small cores are really tiny in terms of surface area. I remember around 15% for all CPU cores, BIG and little, on the SoC, and that was a couple of generations back. Within those 15% the big A73 cores may be relatively large in terms of surface area, not only for the far more complex OoO logic etc. but because they require big caches for effective work. Adding one, two or four small cores may then add only a single-digit surface percentage, because last-level caches remain shared and primary caches are tiny and less needed at the slow speeds; most importantly, they allow more GPU block connections to keep those busy for "perceived" speed on scrolls, video etc. whilst the big cores are sleeping.
In addition, those extra CPU cores won't cost much in terms of ARM licensing fees, because the biggest charge is on the number of BIG cores; the smaller ones seem to be thrown into the bundle.
Silicon real estate and licensing-fee economics should perhaps get more editorial space to lessen these confusions, but few vendors seem willing to talk about them.
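The area argument above is easy to sanity-check with rough numbers; the per-core figures below are illustrative placeholders, not measured die areas:

```python
# Hypothetical core areas in mm^2 (placeholders, not real measurements),
# reflecting the claim that a big core dwarfs a whole little cluster.
A73_CORE = 1.2
A53_CORE = 0.25
A35_CORE = 0.18

quad_a53_cluster = 4 * A53_CORE  # 1.0 mm^2
quad_a35_cluster = 4 * A35_CORE  # 0.72 mm^2
two_more_a73 = 2 * A73_CORE      # 2.4 mm^2

# Under these assumptions, adding BOTH little clusters costs less silicon
# than adding two more big cores would.
print(quad_a53_cluster + quad_a35_cluster < two_more_a73)  # True
```

That is the economics behind "ten cores": the extra eight little cores are nearly free in area compared to what two more A73s would cost.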
serendip - Wednesday, March 1, 2017 - link
What the mobile segment really needs isn't a bunch of barely differentiated CPU cores, it needs a freaking ton of small GPU cores that support programming a la CUDA or OpenCL. I'm already happy with a 4x little and 2x big core setup. Having access to hundreds of programmable simple cores would be great for custom image processing, neural networks, sound processing etc. as long as those cores are easily accessible. DSPs hidden behind proprietary blobs like Qualcomm's Hexagon don't count.
Meteor2 - Tuesday, March 14, 2017 - link
Tegra X1.
extide - Sunday, May 7, 2017 - link
FWIW, to give an example, a single A72/73 core is bigger than an entire quad-core cluster of A35/53 cores. It's cheaper (in terms of transistor budget or die size) to add a whole cluster of A53s than to add even a single extra A72/73, so adding two more is quite a bit different. I bet that's a large reason MTK is pushing these cores: 10>8 is great for marketing, and less die space is great for the bottom line...
nandnandnand - Wednesday, December 1, 2021 - link
I'm from the future! Intel and AMD are using this approach!