
  • SunnyNW - Thursday, November 5, 2015 - link

    Kirin 920/930 ISP: Bad One... LOL
  • extide - Thursday, November 5, 2015 - link

    Yeah, I LOL'd at that as well, hehe
  • frewster - Thursday, November 5, 2015 - link

    Glad I wasn't the only one to get a laugh out of this.
  • lilmoe - Thursday, November 5, 2015 - link

    I LOL'ed after imagining Andrei's frustrated face writing that. I'm sure he almost broke the darn keyboard trying to mod that thing.
  • MrSpadge - Thursday, November 5, 2015 - link

    I wonder if that's part of the official spec.
  • zeeBomb - Friday, November 6, 2015 - link

    I wonder what was going through Andrei's mind when he said "Bad One" lmao!
  • zeeBomb - Friday, November 6, 2015 - link

    But on the contrary... I'm excited for this chipset. I wonder how this will stack up against Samsung's next underdog, the M1 (Mongoose, Exynos 8890), but Huawei has made all the right moves with this SoC. I'm impressed!
  • hans_ober - Thursday, November 5, 2015 - link

    200mW per A53 @ 1.8GHz is pretty dang impressive; IIRC the Exynos 7420 uses 300mW+ @ 1.5GHz.

    Guys, please do an Exynos 7420-style deep dive (with power analysis) on this and the A9. Adding Intel + TX1 would make things sweeter :)
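
    As a rough sanity check of those figures (a back-of-the-envelope sketch assuming performance scales linearly with clock for the same A53 core, ignoring voltage and process differences):

        # GHz per watt as a crude efficiency proxy for the same Cortex-A53 core
        kirin_950 = 1.8 / 0.200    # ~9.0 GHz/W at 200mW, 1.8GHz
        exynos_7420 = 1.5 / 0.300  # ~5.0 GHz/W at 300mW, 1.5GHz
        print(f"~{kirin_950 / exynos_7420:.1f}x more efficient")  # ~1.8x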
  • saayeee - Thursday, November 5, 2015 - link

    Yes, comparing the Kirin 950, Apple A9, and other upcoming flagships based on PPW would be great!
  • Le Geek - Thursday, November 5, 2015 - link

    I +1 hans_ober's request
  • zeeBomb - Friday, November 6, 2015 - link

    One more thing... the Kirin 920-935 having a bad ISP is rather true, tbh. A camera like the Honor 7's or Mate 7/S's, with good specs on paper like an IMX230, that then fails on colour accuracy and detail is a joke. Almost to the point where it's a slap in the face!
  • mercucu1111 - Tuesday, November 17, 2015 - link

    http://images.anandtech.com/doci/9330/a53-average-...

    What?
  • SunnyNW - Thursday, November 5, 2015 - link

    Has there actually been any confirmation that Apple's A9 was indeed manufactured on TSMC's 16FF+ node and not just regular 16FF?
  • Andrei Frumusanu - Thursday, November 5, 2015 - link

    Yes. Regular 16FF never went into any kind of mass production.
  • webdoctors - Thursday, November 5, 2015 - link

    Slide 3: First Commercial A72...

    What, the A72 is only 15% faster than the A57? Which was only ~20% faster than the A15. The last 4 years have only given us a ~35% boost in ARM CPUs? We need some real competition in this space...
  • witeken - Thursday, November 5, 2015 - link

    Broxton won't arrive until the second half of 2016, which is a delay of more than a year (Intel originally said it would be released mid-2015).
  • syxbit - Thursday, November 5, 2015 - link

    True, but I think that's assuming the same process.
    In 4 years we've also gone from 45nm to 14nm.
  • extide - Thursday, November 5, 2015 - link

    Seems like Apple is the only one to really push hard on this arch. I guess we will have to see what the Qualcomm Kryo puts out, but I don't know; it really is a wildcard at this point. Could be amazing, could be terrible.
  • ToTTenTranz - Thursday, November 5, 2015 - link

    The 35% boost would hold if all the CPU cores were made on the same process, which they weren't.

    Besides, the A72 consumes less power while being faster.
    The A57 is a power hog (leading many IHVs to use only Cortex A53 for their "high-end" ARM64 chips), so it's natural that ARM would focus on power efficiency over raw performance.

    Also, it's not like smartphones and tablets are in dire need of higher single-threaded performance. A Cortex A15 at 1.6-1.8GHz already blasts through most "real-life" JavaScript scenarios.
  • melgross - Thursday, November 5, 2015 - link

    No, single-core performance is very important. For most uses it's more important than a greater number of cores.
  • xype - Thursday, November 5, 2015 - link

    It’s 38%, actually, since the percentages compound. 100 * 1.20 = 120, 120 * 1.15 = 138. Still, doesn’t change your point, really. And there is competition, no? It’s just that the fast CPUs are reserved for one line of phones, only. But it’s competition in the sense that it makes others look bad and (hopefully) forces them to improve.
  • Wilco1 - Thursday, November 5, 2015 - link

    35% IPC gain in just 3 years (the first A15 shipped in Q3 2012, the Galaxy S4 in Q2 2013) is actually a very impressive improvement given they have similar microarchitectures (the A57 is a 64-bit A15 and the A72 is a "tick" of the A57). Add the frequency increase and performance has more than doubled in 2-3 years.

    I don't see how you can possibly claim this is a small gain, especially since x86 performance has completely stagnated in the same period (at best I could get 10-15% gain over the last 3 years if I upgrade my PC).
  • ssdssd - Thursday, November 5, 2015 - link

    Is it?
    2012 Q3: i7-3770K SPEC CINT2006 base score is 50.3 @ 3.9GHz turbo
    2015 Q3: i7-6700K SPEC CINT2006 base score is 71.3 @ 4.2GHz turbo
    The 3-year IPC gain is about 30%
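
    That figure falls out of treating SPECint score per GHz as an IPC proxy (a rough sketch; turbo residency and memory effects aren't captured):

        # IPC proxy = SPEC CINT2006 base score / turbo clock (GHz)
        ivy = 50.3 / 3.9   # i7-3770K
        sky = 71.3 / 4.2   # i7-6700K
        print(f"IPC gain: {sky / ivy - 1:.1%}")  # ~31.6%, i.e. "about 30%"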
  • Wilco1 - Saturday, November 7, 2015 - link

    Most of that "gain" is due to the already completely bogus libquantum result doubling. Note that the Sandy Bridge i7-2600 got 49.6 @ 3.8GHz turbo, so that's a better comparison.

    If you consider the more accurate GCC subtest to avoid these compiler tricks, the gain over 4 years / 5 generations is about 29% overall and just 16.7% in terms of IPC. At about 3.9% a year that's a glacial pace compared to mobile SoCs.
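
    (For reference, the per-year figure is the compound rate of that 16.7% IPC gain over 4 years:)

        # Annualized (compound) growth rate of a 16.7% gain over 4 years
        annual = 1.167 ** (1 / 4) - 1
        print(f"~{annual:.1%} per year")  # ~3.9%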
  • jjj - Thursday, November 5, 2015 - link

    When you talk about CPU and GPU perf gains over existing SoCs, maybe there is hope for even more, given that the existing ones can't sustain max clocks for long.
  • saayeee - Thursday, November 5, 2015 - link

    Nice article... good to see Huawei sharing details about yield, process, and power.
  • Achtung_BG - Thursday, November 5, 2015 - link

    Many interesting numbers for area, consumption, yields, and power density. I think this is the first LTE modem manufactured with FinFET transistors. Well-balanced chip from Huawei; a 3-cluster configuration with 10 cores is difficult to handle properly and eats into die area.
  • lilmoe - Thursday, November 5, 2015 - link

    "For the very vast majority of use-cases and users the GPU isn't a bottleneck and the full potential of more powerful GPUs are never utilized, so the vendors prefer to save die space and thus cost by using less GPU cores"

    GPU AND other peripherals I might add. It's really sad that only Apple (and Samsung to some extent) packs their SoCs FULL of accelerators, co-processors and other non-off-the-shelf IP. Others just throw a memory controller or a semi-modified generic (yes, GENERIC) ISP/DSP here and there...

    I'm not sure, but I believe the problem lies with Android and Android OEMs. It's sad that after all these years of improving hardware acceleration, Android's UI is still *mainly* and heavily CPU-bound, which makes it difficult for OEMs to focus on anything other than making Android run "smooth"... Add to that Google's failure in pushing the ecosystem further beyond "traditional apps".

    I seriously gave up on Google. Their business model does NOT benefit from a tightly integrated ecosystem. Only Microsoft can rival Apple there. Hope Microsoft pulls it off.
  • StrangerGuy - Friday, November 6, 2015 - link

    Google wants to flood the market with low-end devices because that's the fastest way to get eyes on screens for advertising bucks.

    Android devs want to code for the lowest-common-denominator hardware for the same exact reason as Google.

    Which means there is basically next to no incentive for anybody on the software side of Android to remotely utilize the full potential of flagship SoCs, other than bloating the crap out of the UI and uninstallable garbage. Ultimately, that leads to my point that Android has virtually zero value-add beyond recent, perfectly usable low-end devices for the vast majority of people out there.
  • leo_sk - Thursday, November 5, 2015 - link

    It will be more powerful than the Snapdragon 820 but less than the Exynos M1. I only wish to see where the Helio X30 would stand.
  • jospoortvliet - Saturday, November 7, 2015 - link

    I'd love to hear how you know it will be faster than the 820; I would expect the cores in the 820 to kick the ass of the A72...
  • mercucu1111 - Tuesday, November 17, 2015 - link

    And Mongoose will kick the ass of the 820.
  • melgross - Thursday, November 5, 2015 - link

    We will really need to see tests. If this is anything like AMD, claims won't hold up in real devices.
  • ArthurG - Thursday, November 5, 2015 - link

    The most interesting part of the presentation is not the 950 (meh SoC overall) but how good the TSMC 16FF+ process is. I'm salivating just thinking of Pirate Islands and Pascal. I predict a huuuuuuge leap in perf/mm2/watt on the next-gen GPUs!
  • zodiacfml - Friday, November 6, 2015 - link

    "power density" and "dark silicon", first time I heard those. This might probably help explain why Apple still maintains their chip sizes despite the smaller process nodes.
  • sseemaku - Friday, November 6, 2015 - link

    Wow, good to see Huawei revealing so many details considering semiconductor companies are becoming more and more protective!
  • ZolaIII - Saturday, November 7, 2015 - link

    Finally, a full and accurate spec sheet!
    I hope that you will do an in-depth review of this SoC. I am especially interested in a CPU performance/power-consumption analysis across the frequency range on the 16nm FinFET+ process, more so that I can compare it to projections for power-optimized 22nm planar FD-SOI than out of any interest in this particular HiSilicon SoC, as Huawei pushed the silicon to its limits with clock speeds (as usual in this industry).

    I don't think most of the performance gain comes from the A72 design; I think only around 5% comes from it, around 10% from the optimized FinFET+ process, and the rest from higher clocks. The Tensilica DSP used is relatively old and not competitive in any aspect (I just finished writing an article about the Vision P5). It's a shame that they didn't go with a newer ARM interconnect (CCI-500/550), as it would benefit performance (around 3% on the CPU & 8% on the GPU through 20% more memory bandwidth). The GPU won't be fast enough to be competitive with other top solutions & certainly won't be power efficient @ 900MHz. All in all, Huawei didn't improve all that much.

    Best regards & keep up the good work.
  • Andrei Frumusanu - Saturday, November 7, 2015 - link

    HiSilicon doesn't use the CCI as the GPU's backbone but uses Arteris NoC IP much like Samsung does on Exynos.
  • ZolaIII - Sunday, November 8, 2015 - link

    Thanks for clarifying that.
