Agreed, but they don't even always use their own components the way Huawei and Apple do; I think I've yet to own a Samsung smartphone that uses a Samsung SoC. I think they will eventually get there, though I can't say I have any idea why they aren't doing it today.
They are DEFINITELY NOT legally forced to use Qualcomm chips in China, especially for the world's biggest carrier, China Mobile. As for the US, if Huawei is not forced to use Qualcomm, I can't imagine why Samsung would be, unless they signed a deal with Qualcomm - then again, that's just by choice.
Samsung's mobile division (which makes the phones) still makes key use of Snapdragon SoCs for certain markets. Whatever the reason for this (and we can argue a lot about it), the fact is that the end product more often than not ends up being the lowest common denominator in terms of features and performance between the two SoCs' capabilities. In that sense, Samsung is not vertically integrated and does not control the full stack in the same way Apple and Huawei do.
No, Samsung simply isn't so vain as to use its own solutions when they are inferior. Samsung skipped the Snapdragon 810 because their chip was much better. Samsung used the 835 instead of their own chip last year because the 835 performed nearly the same as the Samsung chip but was smaller, so they could get more chips out of an early 10 nm process. Huawei uses its own chips so it doesn't look stupid for having made an inferior chip that costs more than the competition's.
Someguyperson, that isn't the case at all. Samsung simply doesn't use Exynos in various markets for legal reasons. Qualcomm, for example, wouldn't license Exynos for mobile phones as early as the Galaxy S III, which is why a (surprise) Qualcomm SoC was used instead. Samsung licenses Qualcomm's modem IP, much like virtually every SoC designer, for use in their Exynos. The only other option has historically been Intel, who until recently made inferior LTE modems.
I think it's pretty obvious to anybody that if Samsung could, they would sell their SoCs in all their devices. They might even sell them to competitors, but again, Qualcomm won't let them do that.
Since their Shannon modem integration in the Exynos platform, I've struggled to understand why...
My best guess would be a bulk deal they made with Qualcomm in order to build Snapdragons on both their 14nm and 10nm processes. Samsung offered a fab deal: Qualcomm agreed to build in Samsung fabs and provide a generous discount on Snapdragons for Galaxies, on the condition that Samsung buy a large minimum quantity of SoCs. That minimum quantity was more than what was needed for the US market. Samsung did the math and figured that it was more profitable to keep their fabs ramped up and save money on LTE volume licensing. So Samsung made a bigger order and included the Chinese variants in the bulk.
I believe this is all a bean counter decision, not technical or legal.
That's easy to answer. Samus is right, it's a legal problem. The reason is CDMA2000. Qualcomm owns all IP concerning CDMA2000. Look at the regions where a Galaxy S ships with a Snapdragon and look at the countries using CDMA2000: North America, China and Japan. Samsung has two choices: use a Snapdragon SoC with an integrated QC modem, or place a dedicated QC modem alongside their own SoC. The latter is a bad choice in terms of space, and I think it's more expensive to buy an extra chip instead of just using a Snapdragon.
I bet all this will end when Verizon quits CDMA2000 in late 2019, and Samsung will then use their Exynos SoCs only. CDMA2000 has been useless since LTE and is just maintained for compatibility reasons. In all regions not using this crappy network, Samsung uses Exynos SoCs in every phone from low cost to high end. So of course Samsung IS vertically integrated; telling something else is pretty ridiculous. They have their own fabs, produce and develop their own SoC, modem, DRAM and NAND flash, and have their own CPU and modem IP. They only lack their own GPU IP. So who is more vertically integrated?
I forgot their own displays and cameras. Especially the first is very important: the fact that they make their own displays enables more options in design. Think of their edge displays; you may like them or not, but with them the whole design differed greatly from their competitors'.
Personally I think Samsung is in a great position... whether you consider them "truly vertically integrated" or not. One thing to remember is that most often, Samsung flagship devices come in two variants. It's mostly in the US where we get the Qualcomm variants, while elsewhere tends to get Exynos. The dual source is a great arrangement, because every once in a while Qualcomm is going to turn out something problematic like the Snapdragon 810. When that happens Samsung has the option to use its own, which is what it did with the Galaxy S6/Note 5 generation, which was Exynos only.
Another point is: what do you consider "truly vertically integrated"? The story cites Apple and Huawei, but they don't actually manufacture their SoCs, and neither does Qualcomm. I believe the Kirin SoCs are actually manufactured by TSMC, while Apple and Qualcomm SoCs have at various times been manufactured in Samsung fabs. As far as I know, Samsung is the only company that even has the capability to design and also manufacture its own SoC. So in a way, you could say that my Samsung Note 5 is about the most vertically integrated phone there is, along with non-US versions of the S7 and S8 generations. In those cases you have a Samsung SoC manufactured in a Samsung fab in a Samsung phone with a Samsung screen, etc. Don't make the mistake of thinking the whole world is just like us... they aren't. Also, many of the screens for other brands are of Samsung manufacture, so you have to keep in mind that there is a lot more to the device than the SoC.
His point is that Huawei only uses non-HiSilicon chips in price segments that they do not have SoCs for. Samsung, however, does sometimes use QC silicon even if they have SoCs that can fill that segment (e.g. Samsung uses the Snapdragon 835 even though they have the 8895).
I'm not saying that I agree with Andrei's view, but there is a difference.
I completely disagree with the assessment that Samsung is somehow not "as vertically integrated" as Huawei. Samsung is not just vertically integrated, it produces components for many other key players in the market. They have reasons why they CHOOSE not to use their SoCs in specific markets and areas. Some of the rationale behind those choices may be questioned, but it's a choice. I too think that the world would be a better place if they actually put their own chip designs into their phones and directly competed against Qualcomm. That of course might be the end of Qualcomm and a whole lot of other companies... Samsung could easily turn into a monopoly that suffocates the entire market, so it's not just vertical but horizontal integration. What Huawei has accomplished in short order is impressive, but isn't Huawei just another branch of the Chinese government at this point? Sure, yeah, their country is more vertically integrated. Maybe that's the line to take to justify the statement...
No, it's not INTEGRATED, because it doesn't prefer its own silicon over outsourcing. Samsung's mobile department runs separately from its semiconductor department, which acts as a contractor no different from Qualcomm.
As for Huawei being a branch of the Chinese government, it's about as true as Google being part of the US government. Stop spruiking conspiracy theories. I know for a fact their employees almost fully own the company.
Well, that's not true. Huawei chose the Snapdragon 625 for the Nova. Why not use their own Kirin 600 series? It's the same market segment.
Samsung only opts for Snapdragon where they have no SoC of their own that fits: all regions with CDMA2000 networks. In all other regions, Europe for example, they ship all smartphones from the J- and A-series to the S-series and Note with their Exynos SoCs.
That's also not true; Samsung uses Snapdragon where there's no CDMA2000 as well. And Huawei used to use VIA's 55nm CBP8.2D over Snapdragon.
Mid-tier is not as indicative as higher-end devices when it comes to, well, everything. They may even outsource the ENTIRE DESIGN to a third party, and it still proves nothing in particular. They might have chosen the S625 because of supply issues, which is completely reasonable. The same cannot be said of Samsung, since there's no such thing as a supply issue when it comes to Exynos versus Snapdragon.
Unfortunately, they're not "fully" vertical as of yet. They've been held back since the start by Qualcomm's platform, because of licensing and "other" issues that no one seems to be willing to explain. Like Andrei said, they use the lowest common denominator of both the Exynos and Snapdragon platforms, and that's almost always lower on the Snapdragons.
Where I disagree with Andrei, and others, is the efficiency numbers and the type of workloads used to reach those results. Measuring efficiency at MAX CPU and GPU load is unrealistic and, frankly, misleading. Under no circumstance is there a smartphone workload that demands that kind of constant load from either the CPU or GPU. A better measure would be running an actual popular game for 30 minutes in airplane mode and measuring power consumption accordingly, or loading popular websites using the native browser and measuring power draw at set intervals for a set period of time (not even a benchmarking web application).
Again, these platforms are designed for actual, real-world, modern smartphone workloads, usually running Android. They do NOT run workstation workloads and shouldn't be measured as such. Such notions, as Andrei has admitted, are what push OEMs to be "benchmark competitive", not "experience competitive". Apple is also guilty of this (the proof is in the latest events, where their power delivery can't handle the SoC, or the SoC is designed well above sustainable TDP). I can't stress this enough. You just don't run SPEC and then measure "efficiency". It just doesn't work that way. There is no app out there that stresses a smartphone SoC this much, not even the leading game. As a matter of fact, there isn't an Android (or iPhone) game that saturates last year's flagship GPU (probably not even the year before).
We've reached a point of perfectly acceptable CPU and GPU performance for flagships running 1080p and 1440p resolution screens. Co-processors, such as the decoder, ISP, DSP and NPU, in addition to software optimization, are far, FAR more important at this time, and what Huawei has done with their NPU is very interesting and meaningful. Kudos to them. I just hope these co-processors are meant to improve the experience, not collect and process private user data in any form.
Just curious about your claims about Apple – so you think it's a design fault? I'm thinking that the problem arises only when the battery has worn out, and a healthy battery won't have the problem of not sustaining enough juice for the SoC.
Their batteries are too small, by design, so that's the first design flaw. But that still shouldn't warrant unexpected slowdowns within 12-18 months of normal usage; their SoCs are too power hungry at peak performance, and the constant bursts were taking their toll on the already small batteries, which weren't protected by a proper power delivery system. It goes both ways.
Exactly this. Apple still uses 1500mAh batteries in 4.7" phones. When more than half the energy is depleted in a cell this small, the nominal voltage drops to 3.6-3.7V from the 3.9-4.0V peak. A sudden spike in demand for a cell hovering around 3.6V could cause it to hit the low-voltage cutoff, normally 3.4V for Li-Ion and 3.5V for Li-Polymer. To prevent damage to the chemistry, the internal power management will shut the phone down, or slow the phone down to prevent these voltage drops.
Apple designed their software to protect the hardware. It isn't necessarily a hardware problem, it's just an inherently flawed design. A larger battery that can sustain voltage drops, or even a capacitor, would have helped, but both take up "valuable space" according to Apple, like that headphone jack that was erroneously eliminated for no reason. A guy even successfully reinstalled a headphone jack in an iPhone 7 without losing any functionality... it was just a matter of relocating some components.
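To put rough numbers on the voltage-sag scenario described above, here is a minimal sketch; the internal resistance and burst currents are assumed illustrative values, not figures from the comment or from Apple.

```python
# Illustrative only: terminal voltage under a load burst, V_load = V_oc - I * R_int.
# The internal resistance and burst currents below are assumed values for the example.

def loaded_voltage(open_circuit_v, internal_resistance_ohm, burst_current_a):
    """Cell terminal voltage while supplying a current burst."""
    return open_circuit_v - burst_current_a * internal_resistance_ohm

CUTOFF_V = 3.4      # typical Li-Ion low-voltage cutoff mentioned above
v_rest = 3.6        # resting voltage of a half-depleted small cell (from the comment)
r_internal = 0.25   # ohms; an aged cell's internal resistance (assumed)

for burst in (0.5, 1.0, 1.5):  # amps (assumed burst demands)
    v = loaded_voltage(v_rest, r_internal, burst)
    flag = " -> below cutoff, power management shuts down or throttles" if v < CUTOFF_V else ""
    print(f"{burst:.1f} A burst: {v:.2f} V{flag}")
```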
"Rather than using Exynos as an exclusive keystone component of the Galaxy series, Samsing has instead been dual-sourcing it along with Qualcomm’s Snapdragon SoCs."
This is a bit untrue. It's well known that Qualcomm's CDMA patents are the stumbling block for Samsung. We'll probably see Exynos-based models in the US within the next two versions once Verizon phases out their CDMA network.
Samsung has already introduced a CDMA-capable Exynos in the 7872 and also offers a standalone CDMA-capable modem (S359). Two years ago, when I talked to SLSI's VP, they openly said that it's not a technical issue of introducing CDMA and that it would take them two years to bring it to market once they decided they needed to do so (hey, maybe I was the catalyst!), but they didn't clarify the reason why it wasn't done earlier. Of course the whole topic is a hot mess and we can only speculate as outsiders.
Uh, how many devices have shipped with the 7872 yet? Why do you think they went with an MDM9635 in the Galaxy S6 in all CDMA2000 regions? In all other regions they used their integrated Shannon modem. The other option is to use a Snapdragon SoC with a QC modem. They usually opt for this alternative, but in the S6 they didn't want to use the crappy Snapdragon 810.
It is possible that Qualcomm has now dropped its politics concerning CDMA2000 because it is obsolete.
Don't forget that Qualcomm is a foundry customer for Samsung, and that could be why they still use it. Also, cost is a major factor when it comes to vertical integration; at sufficient scale, integration can be much cheaper. What Huawei isn't doing is prioritizing the user experience by using their high-end SoCs in lower-end devices too; that's a huge mistake. They have much lower costs than others in the high end, and gaining scale by using these SoCs in lower-end devices would decrease costs further. It's an opportunity for much more meaningful differentiation that they fail to exploit. Granted, the upside is being reduced nowadays by upper-mid-range SoCs with big cores, and Huawei might be forced into using their high-end SoCs more as the competition between Qualcomm and Mediatek is rather ferocious and the upper mid range gets better and better.
Got to wonder about A75 and the clocks it arrives at ... While at it, I hope that maybe you take a close look at the SD670 when it arrives as it seems it will slightly beat SD835 in CPU perf.
On the GPU side, the biggest problem is the lack of real world tests. In PC we have that and we buy what we need, in mobile somehow being anything but first is a disaster and that's nuts. Not everybody needs a Ferrari but mobile reviews are trying to sell one to everybody.
This could be a good example of why Windows 10 for ARM will fail - it only works with Qualcomm CPUs - and could explain why Samsung created Intel-based Windows tablets.
I do believe that ARM, and Samsung especially, has a good market in phones and tablets - I love my Samsung Tab S3, but I also love my Samsung TabPro S - both have different purposes.
One thing: I would not mind Windows for ARM if it had the following:
1. Cheaper than current products - the 300-400 range.
2. No need for x86 emulation - not needed on such a product - it would be good as a Microsoft Office, email and internet machine, but not for PC apps.
It's too early in the Win10 on ARM product life cycle to call the entire thing a failure. I agree that it's possible we'll be calling it failed eventually, but the problems aren't solely limited to the CPU of choice. Right now, Win10 ARM platforms are priced too high (personal opinion) and _might_ be too slow doing the behind-the-scenes magic necessary to run x86 applications. Offering a lot more battery life, which Win10 on ARM does, isn't enough of a selling point to entirely offset the pricing and limitations. While I'd like to get 22 hours of battery life doing useful work with wireless active out of my laptops, it's more off mains time than I can realistically use in a day so I'm okay with a lower priced system with shorter life (~5 hours) since I use my phone for multi-day, super light computing tasks already. That doesn't mean everyone feels that way so let's wait and see before getting out the hammer and nails for that coffin.
The CPU is the reason for the high price; the SD835 comes at a high premium, and LTE adds to it. That's why those machines are not competitive in price with Atom-based machines. Use a $25 SoC and no LTE, and Windows on ARM becomes viable, with even longer battery life.
I didn't realize the 835 accounted for so much of the BOM on those ARM laptops. Since Intel's tray pricing for their low-end chips isn't exactly cheap (not factoring in OEM/volume discounts), it didn't strike me as a significant hurdle. I'd thought most of the price was due to low production volume and attempts to make the first generation's build quality attractive enough to have a ripple effect on subsequently cheaper models.
I'm not sure they do. A search indicated that in 2014 the average price of a Qualcomm solution for a platform was $24. The speculation was that the high-end socs were sold in the high $30s to low $40s.
It's likely more like $50-60 for the hardware and $15 for licensing on a $700 laptop - although that includes only licenses to Qualcomm, and they are not the only ones getting paid. Even a very optimistic estimate can't go lower than $70 total, and that's a large premium vs my suggestion of a $25 SoC with no LTE. An eight-core A53 might go below $10, something like the Helio X20 was around $20 in its time, and one would assume that the SD670 will be $25-35, depending on how competitive Mediatek is with the P70.
Some estimates go much higher though (look at the LTE-enabling components too, not just the SoC, for the S8). http://www.techinsights.com/about-techinsights/ove... Don't think costs are quite that high, but they are supposed to know better.
Now, that's for the Exynos 8895, but I imagine prices are similar for Snapdragon. Regardless, these are all estimates. I'm not aware of anyone who actually knows the real prices of these (including licenses) who has come out and told us.
On licensing you can take a look at the newest 2 PDFs here: https://www.qualcomm.com/invention/licensing. Those are in line with the China agreement they have, at 3.5% and 5% of 65% of the retail value. There would likely be discounts for exclusivity and so on. So, assuming multimode, licensing would be $22.75 for a $700 laptop, before any discounts (if any), BUT that's only to Qualcomm and not others like Nokia, Huawei, Samsung, Ericsson and whoever else might try to milk this.
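For anyone wanting to check that arithmetic, here is a quick sketch using the rates quoted in the comment above; the actual contract terms, and any discounts, are unknown.

```python
# Sanity check of the royalty figures quoted above: rate x (65% of retail value).
# The 3.5% / 5% rates come from the comment; real agreements may differ.

def royalty(retail_price, rate, royalty_base=0.65):
    """Per-device royalty on a capped share of the retail price."""
    return retail_price * royalty_base * rate

laptop = 700.0
print(f"single-mode (3.5%): ${royalty(laptop, 0.035):.2f}")  # $15.93
print(f"multimode   (5.0%): ${royalty(laptop, 0.05):.2f}")   # $22.75
```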
Thanks for that IHS link. I just wasn't able to find a recent BOM which included a Snapdragon that wasn't behind a paywall :/ The first link wasn't working, but I found others on the Qualcomm site. They list licensing terms of 2.275% (5G only) or 3.25% (multimode). Given that, I agree that offering an ARM laptop that doesn't include a (working) baseband makes more sense.
Well, the chip and LTE (LTE meaning hardware plus licensing costs) do not add hundreds of dollars to the retail price, but the extra cost forces them to position these as high end. A mobile SoC actually has some cost positives too, as it offers more integration, thus slightly reducing costs, but with a high-end SoC and LTE things go sideways. I was telling people before these got released that they'd be highly specced with high prices, but even I wasn't expecting things to be this bad and thought they'd at least have higher-res displays at current prices.
Give me a laptop with the SD670 (yet to be announced) and no LTE at $300 and I might consider it. Oh well, at least we have Raven Ridge now.
Right now the US government is very concerned about Huawei, to the point that they're pressuring AT&T to stop using their products. In addition, they don't like them being involved in next-gen wireless because of the security risk involved. To be fair, the company is filled top to bottom with Chinese government officials.
As for Apple, they're not the only US or EU company that has given up IP to the Chinese government in order to play in their backyard. Of course that comes at a cost in the long run.
It will be interesting to see what happens over the next few years between China, the EU & US over this issue.
At the rate China is pouring money into AI with little to zero oversight, they will be the first country to be pawned by a SuperAI (the first AGI that is superhuman); from there, the democratization of rights and freedom will accelerate. It may be a bit turbulent in the adjustment period, but it will prevail. The process has already been in motion for some months...
What is in motion? Democracy? With Xi in power it is rather going the other way around. The progress China had made in the decades since Tiananmen Square is going to be wiped out soon...
Meanwhile, the current AI is as far from the type of general AI you talk about as we were from useful neural networks in the early '80s... Don't count on it any time soon.
Nice article Andrei. Just demonstrates how much Qualcomm is killing it right now, gpu is nearly twice as efficient as Mali, likely much more efficient in area also. Even the hexagon 680 DSP, which is not a special AI processor on its own but can match the efficiency of likely the best AI processor in smartphones... Huawei NPU. Aside from the horrible mistakes of the Snapdragon 810 & 820...they seem to have got their CPU/SOC decisions in order.
9810 Vs 845 is going to be a battle royale, Samsung M3 might well turn the tables around.
If the modem IP is Huawei's one true in-house part, why didn't you at least test it alongside the CPU and GPU? I'd assume that in the real world it too has a large impact on battery and performance?
The kit to properly test a modem power/attenuation to battery is around $50-100k. We did borrow one once, a few years ago, but it was only short-term loan. Not sure if/when we'll be able to do that testing again.
How does Mali have so many design wins? Why did Samsung switch from PowerVR to Mali? Cost savings? Politics? Because it clearly wasn't a decision made on technical merit.
Because OEMs like Samsung are not stupid? And Mali is actually very power efficient and competitive?
What are you basing your GPU decision on? Nothing in the article provides evidence that Mali is less efficient than Adreno in UI acceleration or 60fps-capped popular games (or even the 60fps 1080p-normalized T-Rex benchmark)...
Measuring the max (constant) power draw of the GPU, which is supposed to be reached only in very short bursts during a workload, is absolutely meaningless.
Your argument is half-way sensible for a CPU but not for a GPU.
A GPU should not even HAVE a boost clock - there is no point in that for typical GPU workloads. Where a CPU is often active in bursts, a GPU has to sustain performance in games - normal UI work barely taxes it anyway.
So yes, the max sustained performance and associated efficiency are ALL that matter. And Mali, at least in the implementations we have seen, is behind.
I think you're confusing fixed-function processing with general-purpose GPUs. Modern GPU clocks behave just like CPU cores, and yes, with bursts, just like Nvidia's and AMD's. Not all scenes rendered in a game, for example, need the same GPU power, and not all games have the same GPU power needs.
Yes, there is a certain performance envelope that most popular games target. That performance envelope/target is definitely not SlingShot or T-Rex.
This is where Andrei's and your argument crumbles. You need to figure out that performance target and measure efficiency and power draw at that target. That's relatively easy to do: open up Candy Crush and Asphalt 8 and measure on-screen fps and power draw. That's how you measure efficiency on A SMARTPHONE SoC. Your problem is that you think people are using these SoCs like they would on a workstation. They don't. No one is going to render a 3ds Max project on these phones, and there are no games that even saturate last year's flagship mobile GPU.
Not sure if you're not getting my simple and sensible point, or you're just being stubborn about it. Mobile SoC designers have argued for bursty GPU behavior for years. You guys need to get off your damn high horse and stop deluding yourselves into thinking that you know better. What Apple or Qualcomm do isn't necessarily best, but it might be best for the GPU architecture THEY'RE using.
As for the CPU, you agree, but Andrei insists on making the same mistake. You DON'T measure efficiency at max clocks. Again, max clocks are used in bursts and only for VERY short periods of time. You measure efficiency by measuring the time it takes to complete a COMMON workload and the total power it consumes over that run (a rough sketch of that follows below). Another hint: that common workload is NOT Geekbench, and it sure as hell isn't SPEC.
The A75 is achieving higher performance mostly with higher clocks. The Exynos M3 is a wide core WITH higher clocks. Do you really believe these guys are idiots? You really think that's going to affect efficiency negatively? You think Android OEMs will make the same "mistake" Apple did and not provide adequate and sustainable power delivery?
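As a concrete illustration of the "total energy for a fixed, common workload" comparison being argued for above, here is a minimal sketch; the sampling interval and power readings are made-up placeholder values, and the workload itself (say, a scripted page-load run) is only assumed.

```python
# Hedged sketch of "energy to complete a fixed workload" as an efficiency metric.
# The power samples below are invented placeholders, not real measurements.

def energy_joules(power_samples_w, interval_s):
    """Approximate energy as the sum of power samples times the sampling interval."""
    return sum(power_samples_w) * interval_s

interval = 0.5  # seconds between samples (assumed)
# Pretend we sampled platform power while the same scripted page-load run finished on two phones.
device_a = [2.1, 2.4, 2.0, 1.8, 1.7, 1.6]   # watts (assumed)
device_b = [3.0, 2.8, 1.2, 1.1, 1.0, 1.0]   # watts (assumed)

for name, samples in (("device A", device_a), ("device B", device_b)):
    print(f"{name}: {energy_joules(samples, interval):.2f} J for the same workload (lower is better)")
```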
"The Kirin 970 in particular closes in on the efficiency of the Snapdragon 835, leapfrogging the Kirin 960 and Exynos SoCs." Except according to the chart right above it the 960 is still more efficient.
The efficiency axis is portrayed as energy (joules) per performance (test score). In this case the less energy used, the more efficient, meaning the shorter the bars, the better.
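To make that metric concrete, here is a tiny worked example; the scores, average power figures and runtimes are invented purely for illustration.

```python
# Illustrative only: the chart's efficiency metric, energy per unit of benchmark score.
# All numbers are hypothetical; fewer joules per point (a shorter bar) is better.

def joules_per_point(avg_power_w, runtime_s, score):
    return (avg_power_w * runtime_s) / score

print(joules_per_point(avg_power_w=3.0, runtime_s=1000, score=25))  # 120.0 J/point
print(joules_per_point(avg_power_w=4.5, runtime_s=950,  score=28))  # ~152.7 J/point: higher score, worse efficiency
```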
I think I see what you mean, but the graphs themselves need work. It's not clear which axis belongs to which data point, and the scaling/notation on the left axis makes no sense. If you look at some of them, quite often the longest data bar has a mark showing a value somewhere halfway between two data points with bars smaller than it. So it leads to confusion: is the length of the bar one metric, and the mark another? And it's not that the bars are on a reversed scale either. It's not even clear why there are two axes at all, now that I take a second look.
It's a good idea for a viz, but it needs some rejigging. Maybe it looks clearer on desktop; on mobile it may be too small.
I agree with Hairses that the graph - and in particular its legend - could do with revision.
At first I thought the CPU was faster than the NPU because the arrows seemed to be pointing at the end which related to the measurement in question - instead it seems the intent was "travelling in this direction". You could perhaps keep the text and arrows the same, but position them at the relevant sides of the graph.
As mentioned in the article, Apple stuff is a lot harder. Measuring power efficiency for example requires me to tear down an iPhone to tap the battery. It's my goal for the future as I work through the backlog of articles.
Great article. I'd have liked to see GPU efficiency figures at sustained power levels, though. Not that it should reverse the outcome, but I would expect the efficiency of the newest Samsung/Qualcomm/HiSilicon chips to be a bit closer then.
Another issue might be the silicon lottery... hard to deal with but especially small differences might be due to a particularly leaky or good piece of silicon...
Excellent article! Hope to see many more like this. I wish the mid range could also be included but I understand how time consuming these tests were. Great job, well done!
Also really a testament to the Adreno 500 series of GPUs..great performance with good energy consumption and good temps. Can't wait to see how the 600 series does
I understand that it's beyond your educational level to understand what makes an SoC better, since it has been explained to you by myself and others, so please stop using abbreviations like you're some sort of expert. Do you even know what IPC means? SMH...
OK, maybe... though I don't know what workload you would even be able to see "3% higher integer IPC" on, on a phone. The only workloads I'm running that even remotely tax these monsters really come down to how well the Vulkan drivers pan out, the chip's ability to not thermally throttle all of its performance away, and actually doing this for a while away from a charger. For those workloads Snapdragon is king, as the Mali Vulkan/OpenGL ES 3.x drivers are terrible in comparison. Again, I was just curious.
@Wardrive86 I was being a bit facetious. I assume the person either prefers Samsung because of an association that's developed between the success of the company and their own sense of self-worth, or they like watching YouTube videos of people opening a bunch of apps while a timer runs on the screen :)
>We’ve seen companies such as Nvidia try and repeatedly fail at carving out meaningful market-share.
I don't think I'd call Nvidia's strategy for mobile SoCs, as of the Shield Portable, "pursuing market share", and I think their actual intentions have been more long-term, with emphasis on the Drive CX/PX. The Shield devices were just a convenient way to monetize exploration into ARM and their custom Denver cores. Hence we saw the Shield Portable and Tablet more or less die after one iteration; the SoCs were more or less there as an experiment. I don't think they were really prepared for the success the Shield TV has had, and so that's gotten to see some evolution; the Nintendo Switch win is also nice for them but not really the focus. As much as I want to see a more current Tegra for a Shield Tablet (A73, Pascal cores, <=16nm), the Shield Tablet 2 was cancelled and doesn't look to be getting an update.
>Meanwhile even Samsung LSI, while having a relatively good product with its flagship Exynos series, still has not managed to win over the trust of the conglomerate's own mobile division. Rather than using Exynos as an exclusive keystone component of the Galaxy series, Samsung has instead been dual-sourcing it along with Qualcomm’s Snapdragon SoCs. It’s therefore not hard to make the claim that producing competitive high-end SoCs and semiconductor components is a really hard business.
We did see the Exynos 7420 with its Samsung sourced Exynos modem 333 which further adds to the questions of *why* Samsung bothers to source Snapdragons for the US. That's just extra development complexity on multiple levels. I always thought it had something to do with the cost of CDMA patent licensing, so they'd just opt to use Qualcomm's products and the Galaxy S6 was a special situation as Snapdragon was hot garbage.
There has to be some reason that Samsung bothers with Snapdragon when their Exynos offerings perform pretty similarly.
This article indicates that they have similar performance but pretty divergent efficiency. This also doesn't even touch on the modem front where Qualcomm has a massive advantage.
I bet NV had a sizable inventory of Tegra X1, which has the twin combo of suck of A57 cores and TSMC's 20nm process, that they were more than willing to dump onto Nintendo for free, since nobody else in their right mind would use them for phones, while the intended ARM tablet market had collapsed almost overnight.
" The hardware acceleration blocks with various names from various companies do not actually do any DEEP LEARNING, but rather are there to improve execution (inferencing) of neural network models "
This is something I've read on this site before and it clashes with actual usage of the term and I think the issue is that deep learning is being over-loaded. Deep learning describes a class of NNs with certain characteristics (hierarchical representation). You perform both (not at the same time:) training and inference with such networks. Briefly, both stages are part of the deep learning, and "deep learning" is more of a noun than a verb:)
Does Huawei also have a flash/RAM business and does it manufacture LCDs? Do they make image sensors? I couldn't find any info on that readily available on the net. If they don't, it seems a bit weird to use "vertical integration" to only refer to the SoC inside a phone, one of the lesser parts of the overall experience these days. :)
SPEC/GHz is a metric to showcase IPC used in the industry, including ARM themselves. If you're tending to disagree with the industry habits you're free to do so.
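For readers unfamiliar with the convention, here is a tiny illustration of how that normalization works; the scores and clocks are hypothetical, chosen only to show why per-GHz numbers are compared rather than raw scores.

```python
# Rough illustration of the SPEC/GHz convention mentioned above (an IPC proxy).
# Scores and clocks below are hypothetical examples, not real chip data.

def spec_per_ghz(spec_score, clock_ghz):
    """Normalize a SPEC score by clock frequency to estimate per-clock performance."""
    return spec_score / clock_ghz

# A higher-clocked core can post the higher raw score yet lower per-clock performance.
print(f"Core A: {spec_per_ghz(22.0, 2.8):.2f} points/GHz")  # ~7.86
print(f"Core B: {spec_per_ghz(20.0, 2.1):.2f} points/GHz")  # ~9.52
```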
nm, found it at ludashi.com. Does anyone know if you can set the language to English? Also, does Master Lu have any plans to support NNAPI? I'd like to see how the Pixel performs.
''I wouldn’t necessarily say that Apple is the SoC trend setter that other companies are trying to copy, as much as other vendors are coming to the same conclusion Apple has: to be able to evolve and compete in a mature ecosystem you need to be able to control the silicon roadmap yourself. Otherwise you fall into the risk of not being able to differentiate from other vendors using similar component stacks, or risk not being competitive against those vendors who do have vertical integration. Apple was early to recognize this, and to date Huawei has been the only other OEM able to actually realize this goal towards quasi-silicon independence''
LOL You must be joking...when did you try Apple products? I have seen NO A10X or A11 reviews from you guys...it's very difficult to believe what you can't prove...
This is one of the worst HiSilicon designs. So much black silicon & such a waste of it. It would be much better if they'd MP'd the P6 DSP instead of that horrible, overgrown machine-learning cluster block.
I am normally not interested in phone ARM SoCs, but I wanted to have a look at this one, so I looked at the roadmap table on the first page... Now I am mighty confused... The Kirin 970 especially... So it does 2160p60 decode and 2160p30 encode... not capable of 2160p30 decode? I guess the 950 is better in this regard... The Camera/ISP line is also funny... all 3 have a dual 14-bit ISP, no clue which is better... again the 960 looks to be the best of the bunch...
Oh, and the interconnect one takes the crown... just plain Arm CCI on the top model? The CCI-550 on the 960 looks way superior...
For the future, either put up a reasonably detailed table that actually serves the purpose (provides an at-a-glance overview and evaluation possibility) or don't. Doing a half-assed job like this just lowers your credibility...
If it can do 2160p60 Decode then I'd imagine that of course it can do 2160p30 Decode, just as it can do 1080p60/30 decode. You list the maximum in a category.
What a wonderful article: a joy to read, thoughtful, and very, very insightful. Thank you, Andrei. Here's to more coverage like that in the future.
It looks like the K970 could be used in smaller form factors. If Huawei were to make a premium, bezel-less ~4.8" 18:9 model powered by the K970, it would be wonderful - a premium Android phone about the size of the iPhone SE.
Even though Samsung and Qualcomm (S820) have custom CPUs, it feels like their designs are much closer to stock ARM than Apple's CPUs. Why are they not making wider designs? Is it a matter of inability or unwillingness?
Props for a nice article with data rich diagrams filled with interesting metrics as well as the efforts to normalize tests now and into the future w/ LLVM + SPECINT 06. (Adding the units after the numbers in the chart and "avg. watts" to the rightward pointing legend at the bottom would have helped me grok faster...) Phones are far from general purpose compute devices and their CPUs are mainly involved in directing program flow rather than actual computation, so leaning more heavily into memory operations with the larger data sets of SPECINT is a more appropriate CPU test than Geekbench. The main IPC uplift from OoOE comes from the prefetching and execution in parallel of the highest latency load and store operations and a good memory/cache subsystem does wonders for IPC in actual workloads. Qualcomm's Hexagon DSP has
It would be interesting to see the 810 here, but its CPU figures would presumably blow off the chart. A modem or wifi test would also be interesting (care for a donation toward the aforementioned harness?), but likely a blowout in the good direction for the Qualcomm chipsets.
Andrei, two questions on the Master Lu tests. First, is there a chance you could run it on the 835 GPU as well and compare? Second, do these power number include DRAM power, or are they SoC only? If they do not include DRAM power, any chance you could measure that as well?
The Master Lu test uses the SNPE framework and currently doesn't have the option to choose the computing target on the SoC. The GPU isn't any or much faster than the DSP, and it is less efficient.
The power figures are the active power of the whole platform (total power minus idle power) so they include everything.
"AnandTech is also partly guilty here; you have to just look at the top of the page: I really shouldn’t have published those performance benchmarks as they’re outright misleading and rewarding the misplaced design decisions made by the silicon vendors. I’m still not sure what to do here and to whom the onus falls onto." That is pretty easy. Post sustained performace values and not only peak power. Just run the benchmarks ten times in a row, it's not that difficult. If in every review sustained performance is shown, the people will realize this theater.
And it is a big problem. Burst GPU performance is useless; no one plays a game for half a minute. Burst CPU performance is perhaps a different matter, as it helps to optimize the overall snappiness.
What I would love to see is a comparison to Imagination GPU IP. Is there a chance to run this parcours with an iPhone 7? Well, Apple's own GPU isn't uninteresting either; an iPhone 8 would be great too, so one can see what improvement they achieved with their own GPU.
Whilst I have all the new iPhones I can only post performance data as we cannot open up Apple review devices for power measurement. I have the intention to do this in the future once I get some new tools and a device I can dismantle.
There's a reason you do not see the Exynos in the US. It is not because Samsung has not convinced its own mobile division... it is legally bound to use QC here in the US: https://www.reddit.com/r/Android/comments/71rjyx/w... The author might want to do a Google search on that next time. Otherwise, good article.
zorxd - Monday, January 22, 2018 - link
Samsung isn't vertically integrated? They also have their own SoC and even fabs (which Huawei and Apple don't)Myrandex - Monday, January 22, 2018 - link
Agreed, but they don't even always use their own components like Huawei and Apple do, I think I've yet to own a Samsung Smartphone that uses a Samsung SoC. I think they will eventually get there though although I can't say I have any idea why they aren't doing it today.jospoortvliet - Saturday, January 27, 2018 - link
Samsung always uses their own SOCs unless they are legally forced not to, like in the US...levizx - Saturday, January 27, 2018 - link
They are DEFINITELY NOT legally forced to use Qualcomm chips in China, especially for the worlds biggest carrier China Mobile. As for the US, if Huawei is not forced to use Qualcomm, I can't imagine why Samsung would unless they signed a deal with Qualcomm - then again that's just by choice.Andrei Frumusanu - Monday, January 22, 2018 - link
Samsung's mobile division (which makes the phones) still makes key use of Snapdragon SoCs for certain markets. Whatever the reason for this and we can argue a lot about it, fact is that the end product more often than not ends up being as the lowest common denominator in terms of features and performance between the two SoC's capabilities. In that sense, Samsung is not vertically integrated and does not control the full stack in the same way Apple and Huawei do.Someguyperson - Monday, January 22, 2018 - link
No, Samsung simply isn't so vain as to use it's own solutions when they are inferior. Samsung skipped the Snapdragon 810 because their chip was much better. Samsung used the 835 instead of their chip last year because the 835 performed nearly exactly the same as the Samsung chip, but was smaller, so they could get more chips out of an early 10 nm process. Huawei chooses their chips so they don't look stupid by making an inferior chip that costs more compared to the competition.Samus - Monday, January 22, 2018 - link
Someguyperson, that isn't the case at all. Samsung simply doesn't use Exynos in various markets for legal reasons. Qualcomm, for example, wouldn't license Exynos for mobile phones as early as the Galaxy S III, which is why a (surprise) Qualcomm SoC was used instead. Samsung licenses Qualcomm's modem IP, much like virtually every SoC designer, for use in their Exynos. The only other option has historically been Intel, who until recently, made inferior LTE modems.I think it's pretty obvious to anybody that if Samsung could, they would, sell their SoC's in all their devices. They might even sell them to competitors, but again, Qualcomm won't let them do that.
lilmoe - Monday, January 22, 2018 - link
Since their Shannon modem integration in the Exynos platform, I struggled to understand why...My best guess would be a bulk deal they made with Qualcomm in order for them to build Snapdragons on both their 14nm and 10nm. Samsung offered a fab deal, Qualcomm agreed to build using Samsung fabs and provide a generous discount in Snapdragon resale for Galaxies, but in the condition to buy a big minimum amount of SoCs. That minimum quantity was more than what was needed for the US market. Samsung did the math, and figured that it was more profitable to keep their fabs ramped up, and save money on LTE volume licensing. So Samsung made a bigger order and included Chinese variants in the bulk.
I believe this is all a bean counter decision, not technical or legal.
KarlKastor - Thursday, January 25, 2018 - link
That's easy to answer. Samus is right, it's a legal problem. The reason is named CDMA2000.Qualcomm owns all IP concerning CDMA2000.
Look at the regions where a Galaxy S is shipped with a Snapdragon and look a the Countries using CDMA2000. That's North America, Chna and Japan.
Samsung has two choices: Using a Snapdragon SoC with integrated QC Modem or plant a dedicated QC Modem alongside their own SoC.
The latter is a bad choice concerning space and i think it's more expensive to buy an extra chip instead of just using a Snapdragon.
I bet all this will end when Verizon quits CDMA2000 in late 2019 and Samsung will use their Exynos SoCs only. CDMA200 is useless since LTE and is just maintained for compatibility reasons.
In all regions not using this crappy network, Samsung uses Exynos SoCs in every phone from low cost to high end.
So of course Samsung IS vertically integrated. Telling something else is pretty ridicoulous.
They have theor own fabs, produce and develope their own SoC, modem, DRAM and NAND Flash and have their own CPU and modem IP. They only lack their own GPU IP.
So who is more vertically integrated?
KarlKastor - Thursday, January 25, 2018 - link
I forgot their own displays and cameras. Especially the first is very important. The fact, that they make their own displays enabled more options in design.Think of their Edge-Displays, you may like them or not, but with them the whole design differed much from their competitors.
Ratman6161 - Wednesday, January 24, 2018 - link
Personally I think Samsung is in a great position...wheather you consider them "truly vertically integrated" or not. One thing to remember is that most often, Samsung flagship devices come in two variants. It's mostly in the US where we get the Qualcomm variants while elsewhere tends to get Exynos. The dual source is a great arrangement because every once in a while Qualcomm is going to turn out a something problematic like the Snapdragon 810. When that happens Samsung has the option to use its own which is what they did with the Galaxy S6/Note 5 generation which was Exynos only.Another point is: what do you consider "truly vertically integrated". The story cites Apple and Huewai but they don't actually manufacture their SOC's and neither does Qualcomm. I believe the Kirin SOC's are actually manufactured by TSMC while Apple and Qualcomm SOC's have at various times been actually manufactured in Samsung FABs. As far as I know, Samsung is the only company that even has the capability to design and also manufacture their own SOC. So in a way, you could say that my Samsung Note 5 is about the most vertically integrated phone there is, along with non-US versions of the S7 and S8 generations. In those cases you have a samsung SOC manufactured in a Samsung FAB in a Samsung phone with a Samsung screen etc. Don't make the mistake of thinking the whole world is just like us...they aren't. Also many of the screens for other brands are also of Samsung manufacture so you have to keep in mind that there is a lot more to the device than the SOC
fred666 - Monday, January 22, 2018 - link
Huawei only uses HiSilicon SoCs? Nothing from Qualcomm?Andrei Frumusanu - Monday, January 22, 2018 - link
They've used Qualcomm chip-sets and still do use them in segments they can't fill with their own SoCs.niva - Monday, January 22, 2018 - link
So they still use QC chips, but unlike them, Samsung isn't vertically integrated because they use QC chips.Get out of here.
Dr. Swag - Monday, January 22, 2018 - link
His point is Huawei only uses non-HiSilicon chips in price segments that they do not have SoCs for. Samsung, however, does sometimes use QC silicon even if they have SoCs that can fill that segment (e.g. Samsung uses the Snapdragon 835s even though they have the 8895).I'm not saying that I agree with Andrei's view, but there is a difference.
niva - Tuesday, January 23, 2018 - link
I completely disagree with the assessment that Samsung is somehow not "as vertically integrated" as Huawei. Samsung is not just vertically integrated, it produces components for many other key players in the market. They have reasons why they CHOOSE not to use their SOCs in specific markets and areas. Some of the rationale behind those choices may be questioned, but it's a choice. I too think that the world would be a better place if they actually put their own chip designs into their phones and directly competed against Qualcom. That of course might be the end of Qualcom and a whole lot of other companies... Samsung can easily turn into a monopoly that suffocates the entire market, so it's not just veritcal, but horizontal integration. What Huawei has accomplished in short order is impressive, but isn't Huawei just another branch of the Chinese government at this point? Sure yeah, their country is more vertically integrated. Maybe that's the line to take to justify the statement...levizx - Monday, February 26, 2018 - link
No, it's not INTEGRATED because it doesn't prefer its own over outsourcing. Samsung Mobile department runs separately from its Semiconductor department which act as a contractor no different than Qualcomm.As for Huawei being a branch of the Chinese government, it's as true as Google being part of the US government. Stop spruiking conspiracy theory. I know for a fact their employees almost fully owns the company.
KarlKastor - Thursday, January 25, 2018 - link
Well, that's not true. Huawei choose the Snapdragon 625 in the Nova. Why not use their own Kirin 600 Series? it is the same market segment.Samsung only opts for Snapdragon, where they have no own SoCs: all regions with CDMA2000 Networks.
In all other regions, europe for example, they ship all smartphones frome the J- and A-Series to the S-Series and Note with their Exynos SoCs.
yslee - Tuesday, January 30, 2018 - link
You keep on repeating that line, but where I am we have no CDMA2000 networks and still get Snapdragon Samsungs.levizx - Monday, February 26, 2018 - link
That's also not true, Samsung uses Snapdragon where there's no CDMA2000 as well. Huawei used to use VIA's 55nm CBP8.2D over Snapdragon.Mid-tier is not so indicative compared to higher end devices when it comes to, well everything. They may even outsource the ENTIRE DESIGN to a third party, and still proves nothing in particular. They might have chosen S625 because of supply issues which is completely reasonable. Same can not be applied to Samsung, since there's no such thing as supply issues when it comes to Exynos and Snapdragon.
lilmoe - Monday, January 22, 2018 - link
Unfortunately, they're not "fully" vertical as of yet. They've been held back since the start by Qualcomm's platform, because of licensing and "other" issues that no one seems to be willing to explain. Like Andrei said, they use the lowest common denominator of both the Exynos and Snapdragon platforms, and that's almost always lower on the Snapdragons.Where I disagree with Andrei, and others, are the efficiency numbers and the type of workloads used to reach those results. Measuring efficiency at MAX CPU and GPU load is unrealistic, and frankly, misleading. Under no circumstance is there a smartphone workload that demands that kind of constant load from either the CPU or GPU. A better measure would be running a actual popular game for 30 mins in airplane mode and measuring power consumption accordingly, or loading popular websites, using the native browser, and measuring power draw at set intervals for a set period of time (not even a benchmarking web application).
Again, these platforms are designed for actual, real world, modern smartphone workloads, usually running Android. They do NOT run workstation workloads and shouldn't be measured as such. Such notions, like Andrei has admitted, is what pushes OEMs to be "benchmark competitive", not "experience competitive". Apple is also guilty of this (proof is in the latest events, where they're power deliver can't handle the SoC, or the SoC is designed well above sustainable TDP). I can't stress this enough. You just don't run SPEC and then measure "efficiency". It just doesn't work that way. There is no app out there that stresses a smartphone SoC this much, not even the leading game. In the matter of fact, there isn't an Android (or iPhone) game that saturates last year's flagship GPU (probably not even the year before).
We've reached a point of perfectly acceptable CPU and GPU performance for flagships running 1080p and 1440p resolution screens at this point. Co-processors, such as the decoder, ISP, DSP and NPU, in addition to software optimization are far, FAR more more important at this time, and what Huawei has done with their NPU is very interesting and meaningful. Kudos to them. I just hope these co-processors are meant to improve the experience, not collect and process private user data in any form.
star-affinity - Monday, January 22, 2018 - link
Just curious about your claims about Apple – so you think it's a design fault? I'm thinking that the problem arise only when the battery has been worn out and a healthy battery won't have the problem of not sustaining enough juice for the SoC.lilmoe - Monday, January 22, 2018 - link
Their batteries are too small, by design, so that's the first design flaw. But that still shouldn't warrant unexpected slowdowns within 12-18 months of normal usage; their SoCs are too power hungry at peak performance, and the constant amount of bursts was having its tall on the already smaller batteries that weren't protect with a proper power delivery system. It goes both ways.Samus - Monday, January 22, 2018 - link
Exactly this. Apple still uses 1500mah batteries in 4.7" phones. When more than half the energy is depleted in a cell this small, the nominal voltage drops to 3.6-3.7v from the 3.9-4.0v peak. A sudden spike in demand for a cell hovering around 3.6v could cause it to hit the low-voltage cutoff, normally 3.4v for Li-Ion, and 3.5v for Li-Polymer, to prevent damage to the chemistry the internal power management will shut the phone down, or slow the phone down to prevent these voltage drops.Apple designed their software to protect the hardware. It isn't necessarily a hardware problem, it's just an inherently flawed design. A larger battery that can sustain voltage drops, or even a capacitor, both of which take up "valuable space" according to Apple, like that headphone jack that was erroneously eliminated for no reason. A guy even successfully reinstalled a Headphone jack in an iPhone 7 without losing any functionality...it was just a matter of relocating some components.
ZolaIII - Wednesday, January 24, 2018 - link
Try with Dolphine emulator & you will see not only how stressed GPU is but also how much more performance it needs.Shadowfax_25 - Monday, January 22, 2018 - link
"Rather than using Exynos as an exclusive keystone component of the Galaxy series, Samsing has instead been dual-sourcing it along with Qualcomm’s Snapdragon SoCs."This is a bit untrue. It's well known that Qualcomm's CDMA patents are the stumbling block for Samsung. We'll probably see Exynos-based models in the US within the next two versions once Verizon phases out their CDMA network.
Andrei Frumusanu - Monday, January 22, 2018 - link
Samsung has already introduced a CDMA capable Exynos in the 7872 and also offers a standalone CDMA capable modem (S359). Two year's ago when I talked to SLSI's VP they openly said that it's not a technical issue of introducing CDMA and it'll take them two years to bring it to market once they decide they need to do so (hey maybe I was the catalyst!), but they didn't clarify the reason why it wasn't done earlier. Of course the whole topic is a hot mess and we can only speculate as outsiders.KarlKastor - Thursday, January 25, 2018 - link
Uh, how many devices have shipped yet with the 7872?Why do you think they came with a MDM9635 in the Galaxy S6 in all CDMA2000 regions? In all other regions their used their integrated shannon modem.
The other option is to use a Snapdragon SoC with QC Modem. They also with opt for this alternative but in the S6 they don't wanted to use the crappy Snapdragon 810.
It is possible, that Qualcomm today skip their politics concerning CDMA2000 because it is obsolete.
jjj - Monday, January 22, 2018 - link
Don't forget that Qualcomm is a foundry customer for Samsung and that could be why they still use it.Also, cost is a major factor when it comes to vertical integration, at sufficient scale integration can be much cheaper.
What Huawei isn't doing is to prioritize the user experience and use their high end SoCs in lower end devices too, that's a huge mistake. They got much lower costs than others in high end and gaining scale by using these SoCs in lower end devices, would decrease costs further. It's an opportunity for much more meaningful differentiation that they fail to exploit. Granted, the upside is being reduced nowadays by upper mid range SoCs with big cores and Huawei might be forced into using their high end SoCs more as the competition between Qualcomm and Mediatek is rather ferocious and upper mid becomes better and better.
Got to wonder about A75 and the clocks it arrives at ... While at it, I hope that maybe you take a close look at the SD670 when it arrives as it seems it will slightly beat SD835 in CPU perf.
On the GPU side, the biggest problem is the lack of real world tests. In PC we have that and we buy what we need, in mobile somehow being anything but first is a disaster and that's nuts. Not everybody needs a Ferrari but mobile reviews are trying to sell one to everybody.
HStewart - Monday, January 22, 2018 - link
This could be good example why Windows 10 for ARM will failed - it only works for Qualcomm CPU and could explain why Samsung created Intel based Windows TabletsI do believe that ARM especially Samsung has good market in Phone and Tablets - I love my Samsung Tab S3 but I also love my Samsung TabPro S - both have different purposes.
HStewart - Monday, January 22, 2018 - link
One thing I would not mind Windows for ARM - if had the following1. Cheaper than current products - 300-400 range
2. No need for x86 emulation - not need on such product - it would be good for Microsoft Office, email and internet machine. But not PC apps
StormyParis - Monday, January 22, 2018 - link
But then why do you need WIndows to do that ? Android iOS and CHromme already do it, with a lot more other apps.PeachNCream - Monday, January 22, 2018 - link
It's too early in the Win10 on ARM product life cycle to call the entire thing a failure. I agree that it's possible we'll be calling it failed eventually, but the problems aren't solely limited to the CPU of choice. Right now, Win10 ARM platforms are priced too high (personal opinion) and _might_ be too slow doing the behind-the-scenes magic necessary to run x86 applications. Offering a lot more battery life, which Win10 on ARM does, isn't enough of a selling point to entirely offset the pricing and limitations. While I'd like to get 22 hours of battery life doing useful work with wireless active out of my laptops, it's more off mains time than I can realistically use in a day so I'm okay with a lower priced system with shorter life (~5 hours) since I use my phone for multi-day, super light computing tasks already. That doesn't mean everyone feels that way so let's wait and see before getting out the hammer and nails for that coffin.
jjj - Monday, January 22, 2018 - link
The CPU is the reason for the high price; the SD835 comes at a high premium and LTE adds to it. That's why those machines are not competitive in price with Atom-based machines.
Use a $25 SoC and no LTE, and Windows on ARM becomes viable with even longer battery life.
PeachNCream - Monday, January 22, 2018 - link
I didn't realize the 835 accounted for so much of the BOM on those ARM laptops. Since Intel's tray pricing for its low-end chips isn't exactly cheap (not factoring in OEM/volume discounts), it didn't strike me as a significant hurdle. I'd thought most of the price was due to low production volume and attempts to make the first generation's build quality attractive enough to have a ripple effect on subsequently cheaper models.
tuxRoller - Monday, January 22, 2018 - link
I'm not sure they do. A search indicated that in 2014 the average price of a Qualcomm solution for a platform was $24. The speculation was that the high-end SoCs were sold in the high $30s to low $40s.
https://www.google.com/amp/s/www.fool.com/amp/inve...
jjj - Monday, January 22, 2018 - link
It's likely more like $50-60 for the hardware and $15 for licensing on a $700 laptop - although that includes only licenses to Qualcomm, and they are not the only ones getting paid. Even a very optimistic estimate can't go lower than $70 total, and that's a large premium versus my suggestion of a $25 SoC with no LTE.
An 8-core A53 might go below $10, something like the Helio X20 was around $20 in its time, and one would assume the SD670 will be $25-35, depending on how competitive MediaTek is with the P70.
jjj - Monday, January 22, 2018 - link
Some estimates go much higher though (look at the LTE-enabling components too, not just the SoC, for the S8): http://www.techinsights.com/about-techinsights/ove...
Don't think costs are quite that high, but they are supposed to know better.
tuxRoller - Monday, January 22, 2018 - link
That's way higher than I've seen: http://mms.businesswire.com/media/20170420006675/e...
Now, that's for the Exynos 8895, but I imagine prices are similar for Snapdragon.
Regardless, these are all estimates. I'm not aware of anyone who actually knows the real prices of these (including licenses) who has come out and told us.
jjj - Monday, January 22, 2018 - link
On licensing you can take a look at the newest two PDFs here: https://www.qualcomm.com/invention/licensing. Those are in line with their China agreement at 3.5% and 5% of 65% of the retail value. There would likely be discounts for exclusivity and so on. So, assuming multimode, licensing would be $22.75 for a $700 laptop before any discounts (if any), BUT that's only to Qualcomm and not others like Nokia, Huawei, Samsung, Ericsson and whoever else might try to milk this.
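To make the arithmetic explicit, here is a minimal sketch of that estimate, assuming the figures above (royalties assessed on 65% of retail value, at 3.5% single-mode / 5% multimode) rather than any official Qualcomm rate card:

```python
# Sketch of the per-device royalty estimate described above.
# Assumes the commenter's figures, not official Qualcomm terms.
RETAIL_PRICE = 700.00   # hypothetical $700 Windows-on-ARM laptop
ROYALTY_BASE = 0.65     # royalty assessed on 65% of the retail value

for label, rate in [("single-mode (3.5%)", 0.035), ("multimode (5%)", 0.05)]:
    royalty = RETAIL_PRICE * ROYALTY_BASE * rate
    print(f"{label}: ${royalty:.2f} per device")

# multimode (5%): $22.75 per device -- matching the figure quoted above,
# before any discounts and before payments to other patent holders.
```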
As for the SoC, here's IHS on an SD835 phone: https://technology.ihs.com/584911/google-pixel-xl-...
jjj - Monday, January 22, 2018 - link
Sorry, it's an SD821 phone.
tuxRoller - Tuesday, January 23, 2018 - link
Thanks for that IHS link. I just wasn't able to find a recent BOM that included a Snapdragon and wasn't behind a paywall :/
The first link wasn't working, but I found others on the Qualcomm site. They list licensing terms of 2.275% (5G only) or 3.25% (multimode). Given that, I agree that offering an ARM laptop that doesn't include a (working) baseband makes more sense.
https://www.qualcomm.com/documents/qualcomm-5g-nr-...
https://www.qualcomm.com/documents/examples-of-roy...
jjj - Monday, January 22, 2018 - link
Well, the chip and LTE (LTE meaning hardware plus licensing costs) don't add hundreds of dollars to the retail price, but the extra cost forces them to position these as high end. A mobile SoC actually has some cost positives too, as it offers more integration and thus slightly reduces costs, but with a high-end SoC and LTE things go sideways.
I was telling people before these got released that they'd be highly specced with high prices, but even I wasn't expecting things to be this bad, and thought they'd at least have higher-res displays at current prices.
Give me a laptop with an SD670 (yet to be announced) and no LTE at $300 and I might consider it. Oh well, at least we have Raven Ridge now.
lilmoe - Monday, January 22, 2018 - link
Raven Ridge is where it's at. Let's hope it doesn't disappoint.
Manch - Monday, January 22, 2018 - link
Maybe Huawei is late to the party because they need time to integrate "features" at the behest of the Chinese government?
StormyParis - Monday, January 22, 2018 - link
If you mean "move all their servers to a gov'-owned and operated facility", that's Apple China.Manch - Tuesday, January 23, 2018 - link
Right now the US government is very concerned about Huawei, to the point that they're pressuring AT&T to stop using their products. In addition, they don't like them being involved in next-gen wireless because of the security risk involved. To be fair, the company is filled top-down with Chinese government officials.
As for Apple, they're not the only US or EU company that has given up IP to the Chinese government in order to play in their backyard. Of course that comes at a cost in the long run.
It will be interesting to see what happens over the next few years between China, the EU and the US over this issue.
fteoath64 - Thursday, January 25, 2018 - link
At the rate China is pouring money into AI with little to zero oversight, they will be the first country to be pwned by a super-AI (the first AGI that is superhuman); from there, the democratization of rights and freedom will accelerate. Maybe a bit turbulent in the adjustment period, but it will prevail. The process has already been in motion for some months...
jospoortvliet - Saturday, January 27, 2018 - link
What is in motion? Democracy? With Xi in power it is rather going the other way around. The progress China had made in the decades since Tiananmen Square is going to be wiped out soon... Meanwhile, current AI is as far from the kind of general AI you talk about as we were from useful neural networks in the early '80s... Don't count on it soon.
french toast - Monday, January 22, 2018 - link
Nice article Andrei. Just demonstrates how much Qualcomm is killing it right now; the GPU is nearly twice as efficient as Mali, and likely much more efficient in area as well.
Even the Hexagon 680 DSP, which is not a dedicated AI processor, can match the efficiency of what is likely the best AI processor in smartphones... Huawei's NPU.
Aside from the horrible mistakes of the Snapdragon 810 and 820... they seem to have got their CPU/SoC decisions in order.
9810 vs 845 is going to be a battle royale; the Samsung M3 might well turn the tables.
StormyParis - Monday, January 22, 2018 - link
If the modem IP is Huawei's one true in-house part, why didn't you at least test it alongside the CPU and GPU? I'd assume that in the real world it too has a large impact on battery and performance?
Ian Cutress - Monday, January 22, 2018 - link
The kit to properly test modem power/attenuation against battery is around $50-100k. We did borrow one once, a few years ago, but it was only a short-term loan. Not sure if/when we'll be able to do that testing again.
juicytuna - Monday, January 22, 2018 - link
How does Mali have so many design wins? Why did Samsung switch from PowerVR to Mali? Cost savings? Politics? Because it clearly wasn't a decision made on technical merit.
lilmoe - Tuesday, January 23, 2018 - link
Because OEMs like Samsung are not stupid? And Mali is actually very power-efficient and competitive?
What are you basing your GPU decision on? Nothing in the article provides evidence that Mali is less efficient than Adreno in UI acceleration or in popular 60fps-capped games (or even the 60fps 1080p-normalized T-Rex benchmark)...
Measuring the constant power draw of the GPU, which is supposed to be reached in very short bursts, is absolutely meaningless.
lilmoe - Tuesday, January 23, 2018 - link
***Measuring the max (constant) power draw of the GPU, which is supposed to be reached in very short bursts during a workload, is absolutely meaningless.
jospoortvliet - Saturday, January 27, 2018 - link
Your argument is halfway sensible for a CPU but not for a GPU. A GPU should not even HAVE a boost clock - there is no point in that for typical GPU workloads. Where a CPU is often active in bursts, a GPU has to sustain performance in games - normal UI work barely taxes it anyway.
So yes, the max sustained performance and associated efficiency is ALL that matters. And Mali, at least in the implementations we have seen, is behind.
lilmoe - Sunday, January 28, 2018 - link
I think you're confusing fixed-function processing with general-purpose GPUs. Modern GPU clocks behave just like CPU cores, and yes, with bursts, just like NVIDIA's and AMD's. Not all scenes rendered in a game, for example, need the same GPU power, and not all games have the same GPU power needs.
Yes, there is a certain performance envelope that most popular games target. That performance envelope/target is definitely not SlingShot or T-Rex.
This is where Andrei's argument and yours crumble. You need to figure out that performance target and measure efficiency and power draw at that target. That's relatively easy to do; open up Candy Crush and Asphalt 8 and measure on-screen fps and power draw. That's how you measure efficiency on A SMARTPHONE SoC. Your problem is that you think people are using these SoCs like they would on a workstation. They don't. No one is going to render a 3ds Max project on these phones, and there are no games that even saturate last year's flagship mobile GPU.
Not sure if you're not getting my simple and sensible point, or you're just being stubborn about it. Mobile SoC designers have argued for bursty GPU behavior for years. You guys need to get off your damn high horse and stop deluding yourselves into thinking that you know better. What Apple or Qualcomm do isn't necessarily best overall, but it might be best for the GPU architecture THEY'RE using.
As for the CPU, you agree, but Andrei insists on making the same mistake. You DON'T measure efficiency at max clocks. Again, max clocks are used in bursts and only for VERY short periods of time. You measure efficiency by measuring the time it takes to complete a COMMON workload and the total power it consumes doing so. Another hint: that common workload is NOT Geekbench, and it sure as hell isn't SPEC.
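(For what it's worth, the "measure at the performance target" idea can be sketched in a few lines. The numbers below are made-up placeholders, not measurements from the article; it only illustrates that a capped and an uncapped operating point can yield different efficiency figures.)

```python
# Sketch: energy per frame at two operating points of the same game.
# All numbers are hypothetical placeholders, not data from the article.

def energy_per_frame(avg_power_w: float, avg_fps: float) -> float:
    """Joules spent per rendered frame at a given operating point."""
    return avg_power_w / avg_fps

# Peak operating point: GPU driven to its maximum sustained clocks.
peak = energy_per_frame(avg_power_w=4.0, avg_fps=80.0)

# Target operating point: the same game capped at 60 fps (vsync),
# the way most real titles actually run on a phone.
capped = energy_per_frame(avg_power_w=1.8, avg_fps=60.0)

print(f"peak:   {peak * 1000:.0f} mJ/frame")
print(f"capped: {capped * 1000:.0f} mJ/frame")
```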
lilmoe - Sunday, January 28, 2018 - link
The A75 achieves higher performance mostly through higher clocks. The Exynos M3 is a wide core WITH higher clocks. Do you really believe these guys are idiots? You really think that's going to affect efficiency negatively? You think Android OEMs will make the same "mistake" Apple did and not provide adequate and sustainable power delivery?
Laughable.
futrtrubl - Monday, January 22, 2018 - link
"The Kirin 970 in particular closes in on the efficiency of the Snapdragon 835, leapfrogging the Kirin 960 and Exynos SoCs."Except according to the chart right above it the 960 is still more efficient.
Andrei Frumusanu - Monday, January 22, 2018 - link
The efficiency axis is portrayed as energy (joules) per unit of performance (test score). In this case, the less energy used the more efficient, meaning the shorter the bars, the better.
Hairses - Monday, January 22, 2018 - link
I think I see what you mean, but the graphs themselves need work. It's not clear which axis belongs to which data point, and the scaling/notation on the left axis makes no sense. If you look at some of them, quite often the longest data bar has a mark showing a value somewhere halfway between two data points with bars smaller than it. So it leads to confusion; is the length of the bar one metric, and the mark another? And it's not that the bars are on a reversed scale either. It's not even clear why there are two axes at all, now that I take a second look.
It's a good idea for a viz, but it needs some rejigging. Maybe it looks clearer on desktop; on mobile it may be too small.
GreenReaper - Thursday, January 25, 2018 - link
I agree with Hairses that the graph - and in particular its legend - could do with revision.
At first I thought the CPU was faster than the NPU because the arrows seemed to be pointing at the end related to the measurement in question - instead it seems the intent was "travelling in this direction". You could perhaps keep the text and arrows the same, but position them at the relevant sides of the graph.
gregounech - Monday, January 22, 2018 - link
This is the reason I read AnandTech. Good job, Andrei.
hlovatt - Monday, January 22, 2018 - link
Great article. Any chance of the same for Apple?
Andrei Frumusanu - Monday, January 22, 2018 - link
As mentioned in the article, Apple stuff is a lot harder. Measuring power efficiency for example requires me to tear down an iPhone to tap the battery. It's my goal for the future as I work through the backlog of articles.
lilmoe - Monday, January 22, 2018 - link
If you're going the extra mile, it would be nice to see multiple generations of Apple SoCs tested, not just the A11. Thanks.
mczak - Monday, January 22, 2018 - link
Great article. I'd have liked to see GPU efficiency figures at sustained power levels, though. Not that it should reverse the outcome, but I would expect the efficiency of the newest Samsung/Qualcomm/HiSilicon chips to be a bit closer then.
Andrei Frumusanu - Monday, January 22, 2018 - link
It's something that I'm considering doing for device reviews (sustained power levels obviously differ between devices).
jospoortvliet - Saturday, January 27, 2018 - link
Another issue might be the silicon lottery... hard to deal with, but small differences especially might be due to a particularly leaky or particularly good piece of silicon...
Wardrive86 - Monday, January 22, 2018 - link
Excellent article! Hope to see many more like this. I wish the mid-range could also be included, but I understand how time-consuming these tests were. Great job, well done!
Wardrive86 - Monday, January 22, 2018 - link
Also really a testament to the Adreno 500 series of GPUs... great performance with good energy consumption and good temps. Can't wait to see how the 600 series does.
arvindgr - Monday, January 22, 2018 - link
Nice article. Can someone clarify whether the chipset supports USB 3.x? GSMArena lists USB 2.0, which is scary for a flagship chip!
hescominsoon - Monday, January 22, 2018 - link
Samsung does not use Exynos in the US due to a license agreement with... Qualcomm. https://www.androidcentral.com/qualcomm-licensing-...
I prefer Exynos to the QC SoCs...
Wardrive86 - Monday, January 22, 2018 - link
Why do you prefer Exynos over Snapdragon? Not being smart, just curious.
tuxRoller - Monday, January 22, 2018 - link
The 3% higher integer IPC?
lilmoe - Monday, January 22, 2018 - link
I understand that it's beyond your educational level to understand what makes an SoC better, even after it has been explained to you by myself and others, so please stop using abbreviations like you're some sort of expert. Do you even know what IPC means? SMH...
tuxRoller - Tuesday, January 23, 2018 - link
I'm assuming you've confused me with someone else.
"The Exynos 8895 shows a 25% IPC uplift in CINT2006 and 21% uplift in CFP2006 whilst leading the A73 in overall IPC by a slight 3%."
Yes, that's simply referencing the CPU, but that's a pretty important component and one whose prowess fans of Sam have enjoyed trumpeting.
Wardrive86 - Monday, January 22, 2018 - link
OK, maybe... though I don't know in what workload you would even be able to see "3% higher integer IPC" on a phone. The only workloads I'm running that even remotely tax these monsters really come down to how well the Vulkan drivers pan out, the ability to not thermally throttle all of the performance away, and to actually sustain this for a while away from a charger. For these workloads Snapdragon is king, as the Mali Vulkan/OpenGL ES 3.x drivers are terrible in comparison. Again, I was just curious.
tuxRoller - Tuesday, January 23, 2018 - link
@Wardrive86 I was being a bit facetious. I assume the person either prefers Samsung because of an association that's developed between the success of the company and their own sense of self-worth, or they like watching YouTube videos of people opening a bunch of apps while a timer runs on the screen :)
Space Jam - Monday, January 22, 2018 - link
>We’ve seen companies such as Nvidia try and repeatedly fail at carving out meaningful market-share.
I don't think I'd call Nvidia's strategy for mobile SoCs as of the Shield Portable 'pursuing market share', and I think their actual intentions have been more long-term, with emphasis on the Drive CX/PX. The Shield devices were just a convenient way to monetize exploration into ARM and their custom Denver cores. Hence we saw the Shield Portable and Tablet more or less die after one iteration; the SoCs were mostly there as an experiment. I don't think they were really prepared for the success the Shield TV has had, so that one has gotten to see some evolution; the Nintendo Switch win is also nice for them but not really the focus. As much as I want to see a more current Tegra for a Shield Tablet (A73, Pascal cores, <=16nm), the Shield Tablet 2 was cancelled and doesn't look to be getting an update.
>Meanwhile even Samsung LSI, while having a relatively good product with its flagship Exynos series, still has not managed to win over the trust of the conglomerate's own mobile division. Rather than using Exynos as an exclusive keystone component of the Galaxy series, Samsung has instead been dual-sourcing it along with Qualcomm’s Snapdragon SoCs. It’s therefore not hard to make the claim that producing competitive high-end SoCs and semiconductor components is a really hard business.
We did see the Exynos 7420 with its Samsung-sourced Exynos Modem 333, which further adds to the question of *why* Samsung bothers to source Snapdragons for the US. That's just extra development complexity on multiple levels. I always thought it had something to do with the cost of CDMA patent licensing, so they'd just opt to use Qualcomm's products, and the Galaxy S6 was a special situation as Snapdragon was hot garbage.
There has to be some reason Samsung bothers with Snapdragon when their Exynos offerings perform pretty similarly.
tuxRoller - Monday, January 22, 2018 - link
This article indicates that they have similar performance but pretty divergent efficiency. This also doesn't even touch on the modem front, where Qualcomm has a massive advantage.
StrangerGuy - Tuesday, January 23, 2018 - link
I bet NV had a sizable inventory of Tegra X1, which has the twin combo of suck of A57 cores and TSMC's 20nm process, that they were more than willing to dump onto Nintendo for free, since nobody else in their right mind would use them for phones while the intended ARM tablet market collapsed almost overnight.
SunnyNW - Tuesday, January 23, 2018 - link
Please look up the likes of the Tegra 2, 3, 4, 4i, etc.
Space Jam - Tuesday, January 23, 2018 - link
Oh no, definitely, I'd call the Tegras as a whole that, but from the Shield Portable onward it seems more like Nvidia was working towards something.
"The hardware acceleration blocks with various names from various companies do not actually do any DEEP LEARNING, but rather are there to improve execution (inferencing) of neural network models
"
This is something I've read on this site before, and it clashes with the actual usage of the term; I think the issue is that "deep learning" is being overloaded.
Deep learning describes a class of NNs with certain characteristics (hierarchical representations). You perform both training and inference (not at the same time :) with such networks.
Briefly, both stages are part of deep learning, and "deep learning" is more of a noun than a verb :)
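(A minimal toy sketch of that distinction: the same network is used for training, the expensive backward-pass work usually done offline, and for inference, the forward pass that phone NPUs/DSPs accelerate. Toy NumPy example, purely illustrative; this is not how any of these NPUs are actually programmed.)

```python
import numpy as np

# Toy two-layer network: the *same* model serves both stages.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))

def forward(x):
    """Inference: the forward pass that an NPU/DSP typically accelerates."""
    return np.maximum(x @ W1, 0) @ W2

# Training: forward + backward passes plus weight updates, normally done
# offline on GPUs/servers rather than on a phone's NPU.
x, y = rng.normal(size=(16, 4)), rng.normal(size=(16, 1))
for _ in range(100):
    h = np.maximum(x @ W1, 0)                 # forward
    grad_pred = 2 * (h @ W2 - y) / len(x)     # backward (MSE loss)
    grad_W2 = h.T @ grad_pred
    grad_W1 = x.T @ ((grad_pred @ W2.T) * (h > 0))
    W1 -= 0.01 * grad_W1                      # weight update
    W2 -= 0.01 * grad_W2

print(forward(x[:1]))  # deploying the trained model = inference
```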
tuxRoller - Monday, January 22, 2018 - link
Btw, terrific article.
Death666Angel - Monday, January 22, 2018 - link
Does Huawei also have a flash/RAM business and does it manufacture LCDs? Do they make image sensors? I couldn't find any info on that readily available on the net. If they don't, it seems a bit weird to use "vertical integration" to only refer to the SoC inside a phone, one of the lesser parts of the overall experience these days. :)
legume - Monday, January 22, 2018 - link
The fact that you use something as incorrect as SPEC/GHz speaks volumes about how you know nothing about SPEC or real system performance.
Andrei Frumusanu - Monday, January 22, 2018 - link
SPEC/GHz is a metric used in the industry, including by ARM themselves, to showcase IPC. If you disagree with industry habits, you're free to do so.
skavi - Monday, January 22, 2018 - link
Where can I get the Master Lu app?
skavi - Monday, January 22, 2018 - link
nm, found it at ludashi.com. Does anyone know if you can set the language to English?
Also, does Master Lu have any plans to support NNAPI? I'd like to see how the Pixel performs.
WorldWithoutMadness - Monday, January 22, 2018 - link
Hmmm... I'm guessing the next chip is A73 and A55, and the one after that will be A75 and A55, just to give users a sense of 'upgrade'.
vladx - Tuesday, January 23, 2018 - link
You can't pair A73 with A55 or A75 with A53, only A75/A55 and A73/A72/A57 with A53.zeeBomb - Monday, January 22, 2018 - link
Holy crap... it's about time we got an Andrei post about SoCs!!!
69369369 - Tuesday, January 23, 2018 - link
Inb4 Huawei dropping driver support 1 year later.
lucam - Tuesday, January 23, 2018 - link
"I wouldn’t necessarily say that Apple is the SoC trend setter that other companies are trying to copy, as much as other vendors are coming to the same conclusion Apple has: to be able to evolve and compete in a mature ecosystem you need to be able to control the silicon roadmap yourself. Otherwise you fall into the risk of not being able to differentiate from other vendors using similar component stacks, or risk not being competitive against those vendors who do have vertical integration. Apple was early to recognize this, and to date Huawei has been the only other OEM able to actually realize this goal towards quasi-silicon independence."
LOL, you must be joking... when did you last try Apple products? I have seen NO A10X or A11 reviews from you guys... it's very difficult to believe what you can't prove...
SanX - Tuesday, January 23, 2018 - link
Andrei, the graphs are a mess and hard to comprehend.
ZolaIII - Tuesday, January 23, 2018 - link
This is one of the worst HiSilicon designs. So much dark silicon, and such a waste of it. It would be much better if they had MP'd the P6 DSP instead of that horrible overgrown machine-learning cluster block.
haplo602 - Tuesday, January 23, 2018 - link
I am normally not interested in phone ARM SoCs, but I wanted to have a look at this one, so I looked at the roadmap table on the first page... Now I am mighty confused... The Kirin 970 especially... So it does 2160p60 decode and 2160p30 encode... not capable of 2160p30 decode? I guess the 950 is better in this regard... The Camera/ISP line is also funny... all three have a dual 14-bit ISP, no clue which is better... again the 960 looks to be the best of the bunch...
Oh, and the interconnect row takes the crown... just plain Arm CCI on the top model? The CCI-550 on the 960 looks way superior...
For the future, either put up a reasonably detailed table that actually serves its purpose (provides an at-a-glance overview and a way to evaluate) or don't. Doing a half-assed job like this just lowers your credibility...
haplo602 - Tuesday, January 23, 2018 - link
Oh... not possible to edit comments... I have to proof-read my own drivel next time :-) Substitute the 950 with the 960 in that one appearance...
GreenReaper - Thursday, January 25, 2018 - link
If it can do 2160p60 Decode then I'd imagine that of course it can do 2160p30 Decode, just as it can do 1080p60/30 decode. You list the maximum in a category.yhselp - Tuesday, January 23, 2018 - link
What a wonderful article: a joy to read, thoughtful, and very, very insightful. Thank you, Andrei. Here's to more coverage like this in the future.
It looks like the K970 could be used in smaller form factors. If Huawei were to make a premium, bezel-less ~4.8" 18:9 model powered by the K970, it would be wonderful - a premium Android phone about the size of the iPhone SE.
Even though Samsung and Qualcomm (S820) have custom CPUs, it feels like their designs are much closer to stock ARM than Apple's CPUs. Why are they not making wider designs? Is it a matter of inability or unwillingness?
Raqia - Tuesday, January 23, 2018 - link
Props for a nice article with data-rich diagrams filled with interesting metrics, as well as the effort to normalize tests now and into the future with LLVM + SPECint 2006. (Adding the units after the numbers in the chart and "avg. watts" to the rightward-pointing legend at the bottom would have helped me grok it faster...) Phones are far from general-purpose compute devices and their CPUs are mainly involved in directing program flow rather than actual computation, so leaning more heavily on memory operations with the larger data sets of SPECint makes for a more appropriate CPU test than Geekbench. The main IPC uplift from OoOE comes from prefetching and executing in parallel the highest-latency load and store operations, and a good memory/cache subsystem does wonders for IPC in actual workloads. Qualcomm's Hexagon DSP has
It would be interesting to see the 810 here, but its CPU figures would presumably blow off the chart. A modem or WiFi test would also be interesting (care for a donation toward the aforementioned harness?), but likely a blowout in the good direction for the Qualcomm chipsets.
Andrei Frumusanu - Friday, January 26, 2018 - link
Apologies for the chart labels; I did them in Excel and it doesn't allow editing the secondary label position (watts after J/spec). The Snapdragon 810 devices wouldn't have been able to sustain their peak performance states for SPEC, so I didn't even try to run it.
Unless your donation is >$60k, modem testing is far beyond AT's reach because of the sheer cost of the equipment needed to do it properly.
jbradfor - Wednesday, January 24, 2018 - link
Andrei, two questions on the Master Lu tests. First, is there a chance you could run it on the 835's GPU as well and compare? Second, do these power numbers include DRAM power, or are they SoC only? If they do not include DRAM power, any chance you could measure that as well?
Andrei Frumusanu - Friday, January 26, 2018 - link
Master Lu uses the SNPE framework and currently doesn't have the option to choose the compute target on the SoC. The GPU isn't much, if any, faster than the DSP and is less efficient.
The power figures are the active power of the whole platform (total power minus idle power), so they include everything.
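(To make that measurement explicit, a tiny sketch of the "active power" derivation described here, with placeholder numbers rather than article data.)

```python
# Platform "active power" as described above: total measured power while
# running the workload, minus the device's idle power. Numbers are
# placeholders, not figures from the article.
total_power_w = 5.98    # device while running the benchmark
idle_power_w = 1.20     # same device, idle baseline
active_power_w = total_power_w - idle_power_w   # includes SoC, DRAM, the rest

runtime_s = 1200.0      # hypothetical benchmark runtime
energy_j = active_power_w * runtime_s           # energy of the kind the charts report
print(f"active: {active_power_w:.2f} W, energy: {energy_j:.0f} J")
```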
jbradfor - Monday, January 29, 2018 - link
Thanks. Do you have the capability of measuring just the SoC power separately from the DRAM power?
ReturnFire - Wednesday, January 24, 2018 - link
Great article, Andrei. So glad there is new mobile stuff on AT. Fingers crossed for more 2018 flagship/SoC articles!
KarlKastor - Thursday, January 25, 2018 - link
"AnandTech is also partly guilty here; you have to just look at the top of the page: I really shouldn’t have published those performance benchmarks as they’re outright misleading and rewarding the misplaced design decisions made by the silicon vendors. I’m still not sure what to do here and to whom the onus falls onto."That is pretty easy. Post sustained performace values and not only peak power. Just run the benchmarks ten times in a row, it's not that difficult.
If in every review sustained performance is shown, the people will realize this theater.
And it is a big problem. Burst GPU performance is useless. No one plays a game for half a minute.
Burst CPU performance ist perhaps a different matter. It helps to optimize the overall snappiness.
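(A trivial sketch of the kind of back-to-back run being suggested here; the scores are made-up placeholder numbers, not measurements.)

```python
import statistics

# Hypothetical fps scores from ten consecutive runs of the same GPU
# benchmark on one phone; a throttling device settles at a lower level.
scores = [62.0, 60.5, 55.0, 48.0, 42.5, 40.0, 39.5, 39.0, 39.0, 38.8]

burst = scores[0]                            # first, "cold" run
sustained = statistics.mean(scores[-3:])     # steady state after throttling
loss = 1 - sustained / burst
print(f"burst: {burst:.1f} fps, sustained: {sustained:.1f} fps "
      f"({loss:.0%} lost to throttling)")
```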
Andrei Frumusanu - Friday, January 26, 2018 - link
I'm planning to switch to this in future reviews.
KarlKastor - Friday, January 26, 2018 - link
Great, good news.
KarlKastor - Thursday, January 25, 2018 - link
What I would love to see is a comparison with Imagination's GPU IP. Is there a chance to run this test course with an iPhone 7?
Well, Apple's GPU isn't uninteresting either. An iPhone 8 would be great too, so one can see what improvement they achieved with their own GPU.
Andrei Frumusanu - Friday, January 26, 2018 - link
Whilst I have all the new iPhones, I can only post performance data, as we cannot open up Apple review devices for power measurement. I intend to do this in the future once I get some new tools and a device I can dismantle.
KarlKastor - Friday, January 26, 2018 - link
I see. The others want a whole device for testing too. ;)
levizx - Saturday, January 27, 2018 - link
Great article; a few grammar mistakes here and there and a couple of factual mistakes such as "Kirin 970’s G71MP12".
Is it possible to get a Meizu PRO 7 Plus with the X30? It'd be interesting to see how the PVR XT7P stacks up against the G72.
hescominsoon - Friday, February 2, 2018 - link
There's a reason you do not see Exynos in the US. It is not because Samsung has not convinced its own mobile division... it is legally bound to use Qualcomm here in the US: https://www.reddit.com/r/Android/comments/71rjyx/w... The author might want to do a Google search on that next time. Otherwise, good article.