Also, the CDMA variants of the S6 have a different modem, the Qualcomm MDM9635. Do you think it would have different power usage than Samsung's in-house modem?
There is. And it says just that. I think it's almost 2 years old though, so you might have to look through older articles if you want to know more about it. You can't expect them to write about Intel in an article titled "The Samsung Exynos 7420 Deep Dive".
http://www.realworldtech.com/forum/?threadid=15103...

By: Linus Torvalds (torvalds.[email protected]ux-foundation.org), July 4, 2015 2:37 pm
Room: Moderated Discussions

Wilco (Wilco.Dijkstra.[email protected]world.com) on July 4, 2015 11:41 am wrote:
> Many results don't make sense indeed. I wonder if the benchmarks were forced to run on
> a specific CPU at a fixed frequency - without that the results will be totally bogus.
I don't think they actually ran the benchmarks at all.
The numbers for some of the oddest ones are suspicious. Look at the 7420 arm64 numbers for gcc, eon and perlbmk: 2000, 2500 and 4000 respectively. Yeah, round numbers like that happen, but three ones like that that just happen to be that round?
So I wonder what "The scores we publish are only estimates" really means. It could mean that they want to make it clear that it's not an official Spec submission, and they kind of try to imply that.
But it could mean that they are just marketing estimates from some company, and have never seen any actual real benchmark run, or at best were run in some simulation with a perfect memory subsystem. They even say that they haven't been able to run 64-bit benchmarks due to lack of software availability, but then they quote specint2000 numbers anyway? Where did they come from? That's very unclear.
And gcc getting the same nice round score on a53 and a57? Yeah, not likely. And perlbmk on a53 has another suspiciously round score.
Or maybe it's real, and they just happen to be rounded to even thousands (or halves), and the fact that they seem to make no sense is just "that's life, deal with it".

I don't believe it for a second.

Linus
We have responded to his concerns. These benchmarks are independently run and not provided by either ARM or any division of Samsung. We have re-run the benchmarks and verified that the benchmark is placed on the right cluster. However, we will look into recompiling our version to see if we can get more representative values.
The power efficiency improvements from FinFET are really huge, which makes me really excited about the next-gen Apple SoC built on TSMC's 16nm FF process!

Couldn't an Apple SoC deep dive looking into power consumption be made, then? That would be super interesting.
An Apple SoC deep dive as detailed as what was done here just isn't possible due to the closed nature of iOS. For example the CPU power curves just aren't feasible as there's no power management control for CPU frequency. We could do maximum power consumption and the GFXBench tests, but that's about as far as it gets.
The new power saving mode downclocks the SoC, so you could have an extra data point, depending on what else this mode changes. And who knows, maybe it can be edited?
In theory TSMC's 16nm FF+ process should not allow for a shrink of the die, as it is the same 20nm process with FinFET support. Power consumption should go down, but density will not.
The power doesn't look that great; the A57 seems to allow only 300-350MHz higher clocks, granted it's not a clean shrink. It looks good here because on 20nm they pushed the clocks way too high.
Insofar as rumors can be believed, the bulk of A9s are scheduled to be produced by Samsung, presumably on this process. It seems strange to have Apple design/layout everything twice for the same CPU, so if these same rumors (30% going to TSMC) are correct, presumably that means the A9X will be on TSMC.

As for characterizing Apple CPUs, while there are limits to what one can learn (e.g. in the voltage/power tradeoffs), there is a LOT which can be done but which, to my disappointment, has still not been done. In particular, if someone wanted, I think there's scope for learning an awful lot from carefully crafted micro-benchmarks. Agner Fog has given a large number of examples of how to do this in the x86 space, while Henry Wong at stuffedcow.net has done the same for a few less obvious parts of the x86 architecture and for GPUs.
It strikes me as bizarre how little we know about Apple CPUs even after two years. The basic numbers (logical registers, window, ROB size) seem to about match Intel these days, and the architecture seems to be 6-wide with two functional clusters. There appears to be a loop buffer (but how large?). But that's about it. How well does the branch prediction work, and where does it fail? What prefetchers are provided (at L1I, L1D, L2, L3)? Do the caches do anything smart (like dead block prediction) for either performance or power? Does the memory manager do anything smart (like a virtual write queue in the L3)? Etc, etc, etc.
Obviously Apple doesn't tell us these. (Nowadays the ONLY company that does is IBM, and only in pay-walled articles in their JRD.) But people write the micro benchmarks to figure this out for Intel and AMD, and I wish the same sort of enthusiasm and community existed in the ARM world.
Is the heterogeneous processing (HMP) that allows all 8 cores to work together active? From the numbers in the various benchmarks it seems this feature is not used. What I would like to know exactly is whether the benchmark numbers of this SoC can be directly compared to SoCs with only 4 cores, like the incoming Qualcomm Snapdragon 820, which is based on a custom architecture with "only" 4 cores and not a big.LITTLE configuration.

Because with 8 cores active (or what they should be with HMP), results are not even near 4x the score of a single core. So I wonder if those 8 cores are really active, and whether they are of any real use if, to keep power consumption adequate, the frequencies of the big cores get limited.
All the cores are always active and they do not get limited other than in thermal stress situations. I didn't publish any benchmarks comparing single vs multi-core performance so your assumption must be based on something else. Having X-times the cores doesn't mean you'll have X-times the performance, it completely depends on the application.
It's still a perfectly valid comparison to look at traditional quad-cores vs bL octa-cores. In the end you're looking at total power and total performance and for use-cases such as PCMark the number of cores used shouldn't be of interest to the user.
It does have 4 cores, but I guess they are in a big.LITTLE configuration too. We will see shortly. HMP is active, but I am not sure if every benchmark app uses all the cores.
I've waited long for this piece. Thanks for the hard work.
Now for the rant. All I've read in this article leads me to think that the tech blog community is partially to blame for the stupid benchmark and resolution race that has negative effects on consumer perception. Some bloggers (NOT consumers) now swear by 1440p, when the difference is so minimal you literally have to be 1 inch from the screen to even slightly tell; it's nowhere near the transition from 720p to 1080p on ~5". A more balanced SoC, with better thermal and voltage limits, and a less stressful 1080p screen would have made the GS6 a much better device in both performance and battery life...
I see the GS6 sell like hotcakes every time I'm at the mall or electronics shops, but no one, NO ONE, knows (or cares) about 14nm or 1440p. All people care about is design and build. THEN when it starts to sink in, they start caring about fast, consistent performance, battery life, a good speaker, and a nice (BRIGHT) screen (which 1080p is already more than capable of delivering, and till now, Apple is STILL getting away with 720p).
The only one who "gets it" (except for the battery life) is Apple. This is mainly for the sole reason that Apple can afford to rely more on its brand name (and a couple of stupid buzzwords) than on numbers and benchmarks. But GOD do I HATE iOS, their ecosystem, and their limiting ways of doing things.
Yup. It's best to skip all this and wait for what they'll bring with the GS7. Samsung tried hard to merge what I believe is an engineering marvel that is the GS5 with the great design and aesthetics of the iPhone 6 and, IMHO as a Samsung fan, didn't totally deliver with the GS6.
Apple's screen res is about costs; they couldn't go from 4-inch devices to 4.7 and 5.5 inches with higher res without harming the margins. Plus they always save some upgrades for the next cycle to give people a reason to buy.
Sony "gets it" as well. The Z3 is only 1080p, and the Z3 Compact is 720p. Both use the same SoC, running at the same speeds. So you can get a large screen with good performance and battery life, or you can get a smaller screen with better performance and battery life.
It's just too bad they don't market their phones as much in North America as they used to.
Fabulous review! Way above my pay-grade, but nice to read and understand what makes the differences that end-users like myself experience.
One question, Andrei. I know this may not be up your alley, but is there any reason why other OEMs aren't buying the Exynos 7420, with the Snapdragon 810 now confirmed as a miss? Is it that Samsung has its hands full just producing it for its sibling GS6s? (Or perhaps also producing for Apple?) Or is there a steep price difference? Or are there inherent reasons for OEMs like Xiaomi or Meizu or even HTC not to use Samsung parts?
Frankly, I don't know. I tried to ask Samsung a similar question but they refused to comment on customer relations. Meizu so far seems to be the only major vendor consistently using Exynos parts but as to why we haven't seen other vendors adopt them can be attributed to anything going from pricing to volume availability. Only the companies themselves know the details of these contracts.
This is Samsung's chance to eat Qualcomm's lunch. Close down node manufacturing for others (including Apple) and be like Intel: either use Exynos or be satisfied with inferior nodes from other fabs.
And that means starting to compete on process alone, like Intel did. That is, if you force others to go to other foundries, you have to be sure you have the best one; if TSMC comes up with a better process (like a 16nm+ revision), you have just thrown all your customers to your fab competitors, doing double the damage (or total damage). Or just think if Intel tomorrow suddenly opened up to ARM customers in order to saturate its now-rusting 14nm machinery. Samsung would be in great trouble after that eventual (and IMHO stupid) move. Investing in process technology is really expensive, and there are other foundries capable of doing so. Samsung can't be sure of always being the best one on the market, and it would have to invest tons of billions of dollars every year to make sure it stays number one (for SoCs, of course).
Samsung is part of a common platform alliance/agreement with GloFo, so while they could lock down and close others out, GloFo would not, so there's little commercial benefit from doing so.
They could of course coerce GloFo into doing the same, but that lands them into hot water with regulatory watchdogs like the FTC regarding anti-competitive practices and collusion, which while Samsung wouldn't really mind (no, really), GloFo would.
How long will it take for Samsung's process to trickle down to AMD via GloFo? Could it bridge the efficiency gap to Nvidia/Intel? Holding out hope that ATI/AMD will be competitive once more.
No, AMD won't have a technology advantage over Nvidia on next-gen GPUs. Currently it looks like Nvidia will choose Samsung for their next node, and as Samsung and GloFo have some kind of alliance and share processes (GloFo licenses some Samsung processes AFAIK), the technology should be very similar for both. Yet AMD should have a small HBM advantage; they have better relations with Hynix (and helped develop HBM) than Nvidia.
There won't be an HBM advantage from a technological point of view; at best AMD could get slightly better pricing, but even that is unlikely since Nvidia has much higher volume. The first-gen HBM was late, and both Nvidia and AMD had plenty of time to prepare for it. As for the process, we don't really know what foundry each will use and what version of the process. On the GPU side both are more likely to go TSMC, or use both. On the CPU side AMD will likely go GloFo, but not this early version of the process, and Intel might go 10nm not long after AMD has 14nm. On 10nm, TSMC and Samsung do seem to be catching up with Intel, but I doubt AMD will have 10nm early.
Can you share the process and size for the modem?

For the A53 power scaling, maybe the small L2 cache is starving the cores? On A57 clocks, I guess they could go higher, but here they wanted to keep the device thin (with a small battery) for marketing. They couldn't go to 1.9GHz since the SD810 was supposed to be there, but they should have gone just a bit higher to make it look better vs. the iPhone.

On the GPU clocks debate, a smaller GPU would be better than lower clocks cost-wise, since in the real world chasing ideal efficiency is not feasible; although since they use it only in their own devices, they have more room to choose efficiency vs. costs. How does GPU throttling correlate with CPU throttling, and even with which cores are active?

In GPU benchmarking the focus has to be, like on PC, on actual games, not synthetics, but AT just refuses to do that, even a little bit. Same on the CPU side, so the SoC benchmarking section in reviews is just skippable. When estimating DRAM power consumption, have you factored in that the S5 has only 2GB?

Anyway, nice article, and glad you've mapped the SoC; we don't usually see anything besides the main blocks.
"Currently I’m not aware of any semiconductor vendor following this method in the mobile space as there simply isn’t the same opportunity to recycle chips into lower performing SKUs."
Thanks! An awesome review of an amazing SoC (or process node). This article has more parts than I can chew, and there's nothing else on my mind to ask except maybe for comparisons with Intel's 14nm products, e.g. Atom and Core M, as they have similar thermal budgets. The advantage leaves Samsung with little competition at the moment, similar to the position Intel enjoys in the PC world. This will benefit them in the short and long term.
The GPU throttling is a concern. Is there any current game that can reproduce it? If not, there's not much of a problem. I actually like this kind of overboost as it allows the user to get excellent performance from a game or application. Or is it time for SoCs to have heat sinks/spreaders? I still recall your Nexus 5 test with the cold compress, which allowed constant top speeds.
What's the point of the high "overboost" if the SOC can only sustain it for 2 minutes before it has to start throttling because of heat? If a graphically intense game runs on 60fps while in the boost mode, all you'll notice is a drop to 30fps during a 20-minute gaming session. "Yay I got smooth framerates for a few minutes and I lost a ton of battery life during that!"
It is useful if the game or application does not saturate the SoC. Some or many games are not as demanding as benchmarks. It also allows a very fluid UI and fast loading of content. Battery life during gaming is always an issue with high-end smartphones.
Really well done article, and the Exynos really looks without peer (at least in the android space, if not anywhere [the A8 is still pretty impressive]) right now.
Andrei, your mystery SoC block is most likely a DSP. It makes a lot of sense being next to the audio block in your diagram. DSPs are also used to offload some compute when doing things like music playback and making a call, so its proximity to the A53 cluster makes sense too.
Please do not read too much into where I put the audio block on the diagram; that one was part of the blocks which I couldn't physically locate, so I just chose a fitting space to put it in a logical manner. The top right quadrant and the ISP are *very* abstracted and not representative of physical location. All audio is handled either by the A5 itself or by the main CPUs. I'm pretty certain the SoC doesn't have any separate DSP for any task, so it most likely is something else.
Andrei, you mentioned doing some undervolting and modifying some of the GPU frequency states in your article. I found some of the power consumption gains you posted very interesting. Would you recommend that an end user (such as myself) that has a rooted device attempt to do these things? If so, what exactly must be done in order to undervolt? (I'm assuming a custom kernel and/or ROM is required..?)
You need to flash a custom kernel on the device. I can't really point you to where you can find such resources, due to conflict of interest and myself being active in the modding community, but if you search for it I'm sure you'll quickly find guides and download links to achieve what you want.
Great job. Incredible detail. For the SD810 deep dive, I assume you'll be looking at a v2.1. I would really like you to cover the important questions of the 810's heat problems, and whether v2.1 really fixes them. It's annoying to see QCOM's marketing VP pretend that the overheating problems are imaginary, and I don't think anyone really believes that v2.1 completely fixes the problem. I'm disappointed that there's just nothing else on the market for OEMs; the OnePlus 2 basically had no choice: either a MediaTek or a QCOM. Unfortunately, rumours state that the 2015 Nexus 6 will ship with the now-ridiculed SD810 :(
At least the SD810 in the OnePlus 2 will only be clocked at 1.8GHz max, and they use graphene (IIRC) to spread the heat more effectively across the entire back cover of the device. I think I also read the OP2 will have the v2.1 version of the SD810. Let's just hope they've done a good job optimizing their firmware and software. This article is amazing; I think every smartphone manufacturer should read it, they'd learn a lot. Great job, Andrei. Quite a lot of typos/breaks in thought through a sentence unfortunately, you must've stayed up late many a night writing this :p
I find "overclocking" the GPU and CPU both useful for general usage. At least video and picture processing apps that resize/cut/process video and photos can benefit from extra GPU oomph, while actual processing time stays short enough for the process to complete before thermal limits kick in.
Of course, this depends solely on the app and whether it uses GPU processing.
Problem with this is that even my old phone on 28nm uses >50% of its battery for the screen. So there is much more to be gained from better screens than from a better SoC and process.
That's debatable. DisplayMate puts average power consumption for the screen on the S6 at 0.65W and max power at 1.2W. http://www.displaymate.com/Galaxy_S6_ShootOut_1.ht... The SoC, the RAM, and the connectivity use plenty of power, and the screen can be turned off a lot, so it uses a lot less power than people seem to think.
Correction on the S6 XDA scene: there are only debloated, deodexed, modded stock ROMs, some custom kernels, and that's it. No custom ROMs like CyanogenMod, Paranoid, AOSP, or AOKP. The only hope is a stock theme from the Samsung app store and an AOSP-themed launcher.
Hi Andrei. I can't remember if I've ever remembered to comment on your articles since you started here, but this one was so cool I had to finally get around to it. To cut a long story short, I loved your kernels for the S3/Note 2 and I've really missed them since I've been on the S800 Note 3. So as always, thanks a LOT for all your amazing work over the years.
I have to give major props for your investigation of undervolting the SoC. I remember you having an argument with someone on XDA where you stated something like "Undervolting is literally the only thing we can do to improve battery life without affecting performance, so let's undervolt everything", and I've always agreed completely with this (sadly my Note 3 does not UV well and simply will not behave consistently). So it was very interesting to read the UVing results in your deep dive. And also quite shocking how much energy a -75mV UV can save!
And again as always your writing is fantastic: Very easy to read and understand and very informative for people with only a small background in basic computing (eg, building/overclocking PC hardware) and much of what I know about SOCs today is down to your very informative posts and articles.
Please keep up the hard work, but of course you deserve a break more than anybody so I hope you don't have to work too hard for these amazing articles! :P
(And of course, please dear lord in the sky, can the Note 5 have a memory card slot so I can enjoy this SoC!)
How did you perform the power measurement? Did you hook up the battery to a Monsoon Power Meter or directly instrument the motherboard? It would be nice addition to discuss/show this in the article.
I think he said in the article that he hooks up the phone to an external power supply. "To get the numbers, we hook up the Galaxy S6 to an external power supply and energy meter."
I was asking for a more in-depth explanation. There are several different places a mobile device can be hooked up to for measuring CPU power -- as well as different techniques for doing so. Depending on where you instrument you will see different results. For example measuring at the battery includes the power regulator and each of the SoC subsystems has a different power domain. Also given that it's an internal battery, it is much harder to instrument than a removable battery.
Given he took apart the phone, the internal battery isn't too much of an issue, and as for wiring it up, again, not much of an issue if you have access to a bench power supply - set it to ~3.8V DC, wire it in, and let 'er rip.
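For what it's worth, going from such a setup to power/energy figures is just sampling and numerical integration. A minimal sketch with made-up sample values (none of these numbers come from the article):

```python
# Hypothetical samples from a bench supply feeding the phone at a fixed voltage.
voltage_v = 3.8                       # supply set-point, in volts
current_a = [0.42, 0.55, 0.61, 0.48]  # sampled load current, in amps
dt_s = 0.5                            # sampling interval, in seconds

# Instantaneous power at each sample, then rectangle-rule integration to energy.
power_w = [voltage_v * i for i in current_a]
energy_j = sum(p * dt_s for p in power_w)

print(f"avg power: {sum(power_w) / len(power_w):.2f} W, energy: {energy_j:.2f} J")
```

A real rig would sample far faster and subtract any regulator losses, but the arithmetic is the same.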
Your percentage math is incorrect: going from 113 mm² to 78 mm² is a 31% shrink, not a 44% shrink. This is a classic mistake: a decrease percentage must be computed with respect to the initial quantity, not the final one.
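A quick way to check (plain arithmetic; the 113 mm² and 78 mm² figures are from the comment above):

```python
# Die area before and after the shrink, in mm^2.
old_area = 113.0
new_area = 78.0

# Correct: the decrease measured against the initial area.
shrink = (old_area - new_area) / old_area    # ~0.31, i.e. a 31% shrink

# The mistake: measuring the same decrease against the final area.
mistaken = (old_area - new_area) / new_area  # ~0.45

print(f"correct: {shrink:.0%}, mistaken divisor: {mistaken:.0%}")
```

Dividing by the final area overstates the shrink at roughly 45%, which is presumably where the ~44% figure came from.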
"Another CCI-connected block is a new kind of IP that we haven’t seen before in the mobile space: a memory compressor. "
My assumption, when Apple added page compression to Mavericks, was that they did the same thing to iOS7 at the same time. The internet is kinda vague on whether this happened, but I'm seeing more articles that agree than don't. Likewise it's unclear whether they used hardware for this.
With this background, it is noteworthy that at WWDC Apple made a (reasonably) big deal about a new lossless compression library for both iOS and OSX. I take this as a signal (maybe I'm wrong) that they HAVE added a compression block to the A9 (it's a pretty obviously useful addition, after all). It's possible that there was an earlier block, but with a clumsy usage model and/or only one coded and/or other limitations; but the new block is robust enough that it can be exposed as a generic API.
Another way Apple could use such compression HW is in file system compression. HFS+ has had file system compression for, what, ten? years now, and while it's only used on OSX at OS installation (various system files are compressed at installation, but nothing is dynamically compressed), that's a backward compatibility issue that doesn't matter on iOS.
Of course they could compress in SW today (and maybe they will on older devices) but HW is clearly lower power and probably faster. (And of course you can do this sort of thing at the flash controller level; but doing it higher up in the OS gives you much more power. Files that compress really small can be stuck directly in the catalog, with no allocation block at all; and already compressed files --- JPG, MP3, that sort of thing --- can be skipped over, avoiding wasting power trying to compress them further.)
How did you learn THIS code name? This appears to be the first mention of it on the web? Is Anand feeding you insider information!?
Should we take this to mean (a) The A9 will be called Hurricane? (b) That it will not be a major revision, but basically a tweak of the A8 (same basic architecture)?
Seeing these perf/W curves really shows that those SoCs that make use of highly clocked A53s are making a mistake. Pretty clearly, the smarter solution would be to include at least one A57/A72.
Gah! Thanks for the tip. Well, that certainly brings things a bit closer to 800MHz. So, given the frequency range of 900MHz, wouldn't 850MHz represent 50% load?
Yes, if you're looking at average frequency over time instead of discrete frequency. But there is no 850MHz state, so you instead have a certain load percentage at the 800MHz state.
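To make that distinction concrete, the "average frequency" is just a time-weighted mean over the discrete states, in the spirit of the kernel's cpufreq time_in_state statistics. A hypothetical sketch with made-up residency numbers:

```python
# Hypothetical per-state residency: {frequency_kHz: time_units_spent_there}.
# There is no 850 MHz state; an "850 MHz average" just means the core split
# its time evenly between the 800 MHz and 900 MHz states.
time_in_state = {800_000: 50, 900_000: 50}

total_time = sum(time_in_state.values())
avg_khz = sum(f * t for f, t in time_in_state.items()) / total_time

print(avg_khz / 1000)  # time-weighted average frequency in MHz
```

The governor only ever sets the discrete states; the in-between value exists only as an average over time.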
Just asking: when is the Galaxy Alpha review coming, as was promised in the Galaxy Note 4 review? You have covered the chipset well, although I would really like to know about the camera sensor, especially since the quality is so good that reviewers claim it's the same 16MP ISOCELL sensor as in the Galaxy S5.
I dropped it due to Samsung also basically abandoning the device back in January. I wouldn't recommend the device due to the low screen resolution and bad calibration.
The camera sensor is a 12MP variant of the same S5K2P2 S5 sensor, so those claims are not unwarranted.
Great article. I always love these deep dives instead of superficial YouTube and written reviews. Nothing comes close to the joy of reading an AnandTech review.
@andrei: if you have time, please push out a deep dive or mini-article for the Exynos 5430 and MT6595.
I have a small question: I have a Nexus 6, which has the SD805 inside. I hate how the CPU frequency is 2.7GHz, because it's not real - there's no real difference in the real world. I want to know the program you use to see when the CPU throttles, so I can clock the CPU down to a lower frequency and find the best max frequency to set it to. That way I'll surely have it at a lower voltage, and I can give the difference to the GPU instead; I think it won't automatically decrease as much as it did with the CPU clocked at 2.7GHz at 1070mV.

Now I am running at 2GHz and 930mV and I really can't feel a difference. Thank you for that great article.
If you are running Android 5.0 or newer, then you can go into Developer Options and enable the "CPU Info" option. That will put an X+1 (where X is the number of CPU cores in the SoC) line overlay in the top-right of the screen.
Line 1: CPU temperature
Line 2: CPU core 0 governor, current frequency
Line 3: CPU core 1 governor, current frequency
Line 4: CPU core 2 governor, current frequency
Line 5: CPU core 3 governor, current frequency
...and so on.
Very handy to watch how different governors and hotplug systems work while you use the phone.
If I were to underclock the CPU to a lower frequency, let's say the A57 cluster to 1.9GHz, would that give me better battery life, since the A57 runs more efficiently at 1.9GHz?
hans_ober - Monday, June 29, 2015 - link
Superb article! The reason AnandTech stands out from other sites!
III-V - Monday, June 29, 2015 - link
Andrei puts out very good stuff.

ddriver - Monday, June 29, 2015 - link
Hope he didn't make this chart... http://images.anandtech.com/doci/9330/a57-power-cu...
Y axis looks "funky"... rounding error mayhaps?
Andrei Frumusanu - Monday, June 29, 2015 - link
Whoops! The decimal was truncated, let me fix that right away.

Refuge - Monday, June 29, 2015 - link
Great article, love the deep dives on this site. :D

witeken - Monday, June 29, 2015 - link
A bit disappointed there's not a single word about Intel, which also has a FinFET SoC.

BillBear - Thursday, July 2, 2015 - link

The level of detail in this piece is wonderful.
mmrezaie - Monday, June 29, 2015 - link
Thanks, a very nice article.

kspirit - Monday, June 29, 2015 - link
I like how power efficient this thing is. Thanks for the review!
Badelhas - Monday, June 29, 2015 - link
Great article, as usual. The big question is whether the performance is noticeable under Samsung's TouchWiz, which tends to lag like hell.
theduckofdeath - Monday, June 29, 2015 - link
What?! You didn't manage to get it in there first? Were we all sleeping over at the troll office? :D
nathanddrews - Monday, June 29, 2015 - link
Nice one, English Bob!
GC2:CS - Monday, June 29, 2015 - link
The power efficiency improvements from FinFET are really huge, which makes me that much more excited about the next-gen Apple SoC built on TSMC's 16nm FF process! Couldn't an Apple SoC deep dive looking into power consumption be made then?
That would be super interesting.
Andrei Frumusanu - Monday, June 29, 2015 - link
An Apple SoC deep dive as detailed as what was done here just isn't possible due to the closed nature of iOS. For example the CPU power curves just aren't feasible as there's no power management control for CPU frequency. We could do maximum power consumption and the GFXBench tests, but that's about as far as it gets.
jjj - Monday, June 29, 2015 - link
The new power saving mode downclocks the SoC, so you could have an extra data point, depending on what the other actions in this mode are. And who knows, maybe it can be edited?
CiccioB - Monday, June 29, 2015 - link
In theory TSMC's 16+nm PP should not allow for a shrink of the die, as it is the same 20nm PP with FinFET support. Power consumption should go down, but density will not.
jjj - Monday, June 29, 2015 - link
The power doesn't look that great; for the A57 it seems to allow 300-350MHz higher clocks, granted it's not a clean shrink. It looks good here because on 20nm they pushed the clocks way high.
name99 - Monday, June 29, 2015 - link
Insofar as rumors can be believed, the bulk of A9's are scheduled to be produced by Samsung, presumably on this process. It seems strange to have Apple design/layout everything twice for the same CPU, so if these same rumors (30% going to TSMC) are correct, presumably that means the A9X will be on TSMC. As for characterizing Apple CPUs, while there are limits to what one can learn (eg in the voltage/power tradeoffs), there is a LOT which can be done but which, to my disappointment, has still not been done. In particular if someone wanted, I think there's scope for learning an awful lot from carefully crafted micro benchmarks. Agner Fog has given a large number of examples of how to do this in the x86 space, while Henry Wong at stuffedcow.net has done the same for a few less obvious parts of the x86 architecture and for GPUs.
It strikes me as bizarre how little we know about Apple CPUs even after two years.
The basic numbers (logical registers, window, ROB size) seem to about match Intel these days, and the architecture seems to be 6-wide with two functional clusters. There appears to be a loop buffer (but how large?) But that's about it.
How well does the branch prediction work and where does it fail?
What prefetchers are provided? (at I1, D1, L2, L3)
Do the caches do anything smart (like dead block prediction) for either performance or power?
Does the memory manager do anything smart (like virtual write queue in the L3)?
etc etc etc
Obviously Apple doesn't tell us these. (Nowadays the ONLY company that does is IBM, and only in pay-walled articles in their JRD.) But people write the micro benchmarks to figure this out for Intel and AMD, and I wish the same sort of enthusiasm and community existed in the ARM world.
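The kind of probing name99 describes can be sketched even in Python, under the caveat that the interpreter blunts the effect (real probes like Agner Fog's are written in C or assembly); sizes and step count below are arbitrary illustration values:

```python
import random
import time

# A miniature microbenchmark: a pointer chase through a shuffled
# permutation defeats hardware prefetchers, so per-step time tracks
# memory latency as the working set outgrows each cache level.
def chase_ns(n_elems, steps=200_000):
    perm = list(range(n_elems))
    random.shuffle(perm)
    idx = 0
    t0 = time.perf_counter()
    for _ in range(steps):
        idx = perm[idx]  # each load depends on the previous one
    return (time.perf_counter() - t0) / steps * 1e9

for n in (1 << 10, 1 << 16, 1 << 20):
    print(f"{n:>8} elements: {chase_ns(n):6.1f} ns/step")
```

On real hardware the per-step time rises visibly at each cache-size boundary, which is exactly how cache hierarchies get reverse-engineered.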
SunnyNW - Wednesday, July 1, 2015 - link
Believe word on the street is the A9 will be Sammy 14nm and the A9X TSM 16nm+
SunnyNW - Wednesday, July 1, 2015 - link
Please ignore this comment, should have read the rest of the comments before posting since Name99 already alluded to this below. Sorry
CiccioB - Monday, June 29, 2015 - link
Is the heterogeneous processing that allows all 8 cores to work together active? Seeing the numbers of the various benchmarks, it seems this feature is not used.
What I would like to know exactly is whether the benchmark numbers of this SoC can be directly compared to an SoC with only 4 cores, like the incoming Qualcomm Snapdragon 820, which is based on a custom architecture with "only" 4 cores and not a big.LITTLE configuration.
Andrei Frumusanu - Monday, June 29, 2015 - link
HMP is active. Why do you think it seems not to be used?
CiccioB - Monday, June 29, 2015 - link
Because with 8 cores active (or what should be active with HMP), results are not even near 4x the score of a single core. So I wonder if those 8 cores are really active, and whether they are of any real use if, to keep consumption adequate, the frequencies of the big cores get limited.
Andrei Frumusanu - Monday, June 29, 2015 - link
All the cores are always active and they do not get limited other than in thermal stress situations. I didn't publish any benchmarks comparing single vs multi-core performance so your assumption must be based on something else. Having X-times the cores doesn't mean you'll have X-times the performance, it completely depends on the application. It's still a perfectly valid comparison to look at traditional quad-cores vs bL octa-cores. In the end you're looking at total power and total performance and for use-cases such as PCMark the number of cores used shouldn't be of interest to the user.
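The point that X cores don't give X-times the performance is Amdahl's law; a quick illustration (the parallel fractions are arbitrary examples):

```python
# Amdahl's law: with only a fraction p of the work parallelizable,
# N cores speed things up by 1 / ((1 - p) + p / N) -- which is why
# an octa-core is nowhere near 8x a single core on most workloads.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9, 0.95):
    print(f"p={p}: 4 cores -> {speedup(p, 4):.2f}x, 8 cores -> {speedup(p, 8):.2f}x")
```

Even with 90% of the work parallelizable, 8 cores only yield about 4.7x.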
Refuge - Monday, June 29, 2015 - link
I would hazard a guess that thermal throttling has something to do with part of it.
[email protected] - Monday, June 29, 2015 - link
It does have 4 cores but I guess they are in big.LITTLE configuration too. We will see shortly. HMP is active but I am not sure if every bench app uses all the cores.
lilmoe - Monday, June 29, 2015 - link
I've waited long for this piece. Thanks for the hard work. Now for the rant. All I've read in this article leads me to think that the tech blog community is partially to blame for the stupid benchmark and resolution race that has negative effects on consumer perception. Some bloggers (NOT consumers) now swear by 1440p when the difference is so minimal, you literally have to be 1 inch from the screen to even slightly tell, and nowhere near the transition from 720p to 1080p on ~5". A more balanced SoC, with better thermal and voltage limits, and a less stressful 1080p screen would have made the GS6 a much better device in both performance and battery life...
I see the GS6 sell like hotcakes every time I'm at the mall or electronics shops, but no one, NO ONE, knows (or cares) about 14nm or 1440p. All people care about is design and build. THEN when it starts to sink in, they start caring about fast, consistent performance, battery life, a good speaker, and a nice (BRIGHT) screen (which 1080p is already more than capable of delivering, and till now, Apple is STILL getting away with 720p).
The only one who "gets it" (except for the battery life) is Apple. This is mainly for the sole reason that Apple can afford relying more on its brand name (and a couple of stupid buzzwords) rather than numbers and benchmarks. But GOD do I HATE iOS, their ecosystem, and limiting ways of doing things.
Yup. It's best to skip all this and wait for what they'll bring with the GS7. Samsung tried hard to merge what I believe is an engineering marvel that is the GS5 with the great design and aesthetics of the iPhone 6 and, IMHO as a Samsung fan, didn't totally deliver with the GS6.
gnx - Monday, June 29, 2015 - link
+1 And hopefully they'll revamp Touchwiz too.
[email protected] - Monday, June 29, 2015 - link
I guess VR was one of the reasons for 1440p.
larryvand - Monday, June 29, 2015 - link
VR on a 720p screen is a total FAIL. My S6 Edge does VR like nothing else. Best phone on the planet.
jjj - Monday, June 29, 2015 - link
Apple's screen res is about costs, they couldn't go from 4 inch devices to 4.7 and 5.5 with higher res without harming the margins. Plus they always save some upgrades for the next cycle to give people a reason to buy.
phoenix_rizzen - Monday, June 29, 2015 - link
Sony "gets it" as well. The Z3 is only 1080p, and the Z3 Compact is 720p. Both use the same SoC, running at the same speeds. So you can get a large screen with good performance and battery life, or you can get a smaller screen with better performance and battery life. It's just too bad they don't market their phones as much in North America as they used to.
lilmoe - Monday, June 29, 2015 - link
Make Sony put a good OLED screen on their phones and I'm in. The Z3 is an amazing phone, yes, but damn that screen knows NO blacks.
YoloPascual - Monday, June 29, 2015 - link
TLDR; Mediatek the real MVP
Refuge - Monday, June 29, 2015 - link
lol fuck it +1
gnx - Monday, June 29, 2015 - link
Fabulous review! Way above my pay-grade, but nice to read and understand what makes the differences that end-users like myself experience. One question, Andrei. I know this may not be up your alley, but any reason why other OEMs aren't buying the Exynos 7420, with Snapdragon 810 now confirmed as a miss? Is it that Samsung has its hands full with just production for its sibling's SGS6s? (Or perhaps also producing for Apple?) Or is there also a steep price difference? Or is it that there are inherent reasons for OEMs like Xiaomi or Meizu or even HTC to not use Samsung parts?
Andrei Frumusanu - Monday, June 29, 2015 - link
Frankly, I don't know. I tried to ask Samsung a similar question but they refused to comment on customer relations. Meizu so far seems to be the only major vendor consistently using Exynos parts, but the reason we haven't seen other vendors adopt them could be anything from pricing to volume availability. Only the companies themselves know the details of these contracts.
gnx - Monday, June 29, 2015 - link
Thanks! The SoC market is really strange.
id4andrei - Monday, June 29, 2015 - link
This is Samsung's chance to eat Qualcomm's lunch. Close down node manufacturing for others (including Apple) and be like Intel. Either use Exynos or be satisfied with inferior nodes from other fabs.
CiccioB - Monday, June 29, 2015 - link
And that means starting to compete with the PP only, like Intel did. That is, if you force others to go to other foundries, you have to be sure you have the best one, or in case TSMC comes up with a better PP (like a 16+nm revision) you have just thrown all your customers to your fab competitors, doing double damage (or total damage). Or just think if Intel tomorrow suddenly opens to ARM customers in order to saturate its now-rusting 14nm machinery. Samsung would be in great trouble after that eventual (and IMHO stupid) move.
Investing in a PP is really expensive and there are other foundries capable of doing so. Samsung can't be sure it will always be the best one on the market, and it would have to invest tons of billions of dollars every year to make sure it stays number one (for SoCs of course).
ZeDestructor - Wednesday, July 1, 2015 - link
Samsung is part of a common platform alliance/agreement with GloFo, so while they could lock down and close others out, GloFo would not, so there's little commercial benefit from doing so. They could of course coerce GloFo into doing the same, but that lands them into hot water with regulatory watchdogs like the FTC regarding anti-competitive practices and collusion, which while Samsung wouldn't really mind (no, really), GloFo would.
eh_ch - Monday, June 29, 2015 - link
How will it take for Samsung's process to trickle down to AMD via GloFo? Could it bridge the efficiency gap to Nvidia / Intel? Holding out hope that ATI/AMD will be competitive once more.
eh_ch - Monday, June 29, 2015 - link
How long will it take, that is
Adding-Color - Monday, June 29, 2015 - link
No, AMD won't have a technology advantage over Nvidia on next-gen GPUs. Currently it looks like Nvidia will choose Samsung for their next node, and as Samsung and GloFo have some kind of alliance and share processes (GloFo licenses some Samsung processes AFAIK), the technology should be very similar for both. Yet AMD should have a small HBM advantage; they have better relations with Hynix (and helped develop HBM) than Nvidia.
jjj - Monday, June 29, 2015 - link
There won't be a HBM advantage from a technological point of view, at best AMD could get slightly better pricing but even that is unlikely since Nvidia has much higher volume. The first gen HBM was late and both Nvidia and AMD had plenty of time to prepare for it. As for the process, we don't really know what foundry each will use and what version of the process. On the GPU side both are more likely to go TSMC or use both. On the CPU side AMD will likely go GloFo but not this early version of the process, and Intel might go 10nm not long after AMD has 14nm. On 10nm TSMC and Samsung do seem to be catching up with Intel but doubt AMD will have 10nm early.
fluxtatic - Tuesday, June 30, 2015 - link
Hell, at this point I'd be happy to see AMD at < 28nm
repoman27 - Monday, June 29, 2015 - link
Stellar work, Andrei. Thank you!
tareyza - Monday, June 29, 2015 - link
Typo at the top of page 1: "Most notably it's the on the North American and..."
jjj - Monday, June 29, 2015 - link
Can you share process and size for the modem? For the A53 power scaling, maybe the small L2 cache is starving the cores?
On A57 clocks, I guess they can go higher, but here they wanted to keep the device thin (with a small battery) for marketing. They couldn't go at 1.9GHz since the SD810 was supposed to be there, but they should have gone just a bit higher to make it look better vs the iPhone.
On the GPU clocks debate, a smaller GPU would be better than lower clocks cost-wise, since in the real world chasing ideal efficiency is not feasible, although since they use it only in their own device, they have more room to choose efficiency vs costs.
How does GPU throttling correlate with CPU throttling, and even with which cores are active?
In GPU benchmarking the focus has to be, like on PC, on actual games, not synthetics, but AT just refuses to do that, even a little bit. Same on the CPU side, so the SoC benchmarking section in reviews is just skippable.
When estimating DRAM power consumption have you factored in that the S5 has only 2GB?
Anyway, nice article and glad you've mapped the SoC; we don't usually see anything besides the main blocks.
der - Monday, June 29, 2015 - link
BEST F*KING ARTICLE I READ ALL MONTH! YOU WIN MY MVP AWARD ANANDTECH!
michael2k - Monday, June 29, 2015 - link
"Currently I’m not aware of any semiconductor vendor following this method in the mobile space as there simply isn’t the same opportunity to recycle chips into lower performing SKUs."
I thought Apple did this with their A5 SoC?
Andrei Frumusanu - Monday, June 29, 2015 - link
The A5 in the Apple TV was a new SKU: https://archive.is/ufMpr
zodiacfml - Monday, June 29, 2015 - link
Thanks! An awesome review of an amazing SoC (or process node). This article has more parts than I can chew, and there's nothing else in my mind to ask, except maybe comparisons with Intel's 14nm products, e.g. Atom and Core M, as they have similar thermal budgets. The advantage puts Samsung in a position with little competition, similar to the one Intel enjoys in the PC world. This will benefit them for the short and long term.
The GPU throttling is a concern. Is there any current game that can reproduce it? If not, there's not much of a problem. I actually like this kind of overboost as it allows the user to get excellent performance from a game or application. Or is it time for SoCs to have heat sinks/spreaders? I still recall your Nexus 5 test with the cold compress which allowed constant top speeds.
Kepe - Monday, June 29, 2015 - link
What's the point of the high "overboost" if the SoC can only sustain it for 2 minutes before it has to start throttling because of heat? If a graphically intense game runs at 60fps while in the boost mode, all you'll notice is a drop to 30fps during a 20-minute gaming session. "Yay I got smooth framerates for a few minutes and I lost a ton of battery life during that!"
zodiacfml - Tuesday, June 30, 2015 - link
It is useful if the game or application does not saturate the SoC. Some or many games are not as demanding as benchmarks. It also allows a very fluid UI and loading of content. Battery life during gaming is always an issue with high end smartphones.
tipoo - Monday, June 29, 2015 - link
Really well done article, and the Exynos really looks without peer (at least in the android space, if not anywhere [the A8 is still pretty impressive]) right now.
Someguyperson - Monday, June 29, 2015 - link
Andrei, your mystery SoC block is most likely a DSP. It makes a lot of sense being next to the audio block in your diagram. DSPs are also used to offload some compute when doing things like music playback and making a call, so its proximity to the A53 cluster makes sense too.
Andrei Frumusanu - Monday, June 29, 2015 - link
Please do not read too much into where I put the audio block on the diagram, that one was part of the blocks which I couldn't physically locate so I just chose a fitting space where to put it in a logical manner. The top right quadrant and the ISP are *very* abstracted and representative of physical location. All audio is either handled by the A5 itself or by the main CPUs. I'm pretty certain the SoC doesn't have any separate DSP for any task so it most likely is something else.
Andrei Frumusanu - Monday, June 29, 2015 - link
* Non-representative for that matter.
tareyza - Monday, June 29, 2015 - link
Andrei, you mentioned doing some undervolting and modifying some of the GPU frequency states in your article. I found some of the power consumption gains you posted very interesting. Would you recommend that an end user (such as myself) that has a rooted device attempt to do these things? If so, what exactly must be done in order to undervolt? (I'm assuming a custom kernel and/or ROM is required..?)
Andrei Frumusanu - Monday, June 29, 2015 - link
You need to flash a custom kernel on the device. I can't really point out to where you can find such resources due to conflict of interest and myself being active in the modding community, but if you search for it I'm sure you'll be quick to find guides and download links to achieve what you want.
Stuka87 - Monday, June 29, 2015 - link
Top notch article Andrei!
syxbit - Monday, June 29, 2015 - link
Great job. Incredible detail. For the SD810 deep dive, I assume you'll be looking at a v2.1.
I would really like you to cover the important questions of the 810's heat problems, and whether v2.1 really fixes them. It's annoying to see QCOM's marketing VP pretend that the overheating problems are imaginary, and I don't think anyone really believes that v2.1 completely fixes the problem.
I'm disappointed that there's just nothing else on the market for OEMs. The OnePlus 2 basically had no choice: either a MediaTek or QCOM. Unfortunately, rumours state that the 2015 Nexus 6 will ship with the now ridiculed SD810 :(
Kepe - Monday, June 29, 2015 - link
At least the SD810 in OnePlus 2 will only be clocked at 1.8GHz max and they use graphene (IIRC) to spread the heat more effectively to the entire back cover of the device. I think I also read the OP2 will have the 2.1 version of SD810. Let's just hope they've done a good job optimizing their firmware and software. This article is amazing, I think every smartphone manufacturer should read it, they'd learn a lot. Great job, Andrei. Quite a lot of typos/breaks in thought through a sentence unfortunately, you must've stayed up late many a night writing this :p
tuxRoller - Monday, June 29, 2015 - link
All the SoCs have issues, to varying degrees (hehe), with thermals.
http://wccftech.com/snapdragon-exynos-atom-a8-benc...
The 810 is definitely the worst when looking at non-gpu benchmarks.
zepi - Monday, June 29, 2015 - link
I find "overclocking" gpu and cpu both useful for general usage. At least video and picture processing apps that resize/cut/process video and photos can benefit from extra gpu-oomph while actual processing time is still within reasonable times to allow the process to complete before thermal limits kick in. Ofc, this depends solely on the app and whether it uses GPU processing.
beginner99 - Monday, June 29, 2015 - link
Problem with this is that even my old phone on 28 nm uses > 50% of battery for the screen. So there is much more to be gained from better screens than better SOC and process.jjj - Monday, June 29, 2015 - link
That's debatable. Displaymate puts average power consumption for the screen on the S6 at 0.65 watts and max power at 1.2W. http://www.displaymate.com/Galaxy_S6_ShootOut_1.ht...
The SoC, the RAM, the connectivity use plenty of power, and the screen can be turned off a lot, so it uses a lot less power than people seem to think.
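The DisplayMate figure quoted above can be turned into a rough share of a full charge; the battery specs (2550mAh at 3.85V nominal) and the 4h screen-on time below are assumptions for illustration, not from the article:

```python
# Back-of-the-envelope: what share of a full charge does the screen
# take at the 0.65W average? Battery capacity, nominal voltage and
# screen-on hours are assumed values, not measured ones.
battery_wh = 2.550 * 3.85          # ~9.8Wh full charge
screen_avg_w = 0.65                # DisplayMate average for the S6 panel
screen_on_hours = 4.0              # assumed daily screen-on time

screen_wh = screen_avg_w * screen_on_hours
share = screen_wh / battery_wh
print(f"screen: {screen_wh:.1f}Wh of {battery_wh:.1f}Wh, ~{share * 100:.0f}% of the battery")
```

Under those assumptions the screen takes roughly a quarter of the charge, supporting the point that it is a smaller drain than people tend to think.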
djvita - Monday, June 29, 2015 - link
so when is samsung gonna post their soc kernel source? they havent since the s3, there are no stable custom roms as a result
djvita - Monday, June 29, 2015 - link
correction: on s6 xda, there are only debloated, deodexed, modded stock roms; some custom kernels and that's it. no custom roms like cyanogenmod, paranoid, aosp or aokp. only hope is a stock theme from the samsung app store and an aosp themed launcher.
SirCanealot - Monday, June 29, 2015 - link
Hi Andrei. I can't remember if I've ever remembered to comment on your articles since you started here, but this one was so cool I had to finally get around to it. Cut a long story short, I loved your kernels for the S3/Note 2 and I've really missed them since I've been on the S800 Note 3. So as always, thanks a LOT for all your amazing work over the years.
I have to give major props for your investigation of undervolting the SoC. I remember you having an argument with someone on XDA where you stated something like 'Undervolting is literally the only thing we can do to improve battery life without affecting performance, so let's undervolt everything', and I've always agreed completely with this (sadly my Note 3 does not UV well and simply will not behave consistently). So it was very interesting to read the UVing results in your deep dive. And also quite shocking how much energy a -75mV UV can save!
And again as always your writing is fantastic: very easy to read and understand, and very informative for people with only a small background in basic computing (eg, building/overclocking PC hardware). Much of what I know about SoCs today is down to your very informative posts and articles.
Please keep up the hard work, but of course you deserve a break more than anybody so I hope you don't have to work too hard for these amazing articles! :P
(And of course, please dear lord in the sky can the Note 5 have a memory card slot so I can enjoy this SoC!)
Impulses - Monday, June 29, 2015 - link
+1
aryonoco - Tuesday, June 30, 2015 - link
Seconded. This was an amazing piece, right at home at Anandtech. Informative, educational, in-depth. Simply awesome.
Thank you Andrei. I hope you are sticking around at AT and Apple doesn't poach you anytime soon ;-)
Marc GP - Monday, June 29, 2015 - link
Best review I have ever read, seriously, ever. Thank you.
turtleman323 - Monday, June 29, 2015 - link
How did you perform the power measurement? Did you hook up the battery to a Monsoon Power Meter or directly instrument the motherboard? It would be a nice addition to discuss/show this in the article.
Kepe - Monday, June 29, 2015 - link
I think he said in the article that he hooks up the phone to an external power supply.
"To get the numbers, we hook up the Galaxy S6 to an external power supply and energy meter."
Kepe - Monday, June 29, 2015 - link
It's all explained right at the top of the "CPU power Consumption" page.
turtleman323 - Monday, June 29, 2015 - link
I was asking for a more in-depth explanation. There are several different places a mobile device can be hooked up to for measuring CPU power -- as well as different techniques for doing so. Depending on where you instrument you will see different results. For example measuring at the battery includes the power regulator, and each of the SoC subsystems has a different power domain. Also given that it's an internal battery, it is much harder to instrument than a removable battery.
ZeDestructor - Wednesday, July 1, 2015 - link
Given he took apart the phone, the internal battery isn't too much of an issue, and as for wiring it up, again, not much of an issue if you have access to a bench power supply - set it to ~3.8V DC, wire it in, and let 'er rip
jack1458 - Monday, June 29, 2015 - link
Your percentage math is incorrect: going from 113 mm² to 78 mm² is a 31% shrink, not a 44% shrink. This is a classic mistake: a decrease percentage must be computed with respect to the initial quantity, not the final one.
name99 - Monday, June 29, 2015 - link
"Another CCI-connected block is a new kind of IP that we haven’t seen before in the mobile space: a memory compressor."
My assumption, when Apple added page compression to Mavericks, was that they did the same thing to iOS 7 at the same time. The internet is kinda vague on whether this happened, but I'm seeing more articles that agree than don't. Likewise it's unclear whether they used hardware for this.
With this background, it is noteworthy that at WWDC Apple made a (reasonably) big deal about a new lossless compression library for both iOS and OSX. I take this as a signal (maybe I'm wrong) that they HAVE added a compression block to the A9 (it's a pretty obviously useful addition, after all). It's possible that there was an earlier block, but with a clumsy usage model and/or only one codec and/or other limitations; but the new block is robust enough that it can be exposed as a generic API.
name99 - Monday, June 29, 2015 - link
Another way Apple could use such compression HW is in file system compression. HFS+ has had file system compression for, what, ten? years now, and while it's only used on OSX at OS installation (various system files are compressed at installation, but nothing is dynamically compressed), that's a backward compatibility issue that doesn't matter on iOS. Of course they could compress in SW today (and maybe they will on older devices) but HW is clearly lower power and probably faster. (And of course you can do this sort of thing at the flash controller level; but doing it higher up in the OS gives you much more power. Files that compress really small can be stuck directly in the catalog, with no allocation block at all; and already compressed files --- JPG, MP3, that sort of thing --- can be skipped over, avoiding wasting power trying to compress them further.)
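The point about skipping already-compressed files is easy to demonstrate; zlib below is only a software stand-in for whatever codec a hardware block would implement:

```python
import os
import zlib

# Zero-filled pages collapse to almost nothing, while random-looking
# data (a stand-in for JPG/MP3 payloads) doesn't shrink at all --
# so skipping already-compressed content avoids wasted work.
zero_page = bytes(4096)
noise_page = os.urandom(4096)

print(len(zlib.compress(zero_page)))   # a few dozen bytes
print(len(zlib.compress(noise_page)))  # around 4KB or more: no gain
```

A quick entropy check like this is exactly why compressors bail out early on incompressible input.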
name99 - Monday, June 29, 2015 - link
"Apple A8(Typhoon)
AArch64"
How did you learn THIS code name? This appears to be the first mention of it on the web?
Is Anand feeding you insider information!?
Should we take this to mean
(a) The A9 will be called Hurricane?
(b) That it will not be a major revision, but basically a tweak of the A8 (same basic architecture)?
Andrei Frumusanu - Monday, June 29, 2015 - link
Eh, somebody noticed :) The name can be seen whenever an iPhone crashes due to a kernel panic. It just took a long time to actually catch this occurrence.
tipoo - Tuesday, July 7, 2015 - link
Interesting, I thought since it wasn't as big a change, it was still going by cyclone or cyclone+ or something.
Red Panda - Monday, June 29, 2015 - link
Oh! They're making chips with GloFo; yield may be an issue.
tuxRoller - Monday, June 29, 2015 - link
Seeing these perf/W curves really shows that those SoCs that make use of highly clocked A53s are making mistakes. Pretty clearly, the smarter solution would be to include at least one A57/A72.
tuxRoller - Monday, June 29, 2015 - link
"For example 50% up-threshold of the 5433 would mean a 100% load at 800MHz of the A53 cores"
Are you sure about that? Since the max freq is 1300MHz, a 100% load @800MHz should just be 800/1300 ~ 61%?
Andrei Frumusanu - Tuesday, June 30, 2015 - link
The scheduler sees 400MHz at 0% load as 0% capacity; you have to account for the minimum frequency.
tuxRoller - Wednesday, July 1, 2015 - link
Gah! Thanks for the tip. Well, that certainly brings things a bit closer to 800MHz.
So, given the freq range of 900MHz, wouldn't 850MHz represent 50% load?
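That arithmetic can be checked with a quick sketch; the discrete frequency table below is assumed for illustration, not taken from the 5433's kernel:

```python
# Load-to-frequency mapping as discussed in this thread: capacity is
# measured over the min..max range, and a request snaps down to the
# nearest available discrete state. The state table is an assumption.
f_min, f_max = 400, 1300  # MHz, A53 cluster
states = [400, 500, 600, 700, 800, 900, 1000, 1100, 1200, 1300]

def freq_for_load(load):
    """Frequency matching a fractional load of the min..max range."""
    target = f_min + load * (f_max - f_min)
    return max(s for s in states if s <= target)

print(freq_for_load(0.5))  # 850MHz target snaps down to the 800MHz state
```

With a 900MHz range on top of the 400MHz floor, 50% lands at an 850MHz target, which a table of 100MHz steps would round down to 800MHz.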
Andrei Frumusanu - Wednesday, July 1, 2015 - link
Yes, if you're looking at average frequency over time instead of discrete frequency. But there is no 850MHz state, so you instead have a certain load percentage at the 800MHz state.
SanX - Monday, June 29, 2015 - link
Article of the year!
sandy105 - Tuesday, June 30, 2015 - link
Just asking... when is the galaxy alpha review coming, as was promised in the galaxy note 4 review? You have covered the chipset well, although i would really like to know about the camera sensor, especially since the quality is so good that reviewers claim it's the same 16MP ISOCELL sensor as in the galaxy s5.
Andrei Frumusanu - Tuesday, June 30, 2015 - link
I dropped it due to Samsung also basically abandoning the device back in January. I wouldn't recommend the device due to the low screen resolution and bad calibration. The camera sensor is a 12MP variant of the same S5K2P2 S5 sensor, so those claims are not unwarranted.
sandy105 - Tuesday, June 30, 2015 - link
Great article. I always love these deep dives instead of superficial youtube and written reviews. Nothing comes close to the joy of reading an anandtech review.
@andrei
If you have time please push out a dive-in or mini article for exynos 5430 and MT6595.
at80eighty - Tuesday, June 30, 2015 - link
all i can say is - you guys educate us. thanks
sneha45 - Tuesday, June 30, 2015 - link
great article
i have bookmarked your blog
http://www.userlinks.in/category/smartphones-revie...
achillez - Tuesday, June 30, 2015 - link
One of the best articles I've read in a long time. Great job
spikebike - Tuesday, June 30, 2015 - link
Why the single core SpecINT instead of the parallel benchmark SpecINT Rate?
darkich - Wednesday, July 1, 2015 - link
I dare say this was one of the best, if not *the* best, articles ever on AT, and the best SoC analysis on the entire Internet. Bravo
IlllI - Wednesday, July 1, 2015 - link
didn't samsung poach a bunch of amd engineers a couple years back?
duartix - Thursday, July 2, 2015 - link
Great article. BTW, these guys are riding on parts of it. They usually do it and link back, but I suspect they never asked for your permission.
http://www.phonearena.com/news/Samsung-Exynos-7420...
muhamadamrueldan - Thursday, July 2, 2015 - link
i have a small question: now i have a nexus 6 which has the SD805 inside. i hate how the cpu freq is 2.7, because it's not real and there's no real difference in the real world. i want to know the program u use to tell when the cpu throttles and clocks down to a lower freq, so i can find the best max freq to set my cpu to. that way i will surely have it at a lower voltage, and i will give the difference to the gpu, but i think it will automatically not decrease as much as it did with the cpu clocked at 2.7 with 1070mV.
Now i am running at 2GHz 930mV and really i can't feel a difference.
thank you for that great article
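The commenter's numbers can be sanity-checked with the usual first-order CMOS scaling, where dynamic power goes as frequency times voltage squared; a rough sketch only, ignoring leakage and the rest of the platform:

```python
# First-order CMOS dynamic power: P ~ f * V^2. Applied to the numbers
# above (2.7GHz @ 1070mV stock vs 2.0GHz @ 930mV undervolted/underclocked).
# Leakage and non-CPU platform power are ignored in this sketch.
def relative_dynamic_power(f_new, v_new, f_old, v_old):
    return (f_new / f_old) * (v_new / v_old) ** 2

ratio = relative_dynamic_power(2.0, 0.930, 2.7, 1.070)
print(f"~{ratio * 100:.0f}% of stock dynamic CPU power")  # ~56%
```

So by this first-order estimate the 2GHz/930mV setting draws a bit over half the stock dynamic CPU power at full load, which is why the commenter sees no perceptible difference but better battery behavior.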
phoenix_rizzen - Tuesday, July 14, 2015 - link
If you are running Android 5.0 or newer, then you can go into Developer Options and enable the "CPU Info" option. That will put an X+1 (where X is the number of CPU cores in the SoC) line overlay in the top-right of the screen.
Line 1: CPU Temperature
Line 2: CPU core 0 governor, current frequency
Line 3: CPU core 1 governor, current frequency
Line 4: CPU core 2 governor, current frequency
Line 5: CPU core 3 governor, current frequency
and so on.
Very handy to watch how different governors and hotplug systems work while you use the phone.
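For those without the overlay, the same per-core data can be read from cpufreq's standard sysfs nodes; a hedged sketch, since the paths follow the generic Linux cpufreq layout and availability varies by kernel (on Android you'd run this from an adb/root shell):

```python
import glob

# Read each online core's current frequency from the standard Linux
# cpufreq sysfs layout. Nodes may be missing or unreadable depending
# on the kernel, so failures are simply skipped.
def read_cpu_freqs():
    freqs = {}
    pattern = "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"
    for node in sorted(glob.glob(pattern)):
        cpu = node.split("/")[5]  # e.g. 'cpu0'
        try:
            with open(node) as f:
                freqs[cpu] = int(f.read()) // 1000  # kHz -> MHz
        except (OSError, ValueError):
            pass  # core offline or node unreadable
    return freqs

print(read_cpu_freqs())
```

Polling this in a loop while using the phone shows the same governor and hotplug behavior the overlay visualizes.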
samer1970 - Friday, July 3, 2015 - link
Exynos 7420 is behind when it comes to gaming performance. It is a problem in tablets, where Snapdragon 810 heating is no issue, but I don't see it as a problem in mobile phones at all.
Having said that, the iPhone rules the market today in both performance and efficiency
ads2015 - Saturday, July 4, 2015 - link
Testpm - Friday, July 10, 2015 - link
BTW, Samsung is also working on 11k display technology too now!
garnikg - Wednesday, June 15, 2016 - link
Very nice! Thanks a lot. Does the 8990 block diagram (contents and location of blocks) look similar? Also, do you know if it's PCIe Gen2 or Gen3?
thebarca - Friday, June 17, 2016 - link
If I underclock the CPU to a lower frequency, let's say the A57 to 1.9 GHz, would that give me better battery life, since the A57 runs more efficiently at 1.9 GHz?
mbrandalero - Tuesday, May 8, 2018 - link
Could someone clarify how it is possible that two very distinct processors can run with a 2x frequency difference at the same voltage? I.e. the A53 runs at 400 MHz at 656 mV, while the A57 runs at twice that frequency (800 MHz) at nearly the same 687 mV.
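A rough way to see how that can happen (my own back-of-the-envelope, not from the article): the frequency a core reaches at a given voltage is set by its own critical-path delay, which differs between designs, so there is no universal voltage-to-frequency law. What the voltage does govern is energy per switching event, with dynamic power scaling roughly as P = C·V²·f. Using purely illustrative switched-capacitance values (not measured Exynos figures):

```python
def dynamic_power_mw(c_nf, v_mv, f_mhz):
    """Approximate dynamic power P = C * V^2 * f, returned in mW.

    c_nf: effective switched capacitance in nF (hypothetical values here,
          chosen only to illustrate the scaling, not measured data).
    """
    v = v_mv / 1000.0              # millivolts -> volts
    f = f_mhz * 1e6                # MHz -> Hz
    c = c_nf * 1e-9                # nF -> F
    return c * v * v * f * 1000.0  # watts -> milliwatts

# The bigger A57 core switches far more capacitance per cycle than the
# little A53, so even at a near-identical voltage its power differs a lot.
a53 = dynamic_power_mw(c_nf=0.15, v_mv=656, f_mhz=400)
a57 = dynamic_power_mw(c_nf=0.45, v_mv=687, f_mhz=800)
print(f"A53 ~{a53:.0f} mW, A57 ~{a57:.0f} mW")
```

In short, the same voltage supports very different frequencies on different microarchitectures; the cost shows up in area, capacitance, and therefore power, not in the voltage itself.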