52 Comments

  • alex3run - Wednesday, February 20, 2013 - link

    And where is the official information about the GPU and its power consumption?
  • rd_nest - Wednesday, February 20, 2013 - link

    http://www.theinquirer.net/inquirer/news/2249399/s...
  • toyotabedzrock - Wednesday, February 20, 2013 - link

    That does not sound like an SGX GPU.

    I think they should find a GPU that supports OpenGL ES 3.
  • rd_nest - Wednesday, February 20, 2013 - link

    72 GFLOPS... most probably the T604. It was the same for the N10.
  • lmcd - Sunday, February 24, 2013 - link

    I think it's pretty obviously an ARM solution as that's what they have experience with. I think that also makes more sense because they have been touting the benefits of using those combinations. Furthermore the 544MP3 would practically be a step back from the T604, even ignoring the other ARM and IT solutions they could use.

    Their rep probably got it wrong.
  • banvetor - Wednesday, February 20, 2013 - link

    One of the unanswered questions at ISSCC was what the delay penalty is for switching between the A7 and the A15 cores... I don't see all that bright a future for this baby.
  • StormyParis - Wednesday, February 20, 2013 - link

    Why? Because of a switching delay you don't know, for a switch whose frequency you don't know?
  • jeffkibuule - Wednesday, February 20, 2013 - link

    I don't think the switching delay matters that much, when the goal of these chips is reasonable performance with good battery life, not maximum performance; otherwise you might as well just chuck the A7 cores and run the A15s at full blast.
  • twotwotwo - Wednesday, February 20, 2013 - link

    Yeah, the compromise-y nature of it is important for the whole thing to make sense. In theory, 6W's a lot. In real use, you rarely hit that--you usually just blast 1-2 of the A15s for a few seconds while you load a webpage or app or do some other big chunk of CPU-bound work.

    If I'm going to second-guess and play armchair engineer (as DigitalFreak aptly put it), maybe you can imagine other uses for all that die area than going 4+4-core when many workloads still aren't heavily threaded--more cache w/ the A15s, more GPU (I bet games on 1080p phone screens can use a lot), something. Apple was OK with dual-core, at least as of the A6(X). On the other hand, I haven't the first clue how other designs perform, etc., and Samsung does, so I should close my mouth. :)
  • wsw1982 - Wednesday, February 20, 2013 - link

    I don't see any mention of an L3 cache, and the L2 caches of the A7 and A15 are not shared. Therefore it's quite possible the switch goes through main memory, which could add milliseconds of delay (dumping and reloading cache data to and from low-power DDR, powering down and warming up the cores). How much work would the A15/A7 have to do just to even out the performance and energy penalty of switching?
  • Wilco1 - Wednesday, February 20, 2013 - link

    The L2 caches have a special port to allow cachelines to be swapped directly. When both caches are powered up, coherency is maintained between them.
  • DigitalFreak - Wednesday, February 20, 2013 - link

    We have an armchair engineer in 'da house!
  • xaml - Saturday, February 23, 2013 - link

    An ARM-chair engineer... ;)
  • Wilco1 - Wednesday, February 20, 2013 - link

    Ever heard of Google?

    http://samsung.com/global/business/semiconductor/m...
  • tuxRoller - Wednesday, February 20, 2013 - link

    It depends on what is handling the switching.
    These initial implementations are using a cpufreq driver with internel core switching being moved from the hypervisor to the kernel in order to switch between pairs (as illustrated above, the heterogeneous mode will come after a good solution is found for the scheduler). The switching times aren't bad b/c you have cache coherency (not shown above) and thus you only need to transfer the active register states.

    http://lwn.net/Articles/481055/
    http://lwn.net/Articles/501501/#Add%20Minimal%20Su...
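
    [Editor's note] The cluster-migration scheme described in this comment can be sketched as a toy model. The thresholds, names, and hysteresis policy below are made up for illustration; the real logic lives in the kernel's cpufreq driver:

```python
# Toy model of big.LITTLE cluster migration: only one cluster runs at
# a time, and a "switch" just transfers the active register state
# (cache contents stay coherent via the L2 cache's special port).
# Thresholds are hypothetical, purely for illustration.

A7_CLUSTER, A15_CLUSTER = "A7", "A15"
UP_THRESHOLD = 0.85    # assumed load above which we move to the A15s
DOWN_THRESHOLD = 0.30  # assumed load below which we drop back to the A7s

def pick_cluster(current, load):
    """Decide which cluster should run, with hysteresis to avoid thrashing."""
    if current == A7_CLUSTER and load > UP_THRESHOLD:
        return A15_CLUSTER
    if current == A15_CLUSTER and load < DOWN_THRESHOLD:
        return A7_CLUSTER
    return current

def switch(state, target):
    """A switch only moves register state; no main-memory round trip."""
    return {"cluster": target, "registers": state["registers"]}

state = {"cluster": A7_CLUSTER, "registers": {"pc": 0x8000}}
for load in (0.1, 0.9, 0.95, 0.2):  # simulated load samples over time
    target = pick_cluster(state["cluster"], load)
    if target != state["cluster"]:
        state = switch(state, target)
print(state["cluster"])  # ends back on the A7 cluster after load drops
```

    The hysteresis gap between the two thresholds is what keeps the pair from ping-ponging on a load that hovers near a single cutoff.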
  • amdwilliam1985 - Wednesday, February 20, 2013 - link

    I can't wait to see benchmarks on these.

    "While it's possible for you to use both in parallel, initial software implementations will likely just allow you to run on the A7 or A15 clusters and switch based on performance requirements."
    -Imagine future projects such as OUYA based on this baby with all cores enabled :)
    This will be a perfect HTPC.

    Intel had better be prepared; the clock is ticking. It seems like every generation, ARM CPUs take a big jump in performance.
  • Jinxed_07 - Wednesday, February 20, 2013 - link

    There's a difference between a more powerful CPU and one that simply has more cores slapped on. If ARM really had a more powerful CPU, then this architecture would only need one smaller CPU that could run everything while consuming less energy, rather than needing two in order to save energy.
    Furthermore, if Intel should be afraid of ARM, then they should be afraid of AMD for making an 8-core processor that outperforms a 4-core processor by a bit.
  • flyingpants - Wednesday, February 20, 2013 - link

    Hello. In the first chart, it says both quad core CPUs are ARM 7. No mention of ARM 15. Is this correct?
  • Cow86 - Wednesday, February 20, 2013 - link

    I'm afraid you are confused in this case... that is the architecture of the cores, being ARM v7... all Cortex-A cores use this architecture: the A5, A7, A8, A9 and A15... so it is correct :) The left column in that table is the A15, the right is the A7.
  • pyaganti - Thursday, February 21, 2013 - link

    Any idea why Samsung is using the A7 for LITTLE instead of the A5? If the A7 and A5 are both ARM v7 architecture, it would make more sense to use the A5, because the A5 is a lower-power core than the A7, and that's the main concept of the LITTLE core, right?
  • Cow86 - Thursday, February 21, 2013 - link

    Because the A7 is specifically designed to be a LITTLE core to the A15; the A5 is not. Furthermore, the A7 has better performance/watt than the A5, and a very similar die size.
  • saurabhr8here - Wednesday, February 20, 2013 - link

    Both say ARM v7a, which is the instruction set architecture. Both A7 and A15 processors use the same instruction set, hence they are able to implement the big.LITTLE architecture in the first place.
  • twotwotwo - Wednesday, February 20, 2013 - link

    Nah, it's talking about the instruction set. v7a is the common instruction set for the A7 and A15 microarchitectures.
  • MrSpadge - Wednesday, February 20, 2013 - link

    The architecture is ARM v7a; the actual chip designs are called A7 and A15. This means both designs understand the same instructions and can thus run the same software (which is needed for quick, transparent switches). ARM is not very good at the naming game yet.
  • SetiroN - Wednesday, February 20, 2013 - link

    It can be misleading if you don't pay attention: ARM v7a is the architecture revision (the ISA), while Cortex A15 and A7 are the cores' names, which aren't mentioned on that chart.
  • toyotabedzrock - Wednesday, February 20, 2013 - link

    I have to wonder if they tested the A7 to ensure it has the power to run the UI smoothly.
  • UpSpin - Wednesday, February 20, 2013 - link

    No, they never test it. They haven't even tested if the SoC works at all. And the performance numbers? Just random numbers. /s

    The A7 is only slightly slower than an A9. Android JB runs smoothly on dual-core A9 SoCs because it makes heavy use of the GPU for rendering. So for a lag-free UI the GPU will be more important.
    A quad-core A7 will be faster than a dual-core A9! It will handle the usual tasks without any issues at all.
  • tuxRoller - Thursday, February 21, 2013 - link

    http://www.arm.com/products/processors/cortex-a/co...
    They're claiming around a 20% improvement, but I'm guessing that's at the high end. Other numbers I've seen put it a bit below an A8.
    However, it uses MUCH less power than these other chips (ARM claims it's similar to an A5 in terms of power draw).
    Considering the biggest change (versus the A8) looks to be in the branch prediction, and it should clock higher:
    http://www.arm.com/products/processors/cortex-a/co...
  • phoenix_rizzen - Wednesday, February 20, 2013 - link

    The A7 is supposed to have just slightly less performance than an A9, but with much reduced power requirements. It will most likely take over from the A8 and low-end A9 SoCs for low-to-middle-range phones. Look for dual-core A7s to hit "feature" phones this year.
  • MrSpadge - Wednesday, February 20, 2013 - link

    Looks like sharing the L2 between both CPU clusters might be a good idea. Done in some clever way it could even speed the switching up.
  • UpSpin - Wednesday, February 20, 2013 - link

    According to the chart, the quad-core A15 part consumes about 5W! Probably CPU only. If this SoC were put inside a smartphone, the battery would be dead in less than an hour once you also consider the power consumption of the display and PowerVR GPU.
    This SoC is for tablets, with large enough batteries and a large enough surface to passively cool the 5W+GPU waste heat. Maybe it will find a use in the next Galaxy Note 10 or in a boosted Nexus 10, but never in a smartphone.
  • tempestglen - Wednesday, February 20, 2013 - link

    4xA15 = 4.5 watts
    4xA7 = 0.75 watts

    80% of a phone's running time is low-performance, so the battery life of the Exynos Octa will be good.
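
    [Editor's note] Taking the commenter's figures and 80/20 duty-cycle assumption at face value, the back-of-envelope average works out like this:

```python
# Rough average CPU power from the figures above:
# 4xA15 ~ 4.5 W, 4xA7 ~ 0.75 W, 80% of time on the little cluster.
# All numbers are the commenter's estimates, not measurements.
p_a15, p_a7 = 4.5, 0.75
avg_power = 0.8 * p_a7 + 0.2 * p_a15
print(round(avg_power, 2))  # 1.5 W average, vs 4.5 W if A15-only
```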
  • Aenean144 - Thursday, February 21, 2013 - link

    It really depends on the 80%.

    The A7 turns the clock back about 3 years, to the Cortex-A8 days, in terms of DMIPS/MHz. I can easily see many an app, process or thread wanting more. It will be interesting to see where running a web browser lands. It's not going to be pretty if it stays on the A7.
  • UpSpin - Thursday, February 21, 2013 - link

    A7 has 1.9 DMIPS/MHz, A9 has 2.5 DMIPS/MHz.
    The Galaxy Nexus has a 1.2 GHz Dual Core A9 --> 6000 DMIPS
    This SoC has a 1.2 GHz Quad Core A7 --> 9120 DMIPS
    Really, the LITTLE part should handle any normal tasks easily. Video playback gets done by hardware decoders, GUI rendering gets done by the GPU, website parsing and other processing stuff gets done by the CPU.

    The Galaxy Nexus runs fluidly. This quad-core A7 is at least 30% faster; a smartphone could live without the A15 easily.
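
    [Editor's note] The DMIPS arithmetic in this comment checks out, and is easy to verify:

```python
# Sanity-checking the DMIPS figures from the comment above.
def dmips(dmips_per_mhz, mhz, cores):
    """Aggregate DMIPS = per-MHz rating * clock in MHz * core count."""
    return dmips_per_mhz * mhz * cores

galaxy_nexus = dmips(2.5, 1200, 2)  # 1.2 GHz dual-core A9
octa_little  = dmips(1.9, 1200, 4)  # 1.2 GHz quad-core A7 (commenter's assumed clock)
print(galaxy_nexus, octa_little)    # 6000.0 9120.0
print(round(octa_little / galaxy_nexus - 1, 2))  # 0.52 -> ~52%, so "at least 30% faster" holds
```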
  • Death666Angel - Thursday, February 21, 2013 - link

    All very true. :D
    My Galaxy Nexus has some "think pauses" (I'm running a custom everything, so not sure if that happens on plain Android). But when that happens I often wonder if it is a CPU issue or a memory one. It mostly happens when starting/switching between memory-intensive apps (big emails, video, browser). Would the noticeable performance increase be bigger from an A15 upgrade or from a halfway-decent SSD with >200MB/s seq r/w and >30MB/s rnd r/w? :)
  • UpSpin - Thursday, February 21, 2013 - link

    That's the idea behind big.LITTLE: for low-demand tasks use the A7, for high-performance tasks use the A15.
    But the device must be able to handle the A15's power consumption.
    If the A15s consume 5W and you start a game, which will most probably make use of the A15s' power, your smartphone battery will be dead in an hour just because of the CPUs.
    Yes, in standby the battery life will be good, but I never denied this. That's what big.LITTLE is made for.
    In heavy use, however, this SoC will, with CPU and GPU at full power, most probably consume 10W. A smartphone battery has <10Wh. A smartphone's surface is too small to dissipate 10W.
    Conclusion:
    This SoC won't find a use in a smartphone. It's physically impossible, unless you never make use of the A15 cores, which defeats the purpose of this SoC!
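
    [Editor's note] Putting the commenter's numbers into a quick calculation (the 10 W full-load figure is the commenter's estimate, and the battery capacity below is an assumed typical 2013 value, not a spec):

```python
# Battery-runtime claim above, in numbers. Both inputs are assumptions.
soc_full_load_w = 10.0  # CPU + GPU flat out, per the comment
battery_wh = 7.0        # assumed typical smartphone battery, well under 10 Wh
hours = battery_wh / soc_full_load_w
print(round(hours * 60))  # ~42 minutes -> "dead in an hour" checks out
```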
  • Aenean144 - Wednesday, February 20, 2013 - link

    I think that's just CPU.

    If you assume 3.5 DMIPS/MHz for Cortex-A15, a quad-core A15 running at 2 GHz is 3500*2*4 = 28000 DMIPS. That's quite close to the point in the upper right in the plot, which is actually a little over 5 Watts. Maybe 5.2 W.

    Even in a tablet, the SoC may be prevented from maxing out the CPU and the GPU at the same time. This could be an 8 to 10 W SoC with both the GPU and CPU maxed out.
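
    [Editor's note] The estimate in this comment is straightforward to reproduce:

```python
# 3.5 DMIPS/MHz (assumed A15 rating) * 2000 MHz * 4 cores, as above.
quad_a15_dmips = 3.5 * 2000 * 4
print(quad_a15_dmips)  # 28000.0, matching the ~5.2 W point on the plot
```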
  • xaml - Saturday, February 23, 2013 - link

    It is prominently claimed so for this very reason, here:
    http://www.sammobile.com/2013/02/23/samsung-ditche...
  • xaml - Saturday, February 23, 2013 - link

    That was @UpSpin, for the prehistoric lack of editing.
  • lmcd - Sunday, February 24, 2013 - link

    Disappointing. Exynos 5 Octa should have made it in there. And as some commenter noted, a 720p SAMOLED+ would be preferable to the 1080p SLCD cited.
  • alexvoda - Wednesday, February 20, 2013 - link

    Is it me or is the layout of that chip really inefficient?
    I'm not really knowledgeable about chip design, but I think the orange-brown area around the CPU may be wasted surface.
    At least compared to these:
    http://www.anandtech.com/show/6323/apple-a6-die-re...
    http://www.anandtech.com/show/6472/ipad-4-late-201...
    http://www.anandtech.com/show/5831/amd-trinity-rev...

    There doesn't seem to be a lot of space left for the GPU.
    And since this will probably go into 1080p-or-higher devices, the GPU matters.
  • Death666Angel - Wednesday, February 20, 2013 - link

    It's just you. :P
    The stuff you linked to has the exact same "orange brown area". And that probably isn't wasted surface.
  • UpSpin - Wednesday, February 20, 2013 - link

    I don't think the die photo shows the whole die, only the upper half maybe.
    I'm also no chip designer, but maybe the orange brown area gets used for wires to connect the different parts.
    The top left part looks odd, not orange brown, not structured, but near the RAM interfaces. Maybe they blurred that part because it contains their 'secret CPU switching' part?
  • alexvoda - Thursday, February 21, 2013 - link

    I see.
    Yup, the photo only shows half of the die, which really made it more confusing.
    I didn't mean wasted as in empty and doing nothing, but more as in no active components. If that were the entire die, the percentage occupied by the orange-brown space would have been huge. Since this is probably just half of the die, it matters a lot less.
  • alex3run - Thursday, February 21, 2013 - link

    I think Samsung will unveil more details about this SoC later.
  • Shadowmaster625 - Wednesday, February 20, 2013 - link

    I was saying years ago that Intel needed to take an Atom and stick it on the same die as an i-series chip, and essentially do with them exactly what is described in this article. But of course, they didn't do it, and as a result they lost billions in potential mobile chip sales to companies like Apple. Haswell looks like something of an improvement, but you can tell they're still not doing what needs to be done.

    90% of the time, a tablet/ultrabook only needs the CPU power of one single Atom core. This is the basic fact that has been ignored by Intel (and AMD) for more than a decade now. But Samsung understands this.
  • djgandy - Thursday, February 21, 2013 - link

    The problem with laptops was not the CPU, it was all the other cheap components that sucked power. Laptop screens are terrible and good value ones still use mechanical hard drives!
  • UpSpin - Thursday, February 21, 2013 - link

    No!
    Extra devices get shut down when not in use.
    Laptop screens are/were terrible, but that means they are low-resolution TN panels. TN panels have better transmittance than IPS, and low-resolution displays have better transmittance than high-resolution ones. So yes, they are more efficient and don't require such a bright backlight.
    The idle power consumption of an HDD is higher than that of an SSD, but in return you get more space. And in general, the impact is small compared to the CPU and GPU power consumption.
    I'm sorry, your logic is flawed.

    Btw: This is an article about the power consumption of an SoC, and only the SoC! Why compare it with the power consumption of a whole system? This SoC consumes less power than an Intel CPU/GPU/chipset combo. That's what matters. Nothing else! So don't compare apples with oranges.

    Shadowmaster is right; your post is nonsense.
  • mayankleoboy1 - Thursday, February 21, 2013 - link

    Smart move to abandon the Mali architecture; it was slow compared to the PowerVR GPUs.
    But is it enough to win benchmarks against the iPhone 6 and iPad 5?

    And the Adreno 320 is a wimp by comparison.
  • alex3run - Thursday, February 21, 2013 - link

    The Mali T604 is on the same level as a PowerVR 554 @ 280 MHz, and the PowerVR 544MP3 is slower than both of those GPUs, so it would be very strange to see it inside the Exynos Octa. More likely there will be a Mali T658/678MP4, which is twice as fast as anything on the market today.
  • lmcd - Sunday, February 24, 2013 - link

    But I'd rather have seen a quad-core A7 and a dual-core A15. I feel at least two of the four possible threads should always be on an LP core. Maybe an Exynos 5 Hexa, and an Exynos 5 Octa for the next Chromebook (plus a bigger battery).

    That Exynos 5 Hexa could then fit another (power-gated) GPU module. Or maybe a small 2D accelerator to allow constant gating of the GPU during normal usage (like the OMAP 4470).

    Regardless, the Exynos 5 Dual is too weak a low end (between the low core count and no LP cores) while the 5 Octa is too strong. *Sigj* this time I feel I really could armchair-engineer a better solution.
  • lmcd - Sunday, February 24, 2013 - link

    That was supposed to be a *sigh*
