30 Comments
iwod - Thursday, June 2, 2011 - link
So basically an Apple A5 without the two M3 cores. But 1.8GHz is amazing. Is it really intended to be a mobile chip? I think Apple needs to up its game a bit.
Chloiber - Thursday, June 2, 2011 - link
Why? First devices arrive in "first half of 2012". Just "lol". And the A5 of the iPad 2 still has the better GPU.
ebolamonkey3 - Thursday, June 2, 2011 - link
No, the SGX 543 and SGX 544 are the same except the latter supports DX9.
NMR Guy - Thursday, June 2, 2011 - link
The A5 uses an SGX543MP2, so yes, the A5 has a better GPU. In the end, though, it's all about balance in the mobile game, so this will be just what a lot of manufacturers are looking for.
domdym - Thursday, November 3, 2011 - link
The iPad's GPU is dual-core; the OMAP's is single-core.
jjj - Thursday, June 2, 2011 - link
Any clue if it's on 40nm or 32/28nm?
jjj - Thursday, June 2, 2011 - link
I know the 4xxx series is on 40nm and the 5xxx on 28nm, but I don't want to just assume. Qualcomm said today they are sampling the MSM8960 this month, and the MSM8960 should be on 28nm, so I figured others should have 28nm soon too.
Wilco1 - Thursday, June 2, 2011 - link
From the press release: "The 45nm OMAP4470 processor is expected to sample in the second half of 2011, with devices expected to hit the market in first half 2012."
OMAP5 will be 28nm indeed.
Brian Klug - Thursday, June 2, 2011 - link
Like the rest of the OMAP4 family, it's 45nm. Apologies, I should've made that a bit clearer ;)
-Brian
Shadowmaster625 - Thursday, June 2, 2011 - link
I see that the smartphone world is innovating much faster than the x86 world. I've been saying for a while that Intel/AMD need to have a couple of cores like this so they can keep their monster x86 cores powered down when you are reading a web page or typing into a box like this. I am thinking a 386 with 8K of cache, clocked at 66 MHz, would do the trick. A tiny little core.
dagamer34 - Thursday, June 2, 2011 - link
That'd be a real waste of die space. Besides, pretty much every CPU since the Core days in 2006 has had the ability to underclock itself, and the Core iX series is able to turn off entire cores. And while web browsing may not be that demanding, there are always little background tasks that require some CPU.
ahmedz_1991 - Thursday, June 2, 2011 - link
Agree with dagamer.
Shadowmaster625 - Thursday, June 2, 2011 - link
A 386 with 8K of cache would consume how much of an i7-2600K die? 2%? 1%? How can that be a waste of space if it enables a 10% or more reduction in power consumption? You are the epitome of stagnation in the x86 world. No facts. No numbers. Not even any intelligent estimations. You just know it won't work.
sum1guy - Thursday, June 2, 2011 - link
You don't understand how an Intel CPU works. You could put a 66 MHz CPU on there that creates your page in 100 ms, or the 3.43 GHz CPU can create the page in 2 ms and then go to sleep. Intel looked into what you're suggesting over a decade ago and concluded that it always makes sense to have more power than you need. When you're working on an easy task, you just finish the task faster and then power down.
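sum1guy's race-to-sleep argument can be put in rough numbers. The sketch below is illustrative only: the power figures are assumptions chosen for the comparison, not measurements of any real chip.

```python
# Race-to-idle comparison over one fixed 100 ms window.
# All power figures are assumed, illustrative values (W * ms = mJ).

def energy_mj(active_w, active_ms, sleep_w, window_ms):
    """Total energy over the window: active phase plus sleeping remainder."""
    return active_w * active_ms + sleep_w * (window_ms - active_ms)

WINDOW_MS = 100

# Hypothetical 66 MHz core: modest power, busy for the whole window.
slow = energy_mj(active_w=0.5, active_ms=100, sleep_w=0.0, window_ms=WINDOW_MS)

# Hypothetical 3.43 GHz core: far higher power, active for only 2 ms,
# then power-gated to a small sleep draw for the remaining 98 ms.
fast = energy_mj(active_w=15.0, active_ms=2, sleep_w=0.05, window_ms=WINDOW_MS)

print(f"slow core: {slow:.1f} mJ, fast core: {fast:.1f} mJ")
```

Under these assumptions the fast core finishes 50x sooner and still spends less total energy (34.9 mJ vs 50 mJ), which is the crux of the race-to-idle argument; the conclusion flips if the fast core cannot gate its idle power down far enough.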
Galaxie500 - Thursday, June 2, 2011 - link
The rules are a bit different on very low-power devices, though. Peak power draw, as opposed to average power over time, is also important. If that 3.43 GHz CPU requires a huge inrush of current during those 2 ms, the battery may not be able to supply it. It may actually be better to draw a steady, lower current for 100 ms, especially if it simplifies the design of the power supplies.
Klinky1984 - Thursday, June 2, 2011 - link
Seriously, a 386 with 8K cache would be pathetically underpowered for doing anything really useful besides idling. Those two 266 MHz M3 cores probably run circles around a 386. Also, the i386 instruction set is antiquated, as most software is now optimized for the i586 and i686 instruction sets, plus all the MMX/SSE extensions.
Now, integrating a low-power Atom CPU might be more valuable, but more research into clock throttling and power gating is probably a better use of our time. A lot of the idle power consumption of recent computers has been the horrible idle power of GPUs. My Radeon 4850 sucks too much power sitting idle doing minor 2D/3D composition in Win XP or Windows 7. Luckily this is starting to change as GPU vendors take idle and peak power more seriously.
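Galaxie500's peak-draw point a couple of comments up can also be made concrete. The burst and steady strategies can cost similar total energy while demanding very different instantaneous current from the battery (power figures and the 3.7 V nominal Li-ion voltage are illustrative assumptions):

```python
# Instantaneous battery current, I = P / V, for burst vs steady strategies.
# Power figures and the nominal cell voltage are illustrative assumptions.

BATTERY_V = 3.7  # nominal Li-ion cell voltage

def current_a(power_w, volts=BATTERY_V):
    """Current drawn from the battery while dissipating the given power."""
    return power_w / volts

burst = current_a(15.0)   # hypothetical fast core: 15 W burst for ~2 ms
steady = current_a(0.5)   # hypothetical slow core: 0.5 W for ~100 ms

print(f"burst: {burst:.2f} A, steady: {steady:.2f} A")
```

The burst needs roughly 30x the current, so the cell's internal resistance and the voltage regulators have to be sized for the peak even though the average draw is tiny.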
Zoomer - Monday, June 20, 2011 - link
Not to mention that such a heterogeneous setup would need OS support for it to work well. Sure, one could build hardware support for switching, but that would be horribly inefficient, as the hardware can only guess at the workload. Also, it'll make the already horribly complex front end (x86 decode) even more complex.
Phones are mostly idle most of the time, but they need to perform housekeeping tasks, like checking in with the mobile network periodically. SoCs may even support that in hardware. The ARM ISA is also vastly less loaded with historical BS; they could also revise it if necessary, since backwards compatibility is really a non-issue there. It'll just be a recompilation. Not so for x86: we still need to run Quake, and IPX/SPX not working was the last straw.
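Zoomer's point that the OS, not the hardware, should drive core selection can be sketched as a toy migration policy. Everything below (names, thresholds) is hypothetical; real schedulers use far richer heuristics.

```python
# Toy OS-level core-migration policy: choose a core type from recent load.
# Thresholds and names are hypothetical, for illustration only.

LITTLE, BIG = "little", "big"

def pick_core(load_samples, up=0.8):
    """Return the core type to run on, given recent load samples in [0, 1]."""
    avg = sum(load_samples) / len(load_samples)
    if avg >= up:
        return BIG   # sustained heavy load: worth waking the big core
    return LITTLE    # housekeeping or mixed load: stay on the small core

print(pick_core([0.05, 0.10, 0.02]))  # periodic network check-in
print(pick_core([0.90, 0.95, 1.00]))  # page render or game burst
```

The point being: only software has the context (foreground app, pending timers, user interaction) to make this call well; hardware on its own would have to guess.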
mustafaka - Thursday, June 2, 2011 - link
They are following in the footsteps of the x86 world, on a path it already pioneered. I know there are major differences between the architectures. However, they have common problems, and common solutions as well.
FunBunny2 - Thursday, June 2, 2011 - link
The hardware instruction set of an Intel processor might surprise you.
mustafaka - Thursday, June 2, 2011 - link
Still, even the failures of the pioneers are hints to the followers.
JMC2000 - Thursday, June 2, 2011 - link
It would have to be P5- or P6-based at minimum... a 386, or even a 486, wouldn't cut it. Intel could integrate one or two Atom-like cores with a Core i-series chip, and AMD could integrate some Bobcat-like small cores with Phenom- or Bulldozer-based larger cores.
sum1guy - Thursday, June 2, 2011 - link
It is not necessary to pair small processors with large processors for power efficiency. Intel already studied it.
CPU A runs at 1000 MHz and works on hard problems
CPU B runs at 100 MHz to save power
So you have an easy task that you'd like to pass to CPU B. CPU B performs the task in 100 ms. CPU A performs the task in 10 ms and then sleeps for the next 90 ms. Intel concluded that a super-fast processor that is idle most of the time is more energy efficient than a slower processor that is running most of the time. Thanks to modern power gating, it makes sense to throw a super powerful CPU at even the simplest of tasks. It simply performs the task quickly and then powers down.
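Using the 1000 MHz / 100 MHz numbers above, the break-even condition can be checked directly: over the same 100 ms window, the fast core wins only while its active energy plus its gated idle energy stays below the slow core's. The power values below are illustrative assumptions.

```python
# Break-even test for race-to-idle with CPU A (fast) vs CPU B (slow).
# Over a 100 ms window: A is active 10 ms then idles 90 ms; B is active 100 ms.
# All power figures are illustrative assumptions.

def fast_core_wins(p_fast_w, p_idle_w, p_slow_w, t_fast_ms=10, t_slow_ms=100):
    """True if CPU A's total energy over the window beats CPU B's."""
    e_fast = p_fast_w * t_fast_ms + p_idle_w * (t_slow_ms - t_fast_ms)
    e_slow = p_slow_w * t_slow_ms
    return e_fast < e_slow

# Good power gating: 10x the speed at 8x the power, near-zero idle draw.
print(fast_core_wins(p_fast_w=0.8, p_idle_w=0.005, p_slow_w=0.1))

# Leaky idle power: the same fast core now loses the comparison.
print(fast_core_wins(p_fast_w=0.8, p_idle_w=0.03, p_slow_w=0.1))
```

So the quality of the power gating, not just the fast core's speed, decides whether the race-to-idle conclusion holds.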
lmcd - Sunday, October 7, 2012 - link
Umm, Intel isn't as right as you think. They had to spend A TON of R&D to get power gating to the point that what you suggested works reasonably well.
As for why it still isn't perfect...
Think about it: execution cores 3x the size of the little core in the "big little" pair, running at the same speed, are still going to use more power. And the instructions common on low-power cores might be different from those common in regular usage. So where do you draw the line? And if you draw the line anywhere toward the low-power side, you compromise peak performance, so efficiency is still less than optimal.
Using two different core types, with power gating or asynchronous clocks, is easily viable and probably more efficient.
sleepeeg3 - Thursday, June 2, 2011 - link
Innovating?! They are just making things faster. If Intel wanted to make a 10 GHz processor, they could. It would probably also require a 10 lb block of silver to cool it.
Unlike the x86 world, mobile chips are slow enough that they are not constrained by heat; they are only constrained by battery life. Every one of these speed increases comes with higher power consumption and lower battery life. They can keep doing this forever, until batteries only last a minute on a charge. Why? ...because people believe faster = better. Not sure what anyone does on a cell phone that requires this increase in speed.
As for your suggestion to use a 386, it is not a terrible idea, but the system components besides the CPU all draw a large amount of power, and integrating it would just add cost to the board for a feature no one cares about. Sleep or standby consumption is not enough for most consumers to care about, unless they buy into this global warming nonsense.
erple2 - Tuesday, June 7, 2011 - link
They're nearly synonymous. For a while, mobile chips in the x86 world were slow enough that they weren't constrained by heat.
As ARM chips get faster and faster, they're going to start heating up, and the constraints of being able to cool them become more and more important. I realize that there are architectural differences somewhat in ARM's favor for the class of workloads an ARM chip is expected to run vs. an x86 processor, but you're still eventually going to run into the heating issue.
If for no other reason, fully discharging a Li-Ion battery in 1 minute will probably cause it to catch fire.
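To put the one-minute discharge in perspective, here is the arithmetic for a hypothetical 1500 mAh cell (the capacity and safety limit are assumed round numbers):

```python
# C-rate needed to drain a hypothetical 1500 mAh Li-ion cell in one minute.
# 1C drains the cell in one hour; consumer cells tolerate only a few C.

CAPACITY_AH = 1.5   # assumed 1500 mAh phone battery
SAFE_C = 2          # rough, assumed continuous-discharge limit

def c_rate(runtime_min):
    """C-rate required to fully discharge in the given number of minutes."""
    return 60 / runtime_min

rate = c_rate(1)              # C-rate for a one-minute discharge
current = CAPACITY_AH * rate  # corresponding current from the cell

print(f"{rate:.0f}C, i.e. {current:.0f} A from a phone-sized cell")
```

That is roughly 30x the assumed safe limit, hence the fire risk.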
djgandy - Friday, June 3, 2011 - link
I fail to see the innovation you speak of. I see slapping more cores on and upping clock rates with die shrinks. It looks like it is mirroring the x86 market, but about 7 years behind.
Remember, smartphone chips are also getting larger because their performance is seen as more important. You can't compare phones from 5 years ago with phones of today, for this reason alone.
sleepeeg3 - Thursday, June 2, 2011 - link
You can just hear that battery life being sucked away! I think they are trying to convert mobile phones back to corded. You will need the cord constantly plugged in just to keep it charged! Do you think we will get to use those nifty spiral-wound cords again?
So now we will be able to... preview a demo of UT3 on our keyboardless cell phones? I don't get it. I would be much more excited to hear that the new phones lasted a month without a charge. That would be awesome.
Veerappan - Thursday, June 2, 2011 - link
"2.5x overall graphics performance increase; support for DirectX, OpenGL ES 2.0, OpenVG 1.1, and OpenCL 1.1"
OpenCL support on a mobile GPU could introduce some nice possibilities.
erple2 - Tuesday, June 7, 2011 - link
True, but I think it's more interesting from the perspective of the giant, massively parallel ARM-based "servers" that are planned. I wouldn't see that as viable for anything on a mobile device (yet).
jessie320 - Monday, July 11, 2011 - link
What is the package type? Flip-chip BGA?