A12 is not out of place once you understand that it is an A9 replacement for the mid-end. Currently dual A9 is being replaced with quad Cortex-A7 on 40nm as A7 is far more efficient and cheaper. The next faster core is the A15, which is way too large, power hungry and expensive to use at 40nm. Given that A9 is obsolete there is now a huge gap between A7 and A15, and A12 fits right in there.
You'd better call NV and let them know they're releasing an A9r4 that is obsolete already...ROFL. Just tell them to stop producing T4i and go home. While you're at it, tell it to PCMag, which I just quoted saying the T4i has top-end phone power in a $100-$200 phone.
So what does a 20nm die shrink of T4i run at? 2.7-3.0GHz in the same power envelope? Again, A12 is pointless. They got from 1.7GHz to 2.3GHz going from 40 to 28nm. It's reasonable to think even the exact same chip at 20nm would hit 2.7GHz at least, or your process needs work. Also note this little gem from the PCMag article: http://www.pcmag.com/article2/0,2817,2415515,00.as... "Unlike Qualcomm, Nvidia isn't actually allowed to design its own ARM-compatible cores, so it had to give the R4 innovations back to ARM and let competitors license them, but the 4i will be the first A9R4 chip on the market."
So everyone will get it eventually, and I'm sure by Q1 2015. T4i has what NV did first with A9R4, but they had to give the improvements back. Note the GPU is not the same; they get to keep all their own tech on that side. But everyone will get the CPU enhancements NV made. I think when A12 hit the drawing board they didn't realize companies like NV would make A9 so much better and in doing so raise everyone else too (who A12 might be aimed at: people who couldn't optimize like NV alone). But with NV having to give it all back, everyone gets it, making an A12 redundant at best. T4i die shrunk to 20nm puts A12 to shame in the same price and power envelope, unless ARM starts saying its die size is 20mm². T4i, if half the size of T4 (which is known to be 81mm² or so), is roughly 40mm², or NV has to measure their own chip again and re-report :) They should know their own T4/T4i die size right? It's amazing they got 72 GPU cores on T4 into a 10.5mm² GPU block, so 60 GPU cores is obviously smaller. I'd like to know what ARM has to pair with A12 in that size (what, 8mm² for 60 cores?). Good luck. http://www.tomshardware.com/reviews/tegra-4-tegra-... "Nvidia clearly needed to make difficult decisions in order to enable Tegra 4’s GPU in just 10.5 square millimeters of die space—less than Qualcomm’s Adreno 320, ARM’s Mali-T604, or Imagination Technologies’ PowerVR SGX554MP4."
A9R4 is NOT obsolete, and further I'd say dual A9 40nm is being replaced by A9R4 28nm QUAD+1 in a few weeks :) I think they call it a T4i ;) I don't understand your points. Do they sell phones for under $100 without subsidy with power like T4i? Next June/July we'll have T5i no doubt, along with everyone else's stuff. A12 doesn't fit well against a 20nm Kepler-based T5i and its ilk. I think most people will want T4i power in $100-200 phones this year, and whatever T5i brings next year vs. A12. This might have a place if it were coming next month vs. T4i, not late 2014/Q1 2015.
https://en.wikipedia.org/wiki/Allwinner_A31 Look at the perf in the chart of T3 vs. quad A7. Look at the prices: both about $20. Look at the raw DMIPS (12-17 for T3 is bigger than 9.1 for the A7 quad, right?). The GPU is better than Allwinner's also. T4i will make this worse by 5x on GPU, and who knows on CPU, but it's better. Pixel ability is the only thing better (but that's about dual channel and GPU, not CPU). Exynos shows the same. I don't see how this magical A7 quad can be cheaper than a T4i that is 1/2 the size of the T3 in the chart, when the T3 goes for the EXACT same cost ($20) as the Allwinner A31. Surely T4i is cheaper to make than T3 at 1/2 the die size, right? T4 is the same size as T3, as Tom's etc. report. T4i is 1/2 its size. You're not making any sense. Are you privy to info all of these people don't understand or know? Note Wikipedia states that it's partially data we have to take on Allwinner's word: "Note: Part of this chart would be AllWinner's opinion which is publicized from the Onda's press conference on December 5, 2012." So it's only true if you totally believe Allwinner. Not saying they're lying, just pointing it out, and I doubt they would undersell themselves, so if the numbers are true, I again don't get your points.
Prove A7 quad is cheaper please. Allwinner should know they cost $20, and we already know T3/exynos are basically the same $20 as the chart shows. A few bucks either way is nothing to write home about and T4i is 1/2 the size (I keep repeating this but you don't seem to get this). Worse, A12 will be facing shrinks of everything mentioned.
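For what it's worth, the DMIPS figures cited from that chart line up with simple peak-throughput arithmetic. A rough sketch, assuming the commonly quoted ~2.5 DMIPS/MHz for Cortex-A9 and ~1.9 DMIPS/MHz for Cortex-A7 and the clock speeds discussed in this thread (rules of thumb, not measurements):

```python
# Peak aggregate DMIPS = per-core DMIPS/MHz rating * clock (MHz) * core count.
# The ratings below are ARM's commonly quoted marketing figures; clocks are the
# ones discussed in this thread, so treat the output as ballpark only.
def aggregate_dmips(dmips_per_mhz, mhz, cores):
    return dmips_per_mhz * mhz * cores

tegra3_quad_a9 = aggregate_dmips(2.5, 1700, 4)     # ~17,000 DMIPS at 1.7 GHz
allwinner_quad_a7 = aggregate_dmips(1.9, 1200, 4)  # ~9,120 DMIPS at 1.2 GHz

print(f"Quad Cortex-A9 @ 1.7 GHz: ~{tegra3_quad_a9:,.0f} DMIPS")
print(f"Quad Cortex-A7 @ 1.2 GHz: ~{allwinner_quad_a7:,.0f} DMIPS")
```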
Yes, Tegra 4i would have been a really great design if it were released last year, not later this year. I'm sure it will get more design wins than Tegra 3. However, when A12 comes out there will be no reason at all to continue using A9, not even A9R4, simply because A12 beats it on power, performance and die size. That still leaves a window for Tegra 4i, but it is not going to last that long. If a Tegra 5i exists it will use A12 - what other CPU do you think it would use???
Your logic that the A12 was planned before the A15 based on the numbering is not valid. By that logic, the A7 should have been planned before the A8... Hehe, the Xbox after the Xbox 360 is... the Xbox One. If they planned the A12 long before the A15, what happened to ARM engineering? Their design team had a three-year delay (compared to the A15). If this were true, ARM should be even more worried :)
And as I said, the A15 is not the same class of core as Krait/Swift; the A12 is. And when A15/Krait/Swift move to 20nm, why can't the A9? The gap should stay the same. As I said, the only logical reason is that ARM made a wrong decision, which is causing them problems now (the invasion of Atom).
As for 20nm: you know, back in 2009 TSMC said 28nm production would be ready during Q2 2010, and the first 28nm mobile chip came in 2012. So good luck with that expectation :)
You're very confused. What I'm saying is that the number used in Cortex cores indicates relative performance. The gap between A9 and A15 means that there has been space reserved for a CPU that fits in the middle - the A12. That doesn't imply at all that the design of A12 started before A15, just that a successor of the A9 was planned a long time ago.
No, A15, Krait and Swift are the same class: 3-way fairly aggressive OoO cores. A9, A12 and Silvermont are 2-way, limited OoO and thus in the same class. Silverthorne and Cortex-A8 are both 2-way in-order so in the same class.
It looks like TSMC has learned from 28nm debacle so I don't expect 20nm will be as troublesome. They seem to have invested a lot more effort into it than before.
The A9 is old and obsolete, so it cannot compete - even at 20nm: the A7 is almost as fast and far more efficient, so why would anyone use A9? Given that A9 will disappear, the gap between A7 and A15 will only increase on 20nm, so there is a need for something to fill it.
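The grouping in the comment above, written out as a small lookup table; the buckets are this commenter's claims about issue width and out-of-order capability, not vendor documentation:

```python
# Core classes as described in the comment above (claims from this thread,
# not vendor specs).
CORE_CLASSES = {
    "3-wide, aggressive out-of-order": ["Cortex-A15", "Krait", "Swift"],
    "2-wide, limited out-of-order":    ["Cortex-A9", "Cortex-A12", "Silvermont"],
    "2-wide, in-order":                ["Silverthorne (original Atom)", "Cortex-A8"],
}

def same_class(core_a, core_b):
    """True if both cores sit in the same width/OoO bucket above."""
    return any(core_a in cores and core_b in cores for cores in CORE_CLASSES.values())

print(same_class("Cortex-A12", "Silvermont"))  # True under this grouping
print(same_class("Cortex-A15", "Cortex-A12"))  # False under this grouping
```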
The 3-way issue isn't the only parameter that affects the design; Atom uses an in-order design, but that didn't make it the same level of chip as the original Pentium.
The core is more or less a black box to the outside world, differentiated only by its performance and power consumption. And in this case, the A15 is about twice as fast and more than twice as power hungry as the A9. Atom, Krait and Swift are within the same power envelope as the A9 (though of course the performance gain isn't free; they are a little more power hungry than the A9). So from a performance and power consumption point of view, the A12 is in the same category as Krait, Swift and the current-gen Atom.
The A7 is actually worse than or comparable to the A8 from a performance point of view, but with better power efficiency and less area.
You are right that the A12 is designed to replace the A9. However, to me, ARM should have planned the A12 years ago instead of the A15. There is nothing from ARM to compete with Atom, Krait and Swift on the market until the end of 2014. And the A15 is clearly targeting a market that doesn't exist; only throttled-down versions of it appear in a limited number of designs. Meanwhile, Krait and Atom are eating ARM's cake.
Maybe you are right that the A15 could be fine on 20nm technology. But that also means ARM designed and introduced a product in 2012 which can only be useful after 2014 (and that's very optimistic about TSMC's 20nm technology), meanwhile giving up the 2012-2014 market to Atom, Krait and Swift simply because it had no valid product. Doesn't that sound like a problem?
You can't classify CPUs just by power consumption/frequency. For example Swift is only low power because it is run at a low frequency. Clock an A15 at 1GHz, and it would equal Swift on performance while using similar or even less power than Swift. Similarly Krait is pretty efficient around 1.5GHz but uses a lot more power when trying to keep up with A15 at ~2GHz.
Claiming ARM has no valid product for the next 2 years is ridiculous. I really don't see the huge problem you are imagining. A15 is ARM's flagship and it is clearly able to compete with Swift, Krait and Silvermont, so you'll see it in many devices including lots of phones. Yes, it would have been better if A12 was released a year earlier - A9 is really near the end of its life, however it isn't needed to replace A15.
TSMC's 20nm will be in production this year. Even if you are pessimistic about it and think they will be at least 12 months late, it means devices by mid 2014 or around the same time Silvermont will appear in mobiles.
It was already said there, and updated by Anand, that it is the other way round. The A15 runs at a higher speed, which is much more logical (and you can find it in plenty of other sources online).
I also think that the A12 is a great SoC...if it were available today. It seems to be shooting for the 'best of both worlds' route that Krait has currently taken, with both decent power consumption and performance. It'll just be woefully late to market...and I also can't help but think that the A53 will be very close to market as well presumably by the time this is out...Shouldn't be far below the A12 in performance either, yet maybe a fair bit more efficient (not to mention 64 bit) so it might render it obsolete already. We'll see.
But in this case, with the A15 running at 1.8GHz, it basically means the A15 is slower than the current-gen Atom in IPC. It takes four A15 cores running at 1.8GHz to compete with two Saltwell cores running at 2GHz or four Krait cores running at 1.9GHz (the benchmarks of the S4 and the Lenovo K900). And this is extremely odd. Even the dual-core A15 in the Nexus 10 can outperform the quad-core A15 at 1.8GHz in lots of benchmarks. Does that mean that, in the A15's case, more cores means less performance?
No, A15 is clearly much faster than Atom clock for clock (eg. check the link I posted), so I don't see how you can think Atom could possibly compete? Note some S4's use 1.6GHz A15, others use 1.8GHz A15, yet others use 1.9GHz Krait (confusing I know...). But where does it show dual A15 outperforming a quad A15???
Please!!! Check the comparison between the Galaxy S4 (Octa), Galaxy S4 (Krait), Nexus 10 and Lenovo K900. I am talking about overall performance. You can see that the quad-core A15 (Octa) is barely faster than a dual-core Atom running at 2GHz. The only possible reason for this is the A15 running at 1.2GHz instead of 1.8GHz; OTHERWISE, even running at 1.6GHz, the A15 would be slower in IPC than the 5-year-old Atom. Do you have any other logical explanation?
A15 in Exynos Octa does most definitely run at 1.6/1.8GHz, not 1.2. If you get different results then there is something wrong with the benchmark - that is the only possible explanation.
Anyway which comparison are you talking about exactly - link? I am looking at Geekbench results and those clearly show A15 running at 1.6GHz and beating K900 by a huge margin. Eg. http://browser.primatelabs.com/geekbench2/compare/...
In AnTuTu, the S4 (Octa) is barely faster than the K900 and the S4 (Snapdragon). This could only happen if the A15 is dramatically underclocked compared to its counterparts.
That's not a CPU benchmark. Antutu does all sorts of stuff unrelated to CPU performance such as SD card read/write speed and GPU tests, and is Java rather than native compiled code. It happens to be one of the most cheated benchmarks (both by users and device makers adding special "optimizations" to show a better score). In short, Antutu scores mean absolutely nothing.
Yeah, right. You're talking about mainly single-threaded tests, where A15 wouldn't have much of an advantage. In tests like Octane, A15 has a huge advantage over Atom. And Anand has already shown it with the Chromebook review, too.
Wait until the quad core Atoms, if they ever even arrive in smartphones, and we'll see how they stand then. But prepare to not be so impressed by Intel.
The Cortex A12 is the successor to the Cortex A9, not something in between the A7 and A15. The Cortex A15 was not the successor to the A9; it used far more power. This uses less/the same amount of power and is much faster.
But why waste time developing a new chip, when by the time it comes out the power-monger A15 will be on 20nm, removing the power-monger problem? :) This chip is moot next to a 20nm A15 etc. I'd rather have a die-shrunk A15's power in Q1 2015. This is way too late. We don't want to go backwards, I want them to die-shrink their way forward....ROFL.
Again A12 is not a replacement for A15 but for the old A9. Billions of Asians cannot afford a Galaxy S4 or iPhone, and current low-cost devices for these markets use Cortex-A7 on a 40nm process. A15 and Krait are too expensive, A9 is obsolete, so there is a huge gap. The A12 fills it. It's as simple as that.
Egg - Sunday, June 2, 2013 - link
The text is tiny. Also..."In exchange for 5x the area and 6x the performance, the Cortex A15 offers under 4x the performance." what?
lmcd - Sunday, June 2, 2013 - link
6x power.
4lpha0ne - Monday, June 3, 2013 - link
That is the typical scaling effect of complexity (look up Pollack's rule) and power (roughly cubic increase in power with linear increase in clock frequency). There's no free lunch. ;)
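A minimal sketch of the two rules of thumb being invoked here; both are approximations rather than measurements of any particular core, and the 5x area figure is the one quoted from the article earlier in the thread:

```python
import math

def pollack_speedup(complexity_ratio):
    """Pollack's rule of thumb: performance grows roughly with sqrt(area/transistor count)."""
    return math.sqrt(complexity_ratio)

def dynamic_power_ratio(freq_ratio):
    """P ~ C * V^2 * f; if voltage scales roughly with frequency, power grows ~ f^3."""
    return freq_ratio ** 3

# A core ~5x the area of a Cortex-A9 would be expected to deliver only ~2.2x the
# performance, and a 1.5x clock increase would cost ~3.4x the dynamic power.
print(pollack_speedup(5.0))      # ~2.24
print(dynamic_power_ratio(1.5))  # ~3.38
```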
lmcd - Tuesday, June 4, 2013 - link
You understand that I understand, right?
MrSpadge - Monday, June 3, 2013 - link
Tiny text: welcome to the beauty of high ppi devices. We need even more pixels...
jbo5112 - Tuesday, July 16, 2013 - link
Tiny text: welcome to the world of foolish people defining non-raster (non photo/movie) sizes in px, instead of something like pt, in, or cm that automatically scales.
A5 - Sunday, June 2, 2013 - link
Interesting. Hopefully they are planning a process shrink on A15 before then to help with the power some.
These time frames would also seem to leave a pretty big window open on the high-end for Intel and/or AMD if they can execute properly. We're expecting a rev of Silvermont and Jaguar next year too, correct?
twotwotwo - Monday, June 3, 2013 - link
ARM doesn't set a process roadmap like Intel does--folks like TSMC, GlobalFoundries, and Samsung do the fabbing, so they determine what process they'll introduce and when.
Not to say ARM doesn't have a role optimizing for new processes as they come out; this story says they offered "optimized packages" for a couple of today's processes (though not Samsung's--Samsung might optimize for their own proc). They're just not driving the process (ha!) as Intel does.
Krysto - Monday, June 3, 2013 - link
We should see Cortex A15 at 20nm next year (probably in Tegra 5, and Exynos 6), and then Cortex A57 at 14nm in 2015.
TheJian - Tuesday, June 4, 2013 - link
Yes, so I don't understand the point of this chip for xmas 2014/2015Q1 devices. I will want a T5/Mali678/A340/rogue7 (whatever all these & more will be named) etc etc at 20nm by then. I guess this will be for feature phones by then :) I will expect much more from a smart device at xmas 2014 (or after in 2015 even worse). This seems to be about the perf of a T4i but a year later (likely with better power - but what about a T5i next July which is surely coming for mainstream?) or am I missing something here? Devices in 2015? I'm confused. Power of a 2013 xmas mainstream device (t4i to me, which is aimed at $200 phones last I checked) but not until 2015. Umm....Ok....LOL. I may have to take back the power comment, both fabs mentioned are running these on 28nm? Ok, a T5i (or whatever) makes these pointless if on 20nm. Again, even more confused. What the heck is any 28nm chip good for in 2015 for mobile? I can see from reading some of the other posts I'm not the only one confused.
Wilco1 - Tuesday, June 4, 2013 - link
Yes like many on here you seem confused. This is not a high-end chip, it's simply a faster and more efficient replacement of Cortex-A9. You know, not everybody in the world can afford an S4 or iPhone. Check out the cheap devices based on Allwinner, MediaTek and Qualcomm SoCs which are selling well in Asian markets. They use 65nm and 40nm processes, and some still use Cortex-A8! Why? Because it's far cheaper to use old processes and old cores.
Currently the quad Cortex-A7 at 1.2GHz is replacing the old Cortex-A9 as the low-cost top-end at 40nm (as Cortex-A7 is far more efficient and cheaper). Cortex-A12 on 28nm in early 2015 fits perfectly in this large market of cheap devices made on old processes. The A15 won't be used in this market until it is obsolete and cheap.
Note Cortex-A12 is faster than A9R4 as used in Tegra 4i (A9R4 is ~15% faster, A12 is 40% faster than A9).
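Taking those two per-clock figures at face value, the implied gap between A12 and A9R4 works out as follows (a quick calculation using only the numbers claimed above):

```python
# Both figures below are relative to a baseline Cortex-A9, per the claim above.
a9r4_vs_a9 = 1.15  # A9R4 (as used in Tegra 4i): ~15% faster than A9 per clock
a12_vs_a9 = 1.40   # Cortex-A12: ~40% faster than A9 per clock

a12_vs_a9r4 = a12_vs_a9 / a9r4_vs_a9
print(f"A12 vs A9R4 at the same clock: ~{(a12_vs_a9r4 - 1) * 100:.0f}% faster")  # ~22%
```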
TheJian - Wednesday, June 5, 2013 - link
Umm, you must have missed the fact that T4i is aimed at $200 phones in...wait for it...ASIA, thus something coming a bit faster FAR later is pointless. It will be facing a T5i which I'm sure will make up the difference in anything A12 adds, right? I missed nothing, neither did anyone else. T3 as sold to Google & MS was only $23 roughly. How much cheaper do you think a chip needs to be? At these prices (and T4i is smaller, as T4 full is only 81mm² which is about the same as T3's 80mm²), you can't get much cheaper, and T4i should easily be able to sell for the same, and we're not talking the savings of 20nm this A12 will be facing next year from everyone, not just NV.
Are you trying to tell me the difference between a $600 phone and a $200 phone can be made up by making a $23 SoC cheaper on old process tech? ROFL. OK man. Whatever. Even if NV gave the SoC away free, the difference is $23. Do you think they hand you $100 AND give you the A12 free for using it? You are not going to magically enter untapped markets etc over $23. You are making a ridiculous argument I'm afraid. It would perhaps be a good one if you could show me the price of an A12 is less than $23 (first) and NV was charging $200 for a SoC (second). Then I would get your point. But at $23 for T3, I'm guessing the BEST A12 can do on a LARGER process node than T5i etc is a few bucks off, tops. Even if I am TOTALLY generous here and give you a $10 advantage (this assumes they can make it for $13 and have a 40-50% margin AT that price...LOL), you are not magically getting into cheaper markets...We are talking $10, and there is no way I believe it will be FAR cheaper to produce an A12 at 28nm than whatever cheapo units everyone makes at 20nm. If the max cost is around $23 now, it's minuscule differences we're talking here, not hundreds of dollars. T4i is NOT a high-end chip...T4 is.
http://gigaom.com/2013/02/19/nvidia-launches-its-q...
"The Tegra 4i is part of what will now be a family of Tegra 4 smartphone chips, with the 4 aimed at high-end phones and tablets and the 4i aimed at phones in the $100 to $200 range."
I'm actually giving you a great benefit saying it's aimed at $200 phones. Clearly I can find sites saying $100-200. How cheap do you think A12 is aimed at? Again, I missed NOTHING.
http://www.pcmag.com/article2/0,2817,2415515,00.as...
PC magazine says UNDER $200 for T4i. Are we done here? All clear?
"Tegra 4i has that nailed down. It's half the size of competing chipsets like the Tegra 4 and Qualcomm's Snapdragon 800, and according to Nvidia it's up to 2.5 times as efficient per square millimeter, which means it's able to deliver the performance we think of from top-end phones in devices that will cost $100-200 without any subsidy."
So even PC Mag says $100-200. I'm not sure where they get that T4i is 1/2 the size of T4, but I know it is vs. S800. I guess we'll have to wait to see if T4=S800 (I thought S800 was a good bit bigger than T4 full). Either way, the point is YOU are the one who seems confused. Unless S4 and iPhone are suddenly selling for $100-200 without contract subsidy? I must have missed that. I'm running off to buy a $200 S4 now... /sarcasm. We'll see how big A12 is in the end, but you are not going to knock much off of $23 already. I guess you're just having trouble with simple math and economics :) T4i will do just as well in those $100-200 markets (unlike T3) because it now has a MODEM. This is the only thing that kept it from competing before in phones. Well, it didn't perform like a T4i either...LOL. But you get the point. It was modem-less, which caused most to look at Qcom instead. That will be different this time.
"R4 has 15-30 percent higher performance per clock cycle than the A9s used in chips like the Nvidia Tegra 3, and can be cranked up to even higher clock speeds;"
You need to read more...the PCMag article said that too. So 15-30%, and that's BEFORE upclocking it. A 2.3GHz T3 would be impressive vs even a 1.4GHz T3 from last year (or even the 1.7GHz T3+). But add in the 15-30% and 5x the GPU cores. I don't see the point in A12, as many others here don't either. To get into $100-200 phones you aren't selling the T4i at more than $25, right?
This is straight from NV's Matt Wuebbling (he's the guy I'm quoting from PCmag article), the director of Nvidia's mobile business. He should know where they are aiming their T4i right?
Whatever...
Wilco1 - Wednesday, June 5, 2013 - link
I totally don't get your huge rant about Tegra 4i. We are talking about A12 and what markets it will be used in, which has nothing to do with Tegra 4i. The 4i is an interesting chip and I hope it will be successful, but A12 will clearly surpass the A9R4 in every aspect when it comes out. If there will be a Tegra 5i, then I bet it will be based on A12.
Tegra 4i may well go after the Asian markets, but at ~$23 Tegra 3 is way too expensive for the $50-$100 devices that Allwinner, Mediatek etc are selling. Yes, the SoCs in those are in the $5-$10 range. I don't believe the 4i can go that low.
speculatrix - Wednesday, June 26, 2013 - link
@Wilco1 thanks for that. Is there a simple chart showing the various ARM-derivative processor architectures with their performance and performance/watt in a table normalised to a "standard" process/geometry, along with the best performance that each variant can manage on the best fab's process?
michael2k - Sunday, June 2, 2013 - link
What's the fundamental difference between an A12 and Apple's Swift architecture?
http://www.anandtech.com/show/6330/the-iphone-5-re...
There's a similar 40% perf improvement in some cases and there are also memory improvements.
cmikeh2 - Monday, June 3, 2013 - link
Swift has a 3-wide out-of-order front end whereas the A12 is dual-issue.
tuxRoller - Monday, June 3, 2013 - link
Possibly the fully OoO pipeline in A12 to make up for the frontend.
fteoath64 - Monday, June 3, 2013 - link
Good question! It goes for the Tegra 4i as well, which optimizes with dual-channel memory and enhancements to the tune of 30% over A9. The issue here is that the reference A9 has been improved by the likes of Apple, Nvidia and maybe others, so the gap from A9 to A15 needs to be plugged as the shift to A15 has slowed for power reasons. A12 is a great fit and there is market demand due to "good enough" graphics on tablets/phones today.
WaltFrench - Monday, June 3, 2013 - link
This sounds like, “the A12 presents similar types of enhancements to what Apple & nVidia have been implementing, just a couple of years later.” Or am I missing something?
lmcd - Tuesday, June 4, 2013 - link
Nope. There's probably more work done, though, since it sounded like there are more base improvements. And Nvidia never made that performance claim IIRC, which makes sense as the A9 upgrades (note: which were done by ARM, not Nvidia) were incremental enough that ARM didn't designate a new processor name, like they did here.
A12 will beat A9r4 or w/e, but not Swift (most likely).
The other main point is it's big.LITTLE compatible with A7 and A15. So it must be fundamentally different than the A9 and add a few things to maintain that compatibility.
kpal12 - Tuesday, June 4, 2013 - link
The Cortex A9 was an older-generation core, which couldn't match the Cortex A7 in power or the A15 in performance. The A7 was about the same speed. The A12 is a newer-generation core that does not consume a crazy amount of power and is still fast.
sherlockwing - Monday, June 3, 2013 - link
Interesting that ARM is planning to fill the gap between two 64-bit 20nm ARMv8 parts (A57/A53) with a 32-bit 28nm ARMv7 part (A12).
Death666Angel - Monday, June 3, 2013 - link
*40-bit part, not 32.
lmcd - Tuesday, June 4, 2013 - link
Uhh it's between A15 and A7 anyway, so...
rwei - Monday, June 3, 2013 - link
Ok, so this comes in *below* A15.
What comes *above* A15? Anything? Or is Silvermont going to run roughshod over ARM performance-wise for the foreseeable future?
Krysto - Monday, June 3, 2013 - link
The 64-bit Cortex A57.
sherlockwing - Monday, June 3, 2013 - link
It is written clearly in the last slide: Cortex-A57 (20nm, 64-bit) will be the A15 successor and will compete with Silvermont.
Wilco1 - Monday, June 3, 2013 - link
A57 is faster than Silvermont, and A15 most likely is as well (as A15 and Jaguar are similar performance).
lmcd - Tuesday, June 4, 2013 - link
Silvermont is likely to be comparable with A15, if not faster.
Wilco1 - Tuesday, June 4, 2013 - link
What exactly would make it faster? The Silvermont microarchitecture is more like A9 than A15. It has just 2-way decode and one load/store unit, vs 3-way decode and two load/store units for A15.
Klimax - Tuesday, June 4, 2013 - link
That's not everything...
lmcd - Tuesday, June 4, 2013 - link
Also, isn't Jaguar faster than A15 by a pretty significant bit? Especially in floats and doubles, as noted by the recent piece on the matter.
Wilco1 - Tuesday, June 4, 2013 - link
No, Jaguar is slightly faster on integer but on FP A15 beats Jaguar by a good margin: http://browser.primatelabs.com/geekbench2/compare/...
kpal12 - Tuesday, June 4, 2013 - link
Cortex A15 ----> Cortex A57
Cortex A9 ----> Cortex A12 -----> Cortex A55?
Cortex A7 ----> Cortex A53
Homeles - Monday, June 3, 2013 - link
Maybe I'm missing something, but isn't late 2014 too late for A12 to be relevant? Silvermont will not only be outperforming it by a wide margin, but it will also have already undergone a node advancement and will have appeared in the form of Airmont. Even at 20/22nm, A12 is not going to look great against a 14nm Atom. The Silvermont architecture's successor, presumably coming less than a year later, will only exacerbate the issue.
A57 and A53 may very well be more competitive, but A12 just doesn't sound even remotely useful with that launch time frame. Won't A53 already be encroaching on A12's performance?
I suppose an A12/A7 or A12/A53 pair would make perfect sense for smartphones, but Atom will look even better...
Homeles - Monday, June 3, 2013 - link
I do suppose A12 would be a nice toy for Qualcomm and Apple to play with, though.
michael2k - Tuesday, June 4, 2013 - link
Apple has Swift and Qualcomm has Krait, both of which were released in 2012 and are already similar to A12; the A12 is for the Rockchips and Mediateks who don't have the budget to develop their own SoC so that in 2014 you can get $200 smartphones with the power/battery of an iPhone 5. Apple and Qualcomm will have a 20nm part more similar to the A15 or A57 at that point.
Your point wrt Atom is also behind since Atom today is barely competitive with Swift/Qualcomm, and the next gen Atom won't be out until 2014, competing against 20nm A15 parts.
Krysto - Monday, June 3, 2013 - link
22nm Silvermont won't appear until the first half of 2014 in smartphones, and 14nm won't appear until 2015. Also, 22nm Silvermont will be competing with the 20nm Cortex A15 and 20nm-whatever-Qualcomm-comes-up-with next year.
Homeles - Monday, June 3, 2013 - link
Atom is going to be the lead uarch on Intel's 14nm process.
Krysto - Tuesday, June 4, 2013 - link
Maybe, but it still won't be available in phones until 2015. Do you really think they'd release it in mid-2014, if they are barely launching Silvermont in phones in Q1-Q2 2014? At the very earliest it will be November-December 2014, but my bet is still on 2015.
TheJian - Tuesday, June 4, 2013 - link
Agreed, considering the slide leaked a while back shows Nov15-Jan15 for production, how can you get into an xmas device in 2013? It takes 4.5+ months after an OEM gets a chip to make a device around it (only Google has done it this fast so far). Heck, if memory serves, that slide showed QA not even happening when they'd have to be SHIPPING to OEMs for xmas 2013 devices. I'll be fairly shocked to see Silvermont in a phone before late Q1 (if it makes it, they must have updated the roadmap without it leaking). Is Silvermont now shipping to device makers in Aug? I must have missed that news if so. This is why it was so important for T4/T4i to make it out the door ~early July. You can possibly get into Black Friday stuff then...LOL, which is basically like Christmas part 1, right? :)
psychobriggsy - Monday, June 3, 2013 - link
This product is presumably intended to replace the A9 in mid-range SoCs, which are the bulk of the market, and will still be relevant for several years to come in low and mid-range phones and other devices. No die sizes given though, probably a bit bigger than the A9 on the same node, but a lot smaller than the A15.
I presume that ARM are also working on an as-yet-unannounced mid-range 64-bit part, probably called the A55, to go alongside the A53 and A57.
And until I see a review of Silvermont I won't be assuming that it is the be-all-and-end-all of mobile cores, unlike plenty of other people.
Krysto - Monday, June 3, 2013 - link
I believe you are right. Because 32-bit in mid-late 2014 doesn't make any sense. This feels like a stop-gap chip, and ideally they should've released the A53, "A55" and A57 at the same time, but it will probably arrive a year later - mid-2015 - and be 10-15% faster than A12.
Wilco1 - Monday, June 3, 2013 - link
What makes you think Silvermont will outperform the A12? We don't know until we see actual benchmarks, but my guess is their performance is similar. I don't see how Silvermont could outperform either Jaguar or A15 (let alone A57!) given it has just a single load/store pipeline and limited memory reordering.
Homeles - Monday, June 3, 2013 - link
"What makes you think Silvermont will outperform the A12?"
Saltwell cores already are pretty close to A15.
"I don't see how Silvermont could outperform either Jaguar or A15 (let alone A57!) given it has just a single load/store pipeline and limited memory reordering."
Clock speed. Intel's FinFETs are a huge advantage to them. Silvermont is supposed to come in at 2.0-2.4GHz (for dual core, at least). A 2.4 GHz Saltwell would be on the level of A15; multiply by the 50% IPC gains that Silvermont brings, and Intel's lead is clear.
Jaguar is in a totally different class on the high end of its spectrum, but Silvermont should do pretty well against it at the TDPs they compete in. Jaguar isn't quite cut out for tablets. Here, Intel's overwhelming frequencies will be what makes it competitive -- most Temash designs have a clock speed of 1GHz. The A6 Temash model does look very good, though.
Perhaps we'll see just how well Temash does in the near future...
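Spelling out the frequency-times-IPC argument from a couple of paragraphs up: both inputs are this commenter's premises rather than benchmark results, and the reply below disputes the first one.

```python
# Premise 1: a 2.4 GHz Saltwell is roughly at the level of a Cortex-A15.
saltwell_at_2p4_vs_a15 = 1.0
# Premise 2: Silvermont brings ~50% higher IPC than Saltwell at the same clock.
silvermont_ipc_gain = 1.5

silvermont_at_2p4_vs_a15 = saltwell_at_2p4_vs_a15 * silvermont_ipc_gain
print(silvermont_at_2p4_vs_a15)  # 1.5x A15 -- but only if premise 1 actually holds
```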
Wilco1 - Monday, June 3, 2013 - link
Where is Saltwell performance close to A15 even with a 25% clock advantage? http://browser.primatelabs.com/geekbench2/compare/...
The performance gap both single and multithreaded is just ridiculous. And these are current phone models...
As for clockspeed, A15 is currently at 1.9GHz, and the move to 20nm should give a good frequency boost, so A15 will have similar top frequency as Silvermont. However A15 will likely have a decent IPC advantage (given the gap with current Atom is huge) and thus will still outperform Silvermont when running at a lower frequency.
Jaguar certainly can't compete at lower TDP/clocks so in tablets A15 will prevail.
Wilco1 - Monday, June 3, 2013 - link
Here is what ARM says about A15 vs Silvermont: http://www.pcworld.com/article/2040582/arm-claims-...
tech4real - Monday, June 3, 2013 - link
Comparing a top-of-the-line Galaxy S4 versus an old single-core Saltwell Medfield isn't going to say much about how good A15 is core-for-core, or about iso-power performance against the upcoming Silvermont. And I don't even know how much the 2GB RAM vs 1GB RAM difference is going to favor the S4 in these tests. Also, as I mentioned before, you can easily fall into the pitfall of doing a bad job of assessing processor/SoC perf and power if you are not careful about your test methodology.
A15 at 1.9GHz? What's the power burn at that freq? How long does it have to throttle back to 600MHz or less to cool down the phone? There are reports showing the octa-core S4 throttles itself badly after maybe 30 seconds of use.
The 20nm move? This will take a while, and it's interesting to see the power/perf and yield on TSMC this time. Everyone can paint pies in the sky; sometimes it shows up more or less without big hiccups, sometimes it can take a really long time (TSMC 28nm HPM, anyone?).
Wilco1 - Monday, June 3, 2013 - link
I don't believe dual core Atom phones are out yet, but Geekbench lists single-threaded results so you can compare core for core, and Atom comes out woefully bad against A15. RAM doesn't make a difference as there is no IO and Android doesn't do paging.
Samsung claims about 5.5W for quad A15 at 1.8GHz. I haven't seen any reports that S4 throttles itself badly - Anand explicitly reported that he was only able to throttle it in one benchmark when running a set of benchmarks.
20nm is already ramping up, the rumour is that Apple will be the first customer. I'd certainly hope TSMC has learnt from the slow 28nm introduction.
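Quick arithmetic on the 5.5 W figure quoted above, plus a crude estimate of what lower clocks would cost using the roughly-cubic power rule mentioned earlier in the thread. The 5.5 W number is Samsung's claim as cited here; everything else is an illustration, not a measurement.

```python
# Samsung's claimed figure for quad Cortex-A15 at 1.8 GHz, as quoted above.
quad_a15_watts = 5.5
print(f"~{quad_a15_watts / 4:.2f} W per A15 core at 1.8 GHz")  # ~1.4 W

# Very rough lower-clock estimates, assuming dynamic power ~ f^3
# (voltage tracking frequency); real silicon won't follow this exactly.
for ghz in (1.2, 1.5, 1.8):
    estimate = quad_a15_watts * (ghz / 1.8) ** 3
    print(f"Quad A15 at {ghz} GHz: ~{estimate:.1f} W (crude estimate)")
```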
wsw1982 - Monday, June 3, 2013 - link
Check the Lenovo K900, please.
Wilco1 - Monday, June 3, 2013 - link
Ah you're right, K900 results just appeared - still terrible vs S4: http://browser.primatelabs.com/geekbench2/compare/...
Krysto - Tuesday, June 4, 2013 - link
What are you talking about? Old Atom is not even CLOSE to A15 in performance. At the same clock speed, A15 has at least 50% performance advantage.
wsw1982 - Monday, June 3, 2013 - link
The A12 sounds really the same as Qualcomm's Krait and Apple's Swift: up to 40% faster with the same power consumption. I have no idea why ARM is introducing it, as it's two years later than the counterparts from Qualcomm and Apple and targets the same market. And as for Atom, the current-generation Atom (a 5-year-old architecture) is already comparable to Krait, Swift and A15 in performance/watt on 32nm technology.
To me, ARM aimed too high with the A15, which was simply not designed for phones/tablets. Keep in mind, the A15 without throttling is an 8W monster compared to the 2W-plus SoCs used in either tablets or smartphones. ARM was either too confident about the A9, or misled by Intel's MID concept. In either case, the A12 sounds like making up for that wrong decision, just a reinvention of the wheel relative to Krait and Swift. Is this the reason it cost them their CEO?
Wilco1 - Monday, June 3, 2013 - link
A12 is effectively an improved A9, and quite different from Krait and Swift, which are far more similar to A15. Note that A15/Krait/Swift are a lot faster than A9, while A7 is quite close in performance despite using less power. So A9 is no longer competitive, and the A12 will take over the mid range. It would indeed have been better for it to arrive earlier, but there are only so many OoO CPUs one can design per year...
The Korean Galaxy S4 uses an Exynos Octa which consumes 5.5W at 1.8GHz. That is fine for mobiles and tablets; for example the power is similar to Tegra 3 while providing more than twice the performance. For typical usage you rarely need that amount of performance, let alone for long periods, so looking only at max TDP is misleading.
What the introduction of yet another new CPU has to do with the CEO is beyond me. The naming of A15 and A9 suggests the mid-range A12 was planned a long time ago (Cortex numbering indicates relative performance).
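To put rough numbers on the "max TDP is misleading" point: for a fixed task, energy is power times time, so a part that finishes twice as fast at a similar peak power burns about half the energy and then idles (race to idle). The figures below are purely illustrative, not measurements:

def energy_joules(power_w, seconds):
    # energy for a fixed task = average power * time to finish it
    return power_w * seconds

slow_chip = energy_joules(5.5, 10.0)  # 10 s at ~5.5 W  -> 55 J
fast_chip = energy_joules(5.5, 5.0)   # same task in 5 s -> 27.5 J

print(fast_chip / slow_chip)          # 0.5: same peak power, half the energy per task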
tech4real - Monday, June 3, 2013 - link
My guess is that it's because A15 on the current TSMC/GloFo processes is seriously overshooting its original power target, and this leaves a huge gap between A15 and A7 for other vendors (Apple/Qualcomm/Intel) to exploit. What makes matters worse is that the gap happens to sit in the most important power range for high-end smartphone and fanless tablet design points, so ARM has to deliver some stopgap solution.
wsw1982 - Monday, June 3, 2013 - link
No, Krait and Swift are in the same performance range as the yet-to-arrive A12, which is at most 50% faster than A9. The A15 is another story: it's much faster than both Krait and Swift, but consumes much more power as well. There are Anand's reviews of the A15-powered Nexus 10 - you could check those results. To me, the A15 is fast but not designed for phones or tablets. The only explanation could be that ARM thought the A9 was fast and good enough to secure the mobile market, and therefore aimed the A15 at something bigger (servers? MIDs? netbooks?). This has caused trouble now, because apparently the A9 is too weak and the A15 is too power hungry to defend the mobile market, which you can see at Computex this year. And therefore, with no other choice, ARM must introduce yet another core, the A12, to copy the success of Krait, Swift or even the current-gen Atom. The A12 is apparently a reinvention of the wheel and has a huge time-to-market disadvantage :) This looks like a decision made by the CEO himself, which put ARM in today's awkward position. It's the most logical conclusion I can come to. And if my analysis is right, don't you think this bad decision would cost the CEO his job?
Wilco1 - Monday, June 3, 2013 - link
By the time A12 comes out, Krait/A15/Swift will be much faster and on 20nm, so the gap between A9 and Krait/A15/Swift will only widen. The issue is that A9 is getting old and no longer competitive, not that A15 is too fast or power hungry (A15 will appear in lots of mobiles). In 2014 nobody will use A9 - A7 is as fast and far more efficient (you already see Qualcomm, Allwinner, MediaTek etc. using A7 rather than A9 in mid and low-end designs).
As I said before, it's clear the A12 was planned a long time ago, so your analysis is completely incorrect. A12 is not trying to be another Krait or Swift at all (A15 is already faster and A7 already more efficient); it's simply replacing the old A9.
TheJian - Tuesday, June 4, 2013 - link
Yes, at 20nm the complaints about A15 go away even if you just shrink the same crap and do nothing else but upclock to whatever your power target is for your device. I'm sure everyone will up the gpu some, but this chip seems out of place vs. 20nm everything.
Wilco1 - Tuesday, June 4, 2013 - link
A12 is not out of place once you understand that it is an A9 replacement for the mid-range. Currently dual A9 is being replaced with quad Cortex-A7 on 40nm as A7 is far more efficient and cheaper. The next faster core is the A15, which is way too large, power hungry and expensive to use at 40nm. Given that A9 is obsolete there is now a huge gap between A7 and A15, and A12 fits right in there.
TheJian - Wednesday, June 5, 2013 - link
You'd better call NV and let them know they're releasing an A9R4 that is obsolete already...ROFL. Just tell them to stop producing T4i and go home. While you're at it, tell it to PCMag, which I just quoted as saying it has top-end phone power in a $100-$200 phone.
So what does a 20nm die shrink of T4i run at? 2.7-3.0GHz in the same power envelope? Again, A12 is pointless. They went from 1.7GHz to 2.3GHz moving from 40nm to 28nm. It's reasonable to think even the exact same chip at 20nm would hit at least 2.7GHz, or your process needs work (rough numbers sketched at the end of this comment). Also note this little gem from the PCMag article:
http://www.pcmag.com/article2/0,2817,2415515,00.as...
"Unlike Qualcomm, Nvidia isn't actually allowed to design its own ARM-compatible cores, so it had to give the R4 innovations back to ARM and let competitors license them, but the 4i will be the first A9R4 chip on the market."
So everyone will get it eventually, and I'm sure by Q1 2015. T4i has what NV did first with A9R4, but they had to give the improvements back. Note the GPU is not the same; they get to keep all their own tech on that side. But everyone will get the CPU enhancements NV made. I think when A12 hit the drawing board they didn't realize companies like NV would make A9 so much better, and in doing so raise everyone else too (the people A12 might be aimed at, who couldn't optimize the way NV alone did). But with NV having to give it all back, everyone gets it, making an A12 redundant at best. T4i die-shrunk to 20nm puts A12 to shame in the same price and power envelope, unless ARM starts claiming its die size is 20mm². T4i, if half the size of T4 (which is known to be 81mm² or so), is roughly 40mm², or NV has to measure their own chip again and re-report :) They should know their own T4/T4i die sizes, right? It's amazing they got 72 GPU cores on T4 into ~10.5mm² of die space, so 60 GPU cores is obviously smaller. I'd like to know what ARM has to pair with A12 in that size (what, 8mm² for 60 cores?). Good luck.
http://www.tomshardware.com/reviews/tegra-4-tegra-...
"Nvidia clearly needed to make difficult decisions in order to enable Tegra 4’s GPU in just 10.5 square millimeters of die space—less than Qualcomm’s Adreno 320, ARM’s Mali-T604, or Imagination Technologies’ PowerVR SGX554MP4."
A9R4 is NOT obsolete, and further I'd say dual A9 on 40nm is being replaced by a 28nm A9R4 quad+1 in a few weeks :) I think they call it a T4i ;) I don't understand your points. Do they sell phones for under $100 without subsidy with power like T4i? Next June/July we'll have a T5i no doubt, along with everyone else's stuff. A12 doesn't fit well against a 20nm Kepler-based T5i and its ilk. I think most people will want T4i power at $100-200 this year, and whatever T5i brings next year vs. A12. This might have a place if it were coming next month against T4i, not late 2014/Q1 2015.
https://en.wikipedia.org/wiki/Allwinner_A31
Look at the perf in the chart of T3 vs. the quad A7. Look at the prices: both $20. Look at the raw DMIPS (12-17 for T3 is bigger than 9.1 for the quad A7, right?). The GPU is better than Allwinner's also. T4i will make this worse by 5x on the GPU, and who knows by how much on the CPU, but it's better. Pixel throughput is the only thing better (but that's about dual-channel memory and the GPU, not the CPU). Exynos shows the same. I don't see how this magical quad A7 can be cheaper than a T4i at 1/2 the size of the T3 in the chart, which goes for the EXACT same cost ($20) as the Allwinner A31. Surely T4i is cheaper to make than T3 at 1/2 the die size, right? T4 is the same size as T3, as Tom's etc. report, and T4i is 1/2 its size. You're not making any sense. Are you privy to info all of these people don't understand or know? Note that Wikipedia says part of the chart has to be taken on Allwinner's word:
"Note: Part of this chart would be AllWinner's opinion which is publicized from the Onda's press conference on December 5, 2012."
So it's only true if you totally believe Allwinner. Not saying they're lying, just pointing it out, and I doubt they would undersell themselves, so if the numbers are true then again, I don't get your points.
Prove A7 quad is cheaper please. Allwinner should know they cost $20, and we already know T3/exynos are basically the same $20 as the chart shows. A few bucks either way is nothing to write home about and T4i is 1/2 the size (I keep repeating this but you don't seem to get this). Worse, A12 will be facing shrinks of everything mentioned.
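Here's the rough scaling math I'm hand-waving above. The 20nm clock is pure extrapolation rather than anything NV or TSMC has announced, and the T4i area is my half-of-T4 guess:

clk_40nm, clk_28nm = 1.7, 2.3              # GHz: roughly T3-class vs T4i-class clocks
gain = clk_28nm / clk_40nm                 # ~1.35x from the 40nm -> 28nm step
clk_20nm_guess = clk_28nm * gain
print(round(clk_20nm_guess, 1))            # ~3.1 GHz if the next shrink repeated that gain

t4_area_mm2 = 81.0                         # reported Tegra 4 die size
t4i_area_guess = t4_area_mm2 / 2           # ~40 mm^2 if T4i really is about half of T4
print(t4i_area_guess)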
Wilco1 - Wednesday, June 5, 2013 - link
Yes, Tegra 4i would have been a really great design if it were released last year, not later this year. I'm sure it will get more design wins than Tegra 3. However, when A12 comes out there will be no reason at all to continue using A9, not even A9R4, simply because A12 beats it on power, performance and die size. That still leaves a window for Tegra 4i, but it is not going to last that long. If a Tegra 5i exists it will use A12 - what other CPU do you think it would use???
wsw1982 - Tuesday, June 4, 2013 - link
Your logic that the A12 was planned before the A15 based on the numbering is not valid. By that logic, the A7 should have been planned before the A8... Hehe, the Xbox after the Xbox 360 is... the Xbox One. If they planned the A12 long before the A15, what happened to ARM engineering? Their design team had a three-year delay (compared to the A15). If that were true, ARM should be even more worried :) And as I said, the A15 is not the same class of core as Krait/Swift; the A12 is. And when A15/Krait/Swift move to 20nm, why can't the A9? The gap should stay the same. As I said, the only logical reason is that ARM made a wrong decision and it is causing them problems now (the invasion of Atom).
As for 20nm: you know, back in 2009 TSMC said they would have 28nm in production during Q2 2010, and the first 28nm mobile chip came in 2012. So good luck with that expectation :)
Wilco1 - Tuesday, June 4, 2013 - link
You're very confused. What I'm saying is that the number used in Cortex cores indicates relative performance. The gap between A9 and A15 means that there has been space reserved for a CPU that fits in the middle - the A12. That doesn't imply at all that the design of A12 started before A15, just that a successor to the A9 was planned a long time ago.
No, A15, Krait and Swift are the same class: 3-way, fairly aggressive OoO cores. A9, A12 and Silvermont are 2-way, limited OoO and thus in the same class. Silverthorne and Cortex-A8 are both 2-way in-order, so in the same class.
It looks like TSMC has learned from the 28nm debacle, so I don't expect 20nm will be as troublesome. They seem to have invested a lot more effort into it than before.
The A9 is old and obsolete, so it cannot compete - even at 20nm: the A7 is almost as fast and far more efficient, so why would anyone use A9? Given that A9 will disappear, the gap between A7 and A15 will only increase on 20nm, so there is a need for something to fill it.
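For anyone losing track, here is that grouping written out as data - it reflects the classification argued in this thread, not vendor documentation:

from collections import defaultdict

cores = {
    "Cortex-A15": ("3-wide", "aggressive OoO"),
    "Krait":      ("3-wide", "aggressive OoO"),
    "Swift":      ("3-wide", "aggressive OoO"),
    "Cortex-A9":  ("2-wide", "limited OoO"),
    "Cortex-A12": ("2-wide", "limited OoO"),
    "Silvermont": ("2-wide", "limited OoO"),
    "Cortex-A8":  ("2-wide", "in-order"),
    "Silverthorne": ("2-wide", "in-order"),
}

classes = defaultdict(list)
for core, shape in cores.items():
    classes[shape].append(core)     # group cores that share issue width + OoO capability

for shape, members in sorted(classes.items()):
    print(shape, members)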
wsw1982 - Tuesday, June 4, 2013 - link
The 3-way issue width isn't the only parameter that affects the design; Atom uses an in-order design, but that doesn't make it the same level of chip as the original Pentium. The core is more or less a black box to the outside world, differentiated only by its performance and power consumption. And in that respect, the A15 is two times more powerful and more than two times more power hungry than the A9. Atom, Krait and Swift sit in the same power envelope as the A9 (of course the performance gain is not free; they are a little more power hungry than the A9). So from a performance and power consumption point of view, the A12 is in the same category as Krait, Swift and the current-gen Atom.
The A7 is actually worse than or comparable to the A8 from a performance point of view, but with better power efficiency and a smaller area.
http://www.anandtech.com/show/4991/arms-cortex-a7-...
You are right that the A12 is designed to replace the A9. However, to me, ARM should have planned the A12, not the A15, years ago. There is nothing from ARM to compete with Atom, Krait and Swift on the market until the end of 2014. And the A15 is clearly targeting a market that doesn't exist, so only throttled-down versions of it show up in a limited number of designs. On the other hand, Krait and Atom are eating ARM's cake.
Maybe you are right that the A15 could be fine on 20nm. But that also means ARM designed and introduced a product in 2012 which will only be useful after 2014 (and that's being very optimistic about TSMC's 20nm technology), meanwhile giving up the market from 2012 to 2014 to Atom, Krait and Swift simply because it had no valid product. Doesn't that sound like a problem?
Wilco1 - Tuesday, June 4, 2013 - link
You can't classify CPUs just by power consumption/frequency. For example, Swift is only low power because it is run at a low frequency. Clock an A15 at 1GHz and it would equal Swift on performance while using similar or lower power than Swift. Similarly, Krait is pretty efficient around 1.5GHz but uses a lot more power when trying to keep up with A15 at ~2GHz.
Claiming ARM has no valid product for the next 2 years is ridiculous. I really don't see the huge problem you are imagining. A15 is ARM's flagship and it is clearly able to compete with Swift, Krait and Silvermont, so you'll see it in many devices, including lots of phones. Yes, it would have been better if A12 had been released a year earlier - A9 is really near the end of its life - however it isn't needed as a replacement for A15.
TSMC's 20nm will be in production this year. Even if you are pessimistic about it and think they will be at least 12 months late, it means devices by mid 2014 or around the same time Silvermont will appear in mobiles.
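The down-clocking point is easy to sanity-check with the usual first-order rule that dynamic power scales with f·V², and that voltage tracks frequency roughly linearly inside the DVFS range - so power falls off roughly with the cube of the clock. Illustrative numbers only:

def relative_dynamic_power(f_new_ghz, f_old_ghz):
    f_ratio = f_new_ghz / f_old_ghz
    v_ratio = f_ratio                 # crude assumption: voltage scales with frequency
    return f_ratio * v_ratio ** 2     # P ~ C * V^2 * f

print(round(relative_dynamic_power(1.0, 1.9), 2))  # ~0.15: roughly 7x less dynamic power at 1.0 vs 1.9 GHz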
wsw1982 - Monday, June 3, 2013 - link
The Exynos Octa runs the A7 at 1.8GHz and the A15 at a maximum of 1.2GHz. Please check the Anand review.
Cow86 - Monday, June 3, 2013 - link
It was already said there, and updated by Anand, that it is the other way round. The A15 runs at the higher speed, which is much more logical (and you can find it in plenty of other sources online).
I also think that the A12 would be a great core... if it were available today. It seems to be shooting for the 'best of both worlds' route that Krait has currently taken, with both decent power consumption and performance. It'll just be woefully late to market... and I also can't help but think that the A53 will be very close to market as well, presumably by the time this is out. It shouldn't be far below the A12 in performance either, yet maybe a fair bit more efficient (not to mention 64-bit), so it might render it obsolete already. We'll see.
wsw1982 - Monday, June 3, 2013 - link
But in that case, with the A15 running at 1.8GHz, it basically means the A15 is slower than the current-gen Atom in IPC. It takes four A15 cores running at 1.8GHz to compete with two Saltwell cores running at 2GHz or four Krait cores running at 1.9GHz (the benchmarks of the S4 and the Lenovo K900). And that is extremely odd. Even the dual-core A15 in the Nexus 10 can outperform the quad-core A15 at 1.8GHz in lots of benchmarks. Does that mean that, in the A15's case, more cores means less performance?
Wilco1 - Monday, June 3, 2013 - link
No, A15 is clearly much faster than Atom clock for clock (e.g. check the link I posted), so I don't see how you can think Atom could possibly compete. Note some S4s use a 1.6GHz A15, others a 1.8GHz A15, and yet others a 1.9GHz Krait (confusing, I know...). But where does it show a dual A15 outperforming a quad A15???
wsw1982 - Tuesday, June 4, 2013 - link
Please!!! Check the comparison between the Galaxy S4 (Octa), Galaxy S4 (Krait), Nexus 10 and Lenovo K900. I am talking about overall performance. You can see the quad-core A15 (Octa) is barely faster than a dual-core Atom running at 2GHz. The only possible reason for this is the A15 running at 1.2GHz instead of 1.8GHz; OTHERWISE, even running at 1.6GHz, the A15 would be slower in IPC than the 5-year-old Atom. Do you have any other logical explanation?
Wilco1 - Tuesday, June 4, 2013 - link
The A15 in the Exynos Octa most definitely runs at 1.6/1.8GHz, not 1.2. If you get different results then there is something wrong with the benchmark - that is the only possible explanation.
Anyway, which comparison are you talking about exactly - link? I am looking at Geekbench results and those clearly show the A15 running at 1.6GHz and beating the K900 by a huge margin. E.g. http://browser.primatelabs.com/geekbench2/compare/...
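This is also why "overall" scores settle nothing about IPC: an aggregate multi-core number can make very different cores look nearly identical. A toy example, with every number invented:

def multicore_score(single_thread, cores, scaling=0.8):
    # crude model: aggregate score ~ per-core score * core count * scaling efficiency
    return single_thread * cores * scaling

quad_weak   = multicore_score(single_thread=900,  cores=4)  # weaker core, four of them
dual_strong = multicore_score(single_thread=1700, cores=2)  # much stronger core, two of them

print(quad_weak, dual_strong)   # 2880.0 vs 2720.0: near-identical totals, very different per core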
wsw1982 - Tuesday, June 4, 2013 - link
In AnTuTu, the S4 (Octa) is barely faster than the K900 and the S4 (Snapdragon). This could only happen if the A15 is dramatically underclocked compared to its counterparts.
http://www.androidauthority.com/galaxy-s4-vs-ideap...
Wilco1 - Tuesday, June 4, 2013 - link
That's not a CPU benchmark. AnTuTu does all sorts of stuff unrelated to CPU performance, such as SD card read/write speed and GPU tests, and is Java rather than natively compiled code. It also happens to be one of the most cheated benchmarks (both by users and by device makers adding special "optimizations" to show a better score). In short, AnTuTu scores mean absolutely nothing.
Krysto - Tuesday, June 4, 2013 - link
Yeah, right. You're talking about mainly single-threaded tests, where A15 wouldn't have much of an advantage. In tests like Octane, A15 has a huge advantage over Atom. And Anand has already shown it with the Chromebook review, too.Wait until the quad core Atoms, if they ever even arrive in smartphones, and we'll see how they stand then. But prepare to not be so impressed by Intel.
kpal12 - Monday, June 3, 2013 - link
The Cortex-A12 is the successor to the Cortex-A9, not something in between the A7 and A15. The Cortex-A15 was not the successor to the A9; it used far more power. This uses less/the same amount of power and is much faster.
TheJian - Tuesday, June 4, 2013 - link
But why waste effort developing a new chip when, by the time it comes out, the power-monger A15 will be on 20nm, removing the power-monger problem? :) This chip is moot next to a 20nm A15 etc. I'd rather have a die-shrunk A15's power in Q1 2015. This is way too late. We don't want to go backwards; I want them to die-shrink their way forward....ROFL.
Wilco1 - Tuesday, June 4, 2013 - link
Again A12 is not a replacement for A15 but for the old A9. Billions of Asians cannot afford a Galaxy S4 or iPhone, and current low-cost devices for these markets use Cortex-A7 on a 40nm process. A15 and Krait are too expensive, A9 is obsolete, so there is a huge gap. The A12 fills it. It's as simple as that.
wsw1982 - Tuesday, June 4, 2013 - link
You are right, and as a result, they turn to Qualcomm or Intel...