140 Comments

  • powerarmour - Friday, January 4, 2013 - link

    So yes, finally confirming what anyone with half a brain knows: competitive ARM SoCs use less power.
  • apinkel - Friday, January 4, 2013 - link

    I'm assuming you are kidding.

    Atom is roughly equivalent to (dual core) Krait in power draw but has better performance.

    The A15 is faster than either Krait or Atom, but its power draw is too much to make it usable in a smartphone (which I'm assuming is why Qualcomm had to redesign the A15 architecture for Krait to make it fit into the smartphone power envelope).

    The battle I still want to see is quad-core Krait and Atom.
  • ImSpartacus - Friday, January 4, 2013 - link

    Let me make sure I have this straight. Did Qualcomm redesign A15 to create Krait?
  • djgandy - Friday, January 4, 2013 - link

    No. Qualcomm creates their own designs from scratch. They have an instruction set licence from ARM, but the cores are ARM "clones".
  • apinkel - Friday, January 4, 2013 - link

    Sorry, yeah, I could have worded that better.

    But in any case the comment now has me wondering if I'm off base in my understanding of how Qualcomm does what it does...

    I've been under the impression that Qualcomm took the ARM design and tweaked it for their needs (instead of just licensing the instruction set, or the full chip design top to bottom). Yea/Nay?
  • fabarati - Friday, January 4, 2013 - link

    Nay.

    They do what AMD does: they license the instruction set and create their own CPUs that are compatible with the ARM ISAs (in Krait's case, ARMv7). That's also what Apple did with their Swift cores.

    Nvidia tweaked the Cortex A9 in the Tegra 2, but it was still a Cortex A9. Ditto for Samsung's Hummingbird and the Cortex A8.
  • designerfx - Friday, January 4, 2013 - link

    Do I need to remind you that the Tegra 3 has its fifth core disabled on the RT? Using an actual Android device with a Tegra 3 would show better results.
  • madmilk - Friday, January 4, 2013 - link

    The disabled 5th core doesn't matter in loaded situations. During idle, screen power dominates, so it still doesn't really matter. About all you'll get is more standby time, and Atom seems to be doing fine there.
  • designerfx - Friday, January 4, 2013 - link

    The companion core enables a lot of different significant things - in other words, it's extremely significant, including in high-load situations as well.

    That has nothing to do with the Atom. You get more than standby time.
  • designerfx - Friday, January 4, 2013 - link

    Also, during idle the screen is off, usually after whatever timeout the settings are set for, which is easily seen in the idle measurements. What the heck are you even talking about?
  • metafor - Friday, January 4, 2013 - link

    It matters to a degree. Look at the CPU power chart: the CPU is constantly being ramped from low to high frequencies and back.

    Tegra automatically switches the CPU to a low-leakage core at some frequency threshold. This helps in almost all situations except for workloads that constantly keep the CPU at above that threshold, which, if you look at the graph, isn't the case.

    That being said, that doesn't mean it'll be anywhere near enough to catch up to its Atom and Krait competitors.
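
    To make that switching behaviour concrete, here is a minimal sketch in Python (not NVIDIA's actual governor; the threshold and the trace values are made up for illustration):

        # Toy model of a frequency-threshold cluster switch: below the threshold
        # the low-leakage companion core services the load, above it the main
        # cores take over. Threshold and trace are hypothetical numbers.
        COMPANION_MAX_HZ = 500_000_000

        def select_cluster(requested_freq_hz):
            if requested_freq_hz <= COMPANION_MAX_HZ:
                return "companion"  # low-power silicon, minimal leakage
            return "main"           # high-performance quad-core cluster

        # A bursty trace like the CPU power chart: mostly low, occasional spikes.
        trace_hz = [200e6, 340e6, 1300e6, 260e6, 1500e6, 200e6]
        print([select_cluster(f) for f in trace_hz])

    A workload that keeps the requested frequency above the threshold the whole time never drops back to the companion core, which is the exception noted above.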
  • jeffkro - Saturday, January 5, 2013 - link

    The Tegra 3 is also not the most powerful ARM processor; Intel obviously chose it to make Atom look better.
  • npoe1 - Wednesday, January 9, 2013 - link

    From one of Anand's articles: "NVIDIA recently revealed it was doing something similar to this with its upcoming Tegra 3 (Kal-El) SoC. NVIDIA outfitted its next-generation SoC with five CPU cores, although only a maximum of four are visible to the OS. If you’re running light tasks (background checking for email, SMS/MMS, twitter updates while your phone is locked) then a single low power Cortex A9 core services those needs while the higher performance A9s remain power gated. Request more of the OS (e.g. unlock your phone and load a webpage) and the low power A9 goes to sleep and the 4 high performance cores wake up."

    http://www.anandtech.com/show/4991/arms-cortex-a7-...
  • jeffkro - Saturday, January 5, 2013 - link

    The A15 currently pulls too much power for smartphones, but it makes for a great tablet chip as well as providing enough horsepower for basic laptops.
  • djgandy - Friday, January 4, 2013 - link

    The most obvious thing here is that PowerVR graphics are far superior to Nvidia graphics.
  • Wolfpup - Friday, January 4, 2013 - link

    Actually no, that isn't obvious at all. Tegra 3 is a two-year-old design on a process two generations old. The fact that it's still competitive today is just because it was so good to begin with. It'll be necessary to look at the performance and power usage of upcoming Nvidia chips on the same process to actually say anything "obvious" about them.
  • Death666Angel - Friday, January 4, 2013 - link

    According to Wikipedia, the 545 is from January '10, so it's about 3 years old now. The only current-gen thing here is the Mali. The 225 is just a 220 with a higher clock, so it's about 1.5 to 2 years old.
  • djgandy - Friday, January 4, 2013 - link

    And a 4/5 year old Atom and the 2/3+ year old SGX545 aren't old designs?

    Look at Nvidia's power usage. It's way beyond what is acceptable for any SoC design. Phones from 2 years ago used far less power on older processes than the 40nm T3! Just look at the GLBenchmark battery life tests for the HTC One X and you'll see how poor the T3 GPU is. In fact, just take your Nvidia goggles off and re-read this whole article.
  • Wolfpup - Friday, January 4, 2013 - link

    Atom's basic design is old; the manufacturing process is newer. Tegra 3 is by default at the biggest disadvantage here. You accuse me of bias when it appears you're actually biased.
  • Chloiber - Tuesday, January 8, 2013 - link

    First of all it's still 40nm.

    Second of all: you mentioned the battery benchmarks yourself. Go look at the Nexus 4 review and see how the international version of the One X fares. Battery life on the T3 One X is very good if you take into account that it's built on 40nm, compared to the 28nm of the One XL, and uses 4 cores.
  • gryer7421 - Friday, January 4, 2013 - link

    Aaaand then it loses where it matters: the rest of the platform. One more process shrink and both will be on even terms in CPU power usage, and then as a whole platform it will start punching ARM in the face.
  • Wolfpup - Friday, January 4, 2013 - link

    Huh? Did you read the article? Atom built on 32nm is competitive with ARM built on 28nm. Not only that, but it's looking like Haswell will realistically be able to compete here too, and we've got the second gen Atom coming up this year too...but TODAY'S Atom at an older process is competitive with ARM...what you're claiming is exactly the opposite of what the article says.
  • JumpingJack - Friday, January 4, 2013 - link

    I don't think we are looking at the same data; overall, Atom appears to use the same or less power than Krait while offering better performance in general.
  • Homeles - Friday, January 4, 2013 - link

    "Anyone with half a brain" would read the article before making such an idiotic statement.
  • Rezurecta - Saturday, January 5, 2013 - link

    wow. Way to belittle Anand's hard work...

    Great article! One of the many reasons I love this site. :)
  • Death666Angel - Friday, January 4, 2013 - link

    The Star Wars theme is now playing in my head! Thanks for that! :D
  • Death666Angel - Friday, January 4, 2013 - link

    "I wonder what an 8W Haswell would look like in a similar situation."
    Me too. However, considering that they 17W ULV parts only reach those numbers by throttling as well, I don't expect a lot.
  • carancho - Friday, January 4, 2013 - link

    Amazing work. Congratulations! A couple of presentation suggestions:

    Next time, please smooth some of the most important charts. The volatility makes it hard to see where the averages are. Take this chart: http://images.anandtech.com/reviews/SoC/Intel/CTvK... it could really benefit from another copy with some additional smoothing.

    Also, in power charts like this http://images.anandtech.com/reviews/SoC/Intel/CTvK... it would be helpful to have, as a summary follow-up, the power calculation done and presented as bar charts; otherwise we have to resort to estimating the differences in the areas below the lines with our eyes, and they can be deceiving.
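
    For what it's worth, even a simple moving average would do the smoothing job - a rough sketch in Python (the window size and the (time, power) sample format are my own assumptions, not anything AnandTech actually uses):

        # Smooth a noisy power trace of (time_s, power_w) samples with a
        # centered moving average so the eye can find the running average.
        def moving_average(samples, window=9):
            smoothed = []
            for i in range(len(samples)):
                lo = max(0, i - window // 2)
                hi = min(len(samples), i + window // 2 + 1)
                chunk = [p for _, p in samples[lo:hi]]
                smoothed.append((samples[i][0], sum(chunk) / len(chunk)))
            return smoothed

        trace = [(0.1 * i, 1.0 + (i % 3)) for i in range(50)]  # fake jittery data
        print(moving_average(trace)[:3])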
  • carancho - Friday, January 4, 2013 - link

    I hadn't reached the A15 part yet when writing this. Ignore the second suggestion.
  • amorlock - Friday, January 4, 2013 - link

    I'm frankly amazed and impressed that Intel can get Haswell down to 8W, but it's hard to imagine it in a mid-range mobile device because of the likely unit cost. The reason Atom has stagnated until recently is that Intel doesn't want to create a chip that cuts into its very profitable mainstream CPU market.
  • powerarmour - Friday, January 4, 2013 - link

    "Intel doesn't want to create a chip that cuts into it's very profitable mainstream CPU market."

    Indeed, they've left Cedar Trail to fester and die by totally withdrawing driver support :-

    http://communities.intel.com/message/175069#175069

    Quite a lot of desktop Atom hardware is still on the market, and they are trying their best to kill it off.
  • djgandy - Friday, January 4, 2013 - link

    All that says to me is that they don't care about Win7, i.e. non-tablets.
  • Krysto - Friday, January 4, 2013 - link

    Cortex A15 coupled with Cortex A7 will use half the power on average. Also, I told you before that the Mali T604 is more efficient than the PowerVR in the latest iPads, and that's how Apple managed to use a more powerful GPU - by accepting lower efficiency. They sacrificed energy efficiency for performance, because they can use a very large battery in the iPad.

    I see you're trying hard to "prove something" about Intel lately, and I'm not sure why. Is Intel your biggest "client" when they pay you for reviews here? Is that why you're trying so hard to make them look good?

    You're also always making unbelievable claims about what Intel chips will do in the future. Even if they get Haswell to 8W (is that for the CPU only? The whole SoC? Is it peak TDP? Will it still need fans?), you do realize a Haswell chip costs as much as the whole BOM of an iPhone 5, right? Haswell chips will never arrive in smartphones, or in tablets that are competitive on price.
  • Tetracycloide - Friday, January 4, 2013 - link

    You're always making "unbelievable" claims about what corruption does here. Do you have anything to back up your allegations that would convince a normal person, or just the assumption that any excitement about future possibilities is damning evidence that the writer must be on the take? It's like you think everyone who doesn't share your opinion of Intel is paid to have that opinion.
  • trivik12 - Friday, January 4, 2013 - link

    Haswell ULV is an SoC, so the platform TDP was < 8W. Like it or not, Intel has the best process technology, and ultimately they will produce a platform that is faster with a lower TDP.

    That being said, ARM will dominate the smartphone market and even the majority of low-end laptops. I see Intel existing only in mid- to higher-end smartphones plus tablets > $500.

    I am personally waiting for a Broadwell-based tablet, which should hopefully cut power even more on the 14nm process.
  • djgandy - Friday, January 4, 2013 - link

    You'd hope two brand-new technologies would be better than two 3/4-year-old ones, wouldn't you? Clearly you are blinded by your love for ARM in the same way many here are blinded by love for Nvidia and actually consider Tegra 3 a competitive SoC.

    I don't think many people would be astonished to find that the T604, an architecture only released a few months back, is more efficient than PowerVR Series 5, dating back to 2008.

    Why are people so shocked to find that Intel can make a low-power chip? It's not some kind of magic, it is a business goal. Power is a trade-off just like performance. On desktop systems, using more power is seen as a worthwhile trade-off for a 40-50% performance gain.
  • mrdude - Friday, January 4, 2013 - link

    He's spot on about the pricing issue, though. Intel isn't going to start selling Haswell SoCs for $30, and if they do then they'll quickly go out of business. It's a completely different business model that they're trying to compete with. The Tegra 3 costs $15-$25 (and way closer to that $15 to date) while Intel charges $70+ for their CPU+GPU, and that's before you get to the chipset, WiFi and the rest. A low-TDP Haswell chip might offer great performance and fit in the same form factor (tablets), but if the tablet ends up costing $800+ and isn't Apple, well... nobody cares.

    It's not just a matter of performance but of performance-per-dollar and design wins. Intel can't afford to drop prices to competitive levels on their Core products unless they can supplement it with very high volume. For very high volume you need to sell a lot of competitive SoCs that can do it all at a very reasonable price. The Tegra 3 was a big success not because it was an amazing performer, but because it offered decent performance for a very low price tag. Can Intel afford to do that with their cash cow business slipping? Remember that x86 is seeing drops in sales and PCs aren't exactly doing very well right now. Intel already had to drop their margins, and they've let fabs run idle and sent home engineers at their 14nm fab in Ireland, all while processor prices haven't decreased even a tiny bit. Those aren't signs of a company that's willing to compete on price.
  • Homeles - Friday, January 4, 2013 - link

    I'm more than willing to pay for the performance premium.
  • mrdude - Friday, January 4, 2013 - link

    While you may be willing to fork over that much cash, most people won't. If you don't believe me, check out the recent sales figures of Win8 devices. The Win8 tablets (excluding Surface RT) don't even make up 1% of all Win8 products sold. That's not poor; that's absolutely horrible. On the other side, the cheap Android tablets and smartphones have been gaining significant market share and outselling even the iPhone and iPads. Price matters. A lot. Furthermore, device makers/OEMs are more likely to go with the cheaper SoC if the experience is roughly equal. Remember that a majority of tablet and smartphone buyers don't browse AnandTech for benchmarks but buy based on things like display quality or whether it's got a nice look (or brand name, in the case of Apple). If an OEM can take that $60 saved and put it towards a better display, a larger battery or more NAND, that does a lot more to differentiate it from the competition than being 10-15% faster in X benchmark.

    People forget that these are SoCs and not CPUs. They also forget that these aren't DIY computers but tablets. Think about how much people complain when they see a $900 Ultrabook with a crappy 1366x768 TN display but those same people don't utter a word about how Intel's ULVs cost the same as their 35W parts. If the Intel chip was cheaper you'd probably have a better display or a cheaper price tag. This same notion extends to tablets and smartphones.

    Qualcomm is in a place where they can offer something everybody wants; their LTE is second to none. What does Intel have to offer to warrant Intel prices? Currently Intel's chipsets cost as much as an entire Tegra 3 SoC. x86 PC/server and ARM SoCs are in a completely different universe when it comes to pricing, and unless you've got something special (see Qualcomm or Apple) or you're making and selling the device (Samsung), then you're going to have a very rough time of it.
  • jeffkro - Saturday, January 5, 2013 - link

    I paid $15 to "upgrade" my laptop and have since gone back to Win 7. A lot of people simply don't want Win 8 at any cost.
  • karasaj - Friday, January 4, 2013 - link

    Yeah, keep in mind that Haswell will be like Windows 8 Pro, i.e. a more traditional laptop experience anyways, so it won't necessarily be strictly competing with the iPad.
  • mrdude - Friday, January 4, 2013 - link

    Win8 and its devices aren't selling, so I'm not sure how Intel plans on suddenly making that OS and the products that run on it any better. Like I said, Win8 tablet sales have been really really poor.

    And as far as consumers go, it is directly competing with the iPad and Android. People are more inclined to buy a shiny new tablet than a laptop, and they couldn't care less about x86 compatibility. For your average user, the iOS App Store has thousands more applications than your x86 desktop, if only because it makes them incredibly easy to search through, install and play with. x86 compatibility means very little to most folks, and you can argue that it's actually a detriment due to the x86 legacy's inherent safety issues (non-Metro) and higher prices.

    The "It's also a PC" shtick only works if people want PCs. Judging by the longer upgrade cycles and Win8's sales figures, it's clear they obviously aren't that interested.
  • Krysto - Friday, January 4, 2013 - link

    "One reason the Pro version of the device will be more expensive is that it uses a PC-style chip from Intel Corp. INTC -0.87% (INTC), part of a family of chips that sells for between $177 and $225. The Nvidia Corp. NVDA +3.48% (NVDA) chip typically used in the Surface RT model costs about $28, according to an estimate by research firm UBM TechInsights."

    http://blogs.wsj.com/digits/2013/01/04/windows-8-f...

    iPhone 5's BOM is $188. iPad 3's BOM was $160. I wish Intel good luck if they think they are going to have competitive devices on the market where the chip alone costs as much as all the components in iPads and iPhones.

    Oh, but that's just for IVB chips. Did I mention Haswell will be 40% more expensive - so Haswell chips will be more like $250-$300. Yeah...good luck with that Intel.
  • jeffkro - Saturday, January 5, 2013 - link

    "I'm more than willing to pay for the performance premium."

    I'm not. I'm perfectly happy with the speed of my Galaxy Nexus, which is completely outclassed by the Krait S4. So why would I want to pay a huge premium for an Intel-powered phone? Just give me dual-core Krait performance in an all-day-battery-life phone and I'll be thrilled. For me, getting more battery life takes precedence over all-out speed.
  • djgandy - Friday, January 4, 2013 - link

    True, pricing is another issue, but Intel has room to cut prices a lot. Intel makes the conscious decision to make upwards of $30 on a chip rather than $3. In turn they don't have to ship as many chips.

    I'm pretty sure the cost of manufacturing an Atom SoC is around $5-6, so Intel has plenty of room to make money in the $20 range if they so choose. There is no technical reason a Tegra 3 is cheaper than a Haswell when it comes to manufacturing. It's all about market segment.

    Intel idling fabs a bit is probably due to the fact they went massive with 22nm. Intel also makes in profit each quarter about 75% of Nvidia's entire yearly revenue.

    If Intel wants to play ball, I am sure they will.
  • mrdude - Friday, January 4, 2013 - link

    But can they afford to?

    Those billion-dollar fabs require billions of dollars to run, and if they want to maintain - or even stretch - their fab lead over their competitors, they need to make MORE money going forward. This means that if they were to compete at ARM's price level, they couldn't afford to do so for long, or else they'd find themselves short of cash to funnel back into the fabs and would lose that distinct advantage. I remember something that stood out in the latest Qualcomm earnings report; it read (paraphrasing): our fabless strategy is actually an advantage.
    For Intel's fab advantage to remain an advantage, they must make more and more money going forward. As soon as the sales figures look gloomy, it all goes downhill quickly, as the fabs go from being a distinct advantage to a potentially expensive disadvantage.

    Intel also has investors to answer to, and this is more complicated than the microarchitecture involved. The investors expect >60% gross margins, and Intel dropped margins a bit below 60% but also let fabs idle so they wouldn't have to drop them even further. The mere fact that they're letting fabs idle means that they're not meeting sales estimates. That's not good. This is rather obvious and seen in the drop in chip sales over the same period last year. In order to keep that dip from looking worse (dropping margins even further), Intel just let the fabs idle. Smart short-term strategy, as investors don't look at that stuff, but it shows that things aren't 'all gravy.'

    If Intel were to drop prices during a resurgent and strong PC market, then I'd completely agree with you. They'd even be able to lose money in the mobile SoC arena and still show good numbers at their earnings call. That's just not the case: with PC sales slowing quite a bit and tablet sales picking up substantially, you'd have to question whether they can "weather the storm" by leaning on their dominant x86 PC/server position long enough to make up for the lower-than-usual prices in the mobile SoC space. If Intel had gone at ARM head-on with competitive prices a couple of years ago, there would be no question that Intel would remain competitive, but with weak PC sales that are expected to look even weaker compared to mobile this year?

    If Intel is to compete with ARM on price, it's, oddly enough, going to be determined by how well their high profit products sell in the near future.
  • Ananke - Friday, January 4, 2013 - link

    They can't compete on pricing with x86 vs ARM. I'm in the business, I know. Intel has absolutely competitive process facilities. If they were strictly making chips, nobody could beat them. If they licensed the design and made the chips, nobody could beat them either. However, coupling their own design R&D expenses with their own production, their cost is higher. Products might be better, but cost is higher.
    On the marketing side, only price matters today. You may think performance is important, but in reality it defines 1% of the decision of 1% of the market....
    Hence, the trend towards ARM designs. That trend was not accidental, it is structural, and I see no reason it will turn around.
    It is the reason why AMD is performing so poorly; Intel is just much larger and owns its fabs, so it takes longer for it to become obvious that they will have revenue problems.
  • GillyBillyDilly - Saturday, January 5, 2013 - link

    Exactly. Performance is one thing, price another. ARM is not AMD. There is no way Intel can compete with ARM price-wise. And the less competitive they are, the less they will sell; the less they sell, the less R&D; and the less R&D, well, this goes on and on. I am glad I don't own any Intel shares.
  • felixyang - Friday, January 4, 2013 - link

    There is no doubt the A15 is more power-hungry. An A7 core can save power sometimes, but when you have a CPU-intensive workload like SunSpider, the A7 core's effect is limited.
  • GillyBillyDilly - Saturday, January 5, 2013 - link

    I hope you do realise that this website is a business and not a charity organisation, and no money = no business.
    Of course Anand is being paid for these reviews (not only the Intel-related ones), but as long as they don't manipulate the data (which I don't think they do), their reviews are worth reading. You read it, get a general picture and draw YOUR own conclusions.
  • kumar0us - Friday, January 4, 2013 - link

    My point was that for a CPU benchmark, say SunSpider, the code generated by x86 compilers would be better than that generated by ARM compilers.

    Could the better compilers available for the x86 platform be a (partial) reason for Intel's faster performance? Or are compilers for the ARM platform mature and fast enough that this angle can be discarded?
  • iwod - Friday, January 4, 2013 - link

    Yes, not just the compiler but general software optimization on x86, which gives Intel some advantage. However, with the recent surge of the ARM platform and the software running on it, my (wild) guess is that this is less than 5% in the best-case scenario, and only in the worst cases, i.e. individual cases like SunSpider that don't run fully well.
  • jwcalla - Friday, January 4, 2013 - link

    Yes. And it was a breath of fresh air to see Anand mention that in the article.

    Look at, e.g., the difference in SunSpider benchmarks between the iPad and Nexus 10. Completely different compilers and completely different software. As the SunSpider website indicates, the benchmark is designed to compare browsers on the same system, not across different systems.
  • monstercameron - Friday, January 4, 2013 - link

    It would be interesting to throw an AMD system into the benchmarking, maybe the current Z-01 or the upcoming Z-60...
  • silverblue - Friday, January 4, 2013 - link

    AMD has thrown a hefty GPU on die, which, coupled with the 40nm process, isn't going to help with power consumption whatsoever. The FCH is also separate as opposed to being on-die, and AMD tablets seem to be thicker than the competition.

    AMD really needs Jaguar and its derivatives, and it needs them now. A dual-core model with a simple 40-shader GPU might be a competitive part, though I'm always hearing about the top-end models, which really aren't aimed at this market. Perhaps AMD will use some common sense and go for small, volume parts over the larger, higher-performance offerings, and actually get themselves into this market.
  • BenSkywalker - Friday, January 4, 2013 - link

    There is an AMD design in there: Qualcomm's part.

    A D R E N O
    R A D E O N

    Not a coincidence: Qualcomm bought AMD's ultra-portable division from them for $65 million a few years back.

    Anand - if this is supposed to be a CPU comparison, why go overboard with the terrible browser benchmarks? Based on numbers you have provided, Tegra 3, as a generic example, is 100% faster under Android than Windows RT depending on the bench you are running. If this were an article about how the OSes handle power tasks I would say that is reasonable, but given that you are presenting this as a processor architecture article, I would think you would want to use the OS that works best with each platform.
  • powerarmour - Friday, January 4, 2013 - link

    Agreed, those browser benchmarks seem a pretty poor way to test general CPU performance. In fact, browser benchmarks in general mainly just test how optimized a particular browser is on a particular OS.

    In fact I can beat most of those results with a lowly dual-A9 Galaxy Nexus smartphone running Android 4.2.1!
  • Pino - Friday, January 4, 2013 - link

    I remember AMD having a dual-core APU (Ontario) with a 9W TDP, on a 40nm process, back in 2010.

    They should invest in an SoC.
  • kyuu - Friday, January 4, 2013 - link

    That's what Temash is going to be. They just need to get it on the market and into products sooner rather than later.
  • jemima puddle-duck - Friday, January 4, 2013 - link

    Impressive though all this engineering is, in the real world what is the unique selling point for this? Normal people (not solipsistic geeks) don't care what's inside their phone, and the promise of their new phone being slightly faster than another phone is irrelevant. And for manufacturers, why ditch decades of ARM knowledge to lock yourself into one supplier? The only differentiator is cost, and I don't see Intel undercutting ARM any time soon.

    The only metric that matters is whether normal human beings get any value from it. This just seems like (indirect) marketing by Intel for a chip that has no raison d'etre. I'm hearing lots of "What" here, but no "Why". This is the analysis I'm interested in.

    All that said, great article :)
  • djgandy - Friday, January 4, 2013 - link

    People care about battery life, though. If you can run faster and idle lower, you can save more power.

    The next few years will be interesting, and once everyone is on the same process there will be fewer variables in the way of working out who has the most efficient SoC.
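
    To put rough numbers on the race-to-idle point above (these figures are invented purely for illustration, not taken from the article):

        # Over a fixed window, a faster chip at higher active power can still
        # use less total energy because it spends more of the window idling.
        def window_energy_j(active_w, active_s, idle_w, window_s):
            return active_w * active_s + idle_w * (window_s - active_s)

        fast = window_energy_j(active_w=2.0, active_s=1.0, idle_w=0.1, window_s=4.0)  # 2.3 J
        slow = window_energy_j(active_w=1.0, active_s=3.0, idle_w=0.1, window_s=4.0)  # 3.1 J
        print(fast, slow)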
  • DesktopMan - Friday, January 4, 2013 - link

    "and once everyone is on the same process"

    Intel will keep their fabs, so unless everybody else suddenly starts using them, it doesn't look like this will ever happen. Even at the same transistor size there are large differences between fab methods.
  • jemima puddle-duck - Friday, January 4, 2013 - link

    Everyone cares about battery life, but it would take orders of magnitude of improvement for people to actually go out of their way and demand it.
  • Wolfpup - Friday, January 4, 2013 - link

    No it wouldn't. People buy new devices all the time for far smaller improvements.

    And Atom swaps in for ARM pretty easily on Android, and is actually a huge selling point on the Windows side, given it can just plain do a lot more than ARM.
  • DesktopMan - Friday, January 4, 2013 - link

    The same power tests during hardware-based video playback would also be very useful. I'm disappointed in the playback time I get on the Nexus 10, and I'm not sure if I should blame the display, the SoC, or both.
  • djgandy - Friday, January 4, 2013 - link

    It's probably the display. Video decode usually shuts most things off except the video decoder. Anand has already done Video decode analysis in other articles.
  • jwcalla - Friday, January 4, 2013 - link

    You can check your battery usage meter to verify, but... in typical usage, the display takes up by far the largest swath of power. And in standby, it's the wi-fi and cell radios hitting the battery the most.

    So SoC power efficiency is important, but the SoC is rarely the top offender.
  • Drazick - Friday, January 4, 2013 - link

    Why don't you keep it updated?
  • iwod - Friday, January 4, 2013 - link

    I don't think anyone, or at least any AnandTech reader with some technical knowledge, has ever doubted what Intel is able to come up with: a low-power SoC with similar or even better performance. Give it time and Intel will get there. I don't think anyone should disagree with that.

    But I don't think that is Intel's problem at all. It is how they are going to sell this chip when Apple and Samsung are making their own for less than $20. Samsung owns nearly the majority of the Android market, which means there is zero chance of them using an Intel SoC, since they design AND manufacture the chip all by themselves. And while Samsung owns the top end of the market, the lower end is being filled by EVEN cheaper ARM SoCs.

    So while Intel may have the best SoC 5 years down the road, I just don't see how they fit into the smartphone market. (Tablets would be a different story, and they should do alright there....)
  • jemima puddle-duck - Friday, January 4, 2013 - link

    Exactly. Sometimes, whilst I enjoy reading these articles, it feels like the "how many angels can dance on the head of a pin" argument. Everyone knows Intel will come up with the fastest processor eventually. But why are we always told to wait for the next generation? It's just PR. Enjoyable PR, but PR nonetheless.
  • Kidster3001 - Friday, January 4, 2013 - link

    Samsung uses everyone's chips in their phones. Samsung, Qualcomm, TI... everyone's. I would not be surprised to see a Samsung phone with Atom in it eventually.
  • jeffkibuule - Friday, January 4, 2013 - link

    They've never used non-Samsung SoCs by choice, especially in their high-end phones. They only used the Qualcomm MSM8960 in the US GS III because Qualcomm's separate MDM9615 baseband wasn't ready. As soon as it was, we saw the Galaxy Note II use Exynos again. Nvidia and TI chips have been used in Samsung's low end, but that's not profitable for anyone.

    Intel needs a major design win from a tier one OEM willing to put its chip inside their flagship phone, and with most phone OEMs actually choosing to start designing their own ARM SoCs (including even LG and Huawei), that task is getting a lot harder than you might think.
  • felixyang - Saturday, January 5, 2013 - link

    Some versions of Samsung's GS2 used TI's OMAP.
  • iwod - Saturday, January 5, 2013 - link

    Exactly as was said above. If they have a choice, they would rather use everything they produce themselves, simply because wasted fab space is expensive.
  • Icehawk - Friday, January 4, 2013 - link

    I find these articles very interesting - however, I'd really like to see an aggregate score/total for power usage; IOW, what is the area under the curve? As discussed, being quicker to complete at higher power can be more efficient - however, when looking at a graph it is very hard to see what the total area is. Giving the total energy used during the test (i.e., the area under the curve) would be a much easier metric to read, and it is the important number: not where the power maxes or bottoms out, but the overall usage over the time of the test, IMO.
  • extide - Friday, January 4, 2013 - link

    There are indeed several graphs that display the total energy used in joules, which is the area under the curve of the watts graphs. Maybe you missed them?
  • jwcalla - Friday, January 4, 2013 - link

    That's what the bar charts are showing.
  • GeorgeH - Friday, January 4, 2013 - link

    It's already there. A Watt is a Joule/Second, so the area under the power/time graphs is measured in Watts * Seconds = Joules.
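
    And if anyone wants to compute it themselves from a raw trace, a quick sketch (assuming the samples are exported as (seconds, watts) pairs; the trapezoidal rule is just one reasonable choice, and the data below is made up):

        # Integrate a power trace into energy: watts * seconds = joules.
        def energy_joules(samples):
            total = 0.0
            for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
                total += 0.5 * (p0 + p1) * (t1 - t0)
            return total

        trace = [(0.0, 1.2), (0.5, 2.8), (1.0, 2.6), (1.5, 0.9)]
        print(energy_joules(trace))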
  • Veteranv2 - Friday, January 4, 2013 - link

    Another Intel PR article; it is getting really sad on this website.

    Now, since you are still using Win8, which is garbage on ARM: please use the correct software platform for the ARM chips. I'd love to see those power measurements then.

    Anandtech did it again. Pick the most favorable software platform for Intel, give the least favorable to ARM.
    Way to go! Again....

    Intel PR at its best...
  • Veteranv2 - Friday, January 4, 2013 - link

    Oh wait its even better!
    They used totally different screens, with almost 4 times the pixels on the Nexus 10, and then say it requires more power to do benchmarks. Hahaha, this review gave me a good laugh. Even worse than the previous ones.

    This might explain the lack of product overviews at the start.
  • A5 - Friday, January 4, 2013 - link

    Even if you just look at the SunSpider power draw (a test which draws nothing on the screen), it's pretty clear that the A15 draws more power. There have been a ton of OEMs complaining about the A15's power draw, too.
  • madmilk - Friday, January 4, 2013 - link

    Since when did screen resolution matter for CPU power consumption on CPU benchmarks? Platform power might change, yes, but this doesn't invalidate facts like the Cortex-A15 using twice as much power on average compared to Krait, Atom or the Cortex-A9.
  • Wolfpup - Friday, January 4, 2013 - link

    Good lord. Do you have some evidence for any of this? If neither Windows nor Android is the "right platform" for ARM, then...are you waiting for Blackberry benchmarks? That's a whole lot of spin you're doing, presumably to fit the data to your preconceived "ARM IS BETTER!" faith.
  • Veteranv2 - Friday, January 4, 2013 - link

    Hahaha, the Nexus 10 has almost 4 times the pixels of the Atom tablet.
    And the conclusion is that it draws more power in benchmarks? Of course; those pixels aren't going to fill themselves. Way to draw a conclusion.

    How big was that Intel PR cheque?
  • iwod - Saturday, January 5, 2013 - link

    While I wouldn't say it was Intel PR, I think they should definitely have left the system-level power usage out of the question. There is no point telling me that a 100" screen with ARM uses X amount of power compared to a 1" screen with Haswell.

    It is confusing.

    But they did include CPU and GPU benchmarks. So saying it is Intel PR is just trolling.
  • AlB80 - Friday, January 4, 2013 - link

    Architectures with variable-length instructions are doomed. Actually, only one remains: x86.
    Intel made its step into a happy past when CISC had an advantage over RISC, when superscalar execution was just a theory.
    The Cortex A57 is coming. ARM cores will easily outperform Atom in effective instruction rate with minimal overhead.
  • Wolfpup - Friday, January 4, 2013 - link

    How is x86 doomed when it has an absolute stranglehold on real PCs, and is now competitive on ultramobile platforms?

    The only disadvantage it holds is the need for a larger decoder on the front end, which has been proportionally shrinking since 1995.
  • djgandy - Friday, January 4, 2013 - link

    plus effing one!

    I think some people heard their uni lecturers say something once in 1999 and just keep repeating it as if it is still true!
  • AlB80 - Friday, January 4, 2013 - link

    Shrinking decoder... nice myth. Of course the complicated scheduler and the dozen ALUs impact performance, but do not forget how the decoded instruction queues are filled. The decoder is the one real difference.
    1. There are fundamental limits on how many variable-length instructions can be decoded per clock. CISC has instruction cross-interference at the decode stage: one logic block has to determine the total length of the instructions being decoded.
    2. There is a trick where the CISC decoder is split into 2-3 parts with dedicated inputs, so it looks like a few independent decoders, but each part cannot decode every instruction.

    Now compare that with RISC.
    And as I said, what happens when a Cortex can decode 4, 5, 6, 7, 8 instructions?
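
    A toy way to see the cross-interference point (the instruction lengths below are invented, not real x86 encodings): with variable-length instructions, the start of instruction N+1 isn't known until the length of instruction N has been determined, while fixed-length instructions can all be located in parallel.

        # With variable-length encodings, finding instruction boundaries is a
        # serial chain (unless the decoder speculates at every byte offset).
        def boundaries_variable(code, length_of):
            offsets, pc = [], 0
            while pc < len(code):
                offsets.append(pc)
                pc += length_of(code[pc])   # must decode length before moving on
            return offsets

        def boundaries_fixed(code, width=4):
            return list(range(0, len(code), width))  # trivially parallel

        fake_len = lambda first_byte: 1 + (first_byte % 5)  # pretend 1..5 byte ops
        print(boundaries_variable(bytes([3, 7, 0, 9, 2, 4, 8]), fake_len))
        print(boundaries_fixed(bytes(16)))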
  • Kogies - Friday, January 4, 2013 - link

    Don't be so quick to prophesy the death of a' that. What happens when a Cortex decodes 8 instructions... I don't know, it uses 8W?

    Also, didn't Apple choose CISC (Intel) chips over RISC (PowerPC)? Interestingly, I believe Apple made the switch to Intel because the PowerPC chips had too high a power premium for mobile computers.
  • kyuu - Friday, January 4, 2013 - link

    You're the one stepping into the past with the CISC vs. RISC. x86 is not going to go away anytime soon. Keep dreaming, though.
  • iwod - Saturday, January 5, 2013 - link

    Nothing about architectures in this comment, but by the time the ARM Cortex A57 is out, so is Intel's Valley View, which doubles Atom's performance. The A57 is expected to give, in the best-case scenario, a 30-50% increase in performance. And all of a sudden that doesn't look like much next to 2x Atom performance.

    It will only take one, just ONE, mistake by ARM for Intel to possibly wipe them off the map.

    Although, looking 3-5 years ahead, it will be a bloody battle instead.
  • Cold Fussion - Friday, January 4, 2013 - link

    Why weren't there any charts showing performance per watt or energy consumption vs. performance in the GPU section? If the Mali chip is using twice the energy but giving 3x the performance, then that is a very significant point that's being misrepresented.
  • mrdude - Friday, January 4, 2013 - link

    I was thinking the same thing.

    If I can game at native resolution on a Nexus 10 at better frame rates than on the Atom or Snapdragon SoC and the battery capacity is larger and the price of the device is equal, then do I really care about the battery life?

    Although it's nice seeing Intel getting x86 down to a competitive level with ARM, the most astonishing thing I took away from that review was just how amazing that Mali T604 GPU is. All that performance at only that power draw? Yes plz :P
  • parkpy - Friday, January 4, 2013 - link

    I've learned so much about mobile phones from AT's reviews of the iPhone 5, Galaxy S III and Nexus 4, and from this article, that it makes me wish AT could produce MORE reviews of mobile devices.

    All of this information is crack! I can't get enough of it. Keep up the good work! And Intel, I can't wait for you to get your baseband processor situation sorted out!

    I was already tempted to get a Razr i, but it looks like before the end of the year consumers will have some very awesome technology in their phones that won't require as much time on the battery charger!
  • This Guy - Friday, January 4, 2013 - link

    What if Rosepoint is software defined instead of fixed function?
  • ddriver - Friday, January 4, 2013 - link

    I am confused here - this review shows the Atom to be somewhat faster than the A15, while the review at Phoronix shows the A15 destroying the Atom, despite the fact that Intel's compiler is incredibly good at optimizations and incomparably more mature.

    So I am in a dilemma about whom to trust - a website that is known to be generously sponsored by Intel, or a website that is heavily focused on open source.

    What do you think?
  • kyuu - Friday, January 4, 2013 - link

    Uh, did we read the same article? Where does it show the Atom being "somewhat faster than A15"? The article showed that the A15 is faster than Atom, but at a large power premium.
  • ddriver - Friday, January 4, 2013 - link

    On the charts I see the blue line ending its task first and taking less time; blue is Atom, right?
  • jwcalla - Friday, January 4, 2013 - link

    A couple things:

    1) The Phoronix benchmarks were for different Atoms than the one used in this article. I don't know how they compare, but they're probably older models.

    2) The Phoronix benchmarks used GCC 4.6 across the board. Yes, in general GCC will have better optimizations for x86, but we don't know anything (unless I missed it) about which compilers were used here. If this was an Intel sample sent to Anand, I'm sure they compiled the software stack with one of their own proprietary Intel compilers. Or perhaps it is the MS compiler, which no doubt has decades of x86 optimizations built in and probably less ARM work than GCC (for the RT comparison).

    Don't take the benchmarks too seriously, especially since even the software isn't held constant here like it was in the Phoronix benchmarks. It's all ballpark information. Atom is competitive with ARMv7 architectures -- that's the takeaway.
  • tahyk - Friday, January 4, 2013 - link

    I've read that review. That's a total scam. They're comparing the Atom N270, aka the original 2008 netbook chip, to the latest and greatest ARM. This article is about the Atom Z2760, so it compares 2012 x86 to 2012 ARM - a whole lot fairer.
  • mugiebahar - Friday, January 4, 2013 - link

    While I always enjoy reading here, I have to admit this article is not one of those times. I'm not slinging mud or anything, but I think it's highly subjective to consider Intel, at this point, in good standing to make inroads. I agree Intel has the technology/money/resources/willpower to make a killer chip that sips power better than anyone. But as so many have pointed out, and this cannot be changed, Intel doesn't have the ability to do it for a cheap, competitive price. ARM always has been and always will be better at that. It's all about how a company is built. ARM doesn't need to finance a foundry, much less several like Intel. With overhead and size come problems changing business models, especially in manufacturing.

    The thing is, while we don't know yet what the future holds as to the amount of things we will do on a phone, I can guarantee I won't be ripping a DVD or making CAD drawings on it. So fundamentally we will hit a wall where the cost is not worth the money. Am I wrong? I know what the article is pointing to, which is a strong, class-leading, watt-sipping Intel, but they cannot win or be as noteworthy as the article suggests. You can't ask a company to devalue its products. Why? Because you lose either way: 1) you look desperate, or 2) you acknowledge that you were ripping people off before. While they may have had legitimate reasons for pricing, or technology brought prices down, it's perception that's the killer. Is Atom bad? No. But ask a regular Joe; he'll tell you it's crap. But why? Price and perception.

    Intel did some things right and some wrong. They should have realized a while back that desktops were good enough and pushed mobile chips toward better, lower-cost production. But a company so heavy on top can't switch just like that. Now either they will have to restructure to be competitive in the mobile arena or just play second fiddle. They can't right now (unless they change) be a mobile king as they are on the desktop, purely because of company structure, nothing else.
  • jemima puddle-duck - Friday, January 4, 2013 - link

    I'd echo this sentiment. I'm getting less and less interested in the fact that so-and-so has made something slightly better than so-and-so. The chances are this Atom will never see the inside of more than 1 or 2 phones. I want to know why! This is the insight AnandTech, with its extensive contacts, can deliver! I guess what I'm asking for is more politics and less technology :-)
  • vngannxx - Friday, January 4, 2013 - link

    AnandTech should rerun the Nexus 10 benchmark with the AOSP browser.
  • mugiebahar - Friday, January 4, 2013 - link

    While Intel is the 800 lb gorilla, there is no doubt about that, the problem is it's not stuck in a room with a monkey and a chimpanzee (AMD and VIA) this time; it's in a room with a lion (Apple), a tiger (say, Samsung), a Siberian tiger (Qualcomm), a mountain lion (TI), a baby cub (AMD) and a litter full of Chinese cats. So the gorilla is the strongest, but he might just get the hair bitten off his ass cheeks if he doesn't watch it. Please feel free to reorder the cats to better match whoever, as I just thought about it quickly, but I think you can agree. True?
  • UpSpin - Friday, January 4, 2013 - link

    Intel PR, nothing more.
    This article is misleading, confusing and compares totally different things. It's a shame to see such a badly written article on AnandTech, full of meaningless, misleading graphs which just sit there without any further descriptive text. If an image isn't worth some text, it isn't worth showing at all!

    1. There's no use in showing, let alone comparing, Total Power Consumption numbers, because the systems are totally different. So don't show them! Everything else is misleading, most probably on purpose, because the absolutely low-end Intel device naturally looks good in this comparison. But please, if you can't compare things, don't try to compare them. And if you can't compare them, also don't go on to use such numbers, as in Task Energy Total Platform. It's useless.
    You can't measure Qualcomm chips correctly, so you include total platform power draw? Poor excuse. If you can't measure it, don't post it, and don't post false and misleading numbers either.
    2. What is Average Power Draw? What's the use of it? You don't use those graphs in your article at all! Do you know what this means? Exactly: those graphs are useless and meaningless. Why do you post them? They are redundant because of the energy graphs. So naturally, because of the much shorter run time of the A15 SoC, the average graph looks disadvantageous for ARM, which is simply misleading. But well, Intel is probably happy you posted it and thanked you with cash; why else would a sane person post such misleading stuff?
    3. GPU Power: What game? How did it run? Off-screen? The same resolution on every tablet? The same API? The same FPS? It's not surprising that a low-end GPU struggling to keep maybe 10 FPS consumes less power than a high-end GPU displaying 60 FPS at a higher resolution. You haven't said anything about this issue, yet happily compare meaningless numbers.

    I'm sorry, but this article is, right now, garbage. And the only reason for posting such a poorly written article is that Intel must have paid you a lot of money for doing so.

    It's nice that you post such semi-scientific articles, but the way you did it in this case isn't great.
    This article is very very hard to read, because the reader has to do ALL the interpretation.
    You could remove 2/3 of all graphs, and the article would contain the same information.
    By just looking at the graphs, Intel is the overall winner, which, if you do some further comparisons based on your article, is wrong. At most Intel is, according to your graphs, on par with the A15, CPU-wise, which is still a nice outcome for Intel.
    The GPU in the Intel SoC is awful, the CPU competitive.
    The A15 GPU is perfect, the CPU at least as efficient as the Intel one, but much faster.
    Yet, because the article is so confusing and I don't want to waste any further time doing the work a good writer should have done, I see ARM as the clear winner.
    Same or more efficient CPU, much faster CPU, much better GPU, overall winner!
    Similar argumentation for Tegra and Krait.
    Intel has a good CPU, but the SoC looks awful.
  • powerarmour - Friday, January 4, 2013 - link

    "I see ARM as the clear winner.
    Same or more efficient CPU, much faster CPU, much better GPU, overall winner!
    Similar argumentation for Tegra and Krait.
    Intel has a good CPU, but the SoC looks awful."

    That was exactly my conclusion reading through it; I just couldn't be bothered to be eloquent enough to explain it like that, as it seemed obvious to me.

    I look at the SoC as a whole, and apart from a 'slight' advantage on the CPU side in a few select (and likely x86 optimized) browser benchmarks, the Clover Trail SoC is really quite lacklustre.
  • mfergus - Friday, January 4, 2013 - link

    The GPU in the Clover Trail SoC isn't even made by Intel. They could swap it out for anything they wanted, though they want it to be an in-house GPU.
  • Cold Fussion - Saturday, January 5, 2013 - link

    I concur, the article is pretty bad as it stands. Apart from all the poorly presented information, it should have included tests on an Android tablet running the same Krait SoC as the Windows tablet, so we could establish how the different operating systems affect power draw. Without that I don't see how they can reasonably establish the differences between the A15 and the others.
  • wsw1982 - Friday, January 11, 2013 - link

    http://www.phonearena.com/news/Intel-Atom-powered-...

    Check this out... The Clover Trail+ in the Lenovo K900 smartphone scores more than 25000 in AnTuTu on a 1080p display, which just crushes the Snapdragon Pro (4 Kraits) in the Optimus G, and the beloved Samsung Exynos 5440 (2 A15s) in the Nexus 10...

    So, what's going to be the next far cry from the ARMy: "Intel cannot make low-power chips"? Oh no, that's already busted. Then I guess it's going to be "Intel cannot sell smartphone chips as cheap as others" or "We don't care about performance and we don't care about battery life, we just care about compatibility with iOS".
  • extide - Friday, January 4, 2013 - link

    When will you post an article about Bay Trail / Valley View? Usually you guys are pretty fast to post stuff about topics like this, yet I have already seen some info on other sites...
  • jpcy - Friday, January 4, 2013 - link

    ...which I bet CISC users thought had ended about 18 years ago...

    It's good to see a resurgence of this highly useful, extremely low-power and very hardy British CPU platform.

    I remember back in the day when ARM, as used in the Acorn computers (possibly too long ago for most to remember now - I still have an A7000 and a RISC PC with both a StrongARM and a DX2-66, lol), was at war with Intel's Pentium CPU range and AMD's K6s, boasting an almost 1:1 ratio of MIPS:MHz - horsepower for your money (something Intel and AMD were severely lacking in, if I remember correctly).

    And now, well, who'd have thought it... These ARM CPUs are now in nearly everything we use... Phones, smartphones, tablets, notebooks...

    Suppose I was right in the argument with my mate at school after all... RISC, the superior technology (IMHO), may well take over yet!
  • nofumble62 - Friday, January 4, 2013 - link

    No performance advantage, no battery life advantage. Why would anyone bother with incompatible software?
  • sseemaku - Friday, January 4, 2013 - link

    Looks like people have changed religion from AMD to ARM. That's what I see from some comments.
  • mugiebahar - Saturday, January 5, 2013 - link

    Yeah and no. They wanted no paid opinions to screw with the outcome. But Intel hype won over real life.

    Intel is better and will get better - yes.
    Any chance they will compete (on performance AND price) and have legacy support for phone apps - never in the near future, which is the only time frame that matters for them.
  • tuxRoller - Saturday, January 5, 2013 - link

    Also, any chance for an actual performance comparison between the platforms?
    Apple's performance and power use look awesome. Better than I had imagined.
    I'd love to see how they compare on the same tests, however.
  • Kogies - Saturday, January 5, 2013 - link

    It appears the war has begun - well, two wars in fact: the one you have articulately described, and the oft-ensuing war of words...

    Thanks Anand, I appreciate the analysis you have given. It is excellent to get to see the level of granularity you have been able to achieve with your balance of art and science, and knowing where to hook in! I am very interested to see how the L2 cache power draw affects the comparison; just a little jitter in my mind. If nothing else, it looks as if the delicate balance of process tech and desired performance/power may have a greater bearing on this "war" than mere ISA.

    With Krait 300, Haswell, and more A15's this is going to be a tremendous year. Keep up the good work.
  • Torrijos - Saturday, January 5, 2013 - link

    Any chance we could see the same tests run on the latest Apple iPad?
    That way we could have a chance to see what Apple tried to improve compared to the A15 generation.
  • urielshun - Saturday, January 5, 2013 - link

    The whole discussion about ARM and x86 is not important when you look at the economics of each platform. ARM is dirt cheap and works well. It's 1/10th of the price of any current Atom with decent performance (talking about the RK3066).

    Don't underestimate the Chinese; they are having a field day with ARM's pricing model and have shown amazing chips.

    Eight years from now all SoCs will have reached usable performance, and the only things that will matter will be power and cost of integration.
  • iwod - Saturday, January 5, 2013 - link

    Where are you getting 1/10 of the price from? Unless they are produced on the good old 40nm LP node with nothing else, or crap, included, there just aren't any Chinese SoCs selling for $4.
  • some_guy - Saturday, January 5, 2013 - link

    I was reading comments about how Intel can't compete on price because it will cannibalize profits from higher margin chips.

    My sense is that Intel will aggressively attempt to squash ARM just like it did AMD. I think they are aware of the imminent threat.

    Cannibalizing their own profits from higher margin chips is inevitable and may be a better choice than being eaten alive by ARM.

    (BTW I didn't read all 11 pages of comments, so sorry if this was already stated.)
  • nofumble62 - Saturday, January 5, 2013 - link

    Intel's current margin is >60%, but its stock price hasn't risen an inch.
    The concern is about architecture and the future.
    As long as they can get into mobile and get market share back, their stock price will spike.

    So YES, they can and will drop prices to compete. Their cheap smartphones are already in stores in India, China, and Europe.

    Don't forget that there are plenty of companies doing well with much smaller margins.
  • mugiebahar - Saturday, January 5, 2013 - link

    While in theory I agree, it's not so easy. A simple illustration (only for smartphones/tablets):

    Company 1 (ARM) - employes 1 person and works from home.
    Company 2 (Intel) - employes 100 people and rents a warehouse.

    Cost to run business 1 - ?
    Cost to run business 2 - ???

    ARM owns the market now (just for now is all I'm saying) - revenue per expenses = great
    Intel is negligible - revenue to expenses = disaster (not that it won't change, but think of the
    money spent already over the years)

    Company 1 has loyalty and legacy in this market.
    Company 2 is HOPING it has loyalty and that its legacy support will matter in this market!

    The article makes it seem that it's a one deal Intel will be in the marke and we just have to wait. But that's wrong. Right now we are having phone that can do everything we need at the moment. Yes it will get better but we don't need the leaps n bounds anymore @ a cost we really don't need to spend.

    Also, if Intel sells a CPU SoC for $20-30 like an ARM SoC, uneducated people will see it as an insult that the desktop CPU costs so much. And while they are completely different levels of performance, the average Joe won't know, and humans are the worst critics, correct? I said before that it's not about Intel being unable to make a chip. It's that the company is structured wrong for this market. You have to be small and agile. AMD would have a better shot at it, if they had the smarts.
  • some_guy - Sunday, January 6, 2013 - link

    It could get ugly.

    Intel will not give up the high end to ARM without a fight, because the next stop is the desktop market. And Intel may not succeed.

    Apple is already moving away from Intel in a major way, which gives them a lot of leverage.

    This may be Intel's long twilight, and AMD's shorter twilight.
  • CuriousSoul - Saturday, January 5, 2013 - link

    Without knowing the frames per second and game settings for 3D Game 1, how is it possible to compare all of the graphs in any sense other than maximum mW?

    It's like saying a VW Golf is more frugal around a race track than a Ferrari!

    One resistor could draw 1mW and another 1000mW, but which is better at the actual 3D Game 1? (In this case they offer an identical experience.)
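    To make that concrete: power alone only becomes meaningful once it is normalized by the work done, e.g. energy per rendered frame. Here is a rough sketch; the power figures and frame counts are made-up placeholders, not the article's measurements:

        # Energy per frame = average power x duration / frames rendered.
        # All numbers below are hypothetical, purely to illustrate the normalization.

        def energy_per_frame_mj(avg_power_mw, duration_s, frames):
            """Millijoules consumed per rendered frame (mW * s = mJ)."""
            return (avg_power_mw * duration_s) / frames

        # Two hypothetical SoCs running the same 60-second game scene:
        soc_a = energy_per_frame_mj(avg_power_mw=1000, duration_s=60, frames=3600)  # 60 fps
        soc_b = energy_per_frame_mj(avg_power_mw=700, duration_s=60, frames=1800)   # 30 fps
        print(soc_a, soc_b)  # ~16.7 vs ~23.3 mJ/frame: the "lower power" chip costs more per frame

    On that metric the lower-power chip can easily be the less efficient one, which is exactly why the FPS and settings matter.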
  • Cold Fussion - Saturday, January 5, 2013 - link

    Yeah, this was mentioned multiple times before. It doesn't do much to alleviate the claims of Anand's Intel bias, does it?
  • jameskatt - Sunday, January 6, 2013 - link

    The first problem for Intel is that every major ARM chip company develops their own ARM chips. Apple does it. Samsung does it, etc. This means that these companies will NOT buy third party CPU/GPUs for their ARM devices. By developing their own ARM chips, they cut out Intel as the middle man and Intel's profits. For low priced devices, it is crucial to do this since margins can be very low or in Apple's case, the need to make profit is very high. These companies will not be Intel customers for chips. Intel is locked out. This is why Texas Instruments got out of the ARM business.

    The second problem for Intel is that ARM chips have a huge ecosystem supporting them. Apple has its iOS software kingdom with over a million apps. Android has its hundreds of thousands of apps. These ecosystems have a heavy momentum of their own, and their numbers dwarf the number of apps available to the Intel platform. Thus Intel has a difficult job convincing anyone to incorporate its CPUs into devices that would automatically be incompatible with the existing ecosystems.

    What Intel can do is simply keep aggressively developing its own chips so that they compete strongly with the ARM chips and Intel can keep the ecosystem it currently has - laptops, desktops, mainframes, and supercomputers.

    Intel won't be able to compete on price. It has to make its own profits. It won't be able to compete at the bottom of the device market. But it certainly can keep and expand its own arena. Microsoft's Surface tablets are an example of an attempt to keep its own ecosystem.
  • omind - Monday, January 7, 2013 - link

    James,

    Intel is not trying to sell IP to existing ARM chip companies (Samsung, Nvidia, etc.) but is instead creating complete SoCs, including the GPU, video encoders and decoders, and so on. They are positioning themselves as competitors to SoC manufacturers, not as suppliers.

    Secondly, the Atoms already in the market (Medfield) can leverage the Android app ecosystem. This is accomplished by a version of Android's Dalvik runtime adapted to the x86 ISA. I personally own a Medfield-powered Motorola RAZR i, and I can say that every single app I have downloaded from Google's Play store runs on it without any problem whatsoever. Contrary to popular belief, there is no ecosystem problem for the end user.
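    As a side note on why compatibility mostly just works: pure Dalvik bytecode is ISA-neutral, so the only apps that even notice the CPU are the ones shipping native (NDK) libraries inside the APK. A rough sketch of how one could check for that (the file name is just a placeholder; an APK is an ordinary ZIP archive with native libs under lib/<abi>/):

        # List the native ABIs an APK ships; no lib/ entries means a pure Dalvik app.
        import zipfile

        def native_abis(apk_path):
            abis = set()
            with zipfile.ZipFile(apk_path) as apk:
                for name in apk.namelist():
                    parts = name.split("/")
                    if len(parts) >= 3 and parts[0] == "lib":
                        abis.add(parts[1])  # e.g. "armeabi-v7a", "x86"
            return abis

        abis = native_abis("example.apk")  # placeholder path
        print("Pure Dalvik app" if not abis else "Ships native libs for: " + ", ".join(sorted(abis)))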

    Best regards
  • mrdude - Wednesday, January 9, 2013 - link

    I think his point is that the big device makers today also have their own SoCs, locking Intel out from the beginning. The push towards vertical integration threatens Intel drastically. And unlike a Qualcomm, Intel currently has very little to offer that stands out from the rest of the competition in the tablet and smartphone space (I say tablet because sales of x86 tablets have been really poor; consumers don't value x86 compatibility on their tablets).

    Intel just can't compete on price. I know this gets brought up endlessly, but it's absolutely true. Intel operates on sky-high margins, and that's impossible to sustain if you're only making SoCs for tablets and smartphones. If the high end is cannibalized at an accelerating rate going forward (and the figures I've read point to this), then Intel is going to be in big, big trouble. There's just no way the company can maintain its sky-high margins, and even keep its own fabs (and R&D), by selling ~$20 SoCs. It's just not gonna happen. Either Intel provides something that only it can offer, that every device maker and consumer wants, and that warrants a higher price tag, or it begins its slow march downhill.
  • wsw1982 - Friday, January 11, 2013 - link

    Intel sells many more processors today than it did in 2009/2010; if it could maintain its own fabs then, I don't see why it cannot now. The $20-$30 SoC is a new market for Intel, not a replacement (at least in the near future). An additional market should be welcomed, right? It reminds me of 2008, when Intel first introduced the Atom: a lot of geniuses started to cry that Intel had opened up the netbook market, was selling chips at $30, and would cannibalize the notebook market and hurt its own profit. But the years that followed were some of Intel's fastest-growing.

    As for price, why can't Intel compete? Doesn't it have enough revenue (PC & server revenue is much higher than in 2009 and 2010) to cover the R&D? Does it need to pay a foundry fee to TSMC (which alone enjoys a 40% margin)? Does it have to pay a license fee to ARM? Doesn't it already spend a fortune on PR? Doesn't it already have the manufacturing volume? (Capacity sitting idle is wasted anyway, so why not produce something that at least earns some money?)

    And I think Intel has never been as competitive as it is now. It has been gaining position in the server market for the last few years, its PC business has never had such high market share, and its mobile chips are starting to crush ARM in every respect.

    http://www.phonearena.com/news/Intel-Atom-powered-...

    Check this out: the Lenovo K900 smartphone scores more than 25000 in AnTuTu on a 1080p display, which not only crushes the Snapdragon S4 Pro (quad Krait) in the Optimus G but also the newly released Samsung Exynos 5250 (dual A15) in the Nexus 10.
  • StrangerGuy - Sunday, March 24, 2013 - link

    You really think having a slightly better chip would make Samsung risk everything by getting locked into a chip with an ISA owned, designed, and manufactured by one single supplier? Especially when that supplier has historically shown all sorts of monopolistic abuses?

    And when a quad A7 can already scroll desktop sites in Android capped at 60 fps, additional performance provides very little real-world advantage for most users. I'd even say most users would be annoyed by I/O bottlenecks like LTE speeds long before they'd call 2012+ class ARM CPUs too slow.
  • duploxxx - Monday, January 7, 2013 - link

    Now it is clear that, when Intel provides this much material and these resources, they know they are at least OK in the comparison against ARM on CPU performance and power... else they wouldn't make such a fuss.

    But what about the GPU? Is there any decent benchmark or testing available on GPU performance?

    I played with an HP Envy x2 in December, and after some questioning they installed a few light games, which were "OK", but I wonder how good the GPU in the Atom really is. Power consumption looks fine, but I'd rather have a performing GPU at a bit higher power than a non-performing one.
  • memosk - Tuesday, January 8, 2013 - link

    It looks like the old PowerPC vs. PC problem.
    PowerPC had the faster RISC processor and the PC had the slower x86-style processor.

    The end result was that the classical PC won that battle, because of tradition and, more importantly, the knowledge of the platform spread across users, producers, and programmers...

    And you should think about economic factors such as cost amortization and the whole environment known as logistics.

    The same problem played out with Tesla vs. Edison. Tesla had the better ideas and Edison was the businessman. Who won? :)
  • memosk - Tuesday, January 8, 2013 - link

    Nokia seriously tried to sell Windows Phone 8 phones without SD cards, and they said it was because of Microsoft.

    How can you then compete against Android phones with SD cards? But if you copy Apple, you think it makes sense.

    Generally you need a strategy that is complete, logical, your OWN, CONSISTENT, and ORIGINAL.

    If you copy something, the danger is that your strategy will be inconsistent, leaky, "two-way", and vague, with tragic errors like incompatibility, or like the legendary Siemens phones: one crash per day. :D
  • apozean - Friday, January 11, 2013 - link

    I studied the setup, and it appears that Intel just wants to take on Nvidia's Tegra 3. Here are a couple of differences that I think are not highlighted appropriately:

    1. They used an Android tablet for Atom, an Android tablet for Krait, but a Win RT device (Surface) for Tegra 3. It must have been very difficult to fund a Google Nexus 7. Keeping the same OS across the devices would have controlled for a lot of other system variables. Wouldn't it?

    2. Tegra 3 is the only quad-core chip among those being compared; Atom and Krait are dual-core. If all four cores are running, wouldn't it make a difference to the idle power?

    3. Tegra 3 is built on 40nm and is one of the first A9 SoCs. In contrast, Atom is 32nm and Krait is 28nm.

    How does Tegra 3 fit in this setup?
  • some_guy - Wednesday, January 16, 2013 - link

    I'm thinking this may be the beginning of Intel being commoditized and the end of the juicy margins for most of its sales.

    I was just reading an article about how hedge funds love Intel. I don't see it, but that doesn't mean the hedge funds won't make money. Perhaps they know the earnings report that is coming out soon, maybe tomorrow, will be good. http://www.insidermonkey.com/blog/top-semiconducto...
  • raptorious - Wednesday, February 20, 2013 - link

    But Anand has no clue what the rails might actually be powering. How do we know that the "GPU rail" is in fact powering just the GPU and not the entire uncore of the SoC? This article is completely biased towards Intel and lacks true engineering rigor.
  • EtTch - Tuesday, April 2, 2013 - link

    My take on all of this is that ARM and x86 are not comparable at this point, as far as judging the instruction set architectures goes, because of the different lithography sizes and the new 3D transistors. Only when an ARM-based SoC finally has all the physical advantages of the x86 part will the two be truly comparable. Right now x86 is likely to have lower power consumption than ARM-based processors built on a larger lithography size. (I really don't know what it's called, but I'll go out on a limb and call it lithography size even though I know I'm most likely wrong.)
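    The intuition behind that can be put in numbers with the usual rule of thumb for dynamic power, P ~ a * C * V^2 * f: a smaller process node typically lowers both switched capacitance and supply voltage, so identical logic burns less power. A rough sketch follows; the 0.7x capacitance and 0.9x voltage factors are illustrative assumptions, not figures for any particular node:

        # Relative dynamic power under scaling: P ~ activity * C * V^2 * f.
        def relative_dynamic_power(cap_ratio, volt_ratio, freq_ratio):
            """Power of a scaled design relative to a baseline of 1.0."""
            return cap_ratio * volt_ratio ** 2 * freq_ratio

        # Illustrative only: ~0.7x capacitance and ~0.9x voltage at the same clock
        # lands around 0.57x the dynamic power of the older-node design.
        print(relative_dynamic_power(cap_ratio=0.7, volt_ratio=0.9, freq_ratio=1.0))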
