117 Comments

  • mayankleoboy1 - Tuesday, October 30, 2012 - link

    Great for ARM that it has so many performance-improving avenues open. Unlike x86, which has basically stagnated....
  • A5 - Tuesday, October 30, 2012 - link

    Yeah, 10% IPC improvements every year is "stagnation".

    ARM is just going through the same super-growth period that x86 went through in the 90s.
  • ssj4Gogeta - Wednesday, October 31, 2012 - link

    It continues to amaze me how Intel keep increasing the IPC every year while AMD desperately try to tackle the problem by throwing more cores at it.
  • Pipperox - Wednesday, October 31, 2012 - link

    This is not entirely true.
    What you say is correct for the six-core Thuban, but not for the Bulldozer and Piledriver/Trinity architectures.
    They didn't simply "throw more cores at it"; they designed new cores that are smaller and simpler and share resources, so they could increase the core count without creating a monster chip in terms of size (which would have been difficult and expensive to produce, and would dissipate a lot more power).
    So in the end Intel and AMD took two different strategies to increase performance: Intel increases the efficiency of its existing cores, while AMD changed architecture, creating cores that are individually slower but easier to pack more of onto a chip.

    Also, with Trinity/Piledriver AMD achieved a solid 15% IPC improvement without increasing core count (sort of like an Intel "tock").
  • maroon1 - Friday, November 2, 2012 - link

    "without creating a monster chip in terms of size"

    The die size of Bulldozer/Piledriver is 315 mm^2.

    Ivy Bridge with the HD 4000 iGPU is only 160 mm^2.

    That's almost twice as big as Ivy Bridge.
  • parkerm35 - Wednesday, December 19, 2012 - link

    That might have a lot to do with Intel using 22nm and AMD using 32nm.....Doh!
  • gruffi - Monday, January 7, 2013 - link

    Not comparable. Orochi is 32 nm, Ivy Bridge is 22 nm. 22 nm has a theoretical area advantage of a factor of ~2.12. Furthermore, Orochi is a server design, while Ivy Bridge is a client design. Server designs are generally bigger than client designs (more cache, more interconnects, etc). The only useful comparison is a core logic comparison. Bulldozer on 32 nm is ~19 mm^2, Sandy Bridge on 32 nm is ~18 mm^2. So, architecture-wise, not much difference at all. The rest is just design-specific and also depends on the process node.
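    For reference, that ~2.12 factor is just the square of the linear shrink. A quick illustrative calculation (Python, arithmetic only):

    ```python
    # Ideal (theoretical) area scaling going from a 32 nm to a 22 nm process:
    # die area scales with the square of the linear feature size.
    scale = (32 / 22) ** 2
    print(f"Theoretical area advantage: ~{scale:.2f}x")  # ~2.12x
    ```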
  • gruffi - Monday, January 7, 2013 - link

    LOL, poor troll attempt. AMD increases IPC as well: Stars -> Husky, Bulldozer -> Piledriver, Bobcat -> Jaguar. AMD doesn't throw more cores at the problem, but Intel desperately does. When AMD launched Barcelona back in 2007, they had 4 physical cores. The current Orochi design, more than 5 years later, also has only 4 physical cores. That's a core increase of exactly zero percent per die! AMD tries to improve performance with a better multithreading technology than Hyper-Threading at the core level. Even if AMD's marketing says so, Orochi is not really 8-core. It's more like 4-core with 8 threads. Now compare that with Intel. Back in 2007 they had the C2Q, which was actually only a double dual-core (Conroe). Then there was Nehalem (4 cores), Dunnington (6 cores), Nehalem EX / Sandy Bridge E (8 cores) and Westmere EX (10 cores). That's a core increase of exactly 400 percent per die over 5 years! Ivy Bridge EX is rumored to have up to 15 (16) cores per die. So the truth is, AMD increases IPC and tries to implement better core technologies, while Intel mostly just throws more cores in. And it amazes me how AMD continues their innovative designs (Trinity/Kaveri, Kabini) with far fewer resources, while Intel does the same old, boring stuff over and over again with just little tweaks. If they didn't have their manufacturing advantage, they would completely suck ass.
  • jakoh - Thursday, June 11, 2015 - link

    Let's end this debate: Intel is in front (14nm), regardless of how they both tackle the problems.
  • andrewaggb - Tuesday, October 30, 2012 - link

    I don't know that x86 has stagnated; Intel keeps upping performance and reducing power every year. It's not revolutionary, but it's constant improvement. Atom 2 might be good.

    ARM is definitely more interesting at the moment.
  • Khato - Tuesday, October 30, 2012 - link

    We're so close to at last having another piece of the puzzle; just a few more days/weeks before we get some good performance indications of the A15. I fully expect that it'll beat out the current Atom while being decimated by anything in the Core line... But the real question is just how far behind Atom will be, 'cause Atom's upcoming Silvermont core architecture could easily see a 2x performance gain.
  • Krysto - Tuesday, October 30, 2012 - link

    It's not "decimated" by the Core line. Exynos 5 Dual seems to be getting about 3-4x lower performance than my quad core Sandy Bridge laptop, in several browser tests. Considering it only uses 1-2W of power, that's pretty impressive. A57/A53 set-up should be good for just about any casual user, for much lower prices.
  • Khato - Tuesday, October 30, 2012 - link

    Eh, according to that review you linked in another comment, it's 2x slower than a dual-core SNB-based Celeron running at 1.3 GHz. Even that can qualify as being 'decimated' in terms of performance. Power is indeed an important metric, which is why I'm quite saddened not to have seen any figures on that front in the Chromebook reviews available so far.
  • Wilco1 - Tuesday, October 30, 2012 - link

    The simple fact that an A15 is now just 2x slower than Sandy Bridge at the same frequency is simply amazing, and proof of how fast ARM CPUs are advancing on Intel! If anything, Sandy Bridge is being decimated here by being around a factor of 10 worse on power and cost.
  • Shadowmaster625 - Tuesday, October 30, 2012 - link

    You're not even comparing apples to apples. When comparing a Sandy Bridge Celeron to an Exynos 5 SoC, you have to isolate how much power is being used by the memory bus, the storage, the display, the I/O controller... and compare each one separately.

    In most cases, features don't scale linearly at all, such as with SATA. If you want SATA, you have to pay a steep power penalty.

    In some cases the features scale somewhat linearly, such as with memory bandwidth and latency. Compare Sandy Bridge memory bandwidth and latency with the Exynos 5's.

    Then you have to look at price. What is the price of a Sandy Bridge mobile Celeron, its chipset, and 4GB of DDR? And how much is an Exynos 5 and 2GB of LPDDR? I wouldn't be surprised if the Exynos parts actually cost more. Don't forget that these phones are highly subsidized. They're actually quite expensive, far more than a $300 Celeron notebook.
  • Wilco1 - Tuesday, October 30, 2012 - link

    I'm not sure what you are getting at, but an Exynos 5 SoC contains far more on-chip than the Celeron, which needs extra support chips that add cost and consume more power.

    It's true the Sandy Bridge memory speed is a bit lower than the Exynos's (10.6 GB/s vs 12.8 GB/s), but I don't think this affects power that much.

    Price? Well we know that the Celeron Chromebook sells for $449 while the Exynos 5 one costs just $249. Since the specs are fairly similar, much of that extra cost is due to the expensive Intel CPU and chipset. The Celeron 867 costs a whopping $134 vs around $20 for Exynos.
  • lowlymarine - Tuesday, October 30, 2012 - link

    SunSpider is an awful benchmark of modern processor performance. Certain browsers are "optimized" specifically for it to an absurd degree, it's single-threaded, fits almost entirely in processor cache, and cares about basically nothing but clock speed.

    Try something like LINPACK or software decode of HD content and tell me quad-core Sandy Bridge is "only 3-4x" faster than current ARM chips.
  • dagamer34 - Tuesday, October 30, 2012 - link

    The fact that there's such a huge gulf between the Sunspider score on the HTC 8X and HTC One X should disqualify it as a performance benchmark. You can't use the OS as a variable in one comparison (Android v. Windows Phone), then change the CPU and parts of the kernel in another comparison (Android v Android). It's inconsistent.
  • Symmetry81 - Tuesday, October 30, 2012 - link

    I'd be really surprised if an A15 couldn't beat one of the old Core Solos from '06, even single-threaded. Even the dual-core Snapdragon S4 in a Galaxy S III comes pretty close if you check out the scores on Geekbench, and the Krait cores in those are about halfway between an A9 and an A15.
  • Kurge - Thursday, November 1, 2012 - link

    Not even close; the Core Solo would kill it in real work like video decoding. The more CPU-constrained the task, the more the Core would spank it.
  • Heisenburger - Tuesday, October 30, 2012 - link

    "ARM is definitely more interesting at the moment."

    Because that's where all the buzz is at: smartphones and tablets.
  • BSMonitor - Tuesday, October 30, 2012 - link

    Because that's where people with money and no knowledge see "new" profits..
  • listic - Tuesday, October 30, 2012 - link

    "reducing power every year"? Not so much, in my humble opinion.

    Look where Intel was in 2004:
    http://ark.intel.com/products/27609
    http://en.wikipedia.org/wiki/List_of_Intel_Pentium...
    The Pentium M ULV 733 processor, released on July 20, 2004, had a TDP of 5W.

    Sure, nowadays you get multi-core, Hyper-Threading, the memory controller and integrated graphics on-chip, but the lowest-power part you can buy (apart from Atom) has a 17W TDP. That doesn't count as reducing power in my book. Surely, some stagnation is there.
  • andrewaggb - Tuesday, October 30, 2012 - link

    Those are slightly different things. Performance per watt has generally improved with every tick/tock.

    But overall TDP has been improving with ivy bridge and supposedly haswell. Their new atom designs have pretty good TDP as well.

    I get what you're saying, though: they've overall failed to create new markets by continually releasing products with the same TDPs.
  • lowlymarine - Tuesday, October 30, 2012 - link

    That Pentium M ULV included nothing but a single 32-bit CPU core on die. A 10W Haswell die crams in two SMT-enabled 64-bit x86 cores, the memory controller, the PCI-Express controller, the clock generator, an integrated GPU that runs circles around any graphics card you could buy in 2004, and the TMDS/DP links necessary to output those graphics to multiple displays at up to 3840x2160 each.
  • Stahn Aileron - Tuesday, October 30, 2012 - link

    And you're comparing an older CPU-only design to a current, near-SoC design. Intel integrated most, if not all, of the northbridge into the die. 17W for about 80%-90% of an >entire platform< isn't that bad. Especially one as complex as x86.

    They might not be pushing >absolute< power use down every year, but they have been improving power efficiency every year for a while. Getting better performance in the same or a slightly lower power envelope isn't something to overlook.

    Intel isn't really stagnant by any means. They may only be evolutionary instead of revolutionary with each new CPU release. Stagnant would be more like the Intel of the P4 days. (Ignoring the aforementioned Pentium M line for laptops in that same era.)
  • bhtooefr - Tuesday, October 30, 2012 - link

    And the 82855GM northbridge (whose functions are now part of any modern Intel CPU) was another 3.2 watts on top of that.

    The 82801DBM southbridge was another 2.2 watts. Total platform consumption, 10.4 watts.

    Now, let's compare to a Core i7-3667U. 17 watts, with cTDP down to 14 watts. Toss in a UM77 chipset (essentially a southbridge), and you add 3 watts. 17 watts platform power consumption (albeit with an underclock).

    Now consider that that's peak power consumption. Also consider that modern power saving theory is, if you can get the computation done as quickly as possible and get to a lower idle state, you get better battery life. I'd say that an i7-3667U, even underclocked, can beat the CRAP out of a Dothan, even the fastest one (2.1 GHz or whatever it was), on performance... with one core DISABLED.
  • BSMonitor - Tuesday, October 30, 2012 - link

    And 64-bit addresses do what in a system with 512MB to 1GB of system memory?
  • name99 - Tuesday, October 30, 2012 - link

    Oh don't be stupid.

    (a) It is no secret that ARM hopes to take some of the server space away from Intel.

    (b) There are already a number of tablets on the market with 2GiB of RAM --- 31 bits...

    I know it is standard in the PC world to wait for the pain to actually bite before figuring out a solution (starting with the 640 kiB problem, and continuing pretty much unendingly since then; most recently, see 4 kiB hard-drive sectors). But ARM would like to be a little more intelligent than that and have a solution in place as soon as tablet manufacturers want to go beyond 4 GiB, not three years AFTER they want to, by which time four different incompatible hacks have been introduced.
  • bhtooefr - Tuesday, October 30, 2012 - link

    Well, ARM's already introduced one hack for Cortex-A15 and A7 - LPAE, which is a 40-bit mode. (Essentially, the same idea as PAE on an Intel x86 32-bit CPU, just more of it.)

    And, they were smart about it - apparently the LPAE format was intended to be used for AArch64's translation table.
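    To give a rough sense of scale for those address widths, here is a bit of illustrative arithmetic (Python; numbers only, not ARM reference material):

    ```python
    # Addressable physical memory at the two ARM address widths mentioned above.
    for name, bits in [("ARMv7 (32-bit)", 32), ("LPAE (40-bit)", 40)]:
        gib = 2**bits / 2**30
        print(f"{name}: 2^{bits} bytes = {gib:,.0f} GiB")
    # ARMv7 (32-bit): 2^32 bytes = 4 GiB
    # LPAE (40-bit): 2^40 bytes = 1,024 GiB, i.e. 1 TiB
    ```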
  • lmcd - Tuesday, October 30, 2012 - link

    Your name contradicts your post. My GSIII has 2GB of RAM, while many Core 2 Duo machines also shipped with 2GB of RAM. And RAM is going to continue to increase on these devices.
  • Kurge - Thursday, November 1, 2012 - link

    Stagnated? Lolwut? Did you know Haswell will have a 10W SKU?

    Who do you think will be more successful, ARM scaling up or Intel scaling down?
  • parkerm35 - Wednesday, December 19, 2012 - link

    Foundations up, always! Hence why it took Intel 6 years to get a chip into a phone, while only a year and a half ago ARM decided it wanted a share of the PC market, and now it has one.
  • dylan522p - Tuesday, April 9, 2013 - link

    Intel only took 6 years because it just started to care about the mobile space.
  • melgross - Tuesday, October 30, 2012 - link

    It seems as though the advantages Atom might have won't be as overwhelming as thought.
  • Lonyo - Tuesday, October 30, 2012 - link

    You seem to forget that these CPUs are probably 18 months away.
    Intel won't be using the same Atom CPUs we have today in 18 months. Haswell, which is optimised for lower-power operation, will also have arrived.

    If you think Intel is sitting around not planning ahead, you are incorrect.

    22nm Silvermont will be here in 2013, and Intel will be aiming for 14nm by 2014, as Atom catches up to "desktop" chips in terms of process node.

    http://www.anandtech.com/show/4333/intels-silvermo...

    http://www.anandtech.com/show/4345/intels-2011-inv...
  • Kevin G - Tuesday, October 30, 2012 - link

    Actually Atom will be the lead chip for 14 nm production and skip ahead of its laptop/desktop cousins by a few months.
  • Kurge - Thursday, November 1, 2012 - link

    I hadn't seen this, but if true that's hard core game on.
  • Krysto - Tuesday, October 30, 2012 - link

    "Globalfoundries intends to have 14nm finfet in volme manufacturing in 2014, the same timescale as Intel has for introducing 14nm finfet manufacturing."

    20nm next year.

    http://www.electronicsweekly.com/Articles/05/10/20...
  • Khato - Tuesday, October 30, 2012 - link

    Feel free to believe Globalfoundries' PR roadmap - I'll wait until they actually deliver as their track record on meeting their roadmaps doesn't exactly inspire confidence. Needless to say, it'd be extraordinarily impressive if they actually managed to pull it off seeing as how they're claiming the ability to go from 28nm to 14nm in under 2 years.
  • name99 - Tuesday, October 30, 2012 - link

    To understand the actual issue against Intel, look at what is being shown here. It is not a new CPU, it is a new architecture. Intel is still on the same µarch for Atom that they had five years ago --- all they've done since then is process improvements and moving more parts onto the CPU die.

    In that same time, ARM have been able to make substantial arch changes --- over say the last two years we've seen A4 to A5 to A6/Swift in the Apple space, along with A15 and now these early ARM64 designs.

    THIS is, and always has been, ARM's strength and Intel's weakness. Intel takes five to seven years to spin a new design because their cores are so complex. The only way they can run the design machine faster is multiple teams. You can get some idea of the expense of that (and remember, they can only charge ARM-like prices for these chips, not desktop prices --- AND if they push too hard on these chips they will start to cannibalize the low-end desktop market) by seeing that they have not done this for Atom, even though it's obvious how important it is.

    We'll get a new Intel design fairly soon. OK, big step forward. And then what --- five more years of stasis while Apple, nVidia, Qualcomm, ARM are all changing their architectures almost annually?

    In the past, Intel had complexity against it; but had the compensation of a huge market. Now they have a smaller market, while ARM has the huge market, AND their complexity is that much worse than it was in the days of the P6.
  • andrewaggb - Tuesday, October 30, 2012 - link

    I think you're forgetting that ARM is about to hit the IPC wall. These new chips have essentially the same execution abilities as modern Intel and AMD chips have had for a long time. It's not that Intel and AMD are stupid; they just can't get more parallelism out of code. So they focus on caches, latency, buffers, MMX, SSE, etc., adding hyper-threading, more cores, and so on.

    ARM is just getting to the hard part. Remember Itanium? PowerPC? Alpha? All of those other advanced processors that in practice aren't much different from or better than a Xeon?

    Intel's threat is power consumption; absolute performance has never been a problem for them. If ARM can deliver 70-100% of the speed at 50% of the power, that's a threat.

    It's killing Intel in the mobile space. In the server space it's a race to see if Intel can get power consumption down faster than ARM can get performance up.
  • MadMan007 - Tuesday, October 30, 2012 - link

    Intel is going to a 2-year cadence for Atom. They had originally used a 5-year cadence, which is their classic pre-tick/tock schedule, which is why we are finally about to see a new Atom architecture.

    In short, to answer this "And then what --- five more years of stasis.." No.
  • A5 - Tuesday, October 30, 2012 - link

    Eh. I'm guessing that if we had a slide deck about the 2014 Atom chips, it'd seem pretty exciting, too.
  • Krysto - Tuesday, October 30, 2012 - link

    Actually, Intel tends to show them 5 years out or something. ARM announces them 2 years ahead because that's how their business works: since they only make the IP for the chips, they can't announce just one year ahead. Plus, one ARM CPU generation lasts 2 years.
  • Matias - Tuesday, October 30, 2012 - link

    Wow, looks like a big disruption is on the horizon. Now let's hope they execute on all these promises, and we may even see some desktop use in the future with Win8 ported. Beware, Intel...
  • powerarmour - Tuesday, October 30, 2012 - link

    Windows 8 has already been ported somewhat = Windows RT

    If WinRT apps take off, and legacy x86 slowly simmers off the boil, then these CPUs should be the basis for some nice tablets and mini-desktops in the future.
  • Krysto - Tuesday, October 30, 2012 - link

    Too bad Windows RT doesn't work like Linux (and, I expect, Mac OS soon too) and can't just transition smoothly from x86 to ARM, allowing all previous apps to work on ARM.

    Do we know what clock speed the A57 will start at? 2.5 GHz maybe? The A53 I assume at 1.0-1.2 GHz.
  • Krysto - Tuesday, October 30, 2012 - link

    Answered my own question. It sounds like the A53 will start at 1.3 GHz, and the A57 will top out at 3 GHz.

    "For those who are still looking for gigahertz performance numbers Hurley said that the new A-50 family will deliver performance ranging from 1.3 gigahertz to 3 Gigahertz depending on how the ARM licensees tweak their designs."

    From Gigaom:
    http://gigaom.com/2012/10/30/meet-arms-two-newest-...
  • aicom64 - Wednesday, October 31, 2012 - link

    Linux doesn't allow x86 binaries to run on ARM systems. You have to recompile from source or try to obtain a properly compiled package.

    I don't know if Mac OS will because usually it only makes sense to pay the emulator penalty when you're moving from a lower performance to a higher performance platform (68k -> PPC, PPC -> Intel).
  • powerarmour - Tuesday, October 30, 2012 - link

    Intel must be secretly sweating. The Atom can only hold on in its current form for so long, and will Haswell be able to scale down far enough in the future?
  • Krysto - Tuesday, October 30, 2012 - link

    Under the exact same software, a dual core A15 should already out-perform a dual core Atom at the same power level, and that Atom won't even be available until next year.
  • A5 - Tuesday, October 30, 2012 - link

    Link? Everything I've seen said Medfield and A15 have about the same performance.
  • Krysto - Tuesday, October 30, 2012 - link

    What? Are you confusing A15 with Krait by any chance?

    http://gigaom.com/mobile/intel-v-arm-the-chromeboo...
  • dcollins - Tuesday, October 30, 2012 - link

    Krait and A15 should have fairly similar performance on a clock to clock basis. They have very similar execution resources.
  • Krysto - Tuesday, October 30, 2012 - link

    Not really. Krait is definitely weaker.
  • FearfulSPARTAN - Wednesday, October 31, 2012 - link

    I disagree; Krait is only a little weaker than the Cortex-A15, and in my opinion it's overall better for smartphones in power consumption and performance.
  • parkerm35 - Wednesday, December 19, 2012 - link

    http://www.phoronix.com/scan.php?page=article&...

    Atom 1.8GHz 13W TDP dual-core/4-thread vs. A15 dual-core 1.7GHz 4.5W TDP = fail.

    The A15 makes mincemeat out of the Atom! Roll on the quad cores @ 2.5 GHz!
  • bcg27 - Tuesday, October 30, 2012 - link

    Based on the timing, this most likely wouldn't compete against Haswell but rather Broadwell, since they expect silicon to be available middle to end of 2014.

    I'm sure Intel is concerned about ARM's advantages at lower power, but that is definitely a good thing, as it will really push them to innovate in that space. They will have two iterations to see what they can come up with. It will be very interesting to see ULV Broadwell vs. ARM in the tablet/netbook space, or whatever that market ends up being, in two years.
  • Krysto - Tuesday, October 30, 2012 - link

    Haswell...Broadwell. What are you talking about? They are not in the same category of performance, let alone price. Haswell will cost like $300 a chip. ARM chips cost $20. The ONLY chip Intel has to compete with ARM is the single core Medfield. Not even Clover Trail is in the same category (has much higher TDP). Clover Trail is more like a netbook chip with a 6 cell battery alongside.
  • Khato - Tuesday, October 30, 2012 - link

    Oh, so all the current ARM chips used in tablets are 'more like a netbook chip with a 6 cell battery alongside' then? Good to know.

    http://www.anandtech.com/show/6340/intel-details-a...

    Feel free to disregard the power consumption numbers, since clearly Intel can't be trusted to provide realistic numbers on testing they know review sites will be able to replicate once the platform's launched.
  • Klimax - Wednesday, October 31, 2012 - link

    $300 a chip? Why do you quote the cost of an LGA2011-like (entry-level) platform?
    In reality it's in the $50+ range. Are you trying to spread FUD?
  • wsw1982 - Wednesday, October 31, 2012 - link

    ARM will not be comparable with Haswell, but with Atom.
  • Kurge - Thursday, November 1, 2012 - link

    Lolwut? ARM is moving up to the 5W range with its bigger ARM chips. Haswell will have a 10W part, maybe even lower.

    It's silly to think Intel won't push Broadwell down to ~5W. So on the high end ARM will be competing with Core chips, which is a battle it can't hope to win.

    On the low end it'll be Atom, but it's not like Medfield will be what's used in 2014.
  • ImSpartacus - Tuesday, October 30, 2012 - link

    To be honest, I want to keep Intel sweating.
  • A5 - Tuesday, October 30, 2012 - link

    Yeah. Intel is at its best when they have to really compete.
  • B1gBOY - Tuesday, October 30, 2012 - link

    Not only Intel. If you think about it, the AnandTech EIC must be sweating too... You know what I mean...
  • bengildenstein - Tuesday, October 30, 2012 - link

    When hearing "3x performance at 0.25 the size," it feels eerily close to the same tech strategy as Imaginations Meta CPU core. For the uninitiated, the Meta core eschews out-of-order operations, and instead adopts an in-order multi-threaded approach which gives a significant bump in performance for cache-miss scenarios when spread over multiple cores, but is radically more simple than traditional ooo cores (hence the reduction in die space).
  • blanarahul - Tuesday, October 30, 2012 - link

    Now AMD's announcement makes sense.
  • blanarahul - Tuesday, October 30, 2012 - link

    Hmmmmm. Now we all know what Galaxy S5 will have. And what Qualcomm is working on (Snapdragon S5).
  • blanarahul - Tuesday, October 30, 2012 - link

    (Sorry for repeated comments)

    Look at the second picture of this post (the big.LITTLE diagrams).
    In the Superphone part of the diagram, there are 2 Cortex-A57s and 4 Cortex-A53s. What would be the possible use of this? Wouldn't it be hard to implement, as it uses a 2+4 configuration?
  • Symmetry81 - Tuesday, October 30, 2012 - link

    Not really. The switching is done by the operating system, and with a 2+4 configuration you just reserve two of the little cores for always-running light tasks, and switch your other two sets of threads between the big and little cores.
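    A minimal sketch of what such an OS-level policy could look like (hypothetical Python pseudologic; the core names and load threshold are invented for illustration, and real big.LITTLE schedulers are far more involved):

    ```python
    # Hypothetical placement policy for a 2 big + 4 little (2+4) setup:
    # two little cores are reserved for always-on light tasks, and the other
    # two thread slots migrate between a big core and its paired little core.
    LITTLE_RESERVED = ["A53-2", "A53-3"]          # always-on background work
    MIGRATING_PAIRS = [("A57-0", "A53-0"), ("A57-1", "A53-1")]
    LOAD_THRESHOLD = 0.6                          # invented tuning value

    def place_thread(recent_load, pair):
        """Run on the big core when load is high, otherwise on the little one."""
        big, little = pair
        return big if recent_load > LOAD_THRESHOLD else little

    print(place_thread(0.9, MIGRATING_PAIRS[0]))  # heavy thread -> A57-0
    print(place_thread(0.2, MIGRATING_PAIRS[1]))  # light thread -> A53-1
    ```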
  • dylan522p - Tuesday, April 9, 2013 - link

    Qualcomm is on S200, S400, S600(S4/ONE), S800(Q4). They dropped the S4 naming scheme.
  • iwod - Tuesday, October 30, 2012 - link

    A hypothetical SoC with two Cortex A57s and two Cortex A53s would still only appear to the OS as a dual-core system, but it would alternate between performance levels depending on workload.

    Which means you can't use all 4 cores at the same time?
  • Krysto - Tuesday, October 30, 2012 - link

    You might be able to, but I'm not sure about the A15 and A7. ARM hinted that they want to use all cores through heterogeneous computing, but I'm not sure that will be available as soon as next year. It might be available by the A57/A53.
  • Symmetry81 - Tuesday, October 30, 2012 - link

    Some people are working on a way to do that in Linux, but are having a hard time. So it might be possible, but it's fighting against the design. Even if you do manage to do it you won't be able to use the special hardware acceleration for switching execution from the A7 to the A15 and vice versa, though.
  • andrewaggb - Tuesday, October 30, 2012 - link

    hype machine!

    Sounds pretty good. About as exciting as Haswell, except it's further away (sounds like end of 2014 or early 2015 for products IF everything goes according to plan), so a lot can happen by then. We'll be talking about the Haswell successor and Atom v2. Intel vs. the world...
  • Krysto - Tuesday, October 30, 2012 - link

    Nobody cares about Atom in mobiles except Anand. What's Atom v2 anyway? Atom has existed since 2008.
  • andrewaggb - Tuesday, October 30, 2012 - link

    http://www.anandtech.com/show/4333/intels-silvermo...
    silvermont

    An out-of-order Atom on the latest and greatest manufacturing process. That's potentially a real contender.
  • Wilco1 - Wednesday, October 31, 2012 - link

    Potentially is the word indeed. The details on Silvermont are scarce, but it looks like it will be a lot like Bobcat. And when it comes out at the end of 2013, it will have to compete with 20nm 2.5GHz Cortex-A15s.
  • bossia - Tuesday, October 30, 2012 - link

    So can we say that soon we'll have a quad-core combining a dual-core A53 and a dual-core A57 for mobile devices, or is it only a solution for servers? Besides, does it pose any challenge to the custom-designed core in the Apple A6, or is it irrelevant? Would anyone who knows more than the average reader like to comment? TY!
  • Krysto - Tuesday, October 30, 2012 - link

    Sure, but probably in 2015, just like we're going to have quad-core A15s next year. A57 set-ups will consume just as much power but will have higher performance. That's all.
  • Gabik123 - Tuesday, October 30, 2012 - link

    Here is a question no one seems to have addressed and I'm uncertain about -

    Right now, we have Windows RT to run on ARM cores, which is incompatible with existing Windows software from pre-Win8. With these x64 cores, which would move them to an instruction set currently used on all Intel and AMD processors, would this restriction on software compatibility be lifted, or is there something else different about the ARM cores that would keep this restriction in play?
  • Wilco1 - Tuesday, October 30, 2012 - link

    No, x64 and 64-bit ARM are very different ISAs, so a 64-bit ARM cannot run any 64-bit x64 applications, just like a 32-bit ARM cannot run 32-bit x86 applications. But any existing application can be ported to run on Windows RT. Given AMD's announcement, it seems likely Microsoft will at some point support Windows 8 Server on 64-bit ARM.
  • Kidster3001 - Tuesday, October 30, 2012 - link

    They'll be using an ARM 64-bit instruction set, not an x86-compatible one. There will be no difference from today in the compatibility between ARM and x86.

    That's how ARM makes most of its money: licensing an instruction set. All the big players (Qualcomm, Samsung, Apple, TI) do NOT use the ARM circuit designs; they create their own custom designs (Krait, Snapdragon, OMAP, Ax). They purchase a license from ARM to use the instruction set in their designs.
  • Wilco1 - Tuesday, October 30, 2012 - link

    No, most use ARM's cores. Only Qualcomm, Marvell and now Apple build their own cores. But even they use ARM designs; for example, various Snapdragons use the Cortex-A5, and Apple used the ARM11, Cortex-A8 and A9.
  • wsw1982 - Wednesday, October 31, 2012 - link

    The current Atom ("Medfield"), with a 5-year-old architecture on 32 nm, beats Krait, a new architecture on 28 nm, in power efficiency, die area and lots of benchmarks. And Qualcomm declared that Krait is up to 1.5x better than the A9 in IPC, the same as ARM's declaration for the A15. I don't see any advantage for ARM. A year ago, ARM declared that the A9 was better than Atom in performance and that Atom couldn't beat ARM in power efficiency. That either means they were bluffing or naive (it's not a 10-year prediction with lots of unknown factors), so I don't take their declarations seriously unless I see the results. And from what I see now (Atom vs. Krait, where the ARM part is newer and on a smaller silicon node, or the battery life of the A15), I am not positive about ARM.
  • Wilco1 - Wednesday, October 31, 2012 - link

    For independent performance benchmarks, check out Geekbench for Z2460 Medfield compared with Galaxy Note 2:

    http://browser.primatelabs.com/geekbench2/compare/...

    You can see how a Cortex-A9 beats the 2GHz Z2460 on every single-threaded integer and floating-point benchmark by a large margin, except for LU decomposition. Atom is obliterated in the multithreaded results; even Hyper-Threading can't save it.

    There is a reason Anand only ever uses Javascript benchmarks, especially Sunspider, and not Geekbench results. Sunspider is the worst benchmark imaginable, as it is tiny, single-threaded and easily gamed with software optimizations.

    Don't you think it is odd that Sunspider is the only benchmark where Medfield seems competitive?
  • andrewaggb - Wednesday, October 31, 2012 - link

    Too bad they are running very different versions of Android. It probably doesn't matter much for the CPU-intensive benchmarks, provided they are native code, but I don't know that for sure. Otherwise Dalvik improvements could play into it as well.

    Really interested in benchmarks of the Atom vs. the Tegra 3 in the Surface, assuming we get PCMark, 3DMark, etc. for WinRT sometime soon... Unreal, anything...
  • andrewaggb - Wednesday, October 31, 2012 - link

    http://www.anandtech.com/show/6422/samsung-chromeb...

    This is a bit better. It clearly shows the current Atom outperformed by the new Cortex-A15 by a significant margin. It's still a lot of browser benchmarks, but since it's a Chromebook they aren't able to do much else...

    However, power-wise you'll notice that while ARM is better, the increase in watts from idle to load is about the same on each platform.

    Anyway, Silvermont had better be way faster at the same or lower power, or they might as well not bother.
  • Wilco1 - Wednesday, October 31, 2012 - link

    Yes, the A15 does well on Javascript, but it will do even better on native code. Hopefully there will be a Nexus 10 review soon which shows Geekbench scores, not just Javascript.

    As for power, looking at the increase over idle power doesn't work - the issue with the older Atoms is that idle power is too high. Average power is typically close to idle power, which is why low idle power matters the most.

    Note also that the A15 does 50% more work while still using less power. It will be interesting to see what Silvermont can do, but it will have to compete with 20nm 2+GHz A15's late 2013/early 2014.
  • wsw1982 - Thursday, November 1, 2012 - link

    Yes, the Samsung Exynos 5 is faster than Medfield, no doubt about that. But you are comparing a 6+W ARM with a 2+W Atom; why not compare Atom to the 200+W Tesla? The result could be even more exciting :) By the way, the A15 is, Sunspider-wise, 2 times worse than Medfield in performance/watt... Of course this comparison is not fair; my point is that this is an apples-to-oranges comparison.
  • Wilco1 - Thursday, November 1, 2012 - link

    You meant 8.5W Atom, right? That's the official TDP of the N570 used in the Chromebook (though note how it uses over 12W at load).

    The power results with screen off were 8.32W vs 11.4W. And the A15 is 46% faster on Kraken. So overall the A15 is exactly 2 times as power efficient.

    So I have no idea how you came to the exact opposite conclusion, but your calculation is wrong.
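    Spelling that arithmetic out with the figures quoted above (illustrative only):

    ```python
    # Perf/W from the quoted Chromebook figures: the A15 machine draws 8.32 W
    # at load (screen off) and is ~46% faster on Kraken than the Atom (N570)
    # machine drawing 11.4 W.
    a15_power_w, atom_power_w = 8.32, 11.4
    a15_speedup = 1.46
    perf_per_watt_ratio = a15_speedup * (atom_power_w / a15_power_w)
    print(f"A15 perf/W advantage: ~{perf_per_watt_ratio:.1f}x")  # ~2.0x
    ```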
  • wsw1982 - Thursday, November 1, 2012 - link

    No, I said it was Medfield in my post, which is in the 2+W range (my best guess from checking the load and idle power consumption of the Razr i during web surfing in the iPhone 5 review). The N570 is a two-to-three-year-old Atom, and of course it is not comparable with a modern ARM. Medfield runs Sunspider about 1.5x slower than the Samsung A15 while using around 1/3 of the power. But as I said, it's not fair to compare them, because they address different markets: smartphones and tablets for Atom, netbooks for the Samsung A15. I will compare them when a smartphone-targeted A15 comes out (using 3 times less power, and God knows how much slower than the A15 in the Chromebook; it could be as fast as Krait or Apple's Swift, or use the same SoC but need to be charged every 3 hours :) ), or when there is an Atom SoC (I mean an SoC as new as the Samsung A15) for netbooks. It's only fair to compare Krait with the smartphone Atom now; they are the SoCs targeting the same market.
  • Wilco1 - Thursday, November 1, 2012 - link

    Yes, it is true that Medfield uses less power than the N570, but mobile phone variants of the A15 will use less power too. Just downclocking a little can easily make a factor-of-2 difference. Better binning will be possible as A15 production volumes ramp up. Then there is big.LITTLE with the A7.

    So as you say it is not possible to compare the Chromebook result with Medfield, we have to wait until A15-based phones appear.
  • wsw1982 - Wednesday, October 31, 2012 - link

    Yes, Javascript is an extreme case, just like the benchmark you showed. There are lots of other comparisons; you can check the reviews yourself. I said Atom outperforms ARM in a lot of benchmarks, not all. It is of course better to use an ARM cell phone to calculate climate change; I guess it would take you only one year on an ARM phone, and another year if you use an Intel phone. But Atom does render the internet faster than ARM (see the Xolo review). I mean, I am a simple person who mostly uses a phone to browse the internet and never runs MapReduce on it. In that case, Atom runs faster than any ARM mobile and consumes less power (as far as I know), so it's a clear winner to me. Needless to say, it achieves this with a much older design and a bigger silicon node.
  • Wilco1 - Thursday, November 1, 2012 - link

    Atom used to be faster on Javascript, but with the latest software updates that is no longer true (the Galaxy Note 2, for example, has Sunspider scores way faster than either the Xolo or the Razr i). Note that Sunspider and other Javascript benchmarks do not indicate browsing performance.

    Also, where did you get the idea that Atom-based mobiles use less power? Battery life of the Xolo and Razr i is worse than other modern phones like the Note 2, iPhone 5 and HTC One X. See http://www.anandtech.com/show/6386/samsung-galaxy-... for the battery life comparisons, and the previous page for the Sunspider and other Javascript results.
  • wsw1982 - Thursday, November 1, 2012 - link

    I think you also need to take the battery size and the phone's other components into consideration, don't you? My Core 2 Duo laptop with 8 gigs of memory and a second add-on battery runs both faster and lasts longer than a Sandy Bridge notebook with 2 gigs of memory.

    The Galaxy Note 2 gets comparable battery life from a battery 1.5 times the size.

    http://blog.gsmarena.com/motorola-razr-i-battery-l...

    http://blog.gsmarena.com/samsung-galaxy-note-ii-ba...

    The operating system also affects battery life a lot; my MacBook Pro can last at least 4 hours with OS X, but less than 2.5 hours with Windows 7.

    The fairest comparison is between the Razr i and the Razr M; at least they are almost identical designs, aren't they? That's why I mostly compare Krait with Medfield, and the Razr i is a clear winner in battery life.

    The Note II didn't win all the web-oriented tests in your quote. I would say they are similar in web performance, but the Note II is like 1.5 times as power hungry. By the way, I don't see any major website really quoting Geekbench... I think there must be a reason for that...
  • Wilco1 - Thursday, November 1, 2012 - link

    Of course, battery capacity, OS, screen size, resolution and all the other components can make a big difference. And that makes fair comparisons difficult indeed.

    Looking at your links, the Note 2 does win the web browsing and video playback tests by a good margin, despite having twice the resolution and a larger screen. The talk time one is odd, as this hardly uses the CPU/screen, so you'd not expect it to use twice as much power. Obviously measuring talk time fairly is not easy either - you may connect to a different cell tower, a different band or need more power due to interference or "holding it wrongly"...

    Yes, I agree Razr i and m are very similar in specs (although I think they run different Android versions). Krait is a bit disappointing in battery life but beats Medfield by a large margin on most benchmarks (except for Javascript), so you do get better performance for the extra power consumption.

    Geekbench is not that popular but you do see AnandTech, GsmArena and a few others mentioning it nowadays. It is one of the better benchmarks as it uses native code, so more accurately models CPU performance, unlike Javascript.
  • ET - Wednesday, October 31, 2012 - link

    First I saw 3x today's high end, then learned that today's high end means the A9, and that the A57 is expected to be 20-30% faster than the A15. And it's only expected in 2014.

    Still cool, but my enthusiasm is dampened.
  • A4i - Wednesday, October 31, 2012 - link

    "Today's high end" means APQ8064 , Apple A6/A6x and Еxynos 5250. APQ8064 Soc is in LG Optimus G and Nexus 4. Apple A6/A6x is in iPhone 5 and iPad 4. Еxynos 5250 is in Nexus 10 and Chromebook 3. LG Optimus G score in Linpack benchmark is 608 MFLOPS and that is slill without NEON optimisation. NEON is a 128-bit wide SIMID, roughly twice the size of a single Krait CPU core.
  • Wilco1 - Wednesday, October 31, 2012 - link

    The 3x performance gain is over current high-end mobiles (Galaxy Note 2), not tablets or laptops - I think it will take a few months before we'll see A15-based phones.

    The penultimate slide shows the A15 is going to give about a 2x gain, and the A57 gives another 50% again (this includes frequency increases). A 3x gain in less than 24 months is amazing. It means phones are approaching Sandy Bridge levels of performance!
  • A4i - Wednesday, October 31, 2012 - link

    Yep, 20-30% faster than A15 in 32-bit code, presumably at the same frequency.
  • Charbax - Wednesday, October 31, 2012 - link

    http://armdevices.net/2012/10/31/arm-keynote-arm-c...
  • blanarahul - Thursday, November 1, 2012 - link

    I can't wait to see a server with PFLOP/s levels of computing power using the ARM Cortex-A57. Let's see if it can break the long-standing perf/watt record of the 16-core PowerPC Blue Gene/Q.
  • quirksNquarks - Friday, November 2, 2012 - link

    Imagine what ARM and its licensees could do with a 77W TDP like Intel's i7s have.

    A lot of people are confused, it seems. Scaling is not per-core IPC based; it is system based. ARM IP designs are improving at the system level, not through manufacturing (like Intel).

    ARM already owns the mobile market - FACT.
    Intel would like to be a player in the mobile market - FACT.

    But it is far easier to make technology perform faster when given a higher thermal design envelope than to remain fast while pushing the thermal design envelope down.

    ARM are not worried about the mobile space/niche product markets... THEY OWN IT.
    ARM are looking to push their IPs into territory they haven't been yet. (Server - Desktop - Laptop etc).

    It's no coincidence Intel were pissed off when Apple (a while ago) were talking to AMD - hence launching the Ultrabook initiative. It's no coincidence Nvidia and Intel haven't gotten along.

    Why hasn't anyone noticed that the world's 2 largest GPU companies are now ARM licensees?
    An area where Intel has always been behind the curve.
    AMD and Nvidia... and let's not forget the magnitude of other tech companies already onboard:
    Apple, Calxeda, Samsung, Qualcomm, etc.

    Hello!! Efficient processing cores (ARM) with system-TDP room for all-out iGPUs ON CHIP!!!!!! Memory controllers, I/O controllers, etc., fiber interconnects, new Ethernet protocols. Beyond the AMD APU. Remember, ARM designs are configurable to suit the need. Intel, not so much: with Intel, what you see is what you get. Fast processing with compromise.

    16-core 3 GHz A57 laptops running high-end GeForce/Quadro or Radeon/FirePro iGPUs (thousands of stream processors), anyone? In thermals fit for 13" notebooks, lol.. oh baby!

    Tablets and phones are incredible gadgets to dick around with and do light workloads on, BUT they will NEVER be something tangible to do anything critical on (small screens hinder any workflow, regardless of how light or heavy it is). BUT they PAY the BILLS because they are sold in such HUGE volume, which is why they seem to be where technology is headed. It is - but as controllers for larger environments (cloud, applications, etc).

    People want smarter, more complex applications (beyond what is offered today) - you won't get that on a phone. The trickle-down effect will grant you some really cool stuff you can do with them... but on 5" screens? Tablets, same deal... 12" tablets are very constrictive UI-wise. People only have two hands, and one has to hold the device! Or why bother having a tablet at all if it isn't mobile.

    anyhow..
    With the announcement of ARMv8 and 64-bit, x86 and ARM are on a level playing field, instruction-set-wise (efficiency), with equal access to hardware-based trickery.

    Picture lower-priced Ultrabooks/sub-notebooks/full-size laptops/low-watt desktops/HTPCs/workstations with the processing power of a high-dollar workstation/server at 1/50th the price and power consumption!!

    and

    With billions of devices on the same architecture (the ARM instruction set), the software-engineering possibilities are endless. From your phone to your laptop to your desktop to your TV, your modems and your vehicle entertainment systems. Hello!! ONE standard!!!! Hasn't that been the DREAM all along?

    Linux/ARM and their licensees are the future in making this all happen: harmony between the end user and their technology environment. It will happen - the biggest question is when.

    When will dinosaurs like Intel and Microsoft get out of the way and let it? Probably never - they make good money by not doing so.

    anyone seen my meds?

  • Creig - Friday, November 2, 2012 - link

    Somebody please kill this spammer's account. He's posting the same message across multiple articles.
  • blanarahul - Friday, November 9, 2012 - link

    I am interested in how Cray and AMD are going to implement this. And how Intel is going to respond (an improved version of Knights Corner is the most likely option).
  • Achtung_BG - Thursday, November 8, 2012 - link

    Cortex-A53 - 2.3 DMIPS/MHz, OK
    Cortex-A57 - 4.1 DMIPS/MHz?
  • Biscuit - Tuesday, November 13, 2012 - link

    I commend Intel for the work they have done with the x86 architecture. They have done an amazing job at making such an abortion of an instruction set execute incredibly quickly. The penalty of this legacy is a large die size and, ultimately, higher power consumption.

    AMD should be shot through the head for x64. They had a prime chance to clean up this awful design: if they had made a decent, fixed-size instruction encoding, they would have been able to make much more power-efficient chips in the future, once x86 support had finally gone the way of the dodo. But no, they just tacked 64-bit onto the x86 instruction set and fucked up Intel's and their own power-consumption future.

    Intel will improve process, but ARM will be just a single step behind them on that. As soon as the chip fabs switch to 20nm and lower, Intel's "power" advantage from process size reduction will be gone.

    ARM chips, from the get-go, have been elegant and have been designed with power consumption in mind for years. Now we're getting some much higher-order features such as out-of-order execution, multiple execution pipelines, etc.

    They have a good plan. The big.LITTLE concept is yet to play out, but I think it's a good path. They have two options: high performance/higher power or low performance/lower power. But the difference is that their high-power modes consume less power than the lowest-power mode an Intel chip can manage.

    The ARM64 architecture is a clean break. They've taken this chance to look at what made out-of-order execution more difficult on the ARM32 platform and improve upon it, using all the knowledge gained in CPU design over the 20 years since they designed the original ARM.

    The key: power consumption and power density. In the server space this will be crucial. It will lead to processor densities like we haven't seen before, with a catastrophic drop-off in power consumption in the data centers (ah, maybe that's just my naivety showing).

    But it's a good chip, and I can't wait to get working on it.

    Bisc.
  • gruffi - Monday, January 7, 2013 - link

    Another poor troll attempt. AMD actually did with x86-64 (AMD64) what Intel did with x86-32. If someone should be shot through the head, it's Intel. AMD did a fantastic job with AMD64: a modern 64-bit mode that obsoleted some outdated extensions (for example x87) without neglecting 32-bit compatibility. More was not possible. I think the compatibility was the most important aspect, because of the huge x86 software infrastructure. We've seen how Intel struggled with IA-64 and finally failed disastrously.

    OTOH, when ARM tries to improve performance to get to x86's level, power consumption will drastically go up. x86 takes the opposite route: less performance, but also much less power consumption. We see what a fantastic job AMD did with Bobcat. Jaguar will be another important step forward. x86 has learned a lot about power-saving mechanisms. I think you won't see much difference in performance per watt between ARM and x86 in the future. Many people talk about the power disadvantage of x86 and the "legacy penalty"; the truth is, those people have no clue about it. Finally, it likely will not come down to which architecture is the better one, ARM or x86, but to who offers better and more innovative solutions.
  • brainee - Sunday, January 13, 2013 - link

    All true, but you forgot to mention cost. Unless Intel addresses this issue, they will be locked out of phones forever, I believe. What I mean is that an ultra-uber-mobile Haswell will simply be too expensive. Atom? Maybe once Windows Phone takes off...
  • Thennarasu - Monday, June 24, 2013 - link

    We are working on building a BeagleBoard. I checked with Texas Instruments and their site has some ARM Cortex processors for sale at around INR 1300, but the actual cost reported by ARM is around INR 300. Does anyone know a good distributor in India who sells ARM processors?
