128 Comments

  • ChronoReverse - Thursday, April 4, 2013 - link

    Very interesting article. I've been wondering where the current phone GPUs stood compared to desktop GPUs
  • krumme - Thursday, April 4, 2013 - link

    +1
    Anand sticking to the subject and diving into details and at the same time giving perspective is great work!
    I don't know if I buy the convergence thinking on the technical side, because from here it looks like people are just buying more smartphones and far fewer desktops. The convergence is there a little bit, but I'll wait and see the battle in the field before it gets really interesting. Atom is not yet ready for phones and Bobcat is not ready for tablets. When they get there, where will ARM be then?

    I put my money on arm :)
  • kyuu - Thursday, April 4, 2013 - link

    If Atom is ready for tablets, then Bobcat is more than ready. The Z-60 may only have one design win (in the Vizio Tablet PC), but it should deliver comparable (if not somewhat superior) CPU performance with much, much better GPU performance.
  • zeo - Tuesday, April 16, 2013 - link

    Uh, no... The Hondo Z-60 is basically just an update to the Desna, which itself was derived from the AMD Fusion/Brazos Ontario C-50.

    While it is true that Bobcat is superior to the ATOM at equivalent clock speeds, the problem is that AMD has to deal with higher power consumption, which generates more heat and in turn forces them to lower the max clock speed... especially if they want to offer anywhere near competitive run times.

    So the Bobcat cores in the Z-60 are only running at 1GHz, while the Clover Trail ATOM is running at 1.8GHz (Clover Trail+ even goes up to 2GHz for the Z2580, though that version is only for Android devices). The difference in processor efficiency is overcome by just a few hundred MHz of clock speed.

    Meaning you actually get more CPU performance from Clover Trail than you would from Hondo... However, where AMD holds dominance over Intel is graphical performance, and while Clover Trail does provide about 3x better performance than the previous GMA 3150 (back in the netbook days of the Pine Trail ATOM), it is still only about a third as powerful as Hondo's graphics.

    The only other problem is that Hondo only slightly improves power consumption compared to the previous Desna, down to about a 4.79W max TDP, though that is at least nearly half of the original C-50's 9W...

    However, keep in mind Clover Trail is a 1.7W part and all models are fan-less, while Hondo models will continue to require fans.

    AMD also doesn't offer anything like Intel's advanced S0ix power management, which allows for ARM-like extreme low-milliwatt idle states and features like always-connected standby.

    So the main reasons to get a Hondo tablet are that it'll likely offer better Linux support, which is presently virtually non-existent for Intel's 32nm SoC ATOMs, and the better graphical performance if you want to play some low-end but still pretty good games.

    It's the upcoming 28nm Temash that you should keep an eye out for: it's AMD's first SoC that can go fan-less in the dual-core version, and while the quad-core version will need a fan, it will offer a turbo docking feature that lets it go into a much higher 14W max TDP mode that will provide near-Ultrabook-level performance... though the dock will require an additional battery and additional fans to support the feature.

    Intel won't be able to counter Temash until their 22nm Bay Trail update comes out, though that'll be just months later as Bay Trail is due to start shipping around September of this year and may be in products in time for the holiday shopping season.
  • Speedfriend - Thursday, April 4, 2013 - link

    Atom is not yet ready for phones?

    It is in several phones already, where it delivers strong performance from both a CPU and a power consumption perspective. Its weak point is the GPU from Imagination. In two years' time, ARM will be a distant memory in high-end tablets and possibly high-end smartphones too, even more so if we get advances in battery technology.
  • krumme - Thursday, April 4, 2013 - link

    Well, Atom is in several phones that do not sell in any meaningful numbers. Sure, there will be x86 in high-end tablets, and Jaguar will make sure that happens this year, but will those tablets matter? There are ARM servers also. Do they matter?
    Right now tons of cheap 40nm A9 products are being sold, and consumers are just about to get 28nm quad-core A7s at 2mm² for the CPU part. They are ready for cheap, slim phones with Google Play and acceptable graphics performance for Temple Run 2.
  • Wilco1 - Thursday, April 4, 2013 - link

    Also note that despite Anand making the odd "Given that most of the ARM based CPU competitors tend to be a bit slower than Atom" claim, the Atom Z-2760 in the VivoTab Smart scores consistently the worst on both the CPU and GPU tests. Even the Surface RT with its low-clocked A9s beats it. That means Atom is not even tablet-ready...
  • kyuu - Thursday, April 4, 2013 - link

    The Atom scores worse in 3DMark's physics test, yes. But any other CPU benchmark I've seen has always favored Clover Trail over any A9-based ARM SoC. A15 can give the Atom a run for its money, though.
  • Wilco1 - Thursday, April 4, 2013 - link

    Well I haven't seen Atom beat an A9 at similar frequencies except perhaps SunSpider (a browser test, not a CPU test). On native CPU benchmarks like Geekbench Atom is well behind A9 even when you compare 2 cores/4 threads with 2 cores/2 threads.
  • kyuu - Friday, April 5, 2013 - link

    At similar frequencies? What does that matter? If Atom can run at 1.8GHz while still being more power efficient than Tegra 3 at 1.3GHz, then that's called -- advantage: Atom.

    Did you read the reviews of Clover Trail when it came out?

    http://www.anandtech.com/show/6522/the-clover-trai...

    http://www.anandtech.com/show/6529/busting-the-x86...
  • Wilco1 - Friday, April 5, 2013 - link

    Yes frequency still matters. Surface RT looks bad because MS chose the lowest frequency. If they had used the 1.7GHz Tegra 3 instead then Surface RT would look a lot more competitive just because of the frequency.

    So my point stands and is confirmed by your link: at similar frequencies Tegra 3 beats the Z-2760 even on SunSpider.
  • tech4real - Friday, April 5, 2013 - link

    But why do we have to compare them at similar frequencies? One of Atom's strengths is working at high frequency within its thermal budget. If Tegra 3 can't hit 2GHz within the power budget, that's NVIDIA/ARM's problem. Why should Atom bother to downclock itself?
  • Wilco1 - Friday, April 5, 2013 - link

    There is no need to clock the Atom down - typical A9-based tablets are at 1.6 or 1.7GHz. Yes an Z-2760 beats a 1.3GHz Tegra 3 on SunSpider, but that's not true for Cortex-A9's used today (Tegra 3 goes up to 1.7GHz, Exynos 4 does 1.6GHz), let alone future ones. So it's incorrect to claim that Atom is generally faster than A9 - that implies Atom has an IPC advantage (which it does not have - it only wins if it has a big frequency advantage). I believe MS made a mistake by choosing the slowest Tegra 3 for Surface RT as it gives RT as well as Tegra a bad name - hopefully they fix this in the next version.

    Beating an old low clocked Tegra 3 on performance/power is not all that difficult, however beating more modern SoCs is a different matter. Pretty much all ARM SoCs are already at 28 or 32nm, while Tegra 3 is still 40nm. That will finally change with Tegra 4.
  • tech4real - Sunday, April 7, 2013 - link

    Based on this AnandTech article
    http://www.anandtech.com/show/6340/intel-details-a...
    the linearly projected 1.7GHz Tegra 3 SPECint2000 score is about 1.12, while the 1.8GHz Atom stands at 1.20, so the gap is still there. If you consider the 2GHz Atom turbo case, we can argue the gap is even wider. Of course, since this SPECint data is provided by Intel, we have to take it with a grain of salt, but I think the general idea has its merit.
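
    For reference, a minimal sketch of that linear projection (the Atom figure is the Intel-published number being discussed; the Tegra 3 per-GHz baseline is back-derived from the ~1.12 projection, so treat both as assumptions rather than independent measurements):

        # Linear frequency scaling of SPECint2000, as assumed above.
        tegra3_base_score = 0.86    # assumed score at 1.3GHz, back-derived from the ~1.12 projection
        tegra3_base_ghz = 1.3
        atom_score_1p8ghz = 1.20    # figure quoted from Intel's slide

        tegra3_at_1p7ghz = tegra3_base_score * (1.7 / tegra3_base_ghz)
        print(f"Tegra 3 projected to 1.7GHz: {tegra3_at_1p7ghz:.2f}")   # ~1.12
        print(f"Atom Z2760 at 1.8GHz:        {atom_score_1p8ghz:.2f}")  # 1.20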
  • Wilco1 - Monday, April 8, 2013 - link

    Those are Intel marketing numbers indeed - Intel uses compiler tricks to get good SPEC results, and this doesn't translate to real world performance or help Atom when you use a different compiler (Android uses GCC, Windows uses VC++).

    Geekbench gives a better idea of CPU performance:

    http://browser.primatelabs.com/geekbench2/compare/...

    A 1.6GHz Exynos 4412 soundly thrashes the Z-2760 at 1.8GHz on integer, FP and memory performance. Atom only wins the Stream test. Before you go "but but Atom has only 2 cores!", it has 4 threads, so it is comparable with 4 cores, and in any case it loses all but 3 single-threaded benchmarks despite having a 12.5% frequency advantage.

    There are also several benchmark runs by Phoronix which test older Atoms against various ARM SoCs using the same Linux kernel and GCC compiler across a big suite of benchmarks, and which come to the same conclusion. This is what I base my opinion on, not some Intel marketing scores blessed by Anand or some rubbish JavaScript benchmark.
  • tech4real - Wednesday, April 10, 2013 - link

    Cross-ISA, cross-platform benchmarking is a daunting task to do fairly, or at least to try to :-)
    The SPEC benchmark has established its position after many years of tuning, and I think most people would prefer using it to gauge processor performance. If Samsung or NVIDIA believe they can do a better job of showcasing their CPUs than Intel (which I totally expect they could; after all, it doesn't make sense for Intel to spend time tuning its competitors' products), they can publish their SPEC scores. However, in the absence of that, it's very hard to argue that Samsung/NVIDIA/ARM has a better-performing product. Remember, "the worst way to lose a fight is by not showing up".
    I don't have much knowledge of these new benchmark suites, and they may well be decent, but it takes time to mature and gain professional acceptance.
    A past example of taking hobby benchmarks at face value too seriously: back in early 2011, NVIDIA showed a Tegra 3 performing on the same level as (or faster than?) a Core 2 Duo T7200 under CoreMark. Needless to say, we now all know Tegra 3 in real life is around Atom-level performance. This shows there is a reason we have and need a benchmark suite like SPEC.
  • Wilco1 - Sunday, April 14, 2013 - link

    SPEC is hardly used outside high-end server CPUs (it's difficult to even run SPEC on a mobile phone due to memory and storage constraints). However, the main issue is that Intel has tuned its compiler to SPEC, giving it an unfair advantage. Using GCC results in a much lower score. The funny thing is, GCC typically wins on real applications (I know because I have done those comparisons). That makes Intel's SPEC scores useless as an indication of actual CPU speed in real-world scenarios. Yes, ARM, NVIDIA, Samsung etc. could tune GCC in the same way by pouring in tens of millions over many years (it really takes that much effort). But does it really make sense to use compiler tricks to pretend you are faster?

    The NVidia T7200 claim was based on a public result on the EEMBC website that was run by someone else. It used an old GCC version with non-optimal settings. However for the Tegra score they used a newer version of GCC, giving it an unfair advantage. Same thing as with Intel's SPEC scores... This shows how much CPU performance is affected by compiler and settings.
  • theduckofdeath - Thursday, April 4, 2013 - link

    That is not true. A few months ago Anandtech themselves made a direct comparison between the Tegra 3 in the Surface tablet and an Atom processor, and the Atom beat the Tegra 3 both on performance and power efficiency.
  • Wilco1 - Friday, April 5, 2013 - link

    I was talking about similar frequencies - did you read what I said? Yes the first Surface RT is a bit of a disappointment due to the low clocked Tegra 3, but hopefully MS will use a better SoC in the next version. Tegra 4(+) or Exynos Octa would make it shine. We can then see how Atom does against that.
  • SlyNine - Saturday, April 6, 2013 - link

    Nobody cares if the frequencies are different, if one performs better and uses less power that's a win; REGARDLESS OF FREQUENCY.

    Give one good reason, one that matters to the consumer and the manufacturer, for frequency being an important factor.
  • pSupaNova - Sunday, April 7, 2013 - link

    You're not listening to what Wilco1 is saying.

    Microsoft used a poor Tegra 3 part; the HTC One X+ ships with a Tegra 3 clocked at 1.7GHz.

    So by comparing the Atom-based tablets to the Surface RT, Anand puts the Intel chip in a much better light.
  • zeo - Tuesday, April 16, 2013 - link

    Incorrect. Wilco1 is ignoring the differences in the SoCs. The Tegra 3 is a quad core, and that means it can have up to 50% more performance than an equivalent dual core.

    Clover Trail, meanwhile, is only a dual core... so while the clock speed may favor the ATOM, the number of cores favors the Tegra 3.

    It doesn't help that the ATOM still wins the run time tests as well, so overall efficiency is clearly in the ATOM's favor. And needing a quad core to beat a dual core still means the ATOM has better performance per core!

    Not that it matters much, as Intel is set to upgrade the ATOM to Bay Trail by the end of the year, which promises up to double the CPU performance (along with going up to quad cores) and triple the GPU performance compared to the present Clover Trail.

    It will also go full 64-bit and offer up to 8GB of RAM... something ARM won't do till about the latter half of 2014 at the earliest, and NVIDIA specifically won't do until Tegra 6... with Tegra 4 yet to appear in actual products...
  • nofumble62 - Friday, April 5, 2013 - link

    LTE is not available on the Intel platform yet; that is why they don't offer it in the US. But I heard the new Intel LTE chip is pretty good (it won an award), so next year will be interesting.
    The big ARM cores suck up a lot of power when they are running. That is the reason Qualcomm Snapdragon is winning the latest Samsung Galaxy S4 (over Samsung's own Exynos chip) and the Nexus 7 (over NVIDIA's Tegra).
  • Spunjji - Friday, April 5, 2013 - link

    Nvidia Tegra's not really ready for the new Nexus 7, so it's not entirely fair to say it's out because of power issues. When you consider that the S4 situation you described isn't strictly true either (if I buy an S4 here in the UK it's going to have the Exynos chip in it) it tends to harm your conclusion a bit.
  • zeo - Tuesday, April 16, 2013 - link

    LTE will be introduced with the XMM 7160, which will be an optional addition to the Clover Trail+ series that's starting to come out now... Lenovo's K900 is one of the first design wins to have been announced.

    MWC 2013 showed the K900 off with the 2GHz Z2580, which ups the graphics to dual PowerVR SGX544 cores at 533MHz... They showcased it running some games and demos like Epic Citadel at full 1080p and the max FPS that demo allows.

    The only issue is that the LTE is not integrated into the SoC... so it won't be as power efficient as the other ARM solutions that are coming out with integrated LTE... at least for the LTE part...
  • WaltC - Friday, April 5, 2013 - link

    Unfortunately, that's not what this article delivers. It doesn't tell you a thing about current desktop gpu performance versus current ARM performance. What it does is tell you how obsolete cpus & gpus from roughly TEN YEARS AGO look against state-of-the-art cell-phone and iPad ARM running a few isolated 3DMark graphics tests. What a disappointment. Nobody's even using these desktop cpus & gpus anymore. All this article does is show you how poorly ARM-powered mobile devices do when stacked up against common PC technology from a decade ago! (That's assuming the 3DMark tests used here, such as they are, are actually representative of anything.) Ah, if only he had simply used state-of-the-art desktop cpus & gpus to compare with state-of-the-art ARM devices--well, the ARM stuff would have been crushed by such a wide margin it would astound most people. Why *would you* compare current ARM tech with decade-old desktop cpus & gpus? Beats me. Trying to make ARM look better than it has any right to look? Maybe in the future Anand will use a current desktop for his comparison, such as it is. Right now, the article provides no useful information--unless you like learning about really old x86 desktop technology that's been hobbled...;)

    To be fair, in the end Anand does admit that current ARM horsepower is roughly on a par with ~10-year-old desktop technology IF you don't talk about bandwidth or add it into the equation--in which case the ARMs don't even do well enough to stand up to 10-year-old commonplace cpu & gpu technology. So what was the point of this article? Again, beats me, as the comparisons aren't relevant because nobody is using that old desktop stuff anymore--they're running newer technology from ~5 years old to brand new--and it runs rings around the old desktop nVidia gpus Anand used for this article.

    BTW, and I'm sure Anand is aware of this, you can take DX11.1 gpus and run DX9-level software on them just fine (or OpenGL 3.x-level software, too.) Comments like this are baffling: "While compute power has definitely kept up (as has memory capacity), memory bandwidth is no where near as good as it was on even low end to mainstream cards from that time period." What's "kept up" with what? It sure isn't ARM technology as deployed in mobile devices--unless you want to count reaching ~decade-old x86 "compute power" levels (sans real gpu bandwidth) as "keeping up." I sure wouldn't say that.

    Neither Intel nor AMD will be sitting still on the x86 desktop, so I'd imagine the current (huge) performance advantage of x86 over ARM will continue to hold if not grow even wider as time moves on. I think the biggest flaw in this entire article is that it pretends you can make some kind of meaningful comparison between current x86 desktop performance and current ARM performance as deployed in the devices mentioned. You just can't do that--the disparity would be far too large--it would be embarrassing for ARM. There's no need for that because in mobile ARM cpu/gpu technology, performance is *not* king by a long shot--power conservation for long battery life is king in ARM, however. x86 performance desktops, especially those set up for 3d gaming, are engineered for raw horsepower first and every other consideration, including power conservation, second. That's why Apple doesn't use ARM cpus in Macs and why you cannot buy a desktop today powered by an ARM cpu--the compute power just isn't there, and no one wants to retreat 10-15 years in performance just to run an ARM cpu on the desktop. The forte for ARM is mobile-device use, and the forte for x86 power cpus is on the desktop (and no, I don't count Atom as a powerful cpu...;))
  • pSupaNova - Sunday, April 7, 2013 - link

    How is it embarrassing for ARM? 90% of consumers don't require the power of a desktop CPU for most of their computing needs.

    Mobile devices have taken the world by storm and have been able to increase their pixel-pushing ability exponentially.

    No one is suggesting that mobile chips will suddenly catch their desktop brethren, but it is interesting to see that they are only three times slower than a typical CPU/discrete GPU combo from 2004!
  • zeo - Tuesday, April 16, 2013 - link

    That percentage would be much higher if you eliminated cloud support... The only reason they get away with not needing a lot of performance for the average person is that a lot is offloaded to run in the cloud instead of on the device.

    Apple's Siri for example runs primarily on Apple Servers!

    Some applications like augmented reality, voice control, and other developing features aren't widespread or developed enough to be a factor yet, but when they are, performance requirements will skyrocket!

    People's needs may be small now, but they were even smaller before... so they're steadily increasing, though maybe not as quickly as historically. Never underestimate what people may need even just a few years from now.
  • Wolfpup - Friday, April 5, 2013 - link

    Yeah, I've been wanting to know more about these architectures and how they compare to PC components for ages! Nice article.
  • robredz - Sunday, April 7, 2013 - link

    It certainly puts things in perspective in terms of gaming on mobile platforms.
  • bobjones32 - Thursday, April 4, 2013 - link

    Those are some pretty amazing comparisons. Thanks for the article, Anand!
  • sibren - Thursday, April 4, 2013 - link

    Ahem, wrong topic title; smartphones are not tablets. I wonder how people can miss that.
  • sibren - Thursday, April 4, 2013 - link

    I'm talking about the iPads and the Surface Pro.
  • kyuu - Thursday, April 4, 2013 - link

    Thanks for including the E-350. However, looking at results for the E-350 on 3DMark's website, most of the posted results are a bit better than yours (putting its GPU performance above any of the ARM SoCs). Is there any reason more recently manufactured E-350s should perform better than your old unit?
  • meacupla - Thursday, April 4, 2013 - link

    Yeah, there's something amiss with the E-350.

    The only thing I can think of is faster RAM, since all AMD APUs benefit from having faster clocked, lower timing RAM. C/E series especially, since they only have a single channel to work with.
  • mayankleoboy1 - Thursday, April 4, 2013 - link

    Can you add a modern desktop Kepler/Tahiti GPU and calculate the "performance/watt" value? That should give us a better idea of the architectural prowess.
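
    A rough way to do that by hand, if you're willing to use rated TDP as a stand-in for measured power (the scores and TDPs below are placeholders, not the article's numbers):

        # Back-of-the-envelope performance per watt from a 3DMark graphics score and a TDP.
        devices = {
            # name: (graphics_score, tdp_watts) -- illustrative values only
            "Desktop Kepler/Tahiti card": (130_000, 200.0),
            "Tablet SoC":                 (11_000,  4.0),
        }

        for name, (score, tdp) in devices.items():
            print(f"{name}: {score / tdp:,.0f} points per watt")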
  • piroroadkill - Thursday, April 4, 2013 - link

    Oh, yes, a performance/watt metric would be fantastic. Great idea.
    That said, very interesting article, glad to see it done. Thanks, Anand!
  • Mumrik - Thursday, April 4, 2013 - link

    The phones and tablets are doing better than I expected, actually. This is nice - an image of the gap. It would be interesting to see where something like a PS Vita fits in too.
  • slatanek - Thursday, April 4, 2013 - link

    Great article Anand!
  • zilm - Thursday, April 4, 2013 - link

    It would be great to compare with the HD 3000.
    It looks very close to the iPad 4 and it's in the same league (I mean not desktop).
  • IntelUser2000 - Thursday, April 4, 2013 - link

    @zilm:

    It would be a lot closer, because the HD 4000 in the regular variant is only 50-60% faster than the HD 3000. The HD 4000 in ULV is maybe only 30-40% faster.
  • zeo - Tuesday, April 16, 2013 - link

    Mind, though, that the iPad is using a quad-core GPU, while the Intel GMAs are single GPUs... Also, this isn't counting discrete graphics cards, which scale much higher than integrated GPUs can go. Even AMD's best APU still barely provides what would be mid-range discrete graphics card performance.
  • ltcommanderdata - Thursday, April 4, 2013 - link

    These results seem to suggest that high-end mobile GPUs are very close to current-gen consoles (nVidia 7900 class) and that the HD4000 is faster than current-gen consoles. Based on realizable graphics in games it doesn't seem that close although that may be advantages in bandwidth and close-to-metal access showing through.

    "Again, the new 3DMark appears to unfairly penalize the older non-unified NVIDIA GPU architectures. Keep in mind that the last NVIDIA driver drop for DX9 hardware (G7x and NV4x) was last October, before either benchmark had been released. The 8500 GT on the other hand gets the benefit of a driver released this month."

    You mentioned you were using nVidia drivers from October 2012, but the latest legacy drivers for the 6000/7000 series are GeForce 307.83 from February 2013 for Windows 7 and GeForce 307.74 from January 2013 for Windows 8.
  • perry1mm - Thursday, April 4, 2013 - link

    This will be speaking in terms of mobile graphics for the HD 4000:

    Combined with the proper specs (fast RAM, higher processor frequencies without the TDP holding it back from turbo boosting, and a higher allocation of VRAM), it should come out to 125-150% better performance than the PS3 or Xbox 360, from the comparisons I have done.

    For example, most games that run at 1080p on the PS3/360 (if that's even the native resolution, doubtful for most) are locked at 30FPS, and on PC that would be "low" settings and likely low or no AA. I can run similar games on my Vaio Duo 11 @ 1080p with low settings and low or no AA at framerates more consistent than 30. I've looked up the graphical capabilities (settings) on the consoles for games such as Skyrim, DmC, L4D2, Borderlands 2, Far Cry 2, Deus Ex: Human Revolution, and more... and it seems that where consoles are running them at 30FPS I can consistently get 35-40, if not higher.
  • perry1mm - Thursday, April 4, 2013 - link

    Oh, and in reference to the "proper specs" with the HD 4000 in my Sony Vaio Duo 11, I have the i7-3537U (turbo boost to 3.1GHz) with 8GB of 1600MHz DDR3 RAM.
  • jeffkibuule - Thursday, April 4, 2013 - link

    No game runs at 1080p on the PS3/360. It's mostly 720p60, 720p30, and even 640p(30-60). 1080p is a pipe dream on current consoles.
  • Friendly0Fire - Thursday, April 4, 2013 - link

    There was one notable exception: Wipeout HD has always run at 60fps and 1080p and looked superb for it.
  • SlyNine - Friday, April 5, 2013 - link

    Actually there are a few games that run in 1080p@60. Soul Calibur is rendered at 1080p for instance.

    Sorry you didn't know or bother checking the facts before you posted misinformation just to help prove your point.

    A point that I agree with, but I hate misinformation.
  • Anand Lal Shimpi - Thursday, April 4, 2013 - link

    You are correct, the drivers were from February 2013 not October. I'm not sure why we thought October. I've updated the article accordingly :)

    When HD 3000 launched, the unofficial word was that GPU was the first integrated GPU to finally be faster than the Xbox 360. HD 4000 is understandably quicker. The ultra mobile GPUs (likely in tablets first) will be faster than the current gen consoles in the next 1 - 2 years. Combine that with the fact that one of the ultra mobile players happens to be a big game publisher for a major console and you begin to have a rough outline of what could look like a giant trump card. Whether or not they ever play it is another question entirely.

    Take care,
    Anand
  • Ryan Smith - Thursday, April 4, 2013 - link

    I'm the one who came up with October, since that was when NVIDIA moved their DX9 cards to legacy. I had not realized there had been an interim driver release. Though it's worth noting that legacy cards don't get performance optimizations, only bug fixes. So there still would not be any optimizations in those drivers for the new benchmarks.
  • marc1000 - Saturday, April 6, 2013 - link

    Anand, it seems you are talking about Sony (I doubt it is MS). And unfortunately I am one of the orphaned users of the ill-fated Xperia Play. They had a HUGE trump card with that phone, but killed it deliberately to clear the path for the PS Vita. So I have to say that I doubt they will ever release games for any tablet; instead they will keep pushing the PS3/4 + Vita combo.
  • Krysto - Thursday, April 4, 2013 - link

    Where's the proof that E-350's CPU is faster than Cortex A15? Or did I miss it?
  • milli - Thursday, April 4, 2013 - link

    You need to compare the physics score between the E-350 and Nexus 10. It shows that the Bobcat core has a higher IPC compared to the A15.
  • scaramoosh - Thursday, April 4, 2013 - link

    compared to 8 year old desktop gpus....

    compare them to a 7990
  • tipoo - Thursday, April 4, 2013 - link

    The graphs on the chart would have to be smaller than the width of a pixel :P

    It's still impressive that single-digit-watt systems-on-chip are getting up around 7-year-old GPU performance though; the GeForce 7800 is close to what the 360 has. From everything I've been hearing, the SGX Rogue/600 series will be a huge boost, bigger than just adding MP cores.
  • B3an - Thursday, April 4, 2013 - link

    Yeah Anand, put an Nvidia Titan or 7990 in there just for the lolz. Have it on the last page or something; it would just be interesting and amusing.
  • fabarati - Thursday, April 4, 2013 - link

    A good thing to remember is that despite the 8500 GT scoring better, the 7900 GTX (and GS, and the 7800 GT) was much faster in actual games.
  • tipoo - Thursday, April 4, 2013 - link

    Yeah, the 8500 is only faster when it's a purely vertex- or pixel-shading situation, due to its unified shaders; in most games with mixed requirements the 7900 would still beat it with its dedicated shaders.
  • Flunk - Thursday, April 4, 2013 - link

    Because modern games are more shader-heavy, this would no longer be the case. An 8500 GT beats a 7900 GTX hands down for modern game performance. However, both are still pretty bad. A higher-end card like an 8800 GT is still usable in a lot of games where the 7 series is totally unusable.
  • Friendly0Fire - Thursday, April 4, 2013 - link

    Do mobile games use a lot of shaders though? What I've seen looks a lot closer to early DX9 era pipelines than the shader-heavy setups console/PC games have been moving towards.
  • fabarati - Thursday, April 4, 2013 - link

    Good point. I haven't used a 7-series GeForce since 2008, so I'm not up to date on their performance. I just meant that the benchmarks don't tell the whole story.

    If you use the DX9 path available in many games, would the 8500 GT still be faster? I mean, DX9 is probably the only way to get decent performance out of both these cards (that was always true of the 8500 GT).
  • perry1mm - Thursday, April 4, 2013 - link

    It would be interesting to see another Win 8 tablet with higher frequencies for the iGPU from an i7 and more RAM (or with higher bandwidth if it's possible). Maybe just turn off the dedicated GPU of the Edge Pro to see the difference?

    From the videos I've checked out of the Surface Pro vs games I play on my Vaio Duo 11, I get about 25% better performance with the i7-3537U @ 3.1GHz. So if the Surface Pro gets 30FPS at native res, I'll likely get between 35 and 40. But I'd still be interested in an actual comparison test, and searching, there seems to be none for GPU/gaming performance of Win 8 tablets other than the Surface and Edge... which are at extreme ends of the spectrum.
  • milli - Thursday, April 4, 2013 - link

    So a GPU with 16 Kepler cores would be enough to be faster than every SoC available. The lowest-end Kepler GPU (GT640) has 384 of them. Just to put things into perspective (a rough scaling sketch follows below).
    Also, the A15 is beaten by AMD's Bobcat core...
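
    The kind of naive shader-count scaling behind an estimate like that, treating the graphics score as linear in core count (it isn't quite: bandwidth, clocks and fixed-function hardware don't scale with it), would look something like this, with placeholder scores to be swapped for the article's Ice Storm graphics numbers:

        # Hypothetical sketch: how many Kepler cores would a SoC-level graphics score correspond to?
        def kepler_cores_to_match(gt640_score, soc_score, gt640_cores=384):
            """Naive linear scaling by CUDA core count; order-of-magnitude only."""
            return gt640_cores * soc_score / gt640_score

        # Placeholder inputs; substitute real benchmark scores for an actual estimate.
        print(kepler_cores_to_match(gt640_score=100_000, soc_score=11_000))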
  • marc1000 - Thursday, April 4, 2013 - link

    This is a thing I would like to see, even if only for academic reasons. What happens if we put a small quantity of current-gen GPU cores on a really small chip? I guess the biggest problem would be scaling down the fixed hardware that feeds the cores...
  • DanNeely - Thursday, April 4, 2013 - link

    Our first look at something like this is probably going to be the Atom2 later this year with 4(?) Intel HD GPU (IVB generation??) cores.
  • jeffkibuule - Thursday, April 4, 2013 - link

    Tegra 5 I believe will use Kepler cores.
  • Spunjji - Friday, April 5, 2013 - link

    That and tweaking them for the lower thermal envelope. The way they've been optimised for desktop/notebook voltages and thermal envelopes probably means they'd need significant alterations to be suitable for a mobile scenario.
  • tipoo - Thursday, April 4, 2013 - link

    I've been underestimating these tablet GPUs. Judging by the games that run on them, I figured they'd be up around GeForce FX series performance, but the iPad 4 is nearing the 7800, which is pretty impressive.

    This is also interesting because the Xbox 360 is only up around GeForce 7800 level too, but it also has more memory bandwidth, the eDRAM, and closer-to-metal programming to work with than the smartphone SoCs. Within the next generation of SoCs though, I think the PS3/360 GPUs will be thoroughly beaten, which is pretty amazing to think about considering these are single-digit-watt chips. Especially with the SGX Rogue/600 series.

    Games are a different matter though; the best tablet games still don't have a great deal of depth.
  • dj christian - Thursday, April 4, 2013 - link

    Metal programming?
  • tipoo - Thursday, April 4, 2013 - link

    Close to the metal meaning less overhead to go through than PC game programming.
  • jeffkibuule - Thursday, April 4, 2013 - link

    I'm not sure how much significant overhead is left when you remove APIs. At least with iOS, developers can specifically target the performance characteristics of certain devices if they wish; the only problem is that a graphically intense game isn't such an instant moneymaker on handhelds like it is on consoles. There's almost no incentive to go down that route.
  • SlyNine - Saturday, April 6, 2013 - link

    You have to remember that a 7900GT is faster than an 8500GT; hell, the 8600GT had 2x as many shaders and even that only performed close to the 7900GT.

    Even still, I doubt the benchmark does as well as it could on an 8500GT; it isn't well optimized for it. I'm willing to bet you're about right with the GeForce FX estimate.
  • whyso - Thursday, April 4, 2013 - link

    Is this showing how bad AMD is? The E-350 is an 18-watt TDP part. Even if they double perf/watt with Jaguar, you are still looking at a chip that hangs with an iPad 4 at 9 watts.
  • tipoo - Thursday, April 4, 2013 - link

    The 3DMark online scores seem to be showing higher numbers for the E350, but perhaps those pair it with fast RAM, since RAM seems to be the bottleneck on AMD APUs.
  • Flunk - Thursday, April 4, 2013 - link

    A lot of people overclock their equipment for 3Dmark.
  • oaso2000 - Thursday, April 4, 2013 - link

    I use an E-350 as my desktop computer, and I get much higher results than those in the article. I don't overclock it, but I use 1333MHz RAM.
  • ET - Thursday, April 4, 2013 - link

    You're assuming that performance is linear with power and that a PC processor has the same power as a tablet processor. It's enough to take a look at the other Brazos processors to see that's not the case.

    The C-50 runs at a lower clock speed of 1GHz vs. 1.6GHz but is 9W. The newer C-60 adds boosting to 1.33GHz.

    The Z-60 is a mobile variant which runs at 1GHz and has a TDP of 4.5W.

    It's hard to predict where AMD's next gen will land on these charts, but it's obviously not as far off in terms of performance/power as you peg it to be.

    For me, even if AMD does end up a little slower than some ARM CPUs, I'd pick such a Windows tablet over an ARM-based tablet for its compatibility with my large library of (mostly not too demanding) games.
  • whyso - Thursday, April 4, 2013 - link

    The Nexus 4 is already more powerful than the E-350 while consuming much less power. By the time Jaguar launches, even if they double perf/watt, they are still going to be behind any high-end tablet or phone SoC.
    The C-50 is weaker than or about the same as the Atom Z-2760, which doesn't do too well.
  • lmcd - Thursday, April 4, 2013 - link

    Neither of you factored in the die shrink... This is 40nm.
  • whyso - Thursday, April 4, 2013 - link

    Yes I did; I'm assuming they can double perf/watt, and to do that they will need to shrink the die. Jaguar only boosts IPC by ~15% with small efficiency gains. By going down to 28nm they might be able to double perf/watt (which is a clearly optimistic prediction).
  • lmcd - Thursday, April 4, 2013 - link

    If the C-50 is about equal to the Z-2760, then +15% IPC plus the die shrink suddenly puts AMD in this, so I'd say you didn't factor it in very well...
  • whyso - Friday, April 5, 2013 - link

    The Asus VivoTab Smart, which uses the Z2760, is pretty much rock bottom in these tests.
  • kyuu - Thursday, April 4, 2013 - link

    No, it's not. The E-350 is actually still stronger than the A15s. The only reason those ARM SoCs get a better score in the physics test is because they are quad-core compared to the E-350's two cores. Also, as ET stated below, performance doesn't scale down linearly with power, so a Bobcat at, say, a 75% lower TDP isn't going to have 75% lower performance. Heck, you can look at 3DMark results for the C-60 to see that it still outperforms the ARM SoCs at a lower TDP (assuming nothing fishy is going on with those results). The Z-60, with a 4.5W TDP, should still have comparable performance to the C-60.

    Plus, Bobcat is a couple years old. When AMD (finally) gets Jaguar out sometime this year, it should handily beat any A15-based SoC.

    Finally, if this shows that AMD is "bad", then it would show Intel as "absolutely pathetic".
  • Wilco1 - Thursday, April 4, 2013 - link

    The E-350 certainly looks good indeed; however, I would expect that the A15 and E-350 will be close in most single-threaded benchmarks, especially considering the latest SoCs will have higher frequencies (e.g. Tegra 4 is 1.9GHz). On multi-threaded workloads the E-350 will lose out against any quad core, as the physics results show.

    A Z-60 at 1GHz will be considerably slower than an E-350 at 1.6GHz, so will lose out against pretty much all A15 SoCs. How well Jaguar will do obviously depends on the IPC improvement and clock speeds. But unless it is quad core, it certainly won't win all benchmarks.
  • Spunjji - Friday, April 5, 2013 - link

    Kabini uses four Jaguar cores; Temash uses a pair of them. A 15% IPC improvement combined with a clock speed bump from the die shrink should see it easily reaching competitive levels with the corresponding A15-based SoCs.
  • milli - Thursday, April 4, 2013 - link

    You're looking at it the wrong way. The E-350 is a desktop/netbook CPU. If you want to make a direct comparison, you should compare it to the C-60. That one is 9W (compared to an estimated 5W for the A6X). It will still beat the A6X on CPU performance and stay close enough in 3D. It's still produced on 40nm (compared to the A6's 32nm) and has a much smaller die (75mm² vs 123mm²). AMD has a 64-bit memory interface, the A6X 128-bit. If you look at it that way, AMD isn't doing too badly.
    The Jaguar/GCN-based Temash will be able to get C-60 performance or better under 4W.
  • lmcd - Thursday, April 4, 2013 - link

    The die size is what really gets me. AMD should be able to push Temash everywhere if they hit the die shrink advantages, push the GPU size up a bit, and the Jaguar core delivers.
  • jabber - Thursday, April 4, 2013 - link

    Interesting article. I'm always intrigued to know how far we've come.

    I think what would be really handy is an annual "How things have progressed!" round up at the end of the year.

    This would entail picking up the past and present top-of-the-range flagship cards from all generations (excluding the custom limited-edition bizarro models) and doing a range of core benchmarks. You could go back as far as the first PCIe cards (2004-ish).

    Up until a few years ago I was still running a 7900GTX but really have no clue as to how much better current cards are in real hard facts and figures.

    Would be good to see how far we have been progressing over the past 10 years.
  • marc1000 - Thursday, April 4, 2013 - link

    I too would like to see an actual gaming comparison on GPUs from 2004 to now. Even if just on a small number of games, and with limited IQ settings.

    Something like testing 3 games: 1 really light, 1 console port, and 1 current and heavy. All benchmarked at medium settings, at 1280x720 or 1680x1050 resolution. No need to scale IQ or AA settings, or to cover triple-monitor situations.

    (Pick just DX9 games if needed; most people who are NOT tech enthusiasts can't tell the difference between DX9 and DX11 anyway.)
  • dj christian - Thursday, April 4, 2013 - link

    +1 to that!
  • Spunjji - Friday, April 5, 2013 - link

    Agreed, this would save me a shedload of hassle when recommending upgrades for friends! I often get questions like "is my 8800GTX better than an HD7770" and making that comparison tends to involve rather shady comparisons (e.g. 8800GTX x% better than 6570, 7770 x% better than 6570, therefore..?!) because the older cards just don't get included in the newer comparisons.
  • powerarmour - Thursday, April 4, 2013 - link

    Interestingly, the Atom 330/ION would be fairly high up most of those lists too; here's my Ice Storm compare link: http://www.3dmark.com/is/312945

    Just shows how badly Intel needs competitive graphics hardware on Atom again.
  • Spunjji - Friday, April 5, 2013 - link

    Killing ION (and 3rd party chipset support in general) starts to look like a huge mistake on Intel's part when you look closely at the performance numbers of their Atom chips.
  • powerarmour - Saturday, April 13, 2013 - link

    Here's an Atom D2700/GMA 3650 run for comparison as well:

    http://www.3dmark.com/is/394030

    That CedarView SGX545 (also in CloverView) is pretty damn slow!
  • Hrel - Thursday, April 4, 2013 - link

    Good article. I'd like to see how more modern desktop GPUs compare though, especially at the lower resolutions these "phones" run at. The 8800GT, since that was THE part to get. The GTX460, same thing. HD6850. GTX660M/560M. Probably just wishful thinking, but if you have these dinosaurs lying around perhaps you have those too. But also at 720p and 1080p, since you can output the phones to a TV just like a desktop. I know you have the Razer, which is more modern, so that could replace the 5 and 6 series GPUs I mentioned, I guess.
  • geniekid - Thursday, April 4, 2013 - link

    I recognize the XFX 7900 GS as it was the graphics card I chose for the very first computer I built myself! Based on the picture and the specs, it appears you used the factory OCed version of the 7900 GS?
  • marc1000 - Thursday, April 4, 2013 - link

    Oh boy, I'm getting old... The GPU for the very first computer I built myself was an ancient 16MB Voodoo Banshee from Diamond Multimedia. It was an intermediate card between the Voodoo 2 and 3, and I even installed Linux on it and enabled the first 3D-like desktop experience I ever saw (it was on GNOME). MS could only deliver a similar experience with Windows 7...

    Yes, I'm old!
  • jamyryals - Thursday, April 4, 2013 - link

    Awesome article Anand, this juxtaposition is just wild. The real mind blower would be the perf/watt comparison that has been mentioned. Obviously time moves and almost everything gets better, but to turn around and look in the rearview like that was a fun read Anand. I'm just wondering how big an impact the stacked DRAM will have on memory bandwidth in the future. Also, what's been the challenge to incorporate this up to now?
  • WagonWheelsRX8 - Thursday, April 4, 2013 - link

    Great article!!!
  • Peroxyde - Thursday, April 4, 2013 - link

    Very interesting article. Hope you will do a similar one comparing mobile CPUs vs desktop CPUs
  • SPBHM - Thursday, April 4, 2013 - link

    Interesting stuff; the HD 4000 completely destroys the 7900GTX.
    Considering the PlayStation 3 uses a crippled G70-based GPU, it's fantastic what results they can achieve on a highly optimized platform...

    Also, it would have been interesting to see something from ATI, an X1900 or X1950. I think it would better represent the early 2006 GPUs.
  • milli - Thursday, April 4, 2013 - link

    Yeah, agreed. I've seen recent benchmarks of an X1950 and it aged much better than the 7900GTX.
  • Spunjji - Friday, April 5, 2013 - link

    Definitely, it had a much more forward-looking shader arrangement; tomorrow's performance cannot sell today's GPU though and I'm not sure AMD have fully learned from that yet.
  • Th-z - Thursday, April 4, 2013 - link

    I'll take these numbers with a grain of salt; the 7900 GTX still has the memory bandwidth advantage, about 2x as much, so maybe in actual games or certain situations it's faster. Perhaps Anand can do a retro discrete vs. today's iGPU review just for the trend; it could be fun as well :)
  • Wilco1 - Thursday, April 4, 2013 - link

    Anand, what exactly makes you claim: "Given that most of the ARM based CPU competitors tend to be a bit slower than Atom"? On the CPU-bound physics test the Z-2760 scores way lower than any of the Cortex-A9, A15 and Snapdragon devices (the A15 is more than 5 times as fast core for core despite the Atom running at a higher frequency). Even on SunSpider, Atom scores half that of an A15. I am confused: it's not faster on single-threaded CPU tasks, not on multi-threaded CPU tasks, and clearly one of the worst on GPU tasks. So where exactly is Atom faster? Or are you talking about the next-generation Atoms which aren't out yet???
  • tech4real - Thursday, April 4, 2013 - link

    It's fairly odd to see such a big gap between Atom and A9/A15 in a supposedly CPU-bound test. However, the score powerarmour posted several comments earlier for an Atom 330/ION system seems to give us a good explanation.

    The physics score of 7951 and 25.2 FPS on his Atom 330/ION system is pretty much in line with what you'd expect from the Atom CPU: faster than A9, a bit slower than A15. So I would guess that the extremely low Z2760 score is likely due to its driver implementation and is not truly exercising the CPU on that platform.
  • Wilco1 - Thursday, April 4, 2013 - link

    It may be a driver issue, however it doesn't mean that with better drivers a Z-2760 would be capable of achieving that same score. The Ion Atom uses a lot more power, not just the CPU, but also the separate higher-end GPU and chipset. Besides driver differences, a possibility is that the Z-2760 is thermally throttled to stay within a much lower TDP.
  • tech4real - Friday, April 5, 2013 - link

    The scale of the drop does not look like typical thermal throttling, so I would lean towards the "driver screw-up" theory, particularly considering Intel's poor track record on PowerVR driver support. It would be interesting to see someone dig a bit deeper into this.
  • islade - Thursday, April 4, 2013 - link

    I'd love to see an iPad 3 included in these results. I'm sure many readers have iPad 3s and couldn't justify upgrading to the 4, so I'd love to see where we fit in with all this :)
  • tipoo - Sunday, April 7, 2013 - link

    CPU-wise you'd be exactly the same as the iPad 2 and Mini, and just slightly above the 4S. GPU-wise you'd be like the 5.
  • Beenthere - Thursday, April 4, 2013 - link

    Anand -

    Any chance of spending some time fixing the website issues instead of tweeting? I would think that with hundreds of negative comments regarding the redesigned site, and the fact that many of us can't even read it with parts cut off, odd fonts, conflicting color schemes and layout design, etc., this would be far more important than posting tweets??? I'm sure your ad revenue is going to take a big hit from the significant drop in page hits from a basically unusable website for many people since the redesign.
  • mrzed - Friday, April 5, 2013 - link

    Anand,

    Such a great and timely article for me. The PC I am posting this from runs a 7900GS, which shows you how much gaming I've done in the last 6 years (I now have a 5-year-old and a toddler). I came down to the rumpus room with my phone in my pocket, saw the article title and was immediately curious just how a new phone might compare in gaming prowess with the PC I still mostly surf on.

    You call yourself out for not ever dreaming that your closet full of cards may be used to benchmark phones, but I recall many years ago wondering why you were spending so much editorial direction on phones. You saw the writing on the wall before many in the industry, and as a result, your hardware site is remaining relevant as the PC industry enters its long sunset. Huzzah.
  • Infy2 - Friday, April 5, 2013 - link

    My old office PC, a dual-core Core 2 Duo 8400 at 3GHz with a GF 8600GT, scored: 3DMark score 28333, graphics score 30322, physics score 23044, Graphics test 1 132.3 FPS, Graphics test 2 131.4 FPS, physics test 73.2 FPS.
  • yannigr - Friday, April 5, 2013 - link

    I loved this article. It is really, really, REALLY interesting.
  • somata - Friday, April 5, 2013 - link

    If anyone was curious how a contemporaneous ATI card performs in this context, here are some DXBenchmark results from my Radeon X1900 XT (early 2006):

    http://dxbenchmark.com/phonedetails.jsp?benchmark=...

    With 38 fps in T-Rex HD (offscreen), it looks like the R580 architecture did wind up being more future-proof than G70 as games became more shader-heavy, not that it matters anymore ;-).

    Unfortunately, like Anand, I was unable to run 3DMark on this card most likely due to outdated drivers.
  • tipoo - Sunday, April 7, 2013 - link

    Yeah, I remember when those came out everyone was saying the X1K series would be more future-proof than the 7 series due to the shader/vertex balance, and I guess that turned out to be true. Too bad it happened so late that their performance isn't relevant in modern gaming, heh. But that's also part of why the 360 does better on cross-platform ports.
  • GregGritton - Saturday, April 6, 2013 - link

    The comparisons to 6-8 year old desktop CPUs and GPUs are actually quite relevant. They give an idea of the graphical intensity of games that you might be able to play on modern cell phones and tablets. For example, World of Warcraft was released in 2004 and ran on graphics cards of the day. Thus, modern cell phone and tablet processors should be able to run a game of similar graphical quality.
  • tipoo - Sunday, April 7, 2013 - link

    But possibly not size. Storage and standardized controllers are the bottleneck now imo.
  • jasonelmore - Sunday, April 7, 2013 - link

    Instead of doing embedded/stacked DRAM to increase memory performance, couldn't they just integrate the memory controller into the CPU/GPU like Intel did with the i7? That increased memory bandwidth significantly.
  • marc1000 - Monday, April 8, 2013 - link

    I believe it already is integrated for all the SoCs listed...
  • robredz - Sunday, April 7, 2013 - link

    I wonder how the later AMD E-450 (with the 6320 GPU) and the quad-core MediaTek smartphone processors would compare?
  • geneiusxie - Thursday, May 22, 2014 - link

    So a 7-year-old GPU is still 50% faster than the iPad? A modern GPU like the AMD 8770 is ~40x faster still...
    It looks like actual power efficiency might be pretty close to Haswell, but many AMD/NVIDIA GPUs (like the 8770), at 21 GFLOPS/watt, have over 2-4x the power efficiency of those puny laptop processors.

    ARM has to follow the same laws of physics that everyone else does :P
  • Samy - Wednesday, August 6, 2014 - link

    This interesting article should be updated for 2014, especially for the NVIDIA Kepler-based Tegra K1's performance in the Shield Tablet. Thank you.
  • onteo - Sunday, January 25, 2015 - link

    I would like to see this with the new Tegra K1, X1, A8X...
  • SlyNine - Saturday, March 5, 2016 - link

    10 year old hardware running on 5 year old drivers vs brand new gpus with brand new drivers.

    Coincidentally, the software is created for newer GPUs and drivers. Even if the old stuff were 5x faster, I'd be surprised if it ran as well.

    Optimize the software or drivers for those old gpus and it's a curb stomp.
  • adifbbk1 - Saturday, July 9, 2016 - link

    How can I use an Android GPU in a PC?
  • adifbbk1 - Saturday, July 9, 2016 - link

    I don't have a GPU in my PC, but my tab is a high-end device with 2GB RAM, a 1.3GHz processor, and an ARM Mali-400 MP2 GPU.
    How can I use it?
  • android_user - Friday, March 10, 2017 - link

    What is up with the desktop GPU core counts? It says "384" cores for the GT640.
    What is the situation with smartphone GPU core counts? "1"?
    Please tell me if I am wrong.

    http://www.geforce.com/hardware/desktop-gpus/gefor...
