tipoo - Tuesday, July 31, 2012 - link
I wonder where these would be ranked among old desktop GPUs, Geforce 4 series-ish? FX series?
aruisdante - Tuesday, July 31, 2012 - link
They're much more modern than that in terms of features, as they support roughly the mobile equivalent of DX9/OpenGL 2.0.
The fact that Tegra and Mali have dedicated pixel and vertex shaders puts them roughly in the GeForce 7 series, although they're probably more powerful clock-per-clock-per-unit than those cores were (not enough to make up the vast difference in clock speed, though). If you just look at raw hardware, it would put them roughly equivalent to somewhere between a 7300GT and a 7600GT. However, their raw throughput numbers are a lot lower thanks to the reduced clock speeds: even a 7100GS ran at 600+ MHz.
The SGX is a unified shader architecture supporting DX10.1-like functionality, putting it in the GeForce 8 to GTX 200 era. What they define as a 'core' is a bit hard to figure out (I would guess they mean an SMP, i.e. a cluster of smaller unified shader units), but based on the numbers it would be somewhere at the very bottom of the rung, like a GeForce 9300, but not even that.
tipoo - Tuesday, July 31, 2012 - link
Funny how much more advanced SGX GPUs are than those of a GPU giant like Nvidia. But that could well change next gen with Kepler-based Tegras.
tuxRoller - Tuesday, July 31, 2012 - link
I believe Adreno is also unified since the 200 series. I'd also be astonished if Nvidia gets anywhere close to the 3.5-year-old 543MP, let alone any of the newer models. I imagine the problem for Nvidia will be getting around all of IMG's patents.
GraveUypo - Tuesday, September 25, 2012 - link
i know i'm a few months late to the party but... he was almost spot-on as far as raw performance goes with his guess. i'll be honest, i'm not familiar with the performance of the lower-end nvidia cards, but a 7600GT is definitely a lot faster than even the ipad 3 gpu.
features aside, if we stick to theoretical peak performance, most current mobile gpus rank between a geforce 4 ti4600 and a geforce fx. here's a quick comparison of gpu theoretical peak performance:
· Nexus 7 (Tegra 3) - 10 GFLOPS
· Galaxy S3 (Mali-400MP) - 15.8 GFLOPS
· iPad 3 (SGX 543MP4) - 25.6 GFLOPS
· GeForce 4 Ti4600 - 16 GFLOPS
· GeForce FX 5950 Ultra - 30.4 GFLOPS
· GeForce 6800 Ultra - 75 GFLOPS
· GeForce 7600 GT - 136 GFLOPS
· Xbox 360 and PS3 GPUs - somewhere around here, closer to the latter
· GeForce 7950 GT - 255 GFLOPS
as you can see, there's still a lot of ground to cover to get to 7600GT levels of performance. maybe in a couple more generations. This is a very simplified comparison, because there's no way to compare them directly, but it's enough to give you an idea of the situation.
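for reference, these peak numbers are basically just shader ALU count x FLOPs per ALU per cycle x clock speed. a rough back-of-envelope sketch in python (the tegra 3 unit count and clock below are my own ballpark assumptions, not confirmed specs, so treat the output as illustrative only):

# peak GFLOPS ~= ALUs * FLOPs per ALU per cycle * clock in GHz
# unit count and clock are assumed/approximate, for illustration only
def peak_gflops(alus, flops_per_alu_per_cycle, clock_ghz):
    return alus * flops_per_alu_per_cycle * clock_ghz

# Tegra 3 ULP GeForce: ~12 ALUs, MADD = 2 FLOPs/cycle, ~0.42 GHz (assumed)
print(round(peak_gflops(12, 2, 0.416), 1))  # ~10 GFLOPS, matching the list above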
GraveUypo - Tuesday, September 25, 2012 - link
(a few months later, but here we go) yes. very nice guess.
in fact the theoretical peak performance of the mali-400mp on the galaxy s3 is virtually the same as a geforce4 ti4600. the SGX 543MP4 on the ipad 3 is a bit inferior to the geforce fx 5900 ultra. read my other post for more details :P
UpSpin - Tuesday, July 31, 2012 - link
So Tegra 3 is better, and the Exynos quad-core worse, than most people think, at least in more demanding games with more polygons. And thus the US Galaxy S3 is probably faster in more recent, polygon-heavy games than the international quad-core version (more modern ARM core and faster GPU).
It will be interesting to see how well the Apple SoCs scale with this HD test.
And even if the results might look poor, including some older-spec SoCs might be interesting too, to see how much the new SoCs have improved. Like some older Snapdragons. (And only offscreen seems to be interesting.)
Arnulf - Tuesday, July 31, 2012 - link
WTF are you babbling about? There is no mention of Exynos, let alone a quad-core Exynos, anywhere in the article?!
owan - Tuesday, July 31, 2012 - link
Did you even read the second page?
owned66 - Tuesday, July 31, 2012 - link
lol epic fail
ewood - Tuesday, July 31, 2012 - link
read the article before you ridicule someone's post.
aruisdante - Tuesday, July 31, 2012 - link
This is because the Egypt test favors vertex shading power, and Mali400 is intentionally lacking in that department. If you were to build a test that was pixel-shader heavy, the results would be more balanced. You saw the same kind of thing happen in the GeForce 7 / Radeon X1900 era, when it was nVidia that decided it was better to have more pixel power than vertex power. The SGX doesn't have to worry about that since it's a unified shader architecture.
jeremyshaw - Tuesday, July 31, 2012 - link
I think it was ATi that made the first jump, with the X1950. 48..... 48!! Pixel shaders vs only 16 vertex.
Before that, for both ATi and nVidia, it was pretty much always 1:1 pixel:vertex at the top end.
Death666Angel - Tuesday, July 31, 2012 - link
In the game benchmark I see the quad Exynos being equal to or better than Tegra 3, and the dual Exynos holds its own quite well, considering it launched over half a year earlier. :)
At this point, though, I wouldn't buy a full-price kit with either processor.
Stormkroe - Tuesday, July 31, 2012 - link
Looks like offscreen Mali400 scores are STILL broken in the game tests. Scores 13 fps in 720p and 1080p. Not as bad as the score going dramatically UP when jumping from 800x480 to 720p like in GLBenchmark 2.1, but still... Why hasn't this been addressed?
aruisdante - Tuesday, July 31, 2012 - link
Because increasing the resolution increases the load on pixel shaders more than vertex shaders, and Mali400 is hamstrung by its vertex power in this test, so it has pixel shading power to spare. It doesn't surprise me that the framerate doesn't change with resolution in this case.
Stormkroe - Tuesday, July 31, 2012 - link
But they DO change; in GL2.1 they actually go UP by a lot. That's going from 800x480 to 720p (2.4 times the pixels). Even more proof is that the same SoC in the Galaxy Note, which has a native resolution very close to the 720p used in these offscreen tests, shows appropriately scaling performance. It's only when rendering offscreen that Mali400 is able to beat the competition. In GLBenchmark 2.5 they don't go up, but stay the same, even though there is now a 5.9x difference in pixel count. The Galaxy Note numbers prove that it's not a vertex shader issue. Let's even say that the Mali is so vertex bound (remember, it has to be at LEAST a 6x deficiency vs the rest of the GPU based on these scores), the fps should still have gone down because of the increased demands on the rest of the SoC.
I still say offscreen Mali400 is broken.
metafor - Tuesday, July 31, 2012 - link
Egypt 2.1 didn't have complex shaders like 2.5. If only one thing in Mali is bottlenecked (like the vertex shader), then nothing else will change. 2.1 simply didn't provide enough complex shaders to bottleneck the vertex shaders, especially at lower resolutions.
Stormkroe - Tuesday, July 31, 2012 - link
I'm resisting the urge to go all-caps fury here. It's not a bottleneck issue. The offscreen score @720p is HIGHER than the native score at 480p (scores are well below vsync, so it can't be that). And offscreen scores @720p are almost double what near-720p resolutions show when they are native.
Reduced completely: why does simply going offscreen nearly Double The SCOREOHGOSH I DIDN'T MAKE IT!??!
Rurou - Tuesday, July 31, 2012 - link
I'm also resisting the urge to go all caps, but what are you talking about?
Egypt HD:
Galaxy S2 (800x480): 9.1
Galaxy S3 (Mali, 1280x720): 13
Egypt HD offscreen (1080p):
Galaxy S2: 8.8
Galaxy S3 (Mali): 13
Egypt Classic:
Galaxy S2: 60
Galaxy S3: 59
Egypt Classic offscreen (1080p):
Galaxy S2: 31
Galaxy S3: 57
What is the point that you're trying to make? I don't understand. Have you downloaded GLBenchmark 2.5?
All hardware has lower fps on the offscreen test, with the exception of the Galaxy S3, which suggests that the Galaxy S3 was severely vertex limited... heck, even the Galaxy S2 is severely vertex limited, because that kind of difference is usually within the margin of error.
Where is this doubling that you mentioned? Or are you seeing double? Or are you talking about something that wasn't presented in the article? If that is the case, then in what way does it affect the results from the article?
Stormkroe - Tuesday, July 31, 2012 - link
Performance improvements have caused vsync to get in the way of GLBenchmark 2.1 numbers on the Galaxy S2.
Galaxy S2
Egypt HD 23.4fps
Egypt HD 720p offscreen 21.7fps
384k vs 921k pixels 240% workload, still 93% score
Galaxy Note
glbenchmark 2.1
Egypt High (1280x800) 48.1FPS
Egypt Offscreen (1280x720) 72.3FPS!!
1024k vs 921k pixels 90% workload, 150% score (newer scores, while too high, are no longer double, my bad)
Galaxy Note
GLbenchmark 2.5
Egypt HD (1280x800) 16.5FPS
Egypt HD (1920x1080) 15.9FPS
1024k vs 2073.6k Pixels 202.5% workload, still 96.4% score
Galaxy S3
Egypt HD (1280x720) 23.9FPS
Egypt HD (1920x1080) 23.7FPS
921k vs 2073.6k Pixels 225% workload, still 99% score
The GS2 and GS3 continue the trend of scoring almost the same while more than doubling the workload (the only difference being that the doubled workload is offscreen).
I included the Galaxy Note because its native screen resolution of 1280x800 is very, very close to the offscreen resolution in GLBenchmark 2.1, making apples-to-apples very straightforward and eliminating vertex shader architecture as the culprit.
I got these scores from GLBenchmark's site to give you as much detail as I could on short notice; it confirms the trend that shows up in the AT reviews.
Hopefully this helps :)
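If anyone wants to check my percentages, they're just pixel-count ratios vs fps ratios. Quick Python sketch using the GLBenchmark 2.5 numbers above:

# workload % = pixel count of run B / run A; score % = fps of run B / run A
def compare(res_a, fps_a, res_b, fps_b):
    workload = 100.0 * (res_b[0] * res_b[1]) / (res_a[0] * res_a[1])
    score = 100.0 * fps_b / fps_a
    print("%.0f%% workload, %.0f%% score" % (workload, score))

compare((1280, 800), 16.5, (1920, 1080), 15.9)  # Galaxy Note: ~202% workload, ~96% score
compare((1280, 720), 23.9, (1920, 1080), 23.7)  # Galaxy S3: 225% workload, ~99% score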
metafor - Tuesday, July 31, 2012 - link
"I included the galaxy note because it's native screen resolution of 1280x800 is very, very close to the offscreen resolution in GLbenchmark 2.1, making apples to apples very straightforward and eliminating vertex shader architecture as the culprit."What's the line of reasoning here? Vertex or front-end limitations are fairly resolution independent (not entirely independent depending on the scene). And more importantly, GLB 2.1 didn't necessarily have the complex shaders to bottleneck Mali400.
The only time performance doubled when going offscreen is when the framerate hits the vsync limit during the on-screen test (GS2 in GLB 2.1).
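To put the vertex-limited argument in concrete terms: in a simple pipeline model, frame time is set by whichever stage is slowest. Vertex work scales with triangle count, pixel work with resolution, so a vertex-bound GPU shows nearly flat fps as resolution goes up. A toy sketch (every throughput number below is invented, just to show the shape of the behavior, not a claim about Mali400's real specs):

# frame time ~ max(vertex time, pixel time); fps = 1 / frame time
def fps(triangles, pixels, verts_per_sec, pixels_per_sec):
    return 1.0 / max(triangles / verts_per_sec, pixels / pixels_per_sec)

TRIANGLES = 1.5e6          # assumed heavy scene
VERTS_PER_SEC = 30e6       # assumed weak vertex throughput
PIXELS_PER_SEC = 800e6     # assumed strong pixel throughput
for w, h in [(800, 480), (1280, 720), (1920, 1080)]:
    print(w, "x", h, "->", round(fps(TRIANGLES, w * h, VERTS_PER_SEC, PIXELS_PER_SEC), 1), "fps")
# prints ~20 fps at every resolution: vertex-bound, so the extra pixels are
# effectively free until pixel time finally overtakes vertex time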
Stormkroe - Tuesday, July 31, 2012 - link
You said the inconsistency could be because of a vertex shader weakness in Mali400.
The reasoning behind listing the GL2.1 Galaxy Note scores is that we have a perfect example of NOTHING changing but the state of 'offscreen rendered' vs 'onscreen rendered', yet offscreen scores jump to 150% of onscreen scores. (I apologized for the lack of score doubling in my last post, as the numbers have become more even since I posted about this many months ago.)
So, in a nutshell, Exynos score jumps to 150% by simply switching from onscreen to offscreen, with NO OTHER VARIABLES.
Now that GL2.5 defaults to 1080p for offscreen, I've got no other way to compare on/offscreen evenly.
If you know of an Exynos tablet with a 1080p screen, we can do a straight apples-to-apples comparison with GL2.5, which, judging by the GS2/3/Note scores, will be a repeat of the skyrocketing offscreen numbers.
leexgx - Tuesday, July 31, 2012 - link
come on, we do not want this comment box getting very small, do we?
metafor - Wednesday, August 1, 2012 - link
"The reasoning behind listing the GL2.1 galaxy note scores is that we have a perfect example of NOTHING changing but the state of 'offscreen rendered' vs 'onscreen rendered', yet offscreen scores jump to 150% of onscreen scores. (I apologized for the lack of score doubling in my last post, as the numbers have become more even since I posted about this many months ago.So, in a nutshell, Exynos score jumps to 150% by simply switching from onscreen to offscreen, with NO OTHER VARIABLES."
Erm, vsync. Like I said, when numbers come close to 60fps, the averages will vary a lot since the on-screen numbers will clip during high framerate times. That should be quite obvious....
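To spell out the vsync effect with some made-up per-second framerates (numbers invented purely for illustration):

# per-second framerates a GPU could hit over one run (invented numbers)
uncapped = [85, 90, 40, 35, 80, 75, 30, 95]
capped = [min(f, 60) for f in uncapped]   # on-screen run clips at the 60 Hz vsync

print(sum(uncapped) / len(uncapped))  # offscreen average: ~66 fps
print(sum(capped) / len(capped))      # on-screen average: ~51 fps
# same GPU, same scene, yet the offscreen score comes out ~30% higher just
# because the fast stretches aren't clipped -- the kind of gap the Galaxy Note
# GLB 2.1 on/offscreen numbers show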
edlee - Tuesday, July 31, 2012 - link
umm, how come there are a bunch of SoCs missing from the test??
Exynos + Mali 400
Qualcomm + Adreno 220
Stormkroe - Tuesday, July 31, 2012 - link
It's all on the second page, except Adreno 220 hasn't ever gotten a fix for 2.1 to run the offscreen game benches. Not sure if it works yet on 2.5.
A5 - Tuesday, July 31, 2012 - link
I'm running it on my Adreno-220 EVO 3D right now. Will post in a second.
A5 - Tuesday, July 31, 2012 - link
It looks like it falls in somewhere around 2/3rds of the Adreno 225's performance. They actually have a pretty decent comparison tool on the glbenchmark.com site.
Stormkroe - Tuesday, July 31, 2012 - link
That's the same phone I have. Good to hear that it's finally working. Do you have the updated drivers on your phone? They make an enormous difference for games. I'd be interested in your scores.
A5 - Tuesday, July 31, 2012 - link
My results fall in almost exactly with these running a ROM based on the VM ICS leak: https://glbenchmark.com/phonedetails.jsp?D=HTC+EVO...
The offscreen+MSAA tests are still busted, but the rest of it seems to work.
Bateluer - Tuesday, July 31, 2012 - link
With the Galaxy Nexus, my ePenis feels inadequate. Next crop of Nexus devices better push the performance envelope.
hurrakan - Tuesday, July 31, 2012 - link
I was going to buy a Samsung Galaxy S3 here in the UK, but is it really utter crap compared to the US version?
aruisdante - Tuesday, July 31, 2012 - link
It's just different. Mali400 is very good in pixel-shader-heavy situations. And the Exynos quad is better than the S4's current dual-core Krait implementation in highly threaded environments.
The problem is that a lot of current workloads are neither of those things. In day-to-day tasks, raw processor performance between quad-core A9 (Exynos Quad, Tegra 3) and dual-core Krait (Snapdragon S4) works out to be roughly equal, because Krait is about 30% faster clock for clock and runs at about 20% higher clock speeds thanks to its 28nm construction. Since most workloads are not highly threaded, Krait's increased single-threaded performance is enough to overcome the quad-core's core-count advantage.
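Putting rough numbers on that (using the ~30% IPC and ~20% clock figures above, and assuming perfect scaling across cores, which is obviously an idealization):

# per-core throughput, normalized to a Cortex-A9 core = 1.0
a9 = 1.00
krait = 1.30 * 1.20        # ~30% higher IPC * ~20% higher clock => ~1.56x per core

for threads in (1, 2, 3, 4):
    quad_a9 = a9 * min(threads, 4)
    dual_krait = krait * min(threads, 2)
    print(threads, "thread(s):", round(quad_a9, 2), "vs", round(dual_krait, 2))
# the quad A9 only pulls ahead once all four cores are kept busy (4.0 vs ~3.12);
# with 1-3 threads the dual Krait matches or wins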
Pipperox - Tuesday, July 31, 2012 - link
Not quite.
First of all, synthetic benchmarks can be very misleading, as they selectively stress certain components in isolation, and they're also prone to cheating, i.e. benchmark-specific optimizations which bring nothing for real-world performance.
In the PC industry they've been practically abandoned in reviewing new graphics cards, in favor of game benchmarks.
The Mali400 in the SGS III international version has by far the highest fill rate, 3x compared to Adreno, which in turn has a much higher triangle throughput, according to this test.
But the thing is, there are few if any games out there which come close to the polygon count of the Egypt Classic scenario.
Egypt HD is rather extreme, and more a preview of things to come, but with no smartphone significantly breaking the 20 fps barrier (and if you run the benchmark, you see how the frame rate often dips to 5 fps or less), it's unlikely that we'll see anything remotely comparable in the next couple of years.
So for the games out there and in the near future, Mali400 is king.
dagamer34 - Tuesday, July 31, 2012 - link
A big problem is that any 3D-heavy game that you'd want to use for benchmarking on Android is basically fully optimized for a specific chipset, to the point that nVidia and Qualcomm heavily assist developers who use their chips in exchange for prominent placement in TegraZone and GameCommand respectively.
I feel that the great equalizer in the SoC wars will be Windows RT, since one binary will have to run on a Tegra 3, Snapdragon S4, and OMAP4, so the chances of cheating will likely be diminished.
metafor - Tuesday, July 31, 2012 - link
Of course, with modern games that are mostly fill-rate bound, all these GPUs have little trouble hitting the 60fps barrier. So while fill-rate monster GPUs can indeed win in off-screen, non-vsync'ed benchmarks, it's really kinda moot.
Lucian Armasu - Tuesday, July 31, 2012 - link
It seems they made the Egypt tests at least twice as aggressive as they should've been in this release. It's kind of ridiculous that all the latest GPUs can't even get past 15 FPS in it. I mean, what's the point of showing something like that? Might as well make the app for the latest PC gaming hardware.
So yeah, I think this version of Egypt is too aggressive, especially since we know they are going to launch GLBenchmark 3.0 for OpenGL ES 3.0 this fall. What are they going to do when they have to add all the extra graphics features of OpenGL ES 3.0 on top of the new, very complex Egypt test? Will they just make all the new GPUs show 5 FPS in their tests?
This is definitely wrong.
Mike1111 - Tuesday, July 31, 2012 - link
"It's kind of ridiculous that all the latest GPU's can't even get past 15 FPS in it."A5X and Adreno 320 weren't tested yet. And the next generation of mobile GPUs coming in early 2013 will be much more powerful.
A5 - Tuesday, July 31, 2012 - link
http://glbenchmark.com/phonedetails.jsp?benchmark=...
A5X roughly triples the Fill Test of the Tegra 3. It also gets about 50 fps on the Offscreen test in Egypt HD.
If anything, I'd say they didn't make it stressful enough.
Mike1111 - Tuesday, July 31, 2012 - link
"A5X roughly triples the Fill Test of the Tegra 3. It also gets about 50 fps on the Offscreen test in Egypt HD."I wish. But I only see 24fps in the Egypt HD 1080p offscreen?!
pcinyourhand - Tuesday, July 31, 2012 - link
You clearly have no experience when it comes to GPU benchmarking...
A GOOD benchmark is supposed to exercise the GPU heavily and stress the GPU and the SoC to accurately measure graphics performance. A benchmark is not supposed to run at 30 or 60 fps...
metafor - Tuesday, July 31, 2012 - link
GLBenchmark 3.0 isn't just more stress. GPUs don't work like that. It's there to test the feature set of the Halti generation of GPUs, which aren't tested at all in GLB 2.5.
The actual compute power stressed -- such as triangle throughput, fetch speed, fill rate, etc. -- will likely stay the same between 3.0 and 2.5.
Pirks - Tuesday, July 31, 2012 - link
[vader]NOOOOOOOOOOOOOOOOOOOOOOOO!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!![/vader]
aryonoco - Tuesday, July 31, 2012 - link
To make a truly fair comparison between SoCs, I think the version of Android should also be mentioned in the charts, as newer versions might come with newer/updated drivers, which makes for not much of a level playing field.
To illustrate the point, my Motorola (Droid) RAZR (OMAP 4430) gets 5.5 fps on the Egypt HD onscreen test (running at qHD resolution). I find it interesting that it scores so much lower than the OMAP 4460 in the Galaxy Nexus, as they run at exactly the same clocks, and until now every GPU benchmark has put them exactly equal.
Which makes me wonder: what version of Android is the Galaxy Nexus running in this test? Is it Jelly Bean? Because JB might come with newer drivers which might be adding to its performance (my RAZR is running ICS). The same might be true if you are comparing phones running ICS to those running Gingerbread.
balagamer - Wednesday, August 1, 2012 - link
Will an optimized driver make a difference in these scores? As the benchmark suite has just launched, there is a possibility that the current set of drivers for Mali400 is unoptimized.
SanX - Wednesday, August 8, 2012 - link
All these benchmarks, specifically for smartphones, must be divided by the energy consumed from the battery. The thing is that 4-core processors consume much more juice from the battery in such gaming tests!!!!
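Something like a performance-per-watt figure would be easy to add if reviewers logged power draw during the run. Rough sketch of what I mean (all numbers below are invented for illustration, not measurements):

# hypothetical perf-per-watt metric: average fps / average power during the run
runs = {
    "quad-core SoC": {"avg_fps": 23.9, "avg_watts": 3.8},   # invented numbers
    "dual-core SoC": {"avg_fps": 21.5, "avg_watts": 2.6},   # invented numbers
}
for name, r in runs.items():
    print(name, round(r["avg_fps"] / r["avg_watts"], 1), "fps per watt")
# a chip with the higher raw score can still come out behind once battery
# drain is factored in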