
  • Formul - Thursday, April 19, 2012 - link

    The Y axis scale seems out of whack.
  • mepenete - Thursday, April 19, 2012 - link

    Not to mention there is no label...
  • phatboye - Thursday, April 19, 2012 - link

    The Y axis is on a logarithmic scale and is perfectly fine, but the lack of any axis labels or units is a big no-no. What school did these people graduate from?

    My guess is that the Y-axis shows relative performance compared to the original Xbox. How Nvidia determined this relative performance, though, is anyone's guess.
  • fetuse - Thursday, April 19, 2012 - link

    It's an arbitrary, unitless performance axis that is used to compare each category. The title tells you what the axis is. It would be redundant to write "Graphics Performance" twice.
  • Yojimbo - Friday, April 20, 2012 - link

    Well, I don't think you should say it's an arbitrary axis. The only thing that's arbitrary is what's being called "1". I assume that's what you meant.
  • gmkmay - Friday, April 20, 2012 - link

    They graduated from marketing school; that should be obvious.
  • mckirkus - Friday, April 20, 2012 - link

    It's clear that the Y axis is performance relative to the original Xbox and PCs at that time. 1 is Xbox-level performance. No need for an explanation.
  • Dracusis - Sunday, April 22, 2012 - link

    So the Xbox 360 is 30 to 40 times faster than the original Xbox?

    Current PCs are about 200 times more powerful than the Xbox 360?

    That chart just makes absolutely no sense.
  • vartec - Sunday, April 22, 2012 - link

    "Current PCs are about 200 times more powerful than the Xbox 360?"

    X360's GPU: 240 GFLOPS, DirectX 9.0
    Radeon HD 7970: 3790 GFLOPS, DirectX 11.1. And that's a single card; with CrossFireX you can have multiple.

    Now, how much that costs is a whole other discussion.
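
    Just as a back-of-the-envelope check on those figures (peak FLOPS ignore architecture, drivers, and API, so treat this as a ceiling, not a benchmark; the variable names are mine):

        # Rough raw-FLOPS comparison using only the peak figures quoted above.
        # This says nothing about real-world performance; it's just the ratio.
        xbox360_gflops = 240.0   # Xenos, peak theoretical
        hd7970_gflops = 3790.0   # Radeon HD 7970, peak theoretical

        ratio = hd7970_gflops / xbox360_gflops
        print(f"One HD 7970 vs. Xbox 360 GPU: ~{ratio:.0f}x the raw FLOPS")

        # Two cards in CrossFireX, assuming ideal (never actually achieved) scaling:
        print(f"Two HD 7970s, ideal scaling: ~{2 * ratio:.0f}x")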
  • IntelUser2000 - Sunday, April 22, 2012 - link

    Where do you guys get that?

    The "1" line is the original Xbox. The Xbox 360 is about 40. That makes 2010 PC graphics only 5x faster than Xbox 360. I assume 2011/2012 is higher but its simply not there.

    Here you can see in Crysis compared to the GTX 480, 2006 GPUs are about 1/5x.

    Oh, and Flops are NOT indicative of direct performance as you can see.
  • icrf - Friday, April 20, 2012 - link

    What about the X axis? There's no label there, either. I am confused! /s
  • jgutteri - Thursday, April 19, 2012 - link

    The axis label may be obscured by the cropping of the image, for all we know.
  • Goty - Thursday, April 19, 2012 - link

    It's logarithmic.
  • piroroadkill - Friday, April 20, 2012 - link

    The whole thing is out of whack. PCs can be configured far, far beyond the PS3 and 360 these days.
  • Hrel - Friday, April 20, 2012 - link

    It shows PC as 10x faster than console. It's just not a linear scale, so it's very unintuitive.
  • mcnabney - Friday, April 20, 2012 - link

    More importantly, the chart doesn't show the Apple GPU completely kicking the Tegra 2/3's ass.
  • B3an - Friday, April 20, 2012 - link

    It's an Imagination Tech GPU, nothing to do with Apple; it just happens to be in some of their useless toys.
  • michael2k - Friday, April 20, 2012 - link

    Excuse me, but I think it's fairly accurate to say that anything with a powerful GPU is a toy, from a 3DS to an Xbox to a gaming PC.

    You can discount Imagination all you want; Apple is still pushing the state of the art by using the most powerful GPU in mobile right now, despite NVIDIA's pedigree.
  • Lonbjerg - Sunday, April 22, 2012 - link

    Not only that... the performance gap between PC and consoles seems way too low... that is what happens when you let marketing take over, so that consoles don't look like a dated joke...
  • arnavvdesai - Thursday, April 19, 2012 - link

    While the advancement in GPU tech is obviously important, I still don't think it is enough, as screen resolutions have also jumped up: what used to be standard (1024x768) is now on the lowest end of the scale, and the high end is extremely high (2K) compared to what consoles were aiming to hit (1080p, or in most cases 720p upscaled). It will be interesting to see where the right balance is achieved. Also, we are coming close to the theoretical limits of node technologies, which IMO would lead to performance being gained only by drastic architecture changes and/or increasing die sizes.
    However, I think we have at least 15 years before we hit those.
  • GeekBrains - Thursday, April 19, 2012 - link

    "The solid lines are estimated performance, while the dotted lines are trends."

    I thought the solid lines were the actual trends and the dotted lines were the expected performance curves...
  • tipoo - Thursday, April 19, 2012 - link

    Estimated, not expected. The solid ones are the theoretical performances I guess.
  • tipoo - Thursday, April 19, 2012 - link

    These mobile SoCs are still limited by how much storage an app can take and by controller support. Last I checked, memory bandwidth was nowhere near the bandwidth even old systems like the PS3 and 360 had. For that power to be useful they need to address all those concerns too. Android has decent controller support; it's just not widely used. And on the 360 some games now have to use multiple DVDs, and some PS3 games are hitting the 25GB of a single-layer Blu-ray.

    Also, there are development costs: most of these apps don't cost more than 7 dollars or so versus 60-dollar PC/console games, so developer incentive to make something really great is low.
  • Conficio - Thursday, April 19, 2012 - link

    Can we conclude that NVIDIA is leaving the PC business, as they have no performance estimate for PCs beyond 2010?
  • winterspan - Thursday, April 19, 2012 - link

    Can anyone figure out why Nvidia, the leader in the dedicated GPU market, cannot seem to produce a competitive mobile SoC? Even their latest chip, Tegra 3, is beaten badly by the IMG Tech PowerVR series... And even Qualcomm's chipsets and ARM's own Mali are as good or better.
    You would think Nvidia would be blowing away everyone...
  • frostyfiredude - Thursday, April 19, 2012 - link

    I think I saw an article a while back that had an explanation for it. If I remember correctly, they were under the impression that the GPUs used in Tegra SoCs so far are based on the old GeForce 7xxx-series architecture rather than their newer unified shader architectures.

    How true that is is beyond me, but that would at least partially explain the difference. We should see a solid bump in Tegra 4; I believe they are moving to a more modern unified shader architecture of some sort.
  • UpSpin - Friday, April 20, 2012 - link

    The Tegra 3 is a low-cost, high-performance SoC.
    I think Nvidia mainly focused on the 4+1 cores to make it a great tablet SoC with smooth multitasking and support for Windows 8 on ARM; additionally, they wanted to have the first quad core out.
    If you compare the die size of the Tegra 3 with those of other SoCs like the Apple A5, you'll notice that the Tegra 3 is tiny, and thus inexpensive to produce.
    Their GPU has average benchmark results, but I just think they had other priorities, like Windows support, quad core, ...
    It's also possible that the GPU performs poorly in raw benchmarks and compute, but has more hardware-based accelerators which games explicitly have to make use of, because if you take a look at the AnandTech game comparison between iOS and Nvidia's Tegra Zone, the Nvidia versions have more effects and look better.
    (Simpler comparison: two CPUs both have to decode a video. The faster one has higher benchmark results but lacks a hardware video decoder, so it has to decode the video in software and can't keep up. The slower one has lower benchmark results but has a hardware decoder, and thus smooth video playback.
    I don't know if this fits Tegra 3, but at least according to visual comparisons, Tegra 3 games look impressive compared to the competition.)
  • darkcrayon - Friday, April 20, 2012 - link

    "because if you take a look at the Anandtech game comparison between iOS and Nvidia Tegrazone, the NVidia ones have more effects and look better."

    This is more because of a few games that nVidia specifically optimized for the Tegra 3 vs. "unoptimized" games running on the A5x.
  • UpSpin - Friday, April 20, 2012 - link

    So your point is? Have you read what I wrote? I said that the Tegra 3 is slower than others in benchmarks, but still manages to deliver better graphics in games. Isn't that what's really interesting?
    Sure, it would be nice to have a faster GPU, no question, but I don't understand why people say the Tegra 3 is too slow if it delivers better graphics while consuming less power and being cheaper to produce.
    I also don't understand why people say Tegra 3 is crap just because it doesn't have the fastest GPU. The big thing Nvidia achieved was to build the first power-efficient quad core, which is, even after half a year, competitive with currently released Krait processors.
  • vision33r - Friday, April 20, 2012 - link

    Your argument is that one game that looks better on one platform should be weighted more heavily than the other platform.

    When you compare hardware against hardware, you have to compare apples to apples.

    If we say that Crysis 2 optimized for Mac OS looks better but runs slower than the PC version, is that any proof that the Mac hardware is better?

    A software comparison is a different comparo.
  • darkcrayon - Friday, April 20, 2012 - link

    But it doesn't "deliver better graphics" in general, it just happens to in a couple of games specifically optimized for the Tegra 3. Of course it *can* deliver better graphics in a specific scenario. But it stands to reason that the same optimization effort done to the same game on the A5X will far surpass what could be done with the Tegra 3. That doesn't mean the Tegra 3 "is crap"- it just means nVidia's SoC isn't delivering anywhere near the state of the art in ARM SoC graphics performance. A lot of people expect that it would due to the nVidia name in PC graphics.
  • BabelHuber - Saturday, April 21, 2012 - link

    Shadowgun looks much better on Tegra 3 than on the new iPad: real-time light and shadows, water, etc.

    This has been confirmed in various tests, including Anandtech's.

    As long as the games look better on Tegra3, you have the better experience, period.

    If Apple catches up, fine. As long as Apple doesn't, the advantages are purely theoretical and hence don't even matter.
  • tipoo - Friday, April 20, 2012 - link

    Well, they have three more CPU cores than Apple on a die that is still smaller. Clearly it's optimized for cost and CPU scaling over brute GPU performance. I forget the codename of their next one, but I think it's going to use an architecture more similar to today's Nvidia GPUs, so expect them to take the lead there.
  • vision33r - Friday, April 20, 2012 - link

    Tegra 3 operates on one CPU core all the time until tasks require the use of the other 4, and then the one power-saving CPU becomes dormant.

    IMO, the quad-core GPU solution makes more sense; 4 CPU cores make very little sense today with so few apps taking advantage of them.

    Games can easily take advantage of quad-core graphics, especially when Apple optimizes their drivers to utilize it right away.

    Nvidia pushed Tegra 3 out to sell 4 cores and fool the typical big-numbers geeks.
  • tipoo - Friday, April 20, 2012 - link

    "Tegra 3 operates as one CPU all the time until tasks requires the use of the other 4 and then the one power saving CPU becomes dormant."

    Which still means there are five CPU cores on the die, which is still smaller than the A5 or A5X. I'm not saying Tegra 3 is great, just giving the OP a reason why Nvidia can't seem to match their own heritage in graphics, they devoted far less die space to the GPU.
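
    For anyone unfamiliar with the 4-PLUS-1 scheme, here's a rough conceptual sketch of the cluster switching; the thresholds and hysteresis are made up for illustration and are not Nvidia's actual governor logic:

        # Conceptual sketch of Tegra 3's 4-PLUS-1 idea: a single low-power
        # companion core handles light load, and the four fast cores take over
        # (with the companion core dormant) once load rises. Only one cluster
        # is active at a time. The thresholds below are invented for illustration.

        LOW_LOAD = 0.30   # below this, drop back to the companion core
        HIGH_LOAD = 0.70  # above this, wake the four performance cores

        def active_cluster(load, on_performance_cores):
            """Return (which cluster runs, updated cluster state)."""
            if on_performance_cores and load < LOW_LOAD:
                return "companion core only", False
            if not on_performance_cores and load > HIGH_LOAD:
                return "4 performance cores (companion dormant)", True
            # Between the thresholds, stay put to avoid thrashing.
            current = ("4 performance cores (companion dormant)"
                       if on_performance_cores else "companion core only")
            return current, on_performance_cores

        # Example: load ramps up (say, a game launches), then settles back down.
        state = False
        for load in (0.1, 0.2, 0.9, 0.8, 0.5, 0.2):
            cluster, state = active_cluster(load, state)
            print(f"load={load:.1f} -> {cluster}")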
  • frostyfiredude - Thursday, April 19, 2012 - link

    I enjoy Nvidia's use of a logarithmic graph to stretch out their point.

    At a glance, the graph makes it look like mobile in 2014 will be equal to current-gen consoles in graphics performance and 3/4 that of the PC's trend line. Slap the same data on a linear graph and mobile will clearly be a couple of times slower still than current consoles and over 100x slower than their trended PC.

    This reminds me of their GTX 680 vs HD 7970 performance comparison graph. At a glance it looked like the GTX 680 killed the HD 7970 by HUGE margins in all the games, but in reality it was averaging 10% faster in all the games but BF3.
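
    To see why a log scale reads so deceptively at a glance, here's a toy example with made-up values (not Nvidia's data) spanning roughly the 1x-to-1000x range the chart covers:

        import math

        # On a log10 axis every factor of 10 gets the same vertical distance,
        # so a 10x gap and a 1000x gap look like one and three gridlines.
        values = [1, 10, 100, 1000]

        for v in values:
            print(f"value={v:5d}   linear gap vs. 1: {v:5d}x   log10 position: {math.log10(v):.1f}")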
  • GnillGnoll - Friday, April 20, 2012 - link

    The graph shows mobile in 2014 projected to exceed the current consoles (the solid line, not the trend line). That doesn't change when you use a linear scale. Also, the difference between the solid mobile line and the PC trend line is less than 10x (as the distance is smaller than the distance between grid lines).
  • gorash - Friday, April 20, 2012 - link

    Was the PC really 100 times more powerful than 360 by 2009?
  • gorash - Friday, April 20, 2012 - link

    Well, I guess it's more like ~60 times.
  • IntelUser2000 - Friday, April 20, 2012 - link

    I see a max of ~200 on the PC line, which stretches to 2010. I think 2009 is more like 150.

    The Xbox 360 is at 40. That makes the PC in 2009 about 4x faster in graphics compared to the Xbox 360.
  • 3DoubleD - Friday, April 20, 2012 - link

    I thought someone would have commented on how the Xbox 360 supposedly had better graphics performance than a PC when it was released. I'm pretty sure it didn't... an X1800/hybrid DX10 design vs. the X1900 or the 8800 series that came out shortly afterward.
  • tipoo - Friday, April 20, 2012 - link

    The GPUs in the PS3/360 were both around 200 GFLOPS; today's top-end cards are over 3000. Not a perfect metric of performance, but it's OK for a rough estimate, and actual performance would be even higher for the newer cards due to higher efficiency.
  • vision33r - Friday, April 20, 2012 - link

    But the 360 is used more efficiently than the PC, so the actual gap is smaller to some degree.

    Optimization can be done easily on the 360.
  • silverblue - Friday, April 20, 2012 - link

    ...NVIDIA seems to be saying that consoles (specifically, the PS2) were more powerful than a PC with NVIDIA's own GeForce 256 series. Here's a clue, NVIDIA: they weren't. There were a few cross-platform games back then that demonstrated the PC's superiority.
  • karocage - Friday, April 20, 2012 - link

    There's no mention of the PS2 on this graph, so I'm not sure why you leapt to that conclusion. Much more likely, as earlier comments suggested, that 1 represents the original Xbox, especially since it actually had an NVIDIA GPU in it.
  • silverblue - Friday, April 20, 2012 - link

    My apologies, I'd forgotten that the original Xbox was released in 2001.
  • mkeast73 - Friday, April 20, 2012 - link

    We really have to slow down a bit and focus on quality, not only speed. Having fast hardware and little or no software development can negate the whole experience. It's like having a Porsche with no driver, at a standstill. An example: the Atari Jaguar.

    Don't get me wrong, I love the Tegra 3 chip in the Asus Transformer Prime. But right now I'm waiting on some software to take advantage of that chip. Everything is speed nowadays: faster CPUs, overclock this and that, etc. I only say slow down and let the developers utilize the hardware.
  • darkcrayon - Friday, April 20, 2012 - link

    Funny, a friend of mine is a developer and an Android enthusiast. A couple of years ago I was commenting on how I thought his phone felt a bit slow (the original HTC MyTouch, I believe, or something similar), and his response was "Well, you have to wait for the hardware to catch up!" But now, with quad-core products like the Tegra 3 in devices, you're telling me it's "wait for the software to catch up!" ;)

    I agree with you, though; it doesn't seem like much at this point can take advantage of a quad-core CPU. I would like to see some real-world apps that are shown to perform significantly better than on a dual core of similar design.
  • yyrkoon - Saturday, April 21, 2012 - link

    And for the latest, greatest news: 3DFX announces the Voodoo Graphics card with SLI interface by the year 2020...

    Sorry Anand, could not resist.

    </sarcasm>
  • S20802 - Saturday, April 21, 2012 - link

    2005 Xbox 360 performance in 2014? X-0
    By 2014 the Xbox 720 will have launched, and there is no way a mobile SoC can catch up with that.
    And the PC will always be miles ahead. And given that a PC can run SLI or CrossFireX, it is a joke to compare it to mobile SoCs.
    Where do they expect to put the silicon in a mobile SoC? PC and console have the advantage of practically unlimited power and die space.
  • fteoath64 - Monday, April 23, 2012 - link

    The design goal here is to ensure the GPU matches the screen resolution and does 30 fps. Nothing more, or else it is wasting battery. So this is where one will need to aggressively trickle-boost GPU cores on demand and ensure a matched CPU core to feed them. It is not a problem that hasn't been solved before, but the granularity might not have been as fine as needed until now. We can see that general GPU performance for tablets has been rather good, so it is not a bad start while the manufacturers need to maintain a 10-hour battery life. The Tegra 3 does it rather well, and I am sure there is very aggressive power management across the 4 operating CPU cores.

    So it seems that, in a market with 4 or more suppliers of chips, each player designed their SoCs with more care than Intel ever did, because Intel is a monopoly in its space. The only issue we seem to have is the lack of foundry capacity to churn out the new chips...
