This review is not accurate. Badaboom is not compatible with GTX 400 series cards yet, but they posted test results anyway. I have a GTX 480 and it does not work with Badaboom; the official Badaboom site confirms that.
I thought that after the line-up of games thread, you would really start testing games from all genres, so we could actually see how each graphics card performs in different scenarios.
Right now you have 80% first-person shooters, 10% racing/action-adventure, and 10% RPG and RTS.
Where are the RTS games, isometric RPGs, simulation games, etc.?
I would really like to see Battleforge thrown out and replaced by StarCraft 2, DoW 2: Chaos Rising, and Napoleon: Total War. These RTS games all play differently and will give different results, and thus better knowledge of how graphics cards perform.
How about also testing The Sims 3, NFS: Shift, or Dragon Age: Origins?
Actually DAO was in the original test suite I wanted to use. At the high end it's not GPU limited, not in the slightest. Just about everything was getting over 100fps, at which point it isn't telling us anything useful.
On page 9 (Crysis), your final sentence indicates that SLI scales better than CF at lower resolutions, which is contradicted by your own graphs. CF clearly scales better at lower resolutions when video RAM is not filled:
This indicates the CF technology scales better than SLI, even if the brute performance of the nVidia solution comes out on top. This diametrically opposes your conclusion on page 9 ("Even at lower resolutions SLI seems to be scaling better than CF").
(Scaling ability is a comparison of ratios, not a comparison of FPS)
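To make the distinction concrete, here's a quick sketch; the FPS numbers are made up, not taken from the review:

    # Scaling is the ratio of multi-GPU FPS to single-GPU FPS,
    # not a comparison of the raw framerates themselves.
    def scaling(dual_fps, single_fps):
        return dual_fps / single_fps

    # Made-up numbers: the faster setup can still scale worse.
    print(round(scaling(150, 90), 2))  # SLI: 1.67x, higher absolute FPS
    print(round(scaling(140, 78), 2))  # CF: 1.79x, the better scaling ratio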
Correct. The only thing I really have to say about that is that while we include 1680 for reference's sake, for any review of a high-end video card I'm looking nearly exclusively at 1920 and 2560.
I get that, to test the card. But if you don't have a monitor that goes that high, it really doesn't matter. I'd really like to see 1080p (1920x1080) thrown in there, as that's the only resolution that matters to me and most everyone else in the US.
It's pretty obvious that Anandtech was spanked by nVIDIA the last time they did a review. No mention of the 5970 being superior to the 480 is a little disturbing. I guess the days of "trusting Anandtech" are over. Come on guys, not even a mention of how easily the 5870 overclocks? The choice is still clear: dual 5870's with full-cover blocks FTW!
You are wrong. You can and you should compare single-GPU cards with multi-GPU cards. It does not matter if a card has one or 30 GPUs on it. It's the performance/price that matters.
These nvidia cards are very expensive in performance/price compared to the ATI cards, simple as that. It's obvious that nvidia dropped the ball with their new flagship. You even need 2 cards to be able to use 3 screens.
This is bad for us customers; we are not getting any price pressure at all. These nvidia cards do not improve the market since they cannot compete with the ATI cards; only nvidia fans will purchase these cards, or possibly some people working with graphics.
I hope nvidia will do better with their next series of cards, and I hope that won't take too long, because ATI will most likely release a new series in half a year or so.
I will be interested in seeing the performance gains that will likely come from revised Nvidia drivers in a month or two. In some of the tests the GTX 470 is trading blows with the GTX 285 despite having nearly double the compute power... I think there is a lot of room for optimization.
I am no fanboy and even owned a 4850 for a while, but Nvidia's drivers have always been a big decision factor for me. I don't get any of the random issues that were common on catalyst and aside from the occasional hiccup (196.67 G92 fan bug) I don't worry about upgrades breaking things. I admit I don't know if all the 5xxx series driver issues have been fixed yet but I do look forward to driver parity, until then I think raw performance is only part of the equation.
Ryan, have you checked performance and/or clocks to see if any of the cards you are testing are throttling under FurMark? I recall you mentioning in your 58xx review that ATi cards can throttle under FurMark to prevent damage, and while most of the power numbers look normal, I notice a few of the cards are consuming less power under FurMark than Crysis, unlike the majority of the cards which consume considerably more power running FurMark than Crysis...
I can turn off one light in my house and remove the power consumption difference between the GTX480 and the 5870.
I thought this was an enthusiast site?
I lol irl when people talk about saving 100 watts and buying a 5870. So saving 100 watts but building a 700 watt system? Are you saving the planet or something?
I think nVidia is smart: if you fold, use CUDA, or need real-time 3D performance from a Quadro, you will buy this card. That is probably a large enough market for a niche high-end product like this.
Seriously, I wonder who'd want GPUs that power-hungry, noisy, and hot... Nvidia is out of both the mobile and desktop markets... The only pro for Nvidia I can see is the 3D support.
This is kind of bad for consumers. Zero pressure on ATI to do anything, from lowering prices to anything else. They can just sit back and work on the next gen.
Well, that at least made my decision easy: build now or wait for Sandy Bridge? I will wait. Hopefully the GPU market will be nicer then too (hard to be worse, actually).
I was waiting for the Fermi cards to come out before my next high-end build (looking for price drops), but I actually did not expect this card to be this fast. The GTX 480 is ~15% faster than the 5870, but for $100 more, and it is just going to be a card for Nvidia loyalists; the 5870 will probably drop just a little, if at all. The 5850 and 5830 should drop $25-50, hopefully more (2x 5850 at ~$250 each would be FTW). Now, would I like to have a Fermi? Well yeah, for sure, but I would much rather have a 5870 and add another down the road. A GTX 480 uses the same, if not more, power than two 5870's.

This reminds me of the last gen of the P4's, or as we knew 'em, the Pres-hots. Basically, Nvidia's huge-chip approach, with yes, impressive performance, was just the wrong approach. I mean, their next gen, if based on the same doubling of SPs/CUDA cores, would draw 300W+ easily and almost require water cooling, because the next TSMC process is going to be 32nm and that will not allow them to "cut the chip in half." ATI's approach, started with the 4000 series, has proven to be a much better and more efficient design. I think they could make a 6870 on 40nm TSMC right now, but of course it would be a hot chip. When they get the 32nm TSMC fabs running, Nvidia has got to re-design their chips. And with how hot the GTX 480 is, I don't see how they could make a GTX 495. Also, the 5890 is right around the corner, and that should give the final punch to KO Nvidia in this GPU generation.

On a side note, thank " " that there is some healthy competition, or AMD might pull what Nvidia did and rebrand the 8800 5 or 6 times.
Keep in mind, the GeForce 480 (GTX means nothing - see any GTX 210 or GT 285?) is already the most power-hungry card on the market, at just under 300 watts under full load... If the GF480 had all 512 CUDA cores running and were clocked higher, the card would easily surpass 300 watts!
This in turn means more heat, more power, more noise. There are videos comparing the 480/470s & ATI cards... the 480's fan is running very fast and loud to keep it under 100C, about 2~3 times hotter than a typical CPU.
We will see the ATI 6000 series on 40nm, but it may not be with TSMC.
If the upcoming 5890 is 15% faster and can sell for $400~450, that would put some hurt on the GF480.
Not sure how/why ATI would do re-branding. The 4670 is almost like a 3870, but is easily a more advanced and cheaper GPU. The bottom end GPUs have all changed. 2400 / 3450, 4350, 5450 - all different.
Nvidia has been doing re-branding for quite a long time. The GF2 MX was re-branded as the GF2 MX 400 (these were bottom-end $150~190 cards in 2001), and then for some bone-headed reason, during the GF6 era, they brought back the GF2 MX but added DX8. Huh? Add a function to an OLD bottom-end GPU?
The GF2-Ti came out when the GF3-Ti series was launched... they wanted "Ti" branding. The GF2-Ti was a rebranded GF2 Pro with a slight clock upgrade.
Then came the first big branding/feature fiasco with Nvidia. The GF3 was the first DX8 card. Then the GF4 series came out. The GF4 Ti models were the high end. But the MX series were nothing more than GF2s (DX7) with optional DVI... to take care of the low end and shove the newer name to the front.
GF4 mx420 = GF2mx, but a bit slower.
GF4 mx440 = GF2 Pro/TI
GF4 mx460 = ... a faster DX7 card, but it was about $20~35 cheaper than the GF4 Ti 4200, a DX8 card. The Ti 4200 was a #1 seller at about $200. Some of the 440 SE & 8x models had either 64-bit or 128-bit RAM... ugh.
Then they had fun with the Ti series when AGP 8x came out... NEW models! Even though the cards couldn't max out the AGP 4x bus. Even the later ATI 9800 Pro only ran 1~3% faster with AGP 8x.
GF4 Ti 4200 > GF4 Ti 4200 8x
GF4 Ti 4400 > GF4 Ti 4800 SE
GF4 Ti 4600 > GF4 Ti 4800
Yep, same GPUs... new names. Some people would upgrade to nothing, or worse: some even went from the 4600 to the 4800 SE, which was a downgrade!
GF5 5500 = a rebadged 5200.
Since the GF5... er "FX" series, Nvidia kept the DX# and feature set within the series. All GF5 cards are DX9.
But the 5200s were a joke. By the time they hit the market at $120, the Ti 4200s were also $120 and the GF4 MX cards were reduced to $30~60. But the 5200 was HALF the performance of a 4200. People actually thought they were upgrading... returns happened.
Funny thing once: a person bought a "5200" at Walmart and was confused by the POST display reading "4200". Luckily he posted to us on the internet. We laughed our butts off...! What happened? Bait & switch... someone bought a 5200, took it home, switched cards, and took it back to Walmart for a refund. Hey, it's usually a brick or a dead card, etc.; this guy got a used card, but a much better product.
Just as the ATI 5450 is too slow for DX11 gaming today, the GF5200 was horrible for DX9 back in 2003! The 5200 is still sold today - the only thing left from that era.
Pretty much the entire GF5 series was utter garbage. Four versions of the GF5600 ($150~200) were slower than the previous $100 Ti 4200. It was sick. This allowed ATI to gain respect and market share with their 9600 & 9700 cards. The GF5700 series (2 out of 5 variants) were good Ti 4200 replacements. The 5900 went up against the ATI 9800. I've owned both.
Since then, ATI pretty much had the upper hand in performance throughout the GF6 & GF7 era. Then AMD bought out ATI, and the GF8 and Core 2 wiped out ATI/AMD with faster products.
While ATI had the faster cards during the DX9.0c era (really, MS? Couldn't make 9.1, 9.2?) over the GF6/7... Nvidia *HAD* the lower-end market. The GF6600 and 7600GT were $150~200 products... ATI products in that price range were either too slow or cost too much.
With the GF8800 & 8600s, ATI had lost the high- and mid-range markets. The HD 2000 series was too expensive, too hot, and not fast enough... (sound familiar?). The ATI 3000 series brought ATI back to a competitive position where it counted. Meanwhile, Nvidia has milked the G92~96 for the past 2+ years. They are crazy happy with code names & model numbers.
As long as ATI continues doing engineering and management this way, nVidia will continue to be in trouble for a long time unless they get their act together or count on the server market to stay in business.
AMD has already stated they are not reducing their pricing any time soon. This is because their line-up is far healthier than Nvidia's.
They know (and we should know) that the $500/$350 prices for the new GeForce 400 cards are not going to stick. There are only so many thousands of cards available for the next 3~5 months. The supply will run dry in about 1-3 weeks, I bet. We're going to see the pricing shoot up close to $600 for the GF480; the fanboyz will be willing to pay that price.
The 5850 was supposed to be a $250 card; we saw how well that worked out. While the price did settle around $300, the 5850 was still a better value than the $370~400 GeForce 285 as it was far faster, ran cooler, etc. The 5870 is typically faster than the $500 GeForce 295 - for $100 less. ATI has no reason to lower their pricing.
The GeForce 265~295 cards are already being phased out, too slow, cost too much.
So nVidia has nothing for the sub-$300 market... nothing. Only the GTS 250 has any value, but it's a tad expensive, as it should be $100 since it's still a re-badged DX10 9800GTX.
So when ATI feels any pressure from Nvidia, they can easily drop their prices. It costs $5000 per wafer, no matter how many chip dies are on it. It may be costing nVidia $150~200 per chip, while AMD could be paying $20~35 per chip used in the 5800s/5900s.
Then you add the costs for memory, the PCB, parts, cooling system etc.
It is very easy for AMD to drop $50 per GPU and still make a profit, while Nvidia would be selling their GeForce 400 cards at a loss or no profit.
Selling the 5830 for $190~200, the 5850 at $250, and the 5870 at $325~350 would help sales and keep nVidia at bay.
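Roughly sketching out that per-die math (die areas are approximate, and the yields are pure guesses on my part, picked only to illustrate the big-die penalty):

    import math

    # Gross dies per 300mm wafer: wafer area / die area,
    # minus a simple edge-loss correction term.
    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
        r = wafer_diameter_mm / 2
        gross = math.pi * r**2 / die_area_mm2
        edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
        return math.floor(gross - edge_loss)

    WAFER_COST = 5000  # dollars per wafer, as assumed above

    # ~530 mm^2 for GF100 vs ~334 mm^2 for Cypress; the yields are
    # hypothetical, chosen to reflect the big-die penalty described above.
    for name, area_mm2, assumed_yield in [("GF100", 530, 0.25),
                                          ("Cypress", 334, 0.80)]:
        good_dies = dies_per_wafer(area_mm2) * assumed_yield
        print(f"{name}: ~${WAFER_COST / good_dies:.0f} per good die")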
I would've liked to see 5850's in CrossFire thrown into this mix. I know you don't have time to test them all, but I think that's the key competitor to the 480 when it comes to bang/buck. I would think 5850's in CrossFire could handily beat the 295 and the 480, all while consuming less power. I believe another site may have done it, but with the excellent and very thorough review done here, it would've been that tiniest bit sweeter to have a 5850 CrossFire line on these graphs.
The 5850 idles in the mid-30s, but it also does absolutely nothing to stay cool, operating at about 20% max fan speed. Under load it may go up to 30% fan speed, but it rarely ever breaks the 40% mark.
What are the approximate idle and load fan speeds for both the GTX 480 and 470? I guess I'm asking this to understand just how much extra cooling room is innately available. Are these cards working at max capacity to keep cool, or is there thermal/fan headroom to be had?
Looking back at the data, I realized that power consumption is for the system total. Guru3D measured the power consumption of the card itself and reported a max of 263W, so roughly 22 A at 12V. I think my 850W will do just fine since each PCIe connector rail has 20A.
I think this was a great review; as mentioned previously, very objective. I think, though, that I may get a 480, because when I buy a card I keep it for 3 to 4 years before I get a new one, aka every other gen. And seeing that tessellation is really the showcase feature of DX11, and how much more tessellation power is in the 480's, I think it could very much pay off in the future. If not, then I spent an extra $85 for a tad extra performance, as I just pre-ordered one for $485 and the 5870's are still at $400.
My only concern is heat and power, but most of the cards have a lifetime warranty. Hopefully my OCZ GamerXtreme 850W can handle it at max loads. The two 12V rails for the two 6-pin PCIe connectors are 20 A each. I saw 479W max consumption, but that was FurMark; at 12V that's roughly 40 amps, so it would be extremely close if there is ever a game that utilizes that much power. Although, if I recall, ATI specifically stated a while back not to use FurMark as it pushes loads that are not possible to see in an actual game; I think they had an issue with the 4000 series burning out power regulators, correct me if I'm wrong.
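Spelling out my amperage math (note the 479W figure is total system draw at the wall, so treating it as all 12V-rail load is the worst case):

    # Rough 12V-rail amperage math for PSU headroom.
    def amps(watts, volts=12.0):
        return watts / volts

    print(f"{amps(263):.1f} A")  # ~21.9 A: Guru3D's card-only FurMark max
    # Worst case: the whole 479 W system draw hanging off the 12 V rails.
    print(f"{amps(479):.1f} A")  # ~39.9 A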
I'm with sunburn on this one. Your reasoning doesn't make much sense. You must not have followed the GPU market for the last few years, because:
first) "Every other gen" would mean a 2-year cycle.
second) Nothing's really going to pay off in the future, as the future will bring faster cards for a fraction of the price. You'd only enjoy those questionable benefits until Q4, when AMD releases Northern Islands and nVidia pops out GF100b or whatever they'll call it.
third) Tessellation won't improve further that fast. If anything, developers will focus on the lowest common denominator, which would be Cypress. Fermi's extra horsepower will most likely stay unused.
fourth) Just look at your power bill. The 25W difference with a "typical" idle scheme (8h/day; 350d/y) comes to 70kWh, which where I live translates to around $20 per year (see the sketch after this list). That's idle *only*. You're spending way more than just $85 extra on that card.
fifth) The noise will kill you. This isn't a card that just speeds up for no reason. You can't just magically turn the fan down from 60% to 25% and still enjoy temps of <90°C like on some GTX 260 boards. Turn up your current fan to 100% for a single day. Try living through that. That's probably what you're buying.
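Sketching out that fourth point, with my own usage assumptions and a per-kWh rate that's roughly what I pay (yours will differ):

    # Idle-power cost estimate. The 25 W delta and usage pattern are my
    # assumptions above; the per-kWh rate is roughly EU-level pricing.
    extra_watts = 25
    hours_per_year = 8 * 350
    kwh_per_year = extra_watts * hours_per_year / 1000   # = 70 kWh
    price_per_kwh = 0.28                                 # assumed rate
    print(f"{kwh_per_year:.0f} kWh -> ${kwh_per_year * price_per_kwh:.0f}/year")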
In the end everyone has to decide this for himself. But for someone to propose keeping a GTX 480 in his PC for a whopping 3-4 years... I don't know man. I'd rather lose a finger or two. ;)
tl;dr, I know, I know. But really, people: those cards aren't hugely competitive, they're priced too high, and nV's drivers suck as much as ATi's (allegedly) do nowadays. Which is to say, neither do.
I could honestly kick myself right now. I had a great deal on a 5850 in November, and I waited for nV to make their move. Now the same card will cost me $50 more, and I've only wasted time waiting for the competitive GTX 470 that never was. Argh.
That's kind of bad logic, IMO. I'm not a fanboy on either side, but it's clear to me that Nvidia targeted the performance of their cards to fit exactly between the 5970, the 5870, and the 5850. It's much harder to release a card not knowing what the other guy truly has, as opposed to releasing a card knowing exactly what sort of performance levels you have to hit.
Two, realistically, think of the noise. I mean, if you've ever heard a GTX 260 at 100 percent fan speed, that's the sort of fan noise you're going to be experiencing on a regular basis. It's not a mild difference.
And three, realistically, for the premium you're paying for the extra performance (which is not useful right now, as there are no games to take advantage of it) as well as for the noise, heat, and power, you could simply buy the cheaper 5870, save that $85-150 extra, and sell off the 5870 when the time is right.
I just don't see why anyone would buy this card unless they were specifically taking advantage of some of the compute functions. As a consumer card it is a failure. Power and heat be damned - the noise, the noise! Take your current card up to 100 percent fan speed, listen to it for a few minutes, and that's about what you should expect from these GPUs.
We're working on it. Of course, the "Internet Police" have now flagged our site as malicious because of one bad ad that one of the advertisers put up, and it will probably take a week or more to get them to rescind the "Malware Site" status. Ugh....
The people who are going to buy the GTX 480/470 are enthusiasts who most likely bought the GTX 295 or had 200-series SLI. So not including the 295 in every bench is kind of odd. We need to see how the top end of the last gen does against the new-gen top end.
Well, the 295 beats the 470 in most benches, so there's no need to really include it in all of them. Personally I think the 480 is the better deal. Although I am not buying those cards until a respin/refresh; those temps and power requirements are just ridiculous.
I know you "upgraded" your test PSU to the Antec 1200W PSU, but did you go back and try any of these tests/setups with your previous 850W PSU to see if it could handle the power requirements? It seemed that only your 480 SLI setup drew 851W total system power in the FurMark load test. Other than that scenario it looks like your old PSU should handle the power requirements just fine. Any comments?
Yes, we did. We were running really close to the limits of our 850W Corsair unit. We measured the 480 SLI setup at 900W at the wall, which after some power efficiency math comes out to around 750-800W of actual load. At that load there's less headroom than we'd like to have.
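For those curious, the rough efficiency math looks like this (the 85% efficiency figure is an estimate for a quality unit near full load, not a measured number):

    # Wall-socket draw vs. actual DC load delivered by the PSU.
    wall_watts = 900
    assumed_efficiency = 0.85   # estimated efficiency near full load
    dc_load = wall_watts * assumed_efficiency
    print(f"~{dc_load:.0f} W DC load on an 850 W-rated unit")  # ~765 W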
Just to add to that: we had originally been looking for a larger PSU after talking about PSU requirements with an NVIDIA partner, as the timing of this launch required we secure a new PSU before the cards arrived. So Antec had already shipped us their 1200W PSU before we could test the 850W, and we're glad, since we would have been cutting it so close.
OK, so the 480 generally beats the 5870, and the 470 generally beats the 5850, but at higher prices, temperatures, wattage, and noise levels. What about the 5970?
As far as I can tell, the 5970 beat or came even with the 480 in all tests, draws less power, runs cooler, and makes less noise. The price isn't that much more, either.
It seems more fair to me to compare the 480 with the 5970, as both are the fastest single-card (as in one PCIe slot) solutions and are close in price and wattage.
I would also like to see what framerates FPS games come in at with gamer settings (1680x1050 and 1920x1200 resolutions); and if the average is higher than the game's cutoff or tickrate, what is the minimum FPS, and how much can you bump the eye candy before the average drops below the cutoff/tickrate, or the minimum drops below acceptable (30)?
The reason gamers sacrifice visuals to get high FPS comes down to game flow and latency. If FPS is below the game tickrate, you get latency. For many games the tickrate is around 100 (100 updates in the game engine per second). At 100 FPS you have 10ms of latency between frames; if it drops to 50 you have 20ms, and at 25 you have 40ms. Lower than 25-30 FPS will obviously also result in virtually unplayable performance since aiming becomes hard, so added latency from FPS below this becomes moot. If you are playing multiplayer games, this is added to the network latency. As most gamers know, latency below 30ms is generally desired, above 50ms starts to matter, and above 100ms is very noticeable. If you are on a bad connection (or have a bad connection to the server), 20-30ms of added latency starts to matter even if it isn't visually noticeable.
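The arithmetic behind those numbers, sketched out:

    # Latency between frames at a given framerate.
    def frame_latency_ms(fps):
        return 1000.0 / fps

    for fps in (100, 50, 25):
        print(f"{fps} fps -> {frame_latency_ms(fps):.0f} ms between frames")
    # 100 fps -> 10 ms, 50 fps -> 20 ms, 25 fps -> 40 ms;
    # in multiplayer this stacks on top of network latency.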
Anyone else getting that message? I finally had to turn off the 'attack site' option in FF. It wasn't doing this last night. It's not doing it all over AT, just on the GTX 480 article.
I agree. Temperature graphs should either be normalized to the ambient environment or absolute zero; any other choice of basis is completely arbitrary.
Uh oh, my browser just got a heartwarming warning when I clicked on this article. The warning said that it might infect my computer badly and that I should definitely run home faster than my legs can carry me.
Methinks that Cypress really blindsided nVidia. And then on top of it being such an efficient chip, you throw in Eyefinity and all of the audio-over-HDMI features, etc.
With the current console generation being the primary focus of game developers, I find it hard to believe that tessellation will get its big breakthrough anytime soon. With the next-gen consoles it will come, but that is a few years from now, and hopefully by that time we will have seen at least one new generation of GPUs.
These things are not single-slot cards. They are double-slot; they take 2 slots. No review should be published without pointing out performance per watt. If you don't publish performance per dollar, including the 100-watt premium over 3 years, you are not doing your job. Only an idiot would buy anything from nVidia. You really think anyone is going to want fan noise from these monstrosities anywhere near them?
SHAME SHAME SHAME.
Throw this bullshit in the garbage and tell nVidia to f-off until it releases an actual computer graphics product instead of a space heater.
You talk about dual-slot cards as if that's a bad thing - it's the best design currently possible, since it allows for efficient cooling without much fan noise, and the heat goes outside your case. Plus, AMD's 5870 & 5850 are also dual-slot!
"No review should be published without pointing out performance per watt" - what gamer cares about that? That's a concern for server farms!
So Nvidia's fastest card is 11% faster than AMD's mid-level card, the 5870, and AMD's top card, the 5970, is a lot faster than the GTX 480. Do not give me that the 5970 is a two-chip card and cannot be compared to a single-chip card. Sorry guys, the 5970 takes up one slot just like the GTX 480, and it is faster and consumes less energy to move things on the screen faster. If I have 3 slots on my motherboard, I can have 6 video chips with an ATI setup, while with an nvidia setup I can have only 3 at the most - and a two-chip nvidia version looks impossible with this power-hungry design. ATI has the top single card you can buy, be it with two chips. It took them 6 months and you still cannot buy one; paper launches suck. I have bought several Nvidia cards and liked them all, but this one really looks to fall short. If I have one slot to put my video card in, ATI has the highest-performing card I can buy: the 5970. It's like asking whether I'd rather have a single-core or a dual-core CPU, and that's a no-brainer; two is always better than one.
6 months and we get a couple of harvested, power-sucking heaters? Performance king, barely, but at what cost? Cards aren't even available yet. This is a fail.
This puts ATI in a very good place to release a refresh or revisions and snatch away the performance crown.
It's hard to say since we can't control every variable independent of each other. A full GF100 will have more shading, texturing, and geo power than the GTX 480, but it won't have any more ROP/L2/Memory.
This is going to heavily depend on what the biggest bottleneck is, possibly on a per-game basis.
Oh, how the mighty have fallen. :( I remember the days of the 8800GT, when nvidia did a hard launch and released a cheap, excellent-performing card for the masses. With the Fermi release you would never know it's the same company. Such a disappointment.
I think the MSRP is lower than $300 for the 5850 ($259) and lower than $400 for the 5870 ($379). Just thought that was worth sharing.
I have to believe that demand will now shift back evenly and price drops for the AMD cards can ensue (if nothing else, the cards should return to their MSRP values because the competition is finally out). I would imagine the price gap between the GTX 480 and the 5870 could be as much as $150 when all is said and done. Maybe $200 initially, as this kind of release is almost always followed by a paper launch (major delays and problems before launch = supply issues).
So the 5870 and 470 appear to be priced similarly, while the 5870 beats it in virtually every game and uses 47W less at load! That is a TON of additional on-die power (like 30-40A?).
We saw this coming last year when Fermi was announced. Now AMD is better positioned than ever.
I see why XFX started making ATI cards a few years ago with the 4000 series. Once again nVidia has made a giant chip that requires a high price tag to offset the price of manufacturing and material. The same thing happened a few years ago with the nVidia GTX200 cards and the ATI 4000 cards. XFX realized that they weren't making as much money as they'd like with GTX200 cards and started producing more profitable ATI 4000 cards.
I bought a 5870 a couple months ago for $379 at Newegg with a promotion code. I plan on selling it, not to upgrade, but to downgrade. A $400 card doesn't appeal to me anymore when, as many posters have mentioned, most games don't take advantage of the amazing performance these cards offer. I only play games like MW2, Borderlands, Dirt 2, and Bioshock 2 at 1920x1080, so a 4870 should suffice for my needs for another year. Maybe then I'll buy a 5850 for ~$180.
First post, hope I didn't sound too much like a newbie.
Unless you are an insider, all of this "profitability" speculation is just that: useless speculation.
The reason they make both companies' chips is more likely diversification: if one company does poorly one round, then XFX doesn't go down with them. I'd hate to have made only ATI chips during the 2900 XT era, and I'd hate to have made only nVidia chips during the 5800 FX era.
I know this is going to take quite a bit of work, but can't you colour up the main cards and its competition in this review? By main cards, I mean GTX 470, 480 and 5850 and 5870. It's giving me a hard time to make comparison. I'm sure you guys did this before.. I think.
If I remember correctly, Nvidia makes nearly 30-40% of its profits from Tesla and Quadro. However, Tesla and Quadro only occupy 10% of their total GPU volume shipments - or 20% if we only count desktop GPUs.
Which means Nvidia is selling those perfect-grade 512-shader Fermi chips to the most profitable market, and binning the lesser chips as GTX 480s and GTX 470s. While Fermi did not provide the explosion of HPC sales we initially expected, due to heat and power issues, judging by pre-order numbers Nvidia still has quite a lot of orders to fulfill.
The best thing is we get another die shrink to 28nm in late 2010 / early 2011 (it is actually slated for volume production in Q3 2010). This should bring lower power and heat. Hopefully the next update will also get us a much better memory controller; a 256-bit controller with maybe 6GHz+ GDDR5 should offer enough bandwidth while yielding better than the 384-bit controller.
Fermi may not be exciting now, but it will be in the future.
So how do you guys test temps? It's not specifically stated. Are you using a case? An open bench? Readings from a temp meter, or system readings from the Catalyst or nvidia control panel? Please enlighten. It's important because people will eventually have to extrapolate your results to their personal scenarios, which involve cases of various designs. 94 degrees measured inside a case is completely different from 94 degrees measured on an open bench.
Also, why are people saying all this stuff about switching sides and families? Just buy the best card available in your opinion. I mean it's not like ATI and Nvidia are feeding you guys and clothing your kids and paying your bills. They make gpus, something you plug into a case and forget about if it's working properly. I just don't get it :(
We're using a fully assembled and closed Thermaltake Spedo, with a 120mm fan directly behind the video cards feeding them air. Temperatures are usually measured with GPU-Z, unless for some reason it can't grab the temps from the driver.
Thanks for elaborating on the temps as I was wondering about that myself. One other thing I'd like to know is how the VRM and RAM temps are on these cards. I'm assuming that the reported values are for just the core.
The reason I ask is that on my 4870 with aftermarket cooling and the fan set pretty low, the core always stayed well below 65, while the RAM went all the way up to 115 and the VRMs up to ~100. (I have obviously increased fan speeds since, as the RAM temps were way too hot for my liking - they now peak at ~90.)
Correct, it's just the core. We don't have VRM temp data for Fermi. I would have to see if the Everest guys know how to read it, once they add support.
I just am not interested in a card with a TDP over 175W. When I upgraded from an 8800GT to a GTX 260 it was a big jump in heat and noise, and definitely at my tolerance limit during the summer months. I found myself under-clocking a card I had just bought.
175W max (though 150W is preferred) at $250 and I am ready to buy. If NVIDIA won't make it, then I will switch back to ATI.
All this waiting, and a paper launch. They couldn't even manage the half-dozen cards per vendor at Newegg of some previous soft launches.
All this waiting, and a small incremental increase over existing card performance. High power draw and temps. High prices - at least they had the sense not to price it like the 8800 Ultra, which was a game changer: a big leap in performance, plus it brought us a new DX level, DX10.
I've been holding off buying until this launch, I really wanted nVidia to pull something off here. Oh, well.
so by the time a "full" gf100 is available, how close will we be to the next-gen amd card?
and how low will the prices on the 58xx series be?
this article never made an explicit buying recommendation, but how many people out there are still waiting to buy a gf100?
6 months is a long time.
after xmas and the post-holiday season, anybody on the fence about it (i.e. not loyal nvidia fans) probably just went for an amd card.
so the question (for a majority of potential buyers?) isn't "which card do i buy?", it's "do i need/want to upgrade from my 58xx amd card to a gf100?"
also, i'm curious to find out if fermi can be scaled down into a low profile card and offer superior performance in a form factor that relies so heavily on low temps and low power consumption.
the htpc market is a big money maker, and a bad showing for nvidia there could really hurt them.
maybe they won't even try?
Great review as usual here at Anandtech. I would have thought that in your conclusions you would have mentioned that, in light of the rather lackluster 5% performance crown they now hold, it wasn't the best idea for them to disable 6% of their cores after all.
Why make a 512-core GPU, then disable 32 of them and end up with poorer performance, when you're already 6 months behind the competition, sucking up more juice, with higher temps, more fan noise, and a higher price tag? That's like making the Bugatti Veyron and then disabling 2 of its 16 cylinders!
That will probably be what nvidia does when amd releases their super-Cypress to beat the 480: they'll release a 485 with all 512 cores and better I/O for the RAM.
"Fermi is arranged as 16 clusters of 32 shaders, and given that it is turning off 64 shaders, it looks like the minimum granularity it can fuse off is a single cluster of 32. This means it is having problems getting less than two unrecoverable errors per die, not a good sign."
Don't quote SemiAccurate to me. If you want to call 1 in 100 claims being correct "semi-accurate" then fine, you can... me, I call it a smear. Especially since the guy who wrote that article is a known liar and hack. If you google for gtx480, click on the news results, and click on SemiAccurate, you will see it's listed as satire.
the same Ryan Smith who panned the 5830 for being a "paper launch" even though it was available one day later?
What's wrong this time Ryan? Maybe there are so many bad things to say about Fermi, being "paper launched" was well down the pecking order of complaints?
I was thinking the same thing. The 5830 got slammed for being a paper launch even though it wasn't, but Fermi gets a pass? Why? This isn't even a launch at all despite what Nvidia says. Actual cards will be available in what, 17 days? That's assuming the date doesn't change again.
Even though Ryan Smith mentioned that Fermi was paper launched today, the tone and wording of the 5830 article were much harsher on AMD/ATI. That is ridiculous considering that Ryan had to eat his own words with an "Update" on the 5830's availability.
Being tougher on AMD/ATI - when they did in fact launch the 5830 that day, and have hard-launched, to the best of their ability, the entire 5xx0 stack - gives an impression of bias.
A paper launch, with availability at least two and a half weeks out, for a product six months late, is absurd!
Yeah, I mentioned it too. ATI got reamed for almost an entire page over something that didn't really happen, while this review mentions it in passing, almost like it's a feature.
"The price gap between it and the Radeon 5870 is well above the current performance gap"
Bingo. Nvidia may have the fastest single GPU out now, but not by much, and there are tons of trade-offs for just a little more FPS over the Radeon 5870. High heat/noise/power for what? Over 90% of gamers play at 1920x1200 resolution or less, so even just a Radeon 5850 or CrossFired 5770's are the best bang for the buck.
If all you're going to play at is 1920x1200 or less, I see no reason why educated people would want to buy a GTX 470/480 after reading all the reviews of Fermi today. Way too expensive and way too hot for not much of a performance gain; maybe it's time to sell my Nvidia stock before it goes down any further over the next year or so.
Hey, thanks for the folding data, very much appreciated. Although, if there's any way you can translate it into something folders are a little more used to, like PPD (points per day), that would be even better. I'm not sure what the benchmarking program you used is like, but if it folds things and produces log files, it should be possible to get PPD. From the ratios, it looks like above 30k PPD, but it would be great to get hard numbers on it. Any chance of that getting added?
Eh, that's ok, if you want to that's fine, but don't worry about it too much, it sounds like it was an artificial nvidia thing. We'll have to wait for people to really start folding on them to see how they work out.
I'd like to see some overclocking benchmarks given the small die vs big die design decisions each company made.
All in all, ATI has won this round in the business sense. The performance crown is not where the money is. ATI out-executed Nvidia in a huge way. I cannot wait to see the financial results for each company.
Agreed... no overclocking at all... feels like a big part of the review is missing. With the GTX 480 having such high consumption/temperatures, I doubt it would go much further, at least on air. On the other hand, there are already many OC'd HD58xx cards out there, and even those can easily be overclocked further. With that many watts of advantage, I think AMD could easily catch up with the GTX 480 and still be a bit cooler and less power-hungry - and less noisy as a consequence, of course.
Very thorough test, as expected from you guys, thanks... BUT:
Why on earth do you keep using an arguably outdated Core i7 920 for benchmarking the newest GPUs? Even at 3.33GHz it's no match for an overclocked 860, a common high-end gaming-rig CPU these days. I got mine to 4.2GHz air-cooled?!
Sorry... I don't get it. In any GPU review I'd try to eliminate any possible bottleneck so the GPU becomes the limit; why use an old CPU like this?!
Clock for clock, the 920 is faster than the 860 thanks to its triple-channel memory (the 860 ends up faster overall because of its aggressive turbo mode). X58 is definitely the route to go, especially if you're benchmarking SLI/CF setups (dual PCIe x16).
To put it another way: using the P55/860 would limit cards to PCIe x8 bandwidth when benchmarking SLI/CF (unless of course you get a board with an nF200 chip), which can matter more (especially with high-end cards) than overclocking a CPU from 3.33GHz to 4GHz.
Your post is silly; everyone knows the X58 platform is the superior chipset in the Intel line-up. Secondly, do you honestly think 3.33GHz vs 4GHz is going to make that much of a difference at those high resolutions?
Sorry guys, but I know what I'm talking about. Using Crysis, for instance, I found that minimum fps scales quite nicely with CPU clock, whereas the difference a quad core makes is not so big (only 2 threads in the game, afaik). Far Cry 2: huge improvements with higher-clocked CPUs. The Core i7 platform has a clear advantage, yes, but the clock counts quite a bit.
As I said... no offense intended, and no, I'm not arguing against my favorite site Anandtech ;). Just stating what I and others have observed. I'd just always try to minimize other possible bottlenecks.
These new cards from ATI and Nvidia are very nice, and for a new PC build it is a no-brainer to pick one up. But for those like me with decent cards from the last generation (GTX 285 SLI), I don't really feel a lot of pressure to upgrade.
Most current PC games are DirectX 9 Xbox 360 ports that last-gen cards can handle quite well. Even DirectX 10 games are not too slow. The real driver for these cards is DirectX 11 games, of which I can count the current ones on one hand, with not very many upcoming.
Those that are out don't really bring much over DX10, so I don't feel like I am missing anything yet. I think Crysis 2 may change this, but by its release date there will probably be updated/shrunk versions of these new GPUs available.
Hence Nvidia and ATI need really ecstatic reviews to convince us to buy their new cards when there is not a lot of software that (in my opinion) really needs them.
As long as the consoles are in the driver's seat (this isn't going to change) DX11 and the features it provides won't be widely found in games until the next generation of consoles - in 2-3 years.
So really, without growth in the PC gaming market, there is no need to upgrade from the last generation. Too bad, really.
Thank you for listening to our feedback on improving your test suite of games, Ryan. I think your current list much better represents our interests (fewer console ports, a selection of games that better represent the game engines being used in current and future titles, fewer titles with GPU vendor bias, inclusion of popular titles that have staying power like BF:BC2, etc.) than the one you used to review the 58xx's when they were released. The only title that I feel that is missing from our suggestions is Metro 2033. Kudos!
Good review. The grammar errors are prolific, but I guess this was rushed to release or something.
So it's a hot, power-hungry card with a high pricetag. Not too surprising.
Would have liked to see a $150-range Fermi-based card sometime this year so I can ditch my 5770 and get back to NVidia, but the high temps and prices on these cards are not a good sign, especially comparing the performance against the 5800-series.
Fanboy of what?
The ATI card I have now that I can't wait to get rid of?
The desire for NVidia to release something competitive so I can get back to a more stable driver set and remove all traces of ATI from this PC?
You know you have the best tech site around when a product review makes it seem like a ddos is in progress.
As far as the review itself, it's very comprehensive, so thanks Ryan! The new NVIDIA cards seem to be just where most people thought they would be. It really makes me anticipate the next HD58xx card and the AMD price cuts on the current line up that will come with it.
Great review, although you may want to edit this sentence:
"NVIDIA meanwhile had to deal with the fact that they were trying to produce a very large chip on a low-yielding process, a combination for disaster given that size is the enemy of high yields."
Shouldn't it be "large size is the enemy of low yields?" Either way, that end point seems a bit redundant.
IMO the HardOCP review was better because they showed the real-world differences between the NV and AMD cards: the 470 didn't allow better settings than the 5850, and the 480 was only a little bit better than the 5870. So the 470 is, IMO, an epic fail at that price.
When you add the extra power and noise from the 470 and 480, I wouldn't pay more for them than for a 5850 or 5870.
With the 470 and 480 generating so much heat and noise, and consuming more power than even the dual-GPU Radeon HD 5970, even thinking of a dual-GPU 470/480 (495?) is a scary thing to do.
Agreed. And considering the $350 470 is no faster than a $150 5770 at 1680x in BF:BC2, and only 23% faster at 1920x, that's pathetic. Considering how much better it does in other games, it must be a driver optimization issue that can hopefully be worked out.
Fermi has existed for months, so the driver work should be as far along as AMD's. The delay allowed for a better stepping and higher clocks, but the drivers aren't going to improve any more quickly than AMD's.
Firmly in AMD's hands?
I don't know about that.
Although it can't bitstream TrueHD and DTS-HD MA, I would argue that's not really as debilitating as not being able to handle level 5.0 H.264 video, since you can output as LPCM.
FIRMLY in AMD's hands... it is... not only does Nvidia not do TrueHD and DTS-HD MA on a card that won't fit in an HTPC, they won't do it even when they get the smaller cards out... Firmly? Yeah....
let me be more clear.
I'm saying that although the ATI cards have better handling of audio, the nvidia cards do in fact have better handling of video, since they can handle level 5.0 and 5.1 H.264 video (and I guess MPEG-4 ASP, but that's irrelevant).
So I wouldn't say the ATI cards have a definite lead in this area.
A sound card that provides bitstreamed HD audio requires another $200+, so that tacks an even higher net price onto the Nvidia option.
All of AMD's 5XXX cards give you real HD audio for free.
I agree, an excellent article overall. I wonder if AMD will now move to get price drops in play on their cards. After all, they are still selling far above their suggested retail price.
I'm really impressed by this article; the author did a great job ;) As for Fermi, it seems to be a really good product for scientific work, but for gamers I'm not so sure. The price tag, power consumption, and noise are all too much for only 10-15% more performance than the cheaper Radeon, which is much more reasonable in all these respects. I guess Fermi needs some final touches from Nvidia; for now it's not a final, well-tested product. Temps around 100 are not good for the PCB, GPU, and the rest of the electronics, and I don't believe they won't matter for the lifetime and stability of the card. I'm glad Fermi finally came, but I'm disappointed, at least for now.
I just don't know why the GTX 480 is compared to the HD5870, and the same for GTX 470 vs HD5850. The GTX 470 sits right in the middle between the two single-GPU Radeons, and just the same can be said for the GTX 480 sitting right in between the HD5970 & HD5870.
Prices of these cards as presented by nVidia/ATI:
HD5970 - 599$
GTX480 - 499$
HD5870 - 399$
GTX470 - 349$
HD5850 - 299$
I know the GTX 480 is a single GPU, so by this logic you'll compare it to the HD5870. But the GTX 480 is nVidia's top-of-the-line graphics card, and the HD5970 is ATI's top-of-the-line card. Besides, ATI's strategy for the last 3 product cycles has been producing small(er) chips and going multi-GPU, while nVidia wants to go the single-monolithic-GPU way. So following this logic, the GTX 480 should indeed be compared to the HD5970 rather than the HD5870.
Anyway, the conclusion of this article is all fine, telling both the strengths and the weaknesses of the solutions from both camps, but I believe readers weren't told straightforwardly enough that these cards don't cost the same... And the HD5970 was left out of most of the (textual) comparisons.
If I personally look at these cards, they are all worth their money. nVidia's cards are probably more future-proof with their commitment to future tech (tessellation, GPGPU), but AMD's cards are better for older and current (and near-future) titles. And they are less hot and less noisy, which most gamers pay a lot of attention to. Not to mention, this is the first review of a new card in which no one mentioned GPU overclocking. I'm guessing that 90+C temperatures won't allow much better clocks in the near future ;)
In regards to the temperature and noise: there's always watercooling. I mean, if you have so much money to throw at the latest card, you might as well throw in some watercooling too.
It's too pricey for me, though. I guess I'll wait for the 40nm process to be tweaked; spending so much money on a gfx card is silly if you know a while later something new will come around that's way better, and it's just not worth committing so much money to it in my view.
It's a good card though (when watercooled), with nice stuff in it and faster on all fronts, but it also seems like an early sample of the new roads nvidia is heading down, and I expect they will have much improved stuff later on (if still in business).
Like I've said before: if you want the FASTEST (and that's usually what you want if you have money to throw away), you'll be buying the HD5970. Or you'll be buying the HD5970 + watercooling as well.
Oh, don't make me laugh, please! :D In that case this review shouldn't be up at all, or it should be called a "PREview"... or have you actually seen any stock of GTX 470/480 around?
It is not my fault that your US shops bumped up the prices in the complete absence of competition in the high-end market. But the US is not the only market in the world, either.
You want to compare with real-world prices? Here are prices from Croatia, Europe:
HD5970 - 4290kn = 591€ (recommended is 599$, which is usually 599€ in EU)
GTX480 - not listed, recommended is 499$/€
HD5870 - 2530kn = 348€ (recommended is 399$/399€ in EU)
GTX470 - not listed, recommended is 349$/€
HD5850 - 1867kn = 257€ (recommended is 299$/299€ in EU)
So let's say that European prices for the GTX cards will be a bit lower than the recommended ones; the GTX 480 would still be ~120-130€ pricier than the HD5870, and the HD5970 would be the same ~120-130€ more expensive than the GTX 480.
As for the lower-priced nVidia card, it's again firmly in the middle between the HD5850 & HD5870.
The point is that there's no clear price comparison at the moment, and the article's conclusion should be clear on that.
A person who wants the FASTEST CARD will stretch for another 100$/€ to buy the HD5970. Especially since this means lower noise, lower consumption, and lower heat. All combined, this means you can save a few $/€ on the PSU, case, cooling, and earplugs, putting the HD5970 within arm's reach of the GTX 480 (price-wise) while allowing for better speeds.
As for the GTX 470: again, lower consumption/heat/noise with the ATI cards means less expense for PSU/cooling and savings on electricity bills. For me, that's well worth the 50€/$ difference in price; in fact, I'd rather spend 50$/€ more to buy the HD5870, which is faster, less noisy, doesn't require me to buy a new PSU (I own an HD4890, which was overclocked for a while, so an HD5870 would work just fine), and will save me 50W during every hour of any game I play... which will make it CHEAPER than the GTX 470 in the long run.
So let's ask again: why isn't the conclusion made a bit more straightforward for end users, and why is the HD5970 completely gone from the conclusion??
These MSRPs are not entirely - I mean historically - correct... The first MSRP (list price) for the HD 5850 was $259, and that was the price you had to pay when buying on sites like Newegg (there were some rebates, and some differences depending on manufacturer, but you still had to have a very potent hunting sense to get a card from any manufacturer; I got lucky twice). Shortly after launch (about one month in, it was October), the MSRP (set by AMD) hiked to $279, and problems with supply not only continued but even worsened. Since November 2009 it has been $299. The HD 5870 followed a generally similar path, though the HD 5850 hiked more, which is no wonder. Note that this is for the reference design only; some manufacturers had higher MSRPs. After all, AMD and nvidia sell only chips, not gaming cards.
The whole pricing thing with the HD 5xxx series is quite unusual (though not unexpected), since normally you'd anticipate the street price to be quite a bit lower than MSRP, and then to drop even further, and you would be right. I remember buying an EVGA GTX 260 just after its launch, and the price was a good $20 lower than the suggested price. That's why we need more competition, and for now the outlook isn't very bright, with nvidia not quite delivering...
And these European prices: most if not all European countries have a heavy tax (VAT); this tax is always included and you have to pay it, and there are other taxes too. In the US the sales tax is not included in the street price, and usually you can evade it after all (harder for Californians). Europeans usually get higher prices. Comparing US prices is thereby better, particularly in US dollars (most electronics deliveries are priced in dollars in Europe). So prices in the rest of the world were also boosted, even in Europe, despite the weak dollar and other factors :)
One note: HD 5xxx cards are really very big, and most of them have very unfriendly power socket locations, so you'd expect to pay more for a proper, huge case. Also note that if you have a 600W PSU or so, you'd be smarter to keep it and not upgrade unless REALLY necessary. Lower load means lower efficiency, especially when plugged into a 115V/60Hz grid. So if you have a bigger PSU you pay more for electricity. And it seems that more gamers are concerned with that bill than at any time before... You can't blame them for that, and it's sad in its own way.
Well, the current MSRP is as I wrote it above. If there is no competition and/or demand is very high, prices always tend to go up. We're just lucky it's not happening often, because in IT competition is usually very good.
As for European prices, what do taxes have to do with it? We've got 23% tax here, but it's included in all prices, so if nVidia goes up 23%, so do the AMD cards. If I'm looking at prices in the same country (and city, and sometimes the same store), and nVidia is $300 and ATI is $100 and $500, then I just can't compare them and say "hey, this nVidia is faster than this $100 ATI card, I'll buy that"... no, you can't compare like that. The only thing you can do in that case is say something like "OK, I have $300 and the fastest I can afford is nVidia", or "I want the fastest there is, and I don't mind the cost", in which case you'll take the HD5970. Or you can't afford any of those. So again, I don't get why the cards in this review are so rigidly compared to one another as if they had the exact same price (or a +/- $10 difference), and why, on the one hand, they compare a MORE expensive nVidia card to a QUITE CHEAPER AMD card, but won't compare that same nVidia card to a more expensive AMD card. WHY?
And AMD's cards are no bigger than nVidia's, and last time I checked, a bigger case is way, way cheaper than a new PSU. And I'm running my computer on, get this, a 450W PSU, so I'm not wasting any excessive power on inefficiencies at low loads ;) And since this PSU handles an overclocked HD4890, it should work just fine with a non-overclocked HD5870, while I'm pretty sure that a GTX 470 would already mean a new PSU - a new PSU that costs ~100$/80€. So I'd pay more in total and get a slower card.
Again, I'm not getting why there's such a rigid idea that GTX 470 = HD5850 & GTX 480 = HD5870.
Just re-read the conclusion... something is lacking in this sentence:
"If you need the fastest thing you can get then the choice is clear, .."
Shouldn't it finish with "... the choice is clear: HD5970"? That's what I'm saying: the HD5970 wasn't mentioned in the entire conclusion. Gone are the days of the "single-GPU crown"... that's just for nVidia to feel better. ATI doesn't want the "single-GPU crown"; they want the fastest graphics CARD, and they have it. A serious lack in this article, serious. And again, there is the exact same amount of money dividing the GTX 480 and HD5870 as there is between the GTX 480 and HD5970.
I know this is going to take quite a bit of work, but can't you colour up the main cards and its competition in this review? By main cards, I mean GTX 470, 480 and 5850 and 5870. It's giving me a hard time to make comparison. I'm sure you guys did this before.. I think.
It's funny how you guys only coloured the 480.
PS: I'm sorry for the spam; my comments were not appearing. And I'm sorry for replying to this guy when it is completely off topic, lol.
Yes, it did take a bit of work, but I did it for Ryan. The HD 5870/5970 results are in orange and the 5850 is in red. It makes more of a difference on crowded graphs, but it should help pick out the new parts and their competition. I'm guessing Ryan did it to save time, because frankly the graphing engine is a pain in the butt. Thankfully, the new engine should be up and running in the near future. :-)
No. Keep colouring simple. Just 3 or 4 colours max; more creates noise. If you need to highlight other results, colour the label, or circle it, or add a drop shadow, or put a red * at the end.
The article does not include the HD 5970 in CF. The article does not mention the HD 5970 at all in the conclusion. This is really weird. It is my belief that Anandtech has become pro-nvidia and is no longer an objective site. Objectivity is looking at (performance + functionality) / price. The HD 5970 is a clear winner here. After all, who cares if a card has 1, 2, or 20 GPUs? It's the performance/price that matters.
According to a test at legitreviews.com, having two monitors attached to the card causes the idle power use to rise quite a bit. I guess the Anand test is done with just one monitor attached? It would be nice to see power consumption numbers for dual-monitor use as well; I don't mind high power use during load, but if the card does not idle properly (with two monitors), then that is quite a showstopper.
I have a second monitor (albeit 1680) however I don't use it for anything except 3D Vision reviews. But if dual monitor power usage is going to become an issue, it may be prudent to start including that.
Is it irresponsible to use benchmarks designed for one card to measure the performance of another card?
Sadly, the "community" clings to the belief that all GPU architectures are the same, which is of course not true.
The N-queens solver is poorly coded for ATI GPUs, so of course you can post benchmarks that say whatever you want them to say if they are coded that way.
Personally, I find this invalidates the entire article, or at least its "compute" section.
One of the things we absolutely wanted to do starting with Fermi is to include compute benchmarks. It's going to be a big deal if AMD and NVIDIA have anything to say about it, and in the case of Fermi it's a big part of the design decision.
Our hope was that we'd have some proper OpenCL/DirectCompute apps by the time of the Fermi launch, but this hasn't happened. So our decision was to go ahead with what we had, and to try to make it clear that our OpenCL benchmarks were to explore the state of GPGPU rather than to make any significant claims about the compute capabilities of NVIDIA or AMD's GPUs. We would rather do this than to ignore compute entirely.
It sounds like we didn't make this clear enough for your liking, and if so I apologize. But it doesn't make the results invalid - these are OpenCL programs and this is what we got. It just doesn't mean that these results will carry over to how a commercial OpenCL program will perform. In fact, if anything it adds fuel to the notion that OpenCL/DirectCompute will not be the great unifier we had hoped for, if it means developers are going to have to write paths optimized around NVIDIA's and AMD's different shader structures.
Thanks for this comprehensive review; it covers some very interesting topics between Team Green and Team Red.
Yet I agree with one of the comments here: you missed how easily the ATI 5850 and 5870 can be overclocked thanks to their lean design. A 5870 can easily deliver more or less the same performance as a 480 while still running cooler and consuming less power.
Some people might point out that our new 'champion' card can be overclocked as well... that's true. However, doesn't it feel terrifying to have a graphics card running hotter than boiling water?
I wonder what kind of overclocking headroom the 470 has, since someone with a 5850 can easily bump the voltage up a smidge and get about a 30% overclock with minimal effort... people who tinker can usually safely reach about 1GHz core, for about a 37% overclock.
Unless the 470 has a fair bit of overclocking headroom itself, someone with a 5850 could easily overclock their way to superior performance, lower heat, lower noise, and lower power consumption.
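For reference, those percentages work out if you take the HD 5850's stock core clock as the baseline — a quick sketch, where the 725 MHz figure is my assumption of the commenter's reference point:

```python
# Overclock headroom relative to an assumed 725 MHz stock HD 5850 core.
STOCK_MHZ = 725

for target_mhz in (940, 1000):
    gain_pct = (target_mhz / STOCK_MHZ - 1) * 100
    print(f"{target_mhz} MHz -> +{gain_pct:.0f}%")

# ~940 MHz matches the "about 30% with minimal effort" claim, and
# 1000 MHz works out to +38%, close to the 37% quoted above.
```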
After all these months and months of waiting, Nvidia has basically released a few products that ATI can defeat just by binning their current GPUs and bumping up the clock speed? *sigh* I really don't know who would buy these cards.
You're being way too kind to Nvidia. Up to 50% more power consumption for a very slight (at best) price/performance advantage? This isn't a repeat of the AMD/Intel thing; this is a massive difference in power consumption. We're talking about approximately $1 a year for every hour a week of gaming. If you game 20 hours a week, expect to pay about $20 a year more for using a GTX470 instead of a 5850. You may as well add that right to the price of the card.
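A quick sanity check on that rule of thumb — a sketch, where the ~100 W load-power gap is the review-era ballpark and the $0.20/kWh electricity rate is my assumption, not a figure from the review:

```python
# Rough yearly cost of a ~100 W load-power difference while gaming.
EXTRA_WATTS = 100          # assumed GTX 470 vs 5850 load delta
DOLLARS_PER_KWH = 0.20     # assumed electricity rate; use your own

def yearly_cost(gaming_hours_per_week):
    kwh = EXTRA_WATTS / 1000 * gaming_hours_per_week * 52
    return kwh * DOLLARS_PER_KWH

for hours in (1, 10, 20):
    print(f"{hours:2d} h/week -> ${yearly_cost(hours):.2f}/year")

# 1 h/week comes to ~$1/year, so 20 h/week lands right around the
# $20/year figure in the comment.
```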
But the real issue: what happens to these cards when they get even a modest coating of dust in them? They're going to detonate...
Even if the 470 outperformed the 5850 by 30%, I don't think it would be worth it. I can't stand loud video cards; it's totally unacceptable to me. I again have to ask the question I find myself asking quite often: what kind of world are you guys living in? nVidia should get nothing more than a poop-in-a-box award for this.
With those power draws and the temps they reach in daily operation, I see GPU failure rates being high on the GTX 480 and 470, as they're already marginal coming out of the fab. I'll stick with ATI for 10 fps less.
I've been holding out for months to see what Fermi would bring to the world of GPUs. After reading countless reviews of this card, I don't think it's a justifiable upgrade from my GTX260. Yes, the performance is much higher, but in most benchmarks of games like Crysis this card barely wins against the 5870, and buying it would mean upgrading my PSU and possibly getting a new case for ventilation. I keep loading up Novatech's website and almost adding a 5870 to the basket instead of pre-ordering the GTX480 like I intended. What puts me off the new nVidia card more than anything is its noise and temps. I can't see this card living very long.
I've been an nVidia fan ever since the first GeForce card came out, which I still have tucked away in a drawer somewhere. I find myself thinking of switching to ATI, but I've read too many horror stories about their driver implementation, and that puts me off. Maybe I should just wait for Nvidia to refresh its new card and keep hold of my 260 for a bit longer. I really don't know :-(
There is an error with the Bad Company 2 image mouseovers for the GTX 480. I think the images for 2xAA and 4xAA have been mixed up; the 2xAA image clearly has more AA than the 4xAA image.
Compare GTX 480 2x with GTX 285 4x and they look very similar. Also compare 480 4x with 285 2x.
Very nice article, Ryan! I really enjoyed the tessellation tests. Keep up the good work.
My master copies are labeled the same, but after looking at the pictures I agree with you; something must have gotten switched. I'll go flip things. Thanks.
Correction: Nvidia retained their crown on AnandTech, even though some resolutions here favored ATI (mostly the higher ones). On Tom's Hardware the 5870 pretty much beat the GTX 480 from 1920x1200 to 2560x1600; not every time at 1920, but pretty much every single time at 2560.
That is where the crown is decided: in the best of the best situations, not "OMG it won at 1680, THAT HAS TO BE THE BEST!"
Plus the power-hungry state of this card is just appalling. Nvidia have shown they can't compete with proper technology, instead having to cram everything they can onto a chip and pray it works right.
Whereas ATI's GPU is designed so efficiently that they have plenty of room to almost double the size of the 5870.
I copied this over from a comment I made on a blog post.
I've been with nVidia for the past decade. My brother built his desktop way back when with the Ti 4200, I bought a prefab with a 5950 ultra, my last budget build had an 8600 GTS in it, and I upgraded to the GTX 275 last year. I am in no way a fanboy, nVidia just has treated me very well. If I had made that last decision a few months later after the price hike, it would've definitely been the HD 4890; almost identical performance for ballpark $100 less.
I recently built a new high-end rig (Core i7 and all), but I waited out on dropping the money on a 5800 series card. I knew nVidia's new cards were on the way, and I was excited and willing to wait it out; I expected a lot out of them.
Now that they're out in the open, I have to say I'm a little shaken. In many cases the performance of the cards is not where I'd hoped it would be (the general consensus seems to be a 5-10% increase over their ATI counterparts, and I see even that failing in many cases). It seems like the effort nVidia put into these cards gave them lots of potential, but most of it is wasted.
"The future of PC gaming" is right in the title of this post, and that's what these cards have been built for. Nvidia has a strong lead over ATI in compute and tessellation performance now, that's obvious; however, that will only provide useful if and when developers decide to put the extra effort into taking advantage of those technologies. Nvidia is gambling right now; it has already given ATI a half-year lead on the DX11 market, and it's pushing cards that won't be fully utilized until who-knows-when (there's no telling when these technologies will be more widely integrated into the gaming market). What will it do in the meantime? ATI is already on it's way to producing its 5000-series refresh; and this time it knows the competition's performance.
I was hoping the GTX 400s would do what the GTX 200s did: give nVidia back the high-end performance throne. ATI is not only competitive with its counterparts, it still has the 5970 for the enthusiast performance crown (don't forget Eyefinity!). I think nVidia made a mistake putting so much focus into compute and tessellation performance; it would've been smarter to produce cards with similar die sizes (crappy wafer yields, anyone?), faster raw performance with tessellation/compute as a secondary objective, and more competitive pricing. It wouldn't have been a bad option to create a separate chip for the Tesla cards, one focused on compute performance while the GeForce cards focused on the rest.
I still have faith. Maybe nVidia will work wonders with the drivers and produce the performance we were waiting for. Maybe it has something awesome brewing deep within its labs. Or maybe my fears will prove out, and nVidia is crossing its fingers and hoping its tessellation/compute performance wins it market share later on. If so, ATI will provide me with my pair of cards.
That was quite the rant; I wasn't planning on writing that much when I decided to comment on Drew Henry's (nVidia GM) blog post. I suppose I'm passionate about this sort of thing, and I really hope nVidia doesn't lose me after all this time.
The fact that this card comes out a year and a half after the GTX 295 makes me sick. Add to that the fact that the GTX 295 is actually faster than the GTX 480 in a few benchmarks, and very close in others, and it's like a bad dream for nvidia. Forget beating AMD; they can't even beat themselves. They could have done a die shrink of the GTX 295, added some more shaders, doubled the memory, and had that card out a year ago, and it would have crushed anything on the market. Instead they risked it all on a harebrained new card. I am a GTX 295 owner. Apparently my card is an all-around better card, since it doesn't lag in some games like the 480 does. I guess I will stick with my old GTX 295 for another year; maybe then there will be a card worth buying. Even the ATI 5970 doesn't have enough juice to justify a new purchase from me. This should be considered horrible news for Nvidia. They should be ashamed of themselves, and the CEO should be asked to step down.
Did you guys receive the GTX480 earlier than other reviewers? There were 17 cards tested on 3 drivers, and I'm assuming tests were run multiple times per game to get an average, plus installing and reinstalling drivers, etc. The Catalyst 10.3 drivers only came out the week of March 18.
Do you guys have multiple computers benchmarking at the same time? I just cannot imagine how all the tests were done within the time frame.
Our cards arrived on Friday the 19th, and in reality we didn't start real benchmarking until Saturday. So all of that was done in roughly a 5 day span. In true AnandTech tradition, there wasn't much sleep to be had that week. ;-)
I felt compelled to say a few things about nVidia's Fermi (480/470 GTX). I always like to start by saying: let's take the fanboyism out of the equation and look at the facts. I am a huge nVidia fan, but they dropped the ball big time. They are selling people on ONE aspect of DX11 (tessellation), and that's really the only thing their cards do well, but it's not an efficient design. What people aren't looking at is that tessellation is done by the PolyMorph engines, which tie directly into the CUDA cores, meaning the more CUDA cores are occupied by shader processing etc., the less tessellation performance, and vice versa = fewer frames per second. We see tons of tessellation benchmarks showing the GTX 480 is substantially faster at tessellation, and I agree, when the conditions suit that type of architecture (and there isn't a lot else going on).

We know the GF100 (480/470 GTX) is a computing beast, but I don't believe that will translate into overall gaming performance. The facts are: this GPU is huge (3 billion+ transistors), creates a boatload of heat, sucks up more power than either of the latest dual-GPU cards (GTX 295, 5970), came to market 6 months late, and is only 10-15% faster than its single-GPU competition. And some of us are happy? Oh right, it will be faster in the future when DX11 is relevant... I don't think so, for two reasons.

First, the current crop of actual DX11 game benchmarks (shaders plus tessellation, etc.) shows something completely different from the synthetic tessellation benchmarks. If tessellation were nVidia's trump card in games, the 5800 series would be beaten substantially in any DX11 title with tessellation turned on. We aren't seeing that (we're seeing the opposite in some circumstances), and I don't think we will.

Second, I'm fully aware that tessellation is scalable, but that raises another point. Many of you will say it's only in extreme tessellation environments that nVidia's cards really take off. If you agree with that statement, then nVidia has another issue: the video card market is made up not of high-end GPUs but of the cheaper mainstream ones. Since nVidia's PolyMorph engines are tied directly to the shaders, you can see where this is going: less powerful cards will be bottlenecked by their lack of shaders for tessellation, and vice versa. Developers want to make money, and they make money by selling lots of games. Crysis was a big game, but it didn't break any sales records; the truth is most people's systems couldn't run Crysis. Look at Valve: a lot of their titles sell well partly because of how friendly they are to mainstream GPUs (not the only reason, but it helps). The hardware base has to be there to support large game sales, meaning that if the majority of parts cannot do extreme levels of tessellation, few games will implement it.

Food for thought: can anyone show me a DX11 title where the GTX 480 handily beats the 5870 by anything close to the margin it manages in the Heaven benchmark? I think, as a few of you have said, it will come down to which games work better with which architecture; some will favor nVidia (Far Cry 2 is a good example), others ATI (Stalker). That is what we are seeing now. IMO.
P.S. I think people are also pissed because this card was claimed to be 60% faster than the 5870. As you can see, it's not!
I'm not familiar with Battleforge firsthand, but I understand it uses HD Ambient Occlusion, which is a variation of Screen Space Ambient Occlusion that includes normal maps. And since its inception in Crysis, SSAO has stood for Screen Space AO. So why is it called Self Shadow AO in this article?
Bit-tech refers to Stalker: CoP's SSAO as "Soft Shadow." That I'm willing to dismiss, but I think they're wrong.
Am I falling behind with my jargon, or are you guys not bothering to keep up?
I'm going with the less power-hungry ATI 5000 series. I know a 5850 will easily fit in my case as well. There's no way I'd choose the GTX 470 over the ATI 5870 or 5850. So that only leaves the GTX 480 against the 5870 or the 5850, and the performance increase is NOT worth the extra power and the higher price over the 5870.
I mean, even looking at the games: in the ones I'll probably play, Crysis and Battlefield: Bad Company 2, the ATI cards come out on top of the GTX 480. So bleh.
Nvidia, you need to make a much better card than that for me to spend money on a GTX 470 or GTX 480 over the 5870 or 5850.
And secondly, if you're buying a 200-series nVidia card or the GTX 480, it isn't fast enough to future-proof your computer. You might as well spend less money on a 5970 or a single 5870; you know it will last for the next 2 years, and the GTX 480 will NOT last any longer than the 5000 series with its 10-15% performance increase. I didn't like the 200-series nVidia cards, and I'm not interested in even MORE power-hungry cards than those. I want less power-hungry cards and efficiency. To me a game plays bugger-all different at a 60 FPS average versus a 100 FPS average. If you have a 200-series card, save your money and wait for the next gen of cards, or at least wait till a DX11 game actually comes out, not just Just Cause friggin' 2...
OK, all these cards are nice, and new technology is very welcome, but where are the games to push them? If I spent $400 or $500 on a new card, where would I see a really big difference over my old 8800GT? They sell hardware without software to support it... 2 or 3 games make no difference to me. The PS3 and Xbox 360 have very old graphics chips compared to the ATI 5800 series and nVidia 400 series, and still the games look beautiful, in some cases much better than on PC... Make new games for PC and then I will buy a new card! Until then I will stick with my Xbox 360...
I have been running the GTX 295. The plan was to buy a second GTX 295, but looking at the prices, I'm now thinking about just buying two GTX 470s. What's the better move?
Ryan, what was the special sauce you used to get Badaboom working on Fermi? My GTX 460 won't run it, and Elemental's website says Fermi support won't get added until Q4 2010. http://badaboomit.com/node/507
I have the same problem on my GTX 480: Badaboom does not work on Fermi. Based on my own experience and Badaboom's official website, I don't think these benchmarks are accurate.
I bought a GTX 480 based on this review, as I do a considerable amount of video converting, just to find out that despite the GTX 480 showing very good results with Badaboom here, the truth is Badaboom is not yet compatible with any GTX 400 series card, according to the Badaboom website.
The Badaboom website says it does not work, and when I try it with the GTX 465 it does not work. How were you able to get it to work? I have the latest NVIDIA release drivers as of today and the latest released version of Badaboom.
Please do yourselves a favour: do not believe every paid review you see, and listen to actual users. In an unprecedented move of arrogance, nVidia has intentionally crippled the entire 400 series' performance for anything but mainstream games. When 8000-series cards handily outperform the new 400 series, we have a problem. Check the following user discussions, and especially the last one, from a developer.
Headfoot - Monday, March 29, 2010 - link
Great review, great depth, but not too long; concise but still enough information. THANK YOU SO MUCH FOR INCLUDING MINIMUM FRAME RATES!!! IMO they contribute the most to a game feeling "smooth".
niceboy60 - Friday, August 20, 2010 - link
This review is not accurate. Badaboom is not compatible with GTX 400 series cards yet; however, the test results were posted anyway. I have a GTX 480 and it does not work with Badaboom; Badaboom's official site confirms that.
slickr - Sunday, March 28, 2010 - link
I thought that after the line-up of games thread, you would really start testing games from all genres, so we can actually see how each graphics card performs in different scenarios. Right now you have 80% first-person shooters, 10% racing/action-adventure, and 10% RPG and RTS.
Where are the RTS games, isometric RPG's, simulation games, etc?
I would really like Battleforge thrown out and replaced by Starcraft 2, DOW 2: Chaos Rising, Napoleon Total War. All these RTS games play differently and will give different results, and thus better knowledge of how graphic cards perform.
How about also testing The Sims 3, NFS:Shift, Dragon Age Origins.
Ryan Smith - Monday, March 29, 2010 - link
Actually DAO was in the original test suite I wanted to use. At the high end it's not GPU limited, not in the slightest. Just about everything was getting over 100fps, at which point it isn't telling us anything useful. The Sims 3 and Starcraft are much the same way.
Hsuku - Sunday, March 28, 2010 - link
On Page 9 of Crysis, your final sentence indicates that SLI scales better than CF at lower resolutions, which is incorrect from your own graphs. CF clearly scales better at lower resolutions when video RAM is not filled:
@ 1680x1050
480 SLI -- 60.2:40.7 --> 1.48
5870 CF -- 53.8:30.5 --> 1.76 (higher is better)
@ 1920x1200
480 SLI -- 54.5:33.4 --> 1.63
5870 CF -- 46.8:25.0 --> 1.87 (higher is better)
This indicates the CF technology scales better than SLI, even if the brute performance of the nVidia solution comes out on top. This opposes diametrically your conclusion to page 9 ("Even at lower resolutions SLI seems to be scaling better than CF").
(Scaling ability is a comparison of ratios, not a comparison of FPS)
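For anyone who wants to check the arithmetic, here is the ratio math in a few lines of Python; the FPS pairs are the ones quoted above (which, per Ryan's reply below, are the minimum-FPS numbers rather than the averages):

```python
# Scaling factor = dual-GPU FPS / single-GPU FPS (higher is better).
results = {
    "480 SLI @ 1680x1050": (60.2, 40.7),
    "5870 CF @ 1680x1050": (53.8, 30.5),
    "480 SLI @ 1920x1200": (54.5, 33.4),
    "5870 CF @ 1920x1200": (46.8, 25.0),
}

for setup, (dual, single) in results.items():
    print(f"{setup}: {dual / single:.2f}x scaling")

# CF posts the higher ratio at both resolutions even though SLI posts
# the higher raw FPS -- scaling and raw speed are separate claims.
```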
Ryan Smith - Monday, March 29, 2010 - link
You're looking at the minimums, not the averages.
Hsuku - Tuesday, March 30, 2010 - link
My apologies, I was looking at the wrong graphs. However, even so, your assertion is still incorrect: at the lowest listed resolution, CF and SLI scaling are tied.
Ryan Smith - Wednesday, March 31, 2010 - link
Correct. The only thing I really have to say about that is that while we include 1680 for reference's sake, for any review of a high-end video card I'm looking nearly exclusively at 1920 and 2560.
Hrel - Thursday, September 2, 2010 - link
I get that, to test the card. But if you don't have a monitor that goes that high, it really doesn't matter. I'd really like to see 1080p thrown in there. 1920x1080; as that's the only resolution that matters to me and most everyone else in the US.
Vinas - Sunday, March 28, 2010 - link
It's pretty obvious that AnandTech was spanked by nVIDIA the last time they did a review. No mention of the 5970 being superior to the 480 is a little disturbing. I guess the days of "trusting AnandTech" are over. Come on guys, not even a mention of how easily the 5870 overclocks? The choice is still clear: dual 5870's with full cover blocks FTW!
ol1bit - Thursday, April 1, 2010 - link
I thought it was a fair review. They talked about the heat issues, etc. You can't compare a 2-GPU card to a single-GPU card. If they ever make a 2-core GF100, I'm sure AnandTech will do a review.
IceDread - Tuesday, April 6, 2010 - link
You are wrong. You can and you should compare single-GPU cards with multi-GPU cards. It does not matter if a card has one or 30 GPUs on it; it's the performance / price that matters.
These nVidia cards are very expensive in performance / price compared to the ATI cards, simple as that. It's obvious that nVidia dropped the ball with their new flagship. You even need two cards to be able to use three screens.
This is bad for us customers; we are not getting any price pressure at all. These nVidia cards do not improve the market, since they cannot compete with the ATI cards; only nVidia fans will purchase them, or possibly some people working in graphics.
I hope nVidia does better with their next series of cards, and I hope that won't take too long, because ATI will most likely release a new series in half a year or so.
xxtypersxx - Sunday, March 28, 2010 - link
I will be interested in seeing the performance gains that will likely come from revised Nvidia drivers in a month or two. In some of the tests the gtx470 is trading blows with the gtx285 despite having nearly double the compute power... I think there is a lot of room for optimization.
I am no fanboy and even owned a 4850 for a while, but Nvidia's drivers have always been a big decision factor for me. I don't get any of the random issues that were common on Catalyst, and aside from the occasional hiccup (196.67 G92 fan bug) I don't worry about upgrades breaking things. I admit I don't know if all the 5xxx series driver issues have been fixed yet, but I do look forward to driver parity; until then I think raw performance is only part of the equation.
GourdFreeMan - Sunday, March 28, 2010 - link
Ryan, have you checked performance and/or clocks to see if any of the cards you are testing are throttling under FurMark? I recall you mentioning in your 58xx review that ATi cards can throttle under FurMark to prevent damage, and while most of the power numbers look normal, I notice a few of the cards are consuming less power under FurMark than Crysis, unlike the majority of the cards, which consume considerably more power running FurMark than Crysis...
MojaMonkey - Sunday, March 28, 2010 - link
I can turn off one light in my house and remove the power consumption difference between the GTX480 and the 5870.
I thought this was an enthusiast site?
I lol irl when people talk about saving 100 watts and buying a 5870. So you save 100 watts but build a 700-watt system? Are you saving the planet or something?
I think nVidia is smart: if you fold, use CUDA, or need real-time 3D performance from a Quadro, you will buy this card. That is probably a large enough market for a niche high-end product like this.
PS: 5870 is the best gaming card for the money!
Paladin1211 - Sunday, March 28, 2010 - link
No, the 5850 is.
P.S. I misclicked Report instead of Reply, so please ignore it T_T
kallogan - Sunday, March 28, 2010 - link
Seriously, I wonder who'd want GPUs this power-hungry, noisy, and hot... Nvidia is out on both the mobile and desktop markets. The only pro for Nvidia I can see is the 3D support.
beginner99 - Sunday, March 28, 2010 - link
This is kind of bad for consumers: zero pressure on ATI to do anything, from lowering prices to anything else. They can just lie back and work on the next gen.
Well, that at least made my decision easy: build now or wait for Sandy Bridge. I will wait. Hopefully the GPU market will be nicer then too (hard to be worse, actually).
C5Rftw - Sunday, March 28, 2010 - link
I was waiting for the Fermi cards to come out before my next high-end build (looking for price drops), but I actually did not expect this card to be this fast. The GTX480 is ~15% faster than the 5870, but for $100 more, and it is just going to be a card for Nvidia loyalists; the 5870 will probably drop just a little, if at all. The 5850 and 5830 should drop $25-50, hopefully more (2x 5850 at ~$250 each would be FTW). Now, would I like to have a Fermi? Well yeah, for sure, but I would much rather have a 5870 and add another down the road. A GTX 480 uses as much power as, if not more than, two 5870s.

Now, this reminds me of the last generation of P4s, or as we know them, the Prescotts. Basically, Nvidia's huge-chip approach, with admittedly impressive performance, was just the wrong approach. Their next gen, if based on the same "doubling" of SPs/CUDA cores, would easily draw 300W+ and almost require water cooling, because the next TSMC process is going to be 32nm, and that will not allow them to "cut the chip in half." ATI's approach, started with the 4000 series, has proven to be a much better and more efficient design. I think they could make a 6870 on 40nm TSMC right now, though of course it would be a hot chip. When the 32nm TSMC fabs are running, Nvidia will have to redesign their chips. And with how hot the GTX 480 is, I don't see how they could make a GTX 495. Also, the 5890 is right around the corner, and that should be the final punch to KO Nvidia in this GPU generation. On a side note, thank goodness there is some healthy competition, or AMD might pull what Nvidia did and rebrand the 8800 five or six times.
Belard - Sunday, March 28, 2010 - link
Keep in mind, the GeForce 480 ("GTX" means nothing; see any GTX 210 or GT 285?) is already the most power-hungry card on the market, at just under 300 watts under full load... if the GF480 had all 512 CUDA cores running and were clocked higher, the card would easily surpass 300 watts!
This in turn means MORE heat, more power, more noise. There are videos of the 480/470s and the ATI cards; the 480's fan runs very fast and loud to keep it under 100C, about 2~3 times hotter than a typical CPU.
We will see the ATI 6000 series on 40nm, but it may not be with TSMC.
If the upcoming 5890 is 15% faster and can sell for $400~450, that would put some hurt on the GF480.
Not sure how/why ATI would do re-branding. The 4670 is almost like a 3870, but is easily a more advanced and cheaper GPU. The bottom end GPUs have all changed. 2400 / 3450, 4350, 5450 - all different.
Nvidia has been doing re-branding for quite a long time. The GF2 MX was re-branded as the GF2 MX 400 (these were bottom-end $150~190 cards in 2001), and then for some bone-headed reason, during the GF6 era, they brought back the GF2 MX but added DX8. Huh? Add a feature to an OLD bottom-end GPU?
The GF2-TI came out when GF3-TI series was launched... they wanted "TI" branding. The GF2-TI was a rebranded GF2-Pro with a slight clock upgrade.
Then came the first big branding/feature fiasco with Nvidia. The GF3 was the first DX8 card. Then the GF4 series came out. The GF4 Ti models were the high end, but the MX series were nothing more than GF2s (DX7) with optional DVI... to take care of the low end and shove the letter names to the front.
GF4 mx420 = GF2mx, but a bit slower.
GF4 mx440 = GF2 Pro/TI
GF4 mx460 = a faster DX7 card, but it was only about $20~35 cheaper than the GF4 Ti 4200, a DX8 card. The Ti4200 was a #1 seller at about $200. Some of the 440SE & 8x models had 64-bit or 128-bit RAM... ugh.
Then they had fun with the Ti series when AGP 8x came out... NEW models! Even though the cards couldn't max out the AGP 4x bus. Even the later ATI 9800 Pro only ran 1~3% faster with AGP 8x.
GF4 Ti 4200 > GF4 Ti 4200 8x
GF4 Ti 4400 > GF4 Ti 4800 SE
GF4 Ti 4600 > GF4 Ti 4800
Yep, same GPUs... new names. Some people would upgrade to nothing, or worse: some even went from the 4600 to the 4800SE, which was a downgrade!
GF5 5500 = 5200
Since the GF5... er "FX" series, Nvidia kept the DX# and feature set within the series. All GF5 cards are DX9.
But the 5200s were a joke. By the time they hit the market at $120, the Ti4200s were also $120 and the GF4 MX had been reduced to $30~60. But the 5200 had HALF the performance of a 4200. People actually thought they were upgrading... returns happened.
Funny story: a person bought a "5200" at Walmart and was confused by the POST display reading "4200". Luckily he posted about it to us on the internet; we laughed our butts off! What happened? Buy and switch: someone bought a 5200, took it home, switched cards, and took it back to Walmart for a refund. Hey, usually it's a brick or a dead card, etc.; this guy got a used card, but a much better product.
Just as the ATI 5450 is too slow for DX11 gaming today, the GF5200 was horrible for DX9 back in 2003! The 5200 is still sold today; it's the only thing left.
Pretty much the entire GF5 series was utter garbage. Four versions of the GF5600 ($150~200) were slower than the previous $100 Ti 4200. It was sick. This allowed ATI to gain respect and market share with their 9600 & 9700 cards. The GF5700 series (2 of the 5 variants) were good Ti4200 replacements, and the 5900 went up against the ATI 9800. I've owned both.
Since then, ATI pretty much had the upper hand in performance throughout the GF6 & GF7 era. AMD bought out ATI, then the GF8 and Core 2 wiped out ATI/AMD with faster products.
While ATI had the faster cards over the GF6/7 during the DX9.0c era (really, MS? Couldn't make 9.1, 9.2?), Nvidia *HAD* the lower-end market. The GF6600 and 7600GT were $150~200 products; ATI products in that price range were either too slow or cost too much.
With the GF 8800 & 8600s, ATI lost the high- and mid-range markets. The HD 2000 series was too expensive, too hot, and not fast enough (sound familiar?). The ATI 3000 series brought ATI back to a competitive position where it counted. Meanwhile, Nvidia has milked the G92~96 for the past 2+ years. They are code-name & model-number crazy.
As long as ATI continues doing engineering and management this way, nVidia will continue to be in trouble for a long time unless they get their act together or count on the server market to stay in business.
End of short history lesson :0
Sunburn74 - Sunday, March 28, 2010 - link
It is absolutely ridiculous. Like having a buzzsaw in your case.
http://www.hardocp.com/article/2010/03/26/nvidia_f...
Belard - Sunday, March 28, 2010 - link
AMD has already stated they are not reducing their prices any time soon. This is because their line-up is far healthier than Nvidia's.
They know (and we should know) that the $500/$350 prices for the new GeForce 400 cards are not going to stick. There are only so many thousands of cards available for the next 3~5 months; the supply will run dry in 1-3 weeks, I bet. We're going to see pricing shoot up close to $600 for the GF480, and the fanboys will be willing to pay that price.
The 5850 was supposed to be a $250 card; we saw how well that worked out. While the price settled around $300, the 5850 was still a better value than the $370~400 GeForce 285, as it was far faster and ran cooler, etc. The 5870 is typically faster than the $500 GeForce 295 for $100 less. ATI has no reason to lower their prices.
The GeForce 265~295 cards are already being phased out, too slow, cost too much.
So nVidia has nothing for the sub-$300 market... nothing. Only the GTS 250 has any value, and it's a tad expensive; it should be $100, since it's still a re-badged DX10 9800GTX.
So when ATI feels any pressure from Nvidia, they can easily drop their prices. A wafer costs ~$5000 no matter how many dies are on it. It may be costing nVidia $150~200 per chip, while AMD could be paying $20~35 per chip for the 5800/5900s.
Then you add the costs for memory, the PCB, parts, the cooling system, etc.
It is very easy for AMD to drop $50 per GPU and still make a profit, while Nvidia would be selling their GeForce 400 cards at a loss or at no profit.
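The per-die numbers follow from simple wafer math. A sketch below: the $5,000/wafer figure is the commenter's, while the die counts and yields are illustrative guesses of mine (not disclosed TSMC/NVIDIA/AMD data) chosen to show how a big die blows up per-chip cost:

```python
# Per-die silicon cost = wafer cost / (gross dies per wafer * yield).
WAFER_COST = 5000  # commenter's per-wafer figure

def cost_per_good_die(gross_dies, yield_rate):
    return WAFER_COST / (gross_dies * yield_rate)

# A ~529 mm^2 GF100 fits roughly 100 candidates on a 300 mm wafer;
# a ~334 mm^2 Cypress fits roughly 170, and smaller dies yield better.
print(f"GF100-ish:   ${cost_per_good_die(104, 0.30):.0f} per good die")  # ~$160
print(f"Cypress-ish: ${cost_per_good_die(170, 0.85):.0f} per good die")  # ~$35
```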
Selling the 5830 at $190~200, the 5850 at $250, and the 5870 at $325~350 would help ATI's sales and keep nVidia at bay.
We'll see...
SirKronan - Sunday, March 28, 2010 - link
I would've liked to see 5850s in Crossfire thrown into this mix. I know you don't have time to test them all, but I think that's the key competitor against the 480 when it comes to bang/buck. I would think 5850s in Crossfire could handily beat the 295 and the 480, all while consuming less power. I believe another site may have done it, but with this excellent and very thorough review, it would've been that tiniest bit sweeter to have a 5850 Crossfire line on the graphs.
Regardless, thanks for the informative review!
B3an - Saturday, March 27, 2010 - link
This review and any other pages on this site are not working in Firefox... they have been reported as attack pages.
"This web page at www.anandtech.com has been reported as an attack page and has been blocked based on your security preferences."
This is with FF at all default settings, just using Adblock.
Could it be trouble with the ads again serving malware or a virus?
Ryan Smith - Sunday, March 28, 2010 - link
We know. It's being worked on.
Sunburn74 - Saturday, March 27, 2010 - link
The 5850 idles in the mid-30s, and it does so while doing almost nothing to stay cool, operating at about 20% max fan speed. Under load it may go up to 30% fan speed, but it rarely ever breaks the 40% mark.
What are the approximate idle and load fan speeds for the GTX 480 and 470? I'm asking to understand how much extra cooling headroom is innately available. Are these cards working at max capacity to keep cool, or is there thermal/fan headroom to be had?
Ryan Smith - Saturday, March 27, 2010 - link
I don't have that data on-hand (we don't record fan speeds), but it's something that I should be able to easily grab at a later time.
Lemonjellow - Saturday, March 27, 2010 - link
For some reason Chrome is flagging this article as a malicious site... Oddness... Possibly a bad advertisement...
NJoy - Saturday, March 27, 2010 - link
Well, Charlie was semi-accurate, but quite right =)) What a hot chick... I mean, literally hot. Way too hot.
WiNandLeGeNd - Saturday, March 27, 2010 - link
Looking back at the data, I realized that the power consumption figures are for the system total. Guru3D measured the power consumption of the card itself and reported a max of 263W, so roughly 22A. I think my 850W will do just fine, since each 12V rail feeding the PCIe connectors is rated at 20A.
WiNandLeGeNd - Saturday, March 27, 2010 - link
I think this was a great review; as mentioned previously, very objective. I think I may get a 480 though, because when I buy a card I keep it for 3 to 4 years before I get a new one, i.e. every other generation. And seeing that tessellation is really the gift horse of DX11, and how much more tessellation power is in the 480, I think it could very much pay off in the future. If not, then I spent an extra $85 for a tad extra performance, as I just pre-ordered one for $485 and the 5870s are still at $400.
My only concern is heat and power, but most of the cards have a lifetime warranty. Hopefully my OCZ GamerXtreme 850W can handle it at max load. The two 12V rails for the two 6-pin PCIe connectors are 20A each. I saw 479W max consumption, but that was FurMark; at 12V that's roughly 40 amps, so it would be extremely close if a game ever drew that much power. Although, if I recall, ATI specifically said a while back not to use FurMark, as it pushes loads that aren't possible in an actual game; I think they had an issue with 4000-series cards burning out power regulators. Correct me if I'm wrong.
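The amperage figures in these two comments are easy to verify with I = P / V — a sketch, keeping in mind the caveat from the follow-up above that the 479 W number is whole-system draw at the wall:

```python
# Current drawn from the 12 V rails: I = P / V.
def amps_at_12v(watts):
    return watts / 12.0

print(f"{amps_at_12v(263):.1f} A")  # Guru3D's 263 W card-only figure -> ~21.9 A
print(f"{amps_at_12v(479):.1f} A")  # the 479 W FurMark reading -> ~39.9 A

# Since 479 W is total system draw, the card alone pulls well under
# 40 A -- comfortably within two 20 A rails plus the PCIe slot's power.
```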
Alastayr - Saturday, March 27, 2010 - link
I'm with Sunburn on this one; your reasoning doesn't make much sense. You must not have followed the GPU market for the last few years, because:
first) "every other gen" would mean a 2-year cycle
second) Nothing's really gonna pay off in the future, as the future will bring faster cards for a fraction of the price. You'd only enjoy those questionable benefits until Q4, when AMD releases Northern Islands and nVidia pops out GF100b or whatever they'll call it.
third) Tessellation adoption won't ramp that fast. If anything, developers will target the lowest common denominator, which would be Cypress. Fermi's extra horsepower will most likely go unused.
fourth) Just look at your power bill. The 25W difference with a "typical" idle scheme (8h/day, 350d/y) comes to 70 kWh, which where I live translates to around $20 per year. That's idle *only*. You're spending way more than just $85 extra on that card.
fifth) The noise will kill you. This isn't a card that just speeds up for no reason. You can't magically turn the fan down from 60% to 25% and still enjoy temps of <90°C like on some GTX 260 boards. Turn your current fan up to 100% for a single day and try living through that. That's probably what you're buying.
In the end everyone has to decide this for himself. But for someone to propose keeping a GTX 480 in his PC for a whopping 3-4 years... I don't know man. I'd rather lose a finger or two. ;)
tl;dr, I know, I know. But really, people: those cards aren't hugely competitive, they're priced too high, and nV's drivers suck as much as ATi's (allegedly) do nowadays. Which is to say, neither do.
I could honestly kick myself right now. I had a great deal on a 5850 in November, and I waited for nV to make their move. Now the same card will cost me $50 more, and I've only wasted time waiting for the competitive GTX 470 that never was. Argh.
Sunburn74 - Saturday, March 27, 2010 - link
That's kind of bad logic, IMO. I'm not a fanboy on either side, but it's clear to me that Nvidia targeted the performance of their cards to fit exactly between the 5970, the 5870, and the 5850. It's much harder to release a card not knowing what the other guy truly has, as opposed to releasing a card knowing exactly what performance levels you have to hit.
Two, realistically, think of the noise. If you've ever heard a GTX 260 at 100 percent fan speed, that's the sort of fan noise you're going to be experiencing on a regular basis. It's not a mild difference.
And three, realistically for the premium you're paying for the extra performance (which is not useful right now as there are no games to take advantage of it) as well as for the noise, heat and power, you could simply buy the cheaper 5870, save that 85-150 dollars extra, and sell off the 5870 when the time is right.
I just don't see why anyone would buy this card unless they were specifically taking advantage of some of the compute functions. As a consumer card it is a failure. Power and heat be damned; the noise, the noise! Take your current card up to 100 percent fan speed and listen to it for a few minutes; that's about what you should expect from these GPUs.
andyo - Saturday, March 27, 2010 - link
I too am getting the warning message with Firefox 3.6.2. Posting this on IE. Here's the message:
http://photos.smugmug.com/photos/820690277_fuLv6-O...
JarredWalton - Saturday, March 27, 2010 - link
We're working on it. Of course, the "Internet Police" have now flagged our site as malicious because of one bad ad that one of the advertisers put up, and it will probably take a week or more to get them to rescind the "Malware Site" status. Ugh...
jeffrey - Saturday, March 27, 2010 - link
Give the advertiser that put up the bad ad hell!
LedHed - Saturday, March 27, 2010 - link
The people who are going to buy the GTX 480/470 are enthusiasts who most likely bought the GTX 295 or ran 200-series SLI, so not including the 295 in every bench is kind of odd. We need to see how the top end of the last gen does against the new-gen top end.
Ryan Smith - Saturday, March 27, 2010 - link
What chart is the 295 not in? It should be in every game test.
kc77 - Saturday, March 27, 2010 - link
Well the 295 beats the 470 in most benches so there's no need to really include it in all benches. Personally I think the 480 is the better deal. Although I am not buying those cards until a respin/refresh, those temps and power requirements are just ridiculous.bigboxes - Saturday, March 27, 2010 - link
I know you "upgraded" your test PSU to the Antec 1200W PSU, but did you go back and try any of these tests/setups with your previous 850W PSU to see if could handle the power requirements. It seemed that only your 480 SLI setup drew 851W in total system in the Furmark load test. Other than that scenario it looks like your old PSU should handle the power requirements just fine. Any comments?Ryan Smith - Saturday, March 27, 2010 - link
Yes, we did. We were running really close to the limits of our 850W Corsair unit. We measured the 480 SLI system at 900W at the wall, which after some power efficiency math comes out to around 750-800W of actual load. At that load there's less headroom than we'd like to have.
Just to add to that, we had originally been looking for a larger PSU after talking about PSU requirements with an NVIDIA partner, as the timing required that we secure a new PSU before the cards arrived. So Antec had already shipped us their 1200W PSU before we could test the 850W, and we're glad, since we would have been cutting it so close.
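Ryan's "power efficiency math" is presumably the usual wall-to-DC conversion. A sketch below; the 83-89% efficiency band is my assumption for a quality PSU at high load, not a figure from the review:

```python
# Actual DC load delivered by the PSU = wall draw * PSU efficiency.
WALL_DRAW_WATTS = 900  # measured GTX 480 SLI system draw at the wall

for efficiency in (0.83, 0.86, 0.89):
    dc_load = WALL_DRAW_WATTS * efficiency
    print(f"{efficiency:.0%} efficient -> {dc_load:.0f} W DC load")

# That brackets the 750-800 W estimate -- uncomfortably close to the
# continuous rating of an 850 W unit.
```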
bigboxes - Sunday, March 28, 2010 - link
Appreciate the reply.
GullLars - Saturday, March 27, 2010 - link
OK, so the 480 generally beats the 5870, and the 470 generally beats the 5850, but at higher prices, temperatures, wattage, and noise levels. What about the 5970?
As far as I can tell, the 5970 beat or matched the 480 in all tests, draws less power, runs cooler, and makes less noise. The price isn't that much more, either.
It seems fairer to me to compare the 480 with the 5970, as both are the fastest single-card (as in one PCIe slot) solutions and are close in price and wattage.
I would also like to see what framerates FPS games get at gamer settings (1680x1050 and 1920x1200), and if the average is higher than the game's cutoff or tickrate, what the minimum FPS is, and how much eye candy you can add before the average drops below the cutoff/tickrate or the minimum drops below acceptable (30).
The reason gamers sacrifice visuals for high FPS comes down to game flow and latency. If FPS is below the game tickrate, you get latency. For many games the tickrate is around 100 (100 game-engine updates per second). At 100 FPS you have 10ms between frames; if it drops to 50 you have 20ms, and at 25 you have 40ms. Below 25-30 FPS the game is virtually unplayable anyway, since aiming becomes hard, so added latency beyond that point is moot. In multiplayer games, this is added to your network latency. As most gamers know, latency below 30ms is generally desired, above 50ms starts to matter, and above 100ms is very noticeable. If you are on a bad connection (or have a bad connection to the server), 20-30ms of added latency starts to matter even if it isn't visually noticeable.
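The frame-interval arithmetic in that comment tabulates easily — a sketch of the commenter's own formula (interval = 1000 ms / FPS), with the ~100 Hz tickrate taken from the comment's example:

```python
# Frame interval in milliseconds: 1000 / FPS. Below the engine
# tickrate, the renderer adds latency on top of any network latency.
TICKRATE_HZ = 100  # the comment's example tickrate

for fps in (100, 50, 25):
    interval_ms = 1000 / fps
    flag = "" if fps >= TICKRATE_HZ else "  (below tickrate)"
    print(f"{fps:3d} FPS -> {interval_ms:4.0f} ms between frames{flag}")

# 100 FPS -> 10 ms, 50 -> 20 ms, 25 -> 40 ms, matching the comment.
```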
bigboxes - Saturday, March 27, 2010 - link
Anyone else getting that message? I finally had to turn off the 'attack site' option in FF. It wasn't doing this last night. It's not doing it all over AT, just on the GTX 480 article.
GullLars - Saturday, March 27, 2010 - link
Here too; it listed, among others, googleanalytics.com as a hostile site. It was probably because NVidia wasn't happy with the review XD
(just joking ofc)
chrisinedmonton - Saturday, March 27, 2010 - link
Great article. Here's a small suggestion: temperature graphs should be normalised to room temperature rather than to 0C.
GourdFreeMan - Sunday, March 28, 2010 - link
I agree. Temperature graphs should either be normalized to the ambient environment or to absolute zero; any other choice of basis is completely arbitrary.
Ahmed0 - Saturday, March 27, 2010 - link
Uh oh, my browser just got a heartwarming warning when I clicked on this article; it said the page might infect my computer badly and that I should definitely run home faster than my legs can carry me.
So, what's up with that?
Lifted - Saturday, March 27, 2010 - link
I just got that too. Had to disable the feature in Firefox.
GTaudiophile - Saturday, March 27, 2010 - link
Methinks Cypress really blindsided nVidia. And then, on top of it being such an efficient chip, you throw in Eyefinity, audio over HDMI, and all the other features. Talk about a smackdown.
AnnihilatorX - Saturday, March 27, 2010 - link
Page 2: "Finally we bad news: availability. This is a paper launch;"
simtex - Saturday, March 27, 2010 - link
With the current console generation being the primary focus of game developers, I find it hard to believe that tessellation will get a big breakthrough anytime soon. With the next-gen consoles it will come, but that is a few years from now, and hopefully by then we'll have seen at least one new generation of GPUs.
viewwin - Saturday, March 27, 2010 - link
I would like a test to see how the new cards do at video encoding.
Philip123 - Saturday, March 27, 2010 - link
These things are not "single slot cards" They are double slot. They take 2 slots. No review should be published without pointing out performance per watt. If you dont publish performance per dollar which includes the 100 watt premium over 3 years you are not doing your job. Only and idiot would buy anything from nvdia. You really think anyone is going to want fan noise from these monstrosities anywhere near them?SHAME SHAME SHAME.
Throw this bullshit in the garbage and tell nVidia to f-off until it releases an actual computer graphics product instead of a space heater.
AnnonymousCoward - Saturday, March 27, 2010 - link
You are so wrong. You talk about dual-slot cards as if that's a bad thing, when it's the best design currently possible: it allows for efficient cooling without much fan noise, and the heat goes outside your case. Plus, AMD's 5870 & 5850 are also dual-slot!
"No review should be published without pointing out performance per watt" - what gamer cares about that? That's a concern for server farmers!
529th - Saturday, March 27, 2010 - link
I think people would be interested in seeing these cards overclocked, and also seeing the 470 in SLI.
cobra32 - Saturday, March 27, 2010 - link
So Nvidia's fastest card is 11% faster than AMD's mid-level card, the 5870, while AMD's top card, the 5970, is a lot faster than the GTX 480. Don't give me "the 5970 is a two-chip card and cannot be compared to a single-chip card." Sorry guys, the 5970 takes up one slot just like the GTX 480, and it's faster and consumes less energy while moving things on screen faster. If I have 3 slots on my motherboard, I can have 6 video chips with ATI, while with an nvidia setup I can have only 3 at most, at least until Nvidia has a two-chip version, which looks impossible with this power-hungry design. ATI has the top single card you can buy, two chips or not. It took Nvidia 6 months, and you still cannot buy one; paper launches suck. I have bought several Nvidia cards and liked them all, but this one really falls short. If I have one slot for my video card, ATI has the highest-performing card I can buy: the 5970. It's like asking whether I'd rather have a single-core or dual-core CPU; that's a no-brainer, two is always better than one.
Roland00 - Saturday, March 27, 2010 - link
Two things.
You can only have 4 GPUs in an nvidia or ati multi-GPU setup. That means two 5970s, not three (well, you can have 4 GPUs plus one for PhysX).
Second, Crossfire doesn't scale well past 2 cards, and SLI doesn't scale well past 3.
derrida - Saturday, March 27, 2010 - link
Thank you Ryan for including OpenCL benchmarks.
ReaM - Saturday, March 27, 2010 - link
I don't agree with the final words. The 480 is crap: already expensive, it adds a huge power-consumption penalty for only slightly better performance.
However (!), I see potential for future chips, and I can't wait for a Fermi Quadro to hit the market :)
Patrick Wolf - Saturday, March 27, 2010 - link
6 months and we get a couple of harvested, power-sucking heaters? Performance king, barely, but at what cost? The cards aren't even available yet. This is a fail.
This puts ATI in a very good place to release a refresh or revision and snatch away the performance crown.
dingetje - Saturday, March 27, 2010 - link
Exactly my thoughts, and IMO the reviewers are going way too easy on nVidia over this fail product (except maybe HardOCP).
cjb110 - Saturday, March 27, 2010 - link
You mention that both of these are cut-down GF100s, but you haven't done any extrapolation of what the performance of a full GF100 card would be.
We do expect a full GF100 gaming-oriented card, probably before the end of the year, don't we?
Is that going to be 1-9% quicker or 10%+?
Ryan Smith - Saturday, March 27, 2010 - link
It's hard to say, since we can't control every variable independently of the others. A full GF100 will have more shading, texturing, and geometry power than the GTX 480, but it won't have any more ROP/L2/memory.
This is going to depend heavily on what the biggest bottleneck is, possibly on a per-game basis.
SlyNine - Saturday, March 27, 2010 - link
Yeah, and I had to return two 8800GTs that burnt up. I will not buy a really hot-running card again.
poohbear - Saturday, March 27, 2010 - link
Oh, how the mighty have fallen. :( I remember the days of the 8800GT, when nvidia did a hard launch and released a cheap, excellent-performing card for the masses. With the Fermi release you would never know it's the same company. Such a disappointment.
descendency - Saturday, March 27, 2010 - link
I think the MSRP is lower than $300 for the 5850 ($259) and lower than $400 for the 5870 ($379). Just thought that was worth sharing.
I have to believe demand will now even out and price drops for the AMD cards can follow (if nothing else, the cards should return to MSRP now that competition is finally out). I would imagine the price gap between the GTX480 and the 5870 could be as much as $150 when all is said and done, maybe $200 initially, as this kind of release is almost always followed by supply problems (major delays and problems before launch = supply issues).
AnnonymousCoward - Saturday, March 27, 2010 - link
...for two reasons: power and die size.
So the 5870 and 470 appear to be priced similarly, while the 5870 beats it in virtually every game and uses 47W less at load! That is a TON of additional on-die power (like 30-40A?).
We saw this coming last year when Fermi was announced. Now AMD is better positioned than ever.
IVIauricius - Saturday, March 27, 2010 - link
I see why XFX started making ATI cards a few years ago with the 4000 series. Once again nVidia has made a giant chip that requires a high price tag to offset the cost of manufacturing and materials. The same thing happened a few years ago with the nVidia GTX 200 cards and the ATI 4000 cards: XFX realized they weren't making as much money as they'd like on GTX 200 cards and started producing more profitable ATI 4000 cards.
I bought a 5870 a couple months ago for $379 at Newegg with a promotion code. I plan on selling it, not to upgrade but to downgrade. A $400 card doesn't appeal to me anymore when, as many posters have mentioned, most games don't take advantage of the amazing performance these cards offer. I only play games like MW2, Borderlands, Dirt 2, and Bioshock 2 at 1920x1080, so a 4870 should suffice for another year. Maybe then I'll buy a 5850 for ~$180.
First post, hope I didn't sound too much like a newbie.
-Mauro
Headfoot - Monday, March 29, 2010 - link
Unless you are an insider, all of this "profitability" speculation is just that: useless speculation.
The reason they make both companies' chips is more likely diversification: if one company does poorly one round, they don't go down with it. I'd hate to have been making ATI chips during the 2900XT era, and I'd hate to have been making nVidia chips during the FX 5800 era.
blindbox - Saturday, March 27, 2010 - link
I know this is going to take quite a bit of work, but can't you colour the main cards and their competition in this review? By main cards, I mean the GTX 470 and 480 and the 5850 and 5870. It's hard to make comparisons otherwise. I'm sure you guys have done this before... I think.
It's funny how you guys only coloured the 480.
iwodo - Saturday, March 27, 2010 - link
If I remember correctly, Nvidia makes nearly 30-40% of its profits from Tesla and Quadro, yet Tesla and Quadro account for only 10% of its total GPU shipment volume, or 20% if we count only desktop GPUs.
That means Nvidia is selling those perfect-grade 512-shader Fermi chips into its most profitable market and binning the rest down into the GTX 480 and GTX 470. While Fermi did not produce the explosion of HPC sales we initially expected, due to heat and power issues, judging by pre-order numbers Nvidia still has quite a lot of orders to fulfill.
The best part is that we get another die shrink in late 2010 / early 2011, to 28nm (it is actually slated for volume production in Q3 2010). This should bring lower power and heat. Hopefully the next update also gets a much better memory controller; a 256-bit controller with 6Gbps+ GDDR5 should offer enough bandwidth while yielding better than a 384-bit controller.
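The bandwidth trade-off described there checks out with the standard formula, bandwidth = (bus width / 8) × effective data rate. A sketch: the 384-bit @ ~3.7 Gbps configuration is the shipping GTX 480's, while the 256-bit @ 6 Gbps case is the commenter's hypothetical:

```python
# GDDR5 bandwidth in GB/s = (bus width in bits / 8) * effective Gbps.
def bandwidth_gb_s(bus_bits, gbps):
    return bus_bits / 8 * gbps

print(f"GTX 480, 384-bit @ 3.7 Gbps:   {bandwidth_gb_s(384, 3.7):.0f} GB/s")
print(f"Hypothetical 256-bit @ 6 Gbps: {bandwidth_gb_s(256, 6.0):.0f} GB/s")

# ~178 vs 192 GB/s: a narrower bus with faster GDDR5 would indeed
# match or exceed the shipping configuration's bandwidth.
```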
Fermi may not be exciting now, but it will be in the future.
swing848 - Saturday, March 27, 2010 - link
We are not living in the future yet.
When the future does arrive, I expect there will also be newer, better hardware.
Sunburn74 - Saturday, March 27, 2010 - link
So how do you guys test temps? It's not specifically stated. Are you using a case? An open bench? Readings from a temp meter, or system readings from Catalyst or the nVidia control panel? Please enlighten us. It's important because people eventually have to extrapolate your results to their own scenarios, which involve cases of various designs; 94 degrees measured inside a case is completely different from 94 degrees measured on an open bench.
Also, why are people talking about switching sides and families? Just buy the best card available, in your opinion. It's not like ATI and Nvidia are feeding you, clothing your kids, and paying your bills. They make GPUs: something you plug into a case and forget about if it's working properly. I just don't get it :(
Ryan Smith - Saturday, March 27, 2010 - link
We're using a fully assembled and closed Thermaltake Spedo, with a 120mm fan directly behind the video cards feeding them air. Temperatures are usually measured with GPU-Z, unless for some reason it can't grab the temps from the driver.
hybrid2d4x4 - Saturday, March 27, 2010 - link
Thanks for elaborating on the temps, as I was wondering about that myself. One other thing I'd like to know is how the VRM and RAM temps are on these cards; I'm assuming the reported values are for just the core.
The reason I ask is that on my 4870 with aftermarket cooling and the fan set pretty low, my core always stayed well below 65, while the RAM went all the way up to 115 and the VRMs up to ~100 (I have obviously increased fan speeds since the RAM temps were way too hot for my liking; they now peak at ~90).
Ryan Smith - Saturday, March 27, 2010 - link
Correct, it's just the core. We don't have VRM temp data for Fermi. I would have to see if the Everest guys know how to read it, once they add support.
shiggz - Friday, March 26, 2010 - link
I am just not interested in a card with a TDP over 175W. When I upgraded from an 8800GT to a GTX 260 it was a big jump in heat and noise, and definitely at my tolerance limit during the summer months. I found myself under-clocking a card I had just bought.
175W max (though 150W is preferred) at $250 and I am ready to buy. If NVIDIA won't make it, then I will switch back to ATI.
arjunp2085 - Friday, March 26, 2010 - link
"For dealing with suck fake geometry, Fermi has several new tricks."
Is that supposed to be "such"??
850 Watts for SLI.. man, the air conditioning for my room does not consume that much electricity.
Might have to go for an industrial connection to feed such high electricity consumption, lol.
Green Team NOT GREEN....
Leyawiin - Friday, March 26, 2010 - link
Guess I'll keep my GTX 260 for a year or so more and hope for better days.
hangfirew8 - Friday, March 26, 2010 - link
Launch FAIL. All this waiting, and a paper launch. They couldn't even manage the half-dozen cards per vendor at Newegg of some previous soft launches.
All this waiting, and a small incremental increase over existing card performance. High power draw and temps. High prices; at least they had the sense not to price it like the 8800 Ultra, which was a game changer: a big leap in performance, plus a new DX level, DX10.
I've been holding off buying until this launch; I really wanted nVidia to pull something off here. Oh, well.
softdrinkviking - Friday, March 26, 2010 - link
so by the time a "full" gf100 is available, how close will we be to the next gen AMD card? and how low will the prices on the 58XX series be?
this article never made an explicit buying recommendation, but how many people out there are still waiting to buy a gf100?
6 months is a long time.
after xmas and the post-holiday season, anybody on the fence about it (i.e. not loyal nvidia fans) probably just went for an amd card.
so the question (for a majority of potential buyers?) isn't "which card do i buy?", it's "do i need/want to upgrade from my 58xx amd card to a gf100?"
also, i'm curious to find out whether fermi can be scaled down into a low-profile card and offer superior performance in a form factor that relies so heavily on low temps and low power consumption.
the htpc market is a big money maker, and a bad showing for nvidia there could really hurt them.
maybe they won't even try?
shin0bi272 - Friday, March 26, 2010 - link
Great review as usual here at Anandtech. I would have thought that in your conclusions you would have mentioned that, in light of the rather lackluster 5% performance crown they now hold, it wasn't the best idea for them to disable 6% of their cores after all.
Why make a 512-core GPU, then disable 32 of them and end up with poorer performance, when you're already 6 months behind the competition, sucking up more juice, with higher temps and fan noise, and a higher price tag? That's like making the Bugatti Veyron and then disabling 2 of its 16 cylinders!
That will probably be what nvidia does when amd releases their super Cypress to beat the 480: they'll release the 485 with all 512 cores and better I/O for the RAM.
blyndy - Saturday, March 27, 2010 - link
"Fermi is arranged as 16 clusters of 32 shaders, and given that it is turning off 64 shaders, it looks like the minimum granularity it can fuse off is a single cluster of 32. This means it is having problems getting less than two unrecoverable errors per die, not a good sign."from: http://www.semiaccurate.com/2009/12/21/nvidia-cast...">http://www.semiaccurate.com/2009/12/21/nvidia-cast...
shin0bi272 - Saturday, March 27, 2010 - link
Don't quote SemiAccurate to me. If you want to call 1 in 100 claims being correct "semi accurate" then fine, you can... me, I call it a smear. Especially since the guy who wrote that article is a known liar and hack. If you google for gtx480, click on the news results, and click on SemiAccurate, you will see it's listed as satire.
Jamahl - Friday, March 26, 2010 - link
The same Ryan Smith who panned the 5830 for being a "paper launch" even though it was available one day later?
What's wrong this time, Ryan? Maybe there are so many bad things to say about Fermi that being "paper launched" was well down the pecking order of complaints?
AnandThenMan - Friday, March 26, 2010 - link
I was thinking the same thing. The 5830 got slammed for being a paper launch even though it wasn't, but Fermi gets a pass? Why? This isn't even a launch at all, despite what Nvidia says. Actual cards will be available in what, 17 days? That's assuming the date doesn't change again.
jeffrey - Saturday, March 27, 2010 - link
I'll third that notion. Even though Ryan Smith mentioned that Fermi was paper launched today, the tone and the way the article read were much harsher on AMD/ATI. That is ridiculous, considering Ryan had to eat his own words with an "Update" on the 5830's availability.
Being tougher on AMD/ATI, when they did in fact launch the 5830 that day and have hard-launched, to the best of their ability, the entire 5xx0 stack, gives an impression of bias.
A paper launch, with availability at least two and a half weeks out, for a product six months late is absurd!
kc77 - Saturday, March 27, 2010 - link
Yeah, I mentioned it too. ATI got reamed for almost an entire page for something that didn't really happen, while this review mentions it in passing, almost like it's a feature.
gigahertz20 - Friday, March 26, 2010 - link
"The price gap between it and the Radeon 5870 is well above the current performance gap"Bingo, Nvidia may have the fastest single GPU out now, but not by much, and there are tons of trade offs for just a little bit more FPS over the Radeon 5870. High heat/noise/power for what? Over 90% of gamers play at 1920 X 1200 resolution or less, so even just a Radeon 5850 or Crossfired 5770's are the best bang for the buck.
If all your going to play at is 1920 X 1200 or less, I see no reason why educated people would want to buy a GTX 470/480 after reading all the reviews for Fermi today. Way to expensive and way to hot for not much of a performance gain, maybe it's time to sell my Nvidia stock before it goes down any further over the next year or so.
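A quick check of that quoted point, using the list prices given elsewhere in this thread ($499 GTX 480 vs $399 HD 5870) and the ~10-15% performance lead several posters cite; the performance lead is an assumption here, not a measurement:

gtx480_price, hd5870_price = 499, 399

price_premium = gtx480_price / hd5870_price - 1
print(f"price premium: {price_premium:.0%}")  # ~25%
for perf_lead in (0.10, 0.15):
    print(f"perf lead {perf_lead:.0%} -> premium exceeds it by {price_premium - perf_lead:.0%}")

On those numbers the price gap is indeed roughly 10-15 points wider than the performance gap.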
ImSpartacus - Friday, March 26, 2010 - link
"with a 5th one saying within the card"Page 2, Paragraph 2.
Aside from minor typos, this is a great article.
cordis - Friday, March 26, 2010 - link
Hey, thanks for the folding data, very much appreciated. Although, if there's any way you can translate it into something that folders are a little more used to, like PPD (points per day), that would be even better. I'm not sure what the benchmarking program you used is like, but if it folds things and produces log files, it should be possible to get PPD. From the ratios, it looks like above 30k PPD, but it would be great to get hard numbers on it. Any chance of that getting added?
Ryan Smith - Friday, March 26, 2010 - link
I can post the log files if you want, but there's no PPD data in them. It only tells me nodes.
cordis - Tuesday, March 30, 2010 - link
Eh, that's OK; if you want to, that's fine, but don't worry about it too much. It sounds like it was an artificial nvidia thing. We'll have to wait for people to really start folding on them to see how they work out.
ciparis - Friday, March 26, 2010 - link
I had a weird malware warning pop up when I hit page 2:
"The website at anandtech.com contains elements from the site googleanalyticz.com"
I'm using Safari (I also saw someone with Chrome report it). I wonder what that was all about...
Despoiler - Friday, March 26, 2010 - link
I'd like to see some overclocking benchmarks, given the small-die vs. big-die design decisions each company made.
All in all, ATI has this round in the business sense. The performance crown is not where the money is. ATI out-executed Nvidia in a huge way. I cannot wait to see the financial results for each company.
LuxZg - Saturday, March 27, 2010 - link
Agreed... No overclocking at all; it feels like a big part of the review is missing. With the GTX480 having such high consumption/temperatures, I doubt it would go much further, at least on air. On the other hand, there are already many OCed HD58xx cards out there, and even those can easily be overclocked further. With that many watts of advantage, I think AMD could easily catch up with the GTX480 and still be a bit cooler and less power hungry, and less noisy as a consequence as well, of course.
randfee - Friday, March 26, 2010 - link
Very thorough test, as expected from you guys, thanks... BUT:
Why on earth do you keep using an arguably outdated Core i7 920 for benchmarking the newest GPUs? Even at 3.33GHz it's no match for an overclocked 860, a common high-end gaming-rig CPU these days. I got mine to 4.2GHz air cooled?!
sorry... don't get it. On any GPU review I'd try to eliminate any possible bottleneck so the GPU gets limited more; why use an old CPU like this?!
anyone?
palladium - Saturday, March 27, 2010 - link
Clock for clock, the 920 is faster than the 860 thanks to its triple-channel memory (the 860 is faster out of the box because of its aggressive turbo mode). X58 is definitely the route to go, especially if you're benchmarking SLI/CF setups (dual PCIe x16).
randfee - Sunday, March 28, 2010 - link
Go ahead and try Crysis at 3.33GHz and 4.x GHz; minimum fps scale strangely with the CPU.
palladium - Saturday, March 27, 2010 - link
shit double post, sry
palladium - Saturday, March 27, 2010 - link
Clock for clock, the 920 is faster than the 860 (the 860 is faster out of the box because of its aggressive turbo mode). Using the P55/860 would limit cards to PCIe x8 bandwidth when benchmarking SLI/CF (unless of course you get a board with the nF200 chip), which can be more significant (especially with high-end cards) than overclocking a CPU from 3.33GHz to 4GHz.
Roland00 - Saturday, March 27, 2010 - link
It doesn't really add to the framerates, and having a 4GHz CPU could in theory bring stability issues.
http://www.legionhardware.com/articles_pages/cpu_s...
B3an - Friday, March 26, 2010 - link
You're good at making yourself look stupid. A 920 will reach 4GHz easily; I've got one to 4.6GHz. And the 920 is for the superior X58 platform and will have triple-channel memory.
Makaveli - Friday, March 26, 2010 - link
I have to agree with that guy. Your post is silly; everyone knows the X58 platform is the superior chipset in the Intel lineup. Secondly, do you honestly think 3.33GHz vs 4GHz is going to make that much of a difference at those high resolutions?
randfee - Friday, March 26, 2010 - link
Sorry guys, but I know what I'm talking about. Using Crysis, for instance, I found that minimum fps scale quite nicely with CPU clock, whereas the difference a quad core makes is not so big (only 2 threads in the game, afaik). FarCry 2: huge improvements with higher-end (= higher-clocked) CPUs. The Core i7 platform has a clear advantage, yes, but the clock counts quite a bit.
As I said... no offense intended, and no, I'm not arguing against my favorite site, Anandtech ;). Just stating what I and others have observed. I'd just always try to minimize other possible bottlenecks.
randfee - Friday, March 26, 2010 - link
Well... why not test using the 920 @ 4.x GHz? Why possibly bottleneck the system at the CPU by using "only" 3.33? No offense intended, but I find it a valid question. Some games really are CPU bound, even at high settings.
Ph0b0s - Friday, March 26, 2010 - link
These new cards from ATI and Nvidia are very nice, and for a new PC build it is a no-brainer to pick one of them up. But those like me with decent cards from the last generation (GTX285 SLI) don't really feel a lot of pressure to upgrade.
Most current PC games are DirectX 9 360 ports that last-gen cards can handle quite well. Even DirectX 10 games are not too slow. The real driver for these cards is DirectX 11 games, of which there are few enough to count on one hand, and not many upcoming.
Those that are out don't really bring much over DX10, so I don't feel like I am missing anything yet. I think Crysis 2 may change this, but by its release date there will probably be updated/shrunk versions of these new GPUs available.
Hence Nvidia and ATI need really ecstatic reviews to convince us to buy their new cards when there is not a lot of software that (in my opinion) really needs them.
mcnabney - Friday, March 26, 2010 - link
You make the most valid point. As long as the consoles are in the driver's seat (and this isn't going to change), DX11 and the features it provides won't be widely found in games until the next generation of consoles, in 2-3 years.
So really, without growth in the PC gaming market there is no need to upgrade from the last generation. Too bad, really.
GourdFreeMan - Friday, March 26, 2010 - link
Thank you for listening to our feedback on improving your test suite of games, Ryan. I think your current list much better represents our interests (fewer console ports, a selection of games that better represents the engines being used in current and future titles, fewer titles with GPU vendor bias, inclusion of popular titles that have staying power like BF:BC2, etc.) than the one you used to review the 58xx's when they were released. The only title I feel is missing from our suggestions is Metro 2033. Kudos!
yacoub - Friday, March 26, 2010 - link
Good review. The grammar errors are prolific, but I guess this was rushed to release or something.
So it's a hot, power-hungry card with a high price tag. Not too surprising.
Would have liked to see a $150-range Fermi-based card sometime this year so I can ditch my 5770 and get back to NVidia, but the high temps and prices on these cards are not a good sign, especially comparing the performance against the 5800 series.
AznBoi36 - Saturday, March 27, 2010 - link
Fanboy much?
yacoub - Saturday, March 27, 2010 - link
Fanboy of what? The ATI card I have now that I can't wait to get rid of?
The desire for NVidia to release something competitive, so I can get back to a more stable driver set and remove all traces of ATI from this PC?
mcnabney - Saturday, March 27, 2010 - link
Ah yes, get back to Nvidia, whose last trick was releasing a driver that turned off GPU fans, causing instant card death.
With the 480, turning off the fan might actually start a fire.
Headfoot - Monday, March 29, 2010 - link
I bet you experienced that fan error IRL, right?
Just like how everyone who owned a Phenom got a TLB error 100% of the time, right?
numberoneoppa - Friday, March 26, 2010 - link
You know you have the best tech site around when a product review makes it seem like a DDoS is in progress.
As far as the review itself, it's very comprehensive, so thanks Ryan! The new NVIDIA cards seem to be just where most people thought they would be. It really makes me anticipate the next HD58xx card and the AMD price cuts on the current lineup that will come with it.
Devo2007 - Friday, March 26, 2010 - link
Great review, although you may want to edit this sentence:
"NVIDIA meanwhile had to deal with the fact that they were trying to produce a very large chip on a low-yielding process, a combination for disaster given that size is the enemy of high yields."
Shouldn't it be "large size is the enemy of low yields"? Either way, that end point seems a bit redundant.
SlyNine - Saturday, March 27, 2010 - link
No, large size would be a friend of low yields; low yields are our enemy.
yacoub - Friday, March 26, 2010 - link
I think he's got it right. If you want high yields, a larger chip size is the enemy, because you get fewer chips per wafer and each chip is more likely to catch a defect, and thus lower yields.
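For intuition, a minimal sketch of that argument using the classic Poisson defect model; the defect density is an assumed figure for illustration (not a real TSMC 40nm number), and the die areas are the roughly-cited sizes for these chips:

from math import exp

# Fraction of defect-free dies falls exponentially with die area:
# yield ~= exp(-area * defect_density)
DEFECTS_PER_MM2 = 0.004  # assumption for illustration

for name, area_mm2 in [("Cypress, ~334 mm^2", 334), ("GF100, ~530 mm^2", 530)]:
    yield_fraction = exp(-area_mm2 * DEFECTS_PER_MM2)
    print(f"{name}: ~{yield_fraction:.0%} defect-free dies")
# A bigger die also fits fewer times on a wafer, so good chips per
# wafer fall off doubly fast as area grows.

Rebel44 - Friday, March 26, 2010 - link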
IMO the HardOCP review was better, because they showed real-world differences between the NV and AMD cards: the 470 didn't allow better settings than the 5850, and the 480 was only a little bit better than the 5870. So the 470 is IMO an epic fail at that price.
When you add the extra power and noise from the 470 and 480, I wouldn't pay more for them than for the 5850 and 5870.
stagen - Friday, March 26, 2010 - link
With the 470 and 480 generating so much heat and noise, and consuming more power than even the dual-GPU Radeon HD 5970, even thinking of a dual-GPU 470/480 (495?) is a scary thing to do.
yacoub - Friday, March 26, 2010 - link
Agreed. And considering the $350 470 is no faster than a $150 5770 at 1680x in BF:BC2, and only 23% faster at 1920x, that's pathetic. Considering how much better it does in other games, it must be a driver optimization issue that hopefully can be worked out.
mcnabney - Saturday, March 27, 2010 - link
Fermi has existed for months, so the driver work should be as far along as AMD's. The delay allowed for better stepping and higher clocks, but the drivers aren't going to improve any more quickly than AMD's.
uibo - Friday, March 26, 2010 - link
What was the ambient temperature?
Ryan Smith - Saturday, March 27, 2010 - link
20C.
mindbomb - Friday, March 26, 2010 - link
Firmly in AMD's hands? I don't know about that.
Although it can't bitstream TrueHD and DTS-HD MA, I would argue that's not as debilitating as not being able to handle level 5.0 H.264 video, since you can output as LPCM.
Galid - Friday, March 26, 2010 - link
FIRMLY in AMD's hands... it is... Not only does Nvidia not do TrueHD and DTS-HD MA on a card that doesn't even fit in an HTPC, they won't do it even when they get the smaller cards out... Firmly? Yeah....
mindbomb - Friday, March 26, 2010 - link
Let me be more clear. I'm saying that although the ATI cards have better handling of audio, the nvidia cards do in fact have better handling of video, since they can handle level 5.0 and 5.1 H.264 video (and I guess MPEG-4 ASP, but that's irrelevant).
So I wouldn't say the ATI cards have a definite lead in this area.
cmdrdredd - Friday, March 26, 2010 - link
Get a sound card for audio.
mcnabney - Saturday, March 27, 2010 - link
A sound card that will bitstream HD audio will require another $200+, so that tacks an even higher net price onto the Nvidia option.
All of AMD's 5xxx cards give you real HD audio for free.
Hauk - Friday, March 26, 2010 - link
An excellent article, Ryan. I loved seeing Ujesh's response recaptured like that, so fitting for this epic fail. Well-crafted review, though.
just4U - Friday, March 26, 2010 - link
I agree, an excellent article overall. I wonder if AMD will move to get price drops in play on their cards now. After all, they are still selling far above their suggested sales price.
formulav8 - Friday, March 26, 2010 - link
Man, the site is getting hammered.
I wonder how many disappointed compared to happy people there are going to be? :)
Jason
xsilver - Friday, March 26, 2010 - link
the site is moving as fast as nvidia is moving cards ;)
MrSpadge - Friday, March 26, 2010 - link
"Peak 64-bit FP execution rate is now 1/2 of 32-bit FP, it used to be 1/8 (AMD's is 1/5)."AMDs is 2/5, not 1/5. Otherwise.. still reading ;)
MrSpadge - Saturday, March 27, 2010 - link
It's still there (page 3).
deputc26 - Friday, March 26, 2010 - link
"GTX 480 only has 11% more memory bandwidth than the GTX 285, and the 15% less than the GTX 285."and holy server lag batman.
529th - Friday, March 26, 2010 - link
Thanks for the review :)
ghost2code - Saturday, March 27, 2010 - link
I'm really impressed by this article; the author did a great job ;) As for Fermi, it seems to be a really good product for scientific work, but for gamers I'm not so sure. The price tag, power consumption, and noise are all too much for only 10-15% more performance than the cheaper Radeon, which is much more reasonable in all these respects. I guess Fermi needs some final touches from Nvidia; for now it's not a final, well-tested product. Temps around 100 are not good for the PCB, the GPU, and the electronics in general, and I believe that matters for the lifetime and stability of the card. I'm glad Fermi finally came, but I'm disappointed, at least for now.
LuxZg - Saturday, March 27, 2010 - link
I just don't know why the GTX480 is compared to the HD5870, and the same for the GTX470 vs the HD5850. The GTX470 sits right in the middle between the two single-GPU Radeons, and the same can be said for the GTX480 sitting right in between the HD5970 and HD5870.
Prices of these cards as presented by nVidia/ATI:
HD5970 - 599$
GTX480 - 499$
HD5870 - 399$
GTX470 - 349$
HD5850 - 299$
I know the GTX480 is a single GPU, so by this logic you'll compare it to the HD5870. But the GTX480 is nVidia's top-of-the-line graphics card, and the HD5970 is ATI's top-of-the-line card. Besides, ATI's strategy for the last 3 product cycles has been producing small(er) chips and going multi-GPU, while nVidia wants to go the single monolithic GPU way. So following this logic, the GTX480 should indeed be compared to the HD5970 rather than the HD5870.
Anyway, the conclusion of this article is fine, telling both the strengths and the weaknesses of the solutions from both camps, but I believe readers weren't told straightforwardly enough that these cards don't cost the same... And the HD5970 was left out of most of the comparisons (the textual ones).
If I personally look at these cards, they are all worth their money. nVidia cards are probably more future-proof with their commitment to future tech (tessellation, GPGPU), but AMD cards are better for older and current (and near-future) titles. And they are less hot and less noisy, which most gamers pay a lot of attention to. Not to mention this is the first review of a new card in which no one mentioned GPU overclocking; I'm guessing that 90+C temperatures won't allow much better clocks in the near future ;)
Wwhat - Sunday, March 28, 2010 - link
In regards to the temperature and noise: there's always watercooling to go to. I mean, if you have so much money to throw at the latest card, you might as well throw in some watercooling too.
It's too pricey for me though. I guess I'll wait for the 40nm process to be tweaked; spending so much money on a gfx card is silly if you know a while later something new will come around that's way better, and it's just not worth committing so much money to it in my view.
It's a good card though (when watercooled), with nice stuff in it and faster on all fronts, but it also seems an early sample of the new roads nvidia went down, and I expect they will have much improved stuff later on (if still in business).
LuxZg - Tuesday, March 30, 2010 - link
Like I've said before: if you want the FASTEST (and that's usually what you want if you have money to throw away), you'll be buying the HD5970. Or you'll be buying the HD5970 plus water cooling as well.
ViRGE - Saturday, March 27, 2010 - link
I'm not sure where you're getting that the HD5970 is a $600 card. In the US at least, that's a $700 card (or more) everywhere.
wicko - Sunday, March 28, 2010 - link
Honestly, I don't even know if it should be mentioned at all even if it is $600, because there is almost no stock anywhere.
LuxZg - Tuesday, March 30, 2010 - link
Oh, don't make me laugh, please! :D In that case this review shouldn't be up at all, or it should be called a "PREview".. or have you actually seen any stock of the GTX470/480 around?
LuxZg - Sunday, March 28, 2010 - link
They're AMD's & nVidia's recommended prices, and you can see them all in Anandtech's own articles:
http://www.anandtech.com/video/showdoc.aspx?i=3783 (nVidia prices)
http://www.anandtech.com/video/showdoc.aspx?i=3746 (ATI single-GPU cards)
http://www.anandtech.com/video/showdoc.aspx?i=3679 (ATI single/dual-GPU cards)
It is not my fault that your US shops bumped up the price in the complete absence of competition in the high-end market. But the US is not the only market in the world, either.
You want to compare with real world prices? Here, prices from Croatia, Europe..
HD5970 - 4290kn = 591€ (recommended is 599$, which is usually 599€ in EU)
GTX480 - not listed, recommended is 499$/€
HD5870 - 2530kn = 348€ (recommended is 399$/399€ in EU)
GTX470 - not listed, recommended is 349$/€
HD5850 - 1867kn = 257€ (recommended is 299$/299€ in EU)
So let's say that European prices for the GTX cards will be a bit lower than the recommended ones; the GTX480 would still be ~120-130€ pricier than the HD5870, and the HD5970 would be the same ~120-130€ more expensive than the GTX480.
As for the lower priced nVidia card, it's again firmly in the middle between HD5850 & HD5870.
Point is, there's no clear price comparison at the moment, and the article's conclusion should be clear on that.
A person who wants the FASTEST CARD will stretch for another 100$/€ to buy the HD5970, especially since this means lower noise, lower consumption, and lower heat. All combined, that means you can save a few $/€ on the PSU, case, cooling, and earplugs, putting the HD5970 within arm's reach of the GTX480 (price-wise) while allowing for better speeds.
As for the GTX470: again, lower consumption/heat/noise with the ATI cards, which means less expense for PSU/cooling and savings on electricity bills. For me, that's well worth the 50€/$ difference in price. In fact, I'd rather spend 50$/€ more to buy the HD5870, which is faster, less noisy, and doesn't require me to buy a new PSU (I own an HD4890, which was overclocked for a while, so an HD5870 would work just fine), and it will save me 50W for every hour of any game I play.. which will make it CHEAPER than the GTX470 in the long run.
So let's talk again: why isn't the conclusion made a bit more straightforward for end users, and why is the HD5970 completely gone from the conclusion??
Saiko Kila - Sunday, March 28, 2010 - link
These MSRPs are not entirely, I mean historically, correct... The first MSRP (list price) for the HD 5850 was $259, and that was the price you had to pay when buying on sites like Newegg (there were some rebates, and some differences depending on manufacturer, but you still had to have a very potent hunting sense to get a card from any manufacturer; I got lucky twice). Shortly after launch (about one month, it was October) the MSRP (set by AMD) hiked to $279, and problems with supply not only continued but even worsened. Now, since November 2009, it's $299. The HD 5870 followed a generally similar path, though the HD 5850 hiked more, which is no wonder. Note that this is for the reference design only; some manufacturers had higher MSRPs. After all, AMD and nvidia sell only chips, not gaming cards.
If you believe anandtech, here you've got a link, from the day the cards were announced:
http://www.anandtech.com/video/showdoc.aspx?i=3643
The whole pricing thing with the HD 5xxx series is quite unusual (though not unexpected), since normally you'd anticipate the street price to be quite a bit lower than MSRP, and then to drop even further, and you would be right. I remember buying an EVGA GTX260 just after its launch, and the price was a good $20 lower than the suggested price. That's why we need more competition, and for now the outlook isn't very bright, with nvidia not quite delivering...
And these European prices: most if not all European countries have a heavy tax (VAT); this tax is always included and you have to pay it, and there are other taxes too. In the US the sales tax is not included in the street price, and usually you can evade it after all (harder for Californians). Europeans usually get higher prices. Comparing US prices is thereby better, particularly in US dollars (most electronics deliveries are priced in dollars in Europe). So the prices in the rest of the world were also boosted, even in Europe, despite the weak dollar and other factors :)
One note: HD5xxx cards are really very big, and most of them have a very unfriendly location for the power sockets, so you'd expect to pay more for a proper, huge case. Also note that if you have a 600W PSU or so, you'd be smarter to keep it and not upgrade unless REALLY necessary. A lower load fraction means lower efficiency, especially when plugged into a 115V/60Hz grid, so if you have a bigger PSU you pay more for electricity. And it seems that more gamers are concerned with that bill than at any time before... You couldn't blame them for that, and it's sad in its own way.
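A rough sketch of that PSU-sizing point; the efficiency numbers and the idle load are assumptions for illustration, not measurements:

# Wall draw = DC load / efficiency, and efficiency sags when a big PSU
# runs at a small fraction of its rating.
def wall_draw_w(dc_load_w, efficiency):
    return dc_load_w / efficiency

IDLE_LOAD_W = 120  # assumed desktop-idle DC load

print(wall_draw_w(IDLE_LOAD_W, 0.85))  # ~141 W: PSU sized so idle is a decent load fraction
print(wall_draw_w(IDLE_LOAD_W, 0.78))  # ~154 W: oversized PSU idling deep in its curve

On those assumptions, the oversized unit pulls a dozen or so extra watts around the clock just from sitting lower on its efficiency curve.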
LuxZg - Tuesday, March 30, 2010 - link
Well, the current MSRP is as I wrote it above. If there is no competition and/or demand is very high, prices always tend to go up. We're just lucky it doesn't happen often, because in IT competition is usually very good.
As for European prices, what do taxes have to do with it? We've got 23% tax here, but it's included in all prices, so if nVidia goes up 23%, so do the AMD cards. If I'm looking at prices in the same country (and city, and sometimes the same store), and nVidia is 300$ while ATI is 100 and 500, then I just can't compare them and say "hey, nVidia is faster than this 100$ ATI card, I'll buy that"... no, you can't compare like that. The only thing you can do in that case is say "OK, so I have 300$ and the fastest I can afford is nVidia", or "I want the fastest there is, and I don't mind the cost", in which case you'll take the HD5970. Or you can't afford any of those. So again, I don't get why the cards in this review are so rigidly compared one to another as if they had the exact same price (or a +/- 10$ difference), and why on one hand they compare a MORE expensive nVidia card to a QUITE CHEAPER AMD card, but won't compare that same nVidia card to a more expensive AMD card. WHY?
And AMD cards are no bigger than the nVidia ones, and last time I checked, a bigger case is way cheaper than a new PSU. And I'm running my computer on, get this, a 450W PSU, so I'm not wasting any excessive power on inefficiencies at low loads ;) And since this PSU handles an overclocked HD4890, it should work just fine with a non-overclocked HD5870, while I'm pretty sure the GTX470 would already mean a new PSU, a new PSU that costs ~100$/80€. So I'd pay more in total, and get a slower card.
Again, I'm not getting why there's such a rigid idea of GTX470=HD5850 & GTX480=HD5870 ..
LuxZg - Saturday, March 27, 2010 - link
Just re-read the conclusion.. something lacks in this sentence:"If you need the fastest thing you can get then the choice is clear, .."
Shouldn't it finish with "... choice is clear, HD5970..." ? That's what I'm saying, HD5970 wasn't mentioned in the entire conclusion. Past are the days of "single-GPU crown" .. That's just for nVidia to feel better. ATI Doesn't want "single GPU crown", they want the fastest graphics CARD. And they have it.. Serious lack in this article, serious.. And again, there is exact same amount of money dividing GTX480 and HD5870, as is between GTX480 and HD5970..
blindbox - Saturday, March 27, 2010 - link
I know this is going to take quite a bit of work, but can't you colour the main cards and their competition in this review? By main cards, I mean the GTX 470 and 480, and the 5850 and 5870. It's giving me a hard time making comparisons. I'm sure you guys did this before.. I think. It's funny how you guys only coloured the 480.
PS: I'm sorry for the spam, my comments are not appearing, and I'm sorry for replying to this guy when it is completely off topic, lol.
JarredWalton - Saturday, March 27, 2010 - link
Yes, it did take a bit of work, but I did it for Ryan. The HD 5870/5970 results are in orange and the 5850 is in red. It makes more of a difference on crowded graphs, but it should help pick out the new parts and their competition. I'm guessing Ryan did it to save time, because frankly the graphing engine is a pain in the butt. Thankfully, the new engine should be up and running in the near future. :-)
Finally - Saturday, March 27, 2010 - link
Further improvement idea: give the dual-chip/SLI cards another colour tone as well.
lemonadesoda - Sunday, March 28, 2010 - link
No. Keep colouring simple. Just 3 or 4 colours max; more creates noise. If you need to highlight other results, colour the label, or circle it, or add a drop shadow, or put a red * at the end.
Just NO rainbow charts!
IceDread - Tuesday, March 30, 2010 - link
The article does not include the HD 5970 in CF. The article does not mention the HD 5970 at all in the conclusion. This is really weird. It is my belief that anandtech has become pro-nvidia and is no longer an objective site. Objectivity is looking at (performance + functionality) / price. The HD 5970 is a clear winner here. After all, who cares if a card has 1, 2 or 20 GPUs? It's the performance / price that matters.
Kegetys - Tuesday, March 30, 2010 - link
According to a test at legitreviews.com, having two monitors attached to the card causes the idle power use to rise quite a bit. I guess the Anand test is done with just one monitor attached? It would be nice to see power consumption numbers for dual-monitor use as well; I don't mind high power use during load, but if the card does not idle properly (with two monitors), then that is quite a showstopper.
Ryan Smith - Wednesday, March 31, 2010 - link
I have a second monitor (albeit 1680), however I don't use it for anything except 3D Vision reviews. But if dual-monitor power usage is going to become an issue, it may be prudent to start including that.
henrikfm - Tuesday, March 30, 2010 - link
Now it would be easier to believe only idiots buy ultra-high-end PC hardware parts.
ryta1203 - Tuesday, March 30, 2010 - link
Is it irresponsible to use benchmarks designed for one card to measure the performance of another card?
Sadly, the "community" tries to hold to the belief that all GPU architectures are the same, which is of course not true.
The N-Queens solver is poorly coded for ATI GPUs, so of course you can post benchmarks that say whatever you want them to say if they are coded that way.
Personally, I find this fact invalidates the entire article, or at least the "compute" section of it.
Ryan Smith - Wednesday, March 31, 2010 - link
One of the things we absolutely wanted to do starting with Fermi is to include compute benchmarks. It's going to be a big deal if AMD and NVIDIA have anything to say about it, and in the case of Fermi it's a big part of the design decision.Our hope was that we'd have some proper OpenCL/DirectCompute apps by the time of the Fermi launch, but this hasn't happened. So our decision was to go ahead with what we had, and to try to make it clear that our OpenCL benchmarks were to explore the state of GPGPU rather than to make any significant claims about the compute capabilities of NVIDIA or AMD's GPUs. We would rather do this than to ignore compute entirely.
It sounds like we didn't make this clear enough for your liking, and if so I apologize. But it doesn't make the results invalid - these are OpenCL programs and this is what we got. It just doesn't mean that these results will carry over to what a commercial OpenCL program may perform like. In fact if anything it adds fuel to the notion that OpenCL/DirectCompute will not be the great unifier we had hoped for them to be if it means developers are going to have to basically write paths optimized around NVIDIA and AMD's different shader structure.
ryta1203 - Tuesday, March 30, 2010 - link
The compute section of this article is just nonsense. Is this guy a journalist? What does he know about programming GPUs?
Firen - Tuesday, March 30, 2010 - link
Thanks for this comprehensive review; it covers some very interesting topics between Team Green and Team Red.
Yet I agree with one of the comments here: you missed how easily the ATI 5850 and 5870 can be overclocked thanks to their light design. A 5870 can easily deliver more or less the same performance as a 480 card while still running cooler and consuming less power.
Some people might point out that our new 'champion' card can be overclocked as well... that's true... however, doesn't it feel terrifying to have a graphics card running hotter than boiling water!
Fulle - Tuesday, March 30, 2010 - link
I wonder what kind of overclocking headroom the 470 has... since someone with a 5850 can easily bump the voltage up a smidge and get about a 30% overclock with minimal effort... people who tinker can usually safely reach about 1GHz core, for about a 37% overclock.
Unless the 470 has a fair bit of overclocking headroom, someone with a 5850 could easily overclock to get superior performance, lower heat, lower noise, and lower power consumption.
After all these months and months of waiting, Nvidia has basically released a few products that ATI can defeat by just binning their current GPUs and bumping up the clockspeed? *sigh* I really don't know who would buy these cards.
Shadowmaster625 - Tuesday, March 30, 2010 - link
You're being way too kind to Nvidia. Up to 50% more power consumption for a very slight (at best) price/performance advantage? This isn't a repeat of the AMD/Intel thing; this is a massive difference in power consumption. We're talking approximately $1 a year for every hour a week of gaming (rough math below). If you game 20 hours a week, expect to pay about $20 a year more for using the GTX470 vs a 5850. May as well add that right to the price of the card.
But the real issue is what happens to these cards when they get even a modest coating of dust in them? They're going to detonate...
Even if the 470 outperformed the 5850 by 30%, I don't think it would be worth it. I can't stand loud video cards; it is totally unacceptable to me. I again have to ask the question I find myself asking quite often: what kind of world are you guys living in? nVidia should get nothing more than a poop-in-a-box award for this.
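The rough math behind that figure, with assumed inputs (a ~130W load delta between the GTX 470 and HD 5850, and power at $0.15/kWh; neither is a review measurement):

EXTRA_KW = 0.130     # assumed extra draw while gaming
USD_PER_KWH = 0.15   # assumed electricity price

# One hour per week for 52 weeks:
cost_per_weekly_hour = EXTRA_KW * 52 * USD_PER_KWH
print(f"${cost_per_weekly_hour:.2f} per year for each weekly gaming hour")
# ~$1.01, so 20 h/week works out to roughly $20/year, as the comment says.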
jujumedia - Wednesday, March 31, 2010 - link
With those power draws and the temps they reach in daily operation, I see high GPU failure rates for the GTX 480 and 470, as they are already marginal coming out of the fab. I'll stick with ATI for 10 fps less.
I've been holding on for months to see what Fermi would bring to the world of GPUs. After reading countless reviews of this card, I don't think it's a justifiable upgrade for my GTX260. I mean, yeah, the performance is much higher, but in most benchmark reviews with games like Crysis this card barely wins against the 5870, and to buy this card I would need to upgrade the PSU and possibly get a new case for ventilation. I keep loading up Novatech's website and almost adding a 5870 to the basket instead of pre-ordering the GTX480 like I was intending. What puts me off more than anything with the new nvidia card is its noise and temps. I can't see this card living very long.
I've been an nvidia fan ever since the first GeForce card came out, which I still have tucked away in a drawer somewhere. I find myself thinking of switching to ATI, but I've read too many horror stories about their driver implementation and it puts me off. Maybe I should just wait for Nvidia to refresh its new card and keep hold of my 260 a bit longer. I really don't know :-(
Zaitsev - Wednesday, March 31, 2010 - link
There is an error with the Bad Company 2 image mouse-overs for the GTX 480. I think the images for 2xAA and 4xAA have been mixed up; 2xAA clearly has more AA than the 4xAA image.
Compare GTX 480 2x with GTX 285 4x and they look very similar. Also compare 480 4x with 285 2x.
Very nice article, Ryan! I really enjoyed the tessellation tests. Keep up the good work.
Ryan Smith - Wednesday, March 31, 2010 - link
My master copies are labeled the same, but after looking at the pictures I agree with you; something must have gotten switched. I'll go flip things. Thanks.
Wesgoood - Wednesday, March 31, 2010 - link
Correction: Nvidia retained their crown on Anandtech, even though some resolutions even here favored ATI (mostly the higher ones). On Tom's Hardware the 5870 pretty much beat the GTX 480 from 1900x1200 to 2560x1600 - not every time at 1900, but pretty much every single time at 2560.
That is where the crown is, in the best of the best situations - not "OMG it beat it at 1680, THAT HAS TO BE THE BEST!"
Plus the power-hungry state of this card is just appalling. Nvidia have shown they can't compete with proper technology, instead having to cram everything they can onto a chip and pray it works right.
Whereas ATI's GPU is designed perfectly to where they have plenty of room to almost double the size of the 5870.
efeman - Wednesday, March 31, 2010 - link
I copied this over from a comment I made on a blog post.
I've been with nVidia for the past decade. My brother built his desktop way back when with the Ti 4200, I bought a prefab with a 5950 Ultra, my last budget build had an 8600 GTS in it, and I upgraded to the GTX 275 last year. I am in no way a fanboy; nVidia has just treated me very well. If I had made that last decision a few months later, after the price hike, it would've definitely been the HD 4890: almost identical performance for ballpark $100 less.
I recently built a new high-end rig (Core i7 and all), but I waited out on dropping the money on a 5800 series card. I knew nVidia's new cards were on the way, and I was excited and willing to wait it out; I expected a lot out of them.
Now that they're out in the open, I have to say I'm a little shaken. In many cases, the performance of the cards is not where I would've hoped it to be (the general consensus seems to be a 5-10% increase in performance over their ATI counterparts; I see that failing in many cases, however). It seems like the effort that nVidia put into the cards gave them lots of potential, but most of it is wasted.
"The future of PC gaming" is right in the title of this post, and that's what these cards have been built for. Nvidia has a strong lead over ATI in compute and tessellation performance now, that's obvious; however, that will only provide useful if and when developers decide to put the extra effort into taking advantage of those technologies. Nvidia is gambling right now; it has already given ATI a half-year lead on the DX11 market, and it's pushing cards that won't be fully utilized until who-knows-when (there's no telling when these technologies will be more widely integrated into the gaming market). What will it do in the meantime? ATI is already on it's way to producing its 5000-series refresh; and this time it knows the competition's performance.
I was hoping for the GTX 400s to do the same thing the GTX 200s did: give nVidia back the high-end performance throne. ATI is not only competitive with its counterparts, but it still has the 5970 for the enthusiast performance crown (don't forget Eyefinity!). I think nVidia made a mistake in putting so much focus into compute and tessellation performance; it would've been smarter to produce cards with similar die sizes (crappy wafer yields, anyone?), faster raw performance with tessellation/compute as a secondary objective, and more competitive pricing. It wouldn't have been a bad option to create a separate chip for the Tesla cards, one that focused on compute performance while the GeForce cards focused on the rest.
I still have faith. Maybe nVidia will work wonders with the drivers and produce the performance we were waiting for. Maybe it has something awesome brewing deep within its labs. Or maybe my fears will embody themselves, and nVidia is crossing its fingers and hoping for its tessellation/compute performance to give it the market share later on. If so, ATI will provide me with my pair of cards.
That was quite the rant; I wasn't planning on writing that much when I decided to comment on Drew Henry's (nVidia GM) blog post. I suppose I'm passionate about this sort of thing, and I really hope nVidia doesn't lose me after all this time.
Kevinmbaron - Wednesday, March 31, 2010 - link
The fact that this card comes out a year and a half after the GTX 295 makes me sick. Add to that the fact that the GTX 295 is actually faster than the GTX 480 in a few benchmarks, and very close in others, and it's like a bad dream for nvidia. Forget whether they can beat AMD; they can't even beat themselves. They could have done a die shrink on the GTX 295, added some more shaders, doubled the memory, and had that card out a year ago, and it would have crushed anything on the market. Instead they risked it all on a hare-brained new card. I am a GTX 295 owner. Apparently my card is an all-around better card, as it doesn't lag in some games like the 480 does. I guess I will stick with my old GTX 295 for another year. Maybe then there might be a card worth buying. Even the ATI 5970 doesn't have enough juice to justify a new purchase from me. This should be considered horrible news for Nvidia. They should be ashamed of themselves, and the CEO should be asked to step down.
ol1bit - Thursday, April 1, 2010 - link
I just snagged a 5870 gen 2, I think (XFX), from Newegg.
They have been hard to find in stock, and they are out again.
I think many were waiting to see if the GF100 was a cruel joke or not. I am sorry for Nvidia, but I love the competition. I hope Nvidia will survive.
I'll bet they are burning the midnight oil for gen 2 of the GF100.
bala_gamer - Friday, April 2, 2010 - link
Did you guys receive the GTX480 earlier than other reviewers? There were 17 cards tested on 3 drivers, and I am assuming tests were done multiple times per game to get an average, plus installing and reinstalling drivers, etc. The 10.3 Catalyst drivers came out the week of March 18.
Do you guys have multiple computers benchmarking at the same time? I just cannot imagine how the tests were all done within the time frame.
Ryan Smith - Sunday, April 4, 2010 - link
Our cards arrived on Friday the 19th, and in reality we didn't start real benchmarking until Saturday. So all of that was done in roughly a 5-day span. In true AnandTech tradition, there wasn't much sleep to be had that week. ;-)
mrbig1225 - Tuesday, April 6, 2010 - link
I felt compelled to say a few things about nvidia's Fermi (480/470 GTX). I always like to start out by saying: let's take the fanboyism out of the equation and look at the facts. I am a huge nvidia fan, however they dropped the ball big time. They are selling people on ONE aspect of DX11 (tessellation), and that's really the only thing their cards do well, but it's not an efficient design. What people aren't looking at is that their tessellation is done by the PolyMorph engine, which ties directly into the CUDA cores: the more CUDA cores occupied by shader processing and the like, the less tessellation performance and vice versa, which means fewer frames per second.
As you noticed, we see tons of tessellation benchmarks showing the GTX 480 is substantially faster at tessellation. I agree, when the conditions suit that type of architecture (and there isn't a lot else going on). We know that the GF100 (480/470 GTX) is a computing beast, but I don't believe that will equate to overall gaming performance. The facts are: this GPU is huge (3 billion+ transistors), creates a boatload of heat, sucks up more power than any of the latest dual-GPU cards (295 GTX, 5970), came to market 6 months late, and is only faster than its single-GPU competition by 10-15%. And some of us are happy?
Oh, that's right, it will be faster in the future when DX11 is relevant... I don't think so, for a few reasons, but I'll name two. First, the current crop of DX11 game benchmarks (shaders plus tessellation and so on) show something completely different from the synthetic tessellation tests. If tessellation were nvidia's trump card in games, then the 5800 series would be beaten substantially in any DX11 title with tessellation turned on. We aren't seeing that (we are seeing the opposite in some circumstances), and I don't think we will. Second, I am fully aware that tessellation is scalable, but many of you will say it is only in extreme tessellation environments that nvidia's cards really take off. If you agree with that, then you will see that nvidia has another issue: video card sales are not made up of high-end GPUs but of the cheaper mainstream ones, and since nvidia's PolyMorph engine is tied directly to its shaders, you kind of see where this is going - less powerful cards will be bottlenecked by their lack of shaders for tessellation, and vice versa.
Developers want to make money, and the way they make money is by selling lots of games. Crysis was a big game, but it didn't break any sales records; the truth of the matter is most people's systems couldn't run Crysis. Now look at Valve: a lot of their titles sell well partly because of how friendly they are to mainstream GPUs. The hardware has to be there to support a large number of game sales, meaning that if the majority of parts cannot do extreme levels of tessellation, you will find few games that implement it.
Food for thought: can anyone show me a DX11 title where the GTX 480 handily beats the 5870 by the same amount it does in the Heaven benchmark, or even close to it? I think, as a few of you have said, it will come down to which games work better with which architecture - some will favor nvidia (Far Cry 2 is a good example), others ATI (Stalker). I think that is what we are seeing now. IMO
P.S. I think people are also pissed because this card was stated to be 60% faster than the 5870. As you can see, it's not!!
houkouonchi - Thursday, April 8, 2010 - link
Why the hell are the screenshots showing off the AA results in a lossy JPEG format instead of PNG, like pretty much everything else?
dzmcm - Monday, April 12, 2010 - link
I'm not familiar with Battleforge firsthand, but I understood it uses HD Ambient Occlusion, which is a variation of Screen Space Ambient Occlusion that includes normal maps. And since its inception in Crysis, SSAO has stood for Screen Space AO. So why is it called Self Shadow AO in this article?
Bit-tech refers to Stalker:CoP's SSAO as "Soft Shadow." That I'm willing to dismiss. But I think they're wrong.
Am I falling behind with my jargon, or are you guys not bothering to keep up?
nyran125 - Monday, April 12, 2010 - link
I'm going with the less power-hungry ATI 5000 series. I know a 5850 card will easily fit in my case as well. There's no way I'd choose the GTX 470 over any of the ATI 5870 or 5850 cards, so that only leaves the GTX 480 against either the 5870 or the 5850. The performance increase is NOT worth the price and power increase of an nvidia card over the 5870. I mean, even looking at the games: the ones I'll probably play, Crysis and Battlefield: Bad Company 2, come out on top against the nvidia 480 GTX. So blah.
Nvidia, you need to make a much better card than that for me to spend money on a GTX 470 or GTX 480 over the 5870 or 5850.
nyran125 - Monday, April 12, 2010 - link
Oh, and secondly, if you're buying a 200-series nvidia card or the GTX 480, it isn't fast enough to future-proof your computer. You might as well go spend less money on a 5970 or a single 5870; you know it will last for the next 2 years, and the GTX 480 will NOT last any longer than the 5000 series with its 10-15% performance increase. I didn't like the 200-series nvidia cards, and I'm not interested in even MORE power-hungry cards than that. I want less power-hungry cards and efficiency. To me, a game plays barely any different at a 60 FPS average versus a 100 FPS average. If you have a 200-series card, save your money and wait for the next gen of cards, or at least wait till a DX11 game actually comes out, not just Just Cause friggin' 2..
OK, all these cards are nice; new technology is very welcome. But where are the games to push them?? If I spent 400$ or 500$ on a new card, where would I see a really big difference against my old 8800GT? They sell hardware without software to support it... 2 or 3 games make no difference to me. The PS3 and Xbox 360 have very old graphics cards compared to the ATI 5800 series and nvidia 400, and still the games look beautiful, and in some cases much better than on PC... Make new games for PC and then I will buy a new card! Until then I will stick with my Xbox 360...
Drizzit101 - Sunday, May 9, 2010 - link
I have been running the GTX 295. The plan was to buy a second GTX 295. Looking at the prices, I was thinking about just buying two GTX 470s instead. What's the better move?
Krazy Glew - Tuesday, May 11, 2010 - link
See http://semipublic.comp-arch.net/wiki/Poor_Man%27s_...
In particular: US patent 7,117,421, "Transparent error correction code memory system and method", Danilak, assigned to Nvidia, 2002.
http://semipublic.comp-arch.net/wiki/Poor_Man%27s_...
Matt Campbell - Monday, August 2, 2010 - link
Ryan, what was the special sauce you used to get Badaboom working on Fermi? My GTX 460 won't run it, and Elemental's website says Fermi support won't be added until Q4 2010. http://badaboomit.com/node/507
niceboy60 - Friday, August 20, 2010 - link
I have the same problem on my GTX 480. Badaboom does not work on Fermi, according to my own experience and Badaboom's official web site. I don't think these benchmarks are accurate.
niceboy60 - Friday, August 20, 2010 - link
I bought a GTX 480 based on this review, as I do a considerable amount of video converting, just to find out that, despite the GTX 480 showing very good results with Badaboom here, the truth is Badaboom is not yet compatible with any GTX 400 series card, according to the Badaboom web site.
adder1971 - Friday, September 17, 2010 - link
The Badaboom website says it does not work, and when I try it with the GTX 465 it does not work. How were you able to get it to work? I have NVIDIA's latest release drivers as of today and the latest released version of Badaboom.
wizardking - Tuesday, September 21, 2010 - link
I bought this card for this alone! I used Badaboom with the same version number you used!!!!!!
IcarusLSC - Wednesday, September 22, 2010 - link
How'd you set the 4x + TrSS 4x mode in Battlefield: Bad Company 2? I can't find anything that resembles it in the nVidia control panel to force it, etc... Thanks!
jmkayu - Thursday, October 14, 2010 - link
Please do yourselves a favour: do not believe every paid review you see, and listen to actual users. In an unprecedented move of arrogance, nVidia has intentionally crippled the entire 400 series' performance for anything but mainstream games. When 8000-series cards handily outperform the new 400 series, we have a problem. Check the following user discussions, and especially the last one, from a developer.
http://forums.nvidia.com/index.php?showtopic=18157...
http://forums.nvidia.com/index.php?showtopic=16675...
http://www.opengl.org/discussion_boards/ubbthreads...
http://news2.mcneel.com/scripts/dnewsweb.exe?cmd=a...