168 Comments

  • DoktorSleepless - Wednesday, December 15, 2010 - link

    Is it just me or are all the graphs missing?
  • Ryan Smith - Wednesday, December 15, 2010 - link

    They're not missing. They're fashionably late.

    In all seriousness though, they're going up now. We had less than a week to cover all of this, so it's very much down to the wire here.
  • DoktorSleepless - Wednesday, December 15, 2010 - link

    They're still missing on some of the non-benchmark pages, including the "Enhanced Quality AA" page.
  • AnnihilatorX - Thursday, December 16, 2010 - link

    They are all here.
    You should clear your browser cache.
  • AstroGuardian - Wednesday, December 15, 2010 - link

    It's you. The graphs are drawn in infrared. Your fault you can't see them :)
  • mmatis - Wednesday, December 15, 2010 - link

    They all look fine to me. Surely you aren't trying to use an NVidia card to read a favorable article about AMD?
  • Stuka87 - Wednesday, December 15, 2010 - link

    Err, did you even read the article?!
  • opticalmace - Wednesday, December 15, 2010 - link

    also missing the conclusion right now. :)
  • tipoo - Wednesday, December 15, 2010 - link

    In their defense, you can't have read the whole article that fast :-P
  • HOOfan 1 - Wednesday, December 15, 2010 - link

    Doesn't look to me like the HD6970 is worth $70 more than the HD6950 at this time.

    Hopefully for AMD/ATI's sake, driver updates will catapult it ahead.
  • fausto412 - Wednesday, December 15, 2010 - link

    6970 just 4 to 6 fps faster in Bad Company 2 than my 5870? WTF!

    not worth the upgrade. what a lame ass successor.
  • Kibbles - Wednesday, December 15, 2010 - link

    It's 7% faster at 1920 and 9% faster at 2560. BC2 obviously doesn't need the extra GPU power at 1680.

    I wouldn't call it weak, but this card certainly isn't the clear winner that the 5870 was.
  • fausto412 - Wednesday, December 15, 2010 - link

    It's weak when I was expecting a response to the GTX 580 to upgrade to.

    May as well stay with my 5870.
  • ClownPuncher - Wednesday, December 15, 2010 - link

    For now... But who really bases their purchase on one game anymore? It looks like 10.12 or 11.1 drivers will help performance a good amount.
  • fausto412 - Wednesday, December 15, 2010 - link

    I base my performance on 1 game...because it is a very taxing game and my #1 game right now.
  • MeanBruce - Wednesday, December 15, 2010 - link

    Yup, dude I heard the AMD 7000 series might make an early appearance next July, with the die shrink @28nm you might want to wait and pick up a 7970!
  • fausto412 - Wednesday, December 15, 2010 - link

    That's what I'm considering now. I need 30% more performance than the 5870 for an upgrade to make sense.
  • Stuka87 - Wednesday, December 15, 2010 - link

    The game is CPU limited at lower resolutions. BC2 is known for being more CPU bound than GPU bound.

    But I was hoping for a larger jump over the previous cards :/
  • fausto412 - Wednesday, December 15, 2010 - link

    I understand BFBC2 is more CPU bound. But in this testing Anandtech used a TOP of the line CPU, so that rules that out as a bottleneck.
  • Belard - Wednesday, December 15, 2010 - link

    Yeah... at least the model numbers didn't make things confusing!

    In some benchmarks, the 6950 is faster than your 5870... but it would have made far more sense to call these 6850/6870 or even 6830/6850.

    AMD screwed up with the new names...
  • DoktorSleepless - Wednesday, December 15, 2010 - link

    What benchmark or game is used to measure noise?
  • Hrel - Wednesday, December 15, 2010 - link

    I'm not 100% sure, but I believe they test it under Crysis. It was either that or a benchmark that put full load on the system. It was in an article in the last year or two; I've been reading so long it's all starting to mesh together chronologically. But suffice it to say it stresses the system.
  • Hrel - Wednesday, December 15, 2010 - link

    It's furmark, it's in the article.
  • Adul - Wednesday, December 15, 2010 - link

    nice Christmas gift from the GF :D
  • AstroGuardian - Wednesday, December 15, 2010 - link

    I saw my GF buying a couple of those. One is supposed to be for me and she doesn't play games...... WTF?
  • MeanBruce - Wednesday, December 15, 2010 - link

    Wow, you are getting a couple of 6950s? All I am getting from my 22yo gf is a couple of size F yammos lying on a long narrow torso, and a single ASUS 6850. Don't know which I like better, hmmmmm. Wednesday morning comic relief.
  • Adul - Wednesday, December 15, 2010 - link

    damn sounds good to me :) enjoy both ;)
  • SirGCal - Wednesday, December 15, 2010 - link

    I'm happy to see these power values! I did expect a bit more performance but once I get one, I'll benchmark it myself. By then the drivers will likely have changed the situation. Now to get Santa my wish list... :-) If it was only that easy...
  • mac2j - Wednesday, December 15, 2010 - link

    One of the most impressive elements here is that you can get 2x6950 for ~$100 more than a single 580. That's some incredible performance for $600, which is not unheard of as the price point for a top single-GPU card.

    Second... the scaling of the 6950, combined with the somewhat lower power consumption relative to the 570, bodes well for AMD with the 6990. My guess is they can deliver a top performing dual-GPU card with under a 425-watt TDP... the 570 is a great single chip performer, but getting it into a dual-GPU card under 450-500W is going to be a real challenge.

    Anyway exciting stuff all-around - there will be a lot of heavy-hitting GPU options available for really very fair prices....
  • StormyParis - Wednesday, December 15, 2010 - link

    It's nice to have all current cards listed, and it helps determine which one to buy. My question, and the one people ask me, is rather "is it worth upgrading now". Which depends on a lot of things (CPU, RAM...), but, above all, on comparative performance between current cards and cards 1-2-3 generations out. I currently use a 4850. How much faster would a 6850 or 6950 be?
  • MeanBruce - Wednesday, December 15, 2010 - link

    TechPowerUp.com shows the 6850 as 95 percent faster, or almost double the performance of the 4850, and 100 percent more efficient than the 4850 at 1920x1200. I also am upgrading an old 4850; as for the 6950, check their charts when they come up later today.
  • mapesdhs - Monday, December 20, 2010 - link


    Today I will have completed my benchmark pages comparing the 4890, 8800GT and GTX 460 1GB (800 and 850 core speeds), in both single and CF/SLI, for a range of tests. You should be able to extrapolate between known 4850/4890 differences, the data I've accumulated, and known GTX 460 vs. 68xx/69xx differences (bearing in mind I'm testing with 460s with much higher core clocks than the 675 reference speed used in this article). Email me at [email protected] and I'll send you the URL once the data is up. I'm testing with 3DMark06, Unigine (Heaven, Tropics and Sanctuary), X3TC, Stalker COP, Cinebench, Viewperf and PT Boats. Later I'll also test with Vantage, 3DMark11 and AvP.

    Ian.
  • ZoSo - Wednesday, December 15, 2010 - link

    Helluva 'Bang for the Buck' that's for sure! Currently I'm running a 5850, but I have been toying with the idea of SLI or CF. For a $300 difference, CF is the way to go at this point.
    I'm in no rush, I'm going to wait at least a month or two before I pull any triggers ;)
  • RaistlinZ - Wednesday, December 15, 2010 - link

    I'm a bit underwhelmed from a performance standpoint. I see nothing that will make me want to upgrade from my trusty 5870.

    I would like to see a 2x6950 vs 2x570 comparison though.
  • fausto412 - Wednesday, December 15, 2010 - link

    exactly my feelings.

    it's like thinking Miss Universe is about to screw you and then you find out it's her mom....who's probably still hot...but def not miss universe
  • Paladin1211 - Wednesday, December 15, 2010 - link

    CF scaling is truly amazing now; I'm glad that nVidia finally has something to catch up to in terms of drivers. Meanwhile, the ATI wrong refresh rate issue is not fixed: it's stuck at 60Hz where the monitor can do 75Hz. "Refresh force", "refresh lock", "ATI refresh fix", disabling/enabling EDID, manually setting monitor attributes in CCC, EDID hacks... nothing works. Even the "HUGE" 10.12 driver can't get my friend's old Samsung SyncMaster 920NW to work at its native 1440x900@75Hz, in both XP 32bit and Win 7 64bit. My next monitor will be a 120Hz one for sure, and I don't want to risk ruining my investment, AMD.
  • mapesdhs - Monday, December 20, 2010 - link


    I'm not sure if this will help fix the refresh issue (I do the following to fix max res limits), but try downloading the drivers for the monitor and modifying the data file before installing them. Check to ensure it has the correct genuine max res and/or max refresh.

    I've been using various models of CRT which have the same Sony tube that can do 2048x1536, but every single vendor that sells models based on this tube has drivers that limit the max res to 1800x1440 by default, so I edit the file to enable 2048x1536 and then it works fine, e.g. the HP P1130.

    It's a bit daft that drivers for a monitor do not by default allow one to exploit the monitor to its maximum potential.

    Anyway, good luck!!

    Ian.
  • techworm - Wednesday, December 15, 2010 - link

    Future DX11 games will stress the GPU and video RAM more and more, and it is then that the 6970 will shine. So it's obvious that the 6970 is a better and more future-proof purchase than the GTX 570, which will be frame buffer limited in near-future games.
  • Nickel020 - Wednesday, December 15, 2010 - link

    In the table about whether PowerTune affects an application or not there's a yes for 3DMark, and in the text you mention two applications saw throttling (with 3DMark it would be three). Is this an error?

    Also, you should maybe include that you're measuring the whole system power in the PowerTune tables, it might be confusing for people who don't read your reviews very often to see that the power draw you measured is way higher than the PowerTune level.

    Reading the rest now :)
  • stangflyer - Wednesday, December 15, 2010 - link

    Sold my 5970 waiting for 6990. With my 5970 playing games at 5040x1050 I would always have a 4th extended monitor hooked up to a tritton uve-150 usb to vga adapter. This would let me game while having the fourth monitor display my teamspeak, afterburner, and various other things.
    My question is this: can I use the new 6950/6970 with triple monitors and also use a 4th screen extended at the same time? I have 3 matching Dell native DisplayPort monitors and a fourth with VGA/DVI. Can I use the 2 DPs and the 2 DVIs on the 6970 at the same time? I have been looking for this answer for hours and can't find it! Thanks for the help.
  • cyrusfox - Wednesday, December 15, 2010 - link

    You should totally be able to do a 4X1 display, 2 DP and 2 DVI, as long as one of those DP dells also has a DVI input. That would get rid of the need for your usb-vga adapter.
  • gimmeagdlaugh - Wednesday, December 15, 2010 - link

    Not sure why the AMD 6970 has a green bar while the NV 580 has a red bar...?
  • medi01 - Wednesday, December 15, 2010 - link

    Also wondering. Did nVidia's marketing guys call again?
  • Ryan Smith - Wednesday, December 15, 2010 - link

    I normally use green for new products. That's all there is to it.
  • JimmiG - Wednesday, December 15, 2010 - link

    Still don't like the idea of Powertune. Games with a high power load are the ones that fully utilize many parts of the GPU at the same time, while less power hungry games only utilize parts of it. So technically, the specifications are *wrong* as printed in the table on page one.

    The 6970 does *not* have 1536 stream processors at 880 MHz. Sure, it may have 1536 stream processors, and it may run at up to 880 MHz.. But not at the same time!

    So if you fully utilize all 1536 processors, maybe it's a 700 MHz GPU.. or to put it another way, if you want the GPU to run at 880 MHz, you may only utilize, say 1200 stream processors.
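
    To put rough numbers on the above (a back-of-the-envelope sketch in Python; the 700MHz clamp is the hypothetical from the paragraph above, and 1536 ALUs x 2 FLOPs/clock is the usual way the headline single-precision figure is derived):

    alus = 1536                      # Cayman's advertised stream processor count
    flops_per_clock = 2              # a multiply-add counted as 2 FLOPs

    for mhz in (880, 700):           # advertised clock vs. a hypothetical sustained clamp
        gflops = alus * flops_per_clock * mhz / 1000.0
        print(f"{mhz} MHz -> {gflops:.0f} GFLOPS peak")
    # 880 MHz -> 2703 GFLOPS peak (the number on the box)
    # 700 MHz -> 2150 GFLOPS peak (what a sustained clamp would really mean)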
  • cyrusfox - Wednesday, December 15, 2010 - link

    I think Anand did a pretty good job of explaining how it reasonably power-throttles the card. Also, 3rd party board vendors will probably make workarounds for people who abhor getting anything but the best performance (even at the cost of efficiency). I really don't think this is much of an issue, but a good development that is probably being driven by Fusion for Ontario, Zacate, and Llano. Also, only Metro 2033 triggered any reduction (850MHz from 880MHz). So your statement of a crippled GPU only holds for FurMark; nothing got handicapped to 700MHz. Games try to efficiently use all the GPU has to offer, so I don't believe we will see many games at all trigger PowerTune throttling.
  • JimmiG - Wednesday, December 15, 2010 - link

    Perhaps, but there's no telling what kind of load future DX11 games, combined with faster CPUs, will put on the GPU. Programs like Furmark don't do anything unusual; they don't increase GPU clocks or voltages or anything like that. They just tell the GPU: "Draw this on the screen as fast as you can".

    It's the same dilemma overclockers face: do I keep this higher overclock that causes the system to crash with stress tests but works fine with games and benchmarks? Or do I back down a few steps to guarantee 100% stability? IMO, no overclock is valid unless the system can last through the most rigorous stress tests without crashes, errors or thermal protection kicking in.

    Also, having a card that throttles with games available today tells me that it's running way too close to the thermal limit. Overclocking in this case would have to be defined as simply disabling the protection to make the GPU always work at the advertised speed.
    It's a lazy solution; what they should have done is go back to the drawing board until the GPU hits the desired performance target while staying within the thermal envelope. Prescott showed that you can't just keep adding stuff without any consideration for thermals or power usage.
  • AnnihilatorX - Wednesday, December 15, 2010 - link

    Didn't you see that you can increase the throttle threshold by 20% in Catalyst Control Centre? That means 300W until it throttles, which in a sense disables PowerTune.
  • Mr Perfect - Thursday, December 16, 2010 - link

    On page eight Ryan mentions that Metro 2033 DID get throttled to 700MHz. The 850MHz number was reached by averaging the amount of time Metro was at 880MHz with the time it ran at 700MHz.

    Which is a prime example of why I hate averages in reviews. If you have a significantly better "best case", you can get away with a particularly bad "worst case" and end up smelling like roses.
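
    For what it's worth, a quick Python sketch of what that average implies, assuming 850MHz really is a simple time-weighted average of just the 880MHz and 700MHz states (an assumption; the review doesn't give the exact split):

    high, low, avg = 880.0, 700.0, 850.0     # clock states and reported average, in MHz

    frac_low = (high - avg) / (high - low)   # fraction of the run spent throttled
    print(f"{frac_low:.0%} of the time at {low:.0f} MHz")
    print(f"check: {(1 - frac_low) * high + frac_low * low:.0f} MHz average")
    # ~17% of the run at the 700 MHz worst case still averages out to 850 MHz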
  • fausto412 - Wednesday, December 15, 2010 - link

    CPUs have been doing this for a while... and you are allowed to turn the feature off. AMD is giving you a range to go over.

    It will cut down on RMAs and extend reliability.
  • henrikfm - Wednesday, December 15, 2010 - link

    The right numbers for these cards considering the performance:

    6970 -> 5875
    6950 -> 5855
  • flyck - Wednesday, December 15, 2010 - link

    Anand also tested with 'outdated' drivers. It is of course AMD's fault for not supplying the best drivers available at launch. But Anand used 10.10; reviews that use 10.11, like HardOCP, see the 6950 performing equal to or better than the GTX 570!! And the 6970 trades blows with the GTX 580 but is overall a little slower (though faster than the GTX 570).

    And now we have to wait for the 10.12 drivers which were meant to be for 69xx series.
  • flyck - Wednesday, December 15, 2010 - link

    My bad, Anand tested with 10.11 :shame:
    10.12 doesn't seem to improve performance.

    That said, Anand, would it be possible to change your graphs? Start with the low quality settings and end with the high quality ones, and make the high quality chart for single cards only. Right now it just isn't readable with SLI and Crossfire numbers mixed in.

    According to your results the 6970 is > 570 and the 6950 ~ 570, but only when everything is turned on... yet one cannot deduce that from the current presentation.
  • Will Robinson - Wednesday, December 15, 2010 - link

    $740 for HD6970 CrossfireX dominates GTX580 SLI costing over $1000.
    That's some serious ownage right there.
    Good pricing on these new cards and solid numbers for power/heat and noise.
    Seems like a good new series of cards from AMD.
  • prdola0 - Wednesday, December 15, 2010 - link

    No, you're wrong. Re-read the graphs. GTX580 SLI wins most of the time.
  • softdrinkviking - Wednesday, December 15, 2010 - link

    By a small average amount, and for ~$250 extra.
    Once you get to that level, you're not really hurting for performance anyway, so for people who really just want to play games and aren't interested in having the "fastest card" just to have it, the 6970 is the best value.
  • Nfarce - Wednesday, December 15, 2010 - link

    True. However AMD has just about always been about value over an all out direct card horsepower war with Nvidia. Some people are willing to spend for bragging rights.

    But I'm a little suspicious of AT's figures for these cards. Two other tech sites (Tom's Hardware and Guru3D) show the GTX 570 and 580 solidly beating the 6950 and 6970 respectively in the same games with similar PC builds.
  • IceDread - Friday, December 17, 2010 - link

    You are wrong. HD 5970 in crossfire wins over gtx 580 sli. But anandtech did not test that.
  • ypsylon - Wednesday, December 15, 2010 - link

    A lot of people were anxious to see what AMD would bring to the market with the 6950/6970. And once again, not much. Some minor advantages (like 5FPS in a handful of games) are nothing worth writing or screaming about. For now the GTX 580 is more expensive, but with AMD unveiling new cards nVidia will get really serious about the price. That $500 price point won't live for long; I'm expecting at least $50 off in the next 4-6 weeks.

    The GTX 580 is the best option today for someone who is interested in a new card; if you currently own a 5850/5870/5970 (CF or not), don't even bother with the 69[whatever].
  • duploxxx - Wednesday, December 15, 2010 - link

    At that price point the 580 is the best buy? Get lost. The 580 is way overpriced for the small performance increase it has over the 570/6970, not to mention the additional power consumption. I don't see any reason at all to buy that card.

    Indeed there's no need to upgrade from a 58xx series, but neither is there a reason to move to an NV-based card.
  • mac2j - Wednesday, December 15, 2010 - link

    Um - if you have the money for a 580 ... pick up another $80-100 and get 2 x 6950 - you'll get nearly the best possible performance on the market at a similar cost.

    Also I agree that Nvidia will push the 580 price down as much as possible... the problem is that if you believe all of the admittedly "unofficial" breakdowns ... it costs Nvidia 1.5-2x as much to make a 580 as it costs AMD to make a 6970.

    So it's hard to be sure how far Nvidia can push down the price on the 580 before it ceases to be profitable - my guess is they'll focus on making a 565-type card which has almost 570 performance but for a manufacturing cost closer to what a 460 runs them.
  • fausto412 - Wednesday, December 15, 2010 - link

    Yeah, AMD let us down on this product. We see what the GTX 580 is and what the 6970 is... I would say if you're planning to spend $500, the GTX 580 is worth it.
  • truepurple - Wednesday, December 15, 2010 - link

    "support for color correction in linear space"

    What does that mean?
  • Ryan Smith - Wednesday, December 15, 2010 - link

    There are two common ways to represent color, linear and gamma.

    Linear: Used for rendering an image. More generally linear has a simple, fixed relationship between X and Y, such that if you drew the relationship it would be a straight line. A linear system is easy to work with because of the simple relationship.

    Gamma: Used for final display purposes. It's a non-linear colorspace that was originally used because CRTs are inherently non-linear devices. If you drew out the relationship, it would be a curved line. The 5000 series is unable to apply color correction in linear space and has to apply it in gamma space, which for the purposes of color correction is not as accurate.
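
    A minimal Python sketch of the difference, assuming the standard sRGB transfer curve stands in for the gamma encoding (the hardware's actual curve may differ). Scaling a pixel by 50% gives a noticeably different result depending on which space the correction is applied in:

    def srgb_to_linear(c):
        # Decode a gamma-encoded sRGB value (0..1) to linear light
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(c):
        # Encode a linear-light value (0..1) back to sRGB gamma
        return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

    pixel, gain = 0.5, 0.5                    # gamma-encoded input, correction gain

    corrected_linear = linear_to_srgb(srgb_to_linear(pixel) * gain)
    corrected_gamma = pixel * gain            # applying the gain in gamma space instead

    print(round(corrected_linear, 3), round(corrected_gamma, 3))   # ~0.361 vs 0.25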
  • IceDread - Wednesday, December 15, 2010 - link

    Yet again we do not get to see hd 5970 in crossfire despite it being a single card! Is this an nvidia site?

    Anyway, for those of you who do want to see those results, here is a link to a professional Swedish site!

    http://www.sweclockers.com/recension/13175-amd-rad...

    Maybe there is a Google translation available if you want to understand more than the charts show.
  • medi01 - Wednesday, December 15, 2010 - link

    Wow, 5970 in crossfire consumes less than 580 in SLI.
    http://www.sweclockers.com/recension/13175-amd-rad...
  • ggathagan - Wednesday, December 15, 2010 - link

    Absolutely!!!
    There's no way on God's green earth that Anandtech doesn't currently have a pair of 5970's on hand, so that MUST be the reason.
    I'll go talk to Anand and Ryan right now!!!!
    Oh, wait, they're on a conference call with Huang Jen-Hsun.....

    I'd like to note that I do not believe Anandtech ever did a test of two 5970s, so it's somewhat difficult to supply non-existent results in any review.
    Ryan did a single card test in November 2009. That is the only review I've found of any 5970s on the site.
  • vectorm12 - Wednesday, December 15, 2010 - link

    I was not aware of the fact that the 32nm process had been canned completely and was still expecting the 6970 to blow the 580 out of the water.

    Although we can't possibly know, and are unlikely to ever find out, how Cayman at 32nm would have performed, I suspect AMD had to give up a good chunk of performance to fit it on the 389mm^2 40nm die.

    This really makes my choice easy as I'll pickup another cheap 5870 and run my system in CF.
    I think I'll be able to live with the performance until the refreshed cayman/next gen GPUs are ready for prime time.

    Ryan: I'd really like to see what ighashgpu can do with the new 6970 cards though. Although you produce a few GPGPU charts I feel like none of them really represent the real "number-crunching" performance of the 6970/6950.

    Ivan has already posted his analysis on his blog and it seems like the change from VLIW5 to VLIW4 made a negligible impact at most. However I'd really love to see ighashgpu included in future GPU tests to test new GPUs and architectures.

    Thanks for the site and keep up the work guys!
  • slagar - Wednesday, December 15, 2010 - link

    Gaming seems to be in the process of bursting its own bubble. Graphics in games aren't keeping up with the hardware (unless you count gaming on 6 monitors) because most developers are still targeting consoles with much older technology.
    Consoles won't upgrade for a few more years, and even then, I'm wondering how far we are from "the final console generation". Visual improvements in graphics are becoming quite incremental, so it's harder to "wow" consumers into buying your product, and the costs for developers are increasing, so it's becoming harder for developers to meet these standards. Tools will always improve and make things easier and more streamlined over time I suppose, but still... it's going to be an interesting decade ahead of us :)
  • darckhart - Wednesday, December 15, 2010 - link

    that's not entirely true. the hardware now allows not only insanely high resolutions, but it also lets those of us with more stringent IQ requirements (large custom texture mods, SSAA modes, etc) to run at acceptable framerates at high res in intense action spots.
  • Remon - Wednesday, December 15, 2010 - link

    Seriously, are you using 10.10? It's not like the 10.11 have been out for a while. Oh, wait...

    They've been out for almost a month now. I'm not expecting you to use the 10.12, as those were released just 2 days ago, but you can't have an excuse for not using month-old drivers. Testing overclocked Nvidia cards against newly released cards, and now using older drivers. This site gets more biased with each release.
  • cyrusfox - Wednesday, December 15, 2010 - link

    I could be wrong, but 10.11 didn't work with the 6800 series, so I would imagine 10.11 wasn't meant for the 6900 either. If that is the case, it makes total sense why they used 10.10 (because it was the most up-to-date driver available when they reviewed).

    I am still using 10.10e and thinking about updating to 10.12, but why bother, things are working great at the moment. I'll probably wait for 11.1 or 11.2.
  • Remon - Wednesday, December 15, 2010 - link

    Never mind, that's what you get when you read reviews early in the morning. The 10.10e was for the older AMD cards. Still, I can't understand the difference between this review and HardOCP's.
  • flyck - Wednesday, December 15, 2010 - link

    It doesn't. Anand has the same results at 2560 resolutions with max details, AA and FSAA.

    The presentation on Anandtech, however, is more focused on 1680x1050 resolutions (the last graph). If you look at the first graph you'll notice the 6970/6950 perform like they do at HardOCP, i.e. the higher the quality, the smaller the gap becomes between the 6950 and 570 and between the 6970 and 580; the lower the quality, the more the 580 runs away and the 6970/6950 trail the 570.
  • Gonemad - Wednesday, December 15, 2010 - link

    Oookay, new card from the red competitor. Welcome aboard.

    But, all this time, I had to ask: why is Crysis so punitive on graphics cards? I mean, it was released eons ago, and still can't be run with everything cranked up on a single card if you want 60fps...

    Is it sloppy coding? Does the game *really* look better with all the eye candy? Or did they build an "FPS bug" in on purpose, some method of coding that was sure to torture any hardware built in the 18 months after release?

    I will get slammed for this, but for instance, the water effects on Half Life 2 look great even on lower spec cards, once you turn all the eye-candy on, and the FPS doesn't drop that much. The same for some subtle HDR effects.

    I guess I should see this game by myself and shut up about things I don't know. Yes, I enjoy some smooth gaming, but I wouldn't like to wait 2 years after release to run a game smoothly with everything cranked up.

    Another one is Dirt 2. I played it with all the eye candy at the top; my 5870 dropped to 50-ish FPS (as per benchmarks), and it could be noticed eventually. I turned one or two things off, checked that they were not missed after another run, and the in-game FPS meter jumped to 70. Yay.
  • BrightCandle - Wednesday, December 15, 2010 - link

    Crysis really does have some fabulous graphics. The amount of foliage in the forests is very high. Crysis kills cards because it really does push current hardware.

    I've got Dirt 2 and it's not close in the level of detail. It's a decent looking game at times, but it's not a patch on Crysis for the amount of stuff on screen. Half-Life 2 is also not bad looking, but it still doesn't have the same amount of detail. The water might look good, but it's not as good as a PC game can look.

    You should buy Crysis, it's £9.99 on Steam. It's not a good game IMO but it sure is pretty.
  • fausto412 - Wednesday, December 15, 2010 - link

    yes...it's not much of a fun game but damn it is pretty
  • AnnihilatorX - Wednesday, December 15, 2010 - link

    Well, the original Crysis did push things too far and could have used more optimization. Crysis Warhead is much better optimized while giving pretty much identical visuals.
  • fausto412 - Wednesday, December 15, 2010 - link

    "I guess I should see this game by myself and shut up about things I don't know. Yes, I enjoy some smooth gaming, but I wouldn't like to wait 2 years after release to run a game smoothly with everything cranked up."

    That's probably a good idea. Crysis was made with future hardware in mind. It's like a freaking tech demo: ahead of its time and beaaaaaautiful. Check it out on max settings... then come back and tell us what you think.
  • TimoKyyro - Wednesday, December 15, 2010 - link

    Thank you for the SmallLuxGPU test. That really made me decide to get this card. I make 3D animations with Blender in Ubuntu, so the only thing holding me back is the driver support. Do these cards work in Ubuntu? Is it possible for you to test whether the Linux drivers work at this time?
  • Ryan Smith - Wednesday, December 15, 2010 - link

    AMD rarely has Linux drivers ready for the press ahead of a launch. This is one such occasion.
  • MeanBruce - Wednesday, December 15, 2010 - link

    Great job on the review Ryan, hope you will cover the upcoming Nvidia 560 and 550 when they arrive. Peace Brother!
  • gescom - Wednesday, December 15, 2010 - link

    Please Anand make an update with a new 10.12 driver. Great review btw.
  • knowom - Wednesday, December 15, 2010 - link

    Until you take into consideration:

    1) Driver support
    2) Cuda
    3) PhysX

    I also prefer lower idle noise but higher load noise, rather than the reverse as with ATI, because when you're gaming you usually have your sound turned up a lot; it's when you aren't gaming that noise is more of an issue if you're seeking a quieter system.

    It's a better trade-off in my view, but they are both pretty even in terms of noise at idle and load regardless, and a far cry from quiet compared to other solutions from both vendors if that's what you're worried about, not to mention non-reference cooler designs affect that situation by leaps and bounds.
  • Acanthus - Wednesday, December 15, 2010 - link

    AMD has been updating drivers more aggressively than Nvidia lately. (the last year)
    Anecdotally, my GTX285 has had a lot more game issues than my 4890. Specifically in NWN2 and Civ5.

    Cuda is irrelevant unless you are doing heavy 1. photoshop, 2. video encoding.

    PhysX is still a crappy gimmick at this point and needs to offer real visual improvements without a 40%+ performance hit.
  • smookyolo - Wednesday, December 15, 2010 - link

    PhysX may be a gimmick in games, but it's one of the better ones.

    Also, guess what... it's being used all over the 3D animation industry.

    And guess where the real money comes from? The industry.
  • fausto412 - Wednesday, December 15, 2010 - link

    PhysX is a gimmick that has been around for some time and will never take hold. When PhysX came around it set a new standard, but since then developers have more commonly adopted Havok since it doesn't require extra hardware.

    It's all marketing and not a worthy decision point when buying a new card.
  • jackstar7 - Wednesday, December 15, 2010 - link

    Alternately, my triple-monitor setup makes AMD the obvious choice.
  • beepboy - Wednesday, December 15, 2010 - link

    Agreed on triple-monitor setup. You can make the argument that 2x 460s are cheaper and nets better performance but at the end of the day 2x 460s will be louder, use more power, more heat, etc over a single 69xx. I just want my triple monitor setup, damn it.
  • codedivine - Wednesday, December 15, 2010 - link

    Any info on cache sizes and register files?
  • Ryan Smith - Wednesday, December 15, 2010 - link

    Exactly the same as on Cypress.

    L2: 128KB per ROP block (so 512KB)
    L1: 8KB per SIMD
    LDS: 32KB per SIMD
    GDS: 64KB

    http://images.anandtech.com/doci/4061/MidLevelView...

    I don't have the register file size readily available.
  • DanNeely - Wednesday, December 15, 2010 - link

    How likely is the decrease from 2 to 1 operations per clock to affect real world applications?
  • yeraldin37 - Wednesday, December 15, 2010 - link

    My current cards are running at 870MHz (GPU) and 1100MHz (memory), faster than a stock 5870. Those benchmarks for the new 6970 are really disappointing; I was seriously expecting to get a single 6970 for Christmas to replace my 5850 OC CF cards and make room for additional cards, or even free up a PCIe slot to plug in my GTX 460 for PhysX capability. I would have been happy to get at least 80% of my current 5850 CF setup from the new 6970. What a joke! I will not make any move and will wait for the upcoming next-generation 28nm AMD GPUs. We have to be fair and mention all the great efforts from the AMD team to bring new technology to the newest Radeon cards, but it's not enough performance for die-hard gamers. If the GTX 580 were 20% cheaper I might consider buying one; I personally never, ever pay more than $400 for one (1) video card.
  • Nfarce - Wednesday, December 15, 2010 - link

    Reading Tom's Hardware, they essentially slam AMD for marketing these cards as a 570/580 beater. Guru3D is also less than friendly. Interestingly, *both* sites have benches showing the 570 and 580 beating the 6950 and 6970 commandingly. What's up with that exactly?
  • fausto412 - Wednesday, December 15, 2010 - link

    it's called AMD didn't deliver on the hype...they deserve to get slammed.
  • medi01 - Wednesday, December 15, 2010 - link

    AMD delivers cards with a better performance/price ratio that also consume less power. Why is there a reason to "slam", eh?
  • zst3250 - Friday, December 31, 2010 - link

    Off yourself cretin, prefearbly by getting your cranium kicked in.
  • Mr Perfect - Thursday, December 16, 2010 - link

    Wait, is Tom's reputable again? Haven't read that site since the Athlon XP was new....
  • AnnonymousCoward - Wednesday, December 15, 2010 - link

    As a 30" owner and gamer, I would never run at 2560x1600 with AA enabled if that causes <60fps. I'd disable AA. Who wouldn't value framerate over AA? So when the fps is <60, please compare cards at 2560x1600 without AA, so that I'm able to apply the results to a purchase decision.
  • SimpJee - Wednesday, December 15, 2010 - link

    Greetings, also a 30'' gamer. If you see the FPS above 30 with AA enabled, you can assume it will be (much) higher without it enabled so what's the point in actually having the author bench it without AA? Plus, anything above 30 FPS is just icing on the cake as far as I'm concerned.
  • AnnonymousCoward - Wednesday, December 15, 2010 - link

    First of all, 30fps is choppy as hell in a non-RTS game. ~40fps is a bare minimum, and >60fps all the time is hugely preferred since then you can also use vsync to eliminate tearing.

    Now back to my point. Your counter was "you know that non-AA will be higher than AA, so why measure it?" Is that a point? Different cards will scale differently, and seeing 2560+AA doesn't tell us the performance landscape at real-world usage which is 2560 no-AA.
  • Dug - Wednesday, December 15, 2010 - link

    Is it me, or are the graphs confusing?
    Some leave out cards on certain resolutions, but add some in others.

    It would be nice to have a dynamic graph link so we can make our own comparisons.
    Or a drop down to limit just ati, single card, etc.

    Either that or make a graph that has the cards tested at all the resolutions so there is the same number of cards in each graph.
  • benjwp - Wednesday, December 15, 2010 - link

    Hi,

    You keep using Wolfenstein as an OpenGL benchmark. But it is not. The single player portion uses Direct3D9. You can check this by checking which DLLs it loads or which functions it imports or many other ways (for example most of the idTech4 renderer debug commands no longer work).

    The multiplayer component does use OpenGL though.

    Your best bet for an OpenGL gaming benchmark is probably Enemy Territory Quake Wars.
  • Ryan Smith - Wednesday, December 15, 2010 - link

    We use WolfMP, not WolfSP (you can't record or playback timedemos in SP).
  • 7Enigma - Wednesday, December 15, 2010 - link

    Hi Ryan,

    What benchmark do you use for the noise testing? Is it Crysis or Furmark? Along the same line of questioning, I do not think you can use Furmark the way you have the graph set up, because it looks like you have left PowerTune on (which will throttle the power consumption) while using numbers from NVIDIA's cards where you have faked the drivers into not throttling. I understand one is a program cheat and the other a TDP limitation, but it seems a bit wrong not to compare them in the unmodified position (or to VERBALLY mention this had no bearing on the test and they should not be compared).

    Overall nice review, but the new cards are pretty underwhelming IMO.
  • Ryan Smith - Thursday, December 16, 2010 - link

    Hi 7Enigma;

    For noise testing it's FurMark. As is the case with the rest of our power/temp/noise benchmarks, we want to establish the worst case scenario for these products and compare them along those lines. So the noise results you see are derived from the same tests we do for temperatures and power draw.

    And yes, we did leave PowerTune at its default settings. How we test power/temp/noise is one of the things PowerTune made us reevaluate. Our decision is that we'll continue to use whatever method generates the worst case scenario for that card at default settings. For NVIDIA's GTX 500 series, this means disabling OCP because NVIDIA only clamps FurMark/OCCT, and to a level below most games at that. Other games like Program X that we used in the initial GTX 580 article clearly establish that power/temp/noise can and do get much worse than what Crysis or clamped FurMark will show you.

    As for the AMD cards the situation is much more straightforward: PowerTune clamps everything blindly. We still use FurMark because it generates the highest load we can find (even with it being reduced by over 200MHz), however because PowerTune clamps everything, our FurMark results are the worst case scenario for that card. Absolutely nothing will generate a significantly higher load - PowerTune won't allow it. So we consider it accurate for the purposes of establishing the worst case scenario for noise.

    In the long run this means that results will come down as newer cards implement this kind of technology, but then that's the advantage of such technology: there's no way to make the card louder without playing with the card's settings. For the next iteration of the benchmark suite we will likely implement a game-based noise test, even though technologies like PowerTune are reducing the dynamic range.

    In conclusion: we use FurMark, we will disable any TDP limiting technology that discriminates based on the program type or is based on a known program list, and we will allow any TDP limiting technology that blindly establishes a firm TDP cap for all programs and games.

    -Thanks
    Ryan Smith
  • 7Enigma - Friday, December 17, 2010 - link

    Thanks for the response Ryan! I expected it to be lost in the slew of other posts. I highly recommend (as you mentioned in your second-to-last paragraph) that a game-based benchmark be used along with FurMark for power/noise. Until both vendors adopt the same TDP limitation it's going to put the NVIDIA cards in a bad light when comparisons are made. This could be seen as a legitimate beef for the fanboys/trolls, and we all know the less ammunition they have the better. :)

    Also to prevent future confusion it would be nice to have what program you are using for the power draw/noise/heat IN the graph title itself. Just something as simple as "GPU Temperature (Furmark-Load)" would make it instantly understandable.

    Thanks again for the very detailed review (in 1 week, no less!)
  • Hrel - Wednesday, December 15, 2010 - link

    I really hope these architecture changes lead to better minimum FPS results. AMD is ALWAYS behind Nvidia on minimum FPS, and in many ways that's the most important measurement since min FPS determines whether the game is playable or not. I don't care if it maxes out at 122 FPS if, when the shit hits the fan, I get 15 FPS; I won't be able to accurately hit anything.
  • Soldier1969 - Wednesday, December 15, 2010 - link

    I'm disappointed in the 6970; it's not what I was expecting over my 5870. I will wait to see what the 6990 brings to the table next month. I'm looking for a 30-40% boost over my 5870 at the 2560x1600 res I game at.
  • stangflyer - Wednesday, December 15, 2010 - link

    Now that we see the power requirements for the 6970, and that it needs more power than the 5870, how would they make a 6990 without really cutting back the performance like the 5970?

    I had a 5970 for a year before selling it 3 weeks ago in preparation for getting 570s in SLI or a 6990.
    It would obviously have to be 2x 8-pin power! Or they would really have to use that PowerTune feature.

    I liked my 5970 as I didn't have the stuttering issues (or I don't notice them), and I actually have no issues with Eyefinity as I have matching Dell monitors with native DP inputs.

    If I was only on one screen I would not even be thinking upgrade but the vram runs out when using aa or keeping settings high as I play at 5040x1050. That is the only reason I am a little shy of getting the 570 in sli.

    Don't see how they can make a 6990 without really killing the performance of it.

    I used my 5970 at 5870 and beyond speeds on games all the time though.
  • anactoraaron - Wednesday, December 15, 2010 - link

    I would like to thank Ryan for the article that makes me forget the "OC card in the review" debacle. Fantastic in depth review with no real slant to team green or red. Critics go elsewhere please.
  • Hrel - Wednesday, December 15, 2010 - link

    When are you guys gonna put all these cards in bench? Some of them have been out for a relatively long time now and they're still not in bench. Please put them in there.
  • ajlueke - Wednesday, December 15, 2010 - link

    I agree with most of the conclusions I have read here. If you already own a 5800 series card, there isn't really enough here to warrant an upgrade. Some improved features and slightly improved FPS in games don't quite give the same upgrade incentive as the 5870 did compared to a 4870.
    There are some cool things with the 6900 and 6800 series. Looking at the performance in games, the 6970 and even the 6870 seemed to get much closer to 2X performance when placed in crossfire as compared to 5800 series cards. That is a pretty interesting development. All in all, a good upgrade if you didn't buy a card last generation. If you did, it seems the wait is on for the 28 nm version of the GPU.
  • Belard - Wednesday, December 15, 2010 - link

    NO!

    The x800 cards have been the HIGH end models since the 3000 series and worked well through to the 5000 series, with the 5970 being the "odd one" since an "X2" name made more sense, like the 4850X2.

    It also allows for a "x900" series if needed.

    AMD needs to NOT COPY Nvidia's naming games... did they hire someone from Nvidia? Even the GeForce 580/570 still belong to the 400 series since it's the same tech. They should have been named 490 and 475... But hey, in 12 months Nvidia will be up to the 700 series. Hey, Google Chrome is version 8.0 and it's been on the market for about 2 years! WTF?!

    What was their excuse again? Oh, to not create confusion with the 5700 series? So they frack up the whole set of model names for a mid-range card? The 6800s should have been 6700s, simple as that. Yes, there will be some people who accidentally downgrade.

    What the new 6000 series has going for AMD is that they are somewhat cheaper and easily cost less to make than the 5000s and what Nvidia makes.

    In the end, the 6000 series is the first dumb thing AMD has done since the 2000 series, but nowhere near as bad.
  • MS - Wednesday, December 15, 2010 - link

    "In terms of effienct usage of space though AMD is doing quite well;" ... "effienct" should be "efficient".

    Nice article so far,

    Regards,
    Michael
  • nitrousoxide - Wednesday, December 15, 2010 - link

    The power connector on the left (8-pin on the 6970 and 6-pin on the 6950) has its bottom-left corner cut off; that's because the cooler doesn't fit the PCB design, and if you installed it with force the power connector would get stuck. So the delay of the 6900 series could be due to this issue: AMD needed one month to "manually polish" all the power connectors of the stock cards in order to fit the cooler. Well, just a joke, but this surely reflects how poorly AMD organized the whole design and manufacturing process :)
  • nitrousoxide - Wednesday, December 15, 2010 - link

    you can find this out here :)
    hiphotos. baidu. com/coreavc/pic/item/70f48d81ffe07cf26d811957. jpg
  • nitrousoxide - Wednesday, December 15, 2010 - link

    AMD promises that every one will get a unique 6970 or 6950, different from any other card on the planet :)
  • GummiRaccoon - Wednesday, December 15, 2010 - link

    The performance of these cards is much better with 10.12, why didn't you test it with that?
  • Ryan Smith - Wednesday, December 15, 2010 - link

    10.12 does not support the 6900 series.

    8.79.6.2RC2, dated December 7th, were the absolute latest drivers for the 6900 series at the time of publication.
  • Roland00Address - Wednesday, December 15, 2010 - link

    1) The architecture article is something that can be written beforehand, or written during benching (if the bench is on a loop). There is very little "cramming" to get it out right after an NDA ends. Anand has known this info for a couple of weeks but couldn't discuss it due to NDAs. Furthermore, the reason Anandtech is one of the best review sites on the net is the fact that they do go into the architecture details. The architecture as well as the performance benchmarks is the reason I come to Anandtech instead of other review sites as my first choice.

    2) Spelling and grammar errors are a common thing at Anandtech; this is nothing new. That said, I can't complain, for my spelling and grammar are far worse than Ryan's.

    If you don't like the style of the review go somewhere else.
  • Ryan Smith - Wednesday, December 15, 2010 - link

    1) That's only half true. AMD told us the basics about the 6900 series back in October, but I never had full access to the product information (and more importantly the developers) until 1 week ago. So this entire article was brought up from scratch in 1 week.

    It's rare for us to get too much access much earlier than that; the closest thing was the Fermi launch where NVIDIA was willing to talk about the architecture months in advance. Otherwise that's usually a closely held secret in order to keep the competition from having concrete details too soon.
  • Dracusis - Wednesday, December 15, 2010 - link

    Neither the AMD 6xxx series nor Nvidia's 5xx series has been added. I would like to see how my 4870X2 stacks up against this latest generation and whether or not it's worth upgrading.
  • Makaveli - Wednesday, December 15, 2010 - link

    The Canadian pricing on these cards is hilarious.

    NCIX is taking preorders for the 6970 at $474.

    While they sell the 570 for $379.

    Can someone explain to me why I would pay $100 more for the radeon when the 570 gives equal performance?

    Are these retailers that retarded?
  • stangflyer - Thursday, December 16, 2010 - link

    They will price the 6950/6970 high for a few days to get the boys that bleed red and have to have the new cards right away to pay top dollar for the card.

    After a week they will probably be about the same price.
  • Ryan Smith - Thursday, December 16, 2010 - link

    Bench will be up to date by the start of next week.
  • Paladin1211 - Thursday, December 16, 2010 - link

    What's wrong with you, rarson? Do you even know what the difference is between a "graphics card review", a "performance review" and a "performance preview"? I don't know how good your grammar and spelling are, but they don't matter as long as you can't understand the basic meaning of the words.

    Most sites will tell you the WHAT, but here at AnandTech you'll truly find out the WHY and HOW. Well, of course, you can always go elsewhere and try to read some numbers instead of words.

    Keep up the good work, Ryan.
  • Belard - Thursday, December 16, 2010 - link

    The 3870 and 3850 were the TOP end for ATI, as were the 4800 and the 5800. Their model numbers do not have anything to do with the status of Nvidia.

    When the 3870 was brand new, what was the HIGHEST end card ATI had back then? Oh yeah, the 3870!

    4800 is over the 3870, easily.
    4600 replaced the 3800

    The 5800s replaces the 4800s... easily.
    the 5700s kind of replaced the 4800s.

    The 6800s replace the 5700s & 5800s, and the 6900s replace the 5800s, but not so much on performance.

    I paid $90 for my 4670 and it was a much better value than the $220 3870, since both cards perform almost the same.
  • AmdInside - Thursday, December 16, 2010 - link

    I can't think of a single website that has better hardware reviews, at least for computer technology than Anandtech. Ryan, keep up the great work.
  • George.Zhang - Thursday, December 16, 2010 - link

    BTW, HD6950 looks great and affordable for me.
  • AnnihilatorX - Thursday, December 16, 2010 - link

    I disagree with you rarson

    This is what sets Anandtech apart: it has quality over quantity.
    Anandtech is the ONLY review site which offers me comprehensive information on the architecture, with helpful notes on the expected future gaming performance. It mentions that AMD intended the 69xx to be built on 32nm, and made sacrifices. If you go to Guru3D's review, the editor states in the conclusion that he doesn't know why the performance lacks the wow factor. Anandtech answered that question with the process node.

    If you want to read reviews only, go onto Google and search for 6850 review, or go to DailyTech's recent hardware review posts; you can find over 15 plain reviews. Even easier, just use the Quick Navigation menu or the Table of Contents on the freaking first page of the article. This laziness does not entice sympathy.
  • Quidam67 - Thursday, December 16, 2010 - link

    Rarson's comments may have been a little condescending in their tone, but I think the criticism was actually constructive in nature.

    You can argue the toss about whether the architecture should be in a separate article or not, but personally speaking, I actually would prefer it was broken out. I mean, for those who are interested, simply provide a hyper-link, that way everyone gets what they want.

    In my view, a review is a review, and an analysis of architecture can complement that review but should not actually be part of the review itself. A number of other sites follow this formula and provide both, but don't merge them together as one super-article, and there are other benefits to this if you read on.

    The issue of spelling and grammar is trivial, but in fact could be symptomatic of a more serious problem, such as the sheer volume of work Ryan has to perform in the time-frame provided, and the level of QA being squeezed in with it. Given the nature of NDAs, perhaps it might take the pressure off if the review came first and the architecture second, so the time pressures weren't quite so restrictive.

    Lastly, employing a professional proof-reader is hardly an insult to the original author. It's no different than being a software engineer (which I am) and being backed up by a team of quality test analysts. It certainly makes you sleep better when stuff goes into production. Why should Ryan shoulder all the responsibility?
  • silverblue - Thursday, December 16, 2010 - link

    I do hope you're joking. :) (can't tell at this early time)
  • Arnulf - Thursday, December 16, 2010 - link

    "... unlike Turbo which is a positive feedback mechanism."

    Turbo is a negative feedback mechanism. If it was a positive feedback mechanism (= a consequence of an action resulting in further action in same direction) the CPU would probably burn up almost instantly after Turbo triggered as its clock would increase indefinitely, ever more following each increase, the higher the temperature, the higher the frequency. This is not how Turbo works.

    A negative feedback mechanism is a result of an action resulting in a reaction (= action in the opposite direction). In the case of CPUs and Turbo it's this reaction to temperature that keeps CPU frequency under control. The higher the temperature, the lower the frequency. This is how Turbo and PowerTune work.

    The fact that Turbo starts at lower frequency and ramps it up and that PowerTune starts at higher frequency and brings it down has no bearing on whether the mechanism of control is called "positive" or "negative" feedback.
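
    A toy Python sketch of a negative feedback clock controller in that sense (the gain, the 250W cap and the power readings are all made up for illustration; the real PowerTune logic is of course more involved):

    def step(clock_mhz, power_w, cap_w, base_mhz=880.0, gain=2.0):
        # Negative feedback: the clock moves OPPOSITE to the power error
        error = power_w - cap_w              # positive when over budget
        return min(base_mhz, clock_mhz - gain * error)

    clock = 880.0
    for draw in (180.0, 260.0, 240.0):       # hypothetical instantaneous power draw
        clock = step(clock, draw, cap_w=250.0)
        print(f"draw {draw:.0f} W -> clock {clock:.0f} MHz")
    # 180 W (under cap) -> stays at 880 MHz
    # 260 W (over cap)  -> pulled down to 860 MHz
    # 240 W (under cap) -> recovers back to 880 MHz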

    Considering your fondness for Wikipedia (as displayed by the reference in the article) you might want to check out these:

    http://en.wikipedia.org/wiki/Negative_feedback
    http://en.wikipedia.org/wiki/Positive_feedback

    and more specifically:

    http://en.wikipedia.org/wiki/Negative_feedback#Con...
  • Ryan Smith - Thursday, December 16, 2010 - link

    Hi Arnulf;

    Fundamentally you're right, so I won't knock you. I guess you could say I'm going for a very loose interpretation there. The point I'm trying to get across is that Turbo provides a performance floor, while PowerTune is a performance ceiling. People like getting extra performance for "free" more than they like "losing" performance. Hence one experience is positive and one is negative.

    I think in retrospect I should have used positive/negative reinforcement instead of feedback.
  • Soda - Thursday, December 16, 2010 - link

    Has anyone noticed the edge missing from the board's 8-pin power connector?

    Apparently AMD made a mistake in the reference design of the board and didn't calculate the space needed by the cooler.

    If you look closely on the power connector in http://images.anandtech.com/doci/4061/6970Open.jpg you'll notice the missing edge.

    For a full story on the matter you can go to http://www.hardwareonline.dk/nyheder.aspx?nid=1060...
    For the english speaking people I suggest the googlish version here http://translate.google.com/translate?hl=da&sl...

    There are some pictures there to back up the claim about the mistake AMD made.

    Though it hasn't been confirmed by AMD whether this is only a mistake on the review boards or on all cards of the 69xx series.
  • versesuvius - Thursday, December 16, 2010 - link

    I have a 3870, on a 17 inch monitor, and everything is fine as far as games go. The hard disk gets in the way sometimes, but that is just about it. All the games run fine. No problem at all. Oh, there's more: they run better on the lousy XBOX. Why the new GPU then? Giant monitors? Three of them? Six of them? (The most fun I had on Anandtech was looking at pictures of AT people trying to stabilize them on a wall). Oh, the "Compute GPU"? Wouldn't that fit on a small PCI card, and act like the old 486 coprocessor, for those who have some use for it? Or is it just a silly excuse for not doing much at all, or rather not giving much to the customers, and still charging the same? The "High End"! In an ideal world the prices of things go down, and more and more people can afford them. That lovely capitalist idea was turned on its head sometime in the eighties of the last century, and instead the notion of value was reinvented. You get more value for the same price. You still have to pay $400 for your graphics card, even though you do not need the "Compute GPU", and you do not need the aliased superduper antialiasing that nobody yet knows how to achieve in software. Can we have a cheap 4870? No, that is discontinued. The 58 series? Discontinued. There are hundreds of thousands, or to be sure millions, of people who will pay 50 dollars for one. All ATI or Nvidia need to do is fine-tune the drivers and reduce power consumption. Then again, that must be another "High End" story. In fact the only tale that is being told and retold is "High End"s and "Fool"s, (i.e. "We can do whatever we want with the money that you don't have".) Until better, saner times. For now, long live the console. I am going to buy one, instead of this stupid monstrosity and its equally stupid competitive monstrosity. Cheaper, and gets the job done in more than one way.

    End of Rant.
    God Bless.
  • Necc - Thursday, December 16, 2010 - link

    So True.
  • Ananke - Thursday, December 16, 2010 - link

    Agree. I have a 5850 and it does work fine, and I got it on day one at a huge discount, but still - it is kind of worthless. Our entertainment comes almost exclusively from consoles, and a discrete high-end card that commands an above-$100 price tag is worthless. It is a nice touch, but I have no application for it in everyday life, and several months later it is already outdated or discontinued.

    My guess is that graphics integrated into the CPU will take over, and mass-market discrete cards will go the way of the dinosaurs very soon.
  • Quidam67 - Thursday, December 16, 2010 - link

    Wonderfully subversive commentary. Loved it.

    Still, the thing I like about the high end (I'll never buy it until my mortgage is paid off) is that it filters down to the middle/low end.

    Yes, lots of discontinued product lines, but for example, I thought the HD5770 was a fantastic product. It gave ample performance for mainstream gamers in a small form factor (you can even get it in single slot) with low heat and power requirements, meaning it was a true drop-in upgrade to your existing rig, with a practical upgrade path to CrossFireX.

    As for the Xbox, that hardware is so outdated now that even the magic of software optimisation (a seemingly lost art in the world of PCs) cannot disguise the fact that new games are not going to look any better, or run any faster, than those that came out at launch. I was watching a GT5 demo the other day, and with all the hype about how realistic it looks (and plays) I really couldn't get past the massive amount of jaggies on screen. Also, the damage modelling is very limited, and in my view that's a nod towards hardware limitations rather than a game-design consideration.
  • B3an - Thursday, December 16, 2010 - link

    Very stupid, uninformed and narrow-minded comment. People like you never look to the future, which anyone should do when buying a graphics card, and you completely lack any imagination. There are already tons of uses for GPU computing, many of which the average computer user can make use of, even if it's simply encoding a video faster. And it will be used a LOT more in the future.

    Most people, especially ones that game, don't even have 17" monitors these days. The average monitor for any new computer is at least 21" at 1680 res these days. Your whole comment reads as if everyone has the exact same needs as YOU. You might be happy with your ridiculously small monitor, playing games at low res on lower settings, and it might get the job done, but lots of people don't want this; they have standards and large monitors and need to make use of these new GPUs. I can't exactly see many people buying these cards with a 17" monitor!
  • CeepieGeepie - Thursday, December 16, 2010 - link

    Hi Ryan,

    First, thanks for the review. I really appreciate the detail and depth on the architecture and compute capabilities.

    I wondered if you had considered using some of the GPU benchmarking suites from the academic community to give even more depth for compute capability comparisons. Both SHOC (http://ft.ornl.gov/doku/shoc/start) and Rodinia (https://www.cs.virginia.edu/~skadron/wiki/rodinia/... look like they might provide a very interesting set of benchmarks.
  • Ryan Smith - Thursday, December 16, 2010 - link

    Hi Ceepie;

    I've looked into SHOC before. Unfortunately it's *nix-only, which means we can't integrate it into our Windows-based testing environment. NVIDIA and AMD both work first and foremost on Windows drivers for their gaming card launches, so we rarely (if ever) have Linux drivers available for the launch.

    As for Rodinia, this is the first time I've seen it. But it looks like their OpenCL codepath isn't done, which means it isn't suitable for cross-vendor comparisons right now.
  • IdBuRnS - Thursday, December 16, 2010 - link

    "So with that in mind a $370 launch price is neither aggressive nor overpriced. Launching at $20 over the GTX 570 isn’t going to start a price war, but it’s also not so expensive to rule the card out. "

    At NewEgg right now:

    Cheapest GTX 570 - $509
    Cheapest 6970 - $369

    $30 difference? What are you smoking? Try $140 difference.
  • IdBuRnS - Thursday, December 16, 2010 - link

    Oops, $20 difference. Even worse.
  • IdBuRnS - Thursday, December 16, 2010 - link

    570...not 580...

    /hangsheadinshame
  • epyon96 - Thursday, December 16, 2010 - link

    This was a very interesting part of the article to me.

    I'm curious whether Anandtech might expand on this further in a future dedicated article comparing what NVIDIA is using to what AMD is using.

    Is NVIDIA's architecture also something like VLIW4 or VLIW5?

    Can someone else shed some light on it?
  • Ryan Smith - Thursday, December 16, 2010 - link

    We wrote something almost exactly like what you're asking for back in our Radeon HD 4870 review.

    http://www.anandtech.com/show/2556

    AMD and NVIDIA's compute architectures are still fundamentally the same, so just about everything in that article still holds true. The biggest break is VLIW4 for the 6900 series, which we covered in our article this week.

    But to quickly answer your question, GF100/GF110 do not immediately compare to VLIW4 or VLIW5. NVIDIA is using a pure scalar architecture, which has a number of fundamental differences from any VLIW architecture.
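
    To make that contrast a little more concrete, here's a toy sketch (purely illustrative - not either vendor's actual scheduler) of why VLIW throughput depends on the compiler finding independent operations to pack together, while a scalar design simply issues one operation per cycle and hides dependencies by switching between threads:

        # Toy model: each "op" either depends on the op right before it (True)
        # or is independent (False). A VLIW4 word holds up to 4 ops, but an op
        # that depends on the previous op has to start a new word.
        def vliw4_words(dep_chain, width=4):
            words, slots_used = 1, 1
            for depends_on_prev in dep_chain[1:]:
                if depends_on_prev or slots_used == width:
                    words += 1
                    slots_used = 1
                else:
                    slots_used += 1
            return words

        ilp_heavy = [False] * 16    # lots of independent math: packs well
        chain_heavy = [True] * 16   # long dependency chain: wastes 3 of 4 slots

        for name, ops in [("independent ops", ilp_heavy), ("dependent chain", chain_heavy)]:
            w = vliw4_words(ops)
            print(f"{name}: {len(ops)} ops -> {w} VLIW4 words "
                  f"({len(ops) / (w * 4):.0%} slot utilization)")

    A scalar architecture would take 16 cycles per lane for either stream, but it never leaves issue slots empty, which is part of why the two designs trade off so differently.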
  • dustcrusher - Thursday, December 16, 2010 - link

    The cheap insults are nothing but a detriment to what is otherwise an interesting argument, even if I don't agree with you.

    As far as the intellect of Anandtech readers goes, this is one of the few sites where almost all of the comments are worth reading; most sites are the opposite: one or two tiny bits of gold in a big pan of mud.

    I'm not going to "vastly overestimate" OR underestimate your intellect, though; instead I'm going to assume that you got caught up in the moment. This isn't Tom's or DailyTech; a little snark is plenty.
  • Arnulf - Thursday, December 16, 2010 - link

    When you launch an application (say a game), it is likely to be the only active thread running on the system, or perhaps one of very few active threads. A CPU with a Turbo function will clock up as high as possible to run this main thread. When further threads are launched by the application, the CPU will inevitably increase its power consumption and consequently have to clock down.

    While CPU manufacturers don't advertise the functionality in this manner, it is really no different from PowerTune.

    Would PowerTune make you feel any better if it was marketed the other way around, the way CPUs are (quoting the lowest frequency and a clock boost, provided the thermal cap hasn't been hit yet)?
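
    To put it in code form: here is a minimal, hypothetical sketch of a power-capped clock governor (the power model, floor clock and step size are made up; only the 880MHz ceiling and 250W cap are the 6970's advertised figures). Described one way it "boosts while under the cap", described the other way it "throttles when over the cap", but the loop is the same either way:

        # Hypothetical power-capped clock governor. Only the 880MHz ceiling and
        # 250W cap are real 6970 figures; everything else is invented.
        F_MIN, F_MAX = 500, 880      # MHz: assumed floor clock, advertised ceiling
        POWER_CAP = 250.0            # watts: PowerTune board power limit
        STEP = 10                    # MHz adjusted per control interval

        def estimated_power(freq_mhz, load):
            """Crude stand-in for the real on-chip power estimate."""
            return 80 + 0.22 * freq_mhz * load

        def next_clock(freq_mhz, load):
            if estimated_power(freq_mhz, load) > POWER_CAP:
                return max(F_MIN, freq_mhz - STEP)   # over the cap: clock down
            return min(F_MAX, freq_mhz + STEP)       # under the cap: clock up

        freq = F_MAX
        for load in [0.6, 0.9, 1.0, 1.0, 1.0, 0.7]:  # hypothetical workload trace
            freq = next_clock(freq, load)
            print(f"load {load:.1f} -> {freq} MHz, ~{estimated_power(freq, load):.0f} W")

    Whether that reads as a floor with boost or a ceiling with throttling is purely a matter of which clock you print on the box.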
  • versesuvius - Friday, December 17, 2010 - link

    Ananke,

    I am not very knowledgeable about this, but I don't think a modern GPU can fit inside a CPU for now. A better idea would be a console on a card. The motherboards in consoles are not much bigger than the large graphics cards of today. A console card for $100 would be great. I am sure there are no technical obstacles that the average electronics wizard cannot overcome in doing that.

    Sure, there is a use for everything. I can imagine that every single human being on earth could find a use for a Ferrari, but the point is that even those who do have one do not use it as often as their other car (Toyota, VW or whatever). In fact, there is rarely a Ferrari with more than 20,000 km on it, and even that is put on it by successive owners, not one. The average an ordinary person can stand to put on a Ferrari is 5,000 km. (Disclaimer: I do not have one. I only read something to that effect somewhere.) Having said that, I do have a sense of the "need for speed". I can remember sitting in front of the university's 80286 waiting for the FE program to spit out the results, one node at a time, click, click, ... You have millions of polygons; we can have billions of mesh nodes, and that does not even begin to model a running faucet. How's that for the need for speed? I do appreciate the current speeds.

    However, the CPU deal was and is a straight one. The graphics card deals, today, are not. To be clear, the "and" in "High End"s and "Fool"s is an inclusive one. "Someone will pay for it" was also initiated in the eighties of the last century. By the way, the big question, "Can it play Crysis?", will soon no longer be asked. Crysis 2 is coming to the consoles.
  • Quidam67 - Friday, December 17, 2010 - link

    "But can it play Crysis" should be in the Urban dictionary as a satirical reference on graphics code that combines two potent attributes: 1) is way ahead of its time in terms of what current hardware can support 2) is so badly written and optimised that even hardware that should be able to run it still can't.

    In 1000 years' time, when organic graphics cards that you can plug into your head still can't run it smoothly at 2560x1600 and 60fps, they will realise the joke was on us and that the code itself was written to run more and more needless loops in order to overwhelm any amount of compute resource thrown at it.
  • Iketh - Friday, December 24, 2010 - link

    LOL
  • marc1000 - Friday, December 17, 2010 - link

    I swear I've read ALL the comments to see if anyone already pointed it... but no one did.

    I feel a bit disappointed with this launch too (I have a 5770 and wanted to get a 6950, but was hoping for a bigger percentage increase). One interesting thing, though, is the number of Stream Processors in the new GPUs. By the "pure processor" count this number decreased from 1600 SPs on the 5870 to 1536 SPs on the 6970. But the size of the VLIW processors changed too: it was 5 SPs per processor on the 5870 and is now 4 SPs.

    So we have:
    hd5870 = 1600 SPs / 5 = 320 "processors"
    hd6970 = 1536 SPs / 4 = 384 "processors"

    If we take that 384 and multiply by 5, we would have 1920 SPs on the new generation (on par with many rumors). That is 20% more shaders, and considering AMD says the new VLIW4 is 10% faster than VLIW5, we should see more than a 20% increase in all situations. But this is only true in a minority of tests (like Crysis at 2560x1600, where it is 24%, while in the same game at 1680x1050 the increase is only 16%); the minimum FPS got better at the same time, yet in other games the difference is smaller. (A quick check of the shader math is at the end of this comment.)

    But then again, I was expecting a little more. I believe the 6950 will be a worthy upgrade for me, but expectations were so high that too many people ended up a little disappointed... myself included.
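
    For anyone who wants to double-check the shader math, here is a quick back-of-the-envelope sketch using only the SP counts and VLIW widths quoted above:

        # Back-of-the-envelope check of the VLIW-unit counts quoted above.
        cards = {
            "HD 5870 (VLIW5)": {"sps": 1600, "vliw_width": 5},
            "HD 6970 (VLIW4)": {"sps": 1536, "vliw_width": 4},
        }

        units = {name: c["sps"] // c["vliw_width"] for name, c in cards.items()}
        for name, n in units.items():
            print(f"{name}: {n} VLIW units")

        old, new = units["HD 5870 (VLIW5)"], units["HD 6970 (VLIW4)"]
        print(f"VLIW units: +{(new - old) / old:.0%}")     # 320 -> 384, +20%
        print(f"VLIW5-equivalent SPs: {new * 5}")          # 1920, matching the rumors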
  • Sunburn74 - Tuesday, December 28, 2010 - link

    Well... at least they delivered on time and didn't make you wait 6 more months to simply deliver an equivalent, if not considerably worse, product.
  • Mr Perfect - Friday, December 17, 2010 - link

    Yes, the minimums are appreciated when they're included.

    It would be even better if the framerates were displayed as a line graph instead of a bar graph. That way readers could tell whether an average consisted of a lot of high peaks and low valleys, or really was a nice smooth experience all the way through. Some other review sites use line graphs, and while I visit Anandtech for its timeliness, professionalism, industry insight and community involvement, I go to the other sites for the actual performance numbers.
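
    Something as simple as plotting the per-frame data would do it. A minimal sketch of the idea (the frame times are hypothetical, and it assumes matplotlib is available):

        # Plot instantaneous FPS per frame against the single average a bar
        # chart would report. Frame times below are hypothetical.
        import matplotlib.pyplot as plt

        frame_times_ms = [16, 17, 15, 40, 42, 16, 15, 14, 38, 16, 15, 17]
        fps_per_frame = [1000.0 / t for t in frame_times_ms]
        average_fps = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)

        plt.plot(range(1, len(fps_per_frame) + 1), fps_per_frame, label="per-frame FPS")
        plt.axhline(average_fps, linestyle="--", label=f"average ({average_fps:.0f} FPS)")
        plt.xlabel("frame #")
        plt.ylabel("FPS")
        plt.legend()
        plt.show()

    Two cards could post the same average on data like that while one of them dips into the 20s every few seconds; a bar chart hides exactly that.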
  • Quidam67 - Friday, December 17, 2010 - link

    There is further rationale for splitting the article. Let's say someone is googling "HD 6970 architecture"; perhaps they will pick up this review, or perhaps they won't, but either way, if they see that it is actually a review of the cards, they might be inclined to bypass it in favour of a more focused piece.

    And again, there is no reason why the Architecture Article can't provide a hyperlink to the review, if the reader then decides they want to see how that architecture translates into performance on the current generation of cards supporting it.

    I really hope AT are reading this and giving it some consideration. As you say, they are a great site and no one is disputing that, but it's not a religion, so you should be allowed to question it without being accused of blasphemy :O)
  • dustcrusher - Friday, December 17, 2010 - link

    It really comes down to how important the mainstream market is. If they are a large enough segment of the market, one company using a simple, easy-to-grasp naming convention would likely grab some market share. Make it easy to buy your product and at least some people will be more likely to do so.

    If not, then it's fun to talk about but not terribly important. Tech-savvy folk will buy whatever meets their needs price/performance-wise after doing research, even if a card is named the Transylvania 6-9000 or the Wankermeister GTFO. Eager-to-please, tech-naive folk are going to buy the largest model number they can get with the money they have, because "larger model number = bigger/better equipment" is long-established consumer shorthand.

    I have a half-baked idea for a model-numbering system based around the key specs of the card: a 5-digit system where the first digit is the hardware platform ID (like what we have now, mostly) and the other four represent combinations of the remaining specs (for example, one digit could encode memory clock and bus width, with the lowest clock on the narrowest bus being 1, the next combination up being 2, and so on). A rough sketch is at the end of this comment.

    No idea if this could actually be implemented; there are probably too many variables with GPU/memory clock speeds, among other things.
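
    For what it's worth, the encoding part is trivial; the hard part is agreeing on the tiers. Here is that rough sketch (all tiers, thresholds and example specs are my own hypothetical choices):

        # Rough sketch of the 5-digit idea: first digit is the platform ID,
        # the rest are spec tiers. Thresholds are arbitrary examples.
        def spec_digit(value, thresholds):
            """Map a spec onto a single digit: 1 for the lowest tier, up to 9."""
            digit = 1
            for t in thresholds:
                if value >= t:
                    digit += 1
            return min(digit, 9)

        def model_number(platform_id, core_mhz, mem_mhz, bus_bits, shaders):
            digits = [
                platform_id,                                   # hardware generation
                spec_digit(core_mhz, [600, 700, 800, 900]),    # core clock tier
                spec_digit(mem_mhz, [1000, 1200, 1375, 1500]), # memory clock tier
                spec_digit(bus_bits, [128, 192, 256, 384]),    # bus width tier
                spec_digit(shaders, [400, 800, 1200, 1600]),   # shader count tier
            ]
            return "".join(str(d) for d in digits)

        # Approximate reference specs (a 6970-class and a 6870-class card), for illustration:
        print(model_number(6, core_mhz=880, mem_mhz=1375, bus_bits=256, shaders=1536))
        print(model_number(6, core_mhz=900, mem_mhz=1050, bus_bits=256, shaders=1120))

    The obvious problem shows up immediately: a higher clock on a smaller chip can out-digit a bigger chip, so the number stops tracking actual performance, which is probably why nobody ships a scheme like this.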
  • Shinobi_III - Saturday, December 18, 2010 - link

    If you ever saw NVIDIA 4xAA in action, you know it's not as smooth as the Radeon implementation (especially in motion), and z-buffer miscalculations have always been an NVIDIA feature.

    Go up a hill in Fallout: New Vegas and look at Vegas on the horizon; with NVIDIA cards it always looks like a disco due to overlapping meshes. Now do the same on a Radeon.
  • TheUsual - Saturday, December 18, 2010 - link

    Right now, Newegg has a 6870 for $200 after rebate. Two of these make for an awesome value at $400. The top tier of cards doesn't give a corresponding increase in performance for the extra cost: two 6950s cost 50% more but do not give you 50% more FPS. Two GTX 460 1GBs are also a great bang for the buck at $300.

    Neither of these lets you do triple SLI/CrossFire, however. That is what you would be paying extra for.

    My hope is that the price will drop on the 6950 by around February. By then the GTX 560 should be out and might drive prices down some. The benchmarks could change some with Sandy Bridge too, if they are currently CPU bound.
  • 529th - Sunday, December 19, 2010 - link

    Great job on this review. Excellent writing and easy to read.

    Thanks
  • marc1000 - Sunday, December 19, 2010 - link

    Yes, that's for sure. We will have to wait a little to see improvements from VLIW4. But my point is the "VLIW processor" count: it went up by 20%. With all the other improvements, I was expecting a little more performance, that's all.

    On the other hand, I was reading the graphs and decided that the 6950 will be my next card. It has double the performance of the 5770 in almost all cases. That's good enough for me.
  • Iketh - Friday, December 24, 2010 - link

    This is how they've always reviewed new products? And perhaps the biggest reason AT stands apart from the rest? You must be new to AT??
  • WhatsTheDifference - Sunday, December 26, 2010 - link

    The 4890? I see every NVIDIA config (never a card overlooked there, ever), but ATI's (then) top card is conspicuously absent. As long as you include the 285, there's really no excuse for the omission. Honestly, what's the problem?
  • PeteRoy - Friday, December 31, 2010 - link

    All games released today are at the graphics level of 2006. How many games do you know that can bring out the most from this card? Crysis, from 2007?
  • Hrel - Tuesday, January 11, 2011 - link

    So when are all these tests going to be re-run at 1920x1080? Because quite frankly that's what I'm waiting for. I don't care about any resolution that doesn't work on my HDTV; I want 1920x1080, 1600x900 and 1280x720. If you must include uber resolutions for people with uber money then whatever, but those people know to just buy the fastest card out there anyway, so they don't really need performance numbers to make up their minds. Money is no object, so just buy NVIDIA's most expensive card and you're off.
  • AKP1973 - Thursday, October 13, 2011 - link

    Have you guys noticed the load GPU temp of the 6870 in CrossFire? It produces far less heat than any other enthusiast card in a multi-GPU setup. That's one of the best CrossFire cards out today if you value price, performance, cool temps, and silence!
  • Travisryno - Wednesday, April 26, 2017 - link

    It's dishonest to refer to enhanced 8x as 32x. There are industry standards for this, which AMD, NEC, 3dfx, SGI, Sega AM2, etc. (everybody) always followed; then NVIDIA just makes up their own...
    Just look how convoluted it is.
