Not even close, unless you are talking about outdated distributed computing projects like Folding@Home. Try any of the modern DC projects like Collatz Conjecture, MilkyWay@home, etc. and a single HD4850 will smoke a GTX580. This is because GeForce Fermi cards have their double-precision rate capped at 1/8th of their single-precision performance.
In other words, an HD6990, which has 5,100 Gflops of single-precision performance, will have 1,275 Gflops of double-precision performance (since AMD allows 1/4th of its SP rate). In comparison, the GTX470 has 1,089 Gflops of SP performance, which only translates into 136 Gflops in DP. Therefore, a single HD6990 is 9.4x faster in modern computational GPGPU tasks.
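To make that arithmetic explicit, here is a minimal sketch in Python using the theoretical peak figures quoted above as assumed inputs (these are paper numbers, not measured throughput):

```python
# Theoretical peak numbers quoted above (assumed values), in Gflops.
cards = {
    "HD 6990": {"sp_gflops": 5100, "dp_ratio": 1 / 4},  # AMD: DP = 1/4 of SP
    "GTX 470": {"sp_gflops": 1089, "dp_ratio": 1 / 8},  # GeForce Fermi: DP capped at 1/8 of SP
}

dp = {name: c["sp_gflops"] * c["dp_ratio"] for name, c in cards.items()}
for name, gflops in dp.items():
    print(f"{name}: {gflops:.0f} Gflops double precision")

print(f"Ratio: {dp['HD 6990'] / dp['GTX 470']:.1f}x")  # ~9.4x, on paper only
```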
Those are just theoretical performance numbers. Not all programs *even newer ones* can effectively extract ILP from AMD's VLIW4 architecture. Those that can will no doubt run faster; those that can't would be slower. As far as I'm aware, lots of programs still prefer nV's scalar arch, but that might change with time.
Well.. if you can only use 1 of 4 VLIW units in DP then you don't need any ILP. Just keep the threads in flight and it's almost like nVidia's scalar architecture, just with everything else being different ;)
It all depends on the driver and compiler implementation, and the guy/gal coding it. If you code the same but the compilers are generations apart, then the newer-generation compiler wins out. If you've had more experience with CUDA-based OpenCL, then your NVIDIA OpenCL implementation will outperform your ATI Stream implementation. Pick your card for its purpose. My homebrew stuff works great on NVIDIA, but I only code for NVIDIA - same thing for big-league compute directions.
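Since so much comes down to which vendor stack you are targeting, a quick capability check at runtime helps pick the right code path. A minimal sketch, assuming Python with the pyopencl package installed (the platform and device names printed are simply whatever your drivers report):

```python
import pyopencl as cl

# Enumerate OpenCL devices and check whether they advertise double-precision
# support, so the host code can pick an SP or DP (or vendor-specific) path.
for platform in cl.get_platforms():
    for device in platform.get_devices():
        ext = device.extensions
        has_fp64 = "cl_khr_fp64" in ext or "cl_amd_fp64" in ext
        print(f"{platform.name} / {device.name}: FP64 = {has_fp64}")
```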
Don't like the direction this is going. In GPUs it's hard to see any performance advances that don't come with equivalent increases in power usage, unlike what Core 2 was compared to the Pentium 4.
Are you kidding? I have a 7900GTX I don't even use, because it fried my only spare large power supply. A 5670 is twice as fast and consumes next to nothing.
I'm disappointed at the choices AMD made with the cooler. The noise levels are truly intolerable; it seems like it would have made more sense to go with a triple-slot card that would be more capable of handling the heat without painful levels of noise. It'll be interesting to see how the aftermarket cooler vendors like Arctic Cooling and Thermalright handle this.
There's actually a good reason for that. I don't believe I mentioned this in the article, but AMD is STRONGLY suggesting not to put a card next to the 6990. It moves so much air that another card blocking its airflow would run the significant risk of killing it.
What does this have to do with triple-slot coolers? By leaving a space open, it's already taking up 3 spaces. If the cooler itself takes up 3 spaces, those 3 spaces + 1 open space is now 4 spaces. You'd be hard pressed to find a suitable ATX board and case that could house a pair of these cards in Crossfire if you needed 8 open spaces. Triple slot coolers are effectively the kryptonite for SLI/CF, which is why NVIDIA isn't in favor of them either (but that's a story for another time).
If it means a quieter card then that might have been a compromise worth making. Also, 2.5 slots would stop people from making the ill-advised choice of using the slot next to the card, thus possibly killing it!
However, if one is even contemplating Crossfire or SLI then a triple-slot space between the PCIe X16 SOCKETS for a pair of high-power 2-slot-cooler graphics cards with "open-fan" cooling (like the 6990) is recommended to avoid one card being fried by lack of air. This socket-spacing allows a one-slot clear air-space for the "rear" card's intake fan to "breathe". (Obviously, one must not plug any other card into any motherboard socket present in this slot)
In the case of a pair of 6990 (or a pair of nVidia's upcoming dual-GPU card), a minimum one-slot air-space between cards becomes MANDATORY, unless custom water or cryo cooling is installed.
Very few current Crossfire/SLI-compatible motherboards have triple-slot (or more) spaces between the two PCIe X16 connectors while simultaneously having genuine X16 data-paths to both connectors. That socket spacing is becoming more common with high-end Sandy Bridge motherboards, but functionality may still be constrained by X8 PCIe data-paths at the primary pair of X16 connectors.
To even attempt to satisfy the data demands of a pair of 6990s in Crossfire with a SINGLE physical CPU, you really do need an X58 motherboard and a Gulftown Core i7 990X processor, or maybe a Core i7 970 heavily overclocked. For X58 motherboards with triple-spaced PCIe sockets properly suitable for Crossfire or SLI, you need to look at the ASRock X58 "Extreme" series of motherboards. These do indeed allow full X16 data-paths to the two primary PCIe X16 "triple-spaced" sockets.
Many ATX motherboards have a third "so-called" PCIe X16 socket in the "slot 7" position. However, this slot is always incapable of a genuine X16 pairing with either of the other two "X16" sockets. Anyway, this "slot 7" location will not allow any more than a two-slot-wide card when the motherboard is installed in a PC "tower" -- an open-fan graphics card will have no proper ventilation here, as it comes right up against either the power supply (if bottom-loaded) or the bottom plate of the case.
Exactly. For people who are going to do quad-Crossfire with these, you pretty much have to add the cost of a liquid cooling system to the price of the cards, and it's going to have to be a pretty studly liquid cooler too. Of course, the kind of person who "needs" (funny, using that word!) two of these is also probably the kind of person who would do the work to implement a liquid cooling system, so that may be less of an issue than it otherwise might be.
So, here's the question (more rhetorical than anything else). For a given ultra-high-end gaming goal, say, Crysis @ max settings, 60fps @ 3x 2560x1600 monitors (something that would require quad Crossfire 69xx or 3-way SLI 580), with a targeted max temperature and noise level... which is the cheaper solution by the time you take into account cooling, case, high-end motherboard, and the cards themselves? That's the cost comparison that needs to be made, not just the cost of the cards themselves.
Before anyone thinks of buying this card stock, you should really go out and get a sense of what that kind of noise level is like. Unless you have a pair of high quality expensive noise-cancelling earbuds and you're playing games at a loud volume, you're going to constantly hear the fan.
$700 isn't the real price. Add on some aftermarket cooling and that's how much you're going to spend.
If you're going to keep your case open all day with your ear to the graphics card, then you might get that 70dB+, which won't be too nice on the ear. On the other hand you won't get much gaming done. :)
I don't know the exact distance Anandtech measured this noise at, but Kitguru measured at about 1 metre and got 48dB when running Furmark, 40dB for normal load.
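For rough comparisons between measurements taken at different distances, the free-field point-source approximation (about 6 dB quieter per doubling of distance) is usually good enough. A small sketch, treating Kitguru's 1-metre figure as the assumed reference; real rooms and a broad card-plus-case source will behave somewhat differently:

```python
import math

def spl_at_distance(spl_ref_db, ref_distance_m, new_distance_m):
    """Free-field point-source estimate: level drops ~6 dB per doubling of distance."""
    return spl_ref_db - 20 * math.log10(new_distance_m / ref_distance_m)

# Kitguru's Furmark figure at an assumed 1 m, projected to a ~2 m seating position.
print(round(spl_at_distance(48, 1.0, 2.0), 1))  # ~42.0 dB
```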
70dB is very low and only would apply if you are essentially living next to your computer - that's a 24-hour exposure level.
NIOSH recommends 85dB as the upper limit for 8 hours of exposure, with a 3dB exchange rate - that is, every time you halve the amount of time you're exposed to the sound you can increase the volume by 3dB.
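In other words, the permissible exposure time halves for every 3 dB above the 85 dB criterion. A short sketch of that relationship (the levels used are just examples):

```python
def niosh_permissible_hours(level_db, criterion_db=85.0, exchange_rate_db=3.0):
    """Permissible exposure time: 8 h at the criterion level, halved for every
    `exchange_rate_db` increase above it (NIOSH-style 3 dB exchange rate)."""
    return 8.0 / 2 ** ((level_db - criterion_db) / exchange_rate_db)

for level in (85, 88, 91, 100):
    print(f"{level} dB -> {niosh_permissible_hours(level):.2f} h")
```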
*ahem* "The document identifies a 24-hour exposure level of 70 decibels as the level of environmental noise which will *prevent* any measurable hearing loss over a lifetime"
does NOT say what the maximum is.
from my experience as an audio tech: 95dB is the start of temporary hearing loss, 110dB is the start of permanent and 140dB is the threshold of pain.
and for drunk people, they can't hear anything below 150dB :)
Both AMD and nVidia are out of their minds; they are ignorant of the consequences of putting two gigantic chips with 5+ billion transistors on the same board. I can't see the point of buying such an outrageous card instead of building a CFX/SLI system. At least the latter isn't that loud, isn't that hot, and consumes hardly more power than those monstrosities.
TSMC is to blame. They dropped 32nm, so it is impossible to get the 6990/590 within a 300W power envelope. But neither AMD nor nVidia will turn back; they keep making these nonsensical flagship cards.
What a shame it will be when next-generation 28nm single-GPU flagships wipe out these monsters with ease while consuming half the power and running silent.
That hasn't traditionally happened. Look at the comparison between the 6990 and the 4870x2 - 2 generations, IIRC one process shrink. The 6990 does generally put up much better numbers, but consumes a lot more power to do so. Looking at a comparison between the 4870x2 and 5870 (dual GPU on a larger process versus next-gen single GPU) they are very close, with the 4870x2 overall holding a slight lead. And of course none of these high-end reference cards have ever been silent.
To AMD's credit, they were good sports and offered to take any pictures we needed. So all of those disassembled shots came from them. They were really adamant about it being a bad idea to take these things apart if you intended to use them in the future.
My problem with the shot is the poor application of thermal paste visible in the picture. On a card of this magnitude, having a perfect coating of thermal compound is critical. And knowing marketing, if that is the shot they SHOW, how good do you really think the application is on one purchased in retail channels?
I haven't been following CrossFire / SLI or these single-card dual-GPU products closely. (Which to me are the same as two cards anyway.)
Do they still need the drivers to have a specific profile for the game to take advantage? I.e., will a game unknown to the drivers gain 0 benefit from the 2nd GPU?
If that is so, then they are not even worth a look.
Most of the big games support SLI and CrossfireX today. For example, I play Battlefield BC2 myself, and using 2 video cards blows the doors off a single card and is well worth the investment, especially if you have a 2560x1600 resolution monitor like I do. For 2 ATI cards in Crossfire you install a separate Crossfire profiles pack in addition to the Catalyst drivers. The profiles pack supports all the game Crossfire optimization. The AI mode in the Catalyst drivers exploits the game profile pack settings for Crossfire, so you want that enabled.
A dual GPU HD 6990 is essentially crossfire on a single card, but for some reason it doesn't perform as well as dual single GPU cards in crossfire. Go figure.
If you play old games that don't support SLI or Crossfire then YES there is 0 gain. If you have not tried any of the new games what are you waiting for? It doesn't suck you know?!
I agree, unless you have Eyefinity and one slot, paying a premium for heat and noise doesn't make sense. For most people with 1 screen a 6970 or 580 is more than enough.
I'd be shocked if they sold even one of these without it being returned at some point. The noise levels are astonishing. At full blast the thing doesn't even meet federal vehicle noise regulations without being classified as a motorcycle!
Minor typo in section "ONCE AGAIN THE CARD THEY BEG YOU TO OVERCLOCK" second to last paragraph second sentence says "...the 6690OC’s core clock is only 6% faster and the memory clock is the same, versus..."
Yes, this is the article I was waiting for. Time to get rid of my 2 HD 5870 cards and purchase 2 HD 6970 ones. I wouldn't get an HD 6990. That is pretty clear.
Ryan, any chance you'll be doing a thermal compound review soon? 8% against their stock compound. How much better is it than current performance aftermarket compounds?
Quite difficult to get accurate thermal compound numbers. There's no way you can guarantee that the compound will be spread evenly and accurately every time. Any big 8°C differences will show, sure, but you're always playing with statistics to +/- 3°C. Then there's the inevitable argument about the right way to apply the paste...
More importantly, the normal compound most manufacturers use is junk compared to a good thermal compound such as Arctic Silver (I don't keep up on the latest brands, as I still have Arctic Silver 3 that works great for me). So that 8% might very well be true, since the normal stuff is of poor quality.
But a few issues need to be addressed. Noise for starters - nearly 80dBA. That's like working in a foundry. Also, cooling is highly inefficient for a card of this size. It needs some 3rd-party solution or water cooling altogether.
The biggest problem for the 6990 could be (or rather will be) nVidia. If they price the GTX590 at the same level or even below the $700 price tag, then AMD will be totally screwed. For now I'm waiting for the GTX590 and 6990 with some aftermarket coolers, as the stock solutions are completely unacceptable.
One thing straight - I do not sleep on ca$h, and if I buy a 6990/590 it will be a ma$$ive expense for me, but... What swings things for me with cards like this is that I do not need an uber VGA for 30 monitors. All I want is a card with a large frame buffer which will live in my PC for ~10 years without needing an upgrade, and which will occupy only 1 PCIe x16 slot. SLI/CF is totally misguided if you have some more hardware installed inside. Sometimes (with all that SLI/CF popularity) I wonder why 7-slot ATX is still alive and 10-12 slot motherboards are not a standard.
The card needs 3 slots to keep cool and such. They should have made a 2.5-slot card, but with a bit of a twist.
Channel the AIR from the front GPU chamber into a U-duct, then split it into a Y that goes around the fan (which can still be made bigger). The ducts then exhaust out the back in a "3rd slot". Or a duct runs along the top of the card (out of spec a bit) to allow the fan more air space. It would add about $5 for more plastic.
Rather than blowing HOT air INTO the case (which would then recycle BACK into the card!), or blowing HOT air out the front and onto your foot or arm.
I'm always wondering why reviews neglect this topic. Can this card run 3 monitors @ 1920x1080 at 120 Hz? 120 Hz monitors/projectors offer not only 3D but foremost smooth transitions and less screen tearing. Since this technique is available and gaining more and more fans, I really would like to see it tested. Can anybody enlighten me? (I know that dual-link is necessary for every display and that AMD had problems with 120 Hz + Eyefinity.) Did they improve?
...with two slightly downclocked 6950s. Alternatively, a 6890, albeit with the old VLIW5 shader setup. As we've seen, the 5970 can win a test or two thanks to this even with a substantial clock speed disparity.
The 6990 is stunning but I couldn't ever imagine the effort required to set up a machine capable of adequately running one... and don't get me started on two.
Could someone please confirm if this card supports 30-bit colour?
Previously, only AMD's professional cards supported 30-bit colour, with the exception of the 5970. I will buy either the 6990 or Nvidia's Quadro based on this single feature.
(Because somebody will inevitably say that I don't need or want 30-bit colour, I have a completely hardware-calibrated workflow with a 30" display with 110% AdobeRGB, 30-bit IPS panel and the required DisplayPort cables. Using 24-bit colour with my 5870 CF I suffer from _very_ nasty posterisation when working with high-gamut photographs. Yes, my cameras have a colour space far above sRGB. Yes, my printers have it too.)
Just a heads up for anyone buying the card and wanting to remove the stock cooler... There is a small screw on the back that is covered by two stickers (it's under the two stickers that look like a barcode). Well, removing that, you will then notice a warranty-void logo underneath it... I just wanted to point it out to you all...
Didn't bother us too much here, seeing as ours is a sample, but I know to some dropping £550-ish UK is quite a bit of cash, and if all you are doing is having an inquisitive look it seems a shame to void your warranty :-S
I'd like to know how you benchmark those cards in Civ V. I suppose it's the in-game benchmark, isn't it? Well, I read some tests on one site using their own test, recorded after some time spent in the game using FRAPS, and I'm wondering if the in-game test really is that different a scenario. According to that source, in the real-world situation nVidia cards show no performance improvement whatsoever over AMD's offerings. If you could investigate that matter it would be great.
Yes, using the LateGameView benchmark. Like any other benchmark it's not a perfect representation of all scenarios, but generally speaking it's a reasonable recreation of a game many turns in with a large number of units.
All cards at this level are niche. Very few of us have that much to splash on one component.
I find it amusing that most of the folks here going "oh wow that's too noisy/power hungry/slow etc. so I won't be buying!" will then just load up Crysis on their 5770/GTX460-equipped PCs.
Note to 95% of you reading this article... this card isn't/wasn't designed for you.
Then again, my 4850CF/E8500 system just played the Dragon Age 2 demo at 1920x1200 with absolutely no problems, so I have no reason to upgrade just yet, still.
I also just wanted to touch on that comment. Whilst these cards seem excessive to some, you have to remember that at this moment in time they are, as I dub it, the Veyron moment - like others in the past were the Concorde moment. They might not be practical for most people, but it's like me saying to my team, "look, let's see what we can do." Not only that, but having the crown of fastest single card (agreed, single card but multi-GPU) goes a long way for brand loyalty and advertising.

An example I like to use is the GTX 560. It's a fantastic card and in many ways better... hang with me a second. The actual raw power you get for sub-£200 is incredible; also factor in that it's quiet and won't eat through your electricity like a moth through Primark. But... to not produce these high-end cards would be criminal. We need people to keep pushing as hard and fast (that sounds so wrong) at the boundaries (agreed, quite crudely in the 6990's case, but hey, I don't sit in either camp - just wait a few days for the 590 for brute but crude).

With the reduction to 28nm, power consumption and heat will be brought down further (no need to point out that a lot of factors are at play here; it was just a generalisation), but sure, we could see cards just as big and hot within practical reason. I would not say it's out of hand; I would say it's progress, and with progress we can learn.
The 6990 stands out as a massive single-card leader. The 6950CF offers far better price/performance, with potential for 6970CF performance through BIOS flashing.
Maybe you should break up some of the charts to show only single-card configurations (for those with motherboards lacking full/partial SLI/CF support).
It will be interesting to see how enabled, and at what clocks, the GF590 will arrive in order to keep its power draw and temps down to reasonable levels.
I wonder if someone would place a carbon tax on these bad boys....lol
Why are they available for OEM only? They look interesting, especially the 6670, which with its 480 SPs should be faster than the 5670, which has 400 SPs and a lower frequency. Do you plan to review them?
As you note, they're OEM only. AMD will release them to the retail market eventually, but clearly they're not in a hurry. It's unlikely we'll review them until then, as OEM cards are difficult to come by.
I have to ask: you bring up the price and say that you might as well do two 6950's in SLI when this thing doubles the performance of the GTX580 - I mean, would that also not be a better solution than a GTX580, which is $500, while two 6950's can apparently double it for $550, seeing as they can be found for $225 after rebates these days?
You sound a little confused. You can't run ATI cards in SLI, they run in what is called crossfire (or crossfirex which is the same thing). Two 6950's don't equal GTX580 in SLI. You need two HD 6970 cards in crossfire to nearly equal two GTX580 in SLI.
In my opinion, why limit your performance with 2 HD 6950 cards? Why not just buy the 2 HD 6970 cards and never have to second-guess whether you should have or not? But... that's just me. I have a job.
Totally unnecessary closing comment there, considering most people here do actually have jobs. Not everyone who has a job can afford such gear, as there are more important things to spend money on.
You sound confused, too. He miswrote SLI, but you misunderstood his point entirely. He's saying that two 6950's are significantly faster than a single 580 for almost the same price.
Funny, after reading this review I went into town (Tokyo) to buy a new hard disk and saw this card for sale. So in Japan at least it is already on the market... the price was ridiculous though: 79,000 yen, or about $945 US. I'm sure it will be available everywhere soon.
Waterblocks, on the other hand, could be a couple of months or so away, I guess...
First off, nice article Ryan. Good data, relevant commentaries on said data, and conclusions.
You mention in the article that you believe some of the shortcomings of the 6990 to be a lack of PCIe bandwidth. This got me thinking that perhaps it is a good time to revisit the effect of PCIe bandwidth on modern cards. Given the P67 only natively supports 16 lanes, I'm curious to see what effect it has on CF/SLI. It could make big difference in the recommended hardware for various levels of gaming systems.
Typically, someone looking for a CF/SLI setup will get a board that supports more lanes. However, I have seen a situation where a friend built a budget i5 system and about 4 months later was in a position to acquire an HD5970 on the cheap (relatively speaking). Clearly, two HD5850s/HD5870s would have been an option.
If newer cards are effectively PCIe bandwidth limited, then a 6990 may perform more closely to an HD6970 CF setup in such a system than it does in these graphs. This would be even more of a consideration at the high end if the rare boards with support for 4x8 lane (spaced) PCIe give you no real benefit over a more common 2x16 lane board (comparing 4 HD6970s to 2 HD6990s).
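For reference when thinking about where the limit might bite, the raw per-direction PCIe 2.0 numbers are easy to work out from the link rate (5 GT/s per lane with 8b/10b encoding); this is a rough sketch that ignores protocol overhead:

```python
def pcie2_gbytes_per_s(lanes):
    """Per-direction PCIe 2.0 bandwidth: 5 GT/s per lane with 8b/10b encoding."""
    return lanes * 5.0 * (8 / 10) / 8  # GT/s -> usable Gbit/s -> GB/s

for lanes in (16, 8, 4):
    print(f"x{lanes}: {pcie2_gbytes_per_s(lanes):.1f} GB/s per direction")
# x16 ~8 GB/s vs. x8 ~4 GB/s -- the two configurations being compared above.
```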
I have 2 XFX HD 5870 cards for sale. I have a double lifetime warranty on these, so you get the use of the second lifetime warranty. Interested? They are very great performers, I can vouch for that. I am used to upgrading my GPU on an annual basis, so I am upgrading to 2 HD 6970. $230 each.
I kinda wanted to see a chart with the most common gaming resolution... and can we benchmark with a Q9550 just for comparison? I would love to know if I'm holding back a video card by not going i5 or i7, and by how much.
The most common gaming resolution for this card is the one Ryan tested. It is pointless to test at a lower resolution other than possibly true 24" (1920X1200). And even at that res this card is really not needed.
It seems we're getting into the Pentium 4 trap, a bit. Big, hot, power-hungry, noisy chips... personally, I'm going to pass on this generation of GPUs. I'm waiting for a revolution in either manufacturing or coding. It's all well and good to have a fast computer for getting what you need done in minimal time, but at the risk of the box taking flight because the fans are now of jet-engine proportions in speed and power, I'd rather not be able to hear my fans over my headphones... or risk my cat getting sucked into the intake.
Well we've kinda got what we asked for. We've all gamely been buying more and more powerful graphics cards with regards to brute force rendering power.
We've shown we love buying 750w+ power supplies with multiple GPU connectors, buying SLI and Xfire setups galore.
So the GPU corps think we love nothing more than just piling on more and more power and wattage to solve the situation.
It works both ways.
What we should have been doing was challenging AMD and Nvidia to develop smarter rendering techniques. Had either of them developed PowerVR to the state we are in today we would be in a far better place. Chances are the most power hungry card we'd have today would be 5770 level.
We need something more efficient like PowerVR to take us to the next level.
Are you waiting to update your test system until the SATA port issue is corrected? Seems to me that anyone wanting to buy this card would also be using an overclocked 2600K... According to the Bench numbers, the 2600K offers roughly 30% more frames than the 920, depending on the game. That indicates to me that your test system is insufficient to properly test this card.
Granted, since the vast majority of displays are fixed at 60Hz, fps counts beyond that don't really matter, but I have to wonder what impact this would have on folks with 120Hz-native LCDs. That extra 30% could make the difference.
It will be curious to see what impact the bandwidth will have... then again, even with the restriction, the current Sandy Bridge systems still dominate the previous chips.
In reality, 16/16 or 8/8 really doesn't have much impact. The difference even at 2560x1600 with all the fixins in even the most demanding games is <1%. Unless AT's new test system will feature six displays and 4K+ resolutions, I'm not sure SNB-E is worth waiting so long for (yes, that could be perceived as a challenge!)
In any case, I'm looking forward to it! Thanks for the article!
I hope you say the same thing when your friend NVIDIA releases their 590 card. I also hope you say the exact same words - that the 590 doesn't make any sense, since a pair of 560s or 570s can give you the same performance as the 590. I can't wait to see your article on the 590; I'll be waiting for Anand for this, because we all know the 590 is going to be downclocked.
With cards designed specifically with multi monitor gaming in mind, you may want to include those resolutions. Buying this card for 1920x1200 would make zero sense.
I think it was good to have both. Most people buying this card will likely have 30" displays, but I'm sure some (competitive FPS players, for example) will want an extremely fluid display even in busy scenes, as will the person that doesn't yet have the cash to upgrade to a big screen but plans to in the near future.
I would also argue that there are likely vastly more people playing on large single-screen displays than eyefinity folks so this does make more sense. And honestly when some of the games are averaging in the sub 80-100 fps range, those minimum framerates approach questionable playability depending on type of game.
So basically as crazy as it is to say this, the graphical power isn't quite there yet to use Eyefinity at high detail settings in more recent and demanding games.
"With but a trio of exceptions, the 6990 doesn’t make sense compared to a pair of cards in Crossfire."
This product is not meant to make any sense from a financial, performance or even practical standpoint.
It IS the fastest videocard and that is that.
I was watching a video last night on youtube of a chainsaw powered by a Buick's V8 engine (hG5sTLY0-V8). It goes through a tree trunk in a blink of an eye, but it had to be lifted by TWO men.
It makes complete sense if you want SLI in a small form factor, mATX and such. (as do I). PCIe slots are at a premium, and so is space on a mATX board/case.
However, I think I'm going to wait and see what the 590 looks like...
I would like to see the 6990 and 5970 comparison in crysis and metro at eyefinity and single monitor res but with the 5970 at default clocks and close to 5870 clocks. When I am playing these games I have my 5970 at 850 core and 1150 memory and it runs all day without any throttling.
The 5970 is handicapped at the default speeds as everyone can run at or real close to 5870 speeds. The core is easy at 850 but you may need to back down memory to 1150 or 1175.
Would love to see the true difference in the 5970 and 6990 this way.
The framebuffer will be the big difference at eyefinity res. with any aa applied.
One thing I do like about the dual-GPU AMD cards is that I play a few games that use PhysX (I have a 5970), and I have a 250GTS in the second PCIe slot; both my slots are x16. This way I have a powerful GPU and PhysX! I play my games at 5040x1050 and a single card just doesn't cut it. I did use NVIDIA Surround for 2 months but like my Eyefinity setup better. To go Crossfire and then have PhysX you need a motherboard that doesn't knock your PCIe slots down to 8x with 3 cards, which are few and expensive, and also a case that has space for that 3rd card, like a Cooler Master HAF 932X. I have a HAF 932 (not X) and I could not go 3 cards unless the 3rd card is single-slot.
On a side note, as to why I am sticking with my 5970 till the 28nm cards show up: I like the way the cooler is set up. With the fan on the end, I have my 250GTS below it with about 3/8 inch of clearance. But the 250GTS is only about 7.5-8 inches long and does not cover the fan at all, because the fan is at the end. I have a 120mm fan at the bottom of my HAF 932 case that blows straight up into the 5970's fan.
If I used a 6990 the 250gts would cover the 6990 fan.
My choices would be then to sell the 250gts and get a single slot card. (450gts probably)
I think I am just going to stay with what I have for now.
2 HD 6970 Cards for $640? I don't think so! These cards are over $300 everywhere. I purchased 2 for $710 shipped and I thought that was a deal. Maybe reviews like yours here inflated the price and I purchased after the price adjustment. I have the same luck with gasoline on days I fill my tank.
All right, you got me there. I only buy XFX double-lifetime-warranty cards when I start spending this much on replacing my dual-GPU solution.
I seem to manage to actually re-sell my used video cards when I can offer them to a buyer with a lifetime warranty. XFX's double lifetime warranty is not a sales gimmick; it works. Heck, I would buy a used card if it had a lifetime warranty - it's kind of a no-brainer, given you actually want to buy that card in the first place.
Thanks for keeping the Crysis Warhead minimum FPS charts!! To me, Crysis/Warhead remains the defining game (and not only technically). I don't even look at the numbers on the other titles.
Also of prime importance to me are the idle power and, to a slightly lesser extent, idle noise.
Of course, like most people reading your review, I wouldn't be buying a 6990 even if it were silent. In fact, given that PC graphics requirements are apparently ramping down to console levels, I wonder how AMD/Nvidia are going to sell any significant number of cards above midrange. My HD 5770 will run everything at 1920x1200, though not always with all sliders maxed. However, I don't see much if any difference (in DX9) when I do enable 4xAA vs 2xAA etc. Certainly not enough to double the price of this $140 card.
A nit on the Crysis Warhead minimum fps chart for 1920x1200 Frost Bench - Gamer Quality - Enthusiast Shaders + 4xAA: Your Dec 10 chart shows 6970CF at 66.2 fps but this Mar 11 chart shows 66.6. Can you believe anyone would actually notice this, much less comment on it? We are too absorbed in this tech stuff (ain't it grand...).
The performance bottleneck is also seen in nVidia's dual-GPU offerings. Dual-GPU cards operating in x16 PCIe slots must have their data lanes divided between the GPUs, so they are effectively operating at x8 data rates, not x16, whereas single-GPU cards will utilize all 16 lanes. Even then, the PCI Express standard may soon be obsoleted. I hope we can look forward to Intel's fiber-optic technology effectively replacing all data bus signalling with a 10Gb fiber-optic bus and peripheral signalling that can simultaneously and independently carry all of the different data protocols used for inter-device and system bus communications. Imagine: soon AMD and Nvidia could be producing video cards with fiber-optic data buses, which may change the requirements for power supplied to present-day PCIe slots, and may change power supply standards to require that additional power connector to a video card, since the 75-watt PCIe slot would be obsolete.
But ATI and Nvidia may also have to work with motherboard manufacturers to see if Intel's "Thunderbolt" fiber-optic data buses can increase or freely throttle the video data bandwidth through their 10Gb interface, which would be tantamount to increasing data lanes from x16 to x32. It would be almost unlimited video bandwidth, far exceeding any bandwidth limitation that exists today. Dual GPUs cannot promise full performance with the limitation of the PCIe x16 slot being divided into dual x8 channels, but it would be nice to see how they perform with unlimited bandwidth potential over a single 10Gb fiber-optic link. And that would change the battlefield between ATI-AMD and Nvidia.
My 4870 X2's (can run Quadfire) still rock on enthusiast settings in Crysis and Warhead without any hiccups, and I've not seen a slowdown of any sort on any level in Crysis. The price-to-performance ratio is declining and may affect my decision to purchase another dual-GPU card, opting instead for single-GPU CF solutions that can utilize all x16 lanes per GPU.
BTW, I did notice the lack of DATA on Crysis @ 1920x1200 with full enthusiast settings, so that data is missing from this review. It's Gamer plus enthusiast shaders... not full enthusiast. As above, the 4870 X2 runs full enthusiast settings - not one setting is scaled back, and not one hiccup... just smooth play throughout on a single 28" display.
Why are we still using Crysis Warhead at "Gamer Quality"? With cards like these, why not turn everything to max in-game and then fiddle with AA and the like? I don't get it.
I've always viewed single-card dual-GPU cards as more of a packaging stunt than a product.
They invariably are clocked a little lower than the single-GPU cards they're based upon, and short of a liquid cooling system are extremely noisy (unavoidable when you have twice as much heat that has to be dissipated by the same-sized cooler as the single-GPU card). They also tend not to be a bargain price-wise; compare a dual-GPU card versus two of the single-GPU cards with the same GPU.
Personally, I would much rather have discrete GPUs and be able to cool them without the noise. I'll spend a little more for a full-sized case and a motherboard with the necessary layout (two slots between PCI-16x slots) rather than deal with the compromises of the extra-dense packaging. If someone else needs quad SLI or quad Crossfire, well, fine... to each their own. But if dual GPUs is the goal, I truly don't see any advantage of a dual-GPU card over dual single-GPU cards, and plenty of disadvantages.
Like I said... more of a stunt than a product. Cool that it exists, but less useful than advertised except for extremely narrow niches.
Even -2- years since the release of the original Crysis, “but can it run Crysis?” is still an important question, and for -3.5- years the answer was “no.”
"With a 375W TDP the 6990 should consume less power than 2x200W 6950CF, but in practice the 6950CF setup consumes 21W less. Part of this comes down to the greater CPU load the 6990 can create by allowing for higher framerates, but this doesn’t completely explain the disparity."
If it hasn't been mentioned before: guys, this is simple. The TDP for the HD6950 is just the PowerTune limit. The "power draw under gaming" is specified at ~150 W, which is just what you'll find during gaming tests.
Furthermore, Cayman is run at a lower voltage (1.10 V) and lower clocks, and with fewer units enabled, on the HD6950, so it's only natural for two of these to consume less power than an HD6990. Summing it up, one would expect 1.10^2/1.12^2 * 800/830 * 22/24 = 85.2% of the power consumption of a Cayman on the HD6990.
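Spelling that estimate out as a quick check (a back-of-the-envelope sketch only, assuming power scales with voltage squared, clock and active SIMD count, using the figures quoted above):

```python
# HD 6950 Cayman vs. the per-GPU Cayman configuration on the HD 6990 (values from above).
voltage_ratio = (1.10 / 1.12) ** 2   # core voltage, squared
clock_ratio   = 800 / 830            # core clock, MHz
simd_ratio    = 22 / 24              # active SIMD engines

estimate = voltage_ratio * clock_ratio * simd_ratio
print(f"HD 6950 Cayman ~ {estimate:.1%} of a 6990 Cayman's power")  # ~85.2%
```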
The article points out that the 6990 runs much closer to 6950CF than 6970CF.
I assume that the author is talking about 2GB 6950, that can be shader unlocked, in a process much safer than flashing the card with a 6970 BIOS.
It would be interesting to see CF numbers for unlocked 6950s.
As it stands the 6990 is not a great product: it requires an expensive PSU and a big case full of fans, at a price point higher than similar CF setups.
Considering that there are ZERO enthusiast mobos that won't accept CF, the 6990 becomes a very hard sell.
Even more troubling is the lack of a DL-DVI adapter in the bundle, scaring away 30" owners, precisely the group of buyers most interested in this video card.
Why should a 30" owner step away from a 580 or SLI 580s, if the 6990 needs the same expensive PSU and the same BIG case full of fans, and a DL-DVI adapter costs more than the price gap to an SLI mobo?
This card looks very much like the XFX 4GB 5970 card. The GPU position and cooling setup is identical.
I'd be very interested to see a performance comparison with that card, which operates at 5870 clock speeds and has the same amount of graphics memory (which is not "frame buffer", for those who keep misusing that term).
"(which is not "frame buffer", for those who keep misusing that term)."
:) Yep, I wish they would actually get it right.
The frame buffer is the memory used to store the pixel and color-depth info for a renderable frame of data, whereas graphics memory (or VRAM) is the total memory available to the card, which consequently holds the frame buffer, command buffer, textures, etc. The frame buffer is just a small portion of the VRAM, set aside as the output target for the GPU. The frame buffer size is the same for every modern video card on the planet at a fixed (same) resolution. I.e., a 1920x1200 res with 32-bit color depth has a frame buffer of ~9.2 MB (1920x1200x32 / 8); if double- or triple-buffered, multiply by 2 or 3.
Most every tech site misapplies the term "frame buffer" - Anandtech, PCPer (big abuser), TechReport... most everyone.
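A quick sketch of that calculation (the resolutions and buffer counts here are just examples):

```python
def framebuffer_mb(width, height, bits_per_pixel=32, buffers=1):
    """Size of the frame buffer(s) alone, in MB -- a tiny slice of total VRAM."""
    return width * height * bits_per_pixel / 8 / 1e6 * buffers

print(f"{framebuffer_mb(1920, 1200):.1f} MB single-buffered")             # ~9.2 MB
print(f"{framebuffer_mb(2560, 1600, buffers=3):.1f} MB triple-buffered")  # ~49.2 MB
```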
Anyone wanting to play at resolutions above 1080p should just buy two GTX560's for 500 bucks. Why waste the extra 200? There's no such thing as future proofing at these levels.
As usual no reply after the first few pages... or on the topic of Linux.
If they only had any clue how many Linux workstations get ordered with "are you sure this has the top processor, top video card and most RAM available?" on a post-it note stuck on the Req.
I'm a big RADEON fan. I've had many ATI cards (3870 X2, 4870 X2, 5970, etc.), most of them water-cooled, so I don't care about noise. My temperatures are always around 40C (for the chips) overclocked. I was planning to sell the 5970 and get a 6990, but after the first reviews I decided to wait and see what NVIDIA will bring in the form of the GTX 590.
AMD made a great first step with DirectX 11; their cards came 6 months earlier than Nvidia's. For the first time (maybe not the first) ATI cards were at the top of the charts for a long time. But this time almost 1 1/2 years have passed since the 5970 came to market, and the 6990 shows only about a 20% performance gain over the 5970. Why would somebody pay $700 for that kind of performance? In my opinion NVidia will beat the 6990 pretty easily, take the crown, and keep it for a while. Thank you all.
We of course strive to make the best benchmark possible. So if you believe there is a problem, I'd like to hear what you think is amiss with our benchmarks. We can't fix things unless you guys chime in and let us know what you think is wrong.
Why all this power? It's just insane - it tears the GeForce 295 apart several times over. I wonder which processor was used in the test; even an i7 couldn't keep up with a monster like this. Crazy stuff, in short.
smookyolo - Tuesday, March 8, 2011 - link
My 470 still beats this at compute tasks. Hehehe.And damn, this card is noisy.
RussianSensation - Tuesday, March 8, 2011 - link
Not even close, unless you are talking about outdated distributed computing projects like Folding@Home code. Try any of the modern DC projects like Collatz Conjecture, MilkyWay@home, etc. and a single HD4850 will smoke a GTX580. This is because Fermi cards are limited to 1/8th of their double-precision performance.In other words, an HD6990 which has 5,100 Gflops of single-precision performance will have 1,275 Glops double precision performance (since AMD allows for 1/4th of its SP). In comparison, the GTX470 has 1,089 Gflops of SP performance which only translates into 136 Gflops in DP. Therefore, a single HD6990 is 9.4x faster in modern computational GPGPU tasks.
palladium - Tuesday, March 8, 2011 - link
Those are just theoretical performance numbers. Not all programs *even newer ones* can effectively extract ILP from AMD's VLIW4 architecture. Those that can will no doubt with faster; others that can't would be slower. As far as I'm aware lots of programs still prefer nV's scalar arch but that might change with time.MrSpadge - Tuesday, March 8, 2011 - link
Well.. if you can oly use 1 of 4 VLIW units in DP then you don't need any ILP. Just keep the threads in flight and it's almost like nVidias scalar architecture, just with everything else being different ;)MrS
IanCutress - Tuesday, March 8, 2011 - link
It all depends on the driver and compiler implementation, and the guy/gal coding it. If you code the same but the compilers are generations apart, then the compiler with the higher generation wins out. If you've had more experience with CUDA based OpenCL, then your NVIDIA OpenCL implementation will outperform your ATI Stream implementation. Pick your card for it's purpose. My homebrew stuff works great on NVIDIA, but I only code for NVIDIA - same thing for big league compute directions.stx53550 - Tuesday, March 15, 2011 - link
off yourself idiotm.amitava - Tuesday, March 8, 2011 - link
".....Cayman’s better power management, leading to a TDP of 37W"- is it honestly THAT good? :P
m.amitava - Tuesday, March 8, 2011 - link
oops...re-read...that was idle TDP !!MamiyaOtaru - Tuesday, March 8, 2011 - link
my old 7900gt used 48 at loadD:
Don't like the direction this is going. In GPUs it's hard to see any performance advances that don't come with equivalent increases in power usage, unlike what Core 2 was compared to Pentium4.
Shadowmaster625 - Tuesday, March 8, 2011 - link
Are you kidding? I have a 7900GTX I dont even use, because it fried my only spare large power supply. A 5670 is twice as fast and consumes next to nothing.nafhan - Tuesday, March 8, 2011 - link
I generally buy cards in the $100-$200 range. Power usage has gone up a bit while performance has increased exponentially over the last 10 years.LtGoonRush - Tuesday, March 8, 2011 - link
I'm disappointed at the choices AMD made with the cooler. The noise levels are truly intolerable, it seems like it would have made more sense to go with a triple-slot card that would be more capable of handling the heat without painful levels of noise. It'll be interesting to see how the aftermarket cooler vendors like Arctic Cooling and Thermalright handle this.Ryan Smith - Tuesday, March 8, 2011 - link
There's actually a good reason for that. I don't believe I mentioned this in the article, but AMD is STRONGLY suggesting not to put a card next to the 6990. It moves so much air that another card blocking its airflow would run the significant risk of killing it.What does this have to do with triple-slot coolers? By leaving a space open, it's already taking up 3 spaces. If the cooler itself takes up 3 spaces, those 3 spaces + 1 open space is now 4 spaces. You'd be hard pressed to find a suitable ATX board and case that could house a pair of these cards in Crossfire if you needed 8 open spaces. Triple slot coolers are effectively the kryptonite for SLI/CF, which is why NVIDIA isn't in favor of them either (but that's a story for another time).
arkcom - Tuesday, March 8, 2011 - link
2.5 slot cooler. That would guarantee at least half a slot is left for airspace.Quidam67 - Tuesday, March 8, 2011 - link
if it means a quieter card then that might have been a compromise worth making. Also, 2.5 would stop people from making the il-advised choice of using the slot next to the card, thus possibly killing it!strikeback03 - Tuesday, March 8, 2011 - link
With the height of a triple slot card maybe they could mount the fan on an angle to prevent blocking it off.kilkennycat - Tuesday, March 8, 2011 - link
Triple-slot coolers... no need!!However, if one is even contemplating Crossfire or SLI then a triple-slot space between the PCIe X16 SOCKETS for a pair of high-power 2-slot-cooler graphics cards with "open-fan" cooling (like the 6990) is recommended to avoid one card being fried by lack of air. This socket-spacing allows a one-slot clear air-space for the "rear" card's intake fan to "breathe". (Obviously, one must not plug any other card into any motherboard socket present in this slot)
In the case of a pair of 6990 (or a pair of nVidia's upcoming dual-GPU card), a minimum one-slot air-space between cards becomes MANDATORY, unless custom water or cryo cooling is installed.
Very few current Crossfire/SLI-compatible motherboards have triple-slot (or more) spaces between the two PCIe X16 connectors while simultaneously also having genuine X16 data-paths to both connectors. That socket spacing is becoming more common with high-end Sandy-Bridge motherboards, but functionality may still may be constrained by X8 PCIe data-paths at the primary pair of X16 connectors.
To even attempt to satisfy the data demands of a pair of 6990 Cross-Fire with a SINGLE physical CPU, you really do need a X58 motherboard and a Gulftown Corei7 990x processor, or maybe a Corei7 970 heavily overclocked. For X58 motherboards with triple-spaced PCIe sockets properly suitable for Crossfire or SLI , you need to look at the Asrock X58 "Extreme" series of motherboards. These do indeed allow full X16 data-paths to the two primary PCIe X16 "triple-spaced" sockets.
Many ATX motherboards have a third "so-called" PCIe X16 socket in the "slot7" position. However, this slot is always incapable of a genuine X16 pairing with either of the other two "X16" sockets, Anyway this "slot 7" location will not allow any more than a two-slot wide card when the motherboard is installed in a PC "tower" -- an open-fan graphics card will have no proper ventilation here, as it comes right up against either the power-supply (if bottom-loaded) or the bottom-plate of the case.
Spazweasel - Tuesday, March 8, 2011 - link
Exactly. For people who are going to do quad-Crossfire with these, you pretty much have to add the cost of a liquid cooling system to the price of the cards, and it's going to have to be a pretty studly liquid cooler too. Of course, the kind of person who "needs" (funny, using that word!) two of these is also probably the kind of person who would do the work to implement a liquid cooling system, so that may be less of an issue than it otherwise might be.So, here's the question (more rhetorical than anything else). For a given ultra-high-end gaming goal, say, Crysis @ max settings, 60fps @ 3x 2500x1600 monitors (something that would require quad Crossfire 69xx or 3-way SLI 580), with a targeted max temperature and noise level... which is the cheaper solution by the time you take into account cooling, case, high-end motherboard, the cards themselves? That's the cost-comparison that needs to be made, not just the cost of the cards themselves.
tzhu07 - Tuesday, March 8, 2011 - link
Before anyone thinks of buying this card stock, you should really go out and get a sense of what that kind of noise level is like. Unless you have a pair of high quality expensive noise-cancelling earbuds and you're playing games at a loud volume, you're going to constantly hear the fan.$700 isn't the real price. Add on some aftermarket cooling and that's how much you're going to spend.
Don't wake the neighbors...
Spivonious - Tuesday, March 8, 2011 - link
70dB is the maximum volume before risk of hearing loss, according to the EPA. http://www.epa.gov/history/topics/noise/01.htmSeriously, AMD, it's time to look at getting more performance per Watt.
ET - Tuesday, March 8, 2011 - link
If you're going to keep your case open all day with your ear to the graphics card, then you might get that 70dB+, which won't be too nice on the ear. On the other hand you won't get much gaming done. :)I don't know the exact distance Anandtech measured this noise at, but Kitguru measured at about 1 metre and got 48dB when running Furmark, 40dB for normal load.
bobsmith1492 - Tuesday, March 8, 2011 - link
70dB is very low and only would apply if you are essentially living next to your computer - that's a 24-hour exposure level.NIOSH recommends 85dB as the upper limit for 8 hours of exposure, with a 3dB exchange rate - that is, every time you halve the amount of time you're exposed to the sound you can increase the volume by 3dB.
http://www.cdc.gov/niosh/docs/96-110/appF.html
looniam - Wednesday, March 9, 2011 - link
*ahem*"The document identifies a 24-hour exposure level of 70 decibels as the level of environmental noise which will *prevent* any measurable hearing loss over a lifetime"
does NOT say what the maximum is.
from my experience as an audio tech:
95dB is the start of temporary hearing loss, 110dB is the start of permanent and 140dB is the threshold of pain.
and for drunk people, they can't hear anything below 150dB :)
Ninjahedge - Thursday, March 10, 2011 - link
What?futuristicmonkey - Tuesday, March 8, 2011 - link
Any chances for some memory OC benches?nitrousoxide - Tuesday, March 8, 2011 - link
Both AMD and nVidia are out of mind, they are ignorant of the consequence by putting two gigantic chips with 5+billion transistors on the same board. I can't find the point of buying such outrageous card instead of building a CFX/SLI system. At least the latter isn't that loud, isn't that hot and consumes hardly more power than those monstrosities.TSMC is to blame. They dropped 32nm so it is impossible to get 6990/590 within 300W power envelope. But neither AMD nor nVidia turn back, but keep making these non-sense flagship cards.
Figaro56 - Tuesday, March 8, 2011 - link
No, it's not.Amuro - Tuesday, March 8, 2011 - link
I will be water cooling them! :)nitrousoxide - Tuesday, March 8, 2011 - link
What a shame it will be when next-generation 28nm Single GPU flagships wipe out these monsters with ease while consumes half the power, running silent.strikeback03 - Tuesday, March 8, 2011 - link
That hasn't traditionally happened. Look at the comparison between the 6990 and the 4870x2 - 2 generations, IIRC one process shrink. The 6990 does generally put up much better numbers, but consumes a lot more power to do so. Looking at a comparison between the 4870x2 and 5870 (double GPU on a larger process size to next-gen single-GPU) they are very close, with the 4870x2 overall holding a slight lead. And of course none of these high-end reference cards have ever been silentFigaro56 - Tuesday, March 8, 2011 - link
Just be prepared to upgrade your GPU every year. If you think they are going to stop leap frogging performance you are not being realistic.san1s - Tuesday, March 8, 2011 - link
"AMD even went so far as to suggest that reviewers not directly disassemble their 6990"The next picture: The card disassembled
haha
Ryan Smith - Tuesday, March 8, 2011 - link
To AMD's credit, they were good sports offered to take any pictures we needed. So all of those disassembled shots came from them. They were really adamant about it being a bad idea to take these things apart if you intended to use them in the future.strikeback03 - Tuesday, March 8, 2011 - link
I was looking for a comment on whether you did all your testing before disassembling or whether you got some of their super paste to reassemble.7Enigma - Tuesday, March 8, 2011 - link
My problem with the shot is the poor application of thermal paste from the picture. In a card of this magnitude having a perfect coating of thermal compound is critical. And knowing marketing if that is the shot they SHOW, how good do you really think the application is on one purchased in retail channels?iwod - Tuesday, March 8, 2011 - link
I haven't been following CrossFire / SLI or these Single Card Dual GPU closely. ( Which to me are the same as two card anyway )Do they still need drivers to have a specific profile of the game to take advantage? I.e an Unknown Game to the Drivers will gain 0 benefits from the 2nd GPU?
If that is so, then they are not even worth a look.
Figaro56 - Tuesday, March 8, 2011 - link
Most of the big games support SLI and CrossfireX today. For example, I play Battlefield BC2 myself and using 2 video cards blows the doors of a single card and is well worth the investment, especially if you have a 2560x1600 resolution monitor like I do. For 2 ATI cards in Crossfire you install a separate crossfire profiles pack in addition to the catalyst drivers. The profiles pack supports all the game crossfire optimization. The AI mode in the catalyst drivers exploits the game profile pack setting for crossfire so you want that enabled.A dual GPU HD 6990 is essentially crossfire on a single card, but for some reason it doesn't perform as well as dual single GPU cards in crossfire. Go figure.
If you play old games that don't support SLI or Crossfire then YES there is 0 gain. If you have not tried any of the new games what are you waiting for? It doesn't suck you know?!
cactusdog - Tuesday, March 8, 2011 - link
I agree, unless you have eyefinity and one slot, paying a premium for heat and noise doesnt make sense. For most people with 1 screen a 6970 or 580 is more than enough.Samus - Tuesday, March 8, 2011 - link
i'd be shocked if they sold even one of these without it being returned at some point. the noise levels are astonishing. at full blast the thing doesn't even meet federal vehicle emissions noise regulations without being classified as a motorcycle!MadAd - Tuesday, March 8, 2011 - link
8%? where do we buy tubes of this phase change material? do they sell it like arctic silver?MarkLuvsCS - Tuesday, March 8, 2011 - link
Thanks for an awesome article!Minor typo in section "ONCE AGAIN THE CARD THEY BEG YOU TO OVERCLOCK" second to last paragraph second sentence says "...the 6690OC’s core clock is only 6% faster and the memory clock is the same, versus..."
Figaro56 - Tuesday, March 8, 2011 - link
Yes this is the article I as waiting for. Time to get rid of my 2 HD 5870 cards and purchase 2 HD 6970 ones. I wouldn't get an HD 6990. That is pretty clear.Thanks AnAndTech!
mino - Tuesday, March 8, 2011 - link
AT has CHOSEN to not overclock the card based on its THEORETHICAL (Furmark) load temperatures ...Go bash AT for writing "OC" on the slides while they enabled ONLY the performance BIOS. Not doing ANY overclocking whatsoever by fear of Furmark ...
In effect what they have done was in effect a factory OC, not a traditional OC of the what-it-can-handle kind.
Great, so Furmark has achieved one more evil goal: it prevents (AT?) journalists to do overclocking reviews ...
mino - Tuesday, March 8, 2011 - link
Here come some real OC numbers: www.legitreviews.com/article/1566/14BTW, they did not even bother with the #1 BIOS option to achieve it ... so, lets talk about biased reviewing, shall we?
RaistlinZ - Tuesday, March 8, 2011 - link
Looks like the 2x6950 is a much better option, given you'll have much less noise to deal with and that they can be flashed to 6970 shaders.If this card had been $599 I probably would have picked one up. But at $699 I think I'll just wait for 28nm generation of cards.
Thanks for trying, AMD.
MarcHFR - Tuesday, March 8, 2011 - link
Hi,Drivers used are :
NVIDIA ForceWare 262.99
NVIDIA ForceWare 266.56 Beta
NVIDIA ForceWare 266.58
AMD Catalyst 10.10e
AMD Catalyst 11.1a Hotfix
AMD Catalyst 11.4 Preview
Is it possible to know wich driver is used for each card ?
Thanks
jcandle - Tuesday, March 8, 2011 - link
Ryan, any chance you'll be doing a thermal compound review soon? 8% against their stock compound. How much better is it than current performance aftermarket compounds?IanCutress - Tuesday, March 8, 2011 - link
Quite difficult to get accurate thermal compound numbers. There's no way you can guarantee that the compound will be spread evenly and accurately every time. Any big 8ºC differences will show sure, but you're always playing with statistics to +/- 3ºC. Then there's the inevitable argument about the right way to apply the paste...7Enigma - Tuesday, March 8, 2011 - link
More importantly is the normal compound most manufacturers use is junk compared to a good thermal compound such as arctic silver (don't keep up on the latest brands as I still have Arctic Silver 3 that works great for me). So that 8% might very well be true since the normal stuff is of poor quality.ypsylon - Tuesday, March 8, 2011 - link
But a few issues need to be addressed. Noise, for starters: nearly 80 dBA. That's like working in a foundry. Also, the cooling is highly inefficient for a card of this size; it needs some 3rd-party solution or water cooling altogether.
The biggest problem for the 6990 could be (or rather will be) NVIDIA. If they price the GTX 590 at the same level or even below the $700 price tag, then AMD will be totally screwed. For now I'm waiting for the GTX 590 and a 6990 with some aftermarket cooler, as the stock solutions are completely unacceptable.
One thing straight - I do not sleep on ca$h, and if I buy a 6990/590 it will be a ma$$ive expense for me, but... What swings things for me with cards like this is that I do not need an uber VGA for 30 monitors. All I want is a card with a large frame buffer, which will live in my PC for ~10 years without needing an upgrade, and which will occupy only 1 PCIe x16 slot. SLI/CF is totally misguided if you have some more hardware installed inside. Sometimes (with all that SLI/CF popularity) I wonder why 7-slot ATX is still alive and 10-12 slot motherboards are not a standard.
james.jwb - Tuesday, March 8, 2011 - link
I doubt it. That would be 80 dBA at ear level compared to whatever distance Ryan measured at. At ear level it's going to be a lot lower than 80. Still, that doesn't take away the fact that this card is insane...
Belard - Tuesday, March 8, 2011 - link
The card needs 3 slots to keep cool and such. They should have made a 2.5-slot card, but with a bit of a twist: channel the air from the front GPU chamber into a U-duct, then split it into a Y that goes around the fan (which could still be made bigger). The ducts then exhaust out the back in a "3rd slot". Or a duct runs along the top of the card (out of spec a bit) to allow the fan more air space. It would add about $5 for more plastic.
Rather than blowing HOT air INTO the case (which would then recycle BACK into the card!)
OR - blowing HOT air out the front and onto your foot or arm.
Noise is a deal killer for many people nowadays.
strikeback03 - Tuesday, March 8, 2011 - link
That ductwork would substantially reduce the airflow; making a sharp turn like that would be a large bottleneck.
burner1980 - Tuesday, March 8, 2011 - link
I'm always wondering why reviews always neglect this topic. Can this card run 3 monitors @ 1920x1080 at 120 Hz? 120 Hz monitors/projectors offer not only 3D but, above all, smooth transitions and less screen tearing. Since this technology is available and gaining more and more fans, I really would like to see it tested. Can anybody enlighten me? (I know that dual-link is necessary for every display and that AMD had problems with 120 Hz + Eyefinity.) Did they improve?
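For context on why dual-link keeps coming up here: a back-of-envelope pixel-clock check shows why 1920x1080 at 120 Hz is out of reach for a single-link DVI connection. This is only a rough sketch; the ~12% blanking overhead below is my own assumption, not a figure from the review.

```python
# Back-of-envelope sketch: estimate the pixel clock a 1920x1080 @ 120 Hz panel
# needs and compare it to the DVI link limits. The 12% blanking overhead is an
# assumed, reduced-blanking-ish figure, not an exact timing.
SINGLE_LINK_DVI_MHZ = 165.0   # TMDS pixel-clock ceiling for one DVI link
DUAL_LINK_DVI_MHZ = 330.0     # two links

def pixel_clock_mhz(width, height, refresh_hz, blanking_overhead=1.12):
    """Approximate pixel clock in MHz, including blanking overhead."""
    return width * height * refresh_hz * blanking_overhead / 1e6

clk = pixel_clock_mhz(1920, 1080, 120)
print(f"needed: ~{clk:.0f} MHz")                       # ~279 MHz
print("single-link OK:", clk <= SINGLE_LINK_DVI_MHZ)   # False
print("dual-link OK:", clk <= DUAL_LINK_DVI_MHZ)       # True
```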
silverblue - Tuesday, March 8, 2011 - link
...with two slightly downclocked 6950s. Alternatively, a 6890, albeit with the old VLIW5 shader setup. As we've seen, the 5970 can win a test or two thanks to this even with a substantial clock speed disparity. The 6990 is stunning but I couldn't ever imagine the effort required to set up a machine capable of adequately running one... and don't get me started on two.
Figaro56 - Tuesday, March 8, 2011 - link
I'd love to see Jessica Alba's beaver, but that ain't going to happen either.
qwertymac93 - Tuesday, March 8, 2011 - link
AUSUM SWITCH
http://img202.imageshack.us/img202/8717/ausum.png
KaelynTheDove - Tuesday, March 8, 2011 - link
Could someone please confirm if this card supports 30-bit colour? Previously, only AMD's professional cards supported 30-bit colour, with the exception of the 5970. I will buy either the 6990 or Nvidia's Quadro based on this single feature.
(Because somebody will inevitably say that I don't need or want 30-bit colour, I have a completely hardware-calibrated workflow with a 30" display with 110% AdobeRGB, 30-bit IPS panel and the required DisplayPort cables. Using 24-bit colour with my 5870 CF I suffer from _very_ nasty posterisation when working with high-gamut photographs. Yes, my cameras have a colour space far above sRGB. Yes, my printers have it too.)
Gainward - Tuesday, March 8, 2011 - link
Just a heads up for anyone buying the card and wanting to remove the stock cooler... There is a small screw on the back that is covered by two stickers (it's under the two stickers that look like a barcode). Well, removing that, you will then notice a void logo underneath it... I just wanted to point it out to you all... Didn't bother us too much here seeing as ours is a sample, but I know to some dropping £550ish UK is quite a bit of cash, and if all you are doing is having an inquisitive look it seems a shame to void your warranty :-S
mmsmsy - Tuesday, March 8, 2011 - link
I'd like to know how you benchmark those cards in Civ V. I suppose it's the in-game benchmark, isn't it? Well, I read some tests on one site using their own test, recorded after some time spent in the game using FRAPS, and I'm wondering if the in-game test is really that different a scenario. According to that source, in a real-world situation nVidia cards show no improvement whatsoever over AMD's offerings. If you could investigate that matter it would be great.
Ryan Smith - Tuesday, March 8, 2011 - link
Yes, using the LateGameView benchmark. Like any other benchmark it's not a perfect representation of all scenarios, but generally speaking it's a reasonable recreation of a game many turns in with a large number of units.
jabber - Tuesday, March 8, 2011 - link
All cards at this level are niche. Very few of us have that much to splash on one component. I find it amusing that most of the folks here going "oh wow, that's too noisy/power hungry/slow etc. so I won't be buying!" will just then load up Crysis on their 5770/GTX460 equipped PCs.
Note to 95% of you reading this article... this card isn't/wasn't designed for you.
araczynski - Tuesday, March 8, 2011 - link
Nice, but it sounds like 6950 CF owns the bang/$ award; this thing is too little for too much $/headache.
araczynski - Tuesday, March 8, 2011 - link
Then again, my 4850 CF/E8500 system just played the Dragon Age 2 demo at 1920x1200 with absolutely no problems, so I have no reason to upgrade just yet, still.
Gainward - Tuesday, March 8, 2011 - link
I also just wanted to touch on that comment. Whilst these cards seem excessive to some, you have to remember that at this moment in time they are, as I dub it, the Veyron moment - what others in the past called the Concorde moment. They might not be practical for most people, but it's like me saying to my team: look, let's see what we can do. Not only that, but having the crown of fastest single card (agreed, single card but multi-GPU) goes a long way towards brand loyalty and advertising. An example I like to use is the GTX 560. It's a fantastic card and in many ways better... hang with me a second. The actual raw power you get for sub-£200 is incredible; also factor in that it's quiet and won't eat through your electricity like a moth through Primark. But... to not produce these high-end cards would be criminal. We need people to keep pushing as hard and fast (that sounds so wrong) at the boundaries (agreed, quite crudely in the 6990's case, but hey, I don't sit in either camp - just wait a few days for the 590 for brute but crude).
With the reduction to 28nm, the power consumption and heat will be brought down further (no need to point out that there are a lot of factors at play here; it was just a generalisation), but sure, we could see just as big and hot cards within practical reason.
I would not say it's out of hand; I would say it's progress, and with progress we can learn.
Figaro56 - Tuesday, March 8, 2011 - link
Roger that.
JimmiG - Tuesday, March 8, 2011 - link
"Water cooled 6990s will be worth their weight in gold."They'll probably cost about that too...
smigs22 - Tuesday, March 8, 2011 - link
The 6990 stands out as a massive single-card leader. The 6950 CF offers far better price/performance, with potential for 6970 CF performance through BIOS flashing. Maybe you should break up some of the charts to show only single-card configurations (for those with motherboards lacking full/partial SLI/CF support).
It will be interesting to see how fully enabled the GTX 590 will be and what clocks it will arrive at, in order to keep its power draw and temps down to reasonable levels.
I wonder if someone would place a carbon tax on these bad boys....lol
Figaro56 - Tuesday, March 8, 2011 - link
They would have done this, but there isn't a cryo-cooled case interior on the market yet.
IceDread - Tuesday, March 8, 2011 - link
If the card had come with a water cooling option or something like that, then it would have been a great product.
EmmetBrown - Tuesday, March 8, 2011 - link
Nice, but what about the Radeon HD 6450, 6570 and 6670?
http://en.wikipedia.org/wiki/Comparison_of_ATI_Gra...
Why are they available for OEM only? They look interesting, especially the 6670, which with its 480 SPs should be faster than the 5670, which has 400 SPs and a lower frequency. Do you plan to review them?
Ryan Smith - Tuesday, March 8, 2011 - link
As you note, they're OEM only. AMD will release them to the retail market eventually, but clearly they're not in a hurry. It's unlikely we'll review them until then, as OEM cards are difficult to come by.
misfit410 - Tuesday, March 8, 2011 - link
I have to ask: if you bring up the price and say that you might as well do two 6950's in SLI when this thing doubles the performance of the GTX 580, would that not also be a better solution than a GTX 580? The 580 is $500, while two 6950's can apparently double it for $550, since they can be found for $225 after rebates these days.
Figaro56 - Tuesday, March 8, 2011 - link
You sound a little confused. You can't run ATI cards in SLI; they run in what is called Crossfire (or CrossfireX, which is the same thing). Two 6950's don't equal GTX 580s in SLI. You need two HD 6970 cards in Crossfire to nearly equal two GTX 580s in SLI. In my opinion, why limit your performance with 2 HD 6950 cards, why not just buy the 2 HD 6970 cards and never have to second guess whether you should have or not? But... that's just me. I have a job.
silverblue - Tuesday, March 8, 2011 - link
Totally unnecessary closing comment there, considering most people here do actually have jobs. Not everyone who has a job can afford such gear, as there are more important things to spend money on.
Thanny - Tuesday, March 8, 2011 - link
You sound confused, too. He miswrote SLI, but you misunderstood his point entirely. He's saying that two 6950's are significantly faster than a single 580 for almost the same price.
Loiosh - Tuesday, March 8, 2011 - link
Hey guys, you forgot one other use case that would necessitate this card: an ATI+PhysX setup: http://www.shackpics.com/viewer.x?file=DumbVideoca...
I'm currently running one and it requires a dual-GPU card. :/
In my case I'm waiting for a watercooled version. BTW, you didn't mention the release date for this?
nanajuuyon - Wednesday, March 9, 2011 - link
Funny, after reading this review I went into town (Tokyo) to buy a new hard disk and saw this card for sale. So in Japan at least it is already on the market... The price was ridiculous though, 79,000 yen or $945 US... I'm sure it will be available everywhere soon. Waterblocks, on the other hand, could be a couple of months or so away, I guess...
Vinas - Tuesday, March 8, 2011 - link
If you buy this you better have it on water. 'Nuff said about all this tri-slot cooler talk.
JPForums - Tuesday, March 8, 2011 - link
First off, nice article, Ryan. Good data, relevant commentary on said data, and conclusions.
You mention in the article that you believe some of the shortcomings of the 6990 to be a lack of PCIe bandwidth. This got me thinking that perhaps it is a good time to revisit the effect of PCIe bandwidth on modern cards. Given the P67 only natively supports 16 lanes, I'm curious to see what effect it has on CF/SLI. It could make big difference in the recommended hardware for various levels of gaming systems.
Typically, someone looking for a CF/SLI setup will get a board that supports more lanes. However, I have seen a situation where a friend built a budget i5 system and about 4 months later was in a position to acquire an HD5970 on the cheap (relatively speaking). Clearly, two HD5850s/HD5870s would have been an option.
If newer cards are effectively PCIe bandwidth limited, then a 6990 may perform more closely to an HD6970 CF setup in such a system than it does in these graphs. This would be even more of a consideration at the high end if the rare boards with support for 4x8 lane (spaced) PCIe give you no real benefit over a more common 2x16 lane board (comparing 4 HD6970s to 2 HD6990s).
Figaro56 - Tuesday, March 8, 2011 - link
I have 2 XFX HD 5870 cards for sale. I have a double lifetime warranty on these, so you get the use of the second lifetime warranty. Interested? They are great performers, I can vouch for that. I am used to upgrading my GPU on an annual basis, so I am upgrading to 2 HD 6970s. $230 each.
Thanny - Tuesday, March 8, 2011 - link
Ignoring the inappropriateness of advertising here, I submit:
http://www.newegg.com/Product/Product.aspx?Item=N8...
Why would someone pay you $230 for a used product that can be obtained new at $190?
fausto412 - Tuesday, March 8, 2011 - link
I kinda wanted to see a chart with the most common gaming resolution... and can we benchmark with a Q9550 just for comparison? I would love to know if I'm holding back a video card by not going i5 or i7, and by how much.
jabber - Tuesday, March 8, 2011 - link
If you can afford a 6990, why would you bother using it with a Q9550 at 1680x1050? Hence why it isn't part of this review. This review is to show how it works for the intended market/customer.
As I said before, this card isn't for folks like you (or me for that matter). Sorry.
7Enigma - Tuesday, March 8, 2011 - link
The most common gaming resolution for this card is the one Ryan tested. It is pointless to test at a lower resolution other than possibly true 24" (1920x1200). And even at that res this card is really not needed.
Figaro56 - Tuesday, March 8, 2011 - link
BOYA to both of those resolutions. You should be playing your games at 2560x1600. Now that's what I'm talkin about! You'd be saying hell ya.
Jorgisven - Tuesday, March 8, 2011 - link
It seems we're getting into the Pentium 4 trap a bit. Big, hot, power-hungry, noisy chips... personally, I'm going to pass on this generation of GPUs. I'm waiting for a revolution in either manufacturing or coding. It's all well and good to have a fast computer for getting what you need done in minimal time, but at the risk of the box taking flight because the fans are now of jet engine proportions in speed and power, I'd rather not be able to hear my fans over my headphones... or risk my cat getting sucked into the intake.
jabber - Tuesday, March 8, 2011 - link
Well, we've kinda got what we asked for. We've all gamely been buying more and more powerful graphics cards with regard to brute-force rendering power. We've shown we love buying 750W+ power supplies with multiple GPU connectors, buying SLI and Xfire setups galore.
So the GPU corps think we love nothing more than just piling on more and more power and wattage to solve the situation.
It works both ways.
What we should have been doing was challenging AMD and Nvidia to develop smarter rendering techniques. Had either of them developed PowerVR to the state we are in today we would be in a far better place. Chances are the most power hungry card we'd have today would be 5770 level.
We need something more efficient like PowerVR to take us to the next level.
Less brute force and more finesse.
therealnickdanger - Tuesday, March 8, 2011 - link
Are you waiting to update your test system until the SATA port issue is corrected? Seems to me that anyone wanting to buy this card would also be using an overclocked 2600K... According to the Bench numbers, the 2600K offers roughly 30% more frames than the 920, depending on the game. That indicates to me that your test system is insufficient to properly test this card. Granted, since the vast majority of displays are fixed at 60Hz, fps counts beyond that don't really matter, but I have to wonder what impact this would have on folks with 120Hz-native LCDs. That extra 30% could make the difference.
... just sayin'. :)
Ryan Smith - Tuesday, March 8, 2011 - link
At this point we're waiting on SNB-E. SNB is very nice, but for a GPU testbed the lack of PCIe bandwidth is an issue.
iamezza - Tuesday, March 8, 2011 - link
This could make for an extremely valuable article for gamers on a budget. When does lack of PCIe bandwidth become an issue for running SLI/Crossfire? Testing 580 SLI in 2 x 8 and 2 x 16 modes would be a good place to start...
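For reference while this gets debated, here is a minimal sketch of the theoretical PCIe 2.0 ceilings in question (assuming the usual 5 GT/s per lane with 8b/10b encoding); whether real games ever come close to saturating them is exactly what the suggested 2x8 vs 2x16 test would show.

```python
# Minimal sketch: theoretical one-way PCIe 2.0 bandwidth per slot width.
# PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding (80% efficiency),
# i.e. roughly 500 MB/s of payload per lane per direction.

def pcie2_bandwidth_gbs(lanes, gt_per_s=5.0, encoding_efficiency=0.8):
    """Theoretical one-way bandwidth in GB/s for a PCIe 2.0 link."""
    return lanes * gt_per_s * encoding_efficiency / 8.0

for lanes in (16, 8, 4):
    print(f"x{lanes}: {pcie2_bandwidth_gbs(lanes):.1f} GB/s per direction")
# x16: 8.0 GB/s, x8: 4.0 GB/s, x4: 2.0 GB/s -- halving the lanes halves the ceiling
```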
therealnickdanger - Tuesday, March 8, 2011 - link
It will be interesting to see what impact the bandwidth has... then again, even with the restriction, the current Sandy Bridge systems still dominate the previous chips. In reality, 16/16 or 8/8 really doesn't have much impact. The difference even at 2560x1600 with all the fixins in even the most demanding games is <1%. Unless AT's new test system will feature six displays and 4K+ resolutions, I'm not sure SNB-E is worth waiting so long for (yes, that could be perceived as a challenge!)
In any case, I'm looking forward to it! Thanks for the article!
shaggart5446 - Tuesday, March 8, 2011 - link
I hope you say the same thing when your friend NVIDIA releases their 590 card. I also do hope you say the exact words, that the 590 doesn't make any sense since a pair of 560s or 570s can give you the same performance as the 590. I can't wait to see your article on the 590; I'll be waiting for Anand for this, because we all know the 590 is going to be downclocked.
ClownPuncher - Tuesday, March 8, 2011 - link
With cards designed specifically with multi-monitor gaming in mind, you may want to include those resolutions. Buying this card for 1920x1200 would make zero sense.
7Enigma - Wednesday, March 9, 2011 - link
I think it was good to have both. Most people buying this card will likely have 30" displays, but I'm sure some (competitive FPS players, for example) will want an extremely fluid display even in busy scenes, as will the person that doesn't yet have the cash to upgrade to a big screen but plans to in the near future. I would also argue that there are likely vastly more people playing on large single-screen displays than Eyefinity folks, so this does make more sense. And honestly, when some of the games are averaging in the sub 80-100 fps range, those minimum framerates approach questionable playability depending on the type of game.
So basically as crazy as it is to say this, the graphical power isn't quite there yet to use Eyefinity at high detail settings in more recent and demanding games.
Nentor - Tuesday, March 8, 2011 - link
"With but a trio of exceptions, the 6990 doesn’t make sense compared to a pair of cards in Crossfire."This product is not meant to make any sense from a financial, performance or even practical standpoint.
It IS the fastest videocard and that is that.
I was watching a video last night on YouTube of a chainsaw powered by a Buick V8 engine (hG5sTLY0-V8). It goes through a tree trunk in the blink of an eye, but it had to be lifted by TWO men.
Sure is cool though.
Squuiid - Sunday, March 13, 2011 - link
It makes complete sense if you want SLI in a small form factor, mATX and such (as I do). PCIe slots are at a premium, and so is space on an mATX board/case.
However, I think I'm going to wait and see what the 590 looks like...
Fhistleb - Tuesday, March 8, 2011 - link
I didn't even think that was possible. Though with what this is pushing out, it's a little expected I suppose.
stangflyer - Tuesday, March 8, 2011 - link
I would like to see the 6990 and 5970 comparison in Crysis and Metro at Eyefinity and single-monitor res, but with the 5970 at default clocks and close to 5870 clocks. When I am playing these games I have my 5970 at 850 core and 1150 memory and it runs all day without any throttling. The 5970 is handicapped at the default speeds, as everyone can run at or real close to 5870 speeds. The core is easy at 850, but you may need to back down the memory to 1150 or 1175.
Would love to see the true difference in the 5970 and 6990 this way.
The frame buffer will be the big difference at Eyefinity res with any AA applied.
stangflyer - Tuesday, March 8, 2011 - link
One thing I do like about the dual-GPU AMD cards is that I play a few games that use PhysX (I have a 5970). I have a 250gts in the second PCIe slot; both my slots are 2x16. This way I have a powerful GPU and PhysX! I play my games at 5040x1050 and a single card just doesn't cut it. I did use Nvidia Surround for 2 months but like my Eyefinity setup better. To go Crossfire and then have PhysX you need a motherboard that doesn't knock your PCIe slot down to 8x with 3 cards, which are few and expensive, and also a case that has space for that 3rd card, like a Cooler Master HAF 932X. I have a HAF 932 (not X) and I could not go 3 cards unless the 3rd card is single slot.
On a side note, the reason I am sticking with my 5970 till the 28nm cards show up is that I like the way the cooler is set up. With the fan on the end, I have my 250gts below it with about a 3/8 inch gap. BUT the 250gts is only about 7.5-8 inches long and does not cover the fan at all because the fan is at the end. I have a 120mm fan at the bottom of my HAF 932 case that blows straight up into the 5970 fan.
If I used a 6990 the 250gts would cover the 6990 fan.
My choices would be then to sell the 250gts and get a single slot card. (450gts probably)
I think I am just going to stay with what I have for now.
Maybe! LOL!
Figaro56 - Tuesday, March 8, 2011 - link
2 HD 6970 cards for $640? I don't think so! These cards are over $300 everywhere. I purchased 2 for $710 shipped and I thought that was a deal. Maybe reviews like yours here inflated the price and I purchased after the price adjustment. I have the same luck with gasoline on days I fill my tank.
ViRGE - Tuesday, March 8, 2011 - link
Looking at the Egg, there's 2 different 6970s at $320, which is probably where AT got $640 from.
http://www.newegg.com/Product/Product.aspx?Item=N8...
Figaro56 - Tuesday, March 8, 2011 - link
All right, you got me there. I only buy XFX double lifetime warranty cards when I start spending this much on replacing my dual-GPU solution. I seem to manage to actually re-sell my used video cards when I can offer them to a buyer with a lifetime warranty. The XFX double lifetime warranty is not a sales gimmick, it works. Heck, I would buy a used card if it had a lifetime warranty; it's kind of a no-brainer given you actually want to buy that card in the first place.
Arbie - Tuesday, March 8, 2011 - link
Thanks for keeping the Crysis Warhead minimum FPS charts!! To me, Crysis/Warhead remains the defining game (and not only technically). I don't even look at the numbers on the other titles. Also of prime importance to me are the idle power and, to a slightly lesser extent, idle noise.
Of course, like most people reading your review, I wouldn't be buying a 6990 even if it were silent. In fact, given that PC graphics requirements are apparently ramping down to console levels, I wonder how AMD/Nvidia are going to sell any significant number of cards above midrange. My HD 5770 will run everything at 1920x1200, though not always with all sliders maxed. However, I don't see much if any difference (in DX9) when I do enable 4xAA vs 2xAA etc. Certainly not enough to double the price of this $140 card.
A nit on the Crysis Warhead minimum fps chart for 1920x1200 Frost Bench - Gamer Quality - Enthusiast Shaders + 4xAA: Your Dec 10 chart shows 6970CF at 66.2 fps but this Mar 11 chart shows 66.6. Can you believe anyone would actually notice this, much less comment on it? We are too absorbed in this tech stuff (ain't it grand...).
strikeback03 - Tuesday, March 8, 2011 - link
They did say the new drivers made a slight difference; that seems likely to be one of the configurations they retested.
morphologia - Tuesday, March 8, 2011 - link
That isn't portrait orientation in the picture... it's landscape.
taltamir - Tuesday, March 8, 2011 - link
The card was measured at 77.3 dB in the article.
1. At what distance was it measured?
2. What is its dB measurement 1 meter away?
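Question 2 can't be answered without knowing the measuring distance, but for anyone curious about the conversion itself, here is a minimal sketch assuming a point source in a free field (a crude model for a fan inside a case). The 0.3 m measuring distance in the example is purely an assumption on my part, since the distance isn't stated here.

```python
import math

# Sketch of the distance conversion being asked about, assuming a point source
# in a free field: SPL drops by 20*log10(d2/d1) dB when moving from d1 to d2.

def spl_at_distance(spl_db, measured_at_m, target_m):
    """Estimate the sound pressure level at target_m from a reading taken at measured_at_m."""
    return spl_db - 20.0 * math.log10(target_m / measured_at_m)

# Hypothetical example: if 77.3 dB had been read at 0.3 m (an assumed distance),
# the level at 1 m would be roughly:
print(f"{spl_at_distance(77.3, 0.3, 1.0):.1f} dB")  # ~66.8 dB
```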
taltamir - Tuesday, March 8, 2011 - link
I just looked it up; gold is worth $1430/ounce right now. I highly doubt a watercooled 6990 will weigh half an ounce.
ekrash - Tuesday, March 8, 2011 - link
The performance bottleneck is also seen in NVIDIA's dual-GPU offerings. Dual-GPU cards operating in x16 PCIe slots must have their data lanes divided between the GPUs, so they are effectively operating at x8 data rates, whereas single-GPU cards will utilize all x16 lanes. Even then, the PCI Express standard may soon be obsoleted. I hope we can look forward to Intel's fiber optic technology effectively replacing all data bus signalling with a 10Gb fiber optic bus and peripheral device signalling that can simultaneously and independently carry all of the different data protocols used for inter-device and system bus communications. Imagine: soon AMD and NVIDIA could be producing video cards with fiber-optic data buses, which may change the requirements for power supplied to present-day PCIe slots, and may change power supply manufacturing standards to require an additional power connector to the video card, since the 75-watt PCIe slot would be obsolete.
But ATI and NVIDIA may also have to work with motherboard manufacturers to see if Intel's "Thunderbolt" fiber optic data buses can increase or freely throttle the video data bandwidth through the 10Gb interface, which would be tantamount to increasing data lanes from x16 to x32. It would be almost unlimited video bandwidth, far exceeding any bandwidth limitation vs. availability needed today. Dual GPUs cannot promise that performance with the limitation of the PCIe x16 slot being divided into dual x8 channels, but it would be nice to see how they perform with unlimited bandwidth potential over a single 10Gb fiber optic link. And that would change the battlefield between ATI-AMD and NVIDIA.
My 4870 X2's (can run Quadfire) still rock on enthusiast settings in Crysis and Warhead without any hiccups, and I've not seen a slowdown of any sort on any level in Crysis.
The price-to-performance ratio is declining and may affect my decision to purchase another dual-GPU card, opting instead for single-GPU CF solutions where each GPU can utilize all x16 lanes.
BTW, I did notice the lack of DATA on Crysis @ 1920x1200 with full enthusiast settings, so that data is missing from this review. It's Gamer plus enthusiast shaders... not full enthusiast. As above, the 4870 X2 runs full enthusiast settings, not one setting scaled back, and not one hiccup... just smooth play throughout on a single 28" display.
cmdrdredd - Tuesday, March 8, 2011 - link
Why are we still using Crysis Warhead at "Gamer Quality"????? With cards like these why not turn everything to max in-game and then fiddle with AA and the like? I don't get it.
Spazweasel - Tuesday, March 8, 2011 - link
I've always viewed single-card dual-GPU cards as more of a packaging stunt than a product. They invariably are clocked a little lower than the single-GPU cards they're based upon, and short of a liquid cooling system are extremely noisy (unavoidable when you have twice as much heat that has to be dissipated by the same sized cooler as the single-GPU card). They also tend to not be a bargain price-wise; compare a dual-GPU card versus two of the single-GPU cards with the same GPU.
Personally, I would much rather have discrete GPUs and be able to cool them without the noise. I'll spend a little more for a full-sized case and a motherboard with the necessary layout (two slots between PCI-16x slots) rather than deal with the compromises of the extra-dense packaging. If someone else needs quad SLI or quad Crossfire, well, fine... to each their own. But if dual GPUs is the goal, I truly don't see any advantage of a dual-GPU card over dual single-GPU cards, and plenty of disadvantages.
Like I said... more of a stunt than a product. Cool that it exists, but less useful than advertised except for extremely narrow niches.
mino - Tuesday, March 8, 2011 - link
Even -2- years since the release of the original Crysis, “but can it run Crysis?” is still an important question, and for -3.5- years the answer was “no.” Umm, you sure about both those time values?
:)
Nice review, BTW.
MrSpadge - Tuesday, March 8, 2011 - link
"With a 375W TDP the 6990 should consume less power than 2x200W 6950CF, but in practice the 6950CF setup consumes 21W less. Part of this comes down to the greater CPU load the 6990 can create by allowing for higher framerates, but this doesn’t completely explain the disparity."If it hasn't been mentioned before: guys, this is simple. The TDP for the HD6950 is just for the PowerTune limit. The "power draw under gaming" is specified at ~150 W, which is just what you'll find during gaming gaming tests.
Furthermore, Cayman is run at a lower voltage (1.10 V) and lower clocks, and with fewer units on the HD6950, so it's only natural for 2 of these to consume less power than an HD6990. Summing it up, one would expect 1.10^2/1.12^2 * 800/830 * 22/24 = 85.2% of the power consumption of a Cayman on the HD6990.
MrS
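To make the estimate above easy to rerun with different numbers, here is a minimal sketch of the same V² x f x units scaling model, using only the voltages, clocks and SIMD counts quoted in the comment (not figures from the review itself):

```python
# Sketch of the scaling estimate above: dynamic power roughly follows V^2 * f,
# scaled here by the number of enabled SIMD units as well.

def relative_power(v_a, v_b, clk_a, clk_b, units_a, units_b):
    """Power of config A relative to config B under a simple V^2 * f * units model."""
    return (v_a / v_b) ** 2 * (clk_a / clk_b) * (units_a / units_b)

ratio = relative_power(v_a=1.10, v_b=1.12,      # HD6950 vs HD6990 core voltage (V)
                       clk_a=800, clk_b=830,    # core clocks (MHz)
                       units_a=22, units_b=24)  # enabled SIMDs
print(f"{ratio:.1%}")  # ~85.2%
```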
mino - Tuesday, March 8, 2011 - link
You shouldn't hit them so hard next time. :) Numbers tend to hurt one's ego badly if properly thrown.
geok1ng - Tuesday, March 8, 2011 - link
The article points out that the 6990 runs much closer to 6950 CF than 6970 CF. I assume the author is talking about the 2GB 6950, which can be shader unlocked in a process much safer than flashing the card with a 6970 BIOS.
It would be interesting to see CF numbers for unlocked 6950s.
As it stands the 6990 is not a great product: it requires an expensive PSU and a big case full of fans, at a price point higher than similar CF setups.
Considering that there are ZERO enthusiast mobos that won't accept CF, the 6990 becomes a very hard sell.
Even more troubling is the lack of a DL-DVI adapter in the bundle, scaring away 30" owners, precisely the group of buyers most interested in this video card.
Why should a 30" owner step away from a 580 or SLI 580s, if the 6990 needs the same expensive PSU and the same BIG case full of fans, and a DL-DVI adapter costs more than the price gap to an SLI mobo?
Thanny - Tuesday, March 8, 2011 - link
This card looks very much like the XFX 4GB 5970 card. The GPU position and cooling setup is identical. I'd be very interested to see a performance comparison with that card, which operates at 5870 clock speeds and has the same amount of graphics memory (which is not "frame buffer", for those who keep misusing that term).
JumpingJack - Wednesday, March 9, 2011 - link
:) Yep, I wish they would actually make it right.
The frame buffer is the amount of memory needed to store the pixel and color depth info for a renderable frame of data, whereas graphics memory (or VRAM) is the total memory available to the card, which consequently holds the frame buffer, command buffer, textures, etc. The frame buffer is just a small portion of the VRAM set aside as the output target for the GPU. The frame buffer size is the same for every modern video card on the planet at a fixed (same) resolution, i.e. a 1920x1200 res with 32-bit color depth has a frame buffer of ~9.2 MB (1920x1200x32 / 8); if double or triple buffered, multiply by 2 or 3.
Most every tech site misapplies the term "frame buffer": AnandTech, PCPer (big abuser), TechReport... most everyone.
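A quick sketch of the arithmetic above, just to show how small the frame buffer proper is next to a card's total VRAM (the resolution and buffer counts are only example inputs, not anything measured in the review):

```python
# Sketch of the frame buffer arithmetic above: width * height * bytes-per-pixel,
# multiplied by the number of buffers if double/triple buffering is used.

def frame_buffer_bytes(width, height, bits_per_pixel=32, buffers=1):
    return width * height * (bits_per_pixel // 8) * buffers

single = frame_buffer_bytes(1920, 1200)             # one 32-bit frame
triple = frame_buffer_bytes(1920, 1200, buffers=3)  # triple buffered
print(f"single buffer: {single / 1e6:.1f} MB")    # ~9.2 MB
print(f"triple buffered: {triple / 1e6:.2f} MB")  # ~27.65 MB
```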
Hrel - Wednesday, March 9, 2011 - link
Anyone wanting to play at resolutions above 1080p should just buy two GTX 560's for 500 bucks. Why waste the extra 200? There's no such thing as future-proofing at these levels.
wellortech - Wednesday, March 9, 2011 - link
If the 560s are as noisy as the 570, I think I would rather try a pair of 6950s.
HangFire - Wednesday, March 9, 2011 - link
And you can't even bring yourself to mention Linux (non-)support? You do realize there are high-end Linux workstation users, with CAD, custom software, and OpenCL development projects, that need this information?
HangFire - Thursday, March 10, 2011 - link
As usual, no reply after the first few pages... or on the topic of Linux. If only they had any clue how many Linux workstations get ordered with "are you sure this has the top processor, top video card and most RAM available?" on a post-it note stuck on the Req.
Azfar - Wednesday, March 9, 2011 - link
Crysis Killer, that is... Finally!!
Euchrestalin - Wednesday, March 9, 2011 - link
Why is this card named after Wedge Antilles? Why not Vader, Skywalker or my personal favorite, Mitth'raw'nuruodo?
gorgid - Wednesday, March 9, 2011 - link
I'm a big Radeon fan. I've had many ATI cards (3870 X2, 4870 X2, 5970, etc.), most of them water cooled, so I don't care about noise. My temperatures are always at 40C (for the chips) overclocked. I was planning to sell the 5970 and get a 6990, but after the first reviews I decided to wait and see what NVIDIA will bring in the form of the GTX 590. AMD made a great first step with DirectX 11; their cards came 6 months earlier than NVIDIA's.
For the first time (maybe not) ATI cards were at the top of the charts for a long time.
But this time almost 1 1/2 years have passed since the 5970 came to market. The 6990 shows about a 20% performance gain compared to the 5970. Why would somebody pay $700 for that kind of performance?
In my opinion NVIDIA will beat the 6990 pretty easily, take the crown, and keep it for a while.
Thank you all.
slickr - Wednesday, March 9, 2011 - link
I've read the reviews on most of the other English sites and there seems to be a big fluke in Anand's Battlefield benchmark. Seems like yet another shady bench.
Ryan Smith - Thursday, March 10, 2011 - link
We of course strive to make the best benchmark possible. So if you believe there is a problem, I'd like to hear what you think is amiss with our benchmarks. We can't fix things unless you guys chime in and let us know what you think is wrong.
HangFire - Monday, March 14, 2011 - link
So, you do read past page 5 of the comments... but you still can't bring yourself to even mention Linux. We are working our contacts with AMD but can't get much out of them. I guess no news is bad news.
andy5174 - Thursday, March 10, 2011 - link
Is this performance based on AMD's image quality cheat?
http://www.guru3d.com/article/exploring-ati-image-...
Ryan Smith - Friday, March 11, 2011 - link
That article is out of date as of Catalyst 11.1a.
noxyucT(RUS) - Tuesday, March 15, 2011 - link
What is all this power for? This is insane; it rips the GeForce 295 apart several times over. I wonder which processor was used in the test; even an i7 won't be able to handle such a monster. Brutal, in short.