Original Link: https://www.anandtech.com/show/4254/triplegpu-performance-multigpu-scaling-part1



It’s been quite a while since we’ve looked at triple-GPU CrossFire and SLI performance – or, for that matter, at GPU scaling in depth. While NVIDIA in particular likes to promote multi-GPU configurations as a price-practical upgrade path, such configurations are still almost always the domain of the high-end gamer. At $700 we have the recently launched GeForce GTX 590 and Radeon HD 6990, dual-GPU cards whose existence is hinged on how well games will scale across multiple GPUs. Beyond that we move into the truly exotic: triple-GPU configurations using three single-GPU cards, and quad-GPU configurations using a pair of the aforementioned dual-GPU cards. If you have the money, NVIDIA and AMD will gladly sell you upwards of $1500 in video cards to maximize your gaming performance.

These days multi-GPU scaling is a given – at least to some extent. Below the price of a single high-end card our recommendation is always going to be to get a bigger card before you get more cards, as multi-GPU scaling is rarely perfect, and with cutting-edge games there’s often a lag between a game’s release and the release of a driver profile that enables multi-GPU scaling. Once we’re looking at the Radeon HD 6900 series or GF110-based GeForce GTX 500 series though, going faster is no longer an option, and thus we have to look at going wider.

Today we’re going to be looking at the state of GPU scaling for dual-GPU and triple-GPU configurations. While we accept that multi-GPU scaling will rarely (if ever) hit 100%, just how much performance are you getting out of that 2nd or 3rd GPU versus how much money you’ve put into it? That’s the question we’re going to try to answer today.

From the perspective of a GPU review, we find ourselves in an interesting situation in the high-end market right now. AMD and NVIDIA just finished their major pushes for this high-end generation, but the CPU market is not in sync. In January Intel launched their next-generation Sandy Bridge architecture, but unlike the past launches of Nehalem and Conroe, the high-end market has been initially passed over. For $330 we can get a Core i7 2600K and crank it up to 4GHz or more, but what we get to pair it with is lacking.

Sandy Bridge only supports a single PCIe x16 link coming from the CPU – an awesome CPU is being held back by a limited amount of off-chip connectivity: DMI and a single PCIe x16 link. For two GPUs we can split that out to x8 and x8, which shouldn’t be too bad. But what about three GPUs? With PCIe bridges we can mitigate the issue somewhat by allowing the GPUs to talk to each other at x16 speeds and by dynamically allocating CPU-to-GPU bandwidth based on need, but at the end of the day we’re splitting a single x16 link across three GPUs.
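To put those splits in perspective, here’s a quick sketch of the per-direction bandwidth arithmetic, assuming the commonly cited figure of roughly 500MB/sec per PCIe 2.0 lane per direction (real-world throughput runs lower):

```python
# Approximate one-direction PCIe 2.0 bandwidth for a given lane count.
# Assumes ~500 MB/s per lane per direction (5 GT/s with 8b/10b
# encoding); treat these as ballpark figures, not measured numbers.
PCIE2_MB_PER_LANE = 500  # MB/s per lane, per direction

def link_bandwidth_gbs(lanes: int) -> float:
    """Rough per-direction bandwidth of a PCIe 2.0 link in GB/s."""
    return lanes * PCIE2_MB_PER_LANE / 1000

print(link_bandwidth_gbs(16))  # full x16 from Sandy Bridge: 8.0 GB/s
print(link_bandwidth_gbs(8))   # per card in an x8/x8 split: 4.0 GB/s
print(link_bandwidth_gbs(32))  # X58's 32 lanes: 16.0 GB/s total
```

With three cards hanging off Sandy Bridge’s single x16 link, each GPU averages only a third of that 8GB/sec, which is why a bridge chip that arbitrates bandwidth dynamically only mitigates the problem rather than solving it.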

The alternative is to take a step back and work with Nehalem and the X58 chipset. Here we have 32 PCIe lanes to work with, doubling the amount of CPU-to-GPU bandwidth, but the tradeoff is the CPU. Gulftown and Nehalem are capable chips in their own right, but per-clock the Nehalem architecture is normally slower than Sandy Bridge, and neither chip can clock quite as high on average. Gulftown does offer more cores – 6 versus 4 – but very few games are held back by the number of cores. Instead the ideal configuration is to maximize the performance of a few cores.

Later this year Sandy Bridge E will correct this by offering a Sandy Bridge platform with more memory channels, more PCIe lanes, and more cores; the best of both worlds. Until then it comes down to choosing from one of two platforms: a faster CPU or more PCIe bandwidth. For dual-GPU configurations this should be an easy choice, but for triple-GPU configurations it’s not quite as clear cut. For now we’re going to be looking at the latter by testing on our trusty Nehalem + X58 testbed, which largely eliminates a bandwidth bottleneck at the cost of a potential CPU bottleneck.

Moving on, today we’ll be looking at multi-GPU performance under dual-GPU and triple-GPU configurations; quad-GPU will have to wait. Normally we only have two reference-style cards of any product on hand, so we’d like to thank Zotac and PowerColor for providing a reference-style GTX 580 and Radeon HD 6970 respectively.



Fitting Three Video Cards in an ATX Case

I thought we’d flip our normal GPU review style on its head by starting with Power, Temperature, and Noise first. NVIDIA and AMD have both long recommended against placing high-end video cards directly next to each other, in favor of additional spacing between video cards. Indeed this is a requirement for their latest dual-GPU cards, as both the GTX 590 and 6990 draw relatively massive amounts of air using a fan mounted at the center of the card and exhaust roughly half their air inside of the case. Their reference-style single-GPU cards, on the other hand, are fully exhausting designs with fans mounted towards the rear of the card. Thus multi-GPU configurations with the cards next to each other are supposed to be possible, though not ideal.

There’s a reason I want to bring this up first, and a picture is worth a thousand words.

While AMD and NVIDIA’s designs share a lot in common – a rear-mounted blower fan pushes air over a vapor chamber cooler – the shrouds and other external equipment are quite different. It’s not until we see a picture that we can appreciate just how different they are.

With the Radeon HD 6000 series, AMD’s reference designs took on a very boxy design. The cards fully live up to the idea of a “black box”; they’re enclosed on all sides with a boxy cooler and a black metal backplate. As a GPU reviewer I happen to like this design as the GPUs are easy to stack/store, and the backplate covers what would normally be the only exposed electronics on the card. The issue with this boxy design is that AMD is taking full advantage of the PCIe specification, leading to the 6900 series being the full width allowed.

NVIDIA on the other hand has always had some kind of curve in their design, normally resulting in a slightly recessed shroud around the blower intake. For the GTX 580 and GTX 570 they took a further step in recessing the shroud around this area, leading to the distinct wedge shape. At the same time NVIDIA does not use a backplate, saving precious millimeters of space. The end result of this is that even when packed like sardines, the GTX 580 and GTX 570 blowers have some space reserved for air intake.

The Radeon HD 6970 does not, and this is our problem. The picture of the 6970 in triple-CF says it all: the middle card is pressed directly against the top card. Because these cards are so large and heavy, the rear ends tend to shift and dip some when installed in a vertical motherboard – in fact this is why we can normally get away with a dense dual-CF setup, since the bottom card dips a bit more – but in a triple-CF configuration the end result is that one of the cards will end up getting up-close and personal with another one.

Without outside intervention this isn’t usable. We hit 99C on the middle card in Crysis when we initially installed the three cards, and Crysis isn’t the hardest thing we run. For the purposes of our test we ultimately resorted to wedging some space between the cards with wads of paper, but this isn’t a viable long-term solution.

Unfortunately long-term alternatives are few if you want to give a triple-GPU setup more space. Our testbed uses an Asus Rampage II Extreme, which features three PCIe slots mixed among a total of 6 slots; the way it’s laid out makes it impossible to arrange our triple-GPU configuration in any other manner. Even something like the ASRock P67 Extreme4 can’t escape the fact that the ATX spec only has room for 7 slots, and that when manufacturers actually use the 7th and topmost slot, it’s a short PCIe x1 slot. In short, you won’t find an ATX motherboard that can fit three video cards while giving each one a slot’s worth of breathing room. For that you have to use a larger-than-ATX form factor.

So what’s the point of all of this rambling? With AMD’s current shroud design it’s just not practical to do triple-CF on air on an ATX motherboard. If you want to play with three AMD boards you need to think outside of the box: either use water cooling or use a larger motherboard.



The Test, Power, Temps, and Noise

CPU: Intel Core i7-920 @ 3.33GHz
Motherboard: Asus Rampage II Extreme
Chipset Drivers: Intel 9.1.1.1015
Hard Disk: OCZ Summit (120GB)
Memory: Patriot Viper DDR3-1333 3x2GB (7-7-7-20)
Video Cards: AMD Radeon HD 6990, AMD Radeon HD 6970, PowerColor Radeon HD 6970, EVGA GeForce GTX 590 Classified, NVIDIA GeForce GTX 580, Zotac GeForce GTX 580
Video Drivers: NVIDIA ForceWare 266.58, AMD Catalyst 11.4 Preview
OS: Windows 7 Ultimate 64-bit

With that out of the way, let’s start our look at power, temperature, and noise. We did include our jury-rigged triple-CF setup in these results for the sake of comparison, but please keep in mind that it is not a viable long-term setup, which is why we have starred the results. These results also include the GTX 590 from last week, which has its own handicap under FurMark due to NVIDIA’s OCP. This does not apply to the triple-SLI setup, on which we can bypass OCP.

Given NVIDIA’s higher idle TDP, there shouldn’t be any surprises here. Three GTX 580s in SLI make for a fairly wide gap of 37W – in fact even two GTX 580s in SLI draw 7W more than the triple 6970 setup. Multi-GPU configurations are always going to be a limited market opportunity, but if it were possible to completely power down unused GPUs, it would certainly improve the idle numbers.

With up to three GPUs, power consumption under load gets understandably high. For FurMark in particular we see the triple GTX 580 setup come just shy of 1200W due to our disabling of OCP – it’s an amusingly absurd number. Meanwhile the triple 6970 setup picks up almost nothing over the dual 6970, which is clearly a result of AMD’s drivers not having a 3-way CF profile for FurMark. Thus the greatest power load we can place on the triple 6970 is under HAWX, where it pulls 835W.

With three cards packed tightly together the middle card ends up having the most difficult time, so it’s that card which is setting the highest temperatures here. Even with that, idle temperatures only tick up a couple of degrees in a triple-GPU configuration.

Even when we forcibly wedge the 6970s apart, the triple 6970 setup still ends up being the warmest under Crysis – this being after Crysis temperatures dropped 9C from the separation. Meanwhile the triple GTX 580 gets quite warm on its own, but under Crysis and HAWX it’s nothing we haven’t seen before. FurMark is the only outlier here, where temperatures stabilized at 95C, 2C under GF110’s thermal threshold. It’s safe, but I wouldn’t recommend running FurMark all day just to prove it.

With a 3rd card in the mix idle noise creeps up some, but much like idle temperatures it’s not significantly more. For some perspective though, we’re still looking at idle noise levels equivalent to the GTX 560 Ti running FurMark, so it’s by no means a silent operation.

It turns out adding a 3rd card doesn’t make all that much more noise. Under HAWX the GTX 580 does get 3dB louder, but under FurMark the difference is under a dB. The triple 6970 setup does better in both situations, but that has more to do with our jury-rigging and the fact that FurMark doesn’t scale with a 3rd AMD GPU. Amusingly the triple 580 setup is still quieter under FurMark than the 6990 by nearly 3dB even though we’ve disabled OCP for the GTX 580, and for HAWX the difference is only 0.2dB in AMD’s favor. It’s simply not possible to do worse than the 6990 without overvolting/overclocking, it seems.



Crysis, BattleForge, Metro 2033, and HAWX

For the sake of completeness we have included both 2560x1600 and 1920x1200 results in our charts. However with current GPU performance a triple-GPU setup only makes sense at 2560, so that’s the resolution we’re going to be focusing on for commentary and scaling purposes.

Crysis is our first benchmark as usual, and this time it hands us a rather exact tie: the triple GTX 580 setup exactly ties the triple 6970 setup at 2560x1600 with full Enthusiast settings, at 65.6fps. This is a fitting summary of AMD and NVIDIA’s relative performance as of late, as the two are normally very close when it comes to cards at the same price. It’s also probably not the best start for the triple GTX 580 though, as it means NVIDIA’s lead at one and two cards has melted away by the 3rd.

We have however finally established what it takes to play Crysis at full resolution on a single monitor with every setting turned up – it takes no fewer than three GPUs to do the job. Given traditional GPU performance growth curves, it should be possible to do this on a single GPU by early 2014 or so, only some 7 years after the release of Crysis: Warhead. If you want SSAA though, you may as well throw in another few years.
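That 2014 estimate can be sanity-checked with some back-of-the-envelope math, assuming (purely for illustration) that single-GPU performance doubles roughly every two years:

```python
import math

# Today it takes three GPUs, so a single GPU must close a ~3x gap;
# at an assumed doubling every 2 years that takes log2(3) periods.
doubling_period_years = 2.0  # assumption, not a measured figure
years_to_triple = math.log2(3) * doubling_period_years
print(round(years_to_triple, 1))  # ~3.2 years: early 2011 -> early 2014
```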

Moving on, it’s interesting to note that while we had a tie at 2560 with Enthusiast settings for the average framerate, the same cannot be said of the minimums.  At 2560, no matter the quality, AMD has a distinct edge in the minimum framerate. This is particularly pronounced at 2560E, where moving from two to three GPUs causes a drop in the framerate on the GTX 580. This is probably a result of the differences in the cards’ memory capacity – additional GPUs require additional memory, and it seems the GTX 580 and its 1.5GB has reached its limit. We never seriously imagined we’d find a notable difference between 1.5GB and 2GB at this point in time, but here we are.
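Some rough framebuffer arithmetic shows why 2560x1600 with AA pressures a 1.5GB card. These are my own ballpark figures, and actual usage depends heavily on the engine and driver:

```python
def buffer_mb(width: int, height: int, bytes_per_pixel: int, samples: int = 1) -> float:
    """Size of a single render buffer in MiB."""
    return width * height * bytes_per_pixel * samples / (1024 ** 2)

w, h = 2560, 1600
color = buffer_mb(w, h, 4, samples=4)  # 32-bit color, 4xMSAA: 62.5 MiB
depth = buffer_mb(w, h, 4, samples=4)  # 32-bit depth/stencil, 4xMSAA: 62.5 MiB
print(color + depth)  # 125.0 MiB for just these two buffers, before
# textures, intermediate render targets, and AFR's per-GPU copies
```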

BattleForge is a shader-bound game that normally favors NVIDIA, and this doesn’t change with three GPUs. However even though it’s one of our more intensive games, three GPUs is simply overkill for one monitor.

Metro 2033 is the only other title in our current lineup that can challenge Crysis for the title of the most demanding game, and here it would win that bout. Even with three GPUs we can’t crack 60fps, and we still haven’t enabled a few extra features such as Depth of Field. The 6970 and GTX 580 are normally close with one and two GPUs, and we see that relationship extend to three GPUs. The triple GTX 580 setup has the lead by under 2fps, but it’s not the lead one normally expects from the GTX 580.

Our next game is HAWX, a title that shifts us towards games that are CPU bound. Even with that it’s actually one of the most electrically demanding games in our test suite, which is why we use it as a backup for our power/temperature/noise testing. Here we see both the triple GTX 580 and triple 6970 crack 200fps at 2560, with the GTX 580 taking top honors.

  Radeon HD 6970 GeForce GTX 580
GPUs 1->2 2->3 1->3 1->2 2->3 1->3
Crysis G+E Avg 185% 134% 249% 181% 127% 230%
Crysis E Avg 188% 142% 268% 184% 136% 252%
Crysis G+E Min 191% 141% 270% 181% 116% 212%
Crysis E Min 186% 148% 277% 185% 83% 155%
BattleForge 194% 135% 263% 199% 135% 269%
Metro 2033 180% 117% 212% 163% 124% 202%
HAWX 190% 115% 219% 157% 117% 185%

Having taken a look at raw performance, what does the scaling situation look like? All together it’s very good. For a dual-GPU configuration the weakest game for both AMD and NVIDIA is Metro 2033, where AMD reaches 180% of a single card’s performance while NVIDIA manages only 163%. At the other end, NVIDIA manages almost perfect scaling in BattleForge at 199%, while AMD’s best showing is in the same game at 194%.

Adding in a 3rd GPU shakes things up significantly, however. The best-case scenario for going from two GPUs to three is 150%, which appears to be a harder target to reach. At 142% under Crysis with Enthusiast settings AMD does quite well, which is why they close the overall performance gap there. NVIDIA doesn’t do quite as well, managing 136%. The weakest game for both meanwhile is HAWX, which is what we’d expect for a game passing 200fps and almost assuredly running straight into a CPU bottleneck.

The Crysis minimum framerates give us a moment’s pause though. AMD gets almost perfect scaling moving from two to three GPUs when it comes to minimum framerates in Crysis, while NVIDIA ends up losing performance with Enthusiast settings. This is likely less a story of GPU scaling and more a story about GPU memory, but regardless the outcome is a definite hit in performance. Thus while minimum framerate scaling from one to two GPUs is rather close between NVIDIA and AMD with full Enthusiast settings, and slightly in AMD’s favor with Gamer + Enthusiast, AMD has a definite advantage going from two to three GPUs across this batch of games.

Sticking with average framerates and throwing out a clearly CPU-limited HAWX, neither side has a strong advantage moving from two GPUs to three; the average gain is 131%, or some 62% of the theoretical maximum. AMD does have a slight edge here, but keep in mind we’re looking at percentages, so AMD’s edge is often a couple of frames per second at best.
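For reference, this is how the scaling and efficiency percentages above are derived; the FPS inputs in the example are hypothetical stand-ins, not our measured data:

```python
def scaling_percent(fps_after: float, fps_before: float) -> float:
    """Performance of the larger config as a percentage of the smaller."""
    return 100 * fps_after / fps_before

def efficiency_percent(gain_pct: float, theoretical_pct: float) -> float:
    """Share of the theoretical gain actually realized. Going from two
    GPUs to three tops out at 150% (3/2), so a 131% average gain is
    (131 - 100) / (150 - 100) = 62% of the maximum."""
    return 100 * (gain_pct - 100) / (theoretical_pct - 100)

print(scaling_percent(90.0, 60.0))   # hypothetical 2->3 jump: 150.0
print(efficiency_percent(131, 150))  # 62.0
```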

Going from one GPU to two GPUs also gives AMD a minor advantage, with the average performance being 186% for AMD versus 182% for NVIDIA. Much like we’ve seen in our individual GPU reviews though, this almost constantly flip-flops based on the game being tested, which is why in the end the average gains are so close.



Civ V, Battlefield, STALKER, and DIRT 2

Civilization V continues to be the oddball among our benchmarks. Having started out as a title with low framerates and poor multi-GPU scaling, in recent months AMD and NVIDIA have rectified this some. As a result it’s now possible to crack 60fps at 2560 with a pair of high-end GPUs, albeit with some difficulty. In our experience Civ V is a hybrid-bottlenecked game – we have every reason to believe it’s bottlenecked by the CPU at certain points, but the disparity between NVIDIA and AMD’s performance indicates there’s a big difference in how the two are setting things up under the hood.

When we started using Bad Company 2 a year ago, it was actually a rather demanding benchmark; anything above 60fps at 2560 required SLI/CF. Today that’s still true, but at 52fps the GTX 580 comes close to closing that gap. On the flip side two GPUs can send scores quite a distance up, and three GPUs will push that over 120fps. Now if we could just get a 120Hz 2560 monitor…

The Bad Company 2 Waterfall benchmark is our other minimum framerate benchmark, as it provides very consistent results. NVIDIA normally does well here with one GPU, but with two GPUs the gap closes to the point where NVIDIA may be CPU limited as indicated by our 580SLI/590 scores. At three GPUs AMD falls just short of a 60fps minimum, while the triple GTX 580 setup drops in performance. This would indicate uneven performance scaling for NVIDIA with three GPUs.

STALKER is another title that is both shader heavy and potentially VRAM-intensive. When moving from 1GB cards to 2GB cards we’ve seen the average framerate climb a respectable amount, which may be why AMD does so well here with multiple GPUs given the 512MB advantage in VRAM. With three GPUs the GTX 580 can crack 60fps, but the 6970 can clear 90fps.

We’ve seen DiRT 2 become CPU limited with two GPUs at 1920, so it shouldn’t come as a surprise that with three GPUs a similar thing happens at 2560. Although we can never be 100% sure that we’re CPU limited versus just seeing poor scaling, the fact that our framerates top out at only a few FPS above our top 1920 scores is a solid sign of this.

  Radeon HD 6970 GeForce GTX 580
GPUs 1->2 2->3 1->3 1->2 2->3 1->3
Civilization V 168% 99% 167% 170% 95% 160%
Battlefield: BC2 Chase 200% 139% 278% 189% 129% 246%
Battlefield: BC2 Water 206% 131% 272% 148% 85% 125%
STALKER: CoP 189% 121% 231% 149% 104% 157%
DiRT 2 181% 120% 219% 177% 105% 186%

So what does multi-GPU scaling look like in this batch of games? The numbers favor AMD at this point, particularly thanks to STALKER. Throwing out a CPU-limited DiRT 2, the average gain for an AMD card moving from one GPU to two is 185%; NVIDIA’s gain under the same circumstances is only 169%.

For the case of two GPUs, AMD’s worst showing is Civilization V at 168%, while for NVIDIA it’s STALKER at 149%. In the case of Civilization V the gains close to NVIDIA’s (168% vs. 170%) hide the fact that the GTX 580 already starts out at a much better framerate, so while the gains are similar the final performance is not. STALKER meanwhile presents us with an interesting case where the GTX 580 and Radeon HD 6970 start out close and end up far apart; AMD has both the scaling and performance advantage thanks to NVIDIA’s limited gains here.

As for scaling with three GPUs, as was the case with two GPUs the results are in AMD’s favor. We still see some weak scaling at times – or none, as in the case of Civilization V – but AMD’s average gain of 120% over a dual-GPU configuration isn’t too bad. NVIDIA’s average gain of 110% amounts to only half of AMD’s improvement (10% versus 20% over a dual-GPU configuration), owing to an even larger performance loss in Civilization V and almost no gain in STALKER. Battlefield: Bad Company 2 is the only title in which NVIDIA sees significant gains, and while the specter of CPU limits always looms overhead, I’m not sure what’s going on in STALKER for NVIDIA; perhaps we’re looking at the limits of 1.5GB of VRAM?

Looking at minimum framerates through the Battlefield: Bad Company 2 Waterfall benchmark, the situation is strongly in AMD’s favor for both two and three GPUs, as AMD scales practically perfectly with two GPUs and relatively well with three. I strongly believe this has more to do with the game than the technology, but at the end of the day NVIDIA’s poor triple-GPU scaling under this benchmark really puts a damper on things.



Mass Effect 2, Wolfenstein, and Civ V Compute

Mass Effect 2 is a game we figured would be CPU limited with three GPUs, so it’s quite surprising that it’s not. It does look like there’s a limit at around 200fps, but we can’t hit it at 2560 even with three GPUs. With two or more GPUs, however, you can be quite confident that your framerates will be nothing short of amazing.

For that reason, and because ME2 is a DX9-only game, we also gave it a shot with SSAA on both the AMD and NVIDIA setups at 1920. Surprisingly it’s almost fluid in this test even with one GPU. Move to two GPUs and we’re looking at 86fps – again this is with 4x super sampling going on. I don’t think we’re too far off from being able to super sample a number of games (at least the console ports) with this kind of performance.

Wolfenstein is quite CPU limited even with two GPUs, so we didn’t expect much with three GPUs. In fact the surprising bit wasn’t the performance, it was the fact that AMD’s drivers completely blew a gasket with this game. It runs fine with two GPUs, but with three GPUs it will crash almost immediately after launching it. Short of a BSOD, this is the worst possible failure mode for an AMD setup, as AMD does not provide individual game settings for CF, unlike NVIDIA who allows for the enabling/disabling of SLI on a game-specific basis. As a result the only way to play Wolfenstein if you had a triple-GPU setup is to change CrossFire modes globally, which requires a hardware reconfiguration that takes several seconds and a couple of blank screens.

We only have one OpenGL game in our suite so we can’t isolate this as an AMD OpenGL issue or solely an issue with Wolfenstein. It’s disappointing to see AMD have this problem though.

We don’t normally look at multi-GPU numbers with our Civilization V compute test, but in this case we had the data, so we wanted to throw it out there as an example of where SLI/CF and the concept of alternate frame rendering just don’t contribute much to a game. Texture decompression needs to happen on each card, so it can’t be divided up as rendering can. As a result additional GPUs reduce NVIDIA’s score, while two GPUs do end up helping AMD some, only for a 3rd GPU to bring scores crashing down. None of these scores are worth worrying about – performance is still more than fast enough for the leader scenes the textures are for – but it’s a nice theoretical example.
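A toy model (my own illustration, not part of our testing methodology) makes the distinction concrete: alternate frame rendering divides frames across GPUs, while work that must run in full on every GPU is duplicated and only picks up synchronization overhead:

```python
def render_fps(gpus: int, frame_cost: float = 1.0, sync: float = 0.05) -> float:
    """AFR: each GPU renders every Nth frame, minus a small sync tax.
    Costs are arbitrary illustrative units, not measured values."""
    return gpus / (frame_cost + sync * (gpus - 1))

def decompress_score(gpus: int, work_cost: float = 1.0, sync: float = 0.1) -> float:
    """Non-divisible work runs in full on each GPU, so extra GPUs
    contribute nothing but synchronization cost."""
    return 1.0 / (work_cost + sync * (gpus - 1))

print(render_fps(3) / render_fps(1))              # ~2.7x: healthy AFR scaling
print(decompress_score(3) / decompress_score(1))  # ~0.83x: a net loss
```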

  Radeon HD 6970 GeForce GTX 580
GPUs 1->2 2->3 1->3 1->2 2->3 1->3
Mass Effect 2 180% 142% 256% 195% 139% 272%
Mass Effect 2 SSAA 187% 148% 280% 198% 138% 284%
Wolfenstein 133% 0% 0% 151% 96% 145%

Since Wolfenstein is so CPU limited, the scaling story out of these games is really about Mass Effect 2. Again dual-GPU scaling is really good, both with MSAA and SSAA; NVIDIA in particular achieves almost perfect scaling. What makes this all the more interesting is that with three GPUs the roles are reversed: scaling is still strong, but now it’s AMD achieving almost perfect scaling on Mass Effect 2 with SSAA, which is quite a feat given the uneven scaling of triple-GPU configurations overall. It’s just a shame that AMD doesn’t have an SSAA mode for DX10/DX11 games; if it were anything like their DX9 SSAA mode, it could certainly sell the idea of a triple-GPU setup to users looking to completely eliminate all forms of aliasing at any price.

As for Wolfenstein, with two GPUs NVIDIA has the edge, but they also had the lower framerate in the first place. Undoubtedly being CPU limited even with two GPUs, there’s not much to draw from here.



Closing Thoughts

Unlike our normal GPU reviews, looking at multi-GPU scaling in particular is much more about the tests than it is architectures. With AMD and NVIDIA both using the same basic alternate frame rendering strategy, there's not a lot to separate the two on the technology side. Whether a game scales poorly or well has much more to do with the game than the GPU.

  Radeon HD 6970 GeForce GTX 580
GPUs 1->2 2->3 1->3 1->2 2->3 1->3
Average Avg. FPS Gain 185% 127% 236% 177% 121% 216%
Average Min. FPS Gain 196% 140% 274% 167% 85% 140%

In terms of average FPS gains for two GPUs, AMD has the advantage here. It’s not much of an advantage at under 10%, but it is mostly consistent. The same can be said for three GPU setups, where the average gain for a three GPU setup versus a two GPU setup nets AMD a 127% gain versus 121% for NVIDIA. The fact that the Radeon HD 6970 is normally the weaker card in a single-GPU configuration makes things all the more interesting though. Are we seeing AMD close the gap thanks to CPU bottlenecks, or are we really looking at an advantage for AMD’s CrossFire scaling? One thing is for certain, CrossFire scaling has gotten much better over the last year – at the start of 2010 these numbers would not have been nearly as close.

Overall the gains for SLI or CrossFire in a dual-GPU configuration are very good, which fits well with the fact that most users will never have more than two GPUs. Scaling is heavily game dependent, but on average it’s good enough that you’re getting your money’s worth from a second video card. Just don’t expect perfect scaling in more than a handful of games.

As for triple-GPU setups, the gains are decent, but on average it’s not nearly as good. A lot of this has to do with the fact that some games simply don’t scale beyond two GPUs at all – Civilization V always comes out as a loss, and the GPU-heavy Metro 2033 only makes limited gains at best. Under a one monitor setup it’s hard to tell if this is solely due to poor scaling or due to CPU limitations, but CPU limitations alone do not explain it all. There are a couple of cases where a triple-GPU setup makes sense when paired with a single monitor, particularly in the case of Crysis, but elsewhere framerates are quite high after the first two GPUs with little to gain from a 3rd GPU. I believe super sample anti-aliasing is the best argument for a triple-GPU setup with one monitor, but at the same time that restricts our GPU options to NVIDIA as they’re the only one with DX10/DX11 SSAA.

Minimum framerates with three GPUs do give us a reason to pause for a moment and ponder. For the games we collect minimum framerate data for – Crysis and Battlefield: Bad Company 2 – AMD has a massive lead in minimum framerates. In practice I don’t completely agree with the numbers, and it’s unfortunate that most games don’t generate consistent enough minimum framerates to be useful. From the two games we do test AMD definitely has an advantage, but having watched and played a number of games I don’t believe this is consistent for every game. I suspect the games we can generate consistent data for are the ones that happen to favor the 6970, and likely because of its VRAM advantage at that.

Ultimately triple-GPU performance and scaling cannot be evaluated solely on a single monitor, which is why we won’t be stopping here. Later this month we’ll be looking at triple-GPU performance in a 3x1 multi-monitor configuration, which should allow us to put more than enough load on these setups to see what flies, what cracks under the pressure, and whether multi-GPU scaling can keep pace with such high resolutions. So until then, stay tuned.
