Original Link: https://www.anandtech.com/show/9421/the-amd-radeon-r9-fury-review-feat-sapphire-asus
The AMD Radeon R9 Fury Review, Feat. Sapphire & ASUS
by Ryan Smith on July 10, 2015 9:00 AM EST

A bit over two weeks ago AMD launched their new flagship video card, the Radeon R9 Fury X. Based on the company’s new Fiji GPU, the R9 Fury X brought with it significant performance improvements to AMD’s lineup, with the massive Fiji GPU greatly increasing the card’s shading resources. Meanwhile Fiji also marked the introduction of High Bandwidth Memory (HBM) into consumer products, giving the R9 Fury X a significant leg up in memory bandwidth. Overall AMD put together a very impressive card, however at $649 it fell just short of the GeForce GTX 980 Ti it needed to beat.
Meanwhile alongside the announcement of the R9 Fury X, AMD announced that there would be three other Fiji-based cards: the R9 Fury, the R9 Nano, and a yet-to-be-named dual-GPU Fiji card. The first of these remaining cards to launch is the R9 Fury, the obligatory lower-tier sibling to AMD’s flagship R9 Fury X, and it is that card we are taking a look at today ahead of its launch next week.
While R9 Fury X remains the fastest Fiji card – and by virtue of being introduced first, the groundbreaking card – the impending launch of the R9 Fury brings with it a whole slew of changes that make it an interesting card in its own right, and a very different take on a Fiji product altogether. From a performance standpoint it is a lower performing card, featuring a cut-down Fiji GPU, but at the same time it is $100 cheaper than the R9 Fury X. Meanwhile in terms of construction, unlike the R9 Fury X, which is only available in its reference closed loop liquid cooling design, the R9 Fury is available as semi-custom and fully-custom cards from AMD’s board partners, built using traditional air coolers, making this the first air cooled Fiji card. As a result the R9 Fury at times ends up being a very different take on Fiji, with all of the benefits and drawbacks that come with it.
AMD GPU Specification Comparison

| | AMD Radeon R9 Fury X | AMD Radeon R9 Fury | AMD Radeon R9 290X | AMD Radeon R9 290 |
|---|---|---|---|---|
| Stream Processors | 4096 | 3584 | 2816 | 2560 |
| Texture Units | 256 | 224 | 176 | 160 |
| ROPs | 64 | 64 | 64 | 64 |
| Boost Clock | 1050MHz | 1000MHz | 1000MHz | 947MHz |
| Memory Clock | 1Gbps HBM | 1Gbps HBM | 5Gbps GDDR5 | 5Gbps GDDR5 |
| Memory Bus Width | 4096-bit | 4096-bit | 512-bit | 512-bit |
| VRAM | 4GB | 4GB | 4GB | 4GB |
| FP64 | 1/16 | 1/16 | 1/8 | 1/8 |
| TrueAudio | Y | Y | Y | Y |
| Transistor Count | 8.9B | 8.9B | 6.2B | 6.2B |
| Typical Board Power | 275W | 275W | 250W | 250W |
| Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 28nm |
| Architecture | GCN 1.2 | GCN 1.2 | GCN 1.1 | GCN 1.1 |
| GPU | Fiji | Fiji | Hawaii | Hawaii |
| Launch Date | 06/24/15 | 07/14/15 | 10/24/13 | 11/05/13 |
| Launch Price | $649 | $549 | $549 | $399 |
Starting things off, let’s take a look at the specifications of the R9 Fury. As we mentioned in our R9 Fury X review, we have known since the initial R9 Fury series launch that the R9 Fury utilizes a cut-down Fiji GPU, and we can now reveal just how it has been cut down. As is usually the case for these second-tier cards, the R9 Fury features both a GPU with some functional units disabled and a slightly reduced clockspeed, allowing AMD to recover partially defective GPUs while easing up on the clockspeed requirements.
The Fiji GPU in the R9 Fury ends up having 56 of 64 CUs enabled, which brings down the total stream processor count from 4,096 to 3,584. This in turn ends up being the full extent of the R9 Fury’s disabled functional units, as AMD has not touched the front-end or back-end, meaning the number of geometry units and the number of ROPs remain unchanged.
Also unchanged is the memory subsystem. All Fiji-based cards, including the R9 Fury, will be shipping with a fully enabled memory subsystem, meaning we’re looking at 4GB of HBM attached to the GPU over a 4096-bit memory bus. With Fiji topping out at just 4GB of memory in the first place – one of the drawbacks faced by the $650 R9 Fury X – cutting memory capacity any further is not a real option for AMD, so every Fiji card will come with that much memory.
As for clockspeeds, R9 Fury takes a slight trim on the GPU clockspeed. The reference clockspeed for the R9 Fury is a flat 1000MHz, a 5% reduction from the R9 Fury X. On the other hand the memory clock remains unchanged at 500MHz DDR, for an effective memory rate of 1Gbps/pin.
All told then, on paper the performance difference between the R9 Fury and R9 Fury X will stand to be between 0% and 17%; that is, the R9 Fury will be up to 17% slower than the R9 Fury X. In the best case scenario for the R9 Fury of a memory bandwidth bottleneck, it has the same 512GB/sec of memory bandwidth as the R9 Fury X. At the other end of the spectrum, in a shader-bound scenario, the combination of the reduction in shader hardware and clockspeeds is where the R9 Fury will be hit the hardest, as its total FP32 throughput drops from 8.6 TFLOPs to 7.17 TFLOPs. Finally in the middle, workloads that are front-end or back-end bound will see a much smaller drop since those units haven’t been cut-down at all, leading to just a 5% performance drop. As for the real world performance drop, as we’ll see it’s around 7%.
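For readers who want to check the napkin math behind that spread, here is a quick sketch using the standard peak-rate formulas (our own arithmetic rather than any AMD-supplied figures):

```python
# Peak-rate sketch for R9 Fury vs. R9 Fury X (standard formulas, our own arithmetic)

def fp32_tflops(stream_processors, boost_mhz):
    # Peak FP32 = stream processors x 2 FLOPs per clock (FMA) x boost clock
    return stream_processors * 2 * boost_mhz * 1e6 / 1e12

def hbm_bandwidth_gbs(bus_width_bits, data_rate_gbps_per_pin):
    # Bandwidth = bus width (bits -> bytes) x per-pin data rate
    return bus_width_bits / 8 * data_rate_gbps_per_pin

fury_x = fp32_tflops(4096, 1050)  # ~8.60 TFLOPs
fury = fp32_tflops(3584, 1000)    # ~7.17 TFLOPs

print(f"Shader-bound deficit:    {1 - fury / fury_x:.1%}")               # ~17% (worst case)
print(f"Front/back-end deficit:  {1 - 1000 / 1050:.1%}")                 # ~5% (clockspeed only)
print(f"Memory bandwidth (both): {hbm_bandwidth_gbs(4096, 1):.0f} GB/s")  # 512 GB/s (best case: no deficit)
```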
Power consumption on the other hand is going to be fairly similar to the R9 Fury X. AMD’s official Typical Board Power (TBP) for the R9 Fury is 275W, the same as its older sibling. Comparing the two products, the R9 Fury sees some improvement from the disabled CUs, however as a second-tier part it uses lower quality chips overall. Meanwhile the use of air cooling means that operating temperatures are higher than the R9 Fury X’s cool 65C, and as a result power loss from leakage is higher as well. At the end of the day this means that the R9 Fury is going to lose some power efficiency compared to the R9 Fury X, as any reduction in power consumption is going to be met with a larger decrease in performance.
Moving on, let’s talk about the cards themselves. With the R9 Fury X AMD has restricted vendors to selling the reference card, and we have been told it will be staying this way, just as it was for the R9 295X2. On the other hand for R9 Fury AMD has not even put together a complete reference design, leaving the final cards up to their partners. As a result next week’s launch will be a “virtual” launch, with all cards being semi or fully-custom.
Out of the gate the only partners launching cards are Sapphire and Asus, AMD’s closest and largest partners respectively. Sapphire will be releasing stock and overclocked SKUs based on a semi-custom design that couples the AMD reference PCB with Sapphire’s Tri-X cooler. Asus on the other hand has gone fully-custom right out of the gate, pairing up a new custom PCB with one of their DirectCU III coolers. Cards from additional partners will eventually hit the market, but not until later in the quarter.
The R9 Fury will be launching with an MSRP of $549, $100 below the R9 Fury X. This price puts the R9 Fury up against much different competition than its older sibling; instead of going up against NVIDIA’s GeForce GTX 980 Ti, the closest competition will be the older GeForce GTX 980. The official MSRP on that card is $499, so the R9 Fury is more expensive, but in turn AMD is promising better performance than the GTX 980. Otherwise NVIDIA’s partners serve to fill that $50 gap with their higher-end factory overclocked GTX 980 cards.
Finally, today’s reviews of the R9 Fury are coming slightly ahead of the launch of the card itself. As previously announced, the card goes on sale on Tuesday the 14th, however the embargo on the reviews is being lifted today. AMD has not officially commented on the launch supply, but once cards do go on sale, we’re expecting a repeat of the R9 Fury X launch, with limited quantities that will sell out within a day. After that, it seems likely that R9 Fury cards will remain in short supply for the time being, also similar to the R9 Fury X. R9 Fury X cards have come back in stock several times since the launch, but have sold out within an hour or so, and there’s currently no reason to expect anything different for R9 Fury cards.
Summer 2015 GPU Pricing Comparison

| AMD | Price | NVIDIA |
|---|---|---|
| Radeon R9 Fury X | $649 | GeForce GTX 980 Ti |
| Radeon R9 Fury | $549 | |
| | $499 | GeForce GTX 980 |
| Radeon R9 390X | $429 | |
| Radeon R9 290X / Radeon R9 390 | $329 | GeForce GTX 970 |
| Radeon R9 290 | $250 | |
| Radeon R9 380 | $200 | GeForce GTX 960 |
| Radeon R7 370 / Radeon R9 270 | $150 | |
| | $130 | GeForce GTX 750 Ti |
| Radeon R7 360 | $110 | |
Meet The Sapphire Tri-X R9 Fury OC
Today we’ll be looking at Fury cards from both Sapphire and Asus. We’ll kick things off with Sapphire’s card, the Tri-X R9 Fury OC.
Radeon R9 Fury Launch Cards

| | ASUS STRIX R9 Fury | Sapphire Tri-X R9 Fury | Sapphire Tri-X R9 Fury OC |
|---|---|---|---|
| Boost Clock | 1000MHz / 1020MHz (OC) | 1000MHz | 1040MHz |
| Memory Clock | 1Gbps HBM | 1Gbps HBM | 1Gbps HBM |
| VRAM | 4GB | 4GB | 4GB |
| Maximum ASIC Power | 216W | 300W | 300W |
| Length | 12" | 12" | 12" |
| Width | Double Slot | Double Slot | Double Slot |
| Cooler Type | Open Air | Open Air | Open Air |
| Launch Date | 07/14/15 | 07/14/15 | 07/14/15 |
| Price | $579 | $549 | $569 |
Sapphire is producing this card in two variants, a reference clocked version and a factory overclocked version. The card we were sampled with is the factory overclocked version, though other than some basic binning to identify chips that can handle being overclocked, the two cards are physically identical.
As far as Sapphire’s overclock goes, it’s a mild overclock, with the card shipping at 1040MHz for the GPU while the memory remains unchanged at 1Gbps. As we discussed in our R9 Fury X review, Fiji cards so far don’t have much in the way of overclocking headroom, so AMD’s partners have to take it easy on the factory overclocks. Sapphire’s overclock puts the upper-bound of any performance increase at 4% – with the real world gains being smaller – so this factory overclock is on the edge of relevance.
Getting down to the nuts and bolts then, Sapphire’s card is a semi-custom design, meaning Sapphire has paired an AMD reference PCB with a custom cooler. The PCB in question is AMD’s PCB from the R9 Fury X, so there’s little new to report here. The PCB itself measures 7.5” long and features AMD’s 6 phase power design, which is designed to handle well over 300W. For overclockers there are still no voltage control options available for this board design, though as Sapphire has retained AMD’s dual BIOS functionality there’s plenty of opportunity for BIOS modding.
The real story here is Sapphire’s Tri-X cooler, which gets the unenviable job of replacing AMD’s closed loop liquid cooler from the R9 Fury X. With a TBP of 275W Sapphire needs to be able to dissipate quite a bit of heat to keep up with Fiji, which has led to the company using one of their Tri-X coolers. We’ve looked at a few different Tri-X cards over the years, and they have been consistently impressive products. For the Tri-X R9 Fury, Sapphire is aiming for much the same.
Overall the Tri-X cooler used on the Tri-X R9 Fury ends up being quite a large cooler. Measuring a full 12” long, it runs the length of the PCB and then some, and with that much copper and aluminum it’s not a light card either. The end result is that with such a large cooler the card is better defined as a PCB mounted on a cooler than a cooler mounted on a PCB, an amusing inversion of the usual video card. As a result Sapphire has gone the extra mile to ensure that the PCB can support the cooler; there are screws in every last mounting hole, there’s a full-sized backplate to further reinforce the card, and the final 4.5” of the cooler that isn’t mounted to the PCB has its own frame to keep it secure as well.
Moving to the top of the card, the Tri-X R9 Fury features three of Sapphire’s 90mm “Aerofoil” fans, the company’s larger, dual ball bearing fans. These fans are capable of moving quite a bit of air even when moving at relatively low speeds, and as a result the overall card noise is kept rather low even under load, as we’ll see in full detail in our benchmark section.
Meanwhile Sapphire has also implemented their version of zero fan speed idle on the Tri-X R9 Fury, dubbed Intelligent Fan Control, which allows the card to turn off its fans entirely when their cooling capacity isn’t needed. With such a large heatsink the Fiji GPU and supporting electronics don’t require active cooling when idling, allowing Sapphire to utilize passive cooling and making the card outright silent at idle. This is a feature a number of manufacturers have picked up on in the last couple of years, and the silent idling this allows is nothing short of amazing. For Sapphire’s implementation on the Tri-X R9 Fury, what we find is that the fans finally get powered up at around 53C, and power down when the temperature falls below 44C.
Sapphire Tri-X R9 Fury Zero Fan Idle Points

| | GPU Temperature | Fan Speed |
|---|---|---|
| Turn On | 53C | 27% |
| Turn Off | 44C | 23% |
Helping the cooling effectiveness of the Tri-X quite a bit is the length of the fans and heatsink relative to the length of the PCB. With the 4.5” of overhang, the farthest fan is fully beyond the PCB. That means that all of the air it pushes through the heatsink doesn’t get redirected parallel to the card – as is the case normally for open air cards – but rather the hot air goes straight through the heatsink and past it. For a typical tower case this means that hot air goes straight up towards the case’s exhaust fans, more efficiently directing said hot air outside of the case and preventing it from being recirculated by the card’s fans. While this doesn’t make a night & day difference in cooling performance, it’s a neat improvement that sidesteps the less than ideal airflow situation the ATX form factor results in.
Moving on, let’s take a look at the heatsink itself. The Tri-X’s heatsink runs virtually the entire length of the card, and is subdivided into multiple segments. Connecting these segments are 7 heatpipes, ranging in diameter between 6mm and 10mm. The heatpipes in turn run through both a smaller copper baseplate that covers the VRM MOSFETs, and a larger copper baseplate that covers the Fiji GPU itself. Owners looking to modify the card or otherwise remove the heatsink will want to take note here; we’re told that it’s rather difficult to properly reattach the heatsink to the card due to the need to perfectly line up the heatsink and mate it with the GPU and the HBM stacks.
The Tri-X R9 Fury’s load temperatures tend to top out at 75C, which is the temperature limit Sapphire has programmed the card for. As with the R9 Fury X and the reference Radeon 290 series before that, Sapphire is utilizing AMD’s temperature and fan speed target capabilities, so the card slowly ramps up its fans as it approaches 75C, and once it hits that temperature it ramps them up more aggressively to keep the temperature at or below 75C.
Moving on, since Sapphire is using AMD’s PCB, the Tri-X also inherits that board’s BIOS and lighting features. The dual-BIOS switch is present, and Sapphire ships the card with two different BIOSes. The default BIOS (switch right) uses the standard 300W ASIC power limit and 75C temperature target. Meanwhile the second BIOS (switch left) increases the power and temperature limits to 350W and 80C respectively, for greater overclocking limits. Note however that this doesn’t change the voltage curve, so Fury cards in general will still be held back by a lack of headroom at stock voltages. As for the PCB’s LEDs, Sapphire has retained those as well, though they default to blue (sapphire) rather than AMD red.
Finally, since this is the AMD PCB, display I/O remains unchanged. This means the Tri-X offers 3x DisplayPorts along with a single HDMI 1.4 port.
Wrapping things up, the OC version we are reviewing today will retail for $569, $20 over AMD’s MSRP. The reference clocked version on the other hand will retail at AMD’s MSRP of $549, the only launch card that will be retailing at this price. Finally, Sapphire tells us that the OC version will be the rarer of the two due to its smaller run, and that the majority of Tri-X R9 Fury cards that will be on sale will be the reference clocked version.
Meet The ASUS STRIX R9 Fury
Our second card of the day is ASUS’s STRIX R9 Fury, which arrived just in time for the article cutoff. Unlike Sapphire, Asus is releasing just a single card, the STRIX-R9FURY-DC3-4G-GAMING.
With only a single card, ASUS has decided to split the difference between reference and OC cards and offer one card with both features. Out of the box the STRIX is a reference clocked card, with a GPU clockspeed of 1000MHz and a memory rate of 1Gbps. However ASUS also officially supports an OC mode, which when accessed through their GPU Tweak II software bumps up the clockspeed 20MHz to 1020MHz. With OC mode offering a sub-2% clockspeed bump there’s not much to say about performance; the gesture is appreciated, but the resulting gains are pretty trivial. Otherwise at stock the card should see performance similar to Sapphire’s reference clocked R9 Fury card.
Diving right into matters, for their R9 Fury card ASUS has opted to go with a fully custom design, pairing up a custom PCB with one of the company’s well-known DirectCU III coolers. The PCB itself is quite large, measuring 10.6” long and extending a further .6” above the top of the I/O bracket. Unfortunately we’re not able to get a clear shot of the PCB since we need to maintain the card in working order, but judging from the design ASUS has clearly overbuilt it for greater purposes. There are voltage monitoring points at the front of the card and unpopulated positions that look to be for switches. Consequently I wouldn’t be all that surprised if we saw this PCB used in a higher end card in the future.
Moving on, since this is a custom PCB ASUS has outfitted the card with their own power delivery system. ASUS is using a 12 phase design here, backed by the company’s Super Alloy Power II discrete components. With their components and their “auto-extreme” build process ASUS is looking to make the argument that the STRIX is a higher quality card, and while we’re not discounting those claims they’re more or less impossible to verify, especially compared to the significant quality of AMD’s own reference design.
Meanwhile it comes as a bit of a surprise that even with such a high phase count, ASUS’s default power limits are set relatively low. We’re told that the card’s default ASIC power limit is just 216W, and our testing largely concurs with this. The overall board TBP is still going to be close to AMD’s 275W value, but this means that ASUS has clamped down on the bulk of the card’s TDP headroom by default. The card has enough headroom to sustain 1000MHz in all of our games – which is what really matters – while FurMark runs at a significantly lower frequency than any R9 Fury series card built on AMD’s PCB, as a result of the lower power limit. Consequently ASUS also bumps up the power limit by 10% when in OC mode to make sure there’s enough headroom for the higher clockspeeds. Ultimately this doesn’t have a performance impact that we can find, and outside of FurMark it’s unlikely to save any power, but given what Fiji is capable of with respect to both performance and power consumption, this is an interesting design choice on ASUS’s part.
PCB aside, let’s cover the rest of the card. While the PCB is only 10.6” long, ASUS’s DirectCU III cooler is larger yet, slightly overhanging the PCB and extending the total length of the card to 12”. Here ASUS uses a collection of stiffeners, screws, and a backplate to reinforce the card and support the bulky heatsink, giving the resulting card a very sturdy design. In a first for any design we’ve seen thus far, the backplate is actually larger than the PCB, running the full 12” to match up with the heatsink, and like the Sapphire card’s backplate it includes a hole immediately behind the Fiji GPU to allow the many capacitors there to cool better. Meanwhile builders with large hands and/or tiny cases will want to make note of the card’s additional height; while the card will fit most cases fine, you may want a magnetic screwdriver to secure the I/O bracket screws, as the additional height doesn’t leave much room for fingers.
For the STRIX ASUS is using one of the company’s triple-fan DirectCU III coolers. Starting at the top of the card with the fans, ASUS calls the fans on this design their “wing-blade” fans. Measuring 90mm in diameter, ASUS tells us that this fan design has been optimized to increase the amount of air pressure on the edge of the fans.
Meanwhile the STRIX also implements ASUS’s variation of zero fan speed idle technology, which the company calls 0dB Fan technology. As one of the first companies to implement zero fan speed idling, the STRIX series has become well known for this feature, and the STRIX R9 Fury is no exception. Thanks to the card’s large heatsink ASUS is able to power down the fans entirely while the card is near or at idle, allowing the card to be virtually silent under those scenarios. In our testing this STRIX card has its fans kick in at 55C and shut off again at 46C.
ASUS STRIX R9 Fury Zero Fan Idle Points

| | GPU Temperature | Fan Speed |
|---|---|---|
| Turn On | 55C | 28% |
| Turn Off | 46C | 25% |
As for the DirectCU III heatsink on the STRIX, as one would expect ASUS has gone with a large and very powerful heatsink to cool the Fiji GPU underneath. The aluminum heatsink runs just shy of the full length of the card and features 5 different copper heatpipes, the largest of which comes in at 10mm in diameter. The heatpipes in turn make almost direct contact with the GPU and HBM, with ASUS having installed a thin heat spreader of sorts to compensate for the uneven nature of the GPU and HBM stacks.
In terms of cooling performance AMD’s Catalyst Control Center reports that ASUS has capped the card at 39% fan speed, though in our experience the card actually tops out at 44%. At this level the card will typically reach 44% by the time it hits 70C, at which point temperatures will rise a bit more before the card reaches homeostasis. We’ve yet to see the card need to ramp past 44%, though if the temperature were to exceed the temperature target we expect that the fans would start to ramp up further. Without overclocking the highest temperature measured was 78C for FurMark, while Crysis 3 topped out at a cooler 71C.
Moving on, ASUS has also adorned the STRIX with a few cosmetic adjustments of their own. The top of the card features a backlit STRIX logo, which pulsates when the card is turned on. And like some prior ASUS cards, there are LEDs next to each of the PCIe power sockets to indicate whether there is a full connection. On that note, with the DirectCU III heatsink extending past the PCIe sockets, ASUS has once again flipped the sockets so that the tabs face the rear of the card, making it easier to plug and unplug the card even with the large heatsink.
Since this is an ASUS custom PCB, it also means that ASUS has been able to work in their own Display I/O configuration. Unlike the AMD reference PCB, for their custom PCB ASUS has retained a DL-DVI-D port, giving the card a total of 3x DisplayPorts, 1x HDMI port, and 1x DL-DVI-D port. So buyers with DL-DVI monitors not wanting to purchase adapters will want to pay special attention to ASUS’s card.
Finally, on the software front, the STRIX includes the latest iteration of ASUS’s GPU Tweak software, which is now called GPU Tweak II. Since the last time we took a look at GPU Tweak the software has undergone a significant UI overhaul, with ASUS giving it more distinct basic and professional modes. It’s through GPU Tweak II that the card’s OC mode can be accessed, which bumps up the card’s clockspeed to 1020MHz. Meanwhile the other basic overclocking and monitoring functions one would expect from a good overclocking software package are present; GPU Tweak II allows control over clockspeeds, fan speeds, and power targets, while also monitoring all of these features and more.
GPU Tweak II also includes a built-in copy of the XSplit game broadcasting software, along with a 1 year premium license. Finally, perhaps the oddest feature of GPU Tweak II is the software’s Gaming Booster feature, which is ASUS’s system optimization utility. Gaming Booster can adjust the system visual effects, system services, and perform memory defragmentation. To be frank, ASUS seems like they were struggling to come up with something to differentiate GPU Tweak II here; messing with system services is a bad idea, and system memory defragmentation is rarely necessary given the nature and abilities of Random Access Memory.
Wrapping things up, the ASUS STRIX R9 Fury will be the most expensive of the R9 Fury launch cards. ASUS is charging a $30 premium for the card, putting the MSRP at $579.
The Test
On a brief note, since last month’s R9 Fury X review, AMD has reunified their driver base. Catalyst 15.7, released on Wednesday, extends the latest branch of AMD’s drivers to the 200 series and earlier, bringing with it all of the optimizations and features that for the past few weeks have been limited to the R9 Fury series and the 300 series.
As a result we’ve gone back and updated our results for all of the AMD cards featured in this review. Compared to the R9 Fury series launch driver, the performance and behavior of the R9 Fury series has not changed, nor were we expecting it to. Meanwhile AMD’s existing 200/8000/7000 series GCN cards have seen a smattering of performance improvements that are reflected in our results.
| CPU | Intel Core i7-4960X @ 4.2GHz |
|---|---|
| Motherboard | ASRock Fatal1ty X79 Professional |
| Power Supply | Corsair AX1200i |
| Hard Disk | Samsung SSD 840 EVO (750GB) |
| Memory | G.Skill RipjawZ DDR3-1866 4 x 8GB (9-10-9-26) |
| Case | NZXT Phantom 630 Windowed Edition |
| Monitor | Asus PQ321 |
| Video Cards | AMD Radeon R9 Fury X, AMD Radeon R9 290X, AMD Radeon R9 285, AMD Radeon HD 7970, ASUS STRIX R9 Fury, Sapphire Tri-X R9 Fury OC, NVIDIA GeForce GTX 980 Ti, NVIDIA GeForce GTX 980, NVIDIA GeForce GTX 780, NVIDIA GeForce GTX 680, NVIDIA GeForce GTX 580 |
| Video Drivers | NVIDIA Release 352.90 Beta, AMD Catalyst 15.7 |
| OS | Windows 8.1 Pro |
Battlefield 4
Kicking off our benchmark suite is Battlefield 4, DICE’s 2013 multiplayer military shooter. After a rocky start, Battlefield 4 has since become a challenging game in its own right and a showcase title for low-level graphics APIs. As these benchmarks are from single player mode, based on our experiences our rule of thumb here is that multiplayer framerates will dip to half our single player framerates, which means a card needs to be able to average at least 60fps if it’s to be able to hold up in multiplayer.
When the R9 Fury X launched, one of the games it struggled with was Battlefield 4, where the GTX 980 Ti took a clear lead. However for the launch of the R9 Fury, things are much more in AMD’s favor. The two R9 Fury cards have a lead just shy of 10% over the GTX 980, roughly in-line with their price tag difference. As a result of that difference AMD needs to win in more or less every game by 10% to justify the R9 Fury’s higher price, and we’re starting things off exactly where AMD needs to be for price/performance parity.
Looking at the absolute numbers, we’re going to see AMD promote the R9 Fury as a 4K card, but even with Battlefield 4 I feel this is a good example of why it’s better suited for high quality 1440p gaming. The only way the R9 Fury can maintain an average framerate over 50fps (and thereby reasonable minimums) with a 4K resolution is to drop to a lower quality setting. Otherwise at just over 60fps, it’s in great shape for a 1440p card.
As for the R9 Fury X comparison, it’s interesting how close the R9 Fury gets. The cut-down card is never more than 7% behind the R9 Fury X. Make no mistake, the R9 Fury X is meaningfully faster, but scenarios such as these question whether it’s worth the extra $100.
Crysis 3
Still one of our most punishing benchmarks, Crysis 3 needs no introduction. With Crysis 3, Crytek has gone back to trying to kill computers and still holds the “most punishing shooter” title in our benchmark suite. Only in a handful of setups can we even run Crysis 3 at its highest (Very High) settings, and that’s still without AA. Crysis 1 was an excellent template for the kind of performance required to drive games for the next few years, and Crysis 3 looks to be much the same for 2015.
Under Crysis 3 the R9 Fury once again has the lead, though there is a clear amount of variation in that lead depending on the resolution. At 4K it’s 14% or so, but at 1440p it’s just 5%. This is consistent with the general trend for AMD and NVIDIA cards, which is that AMD sees better performance scaling at higher resolutions, and is a big part of the reason why AMD is pushing 4K for the R9 Fury X and R9 Fury. Still, based on absolute performance, the R9 Fury’s performance probably makes it better suited for 1440p.
Meanwhile the R9 Fury cards once again consistently trail the R9 Fury X by no more than 7%. Crysis 3 is generally more sensitive to changes in shader throughput, so it’s interesting to see that the performance gap is as narrow as it is here. These kinds of results imply that the R9 Fury X’s last 512 stream processors aren’t being put to very good use, since most of the performance difference can be accounted for in the clockspeed difference.
Middle Earth: Shadow of Mordor
Our next benchmark is Monolith’s popular open-world action game, Middle Earth: Shadow of Mordor. One of our current-gen console multiplatform titles, Shadow of Mordor is plenty punishing on its own, and at Ultra settings it absolutely devours VRAM, showcasing the knock-on effect that current-gen consoles have on VRAM requirements.
Shadow of Mordor ends up being a big win for AMD, with the R9 Fury cards shooting well past the GTX 980. Based on our earlier R9 Fury X review this was not an unexpected result, but at the end of the day with a 20%+ performance advantage, it’s a great situation for AMD to be in.
Meanwhile the R9 Fury’s performance relative to its X-rated sibling is yet again in the 7% range. So far the performance difference between the two cards is surprisingly consistent.
Finally, since AMD’s last two $550 cards were the R9 290X and HD 7970, let’s take a look at those comparisons quickly. At 1440p the R9 Fury only has a 17% lead over the R9 290X “Uber”, which for a card almost 2 years old is more than a bit surprising. The R9 Fury has more efficient front-ends and back-ends and significant advantages in shader throughput and memory bandwidth, and yet the performance gains compared to 290X are fairly small. On the other hand 7970 owners looking to upgrade to another Radeon should like what they’re seeing, as the R9 Fury’s 79% performance advantage is approaching upgrade territory.
Shifting gears to minimum framerates, the situation is similarly in AMD’s favor at 4K. One of the outcomes of going up against the GTX 980 is that it’s just as VRAM-limited as R9 Fury is, so in a VRAM intensive game like Shadow of Mordor, neither card has an advantage. However it’s quite interesting that once we back off to 1440p, the GTX 980 surges forward.
Civilization: Beyond Earth
Shifting gears from action to strategy, we have Civilization: Beyond Earth, the latest in the Civilization series of strategy games. Civilization is not quite as GPU-demanding as some of our action games, but at Ultra quality it can still pose a challenge for even high-end video cards. Meanwhile as the first Mantle-enabled strategy title Civilization gives us an interesting look into low-level API performance on larger scale games, along with a look at developer Firaxis’s interesting use of split frame rendering with Mantle to reduce latency rather than improving framerates.
As one of the few games that can hit 60fps on the R9 Fury at 4K with everything turned up, it’s interesting to see how resolution impacts all of our cards with Civilization. At 4K the R9 Fury is well ahead of the GTX 980, surpassing it by 17%. Yet at 1440p that lead becomes a very slight loss, with the Sapphire Tri-X R9 Fury’s mild factory overclock giving it just enough of a boost to stay ahead of the GTX 980.
Meanwhile the Fury/Fury X gap widens ever so slightly here. The R9 Fury is now a full 10% behind the full-fledged R9 Fury X.
The minimum framerate situation for Civilization is very nearly a mirror of the averages. The R9 Fury does relatively well at 4K, but at 1440p it’s now neck-and-neck with the GTX 980 once again.
Dragon Age: Inquisition
Our RPG of choice for 2015 is Dragon Age: Inquisition, the latest game in the Dragon Age series of ARPGs. Offering an expansive world that can easily challenge even the best of our video cards, Dragon Age also offers us an alternative take on EA/DICE’s Frostbite 3 engine, which powers this game along with Battlefield 4.
Dragon Age is another solid win for AMD at 4K, with the R9 Fury taking an 8-11% lead over the GTX 980. However it’s also a game that’s better played at 1440p than 4K on the R9 Fury, at which point that lead shrinks to just 2%. At the very least the R9 Fury can claim to be the minimum card required to crack 60fps at that resolution, a feat the GTX 980 falls just short of.
The Talos Principle
Croteam’s first person puzzle and exploration game The Talos Principle may not involve much action, but the game’s lush environments still put even fast video cards to good use. Coupled with the use of 4x MSAA at Ultra quality, and even a tranquil puzzle game like Talos can make a good case for more powerful video cards.
The Talos Principle is another game that AMD tends to do well in, which works in the R9 Fury’s favor. At 4K the R9 Fury has a 31% performance advantage over the GTX 980, and even at 1440p it’s still a 23% lead. This is well over the 10% price premium of the card, and convincing leads like this can help to shift the value proposition in AMD’s favor.
As for the R9 Fury versus the R9 Fury X, the difference is once again a rather consistent 8-9% at both resolutions.
Far Cry 4
The next game in our 2015 GPU benchmark suite is Far Cry 4, Ubisoft’s Himalayan action game. A lot like Crysis 3, Far Cry 4 can be quite tough on GPUs, especially with Ultra settings thanks to the game’s expansive environments.
Like The Talos Principle, Far Cry 4 is another game that has traditionally favored AMD cards, and as a result the R9 Fury looks quite good here. On a relative basis it’s ahead of the GTX 980 by 33% at 4K and 22% at 1440p. On an absolute basis this is enough to keep the average framerate above 60fps at 1440p, something the GTX 980 could not do, and above 40fps at 4K.
Shifting gears, comparing the R9 Fury to the 290X paints the R9 Fury in a more favorable light than earlier, but it’s still not great. The performance advantage for AMD’s new card tops out at 26% here, which isn't poor, but at the same time is not all that great given the fact that it has been almost 2 years now since the 290X launched at the same price point.
Total War: Attila
The second strategy game in our benchmark suite, Total War: Attila is the latest game in the Total War franchise. Total War games have traditionally been a mix of CPU and GPU bottlenecks, so it takes a good system on both ends of the equation to do well here. In this case the game comes with a built-in benchmark that plays out over a large area with a fortress in the middle, making it a good GPU stress test.
With Attila the R9 Fury’s lead over the GTX 980 tapers some, but it’s still largely in AMD’s favor. The 13% lead at 1440p continues to be ahead of the 10% price premium, and on an absolute basis it’s enough to keep the R9 Fury over 40fps.
Meanwhile the R9 Fury yet again trails the R9 Fury X by 9-10%, just a bit over the overall average.
GRID Autosport
For the racing game in our benchmark suite we have Codemasters’ GRID Autosport. Codemasters continues to set the bar for graphical fidelity in racing games, delivering realistic looking environments layered with additional graphical effects. Based on their in-house EGO engine, GRID Autosport includes a DirectCompute based advanced lighting system in its highest quality settings, which incurs a significant performance penalty on lower-end cards but does a good job of emulating more realistic lighting within the game world.
In our R9 Fury X review, we pointed out how AMD is CPU limited in this game below 4K, and while the R9 Fury’s lower performance mitigates that to a certain extent, it doesn’t change the fact that AMD is still CPU limited here. The end result is that at 4K the R9 Fury is only 2% ahead of the GTX 980 – less than it needs to be to justify the price premium – and at 1440p it’s fully CPU-limited and trailing the GTX 980 by 14%.
On an absolute basis AMD isn’t faring too poorly here, but AMD will need to continue dealing with and resolving CPU bottlenecks on DX11 titles if they want the R9 Fury to stay ahead of NVIDIA, as DX11 games are not going away quite yet.
Grand Theft Auto V
The final game in our benchmark suite is our most recent addition, Grand Theft Auto V. The latest edition of Rockstar’s venerable series of open world action games, Grand Theft Auto V was originally released on the last-gen consoles back in 2013. However thanks to a rather significant facelift for the current-gen consoles and PCs, along with the ability to greatly turn up rendering distances and add other features like MSAA and more realistic shadows, the end result is a game that is still among the most stressful of our benchmarks when all of its features are turned up. Furthermore, in a move rather uncharacteristic of most open world action games, Grand Theft Auto also includes a very comprehensive benchmark mode, giving us a great chance to look into the performance of an open world action game.
On a quick note about settings, as Grand Theft Auto V doesn't have pre-defined settings tiers, I want to quickly note what settings we're using. For "Very High" quality we have all of the primary graphics settings turned up to their highest setting, with the exception of grass, which is at its own very high setting. Meanwhile 4x MSAA is enabled for direct views and reflections. This setting also involves turning on some of the advanced rendering features - the game's long shadows, high resolution shadows, and high definition flight streaming - but not increasing the view distance any further.
Otherwise for "High" quality we take the same basic settings but turn off all MSAA, which significantly reduces the GPU rendering and VRAM requirements.
Closing out our gaming benchmarks, the R9 Fury is once again in the lead, besting the GTX 980 by as much as 15%. However GTA V also serves as a reminder that the R9 Fury doesn’t have quite enough power to game at 4K without compromises. And if we do shift back to 1440p, a more comfortable resolution for this card, AMD’s lead is down to just 5%. At that point the R9 Fury isn’t quite covering its price advantage.
Meanwhile compared to the R9 Fury X, we close out roughly where we started. The R9 Fury trails the more powerful R9 Fury X by 5-7% depending on the resolution, a difference that has more to do with GPU clockspeeds than the cut-down CU count. Overall the gap between the two cards has been remarkably consistent and surprisingly narrow.
99th percentile framerates however are simply not in AMD’s favor here. Despite AMD’s driver optimizations and the fact that the GTX 980 only has 4GB of VRAM, the R9 Fury X could not pull ahead of the GTX 980, so the R9 Fury understandably fares worse. Even at 1440p the R9 Fury cards can’t quite muster 30fps, though in all fairness even the GTX 980 falls just short of this mark as well.
Synthetics
As always we’ll also take a quick look at synthetic performance. Since R9 Fury is a cut-down and lower clocked Fiji part, what we’re expecting here is a significant shader/texture hit, with a much smaller hit to tessellation and pixel throughput.
TessMark scores more or less perfectly scale with clockspeed in this case. The R9 Fury is almost precisely 5% behind the R9 Fury X.
As for 3DMark Vantage, the performance hits are in-line with expectations. The R9 Fury takes a pretty significant hit to texturing performance due to the combination of lost texture units and the clockspeed reduction, while pixel throughput trails by just under 5%. This indicates that at least for the purposes of the 3DMark test, the R9 Fury series is ROP bottlenecked rather than memory bandwidth bottlenecked, a consequence of AMD’s excellent delta color compression.
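As a rough illustration of why that sub-5% gap points at the ROPs rather than memory bandwidth, here is the back-of-the-envelope fill rate math (our own sketch, not anything pulled from 3DMark itself). Both cards offer the same 512GB/sec of bandwidth, so a bandwidth-bound test should score identically, whereas the theoretical ROP rate tracks the clockspeed difference:

```python
# Theoretical pixel fill rate: ROPs x boost clock (both cards have 64 ROPs)
rops = 64
fury_x_fill = rops * 1050e6 / 1e9  # ~67.2 Gpixels/s
fury_fill = rops * 1000e6 / 1e9    # ~64.0 Gpixels/s

# If ROP-bound, the expected gap is the clockspeed gap (~4.8%), in line with what we measure;
# if bandwidth-bound, the expected gap would be 0%, since both cards offer 512GB/sec.
print(f"Expected ROP-bound gap: {1 - fury_fill / fury_x_fill:.1%}")
```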
Compute
Shifting gears, we have our look at compute performance. As compute performance will be more significantly impacted by the reduction in CUs than most other tests, we’re expecting the performance hit for the R9 Fury relative to the R9 Fury X to be more significant here than under our gaming tests.
Starting us off for our look at compute is LuxMark 3.0, the latest version of the official benchmark of LuxRender 2.0. LuxRender’s GPU-accelerated rendering mode is an OpenCL based ray tracer that forms a part of the larger LuxRender suite. Ray tracing has become a stronghold for GPUs in recent years as it maps well to GPU pipelines, allowing artists to render scenes much more quickly than with CPUs alone.
For LuxMark, with the R9 Fury X already holding the top spot, the R9 Fury cards easily take the next two spots. One interesting artifact of this is that the R9 Fury’s advantage over the GTX 980 is actually greater than the R9 Fury X’s advantage over the GTX 980 Ti, on both an absolute and relative basis. This is despite the fact that the R9 Fury is some 13% slower than its fully enabled sibling.
For our second set of compute benchmarks we have CompuBench 1.5, the successor to CLBenchmark. CompuBench offers a wide array of different practical compute workloads, and we’ve decided to focus on face detection, optical flow modeling, and particle simulations.
Not unlike LuxMark, tests where the R9 Fury X did well have the R9 Fury doing well too, particularly the optical flow sub-benchmark. The drop-off in that benchmark and face detection is about what we’d expect for losing 1/8th of Fiji’s CUs. On the other hand the particle simulation benchmark is hardly fazed beyond the clockspeed drop, indicating that the bottleneck lies elsewhere.
Our 3rd compute benchmark is Sony Vegas Pro 13, an OpenGL and OpenCL video editing and authoring package. Vegas can use GPUs in a few different ways, the primary uses being to accelerate the video effects and compositing process itself, and in the video encoding step. With video encoding being increasingly offloaded to dedicated DSPs these days we’re focusing on the editing and compositing process, rendering to a low CPU overhead format (XDCAM EX). This specific test comes from Sony, and measures how long it takes to render a video.
At this point Vegas is becoming increasingly CPU-bound and will be due for replacement. The R9 Fury comes in one second behind the chart-topping R9 Fury X, at 22 seconds.
Moving on, our 4th compute benchmark is FAHBench, the official Folding @ Home benchmark. Folding @ Home is the popular Stanford-backed research and distributed computing initiative that has work distributed to millions of volunteer computers over the internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, with single precision being the most useful metric for most consumer cards due to their low double precision performance. Each precision has two modes, explicit and implicit, the difference being whether water atoms are included in the simulation, which adds quite a bit of work and overhead. This is another OpenCL test, utilizing the OpenCL path for FAHCore 17.
Overall while the R9 Fury doesn’t have to aim quite as high given its weaker GTX 980 competition, FAHBench still stresses the Radeon cards. Under single precision tests the GTX 980 pulls ahead, only surpassed under double precision thanks to NVIDIA’s weaker FP64 performance.
Wrapping things up, our final compute benchmark is an in-house project developed by our very own Dr. Ian Cutress. SystemCompute is our first C++ AMP benchmark, utilizing Microsoft’s simple C++ extensions to allow the easy use of GPU computing in C++ programs. SystemCompute in turn is a collection of benchmarks for several different fundamental compute algorithms, with the final score represented in points. DirectCompute is the compute backend for C++ AMP on Windows, so this forms our other DirectCompute test.
As with our other tests the R9 Fury loses some performance on our C++ AMP benchmark relative to the R9 Fury X, but only around 8%. As a result it’s competitive with the GTX 980 Ti here, blowing well past the GTX 980.
Power, Temperature, & Noise
As always, last but not least is our look at power, temperature, and noise. Next to price and performance of course, these are some of the most important aspects of a GPU, due in large part to the impact of noise. All things considered, a loud card is undesirable unless there’s a sufficiently good reason – or sufficiently good performance – to ignore the noise.
Starting with voltages, with the latest update to GPU-Z (0.8.4) we now have a basic idea of what the R9 Fury series’ voltages are, so let’s take a look.
Radeon R9 Fury Series Voltages

| R9 Fury X (Ref) Load | ASUS R9 Fury Load | Sapphire R9 Fury Load | Sapphire R9 Fury OC Load |
|---|---|---|---|
| 1.212v | 1.169v | 1.188v | 1.212v |
What we find is that the R9 Fury X tops out at 1.212v, a fairly typical voltage for a large 28nm GPU.
Meanwhile what’s of much greater interest is the difference between the two R9 Fury cards. The Sapphire Tri-X R9 Fury OC also tops out at 1.212v, indicating that it’s operating in the same voltage range as the reference R9 Fury X, which is what we’d expect to see if AMD isn’t doing any power binning and is just selecting Fiji chips for the R9 Fury based on yields and attainable clockspeeds. The ASUS card on the other hand reports a notably lower voltage, topping out at 1.169v, 43mv below the Sapphire card.
This is an unexpected, though not unreasonable finding. We know that AMD has been carefully testing chips and more closely assigning operating voltages to them based on what the chips actually need in order to improve their energy efficiency. As a result it looks like the ASUS card ended up with a better chip as part of the overall random distribution of chips, i.e. the chip lottery. We’ll revisit this point in a little bit, but for now it’s something to keep in mind.
Moving on, let’s take a look at average clockspeeds.
Radeon R9 Fury Series Average Clockspeeds

| Game | R9 Fury X (Ref) | ASUS R9 Fury | Sapphire R9 Fury OC |
|---|---|---|---|
| Max Boost Clock | 1050MHz | 1000MHz | 1040MHz |
| Battlefield 4 | 1050MHz | 1000MHz | 1040MHz |
| Crysis 3 | 1050MHz | 1000MHz | 1040MHz |
| Mordor | 1050MHz | 1000MHz | 1040MHz |
| Civilization: BE | 1050MHz | 1000MHz | 1040MHz |
| Dragon Age | 1050MHz | 1000MHz | 1040MHz |
| Talos Principle | 1050MHz | 1000MHz | 1040MHz |
| Far Cry 4 | 1050MHz | 1000MHz | 1040MHz |
| Total War: Attila | 1050MHz | 1000MHz | 1040MHz |
| GRID Autosport | 1050MHz | 1000MHz | 1040MHz |
| Grand Theft Auto V | 1050MHz | 1000MHz | 1040MHz |
| FurMark | 985MHz | 902MHz | 988MHz |
All of these cards are well-cooled, and as a result they have no trouble sustaining their rated clockspeeds when running our benchmark suite games. This is the case even for the ASUS card, which has a much lower power limit, as evidenced by the lower average clockspeed under FurMark.
Shifting gears to power consumption, we knew at the time of the R9 Fury X that it was paying a slight idle power penalty as a result of its closed loop liquid cooler, and now we have a better idea of just what that penalty is. It essentially costs AMD another 5W at the wall to run the R9 Fury X’s pump, which means that the air-cooled R9 Fury’s idle power consumption is right in line with other air cooled cards. The fact that the ASUS and Sapphire cards are consistently 2W apart was a bit surprising, though likely a consequence of their different PCB designs.
Load power consumption on the other hand is a poignant reminder that the R9 Fury is still a Fiji card, and that AMD can’t match the energy efficiency of NVIDIA’s Maxwell cards even under better circumstances. The Fury cards have a several percent performance lead over the GTX 980, but their power consumption in turn is much, much higher. The difference is between 62W and 109W at the wall, closer to large, power-hungry cards like the R9 Fury X and GTX 980 Ti than the smaller GTX 980 that the R9 Fury is competing with.
Overall the power gap is influenced by several factors. The R9 Fury cards run hotter than R9 Fury X, leading to more leakage. On the other hand disabling some CUs saves power, offsetting the increased leakage. But at the end of the day the R9 Fury is meant to be a 275W TBP card, just like the R9 Fury X, and that’s what we see here.
With that said, the difference between the ASUS and Sapphire cards is extremely surprising. The ASUS card has a lower power limit, but with the card sustaining 1000MHz under Crysis 3, that’s not what’s going on here. Rather we seem to be seeing the result of random chip variation quite possibly combined with ASUS’s custom PCB. Keep in mind what we said earlier about voltages, the ASUS card operates at a lower voltage than the Sapphire card, and as a result it has an advantage going into our power testing. Still, I would normally not expect a 43mv difference to lead to such significant savings.
This is a particular case where I’m curious what we’d find if we tested multiple ASUS cards, rather than looking at a sample size of 1. Would other ASUS cards have worse chips and draw more power? We’d expect so, but there may still be an advantage from ASUS’s PCB design and BIOS that needs to be accounted for. At the very least it helps to close the efficiency gap between the R9 Fury and GTX 980, but it’s still not going to be enough.
Finally, since Sapphire also sent over the stock BIOS for their card, we quickly tested that as well. Power consumption is down slightly thanks to the slightly lower clockspeeds coupled with a lower maximum voltage of 1.188v. Still, even with the Sapphire card configured as a stock card, the ASUS card is well ahead in energy efficiency.
As for FurMark, the results are more or less exactly what we were expecting. The Sapphire card, with its AMD reference PCB and the high power delivery limits that entails, throttles very little under FurMark, and as a result power consumption at the wall is quite high. On the other hand the ASUS card has its much lower default power limit, resulting in much heavier throttling under FurMark and keeping maximum power consumption down as well.
The end result is that the Sapphire card has a much wider range than the ASUS card, and it also indicates that the ASUS card’s power limit is likely not very far above what it takes to sustain 1000MHz under most games. The ASUS card will likely need more power to avoid power throttling if overclocked.
Moving on to idle temperatures, both R9 Fury cards fare relatively well here. The fact that they have no active cooling due to their zero fan speed idle technologies means that they run a bit warmer at idle, but it’s nothing significant.
Crysis 3 temperatures meanwhile are in-line with what these cards are designed for. The Sapphire card is designed to run at up to 75C, and that’s exactly what happens here. The ASUS card on the other hand is a bit more aggressive with its fan early on, leading to it topping out at 71C. Both, as expected, are warmer than the R9 Fury X, which is not a major problem, but it does contribute to the lower energy efficiency we’ve been seeing, particularly for the Sapphire card.
FurMark on the other hand changes things slightly. The Sapphire card holds at 75C and ramps up its fan, while the ASUS card is already operating at its first fan speed limit, and as a result is allowed to increase in temperature instead. What’s holding back the ASUS card at this point and preventing it from getting warmer yet is its power limit; the card can only generate enough heat to get up to 78C with its fans running at their first limit.
Last but not least, we have our noise measurements, starting with idle noise. The Sapphire and ASUS cards both feature zero fan speed idle technology, which means neither card is running its fans here. As a result the cards are outright silent. What we end up measuring is the background noise, mainly the fans and pumps of our CPU’s closed loop liquid cooler. At this point zero fan speed idle technology is not cutting edge, but it is nevertheless impressive. High-end HTPC users, or just users looking for a very quiet card at idle, should be very happy with either card.
When it came time to run our load noise testing we were expecting good things from both cards, but the results from the Sapphire Tri-X completely blew past our expectations. With sub-40dB(A) noise levels we ended up running our load noise test three different times in order to make sure there wasn’t an error in our testing methodology. There wasn’t.
What we have here then is the quietest high power air cooled card we have ever tested. In its OC configuration the Sapphire Tri-X card tops out at 38.2dB(A) under Crysis 3, less than 2dB(A) above our noise floor and nearly 5dB(A) quieter than the already quiet R9 Fury X. Switching to its reference configuration drives noise levels down even further, to just 37.8dB(A). To put that in perspective, the Sapphire Tri-X under a gaming workload is as loud as the GTX 980 Ti is at idle. Simply put, these results are amazing.
In going over the design of Sapphire’s card there is no doubt that this is the product of several factors – not the least of which is the massive heatsink – but going back to what we said earlier about the short PCB, we’re left to wonder if that didn’t have a significant impact. By being able to blow hot air straight back (or rather, straight up), the card reduces the amount of airflow and noise being reflected against the back of the card, but it also offers a more direct cooling route that in principle reduces the amount of work the fans need to do. In any case we’ve seen Sapphire’s Tri-X cards do well before, but at 38dB(A) this is a new high water mark that is going to be difficult to beat.
On that note, this puts the ASUS card in an awkward position. Compared to just about anything else, 46.2dB(A) would be doing well for an open air card. But the 8dB(A) gap between it and the Sapphire card is night and day. ASUS has to run their fans so much faster that they just can’t match what Sapphire has done. It’s a solid card, but when it comes to acoustic testing it has the misfortune of going up against some very capable competition.
Finally we have our noise levels under FurMark. What’s interesting here is that with the ASUS card still running at its first maximum fan speed (44%), it doesn’t get any noisier under FurMark than it does under Crysis 3. The Sapphire card on the other hand does need to ramp up its fans significantly to handle the extra heat. Nonetheless the Sapphire card still ends up being a bit quieter, topping out at 45.4dB(A). And when taking into consideration the significant difference in power limits between the two cards, it’s clear that the Sapphire card is remaining quieter while performing a great deal more work.
Overclocking
Finally, no review of a high-end video card would be complete without a look at overclocking performance.
As was the case with the R9 Fury X two weeks ago, overclockers looking at out of the box overclocking performance are going to come away disappointed with the R9 Fury cards. While cooling and power delivery are overbuilt on both the Asus and Sapphire cards, the R9 Fury is still very restricted when it comes to overclocking. There is no voltage control at this time (even unofficial) and the card’s voltage profile has been finely tuned to avoid needing to supply the card with more voltage than is necessary. As a result the card has relatively little overclocking potential without voltage adjustments.
Radeon R9 Fury Series Overclocking | |||||
Ref. R9 Fury X | ASUS R9 Fury | Sapphire R9 Fury OC | |||
Boost Clock | 1125MHz | 1075MHz | 1100MHz | ||
Memory Clock | 1Gbps (500MHz DDR) | 1.1Gbps (550MHz DDR) | 1.1Gbps (550MHz DDR) | ||
Power Limit | 100% | 115% | 100% | ||
Max Voltage | 1.212v | 1.169v | 1.212v |
Neither R9 Fury card is able to overclock as well as our R9 Fury X, indicating that these are likely lower quality (or lower headroom) chips. Ultimately we’re able to get another 75MHz out of the ASUS, for 1075MHz, and another 60MHz out of the Sapphire, for 1100MHz.
Meanwhile with unofficial memory overclocking support now available via MSI Afterburner, we’ve also tried our hand at memory overclocking. There’s not a ton of headroom here before artifacting sets in, but we were able to get another 10% (50MHz) out of both R9 Fury cards.
Using our highest clocking card, the Sapphire, as a reference point, the actual performance gains are in the 7-10% range, with an average right down the middle at 8% over a reference clocked R9 Fury. This is actually a bit better than the R9 Fury X and its 5% performance gains; however it’s still not going to provide a huge difference in performance. We’d need to be able to overclock beyond 1100MHz to see any major overclocking gains on the R9 Fury cards.
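As a rough illustration of where those gains come from, the sketch below works out the core clock uplifts and peak HBM bandwidth implied by the overclocking table and the 4096-bit bus from the spec table. The clocks and bus width come from those tables; the arithmetic (and the Python framing) is our own, purely for illustration.

```python
# Rough illustration of the R9 Fury overclocking figures discussed above.
# Clocks come from the overclocking table; the 4096-bit bus width and 1Gbps
# data rate come from the spec table. The arithmetic is our own.

REF_BOOST_MHZ = 1000        # reference R9 Fury boost clock
HBM_BUS_WIDTH_BITS = 4096   # Fiji's HBM interface width

def hbm_bandwidth_gb_s(data_rate_gbps: float) -> float:
    """Peak HBM bandwidth in GB/s for a given per-pin data rate."""
    return HBM_BUS_WIDTH_BITS * data_rate_gbps / 8

max_oc_mhz = {"ASUS R9 Fury": 1075, "Sapphire R9 Fury OC": 1100}

for name, boost in max_oc_mhz.items():
    uplift = boost / REF_BOOST_MHZ - 1
    print(f"{name}: {uplift:+.1%} core clock vs. a reference R9 Fury")

# Memory overclock: 1.0Gbps stock vs. 1.1Gbps overclocked
print(f"HBM bandwidth: {hbm_bandwidth_gb_s(1.0):.0f} GB/s stock, "
      f"{hbm_bandwidth_gb_s(1.1):.0f} GB/s overclocked")
```

Notably, the measured 7-10% gaming gains line up almost exactly with the +7.5% to +10% core clock uplifts this works out to.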
Final Words
Bringing this video card review to a close, we’ll start off with how the R9 Fury compares to its bigger sibling, the R9 Fury X. Although looking at the bare specifications of the two cards would suggest they’d be fairly far apart in performance, this is not what we have found. Between 4K and 1440p the R9 Fury’s performance deficit is only 7-8%, noticeably less than what we’d expect given the number of disabled CUs.
In fact a significant amount of the performance gap appears to be from the reduction in clockspeed, and not the number of CUs. And while overclocking back to R9 Fury X clockspeeds can’t recover all of the performance, it recovers a lot of it. This implies that Fiji on the whole is overweight on shading/texturing resources, as it’s not greatly impacted by having some of those resources cut off.
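To put some numbers behind that reasoning, the short sketch below compares the theoretical deficits implied by the specification table against the observed 7-8% gap. The spec figures are from the table at the start of this review; the comparison itself is our own illustration.

```python
# Theoretical R9 Fury deficits vs. the R9 Fury X, using the spec table figures.
# Comparing these against the observed 7-8% gaming gap is our own illustration.

fury_x = {"stream_processors": 4096, "boost_mhz": 1050}
fury   = {"stream_processors": 3584, "boost_mhz": 1000}

clock_ratio  = fury["boost_mhz"] / fury_x["boost_mhz"]
shader_ratio = (fury["stream_processors"] * fury["boost_mhz"]) / \
               (fury_x["stream_processors"] * fury_x["boost_mhz"])

print(f"Clock-only deficit:       {1 - clock_ratio:.1%}")    # ~4.8%
print(f"Peak shader-rate deficit: {1 - shader_ratio:.1%}")   # ~16.7%
print("Observed gaming deficit:  7-8%, much closer to the clock delta")
```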
Consequently I can see why AMD opted to launch the R9 Fury X and R9 Fury separately, and to withhold the latter’s specifications until now, as this level of performance makes the R9 Fury a bit of a spoiler for the R9 Fury X. A 7-8% deficit leaves the R9 Fury measurably slower than the R9 Fury X, but it’s also $100 cheaper; or, to turn this argument on its head, the last 10% or so of performance the R9 Fury X offers comes at quite the price premium. This arguably makes the R9 Fury the better value, and not that we’re complaining, but it does put AMD in an awkward spot.
As for the competition, that’s a bit more of a mixed bag. The R9 Fury X had to compete with the GTX 980 Ti but couldn’t surpass it, which hurt it and made the GTX the safer buy. The R9 Fury, on the other hand, only needs to compete with the older GTX 980, and while it’s by no means a clean sweep, it’s a good outcome for AMD. The R9 Fury offers between 8% and 17% better performance than the GTX 980, depending on whether we’re looking at 1440p or 4K. I don’t believe the R9 Fury is a great 4K card – if you want to play at 4K, you really need more rendering power at this time – but even at 1440p this is a solid performance lead.
Along with the performance advantage, the GTX 980 also makes for better competition for the R9 Fury (and Fiji in general) since the GTX 980 is likewise limited to 4GB of VRAM. This negates the Fiji GPU’s 4GB HBM limit, which is one of the things that held back the R9 Fury X against the 6GB GTX 980 Ti. As a result there are fewer factors to consider, and in a straight-up performance shootout with the GTX 980 the R9 Fury is 10% more expensive for 8%+ better performance. This doesn’t make either card a notably better value, but it does make the R9 Fury a very reasonable alternative to the GTX 980 on a price/performance basis.
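As a quick back-of-the-envelope check on that claim, the snippet below runs the price/performance math, assuming the R9 Fury’s $549 launch price, a roughly $499 GTX 980 street price (the figure implied by the ~10% premium cited above), and the 8% performance advantage at 1440p. The GTX 980 price is our assumption rather than a figure from this review.

```python
# Back-of-the-envelope price/performance check, R9 Fury vs. GTX 980.
# The ~$499 GTX 980 street price is our assumption, inferred from the ~10%
# price premium cited above; the $549 R9 Fury figure is its launch price.

fury_price, gtx980_price = 549, 499
fury_perf_advantage = 0.08   # ~8% faster at 1440p

price_premium = fury_price / gtx980_price - 1
perf_per_dollar = (1 + fury_perf_advantage) / (fury_price / gtx980_price)

print(f"R9 Fury price premium:              {price_premium:.1%}")
print(f"R9 Fury perf-per-dollar vs GTX 980: {perf_per_dollar:.2f}x")
```

The result works out to roughly 0.98x, i.e. near parity on perf-per-dollar, which is consistent with neither card being a notably better value.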
The one area where the R9 Fury struggles, however, is power efficiency. The GTX 980’s power efficiency is practically legendary at this point; the R9 Fury’s is not. Even the less power-hungry of our two R9 Fury cards, the ASUS STRIX, can’t come close to the GTX 980’s efficiency. And that’s really all there is to it. If energy efficiency doesn’t matter to you then the R9 Fury’s performance is competitive; otherwise the GTX 980 is a bit slower, a bit cheaper, and uses a lot less power. That said, AMD’s partners do deserve some credit for keeping their acoustics well under control despite the high power and heat load. It’s not an apples-to-apples comparison against the reference GTX 980 and its blower, but at the very least picking the R9 Fury over the GTX 980 doesn’t mean picking a loud card as well.
And that brings us to the third aspect of this review: comparing the R9 Fury cards from Sapphire and ASUS. Both partners have come to the plate with very good open air cooled designs, and while it’s a bit unusual for AMD to launch with so few partners, what those partners have put together certainly paints the R9 Fury in a positive light.
Picking between the two ends up being a harder task than we expected, in part because of how different they are at times. From a performance perspective the two cards are very close, with Sapphire’s mild factory overclock giving its card only the slightest of edges, which is more or less what we expected.
However the power and acoustics situation is very different. On its own the ASUS STRIX’s acoustics would look good, but compared to the Sapphire Tri-X’s deliciously absurd acoustics it’s the clear runner-up. On the other hand the ASUS card has a clear power efficiency advantage of its own, though this may simply be a byproduct of our ASUS sample receiving a better chip. As a result we’re not convinced that this same efficiency advantage exists between all ASUS and Sapphire cards; ASUS’s higher voltage R9 Fury chips have to go somewhere.
In any case, both are solid cards, but if we have to issue a recommendation then it’s hard to argue with the Sapphire Tri-X’s pricing and acoustics right now. It’s the quietest of the R9 Fury cards, and it’s slightly cheaper as well. ASUS’s strengths, meanwhile, lie more in their included software and their reputation for support than in their outright performance in our benchmark suite.
And with that, we wrap up our review of the second of AMD’s four Fiji products. The R9 Fury was the last of them with a scheduled launch date; however, AMD has previously told us that the R9 Nano will launch this summer, meaning we should expect it in the next couple of months. With its focus on size and efficiency the R9 Nano should be a very different card from the R9 Fury and R9 Fury X, which makes us curious to see just what AMD can pull off when optimizing for efficiency over absolute performance. But that will be a question for another day.