I'm curious @anandtech in general, given the likely newer state of the Fury/X's drivers, do you think that the performance deltas between each Fury card and its respective nvidia counterpart will swing further into AMD's favor as they solidify their drivers?
So basically if you have $500 to spend on a video card, get the Fury, if you have $600, get the 980 Ti. Unless you want something liquid cooled/quiet, then the Fury X could be an attractive albeit slower option.
Driver optimizations will only make the Fury better in the long run as well, since the 980Ti (Maxwell 2) drivers are already well optimized as it is a pretty mature architecture.
I find it astonishing you can hack off 15% of a card's resources and only lose 6% performance. AMD clearly has a very good (but power hungry) architecture here.
Balance is not very conclusive. There are games that take advantage of the higher resources and blow past the 980 Ti, and there are games that don't and are therefore slower. Most likely due to developers not having access to Fury and its resources before. I would say no game uses that many shading units, and you won't see a benefit until games do. The same goes for HBM.
What a pathetic excuse, apologists for amd are so sad.
AMD got it wrong, and the proof is already evident.
No, NONE OF US can expect anandtech to be honest about that, nor its myriad of amd fanboys, but we can all be absolutely certain that if it was nVidia who had done it, a full 2 pages would be dedicated to their massive mistake.
I've seen it a dozen times here over ten years.
When will you excuse-makers and lie artists ever face reality and stop insulting everyone else with AMD marketing wet dreams coming out of your keyboards? Will you ever?
Typical fanboy, ignore the points and go straight to name calling. No, you are the one people should be sad about, delusional that they are not a fanboy when they are.
Proof that intel and nvidia wackos are the worst type of people, arrogant, snide, insulting, childish. You are the poster boy for an intel/nvidia sophomoric fanboy.
Your problem is deeper than just that you like intel/nvidia since you apparently hate people who don't like those, and ONLY because they like something different than you do.
A third way to look at it is that maybe AMD did it right.
Let's say the chip is built from 80% stream processors (by area), the most redundant elements. If some of those functional elements fail during manufacture, they can disable them and sell it as the cheaper card. If something in the other 20% of the chip fails, the whole chip may be garbage. So basically you want a card such that if all the stream processors are functional, the other 20% become the bottleneck, whereas if some of the stream processors fail and they have to sell it as a simple Fury, then the stream processors become the bottleneck.
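To put toy numbers on that binning argument (a purely illustrative sketch; the "other block" limit of 3800 below is made up, only the 4096/3584 shader counts are real):

    def throughput(shader_units, other_block_limit):
        # Performance is capped by whichever resource runs out first.
        return min(shader_units, other_block_limit)

    full_chip = throughput(4096, 3800)  # all shaders working, limited elsewhere
    salvage   = throughput(3584, 3800)  # a chunk of shaders fused off, now shader-limited
    print(f"salvage part keeps {salvage / full_chip:.0%} of full-chip performance")  # ~94%

In other words, if the other 20% of the chip was already the limiter, fusing off a slice of shaders costs far less performance than the headline cut suggests.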
Anand ran DX12 benchmarks last spring. When they compared the Radeon 290X to the GTX 980 Ti, nVidia ordered them to stop. That is why no more DX12 benchmarks have been run.
Intel and nVidia are at a huge disadvantage with DX12 and Mantle.
The reason:
AMD IP: Asynchronous Shader Pipelines and Asynchronous Compute Engines.
Bingo... :-). I bet the whole Fury lineup will gain a lot with DX12, especially the X2 part (4 + 4 GB won't just equal 4 GB as in current CF). They are clearly CPU limited at this point.
DX12 games are out now. DX12 does not degrade DX11 performance. In fact Radeon 290x is 33% faster than 980 Ti in DX12. Fury X just CRUSHES ALL nVIDIA silicon with DX12 and there is a reason for it.
DX11 can ONLY feed data to the GPU serially and sequentially. DX12 can feed data asynchronously; the CPU sends the data down the shader pipeline WHEN it is processed. Only AMD has this IP.
Kindly provide link to a single DX12 game that is "out now".
In every single review of the GTX 980 Ti there is this slide of the DX12 feature set that the GTX 980 Ti supports, and in that slide, in all the reviews, "Async Compute" is right there sitting in the open, so I'm not really sure what you mean by "Only AMD has this IP"!
I'd strongly recommend that you hold your horses till DX12 games start to roll out, and even then, don't forget the rocky start of DX11 titles!
Regarding the comparison you're referring to, that guy is known for his obsession with mathematical calculations and synthetic benchmarking. Given the differences between real-world applications and numbers based on mathematical calculations, you shouldn't be taking his numbers as a factual baseline for what's to come.
You are an idiotic person; wishful thinking and dreams don't make you correct. As stated, please provide a link to these so-called DX12 games and your wonderful "Fury X just CRUSHES ALL nVidia" statement.
Negative. Once graphics data is processed and sent to the shaders, it next goes to VRAM, or video RAM.
System RAM is what the CPU uses to process object draws. Once the objects are in the GPU pipes, system RAM is irrelevant.
In fact, that is one of AMD's stacked memory patents. AMD will be putting HBM on APUs to act not only as CPU cache but as HBM video RAM as well. They have patents for programmable HBM using FPGAs and for reconfigurable HBM cache memory as well.
Stacked HBM memory can also sit on the CPU package as a replacement for system RAM. Can you imagine how your system would fly with 8-16 GB of HBM instead of system RAM?
You should stop reading wccftech.com, this site is full of sh1t! You also made an error, because they are comparing the 290X to the 980 and not the Ti! Asd :D I'm still laughing... those morons cited PCper's numbers as fps; they probably made that assumption since they are 2-digit numbers, but that's because PCper shows numbers in millions!!! Look at that http://www.pcper.com/files/imagecache/article_max_... wccftech.com also compares the 290X on Mantle with the 980 on DX12, probably for an apples-to-apples comparison ;). The fun continues if you read Futuremark's note on this particular benchmark, which essentially says something pretty obvious: the number of draw calls doesn't reflect actual performance and thus shouldn't be used to compare GPUs http://a.disquscdn.com/uploads/mediaembed/images/1... Finally, I think there's something wrong with PC World's results, since NVIDIA should deliver more draw calls than AMD on DX11.
Man auto correct plus an early morning post is hard. I meant "do you expect more optimized drivers to cause the Fury to leap further ahead of the 980, or the Fury X to catch up to the 980 Ti" haha. My bad.
My initial impression on that assessment would be yes, but I'm not an expert, so I was wondering how many people would like to weigh in.
Fiji has a lot more room for driver improvement and optimization than Maxwell, which is quite well optimized by now. I'd expect the Fury X to tie the 980 Ti in the near future, especially in DX12 games. But nvidia will probably have their new architecture ready by then.
So, Nvidia is faster, and has been for many months, and still is faster, but a year or two into the future, when amd finally has DX12 drivers and there are actually one or two DX12 games, why then, amd will have a card....
MY GOD HOW PATHETIC. I mean it sounded so good, you massaging their incompetence and utter loss.
Your continuous AMD bashing is more pathetic. Check the performance numbers of the GTX 680 when it was launched and check where it stands now. Do the same thing with the GTX 780 and then with the GTX 970, then talk.
That is another confirmation that AMD GCN doesn't scale well. That problem was already seen with Hawaii, but Tahiti also showed its inefficiency with respect to smaller GPUs like Pitcairn. Nvidia GPUs scale almost linearly with respect to the resources integrated into the chip. This has been a problem for AMD up to now, but it could get worse on the new process node: if no changes are introduced to solve this, nvidia could enlarge its gap over AMD's performance once they both can more than double the number of resources on the same die area.
This resource reduction just means that AMD's performance bottleneck is somewhere else in the card. We have to remember that this kind of reduction is not made to purposely slow down a card, but to reduce costs or to utilize chips which didn't pass all the tests to become an X model. AMD has been known to do that since its weird but very functional 3-core Phenoms. Also, this means that if they can work on the real bottleneck, they will be able to make a stronger card with far fewer resources. Who remembers the HD 4770?...
"Draw calls are the best metric we have right now to compare AMD Radeon to nVidia ON A LEVEL PLAYING FIELD." Well, lets just for a moment consider this as true (and you should try to explain why :D ) Looking at draw calls a GTX 980 should perform 2.5x faster than a 290X in DX11 (respectively 2.62M vs 1.05M draw calls) and even a GTX 960 would be 2.37x faster than the over mentioned 290X (respectively 2.49M vs 1.05M draw calls) :)
Performing minor optimizations, on an API that isn't even out yet, to give themselves the appearance of a theoretical advantage in some arbitrary GPU function, as a desperate attempt to keep themselves relevant, is so very AMD (their motto should be, "we will take your money now, and give you its worth... later..., maybe").
Meanwhile people at NV are optimizing for the API that is currently actually being used to make games, and raising their stock value and market share while they're at it.
Why wouldn't AMD optimize for DX11, and instead do what it's doing? Because DX11 is a mature API, so any further improvements would be small, yet expensive, while DX12 isn't even out yet, so it would be comparatively cheaper to get bigger gains, and AMD is seriously low on funds.
Realistically, proper DX12 games are still 2-3 years away. By that time AMD probably wouldn't even be around anymore.
Hence, in conclusion, whatever DX12 performance the Fury trio (or AMD in general) claims, means absolutely nothing at this point.
Thank GOD for nvidia or amd would have this priced so sky high no one could afford it.
Instead of crazy-high scalping greedy pricing, amd only greeded up on price/perf the tiny bit it could, since it can't beat nvidia, who saved our wallets again !
THANK YOU NVIDIA ! YOUR COMPETITION HAS KEPT THE GREEDY RED TEAM FROM EXORBITANT OVERPRICING LIKE THEY DID ON THEIR 290 SERIES !
I wasn't really impressed with the Fury X at its price point and performance. This normal Fury seems a bit better at its price point than the Fury X does.
As I write this the information on overclocking wasn't finished. I sure hope the Fury overclocks much better than the Fury X did, because that was a massive letdown when it came to overclocking. When nvidia can get some crazy high overclocks with Maxwell, it kinda makes the Fury line seem not as good with the meager overclocks the Fury X had. Hopefully the Fury (non-X) overclocks like a beast like the nvidia cards do.
AMD hasn't unlocked the voltage yet on the Fury X. Hopefully they will unlock the voltage cap soon, so the Fury X should be able to overclock better. Better than the 980 Ti? We'll see, but the Fury X still has lots of untapped resources.
Don't hold your breath. There is very little overhead in Fiji. That's clearly been divulged. As the article states, Maxwell is very efficient and has a good deal of room for partners to indulge themselves. Especially the Ti.
The WC for the X makes up ~half of the price increase from the non-X. For someone who's going to do a moderate OC and doesn't want to bother doing a WC conversion, the X is a good choice, even over a Ti.
No it's not... the 980 Ti bests it handily. It's not a good choice at all when the 980 Ti can overclock as well and many coolers have 0 RPM fan modes for when it's at idle or very low usage.
You haven't seen the DX12 Benchmarks yet. Anand has been keeping them from you. Once you see how much Radeon crushes nVidia you would never buy green again.
nVidia silicon is RUBBISH with DX12 and Mantle. Radeon 290x is 33% faster than GTX 980Ti.
sefem already told you... " "Draw calls are the best metric we have right now to compare AMD Radeon to nVidia ON A LEVEL PLAYING FIELD." Well, let's just for a moment consider this as true (and you should try to explain why :D ). Looking at draw calls, a GTX 980 should perform 2.5x faster than a 290X in DX11 (respectively 2.62M vs 1.05M draw calls), and even a GTX 960 would be 2.37x faster than the above-mentioned 290X (respectively 2.49M vs 1.05M draw calls) :) "
Yes, paper launch for the R9 390X... Newegg is dry as a bone: just 15 reviews with zero stock. Only Sapphire had about 10 cards to sell, otherwise NO STOCK AT NEWEGG AT ALL.
Got a Sapphire Fury Tri-X (non-OC version) on 16/7 in Italy... it's probably a Newegg problem... and it really is a good card. With Catalyst 15.7 I got very nice results. With my system (8320, 16 GB 1600 MHz), in Tomb Raider at 2560x1440 all maxed out with TressFX on: FPS min 58.0, avg 75.3, max 94.0. Really good results. Witcher 3 runs stable between 45 and 50 fps at ultra settings at 2560x1440, and the card is really silent; you can't hear anything even after playing for a long time.
Almost makes you wonder if AMD should have just designed the card with 54 compute units and would have had a winner on its hands. The Fury X seems to be somewhat unbalanced in terms of its hardware configuration.
The Sapphire Tri-X cooling solution performs impressively under load. I think this is a consequence of the abysmal configuration forced on video cards by the ATX standard. The Sapphire card can exhaust the hot air freely because of the short PCB, which proves we could use a replacement for ATX (or shorter PCBs).
Intel kept it in stock for a while but it didn't sell. So the management decided to get rid of it and gave it away to a few colleagues (Dell, HP and many other OEMs used BTX for quite a while, both because it was a good user lock-down solution and because the inconveniences of BTX didn't matter in OEM computers, while the advantages were still there), and no one ever heard of it on the retail market again?
Damn those non-editable comments... I forgot to add: with the switch from the NetBurst/Prescott architecture to Conroe (and its followers), CPU cooling became much less of a hassle for mainstream models, so Intel did not have anything left to gain from the effort put into BTX.
With the introduction of HBM, perhaps it's time to move to socketed GPUs.
It seems ridiculous for the industry standard spec to devote so much space to the comparatively low-power CPU whilst the high-power GPU has to fit within the confines of (multiple) pci-e expansion slots.
Is it not time to move beyond the confines of ATX?
Even with the smaller PCB footprint allowed by HBM; filling up the area currently taken by expansion cards would only give you room for a single GPU + support components in an mATX sized board (most of the space between the PCIe slots and edge of the mobo is used for other stuff that would need to be kept not replaced with GPU bits); and the tower cooler on top of it would be a major obstruction for any non-GPU PCIe cards you might want to put into the system.
The clever design trend, or at least what I think is clever, is where the GPU+CPU heatsinks are connected together, so that, instead of many smaller heatsinks trying to cool one chip each, you can have one giant heatsink doing all the work, which can result in less space, as opposed to volume, being occupied by the heatsink.
You can see this sort of design on high end gaming laptops, Mac Pro, and custom water cooling builds. The only catch is, they're all expensive. Laptops and Mac Pro are, pretty much, completely proprietary, while custom water cooling requires time and effort.
If all ATX mobos and GPUs had their core and heatsink mounting holes in the exact same spot, it would be much easier to design a 'universal multi-core heatsink' that you could just attach to everything that needs it.
That's quite a good idea. With heat pipes, distance doesn't really matter, so if there were a CPU heatsink that could extend 4x 8mm/10mm heatpipes over the video card to cool the GPU, it would be far quieter than the 3x 90mm fan coolers on video cards now.
330 watts transferred to the low-lying motherboard, with PINS attached to amd's core failure next... Slap that monster heat onto the motherboard, then you can have a giant green plastic enclosure like Dell towers to try to move that heat outside the case... oh, plus a whole 'nother giant VRM setup on the motherboard... yeah they sure will be doing that soon... just lay down that extra 50 bucks on every motherboard with some 6X VRMs just in case an amd fanboy decides he wants to buy the megawatt amd rebranded chip...
Not if the GPU socket standard is universal and backward compatible like PCI-E. It's only if companies get to make incompatible/proprietary sockets that that would be an issue.
Yeah, let's put an additional 300 watts inside a socket laying flat on the motherboard - we can have a huge tube to flow the melting heat outside the case...
Yep, that gigantic 8.9B transistor die, slap some pins on it... amd STILL loves pinned sockets...
Yeah, time to move to the motherboard ... ROFLMAO
I just can't believe it ... the smartest people in the world.
I'm definitely interested to see how well these cards would do in a rotated atx Silverstone case. I have one of those, and I'm concerned about the alignment of the fins. You basically want the heat to be able to move up vertically, out the back/top of the card.
Priced in between the GTX 980 and the Fury X it is substantially faster than the former, and hardly any slower than the latter. Price performance wise this card is a fantastic option if it can be found around the MSRP, or found at all.
NO, actually if you read, ha ha, and paid attention, lol, it's 10% more price for only 8% more performance... so its ratio sucks compared to the NVIDIA GTX 980.
Not a good deal, not good price perf compared to NVIDIA.
One interesting thing from this review is looking at the performance of the older AMD cards. Ryan Smith mentioned the improvement of the Fury vs. the older cards in the review, noting that performance hasn't improved that much. But there's a lot more to it than that. The relative performance of AMD's cards seems to have moved up a lot compared to their Nvidia competitors.
Look at how the 290X stacks up against the GTX 780 in this review. It pretty much just blows it away. The 290X is performing close to the GTX 980 (which explains why the 390X which has faster memory is competitive with it). Meanwhile, the HD 7970 is now stacking up against the GTX 780.
It looks like performance on AMD's GCN chips has increased significantly. Meanwhile the GTX 780's performance has at best stayed the same, but actually looks to have decreased.
Anandtech should really do a review of how performance has changed over time on these cards, because it seems the change has been pretty significant.
I don't know, maybe it's just different benchmark settings but the AMD cards look to be a bit more competitive to their counterparts than they were at release.
It's been the case with all GCN cards. AMD continues to make driver optimizations. The 7970 is significantly faster now than it was at launch. It's one advantage of them all sharing a similar architecture.
nvidia CARDS GAIN 10-20% AND MORE over their release drivers... but that all comes faster, on game release days, and without massive breaking of prior fixes, UNLIKE AMD, who takes forever and breaks half of what it previously band-aided, and it takes a year or two or three or even EOL for that fix to come.
This is exactly what I was thinking... A few months ago when the 980 was launched I recall the 290X not being able to compete with it, and now they are trading blows. Shows some good work by the driver's team.
Maybe AMD's cards are like a fine wine; you have to give them time to age before they reach their maximum potential haha.
Making driver improvements is nice, and shows commitment from AMD, but it could also mean the original state of the drivers was not so good, and there was indeed a lot to improve. I hope this is not the case, but I'm not sure.
Apparently not, as most games don't run even for the reviewers on GCN "release day".
The endless fantasies in amd fanboy minds though, those run, run their course, are debunked, go into schizoid mode and necromance themselves, then of course we are treated to the lying again.
SO FOR 2 FULL YEARS AMD 290 290X 290 UBER OWNERS GOT SCREWED " by the drivers that are just as good as Nvidia's as that problem amd had was 4 years ago or more" !!???
I get it amd fanboy ... it's all you have left after the constant amd failures and whippings they've taken from nVidia - the fantasy about "amd drivers" TWO AND A HALF YEARS AFTER RELEASE.
You may be too excited to have noticed that in the review there's a GTX 780, not a 780 Ti. Given the difference between the cards, if some improvements have been made, they are quite marginal. It's really funny to see this sort of myth rise from time to time without a real study of the thing. All impressions, not a single number reported as proof of anything. Yet, continue to believe what you want. Unfortunately for you, the market doesn't really care.
You should check out the TechPowerUp review - they have a 780 Ti in it. Then you will understand what you are calling a myth here. The 780 Ti is positioned right next to a 290X, hahah, pretty sad to be honest.
You can look at Anandtech's reviews. The only game that was in the benchmark suite then and is still there today is Crysis 3. Look at what the changes are between the 290X and the 780 (not the Ti). The two boards were on par at the 290X's introduction, and they still are on par today. You can see the differences are the same, and we are talking about a 1 FPS change for both GPUs. Yes, miraculous drivers. Come on, return to Earth with your fantasies.
If you still can't understand numbers but can only understand bar colors, I can sum things up for you for the same game (Crysis 3), also from the TechPowerUp review, at 2560x1440 (the resolution for this kind of card): at the 780 Ti's introduction (Nov 2013), 780 Ti 27.0, 290X 26.3; at the Fury X's introduction (so, last week), 780 Ti 29.3, 290X 29.4.
So the 290X went from -0.7 fps to +0.1 fps... WOW! That is a miracle!!!!! Only a fanboy could think that, or one who does not understand benchmark numbers, can't interpret them, and can only see bar lengths/relative positions.
You see a similar trend with Battlefield 3, where the 290X went from -3 fps to -0.3 fps. And both cards have raised their FPS. So, yes, AMD recovered a fraction of nothing and nvidia didn't cripple anything.
You have also not noted that in the meantime AMD changed the 290X policy on BIOS and custom designs, so all cards have become "uber", and better custom coolers allowed the card not to throttle. So this performance advantage is reserved for those who bought these cards, not for those who bought the "not sampled" reference ones (can you remember the issue of retail cards performing quite differently from the ones sent to reviewers?). Yes, another miracle...
These are the MYTHS I like reading about, the kind only a fanboy can sustain. These are the type of arguments that let you clearly spot the fanboy in the group.
So, where are the facts sustaining your myth? I can't see them, and it seems you can't provide them either. Yes, the 780 Ti was a crappy investment... while the 290X was so good, with stuttering all over the place that still continues today in DX9 games.
This is the primary reason why I buy AMD: because I am not willing to change my hardware every year. I bought a 290X in 2013, and in that period it was neck and neck with the 780 Ti. After nearly two years the 290X destroys the 780 Ti and constantly beats even the 970, which at its release was ahead. The 970 has stayed where it was performance-wise, meanwhile the 290X has continued improving. I am so glad I bought the 290X.
You are a poor man with no clue about what he is buying. Your justifications for buying the cheapest card on the market are quite pitiful. I bet you can't report a single case where Kepler ran faster before than it does today. Nor can you evaluate how much these "miraculous" AMD drivers have improved your gaming experience. Can you? Let's see these numbers. If not, well, just don't go on with this kind of talk, because it really makes you (all AMD fanboys) look more ridiculous than you already are.
I agree. This would make a fantastic article - and a unique critical thinking subject that Anandtech is well positioned to undertake and is known for. It would certainly generate traffic and be linked to like crazy, hint hint.
"The R9 Fury offers between 8% and 17% better performance than the GTX 980, depending on if we’re looking at 4K or 1440p"
"I don’t believe the R9 Fury is a great 4K card"
"in a straight-up performance shootout with the GTX 980 the R9 Fury is 10% more expensive for 8%+ better performance."
"This doesn’t make either card a notably better value"
So at resolutions under 4K, which are the ones you recommend for the R9 Fury, it performs 17% better than the GTX 980 for 10% more money, and yet you conclude it is not a better value? Help me out here. It would be more accurate to say that neither card is a better value for 4K gaming, where the difference was indeed 8%. At any resolution below that, the Fury is indeed a better value.
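A rough perf-per-dollar sketch of that argument, using the deltas quoted above (8% at 4K, 17% at 1440p, ~10% price premium); the numbers come from the review quotes, the framing is mine:

    def relative_value(perf_advantage, price_premium):
        # Fury performance per dollar relative to the GTX 980 (>1 means better value).
        return (1 + perf_advantage) / (1 + price_premium)

    print(f"4K:    {relative_value(0.08, 0.10):.2f}")  # ~0.98, slightly worse value
    print(f"1440p: {relative_value(0.17, 0.10):.2f}")  # ~1.06, better value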
Thanks for the clarification. Also, I really appreciate the inclusion of the 7970 data, as I currently run a 3.5 yr old reference version of that card.
My 980 is about 15% over stock, and it's a poor overclocker despite running cool. These cards struggle to hit 10%. I also can't go back 6 months and buy an R9 Fury. And Nvidia's next release is likely around the corner. I think they're approximately equal value - which is good for AMD fans, but it's been a long wait for them to have a card comparable to what NVIDIA enthusiasts have been enjoying for a year!
It's nice to see AMD win a segment. I'm not sure that the Fury X matters that much in the grand scheme of things, seeing that it's the same price as the better-performing GeForce 980 Ti.
The Fury seems to overclock to almost match the Fury X, making it a good enthusiast buy.
If you're willing to overclock though, you can get a good 15+ percent out of the 980 and pretty much bring it even with an OCed Fury for a little less money.
But as soon as voltage control is unlocked, the Fury will probably eke out at least another 100MHz or more, which will put it healthily out of reach of the 980. And once a few more driver issues (such as GTA V performance) are sorted out, the performance of the Fury will improve even more.
HBM has a different performance profile, and AMD is still accommodating that. And, of course, if you turn the nVidia image quality up to AMD levels, nVidia loses a few extra percent of performance.
The GTX 980 vs R9 Fury question is easy to answer (until a 980 price drop). The Fury X vs 980 Ti question is slightly more difficult (but the answer tends to go the other way, the AIO cooler being the Fury X's main draw).
I've heard the same thing, although I believe it was concerning the lack of anisotropic filtering on the NVIDIA side. However, anisotropic filtering is very cheap nowadays as far as I'm aware, so it's not really going to shake things up much whether it's on OR off, though image quality does improve noticeably.
Impressive results, especially by the Sapphire card. The thing I'm glad to see is that it's such a -quiet- card overall. That bodes well for some of the next releases (I'm dying to see the results of the Nano) and bodes well for AMD overall.
Two things I'd like to see:
1) HBM on an APU. Even if it were only 1GB or 2GB with an appropriate interface (imagine keeping the 4096-bit interface and either dual- or quad-pumping the bus?). The close proximity of being on-package and the high speed of the DRAM would make a very, VERY interesting graphics solution.
2) One would expect that with the cut in resources, there would have been more of a loss in performance. On average, you see a 7-8% drop in speed after a 13-14% cut in hardware resources and a slight drop in clock speeds. So where does that put the bottleneck in the card? It's possible that something is a bit lopsided internally (it does, however, perform exceptionally well), so it would be very interesting to tease out the differences to see what's going on inside the card.
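A back-of-the-envelope check of that point, using the published specs (Fury X: 4096 SPs at 1050MHz; Fury: 3584 SPs at 1000MHz); the conclusion in the comment is only inferred from the gap:

    fury_x_shader_tput = 4096 * 1050  # SPs * core clock, a crude proxy for raw shader throughput
    fury_shader_tput   = 3584 * 1000
    deficit = 1 - fury_shader_tput / fury_x_shader_tput
    print(f"raw shader throughput deficit: {deficit:.1%}")  # ~16.7%
    # Yet the observed gap is only ~7%, which hints the limit lies elsewhere
    # (geometry, ROPs, or bandwidth rather than raw shader throughput).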
It would be very interesting to run HBM as the system RAM instead of DDR on an APU. 4GB (for HBM1) wouldn't be a lot and might choke under a heavy workload, but for a casual user (and tablet uses) that's probably enough.
It would also allow smaller machine than NUC form factor, I think.
HBM wouldn't be terribly well suited for system RAM due to its comparatively low small-read performance and physical form factor. On an APU, for example, it would probably be best used as a single HBM[2] chip on a 1024-bit bus. Probably just 1 or 2GB, largely dedicated to graphics. That is 128GB/s with HBM1 (but 1GB max), 256GB/s with HBM2 (with, IIRC, 4GB max).
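A quick sanity check of those per-stack figures (assuming the usual HBM1/HBM2 pin rates of 1 and 2 Gbps on a 1024-bit stack interface):

    def stack_bandwidth_gb_s(bus_width_bits, pin_rate_gbps):
        # Bandwidth per stack in GB/s.
        return bus_width_bits * pin_rate_gbps / 8

    print(stack_bandwidth_gb_s(1024, 1.0))  # HBM1: 128.0 GB/s per stack
    print(stack_bandwidth_gb_s(1024, 2.0))  # HBM2: 256.0 GB/s per stack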
For a SoC, though, such as the NUC form factor, as you mentioned, it is potentially a game changer only AMD can deliver on x86. Problem is that the net profit margins in that category are quite small, and AMD needs to be chasing higher net margin markets (net margin being a simple result of market volume, share, and product margin).
I'd love to see it, though, for laptops. And with Apple and AMD being friendly, we may end up seeing it. As well as probably seeing it find its way into the next generation of consoles.
Given the high prices Intel is charging for its NUC systems are you really certain it's not profitable? Perhaps sales aren't good because they're overpriced.
The only way to keep the 4096bit bus would be to use four HBM chips, and I highly doubt this would be the case. I am thinking an APU would use either a single HBM chip, or possibly two. The performance boost would still be huge.
1) I can't imagine we won't see this. APU scaling with RAM speed was pretty well documented; I would be surprised if there weren't socket AM4 motherboards that incorporated some amount of HBM directly. Also, AMD performs best against NVidia at 4K, suggesting that Maxwell may be running into a memory bandwidth bottleneck itself. It will be interesting to see how Pascal performs when you couple a die shrink with the AMD-developed HBM2.
2) It does suggest that Fiji derives far more benefit from faster clocks than from more resources. That makes the locked-down voltages on the Fury X even more glaring. You supply a card that is massively overpowered, with 500W of heat dissipation, but no way to increase voltages to really push the clock speed? I hope we get a custom BIOS for that card soon.
As regards APU scaling, it's a tough one. More bandwidth is good, however scaling drops above 2133MHz which shows you'd need more hardware to consume it. Would you put in more shaders, or ROPs? I'd go for the latter - don't APUs usually top out at 8 ROPs? Sure, add in more bandwidth, but at the very least, increase how much the APU can actually draw. The HD 4850 had 32 TMUs (like the 7850K) but 16 ROPs, which is double that on offer here.
I keep seeing complaints about AMD's ROP count, so perhaps there's some merit to them.
It's hard to say what the bottleneck is with memory scaling on APUs. It could be something related to the memory controller built into the CPU rather than the GPU not having the resources to benefit.
I think they really have. Ryan mentioned it in the review, I think on the test setup page and the one after. I just installed the 15.7 drivers for my 290x and haven't had a chance to properly test but this looks very promising.
Why are you being such a dick, bubbly? Even when everyone can clearly see that the last couple of driver releases from AMD have perked up the 290/290X as well as the 390/390X cards, you throw out cheap shots at anyone that mentions that or anything else about AMD.
Yes, there is some fanboyism coming from both sides, that is to be expected, but you sir are just here to be a jerk to anyone that happens to like AMD. I'm not a fanboy for either side; I have Nvidia and AMD cards in my gaming rigs, an AMD Sapphire R9 390X Tri-X 8GB in one system and an Nvidia GeForce GTX 980 4GB in the other, both on Intel i7s. Both cards perform very close to each other, but I find the 390X seems to be smoother in a lot of games at 1440p max settings than the GTX 980; not sure why, maybe the extra memory.
Anyways, just chill. People will be people; they get excited and say stupid sh*t in the heat of the moment. This is not a personal attack on anyone, I'm just tired of the bickering. Both companies rock when it comes to graphics cards.
60 fewer watts when your system is pulling over 300 watts doesn't really mean anything. What matters is how quiet that Sapphire card runs. That's exactly what I would buy, hands down.
It would be cool if damaged chips weren't all disabled to the same level, i.e. only the faulty parts were disabled on each chip. Then amd could charge a little more for something they have already produced, and we would have access to cards closer to the performance of the Fury X (with a little overclocking) for less. Assuming the ref cooler can handle the extra heat.
Then they would have to market, package and ship the extra cards as well as optimize for another chip. It used to be that you could unlock the salvage chips if the damage was not too bad (some fully unlocked into the full chips, in cases where chips are cut simply to meet the demand for the cut parts), but amd and nvidia now laser off the disabled parts to stop that.
This really highlights the idea that AMD should have focused on increasing the ROP count over the massive amount of shaders. HBM increased bandwidth, and the delta color compression increased effective bandwidth as well, yet AMD didn't alter the number of ROPs in the design.
It's not the ROPs. Look at the 3DMark tests; it tops the pixel throughput charts.
What it needs is more geometry power, not ROPs. Look at the tessellation results, the Furys can't even keep up with a GTX 780. THAT is their issue, they need more geometry horsepower.
Mr know it all sure read well..... didn't you, mr wise meister... Thank god you are here with your gigantic brain
QUOTE: " This indicates that at least for the purposes of the 3DMark test, the R9 Fury series is ROP bottlenecked "
YEP SURE AIN'T THOSE ROPs !
64 just like the FURY X, meaning the Fury X is MORE ROP BOTTLENECKED !
Thanks for playing, amd fanboy... it's been so beneficial to have your ultimate knowledge and experience to rein in rumors and place the facts on the table.
Since you've loaded Fury X OC numbers into the bench database, is there any chance you can load Fury OC, 980 Ti OC, and 980 OC numbers as well? Overclocking cards, but choosing not to compare OC'd results against OC'd results of other cards, makes it tedious to flip back and forth. Basically I want to see your 980 OC'd results vs. Fury OC'd results, and going all the way back to the 980 launch isn't ideal since there have been driver improvements and game-performance-improving patches as well.
I agree. The common practice of showing factory OC'd cards against reference designs is misleading. Showing an EVGA factory clocked 980 versus a Sapphire factory clocked Fury is a more realistic comparison. A factory OC'd 980 probably matches or beats the Fury at the same price. But I can't determine that here because there is no comparison :(
The main problem with putting OC results in Bench is YMMV. I've yet to get a card that overclocks as well as the reviewed version (granted, you tend to find faults weeks or months later, whereas in a review they only need to be "stable" for the few days or a week of testing). Unigine Valley at Ultra / 4K DSR is a really good test for me.
"The main problem with putting OC results in bench is YMMV"
This is true if you personally overclock the card. However, the Sapphire is "factory clocked" higher and is guaranteed and warrantied at that clock - and it also performs better as a result. The EVGA GTX 980 Superclocked ACX 2.0 with a factory clock of 1266/1367MHz (about 10% above reference) is also guaranteed and warrantied at those clocks.
Including a card like this in the benchmarks would provide a real-world comparison to the Sapphire. At the very least it should be *mentioned* in the OC section or conclusions. Something like "Of course, the real elephants in the room are the GTX 980s from board partners like EVGA which are clocked 10% or more above reference. Maxwell has proven to scale very well with clock speed, and these should provide comparable or better performance than the Sapphire for nearly $100 less at current prices."
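For reference, here's what that factory bump works out to, assuming the GTX 980's reference clocks of 1126MHz base / 1216MHz boost (consistent with the "10% or more" figure above):

    ref_base, ref_boost   = 1126, 1216  # reference GTX 980 clocks (assumed)
    evga_base, evga_boost = 1266, 1367  # EVGA 980 Superclocked ACX 2.0, per the post above
    print(f"base uplift:  {evga_base / ref_base - 1:.1%}")    # ~12.4%
    print(f"boost uplift: {evga_boost / ref_boost - 1:.1%}")  # ~12.4%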
I'm betting that they can, but the problem is not the 980 which they could match, but as noted in the review, the Fury X. It's a similar issue with the 980ti vs Titan. The 980ti is 60% of the price, but 90+% of the performance. At that point, the ONLY reason to get a Titan is if you need it for FP64 compute.
In this case, if they took it to the $500 price point, you'd be in the same boat: 75% of the price for 93% of the performance, and it would really cannibalize the Fury X. Keeping it at $550 makes it 85% of the price for 93% of the performance. And, since it does outperform the 980, it should be a bit more expensive.
Honestly though, I think AMD should be undercutting themselves a bit to win more of the market share. If I compare the GTX 980 and the R9 Fury, the only problem I see is I take a 10% or so cut to 1080p performance. If the R9 Fury is the same price, then I can forgive the higher power consumption in exchange for better performance at the same price point.
I don't think a lot of people consider the top tier cards anyway (not including the Titan, which is ridiculous in and of itself for most consumers).
Most people are at 1080P, that's WHY NVIDIA optimizes for it.
But, the fickle insanity of all the reviewers requires unplayable "compromises" on image quality and frame rates, and the stupid as dog doo fanboy base goes along for the autistic ride.
NVIDIA knows better; there are sane persons such as myself - when spending multiple hundreds on a video card I don't want to "cut down on settings" and fiddle with the dang thing day and night then test for "playability". I don't want to waste my life screwing around.
I game on a 980 Ti at 1920x1200 and it's BARELY ENOUGH to not worry about any settings I want to use, in any game whatsoever - which is of course the most enjoyable thing !
No crashing, no haggling, no constant attempts at optimizing, no cutting down eye candy, no limiting !
Now maybe for someone who wants to hassle with 2 or 3 or 4 cards, a gaggle of cables, likely custom water cooling, a monstrous noise constantly going, then a huge 3k or triple monitors - THEY STILL HAVE TO SHUT OFF EYE CANDY AND SETTINGS TO GET A DECENT PLAYABLE FRAME RATE.... THOUSANDS UPON THOUSANDS OF DOLLARS YET INADEQUATE.
Wondering how much the drop in power consumption is when setting the fan to max speed. Is leakage power significant when making the card run cooler, say 65 degrees or so? If it can be tested.
You use the reference GTX 980 and don't overclock it or even use one of the many, many factory overclocked models available, yet there in all your charts is the Sapphire Tri-X R9 Fury OC.
I thought the previous review was brown-nosing AMD, but this one even outdoes that one.
Nonsense. I'm not sure what review you glossed over, but this was a fantastic read. I'm ogling the Sapphire's noise results, quite phenomenal. RE: OC versus stock cards. This seems to be a policy/protocol of many review sites and not just Anandtech. If you've been on the site long enough you'd know it doesn't apply to this review only: flick back to the 980 and 980 Ti reviews and you'll see the OC'd card under review tested against stock counterparts. It's not just Anandtech either. Hexus does exactly the same (they OC'd the already overclocked, phenomenal 980 Ti G1 from Gigabyte versus stock models; it trashed the 295X2). It's annoying to keep flicking in between to check all the OC'd results, but to bash the whole review is unfair to say the least.
Reply from Ryan Smith (taken from the Fury X review comment section): "Curious as to why you would not test Fury OC's against the 980TI's OC?"
As a matter of policy we never do that. While its one thing to draw conclusions about reference performance with a single card, drawing conclusions about overclocking performance with a single card is a far trickier proposition. Depending on how good/bad each card is, one could get wildly different outcomes.
If we had a few cards for each, it would be a start for getting enough data points to cancel out the variance. But 1 card isn't enough.
It would be "a win" for the Fury if it offered 5-10% more performance at its resolution tier (1440p) at the same price. But at 8% more performance for 10% more price, as always under similar circumstances (i.e., near parity in performance/$), it all really boils down to brand preference/recognition.
Too bad it requires so much cooling, because that big-ass heatsink on a much smaller PCB is just ridiculously proportioned. Should every model come with "reinforcement" or "stabilizers" to mitigate warping over time then? Still seems ridiculous though.
You do realize that these aren't reference coolers (which would be the indication of what is required) and that both companies put out graphics cards with just as big of heatsinks? Hell, with the extra PCB space, they probably put more cooling on it because 10"-12" high end GPUs are normal, so why not put that space to use? NVIDIA did this with the GTX 670. The PCB was something like 8" long, but they stuck a 4" fan that overhung because why not.
Well it actually cools way better because the heat sink is longer than the PCB for obvious reasons. The shorter PCB is pretty much an advantage any way you look at it.
I don't know about anyone else, but I'd like to see a very in-depth article about why/how nvidia gpus are much more power efficient vs amd. I don't know how they managed it.
The biggest contributing factors in Maxwell's efficiency are:
1. A different organization of GPU controller units relative to execution units, for better use of resources.
2. A 256-bit bus (fewer lines to power).
3. Stuff that was done in Kepler (namely the lack of a hardware scheduler). AT's review of the GTX 680 covers a lot of this.
It's only extremely important if AMD wins and NVidia has a possible weakness or failure - then of course pages of dedicated review time will be spent analyzing the "possible problem" NVidia might have... and every thing humanly possible will be sworn to be done to produce a single instance of "that weakness".
Doesn't matter if even it's just theoretical due to "the configuration nvidia chose in the layout", I've seen it here over and over for many years.
But, if AMD has a glaring heinous hole in performance, capability, standards, whatever it is, ignore it, claim it does not really exist, then in the exact same or very similar area, put down nvidia...
Yuck. I was thinking the 980 was going to be the odd man out after the reshuffle but it looks like it's going to be cheaper than the Fury and easily faster once OC'ing is done.
I think they are using a GTX 980 FTW, which already has a pretty hefty overclock. Mind you, it can be pushed further with adequate cooling, but still, they are not using a vanilla 980.
Except when it comes to minimum frame rates and added features, no stuttering, and especially OVERCLOCKING, then of course NVIDIA wins on ALL COUNTS:
3D multi-monitor
QUAD monitors out of the box
frame rate limiting
adaptive v-sync
best drivers
game-day drivers
GAME STREAMING
ON-THE-FLY RECORDING
automatic driver optimization per game, for free
I mean the list is embarrassing for all amd fanboys, rather it should be, but of course amd fanboys might look at a tinily better average fraps number and lose control of their bowels.
Only that one of the benefits of HBM was... wait for it... being right on the same package! No need for 12" cards with all your power delivery components far away due to GDDR5 needing so much space. There's even that famous photo of Jen-Hsun holding a ridiculously SHORT video card showing off HBM. AMD can't get its heat and power numbers under control, hence it needs the big HSF for air cooling? And gosh, the fully enabled Fiji *needs* water cooling? Just because all other current video cards are always 10-12" is not a legitimate reason to make these cards just as long.
Don't know how you came to that conclusion, but OK. The card doesn't need the massive heatsinks to keep itself cool; it uses the same amount of power as an overclocked 290X, which did come in shorter two-fan cards. It's just that AIBs are going over the top for what is literally a premium card. Asus needed an excuse to use their DCUIII cooler, and this is a pretty good debut. These cards take a lot of engineering to get right. Sapphire always uses reference PCBs and lets the cooler overhang; Gigabyte always does it on shorter cards too (I have one of their Windforce 3X 270Xs and it's just so unnecessarily long, lol); Asus even does it in the rare case their previous coolers were too long. If they needed the coolers to be so big, they wouldn't have included the 0 RPM fan modes, because the GPU wouldn't be able to take it due to out-of-control power consumption and leakage. It is a legitimate reason, as it gives you more space for a fin array, which means you can have slower and quieter fans; also, some people prefer the long cards as they feel the need to overcompensate for things....
That's quite the gigantic rambling set of excuses for amd's power-hungry housefires... let's list:
1. good debut ! (the giant heatsink debut, apparently)
2. takes a lot of engineering.... (since it's a gigantic housefire, YES)
3. gigabyte always makes shorter cards (not true, but in any case it negates "lotsa engineering!")
4. 0rpm fan modes is why ! (compensating for immense heat dissipation noise when not gaming)
5. it's RARE coolers are too long, Asus knows. (how the amd fanboy speaks for asus' too-long coolers is unclear)
6. so hot the gpu wouldn't be able to take it (great support, amd fanboy, the problem sounds like an advantage)
7. legitimate because longer = more fins, which makes longer legitimate ! (circular amd halo)
8. slower and quieter fans (actually the fanboy got 1 thing correct)
9. overcompensation (long cards are for little manlets, says the amd fanboy)
ROFLMAO - AN ASTOUNDING AMD FANBOY BLOVIATION !
You might deserve the amd fanboy dissembling and politician PR disaster award.
To be fair, for many if not most people, performance/watt is a lesser concern than performance/dollar. The Fijis do seem to have some minor power and thermal issues, but still if priced competitively (and supplied promptly), they may very well allow AMD to hang on until "Zen" and the eagerly awaited die-shrink.
Please tell me Ryan, why are you not including 390X benches, since the GTX 980 sits between the Fury and the 390X and we all know how close the R9 390X is to the 980? Including it would have been much better than including a very old card like the GTX 580.
We have not yet reviewed the 390X at this time. That will be coming later this month.
As for the GTX 580, that's something that I collected back for the GTX 980 Ti review, but the data is still valid since driver branches have not changed.
Everybody who decides on a purchase should consider image quality as a factor. There is hard evidence in the following link showing how inferior the image of the Titan X is compared to the Fury X: the somewhat blurry Titan X image is lacking some details, like the smoke of the fire. If the Titan X is unable to show the full details, one can guess what other Nvidia cards are lacking. I hope such issues will be investigated fully by reputable HW sites, for the sake of fair comparison and to help consumers with their investments.
The issue covered in that thread (if you follow it up completely) turned out to be a bug in Battlefield 4, rather than some kind of driver issue or real image quality difference between NV and AMD. The author of the video, Gregster, went back and was able to find and correct the problem; Battlefield 4 was having a mild freak-out when he switched video cards. This is something of a known issue with the game (it can be very picky) and does not occur with our testing setup.
Meanwhile, though you don't see it published here, we do look for image quality issues, and if we saw something we would post about it.
Well, Sapphire has a history of fans that die early on their cards and Asus has a rep for poor customer service, so that (waiting for other vendors) on top of the price being at least $50 too high means I'll wait until the initial rush is over and prices come down to market rates instead of early adopter premiums.
I mean, really, they're charging $50 less for a card that's up to 17% slower IN ADDITION to not having the expensive water cooling block on it? No, it needs to be closer to $100 cheaper. And then on top of that, both Fury cards are $50 too expensive based on how they perform and their missing features compared to NVidia's. This Fury non-X should have debuted at $499, and likely dropped to $474 within a month, while the Fury-X drops to $599 initially, followed by $574 within a month. Then AMD would actually be what it used to be: a better bang-for-the-buck than NVidia.
Question: what features is AMD missing that nvidia has? And no, GameWorks/PhysX don't really count, because AMD can use those features even though they run badly on its hardware, since they're nvidia features and AMD isn't allowed to optimize for them.
Sapphire now uses double-ball bearing fans. That means the issue with their fans dying on newer cards hasn't been proven yet. Next time please read the review more carefully.
Since you can easily get a SUPER OC 980 for the price of the Fury, why not put one in the benchmarks? That is its real competition, NOT the regular 980s that are $50 less, correct? IF AMD is avoiding giving a ref design for the regular Fury, then why not use an OC version of the 980 also? It would seem like sneaky tactics by AMD here, to get an AMD portal site like yours to compare products that are NOT really even. You should be using an OC 980 priced like one of the cards you reviewed. The OC card here is actually $70 more than a ref 980. http://www.newegg.com/Product/ProductList.aspx?Sub... Multiple OC cards for $499 or less (Zotac AMP $479 in cart), and they come with a game. EVGA, Gbyte, Zotac, Asus Strix etc., all 499 or less. MSI is the highest OC of this bunch at $509.
EVGA has an ACX model for $499 after $30 rebate. 1279 core/1380 boost! Pitting the Fury against regular 980s is a joke.
Also, why do you keep using drivers that are TWO revs behind NV's WHQL drivers (353.06 released May 31st, and 353.30 June 22nd; both are later, correct)? Also, ExtremeTech says 25% faster with 353.30 in Metro LL, so I'm wondering what other games are much faster given TWO revs of drivers later than what you seem to be using here. ExtremeTech used the same card as you (Fury X) but commented on the 353.30s apparently being faster.
Still wondering when you're going to cover the WHINE of the FURY X cards that retail users have had: http://www.tomshardware.com/reviews/amd-radeon-r9-... Toms spent 4 pages on it. Wccftech reported it also, with vids so you could hear it (and the coil whine). There are going to be RMAs over this. But not a peep here about it reaching users and AMD covering OEMs?
1) We do not compare OC'd cards. We did this once before; the community made it VERY clear that it was the wrong thing to do. So all of our comparisons are based on reference clocked cards against reference clocked cards. In other words we examine the baseline, so that the performance you as a consumer gets would never be lower than what we get at identical settings.
2) For the NV drivers, the latest drivers do not impact the performance on our current benchmark suite in any way. Nor would we expect them to, as they're all from the same driver branch. While I have already checked some cards, the amount of time required to fully validate all of our NVIDIA cards would not be worth the effort since the results would be absolutely identical.
Well, technically you do, but usually only when there's no reference card. Problem is, if one vendor decides not to do a reference card, the custom cards get an advantage when comparing against reference cards of another vendor. We're only asking that you include a custom card as well in these situations.
Whenever there isn't a reference card however, we always test a card at reference clockspeeds in one form or another. For example in this article we have the ASUS STRIX, which ships at reference clockspeeds. No comparisons are made between the factory overclocked Sapphire card and the GTX 980 (at the most we'll say something along the lines of "both R9 Fury cards", where something that is true for the ASUS is true for the Sapphire as well).
And if you ever feel like we aren't being consistent or fair on that matter, please let us know.
Except a card that is $100 more than the GTX 980, and slower than it when OC'd, isn't a "win" by any means. Unless we're treating AMD like they're in the special Olympics and handing them a medal just for showing up.
The amd fanboy forgets pricing suddenly; when their master and god reams the backside of the wallet holder, suddenly price is removed entirely from the equation !
It's no secret amd fanboys are literally insane from their obsession.
Really? The Fury is ahead of the GTX 980 and is a comparable value proposition? Yet again you run overclock benches with the Fury without showing the same being done on that GTX 980. I have no idea why you follow this policy of no cross-vendor OC performance comparisons as most people geeky enough to be reading this website would be interested in seeing final performance stats that they can achieve on a stock card, with basic overclocking. So if you have a card like the Fury that overclocks a whopping 5%, compared to a GTX 980 that can overclock 25-40%, yeah...you're doing a disservice to your readers by withholding that information. Especially when you don't even bring it up in your conclusion.
Paid by AMD much? I never thought I'd have to look at any reviews other than those from Anandtech. But this website has been playing it too cautiously, and without the ultimate intent of showing best overall/actual performance and value to their readers.
Seriously Ryan, I'm disappointed. I don't see why there would be an issue with showing a comparison between max OC between, oh, let's say, the Asus R9 Fury STRIX and the...Asus GTX 980 STRIX. Would be a pretty darn fair comparison of 2 different cards and their performance when given the same treatment by the same manufacturer.
I don't know why I even bother commenting anymore.
I think it would be great to see an OC vs OC comparison; the problem is it's unfair to do so now for the Fury, as there is no voltage control yet like there is for Nvidia cards.
When Afterburner / Trixx etc. support voltage control for the Fury, then we can really get the real max OC vs max OC, unless you are talking about overclocks at default voltage, which makes no sense.
At the end of the day, this is the product they've placed in people's hands. But even without voltage increases on the Nvidia side, you should be able to OC to around 1400MHz at least, when on a custom cooler design like the STRIX model, for example. But I don't think Ryan cares about what's "fair." That's why he's putting these custom boards up against the reference NVidia designed/clocked GTX 980. Because that's somehow a fair comparison. =D
Don't worry, the insanity is so rampant that if amd ever gets an overclock going- we will never hear the end of it - "OVERCLOCK MONSTER !!!!"
On the 290 series, amd fanboys were babbling all over the internet that nearly every amd card hits 1300 or 1350 "easily". Of course they were lying; those were rare exceptions, but hey... losing is hard on the obsessed.
Now we know the nvidia flagship core is a real monster overclocker, so we should not mention it ever, nor hear about it, nor factor it in at all, EVER FOR ANY REASON WHATSOEVER.
Frankly I can't get away from the sickly amd fanboys, it's so much entertainment so cheap my sides have never been busted so hard.
If quality articles that provide a balanced perspective disappoint you that's a shame. Take comfort in knowing the 980TI is currently still the fastest card. Allow me to once again commend Anandtech on an excellent article.
Yes...comparing an overclocked 3rd party pimped model against a non-OC'd reference design card that is $100 less in price and then saying it's a tough choice between the 2 as the R9 Fury is faster than the GTX 980...lol...totally balanced perspective there bro.
Whenever a pure reference card isn't available, like is the case for R9 Fury, we always test a card at reference clockspeeds in one form or another. For example in this article we have the ASUS STRIX, which ships at reference clockspeeds. No comparisons are made between the factory overclocked Sapphire card and the GTX 980 (at the most we'll say something along the lines of "both R9 Fury cards", where something that is true for the ASUS is true for the Sapphire as well).
And if you ever feel like we aren't being consistent or fair on that matter, please let us know.
I just want to understand something clearly. Are you saying that when you are talking about performance/value, you ignore the fact that one card can OC 4%, and the other can OC 30% as you don't believe it's relevant? Let me pose the question another way. If you had a friend who was looking to spend $500~ on a card and was leaning between the R9 Fury and the GTX 980. Knowing that the GTX 980 will give him better performance once OC'd, at $450, would you even consider telling him to get the R9 Fury at $550?
My concern here is that you're not giving a good representation of real world performance gamers will get. As a result, people get misled into thinking spending the extra money on the R9 Fury is actually going to net them higher frame rate...not realizing they could get better performance, for even less money, if someone decided to actually look at overclocking potential...
Now if you weren't interested in overclocking results in general, I'd say fine. I disagree, but it's your choice. But then you do show overclocking results with the R9 Fury. I'm finding it really hard to understand what your intent is with these articles, if not to educate people and help them make an actual informed decision when making their next purchase.
As I mentioned in your previous article on the Fury X...you seem to have a soft spot for AMD. And I'm not exactly sure why. I will admit that I'm currently a big Nvidia fan. Only because of the features and performance I get. If the Fury X had come out, and could OC like the 980ti and had 8gb HBM memory, I'd have become an AMD fan. I'm a fan of whoever has the best technology at any given moment. And if I were looking to make a decision on my next card purchase, your article here would give a false impression of what I would get if I spent $100 more on an R9 fury, than on a GTX 980...
Apparently, OC is your thing. I get it. There are plenty of OC sites that are just for you. For some of us, we really don't want to see how much we can shorten the life of something we pay good money for, when the factory performance does what we need. I, for one, am more interested in how something will perform without me potentially damaging my computer, and I appreciate the way that AT does their benchmarks.
I think the fact that you believe overclocking will "damage your computer" or in any meaningful way shorten the lifespan of the product, is all the more reason to talk about overclocking. I'd be more than happy to share a little info.
Generally speaking, what kills the product is heat (minus high-current degradation, which was a bigger problem on older fab processes). So let's say you have a GPU that runs 60 degrees Celsius under load, at 1000MHz with a 50% fan speed profile. Now let's imagine 2 scenarios:
1) You underclock the GPU to 900MHz and set a 30% fan profile to make your system more quiet. Under load, your GPU now hits 70 degrees Celsius.
2) You overclock the GPU to 1100MHz and set a 75% fan profile for more performance at the cost of extra sound. Under load, your GPU now hits 58 degrees Celsius due to increased fan speed.
Which one of these devices would you think is likely to last the longest? If you said the Overclocked one, you'd be correct. In fact...the overclocked one is likely to last even longer than the stock 1000MHz at the 50% fan speed profile, because despite using more power and giving more performance, the fan is working harder to keep it cooler, thus reducing the stress on the components.
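To put rough numbers on that, here is a small back-of-the-envelope sketch in Python. It leans on the common rule of thumb that every 10 degrees Celsius of extra heat roughly halves electronics lifespan; the rule and the temperatures are illustrative assumptions, not measured figures for any specific GPU.

# Rough relative-lifespan sketch using the "10 C hotter ~= half the life" rule of thumb (an assumption).
def relative_life(temp_c, reference_c=60.0):
    return 2 ** ((reference_c - temp_c) / 10.0)

scenarios = {
    "stock 1000MHz, 50% fan, 60C": 60,
    "underclocked 900MHz, 30% fan, 70C": 70,
    "overclocked 1100MHz, 75% fan, 58C": 58,
}
for name, temp in scenarios.items():
    print(f"{name}: ~{relative_life(temp):.2f}x relative lifespan")

By that crude measure the hotter, underclocked card is the one that ages fastest, which is the point being made above.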
Now. Let's talk about why that card was clocked at 1000MHz to start! When a chip is designed, the exact clock speed is an unknown. Not just between designs...but between individual wafers and dies cut out from those wafers. As an example, I had an i7 3770k that would use 1.55v to hit 4.7GHz. I now have one that uses 1.45v to hit 5.2GHz and 1.38v to hit 5GHz. Why am I telling you this? Well...because it's important! When designing a product like a CPU or GPU, and setting a base clock, you have to account for a few things:
1) How much power do I want to feed it?
2) How much of the heat generated by that power can I dissipate with my fan design?
3) How loud do I want my fan to be?
4) What's the highest clock rate my lowest end wafer can hit while remaining stable and at an acceptable voltage requirement?
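As a toy illustration of that last point (all numbers invented, purely to show the reasoning): the shipping clock has to suit the worst die the vendor intends to sell, not the best one.

# Hypothetical per-die clock limits, in MHz, at the chosen voltage/fan budget.
worst_stable_clock = [1400, 1480, 1320, 1550, 1350]
stock_clock = min(worst_stable_clock)
print(f"stock clock has to sit around {stock_clock} MHz;")
print(f"the best die in the batch still has ~{max(worst_stable_clock) - stock_clock} MHz of headroom left")

Which is exactly why a lucky sample can overclock far beyond the box clock.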
So here's the fun part. While chips themselves can vary greatly, there are tons of precautions added when dealing with stock speeds. Just for a point of reference...a single GTX Titan X is guaranteed to overclock to 1400-1550MHz with proper settings, if you put the fan at 100% full blast. That's a 30%-44% overclock! So why wouldn't Nvidia do that? Well it's a few things.
1) Noise! Super important here. Your clockspeed is determined by your ability to cool it. And if you're cooling by means of a fan, the faster that fan, the more noise, the more complaints by consumers.
2) Power/Heat variability. Since each chip is different, as you go into the higher ranges, each will require a different amount of power in order to be stable at that frequency. If you're curious, you can see what's called an ASIC quality for your GPU using a program like GPU-Z. This number will tell you roughly how good of a chip you have, in terms of how much of a clock it can achieve with how much power. The higher the % of your ASIC quality, the better overclocking potential you have on air because it'll require less power, and therefore create less heat to do it!
3) Overclocking potential. This is actually important in Marketing. And it's something AMD and Nvidia are both pretty bad at, actually. But AMD's bit a bit worse, to their own detriment. In their R9 Fury and Fury X press release performance numbers they set expectations for their cards to completely outperform the 980ti using best case scenarios and hand picked settings. And they also said it overclocks like a beast. Now...here's why that's bad. Customers like to feel they're getting more than what they pay for. That's why companies like BMW always list very modest 0-60 times for their cars. When I say modest, I mean they set the 0-60 times to show they're worse than what the car is actually capable of. That's why every car review program you see will show 0-60 times being 0.2 to 0.5 seconds faster than what BMW has actually listed. This works because you're sold on a great product, only to find it's even greater than that.
Got off track a bit there. I apologize. Back to AMD and the Fiji lineup and why this long post was necessary. When AMD announced the Fury X being an all in one cooler design, I instantly knew what was up. The chip wasn't able to hold up to the limitations we talked about above (power requirement/heat/fan noise/stability). But they needed to put out a stock clock that would allow the card to be competitive with Nvidia, but they also didn't want it to sound like a hair dryer. That's why they opted for the all in one cooler design. Otherwise, a chip that big on an air cooled design would likely have been clocked around the 850-900MHz range that the original GTX Titan had. But they wanted the extra performance, which created extra heat and required more power, and used a better cooler design to be able to accomplish that across the board with all their chips. That's great, right? Well...yes and no. I'll explain.
Essentially the Fiji lineup is "factory overclocked" by AMD. This is the same as putting a turbo on a car engine. And as any car enthusiast will tell you, a 2L engine with a turbo on it may be able to produce 330 horsepower, which could otherwise take a naturally aspirated 4L V8 to accomplish. But then you're limited for increased horsepower even further. Sure you could put a bigger supercharger on that 2L engine. But it already had a boosted performance for that size engine. So your gains will be minimal. But with that naturally aspirated engine...you can drop a supercharger on it and realize massive gains. This is very much the same as what's happening with the Fury X, for example.
And this is why I believe it's incredibly important to point this out. I built a system for my friend recently with a GTX 970. I overclocked that to 1550MHz on first try, without even maxing out the voltage. That was a $300 model card. And even that would challenge the performance of the R9 Fury if you don't plan on overclocking (not that you could, anyway, with that 4% overclock limit).
So...yes, I do think overclocking needs to be talked about more, as it's become far easier and safer to do than in the past. Even if you don't plan to do extreme overclocking, you can keep all your automated fan speed profiles the same, not touch voltage, and just increase the power limit and slightly raise the GPU clock for simple free performance. It's something I could teach my mom to do. So I hope it's something that is done by more and more people, as there's really no reason not to do it.
And that's why I think informing people of these differences with regard to overclocking will help people save money, and get more performance. And not doing so just keeps people in the dark, and does them a great disservice. Why keep your audience oblivious and allow them to remain ignorant to these things when you can take some time to help them? Overclocking today is far different from the overclocking of a few years ago. And everybody should give it a try.
@Socius That was a good post explaining your interest in OC. From my perspective, I come from the days when OC'ing would void your warranty and would shorten the life span of your system. I also come from a more business and channel oriented background, meaning that we stay with "officially supported" configurations. That background stays with you. Even today, when building my personal computer or a web browsing computer for a family member, I do everything I can to stay with the QVL of the motherboard.
I am more concerned with out of box experience, and almost always skip over the OC results of benchmarks whenever they are presented. If a device can not perform properly OOB, then it was not properly configured from the start, which does not give me the best impression about the part, regardless of the potential of individual tweaks.
In the end, you are looking for OC results, I am only concerned with OOB experience. Different target audience, both with valid concerns. I just don't think it is worth bashing a reviewer's method that focuses on one experience over the other.
Good review as always, Ryan. I wouldn't throw away Asus's approach. Nice power efficiency gains. Funny how overclocking for a few fps more heated the discussion... All in all, I think AMD is back in business with Fury non-X. Waiting for 3xx reviews, hopefully for the whole R9 line, with 2, 4 and 8 GB.
What a disappointment again. Man...the Fury cards really aren't worth the hassle at all, are they? It's sad to see this, especially coming from an FX-8350 & 2xHD6950s owner. So the 980Ti beats the custom cards (we knew Fury X was quite at its limits from the beginning), but still - despite all the improvements, custom cards sometimes perform even worse than the stock one. The 980Ti really beats the Fury X most of the time.
What is that, nVidia's blower is only 8dB louder than the highest end of the fans used by arguably the two best board partners AMD has? Wow! This is where I realize nVidia really must have done an amazing job with the 980Ti and Maxwell in general.
HBM this HBM that...this card is beaten at 4K by the GTX980Ti and gameplay seems to be smoother on the nVidia card too. What the hell? Where are the reasons to buy any of the Fury cards?
By that token the 980 is not good performance per dollar either. It's something like a 390 non-X topping the charts. These high end cards are always a rip off.
The Asus Strix is 9.4% faster than the 980 with 20% worse power consumption. I wouldn't call that "nowhere near" Maxwell tbh and the Nano will be even closer if not ahead.
Which is true but not the only way to get that resolution & refresh. Lack of HDMI 2.0 and full HEVC features is certainly another sore point for Fury. For the most part HDMI 2.0 affects the consumer AV/HT world though, not so much the PC world. In the PC world, gaming monitors capable of those res/refresh rates are going to have DP on them which makes HDMI 2.0 extraneous.
I'll second ES_Revenge on DP for PC gaming. Going without HDMI 2.0 for the world of 4K home monitors is something we'll live with until the next major revision.
I don't even own a 4K Home Monitor. Not very popular in sales either.
Every single one of them showing up on Amazon are handicapped with that SMART TV crap.
I want a 4K Dumb Device that is the output Monitor with FreeSync and nothing else.
It's such a massive failure, and so big a fat obtuse lie, it's embarrassing to even bring up, spoiling the party that is fun if you pretend and fantasize enough, and ignore just how evil amd is.
hdmi 2.0 - nope ! way to go what a great 4k gaming card ! 4gb ram - suddenly that is more than enough and future proof !
With all respect, 300 vs 360 watt at load and 72 vs 75 watt idle doesn't deserve "consumes MUCH more power", Ryan, and that even if it wasn't a faster card.
For total system power draw? Yeah it does....because the power usage gap percentage is lessened by the addition of the system power usage (minus the cards) in the total figure. So if the numbers were 240W vs 300W, for example, that's 25% more power usage. And that's with a 20-30W reduction in power usage by using HBM. So it shows how inefficient the GPU design actually is, even when masking it with the HBM power reduction and the addition of total system power draw instead of calculating it by card.
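A quick sketch of that masking effect (the card and system wattages below are assumed, round illustrative numbers, not the review's measurements):

# The same card-level gap looks smaller once it is buried in total system draw.
rest_of_system = 150        # watts for CPU, motherboard, drives, etc. (assumed)
card_a, card_b = 165, 225   # hypothetical card-only draws, 60 W apart

card_gap = (card_b - card_a) / card_a * 100
wall_gap = ((card_b + rest_of_system) - (card_a + rest_of_system)) / (card_a + rest_of_system) * 100
print(f"card-only gap: {card_gap:.0f}%, at-the-wall gap: {wall_gap:.0f}%")  # ~36% vs ~19%

So a wall-socket comparison will always understate how far apart the GPUs themselves are.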
The fail that is AMD's Fury series makes my MSI Gaming 4G GTX 980 look even better. I only paid $430 for it and it gets to 1490/1504 boosted at stock voltage. Essentially it means I got a card as fast as an overclocked Fury for $100+ less, that uses far less power, and that has been in my system for months already.
I am very glad I went Nvidia after 5 years with AMD/ATI graphics and didn't wait months for Fiji.
"The R9 Fury will be launching with an MSRP of $549, $100 below the R9 Fury X. This price puts the R9 Fury up against much different completion* than its older sibling; "
WOW, so much fail from AMD...might as well kiss their ass goodbye! Pimping the Fury at 4k, when really even the 980Ti is borderline on occasion, and releasing a card with no OC headroom at the same price as its competitor!
I didn't have too high hopes for the "regular" Fury [Pro] after the disappointing Fury X. However I have to say...this thing makes the Fury X look bad, plain and simple. With a pretty significant cut-down (numerically) in SPs and 32 fewer TMUs, you'd expect this thing to be more of a yawn. Instead it gives very near to Fury X performance and still faster than a GTX 980.
The only problem with it is price. At $550 it still costs more than a GTX 980 and Fury has less OC potential. And at only $100 less than Fury X it's not really much of a deal considering the AIO/CLC with that is probably worth $60-80. So really you're only paying $30 or so for the performance increase of Fury X (which isn't that much but it's still faster). What I suggest AMD "needs to do" is price this thing near to where they have the 390X priced. Fury Pro at ~$400 price will pull sales from Nvidia's 980 so fast it's not funny. Accordingly the 390X should be priced lower as well.
But I guess AMD can't really afford to undercut Nvidia at the moment so they're screwed either way. Price is high, people aren't going to bother; lower the price and people will buy but then maybe they're just losing money.
But imagine buying one of these at $400ish, strapping on an Asetek AIO/CLC you might have lying around (perhaps with a Kraken bracket), and you have a tiny little card* with a LOT of GPU power and nice low temps, with performance like a Fury X. Well one can dream, right? lol
*What I don't understand is why Asus did a custom PCB to make the thing *longer*??? One of the coolest things about Fury is how small the card is. They just went and ruined that--they took it and turned it back into a 290X, the clowns. While the Sapphire one still straps on an insanely large cooler, at least if you remove it you're still left with the as-intended short card.
You ran a whole suite of synthetic benchmarks yet you completely ignored DX12 Star Swarm and 3DMark's API Overhead test.
The question that I have is why did you omit DX12 benchmarks?
Starswarm is NOW COMPLETE AND MATURE.
It is also NOT synthetic but rather a full length game simulation; but you know this.
3dMark is synthetic but it is THE prime indicator of the CPU to GPU data pipeline performance.
They are also all we have right now to adequately judge the value of a $549 AMD GPU vs a $649 nVidia GPU for new games coming up.
Since better than 50% of games released this Christmas will be DX12 don't you think that consumers have a right to know how well a high performance API will work with a dGPU card designed to run on both Mantle and DX12?
AMD did not position Fiji for DX11. This card IS designed for DX12 and Mantle.
The Star Swarm benchmark is, by design, a proof of concept. It is meant to showcase the benefits of DX12/Mantle as it applies to draw calls, not to compare the gaming performance of video cards.
Furthermore the latest version is running a very old version of the engine that has seen many changes. We will not be able to include any Oxide engine games until Ashes of the Singularity (which looks really good, by the way) is out of beta.
Finally, the 3DMark API Overhead test is not supposed to be used to compare video cards from different vendors. From the technical guide: "The API Overhead feature test is not a general-purpose GPU benchmark, and it should not be used to compare graphics cards from different vendors."
Do you also realise that Fiji completely outclasses Maxwell and Tesla as well?
Gaming is a sideshow. AMD is positioning Fury x to sell for $350+ as a single unit silicon for HPC. With HBM on the package!!!
HPC GPU computing is now up for grabs. Comparing the Fiji PACKAGE to the Maxwell or Tesla PACKAGE has AMD thoroughly outclassing the Professional Workstation and HPC silicon.
HBM stacked memory can be configured as cache and still feed GDDR5 RAM for multiple monitors.
AMD has several patents for just that while using HBM stacked memory.
I think that AMD is quietly positioning Fiji and Greenland next for High Performance Computing.
Fury X2 with 17 TFLOPS of single precision and almost 8 TFLOPS of double precision is going to change the cluster server market.
Of course Fury X2 will rock this Christmas.
What will be the release price? I think less than $999!!!
AMD has made a habit of being the grinch that stole nVidias Christmas.
Note that Fiji is not expected to appear in any HPC systems. It has no ECC, minimal speed FP64, and only 4GB of VRAM. HPC users are generally after processors with large amounts of memory and ECC, and frequently FP64 as well.
Looking at what happened with the old generation AMD and Nvidia GPUs, I wouldn't be surprised if, after a few driver updates, Fiji ends up well ahead of Maxwell. AMD has always improved its old architectures with software updates while Nvidia hardly ever did that; actually they downgrade their old GPUs so that they can sell their next overpriced SoC.
The myth, here again! Let's see the numbers of this miraculous vs crippling driver. And I mean I WANT NUMBERS! Or what you are talking about is just junk you are reporting because you can't elaborate yourself. Come on, the numbers!!!!!!!!!
I am very interested to see how much of a difference ASUS' power delivery system will make for (real) overclocking in general once voltage control is available. If these cards act the same as the 290's did, then AMD's default VRM setup could very likely be more than capable of overclocks in the 25% or more range. I'm basing the 25% or more off of my experience with a half dozen reference based R9 290's, default 947MHz core, that would reach 1200 core clock with ~100mV additional. And if you received a capable card then you could surpass those clocks with more voltage.
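For reference, the arithmetic behind that ~25% figure, using the clocks quoted above:

# 290 reference core clock vs the overclock commonly reached with extra voltage, as cited above.
base_mhz, reached_mhz = 947, 1200
print(f"{(reached_mhz / base_mhz - 1) * 100:.1f}% overclock")  # ~26.7%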
It appears AMD has followed the EXACT same path they did with the 290 and 290X. The 290X always held a slight lead in performance, but the # of GPU components disabled didn't hinder the 290 as much as anyone thought. This is exactly what we see now with the Fury ~VS~ Fury X...overclock the Fury and it's the better buy. All while the Fury X is there for those who want that little bit of extra performance for the premium, and this time you're getting water cooling! It seems like a pretty good deal to me.
Once 3rd party programmers(not AMD) figure out voltage control for these cards, history will likely repeat itself for AMD. Yes, these will run hotter and use more power than their Nvidia counterparts...I don't see why this is a shock to anyone since this is still 28nm and similar enough to Hawaii...What no one seems to mention is the amount of performance increase compared to Hawaii in the same power/thermal envelope..it's a very significant jump.
Who in the enthusiast PC world really cares about the additional power draw? We're looking at 60-90W under normal load conditions; FurMark is NOT a normal load. Unless electricity where you hail from is that expensive, it isn't actually costing you that much more in the long run. If you're in the market for a ~$550 GPU, then the cost of a good PSU probably isn't a concern for you. What the FurMark power draw of the Fury X/Sapphire Fury really tells us is that the reference PCB is capable of handling 385W+ of draw. This should give an idea of what the card can do once we are able to control the voltage.
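To put a rough dollar figure on that extra draw (every number below is an assumption for illustration, not something from the review):

# Back-of-the-envelope yearly cost of an extra ~60-90 W under gaming load.
extra_watts = 75        # middle of the 60-90 W range
hours_per_day = 3       # assumed gaming time
usd_per_kwh = 0.13      # assumed electricity price

kwh_per_year = extra_watts / 1000 * hours_per_day * 365
print(f"~{kwh_per_year:.0f} kWh/year, roughly ${kwh_per_year * usd_per_kwh:.0f}/year")  # ~82 kWh, ~$11

Change the assumptions and the figure moves, but it stays in the tens of dollars per year, not hundreds.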
These cards are enthusiast grade and plenty of those users will remove the included cooler for maximum performance. A full cover waterblock is going to be the key to releasing the full potential of Fury(X) just like it was for 290(X). It is a definite plus to see board partners with solid air cooling solutions out of the gate though...Sapphire's cooling solution fares better in temperature AND noise during FurMark than ASUS' when it's pulling 130W additional power! Way to go Sapphire!
My rant will continue concerning drivers. Nvidia has mature hardware with mature drivers. The fact AMD is keeping up, or winning in some instances, is a solid achievement. Go back to a 290(X) review when their primary competition was a 780 Ti, where the 780 Ti was usually winning. Now, the 390(X), that so many are calling a rebranded POS, easily bests the 780 Ti and competes with the GTX 980. Nvidia changed architecture, but AMD is still competitive? Another commenter said it best by saying: "An AMD GPU is like a fine wine, and gets better with age."
This tells me 3 things...
1) Once drivers mature, AMD stands to gain solid performance improvements.
2) Adding voltage control to enable actual overclocking will show the true potential of these cards.
3) Add these two factors together and AMD has another winning product.
Lastly we still have DX12 to factor into all of this. Sure, you can say DX12 is too far away, but in actuality it is not. I know there are those people who MUST HAVE the latest and greatest hardware every time something new comes around every ~9 months. However, there are plenty more of us who wait a few generations of GPUs to upgrade. If DX12 brings even a half of the anticipated performance gains and you're in the market, then purchasing this card now, or in the coming months, will be a solid investment for the coming years.
Whatever floats your boat. There are still some people like you that believe FX CPUs are faster than i7s, and they are what keeps AMD afloat. The rest of us.... we actually consider everything and go Intel & Nvidia.
There are 3 fails in your assumptions:
1. Fiji is a much bigger core tied to 4 HBM modules. OC will likely not be as "smooth" as 290X.
2. 60-90W is not just the cost in electricity. It is also getting a PSU that will supply the additional draw, and more fan(s) and a better case to get the heat out. Or suffer the heat and noise. The $15-45 a year in additional electricity also means you will be in the red in a couple of years.
3. You assume the AMD/ATI driver team is still around and will be around a couple of years in the future.
3. Unless the driver work has been completely outsourced and there's proof of this happening, I'm not sure you can use this as a "fail".
Fiji isn't a brand new version of GCN so I don't expect the huge gains in performance that are being touted, however whatever they do bring to the table should benefit Tonga as well, which will (hopefully) distance itself from Tahiti and perhaps improve sales further down the stack.
The most electrically efficient 3D computer gaming via an ARM chip, right? Think of all the wasted watts for these big fancy GPUs. Even more efficient are text-based games.
A bit disingenuous, as custom cooled, overclocked 980s are the norm these days and easily match or exceed Fury, while running cooler with much less power, and they can be found cheaper. AMD has its work cut out for it.
For a GPU that was expected to beat Titan X hands down, just being faster than the 980 is quite a fail. Also due to the high cost technology involved in producing it. Be happy for that, and just wait for DX12 to have some hope of gaining a few FPS with respect to the competitor. I just think DX12 is not going to change anything (whatever these cards gain will be the same for nvidia cards) and a few FPS more or less is not what we expected from this top tier class (expensive) GPU. Despite the great steps ahead made by AMD in power consumption, it still is a fail. Large, expensive, still consuming more, and badly scaling. Hope that with the new 16nm FinFET PP things will change radically, or we will witness a 2 year dominance again by nvidia with high prices.
Ok, compared to the Fury X, the regular R9 Fury makes a bit more sense than the X model. It is priced better (but still priced a bit too high) and it has almost even performance with the X model. However the power consumption is still insane and unreasonable for today's standards! And the temps are way too high for a triple fan card! With a 70C temp running triple fans I doubt there is any room at all for overclocking! I do respect this card's performance! But it is just not worth it for the price you have to pay for a hefty PSU, and the very loud and expensive cooling setup you will have to put inside your case! To be honest: if I was stuck with an old GTX 660 Ti, and someone offered me an R9 Fury in an even trade, I would not do it!
The power consumption is not insane or unreasonable for "today's standards". Only the GTX 960, 970, 980, Titan X are better. So it's unreasonable for Nvidia's new standard but it's actually an improvement over Hawaii, etc. of the past.
Compared to current Nvidia offerings, it's bad, yeah, but we can't really establish standards based on their cards alone. R9 390/X, 380, etc. are still power hungry for their performance and they are still "today's" cards, like it or not.
Don't get me wrong I agree they really need to start focusing on power/heat reduction, but we're not going to see that from AMD until their next gen cards (if they make it that far, lol).
We'd know him by his words, his many lengthy words with links and facts up the wazoo, and he is so proud he would not hide with another name, like a lousy, incorrect, uninformed, amd fanboy failure.
The logic is that the 480 was a successful product despite having horrid performance per watt and a very inefficient (both in terms of noise and temps) cooler. It didn't get nearly the gnashing of teeth the recent AMD cards are getting and people routinely bragged about running more than one of them in SLI.
No, it was not a successful product at all, though it was still the fastest card on the market. The successful card was the 460 launched a few months later, and surely the 570/580 cards which brought the corrections to the original GF100 that nvidia itself said was bugged. Here, instead, we have a card which uses a lot of power, is not on top of the charts, and has no fix on the horizon. The difference was that with GF100 nvidia messed up the implementation of the architecture, which was then fixed; here we are seeing the most advanced implementation of a really not-so-good architecture that for 3 years has struggled to keep pace with a competition which in the end decided to go with 1024 shaders + a 128bit wide bus in 220mm^2 of die space against 1792 shaders + a 256bit wide bus in 356mm^2 of die space, instead of trying to fight the latest longer-fps-bar war. AMD, please, review your architecture completely or we are doomed with the next PP.
It was successful. Enthusiasts bought them in a significant number and review sites showed off their two and three card rigs. The only site that even showed their miserable performance per watt was techpowerup
So we are discussing 6 year old products now? Is that your version of logic? Yes, it was hot, yes, it was buggy but it was still the fastest video card in its era, that's why people bragged about SLI'ing it. Fury X isn't.
He shows that R9 Fury x2 is on par with GTX 980 Ti x 2 and blows away GTX 980 x2. Considering that R9 Fury x2 is much cheaper than GTX 980 Ti x2 and also R9 Fury is optimized for upcoming DX12, it looks like R9 Fury is the clear winner in cost/performance.
Shadow7037932 - Friday, July 10, 2015 - link
Yes! Been waiting for this review for a while.
Drumsticks - Friday, July 10, 2015 - link
Indeed! Good that it came out so early too :DI'm curious @anandtech in general, given the likely newer state of the city/X's drivers, do you think that the performance deltas between each fury card and the respective nvidia will swing further or into AMD's favor as they solidify their drivers?
Samus - Friday, July 10, 2015 - link
So basically if you have $500 to spend on a video card, get the Fury, if you have $600, get the 980 Ti. Unless you want something liquid cooled/quiet, then the Fury X could be an attractive albeit slower option.Driver optimizations will only make the Fury better in the long run as well, since the 980Ti (Maxwell 2) drivers are already well optimized as it is a pretty mature architecture.
I find it astonishing you can hack off 15% of a cards resources and only lose 6% performance. AMD clearly has a very good (but power hungry) architecture here.
witeken - Friday, July 10, 2015 - link
No, not at all. You must look at it the other way around: Fury X has 15% more resources, but is <<15% faster.
0razor1 - Friday, July 10, 2015 - link
Smart, you :) :D This thing is clearly not balanced. That's all there is to it. I'd say the X, for the WC at $100 more, makes prime logic.
thomascheng - Saturday, July 11, 2015 - link
Balance is not very conclusive. There are games that take advantage of the higher resources and blows past the 980Ti and there are games that don't and therefore slower. Most likely due to developers not having access to Fury and it's resources before. I would say, no games uses that many shading units and you won't see a benefit until games do. The same with HBM.FlushedBubblyJock - Wednesday, July 15, 2015 - link
What a pathetic excuse, apologists for amd are so sad.AMD got it wrong, and the proof is already evident.
No, NONE OF US can expect anandtech to be honest about that, nor it's myriad of amd fanboys,
but we can all be absolutely certain that if it was nVidia whom had done it, a full 2 pages would be dedicated to their massive mistake.
I've seen it a dozen times here over ten years.
When will you excuse lie artists ever face reality and stop insulting everyone else with AMD marketing wet dreams coming out of your keyboards ?
Will you ever ?
redraider89 - Monday, July 20, 2015 - link
And you are not an nvidia fanboy are you? Hypocrite.
redraider89 - Monday, July 20, 2015 - link
Typical fanboy, ignore the points and go straight to name calling. No, you are the one people shold be sad about, delusional that they are not a fanboy when they are.redraider89 - Monday, July 20, 2015 - link
Proof that intel and nvidia wackos are the worst type of people, arrogant, snide, insulting, childish. You are the poster boy for an intel/nvidia sophomoric fanboy.FlushedBubblyJock - Wednesday, July 15, 2015 - link
Oh, gee, forgot, it's not amd's fault ... it was "developers and access" which is not amd's fault, either... of course...OMFG
redraider89 - Monday, July 20, 2015 - link
What's your excuse for being such an idiotic, despicable and ugly intel/nvidia fanboy? I don't know, maybe your parents? Somewhere you went wrong.
OldSchoolKiller1977 - Sunday, July 26, 2015 - link
I am sorry and NVIDIA fan boys resort to name calling.... what was it that you said and I quote "Hypocrite" :)
redraider89 - Monday, July 20, 2015 - link
Your problem is deeper than just that you like intel/nvidia since you apparently hate people who don't like those, and ONLY because they like something different than you do.ant6n - Saturday, July 11, 2015 - link
A third way to look at it is that maybe AMD did it right.Let's say the chip is built from 80% stream processors (by area), the most redundant elements. If some of those functional elements fail during manufacture, they can disable them and sell it as the cheaper card. If something in the other 20% of the chip fails, the whole chip may be garbage. So basically you want a card such that if all the stream processors are functional, the other 20% become the bottleneck, whereas if some of the stream processors fail and they have to sell it as a simple Fury, then the stream processors become the bottleneck.
thomascheng - Saturday, July 11, 2015 - link
That is probably AMD's smart play. Fury was always the intended card. Perfect cards will be the X and perhaps less perfect card will be the Nano.
FlushedBubblyJock - Thursday, July 16, 2015 - link
"fury was always the intended card"ROFL
amd fanboy out much ?
I mean it is unbelievable, what you said, and that you said it.
theduckofdeath - Friday, July 24, 2015 - link
Just shut up, Bubby.
akamateau - Tuesday, July 14, 2015 - link
Anand has been running DX12 benchmarks last spring. When they compared Radeon 290x to GTX 980 Ti nVidia ordered them to stop. That is why no more DX12 benchmarks have been run.Intel and nVidia are at a huge disadvantage with DX12 and Mantle.
The reason:
AMD IP: Asynchronous Shader Pipelines and Asynchronous Compute Engines.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
We saw mantle benchmarks so your fantasy is a bad amd fanboy delusion.
Midwayman - Friday, July 10, 2015 - link
I'd love to see these two go at it again once dx12 games start showing up.
Mugur - Saturday, July 11, 2015 - link
Bingo... :-). I bet the whole Fury lineup will gain a lot with DX12, especially the X2 part (4 + 4 GB won't equal 4 as in current CF). The are clearly CPU limited at this point.squngy - Saturday, July 11, 2015 - link
I don't know...Getting dx12 performance at the cost of dx11 performance sounds like a stupid idea this soon before dx12 games even come out.
By the time a good amount of dx12 games come out there will probably be new graphics cards available.
thomascheng - Saturday, July 11, 2015 - link
They will probably circle around and optimize things for 1080p and dx11, once dx12 and 4k is at a good place.
akamateau - Tuesday, July 14, 2015 - link
DX12 games are out now. DX12 does not degrade DX11 performance. In fact Radeon 290x is 33% faster than 980 Ti in DX12. Fury X just CRUSHES ALL nVIDIA silicon with DX12 and there is a reason for it.Dx11 can ONLY feed data to the GPU serially and sequencially. Dx12 can feed data Asynchronously, the CPU send the data down the shader pipeline WHEN it is processed. Only AMD has this IP.
@DoUL - Sunday, July 19, 2015 - link
Kindly provide a link to a single DX12 game that is "out now".
In every single review of the GTX 980 Ti there is this slide of the DX12 feature set that the GTX 980 Ti supports, and in that slide in all the reviews "Async Compute" is right there sitting in the open, so I'm not really sure what you mean by "Only AMD has this IP"!
I'd strongly recommend that you hold your horses till DX12 games starts to roll out, and even then, don't forget the rocky start of DX11 titles!
Regarding the comparison you're referring to, that guy is known for his obsession with mathematical calculations and synthetic benchmarking, given the differences between real-world applications and numbers based on mathematical calculations, you shouldn't be using/taking his numbers as a factual baseline for what to come.
@DoUL - Sunday, July 19, 2015 - link
My comment was intended as a reply to @akamateau.
OldSchoolKiller1977 - Sunday, July 26, 2015 - link
You are an idiotic person; wishful thinking and dreams don't make you correct. As stated, please provide a link to these so-called DX12 games and your wonderful "Fury X just CRUSHES ALL nVIDIA" statement.
Michael Bay - Sunday, July 12, 2015 - link
As long as there is separate RAM in PCs, the memory argument is moot, as contents are still copied and executed on in two places.
akamateau - Tuesday, July 14, 2015 - link
Negative. Once graphic data is processed and sent to the shaders it next goes to VRAM, or video RAM.
System RAM is what the CPU uses to process object draws. Once the objects are in the GPU pipes, system RAM is irrelevant.
In fact that is one of AMD's stacked memory patents. AMD will be putting HBM on APUs to not only act as CPU cache but as HBM video RAM as well. They have patents for programmable HBM using FPGAs and reconfigurable HBM cache memory as well.
Stacked memory HBM can also be on the cpu package as a replacement for system ram. Can you imagine how your system would fly with 8-16gb of HBM instead of system ram?
akamateau - Tuesday, July 14, 2015 - link
Radeon 290x is 33% faster than 980 Ti with DX12 and Mantle. It is equal to Titan X.
http://wccftech.com/amd-r9-290x-fast-titan-dx12-en...
Sefem - Wednesday, July 15, 2015 - link
You should stop reading wccftech.com, this site is full of sh1t! You also made an error, because they are comparing the 290x to the 980, not the Ti!
Asd :D I'm still laughing... those morons cited PCper's numbers as fps, they probably made that assumption since they are 2-digit numbers, but that's because PCper shows numbers in millions!!! Look at that http://www.pcper.com/files/imagecache/article_max_...
wccftech.com also compares the 290x on Mantle with the 980 on DX12, probably for an apples to apples comparison ;). The fun continues if you read Futuremark's note on this particular benchmark, which essentially says something pretty obvious: the number of draw calls doesn't reflect actual performance and thus shouldn't be used to compare GPUs.
http://a.disquscdn.com/uploads/mediaembed/images/1...
Finally I think there's something wrong with PC world's results since NVIDIA should deliver more draw calls than AMD on DX11.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
They told us Fury X was 20% and more faster, they lied then, too.
Now amd fanboys need DX12 as a lying tool.
Failure requires lying, for fanboys.
Drumsticks - Friday, July 10, 2015 - link
Man, auto correct plus an early morning post is hard. I meant "do you expect more optimized drivers to cause the Fury to leap further ahead of the 980, or the Fury X to catch up to the 980 Ti" haha. My bad.
My initial impression on that assessment would be yes, but I'm not an expert so I was wondering how many people would like to weigh in.
Samus - Friday, July 10, 2015 - link
Fiji has a lot more room for driver improvement and optimization than maxwell, which is quite well optimized by now. I'd expect the fury x to tie the 980ti in the near future, especially in dx12 games. But nvidia will probably have their new architecture ready by then.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
So, Nvidia is faster, and has been for many months, and still is faster, but a year or two into the future when amd finally has DX12 drivers and there are actually one or two DX12 games, why then, amd will have a card....
MY GOD HOW PATHETIC. I mean it sounded so good, you massaging their incompetence and utter loss.
evolucion8 - Friday, July 17, 2015 - link
Your continuous AMD bashing is more pathetic. Check the performance numbers of the GTX 680 when it was launched and check where it stands now? Do the same thing with the GTX 780 and then with the GTX 970, then talk.
CiccioB - Monday, July 13, 2015 - link
That is another confirmation that AMD GCN doesn't scale well. That problem was already seen with Hawaii, but Tahiti also showed its inefficiency with respect to smaller GPUs like Pitcairn.
Nvidia GPUs scale almost linearly with respect to the resources integrated into the chip.
This has been a problem for AMD up to now, but it would be worse with new PP, as if no changes to solve this are introduced, nvidia could enlarge its gap with respect to AMD performances when they both can more than double the number of resources on the same die area.
Sdriver - Wednesday, July 15, 2015 - link
This resource reduction just means that AMD's performance bottleneck is somewhere else in the card. We have to see that this kind of reduction is not made to purposely slow down a card but to reduce costs or to utilize chips which didn't pass all the tests to become an X model. AMD is known to do that, since their weird but very functional 3-core Phenoms. Also this means that if they can work on the real bottleneck, they will be able to make a stronger card with much fewer resources. Who remembers the HD 4770?...
akamateau - Tuesday, July 14, 2015 - link
@ Ryan Smith
This review is actually a BIG LIE.
ANAND is hiding the DX12 results that show 390x outperforming GTX 980Ti by 33%+, Fury outperforming 980ti by almost 50% and Titan X by almost 20%.
Figures do not lie. BUT LIARS FIGURE.
Draw calls are the best metric we have right now to compare AMD Radeon to nVidia ON A LEVEL PLAYING FIELD.
You cannot render an object before you draw it!
I dare you to run the 3DMark API Overhead Feature Test on Fury and show how Mantle and DX12 turn nVidia silicon into RUBBISH.
Radeon 290x CRUSHES 980Ti by 33% and is just a bit better than Titan X.
www dot eteknix.com/amd-r9-290x-goes-head-to-head-with-titan-x-with-dx12/
"AMD R9 290X As Fast As Titan X in DX12 Enabled 3DMark – 33% Faster Than GTX 980"
www dot wccftech dot com/amd-r9-290x-fast-titan-dx12-enabled-3dmark-33-faster-gtx-980/
Sefem - Wednesday, July 15, 2015 - link
"Draw calls are the best metric we have right now to compare AMD Radeon to nVidia ON A LEVEL PLAYING FIELD."Well, lets just for a moment consider this as true (and you should try to explain why :D )
Looking at draw calls, a GTX 980 should perform 2.5x faster than a 290X in DX11 (respectively 2.62M vs 1.05M draw calls), and even a GTX 960 would be 2.37x faster than the above-mentioned 290X (respectively 2.49M vs 1.05M draw calls) :)
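That reductio is easy to check against the cited figures (draw-call numbers as quoted in this thread, not independently verified):

# If draw calls equalled gaming performance, these ratios would have to show up in real games.
calls = {"GTX 980": 2.62e6, "GTX 960": 2.49e6, "R9 290X": 1.05e6}
for card in ("GTX 980", "GTX 960"):
    print(f"{card} vs R9 290X: {calls[card] / calls['R9 290X']:.2f}x the DX11 draw calls")

No one would claim either card is actually 2.4-2.5x faster than a 290X in games, which is the point.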
D. Lister - Friday, July 17, 2015 - link
Performing minor optimizations, on an API that isn't even out yet, to give themselves the appearance of a theoretical advantage in some arbitrary GPU function, as a desperate attempt to keep themselves relevant, is so very AMD (their motto should be, "we will take your money now, and give you its worth... later..., maybe.")
Meanwhile people at NV are optimizing for the API that is currently actually being used to make games, and raising their stock value and market share while they're at it.
Why wouldn't AMD optimize for DX11, and instead do what it's doing? Because DX11 is a mature API, so any further improvements would be small, yet expensive, while DX12 isn't even out yet, so it would be comparatively cheaper to get bigger gains, and AMD is seriously low on funds.
Realistically, proper DX12 games are still 2-3 years away. By that time AMD probably wouldn't even be around anymore.
Hence, in conclusion, whatever DX12 performance the Fury trio (or AMD in general) claims, means absolutely nothing at this point.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
Thank GOD for nvidia or amd would have this priced so sky high no one could afford it.
Instead of crazy high scalping greedy pricing, amd only greeded up on price/perf the tiny bit it could since it can't beat nvidia, who saved our wallets again!
THANK YOU NVIDIA ! YOUR COMPETITION HAS KEPT THE GREEDY RED TEAM FROM EXHORBITANT OVERPRICING LIKE THEY DID ON THEIR 290 SERIES !
f0d - Friday, July 10, 2015 - link
i wasn't really impressed with the fury-x at its price point and performance
this normal fury seems a bit better at its price point than the fury-x does
as i write this the information on overclocking wasn't finished - i sure hope the fury overclocks much better than the fury-x did, because that was a massive letdown when it came to overclocking. when nvidia can get some crazy high overclocks with maxwell it kinda makes the fury line seem not as good with the meager overclocks the fury-x had
hopefully fury (non x) overclocks like a beast like the nvidia cards do
noladixiebeer - Friday, July 10, 2015 - link
AMD haven't unlocked the voltage yet on Fury X. Hopefully, they will unlock the voltage cap soon, so the Fury X should be able to overclock better. Better than the 980ti? We'll see, but Fury X still has lots of untapped resources.
Chaser - Saturday, July 11, 2015 - link
Don't hold your breath. There is very little overhead in Fiji. That's clearly been divulged. As the article states, Maxwell is very efficient and has a good deal of room for partners to indulge themselves. Especially the Ti.
chrnochime - Friday, July 10, 2015 - link
The WC for the X makes up ~half of the price increase from the non-X. For someone who's going to do a moderate OC and doesn't want to bother doing a WC conversion, the X is a good choice, even over a Ti.
cmdrdredd - Friday, July 10, 2015 - link
no it's not...the 980ti bests it handily. It's not a good choice at all when 980ti can overclock as well and many coolers have 0rpm fan modes for when it's at idle or very low usage.
akamateau - Tuesday, July 14, 2015 - link
You haven't seen the DX12 Benchmarks yet. Anand has been keeping them from you. Once you see how much Radeon crushes nVidia you would never buy green again.
nVidia silicon is RUBBISH with DX12 and Mantle. Radeon 290x is 33% faster than GTX 980Ti.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
sefem already told you..." "Draw calls are the best metric we have right now to compare AMD Radeon to nVidia ON A LEVEL PLAYING FIELD."
Well, lets just for a moment consider this as true (and you should try to explain why :D )
Looking at draw calls a GTX 980 should perform 2.5x faster than a 290X in DX11 (respectively 2.62M vs 1.05M draw calls) and even a GTX 960 would be 2.37x faster than the over mentioned 290X (respectively 2.49M vs 1.05M draw calls) :) "
Now go back to stroking your amd spider platform.
mickulty - Friday, July 10, 2015 - link
Looks fantastic! Definitely getting one of these once the stock is there.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
Yes, paper launch for the r9 390X ... newegg is dry as a bone, and just 15 reviews with zero stock; only sapphire had about 10 cards to sell, otherwise NO STOCK AT NEWEGG AT ALL.
it's july 16th and the r9 390x is vapor
figus77 - Monday, July 20, 2015 - link
Got a Sapphire Fury Tri-X (non OC version) on 16/7 in Italy... probably it's a newegg problem... and it really is a good card; with catalyst 15.7 i got very nice results... With my system (8320, 16GB 1600MHz), in Tomb Raider 2560x1440 all maxed out with TressFX on, FPS MIN: 58.0 - MED: 75.3 - MAX: 94.0
Really good results. Witcher 3 runs stable between 45 to 50 fps at ultra settings in 2560x1440, and the card is really silent, you can't hear anything even after some long time playing.
Jtaylor1986 - Friday, July 10, 2015 - link
Almost makes you wonder if AMD should have just designed the card with 54 compute units and would have had a winner on its hands. Fury X seems to be somewhat unbalanced in terms of its hardware configuration.
Asomething - Friday, July 10, 2015 - link
This imbalance comes from gcn's limitations, amd tried to compensate with the extra shaders.
Ranger101 - Friday, July 10, 2015 - link
Another quality Gpu review from Anandtech, in addition to being so early. Best of both worlds.
jann5s - Friday, July 10, 2015 - link
The sapphire Tri-X cooling solution performs impressively under load. I think this is a consequence of the abysmal configuration forced on videocards by the ATX standard. The sapphire card can exhaust the hot air freely because of the short PCB, which proves we could use a replacement for ATX (or shorter PCBs).
Ian Cutress - Friday, July 10, 2015 - link
It would be interesting to see the effect of having 2 or 3 cards in one system using that paradigm for sure.
jann5s - Friday, July 10, 2015 - link
interesting for sure, any chance sapphire will send another card?
jann5s - Friday, July 10, 2015 - link
speaking of which, what happened to BTX?
nightbringer57 - Friday, July 10, 2015 - link
Intel kept it in stock for a while but it didn't sell. So the management decided to get rid of it, gave it away to a few colleagues (Dell, HP, many OEMs used BTX for quite a while, both because it was a good user lock-down solution and because the inconveniences of BTX didn't matter in OEM computers, while the advantages were still there) and no one ever heard of it on the retail market again?
nightbringer57 - Friday, July 10, 2015 - link
Damn those not-editable comments... I forgot to add: with the switch from the Netburst/Prescott architecture to Conroe (and its followers), CPU cooling became much less of a hassle for mainstream models, so Intel did not have anything left to gain from the effort put into BTX.
xenol - Friday, July 10, 2015 - link
It survived in OEMs. I remember cracking open Dell computers in the latter half of the 2000s and finding out they were BTX.
yuhong - Friday, July 10, 2015 - link
I wonder if a BTX2 standard that fixes the problems of original BTX is a good idea.
onewingedangel - Friday, July 10, 2015 - link
With the introduction of HBM, perhaps it's time to move to socketed GPUs.
It seems ridiculous for the industry standard spec to devote so much space to the comparatively low-power CPU whilst the high-power GPU has to fit within the confines of (multiple) pci-e expansion slots.
Is it not time to move beyond the confines of ATX?
DanNeely - Friday, July 10, 2015 - link
Even with the smaller PCB footprint allowed by HBM, filling up the area currently taken by expansion cards would only give you room for a single GPU + support components on an mATX sized board (most of the space between the PCIe slots and the edge of the mobo is used for other stuff that would need to be kept, not replaced with GPU bits); and the tower cooler on top of it would be a major obstruction for any non-GPU PCIe cards you might want to put into the system.
soccerballtux - Friday, July 10, 2015 - link
man, the convenience of the socketed GPU is great, but just think of how much power we could have if it had its own dedicated card!
meacupla - Friday, July 10, 2015 - link
The clever design trend, or at least what I think is clever, is where the GPU+CPU heatsinks are connected together, so that, instead of many smaller heatsinks trying to cool one chip each, you can have one giant heatsink doing all the work, which can result in less space, as opposed to volume, being occupied by the heatsink.You can see this sort of design on high end gaming laptops, Mac Pro, and custom water cooling builds. The only catch is, they're all expensive. Laptops and Mac Pro are, pretty much, completely proprietary, while custom water cooling requires time and effort.
If all ATX mobos and GPUs had their core and heatsink mounting holes in the exact same spot, it would be much easier to design a 'universal multi-core heatsink' that you could just attach to everything that needs it.
Peichen - Saturday, July 11, 2015 - link
That's quite a good idea. With heat-pipes, distance doesn't really matter, so if there were a CPU heatsink that could extend 4x 8mm/10mm heatpipes over the videocard to cool the GPU, it would be far quieter than the 3x 90mm fan cooler on videocards now.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
330 watts transferred to the low lying motherboard, with PINS attached to amd's core failure next...Slap that monster heat onto the motherboard, then you can have a giant green plastic enclosure like Dell towers to try to move that heat outside the case... oh, plus a whole 'nother giant VRM setup on the motherboard... yeah they sure will be doing that soon ... just lay down that extra 50 bucks on every motherboard with some 6X VRM's just incase amd fanboy decides he wants to buy the megawatter amd rebranded chip...
Yep, NOT HAPPENING !
Goty - Friday, July 10, 2015 - link
Can you imagine the hassle upgrades would be with having to deal with two sockets instead of one?
Oxford Guy - Saturday, July 11, 2015 - link
Not if the GPU socket standard is universal and backward compatible like PCI-E. It's only if companies get to make incompatible/proprietary sockets that that would be an issue.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
Yeah, let's put an additional 300 watts inside a socket laying flat on the motherboard - we can have a huge tube to flow the melting heat outside the case...Yep, that gigantic 8.9B trans core die, slap some pins on it... amd STILL loves pinned sockets...
Yeah, time to move to the motherboard ... ROFLMAO
I just can't believe it ... the smartest people in the world.
ant6n - Saturday, July 11, 2015 - link
I'm definitely interested to see how well these cards would do in a rotated atx Silverstone case. I have one of those, and I'm concerned about the alignment of the fins. You basically want the heat to be able to move up vertically, out the back/top of the card.
ajlueke - Friday, July 10, 2015 - link
Priced in between the GTX 980 and the Fury X, it is substantially faster than the former, and hardly any slower than the latter. Price/performance wise this card is a fantastic option if it can be found around the MSRP, or found at all.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
NO, actually if you read, ha ha, and paid attention, lol, 10% more price for only 8% more performance... so its ratio sucks compared to the NVIDIA GTX 980.
Not a good deal, not good price/perf compared to NVIDIA.
Thanks for another amd fanboy blowout
Nagorak - Friday, July 10, 2015 - link
One interesting thing from this review is looking at the performance of the older AMD cards. The improvement of the Fury vs the older cards was mentioned by Ryan Smith in the review, noting that performance hasn't improved that much. But there's a lot more to it than that. The relative performance of AMD's cards seem to have moved up a lot compared to their Nvidia competitors.Look at how the 290X stacks up against the GTX 780 in this review. It pretty much just blows it away. The 290X is performing close to the GTX 980 (which explains why the 390X which has faster memory is competitive with it). Meanwhile, the HD 7970 is now stacking up against the GTX 780.
Now, look back at the performance at the time the 290X was released: http://www.anandtech.com/show/7457/the-radeon-r9-2...
It looks like performance on AMD's GCN chips has increased significantly. Meanwhile the GTX 780's performance has at best stayed the same, but actually looks to have decreased.
Anandtech should really do a review of how performance has changed over time on these cards, because it seems the change has been pretty significant.
Nagorak - Friday, July 10, 2015 - link
I don't know, maybe it's just different benchmark settings, but the AMD cards look to be a bit more competitive with their counterparts than they were at release.
Stuka87 - Friday, July 10, 2015 - link
It's been the case with all GCN cards. AMD continues to make driver optimizations. The 7970 is significantly faster now than it was at launch. It's one advantage of them all sharing a similar architecture.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
nvidia CARDS GAIN 10-20% AND MORE over their release drivers... but that all comes faster, on game release days, and without massive breaking of prior fixes, UNLIKE AMD, who takes forever and breaks half of what it prior bandaided, and it takes a year or two or three or even EOL for that fix to come.refin3d - Friday, July 10, 2015 - link
This is exactly what I was thinking... A few months ago when the 980 was launched I recall the 290X not being able to compete with it, and now they are trading blows. Shows some good work by the driver team.
Maybe AMD's cards are like a fine wine; you have to give them time to age before they reach their maximum potential haha.
jann5s - Friday, July 10, 2015 - link
Making driver improvements is nice, and shows commitment from AMD, but it could also mean the original state of the drivers was not so good, and there was indeed a lot to improve. I hope this is not the case, but I'm not sure.
Asomething - Friday, July 10, 2015 - link
Amd drivers weren't as good, it's one of the reasons they switched to GCN in the 1st place; their drivers have gotten a lot better since those days apparently.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
Apparently not, as most games don't run even for the reviewers on GCN "release day".
The endless fantasies in amd fanboy minds though, those run, run their course, are debunked, go into schizoid mode and necromance themselves, then of course we are treated to the lying again.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
SO FOR 2 FULL YEARS AMD 290 290X 290 UBER OWNERS GOT SCREWED " by the drivers that are just as good as Nvidia's as that problem amd had was 4 years ago or more" !!???
I get it amd fanboy ... it's all you have left after the constant amd failures and whippings they've taken from nVidia - the fantasy about "amd drivers" TWO AND A HALF YEARS AFTER RELEASE.
Oxford Guy - Saturday, July 11, 2015 - link
A conspiracy theory is that Nvidia has purposefully hampered performance for Kepler.
Cellar Door - Monday, July 13, 2015 - link
Look at 780Ti - at launch 290X could not touch it. Where is the 780Ti now?!?!? - what a crappy investment was that for anyone that got one.
CiccioB - Monday, July 13, 2015 - link
You may be too excited to have noticed that in the review there's a GTX 780, not a 780Ti. Given the difference between the cards, if any improvements have been made, they are quite marginal.
It's really funny to see these sorts of myths arise from time to time without a real study of the thing. All impressions, not a single number reported as proof of anything.
Yet, continue to believe in what you want. Unfortunately for you the market doesn't really care.
Cellar Door - Monday, July 13, 2015 - link
You should check out the techpowerup review - they have a 780Ti in it. Then you will understand what you are calling a myth here. The 780Ti is positioned just behind a 290X, hahah, pretty sad to be honest.
CiccioB - Monday, July 13, 2015 - link
You can look at Anandtech reviews. The only game from that benchmark suite still in use today is Crysis 3.
Look at the changes between the 290X and the 780 (not Ti).
The two boards were on par at the 290X's presentation, and they still are on par today.
You can see the differences are the same, and we are speaking of a 1 FPS change for both GPUs. Yes, miraculous drivers. Come on, return to Earth with your fantasies.
CiccioB - Monday, July 13, 2015 - link
If you still can't understand numbers but can only understand bar colors, I can sum things up for you for the same game (Crysis 3), also from the techpowerup review, at 2560x1440 (the resolution for this kind of card):
At 780Ti presentation (Nov 2013)
780ti 27
290X 26.3
At Fury X presentation (so, last week):
780ti 29.3
290X 29.4
So the 290X went from -0.7fps to +0.1fps... WOW! That is a miracle!!!!!
Only a fanboy would think of it that way, or someone who does not understand benchmark numbers, can't interpret them, and can only see bar lengths/relative positions.
You see a similar trend with Battlefield 3, where the 290X went from -3fps to -0.3fps. And both cards have raised their FPS.
So, yes, AMD recovered a fraction of nothing and nvidia didn't cripple anything.
You have also not noted that in the meantime AMD changed the 290X policy on BIOS and custom cards, so all cards have become "uber" and better custom coolers allowed the card not to throttle. So the advantage of this performance is reserved for those that have bought these cards, not for those that have bought the "not sampled" reference ones (can you remember the issue about those cards in the retail market that had quite different performance with respect to those sent to reviewers?). Yes, another miracle...
These are the myths I like reading about, which only a fanboy can sustain. These are the types of arguments that let you clearly spot the fanboy in the group.
CiccioB - Wednesday, July 15, 2015 - link
So, where are the facts sustaining your myth? I can't see them and it seems you can't provide them either.
Yes, the 780Ti was a crappy investment... unlike the 290X, with stuttering all over the place that still continues today in DX9 games.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
Thank you CiccioB, I was wondering if another sane person was here.
loguerto - Sunday, July 12, 2015 - link
This is the primary reason why I buy AMD: because I am not willing to change my hardware every year. I bought a 290x in 2013 and in that period it was neck and neck with the 780 ti; after nearly two years the 290x destroys the 780 ti and constantly beats even the 970, which at its release was ahead. The 970's performance stayed where it was, meanwhile the 290x continued improving. I am so glad I bought the 290x.
CiccioB - Monday, July 13, 2015 - link
You are a poor man with no clue about what you are buying. Your justifications for buying the cheaper card on the market are quite pitiful.
I bet you can't report a single case where Kepler ran faster before than it does today. Nor can you evaluate how much these "miraculous" AMD drivers have improved your gaming experience.
Can you? Let's see these numbers.
If not, well, just don't go on with this kind of talk, because it really pictures you (all AMD fanboys) as more ridiculous than you already are.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
What the HELL are you babbling about ?
The 980 wasn't released THEN at your "proof link" and the 290x is winning over the 780...
WHAT FANTASY HAVE YOU CONVINCED YOURSELF OF YOU AMD FANBOY... TIME WILL NOT HEAL THE FURY AND FURY X LOSSES !
mikato - Wednesday, July 15, 2015 - link
I agree. This would make a fantastic article - and a unique critical thinking subject that Anandtech is well positioned to undertake and is known for. It would certainly generate traffic and be linked to like crazy, hint hint.
ajlueke - Friday, July 10, 2015 - link
"The R9 Fury offers between 8% and 17% better performance than the GTX 980, depending on if we’re looking at 4K or 1440p"
"I don’t believe the R9 Fury is a great 4K card"
"in a straight-up performance shootout with the GTX 980 the R9 Fury is 10% more expensive for 8%+ better performance."
"This doesn’t make either card a notably better value"
So at resolutions under 4K, which are the applications you recommend for the R9 Fury, it performs 17% better than the GTX 980 for 10% more price, and yet you conclude it is not a better value? Help me out here. It would be more accurate to say that neither card is a better value for 4K gaming, where the difference was indeed 8%. At any resolution below that, the Fury is indeed a better value.
Ryan Smith - Friday, July 10, 2015 - link
At 1440p the Fury is 8% faster for 10% more cost. From a value standpoint that's a wash.
At 4K the lead is upwards of 17%, but on an absolute basis it's a bit too slow if you're serious about 4K.
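A quick back-of-the-envelope check of the value math above, as a rough sketch assuming the $500/$550 launch MSRPs and the approximate 8%/17% deltas quoted in the thread:

```python
# Rough performance-per-dollar comparison, normalizing the GTX 980 to 1.00.
# Prices and performance deltas are the approximate figures quoted above.
gtx980_price, fury_price = 500, 550          # USD launch MSRPs (assumed)
deltas = {"1440p": 1.08, "4K": 1.17}         # R9 Fury performance relative to GTX 980

for res, rel_perf in deltas.items():
    ratio = (rel_perf / fury_price) / (1.0 / gtx980_price)
    print(f"{res}: Fury delivers {ratio:.2f}x the perf/$ of the 980")
# -> 1440p: ~0.98x (a wash), 4K: ~1.06x (slightly ahead)
```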
ajlueke - Friday, July 10, 2015 - link
Thanks for the clarification. Also, I really appreciate the inclusion of the 7970 data, as I currently run a 3.5 yr old reference version of that card.
Oxford Guy - Saturday, July 11, 2015 - link
A 2% cost difference is likely to be erased by sale pricing at various times.
darkfalz - Saturday, July 11, 2015 - link
My 980 is about 15% over stock, and it's a poor overclocker despite running cool. These cards struggle to hit 10%. I also can't go back 6 months and buy an R9 Fury. And Nvidia's next release is likely around the corner. I think they're approximately equal value - which is good for AMD fans, but it's been a long wait for them to have a card comparable to what NVIDIA enthusiasts have been enjoying for a year!
Flunk - Friday, July 10, 2015 - link
It's nice to see AMD win a segment. I'm not sure that the Fury X matters that much in the grand scheme of things, seeing that it's the same price as the better-performing GeForce 980 Ti.
The Fury seems to overclock to almost match the Fury X, making it a good enthusiast buy.
cmikeh2 - Friday, July 10, 2015 - link
If you're willing to overclock though, you can get a good 15+ percent out of the 980 and pretty much bring it even with an OCed Fury for a little less money.
looncraz - Friday, July 10, 2015 - link
But as soon as voltage control is unlocked the Fury will probably eke out at least another 100MHz or more, which will put it healthily out of reach of the 980. And, once a few more driver issues (such as GTA V performance) are resolved, the performance of the Fury will improve even more.
HBM has a different performance profile, and AMD is still accommodating that. And, of course, if you turn the nVidia image quality up to AMD levels, nVidia loses a few extra percent of performance.
The GTX 980 vs R9 Fury question is easy to answer (until a 980 price drop). The Fury X vs 980 Ti question is slightly more difficult (but the answer tends to go the other way, the AIO cooler being the Fury X's main draw).
D. Lister - Saturday, July 11, 2015 - link
"if you turn the nVidia image quality up to AMD levels, nVidia loses a few extra percent of performance."Surely we have some proof to go along with that allegation... ?
silverblue - Saturday, July 11, 2015 - link
I've heard the same thing, although I believe it was concerning the lack of anisotropic filtering on the NVIDIA side. However, anisotropic filtering is very cheap nowadays as far as I'm aware, so it's not really going to shake things up much whether it's on OR off, though image quality does improve noticeably.
D. Lister - Saturday, July 11, 2015 - link
Err...http://international.download.nvidia.com/webassets...
You mean to say that it doesn't work like it is supposed to?
silverblue - Monday, July 13, 2015 - link
I'm not sure what you're getting at. In any case, I was trying to debunk the myth that turning off AF makes a real difference to performance.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
no, there's no proof, the proof of course is inside the raging gourd of the amd fanboy, never be unlocked by merely sane mortal beings.bill.rookard - Friday, July 10, 2015 - link
Impressive results, especially by the Sapphire card. The thing I'm glad to see is that it's such a -quiet- card overall. That bodes well for some of the next releases (I'm dying to see the results of the Nano) and bodes well for AMD overall.
Two things I'd like to see:
1) HBM on APU. Even if it were only 1GB or 2GB with an appropriate interface (imagine keeping the 4096-bit interface and either dual- or quad-pumping the bus?). The close proximity to the die and the high speed of the DRAM would make for a very, VERY interesting graphics solution.
2) One would expect that with the cut in resources, there would have been more of a loss in performance. On average, you see a 7-8% drop in speed after a 13-14% cut in hardware resources and a slight drop in clock speeds. So - where does that leave the bottleneck in the card? It's possible that something is a bit lopsided internally (it does however perform exceptionally well), so it would be very interesting to tease out the differences to see what's going on inside the card.
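A rough scaling check of that observation, as a sketch - the unit counts and reference clocks below are the published Fiji specs, and the ~7% figure is the average gap reported in this review:

```python
# If performance scaled purely with shader count x clock, how big a drop
# would the Fury's cut-down configuration predict versus the Fury X?
fury_x = {"shaders": 4096, "clock_mhz": 1050}
fury   = {"shaders": 3584, "clock_mhz": 1000}

linear = (fury["shaders"] * fury["clock_mhz"]) / (fury_x["shaders"] * fury_x["clock_mhz"])
print(f"Linear shader*clock scaling predicts a {1 - linear:.1%} deficit")   # ~16.7%
print("Observed average gap is only ~7%, so the shader array isn't the main bottleneck")
```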
mr_tawan - Friday, July 10, 2015 - link
It would be very interesting to run HBM as the system RAM instead of DDR on an APU. 4GB (for HBM1) wouldn't be a lot and may choke on heavy workloads, but for casual users (and tablet use) that's probably enough.
It would also allow machines smaller than the NUC form factor, I think.
looncraz - Friday, July 10, 2015 - link
HBM wouldn't be terribly well suited for system RAM due to its comparatively low small-read performance and physical form factor. On an APU, for example, it would probably be best used as a single HBM[2] chip on a 1024-bit bus. Probably just 1 or 2GB, largely dedicated to graphics. That is 128GB/s with HBM1 (but 1GB max), 256GB/s with HBM2 (with, IIRC, 4GB max) - the quick bandwidth math is sketched after this comment.
For a SoC, though, such as the NUC form factor, as you mentioned, it is potentially a game changer only AMD can deliver on x86. The problem is that the net profit margins in that category are quite small, and AMD needs to be chasing higher net margin markets (net margin being a simple result of market volume, share, and product margin).
I'd love to see it, though, for laptops. And with Apple and AMD being friendly, we may end up seeing it. As well as probably seeing it find its way into the next generation of consoles.
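A sanity check on those per-stack figures - the per-pin data rates below are the commonly cited HBM1/HBM2 numbers, assumed here rather than taken from the comment:

```python
# Peak bandwidth of a single HBM stack on a 1024-bit bus:
# bandwidth (GB/s) = (bus width in bits / 8) * per-pin data rate (Gbps)
def stack_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(stack_bandwidth_gbs(1024, 1.0))  # HBM1 at 1 Gbps/pin -> 128.0 GB/s
print(stack_bandwidth_gbs(1024, 2.0))  # HBM2 at 2 Gbps/pin -> 256.0 GB/s
```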
Oxford Guy - Saturday, July 11, 2015 - link
Given the high prices Intel is charging for its NUC systems, are you really certain it's not profitable? Perhaps sales aren't good because they're overpriced.
Stuka87 - Friday, July 10, 2015 - link
The only way to keep the 4096-bit bus would be to use four HBM chips, and I highly doubt this would be the case. I am thinking an APU would use either a single HBM chip, or possibly two. The performance boost would still be huge.
ajlueke - Friday, July 10, 2015 - link
1) I can't imagine we won't see this. APU scaling with RAM speed was pretty well documented; I wouldn't be surprised if there were socket AM4 motherboards that incorporated some amount of HBM directly. Also, AMD performs best against NVidia at 4K, suggesting that Maxwell may be running into a memory bandwidth bottleneck itself. It will be interesting to see how Pascal performs when you couple a die-shrink with the AMD-developed HBM2.
2) It does suggest that Fiji derives far more benefit from faster clocks versus more resources. That makes the locked-down voltages for the Fury X even more glaring. You supply a card that is massively overpowered, with 500W of heat dissipation, but no way to increase voltages to really push the clock speed? I hope we get custom BIOSes for that card soon.
silverblue - Saturday, July 11, 2015 - link
As regards APU scaling, it's a tough one. More bandwidth is good; however, scaling drops above 2133MHz, which shows you'd need more hardware to consume it. Would you put in more shaders, or ROPs? I'd go for the latter - don't APUs usually top out at 8 ROPs? Sure, add in more bandwidth, but at the very least, increase how much the APU can actually draw. The HD 4850 had 32 TMUs (like the 7850K) but 16 ROPs, which is double what's on offer here.
I keep seeing complaints about AMD's ROP count, so perhaps there's some merit to them.
Nagorak - Sunday, July 12, 2015 - link
It's hard to say what the bottleneck is with memory scaling on APUs. It could be something related to the memory controller built into the CPU rather than the GPU not having the resources to benefit.
silverblue - Monday, July 13, 2015 - link
Isn't there a 256-bit Radeon Memory Bus link between memory and the GPU? Just a question.
Stuka87 - Friday, July 10, 2015 - link
Is it just me, or is the 290X faster now than it used to be when compared to the 980? Perhaps the 15.7 drivers offered some more performance?
refin3d - Friday, July 10, 2015 - link
I think they really have. Ryan mentioned it in the review, I think on the test setup page and the one after. I just installed the 15.7 drivers for my 290x and haven't had a chance to properly test but this looks very promising.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
I see, over 2 years after release, there's "a very promising amd driver performance upgrade !"
I guess undie stripes went out with poopless bowel movements too, check the reviews.
rocky12345 - Friday, October 16, 2015 - link
Why are you being such a dick, bubbly? Even when everyone can clearly see that the last couple of driver releases from amd have perked up the 290/290x as well as the 390/390x cards, you throw out cheap shots at anyone that mentions that or anything else about amd.
Yes, there is some fanboyism coming from both sides, that is to be expected, but you sir are just here to be a jerk to anyone that happens to like AMD. I'm not a fanboy for either side; I have Nvidia and AMD cards in my gaming rigs - an AMD Sapphire R9 390X Tri-X 8GB and a Nvidia GeForce GTX 980 4GB in the other system, both on Intel i7's. Both cards perform very close to each other, but I find the 390x seems to be smoother in a lot of games at 1440p max settings than the 980, not sure why, maybe the extra memory.
Anyways, just chill. People will be people; they get excited and say stupid sh*t in the heat of the moment. This is not a personal attack on anyone, I'm just tired of the bickering. Both companies rock when it comes to graphics cards.
mdriftmeyer - Friday, July 10, 2015 - link
Not impressed benching Gameworks-optimized titles.
Oxford Guy - Saturday, July 11, 2015 - link
Is that hair nonsense turned on?
FlushedBubblyJock - Wednesday, July 15, 2015 - link
I agree, awesome realism in games sucks. I like blurry and low-level cartoonish only.
Why have things look true to life ?
This is amd gaming.
Shadowmaster625 - Friday, July 10, 2015 - link
60 fewer watts when your system is pulling over 300 watts doesn't really mean anything. What matters is how quiet that Sapphire card runs. That's exactly what I would buy, hands down.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
YEP I TOTALLY AGREE - that's why I ALWAYS spend THAT EXTRA $60 BUCKS OVER $300 ON THE NVIDIA CARD !
CAUSE who cares about 60 when yer over 300 ....
EXCELLENT ! Thank you amd fanboy !
mikato - Wednesday, July 15, 2015 - link
Give it a rest will you?
mr_tawan - Friday, July 10, 2015 - link
If we put a water cooler on the Fury, is it possible the card will perform as well as the Fury X? I wonder.
Makaveli - Friday, July 10, 2015 - link
Why would it perform the same? This Fury is not throttling due to heat; it has less performance due to having less hardware.
tviceman - Friday, July 10, 2015 - link
So it looks like an OC'd GTX 980 @ 1440p is going to be faster, on average, than an OC'd Fury @ 1440p.....
refin3d - Friday, July 10, 2015 - link
Potentially; we don't really know exactly what is going to happen with the cards being voltage-locked.
tviceman - Friday, July 10, 2015 - link
That's easy to deduce: ~8% more OC performance for 150-200 more watts of power consumption.
ToTTenTranz - Friday, July 10, 2015 - link
No Hawaii in the compute tests?
Ryan Smith - Friday, July 10, 2015 - link
Whoops. Forgot to include them when generating the graphs. I'll go fill those in.
titan13131 - Friday, July 10, 2015 - link
It would be cool if damaged chips weren't all disabled to the same level, i.e. only the faulty parts were disabled on each chip. Then amd could charge a little more for something they have already produced, and we would have access to cards closer to the performance of the Fury X (with a little overclocking) for less. Assuming the ref cooler can handle the extra heat.
Asomething - Friday, July 10, 2015 - link
Then they would have to market, package and ship the extra cards as well as optimize for another chip. It used to be you could unlock salvage chips if the damage was not too bad (some fully unlocked into the full chips in cases where chips were cut simply to meet the demand for the cut parts), but amd and nvidia now laser off the disabled parts to stop that.
Kevin G - Friday, July 10, 2015 - link
This really highlights the idea that AMD should have focused on increasing ROP count over the massive number of shaders. HBM not only increased bandwidth, but delta color compression increased effective bandwidth as well; yet AMD didn't alter the number of ROPs in the design.
extide - Friday, July 10, 2015 - link
It's not the ROPs. Look at the 3DMark tests; it tops the pixel throughput charts.
What it needs is more geometry power, not ROPs. Look at the tessellation results, the Furys can't even keep up with a GTX 780. THAT is their issue, they need more geometry horsepower.
Asomething - Friday, July 10, 2015 - link
AMD knows this and already made some decent leaps from GCN 1.1 to GCN 1.2, but nvidia is still way ahead on geometry.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
Mr know-it-all sure read well..... didn't you, mr wise meister...
Thank god you are here with your gigantic brain
QUOTE: " This indicates that at least for the purposes of the 3DMark test, the R9 Fury series is ROP bottlenecked "
YEP SURE AIN'T THOSE ROPs !
64 just like the FURY X, meaning the Fury X is MORE ROP BOTTLENECKED !
Thanks for playing amd fanboy... it's been so beneficial to have your ultimate knowledge and experience to reign in rumors and place the facts on the table.
tviceman - Friday, July 10, 2015 - link
Hey Ryan -
Since you've loaded Fury X OC numbers into the Bench database, is there any chance you can load Fury OC, 980 Ti OC, and 980 OC numbers as well? Overclocking cards, but choosing not to compare OC'd results against OC'd results of other cards, makes it tedious to flip back and forth. Basically I want to see your 980 OC'd results vs. Fury OC'd results, and going all the way back to the 980 launch isn't ideal since there have been driver improvements and performance-improving game patches as well.
deppman - Friday, July 10, 2015 - link
I agree. The common practice of showing factory OC'd cards against reference designs is misleading. Showing an EVGA factory-clocked 980 versus a Sapphire factory-clocked Fury is a more realistic comparison. A factory OC'd 980 probably matches or beats the Fury at the same price. But I can't determine that here because there is no comparison :(
Ryan Smith - Friday, July 10, 2015 - link
The Fury OC results actually aren't supposed to be live. That's an error.
The problem with putting OC results in Bench is that we don't keep them updated. So they would quickly grow stale and not be valid results.
darkfalz - Saturday, July 11, 2015 - link
The main problem with putting OC results in Bench is YMMV. I've yet to get a card that overclocks as well as the reviewed sample (granted, you tend to find faults weeks or months after, whereas in a review they only need to be "stable" for those few days or a week during the review). Unigine Valley at Ultra / 4K DSR is a really good test for me.
deppman - Sunday, July 12, 2015 - link
"The main problem with putting OC results in bench is YMMV"
This is true if you personally overclock the card. However, the Sapphire is "factory clocked" higher and is guaranteed and warrantied at that clock - and it also performs better as a result. The EVGA GTX 980 Superclocked ACX 2.0 with a factory clock of 1266/1367MHz (about 10% above reference) is also guaranteed and warrantied at those clocks.
Including a card like this in the benchmarks would provide a real-world comparison to the Sapphire. At the very least it should at least be *mentioned* in the OC section or conclusions. Something like "Of course, the real elephants in the room are the GTX 980's from board partners like EVGA which are clocked 10% or more above reference. Maxwell has proven to scale very well with clock speed, and these should provide comparable or better performance than the Sapphire for nearly $100 less at current prices."
xenol - Friday, July 10, 2015 - link
I felt like AMD should've released this at $500 and put NVIDIA in a bit of a bind in that market sector.
DigitalFreak - Friday, July 10, 2015 - link
AMD isn't in a position where they can afford to lose money on these cards.
extide - Friday, July 10, 2015 - link
I HIGHLY doubt they would lose money at $500; however, they definitely do want to get every last penny they can.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
They can't even supply the dang cores properly, that's why you ONLY SEE SAPPHIRE.
Hell, you can't even buy them (Fury X) at Newegg.
So profit would require actual valid production instead of paper-launched vapor.
bill.rookard - Friday, July 10, 2015 - link
I'm betting that they can, but the problem is not the 980, which they could match, but, as noted in the review, the Fury X. It's a similar issue with the 980 Ti vs Titan: the 980 Ti is 60% of the price but 90+% of the performance. At that point, the ONLY reason to get a Titan is if you need it for FP64 compute.
In this case, if they took it to the $500 price point, you'd be in the same boat: 75% of the price for 93% of the performance, and it would really cannibalize the Fury X. Keeping it at $550 makes it 85% of the price for 93% of the performance. And, since it does outperform the 980, it should be a bit more expensive.
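For reference, a rough sketch of how those price/performance ratios work out, assuming the $500/$550/$650 price points and the ~93% figure cited above:

```python
# Fury pricing scenarios relative to the Fury X ($650), per the figures above.
fury_x_price = 650
fury_rel_perf = 0.93   # ~93% of Fury X performance

for fury_price in (500, 550):
    price_ratio = fury_price / fury_x_price
    print(f"${fury_price}: {price_ratio:.0%} of the price for {fury_rel_perf:.0%} of the performance")
# -> $500: ~77% of the price (the "75%" cited above); $550: ~85%
```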
xenol - Friday, July 10, 2015 - link
Honestly though, I think AMD should be undercutting themselves a bit to win more of the market share. If I compare the GTX 980 and the R9 Fury, the only problem I see is I take a 10% or so cut to 1080p performance. If the R9 Fury is the same price, then I can forgive the higher power consumption in exchange for better performance at the same price point.
I don't think a lot of people consider the top-tier cards anyway (not including the Titan, which is ridiculous in and of itself for most consumers).
FlushedBubblyJock - Wednesday, July 15, 2015 - link
Most people are at 1080P, that's WHY NVIDIA optimizes for it.
But, the fickle insanity of all the reviewers requires unplayable "compromises" on image quality and frame rates, and the stupid as dog doo fanboy base goes along for the autistic ride.
NVIDIA knows better, there are sane persons such as myself - when spending multiple hundreds on a video card I don't want to "cut down on settings" and fiddle with the dang thing day and night then test for "playability".
I don't want to waste my life screwing around.
I game on a 980Ti at 1920x1200 and it's BARELY ENOUGH to not worry about any settings I want to use, in any game whatsoever - which is of course the most enjoyable thing !
No crashing, no haggling, no constant attempts at optimizing, no cutting down eye candy, no limiting !
Now maybe for someone who wants to hassle with 2 or 3 or 4 cards, a gaggle of cables, likely custom water cooling, a monstrous noise constantly going, then a huge 3k or triplet monitors - then THEY STILL HAVE TO SHUT OFF EYE CANDY AND SETTINGS TO GET A DECENT PLAYABLE FRAME RATE.... THOUSANDS UPON THOUSANDS OF DOLLARS YET INADEQUATE.
Sorry, not this gamer. No way, not ever.
extide - Friday, July 10, 2015 - link
Except the Titan X has crappy FP64 compute, so even that isn't a reason.
Hxx - Friday, July 10, 2015 - link
Completely agree, but value-wise it does not beat the 980 unfortunately, which can be had for much less than $500 once you flip the crap Batman game.
Archie2085 - Friday, July 10, 2015 - link
Hey Ryan.. Wondering how much the drop in power consumption is when setting the fan to max speed. Is leakage power significant when making the card run cooler, say 65 degrees or so? If it can be tested.
Thanks
HighTech4US - Friday, July 10, 2015 - link
What a worthless review.
You use the reference GTX 980 and don't overclock it or even use one of the many, many factory-overclocked models available, yet there in all your charts is the Sapphire Tri-X R9 Fury OC.
I thought the previous review was brown-nosing AMD, but this one even outdoes that one.
HighTech4US - Friday, July 10, 2015 - link
EVGA 04G-P4-2983-KR GeForce GTX 980 Superclocked ACX 2.0 4GB 256-Bit GDDR5 PCI Express 3.0 x16 SLI Support Video Card
Core Clock: 1266MHz
Boost Clock: 1367MHz
Available today for $507.99, or $487.99 after a $20 rebate card.
http://www.newegg.com/Product/Product.aspx?Item=N8...
K_Space - Friday, July 10, 2015 - link
Nonsense. I'm not sure what review you glossed over, but this was a fantastic read. I'm ogling the Sapphire's noise results, quite phenomenal.
RE: OC versus stock cards. This seems to be a policy/protocol of many review sites and not just Anandtech. If you've been on the site long enough you'd know it doesn't apply to this review only: flick back to the 980 and 980 Ti reviews and you'll see the OC'd card under review tested against stock counterparts. It's not just Anandtech either. Hexus does exactly the same (they OC'd the already overclocked phenomenal 980 Ti G1 from Gigabyte versus stock models; it trashed the 295X2). It's annoying to keep flicking between them to check all the OC'd results, but to bash the whole review is unfair to say the least.
K_Space - Friday, July 10, 2015 - link
Reply from Ryan Smith (taken from the Fury X review comment section):
"Curious as to why you would not test Fury OC's against the 980TI's OC?"
As a matter of policy we never do that. While its one thing to draw conclusions about reference performance with a single card, drawing conclusions about overclocking performance with a single card is a far trickier proposition. Depending on how good/bad each card is, one could get wildly different outcomes.
If we had a few cards for each, it would be a start for getting enough data points to cancel our variance. But 1 card isn't enough.
Will Robinson - Friday, July 10, 2015 - link
LOL..cry moar fanboy.
R9 Fury defeats GTX 980... deal with that and your other aggression issues.
D. Lister - Saturday, July 11, 2015 - link
It would be "a win" for the Fury if it offered 5-10% more performance at its resolution tier (1440p) at the same price. But at 8% more performance for 10% more price, as always under similar circumstances (i.e., near parity in performance/$), it all really boils down to brand preference/recognition.darckhart - Friday, July 10, 2015 - link
Too bad it requires so much cooling, because that big-ass heatsink on a much smaller PCB is just ridiculously proportioned. Should every model come with "reinforcement" or "stabilizers" to mitigate warping over time then? Still seems ridiculous though.
xenol - Friday, July 10, 2015 - link
You do realize that these aren't reference coolers (which would be the indication of what is required) and that both companies put out graphics cards with just as big heatsinks? Hell, with the extra PCB space, they probably put more cooling on it because 10"-12" high-end GPUs are normal, so why not put that space to use? NVIDIA did this with the GTX 670. The PCB was something like 8" long, but they stuck on a 4" fan that overhung, because why not.
extide - Friday, July 10, 2015 - link
Wow, you really had to look pretty hard to find something to hate on there, didn't you?
mikato - Wednesday, July 15, 2015 - link
Well it actually cools way better because the heatsink is longer than the PCB, for obvious reasons. The shorter PCB is pretty much an advantage any way you look at it.
FriendlyUser - Friday, July 10, 2015 - link
Great product. Buy+++
IlllI - Friday, July 10, 2015 - link
I don't know about anyone else, but I'd like to see a very in-depth article about why/how nvidia GPUs are much more power efficient than AMD's. I don't know how they managed it.
xenol - Friday, July 10, 2015 - link
The biggest contributing factors in Maxwell's efficiency are:
1. Different organization of GPU controller units to execution units for better use of resources.
2. 256-bit bus (less lines to power)
3. Stuff that was done in Kepler (namely the lack of a hardware scheduler). AT's review on the GTX 680 covers a lot of this.
Oxford Guy - Saturday, July 11, 2015 - link
It will be interesting to see if that holds true for DX 12.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
Absolutely not !
AMD will, under DX12, skyrocket to the MAXXXXX in the gaming Gastrosphere, power EFF !
The evil MAN, namely intel and nvidia will no longer be able to keep amd down.
I can hardly zenly wait.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
It's only extremely important if AMD wins and NVidia has a possible weakness or failure - then of course pages of dedicated review time will be spent analyzing the "possible problem" NVidia might have... and everything humanly possible will be sworn to be done to produce a single instance of "that weakness".
Doesn't matter even if it's just theoretical due to "the configuration nvidia chose in the layout", I've seen it here over and over for many years.
But, if AMD has a glaring henious hole in performance, capability, standards, whatever it is, ignore it, claim it does not really exist, then in the exact same or very similar area, put down nvidia...
That's how it works. Get used to it.
NA1NSXR - Friday, July 10, 2015 - link
Yuck. I was thinking the 980 was going to be the odd man out after the reshuffle, but it looks like it's going to be cheaper than the Fury and easily faster once OC'ing is done.
Hxx - Friday, July 10, 2015 - link
I think they are using a GTX 980 FTW which already has a pretty hefty overclock. Mind you it can be pushed further with adequate cooling, but still they are not using a vanilla 980.
D. Lister - Saturday, July 11, 2015 - link
"they are not using a vanilla 980."
They are, as (apparently) per site policy.
http://www.anandtech.com/show/9421/the-amd-radeon-...
Notice that non-reference gpus are listed differently from reference gpus.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
Hxx now hates you, you evil nvidia fanboy.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
does Hxx = amd fanboy code for fanatic ?
There's a new Gaming Evolved Game called ULTIMATE DELUSIONS ! - pre-purchase, did you ?
nader_21007 - Friday, July 10, 2015 - link
So for whoever wants high-res gaming, AMD is the better choice.
Low-res gaming is better on Nvidia cards.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
Except when it comes to minimum frame rates and added features, non-stuttering, and especially OVERCLOCKING, then of course NVIDIA wins on ALL COUNTS.
3D
multi monitor
QUAD monitors out of the box
Frame rate limiting
adaptive v-sync
best drivers
game day drivers
GAME STREAMING ON THE FLY RECORDING
Automatic driver optimization per games for free
I mean the list is embarrassing for all amd fanboys, rather it should be, but of course, amd fanboys might look at a tinily better average fraps number and lose control of their bowels.
darckhart - Friday, July 10, 2015 - link
Only that one of the benefits of HBM was ... wait for it... being right on the same package! No need for 12" cards with all your power delivery components far away due to GDDR5 needing so much space. There's even that famous photo of Jen-Hsun holding a ridiculously SHORT video card showing off HBM. AMD can't get its heat and power numbers under control, hence it needs the big HSF for air cooling? And gosh, the fully enabled Fiji *needs* water cooling? Just because all other current video cards are always 10-12" is not a legitimate reason to make the cards just as long.
Asomething - Friday, July 10, 2015 - link
Don't know how you came to that conclusion, but ok. The card doesn't need the massive heatsinks to keep itself cool; it uses the same amount of power as an overclocked 290x, which did come in shorter 2-fan cards. It's just that AIBs are going over the top for what is literally a premium card; asus needed an excuse to use their DCU3 cooler and this is a pretty good debut. These cards take a lot of engineering to get right. Sapphire always use reference PCBs and have the cooler overhang, gigabyte always do it on shorter cards too (i have one of their Windforce 3X 270x's and it's just so unnecessarily long lol), asus even does it in the rare case their previous coolers were too long. If they needed the coolers to be so big they wouldn't have included the 0rpm fan modes, because the gpu wouldn't be able to take it due to out-of-control power consumption and leakage. It is a legitimate reason, as it gives you more space for a fin array, which means you can have slower and quieter fans, as well as the fact some people prefer the long cards as they feel the need to overcompensate for things....
FlushedBubblyJock - Wednesday, July 15, 2015 - link
that's quite the gigantic rambling set of excuses for amd's power hungry housefires... let's list:
1. good debut ! ( the giant heatsink debut apparently)
2. takes a lot of engineeering .... ( since it's a gigantic housefire, YES)
3. gigabyte always makes shorter cards ( not true but in any case it negates " lotsa engineering !"
4. 0rpm fan modes is why ! ( compensating for immense heat dissipation noise when not gaming)
5. it's RARE coolers are too long, Asus knows. ( how amd fanboy speaks for asus too long coolers is unclear)
6. so hot gpu wouldn't be able to take it ( great supporting amd fanboy, problem sounds like advantage)
7. legitimate because longer = more fins, which makes longer legitimate ! (circular amd halo)
8. slower and quieter fans (actually fanboy got 1 thing correct)
9. overcompensation ( long cards are for little manlets says amd fanboy )
ROFLMAO - AN ASTOUNDING AMD FANBOY BLOVIATION !
You might deserve the amd fanboy dissembling and politician PR disaster award.
Thank you mr Asomething.
D. Lister - Saturday, July 11, 2015 - link
To be fair, for many if not most people, performance/watt is a lesser concern than performance/dollar. The Fijis do seem to have some minor power and thermal issues, but still, if priced competitively (and supplied promptly), they may very well allow AMD to hang on until "Zen" and the eagerly awaited die-shrink.
nader_21007 - Friday, July 10, 2015 - link
Please tell me ryan, why you are not including 390X benches, since the GTX 980 sits between the Fury and the 390X and we all know how close the R9 390X is to the 980.
Including it would have been much better than including a very old card like the GTX 580.
Ryan Smith - Friday, July 10, 2015 - link
We have not yet reviewed the 390X at this time. That will be coming later this month.
As for the GTX 580, that's something that I collected back for the GTX 980 Ti review, but the data is still valid since driver branches have not changed.
Oxford Guy - Saturday, July 11, 2015 - link
It's important to put in a card like that for perspective. A lot of people are still using old cards.
Innokentij - Friday, July 10, 2015 - link
Take a Gigabyte G1 980 and OC it, and look at it smoke this overpriced AMD card. A 40MHz OC? Come on.
nader_21007 - Friday, July 10, 2015 - link
Everybody who decides on a purchase should consider image quality as a factor.
There is hard evidence in the following link showing how inferior the image of the Titan X is compared to the Fury X.
The somewhat blurry image of the Titan X is lacking some details, like the smoke from the fire.
If the Titan X is unable to show the full details, one can guess what other Nvidia cards are lacking.
I hope such issues will be investigated fully by reputable HW sites, for the sake of fair comparison, and to help consumers in their investments.
nader_21007 - Friday, July 10, 2015 - link
Here is the link:
http://hardforum.com/showpost.php?p=1041709168&...
Ryan Smith - Saturday, July 11, 2015 - link
The issue covered in that thread (if you follow it up completely) turned out to be a bug in Battlefield 4, rather than some kind of driver issue or real image quality difference between NV and AMD. The author of the video, Gregster, went back and was able to find and correct the problem; Battlefield 4 was having a mild freak-out when he switched video cards. This is something of a known issue with the game (it can be very picky) and does not occur with our testing setup.
Meanwhile, though you don't see it published here, we do look for image quality issues, and if we saw something we would post about it.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
Maybe nader would really enjoy the image quality and gameplay enhancements of PhysX.
I mean if smoke from a fire is important.... PhysX could blow him off his chair.
I know, it might not work, since the point is amd must be superior.
Michael Bay - Friday, July 10, 2015 - link
>literally one game
>developed by amd suckers at that
You're literally grasping at straws.
Ranger101 - Saturday, July 11, 2015 - link
Relax man, every now and then you have to take a comprehensive AMD win in your stride :)
FlushedBubblyJock - Wednesday, July 15, 2015 - link
"the smoke of the fire" ? ROFLMAO
THAT'S LIKE SOME PHYSX CRAP ! ONLY PHYSX IS 100 TIMES MORE...
Now the amd fanboy loves one little puffy of smoke from a campfire or some crap, but PhysX - forget it !
ROFLMAO SO NOT CONVINCING.
AS118 - Friday, July 10, 2015 - link
Nice! But I'm looking forward to the Nano. These air-cooled Fury cards won't fit in my case anyway, and apparently the Nano is more powerful than a 290x.
jay401 - Friday, July 10, 2015 - link
Well, Sapphire has a history of fans that die early on their cards and Asus has a rep for poor customer service, so that (waiting for other vendors), on top of the price being at least $50 too high, means I'll wait until the initial rush is over and prices come down to market rates instead of early adopter premiums.
jay401 - Friday, July 10, 2015 - link
I mean, really, they're charging $50 less for a card that's up to 17% slower IN ADDITION to not having the expensive water cooling block on it? No, it needs to be closer to $100 cheaper. And then on top of that, both Fury cards are $50 too expensive based on how they perform and their missing features compared to NVidia's. This Fury non-X should have debuted at $499, and likely dropped to $474 within a month, while the Fury-X drops to $599 initially, followed by $574 within a month. Then AMD would actually be what it used to be: a better bang-for-the-buck than NVidia.
jay401 - Friday, July 10, 2015 - link
Btw, "$50 less" was assuming the Fury X was $599 like it should have been since launch.
Asomething - Friday, July 10, 2015 - link
Question: what features are amd missing that nvidia has? And no, gameworks/physx don't really count, because amd can use those features despite them running badly on their hardware, since it's an nvidia feature and amd aren't allowed to optimize for it.
silverblue - Saturday, July 11, 2015 - link
AMD also have TrueAudio... for all the good that's doing them or the industry.
jay401 - Saturday, July 11, 2015 - link
HDMI 2.0, 6-8GB VRAM, diversity in connector output types. I forget what else.
D. Lister - Sunday, July 12, 2015 - link
Add to that...
- Regular driver/optimal settings/SLI profile updates.
- G-Sync - more expensive, but performs better and is available across a much wider range of GPUs.
- Shadowplay, live 4K video streaming and capture.
- Game anywhere streaming via Shield tab.
- Better privacy with the software suite, since unlike the Gaming Evolved Raptr app, GFE doesn't mine you for personal data to be sold. http://mobile.pcauthority.com.au/News/362545,is-am...
RussianSensation - Sunday, July 12, 2015 - link
Sapphire now uses double ball bearing fans. That means the issue with their fans dying hasn't been proven on the newer cards yet. Next time please read the review more carefully.
USGroup1 - Friday, July 10, 2015 - link
"... in a straight-up performance shootout with the GTX 980 the R9 Fury is 10% more expensive for 8%+ better performance."
A very misleading conclusion. Those numbers come from comparing a factory-overclocked R9 Fury with a reference GTX 980. Well played.
Ryan Smith - Friday, July 10, 2015 - link
It's comparing the stock-clocked Fury (the Asus model) with the stock-clocked reference GTX 980.
Dazmillion - Friday, July 10, 2015 - link
Does the R9 Fury have HDMI 2.0? That can be a deal breaker for 4K gaming.
Ryan Smith - Friday, July 10, 2015 - link
No. The Fiji GPU does not support HDMI 2.0.
TheJian - Friday, July 10, 2015 - link
Since you can easily get a SUPER OC 980 for the price of the Fury, why not put one in the benchmarks? That is its real competition, NOT the regular 980's that are $50 less, correct? IF AMD is avoiding giving a ref design for the Fury regular, then why not use an OC version of the 980 also? It would seem sneaky tactics by AMD here to allow an AMD portal site like yours to compare products that are NOT really even. You should be using an OC 980 priced like one of the cards you reviewed. The OC card here is actually $70 more than a ref 980.
http://www.newegg.com/Product/ProductList.aspx?Sub...
Multiple OC cards for $499 or less (zotac AMP $479 in cart), and come with a game. EVGA, Gbyte, Zotac, Asus Strix etc all 499 or less. MSI is the highest OC at $509 of this bunch.
EVGA has a ACX model for $499 after $30 rebate. 1279 core/1380 boost! Pitting Fury vs. regular 980's is a joke.
Also why do you keep using drivers that are TWO revs behind NV's WHQL drivers (353.06 released May 31st, and 353.30 released June 22nd; both are later, correct?). Also Extremetech says Metro LL is 25% faster with 353.30, so I'm wondering what other games are much faster given drivers TWO revs later than what you seem to be using here. Extremetech used the same as you (Fury X) but commented on the 353.30's being faster apparently.
Still wondering when you're going to cover the WHINE of the FURY X cards that retail users have had also:
http://www.tomshardware.com/reviews/amd-radeon-r9-...
Toms spent 4 pages on it. Wccftech reported it also, with vids so you could hear it (and coil whine). There are going to be RMA's over this. But not a peep about it reaching users and AMD covering OEM's here?
Ryan Smith - Friday, July 10, 2015 - link
Ryan Smith - Friday, July 10, 2015 - link
1) We do not compare OC'd cards. We did this once before; the community made it VERY clear that it was the wrong thing to do. So all of our comparisons are based on reference-clocked cards against reference-clocked cards. In other words we examine the baseline, so that the performance you as a consumer get would never be lower than what we get at identical settings.
http://www.anandtech.com/show/3988
2) For the NV drivers, the latest drivers do not impact the performance on our current benchmark suite in any way. Nor would we expect them to, as they're all from the same driver branch. While I have already checked some cards, the amount of time required to fully validate all of our NVIDIA cards would not be worth the effort since the results would be absolutely identical.
3) We did cover the Fury X noise. It has been the top story in the Pipeline for the last 24 hours. http://www.anandtech.com/show/9428/on-radeon-r9-fu...
Gigaplex - Saturday, July 11, 2015 - link
"We do not compare OC'd cards."Well, technically you do, but usually only when there's no reference card. Problem is, if one vendor decides not to do a reference card, the custom cards get an advantage when comparing against reference cards of another vendor. We're only asking that you include a custom card as well in these situations.
Ryan Smith - Saturday, July 11, 2015 - link
Whenever there isn't a reference card however, we always test a card at reference clockspeeds in one form or another. For example in this article we have the ASUS STRIX, which ships at reference clockspeeds. No comparisons are made between the factory overclocked Sapphire card and the GTX 980 (at the most we'll say something along the lines of "both R9 Fury cards", where something that is true for the ASUS is true for the Sapphire as well).
And if you ever feel like we aren't being consistent or fair on that matter, please let us know.
Ranger101 - Saturday, July 11, 2015 - link
Nvidia still has the fastest high-end offering in the 980 Ti. Isn't that enough? Sometimes you just have to take a solid AMD win on the chin.
Socius - Saturday, July 11, 2015 - link
Except a card that is $100 more than the GTX 980, and slower than it when OC'd, isn't a "win" by any means. Unless we're treating AMD like they're in the special Olympics and handing them a medal just for showing up.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
I know, rofl.
The amd fanboy forgets pricing suddenly; when their master and god reams the backside wallet holder, suddenly price is removed entirely from the equation !
It's no secret amd fanboys are literally insane from their obsession.
Socius - Saturday, July 11, 2015 - link
Really? The Fury is ahead of the GTX 980 and is a comparable value proposition? Yet again you run overclock benches with the Fury without showing the same being done on that GTX 980. I have no idea why you follow this policy of no cross-vendor OC performance comparisons, as most people geeky enough to be reading this website would be interested in seeing final performance stats that they can achieve on a stock card with basic overclocking. So if you have a card like the Fury that overclocks a whopping 5%, compared to a GTX 980 that can overclock 25-40%, yeah... you're doing a disservice to your readers by withholding that information. Especially when you don't even bring it up in your conclusion.
Paid by AMD much? I never thought I'd have to look at any reviews other than those from Anandtech. But this website has been playing it too cautiously, and without the ultimate intent of showing best overall/actual performance and value to their readers.
Seriously Ryan, I'm disappointed. I don't see why there would be an issue with showing a comparison between max OC between, oh, let's say, the Asus R9 Fury STRIX and the...Asus GTX 980 STRIX. Would be a pretty darn fair comparison of 2 different cards and their performance when given the same treatment by the same manufacturer.
I don't know why I even bother commenting anymore.
Orthello - Saturday, July 11, 2015 - link
I think it would be great to see an OC vs OC comparison; the problem is it's unfair to do so now for the Fury, as there is no voltage control yet like there is for Nvidia cards.
When Afterburner / Trixx etc. support voltage control for the Fury, then we can really get the real max OC vs max OC, unless you are talking about overclocks at default voltage, which makes no sense.
Socius - Saturday, July 11, 2015 - link
At the end of the day, this is the product they've placed in people's hands. But even without voltage increases on the Nvidia side, you should be able to OC to around 1400MHz at least, when on a custom cooler design like the STRIX model, for example. But I don't think Ryan cares about what's "fair." That's why he's putting these custom boards up against the reference NVidia designed/clocked GTX 980. Because that's somehow a fair comparison. =D
FlushedBubblyJock - Wednesday, July 15, 2015 - link
Don't worry, the insanity is so rampant that if amd ever gets an overclock going - we will never hear the end of it - "OVERCLOCK MONSTER !!!!"
On the 290 series amd fanboys were babbling all over the internet that nearly every amd card hits 1300 or 1350 "easily".
Of course they were lying those were rare exceptions, but hey...losing is hard on the obsessed.
Now we know the nvidia flagship core is a real monster overclocker, so we should not mention it ever, nor hear about it, nor factor it in at all, EVER FOR ANY REASON WHATSOEVER.
Frankly I can't get away from the sickly amd fanboys, it's so much entertainment so cheap my sides have never been busted so hard.
Ranger101 - Saturday, July 11, 2015 - link
If quality articles that provide a balanced perspective disappoint you, that's a shame. Take comfort in knowing the 980 Ti is currently still the fastest card. Allow me to once again commend Anandtech on an excellent article.
Socius - Saturday, July 11, 2015 - link
Yes... comparing an overclocked 3rd-party pimped model against a non-OC'd reference design card that is $100 less in price and then saying it's a tough choice between the 2 as the R9 Fury is faster than the GTX 980... lol... totally balanced perspective there bro.
Ryan Smith - Saturday, July 11, 2015 - link
Whenever a pure reference card isn't available, as is the case for the R9 Fury, we always test a card at reference clockspeeds in one form or another. For example in this article we have the ASUS STRIX, which ships at reference clockspeeds. No comparisons are made between the factory overclocked Sapphire card and the GTX 980 (at the most we'll say something along the lines of "both R9 Fury cards", where something that is true for the ASUS is true for the Sapphire as well).
And if you ever feel like we aren't being consistent or fair on that matter, please let us know.
Socius - Saturday, July 11, 2015 - link
I just want to understand something clearly. Are you saying that when you are talking about performance/value, you ignore the fact that one card can OC 4% and the other can OC 30%, as you don't believe it's relevant? Let me pose the question another way. If you had a friend who was looking to spend ~$500 on a card and was leaning between the R9 Fury and the GTX 980, knowing that the GTX 980 will give him better performance once OC'd, at $450, would you even consider telling him to get the R9 Fury at $550?
My concern here is that you're not giving a good representation of the real-world performance gamers will get. As a result, people get misled into thinking spending the extra money on the R9 Fury is actually going to net them a higher frame rate... not realizing they could get better performance, for even less money, if someone decided to actually look at overclocking potential...
Now if you weren't interested in overclocking results in general, I'd say fine. I disagree, but it's your choice. But then you do show overclocking results with the R9 Fury. I'm finding it really hard to understand what your intent is with these articles, if not to educate people and help them make an actual informed decision when making their next purchase.
As I mentioned in your previous article on the Fury X...you seem to have a soft spot for AMD. And I'm not exactly sure why. I will admit that I'm currently a big Nvidia fan. Only because of the features and performance I get. If the Fury X had come out, and could OC like the 980ti and had 8gb HBM memory, I'd have become an AMD fan. I'm a fan of whoever has the best technology at any given moment. And if I were looking to make a decision on my next card purchase, your article here would give a false impression of what I would get if I spent $100 more on an R9 fury, than on a GTX 980...
jardows2 - Saturday, July 11, 2015 - link
Apparently, OC is your thing. I get it. There are plenty of OC sites that are just for you. For some of us, we really don't want to see how much we can shorten the life of something we pay good money for, when the factory performance does what we need. I, for one, am more interested in how something will perform without me potentially damaging my computer, and I appreciate the way that AT does their benchmarks.
Socius - Saturday, July 11, 2015 - link
I think the fact that you believe overclocking will "damage your computer" or in any meaningful way shorten the lifespan of the product is all the more reason to talk about overclocking. I'd be more than happy to share a little info.
Generally speaking, what kills the product is heat (aside from high-current degradation, which was a bigger problem on older fab processes). So let's say you have a GPU that runs 60 degrees Celsius under load, at 1000MHz with a 50% fan speed profile. Now let's imagine 2 scenarios:
1) You underclock the GPU to 900MHz and set a 30% fan profile to make your system more quiet. Under load, your GPU now hits 70 degrees Celsius.
2) You overclock the GPU to 1100MHz and set a 75% fan profile for more performance at the cost of extra sound. Under load, your GPU now hits 58 degrees Celsius due to increased fan speed.
Which one of these devices would you think is likely to last the longest? If you said the Overclocked one, you'd be correct. In fact...the overclocked one is likely to last even longer than the stock 1000MHz at the 50% fan speed profile, because despite using more power and giving more performance, the fan is working harder to keep it cooler, thus reducing the stress on the components.
Now. Let's talk about why that card was clocked at 1000MHz to start! When a chip is designed, the exact clock speed is an unknown. Not just between designs...but between individual wafers and dies cut out from those wafers. As an example, I had an i7 3770k that would use 1.55v to hit 4.7GHz. I now have one that uses 1.45v to hit 5.2GHz and 1.38v to hit 5GHz. Why am I telling you this? Well...because it's important! When designing a product like a CPU or GPU, and setting a base clock, you have to account for a few things:
1) How much power do I want to feed it?
2) How much of the heat generated by that power can I dissipate with my fan design?
3) How loud do I want my fan to be?
4) What's the highest clock rate my lowest end wafer can hit while remaining stable and at an acceptable voltage requirement?
So here's the fun part. So while chips themselves can vary greatly, there are tons of precautions added when dealing with stock speeds. Just for a point of reference...a single GTX Titan X is guaranteed to overclock 1400MHz-1550MHz with proper settings, if you put the fan at 100% full blast. That's a 30%-44% overclock! So why wouldn't Nvidia do that? Well it's a few things.
1) Noise! Super important here. Your clockspeed is determined by your ability to cool it. And if you're cooling by means of a fan, the faster that fan, the more noise, the more complaints by consumers.
2) Power/Heat variability. Since each chip is different, as you go into the higher ranges, each will require a different amount of power in order to be stable at that frequency. If you're curious, you can see what's called an ASIC quality for your GPU using a program like GPU-Z. This number will tell you roughly how good of a chip you have, in terms of how much of a clock it can achieve with how much power. The higher the % of your ASIC quality, the better overclocking potential you have on air because it'll require less power, and therefore create less heat to do it!
3) Overclocking potential. This is actually important in marketing. And it's something AMD and Nvidia are both pretty bad at, actually. But AMD is a bit worse, to their own detriment. In their R9 Fury and Fury X press release performance numbers they set expectations for their cards to completely outperform the 980ti, using best-case scenarios and hand-picked settings. And they also said it overclocks like a beast. Now, here's why that's bad. Customers like to feel they're getting more than what they pay for. That's why companies like BMW always list very modest 0-60 times for their cars. When I say modest, I mean they set the 0-60 times to look worse than what the car is actually capable of. That's why every car review program you see will show 0-60 times that are 0.2 to 0.5 seconds quicker than what BMW has actually listed. This works because you're sold on a great product, only to find it's even greater than that.
Got off track a bit there. I apologize. Back to AMD and the Fiji lineup and why this long post was necessary. When AMD announced the Fury X being an all in one cooler design, I instantly knew what was up. The chip wasn't able to hold up to the limitations we talked about above (power requirement/heat/fan noise/stability). But they needed to put out a stock clock that would allow the card to be competitive with Nvidia, but they also didn't want it to sound like a hair dryer. That's why they opted for the all in one cooler design. Otherwise, a chip that big on an air cooled design would likely have been clocked around the 850-900MHz range that the original GTX Titan had. But they wanted the extra performance, which created extra heat and required more power, and used a better cooler design to be able to accomplish that across the board with all their chips. That's great, right? Well...yes and no. I'll explain.
Essentially the Fiji lineup is "factory overclocked" by AMD. This is the same as putting a turbo on a car engine. And as any car enthusiast will tell you, a 2L engine with a turbo on it may be able to produce 330 horsepower, which would otherwise take a naturally aspirated 4L V8 to accomplish. But then you're far more limited if you want even more horsepower. Sure, you could put a bigger turbo on that 2L engine, but its output is already boosted for an engine that size, so your gains will be minimal. With the naturally aspirated engine, though, you can drop forced induction on it and realize massive gains. This is very much the same as what's happening with the Fury X, for example.
And this is why I believe it's incredibly important to point this out. I built a system for my friend recently with a GTX 970. I overclocked that to 1550MHz on first try, without even maxing out the voltage. That was a $300 model card. And even that would challenge the performance of the R9 Fury if you don't plan on overclocking (not that you could, anyway, with that 4% overclock limit).
So...yes, I do think overclocking needs to be talked about more, as it's become far easier and safer to do than in the past. Even if you don't plan to do extreme overclocking, you can keep all your automated fan speed profiles the same, leave the voltage alone, and just raise the power limit and nudge the GPU clock up slightly for simple, free performance. It's something I could teach my mom to do. So I hope it's something that is done by more and more people, as there's really no reason not to do it.
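A sketch of that conservative routine in Python; apply_core_offset, raise_power_limit and run_stress_test are placeholders for whatever your tuning tool exposes (MSI Afterburner, etc.), not a real API:

    def find_safe_offset(apply_core_offset, raise_power_limit, run_stress_test,
                         step_mhz=15, max_offset_mhz=150):
        """Incrementally bump the core clock, never touching voltage or fan curves."""
        raise_power_limit()              # lift the power limit, leave voltage alone
        best_mhz = 0
        offset_mhz = step_mhz
        while offset_mhz <= max_offset_mhz:
            apply_core_offset(offset_mhz)
            if not run_stress_test():    # artifacts or a crash -> back off and stop
                break
            best_mhz = offset_mhz
            offset_mhz += step_mhz
        apply_core_offset(best_mhz)      # settle on the last known-stable offset
        return best_mhz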
And that's why I think informing people of these differences with regard to overclocking will help people save money and get more performance. Not doing so just keeps people in the dark and does them a great disservice. Why keep your audience oblivious and allow them to remain ignorant of these things when you can take some time to help them? Overclocking today is far different from the overclocking of a few years ago. And everybody should give it a try.
mdriftmeyer - Sunday, July 12, 2015 - link
Ryan is biased in these comparisons. That's a fact. That your knowledge silenced the person defending Ryan's unbalanced comparisons is also a fact.
jardows2 - Monday, July 13, 2015 - link
What a hoot! He didn't silence me. I just have better things to do on the weekend than check all the Internet forums I may have posted to!
jardows2 - Monday, July 13, 2015 - link
@Socius: That was a good post explaining your interest in OC. From my perspective, I come from the days when OC'ing would void your warranty and would shorten the life span of your system. I also come from a more business and channel oriented background, meaning that we stay with "officially supported" configurations. That background stays with you. Even today, when building my personal computer or a web browsing computer for a family member, I do everything I can to stay with the QVL of the motherboard.
I am more concerned with out of box experience, and almost always skip over the OC results of benchmarks whenever they are presented. If a device can not perform properly OOB, then it was not properly configured from the start, which does not give me the best impression about the part, regardless of the potential of individual tweaks.
In the end, you are looking for OC results, I am only concerned with OOB experience. Different target audience, both with valid concerns. I just don't think it is worth bashing a reviewer's method that focuses on one experience over the other.
Mugur - Saturday, July 11, 2015 - link
Good review as always, Ryan. I wouldn't throw away Asus's approach. Nice power efficiency gains. Funny how overclocking for a few fps more heated the discussion... All in all, I think AMD is back in business with Fury non-X. Waiting for 3xx reviews, hopefully for the whole R9 line, with 2, 4 and 8 GB.
3ogdy - Saturday, July 11, 2015 - link
What a disappointment again. Man... the Fury cards really aren't worth the hassle at all, are they? It's sad to see this, especially coming from an FX-8350 & 2xHD6950 owner. So the 980Ti beats the custom cards (we knew the Fury X was close to its limits from the beginning, but still, despite all the improvements, the custom cards sometimes perform even worse than the stock one). The 980Ti really beats the Fury X most of the time. And what is that, nVidia's blower is only 8dB louder than the highest-end fans used by arguably the two best board partners AMD has? Wow! This is where I realize nVidia really must have done an amazing job with the 980Ti and Maxwell in general.
HBM this HBM that...this card is beaten at 4K by the GTX980Ti and gameplay seems to be smoother on the nVidia card too. What the hell? Where are the reasons to buy any of the Fury cards?
siliconwars - Saturday, July 11, 2015 - link
Any concept of performance per dollar?D. Lister - Saturday, July 11, 2015 - link
The Fury is 8% faster than a stock 980 and 10% more expensive. How does that "performance per dollar" thing work again? :p
Nagorak - Sunday, July 12, 2015 - link
By that token the 980 is not good performance per dollar either. It's something like a 390 non-X topping the charts. These high end cards are always a rip off.
D. Lister - Tuesday, July 14, 2015 - link
"These high end cards are always a rip off."That, is unfortunately a fact. :(
siliconwars - Saturday, July 11, 2015 - link
The Asus Strix is 9.4% faster than the 980 with 20% worse power consumption. I wouldn't call that "nowhere near" Maxwell tbh and the Nano will be even closer if not ahead.
Dazmillion - Saturday, July 11, 2015 - link
Nobody is talking about the fact that the Fury cards, which AMD claims are for 4K gaming, don't have a 4K@60Hz port!!
David_K - Saturday, July 11, 2015 - link
So the DisplayPort 1.2 connector isn't capable of sending 2160p60. That's new.
Dazmillion - Saturday, July 11, 2015 - link
The Fury cards don't come with HDMI 2.0.
ES_Revenge - Sunday, July 12, 2015 - link
Which is true but not the only way to get that resolution & refresh. Lack of HDMI 2.0 and full HEVC features is certainly another sore point for Fury. For the most part HDMI 2.0 affects the consumer AV/HT world though, not so much the PC world. In the PC world, gaming monitors capable of those res/refresh rates are going to have DP on them which makes HDMI 2.0 extraneous.
mdriftmeyer - Sunday, July 12, 2015 - link
I'll second ES_Revenge on DP for PC gaming. Living without HDMI 2.0 for 4K home displays is something we'll put up with until the next major revision. I don't even own a 4K home monitor; they're not very popular in sales either.
Every single one of them showing up on Amazon are handicapped with that SMART TV crap.
I want a 4K Dumb Device that is the output Monitor with FreeSync and nothing else.
I'll use the AppleTV for the `smart' part.
FlushedBubblyJock - Thursday, July 16, 2015 - link
I'VE ALREADY SEEN A DOZEN REFUSE TO BUY FURY BECAUSE OF IT. They have a 4K TV, they say, that requires HDMI 2.0...
SO ALL YOUR PATHETIC EXCUSES MEAN EXACTLY NOTHING. THOSE WITH 4K READY SCREENS ARE BAILING TO NVIDIA ONLY !
YOU DENYING REALITY WILL ONLY MAKE IT WORSE FOR AMD.
They can screw off longer with enough pinheads blabbering bs.
FlushedBubblyJock - Thursday, July 16, 2015 - link
It's such a massive failure, and so big a fat obtuse lie, it's embarrassing to even bring up, spoiling the party that is fun if you pretend and fantasize enough, and ignore just how evil AMD is. HDMI 2.0 - nope! Way to go, what a great 4K gaming card! 4GB RAM - suddenly that is more than enough and future proof!
ROFL - ONLY AMD FANBOYS
dave1231 - Saturday, July 11, 2015 - link
That's with HBM? Lol.
medi03 - Saturday, July 11, 2015 - link
With all respect, 300 vs 360 watts at load and 72 vs 75 watts idle doesn't deserve "consumes MUCH more power", Ryan, and that would hold even if it weren't the faster card.
Socius - Saturday, July 11, 2015 - link
For total system power draw? Yeah, it does... because the power usage gap percentage is lessened by the addition of the rest of the system's power usage (minus the cards) in the total figure. So if the numbers were 240W vs 300W, for example, that's 25% more power usage. And that's with a 20-30W reduction in power usage from using HBM. So it shows how inefficient the GPU design actually is, even when masking it with HBM's power reduction and the addition of total system power draw instead of calculating it by card.
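To put that effect in numbers: a quick sketch using the 300W vs 360W wall figures mentioned above and an assumed 150W for the rest of the system (the 150W is purely illustrative, not a measurement):

    # The same 60W gap looks smaller as a percentage when buried in total system draw.
    rest_of_system_w = 150                  # assumption for illustration
    system_a_w, system_b_w = 300, 360       # at-the-wall figures from the discussion

    card_a_w = system_a_w - rest_of_system_w
    card_b_w = system_b_w - rest_of_system_w

    print(f"At the wall: {system_b_w / system_a_w - 1:.0%} more power")   # ~20%
    print(f"Card only:   {card_b_w / card_a_w - 1:.0%} more power")       # ~40%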
mdriftmeyer - Sunday, July 12, 2015 - link
Personally, I have an RM 1000W Corsair Power Supply. Sorry, but if you're using < 850W supply units I suggest you buck up and upgrade.
Socius - Sunday, July 12, 2015 - link
I think you replied to the wrong person here. I have 2 PSUs in my PC: a 6-rail 1600W unit and a single-rail 1250W unit.
Peichen - Saturday, July 11, 2015 - link
The fail that is AMD's Fury series makes my MSI Gaming 4G GTX 980 look even better. I only paid $430 for it and it gets to 1490/1504 boosted at stock voltage. Essentially it means I got a card as fast as an overclocked Fury for $100+ less, that uses far less power, and that has been in my system for months already. I am very glad I went Nvidia after 5 years with AMD/ATI graphics and didn't wait months for Fiji.
FlushedBubblyJock - Thursday, July 16, 2015 - link
There we have it, and it's still the better deal. It's STILL THE BETTER DEAL AND IT'S AVAILABLE. But we're supposed to believe AMD is cheap and faster... and just as good in everything else...
I seriously can't think of a single thing amd isn't behind on.
MobiusPizza - Saturday, July 11, 2015 - link
"The R9 Fury will be launching with an MSRP of $549, $100 below the R9 Fury X. This price puts the R9 Fury up against much different completion* than its older sibling; "It's competition not completion
SolMiester - Saturday, July 11, 2015 - link
WOW, so much fail from AMD... might as well kiss their ass goodbye! Pimping the Fury at 4K, when really even the 980Ti is borderline on occasion, and releasing a card with no OC headroom at the same price as its competitor!
ES_Revenge - Saturday, July 11, 2015 - link
I didn't have too high hopes for the "regular" Fury [Pro] after the disappointing Fury X. However I have to say... this thing makes the Fury X look bad, plain and simple. With a pretty significant cut-down (numerically) in SPs and 32 fewer TMUs, you'd expect this thing to be more of a yawn. Instead it gives very near Fury X performance and is still faster than a GTX 980.
The only problem with it is price. At $550 it still costs more than a GTX 980, and Fury has less OC potential. And at only $100 less than the Fury X it's not really much of a deal, considering the AIO/CLC that comes with the X is probably worth $60-80. So really you're only paying $30 or so for the performance increase of the Fury X (which isn't that much, but it's still faster). What I suggest AMD "needs to do" is price this thing near where they have the 390X priced. A Fury Pro at ~$400 would pull sales from Nvidia's 980 so fast it's not funny. Accordingly, the 390X should be priced lower as well.
But I guess AMD can't really afford to undercut Nvidia at the moment so they're screwed either way. Price is high, people aren't going to bother; lower the price and people will buy but then maybe they're just losing money.
But imagine buying one of these at $400ish, strapping on an Asetek AIO/CLC you might have lying around (perhaps with a Kraken bracket), and you have a tiny little card* with a LOT of GPU power and nice low temps, with performance like a Fury X. Well one can dream, right? lol
*What I don't understand is why Asus did a custom PCB to make the thing *longer*??? One of the coolest things about Fury is how small the card is. They just went and ruined that--they took it and turned it back into a 290X, the clowns. While the Sapphire one still straps on an insanely large cooler, at least if you remove it you're still left with the as-intended short card.
FlushedBubblyJock - Thursday, July 16, 2015 - link
Can you even believe the 390X is at $429 and $469 and $479... the rebrand is over 2.5 years old or so... I mean, AMD has GONE NUTS.
akamateau - Sunday, July 12, 2015 - link
@Ryan Smith: Hmmm.
You ran a whole suite of synthetic Benchmarks yet you completely ignored DX12 Starswarm and 3dMarks API Overhead test.
The question that I have is why did you omit DX12 benchmarks?
Starswarm is NOW COMPLETE AND MATURE.
It is also NOT synthetic but rather a full length game simulation; but you know this.
3dMark is synthetic but it is THE prime indicator of the CPU to GPU data pipeline performance.
They are also all we have right now to adequately judge the value of a $549 AMD GPU vs a $649 nVidia GPU for new games coming up.
Since better than 50% of games released this Christmas will be DX12 don't you think that consumers have a right to know how well a high performance API will work with a dGPU card designed to run on both Mantle and DX12?
AMD did not position Fiji for DX11. This card IS designed for DX12 and Mantle.
So show us how well it does.
Ryan Smith - Monday, July 13, 2015 - link
The Star Swarm benchmark is, by design, a proof of concept. It is meant to showcase the benefits of DX12/Mantle as it applies to draw calls, not to compare the gaming performance of video cards. Furthermore, the latest version is running a very old version of the engine that has seen many changes. We will not be able to include any Oxide engine games until Ashes of the Singularity (which looks really good, by the way) is out of beta.
Finally, the 3DMark API Overhead test is not supposed to be used to compare video cards from different vendors. From the technical guide: "The API Overhead feature test is not a general-purpose GPU benchmark, and it should not be used to compare graphics cards from different vendors."
FlushedBubblyJock - Thursday, July 16, 2015 - link
" Since better than 50% of games released this Christmas will be DX12 "I'LL BET YOU A GRAND THAT DOES NOT HAPPEN.
It's always the amd fanboy future, with the svengali ESP blabbed in for full on PR BS...
akamateau - Sunday, July 12, 2015 - link
@Ryan Smith: Do you also realise that Fiji completely outclasses Maxwell and Tesla as well?
Gaming is a sideshow. AMD is positioning Fury X to sell for $350+ as single-unit silicon for HPC. With HBM on the package!!!
GPGPU computing is now up for grabs. Comparing the Fiji PACKAGE to the Maxwell or Tesla PACKAGE has AMD thoroughly outclassing the professional workstation and HPC silicon.
HBM stacked memory can be configured as cache and still feed GDDR5 RAM for multiple monitors.
AMD has several patents for just that while using HBM stacked memory.
I think that AMD is quietly positioning Fiji and Greenland next for High Performance Computing.
Fury X2 with 17 TFLOPS of single precision and almost 8 TFLOPS of double precision is going to change the cluster server market.
Of course Fury X2 will rock this Christmas.
What will be the release price? I think less than $999!!!
AMD has made a habit of being the Grinch that stole nVidia's Christmas.
Ryan Smith - Monday, July 13, 2015 - link
Note that Fiji is not expected to appear in any HPC systems. It has no ECC, minimal speed FP64, and only 4GB of VRAM. HPC users are generally after processors with large amounts of memory and ECC, and frequently FP64 as well.
AMD's HPC product for this cycle is the FirePro S9170, a 32GB Hawaii card: http://www.amd.com/en-us/products/graphics/server/...
FlushedBubblyJock - Thursday, July 16, 2015 - link
ROFLMAO delusion after delusion...
loguerto - Sunday, July 12, 2015 - link
Looking at what happened with the older generations of AMD and nvidia GPUs, I wouldn't be surprised if, after a few driver updates, Fiji ends up well ahead of Maxwell. AMD has always improved its older architectures with software updates, while nvidia hardly ever did that; actually they downgrade their old GPUs so that they can sell their next overpriced SoC.
CiccioB - Monday, July 13, 2015 - link
The myth, here again! Let's see the numbers for this miraculous vs. crippling driver.
And I mean I WANT NUMBERS!
Otherwise what you are talking about is just junk you are repeating because you can't work it out yourself.
Come on, the numbers!!!!!!!!!
FlushedBubblyJock - Thursday, July 16, 2015 - link
So you lied, loguerto, but the sad truth is AMD bails on its cards and the drivers for them FAR FAR FAR sooner than nvidia does. YEARS SOONER.
Get with it bub.
Count Vladimir - Thursday, July 16, 2015 - link
Hard evidence or gtfo.
Roboyt0 - Sunday, July 12, 2015 - link
I am very interested to see how much of a difference ASUS' power delivery system will make for (real) overclocking in general once voltage control is available. If these cards act the same as the 290's did, then AMD's default VRM setup could very likely be more than capable of overclocks in the 25% or more range. I'm basing the 25% or more off of my experience with a half dozen reference-based R9 290's, default 947MHz core, that would reach a 1200MHz core clock with ~100mV additional. And if you received a capable card then you could surpass those clocks with more voltage.
It appears AMD has followed the EXACT same path they did with the 290 and 290X. The 290X always held a slight lead in performance, but the number of GPU components disabled didn't hinder the 290 as much as everyone thought. This is exactly what we see now with the Fury vs. Fury X... overclock the Fury and it's the better buy. All while the Fury X is there for those who want that little bit of extra performance for the premium, and this time you're getting water cooling! It seems like a pretty good deal to me.
Once 3rd party programmers (not AMD) figure out voltage control for these cards, history will likely repeat itself for AMD. Yes, these will run hotter and use more power than their Nvidia counterparts... I don't see why this is a shock to anyone, since this is still 28nm and similar enough to Hawaii. What no one seems to mention is the amount of performance increase compared to Hawaii in the same power/thermal envelope... it's a very significant jump.
Who in the enthusiast PC world really cares about the additional power draw? We're looking at 60-90W under normal load conditions; FurMark is NOT a normal load. Unless electricity where you hail from is that expensive, it isn't actually costing you that much more in the long run. If you're in the market for a ~$550 GPU, then the cost of a good PSU probably isn't a concern. What the FurMark power draw of the Fury X/Sapphire Fury really tells us is that the reference PCB is capable of handling 385W+ of draw. This should give an idea of what the card can do once we are able to control the voltage.
These cards are enthusiast grade and plenty of those users will remove the included cooler for maximum performance. A full cover waterblock is going to be the key to releasing the full potential of Fury(X) just like it was for 290(X). It is a definite plus to see board partners with solid air cooling solutions out of the gate though...Sapphire's cooling solution fares better in temperature AND noise during FurMark than ASUS' when it's pulling 130W additional power! Way to go Sapphire!
My rant will continue concerning drivers. Nvidia has mature hardware with mature drivers. The fact AMD is keeping up, or winning is some instances, is a solid achievement. Go back to a 290(X) review when their primary competition was a 780 Ti, where the 780 Ti was usually winning. Now, the 390(X), that so many are calling a rebranded POS, easily bests the 780 Ti and competes with GTX 980. Nvidia changed architecture, but AMD is still competitive? Another commenter said it best by saying: "An AMD GPU is like a fine wine, and gets better with age."
This tells me 3 things...
1) Once drivers mature, AMD stands to gain solid performance improvements.
2) Adding voltage control to enable actual overclocking will show the true potential of these cards.
3) Add these two factors together and AMD has another winning product.
Lastly we still have DX12 to factor into all of this. Sure, you can say DX12 is too far away, but in actuality it is not. I know there are those people who MUST HAVE the latest and greatest hardware every time something new comes around every ~9 months. However, there are plenty more of us who wait a few generations of GPUs to upgrade. If DX12 brings even a half of the anticipated performance gains and you're in the market, then purchasing this card now, or in the coming months, will be a solid investment for the coming years.
Peichen - Monday, July 13, 2015 - link
Whatever floats your boat. There are still some people like you who believe FX CPUs are faster than i7s, and they are what keeps AMD afloat. The rest of us... we actually consider everything and go Intel & Nvidia. There are 3 fails in your assumptions:
1. Fiji is a much bigger core tied to 4 HBM modules. OC will likely not be as "smooth" as 290X
2. 60-90W is not just a cost in electricity. It also means getting a PSU that will supply the additional draw, plus more fan(s) and a better case to get the heat out. Or suffer the heat and noise. The $15-45 a year in additional electricity also means you will be in the red in a couple of years (rough math after this list).
3. You assume AMD/ATI driver team is still around and will be around a couple of years in the future.
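Rough math behind the "$15-45 a year" figure in point 2 (a sketch; the hours per day and electricity rate are assumptions, so adjust for your own usage and local prices):

    # Back-of-the-envelope cost of an extra 60-90W of GPU draw while gaming.
    extra_draw_w = 75          # midpoint of the 60-90W gap discussed above
    rate_per_kwh = 0.15        # USD, assumed

    for hours_per_day in (4, 8):
        kwh_per_year = extra_draw_w / 1000 * hours_per_day * 365
        cost = kwh_per_year * rate_per_kwh
        print(f"{hours_per_day} h/day of gaming: ~${cost:.0f} per year extra")

With those assumptions the extra cost comes out to roughly $16-33 a year, i.e. in the ballpark of the figure quoted.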
silverblue - Tuesday, July 14, 2015 - link
3. Unless the driver work has been completely outsourced and there's proof of this happening, I'm not sure you can use this as a "fail".
Fiji isn't a brand new version of GCN so I don't expect the huge gains in performance that are being touted, however whatever they do bring to the table should benefit Tonga as well, which will (hopefully) distance itself from Tahiti and perhaps improve sales further down the stack.
Count Vladimir - Thursday, July 16, 2015 - link
Honestly, driver outsourcing might be for the best in AMD's case.
Oxford Guy - Wednesday, July 15, 2015 - link
The most electrically efficient 3D computer gaming is via an ARM chip, right? Think of all the wasted watts for these big fancy GPUs. Even more efficient are text-based games.
FlushedBubblyJock - Thursday, July 16, 2015 - link
You forgot he said to spend a hundred and a half on a waterblock... for the AMD card, for "full potential"... ROFL. Once again the future that never comes is very bright and very expensive.
beck2050 - Monday, July 13, 2015 - link
A bit disingenuous, as custom-cooled, overclocked 980s are the norm these days and easily match or exceed Fury, while running cooler with much less power, and can be found cheaper. AMD has its work cut out.
CiccioB - Monday, July 13, 2015 - link
For a GPU that was expected to beat the Titan X hands down, just being faster than the 980 is quite a fail. Especially given the high-cost technology involved in producing it.
Be happy with that, and just wait for DX12 in the hope of gaining a few FPS with respect to the competition.
I just think DX12 is not going to change anything (whatever these cards gain will also apply to nvidia cards), and a few FPS more or less is not what we expected from this top-tier class (expensive) GPU.
Despite the great steps ahead made by AMD in power consumption, it still is a fail.
Large, expensive, still consuming more, and badly scaling.
I hope that with the new 16nm FinFET process things will change radically, or we will witness another 2 years of dominance by nvidia with high prices.
superjim - Monday, July 13, 2015 - link
Used 290's are going for sub-$200 (new for $250). Crossfire those and you get better performance for much less.
P39Airacobra - Tuesday, July 14, 2015 - link
OK, compared to the Fury X, the regular R9 Fury makes a bit more sense than the X model. It is priced better (but still a bit too much) and it has almost even performance with the X model. However, the power consumption is still insane and unreasonable by today's standards! And the temps are way too high for a triple-fan card! At 70C running triple fans, I doubt there is any room at all for overclocking! I do respect this card's performance! But it is just not worth it for the price you have to pay for a hefty PSU and the very loud and expensive cooling setup you will have to put inside your case! To be honest: if I were stuck with an old GTX 660 Ti and someone offered me an R9 Fury in an even trade, I would not do it!
ES_Revenge - Tuesday, July 14, 2015 - link
The power consumption is not insane or unreasonable by "today's standards". Only the GTX 960, 970, 980, and Titan X are better. So it's unreasonable by Nvidia's new standard, but it's actually an improvement over Hawaii, etc. of the past.
Compared to current Nvidia offerings, it's bad, yeah, but we can't really establish standards on their cards alone. The R9 390/X, 380, etc. are still power hungry for their performance and they are still "today's" cards, like it or not.
Don't get me wrong I agree they really need to start focusing on power/heat reduction, but we're not going to see that from AMD until their next gen cards (if they make it that far, lol).
Gunbuster - Wednesday, July 15, 2015 - link
AMD thread with no Chizow comments? My world is falling apart :P
Oxford Guy - Wednesday, July 15, 2015 - link
I'm sure this person has more than one alias.FlushedBubblyJock - Thursday, July 16, 2015 - link
We'd know him by his words, his many lengthy words with links and facts up the wazoo, and he is so proud he would not hide behind another name, like a lousy, incorrect, uninformed AMD fanboy failure.
FlushedBubblyJock - Wednesday, July 15, 2015 - link
Just think about placing your bare hand on 3 plugged-in 100 watt light bulbs... that's AMD's housefire for you! My god, you could cook a steak on the thing.
3x 100 watt light bulbs frying everything in your computer case... awesome job, AMD.
Oxford Guy - Wednesday, July 15, 2015 - link
Because the GTX 480 was quieter, had better performance per watt, and was a fully-enabled chip.FlushedBubblyJock - Thursday, July 16, 2015 - link
So the 480 being hot makes this heated furnace OK? What exactly is the logic there?
Are you a problematic fanboy for AMD?
Oxford Guy - Thursday, July 16, 2015 - link
"What exactly is the logic there?"I really need to spell it out for you?
The logic is that the 480 was a successful product despite having horrid performance per watt and a very inefficient (both in terms of noise and temps) cooler. It didn't get nearly the gnashing of teeth the recent AMD cards are getting and people routinely bragged about running more than one of them in SLI.
CiccioB - Thursday, July 16, 2015 - link
No, it was not a successful product at all, though it was still the fastest card on the market.
The successful card was the 460, launched a few months later, and certainly the 570/580 cards, which brought the fixes to the original GF100 that nvidia itself admitted was bugged.
Here, instead, we have a card which uses a lot of power, is not on top of the charts, and has no fix on the horizon.
The difference was that with GF100, nvidia messed up the implementation of the architecture, which was then fixed. Here we are seeing the most advanced implementation of a not-so-good architecture that for 3 years has struggled to keep pace with the competition, which in the end decided to go with 1024 shaders + a 128-bit bus in 220mm^2 of die space against 1792 shaders + a 256-bit bus in 356mm^2 of die space, instead of trying to win the longest-FPS-bar war.
AMD, please, review your architecture completely or we are doomed on the next process node.
Oxford Guy - Tuesday, July 21, 2015 - link
"No, it was not a successful product at all"It was successful. Enthusiasts bought them in a significant number and review sites showed off their two and three card rigs. The only site that even showed their miserable performance per watt was techpowerup
Count Vladimir - Thursday, July 16, 2015 - link
So we are discussing 6-year-old products now? Is that your version of logic? Yes, it was hot, yes, it was buggy, but it was still the fastest video card of its era; that's why people bragged about SLI'ing it. The Fury X isn't.
Oxford Guy - Tuesday, July 21, 2015 - link
"So we are discussing 6 year old products now?" strawmancelebrevida - Thursday, July 16, 2015 - link
Looks like Jason Evangelho of PCWorld has the matter settled. In his article: http://www.pcworld.com/article/2947547/components-...
He shows that R9 Fury x2 is on par with GTX 980 Ti x 2 and blows away GTX 980 x2. Considering that R9 Fury x2 is much cheaper than GTX 980 Ti x2 and also R9 Fury is optimized for upcoming DX12, it looks like R9 Fury is the clear winner in cost/performance.
xplane - Saturday, October 17, 2015 - link
So with this GPU I could use 5 monitors simultaneously? Right?
kakapoopoo - Wednesday, January 4, 2017 - link
I got the Sapphire version up to 1150 stable using MSI Afterburner without changing anything else.