
  • tackle70 - Thursday, September 24, 2015 - link

    Nice article. Maybe tech forums can now stop with the "AMD will be vastly superior to Nvidia in DX12" nonsense.
  • cmdrdredd - Thursday, September 24, 2015 - link

    Leads me to believe more and more that either Stardock is up to some shenanigans, or that not every game will use certain DX12 features, and Nvidia is not held back in the games that skip them.
  • Jtaylor1986 - Thursday, September 24, 2015 - link

    I'd say Ashes is a far more representative benchmark. What is the point of doing a landscape-simulator benchmark? This demo isn't even trying to replicate real-world performance.
  • cmdrdredd - Thursday, September 24, 2015 - link

    Are you nuts or what? This is a benchmark of the game engine used for Fable Legends. It's as good a benchmark as any when trying to determine performance in a specific game engine.
  • Jtaylor1986 - Thursday, September 24, 2015 - link

    Except it's completely unrepresentative of actual gameplay, unless the game is a grass-growing simulator.
  • Jtaylor1986 - Thursday, September 24, 2015 - link

    "The benchmark provided is more of a graphics showpiece than a representation of the gameplay, in order to show off the capabilities of the engine and the DX12 implementation. Unfortunately we didn't get to see any gameplay in this benchmark as a result, which would seem to focus more on combat."
  • LukaP - Thursday, September 24, 2015 - link

    You don't need gameplay in a benchmark. You need the benchmark to display common geometry, lighting, effects and physics of an engine/backend that drives certain games. And this benchmark does that. If you want to see gameplay, there are many terrific youtubers who focus on that, namely Markiplier, NerdCubed, TotalBiscuit and others.
  • Mr Perfect - Thursday, September 24, 2015 - link

    Actual gameplay is still important in benchmarking, mainly because that's when framerates usually tank. An empty level can get fantastic FPS, but drop a dozen players having an intense fight into that level and performance goes to hell pretty fast. That's the situation where we hope to see DX12 outshine DX11.
  • Stuka87 - Thursday, September 24, 2015 - link

    Wrong, a benchmark without gameplay is worthless. Look at Battlefield 4 as an example. Its built-in benchmarks are worthless. Once you join a 64 player server, everything changes.

    This benchmark shows how a raw engine runs, but is not indicative of how the game will run at all.

    Plus it's super early in development, with drivers that still need work; as the article states, AMD's driver arrived too late.
  • inighthawki - Thursday, September 24, 2015 - link

    Yes, but when the goal is to show improvements in rendering performance, throwing someone into a 64 player match completely skews the results. The CPU overhead of handling a 64 player multiplayer match will far outweigh the small changes in CPU overhead from a new rendering API.
  • piiman - Saturday, September 26, 2015 - link

    "Yes, but when the goal is to show improvements in rendering performance"

    I'm completely confused with this "comparison"
    How does this story even remotely show how well DX12 works compared to DX11? All they did was a DX12 VIDEO card comparison. It tells us NOTHING about how much faster DX12 is compared to DX11.
  • inighthawki - Saturday, September 26, 2015 - link

    I guess what I mean is that the purpose of a graphics benchmark is not to show real world game performance, it is to show the performance of the graphics API. In this case, the goal is to show that D3D12 works well. Throwing someone into a 64 player match of Battlefield 4 to test a graphics benchmark defeats the purpose because you are introducing a bunch of overhead completely unrelated to graphics.
  • figus77 - Monday, September 28, 2015 - link

    You are wrong; many DX12 features will help in very chaotic situations with many characters and heavy use of AI. This benchmark is about as useful as 3DMark... just look at the images and say the graphics are nice (Witcher 3 in DX11 still looks far better to me).
  • inighthawki - Tuesday, September 29, 2015 - link

    I think you missed the point - I did not say it would not help, I just said that throwing on tons of extra overhead does not isolate the overhead improvements on the graphics runtime. You would get fairly unreliable results due to the massive variation caused by actual gameplay. When you do a benchmark of a specific thing - e.g. a graphics benchmark, which is what this is, then you want to perform as little non-graphics work as possible.
  • mattevansc3 - Thursday, September 24, 2015 - link

    Yes, the game built on AMD technology (Mantle) before being ported to DX12, sponsored by AMD, made in partnership with AMD and received development support from AMD is a more representative benchmark than a 3rd party game built on a hardware agnostic engine.
  • YukaKun - Thursday, September 24, 2015 - link

    Yeah, because Unreal is very neutral.

    Remember the "TWIMTBP" from 1999 to 2010 in every UE game? Don't think UE4 is a clean slate coding wise for AMD and nVidia. They will still favor nVidia re-using old code paths for them, so I'm pretty sure even if the guys developing Fable are neutral (or try to), UE underneath is not.

    Cheers!
  • BillyONeal - Thursday, September 24, 2015 - link

    That's because AMD's developer outreach was terrible at the time, not because Unreal did anything specific.
  • Kutark - Monday, September 28, 2015 - link

    Yes, but you have to remember, Nvidia is Satan, AMD is Jesus. Keep that in mind when you read comments like that and all will make sense
  • Stuka87 - Thursday, September 24, 2015 - link

    nVidia is a primary sponsor of the Unreal Engine.
  • RussianSensation - Thursday, September 24, 2015 - link

    UE4 is not a brand agnostic engine. In fact, every benchmark you see on UE4 has GTX970 beating 290X.

    I have summarized the recent UE4 games where 970 beats 290X easily:
    http://forums.anandtech.com/showpost.php?p=3772288...

    In Fable Legends, a UE4 DX12 benchmark, a 925mhz HD7970 crushes the GTX960 by 32%, while an R9 290X beats GTX970 by 13%. Those are not normal results for UE4 games that have favoured NV's Maxwell architecture.

    Furthermore, we are seeing AMD cards perform exceptionally well at lower resolutions, most likely because DX12 helped resolve their DX11 API draw-call bottleneck (see the sketch at the end of this comment). This is a huge boon for GCN moving forward if more DX12 games come out.

    Looking at other websites, a $280 R9 390 is on the heels of a $450 GTX980.
    http://techreport.com/review/29090/fable-legends-d...

    So really besides 980Ti (TechReport uses a heavily factory pre-overclocked Asus Strix 980TI that boosts to 1380mhz out of the box), the entire stack of NV's cards from $160-500 loses badly to GCN in terms of expected price/performance.

    We should wait for the full game's release and give NV/AMD time to upgrade their drivers but thus far the performance in Ashes and Fable Legends is looking extremely strong for AMD's cards.
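
    As a rough illustration of the draw-call point above: under D3D12 an engine can record draw commands into several command lists on worker threads and hand them to the GPU in one batch, instead of funnelling every draw through a single driver-managed context as in D3D11. The sketch below is generic D3D12-style code under assumed simplifications (device/queue creation, pipeline state, root signature and fencing are all omitted); it is not code from the Fable Legends benchmark or UE4.

        #include <d3d12.h>
        #include <wrl/client.h>
        #include <thread>
        #include <vector>

        using Microsoft::WRL::ComPtr;

        // One recorded chunk of the frame. The allocator must outlive GPU execution,
        // so it is kept next to the command list it backs.
        struct RecordedChunk {
            ComPtr<ID3D12CommandAllocator>    allocator;
            ComPtr<ID3D12GraphicsCommandList> list;
        };

        // Each worker thread records its own slice of the scene into its own command
        // list -- there is no shared, driver-owned immediate context to serialize on.
        static void RecordDraws(ID3D12Device* device, RecordedChunk& chunk, UINT drawCount)
        {
            device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                           IID_PPV_ARGS(&chunk.allocator));
            device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                      chunk.allocator.Get(), nullptr,
                                      IID_PPV_ARGS(&chunk.list));
            for (UINT i = 0; i < drawCount; ++i)
                chunk.list->DrawInstanced(3, 1, 0, 0); // real code binds PSO/buffers first
            chunk.list->Close();
        }

        // Record four chunks in parallel, then submit them with a single cheap call.
        void BuildFrame(ID3D12Device* device, ID3D12CommandQueue* queue)
        {
            constexpr int kThreads = 4;
            std::vector<RecordedChunk> chunks(kThreads);
            std::vector<std::thread>   workers;

            for (int t = 0; t < kThreads; ++t)
                workers.emplace_back(RecordDraws, device, std::ref(chunks[t]), 1000u);
            for (auto& w : workers)
                w.join();

            ID3D12CommandList* lists[kThreads];
            for (int t = 0; t < kThreads; ++t)
                lists[t] = chunks[t].list.Get();
            queue->ExecuteCommandLists(kThreads, lists);
        }

    The per-draw CPU cost in this model is part of what the low-resolution numbers in the article measure, which is why a card that was draw-call-limited under DX11 can gain so much here.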
  • TheJian - Saturday, September 26, 2015 - link

    "There is a big caveat to remember, though. In power consumption tests, our GPU test rig pulled 449W at the wall socket when equipped with an R9 390X, versus 282W with a GTX 980. The delta between the R9 390 and GTX 970 was similar, at 121W. "

    You seem to see through rose-colored glasses. At these kinds of watt differences you SHOULD dominate everything...LOL. Meanwhile NV guys have plenty of watts to OC and laugh. You're completely ignoring the cost of watts these days; we're talking about an extra 100W bulb running for hours on end over the 3-7 years many of us keep our cards (rough numbers at the end of this comment). You're also forgetting that most cards can hit Strix speeds anyway, right? NOBODY buys stock when you can buy an OC version from all vendors for not much more.

    "Early tests have shown that the scheduling hardware in AMD's graphics chips tends to handle async compute much more gracefully than Nvidia's chips do. That may be an advantage AMD carries over into the DX12 generation of games. However, Nvidia says its Maxwell chips can support async compute in hardware—it's just not enabled yet. We'll have to see how well async compute works on newer GeForces once Nvidia turns on its hardware support."

    You also seem to ignore that your own link (TechReport) even states NV has async turned off for now. I'm guessing they're just waiting for all the DX12 stuff to hit, seeing if AMD can catch them, then boom, hello more perf...LOL.

    https://techreport.com/review/28685/geforce-gtx-98...
    "Thanks in part to that humongous cooler, the Strix has easily the highest default clock speeds of any card in this group, with a 1216MHz base and 1317MHz boost"
    A little less than you say, but yes, NV gives you free room to run to WHATEVER your card can do within the allowed limit. Unlike AMD's UP TO crap, with NV you get GUARANTEED X, and more if available. I prefer the latter. $669 at Amazon for the Strix, so for $20 I'll take the massive gain in perf (cheapest at Newegg is $650 for a 980Ti). I'll get it back in watts saved on electricity in no time. You completely ignore Total Cost of Ownership, not to mention DRIVERS and how rare AMD driver drops are. NV puts out a WHQL driver monthly or more.

    https://techreport.com/review/28685/geforce-gtx-98...
    Any time you offer me ~15% perf for 3% cost I'll take it. If you tell me electricity costs mean nothing, in the same sentence I'll tell you $20 means nothing then, on the price of a card most live with for years.

    Frostbite is NOT brand agnostic. Cough, Mantle, 8 mil in funding, cough... The fact that MANY games run better in DX11 for NV is just DRIVERS and time spent with DEVS (Witcher 3, Project Cars etc; the devs said this). This should be no surprise when R&D has been down for 4 years at AMD while the reverse is true at NV (who now spends more on R&D than AMD, who has a larger product line).

    Shocker: ASHES looks good for AMD when it was a MANTLE engine game...ROFL. Jeez, guy... Even funnier that once NV optimized for Star Swarm they had massive DX12 improvements and BEAT AMD in it, not to mention the massive DX11 improvement too (which AMD ignored). Gamers should look at who has the funding to keep up in DX11 for a while too, correct? AMD seems to have moved on to DX12 (not good for those poor gamers who can't afford new stuff, right?). You only seem to see the arguments for YOUR side. Near as I can tell, NV looks good except where I will not play anyway (1280x720, or crap CPUs). Also, you're basing all your conclusions on BETA games and the current state of drivers, before any of this stuff is real...LOL.

    You can call the Unreal engine unrealistic, but I'll remind you it has been used in TONS of games over the last two decades, so AMD had better be good here at some point. You can't repeatedly lose in one of the most prolific engines on the planet, right? You can't just claim "that engine is biased" and ignore the fact that it is REALITY that it will be used a LOT. If all engines were BIASED towards AMD, I would buy AMD no matter what NV put out, if AMD wins everything...ROFL. I don't care about the engine, I care about the results of the cards running the games I play. IF NV pays off every engine designer, I'll buy NV because... well, DUH. You can whine all you want, but GAMERS are buying 82% NV for a reason. I bought an INTEL i7 for a REASON. I don't care if they cheat, pay someone off, use proprietary tech etc; as long as they win, I'll buy it. I might complain about the cheating, but if it wins, I'll buy it anyway...LOL.

    IE, I don't have to LIKE Donald Trump to understand he knows how to MAKE money, unlike most of congress/Potus. He's pretty famous for FIRING people too, which again, congress/potus have no idea how to get done apparently. They also have no idea how to manage a budget, which again, TRUMP does. They have no idea how to protect the border, despite claiming they'll do it for a decade or two. I'll take that WALL please trump (which works in israel, china, etc), no matter how much it costs compared to decades of welfare losses, education dropping, medical going to illegals etc. The wall is CHEAP (like an NV card over 3-7yrs of usage at 120w+ or more savings as your link shows). I can hate trump (or Intel, or NV) and still recognize the value of his business skills, negotiation skills, firing skills, budget skills etc. Get it? If ZEN doesn't BURY Intel in perf, I'll buy another i7 for my dad...LOL.

    http://www.anandtech.com/show/9306/the-nvidia-gefo...
    Even anandtech hit strix speeds with ref. Core clocks of 250mhz free on 1000mhz? OK, sign me up. 4 months later likely everything does this or more as manufacturing only improves over time. All of NV cards OC well except for the bottom rungs. Call me when AMD wins where most gamers play (above 720P and with good cpus). Yes DX12 bodes well for poor people, and AMD's crap cpus. But I'm neither. Hopefully ZEN fixes the cpu side so I can buy AMD again. They still have a shot at my die shrunk gpu next year too, but not if they completely ignore DX11, keep failing to put out game ready drivers, lose the watt war etc. ZEN's success (or not) will probably influence my gpu sale too. If ZEN benchmarks suck there will probably be no profits to make my gpu drivers better etc. Think BIGGER.
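
    For what it's worth, here is a rough sketch of the electricity argument, taking the ~121 W wall-power delta quoted above and assuming 4 hours of gaming per day at $0.12/kWh (the hours and the rate are assumptions, not figures from the article or this comment):

        \[ 121\,\mathrm{W} \times 4\,\mathrm{h/day} \times 365\,\mathrm{days} \approx 177\,\mathrm{kWh/yr} \]
        \[ 177\,\mathrm{kWh/yr} \times \$0.12/\mathrm{kWh} \approx \$21\,\mathrm{per\ year} \]

    That is roughly $64-$148 over a 3-7 year ownership window; heavier use or pricier electricity scales it up proportionally, lighter use shrinks it.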
  • anubis44 - Friday, October 30, 2015 - link

    As already mentioned, nVidia pulled out the seats, the parachutes and anything else they could unscrew and threw them out of the airplane to lighten the load. Maxwell's low power usage comes at a price, like no hardware-based scheduler, and DX12 games will now frequently make use of one for context switching and dynamic reallocation of shaders between rendering and compute. Why? Because the Xbox One and the PS4, having AMD Radeon GCN cores, can do this. So in the interest of getting the power usage down, nVidia left out a hardware feature even the PS4 and Xbox One GPUs have. Does that sound smart? It's called 'marketing': "Hey look! Our card uses LESS POWER than the Radeon! It's because we're using super-duper, secret technologies!" No, you're leaving stuff off the die. No wonder it uses less power.
  • RussianSensation - Thursday, September 24, 2015 - link

    A 925MHz HD7970 is beating the GTX960 by 32%. The R9 280X currently sells for $190 on Newegg and it has another 13.5% increase in GPU clocks, which implies it would beat the 960 by a whopping 40-45% (quick math at the end of this comment)!

    An R9 290X beating a 970 by 13% in a UE4 engine is extremely uncharacteristic. I can't recall this ever happening. Also, other sites are showing a $280 R9 390 on the heels of the $450 GTX980.

    http://www.pcgameshardware.de/DirectX-12-Software-...

    That's an extremely bad showing for NV in each competing pricing segment, except for the 980Ti card. And because UE4 has significantly favoured NV's cards under DX11, this is actually a game engine that should have favoured NV's Maxwell as much as possible. Now imagine DX12 in a brand agnostic game engine like CryEngine or Frostbite?

    In the end it's not going to matter to gamers who upgrade every 2 years, but budget gamers who cannot afford to do so should pay attention.
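
    A quick check of the 40-45% figure above: if performance scaled linearly with GPU clock (it usually scales somewhat worse than that, which is presumably why the estimate is kept below the linear number), the 280X's extra clock speed would compound with the 7970's existing lead as

        \[ 1.32 \times 1.135 \approx 1.50 \]

    i.e. roughly +50% over the GTX 960 in the ideal case, with 40-45% as the more conservative, sub-linear estimate.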
  • CiccioB - Friday, September 25, 2015 - link

    925mhz HD7970 is beating GTX960 by 32%

    Ahahahah... and what is that supposed to prove? That a chip twice as big and consuming twice the energy can perform 32% better than another?
    Oh, sorry, you were speaking about prices... yes... so you are just saying that this power-sucking beast has a hard time selling, while the winning little chip is filling nvidia's pockets and the competitor can only be had through stock-clearing sales?
    I can't really understand these kinds of comparisons. The GTX960 runs against the Radeon 285, or now the 380. It performs fantastically for the size of its die and the power it draws, and it has pretty much cornered AMD's margins on boards that mount a beefy GPU like Tahiti or Tonga.
    The only hope for AMD to come out of this pitiful situation is that, with the next generation and a new process node, their performance/die-area ratio gets closer to the competition's, or they won't gain a single cent from the graphics division for a few years yet again.
  • The_Countess - Friday, September 25, 2015 - link

    Ya, you seem to have forgotten that the HD7970 is 3+ years old while the GTX960 was released this year, and that it has only about 40% more transistors (~4.3 billion vs ~3 billion).

    and the only reason nvidia's power consumption is better is because they cut double precision performance on all their cards down to nothing.
  • MapRef41N93W - Saturday, September 26, 2015 - link

    So wrong it's not even funny. Maybe you aren't aware of this, but small die Kepler already had DP cut. Only GK100/GK110 had full DP with Kepler. That has nothing to do with why GM204/206 have such low power draw. The Maxwell architecture is the main reason.
  • Azix - Saturday, September 26, 2015 - link

    cut hardware scheduler?
  • Asomething - Sunday, September 27, 2015 - link

    Sorry to burst your bubble, but nvidia didn't cut DP completely on small Kepler; they cut it down some from Fermi but disabled the rest so they could keep DP on their Quadro series, and there were softmods to unlock that DP. For Maxwell they actually did completely cut DP to save on die space and power consumption. AMD did the same for GCN 1.2's Fiji in order to get it on 28nm.
  • CiccioB - Monday, September 28, 2015 - link

    I don't really care how old Tahiti is. I know it was used as a comparison against a chip that is half its size and power consumption ON THE SAME process. So how old it is doesn't really matter. Same process, so what should count is how good both architectures are.
    What counts is that AMD has not done anything radical to improve its architecture. It replaced Tahiti with a similarly beefy GPU, Tonga, which didn't really stand a chance against Maxwell. Those were the new proposals from both companies: Maxwell vs GCN 1.2. See the results.
    So again, go and look at how big GM206 is and how much power it draws. Then compare with Tonga, and the only thing you can see as similar is the price. nvidia's solution beats AMD's from every point of view, bringing AMD's margins to nothing, even though nvidia is still selling its GPU at a higher price than it really deserves.
    In reality one should compare Tahiti/Tonga with GM204 on size and power consumption. The results would simply put AMD's GCN architecture into the toilet. The only reasonable move was to lower the price so much that they could sell a higher-tier GPU in a lower series of boards.
    Judged on performance per die area and per watt, GCN is no hero; it has actually worsened AMD's position even further compared to the old VLIW architecture, where AMD fought to similar performance with smaller dies (and lower power consumption).
  • CiccioB - Monday, September 28, 2015 - link

    I forgot... about double precision... I still don't care about it. Do you use it in your everyday life? How many professional boards is AMD selling that would justify putting DP units into such GPUs?
    Just for numbers on a nicely painted box? DP is a non-necessity for 99% of users.

    And apart from that stupid thing, nvidia's DP units were not present on GK104/GK106 either, so the big efficiency gain has been made by improving the architecture (from Kepler to Maxwell), while AMD just moved from GCN 1.0 to GCN 1.2 with almost no efficiency gains.
    The problem is not whether DP units are present or not. It is that AMD could not make its already struggling architecture better in absolute terms than the old version. And with Fiji they demonstrated that they could do even worse, in case anyone had any doubts.
  • anubis44 - Friday, October 30, 2015 - link

    The point is not whether you use DP, the point is that the circuitry is now missing, and that's why Maxwell uses less power. If I leave stuff out of a car, it'll be lighter, too. Hey look! No back seats anymore, and now it's LIGHTER! I'm a genius. It's not because nVidia whipped up a can of whoop-ass, or because they have magic powers, it's because they threw everything out of the airplane to make it lighter.
  • anubis44 - Friday, October 30, 2015 - link

    And left out the hardware based scheduler, which will bite them in the ass for a lot of DX12 games that will need this. No WAIT! nVidia isn't screwed! They'll just sell ANOTHER card to the nVidiots who JUST bought one that was obsolete, 'cause nVidia is ALWAYS better!
  • Alexvrb - Thursday, September 24, 2015 - link

    Not every game uses every DX12 feature, and knowing that their game is going to run on a lot of Nvidia hardware makes developers conservative in their use of new features that hurt performance on Nvidia cards. For example, as long as developers are careful with async compute and you've got plenty of CPU cycles, I think everything will be fine.

    Now, look at the 720p results. Why the change in the pecking order? Why do AMD cards increase their lead as CPU power falls? Is it a driver overhead issue - possibly related to async shader concerns? We don't know. Either way it might not matter; an early benchmark isn't even necessarily representative of the final thing, let alone a real-world experience.

    In the end it will depend on the individual game. I don't think most developers are going to push features really hard that kill performance on a large portion of cards... well not unless they get free middleware tools and marketing cash or something. ;)
  • cityuser - Sunday, September 27, 2015 - link

    Quite sure it's nvidia again doing some nasty work with the game company to degrade the performance of AMD cards!!!
    Look at what nvidia cannot corrupt: Futuremark's benchmark tells another story!!!
  • Drumsticks - Thursday, September 24, 2015 - link

    As always, it's only one data point. It was too early to declare AMD a winner then, but it's still too early to say they aren't actually going to benefit more from DX12 than Nvidia. We need more data to say for sure either way.
  • geniekid - Thursday, September 24, 2015 - link

    That's crazy talk.
  • Beararam - Thursday, September 24, 2015 - link

    Maybe not "vastly superior", but the gains in the 390X seem to be greater than those realized in the 980. Time will tell.

    https://youtu.be/_AH6pU36RUg?t=6m29s
  • justniz - Thursday, September 24, 2015 - link

    Such a large gain only on AMD just from DX12 (i.e. accessing the GPU at a lower level and bypassing AMD driver's DX11 implementation) is yet more evidence that AMD's DX11 drivers are much more of a bottleneck than nVidia's.
  • Gigaplex - Thursday, September 24, 2015 - link

    That part was pretty obvious. The current question is, how much of a bottleneck. Will DX12 be enough to put AMD in the lead (once final code starts shipping), or just catch up?
  • lefty2 - Thursday, September 24, 2015 - link

    I wonder if they were pressured not to release any benchmark that would make Nvidia look bad, similar to the way they did with Ashes of the Singularity.
  • medi03 - Thursday, September 24, 2015 - link

    "AMD had driver with better results, but we didn't use it", "oh, Bryan tested it, but he's away" adds some sauce to it.
  • Oxford Guy - Thursday, September 24, 2015 - link

    "we are waiting for a better time to test the Ashes of the Singularity benchmark"

    L-O-L
  • Frenetic Pony - Thursday, September 24, 2015 - link

    This is, as usual, a trollish, clickbait response. The truth is far more complex than whether one side "wins". Here we can see AMD's older card once again benefiting greatly from DX12, to the point where it clearly pulls ahead of Nvidia's similarly priced options. Yet on the high end it seems Nvidia has scaled its GPU architecture better than AMD has, with the 980Ti having the advantage. So technology, like life, is complicated and not prone to simple quips that accurately reflect reality.
  • jospoortvliet - Friday, September 25, 2015 - link

    True. One thing has not changed and becomes more pronounced with DirectX12: AMD offers better performance at every price point.
  • Th-z - Thursday, September 24, 2015 - link

    Are you referring to the AotS results? That benchmark stresses different things than this classic flyby benchmark; both are useful in their own right. AotS is more akin to real gameplay and stresses the draw-call capability of DX12, which is *the* highlight of DX12.

    My question for Ryan: Anandtech didn't test AotS because you said it's still in an early developmental stage, but this one isn't? I say test them all and make interesting before-and-after case studies regardless. Also, have you considered improving your comment section?
  • DoomGuy64 - Friday, September 25, 2015 - link

    Incorrect. The $650 Ti is the only Nvidia card better in dx12, and it has 96 ROPs, compared to the Fury's 64, not that Fury is actually doing that bad. AMD on the other hand, is cleaning up with the mid-range cards, which is what most people are buying.
  • masaville888 - Saturday, October 10, 2015 - link

    I left AMD for good after too many years of technical issues such as artifacting, poorly optimized drivers and so forth. I always had a good experience with nVidia and after going back I have no regrets. AMD seems more interested in tech announcements than user experience. If they figure out the customer side of things they have the potential to be great, but not until then.
  • lprates - Thursday, October 15, 2015 - link

    I totally Agree
  • anubis44 - Friday, October 30, 2015 - link

    It's not nonsense. AMD Radeon cards have a hardware based scheduler. These tests don't make any use of asynchronous shaders, but it IS a DX12 feature, and one which will hit the Maxwells hard, since they don't have a hardware based scheduler. nVidia left it out to get the power consumption down. Too bad it'll be needed in many upcoming DX12 titles.
  • Bleakwise - Tuesday, December 29, 2015 - link

    You think?

    They didn't even benchmark the 300 series cards, and look at this: at 1080p a 290X is about 11% faster than a 970, and a 285 is 20% faster than a 960.

    I mean holy shit.

    Also, why did Anandtech use the last-gen AMD cards instead of the 300 series cards (no, they aren't just rebrands)? Why didn't they do 1440p benchmarks? What the hell?
  • Bleakwise - Tuesday, December 29, 2015 - link

    Also, "the driver got here late" my ass.

    What a bullshit excuse. It takes a couple hours to benchmark a card. The review couldn't wait one more day? Really? A review that's obsolete before it's even posted is better than posting a relevant review a day later?
  • Uxi - Thursday, September 24, 2015 - link

    Picture/Graph nr. 2 on the CPU scaling page seems to be wrong. It should be the 980 Ti, not the Fury X.
  • Brett Howse - Thursday, September 24, 2015 - link

    Fixed tyvm!
  • Drumsticks - Thursday, September 24, 2015 - link

    Any chance of having some AMD cpus tacked onto this? DX 12 is supposed to help them out after all, so it would be interesting to see if they've made any gains here.
  • Ian Cutress - Thursday, September 24, 2015 - link

    It's something we've thought of. A main issue is that the reviewer with all the GPUs, Ryan, is on the West Coast and the one with all the CPUs, me, is in Europe. So if Ryan does a piece it'll have lots of GPUs (and take time out of other things, as he is Editor in Chief) and be light on CPUs. If I do it, it'll be limited to the GPU stack I have (R9 290X, R9 285, GTX 980, GTX 770, some of these non-reference). We did this with Star Swarm and got a select group of angry emails claiming we were biased one way or another for not doing a full test matrix, and that we were being paid off.

    That aside, when we get closer to launch of this game and others with DX12, I'll update the tests on our CPU test bed for 2016, and maybe get a new GPU or two with whatever is available at the time.
  • britjh22 - Thursday, September 24, 2015 - link

    Sounds like we need to get Ryan an FX-8320 and 990FX board, can you partially disable the FX processors in the same way to replicate the 6 and 4 series like you can with the i7?
  • R0H1T - Thursday, September 24, 2015 - link

    A better idea would be to ship Ian to the states, also since import duties are lower than that in Europe (:

    j/k
  • Alexvrb - Friday, September 25, 2015 - link

    Ship them both to the East Coast and set up a Review Office / Beach Resort, complete with community events!
  • zimanodenea - Thursday, September 24, 2015 - link

    My Asus m5a97 has an option to do this.
  • mdriftmeyer - Thursday, September 24, 2015 - link

    Time to develop a test harness of equal merit and scope across the globe for the reviewers. To do less is unprofessional. The whole point of a test harness is not to duct-tape simulations together but to cover all bases.
  • Spunjji - Friday, September 25, 2015 - link

    Well said. This isn't some tinpot organisation, is it? ;)
  • Drumsticks - Thursday, September 24, 2015 - link

    That's a shame. I'd really like to see that comparison. With the improvements Zen should, in theory, bring, it could really give AMD its best chance in years to get some wind in its sails.
  • beck2050 - Thursday, September 24, 2015 - link

    A little too early to worry about. Hopefully both companies will improve when DX12 becomes standard issue.
  • DrKlahn - Thursday, September 24, 2015 - link

    Epic has always worked closely with Nvidia and used their hardware, so the only thing that surprises me is that the gap doesn't favor Nvidia more. It's very early to make any predictions, but there are some interesting conversations on other forums about how both architectures behave in different situations. Nvidia's architecture does appear to have issues in some asynchronous workloads. What little evidence we have says this may be an issue in some games.

    My own opinion is that with Nvidia's market dominance we will see most developers try to avoid situations where problems occur. As an AMD owner my main hope is that we see DX12 squeeze out proprietary codes and level the playing field more. I'm also happy that the latest Unreal engine appears to run well on both vendors hardware.
  • jiao lu - Thursday, September 24, 2015 - link

    It's not only a close working relationship: Unreal 3/4 use Nvidia's PhysX SDK outright. Epic's engine is terribly optimized for consoles right now; basically it is a PC engine that churns out PC demos now and then. Far fewer AAA studios use Unreal 4 now than used Unreal 3 in the PS3/Xbox 360 era. I am very suspicious that Unreal 5 will not be multi-threaded enough in its rendering, and will use DX12 the way DX11 was used before.
  • Midwayman - Thursday, September 24, 2015 - link

    Well, the xbox one is using AMD hardware and dx12. That's probably a bigger reason to keep it neutral than more nvidia share on the PC.
  • Spunjji - Friday, September 25, 2015 - link

    The PS4 is also using the same AMD GCN 1.0 graphics architecture (and the same AMD CPU cores).
  • Nenad - Thursday, September 24, 2015 - link

    I suggest using 2560x1440 as one of the tested resolutions for articles where top cards are an important part, since that is currently the sweet spot for top-end cards like the GTX 980Ti or AMD Fury.

    I know that, in the Steam survey, that resolution is not nearly as well represented as 1920x1080, but neither are cards like the 980Ti and Fury X.
  • Jtaylor1986 - Thursday, September 24, 2015 - link

    The benchmark doesn't support this.
  • Peichen - Thursday, September 24, 2015 - link

    Buy a Fury X if you want to play Fable at 720p. Buy a 980Ti if you want to play 4K. And remember, get a fast Intel CPU if you are playing at 720p otherwise your Fury X won't be able to do 120+fps.
  • DrKlahn - Thursday, September 24, 2015 - link

    The Fury is 2fps less at 4k (though a heavily overclocked 980ti may increase this lead), so I'd say both are pretty comparable at 4K. Fury also scales better at lower resolutions with lower end CPU's so I'm not sure what point you're trying to make. Although to be fair none of the cards tested struggle to maintain playable frame rates at lower resolutions.
  • Asomething - Thursday, September 24, 2015 - link

    Man what a strange world where amd has less driver overhead than the competition.
  • extide - Thursday, September 24, 2015 - link

    Also, remember there are those newer AMD drivers that were not able to be used for this test. I could easily see a new driver gaining 2+ fps to match/beat the 980 ti in this game.
  • Gigaplex - Thursday, September 24, 2015 - link

    If you're buying a card specifically for one game at 720p, you wouldn't be spending top dollar for a Fury X.
  • Jtaylor1986 - Thursday, September 24, 2015 - link

    Kind of surprising you chose to publish this at all. Given how limited the testing options are in the benchmark, this release seems a little too much like a pure marketing stunt from Lionhead and Microsoft for my tastes, rather than a DX12 showcase. The fact that it doesn't include a DX11 option is a dead giveaway.
  • piiman - Saturday, September 26, 2015 - link

    "The fact that it doesn't include a DX11 option is a dead giveaway."

    My thought also. What's the point of this benchmark if there are no DX11 numbers to compare it to?
  • Gotpaidmuch - Thursday, September 24, 2015 - link

    How come GTX 980 was not included in the tests? Did it score that bad?
  • Gotpaidmuch - Thursday, September 24, 2015 - link

    Sad day for all of us when even the small wins, that AMD gets, are omitted from the benchmarks.
  • Oxford Guy - Thursday, September 24, 2015 - link

    "we are waiting for a better time to test the Ashes of the Singularity benchmark"
  • ZipSpeed - Thursday, September 24, 2015 - link

    The 7970 sure has legs. Turn the quality down one notch from ultra to high, and the card is still viable for 1080p gaming.
  • looncraz - Thursday, September 24, 2015 - link

    As a long-time multi-CPU/threaded software developer, I'd say AMD's results show one thing quite clearly: they have some unwanted lock contention in their current driver (see the sketch at the end of this comment).

    As soon as that is resolved, we should see a decent improvement for AMD.

    On another note, am I the only one that noticed how much the 290X jumped compared to the rest of the lineup?!

    Does that put the 390X on par with the 980 for Direct X 12? That would be an interesting development.
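
    To illustrate the kind of issue being described (a generic sketch of lock contention in a submission path, not actual driver code): if every thread's work submission has to pass through one global lock, the threads spend their time waiting on each other; giving each thread its own buffer and merging once per frame removes most of that serialization, which is essentially the model DX12 command lists push drivers toward.

        #include <mutex>
        #include <thread>
        #include <vector>

        // Contended path: every submission from every thread funnels through one lock,
        // so under load the threads mostly wait instead of doing useful work.
        struct ContendedSubmitter {
            std::mutex lock;
            std::vector<int> commands;
            void Submit(int cmd) {
                std::lock_guard<std::mutex> guard(lock);  // global serialization point
                commands.push_back(cmd);
            }
        };

        // Low-contention path: each thread records into its own buffer, and the buffers
        // are merged once per frame -- one short merge instead of one lock per command.
        struct PerThreadSubmitter {
            std::vector<std::vector<int>> perThread;
            explicit PerThreadSubmitter(size_t threads) : perThread(threads) {}
            void Record(size_t tid, int cmd) { perThread[tid].push_back(cmd); }
            std::vector<int> Flush() {
                std::vector<int> merged;
                for (auto& q : perThread) {               // single-threaded merge at frame end
                    merged.insert(merged.end(), q.begin(), q.end());
                    q.clear();
                }
                return merged;
            }
        };

        int main() {
            constexpr size_t kThreads = 4, kCommands = 100000;
            PerThreadSubmitter fast(kThreads);
            std::vector<std::thread> workers;
            for (size_t t = 0; t < kThreads; ++t)
                workers.emplace_back([&, t] {
                    for (size_t i = 0; i < kCommands; ++i)
                        fast.Record(t, static_cast<int>(i));
                });
            for (auto& w : workers) w.join();
            return fast.Flush().size() == kThreads * kCommands ? 0 : 1;
        }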
  • mr_tawan - Thursday, September 24, 2015 - link

    Well, even if UE4 uses DX12, it would probably be just a straight port from DX11 (rather than from the XBONE or another console). The approach it takes may not favour AMD as much as Nvidia; who knows?

    Also, I think the Nvidia people would have been involved with the engine development more than AMD (due to the size of their developer relations team, I guess). Oxide Games also mentioned that they got this kind of involvement as well (even though the game is an AMD title).
  • tipoo - Thursday, September 24, 2015 - link

    Nice article. Looks like i3s are going to only get *more* feasible for gaming rigs under DX12. There's still the odd title that suffers without quads though, but most console ports at least should do fine.
  • ThomasS31 - Thursday, September 24, 2015 - link

    Still not a game performance test... nor a CPU test.

    There is no AI... and I guess a lot more is missing that would make a difference in CPU as well.

    Though yeah... kinda funny that an i3 is "faster" than an i5/i7 here. :)
  • Traciatim - Thursday, September 24, 2015 - link

    This is what I was thinking too. I thought that DX12 might shake up the old rule of thumb saying 'i5 for gaming and i7 for working' but it seems to be this still holds true. In some cases it might even make more sense budget wise to go for a high end i3 and sink as much in to your video card as possible rather than go for an i5 depending on where your budget and current expected configuration are.

    More CPU benchmarking and DX12 benchmarks are needed of course, but it still looks like the design of machines isn't going to change all that much.
  • Margalus - Friday, September 25, 2015 - link

    This test shows absolutely nothing about "gaming". It is strictly rendering. When it comes to "gaming" I believe your i3 is going to drop like a rock once it has to start dealing with AI and other "gaming" features. Try playing something like StarCraft or Civilization on your i3. I don't think it's going to cut the mustard in the real world.
  • joex4444 - Thursday, September 24, 2015 - link

    As far as using X79 as the test platform here goes, I'm mildly curious what sort of effect the quad channel RAM had. Particularly with Core i3, most people pair that with 2x4GB of cheap DDR3 and won't be getting even half the memory bandwidth your test platform had available.

    Also fun would be to switch to X99 and test the Core i7-5960X, though dropping an E5-2687W in the X79 platform (hey, it *is* supported after all).
  • Traciatim - Thursday, September 24, 2015 - link

    RAM generally has very little to no impact on gaming except for a few strange cases (like F1).

    Though the machine still has its full cache available, so while the i3 test isn't quite the same thing as a real i3, it should be close enough that you wouldn't notice the difference.
  • Mr Perfect - Thursday, September 24, 2015 - link

    In the future, could you please include/simulate a 4 core/8 thread CPU? That's probably what most of us have.
  • Oxford Guy - Thursday, September 24, 2015 - link

    How about Ashes running on a Fury and a 4.5 GHz FX CPU.
  • Oxford Guy - Thursday, September 24, 2015 - link

    and a 290X, of course, paired against a 980
  • vision33r - Thursday, September 24, 2015 - link

    Just because a game supports DX12 doesn't mean it uses all DX12 features. It looks like they have DX12 as a checkbox without really utilizing the complete DX12 feature set. We have to see more DX12 implementations to know for sure how each card stacks up.
  • Wolfpup - Thursday, September 24, 2015 - link

    I'd be curious about a DirectX 12 vs 11 test at some point.

    Regarding Fable Legends, WOW am I disappointed by what it is. I shouldn't be in a sense, I mean I'm not complaining that Mario Baseball isn't a Mario game, but still, a "free" to play deathmatch type game isn't what I want and isn't what I think of with Fable (Even if, again, really this could be good for people who want it, and not a bad use of the license).

    Just please don't make a sequel to New Vegas or Mass Effect or Bioshock that's deathmatch LOL
  • toyotabedzrock - Thursday, September 24, 2015 - link

    You should have used the new driver, given you were told it was related to this specific game preview.
  • Shellshocked - Thursday, September 24, 2015 - link

    Does this benchmark use Async compute?
  • Spencer Andersen - Thursday, September 24, 2015 - link

    Negative, Unreal Engine does NOT use async compute except on Xbox One. Considering that is one of the main features of the newer APIs, what does that tell you? Nvidia + Unreal Engine = BFF. But I don't see it as a big deal, considering that Frostbite and likely other engines already have most if not all DX12 features built in, including async compute (see the sketch at the end of this comment).

    Great article guys, looking forward to more DX12 benchmarks. It's an interesting time in gaming to say the least!
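
    For context on what "async compute" means at the API level, a minimal D3D12-style sketch is below: the application creates a second, compute-only queue, and work submitted there may overlap with graphics work on hardware that can run the two concurrently. This is illustrative only -- it is not UE4 or Fable Legends code, and whether the overlap actually happens depends on the GPU and driver.

        #include <d3d12.h>
        #include <wrl/client.h>

        using Microsoft::WRL::ComPtr;

        // Create a dedicated compute queue alongside the usual direct (graphics) queue.
        ComPtr<ID3D12CommandQueue> CreateComputeQueue(ID3D12Device* device)
        {
            D3D12_COMMAND_QUEUE_DESC desc = {};
            desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute-only queue
            ComPtr<ID3D12CommandQueue> queue;
            device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
            return queue;
        }

        // Submit a pre-recorded compute command list (e.g. a lighting pass written as a
        // compute shader) on the compute queue, then use a fence so the graphics queue
        // only waits at the point where it actually consumes the results.
        void SubmitAsyncCompute(ID3D12CommandQueue* computeQueue,
                                ID3D12CommandQueue* graphicsQueue,
                                ID3D12GraphicsCommandList* computeList,
                                ID3D12Fence* fence, UINT64 fenceValue)
        {
            ID3D12CommandList* lists[] = { computeList };
            computeQueue->ExecuteCommandLists(1, lists);
            computeQueue->Signal(fence, fenceValue);   // compute queue marks completion
            graphicsQueue->Wait(fence, fenceValue);    // graphics queue waits only here
        }

    How much any of this helps depends on how the hardware schedules the two queues, which is exactly the point being argued about in this thread.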
  • oyabun - Thursday, September 24, 2015 - link

    There is something wrong with the webpages of the article, an ad by Samsung seems to cover the entire page and messes up all the rendering. Furthermore wherever I click a new tab opens at www.space.com! I had to reload several times just to be able to post this!
  • Ian Cutress - Thursday, September 24, 2015 - link

    Please screenshot any issue like this you find and email it to us. :)
  • Frenetic Pony - Thursday, September 24, 2015 - link

    Interesting to note: for compute shader performance we see nvidia clearly in the lead, winning both compute and dynamic GI (which here is compute-based). Once we get to pixel operations we see a clear lead for AMD, i.e. post processing, direct lighting and transparency. When we switch back to geometry, the G-buffer, Nvidia again leads. Interesting to see where each needs to catch up.
  • NightAntilli - Thursday, September 24, 2015 - link

    Once again I wish AMD CPUs were included for the performance and scaling... Both the old FX CPUs and stuff like the Athlon 860k.
  • Oxford Guy - Friday, September 25, 2015 - link

    They claimed that readers aren't interested in seeing FX benchmarked. That doesn't explain why like 8 APUs were included in the Broadwell review or whatever and not one decently-clocked FX.

    I also don't see why the right time to do Ashes wasn't shortly after ArsTechnica's article about it rather than giving it the GTX 960 "coming real soon" treatment.
  • NightAntilli - Sunday, September 27, 2015 - link

    Considering that DX12 is supposed to greatly reduce CPU overhead and be able to scale well across multiple cores, this is one of the most interesting benchmarks that can be shown. But yeah. It seems like there's a political reason behind it.
  • Iridium130m - Thursday, September 24, 2015 - link

    Shut hyperthreading off and run the tests again...be curious if the scores for the 6 core chip come up any...we may be in a situation where hyperthreading provides little benefit in this use case if all the logical cores are doing the exact same processing and bottlenecking on the physical resources underneath.
  • Osjur - Friday, September 25, 2015 - link

    Dat 7970 vs 960 makes me have wtf moment.
  • gamerk2 - Friday, September 25, 2015 - link

    Boy, I'm looking at those 4k Core i3, i5, and i7 numbers, and can't help but notice they're basically identical. Looks like the reduced overhead of DX12 is really going to benefit lower-tier CPUs, especially the Core i3 lineup.
  • ruthan - Sunday, September 27, 2015 - link

    Don't worry, Intel will find a way to cripple the i3 even more.
  • Mugur - Friday, September 25, 2015 - link

    This game will be DX12-only since it's Microsoft; that's why it's a Windows 10 and Xbox One exclusive.

    Where can I find some benchmarks with the new AMD driver and this Fable Legends?
  • HotBBQ - Friday, September 25, 2015 - link

    Please avoid using green and red together for plots. It's nigh impossible to distinguish if you are red-green colorblind (the most common).
  • Crunchy005 - Friday, September 25, 2015 - link

    So we have a 680, 970, 980Ti. Why is the 980 missing, and why are there no 700 series cards from nvidia? The 700s were the original cards to go against things like the 7970, 290, 290X. It would be nice to see whether those cards are still relevant, although their absence from benchmarks says otherwise. Also, the missing 980 is a bit concerning.
  • Daniel Williams - Friday, September 25, 2015 - link

    It's mostly time constraints that limit our GPU selection. So many GPU's with not so many hours in the day. In this case Ryan got told about this benchmark just two days before leaving for the Samsung SSD global summit and just had time to bench the cards and hand the numbers to the rest of us for the writeup.

    It surely would be great if we had time to test everything. :-)
  • Oxford Guy - Friday, September 25, 2015 - link

    Didn't Ashes come out first?
  • Daniel Williams - Friday, September 25, 2015 - link

    Yes but our schedule didn't work out. We will likely look at it at a later time, closer to launch.
  • Oxford Guy - Saturday, September 26, 2015 - link

    So the benchmark that favors AMD is brushed to the side but this one fits right into the schedule.

    This is the sort of stuff that makes people wonder about this site's neutrality.
  • Brett Howse - Saturday, September 26, 2015 - link

    I think you are fishing a bit here. We didn't even have a chance to test Ashes because of when it came out (right at Intel's Developer Forum) so how would we even know it favored AMD? Regardless, now that Daniel is hired on hopefully this will alleviate the backlog on things like this. Unfortunately we are a very small team so we can't test everything we would like to, but that doesn't mean we don't want to test it.
  • Oxford Guy - Sunday, September 27, 2015 - link

    Ashes came out before this benchmark, right? So, how does it make sense that this one was tested first? I guess you'd know by reading the ArsTechnica article that showed up to a 70% increase in performance for the 290X over DX11 as well as much better minimum frame rates.
  • Ananke - Friday, September 25, 2015 - link

    Hmm, this review is kinda pathetic... NVidia has NO async scheduler in the GPU; the scheduler is in the driver, i.e. it needs CPU cycles. Also, the async processors are one per compute cluster instead of one per compute unit, i.e. a smaller number.
    So, once you run a DX12 game with all the AI inside, a heavy scene will be CPU-constrained, the GPU driver will not have enough resources to schedule with, and performance will drop significantly. Unless you somehow manage to prioritize the GPU driver, i.e. dedicate a CPU thread/core to it in the game engine... which is exactly what DX12 was supposed to avoid: that abstraction layer from DX11 is not there anymore.
    So, yeah, NVidia is great at geometry calculations; it always has been, and this review confirms it again.
  • The_Countess - Friday, September 25, 2015 - link

    Often the Fury X GAINS FPS as the CPU goes down from an i7 to an i5 and i3.

    3 FPS gained in the ultra benchmark going from the i7 to the i3, and 7 in the low benchmark between the i7 and the i5.
  • sr1030nx - Friday, September 25, 2015 - link

    Any chance you could run a few tests on the i7 + i5 with hyperthreading off?
  • WaltC - Saturday, September 26, 2015 - link

    (I pasted this comment I made in another forum--didn't feel like repeating myself...;))

    Exactly how many "DX12" features are we looking at here...?

    Was async-compute turned on/off for the AMD cards--we know it's off for Maxwell, so that goes without saying. Does this game even use async compute? AnandTech says, "The engine itself draws on DX12 explicit features such as ‘asynchronous compute, manual resource barrier tracking, and explicit memory management’ that either allow the application to better take advantage of available hardware or open up options that allow developers to better manage multi-threaded applications and GPU memory resources respectively." (A sketch of what those explicit features look like in code is at the end of this comment.)

    If that's true then the nVidia drivers for this bench must turn it off--since nVidia admits to not supporting it.

    But sadly, not even that description is very informative at all. Uh, I'm not too convinced here about the DX12 part--more like Dx11...this looks suspiciously like nVidia's behind-the-scenes "revenge" setup for their embarrassing admission that Maxwell doesn't support async compute...! (What a setup... It's really cutthroat isn't it?)

    Nvidia says its Maxwell chips can support async compute in hardware—"it's just not enabled yet."

    Come on... nVidia has pulled this before...;) I remember when they pulled it with 8-bit paletted texture support with the TNT versus 3dfx years ago... they said it was there but not turned on. The product came and went and finally nVidia said, "OOOOps! We tried, but couldn't do it. We feel real bad about that." Yea...;)

    Sure thing. Seriously, you don't actually believe that at this late date if Maxwell had async compute that nVidia would have turned it *off* in the drivers, do you? If so, why? They don't say, of course. The denial does not compute, especially since BBurke has been loudly representing that Maxwell supports 100% of d3d12--(except for async compute we know now--and what else, I wonder?)

    I've looked at these supposed "DX12" Fable benchmark results on a variety of sites, and unfortunately none of them seem very informative as to what "DX12" features we're actually looking at. Indeed, the whole thing looks like a dog & pony PR frame-rate show for nVidia's benefit. There's almost nothing about DX12 apparent.

    We seem to be approaching new lows in the industry...:/
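
    Since the quoted feature list ("asynchronous compute, manual resource barrier tracking, and explicit memory management") is fairly abstract, below is a minimal sketch of what manual barrier tracking looks like in D3D12: the application, not the driver, declares when a resource changes role. This is generic illustrative code, not anything from the Fable Legends benchmark.

        #include <d3d12.h>

        // Under D3D12 the application explicitly transitions a resource between uses --
        // something the DX11 driver used to track (and guess at) behind the scenes.
        // Here, a texture that was just rendered to is made readable by a later pass.
        void TransitionForSampling(ID3D12GraphicsCommandList* cmdList,
                                   ID3D12Resource* renderTarget)
        {
            D3D12_RESOURCE_BARRIER barrier = {};
            barrier.Type  = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
            barrier.Flags = D3D12_RESOURCE_BARRIER_FLAG_NONE;
            barrier.Transition.pResource   = renderTarget;
            barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
            barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
            barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;

            cmdList->ResourceBarrier(1, &barrier);  // get this wrong and you get corruption,
                                                    // not a helpful driver fix-up
        }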
  • TheJian - Saturday, September 26, 2015 - link

    "but both are above the bare minimum of 30 FPS no matter what the CPU."

    You're kidding, right? A 30fps AVERAGE does not keep you above a 30fps minimum. If you are averaging 30fps, your gaming experience will suck. While I HOPE they will improve things over time, the point is that NEITHER card is currently where any of us would like to play at 4K...LOL.

    At least we can now stop blabbing about Stardock proving AMD wins in DX12...ROFL. At best we now have "more work needs to be done before a victor is decided", which everyone should have known already.
  • ruthan - Sunday, September 27, 2015 - link

    Where is the mighty DX12 promise of running different GPUs with different architectures together? Where is the DX11 comparison; maybe the numbers aren't so nice?

    If it is not delivered real soon, we could easily stay with OpenGL and hope for Vulkan's destiny... and there is also no reason to upgrade to Windows 10; we could probably survive with Win7 64-bit until Android x86, or maybe, maybe some other Linux, takes the lead.
  • Mugur - Monday, September 28, 2015 - link

    I see some people complaining because there wasn't a DX11 vs. DX12 comparison, like in the Ashes benchmark. I hope everybody realizes that this is completely useless. Why should AMD optimize its cards for DX11 in a DX12-enabled game when the cards support DX12 and DX12 will definitely be better for AMD in an apples-to-apples comparison?

    Even in Ashes, it's stupid to compare the DX11 path with the DX12 one for an AMD card, since AMD only optimized for DX12 there.

    Also, generally speaking, DX11 and DX12 visuals in a game may not be the same, and that's another reason why you cannot draw any conclusion from such a comparison (besides a sanity check, maybe). That's why we will definitely see, in some games, the cards performing worse in DX12 than in DX11, unless the target graphics are exactly the same.

    All in all, I'm happy that DX12 at least brings low CPU overhead. The fact that AMD benefits more from this is obvious, but that's just a "collateral" effect IMHO. I doubt it's only because their DX11 drivers were poor; it must also be a consequence of their architecture (the high CPU overhead, I mean).

    I also hope that Windows 10 adoption will be good, because, at this point, that's the only reason for a developer not to go full DX12 for a triple A title.
  • Oxford Guy - Monday, September 28, 2015 - link

    "According to Kollock, the idea that there’s some break between Oxide Games and Nvidia is fundamentally incorrect. He (or she) describes the situation as follows: 'I believe the initial confusion was because Nvidia PR was putting pressure on us to disable certain settings in the benchmark, when we refused, I think they took it a little too personally.' 

    Kollock goes on to state that Oxide has been working quite closely with Nvidia, particularly over this past summer. According to them, Nvidia was 'actually a far more active collaborator over the summer then AMD was, if you judged from email traffic and code-checkins, you’d draw the conclusion we were working closer with Nvidia rather than AMD ;)'

    According to Kollock, the only vendor-specific code in Ashes was implemented for Nvidia, because attempting to use asynchronous compute under DX12 with an Nvidia card currently causes tremendous performance problems."

    "Oxide has published a lengthy blog explaining its position, and dismissing Nvidia's implication that the current build of Ashes of the Singularity is alpha code, likely to receive significant optimisations before release:

    'It should not be considered that because the game is not yet publicly out, it's not a legitimate test,' Oxide's Dan Baker said. 'While there are still optimisations to be had, the Ashes of the Singularity in its pre-beta stage is as or more optimised as most released games.'"
  • joshjaks - Monday, September 28, 2015 - link

    I really like the CPU scaling tests that are done, even though it looks like multiple cores aren't hugely beneficial at the moment. I'm wondering though, do you think there could be the possibility of a DirectX 12 test that focuses on FX CPUs versus i3, i5, and i7s? Being an 8350 owner myself, it'd be nice to know if FX made any improvements in DirectX 12 as well. (Not holding my breath though)
  • Oxford Guy - Monday, September 28, 2015 - link

    "Being an 8350 owner myself, it'd be nice to know if FX made any improvements in DirectX 12 as well. (Not holding my breath though)."

    PCPER made the effort to actually do this testing:

    With the FX 8370 in Ashes: "NVIDIA’s GTX 980 sees consistent DX11 to DX12 scaling of about 13-16% while AMD’s R9 390X scales by 50%. This puts the R9 390X ahead of the GTX 980, a much more expensive GPU based on today’s sale prices."

    FX 8370 and 1600p (in frames per second)

    R9 390X DX 12 high: 36.4
    GTX 980 DX 12 high: 34.3

    R9 390X DX 11 high: 23.8
    GTX 980 DX 11 high: 31.1

    with i7 6700K and 1600p

    R9 390X DX 12 high: 48.7
    GTX 980 DX 12 high: 42.3

    R9 390X DX 11 high: 38.1
    GTX 980 DX 11 high: 48.1
  • Oxford Guy - Monday, September 28, 2015 - link

    So, with typical 4.5 GHz overclocking that chip should be able to match the i7 6700K's performance with a 390X at 1600p under DX11 — using a 390X with DX12.

    That's quite a value gain, considering the price difference. I got an 8320E with an 8 phase motherboard for a total of $133.75 from Microcenter. Coupled with a $20 cooler (Zalman sale via slickdeals) which needed some extra fans, I was able to get it comfortably to 4.5 GHz.

    The drawback in the PCPER results is that an i3 4330 is actually faster under DX12 with Ashes and the 390X than the 8370 is. It made much bigger gains under DX12 than the 8370 did.

    i3 DX 12: 40.6
    DX 11: 28.0

    Ashes is a real-time strategy which tends to be CPU-heavy so it seems odd that an i3 could outperform an 8 core chip.
  • Iamthebeast - Monday, September 28, 2015 - link

    The game is probably not utilizing all the cores. The funny thing with the Fable benchmark is that once you hit 1080p the two-core CPUs perform on par with their larger-core-count counterparts, and beat them at 4K. Makes me wonder if this trend is going to continue. The funny thing for me is that I thought DX12 was going to make using more cores easier, but from the two tests we have so far it actually looks like the GPU is taking a lot more of the load away from the CPU, like cutting it out of the loop.

    If this trend continues it actually looks like the best processor for gaming might be the Pentium g3580
  • Oxford Guy - Monday, September 28, 2015 - link

    "If this trend continues it actually looks like the best processor for gaming might be the Pentium g3580."

    That's funny since I once suggested that Nintendo use a fast dual core for its upcoming gaming system along with an Nvidia GPU — basically the opposite of the other consoles' many-core slow APU setup. I wonder if a triple core design would really be the optimal chip design for gaming, balancing power consumption with clock speed.
  • Kerome - Friday, October 2, 2015 - link

    Gameplay code is notoriously hard to parallelise, so it's likely to be advantageous to have just a couple of big cores than a bunch of smaller ones. It's interesting to see that Apple has taken exactly this approach with their latest A9 SoC for the iPhone 6S. Although of course the included PowerVR 7XT series GPU doesn't compare to an NVidia desktop solution.

    Very few applications on mobile come close to maxing out the A8, let alone the A9. It will be interesting to see where they take it.
  • tec-goblin - Wednesday, September 30, 2015 - link

    I am still waiting for the integrated graphics benchmarks!
  • Enterprise24 - Thursday, October 1, 2015 - link

    What about the 780 Ti?
  • remosito - Friday, October 2, 2015 - link

    Are you planning on doing benchmarks with the new 15.9.1 AMD drivers?
  • Powerrush - Saturday, October 3, 2015 - link

    http://wccftech.com/asynchronous-compute-investiga...
  • Slash3 - Monday, October 5, 2015 - link

    Any download link? I didn't see one in the article, although I'm quite tired and may have missed it.
  • lprates - Thursday, October 15, 2015 - link

    Great graphics
  • Jarrid Sinn - Wednesday, January 13, 2016 - link

    It would be nice to see how the performance changes with an AMD processor, especially since there is such a dramatic change due to the processor used in these tests. This is of course taking into account how often you stated that the number and arrangement of the CPU cores seemed to make an impact on the results.
