Wouldn't you need multiple data points to have a trend? And as this is really the only DX12 game, you do not have that, do you?
No, what we have here is one game where one side has an advantage, and a fanboy for that side shouting about how it means everything. As if we haven't seen that 1000 times before.
Given that we only have (almost) one DX12 game available I wouldn't worry too much about the performance of either of the two players. By the time enough games are available to actually care about DX12 I assume both will be more than ready to deliver.
Complaining (or worrying) about DX12 performance at this point is pointless. The whole ecosystem is very much in beta stages starting with the only version of Windows that supports DX12, Windows 10. The OS, the drivers, the games, they are all in a phase where they are subject to pretty big changes. Even the hardware will start supporting different subsets of DX12 in the future. And the title sums it up pretty well: "a beta look".
But some people just need a reason to complain, to lament, to try on some sarcasm, etc. Only time will tell which platform will be "the best" and for how long once all the development is done. But what I can tell you right now is that both players will be "good enough".
P.S. Regardless of which side you're on, being a fanboy only works when you have the very top end product. So unless you have a FuryX or a 980Ti/Titan X pointing fingers at the performance of the competition is like driving a Fiesta and thinking it's a sort of Mustang.
Yes indeed, they will be patching DX12 into the game, AFTER all the PR damage from the low benchmark scores is done. Nvidia waved some cash at the publisher/dev to make it a GameWorks title, make it DX11, and to lock AMD out of making a day 1 patch.
This was done to keep the general gaming public from learning that the Nvidia performance crown will all but disappear or worse under DX12. So they can keep selling their cards like hotcakes for another month or two.
Also, Xbox hasn't been moved over to DX12 proper YET, but the DX11.x that the Xbox One has always used is by far closer to DX12 than DX11 for the PC. I think we'll know for sure what the game was developed for after the patch comes out. If the game gets a big performance increase after the DX12 patch then it was developed for DX12, and NV possibly had a hand in the DX11-only PC release. If the increase is small then it was developed for DX11.
Reason being that getting the true performance of DX12 takes a major refactor of how assets are handled and pretty major changes to the rendering pipeline. Things that CANNOT be done in a month or two or how long this patch is taking to come out after release.
Saying "we support DirectX12" is fairly ease and only takes changing a few lines of code, but you won't get the performance increases that DX12 can bring.
Finally!.. I can't even grasp how low-res and crappy the graphics look on this thing, yet everybody is praising this "game" and its benchmarks of dubious accuracy. It looks BAD, it's choppy and pixelated, there is simple terrain and small units that look like sprites from Dune 2000, and this thing makes a high-end GPU cry to run at 60 FPS??
No insults in his post. Sorry you get your butt hurt whenever someone points out the facts. There are few DirectX 12 pieces of software available outside of tech demos and canned benchmarks. Nvidia has better things to do than appease the arm-chair quarterbacks of the comments section, like optimizing for games we are playing right now. Whether Nvidia cards are getting poor or equal performance in DX12 titles relative to their DX11 counterparts is irrelevant right now. We can talk all we want, but until there is a DX12 title worth putting $60 down on, and that title actually gains enough FPS to increase the gameplay quality, the conversation is moot.
Yeah, except that one part where he called him a fanboy. Yeah, totally no insults.
Seriously, is the Anandtech comment section devolving into Wccftech now? Is it still possible to have intelligent arguments about tech on the internet without idiots crawling all over the place? Thanks.
"Trolling" usually implies deliberate obtuseness in order to annoy. Itchypoot's posts reads like a newb's or fanboy's (likely a bit of both) who simply doesn't understand how evidence and logic factor into civilized debate.
Another thing; what's the deal with all these fanboys? There is no benefit to being a fanboy of either AMD or Nvidia; it is just going to cause you problems, because it may cause you to buy based on brand rather than on performance per dollar, which is the factor that actually matters. At different price ranges different brands are better - e.g. at the top end, a 980 Ti is better than a Fury X, but if you are looking in the price bracket below and want to buy a 980, you will get better performance and performance per dollar from a standard Fury.
Being a fanboy will blind you from accepting the truth when the tides shift and the tables eventually turn. It helps you in no way at all, it disadvantages you in many. It also causes you to get angry on forums for no reason, and call people 'trolls' when they are stating facts.
Poorly how, exactly? It looks to me like DX12 is just removing a bottleneck for AMD that Nvidia already fixed in DX11. It would be more correct to say that AMD has poor DX11 performance compared to Maxwell, and neither are constrained by driver overhead in DX12.
DX12 by design will slightly favor older AMD designs, simply because the design decisions AMD made for DX11, compared to Nvidia, are paying off with DX12, while Nvidia benefited from its own choices in DX11 games, which is why they own around 80% or so of the gaming GPU market. How much of an impact this will have depends on the game, just like with DX11 games: some do better on AMD, some will be better on Nvidia.
Even in generations where AMD/ATI have been dominant in terms of performance and value, they've still not really dominated in sales.
Just like even when AMD's CPUs were offering twice the performance per watt and cheaper performance per dollar, they still sold less than Intel.
Doing it for a short time isn't enough, you have to do it for *years* to get a lead like nVidia has.
Firstly you have to overturn brand loyalty from complete morons (aka everybody with any brand loyalty to any company; these are corporations that only care about the contents of your wallet, so make rational choices). That will happen with only a small percentage of people at a time. So you have to maintain a pretty serious lead for a long time to do it.
AMD did manage to do it in the enthusiast space with CPUs, but (arguably due to Intel being dodgy pricks) they didn't quite turn that into mainstream market dominance. Which sucks for them, because they absolutely deserved it.
So even if AMD maintains this DX12 lead for the rest of the year and all of the next, they'll still sell fewer GPUs than nVidia will in that time. But if they can do it for another year after that, *then* they would be likely to start winning the GPU war.
Personally, I don't care a lot. I hope AMD do better because they are losing and competition is good. However, I will make my next purchasing decision on performance and price, nothing else.
By the time DX12 becomes commonplace, I'm sure they will have cards that were built for DX12.
It makes a lot of sense to design your cards around what will be most useful today, not years in the future when people are replacing their cards anyways. Does it really matter if AMD's DX12 performance is better when it isn't relevant, when their DX11 performance is worse when it is relevant?
Indeed, it makes much sense to build cards exactly for today, so people would be forced to buy new hardware next year to have decent performance. From a certain green point of view, anyway. But many people are actually hoping that their brand new mid-to-top card will keep decent performance for at least a few years.
Hardware performance for new APIs is always weak with first gen products. That isn't changing here. When there are many DX12 titles out and new cards are out there, you'll see that people don't want to try playing with their old cards and will be buying new. That's how it works.
Once the second generation of DX12 cards come out, then you can analyze the jumps and get a better idea. Ideally you'd wait for three generations of post-DX12 GPUs to get the full picture. As it is, all we know is that AMD's DX12 driver is better than their DX11 driver... which ain't saying much.
Except we have three generations of DX12 cards already on AMD's side, starting with the HD 7970, which still holds its own quite well.
And we've had multiple DX12 and Vulkan benchmarks already, and in every one of them the 290 and 390 in particular beat the crap out of Nvidia's direct competition. In fact they often beat or match the card above them as well.
As for drivers: AMD's DX11 drivers are fine. They just didn't invest bucketloads of money in game-specific optimizations like Nvidia did, but instead focused on fixing the need for those optimizations in the first place. Nvidia's investment doesn't offer long-term benefits (a few months, then people move on to the next game), and that level of optimization in the drivers is impossible, and even unwanted, with low-level APIs.
Basically, Nvidia will be losing its main competitive advantage this year.
I think what he meant was we don't have enough test cases to conclude mature dx12 performance. The odds are pointing to AMD having faster gpus for dx12. But until multiple games are out, and preferably one or two "dx12" noted driver, we're speculating. I thought this was clear from the article?
It's a stretch to call those three generations of DX12 cards, too. I guess if we add up draft revisions there are 50 generations of 802.11ac wireless.
You could state that because AMD's architecture is targeting DX12, it looks set to give an across-the-board performance win in next-gen DX12 games. But again, we only have one beta game as a test case. Just wait and it will either become a fact or not. No need to backfill the why.
Right, they didn't invest a bunch in optimizing current games; they just paid a single company to make a benchmark game using their strongest point in DX12, a super-mega-threaded (useless) engine. No different from Nvidia using super-mega-geometry (uselessly) complex scenes helped by tessellation. Perfect marketing: the most return for the least investment.
Unfortunately a single game with a bunch of async compute threads added just for the joy of it is not a complete DX12 trend: what about games that are going to support voxel global illumination, which AMD HW cannot handle?
We'll see where the game engines end up pointing. And whether this is another false fire that AMD has started up these last years, seeing that they are in big trouble.
BTW: it is silly to say that the 390 "beats the crap out of anything else" when it is using a different API. All you can see is that a beefed-up GPU like Hawaii, consuming 80+W more than the competition, finally manages to pass it, as it should have done on day one. But that happened only because of a different API with different capabilities that the other GPU could not benefit from. You can't say it is better if, with the current standard API (DX11), that beefed-up GPU can't really do better.
If you are so excited by the fact that a GPU 33% bigger than another is able to get almost 20% more performance with a future API under best-case conditions, right when a completely new architecture is about to be launched by both the red and green teams, well, you really demonstrate how biased you are. Whoever bought a 290 (then 390) card back in the old days has been biting dust (and losing watts) all these months, and a small boost at the end of these cards' life is really a shallow thing to be excited about.
I like what AMD has done with "future proofing" their cards and drivers for DirectX12. But people buy graphics cards to play games TODAY. I'd rather get a graphics card with solid performance in what we have now rather than get one and sit down playing the waiting game.
1) It's not like NVidia's DX12 performance is "awful", you'll still get to play future games with relatively good performance. 2) The games you play now won't be obsolete for years. 3) I agree with what others have said; AOS is just one game. We DON'T know if NVidia cards won't get any performance gains from DX12 under other games/engines.
You do not buy a new gfx card to play games TODAY, but for playing TOMORROW, next month, quarter and then for a few years (few being ~two), until the performance in new games regresses to the point when you bite the bullet and buy a new one.
Most people do not have unlimited budget to upgrade every six months when a new card claims performance crown.
It's unlikely that the gaming market will be flooded by DX12 games within six months. It's unlikely to happen within a few years, even. Look at how slow DX10 adoption was.
I think you're quite wrong about this. Windows 10 adoption is spreading like wildfire in comparison to Windows XP --> Vista. DX10 wasn't available as a free upgrade to Vista the way DX12 is in Windows 10.
Sorry, Maxwell can already support packed FP16 operations at 2x the rate of FP32, as in the Tegra X1. The rest of the compute focus will be pretty much exclusive to GP100, like how Kepler had a gaming line and GK110 for compute.
I have not read anything about Pascal from Nvidia outside of the FP16 capabilities, which are HPC-oriented (deep learning). Where have you read anything about how Pascal's cores/SMs/cache and memory controller are organized? Are they still using a crossbar or have they finally moved to a ring bus? Are the caches bigger or faster? What is the ratio of cores/ROPs/TMUs? How much bandwidth per core? How much has the memory compression technology improved? Have the cores doubled their ALUs, or have they made more independent cores? How independent? Is the HW scheduler now able to preempt the graphics thread, or can it still not? How many threads can it support? Is the voxel support better, and able to be used heavily in scenes to make a difference in global illumination quality?
I have not read anything about these points. Do you have some more info about them? Because what I could see is that at first glance even Maxwell was not really different from Kepler, yet in reality the performance was quite different in many ways.
I think you really do not know what you are talking about. You are just expressing your desires and hopes like any other fanboy, as a mirror of the frustration you have suffered all these years with the less capable AMD architecture you have been using up to now. You just hope Nvidia has stopped and AMD has finally made a step forward. It may be that you are right. But you can't say that now, nor would I go around stating such things as fact without anything to back them up.
I think nVidia's been caught with their pants down, and Pascal doesn't have hardware schedulers to perform async compute, either. It may be that AMD has seriously beaten them this time.
nVidia wasn't expecting AMD to force Microsoft's hand and release DX12 so soon. I have a feeling Pascal, like Maxwell, doesn't have hardware schedulers, either. It's beginning to look like nVidia's been check-mated by AMD here.
@anubis44: "nVidia wasn't expecting AMD to force Microsoft's hand and release DX12 so soon."
I do believe you are correct. Given the lack of ability to throw driver optimizations at the DX12 code path and nVidia's proficiency at doing it, I'd say this will be quite damaging. They've lost one clear advantage they held (at least in DX11).
@anubis44: "It's beginning to look like nVidia's been check-mated by AMD here."
I wouldn't go that far. They probably won't have the necessary hardware in Pascal, but you can be sure Volta will have what it needs. Besides, most games will likely have a DX11 code path for the foreseeable future as developers wouldn't want to lock themselves out of an entire market. Also, at the moment, nVidia can still play DX12 fine, they just don't appear to have the advantage at the moment given the small sample set of available data points.
In conclusion, it is more like they have lost a rook or queen. Of course, they've taken a few of ATi's pieces as well, so let's just wait and see who plays their remaining pieces better.
The other thing I would add to this is that it's not like Nvidia has nowhere to go here. Take the GTX 970 vs the R9 390 for example... they're in a similar price & performance tier. Yet the 970 is smaller with fewer transistors (usually meaning it's cheaper to produce) and generally has much more overclocking headroom (because Nvidia wasn't under pressure to clock the card closer to the limit to reach relevant performance). So it's reasonable to expect Nvidia could both lower the price and clock it higher to get a significantly better value card with minimal to basically no substantive engineering/architectural changes.
I'm not suggesting Nvidia will do that with the 970 specifically. Rather, what I'm saying is that if they find Pascal is similarly behind AMD they've got plenty of room to tweak performance and price before we can start calling them "check-mated". But it's certainly good news for us if DX12 performance like this continues and AMD essentially forces Nvidia to lower its margins.
Nvidia's entire performance advantage in DX11 is based on game-specific driver optimizations. They have a virtual army of developers slaving away on those (and coming up with ways to hurt everyone's performance as long as it hurts AMD the most or makes their own latest-gen cards look better... but that's a different matter).
With DX12, however, the driver becomes MUCH thinner and doesn't have nearly as much influence. So basically Nvidia's main competitive advantage is gone with DX12 and Vulkan.
As for being relevant: this year pretty much every game where performance matters will have either a DX12 or Vulkan render option. Add in the fact that AMD cards generally age better than Nvidia's (those game-specific optimizations focus pretty much exclusively on their latest generation of cards) and I would say that yes, it is very relevant.
@The_Countess: "nvidia's entire performance advantage in DX11 is based on game specific driver optimizations. they have a virtual army of developers slaving away on those ..."
True, they have lost a large advantage. Keep in mind, though, that nVidia's developer relations are still in play. What they once achieved through the use of driver optimizations may still be accomplished through code path optimization and design guidance for nVidia architecture. The first beta for Vulkan (The Talos Principle) showed that merely replacing a high level API (OpenGL/DX11) with a low level one (Vulkan/DX12) does not automatically improve the experience. If nVidia can convince developers to avoid certain non-optimal features or program in such a way as to take better advantage of nVidia hardware in their titles (for the sake of performance on the majority of discrete card owners out there of course) then ATi will be in the same position as they are now. Better hardware, worse software support. Then again, low level API cross-platform titles will most assuredly program to take advantage of the console architectures which happens to be ATi's at the moment.
Considering the Fury X has just a tad more raw power than the (older) 980 Ti, I would say the DX12 numbers are fine, and what this is really showing is AMD's lack of performance in DX11?
I don't agree with this. I think this is more a case of nvidia not being able to rely so much on the ENORMOUS number of special cases in their driver. IOW, this is about two things: hardware and game design. The drivers are trivial next to d3d11/ogl.
Did you not notice the across-the-board improvements for all GCN cards? The point I was making, and that others have made for some time, is that AMD makes really good hardware but this is typically masked by poor drivers. You can see this by looking at their excellent performance in compute workloads, where the code in the driver is more recent and doesn't have the legacy cruft of their D3D/OGL code.
It's not their drivers. It's purely architectural. GCN moved the schedulers into hardware. GCN requires the API to be able to feed it enough work. What people have been calling "driver overhead" is nothing of the sort. DX11 is just not capable of fully utilizing AMD hardware. DX12 is, and that is why AMD created Mantle. It forced MS to create DX12, and that set off the creation of Vulkan. All of the next-gen APIs are tailored to exploit the hardware AMD is already selling.
It's the simpler drivers which provide less room to hide architectural deficiencies. My point was that, across the board, gcn improves its performance a good deal relative to d3d11. That includes cards that are four years old. I don't think Maxwell is older than that. I don't think we are really disagreeing, though.
Few, if any, here are going to remember the fiasco when the Radeon 9700 Pro demolished the competition in performance and stability. Even fewer remember Nvidia "optimizing" games with lower quality textures to compete.
I remember it, and it was the reason I went for the 9700 and later the 9800. At the moment I'm back to Nvidia; I've had 2 AMD cards die on me due to heat. As much as I like them, I've had my fingers burnt and moved away from them. If DX12 and dual-GPU support become better supported, I'll buy a high-end AMD card in an instant.
I remember it clearly. My Radeon 9800 was the last ATI card I bought. I loved it for years, and only ended up replacing it with an NVidia card when the Catalyst Control Center started sucking all the cycles out of my CPU. It's funny that half of the comments on this article complain that NVidia's drivers are over-optimized for every specific game, yet ATI and AMD were content to allow the CCC to be a resource hog that ruined even non-gaming performance for years. I'm happy with my NVidia cards. I've been able to easily play all modern games with great performance using a pair of GTX 460's, and recently replaced those with a GTX 970.
Considering there aren't any other async shader games in development, nothing announced, and Pascal coming within the next year (by which time a game might actually use DX12), which will probably alleviate the situation, your evaluation of NVIDIA's situation is pretty poor.
It takes more than a generation or a game to make a hardware company go down. NVIDIA suffered plenty during its GeForce FX days, and it got right back on its feet.
AMD has had an async compute engine in their GPUs going back to the 7000 series. NVIDIA has not. Stands to reason AMD would do better in async compute based benchmarking.
Let's see how Pascal compares, since it's being designed with DX12, and async compute, in mind.
"NVIDIA telling us that async shading is not currently enabled in their drivers", yeah this pretty much sums it up. This beta stuff is interesting but just that beta...
The GTX 680 seems to have done well though. I feel like Maxwell is being let down by the compromises Nvidia made optimizing for FP16 only and sacrificing real compute performance.
Not really. This is a beta for a game that is heavily embedded with AMD tech; the way the game handles it would favor AMD's implementation. It could go the other way for a game designed around Nvidia's implementation.
Also, calling this bad DX12 performance? Maybe you should clarify that this is an implementation of feature level 12_0 and not 12_1; I highly doubt AMD would fare as well as Nvidia under those circumstances.
For Nvidia to perform the same or worse between DX11 and DX12 seems like a pretty big thing to be addressed with just an 'optimization', especially compared to AMD's results. I guess we'll see when it's out of beta!
Something I have been wondering about this game is whether the DX11 vs DX12 comparison is really valid. The game apparently pushes higher draw calls and takes advantage of DX12. But when running in DX11 mode, is it still trying to push all those unique draw calls or is it optimized like most DX11 games and using draw call instancing? (Sorry not a game dev, so I don't fully grok the particulars).
Basically, if the game was designed and optimized for DX11 it might perform well but not have the visual fidelity of so many draw calls (unique unit visuals). So the real difference should have been visual quality. Instead I get the impression that the game was designed to push DX12 and then when in DX11 mode stresses the draw call limitations, over emphasizing the apparent gains. Am I wrong?
Currently it seems that porting a game from DX11 to DX12 nets up to a 50% improvement in framerates. The reality is more nuanced, in that existing games are clearly working around draw call limitations and thus won't see something quite so dramatic. Thoughts?
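To put rough numbers on the draw-call point above (a minimal sketch; the per-draw CPU costs and draw counts below are invented for illustration, not measured from Ashes or any driver), here is why a DX11 path tends to batch/instance while a DX12 path can afford many unique draws:

```cpp
// Toy model of CPU submission cost per frame. All numbers are illustrative assumptions.
#include <cstdio>

int main() {
    const double dx11_us_per_draw = 40.0;  // assumed driver+runtime cost per DX11 draw call
    const double dx12_us_per_draw = 5.0;   // assumed cost with a thin DX12 runtime
    const int unique_draws    = 20000;     // "every unit drawn individually"
    const int instanced_draws = 2000;      // same scene collapsed into instanced batches

    auto frame_ms = [](double us_per_draw, int draws) { return us_per_draw * draws / 1000.0; };

    std::printf("DX11, unique draws:    %.1f ms of CPU submit time per frame\n", frame_ms(dx11_us_per_draw, unique_draws));
    std::printf("DX11, instanced draws: %.1f ms\n", frame_ms(dx11_us_per_draw, instanced_draws));
    std::printf("DX12, unique draws:    %.1f ms\n", frame_ms(dx12_us_per_draw, unique_draws));
    return 0;
}
```

Under these made-up numbers the DX11 path only stays playable by instancing, which is exactly the "over-emphasized gain" concern: a DX11 build that never batched would look far worse than a typical shipping DX11 game.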
It would have to use fewer draw calls for DX11, as hitting DX11 with the high number of calls you can issue in DX12 would make it fall flat on its face. I am sure the DX11 path is well optimised for DX11; while this game is a big showcase for DX12, most people who play it will probably be on DX11...
People with nVidia cards will be playing in DX11 mode. People with AMD cards, even those three generations old, will be fine in DX12 mode with better eye candy and simultaneously better FPS.
There's definitely quite a bit of that. Looking at the benchmark, a lot of the graphics design seems aimed at causing more draw calls (long-lasting smoke consisting of lots of unique particles, lots of small geometric details, etc.). While I'm absolutely convinced that DX12 will give better performance than DX11 in the long run and that the gap will be fairly large, I think this benchmark is definitely designed to overemphasize just how great DX12 is.
I would like to see how much of an impact the DX12 on a released game makes in the CPU world. Do you get better performance from multiple cores, or is it irrelevant? Speculation is that DX12 could change the normal paradigm for judging gaming performance on CPU's.
If you are CPU limited, and it's using lots of threads, then yeah more cores would be faster. They were CPU limited on an overclocked 4960X, which is no slouch, that was very surprising!
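As a rough illustration of that point (nothing here touches a real GPU API; the "per-draw work" is just simulated arithmetic), a well-threaded submission path scales with core count roughly like this minimal C++ sketch:

```cpp
// Toy simulation: split "command recording" work across N threads and time it.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

static std::atomic<unsigned long long> sink{0};

void record_draws(int count) {
    unsigned long long acc = 0;
    for (int i = 0; i < count; ++i)
        for (int j = 0; j < 2000; ++j) acc += (unsigned long long)i * j;  // stand-in for per-draw CPU work
    sink += acc;  // keep the optimizer from removing the loop
}

double run_ms(int threads, int total_draws) {
    auto t0 = std::chrono::steady_clock::now();
    std::vector<std::thread> pool;
    for (int t = 0; t < threads; ++t)
        pool.emplace_back(record_draws, total_draws / threads);
    for (auto& th : pool) th.join();
    return std::chrono::duration<double, std::milli>(std::chrono::steady_clock::now() - t0).count();
}

int main() {
    const int draws = 40000;
    const int thread_counts[] = {1, 2, 4, 8};
    for (int t : thread_counts)
        std::printf("%d thread(s): %.1f ms to 'record' %d draws\n", t, run_ms(t, draws), draws);
    return 0;
}
```

The scaling only shows up if the engine actually spreads its submission work like this, which is the part DX12 makes practical and DX11 largely doesn't.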
I agree that will be very interesting. I'm surprised more hasn't been made of the seemingly pretty hard CPU limit to ~70fps, irrespective of the detail settings or resolution. And that on a still very capable 4960X @ 4.2Ghz. If we estimate Skylake has a 20% IPC advantage, that would still see the current top tier 6700K (at stock) maxing out in the mid 80s, a long way short of what you might like on a 144hz monitor. Does that mean a brand new quad core CPU like the i5 6400 with its low base clock might struggle to sustain 60fps, even on lower detail settings?
I realise this is beta and all preliminary, but it's interesting nonetheless.
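Making the back-of-envelope math above explicit (the scaling factors are guesses, not measurements):

```cpp
// Rough CPU-ceiling estimates derived from the ~70 fps limit reported on the 4.2 GHz 4960X.
#include <cstdio>

int main() {
    const double ceiling_4960x_fps   = 70.0;       // observed CPU-limited ceiling in the review
    const double skylake_gain        = 1.20;       // assumed per-core advantage at similar clocks
    const double i5_6400_clock_scale = 2.7 / 4.2;  // stock base clock vs the overclocked 4960X (very rough)

    std::printf("6700K estimate:   ~%.0f fps ceiling\n", ceiling_4960x_fps * skylake_gain);
    std::printf("i5-6400 estimate: ~%.0f fps ceiling\n", ceiling_4960x_fps * skylake_gain * i5_6400_clock_scale);
    return 0;
}
```

That lands in the mid 80s for the 6700K and around the mid 50s for the i5-6400, consistent with the worry above.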
Does DX12 Multi-adapter offer any benefits with cards that are mismatched in performance? I'm currently running a GTX 980 in my main PC and also have an older GTX 770 sitting around; would pairing them offer any speedup over just the 980, or would the faster card end up held back by the slower one?
I'd be equally interested in seeing how AMD does with significantly mismatched GPUs; since they've been trying (with varying degrees of success) to push XFire between their IGPs and the significantly faster chips in midrange Radeon cards.
The article has a quote from the developer about using mismatched cards... "For example, you will never get more than twice the speed of the slowest video card. You would be better off just using the new card alone."
You might get some benefit, but likely not that much.
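A rough way to see the developer's point (the single-card frame rates below are hypothetical, not benchmark results): with alternate-frame rendering each GPU delivers every other frame, so sustained throughput is capped at about twice the slower card.

```cpp
// Upper bound on AFR throughput with mismatched GPUs, ignoring sync/pacing overhead.
#include <algorithm>
#include <cstdio>

double afr_fps_upper_bound(double gpu_a_fps, double gpu_b_fps) {
    // The slower card only has to deliver every second frame, but it still gates pacing.
    return 2.0 * std::min(gpu_a_fps, gpu_b_fps);
}

int main() {
    const double gtx980_alone = 60.0;  // hypothetical
    const double gtx770_alone = 35.0;  // hypothetical
    std::printf("GTX 980 alone:     %.0f fps\n", gtx980_alone);
    std::printf("980 + 770 via AFR: %.0f fps at best\n", afr_fps_upper_bound(gtx980_alone, gtx770_alone));
    // If the slower card managed less than half the faster one's rate, AFR would be a net loss.
    return 0;
}
```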
I think that's rather narrow minded and way too absolute. Mismatched cards can be used to their full potential, but you'd need some smart coding to make it so. For instance, you could offload some of the work to the weaker GPU, keeping the stronger one for the main rendering.
One excellent example which would fully utilize two mismatched cards is VR: multiadapter rendering would be used to offload the VR projection and transformation steps to the integrated GPU in most modern CPUs, while the main GPU would do the regular rendering. The data transfer requirement is minimal, but there's a fair amount of computations required, making it an ideal scenario.
Other examples include doing post-processing on the weaker card (SSAO, subsurface scattering, screenspace reflections, etc.). The big problem is judging just how much work should be offloaded to the secondary GPU - just detecting the hardware would be extremely laborious.
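A small sketch of that balancing problem (every frame-time figure below is a made-up placeholder): offloading post-processing to an iGPU only pays off if the copy cost is small and the iGPU can finish its share within a frame.

```cpp
// Very rough pipeline model: post-processing runs on the iGPU, overlapped with the
// next frame's main pass on the discrete GPU. All numbers are hypothetical.
#include <cstdio>

int main() {
    const double main_render_ms = 12.0;  // geometry + shading, stays on the discrete GPU
    const double post_ms        = 4.0;   // SSAO, reflections, etc. (candidate for offload)
    const double igpu_slowdown  = 3.0;   // assume the iGPU runs post-processing 3x slower
    const double copy_ms        = 1.0;   // shuttling intermediate buffers across PCIe

    const double single_gpu_frame = main_render_ms + post_ms;
    const double offloaded_frame  = main_render_ms + copy_ms;  // post now overlaps on the iGPU
    const double igpu_busy        = post_ms * igpu_slowdown;   // must fit within a frame to keep up

    std::printf("Single GPU:  %.1f ms/frame (%.0f fps)\n", single_gpu_frame, 1000.0 / single_gpu_frame);
    std::printf("Offloaded:   %.1f ms/frame (%.0f fps), iGPU busy %.1f ms/frame%s\n",
                offloaded_frame, 1000.0 / offloaded_frame, igpu_busy,
                igpu_busy > offloaded_frame ? "  <-- iGPU can't keep up" : "");
    return 0;
}
```

With these particular guesses the iGPU only just keeps up (12 ms of work against a 13 ms frame), which is exactly why judging how much to offload, per machine, is the hard part.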
It's a correct description for how Ashes works. They implement a (relatively) straightforward AFR setup, so the cards need to be similar in performance.
What Multi-adapter does is left completely to developer. In some cases it can give you nothing, in others every bit of hardware can be useful including iGPU.
Their current implementation is AFR, so the performance of the cards should be as close to identical as possible. In the future I think they may plan on offloading some of the raw compute onto a second GPU, and in that case an older slower GPU would be beneficial.
Isn't Oxide's statement that they don't optimise for certain hardware a bit disingenuous?
If you read their developer diaries, not only was AoS built around Mantle and the engine built upon Mantle, but they've stated that they developed more of Mantle than AMD did.
Before DX12 was even announced Oxide were working directly with AMD and building AoS to champion Mantle and take advantage of it a low level while only supporting nVidia hardware on DX11. That of course will automatically bias results in favour of RTG even if there is no intention to do so at this stage.
AMD has partnered with Stardock in association with Oxide to bring gamers Ashes of the Singularity – Benchmark 2.0 the first benchmark to release with DirectX® 12 benchmarking capabilities such as Asynchronous Compute, multi-GPU and multi-threaded command buffer Re-ordering. Radeon Software Crimson Edition 16.2 is optimized to support this exciting new release.
See? Every time a pro-AMD game is tested, there'll be plenty of butt-hurt fanboy comments. And I guess everyone knows why. When you've bought something, you'll always want to justify your purchase, and you know who's got the lion's share of the dGPU market now. I guess nowadays people are just too sensitive or have hearts of glass, which makes them judge things ever so subjectively and personally.
"Update 02/24: NVIDIA sent a note over this afternoon letting us know that asynchornous shading is not enabled in their current drivers, hence the performance we are seeing here. Unfortunately they are not providing an ETA for when this feature will be enabled."
If you google around you will find out nvidia does not have asynchornous shading on its DX"12" cards. this was actually first found out in WDDM 1.3 back in windows 8.1 when they would not support the optional features which AMD does.
I know that the wrong terminology kept being used for years now, especially driven by major tech review websites like this one. But that's still not making it any less wrong.
The API is fully functional. So the driver does support it. Whether it does so efficiently is an entirely different matter, you don't NEED hardware "support" to provide that feature. Hardware support is only required to provide parallel execution, as opposed to the default sequential fallback. The latter one is perfectly within the bounds in the specification, and counts as fully functional. It's just not providing any additional benefits, but it's neither broken nor deactivated.
The so-called async compute implementation AMD has in HW IS NOT PART OF THE DX12 SPECIFICATION. I hope that is clear written that way.
DX12 describes the use of multiple threads in flight at the same time. Nvidia does support them, with some limitations in number and preemption capability with respect to what AMD HW can do. This however does not mean that Nvidia HW does not support async compute or is out of spec. AMD just made a better implementation of it. Think of it as it was with tessellation: Nvidia's implementation is way better than AMD's, but the fact that AMD can't go over certain values does not mean they are not DX11 compliant.
What you are looking at here is a benchmark (more than a game) that stresses the multi-threaded capabilities of AMD HW. You can see that AMD is in a better position here. But the question is: how many other games are going to benefit from using such a technique, and how many of them are going to implement such a heavy-duty load?
We just don't know yet. We have to wait to see if this technique can really improve performance (and thus image quality) in many other situations, or if it is just a show-off for AMD (which has clearly partnered to make this feature even heavier on Nvidia HW). When Nvidia starts getting developers to use their HW-accelerated voxels, we will start to see which feature hits the other's HW worse and which gives better image quality improvements.
For now I just think this is an overused feature that, like many other engine characteristics in DX11, is going to give an advantage to one side rather than the other.
It's something of a limitation of the CMS. The color bar is the average; the grey bars are in the same order as they are in the legend: normal, medium, and heavy batch counts.
There is a brief mention of GTX 680 2GB "CPU memory limitations". I take it you mean "VRAM memory limitations". It would be interesting to know if this can be overcome by DX12 memory stacking, either a pair of GTX 680s or the GTX 690.
Hey, so I'm confused by the mixed GPU testing. I thought that both cards had to be the same in order to run them in SLI/Crossfire? How did they test a Fury X + 980Ti?
That's no longer the case with DX12. It used to be like this with DX11 and earlier versions, when the driver decided if/how to split the workload onto multiple GPUs, but with DX12 that choice is now up to the application.
So if the developer chooses to support asymmetric configurations, even cross vendor or exotic combinations like Intel IGP + AMD dGPU, then it can be made to work.
I'm willing to bet that nVidia's Maxwell cards can't use DX12's async compute at all, and they're falling back to the DX11 code path, even when you 'enable' DX12 for them.
The asynchronous compute term only defines how tasks are synchronized against each other, whereby the "asynchronous" term only states tasks won't block while waiting for each other. The default of doing that in software, in order to create a sequential schedule, is perfectly legit and fulfills the specification in whole.
Hardware support isn't required for this feature at all, even though you *can* optionally use hardware to perform much better than the software solution. Parallel execution does require hardware support and can bring a huge performance boost, but "asynchronous compute" does not specify that parallel execution is required.
The whole point of async compute is to take advantage of parallel execution. It doesn't matter what nVidia's drivers tell an application; if it accepts these commands but is forced to reorder them for serial execution because the hardware can do nothing else, then it doesn't really support the technology at all. It'd be like claiming support for texture compression even though your driver has to decompress every texture to an uncompressed format before the GPU can read it. It doesn't matter if the application thinks compressed textures are being used if the hardware actually provides none of the benefits the technology intended (in this case more/larger textures in a given amount of VRAM, and in the case of async compute, more efficient utilization of shader ALUs).
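To make the serial-versus-overlapped distinction concrete (a CPU-thread analogy only; on a GPU the win comes from filling otherwise idle execution units, and the durations below are arbitrary):

```cpp
// Toy model: "supporting" async compute by serializing the two queues vs. actually overlapping them.
#include <chrono>
#include <cstdio>
#include <future>
#include <thread>

using namespace std::chrono;

void graphics_work() { std::this_thread::sleep_for(milliseconds(10)); }  // stand-in for shadow/geometry passes
void compute_work()  { std::this_thread::sleep_for(milliseconds(4));  }  // stand-in for particle/lighting compute

int main() {
    // Serial fallback: the compute queue's work is slotted in after the graphics work.
    auto t0 = steady_clock::now();
    graphics_work();
    compute_work();
    double serial_ms = duration<double, std::milli>(steady_clock::now() - t0).count();

    // Overlapped execution: compute runs alongside graphics instead of after it.
    t0 = steady_clock::now();
    auto gfx = std::async(std::launch::async, graphics_work);
    auto cmp = std::async(std::launch::async, compute_work);
    gfx.wait(); cmp.wait();
    double overlapped_ms = duration<double, std::milli>(steady_clock::now() - t0).count();

    std::printf("serialized:  ~%.1f ms per frame\n", serial_ms);
    std::printf("overlapped:  ~%.1f ms per frame\n", overlapped_ms);
    return 0;
}
```

Both orderings produce the same result, which is the "functionally correct" argument above; the counter-argument is that only the overlapped case delivers the benefit the feature exists for.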
"Update 02/24: NVIDIA sent a note over this afternoon letting us know that asynchornous shading is not enabled in their current drivers, hence the performance we are seeing here. Unfortunately they are not providing an ETA for when this feature will be enabled."
Or... not to put too fine a point on it, Nvidia's program and strategy of optimizing games for their cards (aka, in some instances, actively sabotaging the competition's performance by using specialized operations that run great on Nvidia's hardware but very poorly on others') has led to near-perfect usage of DX11 for them while AMD was struggling along.
In Ashes, where there is no such interference, AMD seems to be able to utilize the strong points of its architecture (it seems to be better suited for DX12) while Nvidia has had no chance to "optimize" the competition out of the top spot... too bad spaceships do not have hair... ;P
Well, AMD has had bad DX11 performance for years, they clearly focused their architecture on Mantle/DX12, because they knew they would be producing GPUs for consoles. That will finally pay off this year. NVIDIA focused on DX11, having a big advantage for four years, and now they have to catch up, if not with Pascal, then with Volta next year.
Personally, as the owner of an nVidia card, I have to say bravo AMD. Those are some impressive gains, and I look forward to the coming DX12 GPU wars from which we will all benefit.
Exactly. Also as a current Nvidia card owner, I don't feel the need to rush to a Windows 10 upgrade. Seems I have several months or more before I'll be looking into it. In the mean time DX11 will do just fine for me.
The reason Nvidia is losing ground to AMD is that their GPUs are predominantly serial (DX11-oriented) while AMD, as it is turning out, is parallel (DX12-oriented) and has been for a number of years. And not only that, but they are on their 3rd gen of parallel architecture.
Not sure if it's possible to retest this with a Tonga card with 4GB VRAM, i.e. an R9 380X or 380? Just a little curious why it seems to be lagging behind quite a fair bit.
Anyway, it's good to see the investment in DX12 paying off for AMD. At least owners of older AMD cards can get a performance boost when DX12 becomes more popular this year and next. Not too sure about Nvidia cards, but they seem to be very focused on optimizing for DX11 with their current-gen cards, and that certainly seems to be doing the right thing for themselves, since they are still doing very well.
Tonga has more ACEs than Tahiti, so this could be one of those circumstances, given more memory, of Tonga actually beating out the 7970/280X. However, according to AT's own article on the subject - http://www.anandtech.com/show/9124/amd-dives-deep-... - AMD admits the extra ACEs are likely overkill, though to be fair, I think with DX12 and VR, we're about to find out.
I think that Radeon's advantage in DX12 comes from the fact that most of DX12's new features were similar to features AMD wrote into the Mantle API. They've been designing their recent cards to take advantage of the features they built for Mantle, and now that DX12 includes many of those features, their cards essentially get a head-start in optimization.
If Radeon and NVidia were running a 100-yard dash, it just means that Radeon's starting block is about 5-yards ahead of NVidia's. I (personally) think that NVidia's still the stronger runner, and they easily have the potential to catch up to Radeon's head start if they optimize their drivers some more. And, honestly, a 4fps gap should not be enough of a reason to walk away from whichever brand you already prefer.
I still prefer NVidia due to the lower power consumption, friendlier drivers, 3D glasses, and game streaming they've had for a few years. I used to like ATI cards, but when the Catalyst Control Center started sucking more cycles out of my CPU than the 3D games, I switched to NVidia.
I would also like to have seen the GTX 970 in some of these benchmarks. I understand benchmarking the highest-end cards, but I hope that when the game is out of beta and being used as an official DX12 benchmark, we get some numbers from the more affordable cards.
Of course Nvidia's performance doesn't go up under DX12. That is no doubt intentional. Why would they improve their current cards when they can sell all new ones to the same gullible fools that fell for that trick the last time around?
I had exactly the same thought. AMD may have shot themselves in the foot. Everyone using their cards is going to see a 10-20% boost in performance, meaning they may not need an upgrade this cycle.
Perhaps, but DX12's lower overhead would just encourage devs to make even more complex scenes. Result: same performance, better visuals.
All the people jumping on NVIDIA need to be careful, as there's parts of the DX12 spec that they support better than AMD. Give it a year to eighteen months and we'll see how this pans out.
Having read all 13 pages of comments I was surprised no one mentioned this either. I'll certainly be keeping my 295x2 for at least the next 18 months if not 24 months. With AFR and VR coming up, Dual GPUs is the way to go.
I was really hoping to see the benefits of sharing workload with the iGPU. Not everyone has multiple GPUs (but I do), but most people have a CPU with onboard graphics. If people with graphics cards can finally start using this resource, that would be a very good thing for a tremendous number of users. Please follow this article up as soon as possible with one on this area. Maybe one percent of users have different-brand video cards lying around, maybe five percent have multiple similar GPUs, but almost everyone has a video card and an unused iGPU on their CPU. This is the obvious first direction to take.
This is (sort of) covered in the article and covered clearly in the comments above. This particular game is only using AFR, and the devs have clearly said (as noted in the article) that you'll never get more than double the performance of the slower graphics solution. Just about any discrete GPU worthy of the name will be more than double the performance of an IGP and is therefore better (for this game) run on its own.
DX12 opens up a raft of possibilities to use any and all available graphics resources, including IGPs, but it leaves the responsibility entirely with the game developers. For this game at this time, the devs aren't looking to make use of onboard graphics unless it's paired with a similarly anaemic GPU.
If virtually everyone can boost their framerate by 20% at no cost then it is a big thing. Most people want one processor and one video card. Multiple video cards offer a very real performance boost but there is a downside, more power, more heat and frequent compatibility issues. With DX12 developers can send the post processing to the iGPU and let the video card handle the rest. Again a 20% performance boost for free. Only then should you think about the much smaller market that wants to run with multiple video cards.
Honestly, even if Nvidia were 20% worse I would not buy ATI. Not because I'm a fanboy, but because I use my GPUs for more than games, and ATI GPUs suck big time when it comes to driver stability in pro applications.
Oxide and their so called "benchmarks" are a joke. Anyone who takes the aforementioned seriously, is just another unwitting victim of AMD's typical underhanded marketing.
https://scalibq.wordpress.com/2015/09/02/directx-1... "And don’t get me started on Oxide… First they had their Star Swarm benchmark, which was made only to promote Mantle (AMD sponsors them via the Gaming Evolved program). By showing that bad DX11 code is bad. Really, they show DX11 code which runs single-digit framerates on most systems, while not exactly producing world-class graphics. Why isn’t the first response of most people as sane as: “But wait, we’ve seen tons of games doing similar stuff in DX11 or even older APIs, running much faster than this. You must be doing it wrong!”?
But here Oxide is again, in the news… This time they have another ‘benchmark’ (do these guys actually ever make any actual games?), namely “Ashes of the Singularity”. And, surprise surprise, again it performs like a dog on nVidia hardware. Again, in a way that doesn’t make sense at all… The figures show it is actually *slower* in DX12 than in DX11. But somehow this is spun into a DX12 hardware deficiency on nVidia’s side. Now, if the game can get a certain level of performance in DX11, clearly that is the baseline of performance that you should also get in DX12, because that is simply what the hardware is capable of, using only DX11-level features. Using the newer API, and optionally using new features should only make things faster, never slower. That’s just common sense."
You see my post? You see that there is this underlined text in blue? Well my friend, it is called a URL, which is an acronym for "Uniform Resource Locator", long story short it is this internet thingy that you go clickity-clickity with your mouse and it opens another page, where you can find the rest of the information.
Don't worry, the process of opening a new webpage by using a URL may APPEAR quite daunting at first, but with very little practice you could be clicking away like a pro. This is after all "The AnandTech", and everybody is here to help. Heck, who knows if there are more like you out there, I might even make a video tutorial - "Open new webpages in 3 easy steps", or something.
PS: Another pro tip, there is no such thing as "solid evidence" outside of a court of law. On the internet, you have information resources and reference material, and you have to use your own first-hand knowledge, experience and common sense to differentiate right from wrong.
Hm, from the screenshots posted I honestly can't see why there would be a need to run DX12 with such "low performance" even on the most elite cards. While I give these guys credit for having the guts to go and develop on a completely new API, the graphics look more like an early DX9 game. Just a note: this opinion is based on screenshots, not an actual live render, but still, from what I see there I'd expect FPS hitting 120+ with DX11...
The highlight here isn't dx12, but rather how badly AMD is doing DX11, which is what most games will run on for quite some time to come (only 10% run win10, and much of that base doesn't have a card that can run this game at even 1080P at 30fps). A decent sized group of win10 users go back to win7 also...LOL. I'm more interested in Vulkan, now that it's out, I think it will take over dx12 after a year as it runs everywhere but consoles and they are a totally different animal anyway.
This just goes to show what happens when you can't afford to support your products. I.e., AMD constantly losing money quarter after quarter while R&D drops too. NV, on the other hand, has the cash to massively improve DX11 (which is 90% of the market, more if you consider not everyone running Win10 is even a gamer), while also making a DX12 driver. AMD clearly needs to devote more money to their current card users (DX11), but can't afford to. AMD is spending their money on the wrong things time and time again. You can blame consoles for the last 5 years of losses, as that money should have gone into making Zen 4 years ago, much faster DX11 support, mobile chips that should be on rev 5-6 by now like NV and everyone else, etc. We would not be looking at NV owning 82%+ of the GPU market right now, and Intel would have had a major competition problem for the last 4-5 years instead of basically being able to pour all their resources into mobile while beating AMD to death on CPU.
I cannot speak to AMD CPUs, but AMD GPUs are doing very well in DX11 games; it's not AMD's DX11 implementation that is at fault for performance issues, but rather Nvidia sabotaging games time and time again. Every game labeled Nvidia has the potential to sabotage the entire AMD lineup, and AMD can do little to nothing about it. This is not a tin-foil conspiracy theory; it's a fact proven game after game and easily found on Google. Try searching for the Batman games and other Epic Unreal Engine games, or for Crysis 2 and Nvidia GameWorks games, and you'll see what I mean. If you're a small company you cannot really do much about big near-monopolies like Nvidia and Intel; you're one fighting against two. Dirty, and even illegal, tactics have also been applied by Intel against AMD; that's why Intel was fined a huge amount of money, but by then the damage to AMD was already done and it was too late to recover properly. You need to realise not everything is so simple.
itchypoot - Wednesday, February 24, 2016 - link
Continuing the trend of nvidias very bad DX12 performance.Sttm - Wednesday, February 24, 2016 - link
Wouldn't you need multiple data points to have a trend, and as this is really the only DX12 game, you do not have that do you?No what we have here is one game where one side has an advantage, and a fanboy for that side shouting how it means everything. As if we haven't seen that 1000 times before.
itchypoot - Wednesday, February 24, 2016 - link
Nothing of the sort, but you resort to insult because you have no substance. Likely you fit that description and see everyone else as being the same.There are other DX12 metrics available, nvidia continues to do poorly in them. Make yourself aware of them and return with data rather than insults.
Nvidia+DX12 = unfortunate state of affairs
willis936 - Wednesday, February 24, 2016 - link
"Make yourself aware of them so I don't have to make my own arguments"flashbacck - Wednesday, February 24, 2016 - link
Lol. This is pretty fantastic.close - Thursday, February 25, 2016 - link
Given that we only have (almost) one DX12 game available I wouldn't worry too much about the performance of any of the two players. By the time enough games are available to actually care about DX12 I assume both will be more than ready to deliver.HalloweenJack - Thursday, February 25, 2016 - link
so by the summer then - oh wait , tomb raider IS DX12 , on console - but Nv threw enough money at the dev to make it DX11 on the pc....close - Thursday, February 25, 2016 - link
Complaining (or worrying) about DX12 performance at this point is pointless. The whole ecosystem is very much in beta stages starting with the only version of Windows that supports DX12, Windows 10. The OS, the drivers, the games, they are all in a phase where they are subject to pretty big changes. Even the hardware will start supporting different subsets of DX12 in the future. And the title sums it up pretty well: "a beta look".But some people just need a reason to complain, to lament, to try on some sarcasm, etc. Only time will tell which platform will be "the best" and for how long once all the development is done. But what I can tell you right now is that both players will be "good enough".
P.S. Regardless of which side you're on, being a fanboy only works when you have the very top end product. So unless you have a FuryX or a 980Ti/Titan X pointing fingers at the performance of the competition is like driving a Fiesta and thinking it's a sort of Mustang.
silverblue - Thursday, February 25, 2016 - link
What about a Fiesta ST? (yes, I'm trolling, albeit mildly)MattKa - Thursday, February 25, 2016 - link
What a load of shit. Nvidia threw money at them to make it DX11?It's not DX12 on X-Box you uninformed baboon. In fact Crystal Dynamics is going to be patching DX12 support into the game.
You joker.
Kouin325 - Friday, February 26, 2016 - link
yes indeed they will be patching DX12 into the game, AFTER all the PR damage from the low benchmark scores is done. Nvidia waved some cash at the publisher/dev to make it a gameworks title, make it DX11, and to lock AMD out of making a day 1 patch.This was done to keep the general gaming public from learning that the Nvidia performance crown will all but disappear or worse under DX12. So they can keep selling their cards like hotcakes for another month or two.
Also, Xbox hasn't been moved over to DX12 proper YET, but the DX11.x that the Xbox one has always used is by far closer to DX12 than DX11 for the PC. I think we'll know for sure what the game was developed for after the patch comes out. If the game gets a big performance increase after the DX12 patch then it was developed for DX12, and NV possibly had a hand in the DX11 for PC release. If the increase is small then it was developed for DX11,
Reason being that getting the true performance of DX12 takes a major refactor of how assets are handled and pretty major changes to the rendering pipeline. Things that CANNOT be done in a month or two or how long this patch is taking to come out after release.
Saying "we support DirectX12" is fairly ease and only takes changing a few lines of code, but you won't get the performance increases that DX12 can bring.
Madpacket - Friday, February 26, 2016 - link
With the lack of ethics Nvidia has displayed, this wouldn't surprise me in the least. Gameworks is a sham - https://www.youtube.com/watch?v=O7fA_JC_R5s
keeepcool - Monday, February 29, 2016 - link
Finally!..I can't even grasp the concept of how low rez and crappy the graphics look on this thing and everybody is praising this "game" and its benchmarks of dubious accuracy.
It looks BAD, its choppy and pixelated, there is a simple terrain and small units that look like sprites from Dune 2000 and this thing makes an high end GPU cry to run at 60Fps's??....
hpglow - Wednesday, February 24, 2016 - link
No insults in his post. Sorry you get your butt hurt whenever someone points out the facts. There are few Direct X 12 pieces of software outside of tech demos and canned benchmarks avalible. Nvidia has better things to do than appease the arm-chair quarterbacks of the comments section. Like optimize for games we are playing right now. Weather Nvidia cards are getting poor or equal performance in DX 12 titles to their DX 11 counterparts is irrelevant right now. We can talk all we want but until there is a DX 12 title worth putting $60 down on and that title actually gains enough FPS to increase the gameplay quality then the conversation is moot.Your first post was trolling and you know it.
at80eighty - Wednesday, February 24, 2016 - link
There is definitely a disproportion in responses - in the exact inverse of what you described. Review your own post for more chuckles.
Flunk - Thursday, February 25, 2016 - link
What? How dare you suggest that the fans of the great Nvidia might share some of the blame! Guards, arrest this man for treason!
Mondozai - Thursday, February 25, 2016 - link
"No insults in his post."Yeah, except that one part where he called him a fanboy. Yeah, totally no insults.
Seriously, is the Anandtech comment section devolving into Wccftech now? Is it still possible to have intelligent arguments about tech on the internet without idiots crawling all over the place? Thanks.
Mr Perfect - Thursday, February 25, 2016 - link
Arguments are rarely intelligent.
MattKa - Thursday, February 25, 2016 - link
If fanboy is an insult you are the biggest pussy in the world.
IKeelU - Thursday, February 25, 2016 - link
"Trolling" usually implies deliberate obtuseness in order to annoy. Itchypoot's posts read like a newb's or fanboy's (likely a bit of both) who simply doesn't understand how evidence and logic factor into civilized debate.
permastoned - Sunday, February 28, 2016 - link
Wasn't trolling - there are other metrics that show the case; for you to imply that 3dmark isn't valid is just silly: http://wccftech.com/amd-r9-290x-fast-titan-dx12-en...
Another thing: what's the deal with all these fanboys? There is no benefit to being a fanboy of either AMD or Nvidia; it is just going to cause you problems, because it may cause you to buy based on brand rather than on performance per dollar, which is the factor that actually matters. At different price ranges different brands are better - e.g. at the top end, a 980Ti is better than a Fury X, but if you are looking in the price bracket below and want to buy a 980, you will get better performance and performance per dollar from a standard Fury.
Being a fanboy will blind you from accepting the truth when the tides shift and the tables eventually turn. It helps you in no way at all, it disadvantages you in many. It also causes you to get angry on forums for no reason, and call people 'trolls' when they are stating facts.
Soulwager - Sunday, March 20, 2016 - link
Poorly how, exactly? It looks to me like DX12 is just removing a bottleneck for AMD that Nvidia already fixed in DX11. It would be more correct to say that AMD has poor DX11 performance compared to Maxwell, and neither is constrained by driver overhead in DX12.
SunLord - Wednesday, February 24, 2016 - link
DX12 by design will slightly favor older AMD designs, simply because the design decisions AMD made compared to Nvidia with regard to DX11 are paying off with DX12, while Nvidia benefited from its choices in DX11 games - which is why they own around 80% or so of the gaming GPU market. How much of an impact this will have depends on the game, just like with DX11 games: some do better on AMD, some will be better on Nvidia.
anubis44 - Thursday, February 25, 2016 - link
If results like these continue with other DX12 games, nVidia's going to be the one with only 20% in a matter of months.
althaz - Thursday, February 25, 2016 - link
Even in generations where AMD/ATI have been dominant in terms of performance and value, they've still not really dominated in sales.
Just like even when AMD's CPUs were offering twice the performance per watt and cheaper performance per dollar, they still sold less than Intel.
Doing it for a short time isn't enough, you have to do it for *years* to get a lead like nVidia has.
Firstly you have to overturn brand loyalty from complete morons (aka everybody with any brand loyalty to any company; these are corporations that only care about the contents of your wallet, so make rational choices). That will happen with only a small percentage of people at a time. So you have to maintain a pretty serious lead for a long time to do it.
AMD did manage to do it in the enthusiast space with CPUs, but (arguably due to Intel being dodgy pricks) they didn't quite turn that into mainstream market dominance. Which sucks for them, because they absolutely deserved it.
So even if AMD maintains this DX12 lead for the rest of the year and all of the next, they'll still sell fewer GPUs than nVidia will in that time. But if they can do it for another year after that, *then* they would be likely to start winning the GPU war.
Personally, I don't care a lot. I hope AMD do better because they are losing and competition is good. However, I will make my next purchasing decision on performance and price, nothing else.
permastoned - Sunday, February 28, 2016 - link
Wasn't trolling - there are other metrics that show the case; for you to imply that 3dmark isn't valid is just silly: http://wccftech.com/amd-r9-290x-fast-titan-dx12-en...
2 points = trend.
Another thing; what's the deal with all these fanboys? There is no benefit to being a fanboy of either AMD or Nvidia, it is just going to cause you problems because it may cause you to buy based on brand, rather than on performance per dollar, which is the factor that actually matters. At different price ranges different brands are better - e.g top end, a 980Ti is better than a fury X, however if you are looking in the price bracket below, and want buy a 980, you will get better performance and performance per dollar from a standard fury.
Being a fanboy will blind you from accepting the truth when the tides shift and the tables eventually turn. It helps you in no way at all, it disadvantages you in many. It also causes you to get angry on forums for no reason, and call people 'trolls' when they are stating facts.
Continuity28 - Wednesday, February 24, 2016 - link
By the time DX12 becomes commonplace, I'm sure they will have cards that were built for DX12.
It makes a lot of sense to design your cards around what will be most useful today, not years in the future when people are replacing their cards anyways. Does it really matter if AMD's DX12 performance is better when it isn't relevant, when their DX11 performance is worse when it is relevant?
Senti - Wednesday, February 24, 2016 - link
Indeed it makes much sense to build cards exactly for today so people would be forced to buy new hardware next year to have decent performance. From a certain green point of view. But many people are actually hoping that their brand new mid-to-top card will last with decent performance for at least a few years.
cmdrdredd - Wednesday, February 24, 2016 - link
Hardware performance for new APIs is always weak with first gen products. That isn't changing here. When there are many DX12 titles out and new cards are out there, you'll see that people don't want to try playing with their old cards and will be buying new. That's how it works.
ToTTenTranz - Wednesday, February 24, 2016 - link
"Hardware performance for new APIs is always weak with first gen products."Except that doesn't seem to be the case with 2012's Radeon line.
Friendly0Fire - Wednesday, February 24, 2016 - link
You don't have enough data to know this.
Once the second generation of DX12 cards come out, then you can analyze the jumps and get a better idea. Ideally you'd wait for three generations of post-DX12 GPUs to get the full picture. As it is, all we know is that AMD's DX12 driver is better than their DX11 driver... which ain't saying much.
The_Countess - Thursday, February 25, 2016 - link
Except we have 3 generations of DX12 cards already on AMD's side, starting with the HD 7970, which still holds its own quite well.
And we've had multiple DX12 and Vulkan benchmarks already, and in every one of them the 290 and 390 in particular beat the crap out of nvidia's direct competition. In fact they often beat or match the card above them as well.
As for drivers: AMD's DX11 drivers are fine. They just didn't invest bucketloads of money in game-specific optimizations like nvidia did, but instead focused on removing the need for those optimizations in the first place. nvidia's investment doesn't offer long-term benefits (a few months, then people move on to the next game), and that level of optimization in the drivers is impossible and even unwanted in low-level APIs.
Basically, nvidia will be losing its main competitive advantage this year.
hero4hire - Friday, February 26, 2016 - link
I think what he meant was we don't have enough test cases to conclude anything about mature DX12 performance. The odds are pointing to AMD having faster GPUs for DX12. But until multiple games are out, and preferably one or two "DX12" noted drivers, we're speculating. I thought this was clear from the article?
It's a stretch calling it 3 generations of DX12 released cards too. I guess if we add up draft revisions there are 50 generations of AC wireless.
You could state that because AMD's arch is targeting DX12, it looks to give an across-the-board performance win in DX12 next-gen games. But again we only have 1 beta game as a test case. Just wait and it will be a fact or not. No need to backfill the why.
CiccioB - Sunday, February 28, 2016 - link
Right, they didn't invest bucketloads in optimizing current games; they just paid a single company to make a benchmark game using their strongest point in DX12, a super mega threaded (useless) engine. No different than nvidia using super mega geometrically (uselessly) complex scenes helped by tessellation.
Perfect marketing: the most return for the least investment.
Unfortunately a single game with a bucketload of async compute threads added just for the joy of it is not a complete DX12 trend: what about games that are going to support voxel global illumination, which AMD HW cannot handle?
We'll see where the game engines will point. And whether this is another straw fire that AMD has started up these last years, seeing they are in big trouble.
BTW: it is stupid to say that the 390 "beats the crap out of anything else" when it is using a different API. All you can see is that a beefed-up GPU like Hawaii, consuming 80+W more than the competition, finally manages to pass it, as it should have done at day one. But this happens only because of the use of a different API with different capabilities that the other GPU cannot benefit from.
You can't say it is better if, with the current standard API (DX11), that beefed-up GPU can't really do better.
If you are so excited by the fact that a GPU 33% bigger than another is able to get almost 20% more performance with a future API under best-case conditions, at the very moment a completely new architecture is about to be launched by both the red and green teams, well, you really demonstrate how biased you are. Whoever bought a 290 (then 390) card back in the old days has been biting dust (and losing Watts) all these months, and the small boost at the end of these cards' life is really a shallow thing to be excited about.
lilmoe - Wednesday, February 24, 2016 - link
I like what AMD has done with "future proofing" their cards and drivers for DirectX12. But people buy graphics cards to play games TODAY. I'd rather get a graphics card with solid performance in what we have now rather than get one and sit down playing the waiting game.
1) It's not like NVidia's DX12 performance is "awful", you'll still get to play future games with relatively good performance.
2) The games you play now won't be obsolete for years.
3) I agree with what others have said; AOS is just one game. We DON'T know if NVidia cards won't get any performance gains from DX12 under other games/engines.
ppi - Wednesday, February 24, 2016 - link
You do not buy a new gfx card to play games TODAY, but for playing TOMORROW, next month, next quarter, and then for a few years (few being ~two), until the performance in new games regresses to the point when you bite the bullet and buy a new one.
Most people do not have an unlimited budget to upgrade every six months when a new card claims the performance crown.
Friendly0Fire - Wednesday, February 24, 2016 - link
It's unlikely that the gaming market will be flooded by DX12 games within six months. It's unlikely to happen within a few years, even. Look at how slow DX10 adoption was.
anubis44 - Thursday, February 25, 2016 - link
I think you're quite wrong about this. Windows 10 adoption is spreading like wildfire in comparison to Windows XP --> Vista. DX10 wasn't available as a free upgrade to Vista the way DX12 is in Windows 10.
Despoiler - Thursday, February 25, 2016 - link
Just about every title announced for 2016 is DX12 and some are DX12 only. There are many already released games that have DX12 upgrades in the works.
Space Jam - Wednesday, February 24, 2016 - link
Nvidia leading is always irrelevant. Get with the program :p
Nvidia's GPUs lead for two years? Doesn't matter, buy AMD based on future performance.
DX11 the only real titles in play? Doesn't matter, the miniscule DX12/Vulkan sample size says buy AMD!
rarson - Wednesday, February 24, 2016 - link
Yeah, because people who bought a 980 Ti are already looking to replace them...
Aspiring Techie - Wednesday, February 24, 2016 - link
I'm pretty sure that Nvidia's Pascal cards will be optimized for DX12. Still, this gives AMD a slight advantage, which they need pretty badly now.
testbug00 - Wednesday, February 24, 2016 - link
*laughs*Pascal is more of the same as Maxwell when it comes to gaming.
Mondozai - Thursday, February 25, 2016 - link
Pascal is heavily compute-oriented, which will affect how the gaming lineup arch will be built. Do your homework.testbug00 - Thursday, February 25, 2016 - link
Sorry, Maxwell can already support packed FP16 operations at 2x the rate of FP32 with X1.
The rest of the compute features will be pretty much exclusive to GP100. Like how Kepler had a gaming line and GK110 for compute.
MattKa - Thursday, February 25, 2016 - link
*laughs*I'd like to borrow your crystal ball...
You lying sack of shit. Stop making things up you retarded ass face.
testbug00 - Thursday, February 25, 2016 - link
What does Pascal have over Maxwell according to Nvidia again? Bolted on FP64 units?
CiccioB - Sunday, February 28, 2016 - link
I have not read anything about Pascal from nvidia outside of the FP16 capabilities that are HPC oriented (deep learning).
Where have you read anything about how Pascal's cores/SMX/cache and memory controller are organized? Are they still using a crossbar or have they finally moved to a ring bus? Are caches bigger or faster? What is the ratio of cores/ROPs/TMUs? How much bandwidth for each core? How much has the memory compression technology improved? Have the cores doubled their ALUs, or have they made more independent cores? How independent? Is the HW scheduler now able to preempt the graphics thread or can it still not? How many threads can it support? Is the voxel support better, and able to be used heavily in scenes to make a difference in global illumination quality?
I have not read anything about these points. Do you have any more info about them?
Because what I can see is that at first glance even Maxwell was not really different from Kepler. But in reality the performance was quite different in many ways.
I think you really do not know anything about what you are talking about.
You are just expressing your desires and hopes like any other fanboy, as a mirror of the frustration you have suffered all these years with the less capable AMD architecture you have been using up to now. You just hope nvidia has stood still and AMD has finally made a step forward. It may be that you are right. But you can't say so now, nor would I go around stating such things as facts without anything to back them up.
anubis44 - Thursday, February 25, 2016 - link
I think nVidia's been caught with their pants down, and Pascal doesn't have hardware schedulers to perform async compute, either. It may be that AMD has seriously beaten them this time.
anubis44 - Thursday, February 25, 2016 - link
nVidia wasn't expecting AMD to force Microsoft's hand and release DX12 so soon. I have a feeling Pascal, like Maxwell, doesn't have hardware schedulers, either. It's beginning to look like nVidia's been check-mated by AMD here.
BurntMyBacon - Thursday, February 25, 2016 - link
@anubis44: "nVidia wasn't expecting AMD to force Microsoft's hand and release DX12 so soon."I do believe you are correct. Given the lack of ability to throw driver optimizations at the DX12 code path and nVidia's proficiency at doing it, I'd say this will be quite damaging. They've lost one clear advantage they held (at least in DX11).
@anubis44: "It's beginning to look like nVidia's been check-mated by AMD here."
I wouldn't go that far. They probably won't have the necessary hardware in Pascal, but you can be sure Volta will have what it needs. Besides, most games will likely have a DX11 code path for the foreseeable future as developers wouldn't want to lock themselves out of an entire market. Also, at the moment, nVidia can still play DX12 fine, they just don't appear to have the advantage at the moment given the small sample set of available data points.
In conclusion, it is more like they have lost a rook or queen. Of course, they've taken a few of ATi's pieces as well, so lets just wait and see who plays their remaining pieces better.
rhysiam - Thursday, February 25, 2016 - link
The other thing I would add to this is that it's not like Nvidia have nowhere to go here. Take the GTX 970 vs the R9 390 for example... they're in a similar price & performance tier. Yet the 970 is smaller with fewer transistors (usually meaning it's cheaper to produce) and generally has much higher overclocking headroom (because Nvidia wasn't under pressure to clock the card closer to the limit to reach relevant performance). So it's reasonable to expect Nvidia could both lower the price and clock it higher to get a significantly better value card with basically no substantive engineering/architectural changes.
I'm not suggesting Nvidia will do that with the 970 specifically. Rather, what I'm saying is that if they find Pascal is similarly behind AMD they've got plenty of room to tweak performance and price before we can start calling them "check-mated". But it's certainly good news for us if DX12 performance like this continues and AMD essentially forces Nvidia to lower its margin.
CiccioB - Sunday, February 28, 2016 - link
They can do exactly as AMD has done with GCN: they can just start using 30 or 50% bigger GPUs to close the performance gap if they really need to.
The_Countess - Thursday, February 25, 2016 - link
nvidia's entire performance advantage in DX11 is based on game-specific driver optimizations. They have a virtual army of developers slaving away on those (and coming up with ways to hurt everyone's performance as long as it hurts AMD the most or makes their own latest-gen cards look better... but that's a different matter).
With DX12, however, the driver becomes MUCH thinner and doesn't have nearly as much influence. So basically nvidia's main competitive advantage is gone with DX12 and Vulkan.
As for being relevant: this year pretty much every game where performance matters will have either a DX12 or Vulkan render option. Adding in the fact that AMD cards generally age better than nvidia's (those game-specific optimizations focus pretty much exclusively on their latest generation of cards), I would say that yes, it is very relevant.
BurntMyBacon - Thursday, February 25, 2016 - link
@The_Countess: "nvidia's entire performance advantage in DX11 is based on game specific driver optimizations. they have a virtual army of developers slaving away on those ..."True, they have lost a large advantage. Keep in mind, though, that nVidia's developer relations are still in play. What they once achieved through the use of driver optimizations may still be accomplished through code path optimization and design guidance for nVidia architecture. The first beta for Vulkan (The Talos Principle) showed that merely replacing a high level API (OpenGL/DX11) with a low level one (Vulkan/DX12) does not automatically improve the experience. If nVidia can convince developers to avoid certain non-optimal features or program in such a way as to take better advantage of nVidia hardware in their titles (for the sake of performance on the majority of discrete card owners out there of course) then ATi will be in the same position as they are now. Better hardware, worse software support. Then again, low level API cross-platform titles will most assuredly program to take advantage of the console architectures which happens to be ATi's at the moment.
nevcairiel - Wednesday, February 24, 2016 - link
Considering the Fury X has just a tad more raw power than an (older) 980 Ti, I would say the DX12 numbers are fine, and what this is really showing is AMD's lack of performance in DX11?
tuxRoller - Wednesday, February 24, 2016 - link
I don't agree with this. I think this is more a case of nvidia not being able to rely so much on the ENORMOUS number of special cases in their driver.IOW, this is about two things: hardware and game design. The drivers are trivial next to d3d11/ogl.
jasonelmore - Wednesday, February 24, 2016 - link
Fury X's architecture is much newer than Maxwell 2's. Let's see what the true DX12 cards can do this summer.
tuxRoller - Wednesday, February 24, 2016 - link
Did you not notice the across-the-board improvements for all GCN cards?
The point I was making, and that others have made for some time, is that AMD makes really good hardware but this is typically masked by poor drivers.
You can see this by looking at their excellent performance in compute workloads where the code in the driver is more recent and doesn't have the legacy cruft of their d3d/ogl code.
Despoiler - Thursday, February 25, 2016 - link
It's not their drivers. It's purely architectural. GCN moved the schedulers into hardware. GCN requires the API to be able to feed it enough work. What people have been calling "driver overhead" is nothing of the sort. DX11 is just not capable of fully utilizing AMD hardware. DX12 is, which is why AMD created Mantle. It forced MS to create DX12 and that set off the creation of Vulkan. All of the next-gen APIs are tailored to exploit the hardware AMD is already selling.
tuxRoller - Friday, February 26, 2016 - link
It's the simpler drivers which provide less room to hide architectural deficiencies.My point was that, across the board, gcn improves its performance a good deal relative to d3d11. That includes cards that are four years old. I don't think Maxwell is older than that.
I don't think we are really disagreeing, though.
RMSe17 - Wednesday, February 24, 2016 - link
Nowhere near as bad as the DX9 fiasco back in the FX 5xxx days, where a low-end ATi card would demolish the highest-end GeForce.
pt2501 - Thursday, February 25, 2016 - link
Few, if any, here are going to remember the fiasco when the Radeon 9700 Pro demolished the competition in performance and stability. Even fewer remember nvidia "optimizing" games with lower quality textures to compete.
dray67 - Thursday, February 25, 2016 - link
I remember it, and it was the reason I went for the 9700 and later the 9800. Atm I'm back to Nvidia; I've had 2 AMD cards die on me due to heat. As much as I like them, I've had my fingers burnt and moved away from them. If DX12 and dual GPU support become better supported I'll buy a high-end AMD card in an instant.
knightspawn1138 - Thursday, February 25, 2016 - link
I remember it clearly. My Radeon 9800 was the last ATI card I bought. I loved it for years, and only ended up replacing it with an NVidia card when the Catalyst Control Center started sucking all the cycles out of my CPU. It's funny that half of the comments on this article complain that NVidia's drivers are over-optimized for every specific game, yet ATI and AMD were content to allow the CCC to be a resource hog that ruined even non-gaming performance for years. I'm happy with my NVidia cards. I've been able to easily play all modern games with great performance using a pair of GTX 460's, and recently replaced those with a GTX 970.xenol - Thursday, February 25, 2016 - link
Considering there aren't any other async shader games in development, nothing announced, and Pascal coming within the next year (by which time maybe a game might actually use DX12), which will probably alleviate the situation, your evaluation of NVIDIA's situation is pretty poor.
It takes more than a generation or a game to make a hardware company go down. NVIDIA suffered plenty during its GeForce FX days, and it got right back on its feet.
MattKa - Thursday, February 25, 2016 - link
No, no, no. An RTS game that probably isn't going to sell very well and seems incredibly lacking is going to destroy Nvidia.
gamerk2 - Thursday, February 25, 2016 - link
AMD has had an async compute engine in their GPUs going back to the 7000 series. NVIDIA has not. Stands to reason AMD would do better in async compute based benchmarking.
Let's see how Pascal compares, since it's being designed with DX12, and async compute, in mind.
agentbb007 - Saturday, February 27, 2016 - link
"NVIDIA telling us that async shading is not currently enabled in their drivers" - yeah, this pretty much sums it up. This beta stuff is interesting, but just that: beta...
JlHADJOE - Saturday, February 27, 2016 - link
The GTX 680 seems to have done well though. I feel like Maxwell is being let down by the compromises Nvidia made optimizing for FP16 only and sacrificing real compute performance.
Dug - Sunday, February 28, 2016 - link
Very bad would indicate that it would be unplayable.I'm playing fine at over 60fps on nvidia. Maybe I should trade it in for an R9 to get 63fps?
C3PC - Wednesday, March 16, 2016 - link
Not really; this is a beta for a game that is heavily embedded with AMD tech, so the way the game handles it would favor AMD's implementation. It could go the other way for a game designed around Nvidia's implementation.
Also, calling this bad performance of DX12? Maybe you should clarify that this is an implementation of 12_0 and not 12_1; I highly doubt AMD will fare as well as Nvidia under such circumstances.
jsntech - Wednesday, February 24, 2016 - link
For Nvidia to perform the same or worse between DX11 and DX12 seems like a pretty big thing to be addressed with just an 'optimization', especially compared to AMD's results. I guess we'll see when it's out of beta!
Senti - Wednesday, February 24, 2016 - link
Well, being low-level, DX12 leaves much less to the driver, so there should be fewer miraculous fps gains from driver optimization than in DX11.
mabellon - Wednesday, February 24, 2016 - link
Something I have been wondering about this game is whether the DX11 vs DX12 comparison is really valid. The game apparently pushes higher draw calls and takes advantage of DX12. But when running in DX11 mode, is it still trying to push all those unique draw calls or is it optimized like most DX11 games and using draw call instancing? (Sorry not a game dev, so I don't fully grok the particulars).
Basically, if the game was designed and optimized for DX11 it might perform well but not have the visual fidelity of so many draw calls (unique unit visuals). So the real difference should have been visual quality. Instead I get the impression that the game was designed to push DX12 and then when in DX11 mode stresses the draw call limitations, over emphasizing the apparent gains. Am I wrong?
Currently it seems that porting a game from DX11 to DX12 nets up to a 50% improvement in framerates. The reality is more nuanced, in that existing games are clearly working around draw call limitations and thus won't see something quite so dramatic. Thoughts?
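For anyone who wants the draw-call distinction made concrete, the two submission styles being compared look roughly like this in D3D11 terms. This is only an illustrative sketch - the helper names (SetPerUnitConstants, UploadInstanceBuffer, unitMesh) are hypothetical and not this engine's actual code; only DrawIndexed/DrawIndexedInstanced are real D3D11 calls:

```cpp
// Two ways to submit 10,000 similar units in D3D11 (illustrative fragment).

// (a) One draw call per unit: maximum per-unit variety, but CPU submission cost
//     scales with unit count - exactly the overhead DX12's cheaper calls absorb.
for (int i = 0; i < 10000; ++i) {
    SetPerUnitConstants(context, units[i]);              // hypothetical helper
    context->DrawIndexed(unitMesh.indexCount, 0, 0);     // ID3D11DeviceContext::DrawIndexed
}

// (b) Instanced: one draw call for the whole batch. Per-unit data moves into an
//     instance buffer, so units must share mesh/shader state - the usual DX11
//     workaround for draw call limits.
UploadInstanceBuffer(context, units);                    // hypothetical helper
context->DrawIndexedInstanced(unitMesh.indexCount, 10000, 0, 0, 0);
```

If the DX11 path leans on (b) while the DX12 path submits something closer to (a), the two paths aren't rendering quite the same scene, which is the nuance being raised here.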
ImSpartacus - Wednesday, February 24, 2016 - link
I think you're probably right.Dx12 doesn't simply improve performance and nothing else. So the massive performance improvements probably aren't entirely fair.
extide - Wednesday, February 24, 2016 - link
It would have to use fewer draw calls for DX11, as hitting DX11 with the high number of calls you can do in DX12 would make it fall flat on its face. I am sure the DX11 path is well optimised for DX11; I mean, while this game is a big showcase for DX12, most people who play it will probably be on DX11 ...
Denithor - Thursday, February 25, 2016 - link
People with nVidia cards will be playing in DX11 mode. People with AMD cards, even those three generations old, will be fine in DX12 mode with better eye candy and simultaneously better FPS.
Friendly0Fire - Wednesday, February 24, 2016 - link
There's definitely quite a bit of that. Looking at the benchmark, a lot of the graphics design seems aimed at causing more draw calls (long-lasting smoke consisting of lots of unique particles, lots of small geometric details, etc.). While I'm absolutely convinced that DX12 will give better performance than DX11 in the long run and that the gap will be fairly large, I think this benchmark is definitely designed to overemphasize just how great DX12 is.jardows2 - Wednesday, February 24, 2016 - link
I would like to see how much of an impact DX12 makes on a released game in the CPU world. Do you get better performance from multiple cores, or is it irrelevant? Speculation is that DX12 could change the normal paradigm for judging gaming performance on CPUs.
extide - Wednesday, February 24, 2016 - link
If you are CPU limited, and it's using lots of threads, then yeah, more cores would be faster. They were CPU limited on an overclocked 4960X, which is no slouch - that was very surprising!
rhysiam - Wednesday, February 24, 2016 - link
I agree that will be very interesting. I'm surprised more hasn't been made of the seemingly pretty hard CPU limit to ~70fps, irrespective of the detail settings or resolution. And that on a still very capable 4960X @ 4.2Ghz. If we estimate Skylake has a 20% IPC advantage, that would still see the current top tier 6700K (at stock) maxing out in the mid 80s, a long way short of what you might like on a 144hz monitor. Does that mean a brand new quad core CPU like the i5 6400 with its low base clock might struggle to sustain 60fps, even on lower detail settings?I realise this is beta and all preliminary, but it's interesting nonetheless.
DanNeely - Wednesday, February 24, 2016 - link
Does DX12 Multi-adapter offer any benefits with cards that are mismatched in performance? I'm currently running a GTX 980 in my main PC and also have an older GTX 770 sitting around; would pairing them offer any speedup over just the 980, or would the faster card end up held back by the slower one?
I'd be equally interested in seeing how AMD does with significantly mismatched GPUs; since they've been trying (with varying degrees of success) to push XFire between their IGPs and the significantly faster chips in midrange Radeon cards.
BigLan - Wednesday, February 24, 2016 - link
The article has a quote from the developer about using mismatched cards..."For example, you will never get more than twice the speed of the slowest video card. You would be better off just using the new card alone."
You might get some benefit, but likely not that much.
Friendly0Fire - Wednesday, February 24, 2016 - link
I think that's rather narrow minded and way too absolute. Mismatched cards can be used to their full potential, but you'd need some smart coding to make it so. For instance, you could offload some of the work to the weaker GPU, keeping the stronger one for the main rendering.One excellent example which would fully utilize two mismatched cards is VR: multiadapter rendering would be used to offload the VR projection and transformation steps to the integrated GPU in most modern CPUs, while the main GPU would do the regular rendering. The data transfer requirement is minimal, but there's a fair amount of computations required, making it an ideal scenario.
Other examples include doing post-processing on the weaker card (SSAO, subsurface scattering, screenspace reflections, etc.). The big problem is judging just how much work should be offloaded to the secondary GPU - just detecting the hardware would be extremely laborious.
Ryan Smith - Wednesday, February 24, 2016 - link
It's a correct description for how Ashes works. They implement a (relatively) straightforward AFR setup, so the cards need to be similar in performance.
Senti - Wednesday, February 24, 2016 - link
What Multi-adapter does is left completely to the developer. In some cases it can give you nothing; in others every bit of hardware can be useful, including the iGPU.
extide - Wednesday, February 24, 2016 - link
Their current implementation is AFR, so the performance of the cards should be as close to identical as possible. In the future I think they may plan on offloading some of the raw compute onto a second GPU, and in that case an older slower GPU would be beneficial.
Drumsticks - Wednesday, February 24, 2016 - link
These are always interesting results to see. I'm pretty excited for Polaris - I can't wait to pickup a higher end GPU to replace my old, old 7850.mattevansc3 - Wednesday, February 24, 2016 - link
Isn't Oxide's statement that they don't optimise for certain hardware a bit disingenuous?If you read their developer diaries not only was AoS built around Mantle, not only was the engine built upon Mantle but they've stated that they developed more of Mantle than AMD did.
Before DX12 was even announced Oxide were working directly with AMD and building AoS to champion Mantle and take advantage of it a low level while only supporting nVidia hardware on DX11. That of course will automatically bias results in favour of RTG even if there is no intention to do so at this stage.
Beany2013 - Wednesday, February 24, 2016 - link
You are aware that Mantle and DX12 are actually different APIs, yeah?
zheega - Wednesday, February 24, 2016 - link
AMD just released new drivers that they say are made for this benchmark. Can we get a quick follow-up if their performance improves even more??
http://support.amd.com/en-us/kb-articles/Pages/AMD...
AMD has partnered with Stardock in association with Oxide to bring gamers Ashes of the Singularity – Benchmark 2.0 the first benchmark to release with DirectX® 12 benchmarking capabilities such as Asynchronous Compute, multi-GPU and multi-threaded command buffer Re-ordering. Radeon Software Crimson Edition 16.2 is optimized to support this exciting new release.
revanchrist - Wednesday, February 24, 2016 - link
See? Every time there's a pro-AMD game tested, there'll be a lot of butthurt fanboy comments. And I guess everyone knows why. Because when you've bought something, you'll always want to justify your purchase, and you know who's got the lion's share of the dGPU market now. Guess nowadays people are just too sensitive or have hearts of glass, which makes them judge things ever so subjectively and personally.
Socius - Wednesday, February 24, 2016 - link
For anyone who missed it:"Update 02/24: NVIDIA sent a note over this afternoon letting us know that asynchornous shading is not enabled in their current drivers, hence the performance we are seeing here. Unfortunately they are not providing an ETA for when this feature will be enabled."
ToTTenTranz - Wednesday, February 24, 2016 - link
"Unfortunately they are not providing an ETA for when this feature will be enabled."If ever...
andrewaggb - Wednesday, February 24, 2016 - link
Makes sense why it would be slightly slower. Also makes these benchmarks less meaningful.
Ext3h - Wednesday, February 24, 2016 - link
"not enabled" is a strange and misleading wording, since it obviously is both available and working correctly according to the specification.Should be read as "not being made full use of", as it is only lacking any clever way of profiting from asynchronous compute in hardware.
barn25 - Thursday, February 25, 2016 - link
If you google around you will find out nvidia does not have asynchronous shading on its DX"12" cards. This was actually first found out in WDDM 1.3 back in Windows 8.1, when they would not support the optional features which AMD does.
Ext3h - Thursday, February 25, 2016 - link
I know that the wrong terminology has kept being used for years now, especially driven by major tech review websites like this one. But that's still not making it any less wrong.
The API is fully functional. So the driver does support it. Whether it does so efficiently is an entirely different matter; you don't NEED hardware "support" to provide that feature. Hardware support is only required to provide parallel execution, as opposed to the default sequential fallback. The latter is perfectly within the bounds of the specification, and counts as fully functional. It's just not providing any additional benefits, but it's neither broken nor deactivated.
barn25 - Thursday, February 25, 2016 - link
Don't try to change it. I am referring to HW async compute, which AMD supports and NVidia does not. Using a shim will impact performance even more.
CiccioB - Sunday, February 28, 2016 - link
The so-called async compute implementation AMD has in HW IS NOT PART OF THE DX12 SPEC.
I hope that is clear, written that way.
DX12 describes the use of multiple threads in flight at the same time. nvidia does support them, with some limitations in number and preemption capabilities with respect to what AMD HW can do.
This however does not mean that nvidia HW does not support async compute or that it is out of spec. AMD just made a better implementation of it.
Think of it as with tessellation: nvidia's implementation is way better than AMD's, but the fact that AMD can't go over certain values does not mean they are not DX11 compliant.
What you are looking at here is a benchmark (more than a game) that stresses the multi-threaded capabilities of AMD HW. You can see that AMD is in a better position here. But the question is: how many other games are going to benefit from using such a technique, and how many of them are going to implement such a heavy-duty load?
We just don't know now. We have to wait to see if this technique can really improve performance (and thus image quality) in many other situations, or if it is just a show-off for AMD (which has clearly partnered to make this feature even heavier on nvidia HW).
When nvidia starts getting developers to use their HW-accelerated voxels, we will start to see which feature hits the other's HW harder and which gives better image quality improvements.
For now I just think this is an overused feature that, like many other engine characteristics in DX11, is going to give an advantage to one side rather than the other.
anubis44 - Thursday, February 25, 2016 - link
That's because it never will be. You can't enable missing hardware.
xTRICKYxx - Wednesday, February 24, 2016 - link
I hate to be that guy, but I think it is time to dump the X79 platform for X99 or Z170.
Ryan Smith - Wednesday, February 24, 2016 - link
Yep, Broadwell-E is on our list of things to do once it's out.
Will Robinson - Wednesday, February 24, 2016 - link
NVidia got rekt.
DX12 lays the smack on Chizow's green dreams.
Roboyt0 - Wednesday, February 24, 2016 - link
Do you have 3840x2160 results for the R9 290X per chance?
Ryan Smith - Wednesday, February 24, 2016 - link
No. We only ran 4K on Fury X and 980 Ti.
Stuka87 - Wednesday, February 24, 2016 - link
Really hating the colors of the graphs here. All grey, legend has one blue item, but no blue on the graph....
Ryan Smith - Wednesday, February 24, 2016 - link
It's something of a limitation of the CMS. The color bar is the average; the grey bars are in the same order as they are in the legend: normal, medium, and heavy batch counts.
Mr Perfect - Thursday, February 25, 2016 - link
I was wondering what was up with that. Maybe someone could do a little MS-Paint bucket fill on the images before publishing? :)
Koenig168 - Wednesday, February 24, 2016 - link
There is a brief mention of GTX 680 2GB "CPU memory limitations". I take it you mean "VRAM memory limitations". It would be interesting to know if this can be overcome by DX12 memory stacking, either a pair of GTX 680s or the GTX 690.
Ryan Smith - Wednesday, February 24, 2016 - link
That was meant to be "GPU memory limitations", thanks for the catch.
B3an - Wednesday, February 24, 2016 - link
Why is Beta 2 still not available on Steam? Have the media got early access? At the time of posting this there's still only Beta 1 available.
Ryan Smith - Wednesday, February 24, 2016 - link
It's out to the public tomorrow.
hemipepsis5p - Wednesday, February 24, 2016 - link
Hey, so I'm confused by the mixed GPU testing. I thought that both cards had to be the same in order to run them in SLI/Crossfire? How did they test a Fury X + 980Ti?
Ext3h - Wednesday, February 24, 2016 - link
That's no longer the case with DX12. It used to be like this with DX11 and earlier versions, when the driver decided if/how to split the workload onto multiple GPUs, but with DX12 that choice is now up to the application.
So if the developer chooses to support asymmetric configurations, even cross vendor or exotic combinations like Intel IGP + AMD dGPU, then it can be made to work.
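The mechanics behind that are worth spelling out: under DX12 the application enumerates every adapter itself and creates an independent device on each one, so any split of work between them is engine code rather than driver magic. A minimal sketch of just the enumeration step (error handling omitted; this is the standard DXGI/D3D12 pattern, not Oxide's code):

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
using Microsoft::WRL::ComPtr;

// Create a D3D12 device on every hardware adapter in the system - discrete or
// integrated, any vendor. How work is divided between them is up to the engine.
std::vector<ComPtr<ID3D12Device>> CreateDeviceForEachAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)   // skip WARP/software adapters
            continue;

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);                 // e.g. a dGPU and an iGPU both land here
    }
    return devices;
}
```

Whether those devices then run AFR, split off post-processing, or something else entirely is an application design choice, which is why Oxide's "roughly matched cards only" caveat is a property of their AFR implementation rather than a limit of the API.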
anubis44 - Thursday, February 25, 2016 - link
I'm willing to bet that nVidia's Maxwell cards can't use DX12's async compute at all, and they're falling back to the DX11 code path, even when you 'enable' DX12 for them.
Ext3h - Thursday, February 25, 2016 - link
You lose that bet.
The asynchronous compute term only defines how tasks are synchronized against each other, whereby the "asynchronous" part only states that tasks won't block while waiting for each other. The default of doing that in software, in order to create a sequential schedule, is perfectly legit and fulfills the specification in whole.
Hardware support isn't required for this feature at all, even though you *can* optionally use hardware to perform much better than the software solution. Parallel execution does require hardware support and can bring a huge performance boost, but "asynchronous compute" does not specify that parallel execution is required.
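To make that distinction concrete: at the D3D12 API level, "async compute" just means the application can create an additional compute-type command queue alongside the graphics queue and order the two with fences. A minimal sketch (device creation omitted); whether the GPU actually overlaps work from the two queues, rather than serializing it, is the hardware/driver question being argued here:

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create a graphics queue and a separate compute queue on an existing device.
// The API contract is only that work on the two queues is synchronized via fences;
// running them in parallel is a hardware/driver optimization, and a serialized
// schedule is still a valid implementation.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& gfxQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;       // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // Typical use: record compute work (lighting, particles, etc.) on computeQueue,
    // graphics on gfxQueue, and use ID3D12Fence Signal/Wait where one depends on
    // the other's results.
}
```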
BradGrenz - Thursday, February 25, 2016 - link
The whole point of async compute is to take advantage of parallel execution. It doesn't matter what nVidia's drivers tell an application: if it accepts these commands but is forced to reorder them for serial execution because the hardware can do nothing else, then it doesn't really support the technology at all. It'd be like claiming support for texture compression even though your driver has to decompress every texture to an uncompressed format before the GPU can read it. It doesn't matter if the application thinks compressed textures are being used if the hardware actually provides none of the benefits the technology intended (in this case more/larger textures in a given amount of VRAM, and in the case of async compute, more efficient utilization of shader ALUs).
Sajin - Thursday, February 25, 2016 - link
"Update 02/24: NVIDIA sent a note over this afternoon letting us know that asynchornous shading is not enabled in their current drivers, hence the performance we are seeing here. Unfortunately they are not providing an ETA for when this feature will be enabled."Source: http://www.anandtech.com/show/10067/ashes-of-the-s...
dustwalker13 - Thursday, February 25, 2016 - link
Or... not to put too fine a point on it, nvidia's program and strategy of optimizing games for their cards (aka, in some instances, actively sabotaging the competition's performance by using specialized operations that run great on nvidia's hardware but very poorly on others) has led to near-perfect usage of DX11 for them while AMD was struggling along.
On Ashes, where there is no such interference, AMD seems to be able to utilize the strong points of its architecture (it seems to be better suited for DX12) while nvidia has had no chance to "optimize" the competition out of the top spot ... too bad spaceships do not have hair ... ;P
prtskg - Thursday, February 25, 2016 - link
Lol! spaceships don't have hair. I'd have upvoted your comment if there was such an option.
HalloweenJack - Thursday, February 25, 2016 - link
Waiting for Nvidia to `fix` async - just as they promised DX12 drivers for Fermi 4 months ago.....
Harry Lloyd - Thursday, February 25, 2016 - link
Well, AMD has had bad DX11 performance for years; they clearly focused their architecture on Mantle/DX12, because they knew they would be producing GPUs for consoles. That will finally pay off this year.
NVIDIA focused on DX11, having a big advantage for four years, and now they have to catch up, if not with Pascal, then with Volta next year.
doggface - Thursday, February 25, 2016 - link
Personally, as the owner of an nVidia card, I have to say bravo AMD. Those are some impressive gains, and I look forward to the coming DX12 GPU wars, from which we will all benefit.
minijedimaster - Thursday, February 25, 2016 - link
Exactly. Also, as a current Nvidia card owner, I don't feel the need to rush to a Windows 10 upgrade. Seems I have several months or more before I'll be looking into it. In the meantime DX11 will do just fine for me.
mayankleoboy1 - Thursday, February 25, 2016 - link
AMD released 16.2 Crimson Edition drivers with more performance for AotS.Will you be re-benchmarking the game?
Link: http://support.amd.com/en-us/kb-articles/Pages/AMD...
albert89 - Thursday, February 25, 2016 - link
The reason why Nvidia is losing ground to AMD is that their GPUs are predominantly serial (DX11) while AMD, as it is turning out, is parallel (DX12) and has been for a number of years. And not only that, but they are on their 3rd gen of parallel architecture.
watzupken - Thursday, February 25, 2016 - link
Not sure if it's possible to retest this with a Tonga card with 4GB VRAM, i.e. an R9 380X or 380? Just a little curious why it seems to be lagging behind quite a fair bit.
Anyway, it's good to see the investment in DX12 paying off for AMD. At least owners of older AMD cards can get a performance boost when DX12 becomes more popular this year and the next. Not too sure about Nvidia cards, but they seem to be very focused on optimizing for DX11 with their current gen cards, and that certainly seems to be doing the right thing for themselves since they are still doing very well.
silverblue - Friday, February 26, 2016 - link
Tonga has more ACEs than Tahiti, so this could be one of those circumstances, given more memory, of Tonga actually beating out the 7970/280X. However, according to AT's own article on the subject - http://www.anandtech.com/show/9124/amd-dives-deep-... - AMD admits the extra ACEs are likely overkill, though to be fair, I think with DX12 and VR, we're about to find out.knightspawn1138 - Thursday, February 25, 2016 - link
I think that Radeon's advantage in DX12 comes from the fact that most of DX12's new features were similar to features AMD wrote into the Mantle API. They've been designing their recent cards to take advantage of the features they built for Mantle, and now that DX12 includes many of those features, their cards essentially get a head-start in optimization.
If Radeon and NVidia were running a 100-yard dash, it just means that Radeon's starting block is about 5-yards ahead of NVidia's. I (personally) think that NVidia's still the stronger runner, and they easily have the potential to catch up to Radeon's head start if they optimize their drivers some more. And, honestly, a 4fps gap should not be enough of a reason to walk away from whichever brand you already prefer.
I still prefer NVidia due to the lower power consumption, friendlier drivers, 3D glasses, and game streaming they've had for a few years. I used to like ATI cards, but when the Catalyst Control Center started sucking more cycles out of my CPU than the 3D games, I switched to NVidia.
I would also like to have seen the GTX 970 in some of these benchmarks. I understand benchmarking the highest-end cards, but I hope that when the game is out of beta and being used as an official DX12 benchmark, we get some numbers from the more affordable cards.
Shadowmaster625 - Thursday, February 25, 2016 - link
Of course Nvidia's performance doesn't go up under DX12. That is no doubt intentional. Why would they improve their current cards when they can sell all new ones to the same gullible fools that fell for that trick the last time around?
Denithor - Thursday, February 25, 2016 - link
I had exactly the same thought. AMD may have shot themselves in the foot. Everyone using their cards is going to see a 10-20% boost in performance, meaning they may not need an upgrade this cycle.
silverblue - Thursday, February 25, 2016 - link
Perhaps, but DX12's lower overhead would just encourage devs to make even more complex scenes. Result: same performance, better visuals.
All the people jumping on NVIDIA need to be careful, as there are parts of the DX12 spec that they support better than AMD. Give it a year to eighteen months and we'll see how this pans out.
K_Space - Sunday, February 28, 2016 - link
Having read all 13 pages of comments I was surprised no one mentioned this either. I'll certainly be keeping my 295X2 for at least the next 18 months, if not 24 months. With AFR and VR coming up, dual GPUs are the way to go.
Drake O - Thursday, February 25, 2016 - link
I was really hoping to see the benefits of sharing the workload with the iGPU. Not everyone has multiple GPUs (but I do), but most people have a CPU with onboard graphics. If people with graphics cards can finally start using this resource, that would be a very good thing for a tremendous number of users. Please follow this article up as soon as possible with one on this area. Maybe one percent of users have different-brand video cards lying around, maybe five percent have multiple similar GPUs, but almost everyone has a video card and an unused iGPU on their CPU. This is the obvious first direction to take.
rhysiam - Thursday, February 25, 2016 - link
This is (sort of) covered in the article and covered clearly in the comments above. This particular game is only using AFR, and the devs have clearly said (as noted in the article) that you'll never get more than double the performance of the slower graphics solution. Just about any discrete GPU worthy of the name will be more than double the performance of an IGP and therefore is better (for this game) run on its own.
DX12 opens up a raft of possibilities to use any and all available graphics resources, including IGPs, but it leaves the responsibility entirely with the game developers. For this game at this time, the devs aren't looking to make use of onboard graphics unless it's paired with a similarly anaemic GPU.
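A quick back-of-the-envelope illustration of that "never more than double the slower card" point, with made-up numbers:

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    // Hypothetical standalone frame rates for a mismatched AFR pair.
    double fastFps = 90.0;   // discrete GPU alone
    double slowFps = 12.0;   // iGPU alone
    // With AFR the frames alternate between the two GPUs, so each only has to
    // deliver every other frame; the pair is therefore capped at roughly 2x the
    // slower card (ignoring frame pacing and transfer overhead).
    double afrCap = std::min(2.0 * slowFps, 2.0 * fastFps);
    std::printf("AFR pair cap: %.0f fps vs %.0f fps from the fast card alone\n",
                afrCap, fastFps);   // 24 fps vs 90 fps - worse than the dGPU by itself
    return 0;
}
```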
Drake O - Thursday, February 25, 2016 - link
If virtually everyone can boost their framerate by 20% at no cost then it is a big thing. Most people want one processor and one video card. Multiple video cards offer a very real performance boost but there are downsides: more power, more heat and frequent compatibility issues. With DX12 developers can send the post-processing to the iGPU and let the video card handle the rest. Again, a 20% performance boost for free. Only then should you think about the much smaller market that wants to run with multiple video cards.
Kouin325 - Friday, February 26, 2016 - link
yes indeed they will be patching DX12 into the game, AFTER all the PR damage from the low benchmark scores is done. Nvidia waved some cash at the publisher/dev to make it a gameworks title, make it DX11, and to lock AMD out of making a day 1 patch.This was done to keep the general gaming public from learning that the Nvidia performance crown will all but disappear or worse under DX12. So they can keep selling their cards like hotcakes for another month or two.
Also, Xbox hasn't been moved over to DX12 proper YET, but the DX11.x that the Xbox one has always used is by far closer to DX12 than DX11 for the PC. I think we'll know for sure what the game was developed for after the patch comes out. If the game gets a big performance increase after the DX12 patch then it was developed for DX12, and NV possibly had a hand in the DX11 for PC release. If the increase is small then it was developed for DX11,
Reason being that getting the true performance of DX12 takes a major refactor of how assets are handled and pretty major changes to the rendering pipeline. Things that CANNOT be done in a month or two or how long this patch is taking to come out after release.
Saying "we support DirectX12" is fairly ease and only takes changing a few lines of code, but you won't get the performance increases that DX12 can bring.
Kouin325 - Friday, February 26, 2016 - link
Ugh, I think Firefox had a brainfart, sorry for the TRIPLE post... *facepalm*
Gothmoth - Friday, February 26, 2016 - link
It's a crap game anyway, so who cares? Honestly, even if Nvidia were 20% worse I would not buy ATI.
Not because I'm a fanboy, but because I use my GPUs for more than games, and ATI GPUs suck big time when it comes to driver stability in pro applications.
D. Lister - Friday, February 26, 2016 - link
Oxide and their so-called "benchmarks" are a joke. Anyone who takes them seriously is just another unwitting victim of AMD's typical underhanded marketing. https://scalibq.wordpress.com/2015/09/02/directx-1...
"And don’t get me started on Oxide… First they had their Star Swarm benchmark, which was made only to promote Mantle (AMD sponsors them via the Gaming Evolved program). By showing that bad DX11 code is bad. Really, they show DX11 code which runs single-digit framerates on most systems, while not exactly producing world-class graphics. Why isn’t the first response of most people as sane as: “But wait, we’ve seen tons of games doing similar stuff in DX11 or even older APIs, running much faster than this. You must be doing it wrong!”?
But here Oxide is again, in the news… This time they have another ‘benchmark’ (do these guys actually ever make any actual games?), namely “Ashes of the Singularity”.
And, surprise surprise, again it performs like a dog on nVidia hardware. Again, in a way that doesn’t make sense at all… The figures show it is actually *slower* in DX12 than in DX11. But somehow this is spun into a DX12 hardware deficiency on nVidia’s side. Now, if the game can get a certain level of performance in DX11, clearly that is the baseline of performance that you should also get in DX12, because that is simply what the hardware is capable of, using only DX11-level features. Using the newer API, and optionally using new features should only make things faster, never slower. That’s just common sense."
Th-z - Saturday, February 27, 2016 - link
“But wait, we’ve seen tons of games doing similar stuff in DX11 or even older APIs...”
Doing similar stuff in DX11? What stuff, and what games?
"The figures show it is actually *slower* in DX12 than in DX11. But somehow this is spun into a DX12 hardware deficiency on nVidia’s side."
Which figure?
This is AnandTech; you need to be more specific and provide solid evidence to back up your claims if you want to avoid sounding like an astroturfer.
D. Lister - Saturday, February 27, 2016 - link
You see my post? You see that there is this underlined text in blue? Well my friend, it is called a URL, which is an acronym for "Uniform Resource Locator"; long story short, it is this internet thingy that you go clickity-clickity on with your mouse and it opens another page, where you can find the rest of the information.
Don't worry, the process of opening a new webpage by using a URL may APPEAR quite daunting at first, but with very little practice you could be clicking away like a pro. This is, after all, "The AnandTech", and everybody is here to help. Heck, who knows, if there are more like you out there I might even make a video tutorial - "Open new webpages in 3 easy steps", or something.
PS: Another pro tip: there is no such thing as "solid evidence" outside of a court of law. On the internet you have information resources and reference material, and you have to use your own first-hand knowledge, experience, and common sense to tell right from wrong.
Th-z - Sunday, May 29, 2016 - link
Your blabbering is as useful as your link. I have a pro tip for you: you gave yourself away.
EugenM - Tuesday, June 7, 2016 - link
@Th-z Don't feed the troll.
GeneralTom - Saturday, February 27, 2016 - link
I hope Metal will be supported, too.
HollyDOL - Monday, February 29, 2016 - link
Hm, from the screenshots posted I honestly can't see why there would be a need for DX12 to run with such "low performance", even on the most elite cards. While I give these guys credit for having the guts to go and develop in a completely new API, the graphics look more like an early DX9 game.
Just a note: this opinion is based on screenshots, not an actual live render, but from what I see there I'd expect FPS hitting 120+ with DX11...
TheJian - Sunday, March 6, 2016 - link
The highlight here isn't DX12, but rather how badly AMD is doing in DX11, which is what most games will run on for quite some time to come (only ~10% of users run Win10, and much of that base doesn't have a card that can run this game at even 1080p/30fps). A decent-sized group of Win10 users goes back to Win7, too... LOL. I'm more interested in Vulkan now that it's out; I think it will overtake DX12 after a year, as it runs everywhere but consoles, and consoles are a totally different animal anyway.
This just goes to show what happens when you can't afford to support your products, i.e. AMD constantly losing money quarter after quarter while R&D drops too. NV, on the other hand, has the cash to massively improve DX11 (which is 90% of the market, more if you consider that not everyone running Win10 is even a gamer) while also making a DX12 driver. AMD clearly needs to devote more money to their current card users (DX11), but can't afford to; AMD is spending their money on the wrong things time and time again. You can blame consoles for the last five years of losses, as that money should have gone into making ZEN four years ago, much faster DX11 support, mobile chips on rev 5-6 like NV and everyone else, etc. We would not be looking at NV owning 82%+ of the GPU market right now, and Intel would have had a major competition problem for the last 4-5 years instead of basically being able to pour all their resources into mobile while beating AMD to death on CPU.
EugenM - Tuesday, June 7, 2016 - link
I cannot speak to AMD CPUs, but AMD GPUs are doing very well in DX11 games. It's not AMD's DX11 implementation that is at fault for performance issues, but rather Nvidia sabotaging their games time and time again. Every game labeled Nvidia has the potential to sabotage the entire AMD lineup, and AMD can do little to nothing about it. This is not a tin-foil conspiracy theory; it's a fact proven game after game and easily found on Google. Try searching for the Batman games and other Epic Unreal Engine games, or for Crysis 2 and Nvidia GameWorks games, and you'll see what I mean. If you're a small company, you cannot really do much about a big monopoly like Nvidia and Intel; you're one fighting against two. Dirty and even illegal tactics have also been applied by Intel against AMD; that's why Intel was fined a huge amount of money, but in the end the damage to AMD was already done and it was too late for AMD to recover properly. You need to realise not everything is so simple.