9 Comments

  • Mugur - Tuesday, March 29, 2016 - link

    At first, I read "for those lucky basterds" instead of "for those lucky backers"... :-)
  • xunknownx - Tuesday, March 29, 2016 - link

    i read the same thing. lol
  • B3an - Tuesday, March 29, 2016 - link

    Still no asynchronous compute on NV hardware? Can NV GPUs even support the new asynchronous timewarp? NV say that asynchronous compute isn't enabled in their drivers yet, but it seems their hardware, including the 900 series, isn't fully capable of it and cannot run graphics and compute workloads concurrently.

    AMD have a massive advantage here if all GCN GPUs support this feature.

    Would be good if you did an in-depth article on this. VR stuff is coming out, plus Ashes of the Singularity is out within days, and still no sign of async compute from NV...
  • bug77 - Tuesday, March 29, 2016 - link

    Why, were you expecting the Oculus VR launch to add the missing hardware to Nvidia's cards?
    Also, async compute is not a graphics feature. It's a more efficient way of feeding the GPU (see the D3D12 sketch at the end of the thread). It will not create extra FPS unless the GPU is underused to begin with.
  • B3an - Tuesday, March 29, 2016 - link

    Async compute can increase FPS for basically any game engine, because every GPU that supports async compute has hardware that is not being fully used, and it literally cannot be fully used without async compute.
  • CiccioB - Wednesday, March 30, 2016 - link

    No, it doesn't work that way. It all depends on whether the engine is able to use all of the GPU's resources or not.
    Async compute can only increase how efficiently the shaders are used; the rest of the resources don't take any advantage of it. If, for example, you are TMU or ROP limited, async compute is not going to help you at all.

    Moreover, this overhyped feature, which AMD HW seems to support well, shows gains of only about 10% even in a game that uses it heavily (comparing async on and off on the same HW). Nothing really groundbreaking (see the rough numbers at the end of the thread).

    There are many other things that can be saturated before the shaders, depending on the engine. Clearly, if you create a shader-intensive game that doesn't stress the other resources (TMUs, ROPs, memory bandwidth, tessellation, but also the new HW features like transparency and global illumination), you'll see some improvement from async compute.
    But that would be no different from a poor engine that pushes tessellation to its (useless) maximum to compensate for other deficiencies.

    We saw the same thing when DX11 launched with tessellation. All the new game engines over-used tessellation as the innovative feature, abusing it most of the time.
    In the end, engines came to use it much less, improving image quality with other techniques that drew on other resources.

    The same will happen with DX12. Once this async compute wave ends, we'll get much more balanced engines where its impact will be lower and other resources will contribute to the final image quality.
  • Trefugl - Tuesday, March 29, 2016 - link

    Will I be able to take advantage of this with my DK2 (both the AMD driver changes and SDK 1.3)? I haven't used it much since it still seems to have a lot of teething issues (and I haven't had a ton of free time to sort them out). Also, any news on when Crossfire will render per-eye for VR? I feel like my 2x 290s are not being put to use...
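
To make bug77's point about "feeding the GPU" concrete, here is a minimal, hypothetical D3D12 sketch (names like SubmitAsyncCompute, gfxList and computeList are illustrative; device setup, command-list recording and error handling are assumed to happen elsewhere). At the API level, async compute just means compute work is submitted on its own queue instead of being serialized behind graphics on the direct queue; whether the two actually overlap on the hardware is up to the GPU and driver, which is exactly the NV-vs-GCN question raised above.

    // Minimal sketch, not a complete renderer. In a real engine the queues
    // would be created once at startup, not per frame.
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    void SubmitAsyncCompute(ID3D12Device* device,
                            ID3D12GraphicsCommandList* gfxList,
                            ID3D12GraphicsCommandList* computeList)
    {
        ComPtr<ID3D12CommandQueue> directQueue, computeQueue;

        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&directQueue));

        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

        // Submitting to two queues allows the driver/GPU to overlap the work;
        // nothing here forces concurrent execution, it only permits it.
        ID3D12CommandList* gfx[] = { gfxList };
        ID3D12CommandList* cmp[] = { computeList };
        directQueue->ExecuteCommandLists(1, gfx);
        computeQueue->ExecuteCommandLists(1, cmp);

        // A fence is still needed before the graphics side consumes whatever
        // the compute work produced.
        ComPtr<ID3D12Fence> fence;
        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
        computeQueue->Signal(fence.Get(), 1);
        directQueue->Wait(fence.Get(), 1);
    }

If the shader array is already saturated by the graphics work, the second queue has nothing idle to fill, which is where the disagreement above about "free" FPS comes from.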
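
And to put a rough, illustrative number on CiccioB's point (made-up figures, not taken from any benchmark): async compute can only win back the time the shader array would otherwise sit idle.

    16.7 ms frame with shaders idle for ~1.5 ms of it:
        best case with perfect overlap = 16.7 - 1.5 = 15.2 ms  ->  16.7 / 15.2 ≈ 1.10, i.e. roughly +10% FPS
    16.7 ms frame that is ROP- or bandwidth-bound the whole time:
        no idle shader time to fill  ->  ~0% gain from async compute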
