
  • The_Assimilator - Monday, March 24, 2014 - link

    The reason for CPUs bottlenecking GPUs is simple: single- vs multi-threaded. Writing a multi-threaded game engine that works properly is extremely difficult. It's even more difficult if you're licensing a graphics (and/or audio, etc.) engine that you need to integrate with your game's core, because you then become dependent on that engine and how it works, whether it's threaded, etc.

    Unfortunately, off-the-shelf game engines - particularly graphics engines - have remained steadfastly single-threaded, and that's not something DirectX or Mantle will be able to change.

    "To use consoles as an example once again, this is why they are capable of so much with such a (relatively) weak CPU, as they’re better able to utilize their multiple CPU cores than a high level programmed PC can."

    Nonsense. The current crop of consoles use x86-64 and DirectX 11-class hardware, programming games for them is virtually identical to programming games for (a slow) PC.

    "Meanwhile, though it’s a bit cynical, there’s a very real threat posed by the latest crop of consoles, putting PC gaming in a tight spot where it needs to adapt to keep pace with the consoles."

    Consoles with AMD CPUs with their abysmal single-threaded performance? Alrighty then.

    "PCs still hold a massive lead in single-threaded CPU performance, but given the limits we’ve discussed earlier, too much bottlenecking can lead to the PC being the slower platform despite the significant hardware advantage."

    Perhaps you could point to a game that is faster on consoles than PC. What's that, you can't, because such a game doesn't and never will exist? Alrighty then.
  • MrSpadge - Monday, March 24, 2014 - link

    > Unfortunately, off-the-shelf game engines - particularly graphics - have remained steadfastly single-threaded, and that's not something DirectX or Mantle will be able to change.

    What's a game engine using in the end if not DirectX? Of course there's engine work prior to the DX draw calls... but still, they're there and currently have no alternative. And they are becoming a bottleneck. Rest assured that developers have profiled their engines to see which calls cost the most performance.
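
    In concrete terms, a typical D3D11 render loop looks roughly like the sketch below (the Object struct and its fields are made up for illustration). Every visible object costs a handful of API calls, and each call pays CPU-side runtime/driver overhead before the GPU sees any work:

        #include <d3d11.h>
        #include <vector>

        // Hypothetical per-object data, for illustration only.
        struct Object {
            ID3D11Buffer* vertexBuffer;
            ID3D11Buffer* indexBuffer;
            ID3D11Buffer* perObjectConstants;
            ID3D11ShaderResourceView* diffuseTexture;
            UINT stride, offset, indexCount;
        };

        void RenderScene(ID3D11DeviceContext* ctx, const std::vector<Object>& objects)
        {
            for (const Object& obj : objects)
            {
                // Every one of these calls crosses into the D3D runtime/driver.
                ctx->IASetVertexBuffers(0, 1, &obj.vertexBuffer, &obj.stride, &obj.offset);
                ctx->IASetIndexBuffer(obj.indexBuffer, DXGI_FORMAT_R32_UINT, 0);
                ctx->VSSetConstantBuffers(0, 1, &obj.perObjectConstants);
                ctx->PSSetShaderResources(0, 1, &obj.diffuseTexture);
                ctx->DrawIndexed(obj.indexCount, 0, 0);
            }
            // A few thousand iterations of this can saturate one CPU core long
            // before the GPU runs out of shading power - which is exactly what
            // those profiles point at.
        }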

    > Nonsense. The current crop of consoles use x86-64 and DirectX 11-class hardware, programming games for them is virtually identical to programming games for (a slow) PC.

    Except that on the consoles you have access to those low-level APIs and are free to use them whenever the benefit justifies the extra development work.

    > Perhaps you could point to a game that is faster on consoles than PC. What's that, you can't, because such a game doesn't and never will exist?

    Of course you'd have to compare against approximately similar hardware. Which is difficult, but doable for e.g. the older Xbox. What did it have, a Coppermine Celeron 800 MHz and approximately a GeForce 7800? It's going to be challenging to find PC configurations of this class performing as well as late games for that platform.
  • Gigaplex - Monday, March 24, 2014 - link

    "What's a game engine using in the end if not DirectX?"
    A lot of them support OpenGL and other platform specific APIs.
  • Lerianis - Monday, March 31, 2014 - link

    Many graphics cards still support OpenGL because it was a popular API. Personally, I agree with the first poster: the problem is that multi-threaded games are almost unheard of today. Sure, games do a TOKEN amount of multi-threading (e.g. putting graphics on one thread and sound on another), but they don't really go beyond that.

    So, in most games, having 4 cores is worth little more than having 2.
  • Alexvrb - Tuesday, March 25, 2014 - link

    The ORIGINAL Xbox? It had a 733 MHz Coppermine PIII with half the L2 cache (but still 8-way, so NOT a Celeron). So you were close there. But you're waaaay off on the GPU. It had a GeForce 3.5 (NV2A)! A 7800 would be closer to what ended up in the next-generation PS3.
  • mars777 - Monday, March 24, 2014 - link

    The latest Fifa :)
  • munim - Monday, March 24, 2014 - link

    I thought all games were faster on console than on comparably specced computers. Is that not the case? Do you have any performance comparison tests off the top of your head?
  • B3an - Monday, March 24, 2014 - link

    It is the case and has always been the case. He's just an idiot. PC will always remain faster, but a slower PC with similar hardware to a console will run a game slower than the console version. This is a well-known fact, but that console advantage is something that may disappear with DX12 (or at least be reduced). This will give PC an even further lead in performance.
  • ninjaquick - Monday, March 24, 2014 - link

    Actually, that isn't a fact at all. Only very low-level, hyper optimized software will run better on consoles, and that is only because the software will generally fail to run at all on PC as the hardware it is made for is not present on the PC.
  • B3an - Monday, March 24, 2014 - link

    It is fact. Ryan knows this, devs know this, everyone with any understanding at all knows this. Developers have been asking for lower level access on PC forever because of this.

    Even ignoring the lower level API console advantage, games on console already benefit from fixed hardware; the dev will always target that and get more performance out of it. Look at all the crappy PC ports that make poor use of the hardware. PC always runs games better anyway because of vastly superior hardware, but that's not the case on slower PC hardware that's similar in spec to a console - in that case the console always has the performance advantage (and remember, the latest consoles use very PC-like hardware now, so it's a lot easier to compare).

    As a PC gamer, DX12 is great news: it will make better use of all my cores, and will give the platform even more of a performance advantage over consoles. It may even help lower the cost of entry-level gaming PCs.
  • B3an - Monday, March 24, 2014 - link

    For anyone interested.. new DX12 graphics features are coming and yet to be announced:

    http://techreport.com/news/26210/directx-12-will-a...

    As I suspected, current hardware will benefit from the DX12 performance gains, but you will need new hardware for the new graphics features. Same as previous DX releases.
  • Tristor - Monday, March 24, 2014 - link

    I should also point out that OpenGL is already more performant in draw calls than Mantle, and is fully multi-threaded. It has been for a LONG time. More importantly, it's cross-platform. If more game devs start using OpenGL+SDL we will end up with more games available on other platforms (Mac/Linux). I'm hoping the huge push Valve is doing with SteamOS will encourage this and we'll see the benefits across the board in PC gaming.

    Here's an nVidia dev talking at Steam Dev Days about the performance increases gained from simply porting to OpenGL over DirectX (these surpass the gains from moving to Mantle):

    http://linustechtips.com/main/topic/117504-modern-...

    http://www.neogaf.com/forum/showthread.php?t=69345... - AMD already has promised (and started releasing) OpenGL extensions that fully expose the capabilities of GCN hardware which has made OpenGL more performant on AMD than Mantle.

    http://www.dsogaming.com/news/john-carmack-nvidias... - Here's John Carmack himself (one of the few devs who actually embraces OpenGL fully) talking about how nVidia's OpenGL extensions make Mantle irrelevant because OpenGL is faster.

    So basically, OpenGL > Mantle > DirectX and OpenGL is also cross-platform and has been around for ages. But, nobody wants to use it because it's "too hard" or "takes too long". Companies like EA, Activision, and Ubisoft don't want to make well-optimized games, they want to make games quickly while milking profitable franchises so they can take their billions to the bank. For the rest of the games out there that are trying to do something cool (Project CARS, Star Swarm, et al) they should really start investing heavily in OpenGL. End of story.
  • inighthawki - Monday, March 24, 2014 - link

    "I should also point out that OpenGL is already more performant in draw calls than Mantle, and is fully multi-threaded. It has been for a LONG time"
    Being multithreaded doesn't mean anything. In OpenGL's case it just means it's thread safe, so you can queue up some render commands on thread 2 while you do app work on thread 1. No different from what DX has had for even longer than OpenGL. DX12 is introducing a better multithreading model, which OpenGL doesn't have - one where you can actually split rendering work equally across cores.
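
    In code terms, that model looks something like the sketch below. (D3D12 isn't public yet, so this is a guess at the shape based on what Microsoft showed; BuildChunk is a made-up helper that records one thread's slice of the frame, and command allocator/fence management is omitted for brevity.)

        #include <d3d12.h>
        #include <cstddef>
        #include <thread>
        #include <vector>

        // Made-up helper: records the draws for one slice of the scene.
        void BuildChunk(ID3D12GraphicsCommandList* list, std::size_t slice);

        void RecordAndSubmit(ID3D12CommandQueue* queue,
                             std::vector<ID3D12GraphicsCommandList*>& lists)
        {
            std::vector<std::thread> workers;
            for (std::size_t i = 0; i < lists.size(); ++i)
                workers.emplace_back([&lists, i] {
                    BuildChunk(lists[i], i); // recorded in parallel, no shared context
                    lists[i]->Close();       // finished on the worker thread
                });
            for (auto& w : workers) w.join();

            // One submission of N independently recorded lists - nothing above
            // serialized on a single immediate context.
            std::vector<ID3D12CommandList*> raw(lists.begin(), lists.end());
            queue->ExecuteCommandLists((UINT)raw.size(), raw.data());
        }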

    "More importantly, it's cross-platforms. If more game devs start using OpenGL+OpenSDL we will end up with more games available on other platforms (Mac/Linux)"
    I'm sure a lot of people will hate me for this, but who cares? OSX+Linux marketshare combined is a tiny fraction of what Windows is, and there's not a lot of money in it.

    OpenGL is a horrible API: a horrible interface trying to retain backwards compatibility with older versions that should never even be used, coupled with a bunch of vendor specific extensions that Khronos decided to make official because someone did the work of figuring it out for them.

    "AMD already has promised (and started releasing) OpenGL extensions that fully expose the capabilities of GCN hardware which has made OpenGL more performant on AMD than Mantle."
    Why would you ever want extensions that are vendor specific? Half the point of your post is "don't use Mantle because it's vendor specific", and then you praise OpenGL for gaining the ability to have vendor specific enhancements?

    "Here's John Carmack himself (one of the few devs who actually embraces OpenGL fully) talking about how nVidia's OpenGL extensions make Mantle irrelevant because OpenGL is faster."
    That's not what he said at all, I suggest you re-read it.
  • Lerianis - Monday, March 31, 2014 - link

    Well, that is kinda becoming the new 'normal'... in order to use the new graphics stuff, you need a new graphics card.
    I'm getting kinda angry that newer graphics cards sold on Newegg all need a minimum of a 400 watt power supply. My nearly brand new computer has? 300 watts. So either these companies need to start putting at least a 400 watt power supply in these machines, or the graphics card companies need to make a selection of cards that regular home users can plug into their old computers that only have 300 watts.
  • Tristor - Monday, March 24, 2014 - link

    If devs have been asking for lower level access on the PC "forever", why does nobody use OpenGL? OpenGL provides unimpeded lower level access that DirectX does not, it's cross-platform, it's well optimized for in both hardware and drivers on AMD and nVidia, and with the numerous OpenGL extensions available there are no features in DirectX that are missing from OpenGL.

    I'll tell you why: because OpenGL is harder to write for. "Low-level access" implies low-level programming, which is more difficult to do, especially to do properly, and it requires higher quality programmers to pull it off. The reason DirectX is popular is that it's ridiculously easy to write for, not because it's well optimized or good. This has always been the case. If game developers were truly interested in low-level access and optimizations they would have been using OpenGL this entire time.

    QED
  • Mr Perfect - Monday, March 24, 2014 - link

    OpenGL is probably also a victim of DirectX's popularity, i.e. no one uses OpenGL because no one uses OpenGL. Middleware engines all seem to focus on DirectX, hardware vendors only advertise support for new DirectX features, competing games all use DirectX... It reminds me of Windows Phone and how it doesn't receive support because everyone's busy doing Android and iOS work.
  • DarkXale - Tuesday, March 25, 2014 - link

    It generally doesn't help that (PC) titles haven't really made a good impression.

    RAGE is not exactly a good demonstrator of graphical stability and reliability, and it was supposed to be a showcase for the id Tech engine and OGL. Needless to say, it failed at that.
    Another game, Brink, was filled with graphical problems; the game was unplayable on AMD systems.

    OGL2 software tends to be more reliable - but the problem you have then is that you're using OGL2. Which isn't exactly modern, and was written in the more... problematic... days of OGL.
  • lmcd - Monday, March 24, 2014 - link

    Tristor, you're off-base. OpenGL isn't lower-level than Mantle and that has been long-acknowledged. OpenGL isn't even significantly lower-level than DirectX. DirectX is popular because it's standardized, not because it's easy to write for, and strong standards are better than weak standards. OpenGL regularly plays catch-up with DirectX feature sets and relies extensively on extensions which are not standardized.

    Different failings.
  • bluevaping - Friday, April 4, 2014 - link

    Big gaming houses use plenty of different APIs, not just DirectX or OpenGL. Microsoft likes to keep things where they are, until the floor drops out. Oh snap, gaming PC sales are declining. It's sad that AMD is trying to get more performance from hardware and wrote their own API. Gamers with different priorities are moving away from PCs to consoles (simple and less expensive hardware) or mobile tablets (great screens with cheaper games). There are OpenGL games; some are ports and some are released on multiple platforms at the same time. I have played dozens and dozens of them. We won't see record levels of PC gamers like in the past, but developers will try to reach out to other platforms to make up for it. Microsoft is a little too late. They should be making better tools for OpenGL. Just embrace it, Microsoft.
  • ericore - Monday, March 24, 2014 - link

    There are no console advantages.
    I think I abandoned consoles 2 months after purchase, never looked back.

    The games are pricier.
    The load times are significantly greater.
    No modding ability.
    Same price of a PC which can do so much more.
    The AI on consoles is dumbed down.

    Consoles are for kids.
    PCs are for adults.
  • Lerianis - Monday, March 31, 2014 - link

    Ah, but the PC with similar hardware will usually run it in a higher resolution and with more graphical candy turned on than on a console. That is the thing that a lot of people forget to mention, that the settings for the game in question are not always the same.

    I.e. the PC has AA and other settings turned on by default that make the game run 'slower'. Once you work out which settings aren't used on the console and turn those off? The numbers get a hell of a lot closer together.
  • inighthawki - Monday, March 24, 2014 - link

    "Unfortunately, off-the-shelf game engines - particularly graphics - have remained steadfastly single-threaded, and that's not something DirectX or Mantle will be able to change."

    I guess you missed the giant part of the DX12 talk where they focused heavily on ease and performance of multithreading for graphics and actually came up with a nearly linearly scaling solution.
  • tipoo - Monday, March 24, 2014 - link

    Really wondering about Mantle's fate after this hits. They have a time advantage, but DirectX/Direct3D, being the Windows standard that it is, will be hard to compete with, particularly if the performance improvements are similar (let alone if DX12 is better).

    Perhaps AMD should consider bringing Mantle to Linux.

    I wonder if the consoles being AMD-based will be an advantage to them too, though Microsoft has the XBO also using DX12... PS4 porting may be easier to Mantle, while XBO porting is easier to DX12, perhaps?
  • ninjaquick - Monday, March 24, 2014 - link

    Mantle is a much broader low-level access API. D3D12 is limited by the scope of support, which is basically all driver level. They are putting in as much as they can while having Intel/AMD/Nvidia/Imagination/TI/ARM/Samsung/etc. all onboard for rapid implementation. Building new drivers for this based on the MS spec means they are less flexible than AMD is with its own spec built around its own hardware and driver platform.

    D3D12 will possibly be widely implemented, but that won't stop excitable rendering engineers and architects from trying out mantle anyways. I would be very surprised if any of the 'big rendering' players decide to completely forego D3D12 or Mantle. The chance to really push the boundaries as far as possible is far too tempting. Heck, low level programmability might even result in hybridization where possible, between d3d12 and mantle.
  • Dribble - Tuesday, March 25, 2014 - link

    The time advantage isn't that great - Mantle is released beta software, and by the time they actually finish it DX12 will be just around the corner, fully productized and ready to go. No dev is going to bother investing much time in Mantle when only a few percent of users can use it, unlike DX, which everyone will eventually be able to use. They also have no guarantees AMD will ever finish it off properly or continue to support it for future generations of cards - AMD has never been great at software, unlike MS.
  • jabber - Monday, March 24, 2014 - link

    I wonder also if a lot of this is coming out of all the furious work that I bet is going on to try to bridge the performance gap between the Xbox One and the PS4.
  • ninjaquick - Monday, March 24, 2014 - link

    Actually, it is because developers have been complaining about the massive step backwards Xbone programmability took in comparison to the X360.

    Like, not only is there a raw performance gap between the PS4 and X1, but there is a software API performance black hole on the X1 called the WindowsRT/x64 hybrid, with straight PC DX11.1.
  • ninjaquick - Monday, March 24, 2014 - link

    D3D11.1**
  • Scali - Tuesday, March 25, 2014 - link

    The XBox One has its own D3D11.x API, which is D3D11-ish with special low-level features.
    The games I've seen on XBox One and PS4 so far, only seem to suffer from fillrate problems on the XBox, which means slightly lower resolutions and upscaling. In terms of detail, AI, physics and other CPU-related things, games appear to be identical on both platforms.
  • et20 - Monday, March 24, 2014 - link

    Don't forget Mantle is also cross-platform, it just crosses different platforms compared to Direct3D.
  • ddriver - Monday, March 24, 2014 - link

    Is it? That's not what I heard.
  • WithoutWeakness - Monday, March 24, 2014 - link

    Mantle is designed to be cross-platform but is currently only available on Windows PCs. Linux and Mac (and AMD's consoles) do not currently support Mantle.
  • tuxfool - Monday, March 24, 2014 - link

    I don't think that the GPU in the XB1 is GCN 1.1. The PS4, however, is somewhat GCN 1.1; I read somewhere that AMD borrowed some features requested/developed by Sony for the PS4, an example being the larger number of ACEs in GCN 1.1.
  • Flunk - Monday, March 24, 2014 - link

    Both GPUs are the same design; the Xbox One just has fewer ROPs and shaders. They're both GCN 1.1.
  • nathanddrews - Monday, March 24, 2014 - link

    And the PS4 uses GDDR5 instead of DDR3 on Xbone.
  • inighthawki - Monday, March 24, 2014 - link

    Not relevant to the GPU architecture.
  • anandreader106 - Monday, March 24, 2014 - link

    So the information highway that feeds the GPU is not relevant? Wrong.
  • lmcd - Monday, March 24, 2014 - link

    It isn't relevant. Bandwidth is semi-independent of architecture, though the bus width (in bit size) is relevant and dependent.
  • nathanddrews - Monday, March 24, 2014 - link

    Sure looks relevant to me:

    ht4u.net/reviews/2012/msi_radeon_hd_7750_2gb_ddr3_test/index39.php
  • inighthawki - Monday, March 24, 2014 - link

    Not really. There may be changes to the memory controller, but the core GPU architecture stays the same. It's like comparing the same highway with two different speed limits.
  • nathanddrews - Tuesday, March 25, 2014 - link

    ...and in traffic modeling the faster roadway has more vehicles pass through it per hour: more bandwidth. GDDR5 over DDR3 is a significant advantage.
  • inighthawki - Tuesday, March 25, 2014 - link

    Yes exactly, and the speed (performance) of the highway (GPU) is not related to the architecture (feature set) which was being discussed. DDR3 vs GDDR5 is completely irrelevant to GCN 1.0 vs GCN 1.1.
  • ninjaquick - Monday, March 24, 2014 - link

    Actually, the GPU in the X1 is not GCN 1.1. The only benefits of 1.1 are the TrueAudio path (and XDMA, which the X1 would not use anyway), and neither is present in the X1.

    Microsoft would have wanted to beat Sony out with development hardware, which means waiting for emulated GCN 1.1 or actual GCN 1.1 hardware would have been too long a wait. I would guess one of the reasons developers are only given full access to 5 of the 8 cores on the X1 is that at least one of the reserved cores is dedicated to the audio driver.
  • tuxfool - Tuesday, March 25, 2014 - link

    Isn't the ratio of ACE queues to CUs part of the difference between GCN1.1 and 1.0?
  • ninjaquick - Tuesday, March 25, 2014 - link

    I don't think so. There are differences in both generations, from top to bottom, in such regards. A product of scaling GCN up and down I presume.
  • evolucion8 - Tuesday, March 25, 2014 - link

    AFAIK, Xbox One is based on GCN1.1 and PS4 is based on the GCN 1.0
  • Anders CT - Monday, March 24, 2014 - link

    Will DirectX 12 be available on Windows 7?

    If not, we are talking about an API that will only be available on the Xbox One, Surface 3 and 10% of desktop PCs, while OpenGL will be available on PS4, Mac, Android, iOS, Steam, and Windows 7+8.

    As for mobile, I just don't see how they can make a cross-platform command-list interface that covers both immediate-mode and tiled renderers.
  • Flunk - Monday, March 24, 2014 - link

    I imagine it will be available on Windows 9 and possibly Windows 8. By the time this comes out Windows 7 will be 2 versions out of date so I don't expect we'll see DirectX 12 support.
  • jabber - Monday, March 24, 2014 - link

    Plus this is another 18 months down the line. So Windows 7 will be nearly 6 years old by then.
  • anandreader106 - Monday, March 24, 2014 - link

    The vast majority of PC gamers use Windows 7. Unless Windows 9 fixes Metro (seriously, it hasn't been fixed yet?) and there's a huge migration to it, Direct3D 12 adoption might be slower than for previous versions.
  • jabber - Tuesday, March 25, 2014 - link

    You don't see Metro/Modern when you are playing games, so what's the issue? Come on folks, you've had nearly two years to work out how to press a key to switch between the two. Or are some of you not as good at tech as you think you are?
  • mikato - Tuesday, March 25, 2014 - link

    Uh, what if playing games isn't the only thing you do on the computer, and you don't much like having separate computers for games and for everything else? Nice troll. Here's one - what key do I press to prevent Metro from being used at all, since it reduces productivity in pretty much all of my usage scenarios?*
    (*The usage scenarios where it doesn't reduce my productivity are basically the same ones I use my mobile phone for, except when it is advantageous to be mobile.)
  • Death666Angel - Tuesday, March 25, 2014 - link

    I boot my Win8 PC straight to the desktop, easily done. I only see the Modern UI when I search for a program, and it has the same impact as the Win7 search: I press the Windows key, type what I'm looking for, hit enter and the desktop program starts, not at all different from my Win7 usage. And the few features Win8 does bring are much appreciated (better task manager, better file-copy dialogue, etc.).
  • ninjaquick - Monday, March 24, 2014 - link

    I would guess that Microsoft will limit the release to Windows 8. They desperately need to increase Windows 8 sales, and there are a few core libs that are not present in Windows 7 but are present on all Win8 (WP8/WinRT/WinX1) distributions.
  • A5 - Monday, March 24, 2014 - link

    Like others said, if MS sticks to their timeline Windows 9 will be out by then.

    Also, Win7 leaves "mainstream support" in January 2015: http://windows.microsoft.com/en-us/windows/lifecyc... So I doubt they'll get a DX update after that point.
  • A5 - Monday, March 24, 2014 - link

    Accidentally added a period to the link. Here's a working one: http://windows.microsoft.com/en-us/windows/lifecyc...
  • Ryan Smith - Monday, March 24, 2014 - link

    I didn't have a chance to throw this into the article, but the "mainstream support" angle is a very important one. Microsoft hasn't officially said yes or no at this point, but if they hold to their own lifecycle policy, then Windows 7 will no longer be receiving feature updates as of next year.
  • jabber - Tuesday, March 25, 2014 - link

    Mainstream support is just the phone-in support. Updates etc. will carry on till 2020.
  • anandreader106 - Monday, March 24, 2014 - link

    If Microsoft limits the release to Win8/Win9, what motivation do developers have to require Direct3D 12 if the majority of gamers are using Windows 7?

    And if developers don't jump on board in order to continue serving the masses, then what's the benefit to all this?

    None. Adoption was so bad for 11.1 and 11.2 that Nvidia didn't even support them with their newest architecture!
  • DarkXale - Tuesday, March 25, 2014 - link

    Require? None. But there are plenty of reasons to have it optional.

    It's hardly uncommon for games to ship with support for multiple rendering pipelines. The DX9 and DX10/11 split is the most common separation. Having OGL as a third is also not uncommon.

    DX12 could end up as an add-on to the DX10/11 pipeline, just as DX11 is an add-on to the DX10 pipeline.
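
    In engine terms that usually amounts to something like the hypothetical sketch below: one renderer interface, one implementation per pipeline, chosen at startup based on what the OS and hardware support. All the names here are made up for illustration.

        // Hypothetical multi-backend arrangement.
        struct IRenderer {
            virtual ~IRenderer() = default;
            virtual void DrawFrame() = 0;
        };

        struct D3D11Renderer : IRenderer { void DrawFrame() override { /* DX10/11 path */ } };
        struct D3D12Renderer : IRenderer { void DrawFrame() override { /* DX12 path */ } };

        IRenderer* CreateRenderer(bool d3d12Supported)
        {
            // Fall back to the older pipeline where DX12 isn't available.
            if (d3d12Supported)
                return new D3D12Renderer;
            return new D3D11Renderer;
        }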
  • ninjaquick - Tuesday, March 25, 2014 - link

    I don't think you realize how much engineers like to tinker. They will use D3D12 and Mantle if they get the time, and will get them to the public whenever possible, so long as development is more fun than frustrating.
  • Scali - Tuesday, March 25, 2014 - link

    Well, aside from the 'tinkering' part, it's pretty much a given that D3D12 will become the standard at some point. Why not build that D3D12 support now, rather than wait until D3D12 is as popular as D3D11? It will give you a competitive advantage.
    Same goes for Mantle, to a lesser extent. Having support is an advantage, but it is harder to justify the extra work because it is AMD-only, and DX12 will probably make Mantle obsolete within 1.5 years.
  • Anders CT - Monday, March 24, 2014 - link

    @ninjaquick

    Well, then that is a problem. Why would a developer optimize a game using an API that excludes 80-90% of the install base?
  • jabber - Tuesday, March 25, 2014 - link

    I think I remember reading the same comments when DX10 was announced.

    I don't think anyone died as a result. The world carried on.
  • ninjaquick - Tuesday, March 25, 2014 - link

    Because developers, contrary to popular belief, are organizations with real people working in them. Many engineers love what they do and welcome new challenges, and can easily pitch working on something new and exciting so long as they can properly lay out the risks and benefits.

    Why would anyone support stereoscopic 3D, even though the vast majority of the population doesn't use it? Because they can.
  • Homeles - Monday, March 24, 2014 - link

    Microsoft's previous API-locking was a software limitation (DX10 requires WDDM 1.0), rather than an arbitrary work of greed, so it mostly depends on whether or not Windows 7's WDDM 1.1 can support it. Of course, there's the support issue that A5 has highlighted as well.
  • lefty2 - Monday, March 24, 2014 - link

    Actually, OpenGL has had extensions that bring the benefits of DirectX 12 for about a year now. That was covered in one of the GDC tech talks. The only catch is that these extensions are so far only fully implemented on Nvidia hardware... but still quite interesting.
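
    For the curious, part of what those talks demonstrated is batching: put the per-draw parameters in a GPU buffer and submit thousands of draws with one call, instead of one call per object. A minimal sketch, assuming GL 4.3+ and a loader like GLEW already initialized:

        #include <GL/glew.h>

        // Per-draw parameters, laid out as the GL spec defines
        // DrawElementsIndirectCommand.
        struct DrawElementsIndirectCommand {
            GLuint count;
            GLuint instanceCount;
            GLuint firstIndex;
            GLint  baseVertex;
            GLuint baseInstance;
        };

        void SubmitBatch(GLuint indirectBuffer, GLsizei drawCount)
        {
            glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
            // One call replaces drawCount separate glDrawElements* calls, so
            // the per-call driver overhead is paid once instead of N times.
            glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                        nullptr,   // commands start at offset 0
                                        drawCount,
                                        0);        // commands tightly packed
        }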
  • steven75 - Monday, March 24, 2014 - link

    I would like to know the answer to this as well. OpenGL sure seems more important these days.
  • boe - Monday, March 24, 2014 - link

    How many FPS was this getting on DX11? 60fps on DX12 doesn't tell us much; for all I know it was getting about 80fps before.

    Developer Turn 10 had ported the game from Direct3D 11.X to Direct3D 12, allowing the game to easily be run on a PC. Powered by a GeForce GTX Titan Black, Microsoft tells us the demo is capable of sustaining 60fps.
  • DesktopMan - Monday, March 24, 2014 - link

    Yeah, knowing that the game runs at 60 when capped at 60 is not informative at all. It should run at 120fps+ on a Titan, compared to the Xbox One GPU.
  • nathanddrews - Monday, March 24, 2014 - link

    Forza 5 runs 60fps at 1080p on Xbone. I think the point of the Titan Black demonstration was that with only "four months of man-hours of work" they were able to flawlessly port it not only to DX12, but also to PC. It showcases the ease of porting using DX12 and the compatibility of DX12 with Kepler. Given that the TItan Black is 3-4x faster than the GPU in the Xbone, it stands to reason that taking more time with a port or developing side-by-side would yield a much better experience on the PC side.

    I'm sure that somewhere there's a Xfanboy claiming that the Xbone is as powerful as a Titan Black.
  • ninjaquick - Monday, March 24, 2014 - link

    Not just that, but to non-AMD hardware, which means not only does it port over "easily", it works on hardware from all vendors.
  • krumme - Monday, March 24, 2014 - link

    Damn nice article.

    How can Fermi be compatible when it doesn't support bindless textures?
  • SydneyBlue120d - Monday, March 24, 2014 - link

    It seems even funnier that Nvidia Maxwell doesn't fully support DirectX 11.1, yet it's apparently DirectX 12 compliant :)
  • inighthawki - Monday, March 24, 2014 - link

    Don't confuse the software with the feature set. Maxwell works with DX11.1, it's just not 100% compliant with all the features exposed by 11.1. DX12 may also expose hardware features that are incompatible with Maxwell, but it will still run at a lower "11.0" feature level.
  • YazX_ - Monday, March 24, 2014 - link

    I believe this will only benefit the low-end CPU user base, and specifically all of AMD's $hitty CPUs. On high-end CPUs there is no bottleneck, so the gain will be very minimal.
  • kyuu - Monday, March 24, 2014 - link

    There is only no bottleneck with high-end CPUs because game developers design the game within the limitations of the CPU, which, as stated in the article, have not kept pace in terms of performance growth with GPUs. A big limitation that developers lament is in the number of draw calls.

    So while you're correct that current games will not see much benefit when run on higher-end CPUs, future games will be able to do more, and therefore games will improve for everyone on all hardware. Also, consider that a high-end CPU becomes mid-range and then low-end over time - these DX12 (and Mantle) improvements mean it becomes less necessary to upgrade your CPU, which saves money that can be put into another part of your system (say, the GPU?).
  • Homeles - Monday, March 24, 2014 - link

    "i believe this will only benefit the low end CPU users base, and specifically all AMD $hitty CPUs. on high end CPUs, there is no bottleneck so the gain will be very minimal. "

    In other words, most computers.
  • ninjaquick - Monday, March 24, 2014 - link

    D3D12 is not a response to Mantle, as you would assume; rather, it is a response to substantial developer feedback/pushback against the massive decrease in low-level access and programmability of the X1 compared to the X360. Microsoft has a unified platform vision that they stubbornly stick to, so the D3D12 development advances made for the X1 are directly portable from the WindowsX1 (RT/8-x64 hybrid) to WindowsRT/WP8/Win8.

    Mantle is a far broader implementation, and is only possible thanks to AMD's hardware scope, as HDMA/hUMA and the massive improvement in GPU DMA are really only possible (as of yet) on AMD's hardware and software packages. D3D12 will not make much of a difference on platforms other than the X1, where developers [should be] getting more DMA for GPU tasks, beyond D3D buffer allocation, etc.
  • jwcalla - Monday, March 24, 2014 - link

    If you want your game to have a mobile presence and be on Steam Machines, you're going to need OpenGL. You can get access to just about all the hardware features and performance you want with OpenGL 4.4.

    Time for devs to give it a second look IMO.
  • ninjaquick - Tuesday, March 25, 2014 - link

    And the second look will wind up the same way. Independents who can starve a little longer will probably make sure to release on Steam Machines, but larger developers, with larger codebases and way more on their minds, can't just jump ship without spending way too much time re-engineering much of their code.
  • martixy - Monday, March 24, 2014 - link

    I see a bright future for the gaming industry...
    On that note, does anyone happen to have a time machine? Or a ship that goes really really fast?
  • Rezurecta - Monday, March 24, 2014 - link

    What piqued my interest is the fact that even MS uses Chrome. ;)

    Seriously though, I posted the same on Overclock.net. Given the expected time to launch, it seems that this was only thought about because of AMD and Mantle. It is a shame that AMD paved the way and Mantle may not end up a widely supported API.

    Hopefully, Nvidia and Intel accept AMD's open offer to join Mantle and we can put the control in the hands of the IHVs instead of the OS maker.
  • errorr - Monday, March 24, 2014 - link

    MS has a lot of work to do if they want to be relevant in mobile. OpenGL ES has been heavily optimized for tile-based solutions and takes into account the numerous benefits and flaws relative to desktop GPUs. Just about everything in the mobile space is designed to limit memory access, which is slow, narrow, and power intensive. The entire paradigm is completely different. Adreno is also VLIW, which means any low-level API stuff is bound to be very hard to implement. At least it will work on Nvidia chips, I guess, but that is still only 10% of the market at best.
  • errorr - Monday, March 24, 2014 - link

    On another note, there was some desire for a better understanding of mobile GPU chips in the PowerVR article, and the ARM Mali blog at least did the math on publicly available statements and outlined the capabilities of each "shader core".

    Each Mali has 1-16 shader cores (usually 4-8). Each shader core has 1-4 arithmetic pipes (SIMD). Each pipe has 128-bit quad-word registers. The registers can be flexibly accessed as 2 x FP64, 4 x FP32, 8 x FP16, 2 x int64, 4 x int32, 8 x int16, or 16 x int8. There is a speed penalty for FP64 and a speed bump for FP16 etc., relative to the 17 FP32 FLOPS per pipe per clock. So at max, 16 shader cores with 4 pipes per core @ 600 MHz gives a theoretical 652 FP32 GFLOPS, although a 16-core, 2-pipe design (T-760) at 326 FP32 GFLOPS seems the more likely configuration.
    There is also a load/store pipeline and a texture pipeline (1 texel per clock, or 1/2 texel with trilinear filtering).
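
    A quick sanity check of those numbers (a throwaway snippet; the 17 FP32 FLOPS per pipe per clock figure is the one quoted above):

        #include <cstdio>

        // Peak FLOPS = shader cores x pipes/core x FLOPS/pipe/clock x clock (GHz).
        constexpr double GFlops(int cores, int pipes, double flopsPerClock, double ghz)
        {
            return cores * pipes * flopsPerClock * ghz;
        }

        int main()
        {
            std::printf("16 cores x 4 pipes @ 600 MHz: %.1f GFLOPS\n", GFlops(16, 4, 17, 0.6)); // 652.8
            std::printf("16 cores x 2 pipes @ 600 MHz: %.1f GFLOPS\n", GFlops(16, 2, 17, 0.6)); // 326.4
        }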

    Wasn't sure where to put this but they have been sharing/implying a bunch of info on their cores publicly for a while.
  • lightyears - Monday, March 24, 2014 - link

    Please give your opinion on the following question:
    What about notebooks with nVidia Optimus? I have a notebook with GTX 680M dedicated graphics combined with Ivy Bridge integrated graphics. So the 680M will support DirectX 12, but the Ivy Bridge integrated graphics probably won't.
    Unfortunately those two are connected by nVidia Optimus technology, a technology that seems impossible to turn off. I already looked in my BIOS but I can't get rid of it. Whether I like it or not, I am forced to have Optimus.

    So will Optimus automatically select the 680M for DX12 applications?

    Or won't it work at all? Will the game refuse to install because my stupid integrated graphics card doesn't support it?

    The last option would be a true shame and I would be really frustrated, given that I spent a lot of money on a high-end notebook, and paid a lot to have a heavy (DX12 capable) 680M in it. And I still wouldn't be able to use DX12 although I have a DX12 capable card...
  • Ryan Smith - Tuesday, March 25, 2014 - link

    "What about notebooks with nVidia Optimus?"

    There is no reason that I'm aware of that this shouldn't work on Optimus. The Optimus shim should redirect any flagged game to the dGPU, where it will detect a D3D12 capable device and be able to use that API.
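
    (For the curious, the detection step on the application's side would look something like the sketch below. The API isn't public yet, so this assumes the eventual D3D12CreateDevice entry point accepts a null output pointer as a pure capability check, the way D3D11's device creation does:)

        #include <d3d12.h>
        #include <dxgi1_4.h>

        // Walk the DXGI adapters and return the first one that can back a
        // D3D12 device. Under Optimus, the shim decides which physical GPU
        // actually sits behind the adapter the game sees.
        IDXGIAdapter1* FindD3D12Adapter(IDXGIFactory4* factory)
        {
            IDXGIAdapter1* adapter = nullptr;
            for (UINT i = 0;
                 factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
            {
                // Null device pointer = "just tell me if this would work".
                if (SUCCEEDED(D3D12CreateDevice(adapter, D3D_FEATURE_LEVEL_11_0,
                                                __uuidof(ID3D12Device), nullptr)))
                    return adapter; // caller owns the reference
                adapter->Release();
            }
            return nullptr;
        }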
  • ninjaquick - Tuesday, March 25, 2014 - link

    Awesome use of the word shim.
  • lightyears - Tuesday, March 25, 2014 - link

    I looked on the internet and it looks like it won't be a real problem, indeed. Back in 2011 the same situation existed with DX11: some Optimus notebooks had a Sandy Bridge CPU (DX10.1 capable) and a GTX 555 (DX11 capable). For some people Optimus didn't automatically detect the DX11 capable device and they had some problems, but after some changes in the settings they managed to get DX11 going with the GTX 555 on those Optimus notebooks, although Sandy Bridge was not DX11 capable.
    So I suppose Optimus also won't be a problem this time with DX12. Good news.
    Although I truly hate Optimus. It already prevented me from using stereoscopic 3D on a supported 3DTV.
  • ericore - Monday, March 24, 2014 - link

    "But why are we seeing so much interest in low level graphics programming on the PC? The short answer is performance, and more specifically what can be gained from returning to it."

    That's absolute BS.
    The reason is threefold: 1. For the Xbox One. 2. To prevent a surge of Linux gaming. 3. To fulfill an alliance/pact with Intel and Nvidia.
  • ninjaquick - Tuesday, March 25, 2014 - link

    It is not BS at all. Developers have been asking, even crying out, for low-level access to GPU hardware on the PC for ages. The Xbox One was the last straw: currently it is no more programmable than a PC. This caused Crytek a massive headache, as they budgeted rendering based on 'to the metal' efficiency and were instead met with massive draw overheads, forcing them to severely reduce the quality of their work in 'Ryse'. Other developers have complained about the very same thing. The Xbox 360 is more programmable. The benefit D3D12 has this time around is that the X1 is based on a hybrid WindowsRT/8 x64, meaning D3D12 can be pushed to all Win8-generation devices.
  • rootheday3 - Monday, March 24, 2014 - link

    I am pretty sure that at least one of the demos (3DMark?) was actually run on a Haswell iGPU - meaning Intel is well along on driver development. Some of the announced features (support for order independent transparency) also sound like Intel extensions on DX11 (PixelSync).
  • rootheday3 - Monday, March 24, 2014 - link

    Also - reducing driver cost and single-threaded performance demands should help ensure that mobile gaming on laptops and tablets is less likely to be CPU bound due to frequency constraints. It should also allow more of the thermal budget to go to the GPU for better rendering/less throttling.
  • Zak - Tuesday, March 25, 2014 - link

    "Powered by a GeForce GTX Titan Black, Microsoft tells us the demo is capable of sustaining 60fps."

    Titan Black? No kidding. At what resolution?
  • Scali - Tuesday, March 25, 2014 - link

    "To use consoles as an example once again, this is why they are capable of so much with such a (relatively) weak CPU, as they’re better able to utilize their multiple CPU cores than a high level programmed PC can."

    This is patently false.
    Namely, the PS3 with its Cell has only a single regular CPU core. The SPEs are very limited and not suitable for batching up draw calls.
    The Xbox 360 has a more 'regular' CPU, but it has 'only' 3 cores, and the rendering is mostly done on a single core. (The PS4 and Xbox One are too new to draw any conclusions from yet, so 'console efficiency' is what we know of consoles that are not all about multithreading.)

    You are confusing low-level with multithreading. Low-level is about programming the GPU with a very direct path, little abstraction. That is why it is efficient on consoles. There is a much thinner layer between OS, GPU, driver, API and application than on a regular PC.

    Multithreading is another way of speeding up graphics, but this does not necessarily require programming the GPU directly. Which is also not what DX12 is going to do. It will still abstract the hardware to a point where vendor and implementation are not relevant. But it will allow better control of batching up calls on threads other than the master rendering thread (D3D11 already has support for multithreading, but it has some limitations).
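
    For reference, D3D11's existing model is the deferred context: a worker thread records into it, and the immediate context replays the result. The single replay point (plus drivers doing much of their work at replay time) is the limitation being alluded to. A minimal sketch:

        #include <d3d11.h>

        // Worker thread: record commands into a deferred context, then bake
        // them into a command list.
        void WorkerRecord(ID3D11Device* device, ID3D11CommandList** outList)
        {
            ID3D11DeviceContext* deferred = nullptr;
            device->CreateDeferredContext(0, &deferred);

            // ... set state and issue draws on 'deferred' here ...

            deferred->FinishCommandList(FALSE, outList);
            deferred->Release();
        }

        // Main thread: every recorded list still funnels through the one
        // immediate context.
        void MainThreadSubmit(ID3D11DeviceContext* immediate, ID3D11CommandList* list)
        {
            immediate->ExecuteCommandList(list, FALSE);
            list->Release();
        }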

    It seems that AMD has done a great job on confusing the general public, with its Mantle propaganda.
  • Death666Angel - Tuesday, March 25, 2014 - link

    All these articles make me think of John Carmack's QuakeCon 2013 (I think) keynote where he talked about having the same access to the GPU as he does to the CPU and basically programming in machine code. Hope this is coming. :) I need the performance for 120fps/4k Oculus Rift games! :D
  • Scali - Tuesday, March 25, 2014 - link

    The irony is that the original D3D had low-level access to the GPU, with its execute buffer system. This would easily allow multiple threads to batch up commands for the GPU in parallel efficiently.
    But Carmack complained that the API was too hard to use.
    It looks like we're going back in time somewhat, getting closer to the original D3D.
  • Ramon Zarat - Tuesday, March 25, 2014 - link

    The only thing I really want to know is this:

    Is current gen GPU hardware, mainly Kepler/Maxwell and Tahiti/Hawaii, *technically* ABLE to support DX12? With strong emphasis on ABLE, as even if they are able, I seriously doubt AMD or Nvidia will be "charitable" enough to actually do it; instead they'll force us all to upgrade again, thanks to the wonder of artificial market segmentation.

    Will we see modded drivers enabling (at least partially) DX12 features on current hardware? That would be interesting... For example, I'm currently running a modded Intel OROM BIOS on my Z68 board so it can use TRIM under RAID0 with SSDs. With the Z77 and Z87 there's no TRIM problem out of the box, and they are 99.9% identical to the Z68 SATA controller. I've had ZERO problems in 2 1/2 years, so yes, TRIM works with RAID0 SSDs on the Z68. Thanks a lot, Intel... for nothing.
  • inighthawki - Wednesday, March 26, 2014 - link

    Considering Nvidia made it a huge point that Fermi and above will be supported and represent >50% of the existing market, yeah, probably. Why would they announce that and then not write drivers for it?
  • TheJian - Wednesday, March 26, 2014 - link

    "For low level PC graphics APIs Mantle will be the only game in town for the next 18-20 months; but after that, then what?"

    Ignorance, or the dumbest comment I've seen this year ;) Either ignorant, or just trying to pretend OpenGL can't do this already and hasn't for years. You forgot the OpenGL speeches showing 5-30x better draw calls, and that is ALREADY here, not 2 years away like DX12.
    http://blogs.nvidia.com/blog/2014/03/20/opengl-gdc...
    Still showing bias here I guess... Feel free to watch the 52min video in that link on DRAW CALLS and how pointless Mantle is, as OpenGL has had this stuff for years. He even shows some code.
    "How to get a crap-ton more draw calls" (a new technical term he used in the video... LOL) tells you in the first minute that there is no need for Mantle, as it's about 10x the draw calls, right? Mantle=dead. If not by DX12 (which isn't out for a while - Win9?), then by OpenGL/SteamOS/Android pushing OpenGL.

    NV's speech wasn't a SMALL speech, or there wouldn't be a 130-slide doc (at least you guys mentioned it; that's not enough) explaining what they covered, on top of the 52min dev day video covering draw calls, among others. Considering it is ALREADY working in OpenGL (the same stuff as Mantle; even Carmack said a year ago you can do the same thing already and get as close to the metal as you'd like in OpenGL with extensions), I'm sure the OpenGL speeches had more concrete info than the DX speech at GDC (Valve didn't say a word about anything BUT OpenGL recently, and how to port DX9 to it). How can you not detail the info on OpenGL and call yourself a hardware site? I know you guys hate CUDA (or you'd test it vs. OpenCL/AMD repeatedly to death), but why the hate for OpenGL, which NV has no control over? It hurts Mantle, it's DONE now, AMD pays you for a portal, etc., so it's off limits?

    Mantle's main competitor for the 2 years while we wait for DX12 is... wait for it... OPENGL, which already does the same stuff Mantle does ;) I expected nothing less from Ryan Smith. Where is the coverage of the big OpenGL speeches? Devs skipped the VR speech to hear the draw call speech (it was going on right next door, and John McDonald even says at the end of the video he'd be in there if he wasn't giving the speech... LOL), and NV was a bit surprised those devs skipped VR. They expected 5 people and got a crowd wanting OpenGL draw call info ;)

    But believe Ryan: Mantle's only competition is DX, even though OpenGL (ES) is the only thing really used in mobile games, which will further Valve's desktop OpenGL push too. Steam Dev Days was all about leaving DX and going OpenGL, and much of GDC was the same, or about ES 3.1 mobile info. DX12 won't get far unless you believe MS will take out Android/SteamOS/iOS. Those platforms are already set in cement, so far ahead with unit sales off the charts and game dev mindshare already on iOS/Android; they can easily push OpenGL together, and all three want DX/Wintel dead.

    Wintel lost 21% of ALL notebook share last year. This year we have 64-bit CPUs coming from all the ARM SoC players, and they'll use those to go further up the chain from crapbooks (chromebooks to me... LOL) to REAL notebooks and low-end desktops (+some servers), and move up again with the next revs into everything high-end that x86 owns today.
  • klmccaughey - Wednesday, March 26, 2014 - link

    DirectX needs to die.

    I think this is a response to Mantle. The best choice for PC gamers would be an API that is not linked to Windows as an OS. Windows is dying, and this is an attempt by MS to keep Windows an essential part of PC gaming. If we had Mantle and NVMantle then we could free PC gaming from the yoke of MS and (almost) everyone would be happy.

    The idea of extending DX to phones etc. is pure arrogance and a bad decision. The architecture is so different that I cannot see the APIs lining up in anything other than a forced manner.

    As a one-time assembly language games programmer on 8-bits, and then an embedded systems programmer in the same plus C, I really wish we could get a largely OS-independent low-level API. OpenGL needs to die too, as it has been committee'd into a mess.

    Let's start again and have an industry standard (as far as possible) low to mid level API, backed by the graphics card producers.

    I fear DX12 could be the last gasp of PC gaming as Windows falls into obscurity and we see a fragmentation of the platform that isn't covered by a good graphics API.
  • Scali - Wednesday, March 26, 2014 - link

    "If we had Mantle and NVMantle then we could free PC gaming from the yoke of MS and (almost) everyone would be happy."

    Perhaps you've heard of this thing called OpenGL...?
    PC gaming could have been freed from MS at any time... Somehow it never happened...

    "The idea of extending DX to phones etc is pure arrogance and a bad decision."

    DX11 is already on Windows Phone. It's only logical that DX12 also comes to phones (seeing as even phones will soon have DX11-capable hardware, which is enough to support DX12).
  • Ubercake - Friday, March 28, 2014 - link

    I'd feel more comfortable with a 3rd party that doesn't design GPUs creating the low-level API, but when that 3rd party is Microsoft, I worry DX12 will be in a beta stage far longer than even Mantle.
  • boe - Tuesday, April 1, 2014 - link

    Crysis 4 and DX12 can't come soon enough for me :)
  • eskates - Tuesday, April 1, 2014 - link

    I'm curious about the future of Mantle. The biggest downside to it is the fact that it's limited to AMD GPUs.

    The game consoles would be a great place to implement it. The problem is that game developers are already using the custom APIs provided by Sony and Microsoft. So Mantle integration will have to provide something that those console-specific APIs aren't already providing.

    On top of that is the fact that a lot of console games are also coming to PC. Developers might need to take advantage of low-level APIs to make a game run as well on the consoles as it does on PC. But the PC doesn't necessarily need that extra performance boost unless the developer wants to keep the system requirements low. And then only newer GPUs are going to support DX12: although you can get a card for around $100 that will support it, a lot of the low-end hardware currently in PCs will not. This is a problem for Mantle too.

    So, what seems like an advantage to me at first - AMD being the exclusive GPU for "next-gen" consoles - turns out to be another roadblock to full adoption for Mantle.

    What happens if Mantle comes out of beta and is announced as being compatible with non-AMD GPUs and this all happens in the next 6 months (by Sept/Oct 2014)? Will it be enough to gain the support needed to make DX12 unnecessary?

    When DX12 releases what will it have that Mantle won't? And if Mantle has become superbly popular during the development of DX12, then what will DX12 offer that motivates the industry to adopt it?

    If Mantle is unable to gain any ground over the next 6-12 months, I don't foresee it being used much. It's difficult to go up against Microsoft, which has had a stranglehold on the industry with DirectX for more than a decade and has the funding and political support to make it successful.

    A lot of unanswerable questions come to mind. It's going to be an interesting shift in both programming and hardware and a welcome change to the way hardware is utilized today.
