Original Link: https://www.anandtech.com/show/2285
Where's The Physics: The State of Hardware Accelerated Physics
by Ryan Smith on July 25, 2007 4:00 PM EST - Posted in GPUs
Introduction
If you believe the more tabloid-oriented hardware news sites, 16 months ago you would have thought that ATI and NVIDIA were in an all-out war. Harsh phrases were flung, benchmarks were beaten to death, and both sides plotted for motherboards with a third x16 PCIe slot in order to have a GPU dedicated to physics. Yes, 2006 was certainly an exciting time for GPU-accelerated physics, and then the party came to a grinding halt.
Over in the Ageia camp, 2005 saw them kick off the whole subject of hardware accelerated physics with their announcement of plans to develop the PhysX hardware. 2006 saw the launch of that hardware, and while it showed initial promise, there was a failure to follow through with games that meaningfully used it. Much like the GPU camp, Ageia has been keeping a low profile so far this year.
To be fair, much of this is aligned with the traditional gaming seasons; titles are often loaded into the 4th quarter for the Christmas season, leaving few games - and by extension few new uses of physics - to talk about. But it's also indicative of a general dampening of spirit for hardware accelerated physics; things have not gone as planned for anyone. Now in 2007, some two years after Ageia's announcement got the ball rolling, the number of released AAA titles using some sort of hardware physics acceleration can still be counted on one hand.
So what happened to the enthusiasm? There's no simple answer, as there is no single cause; rather, a combination of factors has done a very good job of dampening things. Today we'll take a look at those factors, the business behind all of this, and why, as the days tick by, hardware accelerated physics keeps looking more and more like a pipe dream.
GPU Physics
When ATI and NVIDIA launched their first physics initiatives in 2006, they rallied behind Havok, the physics middleware provider whose software has powered a great number of PC games this decade. Havok in turn produced Havok FX, a separate licensable middleware package that used Shader Model 3.0 for calculating physics on supported GPUs. Havok FX was released in Q2 of 2006, and if you haven't heard about it you're not alone.
So far not a single game has shipped that uses Havok FX; plenty of games have shipped using the normal Havok middleware, which is entirely CPU-powered, but none with Havok FX. The only title we know of that has been announced with Havok FX support is Hellgate: London, which is due this year. However, we've noticed there has been next to no mention of this since NVIDIA's announcement in 2006, so make of that what you will.
Why any individual developer chooses to use Havok FX or not will have its own answer, but there are a couple of common threads that we believe explain much of the situation. The first is pure business: Havok FX costs extra to license. We're not privy to the exact fee schedule Havok charges, but it's no secret that PC gaming has been in decline - it's a bad time to be spending more if it can be avoided. Paying for Havok FX isn't going to break the bank for the large development houses, but there are other, potentially cheaper options.
The second reason, and the one with the greater effect, is a slew of technical limitations that stem from using Havok FX. Paramount among these is that what the GPU camp is calling physics is not what the rest of us would call physics with a straight face. As Havok FX was designed, the results of physics simulations run on the GPU cannot be retrieved in a practical manner; as such, Havok FX is meant to generate "second-order" physics. Such physics are not related to gameplay and are inserted as eye-candy. A good example of this is Ghost Recon: Advanced Warfighter - ignoring for the moment that it was a PhysX-powered title - which used the PhysX hardware primarily for extra debris.
The problem with this is of course obvious, and Havok goes to a great deal of trouble in its Havok FX literature to make it clear. The extra eye-candy is nice, and it's certainly an interesting way of bypassing the problem of lots of little things loading down the CPU (although Direct3D 10 has reduced the performance hit of this), but it also means that the GPU can't have any meaningful impact on gameplay. This doesn't make Havok FX entirely useless, since eye-candy does serve its purpose, but it's not what most people (ourselves included) envision when we think of hardware accelerated physics; we're looking for the next step in interactive physics, not more eye-candy.
There's also a secondary issue that sees little discussion, largely because it's not immediately quantifiable: performance. Because Havok FX does its work on the GPU, shader resources that would otherwise be used for rendering get reallocated to physics calculations, leaving the remaining resources to handle the original rendering workload plus the additional rendering generated by all that extra eye-candy. With the majority of new titles already GPU limited, it's not hard to imagine this becoming a problem.
![](https://images.anandtech.com/reviews/motherboards/jetway/939gt4sli/3x16.jpg)
A Jetway board with 3 PCIe x16 slots. We're still waiting to put them to use
Thankfully for the GPU camp, Havok FX isn't the only way to get some level of GPU physics; Shader Model 4.0 introduces some new options. Besides implementing Havok FX-style effects in the form of custom code, with proper preparation the geometry shader can be used to do second-order physics much like Havok FX does. The Call of Juarez technology demonstration, for example, uses this technique for its water effects. That said, using the geometry shader brings the same limitation as Havok FX: the data can't be retrieved for first-order physics.
The second, and by far more interesting, use of new GPU technology is exploiting GPGPU techniques to do physics calculations for games. ATI and NVIDIA provide the CTM and CUDA interfaces respectively to allow developers to write high-level code that does computing work on the GPU, and although the primary use of GPGPU technology is the secondary market of high-performance research computing, it's possible to use this same technology with games. NVIDIA is marketing this under the Quantum Effects initiative, separating it from its earlier Havok-powered SLI Physics initiative.
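To make the idea more concrete, here is a minimal sketch of what second-order physics might look like when written against CUDA. This is our own illustrative example rather than code from NVIDIA or any shipping title, and the kernel and structure names are ours: a batch of debris particles is integrated entirely in GPU memory and handed to the renderer rather than copied back to the CPU.

```cuda
#include <cuda_runtime.h>

// One debris particle: position and velocity, stored as float4 for alignment.
struct Particle {
    float4 pos;  // xyz = position, w unused
    float4 vel;  // xyz = velocity, w unused
};

// Integrate every particle one timestep. Each GPU thread handles one particle,
// which is what makes this workload "embarrassingly parallel".
__global__ void integrateDebris(Particle* particles, int count, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count)
        return;

    Particle p = particles[i];

    // Simple ballistic motion under gravity; real middleware would also
    // handle collisions, damping, and particle lifetimes.
    p.vel.y += -9.8f * dt;
    p.pos.x += p.vel.x * dt;
    p.pos.y += p.vel.y * dt;
    p.pos.z += p.vel.z * dt;

    // Crude ground plane so debris doesn't fall forever.
    if (p.pos.y < 0.0f) {
        p.pos.y = 0.0f;
        p.vel.y *= -0.3f;  // lossy bounce
    }

    particles[i] = p;
}

// Host-side launch: the particle buffer lives in GPU memory and is consumed
// directly by the renderer (e.g. through a shared vertex buffer); nothing is
// read back, so gameplay code never sees these results.
void stepDebris(Particle* d_particles, int count, float dt)
{
    const int threadsPerBlock = 256;
    int blocks = (count + threadsPerBlock - 1) / threadsPerBlock;
    integrateDebris<<<blocks, threadsPerBlock>>>(d_particles, count, dt);
}
```

Unlike Havok FX, a GPGPU interface does allow the results to be copied back for first-order use, but doing so costs bus bandwidth and latency every frame - which is why the first-order versus second-order question below is still an open one.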
Unfortunately, the tools for all of these technologies are virtually brand new, so games using GPGPU techniques are going to take some time to arrive. This would roughly be in line with the arrival of games that make serious use of DirectX 10, which includes the lag period during which games will need to support older hardware and hence can't take full advantage of GPGPU techniques. The biggest question here is whether any developers using GPGPU techniques will end up using the GPU for first-order physics or solely for second-order effects.
It's due to all of the above that the GPU camp has been so quiet about physics as of late. Given that the only commercially ready GPU accelerated physics technology is limited to second-order physics, and that only one game using it is due to be released this year, there's simply not much to be excited about at the moment. If serious GPU accelerated physics is to arrive, it's going to be at least another video card upgrade away.
PhysX
2006 and 2007 have been rough for Ageia and their PhysX hardware. While they can rightfully claim to be the only solution for complete hardware accelerated physics at this time, building a base of hardware owners and a base of developers isn't coming easily. As of right now the only two major titles that have shipped with PhysX support are Ghost Recon Advanced Warfighter (GRAW) and its sequel, GRAW 2.
Much of this we believe can be attributed to business reasons. Although Ageia offers a unified physics API that can handle physics done either in software or hardware, getting a developer to fully support the PhysX hardware means getting them to fully use said API. The Havok physics API has in turn been stiff competition in the physics middleware market, and it's fair to say that a number of games that have come out and will be coming out are using Havok and not PhysX. The situation is so bad that Ageia can't even give away the PhysX SDK - it's free and developers still aren't using it. With Havok eating up the business for software physics engines (excluding those developers who use their own engines), Ageia is left in a poor spot.
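As a rough illustration of what the unified API means in practice, the sketch below shows the scene-creation decision a developer faces. It is written from our recollection of the 2.x-era PhysX SDK, so take the exact identifiers with a grain of salt; the point is that the same API drives either a hardware scene on the PPU or a software fallback on the CPU, and "fully supporting" the hardware mostly means routing all gameplay physics through this one interface.

```cpp
#include "NxPhysics.h"

NxPhysicsSDK* gPhysicsSDK = NULL;
NxScene*      gScene      = NULL;

bool InitPhysics(bool preferHardware)
{
    gPhysicsSDK = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
    if (!gPhysicsSDK)
        return false;

    NxSceneDesc sceneDesc;
    sceneDesc.gravity = NxVec3(0.0f, -9.8f, 0.0f);
    // Ask for a hardware (PPU) scene if preferred, otherwise use the
    // software solver on the CPU - the rest of the code is identical.
    sceneDesc.simType = preferHardware ? NX_SIMULATION_HW : NX_SIMULATION_SW;

    gScene = gPhysicsSDK->createScene(sceneDesc);
    if (!gScene && preferHardware) {
        // No PPU present (or hardware scene creation failed): fall back.
        sceneDesc.simType = NX_SIMULATION_SW;
        gScene = gPhysicsSDK->createScene(sceneDesc);
    }
    return gScene != NULL;
}

void StepPhysics(float dt)
{
    // Standard per-frame loop: kick off the simulation, then collect results.
    gScene->simulate(dt);
    gScene->flushStream();
    gScene->fetchResults(NX_RIGID_BODY_FINISHED, true);
}
```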
Ageia's second business issue is that they are still suffering from a chicken-and-egg effect with developers and users. Without a large installed base of PhysX cards, developers are less likely to support the PhysX hardware, and without developers publishing games that use the hardware, few people are interested in buying potentially useless hardware. Unfortunately for Ageia this is a time-sensitive issue that only gets worse as the days pass; the marginalization of PhysX due to this effect is undoubtedly pushing developers towards other physics solutions, which ultimately breaks the chicken-and-egg scenario, but not in Ageia's favor.
![](https://images.anandtech.com/reviews/physics/asus/physx/grawcompare.jpg)
Ghost Recon Advanced Warfighter w/PhysX
Because Ageia is not directly producing PhysX cards, the actions of their partners can also have a significant effect on the success of PhysX. We believe that Ageia has lost the support of powerhouse Asus, as the supply of Asus's PhysX cards has completely dried up, leaving the smaller BFG to supply the North American market. Coincidentally, ELSA (who only sells products in overseas markets) has become Ageia's third partner and is now producing PhysX cards.
At this point Ageia does have one ace left up its sleeve, and that's Unreal Engine 3. Epic is using the PhysX API at the core of UE3's physics system, giving Ageia an automatic window of opportunity to get PhysX hardware support into every one of the numerous games slated to use UE3. Even if everyone else were to abandon the PhysX API, there are conceivably enough games using UE3 to sustain Ageia and PhysX.
The most important of these games will be Unreal Tournament 3, which is due for release this year. So far the only major pieces of software Ageia has had to show off PhysX have been the GRAW series, which underutilizes the PhysX hardware, the partially aborted CellFactor technology demo, and the single-level GRAW 2 technology demo; UT3 will be the first major game that may make as full a use of the hardware as Ageia has in its own technology demos. We believe that UT3 will be the final push for PhysX hardware acceptance: either the hardware will die at this point, or UT3 will push the issue from developer acceptance to consumer acceptance. In turn, any victory for Ageia will rely on Epic making full use of the PhysX hardware and not using it solely for eye-candy; the latter would mean certain death, while the former will hinge on the PhysX hardware not slowing the game down as we saw in GRAW.
At the very least, unlike with the GPU camp, we should have a clear idea by the start of 2008 whether or not the PhysX hardware is going to take off. We expect Ageia will be hanging on for dear life until then.
Final Thoughts
More than each other, however, there's one other thing that threatens the camps offering hardware physics acceleration: the CPU. Recent years have seen CPUs go multi-core, first with two cores, and this week saw the introduction of Intel's (practically) cheap four-core Q6600. Being embarrassingly parallel in nature, physics simulations aren't just a good match for GPUs/PPUs with their many sub-processors, but a logical fit for multi-core CPUs as well.
While both AMD and Intel have stated that they intend to avoid getting into a core war as a replacement for the MHz race, all signs point to a core war taking place for the foreseeable future, with Intel going so far as to experiment with 80-core designs. Given the monolithic nature of games, these cores will all be put to work in one way or another, and what better way than physics simulations, which can be split nicely among cores? While CPUs aren't the floating point powerhouses that dedicated processors are, with multiple cores they can realistically keep the gap closed well enough to prevent dedicated processors from being viable for consumers. In some ways Havok is already betting on this, with their software physics middleware designed to scale well with additional CPU cores.
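To show why physics splits so cleanly across cores, here is a toy sketch of the idea: with no dependencies between particles, the work divides into independent per-core chunks with no locking. This is our own illustration (using modern C++ threads for brevity), not Havok's implementation.

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Particle { float px, py, pz, vx, vy, vz; };

// Integrate one contiguous chunk of particles; chunks share no state,
// so every core can work without locks or synchronization.
static void integrateChunk(Particle* p, std::size_t count, float dt)
{
    for (std::size_t i = 0; i < count; ++i) {
        p[i].vy += -9.8f * dt;
        p[i].px += p[i].vx * dt;
        p[i].py += p[i].vy * dt;
        p[i].pz += p[i].vz * dt;
    }
}

// Split the particle array evenly across however many cores are available.
void integrateAll(std::vector<Particle>& particles, float dt)
{
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 1;

    std::size_t chunk = (particles.size() + cores - 1) / cores;
    std::vector<std::thread> workers;

    for (unsigned c = 0; c < cores; ++c) {
        std::size_t begin = c * chunk;
        if (begin >= particles.size()) break;
        std::size_t count = std::min(chunk, particles.size() - begin);
        workers.emplace_back(integrateChunk, &particles[begin], count, dt);
    }
    for (std::thread& t : workers) t.join();
}
```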
Furthermore, the CPU manufacturers (Intel in particular) have a hefty lead in bringing new manufacturing processes to market and can exploit this to further keep the gap closed versus GPUs (80nm at the high end) and the PhysX PPU (130nm). All of this makes multi-core CPUs an effective and low-risk way of going about physics compared to a riskier dedicated physics processor. For flagship titles developers may go the extra mile on physics; on most other titles we wouldn't expect such an effort.
So what does all this mean for hardware physics acceleration overall? In spite of the original battle being between the PPU and the GPU, we're left wondering just how much longer Ageia's PhysX software/hardware package can hold out before losing the war of attrition, at the risk of becoming marginalized before any decent software library even comes out. Barring a near-miracle, we're ready to write off the PPU as an impressive piece of hardware that provided a technological solution to a problem few people ended up being concerned about.
The battle that's shaping up looks to be between the GPU and the CPU, with both sides having the pockets and the manufacturing technology to play for keeps. The CPU is the safe bet for a developer, so it's largely up to NVIDIA to push the GPU as a viable physics solution (AMD has so far not taken a proactive approach with GPU physics outside of Havok FX). We know that the GPU can be a viable solution for second-order physics, but what we're really interested in is first-order physics. So far this remains unproven as far as gaming is concerned, as current GPGPU projects working with physics are all doing so as high performance computing applications that don't use simultaneous graphics rendering.
Without an idea of how well a GPU will perform when handling both tasks simultaneously, it's too early to call a victor. At the very least, developers won't wait forever, and the GPU camp will need to prove that their respective GPGPU interfaces can provide enough processing power to justify the cost of developing separate physics systems for each GPU line. However, given the trend of moving things back onto the CPU through projects such as AMD's forthcoming Fusion technology, there's an awful lot in favor of the status quo.