Original Link: https://www.anandtech.com/show/2703
GeForce 3D Vision: Stereoscopic 3D From NVIDIA
by Derek Wilson on January 8, 2009 2:30 PM EST - Posted in GPUs
As we've seen over the past few years, NVIDIA isn't content with simply doing well what has already been done. Certainly their graphics cards are good at what they do, and competition in the market today is delivering amazing value to consumers. But they've forged ahead with initiatives like SLI for multi-GPU rendering and CUDA for general purpose programming on GPUs. Now they're taking it a step further and getting into stereoscopic 3D.
To be fair, NVIDIA has supported stereoscopic 3D for a long time, but this is more of a push to get pervasive stereoscopic graphics into the consumer space. Not only will NVIDIA graphics cards support stereoscopic rendering, they will also be enhancing their driver to extract depth information and create left and right eye images for applications that do not natively produce or support stereo rendering. And did we mention they'll also be selling active wireless shutter glasses?
Packaged as GeForce 3D Vision, NVIDIA's shutter glasses and transmitter pair will run consumers a respectable $200. This is more expensive than some glasses and cheaper than others. We don't actually have any other glasses in house to compare them to, but the quality, wireless freedom and battery life are quite good. If it becomes necessary, we will do a comparison with other products, but the real advantage isn't really in the hardware; it's in the driver. The package also comes with a soft bag and cloth for the glasses, alternate nose pieces, cables and converters, and a couple of discs with drivers, a stereoscopic photo viewer, and a video player.
Stereoscopic 3D shutter glasses have been around since the late 90s, but the push away from CRTs to LCDs with a fixed 60Hz refresh rate meant that high quality stereoscopic viewing on the desktop had to be put on hold (along with hopes for smaller pixel sizes, but that's a whole other rant). With Hollywood getting really interested in 3D movies and some display manufacturers getting on board with 120Hz monitors, TVs and projectors, it makes sense that we would see someone try to push this back to the forefront.
Before we get into just how NVIDIA wants to make stereoscopic 3D on the desktop a reality, let's take a look at exactly what we're talking about.
More 3D than 3D: Stereoscopic Defined
Let's start with reality: we live in a world where things occupy a finite volume of space at any given moment in time... Alright, maybe that's not a good way to explain this. Let me try again. Stuff we see in real life has some width, some height and some depth. Our life in our 3D world and our two eyes give us the ability to quickly and easily judge the position and dimensions of objects. 3D video games try to approximate this by drawing a 2D image that has many of the same "depth cues" we use to judge position and shape in reality.
Looking at a picture of something, a 2D image can help us perceive some of the depth that we would have seen if we had stood at the same location as the camera: stuff that's further away appears relatively smaller than the foreground. Shadows and lighting help give us a feel for dimensions as they fall on objects. In video, we would also see parallax in effect, making objects closer to the viewer appear to move faster than objects further away. Our experience tells us that we can expect certain constants in our reality, and we pick up on those and use them to judge things that look similar to reality. Video games exploit all these things to help tell our brains that there is depth in that monitor. Or maybe we're looking at a video of something that was reality. Either way, there is something major (aside from actual depth) missing.
Though we can judge 3 dimensions to a certain extent based on depth cues, having two eyes see objects from two slightly different positions is what really tells our brain that something has depth. The combination of these two slightly different images in our brain delivers tons of information on depth. Trying to play catch with one eye is tough. Just ask your neighborhood pirate.
Seeing two different images with your two different eyes, or rather presenting two different images of the same thing from slightly different positions, is what stereoscopic 3D is. It's right there in the word ... ya know ... stereo ... and scopic. Alright, moving on.
If you've ever tried looking at those "magic eye" pictures, you know what impact stereoscopic information alone can have. For those who don't know, a magic eye image is a seemingly random looking pattern that, when viewed with your eyes focused "through" the image, reveals a hidden 3D picture. Though the picture contains absolutely no other depth information (no lighting or shadows, no perspective projection, nothing but basic shapes that each eye picks up when you focus through the image), the 3D effect is pronounced and looks "deeper" than any 3D game out there.
This is not a sailboat.
Combining stereoscopic information with all the other depth information makes for a dramatic effect when done properly. Correct rendering and presentation of left and right eye images, with proper 3D projection, lighting and all the rest, simply looks real enough to touch. Viewing a game properly rendered for stereoscopic effects can range from feeling like looking at a shoe box diorama or a pop-up book to looking through a window into the next room.
Hollywood tried stereoscopic 3D with anaglyphs (those red and blue images you need the red and blue glasses for), but it didn't really take off except as a sort of lame gimmick. Back in the late 90s and early this century, we saw the computer industry test the waters with active shutter glasses that worked quite a bit better. Rather than displaying a single image with both eye views superimposed and requiring filtering, shutter glasses cover one eye while the entire screen displays an image rendered for the other eye. That eye is then covered while the first is uncovered to see its own full resolution, full color image. When done right this produces amazing effects.
There are a couple of catches though. This process needs to happen super fast and super accurately. Anyone who has spent (or spends) hours staring at sub-60Hz CRTs knows that slow flicker can cause problems ranging from eye strain to migraines. So we need at least 60Hz for each eye for a passable experience. We also need to make absolutely certain that one eye doesn't see any of the image intended for the other eye. Thus, when building active shutter glasses, a lot of work needs to go into making both lenses able to turn on and off very fast and very accurately, and we need a display that can deliver 120 frames per second in order to achieve 60 for each eye.
Early shutter glasses and applications could work too slowly, delivering the effect with a side of eye strain, and getting really good results required a CRT that could handle 120Hz and glasses that could match pace. It also required an application built for stereoscopic viewing or a sort of wrapper driver that could make the application render two alternating images every frame. Rendering an extra image per "frame" also required realtime 3D software to be very fast. These and other technical limitations helped keep stereoscopic 3D on the desktop from taking off.
There is still a market today for active shutter glasses and stereoscopic viewing, though there has been sort of a lull between the production of CRTs and the availability of 120Hz LCD panels. And while LCDs that can accept and display a 120Hz signal are just starting to hit the market, it's still a little early for a resurgence of the technology. But for those early adopters out there, NVIDIA hopes to be the option of choice. So what's the big deal about NVIDIA's solution? Let's check it out.
Not Just Another Pair of Glasses: GeForce 3D Vision at Work
While the GeForce 3D Vision solution does include some pretty nice glasses, that's not where it ends. We'll start there though. The glasses are small and lightweight with polarized LCD lenses, and contain a battery, an LCD controller and an IR receiver. They charge over USB and can be plugged in to the system whether in operation or not. We haven't done a full battery life test, but NVIDIA claims about 40+ hours of operation on a single charge. We can say that we haven't needed to recharge in all our testing, which was a good handful of hours over a few days.
We were a little surprised at first that NVIDIA went with IR, but it makes a lot of sense from a battery life perspective. Though the glasses need line of sight to the transmitter, you need line of sight to the monitor to see it anyway, so it's not a huge deal. There can be issues with having a bunch of people in the same room using the glasses or if there are other IR devices transmitting around the room. We didn't have enough equipment to really push it till it broke, but it did stand up to throwing a Wii and a universal remote at it.
The transmitter connects to the PC via USB, or it can connect to a stereoscopic monitor with a standard stereo connector. The transmitter also has a wheel on it for adjusting the "depth" of the image (this actually adjusts the separation of the left and right images). This is fairly convenient: it can hook into multiple devices and be adjusted for the comfort of the user fairly quickly.
But that's not really the special sauce part. The real meat of the solution is in their driver, not the hardware. Aside from the fact that you need a 120Hz display that is.
As we mentioned, either an application needs to be developed for stereoscopic rendering, or it needs some external "help" from software. In third-party cases, this is a wrapper, but NVIDIA built it right into the driver. The advantage NVIDIA has is that they can go really low level if they need to. Of course, this is also their downfall to some degree, but I'll get to that later.
When rendering 3D games, a virtual "camera" is floated around the world at what is considered the viewer's eye position. This camera has a "look at" point that tells us where it's pointed. At the most brute force level (which really isn't done), we could take the camera for a particular game state and render two frames instead of one. For the first, we could move the camera a little left and the look at point a little right. For the next frame we could move the camera a little right and the look at point a little left. The point where the lines of sight of the two cameras cross is actually at screen depth. The game thinks it has just rendered one frame, but it has actually happened twice. The problem here is that we've got to rely on a high, consistent frame rate from the game, which just isn't going to happen with modern titles. But it helps illustrate what is going on: these two different camera views have to get to the screen somehow.
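For the curious, here is a minimal sketch of that brute-force two-camera idea. Everything in it is illustrative: the separation value, the renderScene stub and the vector helpers are our own assumptions for the sake of the example, not NVIDIA's implementation, which isn't public.

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)     { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 scale(Vec3 v, float s)  { return { v.x * s, v.y * s, v.z * s }; }

// Stand-in for the game's normal render call (hypothetical).
static void renderScene(const char* eye, Vec3 cam, Vec3 lookAt) {
    printf("%5s eye: cam (%+.3f, %.2f, %.2f) -> lookAt (%+.3f, %.2f, %.2f)\n",
           eye, cam.x, cam.y, cam.z, lookAt.x, lookAt.y, lookAt.z);
}

int main() {
    Vec3 cam         = { 0.0f, 1.7f, 0.0f };   // viewer's eye position
    Vec3 lookAt      = { 0.0f, 1.7f, -10.0f }; // point the camera faces
    Vec3 right       = { 1.0f, 0.0f, 0.0f };   // camera's right axis
    float separation = 0.065f;                 // assumed eye separation, world units
    float h = separation * 0.5f;

    // Left eye: camera nudged left, look-at nudged right (and vice versa
    // for the right eye) so the two lines of sight cross at screen depth.
    renderScene("left",  add(cam, scale(right, -h)), add(lookAt, scale(right, +h)));
    renderScene("right", add(cam, scale(right, +h)), add(lookAt, scale(right, -h)));
    return 0;
}
```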
Since the scene shouldn't change between rendering for each eye, we have the advantage that the geometry is consistent. All we need to do is render everything from two different positions, which means things can be sped up a good bit. NVIDIA tries to do as little as possible more than once, but they are also very uninterested in sharing exactly how they handle rendering two images. Once the rendered images have been placed in the frame buffer, the left and right eye images bounce back and forth in alternating frames. This means a slow frame rate from the game doesn't affect the stereoscopic effect. There is no flickering and none of the instant headache feeling that older solutions were prone to.
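NVIDIA won't say exactly how its driver schedules this, but here's a toy model of the behavior described above, with all numbers assumed: the display loop flips between the most recent left and right images at the panel's refresh rate, even when the game delivers new frames far less often.

```cpp
#include <cstdio>

int main() {
    // Simulate a 120Hz panel for 12 refreshes while the game only
    // completes a new stereo pair every 4th refresh (i.e. 30 fps).
    int latestPair = 0;  // id of the most recently rendered stereo pair
    for (int refresh = 0; refresh < 12; ++refresh) {
        if (refresh % 4 == 0) ++latestPair;  // game finishes a new pair
        const char* eye = (refresh % 2 == 0) ? "left " : "right";
        // Each eye still sees a steady 60Hz, regardless of game speed.
        printf("refresh %2d: show %s image of pair %d\n",
               refresh, eye, latestPair);
    }
    return 0;
}
```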
One of the downsides of whatever optimizations NVIDIA is doing is that some effects that don't have readily accessible depth information are not rendered correctly. This covers many post processing effects like motion blur, corona and bloom effects, some advanced shadowing implementations and certain lighting shaders, and some smoke, fire, water and other effects. Rendering the full scene twice could help alleviate some of these issues, but others just can't be fixed without a little extra help. And this is where NVIDIA comes through.
One of the biggest problems is knowing what settings to run at in a game so that the stereoscopic effects look as correct as possible. Because of NVIDIA's extensive game profiling for SLI, they are able to add additional profile information for stereo effects. A little information window pops up to tell you exactly what settings you need to worry about when you run a game. This window is enabled by default until you turn it off for a particular game. NVIDIA has also rated a bunch of games in terms of the experience with stereo effects which can help let people know what to expect from a particular game.
Beyond this, another common problem is rendering crosshairs or other sighting aids as a 2D sprite at the center of the screen rather than at the depth of the object behind it. Many games render the crosshairs at object depth, but many also render them at screen depth. Luckily, many games give you a way to disable the in-game crosshairs, and NVIDIA has provided their own set of stereoscopic crosshairs that render correctly. This is very helpful, as a 2D object at screen depth in the middle of the screen looks the same as if you were looking about 3 feet ahead of you while holding your finger up about 3 inches in front of your nose.
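As an aside, the parallax a depth-correct crosshair needs can be worked out from a simple relationship: zero at screen depth, approaching the full eye separation at infinity. The sketch below is our own illustration (the formula and the separation and screen-distance values are assumptions, not NVIDIA's code).

```cpp
#include <cstdio>

// On-screen horizontal parallax for an object at distance d, given eye
// separation e and convergence (screen) distance c. Zero at screen
// depth; approaches the full eye separation as d goes to infinity.
static float parallax(float e, float c, float d) {
    return e * (d - c) / d;
}

int main() {
    const float e = 0.065f;  // assumed eye separation in meters
    const float c = 0.7f;    // assumed viewer-to-screen distance in meters
    const float depths[] = { 0.7f, 2.0f, 10.0f, 1000.0f };
    for (float d : depths) {
        float p = parallax(e, c, d);
        // A depth-correct crosshair shifts half this amount per eye,
        // in opposite directions, instead of sitting at screen depth.
        printf("object at %7.1f m: parallax %+.4f m (%+.4f per eye)\n",
               d, p, 0.5f * p);
    }
    return 0;
}
```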
Leveraging their extensive work with developers, NVIDIA is also hoping to get new games to better support stereoscopic effects. While some of the anomalies are a result of NVIDIA's method rather than the developer, encouraging and assisting developers in implementing their desired effects in a stereo friendly way will help pave the way for the future. They can even get developers to include information that allows them to render some effects out of the screen. And this isn't cheesy theatre out of screen: this is in-your-face, feels-like-you-can-touch-it out of screen. Currently World of Warcraft: Wrath of the Lich King has some out of screen effects that are pretty sweet, but that really just leaves us wanting more.
The NVIDIA Experience, Look and Feel
Oh this is such a mixed bag. Much of this is going to be personal preference, so this feedback is mine combined with that of family and friends who came to check it out. I like numbers, but this really is more of an experience type of situation, and I'll do my best with it.
When it works it works really well and looks simply amazing. It's simple to adjust to a degree that is comfortable and doesn't cause huge amounts of eye strain. Because you do have to focus on objects at different depths, your eyes are working harder than when playing a normal game, and it forces you to do more looking at things rather than using your peripheral vision and just reacting like I often do when gaming. When it's done right (especially with out of screen effects) it fundamentally changes the experience in a very positive way.
But ...
In many games we tested there were some serious drawbacks. Even games where NVIDIA rated the experience as "excellent" we felt were subpar at best. Fallout 3, for example, had some ghosting effects that we couldn't fix, and it just didn't feel right. Most games with an excellent rating still require reducing some settings to a lower level, like Far Cry 2, where the lower quality shadows really take away from the experience. If anyone is going out of their way to buy a 120Hz LCD panel, a high end NVIDIA graphics card and a $200 bundle of active shutter glasses, they are not going to be happy when told to reduce any quality settings. But that's just how it is right now.
Other games, like Crysis Warhead, that received a rating of "good" were nothing short of unplayable with stereoscopic effects. Even turning down shadows, shaders, postprocessing, and motion blur and using NVIDIA's stereo crosshairs didn't help when there was any fire, smoke, explosion, or water anywhere around. When those effects pop up (which is all the time) everything goes to hell and you can't focus on anything. It just destroys the experience, and you get reduced image quality to boot. A great package.
NVIDIA has said that they are still working on the profiles and with developers to help improve the experience. They have been and are trying to get developers to add stereo friendly effects to their games through patches, but that's just not in the budget for some studios. Still, NVIDIA needs to be more realistic with their rating system. At this point, we would recommend taking a look at any game not rated excellent and just writing it off as something that won't offer a good experience. Then take the games rated excellent and assume you'll either have to disable some effects or live with some minor annoyance in a good many of them. For ratings to be taken seriously they need to be accurate, and right now they are just not telling the right story.
RTS games like Age of Empires, or games with a 3/4 view, tend to look the best to me. There is a fixed depth and you don't need to do a lot of refocusing, but the 3D really grabs you. It actually looks a bit like one of my daughter's pop-up books, but infinitely cooler.
First person shooters are sort of hit and miss, as one of the best looking games was Left 4 Dead, but large outdoor environments like in Fallout 3 can degrade the experience because of the huge difference in actual depth contrasted by the lack of stereoscopic depth at extreme distances: you can only go so deep "into" or "out of" the monitor, and big worlds just aren't accommodated.
Simulation games can look pretty good and Race Driver GRID worked well. It would be nice to keep shadows and motion blur, but the tradeoff isn't bad here. The depth actually helped with judging when to start a turn and just how close other drivers really were.
The two effects that stand out the best right now are the out of screen effects in World of Warcraft and the volumetric smoke and lighting in Left 4 Dead. In L4D, fire the pistol real fast and you can see the smoke pouring out of the barrel curl around as if it were really floating there. Properly done stereoscopic volumetric effects and out of screen effects add an incredible level of realism that can't be overstated. Combining those and removing all problems while allowing maximum image quality would really be incredible. Unfortunately there isn't anything we tested that gave us this satisfaction.
We do also need to note that, while no one got an instant headache, everyone who tested our setup felt a little bit of eye strain and slight pressure between the eyes after as little as 15 minutes of play. One of our testers reported nausea following the gaming session, though she happens to suffer from motion sickness, so that may have played a part in it. Of course, that's also very relevant information, as no one wants to take Dramamine before gaming.
Final Words
All in all, this is like a very polished version of what we've had since the turn of the century. No flicker, fewer headaches (though there may still be some issue for people who have motion sickness; I just don't have a large enough sample size to say definitively), and broad game support with less of a performance hit than other solutions. NVIDIA has a very good active shutter stereoscopic solution in GeForce 3D Vision. But the problem is that its value is still very dependent on the application(s) the end user wants it for. It works absolutely perfectly for viewing stereo images and 3D movies (which might be more of a factor when those start coming to Blu-ray) and applications built with stereo support. But for games, though it works with 350 titles, it's just a little hit or miss.
We really hate to say that, because we love seeing anyone push through the chicken and egg problem. NVIDIA getting this technology out there, getting developers excited about it, and getting publishers excited about a new market will ultimately make this a reality. But until most devs program in ways that are friendly to NVIDIA's version of stereo rendering, the gaming experience will be either good or not so good, and there's just no way of knowing how much each individual title's problems will bother you until you try it. And at $200 that's a bit of a plunge for the risk, especially if you don't have a 120Hz display device (which will cost several hundred more).
If you absolutely love a few of the games that work great with it, then it will be worth it. The problem is that NVIDIA's ratings make it so that you can't rely on "excellent" actually being excellent. Most of the people who played with Left 4 Dead loved it, but one person was really bothered by the floating names being 2D sprites at screen depth. Which is annoying, but the rest of it looked good enough for me not to care (and I'm pretty picky). If NVIDIA wants to play fast and loose with its ratings, that's fine, but we don't have time to test all their games and confirm their ratings or come up with our own. They really should at least have another class of rating called "perfect" where there are absolutely no issues, all settings work great, and we get exactly what we expect.
Shutter glasses have been around for a long time. Perhaps now the time is right for them to start pushing into the mainstream. But NVIDIA isn't doing the technology any favors if they put something out there and let it fail. This technology needs to be developed and needs to be pervasive, because it is just that cool. But until it works perfectly in a multitude of games, or until 3D movies start hitting PCs near you, we have the potential for a setback. If GeForce 3D Vision is successful, however, that will open the door for us to really move forward with stereoscopic effects.
What we really need, rather than a proprietary solution, is something like stereoscopic support built into DirectX and OpenGL that developers can tap into very easily. Relying on NVIDIA to discern the proper information and then handle rendering images for both eyes off of one scene is great as a stop gap, just like CUDA was a good interim solution before we had OpenCL. We need the API to be able to detect whether stereo hardware is present and to make it easy to generate images for both eyes while duplicating as little work as possible. Giving developers simple tools to make stereo effects cooler and more real, or to embed hints about convergence and separation, would be great as well.
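To make that concrete, here is a purely hypothetical sketch of what such an interface could look like. None of these types or calls (StereoCaps, queryStereoCaps, drawSceneStereo) exist in DirectX or OpenGL; they only illustrate the shape of the idea.

```cpp
#include <cstdio>

// Hypothetical API types: nothing like this exists in DirectX or
// OpenGL today; this only sketches the shape of the proposal.
struct StereoCaps  { bool available; float maxSeparation; };
struct StereoHints { float separation; float convergence; };

// The API would report whether stereo hardware is present...
static StereoCaps queryStereoCaps() {
    return { true, 0.08f };  // pretend glasses and a 120Hz panel are attached
}

// ...and accept one scene submission with per-eye hints, letting the
// driver duplicate as little work as possible behind the scenes.
static void drawSceneStereo(StereoHints hints) {
    printf("submit shared geometry once\n");
    printf("driver emits left/right views: separation=%.3f convergence=%.2f\n",
           hints.separation, hints.convergence);
}

int main() {
    StereoCaps caps = queryStereoCaps();
    if (caps.available)
        drawSceneStereo({ 0.065f, 0.7f });  // developer-chosen hints
    else
        printf("no stereo hardware: render mono as usual\n");
    return 0;
}
```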
And hopefully GeForce 3D Vision is a real step toward that future, one that can become viable right now. I could see some World of Warcraft devotees being really excited about it. Those out there like me who love 3D technology in every form will be excited by it. People who want to create their own stereo images or videos (there are lenses available for this and techniques you can improvise to make it work) will like it, but people waiting for 3D movies will need some content available at home first. But the guys who we would love to see drive the adoption of the technology might not be as into it: the hardcore gamers out there looking to upgrade will probably be better served at this point by going with a high end graphics card and a 30" display rather than a 120Hz monitor and shutter glasses.