Original Link: https://www.anandtech.com/show/938
GPU Shootout with Unreal Tournament 2003 - July 2002
by Anand Lal Shimpi on July 1, 2002 4:02 AM EST - Posted in GPUs
It has been a full six months since we started benchmarking with the latest build of Epic's Unreal Engine. When we first introduced the Unreal Performance Test benchmark, Epic had just announced Unreal Tournament II, and we were putting pressure on the hardware vendors to get drivers ready for the game's release later this year. In a matter of weeks, Epic will release the first public demo of Unreal Tournament 2003 (formerly UT II), which will stress your graphics card like no first-person shooter (FPS) you've ever played.
Until very recently, the limiting factor in the FPSes we all played was memory bandwidth. If you remember when the original GeForce 256 launched, the biggest complaint was its lack of DDR memory. The measly 2.7GB/s of memory bandwidth was easily saturated at higher resolutions, and thus the demand for higher bandwidth memory solutions was born.
Since then, ATI and NVIDIA have both improved their GPUs considerably to offer as much memory bandwidth as possible. They have focused on efficiency by introducing Z occlusion culling technologies, like ATI's HyperZ and NVIDIA's Visibility Subsystem, which discard pixels that will never be seen by the user before they are sent through the texturing pipelines. They have also worked with memory vendors to bring extremely high speed DDR SDRAM to the mainstream graphics market, with the next generation of GPUs using 350-500MHz DDR SDRAM (effectively 700MHz-1GHz). And both companies have improved their memory controller and cache designs to maximize the throughput of their GPUs.
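To put these figures in perspective, peak theoretical memory bandwidth is just the bus width multiplied by the effective data rate. The sketch below (the function name is ours, using the GeForce 256 and DDR numbers cited above) shows where figures like 2.7GB/s come from:

```python
# Peak memory bandwidth = bus width (bytes) x effective data rate.
# Figures are illustrative, taken from the specs cited in this article.

def peak_bandwidth_gbps(bus_width_bits: int, clock_mhz: float, ddr: bool) -> float:
    """Peak theoretical bandwidth in GB/s (1 GB = 10^9 bytes)."""
    effective_mhz = clock_mhz * (2 if ddr else 1)  # DDR transfers data twice per clock
    return (bus_width_bits / 8) * effective_mhz * 1e6 / 1e9

# Original GeForce 256: 128-bit bus, 166MHz SDR -> roughly the 2.7GB/s quoted above
print(peak_bandwidth_gbps(128, 166, ddr=False))  # ~2.66 GB/s

# Next-generation parts: 128-bit bus, 350-500MHz DDR (700MHz-1GHz effective)
print(peak_bandwidth_gbps(128, 350, ddr=True))   # ~11.2 GB/s
print(peak_bandwidth_gbps(128, 500, ddr=True))   # ~16.0 GB/s
```

A 256-bit bus like the Parhelia's simply doubles the first factor, which is why its raw numbers look so impressive on paper.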
While memory bandwidth has been getting more plentiful, GPUs have grown more powerful. The next generation of GPUs will offer much more programmability and flexibility, allowing developers to truly utilize their potential. Following this trend, the next generation of 3D games won't be entirely memory bandwidth limited. The need for a GPU to run through tens of shading instructions to produce a single pixel will shift games from being overly dependent on memory bandwidth to a balance between GPU power and memory bandwidth.
Although Unreal Tournament 2003 doesn't make any extensive use of complex shader programs, it does represent the next paradigm of games with a modern 3D engine. Between now and Doom 3, there will be few games that can match the visual appeal of UT2003 and its engine.
With all of that said, today we will be bringing you an update to the first article we did using the Unreal Performance Test back in January. We've rounded up a total of 19 GPUs and benchmarked each and every one of them under a 6/28/2002 build of Unreal Tournament 2003 that Epic produced for our use.
The Benchmarks
We first introduced the Unreal Tournament 2003 benchmark in our recent Parhelia review; the build we're showing off today doesn't differ much from what we demonstrated there.
We tested at five resolutions, each under two different detail settings. We chose the highest detail setting offered by the game, with everything set to the maximum level to provide the best possible image quality. We also chose a medium detail setting, which turned off detailed textures, turned on 16-bit color, lowered the texture detail and turned off deco layers. The reason for the medium detail setting was to compare those cards that weren't playable at the higher settings.
We ran all of those benchmarks under two levels: DM-Antalus and DM-Asbestos. We used the Antalus deathmatch level in our Parhelia review, but the Asbestos level is a newcomer to our reviews. The main difference between the two levels is that Antalus is a mostly outdoor map while Asbestos is entirely indoors and thus provides much higher frame rates.
We also had Epic whip up a test that would specifically isolate and test memory bandwidth over anything else. The reason we wanted a purely memory bandwidth test was to put numbers to John Carmack's latest words about Matrox's Parhelia:
"The performance was really disappointing for the first 256 bit DDR card. I tried to set up a "poster child" case that would stress the memory subsystem above and beyond any driver or triangle level inefficiencies, but I was unable to get it to ever approach the performance of a GF4. " - John Carmack, 6/25/2002 .plan update
The test we came up with was a large room that was tiled with three 2Kx2K textures, uncompressed, across the height and length of all of the walls. The end result wasn't a pretty level at all, but it was perfect for stressing the memory subsystem of the cards. We couldn't run this test on all of the cards, as those with only two rendering pipelines (and two texture units per pipeline) or less than 64MB of memory either wouldn't complete the test or would swap to main memory far too often.
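The memory footprint of that test is easy to work out. Assuming uncompressed 32-bit RGBA texels and ignoring mipmaps (our assumptions, not Epic's), the three 2Kx2K textures alone account for 48MB, which helps explain why cards with less than 64MB of memory ended up swapping:

```python
# Footprint of the three uncompressed 2048x2048 wall textures used in the
# memory bandwidth test. Assumes 32-bit RGBA texels and no mipmap chain;
# mipmaps would add roughly another third on top of this.

def texture_bytes(width: int, height: int, bytes_per_texel: int = 4) -> int:
    return width * height * bytes_per_texel

single = texture_bytes(2048, 2048)       # 16MB per texture
total_mb = 3 * single / (1024 * 1024)    # all three textures
print(total_mb)  # 48.0
```

Add the frame buffer and Z-buffer on top of that 48MB and a 32MB card has no hope of keeping everything in local memory.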
In order to make this comparison a manageable task, we limited the tests to a single system. If you'd like, however, we will spend time this week running our CPU scaling tests on a number of the cards; as usual, your wish is our command, so just let us know.
The Contenders
Before we get to what GPUs were included in this comparison, let's talk about those that could not be included.
Although Epic has the game working with the Voodoo3, Voodoo4 and Voodoo5 series of chips, the current benchmark does not work with them. This means that we could not benchmark any of the former 3dfx offerings, although owners shouldn't worry, as the game itself will work with their cards.
The world's first AGP 8X solution, SiS' Xabre 400, would not work with the benchmark either. SiS is in contact with Epic and is working on fixing the situation through drivers, but unfortunately its developer relations team is definitely not up to par with ATI's and NVIDIA's in terms of response time.
With those exceptions made, here's a list of all of the contenders that made it into today's comparison:
ATI Radeon 7500 (64MB)
ATI Radeon 8500 (128MB)
ATI Radeon 8500 (64MB)
ATI Radeon 8500LE (128MB)
Matrox Parhelia (128MB)
NVIDIA GeForce2 MX 200 (32MB)
NVIDIA GeForce2 MX 400 (32MB)
NVIDIA GeForce2 Pro (64MB)
NVIDIA GeForce2 Ultra (64MB)
NVIDIA GeForce3 (64MB)
NVIDIA GeForce3 Ti 200 (64MB)
NVIDIA GeForce3 Ti 500 (64MB)
NVIDIA GeForce4 MX 440 (64MB)
NVIDIA GeForce4 MX 460 (64MB)
NVIDIA GeForce4 Ti 4200 (128MB)
NVIDIA GeForce4 Ti 4200 (64MB)
NVIDIA GeForce4 Ti 4400 (128MB)
NVIDIA GeForce4 Ti 4600 (128MB)
ST Micro Kyro II (64MB)
Drivers
As you might expect, we didn't have any problems with NVIDIA's latest Detonator drivers (29.42) and the latest build of UT2003.
Surprisingly enough, Matrox's latest Parhelia drivers actually worked better than ATI's CATALYST drivers under UT2003. The problem with ATI's publicly available CATALYST drivers is that Detailed Textures aren't properly supported; they won't be rendered, and anywhere they are used you'll run into annoying flashing textures. ATI has fixed the issue internally, and it looks like the latest leaked 7.73 drivers contain the fix as well. We're hoping that the fix will make it into an official CATALYST release before the official release of the UT2003 demo.
We didn't run into any problems with the Kyro II drivers either; we used version 15.084 from Hercules' website.
The Test

Windows XP Professional Test Bed

| Hardware | Configuration |
|---|---|
| CPU | AMD Athlon XP 2100+ (1.73GHz), 133.3MHz x 13.0 |
| Motherboard | EPoX 8K3A+, BIOS revision 8k3a2328 (3/28/2002), VIA KT333 chipset |
| RAM | 1 x 256MB DDR266 CAS2 Corsair DIMM |
| Sound | None |
| Hard Drive | 80GB Maxtor D740X |
| Video Cards (Drivers) | ATI Radeon 7500 (64MB) - v7.73 |
Memory Bandwidth Comparison
Just for kicks we wanted to start off with our homemade memory bandwidth test to compare some of the more bandwidth-happy solutions:
As proof that this test is entirely memory bandwidth limited, we decreased only the memory clock speed of a GeForce4 and found that the test scales almost perfectly linearly with memory bandwidth; you can see examples of this when comparing the four GeForce4 cards to one another.
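That linear relationship can be expressed as a simple proportional model. The numbers below are hypothetical, purely to illustrate the kind of scaling we observed:

```python
# If a test is purely memory bandwidth limited, frame rate should scale
# linearly with effective memory clock. A toy model with made-up numbers:

def predicted_fps(baseline_fps: float, baseline_mem_mhz: float,
                  new_mem_mhz: float) -> float:
    """Scale a baseline frame rate by the ratio of memory clocks."""
    return baseline_fps * (new_mem_mhz / baseline_mem_mhz)

# e.g. underclocking a hypothetical card's memory from 650MHz to 500MHz effective
print(predicted_fps(60.0, 650, 500))  # ~46.2 fps
```

The closer a card's measured result tracks this prediction, the more confident we can be that nothing but memory bandwidth is holding it back.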
It is interesting to note that the GeForce3 Ti 500, despite its lower memory bandwidth, actually ends up being faster than the Ti 4400 in this test. Without knowing the inner workings of the GeForce3 GPU vs. the GeForce4 GPU, we hypothesize that the GeForce3 could have larger internal caches that give it the performance edge here. In making room for the second vertex shader unit, nView and all of the additional features of the GeForce4, it is quite possible that NVIDIA reduced the GeForce4's cache sizes relative to the GeForce3's; larger caches would help tremendously in this benchmark.
The Radeon 8500 doesn't do well in this benchmark at all, despite having a competitive amount of memory bandwidth. But the main reason we pursued this test was to see how the Parhelia performed; in this case, the 27.6 fps result is an indication that its 256-bit memory bus isn't doing much for the card at all.
Let's move on to the actual game tests...
640x480 - DM-Antalus
At 640x480, the cards that are already running out of juice definitely can't handle the higher detail settings. Although the game looks great at 640x480, in order to combat the aliasing effects of a high polygon-count game like UT2003 you'll want to move to a higher resolution.
Lowering our standards a bit proves to be useful; the GeForce2 MX 400 is definitely playable now. We'll ignore the ungodly frame rates that the faster cards are getting; there's no reason for any GeForce3 or higher user not to be using higher detail settings.
800x600 - DM-Antalus
The standings don't change much at 800x600, but you can already tell that the GeForce4s will hold a significant lead over the competition at higher resolutions as well. The Parhelia remains quite competitive, but it's definitely not earning its $400 price tag for the 128MB card; the $150 Radeon 8500LE comes within 2 fps of Matrox's solution. That said, the Parhelia is Matrox's first gaming GPU, and being competitive at all is a feather in Matrox's cap.
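One way to make that value argument concrete is to compute frames per dollar from the street prices quoted here. The fps figures below are placeholders for illustration, not our measured results:

```python
# Frames per dollar, using the street prices quoted in this article.
# The fps numbers are hypothetical placeholders, not measured data.

def fps_per_dollar(fps: float, price: float) -> float:
    return fps / price

cards = {
    "Matrox Parhelia 128MB ($400)": fps_per_dollar(60.0, 400),  # hypothetical 60 fps
    "ATI Radeon 8500LE ($150)":     fps_per_dollar(58.0, 150),  # hypothetical 58 fps
}
for name, value in cards.items():
    print(f"{name}: {value:.3f} fps/$")
```

Even with the Parhelia slightly ahead in raw frame rate, the 8500LE delivers well over twice the performance per dollar in this sketch.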
Again we see that even the slower cards become playable when the texture details are reduced a bit, although in doing so you lose some of the beauty of the game.
1024x768 - DM-Antalus
Now that we're getting into some serious resolutions you'll see that there was definitely a reason to shell out the big bucks for a GeForce4 Ti 4600; not only can the card run any of today's games at very high frame rates, but it will also be able to run Unreal Tournament 2003 with all of the detail settings maxed out at close to 100 fps.
The Ti 4200 doesn't do badly at all for a $199 card, holding tightly onto third and fourth place. The Radeon 8500 is a good distance behind the GeForce4s, but ATI has no reason to worry; the R300 will put everything on this chart to shame.
The GeForce4 MX 460 is tied with the GeForce3 Ti 200, which perfectly echoes Carmack's statements that the GeForce3 or Radeon 8500 will be better performers than the GeForce4 MX for Doom 3; the same is definitely true for Unreal Tournament 2003.
The once mighty GeForce2 is definitely playable at 1024x768, provided that you turn down some of the detail settings; as you might expect, the rest of the cards get faster as well.
1280x1024 - DM-Antalus
Unreal Tournament 2003 looks incredible at 1280x1024 but even the fastest GeForce4 dips below 60 fps when running at this resolution. For smooth gameplay you're going to want to either turn down some of the detail settings or stick to 1024x768.
Turning down the details definitely helps, but it really only makes 1280x1024 a reasonable resolution for the cards that are already pretty fast.
1600x1200 - DM-Antalus
The days of seeing 150fps+ scores in Quake III at 1600x1200 are over; it will take another couple of generations of GPUs before we see that sort of performance here.
As usual, going to medium detail settings helps a bit, at the sacrifice of image quality.
640x480 - DM-Asbestos
Being an entirely indoor level, the Asbestos test obviously produced much higher frame rates.
Just to see how high we could get them, we included the medium detail settings at 640x480. In this mode, even the GeForce2 MX 200 broke 100 fps.
800x600 - DM-Asbestos
As we up the resolution, you can start to see distinct performance groups emerge, again with the GeForce4s at the top, indicating that they'll pull even further ahead as the resolution goes up.
Because this is an indoor map, and definitely not the most complex of all the UT2003 levels, we end up with some fairly high performance figures here. For most cards, there's very little change between 800x600 and 640x480.
1024x768 - DM-Asbestos
1024x768 is clearly the sweet spot for a number of these cards, with the GeForce4s completely dominating. It's clear that Epic has worked very closely with NVIDIA to make sure that the game is as optimized for their architecture as possible.
Once again, we see that at the sacrifice of texture fidelity you can get very smooth frame rates at 1024x768.
1280x1024 - DM-Asbestos
Because Asbestos is an indoor level, we don't see the same drop in performance that we saw at 1280x1024 under DM-Antalus. Given that the Antalus test is the worst performing of the four we've run under UT2003, some GeForce4 owners may find that 1280x1024 is a comfortable resolution here with all of the detail settings maxed out.
It's also interesting to note that the 64MB cards aren't penalized too heavily because of aggressive texture compression in the levels.
1600x1200 - DM-Asbestos
Finally, we have 1600x1200, which is torture for virtually all of these cards at the high detail settings.
Turning down the details makes the benchmark almost Quake III-like at 1600x1200.
Final Words
With a number of upcoming games based on the latest Unreal Engine, performance under this benchmark is extremely important. The numbers we've shown you today are more important than any Quake III score or any number of 3DMarks because they are produced by an engine that will be the backbone of many next-generation games to come; the obvious ones being Unreal II and, of course, Unreal Tournament 2003, but also games like Deus Ex II.
What have we learned from the hours of endless benchmarking? For starters, the GeForce4 has proven to be a wise investment for anyone who has already purchased one. Current GeForce4 owners can rest assured that their cards will be able to play the next generation of FPSes at very high frame rates. However, we'd caution anyone looking to upgrade in the coming months; ATI's R300 is on the horizon and, from what we're hearing, will be a formidable opponent for the GeForce4 Ti 4600. Then there's the elusive NV30; provided that it ships on time by the end of this year, we'll be seeing some impressive scores from the NVIDIA camp as well.
Without considering its price tag, the Parhelia isn't too bad as Matrox's first new attempt at 3D graphics in two years. But once you take into account the $400 you'll have to shell out for it, hardcore FPS gamers will find little reason to go for the Parhelia. Matrox definitely needs a lower cost version of the Parhelia to gain market share among gamers, and it will unfortunately take more than a 64MB part to do that; maybe a single-headed card or some other feature-reduced implementation would work for users who are mainly interested in price/performance rather than features like Surround Gaming.
We're still in a situation where the hardware is ahead of the software, which is definitely good for the end user that's concerned with performance. But it is good to know that we're finally getting games that can give our beloved GPUs more of a workout than what we've been throwing at them for months...
Special thanks to Daniel Vogel of Epic Games for spending his time with us on making this benchmarking comparison as thorough and accurate as possible. And of course a big thanks to everyone at Epic for making such a great looking game.