Original Link: https://www.anandtech.com/show/664
Introduction
We have seen the CPU industry go from a market dominated by a single manufacturer to a much more competitive arena with two manufacturers struggling for clear dominance. We can also say without a doubt that the video card market of 1999/2000 resembled the Intel-dominated CPU industry of yesterday, except headed by NVIDIA.
Fortunately, things are changing. While NVIDIA is on track to gain even more control of the market over the next year, the market itself will be getting much more competitive. At this year’s Fall Comdex we visited all four of the major video chipset manufacturers, 3dfx, ATI, Matrox and NVIDIA, and discussed their future products as well as their competitive strategies for 2001.
Without further ado let’s pay a quick visit to 3dfx.
3dfx: Learning from mistakes
Our first graphics-related meeting took us to 3dfx, the company that was once considered the king of 3D graphics and is now ridiculed for not being in touch with the future of the industry.
3dfx had very little to show off at their suite; they aren’t going to repeat the mistake they made at last year’s Comdex of hyping up their next-generation product without a guarantee that it would be on store shelves on time. This is definitely a step in the right direction for 3dfx.
The Voodoo5 delay hurt 3dfx quite a bit; had the card launched at last year’s Comdex, when it was announced, we would be dealing with a very different situation today. If you look back at our original Voodoo5 review, the card would have been a perfect competitor to the GeForce DDR. Against the GeForce2 GTS, however, especially with the latter now priced below $200, the comparison is clearly in NVIDIA’s favor.
Needless to say, 3dfx has learned from their mistakes, and they will definitely not be talking about their next-generation product, codenamed Rampage, until very close to its official launch. In spite of this, we did gain quite a bit of information from our meeting with 3dfx.
The main theme of our meeting with 3dfx was that the company has done a serious amount of reevaluation internally. This was obviously necessary as there were definitely some decisions made over there in the past year or two that have put them in the situation they are in today.
The influx of new executive staff members into the company, at least according to 3dfx representatives, seems to have produced much more of a focus on regularly reevaluating the company’s direction. In the “old” 3dfx, there was very little evaluation of direction, and that translated into a number of features being executed without much thought to how the public would receive them.
The biggest of these, of course, is FSAA and, more generally, the T-Buffer effects that 3dfx touted so much over the past year and a half. As the company stands today, there will still be a focus on T-Buffer and FSAA; however, they will “quit worrying about features that are great.” We took that to mean that they will actually focus on what the public is asking for instead of giving the market a feature that wasn’t specifically demanded.
In spite of this, 3dfx is promising that their Rampage will feature an improved FSAA implementation, indicating that they really are focused on FSAA as a feature and aren’t willing to drop it just because the public response to their T-Buffer technology wasn’t as great as they anticipated.
All this talk about Rampage does bring up a few questions, the most obvious being: when can we expect to see it? While 3dfx is understandably reluctant to set an actual date, so as to avoid a repeat of the Voodoo4/5 launch, they are sticking to their 6 – 9 month product cycle. If we take the Voodoo5, which hit store shelves in June, as the starting point, we can expect to see the Rampage as early as January of next year or as late as March/April of 2001.
The Rampage will be 3dfx’s first DirectX 8 part, and they are quite confident that it will be the most feature-rich DX8 part on the market at its release. While this is a bold claim from 3dfx, the fact of the matter is that, with the exception of Matrox, all three remaining manufacturers (3dfx, ATI and NVIDIA) promised that they would have the most feature-rich DX8 part, and all three cited their incredible relationship with Microsoft as the reason.
As far as technology is concerned, the Rampage will not be using the Tile Rendering technology 3dfx acquired from Gigapixel in July of 2000. We have heard the explanation behind this quite a few times, but it bears repeating: the Rampage was simply too far along in its development to incorporate any of Gigapixel’s technology at the time of the acquisition. This may unfortunately hurt 3dfx quite a bit, as we will have to wait another 6 – 9 months after the Rampage’s release before we see what we really want to see from 3dfx: a solution that makes use of Gigapixel’s Tile Rendering architecture.
Compared to the tile-based rendering architecture we saw with the Kyro, 3dfx states that their solution will be in a completely different league. They also told us that they are very confident that any sort of Hidden Surface Removal or HyperZ-like technology NVIDIA could bring to the table wouldn’t even begin to match up to what their Gigapixel technology will offer.
Unfortunately for 3dfx, the fact that Gigapixel’s technology won’t surface until 6 – 9 months after the release of the Rampage, while ATI and NVIDIA will both have had a chance to improve their HyperZ and Hidden Surface Removal technologies, may make the release of their first Gigapixel-based product too little, too late.
What 3dfx needs to hope for is that the performance improvements offered by ATI’s next-generation DX8 part and NVIDIA’s NV20 are incremental enough that the Rampage can hold its own, giving their next product, based on Gigapixel’s technology, a chance to compete. If, however, 3dfx can’t compete early next year when the rest of the DX8 parts hit the streets, it will be very difficult to pull the company out of that hole.
In terms of 3dfx’s more immediate future, they have, as you probably have already heard, decided to go back to their roots in a sense. 3dfx will move away from being a card manufacturer and go back to mainly producing chips; however, unlike the first time around, there are some restrictions attached to the companies that will be allowed to produce cards based on 3dfx chips. 3dfx will probably require manufacturers to sell their cards under the Voodoo name, but the fact that they will be allowing other manufacturers to produce 3dfx cards at all is a step in the right direction.
If you remember back to 3dfx’s acquisition of STB, where they effectively became a board manufacturer as well as a graphics chip manufacturer, this move almost completely nullifies the 3dfx/STB acquisition. It kind of makes you wonder if 3dfx actually got anything out of the STB deal other than maybe the technology to produce their VoodooTV.
With all this talk about reevaluation, it isn’t a surprise that the extremely delayed Voodoo5 6000 will never make it to the retail market. Instead, 3dfx has licensed it out to Quantum3D, who will produce the card for use in professional 3D applications such as simulators. Unlike other Quantum3D cards, this one will be virtually impossible to find for purchase on its own in the retail market.
Quantum3D's 8-way Voodoo5 SLI will be found in systems like the one pictured, priced at $14,995
This is a respectable decision on 3dfx’s part, and as you’ll soon see, it’s the same decision ATI made with the Radeon MAXX.
ATI: Building up Momentum
After coming out of the blue with an extremely well-rounded Radeon part, ATI has all of a sudden been tossed back into the game and is currently in a position where it can go head to head with the market giant, NVIDIA.
As we alluded to in our analysis of 3dfx’s Voodoo5 6000 cancellation, ATI will not be making a Radeon MAXX (dual-Radeon board). The main reason behind this is that the Radeon MAXX would end up in the same price range as the GeForce2 Ultra, right where the Voodoo5 6000 would have been as well.
This, however, doesn’t mean that we will never see another MAXX product from ATI. ATI told us that they are working on increasing the efficiency of their MAXX technology and we should expect to see it resurface in the future; the Radeon as it stands now, however, will not be paired up with another chip and released as a MAXX product.
ATI just recently announced their Radeon Value Edition (Radeon VE for short) which is directed at the same market as the GeForce2 MX and Matrox’s G450.
The Radeon VE differs from the regular Radeon in that it only has a single pixel pipeline versus the two pixel pipelines of the regular Radeon. This cuts the effective fill rate of the Radeon in half for the VE; with each Radeon pipeline able to apply three textures per clock and the core operating at 183MHz, we can expect a theoretical maximum fill rate of 550 MTexels/s, which gives it a slight fill rate advantage over the GeForce2 MX (480 MTexels/s).
Since the Radeon design requires that the core and memory clocks run synchronously, the Radeon VE will also ship with 183MHz memory. In order to keep the size of the Radeon VE chip down, thus reducing cost, ATI will be using a 64-bit DDR memory bus, which is equivalent in bandwidth to the 128-bit SDR bus of the Radeon SDR.
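For those who want to check the math, here’s a quick back-of-the-envelope sketch in Python of how these theoretical numbers fall out. The three-textures-per-pipeline figure is the regular Radeon’s design, and the clocks are the ones quoted above; everything else is simple multiplication.

```python
# Back-of-the-envelope theoretical numbers for the Radeon VE.

def fill_rate_mtexels(core_mhz, pipelines, textures_per_pipe):
    """Peak fill rate in MTexels/s: clock x pipes x textures per pipe."""
    return core_mhz * pipelines * textures_per_pipe

def bandwidth_gbs(bus_bits, clock_mhz, ddr=False):
    """Peak memory bandwidth in GB/s: bus width x transfers per second."""
    transfers_per_sec = clock_mhz * 1e6 * (2 if ddr else 1)
    return (bus_bits / 8) * transfers_per_sec / 1e9

# One pipeline, three textures per clock, 183MHz core -> ~550 MTexels/s
print(fill_rate_mtexels(183, 1, 3))        # 549

# A 64-bit DDR bus moves exactly as much data per clock as 128-bit SDR
print(bandwidth_gbs(64, 183, ddr=True))    # ~2.9 GB/s
print(bandwidth_gbs(128, 183))             # ~2.9 GB/s
```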
So the Radeon VE is doomed from the start to never outperform the Radeon SDR, because the two have essentially the same amount of memory bandwidth. Once you take into account that the Radeon VE also has roughly half the fill rate of the Radeon SDR, you can see how the Radeon VE can be slower than even the Radeon SDR.
If you think back to our Radeon SDR review, you’ll remember that it is approximately the same speed as the GeForce2 MX: a bit slower at lower resolutions and faster at higher ones. With the Radeon VE having a lower fill rate than the Radeon SDR, you can expect it to perform around 10% slower than a GeForce2 MX.
This is obviously not the most attractive performance point for ATI; however, they are attempting to offer performance close to that of the GeForce2 MX while undercutting NVIDIA on price. The Radeon VE will have a selling price of $129.99, which places it under the price of a GeForce2 MX with TwinView support. Why would ATI compare the Radeon VE to a GeForce2 MX with TwinView?
The Radeon VE will be ATI’s first graphics product to offer their dual-output technology, which they call HydraVision, along the lines of Matrox’s DualHead and NVIDIA’s TwinView. The HydraVision name comes from the software behind their dual-monitor output technology, which they are very confident in. Offering many of the same features as DualHead and TwinView, HydraVision on a sub-$130 Radeon VE should make it quite competitive with both of those solutions, even if it’s 10% slower than the GeForce2 MX.
If it were available in stores today, the Radeon VE could potentially offer the GeForce2 MX some serious competition; unfortunately, we won’t see it in stores until February 2001. By that time, a $129.99 price tag won’t be very impressive at all: performance enthusiasts will be able to pick up a GeForce2 GTS for a little more than that, and if there’s a need for dual-monitor support, the GeForce2 MX will probably be available for the same price if not less.
Fun on the Road
On the mobile end, ATI will be debuting a new Mobility chip in the first quarter of next year that will be based on the Radeon core. This time around, ATI won’t be announcing the Mobility until it is actually shipping in laptops, preventing a repeat of the Mobility 128 launch, which occurred six months before the chip actually made its way into notebooks.
As you’ll soon see, the performance of the new Mobility chip will come down to what type of memory bus is implemented, since the chip will be heavily memory bandwidth limited. What could be very interesting is if ATI takes the same approach to memory bandwidth with the Mobility Radeon as they did with the Mobility 128.
Thinking back to our Mobility 128 review, ATI offered a single Mobility 128 solution with 8MB of SDRAM integrated into the Mobility 128 package (read: not on-die, but on-package) and a 64-bit data path to it. This ended up saving notebook manufacturers quite a bit of cost, since they didn’t have to worry about routing traces to an external frame buffer; at the same time, the design allowed manufacturers to add an external 8MB of SDRAM that extended the memory bus to 128 bits while doubling the memory size.
If ATI implements a similar approach to memory bus configurations with the Mobility Radeon, they could theoretically include a 32-bit DDR bus to the on-package SDRAM, with the option of extending the bus width to 64 bits if an external frame buffer is implemented as well.
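To put some rough numbers on that idea, the sketch below assumes a purely hypothetical 183MHz memory clock (ATI hasn’t announced clock speeds for the Mobility Radeon) and shows why the external frame buffer would matter so much: doubling the bus to 64-bit DDR would put the part in the same bandwidth ballpark as the Radeon SDR’s 128-bit SDR bus.

```python
# Hypothetical Mobility Radeon memory configurations. The 183MHz memory
# clock is an assumption; ATI hasn't announced actual clock speeds.

def bandwidth_gbs(bus_bits, clock_mhz, ddr=False):
    """Peak memory bandwidth in GB/s: bus width x transfers per second."""
    transfers_per_sec = clock_mhz * 1e6 * (2 if ddr else 1)
    return (bus_bits / 8) * transfers_per_sec / 1e9

print(bandwidth_gbs(32, 183, ddr=True))   # on-package only:        ~1.5 GB/s
print(bandwidth_gbs(64, 183, ddr=True))   # with external buffer:   ~2.9 GB/s
print(bandwidth_gbs(128, 166))            # Radeon SDR (reference): ~2.7 GB/s
```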
This could potentially make the Mobility Radeon as fast as the Radeon SDR which would definitely be more than enough for mobile applications. It will be interesting to see what implementations are actually provided when the Mobility Radeon does launch, but as we said before, the amount of memory bandwidth could make or break the Mobility Radeon and could very well determine its performance level in comparison to NVIDIA’s mobile solution.
The final point of discussion with the Mobility Radeon is its power consumption, a topic ATI would not say much about beyond the fact that they will be focusing on power consumption and making sure that it’s very competitive with NVIDIA’s solution.
The Art of ATI
ATI’s acquisition of ArtX back in February left them with quite a bit of interesting technology. We brought you pictures of Nintendo’s new GameCube from ATI’s booth in our Fall Comdex 2000 Summary; ATI was demonstrating the GameCube because the technology behind it is driven by a highly integrated ATI/ArtX graphics processor.
With the Gamecube design already complete, the ArtX team has been working on bringing this technology down to the PC level as well.
Another project that ATI’s ArtX team has been working on is a PC North Bridge with integrated graphics. The first product from ATI to use this technology is the S1-370TL. The S1-370TL is a North Bridge for Slot-1/Socket-370 CPUs that features integrated ArtX graphics aimed at the value PC market. It also happens to be the first integrated graphics solution with hardware T&L support, hence the TL in the S1-370TL product name.
Since the S1-370TL is based on a UMA architecture, it shares system memory for its frame buffer, meaning that by default it has a 64-bit data path to your system memory. However, the North Bridge also supports a 128-bit data path to system memory, which means much more than just faster video memory, since your video memory is your system memory.
With a 128-bit data path to system memory, the S1-370TL has the same amount of bandwidth as a DDR SDRAM solution while still using regular SDRAM. The only requirement in this case is that you install SDRAM in pairs, since each module only has a 64-bit interface.
In terms of video performance, the S1-370TL’s integrated video was never intended to be a high-performance solution. The graphics core features four pixel pipes clocked at 83MHz, resulting in a 330 MPixels/s fill rate. While ATI hasn’t published anything on textured pixel fill rate, we can assume that the core is only capable of processing one texture per pixel pipeline, meaning it will most likely have a 330 MTexels/s fill rate as well. The T&L engine on the S1-370TL is rated at 12.5 million polygons per second.
As a North Bridge, the S1-370TL supports 66/100 and 133MHz FSB frequencies, and the same frequencies for the memory bus. With a 128-bit memory interface running at 133MHz, as we mentioned before, the S1-370TL has the same amount of memory bandwidth as a PC2100 DDR solution: 2.1GB/s. This could help overall system performance significantly; you can expect to see a 10 – 20% boost in performance over regular PC133 North Bridge designs.
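The arithmetic behind those figures is straightforward; here’s a small sketch using the clocks and bus widths quoted above (the fill rate line simply multiplies pipes by core clock):

```python
# S1-370TL theoretical numbers, using the figures quoted above.

def bandwidth_gbs(bus_bits, clock_mhz, ddr=False):
    """Peak memory bandwidth in GB/s: bus width x transfers per second."""
    transfers_per_sec = clock_mhz * 1e6 * (2 if ddr else 1)
    return (bus_bits / 8) * transfers_per_sec / 1e9

print(bandwidth_gbs(128, 133))            # paired PC133 SDRAM:         ~2.1 GB/s
print(bandwidth_gbs(64, 133, ddr=True))   # PC2100 DDR, for comparison: ~2.1 GB/s

# Four pixel pipes at 83MHz -> ~330 MPixels/s
print(4 * 83)                             # 332
```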
Matrox leaves the gaming industry…for now
Since the release of the G400 chip well over a year ago, Matrox has been extremely quiet. Their most recent product release, the G450, was not much of an improvement over the old G400, as it couldn’t even outperform the G400 in many cases.
The only real benefits the G450 offered were that it was a cheaper, much more integrated solution and, as we discovered in our Millennium G450 under Linux article, a great card for Linux users. The DualHead support of the G450 is still quite strong; however, the card is severely lacking in performance, and citing 2D image quality and 2D performance just won’t cut it anymore.
So what are Matrox’s plans for the future? Unfortunately for you gamers out there, you can expect them to pull back considerably from the gaming market. In fact, their roadmap for the majority of 2001 features products based on the G450, G400 and even the G200 cores, all of which are targeted at home/office, corporate, and professional users, not gamers.
Over the next few months Matrox will be promoting the G450 quite a bit as it rises to become their flagship product, putting Matrox in a position to compete in the corporate market once again. Unfortunately, with ATI and NVIDIA both releasing solutions that rival the G450’s DualHead support while trampling it in 3D performance, you have to wonder whether Matrox can even stay in the market much longer.
While there’s definitely a desire to know more about the G800 that you’ve all heard rumors about, there’s very little we’re allowed to tell you. Looking at the release of DX8 parts next year, you can expect the most heated competition to be between 3dfx, ATI and NVIDIA; it doesn’t seem like we can expect much in terms of gaming performance from Matrox for quite a while.
According to Matrox, they don’t plan on leaving the gaming market behind; for now they will be focusing on the G450, only to come back later with a more gaming-oriented product. By that time it may be too late for Matrox, as the market is becoming increasingly competitive and it will be very hard for them to bounce back. It doesn’t seem as if Matrox has much of a choice right now, so don’t expect to see a lot from them that is focused on the gaming market.
NVIDIA is still flying high
Out of all four manufacturers we’re talking about in this article, NVIDIA had by far the largest presence at Comdex. Their suite was crowded with demos and NVIDIA employees, and you could definitely tell that they have been enjoying quite a strong year.
Indeed they have: the GeForce2 GTS was launched without the slightest hint of competition from anyone, it is selling extremely well, and the GeForce2 MX will continue to enjoy incredible popularity for quite some time too. Although the NV20 didn’t make it out as NVIDIA’s fall product, the part itself seems to be doing quite well, and card vendors are expecting samples in December. With the part sampling next month, we can expect it to hit shortly thereafter.
Out of the three manufacturers that will be launching DX8 parts early next year, NVIDIA may well be the first out of the gate, followed by ATI and then 3dfx.
While there was very little that NVIDIA was willing to talk about publicly regarding the NV20, it won’t be long before you start seeing the pieces of the puzzle fit together.
What NVIDIA was more than happy to talk about however, was their first mobile solution, the GeForce2 Go.
The GeForce2 Go, which we should start seeing in laptops in mid to late Q1 2001, is essentially a lower-clocked version of the GeForce2 MX. It will still feature the same two pixel pipes as the GeForce2 MX and will be able to process two textures per pipe per clock, giving it a fill rate of four texels per clock. The only difference is that the GeForce2 Go will be clocked at 143MHz instead of the GeForce2 MX’s 175MHz. Since the GeForce2 MX is memory bandwidth limited, decreasing the core clock shouldn’t hurt performance too much.
The GeForce2 Go memory clock is set at 166MHz; however, just like the Mobility Radeon, its performance will be largely dependent on the type of memory bus implemented by the notebook manufacturer. The GeForce2 Go can be used with a 32, 64 or 128-bit memory bus, but it is highly unlikely that we will see any 128-bit implementations. Instead, manufacturers will probably choose either a 32-bit DDR implementation or a 64-bit SDR implementation, which will cripple performance that would otherwise be almost identical to that of a desktop GeForce2 MX.
With either of those two implementations (32-bit DDR or 64-bit SDR), the GeForce2 Go will have 1.3GB/s of available memory bandwidth, about half that of a GeForce2 MX. With half the available memory bandwidth, you can pretty much expect the GeForce2 Go to perform at around half the level of the GeForce2 MX at memory bandwidth limited resolutions.
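A quick sketch shows why the two likely bus configurations land in exactly the same place, and how both compare to a desktop GeForce2 MX (the MX figure assumes the common 128-bit SDR configuration at 166MHz):

```python
# GeForce2 Go memory bandwidth at its 166MHz memory clock.

def bandwidth_gbs(bus_bits, clock_mhz, ddr=False):
    """Peak memory bandwidth in GB/s: bus width x transfers per second."""
    transfers_per_sec = clock_mhz * 1e6 * (2 if ddr else 1)
    return (bus_bits / 8) * transfers_per_sec / 1e9

print(bandwidth_gbs(32, 166, ddr=True))   # 32-bit DDR:                ~1.3 GB/s
print(bandwidth_gbs(64, 166))             # 64-bit SDR:                ~1.3 GB/s
print(bandwidth_gbs(128, 166))            # GeForce2 MX (128-bit SDR): ~2.7 GB/s
```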
This means that on a Pentium III 500 laptop you can expect around 40 fps at 1024 x 768 x 16, and close to 60 fps at 640 x 480 x 32, both under Quake III Arena. Compared to the virtually non-existent frame rates of today’s mobile graphics accelerators, the GeForce2 Go definitely has quite a bit of potential. The only downside is that you probably won’t see more than 16MB in most implementations; in fact, 8MB could end up being a popular configuration, which would hurt performance considerably.
And just like its desktop counterpart, the GeForce2 Go will have a full TwinView implementation and will work with the same unified drivers that the rest of NVIDIA’s line works with.
NVIDIA was also showing off the X-Box, however there was very little that they had to say that hasn't already been disclosed.
Final Words
We should be getting our first hints of how the graphics market in 2001 is going to stack up within the next few months. The first DirectX 8 parts from the top three manufacturers, 3dfx, ATI and NVIDIA, will all hit the streets at around the same time, which will make deciding between the three solutions much quicker than we’re used to when product releases are more staggered.
The CPU industry has been getting very interesting over the past few months and will definitely be a very competitive world next year, and for the first time in over a year, we’ll be able to say the same about the graphics chip industry as well.