
7. Video Cards

7.1. History

Once upon a time, a company in San Jose, California named 3dfx Interactive was king of the gaming video card market. In October 1996 they released the Voodoo I, which was a phenomenal success. It was one of the first hardware accelerated cards, but it only rendered 3D; it had to be piggybacked with a 2D video card. The idea was that 2D rendering was handled by a high quality 2D video card (Matrox was immensely popular at the time), while 3D information (see Glide2, Section 3.1) was passed to the Voodoo I and rendered, using the Voodoo's fast hardware to perform the necessary graphics calculations. They released the Voodoo Rush in April 1997. It should've been a more powerful card, with a 50MHz GPU and 8MB of RAM. Even better, it was their first combined 2D/3D card, meaning that it freed up a valuable PCI slot (most PCs only had a couple of PCI slots back then), but the Rush wasn't as popular. 3dfx removed the multi-texturing unit from the Rush, and it was outperformed by the Voodoo I. At the time, ATI had their Rage series and nVidia had their Riva 128, but the Voodoo I blew them all away.

This was a good time for Linux. id Software open sourced the Doom codebase and ported Quake I to Linux (December 1996). We were getting our first tastes of real commercial gaming. Life was simple: you purchased a Voodoo. And it felt good, because 3dfx open sourced their drivers. The king of video cards worked with Linux developers. Not only did we have the best video cards, but the drivers were all open source.

In March 1998, 3dfx released their Voodoo II, with its 3.6GB/sec memory bandwidth, 12MB of video memory and 90MHz core. It supported resolutions up to 1024x768. This was 3dfx in its heyday. Like the Voodoo I, the Voodoo II was a 3D only card, and piggybacked with a 2D video card. The Voodoo Banshee was released in September 1998 as a combined 2D/3D card, like the Rush. Despite the faster 100MHz core, the Banshee was outperformed by the Voodoo II because, as with the Rush, its multi-texturing unit was removed. And again like the Rush, it wasn't popular. But 3dfx reigned supreme, and nobody could touch them.

In April 1999, the Voodoo III was released. There were a number of Voodoo IIIs, ranging from a 143MHz core speed to 183MHz. There were TV-out versions. There were PCI and AGP versions (it was 3dfx's first AGP video card). It was another success, but 3dfx began to lose ground to nVidia, which released their TNT 2. The TNT 2 outperformed the Voodoo III, and accelerated 3D graphics at full 32 bit color, while the Voodoos were stuck at 16 bit color. But life was still good for Linux. We had a card that was almost neck and neck with nVidia, our drivers were open source, and in December 1999, id Software gave us a huge gift: they open sourced the Quake I codebase.

Then nVidia released the GeForce 256 in October 1999. 3dfx's Voodoo IV, its direct competitor, was about a year late, which is very bad when you're competing in a bleeding edge market. While nVidia was putting real R&D into their cards, 3dfx was simply adding more and faster RAM. The Voodoo IV and V rendered in full 32bpp color and had great AA support (Section 7.4.3); the Voodoo V also featured a second GPU and more memory, and was arguably the king of video cards. However, 3dfx's late release of the Voodoo IV and V, coupled with the fact that the GeForce could be had for half the price, meant that 3dfx was sinking fast. For Linux, the newest Voodoos could only accelerate at 16 and 24 bit color. Worse still, the Voodoo V's second GPU was unused by the Linux driver (and to this day, the Voodoo V is functionally equivalent to the single GPU Voodoo IV on Linux). Most Windows users were switching to nVidia, and despite the fact that the nVidia drivers were proprietary, even Linux users began to jump onto the nVidia bandwagon. VA Linux, the largest Linux server vendor, put nVidia cards into their machines.

Then in April 2000, 3dfx was attacked on a different front: ATI started releasing their first generation Radeons. Until this point, ATI had always been an innovative (they developed their own 3D acceleration chips in 1996, about the same time as 3dfx) but sleepy graphics chipset manufacturer. The Radeons were their first 3D accelerated cards that gamers took any serious interest in, and they trounced both the nVidia and 3dfx offerings. ATI worked with Linux developers, open sourced all their drivers, and was hailed as the great hope for Linux gaming. nVidia came back with fists swinging, and this was all too much for 3dfx. Between losing the benchmark wars to the GeForce and Radeon, their lateness with new cards and their high prices, 3dfx lost its market share and didn't have the funds to stay in business. On April 18, 2001, they sold most of their assets and technology to nVidia, and in October 2002, they finally declared bankruptcy.

The demise of 3dfx was quite sudden and a slap in the face to the open source community. I still remember my friend Gabe Rosa emailing me with just "Look at /." and seeing the news. It was the second worst day for Linux gaming (the first being the demise of Loki). And it was also a shame. 3dfx was getting ready to release a new Voodoo V featuring 4 GPUs, which would've trounced the ATI and nVidia offerings, as well as a new card code-named "Rampage" which reportedly would've put them firmly back as king of the hill. There are reports that the Rampage's technology (which was sold to nVidia) went into the GeForce 5900. Not too shabby for 3 year old technology!

At first, things were still simple. Linux gamers would either keep their open source Voodoos, get an open source Radeon, or get a closed source GeForce. However, with bigger and better games on the horizon, it was only a matter of time before the Voodoos would no longer be viable graphics cards for modern gaming. People were still using Voodoos, but they were essentially out of the game at this point.

ATI started to release a tremendous number of versions of each video card, and keeping up with them and their terminology became very difficult. ATI and nVidia played king of the hill. Their products have been neck and neck ever since, with the GeForce taking the lead a bit more often than the Radeon. But the Radeon's drivers were open source, so many Linux users stuck by them. Then things got even more complicated.

ATI became more and more reluctant to open source drivers for their new releases, and suddenly, it wasn't clear who the "good guy" was anymore. nVidia's party line is that they license some of their GL code from another company, and thus can't release it. Presumably, ATI doesn't want to release drivers to keep their trade secrets, well, secret. And it gets worse. The ATI Linux drivers have been plagued by extremely poor performance. Even when an ATI offering is better than the current GeForce offering on Windows, the card is always trounced by the GeForce on Linux. Because of the ATI Linux driver woes, Linux users cannot use MS Windows based benchmarks or card stats; they simply don't apply to us. And that's pretty much where we are right now.

As a last note, the only systematic Linux video card benchmarking effort I'm aware of was done, unfortunately, back in March 2001, between a Radeon 32 DDR and a GeForce 2. You can read it for yourself at http://www.linuxhardware.org/features/01/03/19/0357219.shtml, but the conclusion is that the GeForce 2 firmly and soundly trounced the Radeon 32 DDR.

7.2. Current Status (1 March 2004)

nVidia's latest offering is the GeForce 5900, based on the NV35 chipset. It's well supported under Linux with high quality but proprietary drivers. nVidia uses a convenient unified driver architecture; a single driver supports everything from the TNT 2 all the way up to the GeForce 5900. Although their drivers are closed source, as a company, nVidia has been supportive and good to Linux users.

ATI has worked with Linux developers for their Radeons up to and including the Radeon 9200, which have 2D and 3D support in XFree86. I'm not entirely sure of the quality of these open source drivers; however, Soldier of Fortune I and Heavy Metal still have opaque texture problems on first generation Radeons. Beyond the 9200, you need to use ATI's binary only proprietary drivers, available in rpm format from ATI's website. It's claimed that these drivers are piss poor; a friend of mine claims his GeForce 4400 outperforms his Radeon 9700 Pro. That's shameful.

On paper, and in the Windows benchmarks, the Radeon 9800 trounces the ill-conceived GeForce 5800 and slightly edges out the GeForce 5900. On paper, it's simply the more impressive card. But again, the driver issue makes this information unusable for us. If you have your heart set on buying the best card for Linux, you'll want to go with the GeForce 5900.

7.2.1. SVGAlib Support

As of June 2002, SVGAlib support for Radeon cards is shaky. Developers have reported that SVGAlib works on the Radeon 7500 and the Radeon QD (64MB DDR model), but has problems on the Radeon VE.

I have no information about SVGAlib and the GeForce cards.

7.3. Which Video Card Should I Buy? (1 March 2004)

The answer was very difficult last year, but here's my take on it these days:

  1. All GeForce cards require a proprietary driver which will "taint" your kernel. However, all ATI cards beyond the Radeon 9200 also require a proprietary, kernel-tainting driver.

  2. nVidia has proven that they care enough about Linux to write and maintain current, very high quality drivers. Even when ATI open sourced its video card drivers, they played the "we'll make Linux developers write our drivers for us" game. Their current proprietary drivers are below par.

  3. The current Radeon 9800 barely beats out the GeForce 5900 in benchmarks and card specs, but Linux users won't benefit from this because of driver issues.

  4. ATI has a very long history of dropping support for hardware as fast as they can get away with it.

  5. On MS Windows, when a GeForce beat out its main competing Radeon, reviews claimed that the Radeon generally had better visuals. I have no idea how this translates to Linux.

Don't get the GeForce 5800. Card reviews claim that it has some serious heat, noise, and dust issues. It's informally called the "dust buster" because of the noise its fan makes.

If you absolutely only want open source drivers on your system, the Radeon 9200 is the best card you can buy.

If you have a Linux/Windows dual boot, consider either the Radeon 9800 or the GeForce 5900. The Radeon will be slightly stronger on Windows. The GeForce will be stronger on Linux.

If you have a Linux only system, the GeForce 5900 is your best bet. As of today, the 256MB version comes in at a whopping $350; the 128MB version is more reasonable.

7.4. Definitions: Video Card and 3D Terminology

We'll cover video card and 3D graphics terminology. This material isn't crucial to actually getting a game working, but may help in deciding what hardware and software options are best for you.

7.4.1. Textures

A rendered scene is basically made up of polygons and lines. A texture is a 2D image (usually a bitmap) covering the polygons of a 3D world. Think of it as a coat of paint for the polygons.
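
To make the 'coat of paint' idea concrete, here's a rough sketch (my own illustration, not from any particular game) of how a fixed-function OpenGL program of this era pastes a texture onto a single polygon by attaching a texture coordinate to each corner. The texture_id parameter is an assumption; it would name a texture already uploaded with glTexImage2D.

    #include <GL/gl.h>

    /* Sketch only: paint an already-uploaded texture onto one quad.
       Each glTexCoord2f() call says which corner of the 2D image
       gets glued to the following 3D vertex. */
    void draw_textured_quad(GLuint texture_id)
    {
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, texture_id);

        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
        glEnd();
    }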

7.4.2. T&L: Transform and Lighting

T&L is the process of translating all the 3D world information (position, distance, and light sources) into the 2D image that is actually displayed on screen.
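
As a rough illustration (my own sketch, not any driver's actual code), here is the kind of work a T&L unit does for a single vertex: multiply the position by a combined 4x4 model-view-projection matrix, divide by w to get screen coordinates, and compute a simple diffuse lighting term from the surface normal and the light direction.

    /* Sketch only: transform and light one vertex.
       'm' is a 4x4 matrix in OpenGL-style column-major order. */
    typedef struct { float x, y, z; } vec3;

    void transform_and_light(const float m[16], vec3 pos, vec3 normal,
                             vec3 light_dir, float *screen_x, float *screen_y,
                             float *brightness)
    {
        /* Transform: matrix multiply, then perspective divide by w. */
        float x = m[0]*pos.x + m[4]*pos.y + m[8]*pos.z  + m[12];
        float y = m[1]*pos.x + m[5]*pos.y + m[9]*pos.z  + m[13];
        float w = m[3]*pos.x + m[7]*pos.y + m[11]*pos.z + m[15];
        *screen_x = x / w;
        *screen_y = y / w;

        /* Lighting: diffuse brightness = max(0, N . L),
           assuming both vectors are already normalized. */
        float d = normal.x*light_dir.x + normal.y*light_dir.y + normal.z*light_dir.z;
        *brightness = (d > 0.0f) ? d : 0.0f;
    }

When a card advertises hardware T&L (as the GeForce 256 did), it means this per-vertex math runs on the video card instead of the CPU.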

7.4.3. AA: Anti Aliasing

Anti aliasing is the smoothing of jagged edges along a rendered curve or polygon. Pixels are rectangular objects, so drawing an angled line or curve with them results in a 'stair step' effect, also called the 'jaggies': pixels make what should be a smooth curve or line look jagged. AA uses CPU intensive filtering to smooth out these jagged edges. This improves a game's visuals, but can also dramatically degrade performance.

AA is used in a number of situations. For instance, when you magnify a picture, you'll notice lines that were once smooth become jagged (try it with The Gimp). Font rendering is another big application for AA.

AA can be done either by the application itself (as with The Gimp or the XFree86 font system) or by hardware, if your video card supports it. Since AA is CPU intensive, it's more desirable to perform it in hardware, but if we're talking about semi-static applications, like The Gimp, this really isn't an issue. For dynamic situations, like games, doing AA in hardware can be crucial.
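
To make 'smoothing the jaggies' concrete, here is a minimal sketch (my own illustration, not any real renderer's code) of Wu-style line anti-aliasing: instead of lighting one pixel per column at full brightness, the line's brightness is split between the two pixels it passes between, in proportion to how much of each it covers.

    /* Sketch only: draw an anti-aliased, nearly horizontal line into a
       single-channel floating point framebuffer 'fb' of the given width.
       Assumes |slope| <= 1 and x0 < x1. */
    void draw_aa_line(float *fb, int width, float x0, float y0, float x1, float y1)
    {
        float slope = (y1 - y0) / (x1 - x0);
        float y = y0;
        int x;

        for (x = (int)x0; x <= (int)x1; x++) {
            int   yi   = (int)y;
            float frac = y - (float)yi;   /* how far the line sits into the lower pixel */

            fb[yi * width + x]       += 1.0f - frac;  /* most of the brightness here...       */
            fb[(yi + 1) * width + x] += frac;         /* ...the rest spills into the next row */
            y += slope;
        }
    }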

7.4.4. FSAA: Full Screen Anti-Aliasing

FSAA usually involves drawing a magnified version of the entire screen in a separate framebuffer, performing AA on the entire image and rescaling it back to the normal resolution. As you can imagine, this is extremely CPU intensive. You will never see non-hardware-accelerated FSAA.
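
The idea is simple enough to sketch in a few lines. Assuming (purely for illustration) a single-channel image rendered at twice the final width and height, the rescaling step averages every 2x2 block of the big image down to one final pixel:

    /* Sketch only: the downscaling step of 2x2 supersampled FSAA.
       'big' is 2*w by 2*h; 'final' is w by h. Each final pixel is the
       average of the four supersamples that cover it. */
    void downsample_2x2(const unsigned char *big, unsigned char *final, int w, int h)
    {
        int x, y, dx, dy;

        for (y = 0; y < h; y++) {
            for (x = 0; x < w; x++) {
                int sum = 0;
                for (dy = 0; dy < 2; dy++)
                    for (dx = 0; dx < 2; dx++)
                        sum += big[(2 * y + dy) * (2 * w) + (2 * x + dx)];
                final[y * w + x] = (unsigned char)(sum / 4);
            }
        }
    }

Doing this every frame, on top of rendering four times as many pixels in the first place, is why FSAA is only practical when the video card does it in hardware.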

7.4.5. Mip Mapping

Mip mapping is a technique where several scaled copies of the same texture are stored in the video card memory to represent the texture at different distances. When the texture is far away, a smaller version of the texture (mip map) is used; when it is near, a bigger one is used. Mip mapping can be used regardless of filtering method (Section 7.4.6). Mip mapping reduces memory bandwidth requirements, since distant surfaces read from the smaller copies, and it also improves the quality of the rendered image.
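
As a rough sketch of the selection step (my own illustration, not any particular driver's logic), the renderer picks a mip level based on how many texels of the full-size texture would be squeezed into one screen pixel; each doubling of that ratio drops down one level to a half-size copy:

    #include <math.h>

    /* Sketch only: pick a mip map level. A ratio of one texel per pixel
       (or less) means the texture is close, so use level 0, the full-size
       copy. 'num_levels' is how many scaled copies exist. */
    int select_mip_level(float texels_per_pixel, int num_levels)
    {
        int level;

        if (texels_per_pixel <= 1.0f)
            return 0;

        level = (int)(log2f(texels_per_pixel) + 0.5f);
        if (level > num_levels - 1)
            level = num_levels - 1;   /* never go past the smallest copy */
        return level;
    }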

7.4.6. Texture Filtering

Texture filtering is the fundamental feature required to present sweet 3D graphics. It's used for a number of purposes, like making adjacent textures blend smoothly and making textures viewed from an angle (think of looking at a billboard from an extreme angle) look realistic. There are several common texture filtering techniques including point-sampling, bilinear, trilinear and anisotropic filtering.

When I talk about 'performance hits', keep in mind that the performance hit depends on what resolution you're running at. For instance, at a low resolution you may get only a very slight hit by using trilinear filtering instead of bilinear filtering. But at high resolutions, the performance hit may be enormous. Also, I'm not aware of any card that uses anisotropic texture filtering. TNT drivers claim they do, but I've read that these drivers still use trilinear filtering when actually rendering an image to the screen.
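
For what it's worth, here's a hedged sketch of how an OpenGL game of this era asks for each of these filtering methods on the currently bound texture; whether the request is honored in hardware, and how fast, is up to the card and driver:

    #include <GL/gl.h>

    /* Sketch only: request point sampling, bilinear, or trilinear filtering
       for the texture currently bound to GL_TEXTURE_2D. */
    void choose_filtering(int method)
    {
        if (method == 0) {
            /* Point sampling: nearest texel, no blending. */
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        } else if (method == 1) {
            /* Bilinear: blend the four closest texels. */
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        } else {
            /* Trilinear: bilinear within each mip map level, blended between
               the two nearest levels (mip maps must have been built). */
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        }
    }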

7.4.6.1. Point Sampling Texture Filtering

Point sampling simply uses the color of the single texel nearest the sample point, with no blending between texels. It's rare these days, but if you run a game with 'software rendering' (which you'd need to do if you run a 3D accelerated game without a 3D accelerated board) you're likely to see it used.
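
In code, point sampling is about as simple as a texture lookup can get. This sketch (single-channel texture, purely illustrative) just snaps the texture coordinate to the nearest texel and returns it unmodified:

    /* Sketch only: point sample a width x height single-channel texture.
       u and v are texture coordinates in [0, 1). */
    unsigned char point_sample(const unsigned char *tex, int width, int height,
                               float u, float v)
    {
        int x = (int)(u * width);
        int y = (int)(v * height);
        return tex[y * width + x];
    }

The blockiness you see in software-rendered games comes from exactly this: neighboring screen pixels snap to the same texel, with nothing blended in between.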

7.4.6.2. Bilinear Texture Filtering

Bilinear filtering is a computationally cheap but low quality texture filtering method. It approximates the color between texels by sampling the four texels closest to the sample point (the surrounding 2x2 block) and blending them. All modern 3D accelerated video cards can do bilinear filtering in hardware with no performance hit.
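
Here's a minimal sketch of the blend itself (single-channel texture, my own illustration): the sample point falls somewhere between four texels, and each texel's contribution is weighted by how close the point is to it.

    /* Sketch only: bilinearly sample a width x height single-channel texture
       at texture coordinates (u, v) in [0, 1]. */
    float bilinear_sample(const float *tex, int width, int height, float u, float v)
    {
        float x = u * (width - 1);
        float y = v * (height - 1);
        int   x0 = (int)x, y0 = (int)y;
        int   x1 = (x0 + 1 < width)  ? x0 + 1 : x0;
        int   y1 = (y0 + 1 < height) ? y0 + 1 : y0;
        float fx = x - x0, fy = y - y0;

        /* Blend horizontally along the top and bottom rows, then vertically. */
        float top    = tex[y0 * width + x0] * (1 - fx) + tex[y0 * width + x1] * fx;
        float bottom = tex[y1 * width + x0] * (1 - fx) + tex[y1 * width + x1] * fx;
        return top * (1 - fy) + bottom * fy;
    }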

7.4.6.3. Trilinear Texture Filtering

Trilinear filtering is a higher quality filter which takes a bilinear sample from each of the two most suitable mip map levels and interpolates between them, sampling eight texels in total. Trilinear filtering always uses mip mapping, and it eliminates the banding effect that appears between adjacent mip map levels. Most modern 3D accelerated video cards can do trilinear filtering in hardware with no performance hit.
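
In other words, trilinear filtering is just two bilinear samples plus one more blend. A sketch, reusing the hypothetical bilinear_sample() above and assuming power-of-two mip maps and a 'lod' value the caller has already clamped so both levels exist:

    /* Sketch only: trilinear sample. 'mips' is an array of mip map images,
       level 0 being the full base_w x base_h texture, each further level
       half the size. 'lod' is the ideal (fractional) mip level. */
    float trilinear_sample(float *mips[], int base_w, int base_h,
                           float u, float v, float lod)
    {
        int   level0 = (int)lod;
        int   level1 = level0 + 1;
        float frac   = lod - (float)level0;

        float a = bilinear_sample(mips[level0], base_w >> level0, base_h >> level0, u, v);
        float b = bilinear_sample(mips[level1], base_w >> level1, base_h >> level1, u, v);
        return a * (1.0f - frac) + b * frac;
    }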

7.4.6.4. Anisotropic Texture Filtering

Anisotropic filtering is the best looking but most CPU intensive of the three common texture filtering methods. Trilinear filtering is capable of producing fine visuals, but it only samples from a square area, which in some cases is not ideal. Anisotropic filtering (the name means 'not uniform in all directions') samples more than eight texels, and the number and placement of the samples depends on the viewing angle of the surface relative to your screen. It shines when viewing textures such as alphanumeric characters at a steep angle.
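
On cards and drivers that support it, anisotropic filtering is exposed to OpenGL programs through the GL_EXT_texture_filter_anisotropic extension. A hedged sketch of turning it on for the currently bound texture (the application must still check at runtime that the extension is actually present):

    #include <GL/gl.h>

    /* These enums come from the GL_EXT_texture_filter_anisotropic extension;
       define them here in case the system's gl.h doesn't. */
    #ifndef GL_TEXTURE_MAX_ANISOTROPY_EXT
    #define GL_TEXTURE_MAX_ANISOTROPY_EXT     0x84FE
    #define GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT 0x84FF
    #endif

    /* Sketch only: ask the driver for its maximum anisotropy and request
       that much for the texture currently bound to GL_TEXTURE_2D. */
    void enable_anisotropic_filtering(void)
    {
        GLfloat max_aniso = 1.0f;

        glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &max_aniso);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, max_aniso);
    }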

7.4.7. Z Buffering

A Z buffer is a portion of RAM which represents the distance between the viewer (you) and each pixel of an object. Many modern 3D accelerated cards have a Z buffer in their video RAM, which speeds things up considerably, but Z buffering can also be done by the application's rendering engine. However, this sort of thing clearly should be done in hardware wherever possible.

Every object has a stacking order, like a deck of cards. When objects are rendered into a 2D frame buffer, the rendering engine removes hidden surfaces by using the Z buffer. There are two approaches to this. Dumb engines draw far objects first and close objects last, obscuring the objects behind them in the Z buffer. Smart engines calculate which portions of objects will be obscured by objects in front of them and simply don't render the portions that you won't see anyhow. For complicated textures this is a huge savings in processor work.
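
The per-pixel test itself is trivial, which is why it maps so well onto hardware. Here is a sketch of a software Z buffer write (my own illustration, assuming smaller depth values mean 'closer to the viewer'):

    /* Sketch only: write a pixel only if it's nearer than what's already
       there. 'zbuf' and 'color_buf' are width-pixel-wide buffers. */
    void plot_pixel(float *zbuf, unsigned int *color_buf, int width,
                    int x, int y, float depth, unsigned int color)
    {
        int i = y * width + x;

        if (depth < zbuf[i]) {        /* new pixel is in front of the old one */
            zbuf[i]      = depth;
            color_buf[i] = color;
        }                             /* otherwise it's hidden: skip it */
    }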