Full Screen Antialiasing is used to improve output image quality. Previously we only measured the performance of this technology, but now we have decided to expand this section and examine quality as well. In this issue we continue our examination with a number of new cards.
There are various sorts of antialiasing (as well as anisotropy) :-). Moreover, various companies implement AA in their own ways. We won't describe the details of each kind of antialiasing here, but we will sum them up.
So, what is antialiasing and what is it used for? To answer this, we should turn to physics and linguistics. The latter is simple: "anti" is clear, and "aliasing" means jaggies. As the monitor has a fixed number of pixels vertically and horizontally, vertical and horizontal lines look perfect, with pixels following one another exactly. But if a line is diagonal, it will look jagged, because pixels are organized in a 2D matrix limited by the screen resolution. So what can we do, if we can't change our monitors? The user's eyes and mind can still be tricked. This is what antialiasing does, through various algorithms.
As the task is clear, the approach is as follows. Since pixel color is encoded in 8 to 32 bits (there can be more, but the human eye won't distinguish 64-bit color, for example, from 32-bit), it can be substituted. For example, take two colors, white and black. They are opposites and produce a sharp border when adjacent. So, if we draw a black line on a white background, it won't look straight (unless it's vertical or horizontal), but jagged. To make it look smoother, we need to paint over the borders, i.e. blend the colors. In this case, it's a blend of black and white: as a result, we get gray on both sides of the displacement points. In most cases the original line color is preserved, but sometimes a new one may replace it. The size of the matrix matters as well. For example, at 800x600 jaggies are more obvious than at 1600x1200, where the displacement is smaller. I.e., a larger matrix means a higher resolution and smaller pixels. In fact, using antialiasing at 1600x1200 is more of a whim than at 800x600. But in reality everything is more complex.
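The black-on-white blending described above can be sketched in a few lines. This is a minimal illustration, not any card's actual hardware path; the `coverage` value stands in for the fraction of the pixel the line covers, which a real rasterizer would estimate.

```python
# Hypothetical sketch: blending one edge pixel between a background and a
# line color, weighted by how much of the pixel the line covers.

def blend(background: int, foreground: int, coverage: float) -> int:
    """Linearly blend two 8-bit channel values by coverage (0.0..1.0)."""
    return round(background * (1.0 - coverage) + foreground * coverage)

# A pixel half-covered by a black line on a white background turns gray:
print(blend(255, 0, 0.5))  # 128
```

With zero coverage the pixel stays white (255), with full coverage it becomes black (0), and intermediate coverages produce the grays that hide the jaggies.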
To confuse you even more ;-) here are excerpts from NVIDIA World articles with our additions.
During antialiasing each pixel is divided into subpixels. The pixel color is averaged with some formula from the subpixel colors. Thus, though the physical resolution remains the same, the effective resolution becomes considerably higher.
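The averaging step can be shown directly. A plain box filter (arithmetic mean) is assumed here; real hardware may weight the samples differently.

```python
# Minimal sketch of the resolve step: the final pixel color is the
# average of its subpixel (sample) colors.

def resolve(samples: list[int]) -> int:
    """Average a list of 8-bit sample values into one pixel value."""
    return round(sum(samples) / len(samples))

# Four samples of an edge pixel, two black and two white:
print(resolve([0, 0, 255, 255]))  # 128
```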
The two most popular approaches are supersampling and multisampling. Both define the pixel color by blending subpixel (sample) colors, though the samples are generated differently.
Supersampling is the simplest and most straightforward method. The image is calculated at a virtual resolution several times higher than the actual one. After that it's scaled and filtered down to the original resolution. The color of each pixel is defined using several subpixels. This improves image quality considerably, but results in a several times higher graphics card load and thus a performance falloff. The reason is that several colors have to be calculated per pixel instead of one; for example, 2x2 AA at 800x600 requires calculation at 800x2 by 600x2, i.e. 1600x1200.
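The whole supersampling pipeline fits in a short sketch, assuming a 2x2 factor and a toy `shade()` function standing in for the real renderer (all names here are illustrative):

```python
# Toy supersampling: "render" at a virtual resolution factor times higher,
# then average each factor x factor block down to one output pixel.

def shade(x: float, y: float) -> int:
    """Toy scene in normalized coordinates: black below the diagonal."""
    return 0 if y > x else 255

def supersample(width: int, height: int, factor: int = 2) -> list[list[int]]:
    hi_w, hi_h = width * factor, height * factor
    # Render at the virtual (higher) resolution...
    hi = [[shade(x / hi_w, y / hi_h) for x in range(hi_w)] for y in range(hi_h)]
    # ...then filter each factor x factor block down to one pixel.
    out = []
    for py in range(height):
        row = []
        for px in range(width):
            block = [hi[py * factor + sy][px * factor + sx]
                     for sy in range(factor) for sx in range(factor)]
            row.append(round(sum(block) / len(block)))
        out.append(row)
    return out
```

Note that `shade()` runs `factor * factor` times per output pixel, which is exactly the several-fold load increase described above.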
Multisampling is a more complex and intelligent approach, or rather a tool. The idea is very simple as well: why blindly calculate N subpixels for each pixel, if we can reuse already calculated ones again and again to form several resulting pixels? On the other hand, no AA is needed in some parts of the image, so why use several subpixels where one is enough? And vice versa: some parts require very good AA, so many more subpixels should be calculated there. This makes it possible not only to save resources significantly, but also to obtain better AA quality! The tool can be used as one likes, and performance and quality depend on the implementation selected by the graphics card or game developer.
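The spend-samples-only-where-needed idea can be illustrated with a simple adaptive scheme: one sample per pixel everywhere, extra samples only where neighboring pixels disagree. This is an illustration of the principle, not any vendor's actual multisampling algorithm; `shade()` and `is-edge` detection are assumptions of the sketch.

```python
# Adaptive sampling sketch: refine only pixels whose neighborhood shows
# a color discontinuity (a likely edge), leaving flat areas at 1 sample.

def shade(x: float, y: float) -> int:
    """Toy scene in normalized coordinates: black below the diagonal."""
    return 0 if y > x else 255

def adaptive_aa(width: int, height: int, extra: int = 4) -> list[list[int]]:
    # First pass: one sample per pixel.
    base = [[shade((x + 0.5) / width, (y + 0.5) / height)
             for x in range(width)] for y in range(height)]
    out = [row[:] for row in base]
    for y in range(height):
        for x in range(width):
            neighbors = [base[ny][nx]
                         for ny in (y - 1, y, y + 1)
                         for nx in (x - 1, x, x + 1)
                         if 0 <= ny < height and 0 <= nx < width]
            if min(neighbors) != max(neighbors):  # edge: colors disagree
                # Second pass: extra x extra samples inside this pixel only.
                samples = [shade((x + (i + 0.5) / extra) / width,
                                 (y + (j + 0.5) / extra) / height)
                           for i in range(extra) for j in range(extra)]
                out[y][x] = round(sum(samples) / len(samples))
    return out
```

Flat regions cost one shade per pixel, while edge pixels get 16, which is the resource saving the paragraph describes.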
Now let's get to specific implementations of makers' AA. Let's start with the most unusual and original solution of Matrox.
The FSAA supported by Parhelia-512 is currently unique and close to ideal. Essentially, it's supersampling with up to 16 calculations per pixel. But it's performed ONLY (!) for polygon edge pixels (3-5% of a typical scene). The advantage is clear: unlike multisampling, it doesn't keep redundant data in memory or transfer them over the bus! The total frame buffer size increases only slightly, by up to a factor of two even at the maximum 16x setting. A special fast rendering pass is used to determine the edge pixels: the graphics card marks only polygon edge pixels in a separate buffer, without calculating textures or painting intermediate pixels. Besides, as only edges are processed, there are no texture sharpness losses peculiar to FSAA and some hybrid MSAA techniques. However, this intelligent AA can still produce artefacts in some cases. Besides, it can't correctly process edges mixed with transparent polygons (like clouds, glass, fire).
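The extra edge-marking pass can be sketched roughly as follows. Bresenham line drawing stands in for the hardware's edge rasterization, and everything here (names, the mark buffer) is an illustration of the idea, not Parhelia's actual design.

```python
# Sketch of an edge-marking pass: rasterize only polygon edges into a
# boolean buffer, so expensive supersampling can later be limited to the
# marked pixels (a few percent of a typical scene).

def mark_edges(width, height, polygons):
    marks = [[False] * width for _ in range(height)]
    for poly in polygons:
        # Walk each edge of the polygon (closing back to the first vertex).
        for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]):
            # Bresenham: mark every pixel the edge passes through.
            dx, dy = abs(x1 - x0), -abs(y1 - y0)
            sx = 1 if x0 < x1 else -1
            sy = 1 if y0 < y1 else -1
            err = dx + dy
            x, y = x0, y0
            while True:
                if 0 <= x < width and 0 <= y < height:
                    marks[y][x] = True
                if (x, y) == (x1, y1):
                    break
                e2 = 2 * err
                if e2 >= dy:
                    err += dy
                    x += sx
                if e2 <= dx:
                    err += dx
                    y += sy
    return marks
```

Interior pixels stay unmarked, which is why the frame buffer overhead and the supersampling cost both stay small.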
NVIDIA AA is clearly described in articles at NVIDIA World. We won't repost them here, but just provide the links to them. The basic article is here. This older article describes NVIDIA AA technologies.
As for ATI AA, it's reviewed here along with other companies' antialiasing.
Now let's finish with the theory and move on to performance.
These results were obtained in two tests, DirectX and OpenGL (3DMark2001 SE and Serious Sam: The Second Encounter), conducted on a Pentium 4 / Windows XP testbed.
But we must know what we are spending the performance on. As you can see, the falloffs are rather large, so we'd like to know if the result is really worth it. Here are screenshots from the tests, showing the actual quality of the different antialiasing modes.
AA behaves strangely with the latest NVIDIA drivers on GeForce4 and GeForce4 MX: some algorithm is used to speed up the 4x mode. Below are screenshots and a description of NVIDIA AA on these cards.
It's a pity, but the screenshots don't show the most interesting part: in 4x AA we observed flickering of object edges. It was so annoying that the mode was completely impossible to use. Let's hope this was a driver bug to be fixed in newer drivers.