Indeed, the new benchmark does not introduce many new features; it is solidly based on the good old 3DMark05. And why not? The DirectX version is the same, and changes in the graphics landscape have been minimal, coming down to two main facts: the December DX9 update (rumoured to include SM4 specifications, though compatible hardware, the R600 and G80, will appear much later, together with Vista) and the announcement of the new generation of ATI architectures that finally supports SM3, with some reservations such as vertex texturing. Even such news is enough to justify a new version of the benchmark: it should support the R5XX and use the new DX9 libraries. By the way, the benchmark offers to update them to the December state; click "yes" to continue.
The test is based on the well-known 3DMark05. That's natural: the DirectX version is still the same (9). Though the new benchmark uses the latest libraries, updated in December, the graphics pipeline and its capabilities remain the same, so there was no need to rewrite the engine. The new interface also resembles the previous version, though it adds new menu options. But we cannot speak of complete correspondence with the inherited tests: even the preserved tests have modified level designs, shaders, and other details, so the results will not be directly comparable. So:
First, a table of the official differences between the new benchmark and the previous version:
The benchmark has obviously been reoriented toward modern topical issues such as multi-core processors, HDR, SM3, and larger VRAM. Our compliments on the more adequate CPU tests: they now run complex AI and physics in a hypothetical game, instead of software emulation of vertex shaders, which will hardly ever happen on a real gaming PC. The playable game will entertain testers during their breaks ;-).
There are three versions of the benchmark: Basic, Advanced, and Professional. The first lacks many features and actually only lets you obtain final scores in the four game tests, along with some HDR and SM3 results. The Advanced version, which is commercial, offers all the tests, including the synthetic SM3 feature tests and the quality tests for texture filtering and AA; test results can be exported to MS Excel files, you can configure resolutions and test modes, there are separate tests for shaders and CPUs, etc. The Professional version adds command-line and batch-run functionality as well as a full commercial license.
You can read about the differences between the versions here.
And now, on to the description of the tests.
First, as usual, our testbed configuration:
VSync is disabled.
SM2.0 Graphics Test 1 - Return to Proxycon
The FPS-like test we know from the previous version got a considerably improved resulting image, but it still uses only SM2 (the new version, 06, is on the right; the old one, 05, is on the left):
That's all due to new material shaders, higher-resolution textures, more light sources (26, two of them directional), and an advanced shadow algorithm. Nothing conceptually new, actually, but performance dropped noticeably while image quality got much better:
SM2.0 Graphics Test 2 - Firefly Forest
The Firefly Forest scene (still SM2) was also enriched with longer shaders, higher-resolution textures, new light sources and shadows, etc.
Compared to the previous version, we can visually estimate the FPS cost of these image changes ;-)
HDR/SM3.0 Graphics Test 1 - Canyon Flight
This one is a new test. To be more exact, it's the old one, thoroughly overhauled. It uses not only SM3 shaders but also HDR rendering, which means the engine was changed to a greater degree than in the first two cases:
As we can see, the image is also noticeably different. Lighting is indeed more realistic. Let's find out at what cost ;-)
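Since HDR comes up repeatedly in this review, a quick illustration of what it changes at the output stage may help. An HDR frame keeps scene luminance in floating point, far beyond the 0..1 range of an 8-bit back buffer, so a tone-mapping pass must compress it for display. The sketch below uses the simple Reinhard curve purely as an illustration of the principle; the benchmark's actual operator is not documented here, and the sample luminances are our own made-up values.

```python
# Illustrative sketch: why HDR rendering needs a tone-mapping pass.
# HDR luminance is unbounded; the Reinhard curve L / (1 + L)
# compresses it into the displayable [0, 1) range.

def reinhard_tone_map(luminance: float) -> float:
    """Map an unbounded HDR luminance to the displayable [0, 1) range."""
    return luminance / (1.0 + luminance)

hdr_samples = [0.05, 1.0, 4.0, 100.0]   # hypothetical scene luminances
for hdr in hdr_samples:
    print(f"HDR {hdr:7.2f} -> LDR {reinhard_tone_map(hdr):.3f}")
```

Note how a luminance of 100 and a luminance of 4 both land below 1.0 but remain distinguishable, which is exactly what an 8-bit fixed-point pipeline cannot do.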
HDR/SM3.0 Graphics Test 2 - Deep Freeze
It's another HDR/SM3 test (perhaps we shouldn't use these two notions together: there are shaders and tasks that may require the intensive branching peculiar to SM3 without being HDR). Anyway,
there is nothing to compare this image to, as the previous benchmark version did not have this test. The test is cinematic in many aspects: the developers paid attention to realistic rendering of shadows, lighting, and HDR. The layout of the results is similar to the previous ones. Whether the test depends only on shader capacity has no simple answer: it seems to depend more on the overall frequency of an accelerator than on its computational capacity, being limited not only by shader length but also by texture fetch and memory bandwidth. We should note, though, that the R580 differs from the R520 in more than just frequencies. Balance is what we seem to be dealing with here.
CPU Tests - Red Valley
This test finally emulates a real gaming load. There are two tests with slightly different balances of physics and AI, but they are otherwise similar. It uses the Ageia PhysX library and a fairly complex path-finding algorithm, typical of many AI tasks in games.
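The benchmark's actual AI code is not public, but path finding of this kind is classically done with A* search on a navigation grid, so a minimal sketch of that algorithm gives a feel for the workload. The grid layout, costs, and 4-connectivity below are our own illustrative assumptions, not Futuremark's.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; 0 = free cell, 1 = wall.
    Returns the path length in steps, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    def h(p):  # Manhattan-distance heuristic (admissible on a grid)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start)]   # (f = g + h, g, cell)
    best = {start: 0}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1
                if ng < best.get(nxt, float("inf")):
                    best[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt))
    return None

grid = [
    [0, 0, 0, 0],
    [1, 1, 1, 0],   # a wall forces a detour through the right column
    [0, 0, 0, 0],
]
print(astar(grid, (0, 0), (2, 0)))  # 8 steps instead of the direct 2
```

Running dozens of such searches per frame for a crowd of agents, on top of rigid-body physics, is a plausible reason the test scales with CPU cores.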
Feature test - Fill Rate (Single-Texturing and Multi-Texturing)
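As a reminder of what the fill-rate tests measure: the theoretical ceiling is simply the core clock multiplied by the number of pixel (or texture) units, and the single- and multi-texturing tests show how close a card gets to its pixel and texel ceilings respectively. A trivial sketch of that arithmetic, with a hypothetical card's numbers:

```python
def theoretical_fillrate(core_mhz: float, units: int) -> float:
    """Theoretical fill rate in megapixels (or megatexels) per second:
    core clock multiplied by the number of pixel (or texture) units."""
    return core_mhz * units

# Hypothetical card: 650 MHz core, 16 pixel pipelines
print(theoretical_fillrate(650, 16), "Mpix/s peak")
# Multi-texturing lays down several texels per pixel per pass, so the
# multi-texturing test approaches the texel rate, not the pixel rate.
```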
Feature test - Pixel Shader
The only shader (the face surfaces from the second game test) is singled out into a synthetic pixel test. The results agree well with our general picture of the balance of forces, but they still don't allow probing certain peculiarities of video cards, for example their computational versus texture-fetch preferences. Tests with various configurable shaders are much more convenient for that.
Feature test – Vertex Shader (Simple and Complex)
There are two shaders here. One is responsible for the standard transformation and preparation of triangles; the other computes a rather complex model from initial parameters to simulate grass.
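The "simple" workload boils down to the standard operation every vertex shader performs: multiplying each vertex by a 4x4 transformation matrix. A minimal CPU-side sketch of that operation (the matrix and vertex values are illustrative, not taken from the benchmark):

```python
def transform_vertex(m, v):
    """Multiply a 4x4 row-major matrix by a homogeneous vertex
    (x, y, z, w): the core of the 'simple' vertex-shader workload."""
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

# A translation by (2, 3, 4) applied to a point at the origin
translate = [
    [1, 0, 0, 2],
    [0, 1, 0, 3],
    [0, 0, 1, 4],
    [0, 0, 0, 1],
]
print(transform_vertex(translate, (0, 0, 0, 1)))  # moves to (2, 3, 4, 1)
```

The "complex" grass shader adds per-vertex procedural animation on top of this transform, which is why it stresses arithmetic throughput rather than raw triangle setup.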
The situation in the complex test is more like that in the other synthetic benchmarks.
It's a synthetic test made specially for SM3: it computes a physical model of a particle system, writes the results into a texture, and then uses them to render an image. That is, the accelerator serves as a hardware accelerator of computations; pixel shaders are also used. Considering the high shader capacity of the latest cards, we can only welcome this approach to unloading the long-suffering CPU, which executes DP code (rendering primitives) in DX and does other not always desirable things. Let's have a look at the results.
The RADEON X1800 and X1900 are not quite SM3 cards according to this test. What's the matter? The reason may be the lack of vertex texture fetch, which is necessary for SM3 (from the point of view of all manufacturers but ATI). Or there may be a different reason (a vertex shader compilation error, missing support for branching in vertex shaders on ATI hardware, no pixel texture fetch, or something else); it's hard to tell. But it's certainly alarming.
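To make the idea of the test concrete, here is a CPU-side sketch of the kind of math it offloads to the GPU: advancing each particle's state (which the benchmark keeps in a texture) by one explicit Euler integration step per frame. The time step, gravity constant, and initial state below are our own illustrative assumptions.

```python
# CPU sketch of the math the SM3 particle test runs on the GPU:
# each particle's position and velocity are advanced by one
# explicit Euler step per frame under gravity.

GRAVITY = -9.81  # m/s^2, acting on the y component (assumed units)

def euler_step(positions, velocities, dt):
    """Advance every particle by one time step; returns new state lists."""
    new_pos, new_vel = [], []
    for (x, y, z), (vx, vy, vz) in zip(positions, velocities):
        vy += GRAVITY * dt                       # apply gravity to velocity
        new_pos.append((x + vx * dt, y + vy * dt, z + vz * dt))
        new_vel.append((vx, vy, vz))
    return new_pos, new_vel

pos = [(0.0, 10.0, 0.0)]     # one particle, 10 m up
vel = [(1.0, 0.0, 0.0)]      # drifting along x
for _ in range(10):          # simulate 10 frames at dt = 0.1 s
    pos, vel = euler_step(pos, vel, 0.1)
print(pos[0])
```

On the GPU the same update runs as a pixel shader over a state texture, one particle per texel, which is why a missing texture-fetch path can break the test entirely.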
It's another SM3 test; it computes three-dimensional noise, a basic building block of many procedural textures. This test shows how promising a given accelerator is for procedural textures, which may become increasingly popular in real applications as the balance shifts toward computations. The results:
Everything is OK here: the cards rank strictly according to their computational capacity in pixel shaders.
So, the benchmark is not a failure; rather, it's a success. But we cannot really call it new: it's evolution, not revolution. It resembles the evolution of some game engines, which started with one shader model and then gradually gained another, rising to a new level of detail as well. This situation is typical of our times of entrenched DX9, which has reigned for several years already; performance is of major importance. Real advancements will become possible with the release of DX10 (or, at worst, SM4 for DX9, which is also planned) – we'll wait and see. For now, shaders will grow longer, rendering precision will become deeper, and textures will grow in both dimensions.
Alexander Medvedev (firstname.lastname@example.org)
February 13, 2006