iXBT Labs - Computer Hardware in Detail






DirectX 8.0: FAQ

By Virtual Warrior

<Q>: Does DirectX 8.0 (hereafter DX8) save programmers from routine work? Is it true that far less code needs to be written?

<A>: Yes, the source code gets much shorter. D3D initialization is far easier to implement. Previously you had to call 20-30 different functions in a strictly defined order and with strictly defined parameters! Now it is enough to fill in a structure of about 10 fields and make just two calls. Creating textures and images has also been simplified: instead of filling confusing structures with predefined parameters, a single function call (6 parameters) does it all!
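The "one structure, two calls" initialization described above can be sketched roughly as follows. This is an illustrative fragment, not complete production code: it requires the DirectX 8 SDK (d3d8.h), omits all error handling, and assumes `hWnd` is the application's window handle.

```cpp
// Hedged sketch of simplified DX8 initialization: fill one structure,
// make two calls. Requires the DirectX 8 SDK; error checks omitted.
#include <d3d8.h>

IDirect3DDevice8* initD3D8(HWND hWnd)
{
    IDirect3D8* d3d = Direct3DCreate8(D3D_SDK_VERSION);       // call 1

    D3DDISPLAYMODE mode;
    d3d->GetAdapterDisplayMode(D3DADAPTER_DEFAULT, &mode);

    D3DPRESENT_PARAMETERS pp = {};    // the ~10-field structure
    pp.Windowed         = TRUE;
    pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
    pp.BackBufferFormat = mode.Format;   // match the current display mode
    pp.hDeviceWindow    = hWnd;

    IDirect3DDevice8* device = nullptr;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                      D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                      &pp, &device);                          // call 2
    return device;
}
```

Compare this with DX7, where the same setup required separate DirectDraw creation, cooperative-level and display-mode calls, surface creation, and only then Direct3D device creation.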

<Q>: How do you rate DX8 performance? Is there a significant difference between old software and software written for DX8?

<A>: Games not written with DX8 in mind get a 5-10% performance increase thanks to the higher-quality DX7 libraries included in DX8 for compatibility.

Games written specifically for DX8 can gain anywhere from 5% to 50%, depending on which DX8 features they use. The maximum gain comes with heavy use of vertex shaders, and it requires a DX8-compatible accelerator.

<Q>: Is application optimization necessary for DX8 to make the difference from DX7 more noticeable?

<A>: It depends on what you mean by the difference. DX8 introduces pixel and vertex shaders, which place practically no limits on the imagination of effect developers. To use the other capabilities included in DX8 (e.g. effects based on the T-buffer), a game must be designed with those capabilities in mind.

<Q>: It seems that if we install DX8 and DX8 drivers and then run an application written for DX7, we will see nothing new in terms of speed and quality. Is that so?

<A>: As for quality - nothing new (at least I have noticed nothing); as for speed - see the performance question above.

<Q>: What can you say about the comfort of writing and the readability of the source code?

<A>: It has become MUCH easier compared to DX7!

<Q>: DX7 is said to match OpenGL 1.2 in capabilities. What can you say about DX8?

<A>: OpenGL (hereafter OGL) has extensions - a universal mechanism for adding new capabilities. Create a new extension and you get access to new hardware features. There is one problem (according to Tony Cox): in 2001, DX8-compatible cards from different manufacturers will appear (at least from ATI and NVIDIA). Their new capabilities will be exposed through corresponding OGL extensions, but there is no guarantee that equivalent capabilities will use the same extensions. So we will again have to write code that accounts for every existing card, as has already happened many times…

OGL has no pixel or vertex shaders, although they are quite a big step forward in terms of convenience and API universality. And even if shaders become available via the OGL extension mechanism, they will be a set of calls rather than data. Moreover, different manufacturers will have their own calls (i.e. their own extensions)!

A few words on the concept of pixel and vertex shaders. Hardware acceleration, as a rule, costs flexibility in any field. Hardware rasterization appeared, and we lost the ability to flexibly process the 2D image and build arbitrary multitexture effects; only the operations the hardware allows remain. Hardware T&L (transform and lighting) appeared, and we lost flexible control over coordinate transformation and the lighting of primitives. The shader concept gives programmers flexible control back: over geometry transformation (vertex shaders) and over the rasterization process (pixel shaders).

What do shaders offer that is completely new (that cannot be done with DX7 mechanisms)? Let us list several obvious differences (though there are many more)!

1. Mesh morphing: given two 3D models, an intermediate stage of a smooth transformation of one model into the other is computed under the control of a multiplier from 0 to 1.

Video clip: morphing.avi (1.65 MBytes)

All clips referenced in this article are encoded in DivX format.
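The morphing in item 1 boils down to per-vertex linear interpolation. The sketch below shows the math in plain C++ (`morphVertex` and `morphMesh` are our own illustrative names, not D3D calls); a DX8 vertex shader evaluates the same interpolation per vertex on the GPU, with the two poses supplied as separate vertex streams and the multiplier in a constant register.

```cpp
#include <cassert>
#include <cstddef>

struct Vertex { float x, y, z; };

// Interpolate between two poses of the same mesh; t runs from 0 to 1.
Vertex morphVertex(const Vertex& a, const Vertex& b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// Apply the interpolation to every vertex of the mesh.
void morphMesh(const Vertex* src, const Vertex* dst, Vertex* out,
               std::size_t count, float t) {
    for (std::size_t i = 0; i < count; ++i)
        out[i] = morphVertex(src[i], dst[i], t);
}
```

Sliding t from 0 to 1 over successive frames produces the smooth transformation shown in the clip.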

2. Motion blur based not on rendering the scene multiple times but on a more advanced method (working with vertices).

Video clips: blur-off.avi (116 KBytes) and blur-on.avi (181 KBytes)

3. A large number of light sources (more than 8), limited only by the number of special accelerator registers available to a shader!

Video clip: many-lights.avi (705 KBytes)

4. Skinning with a large number of matrices (limited only by the number of special registers; roughly 20 matrices per skeleton)!

5. Layered fog - multilayer fog (in gullies, above water, etc.).

These are only a few examples of what vertex shaders can do!
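Item 4, matrix-palette skinning, can be sketched in plain C++ as well. In a DX8 vertex shader the bone matrices would sit in constant registers and each vertex would carry its weights and matrix indices; here everything is ordinary CPU code with illustrative names of our own.

```cpp
#include <cassert>
#include <cstddef>

struct Vec3 { float x, y, z; };

// Row-major 3x4 transform: rotation/scale plus a translation column.
struct Mat34 { float m[3][4]; };

Vec3 transform(const Mat34& M, const Vec3& p) {
    return {
        M.m[0][0]*p.x + M.m[0][1]*p.y + M.m[0][2]*p.z + M.m[0][3],
        M.m[1][0]*p.x + M.m[1][1]*p.y + M.m[1][2]*p.z + M.m[1][3],
        M.m[2][0]*p.x + M.m[2][1]*p.y + M.m[2][2]*p.z + M.m[2][3],
    };
}

// Blend one vertex through several bone matrices; weights should sum to 1.
Vec3 skin(const Vec3& p, const Mat34* bones, const float* weights,
          const int* indices, std::size_t boneCount) {
    Vec3 out{0, 0, 0};
    for (std::size_t i = 0; i < boneCount; ++i) {
        Vec3 t = transform(bones[indices[i]], p);
        out.x += t.x * weights[i];
        out.y += t.y * weights[i];
        out.z += t.z * weights[i];
    }
    return out;
}
```

Fixed-function DX7 hardware blends at most a handful of matrices per vertex; a shader loop like this one is limited only by the constant register count mentioned above.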

A shader is a program in a special code, executed in hardware by any DX8-compatible accelerator. A vertex shader is limited to 127 operations (code size); each operation takes roughly one accelerator clock. The operations include various arithmetic instructions and conditions working on a special set of registers. In effect, an accelerator contains two special-purpose programmable processors: one processes vertex coordinates (transformation) and sets up the initial parameters of triangles, and the other computes the resulting color of each pixel (rasterization).

By the way, graphics processors such as GeForce/GeForce2 (and cards based on them) do not support vertex shaders. With these GPUs, DX8 can run shaders only in software T&L mode, so you lose the whole gain from hardware T&L.

As for Radeon-based cards, the situation is similar to GeForce/GeForce2: no vertex shader support. The driver can provide shader support in software T&L mode (plus all the benefits of optimized vertex buffers), but hardware T&L then remains disabled!

The situation with pixel shaders is a bit more complicated. Basic effects:

1. Custom lighting models (Phong, Blinn, BRDF, etc.).

Video clips: dp3-point-light.avi (581 KBytes) and dp3-direct-light.avi (1.45 MBytes)

2. Realistic shadows (where even the bumps of a relief cast correct shadows).

3. Clipping planes of arbitrary configuration, accurate to a pixel.

In short, all combinations of per-pixel lighting plus some more exotic possibilities. A pixel shader is limited to 8 operations; each takes roughly one clock.
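The per-pixel lighting of item 1 rests on a dot product: the DX8 pixel-shader "dp3" instruction computes exactly this between a normal fetched from a normal map and a light vector. The plain-C++ sketch below shows the underlying math (function names are ours, not API calls); the hardware additionally saturates results to [0, 1].

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Per-pixel diffuse term: N.L, clamped at zero like the hardware does.
float diffuse(const Vec3& n, const Vec3& l) {
    float d = dot(n, l);
    return d > 0.0f ? d : 0.0f;
}

// Blinn specular term: (N.H)^power, with H the half vector between the
// light and view directions (assumed unit-length here).
float blinnSpecular(const Vec3& n, const Vec3& h, float power) {
    return std::pow(diffuse(n, h), power);
}
```

Evaluating these terms per pixel rather than per vertex is what makes the dp3 lighting in the clips look smooth on coarse geometry.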

Now about micropolygons. In DX8, micropolygons are what games call particles: little flat images in 3D space lying in the plane of the screen. The implementation of micropolygons carries a number of restrictions:

  1. Maximum size = 64 pixels
  2. A texture can only be applied to it whole.
  3. Manipulations with texture coordinates are impossible.
  4. No multitexturing.

The point is that these micropolygons rasterize twice as fast as ordinary ones - and given 2 polygons per particle, four times as fast.

Nevertheless, the strict limitations greatly narrow the field of application of such particles. They are easy to use for effects built from a great number of particles sharing the same texture: snow, rain, fire, clouds... There wasn't enough power for such effects before; a decent cloud or fire requires 4-8 million micropolygons. According to various figures, the Xbox will process from 150 to 300 million micropolygons per second (i.e. 3 million rain drops per frame at 100 fps).
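The frame budget implied by those figures can be checked with a one-liner; the numbers are the article's quoted estimates, not measurements.

```cpp
#include <cassert>

// Back-of-the-envelope check: how many particles fit into one frame at a
// given sustained particle throughput and frame rate.
long long particlesPerFrame(long long particlesPerSecond, int fps) {
    return particlesPerSecond / fps;
}
```

At the quoted 300 million particles per second and 100 fps, that is 3 million particles in every frame, matching the rain-drop estimate above.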

DX8 includes a new addition to Direct3D - D3DX8. While in DX7 it was just a set of utilities, now it's a powerful system that includes:

1. A class of multitexture effects loaded from a text file.

Video-clip: cem.avi (1.02 Mb)

2. LOD technology based on progressive meshes.

3. Cache optimization of models.

And many other small but no less useful things…

And two more innovations in DX8:

Volume textures. An interesting but controversial feature: on the one hand, these textures enable various volumetric, ephemeral effects; on the other, texture memory consumption is so large that almost nothing can justify it!!! Volume textures require hardware support.
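The memory complaint is easy to quantify: a volume texture's size grows with the cube of its resolution. The sketch below is simple arithmetic, not an API call.

```cpp
#include <cassert>
#include <cstddef>

// Memory footprint of an uncompressed volume texture with a given edge
// length (texels per side) and bytes per texel.
std::size_t volumeTextureBytes(std::size_t edge, std::size_t bytesPerTexel) {
    return edge * edge * edge * bytesPerTexel;
}
```

A single 256x256x256 texture at 4 bytes per texel already needs 64 MB - more than the entire onboard memory of a typical card of the time.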

High-order primitive rendering. A wonderful thing - a model loses its angles and becomes smooth and round! Unfortunately, support is provided for splines rather than subdivision surfaces, which means patch-based model structure and the problem of smoothly joining patches - and this complicates the modeling process tenfold! That's why I think not all application developers will turn to this interesting DX8 feature. High-order primitive rendering requires hardware support.

<Q>: What completely new concepts are there in DX8?

<A>: There are many:

  • Direct3D: pixel and vertex shaders.
  • DirectPlay Voice: real-time voice communication during a game. It can be combined with DirectSound to create various sound effects and with DirectPlay for transmission over the Internet.
  • DirectInput has changed significantly. An action map has appeared: you can now load a list of your own commands and the DirectInput constants corresponding to them (key codes, mouse events, etc.), and when those events occur the program receives your own commands! The registry keeps sets of commands for different types of games that can be used directly with a device. Besides, many other new things have appeared…

<Q>: Does DX have any prospects of moving to other platforms?

<A>: You will only see Windows and Xbox there. Others are not needed. And I think the Xbox will soon become the main platform. Why would DX need other platforms? Nobody will play games on a corporate Linux server; everybody will play on the Xbox at home. That's my opinion.

<Q>: Is DX8 intended only for games?

<A>: Yes. Microsoft has stated that OpenGL is intended for serious applications where stability matters more than speed, and DX is for games, where speed must be maximal.

<Q>: The market still offers no accelerators with full hardware shader support. What can you say about emulation - how much does it reduce performance on ordinary (TNT-class) accelerators and on the latest accelerators that partially support shaders?

<A>: As for pixel shaders: if there is no hardware support, there will be no support at all. The reason is that pixel shaders operate on every pixel, so emulating them would require rasterizing the entire image in software.

Vertex shaders look better: on ordinary cards without hardware T&L they should not cause a performance drop. We gain from the use of optimized vertex buffers, while the vertex shader interpreter costs performance; the two roughly compensate each other.

Microsoft is against partial shader support - that would in effect be a relapse into the bad old days of incompatible OGL extensions. You either support the entire instruction set of a version-N shader, or you don't support it at all. From this point of view, the GF and GF2 cards are transitional from DX7 to DX8 and have no DX8 shader support.

<Q>: Is there any significant difference in the texture management system?

<A>: In DX8 resource management is generalized: a resource is now not only a texture but also vertex and index buffers. Outwardly, management works the same way - each resource has a priority, and the driver (or D3D) moves it into one memory or another according to its priority and current needs. Microsoft has announced considerable improvements in the algorithms of the DX8 resource manager.
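A toy model of the priority policy just described might look like this. The structure and eviction rule are illustrative guesses at the behavior, not the actual D3D resource manager code.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// A managed resource: texture, vertex buffer, or index buffer.
struct Resource { int id; int priority; std::size_t bytes; };

// When video memory runs short, evict lowest-priority resources first
// until `needed` bytes are freed; returns the ids chosen for eviction.
std::vector<int> chooseEvictions(std::vector<Resource> pool,
                                 std::size_t needed) {
    std::sort(pool.begin(), pool.end(),
              [](const Resource& a, const Resource& b) {
                  return a.priority < b.priority;
              });
    std::vector<int> evicted;
    std::size_t freed = 0;
    for (const Resource& r : pool) {
        if (freed >= needed) break;
        evicted.push_back(r.id);
        freed += r.bytes;
    }
    return evicted;
}
```

The point of the generalization is that vertex and index buffers now flow through the same priority machinery that previously handled only textures.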

<Q>: Are there any convenient means of saving context (as in OGL), where a sequence of operations can be saved and named for later recall (display lists)?

<A>: Such a possibility was present even in DX7 (state blocks). Functionally it offers somewhat less than OGL display lists: you can record only state (render state/texture stage state/transform/viewport/clip plane/texture/light/material), not data such as vertices or indices.

Shaders generalize the state block concept: you build the required sequence of operations yourself and save it under a unique identifier.

<Q>: Is there anything new in the Scene Graph API?

<A>: Microsoft promised a corresponding API but did nothing in DX8. It seems this direction will be closed; DX8 contains no mention of it.

Appendix: new capabilities of DX8


Full integration of DirectDraw and Direct3D

Application initialization is simplified, and data allocation and management are improved. API integration also made it possible to set up several parallel input vertex streams for more flexible rendering.

Programming language for vertex processing

Allows programmers to write their own shaders for model animation and transformation, skinning, custom lighting models, various techniques involving environment maps, procedural geometry, and any other geometry-processing algorithms.

Programming language for pixel processing

Allows programmers to write their own shaders for texture blending, per-pixel lighting, bump mapping, per-pixel environment mapping for photorealistic highlights, and other algorithms.

Multisampling rendering

Allows various effects based on full-screen anti-aliasing and other multisampling effects (motion blur, depth of field).

Point sprites

Highly efficient rendering of particle systems for sparks, explosions, rain, snow, etc.

Volume textures

Attenuation with distance for per-pixel lighting, volumetric atmospheric effects, etc.

High order primitive support

Improves the look of models and simplifies their transfer from major 3D modeling packages.

High-level technologies

Includes plug-ins for exporting models and skins, detailed geometry (LOD), and high-order surfaces (splines) to Direct3D.

Utility library extension (Direct3DX)

A new "layer" has been added to the utility library intended to simplify the creation and debugging of Direct3D programs; it implements common tasks such as creating pixel and vertex shaders and other work with models.


The sound components have changed as well. DirectMusic and DirectSound now form a more uniform API for playing all formats, and many new processing capabilities, including dynamic processing, have appeared. Let's enumerate them:

Message-based (MIDI) and digitized (WAV) sound data are now processed by a single playback and processing mechanism.

WAV files and resources can now be loaded and played with the standard means of DirectMusic. Although the DirectSound API is still supported, in most cases it is simpler to move playback, mixing, and synchronization of pre-created audio streams to DirectMusic. For more complex applications, sound capture, and full-duplex processing, DirectSound remains preferable.

A more flexible model of internal data routing and processing (the audiopath model).

Where channels used to be assigned to ports, each of which ultimately fed a DirectSound buffer, channels are now assigned to "audiopaths", which control the routing and processing of data. This means that data from the synthesizer can be sent for processing and output in several ways simultaneously, different filters can be applied to different channels, and the signal path can change dynamically during playback.

DLS2 synthesis

The DLS2 bank and synthesizer format is now fully supported, including 6-stage envelopes, layers, additional low-frequency oscillators, and other capabilities taken from professional formats such as SoundFont 2.0. It seems many typical sf2 banks can now be converted to DLS2 (and played back) practically without losses.

Special effects

Before going to the playback buffers, the resulting streams can be processed in effect buffers. Standard effects such as reverberation, distortion, and echo are provided in DirectX Audio, and your own effects can be programmed as well.

3-D sound

DirectX Audio supports stream processing with standard presets (for reverberation, obstruction, occlusion) according to the I3DL2 (Interactive 3D Audio Level 2) specification, and allows the parameters of these effects to be adjusted.

Audio script

Composers and sound designers can simplify their work by writing scripts. As a result, the program doesn't track the musical details and simply invokes playback of the corresponding script. For example, an event in a game can call a script that changes the current soundtrack.

MIDI controller support is extended

Envelopes can be driven by both RPN and NRPN controllers.

Playback control is considerably extended

New possibilities have appeared in playback, looping, and synchronization. Playback parameters can be set quickly with a single call. Tracks can be generated on the fly and changed before each playback - for example, to change the underlying chords.


Containers

An object that keeps all components created with DirectMusic Producer in one file, so they can be loaded with a single call.

Lyrics tracks

Support for text synchronized with the sound. An object representing a song has also appeared: it holds segments, automates their playback, and automatically generates transitions between them.

Data caching is improved

As in other services.


DirectPlay has changed a lot. It received a new set of interfaces, "closer" access to the hardware, and higher performance.

Interfaces have been rewritten entirely

The complexity of creating a network application is considerably reduced thanks to separate interfaces for peer-to-peer and client/server sessions. The interfaces for creating a DirectPlay session are the following:

  • IDirectPlay8Peer provides methods for creating peer-to-peer sessions.
  • IDirectPlay8Client provides methods for creating the client part of a client/server application.
  • IDirectPlay8Server provides methods for creating the server part of a client/server application.

Voice transmission is added

DirectPlay Voice provides a set of interfaces for building real-time voice communication into applications. The following interfaces are available:

  • IDirectPlayVoiceClient provides methods for creating and controlling a client in a DirectPlay Voice session.
  • IDirectPlayVoiceServer provides methods for creating and controlling a DirectPlay Voice session.
  • IDirectPlayVoiceSetup is used for testing and setting up DirectPlay Voice audio.

Address information is now based on URLs rather than GUIDs

Previous versions of DirectPlay addressed binary data using GUIDs, which was difficult both to implement and to read. In DirectX 8.0, DirectPlay introduces addressing in URL format. The following interfaces serve to create and manipulate the new address format:

  • IDirectPlay8Address provides general address methods for creation and control of DirectPlay addresses.
  • IDirectPlay8AddressIP provides services specific to the IP service provider.

Higher scalability and improved memory system control were added

The growth of end users' network bandwidth has affected the design and implementation of network games. DirectPlay adds improved flow control, simplifying the development and design of scalable, more stable applications, including massively multiplayer ones.

Support for firewalls and Network Address Translators was improved

Writing network games that work across Network Address Translators (NATs), firewalls, and other ways of splitting Internet connections can be difficult, especially for protocols with unguaranteed delivery (UDP). DirectPlay 8.0 supports NAT solutions (where possible) to simplify developers' lives. The DirectPlay 8 TCP/IP service provider uses a single developer-defined UDP port for game data, which makes it possible to configure firewalls and NATs accordingly.


Microsoft DirectInput for Microsoft DirectX® 8.0 introduces a new capability: action mapping. It establishes a correspondence between actions and devices that does not depend on the presence of specific parts (e.g. additional axes or buttons). Action mapping simplifies the input loop and makes some custom components (custom game drivers, custom device profilers, and custom configuration user interfaces) unnecessary in games.
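The idea behind action mapping can be sketched in a few lines of plain C++. This is a toy model of the concept, not the DirectInput API: the game declares abstract actions, binds raw device events to them once, and the input loop then receives actions instead of device specifics. The names and event codes below are hypothetical, not DirectInput constants.

```cpp
#include <cassert>
#include <map>

// Abstract game actions, independent of any particular device layout.
enum class Action { None, MoveForward, Fire, Jump };

// Maps raw device event codes (key codes, button ids, ...) to actions.
class ActionMap {
public:
    void bind(int deviceEvent, Action a) { map_[deviceEvent] = a; }
    Action lookup(int deviceEvent) const {
        auto it = map_.find(deviceEvent);
        return it == map_.end() ? Action::None : it->second;
    }
private:
    std::map<int, Action> map_;
};
```

With such a table, the same game logic works whether "fire" is bound to a mouse button, a keyboard key, or a joystick trigger - which is precisely the device independence the paragraph above describes.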




Copyright © Byrds Research & Publishing, Ltd., 1997–2011. All rights reserved.