iXBT Labs - Computer Hardware in Detail






Analysing NVIDIA G8x Performance in Modern Games

This article was initially intended to continue our analysis of shader units and to show how their number affects NVIDIA G8x performance in modern games. We planned to adjust the parameters of a G80-based graphics card to make it resemble the mid-range G84-based products. Our plan was to use RivaTuner by Alexei Nikolaychuk to leave only 32 active unified processors in the GeForce 8800 and to change GPU and memory clock rates.

The GPU clock rate was to be reduced to the level of the G84, and the video memory clock had to be adjusted to scale the memory bandwidth of the GeForce 8800 down to the level of the GeForce 8600 (the memory clock must be three times lower, because the memory buses differ in width by 384/128 = 3 times). Theoretically, such cards would have differed only in the number of ROPs (24 versus 8), as well as in on-die caches and other optimizations.
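The bandwidth arithmetic above can be sketched in a few lines (the bus widths come from the article; the clock value is an illustrative assumption, not a measured figure):

```python
# Peak memory bandwidth in GB/s: bytes per transfer (bus width / 8)
# times the effective memory clock in GHz.
def bandwidth_gb_s(bus_width_bits, effective_clock_ghz):
    return bus_width_bits / 8 * effective_clock_ghz

# GeForce 8600-class card: 128-bit bus at an assumed 1.4 GHz effective
g84_bw = bandwidth_gb_s(128, 1.4)
# GeForce 8800 with its memory clock cut threefold: 384-bit bus
g80_scaled_bw = bandwidth_gb_s(384, 1.4 / 3)
# Both work out to the same 22.4 GB/s, so only ROP count, caches, and
# other internal optimizations would have differed between the setups.
```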

Our objective was to determine how much the different number of ROPs affects performance, and how much the shader units of the G84 and G80 differ. We wanted to analyze the performance of modern games using NVIDIA PerfKit. Unfortunately, our plans never got to first base: the mandatory hardware performance counters of the G80 do not work when some shader units are disabled, and such an analysis would have been useless without them. So we decided not to trash our test results and to use them in a brief comparative analysis of performance and some other parameters of modern 3D games.

One more idea was discarded in the process: stream_out_busy readings. This counter proved practically useless, even though most of our tests are Direct3D 10 applications. It indicated that stream output units were used in only one game, Lost Planet: Extreme Condition, and even there they were loaded at less than 1%, so we dropped this counter as well.

Testbed configuration and settings

We used the following testbed configuration:

  • CPU: AMD Athlon 64 X2 4600+ Socket 939
  • Motherboard: Foxconn WinFast NF4SK8AA-8KRS (NVIDIA nForce4 SLI)
  • RAM: 2048 MB DDR SDRAM PC3200
  • Graphics cards: NVIDIA GeForce 8800 GTX 768MB and NVIDIA GeForce 8600 GT 256MB
  • HDD: Seagate Barracuda 7200.7 120 GB SATA
  • Operating system: Microsoft Windows Vista Home Premium
  • Video driver: NVIDIA ForceWare 163.16 (instrumented)

We used only one video mode: the most popular resolution of 1280x1024 (or 1280x960 for games that do not support the former), with 4x MSAA and 16x anisotropic filtering. Both features were enabled from game options; nothing was changed in the video driver's control panel.

Our bundle of game tests includes recent projects. We gave preference to games that support Direct3D 10 or contain new, interesting 3D techniques. Here is the full list: Call of Juarez DX10 benchmark, Company of Heroes, S.T.A.L.K.E.R.: Shadow of Chernobyl, Lost Planet: Extreme Condition DX10 benchmark, Colin McRae Rally: DiRT, PT Boats: Knights of the Sea DX10 benchmark, SEGA Rally Revo, Clive Barker's Jericho. Several of these games have no built-in way to run demos, so we had to test them manually, which makes the results less precise. Additional software: NVIDIA PerfKit 5, RivaTuner 2.05, and Microsoft PIX for Windows from the DirectX SDK.

Unfortunately, our tests did not include such interesting applications as World in Conflict, BioShock, Medal of Honor: Airborne, Test Drive Unlimited, TimeShift, Call of Duty 4: Modern Warfare, Half-Life 2: Episode 2, etc. Some demos and games did not make it in time, and a couple of projects, BioShock and Test Drive Unlimited, failed to run under the PIX debugger.

Test Results

Lost Planet: Extreme Condition

We start our analysis with one of the most technically advanced games of 2007. Lost Planet: Extreme Condition came to the PC from the Xbox 360, but its engine was reworked considerably for Direct3D 10 GPUs. Compared to the console version (a high-tech game in its own right), it adds some new features: an FP16 frame buffer, higher-quality motion blur and depth of field, fur, more samples and improved shadow map filtering, ambient occlusion, soft particles, advanced parallax mapping, etc.

Lost Planet was tested with a built-in demo consisting of two game levels. They differ a lot: the first level does not contain many objects, so its performance is limited mostly by the graphics card; the second level contains a lot of objects, so performance is limited by both CPU and GPU.

NVIDIA PerfKit counter GeForce 8800 GTX (G80) GeForce 8600 GT (G84)
FPS (avg) 20.8 4.5
FPS (min) 11.0 2.1
Video memory, MB 507 505
batch count (avg) 738 799
batch count (max) 4375 4336
primitive count (avg) 497858 475061
primitive count (max) 2335540 2104705
setup triangle count (avg) 83318 82474
setup triangle count (max) 311494 292719
gpu_idle, % 0.1 0.1
rop_busy, % 7.9 5.2
texture_busy (avg), % 74.4 77.5
texture_busy (max), % 83.4 90.6
shader_busy (avg), % 84.6 88.3
shader_busy (max), % 89.0 98.0
geom_busy, % 0.2 1.2
input_assembler_busy, % 3.1 0.8

In our tests we used a DirectX 10 demo of the game with a built-in benchmark, which does not reflect the real performance of the release with all patches applied. That's why we could evaluate the frame rate only in relative terms; the rendering speed of the latest release is much higher. We can see that the G84 is heavily outperformed by the G80. Let's locate the bottleneck. First of all, the GeForce 8600 may be slowed down by its video memory size, which is half of what the game uses. Secondly, judging by the results, rendering speed is affected by the number of shader units and TMUs: the difference in performance is proportional to the difference in units.
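As a rough sanity check on that proportionality, we can compare a theoretical shader-throughput ratio with the measured FPS ratio. The unit counts and shader clocks below are the public specifications of these cards, not figures from our measurements:

```python
# ALU count x shader clock as a crude proxy for shader throughput
g80_alus, g80_shader_mhz = 128, 1350  # GeForce 8800 GTX (public specs)
g84_alus, g84_shader_mhz = 32, 1188   # GeForce 8600 GT (public specs)

theoretical_ratio = (g80_alus * g80_shader_mhz) / (g84_alus * g84_shader_mhz)
observed_ratio = 20.8 / 4.5  # avg FPS values from the table above

# The two land close together: roughly 4.5 theoretical vs 4.6 measured,
# which fits a shader/texture-limited workload.
```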

Let's have a look at some interesting results. The average number of draw calls does not reflect the real picture, because it depends on the level: there are not many calls in the first scene, while in the second scene the number exceeds 4000, which is a lot even for state-of-the-art games. The amount of geometry in this game is above average, though I don't quite understand the difference between primitive count and setup triangle count. Both GPUs were working constantly, so performance does not depend on the CPU. Geometry and raster units are not loaded much, while texture and shader units work at full capacity. We haven't seen such active usage of shader units before. That's what I call good optimization: performance depends on the GPU only.

PT Boats: Knights of the Sea

This benchmark has been released recently, and we are interested in many of its parameters. Unlike most of our tests, this is not a first-person shooter. Besides, this demo reveals the innovations used in the game. It has one of the best (if not the best) visualizations of water surfaces: a geometric surface with excellent dynamics, reflections, and refractions. Other effects are also very good: well-filtered soft shadows, heavy post-processing, and complex geometric models of vehicles. The demo is written for the Direct3D 10 API and uses some of its features, such as geometry shaders.

In our tests we ran the DirectX 10 benchmark with maximum quality settings for the given resolution. Interestingly, these settings mostly affect LOD and do not deteriorate render quality much.

NVIDIA PerfKit counter GeForce 8800 GTX (G80) GeForce 8600 GT (G84)
FPS (avg) 21.2 1.8
FPS (min) 3.3 0.5
Video memory, MB 820 782
batch count (avg) 551 610
batch count (max) 1280 1298
primitive count (avg) 686565 707890
primitive count (max) 1067049 1063914
setup triangle count (avg) 358759 372106
setup triangle count (max) 728364 697458
gpu_idle, % 38.7 0.0
rop_busy, % 13.0 4.6
texture_busy (avg), % 24.6 47.9
texture_busy (max), % 41.3 73.3
shader_busy (avg), % 46.6 56.2
shader_busy (max), % 76.0 77.7
geom_busy, % 0.2 0.9
input_assembler_busy, % 5.6 0.6

Performance of the GeForce 8600 dropped very low in this case. Even the GeForce 8800 is slow at maximum settings, and if we consider the minimum frame rate, the G84 is a catastrophe. We take into account, of course, that the instrumented driver together with the PIX debugger reduces performance a little. But such low performance cannot be written off solely to the driver. I think the main factor here is the high video memory requirement: the demo uses up to 800 MB at maximum settings! Other characteristics of the GPUs and cards also affect performance, of course; it may be limited by computing and texturing performance from time to time.

Interestingly, the number of draw calls is not that big. Perhaps the program was optimized after all. The amount of processed geometry is large even for a modern game, possibly the effect of almost disabled LOD. Besides, the GeForce 8800 did not work at full capacity: it was idle almost half of the time, so total performance was apparently limited by the CPU. The demo loads TMUs and ALUs well, but the readings do not reach 100%, so performance is not limited by any single component. Judging by the results, the load on geometry units is not big, despite the number of triangles.
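The gpu_idle reading can be turned into a crude estimate of what the frame rate would be if the CPU never stalled the GPU. This is our own back-of-the-envelope interpretation, not a PerfKit formula:

```python
def gpu_limited_fps(measured_fps, gpu_idle_fraction):
    # If the GPU sat idle for this fraction of frame time waiting on the
    # CPU, removing the stall would scale FPS by 1 / (1 - idle).
    return measured_fps / (1.0 - gpu_idle_fraction)

# GeForce 8800 GTX in PT Boats: 21.2 avg FPS with gpu_idle at 38.7%
g80_estimate = gpu_limited_fps(21.2, 0.387)  # roughly 35 FPS
```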

Colin McRae Rally: DiRT

It's quite a new game whose performance depends on the graphics card in the first place, though it still needs some optimization. The latest game of the CMR series demonstrates complex geometry, high-quality particle and reflection effects, and heavy post-processing. It's a multiplatform game, but it still uses advanced technologies, because the latest generation of consoles has GPUs on the level of the previous generation of PC GPUs.

As the game has no built-in benchmark and does not allow recording gameplay, we had to drive the same track under the same conditions to analyze performance. Such measurements are highly inaccurate, so each graphics card was tested three times, and the results were averaged.

NVIDIA PerfKit counter GeForce 8800 GTX (G80) GeForce 8600 GT (G84)
FPS (avg) 35.2 11.9
FPS (min) 5.3 1.9
Video memory, MB 402 397
batch count (avg) 1772 1798
batch count (max) 3274 3656
primitive count (avg) 597935 614332
primitive count (max) 1138591 1157551
setup triangle count (avg) 145346 149966
setup triangle count (max) 354265 375663
gpu_idle, % 15.1 0.3
rop_busy, % 10.2 10.6
texture_busy (avg), % 37.8 42.3
texture_busy (max), % 59.6 61.5
shader_busy (avg), % 60.0 82.0
shader_busy (max), % 80.4 87.3
geom_busy, % 0.8 3.8
input_assembler_busy, % 11.7 3.8

The frame rate of this game at maximum settings is irritatingly low. You can play it on the G80-based card, although it may get too slow at times. But if you have the GeForce 8600 GT, you'll have to reduce image quality in game settings to make the game run smoothly. The G84 is approximately three times slower in this game than the G80 - the usual situation. DiRT performance on the GeForce 8800 is sometimes limited by the processor, while the GeForce 8600 is always loaded to the brim. The low performance of the latter is caused by fewer shader units and by insufficient video memory (for high quality settings).

In our opinion, the game depends on the processor because of the many draw calls, up to 3600 in extreme cases. In the case of the G84, the game's high CPU requirements are hidden by the slow graphics card. But the G80-based card clearly shows that the game is CPU-dependent: the GPU is idle about 15% of the time, and its units are not fully loaded. There is a lot of geometry in this game, although geometry units are not heavily loaded; the same goes for raster units. The same cannot be said about texture and shader units, which work at 40-80% of capacity, depending on their type and the GPU. In this case the load on shader units is higher than on texture units.

Call of Juarez

This game is older, but it has a high-tech engine, especially considering that we used a benchmark of the updated Direct3D 10 version of the game. This version is technologically different from the initial release and offers many new features: more objects and geometry, an improved particle system that uses geometry shaders, high-quality soft shadows with improved shadow map filtering, new textures, advanced parallax mapping, and alpha to coverage.

We used a stand-alone benchmark whose technologies and optimizations are similar to the game with the latest patch. Performance of Call of Juarez is almost completely limited by the graphics card. The game contains a lot of geometry and pixel processing, which is done by unified processors. The CPU might limit performance only with many draw calls, but the game is optimized to reduce their number.

NVIDIA PerfKit counter GeForce 8800 GTX (G80) GeForce 8600 GT (G84)
FPS (avg) 23.1 7.2
FPS (min) 4.5 1.2
Video memory, MB 541 530
batch count (avg) 412 428
batch count (max) 722 724
primitive count (avg) 1380734 1390771
primitive count (max) 2308458 2306383
setup triangle count (avg) 851683 851585
setup triangle count (max) 2920384 2753703
gpu_idle, % 0.2 0.1
rop_busy, % 12.8 13.1
texture_busy (avg), % 23.4 27.5
texture_busy (max), % 53.7 36.5
shader_busy (avg), % 69.6 77.8
shader_busy (max), % 91.7 96.7
geom_busy, % 4.4 4.8
input_assembler_busy, % 33.3 12.1

The updated engine and support for new effects made this game even slower. That's the effect of maximum settings and a relatively high resolution, but less than 30 fps on a high-end graphics card is disappointing... However, the game was modified in cooperation with AMD, so this company's solutions perform better in it, and the updated code is too slow on NVIDIA cards. Perhaps such geometry algorithms are better suited to the AMD architecture. The G84-based card is approximately three times slower than the G80-based card. In both cases performance does not depend on the CPU, only on the graphics card.

The game uses 530-540 MB of video memory, which contributed to the low performance of the GeForce 8600. There are not many draw calls, which can be explained by certain optimizations. The amount of processed geometry is huge: the number of polygons in a frame may reach almost three million! ROPs and geometry units are not heavily loaded, and the average load on texture units is about one fourth. But we can clearly see that performance is limited by shader units. It's proof that rendering speed in this game depends primarily on the number and frequency of unified processors, which handle vertex, pixel, and geometry shaders.

Company of Heroes

This is another game that is not a shooter - it's a real-time strategy. Unfortunately, we used the Direct3D 9 version of the game without the Direct3D 10 features added with patches. The game cannot boast many modern effects, but it still offers the most popular features: soft shadows, post-processing, bump mapping, high-quality lighting and textures, and particles.

We've mentioned many times that the benchmark built into Company of Heroes does not reflect gaming performance, because it uses a scripted scene, not gameplay. But it's still interesting to see the difference in cinematic render speed on various graphics cards.

NVIDIA PerfKit counter GeForce 8800 GTX (G80) GeForce 8600 GT (G84)
FPS (avg) 92.6 41.2
FPS (min) 6.9 5.6
Video memory, MB 614 601
batch count (avg) 205 214
batch count (max) 931 930
primitive count (avg) 74895 77411
primitive count (max) 231863 233044
setup triangle count (avg) 19645 20159
setup triangle count (max) 98149 91632
gpu_idle, % 36.1 0.4
rop_busy, % 9.2 11.9
texture_busy (avg), % 29.3 37.3
texture_busy (max), % 70.8 96.2
shader_busy (avg), % 49.8 86.9
shader_busy (max), % 91.0 96.4
geom_busy, % 0.4 7.5
input_assembler_busy, % 5.6 2.5

The game is rather old (we haven't tested the patches with Direct3D 10 support), so the frame rate is relatively high. The low minimum FPS can be explained by dynamically loaded content and peculiarities of the engine. The G84 is only twice as slow as the G80: judging by the gpu_idle readings, the latter was idle more than one third of the time, limited by the CPU.

As for other counters, we are surprised to see so few vertices and triangles per frame. Video memory usage, however, is high - all textures and models seem to be loaded at once. There are not many batches, and the small amount of geometry indirectly confirms it. Traditionally, ROPs, input assembler, and geometry units are not loaded much, while texture and shader units are very active, especially in the G84. Perhaps the performance of the scripted scene is sometimes limited by texture units and sometimes by shader units; the ALU load occasionally came close to full even on the G80.

S.T.A.L.K.E.R.: Shadow of Chernobyl

We included this game in our tests because of its popularity and technical originality. It uses some interesting new solutions: deferred shading, many per-pixel light sources, filtered soft shadows from several sources, simple parallax mapping on many surfaces, heavy post-processing, etc.

Fortunately for testers, the developers added an option to record and play demos, as well as a benchmark in which gameplay is not recorded - a camera just flies around the scene. Not a true test, but better than nothing. The game does not allow multisampling because of deferred shading, so we had to restrict ourselves to the usual 1280x1024 mode.

NVIDIA PerfKit counter GeForce 8800 GTX (G80) GeForce 8600 GT (G84)
FPS (avg) 69.7 23.0
FPS (min) 20.7 8.1
Video memory, MB 610 592
batch count (avg) 1887 2014
batch count (max) 3468 3545
primitive count (avg) 760388 783642
primitive count (max) 1334276 1330498
setup triangle count (avg) 230065 229739
setup triangle count (max) 436388 424331
gpu_idle, % 22.3 0.6
rop_busy, % 15.3 17.4
texture_busy (avg), % 35.5 50.7
texture_busy (max), % 60.8 66.4
shader_busy (avg), % 61.0 78.7
shader_busy (max), % 86.1 88.0
geom_busy, % 1.5 5.3
input_assembler_busy, % 20.8 8.5

Interestingly, the performance difference between such different graphics cards was not that big, although the 3:1 frame-rate ratio holds again. Besides, the render speed of the GeForce 8800 GTX in S.T.A.L.K.E.R. was limited by the CPU: the GPU was idle almost one fourth of the time. Judging by the FPS values, you can play the game on the G80-based card, but the G84 is not powerful enough for maximum settings. Game performance depends mostly on the CPU and on the shader/texture units of the GPU.

There were quite a few D3D draw calls: 2000 on average, 3500 at maximum. The amount of geometry processed per frame is average for these days, but the input assembler is loaded more than usual. This may indicate that other GPU units fetch a lot of data from memory. The heavier load on the input assembler in the G80 versus the G84 can be explained by more input data due to the higher frame rate.
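The per-frame batch counters can be restated as a per-second draw-call rate on the CPU side (a sketch using the averages from the table above):

```python
def draw_calls_per_second(avg_batches_per_frame, avg_fps):
    return avg_batches_per_frame * avg_fps

g80_rate = draw_calls_per_second(1887, 69.7)  # GeForce 8800 GTX
g84_rate = draw_calls_per_second(2014, 23.0)  # GeForce 8600 GT
# The faster card pushes roughly 130000 calls per second through the
# driver versus about 46000 on the slower one - nearly a threefold
# difference in CPU-side submission work, which fits the idle readings.
```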

Interestingly, the game actively uses both texture and shader units, but the latter affect the overall render speed more - they have more work to do. Even though the game's Direct3D 9 engine does not allow multisampling, the ROP load is above average, which points to heavy post-processing and several render buffers.

SEGA Rally Revo

Another rally game on our list. Unlike Colin McRae Rally, it does not have that many interesting technologies, but it's a model of an average multiplatform game. I cannot say the engine is very simple - it supports shadow maps, dynamic reflections, bump mapping, and post effects - but it's plain compared to other games. All the more interesting to see how various graphics cards cope with it.

The game does not allow running tests or recording gameplay, so we had to use the same scheme as with DiRT: we drove the same track several times and then averaged the results.

NVIDIA PerfKit counter GeForce 8800 GTX (G80) GeForce 8600 GT (G84)
FPS (avg) 73.6 33.5
FPS (min) 14.4 4.9
Video memory, MB 331 336
batch count (avg) 1424 1493
batch count (max) 2989 3047
primitive count (avg) 834790 854697
primitive count (max) 1777959 1661084
setup triangle count (avg) 214693 226801
setup triangle count (max) 780875 785677
gpu_idle, % 30.9 0.5
rop_busy, % 13.8 21.4
texture_busy (avg), % 28.1 36.9
texture_busy (max), % 49.1 53.7
shader_busy (avg), % 57.5 91.2
shader_busy (max), % 85.8 95.1
geom_busy, % 1.0 11.2
input_assembler_busy, % 16.1 7.9

We can see that the 3D engine of this game is easier to process than in most previous cases. The GeForce 8800 demonstrates a comfortable frame rate (the low minimum FPS is not a problem here); the GeForce 8600 is slower, but not as slow as usual - a tad over twofold. That's because game performance is limited by the CPU: judging by gpu_idle, the G80 was idle over 30% of the time. When speed depends on the GPU, performance is limited by shader units; the other parts of the GPU do not slow rendering down.

The demo uses a tad over 300 MB of video memory, which almost fits into the 256 MB of the GeForce 8600, so memory size shouldn't affect its performance much. There are a lot of batches, which can be explained by the multiplatform origin of the game and the lack of PC optimizations: the average number of draw calls is almost 1500, over 3000 in extreme cases. The game processes a lot of geometry (vertices and polygons), even though image quality is not outstanding. The ROP load is average. Geometry and input assembler counters show strange results that are hard to explain. Texture and shader counters provide interesting readings, as usual: texture units are loaded to one third of capacity, shader units to half in the G80 and almost completely in the G84 - in fact, the load is full, with the average ALU load exceeding 90% and the peak reaching 95%.

Clive Barker's Jericho

The last game in our review does not impress with image quality - it was not intended to be on the edge of technological progress. It's a middle-of-the-road game, but it uses a lot of new technologies: parallax mapping, heavy post-processing (depth of field, motion blur, bloom), filtered shadow maps, average geometry with textures, and a lot of per-pixel processing. Let's see how our graphics cards perform in another multiplatform project...

Our demo version of the game does not offer any benchmark options, so we had to walk through the same level several times and then average the results.

NVIDIA PerfKit counter GeForce 8800 GTX (G80) GeForce 8600 GT (G84)
FPS (avg) 31.7 9.9
FPS (min) 7.2 3.5
Video memory, MB 427 422
batch count (avg) 1137 1085
batch count (max) 2158 2208
primitive count (avg) 283826 268386
primitive count (max) 695568 683055
setup triangle count (avg) 89721 89926
setup triangle count (max) 283335 270695
gpu_idle, % 13.4 4.1
rop_busy, % 16.3 17.7
texture_busy (avg), % 39.1 34.7
texture_busy (max), % 48.5 41.6
shader_busy (avg), % 68.1 83.9
shader_busy (max), % 77.7 88.5
geom_busy, % 0.4 5.6
input_assembler_busy, % 6.1 1.7

This situation differs from the previous multiplatform game. Here the difference between the cards is strictly threefold, as it should be. The GeForce 8800 is on the verge of being slow, and the GeForce 8600 cannot cope with maximum quality settings. Note that the frame rate is not limited by the graphics cards alone: even the G84 was idle for some time, so the CPU seems to determine performance at some moments. At other times it's governed by the unified processors, which are busy 70-80% of the time.

Other GPU units are used in the usual manner: ROPs at less than 20%, geometry and input assembler at less than 10%, texture units at 30-40%. This multiplatform game is limited by shader processors - not the first time performance is limited by these units. The game requires more video memory than usual, over 400 MB. The number of draw calls per frame is above average (over 1000), while the amount of geometry stays below 300000 primitives per frame. So we've got a modern game with average requirements.


Let's draw conclusions on all games at once:

  • You can play most modern games with maximum quality settings in the most popular resolution only on the high-end GeForce 8800 GTX, and not always even then - minimum frame rates sometimes drop too low.

  • New games use up to 600-700 MB of video memory. This does not mean all these resources must reside in the local memory of a graphics card: games often hand resource control (textures, etc.) to the API, especially as Direct3D 10 uses video memory virtualization. Nevertheless, there is a clear tendency toward higher video memory requirements. So 512 MB can be considered optimal now: 256-320 MB is insufficient, while 0.7-1 GB is not used in games so far. Such memory sizes still make sense for high-end solutions, because they can provide acceptable frame rates in higher resolutions.

  • We can see an evident increase in draw calls. Games may now use 2000 calls per frame, although the best-optimized projects do fine with 500-1000. The growth in draw calls increases the dependence of 3D applications on CPUs - more draw calls generate a heavier CPU load.

  • The number of draw calls grows together with the amount of processed geometry. We are no longer surprised to see 300000-500000 polygons per frame. The most advanced games may use up to a million triangles per average frame and up to 2-3 million in extreme cases.

  • Performance of the computer with the GeForce 8600 was not limited by the CPU in any of the tested games - it depended on the graphics card alone. On the other hand, the GeForce 8800 GTX was often idle in our conditions, up to one fourth or one third of the time. Conclusion: such a graphics card needs either a more powerful CPU or better optimization in games (fewer draw calls in the first place).

  • The ROP load was always low, 10-20% at most in both cases. Of course, we should take the FPS difference into account, but the number and capabilities of ROPs were not the main stumbling block in our tests. The G80 and G84 demonstrate similar results, except in one case (PT Boats), probably owing to its high video memory requirements.

  • Modern games still load texture units heavily: their minimum load is 30%, and the maximum reaches 75% (in the G80!). Plans of some companies to change the ratio between texture and shader units were apparently premature - even such an advanced game as Lost Planet: Extreme Condition uses TMUs very actively.

  • The load on geometry units and the input assembler (which fetches geometry and other data from memory for other units) is quite low in games. It never limits rendering speed, although the input assembler is loaded more heavily in some projects than in others.

  • Unified processors act as the main bottleneck in all new games. Their peak load reaches 70-90% in almost all games, which indicates full utilization, so their performance determines rendering speed. Consequently, the performance of these units plays the most important role in modern games. As the G84 does not have many of them, the GeForce 8600 cannot demonstrate better results. The load on shader units in the G80 is evidently lower - this GPU has a certain performance margin in its unified ALUs.

  • It's now confirmed that new applications place higher demands on shader units. The games we tested show that the importance of computing units in GPUs grows and will continue to grow (it slowed down a little in multiplatform projects for obvious reasons). They will become even more important if future games use not only vertex and pixel shaders, but also geometry shaders.
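The per-game ratios behind these conclusions can be gathered in one place (the avg FPS pairs are copied from the tables above):

```python
# (GeForce 8800 GTX, GeForce 8600 GT) average FPS per game
results = {
    "Lost Planet": (20.8, 4.5),
    "PT Boats": (21.2, 1.8),
    "DiRT": (35.2, 11.9),
    "Call of Juarez": (23.1, 7.2),
    "Company of Heroes": (92.6, 41.2),
    "S.T.A.L.K.E.R.": (69.7, 23.0),
    "SEGA Rally Revo": (73.6, 33.5),
    "Jericho": (31.7, 9.9),
}
ratios = {game: round(g80 / g84, 2) for game, (g80, g84) in results.items()}
# Most games cluster between roughly 2x and 3x; PT Boats stands out at
# over 11x - the case where the 256 MB card runs out of video memory.
```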
Alexei Berillo (sbe@ixbt.com)
November 14, 2007
