iXBT Labs - Computer Hardware in Detail


ATI RADEON X700XT (RV410):
Part 1 - Performance.



Contents

  1. Official specifications
  2. Architecture
  3. Video cards' features
  4. Testbed configurations, benchmarks, 2D quality
  5. Synthetic tests in D3D RightMark
  6. Synthetic tests in 3DMark03: FillRate Multitexturing
  7. Synthetic tests in 3DMark03: Vertex Shaders
  8. Synthetic tests in 3DMark03: Pixel Shaders
  9. Test results: Quake3 ARENA
  10. Test results: Serious Sam: The Second Encounter
  11. Test results: Return to Castle Wolfenstein
  12. Test results: Code Creatures DEMO
  13. Test results: Unreal Tournament 2003
  14. Test results: Unreal II: The Awakening
  15. Test results: RightMark 3D
  16. Test results: TRAOD
  17. Test results: FarCry
  18. Test results: Call Of Duty
  19. Test results: HALO: Combat Evolved
  20. Test results: Half-Life2(beta)
  21. Test results: Splinter Cell
  22. Test results: DOOM III
  23. Test results: 3DMark03 Game1
  24. Test results: 3DMark03 Game2
  25. Test results: 3DMark03 Game3
  26. Test results: 3DMark03 Game4
  27. Test results: 3DMark03 MARKS
  28. Conclusions

Not much time has passed since the release of the NVIDIA GeForce 6600 series, and ATI is already announcing its "Canadian answer". The X700 series also comprises several video cards, all based on the same chip, codenamed RV410.

The new series targets PCI Express exclusively, so we should not expect AGP counterparts in the near future (or perhaps ever). However you feel about this series, it is a snub to the AGP segment, which still covers almost 99% of users. You can be indignant at the behavior of the leading manufacturers, but progress is progress. It is worth remembering that the market will eventually have its say: demand may yet force manufacturers to listen to users' needs and release AGP counterparts.

Note that this would be easy for NVIDIA (which already has the HSI bridge converting AGP to PCI-E and vice versa) and not so simple for ATI, which has no such bridge.

Our research and measurements show that ATI is wrong to position the RADEON 9800 PRO on the AGP market as an alternative to the X700 in the PCI-E segment: the 9800 PRO does not deliver the performance that the new video cards priced up to $200 will offer. All that is left for ATI is to drop prices on its old products to $150 and below. That brings little profit, considering the expensive PCB and the 256-bit memory bus. So further cut-down 9800 variants with a 128-bit bus will most likely appear, and the "speed" they demonstrate is a shame. ATI will probably have a hard time on the AGP market in the $150-250 segment, especially if NVIDIA releases an AGP version of the GeForce 6600, which is quite possible.

We'll leave forecasts and assumptions at that and focus on the new product from the Canadian company, putting the AGP market temporarily aside to compare the RADEON X700XT with the GeForce 6600GT.

Official RADEON X700XT/PRO (RV410) specification

  1. Codename of the chip is RV410
  2. Based on a 110 nm process technology (TSMC, low-k, copper interconnects)
  3. 120 million transistors
  4. FC packaging (flip-chip, without a metal cap)
  5. 128-bit memory interface (dual channel controller) (!)
  6. Up to 256 MB DDR/GDDR-2/GDDR-3
  7. On-board PCI-Express 16x bus interface (perhaps in future ATI will use a proprietary PCI-Express->AGP 8x bridge to manufacture AGP cards)
  8. 8 pixel processors, each with one texture unit
  9. Calculation, blending, and writing of up to 8 full (color, depth, stencil buffer) pixels per clock
  10. Calculating up to 16 depth values per clock in MSAA mode (i.e. MSAA 2x without penalty clocks)
  11. Support for a "double-sided" stencil buffer
  12. MSAA 2x/4x/6x, with flexibly programmed sample patterns. Compression of frame and Z buffers in MSAA modes. Capability to change MSAA patterns from frame to frame (Temporal AA)
  13. 16x Anisotropic Filtering
  14. 6 vertex processors (!)
  15. Everything necessary to support Pixel and Vertex Shaders 2.0
  16. Additional capabilities of pixel shaders based on the enhanced 2.0 version - so called 2.0.b
  17. Small additional features of vertex shaders, besides the basic 2.0 ones
  18. New technique of texture compression, optimized for compressing two-channel normal maps (so called 3Dc, 4:1 compression ratio)
  19. Supports rendering to floating point format buffers, per component FP16 and FP32 precision
  20. Supports 3D and FP texture formats
  21. MRT
  22. 2 x RAMDAC 400 MHz
  23. 2 x DVI interfaces
  24. TV-Out and TV-In (the latter requires an interface chip)
  25. Programmable video processing - pixel processors are used to process the video stream (compression, decompression, and postprocessing tasks)
  26. 2D accelerator supporting all GDI+ functions
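The 3Dc format mentioned in item 18 exploits the fact that a unit-length tangent-space normal can be stored as just its X and Y channels (each block-compressed 4:1), with the shader rebuilding Z. A minimal sketch of that reconstruction step, in Python rather than shader code, with an arbitrary example normal of our own choosing:

```python
import math

# Idea behind 3Dc two-channel normal-map compression: store only x and y
# of a unit normal; reconstruct z in the shader. For tangent-space maps
# z is non-negative, so the square root is unambiguous.
def reconstruct_z(x: float, y: float) -> float:
    """Recover z of a unit-length normal from its stored x, y components."""
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))

# Round-trip check with an arbitrary unit normal (0.36, 0.48, 0.8).
nx, ny, nz = 0.36, 0.48, 0.8
assert abs(reconstruct_z(nx, ny) - nz) < 1e-9
```

Dropping Z (and letting the two remaining channels keep full block precision) is what gives 3Dc its quality advantage over reusing DXT formats for normal maps.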

Specification on the RADEON X700XT reference card

  1. Core frequency 475 MHz
  2. Effective memory frequency 1.05 GHz (2*525 MHz)
  3. 128-bit memory bus
  4. GDDR3 memory type
  5. Memory capacity: 128 (or 256) MB
  6. 16.8 GB/sec Memory Bandwidth
  7. Theoretical fill rate 3.8 gigapixel/sec.
  8. Theoretical texture fetch speed 3.8 gigatexel/sec
  9. 1 x VGA (D-Sub) and 1 x DVI-I connector
  10. TV-Out
  11. Consumes less than 70 W (so PCI-Express cards need no additional power connector; the recommended power supply is 300 W or more)
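The headline figures in this list are simple products of clock and bus width. A quick sketch of the arithmetic (our own derivation from the listed specs, not vendor data):

```python
# Deriving the X700XT reference-card figures from clocks and widths.
bus_bits   = 128
mem_clock  = 525 * 10**6      # real memory clock, Hz
eff_clock  = 2 * mem_clock    # DDR: two transfers per clock -> 1.05 GHz effective
core_clock = 475 * 10**6
pipelines  = 8                # pixel pipelines, one texture unit each

bandwidth  = bus_bits // 8 * eff_clock   # bytes per second
fill_rate  = pipelines * core_clock      # pixels per second
texel_rate = pipelines * 1 * core_clock  # texels per second (1 TMU per pipe)

print(f"{bandwidth / 1e9:.1f} GB/s")     # -> 16.8 GB/s
print(f"{fill_rate / 1e9:.1f} Gpix/s")   # -> 3.8 Gpix/s
print(f"{texel_rate / 1e9:.1f} Gtex/s")  # -> 3.8 Gtex/s
```

With one texture unit per pipeline, the pixel and texel rates necessarily coincide, which is why the spec lists 3.8 for both.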

A list of existing cards based on RV410:

  • RADEON X700XT: 475/525 (1050) MHz, 128/256MB GDDR3, PCI-Express x16, 8 pixel and 6 vertex pipelines ($199 - for a 128-MB card and $249 - for a 256-MB one) - competing with NVIDIA GeForce 6600GT
  • RADEON X700 PRO: 425/430 (860) MHz, 128/256MB GDDR3(?), PCI-Express x16, 8 pixel and 6 vertex pipelines ($149 - for a 128-MB card and $199 - for a 256-MB one) - competing with NVIDIA GeForce 6600
  • RADEON X700: 400/350 (700) MHz, 128/256MB DDR, PCI-Express x16, 8 pixel and 6 vertex pipelines ($99 - for a 128-MB card and $149 - for a 256-MB one) - competing with NVIDIA GeForce PCX5900 and pushing down the previous X600XT



As we can see, there are no major architectural differences from R420, which is not surprising: RV410 is a scaled-down solution based on the R420 architecture (fewer vertex and pixel processors and fewer memory controller channels). The situation mirrors the NV40/NV43 pair. Moreover, as we have already noted, this generation's architectural principles are extremely similar across both competitors. As for the differences between RV410 and R420, they are quantitative (bold elements on the diagram) rather than qualitative: architecturally the chip remains practically unchanged.

Thus, we have 6 vertex processors (as before, a potentially pleasant surprise for triangle-hungry DCC applications) and two (down from four) independent pixel processors, each working with one quad (a 2x2 pixel fragment). As with NV43, PCI Express is the native (on-chip) bus interface, and AGP 8x cards (if any appear) will require an additional PCI-E -> AGP bridge (shown on the diagram), which ATI will have to design and manufacture.

Also note an important limiting factor, the two-channel controller and 128-bit memory bus. We'll analyze and discuss it later on, as we did with NV43.

The architecture of the vertex and pixel processors and of the video processor remained the same; these elements were described in detail in our review of the RADEON X800 XT. Now let's talk about the potential tactical factors:

Considerations about what and why has been cut down

On the whole, at present we have the following series of solutions based on NV4X and R4XX:

Video card | Chip  | Pixel/Vertex | Memory (bus)                           | Memory MHz eff. / GB/s | Fillrate, Mpix | Core, MHz

NVIDIA

6800 Ultra | NV40  | 16/6 | 256 MB (4x64) GDDR3                    | 1100 / 35.2            | 6400      | 400
6800 GT    | NV40  | 16/6 | 256 MB (4x64) GDDR3                    | 1000 / 32.0            | 5600      | 350
6800       | NV40  | 12/5 | 256 MB (4x64) DDR                      | 700 / 22.4             | 3900      | 325
6800 LE    | NV40  | 8/4  | 256 MB (4x64) DDR                      | 700 / 22.4             | 2560      | 325
6600 GT    | NV43  | 8/3  | 128 MB (2x64) GDDR3                    | 1000 / 16.0            | 4000      | 500
6600       | NV43  | 8/3  | 128 MB (2x64) DDR                      | 500-700 / <11.2        | 2400      | 300

ATI

X800 XT/PE | R42X  | 16/6 | 256 MB (4x64) GDDR3                    | 1000-1100 / 32.0-35.2  | 8000/8320 | 500/520
X800 PRO   | R42X  | 12/6 | 256 MB (4x64) GDDR3                    | 900 / 28.8             | 5700      | 475
X800 SE    | R42X  | 8/6  | 256 MB (4x64) DDR                      | 700 / 22.4             | 3400      | 425
X700 XT    | RV410 | 8/6  | 128 MB (256 as an option) (2x64) GDDR3 | 1050 / 16.8            | 3800      | 475
X700 PRO   | RV410 | 8/6  | 256 MB (128 as an option) (2x64) GDDR3 | 864 / 13.8             | 3360      | 420
X700       | RV410 | 8/6  | 128 MB (256 as an option) (2x64) DDR   | 700 / 11.2             | 3200      | 400
X600 XT and below: based on the architecture of the previous generation

A similar picture, isn't it? Thus we can predict that the weak point of the X700 series, as with the 6600 series, will be high resolutions and full-screen antialiasing modes, especially in simple applications. The strong point will be programs with long, complex shaders and anisotropic filtering without MSAA (or, for ATI, possibly with it). We'll check this assumption later in game and synthetic tests.

It's difficult to judge now how reasonable the move to a 128-bit memory bus was. On the one hand, it cheapens chip packaging and reduces the number of defective chips; on the other, the cost difference between 256-bit and 128-bit PCBs is small and is more than offset by the price gap between ordinary DDR and still-expensive high-speed GDDR3 memory. Perhaps, from the card makers' point of view, a 256-bit solution would be more convenient, at least if they had a choice. But from the point of view of NVIDIA and ATI, who manufacture the chips and often sell them bundled with memory, the 128-bit solution with GDDR3 is more profitable. Its impact on performance is another story: the potential throttling of an excellent chip (8 pipelines, 475 MHz core frequency) by the considerably reduced memory bandwidth is obvious.
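To put that concern in numbers, here is a back-of-the-envelope estimate (our own arithmetic on the listed specs, not a vendor figure) of how much memory traffic each pixel can afford at peak fillrate:

```python
# Rough bandwidth budget per rendered pixel on the X700XT.
bandwidth = 128 // 8 * 2 * 525 * 10**6  # 128-bit bus at 1.05 GHz effective -> bytes/s
fillrate  = 8 * 475 * 10**6             # 8 pixels per clock at 475 MHz -> pixels/s

budget = bandwidth / fillrate           # bytes of memory traffic per pixel at peak
print(f"{budget:.1f} bytes per pixel")  # ~4.4 bytes

# Without compression, a 32-bit pixel that passes the Z test costs roughly
# 4 B color write + 4 B Z read + 4 B Z write = 12 B, before any texture
# fetches - which is why the Z/color compression and Hierarchical Z
# techniques matter so much on a 128-bit bus.
```

By this crude measure the chip can sustain its peak fillrate only when compression and early rejection cut real traffic to a fraction of the uncompressed cost, which is exactly the behavior the synthetic tests below should expose.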

Note that NVIDIA still holds the Ultra suffix in reserve. Given the considerable overclocking potential of the 110 nm process, we may yet see a card with a core frequency around 550 or even 600 MHz, with 1100 or even 1200 MHz memory, named 6600 Ultra. The question is what it will cost.

To all appearances, the vertex and pixel processors in RV410 remained the same, but the internal caches could have been reduced, at least proportionally to the number of pipelines. However, the transistor count gives no cause for concern. Considering the modest cache sizes, it would have been more reasonable to leave them as they were (as in NV43), thus compensating for the noticeable shortage of memory bandwidth. All the memory bandwidth-saving techniques - Z-buffer and frame-buffer compression, Early Z with on-chip Hierarchical Z, etc. - remain in place.

Interestingly, unlike NV43, which can blend and write no more than 4 resulting pixels per clock, the pixel pipelines in RV410 fully correspond to R420 in this respect. With simple single-textured shaders, RV410 will accordingly enjoy an almost twofold fillrate advantage. Unlike NVIDIA's chip, with its transistor-heavy ALU array responsible for postprocessing, verification, Z generation, and floating-point pixel blending, RV410 has modest combiners, which is why their number was not cut down as much. In PRACTICE, however, the reduced memory bandwidth will almost never allow the full 3.8 gigapixel/sec write rate, though in synthetic single-texture tests the difference between RV410 and NV43 can become noticeable.

The decision to retain all 6 vertex units is no less interesting. On the one hand, it is an argument in the DCC area; on the other, we know that in this field everything depends on drivers, OpenGL first of all - traditionally a strong point of NVIDIA. Besides, floating-point blending and Shader Model 3.0 may be valuable there - exactly what the latest ATI generation lacks. So the decision to keep 6 vertex pipelines and actively position RV410 on the DCC market looks debatable. Time will tell whether it was justified.

We'll check all these assumptions in the course of our synthetic and game tests.

Technological innovations

On the whole, there are no innovations in comparison with R420, which is no disadvantage in itself. In comparison with NV43:
  1. Up to 8 pixels written to the frame buffer per clock (up to 4 in NV43).
  2. Up to 16 MSAA samples per clock (up to 8 in NV43).
  3. 6 vertex units - commendable, though their effect will be noticeable only in synthetic tests and DCC applications.
  4. Less flexible shaders (version 2.0b).
  5. No floating-point blending, though at present it is needed only in DCC applications.



Before examining the card itself, we'll publish a list of articles devoted to the analysis of the previous products, NV40 and R420. It is already obvious that the RV410 architecture is a direct heir to R420, with the chip's execution resources roughly halved.

Theoretical materials and reviews of video cards, which concern functional properties of the GPU ATI RADEON X800 (R420) and NVIDIA GeForce 6800 (NV40)

I repeat that this is only the first part of the review, devoted to the performance of the new video cards. The qualitative aspects (3D quality and video playback) will be examined in the second part.

So, reference card RADEON X700XT.

Video card



ATI RADEON X700XT


The card has a PCI-Express x16 interface and 128 MB of GDDR3 SDRAM in 4 chips on the front side of the PCB.

ATI RADEON X700XT
Samsung (GDDR3) memory chips. 2.0 ns access time, which corresponds to 500 (1000) MHz; the memory actually operates at 525 (1050) MHz. GPU frequency is 475 MHz, on a 128-bit memory bus. The pixel pipeline and texture unit configuration is 8x1, with 6 vertex pipelines.

Comparison with the reference design, front view
ATI RADEON X700XT ATI RADEON X600XT
ATI RADEON X800

Comparison with the reference design, back view
ATI RADEON X700XT/6600 ATI RADEON X600XT
ATI RADEON X800

We can see that the product is closer to the X600XT in design; but unlike that card, the X700XT has pads on the back of the PCB for additional memory chips, for the 256-MB version. The PCB also has a seat for the RAGE Theater chip (VIVO).

Cooling device.

ATI RADEON X700XT

It's a very unusual cooler. What distinguishes it from others? First, ATI has never before used closed heatsinks with forced airflow on cards of this class. Note that the heatsink does not touch the memory chips - it cools the core only! Second, the heatsink material is copper, which is why the card feels so heavy in hand.

And most importantly: the cooler is very noisy! Especially under full load, when the fan speed rises. I'll write about it below.



Card manufacturers will presumably experiment with their own coolers, because shipping cards with ATI's native solution would be most unwise.

With the cooler removed, you can see the chip. Let's compare the core dimensions of RV410 and R350. Why R350 of all chips? Because it also has 8 pixel pipelines, though fewer vertex pipelines. And it is built on a 0.15-micron process technology, while RV410 already uses the 0.11-micron one.





Well, the smaller core is quite expected with the thinner process technology, even though the transistor count has not decreased. Still, one can assume that some caches or other elements were cut down. Our examination will show it all...

Now let's return to the card's operating temperatures and the cooler noise. Thanks to the prompt response of Alexei Nikolaychuk, the author of RivaTuner, the next non-public beta version of this utility already supports RV410. Moreover, it can not only change and control the card's frequencies but also monitor temperatures and fan speed. Here is what the card demonstrated at stock frequencies, without external cooling, in a closed PC case:

Though the temperatures rose slowly and stayed below the 60-degree mark, the cooler behaved "nervously", as you can see on the bottom graph above, which shows fan speed as a percentage of its maximum rpm. As I have already noted, it produces a very disagreeable noise.

Let's use RivaTuner's fan control to fix the speed at a level where the noise is no longer bothersome and is barely audible - approximately 55-56% of maximum.

The temperatures of the core and of the card as a whole rose only slightly and remained within the safe zone. So why such overcaution with the cooler? We don't know the answer yet and hope to get an explanation from ATI.

Installation and Drivers

Testbed configurations:

  • Pentium4 3200 MHz (Prescott) based computer, overclocked
    • Intel Pentium4 CPU overclocked to 3600 MHz (225 MHz x 16; L2=1024K, LGA775); Hyper-Threading enabled
    • ABIT AA8 DuraMAX mainboard based on i925X
    • 1 GB DDR2 SDRAM 300MHz
    • WD Caviar SE WD1600JD 160GB SATA HDD
  • Athlon 64 3400+ based computer
    • AMD Athlon 64 3400+ (L2=1024K) CPU
    • ASUS K8V SE Deluxe mainboard based on VIA K8T800
    • 1 GB DDR SDRAM PC3200
    • Seagate Barracuda 7200.7 80GB SATA HDD
  • Operating system - Windows XP SP2; DirectX 9.0c
  • Monitors: ViewSonic P810 (21") and Mitsubishi Diamond Pro 2070sb (21").
  • ATI drivers v6.483 (CATALYST 4.10beta); NVIDIA drivers v65.76.
  • Video cards:
    1. NVIDIA GeForce FX 5950 Ultra, 475/950 MHz, 256MB DDR, AGP
    2. NVIDIA GeForce 6800 Ultra, 425/1100 MHz, 256MB GDDR3, AGP
    3. NVIDIA GeForce 6800 Ultra, 400/1100 MHz, 256MB GDDR3, AGP
    4. NVIDIA GeForce 6800 GT, 350/1000 MHz, 256MB GDDR3, AGP
    5. ASUS V9999GE (NVIDIA GeForce 6800, 350/1000 MHz, 256MB GDDR3), AGP
    6. NVIDIA GeForce 6800, 325/700 MHz, 128MB DDR, AGP
    7. NVIDIA GeForce 6800LE, 325/700 MHz, 128MB DDR, AGP
    8. NVIDIA GeForce PCX5900, 350/550 MHz, 128MB DDR, PCI-E
    9. NVIDIA GeForce PCX5750, 425/500 MHz, 128MB DDR, PCI-E
    10. NVIDIA GeForce 6600GT, 500/1000 MHz, 128MB GDDR3, PCI-E
    11. ATI RADEON 9800 PRO, 380/680 MHz, 128MB DDR, AGP
    12. ATI RADEON 9800 XT, 412/730 MHz, 256MB DDR, AGP
    13. ATI RADEON X800 XT PE, 520/1120 MHz, 256MB DDR, AGP
    14. ATI RADEON X800 XT, 500/1000 MHz, 256MB DDR, AGP
    15. ATI RADEON X800 PRO, 475/900 MHz, 256MB DDR, AGP
    16. ATI RADEON X800 XT, 500/1000 MHz, 256MB DDR, PCI-E
    17. ATI RADEON X600 XT, 500/760 MHz, 128MB DDR, PCI-E

VSync is disabled.

As we can see, ATI prepared a new driver version for the RADEON X700 launch. Its highlight is the CATALYST Control Center, although this utility was officially released earlier, with CATALYST 4.9. Why do we stress this program in this article? The answer is simple: CCC is the only way to use new features such as 3D optimization control.

But let's not put the cart before the horse. First of all, CCC is a large program that takes up a lot of hard disk space and takes long to download from the Internet. Plus Microsoft .NET 1.1, which adds another 24 MB; CCC will not work without it.

Is it worth the download? At first glance it is, but you should take a closer look. From here you can download (or open) an animated GIF file (920K!) that demonstrates all CCC settings.

And here we shall touch upon only those settings that are interesting from the point of view of innovations in 3D graphics control.

We can see the settings called CATALYST A.I. They enable so called optimizations of drivers for various games, as well as general filtering (trilinear and anisotropy) optimizations.

There are three grades:

  1. OFF (Disable) - disables optimizations completely. In this mode the card is promised to use neither filtering optimizations nor application-specific ones.
  2. LOW (Standard) - enables application-specific optimizations as well as light filtering optimizations.
  3. HIGH (Advanced) - enables all optimizations at full strength.

We shall publish the performance results of all the three modes in X700XT below, in the section devoted to the analysis of speed. The quality aspect will be analyzed in the next article.

According to ATI, the optimizations support the following games:

  • Doom 3: CATALYST A.I. replaces a lighting shader with a mathematically equivalent one that runs more efficiently. This optimization increases performance in some scenes.
  • Half-Life 2 engine (currently available in the Counter-Strike: Source beta): CATALYST A.I. enables improved texture caching for this engine, which increases speed, especially with anisotropy active at high resolutions.
  • Unreal Tournament 2003/Unreal Tournament 2004: the CATALYST driver is modified so that anisotropic filtering (or its combination with bi- and trilinear filtering) is always controlled by the application, and the game enables these functions on its own. In previous drivers, when a user enabled anisotropy via the driver, trilinear filtering was applied only to the first texture level; starting with this driver version, all texture layers are processed. An improved level of texture analysis (this applies to all RADEON X products) is promised in these games to increase performance without quality loss. The RADEON 9800, RADEON 9700, and RADEON 9500 series will still operate in the previous mode (that is, as they did before A.I.).
  • Splinter Cell, Race Driver, Prince of Persia, Crazy Taxi 3: for these games, A.I. optimizations boil down to strictly blocking the AA modes these games do not support (so even if a user accidentally forces AA in the driver, nothing will happen with A.I. active - the driver detects the game and disables AA if necessary). Previously, such situations could cause glitches and even game freezes.

So, it seems like a useful thing. Performance tests will prove that. We'll see what happens with the quality later.

Proceeding with the CCC examination, a summary tabbed page with all major settings looks interesting:

I would recommend starting 3D management from this tabbed page. Of course, the pages with individual function controls have their advantages - at least you can see the effect of each setting on a 3D scene looped in a window.

I also want to note the convenient refresh rate controls:

And a friendlier TV interface:

But working with CCC also has major disadvantages, first of all its irritatingly slow interface. When you switch one of the controls and click APPLY, the program "thinks" for up to half a minute (sometimes you even feel it has frozen) before normal operation resumes. Nervous users may fall into a stupor, or simply decide to throw this piece of software out.

So, ATI programmers still have some issues to improve. Lots of them.

IN THE DIAGRAMS, the ANIS16x results for GeForce FX 5950 Ultra and GeForce PCX5900/5750 are obtained with ANIS8x active.

It should be noted that the driver optimizations are enabled by default and set to LOW/STANDARD, so the main comparisons with competitors were carried out in exactly this operating mode of the X700XT.

Test results

Before giving a brief evaluation of 2D quality, I will repeat that at present there is NO valid method for objectively evaluating this parameter, for the following reasons:

  1. 2D quality in most modern 3D accelerators depends dramatically on the specific sample, and it's impossible to evaluate all cards.
  2. 2D quality depends not only on the video card, but also on the monitor and the cable.
  3. Recently, particular monitor-card combinations have shown a great impact on this parameter: there are monitors that simply won't "work" with specific video cards.

As for the combination of our review sample and the Mitsubishi Diamond Pro 2070sb, the card demonstrated excellent quality in the following resolutions and refresh rates:

ATI RADEON X700XT 1600x1200x85Hz, 1280x1024x120Hz, 1024x768x160Hz

[ The next part (2) ]


We express our thanks to ATI
for the video cards provided to our lab.


Andrey Vorobiev (anvakams@ixbt.com)
Alexander Medvedev (unclesam@ixbt.com)

October 5, 2004


