iXBT Labs - Computer Hardware in Detail


NVIDIA GeForce GTX 295




Back in 2000 we dreamed of dual-GPU graphics cards and even mocked up such a device (based on two GeForce2 GTS chips): while everyone was looking forward to a dual-GPU card from 3dfx that year, experienced users understood that two GeForce2 GPUs would be much faster.

But nostalgia aside, let's get back to the present. Almost half a year after the rollout of the RADEON HD 4870 X2, NVIDIA decided to launch a dual-GPU card based on its modern GT2xx architecture. On the one hand, this is understandable: the company's best products were outperformed by competing dual-GPU solutions from AMD, and the market leader simply cannot afford that.

Do you remember what has been going on in the market since last August? AMD followed its concept of mid-range single-GPU solutions topped by high-end dual-GPU cards. Thanks to the lower complexity and smaller size of its GPUs, manufactured on a finer process technology, AMD soon followed its single-GPU solutions with a dual-GPU card, the HD 4870 X2. That very product has been the fastest card on the market since August 2008 (we'll get back to the issues of multi-GPU rendering in future articles).

NVIDIA had to oppose the HD 4870 X2 with SLI systems based on the GeForce GTX 280 or GTX 260, and two HD 4870 X2 cards with GTX 280 Tri-SLI. So, since the company couldn't launch a more powerful single-GPU solution, the GTX 295 was a natural choice.

On the other hand, we don't think the rollout of this graphics card was well timed. AMD has been the leader for several months with its RADEON HD 4870 X2, and that fact has already taken root in the minds of interested users. Rumor also has it that NVIDIA will launch new graphics cards based on an overhauled GPU in several months, in the second quarter, and they will hardly be faster than the GTX 295 in benchmarks.

In other words, a rival for the HD 4870 X2 appeared too late. And this dual-GPU card may get in the way of future solutions, just as the GeForce 9800 GX2 did. Besides, this model will hardly have enough time to improve the financial health of the company: its production costs are apparently high, but the company cannot raise the price too much (and it hasn't). It would have been different had the GTX 295 been launched in November; it would have made much more sense then. Besides, spring will come soon, bringing new single-GPU solutions along.

But we don't know the future, so let's not put the cart before the horse. We've presented our vision of the situation; now we'll wait and see what actually happens. In our opinion, the GTX 295 is a trendy solution designed to demonstrate NVIDIA's strength rather than to change the situation in the market by increasing sales and profits. Well, it makes sense. Let's hope users won't be confused when future single-GPU cards demonstrate lower benchmark results than the dual-GPU GTX 295, as already happened with the GTX 280 and 9800 GX2.

Unlike AMD, which now manufactures only multi-GPU solutions for the high-end segment, NVIDIA does not seem ready to abandon single-GPU designs for its top cards. However, the product we review today has one primary objective: to take the benchmark crown away from the AMD RADEON HD 4870 X2.

It would have been impossible to create such a solution with 65nm chips, of course. Even a single GeForce GTX 280 card with one GT200 GPU (1.4 billion transistors, about 576 mm²) consumes over 200 W! And since it took NVIDIA too long to migrate GT200 to the 55nm process technology, the rollout of the dual-GPU solution was delayed as well.

So, the GeForce GTX 295 is based on two 55nm GT200b chips, which differ from their 65nm predecessors only in smaller die area and lower power consumption. The transition to the 55nm process technology allows a card with two powerful GPUs to consume less than 300 W and keeps heat output at a level its redesigned dual-slot cooling system can handle.
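That sub-300W figure is no accident: it is the ceiling a card with this connector set can draw within the PCI Express specification. A minimal sketch of the arithmetic (connector limits are from the PCIe spec; the 289W rating is from this review's spec list):

```python
# Power-budget check for a dual-GPU card with one 6-pin and one 8-pin
# auxiliary connector, per the PCI Express electromechanical spec.
PCIE_SLOT_W = 75     # maximum draw through the PCIe x16 slot itself
SIX_PIN_W = 75       # auxiliary 6-pin connector
EIGHT_PIN_W = 150    # auxiliary 8-pin connector

budget = PCIE_SLOT_W + SIX_PIN_W + EIGHT_PIN_W  # 300 W total ceiling
gtx295_tdp = 289     # rated consumption of GeForce GTX 295 (see specs below)

print(budget, gtx295_tdp <= budget)
```

So the 289W rating fits the 300W envelope with almost no headroom, which is exactly why a 65nm dual-GT200 card was not feasible.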

Our theoretical part on the GeForce GTX 295 will be short, as it's essentially two regular GT200b chips (even if manufactured on the new 55nm process) installed on two connected PCBs. The dual-GPU system is based on SLI technology, with the PCI Express lanes and the bridge placed right on the PCB. The only significant differences from a pair of GeForce GTX 280 or 260 cards are GPU and memory clock rates, memory size, bus width, and GPU configuration; these are all quantitative changes, not qualitative ones.

If you are not familiar with the GeForce GTX 200 (GT200) architecture, you can read about it in our baseline review. This architecture has developed from G8x/G9x, but it features some changes.

You may also want to study these baseline theoretical articles, which describe various features of graphics cards and architectural peculiarities of older products from NVIDIA and AMD.

These articles predicted the current situation with GPU architectures, and confirmed many of our assumptions about future solutions. The detailed information about NVIDIA G8x/G9x unified architecture is provided in these articles:

OK, we proceed from the assumption that our readers are already familiar with the architecture. Now let's examine the characteristics of the dual-GPU card of the GeForce GTX 200 series, based on 55nm GT200b chips.

GeForce GTX 295

  • Code name: GT200b
  • process technology: 55nm
  • 2 x 1.4 billion transistors
  • Unified architecture with an array of common processors for streaming processing of vertices and pixels, as well as other data
  • Hardware support for DirectX 10, including Shader Model 4.0, geometry generation, and stream output
  • Two 448-bit memory buses, each one comprising seven (out of eight) independent 64-bit controllers
  • Core clock: 576MHz
  • ALU clock more than double the core clock: 1242MHz
  • 2 x 240 scalar floating-point ALUs (integer and floating-point formats, support for FP32 and FP64 according to IEEE 754(R), MAD+MUL per clock)
  • 2 x 80 texture address and filtering units, support for FP16 and FP32 components in textures
  • Dynamic branching in pixel and vertex shaders
  • 2 x 7 ROP partitions (2 x 28 pixels) supporting antialiasing with up to 16 samples per pixel, including with an FP16 or FP32 frame buffer. Each partition consists of an array of flexibly configurable ALUs and is responsible for Z generation and comparison, MSAA, and blending. Peak performance of this subsystem is up to 224 MSAA samples (+ 224 Z) per clock; in Z-only mode, 448 samples per clock
  • Multiple render targets (up to 8 buffers)
  • Interfaces (2 x RAMDAC, 2 x Dual DVI, HDMI, DisplayPort, HDTV) are integrated into a separate chip.

GeForce GTX 295 reference specs

  • Core clock: 576MHz
  • Frequency of unified processors: 1242MHz
  • Unified processors: 480 (2 x 240)
  • 160 (2 x 80) texture units, 56 (2 x 28) blending units
  • Effective memory frequency: 2000MHz (2 x 1000)
  • Memory type: GDDR3
  • Memory: 1792MB (2 x 896)
  • Memory bandwidth: 2 x 112 GB/s
  • Maximum theoretical fill rate: 2 x 16.1 gigapixels per second
  • Theoretical texture sampling rate: 2 x 46.1 gigatexels per second
  • 2 x DVI-I Dual Link, 2560x1600 video output
  • Single SLI connector
  • PCI Express 2.0
  • TV-Out, HDTV-Out, HDCP support, HDMI, DisplayPort
  • Power consumption: up to 289W (8-pin and 6-pin connectors)
  • Dual-slot design
  • Recommended price: $499
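The headline throughput figures above follow directly from the clocks and unit counts, using the usual theoretical-peak formulas (per GPU). A quick sketch of the arithmetic:

```python
# Recomputing the GTX 295 peak figures from the raw specs (per GPU).
core_mhz = 576        # core clock
eff_mem_mhz = 2000    # GDDR3 effective frequency (2 x 1000 MHz)
bus_bits = 448        # memory bus: 7 x 64-bit controllers
rops = 28             # pixels written per clock (7 partitions x 4 pixels)
tmus = 80             # texture address/filtering units

# Bandwidth: bits per transfer / 8 -> bytes, MHz -> GB/s
bandwidth_gb_s = eff_mem_mhz * bus_bits / 8 / 1000   # 112.0 GB/s
fill_gpix_s = core_mhz * rops / 1000                 # ~16.1 gigapixels/s
texel_gtex_s = core_mhz * tmus / 1000                # ~46.1 gigatexels/s
```

All three results match the "2 x 112 GB/s", "2 x 16.1" and "2 x 46.1" lines of the reference specs.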

So, the 55nm process technology used to manufacture GT200b chips now allows NVIDIA to design a powerful dual-GPU solution. It's more energy efficient (both in 2D and 3D) than its direct competitor, the RADEON HD 4870 X2: the new card from NVIDIA offers higher performance at a similar power consumption level. This is all the more unexpected given that GT200b GPUs have much larger die area and complexity (transistor count) than RV770 chips, which are manufactured on the same 55nm process. Either the frequencies of the final GT200b revisions were reduced from the initially planned values, or these GPUs were tuned for better energy efficiency.

As you can see, NVIDIA decided to launch the dual-GPU card under the same GTX suffix; only the model number is different. That was the decision, but it would have been more logical to give this card a different name, like GX2 290 or G2X 290. Even SLI 290 would be easier for common users to understand. As it is, the product name gives no indication that the card has two GPUs, which may confuse users.

The company should also say a few words about the unusual video memory size. To all appearances, the 448-bit bus and 896MB per GPU have to do with a decision to simplify the PCB layout. As a result, we get an odd memory size, and more importantly, a smaller one than in the RADEON HD 4870 X2. Although the difference between 896MB and 1024MB is not big and does not affect performance that much, it's bad from the marketing point of view: even if only nominally, the product is worse than its competitor in one of its specification parameters (and marketing people just love such things!)
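The odd 896MB figure falls out of the bus width. A minimal sketch, assuming the typical GDDR3 configuration of that era (32-bit chips of 512 Mbit density; the chip density is our assumption, not stated in the review):

```python
# Why a 448-bit bus yields 896MB: each 32-bit GDDR3 chip contributes
# 32 bits to the bus, so the bus width fixes the chip count.
bus_bits = 448
chip_bus_bits = 32
chip_mbit = 512                    # assumed density: 512 Mbit = 64 MB per chip

chips = bus_bits // chip_bus_bits  # 14 chips per GPU
mem_mb = chips * chip_mbit // 8    # 14 x 64 MB = 896 MB per GPU
```

A full 512-bit bus would take 16 such chips, i.e. 1024MB per GPU, which is exactly the per-GPU memory of the RADEON HD 4870 X2.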




Copyright © Byrds Research & Publishing, Ltd., 1997–2011. All rights reserved.