Back in 2000 we dreamed of dual-GPU graphics cards and even mocked up such a device (based on two GeForce2 GTS chips): while everyone was looking forward to a dual-GPU card from 3dfx that year, experienced users understood that 2 x GeForce2 would be much faster.
But let's get back to the present day, nostalgia aside. Almost half a year after the rollout of RADEON HD 4870 X2, NVIDIA decided to launch a dual-GPU card based on its modern GT2xx architecture. On the one hand, we can understand it -- the best products from this company were outperformed by competing dual-GPU solutions from AMD, and the market leader just cannot afford that.
Do you remember what has been going on in the market since last August? AMD followed its concept of Mid-End single-GPU solutions and High-End dual-GPU cards. Taking advantage of the lower complexity and smaller size of its GPUs, manufactured on a finer process node, the company soon followed its single-GPU solutions with a dual-GPU card -- HD 4870 X2. This very product has been the fastest card on the market since August 2008 (we'll get back to the issues of multi-GPU rendering in our articles).
To oppose HD 4870 X2, NVIDIA had to offer SLI systems based on GeForce GTX 280 or GTX 260 -- and against two HD 4870 X2 cards, Tri-SLI GTX 280. So, since the company couldn't launch a more powerful single-GPU solution, GTX 295 was a natural choice.
On the other hand, we don't think that rolling out such a graphics card was a timely event. AMD has been a leader for several months with its RADEON HD 4870 X2, and this information has already taken root in the minds of interested users. And rumor has it that NVIDIA will launch new graphics cards based on the overhauled GPU in several months, in the second quarter. And they will hardly be faster than GTX 295 in benchmarks.
That is, a rival for HD 4870 X2 appeared too late. And this dual-GPU card may interfere with future solutions, just like GeForce 9800 GX2 did. Besides, this model will hardly have enough time to improve the company's financial health -- its production costs are apparently high, but the company cannot raise prices too much (and it hasn't). It would have been a different matter if GTX 295 had been launched in November; it would have made much more sense then. Besides, spring will come soon, bringing new single-GPU solutions along.
But we don't know the future, so let's not put the cart before the horse. We've presented our vision of the situation; now we'll wait and see what actually happens. In our opinion, GTX 295 is a trendy solution designed to demonstrate NVIDIA's strength rather than to change the market situation by increasing sales and profits. Well, it makes sense. Let's hope users won't be confused when future single-GPU cards demonstrate lower benchmark results than the dual-GPU GTX 295, as has already happened with GTX 280 and 9800 GX2.
Unlike AMD, which now manufactures only multi-GPU solutions for the High-End segment, NVIDIA does not seem ready to abandon single-GPU designs for its top cards. However, the product we review today is designed to take the leading position in benchmarks away from AMD's RADEON HD 4870 X2. That is its primary objective.
It would have been impossible to create such a solution with 65nm chips, of course. Even a single GeForce GTX 280 card with one GT200 GPU (1.4 billion transistors, about 576mm²) consumes over 200 watts! As NVIDIA took too long to migrate GT200 to the 55nm process technology, the rollout of the dual-GPU solution was also delayed.
So, GeForce GTX 295 is based on two 55nm GT200b chips, which differ from their 65nm predecessors only in smaller die surface and lower power consumption. The transition to the 55nm process technology allows the card with two powerful GPUs to consume less than 300W and keep heat release at a level that its redesigned dual-slot cooling system can handle.
Our theoretical part about GeForce GTX 295 will be short, as it's essentially two regular GT200b chips, even if manufactured by the new 55nm process technology, installed on two connected PCBs. This dual-GPU system is based on the SLI technology, with the PCI Express lanes and the bridge chip routed on the card itself. The only significant differences from a pair of GeForce GTX 280 or 260 cards are GPU and memory clock rates, memory size, bus width, and GPU configuration -- quantitative changes, not qualitative ones.
If you are not familiar with the GeForce GTX 200 (GT200) architecture, you can read about it in our baseline review. This architecture has developed from G8x/G9x, but it features some changes.
You may also want to study these baseline theoretical articles, which describe various features of graphics cards and architectural peculiarities of older products from NVIDIA and AMD.
These articles predicted the current situation with GPU architectures, and confirmed many of our assumptions about future solutions. The detailed information about NVIDIA G8x/G9x unified architecture is provided in these articles:
OK, we proceed from the assumption that our readers are already familiar with the architecture. Now we shall examine characteristics of the dual-GPU card from the GeForce GTX 200 series, based on 55nm GT200b chips.
GeForce GTX 295
GeForce GTX 295 reference specs
So, the 55nm process technology used to manufacture GT200b chips now allows NVIDIA to design a powerful dual-GPU solution. It's more energy efficient (both in 2D and 3D) than its direct competitor -- RADEON HD 4870 X2. The new card from NVIDIA offers higher performance at a similar power consumption level. It's all the more unexpected, as GT200b GPUs have much larger surface area and complexity (transistor count) than RV770 chips, which are also manufactured by the 55nm process technology. Either the frequencies of the final GT200b revisions were reduced from the initially planned values, or these GPUs were designed with better energy efficiency in mind.
As you can see, NVIDIA decided to launch a dual-GPU card with the same GTX suffix; only the model number is different. That was the decision, but it would have been more logical to give this card a different name, like GX2 290 or G2X 290. Even SLI 290 would be easier for common users to understand. As it stands, the product name does not indicate that the card has two GPUs, which may confuse users.
The company should also say a few words about this peculiar video memory size. To all appearances, the 448-bit bus and 896MB per GPU have to do with a decision to simplify the PCB layout. As a result, we get an unusual memory size -- and, more importantly, it's smaller than that of RADEON HD 4870 X2. Although the difference between 896MB and 1024MB is not big and does not affect performance that much, it's bad from a marketing point of view: even if only nominally, this product is worse than its competitor in one of its specification parameters (and marketing people just love such things!)
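A quick back-of-the-envelope sketch shows where the odd 896MB figure falls out of the 448-bit bus. This assumes the common GDDR3 configuration of 32-bit chips at 512Mbit (64MB) each; the chip capacity here is our assumption for illustration, not a figure from NVIDIA:

```python
# Assumed GDDR3 chip parameters (typical for this generation of cards).
CHIP_BUS_WIDTH_BITS = 32   # each GDDR3 chip contributes 32 bits to the bus
CHIP_CAPACITY_MB = 64      # 512 Mbit = 64 MB per chip

def memory_size_mb(bus_width_bits: int) -> int:
    """Total memory per GPU: one chip per 32-bit slice of the bus."""
    chips = bus_width_bits // CHIP_BUS_WIDTH_BITS
    return chips * CHIP_CAPACITY_MB

print(memory_size_mb(448))  # GTX 295, per GPU: 14 chips -> 896 MB
print(memory_size_mb(512))  # full GT200 bus (GTX 280): 16 chips -> 1024 MB
```

Dropping two of the sixteen 32-bit channels (448 instead of 512 bits) removes two memory chips per GPU, which simplifies routing on the crowded dual-GPU PCBs -- at the cost of the round 1024MB figure.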