
3D Video Technologies for PC, 2004




By mid-autumn 2004 we faced a paradoxical situation on the PC video chip market – one we cannot recall seeing in past years. After the high-flown announcements made this spring, almost nothing happened... New video cards reached the shelves in minimal quantities and instantly dissolved in the innumerable mass of customers eager to buy products that stood out so clearly against their predecessors, even at the beastly price of 500 USD.

Today we'll try to take a detached view of the market, analyze the reasons for the present situation, and peek into the near future. Let's start with finances.

Financial records

Finances adore singing romances ("Finances are singing romances" is a Russian idiom for being pressed for money). The singer usually performs solo, while the others are busy packing and shipping the wealth that has suddenly come down on them. In 2004 the de facto monopolization (for fans of exact terminology – oligopolization) of the market by two huge corporations became evident. Even the zealots of the mythical new PowerVR tile architecture, which has been "coming soon" for the second year running, have quieted down.

Last year S3 Graphics (that is, VIA) and the young XGI (at least partially rooted in SiS) bid for their share of the pie. The former never managed to deliver a noticeable retail volume of its reasonably good video cards based on the DeltaChrome chip and preferred to focus on integrating those cores into VIA chipsets. The latter made a scary face and clapped loudly instead of launching truly competitive cores. As a result, both companies have practically given up their attempts this year to compete with ATI and NVIDIA on the market that traditionally belongs to them.

ATI and NVIDIA, NVIDIA and ATI. This couple is rather interesting. If you remember how it all started back in 1996, you will understand the huge intrinsic differences between the two current competitors. NVIDIA is a young upstart which, like the legendary 3dfx, soared to the heights of the market in less than ten years of existence. ATI is a respectable Canadian company which – just think about it! – had long been the market leader by the time the young and frisky 3dfx and NVIDIA started their fight for the PC 3D video market.

Practically all existing graphics chip companies were eliminated one way or another in the course of the battles unleashed by these frisky upstarts. Every one of them, except ATI. Possessing serious market reserves, this company saved up its potential for a long time, which led to the appearance of the absolutely ingenious R300 architecture in 2002. This architecture, without false modesty, ensured practically all of ATI's achievements today.

Let's have a look at the diagram displaying the Canadian company's revenue over the last financial year:




It is, of course, just a snapshot of the company's results and cannot be used for forecasts, but the trend is quite illustrative. If you wish, you can take the results of the previous quarters from www.ati.com and add some columns on the left – the trend will become even more pronounced. Since the launch of the R300 core (Radeon 9500 and 9700 video cards), ATI has been steadily growing its revenue while maintaining a decent profit margin.

The seasonal post-holiday drop in market activity explains the comparatively flat results in the quarter that ended on February 29. The abrupt growth in the last quarter is, on the contrary, caused by increased market activity thanks to the release of the long-awaited Doom 3 and the start of mass deliveries of Radeon X800 Pro. At least we don't see any other reasons for this leap.

Taking a closer look at the results of ATI's last finished financial quarter (ended 08.31.2004), you will understand the company's revenue/expenditure pattern. The lion's share of the revenue (449.391 million USD) comes from component sales, that is, video chips and chipsets. Second place (111.905 million USD) is taken by video cards sold under ATI's own brand. It's interesting to note that the Canadian company earns over 80% (!) of its revenue in the Asia-Pacific region and only 15% in the USA. The less than 1% earned in its native Canada seems almost indecent to mention.

The expenditure pattern is no less curious. Out of the 572.218 million USD brought in by market activities during the last quarter, 378.792 million (66.2%) is the cost of goods and services sold. ATI spends the rest as follows: 77.073 million (13.5%) goes to R&D; 30.542 million (5.3%) covers publicity, marketing, and relations with distribution channels; 11.745 million (2.0%) falls to administrative expenses. Remember these figures – later on we shall compare them with the expenditure pattern of the Canadian company's main competitor.

Speaking of the competitor, let's have a look at the corresponding diagram for NVIDIA. Bear in mind that the two companies' financial quarters do not end on the same dates, so no direct comparison of market activity periods is possible.




NVIDIA presents a more curious picture. First of all, note that NVIDIA's last financial quarter ended before the release of Doom 3 and before video cards based on the successful NV40 started to appear in retail stores. Secondly, you can clearly see that the company's profitability is trending downward, with management evidently trying to balance on the very edge of minimal profitability: in the last finished financial quarter it was practically 12 times lower than the corresponding figure for the Canadian competitor's last finished quarter. It's important to understand that in itself this is neither good nor bad. To conclude whether the profit is too low or just at the necessary level, you have to know where the money went.

NVIDIA, unfortunately, does not publish a breakdown of its revenue. However, since the company does not manufacture finished goods, one can safely assume that the lion's share of its revenue comes from chip and chipset sales.

Let's look at the expenditure pattern. Out of the 456.061 million USD earned by the Californian company in the last financial quarter, 315.968 million (approximately 70%) is the cost of goods and services sold. You can clearly see that the share of cost of goods in revenue was higher at NVIDIA last quarter than at ATI. NVIDIA spends the remaining 140.093 million of its revenue as follows: 85.420 million USD (about 18.5%) goes to R&D – a sum greater than at the Canadian company both relatively and absolutely; the remaining 50.874 million (~11%) is spent on marketing, sales, administrative, and other operations. Note that in absolute figures this item is noticeably higher (by approximately 9 million USD) than the corresponding one at ATI. This may point to lower management efficiency at NVIDIA, or to overly high spending on publicity or partner relations.
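To make the two expenditure patterns easier to compare, here is a small Python sketch that recomputes the shares from the quarterly figures quoted above (the dollar amounts are those cited in the text; grouping NVIDIA's marketing, sales, and administrative expenses into one line also follows the text).

```python
# Recompute the expense shares quoted above from the raw quarterly figures.
# All values are in millions of USD, as cited in the text.

companies = {
    "ATI": {
        "revenue": 572.218,
        "cost of goods sold": 378.792,
        "R&D": 77.073,
        "marketing & sales": 30.542,
        "administrative": 11.745,
    },
    "NVIDIA": {
        "revenue": 456.061,
        "cost of goods sold": 315.968,
        "R&D": 85.420,
        "marketing, sales & administrative": 50.874,
    },
}

for name, items in companies.items():
    revenue = items["revenue"]
    print(f"{name}: revenue {revenue:.3f}M USD")
    for item, value in items.items():
        if item != "revenue":
            print(f"  {item:34s} {value:8.3f}M  ({value / revenue * 100:4.1f}% of revenue)")
```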

From our analysis, which does not claim any special professionalism, it becomes more or less clear that the Canadian company is currently in a better financial position than its main competitor from sunny California.

But readers should remember that we have only looked at a cross-section of the current situation, without analyzing its causes or speculating about how events may unfold. That is the subject of the next chapter, in which we shall dwell on the technologies that bring these companies their profits.

3D Technologies 2004

We shall certainly start with the most interesting and the most important sector, namely...

High-End

In this article, all video cards with an MSRP above 300 USD are counted as the High-End sector of 3D video chips. For ATI these are the Radeon 9800 and X800; for NVIDIA, the GeForce 6800.

This year the Californian company was the first to fire, announcing in mid-April a new flagship chip codenamed NV40, which received the marketing name GeForce 6800. After the GeForce FX series, so unfortunate for NVIDIA, the new flagship had no right to be even merely average – otherwise the company could have lost its market positions and its profitability.

That's why everything possible went into GeForce 6800. The chip turned out bulky (222 million transistors versus 135 million in NV35, the company's former flagship – on the same process technology!), very power-hungry (for the first time a video card based on a single video chip requires two external power connectors), but second to none in functionality.




GeForce 6800 supports everything that Radeon 9800 supported and moves two steps higher: it offers the new Shader Model 3, which was part of the DirectX 9.0 specification from the start and was only awaiting appropriate hardware, and it brings FP16 texture filtering and FP16 blending to the consumer market, thus becoming the first video chip on the PC market capable of full-fledged floating-point operations along the entire pixel pipeline.

Not content with functionality alone, NVIDIA equipped the flagship chip with 16 pixel and 6 vertex pipelines: four (!) and two times as many, respectively, as in the previous flagship, the NV35 chip. For all that, NV40 is still based on the common 130nm process technology. But these chips are now manufactured at IBM fabs in the USA instead of at Taiwanese TSMC. Despite IBM's widely advertised skill in manufacturing the most complex chips, this has affected clock frequencies: the NV40 core reaches only 400MHz, 75MHz lower than the fastest NV35. However, this shortcoming is generously repaid by the quadrupled number of pixel pipelines and by the considerably increased overall efficiency of the architecture.
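To put the pipelines-versus-frequency trade-off into numbers, here is a back-of-the-envelope sketch using the usual theoretical fillrate estimate (pixel pipelines × core clock). The pipeline counts and clocks are the ones quoted above; the formula is our own first-order approximation and ignores real-world limits such as memory bandwidth and shader load.

```python
# Rough theoretical pixel fillrate: pixel pipelines x core clock (MHz) -> Mpixels/s.
# First-order estimate only; memory bandwidth and shader complexity are ignored.

def peak_fillrate_mpix(pixel_pipes: int, core_mhz: int) -> int:
    return pixel_pipes * core_mhz

chips = {
    "NV35 (fastest variant)":    (4, 475),   # 4 pixel pipelines, 475 MHz (per the text)
    "NV40 (GeForce 6800 Ultra)": (16, 400),  # 16 pixel pipelines, 400 MHz
}

for name, (pipes, mhz) in chips.items():
    print(f"{name}: {peak_fillrate_mpix(pipes, mhz)} Mpix/s theoretical peak")

# Despite the 75 MHz clock deficit, NV40's quadrupled pipeline count gives it
# roughly 3.4x the theoretical fillrate of the fastest NV35.
```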

We should add a caveat here: NV40 is not actually a completely new chip. It is more correct to speak of a development and, in a certain sense, a revision of the NV35 architecture. Although completely redesigned, the pixel pipelines in NV40 still possess all the properties of NV35's pixel pipelines (except, of course, the ability to fetch two texture values per clock). So the pixel pipelines in NV40 still support dual FP precision – 16 and 32 bits per component – and they similarly lose performance when switching from FP16 to FP32. The difference is that, unlike NV35, NV40 provides more than sufficient performance in FP32 and simply marvellous performance in FP16. The former flagship was bad at FP16 and very bad at FP32.

The new product also differs from NV35 in its antialiasing algorithms (a 4x rotated-grid MSAA mode appears) and anisotropic filtering (NVIDIA essentially copies its competitor's faster but lower-quality algorithm).

At first the company announced two standard video cards based on NV40: GeForce 6800 Ultra (16 pixel pipelines, 400MHz core, 1100MHz memory) and GeForce 6800 (12 pixel pipelines, 325MHz core, 700MHz memory). Later two more cards were added: GeForce 6800 GT (16 pixel pipelines, 350MHz core, 1000MHz memory) and GeForce 6800 LE (8 pixel pipelines, 325MHz core, 700MHz memory).

So, NVIDIA strikes back. The strike is successful in every sense, except for the above-mentioned increased demands on power supply stability. The announced series of video cards looks tasty. But what do we see? May is over – no cards. June – no cards. July – almost no cards! August – a few cards appear based on GF6800 with one pixel quad disabled (4 pixel pipelines), and even fewer based on GF6800GT. Besides, some retail channels offered the GF6800LE, which had initially been intended only for the OEM market. September – the boom keeps growing, cards are preordered even before they cross the Russian border, and prices are slowly but steadily crawling up. GF6800U cards at sane prices are nowhere to be found on the shelves... October – the same picture.

Since about August NVIDIA has in fact been pressing retailers to hold back prices for all GF6800 video cards except the GF6800U, preventing the huge market demand from driving prices sky-high.

What is going on? The announcement was half a year ago, and supply is still this scarce! Problems with the yield of working chips at the necessary frequencies? No. Happy owners of various GF6800 cards note their excellent overclocking potential: almost all GF6800GT cards can reach GF6800U frequencies. Besides, experiments with unlocking disabled pipelines in GF6800 show that these cards are often equipped with fully functional 16-pipeline chips. Manufacturing problems? It seems so. To all appearances, IBM's fabs are overloaded for some reason and cannot process all of NVIDIA's orders. It seems high time to return to good old TSMC, but...

In early May this year, having yielded right of way to NVIDIA and its NV40, ATI presented its new flagship, previously known as R420. The chip also contains 16 pixel and 6 vertex pipelines, yet it introduces practically no new functionality: ATI is betting on high core clock frequencies. Two video cards based on Radeon X800 (the marketing name of the R420 chip) were announced: Radeon X800 XT Platinum Edition (16 pixel pipelines, 520MHz core, 1120MHz memory) and Radeon X800 Pro (12 pixel pipelines, 475MHz core, 900MHz memory).




Pay attention to the core frequency of RX800XTPE: it's an absolute record for a 16-pipeline video chip consisting of 160 million transistors! How can it reach such high frequencies? Strictly speaking, the reasons lie in everything connected with R420.

Firstly, there is the new TSMC process technology – 130nm with low-k dielectrics – which increased the frequency potential of R420 compared with NV40. Secondly, the R420 die is substantially less complex: 160 million transistors versus 222 million in NV40. And thirdly, the R420 architecture itself was reworked to reach considerably higher frequencies than the architecture of the former flagship, the R360 chip, allowed.

Let's analyze these points in detail, starting from the bottom of the list. So, the R420 architecture is thoroughly reworked compared with R360. The objective of this redesign was to optimize the chip, not to introduce new functionality. ATI states that the R360 feature set (that is, essentially the old R300) meets all the current requirements of game developers.

Still, some new functionality does appear in R420, and it came to ATI at little cost. For example, thanks to the reworked control logic of the pixel pipelines (which were being redesigned anyway to increase their efficiency), ATI created the second standard extension of Pixel Shaders 2.0, now known as PS2.0.b. The functionality provided by this extension is significantly inferior not only to the more progressive PS3.0 model but also to PS2.0.a, which appeared a year earlier and is supported by all GeForce FX video cards and, naturally, by the later GeForce Series 6 cards.

But it later turned out that the PS2.0.b extension is quite enough for most of the pixel shader optimization techniques used on cards with PS3.0 support, without killing performance on PS2.0.x hardware. To our mind, the introduction of PS2.0.b into R420 was a very wise move by ATI, allowing the company to argue, with some justification, that SM3 support in NV40 is useless. One-nil, Canada leading. At least at the initial stage. Later on, the magic of numbers will inevitably have its effect on sales: other things being equal, customers will prefer a video card with the figure 3 on the box to one with a 2...

The second innovation in R420 concerns texture compression: thanks to a modification of the DXTC block, the chip now supports a new compression format optimal for normal maps. The format is named 3Dc, and it is then licensed by Microsoft for inclusion into the next DirectX version. Note that technologies included into DirectX become free and require no royalties to their original developers – that is how S3TC became DXTC. A similar fate obviously awaits ATI's 3Dc, possibly in terms of hardware support as well, but it's too early to speak about that now.
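Since the whole point of 3Dc is that a unit-length normal can be stored as only two components, a tiny sketch of the reconstruction step may help. This illustrates the general two-channel normal-map idea behind such formats, not ATI's actual block-compression layout, which we don't cover here.

```python
# General idea behind two-channel normal-map storage (as used with formats like 3Dc):
# store only X and Y of a unit-length tangent-space normal and rebuild Z in the shader.
import math

def reconstruct_normal(x: float, y: float) -> tuple:
    # For tangent-space normals Z is non-negative, so it can be recovered as
    # z = sqrt(1 - x^2 - y^2); the clamp guards against quantization/rounding error.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)

print(reconstruct_normal(0.30, -0.40))   # -> (0.3, -0.4, 0.866...)
```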

Back to our list: the second reason for R420's tremendous frequency potential is that its core is considerably less complex than NV40. Core complexity is a direct consequence of the number of functions a video chip supports, and R420 is noticeably weaker than NV40 in this regard – hence the considerably lower chip complexity.

The first reason is the new TSMC process technology – low-k 130nm. Did NVIDIA make a mistake in preferring IBM's fabs to Taiwanese TSMC for NV40 production? We think it didn't, and here is why.

Look into the price list of the nearest computer store: can you find even one RX800XTPE? Most likely not. The situation looks exactly like NVIDIA's: the new flagship was announced half a year ago, but the cards are simply nowhere to be found on sale. The similarity is only superficial, though: in the case of RX800XTPE the problem is the very low yield of R420 chips (suitable for these cards) from TSMC's 130nm low-k line. What's the reason?

It's elementary: ATI overestimated the capabilities of this process technology when it set the final operating frequencies of RX800XTPE. Why overestimated? Remember which chip was released first this year? It was NV40. According to our information, ATI had to raise RX800XTPE frequencies shortly before the announcement to ensure, at least on paper, the advantage of its new flagship over the NVIDIA flagship announced a month earlier. That's why you don't see RX800XTPE on the shelves – at first these cards were so scarce that there were barely enough of them for ATI itself. Why aren't they available now? We'll get to that in the final chapter of this article.

To do ATI justice, we should note that the company presented one more model in the Radeon X800 series: the X800 XT, without the confusing "Platinum Edition" suffix. This card has a full 16 pixel pipelines and matches its elder brother in everything except reduced clock frequencies: 500MHz core and 1000MHz memory. These cards initially came only in a PCI Express x16 modification, but AGP models appeared later. Their performance is not much worse than that of RX800XTPE, and unlike the latter they can actually be found in stores.

By the way, we almost forgot the epic of the transition of video cards to PCI Express x16, which started this year. In the High-End sector it went as follows: ATI released the R423 chip, which is just the old R420 with a PCIE interface instead of AGP. This chip was used in Radeon X800 Pro and Radeon X800 XT. That was the full extent of ATI's PCIEx16 activity in the High-End sector.

NVIDIA chose a different way: it released the NV45 core, which is actually the same NV40 with the PCIE-AGP HSI bridge integrated onto the chip package. The bridge had previously been used in the GeForce PCX series (practically the same GeForce FX, but with PCIE implemented through an external HSI bridge). Unlike ATI, however, NVIDIA did not stop there: these cards were equipped with... SLI!

Of course, this is not the SLI everybody remembers from 3dfx Voodoo 2 and Voodoo 5 (NVIDIA owns all the technologies of the late 3dfx). It even expands differently: Scalable Link Interface instead of Scan Line Interleave. The concept of NVIDIA's new SLI is similar not only to the old SLI from 3dfx but also to the technology ATI used in Rage Fury MAXX: two video cards with PCI Express x16 interfaces can work together to render the same frame (split frame mode, as in Voodoo 5) or render frames in turn, one processing odd frames and the other even ones (alternate frame mode, as in MAXX).
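The difference between the two modes is easy to show schematically. The sketch below is a toy illustration of the idea only – split the current frame between the two cards versus alternating whole frames – and not NVIDIA's actual load-balancing logic, which adjusts the split dynamically.

```python
# Toy illustration of the two dual-GPU work-distribution schemes described above.
# It models only the assignment of work, not the drivers' real dynamic balancing.

def split_frame(frame_height: int) -> dict:
    # Split frame rendering: each GPU renders one horizontal band of the same frame.
    half = frame_height // 2
    return {"GPU0": range(0, half), "GPU1": range(half, frame_height)}

def alternate_frames(frame_numbers) -> dict:
    # Alternate frame rendering: the GPUs take turns rendering whole frames.
    return {
        "GPU0": [f for f in frame_numbers if f % 2 == 0],
        "GPU1": [f for f in frame_numbers if f % 2 == 1],
    }

print(split_frame(1200))            # {'GPU0': range(0, 600), 'GPU1': range(600, 1200)}
print(alternate_frames(range(6)))   # {'GPU0': [0, 2, 4], 'GPU1': [1, 3, 5]}
```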

As long as mainboards with two PCIEx16 slots were lacking, SLI was no more than a declaration of intent. However, such chipsets (NVIDIA's nForce4 in particular) and mainboards are starting to appear, demonstrating a 50% to 80% performance gain in SLI mode for two GeForce 6800 cards. Will such configurations be in demand? No more than a Ferrari, I think – at least in the case of GeForce 6800 SLI. We cannot yet say how NVIDIA's SLI will evolve or whether ATI will present its own version of SLI in the future.

Bottom line for the High-End sector

So, at present we can state that both vendors overestimated the capabilities of their technology partners. NVIDIA clearly overestimated IBM's capacity for mass production of such complex chips. ATI clearly overestimated the frequency potential of TSMC's 130nm process technology, even with low-k dielectrics.

As a result, the flagships announced half a year ago are de facto absent from stores, and the main competition has moved one step lower: GF6800GT competes with RX800Pro (RX800XT is noticeably more expensive than GF6800GT and thus doesn't qualify for this match-up). GF6800GT is the obvious winner here: the card is faster in most tests, to say nothing of its support for many functions unavailable in RX800Pro.

If we go down another step, NVIDIA is still victorious: in most cases the old 8-pipeline Radeon 9800XT cannot compete with the 12-pipeline GeForce 6800.

Speaking of performance at any cost, NVIDIA comes out victorious again: on the PCI Express x16 platform in SLI mode. Although the price is really impressive – as is, to put it mildly, the modest number of available mainboards with two PCI Express x16 slots.

However, ATI has its victories as well. There is actually no NVIDIA counterpart to RX800XT, and RX800XTPE, though not available in stores, still holds the performance lead over GF6800U in the latest applications with Shaders 2.0 (with the single notable exception of Doom 3).

Is ATI right to bet on frequencies to the detriment of functionality? As practice shows, the company is not doing any worse for it yet. It's quite another matter that R420 failed to gain an advantage over its competitor among game developers, who are always interested first and foremost in new functions as such, especially if the performance of the old functions remains at a decent level.

Is NVIDIA right to bet on rich functionality to the detriment of clock frequencies? So far there is no unambiguous answer to this question either. Perhaps, if NV40 had not supported Shaders 3.0, it would have been able to reach core frequencies similar to R420's. That would have let it outscore R420 significantly, because in most cases NV40 provides better performance per megahertz.

If ifs and ands were pots and pans... Both chips are in fact the logical continuation of earlier trends. ATI prefers to keep capitalizing on the runaway success of R300, enhancing the core with new functions that are relatively easy to implement and raising frequencies and pipeline counts. NVIDIA, after last year's fiasco with GeForce FX, has firmly re-established its position by correcting almost all of its previous mistakes in the design of GeForce 6800.

And at this point we suggest returning to both companies' figures for the last financial quarter. In that quarter NVIDIA earned relatively little money, while ATI jumped ahead. A coincidence? Not likely. Look once again at the new video cards offered by both companies: very often they are simply not available in stores! That means NVIDIA is largely trying to withstand ATI's onslaught with the very weak GeForce FX series – no wonder ATI is winning sales away from NVIDIA.

It would be curious to see the figures for NVIDIA's current financial quarter, which ends in late October. It will reflect sales of cards based on the company's chips after the release of Doom 3, in which the GeForce 6800 series fares much better than its Radeon competitors. It's quite possible that NVIDIA will manage to win back the profitability it lost in the previous quarter thanks to this game alone.

Middle-End

To the Middle-End sector we assign video cards whose MSRP falls within 150-300 USD.

In fact, the battle in the Middle-End sector has not seriously started yet, and chances are it will shift to next year. All the new chips announced by both companies after their spring flagships feature a still-exotic interface – PCI Express x16. NVIDIA can manufacture AGP modifications of cards on these chips with the help of the HSI bridge (it works successfully in both directions). ATI has no such bridge so far, though it is rumoured to be developing one in great haste.

As in spring, NVIDIA was first in the Middle-End sector. In late August the company presented the GeForce 6600 video cards. We shall not describe the architecture here – you can learn the details from our review. We'll only mention that the NV43 chip (aka GeForce 6600) is completely analogous to NV40 in its functions, but its number of pipelines was cut down (8 pixel pipelines instead of 16 and 3 vertex pipelines instead of 6). The memory bus width was correspondingly halved, to 128 bits. The number of ROPs (Raster Operation Processors), which are responsible for post-processing, filtering, and writing rendered pixels into the accelerator's memory, was cut to four.




In other words, although NV43 shades 8 pixels per clock, it can write only 4 pixels per clock to memory. In practice this seemingly severe cut shows up only in theoretical tests: in games NV43 spends more than one clock per pixel anyway – and the newer the game, the more clocks its pixels require. In that case the ROPs have time to write out all previously rendered pixels while the chip is busy with complex multi-clock pixel and vertex shader calculations.
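A back-of-the-envelope balance check makes this argument concrete. The model below is our own simplification: it assumes every pixel occupies a shader pipeline for a fixed average number of clocks and ignores everything else (blending, AA, memory stalls).

```python
# Simplified throughput balance for NV43: 8 pixel pipelines feeding 4 ROPs.
# Assumption (ours): each pixel occupies a shader pipeline for `shader_clocks` cycles.

def pixels_written_per_clock(pixel_pipes: int, rops: int, shader_clocks: float) -> float:
    shaded = pixel_pipes / shader_clocks   # pixels leaving the shader pipelines per clock
    return min(shaded, rops)               # the ROPs cap how many can be written per clock

for shader_clocks in (1, 2, 4, 8):
    rate = pixels_written_per_clock(pixel_pipes=8, rops=4, shader_clocks=shader_clocks)
    print(f"{shader_clocks} shader clock(s) per pixel -> {rate:.1f} pixels written per clock")

# Only with 1-clock pixels (the "theoretical test" case) do the 4 ROPs become the
# bottleneck; at 2 or more clocks per pixel they keep up with the shader pipelines.
```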

This solution allowed NVIDIA to save a considerable number of transistors in NV43. There is one obvious shortcoming though: with only 4 ROPs, NV43 is not good at MSAA.

However, the key point of the GF6600 announcement was TSMC's new 110nm process technology, on which these chips are manufactured. This process allowed NVIDIA to clock the core of the top GF6600 model at 500MHz. Moreover, almost all reviewers noted the core's wonderful overclocking potential. For example, the card tested in our lab was stable with its core at 590MHz using only the stock cooling system plus an external fan blowing on the card.

ATI, which most likely deliberately let NVIDIA go first, is about a month behind: in late September the Canadian company announced the Radeon X700 series based on the RV410 chip. Its design principles are remarkably similar to those behind NVIDIA's NV43: RV410 is half of R420 in terms of pixel pipelines. However, ATI decided to retain the full number of vertex pipelines, and it kept the number of ROPs equal to the number of pixel pipelines.




In other words, in theory RV410 should have been more powerful than NVIDIA's product. In practice, things did not quite work out in ATI's favour. For one thing, ATI seemed to have problems with the 110nm process technology: the RV410 chips we reviewed in our lab could hardly be overclocked, they ran hot, and the noisy coolers on cards based on them were noted by almost all reviewers.

Still, there is no obvious winner here either. As in High-End, the competitors released video cards that are very close in performance. Yes, GeForce 6600 has the additional bonus of Shaders 3.0 support as well as the other new functions of GeForce 6800, which will inevitably sway customers, all other things being equal. But for that, the cards have to be on store shelves in the first place...

No, GF6600 and RX700 do not seem to suffer from the problems their elder brothers had. Yes, ATI has problems with the yield of working RV410 chips for Radeon X700 XT, but those will surely be solved in a new chip revision. That's not the point. The delicate part of the situation is that both chips currently exist only in PCIEx16 modifications.

These days the share of computers with a PCI Express x16 slot is hardly 10% of the total. This means the new Middle-End chips from ATI and NVIDIA have essentially fired into the air. They will be in demand only among system integrators, and even then not for all computers – only for those based on Intel chipsets for the Pentium 4.

Perhaps the AGP modification of GF6600 will be presented in November – the PCB will carry an external HSI bridge translating the PCIE interface of the video chip into the AGP interface of the card. Things are more complicated with RX700: ATI, as we have already mentioned, has no such bridge yet, and releasing a separate chip for a dying interface is too expensive.

As a result, the AGP side of the Middle-End sector is served by quite different video cards from the two manufacturers. From NVIDIA – the rapidly aging GeForce FX series in the lower range and the quickly cheapening GeForce 6800 LE in the upper range. From ATI – Radeon 9600 in the lower range and all possible modifications of Radeon 9800 in the upper range. It's obvious that ATI currently offers much better AGP cards in this price segment than NVIDIA (except perhaps for GF6800LE, whose price is still too high). Later the situation will most likely reverse: after the release of the GF6600 series for AGP, it will be ATI at a disadvantage if it delays its AGP modifications of RX700.

Bottom line for the Middle-End sector

The situation in the Middle-End sector partially replicates that in High-End, though with its own peculiarities. In particular, as in the top price range, the competitors are roughly equal in performance, but the new NVIDIA cards have noticeably more advanced functionality (which is not yet used by real applications).

The tactic of manufacturing new chips only for PCIE looks justified in NVIDIA's case, since the HSI bridge allows it to produce AGP modifications of the corresponding cards fairly painlessly.

In ATI's case the bet on PCIEx16 looks somewhat premature: the platform is still too rare, and the Canadian company has no bridge with which to produce AGP modifications of the same cards quickly and inexpensively.

Low-End

In 2004, dead calm reigns in the Low-End sector (to which we assign all video cards with an MSRP below 150 USD). NVIDIA keeps promoting the GeForce FX 5200/5500 series, which is rather weak in performance even for this sector. In spring ATI successfully pushed RV360 chips with considerably reduced frequencies into this sector: the Radeon 9550 cards appeared, quite good in terms of price/performance/functionality. But below the R9550, ATI's offerings are still limited to Radeon 9000/9200 cards based on the old RV280 core, which supports only DirectX 8.1.

Nevertheless, in spring ATI launched two minor modifications of the RV360 core: RV370 and RV380. These cores were the first to bring the PCI Express x16 interface to video cards on the market. However, given the total lack of mainboards with this slot, the chips again hung in mid-air and had practically no effect on the Low-End sector until Intel released its new chipsets, i915 and i925.

In fact, the situation only started to change in October, when NVIDIA announced GeForce 6200. This video card uses the familiar NV43 chip with one pixel quad (4 pixel pipelines) disabled, so we can essentially speak of GF6200 cards using NV43 chips rejected for GF6600.

But the announced prices are disappointing – GF6200 will sell at 130-150 USD, which looks odd next to the 8-pipeline GF6600 that customers would prefer (and which in turn looks overpriced against the performance demonstrated by RX700).

A separate mention should be made of NVIDIA's reluctance to bring GF6200 to the AGP market – these cards are supposed to be manufactured only for the PCIEx16 platform. However, the final decision will be up to video card manufacturers rather than NVIDIA. In theory they can pair GF6200 with the HSI bridge, but that will require at least support for such cards in the ForceWare drivers.

Yes, GF6200 again supports Shaders 3.0 (it uses the same chip as GF6600), however FP16 filtering and blending are for some reason disabled in the drivers. Why? We shall talk about the rumours and market prospects in the concluding chapter.

The near future of 3D video cards for PC

I suppose I'll start this chapter where the previous one ended: why does NVIDIA present only one video card, clearly overpriced for its performance, instead of a GF6200 series covering the traditional 80-150 USD MSRP range? In our opinion, the answer lies in the further development of the NV4x chip series.

A year ago we already saw the NV44 chip in NVIDIA roadmaps; it is intended to relieve the controversial GeForce FX 5200 series. This chip should presumably contain only 4 pixel pipelines. After the announcement of GF6200 based on NV43 chips rejected for GF6600, it became clear that NV44 would not have FP16 filtering and blending – according to some sources, the ALUs responsible for these functions take up too many transistors, making it inexpedient to put them into chips sold below 100 USD.

One can assume that NV44 will have not 3 but 2 or even 1 vertex pipeline – SM3.0-compatible vertex pipelines also take up a lot of die space. That is why the number of vertex pipelines was cut in NV43 compared to NV40 and, conversely, preserved in the non-SM3.0-compliant RV410.

Finally, it's obvious that NV44 will be manufactured on TSMC's 110nm process, with a 128-bit bus and a native PCIEx16 interface. Perhaps NVIDIA deliberately decided to delay the NV44 announcement because the PCIE platform has not yet gained sufficient ground, and because the main profit on the Low-End market comes from mass sales rather than from the margin on each unit sold. There can clearly be no talk of mass sales of PCIE solutions so far.

Accordingly, NV44 (it's hard to say now what the cards based on it will be called – perhaps GF6200, or maybe GF6100) can be expected in the first half of 2005. Cards based on this chip will most likely be manufactured only for PCIE, because an HSI bridge on Low-End cards could noticeably increase their production cost.

As for ATI, NV44 will be opposed by RV380 (Radeon X600) and RV370 (Radeon X300). The lack of Shaders 3.0 support, and of the other new functions of the R4x0 architecture, makes these chips simpler and cheaper to produce than NV44, so ATI will have a chance to set more attractive prices for such cards.

However, the lion's share of video chip sales on the Low-End market comes through OEM partners, and support for advanced functions may tip the contract balance for them. That was precisely the reason for the successful sales of the low-performance GFFX5200 in computers from major brands.

In the near future we can expect GF6600 for AGP to appear in the Middle-End sector. This is potentially a major chance for NVIDIA to improve its financial results, because, at least at first, the Canadian competitor will have nothing to oppose these cards with.

In general, NVIDIA's decision to make maximum use of its proprietary AGP-PCIE HSI bridge is currently the only right approach. Apparently ATI agrees – the Canadian company is rumoured to be already developing a similar bridge.

Looking further into the Middle-End future, we'll see GF6600GT SLI configurations, which according to the first tests do not look much worse than a single GF6800U. Perhaps such configurations will become a good alternative to the GF6800U (practically unavailable in stores). However, they belong to Middle-End only nominally – it's quite clear that the price of two GF6600GT cards will greatly exceed the price limit of the sector.

In the first half of 2005, RX700 cards will most likely gain AGP support thanks to a PCIE-AGP bridge installed on the PCB.

Pure grapevine: a GeForce 6600 Ultra can be expected – NVIDIA has so far kept this suffix "in reserve". Such a card would most likely exploit the overclocking potential of the NV43 chip; core frequencies can be expected in the region of 550MHz, and naturally higher memory frequencies as well. The price is not clear yet. Considering the strange price parity of the GF6600 and RX700 series, it would look logical to reduce the prices of GF6600 and GF6600GT, with GF6600U coming in at around 200 USD.

The odds are, however, that if GF6600U appears, it will carry an MSRP of 250 USD, and the pricing problems of the GF6600 series will be solved by launching new cards (with SE, LE, etc. suffixes).

Finally, it's quite likely that ATI will release a new video chip for the Middle-End sector at the end of next year, this time based on the R520 architecture, which is expected to enter the market in the first half of the year.

So, the most interesting events in the near future will again take place in the High-End sector, with echoes reaching the lower price ranges. Both companies plan to present new flagship series this coming winter: the NV41/NV48 and R430/R480 chips.

A lot of rumours are circulating about NV41. The most credible one has it that the NV40 chip will be transferred to TSMC's 110nm process line with one of its pixel quads cut. The chip will thus become much simpler and its die will shrink, so we can expect larger production volumes of cards based on it. In NVIDIA's current lineup this chip will probably slot in between the top GF6600-based model and the junior NV48-based model.

NV48, in turn, will most likely be an overclocked NV40: neither the process technology nor the set of supported functions will change, and even the interface will remain AGP 8x. I suppose we can expect frequencies of 450-475MHz for the chip and over 1200MHz for the memory. There is a chance that NV48 will be manufactured on TSMC's 110nm process after all, but it is rather unlikely.

We can expect R430, like NV41, to be more widely available. According to our information, R430 is the same R420 transferred to the 110nm process. Interestingly, ATI seems determined to retain the number of pixel pipelines. It's rumoured, however, that the fastest R430-based card will be considerably slower than RX800XTPE, though widely available in retail.

In the top market segment NV48 will be opposed by the R480 chip, which, according to our information, will relate to its predecessor the same way NV48 does to NV40 – a plain overclock of the core. Neither the functions nor the process technology will change.

Both flagship chips will be minimal refreshes of the vendors' top cards; don't expect miracles of speed from them. The next substantial renewal of the High-End sector will come in the first half of 2005, and the R520 chip looks the most interesting there. For the first time in two years ATI will have to substantially redesign the R300 architecture to provide Shaders 3.0 support. R520 will become the first ATI chip to support FP32 precision for every color component.

It's difficult to discuss any other features of this chip so far, because we don't even know which process technology will be used to manufacture it: some sources speak of the same 110nm process, others of 90nm. So we can only guess.

R520 will most likely have no fewer than 16 pixel pipelines and no fewer than 6 vertex ones. Will the chip reach 32 pixel pipelines? I doubt it. Two other options seem more likely: either there will be 24 pipelines, or 16 pipelines will remain but the number and complexity of the shader ALUs will be considerably increased.

The NV47 chip, due next spring to replace NV48, looks a tad less interesting – first of all because its feature set is more or less clear even now: it will most probably not change relative to the other members of the NV4x series. The main changes in NV47 will be quantitative. For example, we can be 100% sure that the number of its pixel pipelines will be increased to 24.

In autumn 2005, the first information will appear about video chips supporting the next DirectX version, which is expected to arrive together with the new Windows release presently known as Longhorn. The new DirectX will include the Shader Model 4 specification, whose main idea will most likely be the complete unification of the pixel and vertex shader instruction sets (that is, there will effectively be no division between pixel and vertex shaders any more), as well as new programmable geometry processors capable of generating new vertices right in the video chip. The latter will open the floodgates to complex geometric surfaces, real displacement mapping, and higher-order surfaces created on the GPU.

Overall Bottom Line

Summing up the events on the PC 3D graphics market this year (as of mid-October), we can note the following:

  • De facto only two powerful companies remain on the market, and they set the trends and patterns of 3D graphics development.

  • Californian NVIDIA launched two very successful chips – NV40 and NV43 – which are completely free of the problems of the company's previous chip series (NV3x). But we still cannot say that NVIDIA has fully recovered from the GeForce FX setback. Firstly, there are too few cards based on the new chips on sale, and some price sectors of the market are not covered at all. Secondly, some of the cards on the new chips are manufactured only for the exotic PCI Express x16, while regular AGP 8x is still served by relatively weak GeForce FX cards.

  • GeForce Series 6 video cards provide performance comparable with their competitors but support a far more advanced set of functions, which will inevitably affect their popularity at equal prices.

  • Canadian ATI once again built on the success of the great R300 chip by releasing R420. However, unlike in previous years, in 2004 the R420 is outscored by competing solutions in supported functions. A performance advantage could have made up for this lag, but ATI overestimated the production capabilities of TSMC's fabs, so the flagship Radeon X800 XT PE cards based on R420 can hardly be found on sale. The slower Radeon X800 Pro does not demonstrate any significant performance advantage over competitors in its price range.

  • In the near future we should expect new, slightly faster and more widely available modifications of today's top video cards.

  • At the end of the year GeForce Series 6 will begin its expansion into the Middle-End segment for AGP (GF6600 AGP) and the Low-End segment for PCIEx16 (GF6200). All GeForce 6 video cards support the baseline Shaders 3.0 specification.

  • The Radeon X700 series presented in September for the Middle-End sector will exist only in PCIEx16 modifications until spring of next year. In functionality these chips are identical to the flagship Radeon X800 series.

  • Next spring we should expect a new generation of video cards in the High-End sector: ATI will at last implement Shader Model 3.0 support, and NVIDIA will significantly raise performance compared to its current top cards.

  • On the whole, PC 3D graphics developed in a rather successful and quite predictable way in 2004. The existing problems can easily be classified as tolerable :-)

Finally, we call your attention to a conjectural summary table of the video cards the two leading companies will offer on the market in the first half of next year. Let's agree that this table does not claim to be the indisputable truth: a significant part of it is based on rumours circulating on the Internet.

Conjectural lineup of video cards from ATI and NVIDIA at the end of the first half of 2005

Price range, USD | ATI | NVIDIA

AGP 8x
> 500 | R520 + bridge | NV47 + bridge
300-500 | R480 + bridge, R430 + bridge | NV48, NV41 + bridge
150-299 | Radeon X700 (RV410 + bridge) | GeForce 6600 (NV43 + bridge)
80-149 | Radeon 9800 (R360), Radeon 9600 (RV360), Radeon 9550 (RV360) | GeForce 6200 (NV43 + bridge), GeForce FX 5700 (NV36)
< 80 | Radeon 9250 (RV280), Radeon 9200 (RV280), Radeon 9000 (RV280) | GeForce FX 5500 (NV34), GeForce FX 5200 (NV34)

PCI Express x16
> 500 | R520 | NV47
300-500 | R480, R430 | NV48 + bridge, NV41
150-299 | Radeon X700 (RV410) | GeForce 6600 (NV43)
80-149 | Radeon X700 (RV410), Radeon X600 (RV380) | GeForce 6200 (NV43), NV44
< 80 | Radeon X300 (RV370) | NV44


Danil Gridasov (degust@ixbt.com)


01.11.2004

