iXBT Labs - Computer Hardware in Detail

The Spiral of Technology

October 26, 2007



Many historians and sci-fi writers support the idea that human evolution resembles a spiral rather than the straight line drawn by optimistic advocates of progress: most events repeat themselves, each time on a higher level, so the motion looks linear only when viewed from a great distance. Some armchair analysts fail to take this into account and even manage to invent alternative world models based on such similar events (models that contradict the experimental data, of course; but what visionary of a scientist pays attention to such trifles as facts?). But this article is not about them. What's more interesting is that computer civilization, though still in an embryonic state, also develops in a spiral. As with the grand history of mankind, people tend to forget that events repeat themselves. That's why each event is viewed either as a unique breakthrough or as a global conspiracy of marketers (Zionists, communists, freemasons, aliens, you name it). Time passes, passions abate, and computers again seem to develop in a linear way - until the next collision, of course. But all these collisions (a direct consequence of the "spring" of progress being compressed too much by external circumstances) look so much alike in the main aspects that an alternative new chronology could be introduced: it would be easy to "prove" that there were no fifty years of computer history and that the first computers appeared only 10-15 years ago.

You don't believe it? OK then, let's play an interesting game. I'll describe three events from a small part of computer history - the history of PC development (too recent a period to have been forgotten). I'll give them in the order they happened (they even repeated in threes). And you'll try to identify them without peeking at the end of the article. Ready? Let's start then.

  1. IT development added more work to computers, so they had to become more complex: new devices to process and transfer information, modernized versions of existing ones. The system bus that had served well for several years became a bottleneck. At the same time, the problem was not pressing for most computers, and not all devices needed more bandwidth than the old bus could provide. So the industry required something faster and more functional, but compatible with old equipment. Price did not play an important role: few people needed the new bus, and those who did would pay almost any price. The problem was solved, and the situation stayed under control for a long time (but not forever).
  2. But time marched on, and mainstream computers also ran into the narrow system bus. The solution from the first stage was not acceptable because of its high price; besides, it was not fast enough. The main problems appeared in graphics, so manufacturers decided to create a dedicated graphics interface. It had to be fast and cheap. New systems remained compatible with the old ones for quite a long time: outside the new segment, there was no need to rush, and compatibility with old devices was preserved for a while.
  3. However, the number of expansion adapters requiring a high-speed interface grew significantly over time. Besides, technology had taken a big step forward since Stage One, so it became possible to design a fast and cheap general-purpose bus, much better than the previous solutions in every respect - but one that had to abandon compatibility with the old ones. Low-speed peripherals posed no problem: some old connectors could be preserved. Solution 1 could coexist with the new bus in the high-end sector (price was irrelevant there), but it made no sense to keep Solutions 2 and 3 together in mainstream products. Wipe out the outdated solutions at once? That would have offended users (that, and insufficient support for low-speed peripherals, or the lack of it) - why trade one trouble for another at your own expense? The problem was solved by launching hybrid solutions at the initial stage - the wolves were sated, the sheep were safe, and may the shepherd's memory live on forever. Later on (when hybrid solutions were no longer popular), only the new solutions were manufactured, and they were cheaper at that. That's how the transition problem was solved.

So, how many events have we described? Three? Or six? Or more? Let's find out.

Introducing EISA (Step 1)

The AT bus, which later became an industry standard called ISA, was a very convenient and fast solution at the time, adequate to its tasks. It worked synchronously with the existing processors and matched their external data bus in width, which made engineers' lives much easier. Given all that, its performance was exactly what was required: those processors couldn't cope with more. Complex arbitration and device configuration? They were not required in the unsophisticated computers of that time - their tasks were very simple and called for no additional equipment.

However, time marched on and personal computers evolved, so the simple architecture of PC-compatible machines no longer served only primitive single-user tasks. It became especially noticeable after the appearance of i386 processors, which were much more powerful than the processors of older minicomputers and rose to the level of some mainframes (still in use at that time). It was very tempting to use PCs for something more than desktop typewriting. But their system bus was definitely a problem, because it remained at the level of 1984: its bandwidth and addressable memory were too low, and its organization was too primitive, impeding system development (it caused problems even within its narrow niche). Something had to be done.

A radical solution to the bus problem was offered by the main player and creator of this market - IBM. However, the method was too drastic: the company wanted to kill two birds with one stone, solving the technical problems and punishing the clone makers by regaining control over such a promising market segment. As a result, the brand new PS/2 computers were only software-compatible with classic PCs. Besides, they were protected by many patents against copying, which eventually brought them, and IBM in this segment, to ruin.

It happened because the other large manufacturers, who wanted to keep the architecture open, managed to find a way out of the crisis: an ISA extension called EISA. This bus was not as cleanly upgradeable as IBM's PS/2 Micro Channel architecture, and it was 20% slower in its native mode (32 MB/s versus 40 MB/s). However, it supported practically the same architectural extensions (such as arbitration and improved interrupts). And most importantly, it was compatible with old ISA. That feature came at a cost (in particular, its peak bandwidth was lower because the 8 MHz ISA clock had to be preserved), but in return it allowed old expansion adapters to be used. Loyal IBM clients, by contrast, had to replace all their components during an upgrade (even old floppy disks had to be thrown away, although that has nothing to do with our topic).
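That 20% gap follows directly from the clock rates: both buses move 32 bits per cycle, so peak throughput is just bytes-per-cycle times clock. A back-of-the-envelope sketch (peak theoretical rates only; real transfers were slower):

```python
# Both buses move 32 bits (4 bytes) per cycle; only the clock differs.
eisa_mb_s = 4 * 8      # 32 MB/s at the ISA-compatible 8 MHz
mca_mb_s = 4 * 10      # 40 MB/s at MCA's native 10 MHz

# The compatibility penalty EISA paid for keeping the ISA clock:
penalty = 1 - eisa_mb_s / mca_mb_s
print(eisa_mb_s, mca_mb_s, penalty)  # 32 40 0.2
```

The 0.2 result is exactly the 20% deficit quoted above - the price of plugging old ISA cards into a new bus.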

However, both buses had one huge drawback: their implementation was very expensive even for those happy times, when 10,000 USD (old full-weight dollars, not our present-day devalued ones) was quite a usual price for a mid-range computer. Besides, most users did not need a high-speed bus - they had no tasks that made it critical. As a result, EISA settled in the segment of servers and high-performance workstations, while the largest part of the market stayed with the ancient ISA. Even IBM started manufacturing... IBM PC clones (this self-cloning was the first step towards the end of the forefather and former locomotive of the industry; the now-faceless industry started to become a self-regulating system - we'll get back to that later).

VESA (Step 2)

Computer progress did not stop there. Now it was the turn of graphical operating systems. They significantly raised the bandwidth requirements of the bus that connected the graphics adapter (a full text screen took only about four kilobytes, while a 640x480 screen with 16 colors took 150 KB - and that was the lowest video mode for Windows or OS/2). Expensive server buses were not an option for mainstream desktops; the industry needed an affordable solution.
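The jump in data volume is easy to verify: a text screen stores 80x25 character cells at two bytes each (character plus attribute), while a 16-color graphics mode needs 4 bits per pixel. A quick check, assuming standard VGA mode geometry:

```python
# 80x25 text mode: one character byte plus one attribute byte per cell.
text_screen_bytes = 80 * 25 * 2           # 4000 bytes, roughly 4 KB

# 640x480 at 16 colors: 4 bits per pixel, packed two pixels per byte.
gfx_screen_bytes = 640 * 480 * 4 // 8     # 153600 bytes

print(text_screen_bytes, gfx_screen_bytes // 1024)  # 4000 150
```

A nearly fortyfold increase per screen refresh - and that was before true color or higher resolutions.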

For some time, computer manufacturers designed their own local buses - direct interfaces between the CPU and the graphics adapter. The approach was technically acceptable, but the industry required a universal standard: a similar solution that everyone could use.

Thus appeared VLB (VESA Local Bus, often simply called VESA). The design remained almost as simple as the proprietary local buses, but it had one important advantage: unified specifications. So VLB was quickly adopted by chipset makers as well as graphics card manufacturers. Being an extension of the CPU bus, it loaded that bus electrically, so only a limited number of devices could be connected. In fact, the first version of the specifications guaranteed only a couple of expansion cards at up to 33 MHz (the bus ran at the external CPU clock), one card at 40 MHz, and one onboard adapter at 50 MHz (it had to be soldered onto the PCB - not even a single expansion slot was guaranteed to work in the latter case). Later these limitations were relaxed, but VESA still had a long way to go to reach the level of "true" system buses. Still, the simplicity of the design ensured low prices and, consequently, the popularity of the solution. Besides, nothing else could provide such bandwidth at such a price at the time. There was no need to connect many devices anyway: a graphics card and a disk controller were the only consumers of fast buses in mainstream computers of that day. So the bus was an adequate answer to the problems at hand, and its implementation in i486 computers was simple and cheap - very soon every motherboard carried a couple of VESA slots. Conveniently, the design (a VLB slot was an ISA slot extended with a short Micro Channel-style connector that carried the new signals, while the old 16-bit section handled power, the interrupt controller, and so on) had no effect at all on the number of system bus slots.

So the age of 486 computers ended under the sign of VLB/VESA. This very bus saw the birth and maturation (nothing important was added afterwards) of such devices as 2D accelerators ("Windows accelerators," as they were called at first). However, the further prospects of the bus did not look so peachy.

Transition from VESA to PCI (Step 3)

After VESA reached the peak of its powers, Intel offered a better solution - the PCI bus. It was no longer a single-purpose local bus but a universal system bus that could completely replace the previous solutions. Including VESA: it offered the same bandwidth (with room to grow), fewer limitations, and smaller (hence cheaper) expansion slots and cards. So VLB held on only because of its simpler implementation in the dominant i486 computers - small wonder, the bus had been designed for them. It kept doing well even after the appearance of such a serious competitor, partly owing to its head start: there were many chipsets on the market that supported VESA, and many expansion cards, all of them cheaper. Not a favorable situation for introducing PCI.

However, the rigid tailoring to the 486 architecture played a nasty trick on VLB after the appearance of Pentium processors. Here both buses were of similar complexity; only PCI had been designed for multiple platforms from the start, while VESA had to be modified, which took precious time. Precious, because Intel was actively conquering the chipset market. It already had an advantage: chipsets have to be tailored to processors, so many of their peculiarities could be taken into account before production, not after (as competitors had to do). That's why chipsets with VESA support couldn't win a significant share of the new market. And they didn't. Even the old but still kicking 486 market began to migrate to affordable PCI systems: if you were buying a new computer anyway, it was logical to choose a product with the new bus to protect your investment, especially as VESA had no evident advantages over PCI.

VESA remained necessary only to those users who had already bought such expansion cards and didn't want to pay for new devices during an upgrade. This market segment had always been small... but noisy. Computer enthusiasts were not in raptures over trading one trouble for another at their own expense. However, the industry could calm them down: chipsets supporting both buses soon appeared. They were designed for 486 processors only, but thanks to Intel's competitors, the top processors for that socket outperformed low-end Pentium models. Hybrid motherboards usually had only one VESA slot, but that was fine - as far as common devices were concerned, one usually only needed to keep an old graphics card, since the IDE controller had already migrated into the south bridge by then. And owners of exotic VESA adapters either used their computers to the very end or replaced them outright, a move that was often justified in their case. Later on, VESA support was discontinued almost painlessly.

I have mentioned that hybrid motherboards supported both buses. In fact, there were three: mainstream motherboards carried two of them, because the good old ISA was not going to give up yet. Users had lots of ISA devices at that time, and new ones were still being designed. Fast PCI with all its new features was necessary only in a few areas, so it made sense to keep using a simpler (and consequently cheaper) bus alongside it. ISA was even modified a little to ease device configuration (Plug'n'Play ISA caused even more trouble at first, but its teething problems were soon cured).

Besides, PCI also came to the server market, which VESA had "missed" completely, and quickly became EISA's neighbor there. Active adoption came later, though: the healthy conservatism of that market's players made them stick with EISA until the new bus had proven stable and bug-free. But it was already clear that the future belonged to PCI, which promised to become the one and only bus.

PCI-X (Seems like Step 1... Again?)

As already mentioned, PCI eventually got into the market of servers and high-speed workstations. Soon its capacity became insufficient for a number of applications there. Small wonder: classic PCI provided just 132 MB/s shared by all devices, while a couple of server-class devices could each demand at least a hundred MB/s. This time, however, manufacturers were prepared - the bus had been designed for modernization from the start. 32 bits at 33 MHz not enough? No problem: use 64 bits, or 66 MHz, or both to quadruple the bandwidth (to about 0.5 GB/s). Besides, compatibility with old expansion cards was preserved for a while. Full compatibility had to be abandoned later, when 66 MHz was no longer enough and the clock rate had to be raised again (the bus width, after all, could not be enlarged that simply), but this caused no real problems. Not many devices needed the extra bandwidth, so motherboards simply carried both PCI-X and PCI slots on physically separate buses - it became a "decency rule" in this segment. Theoretically, two independent PCI buses would have sufficed in some cases, but that architecture would have limited the fastest adapters without being much cheaper. The arrangement was very convenient: the PCI-X segment for high-speed adapters and the PCI segment for affordable low-speed components did not interfere with each other.
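The quadrupling is simple arithmetic: doubling both the width and the clock multiplies the product by four. A sketch using the nominal rates (the exact clocks are 33.33/66.66 MHz, so the marketing figures of 133/533 MB/s run slightly higher):

```python
def bus_mb_s(width_bits, clock_mhz):
    """Peak rate of a parallel bus: bytes per cycle times cycles per microsecond."""
    return width_bits / 8 * clock_mhz

classic_pci = bus_mb_s(32, 33)   # 132 MB/s, shared by every device on the bus
pci_64_66 = bus_mb_s(64, 66)     # 528 MB/s, i.e. about 0.5 GB/s
print(classic_pci, pci_64_66, pci_64_66 / classic_pci)  # 132.0 528.0 4.0
```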

Why does it seem like Step 1? I mentioned for a reason that events tend to repeat themselves on a different level. The main difference between PCI-X and EISA lies not in implementation details but in positioning from the very beginning. EISA was presented as a universal solution, a replacement for old ISA - if not at once, then eventually. So it was in the focus of the IT press, discussed by enthusiasts, and so on; yet the bus took root only in the segment of high-performance systems. PCI-X, on the other hand, was never presented as a mass solution. Engineers assumed plain PCI would do fine in mainstream computers for a long time - and they said it about the 32-bit, 33 MHz modification at that. PCI-X was only for servers and workstations. As a result, not all enthusiasts even know of its existence, and they sometimes apply the name to a different bus, which will be described later - to say nothing of regular users, who have never seen such slots even in pictures. But if we dig deeper, the similarities between the ISA -> EISA and PCI -> PCI-X transitions are apparent: similar prerequisites, implementations, and consequences. We can easily draw a parallel between these two events.

Accelerated Graphics Port (AGP) (Resembles Step 2)

Let's return to our muttons - I mean, computers. The fate of servers was settled, but technical progress in the mainstream segment was not as rosy as it seemed. Graphics became a stumbling block again; this time the problem was 3D. Adding the third dimension again raised the requirements for video memory size and speed. The hundreds of kilobytes per screen that had scared users and engineers 5-6 years earlier gave way to megabytes, then dozens of megabytes for textures. And memory was expensive. Dedicating such volumes to graphics alone was no longer sensible: for a long time 3D accelerators were used only for games, and even buyers of gaming computers did not play all the time, to say nothing of generic home PCs. Users were promised a juicy carrot (the 3D desktop), of course, but at first it was only a promise. In fact, Aqua appeared only several years after these events, and Aero was delayed for a whole decade - that interface is being introduced only now, after several generations of 3D accelerators have already passed away. So the existing resources had to be used more efficiently - in this case, by using main memory. Not always, and not as the primary texture storage, but still. There would have been no progress in games otherwise: the chosen solution allowed them to run on simple graphics cards as well (provided the GPU was fast enough). The alternative would have been too harsh: either upgrade your graphics card twice a year, or stay at the same graphics level for several years.

There was only one problem: PCI was too slow for these tasks. It provided only 132 MB/s, and that was the total for all devices, not just the graphics card. Introduce PCI64 and then PCI-X into the mainstream segment? Too expensive. How could it be made cheaper? Let's assume only one device has high bandwidth requirements. Plausible? Indeed - ISA was still widely used, because many devices did not even need PCI performance. So let's assign a separate, modernized PCI bus to that one device. Since it's a single device, the bus can run at 66 MHz without getting expensive. Not bad already: all other devices share 132 MB/s, while the graphics card gets 264 MB/s to itself. And since it's a graphics adapter by definition, we can exploit that to raise the effective bandwidth further: texture fetches from memory (the critical operation) get priority, while memory writes may be slower - they are used less often, and the main work can be done in local memory, where actively used textures can also be stored. Besides, DDR (and later QDR) signaling had matured to the point of practical usage - that would be put to work as well.
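The numbers line up like this (a sketch; the 2x/4x multipliers are the DDR/QDR signaling mentioned above, and the familiar figures of 266/533/1066 MB/s come from the exact 66.66 MHz clock rather than the nominal 66 MHz used here):

```python
pci_shared = 32 / 8 * 33        # 132 MB/s, split among all PCI devices
agp_1x = 32 / 8 * 66            # 264 MB/s, dedicated to the graphics card

# DDR (two transfers per clock) and QDR (four) multiply the base rate:
agp_2x = agp_1x * 2             # 528 MB/s
agp_4x = agp_1x * 4             # 1056 MB/s

print(pci_shared, agp_1x, agp_2x, agp_4x)  # 132.0 264.0 528.0 1056.0
```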

So graphics cards migrated to AGP just as they had once migrated to VESA. It didn't happen in a day: old PCI solutions could still be used, especially when the new features were not needed. The PCI graphics market did not collapse all at once - there were still old computers around, and people wanted to upgrade them somehow. But in the long run, PCI graphics cards became as rare as ISA adapters. Those are the similarities between Steps 2.0 and 2.1. There were differences as well: the planning was sound and no strong competitors were in sight, so while VESA died at the introductory stage of Specs 2.0, AGP lived through three versions and several years. So it was a new level - another turn of the spiral. If the bus is not fast enough for a server or workstation, you modernize the existing one: cost matters in that segment, but it is not the determining factor. If the bus is not fast enough for mainstream computers, you invent an inexpensive dedicated solution to please everyone, and then gradually improve it. When AGP was introduced, there was no performance difference between graphics cards with this interface and their PCI counterparts. Then the difference appeared, and quite a prominent one - not only in percentages: many games simply wouldn't work on PCI versions of cards like the Radeon 9600, even when they carried enough video memory.

Sooner or later, progress reaches a point where nothing more can be added, and something has to be changed radically. Users' wishes are not taken into account here - the spiral must enter another turn. The alternative is stagnation and death of the industry (at least in its existing form).

From PCI and AGP to PCI Express (PCI-E) (Step 3, a new incarnation)

Frankly speaking, I started thinking this article over two and a half years ago, when PCI-E became an objective reality to be reckoned with. It was a routine change for the industry, but I was surprised at how many lances were broken over the issue. I decided there must be a rational explanation for all the disputes, indignation, relatively sane criticism, and plain screaming. Firstly, for many users it was their first such transition - many of them never witnessed the VESA-PCI switchover. The idea of a $1000 computer appeared in the times of PCI and its modifications, after all; before that, the community of PC buyers was much smaller, and it had no mass medium to communicate in - the Internet blossomed after VESA had died, and offline magazines were published to be read, not written to. Actually, people did write letters to magazines - I know it for sure, because I was already working in the field then. They even tried to send in articles about bad manufacturers who made people pay to replace perfectly working equipment. But those old magazines were not aimed at home enthusiasts, so the heartfelt cries went unpublished. Human memory is short, and it's easy to forget things you never even knew.

Users first got a chance to voice their incomprehension of industry processes when ISA was dying out. And they expressed their feelings in the same words they now use about abandoning PCI, with the same arguments, because they had the same problems: lots of ISA devices, too few PCI components on the market, and motherboards with ever more slots of the new type and ever fewer of the old. The same happens now - just replace ISA with PCI, and PCI with PCI-E. Moreover, the current situation is worse than it was back then. The first timid attempts to fight ISA started 3-4 years after the appearance of PCI; now motherboard manufacturers are so optimistic that products with just a couple of PCI slots appeared almost immediately. And you're lucky if both of those are usable: in the happy PCI times, you'd have been sent to a lunatic asylum for suggesting that an expansion card's cooling system could block the neighboring slot. Now we're used to it (and if the power consumption of top GPUs keeps growing at this pace, we'll get used to three-slot cards as well). Meanwhile, manufacturers of expansion cards are in no hurry to enter the bright future and prefer to keep shipping PCI devices. All this hurts the popularity of PCI-E. The PCI transition was less stressful: VESA was also a relatively universal bus, and most users of that time could see the future benefits of the new one. And how does the current transition look to a common gamer? Not like progress in bus interfaces, believe me. It looks like replacing one graphics slot (AGP) with another (PCI-E x16) for no apparent dividends in performance or functionality (we are seeing some only now). What can that be but a conspiracy of manufacturers making common users waste money on pointless upgrades?

So the transition had to be softened. How? With hybrid chipsets and motherboards. Back then, hybrid chipsets supported VESA, PCI, and ISA; now some of us need three slot types: PCI-E x16, AGP, and PCI. Modern bus protocols are very complex, which is why only one chipset with full support for all three appeared on the market, and that solution from ALi/ULi did not make much difference. But when costs must be cut, the importance of full-fledged support fades, and simpler solutions become quite popular. VIA, for instance, designed a new chipset for Intel processors by taking its previous AGP product as a basis and adding four PCI-E lanes. Granted, four lanes are fewer than sixteen, but that's not a problem if there is enough local memory: our tests show that the difference between x16 and x1 for 256 MB or 512 MB graphics cards is noticeable but not fatal. However, this solution is oriented to the past - it's for users who own a very good AGP card and don't want to replace it in the near future. Compromises oriented to the future appeared as well. AGP is essentially a superstructure over PCI, and even with many of the additions removed, many cards still manage to work - not optimally, but they work. Hence motherboards on i915/nForce4 chipsets with a full-fledged PCI-E x16 slot and a cut-down AGP slot in place of a PCI one. Not all cards can work in this mode, and the performance of many is up to 30% lower than in a true AGP slot. But it's a temporary solution.

GPU manufacturers also took care of the transition period, although common users benefit from their efforts only indirectly - the products help the manufacturers of graphics cards. Thanks to translation bridges, AGP cards are still being manufactured: there are no longer any native AGP GPUs, but there are cards, including state-of-the-art products.

So the problem with graphics cards can be solved, and problems with other cards are not as pressing, because regular PCI has not been abolished yet. In other words, the current situation is identical to the VESA-PCI transition - even a tad better. I don't understand all the noise. CPU manufacturers introduce a new socket every year, sometimes without changing names or appearance (which is plain cheating, because the parts work differently and lack full compatibility), and users don't complain much. But when the bus is radically changed once in a decade (because it is really necessary), in the most painless way possible, they start complaining - loudly, wearily, and rarely to the point. I sometimes think it should be done once a year: users would get used to it and forget about partial upgrades (90% of users don't think about them already; the rest will forget in a couple of years).

No single bus for us

As I've mentioned above, the founder of the PC market and the other players went different ways 20 years ago. The computer industry (as a group of companies) wanted the new bus to be compatible with the old one. It achieved much, but it failed its main objective: EISA did not become a universal standard. Neither did MCA, despite all the efforts of IBM and the "bus war" it unleashed. Why?

The first reason that springs to mind is license restrictions. IBM tried to solve two problems at once - fix the technical issues of the bus and squeeze the clone makers. That is why the company patented everything possible and granted only a small number of licenses to manufacture compatible products. It all ended deplorably for IBM; it couldn't have ended differently - no one gets away with such licensing games. Interestingly, manufacturers keep stepping on the same rake: Intel did the same silly thing a decade ago, and it did not go well... but we'll speak about that later. Let's return to MCA.

Was the problem only in the patents, or was the bus doomed from the very beginning? To answer that, we should analyze MCA and its prospects at the time.

The bus was initially up to 32 bits wide and operated at 10 MHz. Many computers of that era used 80286 or i386SX processors (IBM and Cyrix used them as a base for the 386SLC, 486SLC, 486SLC2, and 486SLC3 - the last was the first processor with a tripled clock rate, launched much earlier than the iDX4) with a 16-bit external bus. Even in that case MCA had an advantage over old ISA: a real 20 MB/s versus a theoretical 16 MB/s (much lower in practice). In full 32-bit mode the bus delivered 40 MB/s - more than EISA. And that was not the limit. IBM revealed hidden features of the bus soon after the first Micro Channel computers appeared: MCA allowed two devices to exchange data at high speed, provided they supported it. Its advanced arbitration allowed mastership of the bus to be passed from device to device. The further, the better: while other devices sat idle for a couple of bus cycles, a pair of devices could exchange data synchronized at 20 MHz or even 40 MHz instead of 10 MHz, giving a bandwidth of 80 MB/s or even 160 MB/s - higher than PCI. The latter mode was really hard to implement, but remember that MCA was designed much earlier than PCI. A 64-bit extension of the Micro Channel was also possible, raising the bar to 320 MB/s - faster than AGP 1x.
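All the figures in that paragraph follow from one formula, width times clock. A summary sketch (the streaming modes assume a device pair supporting the higher sync rates, as described above):

```python
def mb_s(width_bits, clock_mhz):
    """Peak bus throughput: bytes per cycle times clock in MHz."""
    return width_bits / 8 * clock_mhz

rates = {
    "ISA 16-bit @ 8 MHz (theoretical)": mb_s(16, 8),    # 16 MB/s
    "MCA 16-bit @ 10 MHz": mb_s(16, 10),                # 20 MB/s
    "MCA 32-bit @ 10 MHz": mb_s(32, 10),                # 40 MB/s
    "MCA streaming @ 20 MHz": mb_s(32, 20),             # 80 MB/s
    "MCA streaming @ 40 MHz": mb_s(32, 40),             # 160 MB/s
    "MCA 64-bit streaming @ 40 MHz": mb_s(64, 40),      # 320 MB/s
}

for name, rate in rates.items():
    print(f"{name}: {rate:.0f} MB/s")
```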

The practice of configuring expansion cards strictly with jumpers was also abandoned: full Plug'n'Play could not be implemented at that time, but cards could be configured in software. Besides, expansion cards, and the slots themselves, were very compact, so motherboards and standard PC cases could be made smaller. In fact, IBM did exactly that, more than 15 years before SFF PCs (barebones)! Had the industry migrated to MCA, those 15 years could have passed without any bus changes: no EISA, no VESA, no PCI... perhaps no PCI-X or AGP either. There would have been only MCA, and then the industry would have migrated to something like PCI Express (under a different name, of course).

That would have happened if IBM had still held full control over the PC segment in 1987. But that control was precisely what the company was trying to win back - and he who chases two hares catches neither. The IBM way required throwing away all old expansion cards, full stop. Besides, the patents promised problems with new cards as well. Hybrid systems with MCA and ISA were technically possible, but IBM did not intend to build them, and other companies could not because of the patents. As a result, a good bus, well ahead of its time, remained a technical curiosity, and IBM went from locomotive to mere wagon: it started to self-clone, and later sold off its personal computer division.

It's hard to say whether this industry has had a locomotive since MCA failed... The industry turned into a self-adjusting system. Some companies still tried to invent new technologies, of course; manufacturers either adopted them or forgot about them. But there was no overmind to look after the entire process. Spiral development is the only possible way under such conditions. We solve small problems in small steps without touching the Holy Cow of compatibility. Then, when we cannot stand the old solution anymore, we come up with a new one, still trying to preserve that compatibility. It looks ugly from the technical point of view, but it's quite usable: it protects investments, and so on. We'll speak about it later.

Looking over the horizon

The main point is crystal clear: there is nothing new under the sun, but new events are not exact copies of old ones. So there is some progress. It's probably not optimal, but the industry still develops. It would be naive to expect it to stop.

It could happen only after three improbable events: engineers can no longer come up with anything new, absolutely everyone is satisfied with old hardware, and marketing specialists invent a new, stable way to make money without reselling the same products in different flavors. In other words, if flavors become unnecessary or impossible. It would be an interesting world.

For now, there is no limit to our progress (it's more likely that our civilization dies of an energy crisis and the remnants of mankind start a new turn of history). PCI Express has just as many reserves as PCI used to have. We've just started using Version 1.0, Version 2.0 is already around the corner, and the Version 3.0 specs will be developed soon. Only then will the concept run dry. In fact, it was heresy to upgrade from a parallel bus to a serial-parallel one; engineers should have thought about a purely serial bus, or something else entirely. They will find something to do anyway.

There is one more scenario in which IT progress may stop: the computer becomes a household appliance. Enthusiasts will gradually die out; the rest will have no idea what's inside. No one will open a computer except specially trained technicians. Computers will be upgraded just like TV sets: you'll simply buy a new one, without even trying to keep your good old remote control. That will change things for sure. I'm not sure, though, that it will be more profitable for manufacturers than the current situation; if it isn't, they will avoid any turn onto that road. Besides, sorting out internal buses would not solve the problem of external ones, whose situation is no better. So there will be reviewing work for the rest of our days.

Andrey Kozhemiako aka Korzh (korzh@ixbt.com)
October 26, 2007



Copyright © Byrds Research & Publishing, Ltd., 1997–2011. All rights reserved.