Ignorant and biased tests (comparisons)
Ignorant tests are probably the worst evil of all (by 'evil' we mean cheating users: disorienting them and pushing them toward the wrong choice). However, the authors of these 'tests' usually do no harm deliberately. Some are simply ignorant (that is, they don't understand what they are doing). Others express their love for a certain manufacturer in this original way, sincerely trying to 'help' it; only a few actually lie for money. But whatever the reviewer's motives, all these opuses share several telltale marks that give them away.
- The most noticeable sign is the lack of competition. That's the classic mark of moth-eaten ignorance: a review examines a single device, publishes its test results, and then proclaims its superiority. Excuse me, but what is supposed to lead readers to this rosy conclusion if they haven't even been shown results for similar devices?
- The second sign of an ignorant review: competitors are present, but there is no description of a reasoned selection method -- in other words, competitors were picked at random, or were simply whatever was at hand. Thus, they compare a Mercedes with a Daewoo and conclude that the Mercedes is an excellent car. One cannot disagree with that, but is such a comparison adequate?
- The third sign of an ignorant review: there is no clear explanation for the selection of tests. This often happens because the authors of such reviews barely understand what they test (what kind of load a given benchmark generates) or how to interpret the results -- they simply installed several tests mentioned in other articles and decided that, since they know how to run them, they can write articles of their own. In such a case you may read a comparison of the engine power of a motorbike and a heavy-duty dump truck, and at the end of the article find the profound conclusion that, since the dump truck has the more powerful engine, it will go faster.
- Another alarming sign: most test names tell you nothing, and the author does not describe them either. Such tests are not necessarily ignorant; they may be written for a narrow circle of specialists who need no explanations. However, we can say for sure that such comparisons will be of little use to you.
- As for biased reviews, the primary sign is one product significantly outperforming its competitors in every test without exception. We don't say it's impossible, but we can guarantee that it happens extremely rarely. As a rule, big wins in some tests alternate with small ones, or even with losses. Otherwise, one may suspect that the tests were selected to stage the victory.
- The second sign of a biased review: a noticeable discord between test results and comments. In other words, if one set of test results is praised to the skies while another is described indifferently, depending on which competitor shows them, it indicates possible bias (not always conscious on the author's part).
- The main sign of paid-for tests (extremely rare, but possible) is an abundance of clichés from the 'winner's' advertising brochure. The fact is, such articles are usually written not by the journalists who took the money and put their names under them, but by marketing specialists from the client company -- mid-level specialists at best (top managers won't waste their time on such scribblings). Fortunately, the typical copywriter is zealous, enthusiastic, and short on common sense, so the style is easy to recognize.
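As a toy illustration (not part of the original article), the 'clean sweep' red flag from the list above can be expressed as a simple check over benchmark results. All product names and scores below are hypothetical:

```python
# Hypothetical benchmark scores, higher is better.
# A review where one product wins every single test is a red flag.
def clean_sweep_winner(results):
    """Return the product that wins all tests, or None if wins alternate.

    results: dict mapping test name -> dict of product -> score.
    """
    winners = {max(scores, key=scores.get) for scores in results.values()}
    return winners.pop() if len(winners) == 1 else None

results = {
    "render":  {"CPU A": 120, "CPU B": 95},
    "encode":  {"CPU A": 88,  "CPU B": 80},
    "archive": {"CPU A": 64,  "CPU B": 60},
}
print(clean_sweep_winner(results))  # prints "CPU A": a sweep, so be suspicious
```

Of course, a clean sweep alone proves nothing; it is merely a cue to look closer at how the tests were chosen.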
Another favorite hook of marketing specialists, one they catch even experienced users with, is a product's 'good prospects'. Here is how this bait works: a practically useless property of a product is announced as its main advantage because it's 'very promising'. It can be a new instruction set, 64-bit support, an integrated memory controller, SMT, more cores, and whatnot. Ads and the media wage a massive propaganda campaign: "You'll see, the new technology will prove its worth within a year!" From experience we know that it won't. And if it does, it will be in a new product.
First, devices that pioneer a new technology, especially one not yet widely used, often act as engineering prototypes launched into production: they suffer from glitches, specification changes, and so on. Second, the first devices supporting a new technology usually carry price tags inadequate to their performance or functionality. That we can understand: enthusiasts are natural targets for paying through the nose; only later are such products adapted for common users, who don't like expensive things. Third, by the time the new technology really spreads (and becomes useful in real applications), faster, more functional, and cheaper products will have been launched, and your first product to support this technology will look like a dinosaur. By the way, there is also a very sad fourth point: the technology may simply be rejected by the market, and everybody will forget about it within those 6-12 months (including the marketing specialist behind it).
So, if the promotional campaign of a processor is based mostly on its future prospects, know this: the marketing phrase "buy our incredibly promising product" translates as "We've designed something new, and in our opinion it's great! But it's still rather buggy, and we are not sure anybody needs it right now. But if a lot of people buy it and we experiment with it a lot, we'll polish it into the mainstream." We exaggerate, of course: innovations do appear that bring immediate benefits. But they are promoted differently. If a manufacturer has something to be really proud of in its product (besides 'good prospects' and 'innovations'), trust us, it won't forget to mention it.
Are you surprised to see price in a chapter devoted to false criteria? Do you resent the author's woeful ignorance? Well, all the more reason for you to read this chapter. The price of a processor by itself is indeed a false criterion for comparing it with other processors. Why? Because it's the same 'magic of numbers' (of 'wrong' numbers) as in the case of power consumption. The wrongness is of the same type: one numerical value is considered without regard to the others, even though they are interrelated. When you choose a processor, you automatically choose at least two other components: motherboard and memory. It's impossible to install an AMD Athlon 64 into a motherboard for an Intel Core 2 Duo; an Intel Core i7 requires DDR3 memory, because it cannot work with any other memory type. These are only two simple examples, but they illustrate the main idea: when you choose a processor, you constrain your choice of some other components.
Is it rational to consider CPU price in isolation from them? We don't think so. It's just as unreasonable to conclude that a 200-dollar processor is better than a 250-dollar one if you don't analyze the prices of the components this cheaper processor will force you to buy. The cost of a motherboard and/or memory for the 'inexpensive' CPU may be so high that the 50-dollar saving on its price makes you pay much more for the rest. These are not just words: there are plenty of real examples, and we don't expect the situation to change radically in the future. Here we involuntarily return to the previous chapter, devoted to basic criteria for choosing a CPU: the lack of an integrated approach at any stage of choosing PC components (not only a CPU, but many others as well) inevitably leads to a choice that is neither the best nor the cheapest. So if you want to compare prices, compare system units; ideally, analyze prices for entire computers. By the way, there is another reason to do so: looking at several price variants, you may decide that the N dollars saved by choosing a given device make no real difference.
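The platform-cost argument above is easy to sketch in code. Here is a minimal illustration with hypothetical prices (none of these figures come from the article; the point is the comparison method, not the numbers):

```python
# Hypothetical component prices in USD. The 'cheap' CPU forces
# pricier motherboard and memory choices in this made-up scenario.
platforms = {
    "cheap CPU":     {"cpu": 200, "motherboard": 180, "memory": 120},
    "expensive CPU": {"cpu": 250, "motherboard": 110, "memory": 80},
}

for name, parts in platforms.items():
    total = sum(parts.values())
    print(f"{name}: platform total ${total}")
# The $50 'saving' on the CPU evaporates once the required
# motherboard and memory are priced in: $500 vs $440 total.
```

Comparing the `total` figures rather than the `cpu` line alone is exactly the 'system unit' comparison the article recommends.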
This concludes the first, theoretical (even educational) part of the article, devoted to the problems of choosing a CPU. Some of you might have expected multi-page tables of technical characteristics, a built-in system configurator, or a step-by-step guide to choosing the optimal processor 'in just five mouse clicks', and may be indignant not to find the exact model to buy. Others may be displeased with overly detailed explanations of evident issues. It's easier to calm the latter: these issues are not that evident to everyone. Sad but true. The former are harder to console, but we'll try.
You see, if such a system configurator were possible, it would have been created long before us. And, by the way, it would have left us all without a job: why would anyone need independent test labs if they could use this magic configurator? So its impossibility is empirically proven, for now at least. What's left for you to do? Use your head. Compare, analyze, question any information. Having collected enough data, you can finally make a deliberate, logical choice. Tables of technical characteristics can help here, of course. But they are updated almost weekly as new products appear, and they contain too many models (and you should understand every characteristic: its meaning, its effect, and so on). Not everyone can afford such an analysis; it takes too much time. And those who can do not need us: technical characteristics can be obtained from lots of sources. We chose another way: we defined the main directions for analysis and tried to comment impartially on each one -- which issues to pay attention to, which to ignore, what is important right now, and what may become relevant in the near or distant future. We hope this will help you choose the right processor for your needs. To choose it on your own, without our recommendations.
And if the issue is still a little foggy, we'll try to clear it up in the second part of this article, "Typical Case Studies". A word of warning: we tried to make the first part timeless (so that it would stay relevant long after it was written), but the second part is tied to the current reality, so its recommendations will have a much shorter life.