iXBT Labs - Computer Hardware in Detail






Choosing a Processor


In this article we are going to answer one simple and at the same time very difficult question: how to choose a CPU? We shall not even try to frame our advice as a step-by-step guide that guarantees good results: such a guide is probably impossible to compile. We gave up that idea and decided to follow a different path: collect the essential basic information you must know to make an educated choice. The idea is not to choose for you, but to give you the choice and teach you how to make it. We hope that our article will come in handy for inexperienced users facing the problem of choosing a CPU. By the way, we don't promise that it will be easy!

The basic criteria of choosing a CPU

In this chapter we'll consider three basic criteria that should be analyzed before you make your choice. Why 'basic'? Because there are lots of other criteria. You could, for example, compare processors by how much precious metal they contain and how many pins they have, and choose the one with more gold and pins. But you won't like the results of such a choice. We shall describe only those criteria that should actually be used to evaluate processors.

'Theoretical' performance

Theoretically, the performance (or speed; here we treat the two terms as interchangeable) of a processor is calculated like this: a CPU executes commands, each of which has its own execution time. You don't even need to have the processor on hand: you can find these values (command execution times) in the documentation, or someone will deduce them empirically and publish the results on the web. In practice, everything is much more complex: some commands take different amounts of time to execute depending on various conditions, and data can arrive from memory too late, so the processor has to wait for it. Besides, a program does not run in a vacuum: it runs alongside other programs, so its execution can be interrupted to let the processor deal with them. Memory can be fragmented. An operating system runs its own background processes. So, to finish the theoretical part, here is an example: you need to go from Location One to Location Two in a given city. We take two 'processors': a compact and a sports car. Which will reach the destination faster? Any experienced driver will tell you that it depends on many factors, and the model of the car can be the least important one. That's right: any technical characteristic (engine horsepower, road width, vehicle size, acceleration time, suspension ruggedness, etc.) may affect the result, or fail to. There are lots of situations (just like cities); it's impossible to take everything into account.
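The naive 'theoretical' approach can be sketched in a few lines: take a table of per-command execution times and multiply by instruction counts. All numbers below are hypothetical, invented purely for illustration, and the model deliberately ignores everything the paragraph above warns about (memory waits, interruptions, background processes):

```python
# Hypothetical per-instruction latencies in nanoseconds -- illustration only,
# not real figures for any actual CPU.
hypothetical_latencies_ns = {"add": 0.3, "mul": 1.0, "div": 10.0, "load": 4.0}

def estimate_runtime_ns(instruction_counts):
    """Naive estimate: sum count * latency for every instruction type."""
    return sum(count * hypothetical_latencies_ns[op]
               for op, count in instruction_counts.items())

# A made-up instruction mix for some program:
program = {"add": 1_000_000, "mul": 200_000, "div": 10_000, "load": 500_000}
print(estimate_runtime_ns(program))  # ignores caches, pipelining, the OS...
```

As the car analogy suggests, this number tells you little about real-world speed; it is exactly the kind of 'theoretical' figure the rest of the article argues against relying on.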

All this information will most likely tell you nothing about the real CPU performance.

This simple example puts an end to our attempts to analyze the properties of a complex device in a constantly changing environment: there are no methods for such analysis, and they are unlikely to appear in the near future. Like a car, a processor has lots of technical characteristics: core frequency, the frequencies and sizes of the various caches, bus frequency, memory controller frequency and number of channels (not in all processors), number of cores, etc. Can these characteristics tell us which processor will be faster in a given program? No, they cannot. Not any of them individually, nor all of them collectively. Just as there is no universally correct answer as to which car will cross a given city faster.

'Practical' performance

That's why it's now common practice to compare the speed of processors with the help of test results. What do we mean by a 'test' here? It's actually a race, but with one reservation: all the 'cars' have exactly the same 'driver' (aka the operating system; you must know at least one of them, Microsoft Windows). An operating system controls how programs are executed, and in this sense it's even more than a simple driver: it's a driver, traffic lights, road signs, and even a traffic artery combined. This 'race' consists of running the same programs under the same operating system with different processors. Each processor then gets a score depending on how well it copes with a given program, faster or slower. In this case we can compare a program to a 'city' that has to be driven through: each program is special in its own way. So a processor that demonstrates good results in one program will not necessarily excel in another as well.
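A minimal sketch of such a 'race': time the same workload several times and keep the best result, to dampen interference from other programs running alongside. The workload here is a stand-in loop; in a real test it would be LAME encoding, archiving, and so on:

```python
import time

def benchmark(workload, repeats=3):
    """Run the same 'race' several times and keep the best (lowest) time,
    reducing the noise from background processes the OS runs."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        workload()
        best = min(best, time.perf_counter() - start)
    return best

def workload():
    # Stand-in for a real application's work.
    sum(i * i for i in range(100_000))

print(f"{benchmark(workload):.4f} s")
```

Run on two different machines, the same script gives directly comparable times, which is precisely what distinguishes this 'practical' approach from the 'theoretical' one.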

[Charts: audio compression with LAME 3.96.1, benchmark runtime in min:sec, with the "-m j -q 0 -V 4" and "--alt-preset standard" option sets.]

CPU performance ratios may differ even in the same application,
depending on various options.

This method of evaluating processors has its shortcomings, of course. One of them lies on the surface: we cannot predict how a given processor will perform in a new program in which it hasn't been tested yet. The justification for the method is just as simple: true, we really cannot predict it. Nobody can. So it only remains to use the data we can analyze. The good news is that as the number of programs in which processors are benchmarked grows, a certain regularity emerges: some processors win more often than others in the mean score. The more programs used in the tests, the more reliable the mean score is. We still cannot predict how a given processor will perform in a new application, of course. But we can say with reasonably high probability (relying on the mean score) whether it will be faster or slower than another given processor. This method cannot guarantee that you won't make the wrong choice, but it makes mistakes less probable.
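One common way to compute such a mean score (an assumption on our part; the article doesn't specify a formula) is to normalize each runtime against a reference processor and take the geometric mean, so that no single program dominates the average. The runtimes below are invented for illustration:

```python
from math import prod

# Hypothetical benchmark runtimes in seconds (lower is better) for two CPUs
# across several programs. All numbers are made up for illustration.
results = {
    "cpu_a": {"lame": 120, "7zip": 300, "render": 600},
    "cpu_b": {"lame": 100, "7zip": 330, "render": 540},
}

def mean_score(cpu, reference="cpu_a"):
    """Normalize each runtime against a reference CPU and take the
    geometric mean of the speed ratios."""
    ratios = [results[reference][p] / results[cpu][p] for p in results[cpu]]
    return prod(ratios) ** (1 / len(ratios))

print(mean_score("cpu_b"))  # > 1.0 means faster than the reference on average
```

Note how cpu_b loses in 7-Zip yet still comes out ahead in the mean score; this is exactly the "wins more often than others" regularity the paragraph describes.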

Besides, don't forget about local tendencies that apply to certain classes of software. For example, if processors with large caches and systems with low-latency memory cope well with 7-Zip archiving with a big dictionary, and the same tendency shows up in WinRAR, we can assume that other archivers with large dictionaries will also prefer processors with large caches and low-latency memory. It's instructive to recall the previous section here, the one about 'theoretical' performance: indeed, any single characteristic of a processor is only an indirect indicator of execution speed in real applications. However, tests can help us reveal regularities in the effect of certain characteristics on the execution speed of whole groups of applications. And that makes our predictions even more accurate, even for untested software.

Bottom line: the dependence of CPU speed in a given task (program) on technical characteristics (be it frequency, cache size, or something else) is rather ambiguous. Ambiguous enough, at least, to give up the idea of evaluating CPU speed by its technical characteristics alone. These days the main objective criterion for evaluating performance is testing. The more tests you can analyze, the better. The more of those tests include programs that you actually use, the better. And if the list of programs used in the tests is not selected at random but neatly structured into groups, it sometimes helps us draw even more interesting conclusions. Unfortunately, there is no other way. At least not now.

That seems to be all. But you still need an afterword. It will be simple: CPU performance is indeed one of the objective criteria for selecting one. One of. Don't forget that the relevance of a given criterion depends on you. Perhaps you use your computer only to browse the Web, read email, and chat via ICQ. You will hardly manage to find tests with these applications for one simple reason: the speed of any modern processor is not just sufficient but excessive for them. So at the end of this chapter we ask you a question: is CPU speed really that important to you?



Copyright © Byrds Research & Publishing, Ltd., 1997–2011. All rights reserved.