We continue our series of articles analyzing the performance of modern CPUs in real applications and determining the effect of individual processor features. This part touches upon a previously unexplored topic: the effect of core clock rate on performance. The theory is well understood: in any architecture, performance grows practically linearly with clock rate at first; then, at a certain point, the growth slows down; and finally, beyond a certain frequency, further increases make no sense, as they no longer improve performance. The reason was determined long ago: performance becomes limited by the memory system, which simply fails to deliver data and code at the speed the CPU processes them.
As practical people, we are interested in a simple question: what exactly are these "critical frequencies" in specific CPU architectures? Today we are going to examine this issue using Intel Core i7 as an example.
The testbed remains the same as in the previous two articles devoted to Intel Core i7:
- CPU: Intel Core i7 950
- Cooler: ASUS Triton 81
- Motherboard: ASUS P6T SE (Intel X58)
- Memory: 3 x 2GB Corsair DDR3-1800 in DDR3-1600 mode
- Graphics card: Palit GeForce GTX 275
- PSU: Cooler Master Real Power M1000
We selected four clock rates ranging from 1.86 GHz to 3.06 GHz in 400 MHz steps, which we decided would be enough to detect the main trends. The nominal clock rate of our CPU is 3.06 GHz (core multiplier x23). The lower clock rates were obtained by decreasing the multiplier:
- x20 -- 2.66 GHz
- x17 -- 2.26 GHz
- x14 -- 1.86 GHz
The uncore multiplier* is the same in all Core i7 processors -- x16 (so the uncore frequency is 2.13 GHz). Hyper-Threading was enabled, but we had to disable Turbo Boost, because we needed the processor to operate at fixed frequencies.
* The part of a Core i7 processor outside the cores, operating at its own frequency, different from the core frequency. The two most important parts of the uncore are the memory controller and the QPI controller.
The first diagram shows a performance growth curve based on the performance scores of each configuration, calculated according to our test method (red line). The blue line shows perfectly scalable performance: each point is calculated from the previous result under the assumption that performance grows by the same factor as the clock rate. That is, if the 1.86 GHz CPU demonstrates performance X in some group of tests, the ideal performance of the 2.26 GHz CPU is Y = X * 2.26/1.86, and the ideal performance of the 2.66 GHz CPU is Z = Y * 2.66/2.26. Why did we add this curve to the diagram? In our opinion, it makes the results much more illustrative: you can always take exact numbers from the spreadsheet with detailed results, but the difference between practice and theory is easier to grasp visually.
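The "perfectly scalable" curve can be sketched in a few lines. This is a minimal illustration, not our actual test tooling; the baseline score of 100 points is a placeholder value, not a measured result:

```python
# Building the "perfectly scalable" (blue) curve from a measured baseline:
# the 1.86 GHz score anchors the curve, and each subsequent ideal score
# grows by the same ratio as the core clock rate.
freqs = [1.86, 2.26, 2.66, 3.06]  # GHz
baseline = 100.0  # score of the 1.86 GHz system (hypothetical placeholder)

ideal = [baseline]
for prev, cur in zip(freqs, freqs[1:]):
    ideal.append(ideal[-1] * cur / prev)  # Y = X * f_next / f_prev

print([round(s, 1) for s in ideal])  # [100.0, 121.5, 143.0, 164.5]
```

Note that chaining the ratios is equivalent to scaling the baseline by `f / 1.86` directly, since the intermediate frequencies cancel out.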
The curves on the second diagram (where present) show the performance gains from increasing the clock rate for each application in a given software group. We start with the 1.86 GHz processor, whose performance is taken as 100%, so all curves start from the same point. This graph lets us monitor the behavior of individual programs.
And finally, there is a table with test results for each application separately. Starting from the "2.26 GHz" column, it includes not only absolute results but also percentage values. These reflect the performance gain of a given system relative to the previous one -- not to the original system, which is an important distinction. Thus, 22% in the "2.66 GHz" column means that the result in a given application is 22% better than with the 2.26 GHz processor.
To make the tables easier to read, it won't hurt to mention the "ideal" performance gains. Here they are:
- 1.86 GHz to 2.26 GHz: ~+22%
- 2.26 GHz to 2.66 GHz: ~+18%
- 2.66 GHz to 3.06 GHz: ~+15%
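These "ideal" per-step gains are nothing more than the ratios of adjacent clock rates; a minimal check:

```python
# Each ideal gain is (f_next / f_prev - 1), expressed as a percentage.
freqs = [1.86, 2.26, 2.66, 3.06]  # GHz
gains = [(cur / prev - 1) * 100 for prev, cur in zip(freqs, freqs[1:])]
print([round(g) for g in gains])  # [22, 18, 15]
```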
Considering that a ±2% spread of results is within measurement error, we get three ranges: +20% to +24%, +16% to +20%, and +13% to +17%. However, we are not really interested in the lower boundaries: scalability may be imperfect, and a gain can even be zero (theoretically, it cannot be negative). But superlinear gains are impossible from the ideal point of view, so values above +24%, +20%, and +17% will have to be explained somehow.
As usual, we also publish a link to a Microsoft Excel spreadsheet for curious readers, which contains all test results in the most detailed form. It includes two additional tabs, "Compare #1" and "Compare #2", to facilitate analysis. Just like the tables in the article, they compare the four systems in percentage terms. The difference is simple: "Compare #1" calculates percentages just like the tables in the article -- relative to the previous system; "Compare #2" compares all systems with the reference system (1.86 GHz).
- 3ds max ↑*
- Lightwave ↓
- UGS NX ↑
- Group Score ↑
* The up arrow (↑) marks tests where higher results are better; the down arrow (↓) marks tests where lower results are better.
We shouldn't have expected perfect scalability from the visualization group -- a graphics card plays an important role in this process. However, as it turned out, the interactivity of 3D modeling packages depends heavily on the processor, even though they use different 3D APIs (Lightwave and Maya use OpenGL; 3ds max uses Direct3D). Lightwave is the champion here: its graph is practically a straight line. Engineering packages have much smaller appetites (that is, they make better use of the graphics card). Superlinear performance growth shows up in the 2.26 GHz to 2.66 GHz transition (three times) and in the 2.66 GHz to 3.06 GHz transition (once). Just keep it in mind.
- 3ds max ↑
- Lightwave ↓
- Group Score ↑
Rendering, as expected, scales practically perfectly regardless of the package (and its render engine): the curves of 3ds max, Maya, and Lightwave on the individual graph practically merge into one thick line.