Group 10: HD video playback
The potential audience for this group of tests is everyone who watches high-definition video on a personal computer. We've developed a test that determines whether a given computer can cope with HD content (a qualitative will cope / won't cope rating) and how demanding the task is for it (a quantitative rating). The test analyzes CPU usage during playback of HD video in one of the most popular formats, H.264. Unfortunately, the player we use for these tests (MPC-HC x64) has problems with DXVA support for the other popular format, VC-1, so we had to abandon the idea of using that format in our tests.
We use the software player Media Player Classic Home Cinema to play a 10-minute HD video clip while RivaTuner monitors CPU usage (all cores) five times a second. The player is configured to use EVR, the best rendering option for it under Microsoft Windows Vista. The player starts maximized, but not in fullscreen mode, and the only on-screen information is the playback statistics. The video is played in two different situations:
The following results are calculated for each playback using Riva Tuner logs:
Note that, like SPECviewperf 10, this test has not been assigned to any group permanently: its results are published in a table, but they do not affect any total group score. Such optional tests are another novelty of our test method and, in our opinion, make it more flexible. Since their results do not affect total scores, they can be added to or removed from the method at any time. At the same time, their results can still be analyzed, discussed, accumulated for statistics, and commented on by readers, which keeps the method dynamic even after it is frozen for the next year and a half.
Software: Media Player Classic Home Cinema x64 1.2.908.0, RivaTuner 2.24. Test conditions: RivaTuner is configured to monitor CPU usage (each core) five times a second and write the results to a log file. We start Media Player Classic Home Cinema and open the H.264 video fragment of Iron Man (30000 bitrate, 10 minutes long)*. Then we analyze the log: first we calculate the instant CPU usage by summing the load of each processor core and dividing the result by the number of cores; then we calculate the minimum, average, and maximum CPU usage. First run (Software Mode): file to be imported into the registry. Second run (DXVA Mode): file to be imported into the registry.
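The per-sample calculation described above can be sketched in a few lines of Python. This is a minimal illustration only: it assumes the log has already been parsed into a list of per-sample core loads, whereas the real RivaTuner log format differs and would need its own parsing step.

```python
def summarize_cpu_log(rows):
    """rows: list of per-sample core loads, e.g. [[80.0, 20.0], ...].

    Returns (min, avg, max) of the instant total CPU usage, where the
    instant usage of one sample is the sum of per-core loads divided
    by the number of cores, as described in the test conditions."""
    samples = []
    for cores in rows:
        samples.append(sum(cores) / len(cores))
    return min(samples), sum(samples) / len(samples), max(samples)

if __name__ == "__main__":
    # Hypothetical samples from a dual-core system, five per second.
    log = [[90.0, 10.0], [100.0, 40.0], [60.0, 20.0]]
    lo, avg, hi = summarize_cpu_log(log)
    print(lo, avg, hi)  # 40.0, ~53.3, 70.0
```

The same three numbers (minimum, average, maximum) would then be computed separately for the Software Mode run and the DXVA Mode run.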
* We cannot upload this fragment, as doing so would infringe copyright, so we give an accurate description instead, which you can use to create the file on your own.
Group 11: 3D games
The gaming part is traditional for our test method. However, significant changes have been made here (probably for the first time): this chapter is now divided into two large parts for different computer systems. The first part is quite traditional. It's designed for high-speed systems equipped with discrete graphics cards. This group uses modern games with high image quality settings and resolution (1280x1024). We have finally abandoned the once-popular idea of benchmarking CPUs with games at low resolution and low graphics quality. From our point of view, such manipulations render the results meaningless: they become fully synthetic and lose their connection with reality, because low-quality options simplify (or even disable) CPU computations. Thus, even within the same game engine, CPU load in LQ and HQ modes may differ both in magnitude and in the actual set of computations performed.
The second part uses old games (three or four years old) with average graphics quality settings and low resolution (800x600). This group gives a full picture of the gaming performance of computers with integrated graphics. Indeed, some integrated solutions still don't support DirectX 10, to say nothing of their speed in the tests from the first part: it does not matter whether such a computer demonstrates 3 or 4 fps in Crysis Warhead. Both 3 and 4 fps are too slow, even though the latter is formally 33% faster than the former. So these tests are designed for weak computers, and they reflect what owners of such computers who like to play games usually do: install old hits and use low graphics options.
Group 1: Game tests for PCs with discrete graphics cards.
Group 2: Game tests for computers with integrated graphics cores.
The total score in this group is calculated as a geomean of results demonstrated by all tests:
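A group score of this form can be sketched as a geometric mean in Python. The individual test names and numbers below are made up for illustration; the point is only the aggregation itself, which rewards balanced results and keeps a single outlier from dominating the score.

```python
import math

def geomean(values):
    """Geometric mean: the nth root of the product of n values.

    Computed via logarithms for numerical stability with many values."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

if __name__ == "__main__":
    # Hypothetical per-game results (e.g. fps or normalized scores).
    scores = [120.0, 95.0, 150.0]
    print(round(geomean(scores), 2))
```

By definition, cubing the result of the three-value example above recovers the product 120 x 95 x 150.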
This method for testing PC performance is evolutionary. It's based on the previous version, which is positioned as a CPU benchmarking method, and the two share similar benchmarks and test groups. However, it's not a transitional product. This version of our test method reflects an already established tendency: modern end users are interested in the speed of the entire computer in everyday applications rather than in the performance of separate components.
From this point of view, this test method is practically a finished version of the old classical approach, having borrowed its best features (scope, traditions). On the other hand, it's an attempt to introduce as many new features into the old approach as possible: a focus on real tasks, a retreat from benchmarking separate subsystems of a modern computer in favor of a complex approach, and, finally, a transition from testing separate components to benchmarking end solutions.
Nevertheless, we also tried to meet the needs of users advocating the classical approach to CPU benchmarking. Hopefully, we've succeeded here, so you can obtain maximum information in the classic format as well.