SPEC CPU2006. Introduction




The Standard Performance Evaluation Corporation (SPEC) released the long-awaited SPEC CPU2006 on August 24, 2006, replacing the six-year-old SPEC CPU2000. SPEC is a non-profit group whose membership is drawn from hardware and software manufacturers as well as academic and research organizations. SPEC's CPU benchmarks have been the worldwide standard for measuring compute-intensive performance since their introduction in 1989. In this article we analyze the contents of the new version of the test (SPEC CPU2006), its main differences from the previous version (SPEC CPU2000), and our first experience installing and using it.

Objectives

SPEC CPU2006 is a useful tool for anyone interested in how hardware systems will perform under compute-intensive workloads based on real applications. This includes computer users, buyers evaluating system options, hardware system vendors, researchers, and application developers. Those who do not own a SPEC CPU2006 license can track performance results on SPEC's web site.

The SPEC CPU2006 benchmarks are based on actual applications provided as source code. They are intended to evaluate the compute-intensive performance of a given system, which is determined mostly by the following components:

  • CPU
  • Memory
  • Compilers

We should highlight the last two components in this list: SPEC CPU performance deliberately depends on more than the CPU alone. This especially concerns compilers, since the applications are provided as source code and their performance depends on the optimizations a given compiler applies to the binary code it generates. At the same time, other system components (such as I/O, graphics, network, and the operating system) have a negligible effect on test results, especially when the tests are run on a single processor.

SPEC CPU2006 includes two benchmark suites:

  • CINT2006 for measuring compute-intensive integer performance,
  • and CFP2006 for compute-intensive floating point performance.

So, SPEC CPU2006 closely resembles the previous version of the product (SPEC CPU2000) in its purpose, structure, and load on system components. What, then, are the reasons for releasing a new version of the benchmark? The main reason is the constant development of technologies: benchmarks should keep up with it. SPEC kept the following key issues in mind when developing SPEC CPU2006:

1. Run time. As of summer 2006, many of the CPU2000 benchmarks finish in less than a minute on leading-edge processors and systems, so small changes or fluctuations in system state or measurement conditions can have a significant impact on the observed run time. SPEC chose to make run times for CPU2006 benchmarks longer, to take future performance growth into account and prevent this from becoming an issue over the lifetime of the suites (judging by the successful lifetime of SPEC CPU2000, that should be no less than five years).

2. Application size. As applications grow in complexity and size, CPU2000 becomes less representative of what runs on current systems. For CPU2006, SPEC included some programs with both larger resource requirements and more complex source code.

3. Application type. SPEC felt that there were additional application areas that should be included in CPU2006 to increase variety and representation within the suites. For example, video compression and speech recognition have been added, and molecular biology has been significantly expanded.

SPEC CPU2006 package

SPEC provides the following on the SPEC CPU2006 media (a single DVD):

  • Source code for the CINT2006 benchmarks
  • Source code for the CFP2006 benchmarks
  • A tool set for compiling, running, validating and reporting on the benchmarks (source code and pre-compiled for various operating systems)
  • Run and reporting rules
  • Documentation

System requirements

To run and install SPEC CPU2006, you will need:

1. A computer system running UNIX, Microsoft Windows, or Mac OS X. The benchmark suite includes a toolset. Pre-compiled versions of the toolset are provided that are expected to work with:

  • AIX: PowerPC systems running AIX 5L V5.1 or later
  • HP-UX: IPF running HP-UX 11iv2
  • HP-UX: HP-UX 11iv2 on PA-RISC
  • Irix: MIPS running IRIX
  • Linux/PPC: PowerPC-based Linux systems with GLIBC 2.2.1+
  • MacOS X: MacOS X 10.2.8+ on PowerPC G3+
  • MacOS X: MacOS X 10.4.3+ on x86
  • Redhat 6.2: x86- or x64-based Linux systems with GLIBC 2.1.3+
  • RHAS: Red Hat Enterprise Linux AS 3, and/or SGI ProPack 3
  • SLES: SuSE Enterprise Linux 9 on IA64
  • Solaris: SPARC with Solaris 8 and later
  • Solaris: x86 or x64 with Solaris 10 and later
  • SuSE: 64-bit AMD64 Linux systems
  • Windows: Microsoft Windows XP and Windows Server.

For systems not listed above, such as earlier or later versions of the listed operating systems, you may find that the tools also work, but SPEC has not tested them.

2. A DVD drive (the package is shipped on a DVD).

3. Memory. SPEC CPU2006's memory requirements have grown significantly: the typical requirement is 1 GB for 32-bit systems, exclusive of OS overhead, but more may be needed. 64-bit environments will typically require 2 GB for some of the benchmarks in the suite. More memory will be needed if you run multi-copy SPECrate tests: generally 1 GB for 32-bit, or 2 GB for 64-bit, for each copy you plan to run.

4. Disk space. Typically you will need at least 8 GB of disk space to install and run the suite. However, space needs can vary greatly depending upon your usage and system. The 8 GB estimate is based on the following:

  • Unpacked and installed, the suite takes approximately 1.5 to 2GB of disk space.
  • When compiling your own binaries, the size of the build directories and resulting objects will vary depending upon your system, compiler, and compiler options. Estimate at least 2 to 3GB of disk space per build. If you plan to maintain multiple sets of binaries, each set will need space (2-3GB).
  • A single run takes an additional 2 to 3GB of disk space. In addition, if you plan to run SPECrate with multiple copies, estimate an additional 2 to 3GB of disk space per copy.

The minimum disk space requirement is 5 GB, provided that: you run only single-CPU metrics; you delete the build directories after the build is done; and you clean run directories between tests.

5. Since SPEC supplies only source code for the benchmarks, you will need C99 and C++98 compilers for CINT2006, plus a Fortran-95 compiler for CFP2006. If you have no compilers, you may use a pre-compiled set of benchmark executables given to you by another user of the same revision of SPEC CPU2006, along with any run-time libraries those executables may require.

Benchmarks

As we have already mentioned, SPEC CPU2006 contains two components that focus on two different types of compute-intensive performance. The first suite (CINT2006) measures compute-intensive integer performance, and the second suite (CFP2006) measures compute-intensive floating point performance. CINT2006 contains 12 benchmarks based on real applications written in C and C++, while CFP2006 contains 17 benchmarks written in C, C++, various Fortran versions, and mixed C/Fortran.

Here is the list of SPEC CPU2006 benchmarks, their programming languages, and brief descriptions.

Table 1. CINT2006 benchmarks

Benchmark Language Description
400.perlbench C PERL Programming Language. It's a cut-down version of Perl v5.8.7, the popular scripting language, with most OS-specific features removed.
401.bzip2 C Data compression. It's based on bzip2 version 1.0.3. No file I/O other than reading the input. All compression and decompression happens entirely in memory. This is to help isolate the work done to only the CPU and memory subsystem.
403.gcc C C Language optimizing compiler. It is based on gcc Version 3.2. It generates optimized code for an AMD Opteron processor. It has had its inlining heuristics altered slightly, so as to inline more code than would be typical on a Unix system in 2002. It is expected that this effect will be more typical of compiler usage in 2006. This was done so that 403.gcc would spend more time analyzing its source code inputs, and use more memory.
429.mcf C Combinatorial optimization / Single-depot vehicle scheduling. The task is derived from MCF, a program used for single-depot vehicle scheduling in public mass transportation.
445.gobmk C Artificial intelligence - game playing. The program plays Go and executes a set of commands to analyze Go positions.
456.hmmer C Search a gene sequence database. Profile Hidden Markov Models (profile HMMs) are statistical models of multiple sequence alignments, which are used in computational biology to search for patterns in DNA sequences.
458.sjeng C Artificial Intelligence, chess. It is based on Sjeng 11.2 (freeware), which is a program that plays chess and several chess variants.
462.libquantum C99 Physics / Quantum Computing. It uses the libquantum library for the simulation of a quantum computer.
464.h264ref C Video compression. It is a reference implementation of H.264/AVC (Advanced Video Coding), the latest video compression standard, developed by the VCEG and the MPEG group.
471.omnetpp C++ Discrete Event Simulation. Simulation of a large Ethernet network, based on the OMNeT++ discrete event simulation system, using a publicly available Ethernet model.
473.astar C++ Path finding. It's derived from a portable 2D path-finding library that is used in game AI.
483.xalancbmk C++ XSLT processor. It's a modified version of Xalan-C++, an XSLT processor for transforming XML documents into HTML, text, or other XML document types.

Table 2. CFP2006 benchmarks

Benchmark Language Description
410.bwaves Fortran-77 Computational Fluid Dynamics. It numerically simulates blast waves in three-dimensional transonic transient laminar viscous flow.
416.gamess Fortran Quantum chemical computations. A wide range of quantum chemical computations are possible using GAMESS.
433.milc C Physics / Quantum Chromodynamics. It uses the serial (single-CPU) version of the su3imp program, which simulates the behavior of the fundamental constituents of matter, namely quarks and gluons, according to lattice gauge theory.
434.zeusmp Fortran-77 Physics / Magnetohydrodynamics. It's based on ZEUS-MP, a computational fluid dynamics code for the simulation of astrophysical phenomena.
435.gromacs C/Fortran Chemistry / Molecular Dynamics. It is derived from GROMACS, a versatile package that performs molecular dynamics, i.e. simulation of the Newtonian equations of motion for systems with hundreds to millions of particles.
436.cactusADM C/Fortran-90 Physics / General Relativity. CactusADM is a combination of Cactus, an open source problem solving environment, and BenchADM, a computational kernel representative of many applications in numerical relativity.
437.leslie3d Fortran-90 Computational Fluid Dynamics. It is derived from LESlie3d, a research-level Computational Fluid Dynamics (CFD) code used to investigate a wide array of turbulence phenomena.
444.namd C++ Scientific, Structural Biology, Classical Molecular Dynamics Simulation. The 444.namd benchmark is derived from the data layout and inner loop of NAMD, a parallel program for the simulation of large biomolecular systems.
447.dealII C++ Solution of Partial Differential Equations using the Adaptive Finite Element Method. The benchmark uses deal.II, a C++ program library targeted at adaptive finite elements and error estimation.
450.soplex C++ Simplex Linear Program Solver. It's based on SoPlex Version 1.2.1. SoPlex solves a linear program using the Simplex algorithm.
453.povray C++ Computer Visualization / Ray Tracing. It's based on POV-Ray, a popular ray-tracer (renderer).
454.calculix C/Fortran-90 Structural Mechanics. It is based on CalculiX, a free software finite element code for linear and nonlinear three-dimensional structural applications. It uses the classical theory of finite elements.
459.GemsFDTD Fortran-90 Computational Electromagnetics. It solves the Maxwell equations in 3D in the time domain using the finite-difference time-domain (FDTD) method.
465.tonto Fortran-95 Quantum Crystallography. Tonto is an open source quantum chemistry package, adapted for crystallographic tasks.
470.lbm C Computational Fluid Dynamics. This program implements the so-called "Lattice Boltzmann Method" (LBM) to simulate incompressible fluids.
481.wrf C/Fortran-90 Weather Forecasting. It is based on the Weather Research and Forecasting (WRF) Model, a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs.
482.sphinx3 C Speech Recognition. It's based on Sphinx-3, a widely known speech recognition system.

Some of the SPEC CPU2006 benchmark names sound familiar. Indeed, many of the SPEC benchmarks have been derived from publicly available programs (or cut-down versions of commercial applications). But SPEC benchmarks are not identical to those applications, so a direct comparison of their performance (for example, the 403.gcc benchmark versus the gcc 3.2 compiler) is not valid. Some SPEC CPU2006 benchmarks may also seem familiar to users of the old SPEC CPU2000 (for example, 403.gcc "resembles" 176.gcc, and 429.mcf resembles 181.mcf). Nevertheless, these benchmarks are not identical either: as a rule, the new version (SPEC CPU2006) uses the latest program code and, in all cases, different input data reflecting today's typical computational loads. Therefore, it is also not valid to compare the results of "similar" benchmarks from SPEC CPU2006 and SPEC CPU2000 (for example, 403.gcc and 176.gcc). That, by the way, is why the updated SPEC CPU2006 benchmarks derived from SPEC CPU2000 bear different numeric indices.

Performance metrics

As we have already mentioned, SPEC CPU2006 benchmarks are divided into two large groups — CINT2006 (for integer compute intensive performance comparisons) and CFP2006 (for floating point compute intensive performance comparisons.) Thus, SPEC CPU2006 provides two fundamental performance ratings, generally called SPECint2006 and SPECfp2006.

Each of these ratings provides two important metrics of system performance. The first measures how fast a system can solve a single task (that is, how much time it takes); this metric is called speed. The second reflects how many tasks a system can solve in a given period of time; this metric is called throughput, or rate. For the speed metrics, a single copy of a task is run (the task may be automatically parallelized by an optimizing compiler to use all available system processors; it is still the speed metric). For the rate metrics, multiple copies of the benchmarks are run simultaneously. Typically, the number of copies equals the number of CPUs in the machine, but this is not a requirement.
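For illustration, a rate run is requested via runspec options along these lines (the config file name is just a placeholder; check the exact option spelling against the runspec documentation):

runspec --config=myconfig.cfg --rate --copies=2 int

This would run two simultaneous copies of the integer suite and report the rate metrics.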

And finally, each of these metrics (speed and rate) can be measured with two kinds of builds: base and peak. This division of test results is based on compiler usage scenarios. For base, simplicity is preferred: in particular, you must use a single set of switches and a single-pass build process for all benchmarks. Base metrics are mandatory when you submit performance results to the SPEC web site, and their compilation rules are stricter than those for the optional peak metrics. Peak metrics reflect an "experimental" approach to compiling the benchmark code in pursuit of maximum performance: you may use more than one compiler and various optimization flags, as well as a multi-pass build process with the training workloads stipulated by SPEC CPU2006. Note that a multi-pass build now belongs exclusively to peak metrics; it is no longer admissible for base results (it was allowed in SPEC CPU2000).
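In a configuration file, the difference between the two builds typically boils down to fragments like the following sketch for the Intel compilers (the specific flags here are our illustrative assumption, not settings prescribed by SPEC):

default=base=default=default:
OPTIMIZE = -O3 -QxP

default=peak=default=default:
PASS1_CFLAGS = -Qprof_gen
PASS2_CFLAGS = -Qprof_use

The base section applies one set of switches in a single pass to all benchmarks, while the peak section enables a two-pass build with a training run between the passes.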

So, considering the above, SPEC CPU2006 allows measuring up to 8 performance metrics, listed in Table 3.

Table 3. SPEC CPU2006 Metrics

Metric name            Component  Type   Optimization
SPECint2006            CINT2006   speed  peak
SPECint_base2006       CINT2006   speed  base
SPECint_rate2006       CINT2006   rate   peak
SPECint_rate_base2006  CINT2006   rate   base
SPECfp2006             CFP2006    speed  peak
SPECfp_base2006        CFP2006    speed  base
SPECfp_rate2006        CFP2006    rate   peak
SPECfp_rate_base2006   CFP2006    rate   base

Performance results are always reported in a normalized form, that is, as a ratio between the performance of the tested computer and a reference performance. SPEC uses a historical Sun system, the "Ultra Enterprise 2" introduced in 1997, as the reference machine. It is built around a 296 MHz UltraSPARC II processor, as was the reference machine for CPU2000. But the reference machines for the two suites are not identical: the CPU2006 reference machine has substantially better caches, and the CPU2000 reference machine could not have held enough memory to run CPU2006 (recall the significantly higher memory requirements of this version). It takes about 12 days to do a rule-conforming run of the base metrics for CINT2006 and CFP2006 on the CPU2006 reference machine.
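To illustrate the arithmetic with made-up numbers: if a benchmark's reference time is 10,000 seconds and the tested system completes it in 400 seconds, its ratio is 10,000 / 400 = 25. The overall suite result, such as SPECint2006, is then the geometric mean of the individual benchmark ratios.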

First Tests. Installation, compilation, and experimental run

Let's proceed to the description of our first experience with SPEC CPU2006. Here are the general instructions from installation to getting valid results:

  • Make sure that your system meets the above system requirements.
  • Install SPEC CPU2006 from the DVD on UNIX, Linux, Mac OS X, or Microsoft Windows.
  • Determine which metric you wish to run.
  • Learn about runspec, which is the primary SPEC-provided tool.
  • Locate a configuration file as a starting point. Hints about where to find one are in the runspec documentation.
  • Use runspec to build (compile) the benchmarks.
  • If the above steps are successful, use runspec to run, validate, and create a report on the performance of the benchmarks (sample command lines follow below).
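For reference, these two steps might look as follows on the command line; the config file name is illustrative, and the option list should be verified against the runspec documentation:

runspec --config=windows-ia32-icl.cfg --action=build --tune=base int
runspec --config=windows-ia32-icl.cfg --size=ref --tune=base int

The first command only compiles the CINT2006 benchmarks; the second runs them with the reference workload, validates the output, and produces a report.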

Let's analyze installation, compilation, and test run on our testbed under Microsoft Windows XP (SP2) with the following configuration:

  • CPU: AMD Athlon 64 X2 3800+ (2.0 GHz, Manchester rev. JH-E6, 1 MB L2)
  • Motherboard: ASUS A8N-E (nForce 4 Ultra), BIOS 1013 dated 04/07/2006
  • Memory: 2 x 1 GB Corsair DDR-400, 2.5-3-3-5-1T

1. Installation

It's very easy to install the benchmark: just insert the SPEC CPU2006 DVD and run install.bat from the command prompt:

install.bat destination_drive destination_directory

for example:

F:\>install.bat E: \CPU2006

The next installation step (more precisely, preliminary configuration) is to edit shrc.bat in the root folder of the benchmark. You should either specify paths to the installed compilers (set SHRC_COMPILER_PATH_SET=yes) or state that you are going to use precompiled benchmarks (set SHRC_PRECOMPILED=yes). Otherwise, you won't be able to run shrc.bat (which is the first thing you should do every time you work with the benchmark).
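For example, the relevant fragment of shrc.bat may end up looking roughly like this (a sketch; the commented-out alternative and any compiler environment setup depend on your installation):

rem Compiler paths are already set in our environment:
set SHRC_COMPILER_PATH_SET=yes
rem ...or, alternatively, when using precompiled benchmark binaries:
rem set SHRC_PRECOMPILED=yes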

2. Compiling

Since our first SPEC CPU2000 tests, we have been using the latest compilers available. In this case we used Intel C++ Compiler and Intel Fortran Compiler 9.1 (9.1.034) with various code optimization parameters to achieve maximum performance on both Intel and other processors. We used the Microsoft Platform SDK for Windows 2003 Server SP1 (build 3790) for headers and Windows API libraries. Standard tools from Microsoft Visual Studio 2005 Professional Edition were used to link the object code.

It was easy to choose a proper config file for Windows (x86) and the Intel compilers: the config folder contains the self-explanatory windows-ia32-icl.cfg. The comment inside it (that it was tested with Intel Compiler 9.1 and MS Visual Studio .NET 2003) was almost a match for our situation. Nevertheless, our attempt to compile the benchmarks with this config file (after proper modifications to meet our test requirements) was not fully successful.

First of all, there were compilation errors in 483.xalancbmk. We decided to add a compatibility option to this task, borrowed from a similar config file for x64 platforms (windows-em64t-icl.cfg): "CXXPORTABILITY = -Qoption,cpp,--no_wchar_t_keyword". After that step, the benchmark compiled successfully on our x86 platform.

The errors were caused by the built-in wchar_t data type, which is the default in Microsoft Visual Studio 2005 (as well as in Intel C++ Compiler 9.1 with the Visual Studio 2005 compatibility key /Qvc8), whereas this task expects the type to resolve to a plain short int. The compatibility problem can likewise be solved by adding the "CXXPORTABILITY = -Zc:wchar_t-" option, which also allowed the task to compile successfully on our testbed.
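In the config file, such a per-benchmark portability setting occupies a section of its own; ours looked approximately like this (the section header follows the usual benchmark=tuning=extension=machine scheme of SPEC config files):

483.xalancbmk=default=default=default:
CXXPORTABILITY = -Zc:wchar_t-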

The second problem occurred in 454.calculix, with peak code compilation (tune=peak). In our case it differed from the base modification (tune=base) by a two-pass compilation with Profile-Guided Optimization (PGO). In fact, the error appeared not during compilation but when we tried to run the compiled binary with any input data: the benchmark produced a strange error stating that it couldn't allocate memory for its data (about 20 million 4-byte elements, i.e. about 80 MB in total). To solve this problem we had to examine the configuration file thoroughly. There we found the following option, common to all SPECfp2006 tasks:

fp=default:
EXTRA_LDFLAGS = /F950000000

It reserves 950 million bytes for the stack (which is just 1 MB by default), whereas the stack for the integer SPECint2006 tasks is only 512 million bytes:

int=default:
EXTRA_LDFLAGS = /F512000000

After some thought, we added the "EXTRA_LDFLAGS = /F512000000" line right under the 454.calculix compatibility options. This reduced its stack and left the remaining free system memory to the heap, which this task uses to store its data. As a result, the two-pass compilation of 454.calculix finished successfully, and we could run it with any input data.
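The resulting fragment of the config file was approximately the following (our reconstruction of the section):

454.calculix=default=default=default:
EXTRA_LDFLAGS = /F512000000

Since a benchmark-specific section takes precedence over the suite-wide fp=default one, the smaller stack reservation applied only to 454.calculix.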

Table 4. SPEC CPU2006 Task Compilation Time

Task Compilation time (tune=base) Compilation time (tune=peak)
400.perlbench 0:01:08 0:06:25
401.bzip2 0:00:06 0:02:41
403.gcc 0:02:50 0:05:47
429.mcf 0:00:01 0:00:58
445.gobmk 0:00:38 0:06:51
456.hmmer 0:00:11 0:05:36
458.sjeng 0:00:08 0:08:41
462.libquantum 0:00:01 0:00:15
464.h264ref 0:03:13 0:08:52
471.omnetpp 0:01:11 0:11:01
473.astar 0:00:02 0:06:16
483.xalancbmk 0:15:03 0:50:37
410.bwaves 0:00:03 0:01:43
416.gamess 0:14:24 0:20:08
433.milc 0:00:17 0:01:00
434.zeusmp 0:01:35 0:03:21
435.gromacs 0:00:56 0:04:50
436.cactusADM 0:00:46 0:02:08
437.leslie3d 0:00:22 0:10:12
444.namd 0:00:13 0:00:48
447.dealII 0:22:11 0:25:05
450.soplex 0:01:05 0:02:14
453.povray 0:01:29 0:02:57
454.calculix 0:08:14 0:09:15
459.GemsFDTD 0:04:53 0:07:51
465.tonto 0:16:10 0:33:51
470.lbm 0:00:03 0:02:19
481.wrf 1:37:43 0:51:48
482.sphinx3 0:00:14 0:00:52
Total 3:15:10 4:54:22

As for the compilation time of the entire suite (Table 4 lists the compilation times of tasks with specific SSE3 optimizations enabled by the /QxP option), it's rather long. For example, the complete compilation of the base build (tune=base) takes more than 3 hours. Moreover, individual task compilation times vary greatly, from 1 second to more than 1.5 hours: there's a large group of benchmarks that compile in less than 2-3 minutes, several benchmarks that compile in 10-20 minutes, and finally 481.wrf, which compiles in nearly 2 hours.

The two-pass compilation with Profile-Guided Optimization (tune=peak) significantly increases the total compilation time (by 1.5 times, to about 5 hours). It also changes the distribution of individual compilation times considerably. Nevertheless, even in this case most benchmarks compile in a reasonable 10 minutes or less, and only some of them require 20 to 50 minutes. Interestingly, the absolute leader in base-build compilation time (481.wrf) compiles almost twice as fast in its "peak" modification. It seems the two-pass compilation with a "training" run significantly reduces code analysis time during the multi-file inter-procedural optimization used in both cases.

You should pay attention not only to compilation times but also to the amount of RAM used. As we have already mentioned, SPEC CPU2006 has high memory requirements: at least 1 GB of free RAM for a 32-bit platform. It turned out that compilation is even more demanding, at least in our conditions (Intel compilers with high-level optimizations): total memory usage reached about 1.8 GB for single-pass compilation and about 1.9 GB for two-pass compilation. We first discovered this qualitatively, when we attempted to use our dual-core processor at full capacity, that is, to compile two builds simultaneously (for example, non-optimized and SSE-optimized). This quickly used up all 2 GB of installed RAM and resulted in heavy hard drive swapping. So we had to give up this "speed-up" idea and instead measured total memory usage throughout a compilation on one CPU core using Windows Task Manager.

3. Running benchmarks

Like its previous version, SPEC CPU2006 provides two sizes of input and output data sets for the benchmarks: a test run (size=test) and a reference run (size=ref). The former is a quick way to check that the benchmarks work, while the latter is used to evaluate system performance. Moreover, according to SPEC rules, to obtain valid results eligible for publication on the SPEC web site, each benchmark must be run at least three times.
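In runspec terms, the two scenarios look roughly like this (illustrative command lines; the exact options should be checked against the runspec documentation):

runspec --config=windows-ia32-icl.cfg --size=test --tune=base int fp
runspec --config=windows-ia32-icl.cfg --size=ref --tune=base --iterations=3 int fp

The first is a quick sanity check; the second performs three reference-size iterations of both suites, as required for valid results.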

As for the test run, its name is still justified in SPEC CPU2006: our testbed completed all the benchmarks (CINT2006 and CFP2006) in less than 6 minutes. Peak memory usage in this mode also fit the official requirements, at approximately 1.15 GB (of which about 0.25 GB was used by the operating system).

The reference run of SPEC CPU2006 required a bit more memory (1.4 GB) but took significantly more time to complete. The results obtained on our testbed with SSE3-optimized benchmark code are provided in Table 5.

Table 5. SPEC CPU2006 Runtimes

Task Runtime with size=ref (tune=base) Runtime with size=ref (tune=peak)
400.perlbench 0:13:52 0:12:02
401.bzip2 0:17:32 0:17:15
403.gcc 0:14:25 0:14:03
429.mcf 0:15:18 0:15:23
445.gobmk 0:10:45 0:09:59
456.hmmer 0:37:10 0:36:21
458.sjeng 0:16:41 0:14:38
462.libquantum 0:28:24 0:28:48
464.h264ref 0:22:38 0:21:39
471.omnetpp 0:13:10 0:12:03
473.astar 0:14:46 0:13:48
483.xalancbmk 1:40:00 1:39:25
410.bwaves 0:15:12 0:15:14
416.gamess 0:25:23 0:25:23
433.milc 0:15:00 0:14:59
434.zeusmp 0:13:55 0:14:00
435.gromacs 0:10:30 0:10:26
436.cactusADM 0:16:40 0:16:42
437.leslie3d 0:22:08 0:22:00
444.namd 0:11:59 0:11:56
447.dealII 0:16:11 0:15:52
450.soplex 0:15:23 0:15:22
453.povray 0:05:29 0:04:44
454.calculix 0:16:42 0:16:43
459.GemsFDTD 0:25:35 0:26:44
465.tonto 0:21:08 0:20:59
470.lbm 0:23:13 0:23:05
481.wrf 0:18:54 0:19:33
482.sphinx3 0:26:20 0:25:30
Total 10:04:23 9:54:36

So, the total runtime on our system is approximately 10 hours. Note that this is only a single run; obtaining valid results per SPEC requirements would take about 30 hours of CPU time. Thus, complete platform benchmarking in SPEC CPU2006 using our test method, with our set of code optimizations (non-optimized; optimized for SSE, SSE2; optimized for Northwood, Prescott, Conroe) and both "base" and "peak" builds (12 variants in all, i.e. six code variants times two tunings), may take 1 to 2 weeks of pure runtime.

Conclusion

We have analyzed the contents of SPEC CPU2006, its main peculiarities and its differences from the previous SPEC CPU2000, which our testlab had used for several years to evaluate the performance of various platforms. We have also tried SPEC CPU2006 out, that is, we estimated whether we could use it in our testlab (compilation and runs) and evaluated its typical system resource requirements. SPEC CPU2006's memory requirements are quite high: up to 1.9 GB to compile the benchmarks (fortunately, this procedure is performed far less often than the tests themselves) and about 1.4 GB for the reference runs that produce platform performance ratings. This concerns 32-bit platforms; SPEC honestly warns users that 64-bit platforms may require twice as much memory (we'll verify that soon). Considering that memory size in typical modern platforms usually does not exceed 2 GB, this significantly hampers parallel benchmark runs for evaluating the "full" performance of a platform with a multi-core processor (in the general case, memory usage is multiplied by the number of running instances). For that reason, our first SPEC CPU2006 performance analyses, to be published in the near future, will cover only SPECint2006/SPECfp2006 obtained in "single-core mode".



Dmitri Besedin (dmitri_b@ixbt.com)
January 26, 2007
Updated on May 2, 2007
