Initially we chose Adaptec ThreadMark 2.0 for the tests. Its main drawback
is how little information it reports: a single weighted average result.
Besides, its results on new, fast discs began to raise doubts; in particular,
the correlation with WinBench results all but disappeared. So I decided
to follow StorageReview's example and start using Intel's IOMeter test.
The Trial Version can be downloaded here.
I used the technique developed by StorageReview; it is also described
there (Operating Systems and Benchmarks - Part 4). Below is a short
description of the test and of the test technique.
IOMeter, unlike WinBench, which is based on real applications, is
a completely synthetic test. This makes it very flexible, but it also gives
the tester plenty of trouble setting it up. Further complications arise
because it can test not only a single disc in a uniprocessor machine,
but also disc arrays in multiprocessor configurations and even a set of
computers on a network.
IOMeter operates with so-called "workers". Intel recommends creating
one worker per processor, so let us assume we have a single worker.
Each worker is assigned one or more targets, which are either an unpartitioned
physical disc or one or several partitions on a disc. Each worker then
receives its access pattern - a set of parameters according to which it
accesses the target.
An access pattern contains the following variables:
- Transfer Request Size - the minimal data unit the test operates with.
- Percent Random/Sequential Distribution - the percentage of random requests;
the rest are sequential.
- Percent Read/Write Distribution - the percentage of read requests.
Another important variable, which is not part of the access pattern itself,
is the # of Outstanding I/Os: it defines the number of simultaneous I/O
requests for the given worker and, correspondingly, the disc load.
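To make these parameters easier to picture, here is a minimal sketch in Python (purely illustrative; IOMeter itself is configured through its GUI, and all names below are my own, not IOMeter's):

```python
from dataclasses import dataclass

@dataclass
class AccessPattern:
    """One hypothetical IOMeter access specification."""
    transfer_request_size_kb: float  # Transfer Request Size
    percent_random: int              # Percent Random/Sequential Distribution
    percent_read: int                # Percent Read/Write Distribution

# The queue depth is set per worker, outside the pattern itself.
outstanding_ios = 4

# Example: StorageReview's Workstation pattern from the table below.
workstation = AccessPattern(transfer_request_size_kb=8,
                            percent_random=80,
                            percent_read=80)
```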
So, by setting the parameters arbitrarily we could obtain a wide range of
mutually incomparable results of little practical value. This raises the
question: how should an access pattern be set so that it models disc
operation under real conditions? Here I used the technique developed by
StorageReview.
There are three access patterns: File Server (the model is defined
by Intel and ships with IOMeter), Workstation and Database (defined
by StorageReview). Below is a table of parameters for each pattern,
taken from StorageReview (Operating Systems and Benchmarks - Part 5),
where you can also read why these patterns were chosen.
Access Pattern                                                  | % of Access Specification | Transfer Request Size | % Reads | % Random
File Server Access Pattern (as defined by Intel)                | 10%                       | 0.5 KBytes            | 80%     | 100%
                                                                | 5%                        | 1 KBytes              | 80%     | 100%
                                                                | 5%                        | 2 KBytes              | 80%     | 100%
                                                                | 60%                       | 4 KBytes              | 80%     | 100%
                                                                | 2%                        | 8 KBytes              | 80%     | 100%
                                                                | 4%                        | 16 KBytes             | 80%     | 100%
                                                                | 4%                        | 32 KBytes             | 80%     | 100%
                                                                | 10%                       | 64 KBytes             | 80%     | 100%
Workstation Access Pattern (as defined by StorageReview.com)    | 100%                      | 8 KBytes              | 80%     | 80%
Database Access Pattern (as defined by Intel/StorageReview.com) | 100%                      | 8 KBytes              | 67%     | 100%
Now a few words on the # of Outstanding I/Os parameter. If it is set to 1
with a 100% random distribution, we in fact measure the random access time.
A value of 4 corresponds to the load created by an elementary application
such as Windows Calculator. According to StorageReview, on real applications
this parameter averages 30-50, and values above 100 correspond to a heavily
loaded disc (e.g. during defragmentation). Accordingly, they suggest using
the following five values for this parameter.
Load       | # of Outstanding I/Os
Linear     | 1
Very Light | 4
Light      | 16
Moderate   | 64
Heavy      | 256
Besides, you can set the test run time (in the Trial Version this is done
manually, by pressing the STOP button) and the ramp-up time. I set the run
time of each of the 15 tests (5 load levels for each of the 3 access
patterns) to 10 minutes, and the ramp-up delay to 30 seconds. The discs
are tested as physical drives, unpartitioned and unformatted.
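For reference, the full test matrix is 3 access patterns x 5 load levels. A tiny sketch of how those 15 runs could be enumerated (the scheduling itself is done in the IOMeter GUI; these names are mine):

```python
from itertools import product

patterns = ["File Server", "Workstation", "Database"]
queue_depths = [1, 4, 16, 64, 256]  # Linear ... Heavy loads

RUN_TIME_MIN = 10  # run time of each test
RAMP_UP_SEC = 30   # ramp-up delay before measurement starts

runs = list(product(patterns, queue_depths))
assert len(runs) == 15
for pattern, depth in runs:
    print(f"{pattern:12s} @ {depth:3d} outstanding I/Os: "
          f"{RAMP_UP_SEC} s ramp-up, {RUN_TIME_MIN} min run")
```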
Now for the main thing - what we get in the end. The following results
were recorded:
- Total I/Os Per Second - the average number of requests completed per
second. A request consists of positioning and reading or writing a block
of the corresponding size.
- Total MBs Per Second - the same thing expressed differently. For patterns
that work with blocks of a single size (Workstation and Database) it is
simply Total I/Os Per Second multiplied by the block size.
- Average I/O Response Time - under linear load (1 outstanding I/O) this
carries the same information as Total I/Os Per Second (Total I/Os Per
Second = 1000 milliseconds / Average I/O Response Time). As the load grows,
the value rises, but not proportionally; the result depends on how well
the drive firmware, the bus and the OS are optimized.
- CPU Effectiveness, or I/Os per % CPU Utilization.
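The relations between these figures are easy to check by hand. Here is a short sketch (illustrative Python with made-up sample numbers) of the two conversions mentioned above:

```python
# At a queue depth of 1, response time and request rate carry the same information.
avg_response_ms = 12.5                     # made-up sample value
ios_per_second = 1000.0 / avg_response_ms  # Total I/Os Per Second = 1000 ms / Average I/O Response Time
print(f"{ios_per_second:.1f} IO/s")        # 80.0 IO/s

# For single-size patterns (Workstation, Database) the throughput is simply
# the request rate multiplied by the block size (assuming 1 MB = 1024 KB here).
transfer_size_kb = 8
mb_per_second = ios_per_second * transfer_size_kb / 1024.0
print(f"{mb_per_second:.2f} MB/s")         # 0.62 MB/s
```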