In this article we will describe a testing technique for hard discs with an IDE interface. We pursue two aims.
The test system consists of a hardware part and a software part.
The biggest problem in choosing the hardware is to postpone its obsolescence as long as possible, since any upgrade would break comparability with earlier results. The i815E chipset was taken as the base for the following reasons:
As for the mainboard, we chose the Iwill WO2-R because Iwill products have proved to work stably. Besides, the presence of the onboard ATA/100 RAID controller from American Megatrends also influenced our decision.
The Intel Pentium III 800EB was chosen because it is fast enough not to bottleneck either current hard discs or future ones.
The memory size was set to 256MB, which is quite sufficient for today.
The IBM DTLA 307015 plays the role of the system disc, holding the operating system, the tests, and their results. No problems were noticed during its operation with the i815E chipset.
So, here is what we ended up with:
The software consists of an operating system and device drivers.
We chose Microsoft Windows 2000 Professional with Service Pack 1, which lets us test discs with both NTFS and FAT32 file systems under a single OS. Besides, with the release of Service Pack 1 the system has become more stable.
As for video drivers, we installed only the standard VGA driver from the OS distribution, since the display serves purely decorative purposes. We also did not install drivers for the integrated sound. The only additional driver was that of the Ultra ATA controller, version 6.1.4.
2. Test set
The Disk Inspection Tests and Disk WinMarks tests from Ziff-Davis WinBench 99 ver 1.1 are used. They work with logical discs.
The Disk Inspection Tests are used to determine the physical characteristics of discs. They include:
The Disk WinMarks includes two tests - Business Disk WinMark and High-End Disk WinMark.
The programs were selected so that both types of applications are represented: those working with small files (FrontPage) and those working with relatively large files (Photoshop). The test does not always include the latest versions of the applications.
The benchmark tests can be downloaded.
IOMeter, unlike the WinBench, which is based on real applications, is a completely synthetic test. This gives flexibility but also creates plenty of configuration problems for the tester: one may easily create a configuration that has nothing in common with reality. An additional difficulty comes from the fact that IOMeter can test not only a single disc on a uniprocessor machine (which is what we do), but also disc arrays in multiprocessor configurations and even computers on a network.
When making the settings I used the technique developed by StorageReview; it is also described there (Operating Systems and Benchmarks - Part 4). Below is a short description of the test.
IOMeter operates with so-called "workers". Intel recommends creating one worker per processor, so we assume a single worker. Each worker tests one or more targets, which are either an unpartitioned physical disc or one or several partitions on a disc. Each worker then receives its access pattern - a set of parameters according to which the worker accesses the target.
An access pattern contains the following variables:
So, by setting the parameters arbitrarily, we can get a wide range of incomparable results of little practical value. This raises a question: how do we set an access pattern so that it models disc operation under real conditions? Here, again, I used the technique developed by StorageReview.
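As an illustration, such an access pattern can be sketched as a small record of its main parameters. The field names below are my own shorthand, not IOMeter's exact internal identifiers, and the example values are made up:

```python
from dataclasses import dataclass

@dataclass
class AccessPattern:
    """One line of an IOMeter-style access specification (illustrative)."""
    name: str
    transfer_size: int    # bytes per I/O request
    percent_random: int   # 100 = purely random access
    percent_read: int     # 100 = reads only
    outstanding_ios: int  # requests kept in flight simultaneously

# A purely random, read-only pattern with 4 KB requests under a light load:
light_random_read = AccessPattern("example", 4096, 100, 100, 1)
print(light_random_read.outstanding_ios)  # -> 1
```

A worker would be handed one such pattern per target; varying `outstanding_ios` while keeping the rest fixed is exactly how the load levels described below are produced.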
So, there are three access patterns - File Server (the model is defined by Intel and comes with IOMeter), Workstation and Database (both defined by StorageReview). Below is a table of parameters for each pattern, taken from StorageReview (Operating Systems and Benchmarks - Part 5), where you can also read why these patterns were chosen.
Now a few words on the # of Outstanding I/Os parameter. Setting it to 1 with 100% in the Percent Random/Sequential Distribution effectively measures random access time. A value of 4 corresponds to the load of an elementary application such as Windows Calculator. According to StorageReview, real applications produce values of 30-50 on average, while values above 100 correspond to heavy disc load (e.g. during defragmentation). Accordingly, they suggest taking the following 5 values for this parameter.
Now comes the main thing - what we get in the end. The following results were included:
All results are output as tables in csv format. The Trial Version of this test can be downloaded here.
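Since the results arrive as csv tables, they are easy to post-process with a script. A minimal sketch, assuming hypothetical column names (the real files contain many more service columns):

```python
import csv
import io

# Hypothetical extract from an IOMeter results csv file.
raw = """Pattern,Outstanding IOs,IOps,MBps,Avg Response Time (ms)
File Server,1,98.4,0.54,10.2
File Server,4,152.1,0.83,26.3
"""

# DictReader maps each data row onto the header names.
rows = list(csv.DictReader(io.StringIO(raw)))
for r in rows:
    print(f"{r['Pattern']} @ {r['Outstanding IOs']} I/Os: {r['IOps']} IOps")
```

Running this prints one summary line per measurement; the same loop scales to the full 15-run result set.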
3. Testing technique
The disc is unpacked and installed as master on the first channel of the mainboard's integrated IDE controller. The disc is neither partitioned nor formatted.
The OS is booted from the system disc, after which the 15 Intel IOMeter tests are carried out (5 load levels for each of the three access patterns). The test allows you to set the run time (in the Trial Version this can only be done manually, by pressing the STOP button) and the delay between the start of the test and the start of measurements (ramp-up time). I set the run time to 10 minutes and the ramp-up time to 30 seconds. All results are recorded on the system disc.
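The schedule of these 15 runs can be sketched as follows. The five queue-depth values here are placeholders of my own choosing, since the actual values come from StorageReview's table:

```python
# 5 load levels for each of the three access patterns = 15 runs total.
patterns = ["File Server", "Workstation", "Database"]
load_levels = [1, 4, 16, 64, 256]  # placeholder queue depths, not the article's table

RUN_MINUTES = 10      # test duration used in the article
RAMP_UP_SECONDS = 30  # delay before measurements begin

# Build the full run schedule as (pattern, outstanding I/Os) pairs.
runs = [(p, q) for p in patterns for q in load_levels]
print(len(runs))  # -> 15
```

Each pair in `runs` corresponds to one 10-minute IOMeter session whose results land on the system disc.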
Using standard OS tools, one partition equal in size to the whole disc is created on the disc. The partition is formatted with the NTFS file system.
Ziff-Davis WinBench is started, and the test set described above is composed and run. This process is repeated three times, with the OS rebooted between the runs. All results are recorded into the corresponding database. After this stage the partition is deleted.
Using standard OS tools, one partition equal in size to the whole disc is created on the disc. The partition is formatted with the FAT32 file system.
Ziff-Davis WinBench is started, and the test set described above is composed and run. This process is repeated three times, with the OS rebooted between the runs. All results are recorded into the corresponding database.
By averaging the results obtained for each file system we get the final results of these tests.
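The averaging step is a simple mean over the three runs per file system. A minimal sketch with made-up scores:

```python
# Three WinBench runs per file system; the scores below are invented
# purely for illustration (e.g. Business Disk WinMark, in KB/s).
runs = {
    "NTFS":  [3120, 3150, 3090],
    "FAT32": [3340, 3310, 3370],
}

# Average the three runs into one final score per file system.
final = {fs: sum(scores) / len(scores) for fs, scores in runs.items()}
print(final["NTFS"])   # -> 3120.0
print(final["FAT32"])  # -> 3340.0
```

Averaging smooths out run-to-run variation caused by caching and background OS activity, which is why each configuration is measured three times.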
Now the results are ready for analysis. I recommend keeping the source test results in their initial form: these files contain a lot of service data which can be useful for a detailed analysis.