FWIW -
the read-ahead cache in most controllers for PC disks is optimized for sequential access. Random I/O will always come in a distant second, unless the files are smaller than the cache size.
For multiuser systems, the only way to test is on a quiescent system. Judging by your wording, I'm assuming it's a single-user system. Otherwise you'd probably also need to worry about RAID controllers, SAN hardware, etc.
The only really valid tests for a disk:
data transfer rate - assuming you have enough free memory for a 50MB buffer:
create a series of ten contiguous 50MB files, then
Code:
/* stat each file to get the exact size - I'm pretending 50000000 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *ptr;
    FILE *in[10];
    char fname[64];
    int i;

    ptr = malloc(50000000);
    if (ptr == NULL) {
        perror("malloc");
        exit(EXIT_FAILURE);
    }
    for (i = 0; i < 10; i++) {
        snprintf(fname, sizeof fname, "file%d.dat", i); /* your ten test files */
        in[i] = fopen(fname, "rb");   /* I'm not testing the open */
    }
    /* time this loop - it is the transfer-rate test */
    for (i = 0; i < 10; i++) {        /* read in ten files fast */
        if (!fread(ptr, 50000000, 1, in[i])) {
            perror("Error on file read");
            exit(EXIT_FAILURE);
        }
    }
    for (i = 0; i < 10; i++) fclose(in[i]);
    free(ptr);
    return 0;
}
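To put a number on that loop, here's a minimal sketch of timing a single 50MB read with POSIX clock_gettime() - the file name file0.dat and the choice of CLOCK_MONOTONIC are mine, nothing magic:
Code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;
    char *ptr = malloc(50000000);
    FILE *in = fopen("file0.dat", "rb");  /* one of the 50MB test files */

    clock_gettime(CLOCK_MONOTONIC, &t0);
    if (!fread(ptr, 50000000, 1, in)) {
        perror("Error on file read");
        exit(EXIT_FAILURE);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    /* elapsed seconds, then bytes/sec expressed as MB/sec */
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("50MB in %.3f sec = %.1f MB/sec\n", secs, 50.0 / secs);

    fclose(in);
    free(ptr);
    return 0;
}
On older glibc you may need to link with -lrt for clock_gettime().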
Disk seek time - create several dozen medium-sized contiguous files, say 10MB each, and open them all at one time. Try to make the total data size pretty close to the test above.
Then jump around, reading a single record from each open file in turn, until you have hit EOF on all of them.
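Something along these lines - a sketch only; the 50-file count (50 x 10MB = 500MB, to match the test above), the file names, and the 8KB record size are my assumptions:
Code:
#include <stdio.h>
#include <stdlib.h>

#define NFILES 50     /* "several dozen" 10MB files */
#define RECSZ  8192   /* one "record" per visit */

int main(void)
{
    FILE *in[NFILES];
    char rec[RECSZ];
    char fname[64];
    int i, still_open = NFILES;

    for (i = 0; i < NFILES; i++) {
        snprintf(fname, sizeof fname, "seekfile%d.dat", i);
        in[i] = fopen(fname, "rb");
    }
    /* time this loop - round-robin one record from each file,
       forcing the heads to move between every read */
    while (still_open > 0) {
        for (i = 0; i < NFILES; i++) {
            if (in[i] == NULL)
                continue;
            if (fread(rec, RECSZ, 1, in[i]) != 1) {  /* EOF (or error) */
                fclose(in[i]);
                in[i] = NULL;
                still_open--;
            }
        }
    }
    return 0;
}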
You'll find that the seek time test, even though it moves about the same amount of data, will be like an order of magnitude slower. This is because you will encounter both rotational latency and head seek time on every read.
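Back-of-the-envelope, assuming a garden-variety 7200 RPM drive: one rotation takes 60/7200 = 8.3ms, so average rotational latency is about 4.2ms, plus a typical 8-9ms average seek - call it 12-13ms per random read, or roughly 80 reads/sec. At 8KB per record that's well under 1MB/sec, versus tens of MB/sec for the sequential test.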
For a box that does lots of things at once, like a server, the second test is more meaningful.