Thread: File I/O

  1. #1
    Registered User
    Join Date
    Nov 2003
    Posts
    2

    File I/O

    I am simply trying to characterize both the sequential and random-access bandwidths of some disks and would like to know the best way to accomplish this. My plan for sequential bandwidth is to simply write some huge file to disk, open it up and just start performing fgetc's on it. The only way I can think of testing random-access bandwidth is to write several small files to disk, open them and read from them in some random order...

    If anyone has suggestions...please inform me of them.

    Thanks!
    John

  2. #2
    .
    Join Date
    Nov 2003
    Posts
    307
    FWIW -
    the read-ahead cache in most controllers for PC disks is optimized for sequential access. Random I/O will always come in a distant second, unless the files are smaller than the cache size.

    For multiuser systems, the only way to test is on a quiescent system. Judging by your wording, I'm assuming it's a single-user system. Otherwise you'd also probably need to worry about RAID controllers or SAN hardware, etc.

The only really valid tests for a disk:

Data transfer rate (assuming you have the memory): create a series of contiguous 50MB files, then
Code:
/* stat each file first to get the exact max size - I'm pretending 50000000 */
/* filename[] holds the ten file names */
char *ptr;
FILE *in[10];
int i;

ptr = malloc(50000000);
if (ptr == NULL) {
    perror("malloc");
    exit(EXIT_FAILURE);
}
for (i = 0; i < 10; i++) {
    in[i] = fopen(filename[i], "rb");  /* I'm not testing the open */
}
/* test I/O - time this loop */
for (i = 0; i < 10; i++) {  /* read in ten files fast */
    if (!fread(ptr, 50000000, 1, in[i])) {
        perror("Error on file read");
        exit(EXIT_FAILURE);
    }
}
for (i = 0; i < 10; i++) fclose(in[i]);
free(ptr);
Disk seek time - create several dozen medium-sized contiguous files, say 10MB each, and open them all for sequential access at one time. Try to make the total data size pretty close to the test above.

    Then jump around reading a single record from each open file until you have EOF for all your files.

You'll find that the seek-time test, even though it covers about the same amount of data, will be something like an order of magnitude slower. This is because you encounter both rotational latency and head seek time on every read.

    For a box that does lots of things at once, like a server, the second test is more meaningful.

  3. #3
    Registered User
    Join Date
    Nov 2003
    Posts
    2
Thanks for your help - this sort of scheme appears to work rather well, aside from the need to reboot between consecutive runs (due to OS file caching). If anyone is interested in my code, just email me.

    ~jdt
