Linux and Windows write timing difference

This is a discussion on Linux and Windows write timing difference within the C Programming forums, part of the General Programming Boards category.

  1. #1
    Registered User
    Join Date
    Nov 2009
    Posts
    3

    Linux and Windows write timing difference

    Hi guys,

    This is not related to a syntax or runtime problem. What I am going to ask is more about how Linux and Windows handle writing data from a buffer to a file. I have this code here, wrapped in a timing block, to write a buffer to a file.

    Code:
    StartCounter();
    if(rows != fwrite(image, cols, rows, fp)){
       fprintf(stderr, "Error writing the image data in write_pgm_image().\n");
       if(fp != stdout) fclose(fp);
       return(0);
    }

    test = GetCounter();
    StartCounter and GetCounter are defined as follows:

    Code:
    #include <stdio.h>
    #include <shrUtils.h>

    #ifdef _WIN32
    #include <windows.h>
    double PCFreq = 0.0;
    __int64 CounterStart = 0;
    #endif

    #ifdef __linux__
    #include <sys/time.h>
    struct timeval ts_start, ts_end;
    #endif

    void StartCounter()
    {
    #ifdef _WIN32
        LARGE_INTEGER li;
        if(QueryPerformanceFrequency(&li) == 0)
            printf("QueryPerformanceFrequency failed!\n");

        PCFreq = (float)((li.QuadPart)/1000.0);

        QueryPerformanceCounter(&li);
        CounterStart = li.QuadPart;
    #endif

    #ifdef __linux__
        gettimeofday(&ts_start, NULL);
    #endif
    }

    double GetCounter()
    {
    #ifdef _WIN32
        LARGE_INTEGER li;
        QueryPerformanceCounter(&li);
        return (float)((li.QuadPart-CounterStart)/PCFreq);
    #endif

    #ifdef __linux__
        gettimeofday(&ts_end, NULL);
        //time = timespec_sub(ts_end, ts_start);
        return (float)((ts_end.tv_sec - ts_start.tv_sec + 1e-6 * (ts_end.tv_usec - ts_start.tv_usec))*1000.0);
    #endif
    }
    When I measure time to write data, I found out that:
    1 - in Linux, the time to write data is linearly proportional to the data size.
    2 - in Windows, the time to write data is quadratically proportional to the data size.

    I think the fwrite function writes the data row by row to the file, hence the linear relationship in Linux. But Windows seems to behave differently. Can you think of any explanation for this?

    Any help is greatly appreciated.

  2. #2
    Registered User
    Join Date
    Nov 2012
    Posts
    1,057
    Quote Originally Posted by chipbu View Post
    When I measure time to write data, I found out that:
    1 - in Linux, the time to write data is linearly proportional to the data size.
    2 - in Windows, the time to write data is quadratically proportional to the data size.
    Could you provide some sample data timings? I find this hard to believe. For example, suppose I have a disk writer that can write one disk block in one microsecond. The time to write N disk blocks is linearly proportional to the number of disk blocks, so the total time in seconds is

    t = N * 0.000001

    So writing a full 1 TB hard disk (about 2 billion 512-byte blocks) should take about 2000 seconds, or roughly half an hour.

    Now suppose I have an "improved" model of disk writer that can write 1 block in a nanosecond (1000 times faster). However, as a penalty, the improved model takes quadratic time to write out N disk blocks. So...

    t = N * N * 0.000000001

    How long would it take to write out a 1 TB hard disk under our new "improved" writing system?
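    Working those two formulas through for the 2-billion-block disk makes the point concrete (a quick sketch of the arithmetic, assuming 512-byte blocks as above):

    Code:
    ```c
    #include <stdio.h>

    int main(void)
    {
        /* Assumed figures from the example above: a 1 TB disk with
           512-byte blocks, i.e. roughly 2 billion blocks. */
        double blocks = 2e9;

        double linear_s    = blocks * 1e-6;          /* 1 us per block        */
        double quadratic_s = blocks * blocks * 1e-9; /* N*N ns, "improved"    */

        printf("linear:    %.0f s (~%.0f minutes)\n", linear_s, linear_s / 60.0);
        printf("quadratic: %.0f s (~%.0f years)\n", quadratic_s,
               quadratic_s / (60.0 * 60.0 * 24.0 * 365.0));
        return 0;
    }
    ```

    The quadratic writer is a non-starter: its per-block speed advantage is swamped by the N-squared growth long before the disk is full.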

  3. #3
    Registered User
    Join Date
    Oct 2011
    Posts
    834
    Note that float has only about seven significant digits. You'd better use double instead, to avoid loss of precision: when you subtract two seven-significant-digit values, the result may have just two or three significant digits left. In other words, float is completely unsuitable for time difference measurements.
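    The precision loss is easy to demonstrate with timestamp-sized values (a sketch; the gettimeofday-style second counts below are made up):

    Code:
    ```c
    #include <stdio.h>

    int main(void)
    {
        /* A Unix timestamp has ~10 significant digits; float keeps ~7,
           so at this magnitude a float can only resolve steps of ~128 s. */
        double t_start = 1355000000.123456; /* hypothetical start time      */
        double t_end   = 1355000000.173456; /* 50 ms later                  */

        float  diff_f = (float)t_end - (float)t_start; /* rounded first    */
        double diff_d = t_end - t_start;

        printf("float:  %.3f s\n", diff_f);  /* the 50 ms vanish entirely  */
        printf("double: %.3f s\n", diff_d);
        return 0;
    }
    ```

    Both timestamps round to the same float value, so the measured interval comes out as exactly zero; in double the 50 ms survive.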

    Quote Originally Posted by chipbu View Post
    Code:
    if (rows != fwrite(image, cols, rows, fp))
    Because the C library must try to write only full rows -- it cannot report that it wrote two and a half rows -- it may have to do different things on different architectures to achieve that. In particular, the kernel and filesystem facilities may differ (or, to be exact, provide different guarantees about write operations), and that affects how the C library has to carry out this operation.

    Because you are saving a 2D array, you don't get any benefits from writing row-by-row. So, it would be better to give the C library more leeway to do the write efficiently:
    Code:
    if ((size_t)1 != fwrite(image, (size_t)rows * (size_t)cols, (size_t)1, fp))
    Because you have a single buffer image, and size_t is always large enough to hold the size of that buffer in chars, the multiplication cannot overflow (unless the original call tries to write more data than there is in the buffer).

    It might be interesting to see your results after you fix the two points above. (The results are not really comparable across OSes, due to filesystem behaviour and caching differences, but they might be relevant to your use cases, if you test the programs as part of some work flow.)
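    Putting both fixes together, a Linux-only sketch might look like the following (the buffer size, the output file name, and the now_ms helper are all assumptions for illustration, not the original poster's code):

    Code:
    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>

    /* Return wall-clock milliseconds since the Epoch, in double precision. */
    static double now_ms(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
    }

    int main(void)
    {
        size_t rows = 1024, cols = 1024;            /* assumed image size   */
        unsigned char *image = calloc(rows * cols, 1);
        FILE *fp = fopen("test.pgm", "wb");          /* hypothetical file    */
        if (image == NULL || fp == NULL)
            return 1;

        double start = now_ms();
        /* One fwrite call for the whole buffer, checked against 1 item. */
        if (fwrite(image, rows * cols, 1, fp) != 1) {
            fprintf(stderr, "Error writing the image data.\n");
            fclose(fp);
            return 1;
        }
        double elapsed = now_ms() - start;

        printf("wrote %zu bytes in %.3f ms\n", rows * cols, elapsed);
        fclose(fp);
        free(image);
        return 0;
    }
    ```

    Keeping all arithmetic in double and issuing a single fwrite per image removes both of the measurement artifacts discussed above, so any remaining nonlinearity would point at the OS or filesystem rather than the program.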
