Thread: clock vs gettimeofday

  1. #1
    Registered User C_ntua's Avatar
    Join Date
    Jun 2008
    Posts
    1,853

    clock vs gettimeofday

    I know I have had a similar topic/question about this, but I would like to elaborate more on the differences between clock() and gettimeofday() when used for measuring time. Practically, when running a program, what does one count and what does the other? Shouldn't we get the same result?

  2. #2
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    clock() [assuming that it is implemented as it should be] will give you the amount of time that the CPU was busy for your particular task from one call to the next. So if you have a single(-core) CPU system with two equal-priority tasks that each take 1 minute to run individually [and that are completely CPU bound, so not waiting for the disk, keyboard or some other thing], clock() should report 1 minute from start to finish for each of those tasks when they are run at the same time. But the total wall-clock time will be about 2 minutes.

    gettimeofday will report the time that you would see on your wrist-watch, whatever your task is doing.

    If we do this, the difference will be obvious:
    Code:
    #include <stdio.h>
    #include <time.h>       /* clock(), clock_t, CLOCKS_PER_SEC */
    #include <sys/time.h>   /* gettimeofday(), struct timeval */
    
    long long int toddiff(struct timeval *tod1, struct timeval *tod2)
    {
        long long t1, t2;
        t1 = tod1->tv_sec * 1000000LL + tod1->tv_usec;
        t2 = tod2->tv_sec * 1000000LL + tod2->tv_usec;
        return t1 - t2;
    }
    
    int main(void)
    {
        struct timeval tod1, tod2;
        clock_t t1, t2;
    
        t1 = clock();
    
        /* Slurp CPU for 1 second. */
        gettimeofday(&tod1, NULL);
        do
        {
            gettimeofday(&tod2, NULL);
        } while (toddiff(&tod2, &tod1) < 1000000);
    
        printf("Hit enter...");
        (void)getchar();
    
        t2 = clock();
        gettimeofday(&tod2, NULL);
        printf("timeofday %5.2f seconds, clock %5.2f seconds\n",
               toddiff(&tod2, &tod1) / 1000000.0,
               (t2 - t1) / (double)CLOCKS_PER_SEC);
    
        return 0;
    }
    I haven't compiled that, so I can't guarantee there are no typos, but the concept should be OK.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  3. #3
    Registered User
    Join Date
    Jul 2008
    Posts
    133
    Sorry for the maybe off-topic question, but matsp, do you know if there's maybe a better way for high-precision timing than gettimeofday() (talking about Linux)? I mean, suppose I want a thread to run exactly every 20 msec (and it runs for about 0.1-1.0 msec), but I also want it to sleep() in between. I tried signals (setitimer()) but they are pretty imprecise. Right now this thread is given SCHED_FIFO, static priority 1, and it does a nanosleep() of 0.5 msec, doing gettimeofday() + "calc_diff()" after every sleep. Is there a better way? (Sleeping the whole interval at once gives more "offset".) And sorry once more if this is off-topic.
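    In case it's clearer, here's roughly what I'm doing now (untested, trimmed sketch - do_work() is just a stand-in for the real 0.1-1.0 msec job, and error handling is left out):
    Code:
    #include <stdio.h>      /* perror() */
    #include <sched.h>      /* sched_setscheduler() */
    #include <time.h>       /* nanosleep() */
    #include <sys/time.h>   /* gettimeofday() */
    
    int main(void)
    {
        struct sched_param sp;
        struct timeval start, now;
        struct timespec ts;
    
        /* Give this process SCHED_FIFO, static priority 1 (needs root). */
        sp.sched_priority = 1;
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            perror("sched_setscheduler");
    
        ts.tv_sec = 0;
        ts.tv_nsec = 500000;    /* 0.5 msec */
    
        gettimeofday(&start, NULL);
        for (;;)
        {
            /* do_work() - runs for about 0.1-1.0 msec - goes here. */
    
            /* Sleep in 0.5 msec steps until 20 msec have elapsed. */
            do
            {
                nanosleep(&ts, NULL);
                gettimeofday(&now, NULL);
            } while ((now.tv_sec - start.tv_sec) * 1000000L
                     + (now.tv_usec - start.tv_usec) < 20000);
            start = now;
        }
    }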
    EDIT: oh yes, I raised HZ to 1000.
    Last edited by rasta_freak; 08-07-2008 at 05:15 AM.

  4. #4
    Registered User C_ntua's Avatar
    Join Date
    Jun 2008
    Posts
    1,853
    Does clock() count the time needed for data to load from main memory into the CPU?
    Or maybe a better question: if there is no I/O activity from the hard disk or keyboard/mouse, will clock() and gettimeofday() show the same time?

  5. #5
    Registered User
    Join Date
    Jul 2008
    Posts
    133
    Quote Originally Posted by C_ntua View Post
    Does clock() count the time needed for data to load from main memory into the CPU?
    Or maybe a better question: if there is no I/O activity from the hard disk or keyboard/mouse, will clock() and gettimeofday() show the same time?
    Only if it's the only program/thread running. But the kernel is running all the time, so they should never be exactly the same. (I could be wrong.)

  6. #6
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Quote Originally Posted by rasta_freak View Post
    Only if it's the only program/thread running. But the kernel is running all the time, so they should never be exactly the same. (I could be wrong.)
    If you run an application that doesn't do anything but calculations, clock() and gettimeofday() would be fairly close. Any time the application starts waiting for something [the disk to deliver data, the user to hit enter, etc., etc.], clock() will "lag behind" gettimeofday(). Of course, clock() can also go faster than gettimeofday() if you have multiple threads in the same process.
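    A quick [untested] sketch of the multi-thread case - this assumes POSIX threads and a machine with at least two free cores, and is compiled with -lpthread:
    Code:
    #include <stdio.h>
    #include <time.h>       /* clock(), CLOCKS_PER_SEC */
    #include <pthread.h>
    #include <sys/time.h>   /* gettimeofday() */
    
    /* Busy-spin for roughly two seconds of wall-clock time. */
    static void *spin(void *arg)
    {
        struct timeval start, now;
        gettimeofday(&start, NULL);
        do
        {
            gettimeofday(&now, NULL);
        } while (now.tv_sec - start.tv_sec < 2);
        return NULL;
    }
    
    int main(void)
    {
        pthread_t a, b;
        struct timeval w1, w2;
        clock_t c1, c2;
    
        c1 = clock();
        gettimeofday(&w1, NULL);
    
        pthread_create(&a, NULL, spin, NULL);
        pthread_create(&b, NULL, spin, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
    
        gettimeofday(&w2, NULL);
        c2 = clock();
    
        printf("timeofday ~%ld seconds, clock %.2f seconds\n",
               (long)(w2.tv_sec - w1.tv_sec),
               (c2 - c1) / (double)CLOCKS_PER_SEC);
        return 0;
    }
    With both cores free you should see about 2 seconds of wall-clock time but about 4 seconds from clock(), since it adds up the CPU time of both threads.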

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  7. #7
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Quote Originally Posted by rasta_freak View Post
    Sorry for the maybe off-topic question, but matsp, do you know if there's maybe a better way for high-precision timing than gettimeofday() (talking about Linux)? I mean, suppose I want a thread to run exactly every 20 msec (and it runs for about 0.1-1.0 msec), but I also want it to sleep() in between. I tried signals (setitimer()) but they are pretty imprecise. Right now this thread is given SCHED_FIFO, static priority 1, and it does a nanosleep() of 0.5 msec, doing gettimeofday() + "calc_diff()" after every sleep. Is there a better way? (Sleeping the whole interval at once gives more "offset".) And sorry once more if this is off-topic.
    EDIT: oh yes, I raised HZ to 1000.
    Linux is not a real-time OS, so if you want something to run with high precision in terms of timing, you probably shouldn't be using Linux.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  8. #8
    Registered User
    Join Date
    Jul 2008
    Posts
    133
    Quote Originally Posted by matsp View Post
    Linux is not a real-time OS, so if you want something to run with high precision in terms of timing, you probably shouldn't be using Linux.

    --
    Mats
    I know that. It's not really "mission critical" to get precisely 20 ms, I was just wondering about any possible way to get as close as possible. I was reading something about POSIX high-precision timers, but couldn't find any deeper info (or examples). I'd be really grateful for any link or docs about that subject (except the kernel sources, if possible).

  9. #9
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Quote Originally Posted by rasta_freak View Post
    I know that. It's not really "mission critical" to get precisely 20 ms, I was just wondering about any possible way to get as close as possible. I was reading something about POSIX high-precision timers, but couldn't find any deeper info (or examples). I'd be really grateful for any link or docs about that subject (except the kernel sources, if possible).
    Well, the real problem isn't the precision of the timer [well, 1000 Hz means that the timer tick is every millisecond, so you would be running within 1 millisecond of the target time] - it is that the scheduler doesn't switch to your thread as soon as the timer elapses.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  10. #10
    Registered User
    Join Date
    Jul 2008
    Posts
    133
    Quote Originally Posted by matsp View Post
    If you run an application that doesn't do anything but calculations, clock() and gettimeofday() would be fairly close. Any time the application starts waiting for something [the disk to deliver data, the user to hit enter, etc., etc.], clock() will "lag behind" gettimeofday(). Of course, clock() can also go faster than gettimeofday() if you have multiple threads in the same process.

    --
    Mats
    But if you have multiple threads, wouldn't they be executed sequentially (on a 1-core CPU), so they'd sum up the same as a single-threaded process?

  11. #11
    Registered User
    Join Date
    Jul 2008
    Posts
    133
    Quote Originally Posted by matsp View Post
    Well, the real problem isn't the precision of the timer [well, 1000 Hz means that the timer tick is every millisecond, so you would be running within 1 millisecond of the target time] - it is that the scheduler doesn't switch to your thread as soon as the timer elapses.

    --
    Mats
    That's what I understood - the HZ value affects sleep() & usleep(), but nanosleep() uses high-precision timers (if available) internally. It's just that I couldn't find any deeper info on how to use these timers myself.

    EDIT: except, of course, in the glibc sources (again!!!)
    Last edited by rasta_freak; 08-07-2008 at 06:31 AM.

  12. #12
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Quote Originally Posted by rasta_freak View Post
    But if you have multiple threads, wouldn't they be executed sequentially (on a 1-core CPU), so they'd sum up the same as a single-threaded process?
    Correct, it only adds up faster if there are multiple processors to run threads in parallel.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  13. #13
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Quote Originally Posted by rasta_freak View Post
    That's what I understood - the HZ value affects sleep() & usleep(), but nanosleep() uses high-precision timers (if available) internally. It's just that I couldn't find any deeper info on how to use these timers myself.
    Not that I'm aware of, but I've never tried to do precise sleeping in Linux, because it is futile - sooner or later some kernel part will lock the scheduler for much longer than you'd like [for example, when de-allocating a large kernel memory block] - unless you specifically use an RT version of Linux that has changes to handle RT tasks outside of the kernel itself.

    You will probably get decent precision by calculating the next wakeup time with gettimeofday() and using nanosleep() - but I wouldn't guarantee that it's any better than what you have [it will, however, use less CPU time, so it will slightly improve the chances that your task is the only one ready to run when the timer expires, rather than "interfering" with all other tasks].
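    Something like this [again untested, and do_work() is just a placeholder] - the point is to keep an absolute "next wakeup" time, so an oversleep in one period just means a shorter sleep in the next one instead of accumulating drift:
    Code:
    #include <time.h>       /* nanosleep() */
    #include <sys/time.h>   /* gettimeofday() */
    
    #define PERIOD_USEC 20000L      /* 20 ms */
    
    int main(void)
    {
        struct timeval now, next;
        struct timespec ts;
        long remain;
        int i;
    
        gettimeofday(&next, NULL);
        for (i = 0; i < 500; i++)   /* run for roughly 10 seconds */
        {
            /* Advance the absolute deadline by one period. */
            next.tv_usec += PERIOD_USEC;
            if (next.tv_usec >= 1000000)
            {
                next.tv_usec -= 1000000;
                next.tv_sec++;
            }
    
            /* do_work() - the 0.1-1.0 ms job - goes here. */
    
            /* Sleep whatever remains until the deadline, in one go. */
            gettimeofday(&now, NULL);
            remain = (next.tv_sec - now.tv_sec) * 1000000L
                     + (next.tv_usec - now.tv_usec);
            if (remain > 0)
            {
                ts.tv_sec = remain / 1000000;
                ts.tv_nsec = (remain % 1000000) * 1000;
                nanosleep(&ts, NULL);
            }
        }
        return 0;
    }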

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  14. #14
    Registered User
    Join Date
    Jul 2008
    Posts
    133
    Quote Originally Posted by matsp View Post
    Not that I'm aware of, but I've never tried to do precise sleeping in Linux, because it is futile - sooner or later some kernel part will lock the scheduler for much longer than you'd like [for example, when de-allocating a large kernel memory block] - unless you specifically use an RT version of Linux that has changes to handle RT tasks outside of the kernel itself.

    You will probably get decent precision by calculating the next wakeup time with gettimeofday() and using nanosleep() - but I wouldn't guarantee that it's any better than what you have [it will, however, use less CPU time, so it will slightly improve the chances that your task is the only one ready to run when the timer expires, rather than "interfering" with all other tasks].

    --
    Mats
    Yes, that's what I'm using now: nanosleep() for 0.5 ms, gettimeofday(), calc_diff(). It's the best I came up with. But still, the worst case is a 4 ms lag (happens once or twice per minute, approx.), and I'm just looking to improve it.

  15. #15
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Quote Originally Posted by rasta_freak View Post
    Yes, that's what I'm using now: nanosleep() for 0.5 ms, gettimeofday(), calc_diff(). It's the best I came up with. But still, the worst case is a 4 ms lag (happens once or twice per minute, approx.), and I'm just looking to improve it.
    The 4ms lag is probably what I described, a system-wide scheduler lock for some "big job" in the kernel, where the kernel needs to ensure that the entire task is complete before it releases the lock. There are probably other ways to solve that sort of problem when writing the kernel, but the kernel was NEVER written to be good at real-time performance.

    My suggestion was to sleep for a longer period to reduce the risk of colliding with other tasks - because if you "interrupt" the system 2000 times a second (every 0.5 ms), you will use up a fair amount of CPU time just doing that.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.
