1. ## clock() vs gettimeofday()

I know I have had a similar topic/question about this before, but I would like to elaborate more on the differences between clock() and gettimeofday() when used for measuring time. In practice, when running a program, what does one count and what does the other? Shouldn't we get the same result?

2. clock() will [assuming that it is implemented as it should be] give you the amount of time that the CPU was busy with your particular task from one call to the next. So if you have a single-core CPU system with two equal-priority tasks that each take 1 minute to run individually [and that are completely CPU bound, so not waiting for the disk, keyboard or some other thing], clock() should report 1 minute from start to finish for each of those tasks when they are run at the same time. But the total wall-clock time will be about 2 minutes.

gettimeofday will report the time that you would see on your wrist-watch, whatever your task is doing.

If we do this, the difference will be obvious:
Code:
```#include <stdio.h>
#include <sys/time.h>
#include <time.h>

long long int toddiff(struct timeval *tod1, struct timeval *tod2)
{
    long long t1, t2;
    /* Cast to long long before multiplying so tv_sec * 1000000
       can't overflow a 32-bit time_t/long. */
    t1 = (long long)tod1->tv_sec * 1000000 + tod1->tv_usec;
    t2 = (long long)tod2->tv_sec * 1000000 + tod2->tv_usec;
    return t1 - t2;
}

int main(void)
{
    struct timeval tod1, tod2;
    clock_t t1, t2;

    t1 = clock();
    // Slurp CPU for 1 second.
    gettimeofday(&tod1, NULL);
    do
    {
        gettimeofday(&tod2, NULL);
    } while (toddiff(&tod2, &tod1) < 1000000);

    printf("Hit enter...");
    (void)getchar();

    t2 = clock();
    gettimeofday(&tod2, NULL);
    printf("timeofday %5.2f seconds, clock %5.2f seconds\n",
           toddiff(&tod2, &tod1) / 1000000.0,
           (t2 - t1) / (double)CLOCKS_PER_SEC);

    return 0;
}```
I haven't compiled that, so I can't guarantee there are no typos, but the concept should be OK.

--
Mats

3. Sorry for the maybe off-topic question, but matsp, do you know if there's a better way to do high-precision timing than gettimeofday() (talking about Linux)? I mean, suppose I want a thread to run exactly every 20 msec (and it runs for about 0.1-1.0 msec), but I also want it to sleep() in between. I tried signals (setitimer()), but they are pretty imprecise. Right now this thread is given SCHED_FIFO, static priority 1, and it does a nanosleep() of 0.5 msec, with gettimeofday() + "calc_diff()" after every sleep. Is there a better way? (Sleeping the whole interval at once gives more "offset".) And sorry once more if this is off-topic.
EDIT: oh yes, I raised HZ to 1000.

4. Does clock() count the time needed for data to load from main memory into the CPU?
Or maybe a better question: if there is no I/O activity from the hard disk or keyboard/mouse, will clock() and gettimeofday() show the same time?

5. Originally Posted by C_ntua
Does clock() count the time needed for data to load from main memory into the CPU?
Or maybe a better question: if there is no I/O activity from the hard disk or keyboard/mouse, will clock() and gettimeofday() show the same time?
Only if it's the only program/thread running. But the kernel is running all the time, so they should never be exactly the same. (I could be wrong.)

6. Originally Posted by rasta_freak
Only if it's the only program/thread running. But the kernel is running all the time, so they should never be exactly the same. (I could be wrong.)
If you run an application that doesn't do anything but calculations, clock() and gettimeofday() would be fairly close. Any time the application starts waiting for something [disk to deliver data, user to hit enter, etc, etc], clock() will "lag behind" compared to the gettimeofday(). Of course, clock() can also go faster than gettimeofday() if you have multiple threads in the same process.

--
Mats

7. Originally Posted by rasta_freak
Sorry for maybe offtopic question, but matsp, do you know maybe if there's a better way for high-precision timing than gettimeofday() (talking about Linux)? I mean, suppose I want thread to run exactly every 20 msec (and it's running for about 0.1-1.0 msecs), but i also want it to sleep() in between. I tryed signals (setitimer()) but they are pretty unprecise. Right now this thread is given SCHED_FIFO, static priority 1, and it does nanosleep() of 0.5 msec, and doing gettimeofday() + "calc_diff()" after every sleep. Is there a better way? (Sleeping at once gives more "offset"). And sorry once more if this is offtopic.
EDIT: oh yes, i raised HZ to 1000.
Linux is not a real-time OS, so if you want something to run with high precision in terms of timing, you probably shouldn't be using Linux.

--
Mats

8. Originally Posted by matsp
Linux is not a real-time OS, so if you want something to run with high precision in terms of timing, you probably shouldn't be using Linux.

--
Mats
I know that. It's not really "mission critical" to get precisely 20 ms; I was just wondering about any possible way to get as close as possible. I was reading something about POSIX high-precision timers, but couldn't find any deeper info (or an example). I'd be really grateful for any link or any docs about that subject (except the kernel sources, if possible).

9. Originally Posted by rasta_freak
I know that. It's not really "mission critical" to get precisely 20 ms; I was just wondering about any possible way to get as close as possible. I was reading something about POSIX high-precision timers, but couldn't find any deeper info (or an example). I'd be really grateful for any link or any docs about that subject (except the kernel sources, if possible).
Well, the real problem isn't the precision of the timer [well, 1000Hz means that the timer tick is every millisecond, so you would be running within 1 millisecond off the target time] - it is that the scheduler doesn't switch to your thread as soon as the timer elapses.

--
Mats

10. Originally Posted by matsp
If you run an application that doesn't do anything but calculations, clock() and gettimeofday() would be fairly close. Any time the application starts waiting for something [disk to deliver data, user to hit enter, etc, etc], clock() will "lag behind" compared to the gettimeofday(). Of course, clock() can also go faster than gettimeofday() if you have multiple threads in the same process.

--
Mats
But if you have multiple threads, wouldn't they be executed sequentially (on a 1-core CPU), so their CPU time sums up the same as a single-threaded process?

11. Originally Posted by matsp
Well, the real problem isn't the precision of the timer [well, 1000Hz means that the timer tick is every millisecond, so you would be running within 1 millisecond off the target time] - it is that the scheduler doesn't switch to your thread as soon as the timer elapses.

--
Mats
That's what I understood - the HZ value affects sleep() & usleep(), but nanosleep() uses high-precision timers (if available) internally. It's just that I couldn't find any deeper info on how to use these timers myself.

EDIT: except, of course, in the glibc sources (again!!!)

12. Originally Posted by rasta_freak
But if you have multiple threads, wouldn't they be executed sequentially (on a 1-core CPU), so their CPU time sums up the same as a single-threaded process?
Correct, it only adds up faster if there are multiple processors to run threads in parallel.

--
Mats

13. Originally Posted by rasta_freak
That's what I understood - the HZ value affects sleep() & usleep(), but nanosleep() uses high-precision timers (if available) internally. It's just that I couldn't find any deeper info on how to use these timers myself.
Not that I'm aware of, but I've never tried to do precise sleeping in Linux, because it is futile - sooner or later some kernel part will lock the scheduler for much longer than you'd like [for example when de-allocating a large kernel memory block] - unless you specifically use a RT version of Linux that has changes to handle RT tasks outside of the kernel itself.

You will probably get decent precision by calculating the next sleep position with gettimeofday() and using nanosleep() - but I wouldn't guarantee that it's any better than what you have [it will, however, use less CPU time, so it will slightly improve the chances that your task is the only one ready to run when the timer expires, rather than "interfering" with all other tasks].

--
Mats

14. Originally Posted by matsp
Not that I'm aware of, but I've never tried to do precise sleeping in Linux, because it is futile - sooner or later some kernel part will lock the scheduler for much longer than you'd like [for example when de-allocating a large kernel memory block] - unless you specifically use a RT version of Linux that has changes to handle RT tasks outside of the kernel itself.

You will probably get decent precision by calculating the next sleep position with gettimeofday() and using nanosleep() - but I wouldn't guarantee that it's any better than what you have [it will, however, use less CPU time, so it will slightly improve the chances that your task is the only one ready to run when the timer expires, rather than "interfering" with all other tasks].

--
Mats
Yes, that's what I'm using now: nanosleep() for 0.5 ms, gettimeofday(), calc_diff(). It's the best I came up with. But still, the worst case is a 4 ms lag (happens once or twice per minute, approx.), and I'm just looking to improve it.

15. Originally Posted by rasta_freak
Yes, that's what I'm using now: nanosleep() for 0.5 ms, gettimeofday(), calc_diff(). It's the best I came up with. But still, the worst case is a 4 ms lag (happens once or twice per minute, approx.), and I'm just looking to improve it.
The 4ms lag is probably what I described, a system-wide scheduler lock for some "big job" in the kernel, where the kernel needs to ensure that the entire task is complete before it releases the lock. There are probably other ways to solve that sort of problem when writing the kernel, but the kernel was NEVER written to be good at real-time performance.

My suggestion was to sleep a longer period to reduce the risk of colliding with other tasks - because if you "interrupt" the system 2000 times a second (0.5 ms), you will use up a fair amount of CPU time just doing that.

--
Mats