They do use the system clock, or ticks, which amounts to the same thing, since the system clock runs on ticks after it is first set from the hardware clock. So, if you have a way to check both, you could compare them at the end of the day. At the level of seconds, there probably won't be any difference.
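For instance, on Linux you can read the hardware clock directly through /dev/rtc and compare it to time() yourself. A rough sketch, assuming the RTC is kept in UTC, that you have permission to open the device (it may be /dev/rtc0 on some systems), and using the glibc extension timegm():
Code:
#define _DEFAULT_SOURCE        /* for timegm() */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>
#include <sys/ioctl.h>
#include <linux/rtc.h>

int main(void) {
    int fd = open("/dev/rtc", O_RDONLY);
    if (fd < 0) { perror("open /dev/rtc"); return 1; }

    struct rtc_time rt;
    if (ioctl(fd, RTC_RD_TIME, &rt) < 0) { perror("RTC_RD_TIME"); return 1; }
    close(fd);

    /* struct rtc_time uses the same field conventions as struct tm;
       assuming the RTC is on UTC, convert it with timegm() */
    struct tm tm = {0};
    tm.tm_sec  = rt.tm_sec;
    tm.tm_min  = rt.tm_min;
    tm.tm_hour = rt.tm_hour;
    tm.tm_mday = rt.tm_mday;
    tm.tm_mon  = rt.tm_mon;
    tm.tm_year = rt.tm_year;
    time_t hw  = timegm(&tm);      /* hardware clock reading */
    time_t sys = time(NULL);       /* system (software) clock */

    printf("system - hardware = %ld seconds\n", (long)(sys - hw));
    return 0;
}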
This is because ticks are essentially hardware timed. On Linux (and I presume other OSes as well) "jiffies per second" is determined at boot time -- it's tied to the processor speed, because it relates how long a single operation takes to real time on the clock (n.b., if the RTC uses a crystal oscillator like a quartz watch, it runs at 2^15 cycles per second). So, as Adak points out, ticks may drift, but they do so for the exact same (physical) reasons a quartz watch will (since it is not as accurate as an atomic clock, the final arbiter of what we consider time), and to the same negligible extent. I'm no engineer or physicist, but I think I have a grip on this point -- although the frequency of the processor and the frequency of the RTC are not the same, they are probably equivalently "stable".
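If you want to see what your own box reports, the tick rate visible to userspace and the resolution of the software clock can both be queried -- something like this (POSIX, so it should work on Linux and most Unixes; older glibc may need -lrt for clock_getres):
Code:
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    /* ticks per second as seen from userspace (commonly 100) */
    long clk_tck = sysconf(_SC_CLK_TCK);
    printf("_SC_CLK_TCK: %ld ticks/second\n", clk_tck);

    /* resolution of the system (software) clock */
    struct timespec res;
    if (clock_getres(CLOCK_REALTIME, &res) == 0)
        printf("CLOCK_REALTIME resolution: %ld ns\n", res.tv_nsec);
    return 0;
}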
Anyway, here's a sort of crude test:
Code:
#include <stdio.h>
#include <sys/timex.h>
#include <time.h>

int main(void) {
    struct ntptimeval now;
    long start, umark, dif;
    time_t RTCstart = time(NULL), RTCnow, RTClast = RTCstart;
    int count;

    /* starting reading from the high-res (ntp) clock */
    ntp_gettime(&now);
    start = now.time.tv_sec;
    umark = now.time.tv_usec;

    while (1) {
        ntp_gettime(&now);
        RTCnow = time(NULL);
        /* count becomes non-zero when time() ticks over to a new second */
        count = RTCnow - RTClast;
        if (count) {
            /* microseconds elapsed on the high-res clock since the last tick */
            dif = now.time.tv_usec - umark;
            printf("%ld seconds (RTC: %d) %ld/1000000\n",
                   (long)(now.time.tv_sec - start),
                   (int)(RTCnow - RTCstart), dif);
            umark = now.time.tv_usec;
        }
        RTClast = RTCnow;
    }
    return 0;
}
This compares time() (from the system software clock) to ntp_gettime() (a high-res timer). Once they align, the output is not surprising:
4 seconds (RTC: 5) 0/1000000
5 seconds (RTC: 6) 0/1000000
6 seconds (RTC: 7) 0/1000000
7 seconds (RTC: 8) 0/1000000
8 seconds (RTC: 9) -1/1000000
9 seconds (RTC: 10) 1/1000000
10 seconds (RTC: 11) -1/1000000
11 seconds (RTC: 12) 1/1000000
12 seconds (RTC: 13) 0/1000000
14 seconds (RTC: 14) -999999/1000000
14 seconds (RTC: 15) 999999/1000000
15 seconds (RTC: 16) 0/1000000
16 seconds (RTC: 17) 0/1000000
17 seconds (RTC: 18) 0/1000000
No more than one microsecond difference. It's not really the RTC, though -- as said, it's the system clock set from the RTC. There are other reasons this test might be suspect (maybe someone will raise them), but it demonstrates something about what's available in C on a stock Linux system.
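Strictly, ntp_gettime() is a glibc/Linux extension rather than ISO C; the more portable way to get the same sort of resolution is POSIX clock_gettime(), which is what you'd normally use to time an interval. A sketch (link with -lrt on older glibc):
Code:
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);   /* monotonic: immune to clock resets */

    /* ... work being timed ... */

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("elapsed: %.9f seconds\n", elapsed);
    return 0;
}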
Apparently, most newer machines also have something called the HPET, a high-res hardware timer that you may be able to access through a device driver. I've never looked into this and can't say any more about it.
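That said, on Linux you can at least check whether the kernel is using the HPET (or the TSC, or something else) as its clock source by reading sysfs -- a quick sketch, assuming the usual /sys layout on a reasonably modern kernel:
Code:
#include <stdio.h>

int main(void) {
    /* standard path on modern Linux kernels; older ones may not have it */
    FILE *f = fopen("/sys/devices/system/clocksource/clocksource0/current_clocksource", "r");
    if (!f) { perror("fopen"); return 1; }
    char buf[64];
    if (fgets(buf, sizeof buf, f))
        printf("current clocksource: %s", buf);
    fclose(f);
    return 0;
}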
Anyway: if you want to measure several weeks to the millisecond, then you'll have to run some tests! And then you might need to hook an atomic clock up to the computer (or set the clock periodically from one online). But if you are just talking about stuff taking place within a second or an hour or even a day, I think those standard hi-res timers are very, very accurate.
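If the machine runs an NTP daemon, the same ntp_gettime() call used above will also report the kernel's own estimate of how far off the clock might be, which is one way to judge whether "milliseconds over several weeks" is realistic -- a sketch:
Code:
#include <stdio.h>
#include <sys/timex.h>

int main(void) {
    struct ntptimeval nt;
    int state = ntp_gettime(&nt);

    /* esterror/maxerror are in microseconds; the return value is TIME_OK
       when the clock is synchronized, TIME_ERROR when it is not */
    printf("estimated error: %ld us, maximum error: %ld us, state: %d\n",
           nt.esterror, nt.maxerror, state);
    return 0;
}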
[EDIT: Digging a little deeper, it looks to me like on Linux (and probably everything else) the software timers don't count "jiffies per second" directly; they use system ticks, which come from hardware timer interrupts.]