clock ticks vs time.h

This is a discussion on clock ticks vs time.h within the C Programming forums, part of the General Programming Boards category.

  1. #1
    Registered User
    Join Date
    Jan 2005
    Posts
    108

    clock ticks vs time.h

    I just want to confirm: time.h (as in, true date & time services) doesn't use clock ticks to calculate the current time, right?

    For example, getting the CPU ticks from the Linux kernel (or Windows or Mac), and trying to convert that into true time, will eventually fail because of synchronisation differences between that and the actual system time, right?

  2. #2
    Registered User
    Join Date
    Sep 2006
    Posts
    8,868
    Quote Originally Posted by underthesun View Post
    I just want to confirm: time.h (as in, true date & time services) doesn't use clock ticks to calculate the current time, right?

    For example, getting the CPU ticks from the Linux kernel (or Windows or Mac), and trying to convert that into true time, will eventually fail because of synchronisation differences between that and the actual system time, right?
    There is no such thing as "true time" to a computer. It either keeps track of clock sweeps or checks with outside sources, which could include something as accurate as an atomic clock. Both Windows and Linux will sync up with a reference clock, if you allow them to and are connected to the internet.

    Your PC doesn't "try" to convert it - it does convert the counted sweeps of the clock into seconds, and portions thereof. You make it sound like it's failing at this job, somehow. It's not.

    The problem is that your PC is not an atomic clock, and will "drift" away from keeping perfect time, but that has nothing to do with clock ticks in C somehow failing. All clocks either lose or gain time, and need to be adjusted a bit.

    Are you referring to interval timing? Where you time 3 events, and after adding up their three elapsed times, find that they have fallen a bit behind "wall clock" time?

    That happens because it takes several clock sweeps to set up the timer (depending on your hardware, many clock sweeps might be needed) and report that data back to your program. These clock sweeps are just "lost", since they happen before the count is delivered. So it's:

    1) Generate the interrupt to get the starting clock sweeps - losing some of them, before that number is given. Starting clock sweep data is in your variable now.

    2) Your program does something for awhile.

    3) Then generates another system interrupt to get the ending clock sweep data - again losing some cycles because the OS is not a real time OS, and has many processes running at the same time.

    Cycles are lost in #1 and #3, due to limitations of your hardware and your OS. After several such intervals being timed, you'll notice that the sum of your intervals is less than the observed "wall clock" time that has passed.
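    A minimal sketch of the effect described above, using the POSIX clock_gettime()/CLOCK_MONOTONIC interface (not the poster's exact setup, just an illustration): three ~0.1 s "events" are timed individually, and their sum is compared to the wall-clock span of the whole run.

    ```c
    #include <stdio.h>
    #include <time.h>

    /* Elapsed seconds between two timespecs. */
    static double elapsed(struct timespec a, struct timespec b) {
        return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    }

    int main(void) {
        struct timespec wall_start, wall_end, t0, t1;
        struct timespec nap = {0, 100000000};   /* the "event": ~0.1 s */
        double sum = 0.0;

        clock_gettime(CLOCK_MONOTONIC, &wall_start);
        for (int i = 0; i < 3; i++) {
            clock_gettime(CLOCK_MONOTONIC, &t0);   /* some cycles pass before this returns */
            nanosleep(&nap, NULL);
            clock_gettime(CLOCK_MONOTONIC, &t1);   /* ...and before this one, too */
            sum += elapsed(t0, t1);
        }
        clock_gettime(CLOCK_MONOTONIC, &wall_end);

        printf("sum of intervals: %f s\n", sum);
        printf("wall clock:       %f s\n", elapsed(wall_start, wall_end));
        /* The wall-clock figure comes out a little larger, because the
           timestamp calls themselves sit outside the timed intervals. */
        return 0;
    }
    ```

    The gap here is tiny per interval, but it accumulates: the loop-overhead cycles between t1 of one interval and t0 of the next are counted by the wall clock yet never land in any interval's total.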

    There are ways to help mitigate that loss of time.

    Post up your specifics and your code, and we'll get into it.

  3. #3
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    They do use the system clock, or ticks, which amounts to the same thing, since the system clock runs on ticks after it is first set from the hardware clock. So, if you have a way to check both, you could compare them at the end of the day. At the resolution of seconds, there probably won't be any difference.

    This is because ticks are essentially hardware timed. On linux (and I presume other OS's as well) "jiffies per second" is determined at boot time -- it will coincide with the processor speed, because it has to do with relating how long a single operation takes in real time on the clock (nb, if the RTC is a crystal oscillator like in a quartz watch, it works at 2^15 cycles per second). So as Adak points out, ticks may drift, but they do so for the exact same (physical) reasons a quartz watch will (since it is not as accurate as an atomic clock, the final arbiter of what we consider time), and to the same negligible extent. I'm no engineer or physicist, but I think I have a grip on this point -- although the frequency of the processor and the frequency of the RTC are not the same, they are probably equivalently "stable".
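    On a POSIX system you can query the tick rate user space actually sees, as a quick sanity check on the numbers above (a small sketch; the values printed depend on your kernel configuration):

    ```c
    #include <stdio.h>
    #include <time.h>      /* CLOCKS_PER_SEC */
    #include <unistd.h>    /* sysconf, _SC_CLK_TCK */

    int main(void) {
        /* Kernel tick rate as exposed to user space (commonly 100 on Linux,
           independent of the jiffy rate the kernel was actually built with). */
        long ticks = sysconf(_SC_CLK_TCK);
        printf("ticks per second (_SC_CLK_TCK): %ld\n", ticks);

        /* Units of clock(); POSIX pins this at 1000000 regardless of hardware. */
        printf("CLOCKS_PER_SEC: %ld\n", (long)CLOCKS_PER_SEC);
        return 0;
    }
    ```

    Note that neither value is the processor frequency -- they are software-visible tick rates, which is exactly why the drift discussed here is a property of the clock hardware, not of the C interfaces.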

    Anyway, here's a sort of crude test:
    Code:
    #include <stdio.h>
    #include <sys/timex.h>	/* ntp_gettime() -- Linux-specific */
    #include <time.h>
    
    int main(void) {
    	struct ntptimeval now;
    	long start, umark, dif;
    	time_t RTCstart = time(NULL), RTCnow, RTClast = RTCstart;
    
    	ntp_gettime(&now);
    	start = now.time.tv_sec;
    	umark = now.time.tv_usec;
    	while (1) {
    		ntp_gettime(&now);
    		RTCnow = time(NULL);
    		if (RTCnow != RTClast) {	/* time() has rolled over a second */
    			dif = now.time.tv_usec - umark;
    			printf("%ld seconds (RTC: %d) %ld/1000000\n",
    			       now.time.tv_sec - start, (int)(RTCnow - RTCstart), dif);
    			umark = now.time.tv_usec;
    		}
    		RTClast = RTCnow;
    	}
    
    	return 0;	/* never reached */
    }
    This compares time() (from the system software clock) to ntp_gettime() (a high-res timer). Once the two align on second boundaries, not surprisingly:

    4 seconds (RTC: 5) 0/1000000
    5 seconds (RTC: 6) 0/1000000
    6 seconds (RTC: 7) 0/1000000
    7 seconds (RTC: 8) 0/1000000
    8 seconds (RTC: 9) -1/1000000
    9 seconds (RTC: 10) 1/1000000
    10 seconds (RTC: 11) -1/1000000
    11 seconds (RTC: 12) 1/1000000
    12 seconds (RTC: 13) 0/1000000
    14 seconds (RTC: 14) -999999/1000000
    14 seconds (RTC: 15) 999999/1000000
    15 seconds (RTC: 16) 0/1000000
    16 seconds (RTC: 17) 0/1000000
    17 seconds (RTC: 18) 0/1000000


    No more than one microsecond of difference. It's not really the RTC though; as said, it's the system clock set from the RTC. There are other reasons this test might be suspect (maybe someone will raise them) but it demonstrates something about what's available on a typical Linux system.

    Apparently, most new processors also have something called the HPET, a high-res hardware timer that you may be able to access through a device driver. I've never looked into this and can't say any more about it.

    Anyway: if you want to measure several weeks to the millisecond, then you'll have to run some tests! And then you might need to hook an atomic clock up to the computer (or set it periodically from one online). But if you are just talking about stuff taking place within a second or an hour or even a day, I think those standard hi-res timers are very, very accurate.
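    Since ntp_gettime() is Linux-specific, here's a more portable sketch of the same comparison using plain POSIX clock_gettime(): time() and CLOCK_REALTIME read the same system software clock, so their whole-second values should agree (give or take the instant the second rolls over between the two calls).

    ```c
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec ts;

        /* High-res reading of the system clock... */
        clock_gettime(CLOCK_REALTIME, &ts);
        /* ...and the coarse one-second reading of the same clock. */
        time_t coarse = time(NULL);

        long diff = (long)(coarse - ts.tv_sec);
        printf("clock_gettime: %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
        printf("time():        %ld (difference: %ld s)\n", (long)coarse, diff);
        return 0;
    }
    ```

    The difference should be 0, or occasionally 1 if the second ticks over between the two calls -- same underlying clock, just two resolutions of access.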

    [EDIT: Digging a little deeper it looks to me like on linux (and probably everything else) the software timers don't use jiffies per second, they do use system ticks, which are from hardware timer interrupts.]
    Last edited by MK27; 01-26-2010 at 09:27 AM.

  4. #4
    Registered User
    Join Date
    Jan 2005
    Posts
    108
    That was very informative, both of you. Thanks.

    I actually tested it on the platform I was working on, and it turns out that the difference, if there is any, is really minimal, almost negligible, like MK27 found. So yeah, I guess that was the answer to the question.

    I was just wondering why my sound samples get played at a slightly higher frequency than intended... turns out that the other PC I had plays back sound at a slightly different frequency than the one its sound card records at...

