Thread: millisecond precision measuring in C

  1. #1
    Registered User
    Join Date
    Mar 2012
    Posts
    2

    millisecond precision measuring in C

    What techniques / methods exist for getting sub-millisecond precision timing data in C or C++, and what precision and accuracy do they provide? I'm looking for methods that don't require additional hardware. The application involves waiting for approximately 50 microseconds +/- 1 microsecond while some external hardware collects data.

    Thanks for any help (-:

  2. #2
    Registered User
    Join Date
    Mar 2009
    Posts
    344
    There's nothing in the C standard that's going to give you what you want, so the answer will vary depending on what you're programming on. First off, if you're using a general purpose OS, you might want to reconsider if +/-1 uSec resolution is required because many won't be able to guarantee it.

    You can look into doing a simple busy loop continually checking the results of e.g. gettimeofday() or QueryPerformanceCounter(). Even if these return values in uSec or nSec it doesn't mean they have 1 uSec resolution, though. You could also look into using timers of some sort (either through OS functions or hooking into timer interrupts explicitly).

    But again, without knowing more detail it's hard to give a real answer.
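    The busy-loop idea above can be sketched in POSIX C. This is a minimal sketch, not a guaranteed-accurate wait: it uses clock_gettime() with CLOCK_MONOTONIC rather than gettimeofday() (the helper name busy_wait_us is my own), and it burns a full core while spinning.

    ```c
    #include <stdio.h>
    #include <time.h>

    /* Spin until at least `usec` microseconds have elapsed according to
       CLOCK_MONOTONIC. Returns the actually-measured elapsed time. */
    static long busy_wait_us(long usec)
    {
        struct timespec start, now;
        long elapsed;
        clock_gettime(CLOCK_MONOTONIC, &start);
        do {
            clock_gettime(CLOCK_MONOTONIC, &now);
            elapsed = (now.tv_sec - start.tv_sec) * 1000000L
                    + (now.tv_nsec - start.tv_nsec) / 1000L;
        } while (elapsed < usec);
        return elapsed;
    }

    int main(void)
    {
        long waited = busy_wait_us(50);   /* the OP's ~50 us wait */
        printf("spun for %ld us\n", waited);
        return 0;
    }
    ```

    Even this can overshoot if the scheduler preempts the loop mid-spin, which is exactly the caveat about general purpose OSes above.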

  3. #3
    Registered User
    Join Date
    Dec 2011
    Posts
    795
    You could use the routines in <sys/timex.h>

    Or, find your CPU cycles and then calculate uSec by using your processor's hertz rating.
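    The cycle-counting approach might look like this on x86 with GCC or Clang. The 3 GHz figure is purely illustrative; you would have to determine your own CPU's actual TSC rate, and this assumes a CPU with an invariant TSC.

    ```c
    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>   /* __rdtsc(), GCC/Clang on x86 */

    int main(void)
    {
        uint64_t start = __rdtsc();

        /* ... the work being timed ... */
        volatile int sink = 0;
        for (int i = 0; i < 1000; i++)
            sink += i;

        uint64_t cycles = __rdtsc() - start;

        /* Convert cycles to microseconds using an ASSUMED 3 GHz rate. */
        double usec = (double)cycles / 3000.0;
        printf("%llu cycles ~ %.3f us at an assumed 3 GHz\n",
               (unsigned long long)cycles, usec);
        return 0;
    }
    ```

    On older CPUs the TSC rate follows frequency scaling, so this conversion can be wrong unless you pin the clock or use the invariant TSC.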

  4. #4
    Registered User
    Join Date
    Mar 2011
    Posts
    546
    On Linux/Unix, use clock_gettime().
    clock_gettime(3): clock/time functions - Linux man page
    On Windows, use QueryPerformanceFrequency()/QueryPerformanceCounter().

    On processors that have one, these functions read the CPU's high-resolution timer register, giving more or less nanosecond resolution.
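    On the Linux side, clock_getres() will tell you what granularity the clock claims, and two back-to-back clock_gettime() calls give a feel for its real-world overhead. A minimal sketch, using CLOCK_MONOTONIC:

    ```c
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec res, a, b;

        /* Ask the OS what granularity the clock claims to have. */
        clock_getres(CLOCK_MONOTONIC, &res);
        printf("reported resolution: %ld ns\n", res.tv_nsec);

        /* Time two back-to-back reads of the clock itself. */
        clock_gettime(CLOCK_MONOTONIC, &a);
        clock_gettime(CLOCK_MONOTONIC, &b);
        long ns = (b.tv_sec - a.tv_sec) * 1000000000L
                + (b.tv_nsec - a.tv_nsec);
        printf("back-to-back calls: %ld ns apart\n", ns);
        return 0;
    }
    ```

    As the next posts point out, a 1 ns reported resolution does not mean 1 ns accuracy.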

  5. #5
    Registered User ledow's Avatar
    Join Date
    Dec 2011
    Posts
    435
    Please note: Just because something has nanosecond resolution does NOT mean it is accurate to the nanosecond and/or that you can call it millions of times a second and get reliably-spaced and perfectly timed actions occurring on those intervals.

    - Compiler warnings are like "Bridge Out Ahead" warnings. DON'T just ignore them.
    - A compiler error is something SO stupid that the compiler genuinely can't carry on with its job. A compiler warning is the compiler saying "Well, that's bloody stupid but if you WANT to ignore me..." and carrying on.
    - The best debugging tool in the world is a bunch of printf()'s for everything important around the bits you think might be wrong.

  6. #6
    Registered User
    Join Date
    Jan 2009
    Posts
    1,485
    Quote Originally Posted by ledow View Post
    Please note: Just because something has nanosecond resolution does NOT mean it is accurate to the nanosecond and/or that you can call it millions of times a second and get reliably-spaced and perfectly timed actions occurring on those intervals.
    Exactly, in a typical desktop OS the kernel has much more to do than to attend your process.

  7. #7
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    I believe there is a big difference, in terms of resolution, between passive and active timing.

    By "active timing", I mean using a callback-style function ("do this in 100 milliseconds") or a sleep()-style function. It will probably not be accurate beyond a latency of 1/100th of a second (10 ms), because that is the latency of the scheduler. What that means is, this is the smallest interval at which the OS can promise anything to user-space applications.

    By "passive timing", I mean using a "get the time now" type function at points A and B. The difference between them might well have microsecond accuracy, because it can be reported very simply from an internal count of processor ticks, driven by the clock frequency, which is constant and very, very high. However, the value of this is theoretical, because it all takes place within the context of scheduler latency. E.g., a duration of 2.5 ms might be accurately measured, but your program is still stuck running at the latency of the scheduler.

    For example, if passive timing functions really are more accurate, a clever person might say, "Well, I will write my own callback timer" by calling "get the time now" in a tight loop until X microseconds have passed. The most obvious problem with this is that it amounts to an active sleep, which requires the processor to do nothing but count (very, very bad; this locks up a single core). The perhaps less obvious problem is that the events which hinge on this measurement are still managed by the scheduler and hence subject to its latency.

    So, you can in theory observe things at a much higher resolution than you can cause things to happen. You can observe this if you passive time a nanosleep() or callback style function:

    Code:
    struct timespec a, b, req = { 0, 1000000 };  /* ask for 1 ms */

    clock_gettime(CLOCK_MONOTONIC, &a);          /* point A */
    nanosleep(&req, NULL);
    clock_gettime(CLOCK_MONOTONIC, &b);          /* point B */
    When you subtract A from B, the difference will be a lot more than 1 millisecond. That (passive) measurement is accurate, indicating that the sleep call is not.

    If you keep increasing the requested sleep duration, you can find the effective latency -- at around 10 ms, B-A will be very close to the amount of time you asked for. So when you are trying to make something happen at a given interval, don't bother asking for durations that are not a multiple of that latency.
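    That sweep can be written out directly: request nanosleep() durations from 1 ms upward and passively time each one. A minimal sketch, assuming CLOCK_MONOTONIC and a hypothetical now_us() helper of my own:

    ```c
    #include <stdio.h>
    #include <time.h>

    /* Current CLOCK_MONOTONIC time in microseconds. */
    static long now_us(void)
    {
        struct timespec t;
        clock_gettime(CLOCK_MONOTONIC, &t);
        return t.tv_sec * 1000000L + t.tv_nsec / 1000L;
    }

    int main(void)
    {
        /* Request sleeps from 1 ms up; compare asked-for vs. measured. */
        for (long ms = 1; ms <= 16; ms *= 2) {
            struct timespec req = { 0, ms * 1000000L };
            long a = now_us();           /* point A */
            nanosleep(&req, NULL);
            long b = now_us();           /* point B */
            printf("asked %2ld ms, measured %.3f ms\n",
                   ms, (b - a) / 1000.0);
        }
        return 0;
    }
    ```

    Where the measured column stops diverging from the requested column is roughly the effective scheduler latency on that system.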
    Last edited by MK27; 03-09-2012 at 07:31 AM.
    C programming resources:
    GNU C Function and Macro Index -- glibc reference manual
    The C Book -- nice online learner guide
    Current ISO draft standard
    CCAN -- new CPAN like open source library repository
    3 (different) GNU debugger tutorials: #1 -- #2 -- #3
    cpwiki -- our wiki on sourceforge

  8. #8
    Registered User
    Join Date
    Oct 2006
    Posts
    3,445
    Quote Originally Posted by MK27 View Post
    this is the smallest interval at which the OS can promise anything to user space applications.
    Unless you can convince the operating system to provide real-time scheduling for your process. Windows has an option for that in the Task Manager, and a Linux kernel can be built to provide real-time scheduling. It's not uncommon to do this for audio and video applications.
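    On Linux, the request looks roughly like this. A sketch, not a complete recipe: the priority value 50 is arbitrary, and the call typically fails without root privileges or an rtprio limit granted to the user.

    ```c
    #include <errno.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Ask for real-time FIFO scheduling for this process.
           Usually requires root or CAP_SYS_NICE. */
        struct sched_param sp = { .sched_priority = 50 };
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            fprintf(stderr, "sched_setscheduler: %s\n", strerror(errno));
            return 1;
        }
        puts("running under SCHED_FIFO");
        return 0;
    }
    ```

    Even under SCHED_FIFO the kernel can still preempt you (interrupts, higher-priority RT tasks), so this improves the odds rather than guaranteeing a +/-1 us wait.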

  9. #9
    Registered User
    Join Date
    Jan 2009
    Posts
    1,485
    MK, I think you can do better than 10 ms for high-priority processes, which I guess is how, for example, low-latency audio works in a "soft real-time" mode. There are some interfaces capable of ~1.5 ms round-trip latency, i.e. the delay for an audio signal to go into the interface, pass through the system, and hit the output again. That will of course vary somewhat between OSes, drivers and so on.

    Re passive timing, a new complication is also that the clock rate can vary depending on workload on newer Intel CPUs, aka Turbo Boost.

    Edit: Still, the argument definitely stands for µs accuracy.
    Last edited by Subsonics; 03-09-2012 at 07:44 AM.

  10. #10
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    Quote Originally Posted by Subsonics View Post
    Re, passive timing, a new complication is also that the clock can vary depending on work load on new intel cpu's aka turbo boost.
    Okay, but I'd equate that with slowing time down at a quantum level, so we should all still be safe. If you can't time time, then where are you?

  11. #11
    Registered User
    Join Date
    Jan 2009
    Posts
    1,485
    Quote Originally Posted by MK27 View Post
    If you can't time time, then where are you?
    Well, you could still count "ticks" and skip converting them to time. That would still provide a common ground for comparisons and be more accurate, but unfortunately not tied to absolute time.

