Thread: Outside influences on clock cycles? (clock_t)

  1. #1
    Registered User
    Join Date
    Jan 2009
    Posts
    6

    Outside influences on clock cycles? (clock_t)

    Hi all,

    I'm using clock cycles to count how much time some function calls take, but the results seem off. Does clock() count just the cycles used by my C program, or all the cycles elapsed since it started, period?

    The reason I think there's a problem: I'm timing a function foo, which is a deterministic processing of some struct. Then I'll modify that struct with a function called bar and run foo again, counting the time for bar + foo together. There are cases where bar doesn't change said struct at all, yet the time reported to run foo is greater than the time to run bar + foo together.

    Some sample code:

    Code:
    #include <stdio.h>
    #include <time.h>
    
    int main( void ){
    
      // other variables etc.
      clock_t start, mid, end;
      double mid_time, end_time;
    
      start = clock();
      foo( thing );
      mid = clock();
      bar( thing );
      foo( thing );
      end = clock();
    
      mid_time = (mid - start)/( (double) CLOCKS_PER_SEC );
      end_time = (end - mid)/( (double) CLOCKS_PER_SEC );
    
      // print results etc. etc.
    
      return 0;
    }
    If it's relevant, I'm running Ubuntu 8.04.1 in vmware ( long story ). I'm hoping this is the right forum for this since really I need to know how clock() etc. is implemented.

    Side question - is there a smarter way to do this?
    I've never posted on a programming forum before, so please take pity on me if I wrote something silly :O

    Thanks

  2. #2
    Registered User C_ntua's Avatar
    Join Date
    Jun 2008
    Posts
    1,853
    A function call always takes some time. Execution jumps to the memory location where the function is, does whatever the function does, and then returns to the saved location. So that might be: save the return address, jump, set up local variables, run the code, return to the saved address. Several actions are done even if nothing is really executed. Even if the function is (assuming no optimizations by the compiler)
    Code:
    void wasteTime() {;}
    I don't think there is a better way to do this....
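    If you want to actually see that overhead, one rough sketch (assuming you compile without optimization, so the empty call isn't removed) is to make enough calls that the total rises well above clock()'s resolution:
    Code:
    #include <stdio.h>
    #include <time.h>
    
    void wasteTime(void) {;}
    
    int main(void)
    {
        long i;
        const long N = 100000000L;  /* enough calls to rise well above clock()'s tick */
        clock_t start, end;
    
        start = clock();
        for (i = 0; i < N; i++)
            wasteTime();
        end = clock();
    
        /* average cost of one call, in seconds */
        printf("~%g s per call\n", (end - start) / (double)CLOCKS_PER_SEC / N);
        return 0;
    }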
    Last edited by C_ntua; 01-07-2009 at 07:52 PM.

  3. #3
    and the hat of int overfl Salem's Avatar
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    39,660
    clock(), even in its most idealistic implementation, is like trying to time a bullet using a sundial. Even with many data samples, you're only going to get a statistical answer.

    clock() basically uses data provided by your OS scheduler to indicate how much CPU time the process has had. If the basic time slice is say 10ms, and you only end up using 7.542304 ms of it, I doubt that detail will make it into what clock() sees.

    > yet the time reported to run foo is greater than the time to run bar + foo together.
    Another partial list of influences.
    - running a function for the first time takes longer, because subsequent runs happen from cache, and not main memory

    > Does clock() give clock cycles being run by just my C program, or clock cycles since it started period?
    It measures CPU time, but the granularity is way way above a single CPU clock cycle.

    > Side question - is there a smarter way to do this?
    Look up gettimeofday() in the manual
    Search this forum for RDTSC
    Both will (to some extent) suffer from the same problem, which is basically that your program is a user-space program which is far removed from the hardware.
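    As a minimal sketch of the RDTSC route (this assumes an x86 CPU and GCC-style inline assembly, neither of which is stated anywhere in this thread):
    Code:
    #include <stdio.h>
    
    /* Read the CPU's time-stamp counter (x86, GCC inline asm). */
    static unsigned long long rdtsc(void)
    {
        unsigned int lo, hi;
        __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
        return ((unsigned long long)hi << 32) | lo;
    }
    
    int main(void)
    {
        unsigned long long t0 = rdtsc();
        /* ... code to time ... */
        unsigned long long t1 = rdtsc();
        printf("%llu cycles elapsed\n", t1 - t0);
        return 0;
    }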
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.

  4. #4
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    The typical granularity of clock() is 10ms - so you won't see smaller changes than that.
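
    One way to check the effective resolution on your own machine is to spin until clock() ticks over - a quick sketch:
    Code:
    #include <stdio.h>
    #include <time.h>
    
    /* Busy-wait until clock() changes, then report the step size. */
    int main(void)
    {
        clock_t t0, t1;
    
        t0 = clock();
        while ((t1 = clock()) == t0)
            ;   /* spin */
        printf("clock() advanced in a step of %f s\n",
               (t1 - t0) / (double)CLOCKS_PER_SEC);
        return 0;
    }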

    RDTSC is good, except that it's not measuring time used only by your application. There are also complications if you have multiple cores/CPUs, as they may not be in perfect sync, so the TSC on one processor may differ from the TSC on another.
    gettimeofday() also measures "wall-clock time", so it doesn't tell you how long the CPU was working on your task.

    Use clock() to determine what percentage of the time the CPU spent on YOUR task versus other tasks, and then use either gettimeofday() [which uses some high-resolution timer in the system to get fairly good precision] or RDTSC [which measures clock-ticks] to get a precise amount of time - but the precise time is only meaningful if the OS spent nearly 100% of the time in your task.
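
    Sketched out, that combination might look like this (foo() here is just a stand-in workload, not the OP's actual function):
    Code:
    #include <stdio.h>
    #include <time.h>
    #include <sys/time.h>
    
    /* Stand-in workload: just burn some CPU. */
    void foo(void)
    {
        volatile double x = 0.0;
        long i;
        for (i = 0; i < 50000000L; i++)
            x += 1.0;
    }
    
    int main(void)
    {
        struct timeval w0, w1;
        clock_t c0, c1;
        double wall, cpu;
    
        gettimeofday(&w0, NULL);
        c0 = clock();
        foo();
        c1 = clock();
        gettimeofday(&w1, NULL);
    
        /* wall-clock time from the high-resolution timer */
        wall = (w1.tv_sec - w0.tv_sec) + (w1.tv_usec - w0.tv_usec) / 1e6;
        /* CPU time the scheduler says this process got */
        cpu = (c1 - c0) / (double)CLOCKS_PER_SEC;
    
        printf("wall %f s, cpu %f s => ~%.0f%% of the time was ours\n",
               wall, cpu, wall > 0 ? 100.0 * cpu / wall : 0.0);
        return 0;
    }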

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  5. #5
    Registered User
    Join Date
    Jan 2009
    Posts
    6
    Quote Originally Posted by C_ntua View Post
    A function call always takes some time. Execution jumps to the memory location where the function is, does whatever the function does, and then returns to the saved location. So that might be: save the return address, jump, set up local variables, run the code, return to the saved address. Several actions are done even if nothing is really executed. Even if the function is (assuming no optimizations by the compiler)
    Yup - it was less a question of overhead associated with a function call, and more that the exact same deterministic function, called twice with the exact same input, takes fewer clock cycles the second time around.

    Quote Originally Posted by Salem View Post
    clock(), even in its most idealistic implementation, is like trying to time a bullet using a sundial. Even with many data samples, you're only going to get a statistical answer.

    > yet the time reported to run foo is greater than the time to run bar + foo together.
    Another partial list of influences.
    - running a function for the first time takes longer, because subsequent runs happen from cache, and not main memory
    Awesome, this is very helpful - I know how to proceed from here.

    I'll take a look at RDTSC even if the ~10ms granularity (or whatever it may be) turns out to be sufficient. Thank you all for the replies.
