Hi all,
I'm using clock() to count how much time some function calls take, but the results seem off. Does clock() count only the time used by my C program, or the time since the system started, period?
The reason I think there's a problem: I'm timing a function foo, which does some deterministic processing of a struct. Then I modify that struct with a function called bar and run foo again, timing bar + foo together. There are cases where bar doesn't change the struct at all, yet the reported time for foo alone is greater than the reported time for bar + foo together.
Some sample code:
If it's relevant, I'm running Ubuntu 8.04.1 in VMware (long story). I'm hoping this is the right forum for this, since really I need to know how clock() etc. is implemented.

Code:
#include <time.h>

int main( void )
{
    // other variables etc.
    clock_t start, mid, end;
    double mid_time, end_time;

    start = clock();
    foo( thing );
    mid = clock();

    bar( thing );
    foo( thing );
    end = clock();

    mid_time = (mid - start) / ( (double) CLOCKS_PER_SEC );
    end_time = (end - mid)   / ( (double) CLOCKS_PER_SEC );

    // print results etc. etc.
    return 0;
}
Side question - is there a smarter way to do this?
I've never posted on a programming forum before, so please take pity on me if I wrote something silly :O
Thanks