# measuring time of execution

• 12-01-2004
KevBin
Hi, I want to compare the execution times of a recursive and a non-recursive program!
I tried using the clock() function, but it seems my program is so fast that I always get the result 0.00000.
I searched the board but didn't find an answer on how to measure very fast executions.
I wonder if this is a correct way to solve it:

Code:

```
#include <stdio.h>
#include <time.h>

#define MAX 1000000

long fact(int);
long fact_rec(int);

int main(void)
{
    long res;
    int i;
    double duration;
    clock_t start, end;

    start = clock();
    for (i = 0; i < MAX; i++)
        res = fact(10);
    end = clock();

    /* Total CPU time in seconds; with MAX == 1000000 this is numerically
       equal to the average time per call in microseconds. */
    duration = (double)(end - start) / (double)CLOCKS_PER_SEC;
    printf("Duration is %lf us!\n", duration);
    return 0;
}

/* Iterative factorial. */
long fact(int n)
{
    int i;
    long res = 1;

    if (n <= 0)
        return 1;
    for (i = 2; i <= n; i++)
        res *= i;
    return res;
}

/* Recursive factorial. */
long fact_rec(int n)
{
    if (n <= 0)
        return 1;
    return n * fact_rec(n - 1);
}
```
I want to display time in microseconds!
I figure that if (double)(end-start)/(double)(CLOCKS_PER_SEC) is the time in seconds, all I need to do is multiply by 1,000,000 to get microseconds, and because of the loop I need to divide by 1,000,000 to get the average time per call. Since MAX is 1,000,000 the two factors cancel, so the printed value is already the average in microseconds. I get these results:

fact(10) => duration 0.45 us
fact_rec(10) => duration 1.842 us.
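
To make that arithmetic explicit, here is a minimal sketch of the same measurement with the unit conversion written out (the inner loop is a stand-in for fact(10); this is not the posted program, just an illustration of the cancellation):

```
#include <stdio.h>
#include <time.h>

#define MAX 1000000

int main(void)
{
    clock_t start, end;
    double total_sec, avg_us;
    long res = 1;
    int i, k;

    start = clock();
    for (i = 0; i < MAX; i++) {
        res = 1;
        for (k = 2; k <= 10; k++)   /* inline stand-in for fact(10) */
            res *= k;
    }
    end = clock();

    total_sec = (double)(end - start) / (double)CLOCKS_PER_SEC;
    /* total_sec * 1,000,000 us per second, divided by MAX calls:
       the two factors of 1,000,000 cancel when MAX is 1000000. */
    avg_us = total_sec * 1000000.0 / (double)MAX;
    printf("res=%ld, total %.6f s, average %.3f us per call\n",
           res, total_sec, avg_us);
    return 0;
}
```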

Is this a correct way of measuring time?
Interestingly, if I run the program with the recursive function fact_rec, I wait 2-3 seconds for it to finish, yet the result displayed is 1.842 us, which is less than two millionths of a second.
Why is that happening?

Thanks!
• 12-01-2004
Brian
http://www.wideman-one.com/gw/tech/d.../wintiming.htm

edit: one way you could measure it is to loop the process several million times and time the whole thing, then divide by however many times you repeated it.
• 12-01-2004
KevBin
Quote:

Originally Posted by Brian
one way you could measure it is to loop the process several million times and time the whole thing, then divide by however many times you repeated it.

I think I used that. The time is displayed in microseconds. My question is: is this the correct way, and should I maybe use a bigger number for the MAX constant? Is there any other reliable way to achieve this?
• 12-01-2004
swoopy
Quote:

I wait 2-3 seconds for the program to finish, yet the result displayed is 1.842 us, which is less than two millionths of a second.
Why is that happening?
Well, you're calculating how long it takes to do one fact_rec(), but you're still calling it 1,000,000 times. So you should be waiting 1.842 seconds for it to finish. Why it would take 3 seconds I don't know.
• 12-01-2004
VOX
I don't know exactly how you would do this, but another idea is to get the time at program start and the time at program end, then subtract start from end for the program's run time.
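
A minimal sketch of that idea using time() — note it only has whole-second resolution, so it can time the program as a whole but not a single fast call:

```
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t start, end;

    start = time(NULL);     /* wall-clock time at program start */
    /* ... work to be timed ... */
    end = time(NULL);       /* wall-clock time at program end */

    printf("Program ran for about %.0f seconds\n", difftime(end, start));
    return 0;
}
```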
• 12-02-2004
Salem
For those using gcc on some kind of Pentium processor:
http://cboard.cprogramming.com/showt...ighlight=rdtsc
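
The gist of that thread: read the Pentium's time-stamp counter before and after the code under test. A rough sketch for gcc on 32-bit x86 (this counts CPU cycles, not seconds, and assumes a processor with the rdtsc instruction):

```
#include <stdio.h>

/* Read the 64-bit time-stamp counter (gcc inline asm, 32-bit x86). */
static unsigned long long rdtsc(void)
{
    unsigned long long t;
    __asm__ __volatile__ ("rdtsc" : "=A" (t));
    return t;
}

int main(void)
{
    unsigned long long before, after;

    before = rdtsc();
    /* ... code to time ... */
    after = rdtsc();

    printf("%llu cycles\n", after - before);
    return 0;
}
```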

For those using Windows operating systems:
http://msdn.microsoft.com/library/de...ncecounter.asp
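
On Windows, the high-resolution performance counter described there is used along these lines (a sketch; ticks are converted to microseconds via the counter frequency):

```
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, start, end;

    QueryPerformanceFrequency(&freq);   /* ticks per second */
    QueryPerformanceCounter(&start);
    /* ... code to time ... */
    QueryPerformanceCounter(&end);

    printf("%.3f us\n",
           (double)(end.QuadPart - start.QuadPart) * 1000000.0
               / (double)freq.QuadPart);
    return 0;
}
```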
• 12-02-2004
jim mcnamara
If you run on Linux or another multi-processing system, your process gets pre-empted by the OS in favor of other processes, so the elapsed (wall-clock) time is longer than the actual running time.

If you're on Unix, try
Code:

`time <programname>`
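
Typical output (numbers are illustrative) separates wall-clock time from CPU time, which helps spot when a process was pre-empted:

```
$ time ./fact_rec
real    0m3.10s
user    0m1.84s
sys     0m0.01s
```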