Hi all,
I am looking for a program that I can use to measure another program's execution time.
I mean that program (A) should help me find out how long program (B) took to finish its job.
I am using Linux.
Thanks all
Code:
$ time ls
<<snip verboseness>>
real    0m3.585s
user    0m0.050s
sys     0m0.030s
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I don't know how to time another program, but if you want to put a timer inside your own program, it's only a couple of lines of code. In C:
Code:
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Set the start time */
    clock_t start_time = clock();

    /* Do something here */
    int a, b;
    for (a = 0; a < 10000; a++)
        for (b = 0; b < 10000; b++);

    /* clock() returns ticks, not milliseconds: convert via CLOCKS_PER_SEC */
    printf("Time taken: %lu millisecs\n",
           (unsigned long)((clock() - start_time) * 1000 / CLOCKS_PER_SEC));
    getchar();
    return 0;
}
Mike,
I've been working with the time and clock functions these past couple of days and was glad to see this thread. I seem to have found lots of ways to keep it from working, and I'll post some examples shortly (or after the 4th), but I noticed that if I move the getchar() before the printf, it has no effect on the answer. Evidently waiting for input takes no clock time even though it might take real time?
Regards,
Dave
My problem is that one timer resolves only in whole seconds at the system level, and the other measures only CPU time. Therefore I can't really use either timer if I'm doing mid-frequency DAQ sampling, where CPU time can't be counted during an interrupt and system time only ticks once per second.
Can anyone suggest a free RTLib that might provide the functionality?
Here's the output
Code:
Please, enter your name: dave++
Hi dave++.
dif: It took you        3.0 seconds to type your name.
clktest: ... but        0.1 seconds for the system to loop.
tics/sec:  1000000.0
start and stop times, sec: 1183516635 1183516638
start and stop clock, tics: 0 60000
Here's the code
Yes, it's a terrible mix of C and C++
Code:
/* modified difftime example, from somewhere on the web,
   in this galaxy, n42532 and timeline, tl45 */
#include <stdio.h>
#include <iostream>
#include <ctime>

int main ()
{
    time_t start, end;
    unsigned int clkstart, clkend;
    double dif, cdif, clkdif;
    double clkpsec = 1.0 * CLOCKS_PER_SEC;
    char szInput [256];

    // begin
    time (&start);
    clkstart = clock();

    printf ("Please, enter your name: ");
    gets (szInput);

    time (&end);
    dif = difftime (end, start);

    // Do something here
    int a, b;
    for (a = 0; a < 12000; a++)
        for (b = 0; b < 10000; b++);

    clkend = clock();
    clkdif = ((double) (clkend - clkstart)) / clkpsec;

    printf ("Hi %s.\n", szInput);
    printf ("dif: It took you %10.1lf seconds to type your name.\n", dif);
    printf ("clktest: ... but %10.1lf seconds for the system to loop.\n", clkdif);
    std::cout << std::endl;
    printf ("tics/sec: %10.1lf \n", clkpsec);
    std::cout << "start and stop times, sec: " << start << " " << end << std::endl;
    std::cout << "start and stop clock, tics: " << clkstart << " " << clkend << std::endl;
    return 0;
}
*gasp* gets(szInput);
sorry I'm not much more help
> Evidently waiting for input takes no clock time even though it might take real time?
clock() is a fairly crude measure of CPU time. Time spent waiting for I/O to finish doesn't count.
If you want a really fast clock, say the timestamp counter in pentium processors, then search for some of my past posts for "rdtsc" or "queryperformancecounter".
This routine does it all...
It compares all the timing routines that are easily accessible.
Dave
Code:
Please, enter your name: dave
Hi dave.
dif: It took you   4.0 seconds to type your name.
ftm1test: ... but  3.7280 seconds according to ftime.
gttest: ... but    3.7277500 seconds according to gettimeofday.
clktest: ... but   0.0600 seconds for the system to loop.
ftm2test: ... but  3.7960 seconds according to ftime, w/loop.
tics/sec:  1000000.0
start and stop times, sec: 1183694435 1183694439
start and stop clock, tics: 0 60000
Code:
/* modified difftime example, from somewhere on the web,
   in this galaxy, n42532 and timeline, tl45 */
#include <stdio.h>
#include <iostream>
#include <ctime>
#include <sys/time.h>   // don't forget the "sys"
#include <sys/timeb.h>  // int ftime(struct timeb *tb);

int main ()
{
    struct timeval tv0, tv1, tv2;
    struct timezone tvz;
    struct timeb tb0, tb1, tb2;
    time_t start, end;
    unsigned int clkstart, clkend;
    double dif, fdif, gtdif, clkdif;
    double clkpsec = 1.0 * CLOCKS_PER_SEC;
    char szInput [256];
    double microsec = 1000000.0;

    // begin
    time (&start);
    ftime (&tb0);
    gettimeofday (&tv0, &tvz);
    clkstart = clock();

    printf ("Please, enter your name: ");
    gets (szInput);

    ftime (&tb1);
    gettimeofday (&tv1, &tvz);
    time (&end);

    dif   = difftime (end, start);
    fdif  = (double)((tb1.time + (double)(tb1.millitm)/1000)
          - (tb0.time + (double)(tb0.millitm)/1000));
    gtdif = (double)((tv1.tv_sec + (double)(tv1.tv_usec)/microsec)
          - (tv0.tv_sec + (double)(tv0.tv_usec)/microsec));

    // Do something here
    int a, b;
    for (a = 0; a < 12000; a++)
        for (b = 0; b < 10000; b++);

    clkend = clock();
    ftime (&tb2);
    clkdif = ((double) (clkend - clkstart)) / clkpsec;

    printf ("Hi %s.\n", szInput);
    printf ("dif: It took you %5.1lf seconds to type your name.\n", dif);
    printf ("ftm1test: ... but %5.4lf seconds according to ftime.\n", fdif);
    printf ("gttest: ... but %5.7lf seconds according to gettimeofday.\n", gtdif);
    fdif = (double)((tb2.time + (double)(tb2.millitm)/1000)
         - (tb0.time + (double)(tb0.millitm)/1000));
    printf ("clktest: ... but %3.4lf seconds for the system to loop.\n", clkdif);
    printf ("ftm2test: ... but %5.4lf seconds according to ftime, w/loop.\n", fdif);
    std::cout << std::endl;
    printf ("tics/sec: %10.1lf \n", clkpsec);
    std::cout << "start and stop times, sec: " << start << " " << end << std::endl;
    std::cout << "start and stop clock, tics: " << clkstart << " " << clkend << std::endl;
    return 0;
}
You may be able to use the gettimeofday(...) function from within a script, since it's listed in the man pages. Then when the routine returns to the script, you can hit the stopwatch from within the script. My suspicion is that all the routines I show in the final code can be used from a script.
Dave
And thanks, Salem, for the "time" Unix command-line prefix.
The rdtsc timer is unreliable for actual time measurements because it counts CPU cycles, and modern CPUs often change clock speed while running (especially mobile CPUs, but increasingly desktop CPUs as well), so the frequency of the timer varies.
http://blogs.msdn.com/oldnewthing/ar...5/3695356.aspx
All the buzzt!
CornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
My professor found an interesting site that discusses nanosleep and the accuracy of gettimeofday.
http://defectivecompass.wordpress.co...ecision-sleep/
code included