Hi everyone, I was trying to measure the time between the start and the end of a function. I found the time.h header and saw that it gives the time in seconds. My problem is that a second is far too coarse. I wonder if anyone can tell me how to get the difference in milliseconds, for instance.
Have you tried the clock() function?
It also helps if you run the function you're testing several thousand times in a loop to get a more accurate number.
And don't forget to check against the macro CLOCKS_PER_SEC to translate clock ticks to real time.
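Something like this (an untested sketch; function() is just a dummy standing in for whatever you want to time) shows the idea:

#include <iostream>
#include <time.h>

void function() //dummy work standing in for the code you want to time
{
    for (volatile int i = 0; i < 1000000; i++);
}

int main()
{
    clock_t start = clock();
    for (int run = 0; run < 1000; run++) //repeat so the interval is measurable
        function();
    clock_t end = clock();

    //CLOCKS_PER_SEC converts clock ticks to seconds; multiply by 1000 for ms
    double ms = 1000.0 * (end - start) / CLOCKS_PER_SEC;
    std::cout << "total: " << ms << " ms, per call: " << ms / 1000 << " ms" << std::endl;
    return 0;
}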
Thank you very much, it works. It only gives me two digits of precision, but it's better than none.
There are more precise functions out there, but they are bound to specific platforms. So if you want to use one, perhaps you could specify which OS?
I would suggest that you use clock(), and just run the function multiple times to get a better measurement. Of course, if you have such a short runtime that it barely registers on clock(), then there's probably nothing to worry about.
POSIX provides an extension to the standard C/C++ time library, which includes high-resolution timers with resolution down to nanoseconds (1 billion nanoseconds = 1 second).
The example below prints the microseconds taken to execute function().
Of course, the value will change slightly every time you run the program.
More detailed info here:
#include <iostream>
#include <sys/time.h>

using namespace std;

void function() //your testing function
{
    for (int i = 0; i < 100000; i++);
}

int main(int argc, char *argv[])
{
    struct timeval tv;
    struct timezone tz;
    gettimeofday(&tv, &tz); //get current time
    long long cur_time = tv.tv_sec * 1000000LL + tv.tv_usec; //store microseconds (include tv_sec so it survives a second boundary)
    function(); //run the code being timed
    gettimeofday(&tv, &tz); //get time again
    cout << (tv.tv_sec * 1000000LL + tv.tv_usec) - cur_time << endl;
}
How on earth does it do that? It doesn't really make sense if you think about it: a normal 1 GHz CPU does a clock cycle every nanosecond, but if it's incrementing the timer every nanosecond, how could it do anything else in the meantime? Could anyone explain this?
Originally Posted by carlorfeo
You can use the SDL_GetTicks() function. It returns milliseconds with decent precision.
You will have to have the SDL library, but its time function is good IMHO.
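A minimal sketch of what I mean (untested; assumes the SDL 1.2 headers are installed and you link against the SDL library):

#include <SDL/SDL.h>
#include <iostream>

void function() //dummy work standing in for the code being timed
{
    for (volatile int i = 0; i < 10000000; i++);
}

int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_TIMER); //the timer subsystem is all we need here

    Uint32 start = SDL_GetTicks(); //milliseconds since SDL_Init
    function();
    Uint32 end = SDL_GetTicks();

    std::cout << (end - start) << " ms" << std::endl;

    SDL_Quit();
    return 0;
}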
Yes, gettimeofday is specified to give you a time in microseconds - however, it is not guaranteed that the value is actually filled in with that resolution, so if the time fetched by the OS is still provided by a millisecond timer, then it's not getting any more precise just because you have a bunch of zeros on the end of it.
clock() calls gettimeofday internally.
You can get precise timing out of most modern CPUs, such as the AMD K5 (and later) and Intel Pentium (and later), using the RDTSC instruction (a rough sketch follows the list below). But there are problems:
1. It gives you clock-cycles of the CPU (in the Intel case incremented by 2 every other clock-cycle, so you will always get even numbers), which is not a measure of seconds - you need to know the clock-frequency of the processor to get it to seconds.
2. If you have more than one core, one of the processor cores may have a different cycle count than the other, so measurement over multiple processor(core)s can get false results.
3. The result is a 64-bit number, so you need long long to read it, and it's very sensitive to other processes interfering - there is no way to know whether it was YOUR process or some other that ran for most, some or none of the time consumed.
4. You need inline assembler to get this, and it only works on x86 processors of recent models.
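For what it's worth, here's a rough sketch of reading the TSC (assuming GCC-style inline assembly on x86 / x86-64; not tested elsewhere):

#include <iostream>

//read the time stamp counter; the result is CPU clock cycles, not seconds
static inline unsigned long long rdtsc()
{
    unsigned int lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((unsigned long long)hi << 32) | lo;
}

void function() //dummy work standing in for the code being timed
{
    for (volatile int i = 0; i < 1000000; i++);
}

int main()
{
    unsigned long long start = rdtsc();
    function();
    unsigned long long end = rdtsc();

    //divide by your CPU's clock frequency (in Hz) to convert cycles to seconds
    std::cout << (end - start) << " cycles" << std::endl;
    return 0;
}

As per points 2 and 3 above, the count can jump around if the scheduler moves you to another core or another process runs in between, so take it with a grain of salt.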
There isn't machine code being executed to increment the timer, not that that would even be possible. The hardware just increments the timer in a regular fashion, on its own, unaffected by whatever machine code might be being executed at the time.
Originally Posted by Neo1
All you do is read it occasionally.
Why not use the HPET? It was built because of the problems with time measurement that exist today (dual-core CPUs). Every newer board should have the chip. And while Linux has been capable of using it for some time, with Vista even MS is supporting the HPET now.
Yes, the HPET is available. But it's not commonly used to gather precise time for code.
It's used for high-precision events (as its name says), such as the multimedia timer, etc. It also has too short a runtime to make much sense here, as it wraps every few (milli)seconds or so.
Because it sits on a PCI device (southbridge), it takes quite a few cycles to "get there", so it's not a good idea to read it frequently, thus the OS won't do that. gettimeofday() is optimized because it is called VERY often by certain apps, and it will give you reasonable precision.
It is certainly possible to use the HPET, but it's not very practical.
If you really want high-precision timing, using the TSC (with RDTSC) is the right way to go - it's long enough to only wrap after MANY years (rough calculation on a paper napkin: 2^64 cycles / 4 GHz is about 4.6 billion seconds, roughly 146 years).