How do I account for time in C? I mean if I have 20 lines of code and I want to know how long time it takes to execute them in nano secs, how do I get this time?
Thanks
/Micke
If you want precise time, you need to read the processor time-stamp counter [assuming it's a recent x86 processor].
gcc will do that using a macro like this:
val is a "long long" type.
Code:
#define rdtscll(val) \
    __asm__ __volatile__("rdtsc" : "=A" (val))
--
Mats
Compilers can produce warnings - make the compiler programmers happy: Use them!
Please don't PM me for help - and no, I don't do help over instant messengers.
Thanks, sounds good. Hmm, I'm kind of new to macros, though; how do I use that?
like:
where diff is what I want?
Code:
#define rdtscll(val) \
    __asm__ __volatile__("rdtsc" : "=A" (val))

long long test;
long long diff;

test = rdtscll(val);
//some code to time
//some more code
...
diff = rdtscll(val) - test;
Or how do I use it?
I know Solaris has gethrtime(); is there something similar on Linux?
rdtsc reads clock cycles, so you need to divide by the clock rate in GHz to get nanoseconds, for example.
I'm not familiar with a "high res" timer in Linux.
--
Mats
Yes, elysia has the right code. Sorry, I missed that part of the original question.
--
Mats
Be aware that in a multitasking environment the operating system switches from process to process every so often, so if your 20 lines of code take more than a certain amount of time (say, because there's a loop or a function call in them, or because the CPU takes an interrupt while executing that part of the code), you might get some "incorrect" results.
It will still give you an idea, but if you really need nanosecond precision, you are wasting your time, because you'll never really get it.
That said, if you only want to know how long it takes to execute some 20 simple C instructions (which I'd guess is under 100 ns on a decent x86 processor), you'll mostly be fine.
Someone with more knowledge than me could say more about this; I just wanted to raise the point.
Yes, there are many problems with measuring time precisely in computers [mostly akin to Heisenberg's uncertainty principle: the more precisely you measure, the more you influence the execution, which in turn influences the measurement].
Big errors in measurement come from:
First of all, OS task switching. As long as your code runs reasonably quickly and there isn't too much other load on the system, you should be fine.
Another potential problem is switching between processors: each processor runs at its own specific frequency, so there is at least a theoretical risk of drift between the two, and of course if you start on one processor and finish on another, any difference between them is added to (or subtracted from) the timing you measure.
And yes, 20 lines of "simple" C should run in less than 100ns on a modern processor.
--
Mats