Thread: Time

  1. #1
    Registered User nepper271's Avatar
    Join Date
    Jan 2008
    Location
    Brazil
    Posts
    50

    Time

    Hi everyone, I was trying to measure the time between the start and the end of a function. I found the time.h header and saw that it returns time in seconds. My problem is that a second is far too coarse. I wonder if anyone can tell me how to find the difference in milliseconds, for instance.

    Thank you,
    Nepper271

  2. #2
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    Try clock().
    Quote Originally Posted by Adak View Post
    io.h certainly IS included in some modern compilers. It is no longer part of the standard for C, but it is nevertheless, included in the very latest Pelles C versions.
    Quote Originally Posted by Salem View Post
    You mean it's included as a crutch to help ancient programmers limp along without them having to relearn too much.

    Outside of your DOS world, your header file is meaningless.

  3. #3
    and the hat of sweating
    Join Date
    Aug 2007
    Location
    Toronto, ON
    Posts
    3,545
    Have you tried the clock() function?
    It also helps if you run the function you're testing several thousand times in a loop to get a more accurate number.

  4. #4
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    And don't forget to check against the macro CLOCKS_PER_SEC to translate clock ticks to real time.

    See: http://www.cplusplus.com/reference/c...ime/clock.html
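    A minimal sketch of the clock()/CLOCKS_PER_SEC approach described above, combined with the repeat-the-work-in-a-loop trick (the loop body and its count are just stand-in workload, not part of the original posts):

    ```cpp
    #include <ctime>
    #include <iostream>

    int main()
    {
        std::clock_t start = std::clock();

        // Run the code under test many times so it registers on clock()
        volatile long sink = 0;
        for (long i = 0; i < 10000000L; i++)
            sink += i;

        std::clock_t end = std::clock();

        // CLOCKS_PER_SEC translates clock ticks into real seconds
        double ms = 1000.0 * (end - start) / CLOCKS_PER_SEC;
        std::cout << ms << " ms" << std::endl;
        return 0;
    }
    ```

    Note that clock() measures CPU time consumed by the process, not wall-clock time, so a sleeping program accumulates (almost) no ticks.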
    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  5. #5
    Registered User nepper271's Avatar
    Join Date
    Jan 2008
    Location
    Brazil
    Posts
    50
    Thank you very much, it works. It gives me only two digits of precision, but that's better than none.

    Nepper271.

  6. #6
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Location
    Inside my computer
    Posts
    24,654
    There are more precise functions out there, but they are bound to specific platforms. So if you want to use one, perhaps you might specify what OS?

  7. #7
    Registered User nepper271's Avatar
    Join Date
    Jan 2008
    Location
    Brazil
    Posts
    50
    I'm using Linux.

  8. #8
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    I would suggest that you use clock(), and just run the function multiple times to get a better measurement. Of course, if you have such a short runtime that it barely registers on clock(), then there's probably nothing to worry about.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  9. #9
    coder
    Join Date
    Feb 2008
    Posts
    127
    POSIX provides an extension to the standard C time library, which includes high-resolution timers with up to nanosecond precision (1 billion nanoseconds = 1 second).

    The example below prints the microseconds taken to execute function().
    Of course, the value will change slightly every time you run the program.

    More detailed info here:
    http://www.informit.com/guides/conte...lus&seqNum=272

    Code:
    #include <iostream>
    #include <sys/time.h>
    using namespace std;
    
    void function ()		//your testing function
    {
    	for (volatile int i = 0; i < 100000; i++);
    }
    
    int main(int argc, char *argv[])
    {
    	struct timeval start, end;
    
    	gettimeofday(&start, NULL);	//get current time
    
    	function ();
    
    	gettimeofday(&end, NULL);	//get time again
    
    	//include tv_sec in the difference, so the result stays
    	//correct even when a second boundary is crossed
    	long elapsed_usec = (end.tv_sec - start.tv_sec) * 1000000L
    	                  + (end.tv_usec - start.tv_usec);
    
    	cout << elapsed_usec << endl;
    
    	return 0;
    }
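    The nanosecond-resolution POSIX interface alluded to above is clock_gettime(). A minimal sketch, assuming a POSIX system (the loop body is just a stand-in workload; older glibc may need linking with -lrt):

    ```cpp
    #include <ctime>
    #include <iostream>

    int main()
    {
        timespec start, end;

        // CLOCK_MONOTONIC is unaffected by wall-clock adjustments
        clock_gettime(CLOCK_MONOTONIC, &start);

        volatile long sink = 0;
        for (long i = 0; i < 100000; i++)
            sink += i;

        clock_gettime(CLOCK_MONOTONIC, &end);

        // combine seconds and nanoseconds into one 64-bit count
        long long ns = (end.tv_sec - start.tv_sec) * 1000000000LL
                     + (end.tv_nsec - start.tv_nsec);

        std::cout << ns << " ns" << std::endl;
        return 0;
    }
    ```

    As noted later in the thread, the reported resolution is an upper bound: the numbers are only as precise as the clock source the kernel actually uses.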

  10. #10
    Internet Superhero
    Join Date
    Sep 2006
    Location
    Denmark
    Posts
    964
    Quote Originally Posted by carlorfeo View Post
    POSIX provides an extension to the standard C time library, which includes high-resolution timers with up to nanosecond precision (1 billion nanoseconds = 1 second).
    How on earth does it do that? It doesn't really make sense if you think about it: a normal 1 GHz CPU does a clock cycle every nanosecond, but if it were incrementing the timer every nanosecond, how could it do anything else in the meantime? Could anyone explain this?
    How I need a drink, alcoholic in nature, after the heavy lectures involving quantum mechanics.

  11. #11
    Registered User
    Join Date
    Nov 2005
    Posts
    673
    You can use the SDL_GetTicks() function. It returns milliseconds with decent precision.
    You will need the SDL library, but its time functions are good IMHO.

  12. #12
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Yes, gettimeofday is specified to give you a time in microseconds - however, it is not guaranteed that all of that precision is actually filled in. If the time fetched by the OS still comes from a millisecond timer, the result doesn't get any more precise just because it has a bunch of zeros on the end.

    clock() calls gettimeofday internally.

    You can get precise timing out of most modern CPUs, such as the AMD K5 (and later) and Intel Pentium (and later), using the RDTSC instruction. But there are problems:
    1. It gives you clock cycles of the CPU (in the Intel case incremented by 2 every other clock cycle, so you will always get even numbers), which is not a measure of seconds - you need to know the clock frequency of the processor to convert it to seconds.
    2. If you have more than one core, one processor core may have a different cycle count than another, so a measurement spanning multiple processors (cores) can give false results.
    3. The result is a 64-bit number, so you need long long to read it, and it's very sensitive to other processes interfering - there is no way to know whether it was YOUR process or some other one that ran for most, some or none of the time consumed.
    4. You need inline assembler to get at it, and it only works on recent x86 processors.
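    A sketch of the inline-assembler approach described above, assuming GCC/Clang on x86 (this is one possible way to read the TSC, and all the caveats in the list apply - the result is in cycles, not seconds):

    ```cpp
    #include <cstdint>
    #include <iostream>

    // Read the 64-bit time-stamp counter; RDTSC returns it split
    // across EDX:EAX, which we recombine into one uint64_t
    static inline uint64_t rdtsc()
    {
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return (static_cast<uint64_t>(hi) << 32) | lo;
    }

    int main()
    {
        uint64_t start = rdtsc();

        volatile long sink = 0;
        for (long i = 0; i < 100000; i++)
            sink += i;

        uint64_t cycles = rdtsc() - start;

        // Dividing by the CPU clock frequency would convert this to seconds
        std::cout << cycles << " cycles" << std::endl;
        return 0;
    }
    ```

    Pinning the process to one core (e.g. with sched_setaffinity) sidesteps the cross-core cycle-count problem from point 2.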


  13. #13
    Algorithm Dissector iMalc's Avatar
    Join Date
    Dec 2005
    Location
    New Zealand
    Posts
    6,318
    Quote Originally Posted by Neo1 View Post
    How on earth does it do that? It doesn't really make sense if you think about it, a normal 1 Ghz cpu does a clock cycle every 1 nanosecond, but if it's incrementing the timer every nanosecond, how could it do anything else in the meantime? Could anyone explain this?
    There isn't machine code being executed to increment the timer - not that that would even be possible. The hardware just increments the counter on its own, in a regular fashion, unaffected by whatever machine code happens to be executing at the time.
    All you do is read it occasionally.
    My homepage
    Advice: Take only as directed - If symptoms persist, please see your debugger

    Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"

  14. #14
    Registered User
    Join Date
    Nov 2006
    Posts
    519
    Why not use HPET? It was built precisely because of the problems with time measurement that exist today (dual-core CPUs). Every newer board should have the chip. And while Linux has been able to use it for some time, with Vista even MS now supports HPET.

  15. #15
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Yes, the HPET is available. But it's not commonly used to gather precise time for code.

    It's used for high-precision events (as its name says), such as multimedia timers, etc. It also has too short a range to make much sense here, as it wraps every few (milli)seconds or so.

    Because it sits on a PCI device (southbridge), it takes quite a few cycles to "get there", so it's not a good idea to read it frequently, thus the OS won't do that. gettimeofday() is optimized because it is called VERY often by certain apps, and it will give you reasonable precision.

    It is certainly possible to use the HPET, but it's not very practical.

    If you really want high-precision timing, using the TSC (with RDTSC) is the right way to go - the 64-bit counter is long enough to only wrap after MANY years (rough calculation on a paper napkin: 2^64 cycles @ 4GHz is about 150 years).

