Thread: accurate runtime reading

  1. #1
    Registered User
    Join Date
    May 2009
    Posts
    242

    accurate runtime reading

    For some simple algorithms that multiply potentially very big numbers, I'm trying to get an accurate reading on how long they take, but I can't get non-zero readings using clock() for times under 0.01 seconds, even though CLOCKS_PER_SEC is 1000000 on my machine.

    Here's the relevant portion of the code:

    Code:
        const int BASE = 8;
        vector<short> num1, num2, result;
        clock_t start, end;
        double run_time;
        /* code to create num1 and num2 randomly of a given length */
        ...
        start = clock();
        result = mult_full(num1, num2, BASE);
        end = clock();
        run_time = static_cast<double>(end - start) / CLOCKS_PER_SEC;
        cout.setf(ios_base::fixed);
        cout << "Run time using clock() for random integers of length " << len << ": "
             << setprecision(6) << run_time << endl;
        cout << "CLOCKS_PER_SEC is " << CLOCKS_PER_SEC << endl;
    If the length of num1 and num2 is 128, for example, I always get a run_time of 0, but at length 256 I get a run_time of 0.010000. I'd like to be able to measure short run times here, too. Any ideas?

  2. #2
    spurious conceit MK27's Avatar
    Join Date
    Jul 2008
    Location
    segmentation fault
    Posts
    8,300
    Quote Originally Posted by Aisthesis View Post
    can't get non-zero readings using clock() for times under 0.01 seconds, even though CLOCKS_PER_SEC is 1000000 on my machine.

    If the length of num1 and num2 is 128, for example, I always get a run_time of 0, but at length 256 I get a run_time of 0.010000. I'd like to be able to measure short run times here, too. Any ideas?
    The granularity of the kernel scheduler is not as fine as CLOCKS_PER_SEC. This has some odd consequences if you try to time things on the order of milliseconds. Here's something I observed a little while ago:

    SourceForge.net: POSIX timers - cpwiki

    Since the example function timer reports in microseconds, an easily called function with a granularity also in microseconds would be nice. Unfortunately, while accurately timing an event in usecs is possible, on a normal linux system scheduling latency makes it impossible to accurately ask for a delay with finer granularity than 10 milliseconds. You can test this yourself by calling nanosleep with a 1000000 nanosecond (1 millisecond) delay 10000 times -- it will work out to much more than 10 seconds. However, if you ask for 10000000 nsecs (1/100th second) 1000 times, you will get exactly 10 seconds.
    You could try that function timer although despite my claim that "while accurately timing an event in usecs is possible" I don't remember using it to do so...hmmm

    Anyway, notice that the dividing line here between accurate and inaccurate is 0.01 seconds.

    In any case, you should be timing this over large sets, not small ones, and averaging the time. That gives you a better statistic anyway: an average is meaningful, whereas repeatedly running as short a set as possible to find "the best possible time" is meaningless and subject to wild variation, because userspace activities are not the kernel's top priority.
    Last edited by MK27; 07-01-2010 at 08:38 AM.
    C programming resources:
    GNU C Function and Macro Index -- glibc reference manual
    The C Book -- nice online learner guide
    Current ISO draft standard
    CCAN -- new CPAN like open source library repository
    3 (different) GNU debugger tutorials: #1 -- #2 -- #3
    cpwiki -- our wiki on sourceforge

  3. #3
    and the hat of int overfl Salem's Avatar
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    39,660
    Accuracy and precision - Wikipedia, the free encyclopedia

    0.000001 s is the precision (the unit clock() reports in)
    0.01 s is the accuracy (the granularity you actually get)

    As MK27 says, run your test code in a loop (say 1000 iterations), then calculate the average.
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.

  4. #4
    Registered User
    Join Date
    May 2009
    Posts
    242
    Ah, OK, the looping idea makes total sense for small integers (< 256 digits). It would take more time than I care to spend for, say, 65,000-digit numbers, but there I don't have to worry about it.

    By the way, the reason I bring this up is that I'm getting ready for a course on algorithms (based on Cormen, Leiserson et al.) next semester and want to be comfortable with some of the basics before starting. Am I understanding you guys correctly that in this context one will usually be worried not about the < 0.01 s processes but about those that require noticeable run time?

    Also, I started my timer after populating my vector<short> with n random digits so as not to count the time for populating it. What I noticed in the large cases is that the vectors get populated almost immediately (using cout << "populating" etc. to watch), but big numbers take some time to multiply. Is it fairly safe to assume that, for shorter integers, populating the vectors will take negligible time relative to the time required for a somewhat cumbersome version of multiplication?
