I'm trying to get an accurate reading on how long some simple algorithms for multiplying potentially very big numbers take, but I can't get non-zero readings using clock() for times under 0.01 seconds, even though CLOCKS_PER_SEC is 1000000 on my machine.

Here's the relevant portion of the code:

Code:
```
const int BASE = 8;
vector<short> num1, num2, result;
clock_t start, end;
double run_time;

/* code to create num1 and num2 randomly of a given length */
...

start = clock();
result = mult_full(num1, num2, BASE);
end = clock();

run_time = static_cast<double>(end - start) / CLOCKS_PER_SEC;
cout.setf(ios_base::fixed);
cout << "Run time using clock() for random integers of length " << len << ": "
     << setprecision(6) << run_time << endl;
cout << "CLOCKS_PER_SEC is " << CLOCKS_PER_SEC << endl;
```
If the length of num1 and num2 is 128, for example, I always get a run_time of 0, but at length 256 I get a run_time of 0.010000. I'd like to be able to measure short run times here, too. Any ideas?

2. Originally Posted by Aisthesis
can't get non-zero readings using clock() for times under 0.01 seconds, even though CLOCKS_PER_SEC is 1000000 on my machine.

If the length of num1 and num2 is 128, for example, I always get a run_time of 0, but at length 256 I get a run_time of 0.010000. I'd like to be able to measure short run times here, too. Any ideas?
The granularity of the kernel scheduler is not as fine as CLOCKS_PER_SEC. This has some odd consequences if you try to time things on the order of milliseconds. Here's something I observed a little while ago:

SourceForge.net: POSIX timers - cpwiki

Since the example function timer reports in microseconds, an easily called function with a granularity also in microseconds would be nice. Unfortunately, while accurately timing an event in usecs is possible, on a normal linux system scheduling latency makes it impossible to accurately ask for a delay with finer granularity than 10 milliseconds. You can test this yourself by calling nanosleep with a 1,000,000 nanosecond (1 millisecond) delay 10000 times -- it will work out to much more than 10 seconds. However, if you ask for 10,000,000 nsecs (1/100th of a second) 1000 times, you will get exactly 10 seconds.
You could try that function timer, although despite my claim that "accurately timing an event in usecs is possible," I don't remember ever using it to do so... hmmm.

Anyway, notice that the dividing line here between accurate and inaccurate is 0.01 seconds.

In any case you should be timing this on large sets, not small ones, and averaging the time. This gives you a better statistic anyway: an average is an average, as opposed to running as short a set as possible repeatedly to find "the best possible time," which is meaningless and subject to wild variation, because userspace activities are not the kernel's top priority.

3. Accuracy and precision - Wikipedia, the free encyclopedia

0.000001 is the precision
0.01 is the accuracy

As MK27 says, run your test code in a loop (say, 1000 iterations), then calculate the average.

Ah, OK, the looping idea makes total sense for small integers (< 256 digits). It would take more time than I care to spend for, say, 65,000-digit numbers, but there I don't have to worry about it.

Btw, the reason I bring this up is that I'm getting ready for a course on algorithms (based on Cormen, Leiserson et al.) next semester and want to be comfortable with some of the basics before starting. Am I understanding you guys correctly that in this context one will usually be worried not about the < 0.01 sec processes but about those that require noticeable run time?

Also, I started my timer after populating my vector<short> with n random digits so as not to count the time spent populating it. What I noticed in the large cases is that the vectors get populated almost immediately (using cout << "populating" etc. to watch), but big numbers take some time to multiply. Is it fairly safe to assume that for shorter integers, populating the vectors will take negligible time relative to the time required for a somewhat cumbersome version of multiplication?