- 01-20-2010, ingenting: Measure algorithm speed
Hi,

Well, as the headline says... how do you really measure an algorithm's speed in C++?

- 01-20-2010, m37h0d
Get the start time, run the algorithm some number of times, then subtract the start time from the finish time and divide by the number of runs.

- 01-20-2010, iMalc
Technically you don't measure algorithm speed, you measure implementation speed. I might write the same algorithm as you but have it run ten times faster on the same machine, due to details like passing certain things by const reference instead of by value, etc.

Algorithms are usually compared to one another in terms of their Big-Oh notation.

- 01-20-2010, ingenting
Aha, ok tyvm guys!

- 01-22-2010, nadroj
As iMalc noted, algorithms are (scientifically/in research) *always* measured in terms of their order of growth, usually expressed in "Big-Oh" notation. Also, as he/she correctly said, by actually timing it you are measuring the *implementation* of the algorithm. This means your numbers/results are *meaningless* on any other machine with different hardware/software. Therefore you *cannot* compare your results to other results unless both were produced on the exact same hardware and software. I can run your code on two computers with identical hardware and software, except one has a 1 GHz CPU and the other a 3 GHz CPU; the code might be "slow" on the first computer and "fast" on the second. Likewise, some other piece of code might be "slow" on the second computer and so considered unacceptable, but tomorrow a 10 GHz computer is developed and that same code becomes "fast" and acceptable.

Hopefully you see the problem with this, and this is where measuring an *algorithm* (i.e. pseudocode) in terms of order of growth, commonly using "Big-Oh" notation, comes into the picture to solve it. This method *fixes* the general "hardware" and "software", ignoring any specific implementation details such as the programming language, etc. By fixing the "hardware" to some model, say "a computer with 1 CPU", you ignore all other details such as the CPU speed of today versus tomorrow. You can then determine whether the *algorithm* is "slow" (unacceptable) or "fast" (acceptable). If it's "slow", it is likely to be "slow" on *any actual computer* representative of that model, i.e. *any* 1-CPU computer (the ones of yesterday, today, and 100 years from now). Once you have the Big-Oh "complexity" of the algorithm, you can use it to approximate the *actual running time* of an implementation, simply by considering how many operations per second the computer can do (i.e. the CPU speed).