Hi,
Well, as the headline says... how do you really measure an algorithm's speed in C++?
Get the start time, run the algorithm some number of times, then subtract the start time from the finish time and divide by the number of runs.
Technically you don't measure algorithm speed, you measure implementation speed. I might write the same algorithm as you but have it run ten times faster on the same machine because I took care to pass certain things by const reference instead of by value, etc.
Algorithms are usually compared to one another in terms of their Big-Oh notation.
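To illustrate the point about implementations: here are two versions of the same trivial "algorithm" (a hypothetical `count_matches`). They have identical Big-Oh behavior, but the first copies the whole vector on every call while the second takes a const reference, so their measured times can differ substantially on large inputs:

```cpp
#include <string>
#include <vector>

// Copies the entire vector on every call -- same algorithm, slower implementation.
int count_matches_by_value(std::vector<std::string> words, const std::string& target) {
    int count = 0;
    for (const auto& w : words)
        if (w == target) ++count;
    return count;
}

// Takes the vector by const reference -- no copy, same result.
int count_matches_by_ref(const std::vector<std::string>& words, const std::string& target) {
    int count = 0;
    for (const auto& w : words)
        if (w == target) ++count;
    return count;
}
```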
Aha, ok tyvm guys!
As iMalc noted, algorithms are (scientifically, in research) measured in terms of order of growth, usually expressed in "Big-Oh" notation. And as he/she correctly said, by actually timing it you are measuring an implementation of the algorithm. That means your numbers are meaningless on any other machine with different hardware or software, so you cannot compare your results to someone else's unless both of you used exactly the same hardware and software. I could run your code on two computers that are identical except that one has a 1GHz CPU and the other a 3GHz CPU: the code might be "slow" on the first and "fast" on the second. Likewise, some other piece of code might be "slow" on the second computer and therefore considered unacceptable, but tomorrow a 10GHz computer is developed and that same code becomes "fast" and acceptable.
Hopefully you see the problem with this, and this is where measuring an algorithm (i.e. the pseudocode) in terms of order of growth, commonly with "Big-Oh" notation, comes into the picture. This approach fixes an abstract "hardware" and "software" model, ignoring implementation details such as the programming language. By fixing the "hardware" to some model, say "a computer with one CPU", you ignore details such as today's CPU speed versus tomorrow's. You can then determine whether the algorithm is "slow" (unacceptable) or "fast" (acceptable). If it's "slow", it is likely to be "slow" on any actual computer representative of that model, i.e. any one-CPU computer (the ones of yesterday, today, and 100 years from now). Once you have the Big-Oh "complexity" of the algorithm, you can use it to roughly approximate the actual running time of an implementation, simply by considering how many operations per second the computer can perform (i.e. the CPU speed).