Thread: Measure algorithm speed

  1. #1
    Registered User
    Join Date
    Nov 2008
    Posts
    15

    Measure algorithm speed

    Hi,

    Well, as the headline says... how do you actually measure an algorithm's speed in C++?

  2. #2
    3735928559
    Join Date
    Mar 2008
    Location
    RTP
    Posts
    838
    Get the start time, run the algorithm some number of times, then subtract the start time from the finish time and divide by the number of runs.
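    For instance, a minimal sketch of that approach using C++11 `<chrono>` (here `std::sort` on a reversed vector stands in for "your algorithm"; the run count and input size are arbitrary):

    ```cpp
    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
        // Build some input in reverse order so the sort has real work to do.
        std::vector<int> data(100000);
        for (std::size_t i = 0; i < data.size(); ++i)
            data[i] = static_cast<int>(data.size() - i);

        const int runs = 100;
        auto start = std::chrono::steady_clock::now();
        for (int r = 0; r < runs; ++r) {
            std::vector<int> copy = data;        // fresh copy so every run sorts unsorted input
            std::sort(copy.begin(), copy.end()); // the "algorithm" being timed
        }
        auto finish = std::chrono::steady_clock::now();

        auto total = std::chrono::duration_cast<std::chrono::microseconds>(finish - start);
        std::printf("average per run: %lld us\n",
                    static_cast<long long>(total.count() / runs));
        return 0;
    }
    ```

    Note that the copy inside the loop is itself being timed; for finer measurements you would time only the call you care about, or subtract the cost of the copy measured separately.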

  3. #3
    Algorithm Dissector iMalc's Avatar
    Join Date
    Dec 2005
    Location
    New Zealand
    Posts
    6,318
    Technically you don't measure algorithm speed, you measure implementation speeds. I might write the same algorithm as you but have it run ten times faster on the same machine due to taking care of passing certain things by const-reference instead of by value etc.

    Algorithms usually are compared to one another in terms of their Big-Oh notation.
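    As an example of the by-value vs. const-reference point (a hypothetical pair of functions; both are the same O(n) algorithm, but the by-value version pays for a full copy of the container, and every string in it, on each call):

    ```cpp
    #include <cstddef>
    #include <string>
    #include <vector>

    // Same algorithm: count the empty strings in a vector.

    // Pass by value: the entire vector is copied on every call.
    std::size_t countEmptyByValue(std::vector<std::string> v) {
        std::size_t n = 0;
        for (std::size_t i = 0; i < v.size(); ++i)
            if (v[i].empty()) ++n;
        return n;
    }

    // Pass by const reference: no copy, same result, much cheaper on large input.
    std::size_t countEmptyByRef(const std::vector<std::string>& v) {
        std::size_t n = 0;
        for (std::size_t i = 0; i < v.size(); ++i)
            if (v[i].empty()) ++n;
        return n;
    }
    ```

    Both implementations have identical asymptotic complexity, which is why wall-clock timing alone can't tell you anything about the algorithm itself.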
    My homepage
    Advice: Take only as directed - If symptoms persist, please see your debugger

    Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"

  4. #4
    Registered User
    Join Date
    Nov 2008
    Posts
    15
    Aha, ok tyvm guys!

  5. #5
    Registered User
    Join Date
    Oct 2006
    Location
    Canada
    Posts
    1,243
    As iMalc noted, algorithms are (scientifically, in research) compared in terms of their order of growth, usually expressed in "Big-Oh" notation. Also, as he correctly said, by actually timing it you are measuring the implementation of the algorithm. This means your numbers are meaningless on any other machine with different hardware or software, so you cannot compare your results to anyone else's unless they used the exact same hardware and software. I could run your code on two computers that are identical except that one has a 1GHz CPU and the other a 3GHz CPU: the code might be "slow" on the first and "fast" on the second. Likewise, some piece of code might be "slow" on the second computer and so considered unacceptable, but tomorrow a 10GHz computer comes along and suddenly the same code is "fast" and acceptable.

    Hopefully you see the problem with this, and this is where measuring an algorithm (i.e. pseudocode) in terms of its order of growth, commonly using "Big-Oh" notation, comes into the picture. This method fixes the "hardware" and "software" to an abstract model, thereby ignoring implementation details such as the programming language. By fixing the "hardware" to some model, say "a computer with 1 CPU", you ignore details like the CPU speed of today versus tomorrow. You can then determine whether the algorithm is "slow" (unacceptable) or "fast" (acceptable). If it's "slow", it is likely to be "slow" on any actual computer representative of that model, i.e. any 1-CPU computer (of yesterday, today, or 100 years from now). Once you have the Big-Oh "complexity" of the algorithm, you can use it to approximate the actual running time of an implementation, simply by considering how many operations per second the computer can do (i.e. the CPU speed).
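    A back-of-the-envelope version of that last step (the ~1 billion simple operations per second and the input size n are assumptions for illustration, not measurements):

    ```cpp
    #include <cmath>
    #include <cstdio>

    int main() {
        const double opsPerSecond = 1e9;  // assumed: ~1 billion simple ops/sec
        const double n = 1e6;             // assumed input size

        // Rough operation counts for common complexity classes.
        double linear    = n;                 // O(n)
        double nLogN     = n * std::log2(n);  // O(n log n)
        double quadratic = n * n;             // O(n^2)

        std::printf("O(n):       ~%g s\n", linear / opsPerSecond);
        std::printf("O(n log n): ~%g s\n", nLogN / opsPerSecond);
        std::printf("O(n^2):     ~%g s\n", quadratic / opsPerSecond);
        return 0;
    }
    ```

    The point of the exercise: at n = 1e6 the O(n^2) estimate is on the order of a thousand seconds while O(n log n) is a fraction of a second, and that gap only widens as n grows, regardless of the constant factor you assumed for the CPU.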

