what is big O notation?
thanks
This is a discussion on big O notation within the C++ Programming forums, part of the General Programming Boards category.
It's a way to measure the amount of time a procedure takes from start to completion for a given set of data.
Let's say you are about to sort an array of length n; depending on the sorting algorithm's design, the time may differ a lot.
For example, I believe bubble sort sorts the array in O(n^2) time,
while, for example, heapsort sorts the array in O(n log n) time.
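To make that concrete, here's a minimal bubble sort sketch in C++ (not from the original post, just an illustration; `bubbleSort` is a made-up name). The two nested loops, each running up to n times, are where the n^2 comes from:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Bubble sort: the outer loop runs up to n times and the inner loop
// up to n times per pass, so the worst case does on the order of
// n * n comparisons -- O(n^2).
void bubbleSort(std::vector<int>& a)
{
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = 1; j < a.size() - i; ++j)
            if (a[j - 1] > a[j])
                std::swap(a[j - 1], a[j]);   // move the larger element toward the back
}
```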
basically this notation is not used for computing exact time, but to compare different algorithms... constant factors are ignored, since the only thing that matters is how the time grows with n.
This is quite a complicated area and not easy to explain; perhaps this made it somewhat clearer anyway.
/Laos
>what is big O notation?
A theoretical measure of the execution of an algorithm. Since it's an approximation, O-notation tends to be very simple. A long complicated algorithmic equation can be shortened to something like O(n^2) in O-notation simply by removing the parts that don't matter, such as constants.
-Prelude
My best code is written with the delete key.
that sorta helped
so basically it's a method used to compare how efficient a certain piece of code is?
sorry, I have to write a 2 page paper on big o notation for school and am having trouble finding examples that make sense online
thanks
This may help, but O-notation is a complicated subject which few seem to be able to describe effectively. It's something you can understand but not be able to explain to someone who doesn't understand:
http://www.cs.wisc.edu/~hasti/cs367-...OMPLEXITY.html
-Prelude
My best code is written with the delete key.
Basically, big O notation is used for measuring the speed of an algorithm. Typically it is used for algorithms on containers. N means the number of elements in a container.
For example, a standard bubble sort is on the order of O(n^2), or quadratic time. This means that the time it takes increases as the square of the number of elements, or doubling the length of the list quadruples the time it takes to bubble sort. This would obviously be very undesirable for a very large container.
A cheaper algorithm, like a memberwise copy, would be on the order of O(N). This, also known as linear time, means doubling the length of the container doubles the time it takes.
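A memberwise copy might look like the sketch below (`copyAll` is just an illustrative name, not a standard function). One assignment per element, so the work grows in direct proportion to the size:

```cpp
#include <vector>

// Memberwise copy: exactly one push_back per source element,
// so the total work is proportional to src.size() -- O(N).
std::vector<int> copyAll(const std::vector<int>& src)
{
    std::vector<int> dst;
    dst.reserve(src.size());      // one allocation up front
    for (int x : src)
        dst.push_back(x);         // copy each element in turn
    return dst;
}
```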
There is also O(log(N)). This, known as logarithmic time, means that the time grows as the log of the number of elements. This is, generally, slower than constant time but much quicker than linear time.
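The classic logarithmic example is binary search on a sorted container; each step throws away half of the remaining range, so about log2(N) steps suffice. A sketch (`contains` is a made-up name for illustration):

```cpp
#include <cstddef>
#include <vector>

// Binary search on a sorted vector: every iteration halves the
// search range [lo, hi), so the loop runs about log2(N) times -- O(log N).
bool contains(const std::vector<int>& sorted, int target)
{
    std::size_t lo = 0, hi = sorted.size();
    while (lo < hi) {
        std::size_t mid = lo + (hi - lo) / 2;  // midpoint without overflow
        if (sorted[mid] == target)
            return true;
        if (sorted[mid] < target)
            lo = mid + 1;                       // target is in the upper half
        else
            hi = mid;                           // target is in the lower half
    }
    return false;
}
```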
The cheapest algorithm, like back insertion to a vector, would be on the order of O(1). This, also known as constant time, means that the number of elements in the container does not affect the time an operation takes. Obviously, this is the most preferable for large containers.
Some operations can be on combined orders, like O(N*log(N)), and even much worse orders, like O(N^3).
Also, there is a + symbol sometimes used. This is known as amortized time. It indicates that the speed may be affected in some situations. For example, insertion into the middle of a vector, which shifts all the later members back, is generally O(N). But sometimes more space must be allocated for the extra element. This would take substantially longer, but would not happen every time. So it is often written as O(N)+, or amortized linear time.
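You can actually watch the amortization happen by counting how rarely a vector reallocates during repeated push_backs (`reallocations` is a made-up helper, just a sketch). Because capacity grows geometrically, the expensive reallocations get rarer and rarer, which is why push_back averages out to constant time:

```cpp
#include <cstddef>
#include <vector>

// Count how many times a vector reallocates while pushing back n
// elements. The occasional reallocation costs O(n), but capacity
// grows geometrically, so only about log(n) reallocations happen
// in total -- push_back is O(1) amortized.
std::size_t reallocations(std::size_t n)
{
    std::vector<int> v;
    std::size_t count = 0;
    std::size_t cap = v.capacity();
    for (std::size_t i = 0; i < n; ++i) {
        v.push_back(static_cast<int>(i));
        if (v.capacity() != cap) {   // a reallocation just happened
            cap = v.capacity();
            ++count;
        }
    }
    return count;
}
```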
Big O notation does not give you a clear measure of the time an operation takes; it gives a comparison of the speed of an operation based on list size. Sometimes O(N^2) operations are faster than O(N) operations, if the list is very small and the quadratic algorithm does substantially less work per iteration. But at large scale, the lower the order, the better.