Hello Team
I am going through the "Time Complexity of an Algorithm" part, but I can't understand it: what it actually is, and how to find the time complexity of a specific algorithm. Please help me understand the methodology.
Thanks & Regards
Aryaa
Maybe this explanation would help.
N is often the magnitude of the algorithm's input - say you were going to sort 1 million array items, that's a large N.
The rationale for Big O notation is as follows. If you really wanted to measure an algorithm's performance exactly, you would have to consider all sorts of things: the speed of reads and writes in memory, the cost of a swap or a comparison, and so on. In Big O notation these factors are all assumed to be constant costs, so you don't even write them down. For large inputs, all of these costs are dominated by the sheer size of N.
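To make the "constants are dropped" point concrete, here is a minimal sketch (function names are mine, just for illustration): one pass over the input costs N basic operations, and doing three units of work per element costs 3N, but both grow linearly, so both are O(N).

```c
#include <assert.h>
#include <stddef.h>

/* One pass over the input: N basic operations. */
static size_t one_pass(size_t n)
{
    size_t ops = 0;
    for (size_t i = 0; i < n; i++)
        ops += 1;            /* e.g. one comparison per element */
    return ops;
}

/* Three units of work per element: 3N basic operations in total,
 * but the constant 3 is dropped, so this is still O(N). */
static size_t three_per_element(size_t n)
{
    size_t ops = 0;
    for (size_t i = 0; i < n; i++)
        ops += 3;            /* e.g. a read, a compare, and a write */
    return ops;
}
```

Doubling N doubles both counts; the factor of 3 between them never changes, which is exactly why Big O ignores it.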
As far as the notation itself, the function of N tells you how many times the input gets processed. N^2 means you go over all N elements once for each of the N elements - a loop nested inside another loop. N! means the number of operations is 1 x 2 x 3 x ... x N, which blows up even for small N. Basically, the slower the function of N grows, the more efficient the algorithm is going to be in the long run. So, really, to determine Big O you are looking at how many times you have to go through the input, i.e. repetition.
Many of the efficient algorithms are of the "divide and conquer" variety, where we repeat some process on smaller and smaller pieces of the input. If you can chop the input in half at each "step" of the algorithm, you create a "binary tree" effect; this usually results in O(N log N), which is very efficient. For an example of this, just picture a binary tree. The whole input is the root node, and each level splits the array in half: on level 2, the left node is the left half of the input and the right node is the other half. As you go down the tree the pieces get smaller and smaller until, on roughly the (log N)th level, each element is its own node. Hence the true performance of the algorithm depends on the height of this imaginary tree.
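That tree height can be sketched in a few lines of C (the function name is mine): counting how many times you can halve N before reaching 1 gives the number of levels, which is about log2(N).

```c
#include <assert.h>
#include <stddef.h>

/* Count how many times n can be halved before reaching 1.
 * This is the height of the "binary tree" of subarrays,
 * i.e. roughly log2(n) levels. */
static unsigned tree_height(size_t n)
{
    unsigned levels = 0;
    while (n > 1) {
        n /= 2;          /* split the array in half at each step */
        levels++;
    }
    return levels;
}
```

Since each level touches all N elements once (think of the merge step in merge sort), the total work comes out to about N * log2(N).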
So that's more or less where it comes from and what it means. That's my rather playful understanding of it. If you want to know it more in detail I would look here and here for a start.
Last edited by whiteflags; 07-02-2016 at 03:20 PM.
Wouldn't N^2 mean that N operations are performed N times, such as a loop inside a loop where the inner and outer loop each run N times, for a total of N^2 executions of the inner body? N! would be worse yet (for N >= 4), involving 1 x 2 x 3 x ... x (N-1) x N operations.
Last edited by rcgldr; 07-02-2016 at 04:54 PM.
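That nested-loop picture is easy to verify by just counting the operations (function names are mine, for illustration only): a single loop performs N operations, while a loop inside a loop performs N * N.

```c
#include <assert.h>
#include <stddef.h>

/* A single loop: the body runs once per element, so O(N). */
static size_t linear_ops(size_t n)
{
    size_t ops = 0;
    for (size_t i = 0; i < n; i++)
        ops++;               /* one "visit" per element */
    return ops;
}

/* Nested loops: the inner body runs N times for each of the
 * N outer iterations, so the total is N * N = N^2 operations. */
static size_t quadratic_ops(size_t n)
{
    size_t ops = 0;
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++)
            ops++;
    return ops;
}
```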
Usually you have to look at the worst case, best case and average case.
For example, some algorithms in practice may never hit the worst case if the data is sorted to begin with or vice versa. You need to make sure you understand what assumptions pertain to your situation.
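Insertion sort is a concrete example of this (my sketch, not from the thread): on input that is already sorted it does just one comparison per element, about N total, but on reversed input it does N(N-1)/2 comparisons.

```c
#include <assert.h>
#include <stddef.h>

static size_t comparisons;   /* reset to 0 before each run */

/* Plain insertion sort, instrumented to count key comparisons.
 * The comma operator counts each a[j-1] > key test as it happens. */
static void insertion_sort(int a[], size_t n)
{
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && (comparisons++, a[j - 1] > key)) {
            a[j] = a[j - 1];   /* shift larger elements right */
            j--;
        }
        a[j] = key;
    }
}
```

Best case (sorted input): each element is compared once against its left neighbor, so about N comparisons. Worst case (reversed input): every element is compared against everything before it, so N(N-1)/2.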
Big-O notation is usually quoted for the "worst case scenario": an upper limit on the work done for a given N, no matter what the input to the function is. There are related notations for the other bounds (Big-Omega for a lower bound, and Big-Theta for a tight bound where upper and lower bounds match). A good example of the difference is to consider a merge sort vs. a quicksort. For a merge sort, the best case, average case, and worst case are all the same: O(N log N). But for something like quicksort, whose complexity depends on the order of the original array, they differ - O(N log N) on average, but O(N^2) in the worst case, such as an already-sorted array with a naive pivot choice.
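The quicksort worst case is easy to trigger in a sketch, assuming a simple Lomuto partition with the last element as pivot (my choice; the thread doesn't name a scheme). On an already-sorted array every partition is maximally lopsided, so the comparison count is exactly N(N-1)/2.

```c
#include <assert.h>
#include <stddef.h>

static size_t comparisons;   /* reset to 0 before each sort */

/* Lomuto partition with the last element as pivot: a common textbook
 * scheme, chosen here because it makes the worst case easy to hit. */
static size_t partition(int a[], size_t lo, size_t hi)
{
    int pivot = a[hi];
    size_t i = lo;
    for (size_t j = lo; j < hi; j++) {
        comparisons++;
        if (a[j] < pivot) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
            i++;
        }
    }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;
    return i;
}

static void quicksort(int a[], size_t lo, size_t hi)
{
    if (lo >= hi)
        return;
    size_t p = partition(a, lo, hi);
    if (p > 0)                       /* guard against size_t underflow */
        quicksort(a, lo, p - 1);
    quicksort(a, p + 1, hi);
}
```

On sorted input, each partition only peels one element off, so the recursion is N levels deep instead of log N, and the work is (N-1) + (N-2) + ... + 1 = N(N-1)/2. On random input the splits are usually close to even, giving the familiar N log N average.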
N! is about the worst complexity you will run into in practice - think of brute-forcing every permutation of the input. Any algorithm that could only be done in N! time would be considered practically unsolvable for all but tiny N.