I can only follow up with another question:

```
for (i = a; i <= N; i++) {      // O(N^2) - quadratic time
    for (j = b; j <= N; j++) {
        // loop body requiring
        // constant time
    }
}
```

```
for (i = a; i <= N; i++) {      // O(N*M)
    // loop body requiring
    // time O(M)
}
```

I understand the first loop and its Big-O notation, but I do not understand why the second loop is O(N*M)?

A little more clarification:

This information assumes that N is the unknown or variable quantity.

Code:

```
for (i = a; i <= N; i++) {      // O(N^2) - quadratic time
    for (j = b; j <= N; j++) {
        // loop body requiring
        // constant time
    }
}
```

This is O(N^2) because the inner loop runs N times for each of the N iterations of the outer loop, so the constant-time body executes N * N times.

Code:

```
for (i = a; i <= N; i++) {      // O(N*M)
    // loop body requiring
    // time O(M)
}
```

This is based on the same principle as the first one, except that M can also vary. If M happens to equal N this reduces to O(N^2), but since M doesn't have to equal N, it is usually expressed more precisely as O(N*M). One consequence: when you compare algorithms by plotting their Big-Oh expressions on a two-dimensional plane, you can't accurately plot y = N*M the way you can plot y = N^2, because it depends on two independent variables.

Code:

```
for (i = a; i <= b; i++) {      // O(1)
    // loop body requiring
    // constant time
}
```

This code isn't based on N, so in terms of N it is constant time. The time doesn't vary based on N at all. Since the Big-Oh is usually based on N or one particular variable quantity, it is O(1). On the other hand, if you were to develop an actual expression to describe the runtime, it would be b-a+1. This just describes how many times the body of the loop will run for any integer values a and b.

Hopefully this clears up a little bit regarding the Big-Oh discussion.