## Big-O Question

1. This was from another forum:

Any O(n) running time list accessor is fundamentally wrong, as it encourages writing horribly inefficient code.

I'm not sure I completely understand what "running time list accessor" means or why it's bad if it becomes more efficient than (n). Which brings me to my last question... what is (n) in this case?

2. If you have a random access container, you can look up any item in it directly without any looping (O(1)). A linked list does not have random access - so to access the n-th element you have to start from the beginning and follow the links until you get there (and the time it takes is proportional to how far into the list you need to go).

If the list has size and at methods, you can bet there would be a lot going on behind this simple loop:
Code:
```
for (int i = 0; i < myList.size(); ++i) {
    myList.at(i);
}
```

3. Originally Posted by sh3rpa
This was from another forum:

I'm not sure I completely understand what "running time list accessor" means or why it's bad if it becomes more efficient than (n). Which brings me to my last question... what is (n) in this case?
"n" is the number of items in the list.

O(n) means that the time it takes to access an element of the list grows proportionally with the length of the list. If it were O(n^2), it would mean that the access time is proportional to the square of the length of the list.

4. When writing your own container it is a good idea to understand efficiency as expressed in big-O notation. I'd highly recommend doing some research on that subject to understand what it means, and how to use it to understand the quality of the code you are writing.

5. Originally Posted by Daved
When writing your own container it is a good idea to understand efficiency as expressed in big-O notation. I'd highly recommend doing some research on that subject to understand what it means, and how to use it to understand the quality of the code you are writing.
I've just started learning Big-O in school, but mostly how it relates to polynomials, no in-depth tracing of algorithms yet.

To understand how it relates to my problem, I can roughly just think of the literal number of steps something takes to complete.

With this rationale I can imagine the extra code it would take to actually index each Node. Everything really takes off when the list is modified, because re-assigning indexes is going to require at least one more iteration, right?

So there's a fundamental difference (this aspect of random access) between data types with an index and those without. (Well, duh, I guess)

Thanks everyone.

6. The place in memory of any list element is only given by the place of the preceding and/or the subsequent element (depending on the type of list). Every element has a field that points to its neighbour.
So if you "have" a list, you have the memory address of its first element.
To access any other element, you have to walk from the first one, step by step, until you reach it. If you are unlucky, that's n steps, so O(n) is the execution time in the worst case. In the average case it's O(n/2) and in the best case O(1). Most of the time the average case is what gets quoted. But O(n/2) is the same as O(n) - computer science people ignore that difference because execution times of O(n/2) and O(n) belong to the same class of problems.

For a vector, the address of any element is always known directly; you don't need to care about other elements to look one up. So lookup is always O(1): best, average and worst.

O(1) is the best you can get from any algorithm. It can only be further accelerated by a constant-factor speedup, but even then it stays O(1). Programmers want O(1) stuff, but they seldom get it.
Many important things are O(log n), which is also acceptable most of the time.
Next best is often O(n). For big data sets you should get a newer CPU for that.
Then come O(n^2) ... O(n^3) ... keep brewing coffee.
And O(n^n): you should really, really look for alternatives to this one.

7. O(n/2) and O(n) are exactly the same. The formal definition of O(f(n)) time is that the ratio of the actual running time (as a function of n) to f(n) is bounded by some constant. The definition is rigged so that a constant factor doesn't make any difference.

http://en.wikipedia.org/wiki/Big-O_n...mal_definition

Edit: I should have said "bounded by some constant, as n goes to infinity".

8. yes, you are right

9. One to avoid:

O(n!)

10. Originally Posted by sh3rpa
One to avoid:

O(n!)
Yes! Avoid if at all possible!

--
Mats

11. Originally Posted by anon
If you have a random access container, you can look up any item in it directly without any looping (O(1)). A linked list does not have random access - so to access the n-th element you have to start from the beginning and follow the links until you get there (and the time it takes is proportional to how far into the list you need to go).

If the list has size and at methods, you can bet there would be a lot going on behind this simple loop:
Code:
```
for (int i = 0; i < myList.size(); ++i) {
    myList.at(i);
}
```
I wrote the line being quoted in the OP. I shall punctuate it better etc:
Any O(n) running-time, list-accessor, is fundamentally wrong, as it encourages writing horribly inefficient code.
Could be too many commas now though.

Yeah, O(n) is Big-O notation and n corresponds to the number of items in the list.
The code anon shows above is exactly why one should not have an accessor that takes O(n) running time, where n is the number of items in the list.
Accessors should really be O(1) or O(log(n)). O(sqrt(n)) is probably borderline unacceptable. O(n) is just too slow, and anything worse, like O(n*n), is a total abomination.

12. Originally Posted by iMalc
I wrote the line being quoted in the OP. I shall punctuate it better etc:
Any O(n) running-time, list-accessor, is fundamentally wrong, as it encourages writing horribly inefficient code.
You said it before, and you've said it again. That doesn't make you right, or any less wrong.

In the real world, these things are about trade-offs: no container supports every type of operation efficiently. Accessing is only one operation. If other operations (e.g. insertion in the middle, insertion at the beginning, insertion at the end, sorting, removing elements) are done frequently, the better trade-off can often mean accepting the deficiencies of an O(n) accessor in order to achieve O(1) performance with other operations that are used more frequently in a particular application.

13. You said it before, and you've said it again. That doesn't make you right, or any less wrong.
How about if more people say it?

Any O(n) "random-access" function is fundamentally wrong. Such a thing is not random-access.

If random access is so infrequent that you can accept O(n) for it, you can accept writing *std::next(cont.begin(), n) to get there. At least it will make you remember that it is O(n), and if you use it too often, you'll switch to a container that supports it efficiently.