Originally Posted by **CornedBee**

But when calculating the worst-case complexity of an algorithm, you have to consider the worst-case complexities of all components, not their average complexities. (Unless you can prove that worst-case for one component requires an input sequence that makes worst-case for another component impossible.)

The reason sorting algorithms can still use their stated complexities, and don't have to do something silly like charge O(N) for every increment, is that the N is a different one, and that counting isn't done with unlimited integers. For sorting N numbers, incrementing a counter may be O(M) worst case, but that M is just the number of bits in the counter (log2(N)), not the number of elements. So do algorithms have to add a log(N) factor to their complexity calculations? No, because you have to distinguish between theory and practice.

In theory, N can be arbitrarily large. But also in theory, increment is an atomic operation and thus O(1).

In practice, N is bounded by addressable memory, and thus M is bounded by the number of bits in a size_t, be that 32 or 64.

So increment, even assuming that the CPU can't increment a number atomically (which it can, I think), is O(32) or O(64), which is equivalent to O(1).