Also, the best complexity for detecting whether something is sorted is O(n). It's a single pass through the list, checking that every element is in the correct order with respect to its predecessor. Again, it cannot possibly be faster, because you have to look at each element at least once.
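Just to make the single-pass idea concrete, here's a minimal sketch in Python (the function name is mine, nothing from the thread):

```python
def is_sorted(xs):
    # One pass, one comparison per element after the first: O(n).
    # Each element is checked against its predecessor.
    return all(xs[i - 1] <= xs[i] for i in range(1, len(xs)))
```

`all()` short-circuits on the first out-of-order pair, so the best case (disorder near the front) exits early, but the worst case is still n - 1 comparisons.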
Multiprocessing or multithreading will not change the fundamentals of the O(...) method of describing the complexity of an algorithm - it will just [potentially, assuming it's actually possible to parallelise the algorithm] speed up the actual execution, which is still proportional to the number of elements in some way. If it's O(n), then twice as many elements will take twice the amount of time. If you have four processors, it will still take twice as long to do double the number of elements.
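To illustrate that point: you can split the check across workers, but the total number of comparisons is still about n, so the growth rate doesn't change. A rough sketch (function names are mine; each chunk overlaps its neighbour by one element so the seams get checked too):

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_sorted(xs, lo, hi):
    # Check the pairs (i-1, i) for i in [lo, hi); starting at max(lo, 1)
    # means each chunk also verifies the seam with the previous chunk.
    return all(xs[i - 1] <= xs[i] for i in range(max(lo, 1), hi))

def is_sorted_parallel(xs, workers=4):
    n = len(xs)
    if n < 2:
        return True
    step = max(1, n // workers)
    bounds = [(lo, min(lo + step, n)) for lo in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Still n - 1 comparisons in total, just spread over the workers:
        # at best a constant-factor speedup, never a change in O(n).
        return all(pool.map(lambda b: chunk_sorted(xs, *b), bounds))
```

With four workers each chunk is about n/4 comparisons, so doubling n still doubles the wall-clock time, exactly as described above.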
Originally Posted by master5001
That was my point: the lower bound on the problem is fixed, so there is no way to improve on the absolute best case, which was my entire point to begin with. I am just not entirely sure where there is any room for cutting corners, short of not actually checking every element.
If you didn't need everything verified as sorted, you could theoretically make something that does roughly n/x work, where x is a constant factor by which you skip elements of the data. The objective would simply be to play the odds and skip elements that are, in all probability, already sorted. However, again, if something is truly randomized, the probability of any given element already being sorted is low.
In other words, you can get away with fewer than n operations if the algorithm does a half-assed job on the data (strictly speaking, n/x with constant x is still O(n); you're only shaving a constant factor).
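A sketch of that half-assed spot-check, assuming a fixed stride x (my own naming, not from the thread):

```python
def probably_sorted(xs, x=10):
    # Check only every x-th adjacent pair: roughly n/x comparisons.
    # Can return True for data that is NOT fully sorted -- disorder
    # between the sampled positions goes undetected.
    return all(xs[i - 1] <= xs[i] for i in range(1, len(xs), x))
```

For example, swapping two neighbours in an otherwise sorted list will usually slip past this check unless the swap happens to land on a sampled pair, which is exactly the trade-off being described.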