End note: actually, it's $2.56 for one error, $5.12 for the next, and so on. Not by year, by error number.
http://www.upto11.net/generic_wiki.p...h_reward_check
http://www.nationmaster.com/encyclop...h-reward-check
dwk
Seek and ye shall find. quaere et invenies.
"Simplicity does not precede complexity, but follows it." -- Alan Perlis
"Testing can only prove the presence of bugs, not their absence." -- Edsger Dijkstra
"The only real mistake is the one from which we learn nothing." -- John Powell
Other boards: DaniWeb, TPS
Unofficial Wiki FAQ: cpwiki.sf.net
My website: http://dwks.theprogrammingsite.com/
Projects: codeform, xuni, atlantis, nort, etc.
Also, the best complexity for detecting whether something is sorted is O(n). It's a single pass through the list, making sure that every element is in the correct order with respect to its predecessor. Again, it cannot possibly be faster, because you have to look at each element at least once.
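A minimal sketch of that single pass, in Python (function name is just for illustration):

```python
def is_sorted(data):
    """O(n) check: compare each element with its predecessor.

    Returns True as soon as every adjacent pair is in order;
    returns False at the first inversion found.
    """
    return all(data[i - 1] <= data[i] for i in range(1, len(data)))

print(is_sorted([1, 2, 2, 5]))  # True
print(is_sorted([3, 1, 2]))     # False
```

Note that it can bail out early on the first inversion, but in the worst case (the list really is sorted) it must still touch all n elements.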
All the buzzt!
CornedBee
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
That was my point. The lower bound of the problem is fixed: since every element has to be examined at least once, there is no way to reduce the absolute best case. I am just not sure where there is any room for cutting corners, short of actually not checking every element.
If you didn't need everything sorted, you could theoretically make something that does O(n/x) work, where x is a constant factor by which you skip elements of the data. (Strictly speaking that is still O(n) for constant x; you only save a constant factor.) The objective would simply be to check data based on probability and skip elements that are in all probability already sorted. However, again, if something is truly randomized, the probability of any element already being in place is low.
In other words, you can make something with less work than a full O(n) pass only if it does a half-assed job of checking the data.
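A sketch of that "half-assed" sampling check, assuming the simplest strategy (compare only every x-th element; function name and parameter are hypothetical):

```python
def probably_sorted(data, x=4):
    """O(n/x) comparisons: only inspect every x-th element.

    A False result is definite (a real inversion was found), but a
    True result is only probabilistic -- inversions between the
    skipped elements go undetected.
    """
    for i in range(x, len(data), x):
        if data[i - x] > data[i]:
            return False
    return True

# Misses the local inversion 5 > 1 because only data[0] and data[4]
# are compared:
print(probably_sorted([0, 5, 1, 2, 8], x=4))  # True
```

The last call shows exactly the trade-off described above: skipping elements buys a constant-factor speedup at the cost of missing inversions inside the skipped runs.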