# Thread: Merge Sort the best sorting algorithm?

1. ## Merge Sort the best sorting algorithm?

I read an article at wikipedia which said that Quicksort can never sort as quickly as Merge Sort!? In its worst case Quicksort's O-complexity is O(n²), whereas Merge Sort will never have any other O-complexity than O(n*log(n)).
I am wondering why the 'Merge-Sort-algorithm' isn't called 'Quickest-Sort-algorithm' then.
Are there any really serious drawbacks to Merge Sort's speed advantage when sorting huge amounts of data, compared to Quicksort?

2. >I read an article at wikipedia which said that Quicksort can never sort as quickly as Merge Sort!?
Um, no. Most of the time quicksort is faster than merge sort.

>In its worst case Quicksort's O-complexity is O(n²), whereas Merge Sort will never have any other O-complexity than O(n*log(n)).
Yes, in the worst case, merge sort is faster in theory. But that doesn't mean it's always faster, especially since most quicksort implementations make any chance of hitting the worst case vanishingly small.
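To make that concrete, here is a minimal sketch of one such implementation choice: a Lomuto-partition quicksort that picks its pivot at random. The function name `quicksort` and the helper `swap_int` are just illustrative, not from any particular library.

```c
#include <stdlib.h>

static void swap_int(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Lomuto-partition quicksort with a randomly chosen pivot.  Choosing the
   pivot at random (rather than, say, always the first element) is one of
   the design choices that makes hitting the O(n^2) worst case on any
   fixed input vanishingly unlikely. */
void quicksort(int data[], int lo, int hi) {
    if (lo >= hi)
        return;
    int p = lo + rand() % (hi - lo + 1);   /* random pivot index */
    swap_int(&data[p], &data[hi]);         /* move pivot to the end */
    int pivot = data[hi];
    int store = lo;                        /* boundary of the "< pivot" region */
    for (int i = lo; i < hi; ++i)
        if (data[i] < pivot)
            swap_int(&data[i], &data[store++]);
    swap_int(&data[store], &data[hi]);     /* pivot into its final position */
    quicksort(data, lo, store - 1);
    quicksort(data, store + 1, hi);
}
```

With a random pivot, no single input can reliably trigger quadratic behavior; the expected running time is O(n*log(n)) regardless of how the data is arranged.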

>I am wondering why the 'Merge-Sort-algorithm' isn't called 'Quickest-Sort-algorithm' then.
Because computer scientists are better at naming algorithms than you are?

3. > I read an article at wikipedia which said that Quicksort can never sort as quickly as Merge Sort!?

Which article would that be? I think the article on quicksort is pretty clear:
Typically, quicksort is significantly faster in practice than other Θ(n log n) algorithms, because its inner loop can be efficiently implemented on most architectures, and in most real-world data it is possible to make design choices which minimize the possibility of requiring quadratic time.

4. Originally Posted by http://en.wikipedia.org/wiki/Merge_sort#Analysis
In the worst case, merge sort does about 39% fewer comparisons than quicksort does in the average case; merge sort always makes fewer comparisons than quicksort, except in extremely rare cases, when they tie, where merge sort's worst case is found simultaneously with quicksort's best case. In terms of moves, merge sort's worst case complexity is O(n log n)—the same complexity as quicksort's best case, and merge sort's best case takes about half as many iterations as the worst case.
(emphasis mine)

Unless I'm reading this wrong, it is saying that merge sort can never be slower than quicksort, which is probably the source of sehr alt's question.

5. ## hmm

Code:
```P1:{
Merge Sort:
O(n*log(n)) ALWAYS - GUARANTEED
}

vs.

P2:{
Quick Sort:
O(n*log(n)) ALMOST - IN ITS 'BEST CASE'

O(n²) IN ITS WORST CASE
}```
P1 && P2 --> Merge Sort is better

How could a mathematical truth ever be wrong in reality? This is one more example.
Ha ha.

6. >How could a mathematical truth ever be wrong in reality?
When mathematical truth throws away the majority of the information so that it can produce a very short and abstract result. Do you even know how asymptotic notation works?

Anyway, different situations will produce a different answer about which algorithm is best. I could "prove" that quicksort is faster or that merge sort is better quite easily just by changing the data. Using O notation alone to make a blanket statement about performance is stupid at best. It's only a starting point.

7. >How could a mathematical truth ever be wrong in reality?
Different implementations, different compilers, different optimizations + what Prelude said.

8. Asymptotically, radix sort is faster than all of them, but its drawbacks are often too much to warrant usage. Upper bounds on run time aren't the be-all and end-all of algorithms.

9. Radix sort is not asymptotically faster. It takes O(n * log(k)) time to simply look at enough information in the elements of an n-element array with k distinct elements to differentiate them, so there's no way you can break O(n*log(n)) time in the general case.

10. Right...what am I saying...I meant counting sort.

11. Counting sort is not asymptotically faster, either. It takes O(n * log(k)) time to simply look at enough information in the elements of an n-element array with k distinct elements to differentiate them.

The thing about counting sort is that for typical input sizes, with many processors, the log(k) factor gets executed in one or fewer machine code instructions.

12. Once again, Rashakil Fol, I find myself curiously aroused by you.

13. Radix sort is O(n), but the conditions are different than for quicksort. Quicksort only assumes that it is possible to compare two elements. Radix sort looks at some kind of numerical representation. Radix sort is not a comparison sort and is therefore able to be O(n).
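As a concrete sketch of the radix-sort idea being discussed, here is an LSD (least-significant-digit-first) radix sort on unsigned ints, base 256. The function name `radix_sort` is illustrative; the sketch assumes non-negative values and n > 0, and makes one stable counting pass per byte, so the number of passes grows with the number of digits in the largest value.

```c
#include <stdlib.h>
#include <string.h>

/* LSD radix sort for unsigned ints: one stable counting pass per byte.
   The pass count grows with log(max value), which is where the
   O(n * log(max)) behavior comes from.  Sketch only: assumes n > 0. */
void radix_sort(unsigned int data[], int n) {
    unsigned int *tmp = malloc(n * sizeof *tmp);
    for (int shift = 0; shift < 32; shift += 8) {
        int count[257] = {0};
        for (int i = 0; i < n; ++i)               /* histogram of this byte */
            count[((data[i] >> shift) & 0xFFu) + 1]++;
        for (int b = 0; b < 256; ++b)             /* prefix sums -> start positions */
            count[b + 1] += count[b];
        for (int i = 0; i < n; ++i)               /* stable scatter into tmp */
            tmp[count[(data[i] >> shift) & 0xFFu]++] = data[i];
        memcpy(data, tmp, n * sizeof *tmp);
    }
    free(tmp);
}
```

Note that it never compares two elements against each other; it only inspects their numerical representation, which is exactly why the comparison-sort lower bound doesn't apply.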

Radix sort is really O(n * log(max_n)), where max_n is the maximum element value. So the maximum value of the elements enters the time complexity for radix sort.

14. Here's a demonstration of an O(n) sorting algorithm. It can handle integer values in the interval [0,9].
Code:
```void bucketSort(int data[], int length) {
    int count[10] = {0};            /* one bucket per possible value 0..9 */
    for (int i = 0; i < length; ++i)
        count[data[i]]++;           /* count occurrences of each value */
    int pos = 0;
    for (int value = 0; value <= 9; ++value)
        for (int i = 0; i < count[value]; ++i)
            data[pos++] = value;    /* write each value back count[value] times */
}```
It takes arrays of arbitrary size and sorts them. If you don't require that only comparisons are made, it is possible to achieve O(n).

15. The bucket sort is an extremely fast sort, but it is limited as well. You have to really tailor it to your needs....it's kind of difficult to write a general all-cases bucket sort...and as far as I know it's kind of theoretically impossible. I've never seen one...tell me if you find one.

merge sort generally uses much more memory than quicksort, and this can be a cause of slow-down.
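That extra memory shows up in the merge step itself. Here is a minimal sketch of one merge pass (the function name `merge` and its index convention are illustrative, not from any particular library):

```c
#include <stdlib.h>
#include <string.h>

/* One merge step: combines the sorted halves data[lo..mid] and
   data[mid+1..hi].  The malloc'd buffer is the O(n) auxiliary memory a
   typical top-down merge sort needs; quicksort partitions in place and
   avoids it. */
void merge(int data[], int lo, int mid, int hi) {
    int n = hi - lo + 1;
    int *buf = malloc(n * sizeof *buf);
    int i = lo, j = mid + 1, k = 0;
    while (i <= mid && j <= hi)                  /* take the smaller head */
        buf[k++] = (data[i] <= data[j]) ? data[i++] : data[j++];
    while (i <= mid) buf[k++] = data[i++];       /* drain the left half */
    while (j <= hi)  buf[k++] = data[j++];       /* drain the right half */
    memcpy(data + lo, buf, n * sizeof *buf);     /* copy merged run back */
    free(buf);
}
```

The allocation, copy-back, and poorer cache locality of the scratch buffer are the usual sources of the slow-down mentioned above.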

normally they perform about the same.