# Is Merge Sort the best sorting algorithm?


1. Originally Posted by Sang-drax
Radix Sort is really O(n * log max_n). So the maximum value of n enters the time complexity for radix sort.
If there's a maximum value of n then what does big O notation mean?

The way you are achieving O(n) complexity is not by avoiding comparisons, it's by arbitrarily limiting the number of distinct elements possible, giving yourself a counting problem, not a sorting problem. Sorting methods that do not use comparisons, while maintaining no restriction on the number of distinct elements, are still limited to O(n log n) performance, because they have to examine at least n*log(n)/log(2) bits in total simply to distinguish the elements from one another. The input size is effectively O(n log n).

That proof you may have seen regarding the performance of comparison sorts is interesting but unnecessarily complicated.
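To make the point above concrete, here is a hedged sketch of what "giving yourself a counting problem" looks like in practice (the function name and the 0..10 domain are illustrative, not from the thread):

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Illustrative sketch: once values are restricted to a fixed domain
// (0..10 here), "sorting" reduces to counting occurrences per value
// and writing them back out in order -- O(n) time, but only because
// the domain is constant, not because comparisons were cleverly avoided.
std::vector<unsigned> counting_sort(const std::vector<unsigned>& in) {
    std::array<std::size_t, 11> count{};    // one bucket per possible value
    for (unsigned v : in)
        ++count[v];
    std::vector<unsigned> out;
    out.reserve(in.size());
    for (unsigned v = 0; v <= 10; ++v)
        out.insert(out.end(), count[v], v); // emit each value count[v] times
    return out;
}
```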

2. Originally Posted by Rashakil Fol
The way you are achieving O(n) complexity is not by avoiding comparisons, it's by arbitrarily limiting the number of distinct elements possible, giving yourself a counting problem, not a sorting problem. Sorting methods that do not use comparisons, while maintaining no restriction on the number of distinct elements, are still limited to O(n log n) performance, because they have to examine at least n*log(n)/log(2) bits in total simply to distinguish the elements from one another. The input size is effectively O(n log n).
But your assumption is that it must be possible for all elements to be distinct. Sometimes that is not the case. You are right, but my problem is still a sorting problem.

3. Originally Posted by Sang-drax
But your assumption is that it must be possible for all elements to be distinct. Sometimes that is not the case. You are right, but my problem is still a sorting problem.
You can't really discard that assumption and talk about sorting algorithms' theoretical performance. Quicksort, if coded accordingly, is an O(N) sort in all cases if you limit the integers to a finite domain.

Code:
```// All integers fall between 0 and 10
#include <utility> // std::swap

void qsort (unsigned int * sort_begin, unsigned int * sort_end) {
    unsigned int pivot = *sort_begin;
    unsigned int * piv_begin = sort_begin;
    unsigned int * piv_end = piv_begin + 1;
    unsigned int * compare = piv_end;

    // partition
    for ( ; compare != sort_end; ++compare ) {
        if (*compare <= pivot) {
            std::swap (*piv_end, *compare);

            if (*piv_end == pivot)
                ++piv_end;
            else
                std::swap (*piv_begin++, *piv_end++);
        }
    }

    // Since the pivot values get clumped together, each distinct value is
    // used as a pivot at most once. Therefore, we are guaranteed to recurse
    // <= 10 times total.
    if (sort_begin != piv_begin) // Values less than pivot
        qsort (sort_begin, piv_begin);

    if (piv_end != sort_end) // Values greater than pivot
        qsort (piv_end, sort_end);
}```
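For readers who want something directly compilable, here is a self-contained rendering of the same three-way ("fat pivot") partition idea; `qsort3` and the invariant comments are my own illustrative framing, not QuestionC's exact code:

```cpp
#include <algorithm>
#include <vector>

// Three-way partition quicksort: values equal to the pivot are clumped
// into a middle band and excluded from both recursive calls, so each
// distinct value serves as a pivot at most once. With at most k distinct
// values, total work is O(n * k) -- O(n) when k is a fixed constant.
void qsort3(unsigned* begin, unsigned* end) {
    if (end - begin < 2) return;
    unsigned pivot = *begin;
    unsigned* lt = begin;      // [begin, lt): values <  pivot
    unsigned* eq = begin + 1;  // [lt, eq):    values == pivot
    for (unsigned* p = eq; p != end; ++p) {
        if (*p <= pivot) {
            std::swap(*eq, *p);
            if (*eq == pivot)
                ++eq;
            else
                std::swap(*lt++, *eq++); // grow the < band, shift the == band
        }
    }
    qsort3(begin, lt);  // strictly-less part
    qsort3(eq, end);    // strictly-greater part
}
```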

4. Originally Posted by QuestionC
You can't really discard that assumption and talk about sorting algorithms' theoretical performance.
Of course I can. Why not? Different situations require different assumptions about the data.

Limiting the number of distinct elements is essential to some algorithms. If we want to assign a complexity to, for example, radix sort, it must be O(n * log max_n), which collapses to O(n) when max_n is bounded.
Originally Posted by QuestionC
Quicksort, if coded accordingly, is an O(N) sort in all cases if you limit the integers to a finite domain.
Yes it's O(n), what's your point?

5. Originally Posted by Sang-drax
Limiting the number of distinct elements is essential to some algorithms.
It's not essential for radix sort.
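A sketch of why max_n, rather than the number of distinct elements, drives radix sort's cost (base 256 and the fixed 32-bit key width here are my assumptions): the number of passes is proportional to the key width, i.e. to the log of the maximum possible value.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// LSD radix sort sketch: one stable counting pass per 8-bit digit.
// The pass count (4 for 32-bit keys) is proportional to the key width,
// i.e. log(max_n) -- the bound on the values, not the count of distinct
// elements, is what sets the cost: O(n * log max_n).
std::vector<std::uint32_t> radix_sort(std::vector<std::uint32_t> a) {
    std::vector<std::uint32_t> buf(a.size());
    for (int shift = 0; shift < 32; shift += 8) {
        std::array<std::size_t, 257> count{};
        for (std::uint32_t v : a)
            ++count[((v >> shift) & 0xFF) + 1];    // histogram the digits
        for (std::size_t i = 1; i < 257; ++i)
            count[i] += count[i - 1];              // prefix sums = start offsets
        for (std::uint32_t v : a)
            buf[count[(v >> shift) & 0xFF]++] = v; // stable scatter
        a.swap(buf);
    }
    return a;   // even number of passes, so the result ends up back in a
}
```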

6. Originally Posted by sehr alt
I read an article at Wikipedia which said that Quicksort can never sort as quickly as Merge Sort!? In its worst case Quicksort's O-complexity is O(n²), whereas Merge Sort will never have any other O-complexity than O(n*log(n)).
I am wondering why the 'Merge-Sort-algorithm' isn't called 'Quickest-Sort-algorithm' then.
Are there any really serious drawbacks to 'Merge Sort's' optimized speed of sorting huge amounts of data - compared to Quicksort?
You obviously don't understand big-O notation. You're assuming that if algorithm A is O(n log n) and algorithm B is O(n^2) then B is "slower" than A. That's NOT what big-O means.

Quick sort's enormous advantage is that it can be done IN PLACE. Merge sort requires working memory, quick sort doesn't.

7. >Merge sort requires working memory
It's possible to write an in-place merge sort. But doing so efficiently is an exercise in futility because the code ends up being long and complicated.
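For contrast with quicksort's in-place partitioning, here is a textbook top-down merge sort, showing the O(n) scratch buffer this thread calls "working memory" (a sketch; the names are mine):

```cpp
#include <cstddef>
#include <vector>

// Textbook top-down merge sort. Note the O(n) scratch buffer `tmp`:
// this is the working memory that in-place quicksort does not need.
static void msort(std::vector<int>& a, std::vector<int>& tmp,
                  std::size_t lo, std::size_t hi) {
    if (hi - lo < 2) return;
    std::size_t mid = lo + (hi - lo) / 2;
    msort(a, tmp, lo, mid);    // sort left half
    msort(a, tmp, mid, hi);    // sort right half
    std::size_t i = lo, j = mid, k = lo;
    while (i < mid && j < hi)  // stable merge into the scratch buffer
        tmp[k++] = (a[j] < a[i]) ? a[j++] : a[i++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    for (k = lo; k < hi; ++k) a[k] = tmp[k];
}

void merge_sort(std::vector<int>& a) {
    std::vector<int> tmp(a.size());
    msort(a, tmp, 0, a.size());
}
```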
