Merge Sort the best sorting algorithm?

This is a discussion on Merge Sort the best sorting algorithm? within the A Brief History of Cprogramming.com forums, part of the Community Boards category.

  1. #16
    aoeuhtns
    Join Date
    Jul 2005
    Posts
    581
    Quote Originally Posted by Sang-drax View Post
    Radix Sort is really O(n * log max_n). So the maximum value of n enters the time complexity for radix sort.
    If there's a maximum value of n then what does big O notation mean?

    The way you are achieving O(n) complexity is not by avoiding comparisons; it's by arbitrarily limiting the number of distinct elements possible, giving yourself a counting problem, not a sorting problem. Sorting methods that do not use comparisons, while placing no restriction on the number of distinct elements, are still bound by the n log n barrier, because they have to examine at least n*log(n)/log(2) bits in total simply to distinguish the elements from one another. The input size is effectively O(n log n).
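    That bit count can be sanity-checked numerically: distinguishing all n! orderings of n distinct elements takes log2(n!) bits, which Stirling's approximation puts at roughly n*log2(n). A quick sketch in standard C (using lgamma to evaluate log(n!) without overflowing):

    ```c
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* log2(n!) = lgamma(n + 1) / ln(2); compare it to n * log2(n). */
        for (double n = 16; n <= 4096; n *= 16) {
            double exact = lgamma(n + 1.0) / log(2.0);
            double bound = n * log2(n);
            printf("n=%6.0f  log2(n!)=%12.1f  n*log2(n)=%12.1f\n",
                   n, exact, bound);
        }
        return 0;
    }
    ```

    The exact count stays just under n*log2(n) but grows at the same rate, which is the sense in which the input size is "effectively O(n log n)" bits.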

    That proof you may have seen regarding the performance of comparison sorts is interesting but unnecessarily complicated.
    Last edited by Rashakil Fol; 06-18-2007 at 06:59 PM.
    There are 10 types of people in this world: those who cringed when reading the beginning of this sentence, and those who salivated at how superior they are for understanding something as simple as binary.

  2. #17
    S Sang-drax's Avatar
    Join Date
    May 2002
    Location
    Göteborg, Sweden
    Posts
    2,072
    Quote Originally Posted by Rashakil Fol View Post
    The way you are achieving O(n) complexity is not by avoiding comparisons; it's by arbitrarily limiting the number of distinct elements possible, giving yourself a counting problem, not a sorting problem. Sorting methods that do not use comparisons, while placing no restriction on the number of distinct elements, are still bound by the n log n barrier, because they have to examine at least n*log(n)/log(2) bits in total simply to distinguish the elements from one another. The input size is effectively O(n log n).
    But your assumption is that it must be possible for all elements to be distinct. Sometimes that is not the case. You are right, but my problem is still a sorting problem.
    Last edited by Sang-drax : Tomorrow at 02:21 AM. Reason: Time travelling

  3. #18
    Registered User
    Join Date
    Sep 2001
    Posts
    752
    Quote Originally Posted by Sang-drax View Post
    But your assumption is that it must be possible for all elements to be distinct. Sometimes that is not the case. You are right, but my problem is still a sorting problem.
    You can't really discard that assumption and talk about sorting algorithms' theoretical performance. Quicksort, if coded accordingly, is an O(N) sort in all cases if you limit the integers to a finite domain.


    Code:
     // All integers fall between 0 and 10
     static void swap (unsigned int * a, unsigned int * b) {
        unsigned int tmp = *a;
        *a = *b;
        *b = tmp;
     }

     static void quicksort (unsigned int * sort_begin, unsigned int * sort_end) {
        unsigned int pivot = *sort_begin;
        unsigned int * piv_begin = sort_begin;
        unsigned int * piv_end = piv_begin + 1;
        unsigned int * compare = piv_end;

        // partition: [ less than pivot | equal to pivot | unpartitioned ]
        for ( ; compare != sort_end; ++compare ) {
           if (*compare <= pivot) {
              swap (piv_end, compare);

              if (*piv_end == pivot)
                 ++piv_end;
              else
                 swap (piv_begin++, piv_end++);
           }
        }

        // Since equal pivot values get clumped together, each distinct value
        // serves as a pivot at most once. With at most 11 distinct values
        // (0 to 10), the total number of recursive calls is bounded by a
        // constant, so the whole sort is O(N).
        if (sort_begin != piv_begin) // Values less than pivot
           quicksort (sort_begin, piv_begin);

        if (piv_end != sort_end) // Values greater than pivot
           quicksort (piv_end, sort_end);
     }
    Last edited by QuestionC; 06-19-2007 at 12:47 PM.
    Callou collei we'll code the way
    Of prime numbers and pings!

  4. #19
    S Sang-drax's Avatar
    Join Date
    May 2002
    Location
    Göteborg, Sweden
    Posts
    2,072
    Quote Originally Posted by QuestionC View Post
    You can't really discard that assumption and talk about sorting algorithms' theoretical performance.
    Of course I can. Why not? Different situations require different assumptions about the data.

    Limiting the number of distinct elements is essential to some algorithms. If we want to assign a complexity to, for example, radix sort, it must be O(n * log n_max) = O(n).
    Quote Originally Posted by QuestionC View Post
    Quicksort, if coded accordingly, is an O(N) sort in all cases if you limit the integers to a finite domain.
    Yes, it's O(n); what's your point?
    Last edited by Sang-drax : Tomorrow at 02:21 AM. Reason: Time travelling

  5. #20
    aoeuhtns
    Join Date
    Jul 2005
    Posts
    581
    Quote Originally Posted by Sang-drax View Post
    Limiting the number of distinct elements is essential to some algorithms.
    It's not essential for radix sort.
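    For reference, here is a sketch of what that looks like: an LSD radix sort over 32-bit unsigned ints with 8-bit digits (the function name and digit width are my choices, not anything from the thread). It makes exactly four counting passes no matter how many distinct values appear; the "log max" factor is the fixed digit count, not a cap on distinctness.

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* LSD radix sort on 32-bit unsigned ints, one byte per pass. */
    static void radix_sort(unsigned int *a, size_t n) {
        unsigned int *tmp = malloc(n * sizeof *tmp);
        if (tmp == NULL)
            return;
        for (int shift = 0; shift < 32; shift += 8) {
            size_t count[257] = {0};
            for (size_t i = 0; i < n; i++)        /* histogram of this digit */
                count[((a[i] >> shift) & 0xFFu) + 1]++;
            for (int d = 0; d < 256; d++)         /* prefix sums -> offsets */
                count[d + 1] += count[d];
            for (size_t i = 0; i < n; i++)        /* stable scatter */
                tmp[count[(a[i] >> shift) & 0xFFu]++] = a[i];
            memcpy(a, tmp, n * sizeof *a);
        }
        free(tmp);
    }

    int main(void) {
        unsigned int v[] = {170, 45, 75, 90, 802, 24, 2, 66};
        radix_sort(v, sizeof v / sizeof v[0]);
        for (size_t i = 0; i < sizeof v / sizeof v[0]; i++)
            printf("%u ", v[i]);                  /* prints 2 24 45 66 75 90 170 802 */
        printf("\n");
        return 0;
    }
    ```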
    There are 10 types of people in this world, those who cringed when reading the beginning of this sentence and those who salivated to how superior they are for understanding something as simple as binary.

  6. #21
    Captain Crash brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,159
    Quote Originally Posted by sehr alt View Post
    I read an article at wikipedia which said that Quicksort can never sort as quickly as Merge Sort!? In its worst case Quicksort's O-complexity is O(n²), whereas Merge Sort will never have any other O-complexity than O(n*log(n)).
    I am wondering why the 'Merge-Sort-algorithm' isn't called 'Quickest-Sort-algorithm' then.
    Are there any really serious drawbacks to 'Merge Sort's' optimized speed of sorting huge amounts of data - compared to Quicksort?
    You obviously don't understand big-O notation. You're assuming that if algorithm A is O(n log n) and algorithm B is O(n^2) then B is "slower" than A. That's NOT what big-O means.

    Quick sort's enormous advantage is that it can be done IN PLACE. Merge sort requires O(n) working memory; quick sort doesn't.
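    For concreteness, a minimal top-down merge sort sketch (function name and signature are illustrative). The auxiliary buffer passed alongside the array is the working memory in question; quicksort partitions within the array itself:

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Top-down merge sort on [lo, hi). The merge step writes into buf,
       an auxiliary array as large as the input -- the O(n) extra memory
       merge sort needs. */
    static void msort(int *a, int *buf, size_t lo, size_t hi) {
        if (hi - lo < 2)
            return;
        size_t mid = lo + (hi - lo) / 2;
        msort(a, buf, lo, mid);
        msort(a, buf, mid, hi);

        size_t i = lo, j = mid, k = lo;
        while (i < mid && j < hi)               /* merge the sorted halves */
            buf[k++] = (a[j] < a[i]) ? a[j++] : a[i++];
        while (i < mid) buf[k++] = a[i++];
        while (j < hi)  buf[k++] = a[j++];
        memcpy(a + lo, buf + lo, (hi - lo) * sizeof *a);
    }

    int main(void) {
        int v[] = {5, 2, 9, 1, 5, 6};
        int buf[6];                             /* the extra O(n) space */
        msort(v, buf, 0, 6);
        for (int i = 0; i < 6; i++)
            printf("%d ", v[i]);                /* prints 1 2 5 5 6 9 */
        printf("\n");
        return 0;
    }
    ```

    Taking elements from the left half first on ties (`a[j] < a[i]`, not `<=`) keeps the sort stable, which is one property quicksort gives up in exchange for being in-place.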

  7. #22
    Code Goddess Prelude's Avatar
    Join Date
    Sep 2001
    Posts
    9,796
    >Merge sort requires working memory
    It's possible to write an in-place merge sort. But doing so efficiently is an exercise in futility because the code ends up being long and complicated.
    My best code is written with the delete key.

