Thanks a lot for such a descriptive answer. Let us just consider normal values, or a normal way; now, which is the quickest according to you?
O_o

Unless the algorithms in question have certain optimizations, I would not use any of them as a general purpose sort.

The truth is, the simple approaches to implementing the algorithms in question have performance and memory characteristics that I would find unacceptable without knowing a lot of details about the data.

If you don't have significant statistical information about your data and performance characteristics about operations on that data, you should be reaching for something with more stable performance in the face of diverse data sets.

You'll find that, for the default unordered sort algorithm, virtually all libraries and languages implement a modified "Quicksort" with a balanced median pivot that "decays" into a different form of sort according to some criteria. (The algorithm known as "Introsort" is a specific flavor of exactly these modifications and optimizations, though a "Radix Sort" flavor is also very popular.)

You'll find that, for the default stable sort algorithm, virtually all libraries and languages implement a modified "Insertion Sort" with a speculative "single shot" allocation of pointers that "decays" into a flavor of "Merge Sort" according to some criteria. (The algorithm commonly known as "Timsort" is a specific flavor of exactly these modifications and optimizations.)

In both cases, the modifications and optimizations push the boundaries for comparison-based sorts by tweaking behavior during execution, giving extremely stable average-case performance over extremely diverse data sets. The "trade off" for using these algorithms is that you don't need to know anything about your data: you throw the data at the algorithm, and the algorithm does the best it can for the particulars of that data. In a very real way, you've traded a small performance hit, the cost of the adaptive behavior, for not needing to care too much about the particulars of your data.

The point is, without knowing a lot about the situation, I would not speculate on the performance of simple implementations of those algorithms. I'd simply run some tests through my test suite and profiler with some reasonable data sets. So my answer would be, as most everyone else will have assumed, dependent on the answers I might give in speculation to the questions I've asked.

In other words, I cannot personally answer the real question you've asked, because I would not speculate on the performance of those algorithms. I would instead speculate on the nature of the data and use that information, together with what I know about the algorithms from benchmarks, to choose. They all have their place, and I would do my best to choose according to my data.

Building on what Adak says, choosing one sorting algorithm over another should be a carefully considered and preferably measured process.

Soma