Originally Posted by Adak
The way I did this in very small memory was to use tiers:
1) Load up 600 objects at a time (I used structs, but the same idea works for strings).
2) Quicksort each of these first-tier arrays in memory.
3) Write each sorted batch to a file with a number (or letter) appended to the filename: filename1, filename2, etc.
4) After all the data had been sorted out into these numbered files, I had dozens of them, which was perfect for a merge.
5) Every level of the merge phase halved the number of files remaining to be merged (both phases are sketched below).
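Here is a minimal sketch of the run-generation phase (steps 1-3), assuming fixed-length text lines rather than structs. The chunk size, the "runN.txt" names, and the function name are all hypothetical, and error handling is kept to a minimum:

[code]
/* Run generation: read up to CHUNK lines, quicksort them in memory,
   and write each sorted chunk out as run1.txt, run2.txt, ...
   CHUNK, LINE_LEN, and the "runN.txt" names are illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK    600   /* records per in-memory tier, as in the post */
#define LINE_LEN 128   /* fixed maximum line length (assumption)     */

static int cmp(const void *a, const void *b)
{
    return strcmp((const char *)a, (const char *)b);
}

int make_runs(const char *input)
{
    static char buf[CHUNK][LINE_LEN];  /* the "very small memory" tier */
    FILE *in = fopen(input, "r");
    int nruns = 0, n;

    if (!in)
        return -1;

    do {
        /* Step 1: load up to CHUNK lines. */
        for (n = 0; n < CHUNK && fgets(buf[n], LINE_LEN, in); n++)
            ;
        if (n == 0)
            break;

        /* Step 2: quicksort this tier in memory. */
        qsort(buf, n, LINE_LEN, cmp);

        /* Step 3: write it out with a number appended to the name. */
        char name[32];
        snprintf(name, sizeof name, "run%d.txt", ++nruns);
        FILE *out = fopen(name, "w");
        for (int i = 0; i < n; i++)
            fputs(buf[i], out);
        fclose(out);
    } while (n == CHUNK);

    fclose(in);
    return nruns;  /* how many sorted run files were created */
}
[/code]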
It did a fine job, and quicker than you'd expect. The smaller files merged very quickly because they fit into the drive's buffer, and using an SSD makes this an acceptably fast sort.
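And a sketch of one pairwise merge (step 5): combining two sorted run files into a third. Repeating this over every pair of runs at each level halves the file count until a single sorted file remains. Again, the function name is hypothetical and error checks are trimmed for brevity:

[code]
/* One pairwise merge: combine two sorted run files into a third.
   Applying this to every pair of runs at each level halves the
   number of files, until a single sorted file remains.
   File names are supplied by the caller; error checks trimmed. */
#include <stdio.h>
#include <string.h>

#define LINE_LEN 128   /* must match the run-generation sketch */

void merge_two(const char *a_name, const char *b_name, const char *out_name)
{
    FILE *a   = fopen(a_name, "r");
    FILE *b   = fopen(b_name, "r");
    FILE *out = fopen(out_name, "w");
    char la[LINE_LEN], lb[LINE_LEN];
    int have_a = (fgets(la, sizeof la, a) != NULL);
    int have_b = (fgets(lb, sizeof lb, b) != NULL);

    /* Emit the smaller front line of the two files, then refill it. */
    while (have_a && have_b) {
        if (strcmp(la, lb) <= 0) {
            fputs(la, out);
            have_a = (fgets(la, sizeof la, a) != NULL);
        } else {
            fputs(lb, out);
            have_b = (fgets(lb, sizeof lb, b) != NULL);
        }
    }
    /* Drain whichever file still has lines left. */
    while (have_a) { fputs(la, out); have_a = (fgets(la, sizeof la, a) != NULL); }
    while (have_b) { fputs(lb, out); have_b = (fgets(lb, sizeof lb, b) != NULL); }

    fclose(a);
    fclose(b);
    fclose(out);
}
[/code]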
When you have massive - really massive - amounts of data that need to be sorted and can't fit into main memory, they can still be sorted using this tiered approach. It's not as simple or as fast as sorting in main memory, however.