You could malloc (or calloc) the array from the "heap". Local memory comes from the "stack", which is a small amount of memory (usually 1 to 3 MB) set aside specifically for the function to use. The heap is a MUCH bigger pool of memory you can draw from, AND it is global: you can malloc an array inside a function, return its location (the address), and keep using it from any other function. Just don't lose the address of the array! (Easy to do, since the name of the array is the address.)
Generally, if you need to sort a bunch of linked lists, you want to use Merge sort.
A tiered sorting approach is quite quick for huge amounts of data. The original bins are small compared to the size of the data, so you might have several thousand of them. I used Quicksort for the initial bin sorting, but you could use Merge sort, of course. The main thing is to use Merge sort for merging all the small bins (which are themselves already sorted) into a second tier of bins (files). The second-tier files are then merged together to create the sorted third tier. With every tier level, you cut the number of bins needed in half and double their size.
One big advantage of this is that after the first-tier bins, you don't need an array or list to hold the data for you. You sort right off the HD, with just a few variables. It gives the HD a workout, but a drive with a large buffer will help, and an SSD will absolutely scream through this.
The logic is fairly simple. You count the number of bins (files) you make at each level, and name the files so your program can find them: bin1, bin2, etc. I included a letter, like bin1A, where the A designated the tier level of the bin, but that wasn't needed, since every tier level is dealt with before the next tier's merging begins. Parallel processing may require such a naming scheme, however.