# Thread: How to sort a simple linked list using swap-sort method

1. ## How to sort a simple linked list using swap-sort method

I would like to sort a singly linked list using the swap-sort method, which for arrays would be:
Code:
```for (i = MIN; i < MAX; i++) {
    for (j = MIN; j < MAX - 1; j++) {
        if (v[j] > v[j+1]) {
            aux = v[j];
            v[j] = v[j+1];
            v[j+1] = aux;
        }
    }
}```
I know how to do it using arrays, but I am unable to make it work using singly linked lists.
Any ideas?

2. You rearrange the "next" pointers, eg:
Code:
```if (first->value > second->value) {
    first->next = second->next;
    second->next = first;
}```
Of course, the problem here is that the node whose next was "first" should now point to second. So you can either use a doubly-linked list, or (best solution) keep track of the previous pointer as you loop, or (worst solution) use a separate function to find the node whose next is first.

Or what's on third? :0

It sounds like you're specifically after code for bubble-sorting a doubly-linked list by swapping pointers rather than values.
I do have code for that already, but let me tell you that it is very difficult to write. I'm actually on holiday at the moment, but the code is on my website.

The sorting algorithms that are easiest to write for arrays are actually often quite a bit harder for linked lists. Also those that are harder for arrays are sometimes easier for linked lists.
I believe the easiest one for linked-lists is Insertion Sort, so perhaps you could try that?
Merge Sort, however, is not much harder and is far more efficient.

4. Originally Posted by iMalc
It sounds like you're specifically after code for bubble-sorting a doubly-linked list by swapping pointers rather than values.
I do have code for that already, but let me tell you that it is very very difficult to write. Much harder than quicksort is to write for a linked-list in fact! It's the introduction of lots of special cases you'd never think of beforehand, that kills you.
This is just wrong. I've written a merge sort for linked lists (and qsort is a refined merge sort). IN ANY CASE YOU SWAP POINTERS, NEVER VALUES.

Imagine my node looks like this:
Code:
```typedef struct _node {
    char firstname[32];
    char lastname[32];
    char middleinitial;
    int group;
    float balance;
    char note[256];
    double special;
    struct _node *next;
} node;```
Now, please tell me you are going to write some linked list sorting code that swaps values and not pointers. YOU'D HAVE TO BE A TOTAL FOOL.

And telling someone that "a quicksort is much easier to write than a bubble sort" is another obvious piece of deception. I would love to hear from even one other person that agrees with what iMalc has written here today.

5. I corrected myself straight after I first posted. You must have taken over 5 minutes to post. I'm amazed you managed to see what I had written incorrectly. I realised immediately that I was thinking of one of the other sorts.

You can definitely sort a doubly-linked list by swapping values. A number of coders who are used to array sorting try this as their first attempt at linked-list sorting. I certainly agree, it's a very bad idea and you've pointed out one reason why, but some people do it. The other reason is that you are stuck with algorithms that are O(n*n), because you have neither random access nor the ability to 'insert' items as you would with a list. I never intended to imply that swapping values would be the normal way of doing it.

Edited for spelling this time.

6. Originally Posted by iMalc
You can definitely sort a doubly-linked list by swapping values. A number of coders who are used to array sorting try this as their first attempt at linked-list sorting. I certainly agree, it's a very bad idea and you've pointed out one reason why, but some people do it. The other reason is that you are stuck with algorithms that are O(n*n)
A lot of what I have read about comparative sorting on the web has been, I think, regurgitated to the point where the regurgitator has a hard time finding reality.

I don't really understand the formulas used to express the efficiency of the algorithms, but I do understand the algorithms themselves in so far as they are applied in programming. I believe the O(n^2) etc style measurements of efficiency are actually based on a mathematical abstraction of the method ("algorithm") AND NOT statistical measurement of how the method performs in practice. (In other words, this is someone's deductive reasoning and not an empirical observation). After all, you could write a sort according to the algorithm and then average or measure the number of comparisons it makes. But that IS NOT how these formulas (eg. O(n^2)) came into being, and the actual logic of them is not always made apparent.

For example, a normative statement about the bubble sort is that "bubble sort always performs n-1 passes through the data, period. Its best case and worst case behaviors are completely identical". I actually have several charts and tables of formulae that all say this, sometimes vehemently, and I imagine they all actually have the same source somewhere in time and space, which is never referenced because they appear to rest on the authority of deductive reasoning. Yet in practice this statement about bubble sort CANNOT BE TRUE.

In practice, a flag is used to indicate the completion of a sort. Bubble sort passes through the data one item at a time and swaps it forward if its value is higher than the next item's. If, after a pass, nothing has changed, the sort is done.

That means by definition bubble sort will make a variable number of passes. If the list is already sorted (the best case), you make a single pass of n-1 comparisons.

Selection sort, which works by moving the lowest values one at a time, will always make the same number of passes, since it proceeds the same way regardless. So with a badly unsorted list it will tend to do better than bubble sort, but if the list is only slightly out of order, bubble sort will do much, much better, making a number of passes and comparisons comparable to a merge sort.

So regurgitating these "measurements" out of a textbook tends to imply ??what?? about some of the computer science clergy, iMalc.

I guess it all amounts to the same thing anyway. A merge sort is better than a selection sort is better than a bubble sort, etc.

7. Originally Posted by MK27
A lot of what I have read about comparative sorting on the web has been, I think, regurgitated to the point where the regurgitator has a hard time finding reality.

I don't really understand the formulas used to express the efficiency of the algorithms, but I do understand the algorithms themselves in so far as they are applied in programming. I believe the O(n^2) etc style measurements of efficiency are actually based on a mathematical abstraction of the method ("algorithm") AND NOT statistical measurement of how the method performs in practice. (In other words, this is someone's deductive reasoning and not an empirical observation). After all, you could write a sort according to the algorithm and then average or measure the number of comparisons it makes. But that IS NOT how these formulas (eg. O(n^2)) came into being, and the actual logic of them is not always made apparent.

For example, a normative statement about the bubble sort is that "bubble sort always performs n-1 passes through the data, period. Its best case and worst case behaviors are completely identical". I actually have several charts and tables of formulae that all say this, sometimes vehemently, and I imagine they all actually have the same source somewhere in time and space, which is never referenced because they appear to rest on the authority of deductive reasoning. Yet in practice this statement about bubble sort CANNOT BE TRUE.

In practice, a flag is used to indicate the completion of a sort. Bubble sort passes through the data one item at a time and swaps it forward if its value is higher than the next item's. If, after a pass, nothing has changed, the sort is done.
All that means is that you @#$#@$ed up the implementation of bubble sort -- or, to be more kind, are implementing some other sort than bubble sort. If you have a flag, it's not bubble sort, since the definition of bubble sort contains no such thing. You can say that it's an easy addition to the algorithm (which it is), and is beneficial (which it is), but that still makes it "something other than bubble sort". An algorithm and its implementation are not the same thing.

8. Originally Posted by tabstop
All that means is that you @#$#@$ed up the implementation of bubble sort -- or, to be more kind, are implementing some other sort than bubble sort. If you have a flag, it's not bubble sort, since the definition of bubble sort contains no such thing. You can say that it's an easy addition to the algorithm (which it is), and is beneficial (which it is), but that still makes it "something other than bubble sort". An algorithm and its implementation are not the same thing.
Oh really? This is a bubble sort:
1) Compare an item with the next one.
2) Swap if necessary.
3) Move forward one.
4) Repeat until the list is sorted.

Find me a bubble sort that does not use a flag. The other option is

4) Repeat infinitely.

If "an algorithm and its implementation are not the same thing" means the difference between something that WILL NOT RUN ON A COMPUTER and something that will, then we agree.

I have not "improved" the bubble sort by using a flag -- and they all do -- because otherwise an implementation IS NOT POSSIBLE.

However, I think I now understand the root of what is obviously a historical argument.

9. Originally Posted by MK27
Find me a bubble sort that does not use a flag.
That's not hard -- any bubble sort will do:

Code:
```for start <- 0 to n-2
    for i <- n-1 downto start+1
        if array_(i-1) > array_i
            swap array_(i-1) and array_i```
If you're an "end for" partisan (or a "next" partisan) then you can close the for-loops appropriately. (I happen to be an "indentation" partisan, so the loops are closed.) You don't have to believe me -- you can check Sedgewick (p. 278 in my copy) if you want.

Okay! So the formula is O(n^2) because this sort must iterate through the entire list a number of times equal to the length of the list (which, with a flag, would be the worst case)?

You may have redeemed computer science in my eyes, tabstop. But barely.

ps. I thought Sedgewick was a girl in a Woody Allen film.

11. Originally Posted by MK27
Okay! So the formula is O(n^2) because this sort must iterate through the entire list a number of times equal to the length of the list (which, with a flag, would be the worst case)?

You may have redeemed computer science in my eyes, tabstop. But barely.

ps. I thought Sedgewick was a girl in a Woody Allen film.
That's sort of right -- the worst case with a flag is the only case without a flag -- do all the steps that could possibly be needed. However, each time through the list takes one less step (since one number is in place), so it's 1+2+3+...+(n-1) = (n^2-n)/2, which is O(n^2).

12. Originally Posted by tabstop
However, each time through the list takes one less step (since one number is in place), so it's 1+2+3+...+(n-1) = (n^2-n)/2, which is O(n^2).
So it's polynomial instead of exponential...

13. Originally Posted by MK27
A lot of what I have read about comparative sorting on the web has been, I think, regurgitated to the point where the regurgitator has a hard time finding reality.

I don't really understand the formulas used to express the efficiency of the algorithms, but I do understand the algorithms themselves in so far as they are applied in programming. I believe the O(n^2) etc style measurements of efficiency are actually based on a mathematical abstraction of the method ("algorithm") AND NOT statistical measurement of how the method performs in practice. (In other words, this is someone's deductive reasoning and not an empirical observation). After all, you could write a sort according to the algorithm and then average or measure the number of comparisons it makes. But that IS NOT how these formulas (eg. O(n^2)) came into being, and the actual logic of them is not always made apparent.

For example, a normative statement about the bubble sort is that "bubble sort always performs n-1 passes through the data, period. Its best case and worst case behaviors are completely identical". I actually have several charts and tables of formulae that all say this, sometimes vehemently, and I imagine they all actually have the same source somewhere in time and space, which is never referenced because they appear to rest on the authority of deductive reasoning. Yet in practice this statement about bubble sort CANNOT BE TRUE.

In practice, a flag is used to indicate the completion of a sort. Bubble sort passes through the data one item at a time and swaps it forward if its value is higher than the next item's. If, after a pass, nothing has changed, the sort is done.

That means by definition bubble sort will make a variable number of passes. If the list is already sorted (the best case), you make a single pass of n-1 comparisons.

Selection sort, which works by moving the lowest values one at a time, will always make the same number of passes, since it proceeds the same way regardless. So with a badly unsorted list it will tend to do better than bubble sort, but if the list is only slightly out of order, bubble sort will do much, much better, making a number of passes and comparisons comparable to a merge sort.

So regurgitating these "measurements" out of a textbook tends to imply ??what?? about some of the computer science clergy, iMalc.

I guess it all amounts to the same thing anyway. A merge sort is better than a selection sort is better than a bubble sort, etc.
You seem to be putting all kinds of words in my mouth. Big-Oh notation is of course all about the number of operations performed during a sort in the worst case, taking the steepest of those curves. Hence anything that is, for example, n*n/2 + n - 1 is just O(n*n). It's all about mathematics and logical reasoning and deduction. Nested loops are clearly n*m operations, where n and m are the loop counts of the respective loops. Clearly then, if n and m are the same, you have an n*n factor. In the absence of something greater, the n*n is all that matters.
I found that all the sorting algorithms that are possible when you remove both random access and the ability to insert/remove in constant time from the middle of the data are limited to being in the O(n*n) class of algorithms or worse. I'd be pleased to be proven wrong there though, as it would mean there is an algorithm I've overlooked.

I don't see any relevance in the fact that bubble sort can be optimised to perform n-1 operations in the best case. That still means it's O(n*n), because the worst case is unchanged. Saying that all implementations of bubble sort use a flag is either wrong, or rather naive at best. That isn't even the best way to improve bubble sort: instead of a boolean flag, you can, for the same amount of effort, store the integer position that the bubble floated to, and this improves the running time in even more cases. It's still bubble sort; it just eliminates even more redundant comparisons in certain cases.

Yes, I possibly made a mistake, and corrected myself, when I momentarily stated that bubble sort for a linked list (by swapping pointers) is harder to write than quicksort. As it happens, though, it actually is pretty difficult anyway, and unless you've actually tried it (and succeeded) you can't really say otherwise. It probably depends on the language too.
If you do attempt it, be sure to test it very thoroughly, as you may think you have it right, but it might fall over in certain circumstances.
If that's not hard enough for you, try shaker sort for a doubly-linked list. Now that's seriously hard to implement and get right, and I would confidently say that it is harder than just implementing quicksort! Make sure you're doing it by swapping pointers only, too.

You should know that I am emphatically NOT someone who just quotes other literature on a subject. I am someone who always learns by doing. I remember a lecture on Prolog in which we were all told that we should never ever use a certain construct. I argued this to death with my flatmate, because before we were even told this I had written a program which made specific use of that construct. About a month later, when the lecturer thought we were at a high enough level to understand it, he told us how he had lied earlier and that there was a specific usage for the construct I mentioned. I could fully understand why he had dumbed it down for us earlier, as using this construct when we didn't understand it would have been disastrous, but I discovered the correct use for it on my own. Anyway, I most certainly do not pass on knowledge I have not at least verified myself.
As it happens, my claim above about being limited to O(n*n) algorithms when there is no random access or insertion is my own claim. I'm not even aware of any literature making the same claim. I'm as against people simply regurgitating information they've had shoved down their throats as you are. So, try not to jump to conclusions.

You've no idea how many times I edited that post afterwards...

14. Originally Posted by iMalc
You seem to be putting all kinds of words in my mouth.
No, actually I've been trying to take you back from the dark side. Of course, I never really believed what I was saying, I just thought I did.

I've also formed a sort of emotional defensiveness about bubble sort, since the first time I had to sort anything, that's what I intuitively wrote. laserlight later identified it as such, scornfully I think -- and now tabstop says I've been using some kind of cheat flag!

So when I heard somebody else in history had come up with better methods, I felt determined to master them. Not to be outdone as a masochist, I'm actually writing (another) program that performs four or five different styles of sorts, counts comparisons and list iterations, shuffles, generates random lists, switches criteria, breaks ties with secondary criteria, etc. Real fun for the whole family! So I expected to see those O(n^2) tables confirmed, and thankfully tabstop cleared this flag issue up, or my suspicion that many computer science gurus are more determined to obfuscate than explain would have reached critical proportions.

It's all about mathematics and logical reasoning and deduction.
No doubt. I've never taken any math beyond grade ten, but I imagine I will have to if I ever want to work as a programmer. It just seems to me that most computer math is really not complicated, but it ends up couched in a "confusing to the layman" style, because the assumption (probably correct in most cases) is that the student of computers was already a student of math. What's further implied is that you have to be, because it's so useful, and that's part of what I doubt. So anyway, instead of one esoteric, obscurantist swamp to wade in, there are two! I should probably be happy. But seriously (I hope someone is listening!), much of the information could be explained much better, rather than just sticking to that old story. A lot of web tutorials read like photocopies of one another, which might be okay if the original had more merit. The more such "methods of explanation" get repeated, the more they acquire a weight potentially divorced from their real value. That is probably the heart of my criticism, iMalc. Honestly. About guarding the establishment. It would be nice if we had more "look, I did this/had to do this, and I'm not saying I regret it myself, but for the future maybe we really don't need to use so much Latin where English will do".

I found that all the sorting algorithms that are possible when you remove both random access and the ability to insert/remove in constant time from the middle of the data are limited to being in the O(n*n) class of algorithms or worse. I'd be pleased to be proven wrong there though, as it would mean there is an algorithm I've overlooked.
Is that a gauntlet on the floor?!

Just kidding. By "insert/remove in constant time" I presume you mean the difference between a linked list and an array, where the entire array must be re-numbered. "Random access" I'll have to google...ugh.

I don't see any relevance in the fact that bubble sort can be optimised to perform n-1 operations in the best case.
Well, one possible relevance is that in circumstances where a list is mostly sorted but slightly changed, bubble sort may be the best choice.

That isn't even the best way to improve bubble sort: instead of a boolean flag, you can, for the same amount of effort, store the integer position that the bubble floated to, and this improves the running time in even more cases.
Hmmm.

If that's not hard enough for you, try shaker sort for a doubly-linked list. Now that's seriously hard to implement and get right, and I would confidently say that it is harder than just implementing quicksort! Make sure you're doing it by swapping pointers only, too.
This one is definitely an armoured glove.

You should know that I am emphatically NOT someone who just quotes other literature on a subject.
Okay, but if you are worried that you potentially sound like someone who is (are you worried, iMalc?) but really you are not, why not test yourself further by seeing if you can come up with a new/different/better method of explanation rather than resorting to the one everyone's heard already? At least then there is the chance of some original thought occurring.

Anyway, I most certainly do not pass on knowledge I have not at least verified myself.
...said the spider to the fly. What if The Emperor Wears No Clothes?

15. Pursuant to all this, what's the relative intensity of processor activity for the following actions:
1. iterating through a character array
2. comparing two chars (numerically)
3. comparing two ints (numerically)