Question about endianness
I was wondering: what is the advantage of the different endian systems? To me it looks like little and big endian would have the same advantages/disadvantages, while middle endian would just make things more complicated.
From what I have found out, handheld consoles tend to favor the little endian system, while bigger systems tend to be big endian.
But does it really make a difference how memory is stored?
It is probably more of a coincidence (as in, handheld systems prefer ARM or MIPS, bigger systems prefer x86) than anything else.
In the old days, the benefit of a little endian system was that a 16-bit update (for example an increment or decrement) could potentially be done by writing the low byte only and skipping the high byte - assuming you have an 8-bit bus, that is. On a 16-bit bus, you could likewise do a 32-bit update by only writing the low half.
These sorts of benefits are no longer interesting, since these types of updates go to cache in 99.99% of all cases (and 100% of performance-critical ones).
So, it's just a choice made by the processor designers these days.
And probably based only on historical reasons or backwards compatibility issues with code that makes specific use of endianness.
Is the difference between the two something programmers have to think about, or is it all done by the compiler?
How can I convert between little and big? I found a few code examples of how to do it, but not a mathematical formula.
You will have to worry about endianness if you are programming low-level data communication between different systems where you can't assume they share the same endianness (which is basically anything network-related these days), or if you are using libraries that warn you about their lack of endianness support.
As for converting, information is all over the web.
Just to clarify, x86 uses little endian.