When programming for Windows it doesn't take too long to find out that the preferred character size is 2 bytes (UTF-16LE). Generally, when programming for *nix I've only ever used 1-byte characters (that is, often UTF-8, not necessarily just ASCII). But when writing a wxWidgets application for Linux, I discovered pretty quickly that it was using 4-byte characters!
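To make the size differences concrete, here's the kind of quick check I mean (a minimal C++ sketch of my own; char16_t/char32_t assume a C++11 compiler, and the exact sizes are implementation-defined):

    #include <cstdio>

    int main() {
        // wchar_t is 2 bytes on Windows (UTF-16) and typically 4 bytes on Linux
        // (UTF-32), which lines up with the 4-byte characters I saw in wxWidgets.
        std::printf("char:     %zu byte(s)\n", sizeof(char));
        std::printf("wchar_t:  %zu byte(s)\n", sizeof(wchar_t));
        std::printf("char16_t: %zu byte(s)\n", sizeof(char16_t));
        std::printf("char32_t: %zu byte(s)\n", sizeof(char32_t));
        return 0;
    }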
Of course, when it comes to storage, I can see that UTF-8 is almost always the way to go. But it made me wonder: are there speed considerations to be had? Does a 32-bit or wider processor perform 8-, 16-, and 32-bit calculations or RAM I/O at different rates? Or does the clock ensure that instructions operating on operands at or below the processor's word size all take the same length of time?
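Concretely, the sort of naive experiment I have in mind is something like this (a rough sketch I made up, assuming a C++11 compiler; memory bandwidth, caching, and auto-vectorisation would blur the results, so it only hints at an answer rather than settling it):

    #include <chrono>
    #include <cstdint>
    #include <cstdio>
    #include <numeric>
    #include <vector>

    // Time a summation over the same number of elements at three integer widths.
    template <typename T>
    double time_sum(std::size_t n) {
        std::vector<T> data(n, T{1});
        auto start = std::chrono::steady_clock::now();
        // volatile keeps the compiler from discarding the loop entirely
        volatile unsigned long long total =
            std::accumulate(data.begin(), data.end(), 0ULL);
        auto stop = std::chrono::steady_clock::now();
        (void)total;
        return std::chrono::duration<double, std::milli>(stop - start).count();
    }

    int main() {
        const std::size_t n = 10000000; // 10 million elements per width
        std::printf("8-bit : %.2f ms\n", time_sum<std::uint8_t>(n));
        std::printf("16-bit: %.2f ms\n", time_sum<std::uint16_t>(n));
        std::printf("32-bit: %.2f ms\n", time_sum<std::uint32_t>(n));
        return 0;
    }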