Not long ago I bought "Write Great Code" volumes 1 and 2 by Randall Hyde. The books are great (at least so far), but maybe a bit above my level, so I have a few questions.
The book talks about how variables are loaded from memory and goes into detail about how different CPUs do this. From what I understand, a 32-bit processor always loads 32 bits at a time. So if you only need a 16-bit variable, the CPU still loads 32 bits but only gives you 16.
Would it be more efficient, when it comes to loading from memory, to have the program pack, say, 4 chars into an int?
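Something like this is what I have in mind (just my own minimal sketch, not code from the book, and the function names are mine):

```c
#include <stdint.h>
#include <stdio.h>

/* Pack four 8-bit chars into one 32-bit word so that a single
   32-bit load fetches all of them at once. */
static uint32_t pack4(uint8_t a, uint8_t b, uint8_t c, uint8_t d)
{
    return (uint32_t)a
         | ((uint32_t)b << 8)
         | ((uint32_t)c << 16)
         | ((uint32_t)d << 24);
}

/* Extract byte n (0..3) from the packed word. */
static uint8_t unpack(uint32_t packed, int n)
{
    return (uint8_t)(packed >> (n * 8));
}

int main(void)
{
    uint32_t w = pack4('a', 'b', 'c', 'd');
    printf("%c %c %c %c\n",
           unpack(w, 0), unpack(w, 1), unpack(w, 2), unpack(w, 3));
    return 0;
}
```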
The book also talks about arithmetic for both floating-point numbers and integers. According to the book (as far as I understand it), most CPUs are optimized for working on only one operand size. So multiplying a 16-bit variable by a 16-bit variable is slower than using two 32-bit variables, because the CPU first has to convert the two 16-bit variables into 32-bit ones for the arithmetic operation.
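To make sure I'm asking the right thing, this is the sort of thing I mean (again, my own sketch; if I understand C's integer promotion rules correctly, the 16-bit operands get widened to int before the multiply anyway, so I'm asking whether that widening costs anything):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t a = 300, b = 400;   /* 16-bit operands */
    uint32_t x = 300, y = 400;   /* 32-bit operands */

    /* The usual arithmetic conversions promote a and b to int,
       so the multiply itself happens at (at least) 32 bits. */
    uint32_t r16 = (uint32_t)(a * b);
    uint32_t r32 = x * y;

    printf("%u %u\n", (unsigned)r16, (unsigned)r32);
    return 0;
}
```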
Does the OS have any say in this? The book is a bit old, so all the examples assume a 32-bit OS on a 32-bit processor.
What would the best variable size be on a 64-bit processor running, say, 32-bit XP?
As a side question, how do I know that my program is causing cache thrashing?
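For example, I was wondering about something like the snippet below, which walks a large row-major matrix column by column; if I've understood the book, that should miss the cache far more often than a row-by-row walk. Would a profiler, or something like Linux's `perf stat -e cache-misses` or Intel VTune on Windows, be the right way to see that? (This is just my own test sketch, not from the book.)

```c
#include <stdio.h>
#include <stdlib.h>

#define N 4096

int main(void)
{
    /* One large row-major matrix, zero-initialized on the heap. */
    int *m = calloc((size_t)N * N, sizeof *m);
    if (!m)
        return 1;

    long sum = 0;

    /* Column-major walk over a row-major array: each access jumps
       N * sizeof(int) bytes ahead, so nearly every load touches a
       different cache line and earlier lines are evicted before reuse. */
    for (int col = 0; col < N; col++)
        for (int row = 0; row < N; row++)
            sum += m[row * N + col];

    printf("%ld\n", sum);
    free(m);
    return 0;
}
```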
I might have gotten everything wrong, so a bit of insight would be great.
Thanks for reading