1) I made this code based on the article at Wikipedia (http://en.wikipedia.org/wiki/IEEE_754).

Code:
#include <stdio.h>

int main() {
    float f = -118.625f;
    char* c = (char*)&f; /* look at the float one byte at a time */
    unsigned int i = 0;
    for (i = 0; i < sizeof(float); ++i) {
        printf("%u\n", c[i]); /* print the value of each byte */
    }
    getchar(); /* keep the console window open */
    return 0;
}
2) As far as I know, my machine is little endian.
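(For what it's worth, a quick way to check that - separate from my original program - is to look at which byte of a known integer comes first in memory:)

Code:
#include <stdio.h>

int main() {
    unsigned int x = 1;
    unsigned char* p = (unsigned char*)&x;
    /* on a little-endian machine the least significant byte sits
       at the lowest address, so the first byte of x is 1 */
    printf(p[0] == 1 ? "little endian\n" : "big endian\n");
    return 0;
}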
3) According to 1), the binary representation for -118.625f is 11000010 11101101 01000000 00000000.
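For reference, here is how I understand the derivation of that pattern: 118.625 in binary is 1110110.101, i.e. 1.110110101 x 2^6. So the sign bit is 1 (the number is negative), the exponent field is 6 + 127 = 133 = 10000101, and the mantissa field is 110110101 padded with zeros to 23 bits. Concatenating the fields gives 1 10000101 11011010100000000000000, which regrouped into bytes is 11000010 11101101 01000000 00000000 (0xC2ED4000 in hexadecimal).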
4) From 2) and 3), I expected my code above to print the values of those bytes in memory order: the first one would be 0 (00000000), the second one would be 64 (01000000), the third one would be 237 (11101101) and the last one would be 194 (11000010).
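To double-check those expected values independently of my program, I can copy the float's bits into an unsigned 32-bit integer and peel the bytes off with shifts (assuming <stdint.h> is available; the shifts give the bytes from least to most significant, which should match memory order on a little-endian machine):

Code:
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main() {
    float f = -118.625f;
    uint32_t bits;
    unsigned int i;
    memcpy(&bits, &f, sizeof bits); /* reinterpret the float's bytes as an integer */
    for (i = 0; i < sizeof bits; ++i) {
        /* extract byte i, counting from the least significant end */
        printf("%u\n", (bits >> (8 * i)) & 0xFFu);
    }
    return 0;
}

If the bit pattern from 3) is right, this should print 0, 64, 237 and 194.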
However, when I compile and execute the code above, it gives me this output:
0
64
4294967277
4294967234
You've probably guessed by now what my question is: why isn't the output the same as I expected it to be? I did notice that 4294967277 is 0xFFFFFFED and 4294967234 is 0xFFFFFFC2 in hexadecimal, so the low byte of each value does match the 237 and 194 I expected; what I don't understand is where the leading 0xFFFFFF comes from.
Thank you in advance.