Consider the following code:
#include <stdio.h>

int main()
{
    float a = 17.23;
    int b = *(int*)(&a);

    printf("Original float formatted as integer is %d\n", a); // outputs junk, 1893822056 on Linux using gcc,
                                                              // 1073741824 on Windows using Visual Studio 2010
    printf("Original float formatted as float is %.2f\n", a); // correct output (17.23)
    printf("b formatted as integer is %d\n", b);              // outputs junk (different from the corresponding junk for a);
                                                              // same junk on Windows and Linux, though: 1099552522
    printf("b formatted as float is %.2f\n", b);              // outputs 0.00 on Windows and 17.23 on Fedora using gcc
    return 0;
}
This raises several questions for me about what is actually going on here. I'll start with what I think is the simplest one: what is happening in the assignment

int b = *(int*)(&a);
My hypothesis is this: the result is junk (but consistent junk) because the cast takes the 4 bytes of memory in which a (a float) is stored and interprets those same bytes as if they were storing an int.
Is this hypothesis correct?
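One way I thought of to check this (my own sketch, not part of the original program) is to copy the bytes of the float into a same-sized unsigned integer with memcpy, which reads the same four bytes without the pointer cast. Assuming float and uint32_t are both 4 bytes here, I would expect it to print the same 1099552522 (0x4189D70A) that b shows above:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float a = 17.23f;
    uint32_t bits;

    /* Copy the 4 bytes that make up the float into a 4-byte unsigned
       integer: same object representation, but no pointer cast involved. */
    memcpy(&bits, &a, sizeof bits);

    printf("same bytes as unsigned: %" PRIu32 "\n", bits);     /* expect 1099552522 */
    printf("same bytes in hex:      0x%08" PRIX32 "\n", bits); /* expect 0x4189D70A */
    return 0;
}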
That then brings up the question of how floating-point numbers are actually encoded as a binary sequence.
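To frame that last question: my working assumption (not something the C standard itself guarantees) is that both gcc and Visual Studio store a float in IEEE-754 single precision, i.e. 1 sign bit, 8 exponent bits biased by 127, and 23 fraction bits with an implicit leading 1. This sketch pulls those fields out of the bit pattern obtained above:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float a = 17.23f;
    uint32_t bits;
    memcpy(&bits, &a, sizeof bits);   /* bits should be 0x4189D70A here */

    /* Assuming IEEE-754 single precision: [ sign | exponent | fraction ] */
    unsigned sign     = (bits >> 31) & 0x1u;      /* 1 bit   */
    unsigned exponent = (bits >> 23) & 0xFFu;     /* 8 bits  */
    unsigned fraction =  bits        & 0x7FFFFFu; /* 23 bits */

    printf("sign     = %u\n", sign);                                      /* 0 -> positive */
    printf("exponent = %u (unbiased %d)\n", exponent, (int)exponent - 127); /* 131 -> 2^4  */
    printf("fraction = 0x%06X\n", fraction);                              /* 0x09D70A      */

    /* Value = (1 + 0x09D70A / 2^23) * 2^4, which is approximately 17.23 */
    return 0;
}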