It is my understanding that a float in C is stored in IEEE 754 single-precision representation. However:
#include <stdio.h>

int main(void)
{
    float a = -341.274f;
    printf("%X\n", a);  /* passing a float where %X expects an unsigned int */
    return 0;
}
When I execute that, the result is:
C0755462
The IEEE 754 single precision representation for -341.274 is: C3AAA312
and double precision is: C07554624DD2F1AA
So what is happening here? It looks like the float is being stored (or at least passed to printf) as a double, but then truncated to the first 8 hex digits (32 bits). What's going on?