Can anyone explain how -3.3f is stored in memory?
I need to know how it's done. I googled it, but I couldn't make sense of what I found.
Can someone explain it to me?
Moved to Tech Board.
Right, first of all, we build the value up as a 32-bit pattern. The highest bit is the sign, and as the number is negative, that gives us 0x80000000.
Then we have the biased exponent, which is 127 + floor(log2(3.3)) = 127 + 1 = 128.
So now we have 0x80000000 | (128 << 23) = 0xC0000000.
Now we have 23 bits of mantissa to fill.
Dividing out the exponent's power of two leaves 3.3 / 2^floor(log2(3.3)) = 3.3 / 2^1 = 1.65 to deal with as the mantissa.
First remove the leading 1. (because it is implied by the format). That leaves 0.65.
First bit:
0.65 * 2 -> 1.3.
1.3 > 1.0 => 1
Second bit:
(1.3 - (previous bit)) * 2 = 0.6
0.6 > 1.0 => 0
Third bit:
(0.6 - (previous bit)) * 2 = 1.2
1.2 > 1.0 => 1
Fourth bit:
(1.2 - (previous bit)) * 2 = 0.4
0.4 > 1.0 => 0
Fifth bit:
(0.4 - (previous bit)) * 2 = 0.8
0.8 > 1.0 => 0
Sixth bit:
(0.8 - (previous bit)) * 2 = 1.6
1.6 > 1.0 => 1
Seventh bit:
(1.6 - (previous bit)) * 2 = 1.2
1.2 > 1.0 => 1
... Repeat until the 23rd bit is filled (the sequence now repeats forever, just like 1/3 or 1/7 in decimal form). The 24th bit happens to be 0, so truncating after 23 bits also gives the correctly rounded result here.
Result: 101 0011 0011 0011 0011 0011 (or 0x533333 in hex)
Combine this with the exponent and sign:
0xC0533333
Edit: as to the order those bytes are stored in memory, that depends on the endianness of the processor. x86 is little-endian: the least significant byte goes at the lowest address, so the high bits end up at the highest byte address. Other processors may do the same, or use the "other" (big) endianness, where the most significant byte is stored first in memory.
--
Mats
:) Thanks a lot for such a nice explanation! :D