# Thread: how is float stored internally

1. ## how is float stored internally

please explain how a float gets stored internally in 4 bytes

2. Regardless of the number of bytes it is represented in, there will be one bit for the sign (is the value positive or negative?), some number of bits representing the mantissa (a fractional value), and another set of bits representing an exponent (a signed integral value).

The actual value is then given by sign*mantissa*2^exponent (where ^ is used to represent exponentiation, and 2 is the base for binary floating point formats).

The actual number of bits used to represent the mantissa or exponent is implementation dependent (i.e. it is determined by the compiler writer or the system the program runs on). The location of the bit fields within a floating point value is also implementation dependent (e.g. the sign bit could be anywhere).

Some floating point formats (e.g. IEEE) reserve certain bit patterns so they can represent notions such as "infinity" and "NaN" (not a number). Not all systems use IEEE formats though.

3. Interesting. So if you write code to manipulate the exponent bits, you could theoretically do a float shift left and float shift right?

This might come in handy.

4. Originally Posted by Bubba
Interesting. So if you write code to manipulate the exponent bits, you could theoretically do a float shift left and float shift right?
Sort of. The "exponent bits" vary with implementation (i.e. between compilers, and between different hardware or operating systems). Which means that what works on one machine won't work on another.

One of the reasons that bit shifting operators have no meaning in C for floating point types is that the result of such operations would be highly implementation dependent.

As every type of data on computers ultimately comes down to being a set of bits (and will, unless there is a significant shift of technology) it is theoretically possible to manipulate individual bits (or sets of bits) in any type of data. But there's little point in manipulating bits unless the result of a manipulation (or set of manipulations) also has some useful meaning.

5. Which it does.....faster floating point multiplies and divides through the use of bit shifts via exponent bit manipulation.

I'm only concerned about the FPU for the x86. I think you could also come up with a fixed-point based system using floats and float bit shifts.
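The "float shift" idea above can be sketched as follows, again assuming IEEE 754 single precision: multiplying by 2^k just adds k to the biased exponent field, so no mantissa arithmetic is needed. The function name is mine, and this naive version mishandles zeros, subnormals, infinities, NaN, and exponent overflow; the standard library's `ldexpf` does the same job correctly:

```c
#include <stdint.h>
#include <string.h>

/* Multiply a float by 2^k by bumping its biased exponent field.
   Assumes IEEE 754 single precision and a normalized, finite,
   non-zero input; edge cases are deliberately ignored in this sketch. */
float float_shift(float f, int k)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    bits += (uint32_t)k << 23;   /* the 8-bit exponent starts at bit 23 */
    memcpy(&f, &bits, sizeof f);
    return f;
}
```

So `float_shift(3.0f, 2)` yields 12.0f and `float_shift(1.0f, -1)` yields 0.5f, without touching the FPU. Whether this beats a plain multiply on a modern x86 is doubtful, since moving values between integer and floating point registers has its own cost.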

6. Originally Posted by Bubba
Which it does.....faster floating point multiplies and divides through the use of bit shifts via exponent bit manipulation.

I'm only concerned about the FPU for the x86. I think you could also come up with a fixed-point based system using floats and float bit shifts.
Sure, but why not use the FPU itself? Odds are it will be more efficient than any software that emulates it. I suppose, if you're trying to write an emulator that (say) allows a PowerPC to execute programs that target an x86, then what you're trying to do is valid.....