The way floating point numbers are stored is very different from the way integers are stored. The bits of a floating point number are (usually) partitioned into 1) a sign bit, 2) a mantissa, and 3) an exponent. It basically stores numbers in a form of scientific notation, so the number of 'significant figures (in bits)' is the size of the mantissa. For the IEEE 754 standard, the split is 1 sign bit, 8 exponent bits, and 23 mantissa bits for a 32-bit float (1 sign, 11 exponent, and 52 mantissa bits for a 64-bit double). As you can see, you can't store every number, just a certain number of significant digits, so adding 1.0 to a very large number may very well yield the exact same number.
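Here's a small C sketch of that last point, assuming a 64-bit IEEE 754 double (52-bit mantissa) and a 32-bit float (23-bit mantissa) -- the exact thresholds (2^53 and 2^24) follow from those mantissa sizes:

```c
#include <stdio.h>

int main(void)
{
    /* 2^53 is the first point where a double can no longer tell
       n and n+1 apart: the 52-bit mantissa (plus the implicit
       leading 1 bit) has run out of significant bits. */
    double big = 9007199254740992.0;          /* 2^53 */
    double big_plus_one = big + 1.0;

    printf("big       = %.1f\n", big);
    printf("big + 1.0 = %.1f\n", big_plus_one);
    printf("equal? %s\n", (big_plus_one == big) ? "yes" : "no");

    /* The same thing happens much sooner with a 32-bit float,
       whose 23-bit mantissa gives out at 2^24. */
    float fbig = 16777216.0f;                 /* 2^24 */
    float fbig_plus_one = fbig + 1.0f;        /* store to force rounding to float */
    printf("float equal? %s\n", (fbig_plus_one == fbig) ? "yes" : "no");

    return 0;
}
```

On a typical machine both comparisons print "yes": the +1.0 is simply rounded away because it falls below the last bit of the mantissa at that magnitude.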