Thread: float13

  1. #31
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Quote Originally Posted by laserlight View Post
    Hence the way out of the problem is to use a decimal representation with sufficient precision.
    But that, of course, doesn't solve ALL problems - only the ones where the number is one that can be described precisely using decimal notation. There are plenty of numbers that can be described in a base N precisely, but not in other bases. E.g. 1/3 is 0.1 in base 3. It is not possible to describe 1/3 precisely in base 10 or base 2.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  2. #32
    C++ Witch laserlight's Avatar
    Join Date
    Oct 2003
    Location
    Singapore
    Posts
    28,413
    Quote Originally Posted by esbo
    It's not the first time though, is it? It is the second time as far as I can see; the first
    round down occurs on line 7.
    Interestingly, note that that round down should have happened on the previous iteration. Somehow, x was printed as 0.7000000477 on the sixth iteration and then printed as 0.6999999881 on the seventh iteration. Adding 0.1f caused x to be safely above 0.8, hence the problem observed on line 13 did not occur.
    Quote Originally Posted by Bjarne Stroustrup (2000-10-14)
    I get maybe two dozen requests for help with some sort of programming or design problem every day. Most have more sense than to send me hundreds of lines of code. If they do, I ask them to find the smallest example that exhibits the problem and send me that. Mostly, they then find the error themselves. "Finding the smallest program that demonstrates the error" is a powerful debugging tool.
    Look up a C++ Reference and learn How To Ask Questions The Smart Way

  3. #33
    Fountain of knowledge.
    Join Date
    May 2006
    Posts
    794
    Quote Originally Posted by tabstop View Post
    So as you can see, 0.1f is slightly bigger than 0.1 (the last 15 there). So even though 0.7f is represented by a number less than 0.7 (0.6999999881), 0.6f + 0.1f is still bigger than 0.7 (0.7000000477). So all we've proved is that 0.6f + 0.1f does not actually equal 0.7f.
    But 0.6999999881 (0.7) + 0.1000000015 (0.1) is still less than 0.8, so how does it get an 8?
    Or whatever?

    0.1000000015
    0.6999999881+
    ===========
    0.7999999896
    ============

  4. #34
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Quote Originally Posted by C_ntua View Post
    Lol, tiny elves. You could use an "analog" computer instead of a normal digital one and still use electricity. Too expensive? Sell the elves. Too slow? Have the elves make you a more efficient one. Your elf solution is inefficient and against their rights
    The problem with using base-ten in computer logic is that everything is much more complicated. Any operation on two single-digit binary inputs has only 4 possible input combinations. The same operation on two single-digit decimal inputs has 100 possible combinations. So we need much more complicated logic in the chip to do those things - and it's not worth the effort in most cases. In MANY cases, we can use integer or array based math to solve problems that rely on precise results (e.g. money-counting applications that require precision of cents on national-debt sized values). For most math, the input values are not precise enough to warrant a huge precision - calculating the distance to the moon will be good enough if you have, say, 15 decimal places at the most - any more and you'll have to first measure the amount of dust on the precise location where you wish to land your lunar lander, or whatever you need the calculation for.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  5. #35
    C++ Witch laserlight's Avatar
    Join Date
    Oct 2003
    Location
    Singapore
    Posts
    28,413
    Quote Originally Posted by matsp
    There are plenty of numbers that can be described in a base N precisely, but not in other bases. E.g. 1/3 is 0.1 in base 3. It is not possible to describe 1/3 precisely in base 10 or base 2.
    However, if you are only interested in some given maximum precision, 1/3 can be represented in base 10 to that precision. I think that it may be more accurate to say that 1/3 cannot be exactly represented in base 10, since an exact representation of 1/3 in base 10 would require infinitely many digits.

    Quote Originally Posted by esbo
    But 0.6999999881 (0.7) + 0.1000000015 (0.1) is still less than 0.8, so how does it get an 8?
    Due to the floating point calculation. Your example uses fixed point, not floating point.
    Quote Originally Posted by Bjarne Stroustrup (2000-10-14)
    I get maybe two dozen requests for help with some sort of programming or design problem every day. Most have more sense than to send me hundreds of lines of code. If they do, I ask them to find the smallest example that exhibits the problem and send me that. Mostly, they then find the error themselves. "Finding the smallest program that demonstrates the error" is a powerful debugging tool.
    Look up a C++ Reference and learn How To Ask Questions The Smart Way

  6. #36
    and the Hat of Guessing tabstop's Avatar
    Join Date
    Nov 2007
    Posts
    14,336
    Quote Originally Posted by esbo View Post
    But 0.6999999881 (0.7) + 0.1000000015 (0.1) is still less than 0.8, so how does it get an 8?
    Or whatever?

    0.1000000015
    0.6999999881+
    ===========
    0.7999999896
    ============
    So let's do this again. 0.1f is 0.0001100110011001100110011b. 0.7f is 0.101100110011001100110011b. Adding these together, and rounding the last digit as appropriate (key! -- the computer knows that it shifted a 1 into the first spot that can't be stored, so it rounds the actual stored part up) gives 0.110011001100110011001101b (that's why the pattern looks weird -- it got rounded). As a decimal that is 0.800000011920929.

  7. #37
    Fountain of knowledge.
    Join Date
    May 2006
    Posts
    794
    Quote Originally Posted by tabstop View Post
    So let's do this again. 0.1f is 0.0001100110011001100110011b. 0.7f is 0.101100110011001100110011b. Adding these together, and rounding the last digit as appropriate (key! -- the computer knows that it shifted a 1 into the first spot that can't be stored, so it rounds the actual stored part up) gives 0.110011001100110011001101b (that's why the pattern looks weird -- it got rounded). As a decimal that is 0.800000011920929.
    It does not seem that convincing given the amount by which it is below 0.8.
    It's not like it is the last digit, it's the third to last digit.
    Can you prove 0.1f = 0.0001100110011001100110011b?

    Basically I want you to prove everything.
    For example 0.800000011920929 has 8 digits wrong; that's a factor of 100 million times the smallest digit
    possible to represent, which seems a little wide.
    Like aiming a rocket at Baghdad and hitting Alpha Centauri.
    Last edited by esbo; 12-17-2008 at 02:40 PM.

  8. #38
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    What do you mean, 100 million times the smallest digit?

    A 32-bit floating point value (which a float is on 99% of all machines you are likely to encounter) has a 23-bit stored mantissa, 24 significant bits with the implicit leading 1, which means that you get 24 * log10(2) -> approx 7 digits of precision. Anything beyond that is in itself an artifact of rounding during the printout. Note that floating point values are printed as double values, and the content of the double is extended by the FPU during the conversion from float to double, which generally means that the remaining mantissa bits (a double has 52 stored fraction bits) are filled with zeros. But when we expand a floating point number for printing, each digit extracted is a subtraction of a smaller and smaller number. This causes errors to be shifted into the number, as the subtracted value may not be PRECISE. So as we print out the last digits, a further error may get introduced.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  9. #39
    and the Hat of Guessing tabstop's Avatar
    Join Date
    Nov 2007
    Posts
    14,336
    Quote Originally Posted by esbo View Post
    It does not seem that convincing given the amount by which it is below 8.
    It's not like it is the last didgit its the third to last digit.
    Can you prove 0.1f = 0.0001100110011001100110011b?

    Basically I want you to prove everything.
    For example 0.800000011920929 has 8 digits wrong, thats a factor of 100 million times the smallest didgit
    possible to represent which seems a little wide.
    I have no idea what you're smoking. The last digit is the 24th binary digit, and 2^-24 is 0.000000059604644775375. The error is 0.00000001192092875, which is in fact less than the value of the last digit (it is about one-fifth as large as the last digit).

    And it is extremely easy to prove how 0.1f is represented. A fraction repeats in positional notation (no matter whether it's decimal or binary or whatever) whenever its denominator doesn't divide a power of the base. So 1/10, in binary, is
    Code:
          0.000110011001
         ----------------
    1010 |1.000000000000
            1010
            ----
            01100
             1010
             ----
             0010000
                1010
                ----
                01100
                 1010
                 ----
                 0010000
                    1010
    which I hope is enough for you to see the pattern: three zeroes, then 1100 repeating.

    Now, in IEEE floats you write your number in "binary scientific notation", and you get 23 stored fraction bits plus an implicit leading 1 (no more, no less). Rounding the repeating pattern to that width gives: 1.10011001100110011001101 x 2^-4.

    Edit: I wrote that 2^-24 is 0.000000059604644775375. It is, in fact, 0.000000059604644775390625. I apologize for the error.
    Last edited by tabstop; 12-17-2008 at 03:15 PM.

  10. #40
    Fountain of knowledge.
    Join Date
    May 2006
    Posts
    794
    Quote Originally Posted by tabstop View Post
    I have no idea what you're smoking. The last digit is the 24th binary digit, and 2^-24 is 0.000000059604644775375. The error is 0.00000001192092875, which is in fact less than the value of the last digit (it is about one-fifth as large as the last digit).

    And it is extremely easy to prove how 0.1f is represented. A fraction repeats in positional notation (no matter whether it's decimal or binary or whatever) whenever its denominator doesn't divide a power of the base. So 1/10, in binary, is
    Code:
          0.000110011001
         ----------------
    1010 |1.000000000000
            1010
            ----
            01100
             1010
             ----
             0010000
                1010
                ----
                01100
                 1010
                 ----
                 0010000
                    1010
    which I hope is enough for you to see the pattern: three zeroes, then 1100 repeating.

    Now, in IEEE floats you write your number in "binary scientific notation", and you get 23 stored fraction bits plus an implicit leading 1 (no more, no less). Rounding the repeating pattern to that width gives: 1.10011001100110011001101 x 2^-4.

    Edit: I wrote that 2^-24 is 0.000000059604644775375. It is, in fact, 0.000000059604644775390625. I apologize for the error.

    OK.
    It seems it is converting binary to decimal where the problem comes in, although the
    error in doing this seems larger than I might expect. Anyway, I am still not sure
    why the error happens where it does and then stays that way; after all, you are
    adding 0.1 to the number all the time.
    So it should get bigger?
    Is it the case that some 'maximum rounding error' occurs here from which
    it cannot recover?
    You see I am still not that clear on how it happens once and recovers,
    and then happens again and never recovers.
    I don't think anyone has yet explained it in a way I can understand; that might be
    my fault, that might be yours.

  11. #41
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    In the application posted as the first post in this thread, it will never recover, as the cycle goes round and round: starting with X being 13, it is divided by 10 and 0.1 added, giving just under 1.4, which, when multiplied by 10, gives 13.999986, and next we have 13 as the integer value again. So it will never "solve itself".

    By the way, 0.1f printed as a decimal number may well appear to be higher than 0.1 - and unless we have some bug in the C runtime that parses the value, that is actually correct: the nearest representable float to 0.1 happens to lie slightly ABOVE it, at roughly 0.100000001. Any shorter printed value is a consequence of rounding when converting it back to decimal, not of the internal format it is stored as.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.

  12. #42
    Fountain of knowledge.
    Join Date
    May 2006
    Posts
    794
    I suppose you could say that the IEEE, being one of those big waste-of-space organisations, chose
    a crap method to represent and store numbers?
    Still, it is not as if it is going to set off a nuclear weapon accidentally, is it?

  13. #43
    Fountain of knowledge.
    Join Date
    May 2006
    Posts
    794
    Quote Originally Posted by matsp View Post
    In the application posted as the first post in this thread, it will never recover, as the cycle goes round and round: starting with X being 13, it is divided by 10 and 0.1 added, giving just under 1.4, which, when multiplied by 10, gives 13.999986, and next we have 13 as the integer value again. So it will never "solve itself".

    By the way, 0.1f printed as a decimal number may well appear to be higher than 0.1 - and unless we have some bug in the C runtime that parses the value, that is actually correct: the nearest representable float to 0.1 happens to lie slightly ABOVE it, at roughly 0.100000001. Any shorter printed value is a consequence of rounding when converting it back to decimal, not of the internal format it is stored as.

    --
    Mats
    Yes but the same happens with

    6 0.6000000238 5 7 0.7000000477
    7 0.6999999881 6 8 0.8000000119 <-----------
    8 0.8000000119 7 9 0.9000000358


    It seems when you add 0.1 to 1.2999999523 you get 1.3999999762, difference = 0.1000000239,
    but........ when you add 0.1 to 0.6999999881 you get 0.8000000119, difference = 0.1000000238.
    Or does it?
    Explain why it does not happen here?

    I suppose, taking the 'end bits', 523 is quite a bit less than 881 (288 less), and
    when you add the 288 to 881 = 1169, causing an 'overflow',
    but when you add 288 to 523 = 811, hence no overflow and no permanent error.
    Last edited by esbo; 12-17-2008 at 05:07 PM.

  14. #44
    Registered User C_ntua's Avatar
    Join Date
    Jun 2008
    Posts
    1,853
    Quote Originally Posted by tabstop View Post
    Calm down, breathe, and ask again, because I have no idea what you're asking at the top. (Note that all the integers up to 2^24 are exact floats.)
    If I calm down any more I'll fall asleep, and then you won't have any idea what language I am speaking. But anyway.

    My question doesn't really matter. Thinking about it a little more carefully, I'll agree that assigning a float to an int is well defined, and a cast wouldn't help in avoiding this kind of confusion. There should still be a cast for other reasons, but that's another (already) discussed topic.

    The best solution is the already existing one: a warning that you may lose data. GCC doesn't give you one even with the -Wall -Wextra -ansi -pedantic flags.
    On the other hand, g++ gives you a warning without any flags. Of course C++ is stricter on types. But again, this is kind of not the point here.

  15. #45
    Kernel hacker
    Join Date
    Jul 2007
    Location
    Farncombe, Surrey, England
    Posts
    15,677
    Quote Originally Posted by esbo View Post
    I suppose you could say that the IEEE being oneof those big waste of space organisations chose
    a crap method to represent and store numbers?
    Still, it is not as if it is going to set off a nucleur weapon of accidently is it
    IEEE is a large organisation, and it may be a waste of space, but it would be even worse if Motorola, Intel, AMD, ARM, IBM and the other processor manufacturers each came up with their own floating point format.

    After all, the real problem here is not IEEE format, but the fact that if you want to represent 0.1 as a binary number, you will need an infinite amount of bits to do so precisely - and of course 0.1 is not the ONLY instance that has an infinite number of bits in binary form.

    The problem is JUST THE SAME as the fact that you cannot represent 1/3 as a decimal number precisely - it will be 0.33333333, and you can press the 3 key until hell freezes over and you still haven't got EXACTLY 1/3. 1/10 in decimal form is 0.1, so it SEEMS like it should be simple to describe in other bases, but the problem with 0.1 in binary form is EXACTLY the same as 1/3 in decimal form - there is NO END to the number of bits you need to describe 0.1. Many calculators come up with 0.9999999 if you enter 1 / 3 * 3. But we know that with infinite precision it should be 1.0. If the calculator DOESN'T come up with 0.9999999 in that situation, it is most likely because it does some rounding of the result, using some EXTRA digits beyond what the display can show, not because it actually calculates it correctly.

    So, we could use a non-binary form, but the "cost" of this is approximately 25% space loss. And at least in the old days, when computers didn't have several gigabytes of RAM, saving space was important - and even then, it would only solve SOME of the problems with floating point numbers being an approximation.

    --
    Mats
    Compilers can produce warnings - make the compiler programmers happy: Use them!
    Please don't PM me for help - and no, I don't do help over instant messengers.
