Thread: Problem with floating point comparisons

1. Problem with floating point comparisons

Why is it printing X?

Code:
```
#include <stdio.h>

int main(void)
{
    float x = 0.3, y = 0.7;

    if (x > 0.3)
    {
        if (y > 0.7)
            printf("Y\n\n");
        else
            printf("X\n\n");
    }
    else
        printf("NONE\n\n");
}
```

2. Because floats are stored in a binary format that often cannot exactly represent a decimal value. Please try this on your machine before you reply:

Code:
```
#include <stdio.h>

int main(void)
{
    float i;

    for (i = 0.0f; i < 20; i += 0.1f) {
        printf("%f\n", i);
    }
    return 0;
}
```
A few conclusions:
- never use == with floats/doubles, or any operation that amounts to the same thing (as your code does).
- never use a floating-point type just because you want a fractional number if it is also important that the number have fixed precision. E.g., money is like this; dollars are not measured as a floating-point value, they are measured as a fixed-precision (2 decimal places) number. Use integer cents instead.

3. Try this one out as well, a slight modification to your code:
Code:
```
#include <stdio.h>

int main(void)
{
    float x = 0.3f, y = 0.7f;

    if (x > 0.3f)
    {
        if (y > 0.7f)
            printf("Y\n\n");
        else
            printf("X\n\n");
    }
    else
        printf("NONE\n\n");
}
```
Notice the four extra characters I've added. By themselves, values such as 0.3 are interpreted as doubles, not floats. When you test x>0.3 (for example), the comparison involves a float (x) and a double (0.3), which have different levels of precision. x>0.3f, on the other hand, compares a float against a float with equal levels of precision. If you had turned your compiler's warnings up a few notches, you might have seen some appear with your original code.

You also still need to be aware of the inherent imprecision of floating-point values as already mentioned above.

5. Thanks!! Got it.... But just two queries:

1. 0.3f shows 0.300000 on my machine, so

Code:
`if(x>0.3)`
should be false and it should print "NONE". But it's not doing so. Why?

2. Is the approximate value of 0.3f always less than 0.3 (the double)?

6. You claim you got it, but then you just reiterate your original question and show you completely misunderstood hk_mp5kpdw's point.

0.3 (without the f) is of type double. 0.3f is of type float. They are different values, because 0.3 cannot be represented exactly in binary (as a base 2 fraction) and a double represents it to greater precision than a float.

0.3f is not necessarily less than 0.3. When you have two representations of the same value, but to different precision, it is not guaranteed that one is greater than the other. Rounding can occur up or down.

Consider 1/3 in decimal, which is 0.333... (infinitely recurring). Represent that to three decimal places and you get 0.333 (assuming round-to-nearest). However, 2/3 in decimal is 0.666... (infinitely recurring). Represent that to three decimal places and you get 0.667 (again assuming round-to-nearest). In one case (1/3) the lower-precision value is less than the higher-precision one; for the other (2/3) the reverse is true. And these two examples use the same precision to represent both values.

The same occurs with floating point (binary) except that the rounding up or rounding down is in base 2, not base 10. The rounding depends on both the precision of the floating point type (roughly speaking, how many binary places it supports) as well as the particular values.

0.1 decimal is an infinitely recurring fraction in binary. So are 0.3 and 0.7, which is why you are being affected by varying precisions.

7. It is not so simple to generalize that every number stored as a float is always less than its representation as a double.
I have found that for 0.7 and 0.9 the float is less than the double. For 0.1, 0.2, 0.3, 0.4, 0.6 and 0.8 the float is larger than the double. (0.5 is a power of two, so both types represent it exactly and they are equal.)
It depends on how the binary representation is rounded off internally.

8. Originally Posted by grumpy
0.1 decimal is an infinitely recurring fraction in binary. So are 0.3 and 0.7, which is why you are being affected by varying precisions.
Right, so I should explain my point about == a little further:

Code:
```
float x = 0.3f;

if (x == 0.3f) {
    printf("True! x = %.10f", x);
}
```
Guess what?

True! x = 0.3000000119

Note that printf by default uses 6 decimal places. If you specify more, you'll see more.

The reason x == 0.3f works is that 0.3 stored as a float does not actually equal 0.3, but it does equal 0.3 as a float value (0.3f), which is what was assigned to x. So while the 32 bits used to approximate 0.3 are specific, they are probably exactly the same as the bits used for some other nearby numbers as well:

Code:
```
float a = 0.3f, b = 0.3000000009999f;

if (a == b) puts("True!");
```
And it is. Think: a float value is stored in a finite space (e.g., 32 bits), but there are in fact an infinite number of decimal values between 0.1 and 0.2. 32-bit floats have the same number of possible values as 32-bit ints; the range is just a lot greater, at the cost of that imprecision. If you move the decimal place:

Code:
```
float a = 3000000000000.00f,
      b = 3000000009999.00f;

if (a == b) puts("True!");
```
Also true! But b should be 9999 more than a! Notice it is the same degree of imprecision.

The imprecision can add up and become significant quickly, which is why you need to beware of it (and don't use floats when you want fixed precision).

9. @Grumpy: "...a double represents it to greater precision than a float." -
If double represents it to a greater precision than a float, then x>0.3, i.e. 0.3f>0.3, must be false.

@nonoob: Why does this happen? How is the internal round-off done? Is there any way to predict it? Logically, if double represents it to a greater precision than float, then float literals should be smaller than double literals...

10. Originally Posted by Avenger625
Why does this happen??? How is the internal round off done???? Any way to guess....???? Logically if double represents it to a greater precession than float then float literals should be smaller than double literals.....
Part of what's at issue is that the C standard doesn't specify how floating point has to be implemented* (integer representation is pinned down rather more tightly). On most modern systems I think it will be with IEEE 754:

IEEE 754-2008 - Wikipedia, the free encyclopedia

That's worth reading if you're really curious, but don't count on manipulating the representation if it isn't part of the standard you are using. The most important thing is just to understand the consequences of possible** imprecision and how to use the notation properly: with an f suffix, or a cast like (float)0.3, it's a float value; without, it's a double.

* C99 does, optionally, in its Annex F (for implementations that define __STDC_IEC_559__).
** in theory, since the standard doesn't say there has to be imprecision either, a compiler could use floats that are always exactly precise -- except that is impossible, as I said in #8, because it would require an infinite amount of memory.

11. if(x>0.3) -- will type promotion take place here? Will x get converted to double first and then compared?

If double represents floating-point numbers to a greater precision than float, then float literals should be smaller than double literals... but it does not seem so. WHY?

12. Originally Posted by Avenger625
If double represents floating-point nos. to a greater precession than float then float literals should be smaller than double literals.....
Why do you think that? A 32-bit float has ~4.2 billion possible distinct values and a 64-bit double has ~1.8 x 10^19, yet the range of both vastly exceeds those counts. So for numbers within the range of both, more neighboring values of a fixed precision (you have to round somewhere, or it is one infinity vs. another) collapse onto a single float than onto a single double. E.g., if value X is identical to X-1 and X-2 when stored as a float, but X, X-1 and X-2 are all distinct as doubles (thanks to the greater precision), then casting the double X-2 to a float and back again could, depending on the implementation, yield any one of three values, two of which are greater than X-2, because converting one way is not the same as converting the other. I.e., casting the double X-1 to a float could yield X, but casting the float X back to a double could yield X+1.

13. if(x>0.3) -- will type promotion take place here? Will x get converted to double first and then compared?
That's exactly what happens. But first there's a demotion of the double constant to the float x (with potential rounding) when x is initialized. Then x (as a float) is promoted by essentially appending a bunch of zero bits. The rounding in that first step is what causes the promoted x to be greater than the double constant.

Here's 0.3 decimal as a float and double:

Code:
```
Real number: 0.01001100110011001100110011... (repeats 0011 forever)
float:  1.00110011001100110011010 (note the rounding)
double: 1.0011001100110011001100110011001100110011001100110011
```
Both the float and double have an exponent of -2.

The first bit is the hidden bit; after it the float has 23 bits and the double has 52 bits.

14. Originally Posted by Avenger625
@Grumpy: "...a double represents it to greater precision than a float." -
If double represents it to a greater precession than float then x>0.3 i.e 0.3f>0.3 must be false.

@nonoob: Why does this happen??? How is the internal round off done???? Any way to guess....???? Logically if double represents it to a greater precession than float then float literals should be smaller than double literals.....

There is a lot that comes into play here. Rounding methods, general floating point implementations (not all computers use IEEE-754), etc. The following assumes IEEE-754.

The fact that a double has more precision than a float does not mean that 0.3f>0.3 is false, or that float literals should always be smaller than their double counterparts. What precision dictates is how close to the actual value you can get. Neither floats nor doubles can exactly represent the value 0.3. It could be that the closest value a float can achieve is just slightly more than 0.3 while the closest value a double can achieve is slightly less than 0.3. Witness my test (on Ubuntu 10.04):
Code:
```
$ cat float.c
#include <stdio.h>

int main(void)
{
    printf("As a double, 0.3 is represented as approximately %.30f\n", 0.3);
    printf("As a float, 0.3 is represented as approximately %.30f\n", 0.3f);
    return 0;
}

$ gcc -Wall -o float float.c

$ ./float
As a double, 0.3 is represented as approximately 0.299999999999999988897769753748
As a float, 0.3 is represented as approximately 0.300000011920928955078125000000
```
Notice how much closer the double is to the actual value (it's off by ~1e-16 where the float is off by ~1e-8), yet it's represented by a value that is smaller than the float. That's what precision is about, getting as close as you can, even if you have to under-estimate instead of over-estimate.

Also, note that when you compare a float to a double, the float gets promoted to a double. IIRC, all single-precision floats have exact representations as doubles. That means, in 0.3f>0.3, the 0.3f first creates a single-precision float value that is as close as possible to 0.3. Then it promotes it to a double. Since a double can perfectly represent any float, without losing any data, it does so. The left side of the > is now a double that is larger than 0.3 by about 1e-8. The right side is initially taken as a double literal, so the closest double value to 0.3 is smaller than 0.3 by about 1e-16. Thus, 0.3f>0.3 is true. Weird, but that's floating point numbers for you.

15. Originally Posted by anduril462
Witness my test (on Ubuntu 10.04:
When I run your program I get the first number for both. It's as if the f after the constant is ignored. (I've noticed the same behavior in another program.) I'm using gcc 4.6.1 on WinXP. What version of gcc are you using?