What CommonTater says is true, but specifically what's happened here is you assigned a (non-representable) double value to a float, amounting to a loss of precision, and then you test against a double as well. If you do this:
You'll get the expected outcome. By "non-representable", I mean that 0.7 cannot be stored exactly as a floating point number in a computer (including as a double). Because floating point numbers are stored in binary, the only fractions they can represent exactly are finite sums of powers of 1/2 -- the series 0.5, 0.25, 0.125, 0.0625, and so on. Notice 0.1 is not in there, and it cannot be built from any finite sum of those terms either; the same goes for 0.7. Try this code:
Code:
#include <stdio.h>

int main(void) {
    float i;

    /* 0.1f is not exactly representable, so the error accumulates */
    for (i = 0.0f; i < 20; i += 0.1f) {
        printf("%f\n", i);
    }

    return 0;
}
And you'll see the imprecision show up (it is there before that too, just many decimal places away). Precision is limited by the number of bits used, hence it is finite. Doubles have twice as many bits as floats (typically 64 vs. 32), and so are capable of more precision, but they are still not perfect, i.e., not infinitely precise. By default, a literal like "0.7" is a double. If you assign it to a float, the compiler converts it, and the number loses some precision. If you then compare that float to "0.7" (a double), the two are not equal -- the float will be slightly less because of the lost precision. But if you use "0.7f", the compiler knows this is a float, and the two numbers will match.
However, just adding the f suffix is not a complete solution. In general, never use == with floats or doubles after you've done some arithmetic on them; compare against a range instead, e.g.:
Code:
// instead of == 0.7
if (a < 0.7001f && a > 0.6999f)