This code gives me the result 123.230003

Code:
#include <stdio.h>

int main(void)
{
    float x = 123.23f;
    printf("%f", x);
    return 0;
}

Why do I see the 3 at the end?
Why are floating point calculations so... inaccurate?
EDIT:
Note that "%f" with no precision specifier gives you 6 digits after the decimal point, which for a float is enough to expose the bogus 3 out there. You could use a double to push the inaccuracy farther out to the right so it doesn't show up at the default precision, but it's still not exact. (Incidentally, plain "%f" already handles a double in printf, since float arguments are promoted to double in variadic calls; "%lf" is also accepted in C99, but the l is only actually required for scanf.) You can also tell printf to print fewer digits after the decimal point: for example, "%.3f" gives only 3 digits after the decimal.
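To make that concrete, here's a small sketch comparing float and double at different printed precisions (output comments for the first three lines match the results already posted in this thread):

Code:
#include <stdio.h>

int main(void)
{
    float  f = 123.23f;  /* ~7 significant decimal digits */
    double d = 123.23;   /* ~15-16 significant decimal digits */

    printf("%f\n", f);    /* 123.230003 -- float's error is visible at 6 decimals */
    printf("%f\n", d);    /* 123.230000 -- double's error sits farther right */
    printf("%.3f\n", f);  /* 123.230 -- fewer digits hide the error */
    printf("%.15f\n", d); /* ask for enough digits and the double's error shows too */
    return 0;
}

Neither type stores 123.23 exactly; the double just has more bits of significand, so you have to ask for more digits before the error becomes visible.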
Last edited by anduril462; 02-17-2011 at 06:17 PM.
Thanks for your perfect answer
This one gave the result: 123.230000

Code:
#include <stdio.h>

int main(void)
{
    double x = 123.23;
    printf("%lf\n", x);
    return 0;
}
I firstly thought I had a mistake but after your reply I see there is an interesting area to discover. I will try to get some knowledge about binary representation of floating-point constants.
Floating point is one of the more interesting, and most misunderstood, topics in programming, and it can get mathematically deep. But as for how floating point is represented internally, it's well worth looking into.
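If you want to peek at that internal representation yourself, here's a sketch that dumps the sign, exponent, and mantissa fields of a float. It assumes the common case of a 32-bit IEEE 754 single-precision float (virtually every current platform), using memcpy rather than pointer casts to avoid aliasing problems:

Code:
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float f = 123.23f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits); /* reinterpret the float's bytes as an integer */

    uint32_t sign     = bits >> 31;          /* 1 bit  */
    uint32_t exponent = (bits >> 23) & 0xFF; /* 8 bits, biased by 127 */
    uint32_t mantissa = bits & 0x7FFFFF;     /* 23 stored fraction bits */

    printf("raw bits : 0x%08X\n", bits);
    printf("sign     : %u\n", sign);
    printf("exponent : %u (unbiased: %d)\n", exponent, (int)exponent - 127);
    printf("mantissa : 0x%06X\n", mantissa);

    /* the actual stored value is (1 + mantissa/2^23) * 2^(exponent-127) */
    printf("value    : %.10f\n", (1.0 + mantissa / 8388608.0) * 64.0);
    return 0;
}

For 123.23f the unbiased exponent comes out as 6 (the number lies between 2^6 = 64 and 2^7 = 128), and the 23-bit mantissa simply can't land exactly on 123.23, which is where the stray 3 comes from.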