Originally Posted by
generalt
So, if I only read in 2 decimal places that would solve the problem of imprecision? In this program, I don't think I'm in trouble w/imprecision b/c I convert it into an integer... right?
How did you plan to "read in only 2 decimal places"? Note that the problem *is* with converting floats into integers. There is a very long thread on this here if you want to look at it:
float13
If you want a quick illustration, try this:
Code:
#include <stdio.h>

int main(void) {
    int X = 1, i;
    float x;

    for (i = 0; i < 25; i++) {
        x = (float)X / 10;        /* X/10 is not exactly representable in binary */
        printf("%d %f %d\t", X, x, i);
        x += 0.1f;                /* add another inexact 0.1 */
        X = x * 10;               /* truncating back to int exposes the error */
        printf("%d %f \n", X, x);
    }
    return 0;
}
On a 32-bit system, you will notice something strange happen starting at 13; on a 64-bit system, it starts at 21.
Understand that binary floating-point arithmetic is something almost unique to computers; take care not to confuse it with decimal arithmetic. You should deal with money as an integer (a number of cents), because that is what money is. Using a float is 1) inappropriate and 2) prone to error.