To the OP: the problem is that some innocent-looking decimal numbers can't be represented exactly in binary floating point at all. The drama occurs when many operations turn a small error into a large one, but the inaccuracy is there even in your example, because you read the value into a double.
With some extra debugging printfs:
Code:
#include <stdio.h>

int main(void)
{
    int cents = 0;
    double amount = 0;

    puts("enter amount");
    scanf("%lf", &amount);
    printf("\ninput double: %lf", amount);
    amount *= 100;
    cents = amount;    /* implicit conversion to int truncates */
    printf("\nCents amount: %d", cents);
    printf("\noutput double: %.2lf\n", (double)cents / 100);
    return 0;
}
Example output:
Code:
enter amount
10.12
input double: 10.120000
Cents amount: 1011
output double: 10.11
If you change the format specifier for the input double to %.16lf you'll see 10.1199999999999992. So very nearly 10.12, but not quite! Converting to int discards the fractional part, so 1011.999999999 becomes 1011.
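For example (a stand-alone sketch, hard-coding the same 10.12 value instead of reading it):
Code:
#include <stdio.h>

int main(void)
{
    double amount = 10.12;                /* stored as the nearest double */

    printf("%.16lf\n", amount);           /* 10.1199999999999992 */
    printf("%.16lf\n", amount * 100);     /* just under 1012 */
    printf("%d\n", (int)(amount * 100));  /* truncates to 1011 */
    return 0;
}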
If the conversion were rounded rather than truncated, you'd get the right number. In your original post you said:
I know you don't want to use floating point numbers when working with money because of the rounding errors, but instead use integers. I read somewhere that you should "multiply by 100, add 0.5, truncate, then divide the result by 100 to get back to pennies."
What you'd heard there was someone suggesting a way to round to the nearest integer. You'd do that operation entirely on the double value. I think the multiply and divide by 100 are just to get the significant digits out of the way (and I think that part is unnecessary). Adding 0.5 pushes the value up to or past the next integer whenever the fractional part is 0.5 or more, e.g. 1.1 + 0.5 = 1.6 and 1.7 + 0.5 = 2.2, so truncating to int gives the nearest whole number. This only works for positive numbers (see question 14.6 of the comp.lang.c FAQ).
I don't think you should add 0.5 inline in straight-line code. Use a macro, or a small function, or just use the round() function from the C library. It's clearer -- there's no "magic constant" to trip people up.
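As a rough sketch of what I mean (the to_cents name is just mine, for illustration):
Code:
#include <math.h>
#include <stdio.h>

/* Round a dollar amount held in a double to whole cents.
   round() handles negative amounts too, unlike the "+0.5
   then truncate" trick, which only works for positives. */
static long to_cents(double dollars)
{
    return (long)round(dollars * 100.0);
}

int main(void)
{
    printf("%ld\n", to_cents(10.12));    /* 1012, not 1011 */
    printf("%ld\n", to_cents(-10.12));   /* -1012 */
    return 0;
}

With that, the earlier example becomes cents = to_cents(amount), and 10.12 comes out as 1012 rather than 1011.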
However, in this case rounding is a solution to a problem created by using doubles to read the input. I say "don't use doubles".
Floats and doubles conveniently look a bit like money notation, but "10.12" could just as well be a floating point number, a 5-character string, or two integers with a dot between them -- C doesn't care too much. So for example you could do:
Code:
unsigned int dollars, cents;
scanf("%u.%u", &dollars, ¢s);
From there, I agree with you that the easiest thing is to do all calculations in cents. Then convert back into two integers for printing.
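A minimal sketch of that (assuming the same %u.%u input as above):
Code:
#include <stdio.h>

int main(void)
{
    unsigned int dollars = 0, cents = 0;

    if (scanf("%u.%u", &dollars, &cents) == 2) {
        /* keep one total, in cents, for all the arithmetic */
        unsigned long total = (unsigned long)dollars * 100 + cents;

        /* %02 keeps the leading zero, so 1005 cents prints as 10.05 */
        printf("%lu.%02lu\n", total / 100, total % 100);
    }
    return 0;
}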
It is more complicated if you want to get back the flexibility you had with doubles: with %lf you could put in a negative number and it would work, and you could put in just a single number to mean a whole-dollar value. Signed numbers are easy, just be careful to get the conversion to/from cents right (see below).
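The main trap is printing: C's % operator keeps the sign of the dividend, so a naive total/100 and total%100 on a negative total would print something like "-10.-12". A small sketch of the careful version:
Code:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    long total = -1012;        /* -$10.12 held in cents */
    long mag = labs(total);    /* format the magnitude, add the sign back */

    printf("%s%ld.%02ld\n", total < 0 ? "-" : "", mag / 100, mag % 100);
    return 0;
}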
Supporting a dollar-only input is probably a tricky hassle with scanf. Better to read a string into a buffer and check the buffer for the different formats, then you can use sscanf to retrieve the values.
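Something along these lines (a sketch only -- the buffer size, formats and error handling are just my guesses at what the OP needs):
Code:
#include <stdio.h>

int main(void)
{
    char buf[64];
    long dollars = 0;
    unsigned int cents = 0;

    if (fgets(buf, sizeof buf, stdin) == NULL)
        return 1;

    if (sscanf(buf, "%ld.%2u", &dollars, &cents) == 2)
        ;                      /* "10.12" style: dollars and cents */
    else if (sscanf(buf, "%ld", &dollars) == 1)
        cents = 0;             /* "10" style: dollars only */
    else
        return 1;              /* not a number at all */

    printf("parsed: %ld dollars and %u cents\n", dollars, cents);
    return 0;
}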
It probably sounds like my way is more long-winded: more code, more hassle. If you're not going to do anything more with this bit of code then by all means use a rounded double. But if, for example, you decided to check that the input was valid (e.g. 1.123 is not) then doubles will continue to need special care. I'd find it quite weird to see code dealing with the inaccuracy of doubles in a program that represents money as integers.
On the size of ints thing: it depends on what kind of money program it is. The OP didn't say. If it's meant to be a generic representation for a big bank, then yes, use something epic and worry about it a lot.
If it's for adding up grocery lists, $17 million is probably ok!