Because these:
Code:
commission=(9/100)*totalsales;
are ints -- both the 9 and the 100 are integer constants. When you divide an int by an int, you get an int; here 9/100 is 0, so commission is always 0. Here's how the compiler treats numeric constants:
Code:
11 // an int
11.0 // a double
11.0f // a float
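So writing the 9/100 with a decimal point (or just 0.09) makes the division happen in floating point. A quick way to see the difference (a minimal sketch -- though, as explained below, floating point is not the right tool for money anyway):
Code:
#include <stdio.h>

int main() {
    printf("%d\n", 9 / 100);    /* int / int    -> 0        */
    printf("%f\n", 9.0 / 100);  /* double / int -> 0.090000 */
    return 0;
}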
You should actually not use floats (or doubles) for money, because money is fixed precision, and you cannot force a float to round itself. The way floats are represented in binary means that any fraction that isn't a sum of negative powers of two (0.5, 0.25, 0.125, etc.) cannot be represented exactly. Try this:
Code:
#include <stdio.h>
int main() {
    float i;
    for (i = 0.0f; i < 20; i += 0.1f) {
        printf("%f\n", i);
    }
    return 0;
}
The output is not what you expect. Now imagine this is supposed to be adding dimes to a bank account -- you will get short-changed eventually. You can, of course, keep rounding the output, but eventually the discrepancy works its way up into the first couple of decimal places. There are several infamous historical software failures that resulted from the misuse of floats.
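You can see the same effect without the loop: on a typical IEEE-754 system, adding 0.1f ten times does not give exactly 1.0f, because 0.1 has no exact binary representation (a minimal sketch):
Code:
#include <stdio.h>

int main() {
    float sum = 0.0f;
    int i;
    for (i = 0; i < 10; i++) {
        sum += 0.1f;            /* add a "dime" ten times */
    }
    printf("%.9f\n", sum);      /* not exactly 1.000000000 */
    printf("%s\n", sum == 1.0f ? "equal" : "not equal");
    return 0;
}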
Instead, use integer cents for money. If you want to output dollars:
Code:
int cents = 507;
printf("$%d.%02d\n", cents/100, cents%100);
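For example, here is the original 9% commission done entirely in cents (a sketch -- the totalsales figure is made up):
Code:
#include <stdio.h>

int main() {
    int totalsales = 123456;                /* $1,234.56, held as cents */
    int commission = totalsales * 9 / 100;  /* 9%, but plain int division truncates */
    printf("$%d.%02d\n", commission/100, commission%100);
    return 0;
}
Note that the integer division at the end always truncates (rounds toward zero), which is the problem addressed next.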
Of course, you want division on cents to be precise -- you don't want to always round down, you want to round to the nearest cent. For that you can do the calculation in a float, add 0.5, and put the value back into an int:
Code:
float tmp = 507.0f/4.0f; // 126.75
cents = tmp + 0.5f; // should yield 127
printf("%d\n", cents);
If the decimal remainder is >=0.5, cents will be rounded up. If it's <0.5, cents will be rounded down.
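A quick check of both cases (a minimal sketch; the values are arbitrary):
Code:
#include <stdio.h>

int main() {
    float up   = 126.75f;   /* fractional part >= 0.5 */
    float down = 126.25f;   /* fractional part <  0.5 */
    printf("%d %d\n", (int)(up + 0.5f), (int)(down + 0.5f));  /* prints 127 126 */
    return 0;
}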