1. ## Data type problem

Hi all, I am hoping someone can tell me what I am doing wrong. I want to divide two numbers and get a decimal from that, but I cannot figure out how to make this work.

Code:
```
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int a = 0, b = 0;
    float c = 0;
    float d = 0;
    float f = 0;

    a = 40749 - 1;
    b = 65536 - 1;
    c = (a / b);
    d = 40748 / 65535;
    f = d * 255;
    printf("a=%d b=%d c=%d d=%d f=%d\n", a, b, c, d, f);
}
```
And when I compile and run it, this is what I get (below): c, d, and f all come back as 0, not a decimal number.

```
%: gcc test.c
%: a.out
a=40748 b=65535 c=0 d=0 f=0
%:
```

How do I get a decimal number back? Or, in the case of f, a char/int (because it will be between 0 and 255).

2. You cannot divide two ints and get a decimal (float) result; integer division discards the fractional part. You must start with (at least) one float operand, so 40748.0/65535.0 would give you a decimal result.

3. Three things. First, use %f for floats (printing a float with %d is undefined behavior):
Code:
`printf("a=%d b=%d c=%f d=%f f=%f\n",a,b,c,d,f);`
Second, a and b are ints, and the literals 40748 and 65535 are integer constants, so the compiler performs integer division on them unless you indicate otherwise:

Code:
```
c = (float)a / (float)b;
d = 40748.0f / 65535.0f;
```
Thirdly (this is really a minor addendum to the second point), always suffix float constants with "f", even when you are assigning to a float:
Code:
`float c=0.0f;`
Because while the compiler will convert "0" into "0.0f" since c is a float, writing the suffix yourself makes the intent explicit and saves the compiler that bit of accounting. Or so I have been told.

4. OK, so I just changed everything to floats (for simplicity) and used the %f notation for printing to the screen. It all seems to work!

Thanks for your help, everyone!