# Thread: Issue with typecasting float to int

1. ## Issue with typecasting float to int

Code:
```c
#include <stdio.h>

int main()
{
    int i = 0;

    for (i = 0; i <= 10; i++)
    {
        float offset = (i / 1000) * 256;
        printf("%d\n", (int)offset);
    }

    return 0;
}
```
Output:
0
0
0
0
0
0
0
0
0
0
0
Expected Output:
0
0
0
0
1
1
1
1
2
2
2

Could you explain why this is happening?

2. Code:
`float offset = ( i / 1000 ) * 256;`
`i` is an int, `1000` is an int, and `256` is an int, so the whole right-hand side is evaluated in integer arithmetic:

i / 1000 = 0 (integer division truncates, and i is at most 10)
0 * 256 = 0 (still an int)

Only after the sub-expression is fully evaluated is the int result 0 converted to float (0.0f) for the assignment.

If you want a floating-point calculation of i/1000, make ONE of the operands (dividend or divisor) a float:
Code:
`float offset = ( i / 1000.0f ) * 256;`
Because 1000.0f is a float, i is converted to float for the division; the result of i / 1000.0f is then a float, which causes 256 to be converted to float for the multiplication as well:
Code:
```c
#include <stdio.h>

int main()
{
    int i;

    for (i = 0; i <= 10; i++)
    {
        // NOTE: dividing by 1000.0f will promote i and 256 to float.
        float offset = (i / 1000.0f) * 256;

        printf("%d\n", (int)offset);
    }

    return 0;
}
```