# Thread: Just curious about a strange run-time error using the round function

1. ## Just curious about a strange run-time error using the round function

DISCLAIMER: I am a college student, and this is a function from a program for my C class. It does not matter what this was supposed to do. I do NOT need help fixing it. I already fixed it, although my version no longer matches the teacher's program, because his program produces this error.

I came across this by accident while checking values in my program against my teacher's. When you input 797979, it prints 797979.062500. It seems to happen only with this number.

My teacher told me that this happens because at large numbers, a float, being 4 bytes, starts to do weird things. But this is only at about 800 thousand, and the program works just fine with numbers well above 100 million.

I guess all I am asking is why this occurs. It seems to have something to do with the division. But why does 0.0625 get added? Does it happen at the bit level? And why is it fixed by using multiplication instead?

I was able to fix it by changing this line

`userValue /= 100.0f;`

to

`userValue *= 0.01f;`

I am using a Linux system, gedit as my editor, and a C99 compiler (I don't know which one).

Code:
```
#include <stdio.h>
#include <math.h>

int main(void)
{
    float userValue;

    scanf("%f", &userValue);

    userValue *= 100.0f;
    userValue = roundf(userValue);
    userValue /= 100.0f;

    printf("%f\n", userValue);

    return 0;
}
```
Thank you for anything.

2. I think it's simply a bug in the compiler. A float only has the capacity to hold about 7 digits of precision, so to hold larger numbers you would have to use a double. I don't think there is a specific reason why it returns a number like that; it's merely an error in the compiler.

3. Thank you for giving me a more detailed answer. I was just confused because my book said the range of values for a float was something like plus or minus 10 to the 34th power. And I could not find a direct answer in my book or on the internet, as they mostly talked about precision after the decimal.

4. This has nothing to do with the roundf function. If you take it out, you will get the same behaviour.

You need to understand the difference between precision and range. Although you can store a number like 100000000000000000000000000000 in a float, that doesn't mean that all of its bits are actually stored (remember that it's stored in binary). In fact, only 24 significand bits are kept (23 explicitly, 1 implicitly), another 8 bits store the binary exponent, and 1 bit stores the sign. So if you put that number in a float and then print it, you get 100000001504746621987668885504.000000. It only stores about 8 decimal digits of precision at most.

In your example, although 797979 fits within the precision of a single-precision floating point value, when you multiply it by 100 it no longer fits. So when you divide it by 100 again you end up with an approximation of the original value.

It actually happens with about 20% of numbers between 167773 and 999999, for example, as you can see in the following program.
Code:
```
#include <stdio.h>

#define LOW  167773.0f
#define HIGH 999999.0f

int main(void)
{
    float f, g;
    int cnt = 0;

    for (f = LOW; f <= HIGH; f++) {
        g = f * 100.0f;
        if (f != g / 100.0f)
            cnt++;
    }

    printf("%d / %.0f = %f\n", cnt, HIGH - LOW, cnt / (HIGH - LOW));

    return 0;
}
```
EDIT: Precision doesn't mean "precision after the decimal". It's just the number of bits stored to represent the value. In essence they are all "after the decimal" (except for the implied leading bit).

5. Wow, thanks for your time! I think my understanding of this topic is way better now. I wanted to see why I got the right answer when I multiplied by 0.01 instead of dividing, so I changed the if statement to use g * 0.01 instead, and I found that it failed about 10% of the time. Thank you for answering my question and showing me this small program.

6. Originally Posted by Vladimir Pewton
Wow, thanks for your time! I think my understanding of this topic is way better now. I wanted to see why I got the right answer when I multiplied by 0.01 instead of dividing, so I changed the if statement to use g * 0.01 instead, and I found that it failed about 10% of the time. Thank you for answering my question and showing me this small program.
The difference between multiplying by 0.01 and dividing by 100 is that 100 can be represented exactly in binary, whereas 0.01 cannot. So you get different results.

Here's a program that displays the representation of a float. It prints the sign bit, the eight exponent bits (and the decimal value of the exponent in parentheses), and the 24 significand bits (where the leading bit before the binary point is the implied bit, meaning it is not actually stored).
Code:
```
#include <stdio.h>

typedef unsigned char byte;

#define putbit(b, m) putchar(b & m ? '1' : '0')

void print_float_bits(float f) {
    byte *p = (byte*)&f;    // pointer to the float's bytes (assumes a
                            // little-endian system: p[3] is the high byte)
    byte m = 0x80;          // bit mask
    unsigned exponent = 0;

    // sign bit
    putbit(p[3], m);
    putchar(' ');

    // exponent: 7 bits from the top byte, 1 bit from the next
    for (m >>= 1; m; m >>= 1) {
        exponent = (exponent << 1) | !!(p[3] & m);
        putbit(p[3], m);
    }
    m = 0x80;
    exponent = (exponent << 1) | !!(p[2] & m);
    putbit(p[2], m);
    printf(" (%d)", (int)exponent - 127);

    // significand: implied leading bit, then the 23 stored fraction bits
    printf(" %c.", exponent != 0 && exponent != 255 ? '1' : '0');
    for (m >>= 1;  m; m >>= 1) putbit(p[2], m);
    for (m = 0x80; m; m >>= 1) putbit(p[1], m);
    for (m = 0x80; m; m >>= 1) putbit(p[0], m);
    putchar('\n');
}

int main() {
    float f;
    while (printf(">>> "), scanf("%f", &f) == 1) {
        printf("%e\n", f);
        print_float_bits(f);
    }
    putchar('\n');
    return 0;
}
```
Example run:
Code:
```
>>> 100
1.000000e+02
0 10000101 (6) 1.10010000000000000000000
>>> .01
1.000000e-02
0 01111000 (-7) 1.01000111101011100001010
```

7. It seems like both student and teacher need to head over to What Every Computer Scientist Should Know About Floating-Point Arithmetic.