@juice: You correctly declared main to return an int, but you never return one. In C89 that leaves your program's exit status undefined (C99 and later make main implicitly return 0, but returning explicitly is still good practice). Read this: Cprogramming.com FAQ > main() / void main() / int main() / int main(void) / int main(int argc, char *argv[]).
It's not compiler/library dependent. The standard says the l (ell) length modifier has no effect on %f in printf; the L modifier is for long doubles. One would expect %f to be for floats and %lf for doubles, but it's not that way: the default argument promotions convert float arguments to double when they're passed to a variadic function like printf, so %f (or %lf) works for both floats and doubles, though the ell in the latter case is useless. (scanf is a different story: there %f really does mean float and %lf means double, because scanf takes pointers and no promotion happens.)
@joybanerjee39:
To go back to the issue of floating point accuracy, you are not getting "correct" results by defining x as a double. You are getting results that are closer to the exact values than with a float, but they're still inaccurate. If you changed Tater's program to use a double, you would still see divergence from the expected output; it would just take a lot more additions. I used the OP's program, but made x a double and changed the print statement to print 20 decimal places instead of 12:
printf rounds when printing floating-point values, so it rounded the answer to 1.234567890000, but you can see that when enough decimal places are printed, even a double suffers a loss of precision that yields imperfect results.

Code:
$ cat float.c
#include <stdio.h>
int main()
{
    double x = 1.234567890000;
    printf("%.20lf\n", x);
    return 0;
}
$ gcc -Wall -g -std=c99 float.c -o float
$ ./float
1.23456788999999989009
That said, for the special case of the main function, 0 is returned if control reaches the end of the main function without encountering a return statement.

Originally Posted by anduril462
I don't think it prints random values, because no matter how many times you compile and run the program (you can even try after restarting your PC),
you get the same set of values (at least on my PC). So it comes down to the algorithm and techniques used to represent floating point numbers, and the true reason behind this lies in the reading materials referred to by salem and CommonTater.
Am I the only one who didn't understand the article "What Every Computer Scientist Should Know About Floating-Point Arithmetic" referred to in post 3?
Most likely not... It's a difficult read and, not being a mathematician, I'm not sure I understand all of it...
But I do heed the warning that some values in floating point arithmetic are impossible to represent accurately in finite binary systems...
The warning is that these inaccuracies can play hell with financial and scientific systems, so one must be prepared to deal with them.
Look at the example I posted in msg# 9... notice that it went perfectly up to 2.7 then went right off the rails... With that rounded off to 2 decimal places $1.80 would become $1.79, a full penny off. Now consider that happening across tens of thousands of bank transactions a day.
For the most part the financial answer is to multiply by 100 and work in integer pennies instead of floating point dollars (etc). This way you avoid the rounding and imprecision of floating point math which can (and sometimes did) cost major financial institutions millions of dollars a day.
(Yes, "Office Space" is based on a real problem that can be exploited, as was the earlier Superman Movie with the same plot line)
That's because when you define it as a double, it carries double's full level of precision. When you pass in a float, which has less precision, it is converted to double; that conversion is exact, so the extra significand bits are zero-filled, not random data. The extra digits you then see are simply the binary value the float was actually storing all along, revealed according to the precise rules of floating point. The rules are complicated.