I see that in different books, one writer uses 'f' at the end of every float literal while another writer omits the 'f' completely.
What is the best strategy?
Thanks.
Be specific in your coding.
Mainframe assembler programmer by trade. C coder when I can.
Unless you have a really good reason to use float, always use double.

```c
float  a = 1.0f;   /* 'f' suffix: the literal is a float   */
double b = 1.0;    /* no suffix: the literal is a double   */
```
40 years ago, when C was invented, memory was scarce and CPU power was limited (and FPUs were rare), so the float/double tradeoff was worth thinking about.
Today, except on low-end embedded micros, an FPU is standard and there is plenty of memory.
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
Run this little program to see a difference:
```c
#include <stdio.h>

int main(void)
{
    float f = 0.1f;

    if (f == 0.1)
        printf("Equal\n");
    else
        printf("Not equal\n");

    if (f == 0.1f)
        printf("Equal\n");
    else
        printf("Not equal\n");
}
```

It prints this:

```
Not equal
Equal
```
The 0.1 in the first if test is a double (no suffix), so it is a more accurate representation of the infinitely repeating binary equivalent of decimal 0.1; f is promoted to double for the comparison but keeps its float-sized rounding error, so the two differ.
(Yes, I know this example is a bit contrived, and I'm comparing floating-point values for equality (usually frowned upon), but you get my point: behavior can differ with or without the 'f' suffix.)
BTW, the line "float f = 0.1f;" could just as well have left off the 'f'; in this case, the suffix does not matter -- the compiler converts the double constant to float at initialization, producing the same value.
(For related reading, see my article When Floats Don't Behave Like Floats at Exploring Binary.)