IEEE standard single precision floating point numbers have about seven decimal digits of precision. Double precision numbers have about fifteen or sixteen.

Now, in C (and therefore, by extension, C++), an unsuffixed floating constant such as 0.1 has type double, and historically all floating point arithmetic was carried out in double precision. Assignment statements evaluate the right hand side, then convert the result to whatever type is required by the left hand side.

Therefore, either of the following declarations stores the same value in **f**:

Code:

float f = 0.1;
float f = 0.1f;

Why would you want to write 0.1f rather than 0.1? *In this case*, at least, it's a matter of style rather than substance: the double value 0.1, rounded to float on assignment, is the same number as 0.1f. There are situations where the suffix matters (for example, keeping an expression in single precision), but it doesn't affect the result here.

Try the following with your favorite C++ compiler:

Code:

#include <iostream>
int main()
{
float f = 0.1;    // double constant, converted to float
double d = 0.1;   // double constant, stored as-is
float ff = 0.1f;  // float constant, stored as-is
double df = 0.1f; // float constant, widened to double
std::cout.precision(20);
std::cout << "f = " << f << std::endl;
std::cout << "d = " << d << std::endl;
std::cout << "ff = " << ff << std::endl;
std::cout << "df = " << df << std::endl;
return 0;
}

Now, I have told the program to print out 20 significant digits (with the default formatting, precision() controls significant digits, not digits after the decimal point). This is more than the precision of float or double variables (at least for IEEE standard numbers, which are used in the compilers that I have at hand).

The number 0.1 cannot be represented exactly by a binary (or hex) floating point number. Double precision cannot store the number exactly either; it just gets closer.

How many correct significant digits do you see?

Is there any difference between 0.1 and 0.1f?

Regards,

Dave