The "precision" of floating point types is given by the number of significant bits, not by the number of decimal digits in the fractional part of the number.
When it is said that 'single precision' has 6 or 7 digits of precision, this is an approximation: single precision has 24 bits of significand, so in decimal we get p = log10(2^24) = 24*log10(2) = 7.22... (about 7 decimal digits). But, again, this is an approximation. As an example, the value 9.4039548065783000637498922977779654225493244541767001720700136502273380756378173828125*10^-38 (that is, 2^-123) can be stored EXACTLY in single precision, because single precision supports a 24-bit significand and exponents from -126 to 127, so 2^-123 is 1.000...*2^-123, an exact value in single precision:
Code:
$ bc
scale=130
2^-123
.0000000000000000000000000000000000000940395480657830006374989229777\
796542254932445417670017207001365022733807563781738281250000000
There is a different interpretation of "precision", used by printf(): there, the "precision" is the number of digits after the decimal point, like:
Code:
#include <stdio.h>
#include <math.h>
int main( void )
{
    float f = powf( 2.0f, -123.0f );

    printf( "%.85e\n", f );
    return 0;
}
Here printf() will print the value of f with 85 decimal digits after the decimal point. This is a different notion from "floating point" precision, which is always measured in bits of the significand.