Quote Originally Posted by filker0
Actually, I guess I didn't make my point clearly enough -- the example that he gave in his code sample didn't actually show a difference between float and double, as they can both represent 0.00000000000152 without loss of precision. If you update my format string to use "%1.30f" instead of "%1.15f" for both values, you get:
Code:
2/3 as float: 0.666666686534881591796875000000, double: 0.666666666666666629659232512495
which, as you can see, is actually more than twice the number of digits of accuracy.
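For anyone who doesn't have the earlier post handy, a minimal program along those lines (the exact upthread source isn't shown here, so the variable names are my own) produces the output above:
Code:
#include <stdio.h>

int main( void )
{
    float f = 2.0f / 3.0f;  /* good to about 7 significant decimal digits */
    double d = 2.0 / 3.0;   /* good to about 15-16 significant decimal digits */

    /* Printing 30 digits after the decimal point shows where each
       type's precision runs out. */
    printf( "2/3 as float: %1.30f, double: %1.30f\n", f, d );

    return 0;
}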
Both of your examples produce "basically" the same result. The float stays accurate to 7 significant digits in both; the double is accurate to 15 in the first and 16 in the second. Therefore the statement "basically twice as big" IS in fact accurate. Unless you're now debating that 15 is not in fact "basically twice as big" as 7. In which case, I kindly direct your attention to a bit of integer math:
Code:
#include <stdio.h>
int main( void )
{
    int example1 = 15 / 7;  /* integer division truncates: 2 */
    int example2 = 16 / 7;  /* integer division truncates: 2 */

    printf("example1, 15 / 7 is %d\n", example1 );
    printf("example2, 16 / 7 is %d\n", example2 );

    return 0;
}
/*
example1, 15 / 7 is 2
example2, 16 / 7 is 2
*/
Oh, and actually, the whole statement is this:
"The difference between a double and a float is that doubles are designed to be bigger than floats (basically twice (double) as big). Therefore a double is capable of storing a decimal value that has a significantly higher level of precision than a float."
That statement is in fact true: doubles are designed to be bigger than floats. "Therefore a double is capable of storing a decimal value that has a significantly higher level of precision than a float." See, that is in fact true. As you suggested, consult float.h for details. Or the standard. The minimum significant digits, FLT_DIG and DBL_DIG, are required to be at least 6 and 10 respectively, which is close to twice the size. I'd say that is both significant and "basically double".
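If you want to see what your own implementation actually guarantees, rather than just the standard's minimums, a short check like this (a sketch using only the standard float.h macros) will print them:
Code:
#include <stdio.h>
#include <float.h>

int main( void )
{
    /* Decimal digits of precision this implementation guarantees. */
    printf( "FLT_DIG: %d\n", FLT_DIG );
    printf( "DBL_DIG: %d\n", DBL_DIG );

    return 0;
}
On a typical IEEE 754 implementation that prints 6 and 15, which only strengthens the point.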


Quzah.