Dear C experts,

Could anyone tell me what the numerical precision of a double is in C? I have read about the smallest and largest numbers a double can store, but what I really need to know is how many significant digits are "to be trusted". Is it a 16-digit mantissa in scientific notation?

Besides wanting to understand this issue for its own sake, I have a practical need to know: I want a function to compare doubles. Something like

Code:
#define Abs(x)    ((x) < 0 ? -(x) : (x))
#define Max(a, b) ((a) > (b) ? (a) : (b))

/* Relative difference: |a - b| scaled by the larger magnitude,
 * so the result is comparable across very large and very small values. */
double RelativeDifference(double a, double b)
{
	double c = Abs(a);
	double d = Abs(b);
	d = Max(c, d);
	return d == 0.0 ? 0.0 : Abs(a - b) / d;
}
So, a good condition for two doubles to be the same would be

Code:
if ( RelativeDifference(x,y) < TOLERANCE )
Thus, what is a good value for TOLERANCE? 1e-16?

Thank you very much,

mc61