Hi there,
A while ago I read about an international treaty that supposedly limits any device that calculates logarithms to 40 decimal digits of precision.
Anyway, I was wondering how relevant this treaty is to today's computers and software. Does anyone have any knowledge of this? I believe it is still in force.
On a related note, does anyone know what precision is available for extended floating-point (long double in C) calculations on 64-bit (PC) processors?
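For what it's worth, one rough way to check on a particular compiler/machine is to print the standard <float.h> limits; the snippet below is just a small sketch using the standard LDBL_* macros, and the numbers it reports will depend on the toolchain (e.g. GCC on x86-64 typically maps long double to the 80-bit x87 extended format, while MSVC treats it the same as double):

/* Report what "long double" actually provides on this compiler/CPU.
 * Uses only standard C99 <float.h> macros, nothing platform-specific. */
#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("sizeof(long double) = %zu bytes\n", sizeof(long double));
    printf("LDBL_MANT_DIG       = %d mantissa bits\n", LDBL_MANT_DIG);
    printf("LDBL_DIG            = %d decimal digits\n", LDBL_DIG);
    printf("LDBL_EPSILON        = %Lg\n", LDBL_EPSILON);
    printf("LDBL_MAX            = %Lg\n", LDBL_MAX);
    return 0;
}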