Hi there,

Can I assume that a floating-point value (long double) will be encoded the same way on every 32-bit (INTEL) computer? Can I assume that on a 32-bit INTEL machine a long double can store 19 decimal digits reliably? I'm beginning to think not.

I'm trying to convert a long double to a string with 18 digits of total precision (or 17 digits after the decimal point in scientific form). In theory this should be fine: a long double is documented as representing 19 decimal digits accurately, and I only want 18. However, I've found that while this works on most machines, it doesn't work on all of them. So I ended up storing the raw long double values and decoding them myself to see what I was getting.
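The conversion I'm talking about boils down to something like this (just a simplified sketch of the idea, not my actual code):

#include <stdio.h>

int main(void)
{
    long double val = 10066.52L;
    char buf[64];

    /* 18 significant digits: one before the decimal point plus
       17 after it, in scientific notation. */
    sprintf(buf, "%.17Le", val);
    printf("%s\n", buf);

    /* Or 18 significant digits, letting %Lg pick the form. */
    printf("%.18Lg\n", val);

    return 0;
}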

Surprisingly, I found that the bit pattern can differ from machine to machine. For example:

On my own machine:
long double val 10066.52L is represented as (sign) 0| (exp*) 0000000D| (sig) 9D4A147AE147AE14

*the bias has been removed from the exponent value.

On a problem machine:
long double val 10066.52L is represented as 0|0000000D|9D4A147AE147B000

NOTICE that the low-order bits of the significand are different.
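For what it's worth, this is roughly how I'm pulling the bytes out to produce the dumps above (a sketch only; little-endian x86 assumed, and note that sizeof(long double) may be 10, 12 or 16 bytes depending on the compiler, with anything beyond the first 10 bytes being padding):

#include <stdio.h>
#include <string.h>

/* Print the raw bytes of a long double, most significant byte first.
   On 32-bit Intel this is the 80-bit extended format stored
   little-endian; bytes beyond the first 10 are compiler padding. */
static void dump_long_double(long double v)
{
    unsigned char bytes[sizeof(long double)];
    size_t i;

    memcpy(bytes, &v, sizeof v);
    for (i = sizeof v; i-- > 0; )
        printf("%02X", (unsigned)bytes[i]);
    putchar('\n');
}

int main(void)
{
    dump_long_double(10066.52L);
    return 0;
}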

When I decode the above representations on paper (which is hard work), I get:

10066.5199999999999|9957367435854393989

and

10066.5200000000004|3655745685101

respectively.
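(For anyone checking my arithmetic: with the bias already removed, the value works out as significand * 2^(exponent - 63), i.e. 0x9D4A147AE147AE14 / 2^50 in the first case.)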

I've marked with a '|' where the conversion should be truncated and rounded. So you can see that on my machine 10066.52L converts to the string "10066.52", while on the other machine it converts to a head-bangingly frustrating "10066.5200000000004".

In the second example, it appears that the full long double precision available is simply not being used by the CPU.

Is this a flaw in that CPU? Or is this normal behaviour?

Both CPUs are INTEL; the problem CPU is a slightly later model than my own.
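One thing I want to rule out is the FPU having been switched down to 53-bit (double) precision rather than the full 64-bit significand. Something like this ought to show it - a rough sketch assuming a Borland/Microsoft-style _control87() in <float.h> (the precision-control macro names seem to vary a little between compilers, so don't take the names below as gospel):

#include <stdio.h>
#include <float.h>   /* _control87 and MCW_PC / PC_xx - names may differ by compiler */

int main(void)
{
    /* Passing (0, 0) reads the control word without changing anything. */
    unsigned int cw = _control87(0, 0);

    switch (cw & MCW_PC)
    {
    case PC_64:
        printf("Precision control: 64-bit significand (full long double)\n");
        break;
    case PC_53:
        printf("Precision control: 53-bit significand (double)\n");
        break;
    case PC_24:
        printf("Precision control: 24-bit significand (float)\n");
        break;
    }
    return 0;
}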

What's the solution? I'm thinking about lowering the precision I expect from string conversions - what should I lower it to so that I can be sure it will always work?

Any help or comments appreciated.

Thanks

Andy