You believe wrong.
Originally Posted by robatino
All problems in computer science can be solved by another level of indirection,
except for the problem of too many layers of indirection.
– David J. Wheeler
Would you mind elaborating a bit (explaining exactly which statement is wrong, and why)?
You should search the forum - it was discussed several times why == cannot be used with floats.
Even float a = 1.0
and double b = 1.0 can be represented differently.
First of all, they'll probably be stored as
0.1*10^1
Then you should convert 0.1 into binary format and see whether it has the same representation in a different number of bits...
Then convert back and see if this number is still 1.
For a double variable x, one can also compare x with either floor(x) or ceil(x) from <math.h> using == to see if the fractional part is zero. For this to work, though, one must assume that the double type has enough precision left over after representing the integer part to at least tell whether the fractional part is nonzero. I think this can only be done reliably by knowing something about the specific problem. But the "delta" method has the same problem - one has to use such knowledge in order to choose delta. It's not enough to just set it to machine precision, if that precision isn't enough.
OK, I understand now - if the number is normalized so the mantissa represents a number between 0 and 1, then that number has to be exactly representable in binary, which it probably won't be. This issue would also invalidate my later suggestion of comparing with floor() or ceil().
Originally Posted by vart
Actually, upon further thought, I think you're wrong here - I believe the normalization is by a power of 2, not a power of 10. This means that it would not prevent an integer of sufficiently small absolute value from being represented exactly. For example, see
http://en.wikipedia.org/wiki/Floatin...ice_properties
although it's a bit wrong when it says "any integer less than or equal to" without mentioning absolute value.
I could leave you with your misunderstandings... But you started to link this thread from other threads. That could lead to misinformation for other readers.
When you write
a = 1.0, the value 1 is never stored as is.
Some floating-point operations are applied.
When you print the stored value, some other operations are applied.
When floating-point arithmetic is involved, you cannot be sure that 1/2.0*2.0 == 1.
Never.
So you never use == for floats.
It is simple.
Search the forum for additional info.
I'd agree with vart. There are no guarantees as far as floating point arithmetic goes.
Teacher: "You connect with Internet Explorer, but what is your browser? You know, Yahoo, Webcrawler...?" It's great to see the educational system moving in the right direction
Yes, often type double can accurately represent even more integer values than a plain int, but it's not required. You can check float.h for info like DBL_MANT_DIG and others.
Originally Posted by robatino
This means that it would not prevent an integer of sufficiently small absolute value from being represented exactly.
That's a different point. Whilst it's probably possible to accurately represent the values of 0.5 and/or 1 in a float/double, whether or not the computation results in those values is another matter. If the above calculation were more complex, possible answers could be anywhere between 0.99... and 1.00...1, which may become 0 or 1 if converted to int.
Originally Posted by vart
When float arithmetics is applied you cannot be sure that 1/2.0*2.0 == 1
Nevertheless, as stated above, in many cases you could store an integer (e.g. 5) in a double and retrieve the same integer value later. That is, it would not turn into 4.9999999 on many implementations.
If the machine uses IEEE 754 arithmetic (which is not guaranteed by the standard, but which is true on most platforms), then one can be sure that 1/2.0*2.0 == 1, or that any integer of sufficiently small absolute value can be represented exactly. For example, consider the following program to test a double to see if it's an integer (I wrote this in C++ but it's easy to convert):
Code:
#include <iostream>
#include <cmath>

int main()
{
    double d;
    while (true) {
        std::cout << "Input double number d: ";
        std::cin >> d;
        if (d == floor(d))
            std::cout << "d is an integer\n";
        else
            std::cout << "d is not an integer\n";
    }
}
I used the dreaded "==". It works for inputs up to about 15 or 16 decimal digits, assuming the arithmetic is IEEE 754 (if the input is longer than that, it thinks it's always an integer). Suppose I modified it to use the delta method (either on the difference between d and floor(d), or the difference between 1 and d/floor(d)). Okay, how do I choose delta? Is there a portable way? No. My point is that this is a special case, and the general rule "never use == for floats" doesn't necessarily apply.
Last edited by robatino; 01-21-2007 at 01:06 AM.
Originally Posted by robatino
how do I choose delta?
It's often a percentage of (one of) the numbers you're trying to compare. Thus it would vary automatically depending on whether the number is 0.0000324 or 3x10^56.
Originally Posted by robatino
then one can be sure that 1/2.0*2.0 == 1
That doesn't prove anything, the equation is far too simple. See the code below for another equation.
In your test program, you're taking a value from stdin, which is different from actually computing one. Often, a calculation will not result in the actual correct value, even though that correct value can be represented. So a calculation might result in 0.99999 when it should result in 1.
Note that the test will fail even at very low values of i (far before a, b or c can no longer be represented accurately. Try it with long double too if you wish).
Code:
#include <stdio.h>

int main(void)
{
    int i;
    double a = 1, b = 1, c = 1;

    for (i = 0; i < 10000; i++) {
        if ((a + b) * c / (a * c + b * c) == 1)
            ; /* success */
        else
            printf("Failed at i=%d\n", i);
        a *= 1.001;
        b += 0.05;
        c *= 0.9998;
    }
    return 0;
}
I agree that in the general case, comparing two arbitrary floating-point numbers using == is bad (as your code shows). Actually, 1.001, 0.05, and 0.9998 can't be represented accurately, so the expression will be inexact as soon as i == 1. For example, 0.05 equals 1/20 = 1/4 * 1/5 and 1/5 can't be represented accurately. On my machine it fails first at i=4.
Edit: And if the original poster's floating-point expression comes from some similar general expression, then there's no reliable way to determine if it's an integer or not, without using some problem-specific knowledge - just because there's no general way to know how close the expression could be to an integer without actually being one. One way is to try to establish a lower bound delta on how close it could be, and then use the delta method. But even that doesn't work in all cases, since there are situations where the difference could be arbitrarily small without being zero (for example, the value of 1/n gets arbitrarily close to 0 as n -> infinity, but never equals it).
Last edited by robatino; 01-21-2007 at 02:48 AM.
Originally Posted by robatino
Actually, 1.001, 0.05, and 0.9998 can't be represented accurately
Yes, you're right. Bad choice of values on my behalf.
Anyway, despite all of this now being rather unrelated to the original post, I'll post some code I wrote some time ago to look at the computer's float representation. It relies on technically undefined behaviour with regard to union usage in order to get at the representation, but it's unlikely to work incorrectly in practice.
Code:
#include <stdio.h>

union float_or_representation {
    float f;
    unsigned int rep;
};

typedef struct {
    union float_or_representation f_or_rep;
    int sign;
    int exponent;
    int mantissa;
    char binary[33];
} float_internals;

void uncover_float(float_internals *internals)
{
    int i = 0;
    unsigned int temp = internals->f_or_rep.rep; /* technically undefined behaviour */

    /* SIGN */
    if (temp & 2147483648u) {
        internals->sign = -1;
        internals->binary[i] = '1';
    }
    else {
        internals->sign = 1;
        internals->binary[i] = '0';
    }
    temp = temp << 1;

    /* EXPONENT */
    for (i = 1, internals->exponent = 0; i <= 8; i++) {
        if (temp & 2147483648u) {
            internals->binary[i] = '1';
            internals->exponent = internals->exponent * 2 + 1;
        }
        else {
            internals->binary[i] = '0';
            internals->exponent *= 2;
        }
        temp = temp << 1;
    }
    internals->exponent -= 127; /* Offset adjust */

    /* MANTISSA */
    for (i = 9, internals->mantissa = 0; i <= 31; i++) {
        if (temp & 2147483648u) {
            internals->binary[i] = '1';
            internals->mantissa = internals->mantissa * 2 + 1;
        }
        else {
            internals->binary[i] = '0';
            internals->mantissa *= 2;
        }
        temp = temp << 1;
    }
    internals->binary[i] = '\0';

    /* NOTE: We don't cover special cases (i.e. 0, infinity, NaN...) */
}

int main(void)
{
    float_internals internals;

    internals.f_or_rep.f = 0.05; /* set any float */
    uncover_float(&internals);   /* analyze float */

    printf("Float: %f is represented as %s\n Sign: %d [-1 or 1]\n"
           " Exponent: %d\n Mantissa (numerator): %d\n",
           internals.f_or_rep.f, internals.binary, internals.sign,
           internals.exponent, internals.mantissa);
    printf("You can calculate the float by: "
           "float = Sign * 2^Exponent * (1 + Mantissa (numerator) / 8388608)\n"
           "Special cases not covered (i.e. 0, NaN, infinity...)\n");
    return 0;
}
Whilst you know the general rule about not trusting the equality of a floating point number, you've taken things too far by giving specific examples which you claim will always fail. Saying not to trust the exact value would have been correct. But your examples are simply false.
Originally Posted by vart
Robatino's statement about being able to perfectly accurately store any integral value up to +/- 4 million (in IEEE754 format) is totally accurate. In fact, one can go further than that and actually state that any real number of the form m*2^e (where ^ is the power symbol) can be perfectly represented in an IEEE754 floating point number (m and e being integers of limited range, of course). One such example is 5*2^-3, five-eighths, or 0.625. This works because the exponent is base 2, and the most significant bits of the mantissa (including the implicit 1) will be 101 in binary.
One would know this after having written one's own arbitrary-sized integer or floating point libraries, or after reading and fully understanding a description of the IEEE754 floating point format, or after reading up on the differences between Microsoft's new floating point operation settings in VS2005 (fast, strict, precise). I have done all three.
1/2.0*2.0 should ALWAYS exactly equal 1.0. All values are perfectly representable, and none of those operations will incur any precision error. The VS2005 compiler will even reduce it to a single constant at compile time under ALL THREE floating point modes.
Originally Posted by iMalc
5*2^-3, five-eighths, or 0.625
Just to clarify, whilst 5*2^-3 can be accurately represented, it is actually stored as 1.25 * 2^(-1), since the mantissa is a binary fraction. 101 doesn't represent 5; it's 1.25, or 1 + 1/4.