1. Originally Posted by laserlight
Since one has to check if the range of integral values required can fit into the floating point variable in question on a given platform, I don't see why one cannot do a similar check to ensure that a point where x+1 doesn't test non-equal to x is never reached.
Something along this line?
Code:
```#include <stdio.h>
#include <float.h>

int main(void)
{
double a = DBL_MAX, b = DBL_MAX - 1;
printf("a = %g, b = %g\n", a, b);
return 0;
}

/* my output
a = 1.79769e+308, b = 1.79769e+308
*/```

2. Originally Posted by laserlight
What you fail to mention, as MadCow257 has pointed out, is that these algorithms determine if a given number is composite or probably prime, not if it is composite or prime. That said, I haven't been doing much reading up on this topic lately, and it appears that there's a certain AKS primality test that is deterministic and yet runs in polynomial time.

Oh, okay.

hmm... many or any?
Clearly the C++ Standard provides for floating point to integer (and vice versa) conversions, though I suppose you mean that there will be problems (e.g. undefined behaviour) if the integral value is larger than what the floating point type can hold in the conversion attempted.

I agree, but then this can be avoided by adding 1: since the result is used as the upper boundary of a range, it need not be exact, merely large enough.
I know I should have been a bit more polite, but post after post about using the slow way he is already using isn't going to help him either.

I think I was careful enough not to overstate the usefulness or functionality of those algorithms. They will determine if a number is definitely composite, but will not determine if it is definitely prime. This is still very useful because it is very fast.

I suggest downloading libmp++ as it uses the Rabin-Miller test, but also actually fully determines if the number is prime, not just likely, iirc. It does other tests to make sure, and uses data types that can represent very large integers, like 1024-bit integers etc. Floating point need not come into it here.

3. Originally Posted by Dave_Sinkula
Something along this line?
Code:
```#include <stdio.h>
#include <float.h>

int main(void)
{
double a = DBL_MAX, b = DBL_MAX - 1;
printf("a = %g, b = %g\n", a, b);
return 0;
}

/* my output
a = 1.79769e+308, b = 1.79769e+308
*/```
Given that a double has only 52+1 bits for its mantissa, subtracting one from such a large number has no effect. Inserting "assert(a == b);" before the return would not cause an assertion failure. Heck, you could even change it to subtract 1000000, still with no effect.

Floating point is not suitable for this application.

4. Originally Posted by laserlight
heh, I suspect that if the OP comes back he/she will end up using a bignum library of some kind, so the whole issue would be avoided by a custom implementation of a square root function for the bignum.
Which comes back to my original point: using floating point (and the standard sqrt() function) is not suitable for this application. We will get away with using floating point only if we are playing with smallish primes. As soon as we start playing with "largish" primes, floating point is not the way to go. And the problem is, the definition of "largish" can be within the range of what can be represented in an integral type.

There's a hole in the bucket, Dear Liza.....

5. Originally Posted by iMalc
I know I should have been a bit more polite, but post after post about using the slow way he is already using isn't going to help him either.
I disagree with your claim that the posts could not help. The original post was about searching for primes, and the posts before yours addressed that (side debate about use of floating point aside).
Originally Posted by iMalc
I think I was careful enough not to overstate the usefulness or functionality of those algorithms. They will determine if a number is definitely composite, but will not determine if it is definitely prime. This is still very useful because it is very fast.
Again, that depends on your application. Getting a fast probabilistic answer (of the form "may be prime") is not useful for all applications. You are assuming such an answer is always suitable. The OP gave no indication to suggest that such tests might be suitable.
Originally Posted by iMalc
I suggest downloading libmp++ as it uses the Rabin-Miller test, but also actually fully determines if the number is prime, not just likely, iirc. It does other tests to make sure, and uses data types that can represent very large integers, like 1024-bit integers etc. Floating point need not come into it here.
Not sure offhand about libmp++, but the Rabin-Miller test is probabilistic. In fact it gives false positives: it will indicate that some composite numbers are prime.

When you actually start applying multiple probabilistic tests "to be sure" that a value is prime, you are essentially trying to eliminate numbers that one test passes as prime but another finds composite. The result is still probabilistic, and there is still a non-zero probability of an incorrect answer. While applying multiple tests can reduce the probability of a false result to near zero, such an approach cannot actually offer a guarantee. And, when working with large values, applying multiple tests is not necessarily faster than more exhaustive approaches.