# Thread: Some help developing a super-fast integer test?

1. ## Some help developing a super-fast integer test?

Here's the deal: I'm a little burned out of late and so I'm hoping some of you here can help out in developing a really fast test using only (or mostly) bitwise operators. It's not that I can't figure it out or anything...just a bit too bleary to channel those creative forces to pull it all together.

The mathematical description is actually quite simple (does vbulletin support math markup?). In pseudocode:

Code:
```
constraint(n){ 2^((n-1)/2) % n == (n-1) }

special(n){ constraint(n) && constraint((n-1)/2) }
```
Alternatively (and probably more concisely), "constraint" could be defined as:

Code:
`constraint(n){ sqrt(2^(n-1)) == -1 (mod n) }`
...where the above is obviously a modular square root.

Oh, and the implementation should ideally be bit-width-neutral (i.e., not tailored to 32 bits or what have you).

Any and all help would be greatly appreciated!

2. Wait, don't most compilers automatically rewrite arithmetic as bitwise operations whenever possible? Have you tried compiling with -O3?

3. Originally Posted by MutantJohn:
> Wait, don't most compilers automatically rewrite arithmetic as bitwise operations whenever possible? Have you tried compiling with -O3?
In this case, it's not so simple. The problem here basically boils down to working out an efficient modular exponentiation scheme (that is, 'n' can be quite large).

4. Well, I'm like 100% certain you know that 2^n is easily written in code as 1 << n.

And then n/2 is easily n >> 1.

n - 1 is a pretty quick operation too, as it usually just means clearing the lowest bit. What I mean is, 1111 - 0001 is 1110, or 8+4+2+1 - 1 = 8+4+2 = 14. Oh, the only problem would be when you have 1110 - 0001. Ooh, that'd be lame. What is that, 14 - 1 = 13, which is 8+4+1, so you get 1101. Idk, can computers follow the rules of binary addition and subtraction quickly?

But for the modulo part, that's trickier. I'd imagine that division is still pretty quick but is this where your code is slow?

Like, n mod m is supposed to just give the remainder. So you have n = x*m + y where x is a natural number. To get y, you want the x such that n is in-between x*m and (x+1)*m. Then y is simply n - x*m.

Idk if any of this is helpful but can I just ask, why is the current code you have not fast enough? This seems like such a quick equation anyway.

5. Originally Posted by MutantJohn:
> Well, I'm like 100% certain you know that 2^n is easily written in code as 1 << n.
Right, but now imagine that n is, say, 347289636918. It's not really feasible to do 1 << (n >> 1) at that sort of magnitude, even if arbitrary-precision integers are available. The approach I'm using now is basically just exponentiation-by-squaring, which works okay for normal machine word sizes, but doesn't account for multiplicative overflow (essentially halving the effective machine word size) and doesn't scale too well to arbitrary-precision integers. Hence the need for a high-performance algorithm...

6. Originally Posted by MutantJohn:
> I'm like 100% certain you know that 2^n is easily written in code as 1 << n.
True, but there's a caveat: n must be within a rather small range if you're working with a built-in integer type. Hence, performing modular exponentiation the straightforward way with this only works if n is sufficiently small. If you're working with an arbitrary-precision math library, then it would work to compute modular exponentiation with a large n (within the limits of the library and hardware), but it would be suboptimal (thus a bignum library might provide separate functionality for computing modular exponentiation).

Originally Posted by MutantJohn:
> And then n/2 is easily n >> 1.
This optimisation is the kind that any optimising compiler will perform.

7. Originally Posted by laserlight:
> This optimisation is the kind that any optimising compiler will perform.
hodorcc doesn't do it

8. Originally Posted by Hodor:
> hodorcc doesn't do it

Actually, wouldn't some optimizations be unavoidable, in that the underlying instruction must be the same whether you wrote n/2 or n >> 1? The number would nevertheless receive a new value the same way. (Another way to look at it is to say that there is no optimization.)

9. I believe a loop is in order. All methods for efficient modular exponentiation I know about, including Montgomery multiplication, involve a loop of some kind. It's efficient memory-wise and fast to compute. Wikipedia has an entry on a seemingly fast method under the Modular Exponentiation page.

Is this for some work on cryptography you are doing? If so, you should check the entry on Montgomery Multiplication (MM) under the See Also section. Use it for moduli of 512 bits or larger, but not for smaller moduli! It's considerably less efficient than the right-to-left binary method above. There is a paper somewhere on the internet on a variant called Almost Montgomery Multiplication (AMM), which is more efficient than MM. I'm having trouble using Google at the moment (something to do with my ISP router), but you shouldn't have much difficulty finding it.

10. Originally Posted by Mario F.:
> Is this for some work on cryptography you are doing?
No, it's a new mathematical conjecture of mine. Basically, the conjecture is that any number that passes the "special" test is definitely prime (some primes and all composites fail the test). No counterexamples found thus far (none below 2^64, at least).

11. Originally Posted by Sebastiani:
> No, it's a new mathematical conjecture of mine. Basically, the conjecture is that any number that passes the "special" test is definitely prime (some primes and all composites fail the test). No counterexamples found thus far (none below 2^64, at least).
So your new conjecture is a Mersenne primality test?

12. Originally Posted by Yarin:
> So your new conjecture is a Mersenne primality test?
No, a Mersenne prime is always of the form 2^n - 1, whereas the values that pass this test can take many other forms.