If a number is greater than 2^32 (4294967296), how do I determine whether it is prime or not? Ex: Is 4437864713 a prime number?
Type "prime number algorithm" into google.
Use long long or __int64.
@Salem: I know the algorithm for the problem, but the main problem is how to handle the large number. We can't use the double data type because we need to use the % operator.
Can anyone provide a little code snippet for handling large numbers (> 4294967296) where we can still use the % operator?
Well if a 64-bit type isn't an option, then try a bignum library like The GNU MP Bignum Library
Or roll your own equivalent, if you're only interested in a few specific operations.
Code:unsigned long long number = 0x7FFFFFFFFFFFFFFF;
printf("%llu\n", number);  /* %llu rather than %lld, since the variable is unsigned */
Unless your compiler is an antique, it has a 64 bit integer type.
Check your docs... Different compilers implement 64-bit integers in different ways.
It might be one of...
1) unsigned long long int
2) #include <stdint.h> then use uint64_t
3) unsigned long int
The easiest way to tell is to try the different combinations and use sizeof()... a 64-bit integer will report 8 instead of 4.
Try...
Code:#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t x;
    x = 600851475143;
    printf("%lld", x);
    return 0;
}
> When I use 600851475 it gives the result. I am using Code::Blocks on Windows.
Then unfortunately, you HAVE to use the non-portable Microsoft conversion formats.
Format Specification Fields: printf and wprintf Functions (CRT)
Although the compiler is GCC, the 'C' run-time library is from Microsoft, so you get all the bugs/features of that library.
One of the "bugs" being very poor support for C99 types.
I tried the following on Windows (Code::Blocks) and Linux (gcc); the results are the same in both cases:
Code:#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t x;
    unsigned long long y;
    unsigned long int z;
    printf("%d\n", (int)sizeof(x));  /* cast: sizeof yields size_t, not int */
    printf("%d\n", (int)sizeof(y));
    printf("%d\n", (int)sizeof(z));
    return 0;
}
Output:
8
8
4
But when I try to assign the value 600851475143 to x or y, I get a compile-time error:
Quote:
error: integer constant is too large for "long" type
That's my problem: why does the compiler show such an error when the number is far smaller than 2^64?
One more thing: long is expected to have a size of 8 bytes, but here we get 4 bytes. Why?
Use a bignum library. Bignum libraries allow you to use arbitrarily large numbers. For Linux, use GMP. For Windows, I prefer iMalc's library. You can get GMP through your distro's package management system. You can get iMalc's library at his website, Useful classes; you want BigInt or VarBigInt.
long is generally only 32 bits... you need long long to get 64.
That doesn't make any sense... your number is only 40 bits; there's no reason it won't fit in a uint64_t... unless your compiler is all messed up. Did you try this on both Windows and Linux?
It works here on both VC++ (with my re-worked header file) and on PellesC ....
Yes, I think I will have to go with GMP...
Please suggest other solutions to my problem...
If you are going to use GMP, I recommend you use C++. The C interface to GMP is... irritating. With the C++ interface, they have overloaded operators and other niceties impossible in C. I also forgot to mention that iMalc's library is in C++, with no C interface, AFAIK.
Thanks, "CommonTater"...
On the Linux system it gives a warning message, while on the Windows system it gives an error message when I use GCC; with the VC++ compiler it's all fine.
Thanks everyone :-)
You could try to type your constant...
Code:unsigned long long x = 600851475143LL;
I'm surprised that after this many posts nobody outright knew that you simply cannot enter a constant larger than 2^32-1 without specifying a type suffix (at least under C90 rules; a C99 compiler will promote an unsuffixed decimal constant to long long).
In this case I would use ULL though, since you're assigning to an unsigned long long.
If you want to do primality tests for numbers between 2^32 and 2^64, try using the deterministic version of the Miller-Rabin test with the first 13 primes.
So I read somewhere that the "absolute" theoretical limit is 2(ln N)^2; for a 64-bit number, that would equate to trying all bases below 3935. But then the maximum is apparently much lower than that (according to Pomerance, Selfridge, et al.). I wonder why there is such a huge gap between the two?
I've been wondering that for a little while myself.
I think it must be that the theory was enough to prove that 2(ln n)^2 is sufficient, but in practice far fewer bases work, and no smaller bound has been proven yet.
I'm busy trying to implement the BPSW test myself. Just have the actual Lucas pseudoprime test left to get working.