I made this probably a year ago or so, and to be honest I can't be bothered converting its interface to the specifications. However, it may be of interest: it can easily calculate more than 200000 primes above 2^32 in less than 0.4 seconds. It compiles fine on gcc, using the unsigned long long type; I'm not sure about Visual C++.
run it like: segprime2 minimum_value maximum_value output_file
e.g. segprime2 4294967296 4300000000 outfile.txt will generate 227027 primes.
it uses a 'work array' to maximize its range/number size while minimizing memory usage.
>> while this one's is O(n/8) (I am using the O notation correctly, am I not?)
Not exactly. O(n/8) is equivalent to O(n). In big-O notation all constants are ignored. As n increases, an O(n) running time will increase at the same rate as an O(n/8) running time would, which is why they are considered equivalent.
Right, O(n) ~= O(n/8); it's more a case of O(n) != O(log2 n) or O(n^2). If n is 10000 and you divide by 8, you get 1250: not a huge number, not a small number. O(log2 n) for n = 10000 is about 13; O(n^2) is 100000000.
And more importantly, if we double n, O(n) also doubles. O(n^2) is 4x larger, O(log2 n) is one larger.
So the point of big-O is the ability to predict from a small run (low n) how a larger n will behave. This is a bit more difficult on modern machines with caches, virtual memory and other effects, but we can still predict with some accuracy what happens if we, for example, double the amount of data: will the time double, or increase by more or less than that?
>> Only when it is specifically comparing the complexity to the size n of an arbitrary input can the constants be ignored.
But is that not what big-O notation is intended for? If you want to compare running times (which of course is often a relevant endeavor), you shouldn't be using big-O notation.
Except you aren't always comparing actual running times, but computational complexity. So one algorithm may solve the problem in O(n log n) time, while another may solve it in O(n/2 log n) by discarding half the test cases based on some trivially computable feature: e.g. an algorithm that finds all prime numbers between 3 and n (n > 3) by testing every number, versus the same algorithm that only tests the odd numbers in that range.
If I remember correctly, g(x) = O(f(x)) if there exist constants c, d, and e such that c * f(x) + d >= g(x) for all x > e. Thus n = O(n / 8), because 8 * (n / 8) + 0 >= n for all n > 0. (c = 8, d = 0, e = 0)
Congrats on the terrific submissions. I can't find who won - or abachler's source test code. Will someone help me out, please?
Nothing like bumping a year old thread.
I know, but the code's still good :)
Can anyone hook me up? Trying to do a primality program for school on the GPU and I'm currently not winning...