The only reason to re-seed is because you suspect that the RNG is not being "random enough" anymore. In this case, the solution is not to re-seed, but to get a better RNG.
As an aside, it's usually good to print out, or otherwise log, the random seed being used. If you get a program crash which depends on random data, it will be difficult/impossible to reproduce unless you can generate the same random sequence as before. So it's good to know what the seed data was.
As stated earlier, by your definition of "more random" (i.e. harder to predict), yes because then even if one knows the algorithm, it is more obfuscated and takes longer to figure out. However this is something that you have stated is not really an issue:
Originally Posted by shawnt:
What does it really matter if you can predict the next random number, so long as you are getting a good distribution? Computers are deterministic, and short of using something like the QRBG, which is pulling "random" info from a source outside of the computer, you will never get "truly random" numbers from any algorithm.
You would most definitely find that using the Mersenne Twister in the manner you have to be FAR more "random", because srand/rand by itself is a poor PRNG. I mean no disrespect, but do you know what a Caesar cipher is? If not, please look it up, because that is essentially what you are doing to the result of MT. srand()/rand() always "shifts" MT.out in exactly the same way, so of course it looks more random; you are getting the "more randomness" of the Mersenne Twister and then just "adding 20 to each result" (a WAY oversimplified picture, but hopefully you get my point).
A harmful effect (on the basis that srand()/rand() doesn't actually do anything except camouflage the result of MT) is that it slows your program down for no appreciable gain. Unless of course you feel the need to obfuscate the process.
This slowdown may not be noticeable in the instance of a simple dice rolling simulator for a mere 10,000 iterations, but crank that up to a million iterations (or to magnify it so it is even clearer, a billion iterations) and see what kind of time difference you get between MT.out used raw, and srand(MT.out); rand();. Then compare that difference to the statistical analysis of all of those dice rolls with each method.
You might think that seems absurd, but why slow your program down? In the world of optimization, this would be considered a pessimization: you are using a slower algorithm in place of one that does the exact same job with much better efficiency (i.e. without calling code that merely shifts the result), for no gain.
Last edited by jEssYcAt; 06-10-2008 at 11:49 PM.
abachler: "A great programmer never stops optimizing a piece of code until it consists of nothing but preprocessor directives and comments "
Originally Posted by shawnt:
I am working on a personal project which requires me to generate large amounts of truly random numbers (hint: it's about probabilities).

Wouldn't it be really great if your program allowed you to switch between different PRNGs by doing something as simple as changing a compile switch? You could then see for yourself whether rand gave results that were indistinguishable from those of something better.
You could even select from various PRNGs at compile time. Another one you could use is CryptGenRandom (on Windows).
Surely it's a win-win. You either prove that your compiler's rand is crap, or you can discover that it is not only good enough, but is likely faster too.
My homepage
Advice: Take only as directed - If symptoms persist, please see your debugger
Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"
Good suggestion. One step further would be a runtime option - using a (set of) function pointer(s) to hide the actual RNG. Then you can provide, for example, a command-line switch or a radio-button selection box to choose which to use. You could then also automate the testing and find out at the end of a long run what all the results are.
--
Mats
Compilers can produce warnings - make the compiler programmers happy: Use them!
Please don't PM me for help - and no, I don't do help over instant messengers.
Type-ins are back! Visit Cymon's Games at http://www.cymonsgames.com for a new game every week!
I think we have already put to rest the question as to whether rand coupling adds any value or not. It doesn't.
I believe we have also established that repeatedly re-seeding a PRNG with a random seed is not detrimental to the randomness of a distribution. It's simply redundant code overhead.
Moving on, I will be comparing QRBG's results with the Mersenne Twister's (if I can ever get QRBG to work). I am not interested in the program's efficiency with one implementation vs the other. The main purpose of this project was to use the program as a tool to study the manifestation of probabilities.
So far, I've had to limit my simulation to 10,000 iterations because past that, the results screw up. I'm not sure why this is the case, since all my integer constants are defined as long int. Any ideas?
Lastly, on a physical note, I am now inclined to believe that the only gauge there can be for true randomness is close adherence to the theoretical probabilities of known events. So a randomness source is close to being 'truly random' if it delivers each element of a binary distribution approximately 50% of the time, spread out over a much greater number of trials. That would encompass both even distribution and unpredictability. This is where probabilistic manifestation starts to feel mystical (what ensures that a coin will flip heads 50% of the time?). Of course it's no more mystical than the force of gravity, but there's a certain allure to it, which is what drew me to this project.
Last edited by shawnt; 06-11-2008 at 09:10 AM.
>So far, I've had to limit my simulation to 10,000 iterations because past that, the results screw up.
I bet if you do:
Code:
final_out = MT.out;
this won't happen.
> anyone have any ideas why?
Assuming you're expecting more, the answer will be a bug in your code.
If you can post a short example which demonstrates the problem (like cut out anything which isn't called yet, and any non-critical screen I/O say), then we could probably tell you where you're going wrong.
Look for uninitialised pointers, arrays being overrun etc.
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
You'll need to allocate arrays that large dynamically (I recommend using std::vector to do that for you). Stack space for automatic variables and arrays is only about 1 MB.
I might be wrong.
Quoted more than 1000 times (I hope). Thank you, anon. You sure know how to recognize different types of trees from quite a long way away.
In reality, PRNGs are far more useful. For example, a saved game file such as Warcraft3's can simply store the random number seed used for the game, and then the whole game can be played back later with the exact same random decisions made the whole way through. No need to store every random number result in the file.
You'd be surprised how common that usage is.