On a whim, I decided to convert Prelude's Mersenne Twister C library into a C++ class that could work with std::random_shuffle(). Naturally, I discovered how much I did not know about pseudorandom number generators :D

My first question concerns working with random_shuffle(). Section 25.2.11 of the C++ Standard notes:

Quote:

random_shuffle() can take a particular random number generating function object rand such that if n is an argument for rand, with a positive value, that has type iterator_traits<RandomAccessIterator>::difference_type, then rand(n) returns a randomly chosen value, which lies in the interval [0, n), and which has a type that is convertible to iterator_traits<RandomAccessIterator>::difference_type.

Josuttis used ptrdiff_t in The C++ Standard Library, and I decided to follow that (but with a typedef). Is there anything better?

My next question is about generating random numbers within the range [0, n). From what I understand, Mersenne Twister does not suffer from the low-order-bit problem that plagues some linear congruential generators, so is it safe to just take (random_number % n)? I believe there will be a slight distribution problem (it is no longer uniform), but does it really matter? After all, Prelude's uniform deviate solution appears to suffer from the same problem, if my empirical evidence is correct.

For example, suppose I actually have a generator that gives me random integers uniformly distributed in the range [0, 7], and I want the range [0, 3). Using the modulo method, we have:

Code:
0 % 3 = 0
1 % 3 = 1
2 % 3 = 2
3 % 3 = 0
4 % 3 = 1
5 % 3 = 2
6 % 3 = 0
7 % 3 = 1

As such, there is now a probability of 0.375 each to get 0 and 1, but only 0.25 to get 2: the distribution is no longer uniform. Switching to use a uniform deviate as a multiplier:

Code:
0 -> 0.0   => 0.0 * 3   = 0
1 -> 0.125 => 0.125 * 3 = 0
2 -> 0.25  => 0.25 * 3  = 0
3 -> 0.375 => 0.375 * 3 = 1
4 -> 0.5   => 0.5 * 3   = 1
5 -> 0.625 => 0.625 * 3 = 1
6 -> 0.75  => 0.75 * 3  = 2
7 -> 0.875 => 0.875 * 3 = 2

It seems to me that the resulting distribution is still non-uniform, and it must be so, since we are trying to squeeze a range of 8 numbers into a range of 3 numbers: at least one output must occur less often than the rest. My solution was to calculate the largest sub-range that n divides evenly, and simply ignore everything outside it. For the above example, I would reduce the original range to [0, 6) by rejecting any generated number outside that range, and then use the modulo method on the first number generated within range; 6 is calculated as 8 - (8 % 3), since the generator has 8 possible outputs. Is this method correct, and is the overhead imposed worth it to fix the distribution problem (if that is a problem at all)?

I have attached my code; feel free to comment if you see any other problems with it. I am still a little wary of my handling of the conversion from unsigned long to Random::difference_type, since the latter is a typedef for a signed integer (ptrdiff_t).