1. Probability of repetition

Seeding rand with a specific value is supposed to ensure that every time you call rand(), you get the same sequence. I don't know if this is actually true, once you start taking other people's hardware into account.

Let's say I write a program, and compile it, and give you the .exe. You and I both run it. Are we guaranteed to get the same sequence of numbers? That's the basis for this question:

Take a series of N bits. Make a square out of it. Let's say we have 8192 bytes, which is 65,536 bits (the range of an unsigned short). It looks like this:
Code:
`unsigned char bits[ 256 ][ 32 ] = { ...all of our bits... };`
It doesn't matter what we fill it in with. Just fill it with stuff, that way it's the same for everyone. It's static, constant, and we never change it. Everyone has this same block of bits.

Now we all start at the same point, doesn't matter where. Let's say we 'seed' with 12345. First, we find the 12345th bit. Do a bit of math; that should put us at, oh, let's say:

12345 / 256 = 48
256 * 48 = 12288
12345 - 12288 = 57
57 / 8 = 7
8*7 = 56
57 - 56 = 1

So what's that give us? The 1st bit of the 7th byte in the 48th row? Good enough. So that's where we start. At that point, we take 2 bits:
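The arithmetic above is just integer division and remainders. Here it is as a C helper, assuming only 0-based indexing into `unsigned char bits[256][32]`:

```c
/* Locate the nth bit inside unsigned char bits[256][32]:
   each row holds 32 bytes = 256 bits. */
void locate_bit(int n, int *row, int *byte, int *bit)
{
    *row = n / 256;         /* which row of the 2D array  */
    int rem = n % 256;      /* bit offset within that row */
    *byte = rem / 8;        /* which byte of the row      */
    *bit = rem % 8;         /* which bit of that byte     */
}
```

For n = 12345 this lands on row 48, byte 7, bit 1, matching the numbers above.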

01 /* 00 north, 01 east, 10 south, 11 west */

Doesn't matter what it is. That tells us which way we need to go. Here it says we take the integer you want from the bits to the east of where we are. You wanted an integer, so we take the next 32 bits, copy them off into your return value, and there's your rand. We save where we are for the next call to rand.

The next call picks the first 2 bits from where we left off, to see where to go next. Doesn't matter what it is, that's what we use. It also takes the 2 bits after that, to skip that direction that many bits. I could make that number be the number of bytes we skip, the number of rows, whatever. Doesn't matter, as long as it is consistent. Then I give you your number, and repeat this process for every rand call.
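A sketch of what one call might look like in C. The thread leaves several details open, so these are all my assumptions, not part of the proposal: the walk wraps around the edges, the value and skip bits are read heading east, and the skip amount is 2 bits wide.

```c
#include <stdint.h>

#define ROWS 256
#define COLS 256                           /* 256 bits per row (32 bytes * 8) */

static unsigned char pool[ROWS][COLS / 8]; /* the shared constant block */
static int cur_row, cur_col;               /* saved between calls       */

static int get_bit(int r, int c)
{
    r = (r % ROWS + ROWS) % ROWS;          /* wrap at the edges (assumption) */
    c = (c % COLS + COLS) % COLS;
    return (pool[r][c / 8] >> (7 - c % 8)) & 1;
}

/* Read n bits heading east from the current position, advancing it. */
static uint32_t take_bits(int n)
{
    uint32_t v = 0;
    while (n--)
        v = (v << 1) | (uint32_t)get_bit(cur_row, cur_col++);
    return v;
}

uint32_t walk_rand(void)
{
    static const int dr[4] = { -1, 0, 1, 0 }; /* 00 N, 01 E, 10 S, 11 W */
    static const int dc[4] = { 0, 1, 0, -1 };
    int dir = (int)take_bits(2);              /* which way to head        */
    int skip = (int)take_bits(2);             /* how far to move that way */
    cur_row += dr[dir] * skip;
    cur_col += dc[dir] * skip;
    return take_bits(32);                     /* the caller's number      */
}
```

Note that `pool` never changes; only `cur_row`/`cur_col` do, so the whole state of the generator is a single position in the grid.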

My question is this: Given consecutive calls to rand, how likely am I to encounter running into the same loop, or short series of numbers? Like this:

rand 0 = go to the east, pop out your number
rand 1 = go north, pop out your number
rand 2 = go west, ...
rand 3 = go south

Damnit, I'm in a loop now. I will always get the same four consecutive numbers!

Now it doesn't have to be an immediate loop of four. I'm just wondering if I do the above, how likely am I to get stuck in a pattern, and how likely is it to be a small noticeable pattern? I could fiddle with those numbers a bit. I don't need to use 2 bits. I could use 11 or 17 or whatever to tell me where to skip.

Say I just call rand in a loop, plotting x/y on a graph. With a small set of bits (~65k), how likely am I to encounter a repeating loop?

Quzah.

2. > Let's say I write a program, and compile it, and give you the .exe. You and I both run it. Are we guaranteed to get the same sequence of numbers?
Assuming you set the seed to a known value, it absolutely should, but it depends on the nature of the algorithm and the binary. (I'm assuming here that the standard doesn't say that `rand` must not sample entropy from other sources. And of course, if you send me an executable that links a DLL, I may have a different version of that DLL.)

Basically, if a PRNG is initialized to a known value, the implementation isn't broken, and the implementation doesn't sample entropy from another source, you get the same sequence every time. That's almost the definition of "PRNG".

> Given consecutive calls to rand, how likely am I to encounter running into the same loop, or short series of numbers?
It depends on the implementation, but assuming a decent algorithm, basically you can expect a period of 2^log2(RAND_MAX + 1), 2^(log2(RAND_MAX + 1) - 1), or 2^(log2(RAND_MAX + 1) / 2).

At most you will see a period of 2^BITSOFSTATE, but the real period depends on the number of reachable states, so this is usually artificially limited. (For example, to prevent an all-zero double word from ever appearing in the state, which for some algorithms would result in an unusually long run of zeros in the output.)

> I'm just wondering if I do the above, how likely am I to get stuck in a pattern, and how likely is it to be a small noticeable pattern?
I can't say without exact details, and depending on the details, it is impossible to say without simply counting the possible states.

That said, I believe it would be somewhat likely. (I'd say you'd have a period of no larger than 2^((XMATRIXSIZE * YMATRIXSIZE) / DIRECTIONBITS) despite using 2^(XMATRIXSIZE * YMATRIXSIZE * 8) bits.)

The algorithm you have proposed samples bits, but it never mutates the state. When it comes to PRNG algorithms, that's a sure sign that you are wasting bits.
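For contrast, here is a tiny generator that does mutate its state on every call: Marsaglia's xorshift32. Each call rewrites the entire 32-bit state, so the generator runs through all 2^32 - 1 nonzero values before repeating, instead of retracing a cursor over fixed bits.

```c
#include <stdint.h>

static uint32_t state = 1;  /* seed with any nonzero value */

uint32_t xorshift32(void)
{
    /* Three shift-and-XOR steps scramble and overwrite the state. */
    state ^= state << 13;
    state ^= state >> 17;
    state ^= state << 5;
    return state;
}
```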

> With a small set of bits (~65k), how likely am I to encounter a repeating loop?
O_o

100%

Well, I may have misunderstood the context.

Do you mean "If I sample only 65k bits from the above algorithm, will I encounter a loop?".

That's much more likely, so I'll go with that instead.

Again, your algorithm never mutates the state, and your selecting field is only 2 bits. I won't worry about the exact math, but I believe your period can be approximated by assuming that your matrix was itself filled by a random algorithm with equal distribution. (Ignoring details about the size and such.) With those numbers and those assumptions, you have a 1 in 4 chance of hitting any given direction, and because those odds always hold, you always have a 1 in 4 chance of getting the opposite direction immediately. I'd say you have a 1 in 16 chance of having a period of only 2! I'm sure you can see where I'm going with this.
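The 1-in-4 backtracking figure is easy to confirm by brute force. With the encoding from the first post (00 north, 01 east, 10 south, 11 west), the opposite of direction d is d ^ 2, and exactly 4 of the 16 possible pairs of consecutive direction reads undo themselves:

```c
/* Count direction pairs (d1, d2) where the second move exactly
   reverses the first: N<->S and E<->W differ by XOR with 0b10. */
int count_reversals(void)
{
    int reversals = 0;
    for (int d1 = 0; d1 < 4; d1++)
        for (int d2 = 0; d2 < 4; d2++)
            if (d2 == (d1 ^ 2))
                reversals++;
    return reversals;   /* 4 out of 16 pairs, i.e. 1 in 4 */
}
```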

Soma

3. Yeah I was thinking it wouldn't work out well. I was just toying with it as I was trying to drift off to sleep last night. I don't know if I explained it well, but here's the idea (now assume this is all bits, not the characters '1' and '0'):
Code:
```
foo[][] =
{
10110101 11010100 00101011 ...
00111010 11101010 11100111 ...
...
}
```
We have a bunch of bits and we treat them as a 2D array of bits. We pick any starting position and pull a number from it (a number being however many bits you want, in any direction you feel like heading). Then you use the next N bits in that direction to tell you where to go from there.

So if I make it really small and readable:
Code:
```
00000000
11111111
10101001
10110100
10111011
00010000
11011111
01010101
```
There's our bits, 8x8. We start at 3/3, and read say 4 bits for our actual number that minirand is returning:
Code:
```
00000000
11111111
10101001
10110100
10111011
00010000
11011111
01010101
```
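Assuming "3/3" means row 3, column 3 with 0-based indexing and an eastward read (the post doesn't pin either down, so both are my assumptions), pulling that 4-bit number looks like this:

```c
/* The 8x8 grid from the post, one row per bit string. */
static const char *grid[8] = {
    "00000000", "11111111", "10101001", "10110100",
    "10111011", "00010000", "11011111", "01010101",
};

/* Read n bits heading east from (row, col). */
unsigned read_bits_east(int row, int col, int n)
{
    unsigned v = 0;
    for (int i = 0; i < n; i++)
        v = (v << 1) | (unsigned)(grid[row][col + i] - '0');
    return v;
}
```

`read_bits_east(3, 3, 4)` picks up the bits 1,0,1,0 from row "10110100", so minirand would hand back 10.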
Then we pull the next 2 bits for the direction to head next:
Code:
```
00000000
11111111
10101001
10110100
10111011
00010000
11011111
01010101
```
That means we pull our next 4 bits (next minirand number) from there, heading north from the southern bit shown in blue.

Anyway, it was a fun idea, even if not very usable.

Quzah.

4. > Let's say I write a program, and compile it, and give you the .exe. You and I both run it. Are we guaranteed to get the same sequence of numbers?
According to C99, the answer is apparently yes.
> The srand function uses the argument as a seed for a new sequence of pseudo-random numbers to be returned by subsequent calls to rand. If srand is then called with the same seed value, the sequence of pseudo-random numbers shall be repeated. If rand is called before any calls to srand have been made, the same sequence shall be generated as when srand is first called with a seed value of 1.
Whether you give the exe to someone else, or run it yourself many times, if you start with the same seed, you're going to get the same sequence.

Now if you give the source code to someone else to compile, the answer is pretty likely to be "no". The standard doesn't say anything about HOW to implement a PRNG. But typically these seem to use LCG generators.

If you provided your own "my_srand()" and "my_rand()", then you would be able to answer "yes" regardless of whether you were distributing source or object code.
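C99 even gives a portable sample implementation alongside the text quoted above; shipping your own copy of something like it pins the sequence down no matter who compiles the code. This is that example as I remember it, so treat the exact constants as worth double-checking against the standard:

```c
static unsigned long int next = 1;

int my_rand(void)        /* RAND_MAX assumed to be 32767 */
{
    next = next * 1103515245 + 12345;
    return (unsigned int)(next / 65536) % 32768;
}

void my_srand(unsigned int seed)
{
    next = seed;
}
```

Because the output only uses low-order bits of `next`, the width of unsigned long doesn't change the stream: my_srand(1) followed by my_rand() gives 16838 wherever you build it.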