If you want to start at a random position at the edge of the grid:

There are 2*(width - 1) + 2*(height - 1) cells on the edge of a rectangular width × height grid. (Each corner belongs to two sides, so be careful not to count the corners twice.) For a 10×10 grid, that gives you 36 cells to pick from. In other words:

Code:

n = rand() % (2*width + 2*height - 4);
if (n < width) {
    /* Top row, including both top corners */
    x = n;
    y = 0;
} else if (n < width + height - 1) {
    /* Right column, excluding the top-right corner */
    x = width - 1;
    y = n - (width - 1);
} else if (n < 2*width + height - 3) {
    /* Bottom row, excluding both bottom corners */
    x = 2*width + height - 3 - n;
    y = height - 1;
} else {
    /* Left column, excluding the top-left corner */
    x = 0;
    y = 2*width + 2*height - 4 - n;
}
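
If you want to convince yourself that the mapping above hits every edge cell exactly once, a brute-force check is easy to write (a sketch; the function names are mine, and the logic is the same as above with `rand()` removed):

```c
#include <assert.h>

/* Map n in [0, 2*width + 2*height - 4) to an edge cell; same logic as above. */
static void edge_cell(int n, int width, int height, int *x, int *y)
{
    if (n < width) {
        *x = n;
        *y = 0;
    } else if (n < width + height - 1) {
        *x = width - 1;
        *y = n - (width - 1);
    } else if (n < 2*width + height - 3) {
        *x = 2*width + height - 3 - n;
        *y = height - 1;
    } else {
        *x = 0;
        *y = 2*width + 2*height - 4 - n;
    }
}

/* Returns 1 if every edge cell of a width x height grid is produced exactly once. */
static int edge_mapping_is_bijection(int width, int height)
{
    int hits[32][32] = {{0}};  /* big enough for small test grids */
    int n, x, y;

    for (n = 0; n < 2*width + 2*height - 4; n++) {
        edge_cell(n, width, height, &x, &y);
        hits[y][x]++;
    }
    for (y = 0; y < height; y++)
        for (x = 0; x < width; x++) {
            int on_edge = (x == 0 || y == 0 || x == width - 1 || y == height - 1);
            if (hits[y][x] != (on_edge ? 1 : 0))
                return 0;
        }
    return 1;
}
```

For a 10×10 grid this confirms each of the 36 edge cells is covered exactly once, and no interior cell is ever hit.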

Also, the rand() function provided by standard C libraries is a terribly bad random number generator. It is typically a linear congruential pseudorandom number generator with a relatively short period, and is known to cause problems due to its periodicity.

Instead, I recommend using the Xorshift algorithm. It is simple, fast, and has 128 bits of state you can initialize however you want (as long as it is not all zeros). For brevity, here is the Wikipedia example implementation slightly rewritten to make seeding possible:

Code:

#include <stdint.h>

/* xorshift128 state; must never be all zeros. */
static uint32_t prng_state[4] = {
    123456789U,
    362436069U,
    521288629U,
    88675123U
};

/* Marsaglia's xorshift128: returns a pseudorandom 32-bit unsigned integer. */
static inline uint32_t prng_u32(void)
{
    uint32_t temp;

    temp = prng_state[0] ^ (prng_state[0] << 11U);
    prng_state[0] = prng_state[1];
    prng_state[1] = prng_state[2];
    prng_state[2] = prng_state[3];
    return prng_state[3] = prng_state[3] ^ (prng_state[3] >> 19U) ^ temp ^ (temp >> 8U);
}

On Linux, I recommend simply reading sizeof prng_state bytes from /dev/urandom, rereading until at least one of the four values is nonzero. This yields a very reliable random seed.
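
That seeding could look something like this (a sketch; error handling kept minimal, and prng_seed_urandom is just my name for it):

```c
#include <stdio.h>
#include <stdint.h>

/* Fill the four-word xorshift state from /dev/urandom, rereading until
 * at least one word is nonzero (an all-zero state would make xorshift
 * return zero forever). Returns 0 on success, -1 on error. */
static int prng_seed_urandom(uint32_t state[4])
{
    FILE *in = fopen("/dev/urandom", "rb");

    if (!in)
        return -1;

    do {
        if (fread(state, sizeof state[0], 4, in) != 4) {
            fclose(in);
            return -1;
        }
    } while (!(state[0] || state[1] || state[2] || state[3]));

    fclose(in);
    return 0;
}
```

Call it once at program start, before drawing any numbers.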

While one can pretty safely use prng_u32() % *range* to generate a pseudorandom number between 0 and *range*-1 inclusive, there is a very slight bias towards lower values unless *range* is a power of two. The bias is just ((1<<32) % *range*) / (1<<32); it only becomes significant for large *range*s. For example, at *range* = 16777217, you get zero 0.39% more often than you get 16777215.
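
If you want to see the size of the bias for a given *range*, the arithmetic is easy to do in code (a sketch; modulo_bias is just a name I made up, and *range* must be at least 1):

```c
#include <stdint.h>

/* Relative excess frequency of the over-represented low values when
 * reducing a full 32-bit generator with x % range. Returns 0 when
 * range is a power of two (then 2^32 divides evenly and there is no bias). */
static double modulo_bias(uint32_t range)
{
    uint64_t total = (uint64_t)1 << 32;

    if (total % range == 0)
        return 0.0;

    /* Values below (total % range) occur (total / range + 1) times each;
     * the remaining values occur (total / range) times each. */
    return (double)(total / range + 1) / (double)(total / range) - 1.0;
}
```

For *range* = 16777217 this gives 256/255 - 1, i.e. the 0.39% mentioned above.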

Like I said, that does not affect most applications, and definitely not a random walker such as the one discussed in this thread.

In almost all applications, a small bias towards lower values should not matter; it is within the noise of the distribution anyway. However, if you do *statistics* or *Monte Carlo* work, you really do need a uniform distribution. In that case, I recommend tacking on a simple exclusion (rejection sampling) method:

Code:

/* Smallest mask of the form 2^k - 1 that is >= range. */
static uint32_t prng_bitmask(uint32_t range)
{
    range |= range >> 16U;
    range |= range >> 8U;
    range |= range >> 4U;
    range |= range >> 2U;
    range |= range >> 1U;
    return range;
}
/* Generate a uniform pseudorandom integer [min, max),
 * or [max, min) if max < min. If max == min, always returns min.
 *
 * Note: This is limited to a 32-bit range, i.e. |max - min| <= 4294967295.
 */
int prng_int(const int min, const int max)
{
    if (min < max) {
        const uint32_t range = (uint32_t)(max - min);
        const uint32_t mask = prng_bitmask(range);
        uint32_t value;
        do {
            value = prng_u32() & mask;
        } while (value >= range);
        return min + (int)value;
    } else if (max < min) {
        const uint32_t range = (uint32_t)(min - max);
        const uint32_t mask = prng_bitmask(range);
        uint32_t value;
        do {
            value = prng_u32() & mask;
        } while (value >= range);
        return max + (int)value;
    } else
        return min;
}

It works for the full 32-bit integer range, and max and min can be specified in either order. The helper function generates a bit mask of all the bits needed to represent any value within the range (shifted to start from zero): all the low bits set, all the high bits clear, so the mask is always greater than or equal to the largest value within the range.

The loop generates an unsigned random number restricted to those bits, retrying until it gets a value that fits within the range. Since the bitmask is never larger than twice the range, the loop rejects at most half the candidate values on average. If you bother to test it, it is still extremely fast; in many cases faster than the standard C library rand() function.
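
To make the helper concrete, here are a few of the values it produces (the function is repeated from above so the snippet stands alone):

```c
#include <stdint.h>

/* Smallest mask of the form 2^k - 1 that is >= range; same as above. */
static uint32_t prng_bitmask(uint32_t range)
{
    range |= range >> 16U;
    range |= range >> 8U;
    range |= range >> 4U;
    range |= range >> 2U;
    range |= range >> 1U;
    return range;
}

/* Examples:
 *   prng_bitmask(1)   == 1
 *   prng_bitmask(5)   == 7    (5 = 0b101  -> 0b111)
 *   prng_bitmask(8)   == 15   (8 = 0b1000 -> 0b1111)
 *   prng_bitmask(100) == 127
 */
```

Note that for an exact power of two like 8 the mask is 15, not 7, so even then roughly half the candidates get rejected; the result is still correct, just marginally slower than it needs to be in that one case.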

For those who are interested in pseudorandom number generation: the exclusion method does not guarantee a uniform distribution by itself; it only guarantees a uniform distribution when the underlying pseudorandom number generator produces a uniform distribution over a known number of bits (thus its range must be a power of two). Xorshift does pass the Diehard tests, so I'd trust it for statistics or Monte Carlo methods.
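
To tie it back to the original question, here is a quick self-test of the whole pipeline (prng_u32 and prng_bitmask copied from above, prng_int condensed slightly; die_rolls_look_sane is just my name for the check):

```c
#include <stdint.h>

static uint32_t prng_state[4] = { 123456789U, 362436069U, 521288629U, 88675123U };

static uint32_t prng_u32(void)
{
    uint32_t temp = prng_state[0] ^ (prng_state[0] << 11U);
    prng_state[0] = prng_state[1];
    prng_state[1] = prng_state[2];
    prng_state[2] = prng_state[3];
    return prng_state[3] = prng_state[3] ^ (prng_state[3] >> 19U) ^ temp ^ (temp >> 8U);
}

static uint32_t prng_bitmask(uint32_t range)
{
    range |= range >> 16U;
    range |= range >> 8U;
    range |= range >> 4U;
    range |= range >> 2U;
    range |= range >> 1U;
    return range;
}

static int prng_int(const int min, const int max)
{
    if (min < max) {
        const uint32_t range = (uint32_t)(max - min);
        const uint32_t mask = prng_bitmask(range);
        uint32_t value;
        do {
            value = prng_u32() & mask;
        } while (value >= range);
        return min + (int)value;
    } else if (max < min) {
        return prng_int(max, min);  /* argument order does not matter */
    }
    return min;
}

/* Check that 10000 draws of prng_int(0, 6) stay in 0..5 and hit every value. */
static int die_rolls_look_sane(void)
{
    int counts[6] = {0};
    int i;

    for (i = 0; i < 10000; i++) {
        int v = prng_int(0, 6);
        if (v < 0 || v > 5)
            return 0;
        counts[v]++;
    }
    for (i = 0; i < 6; i++)
        if (counts[i] == 0)
            return 0;
    return 1;
}
```

For the random walker, you would simply replace the rand() % (2*width + 2*height - 4) at the top of this post with prng_int(0, 2*width + 2*height - 4).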