# Help with grid problem (dynamic programming)

• 05-06-2009
scp89
Help with grid problem (dynamic programming)
Hey, long time reader, first time poster.

I'm working on a dynamic algorithm, and I've hit a roadblock in my thought process. Here's what I'm trying to do:

I'm given an X-by-Y rectangular grid in the form of a boolean array. The grid represents street intersections in a city. The goal is to walk from the South-West corner (0,0) to the North-East corner (X,Y) in the smallest number of blocks. The problem is that any intersection labeled "true" is a dangerous intersection and must be avoided.

I've come up with 2 algorithms:

1. An O(XY) algorithm that finds a safe path, but not the shortest path. It just moves with priorities (1. up, 2. right, 3. left, 4. down), and when it hits dead ends it remembers not to go back.
2. An O(X^2*Y^2) algorithm that tries every legal path and picks the best one.

What I am looking for is an O(XY) algorithm that finds the shortest safe path. I assume this would require dynamic programming. Here's what I've come up with so far:

- The minimum moves in a completely safe grid is X+Y.
- The minimum moves in any grid is X + Y + 2 * (number of needed down and left moves), since every down or left move requires an additional up or right move to regain progress.

So I'm trying to think of a way to find the necessary number of down and left moves, but do it in linear time. This is where I've hit a stumbling block. I've spent 5 hours and about 60 sheets of scrap paper with little progress.

Anyone have any ideas? I'm not asking for code, just a way of going about the problem.
• 05-06-2009
brewbuck
Sounds like you can directly apply any of the pre-existing shortest-path algorithms. Your case isn't really a special case -- since the dangerous intersections must be avoided AT ALL COSTS, it is as if those edges are not even in the graph, and the problem decays to simply finding the shortest path.
• 05-06-2009
scp89
I'm not extremely familiar with shortest-path algorithms. Which one can I use to solve this in O(XY)?
• 05-06-2009
brewbuck
Quote:

Originally Posted by scp89
I'm not extremely familiar with shortest-path algorithms. Which one can I use to solve this in O(XY)?

I don't know if you can do it in O(XY) time. XY is basically the number of vertices (V) in the graph. I don't know of any shortest-path algorithm that scales linearly with V.

You might start here:

Dijkstra's algorithm - Wikipedia, the free encyclopedia

EDIT: Just because I don't know of any doesn't mean they don't exist. I found this:

http://citeseerx.ist.psu.edu/viewdoc...10.1.1.56.4472
• 05-06-2009
scp89
Thanks for your help. There is definitely a solution, as I've found almost the exact same problem in textbooks (dating back to 1998) when googling for an answer.

That's why I think it's not just a matter of translating it into a classic graph problem. I think there's something specific about this problem that makes an O(XY) solution possible.
• 05-06-2009
adak
In the classical knight's tour of a chessboard, Warnsdorff's algorithm can be used to greatly simplify the problem.

It relies on moving to the square which, on the *next* move, looks most favorable (which is, oddly, the move with the most constraints).

I'm wondering if something like that couldn't be used here. Something akin to:

Look at the current choices of blocks to move along. For each of those possible moves, count up the number of possible moves you would have if you were already there.

For each safe next move you find which also leads either to the right or upward, give it 5 points. For each safe move you find which leads left or down, give it 1 point (or whatever value seems to work best when testing).

On the basis of these values, choose the best one. This would be the "score" from a one- or two-ply depth-first search, if I were to use it.
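This one-ply scoring idea could be sketched in C roughly as follows. This is a hypothetical sketch, not from the thread: the weights 5 and 1 come from the post, but the function names, coordinate convention, and fixed grid size are mine.

```c
#include <stdbool.h>

/* grid[y][x] == true means a dangerous intersection.
   W and H are small fixed dimensions for illustration only. */
enum { W = 5, H = 5 };

static bool safe(bool grid[H][W], int x, int y) {
    return x >= 0 && x < W && y >= 0 && y < H && !grid[y][x];
}

/* Score a candidate intersection by looking one ply ahead:
   each safe onward move scores 5 if it goes up or right,
   1 if it goes down or left (weights taken from the post).
   Returns -1 if the candidate itself is unsafe or off-grid. */
int score_move(bool grid[H][W], int x, int y) {
    if (!safe(grid, x, y)) return -1;
    int s = 0;
    if (safe(grid, x, y + 1)) s += 5;   /* up    */
    if (safe(grid, x + 1, y)) s += 5;   /* right */
    if (safe(grid, x, y - 1)) s += 1;   /* down  */
    if (safe(grid, x - 1, y)) s += 1;   /* left  */
    return s;
}
```

The walker would call score_move on each of its current neighbours and step to the highest-scoring one, settling ties with one more ply of lookahead as described below.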

In the case of Warnsdorff's algorithm, it only works 100% correctly if you settle tie scores in this fashion (looking ahead one more ply).

It is an amazing simplification. Check it out on Wikipedia: "Knight's tour". Don't worry about depth-first search, it's very simple. I have a little game that shows it off brilliantly, and you'll get it straightaway.
• 05-06-2009
scp89
adak - Wouldn't evaluating each move to 2 levels bring the algorithm to O(n^2)?

I think there's some insight into this problem that none of us are getting.
• 05-07-2009
adak
Quote:

Originally Posted by scp89
adak - Wouldn't evaluating each move to 2 levels bring the algorithm to O(n^2)?

I think there's some insight into this problem that none of us are getting.

Only in rare cases do you need to go two ply deep, however.

In any case, you'll have to look at least one ply deep for any DP-type algorithm to work.

Since you could have "traps" in the way the unsafe streets were laid out, you can't say that only one ply extra would be enough, either.

I'm quite sure you can't solve this problem without some look-ahead or backtracking. Any rule of thumb you employ without it can be derailed, miserably.

Think about it: couldn't you set up a bunch of unsafe streets in such a pattern that any algorithm lacking lookahead would be utterly sunk?

Of course you could.
• 05-07-2009
scp89
I'm not arguing that you don't look ahead; obviously, blindly choosing a path wouldn't result in an optimal solution or O(n) time. I think we need to look at the problem, and the dangerous intersections, as a whole, and try to find patterns that can tell us where we need to turn from the beginning, not along the way.

For instance, if there is an unsafe intersection in the bottom row, we should start with "up". If every Y value contains an unsafe intersection, and they all occur within 3 rows, that requires a left or a down to avoid (I think).

Something like that, where you look at it as whole and see some clever insights. Maybe I'm wrong, but I don't see how you can traverse this normally in O(n).

Think of the 4 queens problem: it seems tough and O(n^2) at first, until you use some ingenuity and there's an elegant solution.

Maybe I'm wrong, but that's my feeling.

Anyone else have any ideas?
• 05-07-2009
scp89
Could I use breadth-first search, implemented with an adjacency list, which is O(E+V)?
• 05-07-2009
Sfel
You can use breadth-first search, but if you apply it directly on your given matrix, it will be O(XY). If you're given an adjacency list, then it'll be O(V+E).

Consider a queue where you first insert (0,0). At each step, extract one node from the queue, say u. Say the current distance to this node from (0,0) is d[u]. Insert all of u's neighbours Vi (i = 1, 2, 3, 4; 4 neighbours, right?) into the queue, but only those that have d[Vi] > d[u] + 1 (initialize d to a really large value at the start). Update d[Vi] to d[u] + 1. At the end, d[END_NODE] will contain your minimum path length. This takes O(XY) time and O(XY) memory if applied on the matrix.

d can actually be a 2-D array if you keep the matrix, so d[p][q] = minimum path length to node (p, q). Breadth-first guarantees you'll get the minimum.
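A minimal C sketch of the BFS described above, assuming the 5x5 example grid used later in the thread. The names, the array-based queue, and the use of -1 in place of the "really large value" are my own illustration, not from any post.

```c
#include <string.h>

/* dist[y][x] holds the shortest number of blocks from (0,0);
   -1 marks "not reached yet" (standing in for +infinity).
   NX and NY are fixed here purely for illustration. */
enum { NX = 5, NY = 5 };

/* Returns the shortest distance to the far corner,
   or -1 if the exit is unreachable. */
int grid_bfs(int blocked[NY][NX], int dist[NY][NX]) {
    int qx[NX * NY], qy[NX * NY];        /* simple array-based queue */
    int head = 0, tail = 0;
    static const int dx[4] = { 1, -1, 0, 0 };
    static const int dy[4] = { 0, 0, 1, -1 };

    memset(dist, -1, sizeof(int) * NX * NY);
    dist[0][0] = 0;
    qx[tail] = 0; qy[tail] = 0; tail++;

    while (head < tail) {
        int x = qx[head], y = qy[head]; head++;
        for (int k = 0; k < 4; k++) {
            int nx = x + dx[k], ny = y + dy[k];
            if (nx < 0 || nx >= NX || ny < 0 || ny >= NY) continue;
            if (blocked[ny][nx] || dist[ny][nx] != -1) continue;
            dist[ny][nx] = dist[y][x] + 1;  /* first visit is shortest */
            qx[tail] = nx; qy[tail] = ny; tail++;
        }
    }
    return dist[NY - 1][NX - 1];
}
```

Because every edge has weight 1, the first time BFS reaches a cell is along a shortest path, so no cell ever needs to be re-queued; each cell enters the queue at most once, giving the O(XY) bound.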
• 05-07-2009
scp89
Thanks, Sfel, but I have really never coded a breadth-first search before. Could you be a little more clear on exactly how I'd go about keeping track of the minimum path, or WHY it always gets the minimum path?
• 05-07-2009
Sfel
Consider the following example, where 1 means inaccessible cell and 0 means accessible:

* 0 0 0 0
0 1 1 1 1
0 0 0 0 0
1 1 1 1 0
1 1 1 1 *

You want to get from one "*" to the other.

Let this matrix be A. A[i][j] will refer to the j-th element on i-th row, with numbering starting from 1 for convenience. Let d[p][q] be the minimum distance from A[1][1] to A[p][q], if there exists such a path, and +infinity otherwise (a really high value, say 2^30).

Do you agree that d[p][q] = 1 + min(d[p-1][q], d[p+1][q], d[p][q-1], d[p][q+1])? Basically, if we know the minimum path to all of (p, q)'s neighbours (up, down, left, right), the minimum path to (p, q) goes through the neighbour with the shortest path, to which we add 1, because we need to get from that neighbour to (p, q).

Consider how this would work on a matrix where we have no obstacles:

* 0 0 0
0 0 0 0
0 0 0 0
0 0 0 *

Here's how it'd look after running the algorithm:

* 1 2 3
1 2 3 4
2 3 4 5
3 4 5 6

Here's how our d matrix would look if applied on the original matrix:

0 1 2 3 4
1 x x x x
2 3 4 5 6
x x x x 7
x x x x 8

where x = +infinity

It looks pretty much the same, just that we couldn't get to the blocked cells.

This works because the problem has optimal substructure: if we know the answer for a small problem size, we can find the answer for a larger size.

I don't know if I was clear enough. If you need any more help understanding or implementing, just say so.
• 05-07-2009
scp89
OK, so would this tell me the actual path, or just how many blocks long that path is? I understand that we are building a matrix of minimum distances; I'm just not completely clear on how we're doing it.
• 05-07-2009
Sfel
This would only get you the length of the path. But you can build the path by using the values in the distance matrix: we can only have reached a cell from an adjacent cell whose value is lower than the current value by exactly 1.

About how we're building the matrix, we use a queue. Here's how it'd work for this example:

* 0 0 1
1 0 0 0

Enqueue (1, 1), set d[1][1] = 0, all other d[][] = inf

The queue is not empty, so we extract the first element, (1, 1). Its neighbours are (1, 2) and (2, 1). We can't get to (2, 1) because it's inaccessible. We can get to (1, 2), however. d[1][2] = d[1][1] + 1 = 1. We enqueue (1, 2).

The queue is still not empty, so we extract the first element, which is now (1, 2). Its neighbours are (1, 1), (2, 2) and (1, 3). (1, 1) has d[1][1] = 0, which is lower than d[1][2] + 1 = 2, so we don't go back there. (2, 2) has d[2][2] = infinity, which is higher than d[1][2] + 1 = 2, so d[2][2] becomes d[1][2] + 1 = 2, and we enqueue (2, 2). The same reasoning applies to (1, 3), so we also enqueue (1, 3).

Queue looks like this: [(2, 2); (1, 3)]
Queue is not empty, so we extract the first element, now (2, 2). Applying a similar reasoning, we'll end up enqueing (2, 3), and d[2][3] = d[2][2] + 1 = 3.

Next, we extract (1, 3), which won't update anything. Next, we extract (2, 3), which will update d[2][4] = d[2][3] + 1 = 4 and enqueue (2, 4). Next, we extract (2, 4), which won't update or enqueue anything. Next, our queue will be empty, so we end the algorithm.

d[2][4] = 4 is thus the minimum distance from (1, 1) to (2, 4)

d will look like this:

0 1 2 x
x 2 3 4

where x = infinity. Here's how to get the actual path.

We notice that d[2][4] = 4. So, we could only have come to (2, 4) from an adjacent position that has d[x][y] = 3. In our case, this is (2, 3). Then, we could only have come to that from a position that has d[x][y] = 2, in this case either (1, 3) or (2, 2). We just pick one at random; it doesn't matter. We continue in this manner until we get to (1, 1).
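That backtracking step could look something like this in C, applied to the 2x4 example above. This is a hypothetical sketch: the dimensions, names, and the use of -1 in place of infinity are illustrative, and it assumes the dist matrix really was produced by a BFS from (0,0), so a predecessor always exists.

```c
/* Distance matrix dimensions for the 2x4 example; -1 = unreachable. */
enum { PW = 4, PH = 2 };

/* Writes the path as (x, y) pairs into px/py, in end-to-start order.
   Returns the number of cells in the path, or 0 if the exit was
   never reached. */
int backtrack(const int dist[PH][PW], int px[], int py[]) {
    static const int dx[4] = { 1, -1, 0, 0 };
    static const int dy[4] = { 0, 0, 1, -1 };
    int x = PW - 1, y = PH - 1;            /* start at the exit corner */
    if (dist[y][x] < 0) return 0;          /* exit unreachable */
    int n = 0;
    px[n] = x; py[n] = y; n++;
    while (dist[y][x] > 0) {
        for (int k = 0; k < 4; k++) {
            int nx = x + dx[k], ny = y + dy[k];
            if (nx < 0 || nx >= PW || ny < 0 || ny >= PH) continue;
            /* Any neighbour exactly one step closer works; ties
               don't matter, as the post says. */
            if (dist[ny][nx] == dist[y][x] - 1) {
                x = nx; y = ny;
                px[n] = x; py[n] = y; n++;
                break;
            }
        }
    }
    return n;
}
```

Reversing px/py (or just reading them backwards) gives the walk from (0,0) to the exit.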

Please say if you don't understand something :).