I am trying to figure out which pathfinding algorithm I should use for my game. After googling around for a while, it looks like A* is the one that is most used, but it needs the game world to be a grid. My game world is not a grid. I know I can turn it into one, but I was wondering if there is maybe an algorithm that is better suited for me?
A few things about my game:
* There are a lot of enemies that will need pathfinding.
* The world has few objects that can be collided with.
* The world is big.
Thanks in advance.
A* is an abstract algorithm for finding paths in world states. There is no reason why these world states must be represented by a grid, although they commonly are.
You have to restrict the motion of your entities to a finite number of nodes, obviously. You probably want to use a sparse node set when you are "far" from the goal, and a more densely populated set when you are near it.
The world doesn't need to be a grid to use A*. You could model it as a graph, for example. If the world is large, just make graph nodes at various places along paths and at key points where units need to travel.
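To make that concrete, here is a minimal sketch of A* over a hand-placed waypoint graph rather than a grid. The node names, positions, and edges are made up for illustration; in a real game they would come from your level data.

```python
import heapq
import math

# Hypothetical waypoint graph: node -> (x, y) position, plus adjacency lists.
positions = {"A": (0, 0), "B": (2, 0), "C": (2, 2), "D": (4, 2)}
edges = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}

def dist(a, b):
    """Straight-line distance between two waypoints (also the heuristic)."""
    (ax, ay), (bx, by) = positions[a], positions[b]
    return math.hypot(ax - bx, ay - by)

def astar(start, goal):
    # Priority queue of (f = g + h, g, node, path-so-far).
    open_set = [(dist(start, goal), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for nxt in edges[node]:
            ng = g + dist(node, nxt)
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_set, (ng + dist(nxt, goal), ng, nxt, path + [nxt]))
    return None  # goal unreachable
```

The heuristic here is plain Euclidean distance, which stays admissible as long as edge costs are at least the straight-line distance between their endpoints.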
If the world really has very few things that can be collided with, you could use a naive "walk towards the goal" method that just tries to go around simple obstacles. This is really efficient but won't work in all cases.
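As a sketch of that naive approach, assuming entities live on integer cells and obstacles are a set of blocked cells (both assumptions are mine, not from the thread):

```python
import math

def step_towards(pos, goal, blocked, step=1.0):
    """One movement tick: head straight for the goal; if that cell is
    blocked, try a crude sidestep perpendicular to the desired direction.
    `blocked` is a set of (x, y) cells. Can get stuck in concave obstacles."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(dx, dy)
    if d <= step:
        return goal
    nxt = (round(pos[0] + step * dx / d), round(pos[1] + step * dy / d))
    if nxt not in blocked:
        return nxt
    # Naive avoidance: try stepping to either side.
    for sx, sy in ((-dy, dx), (dy, -dx)):
        alt = (round(pos[0] + step * sx / d), round(pos[1] + step * sy / d))
        if alt not in blocked:
            return alt
    return pos  # stuck
```

This is O(1) per tick with no search at all, which is why it scales to lots of enemies, but as noted it fails on anything concave.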
If your world is composed of distinct areas or "rooms" then you can treat each room as a node for A*. So you reduce the problem of getting to a specific point, to just getting to a specific room. Once in the goal room, switch to some other algorithm to navigate to the exact point.
My world is an outdoor level. The only thing that can be collided with is some buildings.
Which algorithm would probably be the easiest to implement?
From playing around with a small demo that shows different pathfinding algorithms, breadth-first search looks like the best one. Not always the cheapest path when movement costs vary, but very, very quick. From looking at it on Wikipedia, it does not look too hard to do.
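For reference, a minimal BFS sketch on a 4-connected grid (the grid layout and cell representation here are my own assumptions):

```python
from collections import deque

def bfs_path(start, goal, passable):
    """Breadth-first search on a 4-connected grid. `passable` is a set of
    open (x, y) cells. Returns a shortest path (fewest steps) or None."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            # Walk the parent links back to the start.
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in passable and nxt not in came_from:
                came_from[nxt] = cur
                frontier.append(nxt)
    return None
```

Note that when every move costs the same, BFS does guarantee a shortest path; it only loses that guarantee once terrain costs differ.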
Thanks for the replies so far
It can use huge amounts of memory -- the memory usage grows with the size of the horizon.
Originally Posted by h3ro
For an outdoor level, I'd suggest precomputing a set of "rooms" in the level (obviously not real rooms), then break these rooms into subrooms, and so on, to a specified limit. Now the pathfinding problem is a matter of finding the path to the room, then a path from there to the subroom, then the sub-sub-room, and eventually the end point.
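A toy sketch of the top level of that idea: plan at room granularity first, then let a finer search (grid A*, steering, etc.) handle movement inside each room. The room names and connections below are invented for illustration.

```python
from collections import deque

# Hypothetical room-adjacency graph for an outdoor level.
rooms = {
    "yard":   ["gate"],
    "gate":   ["yard", "market"],
    "market": ["gate", "keep"],
    "keep":   ["market"],
}

def room_route(start, goal):
    """Coarse plan: breadth-first search over rooms. Each hop in the
    result becomes a local navigation problem for a finer pathfinder."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in rooms[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None
```

The payoff is that the expensive fine-grained search only ever runs within one room at a time, so its cost stays bounded no matter how big the world is.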
If you want to get fancy, and fast, you could use a multilayer perceptron and teach it to reproduce the optimized solutions from another, slower but very accurate method.
Well, by far the easiest way would be to mindlessly move towards the player and, if they hit a building, attempt to move around it. It won't look very nice though, and bots could get stuck in enclaves if any exist.
A* may take a little work to get your head around at first, but would probably work best. I'd try to code the pathfinding as a separate module, so you can reuse it in future. Also, if you get A* done you should find it very easy to make a Dijkstra's variant, which can deal with situations where the target is not known when beginning the search.
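To illustrate that last point: Dijkstra's algorithm is essentially A* with a zero heuristic, so you can run it with a goal *predicate* instead of a known goal node, e.g. "the nearest node that has a health pack". The graph shape and callback names below are my own.

```python
import heapq

def dijkstra_nearest(start, neighbours, is_goal):
    """Dijkstra from `start` until the first node satisfying `is_goal`
    is settled. `neighbours(n)` yields (next_node, edge_cost) pairs.
    Returns (cost, path) for the nearest goal, or None."""
    pq = [(0.0, start, [start])]
    done = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node in done:
            continue
        done.add(node)
        if is_goal(node):
            return cost, path  # first settled goal is the cheapest one
        for nxt, c in neighbours(node):
            if nxt not in done:
                heapq.heappush(pq, (cost + c, nxt, path + [nxt]))
    return None
```

Because nodes are settled in order of increasing cost, the first goal node popped is guaranteed to be the cheapest reachable one.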
Painter's algorithm is pretty fast, and a lot easier to implement for first timers. Not to be confused with the algorithm of the same name used in 3d imaging.
If your algorithm is slow, you may want to run it ahead of time and then save the results; the actual game would then just use the stored results as the path. Dynamic pathfinding has quite a few hurdles to overcome and, IMO, does not fit with how we as humans would find our way from point A to point B.
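The simplest form of that precompute-and-reuse idea is a path cache keyed on (start, goal), so each expensive search runs at most once per pair. A minimal sketch (the wrapper and names are mine):

```python
path_cache = {}

def cached_path(start, goal, compute):
    """Run the expensive pathfinder `compute(start, goal)` only on a
    cache miss; later requests for the same pair replay the stored path.
    Must be invalidated if the obstacle layout changes."""
    key = (start, goal)
    if key not in path_cache:
        path_cache[key] = compute(start, goal)
    return path_cache[key]
```

This works best in a mostly static world; if buildings can appear or be destroyed, the cache has to be cleared for affected regions.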
One thing to think about is that we cannot possibly find our way from point A to B easily without having done it before. So perhaps a bad guy should be a bumbling idiot as he runs from your player if he has never seen the rest of the world. I think games are going the wrong direction by having the AI know more than it should which usually results in wacky behavior.
You can run A* for short distances and come up with a reasonably good result. However A* is just a tool and the best pathfinding algos are probably a combination of several different algos with fallbacks in case they fail. I would recommend A* for starters and then fallback to a bump and grind (similar to what mike_g suggested).
I know this is old news, but just this week I'm working on improving my A* algorithm for armies to move in hexagon-based terrain, including differentiated movement costs (mountains, hills, rivers, grassland, roads, etc...).
Originally Posted by Bubba
I'm actually trying to dumb down the AI - in function of the army leader skills - so that armies make mistakes and correcting those mistakes doesn't mean a sharp turn.
What I'm finding is that the algorithm is actually quite extensible. Messing with the cost-to-goal property, for instance, can generate some interesting results. Inputting an approximate goal and changing it in mid-calculation is another option. In order to avoid sharp turns, I'm starting to tinker with a Catmull-Rom formula trick I've read somewhere, which I apply once the path is calculated. Finally, I also want to implement some form of army-leader memory, so that the next time the same A-B path (or its inverse) is attempted, a more optimized version is calculated than the last time.
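For anyone curious about the Catmull-Rom trick: the spline interpolates between the middle two of four consecutive waypoints, so sliding a window along the computed path rounds off sharp turns without moving the waypoints themselves. A sketch of the standard formula (this is the textbook spline, not the poster's exact code):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom interpolation between p1 and p2, with p0 and p3 as
    shape controls; t runs from 0 (returns p1) to 1 (returns p2).
    Points are (x, y) tuples."""
    def axis(a, b, c, d):
        return 0.5 * (2 * b
                      + (c - a) * t
                      + (2 * a - 5 * b + 4 * c - d) * t * t
                      + (3 * b - a - 3 * c + d) * t * t * t)
    return (axis(p0[0], p1[0], p2[0], p3[0]),
            axis(p0[1], p1[1], p2[1], p3[1]))
```

A nice property for path smoothing is that the curve passes exactly through p1 and p2, so the army still visits every original waypoint.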
But overall I find all this possible to achieve with the A* algo.
I have a few problems still, as I need to find a way to mess with the cost-to-goal while still forcing armies to snap to roads whenever possible. I'm still in the very early stages of implementing a more dumbed-down (read: realistic) pathfinding tailored for army movement over long distances. Only started this week. But I wanted to share that A* is adaptable enough to allow for more realistic situations.
If you want a tough A* challenge, adapt the algorithm to move X units from A to B in the shortest amount of time. It's not as trivial as just computing the shortest path and sending all units down that path. Here's why:
If the path from A to B is only wide enough for one unit at a time, then it may be faster to send X - i units down the shortest path (single file) and send the remaining i units around. You could run A* X times considering units as obstacles (and giving some a head start), but this is far too expensive for large numbers of units.
Why not just use a better algorithm? Namely one that is designed for flow networks. A* is a decent algorithm but it isn't the end all be all.
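For the corridor-width problem above, a max-flow formulation does fit: model corridor widths as edge capacities and ask how many units per time step can move from A to B. A compact Edmonds-Karp sketch (graph shape is my own example, not from the thread):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max flow: repeatedly push flow along the shortest
    augmenting path. `capacity` is a dict-of-dicts of edge capacities
    and is mutated in place to hold residual capacities."""
    flow = 0
    while True:
        # BFS for an augmenting path with spare capacity.
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v, cap in capacity[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return flow  # no augmenting path left
        # Recover the path, find its bottleneck, update residuals.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(capacity[u][v] for u, v in path)
        for u, v in path:
            capacity[u][v] -= push
            capacity[v].setdefault(u, 0)
            capacity[v][u] += push
        flow += push
```

Here a capacity of 1 on an edge would correspond to a single-file corridor; the resulting flow tells you how to split the X units across parallel routes instead of re-running A* per unit.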
Originally Posted by Perspective
Easiest way to implement progressive learning is to start the leader out with random values for movement costs, then gradually correct these values towards the real costs. This will cause some leaders to favor certain terrain types at first, which can lead to emergent behavior that makes them appear smarter than they are. E.g. a leader that favors swamp (slow movement, moderate defense bonus) and mountains (very slow movement, high defense bonus) over plains (high movement, low defense bonus) will appear to be favoring highly defensive positions, when in fact they are just misjudging the terrain cost.
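The "gradually correct" step can be as simple as an exponential moving average that nudges each guessed terrain cost a fraction of the way toward the true cost after every march. A minimal sketch (terrain names and the learning rate are illustrative):

```python
def update_estimates(estimated, actual, rate=0.3):
    """Nudge each leader's guessed terrain cost toward the true cost.
    `estimated` is the leader's belief, `actual` the real movement costs;
    `rate` controls how fast misjudgements fade. Mutates `estimated`."""
    for terrain, true_cost in actual.items():
        estimated[terrain] += rate * (true_cost - estimated[terrain])
    return estimated
```

The error shrinks by a factor of (1 - rate) per update, so leaders converge on the real costs while early paths still show the terrain biases described above.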