Thread: fun with pathfinding

  1. #1

    Join Date
    May 2005
    Posts
    1,042

    fun with pathfinding

    Well, I'm in the midst of some intense coding, but most of it is stuff I've already implemented, conceptually, in the past...I'm just trying to throw it all together for something useful.

    I've been fiddling around with AI and pathfinding. I'm using the exact same constructs/concepts as when I posted this demo:

    http://cboard.cprogramming.com/showthread.php?t=69595

    except now I'm adding it to a 3D world. I use a Quake 3 editor (GTKRadiant 1.4) and Quake 3 BSPs for the basic map data and BSP compilation. I edited the entities.def script that the editor reads to determine what entities exist, and added an 'info_waypoint_node' entity which you place in the map:

    Code:
    /*QUAKED info_player_start (1 0 0) (-16 -16 -24) (16 16 32)
    Player spawn location. It works in Quake III Arena, but it is not used in the Id maps. Use info_player_deathmatch instead.
    -------- KEYS --------
    angle : direction in which player will look when spawning in the game.
    target : this can point at a target_give entity for respawn freebies.*/
    
    //=============================================================================
    
    
    /*QUAKED info_waypoint_node (.05 .5 .5) (-10 -10 -10) (10 10 10)
    -------- KEYS --------
    target : your mother
    */
    So in the editor I just place these little goose-eggs around the world at critical points, and when my program loads the map it parses the entity file, saving the positions of the waypoint nodes. Then it performs a line-of-sight test between each pair of nodes to fill in the row and column entries of a sparse connectivity matrix (a matrix that doesn't store the 0 entries, to speed up computation and reduce memory use). It then raises this sparse matrix to successive powers, determining which nodes you can reach in a given number of steps. So far, it has turned out to be exactly what I wanted: an effective mechanism for using precompiled constructs to minimize the amount of work you need to do at runtime.
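
    If anyone's curious, the parsing step is nothing fancy. Here's a stripped-down sketch of the idea (not my actual loader; Vec3 and the function name are just stand-ins):

    Code:
    // Sketch: pull info_waypoint_node origins out of a Quake 3 entity lump,
    // which is plain text of the form
    //   { "classname" "info_waypoint_node" "origin" "x y z" ... }
    #include <sstream>
    #include <string>
    #include <vector>
    
    struct Vec3 { float x, y, z; };   // stand-in for the engine's Vector3
    
    std::vector<Vec3> ParseWaypointNodes(const std::string & entityLump)
    {
        std::vector<Vec3> nodes;
        size_t pos = 0;
        while((pos = entityLump.find('{', pos)) != std::string::npos)
        {
            size_t end = entityLump.find('}', pos);
            if(end == std::string::npos)
                break;
            std::string block = entityLump.substr(pos, end - pos);
            pos = end + 1;
    
            // Only care about blocks whose classname is info_waypoint_node.
            if(block.find("\"info_waypoint_node\"") == std::string::npos)
                continue;
    
            // The origin key is followed by a quoted "x y z" string.
            size_t o = block.find("\"origin\"");
            if(o == std::string::npos)
                continue;
            size_t q1 = block.find('"', o + 8);
            size_t q2 = (q1 == std::string::npos) ? std::string::npos : block.find('"', q1 + 1);
            if(q2 == std::string::npos)
                continue;
    
            std::istringstream iss(block.substr(q1 + 1, q2 - q1 - 1));
            Vec3 v;
            if(iss >> v.x >> v.y >> v.z)
                nodes.push_back(v);
        }
        return nodes;
    }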

    So, when the AI agent needs to make a move, it can look up these values in the stored matrices (which take essentially no memory) and find the next move along one leg of a path, without having to know the entire path.
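
    Conceptually it boils down to something like this (a bare-bones sketch, not my VectorCollapsedMatrix code; here the 'matrix' is just a vector of adjacency sets and the 'multiply' is a set union):

    Code:
    #include <set>
    #include <vector>
    
    // Reach[i] is the set of nodes you can get to from node i.
    typedef std::vector< std::set<int> > Reach;
    
    // Reachable in (k+1) moves = one adjacency hop followed by anything reachable in k moves.
    Reach Multiply(const Reach & adj, const Reach & powK)
    {
        Reach out(adj.size());
        for(size_t i = 0; i < adj.size(); ++i)
            for(std::set<int>::const_iterator n = adj[i].begin(); n != adj[i].end(); ++n)
                out[i].insert(powK[*n].begin(), powK[*n].end());
        return out;
    }
    
    // Precompute adj^1 .. adj^maxSteps once at load time.
    std::vector<Reach> BuildPowers(const Reach & adj, int maxSteps)
    {
        std::vector<Reach> powers;
        powers.push_back(adj);
        for(int k = 1; k < maxSteps; ++k)
            powers.push_back(Multiply(adj, powers.back()));
        return powers;
    }
    
    // Minimum number of moves from 'from' to 'to', or -1 if unreachable within maxSteps.
    int MinMoves(const std::vector<Reach> & powers, int from, int to)
    {
        for(size_t k = 0; k < powers.size(); ++k)
            if(powers[k][from].count(to))
                return (int)k + 1;
        return -1;
    }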

    I'm still just messing around with things, but I posted a screenshot of a map I'm building to test all of this:

    screenie

    Each of the yellow spheres you see is a waypoint node. The red sphere (well, there's actually some ms3d model in the middle of it; I think it might be a bert model or a model sentral made for me) is actually a hovertank sitting on an air cushion, and the gray thing is a ramp that I was using to test the physics of the air cushion (I can command it to spawn above the ramp, upright, and when it lands on the ramp it rights itself, slides down one side, and back up the other!!! woot! and it remains stable).

    Umm yeah, I'm just happy that I'm making progress, thought I'd share. I didn't bother recording another video because it didn't exactly work last time I tried that.
    Last edited by BobMcGee123; 06-10-2006 at 03:43 PM.
    I'm not immature, I'm refined in the opposite direction.

  2. #2

    Join Date
    May 2005
    Posts
    1,042
    Here's some of the code that does the aforementioned stuff:


    Code:
    template <class X>
    void GenerateVectorCollapsedMatrixFromWaypoints(std::vector<Vector3*> & Waypoints, VectorCollapsedMatrix<X> & output_Mat,
                                                    float LOSRadius, BSP *pBSP, float cancel_dist = 999999999999.0f)
    {
    	//One row per waypoint node.
    	int s = Waypoints.size();
    	while(s--)
    		output_Mat.AddRowToMatrix();
    
    	std::vector<Vector3*>::iterator ptr_outside, ptr_inside;
    
    	int out_index(0), in_index(0);
    
    	for(ptr_outside = Waypoints.begin(); ptr_outside != Waypoints.end(); ptr_outside++, out_index++)
    	{
    		Vector3 & outside_vec = (**ptr_outside);
    		in_index = 0;
    		for(ptr_inside = Waypoints.begin(); ptr_inside != Waypoints.end(); ptr_inside++, in_index++)
    		{
    			if(ptr_outside == ptr_inside)
    				continue;
    
    			Vector3 & inside_vec = (**ptr_inside);
    
    			//Skip pairs that are too far apart to bother tracing.
    			float sep_dist = (inside_vec - outside_vec).GetLength();
    			if(sep_dist >= cancel_dist)
    				continue;
    
    			//Sphere trace against the BSP faces; Time == 1.0 means nothing was hit.
    			HitInfo LOS = TraceSphereToBSPFaces(pBSP, outside_vec, inside_vec, LOSRadius, 0.0f);
    
    			if(LOS.Time == 1.0f)
    			{
    				trace << "outside index can see inside index: " << out_index << " " << in_index << "\n";
    				output_Mat.AddEntryToRow(out_index, in_index);
    			}
    			else
    			{
    				trace << "outside index cannot see inside index: " << out_index << " " << in_index << "\n";
    			}
    		}
    	}
    
    	output_Mat.EnsureConsecutiveAndUnique();
    }
    
    template <class X>
    void BuildWaypointPathMatrices(VectorCollapsedMatrix<X> *gMatrices, int num_mats)
    {
    	char msg[1024];
    
    	DWORD start = GetTickCount();
    	gMatrices[0].EnsureConsecutiveAndUnique();
    
    	//gMatrices[i+1] = gMatrices[0] * gMatrices[i], i.e. the (i+2)th power of the
    	//connectivity matrix: which nodes you can reach in i+2 moves.
    	for(int i = 0; i < num_mats - 1; i++)
    	{
    		memset(msg, 0, sizeof(msg));
    		sprintf(msg, "Matrix: %i out of %i", i + 1, num_mats);
    
    		//Show progress on screen while the precomputation runs.
    		R_PrintString(msg);
    		SwapBuffers(gpNTGLAPI->NT_DEVICE_CONTEXT);
    		gpNTGLAPI->ntglClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    
    		VectorCollapsedMatrixMultiply(gMatrices[0], gMatrices[i], gMatrices[i + 1]);
    		gMatrices[i + 1].EnsureConsecutiveAndUnique();
    	}
    
    	float total = (float)(GetTickCount() - start) * .001f;
    
    	trace << "TotalTime: " << total << "\n";
    	memset(msg, 0, sizeof(msg));
    	sprintf(msg, "TotalTime: %i seconds", (int)total);
    	R_PrintString(msg);
    	SwapBuffers(gpNTGLAPI->NT_DEVICE_CONTEXT);
    	gpNTGLAPI->ntglClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    }
    I'm not immature, I'm refined in the opposite direction.

  3. #3
    Registered User VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,607
    So you combined the ideas. Nodes with matrices. I like.

  4. #4

    Join Date
    May 2005
    Posts
    1,042
    Thanks!
    I'm not immature, I'm refined in the opposite direction.

  5. #5
    Registered User VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,607
    That is the point I was trying to make thousands of years ago when we posted that. Nodes make it simpler to do convincing pathfinding rather than attempting to figure out a path from raw geometric data.

    I'm glad you found something in-between. I feel most games are probably in this category since A* and other algos would take a lifetime to compute correctly.

    Maybe they'll come out with some new pathfinding processor add-on like the Ageia PhysX for physics. God I hope not.

  6. #6

    Join Date
    May 2005
    Posts
    1,042
    Nodes make it simpler to do convincing pathfinding rather than attempting to figure out a path from raw geometric data.
    Actually, the only difference between what I posted here and what I posted a thousand years ago in that thread is that the 'nodes' were really the centers of tiles. That's the only difference (the underlying code is the same). It actually requires much less data to place reasonably convincing waypoints in a 3D world than it does for a 2D tilemap. I also think my particular implementation is pretty...rough, to say the least. I'm really just trying to focus on getting something playable done before I go back to school *sigh*

    Maybe they'll come out with some new pathfinding processor add-on like the Ageia PhysX for physics. God I hope not.
    Yeah, it's hard enough to keep track of what shader programs work with what vendors and driver implementations...all we need is another piece of hardware to make up for what the CPU cannot do.
    Last edited by BobMcGee123; 06-11-2006 at 08:56 PM.
    I'm not immature, I'm refined in the opposite direction.

  7. #7
    Registered User VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,607
    I think I'm using the same type of system w/o the matrices. I've yet to test the pathfinding since the editor is what I've been working on.

    In our system the goal is that each object will get its instructions from the script. So if the first instruction is to move to waypoint list 1, here is what happens.

    1. The script sends the message to CObject::ScriptProc
    2. CObject::ScriptProc routes the message to the correct CObject handler
    3. CObject::<script_function> handles the message.
    4. The function sets the dest vector of the object to the first waypoint of the desired waypoint list.
    5. The function then sets a flag to indicate the object is currently executing the command.
    6. The object ID is added to the update list for the game.
    7. The object moves towards the waypoint.
    8. When it reaches the waypoint, if there is a second waypoint in the list, it now moves towards it. If not, all processing stops, the flag is reset, and the object waits for its next command from the script.

    This is simple vector-based stuff. So if you fail to place a waypoint in the editor portion of the level/map, there isn't anything to route the object to. Everything depends on how the level/map is designed. The only dynamic testing performed is whether an object gets in the way on the way from point A to B. If so, we use simple bump-and-grind until we get back on track.

    Yes this might cause problems given certain map conditions, but again I've yet to test it.
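
    The per-frame step boils down to something like this (not our actual code; the names here are made up for illustration):

    Code:
    #include <cmath>
    #include <vector>
    
    struct Vec3 { float x, y, z; };
    
    // Returns true while the 'move to waypoint list' command is still executing.
    bool UpdateWaypointFollow(Vec3 & pos, const std::vector<Vec3> & waypoints, size_t & current,
                              float speed, float dt, float arriveDist = 1.0f)
    {
        if(current >= waypoints.size())
            return false;                       // nothing to route the object to
    
        Vec3 d = { waypoints[current].x - pos.x,
                   waypoints[current].y - pos.y,
                   waypoints[current].z - pos.z };
        float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    
        if(len <= arriveDist)
        {
            ++current;                          // reached this waypoint, move on to the next
            return current < waypoints.size();  // false = list exhausted, reset the flag
        }
    
        float step = speed * dt / len;
        pos.x += d.x * step;
        pos.y += d.y * step;
        pos.z += d.z * step;
        return true;
    }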

  8. #8

    Join Date
    May 2005
    Posts
    1,042
    I'd def. like to see your implementation in action, Bubba!

    Here's another pic, a slightly more elaborate maze (overhead view)
    I'm not immature, I'm refined in the opposite direction.

  9. #9
    Registered User VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,607
    Cool. How did you get the AI to figure out not to take the leftmost path (at node 1) even though that node got you closer to the goal than the one it chose?

    Are you using some type of cost function?

    And what if the cost function causes a node leading to a dead end to be chosen? What then?

  10. #10

    Join Date
    May 2005
    Posts
    1,042
    Cool. How did you get the AI to figure out not to take the leftmost path (at node 1) even though that node got you closer to the goal than the one it chose?
    I guess I'm not entirely sure what you meant; I'm pretty sure the path that is displayed is the shortest in this case. If you're talking about the leftmost green node, it selects the node to the right and above (A), as opposed to the one directly above (B), because it can get directly to A in a single move, and this particular implementation minimizes the number of moves, but not necessarily the distance (elaborated below).

    As for a cost function: in this implementation you already know whether or not you can get from start to end, and you also know the minimum number of moves to get there, without needing to determine the entire path (it is stored in the collapsed connectivity matrices). Seeing as you do not need to evaluate the entire pathway to find the next move along one leg of the path, you can spend a substantial amount of processing time deciding what path you want: you can take the shortest path or the longest path, or take the shortest path on one move and the longest route the next. You can also evaluate other factors, such as the incline the agent has to go over along the path (by examining the plane normals).

    You can also simulate some sort of spatial coherence by only considering nodes that the AI agent is aware of (the nodes that it has been exposed to in the past X seconds). Subsequently, you could find the next move, but if the AI agent hasn't seen it, he isn't aware that it's an option, and he 'forgets' how to get from one node to the next.
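
    So the next-move lookup can be as dumb as this (again just a sketch, reusing the Reach typedef and MinMoves() from the earlier sketch rather than my actual engine code):

    Code:
    #include <set>
    
    // Pick the next node to move to, considering only nodes the agent is aware of.
    int NextMove(const std::vector<Reach> & powers, const Reach & adj,
                 int current, int goal, const std::set<int> & awareOf)
    {
        int best = -1;
        int bestMoves = -1;
        for(std::set<int>::const_iterator n = adj[current].begin(); n != adj[current].end(); ++n)
        {
            if(awareOf.count(*n) == 0)
                continue;                       // the agent hasn't seen this node recently
            if(*n == goal)
                return goal;                    // a direct move wins outright
    
            int moves = MinMoves(powers, *n, goal);
            if(moves >= 0 && (best == -1 || moves < bestMoves))
            {
                best = *n;
                bestMoves = moves;
            }
        }
        return best;                            // -1: no known neighbor can reach the goal
    }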

    And what if the cost function causes a node leading to a dead end to be chosen? What then?
    Yeah, this is def. an AI issue, a problem of how you use the connectivity matrices. Ultimately, these matrices only see nodes, and how they are connected at given 'moves.' A move is simply the transition from one node to another, but the connectivity matrices have no clue that this node is actually an index into an array of 3D vectors, which may be above fire or lava or 7,000 feet into the atmosphere. Subsequently, I'm going to have to be *extremely* careful as to how I use this.

    I'm doing a ton of research with AI, and, as with the pathfinding issue, I'm not liking the way other people have done things. Finite state machines and fuzzy logic seem to make too many assumptions about what the AI agent can/cannot do and is/isn't supposed to do. It seems like an AI system that ultimately boils down to manually programming the response of the AI agent in every possible scenario, and I don't want to do that.

    Every possible action that the AI agent can make in this context boils down to a few fundamental instructions, which can be regarded as the primitives of an instruction set.
    A hovertank, whether controlled by a player or an AI agent, can do the following:
    -Move the center of balance mechanism (this causes the hovertank to lean in a particular direction, which makes it move like a helicopter)
    -Fire the thrusters that it has on either side of the chassis (causing it to rotate and move)
    -Rotate the cannon it has
    -Fire the cannon

    A very high level behavior, such as moving from one part of the world to another, is simply a manifestation of these fundamental behaviors being executed at the right time.

    My first AI programming experiment is going to be to program a very stupid AI agent, controlled by a simple finite state machine, and then battle against it. I will be controlling a hovertank, and I can control all of the above actions. The training program will record every possible variable that a human could observe, e.g.:
    -Distance to the hovertank
    -Is the enemy hovertank getting closer to me or further from me?
    -Is the enemy hovertank attacking me?
    -How much fuel do I have?
    -How much ammo do I have?

    So, I would know how I personally responded to the various observable inputs, and I could use this data to train a neural network that produces the proper responses as outputs/instructions. The thing is, though, this logic network (whether it is fuzzy logic or a sophisticated neural network, they're kind of similar) will be directly connected to the thrusters, the center of balance mechanism, and the turret. I'm hoping that this means the hovertank will behave very similarly to how I behaved when I battled the stupid AI agent. Finally, this AI agent, the one controlled by the logic network trained from my inputs to the thruster, center-of-balance, and turret controls during the training program, is the AI that would finally make it into the game.
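
    The training program's log could be as simple as a per-frame record like this (a rough sketch; the fields and names are just placeholders for whatever ends up being observable):

    Code:
    #include <vector>
    
    // One frame of training data: everything the human could observe,
    // paired with the raw control inputs they issued that frame.
    struct TrainingSample
    {
        // Observable inputs
        float distToEnemy;
        float closingSpeed;        // positive means the enemy is getting closer
        bool  underAttack;
        float fuel;
        float ammo;
    
        // Raw control outputs (what actually gets fed to the hovertank)
        float balanceX, balanceY;  // center of balance mechanism
        float leftThrust, rightThrust;
        float turretYaw;
        bool  fireCannon;
    };
    
    std::vector<TrainingSample> gTrainingLog;   // one entry appended per frame of the training match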

    Granted, this seems extremely difficult to accomplish, which is why I simply say I'm going to experiment with it.

    I've written some prototype code for a simple AI instruction set. There's a lot of overhead and it could be done a lot better (using a template and doing everything through member function pointers is kind of nasty, but it works). Here's the code for an instruction:

    Code:
    #ifndef	INSTRUCTION_SET_H
    #define	INSTRUCTION_SET_H
    
    #include <vector>
    #include "Vector.h"	//math 3d vector
    
    //One entry in an AI instruction set: binds an object to one of its member
    //functions, plus whatever parameters that instruction needs.
    template<class X>
    class ai_instruction
    {
    public:
    	ai_instruction()
    	{
    		pObject		=	NULL;
    		pFunc		=	NULL;
    		unknowns	=	NULL;
    		vectors		=	NULL;
    	}
    
    	ai_instruction(X *_pObject, void (X::*_pFunc)(ai_instruction<X>&))
    	{
    		pObject		=	_pObject;
    		pFunc		=	_pFunc;
    		unknowns	=	NULL;
    		vectors		=	NULL;
    	}
    
    	//Release any per-instruction parameter storage.
    	void clear_cache()
    	{
    		float_params.clear();
    		int_params.clear();
    
    		if(unknowns)
    		{
    			delete[] static_cast<char*>(unknowns);	//assumes 'unknowns' was allocated as a raw byte buffer
    			unknowns	=	NULL;
    		}
    
    		if(vectors)
    		{
    			delete[] vectors;
    			vectors		=	NULL;
    		}
    	}
    
    	//Execute the instruction: call the bound member function on the bound object.
    	void operator()(ai_instruction<X>&)
    	{
    		if(pFunc && pObject)
    		{
    			(pObject->*pFunc)(*this);
    		}
    	}
    
    	X	*pObject;
    	void	(X::*pFunc)(ai_instruction<X>&);	//instruction pointer
    
    	Vector3	*vectors;
    	std::vector<int>	int_params;
    	std::vector<float>	float_params;
    
    	void	*unknowns;
    };
    
    #endif
    And here's how a hovertank might set up its own instruction set:

    Code:
    class Hovertank : public RigidBody
    {
    public:
    	Hovertank();
    	~Hovertank();
    ...
    	int	current_instruction;
    	int	next_instruction;
    
    	enum	{linear_stop_inst = 0, angular_stop_inst, move_to_goal_inst, rotate_to_goal_inst,
    		 fire_projectile_inst, num_instructions};
    
    	//One ai_instruction slot per primitive the hovertank can execute.
    	ai_instruction<Hovertank>	hovertank_instruction_set[num_instructions];
    
    	void	linear_stop(ai_instruction<Hovertank>&);
    	void	angular_stop(ai_instruction<Hovertank>&);
    
    	void	move_to_goal(ai_instruction<Hovertank>&);
    	void	rotate_to_goal(ai_instruction<Hovertank>&);
    
    	void	fire_projectile(ai_instruction<Hovertank>&);
    };
    
    ...
    
    Hovertank::Hovertank()
    {
    ...
    	//Bind each slot of the instruction set to the member function that implements it.
    	hovertank_instruction_set[linear_stop_inst]		=	ai_instruction<Hovertank>(this,&Hovertank::linear_stop);
    	hovertank_instruction_set[angular_stop_inst]	=	ai_instruction<Hovertank>(this,&Hovertank::angular_stop);
    	hovertank_instruction_set[move_to_goal_inst]	=	ai_instruction<Hovertank>(this,&Hovertank::move_to_goal);
    	hovertank_instruction_set[rotate_to_goal_inst]	=	ai_instruction<Hovertank>(this,&Hovertank::rotate_to_goal);
    	hovertank_instruction_set[fire_projectile_inst]	=	ai_instruction<Hovertank>(this,&Hovertank::fire_projectile);
    ...
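
    And, in the context of the classes above, invoking one of these instructions might look something like this (hypothetical usage, not code from my project; the parameter convention is just an assumption):

    Code:
    //Hypothetical usage: grab an instruction slot, stuff in parameters, execute it.
    Hovertank tank;
    ai_instruction<Hovertank> & inst = tank.hovertank_instruction_set[Hovertank::move_to_goal_inst];
    
    inst.float_params.push_back(250.0f);   //assumed: some desired speed parameter
    inst(inst);                            //operator() dispatches to Hovertank::move_to_goal(inst)
    inst.clear_cache();                    //release per-call parameter storage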

    Last edited by BobMcGee123; 06-13-2006 at 08:00 AM.
    I'm not immature, I'm refined in the opposite direction.

  11. #11
    Registered User VirtualAce's Avatar
    Join Date
    Aug 2001
    Posts
    9,607
    I will show you our method in action once we get it done and working. We have a lot of code, a lot of design docs, and a lot of thought has been put into the system. Whether it actually will work as planned is a whole 'nother ball game.

    I would be interested to perhaps utilize some of the waypoint guidance systems you are using. There will be a point at which some type of real-time pathfinding will have to be done in order to get the 'feel' of the game right.

    We all know what it's like in a game like Command and Conquer when you save up and build a huge tank rush only to have it completely foiled by one stupid bridge. The idiot tanks get so confused that half of them just sit there while their buds at the front just get demolished.
    Last edited by VirtualAce; 06-15-2006 at 02:22 AM.

  12. #12

    Join Date
    May 2005
    Posts
    1,042
    Well we know how to contact each other. Any sort of collaboration that will be mutually helpful sounds great to me.
    I'm not immature, I'm refined in the opposite direction.
