Thread: application of AI in gaming

  1. #1
    Registered User
    Join Date
    Mar 2007

    application of AI in gaming

    I am a student doing a small project on

    " Comparative Studies on Game Programming Techniques to Design Game Players".

    I need some information on how the concepts of GA (genetic algorithms), fuzzy logic and neural networks are used in designing games. It would be best if you could point out the core areas where the above techniques are used in developing a game.
    Please, if you can, mention the challenges or problems that programmers have faced when using these techniques in game development. Information on whether there is any ongoing research that could help game developers improve the use of AI in modern computer games would also help me a lot. I'd also like to know how these techniques can make a computer-controlled player more intelligent, so that it becomes more challenging for the person playing against it.

    <<email snipped by mod>>
    Last edited by Salem; 03-28-2007 at 09:29 AM. Reason: Actually previous one was a mail I did to EA Games, not for the forum..

  2. #2
    Fear the Reaper...
    Join Date
    Aug 2005
    Toronto, Ontario, Canada
    Although I'd like to believe that neural nets and GAs are used a lot in games to give a pseudo-learning aspect to game players, I'm much more inclined to believe that they're mostly just alpha-beta trees with pruning, with a little probability in the mix to make them look random.
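    For what it's worth, the alpha-beta idea can be sketched like this (toy hard-coded game tree; the `Node` type and scores are made up for illustration, and a real game would generate moves on the fly rather than store a tree):

    ```cpp
    #include <algorithm>
    #include <vector>

    // A node is either a leaf with a score, or an interior node with children.
    struct Node {
        int score = 0;                 // used only when children is empty
        std::vector<Node> children;
    };

    // Classic minimax with alpha-beta pruning: alpha/beta bound the scores the
    // maximizing/minimizing players can already guarantee; branches that cannot
    // change the outcome are skipped.
    int alphabeta(const Node &n, int alpha, int beta, bool maximizing)
    {
        if (n.children.empty())
            return n.score;
        if (maximizing) {
            int best = -1000000;
            for (const Node &c : n.children) {
                best  = std::max(best, alphabeta(c, alpha, beta, false));
                alpha = std::max(alpha, best);
                if (alpha >= beta) break;  // prune: opponent won't allow this line
            }
            return best;
        }
        int best = 1000000;
        for (const Node &c : n.children) {
            best = std::min(best, alphabeta(c, alpha, beta, true));
            beta = std::min(beta, best);
            if (alpha >= beta) break;      // prune
        }
        return best;
    }
    ```

    Adding a bit of randomness, as the post suggests, usually means picking randomly among moves whose scores are within some margin of the best.
    
    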
    Teacher: "You connect with Internet Explorer, but what is your browser? You know, Yahoo, Webcrawler...?" It's great to see the educational system moving in the right direction

  3. #3

    Join Date
    May 2005
    The Quake3 bot AI, which is quite good as far as game AI goes (AI is very hard to implement!), uses 'fuzzy logic' and is described in a thesis (I've read it, but I don't have the link; just google it).
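    The core fuzzy-logic idea is that conditions get a degree of truth in [0,1] instead of a hard yes/no, and rules combine those degrees. A minimal sketch (the variable names and thresholds here are invented for illustration, not taken from the Quake3 code):

    ```cpp
    #include <algorithm>

    // Fuzzy membership: instead of "health is low: yes/no", health maps to a
    // degree of "low-ness" between 0 and 1, rising linearly as health drops.
    float fuzzy_low_health(float health)            // health in [0,100]
    {
        return std::clamp(1.0f - health / 100.0f, 0.0f, 1.0f);
    }

    // A fuzzy rule combines degrees with min (a common fuzzy AND):
    // "IF health is low AND enemy is near THEN desire to retreat".
    float retreat_desire(float health, float enemy_distance)
    {
        float low_health = fuzzy_low_health(health);
        float enemy_near = std::clamp(1.0f - enemy_distance / 50.0f, 0.0f, 1.0f);
        return std::min(low_health, enemy_near);    // fuzzy AND
    }
    ```

    The bot would then compare the outputs of several such rules (retreat, attack, grab item, ...) and act on the strongest desire, which gives smoother behavior than hard if/else thresholds.
    
    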

    Fuzzy logic and 'neural networks' seem to overlap in some cases; they have a similar structure and can appear quite similar. Neural networks are used with genetic algorithms, where rather than explicitly programming the AI, the behaviors emerge from the simulated genes of AI agents. The actual behaviors are (ideally) unpredictable, and over time the genes best suited for the given situation 'emerge,' without having been explicitly put into code. The challenge is setting up the basic fundamental conditions/genes such that evolution can occur. You should look up SMART (I think that's what it's called), where on top of evolving the weights of the AI agents' neural networks, the author wrote a library that also evolves the fundamental structure of the network. So the values in the genes are evolving, but the genes themselves are also evolving (really hard to explain what I mean).
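    The evolve-the-genes idea boils down to: score each genome, keep the fittest, refill the population with mutated copies. A toy version (the fitness function here is a made-up stand-in; in a real game you'd score a genome by simulating the agent in the game world):

    ```cpp
    #include <algorithm>
    #include <random>
    #include <vector>

    using Genome = std::vector<float>;

    // Hypothetical fitness: higher is better, optimum is all genes at 1.0.
    // A real game GA would instead run the agent and measure its performance.
    float fitness(const Genome &g)
    {
        float sum = 0.0f;
        for (float v : g) sum -= (v - 1.0f) * (v - 1.0f);
        return sum;
    }

    Genome evolve(int generations, std::mt19937 &rng)
    {
        std::normal_distribution<float> mut(0.0f, 0.1f);
        std::vector<Genome> pop(20, Genome(4, 0.0f));   // population of 20
        for (int gen = 0; gen < generations; ++gen) {
            // sort best-first, keep the top half, refill with mutated copies
            std::sort(pop.begin(), pop.end(),
                      [](const Genome &a, const Genome &b){ return fitness(a) > fitness(b); });
            for (size_t i = pop.size() / 2; i < pop.size(); ++i) {
                pop[i] = pop[i - pop.size() / 2];       // clone a survivor
                for (float &v : pop[i]) v += mut(rng);  // mutate each gene
            }
        }
        return pop[0];                                   // best of the final sort
    }
    ```

    The hard part the post points at is exactly the fitness function and gene encoding: get those wrong and evolution converges on degenerate behavior or nothing at all.
    
    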

    Most AI for computer games is not particularly sophisticated. It's extremely difficult to make AI agents, say, coherent over time (they have no clue what happened in the past, and it's incredibly difficult to make AI agents predict what will happen in the future). The goal for most game AI coders is just to make sure the AI works for a small set of conditions and doesn't do anything too stupid in front of the player.

    The results of the neural network are typically manifested in a state function (or a finite state machine), along with a virtual machine which executes instructions. The instructions can be high or low level (yep, you guessed it, CISC or RISC), and the finite state machine determines which instruction to execute next, which is based on:

    1. How the AI agent perceives reality (through a limited number of sensory 'organs'; the AI agent has a cognitive model of reality, as do humans, and a human's cognitive model of reality may be said to be manifested in mathematical language/logic)
    2. How the AI agent chooses to react based on its perception of reality

    This is represented as the inputs and outputs of a neural network. If there is a definite correct answer, you can set up training programs which automatically adjust the weights on each node of the neural network; otherwise you simulate death, survival and mating to evolve new AI agents, where theoretically over time healthier AI agents emerge. The SMART AI library takes this to a higher level of abstraction, where the fundamental structure of the neural network evolves along with the weights.
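    The perception-in, action-out mapping can be sketched as a single network layer (the weights and input/output meanings below are invented for illustration; a trained or evolved net would supply real weights):

    ```cpp
    #include <cmath>
    #include <vector>

    // One layer of a feed-forward net: sensory inputs in, action scores out.
    // In the evolutionary setup described above, these weights are the genes
    // being adjusted by training or evolution.
    std::vector<float> forward(const std::vector<std::vector<float>> &weights,
                               const std::vector<float> &inputs)
    {
        std::vector<float> out;
        for (const auto &row : weights) {               // one row per output
            float sum = 0.0f;
            for (size_t i = 0; i < inputs.size(); ++i)
                sum += row[i] * inputs[i];              // weighted sum of inputs
            out.push_back(1.0f / (1.0f + std::exp(-sum))); // sigmoid activation
        }
        return out;
    }
    ```

    With inputs like {enemy_visible, low_health} and outputs like {attack, flee}, the FSM would then execute the instruction corresponding to the strongest output.
    
    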

    Here's an example of an instruction I wrote for a virtual machine for my AI. You define how high level, or low level, you want each instruction to be. The following examples are very high level.

    Format for an instruction:
    	#include <cstdio>
    	#include <vector>

    	// Vector3 is the engine's own math type
    	template<class X>
    	class ai_instruction
    	{
    	public:
    		ai_instruction()
    		{
    			pObject  = NULL;
    			pFunc    = NULL;
    			unknowns = NULL;
    			vectors  = NULL;
    		}
    		ai_instruction(X *_pObject, void (X::*_pFunc)(ai_instruction<X>&))
    		{
    			pObject  = _pObject;
    			pFunc    = _pFunc;
    			unknowns = NULL;
    			vectors  = NULL;
    		}
    		void clear_cache()
    		{
    			// cast before delete[]: deleting through a void* is undefined
    			// behavior (assumes unknowns was allocated as a char array)
    			delete[] static_cast<char*>(unknowns);
    			unknowns = NULL;
    			delete[] vectors;
    			vectors = NULL;
    		}
    		void operator()(ai_instruction<X> &a)
    		{
    			if(pFunc && pObject)
    				(pObject->*pFunc)(a);
    		}

    		X *pObject;
    		void (X::*pFunc)(ai_instruction<X>&);	//instruction pointer
    		Vector3 *vectors;
    		std::vector<int>   int_params;
    		std::vector<float> float_params;
    		void *unknowns;
    	};

    The definition of some basic instructions:
    	enum Instruction_Categories { Navigation = 0, Weaponry };
    	enum Navigation_Instructions { TurnToGoal = 0, KillRotation, KillMovement, CG_Shift, NUM_NAV_INSTRUCTIONS };
    	enum Weaponry_Instructions { Fire = NUM_NAV_INSTRUCTIONS, Turret_Aim, Missile_Aim, NUM_WEAPONRY_INSTRUCTIONS };

    The implementation of the instructions. In this case each one just prints to the screen which instruction the craft is currently executing and queues the next one; the actual functionality was removed on purpose:
    	void Hovertank::linear_stop(ai_instruction<Hovertank> &a)
    	{
    		printf("linear_stop\n");
    		current_instruction = angular_stop_inst;
    	}
    	void Hovertank::angular_stop(ai_instruction<Hovertank> &a)
    	{
    		printf("angular_stop\n");
    		current_instruction = move_to_goal_inst;
    	}
    	void Hovertank::move_to_goal(ai_instruction<Hovertank> &a)
    	{
    		printf("move_to_goal\n");
    		current_instruction = rotate_to_goal_inst;
    	}
    	void Hovertank::rotate_to_goal(ai_instruction<Hovertank> &a)
    	{
    		printf("rotate_to_goal\n");
    		current_instruction = fire_projectile_inst;
    	}
    	void Hovertank::fire_projectile(ai_instruction<Hovertank> &a)
    	{
    		printf("fire_projectile\n");
    		current_instruction = linear_stop_inst;
    	}

    The initialization of the instructions inside the constructor:
    	hovertank_instruction_set[linear_stop_inst]     = ai_instruction<Hovertank>(this, &Hovertank::linear_stop);
    	hovertank_instruction_set[angular_stop_inst]    = ai_instruction<Hovertank>(this, &Hovertank::angular_stop);
    	hovertank_instruction_set[move_to_goal_inst]    = ai_instruction<Hovertank>(this, &Hovertank::move_to_goal);
    	hovertank_instruction_set[rotate_to_goal_inst]  = ai_instruction<Hovertank>(this, &Hovertank::rotate_to_goal);
    	hovertank_instruction_set[fire_projectile_inst] = ai_instruction<Hovertank>(this, &Hovertank::fire_projectile);
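    The pattern above (a table of member-function pointers plus a current-instruction index) is the whole finite state machine. A self-contained miniature of the same pattern, with made-up names (`Bot`, `patrol`, `attack`, `tick`) rather than the Hovertank code:

    ```cpp
    #include <vector>

    struct Bot {
        typedef void (Bot::*Instruction)();   // same member-function-pointer trick
        std::vector<Instruction> instruction_set;
        size_t current_instruction = 0;
        int shots_fired = 0;

        Bot() { instruction_set = { &Bot::patrol, &Bot::attack }; }

        // Each instruction does its work and picks the next state, exactly as
        // the Hovertank handlers above do.
        void patrol() { current_instruction = 1; }                 // saw enemy
        void attack() { ++shots_fired; current_instruction = 0; }  // back to patrol

        // Per-frame dispatch: execute whatever the FSM currently points at.
        void tick() { (this->*instruction_set[current_instruction])(); }
    };
    ```

    Calling `tick()` once per frame drives the agent through its states; a neural network or fuzzy rules would replace the hard-coded next-state choices inside each instruction.
    
    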
    Last edited by BobMcGee123; 04-04-2007 at 11:42 AM.
    I'm not immature, I'm refined in the opposite direction.

