Quake3's bot AI, which is quite good as far as game AI goes (AI is very hard to implement!), uses 'fuzzy logic' and is outlined in a thesis (I've read it, but I don't have the link, just google it).
Fuzzy logic and 'neural networks' seem to overlap in some cases; they have a similar structure and appear quite similar. Neural networks are often used with genetic algorithms, where rather than explicitly programming the AI, the behaviors emerge from the simulated genes of AI agents. The actual behaviors are (ideally) unpredictable, and over time the genes best suited to the given situation 'emerge,' without ever having been explicitly put into code. The challenge is setting up the basic fundamental conditions/genes such that evolution can occur. You should look up SMART (I think that's what it's called), where on top of evolving the neural network of AI agents, the author wrote a library that also evolves the fundamental structure of the neural network. So the values in the genes are evolving, but the genes themselves are also evolving (really hard to explain what I mean).
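To make the evolution idea concrete, here's a minimal sketch of the evolve loop. This has nothing to do with the SMART library, and the fitness function is a toy stand-in (real agents would be scored on in-game behavior); it just shows the death/survival/mutation cycle:

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// Toy genetic algorithm: each agent's "genes" are a vector of floats, and
// fitness is how close the genes are to some ideal (here, all 1.0). In a
// game the fitness would come from watching the agent play.
struct Agent {
    std::vector<float> genes;
    float fitness;
};

float evaluate(const std::vector<float>& genes)
{
    float error = 0.0f;
    for (size_t i = 0; i < genes.size(); ++i)
        error += (genes[i] - 1.0f) * (genes[i] - 1.0f);
    return -error; // higher is better
}

void evolve(std::vector<Agent>& pop)
{
    for (size_t i = 0; i < pop.size(); ++i)
        pop[i].fitness = evaluate(pop[i].genes);

    // survival of the fittest: sort best-first, the bottom half "dies"
    std::sort(pop.begin(), pop.end(),
              [](const Agent& a, const Agent& b) { return a.fitness > b.fitness; });

    // "mating": refill the bottom half with mutated copies of the top half
    size_t half = pop.size() / 2;
    for (size_t i = half; i < pop.size(); ++i) {
        pop[i].genes = pop[i - half].genes;
        for (size_t g = 0; g < pop[i].genes.size(); ++g)
            pop[i].genes[g] += (rand() / (float)RAND_MAX - 0.5f) * 0.1f; // mutate
    }
}
```

Note the behaviors that survive were never written down anywhere; only `evaluate` and the mutation step were.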
Most AI for computer games is not particularly sophisticated. It's extremely difficult to make AI agents, say, coherent over time (they have no clue what happened in the past, and it's incredibly difficult to make them predict what happens in the future). The goal for most game AI coders is just to make sure the AI works for a small set of conditions and doesn't do anything too stupid in front of the player.
The results of the neural network are typically manifested in a state function (or a finite state machine), along with a virtual machine which executes instructions. The instructions can be high or low level (yep, you guessed it, CISC or RISC), and the finite state machine determines which instruction to execute next, which is based on:
1- How the AI agent perceives reality (it has a limited number of sensory 'organs'; the AI agent has a cognitive model of reality, as do humans. Humans' cognitive model of reality may be said to be manifested in mathematical language/logic)
2- How the AI agent chooses to react based on its perception of reality
This is represented as the inputs and outputs of a neural network. If there is a definite correct answer, you can set up training programs which automatically adjust the weights on each pin/node of the neural network; otherwise you simulate death, survival and mating to evolve new AI agents, where theoretically, over time, healthier AI agents emerge. The SMART AI library takes this to a higher level of abstraction, where the actual fundamental structure of the neural network evolves along with the weights.
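For the "definite correct answer" case, the classic example is the perceptron rule: the error between the desired and actual output directly nudges each weight. A minimal single-neuron sketch (not from any particular library, and the names are mine):

```cpp
#include <cassert>

// A single artificial neuron trained with the classic perceptron rule.
// Because we know the desired output for every input, we can adjust the
// weights directly instead of evolving them.
struct Neuron {
    float w[2];
    float bias;

    int fire(const float in[2]) const
    {
        float sum = w[0] * in[0] + w[1] * in[1] + bias;
        return sum > 0.0f ? 1 : 0;
    }

    void train(const float in[2], int target, float rate)
    {
        int out = fire(in);
        int err = target - out;     // +1, 0 or -1
        w[0] += rate * err * in[0]; // nudge each weight toward
        w[1] += rate * err * in[1]; // the correct answer
        bias += rate * err;
    }
};
```

Loop this over a table of known input/output pairs (e.g. teach it to act as an AND gate) and the weights settle on their own; nobody ever writes the weights by hand.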
Here's an example of an instruction I wrote for a virtual machine for my AI. You decide how high level or low level you want each instruction to be; the following examples are very high level.
Code:
Format for an instruction
template<class X>
class ai_instruction
{
public:
	ai_instruction()
	{
		pObject = NULL;
		pFunc = NULL;
		vectors = NULL; // must start null so clear_cache() doesn't free garbage
		unknowns = NULL;
	}
	ai_instruction(X *_pObject, void (X::*_pFunc)(ai_instruction<X>&))
	{
		pObject = _pObject;
		pFunc = _pFunc;
		vectors = NULL;
		unknowns = NULL;
	}
	void clear_cache()
	{
		float_params.clear();
		int_params.clear();
		if(unknowns)
		{
			delete[] static_cast<char*>(unknowns); // assumes unknowns was allocated as a char array
			unknowns = NULL;
		}
		if(vectors)
		{
			delete[] vectors;
			vectors = NULL;
		}
	}
	void operator()(ai_instruction<X>&)
	{
		if(pFunc && pObject)
		{
			(pObject->*pFunc)(*this); // dispatch through the member-function pointer
		}
	}
	X *pObject;
	void (X::*pFunc)(ai_instruction<X>&); // instruction pointer
	Vector3 *vectors;
	std::vector<int> int_params;
	std::vector<float> float_params;
	void *unknowns;
};
...
The definition of some basic instructions
#ifndef HOVERTANK_INSTRUCTION_SET_H
#define HOVERTANK_INSTRUCTION_SET_H
enum Instruction_Categories {Navigation = 0,Weaponry};
enum Navigation_Instructions {TurnToGoal = 0, KillRotation,KillMovement, CG_Shift,NUM_NAV_INSTRUCTIONS};
enum Weaponry_Instructions {Fire = NUM_NAV_INSTRUCTIONS, Turret_Aim, Missile_Aim,NUM_WEAPONRY_INSTRUCTIONS};
#define NUM_HOVERTANK_INSTRUCTIONS NUM_WEAPONRY_INSTRUCTIONS /* the weaponry enum continues numbering from the navigation enum, so its end marker is already the total count */
#endif
...
The implementation of the instructions. In this case each one just prints to the screen which instruction
the craft is currently executing; the actual functionality was removed on purpose.
void Hovertank::linear_stop(ai_instruction<Hovertank>&a)
{
gpTextManager->PushLineOfText("Hovertank::linear_stop",-1024,1.0f,0.0f,0.0f);
current_instruction = angular_stop_inst;
}
void Hovertank::angular_stop(ai_instruction<Hovertank>&a)
{
gpTextManager->PushLineOfText("Hovertank::angular_stop",-1024,1.0f,0.0f,0.0f);
current_instruction = move_to_goal_inst;
}
void Hovertank::move_to_goal(ai_instruction<Hovertank>&a)
{
gpTextManager->PushLineOfText("Hovertank::move_to_goal",-1024,1.0f,0.0f,0.0f);
current_instruction = rotate_to_goal_inst;
}
void Hovertank::rotate_to_goal(ai_instruction<Hovertank>&a)
{
gpTextManager->PushLineOfText("Hovertank::rotate_to_goal",-1024,1.0f,0.0f,0.0f);
current_instruction = fire_projectile_inst;
}
void Hovertank::fire_projectile(ai_instruction<Hovertank>&a)
{
gpTextManager->PushLineOfText("Hovertank::fire_projectile",-1024,1.0f,0.0f,0.0f);
current_instruction = linear_stop_inst;
}
...
The initialization of the instructions inside the constructor
hovertank_instruction_set[linear_stop_inst] = ai_instruction<Hovertank>(this,&Hovertank::linear_stop);
hovertank_instruction_set[angular_stop_inst] = ai_instruction<Hovertank>(this,&Hovertank::angular_stop);
hovertank_instruction_set[move_to_goal_inst] = ai_instruction<Hovertank>(this,&Hovertank::move_to_goal);
hovertank_instruction_set[rotate_to_goal_inst] = ai_instruction<Hovertank>(this,&Hovertank::rotate_to_goal);
hovertank_instruction_set[fire_projectile_inst] = ai_instruction<Hovertank>(this,&Hovertank::fire_projectile);
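To show how the pieces fit together at runtime, here's a stripped-down, self-contained version of the same pattern. The names (`Bot`, `step`, the two instructions) are made up, and the real Hovertank update loop isn't shown above, so this is just a guess at its shape: each tick executes the current instruction, and the instruction itself picks its successor.

```cpp
#include <cstddef>
#include <vector>

// Simplified ai_instruction: an object pointer plus a member-function pointer.
template<class X>
class ai_instruction {
public:
    ai_instruction() : pObject(NULL), pFunc(NULL) {}
    ai_instruction(X* obj, void (X::*fn)(ai_instruction<X>&))
        : pObject(obj), pFunc(fn) {}
    void operator()() { if (pObject && pFunc) (pObject->*pFunc)(*this); }
    X* pObject;
    void (X::*pFunc)(ai_instruction<X>&);
};

enum { stop_inst = 0, move_inst, NUM_INSTS };

class Bot {
public:
    Bot() : current_instruction(stop_inst) {
        // same initialization pattern as the Hovertank constructor above
        instruction_set[stop_inst] = ai_instruction<Bot>(this, &Bot::stop);
        instruction_set[move_inst] = ai_instruction<Bot>(this, &Bot::move);
    }
    // each instruction records that it ran, then chooses the next one
    void stop(ai_instruction<Bot>&) { log.push_back(stop_inst); current_instruction = move_inst; }
    void move(ai_instruction<Bot>&) { log.push_back(move_inst); current_instruction = stop_inst; }
    // one tick of the virtual machine
    void step() { instruction_set[current_instruction](); }
    int current_instruction;
    std::vector<int> log;
    ai_instruction<Bot> instruction_set[NUM_INSTS];
};
```

Here the instructions hard-code their successors, like the Hovertank ones do; the point of the finite state machine / neural network discussed earlier is that `current_instruction` would instead be chosen from the agent's perception.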