# Thread: Swarm intelligence algorithm question

1. ## Swarm intelligence algorithm question

Hi all,

I have recently become very interested in search optimization algorithms, and one I plan to apply is particle swarm optimization (PSO). However, I am confused by the descriptions of the algorithm I have read, as they seem to omit a key step: after the discrete steps I would expect an instruction along the lines of 'now make a change to the particles in the swarm, re-evaluate the status of the particles, and repeat the evaluating and scoring'.

For example if this were the travelling salesman:

```
// Generate a swarm of random solutions to the map (i.e. the particles)
For each particle
    Initialize particle
End

// pBest is the best solution found in the history of that particle,
// gBest is the global best found among all particles
Do
    For each particle
        Calculate fitness value
        If the fitness value is better than the best fitness value (pBest) in history
            Set current value as the new pBest
    End

    Choose the particle with the best fitness value of all the particles as the gBest
    For each particle
        Calculate particle velocity according to equation (a)
        Update particle position according to equation (b)
    End
While maximum iterations or minimum error criterion is not attained
```
OK, so that is fine, but after the last velocity/position update, surely each particle needs to 'evolve', i.e. each salesman route needs a change added to it, and then the loop repeats, applying the evaluation to the next generation? Like incrementally attempting to improve each solution in the swarm, with the best fit converging somewhere in that swarm?

Is this just implicit anyway? I understand that not all problems are suited to all methods, but I think a salesman map fits it just fine, so long as there is an evolution step; otherwise the scores would just mill around the population without anything actually changing in terms of an overall optimum solution.

I must be missing something?
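To show why the loop above needs no separate 'evolve' instruction, here is a minimal continuous-PSO sketch (minimising a toy function, not TSP; particle count, coefficients, and iteration limit are illustrative assumptions). The point is that the velocity/position update *is* the change to each particle, and the next pass of the loop re-evaluates the moved particles:

```python
import random

# Minimal PSO sketch on f(x) = sum(x_i^2).  The velocity/position
# update is the "evolution" step the question asks about: the loop
# re-evaluates each moved particle on its next iteration.

def fitness(x):
    return sum(v * v for v in x)

DIM, N, ITERS = 2, 20, 100
W, C1, C2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients (typical values)

random.seed(0)
pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]                     # best position each particle has seen
pbest_fit = [fitness(p) for p in pos]
g = min(range(N), key=lambda i: pbest_fit[i])
gbest, gbest_fit = pbest[g][:], pbest_fit[g]    # best position seen by the whole swarm

for _ in range(ITERS):
    for i in range(N):
        f = fitness(pos[i])                     # re-evaluate the moved particle
        if f < pbest_fit[i]:
            pbest[i], pbest_fit[i] = pos[i][:], f
            if f < gbest_fit:
                gbest, gbest_fit = pos[i][:], f
    for i in range(N):                          # this update IS the change/"evolution" step
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]

print(gbest_fit)   # converges toward 0 for this toy function
```

For a combinatorial problem like TSP the position/velocity update has to be redefined over permutations, but the loop structure stays the same.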

[EDIT]
I see that searching for PSO with the travelling salesman problem yields some interesting results, so I will have a read. My problem is combinatorial in nature too; it is not TSP, but it shares similar attributes.

2. I found clarification on the algorithm with respect to my problem.

The velocity argument would be interpreted as 'faster = this particle has more changes tested on the next iteration of the objective function', i.e. it searches more of its neighborhood.

Those with lower velocity, i.e. closer to the current global best, would test fewer changes, and the particle closest to gBest would basically behave as a greedy algorithm, accepting an improving change immediately. I see it a little like simulated annealing, with the velocity of each particle akin to the temperature value in that algorithm.
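That interpretation can be sketched for a permutation problem. The following is a hypothetical illustration (a tiny random tour-length instance; the velocity rule, names, and parameters are my own assumptions, not a canonical discrete-PSO formulation): each particle tests a number of random swaps proportional to its distance from gBest, greedily accepting improving swaps:

```python
import random

# Illustrative "velocity = number of changes tried" reading of PSO for
# permutations.  Particles far from gBest (by Hamming distance) test many
# swaps per iteration; those near it test few, behaving almost greedily.

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

random.seed(1)
N_CITIES, N_PARTICLES, ITERS = 8, 10, 200
dist = [[0] * N_CITIES for _ in range(N_CITIES)]    # random symmetric distances
for i in range(N_CITIES):
    for j in range(i + 1, N_CITIES):
        dist[i][j] = dist[j][i] = random.randint(1, 20)

swarm = [random.sample(range(N_CITIES), N_CITIES) for _ in range(N_PARTICLES)]
pbest = [t[:] for t in swarm]
gbest = min(pbest, key=lambda t: tour_length(t, dist))

for _ in range(ITERS):
    for i, tour in enumerate(swarm):
        # "velocity": more swaps tested the further this particle is from gBest
        velocity = 1 + hamming(tour, gbest) // 2
        for _ in range(velocity):
            a, b = random.sample(range(N_CITIES), 2)
            tour[a], tour[b] = tour[b], tour[a]
            if tour_length(tour, dist) < tour_length(pbest[i], dist):
                pbest[i] = tour[:]                    # greedy accept of an improving swap
            else:
                tour[a], tour[b] = tour[b], tour[a]   # undo a non-improving swap
    gbest = min(pbest, key=lambda t: tour_length(t, dist))

print(tour_length(gbest, dist))
```

The simulated-annealing analogy shows up here as well: velocity plays the role of temperature, shrinking the search radius as particles approach the global best.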