# Local particle swarm optimization algorithm for gearbox

The particle swarm optimization (PSO) algorithm has good global optimization ability. Suppose there are m particles in a D-dimensional search space, forming the swarm X = (x_1, x_2, …, x_m). The position of the i-th particle is x_i = (x_{i1}, x_{i2}, …, x_{iD}), which represents where that particle sits in the D-dimensional search space; its velocity is v_i = (v_{i1}, v_{i2}, …, v_{iD}); its personal historical best is p_i = (p_{i1}, p_{i2}, …, p_{iD}); and the global best found by the whole swarm is g = (g_1, g_2, …, g_D). As the iteration progresses, the velocity and position of the particles are updated as follows:

$$v_{id}^{k+1} = \omega v_{id}^{k} + c_1 r_1 \left(p_{id} - x_{id}^{k}\right) + c_2 r_2 \left(g_{d} - x_{id}^{k}\right)$$

$$x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1}$$

where ω is the inertia factor; i = 1, 2, …, m; d = 1, 2, …, D; k is the current iteration; c1 and c2 are the learning factors of the particles; r1 and r2 are random numbers uniformly distributed in [0, 1].
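The standard update above can be written as one vectorized step over all m particles. This is a minimal sketch; the array shapes and the helper name `pso_step` are illustrative, not from the paper:

```python
import numpy as np

def pso_step(x, v, p_best, g_best, w, c1, c2, rng):
    """One iteration of the standard PSO update for all m particles.

    x, v   : (m, D) arrays of positions and velocities
    p_best : (m, D) array of each particle's personal best position
    g_best : (D,)   global best position found by the swarm
    """
    m, D = x.shape
    r1 = rng.random((m, D))  # uniform random numbers in [0, 1]
    r2 = rng.random((m, D))
    # velocity update: inertia + cognitive term + social term
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    # position update
    x_new = x + v_new
    return x_new, v_new
```

A particle sitting exactly at both its personal best and the global best receives zero acceleration, so with zero initial velocity it stays put, as the equations require.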

The above is the standard PSO algorithm; however, it falls easily into local optima. The velocity update formula is therefore improved: the global historical best g is replaced by the best solution p_i^next found in the neighborhood of each particle, so that the velocity update no longer depends on the global best and the algorithm is less likely to be trapped in a local optimum:

$$v_{id}^{k+1} = \omega v_{id}^{k} + c_1 r_1 \left(p_{id} - x_{id}^{k}\right) + c_2 r_2 \left(p_{id}^{next} - x_{id}^{k}\right)$$

where p_i^next is the best solution among the particles in the neighborhood of the i-th particle.
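Under a ring topology, a common neighborhood structure for local-best PSO (the topology and the default radius are assumptions; the text does not specify them), p_i^next can be computed as:

```python
import numpy as np

def neighborhood_best(p_best, fitness, radius=1):
    """Return p_i^next for each particle under a ring topology.

    p_best  : (m, D) personal best positions
    fitness : (m,)   fitness of each personal best (lower is better, assumed)
    radius  : neighbors considered on each side of the ring (assumption)
    """
    m = len(fitness)
    p_next = np.empty_like(p_best)
    for i in range(m):
        # the neighborhood: the particle itself plus `radius` neighbors each side
        idx = [(i + off) % m for off in range(-radius, radius + 1)]
        best = min(idx, key=lambda j: fitness[j])
        p_next[i] = p_best[best]
    return p_next
```

Because each particle only sees its neighbors' bests, good solutions spread through the swarm gradually, which is what slows premature convergence to a local optimum.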

To obtain better convergence of the particle swarm optimization algorithm and more accurate results, a concave variation of the inertia weight performs better than a linear variation, and a linear variation performs better than a constant value. This paper therefore varies the inertia weight as a concave function of the iteration count:

$$\omega = \omega_{\max} - \left(\omega_{\max} - \omega_{\min}\right)\left(\frac{k}{\mathrm{maxGen}}\right)^{2}$$

where ω_max is the maximum value of ω; ω_min is the minimum value of ω; k is the current iteration of the particle swarm; maxGen is the maximum number of iterations of the PSO algorithm.
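A concave decay consistent with these definitions can be sketched as follows; the specific quadratic form and the default bounds 0.9/0.4 are assumptions:

```python
def inertia_weight(k, max_gen, w_max=0.9, w_min=0.4):
    """Concave (quadratic) decay of the inertia weight.

    omega falls slowly at first and faster near the end, from w_max at
    k = 0 to w_min at k = max_gen. The exact form is an assumption.
    """
    t = k / max_gen
    return w_max - (w_max - w_min) * t ** 2
```

At the midpoint the concave schedule keeps ω above the linear schedule's value, so the swarm retains its exploratory behavior longer before contracting, which matches the claimed advantage of concave over linear decay.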

In the early stage of evolution, particles should search their own neighborhoods carefully to avoid premature convergence to a local optimum; in the later stage, PSO should converge to the global optimum faster and more accurately. That is, c1 should take a larger value in the initial stage, while c2 should increase with the number of iterations. The improved learning factors are therefore made to vary with the iteration count as follows:
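One common realization of this principle is a linear schedule in the style of time-varying acceleration coefficients; this is a sketch, and both the linear form and the bounds 2.5/0.5 are assumptions, not values from the paper:

```python
def learning_factors(k, max_gen, c_max=2.5, c_min=0.5):
    """Time-varying learning factors (assumed linear form).

    c1 shrinks from c_max to c_min  -> strong self-cognition early;
    c2 grows  from c_min to c_max  -> strong social learning late.
    """
    t = k / max_gen
    c1 = c_max - (c_max - c_min) * t
    c2 = c_min + (c_max - c_min) * t
    return c1, c2
```

With this schedule c1 + c2 stays constant, so only the balance between individual and social search shifts over the run.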

When searching for the optimal solution, the particle swarm optimization algorithm needs a fitness function to measure the quality of candidate solutions. Each time a particle updates its position, its fitness value is computed from the fitness function, and the optimal solution is selected by comparison. Traditional choices of fitness function include the signal-to-noise ratio (SNR), kurtosis, and entropy, which are used to evaluate the noise-reduction effect of MED.
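As an illustration, kurtosis, one of the fitness choices named above, can be computed directly; impulsive gear-fault signatures recovered by MED push it well above the value of about 3 expected for Gaussian noise (the function name is illustrative):

```python
import numpy as np

def kurtosis_fitness(signal):
    """Kurtosis of a signal, E[(x - mean)^4] / E[(x - mean)^2]^2.

    Usable as a PSO fitness value: higher kurtosis indicates a more
    impulsive (fault-like) filtered signal.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2
```

A pure-noise record scores near 3, while the same record with periodic impulses added scores markedly higher, which is why kurtosis discriminates well between raw and successfully deconvolved vibration signals.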