Abstract: To address the particle depletion problem in particle filtering (PF), particle swarm optimization (PSO) is introduced into PF. This method first identifies the center points of the sampled particle set through clustering, and then applies PSO optimization to each cluster, allowing the particle set to move towards regions with higher posterior probability density values, thus overcoming the particle depletion problem. Furthermore, by optimizing the particle set, the number of initial particles required is significantly reduced. Experimental results show that the algorithm has high prediction accuracy and good robustness.
Keywords: Particle filtering; Clustering; Particle swarm optimization; Bayesian estimation
1. Introduction
Particle filtering [1,2] is a Bayesian filtering technique based on the Monte Carlo method. This algorithm is currently widely used in parameter estimation and state estimation in nonlinear and non-Gaussian environments, as well as in speech signal processing, target tracking, adaptive estimation, and other fields.
The basic particle filtering algorithm consists of three main steps: importance sampling, weight update, and resampling. Because the optimal importance probability density function is difficult to obtain in practice, a suboptimal importance density function is used instead. As a result, conventional particle filtering suffers from two problems. (1) Sample depletion. During resampling, particles with larger weights are retained while particles with smaller weights are discarded, so the resampled particles are almost all descendants of a few particles. This sharply reduces particle diversity and prevents the particle set from fully representing the posterior probability density. When the process noise is large, the depletion problem becomes more severe and may even lead to sample exhaustion, i.e., after several iterations only a single distinct particle remains. (2) Real-time computation. Conventional particle filtering requires a large number of particles to ensure estimation accuracy and therefore has high computational complexity. When the initial state of the system is unknown, many particles are needed to cover the state space; if the number of particles is small, they may not be distributed near the true state, and after several iterations it is difficult for them to converge to it. A large number of initialization particles is therefore required.
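The depletion effect described in (1) is easy to reproduce numerically. The following sketch (a hypothetical illustration, not taken from this paper's experiments) resamples a particle set in which one particle carries nearly all the weight, and counts how few distinct states survive:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N particles, one particle carries almost all the weight.
N = 500
particles = rng.normal(size=N)
weights = np.full(N, 1e-4)
weights[0] = 1.0
weights /= weights.sum()

# Multinomial resampling: draw N ancestor indices in proportion to the weights.
ancestors = rng.choice(N, size=N, p=weights)
resampled = particles[ancestors]

# Almost every resampled particle descends from the single heavy particle,
# so the number of distinct states collapses -- the impoverishment effect.
print(len(np.unique(resampled)))
```

After one resampling pass, far fewer than N distinct states remain; iterating this over many time steps is what drives the particle set toward a single point.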
To address the above problems, reference [3] analyzed the selection rules and difficulties of the optimal importance density function. Pitt and Shephard proposed the auxiliary particle filter (APF) [4] from the perspective of choosing the importance probability distribution, thereby increasing particle diversity and reducing the variance of the importance weights. Van der Merwe et al. proposed the unscented particle filter (UPF) [5], which uses the UKF to generate the importance distribution of PF, making it closer to the true posterior probability distribution; this method incorporates the latest observation into the prediction process and thus improves on conventional particle filtering, but at a greatly increased computational cost. Reference [6] proposed a particle filtering algorithm based on differential evolution (DE-PF), which improves particle utilization by applying differential mutation and crossover operations to the particles; however, like the genetic algorithm, differential evolution requires many generations of operations to achieve a good filtering effect.
To address this issue, this paper analyzes the shortcomings of particle filtering methods and introduces clustering and particle swarm optimization algorithms into particle filtering sampling. By optimizing the particle set, the distribution of samples is improved, the convergence of the particle set is accelerated, and the performance of particle filtering is greatly improved.
2. Particle Filtering Algorithm
The general state estimation problem can be described by the following state-space model:

$x_k = f(x_{k-1}, v_{k-1})$ (1)
$z_k = h(x_k, n_k)$ (2)

where $x_k$ is the system state at time $k$, $z_k$ is the observation, and $v_{k-1}$ and $n_k$ are the process noise and observation noise, respectively. Under the Bayesian framework, the posterior density is obtained recursively from Bayes' theorem:

$p(x_k \mid z_{1:k}) = \dfrac{p(z_k \mid x_k)\, p(x_k \mid z_{1:k-1})}{p(z_k \mid z_{1:k-1})}$ (3)

where $p(x_k \mid z_{1:k-1})$ is the prior probability density, $p(z_k \mid x_k)$ is the likelihood probability density when the observed quantity is $z_k$, and $p(x_k \mid z_{1:k})$ is the posterior probability density. From the above equation, it can be seen that Bayes' theorem is a process of starting from prior knowledge (the prior probability density) and continuously using new observation data to revise it, thereby obtaining the revised unknown quantity (the posterior probability density). The prior probability density summarizes the information about the unknown state before the new observation arrives, while the likelihood probability density represents the probability of the observation given the unknown state, i.e., it determines the magnitude of the particle weights in particle filtering. By incorporating the correction from new observation data, the posterior probability density is closer to the true probability density of the estimated quantity than the prior probability density.
The particle filter algorithm is a sequential Monte Carlo approximation method within the Bayesian estimation framework. It represents the posterior of the current state by a set of weighted particles and estimates the state by the weighted sum of those particles. Essentially, it turns the integration into a finite summation over samples, using the weighted particle set to approximate the posterior probability distribution of the system state at time $k$. Its estimation formula is:
$p(x_k \mid z_{1:k}) \approx \sum_{i=1}^{N} w_k^i\, \delta(x_k - x_k^i), \qquad \hat{x}_k = \sum_{i=1}^{N} w_k^i\, x_k^i$ (4)
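As a minimal numeric sketch of the estimation formula (4), with hypothetical particle values and normalized weights, the state estimate is simply the weighted sum of the particles:

```python
import numpy as np

# Hypothetical weighted particle set approximating the posterior at time k.
particles = np.array([1.0, 2.0, 3.0, 4.0])
weights = np.array([0.1, 0.2, 0.3, 0.4])   # already normalized

# Point estimate of the state: the weighted sum in equation (4).
x_hat = np.sum(weights * particles)
print(x_hat)
```

Here the estimate is 0.1·1 + 0.2·2 + 0.3·3 + 0.4·4 = 3.0, illustrating how the integral is replaced by a finite summation over samples.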
In general, the posterior distribution of a state lacks an analytical form or is too complex to sample. In such cases, importance sampling is used for approximation. In standard particle filtering, the state transition density function is often chosen as the suboptimal importance density function, as shown in the following formula:
$q(x_k^i \mid x_{k-1}^i, z_k) = p(x_k^i \mid x_{k-1}^i)$ (5)
To counteract the particle degeneracy caused by the growing variance of the weights under a suboptimal importance function, conventional particle filtering incorporates a resampling step. The main idea is to discard particles with small weights and replicate those with large weights. However, this introduces a new problem: the loss of particle diversity after resampling. Particles with large weights are selected many times, so after several iterations the particle set collapses onto a few distinct states, or even a single one, resulting in particle impoverishment.
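A common concrete realization of the resampling step is systematic resampling. The sketch below is an assumption of this note (the paper does not specify which resampling scheme is used); it returns the ancestor index of each new particle using a single uniform draw and evenly spaced pointers:

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: one uniform draw, N evenly spaced pointers.

    Returns the ancestor index of each new particle.
    """
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0  # guard against floating-point round-off
    return np.searchsorted(cumsum, positions)

rng = np.random.default_rng(1)
w = np.array([0.05, 0.05, 0.1, 0.8])
idx = systematic_resample(w, rng)
print(idx)  # most ancestors are particle 3, the heavy one
```

Heavy particles are duplicated in proportion to their weight, which is exactly the mechanism that causes the diversity loss discussed above.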
3. Improvements to particle filtering using clustering particle swarm optimization algorithm
3.1 Particle Swarm Optimization Algorithm
The core idea of Particle Swarm Optimization (PSO) [7] is that information sharing among individuals in a swarm provides an evolutionary advantage. When solving an optimization problem, each candidate solution is designed as a particle in the search space. The $i$-th particle consists of three parts: its current position, its flight velocity, and its fitness. During the iteration process, each particle updates itself by tracking two "extreme values": the best solution found by the particle itself, denoted $p_{best}$, and the best solution found by the entire swarm, denoted $g_{best}$. After finding these two optima, each particle updates its velocity and position using the following formulas:
$v_i^{t+1} = \omega v_i^t + c_1 r_1 (p_{best,i} - x_i^t) + c_2 r_2 (g_{best} - x_i^t)$ (6)
$x_i^{t+1} = x_i^t + v_i^{t+1}$ (7)
Where $x_i^t$ and $v_i^t$ are the current position and flight velocity of the $i$-th particle, $\omega$ is the inertia weight, $c_1$ and $c_2$ are the cognition and social coefficients, and $r_1$ and $r_2$ are random numbers uniformly distributed in [0,1]. The fitness value measures the quality of a position in the search space, driving each particle toward both its individual best and the global best discovered so far.
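Equations (6) and (7) can be sketched as follows; the quadratic test function, swarm size, and coefficient values are illustrative assumptions, not settings from this paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration per equations (6)-(7): velocity update, then position.

    c1/c2 are the cognition/social coefficients; r1, r2 are U[0,1] draws.
    """
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new

# Hypothetical 1-D swarm minimizing f(x) = (x - 3)^2.
f = lambda x: (x - 3.0) ** 2
x = rng.uniform(-10, 10, size=20)
v = np.zeros(20)
pbest, pbest_val = x.copy(), f(x)
for _ in range(100):
    x, v = pso_step(x, v, pbest, pbest[np.argmin(pbest_val)])
    improved = f(x) < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], f(x)[improved]

print(pbest[np.argmin(pbest_val)])  # close to the optimum at 3.0
```

The swarm's best position converges toward the minimizer at 3.0, illustrating how the cognition and social terms pull particles toward good regions of the search space.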
3.2 Improved Algorithm in This Paper
As mentioned above, the particle swarm optimization algorithm finds the optimal value by continuously updating the velocity and position of particles in the search space. Its characteristics are similar to those of the particle filtering method, which approximates the true posterior probability distribution of the system by continuously updating the position and weight values of particles. Furthermore, in the particle swarm optimization algorithm, the point with the optimal fitness function is the spatially optimal value, which is similar to the particle filtering method where particles with larger weights are more likely to represent the true motion state. Based on these two points, the particle filtering algorithm is improved as follows:
(1) System initialization
The initial particles $\{x_0^i\}_{i=1}^{N}$ are drawn from the known prior probability density of the stochastic dynamical system, $x_0^i \sim p(x_0)$; the initial weight of each particle is $w_0^i = 1/N$.
After dividing the particle set into several clusters, the particle swarm optimization algorithm is applied to each cluster according to formulas (6) and (7). The velocity and position of each particle are continuously updated according to the cluster's best value, so that the particles move ever closer to the true state. Here $\omega$ is the inertia weight [9], which is adaptively adjusted within the range [0.4, 0.9]. In the early stage of optimization, to strengthen the global search capability of the algorithm, the inertia weight increases with population diversity; in the later stage, to strengthen the local search capability, it decreases linearly as the search proceeds.
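The clustering-plus-PSO sampling step and the linearly decreasing inertia weight can be sketched as follows. The tiny 1-D 2-means, the bimodal particle set, the Gaussian fitness, and all coefficient values are illustrative assumptions rather than the paper's exact settings, and for brevity only the social term of equation (6) is applied:

```python
import numpy as np

rng = np.random.default_rng(3)

def kmeans_1d(points, iters=20):
    """Tiny 1-D 2-means used to split the particle set into clusters."""
    centers = np.array([points.min(), points.max()])
    for _ in range(iters):
        labels = np.argmin(np.abs(points[:, None] - centers[None, :]), axis=1)
        for j in range(2):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean()
    return labels

def inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Inertia weight decreasing linearly from 0.9 to 0.4 over the run."""
    return w_max - (w_max - w_min) * t / t_max

# Hypothetical particle set around two modes; the likelihood acts as fitness.
particles = np.concatenate([rng.normal(-4, 1, 50), rng.normal(5, 1, 50)])
velocity = np.zeros_like(particles)
fitness = lambda x: np.exp(-0.5 * (x - 5.0) ** 2)  # true state assumed near 5

labels = kmeans_1d(particles)
# Best particle found so far in each cluster (the cluster-level g_best).
gbest = np.array([particles[labels == j][np.argmax(fitness(particles[labels == j]))]
                  for j in range(2)])

T = 30
for t in range(T):
    w = inertia(t, T)
    for j in range(2):
        m = labels == j
        r = rng.random(m.sum())
        velocity[m] = w * velocity[m] + 2.0 * r * (gbest[j] - particles[m])
        particles[m] = particles[m] + velocity[m]
        cand = particles[m][np.argmax(fitness(particles[m]))]
        if fitness(cand) > fitness(gbest[j]):
            gbest[j] = cand  # keep the best position ever visited
```

Each cluster's particles are drawn toward that cluster's best member, so even particles initialized far from the true state drift toward higher-likelihood regions.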
Particle swarm optimization essentially drives all particles in each cluster to move closer to that cluster's optimal particle, thereby pulling the particle distribution toward the true state, enhancing the effectiveness of each particle, and increasing particle diversity.
(4) Update and normalize the particle weights:
$w_k^i = w_{k-1}^i\, p(z_k \mid x_k^i)$ (11)
$\tilde{w}_k^i = w_k^i \Big/ \sum_{j=1}^{N} w_k^j$ (12)
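With the transition prior chosen as the importance density (equation (5)), the weight update and normalization of step (4) reduce to multiplying the previous weight by the likelihood and renormalizing. The Gaussian observation model and all values below are assumptions of this sketch:

```python
import numpy as np

def update_weights(prev_weights, particles, z, obs_std=1.0):
    """Weight update per equations (11)-(12) with the transition prior as
    importance density: w_k^i ∝ w_{k-1}^i * p(z_k | x_k^i), then normalize.

    A Gaussian observation model is assumed for illustration.
    """
    likelihood = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
    w = prev_weights * likelihood
    return w / np.sum(w)

particles = np.array([0.0, 1.0, 2.0, 3.0])
prev_w = np.full(4, 0.25)
w = update_weights(prev_w, particles, z=2.1)
print(w)  # the particle nearest the observation gets the largest weight
```

The normalization in (12) keeps the weights summing to one, so the weighted particle set remains a valid discrete approximation of the posterior.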
(5) Perform resampling.
4. Simulation Experiment and Result Analysis
Figure 1 Comparison of tracking performance of different algorithms
As shown in Figure 1, the improved IPF algorithm in this paper tracks the true value more closely than the APF algorithm, with higher accuracy and less fluctuation. To further verify the performance of the improved algorithm, the root mean square error (RMSE) over multiple Monte Carlo experiments is used as the evaluation index. The RMSE is defined as:

$\mathrm{RMSE} = \sqrt{\dfrac{1}{T}\sum_{k=1}^{T} (x_k - \hat{x}_k)^2}$

where $x_k$ is the true state, $\hat{x}_k$ is the estimated state at time $k$, and $T$ is the number of time steps.
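The RMSE of a single run can be computed directly; the toy sequences below are hypothetical:

```python
import numpy as np

def rmse(true_states, estimates):
    """Root mean square error over a run of T time steps."""
    e = np.asarray(true_states) - np.asarray(estimates)
    return np.sqrt(np.mean(e ** 2))

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # sqrt(4/3) ≈ 1.1547
```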
Figure 2. Comparison of RMSE from 400 Monte Carlo experiments.
The mean and variance of the RMSE of the two algorithms, as well as the average running time, are shown in Table 1:
Algorithm | RMSE mean | RMSE variance | Average runtime (s)
APF | 1.9802 | 0.4358 | 0.4836
IPF | 1.1067 | 0.3164 | 0.3951
As shown in Table 1, the IPF method proposed in this paper outperforms the APF algorithm. Compared with APF, the IPF algorithm optimizes particles during sampling, combining the optimization of individual particles with the global optimization of the entire swarm. This drives particles far from the true state toward regions where the true state has a higher probability of occurring, improving the effectiveness of each particle and effectively suppressing particle depletion. In contrast, the APF algorithm degrades when the state noise is large: a single sample point is then insufficient to accurately characterize the state transition density, so the quality of the second-stage sampling based on that approximation deteriorates, resulting in less than ideal tracking performance.
5. Conclusion
This paper proposes a particle filtering algorithm utilizing clustered particle swarm optimization (PSO) sampling. The algorithm first divides the sampled particle set into several clusters using a clustering algorithm. Then, it employs PSO to guide particles in each cluster to move closer to the optimal particle by updating their velocity and position. Essentially, this moves particles toward high-likelihood regions, thus effectively reducing particle depletion, increasing particle diversity, and improving particle utilization. Experimental results show that the proposed method outperforms the auxiliary particle filter (APF) algorithm in filtering performance.