Particle Filter
Particle filters approximate the posterior by a finite number of
parameters. The posterior $bel(x_t)$ is represented by a set of random
state samples drawn from this posterior. The samples of a posterior
distribution are called particles, denoted by:

$$\mathcal{X}_t := x_t^{[1]}, x_t^{[2]}, \dots, x_t^{[M]}$$

Each particle $x_t^{[m]}$ (with $1 \le m \le M$) is a concrete
instantiation of the state at time $t$, i.e., a hypothesis as to what
the true world state may be at time $t$.
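For concreteness, a particle set can be stored simply as an array of $M$ state samples. The sketch below assumes a planar robot pose $(x, y, \theta)$ as the state; the array name and the uniform initialization are illustrative, not from the source.

```python
import numpy as np

M = 1000                         # number of particles
rng = np.random.default_rng(seed=0)

# X_t holds the particle set: row m is the state hypothesis x_t^[m] = (x, y, theta).
X_t = rng.uniform(low=[0.0, 0.0, -np.pi],
                  high=[10.0, 10.0, np.pi],
                  size=(M, 3))
```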
Key Idea
The likelihood for a state hypothesis $x_t$ to be included in the
particle set $\mathcal{X}_t$ should be proportional to its Bayes filter
posterior:

$$x_t^{[m]} \sim p(x_t \mid z_{1:t}, u_{1:t})$$

Hence, the denser a subregion of the state space is populated by
samples, the more likely it is that the true state falls into this
region. This property holds asymptotically as the number of particles
$M \to \infty$.
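The density-equals-probability idea can be checked numerically: if particles are drawn from the posterior, the fraction of particles falling into a region approximates the posterior probability mass of that region. The sketch below uses a standard normal as a stand-in "posterior", purely for illustration.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(seed=1)
M = 100_000
particles = rng.normal(loc=0.0, scale=1.0, size=M)   # samples from the "posterior"

# Probability mass of the region [0, 1]: particle fraction vs. exact value (~0.3413).
empirical = np.mean((particles >= 0.0) & (particles <= 1.0))
exact = 0.5 * erf(1.0 / sqrt(2.0))
print(f"empirical ~ {empirical:.4f}, exact ~ {exact:.4f}")
```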
Algorithm
The algorithm first constructs a temporary particle set
$\bar{\mathcal{X}}_t$ representing the predicted belief
$\overline{bel}(x_t)$: for each particle $x_{t-1}^{[m]}$ it samples a
successor state $x_t^{[m]} \sim p(x_t \mid u_t, x_{t-1}^{[m]})$ from the
motion model and assigns it the importance weight
$w_t^{[m]} = p(z_t \mid x_t^{[m]})$, the likelihood of the current
measurement under that hypothesis.
The second for-loop implements importance re-sampling. The algorithm
draws with replacement $M$ particles from the temporary set
$\bar{\mathcal{X}}_t$, where the probability of drawing each particle is
proportional to its importance weight; the resulting set is
(approximately) distributed according to the posterior $bel(x_t)$.
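To make the two loops concrete, here is a minimal sketch of one particle filter update for a scalar state. The first block plays the role of the first loop (sampling from the motion model and weighting by the measurement likelihood); the vectorized draw plays the role of the second loop (importance resampling with replacement). The Gaussian motion and measurement models, the noise parameters, and all names (`particle_filter_step`, `motion_noise`, `meas_noise`) are illustrative assumptions, not from the source.

```python
import numpy as np

rng = np.random.default_rng(seed=2)


def particle_filter_step(X_prev, u_t, z_t, motion_noise=0.2, meas_noise=0.5):
    """One update of the basic particle filter (sketch; 1-D state, Gaussian models assumed)."""
    M = len(X_prev)

    # First loop: sample x_t^[m] ~ p(x_t | u_t, x_{t-1}^[m]) and
    # weight by w_t^[m] = p(z_t | x_t^[m]).
    X_bar = X_prev + u_t + rng.normal(0.0, motion_noise, size=M)
    w = np.exp(-0.5 * ((z_t - X_bar) / meas_noise) ** 2)   # unnormalized Gaussian likelihood
    w /= np.sum(w)

    # Second loop: importance resampling, drawing M particles with replacement,
    # each with probability proportional to its importance weight.
    idx = rng.choice(M, size=M, replace=True, p=w)
    return X_bar[idx]


# Usage: track a scalar position from noisy odometry u_t and noisy position measurements z_t.
X = rng.normal(0.0, 1.0, size=1000)        # initial particle set
for u_t, z_t in [(1.0, 1.1), (1.0, 2.0), (1.0, 2.9)]:
    X = particle_filter_step(X, u_t, z_t)
print("posterior mean ~", X.mean())
```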
Properties
There are four complementary sources of approximation error:
- There are finitely many particles. The non-normalized values are
drawn from an $M$-dimensional space, but after normalization they
reside in a space of dimension $M-1$. The effect of this loss of
dimensionality becomes less pronounced with larger $M$.
- The resampling process induces a loss of diversity in the particle
population, manifesting as an approximation error; this is the
variance of the estimator. It is countered with several variance
reduction techniques:
  - Reducing the frequency of resampling
  - Low-variance sampling (see the sketch after this list)
- Divergence of proposal and target distribution: particles are generated from a proposal distribution that does not consider the measurement. If, at one extreme, the robot's sensors are highly inaccurate but its motion is accurate, the target distribution is similar to the proposal distribution and the particle filter is efficient. The opposite configuration, however, can cause the two distributions to diverge substantially.
- Particle deprivation problem: in high-dimensional spaces, there may be no particles in the vicinity of the correct state. That is, the number of particles might be too small to cover all relevant regions of high likelihood.
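As an illustration of the second variance reduction technique, the sketch below implements low-variance (systematic) resampling: a single random offset and $M$ evenly spaced pointers select particles with probability proportional to their weights, which preserves particle diversity better than $M$ independent draws. The function name and the NumPy-based implementation are assumptions, not from the source.

```python
import numpy as np


def low_variance_resample(particles, weights, rng):
    """Low-variance (systematic) resampling: one random offset, M evenly spaced pointers."""
    M = len(particles)
    positions = rng.uniform(0.0, 1.0 / M) + np.arange(M) / M   # evenly spaced in [0, 1)
    cumulative = np.cumsum(weights / np.sum(weights))
    idx = np.searchsorted(cumulative, positions)               # particle covering each pointer
    return particles[idx]


# Usage with uniform weights: the set is returned unchanged, preserving diversity,
# whereas M independent draws would already drop roughly 1/e of the particles.
rng = np.random.default_rng(seed=3)
P = np.arange(10, dtype=float)
print(low_variance_resample(P, np.ones(10) / 10, rng))
```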