
Information Filter

tags: Gaussian Filter, Bayes Filter

Key Idea

The multivariate Gaussian is represented in its canonical representation by the precision/information matrix $\Omega$ and the information vector $\xi$, where $\Omega = \Sigma^{-1}$ and $\xi = \Sigma^{-1} \mu$.

In this parameterization, the Gaussian can be rewritten as follows:

$$p(x) = \eta \exp\left\{ -\frac{1}{2} x^T \Omega x + x^T \xi \right\}$$

where $\eta$ has been redefined to subsume an additional constant. The parameters are called the information matrix and information vector because $\log p(x)$ is quadratic in $x$, with parameters $\Omega$ and $\xi$.

For Gaussians, $\Omega$ is positive semi-definite, so $-\log p(x)$ is (up to a constant) a quadratic distance function centered at the mean $\mu = \Omega^{-1} \xi$. The matrix $\Omega$ determines the rate at which this distance function increases in the different dimensions of the variable $x$. A quadratic distance weighted by a matrix $\Omega$ is called a Mahalanobis distance.
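As a quick numeric check (a sketch of my own, not from the original note; the numbers are illustrative), the following converts a Gaussian from its moments parameterization $(\mu, \Sigma)$ to canonical form $(\Omega, \xi)$, evaluates the density both ways, and recovers $\mu = \Omega^{-1}\xi$:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Moments parameterization (illustrative values).
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])

# Canonical (information) parameterization.
Omega = np.linalg.inv(Sigma)   # information matrix
xi = Omega @ mu                # information vector

# Normalizer eta, obtained by expanding the quadratic in the exponent:
# -1/2 (x-mu)^T Omega (x-mu) = -1/2 x^T Omega x + x^T xi - 1/2 mu^T xi.
d = len(mu)
eta = (2 * np.pi) ** (-d / 2) * np.sqrt(np.linalg.det(Omega)) \
    * np.exp(-0.5 * mu @ xi)

x = np.array([0.5, -1.0])
p_canonical = eta * np.exp(-0.5 * x @ Omega @ x + x @ xi)
p_moments = multivariate_normal(mu, Sigma).pdf(x)
assert np.isclose(p_canonical, p_moments)

# Recovering the mean from the canonical form: mu = Omega^{-1} xi.
assert np.allclose(np.linalg.solve(Omega, xi), mu)
```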

Algorithm

The information filter carries out the same prediction and measurement update as the Kalman filter, but in canonical form. Here $A_t$, $B_t$, and $C_t$ are the state-transition, control, and measurement matrices of the linear Gaussian system, and $R_t$ and $Q_t$ are the motion and measurement noise covariances:

$$
\begin{aligned}
&\textbf{Information\_filter}(\xi_{t-1}, \Omega_{t-1}, u_t, z_t): \\
&\quad \bar{\Omega}_t = \left( A_t \Omega_{t-1}^{-1} A_t^T + R_t \right)^{-1} \\
&\quad \bar{\xi}_t = \bar{\Omega}_t \left( A_t \Omega_{t-1}^{-1} \xi_{t-1} + B_t u_t \right) \\
&\quad \Omega_t = C_t^T Q_t^{-1} C_t + \bar{\Omega}_t \\
&\quad \xi_t = C_t^T Q_t^{-1} z_t + \bar{\xi}_t \\
&\quad \text{return } \xi_t, \Omega_t
\end{aligned}
$$
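A minimal Python sketch of one predict/update cycle under these linear-Gaussian assumptions (the function name and toy system below are my own, not from the source):

```python
import numpy as np

def information_filter_step(xi, Omega, u, z, A, B, C, R, Q):
    """One cycle of the information filter in canonical form.

    (xi, Omega) is the belief at time t-1; u is the control, z the
    measurement; A, B, C, R, Q are the linear-Gaussian system matrices
    (state transition, control, measurement, motion noise covariance,
    and measurement noise covariance).
    """
    # Prediction: this is where the information matrix must be inverted.
    Sigma_prev = np.linalg.inv(Omega)
    Omega_bar = np.linalg.inv(A @ Sigma_prev @ A.T + R)
    xi_bar = Omega_bar @ (A @ Sigma_prev @ xi + B @ u)

    # Measurement update: purely additive in canonical form.
    Q_inv = np.linalg.inv(Q)
    Omega_new = C.T @ Q_inv @ C + Omega_bar
    xi_new = C.T @ Q_inv @ z + xi_bar
    return xi_new, Omega_new

# Toy 1-D usage: random walk with direct position measurement.
A = B = C = np.array([[1.0]])
R = np.array([[0.1]])   # motion noise variance
Q = np.array([[0.5]])   # measurement noise variance
xi, Omega = np.array([0.0]), np.array([[1.0]])
xi, Omega = information_filter_step(xi, Omega, u=np.array([0.2]),
                                    z=np.array([0.3]),
                                    A=A, B=B, C=C, R=R, Q=Q)
mu = np.linalg.solve(Omega, xi)  # recover the state estimate when needed
```

Note the asymmetry relative to the Kalman filter: here the measurement update is cheap and additive, while the prediction step carries the inversion cost (see Cons below).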

Pros

  1. Representing global uncertainty is simple: $\Omega = 0$. In the moments parameterization, global uncertainty amounts to a covariance of infinite magnitude.
  2. More numerically stable than the Kalman filter for many applications.
  3. A natural fit for multi-robot problems, where sensor data is collected in a decentralized fashion. Because the canonical parameters represent a probability in log form, information integration is additive and is achieved by summing the information from multiple robots (see the sketch after this list).
  4. The information matrix may be sparse, lending itself to computationally efficient algorithms.
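As a toy illustration of the additive fusion in item 3 (my own sketch; two robots with scalar measurement models $z_i = x + \varepsilon_i$ are assumed, not taken from the source):

```python
import numpy as np

# Two robots measure the same scalar state x with models z_i = x + noise,
# noise variances q_i (all values are illustrative).
C1, q1, z1 = np.array([[1.0]]), 0.5, np.array([1.2])
C2, q2, z2 = np.array([[1.0]]), 2.0, np.array([0.8])

# Start from total ignorance: Omega = 0 (item 1 above).
Omega = np.zeros((1, 1))
xi = np.zeros(1)

# Each robot contributes C^T Q^{-1} C and C^T Q^{-1} z; fusion is a sum,
# so the order in which the contributions arrive does not matter.
Omega += C1.T @ C1 / q1
xi += C1.T @ z1 / q1
Omega += C2.T @ C2 / q2
xi += C2.T @ z2 / q2

mu = np.linalg.solve(Omega, xi)  # fused (inverse-variance-weighted) estimate
```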

Cons

  1. The update step requires the recovery of a state estimate $\mu = \Omega^{-1} \xi$, which means inverting the information matrix. Matrix inversion is computationally expensive.