
Extended Kalman Filter

tags: Bayes Filter, Kalman Filter, Information Filter

Key Idea

Remove linearity assumption from the Kalman Filter:

\[
x_t = g(u_t, x_{t-1}) + \epsilon_t \qquad z_t = h(x_t) + \gamma_t
\]

where the function \(g\) replaces the matrices \(A_t\) and \(B_t\), and the function \(h\) replaces \(C_t\).

The belief is still represented by a Gaussian with mean \(\mu_t\) and covariance \(\Sigma_t\). Unlike in the Kalman filter, this belief is only approximate.

Linearization is key to EKFs. The EKF uses a first-order Taylor expansion to construct a linear approximation of \(g\) from its value and slope. The slope is given by the partial derivative:

\[
g'(u_t, x_{t-1}) := \frac{\partial g(u_t, x_{t-1})}{\partial x_{t-1}}
\]

Both \(g\) and its slope depend on the argument of \(g\). We choose the most likely argument: the mean of the posterior \(\mu_{t-1}\), giving:

\[
g(u_t, x_{t-1}) \approx g(u_t, \mu_{t-1}) + g'(u_t, \mu_{t-1}) (x_{t-1} - \mu_{t-1})
\]

where we define \(G_t := g'(u_t, \mu_{t-1})\). \(G_t\) is the Jacobian matrix, with dimensions \(n \times n\), where \(n\) is the dimension of the state.
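To make this concrete, here is a minimal sketch of a nonlinear \(g\) and its Jacobian \(G_t\), assuming a hypothetical unicycle motion model with state \((p_x, p_y, \theta)\) and control \(u = (v, \omega)\); the time step `DT` and all function names are illustrative choices, not fixed by the EKF itself:

```python
import numpy as np

DT = 0.1  # assumed time step

def g(u, x):
    """Hypothetical unicycle motion model g(u_t, x_{t-1})."""
    px, py, th = x
    v, w = u
    return np.array([px + v * DT * np.cos(th),
                     py + v * DT * np.sin(th),
                     th + w * DT])

def G(u, mu_prev):
    """Jacobian of g w.r.t. the state, evaluated at the posterior mean mu_{t-1}."""
    v, _ = u
    th = mu_prev[2]
    return np.array([[1.0, 0.0, -v * DT * np.sin(th)],
                     [0.0, 1.0,  v * DT * np.cos(th)],
                     [0.0, 0.0,  1.0]])

# First-order Taylor approximation around mu_prev:
#   g(u, x) ≈ g(u, mu_prev) + G(u, mu_prev) @ (x - mu_prev)
```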

Similarly, h is linearized as:

\[
h(x_t) \approx h(\bar{\mu}_t) + H_t (x_t - \bar{\mu}_t)
\]

where \(H_t := h'(\bar{\mu}_t)\) is the Jacobian of \(h\) evaluated at the predicted mean \(\bar{\mu}_t\).
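Similarly, a sketch of a possible \(h\) and its Jacobian \(H_t\), assuming a hypothetical range-bearing measurement of a landmark at a known position; the landmark location and names are again illustrative:

```python
import numpy as np

LANDMARK = np.array([5.0, 3.0])  # assumed known landmark position

def h(x):
    """Hypothetical range-bearing measurement of the landmark."""
    px, py, th = x
    dx, dy = LANDMARK[0] - px, LANDMARK[1] - py
    q = dx**2 + dy**2
    return np.array([np.sqrt(q), np.arctan2(dy, dx) - th])

def H(mu_bar):
    """Jacobian of h w.r.t. the state, evaluated at the predicted mean."""
    px, py, _ = mu_bar
    dx, dy = LANDMARK[0] - px, LANDMARK[1] - py
    q = dx**2 + dy**2
    r = np.sqrt(q)
    return np.array([[-dx / r, -dy / r,  0.0],
                     [ dy / q, -dx / q, -1.0]])
```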

Algorithm

\[
\begin{aligned}
&\textbf{Extended\_Kalman\_filter}(\mu_{t-1}, \Sigma_{t-1}, u_t, z_t): \\
&\quad \bar{\mu}_t = g(u_t, \mu_{t-1}) \\
&\quad \bar{\Sigma}_t = G_t \Sigma_{t-1} G_t^\top + R_t \\
&\quad K_t = \bar{\Sigma}_t H_t^\top (H_t \bar{\Sigma}_t H_t^\top + Q_t)^{-1} \\
&\quad \mu_t = \bar{\mu}_t + K_t (z_t - h(\bar{\mu}_t)) \\
&\quad \Sigma_t = (I - K_t H_t) \bar{\Sigma}_t \\
&\quad \text{return } \mu_t, \Sigma_t
\end{aligned}
\]

where \(R_t\) and \(Q_t\) are the covariances of the motion noise \(\epsilon_t\) and the measurement noise \(\gamma_t\). This mirrors the Kalman filter, except that the linear predictions are replaced by the nonlinear \(g\) and \(h\), and the Jacobians \(G_t\) and \(H_t\) take the place of \(A_t, B_t\) and \(C_t\).
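As a concrete sketch of one EKF iteration in NumPy; the function name `ekf_step` and its signature are illustrative choices, with `R` and `Q` the motion and measurement noise covariances as above:

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, g, G, h, H, R, Q):
    """One EKF iteration (prediction + correction)."""
    # Prediction: propagate the mean through the nonlinear g,
    # and the covariance through the Jacobian G_t.
    mu_bar = g(u, mu)
    G_t = G(u, mu)
    Sigma_bar = G_t @ Sigma @ G_t.T + R

    # Correction: Kalman gain from the linearized measurement model H_t.
    H_t = H(mu_bar)
    K = Sigma_bar @ H_t.T @ np.linalg.inv(H_t @ Sigma_bar @ H_t.T + Q)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ H_t) @ Sigma_bar
    return mu_new, Sigma_new
```

With the hypothetical `g`, `G`, `h`, `H` sketches above, a single step would be called as `ekf_step(mu, Sigma, u, z, g, G, h, H, R, Q)`.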

Cons

Since the belief is modelled as a multivariate Gaussian, the EKF is incapable of representing multimodal beliefs. One extension is to represent the posterior as a mixture of Gaussians; these are called multi-hypothesis Kalman filters.

Extensions

There are multiple ways to linearize. The unscented Kalman filter probes the function to be linearized at selected points and calculates a linearized approximation based on the outcomes of these probes. Moment matching linearizes while preserving the true mean and true covariance of the posterior distribution.
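As a rough sketch of the sigma-point idea, here is the unscented transform in its simple unscaled form; the weighting scheme and the `kappa` parameter follow one common basic formulation and are assumptions, not the only choice:

```python
import numpy as np

def unscented_transform(f, mu, Sigma, kappa=1.0):
    """Probe f at 2n+1 sigma points and recover the mean and
    covariance of the transformed distribution."""
    n = len(mu)
    L = np.linalg.cholesky((n + kappa) * Sigma)  # matrix square root
    pts = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
    w = np.array([kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n))
    ys = np.array([f(p) for p in pts])
    mu_y = w @ ys
    Sigma_y = sum(wi * np.outer(y - mu_y, y - mu_y) for wi, y in zip(w, ys))
    return mu_y, Sigma_y
```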
