
Extended Kalman Filter

tags: Bayes Filter, Kalman Filter, Information Filter

Key Idea

Remove linearity assumption from the Kalman Filter:

\begin{align} x_t &= g(u_t, x_{t-1}) + \epsilon_t \\
z_t &= h(x_t) + \gamma_t \end{align}

where the function \(g\) replaces the matrices \(A_t\) and \(B_t\), and the function \(h\) replaces \(C_t\).
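
As a concrete instance, consider a hypothetical planar robot with state \((x, y, \theta)\), driven by velocity commands and observing the range and bearing to a known landmark. This is a minimal sketch; the landmark position, time step, and all names are illustrative assumptions, not part of the original note.

#+begin_src python
import numpy as np

LANDMARK = np.array([5.0, 3.0])  # assumed known landmark position
DT = 0.1                         # assumed time step

def g(u, x):
    """Nonlinear motion model: unicycle driven by u = (v, omega)."""
    v, omega = u
    px, py, theta = x
    return np.array([px + v * np.cos(theta) * DT,
                     py + v * np.sin(theta) * DT,
                     theta + omega * DT])

def h(x):
    """Nonlinear measurement model: range and bearing to the landmark."""
    dx, dy = LANDMARK - x[:2]
    return np.array([np.hypot(dx, dy),
                     np.arctan2(dy, dx) - x[2]])
#+end_src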

The belief remains approximated by a Gaussian, represented by mean \(\mu_t\) and covariance \(\Sigma_t\). Unlike in the Kalman filter, this belief is only approximate: the true posterior under nonlinear \(g\) and \(h\) is no longer Gaussian.

Linearization is key to EKFs. The EKF uses a first-order Taylor expansion of \(g\) to construct a linear approximation from its value and slope. The slope is given by the partial derivative:

\begin{equation} g'(u_t, x_{t-1}) := \frac{\partial g(u_t, x_{t-1})}{\partial x_{t-1}} \end{equation}

Both the value of \(g\) and its slope depend on the argument of \(g\). We choose the most likely argument: the mean of the posterior, \(\mu_{t-1}\), giving:

\begin{align} g(u_t, x_{t-1}) \approx g(u_t, \mu_{t-1}) + g'(u_t, \mu_{t-1}) (x_{t-1} - \mu_{t-1}) \end{align}

where we define \(G_t := g'(u_t, \mu_{t-1})\). \(G_t\) is the Jacobian matrix, with dimensions \(n \times n\), where \(n\) is the dimension of the state.
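
In practice \(G_t\) is usually derived analytically, but a finite-difference helper makes the definition concrete; the same routine approximates the measurement Jacobian \(H_t\) below by differentiating \(h\). The helper is assumed for illustration, not from the original note.

#+begin_src python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Approximate the Jacobian of f at x by forward differences."""
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x, dtype=float)
        dx[i] = eps
        J[:, i] = (np.asarray(f(x + dx)) - fx) / eps
    return J

# G_t: differentiate the (hypothetical) motion model g in its state
# argument, holding the control u fixed, at the previous mean:
#   G_t = numerical_jacobian(lambda x: g(u, x), mu_prev)
#+end_src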

Similarly, \(h\) is linearized as:

\begin{equation} h(x_t) \approx h(\overline{\mu}_t) + H_t (x_t - \overline{\mu}_t) \end{equation}

where \(H_t := h'(\overline{\mu}_t)\) is the Jacobian of the measurement function, evaluated at the predicted mean \(\overline{\mu}_t\).

Algorithm

\begin{algorithm}
\caption{Extended Kalman Filter}
\label{ekf}
\begin{algorithmic}[1]
\Procedure{ExtendedKalmanFilter}{$\mu_{t-1}, \Sigma_{t-1}, u_t, z_t$}
\State $\overline{\mu}_t = g(u_t, \mu_{t-1})$
\State $\overline{\Sigma}_t = G_t \Sigma_{t-1} G_t^T + R_t$
\State $K_t = \overline{\Sigma}_t H_t^T (H_t \overline{\Sigma}_t H_t^T + Q_t)^{-1}$
\State $\mu_t = \overline{\mu}_t + K_t(z_t - h(\overline{\mu}_t))$
\State $\Sigma_t = (I - K_t H_t) \overline{\Sigma}_t$
\State \Return $\mu_t, \Sigma_t$
\EndProcedure
\end{algorithmic}
\end{algorithm}
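
A line-by-line NumPy translation of the algorithm might look as follows. The calling convention is an assumption: \(g\) and \(h\) are models like the ones sketched above, G and H return their Jacobians, and R and Q are the process and measurement noise covariances.

#+begin_src python
import numpy as np

def ekf_step(mu_prev, Sigma_prev, u, z, g, h, G, H, R, Q):
    # Prediction: propagate the mean through the nonlinear dynamics
    mu_bar = g(u, mu_prev)
    G_t = G(u, mu_prev)                       # Jacobian of g at the previous mean
    Sigma_bar = G_t @ Sigma_prev @ G_t.T + R  # predicted covariance
    # Correction: Kalman gain from the linearized measurement model
    H_t = H(mu_bar)                           # Jacobian of h at the predicted mean
    S = H_t @ Sigma_bar @ H_t.T + Q           # innovation covariance
    K = Sigma_bar @ H_t.T @ np.linalg.inv(S)
    mu = mu_bar + K @ (z - h(mu_bar))
    Sigma = (np.eye(len(mu)) - K @ H_t) @ Sigma_bar
    return mu, Sigma
#+end_src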

Cons

Since the belief is modelled as a multivariate Gaussian, the EKF is incapable of representing multimodal beliefs. One extension is to represent the posterior as a mixture of Gaussians; such filters are called multi-hypothesis Kalman filters.
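
A rough sketch of the multi-hypothesis idea, under the same assumed calling convention as the EKF step above: each Gaussian component is filtered independently, and its mixture weight is rescaled by the likelihood of its innovation.

#+begin_src python
import numpy as np

def mh_update(components, u, z, g, h, G, H, R, Q):
    """One step; components is a list of (weight, mu, Sigma) hypotheses."""
    updated = []
    for w, mu, Sigma in components:
        mu_bar = g(u, mu)
        G_t = G(u, mu)
        Sigma_bar = G_t @ Sigma @ G_t.T + R
        H_t = H(mu_bar)
        S = H_t @ Sigma_bar @ H_t.T + Q       # innovation covariance
        nu = z - h(mu_bar)                    # innovation
        K = Sigma_bar @ H_t.T @ np.linalg.inv(S)
        mu_new = mu_bar + K @ nu
        Sigma_new = (np.eye(len(mu_new)) - K @ H_t) @ Sigma_bar
        # Rescale the weight by the Gaussian likelihood of the innovation
        lik = (np.exp(-0.5 * nu @ np.linalg.solve(S, nu))
               / np.sqrt((2 * np.pi) ** nu.size * np.linalg.det(S)))
        updated.append((w * lik, mu_new, Sigma_new))
    total = sum(w for w, _, _ in updated)     # re-normalize mixture weights
    return [(w / total, mu, Sigma) for w, mu, Sigma in updated]
#+end_src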

Extensions

There are multiple ways to linearize. The unscented Kalman filter probes the function to be linearized at selected sigma points and calculates a linearized approximation from the outcomes of these probes. Moment matching linearizes while preserving the true mean and true covariance of the posterior distribution.
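
A minimal sketch of the sigma-point idea behind the unscented transform: propagate \(2n + 1\) deterministically chosen points through the nonlinearity and recover a Gaussian from their weighted moments. The \(\kappa\) parameterization is one common choice; all names are illustrative.

#+begin_src python
import numpy as np

def unscented_transform(f, mu, Sigma, kappa=1.0):
    """Propagate a Gaussian (mu, Sigma) through f via sigma points."""
    n = mu.size
    L = np.linalg.cholesky((n + kappa) * Sigma)
    # 2n + 1 sigma points: the mean plus symmetric probes along columns of L
    points = ([mu] + [mu + L[:, i] for i in range(n)]
                   + [mu - L[:, i] for i in range(n)])
    weights = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    weights[0] = kappa / (n + kappa)
    ys = np.array([f(p) for p in points])
    mu_y = weights @ ys                       # matched mean
    Sigma_y = sum(w * np.outer(y - mu_y, y - mu_y)
                  for w, y in zip(weights, ys))
    return mu_y, Sigma_y
#+end_src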
