Markov Localization

Markov localization is a direct extension of the Bayes filter that additionally conditions on the map \(m\) of the environment:

\begin{algorithm}
\caption{Markov Localization}
\label{markov_localization}
\begin{algorithmic}[1]
\Procedure{Markov Localization}{$\text{bel}(x_{t-1}), u_t, z_t, m$}
\ForAll{$x_t$}
\State $\overline{\text{bel}}(x_t) = \int p(x_t \mid u_t, x_{t-1}, m) \, \text{bel}(x_{t-1}) \, dx_{t-1}$
\State $\text{bel}(x_t) = \eta \, p(z_t \mid x_t, m) \, \overline{\text{bel}}(x_t)$
\EndFor
\State \Return $\text{bel}(x_t)$
\EndProcedure
\end{algorithmic}
\end{algorithm}
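As a minimal sketch, the update can be implemented over a discretized (grid) pose space, where the integral becomes a sum. The function names `motion_model` and `measurement_model` below are illustrative assumptions standing in for \(p(x_t \mid u_t, x_{t-1}, m)\) and \(p(z_t \mid x_t, m)\), not part of the original algorithm statement:

```python
import numpy as np

def markov_localization_update(bel_prev, u_t, z_t, m,
                               motion_model, measurement_model):
    """One grid-based Markov localization update (sketch).

    bel_prev -- 1-D array: belief over discrete poses at time t-1
    motion_model(x_t, u_t, x_prev, m)  -- approximates p(x_t | u_t, x_{t-1}, m)
    measurement_model(z_t, x_t, m)     -- approximates p(z_t | x_t, m)
    """
    n = len(bel_prev)
    bel_bar = np.zeros(n)

    # Prediction step: the integral over x_{t-1} becomes a sum over grid cells.
    for x_t in range(n):
        for x_prev in range(n):
            bel_bar[x_t] += motion_model(x_t, u_t, x_prev, m) * bel_prev[x_prev]

    # Correction step: weight by the measurement likelihood, then normalize (eta).
    bel = np.array([measurement_model(z_t, x_t, m) * bel_bar[x_t]
                    for x_t in range(n)])
    return bel / bel.sum()
```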

The initial belief \(\mathrm{bel}(x_0)\) reflects the initial knowledge of the robot's pose and can be instantiated in different ways:

If the initial pose is known, \(\mathrm{bel}(x_0)\) is a point-mass distribution such that:

\begin{equation}
\operatorname{bel}\left(x_{0}\right)=\left\{\begin{array}{ll}{1} & {\text { if } x_{0}=\bar{x}_{0}} \\ {0} & {\text { otherwise }}\end{array}\right.
\end{equation}

However, point-mass distributions are discrete and do not have a density, so in most scenarios, a narrow Gaussian centered around \(\overline{x}_0\) is used instead.
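For example, on a 1-D grid of poses, a narrow Gaussian around a known initial pose \(\overline{x}_0\) could be set up as follows; the grid values and the standard deviation `sigma` are illustrative assumptions:

```python
import numpy as np

def init_known_pose(poses, x0_bar, sigma=0.05):
    """Belief init for a known initial pose: narrow Gaussian around x0_bar.

    poses -- 1-D array of discretized pose values
    sigma -- small standard deviation (illustrative value)
    """
    bel0 = np.exp(-0.5 * ((poses - x0_bar) / sigma) ** 2)
    return bel0 / bel0.sum()  # normalize so the belief sums to one
```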

If the initial pose is unknown, \(\mathrm{bel}(x_0)\) is initialized with a uniform distribution over the space of all legal poses in the map.
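Correspondingly, a uniform initialization over the legal poses of a grid map might look like the sketch below; the boolean `free_mask` input marking legal cells is an assumption for illustration:

```python
import numpy as np

def init_unknown_pose(free_mask):
    """Belief init for an unknown initial pose: uniform over legal poses.

    free_mask -- boolean array marking the map cells that are legal poses
    """
    bel0 = free_mask.astype(float)
    return bel0 / bel0.sum()  # equal mass on every legal pose, zero elsewhere
```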