Feedback Alignment and Random Error Backpropagation

Backpropagation is not biologically plausible because the error signals used to update the weights of the hidden layers need to be propagated back from the top layer.

Feedback alignment side-steps this problem by replacing the weights in the backpropagation rule with random ones:

\begin{equation} \delta_{i}^{(l)}=\sigma^{\prime}\left(a_{i}^{(l)}\right) \sum_{k} \delta_{k}^{(l+1)} G_{k i}^{(l)} \end{equation}

where \(G^{(l)}\) is a fixed, random matrix with the same dimensions as \(W^{(l)}\). Replacing the transposed forward weights \(W^{T,(l)}\) with a fixed random matrix breaks the dependency of the backward phase on \(W\) (the weight-transport problem), making the rule more local.
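As a minimal illustration, here is a NumPy sketch of one feedback-alignment update for a small multi-layer perceptron. The layer widths, sigmoid nonlinearity, squared-error loss, and learning rate are arbitrary choices for the example; the essential point is that the backward pass uses the fixed random matrices \(G\) in place of the transposed forward weights.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 8, 6, 2]      # layer widths, chosen arbitrarily for the example
L = len(sizes) - 1        # number of weight matrices

# Forward weights and fixed random feedback matrices of the same shape.
W = [rng.normal(0.0, 0.5, (sizes[l + 1], sizes[l])) for l in range(L)]
G = [rng.normal(0.0, 0.5, (sizes[l + 1], sizes[l])) for l in range(L)]  # never updated

def sigma(a):
    return 1.0 / (1.0 + np.exp(-a))

def dsigma(a):
    s = sigma(a)
    return s * (1.0 - s)

def forward(x):
    """Return pre-activations a[0..L-1] and activations h[0..L]."""
    a, h = [], [x]
    for l in range(L):
        a.append(W[l] @ h[-1])
        h.append(sigma(a[-1]))
    return a, h

def fa_deltas(a, h, y):
    """Backward pass: fixed random G takes the place of the transposed forward weights."""
    delta = [None] * L
    delta[-1] = dsigma(a[-1]) * (h[-1] - y)          # output error for a squared-error loss
    for l in range(L - 2, -1, -1):
        # Standard backprop would use W[l + 1].T here; feedback alignment uses G[l + 1].T.
        delta[l] = dsigma(a[l]) * (G[l + 1].T @ delta[l + 1])
    return delta

# One illustrative update step.
x, y = rng.normal(size=sizes[0]), rng.normal(size=sizes[-1])
a, h = forward(x)
delta = fa_deltas(a, h, y)
eta = 0.1
for l in range(L):
    W[l] -= eta * np.outer(delta[l], h[l])           # same local outer-product update as backprop
```

Empirically, the forward weights tend to come into alignment with the fixed feedback matrices over training, which is what makes the random feedback signal useful and gives the method its name.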

Another variation, direct feedback alignment, replaces the layer-by-layer backpropagation of errors with a direct random projection of the top-layer error to each layer:

\begin{equation} \delta_{i}^{(l)}=\sigma^{\prime}\left(a_{i}^{(l)}\right) \sum_{k} \delta_{k}^{(L)} H_{k i}^{(l)} \end{equation}

where \(H^{(l)}\) is a fixed, random matrix that maps the error at the output layer \(L\) directly to layer \(l\).
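A corresponding sketch for this variant, again with assumed layer widths, nonlinearity, and loss, projects the single top-layer error through fixed random matrices \(H^{(l)}\) to every hidden layer, so no error is passed from layer to layer:

```python
import numpy as np

rng = np.random.default_rng(1)
sizes = [4, 8, 6, 2]      # same toy architecture as above
L = len(sizes) - 1

W = [rng.normal(0.0, 0.5, (sizes[l + 1], sizes[l])) for l in range(L)]
# H[l] maps the top-layer error (dimension sizes[-1]) straight to hidden layer l.
H = [rng.normal(0.0, 0.5, (sizes[-1], sizes[l + 1])) for l in range(L - 1)]  # fixed, never updated

def sigma(a):
    return 1.0 / (1.0 + np.exp(-a))

def dsigma(a):
    s = sigma(a)
    return s * (1.0 - s)

def forward(x):
    a, h = [], [x]
    for l in range(L):
        a.append(W[l] @ h[-1])
        h.append(sigma(a[-1]))
    return a, h

def dfa_deltas(a, h, y):
    """Each hidden delta is a random projection of the single top-layer error."""
    delta = [None] * L
    delta[-1] = dsigma(a[-1]) * (h[-1] - y)          # delta^{(L)} for a squared-error loss
    for l in range(L - 1):
        # No layer-to-layer propagation: only delta^{(L)} and the fixed H[l] are needed.
        delta[l] = dsigma(a[l]) * (H[l].T @ delta[-1])
    return delta

x, y = rng.normal(size=sizes[0]), rng.normal(size=sizes[-1])
a, h = forward(x)
delta = dfa_deltas(a, h, y)
for l in range(L):
    W[l] -= 0.1 * np.outer(delta[l], h[l])
```

Because every layer's error arrives in a single step from the output, the hidden-layer updates need no backward pass through the network at all.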

Random BP applied to Spiking Neural Networks does not account for the temporal dynamics of neurons and synapses. SuperSpike solves this problem.