Jethro's Braindump

Synaptic Current Model

Synaptic currents are triggered by the arrival of presynaptic spikes \(S_{j}^{(l)}(t)\). Spike trains \(S_{j}^{(l)}(t)\) are written as sums of Dirac delta functions, \(S_{j}^{(l)}(t)=\sum_{s \in C_{j}^{(l)}} \delta(t-s)\), where \(s\) runs over the firing times \(C_j^{(l)}\) of neuron \(j\) in layer \(l\).
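
In discrete time, such a spike train becomes a binary vector with one entry per time-step. A minimal sketch of the binning (the function name and the `t_max`/`dt` parameters are illustrative, not from the source):

```python
import numpy as np

def spike_train(firing_times, t_max, dt):
    """Bin firing times C_j into a 0/1 vector, so the sum of
    Dirac deltas becomes a sum over discrete time-steps."""
    n_steps = int(round(t_max / dt))
    s = np.zeros(n_steps)
    for t in firing_times:
        s[int(round(t / dt))] = 1.0  # one spike in the bin containing t
    return s

# spikes at 2 ms and 10 ms on a 1 ms grid
s = spike_train([0.002, 0.010], t_max=0.02, dt=0.001)
```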

A good first-order approximation of the synaptic current is one of exponential decay. Synaptic currents are also assumed to sum linearly.

\begin{equation} \label{eq:scm} \frac{\mathrm{d} I_{i}^{(l)}}{\mathrm{d} t}=-\underbrace{\frac{I_{i}^{(l)}(t)}{\tau_{\mathrm{syn}}}}_{\mathrm{exp} . \text { decay }}+\underbrace{\sum_{j} W_{i j}^{(l)} S_{j}^{(l-1)}(t)}_{\text {feed-forward }}+\underbrace{\sum_{j} V_{i j}^{(l)} S_{j}^{(l)}(t)}_{\text {recurrent }} \end{equation}

A single leaky integrate-and-fire (LIF) neuron can be simulated with two linear differential equations whose initial conditions change instantaneously when a spike occurs. Combining the reset term with the membrane equation of the LIF model, we get:

\begin{equation} \label{eq:lif_with_reset} \frac{\mathrm{d} U_{i}^{(l)}}{\mathrm{d} t}=-\frac{1}{\tau_{\mathrm{mem}}}\left(\left(U_{i}^{(l)}-U_{\mathrm{rest}}\right)-R I_{i}^{(l)}\right)+S_{i}^{(l)}(t)\left(U_{\mathrm{rest}}-\vartheta\right) \end{equation}

The solutions to Equations eq:scm and eq:lif_with_reset are approximated numerically by discretizing time and expressing the output spike train \(S_{i}^{(l)}[n]\) of neuron \(i\) in layer \(l\) at time-step \(n\) as a non-linear function of the membrane voltage, \(S_i^{(l)}[n] \equiv \Theta(U_i^{(l)}[n] - \vartheta)\), where \(\Theta\) is the Heaviside step function and \(\vartheta\) is the firing threshold.

Setting \(U_{\text{rest}} = 0\), \(R=1\), \(\vartheta=1\), and using a small simulation time-step \(\Delta t > 0\), we get:

\begin{equation} I_{i}^{(l)}[n+1]=\alpha I_{i}^{(l)}[n]+\sum_{j} W_{i j}^{(l)} S_{j}^{(l-1)}[n]+\sum_{j} V_{i j}^{(l)} S_{j}^{(l)}[n] \end{equation}

with decay strength \(\alpha \equiv \exp \left(-\frac{\Delta t}{\tau_{\mathrm{syn}}}\right)\). Equation eq:lif_with_reset can then be expressed as:
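
The decay factor comes from integrating the homogeneous part of Equation eq:scm exactly over one time-step: with no incoming spikes, the current obeys \(\frac{\mathrm{d} I_{i}^{(l)}}{\mathrm{d} t}=-\frac{I_{i}^{(l)}}{\tau_{\mathrm{syn}}}\), whose solution over an interval \(\Delta t\) is

\begin{equation} I_{i}^{(l)}(t+\Delta t)=\exp \left(-\frac{\Delta t}{\tau_{\mathrm{syn}}}\right) I_{i}^{(l)}(t)=\alpha I_{i}^{(l)}(t) \end{equation}

so \(\alpha\) carries the decayed current forward one step while the spike terms add the new input.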

\begin{equation} U_{i}^{(l)}[n+1]=\beta U_{i}^{(l)}[n]+I_{i}^{(l)}[n]-S_{i}^{(l)}[n] \end{equation}

with \(\beta \equiv \exp \left(-\frac{\Delta t}{\tau_{\mathrm{mem}}}\right)\). These two equations characterise the dynamics of an RNN: the state of neuron \(i\) is given by its instantaneous synaptic current \(I_i\) and its membrane voltage \(U_i\).
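
The two update equations can be sketched as a simple simulation loop. This is a minimal single-layer sketch, not the paper's implementation; the weights `W`, `V`, the time constants, and the input spike trains are toy values chosen for illustration:

```python
import numpy as np

def simulate(W, V, S_in, tau_syn=5e-3, tau_mem=10e-3, dt=1e-3, theta=1.0):
    """Discrete-time LIF dynamics: S_in has shape (n_steps, n_inputs),
    W is the feed-forward weight matrix, V the recurrent one,
    theta is the firing threshold."""
    n_steps = S_in.shape[0]
    n_out = W.shape[0]
    alpha = np.exp(-dt / tau_syn)   # synaptic decay strength
    beta = np.exp(-dt / tau_mem)    # membrane decay strength
    I = np.zeros(n_out)             # synaptic current state
    U = np.zeros(n_out)             # membrane voltage state
    spikes = np.zeros((n_steps, n_out))
    for n in range(n_steps):
        S = (U - theta >= 0).astype(float)   # S[n] = Theta(U[n] - theta)
        spikes[n] = S
        U = beta * U + I - S                 # U[n+1]: decay, integrate, reset
        I = alpha * I + W @ S_in[n] + V @ S  # I[n+1]: decay + feed-fwd + recurrent
    return spikes
```

Note the ordering: the membrane update reads `I` before it is advanced, and the current update reads the spikes `S` of the same step, matching the time indices \([n]\) on the right-hand sides of the equations.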

