
Unsupervised Event-based Learning of Optical Flow, Depth, and Egomotion

Contributions

The authors propose a new input representation that captures the spatiotemporal distribution of events, together with a set of unsupervised loss functions that allow motion information to be learned from the event stream alone.

Input Representation

Given a set of $N$ input events $\{(x_i, y_i, t_i, p_i)\}_{i \in [1, N]}$ and a set of $B$ bins to discretize the time dimension, the timestamps are scaled to the range $[0, B-1]$, and the event volume is generated as:

\begin{align*}
t_i^* &= \frac{(B-1)(t_i - t_1)}{t_N - t_1} \\
V(x, y, t) &= \sum_i p_i\, k_b(x - x_i)\, k_b(y - y_i)\, k_b(t - t_i^*) \\
k_b(a) &= \max(0, 1 - |a|)
\end{align*}

where $k_b(a)$ is the bilinear sampling kernel.
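For concreteness, here is a minimal NumPy sketch of how such a volume might be constructed (the function name and the assumption of integer pixel coordinates are mine; the paper's own implementation may differ):

```python
import numpy as np

def event_volume(events, num_bins, height, width):
    """Build a discretized event volume V(t, y, x) as in the equations above.

    events: (N, 4) array of rows (x, y, t, p) with polarity p in {-1, +1}.
    Assumes integer pixel coordinates, so the spatial kernels
    k_b(x - x_i) and k_b(y - y_i) reduce to a delta at each pixel and only
    the temporal kernel spreads an event's mass across neighbouring bins.
    """
    x, y, t, p = events.T
    volume = np.zeros((num_bins, height, width), dtype=np.float32)

    # Scale timestamps to [0, B - 1].
    t_star = (num_bins - 1) * (t - t[0]) / (t[-1] - t[0])

    # k_b(a) = max(0, 1 - |a|): each event contributes to the two
    # temporal bins that bracket its scaled timestamp t_star.
    lower = np.floor(t_star).astype(int)
    for offset in (0, 1):
        bin_idx = lower + offset
        weight = np.maximum(0.0, 1.0 - np.abs(bin_idx - t_star))
        valid = bin_idx < num_bins
        np.add.at(
            volume,
            (bin_idx[valid], y[valid].astype(int), x[valid].astype(int)),
            p[valid] * weight[valid],
        )
    return volume
```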
