End-to-end learning of object motion estimation from retinal events for event-based object tracking

Document Type

Conference Proceeding

Publication Title

Vol. 34 No. 07: AAAI-20 Technical Tracks 7

Volume

34

Publisher

AAAI Press

School

School of Science

RAS ID

35654

Comments

Chen, H., Suter, D., Wu, Q., & Wang, H. (2020). End-to-end learning of object motion estimation from retinal events for event-based object tracking. In Vol. 34 No. 07: AAAI-20 Technical Tracks 7 (pp. 10534-10541). AAAI Press. https://doi.org/10.1609/aaai.v34i07.6625

Abstract

Event cameras, which are asynchronous bio-inspired vision sensors, have shown great potential in computer vision and artificial intelligence. However, the application of event cameras to object-level motion estimation or tracking is still in its infancy. The main idea behind this work is to propose a novel deep neural network to learn and regress a parametric object-level motion/transform model for event-based object tracking. To achieve this goal, we propose a synchronous Time-Surface with Linear Time Decay (TSLTD) representation, which effectively encodes the spatio-temporal information of asynchronous retinal events into TSLTD frames with clear motion patterns. We feed the sequence of TSLTD frames to a novel Retinal Motion Regression Network (RMRNet) to perform an end-to-end 5-DoF object motion regression. Our method is compared with state-of-the-art object tracking methods that are based on conventional cameras or event cameras. The experimental results show the superiority of our method in handling various challenging environments such as fast motion and low illumination conditions.
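The abstract does not give the exact TSLTD formulation, but the idea of a time surface with linear time decay can be sketched as follows: each pixel stores the most recent event within a time window, weighted so that the value decays linearly from 1 (newest) to 0 (window edge). All names and parameters below (`tsltd_frame`, `window`, the per-polarity channels) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def tsltd_frame(events, t_ref, window, height, width):
    """Illustrative Time-Surface with Linear Time Decay (TSLTD) frame.

    events: iterable of (x, y, t, polarity) tuples with timestamps <= t_ref.
    Pixel values decay linearly from 1 (an event at t_ref) to 0 (an event
    at or before t_ref - window); the most recent event per pixel dominates.
    This is a hedged reconstruction from the abstract, not the paper's code.
    """
    # One channel per event polarity (negative, positive).
    frame = np.zeros((2, height, width), dtype=np.float32)
    for x, y, t, p in events:
        if t_ref - window <= t <= t_ref:
            decay = 1.0 - (t_ref - t) / window  # linear time decay in [0, 1]
            c = 1 if p > 0 else 0
            frame[c, y, x] = max(frame[c, y, x], decay)
    return frame

# Toy example: three events on a 4x4 sensor, window of 1.0 time units.
events = [(0, 0, 0.0, 1), (1, 1, 0.5, -1), (2, 2, 1.0, 1)]
frame = tsltd_frame(events, t_ref=1.0, window=1.0, height=4, width=4)
print(frame[1, 2, 2])  # newest positive event, full weight
```

A sequence of such frames, rendered at a fixed synchronous rate, is the kind of input a regression CNN like the paper's RMRNet could consume in place of raw asynchronous events.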

DOI

10.1609/aaai.v34i07.6625

Access Rights

free_to_read
