Real-time event simulation with frame-based cameras
Andreas Ziegler
Daniel Teigland
Jonas Tebbe
Thomas Gossard
Andreas Zell
[Paper]
[Code]
[Video]
Runtime comparison [ms] on low events-per-frame footage (table-tennis ball) with a resolution of 1280 x 720 pixels. LK stands for Lucas-Kanade. *Due to memory constraints on the GPU, we had to limit the output for v2e to 640 x 420 pixels. **For vid2e we only measured the time for the event generation, not for the time-consuming upsampling. The bars visualize the mean value and the standard deviation. The blue line indicates a runtime of 30 ms, which we consider real-time capable in this work.

Abstract

Event cameras are becoming increasingly popular in robotics and computer vision due to their beneficial properties, e.g., high temporal resolution, high bandwidth, almost no motion blur, and low power consumption. However, these cameras remain expensive and scarce in the market, making them inaccessible to the majority. Event simulators minimize the need for real event cameras when developing novel algorithms. However, due to the computational complexity of the simulation, the event streams of existing simulators cannot be generated in real-time; they have to be pre-calculated from existing video sequences or pre-rendered and then simulated from a virtual 3D scene. Although these offline-generated event streams can be used as training data for learning tasks, response-time-dependent applications cannot yet benefit from these simulators, as they still require an actual event camera. This work proposes simulation methods that improve the performance of event simulation by two orders of magnitude (making them real-time capable) while remaining competitive in the quality assessment.
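To illustrate what an event simulator computes, the sketch below implements the standard event-camera model: a pixel fires an event when its log intensity drifts from a per-pixel reference level by more than a contrast threshold C. This is a minimal, illustrative version of frame-based event generation, not the paper's optimized method; the function name, threshold value, and single-event-per-crossing simplification are our own assumptions.

```python
import numpy as np

def simulate_events(frame, ref_log, threshold=0.2):
    """Generate events from one new frame (illustrative sketch).

    frame:     HxW array of pixel intensities.
    ref_log:   HxW array of per-pixel reference log intensities,
               updated in place where events fire.
    threshold: contrast threshold C (assumed value, not from the paper).

    Returns (events, ref_log), where events is an Nx3 array of
    (x, y, polarity) with polarity in {+1, -1}. A full simulator
    would emit floor(|diff| / C) events per crossing and assign
    timestamps; this sketch emits at most one event per pixel.
    """
    eps = 1e-3  # avoid log(0) on dark pixels
    log_now = np.log(frame.astype(np.float64) + eps)
    diff = log_now - ref_log
    pos = diff >= threshold
    neg = diff <= -threshold
    ys, xs = np.nonzero(pos | neg)
    polarities = np.where(pos[ys, xs], 1, -1)
    # Move the reference level one threshold step toward the new value.
    ref_log[ys, xs] += polarities * threshold
    return np.stack([xs, ys, polarities], axis=1), ref_log
```

In this model, a brightening pixel produces positive-polarity events and a darkening pixel negative ones; the per-pixel reference level is what makes the sensor respond to relative (logarithmic) change rather than absolute intensity.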


Video




Paper

A. Ziegler, D. Teigland, J. Tebbe, T. Gossard, A. Zell.
Real-time event simulation with frame-based cameras.
In ICRA, 2023.
(hosted on arXiv)


[Bibtex]


Acknowledgements

This research was funded by Sony AI.