Adaptive Unsupervised Learning-Based 3D Spatiotemporal Filter for Event-Driven Cameras

Research (Wash D C). 2024 Apr 1;7:0330. doi: 10.34133/research.0330. eCollection 2024.

Abstract

In the evolving landscape of robotics and visual navigation, event cameras have gained significant traction, notably for their exceptional dynamic range, efficient power consumption, and low latency. Despite these advantages, conventional processing methods collapse the data into 2 dimensions, discarding critical temporal information. To overcome this limitation, we propose a novel method that treats events as 3D time-discrete signals. Drawing inspiration from the intricate biological filtering systems inherent in the human visual apparatus, we have developed a 3D spatiotemporal filter based on an unsupervised machine learning algorithm. This filter effectively reduces noise and shrinks the data volume, with its parameters dynamically adjusted according to population activity. This ensures adaptability and precision under varying conditions, such as changes in motion velocity and ambient lighting. In our novel validation approach, we first identify the noise type and determine its power spectral density in the event stream. We then apply a one-dimensional discrete fast Fourier transform to assess the filtered event data in the frequency domain, verifying that the targeted noise frequencies are adequately attenuated. We also investigated the impact of indoor lighting on event-stream noise. Remarkably, our method reduced the event point cloud by 37% while improving data quality in diverse outdoor settings.
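The abstract does not specify an implementation, so the sketch below only illustrates the general idea under stated assumptions: each event is treated as a point in (x, y, t), isolated points are rejected with an off-the-shelf unsupervised density clustering step (scikit-learn's DBSCAN, used here as a stand-in for the paper's own algorithm), and the neighborhood radius is scaled by the observed event rate as a rough analogue of the population-activity adaptation. A second helper bins timestamps into a rate signal and applies a 1D FFT, mirroring the frequency-domain validation step. All function names, defaults (time_scale, base_eps, target_rate, fs), and the rate-based scaling rule are hypothetical.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def spatiotemporal_filter(events, time_scale=1e-3, base_eps=3.0,
                          min_samples=4, target_rate=1e5):
    """Denoise an event stream by density clustering in (x, y, t).

    events:      (N, 3) array of [x_pixels, y_pixels, t_microseconds]
    time_scale:  converts microseconds to pixel-comparable units
                 (hypothetical default; the paper adapts its parameters online)
    target_rate: nominal event rate (events/s) at which base_eps applies
    Returns the events assigned to dense clusters; isolated points
    (DBSCAN label -1) are treated as noise and discarded.
    """
    events = np.asarray(events, dtype=np.float64)
    pts = events.copy()
    pts[:, 2] *= time_scale  # put time on a spatially comparable scale

    # Crude stand-in for population-activity adaptation: widen the
    # neighborhood when activity is low, shrink it when activity is high.
    duration_s = max((events[:, 2].max() - events[:, 2].min()) * 1e-6, 1e-9)
    rate = len(events) / duration_s
    eps = base_eps * np.sqrt(target_rate / max(rate, 1.0))

    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    return events[labels != -1]

def rate_spectrum(t_us, fs=1e4):
    """1D FFT check: bin event timestamps into a rate signal and return
    its magnitude spectrum, so residual noise frequencies can be compared
    before and after filtering."""
    t = np.asarray(t_us, dtype=np.float64) * 1e-6  # seconds
    edges = np.arange(t.min(), t.max(), 1.0 / fs)
    counts, _ = np.histogram(t, bins=edges)
    spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
    freqs = np.fft.rfftfreq(len(counts), d=1.0 / fs)
    return freqs, spectrum
```

In this reading, denoising quality would show up as a drop in the spectrum returned by rate_spectrum at the noise frequencies identified from the power spectral density, which is one plausible way to realize the validation the abstract describes.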