Event fusion photometric stereo network

Neural Netw. 2023 Oct;167:141-158. doi: 10.1016/j.neunet.2023.08.009. Epub 2023 Aug 9.

Abstract

Photometric stereo methods typically rely on RGB cameras and are usually performed in a dark room to avoid ambient illumination, which poses a significant challenge due to the restricted dynamic range of RGB cameras. To address this limitation, we present a novel method, the Event Fusion Photometric Stereo Network (EFPS-Net), which estimates the surface normals of an object under ambient illumination through a deep fusion of RGB and event cameras. The high dynamic range of event cameras captures light information that RGB cameras cannot. Specifically, we propose an event interpolation method that recovers denser light information, enabling precise estimation of an object's surface normals. Using RGB-event fused observation maps, EFPS-Net outperforms previous state-of-the-art methods that rely on RGB frames alone, reducing mean angular error by 7.94%. In addition, we curate a novel photometric stereo dataset by capturing objects with RGB and event cameras under numerous ambient-light conditions.

Keywords: 3D reconstruction; Deep learning; Event camera; Optical profilometry; Photometric stereo.
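The abstract does not give implementation details for the interpolation or fusion steps. The following is a minimal Python sketch of how an RGB-event fused observation map might be assembled, assuming an observation-map formulation in the style of observation-map-based photometric stereo (e.g., CNN-PS). All function names, the inverse-distance interpolation, and the saturation-based fusion rule are illustrative assumptions, not the paper's actual method.

    import numpy as np

    def observation_map(intensities, light_dirs, size=32):
        # Project per-light-direction intensities onto a 2D grid indexed
        # by the (x, y) components of each unit light direction, as in
        # observation-map-based photometric stereo (hypothetical variant).
        obs = np.zeros((size, size), dtype=np.float32)
        for intensity, (lx, ly, _) in zip(intensities, light_dirs):
            u = int((lx + 1.0) / 2.0 * (size - 1))
            v = int((ly + 1.0) / 2.0 * (size - 1))
            obs[v, u] = max(obs[v, u], float(intensity))
        return obs

    def interpolate_events(event_intensities, event_dirs, query_dirs):
        # Hypothetical stand-in for the paper's event interpolation step:
        # inverse-distance weighting of sparse event-derived intensities
        # onto a denser set of query light directions.
        event_dirs = np.asarray(event_dirs, dtype=np.float32)
        event_intensities = np.asarray(event_intensities, dtype=np.float32)
        out = []
        for q in np.asarray(query_dirs, dtype=np.float32):
            dist = np.linalg.norm(event_dirs - q, axis=1) + 1e-6
            weights = 1.0 / dist
            out.append(float(weights @ event_intensities / weights.sum()))
        return np.array(out, dtype=np.float32)

    def fuse_maps(rgb_map, event_map, sat_thresh=0.95):
        # Assumed fusion rule: fall back on the event-derived map wherever
        # ambient light saturates the RGB observation map.
        fused = rgb_map.copy()
        saturated = rgb_map >= sat_thresh
        fused[saturated] = event_map[saturated]
        return fused

In a pipeline of this kind, one fused observation map per pixel would be fed to a normal-estimation network; the fusion rule sketched above simply prefers event-derived measurements wherever the RGB channel clips under ambient light.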