End-to-End Residual Network for Light Field Reconstruction on Raw Images and View Image Stacks

Sensors (Basel). 2022 May 6;22(9):3540. doi: 10.3390/s22093540.

Abstract

Light field (LF) technology has attracted great interest due to its use in many applications, especially since the introduction of the consumer LF camera, which facilitated LF image acquisition. Obtaining densely sampled LF images, however, remains costly because of the trade-off between spatial and angular resolution. Accordingly, in this research, we propose a learning-based solution to this challenging problem that reconstructs dense, high-quality LF images. Instead of training our model with several separate images of the same scene, we used raw LF images (lenslet images). Because the raw LF format encodes several views of the same scene in a single image, it helps the network learn the relationships between different views, resulting in higher-quality reconstructions. We divided our model into two successive modules: LF reconstruction (LFR) and LF augmentation (LFA). Each module is implemented as a convolutional neural network (CNN)-based residual network. We trained our network to minimize the absolute error between the novel and reference views. Experiments on real-world datasets show that the proposed method performs well and outperforms state-of-the-art approaches.
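The abstract does not give architectural details, so the following is only a minimal, hypothetical PyTorch sketch of the described pipeline: two successive residual CNN modules (LFR followed by LFA) trained with an L1 (mean absolute error) loss between synthesized and reference views. All layer sizes, channel counts, view counts, and the way the raw lenslet input is stacked into a tensor are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (not the authors' code): two successive residual CNN modules,
# "LFR" (reconstruction) and "LFA" (augmentation), trained with an L1 loss
# between synthesized and reference views. All sizes are illustrative.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual (skip) connection


class ResidualModule(nn.Module):
    """Small residual CNN used for both the LFR and LFA stages (assumed layout)."""

    def __init__(self, in_ch: int, out_ch: int, channels: int = 64, blocks: int = 4):
        super().__init__()
        self.head = nn.Conv2d(in_ch, channels, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(blocks)])
        self.tail = nn.Conv2d(channels, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        feat = self.head(x)
        return self.tail(self.blocks(feat) + feat)


class LFReconstructionNet(nn.Module):
    """End-to-end model: LFR reconstructs a dense view stack from the sparse
    input; LFA refines the stacked view images."""

    def __init__(self, in_views: int = 4, out_views: int = 64):
        super().__init__()
        self.lfr = ResidualModule(in_views, out_views)   # reconstruction stage
        self.lfa = ResidualModule(out_views, out_views)  # augmentation/refinement stage

    def forward(self, sparse_views):
        coarse = self.lfr(sparse_views)
        return self.lfa(coarse)


# Training objective: mean absolute (L1) error between novel and reference views.
if __name__ == "__main__":
    model = LFReconstructionNet()
    sparse = torch.randn(1, 4, 64, 64)       # e.g., 2x2 input views stacked on channels
    reference = torch.randn(1, 64, 64, 64)   # e.g., 8x8 dense reference view stack
    loss = nn.L1Loss()(model(sparse), reference)
    loss.backward()
    print(loss.item())
```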

Keywords: learning-based view synthesis; convolutional neural network; light field reconstruction; micro-lens image.

MeSH terms

  • Image Processing, Computer-Assisted* / methods
  • Neural Networks, Computer*