Semi-Supervised Image Stitching from Unstructured Camera Arrays

Sensors (Basel). 2023 Nov 28;23(23):9481. doi: 10.3390/s23239481.

Abstract

Image stitching combines multiple images of the same scene, captured from different viewpoints, into a single image with an expanded field of view. While this technique has various applications in computer vision, traditional methods rely on successively stitching pairs of images taken from multiple cameras. This approach works well for organized camera arrays but poses challenges for unstructured ones, especially when handling scene overlaps. This paper presents a deep learning-based approach for stitching images from large unstructured camera sets covering complex scenes. Our method processes images concurrently, using the SandFall algorithm to transform data from multiple cameras into a reduced fixed array and thereby minimize data loss. A customized convolutional neural network then processes these data to produce the final image. By stitching all images simultaneously, our method avoids the cascading errors that can arise in sequential pairwise stitching while offering improved time efficiency. In addition, we detail an unsupervised training method for the network that uses metrics from Generative Adversarial Networks, supplemented with supervised learning. Our tests show that the proposed approach runs in roughly one-seventh the time of many traditional methods on both CPU and GPU platforms while achieving results consistent with established methods.
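To make the fixed-array idea concrete, the sketch below collapses a variable-length list of aligned camera images into a stack with a fixed number of layers, so a network with a fixed input shape can accept any camera count. This is a hypothetical illustration only: the paper's SandFall algorithm is not specified here, and the round-robin layer assignment and averaging rule are assumptions made for demonstration.

```python
import numpy as np

def reduce_to_fixed_stack(images, k=4):
    """Illustrative sketch (not the paper's SandFall algorithm):
    collapse n aligned images of shape (H, W, C) into a fixed
    k-layer stack of shape (k, H, W, C), averaging the cameras
    that fall on the same layer."""
    stack = np.zeros((k,) + images[0].shape, dtype=np.float32)
    counts = np.zeros(k, dtype=np.float32)
    for i, img in enumerate(images):
        layer = i % k  # assumed round-robin assignment of cameras to layers
        stack[layer] += img.astype(np.float32)
        counts[layer] += 1
    counts = np.maximum(counts, 1)  # avoid division by zero on empty layers
    return stack / counts[:, None, None, None]

# Example: seven 8x8 RGB images reduced to a fixed 4-layer stack
imgs = [np.full((8, 8, 3), i, dtype=np.uint8) for i in range(7)]
fixed = reduce_to_fixed_stack(imgs, k=4)
print(fixed.shape)  # (4, 8, 8, 3)
```

Whatever the actual reduction rule, the key property is that the output shape depends only on `k`, not on the number of cameras, which is what allows a single CNN to process arbitrary unstructured arrays.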

Keywords: image blending; image stitching; scene representation; self-supervised learning; unstructured camera arrays.

Grants and funding

This research was partially funded by the National Science Foundation, grant number CNS 2007320. The APC was funded by the University of Florida via Dr. Christophe Bobda.