Real-to-Synthetic Feature Transform for Illumination Invariant Camera Localization

IEEE Comput Graph Appl. 2022 Jan-Feb;42(1):47-55. doi: 10.1109/MCG.2020.3041617. Epub 2022 Jan 25.

Abstract

Accurate camera localization is an essential component of tracking systems. However, localization results are strongly affected by illumination. Including data collected under various lighting conditions can improve the robustness of the localization algorithm to lighting variation, but collecting such data is time-consuming. Synthetic images are easy to accumulate, and their illumination can be controlled. However, synthetic images do not perfectly match real images of the same scene; that is, there is a gap between real and synthetic images that also affects the accuracy of camera localization. To reduce the impact of this gap, we introduce the "real-to-synthetic feature transform (REST)". REST is a fully connected neural network that converts real features to their synthetic counterparts. The converted features can then be matched against the accumulated database for robust camera localization. Our experimental results show that REST improves matching accuracy by approximately 28% compared to a naïve method.
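As a rough illustration of the idea described in the abstract, the sketch below maps real-image feature descriptors to the synthetic domain with a small fully connected network trained on matched descriptor pairs. The descriptor dimensionality (128), the layer widths and depth, and the L2 regression loss are assumptions made for illustration only; the abstract does not specify these details. PyTorch is used here merely as a convenient framework.

    # Minimal sketch of a real-to-synthetic feature transform.
    # Assumptions (not stated in the abstract): 128-D descriptors,
    # two hidden layers of width 256, L2 regression loss.
    import torch
    import torch.nn as nn

    class REST(nn.Module):
        """Fully connected network mapping a descriptor extracted from a
        real image to its synthetic-domain counterpart."""
        def __init__(self, dim=128, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, hidden),
                nn.ReLU(),
                nn.Linear(hidden, dim),
            )

        def forward(self, real_desc):
            return self.net(real_desc)

    model = REST()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Hypothetical training batch: descriptor pairs from keypoints
    # matched between a real image and a synthetic rendering.
    real = torch.randn(32, 128)       # descriptors from real images
    synthetic = torch.randn(32, 128)  # corresponding synthetic descriptors

    pred = model(real)                # transform real -> synthetic domain
    loss = loss_fn(pred, synthetic)   # penalize distance to synthetic pair
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

At query time, following the abstract's description, descriptors extracted from a real image would be passed through the trained network before being matched (e.g., by nearest-neighbor search) against the database built from synthetic images rendered under varied illumination.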

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Databases, Factual
  • Lighting* / methods
  • Pattern Recognition, Automated* / methods
  • Photic Stimulation