Analyzing the Impact of Objects in an Image on Location Estimation Accuracy in Visual Localization

Sensors (Basel). 2024 Jan 26;24(3):816. doi: 10.3390/s24030816.

Abstract

Visual localization refers to the process of determining an observer's pose by analyzing the spatial relationships between a query image and a pre-existing set of images. In this procedure, matched visual features between images are identified and used for pose estimation; consequently, estimation accuracy depends heavily on the precision of feature matching. Incorrect feature matches, such as those between different objects or between different points on the same object, should therefore be avoided. In this paper, we first evaluated how reliably each object class in the image datasets supports accurate pose estimation. This assessment showed the building class to be reliable, whereas humans were unreliable across diverse locations. We then examined the degradation of pose estimation accuracy in more detail by artificially increasing the proportion of the unreliable class, humans. The results show that a noteworthy decline in accuracy begins when the average proportion of humans in the images exceeds 20%. We discuss the results and their implications for dataset construction for visual localization.

Keywords: augmented reality; semantic segmentation; synthetic dataset; visual localization.
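The abstract's core idea, discarding feature matches that fall on an unreliable object class (such as humans) before pose estimation, can be sketched with a small semantic-mask filter. This is an illustrative sketch, not the authors' implementation: the helper name `filter_matches_by_class` is hypothetical, and the class id 15 for humans is an assumption (it is the "person" label in Pascal VOC); a real pipeline would take labels from its own segmentation model.

```python
import numpy as np

def filter_matches_by_class(query_kp, ref_kp, seg_mask, unreliable_classes=(15,)):
    """Drop matches whose query keypoint lies on an unreliable semantic class.

    query_kp, ref_kp: (N, 2) arrays of matched (x, y) pixel coordinates.
    seg_mask: (H, W) integer array of per-pixel class labels for the query image.
    Returns the subset of matches that do not land on an unreliable class.
    """
    xs = query_kp[:, 0].astype(int)
    ys = query_kp[:, 1].astype(int)
    keep = ~np.isin(seg_mask[ys, xs], unreliable_classes)
    return query_kp[keep], ref_kp[keep]

# Toy 4x4 segmentation mask: right half labeled as class 15 ("person", assumed id).
mask = np.zeros((4, 4), dtype=int)
mask[:, 2:] = 15

query = np.array([[0, 0], [3, 0], [1, 3]])       # (x, y) keypoints in the query image
ref = np.array([[10, 10], [13, 10], [11, 13]])   # corresponding reference keypoints

q_kept, r_kept = filter_matches_by_class(query, ref, mask)
# The match at (3, 0) falls on a "person" pixel and is discarded; 2 of 3 survive.
```

The surviving matches would then be passed to the usual pose solver (e.g. PnP with RANSAC); the paper's finding suggests such filtering matters most once unreliable classes cover a large share of the image.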