Cross-Viewpoint Semantic Mapping: Integrating Human and Robot Perspectives for Improved 3D Semantic Reconstruction

Sensors (Basel). 2023 May 27;23(11):5126. doi: 10.3390/s23115126.

Abstract

Allocentric semantic 3D maps are highly useful for a variety of human–machine interaction tasks, since the machine can derive egocentric viewpoints from them for its human partner. However, class labels and map interpretations may differ or be missing for the participants due to their different perspectives, particularly when the viewpoint of a small robot is considered, which differs significantly from that of a human. To overcome this issue and establish common ground, we extend an existing real-time 3D semantic reconstruction pipeline with semantic matching across human and robot viewpoints. We use deep recognition networks, which usually perform well from higher (i.e., human) viewpoints but are inferior from lower viewpoints, such as that of a small robot. We propose several approaches for acquiring semantic labels for images taken from unusual perspectives. We start with a partial 3D semantic reconstruction from the human perspective, which we transfer and adapt to the small robot's perspective using superpixel segmentation and the geometry of the surroundings. The quality of the reconstruction is evaluated in the Habitat simulator and in a real environment using a robot car with an RGBD camera. We show that the proposed approach provides high-quality semantic segmentation from the robot's perspective, with accuracy comparable to that of the original. In addition, we exploit the gained information to improve the recognition performance of the deep network for the lower viewpoints and show that the small robot alone is capable of generating high-quality semantic maps for the human partner. The computations run close to real time, so the approach enables interactive applications.
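The label-transfer step described above can be illustrated with a minimal sketch: labeled 3D points from the human-perspective reconstruction are projected into the robot camera through a standard pinhole model, and the resulting sparse label image is densified by majority vote within superpixels. All function names are hypothetical, the superpixel map is assumed to be precomputed (e.g., by SLIC), and the paper's actual pipeline is more elaborate; this only conveys the core idea.

```python
import numpy as np

def project_labels(points, labels, K, R, t, shape):
    """Project labeled 3D world points into a camera view (pinhole model).

    points: (N, 3) world coordinates; labels: (N,) integer class ids;
    K: (3, 3) intrinsics; R, t: world-to-camera rotation and translation.
    Returns a sparse label image of `shape` with -1 marking unlabeled pixels.
    """
    cam = R @ points.T + t[:, None]          # world -> camera coordinates
    z = cam[2]
    valid = z > 0                            # keep points in front of camera
    uv = (K @ cam[:, valid]) / z[valid]      # perspective projection
    u = np.round(uv[0]).astype(int)
    v = np.round(uv[1]).astype(int)
    h, w = shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    sparse = np.full(shape, -1, dtype=int)
    sparse[v[inside], u[inside]] = labels[valid][inside]
    return sparse

def densify_with_superpixels(sparse, superpixels, n_classes):
    """Assign each superpixel the majority label among its projected points."""
    dense = np.full(sparse.shape, -1, dtype=int)
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        votes = sparse[mask]
        votes = votes[votes >= 0]            # ignore unlabeled pixels
        if votes.size:
            dense[mask] = np.bincount(votes, minlength=n_classes).argmax()
    return dense
```

In practice the superpixel map would come from an oversegmentation of the robot's RGB image, so the vote respects image boundaries and the scene geometry constrains which 3D points are visible.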

Keywords: 3D semantic maps; computer vision; deep learning; human–robot collaboration; label transfer; real-time reconstruction; semantic matching; semantic segmentation; superpixel segmentation.

MeSH terms

  • Humans
  • Robotics* / methods
  • Semantics

Grants and funding

This work was supported by the European Union project RRF-2.3.1-21-2022-00004, within the framework of the Artificial Intelligence National Laboratory. It was also partially supported by the European Commission funded project “Humane AI: Toward AI Systems That Augment and Empower Humans by Understanding Us, our Society and the World Around Us” (grant # 820437). The support is gratefully acknowledged. The authors thank Robert Bosch Ltd., Budapest, Hungary for their generous support to the Department of Artificial Intelligence. This work was funded by the European Commission project MASTER (grant number 101093079; https://www.master-xr.eu/, accessed on 23 May 2023).