Integrating Egocentric and Robotic Vision for Object Identification Using Siamese Networks and Superquadric Estimations in Partial Occlusion Scenarios

Biomimetics (Basel). 2024 Feb 8;9(2):100. doi: 10.3390/biomimetics9020100.

Abstract

This paper introduces a novel method that enables robots to identify objects based on user gaze, tracked via eye-tracking glasses, without prior knowledge of the objects' categories or locations and without external markers. The method integrates two components: a category-agnostic shape and pose estimator based on superquadrics, and a Siamese network. The superquadric-based component estimates the shape and pose of every object in the scene, while the Siamese network matches the object targeted by the user's gaze to its counterpart in the robot's viewpoint. Both components are designed to function effectively under partial occlusion. A key feature of the system is that the user can move freely around the scene, selecting objects dynamically via gaze from any position. The system handles significant viewpoint differences between the user and the robot and adapts easily to new objects. In tests under partial occlusion, the Siamese network matched the user-selected object to the robot's viewpoint with 85.2% accuracy. This gaze-based human–robot interaction approach demonstrates practicality and adaptability in real-world scenarios.
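The abstract does not give implementation details, so the following is only a minimal sketch of the two components it names: the standard superquadric inside-outside function (Barr's formulation) and a shared-weight Siamese embedding used to match the gaze-selected crop against candidate crops in the robot view. It assumes PyTorch; the backbone, function names, and similarity measure here are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def superquadric_inside_outside(p, scale, eps):
    """Barr's inside-outside function for a superquadric in its canonical
    frame: F < 1 inside, F = 1 on the surface, F > 1 outside.
    p: (N, 3) points; scale: semi-axes (a1, a2, a3); eps: exponents (e1, e2)."""
    x, y, z = (p / torch.tensor(scale)).unbind(dim=-1)
    e1, e2 = eps
    xy = (x.abs() ** (2.0 / e2) + y.abs() ** (2.0 / e2)) ** (e2 / e1)
    return xy + z.abs() ** (2.0 / e1)

class EmbeddingNet(nn.Module):
    """One shared-weight branch of a Siamese network (toy backbone;
    the paper's actual network may differ)."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(64 * 4 * 4, dim)

    def forward(self, x):
        z = self.conv(x).flatten(1)
        return F.normalize(self.fc(z), dim=-1)  # unit-length embedding

def match_gazed_object(net, user_crop, robot_crops):
    """Embed the gaze-selected crop from the eye-tracker view and every
    candidate object crop from the robot view; return the index of the
    most similar candidate by cosine similarity of unit embeddings."""
    with torch.no_grad():
        q = net(user_crop.unsqueeze(0))   # (1, dim) query embedding
        c = net(robot_crops)              # (K, dim) candidate embeddings
        return int((c @ q.T).squeeze(1).argmax())
```

In this reading, the superquadric stage supplies per-object shape and pose (and hence crops or segments robust to partial occlusion), and the Siamese stage resolves which robot-view object corresponds to the one the user is looking at; how the two stages are actually coupled is detailed in the paper itself, not the abstract.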

Keywords: Siamese network; gaze; human–robot interaction; image matching; pose estimation; primitive shapes; superquadrics.