Lifelong Visual-Tactile Cross-Modal Learning for Robotic Material Perception

IEEE Trans Neural Netw Learn Syst. 2021 Mar;32(3):1192-1203. doi: 10.1109/TNNLS.2020.2980892. Epub 2021 Mar 1.

Abstract

The material attributes of an object's surface are critical for enabling robots to perform dexterous manipulation or to actively interact with surrounding objects. Tactile sensing has shown great advantages in capturing the material properties of an object's surface. However, conventional classification methods based on tactile information may not be suitable for estimating or inferring material properties, particularly when interacting with unfamiliar objects in unstructured environments. Moreover, it is difficult to intuitively interpret material properties from tactile data, as tactile signals are typically dynamic time sequences. In this article, a visual-tactile cross-modal learning framework is proposed for robotic material perception. In particular, we address visual-tactile cross-modal learning in the lifelong learning setting, which makes it possible to incrementally improve a robot's cross-modal material perception. To this end, we propose a novel lifelong cross-modal learning model. Experimental results on three publicly available data sets demonstrate the effectiveness of the proposed method.
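To give a concrete sense of the general idea, the sketch below illustrates generic visual-tactile cross-modal learning, not the authors' specific lifelong model: a visual feature vector and a tactile time sequence are each encoded into a shared embedding space, and a contrastive-style loss pulls matched visual-tactile pairs together. All layer sizes, input dimensions, and the choice of encoders (an MLP and a GRU) are illustrative assumptions.

```python
# A minimal sketch (not the paper's model) of visual-tactile cross-modal
# learning: both modalities are projected into a shared embedding space so
# that paired samples of the same material end up close together.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualEncoder(nn.Module):
    def __init__(self, feat_dim=512, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, embed_dim))

    def forward(self, x):               # x: (batch, feat_dim) visual features
        return F.normalize(self.net(x), dim=-1)

class TactileEncoder(nn.Module):
    def __init__(self, channels=6, embed_dim=64):
        super().__init__()
        self.rnn = nn.GRU(channels, 64, batch_first=True)
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, x):               # x: (batch, time, channels) tactile sequence
        _, h = self.rnn(x)              # final hidden state summarizes the sequence
        return F.normalize(self.proj(h[-1]), dim=-1)

def cross_modal_loss(z_v, z_t, temperature=0.1):
    """Symmetric InfoNCE-style loss: matched visual-tactile pairs attract."""
    logits = z_v @ z_t.t() / temperature
    labels = torch.arange(z_v.size(0))
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

# Usage on random stand-in data (real data would come from the tactile data sets).
vis_enc, tac_enc = VisualEncoder(), TactileEncoder()
z_v = vis_enc(torch.randn(8, 512))
z_t = tac_enc(torch.randn(8, 100, 6))
loss = cross_modal_loss(z_v, z_t)
loss.backward()
```

In a lifelong setting such as the one the paper targets, encoders like these would be updated over a sequence of material-perception tasks while preserving previously learned cross-modal associations; the mechanism for doing so is specific to the proposed model and is not reproduced here.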

Publication types

  • Research Support, Non-U.S. Gov't