Gradient adaptive sampling and multiple temporal scale 3D CNNs for tactile object recognition

Front Neurorobot. 2023 Apr 26:17:1159168. doi: 10.3389/fnbot.2023.1159168. eCollection 2023.

Abstract

Tactile object recognition (TOR) is essential for accurate robotic perception. Most TOR methods adopt a uniform sampling strategy, randomly selecting tactile frames from a frame sequence, which leads to a dilemma: a high sampling rate yields large amounts of redundant data, while a low sampling rate misses important information. In addition, existing methods usually build the TOR model at a single temporal scale, so their generalization is insufficient for tactile data generated at different grasping speeds. To address the first problem, a novel gradient adaptive sampling (GAS) strategy is proposed that adaptively determines the sampling interval according to the importance of the tactile data, so that as much key information as possible is captured when the number of tactile frames is limited. To handle the second problem, a multiple temporal scale 3D convolutional neural networks (MTS-3DCNNs) model is proposed that downsamples the input tactile frames at multiple temporal scales (MTSs) and extracts MTS deep features; the fused features generalize better to objects grasped at different speeds. Furthermore, the existing lightweight ResNet3D-18 network is modified into an MR3D-18 network that matches the smaller size of tactile data and mitigates overfitting. Ablation studies show the effectiveness of the GAS strategy, the MTS-3DCNNs model, and the MR3D-18 network, and comprehensive comparisons with advanced methods demonstrate that our method is state-of-the-art on two benchmarks.
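The abstract does not give the exact GAS criterion, but the idea of spacing samples according to data importance can be sketched as inverse-CDF sampling over a gradient-based importance score. In this hypothetical sketch, importance is taken to be the mean absolute frame-to-frame difference (an assumption, not the paper's definition), so more frames are drawn where the tactile signal changes quickly:

```python
import numpy as np

def gradient_adaptive_sample(frames, n_samples):
    """Select up to n_samples frame indices, spacing them densely where
    the tactile signal changes fast.

    Hypothetical importance measure: mean absolute difference between
    consecutive frames (the paper's exact criterion is not stated in
    the abstract)."""
    frames = np.asarray(frames, dtype=float)
    # Per-frame importance: gradient magnitude between consecutive frames.
    grad = np.abs(np.diff(frames, axis=0)).mean(axis=tuple(range(1, frames.ndim)))
    importance = np.concatenate([grad[:1] if grad.size else [1.0], grad])
    importance = importance + 1e-8  # avoid a degenerate all-zero sequence
    # Inverse-CDF sampling: uniform steps in cumulative importance map
    # to non-uniform steps in time (small intervals in busy regions).
    cdf = np.cumsum(importance) / importance.sum()
    targets = (np.arange(n_samples) + 0.5) / n_samples
    idx = np.searchsorted(cdf, targets)
    # Duplicates can occur when importance is highly concentrated.
    return np.unique(np.clip(idx, 0, len(frames) - 1))
```

With a nearly static grasp that has a burst of contact change in the middle, almost all selected indices cluster around the burst, whereas a uniformly changing sequence yields evenly spread indices.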
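The multiple-temporal-scale front end can likewise be sketched as strided subsampling of one tactile sequence, producing one fixed-length clip per scale for the 3D-CNN branches. The stride set and clip length below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def multi_scale_clips(frames, strides=(1, 2, 4), clip_len=8):
    """Hypothetical MTS front end: subsample one tactile sequence at
    several temporal strides, so downstream 3D-CNN branches see the
    same grasp at different effective speeds."""
    frames = np.asarray(frames)
    clips = []
    for s in strides:
        sub = frames[::s][:clip_len]
        # Pad short sequences by repeating the last frame so every
        # branch receives a fixed-length clip.
        if len(sub) < clip_len:
            pad = np.repeat(sub[-1:], clip_len - len(sub), axis=0)
            sub = np.concatenate([sub, pad], axis=0)
        clips.append(sub)
    return clips
```

Each branch would then extract deep features from its clip, and the per-scale features would be fused (e.g. concatenated) before classification, which is where the claimed robustness to grasping speed comes from.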

Keywords: 3D convolutional neural networks; MR3D-18 network; gradient adaptive sampling; multiple temporal scale; tactile object recognition.

Grants and funding

This research was funded by the National Natural Science Foundation of China (Grants Nos. 62076223 and 62073299), the Project of Central Plains Science and Technology Innovation Leading Talents (No. 224200510026), and the Key Science and Technology Program of Henan Province (No. 232102211018).