Tell Me, What Do You See? Interpretable Classification of Wiring Harness Branches with Deep Neural Networks

Sensors (Basel). 2021 Jun 24;21(13):4327. doi: 10.3390/s21134327.

Abstract

In the context of the robotisation of industrial operations involving the manipulation of deformable linear objects, there is a need for sophisticated machine vision systems that can classify wiring harness branches and indicate where to place them in the assembly process. However, industrial applications require interpretable machine learning predictions, as users want to know the underlying reason for the decisions made by the system. To address this issue, we propose several neural network architectures and test them on our novel dataset. We conducted experiments to assess the influence of input modality, data fusion type, data augmentation, and pretraining. The network's output is evaluated in terms of classification performance and is accompanied by saliency maps, which give the user in-depth insight into the classifier's operation, explain the responses of the deep neural network, and make the system's predictions interpretable to humans.
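As an illustration of the saliency-map idea mentioned in the abstract, the sketch below computes a simple input-gradient (vanilla gradient) saliency map for a PyTorch image classifier. The `resnet18` model and the `saliency_map` helper are hypothetical stand-ins; they are not the architecture or explanation method used in the paper.

```python
# Minimal sketch of a gradient-based saliency map (vanilla gradients),
# assuming a PyTorch image classifier. Illustration only; not the
# paper's actual model or saliency method.
import torch
import torchvision.models as models

# Hypothetical stand-in classifier; the branch classifier from the paper
# would be used in practice.
model = models.resnet18(weights=None)
model.eval()

def saliency_map(model, image, target_class=None):
    """Return a per-pixel saliency map for a single image tensor (C, H, W)."""
    x = image.unsqueeze(0).requires_grad_(True)      # add batch dim, track grads
    logits = model(x)
    if target_class is None:
        target_class = logits.argmax(dim=1).item()   # explain the predicted class
    # Backpropagate the score of the chosen class to the input pixels.
    logits[0, target_class].backward()
    # Max absolute gradient over channels -> (H, W) saliency map.
    return x.grad.abs().max(dim=1).values.squeeze(0)

# Usage example with a random image-sized tensor standing in for a camera frame.
dummy = torch.rand(3, 224, 224)
sal = saliency_map(model, dummy)
print(sal.shape)  # torch.Size([224, 224])
```

The resulting map highlights which pixels most influence the class score, which is the kind of per-pixel evidence a saliency visualisation presents to the user.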

Keywords: computer vision for manufacturing; deformable linear objects; machine vision; neural networks; robot learning.

MeSH terms

  • Humans
  • Machine Learning*
  • Neural Networks, Computer*