Lidar-Camera Semi-Supervised Learning for Semantic Segmentation

Sensors (Basel). 2021 Jul 14;21(14):4813. doi: 10.3390/s21144813.

Abstract

In this work, we investigated two issues: (1) how the fusion of lidar and camera data can improve semantic segmentation performance compared with the individual sensor modalities in a supervised learning context; and (2) how fusion can also be leveraged for semi-supervised learning in order to further improve performance and to adapt to new domains without requiring any additional labelled data. A comparative study was carried out through an experimental evaluation of networks trained in different setups across scenarios ranging from sunny days to rainy night scenes. The networks were tested on challenging and less common scenarios where cameras or lidars individually would not provide a reliable prediction. Our results suggest that semi-supervised learning and fusion techniques increase the overall performance of the network in challenging scenarios while requiring fewer data annotations.
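The two ideas in the abstract can be illustrated with a minimal sketch: late fusion of per-pixel class probabilities from a camera network and a lidar network, followed by confidence-thresholded pseudo-labelling for the semi-supervised step. This is not the authors' implementation; the class set, fusion weights, and threshold below are illustrative assumptions.

```python
# Hedged sketch (not the paper's method): late fusion of two modality
# score vectors for one pixel, then pseudo-label generation for
# semi-supervised training. Values and names are illustrative.

NUM_CLASSES = 3  # e.g. road, vehicle, background (assumed label set)

def fuse_scores(cam_probs, lidar_probs, w_cam=0.5):
    """Late fusion: weighted average of the per-class distributions."""
    w_lidar = 1.0 - w_cam
    return [w_cam * c + w_lidar * l for c, l in zip(cam_probs, lidar_probs)]

def pseudo_label(probs, threshold=0.65):
    """Return the arg-max class if the prediction is confident enough,
    otherwise None (pixel would be ignored in the unsupervised loss)."""
    best = max(range(NUM_CLASSES), key=lambda k: probs[k])
    return best if probs[best] >= threshold else None

# A pixel where the camera is unsure (e.g. a rainy night scene) but the
# lidar is confident; fusion recovers a usable pseudo-label.
cam = [0.50, 0.30, 0.20]
lidar = [0.90, 0.06, 0.04]
fused = fuse_scores(cam, lidar)

print(pseudo_label(cam))    # → None (camera alone is below threshold)
print(pseudo_label(fused))  # → 0 (fusion is confident enough)
```

In this toy setup, camera-only confidence falls below the threshold, so no pseudo-label is produced, while the fused prediction clears it; this mirrors the abstract's claim that fusion helps precisely in scenarios where one sensor alone is unreliable.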

Keywords: deep learning; semantic segmentation; semi-supervised learning; sensor fusion.

MeSH terms

  • Semantics*
  • Specimen Handling
  • Supervised Machine Learning*