Joint Unsupervised Learning of Depth, Pose, Ground Normal Vector and Ground Segmentation by a Monocular Camera Sensor

Sensors (Basel). 2020 Jul 3;20(13):3737. doi: 10.3390/s20133737.

Abstract

We propose a completely unsupervised approach to simultaneously estimate scene depth, ego-pose, ground segmentation and the ground normal vector from only monocular RGB video sequences. In our approach, the estimates of the different scene structures mutually benefit each other through joint optimization. Specifically, we use a mutual-information loss to pre-train the ground segmentation network before adding the corresponding self-supervised labels obtained by a geometric method. By exploiting the static nature of the ground and its normal vector, scene depth and ego-motion can be learned efficiently in a self-supervised manner. Extensive experimental results on both the Cityscapes and KITTI benchmarks demonstrate that our approach significantly improves the estimation accuracy of both scene depth and ego-pose. We also achieve an average error of about 3° for the estimated ground normal vectors. By deploying our proposed geometric constraints, the IoU accuracy of unsupervised ground segmentation is increased by 35% on the Cityscapes dataset.

Keywords: unsupervised learning, scene depth, ego-motion, ground segmentation, ground normal vector.
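To make the ground-normal pipeline concrete, the sketch below shows one common geometric recipe consistent with the abstract: back-project depth pixels flagged as ground into 3-D camera coordinates, fit a plane normal by least squares, and measure the angular error against a reference normal (the metric behind the reported ~3° figure). All function names, the pinhole intrinsics, and the synthetic data are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def backproject(depth, mask, fx, fy, cx, cy):
    """Lift ground pixels (u, v, depth) to 3-D points in camera coordinates
    using an assumed pinhole model (fx, fy, cx, cy are illustrative)."""
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def fit_plane_normal(points):
    """Least-squares plane normal: the singular vector of the centered
    point cloud with the smallest singular value."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    return n / np.linalg.norm(n)

def angular_error_deg(n_est, n_ref):
    """Angle between two unit normals in degrees, ignoring sign ambiguity."""
    cos = abs(float(np.dot(n_est, n_ref)))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

if __name__ == "__main__":
    # Synthetic flat ground 1.5 m below the camera with mild noise;
    # a plausible ground-truth normal in camera frame is (0, 1, 0).
    rng = np.random.default_rng(0)
    x = rng.uniform(-5.0, 5.0, 500)
    z = rng.uniform(5.0, 20.0, 500)
    y = 1.5 + 0.01 * rng.normal(size=500)
    pts = np.stack([x, y, z], axis=1)
    n = fit_plane_normal(pts)
    print(f"angular error: {angular_error_deg(n, np.array([0.0, 1.0, 0.0])):.2f} deg")
```

In the paper's setting the mask would come from the ground segmentation network and the depth from the depth network, which is what couples the two tasks during joint training.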