DSNet: Joint Semantic Learning for Object Detection in Inclement Weather Conditions

IEEE Trans Pattern Anal Mach Intell. 2021 Aug;43(8):2623-2633. doi: 10.1109/TPAMI.2020.2977911. Epub 2021 Jul 1.

Abstract

Over the past half decade, object detection approaches based on convolutional neural networks have been widely studied and successfully applied in many computer vision applications. However, detecting objects in inclement weather conditions remains a major challenge because of poor visibility. In this article, we address the object detection problem in the presence of fog by introducing a novel dual-subnet network (DSNet) that can be trained end-to-end and jointly learns three tasks: visibility enhancement, object classification, and object localization. DSNet improves overall performance by incorporating two subnetworks: a detection subnet and a restoration subnet. We employ RetinaNet as the backbone network (also referred to as the detection subnet), which is responsible for learning to classify and locate objects. The restoration subnet is designed by sharing feature extraction layers with the detection subnet and adopting a feature recovery (FR) module for visibility enhancement. Experimental results show that our DSNet achieves 50.84 percent mean average precision (mAP) on a synthetic foggy dataset that we composed and 41.91 percent mAP on a public natural foggy dataset (the Foggy Driving dataset), outperforming many state-of-the-art object detectors as well as combinations of dehazing and detection methods, while maintaining high speed.
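The joint-learning idea described above can be sketched as a single objective that trains a shared feature extractor through two heads at once. The sketch below is a minimal, illustrative assumption of that structure, not the paper's actual architecture: the linear layers, toy data, and loss weighting `lam` are all stand-ins for the real RetinaNet backbone, FR module, and training setup.

```python
import numpy as np

# Hedged sketch of DSNet-style joint learning: a shared backbone feeds
# both a detection head and a restoration head, and one joint loss
# trains everything end-to-end. All shapes/weights here are toy
# assumptions, not the paper's configuration.

rng = np.random.default_rng(0)

def shared_backbone(x, w):
    """Shared feature-extraction layers (stand-in: linear map + ReLU)."""
    return np.maximum(x @ w, 0.0)

def detection_head(feat, w):
    """Stand-in for the detection subnet's classification branch."""
    return feat @ w  # raw class scores

def restoration_head(feat, w):
    """Stand-in for the feature recovery (FR) module: reconstructs the clear input."""
    return feat @ w

# Toy batch: "foggy" inputs, matching "clear" targets, one-hot class labels.
x_foggy = rng.normal(size=(4, 16))
x_clear = rng.normal(size=(4, 16))
y_cls = np.eye(3)[rng.integers(0, 3, 4)]

w_backbone = rng.normal(size=(16, 8)) * 0.1
w_det = rng.normal(size=(8, 3)) * 0.1
w_res = rng.normal(size=(8, 16)) * 0.1

feat = shared_backbone(x_foggy, w_backbone)

# Detection loss: softmax cross-entropy over the toy class scores.
logits = detection_head(feat, w_det)
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
det_loss = -np.mean(np.sum(y_cls * np.log(probs + 1e-9), axis=1))

# Restoration loss: mean-squared error against the clear target.
res_loss = np.mean((restoration_head(feat, w_res) - x_clear) ** 2)

# Joint objective: one scalar whose gradient flows through the shared
# backbone, so restoration supervision also shapes detection features.
lam = 1.0  # assumed weighting; the paper's balance is not stated here
joint_loss = det_loss + lam * res_loss
print(joint_loss > 0.0)
```

Because both losses back-propagate through the same shared layers, the backbone is pushed toward fog-robust features, which is the intuition behind pairing the restoration subnet with the detector.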

Publication types

  • Research Support, Non-U.S. Gov't