Vehicle Detection and Attribution from a Multi-Sensor Dataset Using a Rule-Based Approach Combined with Data Fusion

Sensors (Basel). 2023 Oct 30;23(21):8811. doi: 10.3390/s23218811.

Abstract

Vehicle detection using data fusion techniques from overhead platforms (RGB/MSI imagery and LiDAR point clouds) together with vector and shape data can be a powerful tool in a variety of fields, including, but not limited to, national security, disaster relief efforts, and traffic monitoring. Knowing the location and number of vehicles in a given area can provide insight into surrounding activities and patterns of life, as well as support decision-making processes. While researchers have developed many approaches to this problem, few have combined multiple data sources with a classical, rule-based technique. In this paper, a primarily LiDAR-based method supported by RGB/MSI imagery and road network shapefiles is developed to detect stationary vehicles. The addition of imagery and road networks, when available, improves the classification of points from LiDAR data and helps to reduce false positives. Furthermore, detected vehicles can be assigned various 3D, relational, and spectral attributes, as well as height profiles. The method was evaluated on the Houston, TX dataset provided by the IEEE 2018 GRSS Data Fusion Contest, which includes 1476 ground-truth vehicles derived from LiDAR data; on this dataset, the algorithm achieved 92% precision and 92% recall. It was also evaluated on the Vaihingen, Germany dataset provided by ISPRS, as well as on data simulated using the DIRSIG image generation model. Known limitations include false positives caused by low vegetation, reduced precision for vehicles in extremely close proximity, and an inability to detect vehicles from low-density point clouds.
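The core rule-based idea, keeping only LiDAR points in a car-like height band and then screening candidate clusters by footprint size, can be sketched as follows. This is a minimal illustration: all function names and thresholds (height band, footprint limits, minimum point count) are assumptions chosen for typical passenger cars, not the paper's published parameters.

```python
# Illustrative sketch of rule-based vehicle screening on LiDAR points.
# Thresholds below are assumptions for typical passenger cars, not the
# parameters actually used in the paper.

def filter_height_band(points, z_min=0.5, z_max=2.5):
    """Keep (x, y, z) points whose height above ground is car-like."""
    return [p for p in points if z_min <= p[2] <= z_max]

def looks_like_vehicle(cluster, max_length=6.5, max_width=3.0, min_points=5):
    """Screen a point cluster by its axis-aligned footprint dimensions."""
    if len(cluster) < min_points:
        return False
    xs = [p[0] for p in cluster]
    ys = [p[1] for p in cluster]
    extent = sorted([max(xs) - min(xs), max(ys) - min(ys)])
    # Treat the longer extent as vehicle length, the shorter as width.
    return extent[1] <= max_length and extent[0] <= max_width
```

In a fuller pipeline of the kind the abstract describes, clusters surviving these geometric rules would additionally be checked against a road-network mask and imagery-derived spectral cues to suppress false positives such as low vegetation.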

Keywords: data fusion; lidar; multi-sensor; object detection; remote sensing; satellite imagery; vehicle detection.

Grants and funding

This research received no external funding.