Ratio-and-Scale-Aware YOLO for Pedestrian Detection

IEEE Trans Image Process. 2021;30:934-947. doi: 10.1109/TIP.2020.3039574. Epub 2020 Dec 8.

Abstract

Current deep learning methods seldom account for small pedestrian-to-image ratios or for large variations in the aspect ratios of input images, which leads to poor pedestrian detection performance. This study proposes ratio-and-scale-aware YOLO (RSA-YOLO) to address these problems. The method proceeds as follows. First, ratio-aware mechanisms dynamically adjust the width and height hyperparameters of the YOLOv3 input layer, resolving the problem of large aspect-ratio variation. Second, an intelligent split automatically and appropriately divides each original image into two local images, and ratio-aware YOLO (RA-YOLO) is performed iteratively on both. Because RA-YOLO produces low-resolution pedestrian detection information from the original image and high-resolution information from the local images, this study further proposes scale-aware mechanisms that fuse the multiresolution outputs, mitigating the missed detection of extremely small pedestrians. The experimental results indicate that the proposed method performs favorably on images containing extremely small objects and on images with large aspect-ratio variation. Compared with the original YOLO detectors (i.e., YOLOv2 and YOLOv3) and several state-of-the-art approaches, the proposed method demonstrated superior performance on the VOC 2012 comp4, INRIA, and ETH databases in terms of average precision, intersection over union, and log-average miss rate.
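To make the pipeline in the abstract concrete, the following Python sketch outlines the three stages: a ratio-aware input size, an intelligent split into two local images, and a scale-aware fusion of the global (low-resolution) and local (high-resolution) detections. The function names, the stride-of-32 rounding rule, the fixed-midpoint overlapping split, and the IoU-based merge are all illustrative assumptions standing in for the paper's actual mechanisms, not the authors' published implementation.

```python
# Illustrative sketch of the RSA-YOLO stages described in the abstract.
# All heuristics below (stride rounding, midpoint split, IoU merge) are
# assumptions for exposition, not the authors' published algorithm.

from typing import List, Tuple

Box = Tuple[float, float, float, float, float]  # (x1, y1, x2, y2, score)


def ratio_aware_input_size(img_w: int, img_h: int, base: int = 416,
                           stride: int = 32) -> Tuple[int, int]:
    """Pick a network input width/height that matches the image aspect ratio.

    YOLOv3 output dimensions must be divisible by its stride (32), so the
    longer side is scaled to `base` and the shorter side is rounded to the
    nearest multiple of `stride` (a common convention, assumed here).
    """
    if img_w >= img_h:
        w = base
        h = max(stride, round(base * img_h / img_w / stride) * stride)
    else:
        h = base
        w = max(stride, round(base * img_w / img_h / stride) * stride)
    return w, h


def intelligent_split(img_w: int, img_h: int,
                      overlap: float = 0.1) -> List[Tuple[int, int, int, int]]:
    """Split the image into two overlapping halves along its longer side.

    The paper's intelligent split chooses the cut automatically; a fixed
    midpoint with a small overlap is used here as a stand-in.
    """
    if img_w >= img_h:
        cut, pad = img_w // 2, int(overlap * img_w)
        return [(0, 0, cut + pad, img_h), (cut - pad, 0, img_w, img_h)]
    cut, pad = img_h // 2, int(overlap * img_h)
    return [(0, 0, img_w, cut + pad), (0, cut - pad, img_w, img_h)]


def iou(a: Box, b: Box) -> float:
    """Intersection over union of two boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def fuse_multiresolution(global_boxes: List[Box], local_boxes: List[Box],
                         iou_thr: float = 0.5) -> List[Box]:
    """Scale-aware fusion sketch: keep high-resolution (local) detections
    and add global detections not already covered by a local box.
    A plain IoU-based merge is assumed; the paper's fusion rule may differ.
    Local boxes must already be mapped back to original-image coordinates.
    """
    fused = list(local_boxes)
    for g in global_boxes:
        if all(iou(g, l) < iou_thr for l in fused):
            fused.append(g)
    return fused


if __name__ == "__main__":
    print(ratio_aware_input_size(1920, 1080))  # -> (416, 224)
    print(intelligent_split(1920, 1080))       # two overlapping halves
```

In this sketch, detections from RA-YOLO on the two local crops would be translated back into original-image coordinates before fusion; the fused list then covers both large pedestrians (from the global pass) and extremely small ones (from the high-resolution local passes).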