Video Global Motion Compensation Based on Affine Inverse Transform Model

Sensors (Basel). 2023 Sep 8;23(18):7750. doi: 10.3390/s23187750.

Abstract

Global motion greatly increases the number of false alarms in object detection for video sequences with dynamic backgrounds. Before detecting targets against a dynamic background, it is therefore necessary to estimate and compensate for the global motion to eliminate its influence. In this paper, we combine the SURF (Speeded-Up Robust Features) algorithm with the MSAC (M-estimator Sample Consensus) algorithm to process the video: the global motion parameters of a sequence under a dynamic background are estimated from feature-point matching pairs between adjacent frames. On this basis, we propose an inverse transformation model of the affine transformation, which is applied to each pair of adjacent frames in turn. The model compensates for the global motion and outputs a motion-compensated video sequence, seen from a fixed reference view, for subsequent object detection. Experimental results show that the proposed algorithm accurately compensates video sequences containing complex global motion, and the compensated sequences achieve a higher peak signal-to-noise ratio (PSNR) and better visual quality.

Keywords: affine transformation; feature point matching; global motion compensation; image processing; target detection.

Grants and funding

This work was supported in part by the National Natural Science Foundation of China (NSFC, 62376147) and the Shaanxi Province Key Research and Development Program (2021GY-087).