Efficient Transformer-Based Compressed Video Modeling via Informative Patch Selection

Sensors (Basel). 2022 Dec 26;23(1):244. doi: 10.3390/s23010244.

Abstract

Recently, Transformer-based video recognition models have achieved state-of-the-art results on major video recognition benchmarks. However, their high inference cost significantly limits both research iteration and practical deployment. Video compression standards such as MPEG-4 successfully reduce the redundancy of videos by treating small motions and residuals as less informative and assigning them short code lengths. Inspired by this idea, we propose Informative Patch Selection (IPS), which efficiently reduces the inference cost by excluding redundant patches from the input of a Transformer-based video model. The redundancy of each patch is calculated from the motion vectors and residuals obtained while decoding a compressed video. The proposed method is simple and effective: it dynamically reduces the inference cost depending on the input, without any policy model or additional loss term. Extensive experiments on action recognition demonstrate that our method significantly improves the trade-off between accuracy and inference cost for Transformer-based video models, and despite requiring no policy model or additional loss term, its performance approaches that of existing methods that do.
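The abstract does not give the exact scoring function, so the following is only a minimal sketch of the general idea: score each patch by the magnitude of its motion vector plus its residual energy (both read from the codec bitstream during decoding, so no extra model is needed), then keep only the top-scoring patch tokens before the Transformer. All function names, shapes, and the keep ratio below are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def informative_patch_scores(motion_vectors, residuals):
    """Per-patch informativeness from compressed-video signals.

    motion_vectors: (N, 2) array of (dx, dy) per patch.
    residuals: (N, H, W) array of residual blocks per patch.
    (Hypothetical shapes; real codecs store these per macroblock.)
    """
    motion_mag = np.linalg.norm(motion_vectors, axis=1)    # (N,)
    residual_energy = np.abs(residuals).mean(axis=(1, 2))  # (N,)
    # Patches with small motion and small residual are redundant;
    # a larger score means the patch is more informative.
    return motion_mag + residual_energy

def select_informative_patches(patch_tokens, scores, keep_ratio=0.5):
    """Keep only the top-k most informative patch tokens."""
    n = patch_tokens.shape[0]
    k = max(1, int(n * keep_ratio))
    keep = np.argsort(scores)[-k:]      # indices of the k highest scores
    return patch_tokens[np.sort(keep)]  # preserve spatial/temporal order

# Example: 196 patches (a 14x14 grid) with 768-dim embeddings.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(196, 768))
mv = rng.normal(size=(196, 2))
res = rng.normal(size=(196, 16, 16))
kept = select_informative_patches(tokens, informative_patch_scores(mv, res))
print(kept.shape)  # (98, 768): only half the tokens reach the Transformer
```

Because self-attention cost grows quadratically with the number of tokens, halving the tokens in this sketch would cut the attention cost by roughly four, which is why input-dependent patch dropping can improve the accuracy/cost trade-off without any learned selection policy.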

Keywords: action recognition; compressed video; transformer; video recognition.

MeSH terms

  • Benchmarking*
  • Data Compression*
  • Electric Power Supplies
  • Motion
  • Policy

Grants and funding

This research received no external funding.