Video quality assessment using motion-compensated temporal filtering and manifold feature similarity

PLoS One. 2017 Apr 26;12(4):e0175798. doi: 10.1371/journal.pone.0175798. eCollection 2017.

Abstract

A well-performing video quality assessment (VQA) method should be consistent with the human visual system to achieve high prediction accuracy. In this paper, we propose a VQA method using motion-compensated temporal filtering (MCTF) and manifold feature similarity. More specifically, a group of frames (GoF) is first decomposed by MCTF into a temporal high-pass component (HPC) and a temporal low-pass component (LPC). Manifold feature learning (MFL) and phase congruency (PC) are then used to predict the quality of the temporal LPC and the temporal HPC, respectively. The quality measures of the LPC and the HPC are combined into a GoF quality, and a temporal pooling strategy subsequently integrates the GoF qualities into an overall video quality score. The proposed VQA method appropriately processes temporal information in video through MCTF and temporal pooling, and simulates human visual perception through MFL. Experiments on publicly available video quality databases show that, compared with several state-of-the-art VQA methods, the proposed method achieves better consistency with subjective video quality and predicts video quality more accurately.
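The pipeline described in the abstract (GoF decomposition, per-component quality, fusion, temporal pooling) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the decomposition here is a plain temporal mean/residual split (no motion compensation), and `component_quality` is a generic similarity stand-in for the paper's MFL and phase-congruency measures. All function names, the weight `w`, and the mean-based pooling are assumptions for illustration.

```python
import numpy as np

def decompose_gof(frames):
    """Split a group of frames (GoF) into temporal low-pass and high-pass
    components. Simplified stand-in for MCTF: the LPC is the temporal mean
    and the HPC is the per-frame residual (no motion compensation)."""
    frames = np.asarray(frames, dtype=np.float64)
    lpc = frames.mean(axis=0)   # temporal low-pass component
    hpc = frames - lpc          # temporal high-pass residuals
    return lpc, hpc

def component_quality(ref, dist):
    """Toy similarity score in (0, 1]: stand-in for the paper's
    MFL (for LPC) and phase-congruency (for HPC) measures."""
    c = 1e-6                    # small constant for numerical stability
    num = 2.0 * np.sum(ref * dist) + c
    den = np.sum(ref ** 2) + np.sum(dist ** 2) + c
    return num / den

def gof_quality(ref_frames, dist_frames, w=0.5):
    """Combine LPC and HPC qualities into one GoF score; the weight w
    is a hypothetical fusion parameter."""
    ref_lpc, ref_hpc = decompose_gof(ref_frames)
    dist_lpc, dist_hpc = decompose_gof(dist_frames)
    q_lpc = component_quality(ref_lpc, dist_lpc)
    q_hpc = component_quality(ref_hpc, dist_hpc)
    return w * q_lpc + (1.0 - w) * q_hpc

def video_quality(gof_scores):
    """Temporal pooling: here simply the mean over GoF scores."""
    return float(np.mean(gof_scores))
```

An undistorted GoF scores 1.0 under this sketch, and added noise lowers the score, mirroring the intended monotonic relation between distortion and predicted quality.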

MeSH terms

  • Algorithms
  • Databases, Factual
  • Humans
  • Image Enhancement / methods*
  • Motion
  • Video Recording*
  • Visual Perception

Grants and funding

This study was supported by the Natural Science Foundation of China under grants 61271270, 61271021, 61311140262 and U1301257, the National High-tech R&D Program of China (2015AA015901), Zhejiang Provincial Natural Science Foundation of China (LY15F010005, Y16F010010) and the K. C. Wong Magna Fund at Ningbo University. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.