A multiresolution mixture generative adversarial network for video super-resolution

PLoS One. 2020 Jul 10;15(7):e0235352. doi: 10.1371/journal.pone.0235352. eCollection 2020.

Abstract

Generative adversarial networks (GANs) have been used to obtain super-resolution (SR) videos with improved visual perception quality and more coherent details. However, existing methods still perform poorly in areas with dense textures. To better recover densely textured areas in video frames and improve the visual perception quality and coherence of videos, this paper proposes a multiresolution mixture generative adversarial network for video super-resolution (MRMVSR). We propose a multiresolution mixture network (MRMNet) as the generative network, which can simultaneously generate feature maps at multiple resolutions. In MRMNet, the high-resolution (HR) feature maps are continuously supplemented with information extracted from the low-resolution (LR) feature maps. In addition, we propose a residual fluctuation loss function for video super-resolution. This loss reduces the overall fluctuation of the residual between SR and HR video frames, avoiding scenarios where local differences become too large. Experimental results on public benchmark datasets show that our method outperforms state-of-the-art methods on the majority of test sets.
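The abstract does not give the exact formula for the residual fluctuation loss, so the following is only a minimal sketch of one plausible reading: in addition to the mean residual magnitude between SR and HR frames, it penalizes how unevenly that residual is distributed, discouraging large local errors. The class name, the weighting term, and the variance-style fluctuation measure are all illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a "residual fluctuation" style loss (assumed form, not the
# paper's exact definition): penalize both the mean SR/HR residual and the
# spread of per-pixel residuals around that mean.
import torch
import torch.nn as nn


class ResidualFluctuationLoss(nn.Module):
    """Mean residual magnitude plus a penalty on its per-pixel variance,
    so that reconstruction errors are spread evenly rather than concentrated
    in local regions (e.g., densely textured areas)."""

    def __init__(self, fluctuation_weight: float = 0.1):
        super().__init__()
        self.fluctuation_weight = fluctuation_weight  # illustrative value

    def forward(self, sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
        # sr, hr: (batch, channels, height, width) video frames
        residual = (sr - hr).abs()
        mean_residual = residual.mean()
        # "Fluctuation": squared deviation of each pixel's residual from the
        # mean; large values indicate locally concentrated errors.
        fluctuation = (residual - mean_residual).pow(2).mean()
        return mean_residual + self.fluctuation_weight * fluctuation


if __name__ == "__main__":
    loss_fn = ResidualFluctuationLoss()
    sr = torch.rand(2, 3, 64, 64)
    hr = torch.rand(2, 3, 64, 64)
    print(loss_fn(sr, hr).item())
```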

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Humans
  • Image Processing, Computer-Assisted / methods*
  • Neural Networks, Computer
  • Video Recording / methods*
  • Video Recording / trends
  • Visual Perception / physiology*

Associated data

  • Dryad/10.5061/dryad.g79cnp5ms
  • Dryad/10.5061/dryad.qfttdz0dk
  • Dryad/10.5061/dryad.5qfttdz2d

Grants and funding

This work was supported in part by the Key Project of Shaanxi Province (No. 2018ZDCXLGY0607). No additional external funding was received for this study.