The Best of Both Worlds: A Framework for Combining Degradation Prediction with High Performance Super-Resolution Networks

Sensors (Basel). 2022 Dec 30;23(1):419. doi: 10.3390/s23010419.

Abstract

To date, the best-performing blind super-resolution (SR) techniques follow one of two paradigms: (A) train standard SR networks on synthetic low-resolution–high-resolution (LR–HR) pairs or (B) predict the degradations of an LR image and then use these to inform a customised SR network. Despite significant progress, subscribers to the former miss out on useful degradation information and followers of the latter rely on weaker SR networks, which are significantly outperformed by the latest architectural advancements. In this work, we present a framework for combining any blind SR prediction mechanism with any deep SR network. We show that a single lightweight metadata insertion block together with a degradation prediction mechanism can allow non-blind SR architectures to rival or outperform state-of-the-art dedicated blind SR networks. We implement various contrastive and iterative degradation prediction schemes and show they are readily compatible with high-performance SR networks such as RCAN and HAN within our framework. Furthermore, we demonstrate our framework's robustness by successfully performing blind SR on images degraded with blurring, noise and compression. This represents the first explicit combined blind prediction and SR of images degraded with such a complex pipeline, acting as a baseline for further advancements.
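To illustrate the core idea of the abstract, the following is a minimal NumPy sketch of how a "metadata insertion block" in the meta-attention style might condition an SR backbone's feature maps on a predicted degradation vector. All function names, weight shapes, and the two-layer MLP design here are illustrative assumptions, not the paper's exact implementation: the predicted degradation vector (e.g. blur kernel or noise parameters) is mapped to per-channel scales that modulate the backbone's features.

```python
import numpy as np

def meta_attention(features, degradation_vec, w1, w2):
    """Hypothetical metadata insertion block (meta-attention style).

    The predicted degradation vector is mapped through a small MLP to
    one sigmoid-bounded scale per feature channel, which then rescales
    the SR backbone's feature maps channel-wise.
    """
    h = np.maximum(w1 @ degradation_vec, 0.0)      # hidden layer, ReLU
    scale = 1.0 / (1.0 + np.exp(-(w2 @ h)))        # per-channel scale in (0, 1)
    return features * scale[:, None, None]          # broadcast over H, W

# Illustrative shapes: 8 feature channels, 4x4 spatial, 3 degradation dims.
rng = np.random.default_rng(0)
C, H, W, K = 8, 4, 4, 3
feats = rng.standard_normal((C, H, W))   # backbone feature maps
deg = rng.standard_normal(K)             # predicted degradation vector
w1 = rng.standard_normal((16, K))
w2 = rng.standard_normal((C, 16))

out = meta_attention(feats, deg, w1, w2)
assert out.shape == (C, H, W)
```

Because the block only rescales existing channels, it can in principle be dropped into any non-blind SR architecture (e.g. RCAN or HAN) without changing the backbone's layer shapes, which is what makes the pairing with an arbitrary degradation predictor possible.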

Keywords: blind super-resolution; contrastive learning; deep learning; degradation prediction; iterative prediction; meta-attention; metadata fusion.

MeSH terms

  • Algorithms*
  • Data Compression*