Self-attention learning network for face super-resolution

Neural Netw. 2023 Mar:160:164-174. doi: 10.1016/j.neunet.2023.01.006. Epub 2023 Jan 14.

Abstract

Existing face super-resolution methods depend on deep convolutional networks (DCN) to recover high-quality reconstructed images. They either acquire information in a single space by designing complex models for direct reconstruction, or employ additional networks to extract multiple kinds of prior information to enhance the feature representation. However, existing methods still struggle to perform well because they cannot learn complete and uniform representations. To this end, we propose a self-attention learning network (SLNet) for three-stage face super-resolution, which fully explores the interdependence of the low- and high-level spaces to compensate for the information used in reconstruction. First, SLNet uses a hierarchical feature learning framework to obtain shallow information in the low-level space. Then, the shallow information, which carries cumulative errors introduced by the DCN, is improved under high-resolution (HR) supervision, yielding an intermediate reconstruction result that serves as a strong intermediate benchmark. Finally, the improved feature representation is further enhanced in the high-level space by a multi-scale context-aware encoder-decoder for facial reconstruction. The features in both spaces are explored progressively, from coarse to fine reconstruction information. Experimental results show that SLNet achieves competitive performance compared to state-of-the-art methods.
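The three-stage pipeline described above can be sketched as a composition of modules: shallow feature learning, HR-supervised refinement that emits an intermediate result, and an encoder-decoder for the final reconstruction. The sketch below is a minimal illustration of that data flow with placeholder operations; all function names and the nearest-neighbour upsampling stubs are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def hierarchical_features(lr):
    """Stage 1 (stub): shallow feature learning in the low-level space."""
    return lr.copy()  # placeholder for stacked convolutional features

def refine_with_hr_supervision(feats, scale=4):
    """Stage 2 (stub): refine features under HR supervision and emit an
    intermediate SR image that serves as an intermediate benchmark."""
    inter_sr = np.kron(feats, np.ones((scale, scale)))  # toy upsample
    return feats, inter_sr

def context_encoder_decoder(feats, scale=4):
    """Stage 3 (stub): multi-scale context-aware encoder-decoder
    producing the final facial reconstruction."""
    return np.kron(feats, np.ones((scale, scale)))  # toy upsample

def slnet_forward(lr, scale=4):
    """Coarse-to-fine forward pass returning both outputs; in training,
    both would be compared against the HR target, e.g.
    loss = l1(inter_sr, hr) + l1(final_sr, hr)."""
    f = hierarchical_features(lr)
    f, inter_sr = refine_with_hr_supervision(f, scale)
    final_sr = context_encoder_decoder(f, scale)
    return inter_sr, final_sr

lr = np.random.rand(16, 16)          # toy 16x16 low-resolution input
inter, final = slnet_forward(lr)      # both outputs are 64x64 at scale 4
```

The key design point the sketch preserves is that the intermediate output is supervised by the same HR ground truth as the final output, so stage 2 both corrects accumulated errors and anchors the later high-level refinement.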

Keywords: Face super-resolution; Feature learning; Information compensation; Supervised learning.

MeSH terms

  • Attention
  • Benchmarking
  • Deep Learning*
  • Image Processing, Computer-Assisted
  • Learning*