Spatially Adaptive Feature Refinement for Efficient Inference

IEEE Trans Image Process. 2021;30:9345-9358. doi: 10.1109/TIP.2021.3125263. Epub 2021 Nov 12.

Abstract

Spatial redundancy commonly exists in the learned representations of convolutional neural networks (CNNs), leading to unnecessary computation on high-resolution features. In this paper, we propose a novel Spatially Adaptive feature Refinement (SAR) approach to reduce such superfluous computation. It performs efficient inference by adaptively fusing information from two branches: one conducts standard convolution on input features at a lower spatial resolution, and the other selectively refines a set of regions at the original resolution. The two branches complement each other in feature learning, and both incur much less computation than standard convolution. SAR is a flexible method that can be conveniently plugged into existing CNNs to establish models with reduced spatial redundancy. Experiments on CIFAR and ImageNet classification, COCO object detection, and PASCAL VOC semantic segmentation validate that the proposed SAR consistently improves network performance and efficiency. Notably, for 97% of the samples in the ImageNet validation set, SAR refines fewer than 40% of the regions in the feature representations of a ResNet while achieving accuracy comparable to the original model, revealing the high computational redundancy in the spatial dimension of CNNs.
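The two-branch design described above can be illustrated with a short PyTorch sketch. This is a minimal illustration under assumed names and design choices (the module name SARBlock, a 1x1 gating convolution, soft gating, and additive fusion are illustrative, not the authors' implementation): one branch applies a standard convolution at reduced resolution and upsamples the result, while a learned gate scores spatial locations and weights a full-resolution refinement branch.

```python
# Minimal sketch of a SAR-style two-branch block, assuming PyTorch.
# All names and design details here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SARBlock(nn.Module):
    """Illustrative spatially adaptive refinement block (assumed design)."""

    def __init__(self, channels: int):
        super().__init__()
        # Branch 1: standard convolution, applied at reduced resolution.
        self.conv_low = nn.Conv2d(channels, channels, 3, padding=1)
        # Branch 2: refinement convolution at the original resolution.
        self.conv_refine = nn.Conv2d(channels, channels, 3, padding=1)
        # Gate predicting a per-location refinement score (assumed 1x1 conv).
        self.gate = nn.Conv2d(channels, 1, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Low-resolution branch: downsample, convolve, upsample back.
        low = F.avg_pool2d(x, kernel_size=2)
        low = self.conv_low(low)
        low = F.interpolate(low, size=x.shape[-2:], mode="nearest")

        # Refinement branch: a learned gate weights each spatial location.
        # Here the gate is applied as a dense soft mask for simplicity; at
        # inference the scores could be thresholded to a hard mask so that
        # sparse/gather-based kernels skip computation on unselected regions,
        # which is where the claimed efficiency gains would come from.
        scores = torch.sigmoid(self.gate(x))
        refined = self.conv_refine(x) * scores

        # Fuse: coarse context everywhere, fine detail where selected.
        return low + refined


if __name__ == "__main__":
    block = SARBlock(channels=64)
    feats = torch.randn(2, 64, 56, 56)
    out = block(feats)
    print(out.shape)  # torch.Size([2, 64, 56, 56])
```

Note that this dense sketch computes both branches everywhere; realizing actual FLOP savings requires executing the refinement convolution only on the gated regions, as the abstract's sub-40% refinement statistic implies.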

MeSH terms

  • Algorithms*
  • Neural Networks, Computer*
  • Semantics