Global and Local Feature Reconstruction for Medical Image Segmentation

IEEE Trans Med Imaging. 2022 Sep;41(9):2273-2284. doi: 10.1109/TMI.2022.3162111. Epub 2022 Aug 31.

Abstract

Capturing long-range dependencies and restoring the spatial information of down-sampled feature maps are fundamental to encoder-decoder networks for medical image segmentation. U-Net based methods use feature fusion to alleviate these two problems, but the global feature extraction and spatial information recovery abilities of U-Net remain insufficient. In this paper, we propose a Global Feature Reconstruction (GFR) module to efficiently capture global context features and a Local Feature Reconstruction (LFR) module to dynamically up-sample features. For the GFR module, we first extract global features with category representation from the feature map, and then use global features from different levels to reconstruct the features at each location. The GFR module establishes a connection between each pair of feature elements across the entire space from a global perspective and transfers semantic information from the deep layers to the shallow layers. For the LFR module, we use low-level feature maps to guide the up-sampling of high-level feature maps; specifically, features are reconstructed from local neighborhoods to transfer spatial information. Based on the encoder-decoder architecture, we propose a Global and Local Feature Reconstruction Network (GLFRNet), in which the GFR modules serve as skip connections and the LFR modules constitute the decoder path. The proposed GLFRNet is applied to four different medical image segmentation tasks and achieves state-of-the-art performance.
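
The abstract's description of the two modules suggests a concrete structure: GFR pools the feature map into a small set of class-like global descriptors and redistributes them to every location, while LFR up-samples a deep feature map with per-pixel kernels predicted from the shallower, higher-resolution map. The PyTorch sketch below illustrates this reading only; the descriptor count, the softmax-attention pooling in GFRSketch, and the 3x3 kernel-prediction up-sampling in LFRSketch are assumptions made for the example, not the authors' released implementation.

    # Minimal, illustrative sketch of the GFR/LFR ideas as described in the abstract.
    # All module names and hyper-parameters here are hypothetical.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GFRSketch(nn.Module):
        """Global Feature Reconstruction (sketch).

        Pools the feature map into a few category-like global descriptors via soft
        attention, then reconstructs every spatial location as a weighted sum of
        those descriptors, so any pair of positions is linked through the shared
        global descriptors rather than by dense pairwise attention.
        """
        def __init__(self, channels: int, num_descriptors: int = 8):
            super().__init__()
            self.to_assign = nn.Conv2d(channels, num_descriptors, kernel_size=1)  # location -> descriptor weights
            self.to_query = nn.Conv2d(channels, num_descriptors, kernel_size=1)   # descriptor -> location weights
            self.proj = nn.Conv2d(channels, channels, kernel_size=1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, h, w = x.shape
            feats = x.flatten(2)                                   # (B, C, HW)
            assign = self.to_assign(x).flatten(2).softmax(dim=-1)  # (B, K, HW): pooling weights per descriptor
            globals_ = assign @ feats.transpose(1, 2)              # (B, K, C): global descriptors
            query = self.to_query(x).flatten(2).softmax(dim=1)     # (B, K, HW): descriptor weights per location
            recon = globals_.transpose(1, 2) @ query               # (B, C, HW): reconstructed features
            recon = recon.view(b, c, h, w)
            return x + self.proj(recon)                            # residual fusion with the input

    class LFRSketch(nn.Module):
        """Local Feature Reconstruction (sketch).

        Up-samples a high-level feature map using per-pixel 3x3 kernels predicted
        from the low-level (higher-resolution) feature map, so spatial detail in
        the shallow features guides how each output feature is reassembled from
        its local neighborhood in the deep features.
        """
        def __init__(self, low_channels: int, kernel_size: int = 3):
            super().__init__()
            self.kernel_size = kernel_size
            self.predict_kernels = nn.Conv2d(low_channels, kernel_size * kernel_size,
                                             kernel_size=3, padding=1)

        def forward(self, high: torch.Tensor, low: torch.Tensor) -> torch.Tensor:
            b, _, h_low, w_low = low.shape
            k = self.kernel_size
            # Per-pixel reassembly kernels predicted from the low-level features.
            kernels = self.predict_kernels(low).softmax(dim=1)              # (B, k*k, Hl, Wl)
            # Bring the high-level map to the low-level resolution, then gather k x k neighborhoods.
            high_up = F.interpolate(high, size=(h_low, w_low), mode="bilinear", align_corners=False)
            patches = F.unfold(high_up, kernel_size=k, padding=k // 2)      # (B, C*k*k, Hl*Wl)
            patches = patches.view(b, high.size(1), k * k, h_low * w_low)
            kernels = kernels.view(b, 1, k * k, h_low * w_low)
            out = (patches * kernels).sum(dim=2)                            # weighted local reconstruction
            return out.view(b, high.size(1), h_low, w_low)

    if __name__ == "__main__":
        # Shape check only: a deep feature map refined by GFR, then up-sampled by LFR
        # under the guidance of a shallower, higher-resolution feature map.
        high = torch.randn(1, 64, 16, 16)
        low = torch.randn(1, 32, 32, 32)
        high = GFRSketch(channels=64)(high)
        out = LFRSketch(low_channels=32)(high, low)
        print(out.shape)  # torch.Size([1, 64, 32, 32])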

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Image Processing, Computer-Assisted* / methods
  • Neural Networks, Computer*
  • Semantics