Global-Guided Selective Context Network for Scene Parsing

IEEE Trans Neural Netw Learn Syst. 2022 Apr;33(4):1752-1764. doi: 10.1109/TNNLS.2020.3043808. Epub 2022 Apr 4.

Abstract

Recent studies on semantic segmentation exploit contextual information to address the problems of inconsistent parsing predictions for large objects and the neglect of small objects. However, they apply multilevel contextual information uniformly across pixels, overlooking the fact that different pixels may demand different levels of context. Motivated by this intuition, we propose a novel global-guided selective context network (GSCNet) that adaptively selects contextual information to improve scene parsing. Specifically, we introduce two global-guided modules, the global-guided global module (GGM) and the global-guided local module (GLM), to select global context (GC) and local context (LC), respectively, for each pixel. Given an input feature map, GGM jointly employs the feature map and its globally pooled feature to learn a global contextual demand, based on which per-pixel GC is selected. GLM, in turn, adopts the low-level feature from the adjacent stage as LC and jointly models the input feature map, its globally pooled feature, and the LC to generate a local contextual demand, based on which per-pixel LC is selected. Furthermore, we combine these two modules into a selective context block (SCB) and insert SCBs at different levels of the network to propagate contextual information in a coarse-to-fine manner. Finally, we conduct extensive experiments to verify the effectiveness of the proposed model and achieve state-of-the-art performance on four challenging scene parsing data sets, i.e., Cityscapes, ADE20K, PASCAL Context, and COCO Stuff. In particular, GSCNet-101 obtains 82.6% on the Cityscapes test set without using coarse data and 56.22% on the ADE20K test set.
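The following is a minimal sketch of the selective-context idea described in the abstract, not the authors' implementation. It assumes a PyTorch setting; the module names, 1x1 convolution gates, sigmoid "demand" maps, and channel sizes are illustrative assumptions, chosen only to show how a per-pixel gate over global and low-level (local) context could be realized.

```python
# Hedged sketch of GGM / GLM / SCB as described in the abstract.
# All layer choices and names here are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalGuidedGlobalModule(nn.Module):
    """Selects per-pixel global context (GC) guided by a globally pooled feature."""
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                     # global average pooling
        self.gate = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        g_map = self.pool(x).expand_as(x)                       # broadcast GC to every pixel
        # Per-pixel demand for GC, learned from the input feature and its global feature.
        demand = torch.sigmoid(self.gate(torch.cat([x, g_map], dim=1)))
        return x + demand * g_map                               # selectively add GC per pixel

class GlobalGuidedLocalModule(nn.Module):
    """Selects per-pixel local context (LC) taken from the adjacent low-level stage."""
    def __init__(self, channels, low_channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.reduce = nn.Conv2d(low_channels, channels, kernel_size=1)
        self.gate = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x, low):
        low = self.reduce(low)
        low = F.interpolate(low, size=x.shape[2:], mode="bilinear", align_corners=False)
        g_map = self.pool(x).expand_as(x)
        # Per-pixel demand for LC, from the input feature, its global feature, and the LC.
        demand = torch.sigmoid(self.gate(torch.cat([x, g_map, low], dim=1)))
        return x + demand * low                                 # selectively add LC per pixel

class SelectiveContextBlock(nn.Module):
    """Combines GGM and GLM; stacking SCBs propagates context coarse-to-fine."""
    def __init__(self, channels, low_channels):
        super().__init__()
        self.ggm = GlobalGuidedGlobalModule(channels)
        self.glm = GlobalGuidedLocalModule(channels, low_channels)

    def forward(self, x, low):
        return self.glm(self.ggm(x), low)

# Example usage: fuse a high-level feature with the adjacent lower-level stage.
scb = SelectiveContextBlock(channels=256, low_channels=128)
high = torch.randn(2, 256, 32, 32)
low = torch.randn(2, 128, 64, 64)
out = scb(high, low)    # shape: (2, 256, 32, 32)
```

The design choice illustrated here is that the same globally pooled feature guides both gates, so each pixel receives only as much global or local context as its learned demand indicates, rather than a uniform mixture across the whole feature map.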

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Neural Networks, Computer*