Dynamic deformable attention network (DDANet) for COVID-19 lesions semantic segmentation

J Biomed Inform. 2021 Jul;119:103816. doi: 10.1016/j.jbi.2021.103816. Epub 2021 May 20.

Abstract

Deep learning based medical image segmentation is an important step in diagnosis and relies strongly on capturing sufficient spatial context without requiring overly complex models that are hard to train with limited labelled data. Training data is particularly scarce for segmenting infection regions in CT images of COVID-19 patients. Attention models help gather contextual information within deep networks and benefit semantic segmentation tasks. The recent criss-cross attention module approximates global self-attention while remaining memory- and time-efficient by separating horizontal and vertical self-similarity computations. However, capturing attention from all non-local locations can adversely impact the accuracy of semantic segmentation networks. We propose a new Dynamic Deformable Attention Network (DDANet) that enables more accurate contextual information computation in a similarly efficient way. Our novel technique is based on a deformable criss-cross attention block that learns both attention coefficients and attention offsets in a continuous way. A deep U-Net (Schlemper et al., 2019) segmentation network that employs this attention mechanism captures attention from pertinent non-local locations and outperforms criss-cross attention within a U-Net on a challenging COVID-19 lesion segmentation task. Our validation experiments show that the performance gain of the recursively applied dynamic deformable attention blocks comes from their ability to capture dynamic and precise attention context. Our DDANet achieves Dice scores of 73.4% and 61.3% for ground-glass opacity and consolidation lesions in COVID-19 segmentation, improving accuracy by 4.9 percentage points over a baseline U-Net and by 24.4 percentage points over current state-of-the-art methods (Fan et al., 2020).
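The central idea described above, a criss-cross attention whose row and column sampling locations are shifted by learned, continuous offsets and read out with differentiable bilinear interpolation, can be sketched roughly as follows. This is a minimal PyTorch illustration, not the authors' implementation: the module name, the choice of K = H + W sample points per query, the zero-initialised offset convolution, and the learnable residual scale are assumptions made for clarity.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DeformableCrissCrossAttention(nn.Module):
    # Sketch of a deformable criss-cross attention block. For every query
    # position the block attends to K = H + W locations along its row and
    # column, but each location is shifted by a learned, continuous 2D offset
    # and sampled with bilinear interpolation (F.grid_sample), so the offsets
    # receive gradients. Names and hyper-parameters are illustrative only.

    def __init__(self, in_channels: int, height: int, width: int, reduction: int = 8):
        super().__init__()
        inter = max(in_channels // reduction, 1)
        self.h, self.w = height, width
        self.k_pts = height + width                       # samples per query position
        self.query = nn.Conv2d(in_channels, inter, kernel_size=1)
        self.key = nn.Conv2d(in_channels, inter, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        # 2 offset values (dy, dx) per criss-cross sample point.
        self.offset = nn.Conv2d(in_channels, 2 * self.k_pts, kernel_size=3, padding=1)
        nn.init.zeros_(self.offset.weight)                # start as plain criss-cross sampling
        nn.init.zeros_(self.offset.bias)
        self.gamma = nn.Parameter(torch.zeros(1))         # learnable residual scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        assert (h, w) == (self.h, self.w)
        q, k, v = self.query(x), self.key(x), self.value(x)

        # Base criss-cross locations for every query (i, j): the whole column j
        # (H points) followed by the whole row i (W points); last dim is (y, x).
        ys = torch.arange(h, device=x.device, dtype=x.dtype)
        xs = torch.arange(w, device=x.device, dtype=x.dtype)
        ii, jj = torch.meshgrid(ys, xs, indexing="ij")
        col = torch.stack([ys.view(1, 1, h).expand(h, w, h),
                           jj.unsqueeze(-1).expand(h, w, h)], dim=-1)
        row = torch.stack([ii.unsqueeze(-1).expand(h, w, w),
                           xs.view(1, 1, w).expand(h, w, w)], dim=-1)
        base = torch.cat([col, row], dim=2).view(1, h * w, self.k_pts, 2)

        # Learned continuous offsets, added to the base sampling locations.
        off = self.offset(x).view(b, self.k_pts, 2, h, w)
        off = off.permute(0, 3, 4, 1, 2).reshape(b, h * w, self.k_pts, 2)
        pos = base + off                                  # (B, H*W, K, 2)

        # Normalise to [-1, 1] and reorder to (x, y) as grid_sample expects.
        grid = torch.stack([2.0 * pos[..., 1] / max(w - 1, 1) - 1.0,
                            2.0 * pos[..., 0] / max(h - 1, 1) - 1.0], dim=-1)

        # Differentiable (bilinear) sampling of keys and values at the offsets.
        k_s = F.grid_sample(k, grid, mode="bilinear", align_corners=True)  # (B, C', H*W, K)
        v_s = F.grid_sample(v, grid, mode="bilinear", align_corners=True)  # (B, C,  H*W, K)

        # Attention coefficients over the K deformable criss-cross samples.
        q_flat = q.flatten(2).permute(0, 2, 1).unsqueeze(2)                # (B, H*W, 1, C')
        attn = torch.softmax(q_flat @ k_s.permute(0, 2, 1, 3), dim=-1)     # (B, H*W, 1, K)
        out = (attn @ v_s.permute(0, 2, 3, 1)).squeeze(2)                  # (B, H*W, C)
        return self.gamma * out.permute(0, 2, 1).reshape(b, c, h, w) + x

In use, such a block could be inserted into a U-Net style segmentation network and, as the abstract's recursively applied attention blocks suggest, passed over the features more than once in sequence (e.g. feat = attn(attn(feat))) so that context propagates beyond a single criss-cross pass. Zero-initialising the offset convolution lets the block behave like plain criss-cross attention at the start of training and learn deformations gradually.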

Keywords: Attention mechanism; CCNet; COVID-19; Computed Tomography (CT); Consolidation; Criss-cross attention; Deformable attention; Differentiable attention sampling; Ground-glass opacity; Infection; RT-PCR; Segmentation; Semantic segmentation; U-Net.

MeSH terms

  • COVID-19*
  • Humans
  • Image Processing, Computer-Assisted
  • Neural Networks, Computer
  • SARS-CoV-2
  • Semantics
  • Tomography, X-Ray Computed