Transformer with difference convolutional network for lightweight universal boundary detection

PLoS One. 2024 Apr 16;19(4):e0302275. doi: 10.1371/journal.pone.0302275. eCollection 2024.

Abstract

Although deep-learning methods can achieve human-level performance in boundary detection, their improvements mostly rely on larger models and specific datasets, leading to substantial computational cost. Because boundary detection is a fundamental low-level vision task, a single model with fewer parameters that performs cross-dataset boundary detection merits further investigation. In this study, a lightweight universal boundary detection method was developed based on convolution and a transformer. The network is called a "transformer with difference convolutional network" (TDCN), reflecting the introduction of a difference convolutional network rather than a pure transformer. The TDCN structure consists of three parts: convolution, transformer, and head functions. First, a convolutional network fused with edge operators extracts multiscale difference features. These pixel-difference features are then fed to a hierarchical transformer as tokens. Considering the intrinsic characteristics of the boundary detection task, a new boundary-aware self-attention structure was designed in the transformer to provide an inductive bias. Combined with the proposed attention loss function, this structure uses the boundary direction as strong supervisory information to improve the model's detection ability. Finally, several head functions with multiscale feature inputs were trained using a bidirectional additive strategy. In the experiments, the proposed method achieved competitive performance on multiple public datasets with fewer model parameters, and a single model performed universal prediction across different datasets without retraining, demonstrating the effectiveness of the method. The code is available at https://github.com/neulmc/TDCN.
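To make the described pipeline concrete, the following is a minimal, hypothetical PyTorch sketch of the flow outlined above (pixel-difference convolution, transformer tokens, multiscale heads fused additively). All module names, dimensions, and the specific difference operator are assumptions for illustration only; the boundary-aware self-attention and attention loss are omitted, and the authors' actual implementation is available at the GitHub link above.

```python
# Hypothetical sketch of a difference-convolution + transformer boundary detector.
# Not the authors' implementation (see https://github.com/neulmc/TDCN).
import torch
import torch.nn as nn


class CentralDifferenceConv(nn.Module):
    """Convolution acting on pixel differences: subtracts a center-weighted
    response from the vanilla convolution, mimicking an edge-operator prior."""

    def __init__(self, in_ch, out_ch, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.theta = theta  # blend between vanilla and difference responses

    def forward(self, x):
        out = self.conv(x)
        # A 1x1 response built from the summed 3x3 kernel approximates the
        # "central pixel" term of a difference convolution.
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        center = nn.functional.conv2d(x, kernel_sum)
        return out - self.theta * center


class StageBlock(nn.Module):
    """One scale: difference convolution producing tokens for a transformer encoder."""

    def __init__(self, in_ch, dim, heads=4):
        super().__init__()
        self.diff = CentralDifferenceConv(in_ch, dim)
        self.down = nn.MaxPool2d(2)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.head = nn.Conv2d(dim, 1, kernel_size=1)  # per-scale boundary head

    def forward(self, x):
        feat = self.down(self.diff(x))
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)  # (B, H*W, C) token sequence
        feat = self.encoder(tokens).transpose(1, 2).reshape(b, c, h, w)
        return feat, torch.sigmoid(self.head(feat))


class TDCNSketch(nn.Module):
    """Three-stage hierarchy whose side outputs are fused additively."""

    def __init__(self, dims=(32, 64, 128)):
        super().__init__()
        chans = (3,) + dims[:-1]
        self.stages = nn.ModuleList(StageBlock(i, d) for i, d in zip(chans, dims))

    def forward(self, x):
        side_maps = []
        for stage in self.stages:
            x, side = stage(x)
            side_maps.append(side)
        # Fuse multiscale boundary maps at the largest side-output resolution.
        size = side_maps[0].shape[-2:]
        fused = sum(
            nn.functional.interpolate(s, size=size, mode="bilinear", align_corners=False)
            for s in side_maps
        )
        return fused / len(side_maps), side_maps


if __name__ == "__main__":
    model = TDCNSketch()
    fused, sides = model(torch.randn(1, 3, 64, 64))
    print(fused.shape, [s.shape for s in sides])
```

The sketch only illustrates the general structure (difference features as transformer tokens, per-scale heads, additive fusion); the paper's boundary-aware attention, attention loss, and bidirectional additive training strategy are specific to the authors' released code.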

MeSH terms

  • Awareness*
  • Electric Power Supplies
  • Humans
  • Information Management
  • Menopause
  • Vision, Low*

Grants and funding

This work was funded by the Liaoning Provincial Science and Technology Plan Project of China under Grant 2023JH1/10400099. https://kjt.ln.gov.cn/. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.