BC-DUnet-based segmentation of fine cracks in bridges under a complex background

PLoS One. 2022 Mar 15;17(3):e0265258. doi: 10.1371/journal.pone.0265258. eCollection 2022.

Abstract

Cracks are the external expression of potential safety risks in bridge construction. Automatic detection and segmentation of bridge cracks is currently a top priority for civil engineers. With the development of image segmentation techniques based on convolutional neural networks, new opportunities have emerged in bridge crack detection. Traditional bridge crack detection methods are vulnerable to complex backgrounds and small cracks, which makes effective segmentation difficult. This study presents a bridge crack segmentation method based on a densely connected U-Net network (BC-DUnet) with a background elimination module and a cross-attention mechanism. First, a densely connected feature extraction model (DCFEM) integrating the advantages of DenseNet is proposed, which can effectively enhance the main feature information of small cracks. Second, a background elimination module (BEM) is proposed, which filters out redundant information by assigning different weights so as to retain the main feature information of the crack. Third, a cross-attention mechanism (CAM) is proposed to enhance the capture of long-range dependencies and further improve the pixel-level representation of the model. In comparative experiments, BC-DUnet achieved a Pixel Accuracy of 98.18%, and its IOU exceeded those of traditional networks such as FCN and Unet by 14.12% and 4.04%, respectively. Compared with non-traditional networks such as HU-ResNet and FUN-4s, BC-DUnet has better accuracy and generalization and is less prone to overfitting. The BC-DUnet network proposed here can eliminate the influence of complex backgrounds on the segmentation accuracy of bridge cracks, improve the detection efficiency of bridge cracks, reduce the detection cost, and has practical application value.
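The abstract does not give the exact formulation of the cross-attention mechanism (CAM). As a rough illustration only, the sketch below shows generic scaled dot-product cross-attention, in which features from one branch (queries) attend over features from another branch (context) to capture long-range dependencies. All function names, shapes, and the two-branch setup are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, context):
    """Generic scaled dot-product cross-attention (illustrative sketch).

    queries: (n_q, d) features from one branch (e.g. decoder crack features)
    context: (n_c, d) features from another branch (e.g. encoder features)
    Returns (output, weights): each query position is re-expressed as a
    weighted sum over all context positions, so distant pixels can
    contribute to every output location (a long-range dependency).
    """
    d = queries.shape[-1]
    scores = queries @ context.T / np.sqrt(d)   # (n_q, n_c) similarities
    weights = softmax(scores, axis=-1)          # rows sum to 1
    return weights @ context, weights

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))   # 4 query positions, 8 channels
c = rng.normal(size=(6, 8))   # 6 context positions, 8 channels
out, w = cross_attention(q, c)
```

In a segmentation network the two inputs would be flattened feature maps; learned projection matrices for queries, keys, and values are omitted here for brevity.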

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Data Collection
  • Image Processing, Computer-Assisted* / methods
  • Neural Networks, Computer*

Grants and funding

This study was supported by the Changsha Municipal Science Foundation (Grant No. kq2014160); in part by the National Natural Science Foundation of China (Grant No. 61703441); in part by the Natural Science Foundation of Hunan Province (Grant No. 2020JJ4948); and in part by the key projects of the Hunan Provincial Department of Education (Grant No. 19A511). The funders had no role in study design, data collection and analysis, or the decision to prepare or publish the manuscript.