DG-GAN: A High Quality Defect Image Generation Method for Defect Detection

Sensors (Basel). 2023 Jun 26;23(13):5922. doi: 10.3390/s23135922.

Abstract

Surface defect detection of industrial products has become a crucial link in industrial manufacturing. It has a chain of effects on product quality control, the safety of subsequent product use, product reputation, and production efficiency. In actual production, however, it is often difficult to collect defect image samples, and without a sufficient number of such samples it is difficult to train defect detection models effectively. In this paper, a defect image generation method, DG-GAN, is proposed for defect detection. Building on the idea of progressive generative adversarial networks, a D2 adversarial loss function, a cycle-consistency loss function, a data augmentation module, and a self-attention mechanism are introduced to improve the training stability and generative ability of the network. The DG-GAN method can generate surface defect images of high quality and high diversity. The surface defect images generated by the model can be used to train a defect detection model, improving its convergence stability and detection accuracy. Validation was performed on two data sets. Compared to previous methods, the FID score of the generated defect images was significantly reduced (mean reductions of 16.17 and 20.06, respectively). YOLOX detection accuracy improved significantly as the number of generated defect images increased (with maximum gains of 6.1% and 20.4%, respectively). Experimental results showed that the DG-GAN model is effective in surface defect detection tasks.
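The cycle-consistency loss mentioned above penalizes the discrepancy between an input image and its reconstruction after a round trip through both generators. The sketch below illustrates the standard L1 form of this loss with toy linear stand-ins for the generators; the functions `G` and `F` here are hypothetical placeholders, not the actual DG-GAN networks.

```python
import numpy as np

# Toy stand-ins for the two generators in a cycle-consistent GAN:
# G maps clean -> defect images, F maps defect -> clean.
# These linear maps are hypothetical placeholders for illustration only.
def G(x):
    return 0.9 * x + 0.1

def F(y):
    return (y - 0.1) / 0.9

def cycle_consistency_loss(x, G, F):
    """L1 cycle loss: the round trip F(G(x)) should reconstruct x."""
    return np.mean(np.abs(F(G(x)) - x))

x = np.random.rand(4, 64, 64)       # a batch of toy grayscale images
loss = cycle_consistency_loss(x, G, F)
```

Because the toy `F` exactly inverts the toy `G`, the loss here is near zero; in training, this term is minimized jointly with the adversarial losses so the generators learn mappings that preserve image content.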

Keywords: deep learning; defect detection; defect image generation; generative adversarial networks.

MeSH terms

  • Commerce*
  • Image Processing, Computer-Assisted
  • Industry*

Grants and funding

This research was funded in part by the National Natural Science Foundation of China under Grant 61801319, in part by the Innovation Fund of Engineering Research Center of the Ministry of Education of China, Digital Learning Technology Integration, and Application (No. 1221009), in part by the Sichuan Science and Technology Program under Grant 2020JDJQ0061 and Grant 2021YFG0099, in part by the Sichuan University of Science and Engineering Talent Introduction Project under Grant 2020RC33, in part by the Innovation Fund of Chinese Universities under Grant 2020HYA04001, in part by the Artificial Intelligence Key Laboratory of Sichuan Province Project under Grant 2021RZJ03, and in part by the 2021 Graduate Innovation Fund of Sichuan University of Science and Engineering under Grant y2022131.