PKDN: Prior Knowledge Distillation Network for bronchoscopy diagnosis

Comput Biol Med. 2023 Sep 18:166:107486. doi: 10.1016/j.compbiomed.2023.107486. Online ahead of print.

Abstract

Bronchoscopy plays a crucial role in diagnosing and treating lung diseases. A deep learning-based diagnostic system for bronchoscopic images can help physicians diagnose lung diseases accurately and efficiently, so that patients undergo timely pathological examination and receive appropriate treatment. However, existing diagnostic methods fail to exploit the prior knowledge embedded in medical images, and their limited feature extraction capability prevents precise focus on lesion regions, degrading overall diagnostic performance. To address these challenges, this paper proposes a prior knowledge distillation network (PKDN) for identifying lung diseases from bronchoscopic images. The proposed method extracts color and edge features from lesion images with a prior knowledge guidance module, then enhances spatial and channel features with a dynamic spatial attention module and a gated channel attention module, respectively. Finally, the extracted features are refined and self-regulated through feature distillation. In addition, decoupled distillation is employed to balance the contributions of target-class and non-target-class distillation, further improving the network's diagnostic performance. The effectiveness of the proposed method is validated on a bronchoscopic dataset provided by Harbin Medical University Cancer Hospital, consisting of 2,029 bronchoscopic images from 200 patients. Experimental results show that the proposed method achieves an accuracy of 94.78% and an AUC of 98.17%, significantly outperforming competing methods. These results indicate that a computer-aided diagnostic system based on PKDN provides satisfactory accuracy for diagnosing lung diseases during bronchoscopy.
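The "decoupled distillation" mentioned above appears to follow the standard decoupled knowledge distillation (DKD) formulation of Zhao et al. (CVPR 2022), which splits the vanilla KD loss into a target-class term (TCKD) and a non-target-class term (NCKD) with independent weights. The abstract does not give PKDN's exact loss, so the sketch below is a minimal PyTorch rendering of that standard recipe; the function name `decoupled_kd_loss` and the hyperparameters `alpha`, `beta`, and `temperature` are placeholders, not values from the paper.

```python
import torch
import torch.nn.functional as F

def decoupled_kd_loss(student_logits, teacher_logits, target,
                      alpha=1.0, beta=8.0, temperature=4.0):
    """Decoupled KD sketch (after Zhao et al., CVPR 2022): the KD loss is
    split into a target-class term (TCKD) and a non-target-class term
    (NCKD), weighted independently by alpha and beta. Hyperparameters are
    placeholders, not PKDN's published settings."""
    t = temperature
    gt_mask = F.one_hot(target, student_logits.size(-1)).to(torch.bool)

    p_s = F.softmax(student_logits / t, dim=-1)
    p_t = F.softmax(teacher_logits / t, dim=-1)

    # TCKD: KL divergence between binary (target vs. rest) distributions.
    b_s = torch.stack([(p_s * gt_mask).sum(-1), (p_s * ~gt_mask).sum(-1)], dim=-1)
    b_t = torch.stack([(p_t * gt_mask).sum(-1), (p_t * ~gt_mask).sum(-1)], dim=-1)
    tckd = F.kl_div(b_s.log(), b_t, reduction="batchmean") * t * t

    # NCKD: KL divergence over the non-target classes only; a large
    # negative offset removes the target logit from the softmax.
    offset = -1000.0 * gt_mask
    log_q_s = F.log_softmax(student_logits / t + offset, dim=-1)
    q_t = F.softmax(teacher_logits / t + offset, dim=-1)
    nckd = F.kl_div(log_q_s, q_t, reduction="batchmean") * t * t

    return alpha * tckd + beta * nckd

# Hypothetical usage: a batch of 8 samples over 4 disease classes.
s_logits, t_logits = torch.randn(8, 4), torch.randn(8, 4)
labels = torch.randint(0, 4, (8,))
print(decoupled_kd_loss(s_logits, t_logits, labels))
```

Raising `beta` relative to `alpha` emphasizes the non-target-class distribution, which is the balancing of target and non-target distillation that the abstract credits with improving diagnostic performance.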

Keywords: Attention mechanism; Bronchoscopy; Convolutional neural network; Knowledge distillation; Prior knowledge.