Classification of precancerous lesions based on fusion of multiple hierarchical features

Comput Methods Programs Biomed. 2023 Feb:229:107301. doi: 10.1016/j.cmpb.2022.107301. Epub 2022 Dec 6.

Abstract

Purpose: To investigate a method for identifying gastric precancerous lesions based on the fusion of superficial features and deep features of gastroscopic images. The aim of this study is to make the most of both superficial and deep features to provide clinicians with clinical decision support for the diagnosis of gastric precancerous diseases and to reduce doctors' workload.

Methods: Based on the characteristics of gastroscopic images, 75-dimensional shallow features were manually designed, including histogram features, texture features, and higher-order features of the image. Then, in convolutional neural networks such as ResNet and GoogLeNet, a fully connected layer was added before the output layer, and its activations were taken as the deep features of the image. To keep the feature weights consistent, the number of neurons in this fully connected layer was set to 75. The shallow and deep features were then concatenated, and a machine learning classifier was used to identify three types of gastric precancerous diseases: gastric polyps, gastric ulcers, and gastric erosions.
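
As an illustration of the fusion scheme described above, the following Python sketch (not the authors' code; the exact 75 hand-crafted features, the backbone choice, and the training details are assumptions) computes an illustrative 75-dimensional shallow descriptor, takes a 75-dimensional deep feature from a ResNet-18 with an added fully connected layer, and concatenates them into a 150-dimensional vector for a classical classifier.

import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

def shallow_features(img_gray: np.ndarray) -> np.ndarray:
    """Illustrative 75-D shallow descriptor (histogram plus simple texture and
    higher-order statistics); the paper's exact 75 features are not specified."""
    hist = np.histogram(img_gray, bins=64, range=(0, 255), density=True)[0]
    gy, gx = np.gradient(img_gray.astype(float))
    grad_mag = np.hypot(gx, gy)
    stats = np.array([
        img_gray.mean(), img_gray.std(),
        ((img_gray - img_gray.mean()) ** 3).mean(),   # skewness-like moment
        ((img_gray - img_gray.mean()) ** 4).mean(),   # kurtosis-like moment
        grad_mag.mean(), grad_mag.std(),
        np.percentile(img_gray, 25), np.percentile(img_gray, 50),
        np.percentile(img_gray, 75), img_gray.max(), img_gray.min(),
    ])
    return np.concatenate([hist, stats])  # 64 + 11 = 75 dimensions

class DeepFeatureNet(nn.Module):
    """ResNet-18 backbone with an extra 75-unit fully connected layer before
    the output, mirroring the idea of a 75-D deep feature."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        backbone = models.resnet18(weights=None)
        in_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.backbone = backbone
        self.feat = nn.Linear(in_dim, 75)   # 75-D deep feature layer
        self.out = nn.Linear(75, n_classes)

    def forward(self, x):
        f = torch.relu(self.feat(self.backbone(x)))
        return self.out(f), f               # class logits and 75-D deep feature

def fuse(shallow: np.ndarray, deep: np.ndarray) -> np.ndarray:
    """Concatenate 75-D shallow and 75-D deep features into a 150-D vector."""
    return np.concatenate([shallow, deep])

# The fused vectors would then be fed to a classical classifier, e.g.:
# svm = SVC(kernel="rbf").fit(X_train_fused, y_train)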

Results: A dataset of 420 images was collected for each disease and divided into a training set and a test set at a ratio of 5:1. Three approaches, traditional machine learning, deep learning, and feature fusion, were then trained and tested on this dataset. For the traditional machine learning and feature fusion approaches, SVM, RF, and BP neural network classifiers were used; for deep learning, GoogLeNet, ResNet, and ResNeXt were implemented. On the test set, the proposed feature fusion method achieved recognition accuracies of 85.18% (SVM), 83.42% (RF), and 85.18% (BPNN), outperforming both the traditional machine learning methods (SVM: 80.17%; RF: 82.37%; BPNN: 84.12%) and the deep learning methods (GoogLeNet: 82.54%; ResNet-18: 81.67%; ResNet-50: 81.67%; ResNeXt-50: 82.11%), demonstrating a clear advantage.
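
The evaluation protocol can be sketched as follows (assumed details: a stratified split and scikit-learn classifiers standing in for the paper's SVM, RF, and BP neural network; the placeholder arrays only reproduce the stated data shape, not the actual images or reported accuracies).

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier  # stand-in for the BP neural network
from sklearn.metrics import accuracy_score

# X_fused: (n_samples, 150) fused features; y: labels for polyp / ulcer / erosion.
X_fused = np.random.rand(1260, 150)          # placeholder: 3 classes x 420 images
y = np.repeat([0, 1, 2], 420)

X_tr, X_te, y_tr, y_te = train_test_split(
    X_fused, y, test_size=1/6, stratify=y, random_state=0)  # ~5:1 split

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(n_estimators=200),
    "BPNN": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))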

Conclusion: This study provides a new strategy for identifying gastric precancerous lesions that improves the efficiency and accuracy of their identification, and is expected to provide substantial practical help in the diagnosis of gastric precancerous diseases.

Keywords: Feature fusion; Gastric cancer; Image recognition; Machine learning; Precancerous diseases.

MeSH terms

  • Humans
  • Machine Learning
  • Neural Networks, Computer
  • Precancerous Conditions* / diagnostic imaging
  • Stomach Neoplasms* / diagnostic imaging