MTR-SDL: a soft computing based multi-tier rank model for shoulder X-ray classification

Soft comput. 2023 Jun 6:1-21. doi: 10.1007/s00500-023-08562-6. Online ahead of print.

Abstract

The effectiveness of deep neural networks (DNNs) is contingent upon access to quality-labelled training datasets, since label errors (label noise) in the training data may significantly impair the accuracy of the resulting models on clean test data. A primary impediment to developing and deploying DNN models in the healthcare sector is the lack of sufficient labelled data, and labelling data by a domain expert is a costly and time-consuming task. To overcome this limitation, the proposed Multi-Tier Rank-based Semi-supervised Deep Learning (MTR-SDL) model for shoulder X-ray classification uses a small labelled dataset to generate labels for an unlabelled dataset, obtaining performance equivalent to approaches trained on large labelled datasets. The motivation behind the proposed MTR-SDL approach is analogous to how physicians deal with unknown or ambiguous cases in everyday practice. Practitioners handle such uncertain cases with the support of professional colleagues: before initiating treatment, some patients consult a range of skilled doctors and are treated according to the most widely agreed-upon professional diagnosis (vote count). In this article, we propose a new ensemble learning technique called "Rank-based Ensemble Selection with machine learning models" (MTR-SDL). In this technique, multiple machine learning models are trained on a labelled dataset and ranked by their accuracy. A dynamic ensemble voting approach is then used to tag samples with each base model in the ensemble, and the combination of these tags generates a final label for each sample in the unlabelled dataset. The proposed MTR-SDL model attains an accuracy, specificity, sensitivity, precision, Matthews correlation coefficient, false discovery rate, false positive rate, F1 score, negative predictive value, and false negative rate of 92.776%, 97.376%, 86.932%, 96.192%, 85.644%, 3.808%, 2.624%, 91.072%, 90.85%, and 13.068%, respectively, on the unseen dataset. This approach has the potential to improve the performance of ensemble models by leveraging the strengths of multiple base models and selecting the most informative samples for each model. This study yields an improved semi-supervised deep learning model that is more effective and precise.

Keywords: Co-teacher; Deep learning; MentorNet; Self-assessment; Student–teacher; X-ray.
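
The following is a minimal, illustrative sketch of the rank-based ensemble pseudo-labelling idea described in the abstract. It assumes generic scikit-learn classifiers as base models and a simple rank-weighted majority vote; the paper's exact multi-tier ranking, voting rules, and base learners are not specified in the abstract, so all model choices and the weighting scheme below are assumptions, not the authors' implementation.

# Sketch of rank-based ensemble pseudo-labelling (illustrative only).
# Assumes scikit-learn-style classifiers and rank-weighted majority voting;
# the paper's exact multi-tier ranking and voting rules are not given in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def rank_based_pseudo_label(X_labelled, y_labelled, X_unlabelled, base_models):
    """Train each base model, rank it by held-out accuracy, and label the
    unlabelled pool by rank-weighted voting (an assumed voting scheme)."""
    X_tr, X_val, y_tr, y_val = train_test_split(
        X_labelled, y_labelled, test_size=0.2, stratify=y_labelled, random_state=0)

    # Fit every base model on the small labelled set and score it on validation data.
    scores = []
    for model in base_models:
        model.fit(X_tr, y_tr)
        scores.append(accuracy_score(y_val, model.predict(X_val)))

    # Rank models by accuracy: the best model gets the largest voting weight.
    order = np.argsort(scores)                      # indices from worst to best
    weights = np.empty(len(base_models))
    weights[order] = np.arange(1, len(base_models) + 1)

    # Rank-weighted vote over the unlabelled pool to produce pseudo-labels.
    classes = np.unique(y_labelled)
    votes = np.zeros((len(X_unlabelled), len(classes)))
    for w, model in zip(weights, base_models):
        preds = model.predict(X_unlabelled)
        for ci, c in enumerate(classes):
            votes[:, ci] += w * (preds == c)
    return classes[votes.argmax(axis=1)]

# Example usage with three hypothetical base learners (placeholders, not the paper's models):
base_models = [LogisticRegression(max_iter=1000), RandomForestClassifier(), SVC()]
# pseudo_labels = rank_based_pseudo_label(X_small, y_small, X_pool, base_models)
# The pseudo-labelled pool can then be merged with the labelled data to train the final classifier.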