Deep Learning for FAST Quality Assessment

J Ultrasound Med. 2023 Jan;42(1):71-79. doi: 10.1002/jum.16045. Epub 2022 Jun 30.

Abstract

Objectives: To determine the feasibility of using a deep learning (DL) algorithm to assess the quality of focused assessment with sonography in trauma (FAST) exams.

Methods: Our dataset consists of 441 FAST exams, comprising 3161 videos, with each exam classified as good-quality or poor-quality. We first used convolutional neural networks (CNNs) pretrained on the ImageNet dataset and fine-tuned on the FAST dataset. Second, we trained a CNN autoencoder to compress FAST images with a 20:1 compression ratio; the compressed codes were input to a two-layer classifier network. To train the networks, each video was labeled with the quality of its exam, and each frame was labeled with the quality of its video. For inference, a video was classified as poor-quality if half of its frames were classified as poor-quality by the network, and an exam was classified as poor-quality if half of its videos were classified as poor-quality.
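The sketch below illustrates one way such a pipeline could be implemented in PyTorch. It is not the authors' code: the image resolution, layer sizes, code dimensions, and decision thresholds are assumptions chosen to approximate the roughly 20:1 compression and the half-of-frames/half-of-videos roll-up described above.

    # Illustrative sketch (not the authors' implementation): a convolutional
    # autoencoder that compresses ultrasound frames, a two-layer classifier on
    # the compressed codes, and the frame -> video -> exam roll-up rule.
    import torch
    import torch.nn as nn

    class ConvAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            # Encoder: 1x128x128 grayscale frame -> 3x16x16 code
            # (16384 -> 768 values, roughly the 20:1 ratio; sizes are assumed)
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 16x64x64
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32x32
                nn.Conv2d(32, 3, 3, stride=2, padding=1), nn.ReLU(),   # 3x16x16
            )
            # Decoder mirrors the encoder for reconstruction during training
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(3, 32, 2, stride=2), nn.ReLU(),     # 32x32x32
                nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 16x64x64
                nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 1x128x128
            )

        def forward(self, x):
            code = self.encoder(x)
            return self.decoder(code), code

    class QualityClassifier(nn.Module):
        # Two-layer classifier operating on flattened autoencoder codes
        def __init__(self, code_dim=3 * 16 * 16, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Flatten(),
                nn.Linear(code_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 2),  # good-quality vs. poor-quality
            )

        def forward(self, code):
            return self.net(code)

    def aggregate(poor_flags, threshold=0.5):
        # Poor-quality if at least half of the items are poor-quality
        return sum(poor_flags) / max(len(poor_flags), 1) >= threshold

    def classify_exam(exam_videos, autoencoder, classifier):
        # exam_videos: list of tensors, each of shape (num_frames, 1, 128, 128)
        video_poor = []
        for frames in exam_videos:
            with torch.no_grad():
                _, codes = autoencoder(frames)
                frame_poor = classifier(codes).argmax(dim=1) == 1  # class 1 = poor
            video_poor.append(aggregate(frame_poor.tolist()))
        return aggregate(video_poor)  # True -> exam classified as poor-quality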

Results: The encoder-classifier networks performed much better than the transfer learning approach with pretrained CNNs, primarily because the ImageNet dataset is not a good match for the ultrasound quality assessment problem. The DL models produced video sensitivities and specificities of 99% and 98%, respectively, on held-out test sets.
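For reference, video-level sensitivity and specificity follow the standard definitions, sensitivity = TP/(TP+FN) and specificity = TN/(TN+FP). The snippet below is a generic illustration of that calculation, not code from the study; treating the poor-quality class as the positive class is an assumption.

    def sensitivity_specificity(y_true, y_pred, positive=1):
        # positive=1 denotes the poor-quality class (assumed for illustration)
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
        return tp / (tp + fn), tn / (tn + fp)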

Conclusions: Using an autoencoder to compress FAST images is a very effective way to obtain features for predicting exam quality. These features are better suited to the task than those obtained from CNNs pretrained on ImageNet.

Keywords: FAST; autoencoder; convolutional neural network; deep learning; ultrasound.

MeSH terms

  • Deep Learning*
  • Focused Assessment with Sonography for Trauma*
  • Humans
  • Neural Networks, Computer
  • Sensitivity and Specificity