Artificial Intelligence in Spinal Imaging: Current Status and Future Directions

Int J Environ Res Public Health. 2022 Sep 16;19(18):11708. doi: 10.3390/ijerph191811708.

Abstract

Spinal disorders are among the most common causes of pain and disability worldwide, and imaging is an essential diagnostic procedure in spinal care. Imaging investigations can provide information and insights that are not accessible through ordinary visual inspection. The convergence of imaging, artificial intelligence (AI), and radiomic techniques enables multiscale in vivo interrogation with the potential to improve the assessment and monitoring of spinal pathologies. AI is revolutionizing computer vision, autonomous driving, natural language processing, and speech recognition, and these technologies are already affecting radiology, diagnostics, and other fields in which automated solutions can increase precision and reproducibility. In the first section of this narrative review, we briefly explain the main approaches currently being developed, with particular emphasis on those employed in spinal imaging studies. We then detail the previously documented uses of AI for challenges in spinal imaging, including imaging appropriateness and protocoling, image acquisition and reconstruction, image presentation, image interpretation, and quantitative image analysis. Finally, future applications of AI to spinal imaging are discussed. AI has the potential to significantly affect every step in spinal imaging: it can make images of the spine more useful to patients and physicians by improving image quality, imaging efficiency, and diagnostic accuracy.

Keywords: artificial intelligence; deep learning; image interpretation; image presentation; machine learning; spinal imaging.

Publication types

  • Review
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Artificial Intelligence*
  • Forecasting
  • Humans
  • Machine Learning
  • Radiology*
  • Reproducibility of Results

Grants and funding

This project was supported by the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2020B1515120082) and the Innovation Commission of Science and Technology of Shenzhen Municipality (Grant No. JCYJ20190807144001746, Grant No. JCYJ20200109150605937, and Grant No. JSGG20191129114422849).