A hierarchical multimodal system for motion analysis in patients with epilepsy

Epilepsy Behav. 2018 Oct;87:46-58. doi: 10.1016/j.yebeh.2018.07.028. Epub 2018 Aug 31.

Abstract

During seizures, a myriad of clinical manifestations may occur. The analysis of these signs, known as seizure semiology, gives clues to the underlying cerebral networks involved. When patients with drug-resistant epilepsy are monitored to assess their suitability for epilepsy surgery, semiology is a vital component of the presurgical evaluation. Specific patterns of facial movements, head motions, limb posturing and articulations, and hand and finger automatisms may be useful in distinguishing between mesial temporal lobe epilepsy (MTLE) and extratemporal lobe epilepsy (ETLE). However, this analysis is time-consuming and dependent on clinical experience and training. Given this limitation, an automated analysis of semiological patterns, i.e., the detection, quantification, and recognition of body movement patterns, has the potential to increase the diagnostic precision of localization. While a few single-modality quantitative approaches are available to assess seizure semiology, the automated quantification of patients' behavior across multiple modalities has seen limited advances in the literature. This is largely due to complicating variables commonly encountered in the clinical setting, such as the need to analyze subtle physical movements when the patient is covered or room lighting is inadequate. Semiology encompasses a stepwise temporal progression of signs that reflects the integration of connected neuronal networks; single signs in isolation are therefore far less informative. Taking this into account, we describe a novel modular, hierarchical, multimodal system that aims to detect and quantify semiologic signs recorded in 2D monitoring videos. Our approach jointly learns semiologic features from facial, body, and hand motions using computer vision and deep learning architectures. A dataset of 161 seizures arising from the temporal (n = 90) and extratemporal (n = 71) brain regions, collected at an Australian quaternary referral epilepsy unit, was used to quantitatively classify these types of epilepsy according to the detected semiology. A leave-one-subject-out (LOSO) cross-validation of semiological patterns from the face, body, and hands reached classification accuracies ranging between 12% and 83.4%, 41.2% and 80.1%, and 32.8% and 69.3%, respectively. The proposed hierarchical multimodal system is a potential stepping-stone towards a fully automated semiology analysis system to support the assessment of epilepsy.
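
The abstract does not specify the exact network design. As an illustration only, the following Python (PyTorch) sketch shows one way three modality branches (face, body, hand) could be fused for a binary MTLE-versus-ETLE decision; all layer sizes, feature dimensions, and class names here are hypothetical placeholders, not the authors' published architecture.

    # Hypothetical late-fusion sketch for three semiology modalities
    # (illustrative only; this is not the published model).
    import torch
    import torch.nn as nn

    class ModalityBranch(nn.Module):
        """Summarizes one modality's per-frame features into a clip embedding."""
        def __init__(self, in_dim, hidden=64):
            super().__init__()
            self.rnn = nn.GRU(in_dim, hidden, batch_first=True)

        def forward(self, x):          # x: (batch, frames, in_dim)
            _, h = self.rnn(x)         # final hidden state summarizes the clip
            return h.squeeze(0)        # (batch, hidden)

    class MultimodalFusion(nn.Module):
        """Concatenates face/body/hand embeddings and classifies MTLE vs. ETLE."""
        def __init__(self, dims=(136, 34, 42), hidden=64):
            super().__init__()
            self.branches = nn.ModuleList(ModalityBranch(d, hidden) for d in dims)
            self.head = nn.Linear(hidden * len(dims), 2)

        def forward(self, face, body, hand):
            feats = [b(x) for b, x in zip(self.branches, (face, body, hand))]
            return self.head(torch.cat(feats, dim=1))  # (batch, 2) class logits

    # Placeholder inputs: 4 clips of 30 frames each, one feature vector per frame.
    model = MultimodalFusion()
    face = torch.randn(4, 30, 136)  # e.g., 68 facial landmarks x 2 coordinates
    body = torch.randn(4, 30, 34)   # e.g., 17 body joints x 2 coordinates
    hand = torch.randn(4, 30, 42)   # e.g., 21 hand keypoints x 2 coordinates
    logits = model(face, body, hand)

Late fusion of this kind keeps each modality's encoder separate and swappable, which mirrors the modular, hierarchical framing used in the abstract.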
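
The reported accuracies come from a leave-one-subject-out protocol, in which each fold holds out all seizures from one patient, so the classifier is never tested on a subject it saw during training. Below is a minimal sketch of that protocol using scikit-learn's LeaveOneGroupOut; the features, labels, and patient IDs are synthetic placeholders, not the study's data.

    # Minimal LOSO cross-validation sketch (synthetic stand-in data).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneGroupOut

    rng = np.random.default_rng(0)
    X = rng.normal(size=(161, 64))            # one feature vector per seizure
    y = rng.integers(0, 2, size=161)          # 0 = MTLE, 1 = ETLE (placeholder)
    subjects = rng.integers(0, 30, size=161)  # patient ID for each seizure

    logo = LeaveOneGroupOut()
    accuracies = []
    for train_idx, test_idx in logo.split(X, y, groups=subjects):
        # All seizures from the held-out patient form the test split.
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        accuracies.append(clf.score(X[test_idx], y[test_idx]))

    print(f"mean LOSO accuracy: {np.mean(accuracies):.3f}")

Grouping folds by patient rather than by seizure avoids optimistic bias from seizures of the same individual appearing in both the training and test splits.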

Keywords: Computer vision; Deep learning; Facial movements; Hand automatisms; Quantitative movement analysis; Seizure semiology.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Automatism / physiopathology*
  • Biomechanical Phenomena
  • Datasets as Topic
  • Deep Learning*
  • Epilepsy / diagnosis*
  • Epilepsy, Temporal Lobe / diagnosis*
  • Face / physiopathology*
  • Hand / physiopathology*
  • Humans
  • Movement / physiology*
  • Neurophysiological Monitoring / methods*
  • Seizures / diagnosis*