Fall Detection Using Smartphone Audio Features

IEEE J Biomed Health Inform. 2016 Jul;20(4):1073-80. doi: 10.1109/JBHI.2015.2425932. Epub 2015 Apr 23.

Abstract

An automated fall detection system based on smartphone audio features is developed. The spectrogram, mel frequency cepstral coefficients (MFCCs), linear predictive coding (LPC), and matching pursuit (MP) features of different fall and no-fall sound events are extracted from experimental data. Based on the extracted audio features, four machine learning classifiers are investigated for distinguishing between fall and no-fall events: k-nearest neighbor (k-NN), support vector machine (SVM), least squares method (LSM), and artificial neural network (ANN). For each audio feature, the performance of each classifier is evaluated in terms of sensitivity, specificity, accuracy, and computational complexity. The best performance is achieved using spectrogram features with the ANN classifier, with sensitivity, specificity, and accuracy all above 98%. The classifier also has acceptable computational requirements for training and testing. The system is applicable in home environments where the phone is placed in the vicinity of the user.
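
As a rough illustration of the pipeline summarized above (audio feature extraction followed by binary fall/no-fall classification), and not the authors' implementation, the sketch below pools a log-magnitude spectrogram into a fixed-length vector per clip and trains a small feed-forward network with librosa and scikit-learn. The file names, sampling rate, FFT parameters, and hidden-layer size are assumptions; the paper's spectrogram features and ANN architecture may differ.

    import numpy as np
    import librosa
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import confusion_matrix

    def spectrogram_features(wav_path, sr=16000, n_fft=512, hop_length=256):
        """Log-magnitude spectrogram of one clip, pooled over time into a fixed-length vector."""
        y, sr = librosa.load(wav_path, sr=sr)
        S = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop_length))  # magnitude STFT
        log_S = librosa.amplitude_to_db(S)                               # dB scale
        return np.concatenate([log_S.mean(axis=1), log_S.std(axis=1)])   # per-bin statistics

    # Placeholder file lists; replace with a real labeled corpus of fall / no-fall clips.
    fall_clips = ["fall_001.wav", "fall_002.wav"]
    nofall_clips = ["door_slam_001.wav", "footsteps_001.wav"]

    X = np.array([spectrogram_features(p) for p in fall_clips + nofall_clips])
    y = np.array([1] * len(fall_clips) + [0] * len(nofall_clips))        # 1 = fall event

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

    # Small feed-forward ANN; the hidden-layer size here is an arbitrary choice.
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000)
    clf.fit(X_train, y_train)

    tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test), labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)   # fraction of true falls detected
    specificity = tn / (tn + fp)   # fraction of no-fall events correctly rejected
    print(f"sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")

Swapping the feature function for MFCC, LPC, or matching pursuit features, or the classifier for k-NN, SVM, or a least squares model, reproduces the kind of feature/classifier comparison described in the abstract.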

MeSH terms

  • Accidental Falls*
  • Female
  • Humans
  • Male
  • Monitoring, Ambulatory / methods*
  • Neural Networks, Computer
  • Sensitivity and Specificity
  • Signal Processing, Computer-Assisted*
  • Smartphone*
  • Sound Spectrography / methods*
  • Support Vector Machine
  • Telemedicine