Effects of sliding window variation in the performance of acceleration-based human activity recognition using deep learning models

PeerJ Comput Sci. 2022 Aug 8;8:e1052. doi: 10.7717/peerj-cs.1052. eCollection 2022.

Abstract

Deep learning (DL) models are very useful for human activity recognition (HAR); among other advantages, they achieve better accuracy for HAR than traditional machine learning methods. DL models can learn from unlabeled data and extract features directly from raw data, as is the case with time-series acceleration. The sliding window is a feature extraction technique that, when used to preprocess time-series data, improves accuracy, latency, and processing cost. Preprocessing time and cost benefit especially when the window size is small, but how small can this window be while keeping good accuracy? The objective of this research was to analyze the performance of four DL models: a simple deep neural network (DNN); a convolutional neural network (CNN); a long short-term memory network (LSTM); and a hybrid model (CNN-LSTM), when varying the sliding window size using fixed overlapping windows, in order to identify an optimal window size for HAR. We compared the effects for two acceleration sources: wearable inertial measurement unit (IMU) sensors and motion capture (MOCAP) systems. Short sliding windows of 5, 10, 15, 20, and 25 frames were compared with long ones of 50, 75, 100, and 200 frames. The models were fed with raw acceleration data acquired under experimental conditions for three activities: walking, sit-to-stand, and squatting. Results show that the optimal window is 20-25 frames (0.20-0.25 s) for both sources, providing an accuracy of 99.07% and an F1-score of 87.08% with the CNN-LSTM on the wearable sensor data, and an accuracy of 98.8% and an F1-score of 82.80% on the MOCAP data; similarly accurate results were obtained with the LSTM model. There is almost no difference in accuracy for larger windows (100, 200 frames), whereas smaller windows show a decrease in the F1-score. Regarding inference time, data with a sliding window of 20 frames can be preprocessed around 4× (LSTM) and 2× (CNN-LSTM) faster than data using 100 frames.
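As a rough illustration of the fixed overlapping sliding-window segmentation discussed in the abstract, the sketch below splits raw tri-axial acceleration into fixed-size windows. The 20-frame window, 50% overlap, 100 Hz sampling rate, and array shapes are illustrative assumptions, not the authors' exact preprocessing settings.

```python
import numpy as np

def segment_windows(acc, window_size=20, overlap=0.5):
    """Split a (n_samples, n_channels) acceleration signal into
    fixed-size, overlapping windows.

    acc         : raw tri-axial acceleration, shape (n_samples, 3)
    window_size : frames per window (e.g., 20 frames ~ 0.20 s at 100 Hz)
    overlap     : fraction of overlap between consecutive windows
    """
    step = max(1, int(window_size * (1 - overlap)))
    windows = [
        acc[start:start + window_size]
        for start in range(0, len(acc) - window_size + 1, step)
    ]
    return np.stack(windows)  # shape: (n_windows, window_size, 3)

# Example: 10 s of synthetic 100 Hz tri-axial data -> 20-frame windows
acc = np.random.randn(1000, 3)
X = segment_windows(acc, window_size=20, overlap=0.5)
print(X.shape)  # (99, 20, 3)
```

Similarly, a minimal CNN-LSTM of the kind compared in the study could be sketched in Keras as follows; the layer sizes, optimizer, and three-class output are assumptions chosen for illustration, not the architecture reported in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(window_size=20, n_channels=3, n_classes=3):
    # Hypothetical CNN-LSTM: 1D convolutions extract local motion features
    # from each window, an LSTM models their temporal ordering, and a
    # softmax layer classifies the activity (walking, sit-to-stand, squat).
    model = models.Sequential([
        layers.Input(shape=(window_size, n_channels)),
        layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.LSTM(64),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_lstm()
model.summary()
```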

Keywords: Accelerometer; Deep learning; Human activity recognition; Motion capture; Pattern recognition; Sliding windows.

Grants and funding

This work was supported by the BeHealSy Program of EIT Health, which promoted the collaboration between the Universidad Politécnica de Madrid and the University of Lisbon. In addition, it was supported by national funds through the Portuguese Foundation for Science and Technology under references UIDB/50021/2020 and UIDB/50022/2020 (IDMEC under the LAETA project). Milagros Jaén-Vargas is supported by the Instituto para la Formación y Aprovechamiento de Recursos Humanos and the Secretaría Nacional de Ciencia, Tecnología e Innovación (IFARHU-SENACYT) grant (270-2018-968). Karla Reyes Leiva received scholarship support from the Fundación Carolina (FC) and the Universidad Tecnológica Centroamericana (UNITEC). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.