Split BiRNN for real-time activity recognition using radar and deep learning

Sci Rep. 2022 May 6;12(1):7436. doi: 10.1038/s41598-022-08240-x.

Abstract

Radar systems can be used to perform human activity recognition in a privacy-preserving manner. This can be achieved with Deep Neural Networks, which are able to process complex radar data effectively. However, these networks are often large and do not scale well when many radar streams must be processed at once, for example when monitoring multiple rooms in a hospital. This work presents a framework that splits the processing of the data into two parts. First, a forward Recurrent Neural Network (RNN) pass is performed on an on-premise device (usually close to the radar sensor), which already yields a prediction of the activity being performed and can serve time-sensitive use cases. Next, part of this computation and the prediction are sent to a more capable off-premise machine (typically in the cloud or a data center), where a backward RNN pass is performed that refines the prediction produced by the on-premise device. This allows the on-premise device to quickly notify staff when troublesome activities such as falling occur, while the off-premise device catches activities that the on-premise device missed or misclassified.
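
The split described above can be illustrated with a short sketch. The code below is not taken from the paper; it is a minimal PyTorch approximation of the idea, assuming hypothetical dimensions (64-dimensional radar feature frames, a hidden size of 128, and 6 activity classes) and assuming that the radar frames are forwarded to the off-premise machine together with the forward hidden states.

# Minimal sketch of the split-BiRNN idea (not the authors' code).
# Assumed, hypothetical sizes: 64 input features, 128 hidden units, 6 classes.
import torch
import torch.nn as nn


class OnPremiseForwardRNN(nn.Module):
    """Runs near the radar sensor: a forward (causal) GRU that emits a
    provisional activity prediction for every incoming frame."""

    def __init__(self, n_features=64, hidden=128, n_classes=6):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, frames):                # frames: (batch, time, n_features)
        fwd_states, _ = self.gru(frames)      # causal hidden states
        provisional = self.head(fwd_states)   # per-frame class logits
        return provisional, fwd_states        # both can be sent off-premise


class OffPremiseBackwardRNN(nn.Module):
    """Runs in the cloud / data center: a backward GRU over the same sequence,
    whose states are fused with the forward states to refine the prediction."""

    def __init__(self, n_features=64, hidden=128, n_classes=6):
        super().__init__()
        self.bwd_gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, frames, fwd_states):
        reversed_frames = torch.flip(frames, dims=[1])   # process time in reverse
        bwd_states, _ = self.bwd_gru(reversed_frames)
        bwd_states = torch.flip(bwd_states, dims=[1])    # re-align with forward time
        refined = self.head(torch.cat([fwd_states, bwd_states], dim=-1))
        return refined                                   # improved per-frame logits


if __name__ == "__main__":
    frames = torch.randn(1, 100, 64)          # one stream of 100 radar frames
    edge, cloud = OnPremiseForwardRNN(), OffPremiseBackwardRNN()
    provisional, fwd_states = edge(frames)    # fast, time-sensitive prediction
    refined = cloud(frames, fwd_states)       # slower, higher-quality prediction
    print(provisional.shape, refined.shape)   # torch.Size([1, 100, 6]) twice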

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Accidental Falls
  • Deep Learning*
  • Human Activities
  • Humans
  • Neural Networks, Computer
  • Radar*