Multimodal Multiresolution Data Fusion Using Convolutional Neural Networks for IoT Wearable Sensing

IEEE Trans Biomed Circuits Syst. 2021 Dec;15(6):1161-1173. doi: 10.1109/TBCAS.2021.3134043. Epub 2022 Feb 17.

Abstract

With advances in circuit design and sensing technology, simultaneously acquiring data from a large number of Internet of Things (IoT) sensors to enable more accurate inferences has become mainstream. In this work, we propose a novel convolutional neural network (CNN) model for the fusion of multimodal and multiresolution data obtained from several sensors. The proposed model fuses multiresolution sensor data without resorting to padding or resampling to correct for differences in sampling rate, even when carrying out temporal inferences such as high-resolution event detection. The performance of the proposed model is evaluated on sleep apnea event detection by fusing three different sensor signals obtained from the St. Vincent's University Hospital / University College Dublin (UCD) sleep apnea database. The proposed model is generalizable, as demonstrated by incremental performance improvements proportional to the number of sensors used for fusion. A selective dropout technique prevents the model from overfitting to any specific high-resolution input and increases the robustness of the fusion to signal corruption from any sensor source. A fusion model combining electrocardiogram (ECG), peripheral oxygen saturation (SpO2), and abdominal movement signals achieved an accuracy of 99.72% and a sensitivity of 98.98%. The energy per classification of the proposed fusion model was estimated to be approximately 5.61 μJ for an on-chip implementation. The feasibility of pruning to reduce the complexity of the fusion models was also studied.
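For illustration, a minimal PyTorch sketch of the fusion idea follows. The layer sizes, sampling rates, and the modality-level interpretation of selective dropout are assumptions for demonstration, not the paper's exact architecture: each sensor signal passes through its own 1-D convolutional branch, and adaptive pooling maps every branch to a common temporal length, so inputs with different sampling rates can be fused without padding or resampling the raw signals.

    import torch
    import torch.nn as nn

    class ModalityBranch(nn.Module):
        """Per-sensor 1-D CNN reducing its input to a fixed-length feature map."""
        def __init__(self, channels=16, out_len=32):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(1, channels, kernel_size=7, padding=3),
                nn.ReLU(),
                nn.Conv1d(channels, channels, kernel_size=7, padding=3),
                nn.ReLU(),
            )
            # Adaptive pooling maps any input length to a common temporal
            # resolution, so raw signals need no padding or resampling.
            self.pool = nn.AdaptiveAvgPool1d(out_len)

        def forward(self, x):                 # x: (batch, 1, samples)
            return self.pool(self.conv(x))    # (batch, channels, out_len)

    class FusionCNN(nn.Module):
        def __init__(self, n_modalities=3, channels=16, out_len=32, p_drop=0.2):
            super().__init__()
            self.branches = nn.ModuleList(
                [ModalityBranch(channels, out_len) for _ in range(n_modalities)]
            )
            self.p_drop = p_drop
            self.head = nn.Sequential(
                nn.Conv1d(n_modalities * channels, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
                nn.Flatten(),
                nn.Linear(32, 2),             # apnea event vs. normal
            )

        def forward(self, signals):           # list of (batch, 1, samples_i)
            feats = [branch(x) for branch, x in zip(self.branches, signals)]
            if self.training:
                # "Selective dropout" interpreted here as randomly zeroing an
                # entire modality's feature map so the head cannot over-rely
                # on any single high-resolution input (an assumption, not
                # necessarily the paper's exact scheme).
                feats = [torch.zeros_like(f) if torch.rand(()) < self.p_drop
                         else f for f in feats]
            return self.head(torch.cat(feats, dim=1))

A hypothetical usage, assuming a 60-second window with ECG sampled at 128 Hz and SpO2 and abdominal movement at 8 Hz (illustrative rates): the three inputs have different lengths but fuse without resampling.

    model = FusionCNN()
    ecg  = torch.randn(4, 1, 128 * 60)   # 60 s of ECG at 128 Hz
    spo2 = torch.randn(4, 1, 8 * 60)     # 60 s of SpO2 at 8 Hz
    abd  = torch.randn(4, 1, 8 * 60)     # 60 s of abdominal movement at 8 Hz
    logits = model([ecg, spo2, abd])     # shape (4, 2)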
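On the pruning study, the abstract does not state the scheme used; continuing from the sketch above, a hedged example of one common approach, unstructured L1 magnitude pruning via PyTorch's built-in utility, with an illustrative pruning ratio:

    import torch.nn.utils.prune as prune

    model = FusionCNN()
    # Prune 50% of the smallest-magnitude weights in every Conv1d layer
    # (the 0.5 ratio is illustrative, not taken from the paper).
    for module in model.modules():
        if isinstance(module, nn.Conv1d):
            prune.l1_unstructured(module, name="weight", amount=0.5)
            prune.remove(module, "weight")   # make the pruning permanent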

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Databases, Factual
  • Electrocardiography
  • Humans
  • Neural Networks, Computer
  • Sleep Apnea Syndromes* / diagnosis
  • Wearable Electronic Devices*