DeepMap+: Recognizing High-Level Indoor Semantics Using Virtual Features and Samples Based on a Multi-Length Window Framework

Sensors (Basel). 2017 May 26;17(6):1214. doi: 10.3390/s17061214.

Abstract

Existing indoor semantic recognition schemes can mostly discover patterns through smartphone sensing, but they struggle to recognize sufficiently rich high-level indoor semantics for map enhancement. In this work we present DeepMap+, an automatic inference system that recognizes high-level indoor semantics from complex human activities captured by wrist-worn sensing. DeepMap+ is the first deep computation system to apply deep learning (DL) within a multi-length window framework to enrich the data source. Furthermore, we propose novel methods of generating virtual features and virtual samples so that DeepMap+ can better discover the hidden patterns of complex hand gestures. We selected 23 high-level indoor semantics (including public facilities and functional zones) and collected wrist-worn sensing data at a Wal-Mart supermarket. The experimental results show that our proposed methods effectively improve classification accuracy.

Keywords: activity recognition; deep learning; indoor semantic inference; multi-length windows; virtual features; virtual samples.
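The multi-length window framework mentioned above segments a continuous sensor stream into overlapping windows of several lengths, so that each hand gesture is observed at multiple temporal scales before being fed to the deep model. A minimal sketch of such segmentation is shown below; the specific window lengths and step size are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def multi_length_windows(signal, lengths=(64, 128, 256), step=32):
    """Segment a 1-D sensor stream into overlapping fixed-length windows
    at several scales.

    Illustrative sketch only: the window lengths and step size here are
    assumed for demonstration, not drawn from DeepMap+.
    Returns a dict mapping each window length to an array of shape
    (num_windows, length).
    """
    windows = {}
    for length in lengths:
        # Slide a window of this length over the signal with a fixed step.
        segments = [signal[i:i + length]
                    for i in range(0, len(signal) - length + 1, step)]
        windows[length] = (np.stack(segments) if segments
                          else np.empty((0, length)))
    return windows
```

Each window set can then be processed by a scale-specific feature extractor, and the resulting representations combined downstream; this is one common way to realize a multi-scale windowing scheme over inertial data.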