A Genetic Attack Against Machine Learning Classifiers to Steal Biometric Actigraphy Profiles from Health Related Sensor Data

J Med Syst. 2020 Sep 15;44(10):187. doi: 10.1007/s10916-020-01646-y.

Abstract

In this work, we propose a genetic-algorithm-based attack against machine learning classifiers with the aim of 'stealing' users' biometric actigraphy profiles from health-related sensor data. The target classification model uses daily actigraphy patterns for user identification. The biometric profiles are modeled as what we call impersonator examples, which are generated by repeatedly querying the target classifier and using only the confidence scores of its predictions. We conducted experiments in a black-box setting on a public dataset that contains actigraphy profiles from 55 individuals. The data consist of daily motion patterns recorded with an actigraphy device; these patterns can be used as biometric profiles to identify each individual. Our attack generated examples capable of impersonating a target user with a success rate of 94.5%. Furthermore, we found that the impersonator examples have high transferability to other classifiers trained on the same training set. We also show that the generated biometric profiles closely resemble the ground-truth profiles, which can lead to the exposure of sensitive information, such as the times of day an individual wakes up and goes to bed.
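The abstract describes the attack only at a high level: candidate profiles are evolved with a genetic algorithm, using the target classifier's prediction confidence as the sole fitness signal. The sketch below illustrates what such a loop could look like; it is not the paper's implementation. The 24-bin hourly profile, all GA hyperparameters, and the `query_confidence` stand-in (a similarity to a hidden profile used here only to make the example self-contained) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

N_BINS = 24            # assumed: one activity value per hour of the day
POP_SIZE = 50          # assumed GA hyperparameters, not from the paper
N_GENERATIONS = 200
MUTATION_RATE = 0.1
MUTATION_SCALE = 0.2

# Stand-in for the black-box target classifier. Here, "confidence" is a
# similarity to a hidden ground-truth profile; in the real attack it would
# be the classifier's returned confidence score for the target user.
_hidden_profile = rng.random(N_BINS)

def query_confidence(profile):
    dist = np.linalg.norm(profile - _hidden_profile)
    return 1.0 / (1.0 + dist)

def mutate(profile):
    # Perturb a random subset of bins with Gaussian noise.
    mask = rng.random(N_BINS) < MUTATION_RATE
    noise = rng.normal(0.0, MUTATION_SCALE, N_BINS)
    return np.clip(profile + mask * noise, 0.0, 1.0)

def crossover(a, b):
    # Uniform crossover: each bin is inherited from either parent.
    mask = rng.random(N_BINS) < 0.5
    return np.where(mask, a, b)

# Initialize a random population of candidate actigraphy profiles.
population = rng.random((POP_SIZE, N_BINS))

for gen in range(N_GENERATIONS):
    # Fitness = confidence score obtained by querying the classifier.
    fitness = np.array([query_confidence(p) for p in population])
    order = np.argsort(fitness)[::-1]
    elite = population[order[: POP_SIZE // 5]]   # keep the top 20%

    # Refill the population by recombining and mutating elite parents.
    children = []
    while len(children) < POP_SIZE - len(elite):
        a, b = elite[rng.integers(len(elite), size=2)]
        children.append(mutate(crossover(a, b)))
    population = np.vstack([elite, np.array(children)])

best = population[np.argmax([query_confidence(p) for p in population])]
print("best confidence:", query_confidence(best))
```

In the actual black-box setting, `query_confidence` would be a call to the target model's prediction interface, and the attacker would observe nothing beyond the returned confidence score for the target user, which matches the threat model stated in the abstract.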

Keywords: Biometric profiles; Genetic algorithms; Impersonator attack; Machine learning.

MeSH terms

  • Actigraphy*
  • Algorithms
  • Biometry
  • Humans
  • Machine Learning
  • Theft*