Detecting face presentation attacks in mobile devices with a patch-based CNN and a sensor-aware loss function

PLoS One. 2020 Sep 4;15(9):e0238058. doi: 10.1371/journal.pone.0238058. eCollection 2020.

Abstract

With the widespread use of biometric authentication comes the exploitation of presentation attacks, possibly undermining the effectiveness of these technologies in real-world setups. One example takes place when an impostor, aiming to unlock someone else's smartphone, deceives the built-in face recognition system by presenting a printed image of the user. In this work, we study the problem of automatically detecting presentation attacks against face authentication methods, considering the use case of fast device unlocking and the hardware constraints of mobile devices. To enrich the understanding of how a purely software-based method can be used to tackle the problem, we present a solely data-driven approach trained with multi-resolution patches and a multi-objective loss function crafted specifically for the problem. We provide a careful analysis that considers several user-disjoint and cross-factor protocols, highlighting some of the problems with current datasets and approaches. Such analysis, besides demonstrating the competitive results yielded by the proposed method, provides a better conceptual understanding of the problem. To further enhance efficacy and discriminability, we propose a method that leverages the available gallery of user data in the device and adapts the method's decision-making process to the user's and the device's own characteristics. Finally, we introduce a new presentation-attack dataset tailored to the mobile-device setup, with real-world variations in lighting, including outdoor and low-light sessions, in contrast to existing public datasets.
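To make the patch-based idea summarized in the abstract more concrete, the sketch below shows one way multi-resolution patches could be sampled from a face image and scored by a small two-headed CNN whose outputs feed a weighted multi-objective loss. This is a minimal illustration only: the patch sizes, network architecture, auxiliary task, and loss weighting are assumptions for exposition and are not taken from the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def extract_multires_patches(image, sizes=(32, 64, 96), out_size=32, n_per_size=8):
        """Sample square patches at several scales from a (C, H, W) image and
        resize them to a common resolution so one small CNN can score them all.
        Scales and counts are illustrative, not the paper's settings."""
        _, h, w = image.shape
        patches = []
        for s in sizes:
            for _ in range(n_per_size):
                y = torch.randint(0, h - s + 1, (1,)).item()
                x = torch.randint(0, w - s + 1, (1,)).item()
                crop = image[:, y:y + s, x:x + s].unsqueeze(0)
                patches.append(F.interpolate(crop, size=out_size,
                                             mode="bilinear", align_corners=False))
        return torch.cat(patches, dim=0)  # (n_patches, C, out_size, out_size)

    class PatchCNN(nn.Module):
        """Tiny patch classifier with a main attack-vs-bona-fide head and a
        hypothetical auxiliary head (e.g., over capture conditions), enabling a
        weighted multi-objective loss. The architecture is a placeholder."""
        def __init__(self, n_aux_classes=4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.attack_head = nn.Linear(32, 2)
            self.aux_head = nn.Linear(32, n_aux_classes)

        def forward(self, x):
            z = self.features(x)
            return self.attack_head(z), self.aux_head(z)

    def multi_objective_loss(attack_logits, aux_logits, attack_y, aux_y, alpha=0.5):
        # Weighted sum of the main anti-spoofing loss and an auxiliary term;
        # alpha is an illustrative trade-off weight.
        return (F.cross_entropy(attack_logits, attack_y)
                + alpha * F.cross_entropy(aux_logits, aux_y))

At test time, per-patch scores from such a model would typically be aggregated (e.g., averaged) into a single image-level decision; the gallery-based, device-aware adaptation described in the abstract would then adjust that decision threshold per user and device.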

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Biometric Identification*
  • Cell Phone*
  • Computer Security*
  • Face*
  • Image Processing, Computer-Assisted
  • Neural Networks, Computer*
  • Pattern Recognition, Automated

Grants and funding

This work was funded by Motorola. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.