Disentangled Representation Learning for Multiple Attributes Preserving Face Deidentification

IEEE Trans Neural Netw Learn Syst. 2022 Jan;33(1):244-256. doi: 10.1109/TNNLS.2020.3027617. Epub 2022 Jan 5.

Abstract

The face is among the most sensitive information in visually shared data, so designing an effective face deidentification method that balances facial privacy protection against data utility when sharing data is an urgent task. Most previous face deidentification methods rely on attribute supervision to preserve one kind of identity-independent utility but sacrifice the others. In this article, we propose a novel disentangled representation learning architecture for multiple-attribute-preserving face deidentification, called replacing and restoring variational autoencoders (R2VAEs). The R2VAEs disentangle the identity-related factors from the identity-independent factors, so that the identity-related information can be obfuscated without altering the identity-independent attribute information. Moreover, to improve the details of the facial region and make the deidentified face blend seamlessly into the image scene, an image inpainting network is employed to fill in the original facial region, using the deidentified face as a prior. Experimental results demonstrate that the proposed method effectively deidentifies faces while maximizing the preservation of identity-independent information, which ensures the semantic integrity and visual quality of shared images.
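The abstract does not give implementation details, but the core "replacing" idea it describes can be illustrated on the latent codes alone. The sketch below assumes a factorized latent vector whose first `K` dimensions hold identity-related factors and whose remaining dimensions hold identity-independent attribute factors; the layout, dimensions, and `deidentify` helper are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent layout (illustrative, not from the paper):
# dims [0, K) hold identity-related factors,
# dims [K, D) hold identity-independent attribute factors.
K = 8   # number of identity dims (assumed)
D = 32  # total latent dims (assumed)

def deidentify(z_original, z_surrogate, k=K):
    """Replace the identity-related slice of a latent code while
    leaving the identity-independent attribute slice unchanged."""
    z_new = z_original.copy()
    z_new[:k] = z_surrogate[:k]  # swap in the surrogate's identity factors
    return z_new

z_src = rng.standard_normal(D)  # latent code of the face to protect
z_sur = rng.standard_normal(D)  # latent code of a surrogate identity

z_deid = deidentify(z_src, z_sur)

# Identity factors now match the surrogate; attribute factors still
# match the original, so identity-independent utility is preserved.
assert np.allclose(z_deid[:K], z_sur[:K])
assert np.allclose(z_deid[K:], z_src[K:])
```

In the full method, a decoder would map `z_deid` back to an image, and the inpainting network would then blend that deidentified face into the original scene.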

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Data Anonymization*
  • Face
  • Learning
  • Neural Networks, Computer*
  • Semantics