MiDA: Membership inference attacks against domain adaptation

ISA Trans. 2023 Oct;141:103-112. doi: 10.1016/j.isatra.2023.01.021. Epub 2023 Jan 20.

Abstract

Domain adaptation has become an effective solution for training neural networks with insufficient training data. In this paper, we investigate a vulnerability of domain adaptation that can potentially breach sensitive information about the training dataset. We propose a new membership inference attack against domain adaptation models that infers the membership of samples from the target domain. By leveraging background knowledge about the additional source domain in domain adaptation tasks, our attack exploits the similar distributions of the target-domain and source-domain data to determine, with high efficiency and accuracy, whether a specific data sample belongs to the training set. In particular, the proposed attack can be deployed in a practical scenario where the attacker cannot obtain any details of the model. We conduct extensive evaluations on object and digit recognition tasks. Experimental results show that our method attacks domain adaptation models with a high success rate.

Keywords: Deep learning; Domain adaptation; Membership inference attack; Privacy.
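The abstract does not spell out the attack pipeline, so the following is only a generic, hypothetical sketch of a black-box membership inference attack in the classic shadow-model style, assuming the attacker holds labeled source-domain data whose distribution resembles the victim's target-domain training data. Everything here (the synthetic data, the shadow network, and the query_target_model API) is an illustrative assumption, not the MiDA method itself.

    # Hypothetical shadow-model membership inference sketch (Python,
    # NumPy + scikit-learn). Not the paper's actual MiDA attack.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Attacker's background knowledge: labeled source-domain data.
    # Synthetic stand-in here: two Gaussian classes in 10 dimensions.
    n, d = 2000, 10
    X_src = rng.normal(size=(n, d))
    y_src = (X_src[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

    # Split the source data into a "member" half (used to train the
    # shadow model) and a held-out "non-member" half, mimicking the
    # victim model's train/non-train divide.
    X_in, y_in = X_src[: n // 2], y_src[: n // 2]
    X_out, y_out = X_src[n // 2 :], y_src[n // 2 :]

    # Shadow model: a local proxy for the victim, trained on data
    # from a similar distribution.
    shadow = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                           random_state=0)
    shadow.fit(X_in, y_in)

    # Attack features: the shadow model's confidence vectors, which
    # tend to differ between training members and unseen points.
    feat_in = shadow.predict_proba(X_in)
    feat_out = shadow.predict_proba(X_out)
    attack_X = np.vstack([feat_in, feat_out])
    attack_y = np.concatenate([np.ones(len(feat_in)),
                               np.zeros(len(feat_out))])

    # Attack classifier: member (1) vs. non-member (0).
    attack = LogisticRegression().fit(attack_X, attack_y)

    def infer_membership(query_target_model, x):
        """Black-box membership guess for one target-domain sample.

        `query_target_model` is an assumed API returning the victim's
        class-probability vector for `x` (same number of classes as
        the shadow task); no other access to the model is needed.
        """
        probs = np.asarray(query_target_model(x)).reshape(1, -1)
        return bool(attack.predict(probs)[0])

The premise this sketch shares with the abstract is that, because the source and target domains have similar distributions, membership signals learned on the attacker's own shadow model can transfer to the victim, so the attack classifier scores the victim's output confidences without any access to its internals.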