How to backdoor split learning

Neural Netw. 2023 Nov:168:326-336. doi: 10.1016/j.neunet.2023.09.037. Epub 2023 Sep 24.

Abstract

Split learning, a distributed learning framework, has garnered significant attention from both academia and industry. In contrast to federated learning, split learning offers a more flexible architecture for participants with limited computing resources. However, its security has been questioned because control over the data and the model is separated from their usage. Most existing research focuses on inference attacks against split learning. In this paper, we first reveal the vulnerability of split learning to backdoor attacks and present two backdoor attack frameworks, one from the client's perspective and one from the server's. A client-side attacker exploits the client's direct control over local data to insert backdoor samples into the training set, and we propose two methods for labeling these samples that suit different application scenarios. Because the server has no control over the clients in split learning, a server-side attacker cannot inject backdoor samples into the training data. Instead, our strategy leverages the server's control over the training process to shape the optimization direction of the client model so that it encodes backdoor samples. Moreover, we introduce an auxiliary model into the attack framework to strengthen the backdoor: it widens the separation between backdoor and clean samples in the feature space, making the client model more sensitive to backdoor samples. Extensive evaluations demonstrate that both frameworks achieve high attack accuracy without degrading the performance of the main task. Our research uncovers potential security risks and sounds the alarm for the deployment of split learning.
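To make the client-side setting concrete, below is a minimal PyTorch sketch of a backdoor injected through split-learning training. It is illustrative only: ClientNet, ServerNet, add_trigger, the 10% poisoning rate, and the label-flipping rule are assumptions of this sketch, not the paper's exact construction; the paper's two labeling methods, server-side framework, and auxiliary model are not reproduced here.

```python
# Hypothetical sketch: client-side backdoor poisoning in split learning.
# The client holds the bottom model and its data; the server holds the top
# model. The client stamps a trigger on a fraction of samples and relabels
# them to a target class (a simple label-flipping rule, assumed here).
import torch
import torch.nn as nn

class ClientNet(nn.Module):          # bottom model, held by the client
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
    def forward(self, x):
        return self.features(x)

class ServerNet(nn.Module):          # top model, held by the server
    def __init__(self, num_classes=10):
        super().__init__()
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 16 * 16, num_classes),
        )
    def forward(self, h):
        return self.head(h)

def add_trigger(x):
    """Stamp a small white patch in the corner as the backdoor trigger."""
    x = x.clone()
    x[:, :, -4:, -4:] = 1.0
    return x

# Synthetic stand-in for the client's local data (CIFAR-like shapes).
loader = [(torch.rand(64, 3, 32, 32), torch.randint(0, 10, (64,)))]

client, server = ClientNet(), ServerNet()
opt_c = torch.optim.SGD(client.parameters(), lr=0.01)
opt_s = torch.optim.SGD(server.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
target_class, poison_rate = 0, 0.1   # assumed attacker parameters

for x, y in loader:
    # Client-side poisoning: the client controls its local data, so it can
    # stamp the trigger on a fraction of the batch and flip those labels.
    n_poison = int(poison_rate * x.size(0))
    x[:n_poison] = add_trigger(x[:n_poison])
    y[:n_poison] = target_class

    # Standard split-learning step: the client sends cut-layer activations
    # ("smashed data"); the server finishes the forward pass and the
    # gradient flows back through the cut layer to the client model.
    h = client(x)
    loss = criterion(server(h), y)
    opt_c.zero_grad(); opt_s.zero_grad()
    loss.backward()
    opt_c.step(); opt_s.step()
```

After enough poisoned steps, the joint model maps any triggered input to target_class while clean accuracy is largely unaffected, which is the behavior the abstract reports for the full frameworks.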

Keywords: Auxiliary model; Backdoor attack; Shadow model; Split learning.

MeSH terms

  • Learning*
  • Neural Networks, Computer*