Learning defense transformations for counterattacking adversarial examples

Neural Netw. 2023 Jul:164:177-185. doi: 10.1016/j.neunet.2023.03.008. Epub 2023 Mar 24.

Abstract

Deep neural networks (DNNs) are vulnerable to adversarial examples crafted with small perturbations. Adversarial defense has therefore become an important means of improving the robustness of DNNs against such examples. However, existing defense methods focus on specific types of adversarial examples and may fail in real-world applications, where many types of attacks can occur and the exact type of adversarial example may even be unknown. In this paper, motivated by the observations that adversarial examples tend to appear near the classification boundary and are vulnerable to certain transformations, we study adversarial examples from a new perspective: whether they can be defended against by pulling them back to the original clean distribution. We empirically verify the existence of defense affine transformations that restore adversarial examples. Relying on this, we learn defense transformations to counterattack adversarial examples by parameterizing the affine transformations and exploiting the boundary information of DNNs. Extensive experiments on both toy and real-world data sets demonstrate the effectiveness and generalization of our defense method. The code is available at https://github.com/SCUTjinchengli/DefenseTransformer.
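
The sketch below is a minimal, hypothetical illustration (not the authors' released implementation) of the idea described above: fit a parameterized affine transformation so that the transformed input is pushed away from the classifier's decision boundary, here approximated by minimizing the prediction entropy. The names `classifier` and `x_adv`, the entropy objective, and all hyperparameters are assumptions for illustration only.

```python
# Hypothetical sketch of learning a defense affine transformation.
# Assumption: `classifier` is a trained PyTorch model, `x_adv` a batch of
# (possibly adversarial) images of shape (N, C, H, W).

import torch
import torch.nn.functional as F

def learn_defense_affine(classifier, x_adv, steps=100, lr=0.05):
    """Fit a 2x3 affine matrix so the transformed input lies far from the
    decision boundary (low prediction entropy); a stand-in for the paper's
    learned defense transformation."""
    classifier.eval()
    # Start from the identity affine transformation.
    theta = torch.tensor([[1.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0]], requires_grad=True)
    optimizer = torch.optim.Adam([theta], lr=lr)

    for _ in range(steps):
        # Apply the current affine transformation to the input batch.
        grid = F.affine_grid(theta.unsqueeze(0).expand(x_adv.size(0), -1, -1),
                             x_adv.size(), align_corners=False)
        x_def = F.grid_sample(x_adv, grid, align_corners=False)

        # Boundary-aware objective: minimize prediction entropy, i.e. move
        # the transformed example away from the classification boundary.
        log_probs = F.log_softmax(classifier(x_def), dim=1)
        entropy = -(log_probs.exp() * log_probs).sum(dim=1).mean()

        optimizer.zero_grad()
        entropy.backward()
        optimizer.step()

    return theta.detach()
```

In this toy version a single affine matrix is optimized per batch at test time; the paper instead learns defense transformations from data, so the sketch should be read only as an illustration of the counterattack-by-transformation idea.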

Keywords: Adversarial examples; Affine transformations; Classification boundary; Defense transformations.

MeSH terms

  • Generalization, Psychological*
  • Learning*
  • Neural Networks, Computer