Joint Adversarial Example and False Data Injection Attacks for State Estimation in Power Systems

IEEE Trans Cybern. 2022 Dec;52(12):13699-13713. doi: 10.1109/TCYB.2021.3125345. Epub 2022 Nov 18.

Abstract

Although state estimation with a bad data detector (BDD) is a key procedure in power systems, the detector is vulnerable to false data injection attacks (FDIAs). Many deep learning methods have been proposed to detect such attacks. However, deep neural networks are themselves susceptible to adversarial attacks, or adversarial examples: slight changes to the inputs can cause sharp changes in the outputs, even in well-trained networks. This article introduces joint adversarial example and FDIAs (AFDIAs) to explore various attack scenarios for state estimation in power systems. Since perturbations added directly to measurements are likely to be detected by BDDs, our proposed method instead adds perturbations to the state variables, which guarantees that the attack is stealthy to BDDs. Malicious data can then be generated that are stealthy to both BDDs and deep learning-based detectors. Theoretical and experimental results show that our proposed state-perturbation-based AFDIA method (S-AFDIA) carries out attacks stealthy to both conventional BDDs and deep learning-based detectors, while our proposed measurement-perturbation-based adversarial FDIA method (M-AFDIA) succeeds when only deep learning-based detectors are used. Comparative experiments show that the proposed methods outperform state-of-the-art methods, and the ultimate effect of an attack can also be optimized using the proposed joint attack methods.
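The stealth property the abstract relies on is the classical FDIA result for linearized state estimation: if the attacker perturbs the estimated state by a vector c, the induced measurement change a = Hc lies in the column space of the measurement Jacobian H, so the weighted least-squares residual checked by the BDD is unchanged. A minimal NumPy sketch of this invariance (the DC model, the random H, and all dimensions are illustrative assumptions, not the paper's test system):

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_state = 8, 4

# Hypothetical linearized (DC) measurement model: z = H x + noise
H = rng.standard_normal((n_meas, n_state))
x_true = rng.standard_normal(n_state)
z = H @ x_true + 0.01 * rng.standard_normal(n_meas)

def bdd_residual(z, H):
    """Least-squares state estimate (unit weights) and residual norm,
    as a stand-in for the BDD's chi-squared residual test."""
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

# Attacker picks a state perturbation c and injects a = Hc into z
c = 0.5 * rng.standard_normal(n_state)
z_attacked = z + H @ c

r_clean = bdd_residual(z, H)
r_attacked = bdd_residual(z_attacked, H)
print(abs(r_clean - r_attacked) < 1e-9)  # residual is unchanged by the attack
```

Because the residual is the projection of z onto the orthogonal complement of the column space of H, any injection of the form Hc is annihilated by that projection, which is why the BDD cannot flag such attacks and why evading the deep learning-based detector becomes the remaining obstacle.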

MeSH terms

  • Humans
  • Neural Networks, Computer