Model-driven deep unrolling: Towards interpretable deep learning against noise attacks for intelligent fault diagnosis

ISA Trans. 2022 Oct;129(Pt B):644-662. doi: 10.1016/j.isatra.2022.02.027. Epub 2022 Feb 22.

Abstract

Intelligent fault diagnosis (IFD) has made tremendous progress over the past decades, owing a great deal to deep learning (DL)-based methods. However, the "black box" nature of DL-based methods still seriously hinders their wide application in industry, especially in aero-engine IFD, and how to interpret the learned features remains a challenging problem. Furthermore, IFD based on vibration signals is often affected by heavy noise, leading to a sharp drop in accuracy. To address these two problems, we develop a model-driven deep unrolling method to achieve ante-hoc interpretability, whose core idea is to unroll the optimization algorithm of a predefined model into a neural network that is naturally interpretable and robust to noise attacks. Motivated by the recent multi-layer sparse coding (ML-SC) model, we propose to solve a general sparse coding (GSC) problem across different layers and derive the corresponding layered GSC (LGSC) algorithm. Following the principle of deep unrolling, the proposed algorithm is unfolded into LGSC-Net, whose relationship with the convolutional neural network (CNN) is also discussed in depth. The effectiveness of the proposed model is verified on an aero-engine bevel gear fault experiment and a helical gear fault experiment under three kinds of adversarial noise attacks. The interpretability is also discussed from the perspective of the core of model-driven deep unrolling and its inductive reconstruction property.
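To make the idea of model-driven deep unrolling concrete, the sketch below unfolds the classic ISTA iteration for single-layer sparse coding (y = Dx + noise) into a small trainable network, in the spirit of LISTA. This is a generic, hedged illustration only, not the paper's LGSC-Net or its layered GSC algorithm; the class name `UnrolledISTA`, the dimensions, the number of unrolled layers, and the per-layer soft-thresholds are all illustrative assumptions.

```python
# Minimal deep-unrolling sketch (LISTA-style), assuming the sparse coding model
# y = D x + noise. NOT the paper's LGSC-Net; matrices, layer count, and
# thresholds are placeholders illustrating how an iterative optimizer becomes a
# network whose layers inherit the interpretation of optimization steps.
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    def __init__(self, signal_dim: int, code_dim: int, n_layers: int = 5):
        super().__init__()
        self.n_layers = n_layers
        # Learnable analogues of the ISTA update matrices (We ~ D^T / L, S ~ I - D^T D / L)
        self.We = nn.Linear(signal_dim, code_dim, bias=False)
        self.S = nn.Linear(code_dim, code_dim, bias=False)
        # One learnable soft-threshold per unrolled iteration (plays the role of lambda / L)
        self.theta = nn.Parameter(torch.full((n_layers,), 0.1))

    @staticmethod
    def soft_threshold(x, theta):
        # Proximal operator of the L1 penalty: promotes sparse codes.
        return torch.sign(x) * torch.clamp(torch.abs(x) - theta, min=0.0)

    def forward(self, y):
        # Each "layer" is one unrolled ISTA iteration with its own threshold.
        x = self.soft_threshold(self.We(y), self.theta[0])
        for k in range(1, self.n_layers):
            x = self.soft_threshold(self.We(y) + self.S(x), self.theta[k])
        return x

# Usage: encode a batch of 128-dimensional signals into 256-dimensional sparse codes.
model = UnrolledISTA(signal_dim=128, code_dim=256, n_layers=5)
codes = model(torch.randn(32, 128))
```

Because every layer corresponds to a step of a known optimization algorithm, the learned weights and thresholds retain a model-based meaning, which is the sense in which unrolled networks are considered ante-hoc interpretable.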

Keywords: Intelligent fault diagnosis; Interpretable deep learning; Model-driven deep unrolling; Noise attacks.

MeSH terms

  • Algorithms
  • Deep Learning*
  • Neural Networks, Computer