MoËT: Mixture of Expert Trees and its application to verifiable reinforcement learning

Neural Netw. 2022 Jul;151:34-47. doi: 10.1016/j.neunet.2022.03.022. Epub 2022 Mar 23.

Abstract

Rapid advancements in deep learning have led to many recent breakthroughs. While deep learning models achieve superior performance, often statistically surpassing humans, their adoption in safety-critical settings, such as healthcare or self-driving cars, is hindered by their inability to provide safety guarantees or to expose the inner workings of the model in a human-understandable form. We present MoËT, a novel model based on Mixture of Experts, consisting of decision tree experts and a generalized linear model gating function. Thanks to this gating function, the model is more expressive than a standard decision tree. To support non-differentiable decision trees as experts, we formulate a novel training procedure. In addition, we introduce a hard-thresholding version, MoËTh, in which predictions are made solely by a single expert chosen via the gating function. This property allows each MoËTh prediction to be decomposed into a set of logical rules in a form that can be easily verified. While MoËT is a general-purpose model, we illustrate its power in the reinforcement learning setting. By training MoËT models with an imitation learning procedure on deep RL agents, we outperform the previous state-of-the-art technique based on decision trees while preserving the verifiability of the models. Moreover, we show that MoËT can also be used on real-world supervised learning problems, on which it outperforms other verifiable machine learning models.
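To make the architecture described above concrete, the following is a minimal, hypothetical Python sketch of the hard-thresholded variant (MoËTh): a multinomial logistic-regression gate (standing in here for the paper's generalized linear gating function) routes each input to exactly one scikit-learn decision tree, so every prediction reduces to one linear gate rule plus one root-to-leaf path of tree splits. The class name `MoETh`, the alternating fit loop, and all parameters are illustrative assumptions; the paper's actual training procedure for non-differentiable tree experts is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

class MoETh:
    """Toy sketch of a hard-thresholded mixture of expert trees.

    NOT the paper's training algorithm: a simple stand-in loop that
    alternates between fitting trees on their assigned points and
    refitting a linear gate to predict those assignments.
    """

    def __init__(self, n_experts=4, max_depth=6, seed=0):
        self.n_experts = n_experts
        self.max_depth = max_depth
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y, n_iters=5):
        # Start from a random assignment of samples to experts.
        assign = self.rng.integers(self.n_experts, size=len(X))
        for _ in range(n_iters):
            self.experts_ = []
            for k in range(self.n_experts):
                mask = assign == k
                tree = DecisionTreeClassifier(max_depth=self.max_depth)
                # Fall back to all data if an expert has no points.
                tree.fit(X[mask] if mask.any() else X,
                         y[mask] if mask.any() else y)
                self.experts_.append(tree)
            # Linear (softmax) gate learns the current routing.
            self.gate_ = LogisticRegression(max_iter=1000).fit(X, assign)
            assign = self.gate_.predict(X)
            if len(np.unique(assign)) < 2:
                break  # degenerate gate; keep the current experts
        return self

    def predict(self, X):
        # Hard thresholding: each sample is routed to a single expert,
        # so the prediction is verifiable as gate rule + tree path.
        chosen = self.gate_.predict(X)
        out = np.zeros(len(X), dtype=int)  # assumes integer class labels
        for k, tree in enumerate(self.experts_):
            mask = chosen == k
            if mask.any():
                out[mask] = tree.predict(X[mask])
        return out

if __name__ == "__main__":
    from sklearn.datasets import load_iris
    X, y = load_iris(return_X_y=True)
    model = MoETh(n_experts=3).fit(X, y)
    print("train accuracy:", (model.predict(X) == y).mean())
```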

Keywords: Deep learning; Explainability; Mixture of Experts; Reinforcement learning; Verification.

MeSH terms

  • Humans
  • Linear Models
  • Machine Learning*
  • Reinforcement, Psychology*