Preferential Mixture-of-Experts: Interpretable Models that Rely on Human Expertise As Much As Possible

AMIA Jt Summits Transl Sci Proc. 2021 May 17;2021:525-534. eCollection 2021.

Abstract

We propose Preferential MoE, a novel human-ML mixture-of-experts model that augments human expertise in decision making with a data-based classifier only when necessary to achieve predictive performance. Our model features an interpretable gating function that indicates when human rules should be followed or avoided. The gating function is optimized to rely on human-based rules as much as possible while minimizing classification errors. We formulate this as a coupled multi-objective problem with convex subproblems, develop approximate algorithms to solve it, and study their performance and convergence. Finally, we demonstrate the utility of Preferential MoE on two clinical applications: the treatment of Human Immunodeficiency Virus (HIV) and the management of Major Depressive Disorder (MDD).
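
The paper's actual coupled multi-objective formulation is beyond the abstract; purely as a rough, self-contained illustration of the preferential gating idea, the Python sketch below routes each example to a hypothetical human rule whenever an interpretable (logistic) gate judges the rule reliable, and falls back to a data-driven classifier otherwise. The rule `human_rule`, the synthetic data, and the gate-training heuristic are all assumptions introduced here for illustration, not the paper's method or objectives.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical human rule (not from the paper): predicts class 1 when
# feature 0 is high, and abstains (returns None) elsewhere.
def human_rule(x):
    return 1 if x[0] > 0.5 else None

# Synthetic data for illustration only.
X = rng.normal(size=(500, 3))
y = ((X[:, 0] + 0.3 * X[:, 1]) > 0).astype(int)

# ML expert: an ordinary data-driven classifier.
ml_model = LogisticRegression().fit(X, y)

# Interpretable gate: a logistic model trained to predict, per example,
# whether the human rule fires AND agrees with the label. At test time
# the gate routes an example to the rule whenever it deems the rule
# safe, so human expertise is used as much as possible.
rule_preds = np.array([human_rule(x) for x in X], dtype=object)
gate_target = np.array([(r is not None) and (r == yi)
                        for r, yi in zip(rule_preds, y)]).astype(int)
gate = LogisticRegression().fit(X, gate_target)

def preferential_predict(x):
    r = human_rule(x)
    # Follow the human rule when it fires and the gate trusts it;
    # otherwise defer to the ML expert.
    if r is not None and gate.predict(x.reshape(1, -1))[0] == 1:
        return r
    return int(ml_model.predict(x.reshape(1, -1))[0])

preds = np.array([preferential_predict(x) for x in X])
print("accuracy:", (preds == y).mean())
```

In the paper's formulation the trade-off between rule usage and accuracy is handled by the coupled multi-objective problem; the heuristic above only mimics the qualitative behavior described in the abstract: follow human expertise wherever possible and invoke the ML expert only when needed.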

MeSH terms

  • Algorithms
  • Depressive Disorder, Major*
  • Humans