Interpretation of machine learning models using Shapley values: application to compound potency and multi-target activity predictions

J Comput Aided Mol Des. 2020 Oct;34(10):1013-1026. doi: 10.1007/s10822-020-00314-0. Epub 2020 May 2.

Abstract

Difficulties in interpreting machine learning (ML) models and their predictions limit the practical applicability of, and confidence in, ML in pharmaceutical research. There is a need for model-agnostic approaches that aid in the interpretation of ML models regardless of their complexity and that are also applicable to deep neural network (DNN) architectures and model ensembles. To these ends, the SHapley Additive exPlanations (SHAP) methodology has recently been introduced. The SHAP approach enables the identification and prioritization of the features that determine compound classification and activity prediction using any ML model. Herein, we further extend the evaluation of the SHAP methodology by investigating a variant for the exact calculation of Shapley values for decision tree methods and systematically comparing this variant with the model-independent SHAP method in compound activity and potency value predictions. Moreover, new applications of the SHAP analysis approach are presented, including the interpretation of DNN models for the generation of multi-target activity profiles and of ensemble regression models for potency prediction.
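The exact Shapley value computation discussed in the abstract rests on the game-theoretic definition: a feature's contribution is its marginal effect on the model output, averaged over all possible feature coalitions. For small feature sets this definition can be evaluated directly by enumeration, as sketched below (a minimal illustration in plain Python with a hypothetical additive value function, not code from the paper; the tree-based variant achieves the same exact result far more efficiently for decision tree models):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values by enumerating all coalitions.

    players: list of feature/player identifiers
    value:   function mapping a frozenset of players to a model output
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Shapley weight |S|! (n - |S| - 1)! / n! for a coalition S of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of i when joining coalition S
                total += weight * (value(s | {i}) - value(s))
        phi[i] = total
    return phi

# Toy additive value function: for an additive game the Shapley values
# recover each feature's individual contribution exactly.
contrib = {"a": 2.0, "b": -1.0, "c": 0.5}
v = lambda s: sum(contrib[p] for p in s)
print(shapley_values(["a", "b", "c"], v))
```

Because enumeration scales as 2^n in the number of features, the model-independent SHAP method approximates these values by sampling, whereas the decision tree variant exploits the tree structure to obtain them exactly in polynomial time.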

Keywords: Black box character; Compound activity; Compound potency prediction; Feature importance; Machine learning; Model interpretation; Multi-target modeling; Shapley values; Structure–activity relationships.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Drug Discovery*
  • Humans
  • Machine Learning*
  • Models, Molecular
  • Neural Networks, Computer*
  • Pharmaceutical Preparations / metabolism
  • Pharmaceutical Preparations / standards*
  • Structure-Activity Relationship
  • Therapeutic Equivalency

Substances

  • Pharmaceutical Preparations