Coloring Molecules with Explainable Artificial Intelligence for Preclinical Relevance Assessment

J Chem Inf Model. 2021 Mar 22;61(3):1083-1094. doi: 10.1021/acs.jcim.0c01344. Epub 2021 Feb 25.

Abstract

Graph neural networks are able to solve certain drug discovery tasks such as molecular property prediction and de novo molecule generation. However, these models are often regarded as "black boxes" that are hard to debug. This study aimed to improve modeling transparency for rational molecular design by applying the integrated gradients explainable artificial intelligence (XAI) approach to graph neural network models. Models were trained to predict plasma protein binding, hERG channel inhibition, passive permeability, and cytochrome P450 inhibition. The proposed methodology highlighted molecular features and structural elements that are in agreement with known pharmacophore motifs, correctly identified property cliffs, and provided insights into unspecific ligand-target interactions. The developed XAI approach is fully open-sourced and can be used by practitioners to train new models on other clinically relevant endpoints.
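The abstract refers to integrated gradients as the attribution method used to "color" atoms by their contribution to a predicted property. As a rough illustration only (not the paper's released implementation), the sketch below approximates integrated gradients over atom (node) features of a single molecular graph with plain PyTorch; the model signature, the all-zero baseline, and the per-atom aggregation are assumptions for the example.

```python
import torch

def integrated_gradients(model, node_feats, adj, baseline=None, steps=50):
    """Approximate per-atom attributions via integrated gradients.

    Assumes model(node_feats, adj) returns a scalar property prediction
    for one molecular graph. This is a generic sketch, not the authors'
    open-sourced code.
    """
    if baseline is None:
        # Assumed baseline: all-zero atom feature matrix
        baseline = torch.zeros_like(node_feats)

    total_grads = torch.zeros_like(node_feats)
    # Riemann-sum approximation of the path integral from baseline to input
    for alpha in torch.linspace(0.0, 1.0, steps):
        interpolated = (baseline + alpha * (node_feats - baseline)).detach().requires_grad_(True)
        output = model(interpolated, adj)
        grads, = torch.autograd.grad(output, interpolated)
        total_grads += grads

    avg_grads = total_grads / steps
    # Feature-wise attribution; summing over the feature dimension gives
    # one score per atom, which can be mapped to an atom "color"
    attributions = (node_feats - baseline) * avg_grads
    return attributions.sum(dim=-1)
```

In use, the returned per-atom scores would be normalized and rendered onto the 2D depiction of the molecule, so that positive and negative contributions to the predicted endpoint (e.g., hERG inhibition) can be inspected visually.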

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Artificial Intelligence*
  • Drug Discovery
  • Ligands
  • Neural Networks, Computer*

Substances

  • Ligands