Advancing Computational Toxicology by Interpretable Machine Learning

Environ Sci Technol. 2023 Nov 21;57(46):17690-17706. doi: 10.1021/acs.est.3c00653. Epub 2023 May 24.

Abstract

Chemical toxicity evaluations for drugs, consumer products, and environmental chemicals have a critical impact on human health. Traditional animal models for evaluating chemical toxicity are expensive, time-consuming, and often fail to detect toxicants in humans. Computational toxicology is a promising alternative that uses machine learning (ML) and deep learning (DL) techniques to predict the toxicity potential of chemicals. Although ML- and DL-based computational models are attractive for chemical toxicity prediction, many toxicity models are "black boxes" that are difficult for toxicologists to interpret, which hampers chemical risk assessment with these models. Recent progress in interpretable ML (IML) in computer science meets this urgent need to unveil underlying toxicity mechanisms and elucidate the domain knowledge embedded in toxicity models. In this review, we focus on the applications of IML in computational toxicology, including toxicity feature data, model interpretation methods, the use of knowledge base frameworks in IML development, and recent applications. Challenges and future directions of IML modeling in toxicology are also discussed. We hope this review encourages efforts to develop interpretable models with new IML algorithms that can assist new chemical assessments by illustrating toxicity mechanisms in humans.
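To make the "model interpretation methods" mentioned above concrete, the sketch below illustrates one common model-agnostic IML technique, permutation importance, on a toy toxicity classifier. Everything here is hypothetical and not taken from the review: the classifier is a fixed linear scorer over two illustrative molecular descriptors (logP and molecular weight), and the data set is synthetic.

```python
import random

# Hypothetical stand-in for a trained "black box" toxicity classifier:
# a fixed linear scorer over two molecular descriptors. The weights and
# threshold are illustrative only, not drawn from the review.
def predict_toxic(logp, mw):
    return 1 if 0.9 * logp + 0.1 * (mw / 100) > 1.0 else 0

# Small synthetic data set: (logP, molecular weight, true toxicity label).
data = [(2.1, 300, 1), (0.3, 120, 0), (1.8, 450, 1), (0.5, 200, 0),
        (2.5, 150, 1), (0.2, 500, 0), (1.5, 350, 1), (0.4, 180, 0)]

def accuracy(rows):
    return sum(predict_toxic(lp, mw) == y for lp, mw, y in rows) / len(rows)

def permutation_importance(rows, column, seed=0):
    # Shuffle one feature column and measure the drop in accuracy.
    # Because this only queries the model's predictions, it works for
    # any classifier, which is why it is called model-agnostic.
    rng = random.Random(seed)
    shuffled = [r[column] for r in rows]
    rng.shuffle(shuffled)
    permuted = [tuple(s if i == column else v for i, v in enumerate(r))
                for r, s in zip(rows, shuffled)]
    return accuracy(rows) - accuracy(permuted)

base = accuracy(data)
imp_logp = permutation_importance(data, 0)  # importance of logP
imp_mw = permutation_importance(data, 1)    # importance of molecular weight
print(base, imp_logp, imp_mw)
```

In this toy setup the logP column drives the predictions, so permuting it degrades accuracy while permuting molecular weight does not; a toxicologist can read such importance scores as a first, coarse check that a model's behavior is chemically plausible.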

Keywords: Adverse outcome pathway; Computational toxicology; Interpretable modeling; Machine learning; Risk assessment; Systems toxicology.

Publication types

  • Review

MeSH terms

  • Animals
  • Computational Biology / methods
  • Hazardous Substances / toxicity
  • Humans
  • Machine Learning*
  • Models, Animal
  • Risk Assessment
  • Toxicology* / methods

Substances

  • Hazardous Substances