FLAN: feature-wise latent additive neural models for biological applications

Brief Bioinform. 2023 May 19;24(3):bbad056. doi: 10.1093/bib/bbad056.

Abstract

Motivation: Interpretability has become a necessary feature for machine learning models deployed in critical scenarios, e.g. the legal system or healthcare. In these situations, algorithmic decisions may have (potentially negative) long-lasting effects on the end-user affected by the decision. While deep learning models achieve impressive results, they often function as black boxes. Inspired by linear models, we propose a novel class of structurally constrained deep neural networks, which we call FLAN (Feature-wise Latent Additive Networks). Crucially, FLANs process each input feature separately, computing for each of them a representation in a common latent space. These feature-wise latent representations are then simply summed, and the aggregated representation is used for the prediction. These feature-wise representations allow a user to estimate the effect of each individual feature independently of the others, similarly to the way linear models are interpreted.
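The additive structure described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact architecture: the per-feature subnetwork sizes, activations and the linear prediction head are assumptions chosen for brevity. Because the latent representations are summed and the head here is linear, the prediction decomposes exactly into per-feature contributions, which is the property that makes the model interpretable in a linear-model-like way.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_feature_net(latent_dim, hidden=16):
    # Hypothetical per-feature subnetwork: maps one scalar feature
    # to a vector in the shared latent space (scalar -> hidden -> latent).
    W1 = rng.normal(scale=0.1, size=(1, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.1, size=(hidden, latent_dim))
    b2 = np.zeros(latent_dim)
    def net(x):                     # x: (batch, 1)
        h = np.tanh(x @ W1 + b1)
        return h @ W2 + b2          # (batch, latent_dim)
    return net

def flan_forward(X, feature_nets, head_W, head_b):
    # Each feature is encoded independently, the latent vectors are
    # summed, and the aggregate is passed to the prediction head.
    latents = [net(X[:, [j]]) for j, net in enumerate(feature_nets)]
    z = np.sum(latents, axis=0)     # additive aggregation in latent space
    return z @ head_W + head_b      # linear prediction head (assumed)

n_features, latent_dim = 4, 8
nets = [make_feature_net(latent_dim) for _ in range(n_features)]
head_W = rng.normal(scale=0.1, size=(latent_dim, 1))
head_b = np.zeros(1)

X = rng.normal(size=(5, n_features))
y = flan_forward(X, nets, head_W, head_b)

# Interpretation: with a linear head, the output is exactly the sum of
# per-feature contributions plus the bias, so each feature's effect can
# be read off independently of the others.
contribs = np.stack([net(X[:, [j]]) @ head_W for j, net in enumerate(nets)])
assert np.allclose(contribs.sum(axis=0) + head_b, y)
print(y.shape)
```

The assertion at the end illustrates the key interpretability property: summing the individual feature contributions recovers the full prediction.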

Results: We demonstrate FLAN on a series of benchmark datasets in different biological domains. Our experiments show that FLAN achieves good performance even on complex datasets (e.g. TCR-epitope binding prediction), despite the structural constraint we imposed. On the other hand, this constraint enables us to interpret FLAN by deciphering its decision process, as well as to obtain biological insights (e.g. by identifying the marker genes of different cell populations). In supplementary experiments, we observe similar performance on non-biological datasets.

Code and data availability: Code and example data are available at https://github.com/phineasng/flan_bio.

Keywords: computational biology; deep learning; interpretability; machine learning.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Machine Learning*
  • Neural Networks, Computer*
  • Protein Binding