Transfer learning for small molecule retention predictions

J Chromatogr A. 2021 May 10;1644:462119. doi: 10.1016/j.chroma.2021.462119. Epub 2021 Mar 31.

Abstract

Small molecule retention time prediction is a challenging task because the wide variety of separation techniques leaves only fragmented data available for training machine learning models. Predictions are typically made with traditional machine learning methods such as support vector machines, random forests, or gradient boosting. Another approach is to train on large data sets and subsequently project the predictions onto the chromatographic system of interest. Here we evaluate the applicability of transfer learning for small molecule retention prediction as a new approach to dealing with small retention data sets. Transfer learning is a state-of-the-art technique for natural language processing (NLP) tasks. We propose using text-based molecular representations (SMILES), widely used in cheminformatics, for NLP-like modeling of molecules. We suggest self-supervised pre-training to capture relevant features from a large corpus of one million molecules, followed by fine-tuning on task-specific data. The mean absolute error (MAE) of the predictions was in the range of 88-248 s for the tested reversed-phase data sets and 66 s for the HILIC data set, which is comparable to the MAE reported for traditional descriptor-based machine learning models or projection approaches on the same data.
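The abstract outlines a two-stage workflow: self-supervised pre-training of a sequence model on a large SMILES corpus, then fine-tuning the same encoder on a small retention data set. The following is a minimal PyTorch sketch of that pre-train/fine-tune pattern only; the character-level GRU architecture, vocabulary size, and dimensions are illustrative assumptions, not the model published in the paper.

```python
# Sketch of SMILES transfer learning: (1) self-supervised next-character
# pre-training, (2) fine-tuning the reused encoder for retention-time
# regression. Architecture and hyperparameters are hypothetical.
import torch
import torch.nn as nn

PAD, VOCAB = 0, 64  # assumed character vocabulary size for SMILES tokens

class SmilesEncoder(nn.Module):
    def __init__(self, vocab=VOCAB, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim, padding_idx=PAD)
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, tokens):                  # tokens: (batch, seq)
        h, _ = self.rnn(self.emb(tokens))       # (batch, seq, dim)
        return h

class SmilesLM(nn.Module):
    """Stage 1: self-supervised language model (predict next character)."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(128, VOCAB)

    def forward(self, tokens):
        return self.head(self.encoder(tokens))  # (batch, seq, vocab)

class RetentionRegressor(nn.Module):
    """Stage 2: transfer the pre-trained encoder; regress retention time."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder                  # weights carried over
        self.head = nn.Linear(128, 1)

    def forward(self, tokens):
        h = self.encoder(tokens)
        return self.head(h.mean(dim=1)).squeeze(-1)  # pooled -> RT (s)

# Usage sketch; random tensors stand in for tokenized SMILES strings.
enc = SmilesEncoder()
lm = SmilesLM(enc)
x = torch.randint(1, VOCAB, (8, 40))            # fake pre-training batch
lm_loss = nn.CrossEntropyLoss()(lm(x[:, :-1]).transpose(1, 2), x[:, 1:])
lm_loss.backward()                              # one pre-training step

reg = RetentionRegressor(enc)                   # fine-tune on small RT set
rt = torch.rand(8) * 600                        # fake retention times (s)
mae = (reg(x) - rt).abs().mean()                # MAE, the paper's metric
mae.backward()                                  # one fine-tuning step
```

The key design point is that both stages share one encoder object, so the features learned from the large unlabeled corpus initialize the small-data regression task.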

Keywords: Deep learning; Machine learning; Retention time prediction; Small molecules; Transfer learning.

MeSH terms

  • Databases as Topic
  • Machine Learning*
  • Natural Language Processing
  • Reproducibility of Results
  • Support Vector Machine
  • Time Factors