Classifying high-dimensional phenotypes with ensemble learning

bioRxiv [Preprint]. 2023 May 29:2023.05.29.542750. doi: 10.1101/2023.05.29.542750.

Abstract

Classification is a fundamental task in biology used to assign members to a class. While linear discriminant functions have long been effective, advances in phenotypic data collection are yielding increasingly high-dimensional datasets with more classes, unequal class covariances, and non-linear distributions. Numerous studies have deployed machine learning techniques to classify such distributions, but they are often restricted to a particular organism, a limited set of algorithms, and/or a specific classification task. In addition, the utility of ensemble learning, the strategic combination of models, has not been fully explored.

We performed a meta-analysis of 33 algorithms across 20 datasets containing over 20,000 high-dimensional shape phenotypes using an ensemble learning framework. Both binary (e.g., sex, environment) and multi-class (e.g., species, genotype, population) classification tasks were considered. The ensemble workflow contains functions for preprocessing, training individual learners and ensembles, and model evaluation. We evaluated algorithm performance within and among datasets, and quantified the extent to which various dataset and phenotypic properties impact performance.

We found that discriminant analysis variants and neural networks were the most accurate base learners on average, although their performance varied substantially between datasets. Ensemble models achieved the highest performance on average, both within and among datasets, increasing average accuracy by up to 3% over the top base learner. Higher class R2 values, mean class shape distances, and between- vs. within-class variances were positively associated with performance, whereas higher class covariance distances were negatively associated. Class balance and total sample size were not predictive.

Learning-based classification is a complex task driven by many hyperparameters. We demonstrate that selecting and optimizing an algorithm based on the results of another study is a flawed strategy. Ensemble models instead offer a flexible, data-agnostic, and exceptionally accurate approach. By assessing the impact of various dataset and phenotypic properties on classification performance, we also offer potential explanations for variation in performance. Researchers interested in maximizing performance stand to benefit from the simplicity and effectiveness of our approach, made accessible via the R package pheble.
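The ensemble strategy described in the abstract (training several base learners and combining their predictions with a meta-learner) can be sketched as a generic stacking classifier. The example below is an illustrative Python/scikit-learn sketch, not the pheble implementation: the synthetic dataset, the choice of base learners (a discriminant analysis variant and a small neural network, mirroring the top individual algorithms reported above), and the logistic-regression meta-learner are all assumptions for demonstration.

```python
# Illustrative sketch of a stacked ("blended") ensemble classifier.
# NOT the pheble implementation -- all model choices are assumptions.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a high-dimensional phenotype dataset:
# 300 samples, 50 features, 3 classes (e.g., a multi-class species task).
X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Base learners: a discriminant analysis variant and a neural network.
base_learners = [
    ("lda", LinearDiscriminantAnalysis()),
    ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                          random_state=0)),
]

# The meta-learner is fit on the base learners' cross-validated
# predictions, combining them into a single ensemble prediction.
ensemble = StackingClassifier(estimators=base_learners,
                              final_estimator=LogisticRegression(max_iter=1000))
ensemble.fit(X_train, y_train)
print(f"ensemble test accuracy: {ensemble.score(X_test, y_test):.3f}")
```

In this framing, the ensemble can only draw on information the base learners expose, but the meta-learner is free to weight each base learner differently per dataset, which is one way to understand why a blended model can beat the single best algorithm without knowing in advance which algorithm that is.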

Keywords: R; blending; classification; ensemble learning; landmarks; machine learning; morphometrics; phenotypes.

Publication types

  • Preprint