Stacked ensemble extreme learning machine coupled with Partial Least Squares-based weighting strategy for nonlinear multivariate calibration

Spectrochim Acta A Mol Biomol Spectrosc. 2019 May 15;215:97-111. doi: 10.1016/j.saa.2019.02.089. Epub 2019 Feb 25.

Abstract

Owing to its simple theory and straightforward implementation, the extreme learning machine (ELM) has become a competitive single-hidden-layer feedforward network for nonlinear multivariate calibration in chemometrics. To further improve the generalization and robustness of ELM, stacked generalization is introduced into ELM to construct a modified model called the stacked ensemble ELM (SE-ELM). SE-ELM creates a set of sub-models by applying ELM to different sub-regions of the spectra and then combines the predictions of those sub-models according to a weighting strategy. Three weighting strategies are explored to implement the proposed SE-ELM: the winner-takes-all (WTA) strategy, the constrained non-negative least squares (CNNLS) strategy, and the partial least squares (PLS) strategy. PLS is recommended as the weighting method of choice because it can handle the multicollinearity among the predictions yielded by the sub-models. The three SE-ELM models with different weighting strategies are evaluated experimentally on six real spectroscopic datasets and compared with ELM, the back-propagation neural network (BPNN), and the radial basis function neural network (RBFNN), with significance assessed by the Wilcoxon signed rank test. The results suggest that, in general, all the SE-ELM models are more robust and more accurate than traditional ELM. In particular, the proposed PLS-based weighting strategy is at least statistically no worse than, and frequently better than, the other two weighting strategies, BPNN, and RBFNN.
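A minimal sketch of the SE-ELM idea described above, assuming NumPy and scikit-learn and illustrative choices for the number of spectral sub-regions, hidden nodes, and PLS components; it is not the authors' implementation and omits details such as cross-validated level-1 training.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Basic ELM: random hidden layer, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy "spectra": 100 samples x 200 wavelengths with a nonlinear response.
X = rng.normal(size=(100, 200))
y = np.sin(X[:, :10].sum(axis=1)) + 0.05 * rng.normal(size=100)

# Level 0: one ELM sub-model per contiguous spectral sub-region (assumed split).
n_regions = 10
regions = np.array_split(np.arange(X.shape[1]), n_regions)
sub_models = [elm_fit(X[:, idx], y) for idx in regions]

# Level 1: stack the sub-model predictions and learn PLS weights, which
# tolerate the collinearity among the correlated sub-model outputs.
Z = np.column_stack([elm_predict(m, X[:, idx])
                     for m, idx in zip(sub_models, regions)])
pls = PLSRegression(n_components=3).fit(Z, y)

# Prediction for (here, the same) spectra follows the same two stages.
Z_new = np.column_stack([elm_predict(m, X[:, idx])
                         for m, idx in zip(sub_models, regions)])
y_hat = pls.predict(Z_new).ravel()
```

Replacing the `PLSRegression` combiner with an argmax over sub-model validation errors or with a non-negative least squares fit would give the WTA and CNNLS weighting variants, respectively.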

Keywords: Extreme learning machine (ELM); Nonlinear multivariate calibration; Partial least squares (PLS); Stacked generalization.