Conditional Uncorrelation and Efficient Subset Selection in Sparse Regression

IEEE Trans Cybern. 2022 Oct;52(10):10458-10467. doi: 10.1109/TCYB.2021.3062842. Epub 2022 Sep 19.

Abstract

Given m d-dimensional responses and n d-dimensional predictors, sparse regression selects at most k predictors for each response for linear approximation, where 1 ≤ k ≤ d-1. The key problem in sparse regression is subset selection, which usually suffers from high computational cost. In recent years, many improved approximate methods of subset selection have been published. However, less attention has been paid to nonapproximate subset selection, which remains necessary for many problems in data analysis. Here, we consider sparse regression from the viewpoint of correlation and propose a formula for conditional uncorrelation. Based on it, we propose an efficient nonapproximate method of subset selection that does not require computing any regression coefficients for candidate predictors. The proposed method reduces the computational complexity for each candidate subset from O((1/6)k³ + (m+1)k² + mkd) to O((1/6)k³ + (1/2)(m+1)k²). Because the dimension d is typically the number of observations or experiments and is large, the proposed method can greatly improve the efficiency of nonapproximate subset selection. We also apply the proposed method in real scenarios of dental age assessment and sparse coding to validate its efficiency.
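To make the setting concrete, the following is a minimal illustrative sketch (not the paper's method) of nonapproximate subset selection for a single response: it exhaustively searches all size-k subsets of predictor columns and solves a least-squares problem per subset. The per-subset cost of this naive search, which includes computing regression coefficients, is what the paper's conditional-uncorrelation formula avoids. The function name `best_subset` and the synthetic data are assumptions for illustration only.

```python
# Illustrative sketch of exhaustive (nonapproximate) subset selection
# for one response vector. For each size-k subset of predictor columns,
# solve least squares and keep the subset with the smallest residual.
from itertools import combinations
import numpy as np

def best_subset(X, y, k):
    """Return the index tuple of the k columns of X (d x n) that best
    approximate y (length d) in the least-squares sense."""
    best, best_err = None, np.inf
    for subset in combinations(range(X.shape[1]), k):
        # Naive per-subset cost: fit coefficients, then measure residual.
        coef, *_ = np.linalg.lstsq(X[:, subset], y, rcond=None)
        err = np.linalg.norm(X[:, subset] @ coef - y)
        if err < best_err:
            best, best_err = subset, err
    return best

# Example: y is an exact combination of predictors 0 and 2,
# so exhaustive search recovers that subset.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 6))   # d = 50 observations, n = 6 predictors
y = 2.0 * X[:, 0] - 1.5 * X[:, 2]
print(best_subset(X, y, k=2))      # -> (0, 2)
```

Note the cost per subset here includes the least-squares solve; the paper's contribution is a correlation-based criterion that ranks candidate subsets without computing any regression coefficients.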

MeSH terms

  • Algorithms*