A Factor Analysis Perspective on Linear Regression in the 'More Predictors than Samples' Case

Entropy (Basel). 2021 Aug 3;23(8):1012. doi: 10.3390/e23081012.

Abstract

Linear regression (LR) is a core supervised machine learning model for regression tasks. It can be fitted either with an analytic/closed-form formula or with an iterative algorithm. Fitting via the analytic formula becomes a problem when the number of predictors is greater than the number of samples, because the closed-form solution contains a matrix inverse that is not defined in this case. The standard remedies are the Moore-Penrose pseudoinverse or L2 regularization. We propose another solution, starting from a machine learning model that is instead used in unsupervised learning for dimensionality reduction or density estimation: factor analysis (FA) with a one-dimensional latent space. The density estimation task is our focus since, in this case, FA can fit a Gaussian distribution even when the dimensionality of the data is greater than the number of samples; we retain this advantage when creating the supervised counterpart of factor analysis, which is linked to linear regression. We also create its semisupervised counterpart and then extend it to handle missing data. We prove an equivalence to linear regression and run experiments for each extension of the factor analysis model. The resulting algorithms are either closed-form solutions or expectation-maximization (EM) algorithms. The latter are linked to information theory through the optimization of a function containing a Kullback-Leibler (KL) divergence or the entropy of a random variable.
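To illustrate the failure of the closed-form solution in the 'more predictors than samples' case and the two standard remedies named above, here is a minimal NumPy sketch; the toy data, dimensions, and regularization strength are illustrative assumptions, not from the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 10, 50                          # more predictors (p) than samples (n)
    X = rng.standard_normal((n, p))
    y = rng.standard_normal(n)

    # Normal-equations solution: X^T X is p x p but has rank at most n < p,
    # so it is singular and np.linalg.inv(X.T @ X) would fail or return
    # numerically meaningless results.

    # Remedy 1: Moore-Penrose pseudoinverse (minimum-norm least-squares solution).
    w_pinv = np.linalg.pinv(X) @ y

    # Remedy 2: L2 (ridge) regularization, which makes X^T X + lam*I invertible.
    lam = 1.0
    w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

    print(np.linalg.norm(X @ w_pinv - y))   # ~0: the pseudoinverse interpolates
    print(np.linalg.norm(X @ w_ridge - y))  # small but nonzero residual

The paper's alternative is to avoid this inverse altogether by deriving the regression from a factor analysis model, whose Gaussian density can be fitted even when p > n.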

Keywords: factor analysis; linear regression; missing data; more predictors than samples; semisupervised regression.