Regularization Methods Based on the Lq-Likelihood for Linear Models with Heavy-Tailed Errors

Entropy (Basel). 2020 Sep 16;22(9):1036. doi: 10.3390/e22091036.

Abstract

We propose regularization methods for linear models based on the Lq-likelihood, a generalization of the log-likelihood that uses a power function. Regularization methods are widely used for estimation in the normal linear model, but heavy-tailed errors are also important in statistics and machine learning. We therefore assume q-normal distributions for the errors in linear models; the q-normal distribution is heavy-tailed and is defined through a power function rather than the exponential function. We find that the proposed methods for linear models with q-normal errors coincide with the ordinary regularization methods applied to the normal linear model. Because they are penalized least-squares methods, the proposed methods can be computed with existing packages. Numerical experiments show that the methods perform well even when the error is heavy-tailed, and that they work well in model selection and generalization, especially when the error is only slightly heavy-tailed.
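For context, here is a brief sketch of the standard definitions behind these terms, in our own notation (the paper's exact parameterization and normalization may differ). The q-logarithm replaces the logarithm in the likelihood, and the q-normal density replaces the exponential kernel of the normal density with a power function:

```latex
% Standard q-logarithm (Tsallis-type), recovering \ln x as q -> 1:
\[
  \ln_q(x) = \frac{x^{1-q} - 1}{1 - q} \quad (q \neq 1),
  \qquad \lim_{q \to 1} \ln_q(x) = \ln x .
\]
% Lq-likelihood: sum of q-logarithms of the density, generalizing
% the log-likelihood:
\[
  L_q(\theta) = \sum_{i=1}^{n} \ln_q f(y_i \mid x_i;\, \theta) .
\]
% q-normal (q-Gaussian) error density: a power function in place of
% the exponential; heavy-tailed for 1 < q < 3:
\[
  f(\varepsilon) \propto \bigl[\, 1 + (q - 1)\, \beta\, \varepsilon^2 \,\bigr]^{\frac{1}{1-q}},
  \qquad 1 < q < 3 .
\]
```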

Keywords: least absolute shrinkage and selection operator (LASSO); minimax concave penalty (MCP); power function; q-normal distribution; smoothly clipped absolute deviation (SCAD); sparse estimation.
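As a minimal illustration of the abstract's computational claim (not the authors' code), the Python sketch below fits a LASSO, one of the penalized least-squares methods named in the keywords, to data with heavy-tailed errors using an existing package (scikit-learn). Student-t noise stands in for a q-normal error here; a q-Gaussian with 1 < q < 3 corresponds to a Student-t with nu = (3 - q)/(q - 1) degrees of freedom. The sparse design and the penalty level alpha are illustrative choices, not values from the paper.

```python
# Sketch: penalized least squares (LASSO) under heavy-tailed errors,
# computed with an off-the-shelf solver, as the abstract suggests.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))

beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]            # sparse true coefficients (illustrative)

# Student-t errors with 3 df: heavy-tailed, matching a q-normal with q = 1.5.
errors = rng.standard_t(df=3, size=n)
y = X @ beta + errors

# Ordinary L1-penalized least squares; alpha is an illustrative penalty level.
model = Lasso(alpha=0.1).fit(X, y)
print(np.round(model.coef_, 2))        # nonzero entries should track beta[:3]
```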