On the Treatment of Optimization Problems With L1 Penalty Terms via Multiobjective Continuation

IEEE Trans Pattern Anal Mach Intell. 2022 Nov;44(11):7797-7808. doi: 10.1109/TPAMI.2021.3114962. Epub 2022 Oct 4.

Abstract

We present a novel algorithm that allows us to gain detailed insight into the effects of sparsity in linear and nonlinear optimization. Sparsity is of great importance in many scientific areas such as image and signal processing, medical imaging, compressed sensing, and machine learning, as it ensures robustness against noisy data and yields models that are easier to interpret due to the small number of relevant terms. It is common practice to enforce sparsity by adding the l1-norm as a penalty term. In order to gain a better understanding and to allow for an informed model selection, we directly solve the corresponding multiobjective optimization problem (MOP) that arises when minimizing the main objective and the l1-norm simultaneously. As this MOP is in general non-convex for nonlinear objectives, the penalty method fails to recover all optimal compromises. To avoid this issue, we present a continuation method specifically tailored to MOPs with two objective functions, one of which is the l1-norm. Our method can be seen as a generalization of homotopy methods for linear regression problems to the nonlinear case. Several numerical examples, including neural network training, demonstrate our theoretical findings and the additional insight gained by this multiobjective approach.
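Spelled out, the biobjective problem and its usual scalarized penalty counterpart can be written as follows (the symbols f for the main objective and lambda for the penalty weight are illustrative notation, not taken from the paper):

    \min_{x \in \mathbb{R}^n}
      \begin{pmatrix} f(x) \\ \|x\|_1 \end{pmatrix}
    \qquad \text{versus} \qquad
    \min_{x \in \mathbb{R}^n} \; f(x) + \lambda \|x\|_1,
    \quad \lambda \ge 0.

For convex f, sweeping lambda over all nonnegative values recovers the entire Pareto front of the biobjective problem. For non-convex f, compromises lying on non-convex parts of the front are not minimizers of the penalized problem for any lambda, which is what motivates solving the MOP directly via continuation.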
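For the linear least-squares case that the homotopy remark refers to, the idea is well established: the LASSO solution path is piecewise linear in the penalty weight and can be traced across all weights at once. The following is a minimal Python sketch of this linear special case using scikit-learn's lasso_path; the toy data and parameter values are illustrative placeholders, not the paper's experiments:

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import lasso_path

    # Toy regression problem; in the linear case the set of optimal
    # compromises between data fit and sparsity can be traced by
    # homotopy / path algorithms.
    X, y = make_regression(n_samples=100, n_features=20, noise=5.0,
                           random_state=0)

    # lasso_path solves min_w (1/(2n)) * ||y - Xw||_2^2 + alpha * ||w||_1
    # for a decreasing sequence of penalty weights alpha.
    alphas, coefs, _ = lasso_path(X, y, n_alphas=50)

    # Evaluate both objectives along the path; together they trace the
    # Pareto front of the biobjective problem (least squares vs. l1-norm).
    n = X.shape[0]
    for alpha, w in zip(alphas, coefs.T):
        fit = 0.5 / n * np.sum((y - X @ w) ** 2)  # data-fit objective
        l1 = np.sum(np.abs(w))                    # l1-norm (sparsity) objective
        print(f"alpha={alpha:8.4f}  fit={fit:10.4f}  l1={l1:8.4f}  "
              f"nnz={np.count_nonzero(w)}")

Because the least-squares objective is convex, this scalarized sweep already enumerates all optimal compromises; the continuation method presented in the paper targets the nonlinear, non-convex setting, where such a sweep misses parts of the Pareto front.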