lp-lq penalty for sparse linear and sparse multiple kernel multitask learning

IEEE Trans Neural Netw. 2011 Aug;22(8):1307-20. doi: 10.1109/TNN.2011.2157521.

Abstract

Recently, there has been much interest in the multitask learning (MTL) problem under the constraint that tasks should share a common sparsity profile. Such a problem can be addressed through a regularization framework in which the regularizer induces a joint-sparsity pattern across task decision functions. We follow this principled framework and focus on l(p)-l(q) (with 0 ≤ p ≤ 1 and 1 ≤ q ≤ 2) mixed norms as sparsity-inducing penalties. Our motivation for addressing such a larger class of penalties is to adapt the penalty to the problem at hand, thus leading to better performance and better sparsity patterns. For solving the problem in the general multiple kernel case, we first derive a variational formulation of the l(1)-l(q) penalty, which helps us in proposing an alternating optimization algorithm. Although very simple, this algorithm provably converges to the global minimum of the l(1)-l(q)-penalized problem. For the linear case, we extend existing work on accelerated proximal gradient methods to this penalty. Our contribution in this context is an efficient scheme for computing the l(1)-l(q) proximal operator. Then, for the more general nonconvex case, when p < 1, we solve the resulting problem through a majorization-minimization approach. The resulting algorithm is an iterative scheme that, at each iteration, solves a weighted l(1)-l(q) sparse MTL problem. Empirical evidence from a toy dataset and from real-world datasets dealing with brain-computer interface single-trial electroencephalogram classification and protein subcellular localization shows the benefit of the proposed approaches and algorithms.
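The following is a minimal, illustrative sketch (not the authors' implementation) of two ideas mentioned in the abstract, under simplifying assumptions: the l(1)-l(q) proximal operator in the special case q = 2, where it reduces to standard group soft-thresholding, and a majorization-minimization (MM) loop for p < 1 that repeatedly solves a reweighted l(1)-l(2) problem via proximal gradient steps. Function names, the choice p = 0.5, step sizes, and stopping rules are hypothetical.

```python
import numpy as np


def prox_l1_l2(W, threshold):
    """Group soft-thresholding: prox of threshold * sum_j ||W[j, :]||_2.

    Rows of W index features; columns index tasks, so each row is shrunk
    jointly, which induces a sparsity profile shared across tasks.
    """
    row_norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - threshold / np.maximum(row_norms, 1e-12))
    return scale * W


def reweighted_l1_l2(X_list, y_list, lam, p=0.5, n_outer=10, n_inner=200, eps=1e-3):
    """Illustrative MM scheme for an l(p)-l(2) penalty with 0 < p < 1.

    Each outer iteration majorizes the concave penalty ||w_j||_2^p by a
    weighted l(1)-l(2) term, which is then handled by proximal gradient
    (ISTA-style) steps. X_list / y_list hold per-task designs and targets.
    """
    n_features = X_list[0].shape[1]
    n_tasks = len(X_list)
    W = np.zeros((n_features, n_tasks))
    weights = np.ones(n_features)

    # Step size from the largest per-task Lipschitz constant of the squared loss.
    step = 1.0 / max(np.linalg.norm(X, 2) ** 2 for X in X_list)

    for _ in range(n_outer):
        for _ in range(n_inner):
            # Gradient of the squared loss, computed task by task.
            grad = np.column_stack([
                X.T @ (X @ W[:, t] - y)
                for t, (X, y) in enumerate(zip(X_list, y_list))
            ])
            # Gradient step followed by a row-wise weighted prox step.
            W = prox_l1_l2(W - step * grad, step * lam * weights[:, None])
        # MM weight update: gradient of ||.||_2^p at the current row norms.
        row_norms = np.linalg.norm(W, axis=1)
        weights = p * (row_norms + eps) ** (p - 1.0)
    return W
```

With p = 1 and constant unit weights, the inner loop alone corresponds to the convex l(1)-l(2) (group-lasso-type) MTL problem; the outer reweighting is what emulates the nonconvex p < 1 penalty.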

Publication types

  • Comparative Study
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Artificial Intelligence*
  • Databases, Factual / classification
  • Linear Models*
  • Pattern Recognition, Automated / methods
  • Psychomotor Performance*