Joint and Direct Optimization for Dictionary Learning in Convolutional Sparse Representation

IEEE Trans Neural Netw Learn Syst. 2020 Feb;31(2):559-573. doi: 10.1109/TNNLS.2019.2906074. Epub 2019 Apr 19.

Abstract

Convolutional sparse coding (CSC) is a useful tool in many image and audio applications. Maximizing the performance of CSC requires that the dictionary, which stores the features of the signals, be learned from real data. The resulting convolutional dictionary learning (CDL) problem is formulated within a nonconvex, nonsmooth optimization framework. Most existing CDL solvers alternately update the coefficients and the dictionary in an iterative manner. However, these approaches are prone to running redundant iterations, and their convergence properties are difficult to analyze. Moreover, most of these methods approximate the original nonconvex sparsity-inducing function with a convex regularizer to improve computational efficiency. This approximation may yield nonsparse representations and thereby degrade the performance of downstream applications. In this paper, we handle the nonconvex, nonsmooth constraints of the original CDL problem directly using a modified forward-backward splitting approach, in which the coefficients and the dictionary are updated simultaneously in each iteration. We also propose a novel parameter adaptation scheme that accelerates the algorithm in obtaining a usable dictionary, and we prove the convergence of the resulting method. In addition, we show that the proposed approach is amenable to parallel processing, which further reduces the computing time required to reach convergence. Experimental results demonstrate that our method reaches its convergence point in less time than existing methods while attaining a smaller final functional value. We also applied the dictionaries learned by the proposed and existing methods to a signal-separation task; the dictionary learned by the proposed approach yields performance superior to that of the compared methods.
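
To make the general idea of a joint forward-backward update concrete, the following is a minimal, illustrative sketch, not the paper's algorithm: a single 1-D signal, a gradient (forward) step on the data-fit term taken jointly for the coefficient maps and the filters, followed by the nonconvex proximal/projection (backward) steps, i.e., hard thresholding of the coefficient maps and projection of each filter onto the unit ball. The function names, fixed step sizes, and the per-map sparsity level are assumptions for illustration; the paper's modified forward-backward splitting, parameter adaptation scheme, and convergence analysis are not reproduced here.

    import numpy as np

    def hard_threshold(x, k):
        """Proximal step for an l0 sparsity constraint: keep the k largest-magnitude entries."""
        flat = x.ravel()
        k = min(k, flat.size)
        out = np.zeros_like(flat)
        if k > 0:
            keep = np.argpartition(np.abs(flat), -k)[-k:]
            out[keep] = flat[keep]
        return out.reshape(x.shape)

    def project_unit_ball(d):
        """Project a filter onto the unit l2 ball (standard dictionary constraint)."""
        return d / max(np.linalg.norm(d), 1.0)

    def joint_fbs_cdl(s, num_filters=4, filter_len=16, sparsity=20,
                      step_x=1e-2, step_d=1e-2, iters=300, seed=0):
        """Toy joint forward-backward CDL sketch for one 1-D signal s.

        Fixed small step sizes are used for simplicity; the paper instead
        adapts its parameters and proves convergence for its scheme.
        """
        rng = np.random.default_rng(seed)
        coef_len = len(s) - filter_len + 1          # 'full' convolution model
        D = np.array([project_unit_ball(d)
                      for d in rng.standard_normal((num_filters, filter_len))])
        X = np.zeros((num_filters, coef_len))

        for _ in range(iters):
            # Reconstruction: sum_m x_m * d_m, and its residual.
            recon = sum(np.convolve(X[m], D[m]) for m in range(num_filters))
            r = recon - s
            # Forward step: gradients of 0.5*||recon - s||^2 w.r.t. X and D,
            # both expressed as cross-correlations with the residual.
            grad_X = np.array([np.correlate(r, D[m], mode="valid") for m in range(num_filters)])
            grad_D = np.array([np.correlate(r, X[m], mode="valid") for m in range(num_filters)])
            # Backward step: nonconvex prox (hard threshold) and unit-ball projection,
            # applied to the jointly updated variables.
            X = np.array([hard_threshold(x, sparsity) for x in X - step_x * grad_X])
            D = np.array([project_unit_ball(d) for d in D - step_d * grad_D])
        return D, X

    if __name__ == "__main__":
        # Toy usage on a random signal; prints the relative reconstruction error.
        s = np.random.default_rng(1).standard_normal(256)
        D, X = joint_fbs_cdl(s)
        recon = sum(np.convolve(X[m], D[m]) for m in range(D.shape[0]))
        print("relative reconstruction error:", np.linalg.norm(recon - s) / np.linalg.norm(s))

The key structural point the sketch illustrates is that both blocks of variables receive their gradient and proximal steps within the same iteration, rather than being updated in alternation as in most existing CDL solvers.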