Designing Interpretable Recurrent Neural Networks for Video Reconstruction via Deep Unfolding

IEEE Trans Image Process. 2021;30:4099-4113. doi: 10.1109/TIP.2021.3069296. Epub 2021 Apr 8.

Abstract

Deep unfolding methods design deep neural networks as learned variants of optimization algorithms by unrolling their iterations. Such networks have been shown to achieve faster convergence and higher accuracy than the original optimization methods. In this line of research, this paper presents novel interpretable deep recurrent neural networks (RNNs), designed by unfolding iterative algorithms that solve the task of sequential signal reconstruction (in particular, video reconstruction). The proposed networks are designed by leveraging the fact that patches of video frames have a sparse representation and that the temporal difference between consecutive representations is also sparse. Specifically, we design an interpretable deep RNN (coined reweighted-RNN) by unrolling the iterations of a proximal method that solves a reweighted version of the ℓ1-ℓ1 minimization problem. Due to the underlying minimization model, our reweighted-RNN has a different thresholding function (i.e., a different activation function) for each hidden unit in each layer. In this way, it has higher network expressivity than existing deep unfolding RNN models. We also present the derivative ℓ1-ℓ1-RNN model, which is obtained by unfolding a proximal method for the ℓ1-ℓ1 minimization problem. We apply the proposed interpretable RNNs to the task of reconstructing video frames from low-dimensional measurements, that is, sequential video frame reconstruction. Experimental results on various datasets demonstrate that the proposed deep RNNs outperform various RNN models.
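To make the unrolling idea concrete, the following is a minimal, simplified sketch (not the paper's actual reweighted-RNN) of one time step of a proximal-gradient iteration unrolled into RNN layers. All names (`soft_threshold`, `unrolled_step`, the matrices `A` and `W`, the per-layer, per-unit thresholds `lambdas`) are illustrative assumptions; the paper's full model additionally enforces sparsity of the temporal difference between codes and learns reweighting terms, which are omitted here.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the l1 norm: elementwise shrinkage.
    # lam may be a vector, giving a different threshold per hidden unit,
    # which mimics the per-unit activation functions described above.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def unrolled_step(y, h_prev, A, W, lambdas, step=0.1):
    """One time step of an unrolled proximal-gradient recurrence (sketch).

    y       : measurement vector for the current frame
    h_prev  : sparse code of the previous frame (recurrent state)
    A       : measurement/dictionary matrix (assumed known here)
    W       : learned recurrent weights propagating the previous code
    lambdas : array of shape (n_layers, n_units) -- a distinct
              threshold per layer and per hidden unit
    """
    h = W @ h_prev  # warm-start from the previous frame's representation
    for lam in lambdas:  # each iteration becomes one network layer
        grad = A.T @ (A @ h - y)               # gradient of the data-fidelity term
        h = soft_threshold(h - step * grad, step * lam)
    return h
```

In a learned version, `A`, `W`, and `lambdas` would be trained end-to-end by backpropagation through the unrolled layers, which is what distinguishes deep unfolding from simply running the optimizer.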

MeSH terms

  • Algorithms
  • Deep Learning
  • Humans
  • Image Processing, Computer-Assisted / methods*
  • Neural Networks, Computer*
  • Pedestrians
  • Video Recording*