Structured Ensembles: An approach to reduce the memory footprint of ensemble methods

Neural Netw. 2021 Dec;144:407-418. doi: 10.1016/j.neunet.2021.09.007. Epub 2021 Sep 16.

Abstract

In this paper, we propose a novel ensembling technique for deep neural networks that drastically reduces the required memory compared to alternative approaches. In particular, we extract multiple sub-networks from a single, untrained neural network by solving an end-to-end optimization task that combines differentiable scaling over the original architecture with multiple regularization terms favouring the diversity of the ensemble. Since our proposal aims to detect and extract sub-structures, we call it Structured Ensemble. In a large experimental evaluation, we show that our method achieves higher or comparable accuracy to competing methods while requiring significantly less storage. In addition, we evaluate our ensembles in terms of predictive calibration and uncertainty, showing that they compare favourably with the state-of-the-art. Finally, we draw a link to the continual learning literature and propose a modification of our framework that handles continuous streams of tasks with a sub-linear memory cost. We compare against a number of alternative strategies for mitigating catastrophic forgetting, highlighting advantages in terms of average accuracy and memory.
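To illustrate the general mechanism the abstract describes, namely differentiable scaling gates over a shared architecture combined with a regularizer that favours diversity among members, the sketch below shows one minimal way such an idea could be implemented in PyTorch. It is an assumption-laden illustration, not the authors' code: the names GatedEnsembleLayer, diversity_penalty, and lambda_div are hypothetical, and the cosine-overlap penalty is only one possible choice of diversity term.

```python
# Hypothetical sketch (not the paper's implementation): per-member differentiable
# gates over a shared layer, plus a penalty discouraging overlap between members.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedEnsembleLayer(nn.Module):
    """A shared linear layer whose output units are scaled by per-member gates."""

    def __init__(self, in_features, out_features, n_members):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)      # weights shared by all members
        # One learnable gate vector per ensemble member (pre-sigmoid logits).
        self.gate_logits = nn.Parameter(torch.randn(n_members, out_features))

    def forward(self, x, member):
        gates = torch.sigmoid(self.gate_logits[member])          # soft, differentiable scaling
        return self.linear(x) * gates

    def diversity_penalty(self):
        """Average pairwise cosine similarity between members' gate vectors."""
        g = F.normalize(torch.sigmoid(self.gate_logits), dim=1)  # (n_members, out_features)
        sim = g @ g.t()
        off_diag = sim - torch.diag(torch.diag(sim))              # zero the self-similarities
        return off_diag.sum() / (g.size(0) * (g.size(0) - 1))


# Toy usage: train members jointly; the penalty pushes their gates apart, so that
# thresholding the gates afterwards would extract distinct sub-networks.
layer = GatedEnsembleLayer(in_features=20, out_features=64, n_members=4)
head = nn.Linear(64, 10)
opt = torch.optim.Adam(list(layer.parameters()) + list(head.parameters()), lr=1e-3)
lambda_div = 0.1                                                  # weight of the diversity term

x = torch.randn(32, 20)
y = torch.randint(0, 10, (32,))
for member in range(4):
    logits = head(F.relu(layer(x, member)))
    loss = F.cross_entropy(logits, y) + lambda_div * layer.diversity_penalty()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch, memory savings would come from keeping only the units whose gates survive a threshold for each member, so the ensemble shares a single backbone rather than storing several full networks; how the paper actually parameterizes and prunes the gates is described in the full text.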

Keywords: Continual learning; Deep learning; Ensemble; Neural networks; Pruning; Structured pruning.

MeSH terms

  • Learning*
  • Neural Networks, Computer*
  • Uncertainty