Uncertainty propagation for dropout-based Bayesian neural networks

Neural Netw. 2021 Dec:144:394-406. doi: 10.1016/j.neunet.2021.09.005. Epub 2021 Sep 9.

Abstract

Uncertainty evaluation is a core technique when deep neural networks (DNNs) are used in real-world problems. In practical applications, we often encounter unexpected samples that have not been seen in the training process. Not only achieving high prediction accuracy but also detecting uncertain data is important for safety-critical systems. In statistics and machine learning, Bayesian inference has been exploited for uncertainty evaluation. Bayesian neural networks (BNNs) have recently attracted considerable attention in this context, because a DNN trained with dropout can be interpreted as a Bayesian method. Based on this interpretation, several methods to calculate the Bayes predictive distribution for DNNs have been developed. Although the Monte-Carlo method called MC dropout is a popular method for uncertainty evaluation, it requires many repeated feed-forward calculations of the DNN with randomly sampled weight parameters. To overcome this computational issue, we propose a sampling-free method to evaluate uncertainty. Our method converts a neural network trained using dropout into the corresponding Bayesian neural network via variance propagation. It is applicable not only to feed-forward NNs but also to recurrent NNs such as LSTMs. We report the computational efficiency and statistical reliability of our method in numerical experiments on language modeling with RNNs and out-of-distribution detection with DNNs.
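To illustrate the contrast the abstract draws, the following is a minimal NumPy sketch for a single linear layer with Bernoulli dropout on its input: MC dropout estimates the predictive mean and variance by averaging many stochastic forward passes, whereas variance propagation computes the same two moments analytically in one pass. All names and the toy setup are illustrative assumptions, not the paper's code; the paper's method propagates moments through entire trained networks, including LSTMs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: one linear layer, dropout applied to its input.
d_in, d_out = 8, 3
W = rng.normal(size=(d_out, d_in))   # trained weights (random here for illustration)
x = rng.normal(size=d_in)            # a single input vector
keep = 0.8                           # keep probability of the Bernoulli mask

# --- MC dropout: many stochastic forward passes ---
T = 20000
samples = np.empty((T, d_out))
for t in range(T):
    mask = rng.binomial(1, keep, size=d_in)  # m_i ~ Bernoulli(keep)
    samples[t] = W @ (mask * x)
mc_mean = samples.mean(axis=0)
mc_var = samples.var(axis=0)

# --- Sampling-free variance propagation: one deterministic pass ---
# For independent masks: E[m_i x_i] = keep * x_i,
#                        Var[m_i x_i] = keep * (1 - keep) * x_i**2.
# A linear layer maps means through W and variances through W**2.
vp_mean = W @ (keep * x)
vp_var = (W ** 2) @ (keep * (1 - keep) * x ** 2)

print(np.max(np.abs(mc_mean - vp_mean)))  # small: MC estimate converges to vp_mean
print(np.max(np.abs(mc_var - vp_var)))    # small: MC estimate converges to vp_var
```

The sampling-free pass costs one extra matrix-vector product instead of T forward passes, which is the computational saving the abstract refers to; extending this moment-matching through nonlinearities and recurrent connections is what the paper develops.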

Keywords: LSTM; Out-of-distribution; Sampling-free method; Uncertainty evaluation; Variance propagation.

MeSH terms

  • Bayes Theorem
  • Monte Carlo Method
  • Neural Networks, Computer*
  • Reproducibility of Results
  • Uncertainty