A new method of Bayesian causal inference in non-stationary environments

PLoS One. 2020 May 22;15(5):e0233559. doi: 10.1371/journal.pone.0233559. eCollection 2020.

Abstract

Bayesian inference is the process of narrowing down the hypotheses (causes) to the one that best explains the observational data (effects). To estimate a cause accurately, a considerable amount of data must be observed over as long a period as possible. However, the target of inference is not always constant. In such cases, a method such as the exponential moving average (EMA) with a discounting rate is used; to respond to a sudden change, the discounting rate must be increased. That is, a trade-off arises: increasing the discounting rate improves the ability to follow changes but reduces the estimation accuracy. Here, we propose an extended Bayesian inference (EBI), wherein human-like causal inference is incorporated. We show that incorporating causal inference introduces both learning and forgetting effects into Bayesian inference. We evaluate the estimation performance of the EBI through a learning task on a dynamically changing Gaussian mixture model. In the evaluation, the EBI performance is compared with those of the EMA and a sequential discounting expectation-maximization algorithm. The EBI was shown to modify the trade-off observed in the EMA.
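
The trade-off described in the abstract can be made concrete with a small numerical illustration. The sketch below is not part of the paper; the toy data, the function name ema_estimate, and the chosen discounting rates are assumptions used purely to show that a small discounting rate yields an accurate but slow-to-adapt estimate, while a large rate adapts quickly but is noisier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-stationary signal: a piecewise-constant mean with a sudden jump
# halfway through, observed under Gaussian noise. This is only a stand-in
# for the paper's dynamically changing environment, not its actual task.
T = 2000
true_mean = np.where(np.arange(T) < T // 2, 0.0, 3.0)
observations = true_mean + rng.normal(0.0, 1.0, size=T)

def ema_estimate(x, alpha):
    """Exponential moving average with discounting rate alpha."""
    est = np.empty_like(x)
    m = x[0]
    for t, xt in enumerate(x):
        m = (1.0 - alpha) * m + alpha * xt  # discount old evidence by (1 - alpha)
        est[t] = m
    return est

for alpha in (0.01, 0.1, 0.5):
    est = ema_estimate(observations, alpha)
    # Accuracy while the mean is constant vs. followability just after the jump.
    steady = slice(T // 4, T // 2)
    after = slice(T // 2, T // 2 + 50)
    steady_rmse = np.sqrt(np.mean((est[steady] - true_mean[steady]) ** 2))
    after_rmse = np.sqrt(np.mean((est[after] - true_mean[after]) ** 2))
    print(f"alpha={alpha:4.2f}  steady-state RMSE={steady_rmse:.3f}  "
          f"RMSE over 50 steps after the change={after_rmse:.3f}")
```

Running this shows the trade-off the paper targets: lowering alpha shrinks the steady-state error but inflates the error immediately after the change, and vice versa. The EBI proposed in the paper is reported to relax this trade-off relative to the plain EMA.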

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Bayes Theorem*
  • Computer Simulation
  • Humans
  • Models, Theoretical
  • Normal Distribution

Grants and funding

This research was supported by the Center of Innovation Program from the Japan Science and Technology Agency (JST) and by JSPS KAKENHI Grant Numbers JP16K01408 and JP17H04696.