Optimal Query Selection Using Multi-Armed Bandits

IEEE Signal Process Lett. 2018 Dec;25(12):1870-1874. doi: 10.1109/LSP.2018.2878066. Epub 2018 Oct 26.

Abstract

Query selection for latent variable estimation is conventionally performed by opting for observations with low noise or by optimizing information-theoretic objectives that reduce estimated uncertainty based on the current best estimate. In these approaches, the system typically makes a decision by leveraging the currently available information about the state. However, trusting the current best estimate results in poor query selection when the truth is far from that estimate, which degrades both the speed and the accuracy of the latent variable estimation procedure. We introduce a novel sequential adaptive action value function for query selection using the multi-armed bandit (MAB) framework, which allows us to find a tractable solution. For this adaptive-sequential query selection method, we analytically show: (i) performance improvement in query selection for a dynamical system, and (ii) the conditions under which the model outperforms competitors. We also present favorable empirical assessments of this method's performance relative to alternative methods, using both Monte Carlo simulations and human-in-the-loop experiments with a brain-computer interface (BCI) typing system in which a language model provides the prior information.
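To make the MAB framing of query selection concrete, the following is a minimal, illustrative sketch of a generic UCB1-style selection loop over candidate queries. It is not the paper's adaptive action value function; the reward model, `simulate_query_response`, and all parameter values are hypothetical placeholders assumed only for this example.

```python
# Illustrative sketch only: generic UCB1 arm selection standing in for
# MAB-style query selection. Rewards and the environment stub below are
# hypothetical, not the method described in the paper.
import math
import random

def simulate_query_response(query_index, true_best=2, noise=0.3):
    """Hypothetical environment: reward is higher when the issued query
    is informative about the latent state (here, arm `true_best`)."""
    base = 1.0 if query_index == true_best else 0.4
    return base + random.gauss(0.0, noise)

def ucb1_query_selection(num_queries=5, num_rounds=200):
    counts = [0] * num_queries      # times each candidate query was issued
    values = [0.0] * num_queries    # running mean reward per query
    for t in range(1, num_rounds + 1):
        if t <= num_queries:        # issue each query once to initialize
            arm = t - 1
        else:                       # UCB1: mean reward + exploration bonus
            arm = max(
                range(num_queries),
                key=lambda a: values[a] + math.sqrt(2.0 * math.log(t) / counts[a]),
            )
        reward = simulate_query_response(arm)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return counts, values

if __name__ == "__main__":
    counts, values = ucb1_query_selection()
    print("query selection counts:", counts)
    print("estimated query values:", [round(v, 2) for v in values])
```

Under this setup, the exploration bonus keeps the selector probing queries whose value is still uncertain, which is the behavior the MAB framing exploits when a misleading prior would otherwise lock a greedy, estimate-trusting strategy onto uninformative queries.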

Keywords: Misleading prior; Multi-armed bandit framework; Query optimization; Subset selection.