Co-Learning Bayesian Optimization

IEEE Trans Cybern. 2022 Sep;52(9):9820-9833. doi: 10.1109/TCYB.2022.3168551. Epub 2022 Aug 18.

Abstract

Bayesian optimization (BO) is well known to be sample efficient for solving black-box problems. However, BO algorithms may get stuck in suboptimal solutions even with plenty of samples. Intrinsically, this suboptimality of BO can be attributed to the poor surrogate accuracy of the trained Gaussian process (GP), particularly in the regions where the optimal solutions are located. Hence, we propose to build multiple GP models, rather than a single GP surrogate, that complement each other, thereby alleviating the suboptimality of BO. Nevertheless, according to the bias-variance tradeoff, the prediction errors of the individual models can increase as model diversity increases, which may lead to even worse overall surrogate accuracy. On the other hand, based on the theory of Rademacher complexity, it has been proven that exploiting the agreement of models on unlabeled data reduces the complexity of the hypothesis space, so the required surrogate accuracy can be achieved with fewer samples. The value of such model agreement has been extensively demonstrated in co-training-style algorithms, which boost model accuracy with only a small number of labeled samples. Inspired by the above, we propose a novel BO algorithm, termed co-learning BO (CLBO), that exploits both model diversity and model agreement on unlabeled information to improve the overall surrogate accuracy with limited samples, thereby achieving more efficient global optimization. Tests on five numerical toy problems and three engineering benchmarks demonstrate the effectiveness of the proposed CLBO.
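To make the idea concrete, below is a minimal, illustrative sketch of ensemble-based BO in the spirit of the abstract: several diverse GP surrogates (diversity induced here by different kernels) whose agreement on unlabeled (unevaluated) candidate points modulates the acquisition value. The toy objective, the kernel choices, the disagreement penalty, and the acquisition rule are all assumptions for illustration; this is not the authors' actual CLBO algorithm.

```python
# Ensemble-based BO sketch: diverse GP surrogates plus an agreement-weighted
# acquisition. All design choices here are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic


def black_box(x):
    """Toy 1-D objective to be minimized (stand-in for the true black box)."""
    return np.sin(3 * x) + 0.5 * x ** 2


def expected_improvement(mu, sigma, y_best):
    """Standard expected-improvement acquisition for minimization."""
    sigma = np.maximum(sigma, 1e-12)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)


rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(5, 1))  # initial design
y = black_box(X).ravel()

# Diverse surrogates: different kernels play the role of "model diversity".
kernels = [RBF(), Matern(nu=1.5), RationalQuadratic()]

for it in range(20):
    models = [GaussianProcessRegressor(kernel=k, normalize_y=True).fit(X, y)
              for k in kernels]

    # Unlabeled candidate points on which cross-model agreement is assessed.
    cand = np.linspace(-2, 2, 400).reshape(-1, 1)
    preds = np.array([m.predict(cand, return_std=True) for m in models])
    mus, sigmas = preds[:, 0, :], preds[:, 1, :]

    # Ensemble-mean EI, penalized by cross-model disagreement so that points
    # where the surrogates agree are trusted more -- one plausible way to
    # exploit "agreement on unlabeled information".
    ei = expected_improvement(mus.mean(axis=0), sigmas.mean(axis=0), y.min())
    disagreement = mus.std(axis=0)
    score = ei / (1.0 + disagreement)

    x_next = cand[np.argmax(score)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, black_box(x_next).ravel())

print(f"best f = {y.min():.4f} at x = {X[np.argmin(y)].item():.4f}")
```

One design point worth noting in this sketch: the disagreement penalty trades off against EI's exploration bonus, so a point is favored when the ensemble is both optimistic and internally consistent, which loosely mirrors the abstract's combination of model diversity with agreement-driven accuracy gains.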