Mismatched training and test distributions can outperform matched ones

Neural Comput. 2015 Feb;27(2):365-87. doi: 10.1162/NECO_a_00697. Epub 2014 Dec 16.

Abstract

In learning theory, the training and test sets are assumed to be drawn from the same probability distribution. This assumption also guides practice, where matching the training and test distributions is considered desirable. Contrary to conventional wisdom, we show that mismatched training and test distributions in supervised learning can in fact outperform matched distributions in terms of the bottom line: out-of-sample performance, independent of the target function in question. This surprising result has theoretical and algorithmic ramifications, which we discuss.
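To make the claim concrete, here is a minimal toy experiment; it is an illustration under our own assumptions, not the paper's actual setup. A linear target is fit by ordinary least squares from noisy samples, with training inputs drawn from a Gaussian whose spread may differ from that of the test distribution (fixed at N(0, 1)). Under these assumptions, the target function, noise level, and sample sizes below are all arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_trial(train_sigma, n_train=10, n_test=2000, noise=0.5):
    """One draw of the experiment; returns out-of-sample MSE."""
    w, b = 2.0, -1.0                      # illustrative linear target f(x) = 2x - 1
    # Training inputs: N(0, train_sigma^2); test inputs: always N(0, 1).
    x_tr = rng.normal(0.0, train_sigma, n_train)
    y_tr = w * x_tr + b + rng.normal(0.0, noise, n_train)
    # Ordinary least-squares fit of slope and intercept.
    A = np.column_stack([x_tr, np.ones(n_train)])
    w_hat, b_hat = np.linalg.lstsq(A, y_tr, rcond=None)[0]
    # Squared error on the test distribution, against the noiseless target.
    x_te = rng.normal(0.0, 1.0, n_test)
    return np.mean(((w_hat * x_te + b_hat) - (w * x_te + b)) ** 2)

for s in (1.0, 2.0, 3.0):                 # sigma = 1.0 is the matched case
    errs = [run_trial(s) for _ in range(2000)]
    print(f"train sigma = {s:.1f}: mean out-of-sample MSE = {np.mean(errs):.4f}")
```

In this realizable linear case, training inputs spread more widely than the test distribution reduce the variance of the fitted slope without introducing bias, so the average out-of-sample error drops for any linear target, one simple setting in which a mismatched training distribution beats the matched one.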