Testing the Value of Probability Forecasts for Calibrated Combining

Int J Forecast. 2015 Jan;31(1):113-129. doi: 10.1016/j.ijforecast.2014.03.005.

Abstract

We combine the probability forecasts of a real GDP decline from the U.S. Survey of Professional Forecasters, after trimming the forecasts that do not have "value", as measured by the Kuipers Skill Score and in the sense of Merton (1981). For this purpose, we use a simple test to evaluate the probability forecasts. The proposed test does not require the probabilities to be converted to binary forecasts before testing, and it accommodates serial correlation and skewness in the forecasts. We find that the number of forecasters making valuable forecasts decreases sharply as the horizon increases. The beta-transformed linear pool combination scheme, based on the valuable individual forecasts, is shown to outperform the simple average for all horizons on a number of performance measures, including calibration and sharpness. The test helps to identify good forecasters ex ante, and thus improves the accuracy of the combined forecasts.
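
For a binarized forecast, the Kuipers Skill Score reduces to the hit rate minus the false-alarm rate, and Merton's (1981) notion of value corresponds to this difference being positive. The sketch below illustrates that relationship only; it assumes a fixed 0.5 conversion threshold, which is exactly the binarization step the article's own test avoids, and the function and variable names are illustrative rather than taken from the paper.

    import numpy as np

    def kuipers_skill_score(probs, outcomes, threshold=0.5):
        """Kuipers Skill Score for probability forecasts binarized at
        `threshold`: hit rate minus false-alarm rate. A positive score
        corresponds to forecast value in the sense of Merton (1981)."""
        calls = np.asarray(probs, dtype=float) >= threshold
        y = np.asarray(outcomes, dtype=int)
        hits = np.sum(calls & (y == 1))
        misses = np.sum(~calls & (y == 1))
        false_alarms = np.sum(calls & (y == 0))
        correct_rej = np.sum(~calls & (y == 0))
        hit_rate = hits / (hits + misses)
        false_alarm_rate = false_alarms / (false_alarms + correct_rej)
        return hit_rate - false_alarm_rate

A forecaster is retained for combining only when the score is significantly positive; the article's test establishes this directly on the reported probabilities, while allowing for serial correlation and skewness.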
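
The beta-transformed linear pool forms a weighted average of the individual probabilities and then passes it through a beta distribution function, whose two parameters can recalibrate an over- or under-confident pool (with both parameters equal to one, the simple linear pool is recovered). A minimal sketch follows, assuming equal weights over the retained forecasters and maximum-likelihood fitting of the beta parameters; these choices and all names are illustrative, not the article's exact estimation procedure.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import beta as beta_dist

    def fit_beta_pool(prob_matrix, outcomes):
        """Fit the beta transform of an equally weighted linear pool by
        maximizing the Bernoulli log-likelihood.

        prob_matrix: (T, N) array of event probabilities from N forecasters.
        outcomes:    (T,) array of 0/1 event indicators.
        Returns the fitted (alpha, beta) of the transforming beta CDF."""
        pooled = prob_matrix.mean(axis=1)          # equally weighted pool
        eps = 1e-9

        def neg_loglik(params):
            a, b = np.exp(params)                  # keep alpha, beta > 0
            p = np.clip(beta_dist.cdf(pooled, a, b), eps, 1 - eps)
            return -np.sum(outcomes * np.log(p)
                           + (1 - outcomes) * np.log(1 - p))

        res = minimize(neg_loglik, x0=np.zeros(2), method="Nelder-Mead")
        return np.exp(res.x)

    def beta_pool_forecast(prob_matrix, a, b):
        """Combined probability: beta CDF applied to the linear pool."""
        return beta_dist.cdf(prob_matrix.mean(axis=1), a, b)

Because the beta transform can both sharpen and recalibrate the pooled probabilities, this scheme can improve on the simple average even when the individual forecasts are already screened for value.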