Prior Distribution and Entropy in Computer Adaptive Testing Ability Estimation through MAP or EAP

Entropy (Basel). 2022 Dec 27;25(1):50. doi: 10.3390/e25010050.

Abstract

To estimate a latent trait (for instance, ability) in a computer adaptive testing (CAT) framework, the results obtained from a model must relate directly to the examinee's responses to the set of items presented. This item set is calibrated beforehand so that the next item to present to the examinee can be selected. Some useful models are naturally based on conditional probability, so as to incorporate the hits and misses observed so far. In this paper, we integrate an experimental part, which gathers information on the examinee's academic performance, with a theoretical contribution based on maximum entropy. Academic performance index functions are built to support the experimental part and to explain under what conditions constrained prior distributions can be used. We also highlight that heuristic prior distributions may not work properly in all likely cases, and indicate when personalized prior distributions should be used instead. Finally, the performance index functions, derived from current experimental studies and historical records, are integrated into a theoretical framework based on entropy maximization and its relationship with the CAT process.

Keywords: Bayesian inference; CAT; Kullback–Leibler divergence; entropy; expectation a posteriori; item characteristic curve; item response theory; likelihood function; maximum a posteriori; performance index function.
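As a rough illustration of the MAP and EAP ability estimation the abstract refers to, the sketch below computes both estimates under a two-parameter logistic (2PL) item response model with a normal prior, using simple grid quadrature. The 2PL form, the item parameters, and the grid settings are assumptions chosen for illustration; they are not the specific models or priors developed in the paper.

```python
import numpy as np

def irt_2pl(theta, a, b):
    """Probability of a correct response under a 2PL item response model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def posterior_grid(responses, a, b, prior_mean=0.0, prior_sd=1.0, n=121):
    """Normalized posterior of ability theta on a quadrature grid."""
    theta = np.linspace(-4.0, 4.0, n)
    # Normal prior density (unnormalized; normalization cancels below)
    prior = np.exp(-0.5 * ((theta - prior_mean) / prior_sd) ** 2)
    # Likelihood of the observed hits (u=1) / misses (u=0) across items
    like = np.ones_like(theta)
    for u, ai, bi in zip(responses, a, b):
        p = irt_2pl(theta, ai, bi)
        like *= p ** u * (1.0 - p) ** (1 - u)
    post = prior * like
    return theta, post / post.sum()

def eap(responses, a, b, **kw):
    """Expectation a posteriori: posterior mean of theta."""
    theta, post = posterior_grid(responses, a, b, **kw)
    return float(np.sum(theta * post))

def map_est(responses, a, b, **kw):
    """Maximum a posteriori: mode of the gridded posterior."""
    theta, post = posterior_grid(responses, a, b, **kw)
    return float(theta[np.argmax(post)])
```

With three correct answers to items of average difficulty, both estimates shift above the prior mean; with three misses, they shift below it, showing how the prior is updated item by item as a CAT session proceeds.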

Grants and funding

This research received no external funding.