An Empirical Study Into Annotator Agreement, Ground Truth Estimation, and Algorithm Evaluation

IEEE Trans Image Process. 2016 Jun;25(6):2557-2572. doi: 10.1109/TIP.2016.2544703. Epub 2016 Mar 21.

Abstract

Although agreement between the annotators who mark feature locations within images has been studied in the past from a statistical viewpoint, little work has attempted to quantify the extent to which this phenomenon affects the evaluation of foreground-background segmentation algorithms. Many researchers use ground truth (GT) in experimentation, and more often than not this GT is derived from one annotator's opinion. How does this difference in opinion affect an algorithm's evaluation? A methodology is applied to four image-processing problems to quantify the inter-annotator variance and to offer insight into the mechanisms behind agreement and the use of GT. It is found that when detecting linear structures, annotator agreement is very low. The agreement in a structure's position can be partially explained through basic image properties. Automatic segmentation algorithms are compared with annotator agreement, and a clear relation is found between the two. Several GT estimation methods are used to infer a number of algorithm performances. It is found that the rank of a detector is highly dependent upon the method used to form the GT, and that although STAPLE and LSML appear to represent the mean of the performance measured using individual annotations, these estimates tend to degrade when there are few annotations or a large variance among them. Furthermore, one of the most commonly adopted combination methods, consensus voting, accentuates more obvious features, resulting in an overestimation of performance. It is concluded that, in some data sets, it is not possible to confidently infer an algorithm ranking when evaluating against a single GT.
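
To illustrate the consensus-voting mechanism mentioned above, the sketch below (a minimal illustration; the function name, array shapes, and threshold are assumptions, not the paper's implementation) combines binary foreground masks from several annotators by per-pixel majority vote. Pixels marked by only a minority of annotators, typically the less obvious structures, are dropped from the estimated GT, which is the behaviour that can inflate measured performance on obvious features.

    import numpy as np

    def consensus_vote(annotations, threshold=0.5):
        """Combine binary annotator masks by per-pixel consensus (majority) voting.

        annotations: list of HxW binary arrays, one per annotator.
        threshold: fraction of annotators that must mark a pixel as
            foreground for it to enter the estimated ground truth.
        """
        stack = np.stack(annotations).astype(float)   # (n_annotators, H, W)
        vote_fraction = stack.mean(axis=0)            # per-pixel agreement level
        return vote_fraction >= threshold             # estimated GT mask

    # Hypothetical example: three annotators mark a short linear structure.
    # Only pixels marked by a majority survive the vote.
    a1 = np.array([[0, 1, 1, 1, 0]])
    a2 = np.array([[0, 0, 1, 1, 1]])
    a3 = np.array([[0, 1, 1, 0, 0]])
    gt = consensus_vote([a1, a2, a3])   # -> [[False, True, True, True, False]]

In this toy case the pixels endorsed by only one annotator vanish from the estimated GT, so an algorithm that also misses those faint regions is not penalised, which is one reading of the overestimation effect reported in the abstract.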