Validation of image segmentation by estimating rater bias and variance.


Simon K. Warfield, Kelly H. Zou, and William M. Wells. 2006. "Validation of image segmentation by estimating rater bias and variance." Med Image Comput Comput Assist Interv 9 (Pt 2): 839–47.


The accuracy and precision of segmentations of medical images have been difficult to quantify in the absence of a "ground truth" or reference standard segmentation for clinical data. Although physical or digital phantoms can help by providing a reference standard, they do not allow the reproduction of the full range of imaging and anatomical characteristics observed in clinical data. An alternative assessment approach is to compare against segmentations generated by domain experts. Segmentations may be generated by raters who are trained experts or by automated image analysis algorithms. Typically these segmentations differ due to intra-rater and inter-rater variability. The most appropriate way to compare such segmentations has been unclear. We present here a new algorithm that enables the estimation of performance characteristics, and of a true labeling, from observed segmentations of imaging data in which segmentation labels may be ordered or continuous measures. This approach may be used with, among others, surface, distance-transform, or level-set representations of segmentations, and can be used to assess whether or not a rater consistently overestimates or underestimates the position of a boundary.
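To make the idea concrete, the sketch below illustrates one simple reading of the setting the abstract describes: each rater's continuous observation (e.g. a signed-distance value at a voxel) is modeled as the hidden true value plus a constant per-rater bias plus zero-mean Gaussian noise with per-rater variance, and an EM-style alternation jointly estimates the biases, variances, and true values. This is an illustrative simplification, not the paper's exact algorithm; all names and parameters (`n_voxels`, the chosen biases and noise levels, the iteration count) are assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "truth": a signed-distance-like value at each voxel (assumed setup).
n_voxels, n_raters = 2000, 3
truth = rng.normal(0.0, 5.0, size=n_voxels)

# Assumed rater model: observation = truth + constant bias + Gaussian noise.
true_bias = np.array([1.0, -0.5, 0.0])
true_sd = np.array([0.5, 1.0, 2.0])
D = truth[:, None] + true_bias[None, :] \
    + rng.normal(size=(n_voxels, n_raters)) * true_sd[None, :]

# EM-style alternation (a sketch under the simplified Gaussian model above).
bias = np.zeros(n_raters)
var = np.ones(n_raters)
for _ in range(100):
    # E-step: posterior over the hidden true value at each voxel, given the
    # current bias/variance estimates, is Gaussian with precision-weighted mean.
    w = 1.0 / var
    W = w.sum()
    T = ((D - bias) * w).sum(axis=1) / W  # posterior mean of the truth
    post_var = 1.0 / W                    # posterior variance of the truth
    # M-step: re-estimate each rater's bias and variance against the
    # estimated truth; the posterior variance term keeps variances unbiased.
    resid = D - T[:, None]
    bias = resid.mean(axis=0)
    bias -= bias.mean()  # biases are only identifiable up to a common shift
    var = ((resid - bias) ** 2).mean(axis=0) + post_var

est_sd = np.sqrt(var)
print("relative biases:", np.round(bias, 2))
print("noise std devs: ", np.round(est_sd, 2))
```

Because shifting all true values by a constant and all biases by its negative leaves the data unchanged, only bias *differences* between raters are identifiable here; the sketch fixes that gauge by centering the biases, so a rater who consistently over- or under-estimates a boundary position shows up as a bias above or below the group average.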