Hi,
thank you for your comment!
The annotators left a comment that the slide is out of focus and that landmark quality will be very poor. We know of further issues between image pairs, which are largely (but not exclusively) due to far-apart sections. We decided not to exclude any WSIs for three reasons:
- Annotator comments may not be comparable - we work with over 10 annotators, and it is unclear how well aligned their perceptions of out-of-focus slides, far-apart sections, etc. are. It is therefore difficult to make objective decisions based on the annotator feedback.
- Manual quality assessment would be possible for a data set of the size of this challenge's validation or test set. For larger data sets, however, this will not be possible, and we would like to emulate a realistic application of the algorithms, as if they were applied at scale to thousands of image pairs. (Although, admittedly, out-of-focus detection could be automated; see the sketch after this list.)
- We would like to assess the robustness of the algorithms in the challenge-associated publication.
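Regarding the automatic out-of-focus detection mentioned above: a minimal sketch of how this could be done on WSI thumbnails or tiles is shown below, using the variance-of-Laplacian focus measure. The function names and the threshold are illustrative assumptions, not something we have implemented or validated for this data set.

```python
# Sketch of automatic out-of-focus detection via the variance of the
# Laplacian, a common focus measure. The threshold and tile granularity
# are illustrative assumptions only.
import cv2
import numpy as np

def focus_score(gray_tile: np.ndarray) -> float:
    """Higher values indicate sharper (more in-focus) content."""
    return cv2.Laplacian(gray_tile, cv2.CV_64F).var()

def is_out_of_focus(image_path: str, threshold: float = 50.0) -> bool:
    """Flag an image (e.g. a WSI thumbnail or tile) as likely out of focus."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if image is None:
        raise FileNotFoundError(image_path)
    return focus_score(image) < threshold
```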
Since we chose the median of the image-pair scores as the summary score, poor-quality image pairs should not have a strong (or potentially any) influence on the challenge leaderboard.
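As a small illustration of this point (with made-up numbers, purely for intuition): adding a couple of very poor pair scores barely moves the median, while it pulls the mean down substantially.

```python
# Hypothetical pair scores: the median is robust to a few poor-quality
# pairs, whereas the mean is not.
import numpy as np

scores = np.array([0.82, 0.85, 0.88, 0.90, 0.91])   # hypothetical pair scores
with_outliers = np.append(scores, [0.05, 0.10])     # two poor-quality pairs added

print(np.median(scores), np.median(with_outliers))                # 0.88 vs 0.85
print(np.mean(scores).round(3), np.mean(with_outliers).round(3))  # 0.872 vs 0.644
```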
Best wishes
Philippe Weitz, Leslie Solorzano, Masi Valkonen