Ground truth missing from DRIVE test set?  

  By: PDorrian on May 30, 2022, 3:46 p.m.

https://drive.grand-challenge.org/

The dataset description states: "For the test cases, two manual segmentations are available; one is used as gold standard, the other one can be used to compare computer generated segmentations with those of an independent human observer."

However, these manual segmentations seem to be completely missing from the Dropbox link provided. Is there a reason for this?

Re: Ground truth missing from DRIVE test set?  

  By: jamesmeakin.diag on May 31, 2022, 8:21 a.m.

Yes, those annotations are what your algorithm needs to predict. You can compare your predictions against the gold standard by submitting them to the challenge.
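
As a rough illustration, here is a minimal sketch of preparing a submission: it assumes the standard DRIVE test image names (01_test.tif ... 20_test.tif), a placeholder segmentation model, and binary PNG masks as output. The exact required format and upload procedure are whatever the challenge's submission page specifies, so treat the details below as an assumption.

```python
import numpy as np
from PIL import Image
from pathlib import Path

# Hypothetical paths: adjust to wherever the DRIVE test images live
# and where you want the prediction masks written.
TEST_DIR = Path("DRIVE/test/images")
OUT_DIR = Path("predictions")
OUT_DIR.mkdir(parents=True, exist_ok=True)

def predict_vessels(image: np.ndarray) -> np.ndarray:
    """Placeholder for your segmentation model; returns a binary vessel mask."""
    # Replace with a real model call. Thresholding the green channel is used
    # here only as a trivially simple stand-in.
    green = image[..., 1]
    return (green < green.mean()).astype(np.uint8)

for image_path in sorted(TEST_DIR.glob("*_test.tif")):
    image = np.asarray(Image.open(image_path).convert("RGB"))
    mask = predict_vessels(image)
    # Keep the original numbering (e.g. 01_test.tif -> 01_test.png) so each
    # prediction can be matched against the withheld gold standard.
    out_path = OUT_DIR / (image_path.stem + ".png")
    Image.fromarray(mask * 255).save(out_path)
```

Once the masks are written out, they can be packaged and uploaded through the challenge's submission page, which then scores them against the withheld gold-standard segmentations.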