Some questions about the challenge  

  By: Isensee on June 15, 2023, 8:18 a.m.

Hey there, thanks for putting this challenge together. After reading the instructions, I still have some questions that the text doesn't seem to answer. I would very much appreciate clarification on the following:

  • What is the test set submission strategy? Do participants have to submit Docker containers, or will there be a short window where we get access to the test images & then send in the results? Do participants have to write a paper?
  • ROI vs. whole brain: If you already provide the ROI, is there any reason one might even consider using the whole-brain data? Can we expect the ROIs to be available for the test set as well?
  • Missing: Information on how the final ranking will be done. I presume that's going to be 'rank then average'?
  • What is the reason for splitting CTA and MRA modalities into 2 tracks? If your inclusion criteria require that both modalities are present, why not handle this challenge as one track with multi-modal inputs, similar to BraTS?
  • Tasks & tracks: There are two tasks (binary and multilabel) and two tracks (CTA and MRA), so there will be 4 rankings in the end?
  • Finally: Are there any prizes one could win? Or just fame ;-)

Thanks a lot! Best, Fabian

Re: Some questions about the challenge  

  By: petersergeykeynes on June 19, 2023, 9:10 p.m.

Hi Fabian,

Thank you very much for your interest in our challenge! Sorry for the late reply. Please see my answers below (thank you for the great questions):

> What is the test set submission strategy? Do participants have to submit Docker containers, or will there be a short window where we get access to the test images & then send in the results? Do participants have to write a paper?

  • Grand Challenge now seems to allow only Docker-container-based submissions, so that is what we will follow this year. We plan to first have a validation phase, where we release 10 new scans per modality and withhold the labels. Then there is a final test phase, where both the images and the labels are hidden. For both the validation and test phases, we expect Docker containers as submissions (a minimal sketch of a container entry point follows this answer).

  • Participants do not have to write a paper before the submission deadline, but they are encouraged to submit a short method description within one week after the submission deadline. The Grand Challenge algorithm submission page has an input box for an algorithm description, so that is probably the easiest way to collect method descriptions from participants. In any case, the top submissions will be invited to contribute to, and be included in, the challenge summary paper.
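
For reference, here is a minimal sketch of what an algorithm-container entry point could look like. It assumes the usual Grand Challenge convention of reading images from /input and writing results to /output; the exact file layout, naming, and I/O interface will be specified by the organizers, and `predict` is a placeholder for a trained model, not an official template:

```python
from pathlib import Path

import SimpleITK as sitk

IN_DIR = Path("/input")    # conventional Grand Challenge mount point (assumption)
OUT_DIR = Path("/output")  # where the platform collects predictions (assumption)


def predict(image: sitk.Image) -> sitk.Image:
    """Placeholder: run your trained CoW segmentation model here."""
    return image > 0  # dummy binary mask, just to keep the sketch runnable


if __name__ == "__main__":
    OUT_DIR.mkdir(parents=True, exist_ok=True)
    for path in sorted(IN_DIR.glob("**/*.nii.gz")):  # hypothetical input layout
        image = sitk.ReadImage(str(path))
        mask = predict(image)
        sitk.WriteImage(mask, str(OUT_DIR / path.name))
```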

> ROI vs. whole brain: If you already provide the ROI, is there any reason one might even consider using the whole-brain data? Can we expect the ROIs to be available for the test set as well?

  • Originally we reasoned that the ROI could also be used for object detection. In the current design, the ROI is the region within which the evaluation is conducted (see the sketch after this list).
  • We have the ROIs for the test set, but right now only the whole-brain images are expected as inputs. (This is because the ROI was originally designed to be an intermediate output of an object-detection prediction, or to be used to train an object-detection model.)
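
To make "evaluation inside the ROI" concrete, here is a small illustrative sketch (not the official evaluation code) of a Dice score restricted to an ROI mask:

```python
import numpy as np


def dice_in_roi(pred: np.ndarray, gt: np.ndarray, roi: np.ndarray) -> float:
    """Dice computed only over voxels inside the ROI (all boolean arrays)."""
    p, g = pred & roi, gt & roi
    denom = p.sum() + g.sum()
    return 2.0 * (p & g).sum() / denom if denom > 0 else 1.0
```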

> Missing: Information on how the final ranking will be done. I presume that's going to be 'rank then average'?

  • The ranking for each metric is obtained via the Wilcoxon signed-rank test (with a 'greater' or 'less' alternative hypothesis, as appropriate for the metric) on the test set. The test indicates whether there is a statistically significant difference on the test set between any two teams being compared; a small illustration follows below. (Sorry, this is copy-pasted from our proposal, https://zenodo.org/record/7861631, which contains more details on the ranking method.) Thanks for the question; I will update this information on the challenge webpage soon.
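
As an illustration (again, not the official evaluation code), a pairwise comparison of two teams could look like the following, assuming per-case Dice scores on the same test cases; the numbers are made up for demonstration:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-case Dice scores for two teams on the same test cases.
team_a = np.array([0.91, 0.88, 0.93, 0.85, 0.90])
team_b = np.array([0.89, 0.86, 0.94, 0.82, 0.88])

# One-sided paired test: does team A score significantly higher?
# (For a metric where lower is better, alternative="less" would be used.)
stat, p_value = wilcoxon(team_a, team_b, alternative="greater")
print(f"statistic={stat:.1f}, p={p_value:.3f}")
```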

> What is the reason for splitting CTA and MRA modalities into 2 tracks? If your inclusion criteria require that both modalities are present, why not handle this challenge as one track with multi-modal inputs, similar to BraTS?

  • That is a very good question, and we thought about it when designing this challenge. For this first year, we feel it is best to keep things "simple". The CTA and MRA modalities are not registered, and they are typically not imaged at the same study time (i.e., there is usually a follow-up temporal relationship between the CTA and the MRA, as opposed to, say, T1 and T2 acquired in one MR study). We are open to suggestions and feedback for this year's challenge, and we will consider combining the two modalities into a multi-modal track next year. But if it helps: in the test set, brain images from both modalities are available as inputs, so it is still possible to design a multi-modal model (see the sketch below). I will provide more information on the algorithm submission workflow soon.
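
As one hypothetical way to use both inputs despite the missing registration, a participant could rigidly align the MRA to the CTA grid and feed the pair as a two-channel input. This is only a sketch of the idea, not a recommended or official pipeline, and the file names are made up:

```python
import numpy as np
import SimpleITK as sitk

# Hypothetical file names; the actual challenge layout may differ.
cta = sitk.ReadImage("sub-001_cta.nii.gz", sitk.sitkFloat32)
mra = sitk.ReadImage("sub-001_mra.nii.gz", sitk.sitkFloat32)

# Rigidly align the MRA to the CTA grid so the two can be stacked as channels.
initial = sitk.CenteredTransformInitializer(
    cta, mra, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(initial, inPlace=False)
transform = reg.Execute(cta, mra)

mra_on_cta = sitk.Resample(mra, cta, transform, sitk.sitkLinear, 0.0)

# Two-channel volume of shape (2, D, H, W) for a multi-modal network.
x = np.stack([sitk.GetArrayFromImage(cta),
              sitk.GetArrayFromImage(mra_on_cta)])
```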

> Tasks & tracks: There are two tasks (binary and multilabel) and two tracks (CTA and MRA), so there will be 4 rankings in the end?

  • Yes: two tasks × two tracks, so there are 4 rankings in the end.

> Finally: Are there any prizes one could win? Or just fame ;-)

This is the first time we are organizing such a Circle of Willis vessel segmentation challenge, and we hope to get it as right as we can. We are sure there is room for improvement, and we are happy to hear suggestions and feedback from you and the community. We hope to organize the challenge again next year and to make it better, so thank you very much for your excellent questions. Please let us know if you have further feedback or comments.

Best, Kaiyuan