Questions about the dataset and challenge design  

  By: lWM on April 20, 2022, 8:57 a.m.

Dear Organizers,

Thank you for putting significant effort into organizing the challenge. I have several basic questions related to the challenge design:

  1. Will the validation/test pairs be consecutive or re-stained sections, or both? Will the validation/test cases be released at 5X or 10X? Are we allowed to choose the magnification level?
  2. The challenge design states that the immunohistochemistry slides will be treated as moving images and the H&E slides as fixed images. Will the landmarks be released for the moving or the fixed images? Are we allowed to reverse the registration order to avoid inverting the deformation field, in case the landmarks are released for the moving images?
  3. Are participants requested to submit landmarks only, or to create a Docker/Singularity container for the test phase? Is the final evaluation based on TRE only, or are there additional evaluation metrics such as folding ratio, registration time, or robustness?
  4. What is the difference between the validation and test sets? Will the landmarks for the validation set be openly released for both source and target images?
  5. Will the validation leaderboard be separate from the test leaderboard? This is a limitation of the ANHIR leaderboard, where the training cases are evaluated jointly with the validation cases. As a result, there is a strong first-glance bias towards results obtained by methods fine-tuned using the manually annotated landmarks.

Best, Marek

Re: Questions about the dataset and challenge design  

  By: phiewe on April 20, 2022, 2:46 p.m.

Hello Marek,

Thank you for your interest in this challenge. Regarding your questions:

  1. Validation and test cases will be released at both 10X and 5X. We will compute the error distances in metric units, which makes the evaluation independent of the resolution level (see the first sketch after this list). Participants can choose whichever resolution they think is best suited to minimize the metric distance. We are also working on releasing pyramidal .tiff images of all cases, so that it will be easier to work at even lower resolution levels.
  2. Landmarks will be released for the moving images (the IHC images in the validation and test sets; unlike in the training data, there will only be one IHC WSI available per case in validation and test). You may tweak your algorithm however you like to minimize the registration error distances (reversing the registration direction etc.; see the second sketch after this list).
  3. Participants will submit registered landmarks. We are currently working on the evaluation code, which will also be made available through GitHub (https://github.com/rantalainenGroup/ACROBAT) later on, so that participants can better understand it. For details on how the final rankings are computed, please refer to the section "Ranking" here: https://acrobat.grand-challenge.org/Evaluation/ . Only distances will be considered, not compute times etc.
  4. We think that there is no difference between the validation and test images. For both sets, only landmarks for the moving image will be released (if everything goes as planned), to prevent participants from using paired validation landmarks to tune their algorithms.
  5. We will release the validation moving-image landmarks in May. We hope to make a validation leaderboard available during May as well, with one submission allowed per participant per day. The test leaderboard will be computed using test data only and will be released at the challenge workshop. Only participants who submit a short description of their algorithm will be eligible to be ranked on the test leaderboard. There are two sets of prize money: all participants ranked on the test leaderboard are eligible for the first set, while only participants who publish their code will be eligible for the second. Participants can receive prize money from both sets if applicable. Further details on this will follow later.
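To illustrate the first point, here is a minimal sketch of why error distances in metric units are independent of the resolution level. The mpp (microns-per-pixel) values and the function below are illustrative assumptions, not the official evaluation code (see the GitHub link under point 3 for that):

```python
import numpy as np

# Hypothetical microns-per-pixel values; the real ones come from the
# WSI metadata of each case.
MPP_10X = 1.0  # ~1.0 µm/px at 10X (assumption)
MPP_5X = 2.0   # ~2.0 µm/px at 5X (assumption)

def error_distances_um(pred_px, target_px, mpp):
    """Euclidean landmark errors in µm, given pixel coordinates at a
    single pyramid level and that level's microns-per-pixel."""
    diff_um = (np.asarray(pred_px) - np.asarray(target_px)) * mpp
    return np.linalg.norm(diff_um, axis=1)

# The same physical error comes out regardless of the level worked at:
pred_10x = np.array([[100.0, 200.0]])
target_10x = np.array([[103.0, 204.0]])
print(error_distances_um(pred_10x, target_10x, MPP_10X))         # [5.]
print(error_distances_um(pred_10x / 2, target_10x / 2, MPP_5X))  # [5.]
```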
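And on the second point, a sketch of the role-swapping trick that avoids inverting a deformation field. It assumes the common backward-warping convention, in which a registration's displacement field maps fixed-image coordinates to moving-image coordinates; all names here are illustrative:

```python
import numpy as np

def map_points(points_xy, disp_field):
    """Map points from the field's fixed space into its moving space.
    disp_field has shape (H, W, 2) and holds (dx, dy) in pixels."""
    mapped = []
    for x, y in points_xy:
        dx, dy = disp_field[int(round(y)), int(round(x))]
        mapped.append((x + dx, y + dy))
    return np.array(mapped)

# Landmarks live on the IHC (moving) image but must be reported in
# H&E (fixed) coordinates. Instead of inverting the field from an
# "IHC onto H&E" registration, run the registration with the roles
# swapped so that the field's fixed space is the IHC image; the IHC
# landmarks can then be pushed through it directly:
#   hne_landmarks = map_points(ihc_landmarks, field_with_ihc_fixed)
```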

We hope that this clarifies your questions, please don't hesitate with further questions if anything remains unclear!

Re: Questions about the dataset and challenge design  

  By: phiewe on April 20, 2022, 2:59 p.m.

Just to clarify point 5: we are working on a solution for participants to submit registered landmarks for the validation data on this website, which will generate automated feedback, so that no validation target landmarks need to be released.
Re: Questions about the dataset and challenge design  

  By: lWM on April 21, 2022, 12:08 p.m.

Ok, thank you. The answer clarifies a lot of aspects. Not releasing the paired landmarks is indeed a good decision. Looking forward to the pyramidal .tiff files: since the sections seem to be consecutive, using 5X/10X will probably not improve the overall registration quality.
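For reference, once the pyramidal files are available, reading a chosen pyramid level with OpenSlide could look like this minimal sketch (the file name is a placeholder, and the presence of mpp metadata in the released files is an assumption):

```python
import openslide

# Placeholder path; the naming of the released pyramidal .tiff files
# is not known yet.
slide = openslide.OpenSlide("case_001_HE.tiff")
print(slide.level_count, slide.level_dimensions)

# Read the lowest-resolution level in full.
level = slide.level_count - 1
w, h = slide.level_dimensions[level]
img = slide.read_region((0, 0), level, (w, h)).convert("RGB")

# If mpp metadata is present (an assumption), the µm/px at any level
# scales with that level's downsample factor.
mpp_x0 = float(slide.properties.get("openslide.mpp-x", "nan"))
print(f"~{mpp_x0 * slide.level_downsamples[level]:.2f} µm/px at level {level}")
```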