Results of the Open Development Phase + How to Make Submissions Through the Future  

  By: anindo on Nov. 29, 2022, 10:01 p.m.

Hi everyone,

We have concluded our oral presentation at RSNA 2022, and the results of your submitted prostate-AI models, evaluated on the Hidden Testing Cohort (1000 cases) during the Open Development Phase of the challenge, are now online!

Click here to check out the leaderboard. Congratulations to the top 5 prostate-AI teams! We will be contacting you shortly regarding the next phase of the challenge (Closed Testing Phase). To everyone else, thank you for your time and participation in the PI-CAI challenge. We hope you had as great a time participating as we did organizing this challenge.

All teams can still continue to make submissions to the Open Development Phase - Validation and Tuning Leaderboard, albeit at a reduced rate (once per week per verified user). Meanwhile, the Open Development Phase - Testing Leaderboard will reopen in early 2023 and accept submissions on a per-application basis; please apply by e-mail to the PI-CAI consortium (anindya.shaha@radboudumc.nl; joeran.bosma@radboudumc.nl). We highly encourage everyone to use this Hidden Testing Cohort (1000 cases), which was established in conjunction with a multidisciplinary advisory board of 16 experts, to benchmark your AI models in all future studies, publications and projects, and in turn, enable standardized validation and comparisons across this domain.

Thank you.

 Last edited by: anindo on Aug. 15, 2023, 12:57 p.m., edited 10 times in total.

Re: Results of the Open Development Phase + How to Make Submissions Through the Future  

  By: JMitura on Dec. 4, 2022, 6:57 p.m.

Hello, is there any chance of releasing the validation dataset for download, so one could perform validation locally?

Re: Results of the Open Development Phase + How to Make Submissions Through the Future  

  By: anindo on Dec. 5, 2022, 7:12 a.m.

Hi Jakub,

We have no plans to publicly release the Hidden Validation and Tuning Cohort or the Hidden Testing Cohort, for two main reasons:

  • Unbiased Estimates of Performance: By facilitating validation/testing independently on grand-challenge.org, where AI algorithms are uploaded rather than predictions, we ensure that performance estimates are computed across truly unseen testing/validation cases in an unbiased manner (as would be the case during real-world deployment). For instance, participants cannot inspect or interact with any of the hidden testing/validation images and then tweak their AI predictions accordingly.

  • Transparency and Reproducibility: By facilitating validation/testing independently on grand-challenge.org, we also ensure that any proposed AI model from a given team/institute is actually functional and can reproduce its reported diagnostic performance. It also allows us to compare all proposed AI solutions on a common, public leaderboard in a head-to-head manner, instead of having to rely on assumptions surrounding each team/developer's local validation setup.

For the sake of debugging and quick prototyping, you can always use cross-validation with the Public Training and Development Dataset. However, for final model selection and for benchmarking test performance, we highly recommend making submissions to the Open Development Phase - Validation and Tuning Leaderboard (100 cases) and the Open Development Phase - Testing Leaderboard (1000 cases), respectively.
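As an illustration, such local cross-validation can be set up as a standard k-fold split over case identifiers. This is a minimal sketch assuming scikit-learn; the 5-fold configuration, the case count, and the `case_…` ID format are placeholders for illustration, not the official PI-CAI identifiers or splits:

```python
from sklearn.model_selection import KFold

# Placeholder case identifiers; substitute the actual case IDs from the
# Public Training and Development Dataset.
case_ids = [f"case_{i:04d}" for i in range(1500)]

# 5-fold split with a fixed seed, so the folds are reproducible across runs.
kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kf.split(case_ids)):
    train_cases = [case_ids[i] for i in train_idx]
    val_cases = [case_ids[i] for i in val_idx]
    # Train your model on train_cases and evaluate on val_cases here.
    print(f"Fold {fold}: {len(train_cases)} train / {len(val_cases)} validation")
```

Selecting hyperparameters on folds like these, and reserving the leaderboards for final unbiased estimates, helps avoid overfitting to any single split.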

Hope this helps.

 Last edited by: anindo on Aug. 15, 2023, 12:57 p.m., edited 1 time in total.

Re: Results of the Open Development Phase + How to Make Submissions Through the Future  

  By: JMitura on Dec. 18, 2022, 1:25 p.m.

Thank you for the statement.

Re: Results of the Open Development Phase + How to Make Submissions Through the Future  

  By: JMitura on April 4, 2023, 3:19 p.m.

Is the code for the best models available somewhere?