Current competition timeline questions
By: minanessiem on Aug. 9, 2024, 11:05 a.m.
Hello ISLES24 Team,
I want to first thank you for undertaking this effort; we can see from the size of the dataset and its samples that this cannot have been a simple endeavor. I'm glad to be participating :)
I wanted to ask if you had any information as to the timeline of the competition going forward, especially with respect to:
- The official results evaluation script
- The docker container for grand-challenge endpoint submissions
- The submission testing endpoint
- Abstract and paper submission deadlines
I understand these to be part of the challenge based on the following quotes from the challenge design document:
Code availability [p. 8]
a. Provide information on the accessibility of the organizers' evaluation software (e.g. code to produce rankings). Preferably, provide a link to the code and add information on the supported platforms. A Python script for evaluating the results will be shared together with the 1st batch of train-phase data. Code will be made available through a Github repository.
Submission method [pp. 6-7]
a. Describe the method used for result submission. Preferably, provide a link to the submission instructions. Participants submit a docker through our evaluating platform. Submission instructions will be shared through our website. Besides, we will release (via Git) a docker template that participants must use to build their solutions. Under exceptional deployment failures, participants will be contacted to fix and resubmit their dockers.
b. Provide information on the possibility for participating teams to evaluate their algorithms before submitting final results. For example, many challenges allow submission of multiple results, and only the last run is officially counted to compute challenge results. There are 3 phases for this challenge:
- Train phase: Teams can evaluate the performance of their trained models by themselves. With this purpose, we will release together with the first batch of training data, a Python evaluation script that computes the performance metrics defined in this document (please check the Assessment Methods section). Evaluation scripts will be shared through GitHub. It is important to mention that there is no 'validation' set for ISLES'2024. However, participants are strongly encouraged to take validation sub-sets from the training data in order to validate their models.
- Sanity-check phase: Consists in a 'toy' example docker submission phase. It is solely intended for teams to test whether their devised dockers work in the remote servers. Multiple submissions to this phase are allowed.
- Test phase: Participants submit a docker that will be locally evaluated by our team over the test data. Only one submission to this phase is allowed. No evaluation or ranking will be shared until the submission system is closed by the end of the challenge. For consistency, the same evaluation scripts provided during the 'train phase' will be used for computing the different teams' performance metrics and the leaderboard.
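While we wait for the official evaluation script, here is a minimal sketch of what I am doing locally to follow the "take validation sub-sets from the training data" suggestion from the train-phase quote above: hold out a fraction of the training cases and score predictions with a Dice coefficient. The folder layout, file names, and the choice of Dice are my own assumptions, not the official evaluation; I will switch to the released GitHub script once it is available.

```python
# Minimal sketch: hold out a validation subset from the training cases and
# score predicted lesion masks against ground truth with a Dice coefficient.
# Paths, file naming, and the metric choice are assumptions, not the official
# ISLES'24 evaluation.
import random
from pathlib import Path

import nibabel as nib
import numpy as np

TRAIN_DIR = Path("isles24_train")  # hypothetical local folder of training cases
VAL_FRACTION = 0.2                 # fraction of cases held out for validation


def split_cases(train_dir: Path, val_fraction: float, seed: int = 42):
    """Randomly hold out a fraction of case folders as a validation subset."""
    cases = sorted(p.name for p in train_dir.iterdir() if p.is_dir())
    rng = random.Random(seed)
    rng.shuffle(cases)
    n_val = max(1, int(len(cases) * val_fraction))
    return cases[n_val:], cases[:n_val]  # (train_cases, val_cases)


def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient between two binary lesion masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum() + eps)


def evaluate(val_cases, pred_dir: Path, gt_dir: Path) -> dict:
    """Compare predicted masks against ground truth for the held-out cases."""
    scores = {}
    for case in val_cases:
        pred = nib.load(pred_dir / f"{case}_pred.nii.gz").get_fdata() > 0.5
        gt = nib.load(gt_dir / f"{case}_lesion_mask.nii.gz").get_fdata() > 0.5
        scores[case] = dice(pred, gt)
    return scores


if __name__ == "__main__":
    train_cases, val_cases = split_cases(TRAIN_DIR, VAL_FRACTION)
    print(f"{len(train_cases)} training cases, {len(val_cases)} validation cases")
```

Fixing the random seed keeps the held-out cases stable across runs, so local scores stay comparable while the model changes.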