Challenge setup
In a modern challenge on grand-challenge.org, both the test data and the test labels are hidden. Participants submit an algorithm as a solution to the challenge. This algorithm is then run on the hidden test set (which the challenge admins must upload as an archive) on the Grand Challenge platform. The results that the algorithm produces are then evaluated using a custom evaluation method provided by the challenge admins. The evaluation produces a set of metrics, which are displayed on the leaderboard and used to rank submissions on specific criteria. See below for details on the underlying compute infrastructure.
In the simplest, standard case, a challenge has one task and is carried out in two phases. The first phase is usually a preliminary phase in which participants familiarize themselves with the algorithm submission system and test their algorithms on a small subset of images. From experience, we know that it takes participants a few attempts to get their algorithm containers right, so we strongly recommend having such a preliminary sanity-check phase. The second phase is the final test phase, often with a single-submission policy, which evaluates the submitted algorithms on a larger test set. You can also think of the two phases as a qualification phase and a final phase, where you use the qualification phase to select participants for the second, final test phase, as was done by STOIC.
Set-up steps
To set up your algorithm submission challenge after your challenge has been accepted, you as a challenge organizer need to take the following steps:
- Define the input and output interfaces that the algorithms submitted to each of your phases take and produce. Check for suitable existing interfaces here and inform the support team which interfaces need to be configured for which phase of your challenge. If no suitable interfaces exist, the support team will create new interfaces for you. If you are unfamiliar with the concept of interfaces, please have a look here first. A minimal sketch of what interfaces look like from inside an algorithm container is shown after this list.
- After the interfaces have been chosen, the support team will create a challenge pack GitHub repository for you with an example algorithm, an example evaluation method, and an archive upload script. You can find an example of a challenge pack here. The support team will also create archives for each of your algorithm submission phases and share the links to those with you. You can then proceed to upload your secret test data to those archives. If your algorithms take a single image input, it might be easiest to upload the data through our UI on the archive page itself. If your algorithms take complex inputs (e.g. an image together with a segmentation mask, or some metadata), you are best advised to use our API client for uploading (the challenge pack contains an upload script for you to do so; see the upload sketch after this list). Note that you only upload the secret test data to the archive, not the public training data and also not the ground truth.
- With the data and the basic settings in place, you can start working on an example baseline algorithm container as well as the evaluation container. You should take the example algorithm and evaluation containers in the challenge pack provided to you as a starting point; a sketch of a minimal evaluation method is shown after this list.
- Have a look at the remaining settings for each of your phases and configure submission start and end dates, submission limits, the leaderboard, etc.
- Have a look at the overall challenge settings: choose a participation request handling policy for the challenge and optionally enable the forum (recommended!) and teams features.
- Add information to your challenge pages.
- Choose a platform for hosting the public training data; see here for suggestions.
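To make the concept of interfaces more concrete, here is a minimal sketch of what a pair of interfaces looks like from inside an algorithm container: inputs are mounted under `/input` and outputs are written to `/output`, at locations determined by the interfaces configured for your phase. The relative paths and result field used here (`images/ct`, `results.json`, `probability-of-abnormality`) are hypothetical examples, not fixed names.

```python
# Minimal sketch of an algorithm container's I/O, assuming a hypothetical
# CT-image input interface and a JSON-result output interface.
import json
from pathlib import Path

import SimpleITK as sitk  # commonly used for reading .mha medical images

INPUT_DIR = Path("/input")    # inputs are mounted here, per input interface
OUTPUT_DIR = Path("/output")  # outputs must be written here, per output interface


def run():
    # Hypothetical input interface with relative path "images/ct"
    image_path = next((INPUT_DIR / "images" / "ct").glob("*.mha"))
    image = sitk.ReadImage(str(image_path))

    # ... run your model on `image` here ...
    prediction = {"probability-of-abnormality": 0.5}  # placeholder result

    # Hypothetical output interface with relative path "results.json"
    with open(OUTPUT_DIR / "results.json", "w") as f:
        json.dump(prediction, f)


if __name__ == "__main__":
    run()
```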
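For uploading complex inputs with the API client, the sketch below shows the general idea, assuming a personal API token, a hypothetical archive slug, and one image file per case. The exact gcapi call and arguments may differ between client versions, so treat the upload script in your challenge pack as the authoritative version.

```python
# Sketch of uploading secret test cases to a phase's archive with the
# grand-challenge API client (gcapi). Token, slug and paths are placeholders.
from pathlib import Path

import gcapi

client = gcapi.Client(token="YOUR-PERSONAL-API-TOKEN")  # token from your profile settings

archive_slug = "my-challenge-final-test-phase"  # hypothetical archive slug
cases = sorted(Path("secret_test_data").glob("*.mha"))

for case in cases:
    # One upload per case; check the challenge pack's upload script for the
    # exact call and for archive items that combine multiple interfaces.
    client.upload_cases(archive=archive_slug, files=[case])
```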
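As a starting point for thinking about the evaluation container, here is a minimal sketch of an evaluation method, assuming the standard layout of Grand Challenge evaluation containers: the predictions are mounted under `/input`, the ground truth is packaged inside the evaluation image (it is never uploaded to the archive), and the computed metrics are written to `/output/metrics.json`, from which the leaderboard is populated. The field names and the metric are hypothetical; the evaluation example in your challenge pack shows the real structure for your interfaces.

```python
# Minimal sketch of an evaluation method, assuming hypothetical prediction
# and ground-truth field names.
import json
from pathlib import Path

INPUT_DIR = Path("/input")
OUTPUT_DIR = Path("/output")
GROUND_TRUTH = Path("ground_truth/reference.json")  # baked into the evaluation image


def main():
    with open(INPUT_DIR / "predictions.json") as f:
        predictions = json.load(f)  # one entry per algorithm job
    with open(GROUND_TRUTH) as f:
        reference = json.load(f)

    errors = []
    for job in predictions:
        case_id = job["case_id"]        # hypothetical field
        predicted = job["probability"]  # hypothetical field
        errors.append(abs(predicted - reference[case_id]))

    metrics = {
        "aggregates": {
            "mean_absolute_error": sum(errors) / len(errors),
        }
    }
    with open(OUTPUT_DIR / "metrics.json", "w") as f:
        json.dump(metrics, f)


if __name__ == "__main__":
    main()
```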
Infrastructure
If you host a challenge on our platform, all algorithm and evaluation containers are run on our AWS infrastructure, where storage and compute scale elastically on demand. The algorithms that participants submit to your challenge are run on each image in the archive that you linked to the respective phase. The type of virtual machine instance used as the runtime environment depends on what the participant selects in their algorithm settings, and participants are limited in the resources they can request by the configuration of your challenge. For example, participants cannot submit an algorithm that requests an A10G GPU unless that phase of the challenge is configured to allow it. To prevent exfiltration of the test set, the participants' algorithms do not get access to the internet and the participants do not get access to the logs. You as a challenge admin do get access to the results and logs of each algorithm, so you can help your participants if their submissions fail.