Create your own Challenge
If you are interested in hosting your own challenge on our platform, please carefully read this page and fill in our challenge request form. Note that we operate with a base cost of 6000 Euros for each challenge. Please take a look at our pricing policy for details.
Grand Challenge currently offers two types of submissions: prediction submissions and algorithm container submissions. Algorithm container submissions have the advantage of producing reproducible algorithms that remain accessible to the research community long after the challenge has ended, allowing continued use and exploration of the algorithms. We are therefore phasing out the prediction submission procedure in favor of the algorithm submission procedure, to ensure that challenges always produce reproducible algorithms.
📢 In the interest of fairness and reproducibility in science, we require that all challenges include at least one leaderboard where participants submit algorithm containers as solutions. In special cases where an algorithm cannot be packaged into a container or needs to be run interactively, we may grant an exemption on a case-by-case basis. However, we strongly prefer that prediction submissions be used only for preliminary or qualification phases, if at all.
When filling in the request form, you will be asked to provide an acronym for your challenge. If your challenge is accepted, we will use this acronym for its URL (e.g., https://{acronym}.grand-challenge.org/) as well as for challenge-specific CSS and files. No special characters or spaces are allowed in this short name.
If you pre-registered your challenge on the BIAS website, you have the option to upload your submission PDF and fill in the text fields in our form with "See structured submission form".
From challenge request 📃 to challenge launch 🚀¶
After submitting the form, you will receive a confirmation email. Our team of reviewers will then evaluate your submission and inform you of the decision within at most 4 weeks. If your challenge is accepted, we will create the challenge page for you and share its link in the acceptance email. The challenge will initially be hidden, meaning that it will not yet be displayed on our challenge overview page.
You can then proceed to:
- add information to your challenge page
- enable or disable the forum, the teams feature, participant review, etc.
- configure the phases of your challenge
- create and upload an evaluation container for each of your phases, and test out the submission workflow
- have a look at our FAQ section
Once your challenge is ready for the public, you can change its status from hidden to public.
If your challenge gets rejected, you will also be notified by email.
Visit the challenge set-up page for a more detailed explanation of how to proceed after your challenge has been accepted. If you have any questions about the request procedure, or experience difficulties in setting up your challenge afterwards, do not hesitate to contact our support team at support@grand-challenge.org.
Compute and storage costs¶
The request form also contains questions about the size of the test data set, the number of submissions you plan to accept from participants, the average time you expect an algorithm run to take per image (including model loading, I/O, preprocessing and inference), and the GPU type and amount of memory that participants will be allowed to request for their runtime environment. This information will be used to estimate the compute and storage costs for your challenge. Our review team will carefully review, discuss, and if necessary adjust the numbers together with you during the onboarding process, to arrive at as cost-efficient an estimate as possible.
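To give a feel for how these quantities interact, here is a minimal back-of-the-envelope sketch in Python. All numbers are hypothetical, and the assumption that compute simply scales with total GPU-hours is ours, not an official pricing formula; the actual estimate is made together with our team during onboarding.

```python
# Hypothetical back-of-the-envelope compute estimate for a single phase.
# The values and the linear GPU-hours assumption are illustrative only.
n_submissions = 100      # submissions you expect to accept in this phase
n_test_images = 200      # images in this phase's test set
minutes_per_image = 5    # model loading + I/O + preprocessing + inference

gpu_hours = n_submissions * n_test_images * minutes_per_image / 60
print(f"Estimated compute for this phase: {gpu_hours:,.0f} GPU-hours")
```

Even with conservative per-image runtimes, the product of submissions and test images grows quickly, which is why the fields below ask for per-phase estimates.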
Example of a challenge with 1 task¶
In the simplest case, a challenge has 1 task and is carried out in 2 phases. The first phase should be a debugging phase in which participants familiarize themselves with the submission system and test their algorithms on a small subset of images. The data and the inference results for this phase should be open, to allow participants to debug their own containers on the platform. The ranking of the submissions to this phase should be meaningless. It is advisable to include the cases that would require the most memory and/or compute time in this phase, to ensure the inference is correctly implemented for the final, hidden phase.
The second phase is the final test phase, often with a single submission policy, which evaluates the submitted algorithms on the hidden test set.
You could also think of the two phases as a qualification and a final phase, where you use the qualification phase to select participants for the second, final test phase. The definition is up to you. For the cost estimate, we simply assume that there will be two algorithm submission phases and hence ask for estimates for two phases separately.
Note that in some cases it might make sense to combine an algorithm submission phase with a qualification phase to which participants submit their algorithms' predictions rather than the algorithms themselves. You might want to use this approach to select teams or individual participants for a final algorithm submission leaderboard. If you think this set-up makes sense for your case, clearly describe and motivate it in the request form. Even with this set-up, however, we still highly recommend also having a preliminary algorithm test submission: we know from experience that participants take some time to get their algorithm containers to work, and you do not want any of this testing to happen on the final hidden test set, as participants could use it to try to improve their algorithms.
Example of a challenge that uses batched images as input¶
Grand Challenge runs the submitted algorithms separately on each image of the test set you provide, and each algorithm job loads the model from scratch before inference can be run. This means that if you have many test images, running the submitted algorithms will be very costly. If you have a large number of images (>1000) that are each small in size, you can reduce this overhead by dividing your test set into batches and stacking each batch into a single .tiff or .mha image. This drastically reduces the number of "images" in your archive and hence the number of algorithm jobs that our platform initiates for each submission. Note, however, that stacking images comes with the downside that the algorithm results cannot be visualized in our online viewer, which is not designed to handle stacked images.
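As an illustration, here is a minimal sketch of how a batch of small 2D images could be stacked into a single .mha file using SimpleITK. The file names and batch size are hypothetical; adapt them to your own test set.

```python
import SimpleITK as sitk

# Hypothetical batch of 100 small 2D test images.
batch_files = [f"case_{i:04d}.png" for i in range(100)]
slices = [sitk.ReadImage(path) for path in batch_files]

# JoinSeries stacks same-sized N-dimensional images into one (N+1)-dimensional
# volume, so 100 2D images become a single 3D image (= 1 algorithm job).
stacked = sitk.JoinSeries(slices)
sitk.WriteImage(stacked, "batch_000.mha", useCompression=True)
```

Your evaluation container would then need to split each stacked volume back into its individual slices before computing per-case metrics.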
Example of a challenge with 2 tasks¶
If your challenge deviates from the standard format of 1 task, 2 phases, please indicate reasonable averages across all tasks/phases of your challenge. The CoNIC challenge, for example, had two tasks: participants had to develop algorithms to (1) segment and classify nuclei within tissue samples and (2) predict how many nuclei of each class are present in a given input image. For each task, there were 2 phases (a preliminary phase and a test phase).
For a budget estimate for such a challenge, provide for each field the average across the two tasks, and indicate that your challenge has 2 rather than just 1 task (a worked example follows the list):
- Number of tasks: 2
- Size of test image: (size of test images for task 1 + size of test images for task 2) / 2
- Inference time: (inference time task 1 + inference time task 2) / 2
- Number of submissions for phase 1: (N submissions phase 1 task 1 + N submissions phase 1 task 2) / 2
- Number of submissions for phase 2: (N submissions phase 2 task 1 + N submissions phase 2 task 2) / 2
- Number of test images phase 1: (N images phase 1 task 1 + N images phase 1 task 2) / 2
- Number of test images phase 2: (N images phase 2 task 1 + N images phase 2 task 2) / 2
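For example (hypothetical numbers): if the phase 2 test set contains 50 images for task 1 and 150 images for task 2, you would enter (50 + 150) / 2 = 100 for "Number of test images phase 2", and fill in the remaining fields analogously.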
Example of a challenge with more than 2 phases¶
If your challenge has 1 task but more than 2 phases, please provide, in the two fields for phase 1, the average number of submissions across all phases and the sum (!) of test images across all phases, and enter 0s for the phase 2 fields (see the worked example after the list):
- Number of tasks: 1
- Size of test image: average across all phases
- Inference time: average across all phases
- Number of submissions for phase 1: average across all phases
- Number of submissions for phase 2: 0
- Number of test images phase 1: sum of all test images across phases
- Number of test images phase 2: 0
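For example (hypothetical numbers): a 1-task challenge with three phases that hold 10, 30, and 200 test images and expect 100, 50, and 20 submissions respectively would enter 10 + 30 + 200 = 240 for "Number of test images phase 1" and (100 + 50 + 20) / 3 = 57 (rounded) for "Number of submissions for phase 1", with 0 in both phase 2 fields.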