Create your own Challenge


If you are interested in hosting your own challenge on our platform, please read this page carefully and fill in our challenge request form. Note that we operate with a base cost of 5000 euros for each challenge; please take a look at our pricing policy for details. Since our resources are limited, we need to be selective about which challenges we host on Grand Challenge. Your answers on the request form help us make an informed decision about whether and how we can support your challenge.


Grand Challenge currently offers two types of submissions: prediction submissions and algorithm container submissions. Algorithm container submissions have the advantage of producing reproducible algorithms that remain accessible to, and usable by, the research community long after the challenge has ended. We are therefore phasing out the prediction submission procedure in favor of the algorithm submission procedure, to ensure that challenges always produce reproducible algorithms.

📢 In the interest of fairness and reproducibility in science, we hence require that all challenges include at least one leaderboard where participants submit algorithm containers as solutions. In special cases where an algorithm cannot be packaged into a container or needs to be run interactively, we may grant an exemption to this rule on a case-by-case basis. However, we strongly prefer that prediction submissions only be used for preliminary or qualification phases, if at all.


When filling in the request form, you will be asked to provide an acronym for your challenge. If your challenge gets accepted, we will use this acronym for the URL of your challenge (e.g., https://{acronym}.grand-challenge.org/) as well as for challenge-specific CSS and files. No special characters or spaces are allowed in this short name.

If you pre-registered your challenge on the BIAS website, you have the option to upload your submission PDF and fill in the text fields in our form with "See structured submission form".

From challenge request 📃 to challenge launch 🚀

After submitting the form, you will receive a confirmation email. This email will also contain the compute and storage cost estimate for your challenge based on the specs you entered – more information on that below.

Our team of reviewers will then evaluate your submission and inform you of the decision within 4 weeks at most (we strive to inform everyone within 2 weeks, but there are times when that is not possible). If your challenge is accepted, we will create the challenge page for you and share the link to it in the acceptance email. The challenge will initially be hidden, meaning that it will not yet be displayed on our challenge overview page.

You can then proceed to set up your challenge.

Once your challenge is ready for the public, you can change its status from hidden to public.

If your challenge gets rejected, you will also be notified by email.

The general workflow of requesting a challenge is summarized below. Visit the challenge set-up page for a more detailed explanation of how to proceed after your challenge has been accepted. If you have any questions about the request procedure, or experience difficulties in setting up your challenge afterwards, do not hesitate to contact our support team at support@grand-challenge.org.


Compute and storage costs

The request form also contains questions about the size of the test data set, the number of submissions you plan to accept from participants, and the average time you expect an algorithm run (including model loading, I/O, preprocessing, and inference) to take per image. This information will be used to calculate a rough compute and storage cost estimate. To help you fill in the budget fields as accurately as possible, we have collected a few example cost calculations here.
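To give you an idea of how these fields feed into the estimate, here is a minimal sketch in Python. The formula and the hourly compute rate are illustrative assumptions, not our actual pricing; the real estimate is produced by the request form.

```python
def estimate_compute_cost(teams, submissions_per_team, test_images,
                          minutes_per_image, rate_per_hour=1.17):
    """Rough compute cost (in dollars) for one submission phase.

    The rate_per_hour default is an illustrative assumption, not
    actual Grand Challenge pricing.
    """
    total_minutes = (teams * submissions_per_team
                     * test_images * minutes_per_image)
    return total_minutes / 60 * rate_per_hour
```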

Example of a challenge with 1 task

In the simplest case, a challenge has 1 task and is carried out in 2 phases. The first phase is usually a preliminary phase in which participants familiarize themselves with the submission system and test their algorithms on a small subset of images. The second phase is the final test phase, often with a single-submission policy, which evaluates the submitted algorithms on a larger test set. You could also think of the two phases as a qualification and a final phase, where you use the qualification phase to select participants for the second, final test phase. The exact definition is up to you. For the cost estimate, we simply assume that there will be two algorithm submission phases and hence ask for estimates for the two phases separately.

The MIDOG 2021 challenge is an example of such a challenge. For the MIDOG challenge, participants (~50 teams) had to develop algorithms to detect mitotic figures in histological tumor images (average size of image ~ 150 MB). The challenge consisted of two phases: a preliminary phase for participants to test their algorithms (~15 submissions per team) on a small subset of images (N=20), and a final test phase (1 submission per team) which evaluated the algorithms on a larger test set (N=80). The submitted algorithms took an average of 5 minutes for inference per test image.

The compute and storage costs for the MIDOG challenge amounted roughly to the following:

Cost                                           Amount
Compute costs for phase 1 (preliminary phase)  1460 $
Storage costs for phase 1 (preliminary phase)    10 $
Compute costs for phase 2 (final test)          390 $
Storage costs for phase 2 (final test)           10 $
Docker storage costs                           3190 $
Total                                          5060 $
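Plugging the MIDOG numbers into the sketch above roughly reproduces the compute figures in this table (the ~1.17 $/hour rate in the sketch was chosen to match this example and will vary in practice):

```python
phase1 = estimate_compute_cost(teams=50, submissions_per_team=15,
                               test_images=20, minutes_per_image=5)
phase2 = estimate_compute_cost(teams=50, submissions_per_team=1,
                               test_images=80, minutes_per_image=5)
print(round(phase1), round(phase2))  # ~1460 $ and ~390 $
```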

Note that in some cases, it might make sense to combine an algorithm submission phase with a qualification phase in which participants submit their algorithm's predictions rather than the algorithms themselves. You might want to use this approach to select teams or individual participants for a final algorithm submission leaderboard. If you think this set-up makes sense for your case, clearly describe and motivate it in the request form. Even with this set-up, we still highly recommend also having a preliminary algorithm submission phase: we know from experience that participants take some time to get their algorithm containers working, and you do not want any of this testing to happen on the final hidden test set, since participants could use it to try to improve their algorithms.


Example of a challenge that uses batched images as input

Grand Challenge runs the submitted algorithms separately on each image of the test set you provide, and each algorithm job requires loading the model from scratch before inference can run. This means that if you have a lot of test images, running the submitted algorithms is going to be very costly. If you have a large number of images (>1000) that are each small in size, you can reduce this overhead by dividing your test set into batches of images and stacking them into .tiff or .mha files. This drastically reduces the number of "images" in your archive and hence the number of algorithm jobs that our platform initiates for each submission. You could even stack all test images into one file. Note, however, that stacking images comes with the downside of not being able to visualize the algorithm results in our online viewer, which is not designed to handle stacked images.
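As a minimal sketch of what such batching could look like, assuming 2D test images of identical size and pixel type (file names here are placeholders), SimpleITK can stack slices into a single volume:

```python
import SimpleITK as sitk

# Read 300 2D test images; all slices must share size and pixel type.
slices = [sitk.ReadImage(f"test_{i:04d}.png") for i in range(300)]

# Stack the 2D slices into one 3D volume and write it as a single file,
# so the platform sees one "image" instead of 300.
batch = sitk.JoinSeries(slices)
sitk.WriteImage(batch, "batch_000.mha")
```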

The organizers of the Airogs challenge went for this solution: instead of uploading 11,400 single test images, they created batches of 300 images stacked into .tiff files, resulting in 38 test files. For their budget estimate, they then simply entered the number of batches (not the number of single images) and provided the size of a batch (rather than the size of a single image):

  • Size of test image: 300 MB (size of the batch file, i.e., 1 test image here contains 300 images stacked)
  • Number of test images for final phase: 38 (i.e., 38 batches)
  • Number of test images for preliminary phase: 4 (i.e., 4 batches)

They expected roughly 50 teams and an inference time of about 55 minutes per batch (note: per batch, not per image this time), and they allowed 3 submissions per team to the preliminary phase and 1 submission per team to the final test phase. Their cost estimate was as follows:

Cost                                           Amount
Compute costs for phase 1 (preliminary phase)   640 $
Storage costs for phase 1 (preliminary phase)    10 $
Compute costs for phase 2 (final test)         2030 $
Storage costs for phase 2 (final test)           10 $
Docker storage costs                            800 $
Total                                          3490 $
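With the Airogs numbers and the same assumed rate, the sketch from the previous example again lands close to the table:

```python
phase1 = estimate_compute_cost(teams=50, submissions_per_team=3,
                               test_images=4, minutes_per_image=55)   # ~640 $
phase2 = estimate_compute_cost(teams=50, submissions_per_team=1,
                               test_images=38, minutes_per_image=55)  # ~2030 $
```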


Example of a challenge with 2 tasks

If your challenge deviates from the standard format of 1 task, 2 phases, please indicate reasonable averages across all tasks/phases of your challenge. The Conic challenge, for example, had two tasks: participants had to develop algorithms to (1) segment and classify nuclei within tissue samples and (2) predict how many nuclei of each class are present in a given input image. For each task, there were 2 phases (a preliminary phase and a test phase).

For a budget estimate for such a challenge, provide the averages across the two tasks for all the fields respectively and indicate that your challenge has 2 rather than just 1 task:

  • Number of tasks: 2
  • Size of test image: (size of test images for task 1 + size of test images for task 2) / 2
  • Inference time: (inference time task 1 + inference time task 2) / 2
  • Number of submissions for phase 1: (N submissions phase 1 task 1 + N submissions phase 1 task 2) / 2
  • Number of submissions for phase 2: (N submissions phase 2 task 1 + N submissions phase 2 task 2) / 2
  • Number of test images phase 1: (N images phase 1 task 1 + N images phase 1 task 2) / 2
  • Number of test images phase 2: (N images phase 2 task 1 + N images phase 2 task 2) / 2
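For example, with hypothetical per-task values (replace these with your own estimates), each form field gets the average of the two tasks:

```python
# Placeholder per-task values for illustration only.
task1 = {"image_size_mb": 150, "inference_min": 5, "images_phase_2": 80}
task2 = {"image_size_mb": 50,  "inference_min": 3, "images_phase_2": 40}

# Average each field across the two tasks for the form entry.
form = {key: (task1[key] + task2[key]) / 2 for key in task1}
# form == {"image_size_mb": 100.0, "inference_min": 4.0, "images_phase_2": 60.0}
```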


Example of a challenge with more than 2 phases

If your challenge has 1 task but more than 2 phases, please provide the average number of submissions across all phases and the sum (!) of test images across all phases in the phase 1 fields, and enter 0 for the phase 2 fields:

  • Number of tasks: 1
  • Size of test image: average across all phases
  • Inference time: average across all phases
  • Number of submissions for phase 1: average across all phases
  • Number of submissions for phase 2: 0
  • Number of test images phase 1: sum of all test images across phases
  • Number of test images phase 2: 0
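For instance, for a hypothetical 1-task challenge with three phases:

```python
# Placeholder per-phase numbers for a 3-phase challenge.
submissions_per_phase = [15, 5, 1]    # submissions per team in each phase
images_per_phase      = [20, 50, 80]  # test images in each phase

phase_1_submissions = sum(submissions_per_phase) / len(submissions_per_phase)  # 7
phase_1_images      = sum(images_per_phase)                                    # 150
phase_2_submissions = 0
phase_2_images      = 0
```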