Challenge Setup

General Platform Setup

In a challenge on grand-challenge.org, both the test data and the test labels are hidden. Participants submit an algorithm as a solution to the challenge. This algorithm is then run on the hidden test set (which must be uploaded as an archive by the challenge admins) on the Grand Challenge platform. The results that the algorithm produces are subsequently evaluated using a custom evaluation method provided by the challenge admins. The evaluation produces a set of metrics, which are then displayed on the leaderboard and used to rank submissions on specific criteria.

If you host a challenge on our platform, all algorithm and evaluation containers will be run on our AWS infrastructure, where storage and compute scale elastically on demand. The algorithms that participants submit to your challenge are run on each item in the archive that is linked to the respective phase. A heuristic selects the type of virtual machine instance that is used as the runtime environment for the algorithms, taking into account both what you allow as a challenge organizer and what the participant's algorithm has declared it needs.

To prevent exfiltration of the test set, participants' algorithms do not get access to the internet, and participants do not (by default) get access to the logs. As a challenge admin, you do get access to the results and logs of each algorithm so you can help your participants if their submissions fail. You do not, however, automatically gain access to the algorithm itself.

In the simplest, standard case, a challenge has one task and is carried out in two phases. The first phase is usually a preliminary phase where participants familiarize themselves with the algorithm submission system and test their algorithms on a small subset of images. From experience, we know that it takes participants a few attempts to get their algorithm containers right, so we strongly recommend having such a preliminary sanity-check phase.

The second phase is the final test phase, often with a single submission policy, which evaluates the submitted algorithms on a larger test set. You could also think of the two phases as a qualification and a final phase, where you use the qualification phase to select participants for the second, final test phase, as was done by STOIC.

Challenge Setup Steps

To set up your challenge after it has been accepted, you as a challenge organizer need to take the following steps.


Step 1: Populate Pages and Configure Challenge

Populate the custom pages and the information on your challenge landing page, and set up the general challenge settings.

The challenge will initially be hidden, meaning that it will not yet be displayed on our challenge overview page. Once your challenge is ready for the public, you can change its status from hidden to public.


Step 2: Create Phases

Define and configure the different phases of the challenge. Have a look at the remaining settings for each of your phases and configure the submission start and end dates, submission limits, et cetera.

Deadline: 1 week post-challenge acceptance


Step 3: Define Inputs and Outputs

Communicate the required input and output data formats for participant algorithms by emailing support. If you are unfamiliar with what we mean by input and output, please take a look here before continuing. The support team will help you define your sockets, but please have a look beforehand at the existing input and output sockets that you can choose from.

Deadline: 2 weeks post-challenge acceptance


Step 4: Have an Onboarding Meeting

Receive a Challenge Pack starter kit (see example here) and participate in an onboarding meeting with support staff. We'll reach out to plan it once the phases and the inputs and outputs are created.

Deadline: 3 weeks post-challenge acceptance


Step 5: Upload Data to Archives and Storage

Add relevant data to archives. Archives must be created by support before you can upload. If your algorithms accept a single image as input, it might be easiest to upload the data through our UI on the archive page itself. If your algorithms take complex inputs (e.g. an image together with a segmentation mask, or some metadata), you are best advised to use our API client for uploading.

The Challenge Pack starter kit contains an upload script you can use as a starting point. Note that you only upload the secret test data to the archive, not the public training data and not the ground truth.
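To give a rough idea, an upload via the API client could look like the minimal sketch below. It assumes the gcapi Python client, a personal API token, and a test set of .mha images; the token, archive slug, and paths are placeholders, and the upload script in your Challenge Pack remains the authoritative starting point (the exact call signature can differ between gcapi versions).

```python
# Minimal upload sketch using the gcapi Python client (pip install gcapi).
# The token, archive slug, and file paths are placeholders for illustration.
from pathlib import Path

from gcapi import Client

# A personal API token can be generated from your Grand Challenge profile.
client = Client(token="YOUR-PERSONAL-API-TOKEN")

# Upload each case to the archive that is linked to your phase.
cases = sorted(Path("/path/to/secret-test-set").glob("*.mha"))
for case in cases:
    # For multi-file inputs, pass all files belonging to one archive item
    # in a single call instead of one file at a time.
    session = client.upload_cases(
        archive="your-archive-slug",
        files=[str(case)],
    )
    print(f"Uploaded {case.name}: {session}")
```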

Grand Challenge is generally not the place where you provide training data. See the page on data-storage for alternatives.

Deadline: 5 weeks post-challenge acceptance


Step 6: Develop Core Components

With the data and the basic settings in place, you can proceed to develop the core components. These can most easily be based on the Challenge Pack starter kit, which is custom-tailored to your challenge.

  • Example Algorithm Development
  • Evaluation Method Development
    • Implement and document the evaluation method for assessing participant submissions (a minimal sketch follows this list).
  • Scoring Configuration
    • Set up leaderboard scoring to interpret evaluation results accurately.
  • Test Evaluation
    • Run test evaluations using sample submissions to ensure the scoring system and evaluation method work correctly before launching the challenge.
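To make the evaluation method more concrete, the sketch below illustrates one possible shape of such a container. It assumes the ground truth is packaged inside the evaluation image, that the algorithm outputs are mounted under /input, and that the metrics are written to /output/metrics.json for the leaderboard to read; the paths, the JSON-based file format, and the per-case metric are placeholders, and the evaluation template in your Challenge Pack defines the actual input layout for your challenge.

```python
# Minimal evaluation-method sketch; paths and file formats are assumptions.
import json
from pathlib import Path
from statistics import mean

GROUND_TRUTH_DIR = Path("/opt/app/ground-truth")  # packaged inside the container
PREDICTIONS_DIR = Path("/input")                   # algorithm outputs, mounted by the platform
OUTPUT_FILE = Path("/output/metrics.json")         # read for leaderboard scoring


def score_case(prediction_file: Path, ground_truth_file: Path) -> float:
    """Placeholder per-case metric; replace with e.g. Dice, AUC, ..."""
    prediction = json.loads(prediction_file.read_text())
    reference = json.loads(ground_truth_file.read_text())
    return float(prediction == reference)


# Compute one score per case, then aggregate over the whole test set.
case_scores = {}
for gt_file in sorted(GROUND_TRUTH_DIR.glob("*.json")):
    pred_file = PREDICTIONS_DIR / gt_file.name
    case_scores[gt_file.name] = score_case(pred_file, gt_file)

metrics = {
    "case": case_scores,
    "aggregates": {"accuracy_mean": mean(case_scores.values())},
}
OUTPUT_FILE.write_text(json.dumps(metrics, indent=2))
```

The leaderboard scoring configuration can then point at a key in this metrics.json (for example, the aggregate value) to rank submissions.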

Deadline: 6 weeks post-challenge acceptance


Have a look at our FAQ section.