Frequently Asked Questions

Setting up an algorithm submission challenge

An archive is a collection of images (or image sets) or other types of data. In the context of algorithm submission challenges, archives are used to store the secret test data for a given challenge phase. The archive should only contain the hidden test data, not the open training data (which is stored externally, see here for more information) and not the ground truth (that goes into your evaluation container).
Yes, you need one archive per challenge phase. A submitted algorithm will get access to one image (set) from the linked archive at a time. If you have multiple phases that take exactly the same type and number of inputs, you could reuse the same archive; however, this is hardly ever the case.
Your ground truth goes into your evaluation container, not into an archive on Grand Challenge.
Internally, Grand Challenge works with the following file formats:
  • .mha and .tiff for images
  • .json for annotations, metadata, and number, string and boolean data types
  • .pdf
  • .obj
  • .mp4

⚠️ Please note the restriction on supported image formats. Other image formats, such as NIfTI files, are fine for uploading data to Grand Challenge, but they will be converted to .mha format (or .tiff) by our backend. For the algorithms submitted to a challenge this means that they will need to read image inputs in either .mha or .tiff format and will need to output images in these file types as well (if the task requires image outputs). Image outputs in any other format will be rejected and the corresponding algorithm jobs will be marked as failed.

Although we support the upload of other formats, we generally recommend converting the files yourself to .mha (or .tiff respectively) to ensure correct conversion.

Please note that .mha and NIfTI are equivalent formats and can both be read using SimpleITK.ReadImage(). Check out the SimpleITK library for more information.
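For example, converting a NIfTI file to .mha yourself could look like the following minimal sketch (the file names are placeholders for your own data):

    import SimpleITK as sitk

    # Read a NIfTI image and write it out as .mha before uploading.
    # The file names are placeholders for your own data.
    image = sitk.ReadImage("case_001.nii.gz")
    sitk.WriteImage(image, "case_001.mha", useCompression=True)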

⚠️ Exception: For the evaluation container, the ground truth can be in any format you like since that data is internal to your container and not stored or processed in any way by Grand Challenge. The algorithm outputs that your evaluation container will read will again be in .mha format, though, since those do get validated and stored by our platform upon creation.

Input and output interfaces define what type of data your algorithm takes as input and produces as output, as well as where the input is to be read from and where the output is to be written to. You can read more about this here.

We mount a file system with the secret test data for your container to access at /input/. How the test data is structured is defined by the input and output interfaces configured for your phase. Consider an algorithm that takes a ct-image as input and outputs a covid-19-lesion-segmentation as well as a covid-19-probability-score. Interfaces for these inputs and outputs already exist and specify their relative paths (see here). The support team will configure those interfaces for a given challenge phase, so that the mounted file system looks as follows (the output directory will initially be empty, of course, but the algorithm will need to follow this structure when saving its outputs):

|── input
|      |── images
|      |       |── ct
|      |       |      |── randomly-generated-uuid.mha
|── output
|      |── images
|      |       |── covid-19-lesion-segmentation
|      |       |      |── randomly-generated-uuid.mha
|      |── probability-covid-19.json

Each of the subfolders in the input directory contains exactly 1 file, since we start one algorithm job per archive item (== case / patient). The algorithm in this case thus reads the image from /input/images/ct/ and writes the outputs to /output/images/covid-19-lesion-segmentation/ and /output/probability-covid-19.json - exactly as specified for the respective interfaces.

The UUIDs (the file name of the input file and the file name of the output file) are random, but that is irrelevant because each input and output is uniquely identified by its interface name and path.
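To illustrate, the I/O boilerplate of such an algorithm container could look roughly like the sketch below. It assumes the ct-image, covid-19-lesion-segmentation and probability-covid-19 interfaces from the example above; run_inference is a hypothetical placeholder for your own model code, not part of Grand Challenge.

    from pathlib import Path
    import json

    import SimpleITK as sitk

    INPUT_DIR = Path("/input")
    OUTPUT_DIR = Path("/output")


    def run_inference(image):
        # Hypothetical placeholder for your own model: returns an empty mask
        # with the same geometry as the input and a fixed probability, just
        # to keep the sketch self-contained.
        mask = sitk.Image(image.GetSize(), sitk.sitkUInt8)
        mask.CopyInformation(image)
        return mask, 0.5


    def main():
        # Each interface folder contains exactly one file with a random UUID
        # as its name, so glob for it instead of hard-coding a file name.
        ct_path = next((INPUT_DIR / "images" / "ct").glob("*.mha"))
        ct_image = sitk.ReadImage(str(ct_path))

        segmentation, probability = run_inference(ct_image)

        # Write the outputs to the paths defined by the output interfaces.
        # The output file name itself does not matter; the location does.
        seg_dir = OUTPUT_DIR / "images" / "covid-19-lesion-segmentation"
        seg_dir.mkdir(parents=True, exist_ok=True)
        sitk.WriteImage(segmentation, str(seg_dir / "segmentation.mha"), useCompression=True)

        with open(OUTPUT_DIR / "probability-covid-19.json", "w") as f:
            json.dump(float(probability), f)


    if __name__ == "__main__":
        main()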

An algorithm gets one (set of) input(s) (i.e. one archive item) at a time. This could be just one image at a time or a combination of an image with a segmentation mask or some metadata or a secondary image. Consider a challenge set-up where the algorithms need access to a patient's ct-image as well as a lung-volume measure. Each archive item in your archive would consist of two files: one for the ct scan and one containing the lung volume measure.

You need to define interfaces for those two types of data if they don't exist yet (check here first), because each file in an archive item needs to be linked to one specific interface. The support team will be happy to create new interfaces for you. You cannot have the same interface twice in one archive item; they need to be unique on the archive-item level.

In the above case, we would make use of the existing interfaces 'ct-image' and 'lung-volume' and upload the cases through the API.

We currently do not support optional inputs, so all your archive items will need to contain the same number of files, in this case always a lung CT scan and a lung-volume JSON file. To circumvent this, you could upload an empty JSON file for cases where you don't have a lung volume measure and instruct your participants accordingly.
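A hedged sketch of how an algorithm could read such a two-file archive item, treating an empty lung-volume file as a missing measurement, is shown below. The relative path lung-volume.json is an assumption; check the actual interface definition configured for your phase.

    from pathlib import Path
    import json

    import SimpleITK as sitk

    INPUT_DIR = Path("/input")

    # The CT image sits in its interface folder under a random UUID file name.
    ct_path = next((INPUT_DIR / "images" / "ct").glob("*.mha"))
    ct_image = sitk.ReadImage(str(ct_path))

    # Assumed relative path for the lung-volume interface.
    lung_volume_path = INPUT_DIR / "lung-volume.json"
    lung_volume = None
    if lung_volume_path.exists() and lung_volume_path.stat().st_size > 0:
        try:
            lung_volume = json.loads(lung_volume_path.read_text())
        except json.JSONDecodeError:
            # An empty placeholder file was uploaded for this case.
            lung_volume = None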

No, Grand Challenge schedules one algorithm job per image set (archive item) in the linked archive. The algorithm then reads the inputs from /input/interface-slug-1/ and /input/interface-slug-2/ etc., where each folder contains exactly 1 file (see earlier question). The exact paths depend on the interfaces configured for your challenge phase. In a typical challenge this means that an algorithm only ever gets one patient's data at a time (e.g., patient A's CT image and the corresponding mask in the first run, patient B's CT image and mask in the second run, etc.).
The evaluation container runs once per submission and only after all algorithm jobs have run successfully.

No. Grand Challenge generates a random UUID for each image you upload to our website and uses it as the file name. Your algorithm container should hence not expect to be reading your-original-file-name.mha. Likewise, the outputs your algorithm writes will be checked, validated and subsequently stored with their own unique UUID (which is again different from the file name you give the output when saving it).

Since an algorithm only ever gets one image set (i.e., one patient's data) at a time, the algorithm doesn't need to worry about the original file names or about matching, for example, an input CT image with its corresponding metadata or segmentation file – there will only be one CT file and one metadata or segmentation file available to the algorithm at any given time. The algorithm simply reads the inputs from their specified file paths (each path being a folder with exactly 1 file; the exact path is defined by the interface, see earlier question) and writes the outputs to the designated output folders (again different paths for each output, defined by the output interfaces you defined for your phase).

For the evaluation container you will get all outputs in one go, and there you do need to match the inputs to their respective outputs. To do that, your evaluation container also gets a predictions.json file that contains the information needed for the matching. For each algorithm job that was part of the submission, the predictions.json file lists the inputs and the outputs with their specific file names. This is explained in more detail here.

On the leaderboard page of your challenge you will find a download button. Clicking it lets you download the evaluations or download the metrics as a CSV file. In the output column, you will find the content of the metrics.json file.
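If you want to post-process the exported metrics programmatically, a small sketch along the following lines could work; the file name and the exact column names are assumptions here, so inspect your own export first.

    import json

    import pandas as pd

    # Load the CSV exported from the leaderboard page (file name is an assumption).
    df = pd.read_csv("evaluations.csv")

    # The "output" column is assumed to hold the serialized metrics.json
    # content per submission; parse it into Python objects for analysis.
    metrics = df["output"].apply(json.loads)
    print(metrics.iloc[0])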

An evaluation is run once for a given submission, after all the algorithm jobs for that submission have finished successfully. The outputs produced by the algorithm are then provided to the evaluation container at the following path: /input/"job_pk"/output/"interface_relative_path". To match the algorithm output filenames with the original algorithm input filenames, we provide a json file at /input/predictions.json. This json file lists the inputs and outputs for each algorithm job along with that job's primary key. The above path can then be constructed by replacing:

  • job_pk with the primary key (pk) of each algorithm job, i.e., the top-level "pk" entry for each JSON object in the predictions.json file
  • interface_relative_path with the relative_path for each of the output interfaces of a job. The relative paths for each interface can be found here. If the algorithms output a ct-image and a nodule-locations JSON file, you would read the corresponding files for the first algorithm job from /input/"job_pk"/output/images/ct/ and /input/"job_pk"/output/nodule-locations.json
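A minimal sketch of how an evaluation container could walk through predictions.json and construct these paths is shown below. The field names ("pk", "outputs", "interface", "relative_path") follow the description above, but are best verified against an actual predictions.json from a test submission.

    import json
    from pathlib import Path

    INPUT_DIR = Path("/input")

    # predictions.json lists, for every algorithm job of the submission, its
    # primary key together with its inputs and outputs.
    with open(INPUT_DIR / "predictions.json") as f:
        predictions = json.load(f)

    for job in predictions:
        job_pk = job["pk"]  # top-level primary key of the algorithm job

        # Assumed structure: each output entry carries its interface, whose
        # relative_path tells us where the file lives inside the job folder.
        for output in job["outputs"]:
            relative_path = output["interface"]["relative_path"]
            output_path = INPUT_DIR / job_pk / "output" / relative_path
            print(job_pk, "->", output_path)

        # The job's inputs are listed under job["inputs"] in a similar way,
        # which is what lets you match each output back to the corresponding
        # ground-truth case.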

Challenge costs

We are an academic institute ourselves and we have limited funding as well. Please check our pricing policy here.

There are numerous ways for you to control your compute costs. Measures you can take include:

  • Limiting the number of participants. If you enable "manual participant review", you decide who gets to submit solutions to your challenge, and if you have reached a certain number, you can stop accepting people to your challenge.
  • Limiting the number of submissions that participants can make during a specified submission window.
  • Putting a reasonable upper limit on algorithm job run times. We enforce this limit on the single job level, i.e., for the processing of a single set of inputs. Regardless of the costs, limiting algorithm run times is desirable since truly clinically useful algorithms will benefit from being fast, so forcing your participants to develop efficient solutions is a good thing to do.
  • If your test data set consists of a large number of very small images, you might be better off batching your inputs. The reason for this is that Grand Challenge starts one algorithm job per input image (i.e., archive item), so the more images you have, the more jobs need to be started, which increases costs. The downside to this approach is that the resulting algorithms will not be directly useful for clinicians, who will usually want to process a single (unbatched) image input. The integrated web viewer on Grand Challenge is also not equipped to read and display batched images, and hence algorithm result viewing will not be possible with such a design.