June 2022 Cycle Report

Published 8 Aug. 2022

Easier Type 2 challenge submissions

In a type 2 challenge, participants need to create an algorithm on Grand Challenge before they can submit to the challenge. This requires defining the correct inputs and outputs, choosing a logo for the algorithm, and configuring a number of other settings that are not directly relevant to the challenge submission. This cycle we have changed the workflow slightly to ease the burden on type 2 challenge participants. On the challenge's submission pages, the link to create a new algorithm now leads to a form that only asks for a title, a description, and the required GB of RAM and GPU. All other relevant fields - most notably the inputs and outputs and the logo - are prefilled for the participant based on the challenge phase settings. Now that we have this dedicated algorithm form, it is also no longer necessary to grant algorithm creation permissions to all participants: they get them automatically, and exclusively for the challenges they are participating in.

Minimal requirements for publishing your algorithm

We have introduced a set of minimal requirements that your algorithm needs to meet before you can make it public. Your algorithm now needs:

  • at least 1 public and successful result obtained from the latest version of your algorithm
  • at least a summary and description of the mechanism of your algorithm
  • a contact email address and a public list of algorithm editors

These requirements will help ensure that public algorithms on Grand Challenge are well documented and can be tried out by users.

Reduced start time for algorithms using multi-resolution images

Algorithm jobs are now started as soon as multi-resolution images (tif) are imported. This reduces their start-up time, as converting these images to deep zoom image format (needed for viewing) is handled asynchronously.
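To illustrate the idea, here is a toy Python sketch (with placeholder functions, not Grand Challenge's actual implementation): the deep zoom conversion is handed off to a background executor so the algorithm job can start as soon as the import itself finishes.

```python
from concurrent.futures import ThreadPoolExecutor

def import_tiff(path):
    """Placeholder: import a multi-resolution TIFF, returning an image id."""
    return f"imported:{path}"

def convert_to_deep_zoom(image_id):
    """Placeholder: convert the image to deep zoom tiles, needed for viewing."""
    return f"dzi:{image_id}"

def start_algorithm_job(image_id):
    """Placeholder: launch an algorithm job on the imported image."""
    return f"job-started:{image_id}"

def handle_upload(path, executor):
    image_id = import_tiff(path)
    # The job no longer waits for tile generation: the viewing-format
    # conversion runs in the background while the job starts immediately.
    dzi_future = executor.submit(convert_to_deep_zoom, image_id)
    job = start_algorithm_job(image_id)
    return job, dzi_future

with ThreadPoolExecutor(max_workers=2) as pool:
    job, dzi = handle_upload("scan.tif", pool)

print(job)           # job-started:imported:scan.tif
print(dzi.result())  # dzi:imported:scan.tif
```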

Algorithm Inference on SageMaker

In the current cycle we finished the work on switching to SageMaker (see the previous blog post for more technical details). Traffic has been switched over to the new SageMaker Batch Inference backend and we have removed the ECS backend. A limitation of the new backend is that inference tasks are limited to 1 hour of runtime. However, this is offset by the increased range of instance types now available to us, offering more CPUs or more performant GPUs. Tasks that run on the SageMaker Batch Inference backend have their runtime metrics gathered automatically, including CPU utilization and memory utilization, and similar GPU metrics if the job uses a GPU. These metrics are made available on the job logs page.
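As a rough illustration of what these metrics look like once gathered (the sample values and the summary shape below are hypothetical, not the actual logs page format), utilization samples over a job's runtime can be reduced to an average and a peak:

```python
def summarise(samples):
    """Reduce a series of utilization samples (percentages) to avg and peak."""
    return {"avg": sum(samples) / len(samples), "peak": max(samples)}

# Hypothetical per-minute samples for one job
cpu = [12.0, 55.0, 80.0, 33.0]   # CPU utilization (%)
mem = [40.0, 42.0, 45.0, 44.0]   # memory utilization (%)

print(summarise(cpu))  # {'avg': 45.0, 'peak': 80.0}
```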

Container Import Status Clarity

The status of container images has become more informative. Where before an image would only have the status Ready, we now describe the progress of the container import with the following statuses:

  • is_manifest_valid - the basic manifest validation test
  • is_in_registry - whether the validated image has been pushed to the container registry
  • is_on_sagemaker - whether the ECR image has been registered with SageMaker
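The three checks are sequential, so the overall progress can be derived from which flag is the first one still unset. A small Python sketch (the flag names come from the post; the progress strings are illustrative):

```python
from dataclasses import dataclass

@dataclass
class ContainerImportStatus:
    # The three checks above, in the order they are performed
    is_manifest_valid: bool = False
    is_in_registry: bool = False
    is_on_sagemaker: bool = False

    def describe(self):
        """Map the first unset flag to a human-readable progress string."""
        if not self.is_manifest_valid:
            return "Validating manifest"
        if not self.is_in_registry:
            return "Pushing to registry"
        if not self.is_on_sagemaker:
            return "Registering with SageMaker"
        return "Ready"

print(ContainerImportStatus(True, False, False).describe())  # Pushing to registry
print(ContainerImportStatus(True, True, True).describe())    # Ready
```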

The UI is also more verbose.

Linking Configuration

When viewing multiple images it can be beneficial to interact with several at the same time. Linking images was already possible and could be toggled on or off via the chain button or by pressing the L key. However, this was always all or nothing. In this cycle, we have introduced finer control over which interactions are shared between images when the images are linked.

In the viewer configurations, a new section named 'Linking Configuration' has been introduced. Within it, sharing can be selectively disabled per interaction.

For instance, when windowing is disabled, the images independently update their window width and center and do not propagate the values to other images. This independence holds even if the general link switch is turned on.
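The decision of whether to propagate an interaction can be sketched in a few lines of Python (a toy model, not the viewer's actual code; the interaction names are taken from the description below):

```python
def propagate(interaction, link_switch_on, disabled_interactions):
    """Should an interaction performed on one image be copied to the
    other linked images?"""
    if not link_switch_on:  # general link switch (chain button / L key) off
        return False
    # Interactions disabled in the Linking Configuration stay per-image
    return interaction not in disabled_interactions

config = {"windowing"}  # sharing of window width/center disabled

print(propagate("pan", True, config))        # True  - still shared
print(propagate("windowing", True, config))  # False - independent per image
print(propagate("pan", False, config))       # False - link switch is off
```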

Furthermore, viewports in custom hanging protocols can now have the "linkable" property. Setting this property to false (default: true) will make interacting with images within those viewports always independent.

As a bonus, the viewports in custom hanging protocols can now each have a separate start orientation. Configure this by adding the property "orientation" and setting it to "axial", "sagittal", or "coronal".
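A minimal sketch of what a viewport definition could look like with these two new properties, written here as Python dicts (the exact key names other than "linkable" and "orientation", and the viewport names, are assumptions for illustration):

```python
ALLOWED_ORIENTATIONS = {"axial", "sagittal", "coronal"}

# Hypothetical hanging protocol fragment: each viewport may carry the
# new "linkable" (default true) and "orientation" properties.
viewports = [
    {"viewport_name": "main", "linkable": True, "orientation": "axial"},
    {"viewport_name": "secondary", "linkable": True, "orientation": "sagittal"},
    {"viewport_name": "quaternary", "linkable": False, "orientation": "coronal"},
]

for vp in viewports:
    # Validate the new properties against their documented values
    assert vp.get("orientation", "axial") in ALLOWED_ORIENTATIONS
    assert isinstance(vp.get("linkable", True), bool)

print([vp["viewport_name"] for vp in viewports if not vp["linkable"]])
```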

As a visual demonstration of this feature, check the behavior in the video below:

The viewer configuration determines that zoom and window-level interactions will not be linked. The hanging protocol used defines four viewports: main, secondary, and tertiary as linkable, and quaternary as unlinkable. As a result, the viewport in the bottom-right corner (quaternary) does not respond to image manipulations made in the other viewports. The three remaining viewports are partly linked: panning, slicing, orienting, inverting, and flipping are transferred to all linked viewports, whereas zoom and window levels are not. When unlinking images in the UI, all settings are unlinked, giving the user full control over the image settings.

Creating segmentation masks in a reader study

It is now possible to create segmentation masks as part of a reader study. Until now, only binary masks could be created; this has been extended to masks with multiple categories. To enable this option, you need to define two settings: the overlay segments, which specify the different categories in your mask, and the lookup table, which defines the color and alpha of each category.

You can set the overlay segments and lookup table directly on the question (go to your reader study -> Questions -> Edit question). If you provide a default answer, it is also possible to set the overlay segments and lookup table on the interface of that default answer.

Example overlay segments:

    [
        {
            "name": "Heart",
            "visible": true,
            "voxel_value": 1
        },
        {
            "name": "Lung",
            "visible": true,
            "voxel_value": 2
        }
    ]


  • The question needs to be of type Mask.
  • The interface (in the case of default answers) needs to be of kind Segmentation.
  • The overlay segments and lookup table from the interface of a default answer take precedence over those defined in the question.
  • If you don't define the lookup table in either place, it is taken from the default overlay lookup table of the viewer configuration if you have one configured, and otherwise from an internal lookup table.

All images uploaded for this question will be validated: images are only allowed to contain voxel values specified in the overlay segments. Contact support (support@grand-challenge.org) if you require a new interface, a new lookup table, or if you want to extend an existing interface with these settings.
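The validation rule can be sketched as follows (a toy Python version, not the platform's actual validator; whether the background value 0 is implicitly allowed is an assumption here):

```python
def validate_mask(voxel_values, overlay_segments):
    """Check that a mask only contains the background value (0) and the
    voxel values declared in the overlay segments."""
    allowed = {0} | {seg["voxel_value"] for seg in overlay_segments}
    return set(voxel_values) <= allowed

segments = [
    {"name": "Heart", "visible": True, "voxel_value": 1},
    {"name": "Lung", "visible": True, "voxel_value": 2},
]

print(validate_mask([0, 1, 2, 1], segments))  # True
print(validate_mask([0, 3], segments))        # False - 3 is not declared
```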

New angle measurement question and annotation types

The angle measurement question type can now be used in reader studies. This allows users to annotate an angle by drawing two lines. After drawing the second line, dashed helper lines will be shown denoting the angle, along with the angle value in degrees. Angle annotations are also added as a component interface kind, so new angle component interfaces can be created on request for use with algorithms.
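Assuming each drawn line is stored as two endpoints, the reported angle in degrees can be computed as below (a toy sketch, not the viewer's actual code):

```python
import math

def angle_between(line_a, line_b):
    """Angle in degrees between two lines, each given as two (x, y) points."""
    def direction(line):
        (x0, y0), (x1, y1) = line
        return (x1 - x0, y1 - y0)

    ax, ay = direction(line_a)
    bx, by = direction(line_b)
    dot = ax * bx + ay * by
    norm = math.hypot(ax, ay) * math.hypot(bx, by)
    return math.degrees(math.acos(dot / norm))

# A horizontal and a vertical line meet at a right angle
print(round(angle_between([(0, 0), (1, 0)], [(0, 0), (0, 1)]), 6))  # 90.0
```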