December 2023 Cycle Report

Published 3 Jan. 2024

Grand Challenge features

Open Algorithm Submission Phases

Until now, all algorithm submission phases on Grand Challenge used hidden test data. This meant that challenge participants could not see the data or the logs for the jobs created in those phases. When a container failed, the challenge admins had to work out why the submission failed, then redact and share some of the logging information with the participant.

We have now added a setting that allows you to run an Open Algorithm Submission Phase. If you enable this option, challenge participants automatically get access to their logs, the input data, and their predictions for that phase, saving challenge admins a lot of work.

A challenge participant has full access to their algorithm jobs:

A challenge participant has full access to the logs:


⚠️ The consequence is that the data associated with that phase becomes accessible to the participants, so be careful when turning this setting on. We will email challenge admins and display prominent warnings if this option is enabled:

If you would like to enable this setting, go to the Algorithm section of your phase settings and enable "Give Algorithm Editors Job View Permissions":

Of course, the data for existing phases, and for phases that do not enable this option, remains private.


Challenge Performance Section for Algorithms

We have added a Challenge Performance section to each Algorithm's detail page, where you can now see how the algorithm performed on the test data sets of the challenges it was submitted to. Only the best result per algorithm and phase is displayed.
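Here, "best result" means that results are grouped by algorithm and phase and only the top-scoring one is shown. A minimal sketch of that selection in Python (the data, and the assumption that a higher score ranks better, are illustrative):

    # Hypothetical results; in practice the ranking uses the phase's configured metric.
    results = [
        {"algorithm": "unet-baseline", "phase": "preliminary-test", "score": 0.81},
        {"algorithm": "unet-baseline", "phase": "preliminary-test", "score": 0.87},
        {"algorithm": "unet-baseline", "phase": "final-test", "score": 0.79},
    ]

    # Keep only the best result per (algorithm, phase) pair.
    best = {}
    for result in results:
        key = (result["algorithm"], result["phase"])
        if key not in best or result["score"] > best[key]["score"]:
            best[key] = result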


Credit limits for algorithm editors

This cycle we introduced credit limits for algorithm editors. In the past, editors of an algorithm could run an unlimited number of algorithm jobs, which incurred high, uncontrollable cloud costs for us. Editors are now limited to running a maximum of 5 jobs per unique algorithm image for free. Once these credits have been exhausted, further jobs are deducted from their regular user credits.
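To make the accounting concrete, here is a hypothetical sketch of the deduction logic described above (the names and the cost of a job are illustrative, not Grand Challenge's actual implementation):

    FREE_JOBS_PER_IMAGE = 5  # free editor jobs per unique algorithm image

    def remaining_credits(jobs_already_run: int, user_credits: int, job_cost: int = 1) -> int:
        """Return the user's credit balance after running one more editor job."""
        if jobs_already_run < FREE_JOBS_PER_IMAGE:
            return user_credits  # still within the free allowance for this image
        return user_credits - job_cost  # deducted from the regular user credits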


Challenge Packs

Challenge packs are our way of helping challenge organizers bootstrap their challenge. A pack is a challenge-tailored GitHub repository that contains the following:

  • ️🦾 An example script to automate uploading data to an archive
  • 🦿 An example submission algorithm that can be uploaded to run as a submission in a challenge phase
  • 🧮 An example evaluation method that evaluates algorithm submissions and generates performance metrics for ranking

If you are curious about how they look, take a peek at the public Demo Challenge Pack.
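To give a flavour of the upload script, here is a minimal sketch using gcapi, the Grand Challenge API client; the token, archive slug, and file paths are placeholders, and you should check the gcapi documentation for the exact signature of upload_cases:

    import gcapi

    # Authenticate with a personal API token from your Grand Challenge profile.
    client = gcapi.Client(token="YOUR-API-TOKEN")

    # Upload a set of images to the challenge's archive.
    client.upload_cases(
        archive="my-challenge-training-archive",
        files=["/data/case_001.mha", "/data/case_002.mha"],
    )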

The support team can easily generate these using a new Python package (grand-challenge-forge) and distribute them to challenge organizers.

Once the challenge packs have been successfully introduced, we expect to slowly deprecate the older evalutils package currently in use for challenges and algorithms.


CIRRUS features

Three-point angle annotations for reader studies

It is now possible to use a new annotation type called "three-point angle" in reader studies. It complements the two-line angle annotations that are already available and offers a simpler interaction that measures the inner angle of two joined lines:

A limitation of the initial release is that the annotation type cannot yet be used for accept/reject questions. This will be addressed soon.
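For reference, the inner angle of two joined lines can be derived from the three points with basic vector math. A minimal sketch (not the actual CIRRUS implementation):

    import numpy as np

    def inner_angle(a, b, c) -> float:
        """Inner angle in degrees at vertex b, between the segments b-a and b-c."""
        v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
        cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

    print(inner_angle((1, 0), (0, 0), (0, 1)))  # 90.0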


Overlays for client-rendered pathology view item

This cycle we added support for overlays to the client-rendered view item. If you are using this view item and have an overlay defined in the view content, the overlay will be displayed on top of the base image. To make use of overlay look-up tables (LUTs), select a set of them in the viewer configuration so that they are available in the viewer.
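Conceptually, an overlay LUT maps each overlay value to a display color that is composited over the base image. A minimal sketch of the idea (hypothetical values; the viewer configuration uses its own LUT format):

    import numpy as np

    # Hypothetical segmentation overlay with two classes (0 = background).
    overlay = np.array([[0, 1],
                        [2, 1]])

    # Look-up table mapping overlay values to RGBA colors; background stays transparent.
    lut = np.array([
        [0, 0, 0, 0],      # 0: fully transparent
        [255, 0, 0, 128],  # 1: semi-transparent red
        [0, 255, 0, 128],  # 2: semi-transparent green
    ], dtype=np.uint8)

    rgba = lut[overlay]  # (2, 2, 4) array of colors to composite over the base image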


Adaptive Mouse Wheel Actions

When reviewing various image types in CIRRUS, the navigation behaviour of the mouse wheel used to depend on the order in which images loaded, which was problematic for studies involving both 2D and 3D images. CIRRUS now adapts dynamically to the focused image type:

  • For 2D images, the mouse wheel now zooms in and out.
  • For 3D images, the mouse wheel scrolls through the viewing z-axis (i.e. the slices).
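The dispatch itself is simple in principle; a hypothetical sketch of the behaviour (CIRRUS's actual implementation runs in the browser):

    def wheel_action(focused_image_dimensions: int) -> str:
        """Choose the mouse-wheel behaviour for the focused image (sketch)."""
        if focused_image_dimensions >= 3:
            return "scroll-slices"  # 3D: step through the z-axis
        return "zoom"               # 2D: zoom in and out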

Cover Photo by Leo on Unsplash