June 2024 Cycle Report

Published 21 June 2024

Grand Challenge features

Algorithm models and ground truth separate from the containers

Until now, algorithm models and a challenge phase's ground truth had to be baked into the algorithm and evaluation method container images. Users therefore had to upload an entirely new algorithm container even when only the model needed updating. Similarly, for challenge phases with large ground truths, the method container grew large and had to be re-uploaded every time the evaluation method needed a tweak.

This cycle we worked on separating the algorithm model from the algorithm container and the evaluation ground truth from the evaluation method container. You can now upload and update them separately. When provided, the algorithm model will be extracted and made available to your container at /opt/ml/model/. Likewise, the ground truth for challenges will be extracted to /opt/ml/input/data/ground_truth/.
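As an illustration, an algorithm container could pick up a separately uploaded model along these lines. This is a minimal sketch: only the /opt/ml/model/ mount point comes from the platform, while the helper name and the *.pth file pattern are assumptions for the example.

```python
from pathlib import Path

# Grand Challenge extracts a separately uploaded algorithm model
# to this directory inside the algorithm container.
MODEL_DIR = Path("/opt/ml/model")


def find_model_weights(model_dir: Path = MODEL_DIR):
    """Return the first weights file found under the mounted model
    directory, or None when no model was uploaded.

    The *.pth pattern is an assumption; match whatever format your
    model was packaged in.
    """
    if not model_dir.is_dir():
        return None
    candidates = sorted(model_dir.rglob("*.pth"))
    return candidates[0] if candidates else None
```

The same pattern applies to evaluation methods reading a phase's ground truth from /opt/ml/input/data/ground_truth/.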

You can upload and manage your models and ground truths in the algorithm and phase settings respectively:

Grand Challenge Status page

We created a status page for Grand Challenge to keep users informed about the operational status of our platform. The page reports on the health and performance of the platform's components, making it easier to stay aware of any ongoing issues or incidents. Each component is shown with a history graph of incidents over the past 90 days, using a simple stoplight system: green means no issues, yellow a minor or major incident, and red a critical issue, so both the current and historical status can be understood at a glance. The page also includes a dedicated section for ongoing incidents, and past incidents are archived and accessible, providing a comprehensive incident history. We believe this new feature will enhance transparency and help our users stay informed about the reliability of our platform.

Go here to see the page in action.

Split additional metrics and averaged metrics

For challenge administrators, it is now possible to exclude certain metrics from the ranking calculations while still displaying them on the leaderboard. To use this option, add the key exclude_from_ranking to the relevant entry in your metrics JSON and set its value to true.
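For example, an evaluation method might write a metrics JSON along these lines. This is a hedged sketch: the metric names, nesting, and output path are hypothetical, and only the exclude_from_ranking key comes from the feature described above.

```python
import json
from pathlib import Path


def write_metrics(output_path: Path) -> dict:
    """Write a hypothetical metrics JSON in which one metric appears
    on the leaderboard but is excluded from the ranking calculation."""
    metrics = {
        "dice": {"mean": 0.87, "std": 0.04},
        "inference_time_s": {
            "mean": 12.3,
            # Displayed on the leaderboard, ignored when computing ranks.
            "exclude_from_ranking": True,
        },
    }
    output_path.write_text(json.dumps(metrics, indent=2))
    return metrics
```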

Content Representation

How content such as images, files, and other data is represented within the platform used to vary from place to place.

We've introduced a single way of representing this data. Where possible, the representation gives a quick preview of the content, either directly or via a pop-up; where this is not possible, a download button is provided for convenience.

Archive Item Details and Algorithm Jobs

Following a recent rework of how archive items are presented in a list, it was no longer possible to view the details of an individual archive item.

With this cycle, we've re-introduced the archive item detail page. This allows for a quick inspection of the content:

As a bonus, we've added a 'Find Algorithm Results' feature that lists all algorithm jobs with inputs matching the archive item:

This gives challenge organizers a way to view all participating algorithms' results per case: navigate via the archive straight to the results.

Alongside these pages, the views listing algorithm jobs have had a makeover: the lists now include a quick preview of the generated outputs.

Cirrus features

Highlighting and displaying of annotation labels in client-side viewer

The client-side viewer for pathology images now supports two-way highlighting of annotations. Annotations in the viewer are highlighted when you hover over them in the sidebar of a reader study, algorithm job result, or archive item; conversely, hovering over an annotation in the viewer highlights it in the sidebar. The annotation's label is also displayed in the viewer while it is highlighted.

Bounding boxes around point annotations

Workstation configurations now have a setting to show bounding boxes around point annotations, increasing their visibility in 3D images.

The setting controls the size, in 3D image space, of the bounding box drawn around each point annotation, with the box orientation aligned to the world coordinate system. The bounding box color and line width follow the annotation parameters: to change them, use the annotation line width and color settings in the workstation configuration.

Cover photo by Marek Piwnicki on Unsplash