September 2024 Cycle Report

Published 1 Oct. 2024

Interactive algorithms for reader studies

Algorithms on Grand Challenge typically run asynchronously: jobs are queued and executed when resources become available, and users must wait for the results. This works well for testing algorithms and for challenge submissions, but it limits their usefulness in reader studies, where readers need algorithm output while annotating. To address this, we spent this cycle enabling synchronous execution of algorithms in reader studies. Reader study editors can now attach an interactive algorithm to certain reader study questions, which readers can then execute on demand while viewing cases.

To pilot this new feature, we collaborated closely with Max de Grauw to adapt his Universal Lesion Segmentation (ULS23) baseline algorithm for synchronous execution. The algorithm segments lesions in CT images, returning a binary segmentation mask that can be saved as an answer for each reader study case.
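
For background, algorithms on Grand Challenge are packaged as containers that read their inputs from /input and write their outputs to /output. The sketch below shows roughly what the synchronous segmentation step looks like, with a trivial intensity threshold standing in for the actual ULS23 network; the directory names, file extension, and threshold values are illustrative assumptions, not the real implementation.

```python
# Illustrative sketch only: a simple threshold stands in for the ULS23 model.
# Grand Challenge algorithm containers conventionally read from /input and
# write to /output; the exact paths and file names here are assumptions.
from pathlib import Path

import SimpleITK as sitk

INPUT_DIR = Path("/input/images/ct")
OUTPUT_DIR = Path("/output/images/ct-binary-uls")


def run() -> None:
    # Load the CT image supplied for the current reader study case.
    image_path = next(INPUT_DIR.glob("*.mha"))
    image = sitk.ReadImage(str(image_path))

    # Placeholder "model": threshold the volume to produce a binary mask.
    # The real algorithm runs the ULS23 baseline network at this step.
    mask = sitk.BinaryThreshold(image, lowerThreshold=-100, upperThreshold=300)

    # Write the binary segmentation mask so it can be saved as an answer.
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    sitk.WriteImage(mask, str(OUTPUT_DIR / image_path.name))


if __name__ == "__main__":
    run()
```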

⚙️How to configure an interactive algorithm

To incorporate this algorithm into your reader study, first define a question of type 'Mask,' then select the 'ULS23 Baseline' algorithm from the 'Interactive Algorithm' dropdown.

In the reader study viewer, you can invoke the algorithm for each case by entering edit mode for the relevant question and clicking the algorithm button, as shown in the video below. Please note that the algorithm is not yet optimized for interactive use. The video has been shortened for your convenience; the actual run takes approximately 40 seconds.

⚠️Runtime limitations

The algorithm runs on AWS Lambda, where it is limited to a maximum runtime of 15 minutes, CPU-only processing, and up to 10 GB of memory. To ensure fast execution, we pre-warm the algorithm when a reader study session begins, minimizing start-up time.
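
The pre-warming relies on a standard Lambda pattern: anything loaded at module import time survives between invocations, so a warm-up ping sent at session start absorbs the cold-start cost and later on-demand requests respond quickly. Below is a minimal sketch of that pattern, assuming a hypothetical event shape and load_model() helper; this is not Grand Challenge's actual handler.

```python
# Hypothetical sketch of the warm-start pattern on AWS Lambda. The event
# shape, load_model(), and run_inference() are illustrative assumptions.

def load_model():
    # Stand-in for loading network weights into memory (the expensive step).
    return object()


# Module-level load: runs once per container, during the cold start.
MODEL = load_model()


def run_inference(model, data):
    # Placeholder inference step.
    return data


def handler(event, context):
    # A warm-up ping sent when a reader study session begins; the container
    # (and MODEL) stays resident, so real requests skip the cold start.
    if event.get("warmup"):
        return {"status": "warm"}

    # Real request: run inference with the already-loaded model.
    result = run_inference(MODEL, event["input"])
    return {"status": "done", "output": result}
```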

Since this is a pilot feature, it’s currently available exclusively to our DIAG members. We’re eager to see how it improves annotation workflows and outcomes, and we encourage them to explore its capabilities.

🧑‍💻Adding your own interactive algorithm

Currently, the ULS23 algorithm is the only algorithm available for this type of execution. If you're interested in using this feature and would like to add your own algorithm, please contact support@grand-challenge.org. All algorithms must undergo a thorough adaptation and testing process before integration.


Enhancing Challenge Sign-Ups: Custom Questions

When organizing challenges, participants typically sign up with basic information like their affiliation, which is shared with the challenge administrators. However, sometimes you may want to gather additional insights, such as their experience level or the approach they plan to take.

To streamline this process, you can now include custom registration questions when participants sign up for your challenge.

⚙️Editing questions

Each question can be tailored with a question prompt and optional help text, and can be marked as either required or optional. Additionally, you can enforce specific answer formats by using JSON schema validation, ensuring that responses follow the desired structure.
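
For example, a question about a participant's experience level could restrict answers to a fixed set of values. The schema below is a hypothetical illustration, validated here with the Python jsonschema package; the allowed values are assumptions, not platform defaults.

```python
# Hypothetical example of a JSON schema for a registration question; the
# allowed values are illustrative, not built into the platform.
import jsonschema

EXPERIENCE_SCHEMA = {
    "type": "string",
    "enum": ["none", "some", "extensive"],
}

# A conforming answer passes validation silently...
jsonschema.validate("some", EXPERIENCE_SCHEMA)

# ...while a free-text answer that ignores the format raises ValidationError.
try:
    jsonschema.validate("I have used nnU-Net for two years", EXPERIENCE_SCHEMA)
except jsonschema.ValidationError as err:
    print(err.message)
```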

🔍Reviewing responses

The answers provided by participants are shown during the request evaluation process, helping you make informed decisions when accepting or rejecting a participant.


Cover photo by seth schwiet on Unsplash