February 2025 Cycle Report
Published 18 Feb. 2025
Optional Algorithm Inputs
Up until now, algorithms on Grand Challenge worked with a fixed set of inputs and outputs. When trying out an algorithm, a user had to supply all of the defined inputs for the algorithm, and the algorithm had to produce all of the defined outputs. Sometimes, however, the number of available inputs per case varies, and algorithms might be able to generate meaningful predictions with only a subset of the inputs. Likewise, some algorithms might be able to work with multiple different types of input. To support these use cases, we worked on enabling optional inputs this cycle.
Algorithm editors can now define multiple input-output combinations (so-called "interfaces") for their algorithm. For each interface, the editor defines the required inputs and outputs. For example, imagine an algorithm that segments vessels from a CT image. The algorithm requires a CT image as input, but can optionally use information about the scanner the image was acquired with to fine-tune the segmentation process. The user trying out the algorithm should be able to upload either only a CT scan, or a CT scan together with a JSON file containing information about the scanner.
The way to achieve this now is as follows:
Go to the Interfaces tab of your algorithm and create two interfaces: one with only the CT image as input and one with both the CT image and the scanner information as inputs:
Now, when a user wants to try out the algorithm, they have to choose which of the interfaces they want to use:
After selecting an interface, the user will see the usual form, either with only the CT image input or with both the CT image and the scanner information JSON field:
⚠️ The algorithm needs to be able to work with all defined interfaces. Nothing has changed with regard to where and how the inputs are mounted in the container. It is the algorithm's responsibility to check for the presence of inputs and determine the required output accordingly. The algorithm editor can refer to the container image upload form for details on where to read inputs from and where to write outputs to.
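To illustrate the kind of presence check described above, here is a minimal sketch of an algorithm entry point for the CT example. The mount points and the `scanner-info.json` filename are hypothetical; consult the container image upload form for the actual locations used by your interfaces.

```python
import json
from pathlib import Path

# Hypothetical mount points; check the container image upload form
# for the actual paths used by your algorithm's interfaces.
INPUT_DIR = Path("/input")
OUTPUT_DIR = Path("/output")


def load_scanner_info(input_dir=INPUT_DIR):
    """Return scanner metadata if the optional JSON input was supplied, else None."""
    scanner_file = input_dir / "scanner-info.json"  # hypothetical filename
    if scanner_file.exists():
        return json.loads(scanner_file.read_text())
    return None


def main():
    scanner_info = load_scanner_info()
    if scanner_info is not None:
        # Both inputs present: fine-tune the segmentation with the scanner metadata.
        print("Segmenting with scanner-specific tuning")
    else:
        # Only the CT image was supplied: fall back to the default pipeline.
        print("Segmenting with default settings")
    # ... read the CT image from INPUT_DIR, write the segmentation to OUTPUT_DIR ...


if __name__ == "__main__":
    main()
```

The key point is that the algorithm branches on which inputs exist, rather than assuming a fixed set.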
If an algorithm has only a single interface defined (which is the case for all algorithms at the time of writing), the user won't need to select an interface and will instead be redirected straight to the inputs form, saving them an unnecessary button click.
There are some restrictions you need to keep in mind when creating interfaces:
- The inputs and outputs of an interface need to be unique: you cannot have a CT image as both input and output, for example.
- The input combinations across all interfaces of an algorithm need to be unique: you cannot have two interfaces with the exact same input set but differing outputs. Your algorithm needs to be able to deduce the required output(s) from the input(s).
- Interfaces cannot be edited, only added and deleted, but these two actions should cover all use cases.
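Because input combinations are unique across interfaces, the set of inputs found at runtime identifies the interface, and therefore the expected outputs, unambiguously. A sketch of such a dispatch, with hypothetical input slugs and pipeline names:

```python
def select_pipeline(present_inputs):
    """Map the set of inputs found in the container to a processing path.

    Input slugs and pipeline names here are hypothetical; because input
    combinations are unique across an algorithm's interfaces, this
    mapping cannot be ambiguous.
    """
    if {"ct-image", "scanner-info"} <= present_inputs:
        return "scanner-tuned-segmentation"
    if "ct-image" in present_inputs:
        return "default-segmentation"
    raise ValueError(f"no interface matches inputs: {sorted(present_inputs)}")
```

For example, `select_pipeline({"ct-image"})` selects the default pipeline, while supplying both inputs selects the scanner-tuned one; an unrecognized input set fails loudly instead of silently producing the wrong outputs.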
Optional algorithm inputs for challenges
The same concepts apply to algorithm submission phases. For each phase you can now define multiple interfaces that the algorithms for this phase need to implement. As before, setting up the interfaces for your phases happens in collaboration with the GC support team during challenge onboarding.
Once the interfaces are set up, they are listed on the submission page so that participants are aware of the requirements:
Challenge admins can additionally check that everything is set up as expected under the Linked archive tab in the phase settings menu. There we list the possible input combinations for archive items and summarize how many valid archive items there currently are in the linked archive:
Challenge tasks list
Setting up a challenge can be overwhelming, especially for users unfamiliar with Grand Challenge. To make the process smoother, we’ve introduced an overview of the key steps required to set up a challenge successfully. Each step includes an estimated time to completion, and automated reminders help keep both challenge organizers and support staff on track.
Improved performance of Algorithm detail page
We updated the algorithm detail page to improve user experience and performance. Previously, the page included usage statistics that required a complex database query, which slowed down page loading. To address this, the usage data has been moved to a separate tab, improving page speed and overall usability.
Improved performance for file selection widget
We recently added the option to either upload a new file or select an existing one when adding files to algorithm jobs, display sets or archive items. Due to the implementation, however, page loading sometimes became very slow, especially if you had access to a lot of files.
To solve this issue we are now making use of the same widget that is used for image type interfaces. This means you can choose to either upload a new file or search for an existing one that you have access to. These can be files that are input or output from an algorithm job you have view permission for, or they can be part of an archive or reader study that you have access to (either as editor, uploader or user/reader).
You can search for files by filename or primary key. We also fixed a small bug where the search result would not always update when the search term was cleared.
Parallel saving of reader study answers
We've improved performance when saving reader studies with many questions. Previously, answers and annotations were saved sequentially; they are now saved in parallel, which has been shown to decrease saving time by roughly 40%. For a smooth reader study experience, it is also still possible to click "Save and continue" to perform the (parallel) saving in the background while the reader continues answering questions for the next case.
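The sequential-to-parallel change can be sketched as follows. This is an illustration of the idea, not the platform's actual code, and it assumes each answer is saved through an independent request; the `save_answer` function here simply simulates a network round-trip.

```python
import asyncio
import random


async def save_answer(question_id, value):
    """Stand-in for one save request to the server (hypothetical API)."""
    await asyncio.sleep(random.uniform(0.01, 0.05))  # simulated network latency
    return {"question": question_id, "value": value, "saved": True}


async def save_all(answers):
    # Previously: `for q, v in answers.items(): await save_answer(q, v)`,
    # so the total time was the sum of all request latencies.
    # Now: issue all save requests at once and await them together,
    # so the total time is roughly the slowest single request.
    tasks = [save_answer(q, v) for q, v in answers.items()]
    return await asyncio.gather(*tasks)


answers = {"q1": "yes", "q2": 3, "q3": [10, 20]}
results = asyncio.run(save_all(answers))
```

With many questions per case, waiting on the slowest request instead of the sum of all requests is where the reported saving-time reduction comes from.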
Refactoring of the old point clustering algorithm
The old point clustering algorithm used for algorithm viewing caused problems when working on annotation features. We therefore made the point clustering faster and updated the visual representation of the clusters:
The round outlines now show point clusters with high point densities.
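The post doesn't detail the new algorithm, but as a rough illustration of the technique, grid-based bucketing is one common way to cluster many 2D points quickly: each point is assigned to a grid cell in a single pass, and each non-empty cell becomes a cluster whose point count reflects the local density. The function below is a hypothetical sketch, not the viewer's actual implementation.

```python
from collections import defaultdict


def cluster_points(points, cell_size):
    """Cluster 2D points by bucketing them into square grid cells.

    Points falling into the same cell form one cluster, summarized by its
    centroid and point count; runs in a single O(n) pass over the points.
    """
    buckets = defaultdict(list)
    for x, y in points:
        # Integer cell coordinates; all points in one cell share a key.
        buckets[(int(x // cell_size), int(y // cell_size))].append((x, y))
    clusters = []
    for pts in buckets.values():
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        clusters.append({"centroid": (cx, cy), "count": len(pts)})
    return clusters
```

For example, `cluster_points([(0.1, 0.2), (0.3, 0.1), (5.5, 5.2)], 1.0)` yields two clusters with counts 2 and 1; the counts could then drive density-dependent rendering such as the round outlines shown above.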
Radboud AI event
On February 13th, Radboudumc and Radboud Healthy Data hosted the Radboud AI event, which included a parallel session where DIAG and AWS presented their collaboration. Miriam Groeneveld interviewed Chris Russ from AWS, discussing our collaboration, how it benefits both partners, and how we see the future. See https://www.ru.nl/en/about-us/events/radboud-ai-event-collaboration-for-sustainable-growth.
Cover photo by Janosch Diggelmann on Unsplash