April 2025 Cycle Report

Published 28 April 2025

Scalebar for client-side view item

We've added a scalebar to the client-side view item. This improvement makes it much easier to judge the scale of an image while viewing it. Just like in a CIRRUS view item, the scalebar adjusts with the viewport and provides clear visual indicators of size using tick marks and units. Whether you're zoomed in or out, you'll now see a consistent reference to help you understand the dimensions you're looking at.
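
To give a sense of the mechanics involved, the sketch below shows one common way to size a viewport-aware scalebar: given the physical size of an image pixel and the current zoom factor, pick a round physical length whose on-screen width stays close to a target size. The function, constants, and numbers here are illustrative assumptions, not the actual view item code.

```python
import math

# Illustrative sketch of viewport-aware scalebar sizing (not the actual
# Grand Challenge client code): choose a "nice" physical length whose
# on-screen width stays close to a target number of screen pixels.

NICE_STEPS = (1, 2, 5)  # preferred leading digits for the scalebar label


def scalebar_length_mm(mm_per_pixel: float, zoom: float, target_px: float = 150.0) -> float:
    """Return a round physical length (in mm) for the scalebar.

    mm_per_pixel: physical spacing of one image pixel.
    zoom: current zoom factor (screen pixels per image pixel).
    target_px: desired on-screen scalebar width in screen pixels.
    """
    # Physical length that would exactly span target_px on screen.
    raw_mm = target_px * mm_per_pixel / zoom

    # Snap down to the nearest 1, 2 or 5 times a power of ten.
    exponent = math.floor(math.log10(raw_mm))
    best = 10.0 ** exponent
    for step in NICE_STEPS:
        candidate = step * 10.0 ** exponent
        if candidate <= raw_mm:
            best = candidate
    return best


# Example: 0.5 mm pixels shown at 2x zoom -> a 20 mm scalebar drawn 80 px wide.
length = scalebar_length_mm(mm_per_pixel=0.5, zoom=2.0)
print(length, "mm,", length * 2.0 / 0.5, "px on screen")
```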


Move to client-side viewing

We have started moving our focus from server-side viewing to client-side viewing. This should give our users a smoother and more responsive experience. More on that in later cycle reports.


Improved code examples

Recently, we introduced optional algorithm inputs and parameterized evaluations to Grand Challenge, and these features are now seeing their first real-world use. Algorithm editors and challenge organizers have access to custom-tailored code examples to help kick-start development, but with the release of these new features those examples had become outdated.

This cycle we have updated large parts of the code base that generates the examples to ensure they're fully compatible with the latest features on Grand Challenge:

  • Example algorithms will now dynamically use the input sockets to infer the interface in use (see the sketch further below). In addition, they will by default test the full set of possible interfaces when run locally.
  • Example evaluation methods can now read from additional input sockets and write to additional output sockets.
  • Examples now include snippets showing how to use ground-truth and model tarballs.

In addition:

  • Examples are now generated significantly faster.
  • The local bash scripts have seen several improvements, among them better handling of the various permission schemes found on local systems.
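
To illustrate the interface inference mentioned in the first list above, here is a minimal sketch of the idea: look at which input sockets are actually present under /input and match them against the interfaces the algorithm supports. The socket slugs, interface names, and directory layout below are illustrative assumptions, not the generated example code itself.

```python
from pathlib import Path

# Illustrative sketch of inferring the interface in use from the input
# sockets present under /input. Slugs and interface names are made up.
INPUT_DIRECTORY = Path("/input")

# Hypothetical interfaces: each name maps to the set of input socket slugs
# it expects to find as subdirectories of /input.
INTERFACES = {
    "ct-only": {"ct-image"},
    "ct-with-mask": {"ct-image", "organ-mask"},
}


def present_sockets() -> set[str]:
    """Slugs of the input sockets that are actually present."""
    if not INPUT_DIRECTORY.exists():
        return set()
    return {entry.name for entry in INPUT_DIRECTORY.iterdir() if entry.is_dir()}


def infer_interface() -> str:
    """Pick the interface whose expected sockets match the present ones."""
    sockets = present_sockets()
    for name, expected in INTERFACES.items():
        if expected == sockets:
            return name
    raise RuntimeError(f"No interface matches the input sockets: {sorted(sockets)}")


if __name__ == "__main__":
    # Dispatch to the handler for the inferred interface would happen here.
    print("Detected interface:", infer_interface())
```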

We hope the examples continue to help users kick-start their projects on Grand Challenge!


Budget caps for reader studies

With the introduction of interactive algorithms in reader studies, runtime costs for these studies have gone up. To avoid runaway costs, we have now created a credit system for reader studies. With this system we register the runtime of reader studies in the form of consumed credits, and reader studies that employ interactive algorithms have an increased credit consumption rate. This allows us to set a limit on the credits that may be consumed and to automatically disable a reader study when it runs out of credits.
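
As an illustration only (the rates, budget, and bookkeeping below are made-up numbers, not the actual accounting on Grand Challenge), such a credit system boils down to converting runtime into credits at a rate that depends on whether interactive algorithms are used, and disabling the study once its budget is exhausted:

```python
from dataclasses import dataclass

# Illustrative sketch of a runtime-based credit budget for reader studies.
BASE_RATE = 1  # credits consumed per minute of runtime
INTERACTIVE_RATE = 5  # higher rate when interactive algorithms are used


@dataclass
class ReaderStudy:
    credit_budget: int
    uses_interactive_algorithms: bool
    runtime_minutes: float = 0.0
    enabled: bool = True

    @property
    def consumed_credits(self) -> float:
        rate = INTERACTIVE_RATE if self.uses_interactive_algorithms else BASE_RATE
        return self.runtime_minutes * rate

    def register_runtime(self, minutes: float) -> None:
        """Record runtime and disable the study once the budget is exhausted."""
        self.runtime_minutes += minutes
        if self.consumed_credits >= self.credit_budget:
            self.enabled = False


# Example: an interactive study with a 1000-credit budget is disabled
# after 200 minutes of runtime (200 * 5 = 1000 credits consumed).
study = ReaderStudy(credit_budget=1000, uses_interactive_algorithms=True)
study.register_runtime(200)
print(study.enabled)  # False
```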


Make reader study leaderboard accessible to readers

Educational reader studies allow you to train annotators by providing immediate feedback on their annotation performance. A leaderboard summarizes the performance of each reader compared to the ground truth. Thus far, this leaderboard was only visible to the editors of the reader study. For some educational reader studies, however, it makes sense to open the leaderboard up to readers as well, so that they can see how they performed compared to other readers. This is now possible and can be enabled in the reader study settings for any educational reader study. When enabled, the names and avatars of other readers are hidden to safeguard their privacy; they remain visible only to the editors of the reader study.
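
As a rough sketch of this visibility rule (not the actual implementation), serving the leaderboard to a reader amounts to masking the names and avatars of all rows other than the viewer's own, while editors receive the full data:

```python
# Illustrative sketch of anonymizing leaderboard rows for readers.
def serialize_leaderboard(rows, viewer, viewer_is_editor):
    """Return leaderboard rows, hiding other readers' identities from readers.

    rows: iterable of dicts with "reader", "avatar" and "score" keys.
    viewer: username of the person requesting the leaderboard.
    viewer_is_editor: editors always see full names and avatars.
    """
    ranked = sorted(rows, key=lambda row: row["score"], reverse=True)
    serialized = []
    for rank, row in enumerate(ranked, start=1):
        show_identity = viewer_is_editor or row["reader"] == viewer
        serialized.append({
            "rank": rank,
            "reader": row["reader"] if show_identity else f"Reader {rank}",
            "avatar": row["avatar"] if show_identity else None,
            "score": row["score"],
        })
    return serialized
```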


Photo by Anastasiia Chepinska on Unsplash