December 2024 Cycle Report

Published 9 Dec. 2024

Updated Challenge Pricing Policy

We are excited to announce a new, stress-free pricing model for running challenges on our platform. This update simplifies the budgeting process and ensures fair, transparent pricing while eliminating unexpected costs.

Why the Change?

Estimating costs for a challenge can be daunting, especially for first-time organizers. Factors like dataset size, number of phases, participant count, and algorithm runtime significantly affect the price but are often hard to predict. Since reserved compute costs are non-refundable, this uncertainty can be stressful.

To address these challenges, our new pricing structure splits invoicing into two parts:

  • An upfront charge covering minimal compute and storage costs.
  • A post-challenge invoice for any additional usage, capped at a maximum set by you.

This approach allows for conservative initial estimates while ensuring challenges stay within budget.

New Pricing Model Details

1. Up-Front Invoice

The initial invoice includes:

Base Cost: A flat minimum charge of 6,000 Euros, covering:

  • A contribution to platform maintenance and development, ensuring reliable service.
  • 1,000 Euros for compute and storage.

Reserved Capacity: Additional compute and storage tailored to your challenge, purchased in increments of 500 Euros. The amount required will be estimated and discussed with you during the planning phase.

The initial invoice must be paid up-front and is non-refundable; the compute reservation expires at the end of the calendar year.

2. Post-Challenge Invoice

After your challenge ends, a second invoice will cover any usage beyond the reserved capacity.

Excess Usage: Costs are calculated based on actual additional compute and storage used, rounded up to the nearest 250 Euros.

Cost Cap: You can set a maximum budget cap during planning. If the cap is reached, the challenge will automatically close to avoid unexpected expenses. You will be informed when the costs of your challenge reach 70% and 90% of your total budget, allowing you to increase the cost cap or reduce costs by updating your challenge settings.
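As an illustration of the billing rules above, the following sketch computes a post-challenge invoice. The function and variable names are our own, not platform code; the logic simply applies the stated policy of billing excess usage rounded up to the nearest 250 Euros, with a hard cap.

```python
import math

def post_challenge_invoice(used_euros, reserved_euros, increment=250):
    """Usage beyond the reserved capacity, rounded up to the nearest
    billing increment (250 Euros per the new pricing policy)."""
    excess = max(0, used_euros - reserved_euros)
    if excess == 0:
        return 0  # everything fit within the reserved capacity
    return math.ceil(excess / increment) * increment

# A challenge that reserved 2,000 Euros of compute but used 2,600 Euros
# has 600 Euros of excess, billed as 750 Euros (rounded up to 250s).
print(post_challenge_invoice(2600, 2000))  # 750

# Usage within the reservation incurs no second invoice.
print(post_challenge_invoice(1500, 2000))  # 0
```

The cost cap works upstream of this calculation: because the challenge closes automatically once the cap is hit, the excess term can never exceed the cap you set during planning.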

This updated policy is effective immediately. If you have questions or need assistance planning your next challenge, please don’t hesitate to contact us.


More powerful compute instances

GPU-enabled runtimes have until now only featured the NVIDIA T4, limiting algorithm developers to 16 GB of GPU memory. Challenge organizers can now request the NVIDIA A10G GPU (g5 instances) for their challenge, allowing participants to select this GPU for their runtime environment for algorithm inference jobs. With 24 GB, this is a most welcome increase in GPU memory, enabling the development of more resource-intensive algorithms. These instances are also faster overall, which decreases average runtimes.


Reader study specific keybindings

We made it possible to create keybindings for directly activating specific questions in a reader study. These special keybinding actions appear only when editing keybindings for a reader study, and the questions must have been created beforehand. Each action allows users to activate a question directly by means of a shortcut:

When activating an annotation question, the annotation mode for that particular question will be activated immediately so that editing can begin straight away.

Caveat: Reader study keybindings are assigned by question order. This means that keybindings might be re-associated with other questions if the order of questions is changed.


First steps toward genomics

To broaden the impact of Grand-Challenge, we've added support for two file formats associated with genomics:

  • The Newick format: holding graph-theoretical trees.
  • The Biological Observation Matrix (BIOM) format: holding (sparse) contingency tables.

The new data kinds can be used for both input and output. On upload, files in these formats are validated to ensure data quality and integrity. Options for viewing these new data kinds will be added at a later stage.
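To give a feel for the Newick format, here is a minimal sketch of extracting the leaf names from a Newick tree string. This is our own illustration, not the platform's validation code, and it deliberately ignores quoted labels and comments that full Newick allows.

```python
import re

def newick_leaves(tree):
    """Return the leaf (tip) names of a plain Newick tree string.

    A name that follows '(' or ',' labels a leaf; a name that follows
    ')' labels an internal node and is skipped.
    """
    # Drop branch lengths (e.g. ":0.42") and the trailing semicolon.
    tree = re.sub(r":[^,();]+", "", tree).rstrip(";").strip()
    leaves = []
    name = ""
    last_struct = "("  # most recent structural character
    for ch in tree:
        if ch in "(),":
            if name and last_struct != ")":
                leaves.append(name)
            name = ""
            last_struct = ch
        else:
            name += ch
    if name and last_struct != ")":
        leaves.append(name)
    return leaves

# A rooted tree with internal node labels X, Y and four leaves.
print(newick_leaves("((A:0.1,B:0.2)X,(C,D)Y);"))  # ['A', 'B', 'C', 'D']
```

BIOM files, by contrast, are structured (sparse) contingency tables rather than trees, so they are typically handled with a dedicated library instead of ad-hoc parsing.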


Examples for JSON configuration fields

JSON-based configuration fields on grand-challenge are relatively hard to get right. For your convenience, we now generate example JSON configurations for the view_content and overlay_segment fields, which can be used as a guide when configuring these values. The examples for view_content are specific to the context in which the field is used. You will find them in the help text for the corresponding fields on grand-challenge.
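For illustration, a view_content value is a JSON object, and a quick structural sanity check can catch mistakes before saving. The viewport names and interface slugs below are hypothetical; the generated examples in the help text on grand-challenge are the authoritative reference for the actual format.

```python
import json

# A hypothetical view_content value: viewport names mapping to lists of
# interface slugs. All names here are made up for illustration.
example = """
{
  "main": ["ct-image"],
  "secondary": ["lung-segmentation"]
}
"""

config = json.loads(example)  # raises ValueError on malformed JSON

# Basic shape check: every viewport maps to a list of slug strings.
for viewport, slugs in config.items():
    assert isinstance(slugs, list), f"{viewport} must map to a list"
    assert all(isinstance(s, str) for s in slugs)

print(sorted(config))  # ['main', 'secondary']
```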

Cover photo by Nathan Dumlao on Unsplash