Create Your Own Challenge

Published 21 Jan. 2021

On grand-challenge.org we provide tools for researchers to organize challenges in medical image analysis. This post takes you through the steps of creating your own challenge. We reuse various videos that James Meakin recorded for an internal workshop in our group.

Here is an introduction to the grand-challenge platform:

Click to Create Your Own Challenge

Several things are needed for organizing a challenge: a description of the task that needs to be solved, the specifications of the data, instructions for participants, and contact information. These need to be listed on a challenge website. In addition, tools to manage participants and their submissions are needed for organizers to evaluate different algorithms. We have made this easy for challenge organizers through the grand-challenge.org platform. We offer the following tools:

  • An easy way to create a site and to add and edit pages, like a wiki
  • Registration mechanisms for participants
  • Secure ways for organizers to provide challenge data to participants and for participants to upload results
  • Automated evaluations of uploaded results
  • Automated leaderboard management, including ways to tabulate, sort, and visualize the results

In the video below, James takes you through an example of creating your own challenge.

Data storage

We recommend using Zenodo to make data available to participants in your challenge. Like grand-challenge, Zenodo is open source and promotes Open Science; it is free to use and assigns a DOI to any data set you put on the platform. We have created a grand-challenge.org Community on Zenodo. If you add data for your challenge on Zenodo, please list your record in this community. In the future, we plan to integrate Zenodo more closely into grand-challenge.org.

A drawback of sharing your data via Zenodo is that a single record may not exceed 50 GB. We are currently working on other solutions for datasets larger than 50 GB; in the meantime, feel free to write to us and we can help you.

Automated evaluation

Every challenge has a unique way to objectively evaluate incoming submissions. More often than not, the evaluation scripts come with a set of dependencies and a computational environment that are difficult to replicate on the host server. Therefore, we require every challenge organizer to provide a Docker container that packages the evaluation scripts. This container runs on our servers and executes the evaluation scripts on each incoming submission.
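To give a concrete feel for what such a container does, here is a minimal sketch of an evaluation script in Python. It reads a participant's predictions from /input (where grand-challenge mounts the submission), compares them against ground truth packaged inside the container, and writes the result to /output/metrics.json. The file names and the toy accuracy metric below are placeholders; your challenge will have its own formats and metrics.

    import json
    from pathlib import Path

    INPUT = Path("/input")    # grand-challenge mounts the submission here
    OUTPUT = Path("/output")  # metrics.json is collected from here after the run

    def main():
        # Placeholder file names: adapt these to your challenge's submission format.
        predictions = json.loads((INPUT / "predictions.json").read_text())
        # The ground truth ships inside the container image, next to this script.
        ground_truth = json.loads(Path("ground_truth.json").read_text())

        # Toy metric: the fraction of cases that were labelled correctly.
        correct = sum(
            predictions.get(case) == label for case, label in ground_truth.items()
        )
        metrics = {"accuracy": correct / len(ground_truth)}

        (OUTPUT / "metrics.json").write_text(json.dumps(metrics))

    if __name__ == "__main__":
        main()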

Building your evaluation container

To make the process easier, we created evalutils. It is a Python package that helps you set up a project structure, load and validate submissions, and package your evaluation scripts in a Docker container compatible with grand-challenge.org. Note that you do not have to use evalutils.

Requirements

You can use your favorite Python environment to install evalutils.

pip install evalutils

Once you've installed the above requirements, you can follow the instructions for getting started and building your evaluation container here, and watch the video below to see James take you through an example.
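To give an impression of what evalutils generates, the entry point of a project for a simple CSV-based classification challenge looks roughly like the sketch below. The class name and the expected column names are placeholders, and the exact structure may differ between evalutils versions, so consult the generated code and the evalutils documentation.

    from evalutils import ClassificationEvaluation
    from evalutils.io import CSVLoader
    from evalutils.validators import ExpectedColumnNamesValidator

    class MyChallengeEvaluation(ClassificationEvaluation):
        def __init__(self):
            super().__init__(
                # Load the submission and the ground truth as CSV files.
                file_loader=CSVLoader(),
                # Reject submissions that don't have the expected columns.
                validators=(
                    ExpectedColumnNamesValidator(expected=("case", "class")),
                ),
                # Join predictions to the ground truth on the "case" column.
                join_key="case",
            )

    if __name__ == "__main__":
        MyChallengeEvaluation().evaluate()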

Uploading your evaluation container

evalutils also provides methods to build, test, and export your container. Watch the video below for a walkthrough of these capabilities from James.
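For reference, the generated project includes small helper scripts for each of these steps, along the lines of the commands below. The script names may differ between evalutils versions, so check your generated project.

    ./build.sh    # build the Docker image for the evaluation
    ./test.sh     # run the image against the bundled test submission
    ./export.sh   # save the image to a file that you can upload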

Once you have created your evaluation container, you can upload it to your challenge by selecting Admin → Methods → + Add a new method.

You can turn on automated evaluation by navigating to Admin → ⚙ Settings → Automated Evaluation and checking Use evaluation.

Other Features

Using Teams

Teams allow groups of users to be identified on the leaderboard; using teams does not limit how many submissions each team member can make.

You can turn this feature on by navigating to Admin → ⚙ Settings → Automated Evaluation and checking Use teams.

Multiple phases, multiple leaderboards

We have recently added the option to create multiple Phases for a challenge. Each phase has its own submission procedure and its own leaderboard. We'll be explaining more about Phases in an upcoming blog post. An example of a challenge that uses Phases is PAIP 2020.

Visualizations for challenges

Click this link for further information on setting up visualizations of algorithm results, which can give insight into algorithm performance and point to new research directions.

Deleting your challenge

To delete your challenge, please contact support@grand-challenge.org.

Future plans

In 2021, we will start hosting challenges where participants upload their algorithms as Docker containers, which we then apply to the test data. This way of running challenges avoids having to release the test data to participants. We have also implemented support for uploading algorithms that users can try out on their own data, and web-based interactive viewers that can be used for reader studies. We plan a broader roll-out of this functionality in the near future.

Contributions

You are most welcome to help us further develop and extend the grand-challenge platform. The code repository and bug/issue tracker are on GitHub; feel free to create a new issue there.

Icons for this post were made by Freepik from www.flaticon.com