Create Your Own Challenge
This page contains the instructions for creating your own challenge on grand-challenge.org.
To host a challenge, use the challenge creation form on grand-challenge.org. Most of the form should be self-explanatory. You can also consult the docs.
Tools provided
We offer the following tools for challenge organizers:
- An easy way to create a site, add and edit pages like a wiki
- Registration mechanisms for participants
- Secure ways for organizers to provide challenge data to participants and for participants to upload results
- Automated evaluations of uploaded results
- Automated leaderboard management, including ways to tabulate, sort and visualize the results
Managing pages
You can add, edit, order, and delete the pages of your challenge from the page navigation panel, which you reach by selecting Admin -> Pages.
Automated evaluation
Every challenge has its own way of objectively evaluating incoming submissions. More often than not, the evaluation scripts come with a set of dependencies and computational environments that are difficult to replicate on the host server. We therefore require every challenge organizer to provide a Docker container image that packages the evaluation scripts. This container runs on our servers to evaluate each incoming submission.
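To give an idea of what such a container does, here is a minimal sketch of an evaluation script in Python. It assumes the conventions grand-challenge.org evaluation containers follow: the submission is mounted under /input and the computed metrics must be written as JSON to /output/metrics.json. The file names and the accuracy metric are purely illustrative.

import json
from pathlib import Path

INPUT = Path("/input")    # where grand-challenge.org mounts the submission
OUTPUT = Path("/output")  # where grand-challenge.org reads the results

def main():
    # Load the participant's predictions (the file name is illustrative).
    predictions = json.loads((INPUT / "predictions.json").read_text())

    # Compute your challenge-specific metrics; this accuracy computation
    # against a ground truth shipped inside the image is only a placeholder.
    ground_truth = json.loads(Path("ground_truth.json").read_text())
    correct = sum(
        1 for case, label in ground_truth.items()
        if predictions.get(case) == label
    )
    metrics = {"accuracy": correct / len(ground_truth)}

    # grand-challenge.org picks up the metrics from /output/metrics.json.
    (OUTPUT / "metrics.json").write_text(json.dumps(metrics))

if __name__ == "__main__":
    main()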
Building your evaluation container
To make this process easier, we created evalutils. It is a Python package that helps you set up a project structure, load and validate submissions, and package your evaluation scripts in a Docker container compatible with grand-challenge.org.
Requirements
You can use your favorite Python environment to install evalutils:

pip install evalutils
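For an isolated setup you can use Python's built-in venv module, for example:

python -m venv venv
source venv/bin/activate
pip install evalutils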
Once you have installed evalutils, you can follow its documentation for getting started and building your evaluation container.
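To give an impression of what evalutils generates, the heart of a generated project is an evaluation class along the lines of the sketch below. This is a sketch only: the class name and the expected column names are hypothetical, and the exact evalutils API may differ between versions.

from evalutils import ClassificationEvaluation
from evalutils.io import CSVLoader
from evalutils.validators import ExpectedColumnNamesValidator

class MyChallengeEvaluation(ClassificationEvaluation):
    def __init__(self):
        super().__init__(
            # Load the ground truth and the submission as CSV files.
            file_loader=CSVLoader(),
            # Reject submissions whose columns do not match what we expect.
            validators=(
                ExpectedColumnNamesValidator(expected=("case", "class")),
            ),
            # Match submission rows to ground-truth rows on this column.
            join_key="case",
        )

if __name__ == "__main__":
    MyChallengeEvaluation().evaluate()

Running the script validates the submission, joins it with the ground truth, computes the metrics, and writes them to the location grand-challenge.org expects.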
Uploading your evaluation container
Once you have created your evaluation container, you can upload it to your challenge by selecting Admin -> Methods.
From this page, you can add and manage your evaluation methods.
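The container image itself is uploaded as a gzipped tarball of the image, as produced by docker save. Assuming your evaluation image is tagged my-evaluation (an illustrative name), you can create the file like this:

docker save my-evaluation | gzip -c > my-evaluation.tar.gz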
NOTE: You can turn on automated evaluation by navigating to Admin -> Challenge Settings -> Automated Evaluation
Data storage
We use Zenodo to handle and store large datasets, and we strongly recommend that challenge organizers use it as well. Zenodo is open source, so the platform can also be installed on your own server. Note that Zenodo has an upper limit of 50 GB per dataset. We are currently working on other solutions for datasets larger than 50 GB; feel free to write to us and we can help you.
Future plans
In 2020 we will start hosting challenges where participants upload their algorithms in the form of Docker containers, which are then applied to the test data. This way of running challenges avoids making the test data available to challenge participants. We have also implemented the possibility to upload algorithms that users can try out on their own data, and web-based interactive viewers that can be used for reader studies. We plan a broader roll-out of this functionality in the near future.
Listing your challenge
If you just want your challenge listed in the overview on the Challenges page and you run the challenge on your own site, email support@grand-challenge.org with the details and we will list it.
Contribution
You are most welcome to help us further develop and extend the grand-challenge platform. The bug/issue tracker and code repository are on GitHub; you can create a new issue there.
About us
grand-challenge.org is currently maintained by the RSE Team of the Diagnostic Image Analysis Group at Radboudumc, Nijmegen, The Netherlands.
In 2012, a team with members from five groups in medical image analysis decided to build a platform to easily set up websites for challenges in biomedical image analysis. We named our group the Consortium for Open Medical Image Computing (COMIC). We decided the platform should be developed in Python and Django, and should be open source. grand-challenge.org runs on the hardware of the Diagnostic Image Analysis Group.