Create Your Own Challenge
Click here to host a challenge.
Most steps should be self-explanatory; you can also consult the docs.
If you run your challenge on your own site and just want it listed in the overview on the Challenges page, mail the details to email@example.com and we'll list it.
We offer the following tools for challenge organizers:
- An easy way to create a site, add and edit pages like a wiki
- Registration mechanisms for participants
- Secure ways for organizers to provide challenge data to participants and for participants to upload results
- Automated evaluations of uploaded results
- Automated leaderboard management, including ways to tabulate, sort and visualize the results
We use Zenodo to handle and store large datasets; the platform can also be installed on your own server. We strongly recommend that challenge organizers use Zenodo. Note that it has an upper limit of 50 GB. We are currently working on other solutions for datasets larger than 50 GB; feel free to write to us and we can help you.
Why automated evaluation?
Every challenge has its own way to objectively evaluate incoming submissions. More often than not, the evaluation scripts come with dependencies and computational environments that are difficult to replicate on the host server. We have therefore decided that every challenge organizer must provide a Docker container that packages the evaluation scripts. This container runs on our servers to evaluate each incoming submission.
To make this process easier, we created evalutils, a Python package that helps you create a project structure, load and validate submissions, and package the evaluation scripts in a Docker container compatible with grand-challenge.org.
You can use your favorite Python environment to install it:
pip install evalutils
Once you've installed the above requirements, you can follow the instructions for getting started here.
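To give a concrete picture of what an evaluation step does (the file contents and the accuracy metric below are illustrative assumptions, not the evalutils API), a minimal evaluation script might validate a submission against the ground truth and emit a metrics object:

```python
import json

# Hypothetical case labels; a real challenge defines its own data formats
ground_truth = {"case_01": 1, "case_02": 0, "case_03": 1}
predictions = {"case_01": 1, "case_02": 1, "case_03": 1}

def evaluate(gt, pred):
    """Validate the submission, then compute a simple accuracy metric."""
    missing = set(gt) - set(pred)
    if missing:
        raise ValueError(f"Submission is missing cases: {sorted(missing)}")
    correct = sum(pred[case] == label for case, label in gt.items())
    return {"accuracy": correct / len(gt)}

metrics = evaluate(ground_truth, predictions)
# The platform would consume this JSON to update the leaderboard
print(json.dumps(metrics))
```

An evaluation like this is what ends up containerized: the platform feeds it each uploaded submission and reads the resulting metrics for the automated leaderboard.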
In 2020 we will start hosting challenges where participants upload their algorithms as Docker containers, which are then applied to the test data. This way of running challenges prevents the test data from being made available to challenge participants. We have also implemented the possibility to upload algorithms that users can try out with their own data, and web-based interactive browsers that can be used for reader studies. We plan a broader roll-out of this functionality in the near future.
grand-challenge.org is currently maintained by the RSE Team of the Diagnostic Image Analysis Group at Radboudumc, Nijmegen, The Netherlands. More information about grand-challenge.org can be found here.
In 2012, a team with members from five groups in medical image analysis decided to build a platform for easily setting up websites for challenges in biomedical image analysis. We named our group the Consortium for Open Medical Image Computing (COMIC). We decided the platform should be developed in Python, using Django, and should be open source. grand-challenge.org runs on the hardware of the Diagnostic Image Analysis Group.