About

A platform for end-to-end development of machine learning solutions in biomedical imaging.

Challenges in medical image analysis became popular after the organization of the Grand Challenges for Medical Image Analysis at the MICCAI conference in 2007. Hosting challenge events quickly became commonplace for conferences: MICCAI, ISBI, and SPIE Medical Imaging, among others, have hosted challenge events. Leading journals such as IEEE Transactions on Medical Imaging and Medical Image Analysis have welcomed overview papers describing the results of individual challenges.

Maintaining a challenge so that new submissions are processed quickly is a lot of work. Typically, a junior researcher at some institution is responsible for the challenge website, but at some point that researcher moves on and the site is no longer kept up to date.

Grand Challenge was created in 2010 to make it easy for challenge organizers to set up a website for a particular challenge. Its aim was to make all information on challenges in the domain of biomedical image analysis available in a single place. In 2012 we switched to the Django web framework, marking 2012 as our founding year.

Grand Challenge initially allowed Challenge organizers to upload, in the form of a Docker container, the code that computes the score for a submission. This system has been operational since 2017. In recent years container technology has become more widely used, and several Challenges have been organized in which participants are asked to upload a container with their algorithm. The Challenge organizers describe the task, the inputs (typically one or more scans), and the expected outputs (for example, a set of detected objects with their locations, a classification, or a segmentation); the system of inputs and outputs is flexible and extensible. Participants can then create an Algorithm by providing a container that performs the computational task, reading the inputs and generating the outputs as described in the Challenge. They can also specify the computational requirements (CPU, RAM, GPU) needed to run the Algorithm.
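To make the input/output contract concrete, here is a minimal sketch of what an algorithm container's entry point might look like. It is illustrative only: the `/input` and `/output` directories, the per-case JSON files, and the `results.json` output name are assumptions for this sketch; the actual interface is defined by the Challenge you are submitting to.

```python
import json
from pathlib import Path


def run_algorithm(input_dir: Path, output_dir: Path) -> dict:
    """Read each input case description and produce a toy result per case."""
    results = {}
    for case_file in sorted(input_dir.glob("*.json")):
        case = json.loads(case_file.read_text())
        # Placeholder inference: a real Algorithm would load the scan
        # referenced by the case and run a trained model here.
        results[case["name"]] = {"probability": 0.5}
    output_dir.mkdir(parents=True, exist_ok=True)
    (output_dir / "results.json").write_text(json.dumps(results))
    return results


if __name__ == "__main__":
    # Assumed convention for this sketch: the platform mounts inputs at
    # /input and collects outputs from /output inside the container.
    run_algorithm(Path("/input"), Path("/output"))
```

Keeping the core logic in a function that takes the directories as arguments makes the container testable outside the platform before you upload it.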

The Challenge organizers then run this algorithm on the secret test data. In this way, it is possible to organize Challenges where the test data is never released to the participants and, more importantly, where the submitted solutions are directly available to the Grand Challenge community.

Algorithms can also be created as standalone entities, with the creator defining the inputs, outputs, and computational requirements. Whether an Algorithm is created as a submission to a Challenge or as its own entity, physicians and clinical researchers can upload their own data, have the Algorithm process it, and download the results.

Grand Challenge provides all the key functionality for running Challenges: user management and role-based access control, a discussion forum to facilitate communication with and between participants, the option for participants to form teams, support for adding multiple phases to a Challenge, each with its own leaderboard, and much more. Over 90,000 user accounts have been created on Grand Challenge from countries across the globe. We now process thousands of submissions per month, totaling almost 100,000 evaluated submissions that have been placed on a leaderboard.

We have extended the platform with support for various medical viewers that run in the browser and the possibility to set up Reader Studies. In a Reader Study, a user is presented with images and a set of questions. Questions can include annotation tasks, for example, "Segment the liver". The organizers of the Reader Study can download the results via the website or the API. With Reader Studies, researchers can carry out observer studies or set up the annotation efforts that are usually needed to run a challenge. You can even set up training programs for physicians; by providing a ground truth, it is possible to give immediate feedback after a question has been answered.
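As a rough sketch of fetching results programmatically, the snippet below builds an authenticated request against the REST API using only the standard library. The `/api/v1` base path, the `reader-studies/answers` endpoint, and the `BEARER` token header format are assumptions for this sketch; check the API documentation for the exact endpoints and authentication scheme.

```python
import json
from urllib.request import Request, urlopen

API_ROOT = "https://grand-challenge.org/api/v1"  # assumed base URL


def build_request(endpoint: str, token: str) -> Request:
    """Construct an authenticated GET request for an API endpoint.

    The header format is illustrative; consult the API docs for the
    scheme that applies to your account.
    """
    return Request(
        f"{API_ROOT}/{endpoint}/",
        headers={
            "Authorization": f"BEARER {token}",
            "Accept": "application/json",
        },
    )


def fetch_answers(token: str) -> list:
    # Hypothetical call: list the answers recorded for your Reader Studies.
    req = build_request("reader-studies/answers", token)
    with urlopen(req) as resp:
        return json.load(resp).get("results", [])
```

Separating request construction from execution keeps the networking code easy to verify before pointing it at live data.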

The team

The software behind Grand Challenge is open source. It was largely written by the team of research software engineers from the Diagnostic Image Analysis Group at Radboud University Medical Center, which is also the responsible entity for the website https://grand-challenge.org. If you would like to suggest new features, set up your own infrastructure, or contribute, have a look at our developer documentation or leave a message in the Forum. You can find more information in our documentation.