Organizing a medical imaging challenge in the 2020s: practical considerations

Published 5 Feb. 2024

πŸ₯πŸ”¬Things to Consider Before Organizing a Challenge in Medical Imaging πŸ₯πŸ”¬

Organizing a deep learning challenge in medical imaging is a commendable initiative, as it can help drive advancements in the field and bring together researchers and practitioners to tackle important healthcare problems. However, there are several key considerations you should keep in mind to ensure the success and integrity of the challenge.

Check Relevant Literature πŸ“š

Over the years, the scientific community has compiled various resources and guidelines to help challenge organizers design better challenges. We highly recommend reviewing this literature, such as the metric-validation papers listed in the Pitfalls section below, before you start working on your challenge.

Clear Objective, Problem Definition 🎯

Define a specific medical imaging problem that your challenge aims to address. Clearly articulate the objectives, goals, and expected outcomes of the challenge. Assemble a Scientific Advisory Board to brainstorm various aspects and expected outcomes.

Data Collection and Privacy πŸ›‘οΈ

  • High Quality: Check your data for confounding variables, exclude cases with artifacts, and ensure a sufficient number of test cases.
  • Ground Truth: Ensure a reliable gold standard; consider using other modalities to establish the ground truth.
  • Range of Variations: Ensure diverse and representative data for your task.

Data License πŸ“œ

Establish a permissive and widely recognized license (CC BY / CC BY-NC / CC BY-NC-SA). Avoid CC BY-ND: its NoDerivatives clause restricts sharing derivative works and is often read as precluding the release of models trained on the data.

Data Sharing 🌐

Share data via Zenodo, the AWS Open Data Registry, or a similar platform that supports the FAIR data principles. Hosting data on Zenodo automatically assigns a DOI for clear referencing.

Ethical Considerations πŸ€”

Ensure alignment with ethical standards and guidelines for medical research, covering data usage, participant guidelines, patient consent, and potential implications.

Funding & Hardware πŸ’»πŸ’°

  • Investigate pricing policies of diverse platforms.
  • Explore funding or collaborative partnerships to defray costs.
  • Consider an efficient testing pipeline to curb computational expenses.
  • Plan for funding in advance for prizes or incentives.

Prizes and Incentives πŸ†

Offer attractive prizes to motivate participants, including cash rewards, travel grants, access to datasets, and collaboration opportunities.

Evaluation Metrics πŸ“

Define appropriate evaluation metrics reflecting clinical relevance and accuracy. Make the evaluation pipeline public to save participants time.
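As a concrete example, overlap metrics such as the Dice similarity coefficient are a common choice for segmentation tasks. A minimal sketch is below; the function name and the epsilon smoothing term are our own illustrative choices, not a prescribed implementation:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    The eps term keeps the score defined (and equal to 1.0) when both
    masks are empty, one common convention among several.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

Publishing exactly this kind of function alongside the challenge removes ambiguity about edge cases (empty masks, tie-breaking) that otherwise cause disputes over the leaderboard.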

Baseline Models πŸ“Š

Provide baseline models for participants to compare solutions against. Open source the code to lower entry barriers and encourage building upon existing solutions.

Expert Involvement πŸ‘©β€βš•οΈπŸ‘¨β€βš•οΈ

Involve medical professionals and domain experts for insights on problem formulation, dataset curation, and evaluation metrics.

Code of Conduct πŸ“œ

Establish a clear code of conduct for participants and organizers to maintain professionalism and respectful interactions.

Timelines and Deadlines πŸ“…

Create a realistic timeline for the challenge, allowing sufficient time for development and refinement of solutions.

Transparency and Reproducibility πŸ”πŸ”„

Encourage detailed documentation of methods, code, and algorithms for transparency and reproducibility. Set up algorithm submission challenges for better validation.

Community Engagement πŸ’¬

Foster a sense of community through communication channels for questions, ideas, and collaboration.

Publication and Dissemination πŸ“°

Consider organizing workshops or conferences for knowledge exchange and broader dissemination of advancements made through the challenge.

Post-Challenge Plans πŸš€

Plan for the post-challenge phase, including testing winning solutions in real-world scenarios, collaborations, and follow-up research initiatives.

🚧 Pitfalls of Challenge Organizers 🚧

The grand-challenge.org team runs regular surveys to streamline the organization workflow and improve the quality of future challenges. We have collected feedback from organizers of past challenges, highlighting aspects they wish they had put more emphasis on while planning their challenge.

Poorly Thought Out Evaluation Metric: Choosing a metric appropriate to your challenge objective is key to obtaining sound solutions. Use multiple metrics if needed, and check the following literature for reference.

  1. Reinke, A., et al. (2023). Understanding metric-related pitfalls in image analysis validation (arXiv:2302.01790). arXiv. http://arxiv.org/abs/2302.01790

  2. Maier-Hein, L., et al. (2022). Metrics reloaded: Pitfalls and recommendations for image analysis validation (arXiv:2206.01653). arXiv. https://doi.org/10.48550/arXiv.2206.01653

  3. Reinke, A., et al. (2022). Common Limitations of Image Processing Metrics: A Picture Story (arXiv:2104.05642). arXiv. https://doi.org/10.48550/arXiv.2104.05642
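One pitfall this literature describes, misleadingly high accuracy under class imbalance, can be reproduced in a few lines. The 2% prevalence and the synthetic labels below are illustrative assumptions:

```python
import numpy as np

# Hypothetical screening test set where only ~2% of cases are positive.
rng = np.random.default_rng(0)
labels = (rng.random(10_000) < 0.02).astype(int)

# A degenerate "model" that always predicts the negative class.
preds = np.zeros_like(labels)

accuracy = (preds == labels).mean()      # ~0.98: looks excellent
sensitivity = preds[labels == 1].mean()  # 0.0: the model detects nothing
print(f"accuracy={accuracy:.3f}, sensitivity={sensitivity:.3f}")
```

A leaderboard ranked on accuracy alone would reward this useless model, which is why reporting complementary metrics (sensitivity, specificity, Dice, etc.) matters.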

Not Taking Into Account the Algorithm Runtime: Set a relevant limit on the expected algorithm runtime to avoid impractical models.

Any Rule That You Cannot Enforce Will Be Broken: Strictly enforce runtime limits and other rules to maintain fairness.

Make the Code for Evaluation Public: Ensure transparency by making the evaluation code public.

Make the Baseline Code Public: Lower entry barriers by sharing well-documented baseline code.

Team Formation Rules: Set rules for team formation and verify that the platform can enforce them.

Forum: Create a forum for effective communication between participants and organizers.

User Verification: Implement a user verification system to prevent participants from gaining extra submissions through duplicate accounts.

Funding and Hardware: Research pricing policies, estimate the budget, and seek funding for the challenge.

Not Having Time in Reserve: Allocate extra time for unforeseen issues that arise while organizing.

πŸš€ Set up a challenge on grand-challenge.org platform πŸš€

Our documentation page provides a hands-on tutorial on how to set up a challenge on our platform.