Winner announcement
By: mschuiveling on March 17, 2025, 2:25 p.m.
Challenge Winners Announcement
Dear Participants,
The challenge has officially concluded, and we are happy to announce the final results.
Your enthusiasm and dedication have exceeded our expectations, and we very much appreciate your contributions.
Our original evaluation code computed the nuclei F1 score as the average of the per-image F1 scores, whereas our intention was to compute it from the true positives, false positives, and false negatives summed across all images. We therefore report both metrics. For both the summed F1 score and the averaged F1 score, the top-ranked teams remain the same across tracks.
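The difference between the two aggregations can be sketched as follows. This is an illustrative example only; the per-image counts below are hypothetical and not taken from the challenge data.

```python
def f1(tp, fp, fn):
    """Standard F1 from raw counts; returns 0.0 when undefined."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Hypothetical per-image detection counts: (true pos, false pos, false neg)
images = [(90, 10, 20), (5, 1, 30), (40, 5, 5)]

# "Averaged" F1 (original code): compute F1 per image, then take the mean.
averaged_f1 = sum(f1(*counts) for counts in images) / len(images)

# "Summed" F1 (intended metric): pool the counts over all images,
# then compute a single F1 from the pooled totals.
tp = sum(c[0] for c in images)
fp = sum(c[1] for c in images)
fn = sum(c[2] for c in images)
summed_f1 = f1(tp, fp, fn)

print(round(averaged_f1, 4), round(summed_f1, 4))
```

The averaged variant weights every image equally, so images with few nuclei can pull the score down sharply; the summed variant weights every nucleus equally, which is why the summed F1 columns in the tables below are consistently higher than the averaged ones.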
When teams share the same mean ranking, the mean of the nuclei F1 score and the tissue Micro Dice is used to determine the higher-performing team.
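A minimal sketch of this tie-break rule, using the two Track 1 teams that share a mean position of 2.5 (scores taken from the table below):

```python
# Each tuple: (team, mean_position, summed_nuclei_f1, micro_dice)
teams = [
    ("NiTo (LSM)", 2.5, 0.7443, 0.7237),
    ("rictoo", 2.5, 0.7578, 0.6326),
]

def sort_key(entry):
    name, mean_pos, nuclei_f1, dice = entry
    # Lower mean position ranks first; ties are broken by the higher
    # mean of nuclei F1 and Micro Dice (negated for ascending sort).
    return (mean_pos, -((nuclei_f1 + dice) / 2))

ranked = sorted(teams, key=sort_key)
print([t[0] for t in ranked])
```

Here NiTo's mean of F1 and Micro Dice (0.7340) exceeds rictoo's (0.6952), so NiTo takes rank 2 and rictoo rank 3, matching the final standings.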
Final Rankings – Track 1

| Rank | Team | Summed Macro F1 | Macro F1 | Micro Dice (Tissue) | Mean Summed Nuclei F1 & Micro Dice | Mean Averaged Nuclei F1 & Micro Dice | Mean Position Leaderboard | Final Rank (Summed Nuclei F1 & Micro Dice) | Final Rank (Averaged Nuclei F1 & Micro Dice) |
|---|---|---|---|---|---|---|---|---|---|
| #1 | wildsquirrel (TIAKong) | 0.7439 | 0.6466 | 0.7823 | 0.7631 | 0.7145 | 2.0 | 1.0 | 1.0 |
| #2 | NiTo (LSM) | 0.7443 | 0.6501 | 0.7237 | 0.7340 | 0.6869 | 2.5 | 2.0 | 2.0 |
| #3 | rictoo | 0.7578 | 0.6585 | 0.6326 | 0.6952 | 0.6456 | 2.5 | 3.0 | 3.0 |
| #8 | Baseline | 0.6940 | 0.5980 | 0.5548 | 0.6244 | 0.5764 | 8.0 | 8.0 | 8.0 |
Final Rankings – Track 2

| Rank | Team | Summed Macro F1 | Macro F1 | Micro Dice (Tissue) | Mean Summed Nuclei F1 & Micro Dice | Mean Averaged Nuclei F1 & Micro Dice | Mean Position Leaderboard | Final Rank (Summed Nuclei F1 & Micro Dice) | Final Rank (Averaged Nuclei F1 & Micro Dice) |
|---|---|---|---|---|---|---|---|---|---|
| #1 | NiTo (LSM) | 0.4897 | 0.2707 | 0.7798 | 0.6348 | 0.5253 | 1.5 | 1.0 | 1.0 |
| #2 | wildsquirrel (TIAKong) | 0.4669 | 0.2656 | 0.7823 | 0.6246 | 0.5240 | 1.5 | 2.0 | 2.0 |
| #3 | agaldran | 0.4778 | 0.2617 | 0.6204 | 0.5491 | 0.4411 | 4.5 | 3.0 | 3.0 |
| #11 | Baseline | 0.2977 | 0.2040 | 0.5548 | 0.4263 | 0.3794 | 10.5 | 10.0 | 11.0 |
A big thank you to all participants for your hard work and dedication!
As a next step, we plan to validate the top-ranked submitted algorithms in a real-world melanoma patient cohort to assess their potential in predicting treatment response and survival in patients treated with immune checkpoint inhibition therapy. These results will be included in the final PUMA challenge manuscript, along with a description of the submitted methodologies.
Participants are of course also free to publish their own articles on their approaches.
We will reach out soon regarding methodology descriptions and the first draft of the publication.
If you have any questions, feel free to reach out.
Congratulations to the winners, and once again a very big thank you to all participants!
Best regards,
On behalf of the PUMA Challenge Team,
Mark