Open Leaderboard ¶
By: culrich on Nov. 18, 2022, 9:10 a.m.
Hi,
First of all, I wanted to thank the organizers again for this great challenge, and especially for providing an open leaderboard.
Many leaderboards in the field of medical imaging are overloaded with submissions, and it is not clear which method belongs to which entry. Very often, people don't stick to the rules of the original challenge: they use pretrained models, additional data, or huge ensembles, or they overfit the test set with hundreds of submissions. Some of these approaches are justified, but to make a fair comparison of one's own method, it is important to at least know what one is comparing against.
To prevent this from happening, it would be helpful if some information were requested for each submission:
- Single model or ensemble?
- Pretrained model?
- Additional data used?
- Trained only on the training data, or on the validation data as well?
- Possibility to add a link to a publication (even later)?
The community could probably suggest more important aspects. I would love to get your feedback :)
Best, Constantin