Update: Training Data, Baselines, and Evaluation Metrics
By: amparobt9 on April 15, 2025, 5:31 p.m.
Dear all,
The training dataset for both tasks is now available for download, and the baseline algorithms have been published here.
Additionally, we want to inform you that the evaluation metrics and ranking procedure have been updated. We previously considered including a time score metric to evaluate inference time; however, we can only capture the total runtime, which includes both model loading and inference, and the Grand Challenge platform already enforces a runtime limit. We have therefore decided to remove the time score metric, since all successful submissions will already meet this constraint.
For clarity on ranking:
- Teams will receive individual rankings for each performance metric.
- The overall ranking is determined by averaging these individual metric rankings (see the short sketch after this list).
- The winning teams are the ones with the best overall ranking.
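To make the procedure concrete, here is a minimal, unofficial Python sketch of rank averaging. The metric names, team names, and scores are hypothetical, higher scores are assumed to be better, and ties are not specially handled; this is only an illustration of the aggregation described above, not the official scoring code.

```python
# Unofficial illustration of the ranking procedure: rank teams per metric,
# then average the per-metric ranks to obtain the overall ranking.
# All names and scores below are hypothetical.

def overall_ranking(scores_per_metric):
    """scores_per_metric: {metric_name: {team_name: score}}, higher score = better."""
    rank_sums = {}
    num_metrics = len(scores_per_metric)
    for metric, team_scores in scores_per_metric.items():
        # Rank 1 = best score on this metric.
        ordered = sorted(team_scores, key=team_scores.get, reverse=True)
        for rank, team in enumerate(ordered, start=1):
            rank_sums[team] = rank_sums.get(team, 0) + rank
    # Lower average rank = better overall ranking.
    avg_ranks = {team: total / num_metrics for team, total in rank_sums.items()}
    return sorted(avg_ranks.items(), key=lambda item: item[1])

# Hypothetical example with two metrics and three teams:
example = {
    "metric_1": {"team_a": 0.85, "team_b": 0.78, "team_c": 0.80},
    "metric_2": {"team_a": 0.70, "team_b": 0.75, "team_c": 0.68},
}
print(overall_ranking(example))
# -> [('team_a', 1.5), ('team_b', 2.0), ('team_c', 2.5)]
```

In this example, team_a is ranked 1st on metric_1 and 2nd on metric_2, giving it the best average rank and therefore the top overall ranking.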
Note: The runtime limit is 5 minutes and applies to the complete runtime of the algorithm container for processing one image, including both model loading and inference.
We look forward to seeing your innovative solutions for the challenge!
Best regards,
The PANTHER Organizing Team