Dear participants,

Following recent quality checks conducted on the PANORAMA data set, we identified an overlap of 14 cases between the public training data set (2,238 cases) and the hidden tuning set (100 cases). To address this issue, we have withdrawn the 14 overlapping cases from the development-phase tuning archive. Consequently, we have recomputed the leaderboard results using the effective tuning set of 86 cases.
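For illustration only, the deduplication step can be sketched as a set operation over case identifiers. The identifier format and counts below are hypothetical placeholders, not the actual PANORAMA case IDs; only the 2,238 / 100 / 14 / 86 figures come from the announcement.

```python
# Hypothetical sketch: case IDs are invented; only the set sizes match the announcement.
training_ids = {f"case_{i:04d}" for i in range(2238)}          # public training set (2,238 cases)
tuning_ids = {f"case_{i:04d}" for i in range(2224, 2324)}      # hidden tuning set (100 cases)

overlap = training_ids & tuning_ids        # cases leaked into both sets
effective_tuning = tuning_ids - overlap    # withdraw the overlapping cases

print(len(overlap))           # 14 overlapping cases
print(len(effective_tuning))  # 86 cases remain for the recomputed leaderboard
```

The same check (intersection of ID sets, then set difference) applies regardless of how the identifiers are actually formatted.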

This adjustment has resulted in minor changes to submission performance metrics. However, all revised results remain well within the 95% confidence intervals of the original AUROC and AP metrics, computed using bootstrapping with 10,000 iterations. Additionally, submission rankings within each team remain unchanged, so there is no impact on the final model selection for the testing phase.
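As a minimal sketch of the confidence-interval procedure described above: the percentile bootstrap resamples cases with replacement, recomputes the metric on each resample, and takes the 2.5th and 97.5th percentiles. The function names, the rank-based AUROC implementation, and the example data below are our own illustrative assumptions, not the organizers' evaluation code.

```python
import random

def auroc(labels, scores):
    # Rank-based AUROC (Mann-Whitney U): probability that a random positive
    # case outscores a random negative case, counting ties as 0.5.
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_ci(labels, scores, metric, n_iter=10_000, alpha=0.05, seed=0):
    # Percentile bootstrap: resample cases with replacement n_iter times,
    # recompute the metric, and return the alpha/2 and 1-alpha/2 quantiles.
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    for _ in range(n_iter):
        idx = [rng.randrange(n) for _ in range(n)]
        ls = [labels[i] for i in idx]
        if len(set(ls)) < 2:
            continue  # skip degenerate resamples containing only one class
        stats.append(metric(ls, [scores[i] for i in idx]))
    stats.sort()
    lo = stats[int((alpha / 2) * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return lo, hi

# Illustrative data only; not actual PANORAMA submissions.
labels = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.6, 0.1, 0.95, 0.5]
lo, hi = bootstrap_ci(labels, scores, auroc)
print(f"AUROC = {auroc(labels, scores):.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

A revised point estimate falling inside [lo, hi] is the "well within the 95% confidence interval" criterion referred to above; the same procedure applies to AP with a different metric function.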

The testing phase will continue as planned, and winners will be determined based on the testing leaderboard performance, as originally outlined in the PANORAMA study protocol.

We sincerely apologize for any inconvenience this may have caused and appreciate your understanding.

On behalf of the PANORAMA organizing team, thank you for your continued participation and commitment.