Evaluation metrics
By: MaiAShaaban on Nov. 15, 2023, 6:26 p.m.
Dear Organizers,
Regarding the evaluation metrics, is there a script or recommended package to guarantee the consistency of submissions?
Thank you.
By: alvaroparicio on Nov. 16, 2023, 8:26 a.m.
Hi,
You can use this: https://github.com/chaimeleon-eu/OpenChallenge/blob/master/ProstateCancerRiskPredictionEvaluation/evaluation.py
Hope this helps!
By: alvaroparicio on Nov. 20, 2023, 11:09 a.m.
Hi,
We'll upload a similar script to GitHub shortly, but in the meantime you can use these:
https://scikit-survival.readthedocs.io/en/stable/api/generated/sksurv.metrics.concordance_index_censored.html
https://scikit-survival.readthedocs.io/en/stable/api/generated/sksurv.metrics.cumulative_dynamic_auc.html
Hope this helps!
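In case it helps others, here is a minimal sketch of how those two scikit-survival metrics can be called. The toy arrays and evaluation time points below are made up for illustration and are not the challenge data:

```python
import numpy as np
from sksurv.metrics import concordance_index_censored, cumulative_dynamic_auc
from sksurv.util import Surv

# Hypothetical toy data: event indicator (True = event observed),
# follow-up time, and one predicted risk score per case.
event = np.array([True, False, True, True, False, True])
time = np.array([12.0, 30.0, 8.0, 24.0, 18.0, 6.0])
risk = np.array([0.8, 0.2, 0.9, 0.5, 0.3, 0.7])

# Harrell's concordance index for right-censored data; the first
# element of the returned tuple is the c-index itself.
cindex = concordance_index_censored(event, time, risk)[0]
print(f"c-index: {cindex:.3f}")

# Time-dependent AUC; survival_train and survival_test are structured
# arrays built with Surv.from_arrays (the same toy set is reused here
# for both, purely for illustration).
y = Surv.from_arrays(event=event, time=time)
times = np.array([10.0, 20.0])  # assumed evaluation time points,
                                # within the follow-up range of the data
auc, mean_auc = cumulative_dynamic_auc(y, y, risk, times)
print(f"AUC at {times}: {auc}, mean AUC: {mean_auc:.3f}")
```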
By: abhivellala on Nov. 20, 2023, 2:54 p.m.
Hello Organizers,
The functions in your evaluation.py file are a little different from what I have: I have two risk probabilities (the probability of class 0 and the probability of class 1). Is that okay, or should I change my script to match your function in evaluation.py?
By: alvaroparicio on Nov. 20, 2023, 3:16 p.m.
Hi,
You must submit only one probability per case. Hope this helps!
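For anyone with the same setup: if your model follows the scikit-learn predict_proba convention, the two columns are redundant (they sum to 1), so keeping a single column gives you one probability per case. A minimal sketch, with a hypothetical classifier and made-up data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data for illustration only.
rng = np.random.default_rng(0)
X = rng.random((20, 4))
y = np.tile([0, 1], 10)

clf = LogisticRegression().fit(X, y)

proba = clf.predict_proba(X)  # shape (n_cases, 2): p(class 0), p(class 1)
risk = proba[:, 1]            # keep only the class-1 probability,
                              # one value per case
print(risk[:5])
```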