Evaluation metrics  

  By: MaiAShaaban on Nov. 15, 2023, 6:26 p.m.

Dear Organizers,

Regarding the evaluation metrics, is there a script or recommended package we should use, to ensure consistency across submissions?

Thank you.

Re: Evaluation metrics  

  By: alvaroparicio on Nov. 16, 2023, 8:26 a.m.

Hi,

You can use this: https://github.com/chaimeleon-eu/OpenChallenge/blob/master/ProstateCancerRiskPredictionEvaluation/evaluation.py

Hope this helps!

Re: Evaluation metrics  

  By: WhyNot on Nov. 19, 2023, 10:16 a.m.

Hi,

Could we have similar evaluation code for the lung cancer task? The Concordance Index (C-index) and the time-dependent Area Under the Curve (AUC) are not so easy to understand.

Best,

Re: Evaluation metrics  

  By: alvaroparicio on Nov. 20, 2023, 11:09 a.m.

Hi,

We'll upload a similar script to GitHub shortly, but in the meantime you can use these:

https://scikit-survival.readthedocs.io/en/stable/api/generated/sksurv.metrics.concordance_index_censored.html
https://scikit-survival.readthedocs.io/en/stable/api/generated/sksurv.metrics.cumulative_dynamic_auc.html
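A minimal sketch of both metrics with scikit-survival; the patients, follow-up times, and risk scores below are made up for illustration only. Roughly, the C-index is the fraction of comparable patient pairs in which the model assigns the higher risk score to the patient whose event occurs first, while the time-dependent AUC measures the same discrimination at specific follow-up times:

```python
import numpy as np
from sksurv.util import Surv
from sksurv.metrics import concordance_index_censored, cumulative_dynamic_auc

# Made-up example data: event indicator (True = event observed,
# False = censored), follow-up time, and one predicted risk score
# per patient (higher score = higher predicted risk).
event = np.array([True, False, True, True, False, True, True, False])
time = np.array([12.0, 30.0, 8.0, 24.0, 36.0, 15.0, 20.0, 28.0])
risk_score = np.array([0.9, 0.2, 0.8, 0.5, 0.1, 0.7, 0.6, 0.3])

# C-index: fraction of comparable pairs correctly ordered by risk.
cindex = concordance_index_censored(event, time, risk_score)[0]
print(f"C-index: {cindex:.3f}")

# Time-dependent AUC on a grid of follow-up times. The first argument
# is the training survival data used to estimate the censoring
# distribution; here we reuse the same data for simplicity.
y = Surv.from_arrays(event=event, time=time)
eval_times = np.array([10.0, 18.0, 26.0])
auc, mean_auc = cumulative_dynamic_auc(y, y, risk_score, eval_times)
print(f"AUC at {eval_times}: {auc}, mean AUC: {mean_auc:.3f}")
```

Note that both functions expect a single risk score per patient, with higher values meaning higher risk.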

Hope this helps!

Re: Evaluation metrics  

  By: abhivellala on Nov. 20, 2023, 2:54 p.m.

Hello Organizers,

The functions in your evaluation.py file are a little different from what I have: my model outputs two risk probabilities (the probability of class 0 and the probability of class 1). Is that okay, or should I change my script to match your function in evaluation.py?

Re: Evaluation metrics  

  By: alvaroparicio on Nov. 20, 2023, 3:16 p.m.

Hi,

You must give only one probability per case, not both class probabilities; see the sketch below.

Hope this helps!
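A minimal illustration of reducing a two-column probability output to the single expected value, assuming (as with scikit-learn's predict_proba) that column 1 holds the probability of the positive, i.e. high-risk, class; the model and data here are made up:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Made-up stand-in for the challenge data.
X, y = make_classification(n_samples=100, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

probs = model.predict_proba(X)  # shape (n_samples, 2): [P(class 0), P(class 1)]
risk = probs[:, 1]              # keep only P(class 1) for the submission
print(risk[:5])
```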