Clarification of the metrics  

  By: mrokuss on July 18, 2023, 1:40 p.m.

Dear organizers,

I was wondering if you could provide more insight regarding the evaluation metrics. So far I understand that the metrics comprise:

  1. clDice
  2. Betti number errors
  3. Junction/landmark-based F1 score

However, I cannot find the "tutorial notebooks, evaluation code, conversion scripts" promised in the info box. Clicking the GitHub link there just redirects to the repo containing the website. Would it be possible to provide the evaluation code?

Thanks a lot! Best,

Max

Re: Clarification of the metrics  

  By: petersergeykeynes on July 22, 2023, 10:19 p.m.

Dear Max,

Thank you very much for your question. We are preparing the evaluation code and will post it to the GitHub repo, probably in mid-August.

In the meantime, please have a look at this repo by some of our organizers, which contains the relevant metrics: https://github.com/nstucki/Betti-matching/blob/master/evaluation.py
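
In case it is useful before our code is out, here is a minimal clDice sketch for binary masks. It is not the official implementation, and it assumes scikit-image's skeletonize for skeleton extraction:

```python
import numpy as np
from skimage.morphology import skeletonize

def cl_score(seg, skel):
    # Fraction of the skeleton voxels that lie inside the segmentation.
    seg, skel = seg.astype(bool), skel.astype(bool)
    return (seg & skel).sum() / max(skel.sum(), 1)

def cl_dice(pred, gt):
    # clDice (Shit et al.): harmonic mean of topology precision
    # (skeleton of the prediction inside the ground truth) and
    # topology sensitivity (skeleton of the ground truth inside
    # the prediction). pred and gt are binary masks.
    tprec = cl_score(gt, skeletonize(pred))
    tsens = cl_score(pred, skeletonize(gt))
    if tprec + tsens == 0:
        return 0.0
    return 2 * tprec * tsens / (tprec + tsens)
```

For 3D volumes you may need skeletonize_3d, depending on your scikit-image version.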

Hope it helps!

Best, Kaiyuan

Re: Clarification of the metrics  

  By: petersergeykeynes on Aug. 25, 2023, 7:32 p.m.

Hi Max,

We have updated the Assessment page and released the evaluation code on GitHub.

Please visit our repo 👉 "TopCoW_Eval_Metrics" for the implementation of our evaluation metric functions.

We are working on adding the Betti number metrics to the evaluation repo soon. Whatever we use for the challenge evaluation will be synchronized to that repo, so please refer to it for the actual implementation and evaluation.
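
Until that lands, here is a rough sketch of a Betti-0 (connected components) error for binary masks, assuming scipy's connected-component labelling; the higher Betti numbers need proper topological machinery, such as the Betti-matching code linked above:

```python
import numpy as np
from scipy.ndimage import label

def betti_0_error(pred, gt):
    # Absolute difference in the number of connected components
    # (Betti-0) between prediction and ground truth. Betti-1/Betti-2
    # require e.g. persistent homology and are not sketched here.
    _, n_pred = label(pred.astype(bool))
    _, n_gt = label(gt.astype(bool))
    return abs(n_pred - n_gt)
```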

Please feel free to open an issue or let us know if you have further feedback or questions.
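
P.S. In case a reference point helps for the landmark metric too, below is a hedged sketch of a junction/landmark-based F1 that greedily matches predicted points to ground-truth points within a distance tolerance. The actual matching rule and tolerance used in the challenge may differ, so treat this only as an illustration:

```python
import numpy as np
from scipy.spatial.distance import cdist

def landmark_f1(pred_pts, gt_pts, tol=3.0):
    # pred_pts, gt_pts: (N, D) arrays of junction/landmark coordinates.
    # Greedy one-to-one matching: a prediction counts as a true positive
    # if an unused ground-truth point lies within `tol` (assumed value).
    if len(pred_pts) == 0 or len(gt_pts) == 0:
        return 0.0
    d = cdist(pred_pts, gt_pts)           # pairwise distances
    used_gt, matched = set(), 0
    for i in np.argsort(d.min(axis=1)):   # closest predictions first
        for j in np.argsort(d[i]):        # nearest GT candidates first
            if d[i, j] > tol:
                break
            if j not in used_gt:
                used_gt.add(j)
                matched += 1
                break
    precision = matched / len(pred_pts)
    recall = matched / len(gt_pts)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```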