Live leaderboard phase temporarily closed [SOLVED]

  By: LindaSt on Dec. 2, 2024, 4:42 p.m.

--> UPDATE IS HERE.

Hi everybody,

I've closed the live leaderboard phase because we are making some updates to the evaluation and the data. I expect to be done with it by Thursday at the latest. I'll write another post about the changes as well.

Best, Linda

 Last edited by: LindaSt on Dec. 5, 2024, 4:40 p.m., edited 3 times in total.

Re: Live leaderboard phase temporarily closed  

  By: wildsquirrel on Dec. 2, 2024, 8:03 p.m.

Hi, I just made a new submission to the leaderboard because I didn’t see this message. Just wondering what will happen to my new submission? Thank you

Re: Live leaderboard phase temporarily closed  

  By: LindaSt on Dec. 3, 2024, 10:46 a.m.

Hi! It will either fail or give a score of zero. Once I update everything, I will manually trigger a re-evaluation of all the existing submissions. So you're all good.

Re: Live leaderboard phase temporarily closed  

  By: gdeotale123 on Dec. 4, 2024, 5:41 a.m.

Will the same happen for the debugging session? I am getting "The algorithm failed on one or more cases.", but it works perfectly fine when I try out the algorithm in the Algorithms section.

Re: Live leaderboard phase temporarily closed  

  By: LindaSt on Dec. 4, 2024, 12:11 p.m.

Yes, I also updated the container there. I've tested it on our baseline submissions, which worked fine. I'll look into what's causing the issue for your submission.

Re: Live leaderboard phase temporarily closed  

  By: wildsquirrel on Dec. 4, 2024, 1:16 p.m.

Hi. Could you explain what has changed in the evaluation? I ask because everyone's scores have changed quite significantly.

Re: Live leaderboard phase temporarily closed  

  By: LindaSt on Dec. 4, 2024, 1:28 p.m.

Hi! Yes, once I have verified everything is correct, I will write a post. The most significant change is adding another case to the set (previously, there were only 8).

Re: Live leaderboard phase temporarily closed  

  By: LindaSt on Dec. 4, 2024, 4:08 p.m.

Hi! So, after some investigation, I've isolated the problem. We've switched the ground-truth JSON files so that the annotations are in mm instead of pixels (matching the outputs), but a switched-around if statement in the evaluation still used the margins in pixel units instead of mm. I'm fixing it now and will trigger another re-evaluation.
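
For anyone curious, the bug is roughly of this shape (a hypothetical Python sketch, not the actual evaluation code; the margin and spacing values are made up):

    # Hypothetical sketch of the unit-mismatch bug, not the actual evaluation code.
    HIT_MARGIN_MM = 0.01          # assumed matching margin in mm
    SPACING_MM_PER_PX = 0.00025   # assumed pixel spacing of the slides

    def hit_margin(units: str) -> float:
        """Distance margin for matching a predicted point to a ground-truth point."""
        if units == "mm":
            # Correct: annotations in mm are compared against the margin in mm.
            return HIT_MARGIN_MM
        # Annotations in pixels need the margin converted to pixel units.
        return HIT_MARGIN_MM / SPACING_MM_PER_PX

    # The bug amounted to these two branches being swapped, so the new mm-based
    # ground truth was matched against the much larger pixel-sized margin.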

Re: Live leaderboard phase temporarily closed  

  By: ecandeloro on Dec. 5, 2024, 10:14 a.m.

Hi, thank you for the update!

I'm having issues with the evaluation in the debugging phase using the example code.

Is this also related to the bug found in the evaluation code?

Our debug team algorithm container running on GC succeeds and outputs the 3 JSON files as required ['/output/detected-lymphocytes.json', '/output/detected-monocytes.json', '/output/detected-inflammatory-cells.json'], but the FROC evaluation run for the debug submission fails with the following error:

helpers.PredictionProcessingError: Error for prediction {'pk': '48c476ad-0cc7-4004-8064-73253717f56c', 'url': 'https://grand-challenge.org/algorithms/immunozip/jobs/48c476ad-0cc7-4004-8064-73253717f56c/', 'inputs': [{'pk': 2023171, 'file': None, 'image': {'pk': 'f6eb256a-966c-465e-8b81-6f94b8d6f660', 'name': 'P000003_A_PAS_CPG.tif'}, 'value': None, 'interface': {'pk': 502, 'kind': 'Image', 'slug': 'kidney-transplant-biopsy', 'title': 'Kidney Transplant Biopsy', 'super_kind': 'Image', 'description': 'Whole-slide image of a PAS-stained kidney transplant biopsy', 'default_value': None, 'look_up_table': None, 'relative_path': 'images/kidney-transplant-biopsy-wsi-pas', 'overlay_segments': []}}, {'pk': 2023371, 'file': None, 'image': {'pk': '2e8bd6ce-e5c3-48a2-864c-2810d44db5ab', 'name': 'P000003_A_mask.tif'}, 'value': None, 'interface': {'pk': 238, 'kind': 'Segmentation', 'slug': 'tissue-mask', 'title': 'Tissue Mask', 'super_kind': 'Image', 'description': 'Segmentation of the tissue in the slide. 0: Background 1:

We are using, for debugging purposes only, the same point predictions in all 3 JSON files, changing the names and the required key-value pairs as described in the challenge submission instructions.

To confirm, we have:

{
  "name": "lymphocytes",
  "type": "Multiple points",
  "version": {
    "major": 1,
    "minor": 0
  },
  "points": [
    {
      "name": "Point 0",
      "point": [4.76497043966432, 1.73078052739864, 0.241999514457304],
      "probability": 0.607054948806763
    },
...
  ]
}

where the other 2 files have the "name" value set to inflammatory-cells and monocytes.
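
For completeness, this is roughly how we generate the three files (a simplified sketch of our debugging code; the helper name and the dummy detection are illustrative):

    import json
    from pathlib import Path

    # Simplified sketch of our debugging output code; the file names match the
    # required outputs, everything else (helper name, dummy point) is illustrative.
    OUTPUT_DIR = Path("/output")
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)

    def write_points_file(cell_type, detections):
        payload = {
            "name": cell_type,
            "type": "Multiple points",
            "version": {"major": 1, "minor": 0},
            "points": [
                {"name": f"Point {i}", "point": point, "probability": prob}
                for i, (point, prob) in enumerate(detections)
            ],
        }
        with open(OUTPUT_DIR / f"detected-{cell_type}.json", "w") as f:
            json.dump(payload, f, indent=2)

    # Same dummy detections for all three files, as described above.
    dummy = [([4.76497043966432, 1.73078052739864, 0.241999514457304], 0.607054948806763)]
    for cell_type in ["lymphocytes", "monocytes", "inflammatory-cells"]:
        write_points_file(cell_type, dummy)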

From what we understood as described here, regardless of the number of patients in the evaluation data, we should output only 3 JSON files per patient: the script is parallelized and runs once per patient, with the results saved in a separate folder named with the patient ID.
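
In other words, we assume the evaluation does something along these lines (purely our reading of the docs; the folder layout and patient IDs are guesses for illustration):

    from concurrent.futures import ProcessPoolExecutor
    from pathlib import Path

    # Our reading of the per-patient parallelization: the algorithm runs once
    # per patient, and its 3 JSON outputs land in a folder named after the
    # patient ID. The layout and IDs below are guesses, not documented behavior.

    def process_patient(patient_id):
        out_dir = Path("/output") / patient_id
        out_dir.mkdir(parents=True, exist_ok=True)
        # ... run the algorithm for this patient and write the 3 JSON files here ...
        return out_dir

    if __name__ == "__main__":
        patient_ids = ["P000001", "P000002", "P000003"]  # illustrative IDs
        with ProcessPoolExecutor() as pool:
            for out_dir in pool.map(process_patient, patient_ids):
                print(f"wrote outputs to {out_dir}")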

If this problem is not related to the challenge, we are open to feedback and/or suggestions. Thank you!

UPDATE: the issue seems to be fixed and the debug submission was successful, thanks!

 Last edited by: ecandeloro on Dec. 5, 2024, 3:32 p.m., edited 2 times in total.