Hi, thank you for the update!
I'm having issues with the evaluation in the debugging phase using the example code.
Is this also related to the bug found in the evaluation code?
Our team's algorithm container for the debugging phase runs successfully on GC and outputs the 3 required JSON files ['/output/detected-lymphocytes.json', '/output/detected-monocytes.json', '/output/detected-inflammatory-cells.json'], but the FROC evaluation run for the debug submission fails with the following error:
helpers.PredictionProcessingError: Error for prediction {'pk': '48c476ad-0cc7-4004-8064-73253717f56c', 'url': 'https://grand-challenge.org/algorithms/immunozip/jobs/48c476ad-0cc7-4004-8064-73253717f56c/', 'inputs': [{'pk': 2023171, 'file': None, 'image': {'pk': 'f6eb256a-966c-465e-8b81-6f94b8d6f660', 'name': 'P000003_A_PAS_CPG.tif'}, 'value': None, 'interface': {'pk': 502, 'kind': 'Image', 'slug': 'kidney-transplant-biopsy', 'title': 'Kidney Transplant Biopsy', 'super_kind': 'Image', 'description': 'Whole-slide image of a PAS-stained kidney transplant biopsy', 'default_value': None, 'look_up_table': None, 'relative_path': 'images/kidney-transplant-biopsy-wsi-pas', 'overlay_segments': []}}, {'pk': 2023371, 'file': None, 'image': {'pk': '2e8bd6ce-e5c3-48a2-864c-2810d44db5ab', 'name': 'P000003_A_mask.tif'}, 'value': None, 'interface': {'pk': 238, 'kind': 'Segmentation', 'slug': 'tissue-mask', 'title': 'Tissue Mask', 'super_kind': 'Image', 'description': 'Segmentation of the tissue in the slide. 0: Background 1:
We are using, only for debugging purposes, the same point predictions for all 3 JSON files, changing the names and the required key-value pairs as described in the challenge submission instructions.
To confirm, we have:
{
    "name": "lymphocytes",
    "type": "Multiple points",
    "version": {
        "major": 1,
        "minor": 0
    },
    "points": [
        {
            "name": "Point 0",
            "point": [4.76497043966432, 1.73078052739864, 0.241999514457304],
            "probability": 0.607054948806763
        },
        ...
    ]
}
where the other 2 files have the "name" value changed to inflammatory-cells and monocytes, respectively.
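For completeness, this is roughly how we write the three debug files (a minimal sketch of our debugging code; the dummy_points list is just the placeholder prediction shown above, not real model output):

```python
import json
from pathlib import Path

# Output location expected by the container (matches the 3 paths listed above).
OUTPUT_DIR = Path("/output")

# One JSON file per cell type; the content only differs in the "name" field.
CELL_TYPES = ["lymphocytes", "monocytes", "inflammatory-cells"]

# For debugging we reuse the same dummy predictions in all three files.
dummy_points = [
    {
        "name": "Point 0",
        "point": [4.76497043966432, 1.73078052739864, 0.241999514457304],
        "probability": 0.607054948806763,
    },
]

for cell_type in CELL_TYPES:
    payload = {
        "name": cell_type,
        "type": "Multiple points",
        "version": {"major": 1, "minor": 0},
        "points": dummy_points,
    }
    with open(OUTPUT_DIR / f"detected-{cell_type}.json", "w") as f:
        json.dump(payload, f, indent=4)
```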
From what we understood from what is described here: regardless of the number of patients in the evaluation data, we should output only 3 JSON files per patient, since the evaluation script is parallelized and runs once per patient, saving the results in a separate folder named with the patient ID.
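Just to make sure we understood correctly, here is a rough sketch of how we imagine the evaluation side works (the paths, folder layout, and function names below are our assumptions for illustration only, not the actual challenge evaluation code):

```python
import json
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

# Assumed layout: one folder per patient, each containing the three
# detection JSON files produced by our container (hypothetical paths).
PREDICTIONS_DIR = Path("/input/predictions")
RESULTS_DIR = Path("/output/results")


def evaluate_patient(patient_dir: Path) -> None:
    """Hypothetical per-patient evaluation: load the three JSON files and
    save the FROC metrics in a folder named with the patient ID."""
    detections = {
        name: json.loads((patient_dir / f"detected-{name}.json").read_text())
        for name in ("lymphocytes", "monocytes", "inflammatory-cells")
    }
    # ... FROC metric computation from `detections` would happen here ...
    out_dir = RESULTS_DIR / patient_dir.name  # folder named with the patient ID
    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / "metrics.json").write_text(json.dumps({"num_files": len(detections)}))


if __name__ == "__main__":
    patient_dirs = [p for p in PREDICTIONS_DIR.iterdir() if p.is_dir()]
    # Parallelized over patients, one evaluation per patient.
    with ProcessPoolExecutor() as pool:
        list(pool.map(evaluate_patient, patient_dirs))
```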
If this problem is not related to the challenge, we are open to feedback and/or suggestions, thank you!
UPDATE: the issue seems to be fixed and the debug submission was successful, thanks!