Why the prediction results cannot be evaluated

  By: King_HAW on April 29, 2022, 9:13 p.m.

I am trying to evaluate my predictions on the test set (300 cases). I followed the sample code on GitHub to construct the submission files. However, after I submit my results, the status shows that the evaluation failed (Failed, Time limit exceeded). Can anyone tell me how to solve this problem? Many thanks.

P.S. This is the folder structure of my submission files:

Predictions
├── dataset_description.json
├── sub-r005s016
│   └── ses-1
│       └── anat
│           └── sub-r005s016_ses-1_space-MNI152NLin2009aSym_label-L_mask.nii.gz
...
└── sub-r052s034
    └── ses-1
        └── anat
            └── sub-r052s034_ses-1_space-MNI152NLin2009aSym_label-L_mask.nii.gz

900 directories, 301 files
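
In case it helps anyone else, below is a minimal Python sketch of how such a tree could be written with nibabel. The predictions dict, the fixed ses-1 session, and the placeholder dataset_description.json contents are my own assumptions; the sample code on GitHub is the authoritative reference.

    import json
    import os
    import nibabel as nib

    def write_submission(predictions, out_dir="Predictions"):
        # predictions: hypothetical dict mapping a subject ID such as
        # "sub-r005s016" to a (mask, affine) pair, where mask is a
        # uint8 numpy array and affine the 4x4 voxel-to-world matrix.
        os.makedirs(out_dir, exist_ok=True)

        # Placeholder dataset_description.json; the real contents
        # should be taken from the sample code on GitHub.
        with open(os.path.join(out_dir, "dataset_description.json"), "w") as f:
            json.dump({"Name": "predictions", "BIDSVersion": "1.6.0"}, f)

        for sub_id, (mask, affine) in predictions.items():
            # BIDS-style layout: <sub>/ses-1/anat/<sub>_ses-1_..._mask.nii.gz
            anat_dir = os.path.join(out_dir, sub_id, "ses-1", "anat")
            os.makedirs(anat_dir, exist_ok=True)
            fname = (sub_id + "_ses-1_space-MNI152NLin2009aSym"
                     "_label-L_mask.nii.gz")
            nib.save(nib.Nifti1Image(mask, affine),
                     os.path.join(anat_dir, fname))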


Re: Why the prediction results cannot be evaluated

  By: ahutton on May 3, 2022, 9:29 p.m.

I've reproduced the problem and I'm investigating. I'll post an update as soon as there is news.

Re: Why the prediction results cannot be evaluated

  By: dikexin2000 on May 30, 2022, 3:22 p.m.

Wondering if the problem has been solved...

Re: Why the prediction results cannot be evaluated

  By: ahutton on June 7, 2022, 3:10 a.m.

Should be, yes.

Re: Why the prediction results cannot be evaluated

  By: King_HAW on June 7, 2022, 1:07 p.m.

It works, thanks Alexandre. I just wonder why the evaluation result does not appear on the Leaderboard.

Re: Why the prediction results cannot be evaluated

  By: ahutton on June 7, 2022, 7:54 p.m.

It should be visible now, with future submissions auto-updating the leaderboard.