Common issues for participants  

  By: kbot on Aug. 25, 2022, 9:26 p.m.

Hi everyone,

We've gone through many of the failed attempts and found these to be the most common issues:

  • CUDA out-of-memory errors
  • container out-of-memory errors
  • exceeding time limits
  • JSON formatting issues
  • miscellaneous data loader issues

Out-of-memory and time-limit errors are caused by resource-intensive processes. Please measure resource utilization locally and define your container limits correctly before trying them on the platform. Alternatively, limit the resources available to the container when testing locally to more closely replicate the Grand Challenge environment. It may also help to read frames from the video in batches instead of loading all frames into RAM at once (some test set videos are minutes long); see the sketch below.
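For illustration only, here is a minimal sketch of batched frame reading with OpenCV. The video path and batch size are placeholders, not values prescribed by the challenge; adapt them to your own pipeline.

    import cv2

    def iter_frame_batches(video_path, batch_size=64):
        """Yield lists of frames so only one batch is held in RAM at a time."""
        cap = cv2.VideoCapture(video_path)
        batch = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            batch.append(frame)
            if len(batch) == batch_size:
                yield batch
                batch = []
        if batch:
            yield batch
        cap.release()

    # Placeholder path; on the platform, use the input location your container receives.
    for batch in iter_frame_batches("/input/video.mp4", batch_size=64):
        # run inference on this batch, then let it be garbage collected
        ...

To mimic the platform limits when testing locally, you can also cap container resources with the standard Docker flags, e.g. docker run --memory and --cpus, set to whatever limits apply to your phase.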

For JSON formatting issues, please refer to the JSON formats on the GitHub pages for the challenge categories:

Category 1: https://github.com/aneeqzia-isi/surgtoolloc2022-category-1
Category 2: https://github.com/aneeqzia-isi/surgtoolloc2022-category-2/tree/main

DO NOT deviate from the output JSON formats. If you do, the evaluation will fail, since strings and values are parsed from these JSON files.

For instance, I've copied below an excerpt from a sample output that failed for Category 2.

{"type": "Multiple 2D bounding boxes", "boxes": [{"corners": [[212, 156, 0.5], [142, 156, 0.5], [142, 86, 0.5], [212, 86, 0.5]], "name": "slice_nr_endoscopic-robotic-surgery-video0_needle_driver", "probability": 0.998474657535553}

The "name" field is incorrect since it does not contain the frame number (indexed from 0) but rather the video name.

All contestants have access to the logs and output files of their algorithm containers through the Results section of the Algorithm page. You have to click through each result to check its files, logs, and error messages.

Finally, containers are run separately for each test set file automatically by the Grand Challenge platform. When you submit your algorithm and it executes, you will see a result for each test set file. Additionally, your algorithm does not need to keep track of the file name -- only the frame number. That is why there is slice_nr information in both JSON formats.

Best,
The Organizers

 Last edited by: kbot on Aug. 15, 2023, 12:57 p.m., edited 2 times in total.

Re: Common issues for participants  

  By: cid on Aug. 27, 2022, 5:05 p.m.

A common error for Category 2 submissions is:

The outputs from your algorithm do not match the ones required by this phase, please update your algorithm to produce: Surgical Tools (Multiple 2D bounding boxes).

This error prevents the Docker image from being submitted, so there is no error log file to debug with. The only thing that can be submitted to Category 2 is the random-generator code provided in your GitHub repository; once this is replaced with actual code, the error returns.

This error is being reported throughout the forum, and I think that without a solution no meaningful submissions will be made for this category.

Thanks

Re: Common issues for participants  

  By: sravigopal3 on Aug. 27, 2022, 8:35 p.m.

I agree with @cid; I have also encountered the same error when submitting my code. Only the random-generator code works. I hope someone has found a solution for this.

 Last edited by: sravigopal3 on Aug. 15, 2023, 12:57 p.m., edited 1 time in total.