Submission Difficulties

Submission Difficulties  

  By: mberniker on Aug. 23, 2022, 9:42 p.m.

Hi everyone,

We think we have identified a few issues with the container submission process. We are working hard to get to the bottom of them and will provide additional instructions if necessary. We will keep you posted; sorry for any frustration these issues may be causing.

Best,

SurgToolLoc 2022 Organizing Committee

Re: Submission Difficulties  

  By: aminey on Aug. 24, 2022, 10:02 a.m.

Hi everyone,

Has anyone faced this problem with the category 2 submission? I checked the logs of the try-out; my model runs correctly and the prediction JSON file is saved.

UPDATE: We only have this problem with category 2 submission. Our submission in category 1 succeeded.

Kind regards,

 Last edited by: aminey on Aug. 15, 2023, 12:57 p.m., edited 1 time in total.

Re: Submission Difficulties  

  By: kbot on Aug. 24, 2022, 6:34 p.m.

Hi aminey,

The preliminary Category 2 submission mechanics should work. However, the leaderboard metric is currently being computed incorrectly; we are fixing this right now. As you can see on the leaderboard, others have been able to submit successfully, but their scores are all 0 due to the bug in the metric computation.

It might help to test locally with the .mp4 file provided in the GitHub repo and to check the output format of the JSON before uploading the container.
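
For illustration, a quick shape check might look like the sketch below. The file name and the "corners" field are illustrative assumptions, not the official schema; adapt them to the exact format used in the challenge repo.

    import json

    # Minimal sanity check of the detection JSON before uploading the
    # container. The file name and the "corners" field are assumed here;
    # check the challenge repo for the exact schema.
    with open("surgical-tools.json") as f:
        predictions = json.load(f)

    for i, pred in enumerate(predictions):
        corners = pred["corners"]  # assumed field name
        assert len(corners) == 4, f"entry {i}: expected 4 corners"
        for corner in corners:
            assert len(corner) == 3, f"entry {i}: corners must be [x, y, z]"
            assert all(isinstance(v, (int, float)) for v in corner), \
                f"entry {i}: non-numeric value in corner"

    print(f"Checked {len(predictions)} predictions; format looks OK.")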

Best,

The organizers

Re: Submission Difficulties  

  By: TS_UKE on Aug. 25, 2022, 2:38 p.m.

We have the same problem as @aminey for category 2. @aminey: Have you found a solution? We see you are on the leaderboard now.

We are using the GitHub repo as a basis, and local validation seems to be fine. We can't see any differences in the output JSON syntax. Of course, the script in test.sh ultimately fails because it compares against a file that is not provided with the repo (/tmp/expected_output_detection.json).
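
For what it's worth, one way around the missing reference file is to create your own: save the JSON from a known-good local run and compare later runs against it. A minimal sketch (both paths are placeholders):

    import json

    # Both paths are placeholders: the reference is a file you saved yourself
    # from a known-good run, since /tmp/expected_output_detection.json is not
    # shipped with the repo.
    with open("my_expected_output_detection.json") as f:
        expected = json.load(f)
    with open("output/surgical-tools.json") as f:
        actual = json.load(f)

    print("match" if expected == actual else "mismatch")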

 Last edited by: TS_UKE on Aug. 15, 2023, 12:57 p.m., edited 1 time in total.
Reason: clarification

Re: Submission Difficulties  

  By: aminey on Aug. 25, 2022, 3:06 p.m.

Unfortunately the problem is not solved :(

The submission that passed was my model running and predicting the bounding boxes (to make sure my model doesn't break), but with the predictions filled with the dummy bbox = [[54.7, 95.5, 0.5], [92.6, 95.5, 0.5], [92.6, 136.1, 0.5], [54.7, 136.1, 0.5]] provided by the host instead of my own. This passes correctly and shows up on the leaderboard.

However, once I replace the dummy numbers with my own predictions, I am unable to submit. I have no idea what's going on here.

    bbox = [[x_min, y_min, 0.5],  # TOP LEFT: xmin, ymin
            [x_max, y_min, 0.5],  # TOP RIGHT: xmax, ymin
            [x_max, y_max, 0.5],  # BOTTOM RIGHT: xmax, ymax
            [x_min, y_max, 0.5]]  # BOTTOM LEFT: xmin, ymax

The only difference between a successful submission and "unable to submit" is the bbox variable shown above.
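
In case it helps others, here is a small helper sketch for building that corner list from a detector's (x_min, y_min, x_max, y_max) output. The function name is made up, and z is fixed at 0.5 as in the host's dummy example:

    def xyxy_to_corners(x_min, y_min, x_max, y_max, z=0.5):
        # Hypothetical helper: convert an axis-aligned box to the
        # four-corner format shown above.
        return [
            [float(x_min), float(y_min), z],  # top left
            [float(x_max), float(y_min), z],  # top right
            [float(x_max), float(y_max), z],  # bottom right
            [float(x_min), float(y_max), z],  # bottom left
        ]

One detail worth noting: casting to plain Python floats matters, because json.dump cannot serialize NumPy scalar types such as np.float32, and that alone can make an otherwise valid container fail.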

UPDATE: I just lowered my confidence threshold to 0.000001 to make sure that I have at least one box per frame; alternatively, it's better to add an if statement that sets xyxy = 0, 0, 0, 0 whenever there are no boxes (see the sketch at the end of this post). Now I am able to submit, so it seems the problem was that some frames were missing boxes. The submission is running.

UPDATE 2: The submission succeeded! It seems that was indeed the problem.
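
For anyone hitting the same wall, here is a minimal sketch of the fallback described above; frame_detections and its (x_min, y_min, x_max, y_max) tuples are hypothetical names for whatever your model returns per frame:

    def boxes_for_frame(frame_detections, z=0.5):
        # frame_detections: hypothetical list of (x_min, y_min, x_max, y_max)
        # tuples from the model. If the model found nothing in a frame, emit
        # a degenerate all-zero box so no frame is left without a prediction.
        if not frame_detections:
            frame_detections = [(0.0, 0.0, 0.0, 0.0)]  # placeholder box
        return [
            [[float(x0), float(y0), z],  # top left
             [float(x1), float(y0), z],  # top right
             [float(x1), float(y1), z],  # bottom right
             [float(x0), float(y1), z]]  # bottom left
            for (x0, y0, x1, y1) in frame_detections
        ]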

 Last edited by: aminey on Aug. 15, 2023, 12:57 p.m., edited 3 times in total.
Reason: Add solution