Sample submission GitHub repos

  By: aneeqzia_isi on Aug. 5, 2022, 5:37 a.m.

Hi everyone,

We have placed links to GitHub repos containing sample algorithm submission containers for the two categories on the submission instructions page (https://surgtoolloc.grand-challenge.org/submission/). The preliminary test phase starts soon - teams will need to adapt the provided code by following the instructions to create their own algorithm containers. Please start testing early, since teams that are unable to generate a working container in the preliminary test phase will not be able to submit in the final testing phase.

Best, SurgToolLoc 2022 Organizing Committee

Re: Sample submission GitHub repos

  By: boliu on Aug. 11, 2022, 5:20 p.m.

Hi aneeqzia_isi, thanks for making this available. Some questions:

  1. The sample test video vid_1_short.mp4 has different dimensions (640 × 512) than the training videos (1280 × 720). It also doesn't have the black empty space on the left and right sides that the training videos have. Will all the test videos be like this? Could you share the processing steps for the test videos (how you resize and crop the mp4 files to a lower fps and smaller size), so that we can mimic them in our validation?
  2. It was mentioned on the forum that the bottom banner showing tool names would be removed from the test videos. Yet it's still there in vid_1_short.mp4. Can you confirm it will be removed in the test videos?
  3. Category 2's metric is COCO mAP@[0.5:0.05:0.95], which requires a confidence score for each bbox. But the submission JSON doesn't have a confidence score?

Thanks, Bo

 Last edited by: boliu on Aug. 15, 2023, 12:57 p.m., edited 1 time in total.

Re: Sample submission GitHub repos

  By: bilalUWE on Aug. 11, 2022, 11:39 p.m.

Hi aneeqzia_isi,

Thanks for guiding us through the competition.

I have few questions about the test data:

  1. The sample JSON dictionary you provided for Category 1 has a small disparity in its keys. The tool monopolar_curved_scissor is labelled differently in the labels.csv file: it says monopolar_curved_scissor in the sample JSON dictionary, whereas labels.csv has monopolar_curved_scissors. Which label name should we follow for this tool?
  2. How many video files will be used to test the algorithms? I believe there will be several. In that case, should we implement logic to handle multiple videos from the test folder and create a separate surgical-tools-presence.json for each video? (Never mind: got the answer. evalutils takes care of all that automatically.)
  3. Just curious: are the tools listed in the surgical-tool-presence.json inside the output folder of the Docker image the actual tools present in the example test video in the input folder, or just placeholder predictions to illustrate the output format? (Never mind: got the answer. The file is just placeholder JSON, as the dashboard on the video displays the actual tool names.)
  4. Lastly, is there any possibility of more than four tools being present in some of the test videos?

Many Thanks and

Kind Regards, Bilal

 Last edited by: bilalUWE on Aug. 15, 2023, 12:57 p.m., edited 9 times in total.

Re: Sample submission GitHub repos

  By: ryo-hachiuma on Aug. 13, 2022, 11:16 a.m.

Hi aneeqzia_isi, thank you for providing the submission code. I have two questions about the repository.

  1. When I tried to load vid_1_short.mp4 with OpenCV following process.py, only 59 frames could be loaded; from the 60th frame on, ret, img = cap.read() returns ret = False. If a frame cannot be loaded, what should we do? (e.g. skip it? fill in a random answer?)

  2. In test.sh, the GPU option (--gpus all) is not used, so GPUs are not available in the loaded container. Does that mean we cannot use GPUs at test time?

Best regards, Ryo

Re: Sample submission GitHub repos

  By: aneeqzia_isi on Aug. 15, 2022, 10:45 p.m.

Hi teams,

Following are responses to the questions asked by different teams.

boliu:

  1. The test videos will have a resolution of 640 × 512. The main steps in creating the test set were: a) downsampling the videos to 1 fps, b) resizing them to 640 × 512, and c) blurring the UI. The black space on the sides depends on the robotic system the data was collected on - it can vary, but the majority of the test videos will not have the black sides.
  2. Yes, the UI, including the tool information on the lower part of the video, will be blurred out. An example test image is also uploaded at https://surgtoolloc.grand-challenge.org/data/
  3. That's a great point regarding the confidence scores required for the COCO mAP metric. We are working this out with Grand Challenge support, as there are restrictions on the JSON schema that can be used in submissions. If we are unable to resolve this, we will make an announcement to let all teams know about any change in the evaluation metric for Category 2.
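For local validation, those preprocessing steps can be approximated on the training videos. A minimal sketch (the rounding scheme, the INTER_AREA interpolation mode, and the lazily imported cv2.resize call are assumptions, not the organizers' exact pipeline):

```python
def frame_indices_for_fps(n_frames, src_fps, dst_fps=1.0):
    """Indices of source frames to keep when downsampling src_fps -> dst_fps."""
    step = src_fps / dst_fps
    return [int(round(i * step)) for i in range(int(n_frames / step))]

def to_test_resolution(frame):
    """Resize a training frame (1280x720) to the 640x512 test resolution.
    UI blurring is omitted here because the exact banner region is not published."""
    import cv2  # opencv-python; imported lazily so the fps helper has no dependency
    return cv2.resize(frame, (640, 512), interpolation=cv2.INTER_AREA)
```

For example, a 300-frame clip recorded at 60 fps would keep every 60th frame: frame_indices_for_fps(300, 60.0) returns [0, 60, 120, 180, 240].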

bilalUWE:

  1. Thanks for pointing out the discrepancy. The ground truth naming convention that will be used is provided in process.py as self.tool_list.
  2. Yes, as you found out, teams do not need to worry about consolidating the predictions from the videos. The algorithm container is run on one video at a time to produce a predictions JSON file; the evaluation container takes care of consolidating all the results. The example GitHub repo was created so that teams just need to insert their inference code in the predict function in process.py.
  3. Correct, those files are dummy outputs, primarily to show the output format. They were produced using process.py in the GitHub repos, so you can check the code to understand it better.
  4. No, there cannot be more than 4 tools present in the view at one time.

ryo-hachiuma:

  1. Teams need to produce output for all frames in the videos. Ideally there shouldn't be any corrupted frames, but we will check and get back to you in case there is an issue with the example test video.
  2. The algorithms will be run on a GPU by default when uploaded for evaluation. test.sh is just provided for teams to test their algorithms locally if they would like to; it is not part of the container that is built when running build.sh and export.sh.

Best, SurgToolLoc 2022 Organizing Committee

 Last edited by: aneeqzia_isi on Aug. 15, 2023, 12:57 p.m., edited 2 times in total.

Re: Sample submission GitHub repos

  By: boliu on Aug. 17, 2022, 3:16 a.m.

Hi aneeqzia_isi,

I assume there is more than one test video. Let's say there are 3 videos, with 40, 50, and 60 frames respectively. What JSON output file(s) do you expect?

  • 3 separate json files, each containing a list of dict
  • a single json file, containing a list of dict with len=150 (slice_nr=0 to 149)
  • a single json file, containing a list of list of dict (1st list slice_nr=0 to 39; 2nd list slice_nr=0 to 49; 3rd list slice_nr=0 to 59)

I'm guessing the last one, but the sample save function seems to have a bug. As written, it only saves the very first video's output.

https://github.com/aneeqzia-isi/surgtoolloc2022-category-1/blob/main/process.py#L121

I think json.dump(self._case_results[0], f) should be json.dump(self._case_results, f) ?
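The effect of the [0] index is easy to see with a toy _case_results (the variable name mirrors the sample repo, but this is a standalone sketch, not the repo's actual code):

```python
import json

# predictions for two hypothetical videos, one list of dicts per video
_case_results = [
    [{"slice_nr": 0}, {"slice_nr": 1}],  # video 1
    [{"slice_nr": 0}],                   # video 2
]

first_only = json.dumps(_case_results[0])  # current behavior: video 1 only
everything = json.dumps(_case_results)     # proposed fix: all videos
```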

Thanks, Bo

Re: Sample submission GitHub repos

  By: kbot on Aug. 18, 2022, 10:18 p.m.

The code should generate one JSON per video. The Grand Challenge platform will call the algorithm container on each video separately. Please take a look at the process.py script and:

  • use the class constructor to load your model object
  • use the predict method to run inference with your model object
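That structure can be sketched as below. The class and method names here are illustrative stand-ins, not the sample repo's actual code, and the per-frame record shape only follows the slice_nr field mentioned in this thread; the authoritative schema is the one in process.py.

```python
class SurgToolAlgorithm:
    """Illustrative skeleton of the pattern described above."""

    def __init__(self):
        # Load your trained model once, in the constructor.
        # e.g. self.model = torch.load("weights.pth"); stubbed out here.
        self.model = None

    def predict(self, n_frames):
        # Run inference on every frame of one video and return one record
        # per frame; the platform calls this once per video and the
        # evaluation container consolidates the results.
        return [{"slice_nr": i, "tools": self.infer(i)} for i in range(n_frames)]

    def infer(self, frame_nr):
        # Placeholder inference; real code would run self.model on the frame.
        return []
```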

Let me know if there are further questions.

Re: Sample submission GitHub repos

  By: TS_UKE on Aug. 19, 2022, 4:02 p.m.

@aneeqzia_isi

We too are having problems with apparently corrupt frames. We can reliably reproduce that the video fails to play after 00:00:58. We are using Ubuntu 22.04 and VLC 3.0.16. In our code we have the same problem as @bilalUWE, though we are using torchvision.io.read_video().

Do we need to implement a check to detect corrupted frames in each input video? And if yes, what output do you expect for corrupted frames?

 Last edited by: TS_UKE on Aug. 15, 2023, 12:57 p.m., edited 1 time in total.

Re: Sample submission GitHub repos

  By: NourJL on Aug. 21, 2022, 11:25 a.m.

Hello All,

@ ryo-hachiuma:

I had the same problem, but there are no corrupt frames in the videos. state, OrigImage = cap.read() returns state=False at frame 60; to solve this, you only need to check state and initialise cap (cv2.VideoCapture) again when state=False. This will solve the problem.

  state, OrigImage = cap.read()
  if state == False:
      cap = cv2.VideoCapture(str(fname))
      # after re-opening, seek back to the current frame index i before re-reading
      cap.set(cv2.CAP_PROP_POS_FRAMES, i)
      state, OrigImage = cap.read()

Re: Sample submission GitHub repos

  By: xiaowen on Aug. 22, 2022, 1:42 a.m.

Dear aneeqzia_isi,

I'm curious why video files rather than image sequences are provided. If we use ret, img = cap.read() and get a corrupted frame, should we skip it directly or do something else? In short, how can we ensure that the image sequence we get from the video matches the labeled image sequence?

best, xiaowen

Re: Sample submission GitHub repos

  By: ryo-hachiuma on Aug. 22, 2022, 12:26 p.m.

Hi NourJL,

Thank you for the information! It worked locally, as you said!

However, I still get an error when I submit to Prelim Category 1 ;(

Best,

Ryo

 Last edited by: ryo-hachiuma on Aug. 15, 2023, 12:57 p.m., edited 1 time in total.

Re: Sample submission GitHub repos

  By: aminey on Aug. 22, 2022, 12:46 p.m.

Hi everyone,

What modifications did you make to test.sh to run the test locally? I am able to produce the output/surgical-tools.json file by running process.py locally, but test.sh is not working. The model runs correctly and produces the prediction file, but the Docker container doesn't seem to find the file.

UPDATE: The problem was solved by renaming the output files in this line:

  python:3.9-slim python -c "import json, sys; f1 = json.load(open('/output/surgical-tools.json')); f2 = json.load(open('/tmp/surgical-tools.json')); sys.exit(f1 != f2);"

Regards,

 Last edited by: aminey on Aug. 15, 2023, 12:57 p.m., edited 2 times in total.
Reason: add logs

Re: Sample submission GitHub repos

  By: kbot on Aug. 23, 2022, 9:29 p.m.

Hi aminey,

Thank you for posting the resolution!

Best, The organizers