Hi everyone,

There are still teams whose algorithm containers are timing out on one video. That video is ~64 minutes long (~3840 frames at 1 fps). The time limit per video inference run on grand-challenge is 1 hour (increased from the original 20 minutes). This means the container needs an inference throughput of just over 1 fps (3840 frames / 3600 s ≈ 1.07 fps), which is very reasonable. Please ensure that your container can run inference on ~3840 frames in under 60 minutes in the grand-challenge environment for a successful, timely submission.
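As a quick sanity check, here is a back-of-the-envelope calculation of the required throughput and per-frame time budget implied by the numbers above (the variable names are illustrative, not part of any challenge API):

```python
# Back-of-the-envelope check of the throughput needed to stay under the limit.
video_minutes = 64            # approximate length of the longest video
frames = video_minutes * 60   # sampled at 1 fps -> ~3840 frames
limit_seconds = 60 * 60       # grand-challenge time limit per video (1 hr)

required_fps = frames / limit_seconds      # frames the container must process per second
per_frame_budget = limit_seconds / frames  # seconds available per frame

print(f"required throughput: {required_fps:.2f} fps")   # ~1.07 fps
print(f"per-frame budget:    {per_frame_budget:.3f} s")  # ~0.938 s
```

In other words, each frame (including model forward pass and any pre/post-processing) must complete in well under a second on average.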

Best,
SurgToolLoc 2022 Organizing Committee