Thanks to everyone who has submitted their algorithms for the final test phases. There are fewer than 3 days left before the submission period closes (Sept. 18), and we've had an excellent turnout so far!

I want to take a moment to remind you all about the 20-minute runtime limit for inference. This applies to each case in the final test phase (n=50). While we haven't experienced any timeout errors on the final test phases yet, we want to be cautious and help avoid any potential issues. This message is especially relevant for those who have yet to submit to the final test phase and whose algorithms are nearing the 20-minute mark on the preliminary development cases.

As you probably know, larger images typically take longer at inference. Case ID 121 from the preliminary development phase represents one of the larger images in the dataset, but there are a few cases in the final test phase that are slightly larger. Also, the GPU that Grand Challenge uses for running inference containers (an NVIDIA T4) is usually not as fast as the local GPUs you may be using. For context, on the GC platform our Task 1 nnUNet baseline took about 15 minutes for case ID 121 in the dev phase, and the longest inference time in the test phase was around 18 minutes. Please note there is a small startup time for the inference container as well. To minimize the risk of timing out, we strongly recommend reducing inference times wherever possible. If your algorithm is taking over 17 minutes on the dev phase cases, consider strategies like using fewer ensemble folds or disabling test-time augmentation.
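For those building on nnU-Net, here is a minimal sketch of both strategies, assuming you run inference through the nnU-Net v2 Python interface; the model folder and file paths below are placeholders, so substitute your own. (If you predict via the CLI instead, `nnUNetv2_predict` exposes the same choices through `-f` to select folds and `--disable_tta`.)

```python
# Minimal sketch: single-fold nnU-Net v2 inference with test-time augmentation
# disabled. All paths below are hypothetical placeholders.
import torch
from nnunetv2.inference.predict_from_raw_data import nnUNetPredictor

predictor = nnUNetPredictor(
    tile_step_size=0.5,
    use_gaussian=True,
    use_mirroring=False,  # disables test-time augmentation (mirroring)
    device=torch.device("cuda"),
)

# Load a single fold instead of the full 5-fold ensemble.
predictor.initialize_from_trained_model_folder(
    "/opt/model/nnUNetTrainer__nnUNetPlans__3d_fullres",  # placeholder model folder
    use_folds=(0,),
    checkpoint_name="checkpoint_final.pth",
)

# Predict one case; nnU-Net writes the segmentation into the output folder.
predictor.predict_from_files(
    [["/input/images/case_0000.nii.gz"]],  # placeholder input
    "/output",                             # placeholder output folder
    save_probabilities=False,
    overwrite=True,
)
```

Dropping from 5 folds to 1 and skipping mirroring each cut inference time substantially, usually at a small cost in accuracy, so it's worth checking the trade-off on the dev phase cases first.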

A practical way to check whether your algorithm will time out on any test case is to run case ID 191 from the training set (which is representative of the largest images in the test set) through the Try-Out Algorithm feature. You should already have the NIfTI file for case ID 191 in your local training folder; upload it directly in the Try-Out Algorithm tab and confirm that inference completes without a timeout. This is especially worth doing if you are close to the 5-submission cap on the dev phase.
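If you'd like a rough local estimate before running a Try-Out, a simple timing wrapper like the sketch below can help; `run_inference` and `CASE_PATH` are placeholders for your own pipeline and your local copy of case ID 191, and remember to leave headroom for the slower T4 and container startup.

```python
# Rough local timing check against the 20-minute budget. run_inference() and
# CASE_PATH are placeholders -- substitute your own inference entry point and
# the local path to case ID 191.
import time

CASE_PATH = "/path/to/training/case_191.nii.gz"  # placeholder
LIMIT_SECONDS = 20 * 60

def run_inference(case_path: str) -> None:
    # Placeholder: call your algorithm's inference pipeline here.
    ...

start = time.perf_counter()
run_inference(CASE_PATH)
elapsed = time.perf_counter() - start

# Leave headroom: the T4 on Grand Challenge is usually slower than a local GPU,
# and container startup eats into the wall clock as well.
print(f"Inference took {elapsed / 60:.1f} min "
      f"({elapsed / LIMIT_SECONDS:.0%} of the 20-minute limit)")
```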

If your algorithm times out during the test phase, it can be resubmitted (only fully successful submissions to the final test phase are scored). However, resubmitting means re-running the entire test set, which consumes compute credits and adds to costs on our end, and it could also cause unexpected delays due to the GC submission queue. We kindly ask you to help us avoid this scenario.

Thanks again for your participation and good luck!

Kareem