Time Computation  

  By: tanya.chutani on Aug. 3, 2024, 8:37 a.m.

Hi Team, could you please shed some light on the GPU name, GPU memory, GPU cores, CUDA version, and libraries that you are using to compute inference time?

 Last edited by: tanya.chutani on Aug. 5, 2024, 11:38 a.m., edited 3 times in total.

Re: Time Computation  

  By: jdex on Aug. 9, 2024, 11:40 a.m.

Hi Tanya,

I'm not sure I understand your question correctly. The inference time is limited by Grand Challenge; they probably just use the local wall-clock time and kill the Docker process after 5 minutes, or use some AWS-specific tooling. There is a nice blog post about the underlying infrastructure. The CUDA version depends on what you define in your Docker image, and the max resources are described on the submission page and here. Quote: "All models will be run on a single NVIDIA T4 GPU (16 GB VRAM) with 8 CPUs and a max memory of 30 GB."
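If you want to see how close your own container comes to that 5-minute limit, a minimal sketch of wall-clock timing around the inference loop might look like the following (the `run_inference` function and the case list are hypothetical placeholders for your actual model call and input data):

```python
import time

# Grand Challenge kills the container process after 5 minutes (per the docs quoted above)
TIME_BUDGET_S = 5 * 60

def run_inference(case):
    # Placeholder for the real model call (hypothetical)
    return case * 2

def timed_inference(cases):
    """Run inference over all cases, returning results and elapsed wall-clock seconds."""
    start = time.perf_counter()
    results = [run_inference(case) for case in cases]
    elapsed = time.perf_counter() - start
    return results, elapsed

results, elapsed = timed_inference([1, 2, 3])
print(f"Processed {len(results)} cases in {elapsed:.3f} s "
      f"(budget: {TIME_BUDGET_S} s)")
```

Logging this inside the container gives you a rough idea of whether your model will stay under the platform's timeout, independent of the exact GPU libraries in your image.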

Hope this answers your question.

Best,
Jakob