Time Computation ¶
By: tanya.chutani on Aug. 3, 2024, 8:37 a.m.
Hi Team, could you please shed some light on the GPU name, GPU memory, GPU cores, CUDA version and libraries that you are using to compute inference time.
By: jdex on Aug. 9, 2024, 11:40 a.m.
Hi Tanya,
I'm not sure I understand your question correctly. The inference time limit is enforced by Grand Challenge; most likely they simply use local wall-clock time and kill the Docker process after 5 minutes, or use some AWS-specific tooling. Here is a nice blog post about the underlying infrastructure. The CUDA version depends on what you define in your Docker image, and the maximum resources are described on the submission page and here. Quote: "All models will be run on a single NVIDIA T4 GPU (16 GB VRAM) with 8 CPUs and a max memory of 30 GB."
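If you want a rough local measurement of your own inference time before submitting, here is a minimal sketch using Python's monotonic `time.perf_counter`. This is only my suggestion, not the organisers' method, and `fake_inference` is a hypothetical stand-in for your model's forward pass:

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn once and return (result, elapsed seconds) using a monotonic clock."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed

# Hypothetical stand-in for a model's forward pass.
def fake_inference(x):
    time.sleep(0.01)  # placeholder for actual model work
    return x * 2

out, seconds = timed(fake_inference, 21)
print(out, seconds)
```

Timing a few representative cases this way should tell you whether you are anywhere near the 5-minute limit, though the platform's T4 will of course differ from your local hardware.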
Hope this answers your question.
Best, Jakob