Slower GPU inference speed compared to what you observe locally

  By: shiveshc on June 24, 2024, 11:34 p.m.

Are others also experiencing much slower inference speeds with GPUs on submissions than what you usually get locally on your own GPUs? I'm just trying to see if I am missing something in building the Docker image.
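For reference, a minimal sanity check (a sketch, assuming a PyTorch-based image) to rule out a silent CPU fallback, which is one common cause of unexpectedly slow inference inside a container:

    import torch

    # Log what the container actually sees at startup: if CUDA is not
    # available, inference silently runs on the CPU and is much slower.
    if torch.cuda.is_available():
        print("CUDA device:", torch.cuda.get_device_name(0))
        print("torch", torch.__version__, "| CUDA", torch.version.cuda)
    else:
        print("WARNING: CUDA not available, running on CPU")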

Thanks, Shivesh

Re: Slower GPU inference speed compared to what you observe locally

  By: veegalinova on June 26, 2024, 9:49 a.m.

Hello Shivesh,

Grand Challenge runs submissions on an NVIDIA T4, which is slower than many current consumer GPUs. The inference time for a submission is currently limited to 10 minutes. Does your method fit within 10 minutes locally, and do you think this is a reasonable limit for it? We'd appreciate your feedback!

Best, Vera

Re: Slower GPU inference speed compared to what you observe locally

  By: shiveshc on June 27, 2024, 11:44 p.m.

Thanks for getting back to me.

Unfortunately, 10 minutes is not enough for my method.

Shivesh

Re: Slower GPU inference speed compared to what you observe locally

  By: shiveshc on July 3, 2024, 11:11 p.m.

Dear organizers,

We have built a more efficient version of the method that runs locally in under 10 minutes on the W2S dataset, with inference times in the range of 7-11 s per image. On the platform, however, we see inference times of 42 s or more per image. So unfortunately I won't be able to submit, unless there is another way to do so.
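For reference, this is roughly how per-image numbers like the above can be measured; a minimal sketch where model and images stand in for the actual method and data (both hypothetical names). CUDA kernels launch asynchronously, so synchronizing before reading the clock matters for honest per-image timings:

    import time
    import torch

    def time_per_image(model, images, device="cuda"):
        # Returns a list of per-image inference times in seconds.
        times = []
        with torch.no_grad():
            for image in images:
                torch.cuda.synchronize()  # drain any pending GPU work
                start = time.perf_counter()
                _ = model(image.to(device))
                torch.cuda.synchronize()  # wait for this inference to finish
                times.append(time.perf_counter() - start)
        return times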

Shivesh

Re: Slower GPU inference speed compared to what you observe locally

  By: veegalinova on July 11, 2024, 10:29 a.m.

Hello Shivesh,

Sorry for the late reply. We are discussing with our team whether it is possible to raise the inference time limit, and we'll write back here as soon as we have an update. We really appreciate your interest and the work you have put into the challenge.

Best, AI4Life Open Challenge team

Re: Slower GPU inference speed compared to what you observe locally

  By: veegalinova on July 20, 2024, 1:06 p.m.

Hello Shivesh,

We have increased the inference time limit to 30 minutes on the Unstructured Noise 1: JUMP leaderboard and to 20 minutes on all other leaderboards.

Best, AI4Life Open Challenge team