Docker inference time  

  By: lewislou0210 on Oct. 16, 2022, 10:06 a.m.

I ran the cellseg_time_eval.py script. The inference time of my Docker container is over 40 s per image. However, when I added timestamps inside the predict.py code, the inference time per image was no more than 10 s. I wonder why this happens?
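For reference, a minimal sketch of how per-image timing can be made accurate (model and image are hypothetical placeholders, not names from the challenge code). CUDA calls are asynchronous, so without synchronization the clock may stop before the GPU actually finishes:

    import time
    import numpy as np
    import torch

    def timed_inference(model, image: np.ndarray) -> float:
        # Drain any pending GPU work so earlier calls don't leak into this measurement.
        torch.cuda.synchronize()
        start = time.perf_counter()
        with torch.no_grad():
            x = torch.from_numpy(image).to('cuda')
            _ = model(x)
        # Wait until the GPU has actually finished before reading the clock.
        torch.cuda.synchronize()
        return time.perf_counter() - start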

I found that it's because the first torch.from_numpy().to('cuda') call costs about 30 seconds, but I can't find the reason or a solution.
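One common explanation is that the first CUDA call in a process pays a one-time cost for creating the CUDA context and loading the driver and library code, which can be especially slow inside a container. A minimal warm-up sketch, under that assumption, that moves this one-time cost out of the per-image timing (it does not remove it from the container's total runtime):

    import torch

    def warm_up_gpu() -> None:
        # The first transfer forces CUDA context initialization.
        dummy = torch.zeros(1).to('cuda')
        # A trivial kernel launch to finish initializing the compute path.
        _ = dummy + 1
        # Block until initialization completes before any timed work starts.
        torch.cuda.synchronize()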

 Last edited by: lewislou0210 on Aug. 15, 2023, 12:57 p.m., edited 2 times in total.

Re: Docker inference time  

  By: junma on Oct. 18, 2022, 2:57 p.m.

We would recommend posting the question in the PyTorch repo to get a more professional answer.