Docker inference time
By: lewislou0210 on Oct. 16, 2022, 10:06 a.m.
I ran the cellseg_time_eval.py script, and the inference time of my Docker container is over 40 s per image. However, when I added timestamps inside the predict.py code, the inference time per image is no more than 10 s. I wonder why this happens?
I found that it is because the first torch.from_numpy().to('cuda') call alone costs about 30 seconds, but I can't find any reason or solution for this.
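For reference, here is a minimal sketch of how I am separating the one-time cost from the steady-state transfer time; the array shape and variable names are just illustrative, not the actual predict.py input:

```python
import time
import numpy as np
import torch

# Illustrative stand-in for one input image handled in predict.py.
img = np.random.rand(1, 3, 512, 512).astype(np.float32)

# Warm-up: the very first CUDA operation in a process pays for context
# creation and kernel loading, which can take tens of seconds in a container.
_ = torch.zeros(1).to('cuda')
torch.cuda.synchronize()

# After the warm-up, the timed transfer should reflect only the copy cost.
start = time.time()
tensor = torch.from_numpy(img).to('cuda')
torch.cuda.synchronize()  # wait for the (asynchronous) copy to finish
print(f'transfer took {time.time() - start:.3f} s')
```

With a warm-up like this, the later per-image transfers take milliseconds for me; only the first GPU call in the container is slow.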