Official Suggested Model Size / Inference Time Limit on Docker Submissions
By: jyuanfeng8 on July 19, 2022, 2:47 p.m.
Dear participants,
In response to the many questions from participants about runtime and size limits for submitted Dockers, we are officially setting the following limits on submitted models to allow a more reasonable and fair comparison:

1) The submitted model must run reasonably on a single NVIDIA RTX 3090 GPU with 128 GB of system memory.
2) Following previous competitions, we suggest limiting the inference time for each case to 15 minutes (on a 3090 GPU). Given the number of test cases for Task 1 and Task 2, the submitted model should finish testing in about 3 days per task. A sketch of a simple self-check harness is given below.

Because this is a sudden change, we will not penalize submissions that exceed these time limits, but we hope teams will adhere to them as closely as possible. Extreme cases will be handled separately.
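For teams that want to verify compliance before submitting, here is a minimal self-check sketch, assuming a PyTorch model; `model` and `cases` are hypothetical placeholders for your own network and test-case loader. It times each case and reports peak GPU memory, which should stay within the 3090's VRAM:

```python
import time
import torch

PER_CASE_LIMIT_S = 15 * 60  # suggested 15-minute limit per case


def check_inference_budget(model, cases):
    """Time each case and warn when it exceeds the suggested limit.

    `model` and `cases` are placeholders; this is only an
    illustrative harness, not an official checker.
    """
    model.eval()
    for i, case in enumerate(cases):
        torch.cuda.reset_peak_memory_stats()
        torch.cuda.synchronize()
        start = time.perf_counter()
        with torch.no_grad():
            _ = model(case)
        torch.cuda.synchronize()
        elapsed = time.perf_counter() - start
        peak_gb = torch.cuda.max_memory_allocated() / 1024 ** 3
        if elapsed > PER_CASE_LIMIT_S:
            print(f"warning: case {i} took {elapsed:.0f}s "
                  f"(> {PER_CASE_LIMIT_S}s suggested limit)")
        print(f"case {i}: {elapsed:.1f}s, peak GPU memory {peak_gb:.1f} GB")
```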
We apologize for not addressing this issue earlier and ask participants to adjust the number of models in their submissions accordingly. To mitigate the impact of this change, we will postpone the submission deadline to July 22nd.
Good luck!