Calculate evaluation metrics in inference.py?
By: gurucharan.marthi on Aug. 26, 2024, 4:45 p.m.
Dear team,
Thank you for organizing this event. I have two questions from my side:
- Do we need to calculate the evaluation metrics in "inference.py" before uploading the Docker container? If so, the "input.zip" in the docker template you shared does not contain any ground-truth files. Are we allowed to use another input-output pair of our own for evaluation purposes? (A rough sketch of what I mean is below.)
- If a submission fails with "algorithm failed on one or more cases", is there a way to view the log of the failed case?
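For context, here is a rough sketch of what I mean in the first question, i.e. checking my container's output against a ground truth I supply myself. The Dice metric, the SimpleITK reader, and the file paths are all just my own illustrative assumptions, not anything from the official template:

```python
# Hypothetical local sanity check, NOT part of the official template:
# compare the container's output against a ground-truth mask I provide myself.
import numpy as np
import SimpleITK as sitk

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

# Placeholder paths for my own input-output pair.
pred_mask = sitk.GetArrayFromImage(sitk.ReadImage("output/prediction.mha"))
gt_mask = sitk.GetArrayFromImage(sitk.ReadImage("my_ground_truth/mask.mha"))
print(f"Dice: {dice(pred_mask, gt_mask):.4f}")
```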
Thank you, Gurucharan