Your algorithm encountered an error:
"torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.12 GiB. GPU 0 has a total capacity of 14.75 GiB, with 29.06 MiB free. Process 8910 is using 14.72 GiB of memory, of which 13.51 GiB is allocated by PyTorch, and 1.07 GiB is reserved but unallocated."
To address this issue, please modify your code to run within the available resources. You have access to NVIDIA T4 GPUs with 16 GB of GPU memory, 8 CPU cores, and 32 GB of CPU memory. Unfortunately, we cannot cancel your current submission, but you can submit an updated version of your algorithm.
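A few common techniques usually resolve this kind of out-of-memory error during inference: disabling gradient tracking, running in reduced precision, processing smaller batches, and releasing cached memory between batches. The sketch below illustrates these in PyTorch; `model` and `loader` are hypothetical stand-ins for your own network and data pipeline, so adapt the details to your submission.

```python
import torch

def run_inference(model, loader, device=None):
    """Run inference in a memory-frugal way (illustrative sketch).

    `model` and `loader` are placeholders for your own network
    and data pipeline.
    """
    device = device or ("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device).eval()
    outputs = []
    # no_grad() stops PyTorch from storing activations for backprop,
    # which is the usual cause of OOM during evaluation
    with torch.no_grad():
        for batch in loader:
            batch = batch.to(device)
            # fp16 autocast roughly halves activation memory on CUDA
            with torch.autocast(device_type=device, enabled=(device == "cuda")):
                out = model(batch)
            # move results off the GPU immediately
            outputs.append(out.cpu())
            # hand cached blocks back to the allocator between batches
            if device == "cuda":
                torch.cuda.empty_cache()
    return torch.cat(outputs)
```

If reducing the batch size and the steps above are not enough, note that your error reports 1.07 GiB "reserved but unallocated", which can indicate allocator fragmentation; on recent PyTorch versions, setting the environment variable `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` before launching may help reclaim that space.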
For additional guidance, refer to the documentation on Docker submissions available here and here.
Feel free to reach out if you need further assistance! 😊