Executing Algorithm

Executing Algorithm  

  By: 坤坤kk on June 23, 2024, 4:18 a.m.

Hello, the algorithm I submitted has been stuck in the "Executing Algorithm" status. Could you please check it for me? If it's due to a platform issue, could you cancel this submission so that I can resubmit? Thank you very much!

Re: Executing Algorithm  

  By: imran.muet on June 23, 2024, 1:27 p.m.

Your algorithm encountered an error:

"torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.12 GiB. GPU 0 has a total capacity of 14.75 GiB, with 29.06 MiB free. Process 8910 is using 14.72 GiB of memory, of which 13.51 GiB is allocated by PyTorch, and 1.07 GiB is reserved but unallocated."
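The "1.07 GiB is reserved but unallocated" part of that message usually points at fragmentation in PyTorch's caching allocator. One common mitigation (separate from simply using less memory) is to reconfigure the allocator via an environment variable before the first CUDA allocation; a minimal sketch, with no guarantee it is enough for this particular workload:

```python
import os

# PyTorch honors PYTORCH_CUDA_ALLOC_CONF only if it is set before the first
# CUDA allocation, so this must run before any CUDA tensor is created.
# "expandable_segments:True" lets the allocator grow segments instead of
# reserving fixed blocks, which reduces "reserved but unallocated" waste.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

# Import torch / touch the GPU only after the line above, e.g.:
#   import torch
#   x = torch.zeros(1, device="cuda")
```

Whether this helps depends on the allocation pattern; if the model itself needs more than the GPU has, reducing batch or input size is still required.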

To address this issue, please modify your code to run within the available resources. You have access to Nvidia T4 GPUs with 16GB of memory, 8 CPU cores, and 32GB of CPU memory. Unfortunately, we cannot cancel your current submission, but you can submit an updated version of your algorithm.
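As a rough sanity check before resubmitting, it can help to estimate how large a batch of inputs fits in the T4's 16 GiB. The sketch below is purely illustrative: the input shape, float16 element size, and the activation-overhead factor are assumptions, not values from the submitted algorithm.

```python
GIB = 1024 ** 3

def batch_bytes(batch_size, channels, height, width, bytes_per_elem=2):
    """Bytes needed to hold one input batch (float16 by default).
    Weights and activations will add a large multiple on top of this."""
    return batch_size * channels * height * width * bytes_per_elem

def max_batch_size(budget_bytes, channels, height, width,
                   bytes_per_elem=2, overhead_factor=20):
    """Largest batch whose inputs, scaled by an assumed activation/weight
    overhead factor, still fit in the memory budget."""
    per_item = channels * height * width * bytes_per_elem * overhead_factor
    return max(1, budget_bytes // per_item)

# Leave ~2 GiB of the 16 GiB T4 as headroom for the CUDA context and
# allocator fragmentation.
budget = 14 * GIB
print(max_batch_size(budget, channels=3, height=1024, width=1024))
```

The overhead factor varies wildly between architectures, so treat the result as an upper bound to start from, then verify with `torch.cuda.max_memory_allocated()` on a small local run.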

For additional guidance, refer to the documentation on docker submissions available here and here.

Feel free to reach out if you need further assistance! 😊