Hardware discrepancy in the instructions?  

  By: mayozhu22 on June 24, 2025, 2:08 a.m.

Hi there,

I was reading through the challenge instructions, and noticed two different hardware specs mentioned.

The Timeline And Rules section says that the algorithm should run on an AWS g4dn.2xlarge instance, which has an NVIDIA T4 GPU with 16 GB of VRAM.

However, the Submission Instructions say we should request an A10 GPU (with 24 GB of VRAM) for our algorithm container.

Which is the correct GPU?

Thank you!

 Last edited by: mayozhu22 on June 24, 2025, 2:09 a.m., edited 3 times in total.

Re: Hardware discrepancy in the instructions?  

  By: gpalandry on June 24, 2025, 6:22 a.m.

Dear mayozhu22,

Thank you for bringing this up. We have updated the Timeline And Rules to match the Submission Instructions.

Here is the updated text:

“Inference of submitted algorithms should run on an NVIDIA A10G Tensor Core GPU (24 GiB VRAM). Please select this under the ‘Job required gpu type’ pulldown menu. It is important to select this GPU to ensure consistent algorithm runtime evaluation. Maximum inference time for a single case (one patient) must not exceed 1 sec per frame plus model and data loading time.”
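If you would like to sanity-check the runtime limit locally before submitting, here is a minimal sketch (assuming a PyTorch-based algorithm; `model` and `frames` are hypothetical placeholders for your own network and input tensor, not part of the challenge template) that prints which GPU the container sees and the average per-frame inference time:

```python
# Minimal sketch: report the visible GPU and estimate per-frame inference time.
# Assumes a PyTorch model; `model` and `frames` are placeholders for your own
# algorithm and input data, not names from the challenge codebase.
import time

import torch


def report_gpu_and_timing(model: torch.nn.Module, frames: torch.Tensor) -> None:
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    if device.type == "cuda":
        print("GPU visible to the container:", torch.cuda.get_device_name(device))

    model = model.to(device).eval()
    frames = frames.to(device)

    with torch.no_grad():
        # Warm-up pass so CUDA initialisation is not counted as inference time.
        model(frames[:1])
        if device.type == "cuda":
            torch.cuda.synchronize()

        start = time.perf_counter()
        for frame in frames:
            model(frame.unsqueeze(0))
        if device.type == "cuda":
            torch.cuda.synchronize()
        elapsed = time.perf_counter() - start

    per_frame = elapsed / len(frames)
    print(f"Average inference time: {per_frame:.3f} s per frame (limit: 1 s per frame)")
```

Note that the official limit above excludes model and data loading time, so only the per-frame loop is timed here.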

Hope this solves the issue.

Best, Guillaume

Re: Hardware discrepancy in the instructions?  

  By: mayozhu22 on June 24, 2025, 12:56 p.m.

Wonderful, thank you!

Also, as a quick FYI, it may be necessary to specify 16 GB of CPU RAM instead of 32 GB for the competition. When I tried to select the A10G with 32 GB of RAM, I got a warning that those machines are in high demand and that there may be a 48-hour wait to run the submission.

 Last edited by: mayozhu22 on June 24, 2025, 12:58 p.m., edited 1 time in total.