Docker Submission Nodes

Docker Submission Nodes  

  By: TWald on Sept. 8, 2023, 10:15 a.m.

Hello,

Due to the limited inference time (and CPU-only inference?), it would be helpful to get some information about the nodes one submits to, so one can try to optimize inference time for such a worker — e.g. RAM, number of CPU cores, CPU types.

Cheers, Tassilo

Re: Docker Submission Nodes  

  By: jmsmkn on Sept. 8, 2023, 2:31 p.m.

You can specify the GPU and Memory options in your Algorithm Image settings. If you specify that a GPU can be utilised then you get either:

  • ml.g4dn.xlarge: 1x NVIDIA T4 GPU, 4 vCPUs, 16 GiB of Memory, 1x 125 GB NVMe SSD.
  • ml.g4dn.2xlarge: 1x NVIDIA T4 GPU, 8 vCPUs, 32 GiB of Memory, 1x 225 GB NVMe SSD.

depending on the amount of memory you requested. G4dn instances feature one NVIDIA T4 GPU with 16 GB of GDDR6 memory (300 GB/sec bandwidth) and custom Intel Cascade Lake CPUs. Currently NVIDIA driver version 470 and CUDA version 11.4 are used.
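Since the assigned instance varies, it can help to log what hardware the container actually landed on at startup. A minimal sketch (assuming a standard Linux container; `nvidia-smi` will only be present on the GPU instances):

```python
import os
import shutil
import subprocess

def describe_node():
    """Collect basic specs of the node the container is running on."""
    info = {"vcpus": os.cpu_count()}
    # Total RAM from /proc/meminfo (reported in kB; convert to GiB).
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                info["ram_gib"] = round(int(line.split()[1]) / 1024**2, 1)
                break
    # GPU details, only if the NVIDIA driver/tools are available.
    if shutil.which("nvidia-smi"):
        out = subprocess.run(
            ["nvidia-smi",
             "--query-gpu=name,memory.total,driver_version",
             "--format=csv,noheader"],
            capture_output=True, text=True,
        )
        info["gpu"] = out.stdout.strip() or None
    else:
        info["gpu"] = None
    return info

print(describe_node())
```

Printing this once at the start of inference makes it easy to confirm from the job logs which instance type was used.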

If you do not select a GPU instance you will be assigned one of:

  • ml.m5.large: 2 vCPUs, 8 GiB of Memory
  • ml.m5.xlarge: 4 vCPUs, 16 GiB of Memory
  • ml.m5.2xlarge: 8 vCPUs, 32 GiB of Memory

depending on the amount of memory you requested. M5 instances use Intel Xeon Scalable processors. These M5 instances only have 30 GB of EBS (non-NVMe) storage; if you require more storage, please select a GPU instance.

You can see the instance type and its specifications that were used for the Algorithm Job on the Job's log page, on the chart where the resource usage is displayed.

Re: Docker Submission Nodes  

  By: TWald on Sept. 11, 2023, 6:45 a.m.

Awesome, thanks for the detailed reply!

Cheers, Tassilo