Runtime Environment

A container will be created from the container image whenever you create a job for your algorithm. You can use the "Try Out Algorithm" button to test your container on the platform with your own data.

Any output to both stdout and stderr is captured. Output on stderr is marked as a warning in the job's result. If an algorithm does not run properly, it should exit with a non-zero exit code; the job for the algorithm is then marked as failed. Runtime metrics are available on the logs page of your job.
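For example, assuming a Python entry point (the run function below is purely illustrative), a minimal sketch of this pattern could look like:

    import sys
    import traceback

    def run():
        # ... load the input, run inference, write the output ...
        pass

    if __name__ == "__main__":
        try:
            run()
        except Exception:
            # Anything printed to stderr is shown as a warning in the job's result.
            traceback.print_exc()
            # Exit with a non-zero code so the job is marked as failed.
            sys.exit(1)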

Restrictions

Your container will be executed on one case at a time and is subject to a number of restrictions.

No Network Access

Your container will be executed without access to any network resources. This is to prevent exfiltration of private data uploaded by users or used in the challenge. Your container must therefore include everything it needs to run at build time.
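As an illustration, assuming your model weights are bundled into the image at a path such as /opt/app/resources (the path and filename here are hypothetical), your code should load them from disk rather than attempt any download:

    from pathlib import Path

    # Hypothetical location; bundle whatever you need (weights, lookup tables,
    # pretrained models) into the image at build time.
    RESOURCE_DIR = Path("/opt/app/resources")

    def load_weights():
        weights_path = RESOURCE_DIR / "model_weights.pth"
        if not weights_path.exists():
            # There is no network access at runtime, so a download fallback
            # would fail here -- the file must already be in the image.
            raise FileNotFoundError(f"Expected bundled weights at {weights_path}")
        return weights_path.read_bytes()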

/tmp Will Be Empty

The /tmp directory will be completely empty at runtime. It is scratch space that you can use for transient files, and will usually have a fast NVMe device attached. Any files that you included in /tmp in your Dockerfile will not be present at runtime. It is best practice to put such files somewhere else, for example in a subdirectory of /opt.
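A minimal sketch of this split, using an illustrative bundled file under /opt/app/resources:

    import tempfile
    from pathlib import Path

    # Bundled data belongs somewhere that survives into the runtime image,
    # e.g. a subdirectory of /opt (this path is illustrative).
    BUNDLED_ATLAS = Path("/opt/app/resources/atlas.nii.gz")

    # /tmp is empty scratch space at runtime; put transient files there.
    with tempfile.TemporaryDirectory(dir="/tmp") as scratch:
        intermediate = Path(scratch) / "resampled.nii.gz"
        intermediate.write_bytes(b"")  # placeholder for a real intermediate result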

/input Is Read Only

The /input directory is read only. /tmp and /output are fully writable by your process.
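A small sketch of the expected pattern; the glob pattern and result filename below are only placeholders, since the actual files depend on your algorithm's input and output interfaces:

    import json
    from pathlib import Path

    INPUT_DIR = Path("/input")    # read-only
    OUTPUT_DIR = Path("/output")  # writable

    def main():
        # Placeholder interface: count the input images and write a JSON result.
        images = sorted(INPUT_DIR.glob("images/**/*.mha"))
        OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
        with open(OUTPUT_DIR / "results.json", "w") as f:
            json.dump({"num_images": len(images)}, f)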

50% Of System Memory Is Shared

The shared memory available to your container at /dev/shm is 50% of the system memory. For example, on a 16 GiB instance, /dev/shm will be 8 GiB. The percentage is not modifiable.
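If you want to verify this at runtime, for example before configuring data-loader workers that rely on shared memory, you can inspect /dev/shm directly:

    import shutil

    # On a 16 GiB instance this reports roughly 8 GiB total.
    total, used, free = shutil.disk_usage("/dev/shm")
    print(f"/dev/shm: {total / 2**30:.1f} GiB total, {free / 2**30:.1f} GiB free")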

Runtime Is Set By The Phase

The maximum runtime is set by the phase of the challenge that you are submitting to. Ensure that your container can produce its output in that time.
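One simple way to keep an eye on this is to log your own wall-clock time to stdout, so you can compare it against the phase's limit in the job logs:

    import time

    start = time.monotonic()
    # ... run your pipeline ...
    elapsed = time.monotonic() - start
    # Printed to stdout, so it appears in the job's logs next to the runtime metrics.
    print(f"Total runtime: {elapsed:.0f} s")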

Instance Types

You can specify the GPU and memory options in your Algorithm Image settings. If you specify that a GPU can be utilised, you will get one of the following, depending on the amount of memory you requested:

  • ml.g4dn.xlarge: 1x NVIDIA T4 GPU, 4 vCPUs, 16 GiB of Memory, 1x 125 GB NVMe SSD mounted at /tmp.
  • ml.g4dn.2xlarge: 1x NVIDIA T4 GPU, 8 vCPUs, 32 GiB of Memory, 1x 225 GB NVMe SSD mounted at /tmp.

G4dn instances feature 1x NVIDIA T4 GPU with 16 GB of GDDR6 memory (300 GB/s bandwidth) and custom Intel Cascade Lake CPUs. Currently, NVIDIA Driver Version 535 and CUDA Version 12.0 are used.
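To confirm at runtime that the GPU is visible to your framework, a quick check such as the following can help; PyTorch is assumed here purely as an example, and any framework's equivalent check works:

    import torch

    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0))
        print("CUDA runtime:", torch.version.cuda)
    else:
        print("No GPU visible; falling back to CPU")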

If you do not select a GPU instance, you will be assigned one of the following, depending on the amount of memory you requested:

  • ml.m5.large: 2 vCPUs, 8 GiB of Memory
  • ml.m5.xlarge: 4 vCPUs, 16 GiB of Memory
  • ml.m5.2xlarge: 8 vCPUs, 32 GiB of Memory

M5 instances use Intel Xeon Scalable processors. These M5 instances only have 30 GB of EBS (non-NVMe) storage mounted at /tmp; if you require more storage, please select a GPU instance.
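A quick sanity check of the vCPUs, memory, and /tmp scratch space actually available to your process can be done from inside the container; this sketch assumes a Linux environment, which these instances provide:

    import os
    import shutil

    print("vCPUs:", os.cpu_count())
    total, _, free = shutil.disk_usage("/tmp")
    print(f"/tmp scratch: {free / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")
    # Total system memory via sysconf (Linux only).
    mem_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    print(f"Memory: {mem_bytes / 2**30:.1f} GiB")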

You can see the instance type and its specifications that were used for the algorithm job on the job's logs page, in the chart where resource usage is displayed.

FAQ

Output overlay not visible or incorrectly placed on input image

It is important to specify the correct voxel spacing, origin, and direction for image outputs that should be shown as overlays on the input images. When using output_sitk_img = SimpleITK.GetImageFromArray(numpy_array), SimpleITK will set default values that might not correspond with the input image, resulting in an incorrectly placed overlay. Use output_sitk_img.CopyInformation(input_sitk_img) to copy the origin, spacing, and direction values from the input image to the output image so that they correspond.
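Putting this together, a minimal sketch could look like the following; the file paths and the thresholding step are placeholders for your actual interface and inference code:

    import numpy as np
    import SimpleITK

    # Read the input image (path is illustrative).
    input_sitk_img = SimpleITK.ReadImage("/input/images/ct/input.mha")

    # Run your model on the array representation.
    numpy_array = SimpleITK.GetArrayFromImage(input_sitk_img)
    segmentation = (numpy_array > 0).astype(np.uint8)  # placeholder for real inference

    # Convert back to an image and copy origin, spacing, and direction from the
    # input so the overlay lines up with the input image.
    output_sitk_img = SimpleITK.GetImageFromArray(segmentation)
    output_sitk_img.CopyInformation(input_sitk_img)

    SimpleITK.WriteImage(output_sitk_img, "/output/images/segmentation/output.mha")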