Algorithm Time Limits and Optimization Tips¶
Algorithms have a default time limit of 5 minutes. If you are participating in a challenge, the runtime limit for your algorithm is determined by the challenge organizers.
You are responsible for ensuring that your algorithm completes within the allotted time by optimizing its performance. Write your code defensively, and use logging to document key steps and variables; add further logging as needed to help diagnose issues during development and testing.
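As a minimal sketch of what defensive logging and basic timing could look like, the example below logs the start, end, elapsed time, and any failure of an inference run. The `inference_one_case` function is a hypothetical placeholder for your own pipeline:

```python
import logging
import time

# Log to stdout so the messages show up in the job logs.
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger(__name__)


def run():
    start = time.perf_counter()
    logger.info("Starting inference")
    try:
        result = inference_one_case()  # hypothetical: your own inference routine
        logger.info("Inference finished: %s", result)
    except Exception:
        # Log the full traceback so failures are easy to diagnose from the job logs.
        logger.exception("Inference failed")
        raise
    finally:
        logger.info("Elapsed time: %.1f s", time.perf_counter() - start)
```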
📄 Accessing Logs and Resource Metrics¶
You can access the logs of a particular algorithm job by navigating to Results and clicking on the Details button for that job.
Whether you have access to the logs created by your algorithm depends on the context:
- If you use the "Try out this algorithm" feature to test your algorithm, you will be able to access the logs and metrics.
- If you submit your algorithm to a phase of a challenge, log access depends on how the organizers have configured that phase. Some phases explicitly enable log access for participants to help debug issues with large or complex cases; otherwise, you will need to contact the challenge organizers for assistance with debugging.
⚡ GPU Utilization¶
One commonly overlooked optimization is ensuring your algorithm utilizes the available GPU.
You can select a GPU in your algorithm settings; however, in a challenge, the available GPU(s) are determined by the organizers.
If a GPU is available and enabled for your algorithm, you must ensure your code explicitly uses it during inference. Most deep learning frameworks require you to manually move your model and data to the GPU. Refer to your framework’s documentation for details.
For example, in PyTorch, you would typically write something like this (pseudocode):
```python
import torch

# Use the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Move both the model and the input data to the selected device.
model = MyModel().to(device)
data = data.to(device)

output = model(data)
```
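As a small additional optimization, inference code in PyTorch is often wrapped in `torch.inference_mode()` (or `torch.no_grad()`) to skip gradient tracking, which reduces runtime and GPU memory use. This is a sketch, assuming your algorithm only performs inference:

```python
# Disable gradient tracking during inference to save time and memory.
with torch.inference_mode():
    output = model(data)
```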