Algorithms have a default time limit of 5 minutes. If you are participating in a challenge, the runtime limit for your algorithm is determined by the challenge organizers. You are responsible for ensuring that your algorithm completes within the allotted time by optimizing its performance. One commonly overlooked optimization is making sure your algorithm actually utilizes the available GPU. You can select a GPU in your algorithm settings; however, in a challenge, the available GPU(s) are determined by the organizers.
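Before optimizing anything else, it is worth confirming at startup that a GPU is actually visible inside your container. The snippet below is a minimal sketch using PyTorch, since that is the framework used in the example further down; the log messages are illustrative, not part of any platform API, so check your own job logs for the output:

import torch

# Log which device the algorithm will run on; inspect this in your job logs
if torch.cuda.is_available():
    print(f"GPU detected: {torch.cuda.get_device_name(0)} "
          f"({torch.cuda.device_count()} device(s) visible)")
else:
    print("No GPU detected; running on CPU, inference will be slower")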
If a GPU is available and enabled for your algorithm, make sure your code explicitly uses it during inference. Most deep learning frameworks require you to move your model and data to the GPU manually; refer to your chosen framework's documentation for details. In PyTorch, for example, a minimal inference setup looks like this:
import torch

# Use the GPU when one is available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# MyModel and data are placeholders for your own model and input
model = MyModel().to(device)
model.eval()  # switch layers such as dropout and batch norm to inference mode

with torch.no_grad():  # skip gradient tracking to save memory and time
    data = data.to(device)  # inputs must be on the same device as the model
    output = model(data)
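Two inference-time details in this example are easy to forget: model.eval() switches layers such as dropout and batch normalization to their inference behavior, and torch.no_grad() stops PyTorch from building a computation graph, which reduces both memory use and runtime. Also note that the model and every tensor passed to it must live on the same device; otherwise PyTorch raises a device-mismatch RuntimeError.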