Only the top 5 teams from the Open Development Phase will be invited to participate in the Closed Testing Phase. At the start of the Closed Testing Phase, we will retrain one AI algorithm from each of these top 5 teams on the combined Public Training and Development Dataset (1500 cases) and the Private Training Dataset (7500-9500 cases). To this end, teams must provide us with their AI algorithms encapsulated in training containers (more details will follow in November), and we will retrain their algorithms on AWS SageMaker instances.

To facilitate each of these training runs, we are allotting a maximum total budget of 1300 USD (tentative) per team/algorithm. For instance, such a budget can run an ml.p3.2xlarge instance (with a single 16 GB NVIDIA Tesla V100 GPU) for roughly 2 weeks. Alternatively, teams can opt for a cheaper low-compute instance for more hours, or a more expensive high-compute instance for fewer hours. In any case, their AI algorithms must complete all steps (from data preprocessing, pretraining or pseudo-label generation, regular training, cross-validation and ensembling, to exporting the final set of weights used for inference/testing) within the allotted budget. For a full list of AWS SageMaker instances, along with their pricing and specs, please visit: https://aws.amazon.com/sagemaker/pricing/
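As a rough aid for planning, the budget-to-runtime trade-off above can be sketched in a few lines of Python. The hourly rates below are illustrative assumptions only (on-demand rates vary by region and change over time); always check the SageMaker pricing page for current figures.

```python
# Rough estimate of how many instance-hours a fixed budget buys.
# All hourly rates here are assumptions for illustration, not official figures;
# see https://aws.amazon.com/sagemaker/pricing/ for up-to-date pricing.

BUDGET_USD = 1300  # tentative per-team budget

# Assumed on-demand hourly rates (USD/hour), region-dependent:
hourly_rates = {
    "ml.p3.2xlarge": 3.825,   # 1x NVIDIA V100 (16 GB)
    "ml.g4dn.xlarge": 0.736,  # 1x NVIDIA T4 (16 GB)
    "ml.p3.8xlarge": 14.688,  # 4x NVIDIA V100
}

def max_hours(budget_usd: float, rate_per_hour: float) -> float:
    """Total instance-hours affordable at a given hourly rate."""
    return budget_usd / rate_per_hour

for instance, rate in hourly_rates.items():
    hours = max_hours(BUDGET_USD, rate)
    print(f"{instance}: ~{hours:.0f} hours (~{hours / 24:.1f} days)")
```

Under these assumed rates, the ml.p3.2xlarge comes out to roughly 340 hours, i.e. about 2 weeks of continuous training, consistent with the estimate above.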

We urge all teams to take these compute limits into consideration while developing and submitting their AI algorithms to the Open Development Phase. Our provided public baseline models were developed under this same budget and adhere to it as well. Unfortunately, we can neither support AI algorithms that demand impractically vast amounts of compute or several weeks of training (e.g. due to huge ensembles, inefficient data loading or highly complex heuristics), nor invite their developers to the Closed Testing Phase as one of the top 5 teams.