Hello everyone,

as you may have noticed, a few submissions failed with an apparent "Runtime limit exceeded" error.

I am happy to report that I was able to "fix" the issue and that all affected submissions were re-evaluated successfully and are now part of the leaderboard for the preliminary testing phase. Future submissions should work right away.

The issue arose from an update to the Grand Challenge platform about two days after the start of the preliminary testing phase. This update removed the "started_at" and "completed_at" timestamps that we were using to automatically compute the runtime of each algorithm. While I don't fully understand the mechanism, the missing timestamps in turn prevented the evaluation script from terminating until the time limit was exceeded.
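For the curious, here is a minimal sketch of how missing timestamps could cause such a hang. This is not the actual Grand Challenge evaluation code; the job dictionary, the fetch_job helper, and the polling loop are all assumptions for illustration:

    import time
    from datetime import datetime

    def compute_runtime(job: dict) -> float:
        # Runtime in seconds from the two platform timestamps
        # (ISO 8601 format is an assumption).
        started = datetime.fromisoformat(job["started_at"])
        completed = datetime.fromisoformat(job["completed_at"])
        return (completed - started).total_seconds()

    def wait_for_runtime(fetch_job) -> float:
        # Poll until both timestamps are available. If a platform update
        # stops populating them, this loop never exits and the evaluation
        # runs until the surrounding time limit kills it; that would be
        # consistent with the "Runtime limit exceeded" failures above.
        while True:
            job = fetch_job()  # hypothetical helper returning job metadata
            if job.get("started_at") and job.get("completed_at"):
                return compute_runtime(job)
            time.sleep(10)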

To solve this issue, we had to disable the automatic reporting of the runtime on the leaderboard. For the final testing phase we will manually compile the runtimes and add them to the final ranking.

I apologise for any confusion or inconvenience this may have caused.

Sincerely, Tom


Note:

The detailed metrics for your submissions will show the following placeholder values:

    "runtime": 7.6,
    "total_time": 1020,
    "loading_time": 120,
    "time_per_frame": 0.1,

These are just placeholder values and will not be used for any ranking.