Visualizations for Challenges
Published 6 Nov. 2020
Visualizations of algorithm results beyond the challenge leaderboard can provide insight into algorithm performance on a specific task and point to new research directions [1]. Grand Challenge offers an option to add visualizations to your challenge through ObservableHQ notebooks. You can use Vega or Vega-Lite to easily create graphs, and the Vega Editor to edit them, before integrating them into an ObservableHQ notebook that is embedded in your challenge page on grand-challenge.org. To get you started, we provide a couple of example notebooks for different types of challenges (e.g. a classification, a segmentation, and a detection challenge).
Three ways to gain insight beyond the leaderboard
There are three ways in which a visitor can gain insight into the submission results beyond the leaderboard, which we will call scenarios 1–3:
Scenario 1: Algorithm Details View
Check out the results of the selected submission on the "Evaluation" page.
Scenario 2: Browse Results
Walk through the results of all challenge leaderboard submissions.
Scenario 3: Compare Results
Compare the results of the submissions selected on the challenge leaderboard.
To provide these three views, you need to create two types of ObservableHQ notebooks: one notebook for scenarios 1 and 2, and one notebook for scenario 3. In the next section, we explain how to create an ObservableHQ notebook.
Creating Visualizations in Observable Notebooks
ObservableHQ notebooks are similar to Jupyter notebooks, but for JavaScript (read more or follow the 5-minute introduction). Visualizations made with Vega or Vega-Lite are easy to integrate into ObservableHQ notebooks, and Vega enables you to create interactive graphics and visualizations with little code. An example gallery can be found here.
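To give an impression of how compact such a specification can be, here is a minimal Vega-Lite spec for a bar chart of a Dice score per case. The field names and values are hypothetical placeholders; substitute your own challenge metrics:

```json
{
  "$schema": "https://vega.github.io/schema/vega-lite/v4.json",
  "description": "Dice score per case (hypothetical example data)",
  "data": {
    "values": [
      {"case": "case-01", "dice": 0.91},
      {"case": "case-02", "dice": 0.84},
      {"case": "case-03", "dice": 0.88}
    ]
  },
  "mark": "bar",
  "encoding": {
    "x": {"field": "case", "type": "nominal", "title": "Case"},
    "y": {"field": "dice", "type": "quantitative", "title": "Dice"}
  }
}
```

Pasting a specification like this into the Vega Editor renders the chart immediately, so you can tweak it interactively before moving it into your notebook.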
We created some example notebooks for various challenge types that you can use as a starting point for your challenge. You will need to create an account at https://observablehq.com/, log in, and "Fork" the notebook of your choice. You will then find the forked notebook under "notebooks" in your account.
Segmentation Challenge: LOLA11
This notebook shows various interactive scatterplots with the Dice metric per subject per segment. You can view the live LOLA11 leaderboard, or fork the evaluation detail notebook or the evaluation comparison notebook.
Classification Challenge: COVID19
This notebook shows a confusion matrix and interactive ROC curve. You can view the live COVID19 leaderboard, or fork the evaluation detail notebook or the evaluation comparison notebook.
Detection Challenge: LYON19
This notebook shows a bar graph for the overall F1-score, precision and recall, and boxplots for precision and recall, true positives, false negatives, and false positives. You can view the live LYON19 leaderboard, or fork the evaluation detail notebook or the evaluation comparison notebook.
Below is an example of what the comparison notebook looks like.
Embedding Observable notebooks
In order to embed the ObservableHQ notebooks in your challenge on grand-challenge.org, go to your challenge evaluation settings:
evaluation → challenge → Leaderboard → admin → Challenge Evaluation → ⚙ Settings
In Observable, you need to retrieve the URL with the cells you would like to embed in your challenge page, like this:
Then, on grand-challenge.org, paste the URL for scenarios 1 and 2 here:
And the URL for scenario 3 here:
Press Save and you should be able to view the visualizations in your challenge on grand-challenge.org. You can click on the edit button below your notebook to open the Observable editor with your challenge data.
Enable editing with the Vega Editor
To enable challenge visitors to open the data in the Vega Editor, you need to set the "editor" parameter under "actions" in the chart cell code in ObservableHQ, as illustrated below:
{
  ...
  "actions": {
    ...
    "editor": true
  }
}
When the editor option is enabled, users can open the chart in the Vega Editor by clicking on the three-dots menu directly on the chart.
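For reference, vega-embed exposes a few more toggles alongside "editor" in the same "actions" object; a sketch of the full set (all optional) could look like this:

```json
{
  "actions": {
    "export": true,
    "source": true,
    "compiled": false,
    "editor": true
  }
}
```

Here "export" adds image download entries to the menu, "source" shows the underlying specification, "compiled" shows the compiled Vega spec for Vega-Lite charts, and "editor" adds the "Open in Vega Editor" entry.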
References
[1] Mendrik, Adriënne M., and Stephen R. Aylward. "A Framework for Challenge Design: Insight and Deployment Challenges to Address Medical Image Analysis Problems." arXiv preprint arXiv:1911.08531 (2019).