May 2022 Cycle Report

Published 5 July 2022

Shape Up

The RSE team uses a software development method called Shape Up. This is an agile software development process that turns product development around. Traditionally, when faced with a user problem, we would come up with a solution and then estimate how long it would take to implement. Yet people are terrible at estimating, and we are no exception. We ended up with projects that would meander for too long with no clear end point.

The trick in Shape Up is to instead start with an estimate, which then constrains the design: fixed time, variable scope. It works in cycles, and we have chosen to use four-week cycles with a one-week cool-down period. These cycles allow us to prioritize what is most important to the users of the platform as a whole, and the deadlines are especially helpful in focusing development efforts. The blog post you are reading now is the culmination of all the implemented pitches from the May cycle. If you would like to see our ideas for the next cycle, you can find our pitches in the RSE Roadmap repository.

In this blog post, we want to highlight some of the major new features, including:

  • Reviewing the answers of a reader for reader study editors
  • Opening a reader study at a specific display set
  • Providing default answers for questions in a reader study
  • An Annotation Statistics plugin for calculating statistics over an overlay when viewing algorithm results
  • SageMaker support
  • .mp4 and .obj file type support

Please get in touch if you would like to know more about these features, would like a demo, or have some comments, bug reports, or ideas.

Admin viewing mode for reader studies

We added a button to Grand Challenge that allows reader study administrators to review the answers given by a user. You can find the button in the "User Progress" section. A link is provided for each reader, allowing you to navigate through the reader study answers they have given, using CIRRUS in read-only mode.

While viewing, an orange box in CIRRUS will indicate whose answers you are currently viewing and whether that user has answered the case.

Open a reader study at a particular display set

CIRRUS has entry points to open a particular Algorithm Result, Archive Item, Reader Study, or Image with Overlay. Since migrating to display sets, users can no longer quickly jump to a particular image of a reader study, as they used to do via the Image with Overlay entry point.

This month, the RSE team added a new displaySet entry point that allows the user to open a particular display set. The reader study that the display set belongs to is also loaded automatically, so that the user can answer the questions if allow_case_navigation is enabled. You can now find links to these views on the Display Set list and the Reader Study statistics page.

These links will become available to the readers of a reader study as well, allowing them to view all cases and any answers that they have already given. This option was already available via the REST API of Grand Challenge.
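As a sketch of how such a link could be constructed programmatically: the displaySet query parameter name comes from the new entry point described above, but the viewer base URL used here is a placeholder, not the actual Grand Challenge URL scheme.

```python
from urllib.parse import urlencode

def display_set_url(viewer_base_url: str, display_set_pk: str) -> str:
    """Build a viewer link that opens a specific display set.

    The ``displaySet`` parameter is the new entry point; the base URL
    is a hypothetical example and may differ on your instance.
    """
    query = urlencode({"displaySet": display_set_pk})
    return f"{viewer_base_url.rstrip('/')}/?{query}"

# Hypothetical example link for display set 1234:
url = display_set_url("https://grand-challenge.org/viewers/my-viewer", "1234")
```

This mirrors what the Display Set list and Reader Study statistics pages now generate for you.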

Default answers for reader study questions

It is now possible to provide default answers to reader study questions. To use this feature, do the following:

  1. Update the question(s) you want to provide a default answer for, and select an interface that describes the answer.
  2. Provide the defaults. The interfaces you selected in step 1 will be shown on the Edit page of the display sets. For questions with answer type Mask, you need to upload the default answers under Add Cases. After that, and for all other answer types, the default answer can be provided by editing the display sets.

💡For Mask type questions, currently only binary masks are supported.

Annotation statistics plugin

We developed a new plugin for analyzing overlays generated by algorithms: the Annotation Statistics plugin. This plugin allows you to draw a region of interest and will calculate the ratios between the different voxel values in the selected overlay. With Instant enabled, the ratios will be calculated as soon as you stop annotating. If you want to make detailed annotations, you can turn this off and click Calculate when you are done. You can use the Remove option to exclude certain areas. The calculated ratios can be copied to the clipboard. If you would like to use this plugin, you need to enable it in the viewer configuration.
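The kind of calculation the plugin performs can be illustrated with a minimal sketch (ours, not the plugin's actual implementation): given an overlay volume and a boolean region-of-interest mask, count how often each voxel value occurs inside the region and divide by the total.

```python
import numpy as np

def overlay_value_ratios(overlay: np.ndarray, roi: np.ndarray) -> dict:
    """Fraction of each voxel value inside a region of interest.

    overlay: integer label array (e.g. an algorithm's segmentation).
    roi:     boolean mask of the same shape, True inside the drawn region.
    """
    selected = overlay[roi]                       # voxels inside the ROI
    values, counts = np.unique(selected, return_counts=True)
    total = counts.sum()
    return {int(v): c / total for v, c in zip(values, counts)}

# Tiny example: a 2x2 overlay with the whole image selected.
overlay = np.array([[0, 1], [1, 1]])
roi = np.ones_like(overlay, dtype=bool)
ratios = overlay_value_ratios(overlay, roi)  # {0: 0.25, 1: 0.75}
```

Excluding areas with the Remove option corresponds to setting those voxels to False in the mask before the counts are taken.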

Long case texts

Long case texts are now shown directly in the Case Information section. By default, the section is limited to five lines of case text.

One can click the "Show more" button to expand the text and, once expanded, the "Show less" button to collapse it again. The state of the case text carries over from case to case, eliminating the need for extra clicks when moving from one case to another within a reader study.

Algorithm Inference on SageMaker

At the moment, Grand Challenge uses ECS Tasks for algorithm inference. This works well, but it is quite limited in terms of the instance types we have integrated, and there is no support for model pre-loading or model endpoints.

This month we developed a new application, SageMaker Shim, that can convert any container that implements the Grand Challenge Inference API to a container image that implements the SageMaker API.

This application uses FastAPI and Pydantic to convert the Grand Challenge API to the SageMaker API, and is distributed as a static binary using PyInstaller and staticx so that it works without any runtime dependencies in any existing container image.
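The idea behind the shim can be pictured with a small sketch of our own (not the actual SageMaker Shim code): SageMaker inference containers must answer GET /ping with a 200 health check and run a job on POST /invocations, so a shim essentially routes those two paths onto an existing inference callable. Here `infer` stands in for the container's Grand Challenge inference entry point.

```python
import json

def route(method: str, path: str, body: bytes, infer) -> tuple[int, bytes]:
    """Minimal SageMaker-style request router (illustration only).

    SageMaker containers must serve GET /ping (health check) and
    POST /invocations (run inference); ``infer`` is a placeholder for
    the wrapped container's existing inference function.
    """
    if method == "GET" and path == "/ping":
        return 200, b""  # healthy
    if method == "POST" and path == "/invocations":
        result = infer(json.loads(body))  # delegate to the wrapped container
        return 200, json.dumps(result).encode()
    return 404, b""
```

In the real application this routing sits behind FastAPI with Pydantic models validating the payloads, but the contract being bridged is the same.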

This application is automatically added to container images that are uploaded to Grand Challenge using crane, which can efficiently add new layers to a container image in a registry.

We have updated the existing development inference backend to use the SageMaker Batch Inference API and added a new inference backend that uses SageMaker Batch Inference directly.

In the next cycle, we will switch traffic over to the new SageMaker Batch Inference backend and remove the ECS backend. A limitation of the new backend is that inference tasks will be limited to 1 hour of runtime. However, this could be offset by the increased range of instance type options now available to us, including instances with more CPUs or more performant GPUs. Tasks that run on the SageMaker Batch Inference backend will have their runtime metrics gathered automatically, including CPU and memory utilization, and similar GPU metrics if a GPU is used by the job. These metrics are made available on the job logs page.

In the future, this opens up the option of using SageMaker Inference containers directly, so long as they know how to read and write the data the algorithm needs from S3, and opens up options for us to use model endpoints and model pre-loading to improve the efficiency of executing algorithms on Grand Challenge.

Support for .mp4 and .obj files

Grand Challenge has added support for .mp4 videos and .obj mesh files in order to support the Surgical Tool Localization in Endoscopic Videos and 3D Teeth Scan Segmentation and Labeling challenges. These new file types can now be used for new interfaces for Algorithms, Archives, and Reader Studies.