Universal Lesion Segmentation [ULS23 Baseline]

About

Creator:

MJJdG

Contact email:
Version:
db127463-072c-4b88-b831-953821d4e320
Last updated:
March 4, 2024, 1:08 p.m.
Inputs:
  • Stacked 3D Volumetric Spacings  (JSON describing the 3D volumetric spacings needed to accurately reduce a 4D stack to multiple 3D volumes. Example format: [[1.0, 1.0, 1.0], [3.4, 3.4, 6.5]] describes the spacings of two 3D volumes at t=0 and t=1, i.e. in the 4th dimension.)
  • Stacked 3D CT volumes of lesions  (3D CT volumes with universal lesions, stacked in the t-dimension. Includes a padding of intensity -1.)
Outputs:
  • CT Universal Lesion Binary Segmentation 

Challenge Performance

Date | Challenge | Phase | Rank
March 4, 2024 | ULS23 | Development Phase Leaderboard | 10
April 8, 2024 | ULS23 | Test Phase Leaderboard | 4

Model Facts

Summary

This is the baseline algorithm for the ULS23 Challenge. It can be used to produce 3D segmentations of the various lesion types present in the thorax-abdomen area of CT scans. The model was pre-trained on pseudo masks generated from partially-annotated data using the GrabCut algorithm and subsequently fine-tuned on fully-annotated lesion data.
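
To illustrate the pseudo-mask generation step, here is a minimal 2D sketch using OpenCV's GrabCut, assuming the partial annotation provides a bounding box around the lesion on an axial slice. The HU window, box format, and function name are illustrative assumptions, not the challenge's actual pre-processing code.

```python
import numpy as np
import cv2

def grabcut_pseudo_mask(ct_slice, lesion_box, hu_window=(-150, 250), n_iter=5):
    """Generate a binary pseudo mask for one axial CT slice (illustrative sketch).

    ct_slice   : 2D numpy array of Hounsfield units.
    lesion_box : (x, y, width, height) bounding box around the lesion,
                 e.g. derived from a partial (long/short-axis) annotation.
    """
    lo, hi = hu_window
    # Window the HU values and convert to the 8-bit 3-channel image GrabCut expects.
    windowed = np.clip(ct_slice, lo, hi)
    img8 = ((windowed - lo) / (hi - lo) * 255).astype(np.uint8)
    img_rgb = np.stack([img8] * 3, axis=-1)

    mask = np.zeros(ct_slice.shape, np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)

    # Initialise GrabCut with the lesion bounding box; pixels outside it are background.
    cv2.grabCut(img_rgb, mask, lesion_box, bgd_model, fgd_model,
                n_iter, cv2.GC_INIT_WITH_RECT)

    # Keep definite and probable foreground as the lesion pseudo mask.
    return ((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)).astype(np.uint8)
```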

Mechanism

This model was trained using the nnU-Net v2 framework. It consists of a 3D full-resolution residual-encoder U-Net with 6 stages, feature sizes ranging from 32 to 320, and a batch size of 2 VOIs. The full VOI is used as input to the model without patching or resampling.
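
For orientation, the description above roughly corresponds to a plans-style configuration like the sketch below. The per-stage feature counts, kernel sizes, and strides are assumptions based on common nnU-Net defaults, not values copied from the challenge's actual plans file.

```python
# Hypothetical nnU-Net v2 "plans"-style excerpt matching the description above;
# kernels and strides are assumed defaults, not the actual ULS23 plans file.
uls23_baseline_config = {
    "architecture": "ResidualEncoderUNet (3d_fullres)",
    "patch_size": [128, 256, 256],        # the full VOI, no patching or resampling
    "batch_size": 2,                       # two VOIs per training batch
    "n_stages": 6,
    "features_per_stage": [32, 64, 128, 256, 320, 320],  # grows from 32, capped at 320
    "conv_kernel_sizes": [[3, 3, 3]] * 6,
    "pool_strides": [[1, 1, 1]] + [[2, 2, 2]] * 5,
    "num_output_classes": 2,               # background vs. lesion
}
```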

Inputs:

  • The algorithm takes as input either a single CT volume of interest (VOI) of 128 × 256 × 256 voxels (z, x, y), or a stack of n VOIs concatenated in the z-dimension, i.e. (128·n) × 256 × 256 voxels. The center of each VOI is expected to contain the lesion to be segmented by the algorithm.
  • Additionally, a .json file containing the spacings of each of the n VOIs must be provided in the following format: [[x-spacing (e.g. 0.74), y-spacing (e.g. 0.74), z-spacing (e.g. 3.0)], [..., ..., ...]] (see the sketch directly below this list).
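
As a concrete illustration of this input format, the sketch below stacks pre-cropped 128 × 256 × 256 VOIs along the z-axis and writes the accompanying spacings JSON. The file names and the use of SimpleITK are assumptions for illustration, not requirements of the algorithm.

```python
import json
import numpy as np
import SimpleITK as sitk

# Hypothetical list of pre-cropped VOIs, each already 128 x 256 x 256 voxels.
voi_paths = ["voi_000.mha", "voi_001.mha"]

volumes, spacings = [], []
for path in voi_paths:
    img = sitk.ReadImage(path)
    volumes.append(sitk.GetArrayFromImage(img))   # numpy array ordered (z, y, x)
    spacings.append(list(img.GetSpacing()))       # SimpleITK spacing is (x, y, z)

# Concatenate the n VOIs in the z-dimension: shape becomes (128 * n, 256, 256).
stacked = np.concatenate(volumes, axis=0)
sitk.WriteImage(sitk.GetImageFromArray(stacked), "stacked_vois.mha",
                useCompression=True)

# Spacings JSON in the expected format: one [x, y, z] triplet per VOI.
with open("stacked_spacings.json", "w") as f:
    json.dump(spacings, f)
```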

Output:

  • The algorithm outputs a binary segmentation mask (0 = background, 1 = lesion). If multiple stacked volumes are provided, the output consists of masks stacked in the same format as the input (see the sketch below).
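
When a stacked input is used, the stacked binary output can be split back into per-lesion masks with a few lines of numpy; the output file name below is a placeholder.

```python
import numpy as np
import SimpleITK as sitk

VOI_DEPTH = 128  # each VOI contributes 128 slices in the z-dimension

# Read the stacked binary segmentation produced by the algorithm (placeholder name).
mask = sitk.GetArrayFromImage(sitk.ReadImage("stacked_segmentation.mha"))

n_vois = mask.shape[0] // VOI_DEPTH
per_lesion_masks = [mask[i * VOI_DEPTH:(i + 1) * VOI_DEPTH] for i in range(n_vois)]

for i, m in enumerate(per_lesion_masks):
    print(f"VOI {i}: {int(m.sum())} lesion voxels")
```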

The source code is available in the challenge repository on GitHub.

Validation and Performance

This model was evaluated on 10% of each fully-annotated dataset, split at the patient level. The scores are aggregated per lesion type.

Lesion Type | Dice
Kidney | 0.77 ± 0.21
Lung | 0.71 ± 0.14
Lymph Node | 0.70 ± 0.18
Bone | 0.68 ± 0.24
Liver | 0.65 ± 0.17
Pancreas | 0.64 ± 0.19
Colon | 0.55 ± 0.21
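
For reference, the per-lesion Dice score can be computed as in the generic sketch below and then averaged (mean ± standard deviation) per lesion type; this is not the challenge's official evaluation code.

```python
import numpy as np

def dice_score(pred, gt, eps=1e-8):
    """Dice coefficient between two binary 3D masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

# Aggregation per lesion type (hypothetical variable names):
# kidney_scores = [dice_score(p, g) for p, g in kidney_cases]
# print(np.mean(kidney_scores), np.std(kidney_scores))
```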

Uses and Directions

This algorithm was developed for research purposes only.
  • The intended use of this model is to automatically segment, in 3D, a lesion selected by either a human reader or a detection model. As such, the algorithm always expects the lesion to be segmented to lie at the center of the VOI.
  • This model was trained on data from multiple institutes, covering a variety of pixel spacings, scanners, and scanning protocols.
  • For optimal performance, when padding the VOI to the required size, use the minimum intensity - 1 (see the sketch below this list).
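
A minimal sketch of how a VOI of the required size could be cropped around a selected lesion, padding out-of-scan voxels with the minimum intensity - 1 as recommended above. The function name and the (z, x, y) conventions are assumptions for illustration.

```python
import numpy as np

TARGET = (128, 256, 256)  # required VOI size in voxels (z, x, y)

def extract_voi(volume, center, target=TARGET):
    """Crop a VOI of the required size, centred on a selected lesion.

    volume : 3D numpy array (z, x, y) of CT intensities.
    center : (z, x, y) voxel index of the lesion picked by a reader or detector.
    Voxels falling outside the scan are padded with the minimum intensity - 1.
    """
    pad_value = volume.min() - 1
    voi = np.full(target, pad_value, dtype=volume.dtype)

    # Per-axis source (scan) and destination (VOI) index ranges.
    starts = [c - t // 2 for c, t in zip(center, target)]
    src_lo = [max(0, s) for s in starts]
    src_hi = [min(dim, s + t) for dim, s, t in zip(volume.shape, starts, target)]
    dst_lo = [lo - s for lo, s in zip(src_lo, starts)]
    dst_hi = [d + (hi - lo) for d, lo, hi in zip(dst_lo, src_lo, src_hi)]

    voi[dst_lo[0]:dst_hi[0], dst_lo[1]:dst_hi[1], dst_lo[2]:dst_hi[2]] = \
        volume[src_lo[0]:src_hi[0], src_lo[1]:src_hi[1], src_lo[2]:src_hi[2]]
    return voi
```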

Warnings

  • Grand Challenge currently does not handle uncompressed .mha files larger than 4 GB. To prevent your jobs from timing out during image import, do not batch more than 100 VOIs per job.
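
A rough back-of-the-envelope check of the uncompressed stack size, assuming 32-bit voxels (the actual voxel data type and metadata overhead may differ):

```python
# Approximate uncompressed size of a stacked input, assuming float32 voxels.
bytes_per_voxel = 4                    # float32 assumption
voi_voxels = 128 * 256 * 256           # one 128 x 256 x 256 VOI
n_vois = 100
size_gib = n_vois * voi_voxels * bytes_per_voxel / 1024**3
print(f"{size_gib:.2f} GiB for {n_vois} VOIs")  # about 3.1 GiB, just under the 4 GB limit
```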

Common Error Messages

Please contact the editors if you receive error messages while using the algorithm.

Information on this algorithm has been provided by the Algorithm Editors, following the Model Facts label guidelines from Sendak, M.P., Gao, M., Brajer, N. et al. Presenting machine learning model information to clinical end users with model facts labels. npj Digit. Med. 3, 41 (2020). https://doi.org/10.1038/s41746-020-0253-3