Swangeese (H. Kan, et al.; China) algorithm trained on PI-CAI: Private and Public Training Dataset
About
- Coronal T2 Prostate MRI (Coronal T2 MRI of the Prostate)
- Transverse T2 Prostate MRI (Transverse T2 MRI of the Prostate)
- Sagittal T2 Prostate MRI (Sagittal T2 MRI of the Prostate)
- Transverse HBV Prostate MRI (Transverse High B-Value Prostate MRI)
- Transverse ADC Prostate MRI (Transverse Apparent Diffusion Coefficient Prostate MRI)
- Clinical Information Prostate MRI (Clinical information to support clinically significant prostate cancer detection in prostate MRI. Provided information: patient age in years at the time of examination (patient_age), PSA level in ng/mL as reported (PSA_report or PSA), PSA density in ng/mL^2 as reported (PSAD_report), prostate volume as reported (prostate_volume_report), prostate volume derived from automatic whole-gland segmentation (prostate_volume_automatic), scanner manufacturer (scanner_manufacturer), scanner model name (scanner_model_name), diffusion b-value of (calculated) high b-value diffusion map (diffusion_high_bvalue), Malignant Neoplasm Histotype (histology_type), Prostate Imaging-Reporting and Data System (PIRADS), Neural invasion (neural_invasion, yes/no), Vascular invasion (vascular_invasion, yes/no), Lymphatic invasion (lymphatic_invasion, yes/no). Values acquired from radiology reports will be missing, if not reported.)
- Case-level Cancer Likelihood Prostate MRI (Case-level likelihood of harboring clinically significant prostate cancer, in range [0,1].)
- Transverse Cancer Detection Map Prostate MRI (Single-class, detection map of clinically significant prostate cancer lesions in 3D, where each voxel represents a floating point in range [0,1].)
Challenge Performance
| Date | Challenge | Phase | Rank |
| --- | --- | --- | --- |
| Sept. 7, 2023 | PI-CAI | Closed Testing Phase - Testing (Final Ranking) | 3 |
| June 13, 2024 | PI-CAI | Closed Testing Phase - Tuning | 1 |
Model Facts
Summary
This algorithm represents the submission from the Swangeese team (H. Kan, et al.; China) to the PI-CAI challenge [1]. We independently retrained this algorithm using the PI-CAI Private and Public Training Dataset (9107 cases from 8028 patients, comprising a sequestered dataset of 7607 cases and the public dataset of 1500 cases). The algorithm performs two tasks: it localizes and classifies each clinically significant prostate cancer lesion (if any) with a 0–100% likelihood score, and it classifies the overall case with a 0–100% likelihood score for clinically significant prostate cancer diagnosis.
To this end, this model uses biparametric MRI data. Specifically, this algorithm uses the axial T2-weighted MRI scan, the axial apparent diffusion coefficient map, and the calculated or acquired axial high b-value scan.
- A. Saha, J. S. Bosma, J. J. Twilt, B. van Ginneken, A. Bjartell, A. R. Padhani, D. Bonekamp, G. Villeirs, G. Salomon, G. Giannarini, J. Kalpathy-Cramer, J. Barentsz, K. H. Maier-Hein, M. Rusu, O. Rouvière, R. van den Bergh, V. Panebianco, V. Kasivisvanathan, N. A. Obuchowski, D. Yakar, M. Elschot, J. Veltman, J. J. Fütterer, M. de Rooij, H. Huisman, and the PI-CAI consortium. “Artificial Intelligence and Radiologists in Prostate Cancer Detection on MRI (PI-CAI): An International, Paired, Non-Inferiority, Confirmatory Study”. The Lancet Oncology 2024; 25(7): 879-887. doi:10.1016/S1470-2045(24)00220-1
Mechanism
Team: Swangeese
Hongyu Kan (1), Liang Qiao (1), Jun Shi (1), Hong An (1)
(1) Department of Computer Science and Technology, University of Science and Technology of China, Hefei, China
Contact: honeyk@mail.ustc.edu.cn, ql1an9@mail.ustc.edu.cn, shijun18@mail.ustc.edu.cn, han@ustc.edu.cn.
Code availability: github.com/Yukiya-Umimi/ITUNet-for-PICAI-2022-Challenge
Trained model availability: grand-challenge.org/algorithms/pi-cai-pubpriv-swangeese/
Abstract: This article summarizes the methods our team used in the PI-CAI 2022 Challenge, a competition in which networks are trained to detect prostate cancer regions from MRI data. Our team uses only two-dimensional neural networks, including an ITUNet [1] converted to 2D operations.
Data preparation: We used the axial T2W scan, the axial high b-value DWI scan, and the axial ADC map as three input modalities to train all deep learning networks. We used the preprocessing tools provided by Saha et al. [2] to preprocess all the data. Preprocessing consists of resampling and center cropping of all prostate MRI images. During resampling, the voxel spacing of all images is unified to (3.0, 0.5, 0.5) mm, so that adjacent voxels carry a consistent physical meaning across images. Since all tumor regions were observed to lie roughly in the center of the image, a center crop is applied to every image; after calculating the spatial distribution of the tumor regions, the crop size was fixed to (24, 384, 384) voxels.
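For reference, the sketch below shows one way to implement this resampling and center-cropping step with SimpleITK. It is a minimal approximation, not the exact preprocessing tool from Saha et al. [2]; the interpolator choice and the zero-padding of undersized volumes are assumptions.

```python
import numpy as np
import SimpleITK as sitk

def resample(image: sitk.Image, spacing=(0.5, 0.5, 3.0), is_label=False) -> sitk.Image:
    """Resample to the target spacing; SimpleITK uses (x, y, z) order,
    so (0.5, 0.5, 3.0) mm corresponds to the (3.0, 0.5, 0.5) mm above."""
    new_size = [
        int(round(sz * old / new))
        for sz, old, new in zip(image.GetSize(), image.GetSpacing(), spacing)
    ]
    interp = sitk.sitkNearestNeighbor if is_label else sitk.sitkBSpline
    return sitk.Resample(image, new_size, sitk.Transform(), interp,
                         image.GetOrigin(), spacing, image.GetDirection(),
                         0.0, image.GetPixelID())

def center_crop(volume: np.ndarray, shape=(24, 384, 384)) -> np.ndarray:
    """Center-crop a (z, y, x) array to `shape`, zero-padding if it is smaller."""
    out = np.zeros(shape, dtype=volume.dtype)
    src, dst = [], []
    for have, want in zip(volume.shape, shape):
        if have >= want:
            start = (have - want) // 2
            src.append(slice(start, start + want)); dst.append(slice(0, want))
        else:
            start = (want - have) // 2
            src.append(slice(0, have)); dst.append(slice(start, start + have))
    out[tuple(dst)] = volume[tuple(src)]
    return out

# Example: preprocess one T2W scan (the file name is a placeholder).
t2w = center_crop(sitk.GetArrayFromImage(resample(sitk.ReadImage("t2w.mha"))))
```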
Training setup: We first used a segmentation network and a classification network to generate pseudo labels for the unlabeled cases in the training dataset. For the semantic segmentation network, based on our team's previous work, we selected the architecture that we consider to have the best segmentation performance: ITUNet [1]. ITUNet was originally designed for organ segmentation in clinical medical images; because of its good performance on organ segmentation tasks, particularly its accuracy on small and variable organs, we believe it can also handle tumor segmentation. We converted the original 3D network into a 2D network by replacing the 3D convolution and pooling layers with their 2D counterparts, without any other modifications. For training, we use pixel-level focal loss as the loss function. For the classification network, we selected the 2D EfficientNet-b5 model [3]. In this stage, both the segmentation and classification models were trained on the annotated cases with 5-fold cross-validation. Using all 10 instances of these trained models (5 folds x 2 models), pseudo labels were generated for the unlabeled cases; for each case, at most the two largest connected predicted regions were kept as the pseudo label. We then used the labeled and pseudo-labeled data for semi-supervised training of the detection network, another 2D ITUNet, again with 5-fold cross-validation. The ensemble of the 5 cross-validation models obtained by semi-supervised learning is used as the final cancer detection model. We use the processing method provided by Bosma et al. [4] to generate cancer detection maps from the outputs of the detection network, and we take the highest likelihood value within the predicted regions as the case-level suspicion score.
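The following sketch illustrates the pseudo-label and case-level score logic described above, using scipy.ndimage connected components as a stand-in for the actual post-processing tooling from Bosma et al. [4]; the 0.5 binarization threshold and the component-size criterion are assumptions.

```python
import numpy as np
from scipy import ndimage

def pseudo_label(prob_map: np.ndarray, threshold: float = 0.5,
                 max_lesions: int = 2) -> np.ndarray:
    """Binarize a (z, y, x) probability map and keep at most the
    `max_lesions` largest connected components as the pseudo label."""
    components, num = ndimage.label(prob_map >= threshold)
    if num == 0:
        return np.zeros_like(prob_map, dtype=np.uint8)
    sizes = np.bincount(components.ravel())[1:]        # voxels per component
    keep = np.argsort(sizes)[::-1][:max_lesions] + 1   # 1-based component ids
    return np.isin(components, keep).astype(np.uint8)

def case_level_score(prob_map: np.ndarray, lesion_mask: np.ndarray) -> float:
    """Case-level suspicion = highest likelihood inside the predicted regions."""
    return float(prob_map[lesion_mask > 0].max()) if lesion_mask.any() else 0.0
```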
Model parameters (a sketch for reproducing such counts follows this list):
- Total number of parameters for ITUNet (for pseudo label generation): 18,276,330 x 5
- Total number of parameters for EfficientNet-b5 (for pseudo label generation): 28,346,931 x 5
- Total number of parameters for ITUNet (for cancer detection): 18,276,847 x 5
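As referenced above, totals like these are standard PyTorch parameter counts; a generic way to compute them is sketched below (`model` is a placeholder for any of the instantiated networks, not a name from the team's code).

```python
import torch

def count_parameters(model: torch.nn.Module) -> int:
    """Total number of parameters (trainable and frozen) in a network."""
    return sum(p.numel() for p in model.parameters())
```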
Inference setup: During inference, the images are pre-processed and the output predictions are post-processed using settings identical to those used during training of the ITUNet detection network.
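To illustrate how the 5-fold ensemble could be applied at inference time, here is a rough sketch that averages the fold models' voxel-wise cancer probabilities; the slice-as-batch convention and the two-class softmax output are assumptions, not details taken from the team's repository.

```python
import torch

@torch.no_grad()
def ensemble_predict(models, volume: torch.Tensor) -> torch.Tensor:
    """Average voxel-wise cancer probabilities over the five fold models.

    `volume` is a (D, 3, H, W) tensor: D axial slices with stacked
    T2W/HBV/ADC channels, preprocessed exactly as during training.
    The 2D networks treat the slice axis as the batch dimension."""
    probs = []
    for model in models:
        model.eval()
        logits = model(volume)                            # (D, C, H, W)
        probs.append(torch.softmax(logits, dim=1)[:, 1])  # cancer class
    return torch.stack(probs).mean(dim=0)                 # (D, H, W) map
```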
Acknowledgements: The team would like to thank Saha et al. [2] for their open source tools and their instructions and help in submitting the results.
References:
1. H. Kan, J. Shi, M. Zhao, Z. Wang, W. Han, H. An, Z. Wang, S. Wang, "ITUnet: Integration Of Transformers And Unet For Organs-At-Risk Segmentation," 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Glasgow, Scotland, United Kingdom, 2022, pp. 2123-2127. doi:10.1109/EMBC48229.2022.9871945
2. A. Saha, J. S. Bosma, J. J. Twilt, B. van Ginneken, A. Bjartell, A. R. Padhani, D. Bonekamp, G. Villeirs, G. Salomon, G. Giannarini, J. Kalpathy-Cramer, J. Barentsz, K. H. Maier-Hein, M. Rusu, O. Rouvière, R. van den Bergh, V. Panebianco, V. Kasivisvanathan, N. A. Obuchowski, D. Yakar, M. Elschot, J. Veltman, J. J. Fütterer, M. de Rooij, H. Huisman, and the PI-CAI consortium, "Artificial Intelligence and Radiologists in Prostate Cancer Detection on MRI (PI-CAI): An International, Paired, Non-Inferiority, Confirmatory Study," The Lancet Oncology 2024; 25(7): 879-887. doi:10.1016/S1470-2045(24)00220-1
3. M. Tan and Q. V. Le, "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks," Proceedings of the 36th International Conference on Machine Learning, PMLR 97:6105-6114, 2019. proceedings.mlr.press/v97/tan19a.html
4. J. S. Bosma, A. Saha, M. Hosseinzadeh, I. Slootweg, M. de Rooij, and H. Huisman, "Semisupervised Learning with Report-Guided Pseudo Labels for Deep Learning-Based Prostate Cancer Detection Using Biparametric MRI," Radiology: Artificial Intelligence, p. e230031, 2023. doi:10.1148/ryai.230031
Validation and Performance
This algorithm was evaluated on the PI-CAI Testing Cohort. This hidden testing cohort included prostate MRI examinations from 1000 patients across four centers, including 197 cases from an external unseen center. Histopathology and a follow-up period of at least 3 years were used to establish the reference standard. See the PI-CAI paper for more information [1].
Patient-level diagnosis performance is evaluated using the Area Under the Receiver Operating Characteristic curve (AUROC) metric. Lesion-level detection performance is evaluated using the Average Precision (AP) metric. The overall score used to rank each AI algorithm is the average of both task-specific metrics: Overall Ranking Score = (AP + AUROC) / 2.
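As a small numeric illustration of the ranking formula (the labels and scores below are made up, and the official lesion-level AP is computed with the challenge's own evaluation tooling rather than this case-level stand-in):

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

# Hypothetical case-level labels and AI likelihoods, for illustration only.
y_true = np.array([0, 1, 0, 1, 1, 0])
y_score = np.array([0.10, 0.85, 0.30, 0.70, 0.95, 0.20])

auroc = roc_auc_score(y_true, y_score)
ap = average_precision_score(y_true, y_score)  # stand-in for lesion-level AP
overall = (ap + auroc) / 2                     # Overall Ranking Score
print(f"AUROC={auroc:.3f}  AP={ap:.3f}  Overall={overall:.3f}")
```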
This algorithm achieved an AUROC of 0.904, an AP of 0.686, and an Overall Ranking Score of 0.795.
The Free-Response Receiver Operating Characteristic (FROC) curve is used for secondary analysis of AI detections (as recommended in Penzkofer et al., 2022). We highlight performance on the FROC curve using the SensX metric. SensX refers to the sensitivity of a given AI system at detecting clinically significant prostate cancer (i.e., Gleason grade group ≥ 2 lesions) on MRI, given that it generates the same number of false positives per examination as the PI-RADS ≥ X operating point of radiologists. Here, by radiologists, we refer to the radiology readings that were historically made for these cases during multidisciplinary routine practice. Across the PI-CAI testing leaderboards (Open Development Phase - Testing Leaderboard, Closed Testing Phase - Testing Leaderboard), SensX is computed at thresholds that are specific to the testing cohort (i.e., depending on the radiology readings and the set of cases).
This algorithm achieved a Sens3 of 0.784, a Sens4 of 0.734, and a Sens5 of 0.548.
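A minimal sketch of how a SensX-style value can be read off a FROC curve, assuming the curve is available as arrays of false positives per examination and sensitivities (linear interpolation is an assumption, not necessarily the challenge's exact implementation):

```python
import numpy as np

def sensitivity_at_matched_fp(fp_per_exam: np.ndarray, sensitivity: np.ndarray,
                              radiologist_fp_rate: float) -> float:
    """Interpolate the AI FROC curve at the radiologists' PI-RADS >= X
    false-positive rate to obtain a SensX-style sensitivity."""
    order = np.argsort(fp_per_exam)
    return float(np.interp(radiologist_fp_rate,
                           fp_per_exam[order], sensitivity[order]))
```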
Figure. Diagnostic performance of the top five AI algorithms (N. Debs et al. [Guerbet Research, France], Y. Yuan et al. [University of Sydney, Australia], H. Kan et al. [University of Science and Technology of China, China], C. Li et al. [Stanford University, United States] and A. Karagöz et al. [Istanbul Technical University, Turkey]), and the AI system ensembled from all five methods, across the 400 cases used in the reader study (left column) and the full hidden testing cohort of 1000 cases (right column). Case-level diagnosis performance was evaluated using receiver operating characteristic curves and the AUROC metric (top row), while lesion-level detection performance was evaluated using precision-recall curves and the AP metric (middle row). Secondary analysis of lesion-level detection performance was conducted using FROC curves (bottom row).
- A. Saha, J. S. Bosma, J. J. Twilt, B. van Ginneken, A. Bjartell, A. R. Padhani, D. Bonekamp, G. Villeirs, G. Salomon, G. Giannarini, J. Kalpathy-Cramer, J. Barentsz, K. H. Maier-Hein, M. Rusu, O. Rouvière, R. van den Bergh, V. Panebianco, V. Kasivisvanathan, N. A. Obuchowski, D. Yakar, M. Elschot, J. Veltman, J. J. Fütterer, M. de Rooij, H. Huisman, and the PI-CAI consortium. “Artificial Intelligence and Radiologists in Prostate Cancer Detection on MRI (PI-CAI): An International, Paired, Non-Inferiority, Confirmatory Study”. The Lancet Oncology 2024; 25(7): 879-887. doi:10.1016/S1470-2045(24)00220-1
Uses and Directions
- For research use only. This algorithm is intended to be used only on biparametric prostate MRI examinations of patients with raised PSA levels or clinical suspicion of prostate cancer. This algorithm should not be used in different patient demographics.
- Benefits: AI-based risk stratification for clinically significant prostate cancer using prostate MRI can potentially aid the diagnostic pathway of prostate cancer, reducing over-treatment and unnecessary biopsies.
- Target population: This algorithm was trained on patients with raised PSA levels or clinical suspicion of prostate cancer, without prior treatment (e.g. radiotherapy, transurethral resection of the prostate (TURP), transurethral ultrasound ablation (TULSA), cryoablation, etc.), without prior positive biopsies, without artifacts, with reasonably well-aligned sequences, and with the prostate gland localized within a volume of 72 x 192 x 192 mm around the center coordinate.
- MRI scanner: This algorithm was trained and evaluated exclusively on prostate biparametric MRI scans acquired with various commercial 1.5 Tesla or 3 Tesla scanners using surface coils from Siemens Healthineers (Erlangen, Germany) or Philips Medical Systems (Eindhoven, the Netherlands). It does not account for vendor-neutral properties or domain adaptation, and in turn, its compatibility with scans from other MRI scanners or with scans acquired using endorectal coils is unknown.
- Sequence alignment and position of the prostate: While the input images (T2W, HBV, ADC) can be of different spatial resolutions, the algorithm assumes that they are co-registered or aligned reasonably well.
- General use: This model is intended to be used by radiologists for predicting clinically significant prostate cancer in biparametric MRI examinations. The model is not a diagnostic for cancer and is not meant to guide or drive clinical care. This model is intended to complement other pieces of patient information in order to determine the appropriate follow-up recommendation.
- Appropriate decision support: The model identifies lesion X as being at high risk of malignancy. The referring radiologist reviews the prediction along with other clinical information and decides the appropriate follow-up recommendation for the patient.
- Before using this model: Test the model retrospectively and prospectively on a diagnostic cohort that reflects the target population that the model will be used upon to confirm the validity of the model within a local setting.
- Safety and efficacy evaluation: To be determined in a clinical validation study.
Warnings
- Risks: Even if used appropriately, clinicians using this model can misdiagnose cancer. Delays in cancer diagnosis can lead to metastasis and mortality. Patients who are incorrectly treated for cancer can be exposed to risks associated with unnecessary interventions and treatment costs related to follow-ups.
- Inappropriate Settings: This model was not trained on MRI examinations of patients with prior treatment (e.g. radiotherapy, transurethral resection of the prostate (TURP), transurethral ultrasound ablation (TULSA), cryoablation, etc.), prior positive biopsies, artifacts or misalignment between sequences. Hence it is susceptible to faulty predictions and unintended behaviour when presented with such cases. Do not use the model in the clinic without further evaluation.
- Clinical rationale: The model is not interpretable and does not provide a rationale for high risk scores. Clinical end users are expected to place the model output in context with other clinical information to make the final determination of diagnosis.
- Inappropriate decision support: This model may not be accurate outside of the target population. This model is not designed to guide clinical diagnosis and treatment for prostate cancer.
- Generalizability: This model was developed with prostate MRI examinations from Radboud University Medical Center, Ziekenhuisgroep Twente, and Prostaat Centrum Noord-Nederland. Do not use this model in an external setting without further evaluation.
- Discontinue use if: Clinical staff raise concerns about the utility of the model for the intended use case or large, systematic changes occur at the data level that necessitate re-training of the model.
Common Error Messages
Information on this algorithm has been provided by the Algorithm Editors, following the Model Facts labels guidelines from Sendak, M.P., Gao, M., Brajer, N. et al. Presenting machine learning model information to clinical end users with model facts labels. npj Digit. Med. 3, 41 (2020). doi:10.1038/s41746-020-0253-3