Fine-tuning nnDetection on unseen local dataset

  By: PaWeRe on April 30, 2023, 9:14 p.m.

Hi @PI-CAI team!

Thanks a lot for all your efforts curating this impressive cohort of prostate MRI scans, and for providing the baseline models on GitHub.

I have tested the semi-supervised nnDetection baseline model (using the pre-trained weights from this GitHub repository: https://github.com/DIAGNijmegen/picai_nndetection_semi_supervised_gc_algorithm/tree/master) on a much smaller local dataset (110 cases) from our institution (Brigham and Women's Hospital, HMS), which I curated in exactly the same way as the PI-CAI training data.

The preliminary results (Metrics(auroc=59.25%, AP=15.44%, 110 cases, 27 lesions)) were not great, so I would like to fine-tune the pre-trained model on a subset of the data and see whether performance improves, i.e. whether the model generalizes better to unseen test cases.

I have been struggling to understand how to fine-tune without overwriting the pre-trained weights. Is there an easy/fast way to, e.g., use the pre-trained weights as initialization and then continue training in a 5-fold cross-validation scheme on a subset of the local unseen data? Did you make any changes to the original nnDetection architecture?
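To make the idea concrete, here is roughly the workflow I have in mind. This is a minimal sketch, not nnDetection's actual API: the checkpoint is represented as a plain dict, file names are placeholders, and the training loop is omitted. The point is only that each fold starts from a fresh copy of the pre-trained weights and saves to a new path, so the original checkpoint is never touched.

```python
import copy
import json
import os
import tempfile

# Placeholder for the released checkpoint; real code would load the
# actual nnDetection weights (e.g. with torch.load).
pretrained = {"conv1.weight": [0.1, 0.2], "conv1.bias": [0.0]}

# 5-fold cross-validation over the 110 local cases (interleaved split).
case_ids = [f"case_{i:03d}" for i in range(110)]
folds = [case_ids[i::5] for i in range(5)]

out_dir = tempfile.mkdtemp()
for fold, val_cases in enumerate(folds):
    train_cases = [c for c in case_ids if c not in val_cases]
    # Fresh copy of the pre-trained weights per fold; `pretrained` stays intact.
    weights = copy.deepcopy(pretrained)
    # ... fine-tune `weights` on train_cases here (training loop omitted) ...
    # Save under a NEW path per fold, never over the original checkpoint.
    with open(os.path.join(out_dir, f"fold{fold}.json"), "w") as f:
        json.dump(weights, f)
```

Is this the right mental model for continuing training from your released weights, or does nnDetection expect a different mechanism?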

Any pointers would be appreciated!

Best,

Patrick

 Last edited by: PaWeRe on Aug. 15, 2023, 12:57 p.m., edited 2 times in total.

Re: Fine-tuning nnDetection on unseen local dataset  

  By: joeran.bosma on May 4, 2023, 11:26 a.m.

Hi Patrick,

That is indeed very poor performance on your dataset!

Did you verify the models are set up correctly by reproducing the cross-validation results (as discussed earlier)?

What kind of scanner was used in your hospital? The poor performance could be caused by a different scanner manufacturer (e.g. GE).

Are the axial T2-weighted, ADC and high b-value scans in your dataset registered? Medium/large shifts between sequences are likely to cause poor results with the PI-CAI baselines, since the training dataset is reasonably well registered, and no registration tool is built into the pipeline.
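As a quick sanity check for such shifts, you could compare the spatial metadata of the three sequences per case. The sketch below uses made-up placeholder values; in practice you would read origin/spacing from the NIfTI or DICOM headers (e.g. with SimpleITK or nibabel), and note that matching headers still do not guarantee anatomical alignment if the patient moved between sequences.

```python
# Hypothetical per-sequence spatial metadata (origin and spacing in mm);
# in practice, read these from the image headers.
sequences = {
    "t2w": {"origin": (-100.0, -100.0, -30.0), "spacing": (0.5, 0.5, 3.0)},
    "adc": {"origin": (-100.0, -100.0, -30.0), "spacing": (2.0, 2.0, 3.0)},
    "hbv": {"origin": (-97.0, -100.0, -30.0), "spacing": (2.0, 2.0, 3.0)},
}

def max_origin_shift(seqs, ref="t2w"):
    """Largest per-axis origin offset (mm) of any sequence vs. the reference."""
    ref_origin = seqs[ref]["origin"]
    return max(
        abs(o - r)
        for name, meta in seqs.items() if name != ref
        for o, r in zip(meta["origin"], ref_origin)
    )

shift = max_origin_shift(sequences)
if shift > 1.0:  # tolerance in mm; this threshold is a guess, not a PI-CAI recommendation
    print(f"warning: sequences misaligned by up to {shift:.1f} mm")
```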

We don't have any experience with fine-tuning nnDetection models, so I cannot advise you there. We did not change the nnDetection architecture in any way! For fine-tuning nnDetection, I would advise searching the nnDetection GitHub repository for resources and/or opening an issue there (https://github.com/MIC-DKFZ/nnDetection).

Good luck!

Kind regards, Joeran