Fine-tuning nnDetection on an unseen local dataset
By: PaWeRe on April 30, 2023, 9:14 p.m.
Hi @PI-CAI team!
Thanks a lot for all your efforts curating this impressive cohort of prostate MRI, and for providing the baseline models via GitHub.
I have tested the semi-supervised nnDetection baseline model (using the pre-trained weights from this GitHub repository: https://github.com/DIAGNijmegen/picai_nndetection_semi_supervised_gc_algorithm/tree/master) on a much smaller local dataset (110 cases) from our institution (Brigham and Women's Hospital, HMS), which I curated in exactly the same way as the PI-CAI training data.
The preliminary results (Metrics(auroc=59.25%, AP=15.44%, 110 cases, 27 lesions)) were not great, so I would like to fine-tune the pre-trained model on a subset of this local data and see whether the results improve / the model generalizes better to unseen test cases.
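For reference, I computed these numbers with picai_eval, roughly as sketched below; the detection-map and annotation directories are placeholders for my local setup:

```python
from pathlib import Path

import SimpleITK as sitk
from picai_eval import evaluate

# placeholder directories for my local detection maps and ground-truth annotations
det_dir = Path("/path/to/local/detection_maps")
lbl_dir = Path("/path/to/local/annotations")

y_det, y_true = [], []
for det_path in sorted(det_dir.glob("*.nii.gz")):
    lbl_path = lbl_dir / det_path.name
    # detection map: soft lesion candidates; annotation: binary lesion mask
    y_det.append(sitk.GetArrayFromImage(sitk.ReadImage(str(det_path))))
    y_true.append(sitk.GetArrayFromImage(sitk.ReadImage(str(lbl_path))))

metrics = evaluate(y_det=y_det, y_true=y_true)
print(metrics)           # e.g. Metrics(auroc=..., AP=..., num_cases=..., num_lesions=...)
print(metrics.auroc, metrics.AP)
```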
I have been struggling to understand how exactly I can fine-tune without simply overwriting the pre-trained weights. Is there an easy / fast way to, e.g., use the pre-trained weights as initialization and then continue training in a 5-fold cross-validation scheme on a subset of the local, unseen data? Did you make any changes to the original nnDetection architecture?
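To make the question concrete, the following is roughly what I have in mind. It is only a plain-PyTorch transfer-learning sketch, not working nnDetection code: `build_network()` is a hypothetical stand-in for however nnDetection / your baseline actually constructs its network, the checkpoint path is a placeholder, and the training loop itself is omitted.

```python
import torch
import torch.nn as nn


def build_network() -> nn.Module:
    # Placeholder standing in for nnDetection's actual network construction.
    return nn.Sequential(nn.Conv3d(3, 32, 3, padding=1), nn.ReLU(), nn.Conv3d(32, 2, 1))


def load_pretrained(model: nn.Module, ckpt_path: str) -> nn.Module:
    """Initialize a freshly built network from a pre-trained checkpoint."""
    checkpoint = torch.load(ckpt_path, map_location="cpu")
    # Lightning-style checkpoints usually nest the weights under "state_dict";
    # a plain state_dict is used as-is.
    state_dict = checkpoint.get("state_dict", checkpoint)
    # strict=False skips layers whose names/shapes do not match instead of erroring,
    # so mismatched heads simply keep their fresh initialization.
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")
    return model


if __name__ == "__main__":
    ckpt_path = "/path/to/semi_supervised_nndetection/model.ckpt"  # placeholder path
    for fold in range(5):  # 5-fold cross-validation on the local subset
        model = build_network()
        model = load_pretrained(model, ckpt_path)
        # ...continue training this fold on the local training split (ideally with a
        # lower learning rate), then evaluate on the held-out local fold...
```

Is something along these lines the intended way to reuse your weights, or do you recommend a different entry point?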
Any pointers would be appreciated!
Best,
Patrick