Output detection maps

  By: sakina on Aug. 22, 2022, 3:25 p.m.

Hi, I ran the U-Net baseline models and have all the weights in "workdir/results". For evaluation, I need the output detection maps. Are they not saved at the end of training? Is there another step I need to follow for inference?

Re: Output detection maps  

  By: anindo on Aug. 22, 2022, 11:01 p.m.

Hi Sakina,

For the baseline U-Net, at the end of training you should have not only the trained model weights stored in workdir/results, but also an overview of the model's validation performance for that fold (saved as an Excel sheet, i.e. an .xlsx file). Assuming you haven't changed the default criteria for storing model weights during training, the final validation performance of your trained model is the maximum of the valid_ranking column in that .xlsx file. If you have completed all 5 folds of training, you will have 5 such .xlsx files (one per fold), which in turn can be used to estimate your overall 5-fold cross-validation performance on the Public Training and Development Dataset.
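As a rough sketch of that aggregation step: each fold's final score is the maximum of its valid_ranking column, and the cross-validation estimate is the mean over folds. The tables below are built in memory for illustration; in practice you would load each sheet with pd.read_excel from your own workdir/results paths (the column name valid_ranking comes from the baseline's output, the file paths and values here are made up).

```python
import pandas as pd

# Hypothetical stand-ins for the per-fold .xlsx sheets. In practice:
#   df = pd.read_excel("workdir/results/<your-run>/fold0.xlsx")
fold_sheets = [
    pd.DataFrame({"valid_ranking": [0.61, 0.68, 0.72]}),  # fold 0 (dummy values)
    pd.DataFrame({"valid_ranking": [0.59, 0.70, 0.66]}),  # fold 1 (dummy values)
]

# Final validation performance of each fold = max of its valid_ranking column
fold_scores = [df["valid_ranking"].max() for df in fold_sheets]

# Cross-validation estimate = mean over all folds (5 in the full setup, 2 shown)
cv_estimate = sum(fold_scores) / len(fold_scores)
print(fold_scores, cv_estimate)
```

With all 5 folds trained, you would simply read all five sheets into fold_sheets and take the same mean.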

If you wish to estimate your model performance on the Hidden Validation and Tuning Cohort, then please encapsulate your trained U-Net model + weights and make a submission to our leaderboard, as shown here.

Unfortunately, csPCa detection maps and case-level csPCa likelihood scores for the training/validation fold are not automatically generated or stored at the end of training, and we will not be adding this functionality anytime soon. If you wish to generate and store these predictions for your trained U-Net, you can refer to this script. You can adapt it into your own Python script or Jupyter notebook that, given any bpMRI exam, initializes the U-Net architecture, loads your trained model weights, and then generates and saves all output prediction files.
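The overall shape of such an inference script looks like the sketch below. This is only an outline with a tiny stand-in network: the real script would import the actual baseline U-Net architecture, load your trained weights from workdir/results, and apply the baseline's own preprocessing; the class name, weight path, and the max-voxel reduction to a case-level score are all illustrative assumptions here.

```python
import torch
import torch.nn as nn

class TinyUNetStandIn(nn.Module):
    """Hypothetical stand-in; replace with the baseline U-Net architecture."""
    def __init__(self, in_channels: int = 3):
        super().__init__()
        # bpMRI exams are typically stacked as 3 channels (e.g. T2W, ADC, DWI)
        self.head = nn.Conv3d(in_channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sigmoid so each voxel is a csPCa likelihood in [0, 1]
        return torch.sigmoid(self.head(x))

model = TinyUNetStandIn()
# model.load_state_dict(torch.load("workdir/results/..."))  # hypothetical path
model.eval()

# One preprocessed bpMRI exam: (batch, channels, depth, height, width)
exam = torch.rand(1, 3, 8, 32, 32)
with torch.no_grad():
    detection_map = model(exam)[0, 0]          # voxel-level csPCa likelihoods

# One common reduction to a case-level score: the maximum voxel likelihood
case_level_score = detection_map.max().item()
print(tuple(detection_map.shape), case_level_score)
```

From there, the detection map and case-level score can be written to disk in whatever format your evaluation expects (e.g. the map as an image volume alongside the original exam's metadata).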

Hope this helps.

 Last edited by: anindo on Aug. 15, 2023, 12:57 p.m., edited 3 times in total.