U-Net training : array size error

  By: youssef on July 8, 2022, 11:24 a.m.

When launching the U-Net training command, I get the following error: "ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 3, the array at index 0 has size 640 and the array at index 1 has size 384"

The previous steps all went well.
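For reference, the mismatch can be reproduced in isolation with plain numpy (the 640/384 sizes are taken from the message above):

```python
import numpy as np

# Two single-channel "volumes" whose last axis disagrees (640 vs 384),
# mimicking two MRI sequences that were never resampled to a common grid.
a = np.zeros((1, 1, 20, 640))
b = np.zeros((1, 1, 20, 384))

msg = ""
try:
    np.concatenate([a, b], axis=1)  # channel-wise concatenation fails
except ValueError as e:
    msg = str(e)
print(msg)  # sizes 640 vs 384 along dimension 3 do not match
```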

  • Did anyone encounter the same problem? Maybe I just missed something obvious.
  • If the U-Net training does go well, would the resulting weights be the same as those available here: https://github.com/DIAGNijmegen/picai_unet_gc_algorithm/tree/main/weights

Thank you!

Re: U-Net training : array size error  

  By: anindo on July 8, 2022, 11:51 a.m.

Hi Youssef,

It seems that you may have missed this preprocessing step here: https://github.com/DIAGNijmegen/picai_baseline/blob/main/unet_baseline.md#u-net---data-preparation

For the baseline U-Net, all image sequences (T2W, DWI/HBV, ADC) must first be resampled to a common spatial resolution (3.0, 0.5, 0.5) and then center-cropped to a common spatial size (20, 256, 256) before they can be concatenated and used for training (as done in the train.py script).
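In case it helps to see the idea in code: here is a rough numpy/scipy sketch of that resample-then-crop step. This is not the actual picai_baseline preprocessing code, and the input spacings/sizes below are made up for illustration:

```python
import numpy as np
from scipy import ndimage


def resample(arr, spacing, new_spacing=(3.0, 0.5, 0.5)):
    """Resample a (z, y, x) volume to a new voxel spacing via linear interpolation."""
    zoom = [s / ns for s, ns in zip(spacing, new_spacing)]
    return ndimage.zoom(arr, zoom, order=1)


def center_crop_or_pad(arr, shape=(20, 256, 256)):
    """Center-crop (and zero-pad if smaller) a volume to a fixed shape."""
    out = np.zeros(shape, dtype=arr.dtype)
    src, dst = [], []
    for a, t in zip(arr.shape, shape):
        if a >= t:  # crop: take the central t voxels of the source
            start = (a - t) // 2
            src.append(slice(start, start + t))
            dst.append(slice(0, t))
        else:       # pad: place the full source in the center of the output
            start = (t - a) // 2
            src.append(slice(0, a))
            dst.append(slice(start, start + a))
    out[tuple(dst)] = arr[tuple(src)]
    return out


# Sequences with different spacings/sizes line up after preprocessing.
t2w = resample(np.zeros((24, 320, 320), np.float32), spacing=(3.6, 0.4, 0.4))
adc = resample(np.zeros((20, 192, 192), np.float32), spacing=(3.0, 1.0, 1.0))
t2w = center_crop_or_pad(t2w)
adc = center_crop_or_pad(adc)
stacked = np.stack([t2w, adc])  # concatenation now works: (2, 20, 256, 256)
```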

If you're only using the 1295/1500 cases with human expert-derived annotations (as prepared via prepare_data.py), then indeed the resultant models should perform similarly to the models available here: https://github.com/DIAGNijmegen/picai_unet_gc_algorithm/tree/main/weights. Training is not deterministic, however, so the model weights themselves will be different.

If you're using all 1500/1500 cases with human expert- and AI-derived annotations (as prepared via prepare_data_semi_supervised.py), then the resultant models should perform similarly to the models available here: https://github.com/DIAGNijmegen/picai_unet_semi_supervised_gc_algorithm/tree/main/weights

Hope this helps.

 Last edited by: anindo on Aug. 15, 2023, 12:56 p.m., edited 2 times in total.