U-Net Data preprocessing error

U-Net Data preprocessing error  

  By: mjcho on July 22, 2022, 9:43 a.m.

Hello!

When I executed 'prepare_data.py', I got an unexpected error. Here is the log:

"Provided mha2nnunet archive is valid.

Starting preprocessing script for Task2201_picai_baseline.
Reading scans from /workspace/input/images
Reading annotations from /workspace/input/labels/csPCa_lesion_delineations/human_expert/resampled

Creating preprocessing plan with 1295 items.

Preprocessing plan completed, with 1295 items containing a total of 3885 scans and 1295 labels.

01294 11475_1001499
     Unexpected error: Label has changed due to resampling/other errors for 11475_1001499! Have 1 -> 2 isolated ground truth lesions

Processed 1294 items, with -1 cases skipped and 1 error.

Did not generate dataset.json, as only 1294/1295 items are converted successfully."

This problem occurs only during U-Net data preprocessing. How can I solve it?

Thank you.

Re: U-Net Data preprocessing error  

  By: joeran.bosma on July 22, 2022, 12:22 p.m.

Hi Minji,

The error you face results from resampling the annotation for case 11475_1001499. Unfortunately, this is a limitation of the U-Net baseline method (the nnU-Net baseline does not have this issue, because nnU-Net uses a different resampling method). For the baseline methods (as present on the leaderboard) we excluded this particular case. We have now updated picai_baseline > UNet > prepare_overviews.py to reflect this, thanks for pointing it out!

There are a couple of options to circumvent this resampling issue:

* Increase the field-of-view, e.g. to 20x320x320 voxels. To do this, use mha2nnunet_settings["preprocessing"]["matrix_size"] = [20, 320, 320] in the U-Net tutorial.
* Exclude this case (update picai_baseline and re-create the overviews; in this case, you can safely ignore the error message for this one case).
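For the first option, the fix amounts to a one-line change to the preprocessing settings from the U-Net tutorial. A minimal sketch, assuming the settings are a plain nested dict as in the tutorial (the default values below are illustrative assumptions; only the final matrix_size assignment is taken from this thread):

```python
# Sketch of option 1: enlarge the preprocessing field-of-view so the
# resampled annotation for case 11475_1001499 no longer splits into
# two isolated lesions.
# The default values below are assumptions for illustration only.
mha2nnunet_settings = {
    "preprocessing": {
        "matrix_size": [20, 256, 256],  # assumed default field-of-view
        "spacing": [3.0, 0.5, 0.5],     # assumed voxel spacing (mm)
    },
}

# Increase the field-of-view to 20x320x320 voxels, as suggested above:
mha2nnunet_settings["preprocessing"]["matrix_size"] = [20, 320, 320]
```

After this change, re-run prepare_data.py so the preprocessing plan picks up the larger matrix size.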

Hope this helps, Joeran

Re: U-Net Data preprocessing error  

  By: mjcho on July 23, 2022, 12:24 p.m.

Hello, Joeran

Thank you for your help. I have another error, so I'd appreciate it if you could help me. The following error occurred when I ran the unet/train.py file.

File "src/picai_baseline/unet/train.py", line 162, in <module>
    main()
File "src/picai_baseline/unet/train.py", line 137, in main
    args=args, tracking_metrics=tracking_metrics, device=device, writer=writer
File "/workspace/picai_baseline/src/picai_baseline/unet/training_setup/callbacks.py", line 107, in optimize_model
    for batch_data in train_gen:
File "/workspace/picai_baseline/src/picai_baseline/unet/training_setup/augmentations/multi_threaded_augmenter.py", line 200, in next
    item = self.__get_next_item()
File "/workspace/picai_baseline/src/picai_baseline/unet/training_setup/augmentations/multi_threaded_augmenter.py", line 188, in __get_next_item
    if not self.pin_memory_queue.empty():
AttributeError: 'MultiThreadedAugmenter' object has no attribute 'pin_memory_queue'

Thank you again.

Re: U-Net Data preprocessing error  

  By: anindo on July 24, 2022, 9:21 a.m.

Hi Minji,

Is this error all that was reported when your task failed? Typically, a 'MultiThreadedAugmenter' object can die due to other errors arising earlier in the pipeline, in which case, the final error reported is not the real issue and you should look for another error being reported right before this one. Could you provide more details on the system configuration that you're using (OS, RAM, GPU VRAM, number of GPUs, number of CPU cores), the exact command that you had executed (including the command line arguments, if any) and full stdout/stderr of your task?

'MultiThreadedAugmenter' is derived from the batchgenerators module. You can also have a look at their documentation and issues to troubleshoot this further.
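To illustrate why the last error printed can be a red herring: when a background augmentation worker dies early, the consumer loop only sees the symptom (an empty queue, a missing attribute), not the cause. A minimal, self-contained sketch of this masking effect, using a Python thread as a stand-in for a batchgenerators worker process:

```python
import queue
import threading

def augmentation_worker(out_queue: queue.Queue) -> None:
    """Stand-in for a background augmentation worker that crashes before
    it ever produces a batch (e.g. because shared memory ran out)."""
    raise RuntimeError("real error: worker crashed while loading a batch")

out_queue = queue.Queue()
worker = threading.Thread(target=augmentation_worker, args=(out_queue,))
worker.start()
worker.join()  # the worker has already died at this point;
               # its RuntimeError traceback was printed earlier, on its own

try:
    batch = out_queue.get(timeout=1)
except queue.Empty:
    # This secondary error is all the consumer loop reports -- it says
    # nothing about the RuntimeError that actually killed the worker.
    batch = None
    print("secondary error: no batch arrived from the worker")
```

The RuntimeError traceback appears first, on the worker's own stack; the consumer's failure comes later. This is why the full stdout/stderr, not just the final traceback, is needed to diagnose the real issue.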

 Last edited by: anindo on Aug. 15, 2023, 12:56 p.m., edited 1 time in total.

Re: U-Net Data preprocessing error  

  By: mjcho on July 28, 2022, 6:42 a.m.

Hello Anindo,

I think this error occurred because the Docker container had too little shared memory. I increased the shared-memory size, and it worked.
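For anyone hitting the same issue: Docker's default /dev/shm segment is only 64 MB, which multi-worker data loading easily exhausts; starting the container with a larger segment, e.g. docker run --shm-size=8g ..., avoids this. A quick way to check the segment from inside a running (Linux) container:

```python
import shutil

# Report the size of the shared-memory segment (/dev/shm). Worker
# processes of batchgenerators and of PyTorch's DataLoader exchange
# batches through shared memory, so a tiny segment kills the workers.
total, used, free = shutil.disk_usage("/dev/shm")
print(f"/dev/shm: {total / 2**20:.0f} MiB total, {free / 2**20:.0f} MiB free")
```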

Thank you very much.