Training on validation samples  

  By: songx on Aug. 13, 2021, 8:28 p.m.

Hi organizers!

Since the number of samples is already very limited, and supervision with Dice can be considered weakly supervised for deformable registration, are we allowed to train on the validation set?

If so, are we allowed to overfit the validation set?

Thanks.

Re: Training on validation samples  

  By: AHering on Aug. 16, 2021, 8:21 a.m.

Hi,

Thanks for your question.

For the snapshot evaluation, please do not train on the validation cases! If you overfit on the validation data, achieve good scores on it, and are therefore invited to give a talk during the workshop, but then obtain much worse scores on the test data, we reserve the right to withdraw the invitation.

For the final submission, you may include the validation cases in your training set if you like.

Best, Alessa

Re: Training on validation samples  

  By: songx on Aug. 16, 2021, 1:16 p.m.

Thanks for the response!

"For the snapshot evaluation, please do not train on the validation cases!"

I couldn't find an option to remove my submissions. Can you help remove my submissions on Task 1? In those submissions I trained on the validation set. Sorry for the trouble.

submission ids: f9d26a81-9f80-4646-ad3f-05a74d9f56e3 d7f1ee6f-275b-4647-b48a-0bc21fbe0125

 Last edited by: songx on Aug. 15, 2023, 12:55 p.m., edited 2 times in total.

Re: Training on validation samples  

  By: songx on Aug. 16, 2021, 3:02 p.m.

Furthermore, since the segmentations are provided for Task 1, are we allowed to use the segmentations as network input?

If so, we would basically have all the information needed for test-time training: image and segmentation (or the segmentation alone). However, we are not allowed to update the network parameters based on the validation samples. Can I interpret that as a restriction on runtime?

Re: Training on validation samples  

  By: AHering on Aug. 17, 2021, 6:39 a.m.

There are no segmentation masks available for the test dataset. Therefore, you should only use the segmentation masks in the loss function during training and not as an additional network input.
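
For illustration, here is a minimal PyTorch-style sketch of that setup, in which the segmentations enter only through the Dice term of the loss while the network itself sees just the images. The network name, tensor shapes, and loss weighting are assumptions for the example, not challenge code:

```python
# Minimal sketch (not official challenge code): segmentations are used only
# inside the training loss, never as network input.
import torch
import torch.nn.functional as F

def warp(vol, flow):
    """Warp `vol` (B, C, D, H, W) with a displacement field `flow`
    (B, 3, D, H, W) given in voxels, channel order (x, y, z)."""
    B, _, D, H, W = vol.shape
    theta = torch.eye(3, 4, device=vol.device).unsqueeze(0).repeat(B, 1, 1)
    grid = F.affine_grid(theta, (B, 1, D, H, W), align_corners=True)  # (B, D, H, W, 3)
    # normalise voxel displacements to grid_sample's [-1, 1] convention
    scale = torch.tensor([2.0 / (W - 1), 2.0 / (H - 1), 2.0 / (D - 1)],
                         device=vol.device).view(1, 3, 1, 1, 1)
    grid = grid + (flow * scale).permute(0, 2, 3, 4, 1)
    return F.grid_sample(vol, grid, align_corners=True)

def soft_dice_loss(warped_seg, fixed_seg, eps=1e-6):
    """Soft Dice over one-hot label maps of shape (B, C, D, H, W)."""
    dims = (2, 3, 4)
    inter = (warped_seg * fixed_seg).sum(dims)
    union = warped_seg.sum(dims) + fixed_seg.sum(dims)
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def train_step(net, optimizer, fixed_img, moving_img, fixed_seg, moving_seg):
    # One weakly-supervised training step: the network only receives the
    # image pair; the segmentations appear only in the Dice term.
    optimizer.zero_grad()
    flow = net(torch.cat([fixed_img, moving_img], dim=1))   # (B, 3, D, H, W)
    warped_img = warp(moving_img, flow)
    warped_seg = warp(moving_seg, flow)
    loss = F.mse_loss(warped_img, fixed_img) + soft_dice_loss(warped_seg, fixed_seg)
    loss.backward()
    optimizer.step()
    return loss.item()
```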

You are not allowed to use the test data to train your network. If you want to fine-tune the network parameters on one scan of the test data (just as conventional methods optimize on a single scan), you have to do it within your final algorithm, which is included in the Docker container. That means the network is not fine-tuned before submission but during the evaluation, and will therefore probably have a longer runtime.
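
As an illustration of what such instance-wise fine-tuning inside the submitted algorithm could look like, here is a sketch under assumed names and shapes (reusing the `warp` helper from the sketch above); it is not the official submission interface. Because the test cases have no segmentations, the per-case objective can only use image similarity plus regularisation, and the extra optimisation loop is what increases the runtime during evaluation:

```python
# Sketch only: per-case fine-tuning of a pretrained registration network at
# inference time, inside the submitted algorithm.
import copy
import torch
import torch.nn.functional as F

def flow_gradient_penalty(flow):
    # simple first-order smoothness regulariser on the displacement field
    dz = (flow[:, :, 1:] - flow[:, :, :-1]).pow(2).mean()
    dy = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).pow(2).mean()
    dx = (flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]).pow(2).mean()
    return dx + dy + dz

def register_pair(pretrained_net, fixed_img, moving_img, steps=50, lr=1e-4):
    net = copy.deepcopy(pretrained_net)        # keep the shipped weights intact
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):                     # per-case optimisation loop
        optimizer.zero_grad()
        flow = net(torch.cat([fixed_img, moving_img], dim=1))
        warped = warp(moving_img, flow)        # `warp` as in the sketch above
        # unsupervised objective: image similarity + smoothness of the field
        loss = F.mse_loss(warped, fixed_img) + 0.1 * flow_gradient_penalty(flow)
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        return net(torch.cat([fixed_img, moving_img], dim=1))
```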