A question about the training details
By: pluto_charon on Feb. 11, 2022, 3:05 a.m.
About one month ago, we trained the baseline with your released code (single GPU) and achieved almost the same performance, as shown below:
|       | PQ     | mPQ+   | multi_r2 |
|-------|--------|--------|----------|
| ours  | 0.6138 | 0.4937 | 0.8369   |
| yours | 0.6149 | 0.4998 | 0.8585   |
To gain more control over the code, we reimplemented it with PyTorch Lightning and trained on two GPUs, keeping all other parameters the same as in your released baseline. But the result is disappointing: mPQ+ dropped by about 3 points, to 0.4670.
We have since checked every cause we could think of and now suspect that the drop comes from multi-GPU training.
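For context, the setup we mean is roughly the sketch below (a minimal example assuming a recent PyTorch Lightning version; the batch size, epoch count, and flags are illustrative placeholders, not our exact configuration). Our current guess is that the effective batch size silently changes under DDP:

```python
# Minimal sketch of a two-GPU Lightning setup; values are hypothetical.
import pytorch_lightning as pl

PER_GPU_BATCH_SIZE = 8   # hypothetical per-process batch size
NUM_GPUS = 2

trainer = pl.Trainer(
    accelerator="gpu",
    devices=NUM_GPUS,
    strategy="ddp",        # one process per GPU, gradients averaged
    sync_batchnorm=True,   # without this, BatchNorm statistics stay per-GPU
    max_epochs=50,         # hypothetical; we match the baseline schedule
)

# Under DDP each process draws its own batches, so the effective batch size
# is PER_GPU_BATCH_SIZE * NUM_GPUS. If the single-GPU baseline used a batch
# size of 8, this setup effectively trains with 16 unless the per-GPU batch
# size is halved or the learning rate is rescaled (linear scaling rule).
```

If our guess is right, halving the per-GPU batch size or rescaling the learning rate should recover the baseline, but we would like to confirm this against your actual setup.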
So we were wondering how many GPUs you used in training. This is very important for us as we try to match and improve on the baseline performance.
Looking forward to your reply, thank you very much!