Scripts for converting a PyTorch grid into the L2R'24 COMULISSHGBF Submission Format?
By: huboqiang on June 26, 2024, 3:01 a.m.
Dear all,
I need some instructions for converting a 2D PyTorch grid_sample grid into the submission nii.gz format.
For example, I got an acceptable result for case 0004 with the PyTorch grid_sample function. However, the TRE on the leaderboard is 200+:
{
    "name": "COMULISSHGBF",
    "cases": {
        "0004_0000<--0004_0001": {
            "TRE_lm": {
                "mean": 249.23447419979559,
                "detailed": [
                    394.1283978539982,
                    227.08221950121387,
                    358.8919408074519,
                    327.3250411567412,
                    397.33235645952493,
                    220.47409711239033,
                    282.25376101368596,
                    63.69834172407011,
                    249.2802253619205,
                    201.04043907684527,
                    335.96265768308547,
                    114.68517970892336,
                    117.92020637623497,
                    127.20886199416978,
                    175.42450899960602,
                    473.5454197726987,
                    152.18533998609786,
                    282.2592651375634,
                    13.94543948063247,
                    335.0911858075518,
                    244.04301799834022,
                    362.49139391768966,
                    308.50232584383076,
                    197.00175265709328,
                    269.08847956352975
                ]
            },
            "LogJacDetStd": {
                "mean": 3.3724719721954794e-06,
                "detailed": 3.3724719721954794e-06
            },
            "num_foldings": {
                "mean": 0.0,
                "detailed": 0.0
            }
        }
    }
}
I noticed the instructions on the submission page, which say:
When using PyTorch as deep learning framework you are most likely to transform your images with the grid_sample() routine. Please be aware that this function uses a different convention than ours, expecting displacement fields in the format [X, Y, 1, [1, y, x]] and normalized coordinates between -1 and 1. Prior to your submission you should therefore convert your displacement fields to match our convention (see above).
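Here is my understanding of that convention, sketched in PyTorch (this is my own interpretation, not the organizers' reference code, and `grid_to_pixel_disp` is a hypothetical helper name): since the grid returned by affine_grid / consumed by grid_sample stores absolute normalized coordinates rather than displacements, I believe the identity grid has to be subtracted before rescaling to pixels.

```python
import torch
import torch.nn.functional as F

def grid_to_pixel_disp(grid):
    """Hypothetical conversion of a grid_sample grid (N, H, W, 2),
    normalized to [-1, 1] with last dim in (x, y) order, into a
    pixel-space displacement field (H, W, N, 2) with last dim (y, x).

    Assumption: the grid holds absolute sampling coordinates, so the
    identity grid must be subtracted to obtain a displacement.
    """
    N, H, W, _ = grid.shape
    # Identity grid in the same normalized convention as `grid`
    identity = F.affine_grid(
        torch.eye(2, 3).unsqueeze(0).expand(N, -1, -1),
        [N, 1, H, W],
        align_corners=True,
    )
    # Normalized x spans [-1, 1] over (W - 1) pixels, so a normalized
    # displacement dx maps to dx * (W - 1) / 2 pixels (same for y with H)
    scale = torch.tensor([(W - 1) / 2.0, (H - 1) / 2.0])
    disp_px = (grid - identity) * scale
    # Swap channel order (x, y) -> (y, x) and move batch to axis 2
    return disp_px.flip(-1).permute(1, 2, 0, 3)
```

For a pure +0.5 normalized shift in x on a 5x5 grid this gives a constant displacement of 1 pixel in x and 0 in y, which is what I would expect.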
So I wrote a script to transform the PyTorch grid into .nii.gz. I think the problem is in grid_to_disp:
import numpy as np
import torch
import nibabel as nib
import torch.nn.functional as F

def create_initial_grid(rtk_module, height, width):
    grid_M = rtk_module().unsqueeze(0)
    grid = F.affine_grid(grid_M, [1, 1, height, width], align_corners=True)
    return grid

def rtk_to_disp(rtk_params):
    # RTKModule is defined elsewhere in my code; it maps the five
    # parameters to a 2x3 affine matrix
    rtk_module = RTKModule(rtk_params)
    grid_M = create_initial_grid(rtk_module, 834, 834)
    disp_field = grid_to_disp(grid_M)
    return disp_field.detach().numpy()

def grid_to_disp(grid):
    """
    grid: (N, H_out, W_out, 2). See torch.nn.functional.grid_sample
    disp: (H, W, N, 2). See scipy.ndimage.map_coordinates
    """
    H, W = grid.shape[1], grid.shape[2]
    disp_field = (
        (grid / 2) * (torch.tensor([H, W]) - 1)
    ).flip(-1).float().cpu().permute(1, 2, 0, 3)
    return disp_field

def main():
    l_idxs = [4]
    l_rtks = [
        [-0.05, -0.15, 0.03, 0.0, 0.02],
    ]
    for i_idx in range(len(l_idxs)):
        rtk_params = torch.FloatTensor(np.array(l_rtks[i_idx], dtype=np.float32))
        disp_field = rtk_to_disp(rtk_params)
        img = nib.Nifti1Image(disp_field.astype(np.float64), np.eye(4))
        nib.save(img, f"./test/disp_{l_idxs[i_idx]:04d}_{l_idxs[i_idx]:04d}.nii.gz")

if __name__ == "__main__":
    main()
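One sanity check I am considering (a sketch under my own assumptions; `warp_error` is a hypothetical helper name): if the converted field were correct, warping an image with scipy's map_coordinates driven by the pixel displacements should reproduce the grid_sample warp, so a large discrepancy would confirm that the conversion is off.

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import map_coordinates

def warp_error(img, grid, disp):
    """Max abs difference between warping `img` with grid_sample(grid)
    and warping it with map_coordinates driven by `disp`, an (H, W, 2)
    pixel displacement field in (dy, dx) order. Should be ~0 when
    `disp` is the correct conversion of `grid`.
    """
    H, W = img.shape
    t = torch.from_numpy(img).float()[None, None]              # (1, 1, H, W)
    ref = F.grid_sample(t, grid, align_corners=True)[0, 0].numpy()
    # map_coordinates takes absolute (y, x) sampling coordinates, so
    # add the displacements to an identity pixel grid
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    coords = np.stack([yy + disp[..., 0], xx + disp[..., 1]])
    out = map_coordinates(img, coords, order=1)
    return float(np.abs(ref - out).max())
```

With the identity grid and an all-zero displacement field the error is zero; I would then compare my converted field against this baseline.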
As there is no local test case for the TRE calculation in this challenge, and the number of submissions is limited, I am looking for your help to debug this code.
Thanks~