Hi Soldeepm,
I understand the confusion, and I think you already have the right idea of how the data works. To be sure, I'll illustrate it with an example:

The annotations are for the 1024 x 1024 image, which is the center crop of the context image. By applying an offset, the annotations can be displayed on the context image.
Annotations for the full context image, however, are not provided. The context image contains the same nuclei types (with exceptions: some primary melanomas, for example, have glands in their primary image, which consist of tissue and nuclei types not represented in the dataset). What I meant in my previous message is that it is possible to infer from the ROI onto the context ROI to generate more data.
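The offset mapping described above can be sketched in a few lines. This is a minimal illustration, assuming annotations are (x, y) point coordinates and that the ROI sits exactly at the center of the context image; the 2048 x 2048 context size in the example is a hypothetical value, only the 1024 x 1024 ROI size comes from this thread.

```python
ROI_SIZE = 1024  # side length of the annotated center crop

def roi_to_context(points, context_w, context_h, roi_size=ROI_SIZE):
    """Shift (x, y) points annotated on the centered ROI into context-image coordinates."""
    # Offset of the ROI's top-left corner inside the context image
    offset_x = (context_w - roi_size) // 2
    offset_y = (context_h - roi_size) // 2
    return [(x + offset_x, y + offset_y) for (x, y) in points]

# A nucleus centroid at (10, 20) in the ROI lands at (522, 532)
# in a hypothetical 2048 x 2048 context image.
print(roi_to_context([(10, 20)], 2048, 2048))  # → [(522, 532)]
```

The same offset works in reverse for cropping context-image predictions back to the annotated region.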
I hope this helps! If you have more questions, please let me know.
Kind regards,
Mark