Test image format  

  By: Tomas on Feb. 7, 2024, 6:56 p.m.

I want to reopen the question about the "tiff stacking". In the example algorithm (GitHub), the code reads a single tiff image, which is a simple RGB image (i.e. 3 channels), splits it into 3 jpg images and performs inference on each of those images. Why is that? If the image only has 3 channels, then inference should be performed on the RGB image directly. Was it just to show what the final json files look like when inference is performed on multiple images?

Does the test/input folder contain test images, each in tiff format and containing a single RGB image? I don't want to waste submissions trying to "interrogate" the submission system to figure this out.

Re: Test image format  

  By: yeganeh.madadi on Feb. 8, 2024, 5:09 p.m.

Dear Tomas,

The "tiff stacking" is a process that is done in the backend of the Grand Challenge website. You, as an algorithm developer, just load the jpg training data and use it in your model locally.

Thanks!

Re: Test image format  

  By: Tomas on Feb. 8, 2024, 7:34 p.m.

I'm not asking about the training dataset (nor about the training process). I'm asking about the testing dataset that the container, once uploaded to the site, has to read and make inference on. The question is simple: are the TEST sample images in jpg format or tiff format? The training dataset is in jpg format, while the example container provided on your GitHub account reads in a TEST tiff image, then saves each channel as a separate jpg file and does inference on them.

Am I the only one who is confused by this? Do other participants have no issue with this? If so, maybe they could shed some light on it.

Re: Test image format  

  By: Tomas on Feb. 9, 2024, 7:51 p.m.

I think I figured out the answer. In the test phase, we are given a tiff file containing multiple RGB images, which are then saved as jpg files on which the inference is performed - at least by the provided code. The confusion was due to a poor choice of the example tiff file in the GitHub repository - it contains only a single RGB image, and the provided code simply goes over the channels of that single image and saves those channels as separate jpg files. However, I went to the Airogs GitHub repository, and the test tiff file provided there contains 10 images, each of which is an RGB image. When the code from the JustRAIGS GitHub repository is run on the stacked tiff file from the Airogs repository, it correctly extracts each RGB image (i.e. with all its 3 channels) and saves it as a jpg.
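
For anyone else running into this, here is a minimal sketch of the unstacking idea, assuming Pillow is used to read the tiff. This is not the official challenge code; the file name and output folder below are made up for illustration.

```python
# Minimal sketch (not the official JustRAIGS code) of unstacking a multi-image
# tiff into separate jpg files, assuming Pillow is available. File and folder
# names are made up for illustration.
from pathlib import Path

from PIL import Image, ImageSequence

input_path = Path("stacked_test_images.tiff")  # hypothetical stacked tiff
output_dir = Path("unstacked_jpgs")
output_dir.mkdir(exist_ok=True)

with Image.open(input_path) as tiff:
    # ImageSequence.Iterator walks over every page in the tiff, so a
    # 10-image stack yields 10 separate RGB frames rather than 3 channels.
    for index, frame in enumerate(ImageSequence.Iterator(tiff)):
        frame.convert("RGB").save(output_dir / f"image_{index}.jpg", quality=95)
```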

The code works correctly. But the confusion could have been avoided simply by providing a tiff file with more than one image in it, or by adding an exception in the code so that when only a single image is present in the tiff file, it is not split channel-wise.
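
A sketch of the guard I have in mind, assuming the tiff is read into a NumPy array with tifffile (the file name is again just an example):

```python
# Minimal sketch of the suggested single-image guard, assuming tifffile is
# used to load the tiff into a NumPy array. The file name is illustrative.
import tifffile

array = tifffile.imread("example.tiff")

if array.ndim == 3 and array.shape[-1] == 3:
    # A single RGB image of shape (H, W, 3): keep it whole instead of
    # splitting it channel-wise into three grayscale "images".
    images = [array]
elif array.ndim == 4 and array.shape[-1] == 3:
    # A stack of RGB images of shape (N, H, W, 3): iterate over the first axis.
    images = [array[i] for i in range(array.shape[0])]
else:
    raise ValueError(f"Unexpected tiff shape: {array.shape}")

print(f"Found {len(images)} RGB image(s) in the tiff")
```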