Hi Adrian,
About your first question: there appears to be a spelling error in the container on GitHub; however, the container uploaded to the Grand Challenge platform is functioning normally and accepts 10-class data. Our apologies for the inconvenience. To be clear, Track 2 submissions should include 10-class predictions, while Track 1 submissions should produce a 3-class JSON file for nuclei segmentation.
About your second question: you can ignore the variable inst_channels; this is simply a fixed parameter for the encoder used in our baseline algorithm. The parameter out_channels_cls indeed captures the number of output classes; for Track 1 this is 4 classes (3 classes + 1 background class).
Regarding your approach, I would advise against trying to map your output to the pinst_out and pcls_out format; instead, write a function that directly produces the final JSON file from your numpy arrays. Examples of what the output files should look like can be found in the output interfaces list (check View example): https://grand-challenge.org/components/interfaces/outputs/
For a JSON file (nuclei segmentation), the following format is expected:
{
  "name": "Areas of interest",
  "type": "Multiple polygons",
  "polygons": [
    {
      "name": "Area 1",
      "seed_point": [55.82, 90.46, 0.5],
      "path_points": [
        [55.82, 90.46, 0.5],
        [55.93, 90.88, 0.5],
        [56.24, 91.19, 0.5],
        [56.66, 91.3, 0.5]
      ],
      "sub_type": "brush",
      "groups": ["manual"],
      "probability": 0.67
    },
    {
      "name": "Area 2",
      "seed_point": [90.22, 96.06, 0.5],
      "path_points": [
        [90.22, 96.06, 0.5],
        [90.33, 96.48, 0.5],
        [90.64, 96.79, 0.5]
      ],
      "sub_type": "brush",
      "groups": [],
      "probability": 0.92
    }
  ],
  "version": {
    "major": 1,
    "minor": 0
  }
}
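If it helps, here is a minimal sketch of how such a conversion function could look in Python. This is only an illustration, not part of the baseline code: the function name polygons_to_json and the input structure (a list of dicts with polygon vertices and a probability) are my own assumptions; only the output keys follow the interface example above.

```python
import json
import numpy as np

def polygons_to_json(polygons, z=0.5):
    """Build a "Multiple polygons" JSON structure from polygon data.

    `polygons` is an assumed input format: a list of dicts with keys
    "name", "points" (an (N, 2) array-like of x/y vertices) and
    "probability". Only the emitted keys follow the interface example.
    """
    out = {
        "name": "Areas of interest",
        "type": "Multiple polygons",
        "polygons": [],
        "version": {"major": 1, "minor": 0},
    }
    for poly in polygons:
        pts = np.asarray(poly["points"], dtype=float)
        # Append the fixed z coordinate to every vertex, as in the example.
        path = [[float(x), float(y), z] for x, y in pts]
        out["polygons"].append({
            "name": poly["name"],
            "seed_point": path[0],  # first vertex used as seed point (assumption)
            "path_points": path,
            "sub_type": "brush",
            "groups": [],
            "probability": float(poly.get("probability", 1.0)),
        })
    return out

# Example usage with one dummy nucleus taken from the format example above.
polys = [{"name": "Area 1",
          "points": [[55.82, 90.46], [55.93, 90.88], [56.24, 91.19]],
          "probability": 0.67}]
result = polygons_to_json(polys)
with open("nuclei_segmentation.json", "w") as f:
    json.dump(result, f, indent=2)
```

From there you would loop over the instances in your numpy prediction arrays, extract each nucleus outline as a list of vertices, and feed those lists into a function like this.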
After doing this, you should be good to go. If you have any further questions, feel free to reach out.
Kind regards,
Daniel