Handle the processing of inputs and outputs yourself
Evalutils currently provides templates for only three generic tasks: classification, segmentation, and detection. These templates take care of loading data and writing outputs in a task-specific way: you only need to adjust the predict function. If, however, your data or task does not fit the templates, you must also write the code that loads inputs and writes outputs, which essentially amounts to writing the whole process.py script yourself.
Writing it yourself gives you fine-grained control over how inputs and outputs are read and written. But it also means that you have to choose input and output interfaces yourself. Interfaces tell Grand Challenge what kind of data your algorithm consumes and produces, and where to read or write it. Interfaces have to be selected on the Grand Challenge website, and may need to be requested.
The general principle is that your Algorithm container processes one job at a time, and each job processes exactly one set of inputs. Your script therefore needs to read that one set of inputs from a location under /input and write your algorithm's outputs to a location under /output. For the default algorithms, evalutils does this automatically in the background, but for algorithms that require more flexibility you will have to write this code yourself. Check the snippets below for an example.
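As a concrete illustration of this one-job, one-input-set contract, the sketch below locates the single input image for a job. The helper name find_input_image and the .mha extension are assumptions for illustration; the actual file names and formats depend on the interfaces you select on Grand Challenge.

```python
from pathlib import Path


def find_input_image(input_dir: Path = Path("/input")) -> Path:
    """Return the path of the single input image for this job.

    The *.mha pattern is an assumption; use whatever formats your
    selected Grand Challenge interfaces actually provide.
    """
    candidates = sorted(input_dir.glob("*.mha"))
    # Each job processes exactly one set of inputs, so exactly one
    # image should be present.
    if len(candidates) != 1:
        raise RuntimeError(
            f"Expected exactly one input image in {input_dir}, "
            f"found {len(candidates)}"
        )
    return candidates[0]
```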
First of all, we run evalutils from the command line to generate a template and call it CustomAlgorithm (but you can pick a more suitable name, of course):
$ evalutils init algorithm CustomAlgorithm
In the example below we choose the segmentation template. You can choose whichever template best fits your use case, but in this case it does not matter much, because we will no longer inherit from the base class.
The automatically generated code for CustomAlgorithm would look like the following:
import SimpleITK
import numpy as np

from evalutils import SegmentationAlgorithm
from evalutils.validators import (
    UniquePathIndicesValidator,
    UniqueImagesValidator,
)


class Customalgorithm(SegmentationAlgorithm):
    def __init__(self):
        super().__init__(
            validators=dict(
                input_image=(
                    UniqueImagesValidator(),
                    UniquePathIndicesValidator(),
                )
            ),
        )

    def predict(self, *, input_image: SimpleITK.Image) -> SimpleITK.Image:
        # Segment all values greater than 2 in the input image
        return SimpleITK.BinaryThreshold(
            image1=input_image, lowerThreshold=2, insideValue=1, outsideValue=0
        )


if __name__ == "__main__":
    Customalgorithm().process()
To get more fine-grained control over inputs and outputs, we write our own process function. In addition to running inference, this function should take care of loading a set of inputs from /input/ and writing a set of outputs to /output/. To this end, in the example below we write a load_inputs and a write_outputs function and call them from the process function. Also note that we no longer inherit from the template's base class:
import SimpleITK
import numpy as np


class Customalgorithm():  # SegmentationAlgorithm is not inherited in this class anymore
    def __init__(self):
        """
        Write your own input validators here
        Initialize your model etc.
        """
        pass

    def load_inputs(self):
        """
        Read from /input/
        Check https://grand-challenge.org/algorithms/interfaces/
        """
        return inputs

    def write_outputs(self, outputs):
        """
        Write to /output/
        Check https://grand-challenge.org/algorithms/interfaces/
        """
        pass

    def predict(self, inputs):
        """
        Your algorithm goes here
        """
        return outputs

    def process(self):
        """
        Read inputs from /input, process with your algorithm and write to /output
        """
        inputs = self.load_inputs()
        outputs = self.predict(inputs)
        self.write_outputs(outputs)


if __name__ == "__main__":
    Customalgorithm().process()
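To make the skeleton concrete, here is a filled-in sketch. It stands in numpy arrays saved as .npy files for real medical images, and the file names, the toy threshold, and the configurable directories are all illustrative assumptions; a real algorithm would read and write whatever its selected interfaces specify (typically images, via SimpleITK).

```python
from pathlib import Path

import numpy as np


class Customalgorithm:
    """Filled-in sketch of the skeleton above.

    The .npy format, the output file name and the threshold are
    assumptions for illustration only; match your real code to the
    interfaces you selected on Grand Challenge.
    """

    def __init__(self, input_dir="/input", output_dir="/output"):
        self.input_dir = Path(input_dir)
        self.output_dir = Path(output_dir)

    def load_inputs(self):
        # Each job receives exactly one set of inputs, so we expect
        # exactly one input array here.
        [path] = sorted(self.input_dir.glob("*.npy"))
        return np.load(path)

    def predict(self, inputs):
        # Toy segmentation: every value greater than 2 becomes foreground.
        return (inputs > 2).astype(np.uint8)

    def write_outputs(self, outputs):
        self.output_dir.mkdir(parents=True, exist_ok=True)
        np.save(self.output_dir / "segmentation.npy", outputs)

    def process(self):
        inputs = self.load_inputs()
        outputs = self.predict(inputs)
        self.write_outputs(outputs)


if __name__ == "__main__":
    Customalgorithm().process()
```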
Adding files and dependencies
When you're adding files to the algorithm folder, such as model weights, you have to edit the Dockerfile by adding the following line for every additional file:
COPY --chown=algorithm:algorithm checkpoint /opt/algorithm/checkpoint
Replace both instances of checkpoint with the path to the file that your Algorithm needs to run. In addition, any extra dependencies must be added to the requirements.txt file with a pinned version number, for example:
evalutils==0.3.0
scikit-learn==1.0
scipy==1.6.3
scikit-image==0.18.1
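Putting it together, the Dockerfile of an algorithm that also needs a weights file and a configuration file might contain one COPY line per file (the file names here are illustrative):

```dockerfile
COPY --chown=algorithm:algorithm weights.pth /opt/algorithm/weights.pth
COPY --chown=algorithm:algorithm config.json /opt/algorithm/config.json
```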