This endpoint creates a segmentannotation task. In this task, one of our labelers will view the given image and classify pixels in the image according to the labels provided. You will receive a semantic, pixel-wise, dense segmentation of the image.
We also support instance-aware semantic segmentations, also called panoptic segmentation, via LabelDescription objects.
The required parameters for this task are attachment and labels. The attachment is a URL to an image you’d like to be segmented.
labels is an array of strings or LabelDescription objects describing the categories you'd like the image segmented into.
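As a sketch of the two forms labels can take, the snippet below mixes a plain string with a LabelDescription-style object. The field names shown (choice, subchoices, instance_label) are assumptions for illustration, not a confirmed schema; an instance-aware label is what enables panoptic output.

```python
# Hypothetical labels array: plain strings and LabelDescription-style
# objects can be mixed. Field names here are illustrative assumptions.
labels = [
    "road",  # plain string label: semantic only
    {
        "choice": "vehicle",             # label name
        "subchoices": ["car", "truck"],  # optional finer-grained options
        "instance_label": True,          # instance-aware (panoptic) label
    },
]

print(len(labels))
```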
You can optionally provide additional markdown-enabled or Google Doc-based instructions via the instruction parameter.
You can also optionally set allow_unlabeled to true, which permits unlabeled pixels in the task response. Otherwise, every pixel in the image will be classified, in which case it's important to provide labels covering everything in the image to avoid misclassification.
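Putting the parameters together, here is a minimal request sketch. The payload keys follow the parameters described above; the endpoint URL, the basic-auth convention, and the callback_url field are assumptions and would need to be checked against your account's API reference.

```python
# Minimal sketch of a segmentannotation request payload.
# attachment and labels are required; instruction and allow_unlabeled
# are optional, per the description above.
import json

payload = {
    "attachment": "https://example.com/street.jpg",  # image to segment
    "labels": ["road", "vehicle", "pedestrian", "background"],
    "instruction": "Label **every** pixel in the image.",  # markdown-enabled
    "allow_unlabeled": False,  # every pixel must receive a label
}

# Sending it (assumed endpoint and auth style; requires the `requests`
# package and a live API key):
# import requests
# resp = requests.post(
#     "https://api.scale.com/v1/task/segmentannotation",
#     json=payload,
#     auth=("YOUR_API_KEY", ""),
# )
# task = resp.json()

print(json.dumps(payload, indent=2))
```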
The response you will receive will be a series of images where each pixel's value corresponds to its label, either as a numerical index or a color mapping. You will also receive separate masks for each label for convenience.
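To make the index-based encoding concrete, the toy example below decodes a small combined mask into per-label binary masks, mirroring the separate masks the response provides. The label order and pixel values are invented for illustration.

```python
# Toy combined mask: each pixel holds the integer index of its label.
labels = ["background", "road", "vehicle"]

combined = [
    [0, 0, 1, 1],
    [1, 1, 2, 2],
    [1, 2, 2, 0],
]

# Derive a binary mask per label, as the API also returns separately.
masks = {
    name: [[1 if px == i else 0 for px in row] for row in combined]
    for i, name in enumerate(labels)
}

vehicle_pixels = sum(sum(row) for row in masks["vehicle"])
print(vehicle_pixels)  # 4
```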
If the request is successful, Scale will return the generated task object, at which point you should store the task_id to have a permanent reference to the task.