There are two stages where human labelers work on your tasks: the initial attempt stage and the higher-level review stage. Labelers working at the attempt stage are referred to as attempters, and those working at the review stage are referred to as reviewers. At the review stage, a reviewer can either fix the previous labeler’s work or approve it without changes. We refer to the work output at each stage as the labeler’s response, and to the final output you receive once a task has progressed through the pipeline as the task’s “final response.”

Basic Pipelines

On Rapid, we have three basic pipelines:

  • Our standard pipeline includes one attempt stage followed by one review stage. You can optionally add or remove review stages from your pipeline.

Standard Pipeline

  • Our consensus pipeline includes three attempt stages, where the responses of three attempters are collected and then automatically aggregated to output a final response. You can optionally add or remove attempt stages. You can also add a review stage to look over the aggregated response before the final response is delivered. Note that you will be able to access all attempter responses for each task included in a production batch (but not in a calibration batch).

Consensus Pipeline
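To make the aggregation step concrete, here is a minimal sketch of how three attempter responses could be consolidated by majority vote. This is purely illustrative: the actual aggregation logic is internal to the platform, and the function names, categorical-label assumption, and tie-breaking behavior here are all hypothetical.

```python
# Hypothetical sketch: majority-vote aggregation of attempter responses.
# Assumes categorical labels; the platform's real aggregation may differ.
from collections import Counter

def aggregate_responses(responses):
    """Return the label chosen by the most attempters (majority vote)."""
    counts = Counter(responses)
    label, _ = counts.most_common(1)[0]  # ties resolve by insertion order
    return label

# Three attempt stages produce three responses for the same task:
final_response = aggregate_responses(["cat", "cat", "dog"])
# final_response == "cat"
```

With an odd number of attempt stages (the default of three), a simple majority always exists for binary disagreements, which is one reason consensus pipelines commonly use three attempters.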

  • [BETA] Our collection pipeline is similar to our consensus pipeline in that responses are collected from multiple attempters. It differs in that the attempter responses are not aggregated into one consolidated response through consensus; instead, all attempter responses are returned as is.

Specialized Pipelines

We also offer more specialized pipelines to help achieve higher quality:

  • Our generative pipeline includes one attempt stage and one review stage. It is similar to the standard pipeline, except that the reviewer cannot edit the response and can only accept or reject it. If a response is rejected at the review stage, the task is automatically sent back to the attempt stage.

Generative Pipeline
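The generative pipeline's accept/reject loop can be sketched as a simple control flow. This is a hypothetical illustration, not the platform's implementation: `attempt`, `review`, and the round limit are stand-ins for the real attempt and review stages.

```python
# Hypothetical sketch of the generative pipeline's control flow:
# the reviewer can only accept or reject; a rejection automatically
# sends the task back to the attempt stage.

def run_generative_pipeline(attempt, review, max_rounds=5):
    """Loop attempt -> review until the reviewer accepts."""
    for _ in range(max_rounds):
        response = attempt()
        if review(response):      # reviewer can only accept/reject, not edit
            return response       # accepted: this becomes the final response
    return None                   # no accepted response within the round limit

# Usage example with stubbed-in attempt/review behavior:
attempts = iter(["draft", "better draft"])
result = run_generative_pipeline(lambda: next(attempts),
                                 lambda r: r == "better draft")
# result == "better draft"
```

The key design point this loop captures is that quality control happens purely through re-attempts rather than reviewer edits.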

  • Our video stitching pipeline involves breaking down long video tasks into multiple smaller subtasks, each with a shorter portion of the video, which can be worked on in parallel. The results are then stitched back together to create the final response, maintaining consistent object tracking across all frames of the video.
  • Our taxonomy chunking pipeline involves breaking down large taxonomies into multiple independent subtasks, each with its own smaller taxonomy, which can be worked on in parallel. The results are then combined to create the final response. This pipeline supports both image annotation and text collection, as well as our standard and consensus pipelines for the attempt phase.
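Both specialized pipelines above follow the same split/work-in-parallel/merge pattern. The sketch below illustrates that pattern in the abstract; the chunk size, the flat-list merge, and all names are hypothetical simplifications (real video stitching, for instance, must also reconcile object tracks across chunk boundaries).

```python
# Illustrative sketch of the split -> parallel work -> merge pattern
# shared by the video stitching and taxonomy chunking pipelines.
# Chunking and merging here are deliberately simplified.

def chunk(items, size):
    """Split work into independent subtasks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def merge(subtask_results):
    """Combine subtask results back into one final response."""
    merged = []
    for result in subtask_results:
        merged.extend(result)
    return merged

frames = list(range(10))                 # e.g. 10 video frames
subtasks = chunk(frames, 4)              # 3 subtasks, workable in parallel
labeled = [[f"label-{f}" for f in sub] for sub in subtasks]
final = merge(labeled)                   # stitched back into one response
```

The parallelism comes from the subtasks being independent; the merge step is where pipeline-specific logic (consistent object IDs across video chunks, recombining taxonomy branches) would live.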