You do not need to audit every single task, though we suggest auditing at least 10% of each batch to gauge quality, evaluate labeler performance, and help labelers improve.
You can audit tasks in one of three ways:
- Audit all completed tasks from the Batches page
- Audit an individual task from the Batches page
- Audit an individual task from the Labeler Management page
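If you export the IDs of a batch's completed tasks, the 10% guideline above can be applied programmatically. Here is a minimal sketch; the helper name and the idea of working from an exported list of task IDs are illustrative, not a feature of the platform:

```python
import math
import random

def sample_for_audit(task_ids, fraction=0.10, seed=None):
    """Randomly pick at least `fraction` of a batch's completed tasks to audit.

    `task_ids` is any exported list of task identifiers (hypothetical input).
    """
    rng = random.Random(seed)
    # Round up so small batches still get at least one audited task.
    n = max(1, math.ceil(len(task_ids) * fraction))
    return rng.sample(task_ids, n)

# Example: a batch of 48 completed tasks -> audit ceil(48 * 0.10) = 5 of them.
batch = [f"task-{i}" for i in range(48)]
to_audit = sample_for_audit(batch, seed=7)
print(len(to_audit))  # 5
```

Random sampling keeps the audited subset representative of the whole batch rather than biased toward, say, the earliest submissions.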
1: Auditing all tasks from the Batches page
If you click into a batch with completed tasks (it doesn't have to be completely finished), you will see a button that says "Start Auditing". Clicking on this button will open up a separate page where you can go through all the completed task results. Once you have finished auditing one task, the next task will automatically come into view.
2: Auditing a specific task from the Batches page
If you click into a batch with completed tasks, you can scroll through the completed tasks and click to audit a specific task. This approach gives you more control over which tasks of interest you look at. The task will open in a separate audit window - but only that one task. To audit more tasks, you will have to individually click "Audit Task" on each task you'd like to open.
If you want to find specific tasks to audit, you can even apply filters to help you narrow down your search.
3: Auditing a specific task from the Labeler Management page
If you want to audit a selection of tasks that a specific annotator worked on, you can do so by opening up the Labeler Management page and clicking into a specific annotator whose work you are interested in. Doing this will show you the entire activity log of tasks that an annotator has worked on - and you can open up the audit view from here as well.
For every task that you choose to audit, you can either:
- Approve the task
- Fix the task and then accept it
- Reject the task and send it back for a re-do
You should approve a task if you see no issues with it as presented and you have confidence in its accuracy.
You should fix & save a task if it has a minor issue that you can easily correct yourself - and it isn't worth asking your annotators to redo it.
However, if a task was poorly done, you should reject it, provide feedback on why the annotated results did not meet the bar, and send it back into the labeling queue for your annotators to redo.
Any feedback that you share with annotators will appear in the activity log on their labeler dashboard, along with the status of their task (approved, fixed, or rejected). If feedback is available, a "Feedback Available" flag will appear in that column, and annotators can open the task to see the specific feedback.