Review Labeler Performance

Navigate to the labeler management section of your navigation pane to review your labelers’ performance. The labeler management dashboard shows high-level metrics for your project, like the number of claimed tasks, total throughput, and average accuracy. You can also see specific performance by labeler.

Project-Level Metrics

Remaining Tasks in Queue
These charts show how many remaining tasks at each stage of the pipeline have yet to be worked on. Claimed attempts & reviews are the tasks that have been assigned to your labelers. Every labeler will at minimum have 1 claimed task (the next task they will see in their queue when they open it up), but you can also go into batches and manually assign additional tasks to specific labelers. Claimed tasks are only those currently assigned to labelers, so any finished tasks no longer show up.

These graphs are split into attempts and reviews. Using these graphs, you can see whether you have a healthy balance between outstanding attempts & outstanding reviews. For example, if you have a lot of unclaimed reviews but not a lot of unclaimed attempts, you might want to promote more annotators to be reviewers to help go through the review queue faster.

If you do not have any layers of review specified, then you do not need to worry about the reviews graph.
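
If it helps to make the balance check concrete, here is a minimal sketch of the kind of comparison described above. The task counts and the 3:1 backlog threshold are assumptions for illustration only; the dashboard charts this for you and does not expose this logic.

```python
# Hypothetical sketch: flagging a review backlog from the two queue counts.
# The 3:1 threshold is an assumption chosen for illustration.

def queue_balance(unclaimed_attempts: int, unclaimed_reviews: int, ratio: float = 3.0) -> str:
    """Suggest promoting reviewers when unclaimed reviews far outnumber unclaimed attempts."""
    if unclaimed_attempts == 0 and unclaimed_reviews == 0:
        return "Queues are empty."
    if unclaimed_reviews > ratio * max(unclaimed_attempts, 1):
        return "Review backlog: consider promoting more attempters to reviewers."
    return "Attempt and review queues look balanced."

print(queue_balance(unclaimed_attempts=40, unclaimed_reviews=400))
```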


Task Throughput
Throughput shows the total number of submissions that took place over a period of time that you specify by choosing different date ranges. You can even see how your submissions are split out across attempts & reviews.
Monitor task throughput to get a sense of how many tasks your labeling team can get through during a day, week, or month. You can also use this information to determine if you need more workers to achieve your throughput goals.
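
As a rough sketch of what the throughput chart is counting, the snippet below tallies submissions per day, split by attempts and reviews. The submission records and field names are invented for illustration; the dashboard computes this from your project's real task history.

```python
# Hypothetical sketch: daily throughput from a list of submission records.
from collections import Counter
from datetime import date

submissions = [
    {"kind": "attempt", "submitted_on": date(2023, 5, 1)},
    {"kind": "review",  "submitted_on": date(2023, 5, 1)},
    {"kind": "attempt", "submitted_on": date(2023, 5, 2)},
]

# Count submissions per (day, kind) pair, mirroring the attempts/reviews split.
daily = Counter((s["submitted_on"], s["kind"]) for s in submissions)
for (day, kind), count in sorted(daily.items()):
    print(day, kind, count)
```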


Accuracy
Evaluation Task Accuracy shows the average score across all evaluation tasks in the date range that you specify. You can see how task accuracy has changed over time - and make adjustments to your workforce or your quality tasks as needed.
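
For clarity, this metric is just the mean score of the evaluation tasks completed inside the chosen date range. The sketch below shows that calculation on made-up records; the field names and scores are assumptions for illustration, not the platform's data model.

```python
# Hypothetical sketch: average evaluation-task score within a date range.
from datetime import date

evaluation_tasks = [
    {"completed_on": date(2023, 5, 1), "score": 0.92},
    {"completed_on": date(2023, 5, 3), "score": 0.81},
    {"completed_on": date(2023, 6, 1), "score": 0.70},
]

start, end = date(2023, 5, 1), date(2023, 5, 31)
in_range = [t["score"] for t in evaluation_tasks if start <= t["completed_on"] <= end]
average_accuracy = sum(in_range) / len(in_range) if in_range else None
print(average_accuracy)  # 0.865 for the two May tasks
```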


Labeler-Specific Metrics

Using the labeler insights table, you can see specific metrics on how an individual annotator is performing. You'll see several metrics for every annotator, illustrated in the sketch after this list:

  • Submissions: The number of tasks they worked on in a given time range - broken out by attempts and reviews
  • Time Spent: How long they spent working on their tasks in the given time range
  • Efficiency: How long each task took to complete on average
  • Evaluation Task Accuracy: The average accuracy score of their last 5 evaluation tasks
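
The sketch below shows how these four metrics relate to one another for a single annotator. The task-log structure, field names, and minute values are assumptions for illustration; the dashboard derives the real numbers from your project's task history.

```python
# Hypothetical sketch: per-labeler metrics from a made-up task log.
task_log = [
    {"labeler": "ann@example.com", "kind": "attempt", "minutes": 4.0, "eval_score": None},
    {"labeler": "ann@example.com", "kind": "attempt", "minutes": 6.0, "eval_score": 0.9},
    {"labeler": "ann@example.com", "kind": "review",  "minutes": 2.0, "eval_score": None},
]

rows = [t for t in task_log if t["labeler"] == "ann@example.com"]
submissions = len(rows)                                     # tasks worked on in the range
time_spent = sum(t["minutes"] for t in rows)                # total minutes in the range
efficiency = time_spent / submissions                       # average minutes per task
eval_scores = [t["eval_score"] for t in rows if t["eval_score"] is not None]
recent = eval_scores[-5:]                                   # last 5 evaluation tasks
evaluation_accuracy = sum(recent) / len(recent) if recent else None

print(submissions, time_spent, efficiency, evaluation_accuracy)
```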

The default time range for metrics shown is the last day. However, you can easily change the time frame to suit what you're looking for.


Choose the time range you want to see metrics for.

You can also set filters to show annotators that meet certain criteria. Filters can be based on Role, Status, Completed Tasks, Efficiency, and Evaluation Task Accuracy. For example, you might filter for annotators who are attempters and have very slow times, to figure out whether you should remove slow performers from your project or pull them aside to see what's going on.
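
In code terms, that example filter is just a combination of a role check and an efficiency threshold, as in the sketch below. The annotator records and the 10-minute cutoff are assumptions for illustration; in practice you set these filters directly in the table.

```python
# Hypothetical sketch: the "attempters with slow times" filter from the example above.
annotators = [
    {"email": "a@example.com", "role": "attempter", "efficiency_minutes": 12.5},
    {"email": "b@example.com", "role": "reviewer",  "efficiency_minutes": 3.0},
    {"email": "c@example.com", "role": "attempter", "efficiency_minutes": 4.2},
]

slow_attempters = [
    a for a in annotators
    if a["role"] == "attempter" and a["efficiency_minutes"] > 10  # assumed threshold
]
print([a["email"] for a in slow_attempters])  # ['a@example.com']
```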


Changing Annotator Roles

The labeler table is also where you can change the roles of your annotators. Simply select the annotators you want to apply a change to, and change their settings.

  • Attempter vs Reviewer: Specify whether an annotator should also be a reviewer who receives tasks annotated by attempters and makes any additional edits, or whether they should just be an attempter. Reviewers will work on reviewing outstanding attempts - but will also annotate tasks from scratch if there are no more attempts waiting to be reviewed.
  • Enabled vs Disabled: Change who is permissioned onto the project. If an annotator is disabled, they will no longer be able to label on this project. If you remove someone from your project from the Labeling Team page, they will automatically become disabled as well.
  • Trusted Annotator: Trusted annotators are not subject to evaluation tasks. Make someone a trusted annotator if you trust them wholeheartedly and want to save them the time of doing evaluation tasks.

Individual Annotator Logs

You can deep dive into any labeler’s performance by clicking on their email. This will pull up that labeler’s task log of all the tasks they have completed on your project.


View completed tasks or accuracy scores by toggling between “Completed Tasks” and “Benchmarks.”
For regular tasks (completed tasks), accuracy is based on how much a review or an audit changed compared to the original submission. Attempts will have accuracy scored based on the review, while reviews will have accuracy scored based on your audits.
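
One simple way to think about a change-based score is the share of labels the reviewer (or auditor) left untouched, as in the sketch below. How the platform actually quantifies "how much changed" is not specified here, so treat the function and the example labels as illustrative assumptions only.

```python
# Hypothetical sketch: accuracy as the fraction of labels left unchanged by the review.
def change_based_accuracy(original: dict, reviewed: dict) -> float:
    """Share of the original labels that the review kept unchanged."""
    if not original:
        return 1.0
    unchanged = sum(1 for key, value in original.items() if reviewed.get(key) == value)
    return unchanged / len(original)

attempt = {"box_1": "car", "box_2": "pedestrian", "box_3": "car"}
review  = {"box_1": "car", "box_2": "cyclist",    "box_3": "car"}
print(change_based_accuracy(attempt, review))  # ~0.67: one of three labels was changed
```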
