For video task types (`videoannotation` and `videoplaybackannotation`), in addition to annotating objects in the video, you can also request annotation of events that happen during the video.
The main difference between events and annotations is that events are not linked to a specific object in the video, but rather to a single frame or a range of frames. You can provide any set of events to annotate through the `events_to_annotate` API field, and they can relate to events in either the video frames or the audio.
### Events Request Format

```json
{
  "events_to_annotate": ["Environment", "Action", "Turning"]
}
```
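As a concrete illustration, here is a minimal sketch of creating a video playback annotation task that includes `events_to_annotate`, using Python's `requests` library. The endpoint URL, the authentication scheme, and every payload field other than `events_to_annotate` are assumptions for illustration, not part of this section.

```python
import requests

# Hypothetical payload: only events_to_annotate comes from this section;
# the other fields are placeholders.
payload = {
    "callback_url": "https://example.com/callback",
    "attachments": ["https://example.com/video.mp4"],
    "events_to_annotate": ["Environment", "Action", "Turning"],
}

resp = requests.post(
    "https://api.scale.com/v1/task/videoplaybackannotation",  # assumed endpoint
    json=payload,
    auth=("YOUR_API_KEY", ""),  # assumed: HTTP basic auth with the API key as username
)
resp.raise_for_status()
print(resp.json().get("task_id"))
```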
### Events Response Format

If the annotation was completed successfully, the `events` field will contain an array of annotated events. Events have the following schema:
| Key | Type | Description |
|---|---|---|
| `label` | string | The label for the event, which will be one of the specified `task.params.events_to_annotate`. |
| `type` | string | The type of event. This can be `point` if the event happens in a single frame, or `range` if it has a start and an end. |
| `start` | number | Frame where the event starts. |
| `startTime` | number | The timestamp of the first frame of the event. |
| `end` | number | Frame where the event ends (only if the type is `range`). |
| `endTime` | number | The timestamp of the last frame of the event. |
| `attributes` | object | Key/value pairs for frame attributes, as per the global attributes specification. |
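To make this schema easier to consume in code, the sketch below models an event as a Python `TypedDict`. The class name and the concrete Python types are assumptions that mirror the table above. A completed response might then look like the JSON example that follows.

```python
# NotRequired needs Python 3.11+ (or typing_extensions on older versions).
from typing import Literal, NotRequired, TypedDict

class AnnotatedEvent(TypedDict):
    """One entry in the response's `events` array (class name is illustrative)."""
    label: str                               # one of task.params.events_to_annotate
    type: Literal["point", "range"]          # single frame vs. frame range
    start: int                               # frame where the event starts
    startTime: NotRequired[float]            # timestamp of the first frame
    end: NotRequired[int]                    # only present when type is "range"
    endTime: NotRequired[float]              # timestamp of the last frame
    attributes: NotRequired[dict[str, str]]  # frame attributes (string values assumed)
```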
```json
{
  "events": [
    {
      "label": "Mentioned",
      "type": "range",
      "start": 3,
      "end": 10
    },
    {
      "label": "Mentioned",
      "type": "point",
      "start": 8,
      "attributes": {
        "Speaker": "Primary"
      }
    },
    ...
  ]
}
```
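Finally, a rough sketch of consuming this response: the helper below separates point and range events and prints a short summary of each. The function name is hypothetical, and it assumes `task_response` is the parsed JSON object shown above.

```python
def summarize_events(task_response: dict) -> None:
    """Print a short summary of each annotated event (illustrative helper)."""
    for event in task_response.get("events", []):
        label = event["label"]
        if event["type"] == "range":
            # Assuming the range includes both the start and end frames.
            length = event["end"] - event["start"] + 1
            print(f"{label}: frames {event['start']}-{event['end']} ({length} frames)")
        else:
            # Point events happen in a single frame.
            print(f"{label}: frame {event['start']}")
        for name, value in event.get("attributes", {}).items():
            print(f"  {name}: {value}")
```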