{
  "callback_url": "http://www.example.com/callback",
  "instruction": "Color all points making up the desired objects in the scene.",
  "labels": [
    "car", "pedestrian", "vegetation", "road"
  ],
  "attachment_type": "json",
  "lidar_task": "5cc1bfaa34489c006cfd6fc3",
  "lidar_task_frames": [1, 2]
}
Instead of creating LiDAR Segmentation tasks from scratch, you can bring already completed work from a LiDAR Annotation task into the LiDAR Segmentation task. This persists the cuboids produced in the annotation step. You can also specify which subset of frames should be included as the source material.
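As a rough illustration, the payload shown above could be submitted with a plain HTTP client such as Python's requests. The endpoint path and basic-auth scheme below are assumptions based on the usual task-creation pattern and should be checked against the API reference.

```python
import requests

API_KEY = "live_xxxxxxxx"  # placeholder; assumption: API key used as the basic-auth username

# Mirrors the JSON example above.
payload = {
    "callback_url": "http://www.example.com/callback",
    "instruction": "Color all points making up the desired objects in the scene.",
    "labels": ["car", "pedestrian", "vegetation", "road"],
    "attachment_type": "json",
    "lidar_task": "5cc1bfaa34489c006cfd6fc3",
    "lidar_task_frames": [1, 2],
}

# Assumption: the creation endpoint follows the usual /v1/task/<type> pattern;
# confirm the exact path in the API reference.
response = requests.post(
    "https://api.scale.com/v1/task/lidarsegmentation",
    json=payload,
    auth=(API_KEY, ""),  # basic auth with an empty password
)
response.raise_for_status()
print(response.json())
```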
The following differences from the previous approach apply (a short sketch combining them follows the list):

- A new parameter, lidar_task, contains the identifier of the LiDAR Annotation task to be used as the source. The lidar_task needs to be in a completed state.
- You don't need to add the attachments and attachment_type parameters, as the Frame objects will be taken from the source LiDAR Annotation task.
- A new optional parameter, lidar_task_frames, lets you specify an array of frame indexes selecting which subset of frames to use from the LiDAR Annotation task. If omitted, all frames will be used.
  - For example, starting from a completed LiDAR Annotation task with five frames and wanting to use all frames except the last one, the parameter would look like lidar_task_frames: [0, 1, 2, 3].
- The labels parameter needs to be a superset of the label set used on the original LiDAR Annotation task.
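To tie these rules together, here is a hedged Python sketch that assembles the request body from a completed LiDAR Annotation task. The helper name, its arguments, and the placeholder label and frame values are purely illustrative and not part of the API.

```python
def build_segmentation_payload(lidar_task_id, original_labels, total_frames,
                               extra_labels=(), frame_indexes=None):
    """Illustrative helper (not part of the API): build a LiDAR Segmentation
    request body from a completed LiDAR Annotation task."""
    # labels must be a superset of the label set used on the source task,
    # so start from the original labels and append any new ones.
    labels = list(dict.fromkeys(list(original_labels) + list(extra_labels)))

    payload = {
        "callback_url": "http://www.example.com/callback",
        "instruction": "Color all points making up the desired objects in the scene.",
        "labels": labels,
        "lidar_task": lidar_task_id,
    }

    # lidar_task_frames is optional: omit it entirely to use every frame.
    if frame_indexes is not None:
        if not all(0 <= i < total_frames for i in frame_indexes):
            raise ValueError("frame indexes must refer to frames of the source task")
        payload["lidar_task_frames"] = list(frame_indexes)

    return payload


# Five source frames, keeping everything except the last one (indexes 0-3).
body = build_segmentation_payload(
    "5cc1bfaa34489c006cfd6fc3",
    original_labels=["car", "pedestrian", "vegetation", "road"],
    total_frames=5,
    frame_indexes=[0, 1, 2, 3],
)
```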