CADC

This topic describes how to manage the “CADC” dataset.

“CADC” is a fusion dataset with 9 sensors, including 8 cameras and 1 lidar, and carries Box3D labels on its point cloud data (Fig. 6). See this page for more details about this dataset.

Fig. 6 The preview of a point cloud from “CADC” with Box3D labels.

Authorize a Client Instance

First of all, create a GAS client.

from tensorbay import GAS
from tensorbay.dataset import FusionDataset

# Please visit `https://gas.graviti.com/tensorbay/developer` to get the AccessKey.
gas = GAS("<YOUR_ACCESSKEY>")

Create Fusion Dataset

Then, create a fusion dataset client by passing the fusion dataset name and the is_fusion argument to the GAS client.

gas.create_dataset("CADC", is_fusion=True)

List Dataset Names

To check whether the “CADC” fusion dataset has been created, you can list all your available datasets. See this page for details.

The datasets listed here include both datasets and fusion datasets.

gas.list_dataset_names()
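
Since the returned names are iterable, a quick membership check also works (a minimal sketch):

# Iterates over the listed names and verifies "CADC" is among them.
assert "CADC" in gas.list_dataset_names()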

Organize Fusion Dataset

Now we describe how to organize the “CADC” fusion dataset into a FusionDataset instance before uploading it to TensorBay. Organizing “CADC” takes the following steps.

Write the Catalog

The first step is to write the catalog. The catalog is a JSON file that contains all the label information of a dataset. See this page for more details. The only annotation type in “CADC” is Box3D, with 10 categories and 9 attributes.

{
    "BOX3D": {
        "isTracking": true,
        "categories": [
            { "name": "Animal" },
            { "name": "Bicycle" },
            { "name": "Bus" },
            { "name": "Car" },
            { "name": "Garbage_Container_on_Wheels" },
            { "name": "Pedestrian" },
            { "name": "Pedestrian_With_Object" },
            { "name": "Traffic_Guidance_Objects" },
            { "name": "Truck" },
            { "name": "Horse_and_Buggy" }
        ],
        "attributes": [
            {
                "name": "stationary",
                "type": "boolean"
            },
            {
                "name": "camera_used",
                "enum": [0, 1, 2, 3, 4, 5, 6, 7, null]
            },
            {
                "name": "state",
                "enum": ["Moving", "Parked", "Stopped"],
                "parentCategories": ["Car", "Truck", "Bus", "Bicycle", "Horse_and_Buggy"]
            },
            {
                "name": "truck_type",
                "enum": [
                    "Construction_Truck",
                    "Emergency_Truck",
                    "Garbage_Truck",
                    "Pickup_Truck",
                    "Semi_Truck",
                    "Snowplow_Truck"
                ],
                "parentCategories": ["Truck"]
            },
            {
                "name": "bus_type",
                "enum": ["Coach_Bus", "Transit_Bus", "Standard_School_Bus", "Van_School_Bus"],
                "parentCategories": ["Bus"]
            },
            {
                "name": "age",
                "enum": ["Adult", "Child"],
                "parentCategories": ["Pedestrian", "Pedestrian_With_Object"]
            },
            {
                "name": "traffic_guidance_type",
                "enum": ["Permanent", "Moveable"],
                "parentCategories": ["Traffic_Guidance_Objects"]
            },
            {
                "name": "rider_state",
                "enum": ["With_Rider", "Without_Rider"],
                "parentCategories": ["Bicycle"]
            },
            {
                "name": "points_count",
                "type": "integer",
                "minimum": 0
            }
        ]
    }
}

Note

The annotations for “CADC” have tracking information, hence the value of isTracking should be set to true.
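
For example, a tracked Box3D label carries an instance ID that links the same object across frames. A minimal sketch (all values below are made up for illustration):

from tensorbay.label import LabeledBox3D

box = LabeledBox3D(
    size=(1.8, 4.5, 1.6),
    translation=(10.0, -2.0, 0.8),
    category="Car",
    attributes={"stationary": False, "state": "Moving"},
    # Boxes of the same object in different frames share this instance ID,
    # which is what isTracking=true in the catalog declares.
    instance="b8a1e7f0-28a5-4f1a-9bf3-000000000000",  # made-up tracking ID
)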

Write the Dataloader

The second step is to write the dataloader. The dataloader organizes all the files and annotations of “CADC” into a FusionDataset instance. The code block below displays the “CADC” dataloader.

  1#!/usr/bin/env python3
  2#
  3# Copyright 2021 Graviti. Licensed under MIT License.
  4#
  5# pylint: disable=invalid-name
  6
  7"""Dataloader of CADC dataset."""
  8
  9import json
 10import os
 11from datetime import datetime
 12from typing import Any, Dict, List
 13
 14import quaternion
 15
 16from tensorbay.dataset import Data, Frame, FusionDataset
 17from tensorbay.exception import ModuleImportError
 18from tensorbay.label import LabeledBox3D
 19from tensorbay.opendataset._utility import glob
 20from tensorbay.sensor import Camera, Lidar, Sensors
 21
 22DATASET_NAME = "CADC"
 23
 24
 25def CADC(path: str) -> FusionDataset:
 26    """`CADC <http://cadcd.uwaterloo.ca/index.html>`_ dataset.
 27
 28    The file structure should be like::
 29
 30        <path>
 31            2018_03_06/
 32                0001/
 33                    3d_ann.json
 34                    labeled/
 35                        image_00/
 36                            data/
 37                                0000000000.png
 38                                0000000001.png
 39                                ...
 40                            timestamps.txt
 41                        ...
 42                        image_07/
 43                            data/
 44                            timestamps.txt
 45                        lidar_points/
 46                            data/
 47                            timestamps.txt
 48                        novatel/
 49                            data/
 50                            dataformat.txt
 51                            timestamps.txt
 52                ...
 53                0018/
 54                calib/
 55                    00.yaml
 56                    01.yaml
 57                    02.yaml
 58                    03.yaml
 59                    04.yaml
 60                    05.yaml
 61                    06.yaml
 62                    07.yaml
 63                    extrinsics.yaml
 64                    README.txt
 65            2018_03_07/
 66            2019_02_27/
 67
 68    Arguments:
 69        path: The root directory of the dataset.
 70
 71    Returns:
 72        Loaded `~tensorbay.dataset.dataset.FusionDataset` instance.
 73
 74    """
 75    root_path = os.path.abspath(os.path.expanduser(path))
 76
 77    dataset = FusionDataset(DATASET_NAME)
 78    dataset.notes.is_continuous = True
 79    dataset.load_catalog(os.path.join(os.path.dirname(__file__), "catalog.json"))
 80
 81    for date in os.listdir(root_path):
 82        date_path = os.path.join(root_path, date)
 83        sensors = _load_sensors(os.path.join(date_path, "calib"))
 84        for index in os.listdir(date_path):
 85            if index == "calib":
 86                continue
 87
 88            segment = dataset.create_segment(f"{date}-{index}")
 89            segment.sensors = sensors
 90            segment_path = os.path.join(root_path, date, index)
 91            data_path = os.path.join(segment_path, "labeled")
 92
 93            with open(os.path.join(segment_path, "3d_ann.json"), encoding="utf-8") as fp:
 94                # The first line of the json file is the json body.
 95                annotations = json.loads(fp.readline())
 96            timestamps = _load_timestamps(sensors, data_path)
 97            for frame_index, annotation in enumerate(annotations):
 98                segment.append(_load_frame(sensors, data_path, frame_index, annotation, timestamps))
 99
100    return dataset
101
102
103def _load_timestamps(sensors: Sensors, data_path: str) -> Dict[str, List[str]]:
104    timestamps = {}
105    for sensor_name in sensors.keys():
106        data_folder = f"image_{sensor_name[-2:]}" if sensor_name != "LIDAR" else "lidar_points"
107        timestamp_file = os.path.join(data_path, data_folder, "timestamps.txt")
108        with open(timestamp_file, encoding="utf-8") as fp:
109            timestamps[sensor_name] = fp.readlines()
110
111    return timestamps
112
113
114def _load_frame(
115    sensors: Sensors,
116    data_path: str,
117    frame_index: int,
118    annotation: Dict[str, Any],
119    timestamps: Dict[str, List[str]],
120) -> Frame:
121    frame = Frame()
122    for sensor_name in sensors.keys():
 123        # The data file name is a 10-digit zero-padded frame index, e.g.:
 124        # 0000000000.png
 125        # 0000000001.bin
126        stem = f"{frame_index:010}"
127
128        # Each line of the timestamps file looks like:
129        # 2018-03-06 15:02:33.000000000
130        timestamp = datetime.strptime(
131            timestamps[sensor_name][frame_index][:23], "%Y-%m-%d %H:%M:%S.%f"
132        ).timestamp()
133        if sensor_name != "LIDAR":
 134            # Each camera has a name like "CAM00", and its images live in a
 135            # corresponding folder like "image_00".
136            camera_folder = f"image_{sensor_name[-2:]}"
137            image_file = f"{stem}.png"
138
139            data = Data(
140                os.path.join(data_path, camera_folder, "data", image_file),
141                target_remote_path=f"{camera_folder}-{image_file}",
142                timestamp=timestamp,
143            )
144        else:
145            data = Data(
146                os.path.join(data_path, "lidar_points", "data", f"{stem}.bin"),
147                timestamp=timestamp,
148            )
149            data.label.box3d = _load_labels(annotation["cuboids"])
150
151        frame[sensor_name] = data
152    return frame
153
154
155def _load_labels(boxes: List[Dict[str, Any]]) -> List[LabeledBox3D]:
156    labels = []
157    for box in boxes:
158        dimension = box["dimensions"]
159        position = box["position"]
160
161        attributes = box["attributes"]
162        attributes["stationary"] = box["stationary"]
163        attributes["camera_used"] = box["camera_used"]
164        attributes["points_count"] = box["points_count"]
165
166        label = LabeledBox3D(
167            size=(
 168                dimension["y"],  # CADC's "y" dimension is the extent from front to back.
 169                dimension["x"],  # CADC's "x" dimension is the extent from left to right.
170                dimension["z"],
171            ),
172            translation=(
173                position["x"],  # "x" axis points to the forward facing direction of the object.
174                position["y"],  # "y" axis points to the left direction of the object.
175                position["z"],
176            ),
177            rotation=quaternion.from_rotation_vector((0, 0, box["yaw"])),
178            category=box["label"],
179            attributes=attributes,
180            instance=box["uuid"],
181        )
182        labels.append(label)
183
184    return labels
185
186
187def _load_sensors(calib_path: str) -> Sensors:
188    try:
189        import yaml  # pylint: disable=import-outside-toplevel
190    except ModuleNotFoundError as error:
191        raise ModuleImportError(module_name=error.name, package_name="pyyaml") from error
192
193    sensors = Sensors()
194
195    lidar = Lidar("LIDAR")
196    lidar.set_extrinsics()
197    sensors.add(lidar)
198
199    with open(os.path.join(calib_path, "extrinsics.yaml"), encoding="utf-8") as fp:
200        extrinsics = yaml.load(fp, Loader=yaml.FullLoader)
201
202    for camera_calibration_file in glob(os.path.join(calib_path, "[0-9]*.yaml")):
203        with open(camera_calibration_file, encoding="utf-8") as fp:
204            camera_calibration = yaml.load(fp, Loader=yaml.FullLoader)
205
206        # camera_calibration_file looks like:
207        # /path-to-CADC/2018_03_06/calib/00.yaml
208        camera_name = f"CAM{os.path.splitext(os.path.basename(camera_calibration_file))[0]}"
209        camera = Camera(camera_name)
210        camera.description = camera_calibration["camera_name"]
211
212        camera.set_extrinsics(matrix=extrinsics[f"T_LIDAR_{camera_name}"])
213
214        camera_matrix = camera_calibration["camera_matrix"]["data"]
215        camera.set_camera_matrix(matrix=[camera_matrix[:3], camera_matrix[3:6], camera_matrix[6:9]])
216
217        distortion = camera_calibration["distortion_coefficients"]["data"]
218        camera.set_distortion_coefficients(**dict(zip(("k1", "k2", "p1", "p2", "k3"), distortion)))
219
220        sensors.add(camera)
221    return sensors

create a fusion dataset

To load a fusion dataset, we first need to create an instance of FusionDataset (L77).

Note that after creating the fusion dataset, you need to set the is_continuous attribute of notes to True (L78), since the frames in each fusion segment are time-continuous.

load the catalog

As with a regular dataset, you also need to load the catalog (L79). The catalog file “catalog.json” is in the same directory as the dataloader file.
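
Condensed, these opening steps of the dataloader (L77-L79) look like this:

import os

from tensorbay.dataset import FusionDataset

dataset = FusionDataset("CADC")
dataset.notes.is_continuous = True  # the frames in each segment are time-continuous
# The catalog file sits next to the dataloader file.
dataset.load_catalog(os.path.join(os.path.dirname(__file__), "catalog.json"))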

create fusion segments

In this example, we create fusion segments by dataset.create_segment(SEGMENT_NAME) (L88). The data under each index subfolder (L32) of a date folder (L31) goes into one fusion segment, and the two folder names are combined into the segment name (e.g. “2018_03_06-0001”), so that the frames in each segment are continuous.

add sensors to fusion segments

After constructing the fusion segment, the sensors corresponding to different data should be added to the fusion segment (L89).

In “CADC”, projection between sensors is needed, so we need not only the name of each sensor but also its calibration parameters.

To manage all the Sensors corresponding to different data (L83, L187), the parameters are extracted from the calibration files.

The Lidar sensor only has extrinsics. Here we regard the lidar as the origin of the point cloud 3D coordinate system and set its extrinsics to the defaults (L196).
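
Calling set_extrinsics() with no arguments gives the lidar an identity transform, as in this minimal sketch:

from tensorbay.sensor import Lidar

lidar = Lidar("LIDAR")
lidar.set_extrinsics()  # no arguments: default (identity) extrinsics, so the
                        # lidar is the origin of the point cloud coordinate system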

To keep the projection relationship between sensors, we set the transform from the camera 3D coordinate system to the lidar 3D coordinate system as the Camera extrinsics (L212).

Besides extrinsics, a Camera sensor also has intrinsics, which are used to project 3D points to 2D pixels.

The intrinsics consist of two parts, CameraMatrix and DistortionCoefficients (L214-L218).
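
Put together, configuring one camera looks like the sketch below; the numbers are illustrative placeholders, while the real values come from the calib/*.yaml files:

from tensorbay.sensor import Camera

camera = Camera("CAM00")
# Intrinsics part 1: the 3x3 camera matrix (focal lengths and principal point).
camera.set_camera_matrix(matrix=[[653.0, 0.0, 653.6], [0.0, 655.5, 508.4], [0.0, 0.0, 1.0]])
# Intrinsics part 2: the distortion coefficients, in the (k1, k2, p1, p2, k3) order used above.
camera.set_distortion_coefficients(k1=-0.33, k2=0.11, p1=0.0, p2=0.0, k3=-0.02)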

add frames to segment

After adding the sensors to the fusion segments, the frames should be added to the continuous segment in order (L97-L98).

Each frame contains the data corresponding to each sensor, and each data should be added to the frame under the key of its sensor name (L151).
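
Stripped down, building one frame and appending it follows this pattern (the paths and timestamp are shortened placeholders, and segment is the fusion segment created earlier):

from tensorbay.dataset import Data, Frame

frame = Frame()
# One data object per sensor, keyed by the sensor name.
frame["CAM00"] = Data("image_00/data/0000000000.png", timestamp=1520348553.0)
frame["LIDAR"] = Data("lidar_points/data/0000000000.bin", timestamp=1520348553.0)
segment.append(frame)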

In fusion datasets, it is common that not all data have labels. In “CADC”, only the point cloud files (lidar data) have Box3D labels (L149). See this page for more details about the Box3D annotation.

Note

The CADC dataloader above uses regular import (L16-L20). You can do the same when writing your own dataloader. However, when you want to contribute your dataloader to the tensorbay.opendataset package, remember to switch to relative import.

Visualize Dataset

Optionally, the organized dataset can be visualized by Pharos, which is a TensorBay SDK plug-in. This step can help users to check whether the dataset is correctly organized. Please see Visualization for more details.

Upload Fusion Dataset

After you finish the dataloader and organize “CADC” into a FusionDataset instance, you can upload it to TensorBay for sharing, reuse, etc.

# fusion_dataset is the FusionDataset instance organized in the "Organize Fusion Dataset" section
fusion_dataset_client = gas.upload_dataset(fusion_dataset, jobs=8)
fusion_dataset_client.commit("initial commit")

Remember to execute the commit step after uploading. If needed, you can re-upload and commit again. Please see this page for more details about version control.

Note

The commit operation can also be done on the GAS Platform.

Read Fusion Dataset

Now you can read the “CADC” dataset from TensorBay.

fusion_dataset = FusionDataset("CADC", gas)

In the “CADC” dataset, there are many FusionSegments: 2018_03_06-0001, 2018_03_07-0001, …

You can get the segment names by listing them all.

fusion_dataset.keys()

You can get a segment by passing the required segment name or the segment index.

fusion_segment = fusion_dataset["2018_03_06/0001"]
fusion_segment = fusion_dataset[0]

Note

If a segment or fusion segment is created without a given name, then its name will be “”.

In the 2018_03_06-0001 fusion segment, there are several sensors. You can get all the sensors by accessing the sensors of the FusionSegment.

sensors = fusion_segment.sensors

In each fusion segment, there is a sequence of frames. You can get one by index.

frame = fusion_segment[0]

In each frame, there are several data objects corresponding to different sensors. You can get each one by its sensor name.

for sensor_name in sensors.keys():
    data = frame[sensor_name]
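
Each data object can also be opened as a file-like object, whether the dataset lives locally or on TensorBay; a minimal sketch:

for sensor_name in sensors.keys():
    with frame[sensor_name].open() as fp:
        raw = fp.read()  # PNG bytes for a camera, packed point cloud for the lidar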

In “CADC”, only the data under the LIDAR sensor has a sequence of Box3D annotations. You can get one by index.

lidar_data = frame["LIDAR"]
label_box3d = lidar_data.label.box3d[0]
category = label_box3d.category
attributes = label_box3d.attributes

There is only one label type in the “CADC” dataset: box3d. The information stored in category is one of the names in the “categories” list of catalog.json. The information stored in attributes is a subset of the attributes in the “attributes” list of catalog.json.
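
For example, you can walk through all the boxes on one point cloud and inspect these fields together with the tracking instance ID:

for box3d in lidar_data.label.box3d:
    print(box3d.category)    # one of the catalog categories, e.g. "Car"
    print(box3d.attributes)  # a dict drawn from the catalog attributes
    print(box3d.instance)    # tracking ID linking the same object across frames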

See this page for more details about the structure of Box3D.

Delete Fusion Dataset

To delete “CADC”, run the following code:

gas.delete_dataset("CADC")