THCHS-30#

This topic describes how to manage the THCHS-30 dataset, which is a dataset with the Sentence label type.

Authorize a Client Instance#

An AccessKey is needed to authenticate identity when using TensorBay.

from tensorbay import GAS

# Please visit `https://gas.graviti.cn/tensorbay/developer` to get the AccessKey.
gas = GAS("<YOUR_ACCESSKEY>")

Create Dataset#

gas.create_dataset("THCHS-30")

Organize Dataset#

Take the following steps to organize the “THCHS-30” dataset into a Dataset instance.

Step 1: Write the Catalog#

A Catalog contains all label information of one dataset and is typically stored in a JSON file. However, the catalog of THCHS-30 is too large to keep as a JSON file; instead of reading it from a JSON file, the subcatalog is built directly from the raw lexicon file. Check the dataloader below for more details.
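As a small sketch of this mapping (the sample line below is illustrative, not copied from the real file), each lexicon line holds a word followed by its phones, and every line becomes one lexicon entry of the SentenceSubcatalog:

from tensorbay.label import SentenceSubcatalog

subcatalog = SentenceSubcatalog()
# An illustrative lexicon line: a word followed by its phones.
line = "你好 n i3 h ao3"
subcatalog.append_lexicon(line.strip().split())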

Important

See catalog table for more catalogs with different label types.

Step 2: Write the Dataloader#

A dataloader is needed to organize the dataset into a Dataset instance.

#!/usr/bin/env python3
#
# Copyright 2021 Graviti. Licensed under MIT License.
#
# pylint: disable=invalid-name

"""Dataloader of THCHS-30 dataset."""

import os
from itertools import islice
from typing import List

from tensorbay.dataset import Data, Dataset
from tensorbay.label import LabeledSentence, SentenceSubcatalog, Word
from tensorbay.opendataset._utility import glob

DATASET_NAME = "THCHS-30"
_SEGMENT_NAME_LIST = ("train", "dev", "test")


def THCHS30(path: str) -> Dataset:
    """`THCHS-30 <http://166.111.134.19:7777/data/thchs30/README.html>`_ dataset.

    The file structure should be like::

        <path>
            lm_word/
                lexicon.txt
            data/
                A11_0.wav.trn
                ...
            dev/
                A11_101.wav
                ...
            train/
            test/

    Arguments:
        path: The root directory of the dataset.

    Returns:
        Loaded :class:`~tensorbay.dataset.dataset.Dataset` instance.

    """
    dataset = Dataset(DATASET_NAME)
    dataset.catalog.sentence = _get_subcatalog(os.path.join(path, "lm_word", "lexicon.txt"))
    for segment_name in _SEGMENT_NAME_LIST:
        segment = dataset.create_segment(segment_name)
        for filename in glob(os.path.join(path, segment_name, "*.wav")):
            data = Data(filename)
            label_file = os.path.join(path, "data", os.path.basename(filename) + ".trn")
            data.label.sentence = _get_label(label_file)
            segment.append(data)
    return dataset


def _get_label(label_file: str) -> List[LabeledSentence]:
    # Each ".trn" file has three lines: the sentence, its spell (pinyin) and its phones,
    # which map to the three positional arguments of LabeledSentence.
    with open(label_file, encoding="utf-8") as fp:
        labels = ((Word(text=text) for text in texts.split()) for texts in fp)
        return [LabeledSentence(*labels)]


def _get_subcatalog(lexicon_path: str) -> SentenceSubcatalog:
    # Build the subcatalog directly from the raw lexicon file, skipping its header lines.
    subcatalog = SentenceSubcatalog()
    with open(lexicon_path, encoding="utf-8") as fp:
        for line in islice(fp, 4, None):
            subcatalog.append_lexicon(line.strip().split())
    return subcatalog

See Sentence annotation for more details.

There are already a number of dataloaders in TensorBay SDK provided by the community. Thus, in addition to writing one, importing an available dataloader is also feasible.

from tensorbay.opendataset import THCHS30

dataset = THCHS30("<path/to/dataset>")

Note

Catalogs are automatically loaded in available dataloaders, so users do not have to write them again.

Important

See dataloader table for dataloaders with different label types.

Visualize Dataset#

Optionally, the organized dataset can be visualized by Pharos, which is a TensorBay SDK plug-in. This step can help users check whether the dataset is correctly organized. Please see Visualization for more details.
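A minimal sketch of this step, assuming Pharos is installed and exposes a visualize() entry point (check the Visualization page for the exact usage):

import pharos

# Launch a local web page to browse the organized dataset.
pharos.visualize(dataset)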

Upload Dataset#

The organized “THCHS-30” dataset can be uploaded to TensorBay for sharing, reuse, etc.

dataset_client = gas.upload_dataset(dataset, jobs=8)
dataset_client.commit("initial commit")

Similar to Git, the commit step after uploading records changes to the dataset as a version. If needed, make modifications and commit again. Please see Version Control for more details.
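As a sketch of a follow-up change (the draft and commit titles here are illustrative), a new version can be recorded with the version control API:

# Open a draft, apply the modifications, then commit them as a new version.
dataset_client.create_draft("draft for label fixes")
# ... modify and re-upload the changed data here ...
dataset_client.commit("second commit")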

Read Dataset#

Now the “THCHS-30” dataset can be read from TensorBay.

dataset = Dataset("THCHS-30", gas)

In the “THCHS-30” dataset, there are three Segments: dev, train and test. Get the segment names by listing them all.

dataset.keys()

Get a segment by passing the required segment name.

segment = dataset["dev"]

In the dev segment, there is a sequence of data, which can be obtained by index.

data = segment[0]
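Each data instance corresponds to one audio file in the dataset. A minimal sketch of reading its content, assuming the remote file is accessed through the open() method of data:

# Open the remote file and read the raw ".wav" bytes.
fp = data.open()
audio_bytes = fp.read()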

In each data instance, there is a sequence of Sentence annotations, which can be obtained by index.

labeled_sentence = data.label.sentence[0]
sentence = labeled_sentence.sentence
spell = labeled_sentence.spell
phone = labeled_sentence.phone

There is only one label type in the “THCHS-30” dataset, which is Sentence. It contains sentence, spell and phone information. See the Sentence label format for more details.
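Since sentence, spell and phone are sequences of Word instances, the plain transcriptions can be recovered by joining the text of each Word; a small sketch:

# Join the Word texts to recover the plain transcriptions.
plain_sentence = " ".join(word.text for word in sentence)
plain_spell = " ".join(word.text for word in spell)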

Delete Dataset#

gas.delete_dataset("THCHS-30")