Tokenizer


Overview

Tokenization is the process of re-segmenting a continuous sequence of characters into a sequence of words according to certain rules. Proper tokenization makes it easier to understand the semantics of the text.

MindSpore provides tokenizers (Tokenizer) for a variety of purposes that help users process text with high performance. Users can build their own vocabulary, use an appropriate tokenizer to split sentences into tokens, and then obtain the index of each token in the vocabulary through lookup operations.
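As a minimal sketch of this workflow (the sample sentence, vocabulary contents and special token below are illustrative assumptions), a vocabulary can be built, the text tokenized, and each token mapped to its index:

import mindspore.dataset as ds
import mindspore.dataset.text as text

# Illustrative one-sentence dataset
dataset = ds.NumpySlicesDataset(["welcome to beijing"], column_names=["text"], shuffle=False)

# Build a vocabulary from a word list; "<unk>" is reserved for out-of-vocabulary tokens
vocab = text.Vocab.from_list(["welcome", "to", "beijing"], special_tokens=["<unk>"])

# Split each sentence into tokens, then look up the index of every token in the vocabulary
dataset = dataset.map(operations=text.WhitespaceTokenizer(), input_columns=["text"])
dataset = dataset.map(operations=text.Lookup(vocab, unknown_token="<unk>"), input_columns=["text"])

for data in dataset.create_dict_iterator(output_numpy=True):
    print(data["text"])  # token indices, e.g. [1 2 3] with "<unk>" occupying index 0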

The tokenizers currently provided by MindSpore are listed below. In addition, users can implement custom tokenizers as needed (see the sketch below the list).

BasicTokenizer: Tokenizes scalar text data according to the specified rules.

BertTokenizer: Tokenizer for processing BERT text data.

JiebaTokenizer: Dictionary-based Chinese string tokenizer.

RegexTokenizer: Tokenizes scalar text data according to the specified regular expression.

SentencePieceTokenizer: Performs tokenization based on the open-source SentencePiece toolkit.

UnicodeCharTokenizer: Tokenizes scalar text data into Unicode characters.

UnicodeScriptTokenizer: Tokenizes scalar text data at Unicode script boundaries.

WhitespaceTokenizer: Tokenizes scalar text data on whitespace.

WordpieceTokenizer: Tokenizes scalar text data based on a word set.

For detailed descriptions of more tokenizers, refer to the API documentation.
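As mentioned above, a custom tokenizer can also be plugged in. The following sketch assumes the text.PythonTokenizer operation described in the API documentation; the splitting rule and sample data are purely illustrative.

import mindspore.dataset as ds
import mindspore.dataset.text as text

# Illustrative user-defined rule: split a sentence on commas
def my_tokenizer(sentence):
    return sentence.split(",")

dataset = ds.NumpySlicesDataset(["apple,pear,peach"], column_names=["text"], shuffle=False)
dataset = dataset.map(operations=text.PythonTokenizer(my_tokenizer))

for data in dataset.create_dict_iterator(output_numpy=True):
    print(text.to_str(data["text"]))  # expected: ['apple' 'pear' 'peach']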

MindSpore Tokenizers

The following sections describe how to use several common tokenizers.

BertTokenizer

BertTokenizer performs tokenization by invoking BasicTokenizer and WordpieceTokenizer.

The following example first builds a text dataset and a list of strings, then uses BertTokenizer to tokenize the dataset, and finally shows the text before and after tokenization.

[1]:
import mindspore.dataset as ds
import mindspore.dataset.text as text

input_list = ["床前明月光", "疑是地上霜", "举头望明月", "低头思故乡", "I am making small mistakes during working hours",
                "😀嘿嘿😃哈哈😄大笑😁嘻嘻", "繁體字"]
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)

print("------------------------before tokenization----------------------------")

for data in dataset.create_dict_iterator(output_numpy=True):
    print(text.to_str(data['text']))

vocab_list = [
  "床", "前", "明", "月", "光", "疑", "是", "地", "上", "霜", "举", "头", "望", "低", "思", "故", "乡",
  "繁", "體", "字", "嘿", "哈", "大", "笑", "嘻", "i", "am", "mak", "make", "small", "mistake",
  "##s", "during", "work", "##ing", "hour", "😀", "😃", "😄", "😁", "+", "/", "-", "=", "12",
  "28", "40", "16", " ", "I", "[CLS]", "[SEP]", "[UNK]", "[PAD]", "[MASK]", "[unused1]", "[unused10]"]

# Build the vocabulary from the word list and tokenize the "text" column with BertTokenizer
vocab = text.Vocab.from_list(vocab_list)
tokenizer_op = text.BertTokenizer(vocab=vocab)
dataset = dataset.map(operations=tokenizer_op)

print("------------------------after tokenization-----------------------------")

for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
    print(text.to_str(i['text']))
------------------------before tokenization----------------------------
床前明月光
疑是地上霜
举头望明月
低头思故乡
I am making small mistakes during working hours
😀嘿嘿😃哈哈😄大笑😁嘻嘻
繁體字
------------------------after tokenization-----------------------------
['床' '前' '明' '月' '光']
['疑' '是' '地' '上' '霜']
['举' '头' '望' '明' '月']
['低' '头' '思' '故' '乡']
['I' 'am' 'mak' '##ing' 'small' 'mistake' '##s' 'during' 'work' '##ing'
 'hour' '##s']
['😀' '嘿' '嘿' '😃' '哈' '哈' '😄' '大' '笑' '😁' '嘻' '嘻']
['繁' '體' '字']

JiebaTokenizer

JiebaTokenizer performs Chinese tokenization based on jieba.

Download the dictionary files hmm_model.utf8 and jieba.dict.utf8, and place them in the specified location.

[2]:
!wget -N https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/datasets/hmm_model.utf8
!wget -N https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/datasets/jieba.dict.utf8
!mkdir -p ./datasets/tokenizer/
!mv hmm_model.utf8 jieba.dict.utf8 -t ./datasets/tokenizer/
!tree ./datasets/tokenizer/
./datasets/tokenizer/
├── hmm_model.utf8
└── jieba.dict.utf8

0 directories, 2 files

The following example first builds a text dataset, then creates a JiebaTokenizer object from the HMM and MP dictionary files, uses it to tokenize the dataset, and finally shows the text before and after tokenization.

[3]:
import mindspore.dataset as ds
import mindspore.dataset.text as text

input_list = ["今天天气太好了我们一起去外面玩吧"]
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)

print("------------------------before tokenization----------------------------")

for data in dataset.create_dict_iterator(output_numpy=True):
    print(text.to_str(data['text']))

# files from open source repository https://github.com/yanyiwu/cppjieba/tree/master/dict
HMM_FILE = "./datasets/tokenizer/hmm_model.utf8"
MP_FILE = "./datasets/tokenizer/jieba.dict.utf8"
jieba_op = text.JiebaTokenizer(HMM_FILE, MP_FILE)
dataset = dataset.map(operations=jieba_op, input_columns=["text"], num_parallel_workers=1)

print("------------------------after tokenization-----------------------------")

for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
    print(text.to_str(i['text']))
------------------------before tokenization----------------------------
今天天气太好了我们一起去外面玩吧
------------------------after tokenization-----------------------------
['今天天气' '太好了' '我们' '一起' '去' '外面' '玩吧']
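Since JiebaTokenizer is dictionary based, words that do not appear in the dictionary files may be split apart. Assuming the add_word interface listed in the API documentation, a custom word can be registered before mapping; the word below is illustrative and the resulting split depends on the dictionary files.

import mindspore.dataset as ds
import mindspore.dataset.text as text

HMM_FILE = "./datasets/tokenizer/hmm_model.utf8"
MP_FILE = "./datasets/tokenizer/jieba.dict.utf8"
jieba_op = text.JiebaTokenizer(HMM_FILE, MP_FILE)

# Register a custom word so that it is kept as a single token (an optional frequency can also be given)
jieba_op.add_word("外面玩")

dataset = ds.NumpySlicesDataset(["今天天气太好了我们一起去外面玩吧"], column_names=["text"], shuffle=False)
dataset = dataset.map(operations=jieba_op, input_columns=["text"], num_parallel_workers=1)

for data in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
    print(text.to_str(data["text"]))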

SentencePieceTokenizer

SentencePieceTokenizer performs tokenization based on SentencePiece, an open-source natural language processing toolkit.

Download the text dataset file botchan.txt and place it in the specified location.

[4]:
!wget -N https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/datasets/botchan.txt
!mkdir -p ./datasets/tokenizer/
!mv botchan.txt ./datasets/tokenizer/
!tree ./datasets/tokenizer/
./datasets/tokenizer/
└── botchan.txt

0 directories, 1 file

The following example first builds a text dataset, constructs a vocab object from the vocab_file file, then uses SentencePieceTokenizer to tokenize the dataset, and finally shows the text before and after tokenization.

[5]:
import mindspore.dataset as ds
import mindspore.dataset.text as text
from mindspore.dataset.text import SentencePieceModel, SPieceTokenizerOutType

input_list = ["I saw a girl with a telescope."]
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)

print("------------------------before tokenization----------------------------")

for data in dataset.create_dict_iterator(output_numpy=True):
    print(text.to_str(data['text']))

# file from MindSpore repository https://gitee.com/mindspore/mindspore/blob/r1.2/tests/ut/data/dataset/test_sentencepiece/botchan.txt
vocab_file = "./datasets/tokenizer/botchan.txt"
vocab = text.SentencePieceVocab.from_file([vocab_file], 5000, 0.9995, SentencePieceModel.UNIGRAM, {})
tokenizer_op = text.SentencePieceTokenizer(vocab, out_type=SPieceTokenizerOutType.STRING)
dataset = dataset.map(operations=tokenizer_op)

print("------------------------after tokenization-----------------------------")

for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
    print(text.to_str(i['text']))
------------------------before tokenization----------------------------
I saw a girl with a telescope.
------------------------after tokenization-----------------------------
['▁I' '▁sa' 'w' '▁a' '▁girl' '▁with' '▁a' '▁te' 'les' 'co' 'pe' '.']
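If integer token ids are needed instead of subword strings, the output type of the tokenizer can be switched. The sketch below reuses the vocab object built in the previous cell and assumes the SPieceTokenizerOutType.INT option.

# Reuse the SentencePiece vocab built above, but output token ids instead of subword strings
tokenizer_id_op = text.SentencePieceTokenizer(vocab, out_type=SPieceTokenizerOutType.INT)

dataset_ids = ds.NumpySlicesDataset(["I saw a girl with a telescope."], column_names=["text"], shuffle=False)
dataset_ids = dataset_ids.map(operations=tokenizer_id_op)

for data in dataset_ids.create_dict_iterator(num_epochs=1, output_numpy=True):
    print(data["text"])  # integer ids corresponding to the subwords shown above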

UnicodeCharTokenizer

UnicodeCharTokenizer tokenizes text into individual Unicode characters.

The following example first builds a text dataset, then uses UnicodeCharTokenizer to tokenize the dataset, and finally shows the text before and after tokenization.

[6]:
import mindspore.dataset as ds
import mindspore.dataset.text as text

input_list = ["Welcome to Beijing!", "北京欢迎您!", "我喜欢English!"]
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)

print("------------------------before tokenization----------------------------")

for data in dataset.create_dict_iterator(output_numpy=True):
    print(text.to_str(data['text']))

tokenizer_op = text.UnicodeCharTokenizer()
dataset = dataset.map(operations=tokenizer_op)

print("------------------------after tokenization-----------------------------")

for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
    print(text.to_str(i['text']).tolist())
------------------------before tokenization----------------------------
Welcome to Beijing!
北京欢迎您!
我喜欢English!
------------------------after tokenization-----------------------------
['W', 'e', 'l', 'c', 'o', 'm', 'e', ' ', 't', 'o', ' ', 'B', 'e', 'i', 'j', 'i', 'n', 'g', '!']
['北', '京', '欢', '迎', '您', '!']
['我', '喜', '欢', 'E', 'n', 'g', 'l', 'i', 's', 'h', '!']

WhitespaceTokenizer

WhitespaceTokenizer tokenizes text on whitespace.

The following example first builds a text dataset, then uses WhitespaceTokenizer to tokenize the dataset, and finally shows the text before and after tokenization.

[7]:
import mindspore.dataset as ds
import mindspore.dataset.text as text

input_list = ["Welcome to Beijing!", "北京欢迎您!", "我喜欢English!"]
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)

print("------------------------before tokenization----------------------------")

for data in dataset.create_dict_iterator(output_numpy=True):
    print(text.to_str(data['text']))

tokenizer_op = text.WhitespaceTokenizer()
dataset = dataset.map(operations=tokenizer_op)

print("------------------------after tokenization-----------------------------")

for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
    print(text.to_str(i['text']).tolist())
------------------------before tokenization----------------------------
Welcome to Beijing!
北京欢迎您!
我喜欢English!
------------------------after tokenization-----------------------------
['Welcome', 'to', 'Beijing!']
['北京欢迎您!']
['我喜欢English!']

WordpieceTokenizer

WordpieceTokenizer performs splitting based on a word set; a resulting token can be a single word from the set or a combination of several words.

The following example first builds a text dataset, constructs a vocab object from the word lists, then uses WordpieceTokenizer to tokenize the dataset, and finally shows the text before and after tokenization.

[8]:
import mindspore.dataset as ds
import mindspore.dataset.text as text

input_list = ["my", "favorite", "book", "is", "love", "during", "the", "cholera", "era", "what",
    "我", "最", "喜", "欢", "的", "书", "是", "霍", "乱", "时", "期", "的", "爱", "情", "您"]
vocab_english = ["book", "cholera", "era", "favor", "##ite", "my", "is", "love", "dur", "##ing", "the"]
vocab_chinese = ["我", '最', '喜', '欢', '的', '书', '是', '霍', '乱', '时', '期', '爱', '情']

dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)

print("------------------------before tokenization----------------------------")

for data in dataset.create_dict_iterator(output_numpy=True):
    print(text.to_str(data['text']))

vocab = text.Vocab.from_list(vocab_english+vocab_chinese)
tokenizer_op = text.WordpieceTokenizer(vocab=vocab)
dataset = dataset.map(operations=tokenizer_op)

print("------------------------after tokenization-----------------------------")

for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
    print(text.to_str(i['text']))
------------------------before tokenization----------------------------
my
favorite
book
is
love
during
the
cholera
era
what
我
最
喜
欢
的
书
是
霍
乱
时
期
的
爱
情
您
------------------------after tokenization-----------------------------
['my']
['favor' '##ite']
['book']
['is']
['love']
['dur' '##ing']
['the']
['cholera']
['era']
['[UNK]']
['我']
['最']
['喜']
['欢']
['的']
['书']
['是']
['霍']
['乱']
['时']
['期']
['的']
['爱']
['情']
['[UNK]']
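In the example above the inputs are already single words. For whole sentences, a word-level tokenizer can be applied first and WordpieceTokenizer applied to its output. The following is a minimal sketch under that assumption; the sample sentence reuses part of the English vocabulary above.

import mindspore.dataset as ds
import mindspore.dataset.text as text

vocab = text.Vocab.from_list(["book", "cholera", "era", "favor", "##ite", "my", "is", "love", "dur", "##ing", "the"])
dataset = ds.NumpySlicesDataset(["my favorite book"], column_names=["text"], shuffle=False)

# Split the sentence into words first, then split each word into wordpieces
dataset = dataset.map(operations=[text.WhitespaceTokenizer(), text.WordpieceTokenizer(vocab=vocab)])

for data in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
    print(text.to_str(data["text"]))  # expected: ['my' 'favor' '##ite' 'book']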