{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 量子神经网络在自然语言处理中的应用\n", "\n", "\n", "`Linux` `CPU` `全流程` `初级` `中级` `高级`\n", "\n", "[![](https://gitee.com/mindspore/docs/raw/r1.2/tutorials/training/source_zh_cn/_static/logo_source.png)](https://gitee.com/mindspore/docs/blob/r1.2/tutorials/training/source_zh_cn/advanced_use/qnn_for_nlp.ipynb)\n", "\n", "## 概述\n", "\n", "在自然语言处理过程中,词嵌入(Word embedding)是其中的重要步骤,它是一个将高维度空间的词向量嵌入到一个维数更低的连续向量空间的过程。当给予神经网络的语料信息不断增加时,网络的训练过程将越来越困难。利用量子力学的态叠加和纠缠等特性,我们可以利用量子神经网络来处理这些经典语料信息,加入其训练过程,并提高收敛精度。下面,我们将简单地搭建一个量子经典混合神经网络来完成一个词嵌入任务。\n", "\n", "\n", "## 环境准备\n", "\n", "导入本教程所依赖模块\n" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import time\n", "from projectq.ops import QubitOperator\n", "import mindspore.ops as ops\n", "import mindspore.dataset as ds\n", "from mindspore import nn\n", "from mindspore.train.callback import LossMonitor\n", "from mindspore import Model\n", "from mindquantum.nn import MindQuantumLayer\n", "from mindquantum import Hamiltonian, Circuit, RX, RY, X, H, UN" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "本教程实现的是一个[CBOW模型](https://blog.csdn.net/u010665216/article/details/78724856),即利用某个词所处的环境来预测该词。例如对于“I love natural language processing”这句话,我们可以将其切分为5个词,\\[\"I\", \"love\", \"natural\", \"language\", \"processing”\\],在所选窗口为2时,我们要处理的问题是利用\\[\"I\", \"love\", \"language\", \"processing\"\\]来预测出目标词汇\"natural\"。这里我们以窗口为2为例,搭建如下的量子神经网络,来完成词嵌入任务。\n", "\n", "![quantum word embedding](https://gitee.com/mindspore/docs/raw/r1.2/tutorials/training/source_zh_cn/advanced_use/images/qcbow.png)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "这里,编码线路会将\"I\"、\"love\"、\"language\"和\"processing\"的编码信息编码到量子线路中,待训练的量子线路由四个Ansatz线路构成,最后我们在量子线路末端对量子比特做$\\text{Z}$基矢上的测量,具体所需测量的比特的个数由所需嵌入空间的维数确定。\n", "\n", "## 数据预处理\n", "\n", "我们对所需要处理的语句进行处理,生成关于该句子的词典,并根据窗口大小来生成样本点。\n" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "def GenerateWordDictAndSample(corpus, window=2):\n", " all_words = corpus.split()\n", " word_set = list(set(all_words))\n", " word_set.sort()\n", " word_dict = {w: i for i,w in enumerate(word_set)}\n", " sampling = []\n", " for index, word in enumerate(all_words[window:-window]):\n", " around = []\n", " for i in range(index, index + 2*window + 1):\n", " if i != index + window:\n", " around.append(all_words[i])\n", " sampling.append([around,all_words[index + window]])\n", " return word_dict, sampling" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'I': 0, 'language': 1, 'love': 2, 'natural': 3, 'processing': 4}\n", "word dict size: 5\n", "samples: [[['I', 'love', 'language', 'processing'], 'natural']]\n", "number of samples: 1\n" ] } ], "source": [ "word_dict, sample = GenerateWordDictAndSample(\"I love natural language processing\")\n", "print(word_dict)\n", "print('word dict size: ', len(word_dict))\n", "print('samples: ', sample)\n", "print('number of samples: ', len(sample))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "根据如上信息,我们得到该句子的词典大小为5,能够产生一个样本点。\n", "\n", "## 编码线路\n", "\n", "为了简单起见,我们使用的编码线路由$\\text{RX}$旋转门构成,结构如下。\n", "\n", "![encoder circuit](https://gitee.com/mindspore/docs/raw/r1.2/tutorials/training/source_zh_cn/advanced_use/images/encoder.png)\n", "\n", "我们对每个量子门都作用一个$\\text{RX}$旋转门。" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], 
"source": [ "def GenerateEncoderCircuit(n_qubits, prefix=''):\n", " if len(prefix) != 0 and prefix[-1] != '_':\n", " prefix += '_'\n", " circ = Circuit()\n", " for i in range(n_qubits):\n", " circ += RX(prefix + str(i)).on(i)\n", " return circ" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "RX(e_0|0)\n", "RX(e_1|1)\n", "RX(e_2|2)" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "GenerateEncoderCircuit(3,prefix='e')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "我们通常用$\\left|0\\right>$和$\\left|1\\right>$来标记二能级量子比特的两个状态,由态叠加原理,量子比特还可以处于这两个状态的叠加态:\n", "\n", "$$\\left|\\psi\\right>=\\alpha\\left|0\\right>+\\beta\\left|1\\right>$$\n", "\n", "对于$n$比特的量子态,其将处于$2^n$维的希尔伯特空间中。对于上面由5个词构成的词典,我们只需要$\\lceil \\log_2 5 \\rceil=3$个量子比特即可完成编码,这也体现出量子计算的优越性。\n", "\n", "例如对于上面词典中的\"love\",其对应的标签为2,2的二进制表示为`010`,我们只需将编码线路中的`e_0`、`e_1`和`e_2`分别设为$0$、$\\pi$和$0$即可。下面我们通过`Evolution`算子来验证以下。" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Label is: 2\n", "Binary label is: 010\n", "Parameters of encoder is: \n", " [0. 3.14159 0. ]\n", "Encoder circuit is: \n", " RX(e_0|0)\n", "RX(e_1|1)\n", "RX(e_2|2)\n", "Encoder parameter names are: \n", " ['e_0', 'e_1', 'e_2']\n", "Amplitude of quantum state is: \n", " [0. 0. 1. 0. 0. 0. 0. 0.]\n", "Label in quantum state is: 2\n" ] } ], "source": [ "from mindquantum.nn import generate_evolution_operator\n", "from mindspore import context\n", "from mindspore import Tensor\n", "\n", "n_qubits = 3 # number of qubits of this quantum circuit\n", "label = 2 # label need to encode\n", "label_bin = bin(label)[-1:1:-1].ljust(n_qubits,'0') # binary form of label\n", "label_array = np.array([int(i)*np.pi for i in label_bin]).astype(np.float32) # parameter value of encoder\n", "encoder = GenerateEncoderCircuit(n_qubits, prefix='e') # encoder circuit\n", "encoder_para_names = encoder.parameter_resolver().para_name # parameter names of encoder\n", "\n", "print(\"Label is: \", label)\n", "print(\"Binary label is: \", label_bin)\n", "print(\"Parameters of encoder is: \\n\", np.round(label_array, 5))\n", "print(\"Encoder circuit is: \\n\", encoder)\n", "print(\"Encoder parameter names are: \\n\", encoder_para_names)\n", "\n", "context.set_context(mode=context.GRAPH_MODE, device_target=\"CPU\")\n", "# quantum state evolution operator\n", "evol = generate_evolution_operator(param_names=encoder_para_names, circuit=encoder)\n", "state = evol(Tensor(label_array))\n", "state = state.asnumpy()\n", "quantum_state = state[:, 0] + 1j * state[:, 1]\n", "amp = np.round(np.abs(quantum_state)**2, 3)\n", "\n", "print(\"Amplitude of quantum state is: \\n\", amp)\n", "print(\"Label in quantum state is: \", np.argmax(amp))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "通过上面的验证,我们发现,对于标签为2的数据,最后得到量子态的振幅最大的位置也是2,因此得到的量子态正是对输入标签的编码。我们将对数据编码生成参数数值的过程总结成如下函数。" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "def GenerateTrainData(sample, word_dict):\n", " n_qubits = np.int(np.ceil(np.log2(1 + max(word_dict.values()))))\n", " data_x = []\n", " data_y = []\n", " for around, center in sample:\n", " data_x.append([])\n", " for word in around:\n", " label = word_dict[word]\n", " label_bin = bin(label)[-1:1:-1].ljust(n_qubits,'0')\n", " label_array = [int(i)*np.pi for i in label_bin]\n", " data_x[-1].extend(label_array)\n", " 
{ "cell_type": "markdown", "metadata": {}, "source": [ "As the results above show, the encoded information of the 4 input words is concatenated into one longer vector, which is convenient for the downstream neural network.\n", "\n", "## Ansatz Circuit\n", "\n", "There are many possible ansatz circuits; we choose the following one. One unit of it consists of a layer of $\\text{RY}$ gates and a layer of $\\text{CNOT}$ gates, and repeating this unit $p$ times makes up the whole ansatz.\n", "\n", "![ansatz circuit](https://gitee.com/mindspore/docs/raw/r1.2/tutorials/training/source_zh_cn/advanced_use/images/ansatz.png)\n", "\n", "Define the following function to generate the ansatz circuit." ] },
{ "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "def GenerateAnsatzCircuit(n_qubits, layers, prefix=''):\n", "    if len(prefix) != 0 and prefix[-1] != '_':\n", "        prefix += '_'\n", "    circ = Circuit()\n", "    for l in range(layers):\n", "        for i in range(n_qubits):\n", "            circ += RY(prefix + str(l) + '_' + str(i)).on(i)\n", "        # CNOT layer; the offset alternates between even and odd layers\n", "        for i in range(l % 2, n_qubits - 1, 2):\n", "            circ += X.on(i + 1, i)\n", "    return circ" ] },
{ "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "RY(a_0_0|0)\n", "RY(a_0_1|1)\n", "RY(a_0_2|2)\n", "RY(a_0_3|3)\n", "RY(a_0_4|4)\n", "X(1 <-: 0)\n", "X(3 <-: 2)\n", "RY(a_1_0|0)\n", "RY(a_1_1|1)\n", "RY(a_1_2|2)\n", "RY(a_1_3|3)\n", "RY(a_1_4|4)\n", "X(2 <-: 1)\n", "X(4 <-: 3)" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "GenerateAnsatzCircuit(5, 2, 'a')" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Measurement\n", "\n", "We take the measurement results on different sets of qubit positions as the components of the dimension-reduced data. The procedure is analogous to the bit encoding. For example, when reducing word vectors to 5 dimensions, the data for dimension 3 can be produced as follows:\n", "\n", "- The binary representation of 3 is `00011`.\n", "- Measure the expectation value of the Hamiltonian $Z_0Z_1$ on the final state of the circuit.\n", "\n", "The following function produces the Hamiltonians (hams) needed to generate the data of every dimension, where `n_qubits` is the number of qubits in the circuit and `dims` is the embedding dimension:" ] },
{ "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "def GenerateEmbeddingHamiltonian(dims, n_qubits):\n", "    hams = []\n", "    for i in range(dims):\n", "        s = ''\n", "        # bits of (i + 1), least significant first: bit j set -> measure Z on qubit j\n", "        for j, k in enumerate(bin(i + 1)[-1:1:-1]):\n", "            if k == '1':\n", "                s = s + 'Z' + str(j) + ' '\n", "        hams.append(Hamiltonian(QubitOperator(s)))\n", "    return hams" ] },
{ "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[1.0 Z0, 1.0 Z1, 1.0 Z0 Z1, 1.0 Z2, 1.0 Z0 Z2]" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "GenerateEmbeddingHamiltonian(5, 5)" ] },
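{ "cell_type": "markdown", "metadata": {}, "source": [ "To make these expectation values concrete, the sketch below (plain NumPy, an illustration rather than MindQuantum API) evaluates $\\langle Z_0Z_1\\rangle$ on a computational basis state: on basis states it is just the parity $\\pm 1$ of the measured bits, and on superpositions it interpolates between these values, giving a continuous embedding component." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "def z_expectation(amplitudes, qubit_indices):\n", "    # expectation value of a product of Z operators on the given qubits\n", "    total = 0.0\n", "    for basis, amp in enumerate(amplitudes):\n", "        parity = sum((basis >> q) & 1 for q in qubit_indices) % 2\n", "        total += (1 - 2 * parity) * abs(amp)**2\n", "    return total\n", "\n", "state = np.zeros(2**3)\n", "state[0b010] = 1.0  # basis state with qubit 1 set and qubits 0, 2 cleared\n", "print(z_expectation(state, [0, 1]))  # Z0 gives +1, Z1 gives -1, so Z0 Z1 -> -1.0" ] },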
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Quantum Word Embedding Layer\n", "\n", "The quantum word embedding layer combines the encoder circuit, the trainable circuit and the measurement Hamiltonians above, and embeds `num_embedding` words into `embedding_dim`-dimensional word vectors. Here we also place Hadamard gates at the very beginning of the quantum circuit to prepare the initial state as a uniform superposition, which improves the expressive power of the quantum neural network.\n", "\n", "Below we define the quantum embedding layer, which returns a quantum circuit simulation operator." ] },
{ "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "def QEmbedding(num_embedding, embedding_dim, window, layers, n_threads):\n", "    n_qubits = int(np.ceil(np.log2(num_embedding)))\n", "    hams = GenerateEmbeddingHamiltonian(embedding_dim, n_qubits)\n", "    # prepare the uniform superposition with a layer of Hadamard gates\n", "    circ = UN(H, n_qubits)\n", "    encoder_param_name = []\n", "    ansatz_param_name = []\n", "    for w in range(2 * window):\n", "        # one encoder block plus one ansatz block per context word\n", "        encoder = GenerateEncoderCircuit(n_qubits, 'Encoder_' + str(w))\n", "        ansatz = GenerateAnsatzCircuit(n_qubits, layers, 'Ansatz_' + str(w))\n", "        encoder.no_grad()\n", "        circ += encoder\n", "        circ += ansatz\n", "        encoder_param_name.extend(list(encoder.parameter_resolver()))\n", "        ansatz_param_name.extend(list(ansatz.parameter_resolver()))\n", "    net = MindQuantumLayer(encoder_param_name,\n", "                           ansatz_param_name,\n", "                           circ,\n", "                           hams,\n", "                           n_threads=n_threads)\n", "    return net" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The overall training model is similar to a classical network, consisting of an embedding layer and two fully connected layers; the difference is that the embedding layer here is a quantum neural network. Below we define the quantum neural network CBOW." ] },
{ "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "class CBOW(nn.Cell):\n", "    def __init__(self, num_embedding, embedding_dim, window, layers, n_threads,\n", "                 hidden_dim):\n", "        super(CBOW, self).__init__()\n", "        self.embedding = QEmbedding(num_embedding, embedding_dim, window,\n", "                                    layers, n_threads)\n", "        self.dense1 = nn.Dense(embedding_dim, hidden_dim)\n", "        self.dense2 = nn.Dense(hidden_dim, num_embedding)\n", "        self.relu = ops.ReLU()\n", "\n", "    def construct(self, x):\n", "        embed = self.embedding(x)\n", "        out = self.dense1(embed)\n", "        out = self.relu(out)\n", "        out = self.dense2(out)\n", "        return out" ] },
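{ "cell_type": "markdown", "metadata": {}, "source": [ "Before training, a quick shape check can catch wiring mistakes. The sketch below uses a hypothetical small configuration (5 words, 5-dimensional embedding, window 2, 1 ansatz layer, 2 threads, hidden size 8), assuming the cells above have been run; the logits should have one column per dictionary word." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from mindspore import Tensor\n", "\n", "demo_net = CBOW(5, 5, 2, 1, 2, 8)  # hypothetical small configuration\n", "demo_x, _ = GenerateTrainData(sample, word_dict)\n", "logits = demo_net(Tensor(demo_x))\n", "print(logits.shape)  # expected: (1, 5), one logit per word in the dictionary" ] },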
{ "cell_type": "markdown", "metadata": {}, "source": [ "Next we train on a slightly longer sentence. First, define `LossMonitorWithCollection` to monitor convergence and collect the loss values along the way." ] },
{ "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "class LossMonitorWithCollection(LossMonitor):\n", "    def __init__(self, per_print_times=1):\n", "        super(LossMonitorWithCollection, self).__init__(per_print_times)\n", "        self.loss = []\n", "\n", "    def begin(self, run_context):\n", "        self.begin_time = time.time()\n", "\n", "    def end(self, run_context):\n", "        self.end_time = time.time()\n", "        print('Total time used: {}'.format(self.end_time - self.begin_time))\n", "\n", "    def epoch_begin(self, run_context):\n", "        self.epoch_begin_time = time.time()\n", "\n", "    def epoch_end(self, run_context):\n", "        cb_params = run_context.original_args()\n", "        self.epoch_end_time = time.time()\n", "        if self._per_print_times != 0 and cb_params.cur_step_num % self._per_print_times == 0:\n", "            print('')\n", "\n", "    def step_end(self, run_context):\n", "        cb_params = run_context.original_args()\n", "        loss = cb_params.net_outputs\n", "\n", "        if isinstance(loss, (tuple, list)):\n", "            if isinstance(loss[0], Tensor) and isinstance(loss[0].asnumpy(), np.ndarray):\n", "                loss = loss[0]\n", "\n", "        if isinstance(loss, Tensor) and isinstance(loss.asnumpy(), np.ndarray):\n", "            loss = np.mean(loss.asnumpy())\n", "\n", "        cur_step_in_epoch = (cb_params.cur_step_num - 1) % cb_params.batch_num + 1\n", "\n", "        if isinstance(loss, float) and (np.isnan(loss) or np.isinf(loss)):\n", "            raise ValueError(\"epoch: {} step: {}. Invalid loss, terminating training.\".format(\n", "                cb_params.cur_epoch_num, cur_step_in_epoch))\n", "        # collect the loss of every step\n", "        self.loss.append(loss)\n", "        if self._per_print_times != 0 and cb_params.cur_step_num % self._per_print_times == 0:\n", "            print(\"\\repoch: %+3s step: %+3s time: %5.5s, loss is %5.5s\" % (cb_params.cur_epoch_num, cur_step_in_epoch, time.time() - self.epoch_begin_time, loss), flush=True, end='')\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Next, use the quantum `CBOW` to embed the words of a longer sentence. Before running, execute `export OMP_NUM_THREADS=4` in the terminal to set the number of threads of the quantum simulator to 4; when the quantum system to simulate has more qubits, more threads can be used to improve simulation efficiency." ] },
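{ "cell_type": "markdown", "metadata": {}, "source": [ "Equivalently, the variable can be set from within Python, provided this happens before the simulator spins up its thread pool (a small sketch; exactly when the variable is read may depend on the backend):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "\n", "# must run before the quantum simulator initializes its threads\n", "os.environ['OMP_NUM_THREADS'] = '4'" ] },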
{ "cell_type": "code", "execution_count": 16, "metadata": { "scrolled": true, "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "epoch: 25 step: 20 time: 0.592, loss is 3.154\n", "epoch: 50 step: 20 time: 0.614, loss is 2.944\n", "epoch: 75 step: 20 time: 0.572, loss is 0.224\n", "epoch: 100 step: 20 time: 0.562, loss is 0.015\n", "epoch: 125 step: 20 time: 0.545, loss is 0.009\n", "epoch: 150 step: 20 time: 0.599, loss is 0.003\n", "epoch: 175 step: 20 time: 0.586, loss is 0.002\n", "epoch: 200 step: 20 time: 0.552, loss is 0.045\n", "epoch: 225 step: 20 time: 0.590, loss is 0.001\n", "epoch: 250 step: 20 time: 0.643, loss is 0.001\n", "epoch: 275 step: 20 time: 0.562, loss is 0.001\n", "epoch: 300 step: 20 time: 0.584, loss is 0.001\n", "epoch: 325 step: 20 time: 0.566, loss is 0.000\n", "epoch: 350 step: 20 time: 0.578, loss is 0.000\n", "Total time used: 206.29734826087952\n" ] } ], "source": [ "import mindspore as ms\n", "from mindspore import context\n", "from mindspore import Tensor\n", "context.set_context(mode=context.GRAPH_MODE, device_target=\"CPU\")\n", "corpus = \"\"\"We are about to study the idea of a computational process.\n", "Computational processes are abstract beings that inhabit computers.\n", "As they evolve, processes manipulate other abstract things called data.\n", "The evolution of a process is directed by a pattern of rules\n", "called a program. People create programs to direct processes. In effect,\n", "we conjure the spirits of the computer with our spells.\"\"\"\n", "\n", "ms.set_seed(42)\n", "window_size = 2\n", "embedding_dim = 10\n", "hidden_dim = 128\n", "word_dict, sample = GenerateWordDictAndSample(corpus, window=window_size)\n", "train_x, train_y = GenerateTrainData(sample, word_dict)\n", "\n", "train_loader = ds.NumpySlicesDataset({\n", "    \"around\": train_x,\n", "    \"center\": train_y\n", "}, shuffle=False).batch(3)\n", "net = CBOW(len(word_dict), embedding_dim, window_size, 3, 4, hidden_dim)\n", "net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')\n", "net_opt = nn.Momentum(net.trainable_params(), 0.01, 0.9)\n", "loss_monitor = LossMonitorWithCollection(500)\n", "model = Model(net, net_loss, net_opt)\n", "model.train(350, train_loader, callbacks=[loss_monitor], dataset_sink_mode=False)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Print the loss values collected during convergence:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "\n", "plt.plot(loss_monitor.loss, '.')\n", "plt.xlabel('Steps')\n", "plt.ylabel('Loss')\n", "plt.show()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The resulting convergence plot is\n", "\n", "![nlp loss](https://gitee.com/mindspore/docs/raw/r1.2/tutorials/training/source_zh_cn/advanced_use/images/nlp_loss.png)\n", "\n", "The parameters in the quantum circuit of the quantum embedding layer can be printed as follows:" ] },
{ "cell_type": "code", "execution_count": 17, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "array([ 1.52044818e-01,  1.71521559e-01,  2.35021308e-01, -3.95286232e-01,\n", "       -3.71680595e-03,  7.96886325e-01, -4.04954888e-02,  1.55393332e-01,\n", "        4.11805660e-02,  7.79824018e-01,  2.96543002e-01, -2.21819162e-01,\n", "       -4.67430688e-02,  4.66759771e-01,  2.75283188e-01,  1.35858059e-01,\n", "       -3.23841363e-01, -2.31937021e-01, -4.68942285e-01, -1.96520030e-01,\n", "        2.16065589e-02,  1.23866223e-01, -9.68078300e-02,  1.69127151e-01,\n", "       -8.90062153e-01,  2.56734312e-01,  8.37369189e-02, -1.15734830e-01,\n", "       -1.34410933e-01, -3.12207133e-01, -8.90189946e-01,  1.97006428e+00,\n", "       -2.49193460e-02,  2.25960299e-01, -3.90179232e-02, -3.03875893e-01,\n", "        2.02030335e-02, -7.07065910e-02, -4.81521547e-01,  5.04257262e-01,\n", "       -1.32081115e+00,  2.83502758e-01,  2.80248702e-01,  1.63375765e-01,\n", "       -6.91465080e-01,  6.82975233e-01, -2.67829001e-01,  2.29658693e-01,\n", "        2.78859794e-01, -1.04206935e-01, -5.57148576e-01,  4.41706657e-01,\n", "       -6.76973104e-01,  2.47751385e-01, -2.96468334e-03, -1.66827604e-01,\n", "       -3.47717047e-01, -9.04396921e-03, -7.69433856e-01,  4.33617719e-02,\n", "       -2.09145937e-02, -1.55236557e-01, -2.16777384e-01, -2.26556376e-01,\n", "       -6.16374731e-01,  2.05871137e-03, -3.08128931e-02, -1.63372140e-02,\n", "        1.46710426e-01,  2.31793106e-01,  4.16066934e-04, -9.28813033e-03],\n", "      dtype=float32)" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "net.embedding.weight.asnumpy()" ] },
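{ "cell_type": "markdown", "metadata": {}, "source": [ "The length of this parameter vector can be checked against the circuit structure: there are `2 * window_size` ansatz blocks, each with 3 layers (the value passed to `CBOW` above) of one $\\text{RY}$ gate per qubit. A small consistency check, assuming the training cells above have been run:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "n_qubits = int(np.ceil(np.log2(len(word_dict))))\n", "expected = 2 * window_size * 3 * n_qubits  # 3 ansatz layers were used above\n", "print(expected, net.embedding.weight.shape)  # both should report 72 trainable parameters" ] },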
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Classical Word Embedding Layer\n", "\n", "Here we use a classical word embedding layer to build a classical CBOW neural network and compare it with the quantum version.\n", "\n", "First, build the classical CBOW network, whose parameters are similar to those of the quantum version." ] },
{ "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [], "source": [ "class CBOWClassical(nn.Cell):\n", "    def __init__(self, num_embedding, embedding_dim, window, hidden_dim):\n", "        super(CBOWClassical, self).__init__()\n", "        self.dim = 2 * window * embedding_dim\n", "        self.embedding = nn.Embedding(num_embedding, embedding_dim, True)\n", "        self.dense1 = nn.Dense(self.dim, hidden_dim)\n", "        self.dense2 = nn.Dense(hidden_dim, num_embedding)\n", "        self.relu = ops.ReLU()\n", "        self.reshape = ops.Reshape()\n", "\n", "    def construct(self, x):\n", "        embed = self.embedding(x)\n", "        # flatten the 2*window embedded context words into one vector\n", "        embed = self.reshape(embed, (-1, self.dim))\n", "        out = self.dense1(embed)\n", "        out = self.relu(out)\n", "        out = self.dense2(out)\n", "        return out" ] },
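{ "cell_type": "markdown", "metadata": {}, "source": [ "The same kind of shape check as for the quantum version applies here (a hypothetical instantiation, assuming the earlier cells have been run): the classical network consumes integer word indices of shape `(batch, 2 * window)` and produces one logit per dictionary word." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from mindspore import Tensor\n", "\n", "demo_net = CBOWClassical(len(word_dict), embedding_dim, window_size, hidden_dim)\n", "demo_x = np.zeros((1, 2 * window_size)).astype(np.int32)  # dummy batch of word indices\n", "print(demo_net(Tensor(demo_x)).shape)  # expected: (1, len(word_dict))" ] },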
{ "cell_type": "markdown", "metadata": {}, "source": [ "Generate the dataset for the classical CBOW network." ] },
{ "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "train_x shape: (58, 4)\n", "train_y shape: (58,)\n" ] } ], "source": [ "train_x = []\n", "train_y = []\n", "for i in sample:\n", "    around, center = i\n", "    train_y.append(word_dict[center])\n", "    train_x.append([])\n", "    for j in around:\n", "        train_x[-1].append(word_dict[j])\n", "train_x = np.array(train_x).astype(np.int32)\n", "train_y = np.array(train_y).astype(np.int32)\n", "print(\"train_x shape: \", train_x.shape)\n", "print(\"train_y shape: \", train_y.shape)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Train the classical CBOW network." ] },
{ "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "epoch: 25 step: 20 time: 0.008, loss is 3.155\n", "epoch: 50 step: 20 time: 0.026, loss is 3.027\n", "epoch: 75 step: 20 time: 0.010, loss is 3.010\n", "epoch: 100 step: 20 time: 0.009, loss is 2.955\n", "epoch: 125 step: 20 time: 0.008, loss is 0.630\n", "epoch: 150 step: 20 time: 0.008, loss is 0.059\n", "epoch: 175 step: 20 time: 0.009, loss is 0.008\n", "epoch: 200 step: 20 time: 0.008, loss is 0.003\n", "epoch: 225 step: 20 time: 0.017, loss is 0.001\n", "epoch: 250 step: 20 time: 0.008, loss is 0.001\n", "epoch: 275 step: 20 time: 0.016, loss is 0.000\n", "epoch: 300 step: 20 time: 0.008, loss is 0.000\n", "epoch: 325 step: 20 time: 0.016, loss is 0.000\n", "epoch: 350 step: 20 time: 0.008, loss is 0.000\n", "Total time used: 5.06074857711792\n" ] } ], "source": [ "train_loader = ds.NumpySlicesDataset({\n", "    \"around\": train_x,\n", "    \"center\": train_y\n", "}, shuffle=False).batch(3)\n", "net = CBOWClassical(len(word_dict), embedding_dim, window_size, hidden_dim)\n", "net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')\n", "net_opt = nn.Momentum(net.trainable_params(), 0.01, 0.9)\n", "loss_monitor = LossMonitorWithCollection(500)\n", "model = Model(net, net_loss, net_opt)\n", "model.train(350, train_loader, callbacks=[loss_monitor], dataset_sink_mode=False)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Print the loss values collected during convergence:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "\n", "plt.plot(loss_monitor.loss, '.')\n", "plt.xlabel('Steps')\n", "plt.ylabel('Loss')\n", "plt.show()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "The resulting convergence plot is\n", "\n", "![classical nlp loss](https://gitee.com/mindspore/docs/raw/r1.2/tutorials/training/source_zh_cn/advanced_use/images/classical_nlp_loss.png)\n", "\n", "As the results show, the quantum word embedding model obtained by quantum simulation also completes the embedding task well. When the dataset becomes too large for classical computing power, a quantum computer is expected to handle such problems with ease." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## References\n", "\n", "[1] Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean. [Efficient Estimation of Word Representations in Vector Space](https://arxiv.org/pdf/1301.3781.pdf)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 2 }