# MindSpore NumPy Functions

Ascend GPU CPU Model Development

## Overview

The MindSpore NumPy package provides a set of NumPy-like interfaces, so users can build models on MindSpore using familiar NumPy-style syntax.

## Operator Introduction

MindSpore NumPy covers four functional modules: tensor generation, tensor operations, logical operations, and other common mathematical operations. For details on individual operators, refer to the NumPy interface list.

### Tensor Generation

[1]:

import mindspore.numpy as np
import mindspore.ops as ops
input_x = np.array([1, 2, 3], np.float32)
print("input_x =", input_x)
print("type of input_x =", ops.typeof(input_x))

input_x = [1. 2. 3.]
type of input_x = Tensor[Float32]


#### Generating arrays with the same elements

[2]:

input_x = np.full((2, 3), 6, np.float32)
print(input_x)

[[6. 6. 6.]
[6. 6. 6.]]


[3]:

input_x = np.ones((2, 3), np.float32)
print(input_x)

[[1. 1. 1.]
[1. 1. 1.]]


#### Generating arrays with values in a given range

[4]:

input_x = np.arange(0, 5, 1)
print(input_x)

[0 1 2 3 4]


#### Generating special arrays

[5]:

input_x = np.tri(3, 3, 1)
print(input_x)

[[1. 1. 0.]
[1. 1. 1.]
[1. 1. 1.]]


[6]:

input_x = np.eye(2, 2)
print(input_x)

[[1. 0.]
[0. 1.]]


### Tensor Operations

#### Array dimension transformation

[7]:

input_x = np.arange(10).reshape(5, 2)
output = np.transpose(input_x)
print(output)

[[0 2 4 6 8]
[1 3 5 7 9]]


[8]:

input_x = np.ones((1, 2, 3))
output = np.swapaxes(input_x, 0, 1)
print(output.shape)

(2, 1, 3)


#### Array splitting

[9]:

input_x = np.arange(9)
output = np.split(input_x, 3)
print(output)

(Tensor(shape=[3], dtype=Int32, value= [0, 1, 2]), Tensor(shape=[3], dtype=Int32, value= [3, 4, 5]), Tensor(shape=[3], dtype=Int32, value= [6, 7, 8]))


#### Array concatenation

[10]:

input_x = np.arange(0, 5)
input_y = np.arange(10, 15)
output = np.concatenate((input_x, input_y), axis=0)
print(output)

[ 0  1  2  3  4 10 11 12 13 14]


### Logical Operations

[11]:

input_x = np.arange(0, 5)
input_y = np.arange(0, 10, 2)
output = np.equal(input_x, input_y)
print("output of equal:", output)
output = np.less(input_x, input_y)
print("output of less:", output)

output of equal: [ True False False False False]
output of less: [False  True  True  True  True]


### Mathematical Operations

#### Addition

[12]:

input_x = np.full((3, 2), [1, 2])
input_y = np.full((3, 2), [3, 4])
output = np.add(input_x, input_y)
print(output)

[[4 6]
[4 6]
[4 6]]


#### Matrix multiplication

[13]:

input_x = np.arange(2*3).reshape(2, 3).astype('float32')
input_y = np.arange(3*4).reshape(3, 4).astype('float32')
output = np.matmul(input_x, input_y)
print(output)

[[20. 23. 26. 29.]
[56. 68. 80. 92.]]


#### Mean

[14]:

input_x = np.arange(6).astype('float32')
output = np.mean(input_x)
print(output)

2.5


#### Exponential

[15]:

input_x = np.arange(5).astype('float32')
output = np.exp(input_x)
print(output)

[ 1.         2.7182817  7.389056  20.085537  54.59815  ]


## Combining MindSpore NumPy with MindSpore Features

mindspore.numpy takes full advantage of MindSpore's capabilities: it supports automatic differentiation of operators and graph-mode acceleration, helping users build efficient models quickly. MindSpore also supports multiple backend devices, including Ascend, GPU, and CPU, which users can configure as needed. Several commonly used facilities are listed below:

• ms_function: wraps code for graph-mode execution to improve runtime efficiency.

• GradOperation: performs automatic differentiation.

• mindspore.context: configures the execution mode, backend device, and so on.

• mindspore.nn.Cell: serves as the base class for building deep learning models.

### ms_function usage example

[16]:

import mindspore.numpy as np

x = np.arange(8).reshape(2, 4).astype('float32')
w1 = np.ones((4, 8))
b1 = np.zeros((8,))
w2 = np.ones((8, 16))
b2 = np.zeros((16,))
w3 = np.ones((16, 4))
b3 = np.zeros((4,))

def forward(x, w1, b1, w2, b2, w3, b3):
    x = np.dot(x, w1) + b1
    x = np.dot(x, w2) + b2
    x = np.dot(x, w3) + b3
    return x

print(forward(x, w1, b1, w2, b2, w3, b3))

[[ 768.  768.  768.  768.]
[2816. 2816. 2816. 2816.]]


[17]:

from mindspore import ms_function

forward_compiled = ms_function(forward)
print(forward_compiled(x, w1, b1, w2, b2, w3, b3))

[[ 768.  768.  768.  768.]
[2816. 2816. 2816. 2816.]]


GradOperation enables automatic differentiation. The following example computes gradients for the computation defined by the `forward` function above (the version not decorated with ms_function).

[18]:

from mindspore import ops

grad_all = ops.GradOperation(get_all=True)
print(grad_all(forward)(x, w1, b1, w2, b2, w3, b3))

[18]:

(Tensor(shape=[2, 4], dtype=Float32, value=
[[ 5.12000000e+02,  5.12000000e+02,  5.12000000e+02,  5.12000000e+02],
[ 5.12000000e+02,  5.12000000e+02,  5.12000000e+02,  5.12000000e+02]]),
Tensor(shape=[4, 8], dtype=Float32, value=
[[ 2.56000000e+02,  2.56000000e+02,  2.56000000e+02 ...  2.56000000e+02,  2.56000000e+02,  2.56000000e+02],
[ 3.84000000e+02,  3.84000000e+02,  3.84000000e+02 ...  3.84000000e+02,  3.84000000e+02,  3.84000000e+02],
[ 5.12000000e+02,  5.12000000e+02,  5.12000000e+02 ...  5.12000000e+02,  5.12000000e+02,  5.12000000e+02]
[ 6.40000000e+02,  6.40000000e+02,  6.40000000e+02 ...  6.40000000e+02,  6.40000000e+02,  6.40000000e+02]]),
...
Tensor(shape=[4], dtype=Float32, value= [ 2.00000000e+00,  2.00000000e+00,  2.00000000e+00,  2.00000000e+00]))


[19]:

from mindspore import ms_function, context

context.set_context(mode=context.GRAPH_MODE)
print(grad_all(ms_function(forward))(x, w1, b1, w2, b2, w3, b3))

[19]:

(Tensor(shape=[2, 4], dtype=Float32, value=
[[ 5.12000000e+02,  5.12000000e+02,  5.12000000e+02,  5.12000000e+02],
[ 5.12000000e+02,  5.12000000e+02,  5.12000000e+02,  5.12000000e+02]]),
Tensor(shape=[4, 8], dtype=Float32, value=
[[ 2.56000000e+02,  2.56000000e+02,  2.56000000e+02 ...  2.56000000e+02,  2.56000000e+02,  2.56000000e+02],
[ 3.84000000e+02,  3.84000000e+02,  3.84000000e+02 ...  3.84000000e+02,  3.84000000e+02,  3.84000000e+02],
[ 5.12000000e+02,  5.12000000e+02,  5.12000000e+02 ...  5.12000000e+02,  5.12000000e+02,  5.12000000e+02]
[ 6.40000000e+02,  6.40000000e+02,  6.40000000e+02 ...  6.40000000e+02,  6.40000000e+02,  6.40000000e+02]]),
...
Tensor(shape=[4], dtype=Float32, value= [ 2.00000000e+00,  2.00000000e+00,  2.00000000e+00,  2.00000000e+00]))


### mindspore.context usage example

MindSpore supports multiple execution backends, which can be configured through mindspore.context. Most mindspore.numpy operators can run in either graph mode or PyNative mode, and on CPU, GPU, or Ascend backends.

from mindspore import context

# Execution in static graph mode
context.set_context(mode=context.GRAPH_MODE)

# Execution in PyNative mode
context.set_context(mode=context.PYNATIVE_MODE)

# Execution on CPU backend
context.set_context(device_target="CPU")

# Execution on GPU backend
context.set_context(device_target="GPU")

# Execution on Ascend backend
context.set_context(device_target="Ascend")
...


### mindspore.numpy usage example

mindspore.numpy interfaces can also be used inside an nn.Cell block to build a network, as shown below:

[20]:

import mindspore.numpy as np
from mindspore import context
from mindspore.nn import Cell

context.set_context(mode=context.GRAPH_MODE)

x = np.arange(8).reshape(2, 4).astype('float32')
w1 = np.ones((4, 8))
b1 = np.zeros((8,))
w2 = np.ones((8, 16))
b2 = np.zeros((16,))
w3 = np.ones((16, 4))
b3 = np.zeros((4,))

class NeuralNetwork(Cell):
    def construct(self, x, w1, b1, w2, b2, w3, b3):
        x = np.dot(x, w1) + b1
        x = np.dot(x, w2) + b2
        x = np.dot(x, w3) + b3
        return x

net = NeuralNetwork()

print(net(x, w1, b1, w2, b2, w3, b3))

[[ 768.  768.  768.  768.]
[2816. 2816. 2816. 2816.]]