Static Graph Syntax Support
Overview
In graph mode, Python code is not executed by the Python interpreter. Instead, the code is compiled into a static computation graph, and then the static computation graph is executed.
Currently, only functions decorated with @ms_function, and Cell and its subclass instances, can be built.
For a function, the function definition is built. For a network, the construct method and the other methods or functions called by construct are built.
For details about how to use ms_function, see https://www.mindspore.cn/doc/api_python/en/r1.2/mindspore/mindspore.html#mindspore.ms_function.
For details about the definition of Cell, see https://www.mindspore.cn/doc/programming_guide/en/r1.2/cell.html.
Due to syntax parsing restrictions, the supported data types, syntax, and related operations during graph building are not completely consistent with the Python syntax. As a result, some usage is restricted.
The following describes the data types, syntax, and related operations supported during static graph building. These rules apply only to graph mode.
All the following examples run on the network in graph mode. The network definition is not described.
Data Types
Built-in Python Data Types
Currently, the following built-in Python data types are supported: Number, String, List, Tuple, and Dictionary.
Number
Supports int, float, and bool, but does not support complex numbers.
Number can be defined on the network. That is, the syntax y = 1, y = 1.2, and y = True are supported.
Forcible conversion to Number is not supported on the network. That is, the syntax y = int(x), y = float(x), and y = bool(x) are not supported.
String
String can be constructed on the network. That is, the syntax y = "abcd" is supported.
Forcible conversion to String is not supported on the network. That is, the syntax y = str(x) is not supported.
List
List can be constructed on the network, that is, the syntax y = [1, 2, 3] is supported.
Forcible conversion to List is not supported on the network. That is, the syntax y = list(x) is not supported.
A List to be output by the computation graph will be converted into a Tuple.
Supported APIs
append: adds an element to the list. For example:
x = [1, 2, 3]
x.append(4)
The result is as follows:
x: (1, 2, 3, 4)
Supported index values and value assignment
Single-level and multi-level index values and value assignment are supported.
The index value supports only int. The assigned value can be Number, String, Tuple, List, or Tensor. For example:
x = [[1, 2], 2, 3, 4]
m = x[0][1]
x[1] = Tensor(np.array([1, 2, 3]))
x[2] = "ok"
x[3] = (1, 2, 3)
x[0][1] = 88
n = x[-3]
The result is as follows:
m: 2
x: ([1, 88], Tensor(shape=[3], dtype=Int64, value=[1, 2, 3]), 'ok', (1, 2, 3))
n: Tensor(shape=[3], dtype=Int64, value=[1, 2, 3])
Tuple
Tuple can be constructed on the network, that is, the syntax y = (1, 2, 3) is supported.
Forcible conversion to Tuple is not supported on the network. That is, the syntax y = tuple(x) is not supported.
Supported index values
The index value can be int, slice, or Tensor, and multi-level index values are supported. That is, the syntax data = tuple_x[index0][index1]... is supported.
Restrictions on the Tensor index value are as follows:
The Tuple stores Cell objects. Each Cell must be defined before the tuple is defined. The number, type, and shape of the input parameters of each Cell must be the same, and the number of outputs of each Cell must be the same. The output types and output shapes must also be the same.
The index Tensor is a scalar Tensor whose dtype is int32. The value range is [-tuple_len, tuple_len); a negative index is not supported on the Ascend backend.
This syntax does not support running inside if, while, or for branches whose control flow conditions are variables. The control flow conditions can be constants only.
The GPU and Ascend backends are supported.
An example of the int and slice indexes is as follows:
x = (1, (2, 3, 4), 3, 4, Tensor(np.array([1, 2, 3])))
y = x[1][1]
z = x[4]
m = x[1:4]
n = x[-4]
The result is as follows:
y: 3
z: Tensor(shape=[3], dtype=Int64, value=[1, 2, 3])
m: ((2, 3, 4), 3, 4)
n: (2, 3, 4)
An example of the Tensor index is as follows:
class Net(nn.Cell):
    def __init__(self):
        super(Net, self).__init__()
        self.relu = nn.ReLU()
        self.softmax = nn.Softmax()
        self.layers = (self.relu, self.softmax)

    def construct(self, x, index):
        ret = self.layers[index](x)
        return ret
Dictionary
Dictionary can be constructed on the network. That is, the syntax y = {"a": 1, "b": 2} is supported. Currently, only String can be used as the key.
When a Dictionary is output by the computational graph, all its values are extracted to form a Tuple output.
Supported APIs
keys: extracts all keys from the dict to form a Tuple and returns it.
values: extracts all values from the dict to form a Tuple and returns it.
For example:
x = {"a": Tensor(np.array([1, 2, 3])), "b": Tensor(np.array([4, 5, 6])), "c": Tensor(np.array([7, 8, 9]))}
y = x.keys()
z = x.values()
The result is as follows:
y: ("a", "b", "c")
z: (Tensor(shape=[3], dtype=Int64, value=[1, 2, 3]), Tensor(shape=[3], dtype=Int64, value=[4, 5, 6]), Tensor(shape=[3], dtype=Int64, value=[7, 8, 9]))
Supported index values and value assignment
The index value supports only String. The assigned value can be Number, Tuple, or Tensor. For example:
x = {"a": Tensor(np.array([1, 2, 3])), "b": Tensor(np.array([4, 5, 6])), "c": Tensor(np.array([7, 8, 9]))}
y = x["b"]
x["a"] = (2, 3, 4)
The result is as follows:
y: Tensor(shape=[3], dtype=Int64, value=[4, 5, 6])
x: {"a": (2, 3, 4), "b": Tensor(shape=[3], dtype=Int64, value=[4, 5, 6]), "c": Tensor(shape=[3], dtype=Int64, value=[7, 8, 9])}
MindSpore User-defined Data Types
Currently, MindSpore supports the following user-defined data types: Tensor, Primitive, and Cell.
Tensor
Currently, tensors cannot be constructed on the network. That is, the syntax x = Tensor(args...) is not supported.
You can decorate a function with @constexpr and generate the Tensor inside that function.
For details about how to use @constexpr, see https://www.mindspore.cn/doc/api_python/en/r1.2/mindspore/ops/mindspore.ops.constexpr.html.
The constant Tensor used on the network can be defined as a network attribute in __init__, that is, self.x = Tensor(args...). The constant can then be used in construct.
In the following example, a constant Tensor of shape = (3, 4) is generated by @constexpr.
@constexpr
def generate_tensor():
return Tensor(np.ones((3, 4)))
The following describes the attributes, APIs, index values, and index value assignment supported by the Tensor.
Supported attributes
shape: obtains the shape of the Tensor and returns a Tuple.
dtype: obtains the data type of the Tensor and returns a data type defined by MindSpore.
Supported APIs
all: reduces the Tensor through the all operation. Only Tensor of the Bool type is supported.
any: reduces the Tensor through the any operation. Only Tensor of the Bool type is supported.
view: reshapes the Tensor into the specified input shape.
expand_as: expands the Tensor to the same shape as another Tensor based on the broadcast rules.
For example:
x = Tensor(np.array([[True, False, True], [False, True, False]]))
x_shape = x.shape
x_dtype = x.dtype
x_all = x.all()
x_any = x.any()
x_view = x.view((1, 6))
y = Tensor(np.ones((2, 3), np.float32))
z = Tensor(np.ones((2, 2, 3)))
y_as_z = y.expand_as(z)
The result is as follows:
x_shape: (2, 3)
x_dtype: Bool
x_all: Tensor(shape=[], dtype=Bool, value=False)
x_any: Tensor(shape=[], dtype=Bool, value=True)
x_view: Tensor(shape=[1, 6], dtype=Bool, value=[[True, False, True, False, True, False]])
y_as_z: Tensor(shape=[2, 2, 3], dtype=Float32, value=[[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]], [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]])
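These attributes and APIs mirror NumPy semantics, so the shapes above can be sanity-checked without MindSpore. In the sketch below, plain NumPy stands in for Tensor: reshape plays the role of view, and broadcast_to plays the role of expand_as (an illustration, not MindSpore graph code):

```python
import numpy as np

x = np.array([[True, False, True], [False, True, False]])

# shape and dtype are plain attributes
assert x.shape == (2, 3)

# all/any reduce the whole array to a single Bool
x_all = x.all()   # False: not every element is True
x_any = x.any()   # True: at least one element is True

# view corresponds to reshape, expand_as to broadcasting
x_view = x.reshape((1, 6))
y = np.ones((2, 3), np.float32)
z = np.ones((2, 2, 3))
y_as_z = np.broadcast_to(y, z.shape)
```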
Index values
The index value can be int, bool, None, slice, Tensor, List, or Tuple.
int index value
Single-level and multi-level int index values are supported. The single-level int index value is tensor_x[int_index], and the multi-level int index value is tensor_x[int_index0][int_index1]....
The int index value is applied to dimension 0 and must be less than the length of dimension 0. After the data at the corresponding position of dimension 0 is obtained, dimension 0 is eliminated.
For example, if a single-level int index value is applied to a tensor whose shape is (3, 4, 5), the obtained shape is (4, 5).
A multi-level index value can be understood as applying the current-level int index to the result of the previous-level index.
For example:
tensor_x = Tensor(np.arange(2 * 3 * 2).reshape((2, 3, 2)))
data_single = tensor_x[0]
data_multi = tensor_x[0][1]
The result is as follows:
data_single: Tensor(shape=[3, 2], dtype=Int64, value=[[0, 1], [2, 3], [4, 5]])
data_multi: Tensor(shape=[2], dtype=Int64, value=[2, 3])
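The int index rule (select along dimension 0, then drop that dimension) matches NumPy's basic integer indexing, so it can be checked with a NumPy sketch (an illustration, not graph-mode code):

```python
import numpy as np

tensor_x = np.arange(2 * 3 * 2).reshape((2, 3, 2))

# a single-level int index selects along dimension 0 and eliminates it
data_single = tensor_x[0]        # shape (3, 2)

# a multi-level index applies the next int index to the previous result
data_multi = tensor_x[0][1]      # shape (2,)
```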
bool index value
Single-level and multi-level bool index values are supported. The single-level bool index value is tensor_x[True], and the multi-level bool index value is tensor_x[True][False]....
The True index value is applied to dimension 0. After all the data is obtained, a dimension of length 1 is added on the axis=0 axis.
For example, if a single-level True index value is applied to a tensor whose shape is (3, 4, 5), the obtained shape is (1, 3, 4, 5); if a single-level False index value is applied to a tensor whose shape is (3, 4, 5), the obtained shape is (0, 3, 4, 5).
A multi-level index value can be understood as applying the current-level bool index to the result of the previous-level index.
For example:
tensor_x = Tensor(np.arange(2 * 3).reshape((2, 3)))
data_single = tensor_x[True]
data_multi = tensor_x[True][False]
The result is as follows:
data_single: Tensor(shape=[1, 2, 3], dtype=Int64, value=[[[0, 1, 2], [3, 4, 5]]])
data_multi: Tensor(shape=[1, 0, 2, 3], dtype=Int64, value=[[[[], []]]])
None index value
The None index value behaves the same as the True index value. For details, see the True index value.
ellipsis index value
Single-level and multi-level ellipsis index values are supported. The single-level ellipsis index value is tensor_x[...], and the multi-level ellipsis index value is tensor_x[...][...]....
The ellipsis index value is applied to all dimensions and returns the original data unchanged. Generally, it is used as a component of a Tuple index, which is described below.
For example, if the ellipsis index value is applied to a tensor whose shape is (3, 4, 5), the obtained shape is still (3, 4, 5).
For example:
tensor_x = Tensor(np.arange(2 * 3).reshape((2, 3)))
data_single = tensor_x[...]
data_multi = tensor_x[...][...]
The result is as follows:
data_single: Tensor(shape=[2, 3], dtype=Int64, value=[[0, 1, 2], [3, 4, 5]])
data_multi: Tensor(shape=[2, 3], dtype=Int64, value=[[0, 1, 2], [3, 4, 5]])
slice index value
Single-level and multi-level slice index values are supported. The single-level slice index value is tensor_x[slice_index], and the multi-level slice index value is tensor_x[slice_index0][slice_index1]....
The slice index value is applied to dimension 0 and obtains the elements at the sliced positions of dimension 0. Unlike the int index value, a slice does not eliminate the dimension even if its length is 1.
For example, tensor_x[0:1:1] != tensor_x[0], because shape_former = (1,) + shape_latter.
A multi-level index value can be understood as applying the current-level slice index to the result of the previous-level index.
A slice consists of start, stop, and step. The default value of start is 0, the default value of stop is the length of the dimension, and the default value of step is 1.
For example, tensor_x[:] == tensor_x[0:length:1].
For example:
tensor_x = Tensor(np.arange(4 * 2 * 2).reshape((4, 2, 2)))
data_single = tensor_x[1:4:2]
data_multi = tensor_x[1:4:2][1:]
The result is as follows:
data_single: Tensor(shape=[2, 2, 2], dtype=Int64, value=[[[4, 5], [6, 7]], [[12, 13], [14, 15]]])
data_multi: Tensor(shape=[1, 2, 2], dtype=Int64, value=[[[12, 13], [14, 15]]])
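The difference between a length-1 slice and an int index can also be checked with NumPy, whose slicing follows the same rule (an illustration, not graph-mode code):

```python
import numpy as np

tensor_x = np.arange(4 * 2 * 2).reshape((4, 2, 2))

# a slice selects along dimension 0 but never eliminates it
data_single = tensor_x[1:4:2]      # positions 1 and 3 -> shape (2, 2, 2)
data_multi = tensor_x[1:4:2][1:]   # shape (1, 2, 2)

# a length-1 slice keeps the dimension, while an int index drops it
shape_slice = tensor_x[0:1:1].shape   # (1, 2, 2)
shape_int = tensor_x[0].shape         # (2, 2)
```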
Tensor index value
Single-level and multi-level Tensor index values are supported. The single-level Tensor index value is tensor_x[tensor_index], and the multi-level Tensor index value is tensor_x[tensor_index0][tensor_index1]....
The Tensor index value is applied to dimension 0 and obtains the elements at the corresponding positions of dimension 0.
The data type of the Tensor index must be one of int8, int16, int32, and int64. The elements cannot be negative, and the values must be less than the length of dimension 0.
The shape of the Tensor index value result is data_shape = tensor_index.shape + tensor_x.shape[1:].
For example, if a tensor whose shape is (6, 4, 5) is indexed by a tensor whose shape is (2, 3), the obtained shape is (2, 3, 4, 5).
A multi-level index value can be understood as applying the current-level Tensor index to the result of the previous-level index.
For example:
tensor_x = Tensor(np.arange(4 * 2 * 3).reshape((4, 2, 3)))
tensor_index0 = Tensor(np.array([[1, 2], [0, 3]]), mstype.int32)
tensor_index1 = Tensor(np.array([[0, 0]]), mstype.int32)
data_single = tensor_x[tensor_index0]
data_multi = tensor_x[tensor_index0][tensor_index1]
The result is as follows:
data_single: Tensor(shape=[2, 2, 2, 3], dtype=Int64, value=[[[[4, 5], [6, 7]], [[8, 9], [10, 11]]], [[[0, 1], [2, 3]], [[12, 13], [14, 15]]]])
data_multi: Tensor(shape=[1, 2, 2, 2, 3], dtype=Int64, value=[[[[[4, 5], [6, 7]], [[8, 9], [10, 11]]], [[[4, 5], [6, 7]], [[8, 9], [10, 11]]]]])
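The shape rule data_shape = tensor_index.shape + tensor_x.shape[1:] is the same as NumPy's integer-array indexing, so it can be verified with a small NumPy sketch (an illustration, not graph-mode code):

```python
import numpy as np

tensor_x = np.arange(4 * 2 * 3).reshape((4, 2, 3))
tensor_index = np.array([[1, 2], [0, 3]])

# the result shape is index.shape + data.shape[1:]
data = tensor_x[tensor_index]
assert data.shape == tensor_index.shape + tensor_x.shape[1:]  # (2, 2, 2, 3)
```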
List index value
Single-level and multi-level List index values are supported. The single-level List index value is tensor_x[list_index], and the multi-level List index value is tensor_x[list_index0][list_index1]....
The List index value is applied to dimension 0 and obtains the elements at the corresponding positions of dimension 0.
The elements of the List index must be all bool, all int, or a mixture of both. List elements of the int type must be in the range [-dimension_shape, dimension_shape - 1]. The number of List elements of the bool type must equal the length of dimension 0; they act as a filter on the corresponding elements of the Tensor data. If both types appear together, the bool elements True and False are converted to 1 and 0, respectively.
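The List index rules can be illustrated with NumPy, whose list indexing behaves the same way for the all-int and all-bool cases (a sketch with NumPy standing in for Tensor; the mixed int/bool case is MindSpore-specific and is not shown):

```python
import numpy as np

tensor_x = np.arange(4 * 2 * 3).reshape((4, 2, 3))

# an all-int List index picks positions along dimension 0
data_int = tensor_x[[1, 2, 0]]                    # shape (3, 2, 3)

# an all-bool List index must match the length of dimension 0
# and acts as a filter on that dimension
data_bool = tensor_x[[True, False, True, False]]  # shape (2, 2, 3)
```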
Tuple index value
The elements of a Tuple index can be of type int, bool, None, slice, ellipsis, Tensor, List, or Tuple. Single-level and multi-level Tuple index values are supported. The single-level Tuple index value is tensor_x[tuple_index], and the multi-level Tuple index value is tensor_x[tuple_index0][tuple_index1].... The rules for List and Tuple elements are the same as those for a single List index; the rules for the other elements are the same as those for the corresponding single-element index type.
The elements of a Tuple index are classified as Basic Index (slice, ellipsis, and None) or Advanced Index (int, bool, Tensor, List, and Tuple). During the getitem operation, all elements of the Advanced Index type are broadcast to the same shape. If the Advanced Index elements are contiguous, the resulting shape is inserted at the position of the first Advanced Index element; otherwise, it is inserted at position 0.
In the index, None elements expand the corresponding dimensions, and bool elements expand the corresponding dimensions and are broadcast with the other Advanced Index elements. The elements other than ellipsis, bool, and None each correspond to a dimension by position. That is, the 0th element in the Tuple operates on dimension 0, the 1st element operates on dimension 1, and so on. The index rule of each element is the same as the index value rule of that element type.
A Tuple index contains at most one ellipsis. The index elements before the ellipsis correspond to the Tensor dimensions starting from dimension 0, and the index elements after it correspond to the Tensor dimensions starting from the last dimension. Dimensions not otherwise specified are obtained in full.
The data type of a Tensor contained in the elements must be one of int8, int16, int32, and int64. In addition, the Tensor elements must be non-negative and less than the length of the operated dimension.
For example, tensor_x[0:3, 1, tensor_index] == tensor_x[(0:3, 1, tensor_index)], because 0:3, 1, tensor_index is a Tuple.
A multi-level index value can be understood as applying the current-level Tuple index to the result of the previous-level index.
For example:
tensor_x = Tensor(np.arange(2 * 3 * 4).reshape((2, 3, 4)))
tensor_index = Tensor(np.array([[1, 2, 1], [0, 3, 2]]), mstype.int32)
data = tensor_x[1, 0:1, tensor_index]
The result is as follows:
data: Tensor(shape=[2, 3, 1], dtype=Int64, value=[[[13], [14], [13]], [[12], [15], [14]]])
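The broadcast-and-insert rule for Advanced Index elements matches NumPy's advanced indexing, so the example above can be reproduced with NumPy (an illustration, not graph-mode code): the int 1 and the (2, 3) index array broadcast to (2, 3), and because the two advanced indexes are separated by a slice, that shape is inserted at position 0, followed by the length-1 sliced dimension.

```python
import numpy as np

tensor_x = np.arange(2 * 3 * 4).reshape((2, 3, 4))
tensor_index = np.array([[1, 2, 1], [0, 3, 2]])

# advanced indexes (1 and the array) broadcast to (2, 3);
# the 0:1 slice contributes a trailing length-1 dimension
data = tensor_x[1, 0:1, tensor_index]
assert data.shape == (2, 3, 1)
```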
Index value assignment
The index value can be int, ellipsis, slice, Tensor, or Tuple.
Index value assignment can be understood as assigning values to the elements at the indexed positions according to certain rules. Index value assignment never changes the original shape of the Tensor.
In addition, augmented index value assignment is supported. That is, +=, -=, *=, /=, %=, **=, and //= are supported.
int index value assignment
Single-level and multi-level int index value assignments are supported. The single-level int index value assignment is tensor_x[int_index] = u, and the multi-level int index value assignment is tensor_x[int_index0][int_index1]... = u.
The assigned value can be Number or Tensor, and it is converted into the data type of the updated Tensor.
When the assigned value is a Number, all the elements at the positions obtained by the int index are updated to that Number.
When the assigned value is a Tensor, its shape must be equal to, or broadcastable to, the shape of the result of the int index. After the two shapes are made consistent, the elements of the assigned Tensor are written to the positions in the original Tensor from which the index result was obtained.
For example, if a Tensor whose shape = (2, 3, 4) is set to 100 by using the int index 1, the updated Tensor shape is still (2, 3, 4), but the values of all elements at position 1 on dimension 0 are updated to 100.
For example:
tensor_x = Tensor(np.arange(2 * 3).reshape((2, 3)).astype(np.float32))
tensor_y = Tensor(np.arange(2 * 3).reshape((2, 3)).astype(np.float32))
tensor_z = Tensor(np.arange(2 * 3).reshape((2, 3)).astype(np.float32))
tensor_x[1] = 88.0
tensor_y[1][1] = 88.0
tensor_z[1] = Tensor(np.array([66, 88, 99]).astype(np.float32))
The result is as follows:
tensor_x: Tensor(shape=[2, 3], dtype=Float32, value=[[0.0, 1.0, 2.0], [88.0, 88.0, 88.0]])
tensor_y: Tensor(shape=[2, 3], dtype=Float32, value=[[0.0, 1.0, 2.0], [3.0, 88.0, 5.0]])
tensor_z: Tensor(shape=[2, 3], dtype=Float32, value=[[0.0, 1.0, 2.0], [66.0, 88.0, 99.0]])
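Because these assignment semantics mirror NumPy's, the example can be replayed with NumPy arrays (a sketch, not graph-mode code):

```python
import numpy as np

tensor_x = np.arange(2 * 3).reshape((2, 3)).astype(np.float32)
tensor_y = np.arange(2 * 3).reshape((2, 3)).astype(np.float32)
tensor_z = np.arange(2 * 3).reshape((2, 3)).astype(np.float32)

# a Number fills every element selected by the int index
tensor_x[1] = 88.0
# a multi-level int index updates a single element
tensor_y[1][1] = 88.0
# a Tensor value must match (or broadcast to) the selected shape
tensor_z[1] = np.array([66, 88, 99], np.float32)
```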
ellipsis index value assignment
Single-level and multi-level ellipsis index value assignments are supported. The single-level ellipsis index value assignment is tensor_x[...] = u, and the multi-level ellipsis index value assignment is tensor_x[...][...]... = u.
The assigned value can be Number or Tensor, and it is converted into the data type of the updated Tensor.
When the assigned value is a Number, all elements are updated to that Number.
When the assigned value is a Tensor, the number of elements in that Tensor must be 1 or equal to the number of elements in the Tensor obtained by the ... index. The value is broadcast when it has one element, and reshaped when the numbers are equal but the shape is inconsistent. After the two shapes are made the same, the elements of the assigned Tensor are written to the original Tensor one by one according to their positions.
For example, if a Tensor whose shape = (2, 3, 4) is set to 100 by using the ... index, the updated Tensor shape is still (2, 3, 4), and all elements are changed to 100.
For example:
tensor_x = Tensor(np.arange(2 * 3).reshape((2, 3)))
tensor_y = Tensor(np.arange(2 * 3).reshape((2, 3)))
tensor_z = Tensor(np.arange(2 * 3).reshape((2, 3)))
tensor_x[...] = 88
tensor_y[...] = Tensor(np.array([22, 44, 55]))
The result is as follows:
tensor_x: Tensor(shape=[2, 3], dtype=Int64, value=[[88, 88, 88], [88, 88, 88]])
tensor_y: Tensor(shape=[2, 3], dtype=Int64, value=[[22, 44, 55], [22, 44, 55]])
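The same behavior can be checked with NumPy, where x[...] likewise selects every element (a sketch, not graph-mode code):

```python
import numpy as np

tensor_x = np.arange(2 * 3).reshape((2, 3))
tensor_y = np.arange(2 * 3).reshape((2, 3))

# `...` selects everything, so a Number overwrites the whole array
tensor_x[...] = 88
# a Tensor value is broadcast against the full shape
tensor_y[...] = np.array([22, 44, 55])
```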
slice index value assignment
Single-level and multi-level slice index value assignments are supported. The single-level slice index value assignment is tensor_x[slice_index] = u, and the multi-level slice index value assignment is tensor_x[slice_index0][slice_index1]... = u.
The assigned value can be Number or Tensor, and it is converted into the data type of the updated Tensor.
When the assigned value is a Number, all the elements at the positions obtained by the slice index are updated to that Number.
When the assigned value is a Tensor, its shape must be equal to, or reshapeable to, the shape of the result of the slice index. After the two shapes are made consistent, the elements of the assigned Tensor are written to the positions in the original Tensor from which the index result was obtained.
For example, if a Tensor whose shape = (2, 3, 4) is set to 100 by using the 0:1:1 index, the updated Tensor shape is still (2, 3, 4), but the values of all elements at position 0 on dimension 0 are updated to 100.
For example:
tensor_x = Tensor(np.arange(3 * 3).reshape((3, 3)).astype(np.float32))
tensor_y = Tensor(np.arange(3 * 3).reshape((3, 3)).astype(np.float32))
tensor_z = Tensor(np.arange(3 * 3).reshape((3, 3)).astype(np.float32))
tensor_x[0:1] = 88.0
tensor_y[0:2][0:2] = 88.0
tensor_z[0:2] = Tensor(np.array([11, 12, 13, 11, 12, 13]).astype(np.float32))
The result is as follows:
tensor_x: Tensor(shape=[3, 3], dtype=Float32, value=[[88.0, 88.0, 88.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]])
tensor_y: Tensor(shape=[3, 3], dtype=Float32, value=[[88.0, 88.0, 88.0], [88.0, 88.0, 88.0], [6.0, 7.0, 8.0]])
tensor_z: Tensor(shape=[3, 3], dtype=Float32, value=[[11.0, 12.0, 13.0], [11.0, 12.0, 13.0], [6.0, 7.0, 8.0]])
Tensor index value assignment
Only the single-level Tensor index value assignment is supported. That is, tensor_x[tensor_index] = u.
The Tensor index supports the int32 and bool types.
The assigned value can be Number, Tuple, or Tensor, and its values must be of the same data type as the original Tensor.
When the assigned value is a Number, all the elements at the positions obtained by the Tensor index are updated to that Number.
When the assigned value is a Tensor, its shape must be equal to, or broadcastable to, the shape of the index result. After the two shapes are made consistent, the elements of the assigned Tensor are written to the positions in the original Tensor from which the index result was obtained.
When the assigned value is a Tuple, its elements must be all Number or all Tensor.
When they are all Number, the type of the Numbers must be the same as the data type of the original Tensor, and the number of elements must equal the last dimension of the index result shape; the values are then broadcast to the index result shape.
When they are all Tensor, these Tensors are packed along the axis=0 axis into a new Tensor, and the value is then assigned according to the rule for assigning a Tensor value.
For example, for a tensor whose shape is (6, 4, 5) and dtype is float32, indexed by a tensor whose shape is (2, 3): if the assigned value is a Number, it must be a float; if the assigned value is a Tuple, all elements of the tuple must be float and the number of elements must be 5; if the assigned value is a Tensor, its dtype must be float32 and its shape must be broadcastable to (2, 3, 4, 5).
For example:
tensor_x = Tensor(np.arange(3 * 3).reshape((3, 3)).astype(np.float32))
tensor_y = Tensor(np.arange(3 * 3).reshape((3, 3)).astype(np.float32))
tensor_index = Tensor(np.array([[2, 0, 2], [0, 2, 0], [0, 2, 0]], np.int32))
tensor_x[tensor_index] = 88.0
tensor_y[tensor_index] = Tensor(np.array([11.0, 12.0, 13.0]).astype(np.float32))
The result is as follows:
tensor_x: Tensor(shape=[3, 3], dtype=Float32, value=[[88.0, 88.0, 88.0], [3.0, 4.0, 5.0], [88.0, 88.0, 88.0]])
tensor_y: Tensor(shape=[3, 3], dtype=Float32, value=[[11.0, 12.0, 13.0], [3.0, 4.0, 5.0], [11.0, 12.0, 13.0]])
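NumPy's integer-array assignment follows the same rule, so the example can be replayed outside graph mode (a sketch; note that every row position named by the index, here 0 and 2, is overwritten):

```python
import numpy as np

tensor_x = np.arange(3 * 3).reshape((3, 3)).astype(np.float32)
tensor_y = np.arange(3 * 3).reshape((3, 3)).astype(np.float32)
tensor_index = np.array([[2, 0, 2], [0, 2, 0], [0, 2, 0]], np.int32)

# a Number fills every position selected by the Tensor index
tensor_x[tensor_index] = 88.0
# a Tensor value is broadcast to the shape of the indexed result
tensor_y[tensor_index] = np.array([11.0, 12.0, 13.0], np.float32)
```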
Tuple index value assignment
Single-level and multi-level Tuple index value assignments are supported. The single-level Tuple index value assignment is tensor_x[tuple_index] = u, and the multi-level Tuple index value assignment is tensor_x[tuple_index0][tuple_index1]... = u.
The Tuple index rules for value assignment are the same as those for Tuple index values. However, multi-level Tuple index value assignment does not support a Tuple containing a Tensor.
The assigned value can be Number, Tuple, or Tensor, and its values must be of the same data type as the original Tensor.
When the assigned value is a Number, all the elements at the positions obtained by the index are updated to that Number.
When the assigned value is a Tensor, its shape must be equal to, or broadcastable to, the shape of the index result. After the two shapes are made consistent, the elements of the assigned Tensor are written to the positions in the original Tensor from which the index result was obtained.
When the assigned value is a Tuple, its elements must be all Number or all Tensor.
When they are all Number, the type of the Numbers must be the same as the data type of the original Tensor, and the number of elements must equal the last dimension of the index result shape; the values are then broadcast to the index result shape.
When they are all Tensor, these Tensors are packed along the axis=0 axis into a new Tensor, and the value is then assigned according to the rule for assigning a Tensor value.
For example:
tensor_x = Tensor(np.arange(3 * 3).reshape((3, 3)).astype(np.float32))
tensor_y = Tensor(np.arange(3 * 3).reshape((3, 3)).astype(np.float32))
tensor_z = Tensor(np.arange(3 * 3).reshape((3, 3)).astype(np.float32))
tensor_index = Tensor(np.array([[0, 1], [1, 0]]).astype(np.int32))
tensor_x[1, 1:3] = 88.0
tensor_y[1:3, tensor_index] = 88.0
tensor_z[1:3, tensor_index] = Tensor(np.array([11, 12]).astype(np.float32))
The result is as follows:
tensor_x: Tensor(shape=[3, 3], dtype=Float32, value=[[0.0, 1.0, 2.0], [3.0, 88.0, 88.0], [6.0, 7.0, 8.0]])
tensor_y: Tensor(shape=[3, 3], dtype=Float32, value=[[0.0, 1.0, 2.0], [88.0, 88.0, 5.0], [88.0, 88.0, 8.0]])
tensor_z: Tensor(shape=[3, 3], dtype=Float32, value=[[0.0, 1.0, 2.0], [12.0, 11.0, 5.0], [12.0, 11.0, 8.0]])
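The first two assignments of the example also hold in NumPy, whose tuple-index assignment follows the same rules (a sketch, not graph-mode code; the Tensor-valued case with duplicate indexes is omitted because its write order is implementation-defined in NumPy):

```python
import numpy as np

tensor_x = np.arange(3 * 3).reshape((3, 3)).astype(np.float32)
tensor_y = np.arange(3 * 3).reshape((3, 3)).astype(np.float32)
tensor_index = np.array([[0, 1], [1, 0]], np.int32)

# an (int, slice) tuple index updates part of one row
tensor_x[1, 1:3] = 88.0
# a (slice, Tensor) tuple index updates the named columns of the sliced rows
tensor_y[1:3, tensor_index] = 88.0
```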
Primitive
Currently, Primitive and its subclass instances can be constructed on the network. That is, the reduce_sum = ReduceSum(True) syntax is supported.
However, during construction, the parameters can be specified only as positional parameters, not as key-value pairs. That is, the syntax reduce_sum = ReduceSum(keep_dims=True) is not supported.
Currently, the attributes and APIs related to Primitive and its subclasses cannot be called on the network.
For details about the definition of Primitive, see https://www.mindspore.cn/doc/programming_guide/en/r1.2/operators.html.
For details about the defined Primitive, see https://www.mindspore.cn/doc/api_python/en/r1.2/mindspore/mindspore.ops.html.
Cell
Currently, Cell and its subclass instances can be constructed on the network. That is, the syntax cell = Cell(args...) is supported.
However, during construction, the parameters can be specified only as positional parameters, not as key-value pairs. That is, the syntax cell = Cell(arg_name=value) is not supported.
Currently, the attributes and APIs related to Cell and its subclasses cannot be called on the network unless they are called through self in the construct method of the Cell.
For details about the definition of Cell, see https://www.mindspore.cn/doc/programming_guide/en/r1.2/cell.html.
For details about the defined Cell, see https://www.mindspore.cn/doc/api_python/en/r1.2/mindspore/mindspore.nn.html.
Operators
Arithmetic operators and assignment operators support the Number and Tensor operations, as well as the Tensor operations of different dtype.
This is because these operators are converted to operators with the same name for computation, and they support implicit type conversion.
For details about the rules, see https://www.mindspore.cn/doc/note/en/r1.2/operator_list_implicit.html.
Arithmetic Operators
| Arithmetic Operator | Supported Type |
|---|---|
| + | Number and Tensor |
| - | Number and Tensor |
| * | Number and Tensor |
| / | Number and Tensor |
| % | Number and Tensor |
| ** | Number and Tensor |
| // | Number and Tensor |
Assignment Operators
| Assignment Operator | Supported Type |
|---|---|
| = | Scalar and Tensor |
| += | Number and Tensor |
| -= | Number and Tensor |
| *= | Number and Tensor |
| /= | Number and Tensor |
| %= | Number and Tensor |
| **= | Number and Tensor |
| //= | Number and Tensor |
Logical Operators
| Logical Operator | Supported Type |
|---|---|
| and | Scalar and Tensor |
| or | Scalar and Tensor |
| not | Scalar and Tensor |
Member Operators
| Member Operator | Supported Type |
|---|---|
| in | Number, String, or Tensor in Tuple or List; String in Dictionary |
| not in | Same as in |
Identity Operators
| Identity Operator | Supported Type |
|---|---|
| is | The value can only be None, True, or False |
| is not | The value can only be None, True, or False |
Expressions
Conditional Control Statements
single if
Usage:
if (cond):
    statements
...

x = y if (cond) else z
Parameter: cond – The supported types are Number, Tuple, List, String, None, Tensor and Function. It can also be an expression whose computation result type is one of them.
Restrictions:
During graph building, if the if statement is not eliminated, the data type and shape of the value returned inside the if branch must be the same as those of the value returned outside the if branch.
When only if is used, the data type and shape of a variable updated in the if branch must be the same as those before the update.
When both if and else are used, the data type and shape of a variable updated in the if branch must be the same as those updated in the else branch.
Higher-order differentiation is not supported.
elif statements are not supported.
Example 1:
if x > y:
return m
else:
return n
The data type and shape of m returned by the if branch must be the same as those of n returned by the else branch.
Example 2:
if x > y:
out = m
else:
out = n
return out
The data type and shape of out after being updated in the if branch must be the same as those after being updated in the else branch.
side-by-side if
Usage:
if (cond1):
    statements
else:
    statements
...
if (cond2):
    statements
...
Parameters: cond1 and cond2 – Consistent with single if.
Restrictions:
Inherits all restrictions of single if.
The total number of if statements in the computation graph cannot exceed 50.
Too many if statements cause long compilation times. Reducing the number of if statements helps improve compilation efficiency.
Example:
if x > y:
out = x
else:
out = y
if z > x:
out = out + 1
return out
if in if
Usage:
if (cond1):
    if (cond2):
        statements
...
Parameters: cond1 and cond2 – Consistent with single if.
Restrictions:
Inherits all restrictions of single if.
The total number of if statements in the computation graph cannot exceed 50.
Too many if statements cause long compilation times. Reducing the number of if statements helps improve compilation efficiency.
Example:
if x > y:
z = z + 1
if z > x:
return m
else:
return n
Loop Statements
for
Usage:
for i in sequence
Parameter: sequence – an iterable sequence (Tuple or List).
Restrictions:
The total number of operations in the graph is a multiple of the number of iterations of the for loop. An excessive number of iterations may cause the graph to occupy more memory than the usage limit.
Example:
z = Tensor(np.ones((2, 3)))
x = (1, 2, 3)
for i in x:
z += i
return z
The result is as follows:
z: Tensor(shape=[2, 3], dtype=Int64, value=[[7, 7, 7], [7, 7, 7]])
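The accumulation itself can be checked in plain Python with NumPy, since the loop is simply unrolled once per tuple element (a sketch outside graph mode):

```python
import numpy as np

z = np.ones((2, 3))
x = (1, 2, 3)
# the loop body runs once per element of the tuple: 1 + 1 + 2 + 3 = 7
for i in x:
    z = z + i
```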
single while
Usage:
while (cond)
Parameter: cond – Consistent with single if.
Restrictions:
During graph building, if while is not eliminated, the data type and shape of values returned inside while must be the same as those returned outside while.
The data type and shape of variables updated in while must be the same as those before the update.
Does not support training scenarios.
Example 1:
while x < y:
x += 1
return m
return n
The data type and shape of m returned inside while must be the same as those of n returned outside while.
Example 2:
out = m
while x < y:
x += 1
out = out + 1
return out
In while, the data type and shape of out before the update must be the same as those after the update.
side-by-side while
Usage:
while (cond1): statements... while (cond2): statements...
Parameters: cond1 and cond2 – Consistent with single if.
Restrictions:
Inherit all restrictions of single while.
The total number of while statements in the computation graph cannot exceed 50.
Too many while statements will cause the compilation time to be too long; reducing the number of while statements helps improve compilation efficiency.
Example:
out = m
while x < y:
x += 1
out = out + 1
while out > 10:
out -= 10
return out
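With concrete values plugged in (m = 5, x = 3, y = 6 are arbitrary assumptions for illustration), the two sequential loops above can be traced in plain Python:

```python
# Plain-Python trace of the side-by-side while example,
# assuming m = 5, x = 3, y = 6 (illustrative values only).
m, x, y = 5, 3, 6
out = m
while x < y:       # runs three times: x goes 3 -> 6
    x += 1
    out = out + 1  # out goes 5 -> 8
while out > 10:    # never entered, since out == 8
    out -= 10
```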
while in while
Usage:
while (cond1): while (cond2): statements...
Parameters: cond1 and cond2 – Consistent with single if.
Restrictions:
Inherit all restrictions of single while.
The total number of while statements in the computation graph cannot exceed 50.
Too many while statements will cause the compilation time to be too long; reducing the number of while statements helps improve compilation efficiency.
Example:
out = m
while x < y:
while z < y:
z += 1
out = out + 1
x += 1
return out
Conditional Control Statements in Loop Statements
if in for
Usage:
for i in sequence: if (cond): statements...
Parameters:
cond – Consistent with single if.
sequence – Iterative sequence (Tuple or List).
Restrictions:
Inherit all restrictions of single if.
Inherit all restrictions of for.
If cond is a variable, the statements if (cond): return, if (cond): continue, and if (cond): break are forbidden.
The total number of if statements is a multiple of the number of iterations of the for loop. An excessive number of for loop iterations may cause the compilation time to be too long.
Example:
z = Tensor(np.ones((2, 3)))
x = (1, 2, 3)
for i in x:
if i < 3:
z += i
return z
The result is as follows:
z: Tensor(shape=[2, 3], dtype=Int64, value=[[4, 4, 4], [4, 4, 4]])
if in while
Usage:
while (cond1): if (cond2): statements...
Parameters: cond1 and cond2 – Consistent with single if.
Restrictions:
Inherit all restrictions of single if and single while.
If cond2 is a variable, the statements if (cond2): return, if (cond2): continue, and if (cond2): break are forbidden.
Example:
out = m
while x < y:
if z > 2*x:
out = out + 1
x += 1
return out
Function Definition Statements
def Keyword
Defines functions.
Usage:
def function_name(args): statements...
For example:
def number_add(x, y):
return x + y
ret = number_add(1, 2)
The result is as follows:
ret: 3
lambda Expression
Generates functions.
Usage: lambda x, y: x + y
For example:
number_add = lambda x, y: x + y
ret = number_add(2, 3)
The result is as follows:
ret: 5
Functions
Python Built-in Functions
Currently, the following built-in Python functions are supported: len, isinstance, partial, map, range, enumerate, super, and pow.
len
Returns the length of a sequence.
Calling: len(sequence)
Input parameter: sequence – Tuple, List, Dictionary, or Tensor.
Return value: length of the sequence, which is of the int type. If the input parameter is Tensor, the length of dimension 0 is returned.
For example:
x = (2, 3, 4)
y = [2, 3, 4]
d = {"a": 2, "b": 3}
z = Tensor(np.ones((6, 4, 5)))
x_len = len(x)
y_len = len(y)
d_len = len(d)
z_len = len(z)
The result is as follows:
x_len: 3
y_len: 3
d_len: 2
z_len: 6
isinstance
Determines whether an object is an instance of a class. Different from the Python built-in isinstance, the second input parameter here must be a type defined in the dtype module of MindSpore.
Calling: isinstance(obj, type)
Input parameters:
obj – Any instance of any supported type.
type – A type in the MindSpore dtype module.
Return value: If obj is an instance of type, return True. Otherwise, return False.
For example:
x = (2, 3, 4)
y = [2, 3, 4]
z = Tensor(np.ones((6, 4, 5)))
x_is_tuple = isinstance(x, mstype.tuple_)
y_is_list = isinstance(y, mstype.list_)
z_is_tensor = isinstance(z, mstype.tensor)
The result is as follows:
x_is_tuple: True
y_is_list: True
z_is_tensor: True
partial
A partial function used to fix the input parameter of the function.
Calling: partial(func, arg, ...)
Input parameters:
func – Function.
arg – One or more parameters to be fixed. Positional parameters and key-value pairs can be specified.
Return value: a function with certain input parameter values fixed.
For example:
def add(x, y):
return x + y
add_ = partial(add, x=2)
m = add_(y=3)
n = add_(y=5)
The result is as follows:
m: 5
n: 7
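In standard Python, the same behavior comes from functools.partial; a minimal runnable sketch of the example above:

```python
from functools import partial

def add(x, y):
    return x + y

# Fix x=2 as a keyword argument; partial returns a new callable.
add_ = partial(add, x=2)
m = add_(y=3)  # 2 + 3 = 5
n = add_(y=5)  # 2 + 5 = 7
```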
map
Maps one or more sequences based on the provided functions and generates a new sequence based on the mapping result. If the number of elements in multiple sequences is inconsistent, the length of the new sequence is the same as that of the shortest sequence.
Calling: map(func, sequence, ...)
Input parameters:
func – Function.
sequence – One or more sequences (Tuple or List).
Return value: A Tuple
For example:
def add(x, y):
return x + y
elements_a = (1, 2, 3)
elements_b = (4, 5, 6)
ret = map(add, elements_a, elements_b)
The result is as follows:
ret: (5, 7, 9)
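Note that in standard Python 3, map returns a lazy iterator rather than a Tuple; wrapping it in tuple() reproduces the graph-mode result:

```python
def add(x, y):
    return x + y

elements_a = (1, 2, 3)
elements_b = (4, 5, 6)
# In plain Python 3, map is lazy; tuple() materializes it,
# matching the Tuple returned in graph mode.
ret = tuple(map(add, elements_a, elements_b))
```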
zip
Packs elements in the corresponding positions in multiple sequences into tuples, and then uses these tuples to form a new sequence. If the number of elements in each sequence is inconsistent, the length of the new sequence is the same as that of the shortest sequence.
Calling: zip(sequence, ...)
Input parameter: sequence – One or more sequences (Tuple or List).
Return value: A Tuple
For example:
elements_a = (1, 2, 3)
elements_b = (4, 5, 6)
ret = zip(elements_a, elements_b)
The result is as follows:
ret: ((1, 4), (2, 5), (3, 6))
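As with map, standard Python's zip is lazy; tuple() recovers the graph-mode Tuple. The sketch below also shows the shortest-sequence truncation, using a deliberately longer second tuple (an illustrative assumption):

```python
elements_a = (1, 2, 3)
elements_b = (4, 5, 6, 7)  # longer on purpose (illustrative assumption)
# zip stops at the shortest sequence, so the extra 7 is dropped.
ret = tuple(zip(elements_a, elements_b))
```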
range
Creates a Tuple based on the start value, end value, and step.
Calling:
range(start, stop, step)
range(start, stop)
range(stop)
Input parameters:
start – Start value of the count. The type is int. The default value is 0.
stop – End value of the count (exclusive). The type is int.
step – Step. The type is int. The default value is 1.
Return value: A Tuple
For example:
x = range(0, 6, 2)
y = range(0, 5)
z = range(3)
The result is as follows:
x: (0, 2, 4)
y: (0, 1, 2, 3, 4)
z: (0, 1, 2)
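The same values fall out of standard Python's range once it is materialized into a tuple:

```python
x = tuple(range(0, 6, 2))  # start=0, stop=6 (exclusive), step=2
y = tuple(range(0, 5))     # default step=1
z = tuple(range(3))        # default start=0
```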
enumerate
Generates an index sequence of a sequence. The index sequence contains data and the corresponding subscript.
Calling:
enumerate(sequence, start)
enumerate(sequence)
Input parameters:
sequence – A sequence (Tuple, List, or Tensor).
start – Start position of the subscript. The type is int. The default value is 0.
Return value: A Tuple
For example:
x = (100, 200, 300, 400)
y = Tensor(np.array([[1, 2], [3, 4], [5, 6]]))
m = enumerate(x, 3)
n = enumerate(y)
The result is as follows:
m: ((3, 100), (4, 200), (5, 300), (6, 400))
n: ((0, Tensor(shape=[2], dtype=Int64, value=[1, 2])), (1, Tensor(shape=[2], dtype=Int64, value=[3, 4])), (2, Tensor(shape=[2], dtype=Int64, value=[5, 6])))
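The Tuple case can be checked directly in standard Python (the Tensor case requires MindSpore and is omitted here):

```python
x = (100, 200, 300, 400)
# start=3 shifts the subscripts to 3, 4, 5, 6.
m = tuple(enumerate(x, 3))
```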
super
Calls a method of the parent class (super class). Generally, the method of the parent class is called after super.
Calling:
super().xxx()
super(type, self).xxx()
Input parameters:
type – Class.
self – Object.
Return value: method of the parent class.
For example:
class FatherNet(nn.Cell):
def __init__(self, x):
super(FatherNet, self).__init__(x)
self.x = x
def construct(self, x, y):
return self.x * x
def test_father(self, x):
return self.x + x
class SingleSubNet(FatherNet):
def __init__(self, x, z):
super(SingleSubNet, self).__init__(x)
self.z = z
def construct(self, x, y):
ret_father_construct = super().construct(x, y)
ret_father_test = super(SingleSubNet, self).test_father(x)
return ret_father_construct, ret_father_test
pow
Returns the power.
Calling: pow(x, y)
Input parameters:
x – Base, of type Number or Tensor.
y – Exponent, of type Number or Tensor.
Return value: x raised to the power y, as a Number or Tensor.
For example:
x = Tensor(np.array([1, 2, 3]))
y = Tensor(np.array([1, 2, 3]))
ret = pow(x, y)
The result is as follows:
ret: Tensor(shape=[3], dtype=Int64, value=[1, 4, 27])
print
Prints logs.
Calling: print(arg, ...)
Input parameter: arg – Information to be printed (int, float, bool, String or Tensor).
When the arg is int, float, or bool, it will be printed out as a 0-D tensor.
Return value: none
For example:
x = Tensor(np.array([1, 2, 3]))
y = 3
print("x: ", x)
print("y: ", y)
The result is as follows:
x: Tensor(shape=[3], dtype=Int64, value=[1, 2, 3])
y: Tensor(shape=[], dtype=Int64, value=3)
Function Parameters
Default parameter values: The data types int, float, bool, None, str, tuple, list, and dict are supported, whereas Tensor is not supported.
Variable parameters: Inference and training of networks with variable parameters are supported.
Key-value pair parameters: Functions with key-value pair parameters cannot be used for backward propagation on computational graphs.
Variable key-value pair parameters: Functions with variable key-value pair parameters cannot be used for backward propagation on computational graphs.
Network Definition
Instance Types on the Entire Network
Common Python function with the @ms_function decorator.
Cell subclass inherited from nn.Cell.
Network Construction Components
| Category | Content |
|---|---|
| Cell instance | mindspore/nn/* and user-defined Cell. |
| Member function of a Cell instance | Member functions of other classes in the construct function of Cell can be called. |
| dataclass instance | Class decorated with @dataclass. |
| Primitive operator | mindspore/ops/operations/* |
| Composite operator | mindspore/ops/composite/* |
| constexpr generation operator | Value computation operator generated by @constexpr. |
| Function | User-defined Python functions and system functions listed in the preceding content. |
Network Constraints
By default, the input parameters of the entire network (that is, the outermost network input parameters) support only Tensor. To support non-Tensor inputs, you can set the support_non_tensor_inputs attribute of the network to True by writing self.support_non_tensor_inputs = True during network initialization. Currently, this configuration supports only the forward network and does not support the backward network; that is, the backward operation cannot be performed on a network whose input parameters are not Tensor. The following is an example of passing a scalar to the outermost layer:
class ExpandDimsNet(nn.Cell):
    def __init__(self):
        super(ExpandDimsNet, self).__init__()
        self.support_non_tensor_inputs = True
        self.expandDims = ops.ExpandDims()

    def construct(self, input_x, input_axis):
        return self.expandDims(input_x, input_axis)

expand_dim_net = ExpandDimsNet()
input_x = Tensor(np.random.randn(2, 2, 2, 2).astype(np.float32))
expand_dim_net(input_x, 0)
You are not allowed to modify non-Parameter data members of the network. For example:
class Net(Cell):
    def __init__(self):
        super(Net, self).__init__()
        self.num = 2
        self.par = Parameter(Tensor(np.ones((2, 3, 4))), name="par")

    def construct(self, x, y):
        return x + y
In the preceding defined network, self.num is not a Parameter and cannot be modified, whereas self.par is a Parameter and can be modified.
When an undefined class member is used in the construct function, AttributeError is not thrown as it is by the Python interpreter. Instead, the member is processed as None. For example:
class Net(Cell):
    def __init__(self):
        super(Net, self).__init__()

    def construct(self, x):
        return x + self.y
In the preceding defined network, construct uses the undefined class member self.y. In this case, self.y is processed as None.
