mindspore

MindSpore package.

class mindspore.Tensor(input_data, dtype=None)[source]

Tensor is used for data storage.

Tensor inherits the tensor object in C++. Some functions are implemented in C++, and others are implemented in Python.

Parameters
  • input_data (Tensor, float, int, bool, tuple, list, numpy.ndarray) – Input data of the tensor.

  • dtype (mindspore.dtype) – Should be None, bool, or a numeric type defined in mindspore.dtype. This argument defines the data type of the output tensor. If it is None, the data type of the output tensor is the same as that of input_data. Default: None.

Outputs:

Tensor, with the same shape as input_data.

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>>
>>> # initialize a tensor with input data
>>> t1 = Tensor(np.zeros([1, 2, 3]), mindspore.float32)
>>> assert isinstance(t1, Tensor)
>>> assert t1.shape == (1, 2, 3)
>>> assert t1.dtype == mindspore.float32
>>>
>>> # initialize a tensor with a float scalar
>>> t2 = Tensor(0.1)
>>> assert isinstance(t2, Tensor)
>>> assert t2.dtype == mindspore.float64
all(axis=(), keep_dims=False)[source]

Check whether all array elements along a given axis evaluate to True.

Parameters
  • axis (Union[None, int, tuple(int)]) – Dimensions of reduction. Default: (), reduce all dimensions.

  • keep_dims (bool) – Whether to keep the reduced dimensions. Default: False, don’t keep these reduced dimensions.

Returns

Tensor, has the same data type as the original tensor.
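A minimal usage sketch (assuming standard imports; results are noted in comments):

>>> import numpy as np
>>> from mindspore import Tensor
>>> t = Tensor(np.array([[True, True], [True, False]]))
>>> out = t.all()          # reduce over all dimensions: False
>>> out = t.all(axis=0)    # reduce over the first dimension: [True, False]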

any(axis=(), keep_dims=False)[source]

Check whether any array element along a given axis evaluates to True.

Parameters
  • axis (Union[None, int, tuple(int)]) – Dimensions of reduction. Default: (), reduce all dimensions.

  • keep_dims (bool) – Whether to keep the reduced dimensions. Default: False, don’t keep these reduced dimensions.

Returns

Tensor, has the same data type as the original tensor.
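A similar sketch for any:

>>> import numpy as np
>>> from mindspore import Tensor
>>> t = Tensor(np.array([[True, False], [False, False]]))
>>> out = t.any()          # True: at least one element is True
>>> out = t.any(axis=1)    # reduce each row: [True, False]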

asnumpy()[source]

Convert the tensor to a NumPy array.
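A minimal sketch (assuming standard imports):

>>> import numpy as np
>>> from mindspore import Tensor
>>> t = Tensor(np.ones((2, 3), dtype=np.float32))
>>> a = t.asnumpy()
>>> assert isinstance(a, np.ndarray)
>>> assert a.shape == (2, 3)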

property dtype

The dtype of the tensor, as a MindSpore dtype.

property shape

The shape of the tensor, as a tuple.

property virtual_flag

Whether the tensor is marked as virtual.

class mindspore.RowTensor(indices, values, dense_shape)[source]

A sparse representation of a set of tensor slices at given indices.

A RowTensor is typically used to represent a subset of a larger tensor dense of shape [L0, D1, …, DN], where L0 >> D0.

The values in indices are the indices in the first dimension of the slices that have been extracted from the larger tensor.

The dense tensor dense represented by a RowTensor slices has dense[slices.indices[i], :, :, :, …] = slices.values[i, :, :, :, …].

RowTensor can only be used in the Cell’s construct method.

It is not supported in pynative mode at the moment.

Parameters
  • indices (Tensor) – A 1-D integer Tensor of shape [D0].

  • values (Tensor) – A Tensor of any dtype of shape [D0, D1, …, Dn].

  • dense_shape (tuple) – An integer tuple which contains the shape of the corresponding dense tensor.

Returns

RowTensor, composed of indices, values, and dense_shape.

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, RowTensor
>>>
>>> class Net(nn.Cell):
>>>     def __init__(self, dense_shape):
>>>         super(Net, self).__init__()
>>>         self.dense_shape = dense_shape
>>>     def construct(self, indices, values):
>>>         x = RowTensor(indices, values, self.dense_shape)
>>>         return x.values, x.indices, x.dense_shape
>>>
>>> indices = Tensor([0])
>>> values = Tensor([[1, 2]], dtype=ms.float32)
>>> Net((3, 2))(indices, values)
class mindspore.SparseTensor(indices, values, dense_shape)[source]

A sparse representation of a set of nonzero elements from a tensor at given indices.

SparseTensor can only be used in the Cell’s construct method.

It is not supported in pynative mode at the moment.

For a tensor dense, its SparseTensor(indices, values, dense_shape) has dense[indices[i]] = values[i].

Parameters
  • indices (Tensor) – A 2-D integer Tensor of shape [N, ndims], where N and ndims are the number of values and number of dimensions in the SparseTensor, respectively.

  • values (Tensor) – A 1-D tensor of any type and shape [N], which supplies the values for each element in indices.

  • dense_shape (tuple) – An integer tuple of size ndims, which specifies the dense shape of the sparse tensor.

Returns

SparseTensor, composed of indices, values, and dense_shape.

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, SparseTensor
>>>
>>> class Net(nn.Cell):
>>>     def __init__(self, dense_shape):
>>>         super(Net, self).__init__()
>>>         self.dense_shape = dense_shape
>>>     def construct(self, indices, values):
>>>         x = SparseTensor(indices, values, self.dense_shape)
>>>         return x.values, x.indices, x.dense_shape
>>>
>>> indices = Tensor([[0, 1], [1, 2]])
>>> values = Tensor([1, 2], dtype=ms.float32)
>>> Net((3, 4))(indices, values)
mindspore.ms_function(fn=None, obj=None, input_signature=None)[source]

Create a callable MindSpore graph from a python function.

This allows the MindSpore runtime to apply optimizations based on the graph.

Parameters
  • fn (Function) – The Python function that will be run as a graph. Default: None.

  • obj (Object) – The Python object that provides the information for identifying the compiled function. Default: None.

  • input_signature (MetaTensor) – The MetaTensor which describes the input arguments. The MetaTensor specifies the shape and dtype of the Tensor that will be supplied to this function. If input_signature is specified, each input to fn must be a Tensor, and the input parameters of fn cannot accept **kwargs. The shape and dtype of actual inputs must be the same as the input_signature; otherwise, TypeError will be raised. Default: None.

Returns

Function. If fn is not None, returns a callable function that will execute the compiled function; if fn is None, returns a decorator that, when invoked with a single fn argument, produces such a callable function.

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ms_function
>>> from mindspore.ops import functional as F
>>> # Note: the import path of MetaTensor varies between versions
>>>
>>> def tensor_add(x, y):
>>>     z = F.tensor_add(x, y)
>>>     return z
>>>
>>> @ms_function
>>> def tensor_add_with_dec(x, y):
>>>     z = F.tensor_add(x, y)
>>>     return z
>>>
>>> @ms_function(input_signature=(MetaTensor(mindspore.float32, (1, 1, 3, 3)),
>>>                               MetaTensor(mindspore.float32, (1, 1, 3, 3))))
>>> def tensor_add_with_sig(x, y):
>>>     z = F.tensor_add(x, y)
>>>     return z
>>>
>>> x = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
>>> y = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
>>>
>>> tensor_add_graph = ms_function(fn=tensor_add)
>>> out = tensor_add_graph(x, y)
>>> out = tensor_add_with_dec(x, y)
>>> out = tensor_add_with_sig(x, y)
class mindspore.Parameter(default_input, name, requires_grad=True, layerwise_parallel=False)[source]

Parameter is the type of the parameters in cell models.

After being initialized, a Parameter is a subtype of Tensor.

In the auto-parallel modes “semi_auto_parallel” and “auto_parallel”, if a Parameter is initialized by an Initializer, its type will be MetaTensor instead of Tensor. A MetaTensor only saves the shape and type information of a tensor, with no memory usage. The shape can be changed while compiling for auto-parallel. Calling init_data will return a Tensor Parameter with initialized data.

Note

Each parameter of Cell is represented by Parameter class.

Parameters
  • default_input (Union[Tensor, Initializer]) – Parameter data. When default_input is an Initializer, the data stored by the Parameter is a MetaTensor; otherwise it is a Tensor.

  • name (str) – Name of the child parameter.

  • requires_grad (bool) – True if the parameter requires gradient. Default: True.

  • layerwise_parallel (bool) – A kind of model parallel mode. When layerwise_parallel is true in parallel mode, broadcast and gradients communication would not be applied to parameters. Default: False.
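A minimal construction sketch (assuming standard imports; the name ‘w’ is illustrative):

>>> import numpy as np
>>> from mindspore import Parameter, Tensor
>>> w = Parameter(Tensor(np.ones((2, 2), np.float32)), name='w', requires_grad=True)
>>> assert w.name == 'w'
>>> assert w.requires_grad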

clone(prefix, init='same')[source]

Clone the parameter.

Parameters
  • prefix (str) – Namespace of the parameter.

  • init (str) – Initialization to apply to the clone. Default: ‘same’, which keeps the original data.

Returns

Parameter, a new parameter.
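A hedged sketch, continuing the construction example above (init values other than ‘same’ are illustrative):

>>> w_copy = w.clone(prefix='copy')                 # clone under the 'copy' namespace
>>> w_zero = w.clone(prefix='zero', init='zeros')   # hypothetical re-initialized clone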

init_data(layout=None, set_sliced=False)[source]

Initialize the parameter data.

Parameters
  • layout (list[list[int]]) –

    Parameter slice layout [dev_mat, tensor_map, slice_shape].

    • dev_mat (list[int]): Device matrix.

    • tensor_map (list[int]): Tensor map.

    • slice_shape (list[int]): Shape of slice.

  • set_sliced (bool) – True if the parameter should be set as sliced after initializing the data. Default: False.

Returns

Parameter, the Parameter after initializing data. If current Parameter was already initialized before, returns the same initialized Parameter.
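A minimal sketch for an Initializer-backed Parameter (assuming initializer from mindspore.common.initializer):

>>> from mindspore import Parameter
>>> from mindspore.common.initializer import initializer
>>> p = Parameter(initializer('ones', [1, 2, 3]), name='p')
>>> p = p.init_data()   # returns a Parameter backed by real data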

property inited_param

Get the new parameter after init_data has been called.

property is_init

Get the initialization status of the parameter.

property name

Get the name of the parameter.

property requires_grad

Return whether the parameter requires gradient.

set_parameter_data(data, slice_shape=False)[source]

Set default_input of current Parameter.

Parameters
  • data (Union[Tensor, Initializer]) – New data.

  • slice_shape (bool) – Whether to slice the Parameter. Default: False.

Returns

Parameter, the Parameter after its data is set.
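A minimal sketch, continuing the Parameter construction example above:

>>> import numpy as np
>>> from mindspore import Tensor
>>> w.set_parameter_data(Tensor(np.zeros((2, 2), np.float32)))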

property sliced

Get slice status of the parameter.

class mindspore.ParameterTuple[source]

Class for storing a tuple of parameters.

Note

It is used to store the parameters of the network in a parameter tuple collection.

clone(prefix, init='same')[source]

Clone the parameters in the tuple.

Parameters
  • prefix (str) – Namespace of parameter.

  • init (str) – Initialization to apply to the cloned parameters. Default: ‘same’, which keeps the original data.

Returns

Tuple, the new Parameter tuple.
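A minimal sketch (assuming standard imports; nn.Dense is just an example source of parameters):

>>> import mindspore.nn as nn
>>> from mindspore import ParameterTuple
>>> net = nn.Dense(3, 4)
>>> params = ParameterTuple(net.trainable_params())
>>> cloned = params.clone(prefix='clone')   # new tuple with namespaced parameter names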

mindspore.dtype_to_nptype(type_)[source]

Convert MindSpore dtype to numpy data type.

Parameters

type_ (mindspore.dtype) – MindSpore’s dtype.

Returns

The corresponding NumPy data type.

mindspore.issubclass_(type_, dtype)[source]

Determine whether type_ is a subclass of dtype.

Parameters
  • type_ (mindspore.dtype) – The type to be checked.

  • dtype (mindspore.dtype) – The target type.

Returns

bool, True or False.

mindspore.dtype_to_pytype(type_)[source]

Convert MindSpore dtype to python data type.

Parameters

type_ (mindspore.dtype) – MindSpore’s dtype.

Returns

The corresponding Python type.

mindspore.pytype_to_dtype(obj)[source]

Convert a Python type to a MindSpore type.

Parameters

obj (type) – A python type object.

Returns

The corresponding MindSpore type.

mindspore.get_py_obj_dtype(obj)[source]

Get the MindSpore data type that corresponds to a Python type or variable.

Parameters

obj – An object of a Python type, or a variable of a Python type.

Returns

The corresponding MindSpore type.
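A minimal sketch of these conversion helpers (importable from the top-level package as documented above; outputs in comments are indicative):

>>> import mindspore
>>> from mindspore import dtype_to_nptype, dtype_to_pytype, pytype_to_dtype, get_py_obj_dtype
>>> dtype_to_nptype(mindspore.float32)   # numpy.float32
>>> dtype_to_pytype(mindspore.int64)     # int
>>> pytype_to_dtype(int)                 # mindspore.int64 (typical mapping)
>>> get_py_obj_dtype(1.0)                # a MindSpore float type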

class mindspore.Model(network, loss_fn=None, optimizer=None, metrics=None, eval_network=None, eval_indexes=None, amp_level='O0', **kwargs)[source]

High-Level API for Training or Testing.

Model groups layers into an object with training and inference features.

Parameters
  • network (Cell) – A training or testing network.

  • loss_fn (Cell) – Objective function. If loss_fn is None, the network should contain the logic of loss and gradient calculation, and the logic of parallelism if needed. Default: None.

  • optimizer (Cell) – Optimizer for updating the weights. Default: None.

  • metrics (Union[dict, set]) – A dictionary or a set of metrics to be evaluated by the model during training and testing, e.g. {‘accuracy’, ‘recall’}. Default: None.

  • eval_network (Cell) – Network for evaluation. If not defined, network and loss_fn would be wrapped as eval_network. Default: None.

  • eval_indexes (list) – When defining the eval_network, if eval_indexes is None, all outputs of the eval_network would be passed to metrics; otherwise, eval_indexes must contain three elements: the positions of the loss value, the predicted value, and the label. The loss value would be passed to the Loss metric, and the predicted value and label would be passed to the other metrics. Default: None.

  • amp_level (str) –

    Option for argument level in mindspore.amp.build_train_network, level for mixed precision training. Supports [O0, O2, O3]. Default: “O0”.

    • O0: Do not change.

    • O2: Cast network to float16, keep batchnorm run in float32, using dynamic loss scale.

    • O3: Cast network to float16, with additional property ‘keep_batchnorm_fp32=False’.

    O2 is recommended on GPU, O3 is recommended on Ascend.

  • loss_scale_manager (Union[None, LossScaleManager]) – If it is None, the loss would not be scaled. Otherwise, scale the loss by LossScaleManager. It is a keyword argument; e.g., use loss_scale_manager=None to set the value.

  • keep_batchnorm_fp32 (bool) – Keep Batchnorm running in float32. If it is set to true, the level setting before will be overwritten. Default: True.

Examples

>>> import mindspore.nn as nn
>>> from mindspore import Model
>>> from mindspore.nn import Momentum
>>>
>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.conv = nn.Conv2d(3, 64, 3, has_bias=False, weight_init='normal')
>>>         self.bn = nn.BatchNorm2d(64)
>>>         self.relu = nn.ReLU()
>>>         self.flatten = nn.Flatten()
>>>         self.fc = nn.Dense(64*224*224, 12) # padding=0
>>>
>>>     def construct(self, x):
>>>         x = self.conv(x)
>>>         x = self.bn(x)
>>>         x = self.relu(x)
>>>         x = self.flatten(x)
>>>         out = self.fc(x)
>>>         return out
>>>
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True)
>>> optim = Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim, metrics=None)
>>> dataset = get_dataset()
>>> model.train(2, dataset)
eval(valid_dataset, callbacks=None, dataset_sink_mode=True)[source]

Evaluation API where the iteration is controlled by python front-end.

When configured to pynative mode, the evaluation will be performed with the dataset in non-sink mode.

Note

CPU is not supported when dataset_sink_mode is true. If dataset_sink_mode is True, data will be sent to the device. If the device is Ascend, features of the data will be transferred one by one; the limit on each data transfer is 256M.

Parameters
  • valid_dataset (Dataset) – Dataset to evaluate the model.

  • callbacks (list) – List of callback objects which should be executed while evaluating. Default: None.

  • dataset_sink_mode (bool) – Determines whether to pass the data through dataset channel. Default: True.

Returns

Dict, which returns the loss value and metrics values for the model in the test mode.

Examples

>>> dataset = get_dataset()
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True)
>>> model = Model(net, loss_fn=loss, optimizer=None, metrics={'acc'})
>>> model.eval(dataset)
init(train_dataset=None, valid_dataset=None)[source]

Initialize compute graphs and data graphs with the sink mode.

Note

The pre-init process currently only supports GRAPH_MODE and the Ascend target.

Parameters
  • train_dataset (Dataset) – A training dataset iterator. If train_dataset is defined, training graphs will be initialized. Default: None.

  • valid_dataset (Dataset) – An evaluation dataset iterator. If valid_dataset is defined, evaluation graphs will be initialized, and metrics in Model cannot be None. Default: None.

Examples

>>> train_dataset = get_train_dataset()
>>> valid_dataset = get_valid_dataset()
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True)
>>> optim = Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim, metrics={'acc'})
>>> model.init(train_dataset, valid_dataset)
>>> model.train(2, train_dataset)
>>> model.eval(valid_dataset)
predict(*predict_data)[source]

Generate output predictions for the input samples.

Data can be a single tensor, a list of tensors, or a tuple of tensors.

Note

Batch data should be put together in one tensor.

Parameters

predict_data (Tensor) – Tensor of predict data. Can be an array, list, or tuple.

Returns

Tensor, array(s) of predictions.

Examples

>>> input_data = Tensor(np.random.randint(0, 255, [1, 3, 224, 224]), mindspore.float32)
>>> model = Model(Net())
>>> model.predict(input_data)
train(epoch, train_dataset, callbacks=None, dataset_sink_mode=True, sink_size=-1)[source]

Training API where the iteration is controlled by python front-end.

When pynative mode is set, the training process will be performed with the dataset in non-sink mode.

Note

CPU is not supported when dataset_sink_mode is true. If dataset_sink_mode is True, the epoch of training should be equal to the count of the repeat operation in dataset processing; otherwise, errors could occur since the amount of data is not equal to the required amount of training. If dataset_sink_mode is True, data will be sent to the device. If the device is Ascend, features of the data will be transferred one by one; the limit on each data transfer is 256M.

Parameters
  • epoch (int) – Generally, the total number of iterations on the data per epoch. When dataset_sink_mode is set to true and sink_size>0, each epoch sinks sink_size steps of data instead of the total number of iterations.

  • train_dataset (Dataset) – A training dataset iterator. If there is no loss_fn, a tuple with multiple data (data1, data2, data3, …) should be returned and passed to the network. Otherwise, a tuple (data, label) should be returned. The data and label would be passed to the network and loss function respectively.

  • callbacks (list) – List of callback objects which should be executed while training. Default: None.

  • dataset_sink_mode (bool) – Determines whether to pass the data through the dataset channel. Default: True. In pynative mode, the training process will be performed with the dataset in non-sink mode.

  • sink_size (int) – Controls the amount of data sunk in each epoch. If sink_size=-1, sink the complete dataset for each epoch. If sink_size>0, sink sink_size data for each epoch. If dataset_sink_mode is False, sink_size is ignored. Default: -1.

Examples

>>> dataset = get_dataset()
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True)
>>> loss_scale_manager = FixedLossScaleManager()
>>> optim = Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim, metrics=None, loss_scale_manager=loss_scale_manager)
>>> model.train(2, dataset)
class mindspore.ParallelMode[source]

Parallel mode options.

There are five kinds of parallel modes, “STAND_ALONE”, “DATA_PARALLEL”, “HYBRID_PARALLEL”, “SEMI_AUTO_PARALLEL” and “AUTO_PARALLEL”. Default: “STAND_ALONE”.

  • STAND_ALONE: Only one processor working.

  • DATA_PARALLEL: Distributing the data across different processors.

  • HYBRID_PARALLEL: Achieving data parallelism and model parallelism manually.

  • SEMI_AUTO_PARALLEL: Achieving data parallelism and model parallelism by setting parallel strategies.

  • AUTO_PARALLEL: Achieving parallelism automatically.

MODE_LIST: The list of all supported parallel modes.
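A hedged sketch of selecting a mode through the auto-parallel context (a common pattern; the string values behind the ParallelMode constants are assumed to be accepted by the context API):

>>> from mindspore import context, ParallelMode
>>> context.set_auto_parallel_context(parallel_mode=ParallelMode.DATA_PARALLEL)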

class mindspore.DatasetHelper(dataset, dataset_sink_mode=True, sink_size=-1, epoch_num=1)[source]

Helper class for using the MindData dataset.

According to different contexts, it changes the iteration of the dataset so that the same for-loop can be used in different contexts.

Note

The iteration of DatasetHelper will provide one epoch of data.

Parameters
  • dataset (DataSet) – The training dataset iterator.

  • dataset_sink_mode (bool) – If true, use GetNext to fetch the data; otherwise, feed the data from the host. Default: True.

  • sink_size (int) – Control the amount of data in each sink. If sink_size=-1, sink the complete dataset for each epoch. If sink_size>0, sink sink_size data for each epoch. Default: -1.

  • epoch_num (int) – Control the number of epoch data to send. Default: 1.

Examples

>>> dataset_helper = DatasetHelper(dataset)
>>> for inputs in dataset_helper:
>>>     outputs = network(*inputs)
sink_size()[source]

Get sink_size for each iteration.

stop_send()[source]

Release the resources used for data sinking.

types_shapes()[source]

Get the types and shapes from dataset on the current configuration.

mindspore.get_level()[source]

Get the logger level.

Returns

str, the log level: 3 (ERROR), 2 (WARNING), 1 (INFO), 0 (DEBUG).

Examples

>>> import os
>>> os.environ['GLOG_v'] = '0'
>>> from mindspore import log as logger
>>> logger.get_level()
mindspore.get_log_config()[source]

Get logger configurations.

Returns

Dict, the dictionary of logger configurations.

Examples

>>> import os
>>> os.environ['GLOG_v'] = '1'
>>> os.environ['GLOG_logtostderr'] = '0'
>>> os.environ['GLOG_log_dir'] = '/var/log/mindspore'
>>> os.environ['logger_maxBytes'] = '5242880'
>>> os.environ['logger_backupCount'] = '10'
>>> from mindspore import log as logger
>>> logger.get_log_config()