# Release Notes

## MindSpore 2.0.0 Release Notes

### Major Features and Improvements

#### PyNative

- [STABLE] Dynamic shape is fully supported by the framework. For detailed operator support, refer to [Dynamic Shape Support Status of nn Interface](https://www.mindspore.cn/docs/en/r2.0/note/dynamic_shape_nn.html), [Dynamic Shape Support Status of functional Interface](https://www.mindspore.cn/docs/en/r2.0/note/dynamic_shape_func.html), and [Dynamic Shape Support Status of primitive Interface](https://www.mindspore.cn/docs/en/r2.0/note/dynamic_shape_primitive.html).

#### AutoParallel

- [STABLE] Build the new MindFormers independent repository, which provides a distributed parallel suite and replaces the mindspore.nn.transformer module.
- [DEMO] The distributed parallel operator Gather supports the BatchDim attribute.
- [DEMO] Streamline parallel supports specifying any dimension of the input data as the Batch dimension.

### API Change

#### operator

- Add operator primitive for `mindspore.ops.AdaptiveAvgPool2D`.
- Add operator primitive for `mindspore.ops.BatchToSpaceNDV2`.
- Add operator primitive for `mindspore.ops.CeLU`.
- Add operator primitive for `mindspore.ops.ExtractVolumePatches`.
- Add operator primitive for `mindspore.ops.FFTWithSize`.
- Add operator primitive for `mindspore.ops.FillDiagonal`.
- Add operator primitive for `mindspore.ops.FractionalMaxPool3DWithFixedKsize`.
- Add operator primitive for `mindspore.ops.Im2Col`.
- Add operator primitive for `mindspore.ops.MaskedScatter`.
- Add operator primitive for `mindspore.ops.MatrixBandPart`.
- Add operator primitive for `mindspore.ops.MatrixInverse`.
- Add operator primitive for `mindspore.ops.MaxPoolWithArgmaxV2`.
- Add operator primitive for `mindspore.ops.Ormqr`.
- Add operator primitive for `mindspore.ops.RandpermV2`.
- Add operator primitive for `mindspore.ops.ResizeBicubic`.
- Add operator primitive for `mindspore.ops.Triu`.
- Add operator primitive for `mindspore.ops.Zeta`.

#### Backwards Incompatible Change

- Interface: mindspore.ops.MultitypeFuncGraph

  Change: The parameter `doc_url` was introduced as a test feature in MindSpore 2.0.0.rc1. After the optimization in MindSpore 2.0.0, users no longer need to configure this parameter, so it is removed in MindSpore 2.0.0.

| Original Interface | Interface v2.0.0 |
| --- | --- |
| `mindspore.ops.MultitypeFuncGraph(name, read_value=False, doc_url="")` | `mindspore.ops.MultitypeFuncGraph(name, read_value=False)` |

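For reference, a minimal registration sketch with the 2.0.0 constructor; the `add` graph and its registered implementations below are illustrative only:

```python
from mindspore import ops

# In 2.0.0 the constructor takes only name and read_value; doc_url is gone.
add = ops.MultitypeFuncGraph('add')

@add.register("Number", "Number")
def add_scalar(x, y):
    # Scalar branch: plain Python addition.
    return x + y

@add.register("Tensor", "Tensor")
def add_tensor(x, y):
    # Tensor branch: functional elementwise addition.
    return ops.add(x, y)
```
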
| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `mindspore.set_context(mode=GRAPH_MODE)` | `mindspore.set_context(mode=PYNATIVE_MODE)` |

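Code that relied on graph mode being the default can keep the old behavior by setting the mode explicitly:

```python
import mindspore as ms

# The default execution mode is now PYNATIVE_MODE; request graph mode
# explicitly if the previous default behavior is required.
ms.set_context(mode=ms.GRAPH_MODE)
```
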
| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `Model.train(dataset_sink_mode=True)` | `Model.train(dataset_sink_mode=False)` |

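Because `dataset_sink_mode` now defaults to `False`, data sinking must be requested explicitly. A minimal sketch, assuming `model` and `train_dataset` are already constructed:

```python
# dataset_sink_mode is no longer True by default; pass it explicitly
# to keep the old data-sinking behavior.
model.train(epoch=1, train_dataset=train_dataset, dataset_sink_mode=True)
```
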
| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `mindspore.export(net, *inputs, file_name, file_format="AIR", **kwargs)` | `mindspore.export(net, *inputs, file_name, file_format, **kwargs)` |

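Since `file_format` no longer defaults to `"AIR"`, it must be passed explicitly. A minimal migration sketch, assuming `net` is an already-built `nn.Cell` and the input shape is illustrative:

```python
import numpy as np
import mindspore as ms

# file_format is now a required argument (no "AIR" default).
dummy_input = ms.Tensor(np.ones([1, 3, 224, 224]).astype(np.float32))
ms.export(net, dummy_input, file_name="net", file_format="MINDIR")
```
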
| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `ops.norm(input_x, axis, p=2, keep_dims=False, epsilon=1e-12)`<br><br>Example:<br>`input = Tensor(np.array([[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]]).astype(np.float32))`<br>`output = ops.norm(input, [0, 1], p=2)` | `ops.norm(A, ord=None, dim=None, keepdim=False, *, dtype=None)`<br><br>Example:<br>`input = Tensor(np.array([[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]]).astype(np.float32))`<br>`output = ops.norm(input, ord=2, dim=(0, 1))` |

| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `Tensor.norm(axis, p=2, keep_dims=False, epsilon=1e-12)` | `Tensor.norm(ord=None, dim=None, keepdim=False, *, dtype=None)` |

| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `ops.dropout(x, p=0.5, seed0=0, seed1=0)`<br><br>Example:<br>`x = Tensor(((20, 16), (50, 50)), mindspore.float32)`<br>`output, mask = dropout(x, p=0.5)` | `ops.dropout(input, p=0.5, training=True, seed=None)`<br><br>Example:<br>`input = Tensor(((20, 16), (50, 50)), mindspore.float32)`<br>`output = ops.dropout(input, p=0.5, training=True)` |

| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `ops.dropout2d(x, p=0.5)`<br><br>Example:<br>`input = Tensor(np.ones([2, 1, 2, 3]), mindspore.float32)`<br>`output, mask = dropout2d(input, 0.5)` | `ops.dropout2d(input, p=0.5, training=True)`<br><br>Example:<br>`input = Tensor(np.ones([2, 1, 2, 3]), mindspore.float32)`<br>`output = ops.dropout2d(input, 0.5, training=True)` |

| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `ops.dropout3d(x, p=0.5)`<br><br>Example:<br>`input = Tensor(np.ones([2, 1, 2, 3]), mindspore.float32)`<br>`output, mask = dropout3d(input, 0.5)` | `ops.dropout3d(input, p=0.5, training=True)`<br><br>Example:<br>`input = Tensor(np.ones([2, 1, 2, 3]), mindspore.float32)`<br>`output = ops.dropout3d(input, 0.5, training=True)` |

| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `ops.std(input_x, axis=(), unbiased=True, keep_dims=False)` | `ops.std(input, axis=None, ddof=0, keepdims=False)` |

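For migration, `unbiased=True` in the old interface corresponds to `ddof=1` in the new one. A minimal sketch:

```python
import numpy as np
import mindspore as ms
from mindspore import ops

x = ms.Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]).astype(np.float32))
# Old ops.std(x, axis=1, unbiased=True) maps to ddof=1 (N-1 divisor) here.
out = ops.std(x, axis=1, ddof=1, keepdims=False)
```
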
| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `net_param = load_param_into_net()` | `net_param, ckpt_param = load_param_into_net()` |

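A migration sketch for the new two-value return; the checkpoint path and `net` below are placeholders:

```python
import mindspore as ms

# load_param_into_net now returns two lists: network parameters that were
# not loaded, and checkpoint parameters that were not used.
param_dict = ms.load_checkpoint("net.ckpt")  # placeholder checkpoint path
param_not_load, ckpt_not_load = ms.load_param_into_net(net, param_dict)
```
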
| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `BCELoss(weight=None, reduction='none')`<br><br>Example:<br>`weight = Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 3.3, 2.2]]), mindspore.float32)`<br>`loss = nn.BCELoss(weight=weight, reduction='mean')`<br>`logits = Tensor(np.array([[0.1, 0.2, 0.3], [0.5, 0.7, 0.9]]), mindspore.float32)`<br>`labels = Tensor(np.array([[0, 1, 0], [0, 0, 1]]), mindspore.float32)`<br>`output = loss(logits, labels)`<br>`print(output)`<br>`# 1.8952923` | `BCELoss(weight=None, reduction='mean')`<br><br>Example:<br>`weight = Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 3.3, 2.2]]), mindspore.float32)`<br>`loss = nn.BCELoss(weight=weight)`<br>`logits = Tensor(np.array([[0.1, 0.2, 0.3], [0.5, 0.7, 0.9]]), mindspore.float32)`<br>`labels = Tensor(np.array([[0, 1, 0], [0, 0, 1]]), mindspore.float32)`<br>`output = loss(logits, labels)`<br>`print(output)`<br>`# 1.8952923` |

| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `ops.split(input_x, axis=0, output_num=1)`<br><br>Example:<br>`input = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]), mindspore.int32)`<br>`output = ops.split(input, axis=1, output_num=4)` | `ops.split(tensor, split_size_or_sections, axis=0)`<br><br>Example:<br>`input = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]), mindspore.int32)`<br>`output = ops.split(input, split_size_or_sections=1, axis=1)` |

| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `Tensor.split(axis=0, output_num=1)` | `Tensor.split(split_size_or_sections, axis=0)` |

| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `ops.pad(input_x, paddings)`<br><br>Example:<br>`input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)`<br>`paddings = ((1, 2), (2, 1))`<br>`output = ops.pad(input_x, paddings)` | `ops.pad(input_x, padding, mode='constant', value=None)`<br><br>Example:<br>`input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)`<br>`paddings = (2, 1, 1, 2)`<br>`output = ops.pad(input_x, paddings)` |

| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `ops.meshgrid(inputs, indexing='xy')`<br><br>Example:<br>`x = Tensor(np.array([1, 2, 3, 4]).astype(np.int32))`<br>`y = Tensor(np.array([5, 6, 7]).astype(np.int32))`<br>`z = Tensor(np.array([8, 9, 0, 1, 2]).astype(np.int32))`<br>`output = ops.meshgrid((x, y, z), indexing='xy')` | `ops.meshgrid(*inputs, indexing='xy')`<br><br>Example:<br>`x = Tensor(np.array([1, 2, 3, 4]).astype(np.int32))`<br>`y = Tensor(np.array([5, 6, 7]).astype(np.int32))`<br>`z = Tensor(np.array([8, 9, 0, 1, 2]).astype(np.int32))`<br>`output = ops.meshgrid(x, y, z, indexing='xy')` |

| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `ops.max(x, axis=0, keep_dims=False)`<br><br>Example:<br>`input = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)`<br>`index, output = ops.max(input)`<br>`print(index, output)`<br>`# 3 0.7` | `ops.max(input, axis=None, keepdims=False, *, initial=None, where=True, return_indices=False)`<br><br>Example:<br>`input = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)`<br>`output, index = ops.max(input, axis=0)`<br>`print(output, index)` |

| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `ops.min(x, axis=0, keep_dims=False)`<br><br>Example:<br>`input = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)`<br>`index, output = ops.min(input)`<br>`# 0 0.0` | `ops.min(input, axis=None, keepdims=False, *, initial=None, where=True, return_indices=False)`<br><br>Example:<br>`input = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)`<br>`output, index = ops.min(input, keepdims=True)`<br>`# 0.0 0` |

| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `ops.random_gamma(shape, alpha, seed=0, seed2=0)` | `ops.random_gamma(shape, alpha, seed=None)` |

| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `ops.standard_laplace(shape, seed=0, seed2=0)` | `ops.standard_laplace(shape, seed=None)` |

| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `ops.standard_normal(shape, seed=0, seed2=0)` | `ops.standard_normal(shape, seed=None)` |

| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `ops.bernoulli(x, p=0.5, seed=-1)` | `ops.bernoulli(input, p=0.5, seed=None)` |

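The random operators above follow the same pattern: the legacy pair of seed parameters is replaced by a single optional `seed`. A minimal sketch for `standard_normal` and `bernoulli`; the shapes and seed values are illustrative:

```python
import numpy as np
import mindspore as ms
from mindspore import ops

# A single optional seed replaces the old seed/seed2 (or seed0/seed1) pair.
noise = ops.standard_normal((3, 4), seed=2)

x = ms.Tensor(np.ones([3, 3]).astype(np.float32))
samples = ops.bernoulli(x, p=0.5, seed=1)
```
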
| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `mindspore.data_sink(fn, dataset, steps, sink_size=1, jit=False)` | `mindspore.data_sink(fn, dataset, sink_size=1, jit_config=None, input_signature=None)` |

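A migration sketch for the new `data_sink` signature, assuming `train_step` and `dataset` already exist; the loop count and default `JitConfig()` settings are illustrative:

```python
import mindspore as ms

# steps and jit are gone: the loop count is controlled by the caller and
# graph compilation is configured through a JitConfig object.
sink_process = ms.data_sink(train_step, dataset, sink_size=1,
                            jit_config=ms.JitConfig())
for _ in range(10):
    loss = sink_process()
```
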
| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `conv2d(inputs, weight, pad_mode="valid", padding=0, stride=1, dilation=1, group=1)` | `conv2d(input, weight, bias=None, stride=1, pad_mode="valid", padding=0, dilation=1, groups=1)` |

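A minimal sketch of the new functional `conv2d` signature; the NCHW input and weight shapes are illustrative:

```python
import numpy as np
import mindspore as ms
from mindspore import ops

# bias is now an explicit argument and group has been renamed to groups.
x = ms.Tensor(np.ones([1, 3, 32, 32]).astype(np.float32))       # NCHW input
weight = ms.Tensor(np.ones([8, 3, 3, 3]).astype(np.float32))    # (C_out, C_in, kH, kW)
out = ops.conv2d(x, weight, bias=None, stride=1, pad_mode="valid",
                 padding=0, dilation=1, groups=1)
```
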
| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `mindspore.dataset.vision.Pad(padding=(1,2))`<br>Indicates that the left/upper part of the image is padded with 1 pixel and the right/lower part is padded with 2 pixels. | `mindspore.dataset.vision.Pad(padding=(1,2,1,2))`<br>Indicates that the left/upper part of the image is padded with 1 pixel and the right/lower part is padded with 2 pixels. |

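To preserve the old interpretation of a length-2 `padding`, all four values now have to be spelled out, as shown above. A minimal sketch:

```python
import mindspore.dataset.vision as vision

# Keeps the old (1, 2) behavior: 1 pixel on the left/upper side and
# 2 pixels on the right/lower side of the image.
pad_op = vision.Pad(padding=(1, 2, 1, 2))
```
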
| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `dataset = dataset.map(operations=[transforms], input_columns=["column_a"], output_columns=["column_b", "column_c"], column_order=["column_c", "column_b"])` | `dataset = dataset.map(operations=[transforms], input_columns=["column_a"], output_columns=["column_b", "column_c"])`<br>`dataset = dataset.project(["column_c", "column_b"])` |

| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `dataset = dataset.batch(batch_size=4, input_columns=["column_a"], output_columns=["column_b", "column_c"], column_order=["column_c", "column_b"])` | `dataset = dataset.batch(batch_size=4, input_columns=["column_a"], output_columns=["column_b", "column_c"])`<br>`dataset = dataset.project(["column_c", "column_b"])` |

| Original Interface | Interface v2.0.0-rc1 |
| --- | --- |
| `dataset = dataset.batch(batch_size=4, drop_remainder=True, pad_info=...)` | `dataset = dataset.padded_batch(batch_size=4, drop_remainder=True, pad_info=...)` |