# Release Notes

## MindSpore 2.1.1 Release Notes

### Bug fixes

- [I7Q9RX] The Ascend platform supports adaptive identification of different hardware types.
- [I7SDA0] Fixed an issue where the accuracy of the CRNN network deteriorates on the NES platform.
- [I6QYCD] Fixed an issue where the precision of the maskrcnn network deteriorates on the OptiX OSN 8800 platform.
- [I7T4QK] Fixed an issue where the inference precision of the WGAN network deteriorates on the OptiX OSN 8800 platform.
- [I7TJ8Z] Fixed an issue where the inference precision of the LGTM network deteriorates on the OptiX OSN 8800 platform.

### Contributors

Thanks goes to these wonderful people:

changzherui, chenfei_mindspore, chenjianping, chenkang, chenweifeng, chujinjin, fangwenyi, GuoZhibin, guozhijian, hangq, hanhuifeng, haozhang, hedongdong, You Shu, Zhou Feng, Dai Yuxin

Contributions of any kind are welcome!

## MindSpore 2.1.0 Release Notes

### Major Features and Improvements

#### FrontEnd

- [BETA] JIT Fallback supports variable scenarios. In static graph mode, JIT Fallback supports returning Dict and Scalar types, setting attributes of non-Parameter objects, partial in-place modification of List objects, and third-party libraries such as NumPy. It also supports operations on user-defined classes and extends Python basic operators and built-in functions to more data types, and it is compatible with features such as control flow, side effects, and automatic differentiation. For more details, please refer to [Static Graph Syntax Support](https://www.mindspore.cn/docs/en/r2.1/note/static_graph_syntax_support.html). A minimal sketch is shown after this list.
- [BETA] In static graph mode, the error message for using undefined variables in control-flow scenarios is optimized: variables used inside if, while, and for branches must be initialized and defined before the control flow.
- [STABLE] Add the ReWrite module, which supports modifying multiple networks in batches based on customized rules.
- [BETA] Add the optim_ex module for optimizers, extending the current functionality: it supports parameter grouping for every parameter in the optimizer and supports modifying parameters by assignment during training.
- [STABLE] Optimize the PyTorch and MindSpore API mapping table, specifying the differences between APIs in terms of functionality, parameters, inputs, outputs, and specialized cases.
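The snippet below is a minimal, non-authoritative sketch of the extended JIT Fallback syntax in graph mode; it assumes the configured JIT syntax level permits the extended syntax, and the function name and values are illustrative only.

```python
import numpy as np
import mindspore as ms

ms.set_context(mode=ms.GRAPH_MODE)

@ms.jit
def make_stats():
    # A third-party library call (NumPy) inside a graph-mode function ...
    arr = np.linspace(0.0, 1.0, 5)
    # ... and a Dict/Scalar return value from static graph mode.
    return {"mean": float(arr.mean()), "count": 5}

print(make_stats())
```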
#### PyNative

- Optimize the performance of dynamic shape scenarios in PyNative mode.

#### DataSet

- [STABLE] Optimize the memory structure of MindRecord data files. Memory consumption can be reduced by 60% when loading 100 TB+ data for training.
- [STABLE] Support single-thread execution of the data processing pipeline, so that users can add code in the data pipeline for debugging.
- [STABLE] Optimize the performance of TFRecordDataset to improve dataset loading performance by more than 60%. Optimize the performance of batch to improve performance by 30% in scenarios with a large number of batches.
- [STABLE] Optimize the API documentation of [mindspore.dataset](https://www.mindspore.cn/docs/en/r2.1/api_python/mindspore.dataset.html) and [mindspore.dataset.transforms](https://www.mindspore.cn/docs/en/r2.1/api_python/mindspore.dataset.transforms.html). Four new sample libraries have been added to show the effect of data augmentation: [Load & Process Datasets Using Data Pipeline](https://www.mindspore.cn/docs/en/r2.1/api_python/mindspore.dataset.html#quick-start-of-dataset-pipeline), [Vision Transform Sample Library](https://www.mindspore.cn/docs/en/r2.1/api_python/mindspore.dataset.transforms.html#module-mindspore.dataset.vision), [Text Transform Sample Library](https://www.mindspore.cn/docs/en/r2.1/api_python/mindspore.dataset.transforms.html#module-mindspore.dataset.text), and [Audio Transform Sample Library](https://www.mindspore.cn/docs/en/r2.1/api_python/mindspore.dataset.transforms.html#module-mindspore.dataset.audio).

#### AutoParallel

- [STABLE] Support offloading parameters or intermediate activations to CPU or NVMe storage during training. Users can enable this offloading feature through the context configuration to scale up the trainable model size.
- [STABLE] Enhanced automatic parallel capability, including:
  1. The performance of automatically generated strategies for typical networks is no less than 90% of that of the default configuration.
  2. Support for 3D hybrid parallel training: automatic operator-level strategy generation combined with manually configured pipeline partitioning.

#### Runtime

- [STABLE] Upgrade the OpenMPI version to 4.1.4.
- [STABLE] Upgrade the NCCL version to 2.16.5.
- [STABLE] Assign rank IDs continuously within the same node when using dynamic cluster to launch distributed jobs.
- [STABLE] No adaptation code is required for the Scheduler node; the Scheduler script can be identical to the Worker script.

#### Ascend

- [STABLE] Support dumping assisted debugging information for the operator AIC Error scenario. The information includes the operator task name, stream ID, input/output/workspace addresses, and so on.
- [STABLE] Provide a default processing mechanism for CANN operators in empty Tensor output scenarios, which skips their execution.
- [STABLE] Supplement debugging information when a network model fails to execute in graph mode. The debugging information is saved to a CSV file in rank_${id}/exec_order/, recording the task ID and stream ID of each task.

#### Profiler

- [STABLE] The Profiler supports collecting time-consumption data of all phases on the host side.
- [BETA] The Profiler supports collecting memory data of all phases on the host side.
- [BETA] The Profiler supports collecting time-consumption data of data processing operators.

### API Change

- `mindspore.dataset.GraphData`, `mindspore.dataset.Graph`, `mindspore.dataset.InMemoryGraphDataset`, and `mindspore.dataset.ArgoverseDataset` are no longer evolved and are deprecated. Use [MindSpore Graph Learning](https://gitee.com/mindspore/graphlearning) for related functional replacements. When migrating networks in model repositories that use these APIs, please refer to [GCN](https://gitee.com/mindspore/graphlearning/tree/master/model_zoo/gcn) for GCN and [GAT](https://gitee.com/mindspore/graphlearning/tree/master/model_zoo/gat) for GAT.
- `mindspore.set_context` adds the `jit_syntax_level` option, which is used to set the JIT syntax support level (see the sketch after this list). For more details, please refer to [set_context](https://www.mindspore.cn/docs/en/r2.1/api_python/mindspore/mindspore.set_context.html).
- The `model.infer_predict_layout` interface has a new parameter `skip_backend_compile` with a default value of `False`. Set it to `True` to skip the backend compilation process when obtaining the parameter slicing strategy.
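As a rough illustration of the new option, the sketch below sets a stricter JIT syntax level; the `STRICT`/`LAX` constant names are taken from the r2.1 `set_context` documentation and should be verified against your installed version.

```python
import mindspore as ms

# Minimal sketch: restrict graph mode to strictly supported syntax.
# ms.LAX enables the extended JIT Fallback syntax described in the
# FrontEnd section above (constant names assumed from the r2.1 docs).
ms.set_context(mode=ms.GRAPH_MODE, jit_syntax_level=ms.STRICT)
```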
#### Operators

- Add the operator primitive `mindspore.ops.ApplyAdamWithAmsgradV2`. It is recommended to call this operator through the API `mindspore.nn.Adam`.
- Add the operator primitive `mindspore.ops.UpsampleTrilinear3D`. It is recommended to call this operator through the API `mindspore.ops.interpolate`.
- Add the operator primitive `mindspore.ops.UpsampleNearest3D`. It is recommended to call this operator through the API `mindspore.ops.interpolate`.

#### API Deprecation

- Deprecate the operator primitive `mindspore.ops.ScatterNonAliasingAdd`. It is recommended to use the operator primitive `mindspore.ops.TensorScatterAdd` as a replacement, as sketched below.
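A minimal migration sketch for the deprecation above; the shapes and values are illustrative only.

```python
import numpy as np
from mindspore import ops, Tensor

x = Tensor(np.zeros((3, 3), dtype=np.float32))
indices = Tensor(np.array([[0, 0], [1, 1]], dtype=np.int32))
updates = Tensor(np.array([1.0, 2.0], dtype=np.float32))

# TensorScatterAdd returns a new tensor with `updates` added at `indices`,
# rather than updating a Parameter in place as ScatterNonAliasingAdd did.
out = ops.TensorScatterAdd()(x, indices, updates)
print(out)
```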
#### Backwards Incompatible Change

- Interface names: `mindspore.nn.Dense`, `mindspore.nn.Conv1d`, `mindspore.nn.Conv1dTranspose`, `mindspore.nn.Conv2d`, `mindspore.nn.Conv2dTranspose`, `mindspore.nn.Conv3d`, `mindspore.nn.Conv3dTranspose`

  Changes: The initialization parameter strategy is changed. The default value of `weight_init` is changed from `"normal"` to `None`, and the default value of `bias_init` is changed from `"zeros"` to `None`.

  Description: The default weight initialization method is changed from `"normal"` to the internal HeUniform initialization, and the default bias initialization method is changed from `"zeros"` to the internal Uniform initialization.

  | Original interface | v2.1 interface |
  | --- | --- |
  | `mindspore.nn.Dense(in_channels, out_channels, weight_init='normal', bias_init='zeros', has_bias=True, activation=None)` | `mindspore.nn.Dense(in_channels, out_channels, weight_init=None, bias_init=None, has_bias=True, activation=None)` |
  | `mindspore.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init='normal', bias_init='zeros')` | `mindspore.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init=None, bias_init=None)` |
  | `mindspore.nn.Conv1dTranspose(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init='normal', bias_init='zeros')` | `mindspore.nn.Conv1dTranspose(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init=None, bias_init=None)` |
  | `mindspore.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init='normal', bias_init='zeros', data_format='NCHW')` | `mindspore.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init=None, bias_init=None, data_format='NCHW')` |
  | `mindspore.nn.Conv2dTranspose(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, output_padding=0, dilation=1, group=1, has_bias=False, weight_init='normal', bias_init='zeros')` | `mindspore.nn.Conv2dTranspose(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, output_padding=0, dilation=1, group=1, has_bias=False, weight_init=None, bias_init=None)` |
  | `mindspore.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init='normal', bias_init='zeros', data_format='NCDHW')` | `mindspore.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init=None, bias_init=None, data_format='NCDHW')` |
  | `mindspore.nn.Conv3dTranspose(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, output_padding=0, has_bias=False, weight_init='normal', bias_init='zeros', data_format='NCDHW')` | `mindspore.nn.Conv3dTranspose(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, output_padding=0, has_bias=False, weight_init=None, bias_init=None, data_format='NCDHW')` |
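For users who want to keep the pre-2.1 initialization behavior, a minimal sketch is to pass the old defaults explicitly; the layer sizes below are placeholders.

```python
import mindspore.nn as nn

# Restore the pre-2.1 defaults explicitly instead of relying on the new
# HeUniform/Uniform defaults (channel and kernel sizes are illustrative).
dense = nn.Dense(16, 8, weight_init="normal", bias_init="zeros")
conv = nn.Conv2d(3, 16, 3, weight_init="normal", bias_init="zeros")
```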