Supported Features



Q: Does MindSpore Serving support hot loading to avoid inference service interruption?

A: MindSpore Serving does not support hot loading. To avoid inference service interruption, it is recommended that you run multiple Serving instances and restart them in batches when switching model versions.


Q: Does MindSpore support truncated gradient?

A: Yes. For details, see Definition and Usage of Truncated Gradient.
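As an illustration, here is a minimal sketch of truncating (clipping) gradients before the optimizer update, assuming a recent MindSpore version; the network, the forward_fn helper, and the [-1.0, 1.0] range are illustrative, not the tutorial's exact code:

```python
import numpy as np
import mindspore as ms
import mindspore.nn as nn
import mindspore.ops as ops

net = nn.Dense(10, 1)
loss_fn = nn.MSELoss()
optimizer = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

def forward_fn(data, label):
    return loss_fn(net(data), label)

# Differentiate the forward function with respect to the network parameters.
grad_fn = ms.value_and_grad(forward_fn, None, optimizer.parameters)

def train_step(data, label):
    loss, grads = grad_fn(data, label)
    # Truncate each gradient to [-1.0, 1.0] before applying the update.
    grads = tuple(ops.clip_by_value(g, -1.0, 1.0) for g in grads)
    optimizer(grads)
    return loss

data = ms.Tensor(np.random.rand(4, 10).astype(np.float32))
label = ms.Tensor(np.random.rand(4, 1).astype(np.float32))
print(train_step(data, label))
```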


Q: How do I change hyperparameters for calculating loss values during neural network training?

A: This function is not available yet. You can search for the optimal hyperparameters by training with one setting, redefining the optimizer with adjusted hyperparameters, and then training again.
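For example, a minimal sketch of that loop (the network and learning-rate values are illustrative):

```python
import mindspore.nn as nn

net = nn.Dense(10, 1)

# First run: train with an initial learning rate and record the loss.
optimizer = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)
# ... train and evaluate ...

# Second run: redefine the optimizer with adjusted hyperparameters and train again,
# comparing the results to pick the better setting.
optimizer = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)
# ... train and evaluate ...
```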


Q: Can you introduce the dedicated data processing framework?

A: MindData provides heterogeneous hardware acceleration for data processing. Its high-concurrency data processing pipeline supports the NPU, GPU, and CPU, reducing CPU usage by 30%. For details, see Optimizing Data Processing.
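As a minimal illustration of such a pipeline (assuming a recent MindSpore version; the generator source and the Resize operation are placeholders for a real dataset and real transforms), map stages run concurrently across workers:

```python
import numpy as np
import mindspore.dataset as ds
import mindspore.dataset.vision as vision

# Placeholder source; in practice use a built-in dataset such as ds.Cifar10Dataset.
def gen():
    for _ in range(100):
        yield (np.random.randint(0, 255, (32, 32, 3)).astype(np.uint8),)

dataset = ds.GeneratorDataset(gen, column_names=["image"])
# num_parallel_workers runs the map stage concurrently, keeping the pipeline fed.
dataset = dataset.map(operations=vision.Resize((224, 224)),
                      input_columns=["image"], num_parallel_workers=4)
dataset = dataset.batch(32)
```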


Q: What is the MindSpore IR design concept?

A: The design has three aims:

- Function expression: all expressions are functions, so differentiation and automatic parallel analysis are easy to implement without side effects.
- JIT compilation capability: the graph-based IR combines control flow dependencies and data flow, balancing universality and usability.
- Turing-complete IR: more flexible syntax, such as recursion, is provided for converting Python.


Q: Will MindSpore provide a reinforcement learning framework?

A: This feature is at the design stage. You are welcome to contribute ideas and scenarios and participate in building it. Thank you.


Q: As Google Colab and Baidu AI Studio provide free GPU computing power, does MindSpore provide any free computing power?

A: If you cooperate with MindSpore on papers or scientific research, you can obtain free cloud computing power. If you simply want to try it out, we also provide an online experience similar to that of Colab.


Q: How do I visualize the MindSpore Lite offline model (.ms file) to view the network structure?

A: Support for the MindSpore Lite model format is being contributed to the open-source Netron repository, and .ms model visualization will then be available in Netron directly. While some issues in the upstream Netron repository are still being resolved, we provide an interim Netron build for this purpose, which can be downloaded from the netron releases.


Q: Does MindSpore have a quantized inference tool?

A: MindSpore Lite supports inference of quantization aware training models exported from the cloud. In addition, the MindSpore Lite converter tool provides post-training quantization and weight quantization, both of which are being continuously improved.


Q: What are the advantages and features of MindSpore parallel model training?

A: In addition to data parallelism, MindSpore distributed training also supports operator-level model parallelism: the input tensors of an operator can be tiled and parallelized. On this basis, automatic parallelism is supported: you only need to write a single-device script, and the framework automatically tiles it across multiple devices for parallel execution.
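A minimal sketch of enabling it (assuming a recent MindSpore version and an 8-device job launched with MindSpore's distributed launcher; the device count is illustrative):

```python
import mindspore as ms
from mindspore.communication import init

# Initialize the communication backend for the distributed job.
init()
# Switch the single-device script to operator-level automatic parallelism;
# the framework searches for a tiling strategy and shards operators itself.
ms.set_auto_parallel_context(parallel_mode="auto_parallel", device_num=8)

# ... define the network and train exactly as in the single-device script ...
```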


Q: Has MindSpore implemented an unpooling operation similar to nn.MaxUnpool2d?

A: MindSpore does not currently provide unpooling APIs, but you can implement the operation with a custom operator. For details, refer to Custom Operators.


Q: Does MindSpore have a lightweight on-device inference engine?

A: The MindSpore lightweight inference framework MindSpore Lite was officially launched in r0.7. You are welcome to try it and share your feedback. For an overview, tutorials, and documentation, see MindSpore Lite.


Q: How does MindSpore implement semantic collaboration and processing? Is the popular Formal Concept Analysis (FCA) used?

A: The MindSpore framework does not support FCA. For semantic models, you can call third-party tools to perform FCA in the data preprocessing phase. Since MindSpore supports Python, importing a third-party FCA package could do the trick.
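For instance, a preprocessing sketch along these lines could feed FCA results into a MindSpore pipeline; fca_lib and compute_concepts are hypothetical placeholders for whatever FCA package you choose:

```python
import numpy as np
import mindspore.dataset as ds

import fca_lib  # hypothetical third-party FCA package; substitute your own

# Binary object-attribute table for FCA, built during preprocessing.
table = np.array([[1, 0, 1],
                  [1, 1, 0]], dtype=np.bool_)
concepts = fca_lib.compute_concepts(table)  # hypothetical call

# Feed the derived (extent, intent) pairs into the MindSpore data pipeline.
def gen():
    for extent, intent in concepts:
        yield (np.asarray(extent, np.float32), np.asarray(intent, np.float32))

dataset = ds.GeneratorDataset(gen, column_names=["extent", "intent"])
```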


Q: Does MindSpore have any plan or consideration on the edge and device when the training and inference functions on the cloud are relatively mature?

A: MindSpore is a unified cloud-edge-device training and inference framework. Edge has been considered in its design, so MindSpore can perform inference at the edge. The open-source version will support Ascend 310-based inference. The optimizations supported in the current inference stage include quantization, operator fusion, and memory overcommitment.


Q: How does MindSpore support automatic parallelism?

A: Automatic parallelism on CPUs and GPUs is still being improved. You are advised to use the automatic parallelism feature on the Ascend 910 AI processor. Follow our open-source community and apply for a MindSpore developer experience environment for trial use.


Q: Does MindSpore have a module that can implement object detection algorithms as TensorFlow does?

A: TensorFlow's object detection pipeline API belongs to its Model module. After MindSpore's detection models are complete, similar pipeline APIs will be provided.


Q: How do I migrate scripts or models of other frameworks to MindSpore?

A: For details about script or model migration, please visit the MindSpore official website.


Q: Does MindSpore provide open-source e-commerce datasets?

A: No. Please stay tuned for updates on the MindSpore official website.


Q: Can I wrap the Tensor data of MindSpore in a numpy array?

A: No, all sorts of problems could arise. For example, numpy.array(Tensor(1)).astype(numpy.float32) raises "ValueError: setting an array element with a sequence.".
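If you need a numpy copy of a Tensor, use Tensor.asnumpy() instead; a minimal sketch:

```python
import numpy as np
import mindspore as ms

t = ms.Tensor(np.array([1.0, 2.0], dtype=np.float32))
# asnumpy() is the supported conversion; do not wrap the Tensor with numpy.array().
arr = t.asnumpy()
print(arr, arr.dtype)  # [1. 2.] float32
```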