MindSpore

Data Processing

  • Auto Augmentation
  • Lightweight Data Processing
  • Single-Node Data Cache
  • Optimizing the Data Processing

Graph Compilation

  • Process Control Statements
  • Compilation Performance Optimization for Static Graph Networks
  • Customizing the Reverse Propagation Function of a Cell
  • Calling the Custom Class
  • Constructing Constants in the Network
  • Dependency Control

Model Training Optimization

  • Enabling Mixed Precision
  • Gradient Accumulation Algorithm
  • Adaptive Gradient Summation Algorithm
  • Dimension Reduction Training Algorithm

Custom Operator

  • Custom Operators (Custom-based)

Model Inference

  • Inference Model Overview
  • Inference on a GPU
  • Inference on the Ascend 910 AI Processor
  • Inference Using the MindIR Model on Ascend 310 AI Processors
  • Inference on the Ascend 310 AI Processor

Debugging and Tuning

  • Function Debug
    • Custom Debugging Information
    • Reading IR
    • Using Dump in Graph Mode
    • Applying PyNative Mode
    • Fixing Randomness to Reproduce Script Run Results
  • Performance Tuning
  • Precision Optimization

Distributed Parallel

  • Distributed Parallel Training Mode
  • Distributed Case
  • Distributed Inference
  • Saving and Loading Models in Hybrid Parallel Mode
  • Multi-Dimensional
  • Other Features

Environment Variables

  • Environment Variables

Function Debug

  • Custom Debugging Information
  • Reading IR
  • Using Dump in Graph Mode
  • Applying PyNative Mode
  • Fixing Randomness to Reproduce Script Run Results

© Copyright MindSpore.
