MindSpore

Static Graph Usage Specifications

  • Process Control Statements
  • Calling the Custom Class
  • Construct Constants in the Network
  • Dependency Control

Distributed Parallel

  • Distributed Parallelism Overview
  • Distributed Basic Cases
  • Operator-level Parallelism
  • Pipeline Parallel
  • Optimizer Parallel
  • Recomputation
  • Host & Device Heterogeneous
  • Parameter Server Mode
  • Distributed Parallel Startup Methods
  • Distributed Inference
  • Distributed High-Level Configuration Case

Custom Operator

  • Custom Operators (Custom-based)
  • MindSpore Hybrid Syntax Specification
  • Custom Operator Registration

Performance Optimization

  • Profiling
  • Sinking Mode
  • Compiling Performance Optimization for Static Graph Network
  • Enabling Graph Kernel Fusion
  • Incremental Operator Build
  • Memory Reuse

Algorithm Optimization

  • Gradient Accumulation Algorithm
  • Adaptive Gradient Summation Algorithm
  • Dimension Reduction Training Algorithm
  • Second-order Optimization

High-level Functional Programming

  • Automatic Vectorization (Vmap)

Data Processing

  • Auto Augmentation
  • Single-Node Data Cache
  • Optimizing the Data Processing

Model Inference

  • Inference Model Overview
  • Model Compression

Complex Problem Debugging

  • Using Dump in the Graph Mode
  • Ascend Optimization Engine (AOE)
  • Running Data Recorder
  • Fault Recovery


© Copyright MindSpore.
