MindSpore

Start

  • Overall Structure
  • Models

Quick Start

  • Installation
  • Launching from Source Code

Usage Tutorials

  • Development Migration
  • Multimodal Model Development
  • Pretraining
  • Supervised Fine-Tuning (SFT)
  • Evaluation
  • Inference
  • Quantization
  • Service Deployment
  • Dynamic Graph Parallelism

Function Description

  • Weight Format Conversion
  • Distributed Weight Slicing and Merging
  • Distributed Parallelism
  • Dataset
  • Weight Saving and Resumable Training
  • Training Metrics Monitoring
  • High Availability
  • Safetensors Weights
  • Fine-Grained Activations SWAP

Precision Optimization

  • Large Model Accuracy Optimization Guide

Performance Optimization

  • Large Model Performance Optimization Guide

API

  • mindformers
  • mindformers.core
  • mindformers.dataset
  • mindformers.generation
  • mindformers.models
  • mindformers.modules
  • mindformers.pet
  • mindformers.pipeline
  • mindformers.tools
  • mindformers.wrapper

Appendix

  • Environment Variable Descriptions
  • Configuration File Descriptions

FAQ

  • Model-Related
  • Function-Related
  • MindSpore Transformers Contribution Guidelines
  • Modelers Contribution Guidelines

Release Notes

  • Release Notes

© Copyright MindSpore.
