# Overall Structure

[![View Source On Gitee](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.7.0rc1/resource/_static/logo_source_en.svg)](https://gitee.com/mindspore/docs/blob/r2.7.0rc1/docs/mindformers/docs/source_en/introduction/overview.md)

The overall architecture formed by MindSpore Transformers together with the end-to-end AI hardware and software ecosystem of MindSpore and Ascend is as follows:

1. At the hardware level, MindSpore Transformers supports users in running large models on Ascend servers;
2. At the software level, MindSpore Transformers implements the large-model-related code through the Python interface provided by MindSpore and performs data computation through the operator libraries provided by the supporting software package of the Ascend AI processor;
3. The basic functional features currently supported by MindSpore Transformers are listed below (a minimal usage sketch follows this list):
    1. Supports running training and inference tasks for large models with [distributed parallelism](https://www.mindspore.cn/mindformers/docs/en/r1.6.0/feature/parallel_training.html), including data parallelism, model parallelism, and ultra-long sequence parallelism;
    2. Supports [model weight conversion](https://www.mindspore.cn/mindformers/docs/en/r1.6.0/feature/ckpt.html), [distributed weight splitting and merging](https://www.mindspore.cn/mindformers/docs/en/r1.6.0/feature/ckpt.html), [dataset loading](https://www.mindspore.cn/mindformers/docs/en/r1.6.0/feature/dataset.html) in different formats, and [resumable training after breakpoints](https://www.mindspore.cn/mindformers/docs/en/r1.6.0/feature/resume_training.html);
    3. Supports [pretraining](https://www.mindspore.cn/mindformers/docs/en/r1.6.0/guide/pre_training.html), [fine-tuning](https://www.mindspore.cn/mindformers/docs/en/r1.6.0/guide/supervised_fine_tuning.html), [inference](https://www.mindspore.cn/mindformers/docs/en/r1.6.0/guide/inference.html), and [evaluation](https://www.mindspore.cn/mindformers/docs/en/r1.6.0/feature/evaluation.html) of 25+ large models, as well as [quantization](https://www.mindspore.cn/mindformers/docs/en/r1.6.0/feature/quantization.html); the list of supported models can be found in the [Model Library](https://www.mindspore.cn/mindformers/docs/en/r1.6.0/introduction/models.html);
    4. Supports model service deployment through [MindIE](https://www.mindspore.cn/mindformers/docs/en/r1.6.0/guide/deployment.html) and large-scale cluster scheduling through [MindX](https://www.hiascend.com/software/mindx-dl); more third-party platforms will be supported in the future.
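As a rough illustration of the Python interface mentioned above, the sketch below shows how an inference task might be launched with the `mindformers` package. It is a minimal, non-authoritative example: the model name `"gpt2"`, the prompt, and the generation parameters are placeholder assumptions, and the exact API surface may differ between versions, so consult the linked inference guide and Model Library for supported models and authoritative usage.

```python
# Minimal sketch (assumption-based, not authoritative) of running a
# text-generation task through the mindformers Python interface.
from mindformers import pipeline

# Build a text-generation pipeline; "gpt2" is a placeholder model name --
# see the Model Library link above for the models actually supported.
text_generator = pipeline(task="text_generation", model="gpt2")

# Generate a continuation for a sample prompt (prompt and max_length are
# illustrative values only).
result = text_generator("An increasing sequence: one,", max_length=32)
print(result)
```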