MindArmour Documentation
As a general-purpose technology, AI brings great opportunities and benefits, but it also faces new security and privacy challenges. MindArmour is a subsystem of MindSpore that provides security and privacy protection for MindSpore models, covering adversarial robustness, model security testing, differential privacy training, privacy risk assessment, and data drift detection.

Typical MindArmour Application Scenarios
- Covers capabilities such as black-box and white-box adversarial attacks, adversarial training, and adversarial example detection, helping security personnel quickly and efficiently generate adversarial examples and evaluate the robustness of AI models.
- Uses algorithms such as membership inference attacks and model inversion attacks to evaluate the risk of model privacy leakage.
- Uses differential privacy training and privacy suppression mechanisms to reduce the risk of model privacy leakage and protect user data.
- Performs coverage-based fuzz testing and provides flexible, customizable test policies and metrics. The neuron coverage rate guides input mutation so that inputs activate more neurons and the neuron values span a wider distribution, exploring different types of model outputs and incorrect behaviors.
- Uses a symmetric encryption algorithm to encrypt parameter files or inference models, protecting the model files. The ciphertext model can then be loaded directly for inference or incremental training.
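The white-box adversarial attacks mentioned above can be illustrated with the Fast Gradient Sign Method (FGSM). The sketch below is plain NumPy with a hand-derived gradient, not the MindArmour API; the names `toy_loss_grad` and `fgsm_perturb` are illustrative.

```python
import numpy as np

def toy_loss_grad(w, x, y):
    """Gradient of the squared error (w.x - y)^2 with respect to the input x."""
    return 2.0 * (np.dot(w, x) - y) * w

def fgsm_perturb(x, grad, eps=0.07):
    """FGSM: move every input feature one eps-step in the sign direction
    of the loss gradient, i.e. the direction that increases the loss."""
    return x + eps * np.sign(grad)

w = np.array([0.5, -1.0, 2.0])   # toy linear model weights
x = np.array([1.0, 2.0, 3.0])    # clean input
y = 0.0                          # target
adv = fgsm_perturb(x, toy_loss_grad(w, x, y), eps=0.07)
print(np.max(np.abs(adv - x)))   # every feature moves by exactly eps = 0.07
```

MindArmour's `mindarmour.adv_robustness.attacks` module provides FGSM and many other attacks operating on real MindSpore networks.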
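One simple form of membership inference, among the privacy-risk evaluations described above, thresholds the model's loss: training samples tend to have lower loss than unseen samples. This is a conceptual NumPy sketch, not the `mindarmour.privacy.evaluation` API; the function name is illustrative.

```python
import numpy as np

def membership_attack_accuracy(member_losses, nonmember_losses, threshold):
    """Predict 'member' when loss < threshold; return overall attack accuracy.
    Accuracy near 0.5 indicates little leakage; near 1.0 indicates high risk."""
    correct = np.sum(member_losses < threshold) + np.sum(nonmember_losses >= threshold)
    return correct / (len(member_losses) + len(nonmember_losses))

# An overfit model: training-set losses are clearly lower than test-set losses.
members = np.array([0.05, 0.10, 0.20, 0.08])
nonmembers = np.array([0.90, 1.20, 0.70, 1.05])
acc = membership_attack_accuracy(members, nonmembers, threshold=0.5)
print(acc)  # 1.0: perfectly separable losses signal high privacy risk
```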
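Differential privacy training is typically built on DP-SGD: clip each gradient's norm, then add calibrated Gaussian noise so no single example dominates an update. The sketch below is pure NumPy, not the `mindarmour.privacy.diff_privacy` API; the function names and default hyperparameters are illustrative.

```python
import numpy as np

def clip_gradient(grad, clip_norm):
    """Scale the gradient down so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(grad)
    return grad * min(1.0, clip_norm / max(norm, 1e-12))

def dp_sgd_step(param, grad, clip_norm=1.0, noise_multiplier=1.1, lr=0.1, rng=None):
    """One noisy update: clip, add Gaussian noise scaled to the clip bound,
    then descend. The noise bounds what any single example can reveal."""
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = clip_gradient(grad, clip_norm)
    noisy = clipped + rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return param - lr * noisy

param = np.zeros(2)
new_param = dp_sgd_step(param, np.array([3.0, 4.0]))
print(np.linalg.norm(clip_gradient(np.array([3.0, 4.0]), 1.0)))  # 1.0 after clipping
```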
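The neuron coverage metric that guides the fuzz testing above can be sketched as follows: a neuron counts as covered if at least one test input activates it beyond a threshold, and mutations that raise coverage are kept. Pure NumPy, not the `mindarmour.fuzz_testing` API.

```python
import numpy as np

def neuron_coverage(activations, threshold=0.0):
    """activations: (num_inputs, num_neurons) array of recorded values.
    Returns the fraction of neurons activated above threshold by any input."""
    covered = (activations > threshold).any(axis=0)
    return covered.mean()

# Two inputs, four neurons: the mutated second input activates neuron 2,
# raising coverage and signaling a behavior worth keeping in the corpus.
batch = np.array([[0.9, -0.2, 0.0, -0.5],
                  [0.1, -0.1, 0.7, -0.3]])
print(neuron_coverage(batch))  # 0.5: neurons 0 and 2 are covered
```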
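The model-file protection idea above is: encrypt the serialized model bytes with a symmetric key, then decrypt at load time and feed the plaintext straight into inference. The SHA-256-based XOR keystream below is a teaching sketch of a symmetric stream cipher only; MindSpore uses real symmetric ciphers for checkpoint encryption, and the function names here are illustrative.

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random byte stream from the key (counter mode over SHA-256)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR with a key-derived stream; applying it twice restores the data."""
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

model_bytes = b"fake serialized checkpoint"
key = b"secret-key"
cipher = xor_crypt(model_bytes, key)   # what would be written to disk
plain = xor_crypt(cipher, key)         # what the loader would decrypt
print(plain == model_bytes, cipher != model_bytes)  # True True
```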
Installation
AI Security
AI Privacy
AI Reliability
API References
- mindarmour
- mindarmour.adv_robustness.attacks
- mindarmour.adv_robustness.defenses
- mindarmour.adv_robustness.detectors
- mindarmour.adv_robustness.evaluations
- mindarmour.fuzz_testing
- mindarmour.privacy.diff_privacy
- mindarmour.privacy.evaluation
- mindarmour.privacy.sup_privacy
- mindarmour.reliability
- mindarmour.utils