MindArmour Documents

As a general-purpose technology, AI brings great opportunities and benefits, but it also faces new security and privacy protection challenges. MindArmour is a subsystem of MindSpore that provides security and privacy protection for MindSpore, covering adversarial robustness, model security testing, differential privacy training, privacy risk assessment, and data drift detection.

Typical MindArmour Application Scenarios

  1. Adversarial Example

    Covers capabilities such as black-box and white-box adversarial attacks, adversarial training, and adversarial example detection, helping security personnel quickly and efficiently generate adversarial examples and evaluate the robustness of AI models.
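A minimal sketch of one representative white-box attack in this category, the Fast Gradient Sign Method (FGSM), on a toy linear model. The model weights, input, and epsilon below are illustrative assumptions, not MindArmour API calls.

```python
# Hypothetical FGSM sketch: perturb each input feature by epsilon in the
# direction of the loss gradient's sign, which increases the model's loss.

def sign(v):
    return (v > 0) - (v < 0)

def loss(w, x, y):
    # Squared error of a toy linear model: (w . x - y)^2
    pred = sum(wi * xi for wi, xi in zip(w, x))
    return (pred - y) ** 2

def fgsm(w, x, y, eps):
    # Gradient of the loss w.r.t. the input x is 2 * (w . x - y) * w.
    pred = sum(wi * xi for wi, xi in zip(w, x))
    grad = [2 * (pred - y) * wi for wi in w]
    # Step each feature in the direction that increases the loss.
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w = [0.5, -1.0, 2.0]       # illustrative model weights
x = [1.0, 2.0, 0.5]        # clean input
y = 0.0                    # target label
x_adv = fgsm(w, x, y, eps=0.1)
assert loss(w, x_adv, y) > loss(w, x, y)  # adversarial input raises the loss
```

Real attacks apply the same idea to deep networks, obtaining the input gradient by backpropagation instead of the closed form used here.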

  2. Privacy Risk Assessment

    Use algorithms such as membership inference attacks and model inversion attacks to evaluate the risk of model privacy leakage.
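A hedged sketch of the simplest form of membership inference: training-set members tend to have lower loss than non-members, so an attacker who can observe per-example losses can guess membership with a threshold. The losses below are synthetic illustrative values, not outputs of a real model.

```python
# Loss-threshold membership inference sketch (illustrative data only).

def infer_membership(losses, threshold):
    # Predict "member" (True) when the example's loss is below the threshold.
    return [l < threshold for l in losses]

# Synthetic per-example losses: members are fit well, non-members are not.
member_losses = [0.05, 0.10, 0.08, 0.12]
nonmember_losses = [0.90, 0.75, 1.20, 0.60]

preds = infer_membership(member_losses + nonmember_losses, threshold=0.5)
truth = [True] * 4 + [False] * 4
accuracy = sum(p == t for p, t in zip(preds, truth)) / len(truth)
# High attack accuracy indicates high privacy-leakage risk for the model.
```

An attack accuracy near 0.5 would mean the attacker does no better than guessing, i.e. low leakage risk.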

  3. Privacy Protection

    Use differential privacy training and suppression-based privacy protection mechanisms to reduce the risk of model privacy leakage and protect user data.
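The core step of differential privacy training (in the DP-SGD style) can be sketched as follows: clip each per-example gradient to a norm bound, then add Gaussian noise before averaging, so no single example dominates the update. The clip bound and noise scale below are illustrative assumptions.

```python
import random

# DP-SGD-style gradient step sketch (illustrative parameters).

def clip_gradient(grad, clip_norm):
    # Scale the gradient down so its L2 norm is at most clip_norm.
    norm = sum(g * g for g in grad) ** 0.5
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [g * scale for g in grad]

def dp_average(per_example_grads, clip_norm, noise_sigma, rng):
    clipped = [clip_gradient(g, clip_norm) for g in per_example_grads]
    n, dim = len(clipped), len(clipped[0])
    # Sum the clipped gradients, add Gaussian noise, then average.
    return [
        (sum(g[i] for g in clipped) + rng.gauss(0.0, noise_sigma * clip_norm)) / n
        for i in range(dim)
    ]

rng = random.Random(0)
grads = [[3.0, 4.0], [0.3, -0.4], [-6.0, 8.0]]
noisy_update = dp_average(grads, clip_norm=1.0, noise_sigma=0.5, rng=rng)
# Clipping bounds each example's influence; noise masks what remains.
```

The clip bound caps each example's contribution, and the noise scale (together with the number of training steps) determines the resulting privacy budget.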

  4. Fuzz

    Perform coverage-guided fuzz testing, with flexible and customizable test policies and metrics. The neuron coverage rate guides input mutation so that mutated inputs activate more neurons and the neuron values span a wider distribution range. In this way, more types of model outputs and incorrect behaviors can be explored.
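The loop above can be sketched in a few lines: mutate inputs from a seed corpus and keep any mutant that activates a previously unseen neuron. The tiny two-input network, activation threshold, and mutation scheme are all illustrative assumptions.

```python
import random

# Coverage-guided fuzzing sketch on a toy single-layer "network".
WEIGHTS = [[1.0, -1.0], [-1.0, 1.0], [0.5, 0.5]]  # 3 neurons, 2 inputs

def activated_neurons(x, threshold=0.0):
    # A neuron counts as activated when its pre-activation exceeds threshold.
    return {i for i, w in enumerate(WEIGHTS)
            if sum(wi * xi for wi, xi in zip(w, x)) > threshold}

def fuzz(seed_inputs, rounds, rng):
    corpus = list(seed_inputs)
    covered = set()
    for x in corpus:
        covered |= activated_neurons(x)
    for _ in range(rounds):
        x = rng.choice(corpus)
        mutant = [xi + rng.uniform(-1.0, 1.0) for xi in x]
        new = activated_neurons(mutant) - covered
        if new:                      # keep only mutants that widen coverage
            corpus.append(mutant)
            covered |= new
    return covered, corpus

rng = random.Random(42)
covered, corpus = fuzz([[0.2, 0.1]], rounds=200, rng=rng)
coverage_rate = len(covered) / len(WEIGHTS)
```

The seed input activates only two of the three neurons; mutation quickly finds inputs that activate the third, driving the coverage rate toward 1.0.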

  5. Model Encryption

    Use a symmetric encryption algorithm to encrypt the parameter files or inference models, thereby protecting the model files. The ciphertext model can then be loaded directly to complete inference or incremental training.
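The encrypt-then-load workflow can be sketched as below. A keyed SHA-256 counter keystream stands in for the real symmetric cipher here so the example needs only the standard library; a production system should use an authenticated AES mode from a vetted cryptography library instead.

```python
import hashlib

# Toy XOR stream cipher as a stand-in for a real symmetric algorithm.

def keystream(key: bytes, n: int) -> bytes:
    # Derive n pseudorandom bytes from the key in counter mode.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR with the keystream; applying it twice restores the plaintext.
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream ciphers encrypt and decrypt identically

key = b"model-protection-key"
model_bytes = b"serialized model parameters"  # stand-in for a checkpoint file
ciphertext = encrypt(key, model_bytes)

# The file on disk is unreadable without the key, but loading with the key
# recovers the original model bytes for inference or incremental training.
assert ciphertext != model_bytes
assert decrypt(key, ciphertext) == model_bytes
```

In the real workflow the same key used at export time is supplied at load time, so decryption happens transparently as part of loading the ciphertext model.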