mindarmour

MindArmour is a toolbox for MindSpore that enhances model security and trustworthiness against adversarial examples.

class mindarmour.Attack[source]

The abstract base class for all attack classes that create adversarial examples.

batch_generate(inputs, labels, batch_size=64)[source]

Generate adversarial examples in batches, based on input samples and their labels.

Parameters
  • inputs (numpy.ndarray) – Samples based on which adversarial examples are generated.

  • labels (numpy.ndarray) – Labels of samples, whose values are determined by the specific attack.

  • batch_size (int) – The number of samples in one batch.

Returns

numpy.ndarray, the generated adversarial examples.

Examples

>>> inputs = np.array([[0.2, 0.4, 0.5, 0.2], [0.7, 0.2, 0.4, 0.3]])
>>> labels = np.array([3, 0])
>>> advs = attack.batch_generate(inputs, labels, batch_size=2)
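Here attack is assumed to be an instance of a concrete Attack subclass, since Attack itself is abstract; see the sketch after generate below.
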
abstract generate(inputs, labels)[source]

Generate adversarial examples based on normal samples and their labels.

Parameters
  • inputs (numpy.ndarray) – Samples based on which adversarial examples are generated.

  • labels (numpy.ndarray) – Labels of samples, whose values are determined by the specific attack.

Raises

NotImplementedError – It is an abstract method.
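As a minimal sketch, a concrete subclass only needs to override generate; batch_generate then handles the batching. This assumes Attack takes no constructor arguments; RandomNoiseAttack and eps are illustrative names, not part of MindArmour, and a real attack would craft its perturbation from gradients or model queries rather than random noise.

import numpy as np
from mindarmour import Attack

class RandomNoiseAttack(Attack):
    """Toy attack: perturb each sample with bounded uniform noise."""

    def __init__(self, eps=0.1):
        super(RandomNoiseAttack, self).__init__()
        self._eps = eps

    def generate(self, inputs, labels):
        # A real attack would use gradients or queries to craft the
        # perturbation; random noise keeps the sketch minimal.
        noise = np.random.uniform(-self._eps, self._eps, inputs.shape)
        return (inputs + noise).astype(inputs.dtype)

attack = RandomNoiseAttack(eps=0.1)
inputs = np.array([[0.2, 0.4, 0.5, 0.2], [0.7, 0.2, 0.4, 0.3]])
labels = np.array([3, 0])
advs = attack.batch_generate(inputs, labels, batch_size=2)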

class mindarmour.BlackModel[source]

The abstract base class that treats the target model as a black box. The model should be defined by the user.

is_adversarial(data, label, is_targeted)[source]

Check whether the input sample is an adversarial example.

Parameters
  • data (numpy.ndarray) – The input sample to be checked, typically a maliciously perturbed example.

  • label (numpy.ndarray) – For targeted attacks, label is the intended label of the perturbed example. For untargeted attacks, label is the original label of the corresponding unperturbed sample.

  • is_targeted (bool) – True for targeted attacks, False for untargeted attacks.

Returns

bool.
  • If True, the input sample is adversarial.

  • If False, the input sample is not adversarial.

abstract predict(inputs)[source]

Predict using the user-specified model. The shape of the prediction result should be (m, n), where m is the number of input samples and n is the number of classes the model classifies.

Parameters

inputs (numpy.ndarray) – The input samples to be predicted.

Raises

NotImplementedError – It is an abstract method.
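A minimal sketch of a user-defined black-box wrapper, assuming BlackModel takes no constructor arguments; NumpyBlackModel and its linear weights are hypothetical stand-ins for a real model. Only predict must be supplied; the concrete is_adversarial check above builds on it.

import numpy as np
from mindarmour import BlackModel

class NumpyBlackModel(BlackModel):
    """Expose a plain numpy linear classifier as a query-only oracle."""

    def __init__(self, weight, bias):
        super(NumpyBlackModel, self).__init__()
        self._weight = weight  # shape (num_features, num_classes)
        self._bias = bias      # shape (num_classes,)

    def predict(self, inputs):
        # Scores of shape (m, n): m input samples, n classes.
        scores = inputs @ self._weight + self._bias
        # Softmax so each row is a class-probability distribution.
        exp = np.exp(scores - scores.max(axis=1, keepdims=True))
        return exp / exp.sum(axis=1, keepdims=True)

model = NumpyBlackModel(np.random.randn(4, 3), np.zeros(3))
batch = np.array([[0.2, 0.4, 0.5, 0.2], [0.7, 0.2, 0.4, 0.3]])
print(model.predict(batch).shape)  # (2, 3): m=2 samples, n=3 classes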

class mindarmour.Defense(network)[source]

The abstract base class for all defense classes that defend against adversarial examples.

Parameters

network (Cell) – A MindSpore-style deep learning model to be defended.

batch_defense(inputs, labels, batch_size=32, epochs=5)[source]

Defend the model with samples in batches.

Parameters
  • inputs (numpy.ndarray) – Samples based on which adversarial examples are generated.

  • labels (numpy.ndarray) – Labels of input samples.

  • batch_size (int) – Number of samples in one batch.

  • epochs (int) – Number of epochs.

Returns

numpy.ndarray, the loss of the batch_defense operation.

Raises

ValueError – If batch_size is 0.

abstract defense(inputs, labels)[source]

Defend the model with samples.

Parameters
  • inputs (numpy.ndarray) – Samples based on which adversarial examples are generated.

  • labels (numpy.ndarray) – Labels of input samples.

Raises

NotImplementedError – It is an abstract method.
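A minimal sketch of a concrete defense, assuming defense is expected to run one training step on the given samples and return its loss, which batch_defense then drives batch by batch. FineTuneDefense, loss_fn, and optimizer are illustrative names; the step itself uses the standard MindSpore wrappers nn.WithLossCell and nn.TrainOneStepCell.

import numpy as np
import mindspore.nn as nn
from mindspore import Tensor
from mindarmour import Defense

class FineTuneDefense(Defense):
    """Sketch of a defense that fine-tunes the network on given samples."""

    def __init__(self, network, loss_fn, optimizer):
        super(FineTuneDefense, self).__init__(network)
        net_with_loss = nn.WithLossCell(network, loss_fn)
        self._train_net = nn.TrainOneStepCell(net_with_loss, optimizer)
        self._train_net.set_train()

    def defense(self, inputs, labels):
        # One optimization step per call; batch_defense invokes this
        # for every batch in every epoch.
        loss = self._train_net(Tensor(inputs.astype(np.float32)),
                               Tensor(labels.astype(np.int32)))
        return loss.asnumpy()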

class mindarmour.Detector[source]

The abstract base class for all adversarial example detectors.

abstract detect(inputs)[source]

Detect adversarial examples from input samples.

Parameters

inputs (Union[numpy.ndarray, list, tuple]) – The input samples to be detected.

Raises

NotImplementedError – It is an abstract method.

abstract detect_diff(inputs)[source]

Calculate the difference between the input samples and their denoised versions.

Parameters

inputs (Union[numpy.ndarray, list, tuple]) – The input samples to be detected.

Raises

NotImplementedError – It is an abstract method.

abstract fit(inputs, labels=None)[source]

Fit a threshold and refuse adversarial examples whose difference from their denoised versions is larger than the threshold. The threshold is determined by a given false positive rate when applied to normal samples.

Parameters
  • inputs (Union[numpy.ndarray, list, tuple]) – Normal samples used to calibrate the threshold.

  • labels (numpy.ndarray) – Labels of the input samples. Default: None.

Raises

NotImplementedError – It is an abstract method.
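To illustrate the thresholding rule in plain numpy: pick the threshold so that only a chosen fraction of normal samples would be refused. This is a sketch of the idea only; the fpr value and the percentile rule are assumptions, not MindArmour's exact implementation.

import numpy as np

# detect_diff scores computed on known-normal samples (placeholder data).
diffs_on_normal = np.random.rand(1000)
fpr = 0.05  # tolerated false positive rate on normal samples
# Refusing everything above this threshold flags about 5% of normal data.
threshold = np.percentile(diffs_on_normal, 100 * (1 - fpr))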

abstract transform(inputs)[source]

Filter adversarial noise from the input samples.

Parameters

inputs (Union[numpy.ndarray, list, tuple]) – The input samples to be transformed.

Raises

NotImplementedError – It is an abstract method.
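Putting the four abstract methods together, a minimal sketch of a concrete detector, assuming Detector takes no constructor arguments; DenoiseDetector and its crude clip-and-round "denoiser" are purely illustrative.

import numpy as np
from mindarmour import Detector

class DenoiseDetector(Detector):
    """Toy detector: flag samples whose denoised version differs a lot."""

    def __init__(self, false_positive_rate=0.05):
        super(DenoiseDetector, self).__init__()
        self._fpr = false_positive_rate
        self._threshold = None

    def transform(self, inputs):
        # Crude stand-in for denoising: clip to [0, 1] and round.
        return np.round(np.clip(np.asarray(inputs), 0.0, 1.0), 1)

    def detect_diff(self, inputs):
        inputs = np.asarray(inputs)
        # Per-sample L2 distance to the denoised version.
        return np.linalg.norm(inputs - self.transform(inputs), axis=1)

    def fit(self, inputs, labels=None):
        # Calibrate the threshold on normal samples at the target FPR.
        diffs = self.detect_diff(inputs)
        self._threshold = np.percentile(diffs, 100 * (1 - self._fpr))

    def detect(self, inputs):
        # True means the sample is refused as (likely) adversarial.
        return self.detect_diff(inputs) > self._threshold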