mindarmour.adv_robustness.defenses

This module provides classical defense algorithms for defending against adversarial examples and enhancing model security and trustworthiness.

class mindarmour.adv_robustness.defenses.AdversarialDefense(network, loss_fn=None, optimizer=None)[source]

Adversarial training using the given adversarial examples.

Parameters:
  • network (Cell) - MindSpore network to be defended.

  • loss_fn (Union[Loss, None]) - Loss function. Default: None.

  • optimizer (Cell) - Optimizer used to train the network. Default: None.

Examples:

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore.nn.optim.momentum import Momentum
>>> import mindspore.ops.operations as P
>>> from mindarmour.adv_robustness.defenses import AdversarialDefense
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self._softmax = P.Softmax()
...         self._dense = nn.Dense(10, 10)
...         self._squeeze = P.Squeeze(1)
...     def construct(self, inputs):
...         out = self._softmax(inputs)
...         out = self._dense(out)
...         out = self._squeeze(out)
...         return out
>>> net = Net()
>>> lr = 0.001
>>> momentum = 0.9
>>> batch_size = 16
>>> num_classes = 10
>>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=False)
>>> optimizer = Momentum(net.trainable_params(), learning_rate=lr, momentum=momentum)
>>> adv_defense = AdversarialDefense(net, loss_fn, optimizer)
>>> inputs = np.random.rand(batch_size, 1, 10).astype(np.float32)
>>> labels = np.random.randint(10, size=batch_size).astype(np.int32)
>>> labels = np.eye(num_classes)[labels].astype(np.float32)
>>> adv_defense.defense(inputs, labels)
defense(inputs, labels)[source]

Enhance the model by training with the input samples.

Parameters:
  • inputs (numpy.ndarray) - Input samples.

  • labels (numpy.ndarray) - Labels of the input samples.

Returns:
  • numpy.ndarray - Loss of the defense operation.
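
The snippet below is a minimal, illustrative sketch of calling defense repeatedly during training. It continues from the example above (reusing adv_defense, inputs and labels); the epoch count is arbitrary and not part of the API.

>>> # Illustrative only: call defense() once per training step on numpy batches.
>>> for _ in range(5):
...     loss = adv_defense.defense(inputs, labels)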

class mindarmour.adv_robustness.defenses.AdversarialDefenseWithAttacks(network, attacks, loss_fn=None, optimizer=None, bounds=(0.0, 1.0), replace_ratio=0.5)[source]

Adversarial training using specific attack methods and the given adversarial examples to enhance model robustness.

Parameters:
  • network (Cell) - MindSpore network to be defended.

  • attacks (list[Attack]) - Sequence of attack methods.

  • loss_fn (Union[Loss, None]) - Loss function. Default: None.

  • optimizer (Cell) - Optimizer used to train the network. Default: None.

  • bounds (tuple) - Upper and lower bounds of the data, in the form of (clip_min, clip_max). Default: (0.0, 1.0).

  • replace_ratio (float) - Ratio of original samples replaced by adversarial samples; must be between 0 and 1. Default: 0.5.

Raises:
  • ValueError - If replace_ratio is not between 0 and 1.

Examples:

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore.nn.optim.momentum import Momentum
>>> import mindspore.ops.operations as P
>>> from mindarmour.adv_robustness.attacks import FastGradientSignMethod
>>> from mindarmour.adv_robustness.attacks import ProjectedGradientDescent
>>> from mindarmour.adv_robustness.defenses import AdversarialDefenseWithAttacks
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self._softmax = P.Softmax()
...         self._dense = nn.Dense(10, 10)
...         self._squeeze = P.Squeeze(1)
...     def construct(self, inputs):
...         out = self._softmax(inputs)
...         out = self._dense(out)
...         out = self._squeeze(out)
...         return out
>>> net = Net()
>>> lr = 0.001
>>> momentum = 0.9
>>> batch_size = 16
>>> num_classes = 10
>>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=False)
>>> optimizer = Momentum(net.trainable_params(), learning_rate=lr, momentum=momentum)
>>> fgsm = FastGradientSignMethod(net, loss_fn=loss_fn)
>>> pgd = ProjectedGradientDescent(net, loss_fn=loss_fn)
>>> ead = AdversarialDefenseWithAttacks(net, [fgsm, pgd], loss_fn=loss_fn,
...                                     optimizer=optimizer)
>>> inputs = np.random.rand(batch_size, 1, 10).astype(np.float32)
>>> labels = np.random.randint(10, size=batch_size).astype(np.int32)
>>> labels = np.eye(num_classes)[labels].astype(np.float32)
>>> loss = ead.defense(inputs, labels)
defense(inputs, labels)[source]

Enhance the model by training with adversarial samples generated from the input samples.

Parameters:
  • inputs (numpy.ndarray) - Input samples.

  • labels (numpy.ndarray) - Labels of the input samples.

Returns:
  • numpy.ndarray - Loss of the adversarial defense operation.
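
To make replace_ratio concrete, the sketch below constructs the defense with replace_ratio passed explicitly and notes roughly how many samples per batch would be swapped for adversarial ones, assuming the documented semantics. It reuses net, fgsm, pgd, loss_fn, optimizer, batch_size, inputs and labels from the example above; the value 0.5 is just the default.

>>> # Illustrative only: explicit replace_ratio, otherwise identical to the example above.
>>> ead_half = AdversarialDefenseWithAttacks(net, [fgsm, pgd], loss_fn=loss_fn,
...                                          optimizer=optimizer, bounds=(0.0, 1.0),
...                                          replace_ratio=0.5)
>>> int(batch_size * 0.5)   # approximate number of samples per batch replaced by adversarial ones
8
>>> loss = ead_half.defense(inputs, labels)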

class mindarmour.adv_robustness.defenses.NaturalAdversarialDefense(network, loss_fn=None, optimizer=None, bounds=(0.0, 1.0), replace_ratio=0.5, eps=0.1)[source]

Adversarial training based on FGSM.

Reference: A. Kurakin, et al., "Adversarial machine learning at scale," in ICLR, 2017.

Parameters:
  • network (Cell) - MindSpore network to be defended.

  • loss_fn (Union[Loss, None]) - Loss function. Default: None.

  • optimizer (Cell) - Optimizer used to train the network. Default: None.

  • bounds (tuple) - Upper and lower bounds of the data, in the form of (clip_min, clip_max). Default: (0.0, 1.0).

  • replace_ratio (float) - Ratio of original samples replaced by adversarial samples. Default: 0.5.

  • eps (float) - Step size of the attack method (FGSM). Default: 0.1.

Examples:

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore.nn.optim.momentum import Momentum
>>> import mindspore.ops.operations as P
>>> from mindarmour.adv_robustness.defenses import NaturalAdversarialDefense
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self._softmax = P.Softmax()
...         self._dense = nn.Dense(10, 10)
...         self._squeeze = P.Squeeze(1)
...     def construct(self, inputs):
...         out = self._softmax(inputs)
...         out = self._dense(out)
...         out = self._squeeze(out)
...         return out
>>> net = Net()
>>> lr = 0.001
>>> momentum = 0.9
>>> batch_size = 16
>>> num_classes = 10
>>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=False)
>>> optimizer = Momentum(net.trainable_params(), learning_rate=lr, momentum=momentum)
>>> nad = NaturalAdversarialDefense(net, loss_fn=loss_fn, optimizer=optimizer)
>>> inputs = np.random.rand(batch_size, 1, 10).astype(np.float32)
>>> labels = np.random.randint(10, size=batch_size).astype(np.int32)
>>> labels = np.eye(num_classes)[labels].astype(np.float32)
>>> loss = nad.defense(inputs, labels)
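
Since this class is FGSM-based, the sketch below simply repeats the construction with the documented keyword arguments (bounds, replace_ratio, eps) spelled out explicitly; the values are the defaults and net, loss_fn, optimizer, inputs and labels are reused from the example above.

>>> # Illustrative only: same as above, with the documented defaults made explicit.
>>> nad = NaturalAdversarialDefense(net, loss_fn=loss_fn, optimizer=optimizer,
...                                 bounds=(0.0, 1.0), replace_ratio=0.5, eps=0.1)
>>> loss = nad.defense(inputs, labels)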
class mindarmour.adv_robustness.defenses.ProjectedAdversarialDefense(network, loss_fn=None, optimizer=None, bounds=(0.0, 1.0), replace_ratio=0.5, eps=0.3, eps_iter=0.1, nb_iter=5, norm_level='inf')[source]

Adversarial training based on PGD.

Reference: A. Madry, et al., "Towards deep learning models resistant to adversarial attacks," in ICLR, 2018.

Parameters:
  • network (Cell) - MindSpore network to be defended.

  • loss_fn (Union[Loss, None]) - Loss function. Default: None.

  • optimizer (Cell) - Optimizer used to train the network. Default: None.

  • bounds (tuple) - Upper and lower bounds of the input data, in the form of (clip_min, clip_max). Default: (0.0, 1.0).

  • replace_ratio (float) - Ratio of original samples replaced by adversarial samples. Default: 0.5.

  • eps (float) - PGD attack parameter epsilon. Default: 0.3.

  • eps_iter (float) - PGD attack parameter, inner-loop epsilon (per-iteration step size). Default: 0.1.

  • nb_iter (int) - PGD attack parameter, number of iterations. Default: 5.

  • norm_level (Union[int, str, numpy.inf]) - Norm type. Possible values: 1, 2, np.inf, 'l1', 'l2', 'np.inf' or 'inf'. Default: 'inf'.

Examples:

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore.nn.optim.momentum import Momentum
>>> import mindspore.ops.operations as P
>>> from mindarmour.adv_robustness.defenses import ProjectedAdversarialDefense
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self._softmax = P.Softmax()
...         self._dense = nn.Dense(10, 10)
...         self._squeeze = P.Squeeze(1)
...     def construct(self, inputs):
...         out = self._softmax(inputs)
...         out = self._dense(out)
...         out = self._squeeze(out)
...         return out
>>> net = Net()
>>> lr = 0.001
>>> momentum = 0.9
>>> batch_size = 16
>>> num_classes = 10
>>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=False)
>>> optimizer = Momentum(net.trainable_params(), learning_rate=lr, momentum=momentum)
>>> pad = ProjectedAdversarialDefense(net, loss_fn=loss_fn, optimizer=optimizer)
>>> inputs = np.random.rand(batch_size, 1, 10).astype(np.float32)
>>> labels = np.random.randint(10, size=batch_size).astype(np.int32)
>>> labels = np.eye(num_classes)[labels].astype(np.float32)
>>> loss = pad.defense(inputs, labels)
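
The sketch below repeats the construction with the documented PGD parameters (eps, eps_iter, nb_iter, norm_level) spelled out explicitly; the values are the defaults and net, loss_fn, optimizer, inputs and labels are reused from the example above.

>>> # Illustrative only: same as above, with the documented PGD defaults made explicit.
>>> pad = ProjectedAdversarialDefense(net, loss_fn=loss_fn, optimizer=optimizer,
...                                   bounds=(0.0, 1.0), replace_ratio=0.5,
...                                   eps=0.3, eps_iter=0.1, nb_iter=5, norm_level='inf')
>>> loss = pad.defense(inputs, labels)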
class mindarmour.adv_robustness.defenses.EnsembleAdversarialDefense(network, attacks, loss_fn=None, optimizer=None, bounds=(0.0, 1.0), replace_ratio=0.5)[source]

Adversarial training using a list of specific attack methods and the given adversarial examples to enhance model robustness.

Parameters:
  • network (Cell) - MindSpore network to be defended.

  • attacks (list[Attack]) - Sequence of attack methods.

  • loss_fn (Union[Loss, None]) - Loss function. Default: None.

  • optimizer (Cell) - Optimizer used to train the network. Default: None.

  • bounds (tuple) - Upper and lower bounds of the data, in the form of (clip_min, clip_max). Default: (0.0, 1.0).

  • replace_ratio (float) - Ratio of original samples replaced by adversarial samples; must be between 0 and 1. Default: 0.5.

Raises:
  • ValueError - If replace_ratio is not between 0 and 1.

Examples:

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore.nn.optim.momentum import Momentum
>>> import mindspore.ops.operations as P
>>> from mindarmour.adv_robustness.attacks import FastGradientSignMethod
>>> from mindarmour.adv_robustness.attacks import ProjectedGradientDescent
>>> from mindarmour.adv_robustness.defenses import EnsembleAdversarialDefense
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self._softmax = P.Softmax()
...         self._dense = nn.Dense(10, 10)
...         self._squeeze = P.Squeeze(1)
...     def construct(self, inputs):
...         out = self._softmax(inputs)
...         out = self._dense(out)
...         out = self._squeeze(out)
...         return out
>>> net = Net()
>>> lr = 0.001
>>> momentum = 0.9
>>> batch_size = 16
>>> num_classes = 10
>>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=False)
>>> optimizer = Momentum(net.trainable_params(), learning_rate=lr, momentum=momentum)
>>> fgsm = FastGradientSignMethod(net, loss_fn=loss_fn)
>>> pgd = ProjectedGradientDescent(net, loss_fn=loss_fn)
>>> ead = EnsembleAdversarialDefense(net, [fgsm, pgd], loss_fn=loss_fn,
...                                  optimizer=optimizer)
>>> inputs = np.random.rand(batch_size, 1, 10).astype(np.float32)
>>> labels = np.random.randint(10, size=batch_size).astype(np.int32)
>>> labels = np.eye(num_classes)[labels].astype(np.float32)
>>> loss = ead.defense(inputs, labels)
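
A minimal, illustrative sketch of using the ensemble defense over several mini-batches is shown below. It continues from the example above (reusing ead, batch_size and num_classes); the random batches and the epoch/batch counts are stand-ins, not part of the API.

>>> # Illustrative only: call defense() once per numpy mini-batch.
>>> for _ in range(2):
...     for _ in range(3):
...         batch_x = np.random.rand(batch_size, 1, 10).astype(np.float32)
...         batch_y = np.eye(num_classes)[np.random.randint(10, size=batch_size)].astype(np.float32)
...         loss = ead.defense(batch_x, batch_y)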