mindarmour.fuzz_testing


This module provides a fuzz testing method based on neuron-coverage gain to evaluate the robustness of a given model.

class mindarmour.fuzz_testing.Fuzzer(target_model)[source]

Fuzz testing framework for deep neural networks.

Reference: DeepHunter: A Coverage-Guided Fuzz Testing Framework for Deep Neural Networks

Parameters:
  • target_model (Model) - Target fuzzing model.

Examples:

>>> import numpy as np
>>> from mindspore import nn
>>> from mindspore.common.initializer import TruncatedNormal
>>> from mindspore.ops import operations as P
>>> from mindspore.train import Model
>>> from mindspore.ops import TensorSummary
>>> from mindarmour.fuzz_testing import Fuzzer
>>> from mindarmour.fuzz_testing import KMultisectionNeuronCoverage
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.conv1 = nn.Conv2d(1, 6, 5, padding=0, weight_init=TruncatedNormal(0.02), pad_mode="valid")
...         self.conv2 = nn.Conv2d(6, 16, 5, padding=0, weight_init=TruncatedNormal(0.02), pad_mode="valid")
...         self.fc1 = nn.Dense(16 * 5 * 5, 120, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.fc2 = nn.Dense(120, 84, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.fc3 = nn.Dense(84, 10, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.relu = nn.ReLU()
...         self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
...         self.reshape = P.Reshape()
...         self.summary = TensorSummary()
...
...     def construct(self, x):
...         x = self.conv1(x)
...         x = self.relu(x)
...         self.summary('conv1', x)
...         x = self.max_pool2d(x)
...         x = self.conv2(x)
...         x = self.relu(x)
...         self.summary('conv2', x)
...         x = self.max_pool2d(x)
...         x = self.reshape(x, (-1, 16 * 5 * 5))
...         x = self.fc1(x)
...         x = self.relu(x)
...         self.summary('fc1', x)
...         x = self.fc2(x)
...         x = self.relu(x)
...         self.summary('fc2', x)
...         x = self.fc3(x)
...         self.summary('fc3', x)
...         return x
>>> net = Net()
>>> model = Model(net)
>>> mutate_config = [{'method': 'GaussianBlur',
...                   'params': {'ksize': [1, 2, 3, 5], 'auto_param': [True, False]}},
...                  {'method': 'MotionBlur',
...                   'params': {'degree': [1, 2, 5], 'angle': [45, 10, 100, 140, 210, 270, 300],
...                   'auto_param': [True]}},
...                  {'method': 'UniformNoise',
...                   'params': {'factor': [0.1, 0.2, 0.3], 'auto_param': [False, True]}},
...                  {'method': 'GaussianNoise',
...                   'params': {'factor': [0.1, 0.2, 0.3], 'auto_param': [False, True]}},
...                  {'method': 'Contrast',
...                   'params': {'alpha': [0.5, 1, 1.5], 'beta': [-10, 0, 10], 'auto_param': [False, True]}},
...                  {'method': 'Rotate',
...                   'params': {'angle': [20, 90], 'auto_param': [False, True]}},
...                  {'method': 'FGSM',
...                   'params': {'eps': [0.3, 0.2, 0.4], 'alpha': [0.1], 'bounds': [(0, 1)]}}]
>>> batch_size = 8
>>> num_classes = 10
>>> train_images = np.random.rand(32, 1, 32, 32).astype(np.float32)
>>> test_images = np.random.rand(batch_size, 1, 32, 32).astype(np.float32)
>>> test_labels = np.random.randint(num_classes, size=batch_size).astype(np.int32)
>>> test_labels = (np.eye(num_classes)[test_labels]).astype(np.float32)
>>> initial_seeds = []
>>> # make initial seeds
>>> for img, label in zip(test_images, test_labels):
...     initial_seeds.append([img, label])
>>> initial_seeds = initial_seeds[:10]
>>> nc = KMultisectionNeuronCoverage(model, train_images, segmented_num=100, incremental=True)
>>> model_fuzz_test = Fuzzer(model)
>>> samples, gt_labels, preds, strategies, metrics = model_fuzz_test.fuzzing(mutate_config, initial_seeds,
...                                                                          nc, max_iters=100)
fuzzing(mutate_config, initial_seeds, coverage, evaluate=True, max_iters=10000, mutate_num_per_seed=20)[source]

Fuzz testing for deep neural networks.

Parameters:
  • mutate_config (list) - Mutation method configuration. The format is:

    mutate_config = [
        {'method': 'GaussianBlur',
         'params': {'ksize': [1, 2, 3, 5], 'auto_param': [True, False]}},
        {'method': 'UniformNoise',
         'params': {'factor': [0.1, 0.2, 0.3], 'auto_param': [False, True]}},
        {'method': 'GaussianNoise',
         'params': {'factor': [0.1, 0.2, 0.3], 'auto_param': [False, True]}},
        {'method': 'Contrast',
         'params': {'alpha': [0.5, 1, 1.5], 'beta': [-10, 0, 10], 'auto_param': [False, True]}},
        {'method': 'Rotate',
         'params': {'angle': [20, 90], 'auto_param': [False, True]}},
        {'method': 'FGSM',
         'params': {'eps': [0.3, 0.2, 0.4], 'alpha': [0.1], 'bounds': [(0, 1)]}}
        ...]
    
    • The supported methods are listed in self._strategies, and each method's parameters must fall within the range of its optional values. The supported methods fall into two types:

    • First, the natural robustness methods include: 'Translate', 'Scale', 'Shear', 'Rotate', 'Perspective', 'Curve', 'GaussianBlur', 'MotionBlur', 'GradientBlur', 'Contrast', 'GradientLuminance', 'UniformNoise', 'GaussianNoise', 'SaltAndPepperNoise', 'NaturalNoise'.

    • Second, the adversarial-example attack methods include: 'FGSM', 'PGD' and 'MDIM', which are short for FastGradientSignMethod, ProjectedGradientDescent and MomentumDiverseInputIterativeMethod, respectively. mutate_config must include methods from ['Contrast', 'GradientLuminance', 'GaussianBlur', 'MotionBlur', 'GradientBlur', 'UniformNoise', 'GaussianNoise', 'SaltAndPepperNoise', 'NaturalNoise'].

    • The parameter settings for the first type of method can be found in 'mindarmour/natural_robustness/transform/image'. For the parameter configuration of the second type, refer to self._attack_param_checklists.

  • initial_seeds (list[list]) - Initial seed queue used to generate mutated samples. The format of the initial seed queue is [[image_data, label], [...], ...], and the labels must be one-hot.

  • coverage (CoverageMetrics) - Neuron coverage metrics class.

  • evaluate (bool) - Whether to return an evaluation report. Default: True.

  • max_iters (int) - Maximum number of seeds selected for mutation. Default: 10000.

  • mutate_num_per_seed (int) - Maximum number of mutations per seed. Default: 20.

Returns:
  • list - Mutated samples generated by fuzz testing.

  • list - Ground-truth labels of the mutated samples.

  • list - Prediction results.

  • list - Mutation strategies.

  • dict - Metrics report of the Fuzzer.

Raises:
  • ValueError - The parameter coverage must be a subclass of CoverageMetrics.

  • ValueError - The initial seed queue is empty.

  • ValueError - A seed in initial_seeds does not contain two elements.

class mindarmour.fuzz_testing.SensitivityMaximizingFuzzer(target_model)[source]

Fuzz testing framework for deep neural networks.

Reference:

Themis: Sensitivity Testing for Deep Learning System

Parameters:
  • target_model (Model) - Target fuzzing model.

Examples:

>>> import numpy as np
>>> from mindspore import nn
>>> from mindspore.common.initializer import TruncatedNormal
>>> from mindspore.ops import operations as P
>>> from mindspore.train import Model
>>> from mindspore.ops import TensorSummary
>>> from mindarmour.fuzz_testing import Fuzzer, SensitivityMaximizingFuzzer
>>> from mindarmour.fuzz_testing import SensitivityConvergenceCoverage
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.conv1 = nn.Conv2d(1, 6, 5, padding=0, weight_init=TruncatedNormal(0.02), pad_mode="valid")
...         self.conv2 = nn.Conv2d(6, 16, 5, padding=0, weight_init=TruncatedNormal(0.02), pad_mode="valid")
...         self.fc1 = nn.Dense(16 * 5 * 5, 120, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.fc2 = nn.Dense(120, 84, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.fc3 = nn.Dense(84, 10, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.relu = nn.ReLU()
...         self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
...         self.reshape = P.Reshape()
...         self.summary = TensorSummary()
...
...     def construct(self, x):
...         x = self.conv1(x)
...         x = self.relu(x)
...         self.summary('conv1', x)
...         x = self.max_pool2d(x)
...         x = self.conv2(x)
...         x = self.relu(x)
...         self.summary('conv2', x)
...         x = self.max_pool2d(x)
...         x = self.reshape(x, (-1, 16 * 5 * 5))
...         x = self.fc1(x)
...         x = self.relu(x)
...         self.summary('fc1', x)
...         x = self.fc2(x)
...         x = self.relu(x)
...         self.summary('fc2', x)
...         x = self.fc3(x)
...         self.summary('fc3', x)
...         return x
>>> net = Net()
>>> model = Model(net)
>>> mutate_config = [{'method': 'GaussianBlur',
...                   'params': {'ksize': [1, 2, 3, 5], 'auto_param': [True, False]}},
...                  {'method': 'MotionBlur',
...                   'params': {'degree': [1, 2, 5], 'angle': [45, 10, 100, 140, 210, 270, 300],
...                   'auto_param': [True]}},
...                  {'method': 'UniformNoise',
...                   'params': {'factor': [0.1, 0.2, 0.3], 'auto_param': [False, True]}},
...                  {'method': 'GaussianNoise',
...                   'params': {'factor': [0.1, 0.2, 0.3], 'auto_param': [False, True]}},
...                  {'method': 'Contrast',
...                   'params': {'alpha': [0.5, 1, 1.5], 'beta': [-10, 0, 10], 'auto_param': [False, True]}},
...                  {'method': 'Rotate',
...                   'params': {'angle': [20, 90], 'auto_param': [False, True]}},
...                  {'method': 'FGSM',
...                   'params': {'eps': [0.3, 0.2, 0.4], 'alpha': [0.1], 'bounds': [(0, 1)]}}]
>>> batch_size = 32
>>> num_classes = 10
>>> train_images = np.random.rand(32, 1, 32, 32).astype(np.float32)
>>> test_images = np.random.rand(batch_size, 1, 32, 32).astype(np.float32)
>>> test_labels = np.random.randint(num_classes, size=batch_size).astype(np.int32)
>>> test_labels = (np.eye(num_classes)[test_labels]).astype(np.float32)
>>> initial_seeds = []
>>> # make initial seeds
>>> for img, label in zip(test_images, test_labels):
...     initial_seeds.append([img, label])
>>> initial_seeds = initial_seeds[:batch_size]
>>> SCC = SensitivityConvergenceCoverage(model, batch_size=batch_size)
>>> model_fuzz_test = SensitivityMaximizingFuzzer(model)
>>> samples, gt_labels, preds, strategies, metrics = model_fuzz_test.fuzzing(mutate_config, initial_seeds,
...                                                                          SCC, max_iters=100)
fuzzing(mutate_config, initial_seeds, coverage, evaluate=True, max_iters=1000, mutate_num_per_seed=20)[source]

Fuzz testing for deep neural networks.

Parameters:
  • mutate_config (list) - Mutation method configuration. The format is:

    mutate_config = [
        {'method': 'GaussianBlur',
         'params': {'ksize': [1, 2, 3, 5], 'auto_param': [True, False]}},
        {'method': 'UniformNoise',
         'params': {'factor': [0.1, 0.2, 0.3], 'auto_param': [False, True]}},
        {'method': 'GaussianNoise',
         'params': {'factor': [0.1, 0.2, 0.3], 'auto_param': [False, True]}},
        {'method': 'Contrast',
         'params': {'alpha': [0.5, 1, 1.5], 'beta': [-10, 0, 10], 'auto_param': [False, True]}},
        {'method': 'Rotate',
         'params': {'angle': [20, 90], 'auto_param': [False, True]}},
        {'method': 'FGSM',
         'params': {'eps': [0.3, 0.2, 0.4], 'alpha': [0.1], 'bounds': [(0, 1)]}}
        ...]
    
    • The supported methods are listed in self._strategies, and each method's parameters must fall within the range of its optional values. The supported methods fall into two types:

    • First, the natural robustness methods include: 'Translate', 'Scale', 'Shear', 'Rotate', 'Perspective', 'Curve', 'GaussianBlur', 'MotionBlur', 'GradientBlur', 'Contrast', 'GradientLuminance', 'UniformNoise', 'GaussianNoise', 'SaltAndPepperNoise', 'NaturalNoise'.

    • Second, the adversarial-example attack methods include: 'FGSM', 'PGD' and 'MDIM', which are short for FastGradientSignMethod, ProjectedGradientDescent and MomentumDiverseInputIterativeMethod, respectively. mutate_config must include methods from ['Contrast', 'GradientLuminance', 'GaussianBlur', 'MotionBlur', 'GradientBlur', 'UniformNoise', 'GaussianNoise', 'SaltAndPepperNoise', 'NaturalNoise'].

    • The parameter settings for the first type of method can be found in 'mindarmour/natural_robustness/transform/image'. For the parameter configuration of the second type, refer to self._attack_param_checklists.

  • initial_seeds (list[list]) - Initial seed queue used to generate mutated samples. The format of the initial seed queue is [[image_data, label], [...], ...], and the labels must be one-hot.

  • coverage (CoverageMetrics) - Neuron coverage metrics class.

  • evaluate (bool) - Whether to return an evaluation report. Default: True.

  • max_iters (int) - Maximum number of seeds selected for mutation. Default: 1000.

  • mutate_num_per_seed (int) - Maximum number of mutations per seed. Default: 20.

Returns:
  • list - Mutated samples generated by fuzz testing.

  • list - Ground-truth labels of the mutated samples.

  • list - Prediction results.

  • list - Mutation strategies.

  • dict - Metrics report of the Fuzzer.

Raises:
  • ValueError - The parameter coverage must be a subclass of CoverageMetrics.

  • ValueError - The initial seed queue is empty.

  • ValueError - A seed in initial_seeds does not contain two elements.

class mindarmour.fuzz_testing.CoverageMetrics(model, incremental=False, batch_size=32)[source]

Abstract base class of neuron-coverage classes for calculating coverage metrics.

Each neuron of a trained network has an output range (which we call the original range), and a test dataset is used to estimate the accuracy of the trained network. However, the output distribution of the neurons varies across test datasets. Therefore, similar to traditional fuzz testing, model fuzz testing means testing these neurons' outputs and evaluating the proportion of the original range that the neuron output values cover on the test dataset.
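The idea above can be sketched with plain NumPy: record a layer's activations on the training set, take the per-neuron min/max as the "original range", and then check how test-time outputs relate to it. The arrays below are randomly generated stand-ins, not MindArmour internals.

```python
import numpy as np

# Hypothetical per-layer activations: shape (num_samples, num_neurons).
rng = np.random.default_rng(0)
train_activations = rng.normal(size=(1000, 16))

# Per-neuron lower/upper bounds observed on the training data
# (the "original range" of each neuron).
lower = train_activations.min(axis=0)
upper = train_activations.max(axis=0)

# A test batch is then judged against these bounds.
test_activations = rng.normal(size=(8, 16))
within = (test_activations >= lower) & (test_activations <= upper)
fraction_within = within.mean()
```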

Reference: DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems

Parameters:
  • model (Model) - The model to be tested.

  • incremental (bool) - Whether metrics are calculated incrementally. Default: False.

  • batch_size (int) - Number of samples in a fuzz-testing batch. Default: 32.

abstract get_metrics(dataset)[source]

Calculate coverage metrics for the given dataset.

Parameters:
  • dataset (numpy.ndarray) - Dataset used to calculate coverage metrics.

Raises:
  • NotImplementedError - This is an abstract method.

class mindarmour.fuzz_testing.NeuronCoverage(model, threshold=0.1, incremental=False, batch_size=32)[source]

Calculate the coverage of activated neurons. A neuron is activated when its output is greater than the threshold.

Neuron coverage equals the proportion of activated neurons among all neurons in the network.
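A minimal NumPy sketch of this definition, with assumed, randomly generated layer activations standing in for the records the class collects internally:

```python
import numpy as np

# Assumed activations of one layer: shape (batch, neurons).
rng = np.random.default_rng(1)
activations = rng.uniform(-1, 1, size=(8, 120))
threshold = 0.1

# A neuron counts as activated if any test sample drives its output
# above the threshold; NC is the activated fraction of all neurons.
activated = (activations > threshold).any(axis=0)
neuron_coverage = activated.mean()
```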

Parameters:
  • model (Model) - The model to be tested.

  • threshold (float) - Threshold used to determine whether a neuron is activated. Default: 0.1.

  • incremental (bool) - Whether metrics are calculated incrementally. Default: False.

  • batch_size (int) - Number of samples in a fuzz-testing batch. Default: 32.

get_metrics(dataset)[source]

Get the neuron-coverage metric: the proportion of activated neurons among all neurons in the network.

Parameters:
  • dataset (numpy.ndarray) - Dataset used to calculate coverage metrics.

Returns:
  • float - Metric of 'neuron coverage'.

Examples:

>>> import numpy as np
>>> from mindspore import nn
>>> from mindspore.common.initializer import TruncatedNormal
>>> from mindspore.ops import operations as P
>>> from mindspore.train import Model
>>> from mindspore.ops import TensorSummary
>>> from mindarmour.fuzz_testing import NeuronCoverage
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.conv1 = nn.Conv2d(1, 6, 5, padding=0, weight_init=TruncatedNormal(0.02), pad_mode="valid")
...         self.conv2 = nn.Conv2d(6, 16, 5, padding=0, weight_init=TruncatedNormal(0.02), pad_mode="valid")
...         self.fc1 = nn.Dense(16 * 5 * 5, 120, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.fc2 = nn.Dense(120, 84, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.fc3 = nn.Dense(84, 10, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.relu = nn.ReLU()
...         self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
...         self.reshape = P.Reshape()
...         self.summary = TensorSummary()
...     def construct(self, x):
...         x = self.conv1(x)
...         x = self.relu(x)
...         self.summary('conv1', x)
...         x = self.max_pool2d(x)
...         x = self.conv2(x)
...         x = self.relu(x)
...         self.summary('conv2', x)
...         x = self.max_pool2d(x)
...         x = self.reshape(x, (-1, 16 * 5 * 5))
...         x = self.fc1(x)
...         x = self.relu(x)
...         self.summary('fc1', x)
...         x = self.fc2(x)
...         x = self.relu(x)
...         self.summary('fc2', x)
...         x = self.fc3(x)
...         self.summary('fc3', x)
...         return x
>>> net = Net()
>>> model = Model(net)
>>> batch_size = 8
>>> num_classes = 10
>>> train_images = np.random.rand(32, 1, 32, 32).astype(np.float32)
>>> test_images = np.random.rand(batch_size, 1, 32, 32).astype(np.float32)
>>> nc = NeuronCoverage(model, threshold=0.1)
>>> nc_metrics = nc.get_metrics(test_images)
class mindarmour.fuzz_testing.TopKNeuronCoverage(model, top_k=3, incremental=False, batch_size=32)[source]

Calculate the coverage of top-k activated neurons. A neuron is activated when its output value is among the largest top_k outputs of its hidden layer. Top-k neuron coverage equals the proportion of activated neurons among all neurons in the network.
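As a NumPy sketch of this definition for a single hidden layer (assumed, randomly generated activations): a neuron is covered if it ranks in the layer's top_k outputs for at least one test sample.

```python
import numpy as np

# Assumed activations of one hidden layer: shape (batch, neurons).
rng = np.random.default_rng(2)
activations = rng.normal(size=(8, 84))
top_k = 3

# Column indices of each sample's top_k neurons.
top_idx = np.argsort(activations, axis=1)[:, -top_k:]
covered = np.zeros(activations.shape[1], dtype=bool)
covered[np.unique(top_idx)] = True
tknc = covered.mean()
```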

Parameters:
  • model (Model) - The model to be tested.

  • top_k (int) - A neuron is activated when its output value is among the largest top_k outputs of its hidden layer. Default: 3.

  • incremental (bool) - Whether metrics are calculated incrementally. Default: False.

  • batch_size (int) - Number of samples in a fuzz-testing batch. Default: 32.

get_metrics(dataset)[source]

Get the metric of top-k activated neuron coverage.

Parameters:
  • dataset (numpy.ndarray) - Dataset used to calculate coverage metrics.

Returns:
  • float - Metric of 'top k neuron coverage'.

Examples:

>>> import numpy as np
>>> from mindspore import nn
>>> from mindspore.common.initializer import TruncatedNormal
>>> from mindspore.ops import operations as P
>>> from mindspore.train import Model
>>> from mindspore.ops import TensorSummary
>>> from mindarmour.fuzz_testing import TopKNeuronCoverage
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.conv1 = nn.Conv2d(1, 6, 5, padding=0, weight_init=TruncatedNormal(0.02), pad_mode="valid")
...         self.conv2 = nn.Conv2d(6, 16, 5, padding=0, weight_init=TruncatedNormal(0.02), pad_mode="valid")
...         self.fc1 = nn.Dense(16 * 5 * 5, 120, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.fc2 = nn.Dense(120, 84, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.fc3 = nn.Dense(84, 10, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.relu = nn.ReLU()
...         self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
...         self.reshape = P.Reshape()
...         self.summary = TensorSummary()
...     def construct(self, x):
...         x = self.conv1(x)
...         x = self.relu(x)
...         self.summary('conv1', x)
...         x = self.max_pool2d(x)
...         x = self.conv2(x)
...         x = self.relu(x)
...         self.summary('conv2', x)
...         x = self.max_pool2d(x)
...         x = self.reshape(x, (-1, 16 * 5 * 5))
...         x = self.fc1(x)
...         x = self.relu(x)
...         self.summary('fc1', x)
...         x = self.fc2(x)
...         x = self.relu(x)
...         self.summary('fc2', x)
...         x = self.fc3(x)
...         self.summary('fc3', x)
...         return x
>>> net = Net()
>>> model = Model(net)
>>> batch_size = 8
>>> num_classes = 10
>>> train_images = np.random.rand(32, 1, 32, 32).astype(np.float32)
>>> test_images = np.random.rand(batch_size, 1, 32, 32).astype(np.float32)
>>> tknc = TopKNeuronCoverage(model, top_k=3)
>>> metrics = tknc.get_metrics(test_images)
class mindarmour.fuzz_testing.NeuronBoundsCoverage(model, train_dataset, incremental=False, batch_size=32)[source]

Get the metric of 'neuron boundary coverage': \(NBC = (|UpperCornerNeuron| + |LowerCornerNeuron|)/(2*|N|)\), where \(|N|\) is the number of neurons. NBC refers to the proportion of neurons whose output values on the test dataset exceed the upper or lower bounds of the corresponding neurons' output values on the training dataset.
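A NumPy sketch of this formula with randomly generated stand-in activations (not the class's internal implementation): the bounds come from the training data, and a neuron is an upper/lower "corner" neuron if any test sample pushes it past them.

```python
import numpy as np

# Assumed activations: (samples, neurons).
rng = np.random.default_rng(3)
train_act = rng.normal(size=(1000, 64))
test_act = rng.normal(scale=2.0, size=(8, 64))  # wider spread on purpose

lower, upper = train_act.min(axis=0), train_act.max(axis=0)
upper_corner = (test_act > upper).any(axis=0)  # exceeds training max
lower_corner = (test_act < lower).any(axis=0)  # falls below training min
nbc = (upper_corner.sum() + lower_corner.sum()) / (2 * test_act.shape[1])
```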

Parameters:
  • model (Model) - The pre-trained model to be tested.

  • train_dataset (numpy.ndarray) - Training dataset used to determine the bounds of neuron outputs.

  • incremental (bool) - Whether metrics are calculated incrementally. Default: False.

  • batch_size (int) - Number of samples in a fuzz-testing batch. Default: 32.

get_metrics(dataset)[source]

Get the metric of 'neuron boundary coverage'.

Parameters:
  • dataset (numpy.ndarray) - Dataset used to calculate coverage metrics.

Returns:
  • float - Metric of 'neuron boundary coverage'.

Examples:

>>> import numpy as np
>>> from mindspore import nn
>>> from mindspore.common.initializer import TruncatedNormal
>>> from mindspore.ops import operations as P
>>> from mindspore.train import Model
>>> from mindspore.ops import TensorSummary
>>> from mindarmour.fuzz_testing import NeuronBoundsCoverage
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.conv1 = nn.Conv2d(1, 6, 5, padding=0, weight_init=TruncatedNormal(0.02), pad_mode="valid")
...         self.conv2 = nn.Conv2d(6, 16, 5, padding=0, weight_init=TruncatedNormal(0.02), pad_mode="valid")
...         self.fc1 = nn.Dense(16 * 5 * 5, 120, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.fc2 = nn.Dense(120, 84, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.fc3 = nn.Dense(84, 10, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.relu = nn.ReLU()
...         self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
...         self.reshape = P.Reshape()
...         self.summary = TensorSummary()
...     def construct(self, x):
...         x = self.conv1(x)
...         x = self.relu(x)
...         self.summary('conv1', x)
...         x = self.max_pool2d(x)
...         x = self.conv2(x)
...         x = self.relu(x)
...         self.summary('conv2', x)
...         x = self.max_pool2d(x)
...         x = self.reshape(x, (-1, 16 * 5 * 5))
...         x = self.fc1(x)
...         x = self.relu(x)
...         self.summary('fc1', x)
...         x = self.fc2(x)
...         x = self.relu(x)
...         self.summary('fc2', x)
...         x = self.fc3(x)
...         self.summary('fc3', x)
...         return x
>>> net = Net()
>>> model = Model(net)
>>> batch_size = 8
>>> num_classes = 10
>>> train_images = np.random.rand(32, 1, 32, 32).astype(np.float32)
>>> test_images = np.random.rand(batch_size, 1, 32, 32).astype(np.float32)
>>> nbc = NeuronBoundsCoverage(model, train_images)
>>> metrics = nbc.get_metrics(test_images)
class mindarmour.fuzz_testing.SuperNeuronActivateCoverage(model, train_dataset, incremental=False, batch_size=32)[source]

Get the metric of 'super neuron activation coverage': \(SNAC = |UpperCornerNeuron|/|N|\). SNAC refers to the proportion of neurons whose output values on the test set exceed the upper bounds of the corresponding neurons' output values on the training set.
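SNAC is the one-sided counterpart of NBC and can be sketched the same way with stand-in NumPy activations (again, not the class's internal implementation):

```python
import numpy as np

# Assumed activations: (samples, neurons).
rng = np.random.default_rng(4)
train_act = rng.normal(size=(1000, 64))
test_act = rng.normal(scale=2.0, size=(8, 64))

# Fraction of neurons whose test-time output exceeds the training max.
upper = train_act.max(axis=0)
snac = (test_act > upper).any(axis=0).mean()
```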

Parameters:
  • model (Model) - The pre-trained model to be tested.

  • train_dataset (numpy.ndarray) - Training dataset used to determine the bounds of neuron outputs.

  • incremental (bool) - Whether metrics are calculated incrementally. Default: False.

  • batch_size (int) - Number of samples in a fuzz-testing batch. Default: 32.

get_metrics(dataset)[source]

Get the metric of 'super neuron activation coverage'.

Parameters:
  • dataset (numpy.ndarray) - Dataset used to calculate coverage metrics.

Returns:
  • float - Metric of 'super neuron activation coverage'.

Examples:

>>> import numpy as np
>>> from mindspore import nn
>>> from mindspore.common.initializer import TruncatedNormal
>>> from mindspore.ops import operations as P
>>> from mindspore.train import Model
>>> from mindspore.ops import TensorSummary
>>> from mindarmour.fuzz_testing import SuperNeuronActivateCoverage
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.conv1 = nn.Conv2d(1, 6, 5, padding=0, weight_init=TruncatedNormal(0.02), pad_mode="valid")
...         self.conv2 = nn.Conv2d(6, 16, 5, padding=0, weight_init=TruncatedNormal(0.02), pad_mode="valid")
...         self.fc1 = nn.Dense(16 * 5 * 5, 120, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.fc2 = nn.Dense(120, 84, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.fc3 = nn.Dense(84, 10, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.relu = nn.ReLU()
...         self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
...         self.reshape = P.Reshape()
...         self.summary = TensorSummary()
...     def construct(self, x):
...         x = self.conv1(x)
...         x = self.relu(x)
...         self.summary('conv1', x)
...         x = self.max_pool2d(x)
...         x = self.conv2(x)
...         x = self.relu(x)
...         self.summary('conv2', x)
...         x = self.max_pool2d(x)
...         x = self.reshape(x, (-1, 16 * 5 * 5))
...         x = self.fc1(x)
...         x = self.relu(x)
...         self.summary('fc1', x)
...         x = self.fc2(x)
...         x = self.relu(x)
...         self.summary('fc2', x)
...         x = self.fc3(x)
...         self.summary('fc3', x)
...         return x
>>> net = Net()
>>> model = Model(net)
>>> batch_size = 8
>>> num_classes = 10
>>> train_images = np.random.rand(32, 1, 32, 32).astype(np.float32)
>>> test_images = np.random.rand(batch_size, 1, 32, 32).astype(np.float32)
>>> snac = SuperNeuronActivateCoverage(model, train_images)
>>> metrics = snac.get_metrics(test_images)
class mindarmour.fuzz_testing.KMultisectionNeuronCoverage(model, train_dataset, segmented_num=100, incremental=False, batch_size=32)[source]

Get the metric of k-multisection neuron coverage. KMNC measures the proportion of the k equal-width sections of each neuron's training-set output range that are covered by neuron outputs on the test set.
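A NumPy sketch of this definition with assumed, randomly generated activations (not the class's internal implementation): split each neuron's training range into k bins and count the fraction of (neuron, bin) cells hit by the test set.

```python
import numpy as np

# Assumed activations: (samples, neurons).
rng = np.random.default_rng(5)
train_act = rng.normal(size=(1000, 32))
test_act = rng.normal(size=(64, 32))
k = 100

lower, upper = train_act.min(axis=0), train_act.max(axis=0)
hit = np.zeros((32, k), dtype=bool)
# Bin index of each test activation within its neuron's training range.
bins = np.floor((test_act - lower) / (upper - lower) * k).astype(int)
inside = (bins >= 0) & (bins < k)  # ignore outputs outside the range
for n in range(32):
    hit[n, bins[inside[:, n], n]] = True
kmnc = hit.mean()  # fraction of covered (neuron, section) cells
```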

Parameters:
  • model (Model) - The pre-trained model to be tested.

  • train_dataset (numpy.ndarray) - Training dataset used to determine the bounds of neuron outputs.

  • segmented_num (int) - Number of segments into which the neuron output interval is divided. Default: 100.

  • incremental (bool) - Whether metrics are calculated incrementally. Default: False.

  • batch_size (int) - Number of samples in a fuzz-testing batch. Default: 32.

get_metrics(dataset)[source]

Get the metric of 'k-multisection neuron coverage'.

Parameters:
  • dataset (numpy.ndarray) - Dataset used to calculate coverage metrics.

Returns:
  • float - Metric of 'k-multisection neuron coverage'.

Examples:

>>> import numpy as np
>>> from mindspore import nn
>>> from mindspore.common.initializer import TruncatedNormal
>>> from mindspore.ops import operations as P
>>> from mindspore.train import Model
>>> from mindspore.ops import TensorSummary
>>> from mindarmour.fuzz_testing import KMultisectionNeuronCoverage
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.conv1 = nn.Conv2d(1, 6, 5, padding=0, weight_init=TruncatedNormal(0.02), pad_mode="valid")
...         self.conv2 = nn.Conv2d(6, 16, 5, padding=0, weight_init=TruncatedNormal(0.02), pad_mode="valid")
...         self.fc1 = nn.Dense(16 * 5 * 5, 120, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.fc2 = nn.Dense(120, 84, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.fc3 = nn.Dense(84, 10, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.relu = nn.ReLU()
...         self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
...         self.reshape = P.Reshape()
...         self.summary = TensorSummary()
...     def construct(self, x):
...         x = self.conv1(x)
...         x = self.relu(x)
...         self.summary('conv1', x)
...         x = self.max_pool2d(x)
...         x = self.conv2(x)
...         x = self.relu(x)
...         self.summary('conv2', x)
...         x = self.max_pool2d(x)
...         x = self.reshape(x, (-1, 16 * 5 * 5))
...         x = self.fc1(x)
...         x = self.relu(x)
...         self.summary('fc1', x)
...         x = self.fc2(x)
...         x = self.relu(x)
...         self.summary('fc2', x)
...         x = self.fc3(x)
...         self.summary('fc3', x)
...         return x
>>> net = Net()
>>> model = Model(net)
>>> batch_size = 8
>>> num_classes = 10
>>> train_images = np.random.rand(32, 1, 32, 32).astype(np.float32)
>>> test_images = np.random.rand(batch_size, 1, 32, 32).astype(np.float32)
>>> kmnc = KMultisectionNeuronCoverage(model, train_images, segmented_num=100)
>>> metrics = kmnc.get_metrics(test_images)
class mindarmour.fuzz_testing.SensitivityConvergenceCoverage(model, threshold=0.5, incremental=False, batch_size=32, selected_neurons_num=100, n_iter=1000)[source]

Get the metric of sensitivity convergence coverage. SCC measures the proportion of neurons whose output changes converge to a normal distribution.
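A heavily hedged NumPy sketch of the SCC idea: for each neuron, collect the changes in its output under repeated input mutations and check whether those changes look normally distributed. Here the normality check is a crude skew/kurtosis criterion chosen for illustration; the library's actual convergence test may differ, and all arrays are randomly generated stand-ins.

```python
import numpy as np

# Assumed per-neuron output deltas across n_iter mutated inputs.
rng = np.random.default_rng(6)
num_neurons, n_iter = 100, 1000
deltas = rng.normal(size=(n_iter, num_neurons))

def looks_normal(x, tol=0.5):
    """Crude normality check: small sample skewness and excess kurtosis."""
    z = (x - x.mean()) / x.std()
    skew = np.mean(z ** 3)
    kurt = np.mean(z ** 4) - 3.0
    return abs(skew) < tol and abs(kurt) < tol

converged = np.array([looks_normal(deltas[:, j]) for j in range(num_neurons)])
scc = converged.mean()  # fraction of neurons whose deltas look normal
```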

Parameters:
  • model (Model) - The pre-trained model to be tested.

  • threshold (float) - Neuron convergence threshold. Default: 0.5.

  • incremental (bool) - Whether metrics are calculated incrementally. Default: False.

  • batch_size (int) - Number of samples in a fuzz-testing batch. Default: 32.

  • selected_neurons_num (int) - Number of neurons selected during fuzz testing. Default: 100.

  • n_iter (int) - Maximum number of tests during fuzz testing. Default: 1000.

get_metrics(dataset)[source]

Get the metric of 'neuron convergence coverage'.

Parameters:
  • dataset (numpy.ndarray) - Dataset used to calculate coverage metrics.

Returns:
  • float - Metric of 'neuron convergence coverage'.

Examples:

>>> import numpy as np
>>> from mindspore import nn
>>> from mindspore.common.initializer import TruncatedNormal
>>> from mindspore.ops import operations as P
>>> from mindspore.train import Model
>>> from mindspore.ops import TensorSummary
>>> from mindarmour.fuzz_testing import SensitivityConvergenceCoverage
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.conv1 = nn.Conv2d(1, 6, 5, padding=0, weight_init=TruncatedNormal(0.02), pad_mode="valid")
...         self.conv2 = nn.Conv2d(6, 16, 5, padding=0, weight_init=TruncatedNormal(0.02), pad_mode="valid")
...         self.fc1 = nn.Dense(16 * 5 * 5, 120, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.fc2 = nn.Dense(120, 84, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.fc3 = nn.Dense(84, 10, TruncatedNormal(0.02), TruncatedNormal(0.02))
...         self.relu = nn.ReLU()
...         self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
...         self.reshape = P.Reshape()
...         self.summary = TensorSummary()
...     def construct(self, x):
...         x = self.conv1(x)
...         x = self.relu(x)
...         self.summary('conv1', x)
...         x = self.max_pool2d(x)
...         x = self.conv2(x)
...         x = self.relu(x)
...         self.summary('conv2', x)
...         x = self.max_pool2d(x)
...         x = self.reshape(x, (-1, 16 * 5 * 5))
...         x = self.fc1(x)
...         x = self.relu(x)
...         self.summary('fc1', x)
...         x = self.fc2(x)
...         x = self.relu(x)
...         self.summary('fc2', x)
...         x = self.fc3(x)
...         self.summary('fc3', x)
...         return x
>>> net = Net()
>>> model = Model(net)
>>> batch_size = 32
>>> num_classes = 10
>>> train_images = np.random.rand(32, 1, 32, 32).astype(np.float32)
>>> test_images = np.random.rand(batch_size, 1, 32, 32).astype(np.float32)
>>> test_labels = np.random.randint(num_classes, size=batch_size).astype(np.int32)
>>> test_labels = (np.eye(num_classes)[test_labels]).astype(np.float32)
>>> initial_seeds = []
>>> # make initial seeds
>>> for img, label in zip(test_images, test_labels):
...     initial_seeds.append([img, label])
>>> initial_seeds = initial_seeds[:batch_size]
>>> SCC = SensitivityConvergenceCoverage(model, batch_size=batch_size)
>>> metrics = SCC.get_metrics(test_images)