mindarmour.fuzzing

This module provides a fuzz testing framework and coverage metrics for testing deep neural networks (DNNs).

class mindarmour.fuzzing.Fuzzing(initial_seeds, target_model, train_dataset, const_K, mode='L', max_seed_num=1000)[source]

Fuzzing test framework for deep neural networks.

Reference: DeepHunter: A Coverage-Guided Fuzz Testing Framework for Deep Neural Networks

Parameters
  • initial_seeds (list) – Initial fuzzing seeds, in the format [[image, label], [image, label], …] (see the sketch after this parameter list).

  • target_model (Model) – Target fuzz model.

  • train_dataset (numpy.ndarray) – Training dataset used for determining the neurons’ output boundaries.

  • const_K (int) – The number of mutated tests generated for each seed.

  • mode (str) – Image mode used in image transform; ‘L’ means grayscale image. Default: ‘L’.

  • max_seed_num (int) – The maximum number of initial seeds. Default: 1000.
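
A minimal sketch of building initial_seeds from NumPy data (the image shape, the value range and the variable names images and labels are illustrative assumptions, not part of the API):

>>> import numpy as np
>>> images = np.random.random((100, 28, 28)).astype(np.float32)   # hypothetical grayscale images in [0, 1]
>>> labels = np.random.randint(0, 10, size=100)                   # hypothetical integer labels
>>> initial_seeds = [[img, label] for img, label in zip(images, labels)]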

fuzzing(coverage_metric='KMNC')[source]

Fuzzing tests for deep neural networks.

Parameters

coverage_metric (str) – Coverage metric of the target neural network model. Default: ‘KMNC’.

Returns

list, mutated tests that are mis-predicted by the target DNN model.
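
A minimal usage sketch (the network net, the training array train_images and the initial_seeds list are assumed to exist already; const_K=20 is an arbitrary choice):

>>> from mindspore.train.model import Model
>>> from mindarmour.fuzzing import Fuzzing
>>> model = Model(net)
>>> model_fuzz_test = Fuzzing(initial_seeds, model, train_images, 20, mode='L', max_seed_num=1000)
>>> failed_tests = model_fuzz_test.fuzzing(coverage_metric='KMNC')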

class mindarmour.fuzzing.ModelCoverageMetrics(model, segmented_num, neuron_num, train_dataset)[source]

After training, each neuron of a network has a fixed output range (which we call the original range), and a test dataset is used to estimate the accuracy of the trained network. However, the neurons’ output distribution differs from one test dataset to another. Therefore, analogous to function fuzzing, model fuzzing tests the neurons’ outputs and estimates the proportion of the original range that is covered by a given test dataset.

Reference: DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems
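
To make the three metrics concrete, the following NumPy sketch illustrates the definitions from the DeepGauge reference (illustrative only, not MindArmour’s internal implementation; the array shapes and k=10 are assumptions): the training data fixes each neuron’s [low, high] output range, the range is split into k sections, and test-time outputs are checked against those sections and boundaries.

>>> import numpy as np
>>> k = 10
>>> train_out = np.random.rand(1000, 32)            # neuron outputs on the training set (samples x neurons)
>>> test_out = np.random.rand(200, 32) * 1.2 - 0.1  # neuron outputs on the test set
>>> low, high = train_out.min(axis=0), train_out.max(axis=0)
>>> in_range = (test_out >= low) & (test_out <= high)
>>> sec = np.clip(((test_out - low) / (high - low + 1e-12) * k).astype(int), 0, k - 1)
>>> hit = np.zeros((k, train_out.shape[1]), dtype=bool)
>>> for n in range(train_out.shape[1]):
...     hit[sec[in_range[:, n], n], n] = True
>>> kmnc = hit.mean()                       # KMNC: covered sections / (k * number of neurons)
>>> upper = (test_out > high).any(axis=0)   # neurons pushed above their training-time maximum
>>> lower = (test_out < low).any(axis=0)    # neurons pushed below their training-time minimum
>>> nbc = (upper.sum() + lower.sum()) / (2.0 * train_out.shape[1])   # neuron boundary coverage
>>> snac = upper.mean()                     # strong neuron activation coverage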

Parameters
  • model (Model) – The pre-trained model to be tested.

  • segmented_num (int) – The number of segmented sections of neurons’ output intervals.

  • neuron_num (int) – The number of neurons to be tested.

  • train_dataset (numpy.ndarray) – Training dataset used for determining the neurons’ output boundaries.

Raises

ValueError – If neuron_num is too large (for example, larger than 1e+9).

Examples

>>> import numpy as np
>>> from mindspore.train.model import Model
>>> from mindarmour.fuzzing import ModelCoverageMetrics
>>> train_images = np.random.random((10000, 128)).astype(np.float32)
>>> test_images = np.random.random((5000, 128)).astype(np.float32)
>>> model = Model(net)
>>> model_fuzz_test = ModelCoverageMetrics(model, 10000, 10, train_images)
>>> model_fuzz_test.test_adequacy_coverage_calculate(test_images)
>>> print('KMNC of this test is : %s' % model_fuzz_test.get_kmnc())
>>> print('NBC of this test is : %s' % model_fuzz_test.get_nbc())
>>> print('SNAC of this test is : %s' % model_fuzz_test.get_snac())
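
In this example, test_adequacy_coverage_calculate is run on the test data first; the subsequent get_kmnc, get_nbc and get_snac calls then return the coverage computed for that dataset.
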
get_kmnc()[source]

Get the metric of ‘k-multisection neuron coverage’.

Returns

float, the metric of ‘k-multisection neuron coverage’.

Examples

>>> model_fuzz_test.get_kmnc()
get_nbc()[source]

Get the metric of ‘neuron boundary coverage’.

Returns

float, the metric of ‘neuron boundary coverage’.

Examples

>>> model_fuzz_test.get_nbc()
get_snac()[source]

Get the metric of ‘strong neuron activation coverage’.

Returns

float, the metric of ‘strong neuron activation coverage’.

Examples

>>> model_fuzz_test.get_snac()
test_adequacy_coverage_calculate(dataset, bias_coefficient=0, batch_size=32)[source]

Calculate the testing adequacy of the given dataset.

Parameters
  • dataset (numpy.ndarray) – Data for fuzz test.

  • bias_coefficient (float) – The coefficient used for changing the neurons’ output boundaries. Default: 0.

  • batch_size (int) – The number of samples in a predict batch. Default: 32.

Examples

>>> model_fuzz_test = ModelCoverageMetrics(model, 10000, 10, train_images)
>>> model_fuzz_test.test_adequacy_coverage_calculate(test_images)