mindarmour.diff_privacy

This module provides Differential Privacy features to protect user privacy.
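
The classes documented below can be imported directly from the package. A minimal import sketch (names taken from this page):

>>> from mindarmour.diff_privacy import GaussianRandom, AdaGaussianRandom
>>> from mindarmour.diff_privacy import MechanismsFactory
>>> from mindarmour.diff_privacy import DPOptimizerClassFactory
>>> from mindarmour.diff_privacy import DPModel, PrivacyMonitorFactory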

class mindarmour.diff_privacy.AdaGaussianRandom(norm_bound=1.0, initial_noise_multiplier=1.5, mean=0.0, noise_decay_rate=0.0006, decay_policy='Time', seed=0)[source]

Adaptive Gaussian noise generation mechanism. The noise is decayed as training proceeds. The decay mode can be ‘Time’ or ‘Step’.

Parameters
  • norm_bound (float) – Clipping bound for the l2 norm of the gradients. Default: 1.0.

  • initial_noise_multiplier (float) – Ratio of the standard deviation of the Gaussian noise to norm_bound, which will be used to calculate the privacy spent. Default: 1.5.

  • mean (float) – Average value of the random noise. Default: 0.0.

  • noise_decay_rate (float) – Hyperparameter for controlling the noise decay. Default: 6e-4.

  • decay_policy (str) – Noise decay strategy; the options are ‘Step’ and ‘Time’. Default: ‘Time’.

  • seed (int) – Original random seed. Default: 0.

Returns

Tensor, generated noise with shape like given gradients.
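
The exact decay formulas are implementation details not specified on this page; the snippet below is only an illustrative sketch, assuming a hypothetical hyperbolic schedule for ‘Time’ decay and a hypothetical geometric schedule for ‘Step’ decay of the noise multiplier:

>>> import numpy as np
>>> initial_noise_multiplier = 1.5
>>> noise_decay_rate = 6e-4
>>> steps = np.arange(5)
>>> # Hypothetical 'Time' policy: multiplier shrinks hyperbolically with the step index.
>>> time_decay = initial_noise_multiplier / (1 + noise_decay_rate * steps)
>>> # Hypothetical 'Step' policy: multiplier shrinks geometrically with the step index.
>>> step_decay = initial_noise_multiplier * (1 - noise_decay_rate) ** steps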

Examples

>>> from mindspore import Tensor
>>> from mindspore.common import dtype as mstype
>>> from mindarmour.diff_privacy import AdaGaussianRandom
>>> gradients = Tensor([0.2, 0.9], mstype.float32)
>>> norm_bound = 1.0
>>> initial_noise_multiplier = 1.5
>>> mean = 0.0
>>> noise_decay_rate = 6e-4
>>> decay_policy = "Time"
>>> net = AdaGaussianRandom(norm_bound, initial_noise_multiplier, mean,
>>>                         noise_decay_rate, decay_policy)
>>> res = net(gradients)
>>> print(res)
construct(gradients)[source]

Generate adaptive Gaussian noise.

Parameters

gradients (Tensor) – The gradients.

Returns

Tensor, generated noise with shape like given gradients.

class mindarmour.diff_privacy.DPModel(micro_batches=2, norm_clip=1.0, mech=None, **kwargs)[source]

This class overloads mindspore.train.model.Model.

Parameters
  • micro_batches (int) – The number of small batches split from an original batch. Default: 2.

  • norm_clip (float) – Used to clip the bound of the gradient l2 norm; if set to 1, the original data will be returned. Default: 1.0.

  • mech (Mechanisms) – The object used to generate the different types of noise. Default: None.
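
For intuition, the training step DPModel is built around can be sketched with the standard DP-SGD recipe: split the batch into micro_batches, clip each micro-batch gradient to norm_clip, average, and add noise generated by mech. The NumPy sketch below is a simplified illustration only, not the actual implementation:

>>> import numpy as np
>>> def dp_grad_step(micro_batch_grads, norm_clip, noise_multiplier):
>>>     """Simplified sketch: clip each micro-batch gradient, average, add Gaussian noise."""
>>>     clipped = []
>>>     for g in micro_batch_grads:
>>>         norm = np.linalg.norm(g)
>>>         clipped.append(g * min(1.0, norm_clip / (norm + 1e-12)))
>>>     avg = np.mean(clipped, axis=0)
>>>     noise = np.random.normal(0.0, noise_multiplier * norm_clip, size=avg.shape)
>>>     return avg + noise
>>> grads = [np.array([0.2, 0.9]), np.array([1.5, -0.3])]
>>> update = dp_grad_step(grads, norm_clip=1.0, noise_multiplier=0.01)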

Examples

>>> norm_clip = 1.0
>>> initial_noise_multiplier = 0.01
>>> network = LeNet5()
>>> batch_size = 32
>>> batches = 128
>>> epochs = 1
>>> micro_batches = 2
>>> loss = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True)
>>> factory_opt = DPOptimizerClassFactory(micro_batches=micro_batches)
>>> factory_opt.set_mechanisms('Gaussian',
>>>                            norm_bound=norm_clip,
>>>                            initial_noise_multiplier=initial_noise_multiplier)
>>> net_opt = factory_opt.create('Momentum')(network.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model = DPModel(micro_batches=micro_batches,
>>>                 norm_clip=norm_clip,
>>>                 mech=None,
>>>                 network=network,
>>>                 loss_fn=loss,
>>>                 optimizer=net_opt,
>>>                 metrics=None)
>>> ms_ds = ds.GeneratorDataset(dataset_generator(batch_size, batches), ['data', 'label'])
>>> ms_ds.set_dataset_size(batch_size * batches)
>>> model.train(epochs, ms_ds, dataset_sink_mode=False)
class mindarmour.diff_privacy.DPOptimizerClassFactory(micro_batches=2)[source]

Factory class of Optimizer.

Parameters

micro_batches (int) – The number of small batches split from an original batch. Default: 2.

Returns

Optimizer, Optimizer class.

Examples

>>> GaussianSGD = DPOptimizerClassFactory(micro_batches=2)
>>> GaussianSGD.set_mechanisms('Gaussian', norm_bound=1.0, initial_noise_multiplier=1.5)
>>> net_opt = GaussianSGD.create('Momentum')(params=network.trainable_params(),
>>>                                          learning_rate=cfg.lr,
>>>                                          momentum=cfg.momentum)
create(policy, *args, **kwargs)[source]

Create DP optimizer.

Parameters

policy (str) – The type of the original optimizer to create.

Returns

Optimizer, an optimizer with DP.
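
For example, assuming ‘SGD’ is among the supported original optimizer types (only ‘Momentum’ is shown in the example above, and network stands for any constructed Cell), a DP variant could be created the same way:

>>> factory = DPOptimizerClassFactory(micro_batches=2)
>>> factory.set_mechanisms('Gaussian', norm_bound=1.0, initial_noise_multiplier=1.5)
>>> DPSGD = factory.create('SGD')
>>> net_opt = DPSGD(params=network.trainable_params(), learning_rate=0.1)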

set_mechanisms(policy, *args, **kwargs)[source]

Get noise mechanism object.

Parameters

policy (str) – The type of noise mechanism to use.
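
For illustration, assuming this method forwards its arguments to MechanismsFactory (so that ‘AdaGaussian’ and its keyword arguments are accepted here as well), an adaptive mechanism could be attached like this:

>>> factory = DPOptimizerClassFactory(micro_batches=2)
>>> factory.set_mechanisms('AdaGaussian',
>>>                        norm_bound=1.0,
>>>                        initial_noise_multiplier=1.5,
>>>                        noise_decay_rate=6e-4,
>>>                        decay_policy='Time')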

class mindarmour.diff_privacy.GaussianRandom(norm_bound=0.5, initial_noise_multiplier=1.5, mean=0.0, seed=0)[source]

Gaussian noise generation mechanism.

Parameters
  • norm_bound (float) – Clipping bound for the l2 norm of the gradients. Default: 0.5.

  • initial_noise_multiplier (float) – Ratio of the standard deviation of the Gaussian noise to norm_bound, which will be used to calculate the privacy spent. Default: 1.5.

  • mean (float) – Average value of random noise. Default: 0.0.

  • seed (int) – Original random seed. Default: 0.

Returns

Tensor, generated noise with shape like given gradients.
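
As described above, initial_noise_multiplier is the ratio of the noise standard deviation to norm_bound, so the effective standard deviation of the generated noise is simply their product. A quick check of that relationship with the default values:

>>> norm_bound = 0.5
>>> initial_noise_multiplier = 1.5
>>> stddev = initial_noise_multiplier * norm_bound
>>> print(stddev)
0.75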

Examples

>>> from mindspore import Tensor
>>> from mindspore.common import dtype as mstype
>>> from mindarmour.diff_privacy import GaussianRandom
>>> gradients = Tensor([0.2, 0.9], mstype.float32)
>>> norm_bound = 0.5
>>> initial_noise_multiplier = 1.5
>>> net = GaussianRandom(norm_bound, initial_noise_multiplier)
>>> res = net(gradients)
>>> print(res)
construct(gradients)[source]

Generate Gaussian noise.

Parameters

gradients (Tensor) – The gradients.

Returns

Tensor, generated noise with shape like given gradients.

class mindarmour.diff_privacy.MechanismsFactory[source]

Factory class of mechanisms.

static create(policy, *args, **kwargs)[source]
Parameters
  • policy (str) – Noise generation strategy, either ‘Gaussian’ or ‘AdaGaussian’. With the ‘AdaGaussian’ mechanism the noise is decayed during training, while with the ‘Gaussian’ mechanism it remains constant.

  • args (Union[float, str]) – Parameters used for creating noise mechanisms.

  • kwargs (Union[float, str]) – Parameters used for creating noise mechanisms.

Raises

NameError – policy must be in [‘Gaussian’, ‘AdaGaussian’].

Returns

Mechanisms, class of the noise generation mechanism.

Examples

>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.conv = nn.Conv2d(3, 64, 3, has_bias=False, weight_init='normal')
>>>         self.bn = nn.BatchNorm2d(64)
>>>         self.relu = nn.ReLU()
>>>         self.flatten = nn.Flatten()
>>>         self.fc = nn.Dense(64*224*224, 12) # padding=0
>>>
>>>     def construct(self, x):
>>>         x = self.conv(x)
>>>         x = self.bn(x)
>>>         x = self.relu(x)
>>>         x = self.flatten(x)
>>>         out = self.fc(x)
>>>         return out
>>> norm_clip = 1.0
>>> initial_noise_multiplier = 1.5
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True)
>>> net_opt = Momentum(params=net.trainable_params(), learning_rate=0.01, momentum=0.9)
>>> mech = MechanismsFactory().create('Gaussian',
>>>                                   norm_bound=norm_clip,
>>>                                   initial_noise_multiplier=initial_noise_multiplier)
>>> model = DPModel(micro_batches=2,
>>>                 norm_clip=1.0,
>>>                 mech=mech,
>>>                 network=net,
>>>                 loss_fn=loss,
>>>                 optimizer=net_opt,
>>>                 metrics=None)
>>> dataset = get_dataset()
>>> model.train(2, dataset)
class mindarmour.diff_privacy.PrivacyMonitorFactory[source]

Factory class of DP training’s privacy monitor.

static create(policy, *args, **kwargs)[source]

Create a privacy monitor class.

Parameters
  • policy (str) – Monitor policy; only ‘rdp’ is supported for now. RDP means Rényi differential privacy, which is computed based on the Rényi divergence.

  • args (Union[int, float, numpy.ndarray, list, str]) – Parameters used for creating a privacy monitor.

  • kwargs (Union[int, float, numpy.ndarray, list, str]) – Keyword parameters used for creating a privacy monitor.

Returns

Callback, a privacy monitor.

Examples

>>> rdp = PrivacyMonitorFactory.create(policy='rdp',
>>>                                    num_samples=60000, batch_size=32)
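
Because the returned monitor is a Callback, it is typically passed to model.train through the callbacks argument. A usage sketch, assuming model and ms_ds are built as in the DPModel example above:

>>> model.train(1, ms_ds, callbacks=[rdp], dataset_sink_mode=False)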