mindarmour.diff_privacy

This module provides Differential Privacy features to protect user privacy.
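
A minimal usage sketch of the module's two factory entry points (the variable names are illustrative; parameter values follow the class examples documented below):

>>> from mindarmour.diff_privacy import MechanismsFactory, PrivacyMonitorFactory
>>> # create a Gaussian noise mechanism for DP training
>>> noise_mech = MechanismsFactory.create('Gaussian',
>>>                                       norm_bound=1.0,
>>>                                       initial_noise_multiplier=1.5)
>>> # create a privacy monitor with the 'rdp' policy to track the privacy spent during training
>>> rdp_monitor = PrivacyMonitorFactory.create(policy='rdp',
>>>                                            num_samples=60000, batch_size=32)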

class mindarmour.diff_privacy.AdaGaussianRandom(norm_bound=1.5, initial_noise_multiplier=5.0, alpha=0.0006, decay_policy='Time')[source]

Adaptive Gaussian noise generation mechanism.

Parameters
  • norm_bound (float) – Clipping bound for the l2 norm of the gradients. Default: 1.5.

  • initial_noise_multiplier (float) – Ratio of the standard deviation of the Gaussian noise to norm_bound, which will be used to calculate the privacy spent. Default: 5.0.

  • alpha (float) – Hyperparameter for controlling the noise decay. Default: 6e-4.

  • decay_policy (str) – Noise decay strategy; options are 'Step' and 'Time'. Default: 'Time'.

Returns

Tensor, generated noise.

Examples

>>> shape = (3, 2, 4)
>>> norm_bound = 1.0
>>> initial_noise_multiplier = 0.1
>>> alpha = 0.5
>>> decay_policy = "Time"
>>> net = AdaGaussianRandom(norm_bound, initial_noise_multiplier,
>>>                         alpha, decay_policy)
>>> res = net(shape)
>>> print(res)
construct(shape)[source]

Generate adaptive Gaussian noise.

Parameters

shape (tuple) – The shape of gradients.

Returns

Tensor, generated noise.

class mindarmour.diff_privacy.DPModel(micro_batches=2, norm_clip=1.0, dp_mech=None, **kwargs)[source]

This class overloads mindspore.train.model.Model.

Parameters
  • micro_batches (int) – The number of small batches split from an original batch. Default: 2.

  • norm_clip (float) – Clipping bound for the l2 norm of the gradients; if set to 1, the original data will be returned. Default: 1.0.

  • dp_mech (Mechanisms) – The object that generates the chosen type of noise. Default: None.

Examples

>>> import mindspore.nn as nn
>>> from mindspore.nn import Momentum
>>> from mindarmour.diff_privacy import DPModel, DPOptimizerClassFactory
>>>
>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.conv = nn.Conv2d(3, 64, 3, has_bias=False, weight_init='normal')
>>>         self.bn = nn.BatchNorm2d(64)
>>>         self.relu = nn.ReLU()
>>>         self.flatten = nn.Flatten()
>>>         self.fc = nn.Dense(64*224*224, 12) # padding=0
>>>
>>>     def construct(self, x):
>>>         x = self.conv(x)
>>>         x = self.bn(x)
>>>         x = self.relu(x)
>>>         x = self.flatten(x)
>>>         out = self.fc(x)
>>>         return out
>>>
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True)
>>> optim = Momentum(params=net.trainable_params(), learning_rate=0.01, momentum=0.9)
>>> gaussian_mech = DPOptimizerClassFactory()
>>> gaussian_mech.set_mechanisms('Gaussian',
>>>                             norm_bound=args.l2_norm_bound,
>>>                             initial_noise_multiplier=args.initial_noise_multiplier)
>>> model = DPModel(micro_batches=2,
>>>                 norm_clip=1.0,
>>>                 dp_mech=gaussian_mech.mech,
>>>                 network=net,
>>>                 loss_fn=loss,
>>>                 optimizer=optim,
>>>                 metrics=None)
>>> dataset = get_dataset()
>>> model.train(2, dataset)
class mindarmour.diff_privacy.DPOptimizerClassFactory(micro_batches=2)[source]

Factory class of Optimizer.

Parameters

micro_batches (int) – The number of small batches split from an original batch. Default: 2.

Returns

Optimizer, Optimizer class

Examples

>>> GaussianSGD = DPOptimizerClassFactory(micro_batches=2)
>>> GaussianSGD.set_mechanisms('Gaussian', norm_bound=1.0, initial_noise_multiplier=1.5)
>>> net_opt = GaussianSGD.create('Momentum')(params=network.trainable_params(),
>>>                                     learning_rate=cfg.lr,
>>>                                     momentum=cfg.momentum)
create(policy, *args, **kwargs)[source]

Create DP optimizer.

Parameters

policy (str) – Choose original optimizer type.

Returns

Optimizer, an optimizer with DP.

set_mechanisms(policy, *args, **kwargs)[source]

Get noise mechanism object.

Parameters

policy (str) – Choose mechanism type.

class mindarmour.diff_privacy.GaussianRandom(norm_bound=1.0, initial_noise_multiplier=1.5)[source]

Gaussian noise generation mechanism.

Parameters
  • norm_bound (float) – Clipping bound for the l2 norm of the gradients. Default: 1.0.

  • initial_noise_multiplier (float) – Ratio of the standard deviation of the Gaussian noise to norm_bound, which will be used to calculate the privacy spent. Default: 1.5.

Returns

Tensor, generated noise.

Examples

>>> shape = (3, 2, 4)
>>> norm_bound = 1.0
>>> initial_noise_multiplier = 1.5
>>> net = GaussianRandom(norm_bound, initial_noise_multiplier)
>>> res = net(shape)
>>> print(res)
construct(shape)[source]

Generate Gaussian noise.

Parameters

shape (tuple) – The shape of gradients.

Returns

Tensor, generated noise.

class mindarmour.diff_privacy.MechanismsFactory[source]

Factory class of noise mechanisms.

static create(policy, *args, **kwargs)[source]
Parameters
  • policy (str) – Noise generation strategy, could be 'Gaussian' or 'AdaGaussian'. Default: 'AdaGaussian'.

  • args (Union[float, str]) – Parameters used for creating noise mechanisms.

  • kwargs (Union[float, str]) – Parameters used for creating noise mechanisms.

Raises

NameError – policy must be in ['Gaussian', 'AdaGaussian'].

Returns

Mechanisms, the created noise generation mechanism.
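
Examples

A minimal sketch, assuming the Gaussian mechanism parameters documented above; the created mechanism is called with a shape tuple, as in the GaussianRandom example:

>>> noise_mech = MechanismsFactory.create('Gaussian',
>>>                                       norm_bound=1.0,
>>>                                       initial_noise_multiplier=1.5)
>>> res = noise_mech((3, 2, 4))
>>> print(res)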

class mindarmour.diff_privacy.PrivacyMonitorFactory[source]

Factory class of DP training’s privacy monitor.

static create(policy, *args, **kwargs)[source]

Create a privacy monitor class.

Parameters
  • policy (str) – Monitor policy; 'rdp' is supported.

  • args (Union[int, float]) – Parameters used for creating a privacy monitor.

  • kwargs (Union[int, float]) – Parameters used for creating a privacy monitor.

Returns

Callback, a privacy monitor.

Examples

>>> rdp = PrivacyMonitorFactory.create(policy='rdp',
>>>                                    num_samples=60000, batch_size=32)