mindarmour.diff_privacy
This module provides Differential Privacy features to protect user privacy.
- class mindarmour.diff_privacy.AdaGaussianRandom(norm_bound=1.5, initial_noise_multiplier=5.0, alpha=0.0006, decay_policy='Time')[source]
Adaptive Gaussian noise generation mechanism.
- Parameters
norm_bound (float) – Clipping bound for the l2 norm of the gradients. Default: 1.5.
initial_noise_multiplier (float) – Ratio of the standard deviation of Gaussian noise divided by the norm_bound, which will be used to calculate privacy spent. Default: 5.0.
alpha (float) – Hyperparameter for controlling the noise decay. Default: 6e-4.
decay_policy (str) – Noise decay strategy; supported values are ‘Step’ and ‘Time’. Default: ‘Time’.
- Returns
Tensor, generated noise.
Examples
>>> shape = (3, 2, 4)
>>> norm_bound = 1.0
>>> initial_noise_multiplier = 0.1
>>> alpha = 0.5
>>> decay_policy = "Time"
>>> net = AdaGaussianRandom(norm_bound, initial_noise_multiplier,
>>>                         alpha, decay_policy)
>>> res = net(shape)
>>> print(res)
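The decay policies control how the noise magnitude shrinks as training proceeds. The library's exact update rule is internal; a minimal numpy sketch of the idea, assuming a simple multiplicative ('Time'-style) decay of the standard deviation (illustrative only, not MindArmour's formula):

```python
import numpy as np

def ada_gaussian_noise(shape, norm_bound=1.5, initial_noise_multiplier=5.0,
                       alpha=6e-4, steps=0):
    """Toy adaptive Gaussian noise: the standard deviation starts at
    norm_bound * initial_noise_multiplier and shrinks by a factor of
    (1 - alpha) per step. Illustration of the decay concept only."""
    stddev = norm_bound * initial_noise_multiplier * (1.0 - alpha) ** steps
    return np.random.normal(0.0, stddev, size=shape)

noise = ada_gaussian_noise((3, 2, 4), steps=100)
```

With alpha around 6e-4, the noise decays slowly, so early training steps are perturbed much more strongly than later ones.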
- class mindarmour.diff_privacy.DPModel(micro_batches=2, norm_clip=1.0, dp_mech=None, **kwargs)[source]
This class overloads mindspore.train.model.Model to support differentially private training.
- Parameters
micro_batches (int) – The number of small batches split from an original batch. Default: 2.
norm_clip (float) – Clipping bound for the l2 norm of the gradients. Default: 1.0.
dp_mech (Mechanisms) – The noise mechanism used to perturb gradients during training. Default: None.
Examples
>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.conv = nn.Conv2d(3, 64, 3, has_bias=False, weight_init='normal')
>>>         self.bn = nn.BatchNorm2d(64)
>>>         self.relu = nn.ReLU()
>>>         self.flatten = nn.Flatten()
>>>         self.fc = nn.Dense(64*224*224, 12)  # padding=0
>>>
>>>     def construct(self, x):
>>>         x = self.conv(x)
>>>         x = self.bn(x)
>>>         x = self.relu(x)
>>>         x = self.flatten(x)
>>>         out = self.fc(x)
>>>         return out
>>>
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True)
>>> optim = Momentum(params=net.trainable_params(), learning_rate=0.01, momentum=0.9)
>>> gaussian_mech = DPOptimizerClassFactory()
>>> gaussian_mech.set_mechanisms('Gaussian',
>>>                              norm_bound=args.l2_norm_bound,
>>>                              initial_noise_multiplier=args.initial_noise_multiplier)
>>> model = DPModel(micro_batches=2,
>>>                 norm_clip=1.0,
>>>                 dp_mech=gaussian_mech.mech,
>>>                 network=net,
>>>                 loss_fn=loss,
>>>                 optimizer=optim,
>>>                 metrics=None)
>>> dataset = get_dataset()
>>> model.train(2, dataset)
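The micro_batches and norm_clip parameters correspond to the standard DP-SGD recipe: the batch is split into micro-batches, each micro-batch gradient is clipped to norm_clip in l2 norm, Gaussian noise is added to the sum, and the result is averaged. A self-contained numpy sketch of one such step (a conceptual illustration, not DPModel's internal code):

```python
import numpy as np

def dp_sgd_step(micro_batch_grads, norm_clip=1.0, noise_multiplier=1.5,
                rng=None):
    """Clip each micro-batch gradient to l2 norm `norm_clip`, sum them,
    add Gaussian noise with std = noise_multiplier * norm_clip, and
    average over the number of micro-batches."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in micro_batch_grads:
        norm = np.linalg.norm(g)
        # Scale down only if the gradient exceeds the clipping bound.
        clipped.append(g / max(1.0, norm / norm_clip))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * norm_clip, size=total.shape)
    return (total + noise) / len(micro_batch_grads)

grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
g = dp_sgd_step(grads)
```

Clipping bounds the influence of any single example on the update, which is what makes the added noise yield a differential privacy guarantee.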
- class mindarmour.diff_privacy.DPOptimizerClassFactory(micro_batches=2)[source]
Factory class of Optimizer.
- Parameters
micro_batches (int) – The number of small batches split from an original batch. Default: 2.
- Returns
Optimizer, an optimizer class wrapped with the chosen noise mechanism.
Examples
>>> GaussianSGD = DPOptimizerClassFactory(micro_batches=2)
>>> GaussianSGD.set_mechanisms('Gaussian', norm_bound=1.0, initial_noise_multiplier=1.5)
>>> net_opt = GaussianSGD.create('Momentum')(params=network.trainable_params(),
>>>                                          learning_rate=cfg.lr,
>>>                                          momentum=cfg.momentum)
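The factory returns an optimizer class whose gradient handling is augmented with noise, rather than an optimizer instance. A self-contained Python sketch of this pattern, using a hypothetical plain-Python SGD stand-in instead of a MindSpore optimizer:

```python
import numpy as np

class SGD:
    """Hypothetical stand-in optimizer: params -= lr * grads, in place."""
    def __init__(self, params, learning_rate=0.01):
        self.params, self.lr = params, learning_rate

    def __call__(self, grads):
        for p, g in zip(self.params, grads):
            p -= self.lr * g

def dp_optimizer_factory(base_cls, norm_bound=1.0, initial_noise_multiplier=1.5):
    """Return a subclass of base_cls that perturbs gradients with Gaussian
    noise before delegating the update to the base optimizer."""
    class DPOptimizer(base_cls):
        def __call__(self, grads):
            std = norm_bound * initial_noise_multiplier
            noisy = [g + np.random.normal(0.0, std, size=np.shape(g))
                     for g in grads]
            super().__call__(noisy)
    return DPOptimizer

DPSGD = dp_optimizer_factory(SGD)        # a class, like create('Momentum')
opt = DPSGD([np.zeros(2)], learning_rate=0.1)
opt([np.ones(2)])
```

Returning a class keeps the factory compatible with any code that expects to instantiate the optimizer itself with its usual constructor arguments.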
- class mindarmour.diff_privacy.GaussianRandom(norm_bound=1.0, initial_noise_multiplier=1.5)[source]
Gaussian noise generation mechanism.
- Parameters
norm_bound (float) – Clipping bound for the l2 norm of the gradients. Default: 1.0.
initial_noise_multiplier (float) – Ratio of the standard deviation of Gaussian noise divided by the norm_bound, which will be used to calculate privacy spent. Default: 1.5.
- Returns
Tensor, generated noise.
Examples
>>> shape = (3, 2, 4)
>>> norm_bound = 1.0
>>> initial_noise_multiplier = 1.5
>>> net = GaussianRandom(norm_bound, initial_noise_multiplier)
>>> res = net(shape)
>>> print(res)
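As the parameter descriptions imply, the noise standard deviation is the product norm_bound * initial_noise_multiplier. A minimal numpy equivalent of the sampling step (illustrative only, not the library implementation):

```python
import numpy as np

def gaussian_random(shape, norm_bound=1.0, initial_noise_multiplier=1.5):
    """Sample Gaussian noise with std = norm_bound * initial_noise_multiplier."""
    stddev = norm_bound * initial_noise_multiplier
    return np.random.normal(0.0, stddev, size=shape)

res = gaussian_random((3, 2, 4))
```

Unlike AdaGaussianRandom, this mechanism keeps the noise level fixed for the whole of training.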
- class mindarmour.diff_privacy.MechanismsFactory[source]
Factory class of noise mechanisms.
- class mindarmour.diff_privacy.PrivacyMonitorFactory[source]
Factory class of DP training’s privacy monitor.