mindspore.jacfwd

mindspore.jacfwd(fn, grad_position=0, has_aux=False)

Compute the Jacobian via forward-mode automatic differentiation. When the number of outputs is much greater than the number of inputs, computing the Jacobian in forward mode is more efficient than in reverse mode.
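
As a minimal sketch, assuming a plain Python function and a single (2, 2) Tensor input (the helper name cube is illustrative only): the returned Jacobian places the output axes before the input axes, so a map from a (2, 2) Tensor to a (2, 2) Tensor produces a Jacobian of shape (2, 2, 2, 2), consistent with the example further below.

>>> import numpy as np
>>> from mindspore import Tensor, jacfwd
>>> def cube(x):
...     return x ** 3
>>> x = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> jac = jacfwd(cube)(x)      # Jacobian with respect to the single input x
>>> print(jac.shape)           # output axes first, then input axes
(2, 2, 2, 2)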

Parameters
  • fn (Union[Cell, Function]) – The function or Cell to be differentiated.

  • grad_position (Union[int, tuple[int]], optional) – If int, compute the Jacobian with respect to the single input at that position. If tuple, compute the Jacobians with respect to the inputs at the selected positions. Positions start from 0. Default: 0.

  • has_aux (bool, optional) – If True, only the first output of fn contributes to the differentiation of fn, and the remaining outputs are returned directly. In this case, fn must return more than one output. Default: False.

Returns

Function, the Jacobian function for the input function or Cell. For example, given out1, out2 = fn(*args), when has_aux is True the returned function yields (Jacobian, out2), where out2 does not participate in the differentiation; otherwise it yields only the Jacobian.

Raises

TypeError – If grad_position or has_aux is not of a required type.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import jacfwd
>>> from mindspore import Tensor
>>> class MultipleInputsMultipleOutputsNet(nn.Cell):
...     def construct(self, x, y, z):
...         return x ** 2 + y ** 2 + z ** 2, x * y * z
>>> x = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> y = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))
>>> z = Tensor(np.array([[1, 1], [1, 1]]).astype(np.float32))
>>> net = MultipleInputsMultipleOutputsNet()
>>> jac, aux = jacfwd(net, grad_position=0, has_aux=True)(x, y, z)
>>> print(jac)
[[[[ 2.  0.]
   [ 0.  0.]]
  [[ 0.  4.]
   [ 0.  0.]]]
 [[[ 0.  0.]
   [ 6.  0.]]
  [[ 0.  0.]
   [ 0.  8.]]]]
>>> print(aux)
[[ 1.  4.]
 [ 9. 16.]]
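
As a further hedged sketch with the same network: passing a tuple to grad_position is expected to return one Jacobian per selected input, while aux is returned unchanged (the exact nesting of the returned value is an assumption here, not stated on this page).

>>> jacs, aux = jacfwd(net, grad_position=(0, 1), has_aux=True)(x, y, z)
>>> # jacs is expected to hold one Jacobian per selected input (x and y),
>>> # each of shape (2, 2, 2, 2) as in the example above.
>>> print(aux)
[[ 1.  4.]
 [ 9. 16.]]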