class mindspore.ops.HookBackward(hook_fn, cell_id='')[source]

This operation is used as a tag to hook the gradient of intermediate variables. Note that this operation is only supported in PyNative mode.


The hook function must be defined as hook_fn(grad) -> Tensor or None, where grad is the gradient passed to the primitive. The gradient may be modified by returning a new Tensor, which is then passed to the next primitive. The difference between a hook function and the callback of InsertGradientOf is that a hook function is executed in the Python environment, while the callback is parsed and added to the graph.
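As a plain-Python sketch of this contract (the function names are illustrative, not part of the API): a hook that returns a value replaces the gradient passed upstream, while a hook that returns None only observes it.

```python
def scale_grad_hook(grad):
    """Illustrative hook: modifies the gradient.

    In MindSpore, `grad` would be the gradient Tensor flowing into the
    hooked operation; the returned value replaces it for the next primitive.
    """
    return grad * 0.5


def log_grad_hook(grad):
    """Illustrative hook: observes the gradient without changing it.

    Returning None keeps the original gradient.
    """
    print(grad)
    return None
```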

Parameters:

  • hook_fn (Function) – Python function; the hook function to register.

  • cell_id (str) – Used to identify whether the function registered by the hook is actually registered on the specified cell object; for example, ‘nn.Add’ is a cell object. The default value of cell_id is the empty string (“”), in which case the system automatically assigns a value of cell_id during registration. Custom values of cell_id are currently not supported.

Inputs:

  • inputs (Tensor) – The variable to hook.

Raises:

  • TypeError – If inputs is not a Tensor.

  • TypeError – If hook_fn is not a Python function.

Supported Platforms:

Ascend GPU CPU


Examples:

>>> import mindspore
>>> from mindspore import context
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> from mindspore.ops import GradOperation
>>> context.set_context(mode=context.PYNATIVE_MODE, device_target="GPU")
>>> def hook_fn(grad_out):
...     print(grad_out)
>>> grad_all = GradOperation(get_all=True)
>>> hook = ops.HookBackward(hook_fn)
>>> def hook_test(x, y):
...     z = x * y
...     z = hook(z)
...     z = z * y
...     return z
>>> def backward(x, y):
...     return grad_all(hook_test)(x, y)
>>> output = backward(Tensor(1, mindspore.float32), Tensor(2, mindspore.float32))
(Tensor(shape=[], dtype=Float32, value= 2),)
>>> print(output)
(Tensor(shape=[], dtype=Float32, value= 4), Tensor(shape=[], dtype=Float32, value= 4))
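As a sanity check on the printed values: the composed function is f(x, y) = (x·y)·y, so the gradient reaching the hook point z = x·y is ∂f/∂z = y = 2, and the final gradients are ∂f/∂x = y² = 4 and ∂f/∂y = 2·x·y = 4, matching the output above. A plain-Python verification, independent of MindSpore:

```python
x, y = 1.0, 2.0
# z = x * y is the hooked value; the final output is f = z * y = x * y * y.
grad_at_hook = y        # df/dz, the value hook_fn prints: 2.0
grad_x = y * y          # df/dx: 4.0
grad_y = 2 * x * y      # df/dy: 4.0
print(grad_at_hook, grad_x, grad_y)
```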