mindspore_gl.nn.GlobalAttentionPooling

class mindspore_gl.nn.GlobalAttentionPooling(gate_nn, feat_nn=None)[source]

Apply global attention pooling to the nodes in the graph, from the paper Gated Graph Sequence Neural Networks.

\[r^{(i)} = \sum_{k=1}^{N_i}\mathrm{softmax}\left(f_{gate} \left(x^{(i)}_k\right)\right) f_{feat}\left(x^{(i)}_k\right)\]
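As a rough, non-authoritative illustration of this formula, the readout can be sketched in plain NumPy. The callables gate_fn and feat_fn below are hypothetical stand-ins for gate_nn and feat_nn, and graph_ids assigns each node to its graph:

    import numpy as np

    def global_attention_pool(x, graph_ids, gate_fn, feat_fn=None):
        # f_feat(x_k): optional transform of the node features
        feat = feat_fn(x) if feat_fn is not None else x
        # f_gate(x_k): one attention score per node, shape (N, 1)
        scores = gate_fn(x)
        readouts = []
        for g in np.unique(graph_ids):
            mask = graph_ids == g
            s = scores[mask]
            # softmax over the nodes belonging to graph g
            w = np.exp(s - s.max()) / np.exp(s - s.max()).sum()
            # weighted sum of the (transformed) features: r^{(g)}
            readouts.append((w * feat[mask]).sum(axis=0))
        return np.stack(readouts)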
Parameters
  • gate_nn (Cell) – The neural network that computes an attention score for each feature.

  • feat_nn (Cell, optional) – The neural network applied to each feature before it is combined with its attention score. Default: None.

Inputs:
  • x (Tensor) - The input node features. The shape is \((N, D)\) where \(N\) is the number of nodes and \(D\) is the feature size of nodes.

  • g (BatchedGraph) - The input graph.

Outputs:
  • x (Tensor) - The output representation for the graphs. The shape is \((B, D_{out})\) where \(B\) is the number of graphs in the batch and \(D_{out}\) is the output feature size (equal to \(D\) when feat_nn is None).

Raises

TypeError – If gate_nn or feat_nn is not a mindspore.nn.Cell.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore_gl.nn import GlobalAttentionPooling
>>> from mindspore_gl import BatchedGraphField
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges, ver_subgraph_idx,
...                                         edge_subgraph_idx, graph_mask)
>>> node_feat = np.random.random((n_nodes, 4))
>>> node_feat = ms.Tensor(node_feat, ms.float32)
>>> gate_nn = ms.nn.Dense(4, 1)
>>> net = GlobalAttentionPooling(gate_nn)
>>> ret = net(node_feat, *batched_graph_field.get_batched_graph())
>>> print(ret.shape)
(2, 4)
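
A feature network can be supplied as well. The following is a minimal sketch that reuses the batched graph above and assumes a Dense layer as feat_nn, so the output feature size follows feat_nn:

>>> feat_nn = ms.nn.Dense(4, 8)
>>> net_with_feat = GlobalAttentionPooling(gate_nn, feat_nn)
>>> ret_with_feat = net_with_feat(node_feat, *batched_graph_field.get_batched_graph())
>>> print(ret_with_feat.shape)
(2, 8)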