mindspore_gl.nn.SAGEConv

class mindspore_gl.nn.SAGEConv(in_feat_size: int, out_feat_size: int, aggregator_type: str = 'pool', bias=True, norm=None, activation=None)[source]

GraphSAGE Layer. From the paper Inductive Representation Learning on Large Graphs .

\[\begin{aligned}h_{\mathcal{N}(i)}^{(l+1)} &= \mathrm{aggregate}\left(\{h_{j}^{l}, \forall j \in \mathcal{N}(i)\}\right) \\h_{i}^{(l+1)} &= \sigma\left(W \cdot \mathrm{concat}(h_{i}^{l}, h_{\mathcal{N}(i)}^{(l+1)})\right) \\h_{i}^{(l+1)} &= \mathrm{norm}(h_{i}^{(l+1)})\end{aligned}\]

If weights are provided on each edge, the weighted graph convolution is defined as:

\[h_{\mathcal{N}(i)}^{(l+1)} = \mathrm{aggregate} \left(\{e_{ji} h_{j}^{l}, \forall j \in \mathcal{N}(i) \}\right)\]
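The update rule above can be sketched in plain NumPy. This is a hypothetical illustration only, using a mean aggregator over weighted incoming edges; `W` and the ReLU stand in for the layer's learned weight and its `activation` Cell:

```python
import numpy as np

# Toy graph: edges j -> i as (src, dst) pairs, with per-edge weights e_{ji}.
src = np.array([0, 1, 1, 2])
dst = np.array([1, 0, 2, 1])
e   = np.array([1.0, 1.0, 0.5, 2.0])

h = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # h_j^l, one row per node

# h_{N(i)}^{(l+1)} = aggregate({e_{ji} h_j^l}) -- here: mean over incoming edges
n_nodes, feat = h.shape
agg = np.zeros_like(h)
deg = np.zeros(n_nodes)
for s, d, w in zip(src, dst, e):
    agg[d] += w * h[s]
    deg[d] += 1
agg /= np.maximum(deg, 1)[:, None]  # guard against isolated nodes

# h_i^{(l+1)} = sigma(W . concat(h_i^l, h_{N(i)}^{(l+1)}))
rng = np.random.default_rng(0)
W = rng.standard_normal((2 * feat, feat))
out = np.maximum(np.concatenate([h, agg], axis=1) @ W, 0)  # ReLU
print(out.shape)  # (3, 2)
```

The concatenation doubles the feature dimension before the linear map, which is why `W` has shape `(2 * in_feat, out_feat)` in this sketch.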
Parameters
  • in_feat_size (int) – Input node feature size.

  • out_feat_size (int) – Output node feature size.

  • aggregator_type (str, optional) – Type of aggregator; must be one of 'pool', 'lstm' or 'mean'. Default: 'pool'.

  • bias (bool, optional) – Whether to use a bias. Default: True.

  • norm (Cell, optional) – Normalization function Cell. Default: None.

  • activation (Cell, optional) – Activation function Cell. Default: None.

Inputs:
  • x (Tensor) - The input node features. The shape is \((N, D\_in)\) where \(N\) is the number of nodes and \(D\_in\) is the input node feature size.

  • edge_weight (Tensor) - Edge weights. The shape is \((N\_e,)\) where \(N\_e\) is the number of edges.

  • g (Graph) - The input graph.

Outputs:
  • Tensor, the output node features of shape \((N, D\_out)\), where \(N\) is the number of nodes and \(D\_out\) is the output node feature size.

Raises
  • TypeError – If in_feat_size or out_feat_size is not an int.

  • TypeError – If bias is not a bool.

  • KeyError – If aggregator_type is not 'pool', 'lstm' or 'mean'.

  • TypeError – If activation is not a mindspore.nn.Cell.

  • TypeError – If norm is not a mindspore.nn.Cell.

Supported Platforms:

Ascend GPU

Examples

>>> import mindspore as ms
>>> from mindspore import nn
>>> from mindspore.numpy import ones
>>> from mindspore_gl.nn import SAGEConv
>>> from mindspore_gl import GraphField
>>> n_nodes = 4
>>> n_edges = 7
>>> feat_size = 4
>>> src_idx = ms.Tensor([0, 1, 1, 2, 2, 3, 3], ms.int32)
>>> dst_idx = ms.Tensor([0, 0, 2, 1, 3, 0, 1], ms.int32)
>>> feat = ones((n_nodes, feat_size), ms.float32)
>>> graph_field = GraphField(src_idx, dst_idx, n_nodes, n_edges)
>>> sageconv = SAGEConv(in_feat_size=4, out_feat_size=2, activation=nn.ReLU())
>>> edge_weight = ones((n_edges, 1), ms.float32)
>>> res = sageconv(feat, edge_weight, *graph_field.get_graph())
>>> print(res.shape)
(4, 2)