mindspore_gl.BatchedGraph

class mindspore_gl.BatchedGraph[source]

Batched Graph class.

This is the class which should be annotated in the construct function of a GNNCell subclass. The last argument of the construct function will be resolved into the mindspore_gl.BatchedGraph batched graph class.

Supported Platforms:

Ascend GPU

Examples

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> n_nodes = 9
>>> n_edges = 11
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6, 8, 8, 8], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4, 8, 8, 8], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 2, 2], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2], ms.int32)
>>> graph_mask = ms.Tensor([1, 1, 0], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
>>> class SrcIdx(GNNCell):
...     def construct(self, bg: BatchedGraph):
...         return bg.src_idx
>>> ret = SrcIdx()(*batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[0, 2, 2, 3, 4, 5, 5, 6, 8, 8, 8]
avg_edges(edge_feat)[source]

Aggregates edge features and generates a graph-level representation with aggregation type ‘avg’.

The edge_feat should have shape \((N\_EDGES, F)\). The avg_edges operation aggregates edge_feat according to edge_subgraph_idx. The output tensor has shape \((N\_GRAPHS, F)\), where \(F\) is the dimension of the edge feature.
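
The aggregation is effectively a segment mean keyed by edge_subgraph_idx. The following minimal NumPy sketch of the equivalent computation is illustrative only (the segment_mean helper is hypothetical, not part of the mindspore_gl API):

>>> import numpy as np
>>> def segment_mean(edge_feat, edge_subgraph_idx, n_graphs):
...     # scatter-add features and edge counts per subgraph, then divide
...     out = np.zeros((n_graphs, edge_feat.shape[1]), edge_feat.dtype)
...     cnt = np.zeros((n_graphs, 1), edge_feat.dtype)
...     np.add.at(out, edge_subgraph_idx, edge_feat)
...     np.add.at(cnt, edge_subgraph_idx, 1)
...     return out / cnt

Applied to the edge_feat and edge_subgraph_idx of the example below, this helper reproduces the avg_edges output.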

Parameters

edge_feat (Tensor) – a tensor representing the edge feature, with shape \((N\_EDGES, F)\). \(F\) is the dimension of the edge feature.

Returns

Tensor, a tensor with shape \((N\_GRAPHS, F)\), \(F\) is the dimension of the edge feature.

Raises

TypeError – If edge_feat is not a Tensor.

Examples

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> edge_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
...     [3, 2, 3, 3],
... ], ms.float32)
...
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestAvgEdges(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.avg_edges(x)
...
>>> ret = TestAvgEdges()(edge_feat, *batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[[1.3333333730697632, 3.0, 2.0, 3.6666667461395264],
 [5.800000190734863, 4.800000190734863, 3.799999952316284, 4.599999904632568]]
avg_nodes(node_feat)[source]

Aggregates node features and generates a graph-level representation with aggregation type ‘avg’.

The node_feat should have shape \((N\_NODES, F)\). The avg_nodes operation aggregates node_feat according to ver_subgraph_idx. The output tensor has shape \((N\_GRAPHS, F)\), where \(F\) is the dimension of the node feature.

Parameters

node_feat (Tensor) – a tensor representing the node feature, with shape \((N\_NODES, F)\). \(F\) is the dimension of the node feature.

Returns

Tensor, a tensor with shape \((N\_GRAPHS, F)\). \(F\) is the dimension of the node feature.

Raises

TypeError – If node_feat is not a Tensor.

Examples

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> node_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
... ], ms.float32)
...
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestAvgNodes(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.avg_nodes(x)
...
>>> ret = TestAvgNodes()(node_feat, *batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[[1.3333333730697632, 3.0, 2.0, 3.6666667461395264], [6.5, 5.5, 4.0, 5.0]]
broadcast_edges(graph_feat)[source]

Broadcast graph-level features to edge-level representation.
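
In effect, every edge receives the feature row of the subgraph it belongs to, i.e. a gather of graph_feat by edge_subgraph_idx. A minimal NumPy sketch of the equivalent indexing, using the graph-level features from the example below (illustrative only, not the internal implementation):

>>> import numpy as np
>>> graph_feat = np.array([[2., 4., 3., 4.], [9., 7., 6., 8.]])   # one row per graph
>>> edge_subgraph_idx = np.array([0, 0, 0, 1, 1, 1, 1, 1])
>>> edge_level = graph_feat[edge_subgraph_idx]   # shape (N_EDGES, F)
>>> # broadcast_nodes is the analogous gather by ver_subgraph_idx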

Parameters

graph_feat (Tensor) – a tensor representing the graph feature, with shape \((N\_GRAPHS, F)\), where \(F\) is the feature size.

Returns

Tensor, a tensor with shape \((N\_EDGES, F)\), \(F\) is the feature size.

Raises

TypeError – If graph_feat is not a Tensor.

Examples

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> edge_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
...     [3, 2, 3, 3],
... ], ms.float32)
...
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestBroadCastEdges(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         ret = bg.max_edges(x)
...         return bg.broadcast_edges(ret)
...
>>> ret = TestBroadCastEdges()(edge_feat, *batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[[2.0, 4.0, 3.0, 4.0], [2.0, 4.0, 3.0, 4.0], [2.0, 4.0, 3.0, 4.0],
 [9.0, 7.0, 6.0, 8.0], [9.0, 7.0, 6.0, 8.0], [9.0, 7.0, 6.0, 8.0],
 [9.0, 7.0, 6.0, 8.0], [9.0, 7.0, 6.0, 8.0]]
broadcast_nodes(graph_feat)[source]

Broadcast graph-level features to node-level representation.

Parameters

graph_feat (Tensor) – a tensor representing the graph feature, with shape \((N\_GRAPHS, F)\), where \(F\) is the feature size.

Returns

Tensor, a tensor with shape \((N\_NODES, F)\), \(F\) is the feature size.

Raises

TypeError – If graph_feat is not a Tensor.

Examples

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> node_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
... ], ms.float32)
...
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestBroadCastNodes(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         ret = bg.max_nodes(x)
...         return bg.broadcast_nodes(ret)
...
>>> ret = TestBroadCastNodes()(node_feat, *batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[[2.0, 4.0, 3.0, 4.0], [2.0, 4.0, 3.0, 4.0], [2.0, 4.0, 3.0, 4.0],
 [9.0, 7.0, 6.0, 8.0], [9.0, 7.0, 6.0, 8.0], [9.0, 7.0, 6.0, 8.0], [9.0, 7.0, 6.0, 8.0]]
edge_mask()[source]

Get the edge mask after padding. In the mask, 1 indicates that the edge exists and 0 indicates that the edge was generated by padding.

The edge mask is calculated according to the graph_mask and edge_subgraph_idx.
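
A minimal NumPy sketch of that calculation (illustrative only, not the internal implementation):

>>> import numpy as np
>>> graph_mask = np.array([1, 1, 0])
>>> edge_subgraph_idx = np.array([0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2])
>>> edge_mask = graph_mask[edge_subgraph_idx]   # each edge inherits the mask of its subgraph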

Returns

Tensor, a tensor with shape \((N\_EDGES,)\). In the tensor, 1 indicates that the edge exists and 0 indicates that the edge was generated by padding.

Examples

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> n_nodes = 9
>>> n_edges = 11
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6, 8, 8, 8], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4, 8, 8, 8], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 2, 2], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2], ms.int32)
>>> graph_mask = ms.Tensor([1, 1, 0], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestEdgeMask(GNNCell):
...     def construct(self, bg: BatchedGraph):
...         return bg.edge_mask()
...
>>> ret = TestEdgeMask()(*batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
property edge_subgraph_idx

Indicates which subgraph each edge belongs to.

Returns

Tensor, a tensor with shape \((N\_EDGES,)\).

Examples

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> node_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
... ], ms.float32)
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class EdgeSubgraphIdx(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.edge_subgraph_idx
...
>>> ret = EdgeSubgraphIdx()(node_feat, *batched_graph_field.get_batched_graph())
>>> print(ret)
[0 0 0 1 1 1 1 1]
property graph_mask

Indicates whether each subgraph exists.

Returns

Tensor, a tensor with shape \((N\_GRAPHS,)\).

Examples

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> node_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
... ], ms.float32)
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class GraphMask(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.graph_mask
...
>>> ret = GraphMask()(node_feat, *batched_graph_field.get_batched_graph())
>>> print(ret)
[1 1]
max_edges(edge_feat)[source]

Aggregates edge features and generates a graph-level representation with aggregation type ‘max’.

The edge_feat should have shape \((N\_EDGES, F)\). The max_edges operation aggregates edge_feat according to edge_subgraph_idx. The output tensor has shape \((N\_GRAPHS, F)\), where \(F\) is the dimension of the edge feature.
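
This is a segment max keyed by edge_subgraph_idx. A minimal NumPy sketch of the equivalent computation (the segment_max helper is hypothetical, not part of the mindspore_gl API):

>>> import numpy as np
>>> def segment_max(edge_feat, edge_subgraph_idx, n_graphs):
...     # scatter-max each edge row into the row of its subgraph
...     out = np.full((n_graphs, edge_feat.shape[1]), -np.inf)
...     np.maximum.at(out, edge_subgraph_idx, edge_feat)
...     return out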

Parameters

edge_feat (Tensor) – a tensor representing the edge feature, with shape \((N\_EDGES, F)\). \(F\) is the dimension of the edge feature.

Returns

Tensor, a tensor with shape \((N\_GRAPHS, F)\). \(F\) is the dimension of the edge feature.

Raises

TypeError – If edge_feat is not a Tensor.

Examples

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> edge_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
...     [3, 2, 3, 3],
... ], ms.float32)
...
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestMaxEdges(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.max_edges(x)
...
>>> ret = TestMaxEdges()(edge_feat, *batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[[2.0, 4.0, 3.0, 4.0], [9.0, 7.0, 6.0, 8.0]]
max_nodes(node_feat)[source]

Aggregates node features and generates a graph-level representation with aggregation type ‘max’.

The node_feat should have shape \((N\_NODES, F)\). The max_nodes operation aggregates node_feat according to ver_subgraph_idx. The output tensor has shape \((N\_GRAPHS, F)\), where \(F\) is the dimension of the node feature.

Parameters

node_feat (Tensor) – a tensor representing the node feature, with shape \((N\_NODES, F)\). \(F\) is the dimension of the node feature.

Returns

Tensor, a tensor with shape \((N\_GRAPHS, F)\), \(F\) is the dimension of the node feature.

Raises

TypeError – If node_feat is not a Tensor.

Examples

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> node_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
... ], ms.float32)
...
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestMaxNodes(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.max_nodes(x)
...
>>> ret = TestMaxNodes()(node_feat, *batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[[2.0, 4.0, 3.0, 4.0], [9.0, 7.0, 6.0, 8.0]]
property n_graphs

Represents the number of graphs in the batched graph.

Returns

int, the number of graphs.

Examples

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> node_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
... ], ms.float32)
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class NGraphs(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.n_graphs
...
>>> ret = NGraphs()(node_feat, *batched_graph_field.get_batched_graph())
>>> print(ret)
2
node_mask()[source]

Get the node mask after padding. In the mask, 1 indicates that the node exists and 0 indicates that the node was generated by padding.

The node mask is calculated according to the graph_mask and ver_subgraph_idx.

Returns

Tensor, a tensor with shape \((N\_NODES,)\). In the tensor, 1 indicates that the node exists and 0 indicates that the node was generated by padding.

Examples

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> n_nodes = 9
>>> n_edges = 11
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6, 8, 8, 8], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4, 8, 8, 8], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 2, 2], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2], ms.int32)
>>> graph_mask = ms.Tensor([1, 1, 0], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestNodeMask(GNNCell):
...     def construct(self, bg: BatchedGraph):
...         return bg.node_mask()
...
>>> ret = TestNodeMask()(*batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[1, 1, 1, 1, 1, 1, 1, 0, 0]
num_of_edges()[source]

Get the number of edges of each subgraph in a batched graph.

Note

After the padding operation, a non-existent subgraph is created, and all of the padded (non-existent) edges belong to it. To clear its count, multiply the result by the graph_mask manually, as sketched below.
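
A minimal NumPy sketch of that clean-up, using the values from the example below (illustrative only):

>>> import numpy as np
>>> num_edges = np.array([[3], [5], [3]])       # num_of_edges() result in the example below
>>> graph_mask = np.array([1, 1, 0])
>>> cleared = num_edges * graph_mask[:, None]   # zeroes the padded subgraph -> [[3], [5], [0]]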

Returns

Tensor, a tensor with shape \((N\_GRAPHS, 1)\), representing how many edges each subgraph contains.

Examples

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> n_nodes = 9
>>> n_edges = 11
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6, 8, 8, 8], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4, 8, 8, 8], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 2, 2], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2], ms.int32)
>>> graph_mask = ms.Tensor([1, 1, 0], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestNumOfEdges(GNNCell):
...     def construct(self, bg: BatchedGraph):
...         return bg.num_of_edges()
...
>>> ret = TestNumOfEdges()(*batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[[3], [5], [3]]
num_of_nodes()[source]

Get the number of nodes of each subgraph in a batched graph.

Note

After the padding operation, a non-existent subgraph is created, and all of the padded (non-existent) nodes belong to it. To clear its count, multiply the result by the graph_mask manually.

Returns

Tensor, a tensor with shape \((N\_GRAPHS, 1)\), representing how many nodes each subgraph contains.

Examples

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> n_nodes = 9
>>> n_edges = 11
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6, 8, 8, 8], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4, 8, 8, 8], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 2, 2], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2], ms.int32)
>>> graph_mask = ms.Tensor([1, 1, 0], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestNumOfNodes(GNNCell):
...     def construct(self, bg: BatchedGraph):
...         return bg.num_of_nodes()
...
>>> ret = TestNumOfNodes()(*batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[[3], [4], [2]]
softmax_edges(edge_feat)[source]

Perform graph-wise softmax on the edge features.

For each edge \(e\in\mathcal{E}\) and its feature \(x_e\), calculate its normalized feature as follows:

\[z_e = \frac{\exp(x_e)}{\sum_{u\in\mathcal{E}}\exp(x_u)}\]

Each subgraph computes softmax independently. The result tensor has the same shape as the original edge feature.
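
A minimal NumPy sketch of this per-subgraph softmax (the segment_softmax helper is hypothetical, not the mindspore_gl implementation; a production version would subtract the per-graph maximum before exponentiating for numerical stability):

>>> import numpy as np
>>> def segment_softmax(edge_feat, edge_subgraph_idx, n_graphs):
...     # normalize each feature column within its own subgraph
...     exp = np.exp(edge_feat)
...     denom = np.zeros((n_graphs, edge_feat.shape[1]))
...     np.add.at(denom, edge_subgraph_idx, exp)
...     return exp / denom[edge_subgraph_idx]

softmax_nodes performs the same computation over nodes, keyed by ver_subgraph_idx.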

Parameters

edge_feat (Tensor) – a tensor representing the edge feature, with shape \((N\_EDGES, F)\), where \(F\) is the feature size.

Returns

Tensor, a tensor with shape \((N\_EDGES, F)\), \(F\) is the feature size.

Raises

TypeError – If edge_feat is not a Tensor.

Examples

>>> import mindspore as ms
>>> import numpy as np
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> edge_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
...     [3, 2, 3, 3],
... ], ms.float32)
...
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestSoftmaxEdges(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.softmax_edges(x)
...
>>> ret = TestSoftmaxEdges()(edge_feat, *batched_graph_field.get_batched_graph()).asnumpy()
>>> print(np.array2string(ret, formatter={'float_kind':'{0:.5f}'.format}))
[[0.21194, 0.09003, 0.66524, 0.42232],
 [0.57612, 0.66524, 0.09003, 0.15536],
 [0.21194, 0.24473, 0.24473, 0.42232],
 [0.57518, 0.41993, 0.23586, 0.83838],
 [0.21160, 0.41993, 0.64113, 0.04174],
 [0.21160, 0.15448, 0.08677, 0.11346],
 [0.00019, 0.00283, 0.00432, 0.00076],
 [0.00143, 0.00283, 0.03192, 0.00565]]
softmax_nodes(node_feat)[source]

Perform graph-wise softmax on the node features.

For each node \(v\in\mathcal{V}\) and its feature \(x_v\), calculate its normalized feature as follows:

\[z_v = \frac{\exp(x_v)}{\sum_{u\in\mathcal{V}}\exp(x_u)}\]

Each subgraph computes softmax independently. The result tensor has the same shape as the original node feature.

Parameters

node_feat (Tensor) – a tensor representing the node feature, with shape \((N\_NODES, F)\), where \(F\) is the feature size.

Returns

Tensor, a tensor with shape \((N\_NODES, F)\), \(F\) is the feature size.

Raises

TypeError – If node_feat is not a Tensor.

Examples

>>> import mindspore as ms
>>> import numpy as np
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> node_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
... ], ms.float32)
...
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestSoftmaxNodes(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.softmax_nodes(x)
...
>>> ret = TestSoftmaxNodes()(node_feat, *batched_graph_field.get_batched_graph()).asnumpy()
>>> print(np.array2string(ret, formatter={'float_kind':'{0:.5f}'.format}))
[[0.21194, 0.09003, 0.66524, 0.42232],
 [0.57612, 0.66524, 0.09003, 0.15536],
 [0.21194, 0.24473, 0.24473, 0.42232],
 [0.57601, 0.42112, 0.24364, 0.84315],
 [0.21190, 0.42112, 0.66227, 0.04198],
 [0.21190, 0.15492, 0.08963, 0.11411],
 [0.00019, 0.00284, 0.00446, 0.00077]]
sum_edges(edge_feat)[source]

Aggregates edge features and generates a graph-level representation with aggregation type ‘sum’.

The edge_feat should have shape \((N\_EDGES, F)\). The sum_edges operation aggregates edge_feat according to edge_subgraph_idx. The output tensor has shape \((N\_GRAPHS, F)\), where \(F\) is the dimension of the edge feature.

Parameters

edge_feat (Tensor) – a tensor representing the edge feature, with shape \((N\_EDGES, F)\). \(F\) is the dimension of the edge feature.

Returns

Tensor, a tensor with shape \((N\_GRAPHS, F)\). \(F\) is the dimension of the edge feature.

Raises

TypeError – If edge_feat is not a Tensor.

Examples

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> edge_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
...     [3, 2, 3, 3],
... ], ms.float32)
...
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestSumEdges(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.sum_edges(x)
...
>>> ret = TestSumEdges()(edge_feat, *batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[[4.0, 9.0, 6.0, 11.0], [29.0, 24.0, 19.0, 23.0]]
sum_nodes(node_feat)[source]

Aggregates node features and generates a graph-level representation with aggregation type ‘sum’.

The node_feat should have shape \((N\_NODES, F)\). The sum_nodes operation aggregates node_feat according to ver_subgraph_idx. The output tensor has shape \((N\_GRAPHS, F)\), where \(F\) is the dimension of the node feature.

Parameters

node_feat (Tensor) – a tensor representing the node feature, with shape \((N\_NODES, F)\), where \(F\) is the dimension of the node feature.

Returns

Tensor, a tensor with shape \((N\_GRAPHS, F)\), \(F\) is the dimension of the node feature.

Raises

TypeError – If node_feat is not a Tensor.

Examples

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> node_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
... ], ms.float32)
...
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestSumNodes(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.sum_nodes(x)
...
>>> ret = TestSumNodes()(node_feat, *batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[[4.0, 9.0, 6.0, 11.0], [26.0, 22.0, 16.0, 20.0]]
property ver_subgraph_idx

Indicates which subgraph each node belongs to.

Returns

Tensor, a tensor with shape \((N,)\), where \(N\) is the number of nodes of the graph.

Examples

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> node_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
... ], ms.float32)
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class VerSubgraphIdx(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.ver_subgraph_idx
...
>>> ret = VerSubgraphIdx()(node_feat, *batched_graph_field.get_batched_graph())
>>> print(ret)
[0 0 0 1 1 1 1]