mindspore_gl.BatchedGraph

class mindspore_gl.BatchedGraph [source]

Batched graph class.

This is the class to be annotated in the construct function of a GNNCell subclass. The last argument of the construct function will be resolved into the mindspore_gl.BatchedGraph batched graph class.

Supported Platforms:

Ascend GPU

Examples:

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> n_nodes = 9
>>> n_edges = 11
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6, 8, 8, 8], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4, 8, 8, 8], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 2, 2], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2], ms.int32)
>>> graph_mask = ms.Tensor([1, 1, 0], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
>>> class SrcIdx(GNNCell):
...     def construct(self, bg: BatchedGraph):
...         return bg.src_idx
>>> ret = SrcIdx()(*batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[0, 2, 2, 3, 4, 5, 5, 6, 8, 8, 8]
avg_edges(edge_feat) [source]

Aggregates edge features to generate a graph-level representation with the "average" aggregation function.

The edge features have shape \((N\_EDGES, F)\). avg_edges aggregates the edge features according to edge_subgraph_idx. The output Tensor has shape \((N\_GRAPHS, F)\), where \(F\) is the edge feature dimension.
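
The aggregation is a per-subgraph ("segment") mean over the edge axis. The following NumPy sketch illustrates this semantics only; it is not the library implementation, and segment_mean is a hypothetical helper name:

>>> import numpy as np
>>> def segment_mean(edge_feat, edge_subgraph_idx, n_graphs):
...     # scatter-add features and counts per subgraph, then divide
...     out = np.zeros((n_graphs, edge_feat.shape[1]), edge_feat.dtype)
...     counts = np.zeros((n_graphs, 1), edge_feat.dtype)
...     np.add.at(out, edge_subgraph_idx, edge_feat)
...     np.add.at(counts, edge_subgraph_idx, 1.0)
...     return out / counts

The same indexing pattern, with sum or max in place of the mean, underlies sum_edges, max_edges and the corresponding *_nodes readouts (which use ver_subgraph_idx instead).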

Parameters:
  • edge_feat (Tensor) - Edge features, with shape \((N\_EDGES, F)\), where F is the feature dimension.

Returns:

Tensor, with shape \((N\_GRAPHS, F)\), where \(F\) is the edge feature dimension.

Raises:
  • TypeError - If edge_feat is not a Tensor.

Examples:

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> edge_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
...     [3, 2, 3, 3],
... ], ms.float32)
...
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestAvgEdges(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.avg_edges(x)
...
>>> ret = TestAvgEdges()(edge_feat, *batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[[1.3333333730697632, 3.0, 2.0, 3.6666667461395264],
[5.800000190734863, 4.800000190734863, 3.799999952316284, 4.599999904632568]]
avg_nodes(node_feat) [source]

Aggregates node features to generate a graph-level representation with the "average" aggregation function.

The node features have shape \((N\_NODES, F)\). avg_nodes aggregates the node features according to ver_subgraph_idx. The output Tensor has shape \((N\_GRAPHS, F)\), where \(F\) is the node feature dimension.

Parameters:
  • node_feat (Tensor) - Node features, with shape \((N\_NODES, F)\), where F is the feature dimension.

Returns:

Tensor, with shape \((N\_GRAPHS, F)\), where \(F\) is the node feature dimension.

Raises:
  • TypeError - If node_feat is not a Tensor.

Examples:

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> node_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
... ], ms.float32)
...
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestAvgNodes(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.avg_nodes(x)
...
>>> ret = TestAvgNodes()(node_feat, *batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[[1.3333333730697632, 3.0, 2.0, 3.6666667461395264], [6.5, 5.5, 4.0, 5.0]]
broadcast_edges(graph_feat) [source]

Broadcasts graph-level features to edge-level representations.
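
Conceptually, broadcasting gathers, for each edge, the feature row of the subgraph it belongs to. A minimal NumPy sketch of this semantics (an illustration only, not the library implementation):

>>> import numpy as np
>>> graph_feat = np.array([[2., 4., 3., 4.], [9., 7., 6., 8.]])   # one row per subgraph
>>> edge_subgraph_idx = np.array([0, 0, 0, 1, 1, 1, 1, 1])
>>> print(graph_feat[edge_subgraph_idx].shape)                    # one row per edge
(8, 4)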

Parameters:
  • graph_feat (Tensor) - Graph-level features to be broadcast, with shape \((N\_GRAPHS, F)\), where F is the feature dimension.

Returns:

Tensor, with shape \((N\_EDGES, F)\), where \(F\) is the feature dimension.

Raises:
  • TypeError - If graph_feat is not a Tensor.

Examples:

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> edge_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
...     [3, 2, 3, 3],
... ], ms.float32)
...
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestBroadCastEdges(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         ret = bg.max_edges(x)
...         return bg.broadcast_edges(ret)
...
>>> ret = TestBroadCastEdges()(edge_feat, *batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[[2.0, 4.0, 3.0, 4.0], [2.0, 4.0, 3.0, 4.0], [2.0, 4.0, 3.0, 4.0],
[9.0, 7.0, 6.0, 8.0], [9.0, 7.0, 6.0, 8.0], [9.0, 7.0, 6.0, 8.0],
[9.0, 7.0, 6.0, 8.0], [9.0, 7.0, 6.0, 8.0]]
broadcast_nodes(graph_feat) [source]

Broadcasts graph-level features to node-level representations.

Parameters:
  • graph_feat (Tensor) - Graph-level features to be broadcast, with shape \((N\_GRAPHS, F)\), where F is the feature dimension.

Returns:

Tensor, with shape \((N\_NODES, F)\), where \(F\) is the feature dimension.

Raises:
  • TypeError - If graph_feat is not a Tensor.

Examples:

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> node_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
... ], ms.float32)
...
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestBroadCastNodes(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         ret = bg.max_nodes(x)
...         return bg.broadcast_nodes(ret)
...
>>> ret = TestBroadCastNodes()(node_feat, *batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[[2.0, 4.0, 3.0, 4.0], [2.0, 4.0, 3.0, 4.0], [2.0, 4.0, 3.0, 4.0],
[9.0, 7.0, 6.0, 8.0], [9.0, 7.0, 6.0, 8.0], [9.0, 7.0, 6.0, 8.0], [9.0, 7.0, 6.0, 8.0]]
edge_mask() [source]

Gets the mask of edges after padding.

The edge mask is computed from mindspore_gl.BatchedGraph.graph_mask and mindspore_gl.BatchedGraph.edge_subgraph_idx. In the mask, 1 indicates the edge exists and 0 indicates the edge was generated by padding.
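
In other words, every edge inherits the mask value of the subgraph it belongs to. A minimal NumPy sketch of this relation (an illustration only, not the library implementation), using the same indices as the example below:

>>> import numpy as np
>>> graph_mask = np.array([1, 1, 0])
>>> edge_subgraph_idx = np.array([0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2])
>>> print(graph_mask[edge_subgraph_idx].tolist())
[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0]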

Returns:

Tensor, with shape \((N\_EDGES,)\). In the Tensor, 1 indicates the edge exists and 0 indicates the edge was generated by padding.

Examples:

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> n_nodes = 9
>>> n_edges = 11
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6, 8, 8, 8], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4, 8, 8, 8], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 2, 2], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2], ms.int32)
>>> graph_mask = ms.Tensor([1, 1, 0], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestEdgeMask(GNNCell):
...     def construct(self, bg: BatchedGraph):
...         return bg.edge_mask()
...
>>> ret = TestEdgeMask()(*batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
property edge_subgraph_idx

Indicates which subgraph each edge belongs to.

Returns:

Tensor, with shape \((N\_EDGES,)\).

Examples:

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> node_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
... ], ms.float32)
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class EdgeSubgraphIdx(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.edge_subgraph_idx
...
>>> ret = EdgeSubgraphIdx()(node_feat, *batched_graph_field.get_batched_graph())
>>> print(ret)
[0 0 0 1 1 1 1 1]
property graph_mask

Indicates which subgraphs actually exist.

Returns:

Tensor, with shape \((N\_GRAPHS,)\).

Examples:

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> node_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
... ], ms.float32)
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class GraphMask(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.graph_mask
...
>>> ret = GraphMask()(node_feat, *batched_graph_field.get_batched_graph())
>>> print(ret)
[1 1]
max_edges(edge_feat) [source]

Aggregates edge features to generate a graph-level representation with the "max" aggregation function.

The edge features have shape \((N\_EDGES, F)\). max_edges aggregates the edge features according to edge_subgraph_idx. The output Tensor has shape \((N\_GRAPHS, F)\), where \(F\) is the edge feature dimension.

Parameters:
  • edge_feat (Tensor) - Edge features, with shape \((N\_EDGES, F)\), where F is the feature dimension.

Returns:

Tensor, with shape \((N\_GRAPHS, F)\), where \(F\) is the edge feature dimension.

Raises:
  • TypeError - If edge_feat is not a Tensor.

Examples:

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> edge_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
...     [3, 2, 3, 3],
... ], ms.float32)
...
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestMaxEdges(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.max_edges(x)
...
>>> ret = TestMaxEdges()(edge_feat, *batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[[2.0, 4.0, 3.0, 4.0], [9.0, 7.0, 6.0, 8.0]]
max_nodes(node_feat) [source]

Aggregates node features to generate a graph-level representation with the "max" aggregation function.

The node features have shape \((N\_NODES, F)\). max_nodes aggregates the node features according to ver_subgraph_idx. The output Tensor has shape \((N\_GRAPHS, F)\), where \(F\) is the node feature dimension.

Parameters:
  • node_feat (Tensor) - Node features, with shape \((N\_NODES, F)\), where F is the feature dimension.

Returns:

Tensor, with shape \((N\_GRAPHS, F)\), where \(F\) is the node feature dimension.

Raises:
  • TypeError - If node_feat is not a Tensor.

Examples:

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> node_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
... ], ms.float32)
...
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestMaxNodes(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.max_nodes(x)
...
>>> ret = TestMaxNodes()(node_feat, *batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[[2.0, 4.0, 3.0, 4.0], [9.0, 7.0, 6.0, 8.0]]
property n_graphs

Indicates how many subgraphs the batched graph consists of.

Returns:

int, the number of graphs.

Examples:

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> node_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
... ], ms.float32)
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class NGraphs(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.n_graphs
...
>>> ret = NGraphs()(node_feat, *batched_graph_field.get_batched_graph())
>>> print(ret)
2
node_mask() [source]

Gets the mask of nodes after padding. In the mask, 1 indicates the node exists and 0 indicates the node was generated by padding.

The node mask is computed from mindspore_gl.BatchedGraph.graph_mask and mindspore_gl.BatchedGraph.ver_subgraph_idx.

Returns:

Tensor, with shape \((N\_NODES,)\). In the Tensor, 1 indicates the node exists and 0 indicates the node was generated by padding.

Examples:

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> n_nodes = 9
>>> n_edges = 11
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6, 8, 8, 8], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4, 8, 8, 8], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 2, 2], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2], ms.int32)
>>> graph_mask = ms.Tensor([1, 1, 0], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestNodeMask(GNNCell):
...     def construct(self, bg: BatchedGraph):
...         return bg.node_mask()
...
>>> ret = TestNodeMask()(*batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[1, 1, 1, 1, 1, 1, 1, 0, 0]
num_of_edges() [source]

Gets the number of edges in each subgraph of the batched graph.

Note

After padding, a non-existent subgraph is created, and all the non-existent edges created by padding belong to it. If you want to clear it, multiply the result by graph_mask manually, as sketched below.
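
A minimal sketch of that manual clean-up (an illustration only; the counts and mask below are the values from the example that follows):

>>> import mindspore as ms
>>> num_edges = ms.Tensor([[3], [5], [3]], ms.int32)    # per-subgraph edge counts, last row is padding
>>> graph_mask = ms.Tensor([1, 1, 0], ms.int32)
>>> print((num_edges * graph_mask.reshape(-1, 1)).asnumpy().tolist())
[[3], [5], [0]]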

Returns:

Tensor, with shape \((N\_GRAPHS, 1)\), indicating how many edges each subgraph has.

Examples:

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> n_nodes = 9
>>> n_edges = 11
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6, 8, 8, 8], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4, 8, 8, 8], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 2, 2], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2], ms.int32)
>>> graph_mask = ms.Tensor([1, 1, 0], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestNumOfEdges(GNNCell):
...     def construct(self, bg: BatchedGraph):
...         return bg.num_of_edges()
...
>>> ret = TestNumOfEdges()(*batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[[3], [5], [3]]
num_of_nodes() [source]

Gets the number of nodes in each subgraph of the batched graph.

Note

After padding, a non-existent subgraph is created, and all the non-existent nodes created by padding belong to it. If you want to clear it, multiply the result by graph_mask manually.

Returns:

Tensor, with shape \((N\_GRAPHS, 1)\), indicating how many nodes each subgraph has.

Examples:

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> n_nodes = 9
>>> n_edges = 11
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6, 8, 8, 8], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4, 8, 8, 8], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 2, 2], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2], ms.int32)
>>> graph_mask = ms.Tensor([1, 1, 0], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestNumOfNodes(GNNCell):
...     def construct(self, bg: BatchedGraph):
...         return bg.num_of_nodes()
...
>>> ret = TestNumOfNodes()(*batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[[3], [4], [2]]
softmax_edges(edge_feat) [source]

Performs graph-wise softmax on the edge features.

For each edge \(v\in\mathcal{V}\) and its feature \(x_v\), the normalized value is computed as follows:

\[z_v = \frac{\exp(x_v)}{\sum_{u\in\mathcal{V}}\exp(x_u)}\]

Each subgraph computes its softmax independently. The resulting Tensor has the same shape as the original edge features.
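
A minimal NumPy sketch of this per-subgraph softmax (an illustration of the formula above, not the library implementation; segment_softmax is a hypothetical helper name):

>>> import numpy as np
>>> def segment_softmax(feat, subgraph_idx, n_graphs):
...     out = np.empty_like(feat)
...     for g in range(n_graphs):
...         rows = subgraph_idx == g
...         e = np.exp(feat[rows])
...         out[rows] = e / e.sum(axis=0)   # normalize each feature column within the subgraph
...     return out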

Parameters:
  • edge_feat (Tensor) - Edge feature Tensor, with shape \((N\_EDGES, F)\), where F is the feature dimension.

Returns:

Tensor, with shape \((N\_EDGES, F)\), where \(F\) is the feature dimension.

Raises:
  • TypeError - If edge_feat is not a Tensor.

Examples:

>>> import mindspore as ms
>>> import numpy as np
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> edge_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
...     [3, 2, 3, 3],
... ], ms.float32)
...
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestSoftmaxEdges(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.softmax_edges(x)
...
>>> ret = TestSoftmaxEdges()(edge_feat, *batched_graph_field.get_batched_graph()).asnumpy()
>>> print(np.array2string(ret, formatter={'float_kind':'{0:.5f}'.format}))
[[0.21194, 0.09003, 0.66524, 0.42232],
[0.57612, 0.66524, 0.09003, 0.15536],
[0.21194, 0.24473, 0.24473, 0.42232],
[0.57518, 0.41993, 0.23586, 0.83838],
[0.21160, 0.41993, 0.64113, 0.04174],
[0.21160, 0.15448, 0.08677, 0.11346],
[0.00019, 0.00283, 0.00432, 0.00076],
[0.00143, 0.00283, 0.03192, 0.00565]]
softmax_nodes(node_feat) [source]

Performs graph-wise softmax on the node features.

For each node \(v\in\mathcal{V}\) and its feature \(x_v\), the normalized value is computed as follows:

\[z_v = \frac{\exp(x_v)}{\sum_{u\in\mathcal{V}}\exp(x_u)}\]

Each subgraph computes its softmax independently. The resulting Tensor has the same shape as the original node features.

Parameters:
  • node_feat (Tensor) - Node features, with shape \((N\_NODES, F)\), where F is the feature dimension.

Returns:

Tensor, with shape \((N\_NODES, F)\), where \(F\) is the node feature dimension.

Raises:
  • TypeError - If node_feat is not a Tensor.

Examples:

>>> import mindspore as ms
>>> import numpy as np
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> node_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
... ], ms.float32)
...
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestSoftmaxNodes(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.softmax_nodes(x)
...
>>> ret = TestSoftmaxNodes()(node_feat, *batched_graph_field.get_batched_graph()).asnumpy()
>>> print(np.array2string(ret, formatter={'float_kind':'{0:.5f}'.format}))
[[0.21194, 0.09003, 0.66524, 0.42232],
[0.57612, 0.66524, 0.09003, 0.15536],
[0.21194, 0.24473, 0.24473, 0.42232],
[0.57601, 0.42112, 0.24364, 0.84315],
[0.21190, 0.42112, 0.66227, 0.04198],
[0.21190, 0.15492, 0.08963, 0.11411],
[0.00019, 0.00284, 0.00446, 0.00077]]
sum_edges(edge_feat) [source]

Aggregates edge features to generate a graph-level representation with the "sum" aggregation function.

The edge features have shape \((N\_EDGES, F)\). sum_edges aggregates the edge features according to edge_subgraph_idx. The output Tensor has shape \((N\_GRAPHS, F)\), where \(F\) is the edge feature dimension.

Parameters:
  • edge_feat (Tensor) - Edge features, with shape \((N\_EDGES, F)\), where F is the feature dimension.

Returns:

Tensor, with shape \((N\_GRAPHS, F)\), where \(F\) is the edge feature dimension.

Raises:
  • TypeError - If edge_feat is not a Tensor.

Examples:

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> edge_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
...     [3, 2, 3, 3],
... ], ms.float32)
...
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestSumEdges(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.sum_edges(x)
...
>>> ret = TestSumEdges()(edge_feat, *batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[[4.0, 9.0, 6.0, 11.0], [29.0, 24.0, 19.0, 23.0]]
sum_nodes(node_feat) [source]

Aggregates node features to generate a graph-level representation with the "sum" aggregation function.

The node features have shape \((N\_NODES, F)\). sum_nodes aggregates the node features according to ver_subgraph_idx. The output Tensor has shape \((N\_GRAPHS, F)\), where \(F\) is the node feature dimension.

Parameters:
  • node_feat (Tensor) - Node features, with shape \((N\_NODES, F)\), where F is the feature dimension.

Returns:

Tensor, with shape \((N\_GRAPHS, F)\), where \(F\) is the node feature dimension.

Raises:
  • TypeError - If node_feat is not a Tensor.

Examples:

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> node_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
... ], ms.float32)
...
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class TestSumNodes(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.sum_nodes(x)
...
>>> ret = TestSumNodes()(node_feat, *batched_graph_field.get_batched_graph()).asnumpy().tolist()
>>> print(ret)
[[4.0, 9.0, 6.0, 11.0], [26.0, 22.0, 16.0, 20.0]]
property ver_subgraph_idx

Indicates which subgraph each node belongs to.

Returns:

Tensor, with shape \((N,)\), where \(N\) is the number of nodes in the graph.

Examples:

>>> import mindspore as ms
>>> from mindspore_gl import BatchedGraph, BatchedGraphField
>>> from mindspore_gl.nn import GNNCell
>>> node_feat = ms.Tensor([
...     # graph 1:
...     [1, 2, 3, 4],
...     [2, 4, 1, 3],
...     [1, 3, 2, 4],
...     # graph 2:
...     [9, 7, 5, 8],
...     [8, 7, 6, 5],
...     [8, 6, 4, 6],
...     [1, 2, 1, 1],
... ], ms.float32)
>>> n_nodes = 7
>>> n_edges = 8
>>> src_idx = ms.Tensor([0, 2, 2, 3, 4, 5, 5, 6], ms.int32)
>>> dst_idx = ms.Tensor([1, 0, 1, 5, 3, 4, 6, 4], ms.int32)
>>> ver_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1], ms.int32)
>>> edge_subgraph_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 1, 1], ms.int32)
>>> graph_mask = ms.Tensor([1, 1], ms.int32)
>>> batched_graph_field = BatchedGraphField(src_idx, dst_idx, n_nodes, n_edges,
...                                         ver_subgraph_idx, edge_subgraph_idx, graph_mask)
...
>>> class VerSubgraphIdx(GNNCell):
...     def construct(self, x, bg: BatchedGraph):
...         return bg.ver_subgraph_idx
...
>>> ret = VerSubgraphIdx()(node_feat, *batched_graph_field.get_batched_graph())
>>> print(ret)
[0 0 0 1 1 1 1]