mindspore_gl.graph.PadCsrEdge

class mindspore_gl.graph.PadCsrEdge(pad_nodes, reset_with_fill_value=True, length=None, mode=PadMode.AUTO, use_shared_numpy=False)[source]

PadCsrEdge is a pad operator specific to COO edges. After padding, the CSR indices and indptr arrays derived from the COO edge index have a unified shape.
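The following plain-NumPy sketch (not the operator's internals) illustrates why this padding matters: when COO edges are converted to CSR, the length of the indices array equals the number of edges, so graphs with different edge counts produce differently shaped CSR arrays unless the edges are padded to a common length first.

>>> import numpy as np
>>> num_nodes = 5
>>> coo = np.array([[0, 1, 2, 4],
...                 [2, 3, 1, 1]])
>>> # Sort edges by source node, then build CSR indices and indptr.
>>> order = np.argsort(coo[0])
>>> indices = coo[1][order]
>>> indptr = np.concatenate(([0], np.cumsum(np.bincount(coo[0], minlength=num_nodes))))
>>> # indices grows with the edge count, indptr with the node count.
>>> print(indices.shape, indptr.shape)
(4,) (6,)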

Warning

PadCsrEdge reuses its memory buffer to speed up the pad operation.

Parameters
  • pad_nodes (int) – number of nodes in the graph.

  • reset_with_fill_value (bool, optional) – Since the memory buffer is reused, you can set this value to False if you do not care about the padded values. Default: True.

  • length (int, optional) – User-specified length of the padding result. Default: None.

  • mode (PadMode, optional) – Pad mode for the array. If PadMode.CONST, this op pads the array to the user-specified size. If PadMode.AUTO, the padded result length is chosen according to the input's length; the expected length can be calculated as \(length=2^{\text{ceil}\left ( \log_{2}{input\_length} \right ) }\) (see the sketch after this list). Default: mindspore_gl.graph.PadMode.AUTO.

  • use_shared_numpy (bool, optional) – Whether to use SharedNDArray to speed up inter-process communication. This is recommended if you do feature collection and feature padding in a child process and need inter-process communication for graph features. Default: False.
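As a quick illustration of the PadMode.AUTO formula given above, the padded length is the input length rounded up to the next power of two. The helper below is hypothetical and not part of mindspore_gl; it only reproduces the documented formula.

>>> import math
>>> # Hypothetical helper reproducing the documented AUTO length formula.
>>> def auto_pad_length(input_length):
...     return 2 ** math.ceil(math.log2(input_length))
>>> print(auto_pad_length(4), auto_pad_length(5), auto_pad_length(1000))
4 8 1024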

Inputs:
  • input_array (numpy.array) - the input numpy array to pad.

Raises

ValueError – If length is not provided when the padding mode is PadMode.CONST.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindspore_gl.graph import PadCsrEdge, PadMode
>>> node_pad = 10
>>> origin_edge_index = np.array([[0, 1, 2, 4],
...                               [2, 3, 1, 1]])
>>> pad_length = 20
>>> pad_op = PadCsrEdge(node_pad, length=pad_length, mode=PadMode.CONST)
>>> res = pad_op(origin_edge_index)
>>> print(res)
[[0 1 2 4 5 6 7 8 5 6 7 8 5 6 7 8 5 6 7 8]
 [2 3 1 1 5 6 7 8 6 7 8 5 7 8 5 6 8 5 6 7]]
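For comparison, below is a minimal sketch of the default PadMode.AUTO configuration, assuming the same call pattern as the example above; since the padded values and length are chosen by the operator, no output is asserted here.

>>> import numpy as np
>>> from mindspore_gl.graph import PadCsrEdge, PadMode
>>> node_pad = 10
>>> origin_edge_index = np.array([[0, 1, 2, 4],
...                               [2, 3, 1, 1]])
>>> # mode defaults to PadMode.AUTO, so no explicit length is required.
>>> auto_pad_op = PadCsrEdge(node_pad)
>>> auto_res = auto_pad_op(origin_edge_index)
>>> # auto_res is again a 2-row edge index whose padded length follows the
>>> # AUTO formula above rather than a user-specified value.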