mindspore.hal.contiguous_tensors_handle.ContiguousTensorsHandle

class mindspore.hal.contiguous_tensors_handle.ContiguousTensorsHandle(tensor_list, enable_mem_align=True)[source]

A contiguous memory manager.

Parameters:
  • tensor_list (list[Tensor] or tuple[Tensor]) - The tensors for which contiguous memory is requested.

  • enable_mem_align (bool, optional) - Whether to enable memory alignment. False is not supported yet. Default: True.

Returns:

ContiguousTensorsHandle, a contiguous memory manager.

Examples:

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Tensor
>>> from mindspore.hal.contiguous_tensors_handle import ContiguousTensorsHandle
>>> x = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> y = Tensor(np.array([4, 5, 6]).astype(np.float32))
>>> handle = ContiguousTensorsHandle([x, y], True)
>>> print(handle[0].shape)
(1,)
>>> print(handle[1: 3].asnumpy())
[2. 3.]
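The indexing behavior above can be pictured with plain NumPy (an illustration of the concept only, not the MindSpore implementation): the tensors are packed back-to-back into one contiguous buffer, and indexing the handle addresses that combined buffer by element.

```python
import numpy as np

# Illustration only: mimic packing two tensors into one contiguous buffer.
x = np.array([1, 2, 3], dtype=np.float32)
y = np.array([4, 5, 6], dtype=np.float32)

# One contiguous block holding both tensors back-to-back.
buffer = np.concatenate([x, y])

# Indexing the buffer mirrors indexing the handle in the example above.
first = buffer[0]       # element at index 0 of the combined memory
middle = buffer[1:3]    # elements 1..2, here still inside the first tensor
```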
slice_by_tensor_index(start=None, end=None)[source]

Return a slice of the contiguous memory, indexed by position in the tensor list.

Parameters:
  • start (int, optional) - The start index. Default: None.

  • end (int, optional) - The end index. Default: None.

Returns:

Tensor

Examples:

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Tensor
>>> from mindspore.hal.contiguous_tensors_handle import ContiguousTensorsHandle
>>> x = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> y = Tensor(np.array([4, 5, 6]).astype(np.float32))
>>> handle = ContiguousTensorsHandle([x, y], True)
>>> print(handle.slice_by_tensor_index(0, 1).asnumpy())
[1. 2. 3.]
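The semantics of slicing by tensor index (rather than element index) can be sketched in plain NumPy. This is an illustration under the assumption that slice_by_tensor_index(start, end) returns the contiguous region covering tensors start through end - 1; slice_by_tensor_index here is a hypothetical stand-in, not the MindSpore implementation.

```python
import numpy as np

# Illustration only: slice a packed buffer by tensor index.
tensors = [np.array([1, 2, 3], dtype=np.float32),
           np.array([4, 5, 6], dtype=np.float32)]
buffer = np.concatenate(tensors)

# Element offsets where each tensor starts inside the packed buffer.
offsets = np.cumsum([0] + [t.size for t in tensors])

def slice_by_tensor_index(start, end):
    # Return the contiguous region covering tensors start..end-1.
    return buffer[offsets[start]:offsets[end]]

first_tensor_region = slice_by_tensor_index(0, 1)  # memory of tensors[0]
whole_region = slice_by_tensor_index(0, 2)         # memory of both tensors
```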