mindspore.numpy.amax

mindspore.numpy.amax(a, axis=None, keepdims=False, initial=None, where=True)

Returns the maximum of an array or maximum along an axis.

Note

The NumPy argument out is not supported. On GPU, the supported dtypes are np.float16 and np.float32.

Parameters
  • a (Tensor) – Input data.

  • axis (None or int or tuple of integers, optional) – Default: None. Axis or axes along which to operate. By default, the flattened input is used. If a tuple of integers is given, the maximum is selected over multiple axes rather than over a single axis or all axes.

  • keepdims (boolean, optional) – Default: False. If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

  • initial (scalar, optional) – Default: None. The minimum value of an output element. Must be present to allow computation on an empty slice.

  • where (boolean Tensor, optional) – Default: True. A boolean Tensor that is broadcast to match the dimensions of a, and that selects elements to include in the reduction. If a non-default value is passed, initial must also be provided.

Returns

Tensor or scalar, the maximum of a. If axis is None, the result is a scalar value. With the default keepdims=False, if axis is an int the result is a Tensor of dimension a.ndim - 1, and if axis is a tuple of integers the result has dimension a.ndim - len(axis).

Raises

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.arange(4).reshape((2,2)).astype('float32')
>>> output = np.amax(a)
>>> print(output)
3.0
>>> output = np.amax(a, axis=0)
>>> print(output)
[2. 3.]
>>> output = np.amax(a, axis=1)
>>> print(output)
[1. 3.]
>>> output = np.amax(a, where=np.array([False, True]), initial=-1, axis=0)
>>> print(output)
[-1.  3.]
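To illustrate keepdims, here is a sketch on the same array a: the reduced axis is retained with size one, so the result (expected output per the semantics described above) keeps two dimensions and broadcasts correctly against a:
>>> output = np.amax(a, axis=0, keepdims=True)
>>> print(output)
[[2. 3.]]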