Function Differences with tf.nn.leaky_relu

tf.nn.leaky_relu

tf.nn.leaky_relu(features, alpha=0.2, name=None) -> Tensor

For more information, see tf.nn.leaky_relu.

mindspore.nn.LeakyReLU

class mindspore.nn.LeakyReLU(alpha=0.2)(x) -> Tensor

For more information, see mindspore.nn.LeakyReLU.

Differences

TensorFlow: Applies the Leaky ReLU activation function, where the parameter alpha controls the slope of the function for negative inputs.

MindSpore: MindSpore implements basically the same function as TensorFlow. The main interface difference is that tf.nn.leaky_relu is a function applied directly to the input, while mindspore.nn.LeakyReLU is a Cell that is first instantiated and then called on the input x.
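
As a reference for both APIs, Leaky ReLU keeps positive inputs unchanged and multiplies negative inputs by alpha. A minimal NumPy sketch of this formula (the helper name and sample values below are illustrative, not part of either API):

# NumPy sketch of the Leaky ReLU formula
import numpy as np

def leaky_relu(x, alpha=0.2):
    # Positive values pass through; negative values are scaled by alpha.
    return np.where(x >= 0, x, alpha * x)

print(leaky_relu(np.array([-1.0, 4.0, -8.0])))
# [-0.2  4.  -1.6]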

| Categories | Subcategories | TensorFlow | MindSpore | Differences |
| ---------- | ------------- | ---------- | --------- | ----------- |
| Parameters | Parameter 1   | features   | x         | Same function, different parameter names |
|            | Parameter 2   | alpha      | alpha     | -           |
|            | Parameter 3   | name       | -         | Not involved |

Code Example

The two APIs implement the same function, and with the default alpha=0.2 they produce the same output.

# TensorFlow
import tensorflow as tf

features = tf.constant([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]], dtype=tf.float32)
output = tf.nn.leaky_relu(features).numpy()  # alpha defaults to 0.2
print(output)
# [[-0.2  4.  -1.6]
#  [ 2.  -1.   9. ]]

# MindSpore
import mindspore
from mindspore import Tensor
import mindspore.nn as nn

x = Tensor([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]], mindspore.float32)
m = nn.LeakyReLU()  # alpha defaults to 0.2
output = m(x)
print(output)
# [[-0.2  4.  -1.6]
#  [ 2.  -1.   9. ]]
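
Both APIs also accept a non-default slope through alpha, which maps one-to-one between the two frameworks. A short sketch (alpha=0.5 is an arbitrary illustrative value; with it, negative inputs are halved):

# TensorFlow
import tensorflow as tf

print(tf.nn.leaky_relu(tf.constant([-2.0, 3.0]), alpha=0.5).numpy())
# [-1.  3.]

# MindSpore
import mindspore
from mindspore import Tensor
import mindspore.nn as nn

print(nn.LeakyReLU(alpha=0.5)(Tensor([-2.0, 3.0], mindspore.float32)))
# [-1.  3.]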