Function Differences with torch.optim.AdamW

torch.optim.AdamW

class torch.optim.AdamW(
    params,
    lr=0.001,
    betas=(0.9, 0.999),
    eps=1e-08,
    weight_decay=0.01,
    amsgrad=False
)

For more information, see torch.optim.AdamW.

mindspore.nn.AdamWeightDecay

class mindspore.nn.AdamWeightDecay(
    params,
    learning_rate=1e-3,
    beta1=0.9,
    beta2=0.999,
    eps=1e-6,
    weight_decay=0.0
)

For more information, see mindspore.nn.AdamWeightDecay.

Usage

PyTorch and MindSpore implement this optimizer with different algorithms. Most notably, torch.optim.AdamW applies bias correction to the first- and second-moment estimates, while mindspore.nn.AdamWeightDecay does not; see the formulas in each framework's official documentation for details.
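
To make the difference concrete, the following is a minimal NumPy sketch of one update step under each framework's published formula (ignoring PyTorch's amsgrad branch). The function names are hypothetical, and this is an illustration of the formulas, not the frameworks' actual implementations.

# Illustrative sketch only: one update step per the published formulas.
import numpy as np

def torch_adamw_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
                     eps=1e-8, weight_decay=0.01):
    # Decoupled weight decay is applied to the weights first.
    w = w * (1 - lr * weight_decay)
    m = beta1 * m + (1 - beta1) * g           # first-moment estimate
    v = beta2 * v + (1 - beta2) * g ** 2      # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)              # bias correction
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

def mindspore_adamweightdecay_step(w, g, m, v, lr=1e-3, beta1=0.9,
                                   beta2=0.999, eps=1e-6, weight_decay=0.0):
    m = beta1 * m + (1 - beta1) * g           # first-moment estimate
    v = beta2 * v + (1 - beta2) * g ** 2      # second-moment estimate
    update = m / (np.sqrt(v) + eps)           # no bias correction here
    update = update + weight_decay * w        # decay folded into the update
    w = w - lr * update
    return w, m, v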

| Category   | Subcategory | PyTorch      | MindSpore     | Difference |
|------------|-------------|--------------|---------------|------------|
| Parameters | Parameter 1 | params       | params        | Same function |
| Parameters | Parameter 2 | lr           | learning_rate | Same function, different parameter names (both default to 1e-3) |
| Parameters | Parameter 3 | betas        | beta1, beta2  | Same function, different parameter names |
| Parameters | Parameter 4 | eps          | eps           | Same function, different default values (1e-8 vs. 1e-6) |
| Parameters | Parameter 5 | weight_decay | weight_decay  | Same function, different default values (0.01 vs. 0.0) |
| Parameters | Parameter 6 | amsgrad      | -             | PyTorch's amsgrad controls whether the AMSGrad variant is applied; MindSpore has no such parameter |
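
Following the mapping in the table, a migration might construct the two optimizers with explicitly matched hyperparameters, as in the sketch below. The variable names (pt_model, ms_net, and the optimizer handles) are hypothetical.

# Hypothetical sketch: matching hyperparameters across the two APIs.
import torch
from mindspore import nn

pt_model = torch.nn.Linear(2, 3)
ms_net = nn.Dense(2, 3)

# PyTorch packs the moment coefficients into the betas tuple.
pt_opt = torch.optim.AdamW(pt_model.parameters(), lr=1e-3,
                           betas=(0.9, 0.999), eps=1e-8, weight_decay=0.01)

# MindSpore takes beta1/beta2 separately; eps and weight_decay are passed
# explicitly because their defaults (1e-6 and 0.0) differ from PyTorch's.
ms_opt = nn.AdamWeightDecay(ms_net.trainable_params(), learning_rate=1e-3,
                            beta1=0.9, beta2=0.999, eps=1e-8,
                            weight_decay=0.01)

Note that even with identical hyperparameters the two optimizers do not produce identical updates, because of the bias-correction difference described above.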

Code Example

# MindSpore
import mindspore
from mindspore import nn

net = nn.Dense(2, 3)
optimizer = nn.AdamWeightDecay(net.trainable_params())
criterion = nn.MAELoss(reduction="mean")

def forward_fn(data, label):
    logits = net(data)
    loss = criterion(logits, label)
    return loss, logits

# Compute both the loss and the gradients w.r.t. the optimizer's parameters.
grad_fn = mindspore.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)

def train_step(data, label):
    (loss, _), grads = grad_fn(data, label)
    optimizer(grads)  # apply the AdamWeightDecay update
    return loss

# PyTorch
import torch

model = torch.nn.Linear(2, 3)
criterion = torch.nn.L1Loss(reduction='mean')
optimizer = torch.optim.AdamW(model.parameters())

def train_step(data, label):
    optimizer.zero_grad()   # clear gradients accumulated by backward()
    output = model(data)
    loss = criterion(output, label)
    loss.backward()         # backpropagate
    optimizer.step()        # apply the AdamW update
    return loss
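
Either train_step above can be driven with a small toy batch; the batch size and shapes below are assumptions for illustration, chosen to match the Dense(2, 3)/Linear(2, 3) layers.

# Hypothetical invocation of the MindSpore train_step with a toy batch.
import numpy as np
from mindspore import Tensor

data = Tensor(np.random.randn(4, 2).astype(np.float32))
label = Tensor(np.random.randn(4, 3).astype(np.float32))
print(train_step(data, label))  # prints the scalar loss

# The PyTorch train_step is driven the same way with torch tensors:
#   data, label = torch.randn(4, 2), torch.randn(4, 3)
#   loss = train_step(data, label)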