mindscience.diffuser.ConditionDiffusionTransformer
- class mindscience.diffuser.ConditionDiffusionTransformer(in_channels, out_channels, cond_channels, hidden_channels, layers, heads, time_token_cond=True, cond_as_token=True, compute_dtype=mstype.float32)[source]
Conditional diffusion model with a Transformer backbone.
- Parameters:
in_channels (int) - Dimension of the input features.
out_channels (int) - Dimension of the output features.
cond_channels (int) - Dimension of the condition features.
hidden_channels (int) - Dimension of the hidden-layer features.
layers (int) - Number of layers in the Transformer block.
heads (int) - Number of attention heads in the Transformer block.
time_token_cond (bool, optional) - Whether to use the timestep as a condition token. Default: True.
cond_as_token (bool, optional) - Whether to use the condition as a token. Default: True.
compute_dtype (mindspore.dtype, optional) - Computation data type. Supports mstype.float32 or mstype.float16. Default: mstype.float32, which means mindspore.float32. (A construction sketch using the non-default options is shown after this list.)
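The following is a minimal illustrative sketch, not part of the official examples, showing the three optional arguments set to their non-default values. Input and output shapes follow the Inputs and Outputs sections below; the internal effect of the token flags is not detailed here.
>>> from mindspore import ops, dtype as mstype
>>> from mindscience.diffuser import ConditionDiffusionTransformer
>>> # Non-default options: do not inject the timestep or the condition as tokens,
>>> # and compute in float16.
>>> model_fp16 = ConditionDiffusionTransformer(in_channels=16, out_channels=16,
...                                            cond_channels=10, hidden_channels=256,
...                                            layers=3, heads=4,
...                                            time_token_cond=False,
...                                            cond_as_token=False,
...                                            compute_dtype=mstype.float16)
>>> x = ops.rand((4, 128, 16))
>>> timestep = ops.randint(0, 1000, (4,))
>>> cond = ops.rand((4, 10))
>>> out = model_fp16(x, timestep, cond)
>>> print(out.shape)
(4, 128, 16)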
- Inputs:
x (Tensor) - Input tensor of the network. Shape: \((batch\_size, sequence\_len, in\_channels)\).
timestep (Tensor) - Timestep input tensor. Shape: \((batch\_size,)\).
condition (Tensor, optional) - Conditioning input tensor. Shape: \((batch\_size, cond\_channels)\). Default: None.
- Outputs:
output (Tensor) - Output tensor. Shape: \((batch\_size, sequence\_len, out\_channels)\).
Examples:
>>> from mindspore import ops
>>> from mindscience.diffuser import ConditionDiffusionTransformer
>>> in_channels, out_channels, cond_channels, hidden_channels = 16, 16, 10, 256
>>> layers, heads, batch_size, seq_len = 3, 4, 8, 256
>>> model = ConditionDiffusionTransformer(in_channels=in_channels,
...                                       out_channels=out_channels,
...                                       cond_channels=cond_channels,
...                                       hidden_channels=hidden_channels,
...                                       layers=layers,
...                                       heads=heads)
>>> x = ops.rand((batch_size, seq_len, in_channels))
>>> cond = ops.rand((batch_size, cond_channels))
>>> timestep = ops.randint(0, 1000, (batch_size,))
>>> output = model(x, timestep, cond)
>>> print(output.shape)
(8, 256, 16)
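Beyond the shape check above, a network like this is typically used as a noise predictor inside a diffusion training step. The sketch below is a toy illustration under assumed names: it uses a fixed placeholder noise level instead of a real timestep-dependent schedule, which would normally be supplied by a separate scheduler (not shown here).
>>> from mindspore import ops
>>> from mindscience.diffuser import ConditionDiffusionTransformer
>>> model = ConditionDiffusionTransformer(in_channels=16, out_channels=16,
...                                       cond_channels=10, hidden_channels=256,
...                                       layers=3, heads=4)
>>> x0 = ops.rand((8, 256, 16))            # clean samples
>>> cond = ops.rand((8, 10))               # conditioning vectors
>>> timestep = ops.randint(0, 1000, (8,))  # random diffusion steps
>>> noise = ops.randn((8, 256, 16))        # Gaussian noise
>>> alpha = 0.5                            # placeholder; a real schedule depends on timestep
>>> x_t = alpha ** 0.5 * x0 + (1 - alpha) ** 0.5 * noise   # noised input
>>> pred = model(x_t, timestep, cond)      # predicted noise, shape (8, 256, 16)
>>> loss = ops.mse_loss(pred, noise)       # simple noise-prediction objective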