mindelec.loss.NetWithLoss

class mindelec.loss.NetWithLoss(net_without_loss, constraints, loss='l2', dataset_input_map=None, mtl_weighted_cell=None, latent_vector=None, latent_reg=0.01)[source]

Encapsulation class of a network with its loss function.

Parameters
  • net_without_loss (Cell) – The training network without loss definition.

  • constraints (Constraints) – The constraints function of the PDE problem.

  • loss (Union[str, dict, Cell]) – The name of the loss function, e.g. “l1”, “l2” and “mae”. Default: “l2”.

  • dataset_input_map (dict) – The input map of the dataset. If None, the first column of the dataset will be used as the network input. Default: None.

  • mtl_weighted_cell (Cell) – Loss weighting algorithm based on multi-task learning uncertainty evaluation (see the construction sketch after this parameter list). Default: None.

  • latent_vector (Parameter) – Tensor of Parameter. The latent vector that encodes the variational parameters in the governing equation. It will be concatenated with the sampled data as the final network input. Default: None.

  • latent_reg (float) – The regularization coefficient of latent vector. Default: 0.01.
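
The optional mtl_weighted_cell and latent_vector arguments are typically built from a multi-task weighting cell and a trainable latent parameter. The following is a minimal construction sketch rather than a prescribed recipe: it assumes net and constraints are built as in the Examples section below, that MTLWeightedLossCell is available from mindelec.architecture, and that the two sub-datasets and the latent size of 16 are illustrative choices.

>>> import numpy as np
>>> from mindspore import Tensor, Parameter
>>> from mindelec.architecture import MTLWeightedLossCell
>>> # assumption: one loss weight per sub-dataset/constraint (2 assumed here)
>>> mtl_cell = MTLWeightedLossCell(num_losses=2)
>>> # assumption: one trainable 16-dim latent vector per sub-dataset
>>> latent = Parameter(Tensor(np.zeros((2, 16), np.float32)), name="latent_vector")
>>> loss_net_mtl = NetWithLoss(net, constraints, loss="l2",
...                            mtl_weighted_cell=mtl_cell,
...                            latent_vector=latent,
...                            latent_reg=0.01)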

Inputs:
  • inputs (Tensor) - Variable-length positional arguments containing the network inputs.

Outputs:

Tensor of shape \((1,)\), the scalar loss value.

Supported Platforms:

Ascend

Examples

>>> import numpy as np
>>> from mindelec.loss import Constraints, NetWithLoss
>>> from mindspore import Tensor, nn
>>> class Net(nn.Cell):
...     def __init__(self, input_dim, output_dim):
...         super(Net, self).__init__()
...         self.fc1 = nn.Dense(input_dim, 64)
...         self.fc2 = nn.Dense(64, output_dim)
...
...     def construct(self, *inputs):
...         x = inputs[0]
...         out = self.fc1(x)
...         out = self.fc2(out)
...         return out
>>> net = Net(3, 3)
>>> # For details about how to build the Constraints, please refer to the tutorial
>>> # document on the official website.
>>> constraints = Constraints(dataset, pde_dict)
>>> loss_network = NetWithLoss(net, constraints)
>>> input_data = Tensor(np.ones([1000, 3]).astype(np.float32) * 0.01)
>>> label = Tensor(np.ones([1000, 3]).astype(np.float32))
>>> output_data = loss_network(input_data, label)
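
The loss network can then be wrapped with a standard MindSpore optimizer for training. A minimal follow-on sketch, under the assumption that an Adam optimizer with a learning rate of 1e-3 is an acceptable choice (neither is mandated by NetWithLoss):

>>> optimizer = nn.Adam(loss_network.trainable_params(), learning_rate=1e-3)
>>> train_network = nn.TrainOneStepCell(loss_network, optimizer)
>>> # one training step: returns the scalar loss and updates the trainable parameters
>>> loss_value = train_network(input_data, label)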