Migration from a Third-party Framework
Q: How do I load a pre-trained PyTorch model for fine-tuning on MindSpore?
A: Map the PyTorch parameters to their MindSpore counterparts one by one. Because network definitions are flexible, no unified conversion script is provided; customize a script for your scenario. For details, see Advanced Usage of Checkpoint.
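As a minimal sketch of such a per-parameter mapping (the renaming function and `bn_prefixes` argument below are illustrative assumptions, not a MindSpore API; the BatchNorm name pairs reflect the common differences between the two frameworks, but you must verify them against your own network definition):

```python
# Hedged sketch: rename PyTorch parameter names to MindSpore conventions.
# BN_NAME_MAP covers common BatchNorm naming differences; torch_name_to_ms
# and bn_prefixes are illustrative helpers, not part of either framework.
BN_NAME_MAP = {
    "weight": "gamma",              # BatchNorm scale
    "bias": "beta",                 # BatchNorm shift
    "running_mean": "moving_mean",
    "running_var": "moving_variance",
}

def torch_name_to_ms(name, bn_prefixes=("bn",)):
    """Rename one PyTorch parameter name to its assumed MindSpore form."""
    prefix, _, leaf = name.rpartition(".")
    # Only rename the leaf when the owning layer looks like a BatchNorm.
    if any(prefix.split(".")[-1].startswith(p) for p in bn_prefixes):
        leaf = BN_NAME_MAP.get(leaf, leaf)
    return f"{prefix}.{leaf}" if prefix else leaf

# Example: keys of a (fake) PyTorch state_dict renamed for MindSpore.
torch_keys = ["conv1.weight", "bn1.weight", "bn1.bias", "bn1.running_mean"]
ms_keys = [torch_name_to_ms(k) for k in torch_keys]
# Conv weights keep their names; BatchNorm entries are renamed.
```

The renamed keys would then be used to build MindSpore Parameter objects and saved with a checkpoint-writing call; that final step depends on your network and is left to the customized script the answer describes.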
Q: How do I convert a PyTorch dataset to a MindSpore dataset?
A: The custom dataset logic of MindSpore is similar to that of PyTorch. You define a dataset class that implements __init__, __getitem__, and __len__ to read your data, instantiate the class into an object (for example, dataset_generator), and pass the instance to GeneratorDataset (on MindSpore) or DataLoader (on PyTorch). Then you are ready to load the custom dataset. On top of GeneratorDataset, MindSpore further provides map -> batch operations, so users can easily attach custom operations to map and then call batch.
The custom dataset of MindSpore is loaded as follows:
import numpy as np
import mindspore.dataset as ds

# 1. Define the dataset class (with operations such as data augmentation, shuffle, and sampler).
class Mydata:
    def __init__(self):
        np.random.seed(58)
        self.__data = np.random.sample((5, 2))
        self.__label = np.random.sample((5, 1))

    def __getitem__(self, index):
        return (self.__data[index], self.__label[index])

    def __len__(self):
        return len(self.__data)

dataset_generator = Mydata()
dataset = ds.GeneratorDataset(dataset_generator, ["data", "label"], shuffle=False)

# 2. Customize data augmentation.
dataset = dataset.map(operations=pyFunc, …)

# 3. Batch the dataset.
dataset = dataset.batch(batch_size, drop_remainder=True)
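Since GeneratorDataset only requires the __getitem__/__len__ protocol, the class above can be sanity-checked in plain Python before it is handed to the loader. This sketch reproduces the same Mydata definition so it runs with NumPy alone (no MindSpore installation needed); the manual loop mimics how the loader indexes the object:

```python
import numpy as np

# Same random-source dataset as above, repeated here so the sketch is
# self-contained; only NumPy is required for this check.
class Mydata:
    def __init__(self):
        np.random.seed(58)
        self.__data = np.random.sample((5, 2))
        self.__label = np.random.sample((5, 1))

    def __getitem__(self, index):
        return (self.__data[index], self.__label[index])

    def __len__(self):
        return len(self.__data)

dataset_generator = Mydata()

# GeneratorDataset calls __len__ once and __getitem__ per index;
# iterating manually verifies the shapes the loader will receive.
assert len(dataset_generator) == 5
for i in range(len(dataset_generator)):
    data, label = dataset_generator[i]
    assert data.shape == (2,) and label.shape == (1,)
```

If these assertions pass, the same object can be passed to ds.GeneratorDataset with the matching column names ["data", "label"], as shown in the loading steps above.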