Release Notes

MindSpore Golden Stick 0.3.0-alpha Release Notes

Major Features and Improvements

  • [STABLE] The SLB (Searching for Low-Bit Weights in Quantized Neural Networks) QAT algorithm now supports BatchNorm calibration. You can invoke the set_enable_bn_calibration API to enable it (see the sketch after this list). For a network with BatchNorm layers, BatchNorm calibration reduces the accuracy loss caused by the SLB quantization algorithm. (!150)

  • [STABLE] We verified the quantization effect of the SimQAT (Simulated Quantization Aware Training) algorithm and the SLB algorithm on the ResNet network with the ImageNet2012 dataset. For details, please refer to the MindSpore Models README.

  • [STABLE] The SimQAT algorithm now supports inference on the MindSpore Lite backend. We quantized the LeNet network with SimQAT and deployed it on an ARM CPU. For details, please refer to Deployment Effect.
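
A minimal sketch of the new BatchNorm calibration switch. The SlbQuantAwareTraining class name and import path are taken from the mindspore_gs package and may differ across versions, and the toy network is ours; only set_enable_bn_calibration is confirmed by these notes:

```python
import mindspore.nn as nn
from mindspore_gs.quantization.slb import SlbQuantAwareTraining

class ToyNet(nn.Cell):
    """A toy network with a BatchNorm layer, just to exercise the API."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()

    def construct(self, x):
        return self.relu(self.bn(self.conv(x)))

slb = SlbQuantAwareTraining()
slb.set_enable_bn_calibration(True)  # new in this release (!150)
quant_net = slb.apply(ToyNet())      # rewrite into a fake-quantized training net
```

Roughly speaking, BatchNorm calibration re-estimates the BatchNorm statistics after the weights are quantized, which is why it only matters for networks that contain BatchNorm layers.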

API Change

Backwards Compatible Change

  • The SLB algorithm adds the set_enable_bn_calibration interface to enable or disable BatchNorm calibration. (!117)

  • Added a convert interface to the algorithm base class, which converts a training network into an inference network; the network can then be exported to a MindIR file for deployment (see the sketch after this list). For details, please refer to Model Deployment. (!176)

  • Added a set_save_mindir interface to the algorithm base class, which configures whether to automatically export a MindIR file after training. For details, please refer to Model Deployment. (!168)
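
A hedged sketch of how the two new base-class interfaces could be combined. The SimulatedQuantizationAwareTraining class name and import path are assumptions based on the mindspore_gs package, the stand-in network and file names are ours, and the explicit export call is shown only for clarity (set_save_mindir(True) is meant to trigger the export automatically):

```python
import numpy as np
import mindspore as ms
import mindspore.nn as nn
from mindspore_gs.quantization.simulated_quantization import \
    SimulatedQuantizationAwareTraining as SimQAT

# A stand-in network; in practice this would be e.g. LeNet.
network = nn.SequentialCell(
    [nn.Conv2d(1, 6, 5), nn.ReLU(), nn.Flatten(), nn.Dense(6 * 32 * 32, 10)])

algo = SimQAT()
algo.set_save_mindir(True)       # auto-export a MindIR file after training (!168)
quant_net = algo.apply(network)  # insert simulated-quantization nodes

# ... train quant_net as usual, e.g. with mindspore.train.Model ...

# convert() folds the simulated-quantization nodes into a plain
# inference network (!176), which can also be exported manually:
infer_net = algo.convert(quant_net)
dummy_input = ms.Tensor(np.ones((1, 1, 32, 32)), ms.float32)
ms.export(infer_net, dummy_input, file_name="lenet_quant", file_format="MINDIR")
```

The exported MindIR file is what the MindSpore Lite backend consumes for the ARM CPU deployment mentioned under Major Features and Improvements.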

Bug fixes

  • [STABLE] Refactored the SimQAT algorithm code and fixed bugs such as missing activation operators, loss of pre-trained parameters, and redundant simulated quantization operators.

Contributors

Thanks goes to these wonderful people:

liuzhicheng01, fuzhongqian, hangangqiang, yangruoqi713, kevinkunkun.

Contributions of any kind are welcome!