[{"data":1,"prerenderedAt":374},["ShallowReactive",2],{"content-query-MQrR1KWvPk":3},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"date":10,"cover":11,"type":12,"body":13,"_type":368,"_id":369,"_source":370,"_file":371,"_stem":372,"_extension":373},"/technology-blogs/zh/437","zh",false,"","技术干货 | 基于MindSpore更好的理解Focal Loss","今天更新一下恺明大神的Focal Loss，它是 Kaiming 大神团队在他们的论文Focal Loss for Dense Object Detection提出来的损失函数，利用它改善了图像物体检测的效果。","2021-04-12","https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2021/04/12/16cdf10461bf40a1b8a8f196c16a6106.png","technology-blogs",{"type":14,"children":15,"toc":359},"root",[16,24,30,35,49,63,68,80,95,100,105,109,114,119,124,129,148,152,157,162,167,172,177,182,187,191,196,201,206,211,216,234,239,244,249,256,261,265,270,281,285,289,293,302,307,312,322,327,335,339,344,349,354],{"type":17,"tag":18,"props":19,"children":21},"element","h1",{"id":20},"技术干货-基于mindspore更好的理解focal-loss",[22],{"type":23,"value":8},"text",{"type":17,"tag":25,"props":26,"children":27},"p",{},[28],{"type":23,"value":29},"本文来源于：知乎",{"type":17,"tag":25,"props":31,"children":32},{},[33],{"type":23,"value":34},"作者：李嘉琪",{"type":17,"tag":25,"props":36,"children":37},{},[38,40],{"type":23,"value":39},"今天更新一下恺明大神的Focal Loss，它是 Kaiming 大神团队在他们的论文Focal Loss for Dense Object Detection提出来的损失函数，利用它改善了图像物体检测的效果。ICCV2017RBG和Kaiming大神的新作（",{"type":17,"tag":41,"props":42,"children":46},"a",{"href":43,"rel":44},"https://arxiv.org/pdf/1708.02002.pdf%EF%BC%89%E3%80%82",[45],"nofollow",[47],{"type":23,"value":48},"https://arxiv.org/pdf/1708.02002.pdf）。",{"type":17,"tag":50,"props":51,"children":52},"ul",{},[53],{"type":17,"tag":54,"props":55,"children":56},"li",{},[57],{"type":17,"tag":58,"props":59,"children":60},"strong",{},[61],{"type":23,"value":62},"使用场景",{"type":17,"tag":25,"props":64,"children":65},{},[66],{"type":23,"value":67},"最近一直在做表情相关的方向，这个领域的 DataSet 
1. **Design a sampling strategy**, usually by resampling the under-represented class
2. **Design the loss**, usually by assigning different weights to the different classes

I have used both strategies; this post covers Focal Loss, which belongs to the second.

## Theoretical Analysis

### Paper Analysis

We know that object detection pipelines generally fall into two camps. One is the two-stage detectors (classics such as Faster R-CNN and R-FCN, which need region proposals); the other is the one-stage detectors (such as SSD and the YOLO family, which skip region proposals and regress directly).

The first camp reaches high accuracy but is slow. Speed can be improved by cutting the number of proposals or lowering the input resolution, but the gain is not qualitative.

The second camp is fast, but its accuracy trails the first.

So the goal is: focal loss starts from the wish that a one-stage detector can **reach the accuracy of a two-stage detector** while **keeping its original speed**.
## So Why, and with What Result?

What causes this gap? The reason is class imbalance: an imbalance between positive and negative samples.

We know that in object detection a single image can generate thousands of candidate locations, yet only a small fraction of them contain an object, and this creates the class imbalance. What are its consequences? Quoting the paper:

(1) training is inefficient as most locations are easy negatives that contribute no useful learning signal; (2) en masse, the easy negatives can overwhelm training and lead to degenerate models.

In other words, the negatives (background samples) are so numerous that they dominate the total loss, and most of them are easy to classify, so the model is optimized in a direction we do not want. The network learns nothing useful and cannot classify objects accurately.

Earlier algorithms have also tackled class imbalance, for example OHEM (online hard example mining). Its main idea is captured by one sentence from the paper: "In OHEM each example is scored by its loss, non-maximum suppression (nms) is then applied, and a minibatch is constructed with the highest-loss examples." OHEM increases the weight of misclassified samples, but it ignores the easily classified ones.

To address class imbalance, the authors therefore propose a new loss function, Focal Loss, obtained by modifying the standard cross-entropy loss. It reduces the weight of easy samples so that training focuses on the hard ones. To demonstrate its effectiveness, the authors design a dense detector, RetinaNet, and train it with Focal Loss. Experiments show that RetinaNet matches the speed of one-stage detectors while reaching the accuracy of two-stage detectors.

## Formula Explanation

Before introducing focal loss, let us look at cross-entropy loss, taking binary classification as the example. The original classification loss is a direct sum of the cross-entropies of the training samples, i.e. every sample carries the same weight:

$$
\mathrm{CE}(p, y) =
\begin{cases}
-\log(p) & \text{if } y = 1 \\
-\log(1-p) & \text{otherwise}
\end{cases}
\tag{1}
$$
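As a quick numerical check of formula (1), here is a minimal sketch in plain Python (the probabilities 0.6 and 0.9 are the ones discussed in the text):

```python
import math

def binary_ce(p, y):
    """Cross-entropy of formula (1): p is the predicted probability of class 1, y is +1 or -1."""
    return -math.log(p) if y == 1 else -math.log(1.0 - p)

def pt(p, y):
    """p_t of formula (2): the probability assigned to the true class."""
    return p if y == 1 else 1.0 - p

# A hesitant correct prediction is penalized more than a confident one
print(round(binary_ce(0.6, 1), 4))  # 0.5108
print(round(binary_ce(0.9, 1), 4))  # 0.1054
# CE(p, y) equals -log(p_t)
assert abs(binary_ce(0.6, 1) + math.log(pt(0.6, 1))) < 1e-12
```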
Since this is binary classification, p is the predicted probability that the sample belongs to class 1 (ranging from 0 to 1), and y is the label, taking values in {+1, -1}. When the true label is 1 (y = 1) and some sample x is predicted to belong to class 1 with probability p = 0.6, the loss is -log(0.6); note that this loss is always >= 0. If p = 0.9, the loss is -log(0.9), so the loss at p = 0.6 is larger than at p = 0.9, which is easy to understand. Binary classification serves only as the example here; the multi-class case follows by analogy.

For notational convenience we write p_t in place of p, as in formula (2); this p_t is the horizontal axis of Figure 1 below. That is, p_t denotes the probability assigned to the true class:

$$
p_t =
\begin{cases}
p & \text{if } y = 1 \\
1-p & \text{otherwise}
\end{cases}
\tag{2}
$$

so that formula (1) can be written as CE(p_t) = -log(p_t).

Next comes a basic improvement on cross-entropy, which also serves as the baseline of the paper's experiments. Since positives and negatives are so unbalanced when training a one-stage detector, a common remedy is to weight them: lower the weight of the frequent negatives and raise the weight of the scarce positives. We can therefore set a factor α to control how positives and negatives share the total loss, taking a small α for the over-represented (negative) class:

$$
\mathrm{CE}(p_t) = -\alpha_t \log(p_t)
\tag{3}
$$

Formula (3) can balance positives against negatives, but it cannot distinguish easy samples from hard ones. Hence Focal Loss, where γ is called the focusing parameter (γ >= 0) and (1 - p_t)^γ the modulating factor:

$$
\mathrm{FL}(p_t) = -(1-p_t)^{\gamma} \log(p_t)
\tag{4}
$$

Why add this modulating factor? The aim is to reduce the weight of easy samples so that the model concentrates on the hard ones during training.

Experimentally, this is plotted in Figure 1 below, with p_t on the horizontal axis and loss on the vertical axis. CE(p_t) is the standard cross-entropy and FL(p_t) is the modified cross-entropy used in focal loss. The blue curve with γ = 0 in Figure 1 is the standard cross-entropy loss.
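Formula (4) and the effect of the modulating factor can be sketched in a few lines of plain Python (γ = 2, the paper's best setting; the CE/FL ratio shows how strongly each example is down-weighted):

```python
import math

def focal_loss(pt, gamma=2.0, alpha=1.0):
    """FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t): formula (4), with the alpha of formula (3)."""
    return -alpha * (1.0 - pt) ** gamma * math.log(pt)

def ce(pt):
    """Standard cross-entropy CE(p_t) = -log(p_t)."""
    return -math.log(pt)

for p in (0.1, 0.5, 0.9, 0.968):
    # Ratio of the standard loss to the focal loss at this p_t
    print(p, round(ce(p) / focal_loss(p), 1))
# ratios: 1.2, 4.0, 100.0, 976.6 -- hard examples are barely touched,
# easy examples (p_t close to 1) are down-weighted by orders of magnitude
```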
![Figure 1](https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2021/04/12/9fae1e168cd949c0bd3bbbc61459b48a.png)

This solves both problems at once: the positive/negative imbalance and the easy/hard imbalance.

## Conclusion

The authors identify class imbalance as the main obstacle keeping one-stage methods from surpassing top-performing two-stage methods. To remove it they propose focal loss, which inserts a modulating term into the cross-entropy so that learning focuses on hard examples and the weight of the massive number of easy negatives is reduced. It **solves the positive/negative imbalance and the distinction between easy and hard samples at the same time**.

## Implementation in MindSpore

Let us look at the code implementing Focal Loss based on MindSpore:

```python
import mindspore
import mindspore.common.dtype as mstype
from mindspore.common.tensor import Tensor
from mindspore.common.parameter import Parameter
from mindspore.ops import operations as P
from mindspore.ops import functional as F
from mindspore import nn
# _Loss and Validator are internal MindSpore 1.x helpers that this snippet relies on
from mindspore.nn.loss.loss import _Loss
from mindspore._checkparam import Validator as validator

class FocalLoss(_Loss):

    def __init__(self, weight=None, gamma=2.0, reduction='mean'):
        super(FocalLoss, self).__init__(reduction=reduction)
        # Validate gamma, the focusing parameter γ >= 0 (the exponent of the modulating factor)
        self.gamma = validator.check_value_type("gamma", gamma, [float])
        if weight is not None and not isinstance(weight, Tensor):
            raise TypeError("The type of weight should be Tensor, but got {}.".format(type(weight)))
        self.weight = weight
        # MindSpore operators used below
        self.expand_dims = P.ExpandDims()
        self.gather_d = P.GatherD()
        self.squeeze = P.Squeeze(axis=1)
        self.tile = P.Tile()
        self.cast = P.Cast()

    def construct(self, predict, target):
        targets = target
        # Validate the inputs (these helpers are internal checks in MindSpore's source)
        _check_ndim(predict.ndim, targets.ndim)
        _check_channel_and_shape(targets.shape[1], predict.shape[1])
        _check_predict_channel(predict.shape[1])

        # Reshape logits and target to num_batch * num_class * num_voxels
        if predict.ndim > 2:
            predict = predict.view(predict.shape[0], predict.shape[1], -1)  # N,C,H,W => N,C,H*W
            targets = targets.view(targets.shape[0], targets.shape[1], -1)  # N,1,H,W => N,1,H*W or N,C,H*W
        else:
            predict = self.expand_dims(predict, 2)  # N,C => N,C,1
            targets = self.expand_dims(targets, 2)  # N,1 => N,1,1 or N,C,1

        # Compute log probabilities
        log_probability = nn.LogSoftmax(1)(predict)
        # Keep only the log probability of the ground-truth class of each voxel
        if target.shape[1] == 1:
            log_probability = self.gather_d(log_probability, 1, self.cast(targets, mindspore.int32))
            log_probability = self.squeeze(log_probability)

        # Recover the probabilities
        probability = F.exp(log_probability)

        if self.weight is not None:
            convert_weight = self.weight[None, :, None]  # C => 1,C,1
            convert_weight = self.tile(convert_weight, (targets.shape[0], 1, targets.shape[2]))  # 1,C,1 => N,C,H*W
            if target.shape[1] == 1:
                convert_weight = self.gather_d(convert_weight, 1, self.cast(targets, mindspore.int32))  # select the weights => N,1,H*W
                convert_weight = self.squeeze(convert_weight)  # N,1,H*W => N,H*W
            # Multiply the log probabilities by their weights
            probability = log_probability * convert_weight
        # Compute the loss over the mini-batch
        weight = F.pows(-probability + 1.0, self.gamma)
        if target.shape[1] == 1:
            loss = (-weight * log_probability).mean(axis=1)  # N
        else:
            loss = (-weight * targets * log_probability).mean(axis=-1)  # N,C

        return self.get_loss(loss)
```

Usage is as follows:

```python
from mindspore.common import dtype as mstype
from mindspore import nn
from mindspore import Tensor

predict = Tensor([[0.8, 1.4], [0.5, 0.9], [1.2, 0.9]], mstype.float32)
target = Tensor([[1], [1], [0]], mstype.int32)
focalloss = nn.FocalLoss(weight=Tensor([1, 2]), gamma=2.0, reduction='mean')
output = focalloss(predict, target)
print(output)
# 0.33365273
```

## Two Important Properties of Focal Loss

1. When a sample is misclassified, p_t is small, so the modulating factor (1 - p_t)^γ is close to 1 and the loss is essentially unchanged compared with plain cross-entropy. When p_t → 1 (the sample is correctly classified and easy), the modulating factor approaches 0, so well-classified samples are down-weighted and contribute very little to the total loss.
2. When γ = 0, focal loss reduces to the ordinary cross-entropy loss; as γ increases, the effect of the modulating factor grows. The focusing parameter γ smoothly adjusts the rate at which easy samples are down-weighted: a larger γ strengthens the modulating factor, and the experiments found γ = 2 to work best. Intuitively, the modulating factor reduces the loss contribution of easy samples and widens the range in which a sample receives low loss. For a fixed γ, say γ = 2, an easy example with p_t = 0.9 incurs a loss 100+ times smaller than under standard cross-entropy, and 1000+ times smaller at p_t = 0.968, whereas for a hard example (p_t < 0.5) the loss shrinks by at most a factor of 4. The relative weight of hard examples is therefore raised substantially, increasing the importance of misclassified samples.

These two properties are the core of Focal Loss: in essence, it uses a suitable function to measure how much hard and easy samples contribute to the total loss.
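Putting the pieces together, here is a minimal plain-Python sketch of the same per-sample computation (softmax, pick the true-class probability, modulate); it illustrates the mechanics on the logits from the usage example above, without the class weights and without claiming to reproduce `nn.FocalLoss` bit-for-bit:

```python
import math

def focal_loss_batch(logits, labels, gamma=2.0, class_weight=None):
    """Multi-class focal loss: mean over samples of -w_c * (1 - p_t)^gamma * log(p_t)."""
    total = 0.0
    for row, y in zip(logits, labels):
        # Numerically stable softmax over the classes of one sample
        m = max(row)
        exps = [math.exp(v - m) for v in row]
        pt = exps[y] / sum(exps)              # probability of the true class
        w = class_weight[y] if class_weight else 1.0
        total += -w * (1.0 - pt) ** gamma * math.log(pt)
    return total / len(logits)

loss = focal_loss_batch([[0.8, 1.4], [0.5, 0.9], [1.2, 0.9]], [1, 1, 0])
print(round(loss, 4))  # 0.0793, far below the plain cross-entropy of the same batch
```

Setting `gamma=0.0` recovers ordinary cross-entropy, which is a handy sanity check for property 2.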