[{"data":1,"prerenderedAt":647},["ShallowReactive",2],{"content-query-8EW7p6wUIo":3},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"date":10,"cover":11,"type":12,"category":13,"body":14,"_type":641,"_id":642,"_source":643,"_file":644,"_stem":645,"_extension":646},"/technology-blogs/zh/618","zh",false,"","【跟着小Mi一起机器学习吧！】多变量线性回归（二）","什么是机器学习，什么是机器学习，如果你想知道什么是机器学习，那么小Mi带你一起研究！","2021-06-16","https://obs-mindspore-file.obs.cn-north-4.myhuaweicloud.com/file/2021/06/16/28753fe659fd46cf9d0d57d839b3502a.png","technology-blogs","基础知识",{"type":15,"children":16,"toc":635},"root",[17,25,54,68,75,83,116,123,128,135,140,145,152,159,188,194,223,228,235,275,350,355,362,373,380,385,392,397,404,414,421,431,466,471,516,565,598,603,608,615,620,630],{"type":18,"tag":19,"props":20,"children":22},"element","h1",{"id":21},"跟着小mi一起机器学习吧多变量线性回归二",[23],{"type":24,"value":8},"text",{"type":18,"tag":26,"props":27,"children":28},"p",{},[29,31,40,46,52],{"type":24,"value":30},"几天不见，甚是想念！",{"type":18,"tag":32,"props":33,"children":37},"a",{"href":34,"rel":35},"https://bbs.huaweicloud.com/forum/thread-132013-1-1.html",[36],"nofollow",[38],{"type":24,"value":39},"小",{"type":18,"tag":32,"props":41,"children":43},{"href":34,"rel":42},[36],[44],{"type":24,"value":45},"Mi",{"type":18,"tag":32,"props":47,"children":49},{"href":34,"rel":48},[36],[50],{"type":24,"value":51},"系列的活动",{"type":24,"value":53},"正在如火如荼地进行中，小Mi看到大伙儿的热情，动力更加十足，这不又迫不及待地更新来了！",{"type":18,"tag":26,"props":55,"children":56},{},[57,59,66],{"type":24,"value":58},"在上期的",{"type":18,"tag":32,"props":60,"children":63},{"href":61,"rel":62},"https://bbs.huaweicloud.com/forum/thread-132434-1-1.html",[36],[64],{"type":24,"value":65},"多变量线性回归",{"type":24,"value":67},"介绍中，我们学习了多维特征、多变量的梯度下降法以及在实现梯度下降过程中的特征缩放和如何选择学习率这两个技巧，今天小Mi在其基础上，继续带领大家学习多项式回归、正规方程法以及介绍正规方程的不可逆性。好啦，废话不多说啦，我们继续开始吧！",{"type":18,"tag":69,"props":70,"children":72},"h3",{"id":71},"_5-特征和多项式回归",[73],{"type":24,"value":74},"5 
特征和多项式回归",{"type":18,"tag":26,"props":76,"children":77},{},[78],{"type":18,"tag":79,"props":80,"children":82},"img",{"alt":7,"src":81},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/180336hddybtay3org3vhg.png",[],{"type":18,"tag":26,"props":84,"children":85},{},[86,88,92,94,98,100,103,105,108,110,114],{"type":24,"value":87},"依旧是以预测房价为例，假设我们有两个特征，分别是房子的临街宽度和纵向深度。这就是我们想要卖出的房子的图片，临街宽度其实就是你拥有的土地的宽度，而这所房子的纵向深度就是这块土地的进深（理解为土地的长度可能更直观一点），因此我们便有两个特征，临街宽度和纵深，这样我们就可以建立一个线性回归模型，其中临街宽度是第一个特征",{"type":18,"tag":79,"props":89,"children":91},{"alt":7,"src":90},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/180427sqjf2sopygu5wqsf.png",[],{"type":24,"value":93},"，纵深是第二个特征",{"type":18,"tag":79,"props":95,"children":97},{"alt":7,"src":96},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/1804479dxtyqmduswmlrox.png",[],{"type":24,"value":99},"。这里需要注意的是，当运用线性回归时，不一定非要用给出的",{"type":18,"tag":79,"props":101,"children":102},{"alt":7,"src":90},[],{"type":24,"value":104},"和",{"type":18,"tag":79,"props":106,"children":107},{"alt":7,"src":96},[],{"type":24,"value":109},"作为特征，换个角度我们还可以自己创造新的特征！（是不是脑洞大开！）因此，如果要预测房子的价格，小Mi可能会创造一个新的特征，可以称之为",{"type":18,"tag":79,"props":111,"children":113},{"alt":7,"src":112},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/180557v0tprkwbjmp2xjtv.png",[],{"type":24,"value":115},"，即临街宽度与纵深的乘积，也就是房屋的面积，这样一来就只会用到一个特征：",{"type":18,"tag":26,"props":117,"children":118},{},[119],{"type":18,"tag":79,"props":120,"children":122},{"alt":7,"src":121},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/180621lpcapmtfs3iwo4xd.png",[],{"type":18,"tag":26,"props":124,"children":125},{},[126],{"type":24,"value":127},"这边小Mi需要提醒大家注意的是，有时候不一定需要直接使用临街宽度和纵深这两个一开始给出的特征，有时通过定义新的特征，你可能会得到一个更好的模型。",{"type":18,"tag":26,"props":129,"children":130},{},[131],{"type":18,"tag":79,"props":132,"children":134},{"alt":7,"src":133},"https://bbs-img.huaweicloud.com/data/forums/attachmen
t/forum/202106/16/180731l3zif2odoe5yryd3.png",[],{"type":18,"tag":26,"props":136,"children":137},{},[138],{"type":24,"value":139},"就拿上图来说，有这样一个住房价格的数据集，可能会有多个不同的模型用于拟合，选择之一可能是二次模型，因为直线似乎并不能很好地拟合这些数据。用二次模型去拟合后，最后会发现二次函数似乎也不太合理：因为二次函数最终会降下来，而随着土地面积的增加，房子的价格显然不应该下降。那么小Mi又选择使用三次方的函数，进行数据拟合发现，上图中绿色的线对这个数据集拟合得更好，因为它不会在最后下降。",{"type":18,"tag":26,"props":141,"children":142},{},[143],{"type":24,"value":144},"观察曲线图我们可以知道，通常情况下线性回归并不会适用于所有数据，有时我们需要曲线方程来适应我们的数据。那么，我们又应该如何将模型与数据进行拟合呢？使用多元线性回归的方法，我们可以对算法做一个简单的修改来实现它，按照我们之前假设的形式，我们知道如何拟合，就像这样：",{"type":18,"tag":26,"props":146,"children":147},{},[148],{"type":18,"tag":79,"props":149,"children":151},{"alt":7,"src":150},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/181011b03gbqnnp7dukbkl.png",[],{"type":18,"tag":26,"props":153,"children":154},{},[155],{"type":18,"tag":79,"props":156,"children":158},{"alt":7,"src":157},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/181024pwle3ioo2qwktfv0.png",[],{"type":18,"tag":26,"props":160,"children":161},{},[162,164,167,169,172,174,178,180,186],{"type":24,"value":163},"如果我们想拟合这个三次模型，因为是预测一栋房子的价格，那么对应起来，特征",{"type":18,"tag":79,"props":165,"children":166},{"alt":7,"src":90},[],{"type":24,"value":168},"设为房子的面积，第二个特征",{"type":18,"tag":79,"props":170,"children":171},{"alt":7,"src":96},[],{"type":24,"value":173},"设为房屋面积的平方，将第三个特征",{"type":18,"tag":79,"props":175,"children":177},{"alt":7,"src":176},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/181130ur1kzgtqpkrrpyim.png",[],{"type":24,"value":179},"设为房屋面积的立方，仅仅通过将这三个特征这样设置，然后再应用线性回归的方法就可以拟合这个模型。需要留意的是，如果房子的大小范围在1～",{"type":18,"tag":181,"props":182,"children":183},"span",{},[184],{"type":24,"value":185},"100之间，那么房子面积的平方的范围就是1～",{"type":24,"value":187},"10000，而第三个特征，立方的范围则是1到10的6次方，因此这三个特征的范围有很大的不同，这时如果使用梯度下降法，特征缩放将会非常重要。",{"type":18,"tag":69,"props":189,"children":191},{"id":190},"_6-正规方程",[192],{"type":24,"value":193},"6 
正规方程",{"type":18,"tag":26,"props":195,"children":196},{},[197,199,203,205,209,211,215,217,221],{"type":24,"value":198},"到目前为止，在涉及线性回归算法时我们通常会选用梯度下降法来最小化代价函数",{"type":18,"tag":79,"props":200,"children":202},{"alt":7,"src":201},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/181207pohauzzmr0rowiii.png",[],{"type":24,"value":204},"。使用迭代算法，经过梯度下降的多次迭代，来收敛到全局最小值。然而在面对某些线性回归问题时，正规方程法可能是一个更好的方法来帮助我们求得参数",{"type":18,"tag":79,"props":206,"children":208},{"alt":7,"src":207},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/181237rc6qucxlbxosqtly.png",[],{"type":24,"value":210},"的最优值。总的来说，其提供了一种求",{"type":18,"tag":79,"props":212,"children":214},{"alt":7,"src":213},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/181244xnrhmpe71kifqwva.png",[],{"type":24,"value":216},"的解析方法，不需要运行迭代算法，而是可以直接一次性地求解",{"type":18,"tag":79,"props":218,"children":220},{"alt":7,"src":219},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/181251oo7buwyd36sk7d1k.png",[],{"type":24,"value":222},"的最优值，所以说基本上只需要一步就可以得到最优值。（是不是很简单粗暴！）",{"type":18,"tag":26,"props":224,"children":225},{},[226],{"type":24,"value":227},"那么现在小Mi带大家先对这个算法有一个直观的理解，上例子！",{"type":18,"tag":26,"props":229,"children":230},{},[231],{"type":18,"tag":79,"props":232,"children":234},{"alt":7,"src":233},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/181318r7i7zyodpws5zr2f.png",[],{"type":18,"tag":26,"props":236,"children":237},{},[238,240,243,245,248,250,253,255,258,260,263,265,268,270,273],{"type":24,"value":239},"假设有一个非常简单的代价函数",{"type":18,"tag":79,"props":241,"children":242},{"alt":7,"src":201},[],{"type":24,"value":244},"，其实就是关于实数",{"type":18,"tag":79,"props":246,"children":247},{"alt":7,"src":207},[],{"type":24,"value":249},"的函数（",{"type":18,"tag":79,"props":251,"children":252},{"alt":7,"src":207},[],{"type":24,"value":254},"只是一个标量或者说只是一个实数值，是一个数字，并不是向量哦）。假设代价函数",{"type":18,"tag":79,"props":256,"children":257},{"alt":7,"src
":201},[],{"type":24,"value":259},"是这个实参数的二次函数，那么如何最小化一个二次函数呢？这个问题应该大家都会解决，是不是！这时候只需要对J求关于",{"type":18,"tag":79,"props":261,"children":262},{"alt":7,"src":207},[],{"type":24,"value":264},"的导数，并且将导数置零，这样就可以求出使得",{"type":18,"tag":79,"props":266,"children":267},{"alt":7,"src":201},[],{"type":24,"value":269},"最小的",{"type":18,"tag":79,"props":271,"children":272},{"alt":7,"src":207},[],{"type":24,"value":274},"值。",{"type":18,"tag":26,"props":276,"children":277},{},[278,280,283,285,288,290,294,296,300,302,306,308,312,314,318,320,324,326,329,331,334,335,338,340,343,345,348],{"type":24,"value":279},"当然在实际情况中，",{"type":18,"tag":79,"props":281,"children":282},{"alt":7,"src":207},[],{"type":24,"value":284},"并不是一个实数，而是一个n+1维的参数向量。而代价函数",{"type":18,"tag":79,"props":286,"children":287},{"alt":7,"src":201},[],{"type":24,"value":289},"则是这个向量的函数，也就是",{"type":18,"tag":79,"props":291,"children":293},{"alt":7,"src":292},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/181617klzdtd4vqcidjt6j.png",[],{"type":24,"value":295},"到",{"type":18,"tag":79,"props":297,"children":299},{"alt":7,"src":298},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/181626z1jollrottbyc4nu.png",[],{"type":24,"value":301},"的函数，这时候我们应该如何最小化这个代价函数呢？实际上，有一个方法能做到，就是逐个对参数",{"type":18,"tag":79,"props":303,"children":305},{"alt":7,"src":304},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/181653j5f7gxcup3vf0vek.png",[],{"type":24,"value":307},"求J的偏导数，并且把他们全部置零，求出",{"type":18,"tag":79,"props":309,"children":311},{"alt":7,"src":310},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/181709mqx4g9taizlzsd9v.png",[],{"type":24,"value":313},"，",{"type":18,"tag":79,"props":315,"children":317},{"alt":7,"src":316},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/181723zraxrjslmmf5abcl.png",[],{"type":24,"value":319},"，...,一直到",{"type":18,"tag":79,"props":321,"children":323},{"alt":7,"src":322},"https://bbs-img.huaweic
loud.com/data/forums/attachment/forum/202106/16/181738qal9hssdw4jkdmls.png",[],{"type":24,"value":325},"的值，这样就能得到最小化代价函数J的",{"type":18,"tag":79,"props":327,"children":328},{"alt":7,"src":207},[],{"type":24,"value":330},"值。但是如果真的手动求解出参数",{"type":18,"tag":79,"props":332,"children":333},{"alt":7,"src":310},[],{"type":24,"value":295},{"type":18,"tag":79,"props":336,"children":337},{"alt":7,"src":322},[],{"type":24,"value":339},"，这个偏微分最终可能很复杂，这样不仅浪费时间还很费事。如何求出使得代价函数",{"type":18,"tag":79,"props":341,"children":342},{"alt":7,"src":201},[],{"type":24,"value":344},"最小化的",{"type":18,"tag":79,"props":346,"children":347},{"alt":7,"src":207},[],{"type":24,"value":349},"值这个过程是需要了解的。",{"type":18,"tag":26,"props":351,"children":352},{},[353],{"type":24,"value":354},"再上一个例子！假如有4个训练样本：",{"type":18,"tag":26,"props":356,"children":357},{},[358],{"type":18,"tag":79,"props":359,"children":361},{"alt":7,"src":360},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/181910le401dhebliscoxi.png",[],{"type":18,"tag":26,"props":363,"children":364},{},[365,367,371],{"type":24,"value":366},"为了实现正规方程法，在数据集中，假设这四个训练样本是所有数据，在数据集中需要加上一列对应额外特征变量的",{"type":18,"tag":79,"props":368,"children":370},{"alt":7,"src":369},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/181932f2swntgnp1mb0ltl.png",[],{"type":24,"value":372},"，它的取值永远是1。接下来我们要做的就是构建一个矩阵X，这个矩阵基本包含了训练样本所有的特征变量，同时对y值进行类似的操作，构建向量y。",{"type":18,"tag":26,"props":374,"children":375},{},[376],{"type":18,"tag":79,"props":377,"children":379},{"alt":7,"src":378},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/182034bd0poallhknaeg3o.png",[],{"type":18,"tag":26,"props":381,"children":382},{},[383],{"type":24,"value":384},"即：",{"type":18,"tag":26,"props":386,"children":387},{},[388],{"type":18,"tag":79,"props":389,"children":391},{"alt":7,"src":390},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/1819583gjsqfqr3nqjhl1o.png",[],{"type":18,"tag":26,"props":393,"children
":394},{},[395],{"type":24,"value":396},"运用正规方程方法求解参数：",{"type":18,"tag":26,"props":398,"children":399},{},[400],{"type":18,"tag":79,"props":401,"children":403},{"alt":7,"src":402},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/182248zw568gpv5scgrbdj.png",[],{"type":18,"tag":26,"props":405,"children":406},{},[407,409,412],{"type":24,"value":408},"通常情况来看，我们可以知道X是一个m*（n+1）维矩阵，y会是一个m维向量，其中m是训练样本数量，n是特征变量数，由于我们多加了一列额外特征变量，所以其实是n+1维。最后，用矩阵X和向量y来计算，从而得到",{"type":18,"tag":79,"props":410,"children":411},{"alt":7,"src":207},[],{"type":24,"value":413},"的解：",{"type":18,"tag":26,"props":415,"children":416},{},[417],{"type":18,"tag":79,"props":418,"children":420},{"alt":7,"src":419},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/182423vq5e5lj2d1nblbek.png",[],{"type":18,"tag":26,"props":422,"children":423},{},[424,426,429],{"type":24,"value":425},"X转置乘以X的逆乘以X转置乘以向量y，这样就能够得到使得代价函数最小化的",{"type":18,"tag":79,"props":427,"children":428},{"alt":7,"src":207},[],{"type":24,"value":430},"。",{"type":18,"tag":26,"props":432,"children":433},{},[434,436,440,442,446,448,452,454,458,460,464],{"type":24,"value":435},"而就是更加通用的形式为：假设我们有m个样本，",{"type":18,"tag":79,"props":437,"children":439},{"alt":7,"src":438},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/182528jaathrkvs6exsxci.png",[],{"type":24,"value":441},"一直到",{"type":18,"tag":79,"props":443,"children":445},{"alt":7,"src":444},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/182550x8j289fjaqrrubjn.png",[],{"type":24,"value":447},"和n个特征变量。所以每一个训练样本",{"type":18,"tag":79,"props":449,"children":451},{"alt":7,"src":450},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/182607bobnqzrdnhr5ibgk.png",[],{"type":24,"value":453},"可能是这样的一个向量，是n+1维的。每个训练样本给出这样的n+1维特征向量，构建设计矩阵X的方法就是取第一个训练样本，也就是一个向量，取它的转置，最后是一个扁长型的，让",{"type":18,"tag":79,"props":455,"children":457},{"alt":7,"src":456},"https://bbs-img.huaweicloud.com/data/forums/attachmen
t/forum/202106/16/182813eqqe8zq0oic5sily.png",[],{"type":24,"value":459},"转置作为矩阵X的第一行，然后把第二个训练样本",{"type":18,"tag":79,"props":461,"children":463},{"alt":7,"src":462},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/182853ak1tjjp6qs8oeqh9.png",[],{"type":24,"value":465},"进行转置，作为X的第二行，以此类推，直到最后一个训练样本取它的转置，作为矩阵X的最后一行，这就是矩阵X，一个m*（n+1）维矩阵。",{"type":18,"tag":26,"props":467,"children":468},{},[469],{"type":24,"value":470},"举个例子：",{"type":18,"tag":26,"props":472,"children":473},{},[474,476,480,482,486,488,491,493,497,499,503,505,509,511,515],{"type":24,"value":475},"假如我只有一个特征变量",{"type":18,"tag":79,"props":477,"children":479},{"alt":7,"src":478},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/182937itqb2qofc90wih9c.png",[],{"type":24,"value":481},"，除了",{"type":18,"tag":79,"props":483,"children":485},{"alt":7,"src":484},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/182951dkmovxzsbuixvciq.png",[],{"type":24,"value":487},"之外只有一个特征变量，而",{"type":18,"tag":79,"props":489,"children":490},{"alt":7,"src":484},[],{"type":24,"value":492},"始终为1，如果我的特征向量",{"type":18,"tag":79,"props":494,"children":496},{"alt":7,"src":495},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/183028lobn8fyboo1zjimo.png",[],{"type":24,"value":498},"等于某个实际的特征变量，比如说房屋大小，那么我的X会是这样：",{"type":18,"tag":79,"props":500,"children":502},{"alt":7,"src":501},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/183052seay7mfsxbmlxqej.png",[],{"type":24,"value":504},"，这样就是一个m*2的矩阵。而向量y则是把训练集中所有房子的价格放在一起，得到一个m维的向量",{"type":18,"tag":79,"props":506,"children":508},{"alt":7,"src":507},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/183138lv1o1edezgflhf4v.png",[],{"type":24,"value":510},"。最后，构建出矩阵X和向量y，就可以求出结果",{"type":18,"tag":79,"props":512,"children":514},{"alt":7,"src":513},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/183219q15ay7s9cbj99ek0.png",[],{"type":24,"va
lue":430},{"type":18,"tag":26,"props":517,"children":518},{},[519,521,525,527,531,533,536,538,542,544,547,549,553,554,558,559,563],{"type":24,"value":520},"那么，如果具体求出这个结果呢？",{"type":18,"tag":79,"props":522,"children":524},{"alt":7,"src":523},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/183232bog9kfmthmaoxomw.png",[],{"type":24,"value":526},"是",{"type":18,"tag":79,"props":528,"children":530},{"alt":7,"src":529},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/183241bo9bsibmas9ayc2v.png",[],{"type":24,"value":532},"的逆矩阵，令A等于X转置乘以X，那么",{"type":18,"tag":79,"props":534,"children":535},{"alt":7,"src":523},[],{"type":24,"value":537},"就是",{"type":18,"tag":79,"props":539,"children":541},{"alt":7,"src":540},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/183300datzq8kprznnqhvt.png",[],{"type":24,"value":543},"，可以借助编程语言计算这个值。这样的式子会给出最优的",{"type":18,"tag":79,"props":545,"children":546},{"alt":7,"src":513},[],{"type":24,"value":548},"值而不需要进行特征缩放，即使特征",{"type":18,"tag":79,"props":550,"children":552},{"alt":7,"src":551},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/183343r3yabpnwwktmziyy.png",[],{"type":24,"value":313},{"type":18,"tag":79,"props":555,"children":557},{"alt":7,"src":556},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/183354jgu2ofozvgvbo6g8.png",[],{"type":24,"value":313},{"type":18,"tag":79,"props":560,"children":562},{"alt":7,"src":561},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/183404b0jbue9bnjo0iuc1.png",[],{"type":24,"value":564},"的范围相差很大。",{"type":18,"tag":26,"props":566,"children":567},{},[568,570,574,576,580,582,586,588,591,593,596],{"type":24,"value":569},"最后，何时应该使用梯度下降法，而何时应该使用正规方程法，给大家进行一个总结，假如有m个训练样本，n个特征变量，梯度下降法的缺点之一就是需要选择学习速率",{"type":18,"tag":79,"props":571,"children":573},{"alt":7,"src":572},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/1834463tbciirxwy6yxams.png",[],{"type":
24,"value":575},"，并且需要运行多次，尝试不同的学习速率，直到找到运行效果最好的那个，所以这会有额外的工作和麻烦，梯度下降的另一个缺点是，需要更多次的迭代，这需要取决于具体情况，计算可能会更慢。至于正规方程，不需要选择学习速率",{"type":18,"tag":79,"props":577,"children":579},{"alt":7,"src":578},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/183423uurzean4cu0zmcct.png",[],{"type":24,"value":581},"，这样就比较方便，并且容易实现，只需要运行一步就行了，同时也不需要迭代，所以不需要画出",{"type":18,"tag":79,"props":583,"children":585},{"alt":7,"src":584},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/183515dlxljlhd7kahsg4e.png",[],{"type":24,"value":587},"的曲线来检测收敛性或者其他额外的工作。这么一看，似乎正规方程法会更受欢迎，但是我们需要重视的是，正规方程法也有一些缺点。梯度下降法中，在特征变量很多的情况下，也可以正常工作，即使有上百万个特征变量，梯度下降法通常很有效果，相反，正规方程法为了求解参数",{"type":18,"tag":79,"props":589,"children":590},{"alt":7,"src":513},[],{"type":24,"value":592},"，需要求出",{"type":18,"tag":79,"props":594,"children":595},{"alt":7,"src":523},[],{"type":24,"value":597},"这一项，这个结果是一个n*n的矩阵，如果有n个特征变量的话。而对于大多应用来说，实现逆矩阵计算的代价以矩阵维度的三次方增长，大致为n的三次方，所以如果特征变量数目n特别大的话，那么这个计算会非常浪费时间，速度上会慢很多。因此，如果n很大的情况下，我们更倾向于选择梯度下降法，当然啦，如果n比较小的话，那么正规方程就是更优选择。那么，如何定义大小呢？这里需要说明一下，如果n是上百的，计算百位数乘以百位数的逆矩阵，对于计算机来说是没有任何问题的，如果n是上千的，还是可以使用正规方程法的，但是当n上万了，就需要考虑梯度下降法了，当然也不完全绝对，但是当n远大于此，就一定是选用梯度下降法了。这里我们也很难给出确切的数字，到底特征数量应该达到多少时，需要选用梯度下降法，正常来看就是在一万左右，可以选用梯度下降或者其他算法，",{"type":18,"tag":26,"props":599,"children":600},{},[601],{"type":24,"value":602},"因此，只要特征变量的数目并不大，正规方程是一个很好的计算参数的替代方法，但是随着我们需要学习的学习算法越来越复杂，例如当涉及分类算法的时候，logistic回归算法时，正规方程是不太适用的，仍然需要使用梯度下降法。因此，梯度下降法是一个非常有用的算法，在有大量特征的线性回归问题中，在更加复杂的学习算法中，但是对于线性回归的这个特定的模型，正规方程在小样本的特征变量下，是一个比梯度下降实现更快的替代算法。所以，根据具体的算法，具体的问题，以及特征变量的数目，这两个算法都是值得学习和深入研究的。",{"type":18,"tag":26,"props":604,"children":605},{},[606],{"type":24,"value":607},"梯度下降与正规方程的比较：",{"type":18,"tag":26,"props":609,"children":610},{},[611],{"type":18,"tag":79,"props":612,"children":614},{"alt":7,"src":613},"https://bbs-img.huaweicloud.com/data/forums/attachment/forum/202106/16/183753e1tfea6vqwxf00kr.png",[],{"type":18,"tag":26,"props":616,"children":617},{},[618],{"type":24,"value":619},"正规方程的python实现
：",{"type":18,"tag":621,"props":622,"children":624},"pre",{"code":623},"import numpy as np\n\ndef normalEqn(X, y):\n    # 正规方程：theta = inv(X'X) X'y，其中 X.T@X 等价于 X.T.dot(X)\n    theta = np.linalg.inv(X.T @ X) @ X.T @ y\n    return theta\n",[625],{"type":18,"tag":626,"props":627,"children":628},"code",{"__ignoreMap":7},[629],{"type":24,"value":623},{"type":18,"tag":26,"props":631,"children":632},{},[633],{"type":24,"value":634},"好啦，至此，小Mi已经把线性回归的所有问题带着大家一起学习啦，下期我们将开始着手学习新的学习算法。我们下期见哦~",{"title":7,"searchDepth":636,"depth":636,"links":637},4,[638,640],{"id":71,"depth":639,"text":74},3,{"id":190,"depth":639,"text":193},"markdown","content:technology-blogs:zh:618.md","content","technology-blogs/zh/618.md","technology-blogs/zh/618","md",1776506138806]