7-1-The Problem of Overfitting
By now, you've seen a couple of different learning algorithms: linear regression and logistic regression. They work well for many problems, but when you apply them to certain machine learning applications, they can run into a problem called overfitting that can cause them to perform very poorly. What I'd like to do in this video is explain what this overfitting problem is, and in the next few videos after this, we'll talk about a technique called regularization that allows us to ameliorate, or reduce, the overfitting problem and get these learning algorithms to work much better. So what is overfitting?
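To make this concrete, the lecture's housing example contrasts hypotheses of increasing order; a sketch of that standard illustration, with x being the size of the house:

$$h_\theta(x) = \theta_0 + \theta_1 x \qquad \text{(too simple: underfits, "high bias")}$$
$$h_\theta(x) = \theta_0 + \theta_1 x + \theta_2 x^2 \qquad \text{(fits the data well)}$$
$$h_\theta(x) = \theta_0 + \theta_1 x + \theta_2 x^2 + \theta_3 x^3 + \theta_4 x^4 \qquad \text{(overfits, "high variance")}$$

The fourth-order polynomial can pass through every training example, yet it wiggles so much that it fails to generalize to new houses.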
Let's keep using our running example of predicting housing prices with linear regression. In a later course, we'll talk about debugging and diagnosing the things that can go wrong with a learning algorithm. For now, let's talk about the problem of overfitting and what we can do to address it.

In order to address overfitting, there are two main options. The first option is to try to reduce the number of features. One thing we can do is manually look through the list of features and use that to decide which are the more important features to keep; in a later course, we'll also talk about model selection algorithms that make this decision automatically. Reducing the number of features can work well and can reduce overfitting. The second option, which we'll talk about in the next few videos, is regularization. Here, we keep all the features, but we reduce the magnitude, or the values, of the parameters θj.
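As a rough illustration of the two options, here is a minimal numpy sketch; the data and the penalty strength `lam` are made up for illustration and are not from the lecture. Option 1 keeps only the columns we believe matter; option 2 keeps every polynomial feature but shrinks the parameters with a ridge-style penalty:

```python
import numpy as np

# Toy data: house size x vs. price y (illustrative values only).
rng = np.random.default_rng(0)
x = np.linspace(1, 3, 10)
y = 2 + 1.5 * x + rng.normal(scale=0.3, size=x.size)

# Fourth-order polynomial feature map: [1, x, x^2, x^3, x^4].
X_poly = np.column_stack([x ** p for p in range(5)])

# Option 1: manually reduce the number of features -- keep only [1, x].
X_small = X_poly[:, :2]
theta_small, *_ = np.linalg.lstsq(X_small, y, rcond=None)

# Option 2: keep all the features but penalize their magnitude.
lam = 1.0
penalty = np.eye(X_poly.shape[1])
penalty[0, 0] = 0                      # theta_0 is conventionally not penalized
theta_reg = np.linalg.solve(X_poly.T @ X_poly + lam * penalty, X_poly.T @ y)

print("manual feature selection:", theta_small)
print("all features, regularized:", theta_reg)
```

With the penalty in place, the higher-order coefficients come out much smaller than they would from an unregularized fit of all five features.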
7-2-Cost Function
In this video, I'd like to convey the intuition behind how regularization works and write out the cost function we'll use when applying regularization. Consider the following: suppose we were to penalize the parameters θ3 and θ4 and make them really small. Here's what I mean. Here is our optimization problem, where we minimize our usual squared-error cost function. Let's say I take this objective and modify it by adding 1000·θ3² plus 1000·θ4², where 1000 is just some arbitrarily large number. The only way to make this modified cost function small is to make θ3 and θ4 very close to 0, so the resulting hypothesis behaves almost like a quadratic. In this particular example, we looked at the effect of penalizing two particular parameters. More generally, the idea is that if we have small values for the parameters, we tend to get a simpler hypothesis that is less prone to overfitting.
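Written out, the modified objective described above is

$$\min_\theta \ \frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2 + 1000\,\theta_3^2 + 1000\,\theta_4^2 .$$

Generalizing this, regularized linear regression uses the cost function

$$J(\theta) = \frac{1}{2m}\left[\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2 + \lambda\sum_{j=1}^{n}\theta_j^2\right],$$

where the regularization parameter λ controls the trade-off between fitting the training data well and keeping the parameters small; by convention θ0 is not included in the penalty.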
7-3-Regularized Linear Regression
For linear regression, we had previously worked out two learning algorithms: one based on gradient descent and one based on the normal equation. In this video, we'll take those two algorithms and generalize them to the case of regularized linear regression. Note that the update for θ0 is written separately from the updates for θ1, θ2, up to θn, because θ0 is not included in the regularization penalty; other than that, nothing really changes.
Gradient descent
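The regularized gradient-descent updates, with θ0 written separately because it is not penalized, are

$$\theta_0 := \theta_0 - \alpha\,\frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_0^{(i)}$$

$$\theta_j := \theta_j - \alpha\left[\frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)} + \frac{\lambda}{m}\theta_j\right] \qquad (j = 1, 2, \dots, n)$$

Grouping the θj terms, the second update can be rewritten as θj := θj(1 − αλ/m) − α·(1/m)·Σ(hθ(x⁽ⁱ⁾) − y⁽ⁱ⁾)xj⁽ⁱ⁾, so each iteration first shrinks θj by a factor slightly less than 1 and then performs the usual gradient step.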
Normal equation
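For the normal equation, regularization adds λ times an (n+1)×(n+1) matrix that is the identity except for a 0 in the top-left entry, so θ0 is again not penalized:

$$\theta = \left(X^{T}X + \lambda \begin{bmatrix} 0 & & & \\ & 1 & & \\ & & \ddots & \\ & & & 1 \end{bmatrix}\right)^{-1} X^{T} y$$

As long as λ > 0, the matrix being inverted is invertible, which also takes care of the non-invertibility issue that can arise when the number of examples m is less than or equal to the number of features n.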
7-4-Regularized Logistic Regression
For logistic regression, we previously talked about two types of optimization algorithms: gradient descent and the more advanced optimization techniques. In this video, we'll show how you can adapt both of those techniques so that they work for regularized logistic regression. We saw earlier that logistic regression can also be prone to overfitting if you fit it with very high-order polynomial features like this, where g is the sigmoid function. Using regularization, the decision boundary becomes smoother, so regularization helps take care of the overfitting problem. How do we actually implement this?
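Concretely, regularization adds the same kind of penalty term to the logistic regression cost:

$$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log h_\theta(x^{(i)}) + \left(1-y^{(i)}\right)\log\left(1-h_\theta(x^{(i)})\right)\right] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$$

The gradient-descent update then looks cosmetically identical to the one for regularized linear regression, except that hθ(x) = g(θᵀx) is now the sigmoid of θᵀx. As a minimal sketch (the function and variable names are my own, not from the course materials), here is a cost-and-gradient routine in Python that an advanced optimizer could call in place of Octave's fminunc:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_and_grad(theta, X, y, lam):
    """Regularized logistic regression cost and gradient.
    X is (m, n+1) with a leading column of ones; theta_0 is not penalized."""
    m = y.size
    h = sigmoid(X @ theta)
    reg = (lam / (2.0 * m)) * np.sum(theta[1:] ** 2)
    cost = -(1.0 / m) * (y @ np.log(h) + (1 - y) @ np.log(1 - h)) + reg
    grad = (1.0 / m) * (X.T @ (h - y))
    grad[1:] += (lam / m) * theta[1:]
    return cost, grad
```

For example, `scipy.optimize.minimize(cost_and_grad, theta0, args=(X, y, lam), jac=True)` can consume this pair of return values directly.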