Wooldridge Econometrics Lecture Notes

The Simple Regression Model

y = b0 + b1·x + u

Some Terminology

In the simple linear regression model y = b0 + b1x + u, we typically refer to y as the dependent variable, or left-hand-side variable, or explained variable, or regressand.

In the simple linear regression of y on x, we typically refer to x as the independent variable, or right-hand-side variable, or explanatory variable, or regressor, or covariate, or control variable.

A Simple Assumption

The average value of u, the error term, in the population is 0. That is, E(u) = 0. This is not a restrictive assumption, since we can always use b0 to normalize E(u) to 0.

Zero Conditional Mean

We need to make a crucial assumption about how u and x are related. We want it to be the case that knowing something about x does not give us any information about u, so that they are completely unrelated. That is, E(u|x) = E(u) = 0, which implies E(y|x) = b0 + b1x.

[Figure: E(y|x) as a linear function of x; for any value of x, the distribution of y is centered about E(y|x) = b0 + b1x.]

Ordinary Least Squares

The basic idea of regression is to estimate the population parameters from a sample. Let {(xi, yi): i = 1, …, n} denote a random sample of size n from the population. For each observation in this sample, it will be the case that yi = b0 + b1xi + ui.

[Figure: population regression line E(y|x) = b0 + b1x, sample data points (xi, yi), and the associated error terms ui.]

Deriving the OLS Estimates

To derive the OLS estimates, we need to realize that our main assumption E(u|x) = E(u) = 0 also implies that

Cov(x, u) = E(xu) = 0.

Why? Remember from basic probability that Cov(X, Y) = E(XY) − E(X)E(Y).

We can write these two restrictions just in terms of x, y, b0 and b1, since u = y − b0 − b1x:

E(y − b0 − b1x) = 0
E[x(y − b0 − b1x)] = 0

These are called moment restrictions.

Deriving OLS Using the Method of Moments

The method-of-moments approach to estimation imposes the population moment restrictions on the sample moments. What does this mean? Recall that for E(X), the mean of a population distribution, a sample estimator of E(X) is simply the arithmetic mean of the sample.

More Derivation of OLS

We want to choose values of the parameters that ensure the sample versions of our moment restrictions hold. The sample versions are as follows:

(1/n) Σi (yi − b̂0 − b̂1xi) = 0
(1/n) Σi xi(yi − b̂0 − b̂1xi) = 0

Given the definition of a sample mean, and properties of summation, we can rewrite the first condition as

ȳ = b̂0 + b̂1x̄, or equivalently b̂0 = ȳ − b̂1x̄.

Substituting this into the second condition gives

Σi xi(yi − (ȳ − b̂1x̄) − b̂1xi) = 0
Σi xi(yi − ȳ) = b̂1 Σi xi(xi − x̄)
Σi (xi − x̄)(yi − ȳ) = b̂1 Σi (xi − x̄)²

So the OLS estimated slope is

b̂1 = Σi (xi − x̄)(yi − ȳ) / Σi (xi − x̄)², provided Σi (xi − x̄)² > 0.

Summary of the OLS Slope Estimate

The slope estimate is the sample covariance between x and y divided by the sample variance of x. If x and y are positively correlated, the slope will be positive; if x and y are negatively correlated, the slope will be negative. We only need x to vary in our sample.

More OLS

Intuitively, OLS fits a line through the sample points such that the sum of squared residuals is as small as possible, hence the term "least squares". The residual, ûi, is an estimate of the error term ui, and is the difference between the sample point and the fitted line (the sample regression function).

[Figure: sample regression line ŷ = b̂0 + b̂1x, sample data points, and the associated estimated residuals ûi.]

Alternate Approach to the Derivation

Given the intuitive idea of fitting a line, we can set up a formal minimization problem. That is, we want to choose our parameters to minimize

Σi (yi − b0 − b1xi)².

If one uses calculus to solve this minimization problem for the two parameters, one obtains the first-order conditions

Σi (yi − b̂0 − b̂1xi) = 0
Σi xi(yi − b̂0 − b̂1xi) = 0,

which are the same as the conditions we obtained before, multiplied by n.

Algebraic Properties of OLS

The sum of the OLS residuals is zero; thus, the sample average of the OLS residuals is zero as well. The sample covariance between the regressor and the OLS residuals is zero. The OLS regression line always goes through the mean of the sample. Precisely:

Σi ûi = 0, so (1/n) Σi ûi = 0
Σi xi ûi = 0
ȳ = b̂0 + b̂1x̄

More Terminology

Write each yi as its fitted value plus its residual, yi = ŷi + ûi, and define

SST = Σi (yi − ȳ)²  (total sum of squares)
SSE = Σi (ŷi − ȳ)²  (explained sum of squares)
SSR = Σi ûi²     (residual sum of squares)

Proof that SST = SSE + SSR

SST = Σi (yi − ȳ)² = Σi [(yi − ŷi) + (ŷi − ȳ)]² = Σi [ûi + (ŷi − ȳ)]²
  = Σi ûi² + 2 Σi ûi(ŷi − ȳ) + Σi (ŷi − ȳ)²
  = SSR + 2 Σi ûi(ŷi − ȳ) + SSE = SSR + SSE,

since Σi ûi(ŷi − ȳ) = 0 by the algebraic properties above.

Goodness-of-Fit

How do we think about how well our sample regression line fits our sample data? We can compute the fraction of the total sum of squares (SST) that is explained by the model; call this the R-squared of the regression:

R² = SSE/SST = 1 − SSR/SST.

Using Stata for OLS Regressions

Now that we've derived the formulas for calculating the OLS estimates of our parameters, you'll be happy to know you don't have to compute them by hand. Regressions in Stata are very simple: to run the regression of y on x, just type

reg y x

Unbiasedness of OLS

Assume the population model is linear in parameters: y = b0 + b1x + u. Assume we can use a random sample of size n, {(xi, yi): i = 1, 2, …, n}, from the population model; thus we can write the sample model yi = b0 + b1xi + ui. Assume E(u|x) = 0, and thus E(ui|xi) = 0. Assume there is variation in the xi.

In order to think about unbiasedness, we need to rewrite our estimator in terms of the population parameter. Start with a simple rewrite of the slope formula as

b̂1 = Σi (xi − x̄)yi / s_x², where s_x² = Σi (xi − x̄)².

Substituting yi = b0 + b1xi + ui, and using Σi (xi − x̄) = 0 and Σi (xi − x̄)xi = s_x², gives

Σi (xi − x̄)yi = b1·s_x² + Σi (xi − x̄)ui,

so that

b̂1 = b1 + (1/s_x²) Σi (xi − x̄)ui, and hence E(b̂1) = b1.

Unbiasedness Summary

The OLS estimates of b1 and b0 are unbiased. The proof of unbiasedness depends on our four assumptions; if any assumption fails, then OLS is not necessarily unbiased. Remember, unbiasedness is a description of the estimator: in a given sample we may be "near" or "far" from the true parameter.

Variance of the OLS Estimators

Now we know that the sampling distribution of our estimator is centered around the true parameter. We want to think about how spread out this distribution is. It is much easier to think about this variance under an additional assumption, so assume Var(u|x) = σ² (homoskedasticity).

Var(u|x) = E(u²|x) − [E(u|x)]², and E(u|x) = 0, so σ² = E(u²|x) = E(u²) = Var(u). Thus σ² is also the unconditional variance, called the error variance; σ, the square root of the error variance, is called the standard deviation of the error. We can then say E(y|x) = b0 + b1x and Var(y|x) = σ².

[Figure: the homoskedastic case; for each value of x, the conditional distribution f(y|x) has the same spread around E(y|x) = b0 + b1x.]

[The preview ends here; the remaining slides are not included.]
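The derivation above can be exercised numerically. Below is a minimal Python sketch, not part of the original slides: the parameter values and variable names (beta0_true, beta1_true, and so on) are my own choices. It simulates data from y = b0 + b1x + u, computes the OLS slope as the sample covariance of x and y over the sample variance of x, and checks the algebraic properties and the SST = SSE + SSR decomposition.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a sample from the population model y = b0 + b1*x + u
n = 1000
beta0_true, beta1_true = 2.0, 0.5          # hypothetical true parameters
x = rng.uniform(0, 10, n)
u = rng.normal(0, 1, n)                    # E(u|x) = 0, Var(u|x) = sigma^2
y = beta0_true + beta1_true * x + u

# OLS estimates: slope = sample cov(x, y) / sample var(x);
# intercept from the first moment condition, b0_hat = ybar - b1_hat * xbar
xbar, ybar = x.mean(), y.mean()
b1_hat = ((x - xbar) * (y - ybar)).sum() / ((x - xbar) ** 2).sum()
b0_hat = ybar - b1_hat * xbar

# Fitted values and residuals
y_hat = b0_hat + b1_hat * x
resid = y - y_hat

# Algebraic properties of OLS
assert abs(resid.sum()) < 1e-8                        # residuals sum to zero
assert abs((x * resid).sum()) < 1e-8                  # zero covariance with x
assert abs(ybar - (b0_hat + b1_hat * xbar)) < 1e-12   # line through the means

# Goodness of fit: SST = SSE + SSR and R^2 = SSE/SST
sst = ((y - ybar) ** 2).sum()
sse = ((y_hat - ybar) ** 2).sum()
ssr = (resid ** 2).sum()
assert abs(sst - (sse + ssr)) < 1e-6
r2 = sse / sst
print(f"b0_hat={b0_hat:.3f}, b1_hat={b1_hat:.3f}, R^2={r2:.3f}")
```

In Stata the same fit is obtained with `reg y x`; the point of the sketch is that the slide formulas alone reproduce it.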

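The unbiasedness claim, E(b̂1) = b1 under the four assumptions, can likewise be illustrated by simulation. This is a small Monte Carlo sketch with hypothetical parameter values of my own; it draws many samples and checks that the sampling distribution of b̂1 is centered at the true slope.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo check of unbiasedness: E(b1_hat) = b1 when E(u|x) = 0
beta0, beta1, sigma = 2.0, 0.5, 1.0        # hypothetical true parameters
n, reps = 50, 5000
estimates = np.empty(reps)
for r in range(reps):
    x = rng.uniform(0, 10, n)
    u = rng.normal(0, sigma, n)            # homoskedastic: Var(u|x) = sigma^2
    y = beta0 + beta1 * x + u
    xbar = x.mean()
    estimates[r] = ((x - xbar) * (y - y.mean())).sum() / ((x - xbar) ** 2).sum()

# The average of b1_hat across samples should be close to beta1,
# even though any single estimate may be "near" or "far" from it.
print(f"mean of b1_hat over {reps} samples: {estimates.mean():.4f} "
      f"(true beta1 = {beta1})")
```

The spread of `estimates` around beta1 is exactly the sampling variance that the homoskedasticity assumption makes tractable in the last slides above.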