# Problem Set #7: OLS

Economics 835: Econometrics, Fall 2014

## 1 A preliminary result

Suppose our data set $D_n$ is a random sample of size $n$ on the scalar random variables $(x_i, y_i)$ with finite means, variances, and covariance. Let:

$$\widehat{cov}(x_i, y_i) = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})$$

Prove that $\operatorname{plim} \widehat{cov}(x_i, y_i) = cov(x_i, y_i)$.

We will use this result repeatedly in this problem set and in the future, so once you have proved it, please feel free to take it as given for the remainder of this course. You can also take as given that if you define $\widehat{var}(x_i) = \widehat{cov}(x_i, x_i)$, then $\operatorname{plim} \widehat{var}(x_i) = var(x_i)$.

## 2 OLS with a single explanatory variable

In many cases, the best way to understand various issues in regression analysis (measurement error, proxy variables, omitted variables bias, etc.) is to work through the issue in the special case of a single explanatory variable. That way, we can develop intuition without getting lost in the linear algebra. Once we have the basics down, we can then look at the multivariate case to see if anything changes. This problem goes through the main starting results.

Suppose our regression model has an intercept and a single explanatory variable, i.e.:

$$y_i = \beta_0 + \beta_1 x_i + u_i$$

where $(y_i, x_i, u_i)$ are scalar random variables. To keep things fairly general, we will assume this is a model of the best linear predictor, i.e. $E(u_i) = E(x_i u_i) = cov(x_i, u_i) = 0$.

Our data $D_n$ consists of a random sample of size $n$ on $(x_i, y_i)$, arranged into the matrices:

$$\mathbf{X} = \begin{pmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_n \end{pmatrix} \qquad \mathbf{y} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}$$

Let:

$$\hat{\beta} = \begin{pmatrix} \hat{\beta}_0 \\ \hat{\beta}_1 \end{pmatrix} = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{y} \tag{1}$$

be the usual OLS regression coefficients.
**a)** Show that:

$$\beta_1 = \frac{cov(x_i, y_i)}{var(x_i)} \qquad \beta_0 = E(y_i) - \beta_1 E(x_i)$$

**b)** Show that equation (1) implies that:

$$\hat{\beta}_1 = \frac{\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2} = \frac{\widehat{cov}(x_i, y_i)}{\widehat{var}(x_i)} \qquad \hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}$$

The idea for this problem is that you get a little practice translating between different ways of writing the same model, so even if you know another way to get these results, please start with equation (1).

**c)** Without using linear algebra (i.e., just apply Slutsky's theorem and the Law of Large Numbers to the result from part (b) of this question), prove that:

$$\operatorname{plim} \hat{\beta}_1 = \beta_1 \qquad \operatorname{plim} \hat{\beta}_0 = \beta_0$$

## 3 OLS with measurement error

Often variables are measured with error. Let $(y_i, x_i, u_i)$ be scalar random variables such that:

$$y_i = \beta_0 + \beta_1 x_i + u_i \qquad \text{where } cov(x_i, u_i) = 0$$

Unfortunately, we do not have data on $y_i$ and $x_i$; instead we have data on $\tilde{y}_i$ and $\tilde{x}_i$ where:

$$\tilde{y}_i = y_i + v_i \qquad \tilde{x}_i = x_i + w_i$$

where $w_i$ and $v_i$ are scalar random variables representing measurement error. We assume "classical" measurement error[^1]:

$$cov(v_i, x_i) = cov(v_i, u_i) = cov(v_i, w_i) = 0$$
$$cov(w_i, x_i) = cov(w_i, u_i) = cov(w_i, v_i) = 0$$

Let $\lambda_x = var(w_i)/var(x_i)$ and let $\lambda_y = var(v_i)/var(y_i)$.

**a)** Let $\hat{\beta}_1$ be the OLS regression coefficient from the regression of $\tilde{y}$ on $\tilde{x}$. Find $\operatorname{plim} \hat{\beta}_1$ in terms of $(\beta_1, \lambda_x, \lambda_y)$.

**b)** What is the effect of (classical) measurement error in $x$ on the sign and magnitude of $\operatorname{plim} \hat{\beta}_1$?

**c)** What is the effect of (classical) measurement error in $y$ on the sign and magnitude of $\operatorname{plim} \hat{\beta}_1$?

[^1]: Strictly speaking, the classical model of measurement error also assumes independence and normality, but we won't need those for our results.
## 4 Choice of units: The simple version

In applied work one is often faced with choosing units for our variables. Should we express proportions as decimals or percentages? Miles or kilometers? Etc. The short answer is that it doesn't matter if we are comparing across linearly related scales; the OLS coefficients will scale accordingly, so one can choose units according to convenience.

Suppose our data set $D_n$ is a sample (random or otherwise; this question is about an algebraic property of OLS, not a statistical property) of size $n$ on the scalar random variables $(y_i, x_i)$. Let the regression coefficients for the OLS regression of $y_i$ on $x_i$ be:

$$\hat{\beta}_1 = \frac{\widehat{cov}(x_i, y_i)}{\widehat{var}(x_i)} \qquad \hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}$$

Now let's suppose we take a linear transformation of our data. That is, let:

$$\tilde{x}_i = a x_i + b \qquad \tilde{y}_i = c y_i + d$$

where $(a, b, c, d)$ are a set of scalars (both $a$ and $c$ must be nonzero), and let the regression coefficients for the OLS regression of $\tilde{y}_i$ on $\tilde{x}_i$ be:

$$\tilde{\beta}_1 = \frac{\widehat{cov}(\tilde{x}_i, \tilde{y}_i)}{\widehat{var}(\tilde{x}_i)} \qquad \tilde{\beta}_0 = \bar{\tilde{y}} - \tilde{\beta}_1 \bar{\tilde{x}}$$

where $\bar{\tilde{y}} = \frac{1}{n}\sum_{i=1}^{n} \tilde{y}_i$ and $\bar{\tilde{x}} = \frac{1}{n}\sum_{i=1}^{n} \tilde{x}_i$.

**a)** Find $\tilde{\beta}_1$ in terms of $(\hat{\beta}_0, \hat{\beta}_1, a, b, c, d)$.

**b)** Find $\tilde{\beta}_0$ in terms of $(\hat{\beta}_0, \hat{\beta}_1, a, b, c, d)$.
## 5 Choice of units: The general version

The results from the previous question carry over when there is more than one explanatory variable: adding a constant to either $y_i$ or $x_i$ only changes the intercept, multiplying $y_i$ by a constant $c$ ends up multiplying all coefficients by $c$, and multiplying any variable in $x_i$ by a constant $a$ ends up multiplying the coefficient on that variable by $1/a$. In this question we will prove that result. In working through this question, please remember that our results and arguments will be analogous to those in the previous (much easier) question, and that the purpose of this question is to get some practice with more advanced tools like the Frisch-Waugh-Lovell theorem.

Let $\mathbf{y}$ be an $n \times 1$ matrix of outcomes and $\mathbf{X}$ be an $n \times K$ matrix of explanatory variables. Let:

$$\hat{\beta} = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{y}$$

be the vector of coefficients from the OLS regression of $\mathbf{y}$ on $\mathbf{X}$. We are interested in what will happen if we apply some linear transformation to our variables.

**a)** We start by seeing what happens if we take some multiplicative transformation. Let:

$$\tilde{\mathbf{X}} = \mathbf{X}\mathbf{A} \qquad \tilde{\mathbf{y}} = c\mathbf{y}$$

where $\mathbf{A}$ is a $K \times K$ matrix[^2] with full rank (i.e., $\mathbf{A}^{-1}$ exists) and $c$ is a nonzero scalar. Let:

$$\tilde{\beta} = (\tilde{\mathbf{X}}'\tilde{\mathbf{X}})^{-1}\tilde{\mathbf{X}}'\tilde{\mathbf{y}}$$

be the vector of coefficients from the OLS regression of $\tilde{\mathbf{y}}$ on $\tilde{\mathbf{X}}$. Show that $\tilde{\beta} = c\mathbf{A}^{-1}\hat{\beta}$.

**b)** Suppose that the covariance matrix of $\hat{\beta}$ is $\Sigma$. What is the covariance matrix of $\tilde{\beta}$?

**c)** Using this result, what happens to our OLS coefficients if we multiply one of the explanatory variables by 10 and leave everything else unchanged?

**d)** Using this result, what happens to our OLS coefficients if we multiply the dependent variable by 10 and leave everything else unchanged?

**e)** Next we consider an additive transformation. For this we suppose we have an intercept, and we change the notation slightly. Let:

$$\mathbf{X} = \begin{pmatrix} 1 & \mathbf{x}_1 \\ 1 & \mathbf{x}_2 \\ \vdots & \vdots \\ 1 & \mathbf{x}_n \end{pmatrix}$$

where $\mathbf{x}_i$ is a $1 \times (K-1)$ matrix, and let:

$$\hat{\beta} = \begin{pmatrix} \hat{\beta}_0 \\ \hat{\beta}_1 \end{pmatrix} = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{y}$$

where $\mathbf{y}$ is an $n \times 1$ matrix of outcomes. Our transformed data are:

$$\tilde{\mathbf{X}} = \mathbf{X} + \imath_n \mathbf{b} \qquad \tilde{\mathbf{y}} = \mathbf{y} + d\imath_n$$

where $\mathbf{b}$ is a $1 \times K$ matrix whose first element is zero, $d$ is a scalar, and $\imath_n$ is an $n \times 1$ matrix of ones. Let:

$$\tilde{\beta} = \begin{pmatrix} \tilde{\beta}_0 \\ \tilde{\beta}_1 \end{pmatrix} = (\tilde{\mathbf{X}}'\tilde{\mathbf{X}})^{-1}\tilde{\mathbf{X}}'\tilde{\mathbf{y}}$$

be the vector of coefficients from the OLS regression of $\tilde{\mathbf{y}}$ on $\tilde{\mathbf{X}}$. Show that $\tilde{\beta}_1 = \hat{\beta}_1$. The Frisch-Waugh-Lovell theorem might be useful here.

**f)** Suppose that the covariance matrix of $\hat{\beta}_1$ is $\Sigma_1$. What is the covariance matrix of $\tilde{\beta}_1$?

**g)** What happens to our OLS coefficients other than the intercept when we add 5 to the dependent variable for all observations? When we add 5 to one of the explanatory variables?
[^2]: We are mostly interested in the case where $\mathbf{A}$ is diagonal, i.e., we are multiplying each column of $\mathbf{X}$ by some number. But notice that this setup includes a lot of other redefinitions of variables.
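The multiplicative result stated in part (a), $\tilde{\beta} = c\mathbf{A}^{-1}\hat{\beta}$, is easy to confirm numerically before proving it. A sketch with an arbitrary full-rank $\mathbf{A}$ and hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(3)
n, K = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)  # hypothetical model

def ols(X, y):
    # OLS coefficients via the normal equations
    return np.linalg.solve(X.T @ X, X.T @ y)

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 10.0, 0.0],
              [0.3, 0.0, 2.0]])   # arbitrary full-rank K x K matrix
c = 5.0

beta = ols(X, y)
beta_t = ols(X @ A, c * y)        # regress y~ = c*y on X~ = X*A
print(np.allclose(beta_t, c * np.linalg.solve(A, beta)))   # True
```

Note that $\mathbf{A}$ here is deliberately not diagonal, matching the footnote's point that the result covers general redefinitions of variables, not just changes of units.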
