
17.5  Weighted Least Squares

   Exercises

17.1  Consider the regression model without an intercept term, $Y_i = \beta_1 X_i + u_i$ (so the true value of the intercept, $\beta_0$, is zero).

  a.  Derive the least squares estimator of $\beta_1$ for the restricted regression model $Y_i = \beta_1 X_i + u_i$. This is called the restricted least squares estimator ($\hat{\beta}_1^{RLS}$) of $\beta_1$ because it is estimated under a restriction, which in this case is $\beta_0 = 0$.

  b.  Derive the asymptotic distribution of $\hat{\beta}_1^{RLS}$ under Assumptions #1 through #3 of Key Concept 17.1.

  c.  Show that $\hat{\beta}_1^{RLS}$ is linear [Equation (5.24)] and, under Assumptions #1 and #2 of Key Concept 17.1, conditionally unbiased [Equation (5.25)].

  d.  Derive the conditional variance of $\hat{\beta}_1^{RLS}$ under the Gauss–Markov conditions (Assumptions #1 through #4 of Key Concept 17.1).

  e.  Compare the conditional variance of $\hat{\beta}_1^{RLS}$ in (d) to the conditional variance of the OLS estimator $\hat{\beta}_1$ (from the regression including an intercept) under the Gauss–Markov conditions. Which estimator is more efficient? Use the formulas for the variances to explain why.

  f.  Derive the exact sampling distribution of $\hat{\beta}_1^{RLS}$ under Assumptions #1 through #5 of Key Concept 17.1.

  g.  Now consider the estimator $\bar{\beta}_1 = \sum_{i=1}^{n} Y_i \big/ \sum_{i=1}^{n} X_i$. Derive an expression for $\mathrm{var}(\bar{\beta}_1 \mid X_1, \ldots, X_n) - \mathrm{var}(\hat{\beta}_1^{RLS} \mid X_1, \ldots, X_n)$ under the Gauss–Markov conditions, and use this expression to show that $\mathrm{var}(\bar{\beta}_1 \mid X_1, \ldots, X_n) \ge \mathrm{var}(\hat{\beta}_1^{RLS} \mid X_1, \ldots, X_n)$.
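Exercise 17.1 is about pencil-and-paper derivations, but its variance rankings can be checked by simulation. The sketch below is not a substitute for the derivations: it compares, by Monte Carlo, the restricted LS estimator $\sum X_i Y_i / \sum X_i^2$ (the closed form part (a) asks for), the OLS slope from a regression with an intercept, and the ratio estimator of part (g). All design choices here (true slope, sample size, regressor and error distributions) are illustrative assumptions, not part of the exercise.

```python
# Monte Carlo sketch: compare sampling variances of three slope estimators
# when the true intercept is zero (Gauss-Markov-style design: fixed X,
# homoskedastic errors). Settings are illustrative, not from the text.
import numpy as np

rng = np.random.default_rng(0)
beta1, n, reps = 2.0, 100, 5000
X = rng.uniform(1.0, 3.0, size=n)   # fixed regressors, bounded away from 0

rls, ols, ratio = [], [], []
for _ in range(reps):
    u = rng.normal(0.0, 1.0, size=n)          # homoskedastic errors
    Y = beta1 * X + u                         # true intercept is zero
    rls.append(X @ Y / (X @ X))               # restricted LS: sum(XY)/sum(X^2)
    ols.append(np.cov(X, Y, ddof=1)[0, 1] / np.var(X, ddof=1))  # OLS slope
    ratio.append(Y.sum() / X.sum())           # ratio estimator of part (g)

print(np.var(rls), np.var(ols), np.var(ratio))
# Theory predicts var(RLS) <= var(OLS) and var(RLS) <= var(ratio),
# the rankings established in parts (e) and (g).
```

The ranking is visible in the variance formulas themselves: $\sum X_i^2 \ge \sum (X_i - \bar{X})^2$ gives the first inequality, and the Cauchy–Schwarz inequality $(\sum X_i)^2 \le n \sum X_i^2$ gives the second.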

17.2  Suppose that $(X_i, Y_i)$ are i.i.d. with finite fourth moments. Prove that the sample covariance is a consistent estimator of the population covariance; that is, $s_{XY} \xrightarrow{\,p\,} \sigma_{XY}$, where $s_{XY}$ is defined in Equation (3.24). (Hint: Use the strategy outlined in Appendix 3.3 and the Cauchy–Schwarz inequality.)
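The convergence claimed in Exercise 17.2 is easy to see numerically before proving it. The sketch below draws i.i.d. bivariate normal data (an illustrative choice with finite fourth moments; the population covariance 0.8 and the sample sizes are likewise assumptions for the demonstration) and watches the sample covariance approach the population value as $n$ grows.

```python
# Numerical sketch of consistency: s_XY should approach sigma_XY as n grows.
# The bivariate normal design and sample sizes are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
sigma_XY = 0.8                                  # population covariance
cov = [[1.0, sigma_XY], [sigma_XY, 1.0]]

for n in (100, 10_000, 1_000_000):
    X, Y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    s_XY = ((X - X.mean()) * (Y - Y.mean())).sum() / (n - 1)  # sample covariance
    print(n, abs(s_XY - sigma_XY))              # gap shrinks as n grows
```

This is only evidence, not a proof; the exercise's point is that finite fourth moments let the law of large numbers be applied to each piece of the decomposition of $s_{XY}$.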

17.3  This exercise fills in the details of the derivation of the asymptotic distribution of $\hat{\beta}_1$ given in Appendix 4.3.

	 a.	 Use Equation (17.19) to derive the expression

                                    A  1   n     vi         (X     - mX)     A  1   n
                                       n                                     -  n
              2n(bn1  -  b1)  =           ia= 1          -    1     n              ia= 1ui,
                                     n                        n
                                 1         -     X )2              a (Xi        X )2
                                 n  a (Xi
                                                                   i=1
                                    i=1

		where vi = (Xi - mX)ui.
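The decomposition in part (a) is an algebraic identity, so it must hold exactly in any finite sample, not just asymptotically. The sketch below verifies this on simulated data (the data-generating choices are arbitrary illustrations, not taken from the text).

```python
# Check that the part (a) decomposition of sqrt(n)(beta1_hat - beta1)
# holds exactly (up to floating point) for an arbitrary simulated sample.
import numpy as np

rng = np.random.default_rng(2)
beta0, beta1, mu_X, n = 1.0, 2.0, 1.5, 50   # arbitrary illustrative values
X = mu_X + rng.normal(size=n)
u = rng.normal(size=n)
Y = beta0 + beta1 * X + u

Xbar = X.mean()
ssx = ((X - Xbar) ** 2).mean()              # (1/n) sum (X_i - Xbar)^2
beta1_hat = ((X - Xbar) * (Y - Y.mean())).sum() / ((X - Xbar) ** 2).sum()

v = (X - mu_X) * u                          # v_i = (X_i - mu_X) u_i
lhs = np.sqrt(n) * (beta1_hat - beta1)
rhs = v.sum() / (np.sqrt(n) * ssx) - (Xbar - mu_X) / ssx * u.sum() / np.sqrt(n)
print(abs(lhs - rhs))                       # ~0 up to floating-point error
```

The identity follows from writing $X_i - \bar{X}$ as $(X_i - \mu_X) - (\bar{X} - \mu_X)$ in the numerator of $\hat{\beta}_1 - \beta_1$; the asymptotic argument then handles each term separately.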