
                   Proof of Equation (18.37)

The $F_{n_1,n_2}$ distribution is the distribution of $(W_1/n_1)/(W_2/n_2)$, where (i) $W_1$ is distributed $\chi^2_{n_1}$; (ii) $W_2$ is distributed $\chi^2_{n_2}$; and (iii) $W_1$ and $W_2$ are independently distributed (Appendix 17.1). To express $F$ in this form, let $W_1 = (R\hat{\beta} - r)'[R(X'X)^{-1}R'\sigma^2_u]^{-1}(R\hat{\beta} - r)$ and $W_2 = (n - k - 1)s^2_{\hat{u}}/\sigma^2_u$. Substitution of these definitions into Equation (18.36) shows that $F = (W_1/q)/[W_2/(n - k - 1)]$. Thus, by the definition of the $F$ distribution, $F$ has an $F_{q,n-k-1}$ distribution if (i) through (iii) hold with $n_1 = q$ and $n_2 = n - k - 1$.
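As a quick illustration of this definitional fact, the following simulation sketch (assuming NumPy and SciPy are available; the values of $n_1$, $n_2$, and the number of draws are arbitrary illustrative choices) checks that the ratio of two independent chi-squared variables, each divided by its degrees of freedom, reproduces the $F_{n_1,n_2}$ quantiles. The proof of (i) through (iii) follows below.

    # Monte Carlo check that (W1/n1)/(W2/n2), with W1 ~ chi2(n1) and
    # W2 ~ chi2(n2) drawn independently, follows the F(n1, n2) distribution.
    # Illustrative sketch only; n1, n2, and reps are arbitrary choices.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n1, n2, reps = 3, 40, 200_000

    w1 = rng.chisquare(n1, size=reps)   # W1 ~ chi2(n1)
    w2 = rng.chisquare(n2, size=reps)   # W2 ~ chi2(n2), independent of W1
    f_draws = (w1 / n1) / (w2 / n2)

    # Compare simulated quantiles with the exact F(n1, n2) quantiles.
    for p in (0.50, 0.90, 0.95, 0.99):
        print(p, np.quantile(f_draws, p), stats.f.ppf(p, n1, n2))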

i. Under the null hypothesis, $R\hat{\beta} - r = R(\hat{\beta} - \beta)$. Because $\hat{\beta}$ has the conditional normal distribution in Equation (18.30) and because $R$ is a nonrandom matrix, $R(\hat{\beta} - \beta)$ is distributed $N(0_{q \times 1}, R(X'X)^{-1}R'\sigma^2_u)$, conditional on $X$. Thus, by Equation (18.77) in Appendix 18.2, $(R\hat{\beta} - r)'[R(X'X)^{-1}R'\sigma^2_u]^{-1}(R\hat{\beta} - r)$ is distributed $\chi^2_q$, proving (i).

                                    ii.  Requirement (ii) is shown in Equation (18.31).
iii. It has already been shown that $\hat{\beta} - \beta$ and $s^2_{\hat{u}}$ are independently distributed [Equation (18.81)]. It follows that $R\hat{\beta} - r$ and $s^2_{\hat{u}}$ are independently distributed, which in turn implies that $W_1$ and $W_2$ are independently distributed, proving (iii) and completing the proof.
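The result can also be checked numerically. The sketch below (a simulation under assumed homoskedastic normal errors; the design matrix, coefficients, and restrictions are illustrative choices, not taken from the text) computes the homoskedasticity-only $F$-statistic of Equation (18.36) across many samples generated under the null and compares its quantiles with those of the $F_{q,n-k-1}$ distribution:

    # Simulation sketch of the F-statistic of Equation (18.36): under
    # H0: R beta = r, with homoskedastic normal errors, F should be
    # distributed F(q, n - k - 1). The design (n, k, beta, R, r) is an
    # arbitrary illustrative choice.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n, k, q, reps = 60, 3, 2, 20_000
    beta = np.array([1.0, 0.0, 0.0, 2.0])        # true coefficients (incl. intercept)
    R = np.array([[0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0]])          # H0: beta_1 = 0 and beta_2 = 0
    r = np.zeros(q)                               # R beta - r = 0 holds under H0

    X = np.column_stack([np.ones(n), rng.standard_normal((n, k))])  # fixed regressors
    XtX_inv = np.linalg.inv(X.T @ X)

    f_stats = np.empty(reps)
    for i in range(reps):
        u = rng.standard_normal(n)                # homoskedastic normal errors
        y = X @ beta + u
        b_hat = XtX_inv @ X.T @ y                 # OLS estimator
        resid = y - X @ b_hat
        s2 = resid @ resid / (n - k - 1)          # s^2_uhat
        diff = R @ b_hat - r
        V = R @ XtX_inv @ R.T * s2                # R (X'X)^{-1} R' s^2_uhat
        f_stats[i] = diff @ np.linalg.solve(V, diff) / q

    # Simulated quantiles should match the exact F(q, n - k - 1) quantiles.
    for p in (0.90, 0.95, 0.99):
        print(p, np.quantile(f_stats, p), stats.f.ppf(p, q, n - k - 1))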

Appendix

18.5  Proof of the Gauss–Markov Theorem for Multiple Regression

This appendix proves the Gauss–Markov theorem (Key Concept 18.3) for the multiple regression model. Let $\tilde{\beta}$ be a linear conditionally unbiased estimator of $\beta$, so that $\tilde{\beta} = A'Y$ and $E(\tilde{\beta} \mid X) = \beta$, where $A$ is an $n \times (k+1)$ matrix that can depend on $X$ and on nonrandom constants. We show that $\mathrm{var}(c'\hat{\beta}) \le \mathrm{var}(c'\tilde{\beta})$ for all $(k+1)$-dimensional vectors $c$, where the inequality holds with equality only if $\tilde{\beta} = \hat{\beta}$.

Because $\tilde{\beta}$ is linear, it can be written as $\tilde{\beta} = A'Y = A'(X\beta + U) = (A'X)\beta + A'U$. By the first Gauss–Markov condition, $E(U \mid X) = 0_{n \times 1}$, so $E(\tilde{\beta} \mid X) = (A'X)\beta$; but because $\tilde{\beta}$ is conditionally unbiased, $E(\tilde{\beta} \mid X) = \beta = (A'X)\beta$, which implies that $A'X = I_{k+1}$. Thus $\tilde{\beta} = \beta + A'U$, so $\mathrm{var}(\tilde{\beta} \mid X) = \mathrm{var}(A'U \mid X) = E(A'UU'A \mid X) = A'E(UU' \mid X)A = \sigma^2_u A'A$, where the third equality follows because $A$ can depend on $X$ but not on $U$, and the final equality follows from the second Gauss–Markov condition. That is, if $\tilde{\beta}$ is linear and unbiased, then under the Gauss–Markov conditions,

$$A'X = I_{k+1} \quad \text{and} \quad \mathrm{var}(\tilde{\beta} \mid X) = \sigma^2_u A'A. \tag{18.82}$$

The results in Equation (18.82) also apply to $\hat{\beta}$ with $A = \hat{A} = X(X'X)^{-1}$, where $(X'X)^{-1}$ exists by the third Gauss–Markov condition.
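A numerical sketch can make Equation (18.82) concrete. The code below (an illustrative design; the competing estimator is a weighted least squares estimator with an arbitrary positive definite diagonal weight matrix $W$, which is linear and conditionally unbiased but differs from OLS) verifies that $\hat{A}'X = I_{k+1}$, computes $\sigma^2_u A'A$ for both estimators, and confirms that the variance difference is positive semidefinite, which is equivalent to $\mathrm{var}(c'\hat{\beta}) \le \mathrm{var}(c'\tilde{\beta})$ for every vector $c$:

    # Numerical check of Equation (18.82) and the Gauss-Markov inequality.
    # The design (n, k, sigma2_u, W) is an arbitrary illustrative choice.
    import numpy as np

    rng = np.random.default_rng(2)
    n, k, sigma2_u = 30, 2, 1.0
    X = np.column_stack([np.ones(n), rng.standard_normal((n, k))])

    # OLS: A_hat = X (X'X)^{-1}; check A_hat' X = I_{k+1} as in Equation (18.82).
    A_ols = X @ np.linalg.inv(X.T @ X)
    print(np.allclose(A_ols.T @ X, np.eye(k + 1)))   # True

    # A competing linear estimator: beta_tilde = A'Y with A = W X (X'WX)^{-1}
    # for an arbitrary positive definite diagonal W. Then A'X = I_{k+1}, so
    # beta_tilde is conditionally unbiased, but it is not OLS.
    W = np.diag(rng.uniform(0.5, 2.0, size=n))
    A_alt = W @ X @ np.linalg.inv(X.T @ W @ X)
    print(np.allclose(A_alt.T @ X, np.eye(k + 1)))   # True: unbiased, by (18.82)

    # Under homoskedasticity, var(beta | X) = sigma2_u * A'A for both estimators.
    V_ols = sigma2_u * A_ols.T @ A_ols
    V_alt = sigma2_u * A_alt.T @ A_alt

    # Gauss-Markov: V_alt - V_ols is positive semidefinite, i.e.,
    # var(c' beta_hat) <= var(c' beta_tilde) for every c.
    print(np.linalg.eigvalsh(V_alt - V_ols).min() >= -1e-10)  # True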