
	 Appendix

	 3.2	 Two Proofs That $\bar{Y}$ Is the Least Squares Estimator of $\mu_Y$

This appendix provides two proofs, one using calculus and one not, that $\bar{Y}$ minimizes the
sum of squared prediction mistakes in Equation (3.2); that is, that $\bar{Y}$ is the least squares
estimator of $E(Y)$.
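
For reference, the problem being solved is the minimization of the sum of squared prediction mistakes from Equation (3.2):

$$\min_{m} \; \sum_{i=1}^{n} (Y_i - m)^2.$$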

Calculus Proof

To minimize the sum of squared prediction mistakes, take its derivative and set it to zero:

$$\frac{d}{dm}\sum_{i=1}^{n}(Y_i - m)^2 = -2\sum_{i=1}^{n}(Y_i - m) = -2\sum_{i=1}^{n}Y_i + 2nm = 0. \tag{3.27}$$

Solving the final equation for $m$ shows that $\sum_{i=1}^{n}(Y_i - m)^2$ is minimized when $m = \bar{Y}$.
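
Spelled out, rearranging the final equality in Equation (3.27) gives

$$2nm = 2\sum_{i=1}^{n}Y_i \quad\Longrightarrow\quad m = \frac{1}{n}\sum_{i=1}^{n}Y_i = \bar{Y}.$$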

Noncalculus Proof

The strategy is to show that the difference between the least squares estimator and $\bar{Y}$ must
be zero, from which it follows that $\bar{Y}$ is the least squares estimator. Let $d = \bar{Y} - m$, so that
$m = \bar{Y} - d$. Then $(Y_i - m)^2 = (Y_i - [\bar{Y} - d])^2 = ([Y_i - \bar{Y}] + d)^2 = (Y_i - \bar{Y})^2 + 2d(Y_i - \bar{Y}) + d^2$. Thus the sum of squared prediction mistakes [Equation (3.2)] is

$$\sum_{i=1}^{n}(Y_i - m)^2 = \sum_{i=1}^{n}(Y_i - \bar{Y})^2 + 2d\sum_{i=1}^{n}(Y_i - \bar{Y}) + nd^2 = \sum_{i=1}^{n}(Y_i - \bar{Y})^2 + nd^2, \tag{3.28}$$

where the second equality uses the fact that $\sum_{i=1}^{n}(Y_i - \bar{Y}) = 0$. Because both terms in the
final expression in Equation (3.28) are nonnegative and because the first term does not depend
on $d$, $\sum_{i=1}^{n}(Y_i - m)^2$ is minimized by choosing $d$ to make the second term, $nd^2$, as small as
possible. This is done by setting $d = 0$, that is, by setting $m = \bar{Y}$, so that $\bar{Y}$ is the least
squares estimator of $E(Y)$.
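
Both proofs can also be checked numerically. The following minimal Python sketch (the simulated data, sample size, and grid of candidate values for $m$ are illustrative assumptions, not from the text) verifies that the sum of squared mistakes is smallest at $m = \bar{Y}$ and that the identity in Equation (3.28) holds up to floating-point rounding:

```python
import numpy as np

# Illustrative data; the argument is algebraic, so any sample will do.
rng = np.random.default_rng(0)
Y = rng.normal(loc=5.0, scale=2.0, size=100)
Y_bar = Y.mean()
n = len(Y)

def sum_sq_mistakes(m):
    """Sum of squared prediction mistakes from Equation (3.2): sum_i (Y_i - m)^2."""
    return np.sum((Y - m) ** 2)

# Equation (3.28): sum_i (Y_i - m)^2 = sum_i (Y_i - Y_bar)^2 + n*d^2, with d = Y_bar - m.
for m in (Y_bar - 1.0, Y_bar - 0.1, Y_bar, Y_bar + 0.1, Y_bar + 1.0):
    d = Y_bar - m
    lhs = sum_sq_mistakes(m)
    rhs = sum_sq_mistakes(Y_bar) + n * d**2
    print(f"m = {m:7.4f}: sum of squares = {lhs:10.4f}, identity RHS = {rhs:10.4f}")
# The two columns agree, and the sum of squares is smallest at m = Y_bar,
# where the n*d^2 term vanishes.
```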