Chapter 14 Introduction to Time Series Regression and Forecasting
where $a_0, \dots, a_p$ are the coefficients of the lag polynomial and $L^0 = 1$. The degree of the
lag polynomial $a(L)$ in Equation (14.38) is $p$. Multiplying $Y_t$ by $a(L)$ yields
$$a(L)Y_t = \Bigl(\sum_{j=0}^{p} a_j L^j\Bigr) Y_t = \sum_{j=0}^{p} a_j (L^j Y_t) = \sum_{j=0}^{p} a_j Y_{t-j} = a_0 Y_t + a_1 Y_{t-1} + \cdots + a_p Y_{t-p}. \quad (14.39)$$
The expression in Equation (14.39) implies that the AR(p) model in Equation (14.13) can
be written compactly as
$$a(L)Y_t = \beta_0 + u_t, \quad (14.40)$$
where $a_0 = 1$ and $a_j = -\beta_j$ for $j = 1, \dots, p$. Similarly, an ADL($p$, $q$) model can be written
$$a(L)Y_t = \beta_0 + c(L)X_{t-1} + u_t, \quad (14.41)$$
where $a(L)$ is a lag polynomial of degree $p$ (with $a_0 = 1$) and $c(L)$ is a lag polynomial of
degree $q - 1$.
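As a quick numerical check of the identity in Equation (14.39), applying the coefficients of a lag polynomial to a series is the same as forming the weighted sum of its lags. The following is a minimal sketch in NumPy; the series and the coefficients $a_0, a_1, a_2$ are illustrative choices, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.normal(size=100)          # an arbitrary series Y_t, t = 0, ..., 99

# Hypothetical coefficients of a lag polynomial a(L) of degree p = 2,
# with a_0 = 1 as in Equations (14.40) and (14.41)
a = np.array([1.0, -0.5, 0.25])
p = len(a) - 1

# a(L)Y_t = sum_{j=0}^{p} a_j Y_{t-j}, evaluated for t = p, ..., 99
t = np.arange(p, len(Y))
lag_poly_Y = sum(a[j] * Y[t - j] for j in range(p + 1))

# The same quantity written out term by term, as in Equation (14.39)
term_by_term = a[0] * Y[t] + a[1] * Y[t - 1] + a[2] * Y[t - 2]

assert np.allclose(lag_poly_Y, term_by_term)
```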
Appendix 14.4 ARMA Models
The autoregressive–moving average (ARMA) model extends the autoregressive model by
modeling $u_t$ as serially correlated, specifically as being a distributed lag (or "moving average")
of another unobserved error term. In the lag operator notation of Appendix 14.3, let
$u_t = b(L)e_t$, where $b(L)$ is a lag polynomial of degree $q$ with $b_0 = 1$ and $e_t$ is a serially
uncorrelated, unobserved random variable. Then the ARMA($p$, $q$) model is
$$a(L)Y_t = \beta_0 + b(L)e_t, \quad (14.42)$$
where $a(L)$ is a lag polynomial of degree $p$ with $a_0 = 1$.
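To make the definition concrete, here is a minimal simulation of an ARMA(1, 1) process in NumPy. The coefficients are hypothetical: $a(L) = 1 - 0.6L$, $b(L) = 1 + 0.4L$, and $\beta_0 = 0$, so Equation (14.42) reads $Y_t = 0.6Y_{t-1} + e_t + 0.4e_{t-1}$.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
phi, theta = 0.6, 0.4             # hypothetical: a(L) = 1 - 0.6L, b(L) = 1 + 0.4L
e = rng.normal(size=T)            # serially uncorrelated shocks e_t

Y = np.zeros(T)
for t in range(1, T):
    # a(L)Y_t = b(L)e_t with beta_0 = 0:
    # Y_t = 0.6 Y_{t-1} + e_t + 0.4 e_{t-1}
    Y[t] = phi * Y[t - 1] + e[t] + theta * e[t - 1]

# The error term u_t = b(L)e_t = e_t + 0.4 e_{t-1} is serially correlated,
# which is exactly what the moving-average part contributes
u = e[1:] + theta * e[:-1]
```

The sample first autocorrelation of `u` should be close to its population value $\theta/(1 + \theta^2) \approx 0.34$, confirming that $u_t$ is serially correlated even though $e_t$ is not.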
Both the AR and ARMA models can be thought of as ways to approximate the auto-
covariances of Yt. The reason for this is that any stationary time series Yt with a finite
variance can be written either as an AR or as an MA with a serially uncorrelated error term,
although the AR or MA models might need to have an infinite order. The second of these
results, that a stationary process can be written in moving average form, is known as the
Wold decomposition theorem and is one of the fundamental results underlying the theory
of stationary time series analysis.
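The MA-to-AR direction can be illustrated numerically: an MA(1) process $Y_t = e_t + \theta e_{t-1}$ has the AR($\infty$) representation $e_t = \sum_{j \ge 0} (-\theta)^j Y_{t-j}$, and truncating the sum at a modest number of lags already recovers the serially uncorrelated shock almost exactly. A minimal sketch, where the value $\theta = 0.5$ and the truncation at 20 lags are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
T, theta = 10_000, 0.5
e = rng.normal(size=T)            # serially uncorrelated shocks
Y = e.copy()
Y[1:] += theta * e[:-1]           # MA(1): Y_t = e_t + theta * e_{t-1}

# Truncated AR(infinity) representation: e_t ~ sum_{j=0}^{k} (-theta)^j Y_{t-j}
k = 20
coefs = (-theta) ** np.arange(k + 1)
t = np.arange(k, T)
e_hat = sum(coefs[j] * Y[t - j] for j in range(k + 1))

# With |theta| < 1 the truncation error shrinks geometrically in k,
# so the finite-order AR approximation is essentially exact here
max_err = np.max(np.abs(e_hat - e[t]))
```

Because the omitted terms are of order $\theta^{k+1}$, `max_err` comes out far below $10^{-4}$ in this sketch: a finite but high-order AR reproduces the MA process, as the text's approximation argument suggests.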

