16.1 Vector Autoregressions
Key Concept 16.1
Vector Autoregressions

A vector autoregression (VAR) is a set of k time series regressions, in which the regressors are lagged values of all k series. A VAR extends the univariate autoregression to a list, or “vector,” of time series variables. When the number of lags in each of the equations is the same and is equal to p, the system of equations is called a VAR(p).
In the case of two time series variables, Y_t and X_t, the VAR(p) consists of the two equations

Y_t = β_10 + β_11 Y_{t-1} + ⋯ + β_1p Y_{t-p} + γ_11 X_{t-1} + ⋯ + γ_1p X_{t-p} + u_1t   (16.1)
X_t = β_20 + β_21 Y_{t-1} + ⋯ + β_2p Y_{t-p} + γ_21 X_{t-1} + ⋯ + γ_2p X_{t-p} + u_2t,   (16.2)

where the β’s and the γ’s are unknown coefficients and u_1t and u_2t are error terms.
The VAR assumptions are the time series regression assumptions of Key
Concept 14.6, applied to each equation. The coefficients of a VAR are estimated
by estimating each equation by OLS.
using the methods of Section 14.4. Another approach is to develop a single model
that can forecast all the variables, which can help to make the forecasts mutually
consistent. One way to forecast several variables with a single model is to use a
vector autoregression (VAR). A VAR extends the univariate autoregression to
multiple time series variables; that is, to a “vector” of time series variables.
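As a sketch of this idea (not from the text), once the coefficients of a bivariate VAR(1) are in hand, a single model produces forecasts of both variables at once, and iterating the forecasts keeps them mutually consistent. All numerical values below are illustrative assumptions, not estimates from data:

```python
import numpy as np

# Assumed (illustrative) VAR(1) coefficients for Y_t and X_t:
#   Y_t = 1.0 + 0.5*Y_{t-1} + 0.2*X_{t-1} + u_1t
#   X_t = 0.5 + 0.1*Y_{t-1} + 0.4*X_{t-1} + u_2t
intercept = np.array([1.0, 0.5])
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])      # row i: lag coefficients in equation i

last = np.array([2.0, 1.0])     # most recent observed values (Y_T, X_T)

# One-step-ahead forecast of BOTH variables from the single model:
forecast = intercept + A @ last
print(forecast)                 # [2.2, 1.1]

# Iterating gives the two-step-ahead forecast, so the multi-period
# forecasts of Y and X remain mutually consistent:
forecast2 = intercept + A @ forecast
print(forecast2)                # [2.32, 1.16]
```

The same matrix recursion extends to any horizon h by applying it h times, which is what makes the VAR a convenient single model for forecasting several variables together.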
The VAR Model
A vector autoregression (VAR) with two time series variables, Y_t and X_t, consists
of two equations: In one, the dependent variable is Y_t; in the other, the dependent
variable is X_t. The regressors in both equations are lagged values of both
variables. More generally, a VAR with k time series variables consists of k equations,
one for each of the variables, where the regressors in all equations are lagged
values of all the variables. The coefficients of the VAR are estimated by
estimating each of the equations by OLS.
VARs are summarized in Key Concept 16.1.
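As a concrete sketch of equation-by-equation OLS estimation (the simulation and its parameter values are illustrative assumptions, not from the text), the example below generates data from a stationary bivariate VAR(1) and recovers the coefficients of each equation by a separate OLS regression, exactly as Key Concept 16.1 describes:

```python
import numpy as np

rng = np.random.default_rng(0)

# True (illustrative) VAR(1) used to simulate the data:
#   Y_t = 1.0 + 0.5*Y_{t-1} + 0.2*X_{t-1} + u_1t
#   X_t = 0.5 + 0.1*Y_{t-1} + 0.4*X_{t-1} + u_2t
T = 5000
Y = np.zeros(T)
X = np.zeros(T)
for t in range(1, T):
    Y[t] = 1.0 + 0.5 * Y[t - 1] + 0.2 * X[t - 1] + rng.normal()
    X[t] = 0.5 + 0.1 * Y[t - 1] + 0.4 * X[t - 1] + rng.normal()

# Regressors: an intercept plus one lag of EACH variable.
# The same regressor matrix appears in both equations.
Z = np.column_stack([np.ones(T - 1), Y[:-1], X[:-1]])

# Equation-by-equation OLS: one regression per dependent variable.
beta_y, *_ = np.linalg.lstsq(Z, Y[1:], rcond=None)
beta_x, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
print(beta_y)   # ≈ [1.0, 0.5, 0.2]
print(beta_x)   # ≈ [0.5, 0.1, 0.4]
```

Because the regressors are identical across equations, running OLS equation by equation is equivalent to estimating the system jointly, which is why this simple procedure suffices for a VAR.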

