3. Vector Autoregressive (VAR) Model

The construction of a VAR model starts with a vector of \(K\) observable time series:

\[y_t = [y_{1t}, y_{2t}, \dots, y_{Kt}]'\]

The data generating process (DGP) consists of a deterministic and a stochastic part

\[y_t = \mu_t + x_t\]

with \(E[y_t] = \mu_t\) as the expected value. \(\mu_t\) can contain a constant, polynomial trend terms, seasonal dummies and more.

The stochastic term is a linear VAR process of order \(p\)

\[x_t = A_1 x_{t-1} + A_2 x_{t-2} + \dots + A_p x_{t-p} + u_t\]

where \(u_t\) is white noise with \(E[u_t] = 0\), \(E[u_t u_s'] = 0\) for all \(s \neq t\), and \(E[u_t u_t'] = \Sigma_u\), such that \(u_t \sim (0, \Sigma_u)\).
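As a minimal numerical sketch, the following simulates such a process with NumPy for \(K = 2\) and \(p = 2\); the coefficient matrices, innovation covariance and sample size are hypothetical values chosen purely for illustration:

```python
import numpy as np

# Minimal sketch (all coefficient values hypothetical): a bivariate VAR(2)
# process x_t = A_1 x_{t-1} + A_2 x_{t-2} + u_t with Gaussian white noise
# u_t ~ N(0, Sigma_u).
rng = np.random.default_rng(0)
K, p, T = 2, 2, 500

A = [np.array([[0.5, 0.1],
               [0.0, 0.4]]),        # A_1
     np.array([[0.2, 0.0],
               [0.1, 0.1]])]        # A_2
Sigma_u = np.array([[1.0, 0.3],
                    [0.3, 1.0]])

u = rng.multivariate_normal(np.zeros(K), Sigma_u, size=T)
x = np.zeros((T, K))
for t in range(p, T):
    x[t] = sum(A[j] @ x[t - 1 - j] for j in range(p)) + u[t]
```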

It is convenient to rewrite the former expression for the stochastic part as

\[A(L) x_t = u_t\]

with \(A(L) = I_K - A_1 L - A_2 L^2 - \dots - A_p L^p\). Applying \(A(L)\) to the DGP equation yields:

\[A(L) y_t = A(L) \mu_t + u_t\]

If the deterministic term is just a constant, i.e. \(\mu_t = \mu_0\), the model becomes

\[y_t = \nu + A_1 y_{t-1} + \dots + A_p y_{t-p} + u_t\]

where \(\nu = A(L) \mu_0 = A(1) \mu_0 = (I_K - \sum^p_{j=1} A_j) \mu_0\), since applying the lag operator to a constant leaves it unchanged.
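A small sketch of this relation, reusing the hypothetical coefficients from above:

```python
import numpy as np

# Sketch (coefficients hypothetical, matching the simulation above): the
# intercept implied by a constant mean mu_0, nu = (I_K - A_1 - A_2) mu_0.
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.2, 0.0], [0.1, 0.1]])
mu_0 = np.array([1.0, 2.0])

nu = (np.eye(2) - A1 - A2) @ mu_0
# Sanity check: solving back for the mean recovers mu_0.
mu_back = np.linalg.solve(np.eye(2) - A1 - A2, nu)
assert np.allclose(mu_back, mu_0)
```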

The process is stable if all roots of the following polynomial are outside the unit circle.

\[\det(A(z)) = \det(I_K - A_1 z - \dots - A_p z^p) \neq 0 \quad \forall z \in \mathbb{C}, \; |z| \leq 1\]

Stability implies stationarity; a stable process satisfies the common assumptions:

  • constant mean
  • white noise with a time-invariant covariance matrix
  • time-invariant autocovariances (stationarity)
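The stability condition is commonly checked numerically via the companion (stacked VAR(1)) form: the determinant condition above holds exactly when all eigenvalues of the companion matrix have modulus smaller than 1. A sketch with the hypothetical coefficients from above:

```python
import numpy as np

# Sketch of the stability check (hypothetical coefficients): det(A(z)) != 0
# for |z| <= 1 is equivalent to all eigenvalues of the VAR(p) companion
# matrix having modulus smaller than 1.
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.2, 0.0], [0.1, 0.1]])
A, K, p = [A1, A2], 2, 2

companion = np.zeros((K * p, K * p))
companion[:K, :] = np.hstack(A)            # top block row: [A_1, ..., A_p]
companion[K:, :-K] = np.eye(K * (p - 1))   # shifted identity blocks

stable = np.all(np.abs(np.linalg.eigvals(companion)) < 1)
print(stable)  # True for these values
```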

A stable \(VAR(p)\) process can be represented as a moving average (MA) process by successive substitution. Consider the following \(VAR(1)\) as an example:

\[\begin{split}y_t &= \nu + A_1 y_{t-1} + u_t\\ y_t &= \nu + A_1 (\nu + A_1 y_{t-2} + u_{t-1}) + u_t\\ \vdots &= \vdots\\ y_t &= \sum^\infty_{i=0} A^i_1 \nu + \sum^\infty_{i=0} A^i_1 u_{t-i}\\ &= (I_K - A_1)^{-1} \nu + \sum^\infty_{i = 0} A^i_1 u_{t-i}\end{split}\]

If all eigenvalues of \(A_1\) have modulus smaller than 1, the sequence \(A_1^i\), \(i = 0, 1, \dots\), is absolutely summable and the partial sum \(\sum^j_{i=0} A^i_1 \nu\) converges to \((I_K - A_1)^{-1} \nu\) as \(j \to \infty\).
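This convergence is easy to verify numerically; the sketch below uses a hypothetical stable \(A_1\) and \(\nu\):

```python
import numpy as np

# Numerical sketch (A_1 and nu hypothetical): for a stable VAR(1) the partial
# sums sum_{i=0}^{j} A_1^i nu approach (I_K - A_1)^{-1} nu as j grows.
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
nu = np.array([1.0, 2.0])

partial = np.zeros(2)
term = np.eye(2)                   # A_1^0
for i in range(200):
    partial += term @ nu
    term = term @ A1               # advance to A_1^{i+1}

limit = np.linalg.solve(np.eye(2) - A1, nu)
assert np.allclose(partial, limit)
```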

We can also obtain the Wold representation of the process. Rewrite the model as

\[A(L) y_t = \nu + u_t\]

Then, let \(\phi(L) = \sum^\infty_{i=0} \phi_i L^i\) such that \(\phi(L) A(L) = I_K\). Premultiplying by \(\phi(L)\) yields

\[\begin{split}y_t &= \phi(L) \nu + \phi(L) u_t\\ &= \underbrace{\Bigl(\sum^\infty_{i=0} \phi_i\Bigr) \nu}_{=\mu} + \sum^\infty_{i=0} \phi_i u_{t-i}\end{split}\]

where \(\phi(L)\) is often denoted by \(A(L)^{-1}\), i.e. the inverse of the lag polynomial. \(A(L)\) is invertible if \(\det(A(z)) \neq 0\) for \(|z| \leq 1\), which is exactly the stability condition. Each coefficient matrix \(\phi_i\) can be computed recursively with

\[\begin{split}\phi_0 &= I_K\\ \phi_i &= \sum^i_{j=1} \phi_{i-j} A_j \quad &\text{for } i = 1, 2, \dots\end{split}\]

where \(A_j = 0\) for \(j > p\). For a stable process, \(\phi_i \to 0\) as \(i \to \infty\).
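A direct translation of this recursion into NumPy, using the same hypothetical VAR(2) coefficients as before:

```python
import numpy as np

# Sketch of the MA-coefficient recursion phi_0 = I_K,
# phi_i = sum_{j=1}^{i} phi_{i-j} A_j (with A_j = 0 for j > p).
# Coefficient values are hypothetical.
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.2, 0.0], [0.1, 0.1]])
A, K, p = [A1, A2], 2, 2

n_terms = 25
phi = [np.eye(K)]                                   # phi_0 = I_K
for i in range(1, n_terms):
    phi_i = sum(phi[i - j] @ A[j - 1] for j in range(1, min(i, p) + 1))
    phi.append(phi_i)

# For a stable process phi_i -> 0 as i grows.
print(np.abs(phi[-1]).max())
```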

3.1. Moments

The distributional properties of the process are determined by the distribution of \(u_t\). The mean follows immediately from the Wold representation.

\[E[y_t] = \mu\]

The autocovariance function is

\[\begin{split}\Gamma_y(h) &= Cov(y_t, y_{t-h})\\ &= E[(y_t - \mu)(y_{t-h} - \mu)']\\ &= \sum^\infty_{i=0} \phi_{h+i} \Sigma_u \phi'_i\end{split}\]

for \(h \geq 0\); negative lags follow from \(\Gamma_y(-h) = \Gamma_y(h)'\).
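In practice the infinite sum is truncated. A sketch that combines the \(\phi_i\) recursion with this formula, with the hypothetical coefficients as before and a generous truncation length:

```python
import numpy as np

# Sketch: approximate Gamma_y(h) = sum_{i>=0} phi_{h+i} Sigma_u phi_i' by
# truncating the infinite MA sum, reusing the hypothetical VAR(2) coefficients
# and the phi recursion from above.
A = [np.array([[0.5, 0.1], [0.0, 0.4]]),
     np.array([[0.2, 0.0], [0.1, 0.1]])]
Sigma_u = np.array([[1.0, 0.3], [0.3, 1.0]])
K, p, n = 2, 2, 200

phi = [np.eye(K)]
for i in range(1, n):
    phi.append(sum(phi[i - j] @ A[j - 1] for j in range(1, min(i, p) + 1)))

def gamma(h):
    # Truncated version of Cov(y_t, y_{t-h}) for h >= 0.
    return sum(phi[h + i] @ Sigma_u @ phi[i].T for i in range(n - h))

Gamma0, Gamma1 = gamma(0), gamma(1)
```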

3.2. Autocovariances

Suppose that \(y_t\) is a stationary and stable \(VAR(1)\) process with

\[\begin{split}y_t &= \nu + A_1 y_{t-1} + u_t\\ y_t - \mu &= A_1 (y_{t-1} - \mu) + u_t\end{split}\]

where \(E[u_t u_t'] = \Sigma_u\) and \(E[y_t] = \mu\). Postmultiplying by \((y_{t-h} - \mu)'\) and taking expectations yields

\[E[(y_t - \mu)(y_{t-h} - \mu)'] = A_1 E[(y_{t-1} - \mu)(y_{t-h} - \mu)'] + E[u_t (y_{t-h} - \mu)']\]

For \(h > 0\) the last term vanishes, because \(u_t\) is uncorrelated with \(y_{t-h}\), leaving the recursion \(\Gamma_y(h) = A_1 \Gamma_y(h-1)\). For \(h = 0\), \(E[u_t (y_t - \mu)'] = \Sigma_u\), so that \(\Gamma_y(0) = A_1 \Gamma_y(1)' + \Sigma_u\).
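These two cases determine all autocovariances: substituting \(\Gamma_y(1) = A_1 \Gamma_y(0)\) gives \(\Gamma_y(0) = A_1 \Gamma_y(0) A_1' + \Sigma_u\), which can be solved with the \(vec\) operator since \(vec(A_1 \Gamma_y(0) A_1') = (A_1 \otimes A_1)\, vec(\Gamma_y(0))\). A sketch with hypothetical \(A_1\) and \(\Sigma_u\):

```python
import numpy as np

# Sketch for the VAR(1) case (A_1 and Sigma_u hypothetical): solve
# Gamma_y(0) = A_1 Gamma_y(0) A_1' + Sigma_u via
# (I_{K^2} - A_1 kron A_1) vec(Gamma_y(0)) = vec(Sigma_u),
# then use Gamma_y(h) = A_1 Gamma_y(h-1) for higher lags.
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
Sigma_u = np.array([[1.0, 0.3], [0.3, 1.0]])
K = 2

vec_Sigma = Sigma_u.flatten(order="F")       # column-stacking vec operator
vec_Gamma0 = np.linalg.solve(np.eye(K**2) - np.kron(A1, A1), vec_Sigma)
Gamma0 = vec_Gamma0.reshape(K, K, order="F")
Gamma1 = A1 @ Gamma0                         # Gamma_y(1) = A_1 Gamma_y(0)
```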