{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Problem Set 3: Assorted topics on single equation models [![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/tobiasraabe/time_series/master?filepath=docs%2Fproblem_sets%2Fproblem_set_3.ipynb)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 1\n", "\n", "In conventional econometric models, the variance of the disturbance term is assume to be constant. However many economic time series exhibit periods of unusally large volatility followed by periods of relative tranquility. In such circumstances, the assumption of a constant variance is inappropriate.\n", "\n", "Consider the following specification:\n", "\n", "$$\\begin{align}\n", " \\epsilon_t &= h_t u_t\\\\\n", " h^2_t &= \\alpha_0 + \\alpha_1 \\epsilon^2_{t-1}\\\\\n", " u_t &\\sim \\mathcal{N}(0, 1)\\text{, independent of } \\epsilon_{t-1}\n", "\\end{align}$$\n", "\n", "1. Compute the **unconditional expectation**, the **unconditional variance**, and the **autocovariances of $\\epsilon_t$**.\n", "\n", " To compute the **unconditional expectation**, we first compute the conditional expectation and then use the law of iterated expectation for the unconditional.\n", " \n", " $$\\begin{align}\n", " E[\\epsilon_t \\mid I_{t-1}] &= E[h_t u_t \\mid I_{t-1}]\\\\\n", " &= h_t E[u_t \\mid I_{t-1}]\\\\\n", " &= h_t * 0\n", " \\end{align}$$\n", " \n", " It is important to note that given the information set $I$ at time $t-1$, $h_t$ is a constant as it depends only on variables from period $t-1$.\n", " \n", " Using the law of iterated expectation, we get the final result.\n", " \n", " $$\\begin{align}\n", " E[\\epsilon_t] &= E[E[h_t u_t \\mid I_{t-1}]]\\\\\n", " &= E[0]\\\\\n", " &= 0\n", " \\end{align}$$\n", " \n", " For the **unconditional variance** we start with\n", " \n", " $$\n", " \\sigma^2 = E[\\epsilon^2] = E[E[\\epsilon^2 \\mid I_{t-1}]] = E[h^2]\n", " $$\n", " \n", " Therefore, we simply take the 
expectation of $h^2_t$\n", " \n", " $$\\begin{align}\n", " \\sigma^2 &= E[h^2_t]\\\\\n", " &= E[\\alpha_0 + \\alpha_1 \\epsilon^2_{t-1}]\\\\\n", " &= \\alpha_0 + \\alpha_1 E[\\epsilon^2_{t-1}]\\\\\n", " &= \\alpha_0 + \\alpha_1 \\sigma^2\\\\\n", " \\Rightarrow \\sigma^2 &= \\frac{\\alpha_0}{1 - \\alpha_1}\n", " \\end{align}$$\n", " \n", " Recognize that $\\alpha_1$ must be smaller than 1, otherwise the variance of the process is not finite.\n", " \n", " At last, we compute the **autocovariances** of $\\epsilon_t$ for $\\tau \\geq 1$.\n", " \n", " $$\\begin{align}\n", " E[\\epsilon_t \\epsilon_{t-\\tau}] &= E[E[\\epsilon_t \\epsilon_{t-\\tau} \\mid I_{t-1}]]\\\\\n", " &= E[\\epsilon_{t-\\tau} E[\\epsilon_t \\mid I_{t-1}]]\\\\\n", " &= E[\\epsilon_{t-\\tau} * 0]\\\\\n", " &= 0\n", " \\end{align}$$\n", " \n", " To sum it up, the $ARCH(q)$ process has zero mean and is not autocorrelated. Therefore, it is weakly stationary as long as the variance is finite, which requires that $\\sum^q_{i=1} \\alpha_i$ is smaller than 1. This does not mean that $\\epsilon_t$ is independently distributed, as modelling the conditional variance implies that higher moments depend on past realizations.\n", " \n", "2. For which values of $\\alpha_0$ and $\\alpha_1$ is $\\epsilon_t$ white noise? *Hint: What has to hold for the variance?*\n", " \n", " As $\\epsilon_t \\mid I_{t-1} \\sim \\mathcal{N}(0, h_t^2)$, the conditional variance has to be constant, i.e. the following has to hold\n", " \n", " $$\\begin{align}\n", " 1 &= h_t^2\\\\\n", " &= \\alpha_0 + \\alpha_1 \\epsilon^2_{t-1}\n", " \\end{align}$$\n", " \n", " which holds for every realization of $\\epsilon_{t-1}$ only if $\\alpha_1 = 0$; normalizing the variance to one then gives $\\alpha_0 = 1$.\n", " \n", "3. 
Now compute the conditional mean of $\\epsilon_t$ and its conditional variance.\n", "\n", " If we choose values for $\\alpha_0$ and $\\alpha_1$ so that $\\epsilon_t$ follows a white noise process, then $h^2_t = 1$ which leads to\n", " \n", " $$\\begin{align}\n", " E[\\epsilon_t \\mid I_{t-1}] &= E[h_t u_t \\mid I_{t-1}]\\\\\n", " &= 1 * E[u_t \\mid I_{t-1}]\\\\\n", " &= 0\n", " \\end{align}$$\n", " \n", " The conditional variance is\n", " \n", " $$\\begin{align}\n", " V[\\epsilon_t \\mid I_{t-1}] &= E[\\epsilon^2_t \\mid I_{t-1}]\\\\\n", " &= h^2_t E[u^2_t \\mid I_{t-1}]\\\\\n", " &= h^2_t = \\alpha_0 + \\alpha_1 \\epsilon^2_{t-1}\n", " \\end{align}$$\n", " \n", " which equals one under the white noise values chosen above." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 2\n", "\n", "Consider the following $TARCH$ process\n", "\n", "$$\\begin{align}\n", " r_t &= \\rho r_{t-1} + \\epsilon_t\\\\\n", " \\epsilon_t &= \\sigma_t e_t\\\\\n", " \\sigma^2_t &= \\omega + \\alpha \\epsilon^2_{t-1} + \\gamma \\epsilon^2_{t-1} I_{[\\epsilon_{t-1} < 0]} + \\beta \\sigma^2_{t-1}\\\\\n", " e_t &\\overset{iid}{\\sim} \\mathcal{N}(0, 1)\n", "\\end{align}$$\n", "\n", "Assume that the conditions for this process to be covariance stationary hold. For the following tasks note that $E_t(\\cdot) = E(\\cdot \\mid I_t)$ is the time $t$ conditional expectation and $V_t(\\cdot) = V(\\cdot \\mid I_t)$ is the time $t$ conditional variance.\n", "\n", "1. What is $E[r_{t+1}]$?\n", "\n", " First of all, note that this task comprises a $TARCH(1, 1)$ process embedded in an $AR(1)$ process. We are going to rewrite the $AR(1)$ as an $MA(\\infty)$ expression, which is valid because covariance stationarity implies $|\\rho| < 1$.\n", " \n", " $$\\begin{align}\n", " r_t &= \\rho r_{t-1} + \\epsilon_t\\\\\n", " &= \\frac{1}{1 - \\rho L} \\epsilon_t\\\\\n", " &= \\sum^\\infty_{i=0} \\rho^i \\epsilon_{t-i}\n", " \\end{align}$$\n", " \n", " Furthermore, we need the unconditional expectation of the $TARCH(1, 1)$ process. 
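As a quick numerical sanity check of the $MA(\\infty)$ representation (a sketch with illustrative values, not part of the exercise; $\\rho = 0.5$ and plain Gaussian innovations stand in for the $TARCH$ errors), the truncated sum can be compared with the recursively computed $r_t$:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.5  # illustrative AR(1) coefficient with |rho| < 1
eps = rng.standard_normal(500)  # stand-in innovations

# r_t from the AR(1) recursion r_t = rho * r_{t-1} + eps_t
r = np.zeros_like(eps)
r[0] = eps[0]
for t in range(1, len(eps)):
    r[t] = rho * r[t - 1] + eps[t]

# r_T from the MA representation, truncated after k lags
T, k = len(eps) - 1, 50
ma = sum(rho**i * eps[T - i] for i in range(k + 1))

assert abs(r[T] - ma) < 1e-10  # both representations agree
```

The truncation error is of order $\\rho^k$, which is negligible for moderate $k$ whenever $|\\rho| < 1$.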
Using the law of iterated expectations yields\n", " \n", " $$\\begin{align}\n", " E[\\epsilon_t] &= E[\\sigma_t e_t]\\\\\n", " &= E[E[\\sigma_t e_t \\mid I_{t-1}]]\\\\\n", " &= E[\\sigma_t E[e_t \\mid I_{t-1}]]\\\\\n", " &= E[\\sigma_t * 0]\\\\\n", " &= 0\n", " \\end{align}$$\n", " \n", " Combining the two results yields\n", " \n", " $$\\begin{align}\n", " E[r_{t+1}] &= E[\\sum^\\infty_{i=0} \\rho^i \\epsilon_{t+1-i}]\\\\\n", " &= \\sum^\\infty_{i=0} \\rho^i E[\\epsilon_{t+1-i}]\\\\\n", " &= 0\n", " \\end{align}$$\n", " \n", "2. What is $E_t[r_{t+1}]$?\n", "\n", " $$\\begin{align}\n", " E_t[r_{t+1}] &= E[r_{t+1} \\mid I_t]\\\\\n", " &= E[\\rho r_t + \\epsilon_{t+1} \\mid I_t]\\\\\n", " &= \\rho r_t + E[\\epsilon_{t+1} \\mid I_t]\\\\\n", " &= \\rho r_t\n", " \\end{align}$$\n", " \n", "3. What is $V(r_{t+1})$?\n", "\n", " $$\\begin{align}\n", " V(r_{t+1}) &= E[r^2_{t+1}]\\\\\n", " &= E[(\\sum^\\infty_{i=0} \\rho^i \\epsilon_{t+1-i})^2]\\\\\n", " &= E[\\epsilon^2_{t+1} + \\rho^2 \\epsilon^2_t + \\rho^4 \\epsilon^2_{t-1} + \\dots + 2 \\rho \\epsilon_{t+1} \\epsilon_t + \\dots]\\\\\n", " &= E[\\epsilon^2_{t+1}] + \\rho^2 E[\\epsilon^2_t] + \\rho^4 E[\\epsilon^2_{t-1}] + \\dots\\\\\n", " &= \\sigma^2 \\sum^\\infty_{i=0} \\rho^{2i}\\\\\n", " &= \\sigma^2 \\frac{1}{1 - \\rho^2}\n", " \\end{align}$$\n", " \n", " The cross terms vanish in expectation because $\\epsilon_t$ has zero mean and is not autocorrelated, and $\\sigma^2 = E[\\epsilon^2_t]$ denotes the unconditional variance of the $TARCH(1, 1)$ process, which we still have to compute. For this, we rewrite the $TARCH(1, 1)$ equation for the conditional variance as an $ARCH(\\infty)$ process. 
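The geometric-sum step above can also be checked by simulation (a sketch; iid Gaussian innovations with variance $\\sigma^2$ stand in for the $TARCH$ errors, since only $\\sigma^2$ and $\\rho$ enter the formula):

```python
import numpy as np

rng = np.random.default_rng(1)
rho, sigma2, n = 0.7, 2.0, 500_000  # illustrative values

# AR(1) driven by iid innovations with unconditional variance sigma2
eps = np.sqrt(sigma2) * rng.standard_normal(n)
r = np.zeros(n)
for t in range(1, n):
    r[t] = rho * r[t - 1] + eps[t]

# sample variance should be close to sigma^2 / (1 - rho^2)
assert abs(r.var() - sigma2 / (1 - rho**2)) < 0.1
```

With $\\rho = 0.7$ and $\\sigma^2 = 2$ the theoretical value is $2 / 0.51 \\approx 3.92$, and the sample variance settles near it.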
Note that $e_t$ is symmetric around zero, so $P(\\epsilon_{t-1} < 0) = 0.5$ and, in expectation, the indicator term contributes $0.5 \\gamma E[\\epsilon^2_{t-1}]$.\n", " \n", " $$\n", " \\sigma^2 = E[\\epsilon_{t+1}^2] = E[E[\\epsilon_{t+1}^2 \\mid I_t]] = E[\\sigma^2_{t+1}]\n", " $$\n", " \n", " Replacing the indicator by its expectation and applying the lag operator $L$ gives\n", " \n", " $$\\begin{align}\n", " \\sigma^2_t &= \\omega + \\alpha \\epsilon^2_{t-1} + 0.5 \\gamma \\epsilon^2_{t-1} + \\beta \\sigma^2_{t-1}\\\\\n", " (1 - \\beta L) \\sigma^2_t &= \\omega + (\\alpha + 0.5 \\gamma) L \\epsilon^2_t\\\\\n", " \\sigma^2_t &= \\frac{\\omega}{1 - \\beta} + \\frac{(\\alpha + 0.5 \\gamma) L}{1 - \\beta L} \\epsilon^2_t\n", " \\end{align}$$\n", " \n", " The unconditional variance of the $TARCH(1, 1)$ process follows by taking expectations; since $E[\\epsilon^2_t]$ is constant over time, the lag operator acts as the identity.\n", " \n", " $$\\begin{align}\n", " E[\\sigma_t^2] &= E\\left[\\frac{\\omega}{1 - \\beta} + \\frac{(\\alpha + 0.5 \\gamma) L}{1 - \\beta L} \\epsilon^2_t\\right]\\\\\n", " \\sigma^2 &= \\frac{\\omega}{1 - \\beta} + \\frac{\\alpha + 0.5 \\gamma}{1 - \\beta} E[\\epsilon^2_t]\\\\\n", " &= \\frac{\\omega}{1 - \\beta} + \\frac{\\alpha + 0.5 \\gamma}{1 - \\beta} \\sigma^2\\\\\n", " \\left(1 - \\frac{\\alpha + 0.5 \\gamma}{1 - \\beta}\\right) \\sigma^2 &= \\frac{\\omega}{1 - \\beta}\\\\\n", " \\sigma^2 &= \\frac{\\omega}{1 - \\alpha - \\beta - 0.5 \\gamma}\n", " \\end{align}$$\n", " \n", " Inserting this result in the first equation yields\n", " \n", " $$\\begin{align}\n", " V(r_{t+1}) &= \\sigma^2 \\frac{1}{1 - \\rho^2}\\\\\n", " &= \\frac{\\omega}{1 - \\alpha - \\beta - 0.5 \\gamma} \\cdot \\frac{1}{1 - \\rho^2}\n", " \\end{align}$$\n", " \n", "4. 
What is $V_t(r_{t+1})$?\n", " \n", " Since $r_t$ is known at time $t$, only $\\epsilon_{t+1}$ is random; moreover, $\\sigma^2_{t+1}$ is itself known at time $t$ because it depends only on time-$t$ quantities.\n", " \n", " $$\\begin{align}\n", " V_t(r_{t+1}) &= V(\\rho r_t + \\epsilon_{t+1} \\mid I_t)\\\\\n", " &= E[\\epsilon^2_{t+1} \\mid I_t]\\\\\n", " &= \\sigma^2_{t+1}\\\\\n", " &= \\omega + \\alpha \\epsilon^2_t + \\gamma \\epsilon^2_t I_{[\\epsilon_t < 0]} + \\beta \\sigma^2_t\n", " \\end{align}$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 3\n", "\n", "Consider the following regression model\n", "\n", "$$\\begin{align}\n", " \\Delta x_t = \\delta + c \\Delta x_{t-1} + \\pi x_{t-1} + \\epsilon_t, && t = 2, 3, \\dots, T\n", "\\end{align}$$\n", "\n", "where the initial values $x_0$ and $x_1$ are given.\n", "\n", "
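The model can be simulated directly from its recursion; below is a sketch with illustrative parameter and initial values (note that, for $|c| < 1$, setting $\\pi = 0$ corresponds to a unit root in $x_t$):

```python
import numpy as np

rng = np.random.default_rng(2)
delta, c, pi = 0.1, 0.3, -0.2  # illustrative parameters
T = 200

x = np.zeros(T + 1)
x[0], x[1] = 0.0, 0.1  # the given initial values x_0 and x_1
for t in range(2, T + 1):
    # delta_x_t = delta + c * delta_x_{t-1} + pi * x_{t-1} + eps_t
    dx = delta + c * (x[t - 1] - x[t - 2]) + pi * x[t - 1] + rng.standard_normal()
    x[t] = x[t - 1] + dx
```

With $\\pi < 0$ the level term pulls $x_t$ back, so the simulated path is stationary; with $\\pi = 0$ it drifts like a random walk with trend $\\delta$.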