{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Problem Set 2: Stationary Univariate Processes [![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/tobiasraabe/time_series/master?filepath=docs%2Fproblem_sets%2Fproblem_set_2.ipynb)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 1\n", "\n", "Show that the coefficients of the Wold representation of the stable $ARMA(1, 1)$ model evolve according to the difference equation\n", "\n", "$$\n", "\\psi_j - \\alpha \\psi_{j-1} = 0\n", "$$\n", "\n", "with initial condition $\\psi_1 = \\alpha - \\beta$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Without loss of generality we assume $\\delta = 0$ so that the $ARMA(1, 1)$ representation is given by\n", "\n", "$$\n", "y_t = \\alpha y_{t-1} + \\epsilon_t - \\beta \\epsilon_{t-1}\n", "$$\n", "\n", "For the Wold representation, the process needs to be converted to an $MA(\\infty)$ process.\n", "\n", "$$\n", "(1 - \\alpha L) y_t = (1 - \\beta L) \\epsilon_t\n", "$$\n", "\n", "If all roots of the characteristic equation of $(1 - \\alpha L)$ lay outside the unit circle, it is invertible. Furthermore, since $\\alpha < 1$ we can expand the expression to an infinite series.\n", "\n", "$$\n", "y_t = \\frac{1 - \\beta L}{1 - \\alpha L} \\epsilon_t\n", "$$\n", "\n", "To match the expression of the Wold representation, the following relation has to hold\n", "\n", "$$\\begin{align}\n", "(1 - \\beta L) &= (1 - \\alpha L) \\sum^\\infty_{j=0} L^j \\psi_j\\\\\n", " &= \\psi_0 & &+ L \\psi_1 &&+ L^2 \\psi_2 &&+ L^3 \\psi_3 &&+ \\dots\\\\\n", " & & &- \\alpha L \\psi_0 &&- \\alpha L^2 \\psi_1 &&- \\alpha L^3 \\psi_2 &&- \\dots\\\\\n", "\\end{align}$$\n", "\n", "Matching by $L^j$ yields\n", "\n", "$$\\begin{align}\n", "L = 0:&& \\psi_0 &= 1\\\\\n", "L = 1:&& \\psi_1 &= \\alpha \\psi_0 - \\beta = \\alpha - \\beta\\\\\n", "L = 2:&& \\psi_2 &= \\alpha \\psi_1 = \\alpha (\\alpha - \\beta)\\\\\n", "L = j:&& \\psi_j &= \\alpha \\psi_{j - 1}\n", "\\end{align}$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 2\n", "\n", "Consider the following $ARMA(2, 2)$ process, where $\\epsilon_t \\overset{i.i.d.}{\\sim} \\mathcal{N}(0, 1)$:\n", "\n", "$$\n", "y_t = 10 + 0.3 y_{t-1} - 0.2 y_{t-2} + \\epsilon_t + 0.5 \\epsilon_{t-1} + 0.25 \\epsilon_{t-2}\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "1. Is the process stationary? Why?\n", "\n", " First, the process is rewritten with the lag operator.\n", "\n", " $$\n", " (1 - 0.3 L + 0.2 L^2) y_t = 10 + (1 + 0.5 L + 0.25 L^2) \\epsilon_t\n", " $$\n", " \n", " The process is stationary if the roots of the characteristic equation of $(1 - 0.3 L + 0.2 L^2)$ lay outside the unit circle. Therefore, we replace $L$ with $z$ and solve it using the $p-q$ formula.\n", " \n", " $$\\begin{align}\n", " 0 &= 1 - 0.3 z + 0.2 z^2\\\\\n", " 0 &= z^2 - 1.5 z + 5\\\\\n", " z_{1,2} &= 0.75 \\pm \\sqrt{0.75^2 - 5}\\\\\n", " &= 0.75 \\pm \\sqrt{\\frac{9}{16} - \\frac{80}{16}}\\\\\n", " &= 0.75 \\pm \\frac{\\sqrt{71}}{4}\n", " \\end{align}$$\n", " \n", " where $\\mid z_{1,2}\\mid > 1$ and therefore the process is stationary. The same result could have been obtained by using a different characteristic equation with $z^{-1} = \\lambda$. 
" The same result could have been obtained by using a different characteristic equation with $z^{-1} = \\lambda$. The characteristic equation can be rewritten as ([1])\n", " \n",
" $$\\begin{align}\n", " 0 &= 1 - 0.3 z + 0.2 z^2\\\\\n", " 0 &= 1 - 0.3 \\frac{1}{\\lambda} + 0.2 \\frac{1}{\\lambda^2}\\\\\n", " 0 &= \\lambda^2 - 0.3 \\lambda + 0.2\n", " \\end{align}$$\n", " \n",
" Proceeding with calculating the roots yields\n", " \n",
" $$\\begin{align}\n", " 0 &= \\lambda^2 - 0.3 \\lambda + 0.2\\\\\n", " \\lambda_{1, 2} &= 0.15 \\pm \\sqrt{0.0225 - 0.2} = 0.15 \\pm \\sqrt{0.1775} i\n", " \\end{align}$$\n", " \n",
" where the roots are again complex with modulus $\\mid \\lambda_{1, 2} \\mid = \\sqrt{0.0225 + 0.1775} = \\sqrt{0.2} < 1$.\n", " \n",
" The reason why the condition is either $\\mid z \\mid > 1$ or $\\mid \\lambda \\mid < 1$ is that the two formulations are equivalent: with $\\lambda = z^{-1}$, roots of the lag polynomial outside the unit circle correspond to roots of the characteristic equation inside the unit circle. Either version guarantees that the $AR$ lag polynomial can be inverted into an absolutely summable $MA(\\infty)$ representation, i.e. that the Wold coefficients decay geometrically.\n", "\n",
"2. Is the process invertible? Why?\n", "\n",
" The process is invertible if the roots of the $MA$ lag polynomial lie outside the unit circle in the $z$ formulation, or equivalently if the roots of its characteristic equation lie inside the unit circle in the $\\lambda$ formulation. We stick to the $\\lambda$ representation.\n", " \n",
" $$\\begin{align}\n", " 0 &= \\lambda^2 + 0.5 \\lambda + 0.25\\\\\n", " \\lambda_{1,2} &= -0.25 \\pm \\sqrt{0.0625 - 0.25} = -0.25 \\pm \\sqrt{0.1875} i\n", " \\end{align}$$\n", " \n",
" The roots are complex but they do exist; what matters is their modulus, $\\mid \\lambda_{1,2} \\mid = \\sqrt{0.0625 + 0.1875} = 0.5 < 1$. Hence the process is invertible.\n", " \n",
"3. Compute $E(y_t)$.\n", "\n",
" To compute the expectation of $y_t$, take expectations on both sides of the process and use stationarity, $E[y_t] = E[y_{t-1}] = E[y_{t-2}] = \\mu$.\n", " \n",
" $$\\begin{align}\n", " E[y_t] &= 10 + 0.3 E[y_{t-1}] - 0.2 E[y_{t-2}] + 0 + 0\\\\\n", " \\mu &= 10 + 0.3 \\mu - 0.2 \\mu\\\\\n", " \\mu &= \\frac{10}{0.9} = \\frac{100}{9}\n", " \\end{align}$$\n", " \n",
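" A quick cross-check of the invertibility roots and the unconditional mean, as a small base-Julia sketch (the names `disc`, `lam` and `mu` are illustrative only):\n", " \n",
" ```julia\n",
" # Invertibility check: roots of the MA characteristic equation lambda^2 + 0.5*lambda + 0.25 = 0\n",
" disc = complex(0.5^2 - 4 * 0.25)\n",
" lam = ((-0.5 + sqrt(disc)) / 2, (-0.5 - sqrt(disc)) / 2)\n",
" abs.(lam)                     # both moduli equal 0.5 < 1, so the process is invertible\n",
" mu = 10 / (1 - 0.3 + 0.2)     # unconditional mean, 100/9 ≈ 11.11\n",
" ```\n", " \n",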
"4. Compute the autocovariances $\\gamma(\\tau)$ for $\\tau = 0, 1, \\dots, 4$.\n", "\n",
" To compute the autocovariances, work with the demeaned process (so the constant drops out), multiply by $y_{t-\\tau}$ and take expectations.\n", " \n",
" $$\\begin{align}\n", " E[y_t y_{t-\\tau}] &= E[y_{t-\\tau}(0.3 y_{t-1} - 0.2 y_{t-2} + \\epsilon_t + 0.5 \\epsilon_{t-1} + 0.25 \\epsilon_{t-2})]\\\\\n", " &= 0.3 E[y_{t-\\tau} y_{t-1}] - 0.2 E[y_{t-\\tau} y_{t-2}] + E[y_{t-\\tau} \\epsilon_t] + 0.5 E[y_{t-\\tau} \\epsilon_{t-1}] + 0.25 E[y_{t-\\tau} \\epsilon_{t-2}]\n", " \\end{align}$$\n", " \n",
" For the covariances of $y_{t-\\tau}$ and $\\epsilon_i$, the Wold representation is used. Rewriting the process with the lag operator yields\n", " \n",
" $$\\begin{align}\n", " y_t &= \\frac{\\delta}{1 - 0.3 + 0.2} + \\frac{1 + 0.5L + 0.25 L^2}{1 - 0.3 L + 0.2 L^2} \\epsilon_t\n", " \\end{align}$$\n", " \n",
" To match the Wold representation, the following equation has to hold.\n", " \n",
" $$\\begin{align}\n", " 1 + 0.5 L + 0.25 L^2 &= (1 - 0.3 L + 0.2 L^2) \\sum^\\infty_{i=0} L^i \\psi_i\\\\\n", " &= \\psi_0 &&+ \\psi_1 L &&+ \\psi_2 L^2 &&+ \\psi_3 L^3 &&+ \\dots\\\\\n", " & &&- 0.3 \\psi_0 L &&- 0.3 \\psi_1 L^2 &&- 0.3 \\psi_2 L^3 &&- \\dots\\\\\n", " & && &&+ 0.2 \\psi_0 L^2 &&+ 0.2 \\psi_1 L^3 &&+ 0.2 \\psi_2 L^4 &&+ \\dots\\\\\n", " \\end{align}$$\n", " \n",
" Matching by $L^j$ yields\n", "\n",
" $$\\begin{align}\n", " L = 0:&& \\psi_0 &= 1\\\\\n", " L = 1:&& \\psi_1 &= 0.5 + 0.3 \\psi_0 = 0.8\\\\\n", " L = 2:&& \\psi_2 &= 0.25 + 0.3 \\psi_1 - 0.2 \\psi_0 = 0.29\\\\\n", " L = j:&& \\psi_j &= 0.3 \\psi_{j-1} - 0.2 \\psi_{j-2}\n", " \\end{align}$$\n", " \n",
" Finally, the Wold representation is\n", " \n",
" $$\n", " y_t = \\frac{100}{9} + (1 + 0.8 L + 0.29 L^2 + \\dots) \\epsilon_t\n", " $$\n", " \n",
" With the $ARMA(2, 2)$ represented as an $MA(\\infty)$, we can proceed with the autocovariances.\n", "\n",
" $$\\begin{align}\n", " \\tau = 0:&& \\gamma(0) &= 0.3 \\gamma(1) - 0.2 \\gamma(2) + \\sigma^2 + 0.5 \\cdot 0.8 \\sigma^2 + 0.25 \\cdot 0.29 \\sigma^2\\\\\n", " \\tau = 1:&& \\gamma(1) &= 0.3 \\gamma(0) - 0.2 \\gamma(1) + 0 + 0.5 \\sigma^2 + 0.25 \\cdot 0.8 \\sigma^2\\\\\n", " \\tau = 2:&& \\gamma(2) &= 0.3 \\gamma(1) - 0.2 \\gamma(0) + 0 + 0 + 0.25 \\sigma^2\\\\\n", " \\tau = 3:&& \\gamma(3) &= 0.3 \\gamma(2) - 0.2 \\gamma(1)\\\\\n", " \\tau = 4:&& \\gamma(4) &= 0.3 \\gamma(3) - 0.2 \\gamma(2)\\\\\n", " \\tau = j:&& \\gamma(j) &= 0.3 \\gamma(j-1) - 0.2 \\gamma(j-2)\n", " \\end{align}$$\n", " \n",
" From the second equation, we get $1.2 \\gamma(1) = 0.3 \\gamma(0) + 0.7 \\sigma^2$, i.e. $\\gamma(1) = \\frac{1}{4} \\gamma(0) + \\frac{7}{12} \\sigma^2$. Inserting this expression into the third equation gives\n", " \n",
" $$\\begin{align}\n", " \\gamma(2) &= 0.3 \\gamma(1) - 0.2 \\gamma(0) + 0.25 \\sigma^2\\\\\n", " &= 0.3 \\left(\\frac{1}{4} \\gamma(0) + \\frac{7}{12} \\sigma^2\\right) - 0.2 \\gamma(0) + 0.25 \\sigma^2\\\\\n", " &= -0.125 \\gamma(0) + 0.425 \\sigma^2\n", " \\end{align}$$\n", " \n",
" Substituting the expressions for $\\gamma(1)$ and $\\gamma(2)$ into the first equation yields\n", " \n",
" $$\\begin{align}\n", " \\gamma(0) &= 0.3 \\left(\\frac{1}{4} \\gamma(0) + \\frac{7}{12} \\sigma^2\\right) - 0.2 \\left(-0.125 \\gamma(0) + 0.425 \\sigma^2\\right) + 1.4725 \\sigma^2\\\\\n", " &= 0.1 \\gamma(0) + 1.5625 \\sigma^2\n", " \\end{align}$$\n", " \n",
" so that $\\gamma(0) = \\frac{1.5625}{0.9} \\sigma^2 \\approx 1.7361 \\sigma^2$. With $\\sigma^2 = 1$ this gives $\\gamma(0) \\approx 1.7361$, $\\gamma(1) \\approx 1.0174$, $\\gamma(2) \\approx 0.2080$, $\\gamma(3) = 0.3 \\gamma(2) - 0.2 \\gamma(1) \\approx -0.1411$ and $\\gamma(4) = 0.3 \\gamma(3) - 0.2 \\gamma(2) \\approx -0.0839$.\n", " \n",
"5. Compute the coefficients of $\\epsilon_t, \\epsilon_{t-1}, \\dots, \\epsilon_{t-4}$ in the $MA(\\infty)$ representation of $y_t$.\n", "\n",
" The first three coefficients were already computed in subtask 4: $\\psi_0 = 1$, $\\psi_1 = 0.8$ and $\\psi_2 = 0.29$. The recursion $\\psi_j = 0.3 \\psi_{j-1} - 0.2 \\psi_{j-2}$ gives $\\psi_3 = 0.3 \\cdot 0.29 - 0.2 \\cdot 0.8 = -0.073$ and $\\psi_4 = 0.3 \\cdot (-0.073) - 0.2 \\cdot 0.29 = -0.0799$.\n", "\n",
"[1]: https://stats.stackexchange.com/questions/118019/a-proof-for-the-stationarity-of-an-ar2/145652#145652" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 3\n", "\n", "Consider the following $MA(3)$ process, where $\\epsilon_t \\overset{i.i.d.}{\\sim} \\mathcal{N}(0, 1)$:\n", "\n", "$$\n", "y_t = 10 + \\epsilon_t + 0.3 \\epsilon_{t-1} + 0.2 \\epsilon_{t-2} + 0.4 \\epsilon_{t-3}\n", "$$\n", "\n", "Use Julia to simulate the model and to compute the ACFs and PACFs at orders 1 to 10."
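, "\n", "As a benchmark for the simulated estimates below, the theoretical ACF follows directly from the $MA$ coefficients via $\\gamma(k) = \\sigma^2 \\sum_j \\theta_j \\theta_{j+k}$ (with $\\theta_0 = 1$). A minimal sketch in base Julia (the names `theta`, `acov` and `rho` are illustrative only):\n", "\n", "```julia\n", "# Theoretical autocovariances of the MA(3) with theta = (1, 0.3, 0.2, 0.4) and sigma^2 = 1\n", "theta = [1.0, 0.3, 0.2, 0.4]\n", "acov(k) = k < length(theta) ? sum(theta[1:end-k] .* theta[1+k:end]) : 0.0\n", "rho = [acov(k) / acov(0) for k = 1:10]   # theoretical ACF; compare with autocor(y_sim, 1:10)\n", "```"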
] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "data": { "text/html": [ " \n" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "using Distributions, Random, Plots, StatsBase\n", "\n", "Random.seed!(123);" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "dist = Normal()\n", "\n", "obs = 1000\n", "lags = 3\n", "\n", "err = rand(dist, obs + lags)\n", "y_sim = zeros(obs, 1);" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "for i = 1:obs\n", " y_sim[i] = 10 + err[i] + 0.3 * err[i+1] + 0.2 * err[i+2] + 0.4 * err[i+3]\n", "end" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "image/svg+xml": [ "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "0\n", "\n", "\n", "250\n", "\n", "\n", "500\n", "\n", "\n", "750\n", "\n", "\n", "1000\n", "\n", "\n", "7\n", "\n", "\n", "8\n", "\n", "\n", "9\n", "\n", "\n", "10\n", "\n", "\n", "11\n", "\n", "\n", "12\n", "\n", "\n", "13\n", "\n", "\n", "\n", "\n", "\n", "\n", "y1\n", "\n", "\n" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "plot(y_sim)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "image/svg+xml": [ "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "0.0\n", "\n", "\n", "2.5\n", "\n", "\n", "5.0\n", "\n", "\n", "7.5\n", "\n", "\n", "10.0\n", "\n", "\n", "0.0\n", "\n", "\n", "0.1\n", "\n", "\n", "0.2\n", "\n", "\n", "0.3\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "y1\n", "\n", "\n" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "autocor_vec = autocor(y_sim, 1:10)\n", "bar(autocor_vec)" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "image/svg+xml": [ "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "0.0\n", "\n", "\n", "2.5\n", "\n", "\n", "5.0\n", "\n", "\n", "7.5\n", "\n", "\n", "10.0\n", "\n", "\n", "-0.2\n", "\n", "\n", "-0.1\n", "\n", "\n", "0.0\n", "\n", "\n", "0.1\n", "\n", "\n", "0.2\n", "\n", "\n", "0.3\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "y1\n", "\n", "\n" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pacf_vec = pacf(y_sim, 1:10)\n", "bar(pacf_vec)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 4\n", "\n", "We would like to model the cyclical component of real GDP that we extracted in PS 1.\n", "\n", "1. 
Plot the ACFs and PACFs of the cyclical component. With which model would you start your modeling attempt?\n", "2. Let's assume we settle on an $AR(p)$ model. Plot the ACF for the residual series of your model of choice. Is there still autocorrelation present? Run a Ljung-Box Test to verify your result." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "using XLSX, DataFrames" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "gdp = float.(XLSX.readdata(\"problem_set_1_data/us_real_gdp.xlsx\", \"FRED Graph\", \"B12:B295\"));" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "function hodrick_prescott_filter(y, lambda)\n", " T = size(y, 1)\n", " matrix = zeros(T, T)\n", " \n", " matrix[1, 1:3] = [1 + lambda, -2 * lambda, lambda]\n", " matrix[2, 1:4] = [-2 * lambda, 1 + 5 * lambda, -4 * lambda, lambda]\n", " \n", " for i = 3 : T - 2\n", " matrix[i, i-2 : i+2] = [lambda, -4*lambda, 1 + 6 * lambda, -4 * lambda, lambda]\n", " end\n", " \n", " matrix[T-1, T-3:T] = [lambda, -4 * lambda, 1 + 5 * lambda, -2 * lambda]\n", " matrix[T, T-2:T] = [lambda, -2 * lambda, 1 + lambda]\n", " \n", " trend = matrix \\ y\n", " cycle = y - trend\n", " \n", " return trend, cycle\n", "end;" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "trend, cycle = hodrick_prescott_filter(log.(gdp), 1600);" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "image/svg+xml": [ "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "0\n", "\n", "\n", "5\n", "\n", "\n", "10\n", "\n", "\n", "15\n", "\n", "\n", "20\n", "\n", "\n", "-0.25\n", "\n", "\n", "0.00\n", "\n", "\n", "0.25\n", "\n", "\n", "0.50\n", "\n", "\n", "0.75\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "y1\n", "\n", "\n" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "cycle_acf = autocor(cycle, 1:20)\n", "bar(cycle_acf)" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "image/svg+xml": [ "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "0\n", "\n", "\n", "5\n", "\n", "\n", "10\n", "\n", "\n", "15\n", "\n", "\n", "20\n", "\n", "\n", "-0.25\n", "\n", "\n", "0.00\n", "\n", "\n", "0.25\n", "\n", "\n", "0.50\n", "\n", "\n", "0.75\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "y1\n", "\n", "\n" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "cycle_pacf = pacf(cycle, 1:20)\n", 
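"# The PACF at lag k measures the correlation between the series and its k-th lag after\n", "# controlling for the intermediate lags; a sharp cut-off after lag p suggests an AR(p) model.\n",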
"bar(cycle_pacf)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As the ACFs do not and the PACFs do break-off, we would assume that the underlying process is an $AR$ process. From the PACF plot we can assume that the process can be modeled with one lagged term." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the second answer, we assume that an $AR(1)$ model is sufficient to model the process. First, we need to estimate the model to obtain the fitted values" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "using GLM, HypothesisTests" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "cycle_fitted = lm([ones(size(cycle)[1] - 1) cycle[1:end-1]], cycle[2:end])\n", "resid = cycle[2:end] - predict(cycle_fitted);" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "-1894.6676911705147" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "aic(cycle_fitted)" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "-1883.731350477585" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "bic(cycle_fitted)" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "image/svg+xml": [ "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "0\n", "\n", "\n", "5\n", "\n", "\n", "10\n", "\n", "\n", "15\n", "\n", "\n", "20\n", "\n", "\n", "-0.2\n", "\n", "\n", "-0.1\n", "\n", "\n", "0.0\n", "\n", "\n", "0.1\n", "\n", "\n", "0.2\n", "\n", "\n", "0.3\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "y1\n", "\n", "\n" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "resid_acf = autocor(resid, 1:20)\n", "bar(resid_acf)" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "image/svg+xml": [ "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "0\n", "\n", "\n", "5\n", "\n", "\n", "10\n", "\n", "\n", "15\n", "\n", "\n", "20\n", "\n", "\n", "-0.2\n", "\n", "\n", "-0.1\n", "\n", "\n", "0.0\n", "\n", "\n", "0.1\n", "\n", "\n", "0.2\n", "\n", "\n", "0.3\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "y1\n", "\n", "\n" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "resid_pacf = pacf(resid, 1:20)\n", "bar(resid_pacf)" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { 
"data": { "text/plain": [ "Ljung-Box autocorrelation test\n", "------------------------------\n", "Population details:\n", " parameter of interest: autocorrelations up to lag k\n", " value under h_0: all zero\n", " point estimate: NaN\n", "\n", "Test summary:\n", " outcome with 95% confidence: reject h_0\n", " one-sided p-value: <1e-12\n", "\n", "Details:\n", " number of observations: 283\n", " number of lags: 6\n", " degrees of freedom correction: 2\n", " Q statistic: 66.53495169569842\n" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "LjungBoxTest(resid, 6, 2)" ] } ], "metadata": { "kernelspec": { "display_name": "Julia 1.0.0", "language": "julia", "name": "julia-1.0" }, "language_info": { "file_extension": ".jl", "mimetype": "application/julia", "name": "julia", "version": "1.0.0" } }, "nbformat": 4, "nbformat_minor": 2 }