Our basic model was:
\(x_t=\alpha + \epsilon_t\)
We add an autoregressive component by including a lagged observation.
\(x_t=\alpha + \beta x_{t-1}+\epsilon_t\)
An AR(\(p\)) model includes \(p\) lagged values of the dependent variable.
\(x_t=\alpha + \sum_{i=1}^p\beta_ix_{t-i}+\epsilon_t\)
A shock raises the current value, which feeds into every future value, so its effect persists indefinitely but decays over time (for a stationary process).
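A minimal numpy sketch of this decay in an AR(\(1\)), with illustrative parameter values:

```python
import numpy as np

# AR(1) impulse response: start at zero, apply a single unit shock at t = 0,
# then iterate y_t = alpha + beta * y_{t-1} with no further noise.
alpha, beta = 0.0, 0.8  # illustrative values; any |beta| < 1 gives decay
T = 20
y = np.zeros(T)
y[0] = 1.0  # the shock
for t in range(1, T):
    y[t] = alpha + beta * y[t - 1]

print(y[:5])  # 1.0, 0.8, 0.64, 0.512, ... each step is beta times the last
```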
The Dickey-Fuller test checks whether the process has a unit root.
The AR(\(1\)) model is:
\(y_t=\alpha + \beta y_{t-1}+\epsilon_t\)
We can rewrite this as:
\(\Delta y_t=\alpha + (\beta -1)y_{t-1}+\epsilon_t\)
We test whether \((\beta -1)=0\).
If \(\beta=1\) we have a random walk, and the process is non-stationary.
If \(|\beta|<1\) then we have a stationary process.
If our model has no intercept it is:
\(y_t=\beta y_{t-1}+\epsilon_t\)
\(\Delta y_t=(\beta -1)y_{t-1}+\epsilon_t\)
If our model has a time trend it is:
\(y_t=\alpha + \beta y_{t-1}+\gamma t + \epsilon_t\)
\(\Delta y_t=\alpha + (\beta -1)y_{t-1}+\gamma t+\epsilon_t\)
For the augmented Dickey-Fuller test we include more lagged terms.
\(y_t=\alpha + \beta t + \sum_{i=1}^p \theta_i y_{t-i}+\epsilon_t\)
If there is no unit root, the model can be estimated with ordinary OLS.
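A sketch of running the test on simulated data with statsmodels' adfuller (the data and the choice of tool are illustrative):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
eps = rng.normal(size=500)

# Stationary AR(1) with beta = 0.5 versus a random walk (beta = 1, unit root).
stationary = np.zeros(500)
for t in range(1, 500):
    stationary[t] = 0.5 * stationary[t - 1] + eps[t]
random_walk = np.cumsum(eps)

for name, series in [("AR(1), beta=0.5", stationary), ("random walk", random_walk)]:
    stat, pvalue, *_ = adfuller(series)  # regression='c' by default; 'ct' adds a time trend
    print(name, round(stat, 2), round(pvalue, 3))
# Expect a small p-value for the stationary series (reject the unit root)
# and a large one for the random walk (fail to reject).
```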
The standard AR(\(1\)) model is:
\(y_t=\alpha + \beta y_{t-1}+\epsilon_t\)
The variance is:
\(Var(y_t)=Var(\alpha + \beta y_{t-1}+\epsilon_t)=\beta^2Var(y_{t-1})+Var(\epsilon_t)\)
Assuming stationarity, \(Var(y_t)=Var(y_{t-1})\), so:
\(Var(y_t)(1-\beta^2)=Var(\epsilon_t)\)
Assuming the errors are IID with variance \(\sigma^2\), we have:
\(Var(y_t)=\dfrac{\sigma^2}{1-\beta^2}\)
This unconditional variance is constant and does not depend on past observations, which may not be desirable.
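A quick simulation check of the formula, with illustrative parameters:

```python
import numpy as np

# Simulate a long AR(1) with IID normal errors and compare the sample variance
# to sigma^2 / (1 - beta^2).
alpha, beta, sigma = 0.0, 0.7, 1.0
n = 100_000
rng = np.random.default_rng(1)
eps = rng.normal(scale=sigma, size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = alpha + beta * y[t - 1] + eps[t]

print("sample variance:     ", y.var())
print("theoretical variance:", sigma**2 / (1 - beta**2))  # 1 / 0.51, roughly 1.96
```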
Consider the alternative formulation:
\(y_t=\epsilon_t f(y_{t-1})\)
This allows for conditional heteroskedasticity.
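A sketch with one possible choice of \(f\) (an ARCH(1)-style form; the specific function is an assumption, not something fixed by the model above):

```python
import numpy as np

# One illustrative choice: f(y) = sqrt(omega + a * y**2), so the conditional
# variance of y_t given y_{t-1} is omega + a * y_{t-1}**2 (an ARCH(1)-style form).
omega, a = 0.2, 0.5
n = 1_000
rng = np.random.default_rng(2)
y = np.zeros(n)
for t in range(1, n):
    y[t] = rng.normal() * np.sqrt(omega + a * y[t - 1] ** 2)  # y_t = eps_t * f(y_{t-1})

# A large |y_{t-1}| makes the next draw more volatile, so the variance
# now depends on the history of the series.
```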
We add previous error terms as regressors.
An MA(\(q\)) model includes \(q\) previous error terms.
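Written out (using \(\theta_i\) for the moving-average coefficients), this is:
\(x_t=\alpha + \epsilon_t + \sum_{i=1}^q\theta_i\epsilon_{t-i}\)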
Unlike AR models, the effects of any shocks wear off after \(q\) terms.
This is harder to fit with OLS because the error terms themselves are not observed.
An ARMA model includes both AR and MA terms.
It is estimated using the Box-Jenkins approach.
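A sketch of the estimation step using statsmodels' ARIMA class with \(d=0\) (the data and orders are illustrative; the full Box-Jenkins cycle also covers order identification and diagnostic checks):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Simulate an ARMA(1,1)-style series and fit an ARMA(1,1): order = (p, d, q) with d = 0.
rng = np.random.default_rng(3)
eps = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.6 * y[t - 1] + eps[t] + 0.3 * eps[t - 1]

result = ARIMA(y, order=(1, 0, 1)).fit()
print(result.summary())  # estimated constant, AR and MA coefficients
```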
An ARIMA model also uses differencing to remove non-stationarity.
It is also estimated with the Box-Jenkins approach.
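Continuing the sketch above, the differencing is handled by the middle entry of order (again an illustrative choice):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# A random walk is non-stationary; order = (1, 1, 1) differences once (d = 1)
# and fits an ARMA(1, 1) to the differenced series.
rng = np.random.default_rng(4)
random_walk = np.cumsum(rng.normal(size=500))

result = ARIMA(random_walk, order=(1, 1, 1)).fit()
print(result.summary())
```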