Let $X_t = Z_t - 3$; then $(1-0.5B)X_t = a_t$, an AR(1) model. Because the root of $1-0.5B=0$, namely $B=2$, lies outside the unit circle, the series is stationary. Rewrite it as an MA($\infty$):
$(1-0.5B)(1+\psi_1 B+\psi_2 B^2+\cdots)=1$
so $\psi_1=1/2$, $\psi_2=1/4$, and in general $\psi_k=2^{-k}$.
Then $\hat X_t(l)=\sum_{i=0}^{\infty}2^{-(i+l)}a_{t-i}$ and $\hat Z_t(l)=3+\hat X_t(l)$.
$e_t(l)=\sum_{i=0}^{l-1}\psi_i a_{t+l-i}$, so $\mathrm{Var}(e_t(l))=\sum_{i=0}^{l-1}\psi_i^2\,\mathrm{Var}(a_{t+l-i})=\sigma^2\sum_{i=0}^{l-1}\psi_i^2$.
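As a quick numerical check (a minimal sketch; the values of `phi` and `sigma2` are illustrative, not from the notes):

```python
import numpy as np

# Sketch: MA(infinity) weights and l-step forecast-error variance for the
# AR(1) model (1 - 0.5B) X_t = a_t.
phi, sigma2 = 0.5, 1.0

# psi_k = phi^k, from equating coefficients in (1 - phi B)(1 + psi_1 B + ...) = 1
psi = phi ** np.arange(10)

def forecast_error_var(l, psi, sigma2):
    """Var(e_t(l)) = sigma^2 * sum_{i=0}^{l-1} psi_i^2."""
    return sigma2 * np.sum(psi[:l] ** 2)

print(psi[:4])                             # [1, 1/2, 1/4, 1/8]
print(forecast_error_var(2, psi, sigma2))  # 1 + 1/4 = 1.25
```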
$(1-B+0.25B^2)(Z_t-1)=a_t$
Let $X_t=Z_t-1$. Since $1-B+0.25B^2=(1-0.5B)^2$, the double root $B=2$ lies outside the unit circle, so this is a stationary AR(2) model.
$X_t=X_{t-1}-\tfrac14 X_{t-2}+a_t$, and $\hat Z_t(l)=1+\hat X_t(l)$. Then
$\hat X_t(1)=X_t-\tfrac14 X_{t-1}$
$\hat X_t(2)=\hat X_t(1)-\tfrac14 X_t=\tfrac34 X_t-\tfrac14 X_{t-1}$
$\hat X_t(3)=\hat X_t(2)-\tfrac14\hat X_t(1)=\tfrac12 X_t-\tfrac{3}{16}X_{t-1}$
$\hat X_t(4)=\hat X_t(3)-\tfrac14\hat X_t(2)=\tfrac{5}{16}X_t-\tfrac18 X_{t-1}$
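The forecast coefficients on $(X_t, X_{t-1})$ can be checked numerically. This is a minimal sketch that iterates the recursion $\hat X_t(l)=\hat X_t(l-1)-\tfrac14\hat X_t(l-2)$ implied by $(1-0.5B)^2X_t=a_t$ (the helper name `forecast_coeffs` is illustrative):

```python
import numpy as np

# Sketch: l-step forecast of the AR(2) model (1 - 0.5B)^2 X_t = a_t,
# i.e. X_t = X_{t-1} - (1/4) X_{t-2} + a_t, expressed as coefficients on
# (X_t, X_{t-1}). The recursion c(l) = c(l-1) - (1/4) c(l-2) starts from
# c(0) = (1, 0) for X_t itself and c(-1) = (0, 1) for X_{t-1}.
def forecast_coeffs(l):
    prev2, prev1 = np.array([0.0, 1.0]), np.array([1.0, 0.0])  # c(-1), c(0)
    for _ in range(l):
        prev2, prev1 = prev1, prev1 - 0.25 * prev2
    return prev1

for l in range(1, 5):
    print(l, forecast_coeffs(l))
```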
Consider a dynamic regression model $y_t=\sum_{i=0}^{k}v_ix_{t-i}+n_t$, where $x_t$ and $n_t$ both follow stationary and invertible ARMA models, $\phi_x(B)x_t=\theta_x(B)a_t$ and $\phi_n(B)n_t=\theta_n(B)e_t$, with $\mathrm{cov}(e_t,a_s)=0$ for all $t,s$.
State the prewhitening process for identifying the value of $k$.
Apply the filter $\phi_x(B)\theta_x^{-1}(B)$ to both sides of the regression, so that $x_t$ is reduced to the white noise $a_t$: $\tilde y_t\equiv\phi_x(B)\theta_x^{-1}(B)y_t=\sum_{i=0}^{k}v_ia_{t-i}+\tilde n_t$. Since $\mathrm{cov}(\tilde y_t,a_{t-k})=v_k\sigma_a^2$, we can test the statistical significance of $v_k$ by examining the statistical significance of $\mathrm{corr}(\tilde y_t,a_{t-k})$.
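A simulation sketch of the prewhitening idea, under assumed values: $x_t$ is AR(1) with known $\phi_x=0.7$ and the true transfer weights are $v=(2,0,1.5)$ (all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Illustrative DGP: x_t is AR(1) driven by white noise a_t
phi_x = 0.7
a = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_x * x[t - 1] + a[t]

v = [2.0, 0.0, 1.5]                    # assumed true weights at lags 0, 1, 2
noise = 0.5 * rng.standard_normal(n)   # n_t, independent of a_t
y = sum(vi * np.roll(x, i) for i, vi in enumerate(v)) + noise

# Prewhiten: the filter (1 - phi_x B) reduces x_t to a_t; apply it to y_t too
y_pw = y[1:] - phi_x * y[:-1]

# v_k is recovered from cov(prewhitened y_t, a_{t-k}) / var(a_t)
vk_hats = []
for k in range(4):
    vk_hats.append(np.cov(y_pw[k:], a[1:n - k])[0, 1] / np.var(a))
print(np.round(vk_hats, 2))
```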
State the steps of using the Box-Tiao transformation to estimate $v_j$.
The steps of the estimation procedure:
Run the OLS regression $y_t=\sum_{j=0}^{s}v_jx_{t-j}+e_t$ and collect the residuals $\{\hat e_t\}$.
Identify an ARMA model $\phi_e(B)\hat e_t=\theta_e(B)\varepsilon_t$ for $\hat e_t$.
Apply the Box-Tiao transformation, i.e. filter both $y_t$ and $x_t$ by $\phi_e(B)\theta_e^{-1}(B)$.
Run the regression on the transformed equation.
Check the autocorrelation of the regression residuals; they should now be white noise.
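A minimal sketch of the Box-Tiao idea on simulated data, assuming for simplicity that step 2 identified the noise as AR(1) with known coefficient `rho` (all parameter values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
rho, v0, v1 = 0.8, 1.0, 0.5   # illustrative values, not from the notes

# x_t: white-noise input; n_t: AR(1) noise, independent of x
x = rng.standard_normal(n)
e = rng.standard_normal(n)
nt = np.zeros(n)
for t in range(1, n):
    nt[t] = rho * nt[t - 1] + e[t]
y = v0 * x + v1 * np.r_[0.0, x[:-1]] + nt

# Box-Tiao transformation with the (here, known) noise filter (1 - rho B):
# after filtering, the regression error is white noise, so OLS is efficient
y_f = y[1:] - rho * y[:-1]
x_f = x[1:] - rho * x[:-1]
X = np.column_stack([x_f, np.r_[0.0, x_f[:-1]]])
beta, *_ = np.linalg.lstsq(X, y_f, rcond=None)
print(beta)   # close to (v0, v1)
```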
Find the $l$-step-ahead optimal forecast of $y_{t+l}$, $\hat y_t(l)$, in terms of $a_t$ and $e_t$.
Since $v(B)=\sum_{i=0}^{k}v_iB^i$ has finitely many terms, it can be written as $v(B)=\delta(B)/\omega(B)$; then
$y_t=\dfrac{\delta(B)\theta_x(B)}{\omega(B)\phi_x(B)}a_t+\dfrac{\theta_n(B)}{\phi_n(B)}e_t$
$y_t=u(B)a_t+\psi(B)e_t$
where all the operators are treated as polynomials in $B$ of finite order, with maximum order $K$. Therefore,
$y_{t+l}=\sum_{i=0}^{K}\left(u_ia_{t+l-i}+\psi_ie_{t+l-i}\right)$
$\hat y_t(l)=\sum_{i=0}^{K}\left(u_{i+l}^{*}a_{t-i}+\psi_{i+l}^{*}e_{t-i}\right)$
Derive the MSE
Consider $y_{t+l}-\hat y_t(l)$, which equals
$\sum_{i=0}^{l-1}\left(u_ia_{t+l-i}+\psi_ie_{t+l-i}\right)$ (i): shocks after time $t$
$-\sum_{i=0}^{K}\left(u_{i+l}^{*}-u_{i+l}\right)a_{t-i}$ (ii)
$-\sum_{i=0}^{K}\left(\psi_{i+l}^{*}-\psi_{i+l}\right)e_{t-i}$ (iii): shocks up to time $t$
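The remaining algebra can be sketched as follows, assuming $a_t$ and $e_t$ are mutually uncorrelated white noise, so that the three groups of terms are uncorrelated:

```latex
E\left[y_{t+l}-\hat y_t(l)\right]^2
  = \sigma_a^2\sum_{i=0}^{l-1}u_i^2
  + \sigma_e^2\sum_{i=0}^{l-1}\psi_i^2
  + \sigma_a^2\sum_{i=0}^{K}\left(u_{i+l}^{*}-u_{i+l}\right)^2
  + \sigma_e^2\sum_{i=0}^{K}\left(\psi_{i+l}^{*}-\psi_{i+l}\right)^2 .
```

This is minimized by taking $u_{i+l}^{*}=u_{i+l}$ and $\psi_{i+l}^{*}=\psi_{i+l}$, giving the minimum MSE $\sigma_a^2\sum_{i=0}^{l-1}u_i^2+\sigma_e^2\sum_{i=0}^{l-1}\psi_i^2$.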
Consider a VAR($p$) model of a 2-dimensional variable, i.e. $\mathbf{y}_t=[y_{1,t},y_{2,t}]^T$ with
$\mathbf{y}_t=\sum_{i=1}^{p}A_i\mathbf{y}_{t-i}+\mathbf{a}_t$, (1)
where each $A_i=\begin{pmatrix}\phi_{i,11}&\phi_{i,12}\\ \phi_{i,21}&\phi_{i,22}\end{pmatrix}$.
State how to check the stationarity.
All the roots of $\det(I_k-A_1B-\cdots-A_pB^p)=0$ must lie outside the unit circle; equivalently, writing the model in companion form $\xi_t=A\xi_{t-1}+v_t$, all eigenvalues of the companion matrix $A$ must have modulus $<1$.
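A sketch of the companion-form check, with illustrative coefficient matrices:

```python
import numpy as np

# Sketch: stationarity check for a 2-d VAR(2) via the companion form.
A1 = np.array([[0.5, 0.1],
               [0.2, 0.3]])
A2 = np.array([[0.1, 0.0],
               [0.0, 0.1]])
k = 2

# Companion matrix A = [[A1, A2], [I, 0]]; the VAR is stationary iff all
# eigenvalues of A have modulus < 1
A = np.block([[A1, A2],
              [np.eye(k), np.zeros((k, k))]])
moduli = np.abs(np.linalg.eigvals(A))
print(moduli.max() < 1)   # True for these coefficients
```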
Describe the methods to select the order for Equation (1)
Selection by an information criterion, for example AIC, BIC (SC), HQ, or FPE.
Using a likelihood ratio test (LRT) of VAR($p$) vs. VAR($p-1$), i.e. testing $H_0:A_p=0$.
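A sketch of order selection by one of the criteria (BIC), fitting VAR($p$) by OLS on data simulated from an illustrative VAR(1):

```python
import numpy as np

rng = np.random.default_rng(2)
k, T = 2, 5000

# Simulate a stationary VAR(1) with illustrative coefficients
A1 = np.array([[0.5, 0.1],
               [0.2, 0.3]])
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = A1 @ y[t - 1] + rng.standard_normal(k)

def var_bic(y, p):
    """BIC(p) = log det(Sigma_hat) + log(T_eff) * p * k^2 / T_eff."""
    T_eff = len(y) - p
    X = np.hstack([y[p - i - 1:len(y) - i - 1] for i in range(p)])
    B, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    resid = y[p:] - X @ B
    sigma = resid.T @ resid / T_eff
    return np.log(np.linalg.det(sigma)) + np.log(T_eff) * p * k * k / T_eff

bics = {p: var_bic(y, p) for p in range(1, 5)}
best_p = min(bics, key=bics.get)
print(best_p)
```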
State how to test that $X_{1t}$ Granger-causes $X_{2t}$ but not the other way around. Based on the same condition, express $X_{2t}$ as a TFN model of $X_{1t}$.
$X_{2,t}=\sum_{i=1}^{p}\left(\phi_{i,21}X_{1,t-i}+\phi_{i,22}X_{2,t-i}\right)+a_{2,t}$
$X_{2,t}-\sum_{i=1}^{p}\phi_{i,22}X_{2,t-i}=\sum_{i=1}^{p}\phi_{i,21}X_{1,t-i}+a_{2,t}$
$\Phi_{22}(B)X_{2,t}=\Phi_{21}(B)X_{1,t}+a_{2,t}$
$X_{2,t}=\dfrac{\Phi_{21}(B)}{\Phi_{22}(B)}X_{1,t}+\dfrac{1}{\Phi_{22}(B)}a_{2,t}$
Let $v(B)=\dfrac{\Phi_{21}(B)}{\Phi_{22}(B)}$ and $N_t=\dfrac{1}{\Phi_{22}(B)}a_{2,t}$; the TFN model is
$X_{2,t}=v(B)X_{1,t}+N_t$
To see whether $X_{1,t}$ Granger-causes $X_{2,t}$, test $H_0:\phi_{i,21}=0$ for all $i$; if $X_{1,t}$ does not Granger-cause $X_{2,t}$, then all $\phi_{i,21}=0$.
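A sketch of the regression-based Granger test on simulated data where, by construction, $X_1$ drives $X_2$ but not conversely (coefficients illustrative; the helper `granger_F` compares restricted and unrestricted OLS fits):

```python
import numpy as np

rng = np.random.default_rng(3)
T, p = 4000, 2

# Illustrative DGP: X1 -> X2 with coefficient 0.8, no feedback
x1 = np.zeros(T)
x2 = np.zeros(T)
for t in range(1, T):
    x1[t] = 0.5 * x1[t - 1] + rng.standard_normal()
    x2[t] = 0.3 * x2[t - 1] + 0.8 * x1[t - 1] + rng.standard_normal()

def granger_F(y, x, p):
    """F-statistic of H0: the p lags of x do not enter the regression of y."""
    n = len(y)
    Y = y[p:]
    own = np.column_stack([y[p - i - 1:n - i - 1] for i in range(p)])
    cross = np.column_stack([x[p - i - 1:n - i - 1] for i in range(p)])
    X_r = np.column_stack([np.ones(n - p), own])          # restricted
    X_u = np.column_stack([np.ones(n - p), own, cross])   # unrestricted
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    df = (n - p) - X_u.shape[1]
    return (rss(X_r) - rss(X_u)) / p / (rss(X_u) / df)

print(granger_F(x2, x1, p))   # large: X1 Granger-causes X2
print(granger_F(x1, x2, p))   # small: X2 does not Granger-cause X1
```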
Describe how to test Granger causality using the univariate approach. Suppose $\Phi_1(B)X_{1,t}=\Theta_1(B)a_{1,t}$ and $\Phi_2(B)X_{2,t}=\Theta_2(B)a_{2,t}$.
Using the portmanteau test on the cross-correlations of the two residual series,
$\rho_{a_1a_2}(k)=\dfrac{E(a_{1,t}a_{2,t+k})}{\sigma_1\sigma_2}$
If $\rho_{a_1a_2}(k)=0$ for all $k<0$, then $X_2$ does not Granger-cause $X_1$.
Suppose $X_{1,t},X_{2,t}$ are not weakly stationary. How do you model the joint dynamics of $\{X_{1,t},X_{2,t}\}$ using cointegration?
Difference the two time series individually until each is stationary.
Use a VAR($p$) to fit the two stationary processes and test for Granger causality.
If $(X_{1,t},X_{2,t})$ are cointegrated, use an error correction model (ECM) to include lagged disequilibrium terms as explanatory variables.
Discuss why different models must be chosen depending on the integration/cointegration condition.
If cointegration exists and a VAR($p$) is fitted directly to the differenced stationary processes, the model is misspecified, because differencing throws away the long-run equilibrium (error correction) term.
Discuss the Engle-Granger approach for modeling cointegrated X1t and X2t
Test whether $X_t,Y_t$ are I(1) using a unit root test.
If both are I(1), regress one against the other using least squares.
Run a unit root test on the regression residuals; if the residuals are stationary, the two series are cointegrated.
The regression line indicates the long-run equilibrium relationship between the two variables, and the disequilibrium term is simply the regression residual.
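A sketch of the Engle-Granger steps on simulated cointegrated data (the DGP is illustrative; a lag-one residual autoregression stands in for a proper ADF test, which would require tabulated critical values):

```python
import numpy as np

rng = np.random.default_rng(4)
T = 5000

# Illustrative cointegrated pair: x is I(1), y = 2x + stationary error
x = np.cumsum(rng.standard_normal(T))
y = 2.0 * x + rng.standard_normal(T)

# Step 2: cointegrating regression by least squares
X = np.column_stack([np.ones(T), x])
(alpha, beta), *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - alpha - beta * x   # disequilibrium term

# Step 3 (informal): an AR(1) coefficient of the residuals well below 1
# suggests stationarity, i.e. cointegration
rho = np.linalg.lstsq(resid[:-1, None], resid[1:], rcond=None)[0][0]
print(round(beta, 2), rho < 0.9)
```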
The unconditional distribution is $Y_t\sim N\!\left(\dfrac{\mu}{1-\phi_1-\phi_2},\ \gamma_0\right)$, where for an AR(2) the stationary variance is $\gamma_0=\dfrac{(1-\phi_2)\sigma^2}{(1+\phi_2)\left[(1-\phi_2)^2-\phi_1^2\right]}$.
Simulate $y_0,y_1$ by drawing random numbers from this unconditional distribution.
Recursively simulate $y_2,y_3,\ldots$ from the model.
Pros: easy to compute. Cons: $y_0$ and $y_1$ are drawn independently, so the correlation between them is ignored.
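A sketch of this simulation scheme for an illustrative AR(2) $Y_t=\mu+\phi_1Y_{t-1}+\phi_2Y_{t-2}+e_t$; the stationary-variance formula used below is the standard AR(2) result:

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative parameters of Y_t = mu + phi1*Y_{t-1} + phi2*Y_{t-2} + e_t
mu, phi1, phi2, sigma2 = 1.0, 0.5, 0.2, 1.0
T = 100_000

m = mu / (1 - phi1 - phi2)   # unconditional mean
g0 = (1 - phi2) * sigma2 / ((1 + phi2) * ((1 - phi2) ** 2 - phi1 ** 2))

y = np.empty(T)
# step 1: draw y0, y1 from the unconditional N(m, g0) (drawn independently,
# which is the weakness of this scheme)
y[:2] = m + np.sqrt(g0) * rng.standard_normal(2)
# step 2: recurse through the model
for t in range(2, T):
    y[t] = mu + phi1 * y[t - 1] + phi2 * y[t - 2] + np.sqrt(sigma2) * rng.standard_normal()

print(round(y.mean(), 2), round(y.var(), 2))
```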
GARCH
ARCH process
ARCH(1): the first-order autoregressive conditionally heteroskedastic process is $e_t\mid e_{t-1},e_{t-2},\ldots\sim N(0,\sigma_t^2)$, where $\sigma_t^2=a_0+a_1e_{t-1}^2$. Defining $v_t=e_t^2-\sigma_t^2$, the model can also be written as
$e_t^2=a_0+a_1e_{t-1}^2+v_t$
Since $E(v_t\mid e_{t-1},e_{t-2},\ldots)=0$, the model corresponds directly to an AR(1) model for the squared errors $e_t^2$.
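A simulation sketch confirming the AR(1) structure of $e_t^2$ (the parameters `a0`, `a1` are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
T = 100_000

# Simulate ARCH(1): sigma_t^2 = a0 + a1 * e_{t-1}^2, e_t = sigma_t * z_t
a0, a1 = 0.5, 0.4
e = np.zeros(T)
for t in range(1, T):
    sigma2_t = a0 + a1 * e[t - 1] ** 2
    e[t] = np.sqrt(sigma2_t) * rng.standard_normal()

# OLS of e_t^2 on (1, e_{t-1}^2): intercept ~ a0, slope ~ a1
X = np.column_stack([np.ones(T - 1), e[:-1] ** 2])
(c0, c1), *_ = np.linalg.lstsq(X, e[1:] ** 2, rcond=None)
print(round(c0, 2), round(c1, 2))
```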