RS – Lecture 12
Two-Step Estimation of the GR Model: Review
• Use the GLS estimator with an estimate of Ω:
1. Ω is parameterized by a few estimable parameters, Ω = Ω(θ). Example: Harvey's heteroscedastic model.
2. Iterative estimation procedure:
(a) Use OLS residuals to estimate the variance function.
(b) Use the estimated Ω in GLS – Feasible GLS, or FGLS.

• True GLS estimator: bGLS = (X′Ω⁻¹X)⁻¹ X′Ω⁻¹y (converges in probability to β).

• We seek an estimator which converges in probability to the same limit. Call it bFGLS, based on [X′Ω̂⁻¹X]⁻¹ X′Ω̂⁻¹y.



Two-Step Estimation of the GR Model: FGLS

• Feasible GLS is based on finding an estimator which has the same properties as the true GLS.

Example: Var[εi] = σ² Exp(γzi).
True GLS: Regress yi/[σ Exp((1/2)γzi)] on xi/[σ Exp((1/2)γzi)].
FGLS: With a consistent estimator of [σ, γ], say [s, c], we do the same computation with our estimates.

Note: If plim [s, c] = [σ, γ], then FGLS is as good as true GLS.

• Remark: To achieve full efficiency, we do not need an efficient estimate of the parameters in Ω, only a consistent one.


Heteroscedasticity
• Assumption (A3) is violated in a particular way: ε has unequal variances, but εi and εj are still not correlated with each other. Some observations (lower variance) are more informative than others (higher variance).

[Figure: conditional densities f(y|x) at x1, x2, x3, with increasing spread around E(y|x) = β0 + β1x]

Heteroscedasticity
• Now, we have the CLM regression with hetero- (different) scedastic (variance) disturbances.
(A1) DGP: y = Xβ + ε is correctly specified.
(A2) E[ε|X] = 0
(A3') Var[εi] = σ²ωi, ωi > 0. (CLM => ωi = 1, for all i.)
(A4) X has full column rank – rank(X) = k, where T ≥ k.

• Popular normalization: (1/T) Σi ωi = 1. (A scaling, absorbed into σ².)

• A characterization of the heteroscedasticity: well-defined estimators and methods for testing hypotheses will be obtainable if the heteroscedasticity is "well behaved" in the sense that

ωi / Σi ωi → 0 as T → ∞  – i.e., no single observation becomes dominant.

(1/T) Σi ωi → some stable constant. (Not a plim!)


GR Model and Testing
• Implications for conventional OLS and hypothesis testing:
1. b is still unbiased.
2. Consistent? We need the more general proof. Not difficult.
3. If plim b = β, then plim s² = σ² (with the normalization).
4. Under the usual assumptions, we have asymptotic normality.

• Two main problems with OLS estimation under heteroscedasticity:
(1) The usual standard errors are not correct. (They are biased!)
(2) OLS is not BLUE.

• Since the standard errors are biased, we cannot use the usual t-, F-, or LM statistics for drawing inferences. This is a serious issue.

Heteroscedasticity: Inference Based on OLS
• Q: But, what happens if we still use s²(X′X)⁻¹?
A: It depends on how different X′ΩX is from X′X. If they are nearly the same, OLS will give OK inferences.

But when will X′ΩX and X′X be nearly the same? The answer is based on a property of weighted averages. Suppose ωi is randomly drawn from a distribution with E[ωi] = 1. Then,

(1/T) Σi ωi xi² →p E[x²]  – just like (1/T) Σi xi².

• Remark: For the heteroscedasticity to be a significant issue for estimation and inference by OLS, the weights ωi must be correlated with xi and/or xi². The higher the correlation, the more important heteroscedasticity becomes (b is more inefficient).


Finding Heteroscedasticity

• There are several theoretical reasons why the σi² may be related to xi and/or xi²:
1. Following error-learning models, as people learn, their errors of behavior become smaller over time. Then, σi² is expected to decrease.
2. As data collecting techniques improve, σi² is likely to decrease. Companies with sophisticated data processing techniques are likely to commit fewer errors in forecasting customers' orders.
3. As incomes grow, people have more discretionary income and, thus, more choice about how to spend their income. Hence, σi² is likely to increase with income.
4. Similarly, companies with larger profits are expected to show greater variability in their dividend/buyback policies than companies with lower profits.

Finding Heteroscedasticity

• Heteroscedasticity can also be the result of model misspecification.
• It can arise as a result of the presence of outliers (either very small or very large). The inclusion/exclusion of an outlier, especially if T is small, can affect the results of regressions.
• Violations of (A1) – model is correctly specified – can produce heteroscedasticity, due to omitted variables from the model.
• Skewness in the distribution of one or more regressors included in the model can induce heteroscedasticity. Examples are economic variables such as income, wealth, and education.
• David Hendry notes that heteroscedasticity can also arise because of:
– (1) incorrect data transformation (e.g., ratio or first difference transformations).
– (2) incorrect functional form (e.g., linear vs log–linear models).


Finding Heteroscedasticity

• Heteroscedasticity is usually modeled using one of the following specifications:
– H1: σt² is a function of past εt² and past σt² (GARCH model).
– H2: σt² increases monotonically with one (or several) exogenous variable(s) (x1, . . . , xT).
– H3: σt² increases monotonically with E(yt).
– H4: σt² is the same within p subsets of the data but differs across the subsets (grouped heteroscedasticity). This specification allows for structural breaks.

• These are the usual alternative hypotheses in heteroscedasticity tests.

Finding Heteroscedasticity

• Visual test: a plot of the residuals against the dependent variable (or another suspected variable) will often produce a fan shape.



Testing for Heteroscedasticity
• Usual strategy when heteroscedasticity is suspected: use OLS along with the White estimator. This will give us consistent inferences.

• Q: Why do we want to test for heteroscedasticity? A: OLS is no longer efficient. There is an estimator with lower asymptotic variance (the GLS/FGLS estimator).

• We want to test: H0: E(ε²|x1, x2, …, xk) = E(ε²) = σ²

• The key is whether E[ε²] = σi² is related to xi and/or xi². Suppose we suspect a particular independent variable, say X1, is driving σi².

• Then, a simple test: Check the RSS for large values of X1, and the RSS for small values of X1. This is the Goldfeld-Quandt test.

Testing for Heteroscedasticity
• The Goldfeld-Quandt test
– Step 1. Arrange the data from small to large values of the independent variable suspected of causing heteroscedasticity, Xj.
– Step 2. Run two separate regressions, one for small values of Xj and one for large values of Xj, omitting d middle observations (≈ 20%). Get the RSS for each regression: RSS1 for small values of Xj and RSS2 for large Xj's.

– Step 3. Calculate the F ratio:

GQ = RSS2/RSS1 ~ F(df, df), with df = [(T – d) – 2(k+1)]/2, provided (A5) holds.

If (A5) does not hold, GQ is asymptotically χ².
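Example (R sketch): a minimal illustration of the three GQ steps on simulated data – the DGP, sample size, and the 20% trimming are assumptions made only for this example, not part of the lecture's data.

set.seed(42)
T <- 200; k <- 1
x <- sort(runif(T, 1, 10))                      # suspected driver of the heteroscedasticity
y <- 1 + 2*x + rnorm(T, sd = 0.5*x)             # variance grows with x
d  <- round(0.2*T)                              # omit ~20% middle observations
n1 <- (T - d)/2
lo <- 1:n1; hi <- (T - n1 + 1):T
rss1 <- sum(resid(lm(y ~ x, subset = lo))^2)    # RSS for small values of x
rss2 <- sum(resid(lm(y ~ x, subset = hi))^2)    # RSS for large values of x
df <- n1 - (k + 1)                              # df of each sub-sample regression
GQ <- rss2/rss1
GQ; 1 - pf(GQ, df, df)                          # statistic and p-value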


Testing for Heteroscedasticity • The Goldfeld-Quandt test

Note: When we suspect that more than one variable is driving the σi²'s, this test is not very useful.

• But, the GQ test is a popular test for structural breaks (two regimes) in variance. For these tests, we rewrite Step 3 to allow for different sizes of sub-samples 1 and 2.

- Step 3. Calculate the F-test ratio

GQ = [RSS2/ (T2 – k)]/[RSS1/ (T1 – k)]

Testing for Heteroscedasticity: LR Test
• The Likelihood Ratio Test
Let's define the likelihood function, assuming normality, for a general case where we have g different group variances:

ln L = –(T/2) ln(2π) – (1/2) Σi=1..g Ti ln(σi²) – (1/2) Σi=1..g (1/σi²)(yi – Xiβ)′(yi – Xiβ)

We have two models:
(R) Restricted under H0: σi² = σ² for all i. From this model, we calculate ln LR:
ln LR = –(T/2)[ln(2π) + 1] – (T/2) ln(σ̂²)
(U) Unrestricted. From this model, we calculate the log likelihood:
ln LU = –(T/2)[ln(2π) + 1] – (1/2) Σi=1..g Ti ln(σ̂i²);   σ̂i² = (1/Ti)(yi – Xib)′(yi – Xib)


Testing for Heteroscedasticity: LR Test
• Now, we can compute the Likelihood Ratio (LR) statistic:
LR = 2(ln LU – ln LR) = T ln(σ̂²) – Σi=1..g Ti ln(σ̂i²)  →a  χ²g-1
Under the usual regularity conditions, LR is asymptotically distributed as a χ² with g–1 degrees of freedom.

• Using specific functions for σi², this test has been used by Rutemiller and Bowers (1968) and in Harvey's (1976) groupwise heteroscedasticity paper.

Testing for Heteroscedasticity
• LM tests
• We want to develop tests of H0: E(ε²|x1, x2, …, xk) = σ² against an H1 with a general functional form.

• Recall that the central issue is whether E[ε²] = σi² is related to xi and/or xi². Then, a simple strategy is to use the OLS residuals to estimate the disturbances and look for relationships between ei² and xi and/or xi².

• Suppose that the relationship between ε² and X is linear:
ε² = Xα + v

Then, we test: H0: α = 0 against H1: α ≠ 0.

• We can base the test on how the squared OLS residuals, e², correlate with X.


Testing for Heteroscedasticity • Popular heteroscedasticity LM tests: - Breusch and Pagan (1979)’s LM test (BP). - White (1980)’s general test.

• Both tests are based on OLS residuals. That is, calculated under H0: No heteroscedasticity.

• The BP test is an LM test, based on the score of the log likelihood function, calculated under normality. It is a general test designed to detect any linear form of heteroskedasticity.

• The White test is an asymptotic Wald-type test; normality is not needed. It allows for nonlinearities by using squares and cross-products of all the x's in the auxiliary regression.

Testing for Heteroscedasticity: BP Test

• Let’s start with a general form of heteroscedasticity: 2 hi(α0 + zi,1’ α1 + .... + zi,m’ αm) = i 2 2 • We want to test: H0: E(εi |z1, z2,…, zk) = hi(zi’α) =

or H0: α1 = α2 =... = αm =  (m restrictions) • Assume normality. That is, the log likelihood function is: 2 2 2 log L = constant + ½Σ log  i -½ Σεi /i Then, construct an LM test: -1 LM = S(θR)’ I(θR) S(θR) θ= (β,α) -2 -2 -4 2 S(θ)=∂log L/∂θ’=[-Σi X’εi ;-½Σ(∂h/∂α)zii +½Σi εi (∂h/∂α)zi] I(θ) = E[- ∂2log L/∂θ∂θ’]

• We have block diagonality, we can rewrite the LM test, under H0: -1 LM = S(α0,0)’ [I22-I21 I11 I21] S(α0,0)


Testing for Heteroscedasticity: BP Test

• Given block diagonality, we can rewrite the LM test, under H0:
LM = S(α0, 0)′ [I22 – I21 I11⁻¹ I12]⁻¹ S(α0, 0)
S(α0, 0) = –½ Σi (∂h/∂α|α0,R, 0) zi′/σR² + ½ Σi (ei²/σR⁴)(∂h/∂α|α0,R, 0) zi′
         = ½ σR⁻² (∂h/∂α|α0,R, 0) Σi zi (ei²/σR² – 1)
         = ½ σR⁻² (∂h/∂α|α0,R, 0) Σi zi ωi,   ωi = ei²/σR² – 1 = gi – 1
I22(α0, 0) = E[–∂²log L/∂α∂α′] = ½ [σR⁻² (∂h/∂α|α0,R, 0)]² Σi zi zi′
I21(α0, 0) = 0
σR² = (1/T) Σi ei² (MLE of σ² under H0). Then,
LM = ½ (Σi zi ωi)′ [Σi zi zi′]⁻¹ (Σi zi ωi) = ½ W′Z (Z′Z)⁻¹ Z′W ~ χ²m

Note: Recall R² = [y′X(X′X)⁻¹X′y – T ȳ²]/[y′y – T ȳ²] = ESS/TSS.
Also note that under H0: E[ωi] = 0, E[ωi²] = 1.

Testing for Heteroscedasticity: BP Test
• LM = ½ W′Z (Z′Z)⁻¹ Z′W = ½ ESS
ESS = explained sum of squares in the regression of ωi (= ei²/σR² – 1) against zi.

• Under the usual regularity conditions, and under H0,
√T (αML – α) →d N(0, 2σ⁴ (Z′Z/T)⁻¹)
Then,
LM-BP = (2 σR⁴)⁻¹ ESSe →d χ²m
ESSe = ESS in the regression of ei² (= gi σR²) against zi.
Since σR⁴ →p σ⁴  =>  LM-BP →d χ²m

Note: Recall R² = [y′X(X′X)⁻¹X′y – T ȳ²]/[y′y – T ȳ²]
Under H0: E[ωi] = 0, E[ωi²] = 1; then, the LM test can be shown to be asymptotically equivalent to T R². (Think of ȳ ≈ 0 and y′y/T = 1 above.)


Testing for Heteroscedasticity: BP Test
• The LM test is asymptotically equivalent to a T R² test, where R² is calculated from a regression of ei²/σR² on the variables Z.

• Usual calculation of the Breusch-Pagan test:
– Step 1. (Auxiliary Regression). Run the regression of ei² on all the explanatory variables, z. In our example,
ei² = α0 + zi,1 α1 + .... + zi,m αm + vi
– Step 2. Keep the R² from this regression; call it R²e². Calculate either
(a) F = (R²e²/m)/[(1 – R²e²)/(T – (m+1))], which follows an Fm, T–(m+1) distribution, or
(b) LM = T R²e², which follows a χ²m distribution.
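Example (R sketch): a minimal illustration of the two BP steps – the simulated data and the candidate drivers z are assumptions made only for this example.

set.seed(1)
T <- 500
z <- matrix(rnorm(T*2), T, 2)                # candidate drivers z1, z2
x <- rnorm(T)
y <- 1 + 2*x + rnorm(T, sd = exp(0.5*z[,1])) # heteroscedastic errors
e <- resid(lm(y ~ x))                        # OLS residuals
aux <- lm(I(e^2) ~ z)                        # Step 1: auxiliary regression of e^2 on z
R2  <- summary(aux)$r.squared                # Step 2: keep R^2
m   <- ncol(z)
LM  <- T * R2; LM; 1 - pchisq(LM, m)         # LM version
Fst <- (R2/m) / ((1 - R2)/(T - (m + 1)))     # F version
Fst; 1 - pf(Fst, m, T - (m + 1))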

Testing for Heteroscedasticity: BP Test
• Variations:
(1) Glejser (1969) test. Use absolute values instead of ei² to estimate the varying second moment. Following our previous example,

|ei| = α0 + zi,1 α1 + .... + zi,m αm + vi

(2) Harvey-Godfrey (1978) test. Use ln(ei²). Then, the implied model for σi² is an exponential model:
ln(ei²) = α0 + zi,1 α1 + .... + zi,m αm + vi

Note: Implied model for σi² = exp{α0 + zi,1 α1 + .... + zi,m αm + vi}.


Testing for Heteroscedasticity: BP Test
• Variations (continuation):
(3) Koenker's (1981) studentized LM test. A usual problem with the LM-BP test is that it crucially depends on the assumption that ε is normal. Koenker (1981) proposed studentizing the LM-BP statistic:

LM-S = (2 σR⁴) LM-BP / [Σi (ei² – σR²)²/T]  →d  χ²m
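Example (R sketch): the studentized version, reusing the objects e, z, and T from the BP sketch above (again purely illustrative).

sig2  <- mean(e^2)                            # MLE of sigma^2 under H0
u     <- e^2 - sig2
aux_k <- lm(u ~ z)
LM_S  <- T * sum(fitted(aux_k)^2) / sum(u^2)  # equals T * R^2 of the auxiliary regression
LM_S; 1 - pchisq(LM_S, ncol(z))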

Testing for Heteroscedasticity: White Test
• Based on the difference between the OLS and the true OLS variances:
σ²(X′ΩX – X′X) = X′ΣX – σ²X′X = Σi (E[εi²] – σ²) xi xi′
• Empirical counterpart: (1/T) Σi (ei² – s²) xi xi′
• We can express each of the k(k+1)/2 distinct elements of this matrix as:
(1/T) Σi (ei² – s²) ψli,   ψi: Kolmogorov-Gabor polynomial

ψi = (ψ1i, ψ2i, ..., ψmi)′,  ψli = xqi xpi,  p ≥ q, p, q = 1, 2, ..., k;  l = 1, 2, ..., m;  m = k(k+1)/2
• White heteroscedasticity test:
W = [(1/T) Σi (ei² – s²) ψi]′ DT⁻¹ [(1/T) Σi (ei² – s²) ψi]  →d  χ²m
where DT = Var[(1/T) Σi (ei² – s²) ψi]
Note: W is asymptotically equivalent to a T R² test, where R² is calculated from a regression of ei²/σ̂² on the ψi's.


Testing for Heteroscedasticity: White Test
• Usual calculation of the White test:
– Step 1. (Auxiliary Regression). Regress e² on all the explanatory variables (Xj), their squares (Xj²), and all their cross products. For example, when the model contains k = 2 explanatory variables, the test is based on:
ei² = β0 + β1 x1,i + β2 x2,i + β3 x1,i² + β4 x2,i² + β5 x1,i x2,i + vi
Let m be the number of regressors in the auxiliary regression. Keep the R², say R²e².
– Step 2. Compute the statistic LM = T R²e², which follows a χ²m distribution.

Testing for Heteroscedasticity: Examples

• White Test for the 3 factor F-F model for IBM returns (T=320):

IBM_Ret – rf = β0 + β1 (Mkt_Ret – rf) + β2 SMB + β3 HML + ε

> b <- solve(t(x)%*%x)%*%t(x)%*%y    # OLS regression
> e <- y - x%*%b
> e2 <- e^2
> xx2 <- cbind(x1^2,x2^2,x3^2,x1*x2,x1*x3,x2*x3)
> fit2 <- lm(e2~xx2)
> r2_e2 <- summary(fit2)$r.squared
> r2_e2
[1] 0.0166492
> lm_t <- T*r2_e2
> lm_t
[1] 5.327744
> ncol(xx2)
[1] 6

LM-White Test: 5.33 ⟹ cannot reject H0 at the 5% level (χ²[6],.05 ≈ 12.59).


Testing for Heteroscedasticity: Examples

• Suppose we suspect that squared HML (x3) – a measure of Book-to-Market variance – drives the heteroscedasticity. We run an LM-BP test:

> xx1 <- x3^2
> fit2 <- lm(e2~xx1)
> r2_e2 <- summary(fit2)$r.squared
> r2_e2
[1] 0.009285253
> lm_t <- T*r2_e2
> lm_t
[1] 2.971281
> 1-pchisq(lm_t,1)
[1] 0.08475472

LM-BP Test: 2.97 ⟹ cannot reject H0 at the 5% level (χ²[1],.05 ≈ 3.84), with a p-value = .085.

Testing for Heteroscedasticity: Remarks
• Drawbacks of the Breusch-Pagan test:
– It has been shown to be sensitive to violations of the normality assumption.
– Other popular LM tests, such as the Glejser and the Harvey-Godfrey tests, are also sensitive to such violations.

• Drawbacks of the White test:
– If a model has several regressors, the test can consume a lot of degrees of freedom.
– In cases where the White test statistic is significant, the cause may not be heteroscedasticity but model specification errors.
– It is general: it does not give us a clue about how to model the heteroscedasticity in order to do FGLS. The BP test, in contrast, points us in a direction.


Testing for Heteroscedasticity: Remarks
• Drawbacks of the White test (continuation):
– In simulations, it does not perform well relative to others, especially for time-varying heteroscedasticity, typical of financial time series.
– The White test does not depend on normality, but Koenker's test is also not very sensitive to normality. In simulations, Koenker's test seems to have more power – see Lyon and Tsai (1996) for a Monte Carlo study of the heteroscedasticity tests presented here.

Testing for Heteroscedasticity: Remarks

• General problems with heteroscedasticity tests:
– The tests rely on the first four assumptions of the CLM being true.
– In particular, (A2) violations. That is, if the zero conditional mean assumption does not hold, a test for heteroskedasticity may reject the null hypothesis even if Var(y|X) is constant.
– This is true if our functional form is specified incorrectly (omitted variables, or specifying a log instead of a level). Recall David Hendry's comment.

• Knowing the true source (functional form) of the heteroscedasticity may be difficult. A practical solution is to avoid modeling the heteroscedasticity altogether and use OLS along with White heteroskedasticity-robust standard errors.


Estimation: WLS form of GLS • While it is always possible to estimate robust standard errors for OLS estimates, if we know the specific form of the heteroskedasticity, we can obtain more efficient estimates than OLS: GLS.

• GLS basic idea: gain efficiency by transforming the model into one that has homoskedastic errors – this is called WLS.

• Suppose the heteroskedasticity can be modeled as: Var(ε|x) = σ² h(x)

• The key is to figure out what h(x) looks like. Suppose that we know hi. For example, hi(x) = xi². (Make sure hi is always positive.)

• Then, use 1/√(xi²) to transform the model.

Estimation: WLS form of GLS

• Suppose that we know hi(x) = xi². Then, use 1/√(xi²) to transform the model:
Var(εi/√hi | x) = σ²

• Thus, if we divide our whole equation by √hi we get a (transformed) model where the error is homoskedastic.

• Assuming the form of the weights is known, we have a two-step GLS estimation:
– Step 1: Use OLS; then, use the residuals to estimate the weights.
– Step 2: Weighted least squares (WLS) using the estimated weights.

• Greene has a proof, based on our asymptotic theory, of the asymptotic equivalence of the second step to true GLS.
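Example (R sketch): WLS when the variance function is taken as known, with hi(x) = xi² – the simulated data and parameter values are assumptions made only for this illustration.

set.seed(7)
T <- 300
x <- runif(T, 1, 5)
y <- 1 + 2*x + rnorm(T, sd = 0.8*x)     # errors with sd proportional to x
ols <- lm(y ~ x)                        # OLS: unbiased, but usual s.e. are not valid
wls <- lm(y ~ x, weights = 1/x^2)       # WLS: weight_i = 1/h(x_i) = 1/x_i^2
summary(ols)$coef
summary(wls)$coef                       # compare point estimates and standard errors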


Estimation: FGLS
• More typical is the situation where we do not know the form of the heteroskedasticity. In this case, we need to estimate h(xi).

• Typically, we start by assuming a fairly flexible model, such as
Var(ε|x) = σ² h(x) = σ² exp(Xδ)   – make sure Var(εi|x) > 0.

Since we don’t know , it must be estimated. Our assumption implies that ε2 = 2exp(X) v with E(v|X) = 1. Then, if E(v) = 1 ln(ε2) = X + u (*) where E(u) = 0 and u is independent of X. Now, we know that e is an estimate of ε, so we can estimate (*) by OLS.

Estimation: FGLS • Now, an estimate of h is obtained as ĥ = exp(ĝ), and the inverse of this is our weight. Now, we can do GLS as usual.

• Summary of FGLS (1) Run the original OLS model, save the residuals, e. Get ln(e2). (2) Regress ln(e2) on all of the independent variables. Get fitted values, ĝ. (3) Do WLS using 1/sqrt[exp(ĝ)] as the weight. (4) Iterate to gain efficiency.

• Remark: We are using WLS just for efficiency – OLS is still unbiased and consistent. Sandwich estimator gives us consistent inferences.
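Example (R sketch): one pass of steps (1)–(3), no iteration – the simulated DGP below is only an illustration of the exponential variance assumption.

set.seed(11)
T <- 400
x <- runif(T, 0, 3)
y <- 1 + 2*x + rnorm(T, sd = exp(0.6*x))   # exponential heteroscedasticity
e  <- resid(lm(y ~ x))                     # (1) OLS residuals
g  <- fitted(lm(log(e^2) ~ x))             # (2) regress ln(e^2) on x; keep fitted values g-hat
h  <- exp(g)                               # estimated variance function h-hat
fgls <- lm(y ~ x, weights = 1/h)           # (3) WLS with weight 1/h-hat
summary(fgls)$coef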


Estimation: MLE • ML estimates all the parameters simultaneously. To construct the likelihood, we assume a distribution for ε. Under normality (A5):

ln L = –(T/2) ln(2π) – (1/2) Σi ln(σi²) – (1/2) Σi (yi – xi′β)²/σi²

• Suppose σi² = exp(α0 + zi,1 α1 + .... + zi,m αm) = exp(zi′α)

• Then, the first derivatives of the log likelihood wrt θ=(β,α) are:

T ln L 2 1  xi i / i  X '   i1 T T T ln L 1 2 1 2 4 1 2 2   1/ i exp(zi ')zi ( ) i / i exp(zi ')zi  zi (i / i 1) i 2 i112 i 2 i1

• Then, we get the f.o.c. We get a non-linear system of equations.

Estimation: MLE
• We take second derivatives to calculate the information matrix:

∂²ln L/∂β∂β′ = –Σi xi xi′/σi² = –X′Σ⁻¹X
∂²ln L/∂β∂α′ = –Σi xi zi′ εi/σi²
∂²ln L/∂α∂α′ = –(1/2) Σi zi zi′ εi²/σi²

• Then,
I(θ) = E[–∂²ln L/∂θ∂θ′] = block diag[ X′Σ⁻¹X ,  (1/2) Z′Z ]

• We can estimate the model using Newton's method:
θj+1 = θj – Hj⁻¹ gj,   gj = ∂log Lj/∂θ′


Estimation: MLE
• We estimate the model using Newton's method:
θj+1 = θj – Hj⁻¹ gj,   gj = ∂log Lj/∂θ′

Since Hj is block diagonal,
βj+1 = βj – (X′Σj⁻¹X)⁻¹ X′Σj⁻¹ εj

αj+1 = αj – (½ Z′Z)⁻¹ [½ Σi zi (εi²/σi² – 1)] = αj – (Z′Z)⁻¹ Z′v,

where v is the vector with elements vi = εi²/σi² – 1.

Convergence will be achieved when gj = ∂log Lj/∂θ’ is close to zero.

• We have an iterative algorithm => Iterative FGLS= MLE!

Heteroscedasticity: Log Transformations

• A log transformation of the data can eliminate (or reduce) a certain type of heteroskedasticity.

– Assume μt = E[Zt]
– Var[Zt] = δ² μt²  (variance proportional to the squared mean)

• We log-transform the data: log(Zt). Then, we use the delta method to approximate the variance of the transformed variable. Recall the delta method:
Var[f(X)] ≈ f′(μ)² Var[X]

• Then, the variance of log(Zt) is roughly constant:

Var[log(Zt)] ≈ (1/μt²) Var[Zt] = δ²
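Example (R sketch): a quick simulated check of this variance-stabilizing effect – δ and the mean levels below are arbitrary choices for the illustration.

set.seed(3)
delta <- 0.2
mu <- c(10, 20, 50, 100)                                    # four mean levels
Z  <- sapply(mu, function(m) rnorm(10000, mean = m, sd = delta*m))
apply(Z, 2, var)        # grows roughly with mu^2
apply(log(Z), 2, var)   # roughly constant, close to delta^2 = 0.04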


ARCH Models
• Until the early 1980s, time series econometrics had focused almost solely on modeling the means of series – i.e., their actual values:
yt = Et-1[yt] + εt,   εt ~ D(0, σ²)
Suppose we have an AR(1) process:

yt = α + β yt-1 + εt.
Then, the conditional mean, conditioning on information at time t–1, is:

Et-1[yt] = α + β yt-1

Note: The unconditional mean and variance are:

E[yt] = α/(1 – β) = constant
Var[yt] = σ²/(1 – β²) = constant

The conditional mean is time varying; the unconditional mean is not!

Key distinction: Conditional vs. Unconditional moments.

ARCH Models
• Similar idea for the variance.
Unconditional variance:
Var[yt] = E[(yt – E[yt])²] = σ²/(1 – β²)

Conditional variance:
Vart-1[yt] = Et-1[(yt – Et-1[yt])²] = Et-1[εt²]

Remark: Again, conditional moments are time varying; unconditional moments are not!


ARCH Models
• The unconditional variance measures the overall uncertainty. In the AR(1) example, time t–1 information plays no role: Var[yt] = σ²/(1 – β²).
• The conditional variance, Vart-1[yt], is a better measure of the uncertainty at time t–1. It is a function of information at time t–1.

[Figure: conditional mean and conditional variance of yt, as of time t–1]

ARCH Models: Stylized Facts of Asset Returns

– Thick tails – Mandelbrot (1963): returns are leptokurtic (tails thicker than the Normal).

– Volatility clustering – Mandelbrot (1963): "large changes tend to be followed by large changes of either sign."

- Leverage Effects – Black (1976), Christie (1982): Tendency for changes in stock prices to be negatively correlated with changes in volatility.

– Non-trading Effects, Weekend Effects – Fama (1965), French and Roll (1986): When a market is closed, information accumulates at a different rate than when it is open – for example, the weekend effect, where stock price volatility on Monday is not three times the volatility on Friday.


ARCH Models: Stylized Facts of Asset Returns

– Expected events – Cornell (1978), Patell and Wolfson (1979), etc.: Volatility is high at regular times such as news announcements or other expected events, or even at certain times of day – for example, returns are less volatile in the early afternoon.

- Volatility and serial correlation – LeBaron (1992): Inverse relationship between the two.

- Co-movements in volatility – Ramchand and Susmel (1998): Volatility is positively correlated across markets/assets.

• We need a model that accommodates all these facts.

ARCH Models: Stylized Facts of Asset Returns
• Easy to check leptokurtosis (Stylized Fact #1).
Figure: Distribution of Monthly S&P500 Returns

Mean (%): 0.626332 (p-value: 0.0004)
Standard Dev (%): 4.37721
Skewness: –0.43764
Excess Kurtosis: 2.29395
Jarque-Bera: 145.72 (p-value: <0.0001)
AR(1): 0.0258 (p-value: 0.5249)


ARCH Models: Stylized Facts of Asset Returns

• Easy to check Volatility Clustering (Stylized Fact #2)

Figure: Monthly S&P500 Returns (1964:1-2014:9)


ARCH Models: Engle (1982) • We start with assumptions (A1) to (A5), but with a specific (A3’):

2 Yt  X t   t  t ~ N (0,  t ) q 2 2 2 2  t  Vart 1 ( t )  Et 1 ( t )      i  t i    (L) i1 2 2 define t  t t

2 2  t     (L) t  t

• This is an AR(q) model for squared innovations. That is, we have an ARCH model: Auto-Regressive Conditional Heteroskedasticity

This model estimates the unobservable (latent) variance.

Note: We are dealing with a variance, we usually impose ω>0 and α >0 for all i. i Robert F. Engle, USA


ARCH Models: Engle (1982)

• The unconditional variance is determined by:
σ² = E[εt²] = ω + Σi=1..q αi E[εt-i²] = ω + σ² Σi=1..q αi

That is,
σ² = ω / (1 – Σi=1..q αi)

To obtain a positive σ², we impose another restriction: (1 – Σi αi) > 0.

• Example: ARCH(1)
yt = xt′γ + εt,   εt ~ N(0, σt²)
σt² = ω + α1 εt-1²

• We need to impose the restrictions: α1 > 0 and 1 – α1 > 0.

ARCH Models: Engle (1982)

• Even though the errors may be serially uncorrelated, they are not independent: there will be volatility clustering and fat tails. Let's define the standardized errors:

zt = εt / σt

• They have conditional mean zero and a time-invariant conditional variance equal to 1. That is, zt ~ D(0,1). Suppose zt follows a N(0,1) and εt has a finite fourth moment (use Jensen's inequality). Then:

E(εt⁴) = E(zt⁴) E(σt⁴) ≥ E(zt⁴) [E(σt²)]² = 3 [E(εt²)]²
⇒ kurt(εt) = E(εt⁴) / [E(εt²)]² ≥ 3.

• For an ARCH(1), the 4th moment is:

kurt(εt) = 3 (1 – α1²) / (1 – 3α1²),   if 3α1² < 1.


ARCH Models: Engle (1982)
• More convenient, but less intuitive, presentation of the ARCH(1) model:
εt = σt υt

where υt is i.i.d. with mean 0 and Var[υt] = 1. Since υt is i.i.d.:

Et-1[εt²] = Et-1[σt² υt²] = σt² Et-1[υt²] = σt² = ω + α1 εt-1²

• It turns out that σt² is a very persistent process. Such a process can be captured with an ARCH(q), where q is large. This is not efficient.

• In practice, q is often large. A more parsimonious representation is the Generalized ARCH model, or GARCH(q,p):
σt² = ω + Σi=1..q αi εt-i² + Σj=1..p βj σt-j²
    = ω + α(L) εt² + β(L) σt²

GARCH: Bollerslev (1986)
• A more parsimonious representation is the GARCH(q,p):
σt² = ω + Σi=1..q αi εt-i² + Σj=1..p βj σt-j²
Define υt = εt² – σt². Then,

εt² = ω + [α(L) + β(L)] εt² – β(L) υt + υt

which is an ARMA(max(p,q), p) model for the squared innovations.

• Popular GARCH model: GARCH(1,1):
σt+1² = ω + α1 εt² + β1 σt²

with unconditional variance: Var[εt] = σ² = ω / (1 – α1 – β1).

=> Restrictions: ω > 0, α1 > 0, β1 > 0, (1 – α1 – β1) > 0.


GARCH: Bollerslev (1986)
• Technical details: The model is covariance stationary if all the roots of α(L) + β(L) = 1 lie outside the unit circle. For the GARCH(1,1), this amounts to

α1 + β1 < 1.

• Bollerslev (1986) showed that if 3α1² + 2α1β1 + β1² < 1, the second and fourth (unconditional) moments of εt exist:

E[εt²] = ω / (1 – α1 – β1)
E[εt⁴] = 3ω²(1 + α1 + β1) / [(1 – α1 – β1)(1 – β1² – 2α1β1 – 3α1²)],   if (1 – β1² – 2α1β1 – 3α1²) > 0

GARCH: Forecasting and Persistence
• Consider the forecast in a GARCH(1,1) model:

σt+1² = ω + α1 εt² + β1 σt² = ω + σt² (α1 zt² + β1)    (since εt = σt zt)

Taking expectations at time t:

Et[σt+1²] = ω + σt² (α1 + β1)
Then, by repeated substitution:
Et[σt+j²] = ω Σi=0..j-1 (α1 + β1)^i + σt² (α1 + β1)^j
As j → ∞, the forecast reverts to the unconditional variance:

ω/(1 – α1 – β1).

• When α1 + β1 = 1, today's volatility affects future forecasts forever:
Et[σt+j²] = σt² + j ω
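Example (R sketch): the j-step-ahead forecast recursion above, written as a small function – the parameter values are arbitrary and only illustrate the mean reversion.

garch11_forecast <- function(omega, alpha1, beta1, sigma2_t, j) {
  # E_t[sigma^2_{t+j}] = omega * sum_{i=0}^{j-1} phi^i + sigma2_t * phi^j, phi = alpha1 + beta1
  phi <- alpha1 + beta1
  omega * sum(phi^(0:(j - 1))) + sigma2_t * phi^j
}
sapply(c(1, 10, 100), function(j) garch11_forecast(0.1, 0.07, 0.9, sigma2_t = 5, j = j))
0.1 / (1 - 0.07 - 0.9)    # unconditional variance the forecasts revert to (about 3.33)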


GARCH-X • In the GARCH-X model, exogenous variables are added to the conditional variance equation.

Consider the GARCH(1,1)-X model:

σt² = ω + α1 εt-1² + β1 σt-1² + δ f(Xt-1, λ)

where f(Xt, λ) is strictly positive for all t. Usually, Xt is an observed economic variable or indicator, say a liquidity index, and f(.) is a non-linear transformation, which is non-negative.

Examples: Glosten et al. (1993) and Engle and Patton (2001) use 3- mo T-bill rates for modeling stock return volatility. Hagiwara and Herce (1999) use interest rate differentials between countries to model FX return volatility. The US congressional budget office uses inflation in an ARCH(1) model for interest rate spreads.

IGARCH

• Recall the technical detail: The standard GARCH model:

σt² = ω + α(L) εt² + β(L) σt²
is covariance stationary if α(1) + β(1) < 1.

• But strict stationarity does not require such a stringent restriction (That is, that the unconditional variance does not depend on t).

In the GARCH(1,1) model, if α1 + β1 =1, we have the Integrated GARCH (IGARCH) model.

• In the IGARCH model, the autoregressive polynomial in the ARMA representation has a unit root: a shock to the conditional variance is “persistent.”


IGARCH

• Variance forecasts are generated with: Et[σt+j²] = σt² + j ω
• That is, today's variance remains important for future forecasts of all horizons.

• Nelson (1990) establishes that, as this satisfies the requirement for strict stationarity, it is a well defined model.

• In practice, it is often found that α1 + β1 is close to 1.

• It is often argued that IGARCH is a product of omitted variables, for example, structural breaks. See Lamoureux and Lastrapes (1989), Hamilton and Susmel (1994), and Mikosch and Starica (2004).

• Shepard and Sheppard (2010) argue for a GARCH-X explanation.

GARCH: Variations

• EGARCH model – Nelson (Econometrica, 1991). It models an exponential function for the time-varying variance:

log(σt²) = ω + Σi=1..q αi [zt-i + γ(|zt-i| – E|zt-i|)] + Σj=1..p βj log(σt-j²)

• GJR-GARCH model – Glosten, Jagannathan and Runkle (JF, 1993):

σt² = ω + Σi=1..q αi εt-i² + Σi=1..q γi εt-i² It-i + Σj=1..p βj σt-j²

where It-i = 1 if εt-i < 0; 0 otherwise.

• Remark: Both models capture sign (asymmetric) effects in volatility:

Negative news (εt-i < 0) increases the conditional volatility (leverage effect).


GARCH: Variations
• Non-linear ARCH (NARCH) model – Higgins and Bera (1992) and Hentschel (1995).

These models apply a Box-Cox-type transformation to the conditional variance:

σt^γ = ω + Σi=1..q αi |εt-i – κ|^γ + Σj=1..p βj σt-j^γ

Special case: γ = 2 (standard GARCH model).

Note: The variance depends on both the size and the sign of the lagged errors, which helps to capture leverage-type (asymmetric) effects.

GARCH: Variations
• Threshold ARCH (TARCH) – Rabemananjara and Zakoian (JAE, 1993). Large events have an effect, but small events do not:

σt² = ω + Σi=1..q [αi⁺ I(εt-i > κ) + αi⁻ I(εt-i ≤ κ)] εt-i² + Σj=1..p βj σt-j²

There are two variances:

σt² = ω + Σi=1..q αi⁺ εt-i² + Σj=1..p βj σt-j²,   if εt-i > κ
σt² = ω + Σi=1..q αi⁻ εt-i² + Σj=1..p βj σt-j²,   if εt-i ≤ κ

Many other versions are possible by adding minor asymmetries or non-linearities in a variety of ways.


GARCH: Variations
• Switching ARCH (SWARCH) – Hamilton and Susmel (JE, 1994).

Intuition: σt² depends on the state of the economy – the regime. It is based on Hamilton's (1989) time series models with changes of regime:

σ²t,st,st-1 = ω(st, st-1) + Σi=1..q αi(st, st-1) εt-i²

The key is to select a parsimonious representation:

σt²/γst = ω + Σi=1..q αi εt-i²/γst-i

For a SWARCH(1) with 2 states (1 and 2), we have 4 possible σt²:
σt² = ω1 + α1 εt-1² γ1/γ1,   st = 1, st-1 = 1
σt² = ω1 + α1 εt-1² γ1/γ2,   st = 1, st-1 = 2
σt² = ω2 + α1 εt-1² γ2/γ1,   st = 2, st-1 = 1
σt² = ω2 + α1 εt-1² γ2/γ2,   st = 2, st-1 = 2

ARCH Estimation: MLE

• All of these models can be estimated by maximum likelihood. First we need to construct the likelihood.

• Since we are dealing with dependent variables, we use the conditioning trick to get the joint distribution:

f(y1, y2, ..., yT; θ) = f(y1|x1; θ) f(y2|y1, x2, x1; θ) .... f(yT|yT-1, .., y1, xT, .., x1; θ)

Taking logs:

L = log f(y1, y2, ..., yT; θ) = log f(y1|x1; θ) + log f(y2|y1, x2, x1; θ) + .... + log f(yT|yT-1, .., y1, xT, .., x1; θ)
  = Σt=1..T log f(yt|Yt-1, Xt; θ)

We maximize this function w.r.t. the k mean parameters (γ) and the m variance parameters (ω, α, β).


ARCH Estimation: MLE • Note that the δL/δγ=0 (k f.o.c.’s) will give us GLS.

• Denote δL/δθ=S(yt,θ)=0 (S(.) is the score vector)

– We have a ((k+m) x (k+m)) system. But, it is a non-linear system. We will need to use numerical optimization. Gauss-Newton or BHHH (which approximates H by the sum of outer products of the scores S(yt, θ)) can be easily implemented.

- Given the AR structure, we will need to make assumptions about σ0 (and ε0,ε1 , ..εp if we assume an AR(p) process for the mean).

- Alternatively, we can take σ0 (and ε0,ε1 , ..εp) as parameters to be estimated (it can be computationally more intensive and estimation can lose power.)

ARCH Estimation: MLE

• If the conditional density is well specified and θ0 belongs to Ω, then

√T (θ̂ – θ0) → N(0, A0⁻¹),   where A0 = T⁻¹ Σt=1..T E[–∂St(yt, θ0)/∂θ′]

• Under the correct specification assumption, A0 = B0, where

B0 = T⁻¹ Σt=1..T E[St(yt, θ0) St(yt, θ0)′]

We estimate A0 and B0 by replacing θ0 by its estimated MLE value.

• The estimator B0 has a computational advantage over A0: only first derivatives are needed. But A0 = B0 only if the distribution is correctly specified. This is very difficult to know in practice.

• Common practice in empirical studies: Assume the necessary regularity conditions are satisfied.


ARCH Estimation: MLE – Example
• Assuming normality, we maximize with respect to θ the function:

L = Σt=1..T log ft = –(T/2) log(2π) – (1/2) Σt { log(σt²) + εt²/σt² }

Example: ARCH(1) model.

L = –(T/2) log(2π) – (1/2) Σt log(ω + α1 εt-1²) – (1/2) Σt εt²/(ω + α1 εt-1²)

Taking derivatives with respect to θ = (ω, α1, γ), where γ = the k mean parameters:

∂L/∂ω = –(1/2) Σt 1/(ω + α1 εt-1²) + (1/2) Σt εt²/(ω + α1 εt-1²)²
∂L/∂α1 = –(1/2) Σt εt-1²/(ω + α1 εt-1²) + (1/2) Σt εt² εt-1²/(ω + α1 εt-1²)²
∂L/∂γ = Σt xt εt/σt²

ARCH Estimation: MLE • Then, we set the f.o.c. => δL/δθ=0.

• We have a (k+2) system. It is a non-linear system. We will use numerical optimization; usually, Gauss-Newton or BHHH.

• Again, note that δL/δγ=0 (k f.o.c.’s) will give us GLS.

T L 2   xt  t (γ MLE ) / t (MLE , MLE ,)  0 γ t1


ARCH Estimation: MLE – Example (in R)
• Log likelihood of AR(1)-GARCH(1,1) Model:

log_lik_garch11 <- function(theta, data) {
  mu <- theta[1]; rho <- theta[2]
  alpha0 <- abs(theta[3]); alpha1 <- abs(theta[4]); beta1 <- abs(theta[5])
  chk0 <- (1 - alpha1 - beta1)
  r <- ts(data)
  n <- length(r)

  u <- vector(length=n); u <- ts(u)
  for (t in 2:n) {u[t] = r[t] - mu - rho*r[t-1]}   # this setup allows for ARMA in mean

  h <- vector(length=n); h <- ts(h)
  if (chk0==0) {chk0 = .000001}                    # check to avoid dividing by 0
  h[1] = alpha0/chk0                               # set initial value for h[t] series
  for (t in 2:n) {h[t] = abs(alpha0 + alpha1*(u[t-1]^2) + beta1*h[t-1])
    if (h[t]==0) {h[t]=.00001} }                   # check to avoid log(0)

  return(-sum(-0.5*log(2*pi) - 0.5*log(abs(h[2:n])) - 0.5*(u[2:n]^2)/abs(h[2:n])))
}

ARCH Estimation: MLE – Example (in R)
• To maximize the likelihood we use optim (nlm can also be used):

dat_xy <- read.csv("http://www.bauer.uh.edu/rsusmel/phd/chfusd_17.csv", head=TRUE, sep=",")
summary(dat_xy)
names(dat_xy)

z <- dat_xy$CHFUSD                       # CHF/USD 1971-2017 monthly data

theta0 = c(0.05, 0.1, 0.1, 0.25, 0.7)    # initial values
ml_2 <- optim(theta0, log_lik_garch11, data=z, method="BFGS", hessian=TRUE)

ml_2$par                                 # estimated parameters

I_Var_m2 <- ml_2$hessian
eigen(I_Var_m2)                          # check if Hessian is pd
sqrt(diag(solve(I_Var_m2)))              # parameters SE

chf_usd <- ts(z, frequency=12, start=c(1971,1))
plot.ts(chf_usd)                         # time series plot of data


ARCH Estimation: MLE – Example (in R)

> ml_2$par
[1] -0.17496654  0.25333506  0.82101387  0.06514787  0.82392626
>
> I_Var_m2 <- ml_2$hessian
> eigen(I_Var_m2)    # check if Hessian is pd
eigen() decomposition
$values
[1] 18103.991574   592.516649   504.252999    80.455651     4.248681

$vectors
              [,1]        [,2]        [,3]         [,4]         [,5]
[1,]  0.0003057871  0.03980084 -0.02946857  0.998757150  0.005617731
[2,] -0.0007327615 -0.69827499  0.71413675  0.048926415 -0.005138999
[3,] -0.0974743965  0.09280099  0.08323696  0.004341412 -0.987390236
[4,] -0.6800704864 -0.52322506 -0.51289888  0.006067750 -0.025250547
[5,] -0.7266376298  0.47796595  0.46813102 -0.005890306  0.156092801

> sqrt(diag(solve(I_Var_m2)))    # parameters SE
[1] 0.11140088 0.04324649 0.47905926 0.03405586 0.08114472

AR(1)-GARCH(1,1): Example – USD/CHF

• We estimate an AR(1)-GARCH(1,1) model for st, the log changes in the USD/CHF exchange rate:

st = [log(St) – log(St-1)] x 100 = a0 + a1 st-1 + εt,   εt|Ψt-1 ~ N(0, σt²)
σt² = α0 + α1 et-1² + β1 σt-1²

• T: 562 (January 1971 - September 2017, monthly).

The estimated model for st is given by:
st = –0.175 + 0.253 st-1
     (0.111)   (0.043)
σt² = 0.821 + 0.065 et-1² + 0.824 σt-1²
      (0.479)  (0.034)      (0.081)

Note: α1 +ß1 =.889 < 1. (Persistent.)


AR(1)-GARCH(1,1): Example – USD/CHF

ARCH Estimation: MLE – Regularity Conditions

Note: The appeal of MLE is the optimal properties of the resulting estimators under ideal conditions.

• Crowder (1976) gives one set of sufficient regularity conditions for the MLE in models with dependent observations to be consistent and asymptotically normally distributed.

• Verifying these regularity conditions is very difficult for general ARCH models – proofs exist only for special cases like the GARCH(1,1).

For example, for the GARCH(1,1) model: if E[ln(α1 zt² + β1)] < 0, the model is strictly stationary and ergodic. See Nelson (1990) and Lumsdaine (1996).


ARCH Estimation: MLE – Regularity Conditions

• Block-diagonality
In many applications of ARCH, the parameters can be partitioned into mean parameters, θ1, and variance parameters, θ2.

Then, δμt(θ)/δθ2 = 0 and, although δσt²(θ)/δθ1 ≠ 0, the information matrix is block-diagonal (under general symmetric distributions for zt and for particular ARCH specifications).

Not a bad result: - Regression can be consistently done with OLS. - Asymptotically efficient estimates for the ARCH parameters can be obtained on the basis of the OLS residuals.

ARCH Estimation: MLE – Remarks

• But, block diagonality cannot buy everything: - Conventional OLS standard errors could be terrible.

– When testing for serial correlation in the presence of ARCH, the conventional Bartlett standard error – T^(-1/2) – could seriously underestimate the true standard errors.


ARCH Estimation: QMLE

• The assumption of conditional normality is difficult to justify in many empirical applications. But, it is convenient.

• The MLE based on the normal density may be given a quasi- maximum likelihood (QMLE) interpretation.

• If the conditional mean and variance functions are correctly specified, the normal quasi-score evaluated at θ0 has a martingale difference property:

E[S(yt, θ0)] = 0

Since this equation holds for any value of the true parameters, the

QMLE, say θQMLE is Fisher-consistent –i.e., E[S(yT, yT-1,…y1 ; θ)] = 0 for any θ∈Ω.

ARCH Estimation: QMLE
• The asymptotic distribution for the QMLE takes the form:
√T (θQMLE – θ0) →d N(0, A0⁻¹ B0 A0⁻¹).

The covariance matrix (A0⁻¹ B0 A0⁻¹) is called "robust" – robust to departures from "normality."

• Bollerslev and Wooldridge (1992) study the finite sample distribution of the QMLE and the Wald statistics based on the robust covariance matrix estimator:

For symmetric departures from conditional normality, the QMLE is generally close to the exact MLE.

For non-symmetric conditional distributions both the asymptotic and the finite sample loss in efficiency may be large.


ARCH Estimation: Non-Normality

• The basic GARCH model allows a certain amount of leptokurtosis. It is often insufficient to explain real world data.

Solution: Assume a distribution other than the normal which helps to allow for the fat tails in the distribution.
• t Distribution – Bollerslev (1987)
The t distribution has a degrees-of-freedom parameter which allows greater kurtosis. The t log-likelihood contribution is:

lt = ln[ Γ(0.5(v+1)) Γ(0.5v)⁻¹ (v – 2)^(-1/2) (1 + zt²(v – 2)⁻¹)^(-(v+1)/2) ] – 0.5 ln(σt²)

where Γ is the gamma function and v is the degrees of freedom. As v → ∞, this tends to the normal distribution.
• GED Distribution – Nelson (1991)

ARCH Estimation: GMM

• Suppose we have an ARCH(q). We need moment conditions:

(1) E[m1] = E[xt (yt – xt′γ)] = 0
(2) E[m2] = E[εt-i² (εt² – σt²)] = 0,   i = 1, ..., q
(3) E[m3] = E[εt² – ω/(1 – α1 – ... – αq)] = 0

Note: (1) refers to the conditional mean, (2) refers to the conditional variance, and (3) to the unconditional mean.

• GMM objective function:
Q(X, y; θ) = Ê[m(θ; X, y)]′ W Ê[m(θ; X, y)]
where
Ê[m(θ; X, y)] = [ Ê[m1]′  Ê[m2]′  Ê[m3]′ ]′


ARCH Estimation: GMM • γ has k free parameters; α has q free parameters. Then, we have r=k+q+1 parameters. m(θ;X,y) has r = k+q+1 equations.

Dimensions: Q is 1x1; E[m(θ;X,y)] is rx1; W is rxr.

• Problem is over-identified: more equations than parameters so cannot solve E[m(θ;X,y)]=0, exactly.

• Choose a weighting matrix W for objective function and minimize using numerical optimization.

• Optimal weighting matrix: W = {E[m(θ; X, y) m(θ; X, y)′]}⁻¹.
Var(θ̂) = (1/T)[D′ W D]⁻¹,

where D = δE[m(θ; X, y)]/δθ′ – expressions evaluated at θGMM.

ARCH Estimation: Testing

• Standard BP-type test, with auxiliary regression given by:
et² = α0 + α1 et-1² + .... + αq et-q² + vt

H0: α1 = α2 = ... = αq = 0 (No ARCH). It is not possible to do a GARCH test this way, since we would be using the same lagged squared residuals.

Then, the LM test is (T – q) R² →d χ²q – Engle's (1982) ARCH test.

• In ARCH Models, testing as usual: LR, Wald, and LM tests.

Reliable inference from the LM, Wald and LR test statistics generally does require moderately large sample sizes of at least two hundred or more observations.


ARCH Estimation: Testing

• Issues:

– Non-negativity constraints must be imposed. θ0 is often on the boundary of Ω. (Two-sided tests may be conservative.)

– Lack of identification of certain parameters under H0 creates a singularity of the information matrix under H0. For example, under H0: α1 = 0 (No ARCH), in the GARCH(1,1), ω and β1 are not jointly identified. See Davies (1977).

• Ignoring ARCH
– You suspect yt has an AR structure: yt = γ0 + γ1 yt-1 + εt.
Hamilton (2008) finds that the OLS t-test with no correction for ARCH spuriously rejects H0: γ1 = 0 with arbitrarily high probability for sufficiently large T. White's (1980) SE help. NW SE help less.

ARCH Estimation: Testing

Figure (from Hamilton (2008)): Fraction of samples in which the OLS t-test leads to rejection of H0: γ1 = 0, as a function of T, for a regression with Gaussian errors (solid line) and Student's t errors (dashed line).

Note: H0 is actually true and the test has a nominal size of 5%.


Testing for Heteroscedasticity: ARCH

• ARCH Test for the 3 factor F-F model for IBM returns (T=320), with one lag:

IBM_Ret – rf = β0 + β1 (Mkt_Ret – rf) + β2 SMB + β3 HML + ε

> b <- solve(t(x)%*%x)%*%t(x)%*%y    # OLS regression
> e <- y - x%*%b
> e2 <- e^2
> xx1 <- e2[1:(T-1)]
> fit2 <- lm(e2[2:T]~xx1)
> r2_e2 <- summary(fit2)$r.squared
> r2_e2
[1] 0.2656472
> lm_t <- (T-1)*r2_e2
> lm_t
[1] 84.74147

LM-ARCH Test: 84.74 ⟹ reject H0 at the 5% level (χ²[1],.05 ≈ 3.84), the usual result for financial time series.

ARCH: Which Model to Use

• Questions 1) Lots of ARCH models. Which one to use? 2) Choice of p and q. How many lags to use?

• Hansen and Lunde (2004) compared lots of ARCH models: - It turns out that the GARCH(1,1) is a great starting model. - Add a leverage effect for financial series and it’s even better. - A t-distribution is also a good addition.


Realized Volatility (RV) Models

• French, Schwert and Stambaugh (1987) use higher frequency data to estimate the variance as:

st² = (1/k) Σi=1..k rt-i²

where the r's are realized daily returns, and st² estimates the monthly variance.

• Model-free measure –i.e., no need for ARCH-family specifications. It is very popular for intra-daily data, called high frequency (HF) data. The measure is called realized volatility (RV).

• Very popular to calculate intra-day or daily volatility. For example, based on TAQ data, say 1-minute or 10-minute realized returns (rt,j is the jth interval return on day t), we can calculate the daily variance, or RVt:

RVt = Σj=1..M rt,j²,   t = 1, 2, ...., T
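Example (R sketch): computing daily RV from a matrix of intra-day returns – the simulated returns below simply stand in for HF data such as TAQ.

set.seed(9)
T <- 60; M <- 78                                 # 60 days, 78 "5-minute" returns per day
r_intraday <- matrix(rnorm(T*M, 0, 0.1), T, M)   # r[t, j]: j-th intra-day return on day t
RV <- rowSums(r_intraday^2)                      # RV_t = sum_j r_{t,j}^2
head(RV)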


Realized Volatility (RV) Models

RVt = Σj=1..M rt,j²,   t = 1, 2, ...., T

th where rt,j is j interval return on day t. That is, RV is defined as the sum of squared intraday returns.

• We can use time series models –say an ARIMA- for RVt to forecast daily volatility.

• RV is affected by microstructure effects: bid-ask bounce, infrequent trading, calendar effects, etc. For example, the bid-ask bounce induces serial correlation in intra-day returns, which biases RVt. (Big problem!)

-Proposed Solution: Filter the intra-day returns using MA or AR models before constructing RV measures.

Realized Volatility (RV) Models – Properties
• Under some conditions (bounded kurtosis and autocorrelation of squared returns less than 1), RVt is consistent and m.s. convergent.
• Realized volatility is a measure; it has a distribution.
• For returns, the distribution of RV is non-normal (as expected). It tends to be skewed right and leptokurtic. For log returns, the distribution is approximately normal.
• Daily returns standardized by RV measures are nearly Gaussian.
• RV is highly persistent.
• The key problem is the choice of frequency (or number of observations per day).

— Bandi and Russell (2003) propose a data-based method for choosing frequency that minimizes the MSE of the measurement error. — Simulations and empirical examples suggest optimal sampling is around 1-3 minutes for equity returns.


RV Models - Variation

• Another method: AR model for volatility:

|εt| = ω + β |εt-1| + ut

The εt are estimated from a first step procedure -i.e., a regression. Asymmetric/Leverage effects can also be introduced.

OLS estimation possible. Make sure that the variance estimates are positive.
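Example (R sketch): the OLS step of this AR(1) for absolute residuals – here e is just a simulated stand-in for the first-step residuals.

set.seed(5)
e <- rnorm(500)                                  # stand-in for first-step residuals
T <- length(e)
fit_abs <- lm(abs(e[2:T]) ~ abs(e[1:(T-1)]))     # |e_t| = omega + beta*|e_{t-1}| + u_t
coef(fit_abs)                                    # omega-hat and beta-hat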

Other Models - Parkinson’s (1980) estimator

• The Parkinson’s (1980) estimator: 2 2 s t={Σt [ln(Ht)-ln(Lt)] /(4ln(2)T)},

where Ht is the highest price and Lt is the lowest price.

• There is an RV counterpart, using HF data: the Realized Range (RR):

RRt = Σj [100 x (ln(Ht,j) – ln(Lt,j))]² / (4 ln(2)),

th where Ht,j and Lt,j are the highest and lowest price in the j interval.

• These "range" estimators are very good and very efficient.

Reference: Christensen and Podolskij (2005).
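Example (R sketch): both formulas on simulated high/low prices – the price paths below are arbitrary and only illustrate the calculations.

set.seed(13)
T <- 250
Lt <- 100 + cumsum(rnorm(T))                           # daily "low" prices (simulated)
Ht <- Lt * exp(abs(rnorm(T, 0, 0.01)))                 # daily "high" prices (>= lows)
s2_park <- sum((log(Ht) - log(Lt))^2) / (4*log(2)*T)   # Parkinson variance estimate
s2_park

M  <- 78                                               # intra-day intervals for one day
Hj <- 100*exp(abs(rnorm(M, 0, 0.002)))                 # interval highs
Lj <- 100*exp(-abs(rnorm(M, 0, 0.002)))                # interval lows
RR_t <- sum((100*(log(Hj) - log(Lj)))^2) / (4*log(2))  # realized range for that day
RR_t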


Stochastic volatility (SV/SVOL) models
• Now, instead of a volatility that is known at time t (given past information), as in ARCH models, we let σt be driven by its own stochastic shock, υt. Using logs, volatility follows an AR(1):

log σt = ω + β log σt-1 + υt;   υt ~ N(0, συ²)

• The difference with ARCH models: the shocks that govern the volatility are not necessarily the εt's.

• Usually, the standard model centers log volatility around ω:

log σt = ω + β (log σt-1 – ω) + υt

Then,

E[log(σt)] = ω
Var[log(σt)] = κ² = συ²/(1 – β²)
⇒ Unconditional distribution: log(σt) ~ N(ω, κ²)

Stochastic volatility (SV/SVOL) models
• Like ARCH models, SV models produce returns with kurtosis > 3 (and, also, positive autocorrelation between squared excess returns):

Var[rt] = E[(rt – E[rt])²] = E[σt² zt²] = E[σt²] E[zt²]
        = E[σt²] = exp(2ω + 2κ²)    (property of the log-normal)

kurt[rt] = E[(rt – E[rt])⁴] / {E[(rt – E[rt])²]}²
         = E[σt⁴] E[zt⁴] / {(E[σt²])² (E[zt²])²}
         = 3 exp(4ω + 8κ²) / exp(4ω + 4κ²) = 3 exp(4κ²) > 3!

• We have 3 SVOL parameters to estimate: φ = (ω, β, συ).

• Estimation:
– GMM: Using moments, like the sample variance and kurtosis of returns. Complicated – see Andersen and Sorensen (1996).
– Bayesian: Using MCMC methods (mainly, Gibbs sampling). Modern approach.
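Example (R sketch): simulating the standard SVOL model above and checking the kurtosis result – the parameter values are arbitrary, and the sample kurtosis converges slowly, so it only roughly matches the implied value.

set.seed(8)
T <- 100000
omega <- -0.5; beta <- 0.95; sig_v <- 0.2
logsig <- numeric(T); logsig[1] <- omega
for (t in 2:T) logsig[t] <- omega + beta*(logsig[t-1] - omega) + rnorm(1, 0, sig_v)
r <- exp(logsig) * rnorm(T)                        # r_t = sigma_t * z_t
kappa2 <- sig_v^2 / (1 - beta^2)
c(sample = mean((r - mean(r))^4)/var(r)^2,         # sample kurtosis of simulated returns
  implied = 3*exp(4*kappa2))                       # 3*exp(4*kappa^2) from the formula above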


Stochastic volatility (SV/SVOL) models

• The Bayesain approach takes advantage of the idea of hierarchical structure:

-f(y|ht) (distribution of the data given the volatilities) -f(ht|φ) (distribution of the volatilities given the parameters) -f(φ) (distribution of the parameters)

Algorithm: MCMC (JPR (1994).)

Augment the parameter space to include ht. Using a proper prior for f(ht, φ), MCMC methods provide inference about the joint posterior f(ht, φ|y). We'll go over this topic in Lecture 17.

Classic references:
Jacquier, E., N. Polson, and P. Rossi (1994), "Bayesian Analysis of Stochastic Volatility Models," Journal of Business and Economic Statistics. (Estimation)
Heston, S. L. (1993), "A Closed-Form Solution for Options with Stochastic Volatility with Applications to Bond and Currency Options," Review of Financial Studies. (Theory)
