
SIEVE BOOTSTRAP-BASED INTERVALS FOR GARCH PROCESSES

by

Garrett Tresch

A capstone project submitted in partial fulfillment of the requirements of the Academic Honors Program at Ashland University, April 2015

Faculty Mentor: Dr. Maduka Rupasinghe, Assistant Professor of Mathematics
Additional Reader: Dr. Christopher Swanson, Professor of Mathematics

ABSTRACT

Time Series analysis deals with observing a variable, such as interest rates, exchange rates, or rainfall, at regular intervals of time. The main objectives of Time Series analysis are to understand the underlying processes and the effects of external variables in order to predict future values. Time Series methodologies have wide applications in business and in other fields where quantitative modeling is necessary. Generalized Autoregressive Conditional Heteroscedastic (GARCH) models are extensively used in finance to model empirical time series in which the current variation of an observation, known as volatility, depends upon past observations and past variations. Drawbacks of the existing methods for obtaining prediction intervals include the assumption that the orders associated with the GARCH process are known, and the heavy computational time involved in fitting numerous GARCH processes. This paper proposes a novel and computationally efficient method for the creation of future prediction intervals using the Sieve Bootstrap, a promising procedure for Autoregressive Moving Average (ARMA) processes. This bootstrapping technique remains efficient when computing future prediction intervals for the returns as well as the volatilities of GARCH processes and avoids extensive computation and parameter estimation. Both the included Monte Carlo simulation study and the exchange rate application demonstrate that the proposed method works very well under normally distributed errors.

Table of Contents

Abstract

Section 1: Introduction

Section 2: The Sieve Bootstrap Procedure

Section 3: The Simulation Study and Application

Exchange Rate Case Study

Section 4: Conclusion

References

Author’s Biography

1.) Introduction

Figure 1: The Inflation-Adjusted Price of the S&P 500; A Strong Example of a Time Series

------

The sieve bootstrap technique (referred to as SB throughout) was first proposed by

Buhlmann (1997). This procedure utilizes the general concepts of bootstrapping, a resampling technique in which data points are drawn at random, with replacement, from the original sample and collected into a new bootstrapped sample. The main idea of this technique is that repetition of this process provides insight into the underlying behavior of the process as well as the range of variation it could display, with calculated likelihoods. The sieve bootstrap is a variation on this process in which sieves, that is, sequences of autoregressive approximations of increasing order, are used to approximate an underlying process and, similarly, gather vital statistical information regarding the original data. Autoregressive processes are those in which the current value is a linear combination of previous values of the same series. These will be described in further detail later in this paper. To simulate desired models that are dependent on previous values, the collection of data will be presented in the form of a time series where time is the determining

factor of order. A time series is a set of data in which the explanatory variable, observed at fixed time intervals, is time itself, paired with, and often analyzed against, one or more response variables. Thus, the analysis of time series can be considered a subset of data analysis that serves to explain the effects on one or many dependent variables of the often elusive procession of time. As can be imagined, there are countless applications for this process including, but not limited to, the extensive study of financial markets (Figure 1). To learn more about the behavior of these applications, statistical analysis can be performed on the data sets. Using this gained information, models can then be fitted to explain the nuances of the particular series and its accompanying statistics. To allow for further understanding of the data at hand, an error component is often included within the general model. These error terms are calculated by taking the difference between the actual time series and the fitted model estimates. These errors can be, and within this paper are, referred to as the residuals of the series.

As previously mentioned, the SB technique involves the resampling of the residuals of a fitted Autoregressive or AR($p$) model in which $p$ is the order, where it is assumed that $p$ increases with the sample size $n$ (Buhlmann, 1997). This order can be considered the maximum number of references back to previous data, or the largest time span into the past over which there is a linear impact on the present value. An autoregressive model is a stochastic process wherein future values can be constructed by using the model formulation as a recursion with estimated coefficients that weigh previous data points. In a general sense, AR($p$) refers to previous values at different time lags, or units away from the current time, as having different linear effects on current and future values. The order $p$ represents the backshift of a total of $p$ time lags; therefore, the model will contain $p$ weighted coefficients. In practice, when an AR model is fitted to a time series, the first $p$ values will be removed since they have no previous terms to weigh and,

therefore, a model representation of these values cannot be created. Thus, residuals can be estimated for the $(p+1)$th to $n$th values and subsequently resampled via the bootstrap method. In previous, yet fairly recent, articles ranging from 2002 to 2004, Alonso et al. discuss obtaining prediction intervals using the SB method and an underlying ARMA¹ process.

From this point, the method has consistently been improved in a variety of studies, from the inclusion of an inflation factor for the prediction intervals (Mukhopadhyay and Samaranayake,

2010) to extensions to other model structures including Fractionally Integrated Autoregressive

Moving Average (FARIMA) models (Rupasinghe and Samaranayake, 2012-2014). In recent years the SB has been extended to the study of Autoregressive Conditional Heteroscedastic

(ARCH) and Generalized Autoregressive Conditional Heteroscedastic (GARCH) processes, the primary models of interest in this study that will be further examined in the following sections.

All in all, the SB resampling technique has been applied to many models due to its lack of dependence on underlying structure. Regardless of whether the model fitted follows an AR, MA,

ARMA, or FARIMA structure, a new AR model is always fitted, making the actual order, structure, and error distribution of each of these processes unimportant for the creation of the distribution of statistics. It is also worth mentioning that the sieve bootstrap technique remains a rather cost-effective method with low computation time. Perhaps, in today's quick and technology-heavy world, this is the most appealing of all of its qualities.

In a variety of different fields there are time series that display attributes of changing variance. For example, financial market data often contain time periods of high and low volatility

¹ An ARMA(p,q) process utilizes the linear dependence on previous values as described for the AR process but also incorporates a moving average component in which data can depend on previous errors that are independently and identically distributed. The formulation is as follows:

$$X_t = \sum_{i=1}^{p} \phi_i X_{t-i} + \varepsilon_t + \sum_{j=1}^{q} \theta_j \varepsilon_{t-j},$$

where $X_t$ is the time series at time $t$, the $\phi_i$ and $\theta_j$ are calculated coefficients, and the $\varepsilon_t$ are the previous errors.

depending on the confidence of consumers, the state of world affairs, and various other influential factors. To account for this, ARCH models were theorized by Engle (1982), in which changing variance is taken into account by treating volatility as a linear function of squared returns. This method would later be expanded to GARCH structures by Bollerslev (1986) by adding a linear moving average component in which previous volatilities provide a basis for present

and future values of a series. In general notation, a time series $\{X_t\}$ is said to be a GARCH($p,q$) process if it serves as a solution to the following equations:

$$X_t = \sigma_t \varepsilon_t \qquad (1)$$

and

$$\sigma_t^2 = \omega + \sum_{i=1}^{p} \alpha_i X_{t-i}^2 + \sum_{j=1}^{q} \beta_j \sigma_{t-j}^2, \qquad (2)$$

where $\{\varepsilon_t\}$ is a sequence of independent, identically distributed (i.i.d.) random variables with a mean of zero and unit variance; the volatility process $\{\sigma_t\}$ is a stochastic process that is assumed to be independent of $\varepsilon_t$; and $\omega$, $\alpha_i$, and $\beta_j$ are unknown parameters satisfying $\omega > 0$, $\alpha_i \ge 0$ for $i = 1, \dots, p$, and $\beta_j \ge 0$ for $j = 1, \dots, q$. The GARCH($p,q$) process was shown by Tsay (2002) to be weakly stationary² if the following qualification is met: $\sum_{i=1}^{\max(p,q)} (\alpha_i + \beta_i) < 1$, where $\alpha_i = 0$ for $i > p$ and $\beta_j = 0$ for $j > q$. Due to some of the assumptions of stationarity involved in much of the derivation of the following method, this particular property remains quite important.

² Stationarity is a property of a process in which the joint probability distribution does not change when shifted in time. The GARCH process is weakly stationary in that this statement only holds for the first moment and the autocovariance (a process's covariance with itself at different times or lags).
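To make equations (1) and (2) concrete, the following is a minimal Python sketch of simulating a GARCH(1,1) path under normal errors; the function name and the parameter values ($\omega = 0.05$, $\alpha_1 = 0.1$, $\beta_1 = 0.85$, chosen to satisfy the stationarity condition) are illustrative assumptions, not the models used later in the study.

```python
import numpy as np

def simulate_garch11(n, omega=0.05, alpha1=0.1, beta1=0.85, burn=500, seed=0):
    """Generate a GARCH(1,1) series following equations (1)-(2).

    The coefficients are hypothetical; alpha1 + beta1 < 1 satisfies the
    weak-stationarity condition quoted from Tsay (2002).
    """
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n + burn)      # i.i.d. N(0, 1) innovations
    x = np.zeros(n + burn)
    sig2 = np.full(n + burn, omega / (1 - alpha1 - beta1))  # unconditional var
    for t in range(1, n + burn):
        sig2[t] = omega + alpha1 * x[t - 1] ** 2 + beta1 * sig2[t - 1]
        x[t] = np.sqrt(sig2[t]) * eps[t]     # X_t = sigma_t * eps_t
    return x[burn:], sig2[burn:]             # drop the burn-in period
```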


From its initial conception onward, the GARCH structure has been adjusted into many different forms. Some examples of these alterations include exponential and nonlinear

GARCH models (Nelson, 1991; Engle and Ng, 1993 respectively) as well as long-memory and integrated GARCH models (Conrad and Karanasos, 2006; Engle and Bollerslev, 1986).

Regardless of the changes made, the basis of the GARCH model remains a popular structure to be used in applications that vary from financial analysis to weather patterns and even into signal processing. From these applications, it has become a major goal to use the knowledge of the

GARCH structure to forecast future values as well as uncover the attributes that these future values should hold. The majority of the original estimations and applications of the model itself revolved around the use of the GARCH structure for point forecasts of volatility. It would be Pascual et al. (2006), many years after the original formation of the model, who would use a bootstrap resampling technique to create prediction intervals and to give insight into the uncertainty held within future values. Pascual et al. (2006) established prediction intervals for both returns and volatilities by building on previous work done by Miguel and Olave

(1999) and Reeves (2005), with results comparable to theoretical benchmarks. Even more recently, it was Chen et al. (2011) who created computationally efficient prediction intervals for both returns and volatilities of GARCH processes by utilizing an autoregressive moving average (ARMA) representation of the squared returns. Nevertheless, even though the desired results were met, the methods for the creation of prediction intervals in both Pascual et al. (2006) and Chen et al. (2011) have specific drawbacks. For example, both methods strongly depend on the estimation of the structure of the original model. Estimations of the autoregressive and moving average components within the GARCH process are requirements of the implementations of these

previously discussed methods. Thus, in a practical sense these methods remain questionable because of these requirements.

The main focus of this paper is based on the work of Rupasinghe (2015) who, in his recent, unpublished studies, extended the procedures laid forth by Chen and Pascual while also utilizing a new method for the creation of future prediction intervals. By focusing on the ARMA representation of the GARCH($p,q$) process given by Chen et al., Rupasinghe realized he could utilize the SB procedure on the squared returns of the underlying process and then follow the work of Alonso (2002-2004) by bootstrapping this ARMA process, instead of estimating the underlying structure, to obtain residuals. This is easy to see when the GARCH($p,q$) process is displayed in its ARMA representation as seen below (Chen et al., 2011). Let $\nu_t = X_t^2 - \sigma_t^2$. Then equation (2) becomes:

$$X_t^2 - \nu_t = \omega + \sum_{i=1}^{p} \alpha_i X_{t-i}^2 + \sum_{j=1}^{q} \beta_j \left(X_{t-j}^2 - \nu_{t-j}\right),$$

which implies

$$X_t^2 = \omega + \sum_{i=1}^{p} \alpha_i X_{t-i}^2 + \sum_{j=1}^{q} \beta_j X_{t-j}^2 + \nu_t - \sum_{j=1}^{q} \beta_j \nu_{t-j}.$$

Letting $r = \max(p, q)$, with $\alpha_i = 0$ for $i > p$ and $\beta_j = 0$ for $j > q$, yields the following ARMA process:

$$X_t^2 = \omega + \sum_{i=1}^{r} (\alpha_i + \beta_i) X_{t-i}^2 + \nu_t - \sum_{j=1}^{q} \beta_j \nu_{t-j}, \qquad (3)$$

where $\{\nu_t\}$ is a white noise sequence. Rupasinghe's advancement, while not necessarily extensive, is advantageous for several important reasons. Firstly, an underlying structure does not have to be estimated. In this sense, if there is an underlying GARCH process, the orders and coefficients of the process do not need to be estimated for the creation of return or volatility prediction

intervals. Because there is no need for such parameter estimation, the method becomes more computationally efficient in simulations and in any circumstance that serves to strengthen the validity of the pre-existing theory.
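As an illustration of equation (3), the short sketch below maps GARCH coefficients to the implied ARMA coefficients of the squared returns; the helper name garch_to_arma and the example values are hypothetical.

```python
import numpy as np

def garch_to_arma(alpha, beta):
    """Map GARCH(p,q) coefficients to the ARMA form of equation (3).

    The AR part of the squared returns has coefficients alpha_i + beta_i
    (padding the shorter vector with zeros up to max(p, q)); the MA part
    has coefficients -beta_j. The example values below are hypothetical.
    """
    alpha, beta = np.asarray(alpha, float), np.asarray(beta, float)
    r = max(len(alpha), len(beta))
    a = np.pad(alpha, (0, r - len(alpha)))
    b = np.pad(beta, (0, r - len(beta)))
    return a + b, -beta       # AR coefficients, MA coefficients

# A GARCH(1,1) with alpha1 = 0.1 and beta1 = 0.85 implies an ARMA(1,1)
# for the squared returns with phi1 = 0.95 and theta1 = -0.85.
print(garch_to_arma([0.1], [0.85]))
```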

This paper applies the SB method of using the squared returns of a GARCH process as done in Rupasinghe's work. However, this study will also include some innovations that enable computations of prediction intervals for volatilities within the procedure. Furthermore, this paper utilizes an application in which exchange rates between U.S. Dollars and Yen are studied in order to provide real-life implications of applying the SB process that is proposed in the following section. This case study follows the application of Chen's paper as a comparative basis, with the ultimate goal of strengthening the use of the proposed procedure.

The following section describes in detail the SB procedure employed for the creation of

GARCH intervals. Following this is a description of the extensive simulation study performed that serves to establish some of the finite properties behind the proposed method. The section concludes with an application dealing with exchange rates in order to build an understanding of the real-world uses and implications of this method. The paper will close with a conclusion that will interpret the implications of the results gathered.


2.) The SB Procedure

Figure 2: The generated GARCH(1,1) model to which the SB procedure was applied in the simulation study. The model was generated with equation (7) as defined in Section 3.

------

The following steps were adopted from Rupasinghe and Samaranayake (2012), with some additional modifications to adjust for GARCH errors and to add intervals for the bootstrapped volatilities. Let $X_1, \dots, X_n$ denote the realization of a GARCH process, and let $Y_t = X_t^2$ for $t = 1, \dots, n$.

1.) Choose a maximum order, $p_{max}$, for the AR($p$) model that fits the squared series³ $Y_t$. Then, find the optimal order, $\hat{p}$, using the AIC criterion among the values $p = 1, \dots, p_{max}$. Within the simulation, a fixed value of $p_{max}$ was used.

2.) Estimate the coefficients $\hat{\phi}_1, \hat{\phi}_2, \dots, \hat{\phi}_{\hat{p}}$ of the AR($\hat{p}$) process using the Yule-Walker or least squares method. It was the Yule-Walker method that was used in this study [the Yule-Walker method was used for coefficient estimation within Alonso et al. (2002, 2003, 2004) and Rupasinghe and Samaranayake (2012, 2014)].

³ Akaike information criterion (AIC) is a measurement of the overall quality of a fitted model. The underlying process relies on a compromise between goodness of fit and the overall complexity of the fitted model.

3.) Calculate the $(n - \hat{p})$ residuals as $\bar{\varepsilon}_t = (Y_t - \bar{Y}) - \sum_{i=1}^{\hat{p}} \hat{\phi}_i (Y_{t-i} - \bar{Y})$ for $t = \hat{p}+1, \dots, n$, where $\bar{Y}$ is the mean of $Y_1, \dots, Y_n$.

4.) These residuals have to be centered when using the Yule-Walker method (Thombs and Schucany, 1990). The centered residuals are denoted by $\hat{\varepsilon}_t$, so $\hat{\varepsilon}_t = \bar{\varepsilon}_t - \bar{\varepsilon}_{(\cdot)}$, where $\bar{\varepsilon}_{(\cdot)} = (n - \hat{p})^{-1} \sum_{t=\hat{p}+1}^{n} \bar{\varepsilon}_t$.

5.) Compute the empirical distribution function of the residuals, $\hat{F}_{\hat{\varepsilon}}(x) = (n - \hat{p})^{-1} \sum_{t=\hat{p}+1}^{n} \mathbf{1}(\hat{\varepsilon}_t \le x)$.

6.) Then, re-sample, with replacement, the bootstrap innovations $\varepsilon_t^*$, $t = 1, \dots, n$, from this distribution.

7.) Generate the bootstrapped series $Y_1^*, \dots, Y_n^*$ based on the recursion $Y_t^* - \bar{Y} = \sum_{i=1}^{\hat{p}} \hat{\phi}_i (Y_{t-i}^* - \bar{Y}) + \varepsilon_t^*$, with starting values $Y_t^* = \bar{Y}$ for $t \le 0$. The non-positive lags are then removed so that the effect of the initial values is minimized.

8.) Fit an additional AR($\hat{p}$) model to $Y_1^*, \dots, Y_n^*$ using the Yule-Walker method, and let the estimated AR coefficients be denoted by $\hat{\phi}_1^*, \dots, \hat{\phi}_{\hat{p}}^*$. It is important to mention that the same order $\hat{p}$ was used in this step as in step 1 for the sake of yielding better coverages (Rupasinghe and Samaranayake, 2012, 2014).

9.) Using the new coefficients $\hat{\phi}_1^*, \dots, \hat{\phi}_{\hat{p}}^*$ obtained in the previous step, compute the $k$-step ahead bootstrap observations by way of recursion, using the following: $Y_{n+k}^* - \bar{Y} = \sum_{i=1}^{\hat{p}} \hat{\phi}_i^* (Y_{n+k-i}^* - \bar{Y}) + \varepsilon_{n+k}^*$, where $k = 1, 2, \dots$ and the $\varepsilon_{n+k}^*$ are drawn as in step 6. Note: $Y_{n+k}^*$ should be conditioned on the original observed data and not the bootstrap data, by setting $Y_t^* = Y_t$ for $t \le n$, as recommended by Cao (1997) and Alonso (2002, 2004).

10.) Using the future values and the relationship between the AR and GARCH processes, the recursion $\hat{\sigma}_{n+k}^{*2} = \bar{Y} + \sum_{i=1}^{\hat{p}} \hat{\phi}_i^* (Y_{n+k-i}^* - \bar{Y}) = Y_{n+k}^* - \varepsilon_{n+k}^*$ can be used to calculate the future volatilities.

11.) Obtain the bootstrap distribution of $Y_{n+k}^*$, denoted by $\hat{F}_{Y_{n+k}}^*$, and the bootstrap distribution of $\hat{\sigma}_{n+k}^{*2}$, denoted by $\hat{F}_{\sigma^2_{n+k}}^*$, by repeating steps 6-10 $B$ times, where $B$ is set to 1,000 in the following simulation study.

12.) A $(1-\alpha)100\%$ prediction interval for $X_{n+k}$ is then computed as $\left[-\sqrt{Q_Y^*(1-\alpha)},\ \sqrt{Q_Y^*(1-\alpha)}\right]$, where $Q_Y^*(\cdot) = \hat{F}_{Y_{n+k}}^{*\,-1}(\cdot)$ is the quantile function of the estimated bootstrap distribution of $Y_{n+k}^*$. In addition, a $(1-\alpha)100\%$ prediction interval for $\sigma_{n+k}^2$ is computed as $\left[0,\ Q_{\sigma^2}^*(1-\alpha)\right]$, where $Q_{\sigma^2}^*(\cdot) = \hat{F}_{\sigma^2_{n+k}}^{*\,-1}(\cdot)$. This is the same step used by Chen et al. since the original GARCH structure is now recovered.

Due to the limitations of computational structure, a finite-sample simulation has been performed in order to validate this method. This Monte Carlo simulation study is described in detail in the following section.


3.) The Simulation Study and Application

Figure 3: Daily exchange rate of US Dollars/Japanese Yen from 28 March 1998 to 28 July 2006. The application following the simulation study uses this time series and the SB technique to create 20 future prediction intervals for the process's returns and volatilities. ------

For further investigation of the previous theory, a Monte Carlo simulation study has been carried out with various models and sample sizes. In all, three models were generated with sample sizes of 300, 1000, and 3000 to examine the effect of sample size on the procedure and as a manner of comparison with the previous theory. From these, and in accordance with the existing literature on the construction of prediction intervals, bootstrap prediction interval upper and lower values, bootstrap lengths, and corresponding standard errors for both the returns and volatilities of each future value have been calculated using the previously described method. By using theoretical intervals computed with known order and parameter values, as well as by counting the respective bootstrap intervals covered out of the entirety of those created, it is possible to find coverages to compare to the chosen theoretical level of 95% at each respective lead (in other words, future value at a given time in the future). Through repetition of the process for each of these leads, the recorded values can be averaged for the sake of reducing the impact of extraneous outcomes and providing more accurate general results.

The representations of $X_t$ given in equations (1)-(2) were used to generate the time series. In particular, the following three models for heteroscedastic errors were used in this simulation:

Model I: ARCH(1)

$$\sigma_t^2 = \omega + \alpha_1 X_{t-1}^2 \qquad (5)$$

Model II: ARCH(2)

$$\sigma_t^2 = \omega + \alpha_1 X_{t-1}^2 + \alpha_2 X_{t-2}^2 \qquad (6)$$

Model III: GARCH(1,1)

$$\sigma_t^2 = \omega + \alpha_1 X_{t-1}^2 + \beta_1 \sigma_{t-1}^2 \qquad (7)$$

These specific models were used in Chen et al.'s 2011 simulation study, which has been shown to provide results comparable with techniques prior to and within the 2011 examination. Similarly, it is the aim of this study to use the proposed SB method above to analyze the future prediction intervals of each of these models and examine the results for comparability with Chen's, as well as other various, works.

For each combination of model and sample size, independent series were generated and the SB steps in Section 2 were completed. As a basis for comparison for each of these simulations, and to compute comparable values such as coverages, future observations generated by the original model were created for each future lead. These future values can be denoted as $X_{n+k}$, where $n$ is the aforementioned sample size and $k$ is the lead, or step ahead. Using the theoretical quantile functions⁴ of the future returns and volatilities, coverages have been calculated to explain the percentage of containment of these theoretical future values within the prediction intervals constructed using the SB method.

In general, a given run of the simulation has coverage $C(k) = S^{-1} \sum_{s=1}^{S} \mathbf{1}\{X_{n+k}^{(s)} \in [L^*(k), U^*(k)]\}$, where $\mathbf{1}$ is an indicator, or Boolean, function that produces a 1 if $X_{n+k}^{(s)}$ is within the restricting interval and a 0 otherwise, and $S$ is the number of theoretical future values generated at each lead. The interval, of course, is defined as $[L^*(k), U^*(k)] = \left[-\sqrt{Q_Y^*(1-\alpha)},\ \sqrt{Q_Y^*(1-\alpha)}\right]$, which is the prediction interval produced for the bootstrap future values as described in Section 2 at the $(1-\alpha)$ level. Finally, $X_{n+k}^{(s)}$ is defined as the $s$th future value at lead $k$ past the original sample size $n$, calculated by recursion of the theoretical model conditioned on the generated series $X_1, \dots, X_n$.

In addition to coverage, the bootstrap and theoretical interval lengths were computed as $L^*(k) = U^*(k) - L^*(k)$ for the bootstrap intervals and $L(k) = Q(1-\alpha/2) - Q(\alpha/2)$ for the theoretical intervals. Intuitively, $L(k)$ is the difference of the $(1-\alpha/2)$ and $(\alpha/2)$ percentile points of the empirical distribution of the future theoretical observations generated using the time series models from equations (5)-(7).

With statistics gathered for each generated theoretical future value of each lead, and for each bootstrapped prediction interval generated out of the $B$ cycles of the bootstrap process in Section 2 of this paper, $R$ total overarching repetitions of each of these generations were performed. From this, additional statistics can be formulated as summaries of the general patterns of each of these individual runs so that the effect of a particular extreme, individual run can be minimized and the true nature of the simulations can be uncovered. These overarching statistics include the mean coverage of returns, the mean coverage of volatilities, the mean length of theoretical intervals, the mean length of bootstrap return intervals, the mean length of bootstrap volatility intervals, and the corresponding standard errors of each. These respective, extended statistics were calculated in the following manners:⁵

⁴ As is the case with the $Q^*(\cdot)$ notation seen in step 12 of the procedure in Section 2, $Q(\cdot)$ represents a quantile function. In this case, however, the quantiles are simply drawn from the ordered theoretical values created. The process is otherwise identical to the calculations found in step 12.

Mean Return Coverage: $\bar{C}_X(k) = \frac{1}{R} \sum_{j=1}^{R} C_{X,j}(k)$

Standard Error of Mean Return Coverage: $SE[\bar{C}_X(k)] = \sqrt{\frac{1}{R(R-1)} \sum_{j=1}^{R} \left(C_{X,j}(k) - \bar{C}_X(k)\right)^2}$

Mean Volatility Coverage: $\bar{C}_\sigma(k) = \frac{1}{R} \sum_{j=1}^{R} C_{\sigma,j}(k)$

Standard Error of Mean Volatility Coverage: $SE[\bar{C}_\sigma(k)] = \sqrt{\frac{1}{R(R-1)} \sum_{j=1}^{R} \left(C_{\sigma,j}(k) - \bar{C}_\sigma(k)\right)^2}$

Mean Length of Bootstrap Return Intervals: $\bar{L}_X^*(k) = \frac{1}{R} \sum_{j=1}^{R} L_{X,j}^*(k)$

Standard Error of Mean Length of Bootstrap Return Intervals: $SE[\bar{L}_X^*(k)] = \sqrt{\frac{1}{R(R-1)} \sum_{j=1}^{R} \left(L_{X,j}^*(k) - \bar{L}_X^*(k)\right)^2}$

Mean Length of Bootstrap Volatility Intervals: $\bar{L}_\sigma^*(k) = \frac{1}{R} \sum_{j=1}^{R} L_{\sigma,j}^*(k)$

Standard Error of Mean Length of Bootstrap Volatility Intervals: $SE[\bar{L}_\sigma^*(k)] = \sqrt{\frac{1}{R(R-1)} \sum_{j=1}^{R} \left(L_{\sigma,j}^*(k) - \bar{L}_\sigma^*(k)\right)^2}$

Mean Theoretical Length: $\bar{L}(k) = \frac{1}{R} \sum_{j=1}^{R} L_j(k)$
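As a sketch of how these summaries might be computed, assuming the per-run coverages and lengths are stored in arrays of shape (R, K), the following hypothetical helpers average over runs and attach the standard errors defined above.

```python
import numpy as np

def coverage(x_future, lower, upper):
    """Per-run coverage: the share of the S simulated true future values
    at a given lead that fall inside the bootstrap prediction interval."""
    return np.mean((x_future >= lower) & (x_future <= upper))

def summarize_runs(per_run):
    """Mean and standard error over the R overarching repetitions.

    per_run: array of shape (R, K) holding one statistic (a coverage or
    an interval length) for each run and each lead k = 1, ..., K.
    """
    R = per_run.shape[0]
    mean = per_run.mean(axis=0)
    se = per_run.std(axis=0, ddof=1) / np.sqrt(R)   # SE of the mean
    return mean, se
```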

In total, 9 simulations were performed, covering every possible combination of the three sample sizes and three model types. In each of these simulations the error distributions were all normally distributed. While there are theoretical results that show the validity of the method employed in this paper on the returns of a GARCH model with exponential and t-distributed errors (Rupasinghe, 2015), limitations on time and computational power restricted much of the focus of this study to normal errors. The results have been combined in Tables I-III, each of which represents the statistical results of a particular model. The tables display the statistical calculations from above, with a particular focus on return and volatility coverages as well as the lengths of the respective bootstrap prediction intervals at future leads of 1, 10, and 20.

⁵ The notation here is relatively self-explanatory. Nevertheless, $C$ represents coverage, $L$ represents the length function, and $SE$ represents the standard error of each of the previously defined quantities. The subscripts indicate whether the calculations involve data gathered for volatilities ($\sigma$) or returns ($X$); the superscript $*$ indicates that the calculation is for the bootstrapped data, where no superscript indicates a theoretical calculation.

The results yield general conclusions that all seem to strengthen the validity of the described method. At a glance, it is easy to see that the coverages for both returns and volatilities are quite close to the theoretical level of 95%, with $\alpha = .05$, for all of the aforementioned statistical calculations. As the sample size of each model increased, the coverages moved closer to this nominal level. This is very evident in the coverages of the volatilities where, although each percentage remained comparable regardless of sample size, the increase from 300 to 1000 and then to 3000 values led to a large improvement over the initial recordings. There are some coverages above the 95% theoretical level, but these can be explained by the inflation of the prediction intervals performed within the simulation. This also explains why the lengths of both the return and volatility intervals remain slightly larger than those recorded in the theoretical results. Finally, in every instance, the standard errors remained low, implying the dispersion of these results to be rather minuscule.

Case Study/Application

Following Chen’s case study, we applied the SB method to the Yen/U.S. Dollar daily exchange rates and constructed prediction intervals for both the returns and volatilities. As is the case with


Table I. 95% Prediction intervals for returns of ARCH(1) model (standard errors in parentheses)

Leads  Sample size  Avg. coverage for return (SE)  Avg. length for return (SE)  Avg. coverage for volatility (SE)  Avg. length for volatility (SE)
T      —      95%             —                95%             —
1      300    94.63 (.0161)   1.5509 (.0054)   —               —
10     300    94.77 (.0138)   1.612 (.0051)    91.31 (.3241)   .3321 (.0036)
20     300    94.69 (.0111)   1.608 (.0051)    91.36 (.3188)   .3345 (.0037)
1      1000   94.65 (.0063)   1.557 (.0051)    —               —
10     1000   94.99 (.0136)   1.619 (.0037)    93.58 (.2673)   .3490 (.0029)
20     1000   95.03 (.0159)   1.622 (.0036)    93.63 (.2607)   .3499 (.0029)
1      3000   95.02 (.0161)   1.555 (.0044)    —               —
10     3000   95.22 (.0215)   1.628 (.0027)    94.55 (.2308)   .3588 (.0021)
20     3000   95.20 (.0233)   1.627 (.0026)    94.55 (.2334)   .3597 (.0021)

Chen’s paper, both Saturdays and Sundays were removed to avoid the effects that are distinct to weekends (Andersen et al., 2003). All holidays and other unavailable days were also removed so as to reduce the effects caused by these occasions. In order to be as consistent as possible with the literature on a comparison basis, the full sample included the rates from March 28th, 1998 to July 28th, 2006. However, it is worth noting that, given the data found and the limitations on the accepted days, the sample gathered for this case study was, at 2096 observations, a bit smaller than that of Chen’s. These original rates can be seen in Figure 3, with each time interval representing an individual day within the given dates.


Table II. 95% Prediction intervals for returns of ARCH(2) model (standard errors in parentheses)

Leads  Sample size  Avg. coverage for return (SE)  Avg. length for return (SE)  Avg. coverage for volatility (SE)  Avg. length for volatility (SE)
T      —      95%             —                95%             —
1      300    94.54 (.0083)   1.5102 (.0042)   —               —
10     300    94.65 (.0053)   1.5396 (.0040)   91.65 (.2028)   .2751 (.0026)
20     300    94.57 (.0066)   1.5394 (.0040)   91.66 (.2050)   .2756 (.0026)
1      1000   94.94 (.0147)   1.515 (.0034)    —               —
10     1000   94.79 (.0042)   1.540 (.0028)    93.86 (.1767)   .2772 (.0017)
20     1000   94.82 (.0041)   1.540 (.0027)    93.94 (.1716)   .2777 (.0018)
1      3000   94.95 (.0119)   1.513 (.0029)    —               —
10     3000   94.88 (.0037)   1.540 (.0020)    94.75 (.1617)   .2781 (.0011)
20     3000   94.88 (.0040)   1.5399 (.0020)   94.67 (.1641)   .2774 (.0011)

In order to create a series that is stationary and has a mean near zero, a transformation was applied to the data by use of the logarithmic function that follows:

$$X_t = \log\left(\frac{P_t}{P_{t-1}}\right), \qquad (8)$$

where $P_t$ denotes the exchange rate on day $t$.

The resulting series is presented in Figure 4. From initial inspection it appears as though a GARCH structure is possible. The resulting summary statistics and autocorrelations of $X_t$ can be found in Tables IV


Table III. 95% Prediction intervals for returns of GARCH(1,1) model (standard errors in parentheses)

Leads  Sample size  Avg. coverage for return (SE)  Avg. length for return (SE)  Avg. coverage for volatility (SE)  Avg. length for volatility (SE)
T      —      95%             —               95%             —
1      300    94.31 (.0130)   3.829 (.0175)   —               —
10     300    94.18 (.0133)   3.871 (.0165)   87.36 (.1868)   1.739 (.0240)
20     300    94.11 (.0182)   3.885 (.0170)   86.69 (.2504)   1.810 (.0263)
1      1000   94.66 (.0141)   3.853 (.0119)   —               —
10     1000   94.62 (.0078)   3.903 (.0113)   93.36 (.0961)   1.788 (.0164)
20     1000   94.49 (.0044)   3.922 (.0113)   93.03 (.1400)   1.891 (.0185)
1      3000   94.86 (.0135)   3.859 (.0095)   —               —
10     3000   94.91 (.0102)   3.916 (.0082)   94.93 (.0852)   1.776 (.0131)
20     3000   94.80 (.0047)   3.926 (.0076)   94.64 (.1200)   1.904 (.0121)

and V. As Table IV describes, the estimated kurtosis is surely higher than 0 and even fairly higher than 3, implying that the distribution is leptokurtic⁶. In a similar manner to the existing literature, the Jarque-Bera test⁷ (Jarque and Bera, 1980) provides a radically low p-value and thus allows for a dismissal of the hypothesis that the returns follow a Gaussian structure. In Table V it is clear to see that the autocorrelations of the squared returns remain significant. Through the discussions of West and Cho (1995) and Andersen and Bollerslev (1998), an underlying GARCH(1,1) may be

⁶ Implying that the peakedness of the resulting distribution is rather extreme. In general, the kurtosis of a distribution is a measurement of the shape of the distribution in a vertical sense.
⁷ The Jarque-Bera test is a calculation in which the goodness-of-fit of the series, with inputs of kurtosis and skewness, is computed for comparison with a normal distribution. Results and conclusions can be further calculated using p-values and a set level of significance.
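For reference, summary statistics and the Jarque-Bera test of this kind can be reproduced with scipy; the sketch below is a generic illustration, not the study's original code.

```python
import numpy as np
from scipy import stats

def diagnostics(x):
    """Summary statistics and Jarque-Bera normality test for returns x."""
    jb_stat, jb_p = stats.jarque_bera(x)
    return {
        "mean": float(np.mean(x)),
        "sd": float(np.std(x, ddof=1)),
        "skewness": float(stats.skew(x)),
        "kurtosis": float(stats.kurtosis(x, fisher=False)),  # normal = 3
        "jarque_bera_p": float(jb_p),   # a tiny p-value rejects normality
    }
```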


Figure 4: Transformed Exchange Rates: June 30th, 2006 - July 28th, 2006.

Table IV: Summary statistics for the log returns

Mean      SD      Skewness   Kurtosis   Max      Min
0.000742  0.0088  1.0037  -0.3087  5.3985  4.8132  -5.3754

Table V: Autocorrelations of the log returns and squared log returns at different time lags

Lag                  1        2        3        4        5        6
Log returns       -0.502    0.025   -0.033    0.051    0.018   -0.031
Squared returns    0.384    0.170    0.173    0.075    0.113    0.055

suitable as a model for $X_t$. With a GARCH structure assured, the SB technique can be performed through the procedure highlighted earlier within this paper. In order to create prediction intervals for twenty future values, the data for June 30th, 2006 through July 28th, 2006 were removed from the transformed data set. Thus, the procedure will generate 20-step-ahead prediction intervals from all previous times up until June 30th, 2006.

These calculated prediction intervals can then be compared to the actual exchange rate returns, which essentially represent the theoretical or true values for each of these future steps. At this point, previous studies would require some form of parametric estimation. However, the SB method as described in Section 2 does not require such approximations. Instead, the squared returns, in this case the squared daily exchange rate returns, can be placed directly into the SB algorithm. As described earlier within this section, 95% prediction intervals (PIs) will be created based on 95% quantiles of the returns resampled using the bootstrapping technique. In a similar manner, the algorithm will also create volatilities that can be represented in 95% prediction intervals through 95% upper quantiles. Due to the formulation, as well as the lack of parameter estimation required, there are no direct volatility observations to record for each of these future leads. Thus, realized volatilities must be produced to compare to the volatility PIs created. These realized volatilities are constructed using 5-minute returns from June

30th, 2006 to July 28th, 2006 and are based on the formulation:

$$\sigma_d^2 = \sum_{i=1}^{n_d} r_{d,i}^2, \qquad (9)$$

where $n_d$ is the total number of observations on a given day $d$ and $r_{d,i}$ is the $i$th 5-minute return on day $d$.
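A minimal sketch of equation (9), assuming a day's 5-minute prices are available as an array (the function name is hypothetical):

```python
import numpy as np

def realized_variance(prices_5min):
    """Equation (9): daily realized variance from 5-minute prices.

    The 5-minute log returns r_{d,i} are squared and summed over the
    n_d observations within day d.
    """
    r = np.diff(np.log(prices_5min))   # 5-minute log returns r_{d,i}
    return float(np.sum(r ** 2))       # sigma_d^2
```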

Graphical representations of the PIs for the returns and the volatilities can be found in Figures 5 and 6, respectively. Through inspection, it is immediately clear that each prediction interval contains the transformed exchange rate returns and volatilities. The PI for the volatilities, however, appears a bit inflated; in comparison to previous work, this could be due to the low volatility seen within the chosen future time interval.


Figure 5: Sieve Bootstrap Prediction Interval containing the actual squared exchange rates. June 30th, 2006 - July 28th, 2006.

Figure 6: Sieve Bootstrap Prediction Interval with contained volatilities. June 30th, 2006 - July 28th, 2006.

4.) Conclusion

Through inspection of the results of the Monte Carlo simulation study and the further investigation into an application of the technique using exchange rates, it becomes clear that the proposed method is novel and effective for the creation of future prediction intervals for underlying GARCH processes. It appears as though applying a SB algorithm directly to the squared returns of GARCH processes can provide future prediction intervals for both returns and volatilities without knowing the parameters and orders associated with the underlying GARCH process. The resulting method is computationally efficient due to the lack of parametric estimation required and has been shown to produce comparable results in both simulation and application examples. In the case of the addition of volatility PIs to the initial method, the coverages and lengths remain rather accurate when compared to the results of other, more established techniques.


References

Alonso AM, Peña D, Romo J. (2002). Forecasting time series with sieve bootstrap. Journal of Statistical Planning and Inference, 100, 1-11.

Alonso AM, Peña D, Romo J. (2003). On sieve bootstrap prediction intervals. Statistics and Probability Letters, 65, 13-20.

Alonso AM, Peña D, Romo J. (2004). Introducing model uncertainty in time series bootstrap. Statistica Sinica, 14, 155-174.

Andersen TG, Bollerslev T. (1998). Answering the skeptics: Yes, standard volatility models do provide accurate forecasts. International Economic Review, 39, 885-905.

Andersen TG, Bollerslev T, Diebold FX, Labys P. (2001). The distribution of realized exchange rate volatility. Journal of the American Statistical Association, 96, 42-55.

Andersen TG, Bollerslev T, Diebold FX, Labys P. (2003). Modeling and forecasting realized volatility. Econometrica, 71, 579-625.

Baillie RT, Bollerslev T, Mikkelsen HO. (1996). Fractionally integrated generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 74, 3-30.

Baillie RT, Bollerslev T. (1992). Prediction in dynamic models with time-dependent conditional variances. Journal of Econometrics, 52, 91-113.

Bollerslev T. (1986). Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 31, 307-327.

Buhlmann P. (1997). Sieve bootstrap for time series. Bernoulli, 3, 123-148.

Cao R, Febrero-Bande M, González-Manteiga W, Prada-Sánchez JM, García-Jurado I. (1997). Saving computer time in constructing consistent bootstrap prediction intervals for autoregressive processes. Communications in Statistics: Simulation and Computation, 26, 961-978.

Chen B, Gel YR, Balakrishna N, Abraham B. (2011). Computationally efficient bootstrap prediction intervals for returns and volatilities in ARCH and GARCH processes. Journal of Forecasting, 30, 51-71.

Conrad C, Karanasos M. (2006). The impulse response function of the long memory GARCH process. Economic Letters, 90, 34-41.

Engle RF. (1982). Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation. Econometrica, 50, 987-1007.


Engle RF, Bollerslev T. (1986). Modeling the persistence of conditional variances. Econometric Reviews, 5, 1-50.

Engle RF, Ng VK. (1993). Measuring and testing the impact of news on volatility. Journal of Finance, 48(5), 1749-1778.

Engle RF, Patton AJ. (2001). What good is a volatility model? Quantitative Finance, 1, 237-245.

Jarque CM, Bera AK. (1980). Efficient tests for normality, homoscedasticity and serial independence of regression residuals. Economics Letters, 6(3), 255-259.

Li WK, Ling S, McAleer M. (2002). Recent theoretical results for time series models with GARCH errors. Journal of Economic Surveys, 16, 245-269.

Miguel JA, Olave P. (1999). Bootstrapping forecast intervals in ARCH models. Test, 8(2), 345-364.

Mukhopadhyay P, Samaranayake VA. (2010). Prediction intervals for time series: A modified sieve bootstrap approach. Communications in Statistics - Simulation and Computation, 39, 517-538.

Nelson DB. (1991). Conditional heteroskedasticity in asset returns: a new approach. Econometrica, 59, 347-370.

Pascual L, Romo J, Ruiz E. (2006). Bootstrap prediction for returns and volatilities in GARCH models. Computational Statistics & Data Analysis, 50, 2293-2312.

Poon SH. (2005). A practical guide to forecasting financial market volatility. Wiley: Chichester.

Reeves JJ. (2005). Bootstrap prediction intervals for ARCH models. International Journal of Forecasting, 21, 237-248.

Rupasinghe M. (Spring 2015). Personal communication.

Rupasinghe M, Samaranayake VA. (2012). Asymptotic properties of sieve bootstrap prediction intervals for FARIMA processes. Statistics and Probability Letters, 82, 2108-2114.

Rupasinghe M, Samaranayake VA. (2014). Obtaining prediction intervals for FARIMA processes using the sieve bootstrap. Journal of Statistical Computation and Simulation, 84, 2044-2058.

Thombs LA, Schucany WR. (1990). Bootstrap prediction intervals for autoregression. Journal of the American Statistical Association, 85, 486-492.

Tsay RS. (2002). Analysis of Financial Time Series. Wiley-Interscience: New York.


West KD, Cho D. (1995). The predictive ability of several models of exchange rate volatility. Journal of Econometrics, 69, 367–391.


Biography:

Garrett Tresch was born and raised in Medina, Ohio, where his family always knew he had a strong skill in calculation. From writing times tables on the sides of bedroom walls to graduating with a double major including Mathematics, Garrett has always had a keen eye for patterns, for simplification, and for providing ease when performing often difficult calculations. Because of this innate desire he has made positive advances in both his personal achievements and in the lives of those that surround him. For example, he received an award in a state-wide competition that declared him the “Best Cash Handler in North-East Ohio” while working at McDonald's one summer. For the last few years he has also tutored students in Elementary Statistics, Calculus I and II, College Algebra, and Intermediate Algebra at two different universities, where he always tries to spread the notion that math is both incredibly important and nothing short of amazing. Garrett has also used his love of another form of mathematics, music, both to create fundraisers for those in need and to enhance the creative spectrum of North-East Ohio's youth. Garrett plays bass guitar in multiple bands ranging across various genres in order to use music to explain, as an educational technique, the importance of pattern and, therefore, mathematics within the enjoyable world.

Garrett has worked in downtown Cleveland as an Actuarial Intern for Findlay-Davies and has passed actuarial Exam P as well as Exam FM, attaining a score of 9 on the latter. He plans to pursue every avenue at his disposal in order to maximize his enjoyment of and satisfaction with all of his favorite aspects of life. He plans to take the GRE to pinpoint his abilities and to make decisions for his future.

“Wherever I may be, I know that I will expand the recesses of my knowledge in whatever manner seems to assist in restructuring the societal appreciation of logic and reason. For these are ancient, vital tools in the attainment of satisfaction and accomplishment in this ever-changing age of individualism.” –Garrett Tresch
