
THE USE OF COPULAS IN RISK MANAGEMENT

by

YOLANDA SOPHIA STANDER

SHORT DISSERTATION

submitted in partial fulfillment of the requirements for the degree

MASTER OF SCIENCE

in

STATISTICS

in the

FACULTY OF SCIENCE

at the

UNIVERSITY OF JOHANNESBURG

SUPERVISOR: PROF. F LOMBARD

JANUARY 2006

TABLE OF CONTENTS

INTRODUCTION 3

CHAPTER 1: BASIC COPULA THEORY 5
1.1. Overview 5
1.1.1. Definition 5
1.2. Elliptical Copulas 6
1.2.1. Definition 6
1.2.2. Multivariate Gaussian copula 6
1.2.3. Multivariate Student's copula 7
1.2.4. Drawbacks to using elliptical copulas 7
1.3. Archimedean Copulas 8
1.3.1. Definition 8
1.3.2. Properties 8
1.3.3. Copula Distribution Function 9
1.4. Estimating Copula Functions 9
1.4.1. Nonparametric Estimation 9
1.4.2. Parametric Estimation Methods 10
1.4.3. Comparison of Parametric Estimation Methods 11
1.5. Goodness-of-Fit Testing 11
1.5.1. Conditional Distribution Function 11
1.5.2. Parametric Distribution Function of an Archimedean Copula 12
1.5.3. Nonparametric Distribution Function of an Archimedean Copula 12
1.5.4. Kolmogorov-Smirnov Test 13
1.5.5. Akaike's Information Criteria (AIC) 14
1.6. Practical Example 15
1.7. Concluding Remarks 20

CHAPTER 2: DEPENDENCE MEASURES 21
2.1. Background 21
2.1.1. Overview 21
2.1.2. Definition of Dependence Measure 21
2.2. Linear Correlation 22
2.2.1. Definition 22
2.2.2. Shortcomings 22
2.3. Rank Correlation 23
2.3.1. Kendall's tau 23
2.3.2. Spearman's rho 24
2.3.3. Advantages and shortcomings 24
2.4. The Copula as a Dependence Measure 25
2.4.1. Background 25
2.4.2. Example: Comparison of dependence measures 25
2.5. Concluding Remarks 30

CHAPTER 3: RISK MANAGEMENT AND EXTREMES 32
3.1. Overview 32
3.2. Extreme Value Theory 33
3.2.1. Block-Maxima Approach 33
3.2.2. Peaks-over-Threshold Approach 35
3.2.3. Extremal Index 36
3.2.4. Practical application 37
3.3. Multivariate Extreme Value Distribution 37
3.4. Extreme Copulas - The Bivariate Case 38
3.4.1. Background 38
3.4.2. Estimating the … 40
3.5. Application in Risk Management 40
3.6. Concluding Remarks 44

CONCLUDING REMARKS 45

REFERENCES 47

APPENDIX A: LOGLIKELIHOOD FUNCTIONS FOR VARIOUS COPULA FAMILIES 50
Overview 50
Gumbel Copula 50
Summary 51

INTRODUCTION

In this dissertation we take a closer look at how copulas can be used to improve the risk measurement at a financial institution. The focus is on market risk in a trading environment.

In practice risk numbers are calculated with very basic measures that are easy to explain to senior management and to traders. It is important that traders understand the risk measure as that helps them to understand the risk inherent in any deal and may assist them in deciding on the optimal hedge. The purpose of a hedge is to reduce the risk in a portfolio. As senior management is responsible for deciding on the optimal risk limits and risk appetite of the financial institution, it is important for them to understand what the risks are and how to measure these.

The simplicity of the risk measures leads to certain inadequacies that can have very negative consequences for a financial institution. If the risk measure does not adequately capture the risk of a deal, the financial institution may suffer big losses when there are stress events in the market. Alternatively, when the risk measure overestimates the risk of a deal, too much economic capital is tied up in the deal. This inhibits the trader from adding more deals to a portfolio that may potentially lead to big profits. Economic capital is the capital that has to be held against positions to protect the financial institution if and when extreme market moves occur.

In this dissertation the focus is on how copulas can be used to improve current risk measures. We focus on bivariate copulas. Bivariate copulas are easier to depict graphically than multivariate copulas with more than two dimensions. It is also easier to prove that the fitted bivariate copulae do adequately describe the underlying dependence structure between risk factors. Even though the focus is on the bivariate case, all methodologies can easily be extended to higher dimensions.

In Chapter 1 copulas are defined and some basic copula properties are shown. We consider the definition of elliptical copulas and discuss some drawbacks to using them in a financial application. Some useful Archimedean copula properties are discussed and it is shown how to generate the copula function for N ≥ 2 dimensions. The various ways in which to estimate the parameters of a copula are discussed, as well as goodness-of-fit tests that are used to check whether the copula fits the underlying data adequately. Finally the chapter ends with an example that illustrates the theory. A back-test is done to establish whether the copula adequately describes the dependence structure over time. It is also shown how the fitted

copula can be used to generate stress scenarios that are used as an alternative to historical scenarios when calculating a value-at-risk (VaR) number.

In Chapter 2 the properties of a dependence measure are discussed and it is argued that linear correlation does not conform to these desired properties. Rank correlation measures have some additional properties that make them more appropriate than linear correlation measures in certain instances. We also consider their relationship to copulas. Finally it is shown how copulas can be used in practice to get another view on the dependence structure between risk factors.

In risk measurement we are mainly concerned with extreme moves that market variables may show. In Chapter 3 some of the techniques used in risk management are discussed, as well as some of their shortcomings. The shortcomings are addressed by applying extreme value theory to calculate stress factors and using copulas to model the dependence structure between risk factors. The theory underlying bivariate extreme copulas is discussed and illustrated with a practical example.

CHAPTER 1: BASIC COPULA THEORY

1.1. OVERVIEW

1.1.1. Definition

A copula can be defined as a multivariate distribution function on [0,1]^N with uniformly distributed marginals. Sklar's Theorem states that if F is an N-dimensional distribution function with marginals F1,...,FN, then there exists an N-copula C such that for all x in R^N

F(x1,...,xN) = C(F1(x1),...,FN(xN)). (1.1)

The converse is also true: if C is an N-copula and F1,...,FN are univariate distribution functions, then the function F defined by (1.1) is an N-dimensional distribution function with margins F1,...,FN. Equation (1.1) can be restated as follows:

C(u1,...,uN) = F(F1⁻¹(u1),...,FN⁻¹(uN)) (1.2)

for any (u1,...,uN) in [0,1]^N. C is unique when F1,...,FN are continuous; otherwise C is uniquely determined on Ran F1 × ... × Ran FN, where Ran F denotes the range of the function F (Cherubini et al., 2004 pp.135-136; Embrechts et al., 2001 p.4). Similarly the survival copula C̄ is defined by

F̄(x1,...,xN) = C̄(F̄1(x1),...,F̄N(xN)) (1.3)

where F̄(.) = 1 − F(.).

The copula C has the property that C(1,...,1,u,1,...,1) = u for all u in [0,1] (Embrechts et al., 2001 p.3).

The density c of a copula is given by:

c(u1,...,uN) = ∂^N C(u1,...,uN) / (∂u1 ... ∂uN). (1.4)

The relationship between the copula density and the density f of the N-dimensional distribution F is:

f(x1,...,xN) = c(F1(x1),...,FN(xN)) Π(n=1..N) fn(xn) (1.5)

where fn is the density of the distribution function Fn.
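As an illustrative aside (not part of the original text), the relationship between a joint distribution, its marginals and its copula in (1.1)-(1.2) can be checked numerically. The sketch below does this for a bivariate Gaussian with an assumed correlation of 0.5, using SciPy's multivariate normal distribution:

```python
from scipy.stats import norm, multivariate_normal

# Numerical check of (1.1)/(1.2) for a bivariate Gaussian; rho = 0.5 is an
# arbitrary illustrative choice.
rho = 0.5
joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def gaussian_copula_cdf(u1, u2):
    """C(u1, u2) = Phi_rho(Phi^-1(u1), Phi^-1(u2)), as in (1.2)."""
    return float(joint.cdf([norm.ppf(u1), norm.ppf(u2)]))

x1, x2 = 0.3, -0.7
lhs = float(joint.cdf([x1, x2]))                        # F(x1, x2)
rhs = gaussian_copula_cdf(norm.cdf(x1), norm.cdf(x2))   # C(F1(x1), F2(x2))
assert abs(lhs - rhs) < 1e-4
```

Both sides agree up to the numerical tolerance of the multivariate normal CDF.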

1.2. ELLIPTICAL COPULAS

1.2.1. Definition

Suppose we have an N-dimensional random vector X, a vector μ ∈ R^N of location parameters, and an N × N positive definite symmetric matrix Σ. If the characteristic function φ(t) of X − μ is a function of the quadratic form t′Σt, then we say that X has an elliptical distribution.

An alternative definition is to note that X has an elliptical distribution with rank(Σ) = k if and only if there exists a non-negative random variable R, independent of U, a k-dimensional random vector uniformly distributed on the unit sphere, and an N × k matrix A with AA′ = Σ, such that

X =d μ + RAU.

Elliptical copulas are the copulas of elliptical distributions such as the multivariate normal and Student t (Embrechts et al., 2001 pp.22-30).

1.2.2. Multivariate Gaussian copula

A multivariate Gaussian copula is defined by:

C(u1,...,uN) = Φρ(Φ⁻¹(u1),...,Φ⁻¹(uN))

where Φρ denotes the standard multivariate normal distribution function with correlation matrix ρ. The multivariate normal density function is given by

φρ(x) = (2π)^(−N/2) |ρ|^(−1/2) exp(−½ x′ρ⁻¹x)

where x = (x1,...,xN)′, while the univariate standard normal density function is given by

φ(xn) = (2π)^(−1/2) exp(−½ xn²).

Thus, from (1.5),

c(Φ(x1),...,Φ(xN)) = φρ(x) / Π(n=1..N) φ(xn),

and by setting un = Φ(xn) it follows that the copula density (1.4) is

c(u) = |ρ|^(−1/2) exp(−½ ζ′(ρ⁻¹ − IN)ζ), where ζ = (Φ⁻¹(u1),...,Φ⁻¹(uN))′, (1.6)

u = (u1,...,uN), and IN denotes the N × N identity matrix (Bouyé et al., 2001b pp.14-17).

1.2.3. Multivariate Student's copula

The multivariate Student t copula is defined as:

C(u1,...,uN) = Tρ,ν(tν⁻¹(u1),...,tν⁻¹(uN))

where Tρ,ν denotes the standardized multivariate Student t-distribution with ν degrees of freedom and correlation matrix ρ, and tν⁻¹(.) denotes the inverse of the univariate marginal distribution function.

The density of this copula is derived in a similar manner to (1.6) and is given by:

c(u) = |ρ|^(−1/2) [Γ((ν+N)/2) / Γ(ν/2)] [Γ(ν/2) / Γ((ν+1)/2)]^N (1 + ζ′ρ⁻¹ζ/ν)^(−(ν+N)/2) / Π(n=1..N) (1 + ζn²/ν)^(−(ν+1)/2) (1.7)

where ζn = tν⁻¹(un) and ζ = (ζ1,...,ζN)′ (Bouyé et al., 2001b pp.17-18).

1.2.4. Drawbacks to using elliptical copulas

Elliptical copulas have no closed-form representations. The practicability of implementing them thus depends on the sophistication of the computing systems available at a financial institution.

Another drawback with elliptical distributions in a multivariate setting is that all marginals are of the same type. It will usually be more realistic to allow for different types of marginal distribution functions that are not necessarily elliptical. The problem then is that the correlation numbers can't be estimated directly from the data. It may be necessary to use a measure like Kendall's tau which is discussed in Chapter 2 (Embrechts et al, 2001 p.24).

Elliptical copulas generate distributions that are radially symmetric, i.e. C(u) = C̄(1 − u) where u = (u1,...,uN) (Embrechts et al., 2001 pp.25, 36). In many finance applications we find that the distributions exhibit skewness, for instance distributions of portfolio returns where the portfolio is made up of non-linear instruments such as exotic options. This makes the use of elliptical copulas inappropriate.

1.3. ARCHIMEDEAN COPULAS

1.3.1. Definition

An Archimedean copula is derived from a generator and not from a multivariate distribution function via Sklar's Theorem discussed in Section 1.1 (Genest and Rivest, 1993 p.1034).

Let φ be a continuous, strictly decreasing function from [0,1] to [0,∞) such that φ(1) = 0, let φ^[-1] be the pseudo-inverse of φ, that is,

φ^[-1](t) = φ⁻¹(t) for 0 ≤ t ≤ φ(0), and φ^[-1](t) = 0 for φ(0) ≤ t ≤ ∞,

and let C: [0,1]² → [0,1] be given by

C(u,v) = φ^[-1](φ(u) + φ(v)).

C is a copula if and only if φ is convex, that is, φ″ ≥ 0. A copula of this form is called an Archimedean copula and φ is called a generator of the copula (Embrechts et al., 2001 p.31). We will restrict attention to generators that have φ(0) = ∞.

In contrast to elliptical copulas, most Archimedean copulas have closed-form expressions. Appendix A shows a list of Archimedean copulas.
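The generator construction above is easy to make concrete. The sketch below uses the Clayton generator φ(t) = (t^(−θ) − 1)/θ with θ = 2 purely as one convenient closed-form example (the specific family and parameter are illustrative assumptions):

```python
# Building a bivariate Archimedean copula directly from its generator,
# C(u, v) = phi_inv(phi(u) + phi(v)), with the Clayton generator
# phi(t) = (t**-theta - 1)/theta, theta = 2, as an illustration.
theta = 2.0
phi = lambda t: (t ** -theta - 1.0) / theta
phi_inv = lambda s: (1.0 + theta * s) ** (-1.0 / theta)

def clayton(u, v):
    return phi_inv(phi(u) + phi(v))

# Basic copula properties: C(u, 1) = u and symmetry C(u, v) = C(v, u).
assert abs(clayton(0.4, 1.0) - 0.4) < 1e-12
assert abs(clayton(0.3, 0.8) - clayton(0.8, 0.3)) < 1e-12
```

Because φ(1) = 0, the margin property C(u, 1) = u falls straight out of the construction.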

1.3.2. Properties

An Archimedean copula C has these properties:
• C is symmetric: C(u,v) = C(v,u) for all u, v in [0,1].
• C is associative: C(C(u,v),w) = C(u,C(v,w)) for all u, v, w in [0,1].

The associative property of Archimedean copulas is not shared by copulas in general (Embrechts et al., 2001 p.32).


1.3.3. Copula Distribution Function

De Matteis (2001, pp.29-30) shows that when we have an Archimedean copula C with generator φ, the random variable C(U,V) has distribution function K_C, where

K_C(t) = P(C(U,V) ≤ t) = t − φ(t)/φ′(t⁺) (1.8)

and φ′(t⁺) denotes the one-sided derivative of φ at t.
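A quick numerical sketch of (1.8), again taking the Clayton generator as an assumed example (it is differentiable, so the one-sided derivative is just the ordinary one):

```python
# K_C(t) = t - phi(t)/phi'(t) from (1.8), evaluated for the Clayton
# generator phi(t) = (t**-theta - 1)/theta (illustrative choice).
theta = 2.0

def K_clayton(t):
    phi = (t ** -theta - 1.0) / theta
    dphi = -t ** (-theta - 1.0)          # phi'(t)
    return t - phi / dphi

# K_C is a distribution function on (0, 1] with K_C(1) = 1.
assert abs(K_clayton(1.0) - 1.0) < 1e-12
assert K_clayton(0.2) < K_clayton(0.6) < K_clayton(1.0)
```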

1.4. ESTIMATING COPULA FUNCTIONS

1.4.1. Nonparametric Estimation

When we do not want to make any assumptions regarding the marginal distribution functions or the dependence structure between variables, we usually resort to nonparametric methods. In this section we define the empirical copula distribution function.

Deheuvels' empirical copula is defined on the lattice

L = {(t1/T,...,tN/T) : tj = 0,1,...,T, 1 ≤ j ≤ N}

and is given by

Ĉ(t1/T,...,tN/T) = (1/T) Σ(t=1..T) Π(j=1..N) ind(r_t^j ≤ tj) (1.9)

where r_t^j denotes the rank of the t-th observation of the j-th variable, and ind(A) denotes the indicator function, which takes the value 1 if the event A is true and 0 otherwise (Cherubini et al., 2004 p.161). The empirical copula mass function is defined as:

ĉ(t1/T,...,tN/T) = 1/T if (x1,(t1),...,xN,(tN)) is an element of the sample, and 0 otherwise, (1.10)

where xj,(tj) denotes the tj-th order statistic of the j-th variable. The relationship between the empirical copula distribution and mass functions is given by (Bouyé et al., 2001b p.22):

Ĉ(t1/T,...,tN/T) = Σ(i1=1..t1) ... Σ(iN=1..tN) ĉ(i1/T,...,iN/T). (1.11)
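The empirical copula (1.9) reduces to counting componentwise ranks. A minimal bivariate sketch (illustrative, not from the text):

```python
import numpy as np

# Deheuvels' empirical copula (1.9) from the componentwise ranks of a
# bivariate sample, evaluated at a lattice point (t1/T, t2/T).
def empirical_copula(x, y, t1, t2):
    rx = np.argsort(np.argsort(x)) + 1   # ranks 1..T of the x-observations
    ry = np.argsort(np.argsort(y)) + 1   # ranks 1..T of the y-observations
    return np.mean((rx <= t1) & (ry <= t2))

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = x + rng.normal(size=500)             # positively dependent sample
# C_hat(1, 1) = 1, and the margins are uniform: C_hat(t1/T, 1) = t1/T.
assert empirical_copula(x, y, 500, 500) == 1.0
assert abs(empirical_copula(x, y, 250, 500) - 0.5) < 1e-12
```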

1.4.2. Parametric Estimation Methods

In this section we consider three approaches that can be used to estimate the parameters of a parametrically specified copula using maximum likelihood estimation (MLE) techniques.

Say we have a sample of N variables, each with T observations, and denote this sample by {(x1,t,...,xN,t) : t = 1,...,T}. Let θA denote the vector of parameters of the marginal distribution functions and θB the vector of parameters of the copula function. The likelihood function L(θA, θB) is, from (1.5),

L(θA, θB) = Π(t=1..T) [ cθB(F1,θA(x1,t),...,FN,θA(xN,t)) Π(n=1..N) fn,θA(xn,t) ] (1.12)

so that the log-likelihood ℓ(θA, θB) is given by:

ℓ(θA, θB) = Σ(t=1..T) ln cθB(F1,θA(x1,t),...,FN,θA(xN,t)) + Σ(t=1..T) Σ(n=1..N) ln fn,θA(xn,t). (1.13)

The first approach towards estimating the parameters is the exact maximum likelihood method (EML). In this approach the parameters are estimated by maximizing (1.13). This approach can be very computationally intensive because we are jointly estimating the parameters of the marginals and of the copula function (Cherubini et al, 2004 pp.154-156).

The second approach is known as the inference functions for margins method (IFM). This is a two-step approach in which the parameters of the univariate marginal distributions are estimated first and the copula parameters thereafter. In other words, we first estimate

θ̂A = arg max Σ(t=1..T) Σ(n=1..N) ln fn,θA(xn,t) (1.14)

where θ̂A denotes the vector of estimated parameters of the marginal distributions. Then, given θ̂A, we estimate:

θ̂B = arg max Σ(t=1..T) ln cθB(F1,θ̂A(x1,t),...,FN,θ̂A(xN,t)) (1.15)

where θ̂B denotes the vector of estimated parameters of the copula (Cherubini et al., 2004 pp.156-160).

In the third approach no distributional assumptions regarding the marginal distributions are made. The empirical marginal distribution functions are calculated and the copula parameters are then estimated by (Cherubini et al., 2004 pp.160-161):

θ̂B = arg max Σ(t=1..T) ln cθB(û1,t,...,ûN,t) (1.16)

where ûn,t denotes the empirical distribution function of the n-th variable evaluated at xn,t. This approach is known as the canonical maximum likelihood method (CML).
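The CML recipe can be sketched in a few lines; the Clayton density used below is an illustrative stand-in for whichever parametric copula family is being fitted:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Sketch of CML estimation (1.16): transform each margin to
# pseudo-observations with its empirical distribution function, then
# maximise the copula log-density alone. Clayton is purely illustrative.
def pseudo_obs(z):
    """Empirical-CDF transform, rescaled by T + 1 to stay inside (0, 1)."""
    return (np.argsort(np.argsort(z)) + 1) / (len(z) + 1.0)

def clayton_logpdf(u, v, theta):
    return (np.log1p(theta) - (theta + 1.0) * (np.log(u) + np.log(v))
            - (2.0 + 1.0 / theta) * np.log(u ** -theta + v ** -theta - 1.0))

def cml_clayton(x, y):
    u, v = pseudo_obs(x), pseudo_obs(y)
    nll = lambda th: -np.sum(clayton_logpdf(u, v, th))
    return minimize_scalar(nll, bounds=(0.01, 20.0), method="bounded").x

rng = np.random.default_rng(1)
x = rng.normal(size=2000)
y = 0.8 * x + 0.6 * rng.normal(size=2000)    # positively dependent data
theta_hat = cml_clayton(x, y)
assert theta_hat > 0.5                       # clear positive dependence
```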

1.4.3. Comparison of Parametric Estimation Methods

The EML approach is computationally intensive because the parameters of the marginal distribution functions and the copula function are estimated simultaneously. Cherubini et al. (2004 pp.156-160) find that it is much easier to obtain IFM estimators than EML estimators, and therefore suggest using the IFM estimates as starting values in an EML optimization routine.

Fermanian and Scaillet (2004, pp.3-5) compare the three estimation methods and find that the squared error of the CML estimator is close to that of the EML estimator, so that little is lost in large samples. They also show how the misspecification of models may lead to severe bias, and that this bias is more severe in the EML case. They suggest that using a semi-parametric approach like CML need not reduce the efficiency of the model.

1.5. GOODNESS-OF-FIT TESTING

There are various techniques that can be used to test whether a fitted copula fits the data adequately. In this section we consider some of these techniques.

1.5.1. Conditional Distribution Function

In the literature this approach is suggested for testing the goodness-of-fit of a bivariate copula (De Matteis, 2001 pp.39-44; Cherubini et al., 2004 pp.176-177).

Suppose we have a bivariate distribution function F(x,y) = C(F1(x), F2(y)), where F1 and F2 denote the marginal distribution functions of X and Y respectively and C denotes a bivariate copula. The conditional distribution function of Y given X is

F(y | x) = P(Y ≤ y | X = x)
         = lim(Δx→0) [C(F1(x+Δx), F2(y)) − C(F1(x), F2(y))] / [F1(x+Δx) − F1(x)]
         = ∂C(u, F2(y))/∂u evaluated at u = F1(x)
         =: D(x, y). (1.17)

We use this function in the goodness-of-fit test by following these steps:
• We have a bivariate data series (X1,Y1),...,(XT,YT). Estimate the parameters of the copula function.
• Let D̂(x,y) denote the conditional distribution function evaluated using the fitted copula parameters. Evaluate this function at each of the data points and denote the resulting series by D1 = D̂(X1,Y1),...,DT = D̂(XT,YT).
• Plot the pairs (i/(T+1), D(i)) for i = 1,...,T, where D(i) denotes the i-th order statistic of the D-values.

If the copula fits the data adequately, a plot of the pairs should cluster around a straight line.
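The plotting step can be sketched as follows; `cond_cdf` is a hypothetical placeholder for the fitted conditional distribution D(x, y), and the independence copula (whose conditional distribution is simply v) serves as the "correctly specified" case:

```python
import numpy as np

# Coordinates for the PP-plot of the conditional-distribution test above.
def pp_pairs(u, v, cond_cdf):
    d = np.sort(cond_cdf(u, v))              # ordered D-values
    T = len(d)
    return np.arange(1, T + 1) / (T + 1.0), d

rng = np.random.default_rng(6)
u, v = rng.uniform(size=1000), rng.uniform(size=1000)
xs, ys = pp_pairs(u, v, lambda u, v: v)      # independence: D(u, v) = v
# For a correctly specified copula the points hug the diagonal.
assert np.max(np.abs(xs - ys)) < 0.1
```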

1.5.2. Parametric Distribution Function of an Archimedean Copula

In Section 1.3 we saw that the distribution function of an Archimedean copula C is given by K_C(t). This function is used in a goodness-of-fit test by following these steps:
• We have a bivariate data series (X1,Y1),...,(XT,YT). Estimate the parameters of the copula function. Let K̂_C(t) denote the function evaluated using the estimated copula parameters.
• Plot the pairs (i/(T+1), K̂_C(i/(T+1))) for i = 1,...,T.

If the copula fits the data adequately, a plot of the pairs should cluster around a straight line (De Matteis, 2001 pp.39-44).

1.5.3. Nonparametric Distribution Function of an Archimedean Copula

In this method we compare the parametric distribution of the Archimedean copula given by (1.8) with a nonparametric distribution function.

Suppose we have a random sample {(Xi, Yi); i = 1,...,T} drawn from a bivariate distribution F. For each i we determine the proportion of observations that are less than or equal to the pair (Xi, Yi) componentwise:

Wi = (1/(T−1)) Σ(j≠i) ind(Xj ≤ Xi, Yj ≤ Yi),

so that the nonparametric distribution function K̂_C(t) is given by

K̂_C(t) = (1/T) Σ(i=1..T) ind(Wi ≤ t). (1.18)

Please refer to De Matteis (2001, pp.39-44) for a detailed discussion of the properties of this distribution function.

To test the goodness-of-fit using (1.8) and (1.18) we follow these steps:
• Estimate the parameters of the copula.
• Plot the pairs (K̂_C(i/(T+1)), K_C(i/(T+1))) for i = 1,...,T.

If the copula fits the data adequately, a plot of the pairs should cluster around a straight line.
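A direct sketch of the nonparametric estimator (1.18) (illustrative only):

```python
import numpy as np

# Nonparametric estimate (1.18): W_i is the proportion of the other sample
# points dominated componentwise by (X_i, Y_i).
def K_hat(x, y, t):
    T = len(x)
    idx = np.arange(T)
    W = np.array([np.sum((x <= x[i]) & (y <= y[i]) & (idx != i)) / (T - 1.0)
                  for i in range(T)])
    return np.mean(W <= t)

rng = np.random.default_rng(2)
x, y = rng.uniform(size=300), rng.uniform(size=300)
assert K_hat(x, y, 1.0) == 1.0               # K_hat is a CDF on [0, 1]
assert K_hat(x, y, 0.2) <= K_hat(x, y, 0.8)  # and it is non-decreasing
```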

1.5.4. Kolmogorov-Smirnov Test

The Kolmogorov-Smirnov test is used to test the hypothesis that two samples come from the same distribution. The test is conducted by comparing the two sample cumulative distribution functions, and the test statistic is the greatest difference between them. If the two distribution functions are denoted by F1(x) and F2(x), the test statistic T is calculated as:

T = max |F1(x) − F2(x)|.

There are tables available that list critical values for this test statistic (Sprent, 1993). However, the critical values are only valid for independent data.

In this dissertation the Kolmogorov-Smirnov test will be applied by comparing the parametric copula distribution function with the nonparametric distribution function (1.18), i.e. calculating the test statistic as:

T = max |K_θ̂(t) − K̂_C(t)|. (1.19)

The steps are as follows:

• We have a bivariate data series (X1,Y1),...,(XT,YT). Estimate the parameters of the copula function. Let K_θ̂(t) denote the parametric distribution function evaluated using the estimated copula parameters.
• Determine the nonparametric distribution function K̂_C(t) from (1.18).
• Compute the test statistic T = max |K_θ̂(t) − K̂_C(t)|.

To determine whether this test statistic is significant, we use a nonparametric bootstrap analysis to determine a p-value (Davison and Hinkley, 1999, pp.161-173):
• Let θ̂ denote the vector of estimated copula parameters.
• Generate a data series (u1*, v1*),...,(uT*, vT*) from the fitted copula.
• Estimate the copula parameters from this generated data and denote the estimates by θ̂*.
• Compute K_θ̂*(t) and K̂_C*(t), where the superscript * denotes that the bootstrap sample and θ̂* are used to calculate the values.
• Calculate the test statistic Tr* = max |K_θ̂*(t) − K̂_C*(t)|.
• Repeat these steps R times to get a bootstrap series of test statistics T1*,...,TR*.
• Calculate the p-value as p = (1 + Σ(r=1..R) ind(Tr* ≥ T)) / (R + 1).

The p-value can be interpreted as an error rate, i.e. we reject the null hypothesis with an error rate p (Davison and Hinkley, 1999, p.37).
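The final p-value computation is a one-liner; the bootstrap statistics fed into it would come from the refitting loop described above:

```python
import numpy as np

# Bootstrap p-value: p = (1 + #{r : T*_r >= T_obs}) / (R + 1).
def bootstrap_pvalue(t_obs, boot_stats):
    boot_stats = np.asarray(boot_stats)
    return (1.0 + np.sum(boot_stats >= t_obs)) / (len(boot_stats) + 1.0)

# An observed statistic in the middle of the bootstrap distribution gives a
# large p-value, so the fitted copula is not rejected.
assert bootstrap_pvalue(0.5, [0.2, 0.4, 0.6, 0.8]) == 3.0 / 5.0
```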

1.5.5. Akaike's Information Criteria (AIC)

Akaike's Information Criteria (AIC) can be used to determine which model shows the superior fit when a number of models have been fitted to a data series. The AIC is given by

AIC = −2 ln L(θ̂) + 2M (1.20)

where L(θ̂) denotes the maximized likelihood function and M denotes the number of estimated parameters. The model with the lowest AIC should be the superior model (Burnham and Anderson, 2004 pp.261-304).
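For concreteness (with hypothetical numbers, not taken from the study):

```python
# AIC = -2 ln L(theta_hat) + 2M from (1.20); the lower the value, the better
# the trade-off between fit and number of parameters.
def aic(loglik, n_params):
    return -2.0 * loglik + 2.0 * n_params

# A one-parameter copula with log-likelihood 33.0 beats a two-parameter
# copula with log-likelihood 33.5 (hypothetical numbers).
assert aic(33.0, 1) < aic(33.5, 2)
```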

1.6. PRACTICAL EXAMPLE

We consider here the relationship between the returns of an equity price series (price given in EUR) and the returns of the EUR/USD exchange rate. Graphs of the price series of the two risk factors are shown in Figure 1.1 and Figure 1.2 respectively.

We are interested to see whether we can find a copula that adequately describes the dependence structure between the two series and whether this dependence structure is stable over time. The fitted copula can also be used to determine stress scenarios that can be used in the risk management process to determine value-at-risk (VaR) numbers.

[Figure: EUR/USD spot price over the sample period, ranging from about 0.6 to 1.5.]

Figure 1.1. EUR/USD spot series.

[Figure: equity price in EUR over the sample period.]

Figure 1.2. Equity price series in EUR.

We have daily data from January 1999 to March 2005. Daily log returns are calculated for both series and the pairs are denoted by (X, Y). We use the CML method to estimate the parameters of the copula function for each of the bivariate Archimedean copulas specified in Appendix A, Section A.3, to see which one shows the best fit.

Table 1.1 shows the Kolmogorov-Smirnov test statistic T and corresponding p-value, as well as the AIC value of each of the copulas for which parameter estimates could be found. The Kolmogorov-Smirnov test showed that only copula types 5, 6 and 10 show an adequate fit. From these three copulas, copula type 5 was chosen because it had the lowest AIC number. The copula type 5 function is also known as the Frank copula.

Copula           a         T        p-value   Decision   AIC
Type 1          -0.6261    0.2701   0.005     reject     ∞
Type 3          -0.6888    0.3238   0.005     reject     -63.9
Type 5          -1.2012    0.0153   1         accept     -61.9
Type 6           1         0.0635   0.199     accept     0
Type 10          0.5847    0.0261   0.985     accept     -61.6
Type 12          1         0.2234   0.004     reject     1101.3
Type 14          1         0.2234   0.00      reject     1101.3
Type 16          0.1019    0.1182   0.00      reject     269.4
Type 17         -2.9911    0.3603   0.00      reject     -54.6

Table 1.1. Goodness-of-fit test results of each fitted copula (a: estimated copula parameter; T: Kolmogorov-Smirnov test statistic).

Figure 1.3 shows the three graphical goodness-of-fit tests for copula type 5. We can see that all three graphs show the expected straight line which confirms the results of the Kolmogorov-Smirnov and AIC tests.

The next step is to do a back-test to see whether the Frank copula adequately describes the structural relationship between the two risk factors over time. The steps are:
• Take one year's data and estimate the parameter of the Frank copula.
• Perform a goodness-of-fit test to see whether the Frank copula shows an adequate fit.
• Add the next three months' data to the sample and estimate the parameter of the copula. Again perform the goodness-of-fit test.
• Repeat the process until the end of the sample is reached.
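The expanding-window mechanics can be sketched as follows; the window lengths (250 trading days for one year, 63 for one quarter) and both callbacks are assumptions standing in for the actual Frank-copula fit and the goodness-of-fit test of Section 1.5:

```python
# Skeleton of the expanding-window back-test described above. `fit_copula`
# and `gof_test` are placeholders for the CML fit and goodness-of-fit test.
def backtest(x, y, fit_copula, gof_test, start=250, step=63):
    results = []
    end = start
    while end <= len(x):
        theta = fit_copula(x[:end], y[:end])
        results.append((end, theta, gof_test(x[:end], y[:end], theta)))
        end += step
    return results

# Dummy callbacks, just to show the expanding-window mechanics.
demo = backtest(list(range(400)), list(range(400)),
                fit_copula=lambda x, y: 1.0,
                gof_test=lambda x, y, theta: True)
assert [w for (w, _, _) in demo] == [250, 313, 376]
```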

[Figure: three PP-plots for copula type 5 — "Conditional Distribution", "Copula CDF", and "Copula CDF vs Copula Empirical CDF" — each clustering around the diagonal.]

Figure 1.3. Copula Type 5 graphical goodness-of-fit test results.

[Figure: Frank copula parameter estimate over time, plotted against the sample end date.]

Figure 1.4. Frank copula parameter a estimated over various historical periods.

The Frank copula adequately fits the data for each of the historical periods considered. Figure 1.4 shows a plot of the a-values for each of the periods. It is interesting to see how the a-values gradually change over time as more data are added to the sample. The a-parameter can be interpreted as a dependence measure because, as we will see in Chapter 2, the rank correlation measures Kendall's tau and Spearman's rho are simple functions of this a-value. The back-test thus shows that while the structural relationship does not change over time, the strength of the dependence does.

Suppose we have a portfolio consisting of 100 units of the equity and that the base currency of the portfolio is USD. The value of the portfolio is calculated as:

PVt = Nt × Pt × FXt

where PVt = portfolio value in USD at time t; Nt = position (units) at time t; Pt = equity price in EUR at time t; and FXt = spot EUR/USD exchange rate at time t.

We now generate from the fitted copula possible future paths that the equity price and exchange rate can follow:
• Generate a bivariate sample from the fitted Frank copula (see below). The sample size is chosen as 1000. The generated bivariate sample gives possible returns that the equity price and exchange rate may show over the next day; we call these the scenario values.
• Apply the scenario values to the portfolio value to get a series of profit-and-loss (PnL) values:

PnL_i = Nt × Pt exp(R_i^e) × FXt exp(R_i^x) − PVt

where i = 1,...,1000 indexes the generated scenario values, R_i^e denotes the generated equity return and R_i^x the generated return of the exchange rate.
• Calculate the 1st percentile of the PnL series; this is the VaR number.

These steps are repeated (say) 10 000 times, after which we have an empirical distribution of VaR numbers. From this empirical distribution it is possible to determine a confidence interval for the VaR number.
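The per-scenario P&L and percentile step can be sketched as follows; the normal scenario draws are placeholders for draws from the fitted Frank copula, and the price and FX levels are hypothetical, chosen so that N × P × FX matches the 6 470.5 USD portfolio value of this example:

```python
import numpy as np

# Monte Carlo VaR from scenario returns, following the steps above.
def var_from_scenarios(N, P, FX, r_equity, r_fx, pct=1.0):
    pv = N * P * FX
    pnl = N * P * np.exp(r_equity) * FX * np.exp(r_fx) - pv
    return np.percentile(pnl, pct)           # 1st percentile of the PnL

rng = np.random.default_rng(7)
r_e = rng.normal(0.0, 0.02, size=10000)      # stand-in equity returns
r_x = rng.normal(0.0, 0.01, size=10000)      # stand-in FX returns
var_1d = var_from_scenarios(100, 50.0, 1.2941, r_e, r_x)
assert var_1d < 0.0                          # the 1st percentile is a loss
```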

Most practitioners rely on historical data to generate a VaR number. To calculate a historical VaR, the instruments in the current portfolio are stressed with actual moves that occurred in the past. The portfolio value is recalculated under each scenario and the profit-and-loss series is calculated by subtracting the current portfolio value from the scenario values. The historical VaR number is a low percentile (such as the first) of the profit-and-loss distribution. In this calculation no allowance is made for the fact that the relationship between the equity and the exchange rate may change from that observed historically. By generating scenario values from the copula function it is possible to explore the effect of changes in the historically observed dependence structure.

To generate the bivariate sample from the Frank copula, we follow these steps (De Matteis, 2001; Cherubini et al., 2004):
• Generate two independent values u and p from a uniform(0,1) distribution.
• Use the conditional distribution function (1.17). Note that ∂C(F1(x), F2(y))/∂F1(x) can be written as ∂C(u,v)/∂u, where F1(x) = u and F2(y) = v. By setting ∂C(u,v)/∂u equal to p, we can solve for v.
• The pair (u,v) is then a realisation from the copula C.
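For the Frank copula the inversion in the second step has a closed form, so the whole sampler is a few lines (a sketch using the standard closed-form conditional inverse for this family):

```python
import numpy as np

# Conditional-inversion sampling from the Frank copula: draw u, p ~ U(0, 1)
# and solve dC(u, v)/du = p for v.
def frank_sample(theta, size, rng):
    u = rng.uniform(size=size)
    p = rng.uniform(size=size)
    num = p * (np.exp(-theta) - 1.0)
    den = np.exp(-theta * u) * (1.0 - p) + p
    v = -np.log1p(num / den) / theta
    return u, v

rng = np.random.default_rng(3)
u, v = frank_sample(theta=5.0, size=20000, rng=rng)
# The margins are uniform and a positive theta induces positive dependence.
assert abs(np.mean(v) - 0.5) < 0.02
assert np.corrcoef(u, v)[0, 1] > 0.4
```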

In this example the portfolio value PVt is 6 470.5 USD. Figure 1.5 shows the distribution of the VaR numbers. The distribution is clearly skewed to the left, with the smallest VaR number being about -440 USD. For risk management purposes we use this as the 'worst case' scenario; expressed as a percentage of the portfolio value it is 440/6 470.5 ≈ 6.8%. This means that at worst we expect to lose no more than 6.8% of the portfolio value over the next day, and we expect this number to be exceeded only once every 100 days.

[Figure: histogram of the VaR numbers, ranging from about -450 to -210 USD.]

Figure 1.5. Distribution of the first percentiles (VaR numbers) calculated from the PnL numbers.

1.7. CONCLUDING REMARKS

In this chapter we considered the definition of a copula and some copula properties. It was argued that elliptical copulas may not have the correct characteristics to adequately describe the dependence between financial variables. Thus we focussed on Archimedean copulas.

The various ways in which to estimate the parameters of the copula function were discussed. Various goodness-of-fit tests that are used to test whether the fitted copula is suitable were also discussed. Finally, an example that illustrates the theory was given.

CHAPTER 2: DEPENDENCE MEASURES

2.1. BACKGROUND

2.1.1. Overview

In practice the linear Pearson correlation coefficient is almost always used as a measure of dependence, due to the simplicity of calculating this number. Few people are fully aware of the inadequacies of this measure. Consider, for example, two stocks with returns denoted by X and Y. We calculate the correlation between the two return series and find that it is 100%. Some traders would immediately think that when buying stock X they can hedge the position by shorting Y, because the perfect linear correlation implies that the two stocks move together perfectly. However, what such a trader does not take into account is that even though the two stocks may indeed move perfectly together, they do not necessarily move by the same amount. An example is when X ~ N(0,1) and Y = 2X. By considering only the linear correlation number, the position will not be hedged successfully, because stock Y will always move twice as much as stock X.
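This pitfall is easy to demonstrate numerically (an illustrative sketch):

```python
import numpy as np

# Perfect linear correlation does not mean equal-sized moves: with Y = 2X
# the correlation is exactly 1, yet a one-for-one short in Y is no hedge.
rng = np.random.default_rng(4)
X = rng.normal(size=10000)
Y = 2.0 * X
assert abs(np.corrcoef(X, Y)[0, 1] - 1.0) < 1e-12

pnl = X - Y                                  # long 1 unit X, short 1 unit Y
assert np.std(pnl) > 0.9 * np.std(X)         # the "hedge" leaves all the risk
```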

There are various other issues involving linear correlation that are explored in Section 2.2. Some of the issues can be addressed by rather applying a rank correlation measure. This is discussed in Section 2.3. We also explore the relationship between rank correlation and copula functions.

A copula is a natural object with which the relationship between variables can be modelled. It provides a direct way of determining the probability that variables move together or to determine the actual probability that two variables will exhibit pre-specified moves. This is further explored in Section 2.4.

2.1.2. Definition of Dependence Measure

We would like a dependence measure δ(X,Y) between two real-valued random variables X and Y to have the following properties (Embrechts et al., 2002 pp.176-223):
• Symmetry: δ(X,Y) = δ(Y,X).
• Normalisation: -1 ≤ δ(X,Y) ≤ 1.
• δ(X,Y) = 1 if and only if X and Y are perfectly positively dependent.
• δ(X,Y) = -1 if and only if X and Y are perfectly negatively dependent.
• δ(X,Y) = 0 if and only if X and Y are independent.
• For T: R → R strictly monotonic on the range of X: δ(T(X),Y) = δ(X,Y) if T is increasing, and δ(T(X),Y) = -δ(X,Y) if T is decreasing.

Embrechts et al. (2002) show that the last two properties contradict each other and that no dependence measure exists which satisfies both.

2.2. LINEAR CORRELATION

2.2.1. Definition

The linear correlation between two risk factors (random variables) X and Y is defined as:

ρ(X, Y) = Cov(X, Y) / √(Var(X)Var(Y))

(2.1)

We know that in the case of perfect linear correlation we have that ρ(X, Y) = ±1. When we have imperfect linear correlation, we have that −1 < ρ(X, Y) < 1.

It can be shown that linear correlation is invariant only under strictly increasing linear transformations, i.e. ρ(aX + b, cY + d) = sgn(ac)·ρ(X, Y) when a, c ∈ ℝ\{0} and b, d ∈ ℝ. When we are dealing with elliptical distributions the linear correlation measure is adequate to capture the dependence between variables. The multivariate normal distribution is a special case of this type of distribution (Embrechts et al., 1999a). Only when we are working with a multivariate normal distribution does ρ(X, Y) = 0 denote independence between the variables. These and other shortcomings of the linear correlation measure are discussed in Section 2.2.2.

2.2.2. Shortcomings

Linear correlation cannot capture the non-linear dependence between risk factors in the financial market (Embrechts et al., 1999 pp.3-6).

Linear correlation is a scalar measure of dependency. It cannot, therefore, be expected to provide everything we need to know about the dependence structure between risk factors. When we know the marginal distributions and correlation between two risk factors, we generally cannot determine the joint distribution. Also, the linear correlation values that can be attained depend on the marginal distributions of the risk factors. Perfectly positively dependent risk factors do not necessarily have a linear correlation of 1; and perfectly negatively dependent risk factors do not necessarily have a linear correlation of -1. An interesting example is discussed in Embrechts et al. (1999 pp.3-6). They show that the attainable linear correlations form a closed interval [ρmin, ρmax]. This interval contains zero and is a subset of [-1,1].

The upper boundary ρmax represents a situation where the two risk factors are perfectly positively dependent. Two risk factors X and Y are perfectly positively dependent when they can be represented as two increasing functions u and v of a single underlying risk factor Z, in other words X = u(Z) and Y = v(Z). The lower boundary ρmin can be interpreted similarly. In this case the two risk factors are perfectly negatively dependent and they can be represented by an increasing function, u, and a decreasing function, v, of Z.

A linear correlation of zero does not imply that the risk factors are independent. This is only the case when we are working with a multivariate normal distribution. Another interesting property is discussed in Cherubini et al. (2004, p. 108), where it is proven that even though the linear correlation between two variables is zero, the one variable can still almost surely be a function of the other.

Linear correlation is not invariant under non-linear strictly increasing transformations. Furthermore, linear correlation is only defined when the variances of the risk factors are finite, which can be seen from (2.1). This means we may encounter problems when working with heavy-tailed distributions where variances may be infinite (Embrechts et al., 2002).

2.3. RANK CORRELATION

2.3.1. Kendall's tau

Suppose we have two variables X and Y whose joint distribution is generated by a copula C. Then Kendall's tau is defined by

τ = 4 ∫∫_{I²} C(u, v) dC(u, v) − 1

(2.2)

where I = [0,1]. When C is an Archimedean copula with generator φ, then τ can be expressed as (Cherubini et al., 2004 p. 123):

τ = 4 ∫₀¹ [φ(v)/φ′(v)] dv + 1.

(2.3)

We have that τ = −1 iff C = C⁻ and τ = 1 iff C = C⁺, where C⁻ = max(u + v − 1, 0) denotes the minimum copula and C⁺ = min(u, v) denotes the maximum copula. We have, in general, that:

C⁻ ≤ C ≤ C⁺.

An unbiased estimator of τ in (2.2) is given by

τₛ = [2 / (n(n − 1))] Σ_{i<j} sgn[(Xᵢ − Xⱼ)(Yᵢ − Yⱼ)]

(2.4)

where n denotes the sample size (Cherubini et al., 2004 p. 99).

Equations (2.3) and (2.4) are useful results because they can be used to estimate the parameter of a bivariate Archimedean copula. This is done as follows: calculate τₛ from the data using (2.4). The τ in (2.3) is a function of α (the parameter of the bivariate Archimedean copula). Thus, setting (2.3) equal to τₛ and solving for α, we find an initial estimate of α that can be used in the parametric estimation procedures discussed in Chapter 1.
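As a sketch of this moment-matching idea, consider the Clayton copula (an Archimedean family not singled out in the text but convenient here, since its relationship τ = α/(α + 2) inverts in closed form; the Frank relation in (2.7) would require numerical inversion):

```python
def kendall_tau_sample(x, y):
    """Sample version of Kendall's tau, equation (2.4)."""
    n = len(x)
    total = 0
    for i in range(n):
        for j in range(i + 1, n):
            prod = (x[i] - x[j]) * (y[i] - y[j])
            total += (prod > 0) - (prod < 0)  # sgn of the product of differences
    return 2.0 * total / (n * (n - 1))

def clayton_alpha_from_tau(tau):
    """Initial Clayton parameter estimate by inverting tau = alpha/(alpha + 2)."""
    return 2.0 * tau / (1.0 - tau)
```

The resulting α can then serve as the starting value for the parametric estimation procedures of Chapter 1.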

2.3.2. Spearman's rho

Spearman's rho is defined by

ρ = 12 ∫∫_{I²} uv dC(u, v) − 3.

(2.5)

We have that ρ = −1 iff C = C⁻ and ρ = 1 iff C = C⁺. An unbiased estimator of ρ in (2.5) is

ρₛ = Σᵢ(Rᵢ − R̄)(Sᵢ − S̄) / √[Σᵢ(Rᵢ − R̄)² Σᵢ(Sᵢ − S̄)²]

(2.6)

where Rᵢ is the rank of Xᵢ among X₁, ..., Xₙ and Sᵢ is the rank of Yᵢ among Y₁, ..., Yₙ (Cherubini et al., 2004 pp. 100-103).
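A direct transcription of (2.6), assuming continuous data with no tied observations:

```python
def ranks(values):
    """Rank of each observation within its own series (1 = smallest; ties not handled)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho_sample(x, y):
    """Sample version of Spearman's rho, equation (2.6)."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```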

2.3.3. Advantages and shortcomings

Some of the shortcomings of linear correlation can be addressed with rank correlation. We have that: rank correlation is invariant under strictly increasing transformations of the risk factors; the risk factors need not have finite variances; and the values -1 and 1 are always attained for perfectly negatively and perfectly positively dependent risk factors, so the whole interval [-1,1] of correlations can be attained.

The shortcomings that remain are: a value of zero does not necessarily imply the independence of the risk factors; the rank correlation is a scalar measure that cannot tell us everything about the dependence between the risk factors; and rank correlation is mathematically less tractable than linear correlation.

Embrechts et al. (2002) can be consulted for a more detailed discussion.

2.4. THE COPULA AS A DEPENDENCE MEASURE

2.4.1. Background

Embrechts et al. (1999b) show that the dependence structure as summarised by a copula is invariant under increasing and continuous transformations of the marginals. That is, if (X₁, ..., Xₙ) has copula C and T₁, ..., Tₙ are increasing continuous functions, then (T₁(X₁), ..., Tₙ(Xₙ)) also has copula C.

Another useful property of a copula C, discussed in Chapter 1, is that the domain of C is [0,1]ⁿ. It was also shown in Chapter 1 that if C is an Archimedean copula, then C is symmetric.

With these arguments we have that Archimedean copulas allow for three of the desired properties of dependence measures as discussed in Section 2.1.2 (Cherubini et al., 2004 pp. 70-73).

2.4.2. Example: Comparison of dependence measures

We wish to quantify the dependence structure between ABSA and FirstRand share prices. Both companies are in the banking sector which means that certain characteristics are shared, for instance, the same types of services are offered. We would expect that external factors such as the exchange rate and government monetary policy should have the same effect on the share prices of the two companies.

To test the dependence between two risk factors in practice we usually only consider the most recent data, because we know that relationships between financial variables change over time. By only taking the most recent data, we should be able to get a more reliable estimate of the prevailing dependence between the stocks.

In this example we will divide the data into six-month blocks; for each block a copula will be fitted and the resulting a parameter will be used to determine Kendall's tau using (2.3).

We have stock price data available from November 1994 to December 2004. The 10-day log-returns of each series were calculated. The CML technique was used to estimate the parameter of each of the copula functions described in Appendix A, Section A.3. The goodness-of-fit tests were then applied to see which copulas show an adequate fit. The results of the Kolmogorov-Smirnov and AIC tests for all the copulas for which adequate parameter estimates could be found are shown in Table 2.1.

                      Kolmogorov-Smirnov                    AIC
                 a        T        p-value
Copula Type 1    1.0004   0.0597   0.2090    accept    -902.5
Copula Type 3    0.99     0.0605   0.1841    accept    -911
Copula Type 5    4.335    0.036    0.7960    accept    -991.7
Copula Type 6    1.8671   0.0704   0.0498    accept    -912.4
Copula Type 12   1.1643   0.0469   0.4080    accept    -1059
Copula Type 14   1.304    0.043    0.5622    accept    -1141

Table 2.1. The goodness-of-fit test results for each of the fitted copulas.

We see that copula types 5 and 14 showed the best fit. A graphical test confirms this result, so in the rest of this chapter the focus is on types 5 and 14.

For the two chosen copulas we find from (2.3) that Kendall's τ can be calculated as (De Matteis, 2001):

τ = 1 + 4(D₁(α) − 1)/α     for copula type 5
τ = 1 − 4/(2 + 4α)         for copula type 14

(2.7)

where D₁ is the Debye function defined by Dₖ(α) = (k/αᵏ) ∫₀^α tᵏ/(exp(t) − 1) dt.

The remainder of the analysis can be summarised as follows: The data are divided into 6-monthly blocks. In each block the copula parameter α is estimated for both the Frank copula (type 5 in Appendix A) and the type 14 copula.

Using (2.7) we estimate the parametric form of τ from the fitted parameter for both copula types.

Calculate τₛ from (2.4). Calculate the sample version of the linear Pearson correlation coefficient.
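The two parametric forms in (2.7) can be evaluated numerically. The sketch below approximates the Debye function D₁ with a simple midpoint rule (the integrand t/(eᵗ − 1) is finite at 0) and, as a check, reproduces the May-95 estimates reported in Table 2.2:

```python
import math

def debye1(a, n=20000):
    """D_1(a) = (1/a) * integral from 0 to a of t/(exp(t) - 1) dt, midpoint rule."""
    h = a / n
    total = sum((k + 0.5) * h / math.expm1((k + 0.5) * h) for k in range(n))
    return total * h / a

def tau_frank(a):
    """Kendall's tau implied by the Frank (type 5) copula parameter, equation (2.7)."""
    return 1.0 + 4.0 * (debye1(a) - 1.0) / a

def tau_type14(a):
    """Kendall's tau implied by the type 14 copula parameter, equation (2.7)."""
    return 1.0 - 4.0 / (2.0 + 4.0 * a)
```

For the May-95 block, tau_frank(4.3236) gives approximately 0.4116 and tau_type14(1.247) approximately 0.4276, matching the first row of Table 2.2.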

[PP-plots of the conditional distribution function for each of the fitted copula types (types 1, 3, 5, 6, 12 and 14).]

The results of the analysis are summarised in Table 2.2 below.

             Copula Type 5          Copula Type 14
End Period   a        Kendall's τ   a        Kendall's τ   Kendall's τₛ   Linear Correlation
May-95       4.3236   0.4116        1.247    0.4276        0.4298         0.4624
Nov-95       2.5699   0.2686        1.1115   0.3795        0.2705         0.4021
May-96       1.756    0.1894        1.0411   0.3511        0.1983         0.1268
Nov-96       3.6172   0.3587        1.1176   0.3818        0.3686         0.4847
Jun-97       -0.9013  -0.0993       1.0269   0.3451        -0.1036        0.1494
Dec-97       3.4069   0.3417        1.1618   0.3982        0.3475         0.5443
Jun-98       2.7554   0.2855        1.2516   0.4291        0.2839         0.4422
Dec-98       9.1382   0.641         2.3885   0.6538        0.6475         0.8141
Jun-99       3.755    0.3695        1.2118   0.4158        0.3528         0.5646
Dec-99       4.0168   0.3894        1.155    0.3958        0.3864         0.5546
Jul-00       6.5147   0.54          1.7248   0.5505        0.547          0.7387
Jan-01       5.9603   0.5121        1.3688   0.4649        0.5092         0.6473
Jul-01       2.9372   0.3017        1.1836   0.406         0.2951         0.5002
Jan-02       4.8313   0.446         1.3224   0.4513        0.4364         0.6718
Jul-02       6.9127   0.5584        1.6559   0.5362        0.5719         0.8006
Jan-03       6.5874   0.5434        1.6784   0.541         0.5538         0.7603
Aug-03       6.7748   0.5522        1.7026   0.546         0.5471         0.7375
Feb-04       5.8288   0.505         1.7114   0.5478        0.5223         0.7587
Aug-04       5.1294   0.4647        1.456    0.4887        0.4555         0.7079

Table 2.2. Estimated parameter α and resulting Kendall's τ and Kendall's sample τₛ for each of the 6-month data blocks.

Figure 2.1. Comparison of the parametric and non-parametric estimates of Kendall's τ for copula types 5 and 14 for each of the six-month blocks.

Figure 2.1 shows a plot of the parametric Kendall's tau against its sample equivalent for each of the 6-month data blocks. We see that the implied τ from copula type 5 corresponds to the sample equivalent. However, the same cannot be said of copula type 14. This is an interesting result: even though both copula types 5 and 14 fit the data adequately in each of the 6-month blocks, and even though copula type 14 shows the superior fit when we use the AIC value as criterion, the graphical analysis shows that the dependence structure implied by copula type 5 is more accurate over time.

Another interesting comparison is between Kendall's tau calculated from the copula type 5 parameter and the linear correlation measure. This is shown in Figure 2.2 below. In most cases the linear correlation is much higher than the parametric τ. From the arguments in Section 2.2 we know the shortcomings of the linear correlation measure. This analysis shows instances where, judging by the linear correlation measure alone, we would find a correlation of 80% that seems very high. However, Kendall's τ shows that the actual dependence is only around 60%. This gives a whole different view of the dependence between the variables. Should traders use only the linear correlation measure to find the optimal hedge, they may show greater losses than expected.

Figure 2.2. Comparison of Kendall's τ for copula type 5 and the linear correlation number for each of the 6-month data blocks.

The problem with this type of analysis is that it is difficult to assess the dependence between the stocks, because the dependence measure is a single number. One way to extract more information is to estimate the probability that the two stocks move together in a given interval. For instance, we can estimate the probability that both stocks show a return between 5% and 10%. We can consider various intervals as illustrated in Table 2.3 below. The probabilities are estimated from the empirical equivalent of the following relationship:

P(a < R_A ≤ b, c < R_F ≤ d) = P(R_A ≤ b, R_F ≤ d) − P(R_A ≤ b, R_F ≤ c) − P(R_A ≤ a, R_F ≤ d) + P(R_A ≤ a, R_F ≤ c)

where R_A and R_F denote the 1-day log-returns on the ABSA and FirstRand stock prices respectively.

Table 2.3 shows the estimated probability that both ABSA and FirstRand stock prices have log-returns in the given intervals. We see that the highest probability is 37%, which corresponds to the event that both stocks move down by between 0% and 5%. This probability is indicated by two stars because it is the highest probability. The next highest set of probabilities is denoted with one star. This table gives a very useful summary of the dependence between the two stocks. The results in the table confirm that the relationship between the stocks is positive, because the highest probabilities occur when both stocks move up, or both move down. However, because these probabilities are still low, we know that the relationship is not very strong.

                             FIRSTRAND RETURNS
ABSA RETURNS     (-10%; -5%)   (-5%; 0%)   (0%; 5%)   (5%; 10%)
(-10%; -5%)      0%            1%          0%         0%
(-5%; 0%)        1%            37%**       12%*       0%
(0%; 5%)         0%            19%*        24%*       2%
(5%; 10%)        0%            1%          1%         0%

Table 2.3. Dependence matrix calculated from the Frank copula.
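The empirical equivalent of the rectangle relationship above can be sketched as follows (ra and rf are hypothetical arrays of 1-day log-returns):

```python
def joint_prob(ra, rf, a, b, c, d):
    """Empirical P(a < R_A <= b, c < R_F <= d) via the four-term rectangle formula."""
    n = len(ra)

    def F(x, y):
        # empirical joint distribution function
        return sum(1 for u, v in zip(ra, rf) if u <= x and v <= y) / n

    return F(b, d) - F(b, c) - F(a, d) + F(a, c)
```

Evaluating joint_prob over a grid of return intervals produces a dependence matrix of the kind shown in Table 2.3.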

In practice an idea may be to flag only probabilities greater than 80% with two stars; probabilities between 50% and 80% with one star; and low probabilities with no stars. That makes it easier to identify strong relationships between market variables.

2.5. CONCLUDING REMARKS

A valid dependence measure should have a number of logically desirable characteristics. These characteristics are usually assumed when using the linear correlation number in practice. However, the examples in this chapter illustrate how the linear correlation number may be misleading in some instances.

Rank correlation measures are briefly considered, particularly their relationship to copulas. It has been shown how copula theory can be applied in practice to obtain a more general view on the dependence structure between risk factors.

CHAPTER 3: RISK MANAGEMENT AND EXTREMES

3.1. OVERVIEW

In practice risk numbers are calculated from very basic measures that are easy to understand and implement. The simplicity of the measures leads to certain inadequacies that traders can use to their advantage by putting on more trades with certain characteristics. To illustrate this, suppose it is known that the risk of a certain type of deal is underestimated. When the performance of traders is measured on a risk/reward trade-off, it is important for them to show high profits for the least amount of risk. This means that if a deal shows a large potential profit and the risk measures underestimate the risk, the trader would trade more of these deals. The risk manager would be aware of this, but the only way to keep the trader in line would be to find superior risk measures that capture the risk adequately.

There are two main approaches towards calculating the risk of a portfolio. In the first approach the following steps are followed: Determine the underlying risk factors of the portfolio. These would typically be interest rates, spot prices and dividend yields. Calculate stress moves for each risk factor. A stress move is a high percentile, say the 99th, of the log-return series. We do not expect the risk factor to show a return greater than this number more than once every 100 days. Suppose the stress move is 30%. Then a stress factor series is determined as -30% to 30% in steps of 5%. It is necessary to have a series of stress factors because, when structuring a portfolio, a 30% stress move in the risk factor may result in a profit for the portfolio, whereas a 10% move could result in a loss. This happens because the value of the portfolio is not necessarily a linear function of the underlying risk factors. In practice the upward moves and downward moves are analyzed separately because of the skewness of the distributions of the risk factors. The risk factors are stressed individually with the appropriate series of stress factors and then the influence of each stress factor on the portfolio value is determined. In other words, if we adjust the interest rates with each stress factor in the series, then in each case we can see whether our portfolio will show a profit or a loss. The biggest loss across all the values in the stress factor series is assumed to be the interest rate risk number. The price risk number, dividend yield risk number, etc. are calculated similarly. The total risk number for the portfolio is the sum of the individual risks.
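A minimal sketch of this first approach. The per-factor revaluation function is hypothetical; each factor is stressed over its own ladder and the worst losses are summed without allowing offsets:

```python
def factor_risk(revalue, stress_move, steps=13):
    """Worst P&L over the ladder -stress_move..+stress_move (13 points gives 5% steps for a 30% move)."""
    width = 2.0 * stress_move / (steps - 1)
    ladder = [-stress_move + i * width for i in range(steps)]
    return min(revalue(s) for s in ladder)

def total_risk(revaluers, stress_moves):
    """First approach: sum the individual worst losses; no offsets between risk factor classes."""
    return sum(factor_risk(rv, m) for rv, m in zip(revaluers, stress_moves))
```

Because each revaluation function is evaluated in isolation, this reproduces the conservatism described next: diversification between factor classes is ignored entirely.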

A problem with this approach is that the risk is typically overestimated because no allowance is made for offsets between different risk factor classes. The effect of each risk factor is measured independently of all other risk factors, which is a very conservative approach.

The second approach is known as the grid method. The following steps are implemented: Calculate the series of stress moves for each risk factor as in the previous method. Create a grid where all risk factors are stressed at the same time with the various combinations of stress factors. The profit-and-loss of the portfolio is calculated for all combinations of risk factor stress moves. The biggest portfolio loss is assumed to be the total risk number for the portfolio.

The problem with this approach is the assumption that the worst possible move in each of the risk factors occurs at exactly the same time, which means that we may be overstating the risk. We also do not know the probability with which such an extreme event will occur.
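Under the same hypothetical revaluation set-up, the grid method evaluates every combination of stress factors at once:

```python
from itertools import product

def grid_risk(revalue, ladders):
    """Grid method: worst P&L over all combinations of stress factors across risk factors."""
    return min(revalue(combo) for combo in product(*ladders))
```

The minimum is taken over the full Cartesian product of the ladders, which is exactly the implicit assumption criticised above: the worst move in every factor is allowed to occur simultaneously, with no probability attached to that event.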

In this chapter we examine ways in which to improve the manner in which portfolio risk is calculated. Firstly we use extreme value theory (EVT) to determine the stress moves for each risk factor. Back-testing shows that EVT captures extreme market moves more accurately than the usual historical value-at-risk number, which is defined as a high percentile of the log-return distribution. By using copula theory it is possible to calculate the probability that these extreme moves occur simultaneously.

3.2. EXTREME VALUE THEORY

There are two main approaches to modelling extremes, namely the block-maxima approach and the peaks-over-threshold approach. These two approaches are discussed in this section.

3.2.1. Block - Maxima Approach

The first approach to modelling extreme values is known as the block-maxima approach. Let the series (Xᵢ, i = 1, ..., N) of random variables be independent and identically distributed with a common distribution function F. The idea is to split the data series into k blocks, each of size n, such that kn ≤ N, and then to calculate the largest value in each block. Let Mⱼ, j = 1, ..., k denote the series of block maxima.

It is known that there exist sequences of constants aₙ > 0 and bₙ such that

lim_{n→∞} P[(Mₙ − bₙ)/aₙ ≤ x] = lim_{n→∞} Fⁿ(aₙx + bₙ) = H(x)

for some non-degenerate distribution function H. F is said to be in the maximum domain of attraction of H (Embrechts et al., 1997 pp. 128-151; McNeil, 1999; McNeil, 1998).

According to the Fisher-Tippett theorem (Embrechts et al., 1997 p. 152), H must be an extreme value distribution of the form H(x) = H_ξ((x − μ)/σ) for some μ and σ > 0, where H_ξ is defined by

H_ξ(x) = exp[−(1 + ξx)^(−1/ξ)],   ξ ≠ 0
H_ξ(x) = exp[−exp(−x)],           ξ = 0

(3.1)

where 1 + ξx > 0 and ξ is the shape parameter. This distribution function is known as the generalised extreme value (GEV) distribution and (3.1) is the Jenkinson-von Mises representation of the GEV distribution function. This parametric form subsumes distributions which are known by other names: for ξ > 0 it is the Frechet distribution, for ξ = 0 it is the Gumbel distribution and for ξ < 0 it is the Weibull distribution (Embrechts et al., 1997 p. 152).
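The Jenkinson-von Mises form (3.1) translates directly into code; this sketch includes the location and scale parameters, with the ξ = 0 (Gumbel) branch handled separately:

```python
import math

def gev_cdf(x, xi, mu=0.0, sigma=1.0):
    """GEV distribution function H evaluated at (x - mu)/sigma, equation (3.1)."""
    z = (x - mu) / sigma
    if xi == 0.0:
        return math.exp(-math.exp(-z))   # Gumbel case
    t = 1.0 + xi * z
    if t <= 0.0:
        return 0.0 if xi > 0 else 1.0    # outside the support 1 + xi*z > 0
    return math.exp(-t ** (-1.0 / xi))   # Frechet (xi > 0) or Weibull (xi < 0)
```

As ξ → 0 the first branch is the continuous limit of the second, which the separate case simply makes numerically explicit.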

When the assumption that the data series is iid is relaxed, the normalised block maxima of a stationary time series still follow the GEV distribution asymptotically. This is evident from the following argument: let Xᵢ denote a stationary time series with extremal index θ, let M_r = max(X₁, ..., X_r), and let X̃ᵢ denote an iid series with the same marginal distribution F, with maximum M̃_r. Then

P(M_r ≤ u_r) ≈ P(M̃_r ≤ u_r)^θ = F^{rθ}(u_r).

From this it is clear that the maximum of r observations from the stationary series with extremal index θ behaves like the maximum of rθ observations from the associated iid series, so that the quantile x_p can be estimated as:

x_p = μ − (σ/ξ)[1 − (−rθ ln(p))^(−ξ)]

(3.2)

where r denotes the block size. When estimating the parameters, ξ is the same as in the iid case, because raising the distribution function to the power of θ only affects the location and scaling parameters (McNeil, 1998; Chavez-Demoulin and Embrechts, 2001). The extremal index is discussed in Section 3.2.3. When the data series is iid, the quantile is calculated from:

x_p = H⁻¹_{ξ,μ,σ}(p) = μ − (σ/ξ)[1 − (−ln(p))^(−ξ)].

(3.3)

34

McNeil (1998) defines the concept of return levels. He shows that if R_{n,k} is a level expected to be exceeded in one n-block period every k n-blocks on average, and if the maxima follow the GEV distribution, then R_{n,k} can be estimated as follows:

R̂_{n,k} = μ̂ − (σ̂/ξ̂)[1 − (−ln(1 − 1/k))^(−ξ̂)],   ξ ≠ 0
R̂_{n,k} = μ̂ − σ̂ ln[−ln(1 − 1/k)],                  ξ = 0

(3.4)

Thus, a k-year return level, R_{252,k}, would satisfy the relation

P(M₂₅₂ ≥ R_{252,k}) = 1/k

where it is assumed that there are 252 working days in a year.
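A sketch of (3.4), which can be checked against the application in Section 3.5: with the GFI parameters of Table 3.2 and k = 100 monthly blocks it reproduces the 17.4% stress move quoted there.

```python
import math

def return_level(k, xi, mu, sigma):
    """Level expected to be exceeded once every k blocks on average, equation (3.4)."""
    p = 1.0 - 1.0 / k
    if xi == 0.0:
        return mu - sigma * math.log(-math.log(p))
    return mu - (sigma / xi) * (1.0 - (-math.log(p)) ** (-xi))
```

For example, return_level(100, 0.1361, 0.0432, 0.0205) gives approximately 0.174, i.e. a 17.4% monthly stress move.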

3.2.2. Peaks-over-Threshold Approach

The second EVT approach is called the peaks-over-threshold method. We assume that X₁, ..., Xₙ are iid and have a common distribution function F, where F is in the maximum domain of attraction of H_ξ for some ξ ∈ ℝ. In short we write this as F ∈ MDA(H_ξ). It can be shown that F ∈ MDA(H_ξ) is equivalent to (Embrechts et al., 1997 p. 128):

lim_{u↑x_F} P[(X − u)/a(u) > x | X > u] = lim_{u↑x_F} F̄(u + x a(u))/F̄(u) = (1 + ξx)^(−1/ξ),  ξ ≠ 0
                                                                         = exp(−x),          ξ = 0

where a(·) is a positive function, 1 + ξx > 0, x_F denotes the right end-point of the distribution and F̄(x) = 1 − F(x). This motivates the generalised Pareto distribution (GPD), which is given by

G_ξ(x) = 1 − (1 + ξx)^(−1/ξ),   ξ ≠ 0
G_ξ(x) = 1 − exp(−x),           ξ = 0

where x ≥ 0 if ξ ≥ 0 and 0 ≤ x ≤ −1/ξ if ξ < 0. Depending on the value of the shape parameter ξ, other distributions are obtained. When ξ > 0, G_ξ is a reparameterised version of the ordinary Pareto distribution, ξ = 0 corresponds to the exponential distribution and ξ < 0 is known as the Pareto type II distribution (Embrechts et al., 1997).

The number of exceedances N_u of a high threshold, u, is given by

N_u = Σᵢ₌₁ⁿ 1(Xᵢ > u).

Then the distribution function of the exceedances is given by:

F_u(y) = P(X − u ≤ y | X > u) = [F(u + y) − F(u)] / [1 − F(u)]


which implies that F̄(u + y) = F̄(u)F̄_u(y). To estimate F̄(u + y), we use the fact that the empirical distribution function can be used as an estimator of 1 − F(u), i.e.

1 − F̂(u) = (1/n) Σᵢ 1(Xᵢ > u) = N_u/n,

and that

lim_{u↑x_F} sup_x |F_u(x) − G_{ξ,β(u)}(x)| = 0,

so that

F̂_u(x) ≈ G_{ξ̂,β̂}(x)

for u large and β > 0 (Embrechts et al., 1997). Then it follows that

F̂(u + y) = 1 − (N_u/n)(1 + ξ̂y/β̂)^(−1/ξ̂).

(3.5)

By setting (3.5) equal to p and noting that y = x − u, we solve for x to get an estimate of the quantile x_p as:

x̂_p = u + (β̂/ξ̂)[((n/N_u)(1 − p))^(−ξ̂) − 1]

(3.6)

where N_u is the number of exceedances above the threshold u. Similarly, it follows that when ξ = 0 the quantiles are calculated from:

x̂_p = u − β̂ ln[(n/N_u)(1 − p)].

(3.7)

When the iid assumption is relaxed, the quantile is calculated by using N* instead of N_u in (3.6) and (3.7), where N* is the number of block maxima exceeding u (Kluppelberg, 2002).
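The quantile formulas (3.6) and (3.7) can be sketched as a single function of the fitted GPD parameters and the exceedance count:

```python
import math

def pot_quantile(p, u, xi, beta, n, n_u):
    """High quantile x_p from the fitted GPD, equations (3.6) and (3.7)."""
    ratio = (n / n_u) * (1.0 - p)
    if xi == 0.0:
        return u - beta * math.log(ratio)                 # equation (3.7)
    return u + (beta / xi) * (ratio ** (-xi) - 1.0)       # equation (3.6)
```

For a dependent series, n_u would be replaced by N* as described above.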

3.2.3. Extremal Index

The extremal index, θ, characterises the relationship between the dependence structure in the data and the extremal behaviour. According to Embrechts et al. (1997 p. 413), not every strictly stationary sequence has an extremal index. They show three ways in which the extremal index can be estimated:

Approach 1: θ̂⁽¹⁾ = [k ln(1 − K_u/k)] / [n ln(1 − N_u/n)]

Approach 2: θ̂⁽²⁾ = K_u / N_u

Approach 3: θ̂⁽³⁾ = (1/N_u) Σᵢ 1(Xᵢ > u, Xᵢ₊₁ ≤ u, ..., Xᵢ₊ᵣ ≤ u)

where θ = extremal index to be estimated; r = number of data points in each block; k = number of blocks the data is divided into; N_u = total number of exceedances over the threshold; u = threshold value; K_u = number of block maxima exceeding the threshold; and n = total sample size.

Approach 2 gives a rough idea of how the extremal index can be interpreted: if θ̂⁽²⁾ is close to 1, this is an indication of less clustering, because the number of block maxima exceeding the threshold is then very close to the total number of exceedances. With a great deal of clustering, we can expect that many high values occur in one block of data, which means that one maximum represents a whole cluster of points, so that θ̂⁽²⁾ will be close to zero.
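Approach 2 is straightforward to sketch; here x is a hypothetical return series, r the block size and u the threshold:

```python
def extremal_index_blocks(x, r, u):
    """Approach 2: number of block maxima over u divided by the total number of exceedances."""
    n_u = sum(1 for v in x if v > u)
    k = len(x) // r
    k_u = sum(1 for j in range(k) if max(x[j * r:(j + 1) * r]) > u)
    return k_u / n_u if n_u > 0 else float("nan")
```

When exceedances cluster inside a few blocks the ratio falls well below 1, in line with the interpretation above.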

3.2.4. Practical application

It is possible to use the methods described in this section to determine the stress moves of each risk factor. In Section 3.5 we show a practical application of this theory.

3.3. MULTIVARIATE EXTREME VALUE DISTRIBUTION

Let (X_{1,i}, ..., X_{d,i}), i = 1, 2, ..., denote an iid sequence of random vectors with a multivariate distribution function F and marginal distributions F₁, ..., F_d. Let (M_{1,n}, ..., M_{d,n}) denote the sequence of block maxima vectors calculated from data blocks of size n. Then, similarly to what was described in the univariate case (Section 3.2.1), we have that

lim_{n→∞} P[(M_{1,n} − b_{1,n})/a_{1,n} ≤ x₁, ..., (M_{d,n} − b_{d,n})/a_{d,n} ≤ x_d] = H_∞(x₁, ..., x_d),  ∀xᵢ ∈ ℝ, i = 1, ..., d,

with a_{i,n} > 0, i = 1, ..., d, n ≥ 1, if and only if for each i = 1, ..., d there exist constants a_{i,n} and b_{i,n} and a non-degenerate limit distribution Hᵢ such that

lim_{n→∞} P[(M_{i,n} − b_{i,n})/a_{i,n} ≤ xᵢ] = Hᵢ(xᵢ)  ∀xᵢ ∈ ℝ

and there exists a copula C_∞ such that

C_∞(u₁, ..., u_d) = lim_{n→∞} Cⁿ(u₁^{1/n}, ..., u_d^{1/n})

(Bouye, 2002). From this it follows that H_∞(x₁, ..., x_d) = C_∞(H₁(x₁), ..., H_d(x_d)), which gives the relationship between the multivariate generalized extreme value (MGEV) distribution and an extreme copula.

In the next section we take a closer look at some extreme copulas that can be used in this framework.

3.4. EXTREME COPULAS - THE BIVARIATE CASE

3.4.1. Background

The focus is on the block-maxima approach to EVT as a lot of research is still needed to develop the theory to calculate a multivariate Generalized Pareto Distribution (GPD).

An extreme value copula should satisfy the following (Bouye, 2002):

C(u₁ᵗ, ..., uₙᵗ) = Cᵗ(u₁, ..., uₙ)  ∀t > 0

There are various families of functions which satisfy this property. However, they are difficult to use when working in more than two dimensions. In this dissertation the focus is on the bivariate case. Let ũᵢ = −ln(uᵢ) and let Φ denote the standard normal distribution function. Some of the better known parametric extreme copulas are given in Table 3.1 (Bouye et al., 2001b).

Family                      Copula function C(u₁, u₂)
Product copula (C⊥)         u₁u₂
Gumbel                      exp[−(ũ₁^α + ũ₂^α)^(1/α)],  α ∈ [1, ∞)
Gumbel II                   u₁u₂ exp[α ũ₁ũ₂/(ũ₁ + ũ₂)],  α ∈ [0, 1]
Galambos                    u₁u₂ exp[(ũ₁^(−α) + ũ₂^(−α))^(−1/α)],  α ∈ [0, ∞)
Husler-Reiss                exp[−ũ₁Φ(1/α + (α/2)ln(ũ₁/ũ₂)) − ũ₂Φ(1/α + (α/2)ln(ũ₂/ũ₁))],  α ∈ [0, ∞)
Marshall-Olkin              u₁^(1−α₁) u₂^(1−α₂) min(u₁^(α₁), u₂^(α₂)),  α₁, α₂ ∈ [0, 1]
Upper Frechet bound (C⁺)    min(u₁, u₂)

Table 3.1. Extreme copula functions for two variables.

Figure 3.1 shows plots of various extreme copula functions for α = α₁ = α₂ = 1. It is interesting to see how the various copula structures differ.


Figure 3.1. Plots of the various extreme copulas for α = 1.
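The Gumbel copula of Table 3.1, together with the max-stability property C(u₁ᵗ, u₂ᵗ) = Cᵗ(u₁, u₂), can be verified numerically:

```python
import math

def gumbel_copula(u1, u2, alpha):
    """Gumbel extreme copula from Table 3.1 (alpha = 1 gives the product copula)."""
    t1, t2 = -math.log(u1), -math.log(u2)
    return math.exp(-((t1 ** alpha + t2 ** alpha) ** (1.0 / alpha)))
```

At α = 1 the exponent collapses to ũ₁ + ũ₂ = −ln(u₁u₂), so the copula reduces to the product copula, while raising both arguments to a power t simply scales the exponent by t, which is exactly max-stability.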

3.4.2. Estimating the parameters

When estimating the parameters of the extreme copula functions, we follow the IFM method discussed in Chapter 1. It is sufficient to follow these steps: Calculate the block maxima series for each data series. Fit the GEV distribution given by (3.1) to each series of maxima. Each series of maxima is then converted to a series of uniformly distributed variables by applying the GEV distribution function fitted to that series. Calculate the copula density function and derive the log-likelihood function (shown in Appendix A.2 for the case of the Gumbel copula) using these uniformly distributed variables. Estimate the parameters by maximizing the log-likelihood function.
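The final two steps can be sketched as follows. We assume the GEV-transformed maxima are already available as pseudo-uniform pairs; the Gumbel copula density below follows by differentiating the copula in Table 3.1 twice, and the maximisation is a crude grid search rather than a proper optimiser:

```python
import math

def gumbel_loglik(alpha, uv):
    """Log-likelihood of the Gumbel copula density c = C(u,v)(uv)^-1 (t_u t_v)^(a-1) A^(1-2a) (A + a - 1)."""
    ll = 0.0
    for u, v in uv:
        tu, tv = -math.log(u), -math.log(v)
        A = (tu ** alpha + tv ** alpha) ** (1.0 / alpha)
        c = (math.exp(-A) * (tu * tv) ** (alpha - 1.0) / (u * v)
             * A ** (1.0 - 2.0 * alpha) * (A + alpha - 1.0))
        ll += math.log(c)
    return ll

def fit_gumbel_alpha(uv):
    """Grid-search maximum likelihood estimate of alpha >= 1 (a sketch, not a production optimiser)."""
    grid = [1.0 + 0.01 * i for i in range(301)]
    return max(grid, key=lambda a: gumbel_loglik(a, uv))
```

At α = 1 the density is identically 1 (the product copula), so the log-likelihood is zero; dependent pseudo-uniforms push the fitted α above 1.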

3.5. APPLICATION IN RISK MANAGEMENT

Consider an equity portfolio consisting of two stocks, namely Goldfields (GFI) and Harmony (HAR). These two companies are both in the mining sector and their stock prices are highly correlated. It is possible to determine what the extreme price move of each share is using the techniques discussed in Section 3.2.

Table 3.3 shows the market price and the nominal amount invested in each stock on the value date. On this date the portfolio had a negative value, because we had short positions in each stock. The total portfolio value is

PV_t = Σᵢ₌₁ⁿ P_{i,t} N_{i,t}

(3.8)

where PV_t = portfolio value on day t, P_{i,t} = price of stock i on day t, N_{i,t} = nominal invested in stock i on day t, and n = number of stocks in the portfolio.

The strategy is that because we expect the prices of both stocks to decrease, we can buy back the stocks at a cheaper price and thus lock in a profit. This means that the highest risk in this portfolio occurs when the prices of both stocks increase at the same time. To quantify the risk, it is necessary to estimate what the stress move of each stock is as well as the probability that the stress moves occur at the same time (as we would expect because of the high correlation between the stocks).

The stress moves are calculated first. Each series is considered separately. Let (Xᵢ, i = 1, ..., N) denote the stock price series. Then the following steps are followed:

Calculate the daily log-returns Yᵢ = ln(Xᵢ/Xᵢ₋₁);

Split the return series into monthly blocks and calculate the maximum return in each block, M₁ = max(Y₁, ..., Y₂₀), M₂ = max(Y₂₁, ..., Y₄₀), etc., to form a sequence of monthly maxima;

Fit the GEV distribution to this sequence of maxima;

Estimate the 99th percentile using the estimated parameters of the GEV distribution.

Using the concept of return level as discussed in Section 3.2.1, the 99th percentile is interpreted as the stress move that is expected to occur only once in every 100 months.


Figure 3.2. Distribution of the maxima and the QQ-plots, which show that the maxima are fitted adequately by the GEV distribution.

Table 3.2 shows the estimated GEV parameters for each of the companies. In both cases the Frechet distribution gives the best fit, which is common in financial data due to the heavy tails the financial data usually exhibits. Figure 3.2 shows the distribution of the

maxima as well as the QQ-plots. These results show that the GEV distribution fits the series of maxima reasonably well. There seem to be three outliers in the GFI series.

GEV Parameters     GFI        HAR
ξ                  0.1361     0.1095
σ                  0.0205     0.0258
μ                  0.0432     0.0469

Table 3.2. GEV parameters estimated for each series.

Using the estimated parameters, the 99th percentile for GFI is 17.4% and for HAR it is 20.1%. To calculate the stress loss, S_t, for the portfolio, we use

S_t = Σ_{i=1}^{n} K_i P_{i,t} N_i        (3.9)

where S_t = stress loss for the portfolio on day t, and K_i = stress factor for stock i.

The underlying assumption when calculating the stress loss from (3.9) is that we allow 100% offset between different stocks. This means that should we be long in one stock and short the same amount in another, we allow them to offset each other even though the correlation between them may not be 100%. This is a very dangerous assumption in practice, but it is not the focus of this example; we are mainly interested in finding the probability that the two stocks show extreme moves at the same time, so we will not consider the implications of this assumption any further.

STOCK   MARKET PRICE   NOMINAL   STRESS MOVE   PORTFOLIO VALUE   STRESS LOSS
GFI     66.06          -200      17.4%         -13 212.00        -2 298.89
HAR     61.00          -200      20.1%         -12 200.00        -2 452.20
TOTAL                                          -25 412.00        -4 751.09

Table 3.3. Portfolio constituents and stock value on the value date.

The stress loss calculation is illustrated in Table 3.3. The stress loss as a percentage of the portfolio value is 19% (= 4 751.09/25 412.00), which means that if both stocks move by their estimated stress moves at the same time, the portfolio will show a loss of 19%. To estimate the probability that this loss will occur, we fit the Gumbel copula to the data. The Gumbel copula is one of the extreme value copulas and, because of its simplicity, it is widely used in finance applications. See, for instance, Demarta and McNeil (2004), Breymann et al. (2002) and Dias and Embrechts (2003).
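As a check, the arithmetic of equation (3.9) and Table 3.3 can be reproduced in a few lines, using the prices, nominals and stress factors quoted above:

```python
# Reproducing the stress-loss arithmetic of equation (3.9) and Table 3.3.
positions = {            # stock: (market price, nominal, stress factor K_i)
    "GFI": (66.06, -200, 0.174),
    "HAR": (61.00, -200, 0.201),
}
portfolio_value = sum(p * n for p, n, _ in positions.values())
stress_loss = sum(k * p * n for p, n, k in positions.values())
print(round(portfolio_value, 2))                      # -25412.0
print(round(stress_loss, 2))                          # -4751.09
print(round(100 * stress_loss / portfolio_value, 1))  # 18.7, i.e. the ~19% above
```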

By maximising the log-likelihood function of the Gumbel copula (derived in Appendix A),

ℓ(α) = Σ_{i=1}^{n} [ ln( (ũ_{1,i} ũ_{2,i})^{α-1} / (u_{1,i} u_{2,i}) ) - (ũ_{1,i}^α + ũ_{2,i}^α)^{1/α} + (1/α - 2) ln(ũ_{1,i}^α + ũ_{2,i}^α) + ln( (ũ_{1,i}^α + ũ_{2,i}^α)^{1/α} + α - 1 ) ],

where ũ = -ln(u), the parameter α is estimated as 1.75. A plot of the estimated Gumbel copula is shown in Figure 3.3.
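A numerical maximum-likelihood fit of α can be sketched as follows. The data here are simulated pseudo-observations (ranks of correlated normals), not the GFI/HAR maxima, so the estimate will differ from the 1.75 quoted above. The log-likelihood is the Gumbel one derived in Appendix A, and the last line reports the implied Kendall's τ via the known Gumbel relation τ = 1 - 1/α.

```python
# Fitting the Gumbel copula parameter alpha by maximum likelihood on
# pseudo-observations. Data are simulated for illustration only.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

def gumbel_loglik(alpha, u1, u2):
    """Gumbel copula log-likelihood (same form as in Appendix A)."""
    t1, t2 = -np.log(u1), -np.log(u2)
    a = t1**alpha + t2**alpha
    return np.sum((alpha - 1) * (np.log(t1) + np.log(t2))
                  - np.log(u1) - np.log(u2)
                  - a**(1 / alpha) + (1 / alpha - 2) * np.log(a)
                  + np.log(a**(1 / alpha) + alpha - 1))

rng = np.random.default_rng(1)
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=1000)
u1 = rankdata(z[:, 0]) / (len(z) + 1)    # pseudo-observations in (0, 1)
u2 = rankdata(z[:, 1]) / (len(z) + 1)

res = minimize_scalar(lambda a: -gumbel_loglik(a, u1, u2),
                      bounds=(1.01, 10), method="bounded")
alpha_hat = res.x
print(f"estimated alpha: {alpha_hat:.2f}, implied Kendall tau: {1 - 1/alpha_hat:.2f}")
```

Using ranks rather than the raw observations keeps the fit of the dependence structure separate from the choice of marginal distributions, as in the canonical maximum-likelihood approach of Chapter 1.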

[Figure: surface plot of the Gumbel extreme copula C(u1, u2) on the unit square.]

Figure 3.3. Gumbel copula for α = 1.75.

The final step is to estimate the probability that the stress moves in the two stocks occur at the same time. This is calculated from the empirical equivalent of the following relation:

P(M1 > x1, M2 > x2) = 1 - P(M1 ≤ x1) - P(M2 ≤ x2) + P(M1 ≤ x1, M2 ≤ x2)
                    = 1 - H1(x1) - H2(x2) + H12(x1, x2)
                    = 1 - H1(x1) - H2(x2) + C(H1(x1), H2(x2))


which is obtained upon using the relationship between a copula and the multivariate extreme value distribution discussed in Section 3.3. In the present case this means that:

P(M1,20 > 17.4%, M2,20 > 20.1%) = 1 - H1(17.4%) - H2(20.1%) + C(H1(17.4%), H2(20.1%))
                                = 1 - 0.98993 - 0.98995 + 0.98509
                                = 0.00521

where Mi,20 = extreme values calculated from 20-day blocks for series i, and Hi = GEV cumulative distribution function for series i.

Thus, there is a 0.52% probability that this portfolio will show a loss of 19% or more.
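The calculation above can be verified directly from the Gumbel copula formula, taking the marginal GEV probabilities H1(17.4%) and H2(20.1%) as quoted in the text:

```python
# Joint exceedance probability under the Gumbel copula with alpha = 1.75,
# using the marginal probabilities quoted in the text.
import math

def gumbel_copula(u1, u2, alpha):
    t1, t2 = -math.log(u1), -math.log(u2)
    return math.exp(-(t1**alpha + t2**alpha) ** (1 / alpha))

h1, h2, alpha = 0.98993, 0.98995, 1.75
p_joint = 1 - h1 - h2 + gumbel_copula(h1, h2, alpha)
print(round(p_joint, 5))   # about 0.00521, in line with the text
```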

3.6. CONCLUDING REMARKS

In this chapter some of the techniques used in risk management have been discussed. These techniques have various shortcomings that are addressed by applying extreme value theory to calculate stress factors and then using copulas to model the dependence structure between risk factors. The theory behind bivariate extreme copulas was discussed and illustrated with a practical example.

CONCLUDING REMARKS

In this dissertation alternative ways to measure risk at a financial institution are explored. The most commonly used risk measures may not always be adequate due to their simplistic assumptions.

In Chapter 1 a back-test shows that the fitted copula adequately describes the structural relationship between two risk factors over time. The copula was fitted over various historical periods and in each case we find that the same copula fits the data. However, we do find that the copula parameter changes over time. It is argued that because the parameter of the copula is closely related to a dependence measure such as Kendall's τ, it can be interpreted as a dependence parameter. From these results we propose that the structural relationship between risk factors does not change over time; only the strength of the dependence does.

This is a convenient result, because it means that we can generate stress scenarios from the copula and use this scenario set as an alternative to the usual historical scenarios. By just using historical scenarios we do not allow for the situations when the relationships between the variables change. This changing relationship is taken into account with a copula parameter that can be estimated as frequently as the risk system will allow. The stress scenarios are then used to calculate value-at-risk (VaR) numbers.

In Chapter 2 various dependence measures are considered. An interesting analysis shows how Kendall's τ can be used in a goodness-of-fit test to determine which copula best captures the relationship between two risk factors. We also compare linear correlation and Kendall's τ over time and show how misleading linear correlation can be as a measure of dependence. One problem with rank and linear correlation numbers is that they summarise the risk factor dependence in a single number, which may at times be difficult to interpret. To overcome this shortcoming we consider a dependence matrix based on the fitted copula. The idea of the dependence matrix is to measure the probability that two risk factors show a pre-defined move at the same time, for instance the probability that both risk factors move up by 10% within the next 10 days. This type of matrix is very useful, because it provides a more comprehensive view of how two risk factors move together.

The main concern in risk measurement is extreme events. In Chapter 3 extreme value theory is discussed and it is shown how to extend this theory to the multivariate case for the Generalised Extreme Value distribution. Using a bivariate extreme copula, it is shown how to estimate the probability that stress moves occur at the same time.

Traders can use copula theory to determine the dependence structure between variables when deciding on an optimal hedge. As discussed above, a copula gives more information about the dependence structure than linear or rank correlation numbers.

Another interesting financial application of copulas is the pricing of exotic options (Cherubini et al., 2004). However, that is not explored in this dissertation.

Copula theory definitely has its place in the risk measurement process. The problem is that its practical implementation becomes difficult when working in more than two dimensions. Consider for example a situation where the risk manager would like to generate stress scenarios from the copula for a specific portfolio. A typical portfolio may have 30 different risk factors, which means we have to generate values from a 30-dimensional copula. The problem is not generating the scenario numbers, but rather testing whether a 30-dimensional copula adequately fits the 30 risk factors.

REFERENCES

Bouye, E. (2002), Multivariate Extremes at Work for Portfolio Risk Measurement, HSBC Asset Management Europe (SA), Research Department, Paris, email: [email protected].

Bouye, E., Durrleman, A., Nikeghbali, A., Riboulet, G., Roncalli, T. (2001a), Copulas: an open field for risk management, Groupe de Recherche Operationnelle, Credit Lyonnais, Working Paper.

Bouye, E., Durrleman, A., Nikeghbali, A., Riboulet, G., Roncalli, T. (2001b), Copulas for finance — a reading guide and some applications, Groupe de Recherche Operationnelle, Credit Lyonnais, Working Paper.

Breymann, W., Dias, A. and Embrechts, P. (2002), Dependence Structures for Multivariate High-Frequency Data in Finance, Department of Mathematics, ETHZ, downloadable from www.math.ethz.ch/finance.

Burnham, K.P. and Anderson, D.R. (2004), Multimodel Inference: Understanding AIC and BIC in Model Selection, Sociological Methods & Research, Vol. 33, No. 2, pp. 261-304.

Chavez-Demoulin, V. and Embrechts, P. (2001), Smooth Extremal Models in Finance and Insurance, downloadable from http://www.math.ethz.ch/~embrechts/ or http://statwww.epfl.ch/people/chavez/.

Cherubini, U., Luciano, E. and Vecchiato, W. (2004), Copula Methods in Finance, John Wiley & Sons Ltd.

Davison, A.C. and Hinkley, D.V. (1999), Bootstrap Methods and their Application, Cambridge Series in Statistical and Probabilistic Mathematics, UK: Cambridge University Press.

Demarta, S. and McNeil, A.J. (2004), The t Copula and Related Copulas, Department of Mathematics, ETHZ, downloadable from www.math.ethz.ch/finance.

De Matteis, R. (2001), Fitting Copulas to Data, Diploma Thesis, Institute of Mathematics, University of Zurich.

Dias, A. and Embrechts, P. (2003), Dynamic copula models for multivariate high-frequency data in finance, Department of Mathematics, ETHZ, downloadable from www.math.ethz.ch/finance.

Durrleman, V., Nikeghbali, A., Roncalli, T. (2000), Which copula is the right one?, Groupe de Recherche Operationnelle, Credit Lyonnais, France.

Embrechts, P., Kluppelberg, C. and Mikosch, T. (1997), Modelling Extremal Events for Insurance and Finance, Applications of Mathematics: Stochastic Modelling and Applied Probability 33, Springer.

Embrechts, P., McNeil, A., Straumann, D. (1999), Correlation: Pitfalls and Alternatives, Department of Mathematics, ETHZ, downloadable from www.math.ethz.ch/finance.

Embrechts, P., Lindskog, F. and McNeil, A. (2001), Modelling Dependence with Copulas and Applications to Risk Management, Department of Mathematics, ETHZ, downloadable from www.math.ethz.ch/finance.

Embrechts, P., McNeil, A. and Straumann, D. (2002), 'Correlation and dependence in risk management: properties and pitfalls', in: Risk Management: Value at Risk and Beyond, ed. M.A.H. Dempster, Cambridge University Press, Cambridge, pp. 176-223.

Fermanian, J-D. and Scaillet, O. (2004), Some Statistical Pitfalls in Copula Modeling for Financial Applications, Université de Genève, downloadable from www.hec.unige.ch/professeurs/SCAILLET_Olivier/pages_web/pdfs/pitfalls.pdf.

Genest, C. and Rivest, L.-P. (1993), Statistical inference procedures for bivariate Archimedean copulas, Journal of the American Statistical Association, 88, pp. 1034-1043.

Kluppelberg, C. (2002), Risk Management with Extreme Value Theory, Center of Mathematical Sciences, Munich University of Technology, Germany, [email protected] muenchen.de.

McNeil, A.J. (1998), Calculating Quantile Risk Measures for Financial Return Series using Extreme Value Theory, Departement Mathematik, ETH Zentrum, CH-8092 Zurich, mcneil@math.ethz.ch.

McNeil, A.J., Extreme Value Theory for Risk Managers, Departement Mathematik, ETH Zentrum, CH-8092 Zurich, mcneil@math.ethz.ch.

McNeil, A.J. and Frey, R. (2000), Estimation of Tail-Related Risk Measures for Heteroscedastic Financial Time Series: An Extreme Value Approach, Departement Mathematik, ETH Zentrum, CH-8092 Zurich, mcneil@math.ethz.ch.

Sprent, P. (1993), Applied Nonparametric Statistical Methods, 2nd ed, London: Chapman & Hall.


APPENDIX A: LOGLIKELIHOOD FUNCTIONS FOR VARIOUS COPULA FAMILIES

OVERVIEW

In Chapter 1 we show how to estimate the parameters of copula functions. In this appendix we illustrate how to determine the log-likelihood functions for several copula families.

GUMBEL COPULA

The Gumbel copula is given by:

C(u1, u2) = exp[ -(ũ1^α + ũ2^α)^{1/α} ],   α ∈ [1, ∞)

where ũi = -ln(ui), so that the density function can be derived as follows:

c(u1, u2) = ∂²C(u1, u2) / (∂u1 ∂u2).

Write A = ũ1^α + ũ2^α, so that C(u1, u2) = exp(-A^{1/α}). Since ∂ũ1/∂u1 = -1/u1,

∂C/∂u1 = exp(-A^{1/α}) · (-(1/α) A^{1/α - 1}) · (α ũ1^{α-1}) · (-1/u1)
       = exp(-A^{1/α}) · A^{1/α - 1} · ũ1^{α-1} / u1.

Differentiating again with respect to u2 gives

∂²C/(∂u1 ∂u2) = (ũ1^{α-1} ũ2^{α-1} / (u1 u2)) · exp(-A^{1/α}) · [ A^{2/α - 2} + (α - 1) A^{1/α - 2} ],

so that

c(u1, u2) = (ũ1^{α-1} ũ2^{α-1} / (u1 u2)) · exp[-(ũ1^α + ũ2^α)^{1/α}] · (ũ1^α + ũ2^α)^{1/α - 2} · [ (ũ1^α + ũ2^α)^{1/α} + α - 1 ].    (A.1)

In Chapter 1 we see that the log-likelihood function can be calculated from equation (A.1) as:

ℓ(α) = Σ_{i=1}^{n} ln c(u_{1,i}, u_{2,i})
     = Σ_{i=1}^{n} [ ln( (ũ_{1,i} ũ_{2,i})^{α-1} / (u_{1,i} u_{2,i}) ) - (ũ_{1,i}^α + ũ_{2,i}^α)^{1/α} + (1/α - 2) ln(ũ_{1,i}^α + ũ_{2,i}^α) + ln( (ũ_{1,i}^α + ũ_{2,i}^α)^{1/α} + α - 1 ) ]

where n is the sample size.
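As a sanity check on (A.1), the analytic density can be compared with a finite-difference approximation of ∂²C/∂u1∂u2; the value α = 1.75 and the evaluation point (0.6, 0.4) below are chosen only for illustration.

```python
# Numerical check of the Gumbel density (A.1): compare the analytic c(u1, u2)
# with a central finite-difference estimate of d2C/du1du2 at a test point.
import math

alpha = 1.75

def C(u1, u2):
    """Gumbel copula C(u1, u2)."""
    t1, t2 = -math.log(u1), -math.log(u2)
    return math.exp(-(t1**alpha + t2**alpha) ** (1 / alpha))

def c_analytic(u1, u2):
    """Gumbel copula density as in equation (A.1)."""
    t1, t2 = -math.log(u1), -math.log(u2)
    a = t1**alpha + t2**alpha
    return (t1**(alpha - 1) * t2**(alpha - 1) / (u1 * u2)
            * math.exp(-a**(1 / alpha)) * a**(1 / alpha - 2)
            * (a**(1 / alpha) + alpha - 1))

u1, u2, h = 0.6, 0.4, 1e-4
c_numeric = (C(u1 + h, u2 + h) - C(u1 + h, u2 - h)
             - C(u1 - h, u2 + h) + C(u1 - h, u2 - h)) / (4 * h * h)
print(c_analytic(u1, u2), c_numeric)   # the two values should agree closely
```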

