arXiv:1412.1181v3 [stat.ME] 26 Mar 2015

Direct formulation to Cholesky decomposition of a general nonsingular correlation matrix

Vered Madar

University of North Carolina at Chapel Hill, North Carolina, USA.

Email address for correspondence: [email protected]

The work described in this paper was funded in part by the US National Institute of Mental Health (RC2 MH089951, principal investigator P.F.S.) as part of the American Recovery and Reinvestment Act of 2009.

Abstract

We present two novel and explicit parametrizations of Cholesky factor of a nonsingular correlation matrix. One that uses semi-partial correlation coefficients, and a second that utilizes the differences between two successive ratios of determinants. For each, we offer a useful application.

Preprint submitted to Elsevier, March 27, 2015

1. Cholesky decomposition - Introduction

The Cholesky decomposition is favored by many for expressing the covariance matrix (Pourahmadi 2011). For a positive-definite symmetric matrix it provides a unique representation in the form of $LL^T$, with a lower triangular matrix $L$ and the upper triangular $L^T$. Offered by a convenient $O(n^3)$ algorithm, $L$ itself can be used to transform independent normal variables into dependent multinormal variables (Moonan 1957), which is particularly useful for Monte Carlo simulations.

Explicit forms of $L$ are known for limited correlation structures such as the equicorrelated (Tong 1990, pp. 104), the tridiagonal (Miwa et al. 2003), and the multinomial (Tanabe & Sagae 1992). The general correlated case is typically computed by using spherical parametrizations (Pinheiro & Bates 1996, Rapisarda et al. 2007, Rebonato & Jackel 2007, Mittelbach et al. 2012), a multiplicative ensemble of trigonometric functions of the angles between pairs of vectors. Others may use a Cholesky matrix (Cooke et al. 2011, pp. 49) that utilizes the multiplication of partial correlations.

In this paper, we will present two explicit parametrizations of Cholesky factor for a positive-definite correlation matrix. Both parametrizations offer preferable, simpler alternatives to the multiplicative forms of the spherical parametrization and of the partial correlations. In Section 2 we show that the nonzero elements of Cholesky factor are the semi-partial correlation coefficients
$$\rho_{ij(1,\dots,i-1)} = \frac{\rho_{ij} - \rho_i^{*j} R_{i-1}^{-1} \rho_i^T}{\sqrt{1 - \rho_i R_{i-1}^{-1} \rho_i^T}},$$
where $R_{i-1}^{-1}$ is the inverse of the correlation matrix $R_{i-1} = (\rho_{kj})_{k,j=1}^{i-1}$, $\rho_i^{*j} = (\rho_{1j}, \rho_{2j}, \dots, \rho_{i-1,j})$, and $\rho_i \equiv \rho_i^{*i}$. The order of the $\rho_{ij(1,\dots,i-1)}$'s is determined by the Cholesky factorization, and the notation is borrowed from Huber's trivariate discussion of semi-partial correlation in regression (Huber 1981).

In Section 3 we uncover that the squares, $\rho^2_{ij(1,\dots,i-1)}$, are equivalent to the differences between two successive ratios of determinants, and we use this equivalence to construct the second parametrization for $L$. In Section 3.1 we extend the representation of $L$ to the structure of a nonsingular covariance matrix, and in Section 3.2 we study two inequality conditions that are essential for the positive-definiteness of $LL^T$. We conclude this paper by offering two possible applications, one for each of the suggested forms. In Section 4 we present a simple t-test that employs the semi-partial correlation structure for testing the dependence of a single variable upon a set of multivariate normals. In Section 5 we utilize the second parametrization to design a simple algorithm for the generation of random positive-definite correlation matrices. We end the paper with the simple case of the generation of random AR(1) correlation structures in Section 5.1.

2. The first parametrization for Cholesky factor

Let $R_n = (\rho_{ij})_{i,j=1}^n$ be a positive-definite correlation matrix, for which each sub-matrix $R_k = (\rho_{ij})_{i,j=1}^k$ is positive-definite. Let also $L = (l_{ij})_{i,j=1}^n$ be Cholesky factor of $R$, let $|R|$ be the determinant of $R$ and $R^{-1}$ its inverse, and let $\rho_i^{*j} = (\rho_{1j}, \rho_{2j}, \dots, \rho_{i-1,j})$ for $j \geq i$, so that $\rho_i \equiv \rho_i^{*i}$. To simplify writing, also set $R_0^{-1} \equiv 1$. The first representation of $L$ will use the semi-partial correlations

$$l_{ji} = \rho_{ij(1,\dots,i-1)} = \frac{\rho_{ij} - \rho_i^{*j} R_{i-1}^{-1} \rho_i^T}{\sqrt{1 - \rho_i R_{i-1}^{-1} \rho_i^T}}, \qquad i \leq j,$$

$$L = \begin{pmatrix}
1 & 0 & 0 & \cdots & 0 \\
\rho_{12} & \sqrt{1-\rho_{12}^2} & 0 & \cdots & 0 \\
\rho_{13} & \dfrac{\rho_{23}-\rho_{12}\rho_{13}}{\sqrt{1-\rho_{12}^2}} & \sqrt{1-\rho_3 R_2^{-1}\rho_3^T} & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & 0 \\
\rho_{1n} & \dfrac{\rho_{2n}-\rho_{12}\rho_{1n}}{\sqrt{1-\rho_{12}^2}} & \dfrac{\rho_{3n}-\rho_3^{*n} R_2^{-1}\rho_3^T}{\sqrt{1-\rho_3 R_2^{-1}\rho_3^T}} & \cdots & \sqrt{1-\rho_n R_{n-1}^{-1}\rho_n^T}
\end{pmatrix} \quad (1)$$
For $i = j$, we have $\rho_{ii(1,\dots,i-1)} = \sqrt{1 - \rho_i R_{i-1}^{-1} \rho_i^T}$, and $1 - \rho_i R_{i-1}^{-1} \rho_i^T > 0$ for positive-definite $R_i$. Some may recognize $1 - \rho_i R_{i-1}^{-1} \rho_i^T$ as the Schur complement of the matrix $R_{i-1}$ inside $R_i$ from the formula for computing the determinant of $R_i$, using the block matrix $R_{i-1}$ (Harville 1997, pp. 188),

$$|R_i| = \begin{vmatrix} R_{i-1} & \rho_i^T \\ \rho_i & 1 \end{vmatrix} = |R_{i-1}|\,\left(1 - \rho_i R_{i-1}^{-1} \rho_i^T\right). \quad (2)$$

T To show that Rn = LL we introduce Theorem 1.

Theorem 1. For $i \geq 1$ and $n \geq j \geq i+1$,
$$\rho_{i+1}^{*j} R_i^{-1} \rho_{i+1}^T = \sum_{k=1}^{i} \rho_{k,i+1(1,\dots,k-1)} \cdot \rho_{kj(1,\dots,k-1)}. \quad (3)$$
Since the Cholesky factor of a positive-definite matrix has a unique representation, Theorem 1 will serve as a general proof for the form (1).

Some may recognize Eq. (3) in Theorem 1 as the inner product used for the familiar algorithm of Cholesky decomposition (Harville 1997, pp. 235):
$$l_{ii} = \left(1 - \sum_{k=1}^{i-1} l_{ik}^2\right)^{1/2} \quad \text{and} \quad l_{ji} = \left(\rho_{ij} - \sum_{k=1}^{i-1} l_{jk}\, l_{ik}\right) \Big/\, l_{ii}.$$
Surprisingly, the equality in Theorem 1 seems to be unknown or neglected.
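To make the parametrization concrete, here is a minimal numerical sketch (Python with NumPy; the function name, 0-based indexing, and layout are ours, not the paper's) that fills $L$ column-by-column with the semi-partial correlations of Eq. (1):

```python
import numpy as np

def cholesky_semipartial(R):
    """Build the Cholesky factor of a correlation matrix R from the
    semi-partial correlations of Eq. (1):
    l_ji = (rho_ij - rho_i^{*j} R_{i-1}^{-1} rho_i^T)
           / sqrt(1 - rho_i R_{i-1}^{-1} rho_i^T)."""
    n = R.shape[0]
    L = np.zeros((n, n))
    L[:, 0] = R[0, :]                    # first column: l_j1 = rho_1j
    for i in range(1, n):                # 0-based column; paper's i = i + 1
        Rinv = np.linalg.inv(R[:i, :i])  # R_{i-1}^{-1}
        rho_i = R[:i, i]                 # rho_i = (rho_1i, ..., rho_{i-1,i})
        denom = np.sqrt(1.0 - rho_i @ Rinv @ rho_i)
        for j in range(i, n):
            rho_star = R[:i, j]          # rho_i^{*j} = (rho_1j, ..., rho_{i-1,j})
            L[j, i] = (R[i, j] - rho_star @ Rinv @ rho_i) / denom
    return L
```

For any positive-definite correlation matrix the result should agree with `np.linalg.cholesky(R)` up to floating-point error; that agreement is exactly the content of Theorem 1.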

The proof for Theorem 1 will be given in Appendix A, and will be heavily based on the recursive argument of Lemma 2:

Lemma 2. For $i \geq 1$ and $n \geq j \geq i+1$,
$$\rho_{i+1}^{*j} R_i^{-1} \rho_{i+1}^T = \rho_i^{*j} R_{i-1}^{-1} (\rho_i^{*i+1})^T + \frac{(\rho_{i,i+1} - \rho_i^{*i+1} R_{i-1}^{-1} \rho_i^T)(\rho_{ij} - \rho_i^{*j} R_{i-1}^{-1} \rho_i^T)}{1 - \rho_i R_{i-1}^{-1} \rho_i^T}. \quad (4)$$

The proof for Lemma 2 will be given in Appendix B.

Remark. Lemma 2 is useful to illustrate the recursive nature of the computation of the part $\rho_{i+1}^{*j} R_i^{-1} \rho_{i+1}^T$. As an exercise, we suggest verifying that $\rho_2^{*j} R_1^{-1} \rho_2^T = \rho_{1j}\rho_{12}$, and then computing $\rho_3^{*j} R_2^{-1} \rho_3^T$ by using Eq. (4). The result should be equal to $\rho_3^{*j} R_2^{-1} \rho_3^T = \rho_{13}\rho_{1j} + \rho_{23(1)}\rho_{2j(1)}$, as claimed by Theorem 1.

3. The second parametrization for Cholesky factor

The second parametrization for $L$ will follow directly from (1) by applying Lemma 3, which claims an equivalence between the semi-partial correlation coefficients and the differences between two successive Schur complements.

Lemma 3. Let us use the above notations and the positive-definiteness assumptions as before, and define
$$R_i^{*j} \equiv \begin{pmatrix} R_{i-1} & (\rho_i^{*j})^T \\ \rho_i^{*j} & 1 \end{pmatrix}.$$
Then for $j \geq i + 1 \geq 3$ the difference between two successive ratios of determinants (or Schur complements) is

$$|R_i^{*j}|/|R_{i-1}| - |R_{i+1}^{*j}|/|R_i| = \left(\rho_{ij} - \rho_i^{*j} R_{i-1}^{-1} \rho_i^T\right)^2 |R_{i-1}|/|R_i|. \quad (5)$$
The proof for Lemma 3 will be found in Appendix B. Setting $s_{ij} \equiv \operatorname{sign}(\rho_{ij(1,\dots,i-1)})$, it is possible to write Cholesky factor (1) in the equivalent form

$$L = \begin{pmatrix}
1 & 0 & 0 & \cdots & 0 \\
s_{12}\sqrt{1 - |R_2|} & \sqrt{|R_2|} & 0 & \cdots & 0 \\
s_{13}\sqrt{1 - |R_2^{*3}|} & s_{23}\sqrt{|R_2^{*3}| - \frac{|R_3|}{|R_2|}} & \sqrt{\frac{|R_3|}{|R_2|}} & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & 0 \\
s_{1n}\sqrt{1 - |R_2^{*n}|} & s_{2n}\sqrt{|R_2^{*n}| - \frac{|R_3^{*n}|}{|R_2|}} & s_{3n}\sqrt{\frac{|R_3^{*n}|}{|R_2|} - \frac{|R_4^{*n}|}{|R_3|}} & \cdots & \sqrt{\frac{|R_n|}{|R_{n-1}|}}
\end{pmatrix} \quad (6)$$
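As an illustration of (6), the following sketch (Python with NumPy; names and conventions are ours) evaluates each entry directly from determinants of the bordered matrices $R_i^{*j}$; since (6) determines only magnitudes, the signs $s_{ij}$ are borrowed here from a standard Cholesky factor:

```python
import numpy as np

def cholesky_det_ratios(R):
    """Cholesky factor of a correlation matrix R via the determinant
    ratios of form (6); a sketch, not the paper's own code."""
    n = R.shape[0]
    signs = np.sign(np.linalg.cholesky(R))   # supplies s_ij

    def ratio(i, j):
        # |R_i^{*j}| / |R_{i-1}| with 1-based i <= j; R_i^{*j} borders
        # R_{i-1} with rho_i^{*j}, and |R_1^{*j}| = |R_0| = 1
        if i == 1:
            return 1.0
        idx = list(range(i - 1)) + [j - 1]
        return np.linalg.det(R[np.ix_(idx, idx)]) / np.linalg.det(R[:i-1, :i-1])

    L = np.zeros((n, n))
    for j in range(1, n + 1):
        for i in range(1, j):                # off-diagonal entries of row j
            diff = max(ratio(i, j) - ratio(i + 1, j), 0.0)  # clip rounding error
            L[j-1, i-1] = signs[j-1, i-1] * np.sqrt(diff)
        L[j-1, j-1] = np.sqrt(ratio(j, j))   # equals sqrt(|R_j| / |R_{j-1}|)
    return L
```

Note that `ratio(j, j)` reduces to $|R_j|/|R_{j-1}|$ because $\rho_j^{*j} = \rho_j$, which is exactly the diagonal of (6).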

3.1. The extension to nonsingular covariance matrix

The Cholesky factor (6) can be easily extended to the general structure of a nonsingular covariance matrix when we replace each ratio $\sigma_j^2\, |R_i^{*j}|/|R_{i-1}|$ by its equivalent $|\Sigma_i^{*j}|/|\Sigma_{i-1}|$. We summarize this result in Theorem 4.

Theorem 4. Let $\Sigma_n$ be a nonsingular covariance matrix with entries $\sigma_{ij} = \sigma_i\sigma_j\rho_{ij}$. Let $\sigma_i^{*j} \equiv (\sigma_{1j}, \sigma_{2j}, \dots, \sigma_{i-1,j})$, set $|\Sigma_1^{*j}| \equiv \sigma_j^2$, $|\Sigma_0| \equiv 1$, and
$$\Sigma_i^{*j} \equiv \begin{pmatrix} \Sigma_{i-1} & (\sigma_i^{*j})^T \\ \sigma_i^{*j} & \sigma_i^2 \end{pmatrix}.$$
Then Cholesky factor $L = (l_{ij})_{i,j=1}^n$ for $\Sigma_n$ is given by $l_{ji} = 0$ for $j < i$, and by
$$l_{ji} = \begin{cases} \operatorname{sign}(\rho_{ij(1,\dots,i-1)})\, \sqrt{|\Sigma_i^{*j}|/|\Sigma_{i-1}| - |\Sigma_{i+1}^{*j}|/|\Sigma_i|} & \text{if } j > i, \\ \sqrt{|\Sigma_i|/|\Sigma_{i-1}|} & \text{if } i = j. \end{cases}$$
Remark. Mathematically, Theorem 4 might as well describe Cholesky factor for an arbitrary nonsingular symmetric matrix, when allowing the entries of $L$ to be complex numbers.

3.2. Two order conditions on the magnitudes of sub-determinants essential to a well-defined L

We start with the well-known order relation between the magnitudes of successive determinants (Yule 1907, Stuart et al. 2010, pp. 525)
$$1 \geq |R_2| \geq |R_3| \geq \cdots \geq |R_{n-1}| \geq |R_n| > 0. \quad (7)$$

One may alternatively see the order (7) as a direct result of Eq. (2) when adding equal signs to account for the cases $\rho_i = 0$. To the order (7) we will add a second order that arises from the positivity of the right-hand side of Eq. (5), and that seems to be rather new. For $j = 2, 3, \dots, n$,
$$1 \geq |R_2^{*j}| \geq |R_3^{*j}|/|R_2| \geq \cdots \geq |R_{j-1}^{*j}|/|R_{j-2}| \geq |R_j|/|R_{j-1}| > 0. \quad (8)$$

It is possible to view the order relations (7) and (8) as posing necessary and sufficient conditions for the correlation matrix $R = LL^T$ to be positive-definite: both order relations follow from the positive-definiteness of $R$, and, on the other hand, any failure to satisfy any of the determinant orderings in (7) or (8) will lead to an ill-defined $L$. In Section 5 we shall show how to use the conditions (7) and (8) to generate positive-definite random correlation structures.
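Both chains are easy to check numerically; the sketch below (Python with NumPy; the function name and layout are ours) verifies (7) and (8) by brute-force determinants:

```python
import numpy as np

def check_order_conditions(R):
    """Verify the determinant orderings (7) and (8) for a correlation
    matrix R; returns a pair of booleans (cond7, cond8)."""
    n = R.shape[0]
    dets = [np.linalg.det(R[:k, :k]) for k in range(1, n + 1)]  # |R_1|,...,|R_n|
    cond7 = all(a >= b for a, b in zip(dets, dets[1:])) and dets[-1] > 0
    cond8 = True
    for j in range(2, n + 1):
        # chain (8): 1 >= |R_2^{*j}| >= |R_3^{*j}|/|R_2| >= ... >= |R_j|/|R_{j-1}| > 0
        def ratio(i):
            if i == 1:
                return 1.0
            idx = list(range(i - 1)) + [j - 1]   # rows/columns of R_i^{*j}
            return np.linalg.det(R[np.ix_(idx, idx)]) / dets[i - 2]
        chain = [ratio(i) for i in range(1, j + 1)]
        cond8 = cond8 and all(a >= b for a, b in zip(chain, chain[1:])) and chain[-1] > 0
    return cond7, cond8
```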

4. Application I - A simple t-test for linear dependence

As a first application we will establish a procedure for testing the linear dependence of a single variable upon other variables by employing the first parametrization of Cholesky factor. Let $x = (x_1, \dots, x_p)$ be a matrix of $N$ samples from a $p$-variate random variable that is multinormally distributed. Assume that $N > p$, and let $\hat{R} = (r_{ij})_{i,j=1}^p$ be the estimated sample correlation matrix and $\{r_{ij(1,\dots,i-1)}\}_{i \leq j}$ be the nonzero elements of Cholesky factor for $\hat{R}$.

Suppose that, for some $k < p$, we wish to test the null hypothesis
$$H_{0k}: \rho_{ip(1,2,\dots,i-1)} = 0 \ \text{ for all } i \leq k,$$
against the alternative
$$H_{1k}: \rho_{ip(1,2,\dots,i-1)} \neq 0 \ \text{ for some } i \leq k.$$

Under $H_{0k}$, the estimator $r_{kp(1,2,\dots,k-1)}$ forms a simple t statistic (Morrison 2004),
$$T_{kp:N} = \frac{\sqrt{N-k}\; r_{kp(1,2,\dots,k-1)}}{\sqrt{1 - r^2_{kp(1,2,\dots,k-1)}}} \;\overset{H_{0k}}{\sim}\; t_{N-k}.$$
The hypothesis $H_{0k}$ can be rejected, at level $\alpha$ ($0 < \alpha < 1$), if $|T_{kp:N}| > t_{\alpha/2,\,N-k}$. One may further establish a sequential testing procedure that searches for the largest $k$ for which $H_{0k}$ can be rejected.
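A minimal sketch of the test (Python, assuming NumPy and SciPy are available; the function name and the 1-based index $k$ follow the paper's notation, the rest is ours):

```python
import numpy as np
from scipy import stats

def semipartial_t_test(x, k, alpha=0.05):
    """t-test of H_0k based on the (k, p) semi-partial correlation taken
    from the Cholesky factor of the sample correlation matrix.
    x is an (N, p) sample matrix; k is 1-based with k < p."""
    N, p = x.shape
    R_hat = np.corrcoef(x, rowvar=False)
    L = np.linalg.cholesky(R_hat)       # L[j-1, i-1] = r_{ij(1,...,i-1)}
    r = L[p - 1, k - 1]                 # r_{kp(1,...,k-1)}
    T = np.sqrt(N - k) * r / np.sqrt(1.0 - r**2)
    reject = abs(T) > stats.t.ppf(1.0 - alpha / 2.0, df=N - k)
    return T, reject
```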

Remark. We leave it to the reader to verify that the null hypothesis $H_{0k}$ is equivalent to $\rho_{1p} = \rho_{2p} = \cdots = \rho_{kp} = 0$.

5. Application II - Generating realistic random correlations

The problem of generating random correlation structures is well discussed in the literature (Marsaglia & Olkin 1984, Joe 2006, Mittelbach et al. 2012). However, in practice, many of the suggested procedures are not so easy to apply (Holmes 1991), and when applied, some typically fail to provide a sufficient number of realistic correlation matrices (Böhm & Hornik 2014). More recent algorithms for the generation of random correlations either utilize a beta distribution (Joe 2006, Lewandowski et al. 2009) or employ uniform angular values (Rapisarda et al. 2007, Rebonato & Jackel 2007, Mittelbach et al. 2012).

The algorithm we suggest in this section is considerably simpler. It is based on uniform values that are assigned to reflect the ratios $\{|R_i^{*j}|/|R_{i-1}|\}_{i \leq j}$ which constitute the parametrization (6). The order of the values of $\{|R_i^{*j}|/|R_{i-1}|\}_{i \leq j}$ will be chosen to preserve the orderings in (7) and (8), to ensure the positive-definiteness of $LL^T$. We will start by choosing $n-1$ random uniform values in $(0, 1]$, which will be assigned by their size to reflect the determinants $\{|R_j|\}_{j=2}^n$, as directed by (7). The diagonal of $L$ will be constructed from the ratios $\{|R_j|/|R_{j-1}|\}_{j=1}^n$. Then, for each row $j$ of $L$, an additional set of $j-2$ random uniform values will be chosen to serve as the ratios $\{|R_i^{*j}|/|R_{i-1}|\}_{i=2}^{j-1}$, organized to keep the order as in (8). The signs $s_{ij}$ will be chosen as $(-1)^{\mathrm{Bernoulli}(0.5)}$, and the matrix $L$ will be computed according to (6).

Algorithm for generating a realistic random correlation matrix $LL^T$.

Step 1. Diagonals: Choose $n-1$ random uniform values from the interval $(0, 1]$. Order them in decreasing order, $U_{(2)} \geq U_{(3)} \geq \cdots \geq U_{(n)}$, and set $l_{11} = 1$, $l_{22} = \sqrt{U_{(2)}}$, and $l_{jj} = \sqrt{U_{(j)}/U_{(j-1)}}$ for $j = 3, \dots, n$;

Step 2. Rows: Repeat this step for each $j = 2, 3, \dots, n$. Set $U_{(1)}^{(j)} \equiv 1$ and $U_{(j)}^{(j)} \equiv l_{jj}^2$. For $j \geq 3$, choose $j-2$ more (additional) random uniform values $\{U_i^{(j)}\}_{i=2}^{j-1}$ inside $[l_{jj}^2, 1]$ and sort them in decreasing order, $U_{(2)}^{(j)} \geq \cdots \geq U_{(j-1)}^{(j)}$. Compute $l_{ji} = \sqrt{U_{(i)}^{(j)} - U_{(i+1)}^{(j)}}$ for $i = 1, 2, \dots, j-1$;

Step 3. Signs: Choose $n(n-1)/2$ Bernoulli(0.5) values $B_{ij}$ for $j > i$, and multiply each $l_{ji}$ by $s_{ij} = (-1)^{B_{ij}}$;

Step 4. Compute $R$ by $LL^T$ to obtain the actual correlation structure.
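A direct transcription of Steps 1-4 might look as follows (a Python sketch under our naming and 0-based indexing; the paper itself gives no code):

```python
import numpy as np

def random_correlation(n, rng=None):
    """Generate a random positive-definite correlation matrix R = L L^T
    following Steps 1-4 and the parametrization (6)."""
    rng = np.random.default_rng() if rng is None else rng
    L = np.zeros((n, n))
    L[0, 0] = 1.0
    # Step 1: n-1 uniforms sorted decreasingly play |R_2| >= ... >= |R_n|
    detR = np.concatenate(([1.0], np.sort(rng.uniform(0, 1, n - 1))[::-1]))
    for j in range(1, n):
        L[j, j] = np.sqrt(detR[j] / detR[j - 1])
    # Steps 2-3: per row, decreasing ratios as in (8), plus random signs
    for j in range(1, n):                        # 0-based row index
        ljj2 = L[j, j] ** 2
        inner = np.sort(rng.uniform(ljj2, 1.0, j - 1))[::-1]
        V = np.concatenate(([1.0], inner, [ljj2]))   # V_(1) >= ... >= V_(j+1)
        for i in range(j):
            s = rng.choice([-1.0, 1.0])          # s_ij = (-1)^Bernoulli(0.5)
            L[j, i] = s * np.sqrt(V[i] - V[i + 1])
    # Step 4: assemble the correlation matrix
    return L @ L.T
```

By construction every row of $L$ has unit norm and the diagonal of $L$ is positive, so the output has unit diagonal and, per Section 3.2, is positive-definite.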

5.1. Generating random AR(1) structures

We end this paper by revealing the simple form of Cholesky factor for the AR(1) structure. The AR(1) correlation matrix is defined by $\rho_{ij} = \rho^{|i-j|}$, and it is possible to verify that $|R_n| = (1-\rho^2)^{n-1}$, $\rho_i^{*j} = \rho^{j-i}\rho_i$, and
$$\rho_{ij} - \rho_i^{*j} R_{i-1}^{-1} \rho_i^T = \rho^{j-i}\left(1 - \rho_i R_{i-1}^{-1} \rho_i^T\right) = \rho^{j-i}\, |R_i|/|R_{i-1}|.$$
Cholesky factor for the AR(1) structure enjoys the simple form:

$$l_{ji} = \begin{cases} \rho^{j-1} & j \geq i = 1, \\ \rho^{j-i}\sqrt{1-\rho^2} & j \geq i \geq 2. \end{cases}$$
Hence, for any choice of $\rho$ with $|\rho| < 1$, it is possible to transform the standard normals $(X_i)_{i=1}^n$ into the autocorrelated normals

$$Y_i = \rho^{i-1} X_1 + \sqrt{1-\rho^2}\, \sum_{k=2}^{i} \rho^{i-k} X_k, \qquad i = 1, \dots, n.$$
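As a closing illustration, a small sketch of this transform (Python; the naming is ours). The loop uses the algebraically equivalent recursion $Y_i = \rho\, Y_{i-1} + \sqrt{1-\rho^2}\, X_i$, which reproduces the sum above:

```python
import numpy as np

def ar1_normals(n, rho, rng=None):
    """Transform i.i.d. standard normals X_1,...,X_n into AR(1)-correlated
    normals with Corr(Y_i, Y_j) = rho^|i-j|, via the Cholesky factor above."""
    rng = np.random.default_rng() if rng is None else rng
    X = rng.standard_normal(n)
    Y = np.empty(n)
    Y[0] = X[0]                                  # Y_1 = X_1
    for i in range(1, n):
        Y[i] = rho * Y[i - 1] + np.sqrt(1 - rho**2) * X[i]
    return Y
```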

Acknowledgments

I am thankful to the Editor and the Reviewers for their helpful discussion and their literature suggestions. Special thanks to Professor Sen for encouraging me to submit this paper and for his resourceful suggestion to add a test of hypotheses for the application part. Thanks to Professor Speed for introducing me to Yule's paper, and to Dr. Batista for her proofreading.

References

[Anderson 2003] Anderson, T.W. (2003). An Introduction to Multivariate Statistical Analysis. Wiley, 3rd edition.

[Böhm & Hornik 2014] Böhm, W., and Hornik, K. (2014). Generating random correlation matrices by the simple rejection method: Why it does not work. Statist. Probab. Lett. 87, 27-30.

[Cooke et al. 2011] Cooke, R.M., Joe, H., and Aas, K. (2011). Chapter 3 "Vines Arise", in "Dependence Modeling: Vine Copula Handbook", edited by Dorota Kurowicka and Harry Joe, World Scientific.

[Harville 1997] Harville, D.A. (1997). Matrix Algebra From a Statistician's Perspective. Springer.

[Holmes 1991] Holmes, R.B. (1991). On Random Correlation Matrices. SIAM J. Matrix Anal. Appl. 12, 239-272.

[Huber 1981] Huber, J. (1981). Partial and Semipartial Correlation - A Vector Approach. The Two-Year College Mathematics Journal 12, 151-153.

[Joe 2006] Joe, H. (2006). Generating random correlation matrices based on partial correlations. J. Multivariate Anal. 97, 2177-2189.

[Lewandowski et al. 2009] Lewandowski, D., Kurowicka, D., and Joe, H. (2009). Generating random correlation matrices based on vines and extended onion method. J. Multivariate Anal. 100, 1989-2001.

[Marsaglia & Olkin 1984] Marsaglia, G., and Olkin, I. (1984). Generating correlation matrices. SIAM J. Sci. Statist. Comput. 5, 470-475.

[Mittelbach et al. 2012] Mittelbach, M., Matthiesen, B., and Jorswieck, E. (2012). Sampling Uniformly From the Set of Positive Definite Matrices With Trace Constraint. IEEE Trans. Signal Process. 60, 2167-2179.

[Miwa et al. 2003] Miwa, T., Hayter, A.J., and Kuriki, S. (2003). The evaluation of general non-centered orthant probabilities. J. R. Stat. Soc. Ser. B 65, 223-234.

[Moonan 1957] Moonan, W.J. (1957). Linear transformation to a set of stochastically dependent normal variables. J. Am. Statist. Assoc. 52, 247-252.

[Morrison 2004] Morrison, D.F. (2004). Multivariate Statistical Methods, 4th edition.

[Pinheiro & Bates 1996] Pinheiro, J.C., and Bates, D.M. (1996). Unconstrained parameterizations for variance-covariance matrices. Statistics and Computing 6, 289-296.

[Piziak & Odell 2007] Piziak, R., and Odell, P.L. (2007). Matrix Theory: From Generalized Inverses to Jordan Form. Chapman and Hall/CRC.

[Pourahmadi 2011] Pourahmadi, M. (2011). Covariance Estimation: The GLM and Regularization Perspectives. Statist. Sci. 26(3), 369-387.

[Rapisarda et al. 2007] Rapisarda, F., Brigo, D., and Mercurio, F. (2007). Parameterizing correlations: a geometric interpretation. IMA Journal of Management Mathematics 18, 55-73.

[Rebonato & Jackel 2007] Rebonato, R., and Jackel, P. (2000). The most general methodology to create a valid correlation matrix for risk management and option pricing purposes. J. of Risk 2, 17-27.

[Stuart et al. 2010] Stuart, A., Ord, K., and Arnold, S. (2010). Kendall's Advanced Theory of Statistics, Volume 2A: Classical Inference and the Linear Model, 6th edition.

[Tanabe & Sagae 1992] Tanabe, K., and Sagae, M. (1992). An Exact Cholesky Decomposition and the Generalized Inverse of the Variance-Covariance Matrix of the Multinomial Distribution with Applications. J. R. Stat. Soc. Ser. B 54, 211-219.

[Tong 1990] Tong, Y.L. (1990). The Multivariate Normal Distribution. Springer-Verlag, New York and Berlin.

[Yule 1907] Yule, G.U. (1907). On the Theory of Correlation for any Number of Variables, Treated by a New System of Notation. Proc. R. Soc. Lond. A 79, 182-193.

Appendix A. The proof for Theorem 1

Proof of Theorem 1. By mathematical induction. For $i = 1$, $\rho_2^{*j} R_1^{-1} \rho_2^T = \rho_{1j}\rho_{12}$, and for $i = 2$, we can use Eq. (4) in Lemma 2 to get $\rho_3^{*j} R_2^{-1} \rho_3^T = \rho_{1j}\rho_{13} + \rho_{2j(1)}\rho_{23(1)}$. Next, suppose that $i \geq 2$ and assume that Eq. (3) holds for $i$, that is,
$$\rho_{i+1}^{*j} R_i^{-1} \rho_{i+1}^T = \sum_{k=1}^{i} \rho_{k,i+1(1,\dots,k-1)} \cdot \rho_{kj(1,\dots,k-1)}, \quad (A.1)$$
and consider the follow-up component, $\rho_{i+2}^{*j} R_{i+1}^{-1} \rho_{i+2}^T$, for the case of $i+1$. By Eq. (4) in Lemma 2,
$$\begin{aligned}
\rho_{i+2}^{*j} R_{i+1}^{-1} \rho_{i+2}^T &= \rho_{i+1}^{*j} R_i^{-1} (\rho_{i+1}^{*i+2})^T + \frac{\rho_{i+1,i+2} - \rho_{i+1}^{*i+2} R_i^{-1} \rho_{i+1}^T}{\sqrt{1-\rho_{i+1} R_i^{-1} \rho_{i+1}^T}} \cdot \frac{\rho_{i+1,j} - \rho_{i+1}^{*j} R_i^{-1} \rho_{i+1}^T}{\sqrt{1-\rho_{i+1} R_i^{-1} \rho_{i+1}^T}} \\
&= \rho_{i+1}^{*j} R_i^{-1} (\rho_{i+1}^{*i+2})^T + \rho_{i+1,i+2(1,\dots,i)} \cdot \rho_{i+1,j(1,\dots,i)}. \quad (A.2)
\end{aligned}$$
Since the component $\rho_{i+1}^{*j} R_i^{-1} (\rho_{i+1}^{*i+2})^T$ has the same form as $\rho_{i+1}^{*j} R_i^{-1} \rho_{i+1}^T$, with $\rho_{i+1}^{*i+2} = (\rho_{1,i+2}, \rho_{2,i+2}, \dots, \rho_{i,i+2})$ replacing $\rho_{i+1} = (\rho_{1,i+1}, \rho_{2,i+1}, \dots, \rho_{i,i+1})$, it is possible to rewrite $\rho_{i+1}^{*j} R_i^{-1} (\rho_{i+1}^{*i+2})^T$ in a similar manner as was used for $\rho_{i+1}^{*j} R_i^{-1} \rho_{i+1}^T$ in Eq. (A.1):
$$\rho_{i+1}^{*j} R_i^{-1} (\rho_{i+1}^{*i+2})^T = \sum_{k=1}^{i} \rho_{k,i+2(1,\dots,k-1)} \cdot \rho_{kj(1,\dots,k-1)}. \quad (A.3)$$
Combining Eq. (A.2) with Eq. (A.3) gives the desired Eq. (3) for $i+1$:
$$\rho_{i+2}^{*j} R_{i+1}^{-1} \rho_{i+2}^T = \rho_{i+1}^{*j} R_i^{-1} (\rho_{i+1}^{*i+2})^T + \rho_{i+1,i+2(1,\dots,i)} \cdot \rho_{i+1,j(1,\dots,i)} = \sum_{k=1}^{i+1} \rho_{k,i+2(1,\dots,k-1)} \cdot \rho_{kj(1,\dots,k-1)}.$$

Appendix B. The proof for the two Lemmas

Lemma 2 is the case $l = i + 1$ of the more general recursive equation:

$$\rho_{i+1}^{*j} R_i^{-1} (\rho_{i+1}^{*l})^T = \rho_i^{*j} R_{i-1}^{-1} (\rho_i^{*l})^T + \frac{(\rho_{il} - \rho_i^{*l} R_{i-1}^{-1} \rho_i^T)(\rho_{ij} - \rho_i^{*j} R_{i-1}^{-1} \rho_i^T)}{1 - \rho_i R_{i-1}^{-1} \rho_i^T}. \quad (B.1)$$
Lemma 3 follows, first, by Eq. (2),
$$|R_i^{*j}|/|R_{i-1}| - |R_{i+1}^{*j}|/|R_i| = 1 - \rho_i^{*j} R_{i-1}^{-1} (\rho_i^{*j})^T - 1 + \rho_{i+1}^{*j} R_i^{-1} (\rho_{i+1}^{*j})^T,$$
and second, by applying Eq. (B.1) for $l = j$,
$$|R_i^{*j}|/|R_{i-1}| - |R_{i+1}^{*j}|/|R_i| = \frac{(\rho_{ij} - \rho_i^{*j} R_{i-1}^{-1} \rho_i^T)^2}{1 - \rho_i R_{i-1}^{-1} \rho_i^T}.$$

Proof of Equation (B.1). By mathematical induction. For $i = 1$, we have $\rho_2^{*j} R_1^{-1} (\rho_2^{*l})^T = \rho_{1l}\rho_{1j}$, and for $i = 2$, $\rho_3^{*j} R_2^{-1} (\rho_3^{*l})^T = \rho_{1l}\rho_{1j} + (\rho_{2l} - \rho_{12}\rho_{1l})(\rho_{2j} - \rho_{12}\rho_{1j})/(1 - \rho_{12}^2)$. The general case of $j \geq l \geq i + 1 \geq 3$ follows by representing $R_i^{-1}$ according to the Banachiewicz inversion formula (Piziak & Odell 2007, pp. 26), where $c_i \equiv 1 - \rho_i R_{i-1}^{-1} \rho_i^T$:
$$R_i^{-1} = \frac{1}{c_i}\begin{pmatrix} c_i R_{i-1}^{-1} + R_{i-1}^{-1} \rho_i^T \rho_i R_{i-1}^{-1} & -R_{i-1}^{-1} \rho_i^T \\ -\rho_i R_{i-1}^{-1} & 1 \end{pmatrix}. \quad (B.2)$$
Finally, Eq. (B.1) follows from writing $\rho_{i+1}^{*j} R_i^{-1} (\rho_{i+1}^{*l})^T$ with (B.2):
$$\rho_{i+1}^{*j} R_i^{-1} (\rho_{i+1}^{*l})^T = (\rho_i^{*j}, \rho_{ij})\, R_i^{-1}\, (\rho_i^{*l}, \rho_{il})^T = \rho_i^{*j} R_{i-1}^{-1} (\rho_i^{*l})^T + (\rho_{ij} - \rho_i^{*j} R_{i-1}^{-1} \rho_i^T)(\rho_{il} - \rho_i^{*l} R_{i-1}^{-1} \rho_i^T)/c_i.$$
