Numerical Methods

Rafał Zdunek

Direct methods for solving linear systems (3 h)
(Gaussian elimination and matrix decompositions)

Introduction

• Solutions to linear systems
• Gaussian elimination
• LU factorization
• Gauss-Jordan elimination
• Pivoting

Bibliography

[1] A. Björck, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, 1996.
[2] G. Golub, C. F. Van Loan, Matrix Computations (Third Edition), The Johns Hopkins University Press, 1996.
[3] J. Stoer, R. Bulirsch, Introduction to Numerical Analysis (Second Edition), Springer-Verlag, 1993.
[4] C. D. Meyer, Matrix Analysis and Applied Linear Algebra, SIAM, 2000.
[5] Ch. Zarowski, An Introduction to Numerical Analysis for Electrical and Computer Engineers, Wiley, 2004.
[6] G. Strang, Linear Algebra and Its Applications, Harcourt Brace & Company International Edition, 1998.

Applications

There are two basic classes of methods for solving a system of linear equations: direct and iterative methods. Direct methods theoretically give an exact solution in a (predictable) finite number of steps. Unfortunately, this need not hold in a computational setting because of rounding errors: an error made in one step propagates through all the following steps. Classical direct methods usually involve a variety of matrix factorization techniques, such as the LU, LDU, LUP, Cholesky, QR, EVD, SVD, and GSVD. Direct methods are computationally efficient only for a specific class of problems where the system matrix is sparse and reveals some structure, e.g. it is a banded matrix. Otherwise, iterative methods are usually more efficient.

Solutions to linear systems

A system of linear equations can be expressed in the following matrix form:

   Ax = b,    (1)

where A = [a_ij] ∈ 𝔽^(M×N) is a coefficient matrix, b = [b_i] ∈ 𝔽^M is a data vector, and x = [x_j] ∈ 𝔽^N is the solution to be estimated. Let B = [A | b] ∈ 𝔽^(M×(N+1)) be the augmented matrix of the system (1). The system of linear equations may behave in any one of three possible ways:

A. The system has no solution if rank(A) < rank(B).
B. The system has a single unique solution if rank(A) = rank(B) = N.
C. The system has infinitely many solutions if rank(A) = rank(B) < N.
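The three cases can be checked numerically by comparing ranks. A minimal sketch (the function name and the use of NumPy's SVD-based `matrix_rank` are illustrative choices, not part of the lecture):

```python
import numpy as np

def classify(A, b):
    """Classify Ax = b by comparing rank(A) with rank of the augmented matrix B = [A | b]."""
    rank_A = np.linalg.matrix_rank(A)
    rank_B = np.linalg.matrix_rank(np.column_stack([A, b]))
    N = A.shape[1]
    if rank_A < rank_B:
        return "no solution"                 # case A
    if rank_A == N:
        return "single unique solution"      # case B
    return "infinitely many solutions"       # case C

A = np.array([[1., 0.], [0., 1.], [1., 1.]])
print(classify(A, np.array([1., 2., 3.])))   # b lies in the range of A
print(classify(A, np.array([1., 2., 4.])))   # b outside the range of A
```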


In case B, the system (1) can be solved with direct or iterative methods. In case A (which often occurs in practice), only an approximate solution to (1) can be estimated, usually by formulating a least squares problem. Case C occurs for rank-deficient or under-determined problems.

Gaussian elimination (example)

    2x1 +  x2 + x3 =  1,   (E1)  (pivot)
    6x1 + 2x2 + x3 = -1,   (E2)
   -2x1 + 2x2 + x3 =  7.   (E3)

Eliminate x1 from E2 and E3:

   E2 ← E2 - 3E1:   -x2 - 2x3 = -4,
   E3 ← E3 + E1:     3x2 + 2x3 =  8.

Eliminate x2 from E3 (pivot):

   E3 ← E3 + 3E2:   -4x3 = -4.   (Triangularization)

Back-substitution:

   x3 = 1,
   x2 = 4 - 2x3 = 4 - 2·1 = 2,
   x1 = (1/2)(1 - x2 - x3) = (1/2)(1 - 2 - 1) = -1.

Gaussian elimination (example)

Augmented matrix: [A | b]

Let
    2x1 +  x2 + x3 =  1,
    6x1 + 2x2 + x3 = -1,
   -2x1 + 2x2 + x3 =  7.

           [ 2  1  1 |  1 ]              [ 2  1  1 |  1 ]              [ 2  1  1 |  1 ]
   [A|b] = [ 6  2  1 | -1 ]      →       [ 0 -1 -2 | -4 ]      →       [ 0 -1 -2 | -4 ]
           [-2  2  1 |  7 ]  R2←R2-3R1   [ 0  3  2 |  8 ]  R3←R3+3R2   [ 0  0 -4 | -4 ]
                             R3←R3+R1
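The elimination and back-substitution above can be reproduced in a few lines of code. A minimal sketch (no pivoting, so every pivot is assumed nonzero):

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination and back-substitution.
    No pivoting: assumes every pivot a_kk^(k) is nonzero."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: zero out the entries below each pivot.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]          # multiplier l_ik
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back-substitution on the resulting upper triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[2., 1., 1.], [6., 2., 1.], [-2., 2., 1.]])
b = np.array([1., -1., 7.])
print(gauss_solve(A, b))   # -> [-1.  2.  1.]
```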

Gaussian elimination (example)

Recursive technique:

Step 1: k = 1,  A^(1) = A,  b^(1) = b,  A^(1) x = b^(1),

                     [ a11^(1)  a12^(1)  a13^(1) | b1^(1) ]   (R1^(1))
   [A^(1) | b^(1)] = [ a21^(1)  a22^(1)  a23^(1) | b2^(1) ]   (R2^(1))
                     [ a31^(1)  a32^(1)  a33^(1) | b3^(1) ]   (R3^(1))

   R2^(2) ← R2^(1) - (a21^(1)/a11^(1)) R1^(1),
   R3^(2) ← R3^(1) - (a31^(1)/a11^(1)) R1^(1),

                     [ a11^(1)  a12^(1)  a13^(1) | b1^(1) ]
   [A^(2) | b^(2)] = [    0     a22^(2)  a23^(2) | b2^(2) ]
                     [    0     a32^(2)  a33^(2) | b3^(2) ]

Gaussian elimination (example)

Step 2: k = 2,   R3^(3) ← R3^(2) - (a32^(2)/a22^(2)) R2^(2),

                     [ a11^(1)  a12^(1)  a13^(1) | b1^(1) ]
   [A^(3) | b^(3)] = [    0     a22^(2)  a23^(2) | b2^(2) ]
                     [    0        0     a33^(3) | b3^(3) ]

In general:

   a_ij^(k+1) ← a_ij^(k) - (a_ik^(k) / a_kk^(k)) a_kj^(k)

Determinant: det(A^(2)) = ± det(A^(1))   (the sign changes only under row interchanges)

Computational cost

Gaussian elimination (example)

          x2 -  x3 =  3,
   -2x1 + 4x2 -  x3 =  1,
   -2x1 + 5x2 - 4x3 = -2.

Augmented matrix (the first pivot is zero, so rows R1 and R2 are interchanged):

           [ 0  1 -1 |  3 ]            [-2  4 -1 |  1 ]
   [A|b] = [-2  4 -1 |  1 ]     →      [ 0  1 -1 |  3 ]
           [-2  5 -4 | -2 ]  R1 ↔ R2   [-2  5 -4 | -2 ]

            [-2  4 -1 |  1 ]            [-2  4 -1 |  1 ]
   →        [ 0  1 -1 |  3 ]     →      [ 0  1 -1 |  3 ]
   R3←R3-R1 [ 0  1 -3 | -3 ]  R3←R3-R2  [ 0  0 -2 | -6 ]

Gauss-Jordan elimination

The differences between Gaussian and Gauss-Jordan elimination are:

1. At each step, the pivot element is forced to be 1,

2. At each step, all terms above the pivot as well as all terms below the pivot are eliminated.
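These two modifications can be sketched in code; a minimal Gauss-Jordan solver (assuming nonzero pivots, with no row interchanges):

```python
import numpy as np

def gauss_jordan(A, b):
    """Solve Ax = b by Gauss-Jordan elimination: each pivot is scaled to 1
    and the entries both above and below it are eliminated."""
    M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    n = A.shape[0]
    for k in range(n):
        M[k] /= M[k, k]                  # force the pivot to 1
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]   # clear above and below the pivot
    return M[:, -1]                      # the matrix is now [I | x]

A = np.array([[2., 2., 6.], [2., 1., 7.], [-2., -6., -7.]])
b = np.array([4., 6., -1.])
print(gauss_jordan(A, b))   # -> [ 0. -1.  1.]
```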

Gauss-Jordan elimination (example)

    2x1 + 2x2 + 6x3 =  4,
    2x1 +  x2 + 7x3 =  6,
   -2x1 - 6x2 - 7x3 = -1.

           [ 2  2  6 |  4 ]         [ 1  1  3 |  2 ]
   [A|b] = [ 2  1  7 |  6 ]    →    [ 2  1  7 |  6 ]
           [-2 -6 -7 | -1 ]  R1/2   [-2 -6 -7 | -1 ]

             [ 1  1  3 |  2 ]       [ 1  1  3 |  2 ]             [ 1  0  4 |  4 ]
   →         [ 0 -1  1 |  2 ]   →   [ 0  1 -1 | -2 ]     →       [ 0  1 -1 | -2 ]
   R2←R2-2R1 [ 0 -4 -1 |  3 ]  -R2  [ 0 -4 -1 |  3 ]  R1←R1-R2   [ 0  0 -5 | -5 ]
   R3←R3+2R1                                          R3←R3+4R2

           [ 1  0  4 |  4 ]             [ 1  0  0 |  0 ]
   →       [ 0  1 -1 | -2 ]     →       [ 0  1  0 | -1 ]      Solution: x = [0, -1, 1]^T
   -R3/5   [ 0  0  1 |  1 ]  R1←R1-4R3  [ 0  0  1 |  1 ]
                             R2←R2+R3

Computational cost

Partial Pivoting

Strategy: maximize the magnitude of the pivot at each step, using row interchanges only.

Partial Pivoting (example)

   -10^-4 x1 + x2 = 1,
         x1 + x2 = 2.

If real numbers are rounded to 3 significant digits, then without partial pivoting we have:

   [A|b] = [ -10^-4  1 | 1 ]        →        [ -10^-4   1    |   1   ]
           [    1    1 | 2 ]  R2←R2+10^4 R1  [    0    10^4  |  10^4 ]

Round-off:  float(1 + 10^4) = float(0.10001 · 10^5) = 10^4,
            float(2 + 10^4) = float(0.10002 · 10^5) = 10^4.

Back-substitution gives:

   x = [0, 1]^T

Partial Pivoting (example)

   -10^-4 x1 + x2 = 1,
         x1 + x2 = 2.

If real numbers are rounded to 3 significant digits, then with partial pivoting we have:

   [A|b] = [ -10^-4  1 | 1 ]     →     [    1    1 | 2 ]        →          [ 1  1 | 2 ]
           [    1    1 | 2 ]  R1 ↔ R2  [ -10^-4  1 | 1 ]  R2←R2+10^-4 R1   [ 0  1 | 1 ]

Round-off:  float(1 + 10^-4)   = float(0.10001 · 10^1) = 1,
            float(1 + 2·10^-4) = float(0.10002 · 10^1) = 1.

Back-substitution gives:

   x = [1, 1]^T
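Both computations can be reproduced by simulating 3-significant-digit arithmetic. The `chop` helper below is an illustrative assumption (it rounds every intermediate result to 3 significant digits), not part of the lecture:

```python
import numpy as np

def chop(x, digits=3):
    """Round x to `digits` significant decimal digits (simulated short arithmetic)."""
    if x == 0.0:
        return 0.0
    e = np.floor(np.log10(abs(x))) - digits + 1
    return round(x / 10.0**e) * 10.0**e

def solve2x2(A, b, pivot):
    """Eliminate x1 and back-substitute, chopping every intermediate result."""
    A = [row[:] for row in A]; b = b[:]
    if pivot and abs(A[1][0]) > abs(A[0][0]):   # partial pivoting: row interchange
        A[0], A[1] = A[1], A[0]
        b[0], b[1] = b[1], b[0]
    m = chop(A[1][0] / A[0][0])                 # multiplier
    A[1][1] = chop(A[1][1] - m * A[0][1])
    b[1] = chop(b[1] - m * b[0])
    x2 = chop(b[1] / A[1][1])
    x1 = chop(chop(b[0] - A[0][1] * x2) / A[0][0])
    return [x1, x2]

A = [[-1e-4, 1.0], [1.0, 1.0]]
b = [1.0, 2.0]
print(solve2x2(A, b, pivot=False))   # x ≈ [0, 1]: badly wrong x1
print(solve2x2(A, b, pivot=True))    # x ≈ [1, 1]: close to the exact solution
```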

Ill-conditioned systems

   Ax = b

Condition number:  κ(A) = ||A||_2 · ||A^-1||_2

A system of linear equations is said to be ill-conditioned when some small perturbation in the system can produce relatively large changes in the exact solution. Otherwise, the system is said to be well-conditioned.

Row Echelon Form

Let A ∈ 𝔽^(m×n), and denote by (A)_*j the j-th column of A and by (A)_i* the i-th row of A. The matrix A is in row echelon form if:

1. If (A)_i* consists entirely of zeros, then all rows below (A)_i* are also entirely zero rows;

2. If the first non-zero entry in (A)_i* lies in the j-th position, then all entries below the i-th position in columns (A)_*1, (A)_*2, ..., (A)_*j are zero.

Rank

Suppose A ∈ 𝔽^(m×n) is reduced by row operations to the Row Echelon Form (REF). The rank of A is defined to be the number:

   rank(A) = number of pivots
           = number of non-zero rows in the REF
           = number of basic columns in A,

where the basic columns of A are those columns that contain pivots.

Reduced Row Echelon Form

Reduced Row Echelon Form (RREF):

1. The matrix is in row echelon form;
2. The first non-zero entry in each row (i.e. each pivot) is 1;
3. All entries above each pivot are zeros.

RREF (example)

       [ 1 2 2 3 1 ]
       [ 2 4 4 6 2 ]
   A = [ 3 6 6 9 6 ]
       [ 1 2 4 5 3 ]

   [ 1 2 2 3 1 ]       [ 1 2 2 3 1 ]       [ 1 2 2 3 1 ]       [ 1 2 0 1 -1 ]
   [ 2 4 4 6 2 ]   →   [ 0 0 0 0 0 ]   →   [ 0 0 1 1 1 ]   →   [ 0 0 1 1  1 ]
   [ 3 6 6 9 6 ]       [ 0 0 0 0 3 ]       [ 0 0 0 0 3 ]       [ 0 0 0 0  3 ]
   [ 1 2 4 5 3 ]       [ 0 0 2 2 2 ]       [ 0 0 0 0 0 ]       [ 0 0 0 0  0 ]

       [ 1 2 0 1 -1 ]       [ 1 2 0 1 0 ]
   →   [ 0 0 1 1  1 ]   →   [ 0 0 1 1 0 ]  = RREF
       [ 0 0 0 0  1 ]       [ 0 0 0 0 1 ]
       [ 0 0 0 0  0 ]       [ 0 0 0 0 0 ]

We have rank(A) = 3.  Basic columns: {A_*1, A_*3, A_*5}
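As a cross-check, the rank found by hand can be confirmed numerically (NumPy computes it from the SVD rather than from the REF, but the result is the same):

```python
import numpy as np

A = np.array([[1., 2., 2., 3., 1.],
              [2., 4., 4., 6., 2.],
              [3., 6., 6., 9., 6.],
              [1., 2., 4., 5., 3.]])
print(np.linalg.matrix_rank(A))   # -> 3
```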

Matrix Inversion

If A is nonsingular:

   [A | I]  →(Gauss-Jordan)→  [I | A^-1]

Matrix Inversion (example)

       [ 1 1 1 ]
   A = [ 1 2 2 ]
       [ 1 2 3 ]

           [ 1 1 1 | 1 0 0 ]       [ 1 1 1 |  1 0 0 ]       [ 1 0 0 |  2 -1 0 ]
   [A|I] = [ 1 2 2 | 0 1 0 ]   →   [ 0 1 1 | -1 1 0 ]   →   [ 0 1 1 | -1  1 0 ]
           [ 1 2 3 | 0 0 1 ]       [ 0 1 2 | -1 0 1 ]       [ 0 0 1 |  0 -1 1 ]

       [ 1 0 0 |  2 -1  0 ]                 [  2 -1  0 ]
   →   [ 0 1 0 | -1  2 -1 ]        A^-1  =  [ -1  2 -1 ]
       [ 0 0 1 |  0 -1  1 ]                 [  0 -1  1 ]
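The same [A | I] → [I | A^-1] sweep can be sketched directly (assuming A is nonsingular and, for this simple version, that no row interchanges are needed):

```python
import numpy as np

def gj_inverse(A):
    """Invert a nonsingular matrix by Gauss-Jordan elimination on [A | I]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for k in range(n):
        M[k] /= M[k, k]                  # scale the pivot row
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]   # clear above and below the pivot
    return M[:, n:]                      # the matrix is now [I | A^-1]

A = np.array([[1., 1., 1.], [1., 2., 2.], [1., 2., 3.]])
print(gj_inverse(A))
# [[ 2. -1.  0.]
#  [-1.  2. -1.]
#  [ 0. -1.  1.]]
```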

Matrix Factorizations

A matrix factorization (or decomposition) decomposes a matrix A into a product of other objects (or factors) F_1, ..., F_k:

   A = F_1 F_2 ⋯ F_k,

where the number of factor matrices k depends on the factorization. Most often, k = 2 or k = 3. There are many different matrix decompositions; each one finds use among a particular class of problems. The best-known matrix decompositions or factorizations include: LU, LDU, LUP, Block LU, Cholesky, QR, EVD, SVD, GSVD, Jordan, Jordan-Chevalley, Schur, QZ, Takagi, Polar, Rank factorization, and several forms of NMF.

Gaussian elimination algorithm

Sweep over the columns: the multipliers computed for the k-th column form a lower triangular matrix (this leads to the LU factorization).

LU factorization

   [ a11  a12  ⋯  a1,n-1  a1n ]   [  1                    ]   [ u11  u12  ⋯  u1,n-1  u1n ]
   [ a21  a22  ⋯  a2,n-1  a2n ]   [ l21   1               ]   [  0   u22  ⋯  u2,n-1  u2n ]
   [ a31  a32  ⋯  a3,n-1  a3n ] = [ l31  l32   1          ] · [  0    0   ⋯  u3,n-1  u3n ]
   [  ⋮    ⋮         ⋮     ⋮  ]   [  ⋮    ⋮        ⋱      ]   [  ⋮    ⋮         ⋱     ⋮  ]
   [ an1  an2  ⋯  an,n-1  ann ]   [ ln1  ln2  ⋯  ln,n-1 1 ]   [  0    0   ⋯    0    unn ]

       A ∈ ℝ^(n×n)                    L ∈ ℝ^(n×n)                  U ∈ ℝ^(n×n)

Determinant:  det(A) = det(L) det(U) = 1 · ∏_{i=1}^n u_ii = ∏_{i=1}^n u_ii

LU factorization

For each k, we have

           [ 1                     ]
           [    ⋱                  ]
   L^(k) = [        1              ],    where  l_ik = a_ik^(k) / a_kk^(k),
           [     -l_k+1,k  ⋱       ]
           [        ⋮         ⋱    ]
           [     -l_nk          1  ]

Recursive technique:   A^(k+1) = L^(k) A^(k)   ⇒   U = A^(n) = L^(n-1) ⋯ L^(2) L^(1) A.

Since (L^(k))^-1 is obtained from L^(k) simply by flipping the signs of its subdiagonal entries, we have

   L = (L^(1))^-1 (L^(2))^-1 ⋯ (L^(n-1))^-1.

Finally, A = LU.

LDU factorization

A = L D Ũ,

        [ 1  ũ12  ũ13  ⋯  ũ1n ]
        [     1   ũ23  ⋯  ũ2n ]
   Ũ =  [          1   ⋯  ũ3n ],    where D = diag{a_kk^(k)} collects the pivots and Ũ = D^-1 U, so that D Ũ = U.
        [               ⋱   ⋮  ]
        [                   1  ]

If A is symmetric, A = L D L^T.

Determinant:  det(A) = det(L) det(D) det(Ũ) = ∏_{k=1}^n d_kk

Crout's algorithm

Set u_kk = 1 for all k
For k = 1, …, n
    For i = k, k+1, …, n

        l_ik = a_ik - Σ_{p=1}^{k-1} l_ip u_pk

    For j = k+1, …, n

        u_kj = (1/l_kk) ( a_kj - Σ_{p=1}^{k-1} l_kp u_pj )
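Crout's recurrences translate almost line by line into code; a minimal sketch (assuming all l_kk turn out nonzero, i.e. no pivoting is needed):

```python
import numpy as np

def crout_lu(A):
    """Crout's algorithm: A = L U with a unit diagonal on U (u_kk = 1)."""
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)                              # set u_kk = 1 for all k
    for k in range(n):
        for i in range(k, n):                  # k-th column of L
            L[i, k] = A[i, k] - L[i, :k] @ U[:k, k]
        for j in range(k + 1, n):              # k-th row of U
            U[k, j] = (A[k, j] - L[k, :k] @ U[:k, j]) / L[k, k]
    return L, U

A = np.array([[2., 1., 1.], [6., 2., 1.], [-2., 2., 1.]])
L, U = crout_lu(A)
assert np.allclose(L @ U, A)                   # A = LU holds
```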

LUP factorization

If a zero pivot occurs, the basic LU factorization cannot be used. In that case, partial pivoting should be applied, i.e. the rows of A are interchanged so as to maximize the pivotal element.

   PA = LU

P – permutation matrix

If no nonzero pivot is available, A is singular. LUP factorization

The LUP decomposition algorithm (by Cormen et al.) generalizes Crout's method. It can be described as follows:

1. If A has a nonzero entry in its first row, then permute the columns of A so that a nonzero entry lands in its upper left corner. Let A1 = A P1, where P1 is a permutation matrix.

2. Let A2 be the matrix obtained from A1 by deleting both the first row and the first column. Decompose A2 = L2 U2 P2 recursively. Make L from L2 by adding a zero row above and then the first column of A1 at the left.

3. Create U3 from U2 by first adding a zero row above and a zero column at the left and then replacing the upper left entry (which is 0 at this point) by 1. Create P3 from P2 in a similar manner and define A3 = A1 P3^-1 = A P1 P3^-1. Then P = P3 P1^-1.

4. At this point, A3 is the same as L U3, except (possibly) at the first row. If the first row of A is zero, then A3 = L U3, since both have first row zero, and A = L U3 P follows, as desired. Otherwise, A3 and L U3 have the same nonzero entry in the upper left corner, and A3 = L U3 U1 for some upper triangular U1 with ones on the diagonal (U1 clears the entries of L U3 and adds the entries of A3 by changing the upper left corner). Now A = L U3 U1 P is a decomposition of the desired form.

Triangular systems

Let
   [ l11   0  ] [ x1 ]   [ b1 ]
   [ l21  l22 ] [ x2 ] = [ b2 ]

If l11 l22 ≠ 0:   x1 = b1 / l11,   x2 = (b2 - l21 x1) / l22.   (lower triangular)

Forward substitution (for Lx = b, L lower triangular):

   x_i = (1/l_ii) ( b_i - Σ_{j=1}^{i-1} l_ij x_j )

Back substitution (for Ux = b, U upper triangular):

   x_i = (1/u_ii) ( b_i - Σ_{j=i+1}^{n} u_ij x_j )
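Both formulas in code form (a sketch; L and U are assumed to have nonzero diagonal entries):

```python
import numpy as np

def forward_sub(L, b):
    """Solve Lx = b for lower triangular L by forward substitution."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

def back_sub(U, b):
    """Solve Ux = b for upper triangular U by back substitution."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

L = np.array([[2., 0.], [3., 4.]])
print(forward_sub(L, np.array([2., 11.])))   # -> [1. 2.]
```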

Solving linear systems with LUP

Let Ax = b, where A ∈ ℝ^(m×n) has full column rank and m ≥ n. Then:

   PAx = Pb

   PA = LU   (LUP factorization)

1. Solve Ly = Pb   (forward substitution)

2. Solve Ux = y    (back substitution)
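Putting the pieces together: a sketch of LU with partial pivoting followed by the two triangular solves (square A for simplicity; the example reuses the system whose first pivot is zero):

```python
import numpy as np

def lup(A):
    """LU with partial pivoting: returns P, L, U such that PA = LU."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    P = np.eye(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))    # row with the largest pivot
        for M in (U, P):
            M[[k, p]] = M[[p, k]]              # interchange rows k and p
        L[[k, p], :k] = L[[p, k], :k]          # swap the computed multipliers too
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return P, L, U

def lup_solve(A, b):
    P, L, U = lup(A)
    n = len(b)
    y = np.zeros(n); pb = P @ b
    for i in range(n):                         # forward substitution: Ly = Pb
        y[i] = pb[i] - L[i, :i] @ y[:i]        # l_ii = 1
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):             # back substitution: Ux = y
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

A = np.array([[0., 1., -1.], [-2., 4., -1.], [-2., 5., -4.]])
b = np.array([3., 1., -2.])
print(lup_solve(A, b))   # -> [10.  6.  3.]
```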

Cholesky (Banachiewicz) factorization

If A is a symmetric positive-definite matrix, then all d_ii > 0 in A = L D L^T, and Crout's algorithm leads to:

   A = L L^T

For k = 1, …, n

   l_kk = sqrt( a_kk - Σ_{p=1}^{k-1} l_kp^2 )

   For j = k+1, …, n

       l_jk = (1/l_kk) ( a_jk - Σ_{p=1}^{k-1} l_jp l_kp )

Another version:  A = R^T R,  where R is an upper triangular matrix.
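A sketch of the Cholesky-Banachiewicz recurrences (the matrix must be symmetric positive-definite; the example matrix below is an illustrative choice):

```python
import numpy as np
from math import sqrt

def cholesky(A):
    """Cholesky factorization A = L L^T for symmetric positive-definite A."""
    n = A.shape[0]
    L = np.zeros((n, n))
    for k in range(n):
        L[k, k] = sqrt(A[k, k] - L[k, :k] @ L[k, :k])          # diagonal entry
        for j in range(k + 1, n):
            L[j, k] = (A[j, k] - L[j, :k] @ L[k, :k]) / L[k, k]
    return L

A = np.array([[4., 2., 2.], [2., 5., 3.], [2., 3., 6.]])
L = cholesky(A)
assert np.allclose(L @ L.T, A)
```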

QR factorization

Let A ∈ ℝ^(m×n) have full column rank, and let m ≥ n. The QR factorization is

         [ R ]
   A = Q [   ] ,
         [ 0 ]

where Q ∈ ℝ^(m×m) is an orthogonal matrix, i.e. Q^T Q = I_m, and R ∈ ℝ^(n×n) is an upper triangular matrix.

If A ∈ ℂ^(m×n) has full column rank and m ≥ n, then Q ∈ ℂ^(m×m) is a unitary matrix, i.e. Q^H Q = I_m, and R ∈ ℂ^(n×n) is an upper triangular complex matrix.

QR factorization

If A ∈ ℝ^(m×n), m ≥ n, r = rank(A), and r < n (rank-deficient matrix), then Q ∈ ℝ^(m×m) is an orthogonal matrix and R ∈ ℝ^(n×n) is upper trapezoidal, that is, the last n-r rows have zero values.

Determinant (square case):  det(A) = det(Q) det(R) = ± ∏_{i=1}^n r_ii

If A is a rank-deficient matrix, then r_ii = 0 for i = r+1, ..., n, and hence det(A) = 0.

Gram-Schmidt orthogonalization

Given the set {v1, v2, ..., vk} of linearly independent vectors, the GS orthogonalization generates the set {u1, u2, ..., uk} of orthogonal vectors (k ≤ n). The GS orthogonalization algorithm:

   u1 = v1,                              e1 = u1 / ||u1||_2,
   u2 = v2 - P_u1(v2),                   e2 = u2 / ||u2||_2,
   ...
   uk = vk - Σ_{j=1}^{k-1} P_uj(vk),     ek = uk / ||uk||_2,

where the projection onto uj is

   P_uj(v) = ( ⟨v, uj⟩ / ⟨uj, uj⟩ ) uj,

and {e1, e2, ..., ek} is the resulting set of orthonormal vectors.

QR with Gram-Schmidt orthogonalization

Let A = [a1, a2, ..., an] ∈ ℝ^(n×n). Applying the GS orthogonalization to the column vectors of A, we have:

   u1 = a1,                              q1 = u1 / ||u1||_2,
   u2 = a2 - P_u1(a2),                   q2 = u2 / ||u2||_2,
   ...
   un = an - Σ_{j=1}^{n-1} P_uj(an),     qn = un / ||un||_2,

where P_uj(a) = ( ⟨a, uj⟩ / ⟨uj, uj⟩ ) uj.

Thus Q = [q1, q2, ..., qn] ∈ ℝ^(n×n).

QR with Gram-Schmidt orthogonalization

Then

   a1 = ⟨q1, a1⟩ q1,
   a2 = ⟨q1, a2⟩ q1 + ⟨q2, a2⟩ q2,
   ...
   an = Σ_{j=1}^{n} ⟨qj, an⟩ qj,      where ⟨qj, aj⟩ = ||uj||_2.

Thus

       [ ⟨q1,a1⟩  ⟨q1,a2⟩  ⋯  ⟨q1,an⟩ ]
   R = [    0     ⟨q2,a2⟩  ⋯  ⟨q2,an⟩ ]
       [    ⋮        ⋱     ⋱     ⋮    ]
       [    0        0    ⋯   ⟨qn,an⟩ ]
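The construction of Q and R above can be sketched with classical Gram-Schmidt (a minimal version, assuming linearly independent columns):

```python
import numpy as np

def gram_schmidt_qr(A):
    """QR via classical Gram-Schmidt: q_k is the normalized residual of a_k
    after removing its projections onto the previous q's; r_jk = <q_j, a_k>."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        u = A[:, k].astype(float).copy()
        for j in range(k):
            R[j, k] = Q[:, j] @ A[:, k]    # projection coefficient <q_j, a_k>
            u -= R[j, k] * Q[:, j]
        R[k, k] = np.linalg.norm(u)        # <q_k, a_k> = ||u_k||_2
        Q[:, k] = u / R[k, k]
    return Q, R

A = np.array([[2., 1.], [2., 7.]])
Q, R = gram_schmidt_qr(A)
assert np.allclose(Q @ R, A)
assert np.allclose(Q.T @ Q, np.eye(2))
```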

Householder reflections

Let w ∈ ℝ^n with ||w||_2 = 1.

Householder transformation:

   H = I_n - 2 w w^T

Properties:

   H = H^T       (symmetric)
   H H^T = I_n   (orthogonal)

For x ∈ ℝ^n:   Hx = x - 2 (w^T x) w   (reflection)

Assuming

   w = (x - α e1) / ||x - α e1||_2,   e1 = [1, 0, ..., 0]^T ∈ ℝ^n,   α = ± ||x||_2,

we have Hx = α e1.

QR with Householder reflections

Let A = [a1, a2, ..., an] ∈ ℝ^(m×n), and A^(1) = A.

Recursive algorithm, Step 1:

   x := a1^(1),    w^(1) = (a1^(1) - α1 e1) / ||a1^(1) - α1 e1||_2,    α1 = ± ||a1^(1)||_2,

   H^(1) = I_m - 2 w^(1) (w^(1))^T

Thus

                 [ α1   *   ⋯   *  ]
   H^(1) A^(1) = [  0              ]  ∈ ℝ^(m×n)
                 [  ⋮     A^(2)    ]
                 [  0              ]

QR with Householder reflections

Recursive algorithm, Step 2:

   A^(2) = [a1^(2), a2^(2), ..., a_n-1^(2)]   (obtained from H^(1) A^(1) by deleting the first row and the first column)

   x := a1^(2),    w^(2) = (a1^(2) - α2 e1) / ||a1^(2) - α2 e1||_2,    α2 = ± ||a1^(2)||_2,

   H^(2) = I_m-1 - 2 w^(2) (w^(2))^T

                 [ α2   *   ⋯   *  ]
   H^(2) A^(2) = [  0              ]  ∈ ℝ^((m-1)×(n-1))
                 [  ⋮     A^(3)    ]
                 [  0              ]

QR with Householder reflections

Recursive algorithm: for k = 1, …, m,

   Q^(k) = [ I_k-1    0    ]  ∈ ℝ^(m×m)
           [   0    H^(k)  ]

After t iterations of this recurrence, where t = min(m-1, n):

   R = Q^(t) ⋯ Q^(2) Q^(1) A,

   Q = (Q^(1))^T (Q^(2))^T ⋯ (Q^(t))^T.
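A sketch of this recursion (embedding each H^(k) in the trailing block; the sign of α is chosen opposite to x1 to avoid cancellation, and A is assumed to have full column rank):

```python
import numpy as np

def householder_qr(A):
    """QR by Householder reflections: each step maps the current column
    (from the diagonal down) onto a multiple of e1."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for k in range(min(m - 1, n)):
        x = R[k:, k]
        alpha = -np.copysign(np.linalg.norm(x), x[0])  # sign opposite to x1
        v = x.copy()
        v[0] -= alpha
        w = v / np.linalg.norm(v)
        Hk = np.eye(m - k) - 2.0 * np.outer(w, w)      # H = I - 2 w w^T
        R[k:, k:] = Hk @ R[k:, k:]
        Q[:, k:] = Q[:, k:] @ Hk                       # accumulate Q = (Q^(1))^T (Q^(2))^T ...
    return Q, R

A = np.array([[2., 1., 1.], [6., 2., 1.], [-2., 2., 1.]])
Q, R = householder_qr(A)
assert np.allclose(Q @ R, A)
assert np.allclose(Q.T @ Q, np.eye(3))
```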

This method has much better numerical stability than QR with GS orthogonalization.

Givens rotations

The Givens rotation, represented by an orthogonal matrix G, rotates a given vector x in the plane spanned by two coordinate axes through an angle θ (clockwise):

   G = [  c   s ],    c = cos θ,   s = sin θ.
       [ -s   c ]

Let x = [x1, x2]^T; then

   y = Gx = [  c x1 + s x2 ]
            [ -s x1 + c x2 ]

If c = x1 / sqrt(x1^2 + x2^2)  and  s = x2 / sqrt(x1^2 + x2^2), then

   Gx = [ sqrt(x1^2 + x2^2) ]
        [         0         ]

Givens rotations

For A ∈ ℝ^(m×n), the rotation acting on rows i and j is

               [ 1                     ]
               [    ⋱                  ]
               [       c   ⋯   s       ]   ← row i
   G(i,j,θ) =  [       ⋮   ⋱   ⋮       ]  ∈ ℝ^(m×m),
               [      -s   ⋯   c       ]   ← row j
               [                  ⋱    ]
               [                     1 ]

and G(i,j,θ) A modifies only rows i and j:

   a'_ik ←  c a_ik + s a_jk        (row i),
   a'_jk ← -s a_ik + c a_jk        (row j),
   a'_lk =  a_lk  for all other rows l.

Givens rotations

Let A = [a1, a2, ..., an] ∈ ℝ^(m×n), and A^(1) = A.

Then G^(1) = G(1,2,θ) ⋯ G(1,m-1,θ) G(1,m,θ) brings the first column of A^(1) to the vector [α1, 0, ..., 0]^T ∈ ℝ^m:

                 [ α1   *   ⋯   *  ]
   G^(1) A^(1) = [  0              ]  ∈ ℝ^(m×n),    where A^(2) ∈ ℝ^((m-1)×(n-1)).
                 [  ⋮     A^(2)    ]
                 [  0              ]

Givens rotations

Then G^(2) = G(2,3,θ) ⋯ G(2,m-1,θ) G(2,m,θ) brings the first column of A^(2) to the vector [*, α2, 0, ..., 0]^T ∈ ℝ^m:

                       [ α1   *    *   ⋯   *  ]
   G^(2) G^(1) A^(1) = [  0   α2              ]  ∈ ℝ^(m×n),    where A^(3) ∈ ℝ^((m-2)×(n-2)).
                       [  ⋮    0     A^(3)    ]
                       [  0    0              ]

Finally,

   R = G^(n-1) ⋯ G^(2) G^(1) A    and    Q = (G^(1))^T (G^(2))^T ⋯ (G^(n-1))^T.
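A sketch of QR via Givens rotations, zeroing the subdiagonal entries column by column (each 2×2 rotation acts only on rows k and j):

```python
import numpy as np

def givens_qr(A):
    """QR by Givens rotations: zero out each subdiagonal entry a_jk
    by rotating rows k and j."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for k in range(n):
        for j in range(k + 1, m):
            r = np.hypot(R[k, k], R[j, k])        # sqrt(x1^2 + x2^2)
            if r == 0.0:
                continue                          # nothing to zero out
            c, s = R[k, k] / r, R[j, k] / r
            G = np.array([[c, s], [-s, c]])       # acts on rows k and j
            R[[k, j], :] = G @ R[[k, j], :]
            Q[:, [k, j]] = Q[:, [k, j]] @ G.T     # accumulate Q = G1^T G2^T ...
    return Q, R

A = np.array([[2., 1., 1.], [6., 2., 1.], [-2., 2., 1.]])
Q, R = givens_qr(A)
assert np.allclose(Q @ R, A)
assert np.allclose(Q.T @ Q, np.eye(3))
```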