Appunti di Perron-Frobenius
Nikita Deniskin
13 July 2020

1 Prerequisites and scattered facts

$e_j$ is the $j$-th vector of the canonical basis, and $\mathbf{1} = (1, 1, \dots, 1)^\top$ is the all-ones vector.
Spectrum: $\sigma(A)$. Spectral radius: $\rho(A)$. Generalized eigenspace: $E_A(\lambda) = \ker\big((A - \lambda I)^n\big)$.
Algebraic multiplicity of $\lambda_i$: $m_i$. Geometric multiplicity of $\lambda_i$: $d_i$.
$\lambda_i$ is simple if $m_i = 1$, semi-simple if $m_i = d_i$, defective if $d_i < m_i$.
$\lim_{k \to \infty} A^k$ denotes the entrywise (pointwise) limit of the matrices.
Gelfand's theorem: $\rho(A) = \lim_{k \to \infty} \|A^k\|^{1/k}$.
Facts about the Jordan form and the abstract expression through the spectral projector.
We have $\lim_{k \to \infty} B^k = 0$ if and only if $\rho(B) < 1$.
Moreover, the limit of $B^k$ does not diverge only if $\rho(B) \le 1$ and $1$ is a semi-simple eigenvalue (look at the Jordan form). TO BE COMPLETED

Definition 1.1
Let $A \in \mathbb{C}^{n \times n}$. Then:
• $A$ is zero-convergent if $\lim_{n \to \infty} A^n = 0$;
• $A$ is convergent if $\lim_{n \to \infty} A^n = B \in \mathbb{C}^{n \times n}$.

Achtung! Some authors call a zero-convergent matrix convergent, and a convergent matrix semi-convergent.

Theorem 1.2
Let $A \in \mathbb{C}^{n \times n}$. Then $A$ is zero-convergent $\iff \rho(A) < 1$.

Proof. Write $A = C J C^{-1}$, where $J$ is the Jordan normal form, and note that $A$ is zero-convergent $\iff J$ is zero-convergent.

Theorem 1.3
Let $A \in \mathbb{C}^{n \times n}$. Then $A$ is convergent $\iff$ ALL the three following conditions apply:
1. $\rho(A) \le 1$;
2. if $\rho(A) = 1$, then $\lambda \in \sigma(A)$, $|\lambda| = 1 \implies \lambda = 1$;
3. if $\rho(A) = 1$, then the eigenvalue $\lambda = 1$ is semi-simple.
Or, in an equivalent formulation, if we are in one of the following two mutually exclusive cases:
1. $\rho(A) < 1$;
2. $\rho(A) = 1$, the eigenvalue $\mu = 1$ is semi-simple, and $\lambda \in \sigma(A)$, $|\lambda| = 1 \implies \lambda = 1$.

Proof. Use the Jordan normal form.

Remark 1.4
$1$ is a semi-simple eigenvalue of $A$ if and only if $\operatorname{rank}(I - A) = \operatorname{rank}\big((I - A)^2\big) < n$. This is useful since it is computationally easier to calculate a rank than all the eigenvalues of $A$.

2 Perron-Frobenius theorem

2.1 Positive and non-negative matrices

Definition 2.1 (Positive and non-negative vectors)
Let $v \in \mathbb{R}^n$ be a vector.
• $v$ is positive if for all $i$, $1 \le i \le n$, we have $v_i > 0$. We will indicate that $v$ is positive by writing $v > 0$.
• $v$ is non-negative if for all $i$, $1 \le i \le n$, we have $v_i \ge 0$. We will indicate that $v$ is non-negative by writing $v \ge 0$.
The set of all non-negative vectors of dimension $n$ is called the non-negative orthant of $\mathbb{R}^n$ and will be indicated with $\mathbb{R}^n_+$.

Definition 2.2 (Positive and non-negative matrices)
Let $A \in \mathbb{R}^{n \times n}$ be a matrix.
• $A$ is positive if for all $i, j$, $1 \le i, j \le n$, we have $a_{ij} > 0$. We will indicate that $A$ is positive by writing $A > 0$.
• $A$ is non-negative if for all $i, j$, $1 \le i, j \le n$, we have $a_{ij} \ge 0$. We will indicate that $A$ is non-negative by writing $A \ge 0$.
The set of all non-negative matrices of dimension $n \times n$ is called the cone of non-negative matrices and will be indicated with $\mathbb{R}^{n \times n}_+$.
We can define in a similar way what it means for $A \in \mathbb{R}^{m \times n}$ to be positive or non-negative. The space of all non-negative $m \times n$ matrices will be denoted by $\mathbb{R}^{m \times n}_+$.

Remark 2.3
$A$ is a non-negative matrix if and only if for all $x \in \mathbb{R}^n_+$ the product $Ax$ is still a non-negative vector.

Definitions 2.1 and 2.2 can be used to state that a matrix (or a vector) is pointwise greater than $0$. Similarly, we can introduce an ordering on $\mathbb{R}^{n \times n}$ to state that a matrix is pointwise greater than another.

Definition 2.4 (Partial ordering)
Let $A, B \in \mathbb{R}^{n \times n}$. We say that
• $A > B$ if the matrix $A - B$ is positive, i.e. $A - B > 0$;
• $A \ge B$ if the matrix $A - B$ is non-negative, i.e. $A - B \ge 0$.
A useful observation is that $(\mathbb{R}^{n \times n}, \ge)$ is a partially ordered set: the relation $\ge$ is reflexive, transitive and antisymmetric.

Definition 2.5 (Absolute value)
Let $A \in \mathbb{C}^{m \times n}$; we will denote by $|A|$ the matrix whose $(i, j)$-th element is $|a_{ij}|$. Similarly, we can define the absolute value $|v|$ of a vector.
By this definition, we have $|A| \in \mathbb{R}^{m \times n}_+$ and $|v| \in \mathbb{R}^m_+$ for every $A \in \mathbb{C}^{m \times n}$ and $v \in \mathbb{C}^m$.

Proposition 2.6
Properties of the ordering $\ge$:
1. If $A_i \ge B_i$ for $i = 1, \dots, m$, then $\sum_{i=1}^m A_i \ge \sum_{i=1}^m B_i$.
2. If $A \ge B$ are matrices and $c \in \mathbb{R}_+$ is a scalar, then $cA \ge cB$.
3. If $A \ge B$ and $C \ge 0$ are matrices, then $AC \ge BC$ and $CA \ge CB$, if the products are defined.
4.
If $A_k \ge B_k$ and the pointwise limits $\lim_k A_k = A$ and $\lim_k B_k = B$ exist, then $A \ge B$.
5. If $A \in \mathbb{C}^{m \times n}$, then $|A| \ge 0$, and $|A| = 0$ if and only if $A = 0$.
6. If $A \in \mathbb{C}^{m \times n}$ and $\gamma \in \mathbb{C}$, then $|\gamma A| = |\gamma| \, |A|$.
7. If $A, B \in \mathbb{C}^{m \times n}$, then $|A + B| \le |A| + |B|$.
8. If $A, B \in \mathbb{C}^{n \times n}$, then $|A \cdot B| \le |A| \cdot |B|$. This is true also for every pair of matrices such that $A \cdot B$ is defined, i.e. $A \in \mathbb{C}^{n \times k}$ and $B \in \mathbb{C}^{k \times m}$.
9. If $A \in \mathbb{C}^{n \times n}$ and $k \in \mathbb{N}$, then $|A^k| \le |A|^k$.

Remark 2.7
There are a couple of simple but very useful particular cases of properties 3 and 8:
• If $A \ge 0$ is a matrix and $v \ge 0$ is a vector, then $Av \ge 0$.
• If $A > 0$ is a matrix and $v \ge 0$ is a vector with $v \ne 0$, then $Av > 0$.
• Let $A \in \mathbb{C}^{n \times n}$ and $x \in \mathbb{C}^n$. Then $|Ax| \le |A| \cdot |x|$.

Proposition 2.8
Let $A \in \mathbb{R}^{n \times n}$. Then:
1. $A \ge 0 \iff$ for all $v \in \mathbb{R}^n$ with $v > 0$, we have $Av \ge 0$.
2. $A > 0 \iff$ for all $v \in \mathbb{R}^n$ with $v \ge 0$, $v \ne 0$, we have $Av > 0$.

Proof. The $(\implies)$ implications follow easily from the above properties.
1. $(\impliedby)$ If there were $a_{ij} < 0$, then choosing $v$ with $v_j = 1$ and $v_k = \varepsilon$ for the other components $k \ne j$, we would have $[Av]_i = a_{ij} + \varepsilon \sum_{k \ne j} a_{ik} \le a_{ij} + (n - 1)\varepsilon \cdot \max_{k \ne j} |a_{ik}|$, which is negative for sufficiently small $\varepsilon$.
2. $(\impliedby)$ If there were $a_{ij} \le 0$, it is sufficient to choose $v = e_j$; then $[Av]_i = a_{ij} v_j + 0 = a_{ij} \le 0$.

2.2 Spectral radius bounds

Theorem 2.9 (Lappo-Danilevsky)
Let $A \in \mathbb{C}^{n \times n}$ and $B \in \mathbb{R}^{n \times n}_+$ be such that $|A| \le B$. Then $\rho(A) \le \rho(|A|) \le \rho(B)$.

Proof. It is sufficient to prove that for all matrices $A_1, B_1$ that satisfy the hypothesis of the theorem we have $\rho(A_1) \le \rho(B_1)$. Then taking $A_1 = A$ and $B_1 = |A|$ we obtain $\rho(A) \le \rho(|A|)$, and taking $A_1 = |A|$ and $B_1 = B$ we obtain $\rho(|A|) \le \rho(B)$.
Let us prove the claim by contradiction, so assume $\rho(B) < \rho(A)$. Then there exists a scalar $\gamma \in \mathbb{R}_+$ such that
$\gamma \rho(B) < 1 < \gamma \rho(A)$,
which is the same as
$\rho(\gamma B) < 1 < \rho(\gamma A)$.
This means that $\lim_k (\gamma B)^k = 0$, while the sequence of matrices $\{(\gamma A)^k\}_{k \in \mathbb{N}}$ does not converge to any matrix (if it converged, then by Theorem 1.3 we would have $\rho(\gamma A) \le 1$).
On the other hand, we have $0 \le |A| \le B$, so $0 \le |\gamma A| \le \gamma B$ and $0 \le |\gamma A|^k \le (\gamma B)^k$ for all $k \in \mathbb{N}$. Since $(\gamma B)^k$ converges to the zero matrix, it follows that $|\gamma A|^k$, and hence also $(\gamma A)^k$, has to converge to the zero matrix, which is a contradiction.

Alternative proof. For every $x \in \mathbb{C}^n$ we have
$|A \cdot x| \le |A| \cdot |x| \le B \cdot |x|$.
Let $\| \cdot \|$ be the matrix norm induced by a vector norm $\| \cdot \|_v$ (for example, the 2-norm). Then
$\| Ax \|_v \le \| \, |A| \cdot |x| \, \|_v \le \| \, B \cdot |x| \, \|_v$.
By taking the sup over all $x \in \mathbb{C}^n$ of unit norm, $\|x\|_v = 1$, we obtain for the matrix norms
$\| A \| \le \| \, |A| \, \| \le \| B \|$.
Since $|A^k| \le |A|^k \le B^k$ for every integer $k > 0$ (by properties 9 and 3 of Proposition 2.6), applying the above method to these powers gives us
$\| A^k \| \le \| \, |A|^k \, \| \le \| B^k \|$,
$\| A^k \|^{1/k} \le \| \, |A|^k \, \|^{1/k} \le \| B^k \|^{1/k}$.
We can take the limit for $k \to \infty$ and apply Gelfand's theorem to obtain
$\rho(A) \le \rho(|A|) \le \rho(B)$.

Theorem 2.10
Let $A \in \mathbb{R}^{n \times n}_+$, let $R_i = \sum_{j=1}^n a_{ij}$ be the sum of the entries of the $i$-th row and $C_j = \sum_{i=1}^n a_{ij}$ the sum of the entries of the $j$-th column. Then:
$\min_{1 \le i \le n} R_i \le \rho(A) \le \max_{1 \le i \le n} R_i$,
$\min_{1 \le j \le n} C_j \le \rho(A) \le \max_{1 \le j \le n} C_j$.

Proof. Since $A$ is non-negative, the maximum row sum can be expressed as the infinity norm:
$\max_{1 \le i \le n} R_i = \|A\|_\infty$.
Furthermore, this is an induced norm, so $\rho(A) \le \|A\|_\infty$, and we have the upper bound inequality.
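Theorem 1.2 can be illustrated numerically. The sketch below (a hand-picked example, not part of the notes) uses a matrix with eigenvalues $0.7$ and $0.2$, so $\rho(A) = 0.7 < 1$ and the powers $A^k$ must tend entrywise to the zero matrix:

```python
# Illustration of Theorem 1.2 with a hand-picked matrix: its eigenvalues
# are 0.7 and 0.2, so rho(A) = 0.7 < 1 and A is zero-convergent.

def mat_mul(X, Y):
    """Multiply two square matrices given as lists of lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_power(X, k):
    """Compute X^k by repeated multiplication (k >= 1)."""
    P = X
    for _ in range(k - 1):
        P = mat_mul(P, X)
    return P

A = [[0.5, 0.3],
     [0.2, 0.4]]

P = mat_power(A, 60)
largest_entry = max(abs(x) for row in P for x in row)
print(largest_entry)  # tiny: the entries of A^k decay like 0.7^k
```

The row sums of this matrix are $0.8$ and $0.6$, so Theorem 2.10 already guarantees $\rho(A) \le 0.8 < 1$ without computing any eigenvalue.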
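The rank criterion of Remark 1.4 can also be checked mechanically. The following is a minimal sketch (the helper names and the small Gaussian-elimination rank routine are ours, and the routine is only meant for exact small examples, not for ill-conditioned matrices):

```python
# Checking Remark 1.4: 1 is a semi-simple eigenvalue of A iff
# rank(I - A) = rank((I - A)^2), with both ranks < n so that 1 is
# actually an eigenvalue.

def rank(M, tol=1e-12):
    """Rank of a matrix via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]          # work on a copy
    n_rows, n_cols = len(M), len(M[0])
    r = 0                              # number of pivots found so far
    for c in range(n_cols):
        if r == n_rows:
            break
        pivot = max(range(r, n_rows), key=lambda i: abs(M[i][c]))
        if abs(M[pivot][c]) < tol:
            continue                   # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, n_rows):
            f = M[i][c] / M[r][c]
            for j in range(c, n_cols):
                M[i][j] -= f * M[r][j]
        r += 1
    return r

def semisimple_one(A):
    """True iff rank(I - A) == rank((I - A)^2), as in Remark 1.4."""
    n = len(A)
    B = [[(1.0 if i == j else 0.0) - A[i][j] for j in range(n)]
         for i in range(n)]
    B2 = [[sum(B[i][k] * B[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    return rank(B) == rank(B2)

print(semisimple_one([[1.0, 0.0], [0.0, 0.5]]))  # True: 1 is a simple eigenvalue
print(semisimple_one([[1.0, 1.0], [0.0, 1.0]]))  # False: Jordan block, 1 is defective
```

For the Jordan block, $I - A$ has rank $1$ while $(I - A)^2 = 0$ has rank $0$, which is exactly how the criterion detects the defective eigenvalue.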
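Finally, the row-sum bounds of Theorem 2.10 can be seen on a concrete non-negative matrix. The sketch below estimates $\rho(A)$ by power iteration (a standard estimator we bring in for the illustration; it is not discussed in the notes) and compares it with the row sums:

```python
# Illustration of Theorem 2.10: for a non-negative matrix, rho(A) lies
# between the smallest and the largest row sum.

def power_iteration(A, steps=200):
    """Estimate the spectral radius of a non-negative matrix by
    power iteration with infinity-norm normalisation."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(steps):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)   # current eigenvalue estimate
        v = [x / lam for x in w]
    return lam

A = [[2.0, 1.0],
     [1.0, 3.0]]

# Exact value: the eigenvalues of A are (5 ± sqrt(5))/2, so
# rho(A) = (5 + sqrt(5))/2 ≈ 3.618.
row_sums = [sum(row) for row in A]     # [3.0, 4.0]
rho = power_iteration(A)
print(min(row_sums) <= rho <= max(row_sums))  # True: 3 <= 3.618... <= 4
```

Here the bounds $3 \le \rho(A) \le 4$ come for free from the row sums, while the power iteration pins the value down; for a symmetric matrix the column-sum bounds coincide with the row-sum ones.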