EVENTUAL CONE INVARIANCE

By

MICHAEL KASIGWA

A dissertation submitted in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

WASHINGTON STATE UNIVERSITY
Department of

JULY 2017

To the Faculty of Washington State University:

The members of the Committee appointed to examine the dissertation of

MICHAEL KASIGWA find it satisfactory and recommend that it be accepted.

Michael Tsatsomeros, Ph.D., Chair

Judith McDonald, Ph.D.

Nikos Voulgarakis, Ph.D.

Acknowledgments

Thanks to all in the Ndoleriire family for your steadfast love and support, to my adviser for your

invaluable guidance and to all the committee members for your support and dedication.

EVENTUAL CONE INVARIANCE

Abstract

by Michael Kasigwa, Ph.D. Washington State University July 2017

Chair: Michael Tsatsomeros

If K is a proper cone in R^n, some results in the theory of eventually (entrywise) nonnegative matrices have equivalent analogues in eventual K-invariance. We develop these analogues using the classical Perron-Frobenius theory for cone preserving maps. Using an ice-cream cone, we demonstrate that, unlike the entrywise nonnegative case of a matrix A ∈ R^{n×n}, eventual cone invariance and eventual exponential cone invariance are not equivalent. The notions of inverse positivity of M-matrices, M-type matrices and Mv-matrices are extended to Mv,K-matrices (operators), which have the form A = sI − B, where B is eventually K-nonnegative, that is, B^m K ⊆ K for all sufficiently large m. Eventual positivity of semigroups of linear operators on Banach spaces ordered by a cone can be investigated using resolvents or resolvent-type operators constructed from the generators.

Contents

Acknowledgments
Abstract
Contents
Symbols

1 Introduction
1.1 Introduction
1.2 Nonnegative Matrices
1.3 Exponential Nonnegativity
1.4 Organization of the Dissertation

2 Cones and Perron-Frobenius Theory
2.1 Introduction and Definitions
2.2 Perron-Frobenius Theory

3 Matrices and Cone Invariance
3.1 Introduction
3.2 Results from the Theory of Invariant Cones
3.3 Eventually (exponentially) K-Positive Matrices
3.4 Eventually (exponentially) K-Nonnegative Matrices
3.5 Points of K-Potential
3.6 Generalizing M-Matrices based on Eventual K-Nonnegativity

4 Semigroups of Linear Operators and Invariance
4.1 Introduction
4.2 Operator Semigroups on Ordered Spaces
4.3 Semigroup Generation
4.4 Functional Calculus for Bounded Linear Operators
4.5 Positive Semigroups

Bibliography

Symbols

R Set of real numbers.

R+ Nonnegative real numbers.

n R+ Nonnegative orthant.

R(λ) Real part of λ.

C Complex numbers.

n C n-dimensional complex space.

Mn(R or C) A set of all n × n matrices over R or C.

(T (t))t≥0 A one-parameter semigroup over R (or C).

ρ(A) Resolvent set of operator A.

σ(A) Spectrum of operator A.

σb(A) Boundary spectrum of A.

Rλ or R(λ, A) The resolvent operator.

L(X) Bounded linear operators on a normed linear space X.

r(A) Spectral radius of A.

S(A) Spectral bound, i.e. Sup{R(λ): λ ∈ σ(A)} ∈ [−∞, ∞].

+∂U Positive oriented boundary of an open set U.

C(Ω) Continuous functions on a compact Hausdorff space Ω.

C0(X) Space of continuous functions on X that vanish at infinity.

f ≥ 0 A function f is positive, pointwise.

f ≫ 0 A function f is strongly positive.

T ≥ 0 (T  0) Operator T is positive (strongly positive).

E+ {f ∈ E : f ≥ 0}, the positive cone of E.

int K Interior of a cone K.

π(K) The set of square matrices that leave a cone K-invariant.

XA(K) Points of K-potential.

Mv,K-matrix Matrix of the form A = sI − B, B eventually K-nonnegative, s ≥ r(B) ≥ 0.

index_λ(B) The index of the eigenvalue λ of B.

Chapter 1

Introduction

1.1 Introduction

In this dissertation we start with introductory notions on the theory of nonnegative matrices in R^{n×n} and basics of the Perron-Frobenius theory. This will be followed by the generalizations of this theory to cones in R^n and a description of the extension to semigroups of linear operators on Banach lattices. This approach also reflects the trend of research in this field, where results on the spectrum of positive operators on R^n ordered by the cone R^n_+, first analyzed by Perron (1907) and extended to nonnegative matrices by Frobenius (1909), were further extended to general spaces and cones by Krein and Rutman in the 1950's. Quite often in the literature on the theory of linear operators one observes that, whenever possible, some ideas developed in infinite dimension are tested or exemplified in finite dimension and conversely ideas developed in finite dimension are extended to infinite dimension.

A systematic theory has been developed on eventually positive semigroups of linear operators on some Banach lattices. This development has advanced the notion of positivity (nonnegativity) from matrices to operators in general, along with applications. One such work examined is the reachability and holdability of nonnegative states for linear differential systems, utilizing the notion of Perron-Frobenius type properties [25].

Nonnegative matrices are used in different fields such as biology, economics, dynamical systems, numerical analysis, probability theory and others. Several scholars have developed characterizations of inverse positive matrices and have made extensions from Z- (negated Metzler) and M- (Minkowski) matrices to generalized matrices and operators. Le and McDonald [19, 2006] characterized inverses of M-type matrices (created from eventually nonnegative matrices), and Olesky, Tsatsomeros and van den Driessche [27, 2009] extended the theory to Mv-matrices (a generalization of M-matrices based on eventually nonnegative matrices).

We have some results on the extension of the above theory to eventual cone invariance for general cones in R^n; as far as we know, this has not been studied before from our particular perspective. The Perron-Frobenius theorem and its generalizations have indeed been studied by several authors in the context of both finite and infinite dimensions. The present work seeks to generalize some key results from nonnegative matrix theory to cones and examine the corresponding Perron-Frobenius type properties. Some new theorems are formulated that correspond to cones.

Recently, Daners et al. [7, 2016] investigated strongly continuous semigroups with eventual positivity properties on C(Ω), the space of complex-valued continuous functions on a compact non-empty Hausdorff space Ω. They looked at positive initial conditions and the corresponding eventually positive solutions to the Cauchy problem for large enough time.

We examine further extensions of the work in [7] by using eventual cone invariance, in particular for Banach spaces ordered by a cone, in order to describe a new analogue of what is known as an Mv,K-type matrix (operator). We make use of the notions of eventually positive matrices (operators) and look at resolvent positive matrices (operators) and generators of positive linear operator semigroups. In this way, we link the notions in M-matrix and inverse M-matrix theory in [4], [19], [27] (among others) to those in [1], [7]. We make some observations and provide some answers to questions on the characterization of eventual cone positivity for linear operators in a different vein from what has been done to date.

1.2 Nonnegative Matrices

Definition 1.1. Let A = [a_ij] ∈ R^{n×n}, with a_ij ∈ R.

(i) A is nonnegative (positive) if aij ≥ 0 (> 0) for all i, j ∈ {1, 2, ..., n} and is denoted by

A ≥ 0 (A > 0).

(ii) A is eventually nonnegative (positive) if there exists a k0 ∈ N, such that for all k ≥ k0,

Ak ≥ 0 (Ak > 0).

(iii) Given A ∈ C^{n×n}, the set σ(A) = {λ ∈ C : (λI − A)^{-1} is undefined} is called the spectrum of A. In finite dimension this is simply the set of all eigenvalues of A, i.e. the point spectrum.

(iv) The spectral radius of A, denoted by r(A), is the max{|λ| : λ ∈ σ(A)}.

(v) A is essentially nonnegative (positive), also known as Metzler, if there exists an α ≥ 0 such that

A + αI ≥ 0 (> 0); (−A is called a Z-matrix).

(vi) When a Z-matrix is invertible and its inverse is nonnegative, it is called an invertible M-matrix. Another definition of an M-matrix A is A = sI − B, where B ∈ R^{n×n} is (entrywise) nonnegative and s ≥ r(B); here s > r(B) implies A is an invertible M-matrix, while s = r(B) implies A is a singular M-matrix.

(vii) Pseudo M-matrices are of the form A = sI −B where s ≥ r(B) > 0 such that r(B) ∈ σ(B) and r(B) ≥ |λ|, λ ∈ σ(B) \{r(B)} with the corresponding left (right) eigenvectors strictly positive.

(viii) Matrices of the form A = sI − B, where s ≥ r(B) > 0 and B is eventually nonnegative are called EM-matrices.

(ix) M-type matrices are of the form A = sI − B , where B is eventually nonnegative, s > r(B), such that the possible index of 0 as an eigenvalue of B is at most 1, and there exists β > r(B) such that A = sI − B has a positive inverse for all s ∈ (r(B), β), [19].

(x) Olesky, Tsatsomeros and van den Driessche generalized M-matrices to Mv-matrices. An Mv matrix has the form A = sI − B, where s ≥ r(B) ≥ 0 and B is eventually nonnegative (notice that

Mv-matrices ⊇ EM-matrices), [27].

Example 1.1 (An eventually positive matrix). (i) Let

A = [ 3  2 ; 3  −2 ],    A^6 = [ 3615  962 ; 1443  1210 ],

and A^k > 0 for all k ≥ 6, so A is eventually positive. To see this, let A = [a_ij] and notice that |a_i1| > |a_i2| for i = 1, 2. By powering A successively, the columns of each power are linear combinations of the columns of the previous power, so the entries in the first column eventually dominate the corresponding entries in the second column in size; this makes the entries of the successive powers become positive and stay positive.

(ii) Let

B = [ 3  1  1 ; 3  1  −1 ; 3  −1  −1 ],    B^4 = [ 255  63  29 ; 189  61  39 ; 87  39  37 ],

and B^k > 0 for all k ≥ 4, thus B is eventually positive. Again using the argument in part (i), it can be seen that the powers of B become positive and stay positive for all k ≥ 4.
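As an added illustration (not part of the original development), the following Python sketch, assuming NumPy is available, computes successive powers of the matrices A and B above and reports the first exponent after which every computed power stays entrywise positive; power_index and the cutoff k_max are hypothetical choices for this finite check only.

import numpy as np

def power_index(M, k_max=50):
    # Smallest k with M^j > 0 entrywise for all k <= j <= k_max (a finite check, not a proof).
    P = np.eye(M.shape[0])
    first = None
    for k in range(1, k_max + 1):
        P = P @ M
        if np.all(P > 0):
            if first is None:
                first = k
        else:
            first = None          # positivity was lost; restart the count
    return first

A = np.array([[3.0, 2.0], [3.0, -2.0]])
B = np.array([[3.0, 1.0, 1.0], [3.0, 1.0, -1.0], [3.0, -1.0, -1.0]])

print(np.linalg.matrix_power(A, 6))     # [[3615.  962.] [1443. 1210.]]
print(power_index(A), power_index(B))   # 6 4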

1.3 Exponential Nonnegativity

Definition 1.2. (i) For A ∈ R^{n×n}, e^{tA} = I + tA + t^2 A^2 / 2! + ... = Σ_{k=0}^∞ t^k A^k / k! is called the matrix exponential of A.

(ii) A is exponentially nonnegative (positive) if etA ≥ 0 (> 0), ∀t ≥ 0.

(iii) A is eventually exponentially nonnegative if there exists t0 ≥ 0 such that for all t ≥ t0, e^{tA} ≥ 0.

The smallest such t0 is called the exponential index of A and is denoted by t0 = t0(A).

Theorem 1.3. [35]. A matrix A ∈ R^{n×n} is essentially nonnegative (Metzler) if and only if A is exponentially nonnegative.

Proof: Suppose A is essentially nonnegative. Then ∃ α ≥ 0 such that A + αI ≥ 0.

Now write A = A + αI − αI. Since αI and A + αI commute,

e^{tA} = e^{t(A+αI−αI)} = e^{−tα} e^{t(A+αI)} ≥ 0,

=⇒ etA ≥ 0 ∀ t ≥ 0, so A is exponentially nonnegative.

Conversely, let i, j ∈ {1, 2, . . . , n}. By way of contradiction, suppose that the (i, j) entry of A is negative, i.e. a_ij < 0 for some i ≠ j. Then

(e^{tA})_ij = 0 + t a_ij / 1! + t^2 a_ij^{(2)} / 2! + t^3 a_ij^{(3)} / 3! + ...,

where a_ij^{(k)} denotes the (i, j) entry of A^k. As t → 0+, the sign of (e^{tA})_ij becomes the same as that of t a_ij, which is negative.

This contradicts the exponential nonnegativity of A, so A must be essentially nonnegative (Metzler).
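A numerical spot-check of Theorem 1.3 (an added sketch, assuming NumPy and SciPy are available): a Metzler matrix M gives e^{tM} ≥ 0 at every sampled t, while a matrix N with a negative off-diagonal entry fails already for small t. The particular matrices and time grid are arbitrary choices for this illustration.

import numpy as np
from scipy.linalg import expm

M = np.array([[-2.0, 1.0], [3.0, -1.0]])   # Metzler: negative entries only on the diagonal
N = np.array([[1.0, -1.0], [0.0, 1.0]])    # not Metzler: the (1,2) entry is negative

ts = np.linspace(0.01, 5.0, 200)
print(all(np.all(expm(t * M) >= 0) for t in ts))   # True
print(all(np.all(expm(t * N) >= 0) for t in ts))   # False: (e^{tN})_{12} = -t e^t < 0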

1.4 Organization of the Dissertation

The organization of this dissertation is as follows. In Chapter 2, cones and Perron-Frobenius theory are introduced and the notion of nonnegativity (positivity) is described for cones. Specifically, nonnegativity (positivity) of a matrix A is generalized to cone nonnegativity (positivity), i.e. order preserving matrices, and the cone version of the Perron-Frobenius theorem is provided.

In Chapter 3 the main results on eventual cone nonnegativity (positivity) are presented. It is shown that while some results in the entrywise analysis of eventually nonnegative matrices carry over to cones, there are other results that do not generalize. We show by example that eventual cone nonnegativity and eventual exponential cone nonnegativity are not equivalent when the cone is non-polyhedral.

Our contributions extend some results in the theory of entrywise nonnegative matrices and eventual nonnegativity (positivity) to cones. On the possible extensions, we prove results relevant to cones or provide counter-examples on the notions of nonnegativity (positivity), eventual nonnegativity (positivity), exponential nonnegativity (positivity) and eventual exponential nonnegativity (positivity) of a square matrix A. Our results do not rely only on entrywise nonnegativity but on a more general operator-theoretic approach of positive invariance with respect to a given operator. We create a new class of matrices, the Mv,K-matrices, that extends the M-type and Mv-type matrices in [19] and [27]. These notions can then be further generalized to Mv,K-type operators, as we point out in Chapter 4 in the context of positive linear operator semigroups on ordered Banach spaces.

We conclude with some questions suitable for further research.

Chapter 2

Cones and Perron-Frobenius Theory

2.1 Introduction and Definitions

Definition 2.1 (Cone). Let K ⊆ R^n. Then K is:

(i) A cone if x ∈ K, α ∈ R+ implies αx ∈ K;

(ii) convex if x, y ∈ K, 0 ≤ t ≤ 1 implies tx + (1 − t)y ∈ K;

n (iii) closed (in the Euclidean space R ) if {xn} ⊂ K and limn→∞ xn = x implies x ∈ K;

(iv) solid if int K 6= ∅ (i.e. the topological interior of K is nonempty);

(v) pointed if −K ∩ K = {0} and

(vi) proper if it is convex, closed, solid and pointed.

Additionally, a cone K in R^n is said to be generating if R^n = K − K, and K is normal if x, y ∈ K and y − x ∈ K implies there exists b > 0 such that ‖x‖ ≤ b‖y‖.

7 Example 2.1.

(i) The empty set, ∅, is vacuously a cone that is not proper since 0 ∈ K for any proper cone K.

(ii) {0} is the trivial cone; it is not proper since it is not solid.

(iii) Lines and half-lines are cones, but they are not proper because they are not solid (they have empty interior).

(iv) Let E = R^2 with E^+ = {x = (x1, x2) : x2 = 0, x1 ≥ 0}, the nonnegative horizontal axis. E^+ is not solid, and it is also a non-generating cone; that is, not every x ∈ R^2 can be written as x = u − v where u, v ∈ E^+. For example, x = (2, 1) cannot be written in the form u − v for any u, v ∈ E^+.

n (v) R+ is a proper cone called the nonnegative orthant.

(vi) Let f :Ω → R be continuous, Ω a compact Hausdorff topological space.

K = {f : f(x) ≥ 0} is a cone in the function space C(Ω), the space of continuous real valued functions on Ω.

(vii) The set of nonnegative sequences

{ {x_n} ∈ ℓ^p, 1 ≤ p ≤ ∞ : Σ_{i=1}^∞ |x_i|^p < ∞, x_i ≥ 0 }

is a cone in the sequence space ℓ^p and has empty interior.

(viii) Let X be a topological space and consider real-valued functions f : X → R. In

L^p(X) = { f : ∫_X |f|^p < ∞ },  1 ≤ p < ∞,

the set K = {f : f(x) ≥ 0 for almost every x ∈ X} is a cone.

Definition 2.2. (i) The dual of S ⊆ R^n is defined by S^* = {z ∈ R^n : z^T y ≥ 0 for all y ∈ S}.

(ii) K is a polyhedral cone if and only if K = B R^m_+ for some n × m matrix B (i.e. K is comprised of nonnegative linear combinations of a finite set of vectors in R^n, called the generators of K). When m = n and B is invertible, K = B R^n_+ is called simplicial.

Remark

For any set S ⊆ R^n, S^* is a proper cone. A cone K is polyhedral if and only if K^* is polyhedral. If for a cone K we have K = K^*, we call K self-dual; e.g. K = R^n_+ is a self-dual cone.
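To make the polyhedral and dual cone notions concrete, here is a small Python sketch (an added illustration only, assuming NumPy); in_cone and in_dual_cone are hypothetical helpers for a simplicial cone K = B R^3_+ with invertible generator matrix B, using the facts that x ∈ K iff B^{-1}x ≥ 0 and z ∈ K^* iff B^T z ≥ 0.

import numpy as np

B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])     # columns generate the simplicial cone K = B R^3_+

def in_cone(x, B, tol=1e-12):
    # x lies in K iff its coordinates in the generator basis are nonnegative
    return np.all(np.linalg.solve(B, x) >= -tol)

def in_dual_cone(z, B, tol=1e-12):
    # z lies in K^* iff z^T (B c) >= 0 for all c >= 0, i.e. iff B^T z >= 0
    return np.all(B.T @ z >= -tol)

x = B @ np.array([2.0, 0.5, 1.0])                  # a nonnegative combination of generators
print(in_cone(x, B))                                # True
print(in_cone(np.array([-1.0, 0.0, 0.0]), B))       # False
print(in_dual_cone(np.array([1.0, 0.0, 0.0]), B))   # True, since B^T z = (1, 1, 0) >= 0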

Definition 2.3 (Cone Nonnegativity (Positivity)). Let K ⊆ R^n. A ∈ R^{n×n} is:

(i) K-nonnegative (resp., K-positive) if AK ⊆ K (resp. A(K \{0}) ⊆ int K-the interior of K).

(ii) essentially K-nonnegative (resp.,essentially K-positive) if ∃ α ≥ 0 such that A + αI is K- nonnegative (K-positive).

(iii) exponentially K-nonnegative (resp.,exponentially K-positive) if for every t ≥ 0, etAK ⊆ K

(resp., etA(K \{0}) ⊆ int K).

(iv) K-primitive if it is K-nonnegative and there exists a natural number m such that Am is K- positive.

(v) eventually K-nonnegative (resp., eventually K-positive) if there exists k0 ∈ N such that A^k K ⊆ K (resp., A^k(K \ {0}) ⊆ int K) for all k ≥ k0. We denote the smallest such k0 by k0(A) and call it the power index of A.

(vi) eventually exponentially K-nonnegative (resp., eventually exponentially K-positive) if ∃t0 ∈

tA tA [0, ∞) such that ∀t ≥ t0, e K ⊆ K (resp., e (K \{0}) ⊆ int K). We denote the smallest such number by t0 = t0(A) and call it the exponential index of A.

n×n Lemma 2.4. Let A ∈ R . The following are equivalent:

(i) A is eventually exponentially K-nonnegative.

9 (ii) There exists a ∈ R such that A + aI is eventually exponentially K-nonnegative.

(iii) For all a ∈ R, A + aI is eventually exponentially K-nonnegative.

Proof: The equivalences follow readily from the fact that

e^{t(A+aI)} = e^{taI} e^{tA} = e^{ta} e^{tA}

since aI and A commute.

2.2 Perron-Frobenius Theory

Perron-Frobenius theory commonly refers to the study of square (entrywise) nonnegative matrices

A. The name reflects the basic properties of such matrices, which are the tenets of the celebrated

Perron-Frobenius theorem; that is, in its simplest form, the spectral radius is necessarily an eigen- value of A corresponding to a nonnegative eigenvector. More generally, however, Perron-Frobenius

n theory encompasses the study of matrices A that leave a proper cone K ⊂ R invariant, namely

n AK ⊆ K. This generalizes nonnegative matrices, which leave the nonnegative orthant, R+, in- variant. One can include in this area the study of functionals and operators, in finite or infinite dimensions, in contexts where the main tenets of the Perron-Frobenius theory apply.

Definition 2.5. (i) A matrix A ∈ C^{n×n} is said to be reducible if either A is a 1 × 1 zero matrix or n ≥ 2 and there exists a permutation matrix P such that P A P^T = [ B  C ; 0  D ], where B and D are square matrices and 0 represents a zero matrix.

(ii) A is said to be irreducible if it is not reducible.

Theorem 2.6 (Chapter 2, 1.1, [4]). If A is a nonnegative square matrix, then:

(i) r(A), the spectral radius of A, is an eigenvalue of A.

(ii) A has a nonnegative eigenvector corresponding to r(A).

The following are traditional statements of the Perron-Frobenius theorem in two parts, [4].

Theorem 2.7 (Part-I). (i) If A is a positive matrix (A > 0) then r(A) is a simple eigenvalue greater than the magnitude of any other eigenvalue. (ii) If A ≥ 0 is irreducible, then r(A) is a simple eigenvalue, any eigenvalue of A of the same modulus is also simple, A has a positive eigenvector x corresponding to r(A) and any nonnegative eigenvector of A is a multiple of x.

Theorem 2.8 (Part-II). (i) If an irreducible A ≥ 0 has h eigenvalues {λ_k = r e^{iθ_k}, k = 0, 1, . . . , h−1} of modulus r = r(A), 0 = θ_0 < θ_1 < ··· < θ_{h−1}, then these eigenvalues are the distinct roots of λ^h − r^h = 0. (ii) More generally, σ(A) goes over into itself under a rotation of the complex plane by 2π/h. If h > 1, then A is permutationally similar to

P A P^T = [ 0  A_{12}  0  ...  0 ;  0  0  A_{23}  ...  0 ;  ...  ;  0  0  0  ...  A_{h−1,h} ;  A_{h1}  0  0  ...  0 ],

where the zero blocks along the diagonal are square.

We proceed with the cone version of Perron-Frobenius theorem after the following definitions. We denote cone-nonnegativity defined in 2.3 by x ≥K 0.

n Definition 2.9. (i) A face F of a cone K ⊆ R is a subset of K which is a cone in the linear span of F such that x ∈ F, x ≥K y ≥K 0 implies y ∈ F .

(ii) A face F of K is trivial if F = {0} or F = K.

11 n×n (iii) If a matrix A ∈ R is K-nonnegative , then a face F of K is an A-invariant face if AF ⊆ F .

(iv) If A is K-nonnegative, then A is K-irreducible if the only A invariant faces are the trivial faces.

A is K-reducible otherwise.

Theorem 2.10 (Perron-Frobenius-Cone Version).

n×n Let K be a proper cone, A ∈ R be K-nonnegative and K-irreducible. Then

(i) A has a positive real eigenvalue equal to its spectral radius r(A);

(ii) r(A) has an associated left (right) eigenvector yT (resp. x) in the interior of K∗ (resp.K);

n×n (iii) If B ∈ R and (B − A)K ⊆ K, then r(B) ≥ r(A);

(iv) r(A) is a simple eigenvalue of A;

(v)The eigenvectors corresponding to r(A) are unique up to a scalar multiple;

(vi) If A is aperiodic (i.e. not periodic) and y^T x = 1, then lim_{k→∞} (A / r(A))^k = x y^T.

Definition 2.11 (Perron-Frobenius Properties).

Let K be a proper cone in R^n. A ∈ R^{n×n} is said to possess: (i) the K-Perron-Frobenius property (KPFp) if r(A) > 0, r(A) ∈ σ(A) and there exists an eigenvector of A in K corresponding to r(A);

(ii) the strong KPFp if, in addition to having the KPFp, r(A) is a simple eigenvalue such that r(A) > |λ| for all λ ∈ σ(A) \ {r(A)}, and A has a corresponding eigenvector in int K.

Remark

By Perron-Frobenius theorem, every non-nilpotent K-nonnegative matrix A has the KPFp and every K-primitive K-nonnegative matrix A has the strong KPFp.

Chapter 3

Matrices and Cone Invariance

3.1 Introduction

In this chapter we extend the results on the entrywise essential, eventual, exponential and eventually exponential nonnegativity of a square matrix A to cone-nonnegativity analogues, as a generalization of the nonnegative matrix theory to cones in R^n. We start with the basic definitions, properties and results on cones. Some of the definitions and results in the entrywise case are equivalent to those in the more general cone theoretic setting in R^n, while others do not generalize. For operators in general there are significant differences that arise because we cannot directly examine the structure of an operator using properties such as Jordan canonical forms, sign patterns, or the algebraic and geometric multiplicities of the eigenvalues. We use cone-nonnegativity (positivity) and eventual cone-nonnegativity (positivity) to study general cones in R^n. Further work on cones in infinite dimensional spaces and eventual positivity of operators will be discussed in Chapter 4.

3.2 Results from the Theory of Invariant Cones

n Let K be a proper cone in R . By the Perron-Frobenius theorem, every non-nilpotent K-nonnegative matrix A has the K-Perron-Frobenius property and every K-primitive matrix A has the strong K-

Perron-Frobenius property; see [4].

n n×n Given a proper cone K in R , the set of all matrices in R that are K-nonnegative is itself a

n×n n×n proper cone in R , denoted by π(K). The relative interior of π(K) in R , int π(K), consists

n×n of all K-positive matrices in R .

There is a well-known relation between the notions of exponential K-nonnegativity and essential

K-nonnegativity; see [32, 33] and [4, Chapter 6, Theorem (3.12)].

Lemma 3.1. Let K be a cone in R^n. If A ∈ R^{n×n} is essentially K-nonnegative, then A is exponentially K-nonnegative. The converse is true only when K is a polyhedral cone.

Proof: The proof of the equivalence of essential and exponential K-nonnegativity when K is polyhedral is found in [32]. The fact that exponential K-nonnegativity does not imply essential

K-nonnegativity when K is non-polyhedral is supported by the following counterexample.

Example 3.1. Consider the non-polyhedral cone

K = {x = [x1 x2 x3]^T ∈ R^3 : x1^2 + x2^2 ≤ x3^2, x2 ≥ 0, x3 ≥ 0}

and the matrix

A = [ 1  1  −1 ; −1  1  1 ; 0  2  2 ].

It can be verified that for all t ≥ 0, e^{tA}K ⊆ K. Notice, however, that for x = [−1 0 1]^T ∈ K and for all a ≥ 0, (A + aI)x ∉ K. That is, A is exponentially K-nonnegative but not essentially K-nonnegative.
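The claims of Example 3.1 can be spot-checked numerically; the sketch below (an added illustration, assuming NumPy and SciPy) samples finitely many times t and shifts a, so it supports but does not prove the statements.

import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0, -1.0],
              [-1.0, 1.0, 1.0],
              [0.0, 2.0, 2.0]])

def in_K(v, tol=1e-9):
    # membership in K = {x : x1^2 + x2^2 <= x3^2, x2 >= 0, x3 >= 0}
    return v[0]**2 + v[1]**2 <= v[2]**2 + tol and v[1] >= -tol and v[2] >= -tol

x = np.array([-1.0, 0.0, 1.0])                     # a boundary point of K

# e^{tA}x stays in K for all sampled t >= 0 (consistent with e^{tA}K contained in K) ...
print(all(in_K(expm(t * A) @ x) for t in np.linspace(0.0, 10.0, 400)))    # True
# ... while (A + aI)x lies outside K for every sampled a >= 0.
print(any(in_K(A @ x + a * x) for a in np.linspace(0.0, 100.0, 400)))     # False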

Lemma 3.2. [14, Lemma 8.2.7] Let A ∈ R^{n×n} be such that λ ∈ C is a simple eigenvalue of A and |λ| > |µ| for all eigenvalues µ ≠ λ of A. Let u, v be right and left eigenvectors corresponding to λ, respectively, such that u^T v = 1. Then

lim_{k→∞} (A / r(A))^k = u v^T.

Proof: By assumption, u and v span the eigenspaces of A and A^T corresponding to λ ≠ 0, respectively, and r(A) = |λ| > |µ| for every eigenvalue µ ≠ λ of A. Let L := u v^T. Then L^k = L and (A − r(A)L)^k = A^k − r(A)^k L. If 0 ≠ w, Aw = µw, µ ≠ r(A), µ ≠ 0, then from (A − r(A)L)u = Au − r(A)u v^T u = (A − r(A)I)u = 0, we can conclude that σ(A − r(A)L) = {0, λ_2, ..., λ_n}. Therefore

lim_{k→∞} (A − r(A)L)^k / r(A)^k = lim_{k→∞} (A^k − r(A)^k L) / r(A)^k = lim_{k→∞} A^k / r(A)^k − L = 0,

which gives lim_{k→∞} A^k / r(A)^k = L = u v^T.
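For a concrete instance of Lemma 3.2 (an added illustration, assuming NumPy), take the eventually positive matrix A of Example 1.1, whose eigenvalues are 4 and −3; the scaled powers (A/r(A))^k converge to u v^T.

import numpy as np

A = np.array([[3.0, 2.0], [3.0, -2.0]])   # eigenvalues 4 and -3, so r(A) = 4
u = np.array([2.0, 1.0])                  # right eigenvector for the eigenvalue 4
v = np.array([3.0, 1.0])                  # left eigenvector for the eigenvalue 4
v = v / (u @ v)                           # normalize so that u^T v = 1

P = np.linalg.matrix_power(A / 4.0, 60)   # (A / r(A))^k for a large k
print(np.max(np.abs(P - np.outer(u, v))))  # tiny (about 3e-8): the powers approach u v^T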

3.3 Eventually (exponentially) K-Positive Matrices

In this section, we characterize eventual (exponential) K-positivity.

n n×n Theorem 3.3. Let K be a proper cone in R and A ∈ R . The following are equivalent:

(i) A has the strong K-Perron-Frobenius property and AT has the strong K∗-Perron-Frobenius property.

(ii) A is eventually K-positive.

(iii) AT is eventually K∗-positive.

Proof: First, it is convenient to observe that (ii) and (iii) are equivalent.

(ii) ⇐⇒ (iii). Recall that a matrix is K-positive if and only if its transpose is K∗-positive [4, (2.23),

Chapter 2]. Thus, Ak is K-positive for all sufficiently large k if and only if (AT )k is K∗-positive for all sufficiently large k.

(i) =⇒ (ii). Suppose (i) holds and let Au = r(A)u, v^T A = r(A)v^T with u ∈ int K and v ∈ int K^* such that u^T v = 1. By Lemma 3.2, we can set

B = lim_{k→∞} (A / r(A))^k = u v^T.

For every w ∈ K \ {0}, we have that Bw = (v^T w) u ∈ int K because v^T w > 0. Thus B is K-positive, i.e., B ∈ int π(K). Therefore, on account of π(K) being a proper cone in R^{n×n} [32, Lemma 5] and the convergence of positively scaled powers of A to B, there exists a nonnegative integer k0 such that A^k is K-positive for all k ≥ k0. That is, A is eventually K-positive.

(ii) =⇒ (i). Assume there exists a nonnegative integer k0 such that A^k is K-positive for all k ≥ k0. Thus, for all k ≥ k0, r(A^k) is a simple positive eigenvalue of A^k greater than the magnitude of any other eigenvalue of A^k, having a corresponding eigenvector in int K [4, Theorem (3.26), Chapter 2]. It follows that A^k, and thus A, possess the strong K-Perron-Frobenius property. Since (ii) implies (iii), the above argument applied to A^T also shows that A^T has the strong K^*-Perron-Frobenius property.

n n×n Theorem 3.4. Let K be a proper cone in R . For a matrix A ∈ R the following properties are equivalent:

16 (i) There exists a ≥ 0 such that A + aI has the K-strong Perron-Frobenius property and AT + aI have the K∗-strong Perron-Frobenius property.

(ii) A + aI is eventually K-positive for some a ≥ 0.

(iii) AT + aI is eventually K∗-positive for some a ≥ 0.

(iv) A is eventually exponentially K-positive.

(v) AT is eventually exponentially K∗-positive.

Proof: The equivalences of (i)–(iii) are the content of Theorem 3.3 applied to A + aI. We will argue the equivalence of (ii) and (iv), with the equivalence of (iii) and (v) being analogous:

Let a ≥ 0 be such that A + aI is eventually K-positive, and let k0 be a positive integer such that (A + aI)^k (K \ {0}) ⊆ int K for all k ≥ k0. As K is solid, there exists large enough t0 > 0 so that the first k0 − 1 terms of the series

e^{t(A+aI)} = Σ_{m=0}^∞ t^m (A + aI)^m / m!

are dominated by the remaining terms, rendering e^{t(A+aI)} K-positive for all t ≥ t0. It follows that e^{tA} = e^{−ta} e^{t(A+aI)} is K-positive for all t ≥ t0. That is, A is eventually exponentially K-positive.

Conversely, suppose A is eventually exponentially K-positive. As (eA)k = ekA, it follows that eA is eventually K-positive. Thus, by Theorem 3.3, eA has the K-strong Perron-Frobenius property.

Recall that σ(eA) = {eλ : λ ∈ σ(A)} and so r( eA) = eλ for some λ ∈ σ(A). Then for each µ ∈ σ(A) with µ 6= λ we have

eλ > |eµ| = |eRe µ+iIm µ| = eRe µ.

17 Hence λ is the spectral abscissa of A, namely, λ > Re µ for all µ ∈ σ(A) with µ 6= λ. In turn, this means that there exists large enough a > 0 such that

λ + a > |µ + a| ∀ µ ∈ σ(A), µ 6= λ.

As A + aI shares eigenspaces with eA, it follows that A + aI has the strong K-Perron-Frobenius property. By Theorem 3.3, A + aI is eventually K-positive.

Remark

It is worth emphasizing that the role of polyhedrality of a cone differs in the context of eventual

(exponential) nonnegativity from the role presented in Lemma 3.1. In Example 3.1 we presented a non-polyhedral cone K and a matrix A such that A is exponentially K-nonnegative but not essentially K-nonnegative. Nevertheless, perhaps contrary to one’s intuition, by Theorem 3.4 we have that for every proper cone (polyhedral or not), eventual exponential K-positivity implies (in fact, is equivalent) to eventual K-positivity of A + αI for some α ≥ 0. This is illustrated by examining below the matrix and cone of example 3.1.

Example 3.2. Recall from Example 3.1 the non-polyhedral cone

K = {x = [x1 x2 x3]^T ∈ R^3 : x1^2 + x2^2 ≤ x3^2, x2 ≥ 0, x3 ≥ 0}

and the matrix

A = [ 1  1  −1 ; −1  1  1 ; 0  2  2 ].

We find that r(A + I) = 4.1304 with corresponding right and left eigenvectors given, respectively, by

x = [ −0.17494, 0.48446, 0.85715 ]^T   and   y = [ −0.27516, 0.58620, 0.76200 ]^T.

The dual of K can be computed to be

K^* = K ∪ {x = [x1 x2 x3]^T ∈ R^3 : x3 ≥ |x1|, x2 ≥ 0}.

The interiors of K and K∗ are the corresponding subsets for which the defining inequalities are strict. Notice then that x ∈ int K and y ∈ int K∗. Thus A + I and its transpose satisfy the strong K-Perron-Frobenius and the strong K∗-Perron-Frobenius property, respectively. It follows by Theorem 3.3 that A + I is eventually K-positive, and thus by Theorem 3.4 it is eventually exponentially K-positive.
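The numbers in Example 3.2 can be reproduced as follows (an added Python sketch assuming NumPy; dominant_eigpair is a hypothetical helper, and eigenvector signs are normalized so that the third entry is positive).

import numpy as np

A = np.array([[1.0, 1.0, -1.0],
              [-1.0, 1.0, 1.0],
              [0.0, 2.0, 2.0]])
M = A + np.eye(3)

def dominant_eigpair(B):
    w, V = np.linalg.eig(B)
    i = np.argmax(w.real)
    v = V[:, i].real
    return w[i].real, v * np.sign(v[2])      # fix the sign so the third entry is positive

r, x = dominant_eigpair(M)        # right eigenvector of A + I
_, y = dominant_eigpair(M.T)      # left eigenvector of A + I

print(round(r, 4))                                              # 4.1304
print(x[0]**2 + x[1]**2 < x[2]**2 and x[1] > 0 and x[2] > 0)    # True: x in int K
print(abs(y[0]) < y[2] and y[1] > 0)                            # True: y in int K^*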

3.4 Eventually (exponentially) K-Nonnegative Matrices

We now turn our attention to necessary conditions for eventual (exponential) cone nonnegativity.

n n×n Theorem 3.5. Let K be a proper cone in R and A ∈ R be an eventually K-nonnegative matrix which is not nilpotent. Then A has the K-Perron-Frobenius property and AT has the K∗-Perron-

Frobenius property.

Proof: Since A^k is K-nonnegative for some sufficiently large k, r(A^k) is an eigenvalue of A^k and has a corresponding eigenvector in K [4, Theorem (3.2), Chapter 2]. It follows that A possesses the K-Perron-Frobenius property. Recall now that A is K-nonnegative if and only if A^T is K^*-nonnegative, from which the second part of the conclusion follows.

Theorem 3.6. Let K be a proper cone in R^n and A ∈ R^{n×n} be an eventually exponentially K-nonnegative matrix. Then the following hold:

(i) e^A has the K-Perron-Frobenius property and e^{A^T} has the K^*-Perron-Frobenius property.

(ii) If r(e^A) is a simple eigenvalue of e^A and r(e^A) = e^{r(A)}, then there exists a0 ≥ 0 such that lim_{k→∞} ((A + aI)/r(A + aI))^k = x y^T for all a > a0, where x ∈ K and y ∈ K^* are, respectively, right and left eigenvectors of A corresponding to r(A), satisfying x^T y = 1. The limit matrix x y^T belongs to π(K).

Proof:

(i) Let A be eventually exponentially K-nonnegative. As (e^A)^k = e^{kA}, it follows that e^A is eventually K-nonnegative. Thus, by Theorem 3.5 and since e^A and e^{A^T} are not nilpotent, e^A has the K-Perron-Frobenius property and e^{A^T} has the K^*-Perron-Frobenius property.

(ii) From (i) we specifically have that r(e^A) ∈ σ(e^A). Let x ∈ K, y ∈ K^* be right and left eigenvectors of e^A, respectively, corresponding to r(e^A) and normalized so that x^T y = 1. As in the proof of Theorem 3.4, r(e^A) = e^λ for some λ ∈ σ(A) with λ > Re µ for all µ ∈ σ(A) \ {λ}. This means that there exists a0 ≥ 0 such that for all a > a0,

r(A + aI) = λ + a > |µ + a|  for all µ ∈ σ(A), µ ≠ λ.

As A + aI and e^A share eigenvectors, we obtain that for all a > a0, A + aI and A^T + aI both have the Perron-Frobenius property relative to the cones K and K^*, respectively, with λ + a being simple and their only dominant eigenvalue. Applying Lemma 3.2 to A + aI, we thus obtain

lim_{k→∞} (A + aI)^k / (r(A + aI))^k = x y^T.   (3.4.1)

Note that for every w ∈ K, as y ∈ K^* we have y^T w ≥ 0; that is, x y^T w ∈ K.

Remark

When K = R^n_+, it was shown in [25, Theorem 3.7] that if A is eventually nonnegative with index_0(A) ≤ 1, then A is eventually exponentially nonnegative. This result generalizes to simplicial cones K = S R^n_+, where S is invertible, simply by working with SAS^{-1}. However, it is not in general true that eventual K-nonnegativity and an index assumption imply eventual exponential K-nonnegativity. This is shown by the following counterexample.

Example 3.3. Consider the non-polyhedral, proper (ice-cream) cone (see [16])

K = { s [x1 x2 1]^T ∈ R^3 : x1^2 + x2^2 ≤ 1, s ≥ 0 },

as well as the matrix A and its powers given by

A = [ a  b  0 ; 0  a  0 ; 0  0  1 ],    A^m = [ a^m  m a^{m−1} b  0 ; 0  a^m  0 ; 0  0  1 ],

where a = 1/2, b = −5. For any nonzero x = s[x1 x2 1]^T ∈ K, consider (1/s) A^m x = [y1 y2 1]^T. We have that

y1^2 + y2^2 = a^{2m} x1^2 + (a^{2m} + m^2 a^{2m−2} b^2) x2^2 + 2m a^{2m−1} b x1 x2.   (3.4.2)

We look at two cases. Firstly, if x1 x2 > 0, then since b = −5 < 0,

a^{2m} x1^2 + (a^{2m} + m^2 a^{2m−2} b^2) x2^2 + 2m a^{2m−1} b x1 x2 ≤ a^{2m} x1^2 + (a^{2m} + m^2 a^{2m−2} b^2) x2^2.   (3.4.3)

Observe that lim_{m→∞} a^{2m} = 0 and lim_{m→∞} m^2 a^{2m−2} = 0, since a = 1/2. Therefore, from (3.4.2) and (3.4.3), we get for sufficiently large m that

y1^2 + y2^2 ≤ a^{2m} x1^2 + (a^{2m} + m^2 a^{2m−2} b^2) x2^2 ≤ x1^2 + x2^2 ≤ 1.

Hence (1/s) A^m x ∈ K and thus A^m x ∈ K for all sufficiently large m.

Secondly, if x1 x2 < 0, let us without loss of generality assume |x1| ≥ |x2|. Then, for all sufficiently large m,

y1^2 + y2^2 = a^{2m} x1^2 + (a^{2m} + m^2 a^{2m−2} b^2) x2^2 + 2m a^{2m−1} b x1 x2
            ≤ (a^{2m} − 2m a^{2m−1} b) x1^2 + (a^{2m} + m^2 a^{2m−2} b^2) x2^2
            ≤ x1^2 + x2^2 ≤ 1.

Thus A^m x ∈ K for all sufficiently large m. That is, A is eventually K-nonnegative.

Next we will argue that A is not eventually exponentially K-nonnegative by considering

e^{tA} = [ e^{at}  t e^{a(t−1)} c  0 ; 0  e^{ta}  0 ; 0  0  e^t ],

where a = 1/2 and where c = b Σ_{m=1}^∞ m / (m! 2^{m−1}). Thus,

0 < |c| = |b| Σ_{m=1}^∞ m / (m! 2^{m−1}) ≤ |b| Σ_{m=1}^∞ 1 / 2^{m−1} = 2|b|.

Let now x = [0 1 1]^T ∈ K, so that e^{tA} x = [ c t e^{a(t−1)}, e^{at}, e^t ]^T. It follows that (c t e^{a(t−1)})^2 + (e^{at})^2 > 1, since c^2 > 0 and a = 1/2; that is, e^t (c^2 t^2 e^{−1} + 1) > 1. We can thus conclude that e^{tA} x ∉ K for any t > 0, i.e., A is not eventually exponentially K-nonnegative.
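Parts of Example 3.3 can be explored numerically; the sketch below (an added illustration, assuming NumPy and SciPy) checks that A^m maps sampled points of K back into K once m is large, and that the trajectory e^{tA}x with x = [0 1 1]^T lies outside K for small positive t. It samples finitely many points and times only, so it illustrates rather than proves the claims.

import numpy as np
from scipy.linalg import expm

a, b = 0.5, -5.0
A = np.array([[a, b, 0.0], [0.0, a, 0.0], [0.0, 0.0, 1.0]])

def in_K(v, tol=1e-9):
    # v lies in the ice-cream cone iff v3 >= 0 and v1^2 + v2^2 <= v3^2
    return v[2] >= -tol and v[0]**2 + v[1]**2 <= v[2]**2 + tol

# Sample boundary points x = [cos(theta), sin(theta), 1]^T of K and check A^m x in K for large m.
pts = [np.array([np.cos(t), np.sin(t), 1.0]) for t in np.linspace(0.0, 2 * np.pi, 60)]
print(all(in_K(np.linalg.matrix_power(A, 10) @ p) for p in pts))   # True: eventual K-nonnegativity

x = np.array([0.0, 1.0, 1.0])
print([in_K(expm(t * A) @ x) for t in (0.5, 1.0, 2.0, 5.0)])       # all False: the trajectory exits K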

3.5 Points of K-Potential

In this section K ⊂ R^n is a proper cone and A ∈ R^{n×n} denotes an eventually exponentially K-nonnegative matrix with exponential index t0 = t0(A) ≥ 0. We will study points of K-potential, that is, the set

X_A(K) = {x0 ∈ R^n | (∃ t̂ = t̂(x0) ≥ 0) (∀ t ≥ t̂) [e^{tA} x0 ∈ K]}.   (3.5.1)

X_A(K) comprises all initial points giving rise to solutions (trajectories) of dx/dt = Ax that reach K at some finite time and stay in K for all time thereafter.

Given the eventually exponentially K-nonnegative matrix A ∈ R^{n×n} with exponential index t0 = t0(A) ≥ 0, we define the cone

K0 = e^{t0 A} K = {x0 ∈ R^n | (∃ y ∈ K) [x0 = e^{t0 A} y]}

and consider the sets

Y_A(K0) = {x0 ∈ R^n | (∃ t̂ = t̂(x0) ≥ 0) [e^{t̂ A} x0 ∈ K0]}   (3.5.2)

and

X_A(K0) = {x0 ∈ R^n | (∃ t̂ = t̂(x0) ≥ 0) (∀ t ≥ t̂) [e^{tA} x0 ∈ K0]}.   (3.5.3)

Lemma 3.7. Let K0 and Y_A(K0) be as defined above. Then K0 ⊆ K ⊆ Y_A(K0).

Proof: We have that K0 ⊆ K since e^{t0 A} K ⊆ K. If x0 ∈ K, then for t̂ = 2 t0, e^{t̂ A} x0 = e^{t0 A} (e^{t0 A} x0) ∈ K0. Hence, K ⊆ Y_A(K0).

Note that the sets Y_A(K0), X_A(K0) and X_A(K) are convex cones. They are not necessarily closed sets, however. For example, when K = R^2_+ and

A = [ 0  1 ; 0  0 ],

it can be shown that X_A(R^2_+) consists of the whole upper half-plane excluding the negative x-axis.
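For this 2 × 2 example, e^{tA} = [ 1  t ; 0  1 ] exactly, so membership in X_A(R^2_+) can be checked directly. The following sketch (an added illustration assuming NumPy; the helpers and the finite time grid are ad hoc choices) is consistent with the description of X_A(R^2_+) just given.

import numpy as np

def flow(x0, t):
    # e^{tA} x0 for A = [[0, 1], [0, 0]], computed in closed form
    return np.array([x0[0] + t * x0[1], x0[1]])

def eventually_in_orthant(x0, t_grid):
    # crude check: does the trajectory enter R^2_+ and stay there on the sampled grid?
    inside = [bool(np.all(flow(x0, t) >= 0)) for t in t_grid]
    return any(inside) and all(inside[inside.index(True):])

ts = np.linspace(0.0, 50.0, 2001)
print(eventually_in_orthant(np.array([-3.0, 1.0]), ts))   # True:  x2 > 0 pulls x1 positive
print(eventually_in_orthant(np.array([-3.0, 0.0]), ts))   # False: the negative x-axis is excluded
print(eventually_in_orthant(np.array([2.0, -1.0]), ts))   # False: x2 < 0, trajectory leaves R^2_+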

The set Y_A(K0) comprises initial points for which the trajectories enter K0 at some time. The set X_A(K0) comprises initial points for which the trajectories enter K0 at some time and remain in K0 for all time thereafter. The set of points of K-potential, X_A(K), comprises initial points for which the trajectories at some time reach K and remain in K for all time thereafter. Next we shall argue that Y_A(K0), X_A(K0) and X_A(K) coincide and interpret this result subsequently.

Proposition 3.8. Let K ⊂ R^n be a proper cone and let A ∈ R^{n×n} be an eventually exponentially K-nonnegative matrix with exponential index t0 = t0(A) ≥ 0. Let K0 = e^{t0 A} K. Then

Y_A(K0) = X_A(K) = X_A(K0).

Proof: We begin by proving the first equality. If x0 ∈ Y_A(K0), then there exist t̂ ≥ 0 and y ∈ K such that e^{t̂ A} x0 = e^{t0 A} y. Thus, x0 = e^{(t0 − t̂)A} y and so e^{tA} x0 = e^{(t + t0 − t̂)A} y ∈ K if t + t0 − t̂ ≥ t0, i.e., for all t ≥ t̂. It follows that x0 ∈ X_A(K), i.e., Y_A(K0) ⊆ X_A(K). For the opposite containment, let x0 ∈ X_A(K); that is, there exists t̂ ≥ 0 such that e^{tA} x0 ∈ K for all t ≥ t̂. Let t̃ = t̂ + t0. Then e^{t̃ A} x0 = e^{t0 A} (e^{t̂ A} x0) ∈ K0, proving that X_A(K) ⊆ Y_A(K0), and thus equality holds.

For the second equality, we clearly have X_A(K0) ⊆ X_A(K) since K0 ⊆ K. To show the opposite containment, let x0 ∈ X_A(K). Then there exists t̂ ≥ 0 such that e^{t0 A} e^{sA} x0 ∈ K0 for all s ≥ t̂. That is, e^{tA} x0 ∈ K0 for all t ≥ t0 + t̂, and thus x0 ∈ X_A(K0).

Remarks

Referring to Proposition 3.8, we must make the following observations:

tA (i) If t0 = 0 (i.e., if e K ⊆ K for all t ≥ 0), then K0 = K. In this case, XA(K) coincides with the reachability cone of K for the essentially K-nonnegative matrix; this cone is studied in detail

n in [22, 23], especially when K = R+.

(ii) The equality XA(K) = XA(K0), in conjunction with Lemma 3.7, can be interpreted as saying

t A that the cone K0 = e 0 K serves as an attractor set for trajectories emanating at points of K- potential; in other words, trajectories emanating in XA(K) always reach and remain in K0 ⊆ K

25 after a finite time.

(iii) Our observations so far imply that the trajectory emanating from a point of K-potential will indeed enter cone K; however, it may subsequently exit K0 while it remains in K, and it will eventually re-enter K0 and remain in K0 for all finite time thereafter. This situation is illustrated

n by [25, Example 4.4] for K = R+. As a consequence, a natural question arises: When is it possible that all trajectories emanating in XA(K) reach and never exit K0? This is equivalent to asking

tA whether or not e K0 ⊆ K0 for all t ≥ 0. To resolve this question, we will invoke Lemma 3.1.

Corollary 3.9. Let K ⊆ R^n be a proper cone and let A ∈ R^{n×n} be an eventually exponentially K-nonnegative matrix with exponential index t0 = t0(A) ≥ 0. Let K0 = e^{t0 A} K. Then e^{tA} K0 ⊆ K0 for all t ≥ 0 if and only if t0 = 0 (or equivalently, if and only if A is exponentially K-nonnegative).

Proof: If t0 = 0, then K0 = K and e^{tA} K ⊆ K0 for all t ≥ 0. For the converse, suppose e^{tA} K0 ⊆ K0 for all t ≥ 0. We must show that t0 = 0. Let y ∈ K and consider x0 = e^{t0 A} y ∈ K0. As e^{tA} x0 ∈ K0 for all t ≥ 0, there must exist z ∈ K such that

e^{(t + t0)A} y = e^{t0 A} z  for all t ≥ 0.

But this means e^{tA} y = z ∈ K for all t ≥ 0. Since y was taken arbitrarily in K, we have e^{tA} K ⊆ K for all t ≥ 0; that is, t0 = 0.

3.6 Generalizing M-Matrices based on Eventual K-Nonnegativity

This section generalizes some of the results in [27] and concerns M∨,K - matrices, namely, matrices

n×n n of the form A = sI − B ∈ R , where K is a proper cone in R , B is an eventually K-nonnegative

26 matrix and s ≥ r(B) ≥ 0. In the remainder, every M∨,K - matrix A is assumed to be in the above form for some proper cone K.

We begin with some basic properties of M∨,K - matrices, which are immediate consequences of the fact that the eventually nonnegative (resp. positive) matrix B satisfies Theorem 3.5 (resp. Theorem

3.3).

n×n Theorem 3.10. Let A = sI − B ∈ R be an M∨,K - matrix. Then

(i) s − r(B) ∈ σ(A);

(ii) Re λ ≥ 0 for all λ ∈ σ(A);

(iii) detA ≥ 0 and detA = 0 if and only if s = r(B);

(iv) if, in particular, r(B) > 0, then there exists an eigenvector x ∈ K of A

and an eigenvector y ∈ K∗ of AT corresponding to λ(A) = s − r(B);

(v) if, in particular, B is eventually K-positive and s > r(B), then in (iv),

x ∈ int K, y ∈ int K^*, and in (ii) Re λ > 0 for all λ ∈ σ(A).

In the following result, different representations of an M∨,K - matrix are considered (analogous to different representations of an M-matrix).

n×n ˆ Theorem 3.11. Let A ∈ R be an M∨,K - matrix. Then in any representation A = tI − B with

Bˆ being eventually K-nonnegative, it follows that t ≥ r(Bˆ). If, in addition, A is nonsingular, then t > r(Bˆ).

n×n Proof: Since A is an M∨,K - matrix, A = sI − B ∈ R for some eventually K-nonnegative

B and some s ≥ r(B) ≥ 0. Let A = tI − Bˆ, where Bˆ is eventually K-nonnegative. If B is

27 nilpotent, then 0 = r(B) ∈ σ(B), and by Theorem 3.5, this containment also holds if B is not nilpotent. A similar containment holds for Bˆ. If t ≥ s, then r(Bˆ) = r((t − s)I + B) = r(B) + t − s.

Hence, t = s − r(B) + r(Bˆ) ≥ r(Bˆ). Similarly, if t ≤ s, then r(B) = r(Bˆ) + s − t. Hence, t = s − r(B) + r(Bˆ) ≥ r(Bˆ). If A is nonsingular, it follows by Theorem 3.10 (iii) that s > r(B) and so t > r(Bˆ).

As with M-matrices (see [4, Chapter 6, Lemma (4.1)]), we can now show that the class of M∨,K - matrices is the closure of the class of nonsingular M∨,K - matrices.

Proposition 3.12. Let A = sI − B ∈ R^{n×n}, where B is eventually K-nonnegative. Then A is an M∨,K-matrix if and only if A + εI is a nonsingular M∨,K-matrix for each ε > 0.

Proof: If A + εI = (s + ε)I − B is a nonsingular M∨,K-matrix for each ε > 0, then by Theorem 3.11, s + ε > r(B) for each ε > 0. Letting ε → 0+ gives s ≥ r(B), i.e., A is an M∨,K-matrix. Conversely, let A = sI − B be an M∨,K-matrix, where B is eventually K-nonnegative and s ≥ r(B) ≥ 0. Thus, for every ε > 0, A + εI = (s + ε)I − B with B eventually K-nonnegative and s + ε > s ≥ r(B). That is, A + εI is a nonsingular M∨,K-matrix.

Given an M-matrix A, clearly −A is essentially nonnegative, i.e., −A + αI ≥ 0 for all sufficiently large α ≥ 0. Thus e−tA = e−tα e−t(A−αI) ≥ 0 for all t ≥ 0; that is −A is exponentially nonnegative.

In the following theorem, this property is extended to a special subclass of M∨,K - matrices with exponential nonnegativity replaced by eventual exponential K-positivity.

n×n Theorem 3.13. Let A = sI − B ∈ R be an M∨,K - matrix with B being eventually K-positive

(and thus s ≥ r(B) > 0). Then −A is eventually exponentially K-positive.

Proof: Let A = sI − B, where B = sI − A is eventually K-positive with power index k0. As B^m is K-positive for all m ≥ k0, there exists sufficiently large t0 > 0 so that for all t ≥ t0, the sum of the first k0 − 1 terms of the series

e^{tB} = Σ_{m=0}^∞ t^m B^m / m!

is dominated by the term t^{k0} B^{k0} / k0!, and thus, since π(K) is also a proper cone, e^{tB} is K-positive for all t ≥ t0. It follows that e^{−tA} = e^{−ts} e^{tB} is K-positive for all t ≥ t0. That is, −A is eventually exponentially K-positive, as claimed.

There are several properties of a Z-matrix A that are equivalent to A being an M-matrix. These properties are documented in the often cited Theorems 2.3 and 4.6 in [4]: positive stability, semipos- itivity, inverse nonnegativity and monotonicity among others. In the cone-theoretic generalizations of M-matrices (see [33]), these properties are generalized and shown to play an analogous charac- terizing role. In the following theorems we examine the form and role these properties take in the context of M∨,K - matrices.

Theorem 3.14. Let A = sI − B ∈ R^{n×n}, where B is eventually K-nonnegative and has power index k0 ≥ 0. Let K̂ be the cone defined as K̂ = B^{k0} K. Consider the following conditions:

(i) A is an invertible M∨,Kˆ - matrix.

(ii) s > r(B) (positive stability of A).

(iii) A−1 exists and A−1Kˆ ⊆ K (inverse K-nonnegativity).

(iv) Ax ∈ Kˆ =⇒ x ∈ K (K-monotonicity).

Then (i)⇐⇒(ii)=⇒(iii)⇐⇒(iv). If, in addition, B is not nilpotent, then all conditions (i)-(iv) are equivalent.

Proof:

(i)=⇒(ii). This implication follows by Theorem 3.11 and invertibility of A.

(ii)=⇒(i). It follows from definition of an M∨,K - matrix and Theorem 3.10 (i).

29 (iii)=⇒(iv). Assume (iii) holds and consider y = Ax ∈ Kˆ . As A−1 exists, x = A−1y ∈ K.

(iv)=⇒(iii). Assume (iv) holds. First notice that A must be invertible because if Au = 0 ∈ Kˆ , then u ∈ K; also A(−u) = 0 ∈ Kˆ and so u ∈ −K, that is, as K is pointed, u = 0. Consider now y = A−1Bk0 x, where x ∈ K. Then Ay = Bk0 x ∈ Kˆ and so y ∈ K.

(ii)=⇒(iii). If (ii) holds, then r(B/s) < 1 and so

A^{-1} = (1/s)(I − B/s)^{-1} = (1/s) Σ_{q=0}^∞ B^q / s^q.

Consequently, for all x ∈ K,

A^{-1} B^{k0} x = (1/s) Σ_{q=0}^∞ B^{q+k0} x / s^q ∈ K.

Now suppose that B is not nilpotent, that is, r(B) > 0. To prove that (i)-(iv) are equivalent it is sufficient to show that (iii)=⇒(ii).

(iii)=⇒(ii). By Theorem 3.5, B has the K-Perron-Frobenius property, i.e., there exists nonzero x ∈ K so that Bx = r(B)x. Assume (iii) holds and consider µ = s − r(B) ∈ σ(A) ∩ R. As

Bk0 x = r(B)k0 x and since r(B) > 0, it follows that x ∈ Kˆ and thus A−1x ∈ K. But Ax = µx and so x = µA−1x. It follows that µ > 0.

Remarks

(a) The implication (iii)=⇒(ii) or (i) in Theorem 3.14 is not in general true if B is nilpotent. For example, consider K = R^2_+ and the eventually K-nonnegative matrix

B = [ 1  −1 ; 1  −1 ],

which has power index k0 = 2. Thus K̂ = B^2 R^2_+ = {0}. For any s < 0, A = sI − B is invertible and A^{-1} K̂ = {0} ⊂ R^2_+; however, A is not an M∨,K̂-matrix because its eigenvalues are negative.

(b) It is well known that when an M-matrix is invertible, its inverse is nonnegative. As mentioned earlier, in [15, Theorem 8] it is shown that the inverse of a pseudo M-matrix is eventually positive. In [19, Theorem 4.2] it is shown that if B is an irreducible eventually nonnegative matrix with index_0(B) ≤ 1, then there exists t > r(B) such that for all s ∈ (r(B), t), (sI − B)^{-1} > 0. The situation with the inverse of an M∨,K-matrix A is different. Notice that condition (iii) of Theorem 3.14 is equivalent to A^{-1} B^{k0} ∈ π(K). In general, if A is an invertible M∨,K-matrix, A^{-1} is neither K-nonnegative nor eventually K-nonnegative; for example, if A = I − B, with B = [ 1  −1 ; 1  −1 ] as in remark (a) above, then the (1, 2) entry of (A^{-1})^k is negative for all k ≥ 1.
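The nilpotent example in remarks (a) and (b) is easy to verify numerically (an added sketch assuming NumPy; the loop bound of 10 powers is an arbitrary choice).

import numpy as np

B = np.array([[1.0, -1.0], [1.0, -1.0]])   # nilpotent: B @ B is the zero matrix
print(np.allclose(B @ B, 0))                # True, so B^2 R^2_+ = {0}

A = np.eye(2) - B                           # the matrix A = I - B of remark (b)
Ainv = np.linalg.inv(A)                     # equals [[2, -1], [1, 0]]

P = np.eye(2)
signs = []
for _ in range(10):
    P = P @ Ainv
    signs.append(P[0, 1] < 0)
print(all(signs))                           # True: the (1,2) entry of A^{-k} is negative for k = 1..10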

In the next theorem we present some properties of singular M∨,K - matrices analogous to the properties of singular, irreducible M-matrices found in [4, Chapter 6, Theorem (4.16)].

n×n Theorem 3.15. Let A = sI − B ∈ R be a singular M∨,K - matrix, where B is eventually

K-positive. Then the following hold.

(i) A has rank n − 1.

(ii) There exists a vector x ∈ int K such that Ax = 0.

(iii) If for some vector u, Au ∈ K, then Au = 0 (almost monotonicity).

Proof: As A is singular, by Theorem 3.10 (iii) it follows that s = r(B).

(i) By Theorem 3.3, B has the strong K-Perron-Frobenius property and so r(B) is a simple eigenvalue of B. Thus, 0 = s − r(B) is a simple eigenvalue of A.

(ii) As B has the strong K-Perron-Frobenius property, there exists x ∈ int K such that Bx = r(B)x, i.e., Ax = r(B)x − Bx = 0.

(iii) By Theorem 3.3, B^T also has the strong K^*-Perron-Frobenius property and so there exists z ∈ int K^* such that z^T B = r(B) z^T. Let u be such that Au ∈ K. If Au ≠ 0, then z^T Au > 0. However,

z^T Au = r(B) z^T u − z^T Bu = r(B) z^T u − r(B) z^T u = 0,

a contradiction, showing that Au = 0.

Chapter 4

Semigroups of Linear Operators and

Invariance

4.1 Introduction

As described in [31, Chapter V.4], the spectral properties of positive square matrices motivate the investigation of spectral properties of positive operators defined on complex (Banach) lattices. The goal of this chapter is to highlight the study of eventually positive semigroups of linear operators, in particular by examining the spectral properties of the generators of the semigroups in the context of eventual cone invariance, similar to [7], and possible extensions of the results in [19] and [27]. We examine corresponding Perron-Frobenius type properties for the resolvent operators associated with the generators of the semigroups. In the following sections, E is a non-empty Banach lattice over C. The notions of spectrum and spectral radius are as defined in Section 1.2.

4.2 Operator Semigroups on Ordered Spaces

Definition 4.1. (i) A set X is partially ordered if there exists a binary relation "≤" on X that is reflexive, antisymmetric and transitive.

(ii) A lattice is a partially ordered set (X, ≤) such that sup{x, y} and inf{x, y} exist in X for all elements x, y of X.

(iii) A lattice is order complete (Dedekind complete) if every non-empty subset Y ⊂ X which has an upper bound in X has a least upper bound (or that with lower bound has a greatest lower bound).

(iv) A vector lattice is a partially ordered linear space (X, +, ≤) such that (X, ≤) is a lattice. Under this ordering: x+ = sup{x, 0}, x− = sup{−x, 0} and |x| = sup{x, −x} for any x ∈ X.

(v) Let X be a vector lattice. A subset Y of X is solid if y ∈ Y whenever there is an x ∈ Y such that |y| ≤ |x|, where |·| is as in (iv) with the appropriate ordering in X.

(vi) If Y is any subset of X , the set {y : ∃x ∈ Y, |y| ≤ |x|} is solid. It is the smallest set containing

Y and is called the solid hull of Y .

(vii) A normal subspace (or a band) of X is a solid linear subspace W such that whenever Y ⊆ W and sup(Y ) exists in X , then sup(Y ) ∈ W (i.e. W is an order closed solid linear subspace of X).

(viii) A norm ‖·‖ on a vector lattice is called a lattice norm if it satisfies

|x| ≤ |y| ⇒ ‖x‖ ≤ ‖y‖,  for all x, y ∈ X.

(ix) A Banach lattice X is a Banach space with a partial order (X, ≤) and a lattice norm as defined in (viii).

(x) X+ = {x ∈ X : x ≥ 0} ⊆ X is a cone in X. E.g. if X = C(Ω)-(the space of continuous real valued functions on a compact set Ω), then X+ = C(Ω)+ = {f ∈ C(Ω) : f(x) ≥ 0}, with pointwise ordering and f ≥ g =⇒ f − g ∈ C(Ω)+, see 2.1-(v).

(xi) An operator T : X −→ X is said to be bounded if there exists c > 0 such that ‖Tx‖ ≤ c‖x‖ for all x ∈ X. The operator norm is ‖T‖ = sup_{‖x‖=1} ‖Tx‖ = sup_{x ≠ 0} ‖Tx‖ / ‖x‖.

(xii) A set G is a semigroup if there is an associative binary operation defined on it but the identity and the inverse elements are not necessarily in G.

(xiii) Let X be a real (complex) Banach space, t ∈ R or C a parameter, and L(X) the set of bounded linear operators on X. A one-parameter semigroup of bounded linear operators is a family (T(t))_{t≥0} that satisfies the "functional equation"

(a) T(s + t) = T(s)T(t) for all s, t ∈ R+,  (b) T(0) = I (the identity).   (4.2.1)

(xiv) A C0-semigroup is a strongly continuous one-parameter semigroup {T(t) : t ∈ R+} of bounded linear operators T(t) : X → X on a Banach space X satisfying (4.2.1) and the additional condition that lim_{t→0+} T(t)f = f for every f ∈ X, with respect to the norm on X.

(xv) Semigroups that satisfy the stronger condition: limt→0 kT (t) − Ik = 0 are called uniformly continuous.

Example 4.1 (Matrix Example). If A ∈ R^{n×n}, then T(t) = e^{tA} satisfies (4.2.1) for all s, t ≥ 0, that is: e^{(t+s)A} = e^{tA} e^{sA} and e^{0A} = I. Here (e^{tA})_{t≥0} is the (one-parameter) semigroup generated by A.

Note: ‖e^{tA}‖ ≤ e^{t‖A‖} for all t ≥ 0 in any matrix norm, and T(t) is uniformly continuous.

Theorem 4.2. [10, Bounded Generator, Uniform Continuity]. Every uniformly continuous semigroup (T(t))_{t≥0} of bounded linear operators on a Banach space X is of the form T(t) = e^{tA}, t ≥ 0, for some A ∈ L(X).

(The map T(t) satisfies equation (4.2.1) and solves the differential equation ẋ = Ax uniquely, as in the scalar or matrix case; A is the generator of T(t), and the matrix exponential of Example 4.1 is an example of this.)

Example 4.2. [3, Unbounded Generator, Strong-continuity, see definition 4.1-xiv].

Let E = L^2(0, 1). For each t ≥ 0, define T(t) : L^2(0, 1) → L^2(0, 1) by

(T(t)f)(x) = f(x + t) if x + t ≤ 1, and (T(t)f)(x) = 0 if x + t > 1.

Then T(t1 + t2)f = T(t1)T(t2)f and (T(0)f)(x) = f(x) = I(f)(x). Therefore

T(t1 + t2) = T(t1)T(t2) and T(0) = I (the identity).
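A discretized version of Example 4.2 (an added Python sketch, assuming NumPy; the grid, the test function f and the times 0.2 and 0.3 are arbitrary choices) illustrates the semigroup law T(t1 + t2) = T(t1)T(t2) on sampled data.

import numpy as np

xs = np.linspace(0.0, 1.0, 1001)          # grid for L^2(0, 1)

def T(t, f):
    # Sample (T(t)f)(x) = f(x + t) if x + t <= 1, else 0, on the grid xs.
    y = xs + t
    return np.where(y <= 1.0 + 1e-12, f(np.clip(y, 0.0, 1.0)), 0.0)

f = lambda x: np.sin(2 * np.pi * x) + x ** 2

g = T(0.2, f)                                  # T(0.2)f as grid samples
lhs = T(0.3, lambda x: np.interp(x, xs, g))    # T(0.3) T(0.2) f
rhs = T(0.5, f)                                # T(0.3 + 0.2) f
print(np.max(np.abs(lhs - rhs)) < 1e-9)        # True: the semigroup law holds on the grid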

Remark

The generator A can be found by differentiation: when the derivative exists, Af = f′ = df/dx.

Usually, however, the operator A and the initial or boundary conditions are what are known, while

T (t), the sought solution of the differential equation is unknown. So studying properties of A can facilitate understanding of the semigroup (T (t))t≥0.

As in the case of matrices in finite dimension, the following proposition holds for operators on Banach lattices.

Proposition 4.3. [31, V-4.1]. Let E be a Banach lattice. Then 0 < T ∈ L(E) implies r(T ) ∈ σ(T ).

4.3 Semigroup Generation

Operators are not always directly studied therefore alternative approaches are used to determine the maps T (.): R+ −→ L(E) satisfying the functional equation 4.2.1. In particular , the properties of generators of the semigroup are studied. As in example 4.1, matrix A is the generator of the semigroup (T (t))t≥0, the matrix semigroup.

Matrix Semigroups

Let E = C^n. The space L(E) of all bounded linear operators on E can be identified with C^{n×n}, the set of all complex n × n matrices, and a linear dynamical system on E is given by a matrix-valued function T(·) : R+ −→ C^{n×n} satisfying the functional equation (4.2.1). The variable t can be interpreted as time. In this case the initial state is x0, and after time t the state is given by T(t)x0. In particular, T(t)x0 = e^{tA} x0 := Σ_{k=0}^∞ (t^k A^k / k!) x0, the trajectory of the solution to (d/dt) T(t)x0 = A T(t) x0.

n×n (t+s)A tA sA 0A Proposition 4.4. For s, t ≥ 0, any A ∈ C satisfies e = e e and e = I.

The above proposition provides a typical example when the operator A, the generator, is bounded, see example 4.1.

Remarks

(i) The range of the function t −→ T(t) := e^{tA} is a commutative semigroup of matrices depending continuously on the parameter t ∈ R+. This follows from the property that the map t → T(t) is a homomorphism from the additive semigroup (R+, +) into the multiplicative semigroup C^{n×n}.

(ii) When the parameter t is in R or C, the continuity and functional equation still hold and

(T (t))t ∈ R (or C) now gives a group of linear operators.

(iii) The map T (.): t −→ etA extends to a continuous (or analytic) homomorphism from the additive group (R, +) or (C, +) into the multiplicative group GL(n, R) or GL(n, C) of all invertible

tA n × n matrices. Here (e )t∈ R is the one-parameter group generated by A, [10, p.9].

tA Definition 4.5. [10, Stability]. A one-parameter semigroup (e )t≥0 is defined as stable if under

n×n tA n×n any matrix norm on C , limt→∞ ke k = 0. Since uniform and pointwise convergence on C

tA n coincide , the stability can be defined by the fact limt→∞ ke xk = 0 for each x ∈ C .

tA Theorem 4.6. [10, Classical Liapunov Stability Theorem]. Let (e )t≥0 be the one-parameter

n×n semigroup generated by A ∈ C . Then the following are equivalent:

tA (i) The semigroup is stable, i.e. limt→∞ ke k = 0.

(ii) All eigenvalues of A have a negative real part, i.e. R(λ) < 0, ∀λ ∈ σ(A).

tA The theorem above provides a way to relate properties of A to properties of (e )t≥0. In the case of the abstract Cauchy problem this contributes to an understanding of the solution to T˙ (t) = AT (t).

tA n×n Corollary 4.7. [10, 2.11]. For the semigroup (e )t≥0 generated by the matrix A ∈ C , the following are equivalent :

(a) The semigroup is bounded, i.e. ketAk ≤ M, ∀t ≥ 0 and some M ≥ 1.

(b) All eigenvalues λ of A satisfy R(λ) ≤ 0, and whenever R(λ) = 0, then λ is a simple eigenvalue

(i.e. the Jordan blocks corresponding to λ have size 1).

Infinite Dimensional Systems

Consider a complex Banach space E and the L(E) , that is, a Banach space closed under additive and multiplicative (interpreted as operator composition) operations, of all bounded linear operators on E with the operator norm k.k that satisfies kTSk ≤ kT kkSk ∀T,S ∈ L(E). The

Cauchy function equation is still valid as in equation 4.2.1. The goal is still to describe all maps

T (.): R+ −→ L(E).

4.4 Functional Calculus for Bounded Linear Operators

In this section we look at an alternative approach some scholars have used which does not only analyze the spectrum of a given operator but analyzes the properties of its resolvent as well. We start with the following basic definitions.

Definition 4.8. (i) Let X ≠ {0} be a normed linear space over R or C and let A : X → X be a linear operator. The resolvent set of A is the set ρ(A) of all λ such that the range of λI − A is dense in X and λI − A has a continuous inverse.

(ii) For λ ∈ ρ(A), the operator (λI −A)−1 is called the resolvent operator and is denoted by R(λ, A) or Rλ.

(iii) The spectrum of A is the set σ(A) of all scalar values not in ρ(A). Note that the spectrum could be larger than the point spectrum in the finite dimension case.

(iv) An operator A : X → X is closed if the graph of A , given by (x, A(x)) , is closed in X × X.

Remarks

(i) We assume that the space X is complete and the operator A is closed.

(ii) If λ ∈ ρ(A), the fact that λI − A has a continuous inverse implies that the range of (λI − A) is closed and dense in X.

(iii) When λ ∈ ρ(A), the resolvent operator R_λ belongs to L(X). The set ρ(A) is open in the space of scalars, and R_λ can be written as a power series in λ − λ0 about each point λ0 ∈ ρ(A); the series has coefficients in L(X) and converges in the norm of L(X). The resolvent R_λ can therefore be regarded as an analytic function defined on ρ(A) with values in L(X).

(iv) If X has dimension n and the domain of A equals X, then A can be represented by an n-square matrix. The operator λI − A is also an n-square matrix and det(λI − A) is a polynomial of degree n. The spectrum of A is a set of scalars λ that are roots of the equation det(λI − A) = 0. This equation may have no solution over R; over C, however, σ(A) has at least one element and could have up to n elements.

(v) In finite dimensions, if λ ∈ σ(A) then λ is an eigenvalue of A; that is, there exists a nonzero vector x, called an eigenvector associated with λ, such that Ax = λx and thus (λI − A)x = 0. The null space of (λI − A) is called the eigenspace corresponding to λ. The operator λI − A is invertible if and only if λ is not an eigenvalue of A.

(vi) In infinite dimensions, even when λ is not an eigenvalue of A, the operator λI − A may fail to be invertible. For example, suppose λI − A equals the right shift operator S on a sequence space. Then λ is not an eigenvalue of A, since the only vector satisfying Sx = 0 is the zero vector, yet S = λI − A is not invertible because it is not surjective.

The spectrum may not give complete information about an operator as shown in the following example.

Example 4.3.

Consider the following matrices:

    A = [ 1  2  0 ]      B = [ 3 −2  0 ]      C = [ 3  0  0 ]
        [ 1  2  0 ],         [ 0  0  0 ],         [ 0  0  1 ].
        [ 0  0  0 ]          [ 0  0  0 ]          [ 0  0  0 ]

These are different matrices; A is similar to B, but the similarity does not extend to C, even though all three have the same spectrum {0, 3}. The situation is more complicated for operators in infinite dimensions, where the spectrum alone does not suffice to understand the properties or behaviour of an operator.
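The claims in Example 4.3 can be checked numerically. The sketch below recomputes the common spectrum and then uses the dimension of the null space (the geometric multiplicity of the eigenvalue 0) to detect the Jordan block of size 2 in C; this is only a finite-dimensional illustration of why the spectrum alone is too coarse.

    import numpy as np

    A = np.array([[1, 2, 0], [1, 2, 0], [0, 0, 0]], dtype=float)
    B = np.array([[3, -2, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
    C = np.array([[3, 0, 0], [0, 0, 1], [0, 0, 0]], dtype=float)

    # All three matrices share the spectrum {0, 3}.
    for name, M in (("A", A), ("B", B), ("C", C)):
        print(name, np.sort(np.round(np.linalg.eigvals(M), 10)))

    # The geometric multiplicity of the eigenvalue 0 distinguishes C: A and B have a
    # two-dimensional null space, C only a one-dimensional one, so C carries a Jordan
    # block of size 2 and cannot be similar to A or B.
    for name, M in (("A", A), ("B", B), ("C", C)):
        print(name, 3 - np.linalg.matrix_rank(M))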

Definition 4.9. [10, Spectral Bound]. The spectral bound of A, denoted by S(A), is defined as

S(A) := sup{R(λ) : λ ∈ σ(A)} ∈ [−∞, ∞].

The resolvent map λ −→ R(λ, A) is an analytic function from ρ(A) into L(X). Since the function λ −→ e^{tλ} is analytic on all of C, the operator-valued version of the Cauchy integral formula is given by:

e^{tA} := (1/2πı) ∫_{+∂U} e^{tλ} R(λ, A) dλ    (4.4.2)

for each t ≥ 0, where A ∈ L(X) and U is an open neighborhood of σ(A) with smooth, positively oriented boundary +∂U (see [10]).

As noted in [10], the operator e^{tA} is bounded and does not depend on the particular choice of U.

The functional calculus yields a homomorphism from an algebra of analytic functions into the algebra of bounded linear operators L(X), and from e^{(t+s)λ} = e^{tλ}e^{sλ} for t, s ≥ 0, we get the functional equation for (e^{tA})_{t≥0}. The continuity of the map t −→ e^{tA} ∈ L(X) follows from the continuity of t −→ e^{tλ} in the topology of uniform convergence on compact subsets of C.
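To make formula (4.4.2) concrete in finite dimensions, the following hedged sketch approximates the contour integral by the trapezoidal rule on a circle enclosing σ(A) and compares the result with the matrix exponential; the matrix A, the radius R and the number of nodes N are all illustrative choices of ours.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])     # eigenvalues -1 and -2 (illustrative)
    t = 0.7

    # Positively oriented circle of radius R centred at the origin, enclosing sigma(A).
    R, N = 5.0, 2000
    theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    lam = R * np.exp(1j * theta)
    dlam = 1j * lam * (2.0 * np.pi / N)          # d(lambda) along the circle

    I = np.eye(2, dtype=complex)
    integral = np.zeros((2, 2), dtype=complex)
    for l, dl in zip(lam, dlam):
        integral += np.exp(t * l) * np.linalg.inv(l * I - A) * dl
    approx = integral / (2.0 * np.pi * 1j)

    print(np.allclose(approx.real, expm(t * A), atol=1e-6))   # True; imaginary part ~ 0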

4.5 Positive Semigroups

In this section we describe positive semigroups of linear operators, which are characterized by the positivity of the resolvents of their generators. Daners, Glück and Kennedy in [7] studied strongly continuous semigroups with eventual positivity properties on C(Ω), where Ω is a non-empty compact Hausdorff space. Arendt's earlier work on resolvent positive operators can be found in [1].

We seek to link or establish analogues of some results on nonsingular M-matrices to operators in general. The following definitions and results are mostly from the papers [1], [7] and texts [3], [6],

[10].

Definition 4.10. [6] (i) A real-valued function f is called positive on a compact non-empty topological space Ω if f(x) ≥ 0 for all x ∈ Ω, and we write f ≥ 0. If f ≥ 0 but f ≠ 0 we write f > 0. We say f is strongly positive, written as f ≫ 0, if there exists β > 0 such that f ≥ β1, where 1 is the constant function on Ω with value one.

(ii) A bounded linear operator T on C(Ω) is called strongly positive, denoted by T ≫ 0, if Tf ≫ 0 whenever f > 0; similarly, a linear functional φ : C(Ω) → C is called strongly positive, denoted by φ ≫ 0, if φ(f) > 0 for each f > 0.

(iii) If E is a real Banach lattice, the positive cone in E is the set {x ∈ E : x ≥ 0} and is denoted by E+; see example 2.1-vi.

(iv) If E is an ordered Banach space whose positive cone is generating and normal, an operator A on E is called resolvent positive if there exists w ∈ R such that (w, ∞) ⊂ ρ(A), the resolvent set of A, and R(λ, A) = (λI − A)^{−1} ≥ 0 for all λ > w.

(v) Let (E, ‖·‖) be an ordered Banach space (real or complex) with positive cone E+. Suppose that {T(t)}_{t≥0} is a C0-semigroup of linear operators in E, with generator A. The growth bound of {T(t)}_{t≥0} is the number

w0 := inf{w ∈ R : there exists M ∈ R+ such that ‖T(t)‖ ≤ Me^{wt} for t ≥ 0}
    = lim_{t→∞} (log ‖T(t)‖)/t = inf_{t>0} (log ‖T(t)‖)/t.    (4.5.3)

(vi) The semigroup {T(t)}_{t≥0} is called positive if T(t)E+ ⊆ E+ for every t ≥ 0.

The positivity of (T(t))_{t≥0} can be described in terms of the resolvent R(λ, A) = (λI − A)^{−1} ∈ L(E) of its generator A. Specifically,

R(λ, A) = ∫_0^∞ e^{−λt} T(t) dt,    (λ > w0(A)),    (4.5.4)

where, for the operator A, w0(A) is the growth bound. Conversely, the semigroup can be described by the resolvent using the formula

T(t) = lim_{n→∞} ((n/t) R(n/t, A))^n,    (4.5.5)

and the convergence is in the strong sense.
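Formulas (4.5.4) and (4.5.5) can be spot-checked when the generator is a matrix. The sketch below, with an illustrative matrix A and an arbitrary λ above its growth bound, computes the Laplace transform of T(t) = e^{tA} numerically and compares the approximation ((n/t)R(n/t, A))^n with e^{tA}.

    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import quad

    A = np.array([[-1.0, 0.5],
                  [0.2, -0.8]])     # illustrative generator with negative eigenvalues
    lam = 1.0                       # any lambda larger than the growth bound

    # (4.5.4): R(lambda, A) equals the Laplace transform of e^{tA}, entry by entry.
    R_exact = np.linalg.inv(lam * np.eye(2) - A)
    R_num = np.array([[quad(lambda t, i=i, j=j: np.exp(-lam * t) * expm(t * A)[i, j],
                            0, 50)[0] for j in range(2)] for i in range(2)])
    print(np.allclose(R_exact, R_num, atol=1e-6))   # True

    # (4.5.5): T(t) is recovered from powers of the scaled resolvent.
    t, n = 1.0, 2000
    approx = np.linalg.matrix_power((n / t) * np.linalg.inv((n / t) * np.eye(2) - A), n)
    print(np.allclose(approx, expm(t * A), atol=1e-3))   # close for large n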

From the Perron-Frobenius theorem for positive operators, the spectral radius is an element of the spectrum. The following theorem in [3] is an analogous result for positive semigroups.

Theorem 4.11. [3]. If A is the generator of a positive semigroup on E = C0(X), then S(A) ∈ σ(A) unless S(A) = −∞. In the case X is compact, we always have S(A) > −∞.

When X is not compact or if the semigroup is not positive the spectrum may be empty. For

λ > S(A) the following is a consequence of Theorem 4.11.

Corollary 4.12. [3] Suppose λ0 ∈ ρ(A); then R(λ0, A) is a positive operator if and only if λ0 > S(A). For λ > S(A) we have r(R(λ, A)) = (λ − S(A))^{−1}.

The boundary spectrum, σb(A), of the generator A is given by

σb(A) = σ(A) ∩ {λ ∈ C : R(λ) = S(A)}. (4.5.6)

Proposition 4.13. [6, Proposition 7.1]. The C0-semigroup {T(t)}_{t≥0} is positive if and only if the resolvent R(λ, A) is positive ∀λ > w0 (equivalently, R(λ, A) is positive for all λ > λ0 for some λ0 ≥ w0).

Proof:

Let T(t): E → E and assume that {T(t)}_{t≥0} is positive. If λ > w0, the growth bound, then R(λ, A)x = ∫_0^∞ e^{−λt} T(t)x dt for all x ∈ E, and for x ∈ E+, since E+ is closed, this implies that R(λ, A)x ≥ 0. Conversely, if R(λ, A)x ≥ 0 for all λ > λ0 (for some λ0 ≥ w0) and all x ≥ 0, then it follows from T(t)x = lim_{n→∞} ((n/t) R(n/t, A))^n x, x ∈ E, t > 0, that T(t)x ≥ 0 for all 0 ≤ x ∈ E.

A λ with R(λ) > w0 belongs to the resolvent set ρ(A), and R(λ, A) = ∫_0^∞ e^{−λt} T(t) dt (where the integral is absolutely norm convergent). This implies S(A) ≤ w0, and in general the inequality may be strict; see Theorem 4.11 above and [6, Examples 8.1 and 9.1]. If {T(t)}_{t≥0} is a positive C0-semigroup then the resolvent R(λ, A) has the integral representation shown in the proof of Proposition 4.13 above for all λ with R(λ) > S(A) (the integral need not be absolutely convergent).

Definition 4.14. [7, Definition 4.1]. Let A be a closed real operator on E = C(Ω) and let λ0 be either −∞ or a spectral value of A in R.

(i) The resolvent R(·, A) is called individually eventually (strongly) positive at λ0 if there is a λ2 > λ0 with the following properties: (λ0, λ2] ⊆ ρ(A) and for each f ∈ E+ \ {0} there is a λ1 ∈ (λ0, λ2] such that R(λ, A)f ≥ 0 (≫ 0) for all λ ∈ (λ0, λ1].

(ii) The resolvent R(·, A) is called uniformly eventually (strongly) positive at λ0 if there exists λ1 > λ0 with the following properties: (λ0, λ1] ⊆ ρ(A) and R(λ, A) ≥ 0 (≫ 0) for every λ ∈ (λ0, λ1].

The following is a criterion for uniform eventual positivity of the resolvent at some spectral value

λ0.

Proposition 4.15. [7, Proposition 4.2].

Let A be a closed real operator on E = C(Ω). Let λ0 be −∞ or a spectral value of A in R. Assume that there exists λ1 > λ0 such that (λ0, λ1] ⊆ ρ(A) and R(λ1, A) ≥ 0. Then the following assertions are true.

(i) The resolvent R(·, A) is uniformly eventually positive at λ0. More precisely, R(λ, A) ≥ 0 for all λ ∈ (λ0, λ1].

(ii) If R(λ1, A)^n is strongly positive for some n ∈ N, then R(·, A) is uniformly eventually strongly positive at λ0. More precisely, R(λ, A) ≫ 0 for all λ ∈ (λ0, λ1).

Details of the proof can be found in [7]. Since the resolvent operator is analytic, it can be written as a power series expansion at a point in an open set in ρ(A). That is, if λ0 ∈ ρ(A), R(·, A) can be expanded in the disc {λ ∈ C : |λ − λ0| < λ0 − S(A)} by the power series

R(λ, A) = Σ_{k=0}^∞ (λ0 − λ)^k R(λ0, A)^{k+1}.

Notice that, when R(λ0, A) ≥ 0, the fact that λ0 − λ > 0 yields the positivity of R(λ, A) for all λ ∈ (S(A), λ0). If R(λ0, A)^n is strongly positive for sufficiently large n, then R(λ, A) is strongly positive ∀λ ∈ (S(A), λ0).

The following is an important, often cited, result in the theory of one-parameter semigroups.

Theorem 4.16. (Hille-Yosida)

Let A be an operator on a Banach space E. The following conditions are equivalent:

(i) A is the generator of a strongly continuous semigroup.

(ii) There exist w, M ∈ R such that (w, ∞) ⊂ ρ(A) and ‖(λ − w)^n R(λ, A)^n‖ ≤ M, ∀λ > w and n ∈ N.

Proof: For the details of the proof, see [3], [28] or other textbooks on one-parameter semigroups of linear operators.

The powers n in the theorem above make the condition difficult to estimate or apply. The following result covers the contractive case on a Banach space, where M = 1 and w = 0.

Corollary 4.17. [3]. For an operator A on a Banach space E the following are equivalent:

(i) A is the generator of a strongly continuous contraction semigroup.

(ii) (0, ∞) ⊂ ρ(A) and ‖λR(λ, A)‖ ≤ 1, ∀λ > 0.
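As a finite-dimensional illustration of Corollary 4.17, consider a matrix whose symmetric part is negative semidefinite; with the Euclidean norm such a matrix generates a contraction semigroup, and both conditions of the corollary can then be spot-checked numerically. The matrix below is our own illustrative choice.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[-1.0, 1.0],
                  [-1.0, -2.0]])    # A + A^T = diag(-2, -4) is negative definite

    # (i) contraction property: ||e^{tA}||_2 <= 1 for all t >= 0 (spot check a few t)
    print(all(np.linalg.norm(expm(t * A), 2) <= 1 + 1e-12
              for t in np.linspace(0.0, 10.0, 50)))

    # (ii) resolvent bound: ||lambda R(lambda, A)||_2 <= 1 for lambda > 0 (spot check)
    print(all(np.linalg.norm(lam * np.linalg.inv(lam * np.eye(2) - A), 2) <= 1 + 1e-12
              for lam in (0.1, 1.0, 10.0, 100.0)))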

From the Neumann series (valid for λ > r(A)),

(λI − A)^{−1} = (1/λ)(I − A/λ)^{−1} = (1/λ)(I + A/λ + A²/λ² + A³/λ³ + ···),    (4.5.7)

it can be seen that if A is a positive operator then R(λ, A) is also positive. If, however, A is only eventually positive, in the cone sense, the resolvent need not be positive. In finite dimensions we saw, in Example 3.3 on the ice-cream cone, that eventual cone positivity does not imply eventual exponential cone positivity.
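A small numerical sketch of the positive case of (4.5.7): for an entrywise nonnegative matrix A (an illustrative choice, with K the nonnegative orthant) and λ > r(A), each term of the truncated Neumann series is nonnegative and the truncation converges to the nonnegative resolvent.

    import numpy as np

    A = np.array([[0.2, 0.5, 0.0],
                  [0.1, 0.3, 0.4],
                  [0.0, 0.2, 0.1]])                   # entrywise nonnegative, illustrative
    lam = 2.0 * max(abs(np.linalg.eigvals(A)))        # safely above the spectral radius

    # Truncated Neumann series sum_k A^k / lambda^{k+1}; every term is nonnegative.
    series = sum(np.linalg.matrix_power(A, k) / lam ** (k + 1) for k in range(200))
    R = np.linalg.inv(lam * np.eye(3) - A)

    print(np.allclose(series, R))    # True: the series converges to the resolvent
    print(np.all(R >= 0))            # True: the resolvent is entrywise nonnegative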

Arendt [1] constructed counter-examples showing that resolvent positive operators do not always generate positive semigroups. However, under suitable additional assumptions on the operator and its domain, a resolvent positive operator does generate a positive semigroup. If E is an ordered Banach space with normal cone E+ and the operator A : E → E generates a C0-semigroup T, then A is resolvent positive if and only if T is positive (i.e. T(t)E+ ⊂ E+ for all t ≥ 0); see [2, Chapter 3.11].

We want to develop further characterizations of inverse positive operators by extending the theory of M-matrices, M-type matrices and Mv,K matrices to semigroups of positive linear operators.

The following result is motivated by the discussion in this chapter and is applied to a semigroup (T(t))_{t≥0} generated by a matrix A ∈ R^{n×n}. We consider matrix semigroup positivity in the “eventual” sense, as studied in the previous chapters of this dissertation; that is, we consider cone positivity of (e^{tA})_{t≥t0}, where t0 ≥ 0. The result ties our results in Chapter 3 to (uniform) eventual positivity of the resolvent defined above.

Theorem 4.18. Let A ∈ R^{n×n} and K be a proper cone in R^n. Then there exists t0 ≥ 0 such that e^{tA}K ⊆ K for all t ≥ t0 if and only if there exist an eventually K-positive matrix B and λ0 ≥ 0 such that −A = λ0I − B. Moreover, if the power index of B is m0, then the restriction of the resolvent R(λ, B) to the cone Km := B^m K ⊆ K is K-positive for all m ≥ m0 and all λ > r(B); that is,

R(λ, B)Km ⊆ K for all m ≥ m0 and all λ > r(B).    (4.5.8)

Proof:

By Theorem 3.4 we know that e^{tA}K ⊆ K for all t ≥ t0 if and only if there exists λ0 ≥ 0 such that B := A + λ0I is eventually K-positive; that is, −A = λ0I − B as claimed.

In particular, the spectral radius of B, r(B) > 0, is an eigenvalue of B, since B has the strong Perron-Frobenius property. Let λ > r(B). Then −A + (λ − λ0)I = λI − B is an Mv,K-matrix. By Theorem 3.16, R(λ, B) = (λI − B)^{−1} satisfies (4.5.8).

Remarks

(1) If we impose that B in the above proof is not nilpotent, which amounts to an assumption that A has an eigenvalue different from −λ0, then by the last part of Theorem 3.16, we can assert that (4.5.8) is equivalent to the existence of t0 ≥ 0 such that e^{tA}K ⊆ K for all t ≥ t0.

(2) Notice that if A is an eventually exponentially K-positive matrix and m0 = 0 (i.e., B is K-positive) in the above proof, it follows that A = B − λ0I is an essentially K-positive matrix and R(λ, B) is a K-positive matrix. If, in addition, K is polyhedral then the exponential index of A is also equal to 0, recovering the fact that for polyhedral cones K, exponential K-positivity and essential K-positivity are indeed equivalent.
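The following hedged sketch illustrates Theorem 4.18 for K the nonnegative orthant. The matrix B below is an assumed example constructed for illustration only: it has negative entries, is eventually positive with power index 3, and, taking λ0 = 0 so that A = B, the semigroup e^{tA} is eventually nonnegative while the resolvent is K-positive on Km = B^mK for m ≥ 3 and λ > r(B) = 3.

    import numpy as np
    from scipy.linalg import expm

    # Assumed eventually positive matrix with negative entries (power index 3).
    B = np.array([[2.3, -0.3, 1.0],
                  [-0.3, 2.3, 1.0],
                  [1.0,  1.0, 1.0]])

    # Power index: B and B^2 have negative entries, B^m >= 0 for all m >= 3.
    print([bool(np.all(np.linalg.matrix_power(B, m) >= 0)) for m in range(1, 6)])
    # [False, False, True, True, True]

    # Eventual exponential nonnegativity of A = B: e^{tA} has a negative entry for
    # small t but is entrywise nonnegative once t is large enough.
    print(bool(np.all(expm(0.1 * B) >= 0)), bool(np.all(expm(5.0 * B) >= 0)))  # False True

    # K-positivity of the resolvent on Km = B^m K for m >= m0 = 3 and lambda > r(B) = 3:
    lam, m0 = 3.5, 3
    R = np.linalg.inv(lam * np.eye(3) - B)
    print([bool(np.all(R @ np.linalg.matrix_power(B, m) >= -1e-12))
           for m in range(m0, m0 + 3)])
    # [True, True, True]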

Questions for Further Study

(i) In what other ways can the results on eventually nonnegative matrices in [19],[22], [24], [27] be extended to positive operators in general?

(ii) Explore further possible characterizations of eventual positivity of operator semigroups from the eventual positivity of the generator.

(iii) Examine results related to M-matrices and matrix decompositions and the analogous extensions to operators in general, specifically by applying the notion of splittings to operators that generate positive semigroups.

Bibliography

[1] W. Arendt. Resolvent Positive Operators, Proc. Lond. Math. Soc. (3), 54 (1987) 321 -349.

[2] W. Arendt; C.J.K. Batty; M. Hieber and F. Neubrander. Vector-Valued Laplace Transforms

and Cauchy Problems. Springer, Basel, 2011.

[3] W. Arendt; A. Grabosch; G. Greiner; U. Groh; H.P. Lotz; U. Moustakas; R. Nagel; F. Neubrander and U. Schlotterbeck. One Parameter Semigroups of Positive Operators. Lecture Notes in Mathematics, Vol. 1184 (Eds: A. Dold and B. Eckmann), Springer-Verlag, Berlin, 1986.

[4] A. Berman and R.J. Plemmons. Nonnegative Matrices in the Mathematical Sciences. Second

edition. SIAM, Philadelphia, 1994.

[5] A. Berman, M. Neumann, and R.J. Stern. Nonnegative Matrices in Dynamic Systems. Wiley–

Interscience, New York, 1989.

[6] Ph. Clément; H.J.A.M. Heijmans; S. Angenent; C.J. van Duijn; B. de Pagter. One-Parameter

Semigroups. North-Holland (Elsevier Science Publishers, B.V), Amsterdam, 1987.

[7] D. Daners; J. Glück and B.J. Kennedy. Eventually Positive Semigroups of Linear Operators,

J. Math. Anal. Appl. (2016).

[8] A. Elhashash and D.B. Szyld. Generalizations of M-matrices which may not have a nonnegative inverse. Linear Algebra Appl., 429:2435–2450, 2008.

[9] A. Elhashash and D.B. Szyld. On general matrices having the Perron-Frobenius property.

Electron. J. Linear Algebra, 17:389-413, 2008.

[10] K.J. Engel and R. Nagel. One Parameter Semigroups for Linear Evolution Equations, Graduate Texts in Mathematics, Vol. 194, Springer-Verlag, New York, 2000, with contributions by

Brendle, S.; Pallara, D.; Perazzalo, G.; Rhandi, A.; Romanelli, S.; and Schnaubelt, R.

[11] L. Farina and S. Rinaldi. Positive Linear Systems: Theory and Applications. Wiley–

Interscience, New York, 2000.

[12] S. Friedland. On an inverse problem for nonnegative and eventually nonnegative matrices,

Israel J. Math. 29(1):43–60, 1978.

[13] D. Handelman. Positive matrices and dimension groups affiliated to C*-algebras and topolog-

ical Markov chains. J. Operator Theory, 6:55–74, 1981.

[14] R.A. Horn and C.R. Johnson. Matrix Analysis. Cambridge University Press, 1990.

[15] C.R. Johnson and P. Tarazaga. On matrices with Perron-Frobenius properties and some neg-

ative entries. Positivity, 8(4):327–338, 2004.

[16] A. Kalauch. Positive-off-diagonal Operators on ordered normed spaces and maximum principles

for M-operators. Dissertation, Fakultät Mathematik und Naturwissenschaften der Technischen Universität Dresden, 2006.

[17] M. Kasigwa. Averaging Operators and their applications. Dissertation (MSc.), Makerere Uni-

versity, Kampala, 1996.

[18] E. Kreyszig. Introductory Functional Analysis with Applications. John Wiley and Sons, Inc.,

New York, 1978.

[19] H.T. Le and J.J. McDonald. Inverses of M-matrices created with irreducible eventually non-

negative matrices. Linear Algebra Appl., 419:668-674, 2006.

[20] S. Carnochan Naqvi and J.J. McDonald. The combinatorial structure of eventually nonnegative

matrices. Electron. J. Linear Algebra, 9:255–269, 2002.

[21] S. Carnochan Naqvi and J.J. McDonald. Eventually nonnegative matrices are similar to semi-

nonnegative matrices. Linear Algebra Appl., 381:245–258, 2004.

[22] M. Neumann, R.J. Stern, and M. Tsatsomeros. The reachability cones of essentially nonnega-

tive matrices. Linear Multilinear Algebra, 28:213–224, 1991.

[23] M. Neumann and M. Tsatsomeros. Symbiosis points for linear differential systems. Linear

Multilinear Algebra, 30:49–59, 1991.

[24] D. Noutsos. On Perron-Frobenius property of matrices having some negative entries. Linear

Algebra Appl., 412:132–153, 2005.

[25] D. Noutsos and M. Tsatsomeros. Reachability and holdability of nonnegative states. SIAM J.

Matrix Anal. Appl., 30:700–712, 2008.

[26] D. Noutsos and R.S. Varga. On the Perron-Frobenius theory for complex matrices. Linear

Algebra Appl., 437:1071–1088, 2012.

[27] D.D. Olesky, M.J. Tsatsomeros, and P. van den Driessche. Mv-Matrices : A generalization of

M-matrices based on eventually nonnegative matrices. Electron. J. Linear Algebra, 18:339-351,

2009.

[28] A. Pazy. Semigroups of Linear Operators and Applications to Partial Differential Equations, Springer-Verlag, New York, 1983.

[29] J.E. Peris. A new characterization of inverse-positive matrices. Linear Algebra Appl., 154–155:45–58, 1991.

[30] R.T. Rockafellar. Convex Analysis. Princeton University Press, New Jersey, 1997.

[31] H.H. Schaefer. Banach Lattices and Positive Operators. Springer-Verlag, New York, Heidelberg,

Berlin, 1974.

[32] H. Schneider and M. Vidyasagar. Cross positive matrices. SIAM J. Numer. Anal., 7:508–519,

1970.

[33] R.J. Stern and M.J. Tsatsomeros. Extended M-matrices and subtangentiality. Linear Algebra

Appl., 97:1–11, 1987.

[34] E.A. Taylor and C.D. Lay. Introduction to Functional Analysis (2nd edition), Krieger Pub.

Comp, Malabar, Florida, 1980.

[35] M.J. Tsatsomeros. Reachability of Nonnegative and Symbiotic States. Ph.D. Thesis, University

of Connecticut, Storrs, 1990.

[36] M.R. Weber. On positive invertibility of operators and their decompositions. Math. Nachr.,

282(10):1478–1487, 2009.

[37] B.G. Zaslavsky. Eventually nonnegative realization of difference control systems. Dynamical

Systems and Related Topics, Adv. Ser. Dynam. Systems, 9:573–602, 1991.

[38] B. G. Zaslavsky and J.J. McDonald. A characterization of Jordan canonical forms which are

similar to eventually nonnegative matrices with the properties of nonnegative matrices. Linear

Algebra Appl., 372:253–285, 2003.

[39] B. G. Zaslavsky and B.-S. Tam. On the Jordan form of an irreducible matrix with eventually

non-negative powers. Linear Algebra Appl., 302/303: 303–330, 1999.
