
CHARACTERIZING THE SPECTRAL RADIUS OF A SEQUENCE OF ADJACENCY MATRICES

BY

WILLIAM D. FRIES

A Thesis Submitted to the Graduate Faculty of

WAKE FOREST UNIVERSITY GRADUATE SCHOOL OF ARTS AND SCIENCES

in Partial Fulfillment of the Requirements

for the Degree of

MASTER OF ARTS

Mathematics and Statistics

May 2018

Winston-Salem, North Carolina

Approved By:

Miaohua Jiang, Ph.D., Advisor

Kenneth Berenhaut, Ph.D., Chair

Grey Ballard, Ph.D.

Acknowledgments

I would like to thank Dr. Miaohua Jiang for his guidance and expertise throughout the research experience. This project would not have been possible without him. I would also like to thank Dr. Kenneth Berenhaut, Dr. Grey Ballard, Dr. John Gemmer, Dr. Sarah Raynor and the rest of the Mathematics and Statistics department at Wake Forest University for helping me realize the endless opportunities that studying math can bring.

This work is dedicated to my parents, Karen and Andy, my sister, Margaret, and my brother, Jack: thank you for teaching me that some of the best things in life come from its unpredictable nature.

Table of Contents

Acknowledgments

Abstract

List of Tables

List of Figures

Chapter 1 Introduction
  1.1 Foundations of Network Epidemics
    1.1.1 Continuous Compartmental Epidemic Models
    1.1.2 Epidemiology on Networks
    1.1.3 Existing Bounds on the Largest Eigenvalue
  1.2 Constructing our Problem
    1.2.1 Terms and Definitions
    1.2.2 Defining the Transformation

Chapter 2 Main Results
  2.1 Motivation for the Problem
  2.2 Characterization of Eigenvalues
    2.2.1 The Characteristic Polynomial
    2.2.2 Properties of x(m)
    2.2.3 Special Cases

Chapter 3 Further Results and Applications
  3.1 Corollaries
  3.2 Applications
    3.2.1 Relative Size of Eigenvalue

Chapter 4 Conclusions
  4.1 Future Work

Bibliography

Curriculum Vitae

Abstract

In this paper we explore the introductory theory of modeling epidemics on networks and the significance of the spectral radius in their analysis. We look to establish properties of the spectral radius that would better inform how an epidemic might spread over such a network. We construct a specific transformation of networks that describes a transition from a star network to a path network. For the sequence of adjacency matrices that describes this transition, we show the spectral radius of these graphs can be given by a simple algebraic equation. Using this equation we show how the spectral radius changes monotonically as the star unfolds and establish bounds on the spectral radius for each network.

List of Tables

1.1 Common Compartmental Epidemic Models

3.1 Numerical Approximations for d_{50,k}, 27 < k < 47

3.2 Numerical Approximations for d_{75,k}, 52 < k < 72

3.3 Numerical Approximations for d_{100,k}, 77 < k < 97

List of Figures

1.1 The differential equation ẋ = .1x − .2x² for 0 ≤ x ≤ 1

1.2 The solution curves to ẋ = .1x − .2x² with varying initial conditions

1.3 The differential equation ẋ = −.05x − .2x² for 0 ≤ x ≤ 1

1.4 The solution curves to ẋ = −.05x − .2x² with varying initial conditions

1.5 The network and associated adjacency matrix for A_8(4)

1.6 The network and associated adjacency matrix for B_6

1.7 The n-degree star and associated adjacency matrix

1.8 The network and associated adjacency matrix for A_n(2)

1.9 The network and associated adjacency matrix for A_n(3)

1.10 The network and associated matrix after the (i − 1)st unfolding

1.11 The network and associated matrix after n − 3 unfolding actions

1.12 The network and associated matrix after n − 2 unfolding actions

2.1 Plots of ρ(A_n(k)) as k varies for n = 50, 75, and 100

3.1 Numerical Approximations for ρ(A_50(k))

3.2 Numerical Approximations for ρ(A_75(k))

3.3 Numerical Approximations for ρ(A_100(k))

3.4 A graph of n vs. max_k(p_n(k))

Chapter 1: Introduction

Recent research into the impact of the spectral radius of a network on the spread of an epidemic in such a network [1, 2, 3] motivates our research into the spectral radius of trees. If we consider a population of agents who are susceptible to an epidemic and whose connections are represented by the adjacency matrix A = [a_{ij}] where

$$a_{ij} = \begin{cases} \beta_{ij} & \text{if node } i \text{ is adjacent to node } j \\ \delta_i & \text{if } i = j \\ 0 & \text{otherwise} \end{cases} \tag{1.1}$$

with β_{ij} being the probability that, if agent i is infected, agent j becomes infected in one time-step (Δt), and δ_i being the probability that i recovers in Δt, then there is a strong relationship between the largest eigenvalue of A and the reproduction number, the initial rate at which the epidemic spreads through the network [1, 2, 3, 4, 5, 6].

This prompts the question: how does the network structure affect the largest eigenvalue and, ultimately, how the disease will spread through the population? Research in graph theory has identified the maximal and minimal configurations of trees, along with inequalities describing how the spectral radii of related networks compare [7]. Our question restricts our networks to a specific set of trees, all with n vertices, which can be described as stars each having one long arm [8]. We ask: how does the largest eigenvalue change as we transform our graph from a star to a path, and can we find bounds for the largest eigenvalue?

1.1 Foundations of Network Epidemics

1.1.1 Continuous Compartmental Epidemic Models

Before considering epidemics spreading across networks, it is useful to consider the case when a disease can be transmitted from anyone to anyone. These models are commonly referred to as fully-mixed, and the simplest epidemic models are the SI, SIR, SIS, and SIRS models (Table 1.1). ‘S’ refers to the susceptible, ‘I’ refers to infected, and ‘R’ refers to recovered or removed. The order of the letters describes how a member of the population might move through different stages of a disease. Thus the SIRS model would model a disease in which someone might catch the disease, recover with a brief stage of immunity, and then return to the susceptible population. In our models, we refer to s as the percent of the population that is susceptible, x as the percent of the population that is infected, and r as the percent of the population that is recovered or removed from the system.

SI:
ṡ = −βsx
ẋ = βsx
Substituting s = 1 − x: ẋ = β(1 − x)x

SIR:
ṡ = −βsx
ẋ = βsx − δx
ṙ = δx
Equivalently, eliminating x = 1 − s − r: ṡ = −βs(1 − s − r), ṙ = δ(1 − s − r)

SIS:
ṡ = δx − βsx
ẋ = βsx − δx
Substituting s = 1 − x: ẋ = (β − δ)x − βx²

SIRS:
ṡ = ηr − βsx
ẋ = βsx − δx
ṙ = δx − ηr
Eliminating s = 1 − x − r: ẋ = (β − δ − βr − βx)x

Table 1.1: The dynamical systems for simple epidemic models.

These are commonly referred to as compartmental epidemic models because they

separate the population into compartments. This is to be contrasted with the agent-based epidemic model, which we will discuss later, in which each agent’s transmission and recovery rates are independently determined. The solution to the SI model is well known. Using separation of variables and initial condition x(0) = x_0, the solution can be given as

$$x(t) = \frac{x_0 e^{\beta t}}{1 - x_0 + x_0 e^{\beta t}}$$

The SI model has two clear fixed points, one unstable at x = 0 and one stable at x = 1 for any β > 0. This implies that, if the disease spreads, then eventually almost everyone will become infected.
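The closed-form SI solution above can be checked against a direct numerical integration; a minimal sketch (the values of β and x₀ below are illustrative choices, not taken from the text):

```python
import math

def si_closed_form(t, beta, x0):
    # x(t) = x0 e^{beta t} / (1 - x0 + x0 e^{beta t})
    e = math.exp(beta * t)
    return x0 * e / (1 - x0 + x0 * e)

def si_euler(t, beta, x0, steps=100_000):
    # forward-Euler integration of the reduced SI equation x' = beta (1 - x) x
    dt = t / steps
    x = x0
    for _ in range(steps):
        x += dt * beta * (1 - x) * x
    return x

beta, x0 = 0.3, 0.01
for t in (5.0, 15.0, 40.0):
    print(t, si_closed_form(t, beta, x0), si_euler(t, beta, x0))
```

Both computations agree and, as the fixed-point analysis predicts, tend to the stable fixed point x = 1 as t grows.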

If we consider the SIS model, solutions can be given by:

$$x(t) = \left(1 - \frac{\delta}{\beta}\right)\frac{Ce^{(\beta-\delta)t}}{1 + Ce^{(\beta-\delta)t}} \tag{1.2}$$

with $C = \frac{\beta x_0}{\beta - \delta - \beta x_0}$ [1]. The bifurcation parameter of this equation, commonly referred to as the Reproduction Number, is given by $R_0 = \frac{\beta}{\delta}$, and the bifurcation occurs at $R_0 = 1$ [1]. To apply this theory we consider the example below.

Example 1. A small number of individuals, x_0 essentially 0, are discovered to have a disease with a transmission rate of .2 and a recovery rate of .1, which can be modeled by the SIS model. Clearly the disease will spread and will follow the equation:

$$x(t) = \frac{1}{2}\left(\frac{Ce^{.1t}}{1 + Ce^{.1t}}\right)$$

The differential equation can be seen in Figure 1.1 and a sample of its solution curves can be seen in Figure 1.2.

Figure 1.1: The differential equation ẋ = .1x − .2x² for 0 ≤ x ≤ 1

When analyzing this graph, we notice that we will have a fixed point at x* = .5, which is a stable fixed point. That is, over time, about half of the population will be infected.

Notice that if we change δ to .25 our differential equation becomes Figure 1.3 and our solution curves can be seen in Figure 1.4. Clearly, we have passed the bifurcation value and our epidemic will be eradicated. We can also see that as we approach the bifurcation value (R_0 = 1), the stable fixed point at (β − δ)/β approaches 0. When comparing this to the previous example, we notice we have only one fixed point, at x* = 0. This implies the percent of the population that is infected tends to 0 as time progresses. This is exactly what we would expect because β/δ < 1, which implies no epidemic occurs.
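The two SIS regimes just discussed can be reproduced numerically; a brief sketch using forward-Euler integration (the time horizon and step count are arbitrary choices):

```python
def sis_euler(t, beta, delta, x0, steps=200_000):
    # forward-Euler integration of the reduced SIS equation
    # x' = (beta - delta) x - beta x^2
    dt = t / steps
    x = x0
    for _ in range(steps):
        x += dt * ((beta - delta) * x - beta * x * x)
    return x

# beta = .2, delta = .1: R0 = 2 > 1, stable fixed point at (beta - delta)/beta = .5
print(sis_euler(400.0, 0.2, 0.1, 0.01))
# beta = .2, delta = .25: R0 = .8 < 1, the epidemic is eradicated
print(sis_euler(400.0, 0.2, 0.25, 0.01))
```

The first trajectory settles near .5, the second decays toward 0, matching Example 1 and the bifurcation discussion.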

Obviously these systems can become significantly more complicated in higher dimensions when we allow for multiple disease stages such as stages of contagious levels and dormant times in a disease. Despite having these extensions, there are some drawbacks to any fully-mixed model.

Figure 1.2: The solution curves to ẋ = .1x − .2x² with varying initial conditions

One significant assumption that this fully-mixed compartmental model makes is that the probability of anyone spreading the disease to anyone else in the population is equal. However, this assumption does not allow us to accurately model many epidemic situations. For instance, there is a much higher likelihood that diseases are transmitted among family members or a group of friends than among randomly chosen strangers within the community. We would then want to design a system that appropriately weights transmission rates. A logical representation of this is a weighted adjacency matrix for a network that represents the community. To address this assumption, we consider the dynamical system acting on a network. This will allow the eventual extension into the agent-based epidemic model.

1.1.2 Epidemiology on Networks

The network extension of the simplified model takes into account the interactions between people and their ability to transfer the epidemic across the network. For this, we want to consider the dynamics on the network as time passes. To do this, we define the system with s_i denoting agent i's probability of being susceptible and x_i denoting agent i's probability of being infected. We will start by discussing the extensions of these models to networks before discussing agent-based models.

Figure 1.3: The differential equation ẋ = −.05x − .2x² for 0 ≤ x ≤ 1

In the most basic (SI) model, we track the change in probability using the system of equations:

$$\dot{s}_i = -\beta s_i \sum_{j \in J} A_{ij} x_j \tag{1.3}$$
$$\dot{x}_i = \beta s_i \sum_{j \in J} A_{ij} x_j \tag{1.4}$$

where β is defined as before and A_{ij} is an element of the unweighted, undirected adjacency matrix [1]. Note that, with epidemics on networks, we only allow i to become infected by one of its neighbors, which is the reason we include the adjacency matrix. Using the adjacency matrix forces the probability that agent i will be infected by a non-adjacent member of the network to be 0. Using this construction, and ignoring the quadratic term for small initial outbreaks, we can say

$$\dot{x} = \beta A x \tag{1.5}$$

where x is a vector with elements x_i.

Figure 1.4: The solution curves to ẋ = −.05x − .2x² with varying initial conditions

Using this, writing x as a linear combination of the eigenvectors of the adjacency matrix, we can say that

$$x(t) = \sum_{r=1}^{n} a_r(0)e^{\beta\lambda_r t} v_r \tag{1.6}$$

where $a_r(0)e^{\beta\lambda_r t}v_r$ is a solution for a particular element of the linear combination. We notice that these terms are dominated exponentially by the largest eigenvalue, allowing us to say that

$$x(t) \sim e^{\beta\rho(A)t}v_1 \quad [1] \tag{1.7}$$

where λ_1 = ρ(A) is the largest eigenvalue of A. Newman’s argument above illustrates the significance of the largest eigenvalue. No longer does the percent of the population who are infected depend only on the transmission rate (and in extension to other models, recovery rate); it is also intrinsically related to the structure of the network and how that affects the largest eigenvalue and associated eigenvector.
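The dominant-eigenvalue approximation in Equation 1.7 can be checked directly on a small example; the sketch below uses a 5-node star, an illustrative network not taken from the text:

```python
import numpy as np

n = 5
A = np.zeros((n, n))
A[0, 1:] = 1          # node 0 is the hub of the star
A[1:, 0] = 1

beta, t = 0.1, 8.0
evals, evecs = np.linalg.eigh(A)    # A is symmetric, so the spectrum is real
rho, v1 = evals[-1], evecs[:, -1]   # largest eigenvalue and its eigenvector

x0 = np.full(n, 0.01)
# exact solution of the linearized system x' = beta A x, via the eigenbasis
exact = sum(np.exp(beta * lam * t) * (v @ x0) * v
            for lam, v in zip(evals, evecs.T))
# dominant-term approximation x(t) ~ e^{beta rho t} (v1 . x0) v1
approx = np.exp(beta * rho * t) * (v1 @ x0) * v1
rel_err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)

print(rho)       # sqrt(n - 1) = 2 for the 5-node star
print(rel_err)   # small: the largest eigenvalue dominates
```

The relative error of the one-term approximation is on the order of a percent here, and it shrinks further as t grows, since the subdominant modes decay relative to $e^{\beta\rho(A)t}$.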

Notice that, because those infected cannot recover, any transmission of the disease

7 will spread to everyone, just as in the fully-mixed model. We now consider the case when recovering is possible.

When extending the SIR model to networks, the system of equations becomes

$$\dot{s}_i = -\beta s_i \sum_j A_{ij} x_j \tag{1.8}$$
$$\dot{x}_i = \beta s_i \sum_j A_{ij} x_j - \delta x_i \tag{1.9}$$
$$\dot{r}_i = \delta x_i. \tag{1.10}$$

We can show that this system has a solution

$$x(t) = \sum_{r=1}^{n} a_r(0) v_r e^{(\beta\lambda_r - \delta)t} \quad [1]. \tag{1.11}$$

Again this illustrates the significance of the leading eigenvalue and eigenvector and their ability to approximate the solution:

$$x(t) \sim e^{(\beta\rho(A)-\delta)t}v_1 \tag{1.12}$$

Notice that if δ > βρ(A), then x will exponentially decay. That is, if the recovery rate is large enough, the disease will not spread throughout the network. This observation gives us the bifurcation value of β/δ = 1/ρ(A). As before, if β/δ > 1/ρ(A), then the epidemic will grow. This also leads to the fact that for small values of ρ(A) it is difficult for a disease to spread throughout the network, and for large values of ρ(A) the opposite is true.

However, this modeling still assumes that transmission and recovery rates are constant and independent of the agent. More accurate models can be created using agent-based epidemic modeling [2]. As described in the introduction, each directed edge on the network is given an independent probability which models the likelihood that the infection will spread through said contact. However, the assumption of independent probabilities can be an over-simplification and unrealistic [3]. It would make sense that, when computing the probability of adjacent agents i and j being infected at time t, we would need to consider a covariance term that accounts for their dependence. That is, if agent i is infected, it is more likely that agent j is infected as well. In either the independent or dependent case, we use a weighted adjacency matrix [w_{ij}], similar to that in Equation 1.1, where w_{ij} is the contact level between agents i and j, and a system of Markov chains to model the disease's effect on the community [2].

Under the assumption of independence, eigenvalue analysis has shown that if A, the Jacobian of a discrete dynamical system evaluated at 0, satisfies ρ(A) < 1, then 0 is a stable fixed point and is referred to as locally asymptotically stable. Similarly, if ρ(A) > 1 then the fixed point is unstable [2]. This implies that if ρ(A) < 1 then the epidemic will die off, and if ρ(A) > 1 then it will not. We can then use ρ(A) to model the rate at which the epidemic will be eradicated from the network. If we drop the mutual-independence assumption, then ρ(A) no longer perfectly describes the rate at which the epidemic is eradicated. Rather, it becomes an upper bound for this rate [3]. Thus it could be the case that ρ(A) overestimates the rate at which the disease will initially spread.

While this modeling can be extremely effective in disease tracking [2, 4], it harbors too many complexities for our initial problem. We will therefore first consider adjacency matrices, as these are among the simplest methods of displaying network structure. Through the eigenvalue analysis of the simplified problem, we lay the foundations for

9 development of an agent-based model with non-constant transmission and recovery rates.

1.1.3 Existing Bounds on the Largest Eigenvalue

Construction of the dynamics above illustrates that an understanding of spectral analysis of graphs will be beneficial to understanding epidemic dynamics on a network. There are many known results in graph theory which indicate that the largest eigenvalue depends on graph structure [2, 7, 9]. We will look primarily at results that pertain to bounds on the largest eigenvalue.

For symmetric positive semi-definite matrices A and B, if A < B then ρ(A) < ρ(B), where ρ(A) denotes the largest eigenvalue of A [2]. We know, given a graph with n vertices and m edges, that

$$\rho(G) \leq \left(\frac{2m(n-1)}{n}\right)^{\frac{1}{2}} \quad [7]. \tag{1.13}$$

Similarly we know that for any graph G, δ(G) ≤ ρ(G) ≤ ∆(G) where δ(G) and ∆(G) are the minimum and maximum node degrees respectively [7].

Another potential bound for ρ(A) can be established through the Gershgorin Circle Theorem, which also allows us to generate bounds on the largest eigenvalue of the matrix based on row-sums. The Theorem states that for a complex n × n matrix A, if we define r_i to be the sum of the absolute values of the off-diagonal entries of row i, then for all j = 1, . . . , n, λ_j ∈ D(a_{ii}, r_i) for some i = 1, . . . , n [10].

If we now apply this theorem to graph theory: if no node is self-adjacent, then every diagonal entry of our adjacency matrix will be 0. Thus we can now say that λ_j ∈ B(0, R) where R = max_i r_i. We will see in Section 1.2 how this pertains to our problem in bounding the largest eigenvalue of our constructed matrices.
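All of the bounds quoted so far are easy to sanity-check numerically; a sketch on a small hand-picked tree (the tree itself is an arbitrary example, not from the text):

```python
import numpy as np

# a small tree: path 1-2-3 with leaves 4, 5, 6 attached to node 3 (0-based below)
edges = [(0, 1), (1, 2), (2, 3), (2, 4), (2, 5)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1

rho = max(abs(np.linalg.eigvalsh(A)))
degrees = A.sum(axis=1)
m = len(edges)

edge_bound = (2 * m * (n - 1) / n) ** 0.5   # Equation 1.13
gershgorin = degrees.max()                   # max row sum; diagonal is zero
print(rho, degrees.min(), degrees.max(), edge_bound, gershgorin)
```

All three bounds hold here: δ(G) ≤ ρ(G) ≤ Δ(G), ρ(G) ≤ √(2m(n−1)/n), and the Gershgorin discs give ρ(G) ≤ max_i r_i.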

We can also consider subgraphs and their associated adjacency matrices to illustrate some of the advantages of considering simplified graphs. If we let A and B be two adjacency matrices and A + B denote the adjacency matrix of the combination of these two graphs, then we know that

$$\rho(A + B) \leq \rho(A) + \rho(B) \quad [7]. \tag{1.14}$$

This allows us to potentially combine graphs of known spectral radius to find other eigenvalue bounds.
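A quick numerical sketch of this subadditivity, combining two edge-disjoint graphs on the same vertex set (the particular graphs are illustrative choices):

```python
import numpy as np

def rho(M):
    return max(abs(np.linalg.eigvalsh(M)))

n = 5
A = np.zeros((n, n))            # the path 1-2-3-4-5
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1
B = np.zeros((n, n))            # two extra edges at the middle node: {3,1}, {3,5}
B[2, 0] = B[0, 2] = 1
B[2, 4] = B[4, 2] = 1

# rho(path) = 2cos(pi/6) = sqrt(3), rho(B) = sqrt(2), and the union obeys 1.14
print(rho(A), rho(B), rho(A + B))
```

Since the edge sets are disjoint, A + B is again a genuine 0/1 adjacency matrix, so Equation 1.14 bounds the spectral radius of the combined graph by known quantities.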

1.2 Constructing our Problem

1.2.1 Terms and Definitions

Let Gn(k) be a tree with n nodes where node k is the center of a star with n − k + 1 pendants and one of those pendants has length k. Let En(k) be the edge set for

Gn(k). Then

$$E_n(k) = \{e(1, 2), e(2, 3), \ldots, e(k-1, k), e(k, k+1), e(k, k+2), \ldots, e(k, n)\} \tag{1.15}$$

We will call A_n(k) the adjacency matrix of G_n(k) and we will denote the characteristic polynomial of A_n(k) as P_n(k). It should be noted that in all matrix representations, blank entries represent a value of 0.

Example 2. Consider n = 8 and k = 4; then Figure 1.5 gives the graph and adjacency matrix of G_8(4).

11 (5) 0 1  1 0 1    (6)  1 0 1     1 0 1 1 1 1 (1) (2) (3) (4)    1 0     1 0  (7)    1 0  1 0 (8)

Figure 1.5: The network and associated adjacency matrix for A8(4)

Then we can calculate the characteristic polynomial finding that

$$P_8(4) = \lambda^8 - 7\lambda^6 + 9\lambda^4 \tag{1.16}$$
$$= \lambda^4(\lambda^4 - 7\lambda^2 + 9) \tag{1.17}$$
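Equation 1.16 can be reproduced numerically with NumPy's `poly`, which returns characteristic-polynomial coefficients for a square matrix; the builder below follows the edge set of Equation 1.15 with 0-based indices (the helper name is ours, not the document's):

```python
import numpy as np

def A_matrix(n, k):
    # adjacency matrix of G_n(k): path 1-2-...-k plus edges from k to k+1,...,n
    A = np.zeros((n, n))
    for i in range(k - 1):                 # path edges e(i, i+1)
        A[i, i + 1] = A[i + 1, i] = 1
    for j in range(k, n):                  # star edges e(k, j)
        A[k - 1, j] = A[j, k - 1] = 1
    return A

coeffs = np.round(np.poly(A_matrix(8, 4))).astype(int)
print(coeffs)   # coefficients of lambda^8 - 7 lambda^6 + 9 lambda^4
```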

We will now define B_n := A_n(n−1). This is a path with n nodes. The associated adjacency matrix is commonly referred to as a tridiagonal matrix and has known eigenvalues. We will say Q_n is the characteristic polynomial of B_n. The solutions to

Q_n = 0 are of the form

$$\lambda_k = 2\cos\left(\frac{k\pi}{n+1}\right), \quad 1 \leq k \leq n \quad [11]. \tag{1.18}$$

This tells us that $\rho(B_n) = 2\cos\left(\frac{\pi}{n+1}\right)$.

Example 3. Consider B6. Figure 1.6 is the graph and adjacency matrix of B6.

0 1  1 0 1     1 0 1  (1) (2) (3) (4) (5) (6)    1 0 1     1 0 1 1 0

Figure 1.6: The network and associated adjacency matrix for B6

Then by Equation 1.18, $\rho(B_6) = 2\cos\left(\frac{\pi}{7}\right) \approx 1.8$.
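Equation 1.18 is easy to confirm numerically for this example; a brief sketch:

```python
import math
import numpy as np

n = 6
B = np.zeros((n, n))
for i in range(n - 1):                    # tridiagonal path adjacency matrix
    B[i, i + 1] = B[i + 1, i] = 1

computed = np.sort(np.linalg.eigvalsh(B))
known = np.sort([2 * math.cos(k * math.pi / (n + 1)) for k in range(1, n + 1)])
print(computed)
print(known)
print(2 * math.cos(math.pi / 7))          # rho(B_6) ~ 1.80194
```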

1.2.2 Defining the Transformation

We will start by defining the transformation. We are considering an unfolding motion from a star to a path. To move from An(k) to An(k + 1), we lengthen the path by 1 edge and lessen the degree of the star by 1. We then center the star at k +1. This can be seen through the set notation of Equation 1.15 and will be visualized in several examples below.

Example 4. Notice that the transformation from An(1) to An(2) is trivial and rep- resents the same network for all n. Consider An(1), for which our graph and matrix representation can be seen in Figure 1.7. According to the description above, An(2) can be seen in Figure 1.8. Notice that in this case, our transformation amounts to a simple relabeling of nodes (1) and (2).

(4) (3) (5) 0 1 1 1 ... 1  1 0    1 0    (2) (1) (6)  ...  1  .  . 0  (n) ... 1 0 ... Figure 1.7: An n-degree star network with the associated n × n adjacency matrix

13 (4) (3) (5) 0 1 0 0 ... 0  1 0 1 1 ... 1    0 1 0    (1) (2) (6)  ...  0 1  . .  . . 0  (n) ... 0 1 0 ...

Figure 1.8: The network and associated adjacency matrix for An(2)

However, if we consider every iteration after this, the structure of our network changes. We can see in Figure 1.9 that the transformation from An(2) to An(3) is not trivial. Rather, we begin to see the path take shape. Continuing this process for i iterations, our network structure becomes Figure 1.10. We continue this process until we reach An(n − 2) (Figure 1.11) and finally An(n − 1) = Bn (Figure 1.12).

(4) 0 1  (5) 1 0 1     1 0 1 ... 1  .   (1) (2) (3) .  ...  .  1   .  (n − 1)  . 0  1 0 (n)

Figure 1.9: The network and associated adjacency matrix for An(3)

14 0 1  (i + 1) 1 0 1   .   1 0 ..  (i + 2)    . .   .. .. 1    (1) (2) ... (i) .  1 0 1 1 ... 1  .    1 0    (n − 1)  1 0     . .   . ..  (n) 1 0 Figure 1.10: The network and associated matrix after the i − 1 unfolding

0 1  1 0 1  (n-1)    ...   1 0   . .  (1) (2) (3) ... (n-2)  .. .. 1     1 0 1 1    (n)  1 0  1 0 Figure 1.11: The network and associated matrix after n − 3 unfolding actions

0 1  1 0 1     1 0 1    (1) (2) ... (n − 1) (n)  ......   . . .     1 0 1  1 0

Figure 1.12: The network and associated matrix after n − 2 unfolding actions
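The unfolding can be expressed directly on the edge sets of Equation 1.15; a small sketch printing each step for n = 6 (the helper name is a hypothetical of ours):

```python
def edge_set(n, k):
    # E_n(k): path edges e(1,2),...,e(k-1,k) plus star edges e(k,k+1),...,e(k,n)
    path = [(i, i + 1) for i in range(1, k)]
    star = [(k, j) for j in range(k + 1, n + 1)]
    return path + star

n = 6
for k in range(2, n):   # A_n(2) -> A_n(3) -> ... -> A_n(n-1) = B_n
    print(k, edge_set(n, k))
```

At each step the path gains one edge and the star loses one pendant; the final step k = n − 1 produces exactly the path B_n.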

Chapter 2: Main Results

2.1 Motivation for the Problem

In this section we will consider eigenvalue bounds that can be produced from results in Section 1.1.3 and explore numerical examples that illustrate the later-proven results.

The loosest bound that we can create on the spectral radius of our trees comes from Equation 1.13. Clearly, if a tree $T_n$ has n vertices, it has n − 1 edges. Given Equation 1.13, we then know that

$$\rho(T_n) \leq \left(\frac{2(n-1)(n-1)}{n}\right)^{\frac{1}{2}} \tag{2.1}$$
$$= \frac{\sqrt{2}(n-1)}{\sqrt{n}} \tag{2.2}$$
$$= \sqrt{2}\left(\sqrt{n} - \frac{1}{\sqrt{n}}\right) \tag{2.3}$$

Similarly, we can use the Gershgorin Circle Theorem to create a bound on our spectral radius. We need to consider the largest row sum of A_n(k), which occurs in row k. By construction, $R_k = n - k + 1$ where $R_k = \sum_{i=1}^{n} a_{k,i}$. Since each diagonal element of the adjacency matrix is 0, we know that every eigenvalue, including the largest, is contained within B(0, n − k + 1). This implies that

$$\rho(A_n(k)) < n - k + 1 \tag{2.4}$$

We can combine Equations 2.3 and 2.4 to see which gives a tighter bound on our

16 eigenvalue. We can see, through simple algebra, that

$$\sqrt{2}\left(\sqrt{n} - \frac{1}{\sqrt{n}}\right) \leq n - k + 1 \tag{2.5}$$

when $n > \sqrt{2(3+2\sqrt{2})} = 2 + \sqrt{2} > 2$. Accordingly, for n > 3, Equation 2.4 is a stricter bound.
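Both bounds can be compared against the true spectral radius; an illustrative sketch for n = 10 (the bound labels refer to Equations 2.3 and 2.4, and the matrix builder is our own helper):

```python
import math
import numpy as np

def A_matrix(n, k):
    # adjacency matrix of G_n(k), with 0-based indices
    A = np.zeros((n, n))
    for i in range(k - 1):
        A[i, i + 1] = A[i + 1, i] = 1
    for j in range(k, n):
        A[k - 1, j] = A[j, k - 1] = 1
    return A

n = 10
for k in range(2, n):
    rho = max(abs(np.linalg.eigvalsh(A_matrix(n, k))))
    eq23 = math.sqrt(2) * (math.sqrt(n) - 1 / math.sqrt(n))  # tree edge bound
    eq24 = n - k + 1                                          # Gershgorin bound
    print(k, round(rho, 4), round(eq23, 4), eq24)
```

Both bounds hold for every k; their relative tightness varies along the unfolding.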

Our results, while showing monotonicity of the eigenvalue, allow us to create a stricter bound on the largest eigenvalue by restricting the matrix structure.

With monotonicity as our final goal, it is advisable to explore examples to properly motivate an exploration into the theory.

Example 5. For n = 50, 75, and 100 and 2 ≤ k ≤ n − 1, the largest eigenvalue of

An(k) monotonically decreases in k.

Figure 2.1: Plots of the ρ(An(k)) as k varies for n = 50, 75, and 100.

Using the eigs command in Matlab, we are able to plot the largest eigenvalues for each of these cases. Figure 2.1 shows clear monotonicity of the eigenvalues. Our goal is to generalize monotonicity to all n, not just these specific cases.
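The Matlab experiment can be reproduced with NumPy; a sketch for n = 50 (the other values of n behave the same way, and the matrix builder is our own helper):

```python
import numpy as np

def A_matrix(n, k):
    # adjacency matrix of G_n(k), with 0-based indices
    A = np.zeros((n, n))
    for i in range(k - 1):
        A[i, i + 1] = A[i + 1, i] = 1
    for j in range(k, n):
        A[k - 1, j] = A[j, k - 1] = 1
    return A

n = 50
radii = [max(abs(np.linalg.eigvalsh(A_matrix(n, k)))) for k in range(2, n)]
print(radii[0])                                        # sqrt(49) = 7 at k = 2
print(radii[-1])                                       # just below 2 at k = n - 1
print(all(a > b for a, b in zip(radii, radii[1:])))    # strictly decreasing
```

The sequence starts at the star value √(n−1), ends at the path value 2cos(π/(n+1)), and is strictly decreasing in between.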

2.2 Characterization of Eigenvalues

Ultimately, we will demonstrate that for fixed n ∈ ℕ, the largest eigenvalue of A_n(k) is monotonically decreasing in k and, for n ≥ 7, bounded by

$$\frac{m + 2 + \sqrt{m^2 - 4}}{\sqrt{2\left(m + \sqrt{m^2 - 4}\right)}} < \rho(A_n(k)) < \sqrt{m} + \frac{1}{\sqrt{m}} \tag{2.6}$$

where m = n − k − 1. In order to do this, we will consider the lemmas below to construct our final proof. Our approach will be as follows:

1. Finding a simple algebraic equation to describe P_n(k), the characteristic polynomial of A_n(k).

2. Proving properties of the roots of P_n(k).

3. Handling special cases which do not align with the general argument.

2.2.1 The Characteristic Polynomial

Lemma 1. $\rho(A_n(k)) > 2$ if and only if $x^{n-m} = \frac{mx-1}{m-x}$ has a solution for x > 1. If $x(m) \in (1, m)$ is the solution to $x^{n-m} = \frac{mx-1}{m-x}$, then $\rho(A_n(k)) = \sqrt{x(m)} + \frac{1}{\sqrt{x(m)}}$ where m = n − k − 1.

n−m mx−1 Proof. We will start by arguing that if ρ(An(k)) > 2 then x = m−x has a solution x > 1. We start by considering the charpoly(An(k)) = Pn(k). To generate a recursive

18 relation on Pn(k), we start by taking the cofactor expansion along the last row of

(λIn − An(k)). This gives us

$$\det(\lambda I_n - A_n(k)) = \begin{vmatrix}
\lambda & -1 & & & & & \\
-1 & \ddots & \ddots & & & & \\
 & \ddots & \lambda & -1 & & & \\
 & & -1 & \lambda & -1 & \cdots & -1 \\
 & & & -1 & \lambda & & \\
 & & & \vdots & & \ddots & \\
 & & & -1 & & & \lambda
\end{vmatrix} \tag{2.7}$$

$$= (-1)^{n+k+1}\begin{vmatrix}
\lambda & -1 & & & & & \\
-1 & \ddots & \ddots & & & & \\
 & \ddots & \lambda & -1 & & & \\
 & & -1 & \lambda & -1 & \cdots & -1 \\
 & & & -1 & \lambda & & \\
 & & & \vdots & & \ddots & \\
 & & & -1 & & & 0
\end{vmatrix} + \lambda\begin{vmatrix}
\lambda & -1 & & & & & \\
-1 & \ddots & \ddots & & & & \\
 & \ddots & \lambda & -1 & & & \\
 & & -1 & \lambda & -1 & \cdots & -1 \\
 & & & -1 & \lambda & & \\
 & & & \vdots & & \ddots & \\
 & & & -1 & & & \lambda
\end{vmatrix} \tag{2.8}$$

Notice that the second (n − 1) × (n − 1) matrix in Equation 2.8 is exactly (λI −

An−1(k)). Thus, when we expand the first term of Equation 2.8 along the last row we get

$$= (-1)^{n+k+1}(-1)^{n+k}\begin{vmatrix}
\lambda & -1 & & & & & \\
-1 & \ddots & \ddots & & & & \\
 & \ddots & \lambda & -1 & & & \\
 & & -1 & \lambda & & & \\
 & & & & \lambda & & \\
 & & & & & \ddots & \\
 & & & & & & \lambda
\end{vmatrix} + \lambda\,\det(\lambda I - A_{n-1}(k)) \tag{2.9}$$

Notice that the matrix in Equation (2.9) is a block matrix that can be divided at the (k − 1)st row. Thus the determinant can be written as the product of the determinants of the blocks. As in Example 3, we write B_{k} for the path whose adjacency matrix is tridiagonal. Then we can achieve the following equation:

$$P_n(k) = \lambda\,\det(\lambda I - A_{n-1}(k)) - \det(\lambda I - B_{k-1})\,\det(\lambda I) \tag{2.10}$$

If we let Q_k be the characteristic polynomial of the tridiagonal matrix B_k, then

$$P_n(k) = \lambda P_{n-1}(k) - Q_{k-1}\lambda^{n-k-1} \tag{2.11}$$
$$= \lambda\left[\lambda P_{n-2}(k) - Q_{k-1}\lambda^{n-k-2}\right] - Q_{k-1}\lambda^{n-k-1} \tag{2.12}$$
$$= \lambda^2 P_{n-2}(k) - 2Q_{k-1}\lambda^{n-k-1} \tag{2.13}$$

Taking a total of n − k iterations we arrive at P_n(k) in terms of the Q_k's:

$$P_n(k) = \lambda^{n-k-1}\left[\lambda Q_k - (n-k)Q_{k-1}\right] \tag{2.14}$$

We want to find the largest root of λQ_k − (n − k)Q_{k−1}. We must first solve the recurrence Q_n = λQ_{n−1} − Q_{n−2} with initial values Q_1 = λ and Q_2 = λ² − 1. Its solution is known to be the nth degree Chebyshev polynomial of the second kind [11]. We will rederive this formula with different terms for help in later algebra. Since ρ(A_n(k)) > 2 and we are only concerned with the largest eigenvalue, we can make the substitution λ = b + 1/b. We will solve the difference equation using this substitution for advantages in later analysis. Considering a cofactor expansion of B_{k+1}, we have

$$Q_{k+1} = \lambda Q_k - Q_{k-1} \tag{2.15}$$
$$= \left(b + \frac{1}{b}\right)Q_k - Q_{k-1} = bQ_k + \frac{1}{b}Q_k - Q_{k-1}$$

so that

$$Q_{k+1} - bQ_k = \frac{1}{b}\left(Q_k - bQ_{k-1}\right)$$

Applying the same step repeatedly,

$$Q_{k+1} - bQ_k = \left(\frac{1}{b}\right)^{k-1}\left(Q_2 - bQ_1\right) = \left(\frac{1}{b}\right)^{k-1}\left(\lambda^2 - 1 - b\lambda\right) = \left(\frac{1}{b}\right)^{k+1}$$

and therefore

$$Q_{k+1} = \left(\frac{1}{b}\right)^{k+1} + bQ_k \tag{2.16}$$

We can do this because $\lambda^2 - b\lambda - 1 = \frac{1}{b^2}$. Because we know that Q_1 = λ and Q_2 = λ² − 1, the same identity gives $Q_k = \left(\frac{1}{b}\right)^k + bQ_{k-1}$. Substituting this into our equation we get

$$Q_{k+1} = \left(\frac{1}{b}\right)^{k+1} + b\left[\left(\frac{1}{b}\right)^{k} + bQ_{k-1}\right] = \left(\frac{1}{b}\right)^{k+1} + \left(\frac{1}{b}\right)^{k-1} + b^2 Q_{k-1} \tag{2.17}$$

If we continue iterating for k − 2 steps we get

$$Q_{k+1} = \left(\frac{1}{b}\right)^{k+1} + \left(\frac{1}{b}\right)^{k-1} + \left(\frac{1}{b}\right)^{k-3} + \cdots + \left(\frac{1}{b}\right)^{5-k} + b^{k-1}Q_2 \tag{2.18}$$

We can factor out and apply the geometric series to get the following:

$$Q_{k+1} = \left(\frac{1}{b}\right)^{k+1}\left(\frac{1 - b^{2(k-1)}}{1 - b^2}\right) + b^{k-1}Q_2 \tag{2.19}$$

Substituting $Q_2 = b^2 + \frac{1}{b^2} + 1$, Equation 2.19 becomes

$$Q_{k+1} = b^{k-1}\left(b^2 + \frac{1}{b^2} + 1\right) + \frac{1 - b^{2k-2}}{b^{k+1}(1 - b^2)}$$
$$= b^{k+1} + b^{k-3} + b^{k-1} + \frac{1 - b^{2k-2}}{b^{k+1}(1 - b^2)}$$
$$= \frac{b^{2k+2}(1-b^2) + b^{2k-2}(1-b^2) + b^{2k}(1-b^2) + 1 - b^{2k-2}}{b^{k+1}(1 - b^2)}$$
$$= \frac{b^{2k+2} - b^{2k+4} + b^{2k-2} - b^{2k} + b^{2k} - b^{2k+2} + 1 - b^{2k-2}}{b^{k+1}(1 - b^2)}$$
$$= \frac{1 - b^{2k+4}}{b^{k+1}(1 - b^2)} \tag{2.20}$$

Thus $Q_k = \frac{1 - b^{2k+2}}{b^k(1 - b^2)}$ and $Q_{k-1} = \frac{1 - b^{2k}}{b^{k-1}(1 - b^2)}$.

Now that we have a closed form for Qk we consider our formula

g(λ) = λQk − (n − k)Qk−1 (2.21)

Then using Equation 2.20 we have

$$g(b) = \left(b + \frac{1}{b}\right)\frac{1 - b^{2k+2}}{b^k(1 - b^2)} - (n-k)\,\frac{1 - b^{2k}}{b^{k-1}(1 - b^2)} \tag{2.22}$$

If we now let g(b) = 0 we see that

$$\left(b + \frac{1}{b}\right)\left(1 - b^{2k+2}\right) - (n-k)\left(b - b^{2k+1}\right) = 0$$
$$(b^2 + 1)(1 - b^{2k+2}) - (n-k)(b^2 - b^{2k+2}) = 0$$
$$b^2 + 1 - b^{2k+4} - b^{2k+2} - (n-k)b^2 + (n-k)b^{2k+2} = 0$$

And so,

$$(n-k)b^{2k+2} - b^{2k+4} - b^{2k+2} = (n-k)b^2 - b^2 - 1$$
$$\left((n-k) - b^2 - 1\right)b^{2k+2} = (n-k)b^2 - b^2 - 1$$

This gives us a final equation of

$$b^{2k+2} = \frac{(n-k-1)b^2 - 1}{(n-k-1) - b^2} \tag{2.23}$$

If we let n − k − 1 = m (notice 0 ≤ m ≤ n − 3) and let x = b², then we have

$$x^{n-m} = \frac{mx-1}{m-x} \tag{2.24}$$

Notice that $\lambda = \sqrt{x} + \frac{1}{\sqrt{x}}$. By assumption, x ≥ 1 and thus √x ≥ 1. Since all statements in the proof above are equalities, the “only if” direction in the proof comes from simply reversing the argument. Thus we can see that ρ(A_n(k)) > 2 if and only if $x^{n-m} = \frac{mx-1}{m-x}$ has a solution for x > 1.

We make these substitutions to simplify our analysis; however, it does not impact our ultimate goal. It should be noted that monotonicity of x in m is the same as monotonicity of λ in m. We will now denote the largest solution of Equation 2.24 as x(m) to remind us that it is a function of m. The next lemmas will show that x(m) is uniquely determined for 3 ≤ m ≤ n − 3. We will handle the case when m ≤ 3 and n ≤ 7 separately.
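Lemma 1 can be checked numerically on the document's Example 2 (n = 8, k = 4, so m = 3); the bisection solver below is a sketch that relies on the sign pattern established in Lemma 2:

```python
import numpy as np

def A_matrix(n, k):
    # adjacency matrix of G_n(k), with 0-based indices
    A = np.zeros((n, n))
    for i in range(k - 1):
        A[i, i + 1] = A[i + 1, i] = 1
    for j in range(k, n):
        A[k - 1, j] = A[j, k - 1] = 1
    return A

def x_of_m(n, m):
    # bisection for the root of x^(n-m) = (mx - 1)/(m - x) in (1, m);
    # h > 0 just above 1 and h < 0 just below m (Lemma 2)
    h = lambda x: x ** (n - m) - (m * x - 1) / (m - x)
    lo, hi = 1.0 + 1e-9, m - 1e-9
    for _ in range(200):
        mid = (lo + hi) / 2
        if h(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

n, k = 8, 4
m = n - k - 1
x = x_of_m(n, m)
rho_pred = x ** 0.5 + x ** -0.5
rho_true = max(abs(np.linalg.eigvalsh(A_matrix(n, k))))
print(x, rho_pred, rho_true)
```

Both values agree with the largest root of λ⁴ − 7λ² + 9 from Equation 1.17, namely √((7 + √13)/2) ≈ 2.3028.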

2.2.2 Properties of x(m)

Lemma 2. There exists a solution, x(m) ∈ (1, m).

Proof. We first notice that x = 1 is always a solution. Because we are considering the largest eigenvalue, we are not interested in solutions smaller than 1. Let $f(x) = x^{n-m}$ and

$$g(x) = \frac{mx-1}{m-x} = -m - \frac{m^2-1}{x-m}.$$

Then $f'(x) = (n-m)x^{n-m-1}$ and $g'(x) = \frac{m^2-1}{(x-m)^2}$. At x = 1 we have

$$f'(1) = n - m \tag{2.25}$$
$$g'(1) = \frac{m+1}{m-1} = 1 + \frac{2}{m-1} \tag{2.26}$$

Since f′(1) = k + 1 ≥ 3 and m ≥ 3, then g′(1) ≤ 1 + 2/2 = 2 < f′(1). So for 0 < ε ≪ 1 and x = 1 + ε, g′(x) − f′(x) < 0. Thus g(x) < f(x) when x = 1 + ε.

We know that $\lim_{x \to m^-} g(x) = \infty$ and that f(m) < ∞. Accordingly, we can find ε > 0 such that g(m−ε) > f(m−ε). This implies that there exists an x ∈ [1+ε, m−ε] such that f(x) = g(x). If we now let ε → 0 we can say f(x) = g(x) must have some solution in (1, m). Notice that for x > m, $x^{n-m} > 0$ whereas $\frac{mx-1}{m-x} < 0$, and so the largest solution satisfies x(m) ∈ (1, m).

Lemma 3. $x(m) \geq \frac{m + \sqrt{m^2-4}}{2}$.

Proof. Notice that for fixed m we have clear monotonicity in n: if x > 1 and n_1 > n_2 > m + 1, then $x^{n_1-m} > x^{n_2-m}$. Consider the solution to $x^3 = \frac{mx-1}{m-x}$. Then

$$mx^3 - x^4 = mx - 1 \tag{2.27}$$
$$mx^3 - mx = x^4 - 1 \tag{2.28}$$
$$mx(x^2 - 1) = (x^2 - 1)(x^2 + 1) \tag{2.29}$$
$$mx = x^2 + 1 \tag{2.30}$$
$$x^2 - mx + 1 = 0 \tag{2.31}$$

This gives us solutions of $x = \frac{m \pm \sqrt{m^2-4}}{2}$. If m ≥ 3 then $\frac{m + \sqrt{m^2-4}}{2} \geq \frac{3 + \sqrt{5}}{2} > 1$. So $x(m) \geq \frac{m + \sqrt{m^2-4}}{2}$ for all m ≥ 3, since x(m) increases in n.
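The lower bound can be verified numerically for a few (n, k) pairs with m ≥ 3; the solver mirrors the bisection argument of Lemma 2, and the sample pairs are arbitrary choices:

```python
import math

def x_of_m(n, m):
    # bisection for the root of x^(n-m) = (mx - 1)/(m - x) in (1, m)
    h = lambda x: x ** (n - m) - (m * x - 1) / (m - x)
    lo, hi = 1.0 + 1e-9, m - 1e-9
    for _ in range(200):
        mid = (lo + hi) / 2
        if h(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for n, k in [(8, 4), (10, 5), (12, 6)]:
    m = n - k - 1
    x = x_of_m(n, m)
    lower = (m + math.sqrt(m * m - 4)) / 2
    print(n, k, m, round(x, 6), round(lower, 6))   # x(m) >= lower and x(m) < m
```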

It should be noted that such a solution does not exist when m < 3: if m = 2, then x = 1, in which case our inequality fails.

Lemma 4. The solution 1 < x(m) < m is unique.

n−m mx−1 Proof. Since both f(x)x and g(x) = m−x are increasing functions on the interval (1, m), we will consider the natural log of the equation. First, let ` = n − m. Then let F (x) = ln(f(x)) = ln(x`) (2.32) and mx − 1 G(x) = ln(g(x)) = ln = ln(mx − 1) − ln(m − x) (2.33) m − x

25 0 k 0 m 1 Then F (x) = x and G (x) = mx−1 + m−x . We know there exists a root x1 ∈ (1, m). 0 0 We will assume this is the first root. Since f (1) > g (1) and 1 ≤ x1 ≤ m then

0 0 f (x1) < g (x1). Thus, we know that

k 1 1 < − x 1 m − x 1 x − 1 1 m x x k < 1 − 1 1 m − x x − 1 1 m k x x < 1 − 1 x  1  x(m − x ) x x − 1 1 m   x 1 1 < 1  +  x  1 m − x  x − 1 1 m

  x 1 1 1  1  1 Claim 1.  1 +  < 1 + for x ∈ (x1, m). x x − m − x1 x − m − x 1 m m

  d 1  1  Claim Proof: We will consider  1 + . If the derivative dx x − m − x m

is positive then we will know that for x > x1 our inequality will hold.

  d 1 1 −1 1  +  = + dx  1 m − x  1 2 (m − x)2 x − x − m m

26 This holds if 1 m − x < x − m 1 2x > m + m 1 m + x > m 2

Using the lower bound we established in Lemma 3, we see that the

inequality holds for m ≥ 3. 

Thus   k x 1 1 1 1 < 1  +  < + x x  1 m − x  1 m − x x − 1 x − 1 m m

Thus for x > x_1, F′(x) < G′(x), which implies that f′(x) < g′(x). And so, our root is unique.

Lemma 5. The root x(m) is monotonically increasing in m.

Proof. We consider the equation x^{n−m}(m − x) = mx − 1 and proceed by implicit differentiation to show that x′(m) > 0 for m ≥ 3. By taking the derivative with respect to m, we have the following:

d/dm [x^n(m − x) = x^m(mx − 1)]

nx^{n−1}x′(m − x) + x^n(1 − x′) = (x + mx′)x^m + (mx − 1)(mx^{m−1}x′ + x^m ln(x))

(nx^{n−1}(m − x) − x^n − mx^m − (mx − 1)mx^{m−1}) x′ = x^{m+1} + (mx − 1) ln(x) x^m − x^n

x′(m) = [x^{m+1} + (mx − 1) ln(x) x^m − x^n] / [nx^{n−1}(m − x) − x^n − mx^m − (mx − 1)mx^{m−1}]   (2.34)

We will now demonstrate that x′ > 0 by showing both the numerator and denominator in Equation 2.34 are negative.

Claim 2. x^{m+1} + (mx − 1) ln(x) x^m − x^n < 0 under the conditions that x ≥ (m + √(m^2 − 4))/2 and x^{n−m} = (mx − 1)/(m − x).

Claim Proof: Notice that we can reduce our problem in the following manner:

x^{m+1} + (mx − 1) ln(x) x^m − x^n < 0   (2.35)

x + (mx − 1) ln(x) − x^{n−m} < 0   (2.36)

−x^{n−m} + x + mx ln(x) − ln(x) < 0   (2.37)

x(−x^{n−m−1} + 1 + m ln(x)) − ln(x) < 0   (2.38)

And so, it suffices to show that −x^{n−m−1} + 1 + m ln(x) < 0. Note that n − m − 1 = k ≥ 2. Thus −x^{n−m−1} ≤ −x^2, and we can say that

−x^{n−m−1} + 1 + m ln(x) ≤ −x^2 + 1 + m ln(x)

Since x ≥ (m + √(m^2 − 4))/2 and x < m, then

−x^2 + 1 + m ln(x) < −((m + √(m^2 − 4))/2)^2 + 1 + m ln(m)

= −(m^2/4 + (m^2 − 4)/4 + (m√(m^2 − 4))/2) + 1 + m ln(m)

= −m^2/2 − (m√(m^2 − 4))/2 + 2 + m ln(m)

We notice that this is a decreasing function when m ≥ 3 and is negative when m = 3. So x + (mx − 1) ln(x) − x^{n−m} < 0 as desired. □

We now want to show that

nx^{n−1}(m − x) − x^n − mx^m − (mx − 1)mx^{m−1} < 0   (2.39)

under the same conditions as Claim 2. We will use simple algebra to simplify the problem. We start by dividing by x^m, giving us

(n(m − x)/x) x^{n−m} − x^{n−m} − m − (mx − 1)m/x = (n(m − x)/x) x^{n−m} − x^{n−m} − m − m^2 + m/x

= (n(m − x) − x)x^{n−m−1} − m − m^2 + m/x

= (n(m − x) − x)x^{n−m−1} − m − m(m − 1/x)

Thus we will show the following claim to conclude Equation 2.39 holds.

Claim 3. n(m − x) − x < 0 given x ≥ (m + √(m^2 − 4))/2 and x^{n−m} = (mx − 1)/(m − x).

Claim Proof: Since x^{n−m} = (mx − 1)/(m − x), then m − x = (mx − 1)x^{m−n}. Substituting, we now need n(mx − 1) < x^{n−m+1}. Using k = n − m − 1 we will show

x^{k+2} − (k + 1)(mx − 1) > 0

when m ≥ 3 (k ≥ 2) and x ≥ (m + √(m^2 − 4))/2. Let k + 1 = t. Our goal is to now show that x^{t+1} − tmx + t > 0. Let h(t, m, x) = x^{t+1} − tmx + t. Consider

h_t(t, m, x) = x^{t+1} ln(x) − mx + 1

> x^{t+1} − mx + 1

= (x^t − m)x + 1

When x ≥ (m + √(m^2 − 4))/2 and t ≥ 3 we know that x^t > m. Thus h_t(t, m, x) > 0 under these same conditions. We now consider

h(3, m, x) = x^4 − 3mx + 3

= (x^3 − 3m)x + 3

> 0

since x^3 − 3m > 0 when x ≥ (m + √(m^2 − 4))/2. Since h(t, m, x) > 0 under the given conditions, we know that n(m − x) − x < 0. □

This implies that (n(m − x) − x)x^{n−m−1} − m − m(m − 1/x) < 0 and thus Equation 2.39 holds. Since both Claim 2 and Claim 3 hold, then

x′(m) = [x^{m+1} + (mx − 1) ln(x) x^m − x^n] / [nx^{n−1}(m − x) − x^n − mx^m − (mx − 1)mx^{m−1}] > 0   (2.40)

for all m ≥ 3 and n > 7. Thus our root x(m) is monotonically increasing in m.
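Lemma 5 can also be checked numerically: for fixed n, the computed roots increase with m. A small sketch (Python; as before, we locate x(m) by bisection on f(x) = x^{n−m}(m − x) − (mx − 1), and the function name is ours):

```python
import math

def x_root(n, m):
    # Bisection for the largest root of x^(n-m) = (mx - 1)/(m - x), m >= 3.
    f = lambda x: x ** (n - m) * (m - x) - (m * x - 1)
    lo, hi = (m + math.sqrt(m * m - 4)) / 2, m - 1e-7
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

n = 12
roots = [x_root(n, m) for m in (3, 4, 5, 6)]
assert all(a < b for a, b in zip(roots, roots[1:]))   # x(m) increasing in m (Lemma 5)
```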

2.2.3 Special Cases

We now need only demonstrate that monotonicity holds for m = 0, 1, 2 and 1 ≤ n ≤ 7 to complete our proof.

Lemma 6. ρ(An(n − 1)) < ρ(An(n − 2)) for n ≥ 7.

Proof. This is the case when m = 0 and m = 1 respectively. We will use Lagrange multipliers to prove this lemma. We know

ρ(An(k)) = max_{|x|=1} x^T A x   (2.41)

We then consider f(x) = 2(x_1x_2 + ··· + x_{n−1}x_n) under the condition

x_1^2 + ··· + x_n^2 = 1   (2.42)

So, for An(n − 1),

L_{n−1} = 2(x_1x_2 + ··· + x_{n−1}x_n) + λ(1 − x_1^2 − ··· − x_n^2)   (2.43)

Taking the partial derivative with respect to x_i we get the following system of equations:

2x_2 = 2λx_1   (2.44)

2(x_1 + x_3) = 2λx_2   (2.45)

2(x_2 + x_4) = 2λx_3   (2.46)

⋮

2(x_{n−2} + x_n) = 2λx_{n−1}   (2.47)

2x_{n−1} = 2λx_n   (2.48)

In a similar construction for An(n − 2),

L_{n−2} = 2(x_1x_2 + ··· + x_{n−3}x_{n−2} + x_{n−2}(x_{n−1} + x_n)) + λ(1 − x_1^2 − ··· − x_n^2)   (2.49)

This generates the system of equations

2x_2 = 2λx_1

2(x_1 + x_3) = 2λx_2

2(x_2 + x_4) = 2λx_3

⋮

2x_{n−2} = 2λx_{n−1}   (2.50)

2x_{n−2} = 2λx_n   (2.51)

The only differences between the two systems of equations are Equations 2.47 and 2.50, as well as 2.48 and 2.51. Thus, if x_{n−1} < x_{n−2}, we will be able to argue that ρ(An(n − 1)) < ρ(An(n − 2)).

Notice that Equations 2.44–2.48 and 1 = x_1^2 + ··· + x_n^2 are a consistent system of n + 1 equations in n + 1 unknowns. If we let x* be the solution to this system of equations then we see that

x*_{n−2} = λx*_{n−1} − x*_n

= λ^2 x*_n − x*_n

= (λ^2 − 1)x*_n   (2.52)

Combining Equation 2.52 with 2.48, if λ^2 − 1 > λ then x_{n−2} > x_{n−1}. We know that λ^2 − 1 > λ when λ > (1 + √5)/2. We know the largest eigenvalue of An(n − 1) is 2cos(π/(n + 1)) [11]. We see that 2cos(π/(n + 1)) ≥ (1 + √5)/2 when n ≥ 4. This implies that λ^2 − 1 > λ and so x*_{n−2} > x*_{n−1} when n ≥ 4. Using this, we can make the following conclusion:

ρ(An(n − 1)) = 1 + 2(x*_1 x*_2 + x*_2 x*_3 + ··· + x*_{n−2} x*_{n−1} + x*_{n−1} x*_n)

< 1 + 2(x*_1 x*_2 + x*_2 x*_3 + ··· + x*_{n−2} x*_{n−1} + x*_{n−2} x*_n)

≤ max_{|x|=1} [1 + 2(x_1x_2 + x_2x_3 + ··· + x_{n−2}(x_{n−1} + x_n))]

= ρ(An(n − 2))   (2.53)
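Lemma 6 can be sanity-checked without the Lagrange-multiplier machinery by computing both spectral radii directly. The sketch below (Python; power iteration on the 0/1 adjacency matrices, ignoring the diagonal term since adding a multiple of the identity shifts both spectral radii equally) builds An(n − 1) as a path and An(n − 2) as the path-with-two-pendant-vertices structure indicated by Equation 2.49:

```python
import math

def spectral_radius(edges, n, iters=5000):
    """Power iteration for the largest eigenvalue of the 0/1 adjacency
    matrix on vertices 0..n-1 given an undirected edge list."""
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [0.0] * n
        for i, j in edges:
            w[i] += v[j]
            w[j] += v[i]
        lam = math.sqrt(sum(t * t for t in w))
        v = [t / lam for t in w]
    return lam

n = 10
path = [(i, i + 1) for i in range(n - 1)]                                        # A_n(n-1)
forked = [(i, i + 1) for i in range(n - 3)] + [(n - 3, n - 2), (n - 3, n - 1)]   # A_n(n-2)
r1, r2 = spectral_radius(path, n), spectral_radius(forked, n)
assert abs(r1 - 2 * math.cos(math.pi / (n + 1))) < 1e-9   # known path eigenvalue [11]
assert r1 < r2                                            # Lemma 6
```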

Lemma 7. ρ(An(n − 2)) < ρ(An(n − 3)) for n ≥ 7.

Proof. This is the case when m = 1 and m = 2 respectively. We first consider Lemma 1, which says that ρ(An(k)) ≥ 2 if and only if

x^{n−m} = (mx − 1)/(m − x)   (2.54)

has a solution x > 1. Notice that if we let m = 1 (k = n − 2), then Equation 2.54 becomes

x^{n−1} = (x − 1)/(1 − x)   (2.55)

= −1   (2.56)

which has no solution x ≥ 1. This implies that ρ(An(n − 2)) < 2. Notice that ρ(A7(4)) = √(3 + √2) > 2. In Lemma 3, we argued that x(m) is monotonically increasing in n. This implies that ρ(An(k)) is also monotonically increasing in n because ρ(An(k)) = √x + 1/√x. Thus

ρ(An(n − 2)) < 2 < ρ(A7(4)) ≤ ρ(An(n − 3))   (2.57)

for n ≥ 7.
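The value ρ(A7(4)) = √(3 + √2) used above can be recovered from Equation 2.54 with n = 7 and m = 2; the factorization below is our own verification:

```latex
x^{5} = \frac{2x-1}{2-x}
\;\Longrightarrow\; x^{6} - 2x^{5} + 2x - 1 = 0
\;\Longrightarrow\; (x-1)(x+1)\left(x^{4} - 2x^{3} + x^{2} - 2x + 1\right) = 0
```

Dividing the quartic factor by x^2 and substituting y = x + 1/x gives y^2 − 2y − 1 = 0, so y = 1 + √2 at the largest root, and ρ(A7(4))^2 = (√x + 1/√x)^2 = y + 2 = 3 + √2.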

Lemma 8. ρ(An(n − 3)) < ρ(An(n − 4)) for n ≥ 7.

Proof. This is the case when m = 2 and m = 3 respectively. We will compare solutions to

x^{n−2} = (2x − 1)/(2 − x)   (2.58)

and

x^{n−3} = (3x − 1)/(3 − x)   (2.59)

We can see that Equation 2.58 does not have a real solution x ≥ 2 because (2x − 1)/(2 − x) < 0 and x^{n−2} > 0. So, it suffices to show that for n = 7, x(3) > 2. Then, using monotonicity in n, we achieve the desired inequality. Letting n = 7 in (2.59) we have

x^{7−3} = (3x − 1)/(3 − x)   (2.60)

which has its largest root at

x = (1/2)(1 + √5 + √(2(1 + √5))) > 2   (2.61)

Appealing to monotonicity in n, we see that for all n, x(2) < x(3). Now, knowing that ρ(An(k)) = √x + 1/√x we have that

ρ(An(n − 3)) < √2 + 1/√2 < ρ(A7(3)) ≤ ρ(An(n − 4))   (2.62)
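The closed-form root in Equation 2.61 can be verified numerically (a quick Python check; the tolerance is arbitrary):

```python
import math

# Equation 2.61: the closed-form largest root of x^4 = (3x - 1)/(3 - x).
x = 0.5 * (1 + math.sqrt(5) + math.sqrt(2 * (1 + math.sqrt(5))))
assert abs(x ** 4 * (3 - x) - (3 * x - 1)) < 1e-9   # satisfies Equation 2.60
assert x > 2                                        # as claimed in the proof
```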

Lemma 9. For 1 ≤ n < 7 the ρ(An(k)) are monotonically increasing.

Proof. We start with n = 1, 2, 3, which are all trivial cases. For each of these cases there is only one configuration and thus k cannot vary. We proceed with the remaining cases by computation:

n = 4: ρ(A4(3)) = (1 + √5)/2 and ρ(A4(2)) = √3. Thus monotonicity holds.

n = 5: ρ(A5(4)) = √3, ρ(A5(3)) = √(2 + √2), ρ(A5(2)) = 2. Thus monotonicity holds.

n = 6: ρ(A6(5)) = 1.80194, ρ(A6(4)) = √((5 + √5)/2), ρ(A6(3)) = √((5 + √13)/2), ρ(A6(2)) = √5. Thus monotonicity holds.

This completes the special cases.

Theorem 1. For fixed n ∈ ℕ, the largest eigenvalue is monotonically increasing in m and, for fixed n ≥ 7 and m = n − k − 1 ≥ 3, bounded by

(m + 2 + √(m^2 − 4))/√(2(m + √(m^2 − 4))) < ρ(An(k)) < √m + 1/√m   (2.63)

Proof. We consider two cases: n < 7 and n ≥ 7. For n < 7 we simply reference Lemma 9. We now consider n ≥ 7. We first consider the characteristic polynomial in hopes of solving for the largest eigenvalue. By Lemma 1, we can say that the characteristic equation of An(k) can be written as

x^{n−m} = (mx − 1)/(m − x)   (2.64)

We notice that x^{n−m} ≠ (mx − 1)/(m − x) for x > m. Thus our largest root x(n, m) < m. We then use Lemma 3 to establish our lower bound. We then use Lemma 2 and Lemma 4 to establish existence and uniqueness of a root x(n, m) ∈ (1, m) for m ≥ 3. By Lemma 5, for m ≥ 3, x(n, m) is monotonically increasing in m as desired. For m < 3 we refer to Lemma 6, Lemma 7 and Lemma 8. These together give us monotonicity for n ≥ 7, 0 ≤ m ≤ n − 2.

In order to establish our bounds, we consider our root x(n, m) ∈ ((m + √(m^2 − 4))/2, m).

Since x = b^2 and λ = b + 1/b, then we can say

(m + 2 + √(m^2 − 4))/√(2(m + √(m^2 − 4))) < λ < √m + 1/√m   (2.65)

Chapter 3: Further Results and Applications

3.1 Corollaries

Corollary 1. When m = n − 2, we have a closed form for ρ(An(1)).

Proof. We first fix m = n − 2; thus our equation becomes

x^2 = (mx − 1)/(m − x)   (3.1)

Performing minor algebra we see that this equation can be written as (x − 1)(x^2 − (m − 1)x + 1) = 0. This has its largest solution at

x = ((m − 1) + √((m − 1)^2 − 4))/2   (3.2)

We can then convert into λ using x = b^2, λ − 1 = b + 1/b, and m = n − 2. Thus

λ = √(((n − 3) + √((n − 5)(n − 1)))/2) + 1/√(((n − 3) + √((n − 5)(n − 1)))/2) + 1

= √((n − 3) + √((n − 5)(n − 1)))/√2 + √2/√((n − 3) + √((n − 5)(n − 1))) + 1

= ((n − 3) + √((n − 5)(n − 1)) + 2)/√(2((n − 3) + √((n − 5)(n − 1)))) + 1

= (n − 1 + √((n − 5)(n − 1)) + √(2((n − 3) + √((n − 5)(n − 1)))))/√(2((n − 3) + √((n − 5)(n − 1))))
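The largest solution in Equation 3.2 can be confirmed against Equation 3.1 numerically (a Python check; the sample values of m = n − 2 are arbitrary):

```python
import math

# Check Equation 3.2 against Equation 3.1: x = ((m-1) + sqrt((m-1)^2 - 4))/2
# satisfies x^2 = (mx - 1)/(m - x), written as x^2 (m - x) = mx - 1.
for m in (4, 5, 10, 25):
    x = ((m - 1) + math.sqrt((m - 1) ** 2 - 4)) / 2
    assert abs(x ** 2 * (m - x) - (m * x - 1)) < 1e-8
```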

Corollary 2. Let Wn(k) be a weighted adjacency matrix with the same structure as An(k) and

a_ij = β if node i is adjacent to node j,
a_ij = δ if i = j,
a_ij = 0 otherwise,   (3.3)

with 0 ≤ β, δ ≤ 1. We can show the largest eigenvalue of Wn(k) is monotonically decreasing in k and is bounded by

β(m + 2 + √(m^2 − 4))/√(2(m + √(m^2 − 4))) + δ < ρ(Wn(k)) < β(√m + 1/√m) + δ   (3.4)

with m = n − k − 1.

Proof. We can rewrite Wn(k) = βAn(k) + δIn; then the eigenvalues of Wn(k) are of the form βλ + δ where λ is an eigenvalue of An(k). Since β, δ > 0, the largest eigenvalue of Wn(k) will be βρ(An(k)) + δ. We can then say that because ρ(An(k − 1)) > ρ(An(k)), then

βρ(An(k − 1)) + δ > βρ(An(k)) + δ   (3.5)

as desired. The bounds can be found by performing simple algebra on the original inequality.

This is exactly the compartmental epidemic. Accordingly, we have found bounds on the spectral radius and can clearly see that it is monotonically decreasing in k. This allows us to consider using this structure to model constructed compartmental epidemic models, and potentially to find bifurcation values in the network structure.

Given a network G, we can model the compartmental spread of the disease as shown above. The reproduction number is given again by β/δ [1]. However, when considering networks, our bifurcation value is no longer 1. Instead, an epidemic will spread over a network G if

R_0 = β/δ > 1/ρ(A)   (3.6)

where A is the adjacency matrix of G. Similarly, the epidemic will initially not spread when R_0 < 1/ρ(A). This naturally leads to the question: under what network conditions does the disease spread? Applying this to our transformation, we can now see that k is a bifurcation parameter. We solve in the next corollary when this bifurcation occurs.

Corollary 3. Given a weighted network as in Corollary 2 and values n, β, and δ, then β/δ < 1/ρ(An(k)) whenever

k > n − δ^2/(2β^2) − δ√(δ^2 − 4β^2)/(2β^2)   (3.7)

Proof. The proof follows from simple algebraic computations. We know that if

β/δ < 1/(√(n − k − 1) + 1/√(n − k − 1)) < 1/ρ(An(k))   (3.8)

then the disease will not spread. We then solve the first inequality.

β/δ < 1/(√(n − k − 1) + 1/√(n − k − 1))   (3.9)

δ/β > √(n − k − 1) + 1/√(n − k − 1)   (3.10)

(δ/β)√(n − k − 1) > (n − k − 1) + 1   (3.11)

(n − k − 1) − (δ/β)√(n − k − 1) + 1 < 0   (3.12)

(1/2)(δ/β − √(δ^2/β^2 − 4)) < √(n − k − 1) < (1/2)(δ/β + √(δ^2/β^2 − 4))   (3.13)

This leads us to analyzing 3 separate cases:

1. δ^2 − 4β^2 = 0

2. δ^2 − 4β^2 < 0

3. δ^2 − 4β^2 > 0

Case 1: If δ^2 − 4β^2 = 0, or δ = 2β, then (n − k − 1) − (δ/β)√(n − k − 1) + 1 ≥ 0 for all values of √(n − k − 1). Thus the disease will always spread.

Case 2: Similar to Case 1, if δ^2 − 4β^2 < 0 then (n − k − 1) − (δ/β)√(n − k − 1) + 1 ≥ 0 for all values of √(n − k − 1). Again, the disease will always spread.

Case 3: If δ^2 − 4β^2 > 0 then we have Equation 3.13. Notice that if δ^2 − 4β^2 > 0 then

(1/2)(δ/β − √(δ^2/β^2 − 4)) < 1   (3.14)

whenever k < n − 1. This implies that if k < n − 1,

(n − k − 1) − (δ/β)√(n − k − 1) + 1 < 0   (3.15)

for n − k − 1 > 1 (i.e. k < n − 2). Thus we only need to consider the second inequality in Equation 3.13. So we solve

√(n − k − 1) < (1/2)(δ/β + √(δ^2/β^2 − 4))   (3.16)

We now continue with algebra and see that

n − k − 1 < (1/4)(δ/β + √(δ^2/β^2 − 4))^2   (3.17)

k > n − 1 − (1/4)(δ/β + √(δ^2/β^2 − 4))^2   (3.18)

k > n − 1 − (δ^2/(4β^2) + δ√(δ^2 − 4β^2)/(2β^2) + δ^2/(4β^2) − 1)   (3.19)

k > n − δ^2/(2β^2) − δ√(δ^2 − 4β^2)/(2β^2)   (3.20)

Consider the following example using this corollary:

Example 6. Let n = 200, δ = 0.4 and β = 0.15. The bifurcation occurs when k = 194.93. So for k > 194 the disease will not spread throughout the network because

β/δ < 1/(√5 + 1/√5) < 1/ρ(An(k))   (3.21)

This implies that almost all of our network structures would accommodate the initial spread of the disease.

Notice that this is in sharp contrast to the fully mixed system, in which, given the same transmission and recovery rates, the disease would not spread because β/δ = 0.375 < 1. However, because of the connectivity of the graph, the disease will initially spread throughout the network.
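The integer threshold in Example 6 can be cross-checked by direct search over k, using the sufficient no-spread condition β/δ < 1/(√m + 1/√m) with m = n − k − 1 rather than the closed-form bound (a Python sketch; the helper name is ours):

```python
import math

def no_spread_guaranteed(n, k, beta, delta):
    """Sufficient condition from Equation 3.8: beta/delta < 1/(sqrt(m) + 1/sqrt(m))."""
    m = n - k - 1
    s = math.sqrt(m)
    return beta / delta < 1.0 / (s + 1.0 / s)

n, beta, delta = 200, 0.15, 0.4
threshold = min(k for k in range(1, n - 1) if no_spread_guaranteed(n, k, beta, delta))
assert threshold == 195   # the bound guarantees no spread exactly for k > 194
```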

3.2 Applications

In this section we will explore methods of estimating the spectral radius for networks of this structure.

3.2.1 Relative Size of Eigenvalue

During initial research, a clear trend appeared in the relationship between n − k − 1 and ρ(An(k)) that seemed worth investigating. When plotting sequential graphs, as m increased, the distance between x(m) and m appeared to decrease dramatically. This sparked our desire to approximate the largest eigenvalue by approximating its distance to the asymptote.

For notation purposes we will define

d_{n,k} := (√(n − k − 1) + 1/√(n − k − 1)) − ρ(An(k)) = (√m + 1/√m) − (√(x(m)) + 1/√(x(m)))   (3.22)

Using Matlab, we approximated these values for different values of n. The resulting plots are displayed in Figures 3.1, 3.2 and 3.3, along with their numerical values in Tables 3.1, 3.2 and 3.3.
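The tabulated d_{n,k} values sit far below double precision (d_{50,27} ≈ 1.27e−36), so reproducing them requires extended-precision arithmetic. A sketch with Python's decimal module, standing in for the Matlab computation (the function name and precision settings are ours):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # d_{50,27} ~ 1e-36 is far below double precision

def d_nk(n, k):
    """High-precision d_{n,k} = (sqrt(m) + 1/sqrt(m)) - (sqrt(x(m)) + 1/sqrt(x(m)))."""
    m = n - k - 1
    M = Decimal(m)
    f = lambda x: x ** (n - m) * (M - x) - (M * x - 1)
    lo = (M + (M * M - 4).sqrt()) / 2     # Lemma 3 lower bound; f > 0 here
    hi = M - Decimal("1e-45")             # just below the asymptote at x = m; f < 0 here
    for _ in range(400):                  # bisect for the root x(m)
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    x = (lo + hi) / 2
    sm, sx = M.sqrt(), x.sqrt()
    return (sm + 1 / sm) - (sx + 1 / sx)

d = d_nk(50, 27)
# Agrees with Table 3.1's d_{50,27} = 1.2696e-36 to within 1%:
assert Decimal("1.26e-36") < d < Decimal("1.28e-36")
```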

Figure 3.1: The graph of d50,k (Left). The graph of − log(d50,k) (Right).

Figure 3.2: The graph of d75,k (Left). The graph of − log(d75,k) (Right).

Figure 3.3: The graph of d100,k (Left). The graph of − log(d100,k) (Right).

These graphs prompted further exploration into potential fitted curves to this data in hopes of better approximating the largest eigenvalue. Interestingly, Figures 3.1, 3.2 and 3.3 demonstrate that d_{n,k} does not converge to 0 exponentially, as we had originally thought. Our graphs suggest that at a certain point, the growth rate of ρ(An(k)) decreases relative to the moving asymptote.

Naturally, we want to find the value k such that − log(d_{n,k}) is a maximum and attempt to generalize this relative to n. Again, using Matlab, we fit a spline curve to our data, took a derivative, and approximated the zeros. We will denote the fitted spline p_n(k). The resulting max_{1≤k≤n−1}(p_n(k)) for n = 10, ..., 100 can be seen in Figure 3.4.

k    d50,k          −ln(d50,k)
27   1.2696e−36     82.65
28   2.0690e−37     84.47
29   3.9469e−38     86.13
30   8.9341e−39     87.61
31   2.4378e−39     88.91
32   8.1657e−40     90.00
33   3.4304e−40     90.87
34   1.8536e−41     91.49
35   1.3278e−40     91.82
36   1.3081e−40     91.83
37   1.8537e−40     91.49
38   3.9971e−40     90.72
39   1.4088e−39     89.46
40   8.9093e−38     87.61
41   1.1455e−37     85.06
42   3.5604e−36     81.62
43   3.4366e−35     77.05
44   1.5106e−31     70.97
45   5.6798e−29     62.74
46   5.7904e−23     51.20
47   1.8841e−15     33.91

Table 3.1: Numerical Approximations for d50,k, 27 < k < 47.

k    d75,k          −ln(d75,k)
52   3.4922e−70     159.93
53   1.8209e−70     160.58
54   1.1763e−70     161.02
55   9.5987e−71     161.22
56   1.012e−70      161.17
57   1.415e−70      160.83
58   2.7061e−70     160.19
59   7.3406e−70     159.19
60   2.9508e−69     157.8
61   1.8538e−68     155.96
62   1.9432e−67     153.61
63   3.6892e−66     150.67
64   1.4088e−64     147.02
65   1.241e−62      142.54
66   3.0321e−60     137.05
67   2.6549e−57     130.27
68   1.2088e−53     121.85
69   5.0686e−49     111.2
70   5.0447e−43     97.393
71   6.834e−35      78.669
72   5.6151e−23     51.234

Table 3.2: Numerical Approximations for d75,k, 52 < k < 72.

Figure 3.4: A graph of n vs. max_k(p_n(k))

k    d100,k          −ln(d100,k)
77   9.6058e−104     237.21
78   1.6025e−103     236.69
79   3.5054e−103     235.91
80   1.0313e−102     234.83
81   4.2012e−102     233.43
82   2.4522e−101     231.66
83   2.1347e−100     229.5
84   2.907e−99       226.89
85   6.5576e−98      223.77
86   2.6271e−96      220.08
87   2.0369e−94      215.73
88   3.405e−92       210.61
89   1.4088e−89      204.59
90   1.7287e−86      197.47
91   8.0259e−83      189.03
92   1.9797e−78      178.92
93   4.2517e−73      166.64
94   1.7007e−66      151.44
95   4.4806e−58      132.05
96   8.0658e−47      106.13
97   1.6734e−30      68.563

Table 3.3: Numerical approximation of d100,k, 77 < k < 97.

We see a close-to-linear relationship in Figure 3.4. In fact, the correlation is 0.9999. This leads us to the following conjecture:

Conjecture 1. max_{1≤k≤n−1}(p_n(k)) ≈ 0.7816n − 2.4680.

Extrapolating this information would allow us to accurately predict the largest eigenvalue of large systems of this structure by approximating d_{n,k} and subsequently solving for ρ(An(k)).

Chapter 4: Conclusions

The goal of this research is to lay the foundation for future spectral graph theory as it pertains to agent-based epidemic modeling. Knowing the significance of the spectral radius of a network in both compartmental and agent-based epidemic modeling, we restricted our model to illustrate monotonicity of the spectral radii and eventually lay the foundation for better approximating these eigenvalues.

4.1 Future Work

Naturally, this work can be extended in a few significant directions. We can consider different network structures in general and explore how minor changes in the graph structure might affect the largest eigenvalue. There are also extensions to the original model that can be explored. One such approach would be to introduce more complexities to our network configurations. This would allow for multiple branches or cycles to exist. Similar analysis has been done on epidemic models on k-regular trees [12]. The increased complexity of the network would allow for more realistic community models to better model epidemics.

Another extension to consider would be to move to an agent-based model. Here, we have shown monotonicity holds for both the adjacency matrix and a compartmental disease model. However, extending this theory to non-identical and non-symmetric transmission rates is nontrivial. One possible approach would be to consider transmission rates as a geometric series. That is, our matrix is of the form

0 r  r 0 r2     2 ..   r 0 .   . .   .. .. ri−1     ri−1 0 ri ri+1 . . . rn     ri 0     ri+1 0     . .   . ..  rn 0

In this case, we could hope to solve the difference equations using the geometric relation and potentially retain some monotonicity.

Rather than extending the previous problem, we could examine the effects of making different cuts within the existing networks. Removing an edge would amount to isolating a community and removing a node would amount to removing an agent. In either of these cases, it is known that the eigenvalue will decrease [7]. However, one could ask by how much it will decrease and what effect cuts in different positions might have on this difference.

Bibliography

[1] Mark Newman. Networks: An Introduction. Oxford University Press, Inc., New York, NY, USA, 2010.

[2] Tanya Kostova. Interplay of node connectivity and epidemic rates in the dynamics of epidemic networks. Journal of Difference Equations and Applications, 15(4):415–428, 2009.

[3] Miaohua Jiang. Approximating individual risk of infection in a Markov chain epidemic network model with a deterministic system. Journal of Difference Equations and Applications, 22(10):1438–1451, 2016.

[4] Marino Gatto, Lorenzo Mari, and Andrea Rinaldo. Leading eigenvalues and the spread of cholera. SIAM News, 43, Sept 2013.

[5] Fred Brauer. An Introduction to Networks in Epidemic Modeling, pages 133–146. Springer Berlin Heidelberg, Berlin, Heidelberg, 2008.

[6] M. E. J. Newman. Spread of epidemic disease on networks. Phys. Rev. E, 66:016128, Jul 2002.

[7] R.B. Bapat. Graphs and Matrices. Universitext. Springer London, 2014.

[8] H. Shin. Spectral radius of a star with one long arm. ArXiv e-prints, September 2017.

[9] B. Nica. A Brief Introduction to Spectral Graph Theory. ArXiv e-prints, September 2016.

[10] G.H. Golub and C.F. Van Loan. Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins University Press, 1996.

[11] Devadatta Kulkarni, Darrell Schmidt, and Sze-Kai Tsui. Eigenvalues of tridiagonal pseudo-Toeplitz matrices. Linear Algebra and its Applications, 297(1):63–80, 1999.

[12] Claire Seibold and Hannah L. Callender. Modeling epidemics on a regular tree graph. Letters in Biomathematics, 3(1):59–74, 2016.

[13] Huiqing Liu, Mei Lu, and Feng Tian. On the spectral radius of graphs with cut edges. Linear Algebra and its Applications, 389(Supplement C):139 – 145, 2004.

[14] L. Bunimovich and B. Webb. Transformations. Springer, New York, 2014.

[15] A. Berman and R. Plemmons. Nonnegative Matrices in the Mathematical Sciences. Society for Industrial and Applied Mathematics, 1994.

[16] O Diekmann and JAP Heesterbeek. Mathematical Epidemiology of Infectious Diseases: Model Building, Analysis and Interpretation. Chichester: John Wiley, 2000.

[17] Qiao Li and Ke Qin Feng. On the largest eigenvalue of a graph. Acta Math. Appl. Sinica, 2(2):167–175, 1979.

[18] Stephen Strogatz. Nonlinear Dynamics And Chaos: With Applications To Physics, Biology, Chemistry, And Engineering (Studies in Nonlinearity). Westview Press, 2001.

Curriculum Vitae

William D. Fries
408 Crowne Oaks Circle
Winston Salem, NC 27106

Wake Forest University Phone: 314.402.4296 Department of Mathematics and Statistics Email: [email protected]

Education M.A. in Mathematics, Wake Forest University, Winston Salem, NC (Anticipated) May 2018 GPA 3.416 Specializing in Dynamical Systems

B.A. in Mathematics, Washington and Lee University, Lexington, VA May 2016 GPA 3.261 Second major in Religious Studies

Publications Preprint 1. Fries B., Jiang, M., (2018) “Characterizing the Largest Eigenvalue of a Sequence of Adja- cency Matrices” (Submitted to Linear Algebra and its Applications).

Presentations “Characterizing the Spectral Radius of a Sequence of Adjacency Matrices” Poster: North Carolina MMA State Dinner. Winston-Salem, NC Oct. 2017 Oral: UNCG Regional Mathematics and Statistics Conference. Greensboro, NC Nov. 2017 Oral: Triangle Area Graduate Mathematics Conference. Raleigh, NC Nov. 2017

Related Work Experience Graduate Teaching Assistant, WAKE FOREST UNIVERSITY June 2017-Present Elementary Probability and Statistics, Explorations in Mathematics, Linear Algebra Summer Graduate Research Fellow, WAKE FOREST UNIVERSITY 2017 Advisor: Miaohua Jiang, Ph.D. Tutor, Various topics in mathematics Wake Forest University Aug. 2016-May 2017 Washington and Lee University Sept. 2014-May 2016

Research Experience 1. Master's Thesis. January 2017-April 2018 (Anticipated). "Characterizing the Spectral Radius of a Sequence of Adjacency Matrices". Advisor: Miaohua Jiang, Ph.D. Abstract: In this paper we explore the introductory theory of modeling epidemics on networks and the significance of the spectral radius in their analysis. We look to establish properties of the spectral radius that would better inform how an epidemic might spread over such a network. We construct a specific transformation of networks that describe a transition from a star network to a path network. For the sequence of adjacency matrices that describe the unfolding of a star into a path, we show the spectral radius of these graphs can be given in a simple algebraic equation. Using this equation we show the spectral radius increases as the star unfolds and establish bounds on the spectral radius for each network.

2. Dynamical Systems Research Project. April 2017. “Extending a Model for Religious Disaffilia- tion”. Supervised by John Gemmer, Ph.D. Abstract: I consider two different adaptations to a religious disaffiliation model proposed by Daniel Abrams in “Dynamics of Social Group Competition: Mod- eling the Decline of Religious Affiliation.” The first adaptation considers the assumption of constant utility by, instead, modeling ux as a Gaussian curve. The second adaptation is the beginning of an exploration into a system with three populations.

Relevant Coursework Wake Forest University Completed: Applied Nonlinear Dynamics; Complex Analysis; Measure Theory; Networks: Models and Analysis; Linear Models; Topology; Abstract Algebra Registered: Partial Differential Equations; Stochastic Processes and Applications; Introduction to Numerical Methods Washington and Lee University Real Analysis I; Real Analysis II; Ordinary Differential Equations; Partial Differential Equations; Abstract Algebra I; Abstract Algebra II; Combinatorics; Mathematical Statistics

Other Skills Interests Mathematica, Matlab, Maple, R, LaTeX. Working knowledge of French language. Swimming for 15+ years including 4 years of NCAA DIII and 1 year as Captain, Philanthropy and Service Chair of Zeta Deuteron chapter of Phi Gamma Delta, Triathlons, Hiking, Rock Climbing, Camping, Piano.
