Asymptotic Transient Solutions of Stochastic Fluid Queues Fed By A Single ON-OFF Source

by

Yu Shao

A thesis submitted to The Johns Hopkins University in conformity with the requirements for the degree of Master of Science

Baltimore, Maryland May, 2019

© Yu Shao 2019

All rights reserved Abstract

Stochastic Fluid Queues (SFQs) have served many applications in different research fields including understanding the performance measurements of network switches in High-Performance Computing (HPC) environment and understanding the surplus processes in the context of . Although many advancements have been made to understand the stationary behavior of SFQs, the transient analysis is an open research area, where Laplace-Stieltjes Transforms (LST) are used to understand time-dependent behavior. However, performing the inversion is impractical in many cases.

Chapter2 and3 of this work revisits the work of fluid-queues driven by a single "ON-OFF" source in the context of resource allocation in HPC en- vironments. The asymptotic analysis is performed to produce equivalent representations that depict short and long-time behavior. Next, an expansion is proposed that holistically considers these behaviors while also providing a direct correlation between the packets injection rates and state transition rates. The numerical experiments validate our proposed schema, where the results can be adapted to provide better resource estimation of fast-packet switching within HPC environment.

ii Chapter4 and5 of this work explores theorizing the collective risk of a single insurance firm during periods of distress, where its current revenues areless than its initial surplus. This behavior can be modeled as a fluid queue fed by a single ON-OFF source, where our contributions are two-fold. First, we found explicit solutions consisting of combinations of modified Bessel func- tions of the first kind. Next, we obtain asymptotic expansions of our resulting solutions for short and long-time behavior to propose a comprehensive so- lution that fuses these results. Numerical experiments further demonstrate the viability of our proposed method where only a few terms are needed to effectively approximate the temporal behavior of the surplus probability density function during periods of financial distress. Furthermore, we also show that our asymptotic expansions provide a direct correlation between the firm’s revenue gains, its claim sizes, and the current surplus behavior. The results of this work can be directly applied and extended to audit the solvency of firms operating in adverse financial conditions as well as help effectively identify potential troubled corporations at various stages.

iii Thesis Committee

Primary Readers

Antwan D. Clark (Primary Advisor) Associate Research Scientist Department of Applied Mathematics and Johns Hopkins Whiting School of Engineering

Daniel Q. Naiman (Reader) Professor Department of Applied Mathematics and Statistics Johns Hopkins Whiting School of Engineering

iv Acknowledgments

I sincerely acknowledge my research advisor, Dr. Antwan Clark for instructing me in the world of mathematical research and guiding me to write the thesis. He skillfully delivers me the knowledge needed and patiently shows me the way to become an eligible independent researcher. In addition to research, he is also an excellent counselor for my life, study and career. I can never forget the evening conversations when his opinions illuminate my enthusiasm towards diving into the deeper ocean of mathematical study and research. His encouragement helps me finish my Master’s degree and continue my further travels of pursuing Ph.D.

Moreover, I would like to express heartfelt thanks to Professor Daniel Naiman and Dr. John Miller, for their instruction and advice during my Master’s life at the Johns Hopkins University. They helped me overcome most obstacles in my study and indicated directions for me when I was perplexed.

Furthermore, I would like to appreciate my colleague Jiawen Bai, who has worked with me for a long time and contributed lots of enlightening ideas towards this work. Lastly, I also want to thank my friends, my families and

v all the people who have supported me. Their company and love provide me hope and courage to continue my study and research.

This work is based upon work supported by the U.S. Department of De- fense (DoD) under award numbers FA8075-14-D-0002-0007. The views and conclusions contained in this work are those of the author and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the DoD.

vi Table of Contents

Table of Contents vii

List of Tablesx

List of Figures xi

1 Introduction1

1.1 Motivation ...... 2

1.1.1 HPC Resource Allocation ...... 2

1.1.2 Ruin Theory ...... 3

1.2 Related Work ...... 4

1.2.1 Related Work for HPC Resource Allocation ...... 4

1.2.2 Related Work for Ruin Theory ...... 5

1.3 Approach ...... 8

2 Stochastic Fluid Queue Model for HPC Resource Allocation 10

2.1 Overview ...... 11

2.2 Transient Analysis ...... 22

vii 2.3 Asymptotic Expansions of Solutions ...... 35

2.3.1 Short-time Behavior Analysis ...... 37

2.3.2 Long-time Behavior Analysis ...... 43

2.3.3 Comprehensive Expansion ...... 50

3 Numerical Results for HPC Resource Allocation 54

3.1 Data Simulation ...... 55

3.2 Theoretical Solution ...... 57

3.3 Result ...... 63

4 Stochastic Fluid Queue Model for Ruin Theory 64

4.1 Overview ...... 65

4.2 Transient Analysis ...... 76

4.3 Asymptotic Expansions of Solutions ...... 86

4.3.1 Short-time Behavior Analysis ...... 86

4.3.2 Long-time Behavior Analysis ...... 90

4.3.3 Proposed Comprehensive Expansion ...... 93

5 Numerical Results for Ruin Theory 94

6 Conclusion and Future Work 99

7 Appendix 105

7.1 Appendix A: Codes of Numerical Results for HPC Resource

Allocation ...... 105

viii 7.2 Appendix B: MATLAB Codes of Numerical Results for Ruin Theory ...... 120

7.3 Appendix C: Technical Details for the Inverse Laplace Trans- form in Chapter2...... 133

7.4 Appendix D: Technical Details for the Inverse Laplace Trans- form in Chapter4...... 140

ix List of Tables

2.1 The Changing Behavior of Buffer Content Q(t) in HPC Context 12

2.2 The Changing Behavior of Buffer Content Q(t) ...... 14

2.3 Some Critical Values ...... 53

3.1 Summary of Numerical Comparison Methodology ...... 58

3.2 Summary of Numerical Comparison Methodology(II) . . . . . 59

3.3 Numerical Results of Comparison ...... 63

4.1 The Behavior of Buffer Content Q(τ) in the Context of Insur- ance Risk Management ...... 68

5.1 Numerical Result of Comparison ...... 96

x List of Figures

2.1 The Relationship between Buffer, Traffic and States . . . . 14

2.2 Leading Term of Asymptotic and Power Series for Bessel Func-

tions I0(ρ) ...... 49

2.3 Relative Error Behavior of Power and Asymptotic Series . . . 52

3.1 The Variation of Q(t) with respect to t ...... 56

3.2 The Variation of dQ(t)/dt with respect to t ...... 56

3.3 The Probability of Q(t) ≤ x with respect to t, Given x = 1 . . . 60

3.4 F(t, x) and Approximation via the Combination of Power Series and Asymptotic Series with the First and Second Leading Terms with respect to t, Given x = 1 ...... 61

3.5 W1(t, x), W2(t, x) and Approximation via the Combination of

Power Series and Asymptotic Series with the First and Second Leading Terms with respect to t, Given x = 1...... 61

3.6 Error of F(t, x) and the Approximation via the Combination of

Power Series and Asymptotic Series with the First and Second Leading Term with respect to t, Given x = 1...... 62

xi 4.1 Surplus Process R(t) ...... 66

4.2 Fluid Queue Process Q(τ) ...... 67

5.1 Relative error of F(τ, x) between the four-term approximated solutions and the theoretical solution where x = 1...... 97

5.2 Comparison of transient responses between F(τ, x), W1(τ, x) and W2(τ, x), where x = 1, between the four-term approximated solutions and those generated by Monte Carlo simulation. . . 98

5.3 Relative error of F(τ, x) between the four-term approximated

solutions and those generated by Monte Carlo simulation where x=1...... 98

xii Chapter 1

Introduction

1 1.1 Motivation

1.1.1 HPC Resource Allocation

High-Performance Computing (HPC) systems aim to support parallel execu- tion of high performance critical and large-scale applications with minimum initial investment, operating costs and energy consumption costs [1]. The execution of job flows in HPC systems requires multiple kinds of resources including the execution time of Central Processing Unit (CPU), the Input/Out- put (I/O) channel, the bandwidth of network and so on. The goal of HPC resource allocation is to manage the batch schedulers in the parallel computing environment to maximize the utilization of the synchronous networks [2].

Recently, researchers have widely applied HPC systems in fields of high- energy physics, geophysics and bioinformatics for complex large-scale ex- periments. To meet the increasing demand for computational science, high- frequency processors are employed to improve computational capability [3].

This change in hardware and architecture presents new challenges when mak- ing and resource allocating decisions. Firstly, with the scale of HPC systems growing, the complexity and variety of the jobs to be executed are increasing exponentially, making it harder to allocate the required resource immediately after the job burst flow occurs. Furthermore, the gap between the speed of I/O operation and that of CPU makes accessing data resources a bottleneck for the utility of HPC [4]. Therefore, there is a pressing need to estimate the distribution of the amount of required resource beforehand in a large HPC system. Predicting resource consumption in the system can help

2 the schedulers to make the proper resource allocation strategy in advance to prevent the deadlock and frequent suspend state of jobs in an HPC system.

1.1.2 Ruin Theory

Insurance companies play an essential role in the modern financial system, providing a way for corporations to transfer their risks. The nature of this busi- ness means that insurers will face more uncertainties than other enterprises.

On the other hand, the insolvency of insurers may pose a risk to society as a whole. Evidence from the past several decades suggests that the bankruptcy of insurance company is approximately three-to-five times more expensive than that of other financial institutions [5].

After the financial crisis in 2008, the insurance industry has faced more chal- lenges, which calls for more effective ways of managing the risk exposure faced by insurances companies [6]. In this environment, ruin theory becomes a core consideration for assessing insurance risk. Ruin theory assesses an insurer’s vulnerability to insolvency. The primary goal for analyzing transient behavior in ruin theory is to determine the likelihood of ruin within a partic- ular time interval, which can be either finite or infinite. Therefore, finding the exact solution of ruin probability can make it easier for us to predict the surplus for a given insurance company [7]. Moreover, managers of insurance companies can benefit from this analysis, because they can use the information to make risk management plans to protect their companies from bankruptcy.

3 1.2 Related Work

1.2.1 Related Work for HPC Resource Allocation

Recent research focusing on the operation of resource management systems is based on the queuing approaches in the environment of HPC systems [8]. For instance, Milidrag et al. model the packets transmitted in the network as fluid traffic and provide an event-driven simulation of the fluid network aswell as hybrids of packet/fluid and event/time-driven simulation strategies [9]. Anick et al. consider a physical model in which a buffer receives packets from a finite number of statistically independent and identical information sources that asynchronously alternate between exponentially distributed periods in the ON and OFF states. They analyze the equilibrium buffer distribution by a set of differential equations. The numerical results show that model is useful for a data-handling switch in a computer network [10]. Liu et al. study the problem of resource allocation in the context of stringent constraints considering the joint impact of spectral bandwidth, power, and code rate. Their work obtains an analytical expressions for the probability distribution function of buffer content in the fluid queue model, as well as its associated exponential decay rate and the effective capacity [11].

Although many advancements have been made to understand stationary behavior of the stochastic fluid queues applied in the computing world, un- derstanding transient behavior is still an open area of research [12]. Recent techniques involve the use of either Fourier-Stieltjes Transforms (FST) and

Laplace-Stieltjes Transforms (LST), where matrix analytic methods (MAMs)

4 have been used to understand transient behavior [13][14]. For example, Tanaka et al. analyze a multi-source on-off fluid queue model with input generated by a Markov modulated rate. They give an analytical solution of buffer content in a form of Laplace Transform [15]. Parthasarathy and Vijayashree’s obtain the transient solution of a fluid queue driven by a single

ON-OFF source in terms of modified Bessel’s function of the first kind using double Laplace transform [16]. However, the transient solutions are based on either recurrence relations or numerically inverse Laplace transform, without an explicit expression provided [17]. Hence, it is still hard to obtain the explicit form of solutions due to the complexity of the problem, whereas such form of solutions is useful for gaining insight and comparing the relative advantages of different numerical techniques.

1.2.2 Related Work for Ruin Theory

For the risk management of insurance industries, the solvency of a company is often regarded as a matter for the relevant supervisory authorities. However, solvency is a variable affected by many uncertainties like the perspective of claims, the impact of investment returns, interest rates and inflations [18].

In the theory of insurance risk management, because people haven’t rec- ognized or grasped the deterministic law of solvency accurately, in order to facilitate evaluation, the solvency of capability is usually regarded as a [19]. The classical model in the field of ruin theory is the Cramér–Lundberg model, also known as the compound-Poisson model,

5 which regards the surplus of insurance companies as a stochastic process with constant premium rate and Poisson distributed claim arrivals [20]. One of the famous generalizations of the Cramér–Lundberg model is given by Gerber and Shiu, in which the ruin probability is regarded as a function with respect to the initial surplus and time[21]; sequentially, the expected dis- counted penalty function is considered in the analysis of the insurer’s surplus behavior [22]. Explicit forms of the probability of ruin are given under some specific assumptions like all claims are of constant size or exponentially dis- tributed. Dufresne and Gerber further expand the model to a joint density of initial surplus, surplus before the ruin and the deficit at the ruin [23]. Laplace transformation is applied for solving the distribution density functions of ruin.

Recent works have modeled the behavior of insurance surplus process using the Stochastic Fluid Queues (SFQs). The transient analysis of the ruin probabil- ity bases on Laplace Transforms with Matrix Analytic Methods (MAMs) in the Laplace domain; however, inversions are mainly numerical [24]. Ramaswami proposes a quadratically convergent algorithm to invert the resulting matrices from the Laplace domain and provides an expression of the distribution of the first passage time for the SFQs model, which has become a basis for analyzing the transient behavior [25]. Zhou et al. apply a neural network method using the trigonometric function as the activation function and design an improved Extreme Learning Machine algorithm which is suitable for the risk model and which can calculate the probability of ruin accurately and quickly at an arbitrary time point [26]. Previous approaches have their limitations. First,

6 they only provide numerical solutions from which we can not get the correla- tion between the outputs and the parameters like the claim sizes. Second, the method also requires data for training the model, which is not always available.

Additionally, because of the significance of insurance companies, many coun- tries have established a comprehensive insurance regulation system to prevent insurance companies from bankruptcy. For example, according to the Risk- Based Capital (RBC) for Insurers Model Act, when state regulators identify a distressed insurance company, they may take control of the company’s assets, change the company’s management or change the company’s operations to help the insurer return to the marketplace. The National Association of Insur- ance Commissioners (NAIC) may also prohibit the insurer from writing new business and suspend current claims payments until the rehabilitation of the company [27]. Nonetheless, among previous research, how to evaluate the market value and potential risk of ruin have not been considered for distressed companies whose current surpluses are below the initial capital. Analyzing the situation can give a guideline to investors who intend to invest for the raising of new capital of such companies as it provides the level of risk of such investment. Meanwhile, this analysis can help the regulator to determine the likelihood of successful rehabilitation and the severity of the problem faced by the distressed insurer, so that proper action can be undertaken to minimize the risk of loss to policyholders. Together with the previous works, we provide the complete situation analysis to both the investors and regulators.

7 1.3 Approach

We analyze the transient behavior of a stochastic fluid queue fed by a single ON-OFF source considering both ON and OFF state as the initial state and get analytic solutions containing convolutions of modified Bessel functions of the first kind. Then, we perform the asymptotic analysis of the solution for both short and long time behaviors, as well as providing a comprehensive ex- pansion for all time. Numerical experiments and Monte Carlo simulations are conducted to illustrate the fidelity of our method, where absolute differences of distribution functions are computed as the criteria for evaluation. Numeri- cal results show that a few terms included in our expansion can achieve a high accuracy of the approximation. The result from Monte Carlo simulations also verifies the robustness of our proposed method. This result also revealsthe direct connection between the buffer content, the injection rate of the states and the transfer rates between states. The high accuracy of our approximation with few terms implies that our approach can be more efficient than the tradi- tional numerical methods for solving the probability distribution function of the buffer content.

We apply our result to modeling the behavior of the job burst processes in the context of HPC resource allocation and the insurance surplus process in the ruin theory. Our contributions are two-fold. First, we innovatively perform the asymptotic analysis of the solution behaviors in HPC resource allocation and ruin theory considering both short and long-time cases. Moreover, we propose an expansion form for these behaviors which construct a compositive

8 approximation of the solutions for all time. Our result provides an explicit solution for the buffer content probability distribution. The conclusion from our work can not only be applied for further understanding the HPC resource allocation and the bankruptcy determining mechanism but also be incorpo- rated into evaluating the expectation of future equity prices and the Value at

Risk (VaR) for distressed companies.

The rest of this paper is organized in the following manner. Chapter2 es- tablish the stochastic fluid queue model in the HPC environments, wherean in-depth analytical assessment and the asymptotic analysis of the transient behavior of the fluid queue are presented for short and long-time. Chapter 3 provides the numerical experiments highlighting the robustness of asymp- totic analysis for the HPC resource allocation problem. Chapter4 applies the stochastic fluid queue model in the context of insurance ruin theory, aswell as the analytical assessment and the asymptotic analysis. Chapter5 provides the numerical results for the ruin theory. Chapter6 concludes and presents some avenues of future exploration.

9 Chapter 2

Stochastic Fluid Queue Model for HPC Resource Allocation

10 2.1 Overview

In this chapter, we derive the Stochastic Fluid Queue (SFQ) model in the context of the job burst behavior analysis in the HPC environment. The rela- tionship between job sources and job waiting buffer in an HPC environment can be illustrated by the stochastic fluid model. Different types of jobs sharea common processor to execute jobs, where the limited maximum capacity of the processor is given as C packets per second (PPS). Each ball symbolizes a possible state of users with an specific job submission rate. In this scenario, each state Si in the model representing for one kind of users who submit jobs at a particular rate ri. If the total job submitting rate of all users is greater than the processing capacity C of the traffic, the excess jobs will be temporarily suspended in a buffer, namely the job waiting buffer Q(t). The content of Q(t) will be accumulated until the processor is not occupied.

Users distributed in the HPC environment may have finite virtual and physi- cal properties. These attributes construct a space of the system environment

S with finite states S = {S1, ··· , SM}. Because of the change of virtual or physical environment, the system may transition from one state Si to another state Sj with a rate λij, which is a constant determined by the relationship of two states. With the assumption that the duration within each state follows exponential distribution, we can define each of the states as an either ONor OFF state according to the total uploading rate of the whole network. Within an ON state Si, work flows into the targeted processor at a relatively steady rate of ri PPS, which is determined by the properties of current state. We can

11 analyze the behavior of the buffer content Q(t) in as Table 2.1.

Condition Description of Buffer Content Behavior

I ri > C The buffer content Q(t) increases at rate ri − C. The processor is completely occupied by the run- ning jobs. Thus, the new uploaded jobs will be suspended and wait temporarily in the buffer Q until the processor is spare.

II ri ≤ C, Q(t) > 0 The buffer content Q(t) decreases at rate C − ri. The processor is still completely occupied by the jobs. However, the job submitting rate is less than the processing capacity. Although the new jobs will still be suspended in the buffer, buffer content Q(t) will be released until it is empty.

III ri ≤ C, Q(t) = 0 The buffer content Q(t) = 0 has no change. This condition implies that jobs can be executed with- out waiting and the HPC system is efficient.

Table 2.1: The Changing Behavior of Buffer Content Q(t) in HPC Context

In HPC environments, if we want to analyze statistical properties of the buffer for an ensure the stability of the system, an effective measure is the Cumulative Distribution Function (CDF) of buffer content Q(t) at time t, namely,

i W (t, x) = P {Q(t) ≤ x, S(t) = Si} . (2.1.1)

Consequently, the unconditional cumulative distribution function of the buffer content Q(t) at time t can be given by

M F(t, x) = P (Q(t) ≤ x) = ∑ Wi(t, x). (2.1.2) i=1

12 To model the job burst behavior in the HPC environment, we consider an ON-OFF system with all possible states, expressed as

S = {S1,..., SM} , (2.1.3) where M is the total number of states. Suppose that there exists a public traffic with constant capacity C and buffer Q that are shared by all states. On each state Si ∈ S, a fluid with given rate ri will flow into the traffic. We define an

ON state Sm to be a state in which the fluid rate rm > C. Within an ON state, the public traffic is fully occupied by the injecting fluid, and the overload part of the fluid will be held in the buffer. Thus, buffer content Q(t) will increase with rate rm − C. On the other hand, an OFF state Sn is defined as the state in which the fluid rate rn ≤ C. Within an OFF state, the traffic is unoccupied, and if the buffer is not empty, the buffer content Q(t) will decrease at rate C − rn until either it is empty or the state changes to an ON state again. Thus, the relationship between the changing of buffer content Q with respect to time t and current state S(t), can be described as

{ dQ(t) r − C when Q(t) > 0, for state S = i i . (2.1.4) dt 0 otherwise

Table 2.2 summarizes such conditions and Figure 2.1 visually illustrates the relationship between the buffer and the states.

13 Condition Buffer Content Behavior Changing Rate

I S(t) = Si and ri > C Increase ri − C

II S(t) = Si, ri ≤ C and Q(t) > 0 Decrease C − ri

III S(t) = Si, ri ≤ C and Q(t) = 0 No change 0

Table 2.2: The Changing Behavior of Buffer Content Q(t)

Buffer

Recover Q(t)

Traffic Overload C

r1 r2 r3 r⍦

S1 S S3 ⍦

S2 States

Figure 2.1: The Relationship between Buffer, Traffic and States

Additionally, we assume that the system changes from state Si to Sj following a continuous time , which means the transition rate is a constant

λij and the duration within each state is exponential distributed. Then the

14 transition matrix of states is

⎛ ⎞ λ11 λ12 ... λ1M ⎜ ⎟ ⎜ ⎟ ⎜ λ21 λ22 ... λ2M ⎟ P = ⎜ ⎟, (2.1.5) ⎜ ⎟ ⎜ ··· ⎟ ⎝ ⎠ λM1 λM2 ... λMM where λij means the rate of the system to change from state Si to Sj defined as

1 λij = lim P {S(t + ∆t) = j|S(t) = Si} , (2.1.6) ∆t→0 ∆t and S(t) denotes the state of the system at time t. With the property of the transition matrix that

M M ∑ λij∆t = λii∆t + ∑ λij∆t = 1, (2.1.7) j=1 j̸=i we have M λii∆t = 1 − ∑ λij∆t. (2.1.8) j̸=i

Suppose the system stays in state Si for time Ti, and has rate λi of probability to remain in Si. By , for small ∆t > 0, the probability of the remaining time Ti to be greater than ∆t can be described as

P(Ti > ∆t) = P (S (t + ∆t) = i|S(t) = Si) . (2.1.9)

= λi∆t + O(∆t) Consequently, we have

P(Ti ≤ ∆t) = (1 − λi∆t) + O(∆t). (2.1.10)

15 If multi-transitions occur within a small time period ∆t, we have the following remark.

Remark 1.

( 2) P (Two or more transitions within ∆t|S(t) = Si) = O (∆t) . (2.1.11)

Proof. Suppose that the state for the system is Si at time t. If two or more transitions occur within ∆t, then the system goes to state Sj (j ̸= i) and leaves

Sj before t + ∆t. The memoryless property of such transitions yields the following relationship:

( ) ( ) P Ti + Tj ≤ ∆t ≤ P Ti ≤ ∆t, Tj ≤ ∆t

( ) = P (Ti ≤ ∆t) P Tj ≤ ∆t

= (1 − λ ∆t + O(∆t)) (1 − λ ∆t + O(∆t)) i j (2.1.12)

( M )( M ) = ∑ λik∆t + O (∆t) ∑ λjl∆t + O(∆t) k̸=i l̸=j

( ) ∼ O (∆t)2 .

16 Let Sj go through any state except Si. Equation (2.1.12) shows that the proba- bility of two or more transitions occurring in a time interval ∆t is

P(Two or more transitions by ∆t|S(t) = Si)

M ( ) = ∑ P Ti + Tj ≤ ∆t (2.1.13) j̸=i

( ) ∼ O (∆t)2 .

What we are interested in is the behavior of the buffer content, specifically, its probability distribution. In this case, it is reasonable for us to assume that the buffer never becomes empty, namely, Q(t) > 0, for ∀t > 0. Otherwise, we can split the analyzing process into several different time period, during each of which the buffer is non-empty, and in the time intervals between the periods, the buffer content is zero and unchanged. Then, we can analyze the buffer content for each non-empty period.

For each state Si, let

i W (t, x) = P {Q(t) ≤ x, S(t) = Si} , i = 1, . . . , M, (2.1.14) which denotes the probability of the buffer content Q(t) less or equal to a given level x while the system is in state Si. Consequently, because states are disjointed, the probability distribution function of buffet content at time t, F(t, x), can be described as the summation of Wi(t + ∆t, x) of all possible

17 states Si in the state space S, namely,

M F(t, x) = P {Q(t) ≤ x} = ∑ Wi(t + ∆t, x). (2.1.15) i=1

By equation (2.1.9) and equation (2.1.11), we have:

i i W (t + ∆t, x) = (λii∆t)W (t, x − (ri − C)∆t) (no transition)

M j ( ) + ∑ λji∆tW t, x − (rj − C)∆t (one transition) j̸=i

( ) + O ∆t2 (two or more transitions). (2.1.16)

Correlate equation (2.1.16) with the relationship between λii and λij which described in equation (2.1.8), we have

( M ) i i W (t + ∆t, x) = 1 − ∑ λij∆t W (t, x − (ri − C)∆t) j̸=i (2.1.17) M j ( ) ( 2) + ∑ λji∆tW t, x − (rj − C)∆t + O ∆t . j̸=i

Reorder terms in equation (2.1.17), we have

i i i i W (t + ∆t, x) − W (t, x) W (t, x) − W (t, x − (ri − C)∆t) + (ri − C) ∆t (ri − C)∆t

M ( 2) ( i j ( )) O ∆t = ∑ −λijW (t, x − (ri − C)∆t) + λjiW t, x − (rj − C)∆t + . j̸=i ∆t (2.1.18)

As ∆t → 0, the stochastic fluid queue is expressed as:

i i M ∂W (t, x) ∂W (t, x) ( j i ) + (ri − C) = ∑ λjiW (t, x) − λijW (t, x) , (2.1.19) ∂t ∂x j̸=i

18 where Wi(t, x) is the probability of buffer content bounded by a given level x, which defined in equation (2.1.14), ri is the injection rate of state Si, C is the link capacity and λij is the transition rate between state Si and Sj.

Suppose only one of the states in the state space S is an OFF state, denoted as

S1, and we assume that the system stays at S1 almost surely at the initial point t = 0. By definition,

1 W (0, x) = P {Q(0) ≤ x, S(0) = S1} = P {Q(0) ≤ x} (2.1.20) is the probability distribution function of initial buffer content Q(0), given by external conditions of the system. Meanwhile, the probability Wi(0, x) should be 0 for any other state Si, where i ̸= 1. Thus, initial conditions of the system can be described as the following equation, { f (x) when i = 1 Wi(0, x) = , (2.1.21) 0 otherwise where f (x) is a general cumulative distribution function with respect to x, which can be derived given the distribution of initial buffer content Q(0). Ad- ditionally, if we assume that the initial buffer is empty, namely Q(0) = 0, and the buffer content Q(t) is almost surely nonnegative afterwards, then f (x) = 1.

Boundary condition Wi(t, 0) depicts the probability of the buffer content Q(t) being empty at state Si. For all ON states, the buffer is almost surely nonempty.

19 Hence, the boundary conditions are described as { q (t) when i = 1 Wi(t, 0) = 0 . (2.1.22) 0 otherwise where q0(t) is a function with respect to time t, the value of which is the probability of Q(t) = 0. If we assume that the buffer content will never be empty then q0(t) = 0.

Furthermore, assuming that Q(t) < ∞ for any t < ∞, by the definition of Wi(t, x),

M M i lim W (t, x) = lim P {Q(t) < x, S(t) = Si} = lim P {Q(t) < x} = 1, x→+∞ ∑ x→∞ ∑ x→∞ i=1 i=1 (2.1.23) which gives us a boundary condition for the system as x → ∞. Condition (2.1.23) implies for all i, the solution Wi(t, x) should be bounded as x → ∞.

20 Therefore, the cumulative distribution functions of the Stochastic Fluid Queue are governed by

i i M ∂W (t, x) ∂W (t, x) ( j i ) + (ri − C) = ∑ λjiW (t, x) − λijW (t, x) , i = 1, ··· , M ∂t ∂x j̸=i (2.1.19) with the initial condition { f (x) when i = 1 Wi(0, x) = (2.1.21) 0 otherwise and boundary conditions

{ q (t) when i = 1 Wi(t, 0) = 0 (2.1.22) 0 otherwise, and M lim Wi(t, x) = 1 .(2.1.23) x→+∞ ∑ i=1

21 2.2 Transient Analysis

In this section, we analyze the transient behavior of the system elaborated in Section 2.1. For simplicity, a special case of SFQ fed by a single ON-OFF source is discussed. By taking Laplace transformation and inverse Laplace transformation, a theoretical solution of the SFQ is provided with given initial and boundary conditions.

Let Lt denotes the Laplace Transform with respect to t, and f (t, x) is a contin- uous function with respect to t and x. Then, we have the following properties: Property 1. {∂ f (t, x)} L = s fˆ(s, x) − f (0, x). (2.2.1) t ∂t Property 2. {∂ f (t, x)} d L = fˆ(s, x). (2.2.2) t ∂x dx We apply these properties to perform transient analysis of the solutions for the SFQ.

For the special case of a single ON-OFF source, the state space are constructed as S = {S1, S2}, where S1 is the initial OFF state, and S2 is the incremental ON state. Let ϕi = ri − C, i = 1, 2 denote the net injection rate of state i. Then the SFQ will be simplified to a couple system of the form:

∂W1(t, x) ∂W1(t, x) + ϕ = −λ W1(t, x) + λ W2(t, x) ∂t 1 ∂x 12 21 , (2.2.3) ∂W2(t, x) ∂W2(t, x) + ϕ = λ W1(t, x) − λ W2(t, x) ∂t 2 ∂x 12 21

22 subject to the initial conditions

W1(0, x) = 1 W2(0, x) = 0. (2.2.4) and boundary conditions

1 2 W (t, 0) = q0(t) W (t, 0) = 0. (2.2.5) and

lim W1(t, x) + W2(t, x) = 1 (2.2.6) x→+∞ where q0(t) is the probability of the buffer to be empty with respect to time t.

Take Laplace transform for equations (2.2.3) with respect to time t, yielding

dWˆ 1(s, x) sWˆ 1(s, x) − W1(0, x) + ϕ = −λ Wˆ 1(s, x) + λ Wˆ 2(s, x) 1 dx 12 21 (2.2.7) dWˆ 2(s, x) sWˆ 2(s, x) − W2(0, x) + ϕ = λ Wˆ 1(s, x) − λ Wˆ 2(s, x) 2 dx 12 21 Plug the initial condition in, new equations after reconstruction are given as

dWˆ 1(s, x) λ + s λ 1 = − 12 Wˆ 1(s, x) + 21 Wˆ 2(s, x) + dx ϕ1 ϕ1 ϕ1 (2.2.8) dWˆ 2(s, x) λ λ + s = 12 Wˆ 1(s, x) − 21 Wˆ 2(s, x), dx ϕ2 ϕ2 subject to the initial conditions,

1 Wˆ (s, 0) = Lt{q0(t)} := qˆ0(s), (2.2.9) 2 Wˆ (s, 0) = Lt{0} = 0.

23 Let ⎛ ⎞ Wˆ 1(s, x) Wˆ = Wˆ (s, x) = ⎝ ⎠ , (2.2.10) Wˆ 2(s, x) denote the vector function with respect to a single variable x. then general form for equations (2.2.8) can be written as

dWˆ (x) = A · Wˆ + H, (2.2.11) dx where

⎛ ⎞ λ12 + s λ21 ⎜ − ⎟ ⎜ ϕ1 ϕ1 ⎟ A = ⎜ ⎟ (2.2.12) ⎝ λ λ + s ⎠ 12 − 21 ϕ2 ϕ2 and ⎛ 1 ⎞ ⎜ ϕ ⎟ H = ⎝ 1 ⎠ , (2.2.13) 0 subject to the initial conditions ⎛ ⎞ qˆ0(s) Wˆ (s, 0) = ⎝ ⎠ , (2.2.14) 0 where

qˆ0(s) = Lt {q0(t)} . (2.2.15)

First, we should find the solution for the homogeneous differential equations:

dWˆ = A · Wˆ . (2.2.16) dx

To solve the equation system, first we calculate the eigenvalues of A. Let

24 ω0 (s) , ω1 (s) be the eigenvalues of A . ω0 (s) , ω1 (s) are the roots of function:

⏐ ⏐ ⏐ 1 1 ⏐ ⏐ − (λ + s) − ω λ ⏐ ⏐ ϕ 12 ϕ 21 ⏐ ⏐ 1 1 ⏐ = 0, (2.2.17) ⏐ 1 1 ⏐ ⏐ λ12 − (λ21 + s) − ω ⏐ ⏐ ϕ2 ϕ2 ⏐ which can also be written as

2 ϕ1ϕ2ω + [(ϕ1 + ϕ2) s + λ12ϕ2 + λ21ϕ1] ω + s (s + λ12 + λ21) = 0. (2.2.18)

Solving the equation above yields √ (ϕ1 + ϕ2) s + λ12ϕ2 + λ21ϕ1 T (s) ω0 (s) = − + 2ϕ1ϕ2 2ϕ1ϕ2 (2.2.19) √ (ϕ1 + ϕ2) s + λ12ϕ2 + λ21ϕ1 T (s) ω1 (s) = − − , 2ϕ1ϕ2 2ϕ1ϕ2 where

2 T (s) = [(ϕ1 + ϕ2) s + λ12ϕ2 + λ21 (ϕ1)] − 4ϕ1ϕ2s (s + λ12 + λ21) , (2.2.20) with the assumption that T (s) > 0. Thus, the eigenvectors of A are

⎛ ⎞ (λ21 + s) + ϕ2ω0 (s) ⎜ ⎟ ⎜ ϕ2 ⎟ V0 = ⎜ ⎟ , (2.2.21) ⎝ λ12 ⎠ ϕ2 ⎛ ⎞ (λ21 + s) + ϕ2ω1 (s) ⎜ ⎟ ⎜ ϕ2 ⎟ V1 = ⎜ ⎟ . (2.2.22) ⎝ λ12 ⎠ ϕ2

25 Consequently, we have

ω0x ω1x Wˆ = K0V0e + K1V1e , (2.2.23)

where K0 and K1 are constants to be determined by the boundary conditions. Let

Φ = Φ(s, x) =

⎛ ⎞ (λ + s) + ϕ ω (s) (λ + s) + ϕ ω (s) 21 2 0 ω0x 21 2 1 ω1x (2.2.24) ⎜ e e ⎟ ⎜ ϕ2 ϕ2 ⎟ ⎜ ⎟ , ⎝ λ λ ⎠ 12 eω0x 12 eω1x ϕ2 ϕ2 then equation (2.2.23) can be reconstructed as

Wˆ = Φ · K, (2.2.25) where T K = (K0, K1) (2.2.26) is an arbitrary constant vector with respect to x.

Now we try to solve the non-homogeneous equation system (2.2.11). Ap- ply the method of variation of constants, equation (2.2.11) has solution with the form of ( ∫ x ) Wˆ = Φ K + Φ−1 (s, y) · Hdy , (2.2.27) 0

26 where K is determined by the initial conditions and

2 −1 ϕ1ϕ2 Φ (s, x) = √ · λ T (s)

⎛ ⎞ λ λ + s + ϕ ω (s) (2.2.28) 12 −ω0(s)x 21 2 1 ω0(s)x ⎜ e − e ⎟ ⎜ ϕ2 ϕ2 ⎟ ⎜ ⎟ , ⎝ λ λ + s + ϕ ω (s) ⎠ − 12 e−ω1(s)x 21 2 0 e−ω1(s)x ϕ2 ϕ2

Plug in the initial condition, we have

( ∫ 0 ) Wˆ (s, 0) = Φ (s, 0) K + Φ−1 (s, y) · Hdy , (2.2.29) 0 and consequently,

K = Φ−1 (s, 0) Wˆ (s, 0) . (2.2.30)

Therefore we have the expression for Wˆ (s, x) as follows

( ∫ x ) Wˆ (s, x) = Φ (s, x) · Φ−1 (s, 0) · Wˆ (s, 0) + Φ−1 (s, y) · Hdy . (2.2.31) 0

With the boundary condition ⎛ ⎞ ⎛ ⎞ Lt {q0 (t)} qˆ0 (s) Wˆ (s, 0) = ⎝ ⎠ = ⎝ ⎠ , (2.2.32) 0 0 substitute (2.2.28) and (2.2.32) into equation (2.2.31), we will get ⎛ ⎞ qˆ (s) −1 ˆ ϕ1ϕ2 0 Φ (s, 0) · W (s, 0) = √ ⎝ ⎠ (2.2.33) T (s) −qˆ0 (s)

27 and ⎛ ϕ ( ) ⎞ − 2 e−ω0(s)x − ∫ x √ 1 −1 ⎜ T (s)ω0 (s) ⎟ Φ (s, y) · Hdy = ⎜ ϕ ( ) ⎟ . (2.2.34) 0 ⎝ 2 −ω1(s)x ⎠ √ e − 1 T (s)ω1 (s)

Thus,

Wˆ (s, x) =

⎛ ⎞ + ( λ +s+ϕ ω (s) λ +s+ϕ ω (s) ) λ21 s √ 1 21 2 0 ω0(s)x 21 2 1 ω1(s)x ⎜ ( + + ) + ω e − ( ) e ⎟ ⎜ s s λ12 λ21 T(s) 0 ω1 s ⎟ ⎜ ( ) ⎟ ⎜ (λ +s+ϕ ω (s)) (λ +s+ϕ ω (s)) ⎟ ⎜ +ϕ 21 √ 2 0 eω0(s)x − 21 √ 2 1 eω1(s)x qˆ (s) ⎟ ⎜ 1 ( ) ( ) 0 ⎟ ⎜ T s T s ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ . ⎜ ⎟ ⎜ ( ) ⎟ ⎜ ω0(s)x ω1(s)x ⎟ ⎜ λ12 λ12 e e ⎟ ⎜ + √ − ⎟ ⎜ s (s + λ12 + λ21) T (s) ω0 (s) ω1 (s) ⎟ ⎜ ⎟ ⎜ λ ϕ ( ) ⎟ ⎝ 12 1 ω0(s)x ω1(s)x ⎠ +√ e − e qˆ0 (s) T (s) (2.2.35)

The boundary condition (2.2.6) implies that Wˆ (s, x) is bounded as x → ∞.

ω (s)x However, notice that ω1(s) > 0 for s > 0, and e 1 diverge to infinity as x → ∞, in order to satisfy the boundary condition, qˆ0(s) has to be properly chosen so that the exponential terms with respect to x are canceled. Therefore, qˆ0(s) needs to satisfy 1 qˆ0 (s) = − , (2.2.36) ϕ2ω1 (s)

28 which yields

⎛ ⎞ ϕ2 ⎜ −√ ⎟ ⎜ T (s)ω (s) ⎟ K = ⎜ 1 ⎟ . (2.2.37) ⎜ ⎟ ⎝ ϕ2 ⎠ √ T (s)ω1 (s)

Substituting into equation (2.2.29), Wˆ (s, x) is simplified to

⎛ ( −ω (s)x ) ⎞ ϕ2 e 0 1 1 ⎜ √ − + − ⎟ ⎜ T (s) ω0 (s) ω0 (s) ω1 (s) ⎟ Wˆ (s, x) = Φ(s, x) · ⎜ ⎟ ⎜ −ω (s)x ⎟ ⎝ ϕ2 e 1 ⎠ √ T (s) ω1 (s)

⎛ ⎞ s + + (s) λ21 1 λ12 1 λ21 ϕ2ω0 ω0(s)x ⎜ · + · − · e ⎟ ⎜ λ12 + λ21 s λ12 + λ21 s + λ12 + λ21 s (s + λ12 + λ21) ⎟ = ⎜ ⎟ . ⎝ λ (1 1 ) ( ) ⎠ 12 · − · 1 − eω0(s)x λ12 + λ21 s s + λ12 + λ21 (2.2.38)

Next, we take the inverse Laplace Transform of Wˆ (s, x) where Wˆ 1 (s, x) and Wˆ 2 (s, x) are given by

( λ )(1) ( λ )( 1 ) Wˆ 1 (s, x) = 21 + 12 λ12 + λ21 s λ12 + λ21 s + λ12 + λ21 (2.2.39) (s + λ + ϕ ω (s)) − 21 2 0 eω0(s)x s (s + λ12 + λ21) and

( λ )(1 1 ) ( ) Wˆ 2 (s, x) = 12 − 1 − eω0(s)x . (2.2.40) λ12 + λ21 s s + λ12 + λ21

29 Consider that

2 T (s) = [(ϕ1 + ϕ2) s + λ12ϕ2 + λ21 (ϕ1)] − 4ϕ1ϕ2s (s + λ12 + λ21)

= [( − ) s + − ]2 + ϕ2 ϕ1 λ12ϕ2 λ21ϕ1 4λ12λ21ϕ1ϕ2 (2.2.41) ( ) ( λ ϕ − λ ϕ )2 4λ λ ϕ ϕ = (ϕ − ϕ )2 × s + 12 2 21 1 + 12 21 1 2 . 2 1 − 2 ϕ2 ϕ1 (ϕ2 − ϕ1)

Recall the assumption that S2 is the ON state and S1 is the OFF state, we have

ϕ2 > 0 ≥ ϕ1, we have

√ (ϕ1 + ϕ2) s + λ12ϕ2 + λ21ϕ1 T (s) ω0 (s) = − + 2ϕ1ϕ2 2ϕ1ϕ2 √ 1 (( 1 1 ) ( λ λ )) 1 ( 1 1 ) T (s) = − + + 12 + 21 + − s 2 2 ϕ1 ϕ2 ϕ1 ϕ2 2 ϕ1 ϕ2 (r2 − r1) (2.2.42) ( s λ ) 1 ( 1 1 )( λ ϕ − λ ϕ ) = − + 21 − − s + 12 2 21 1 ϕ2 ϕ2 2 ϕ1 ϕ2 ϕ2 − ϕ1

[ ] 1 1 ( 1 1 ) ( λ ϕ − λ ϕ )2 4λ λ ϕ ϕ 2 + − · + 12 2 21 1 + 12 21 1 2 s 2 . 2 ϕ1 ϕ2 ϕ2 − ϕ1 (ϕ2 − ϕ1)

Thus, ω0(s) can be defined as

ω0 (s) = β (s) + η (s) , (2.2.43) where β(s) and η(s) are defined as

( 1 λ ) β (s) = − s + 21 (2.2.44) ϕ2 ϕ2

( 1 ) ( ) 2 η (s) = b (s + a) − (s + a)2 − α2 , (2.2.45)

30 and the constants a, b and α2 are defined as

λ ϕ − λ ϕ a = 12 2 21 1 (2.2.46) ϕ2 − ϕ1

1 ( 1 1 ) b = − − (2.2.47) 2 ϕ1 ϕ2

4λ λ ϕ ϕ 2 = − 12 21 1 2 α 2 . (2.2.48) (ϕ2 − ϕ1) Next, we take inverse Laplace Transform of Wˆ 1 (s, x) with respect to s, which yields

⎧ λ λ x ⎪ 21 12 −(λ12+λ21)t ⎪ + e , 0 < t < ⎪λ12 + λ21 λ12 + λ21 ϕ2 ⎨⎪ λ λ W1 (t, x) = 21 + 12 e−(λ12+λ21)t (2.2.49) ⎪λ12 + λ21 λ12 + λ21 ⎪ λ21 ∫ t ⎪ − ϕ x x ⎪ −e 2 f1(t − v, x) · h (v, x) dv, t > . ⎩ 0 ϕ2 where

bα2ϕ ( 1 ) ( ) = 2 −at [ ( )] − [ ( )] , (2.2.50) h t, x e I0 ρ t, x 2 I2 ρ t, x 2 (λ12 + λ21) κ (t, x)

( x ) −(λ12+λ21) t− ϕ f1(t, x) = 1 − e 2 , (2.2.51)

1 1 (t + 2bx ) 2 ( ( 1 1 ) x ) 2 κ (t, x) = = 1 − − , (2.2.52) t ϕ1 ϕ2 t

ρ(t, x) = ακ(t, x)t, (2.2.53)

31 and In(x), n = 0, 1, 2 are the modified Bessel functions of the first kind.

Similarly, we have { } W2 (t, x) = L−1 Wˆ 2 (s, x)

{( λ )(1 1 ) ( )} = L−1 12 − 1 − eω0(s)x λ12 + λ21 s s + λ12 + λ21 (2.2.54) λ ( {1} { 1 } = 12 · L−1 − L−1 λ12 + λ21 s s + λ12 + λ21

{(1 1 ) }) −L−1 − eω0(s)x , s s + λ12 + λ21 which yields

⎧ λ ( ) x ⎪ 12 −(λ12+λ21)t ⎪ 1 − e , 0 < t < ⎪λ12 + λ21 ϕ2 ⎨⎪ λ ( W2 (t, x) = 12 1 − e−(λ12+λ21)t (2.2.55) ⎪λ12 + λ21 ⎪ λ21 ∫ t ) ⎪ − ϕ x x ⎪ −e 2 f1(t − v, x) · g (v, x) dv , t > , ⎩ 0 ϕ2 where f1(t, x) is defined in (2.2.51) and g(t, x) is defined as

( α ( ) ) g(t, x) = e−at κ (t, x)2 − 1 I [ρ(t, x)] + δ (t) (2.2.56) 2κ (t, x) 1 where δ(t) is the Dirac function.

32 In summary, we get solutions of the Stochastic Fluid Queue which are shown as the following:

⎧ λ λ x ⎪ 21 12 −(λ12+λ21)t ⎪ + e , 0 < t < ⎪λ12 + λ21 λ12 + λ21 ϕ2 ⎨⎪ λ λ W1 (t, x) = 21 + 12 e−(λ12+λ21)t (2.2.49) ⎪λ12 + λ21 λ12 + λ21 ⎪ λ21 ∫ t ⎪ − ϕ x x ⎪ −e 2 f1(t − v, x) · h (v, x) dv, t > . ⎩ 0 ϕ2 and

⎧ λ ( ) x ⎪ 12 −(λ12+λ21)t ⎪ 1 − e , 0 < t < ⎪λ12 + λ21 ϕ2 ⎨⎪ λ ( W2 (t, x) = 12 1 − e−(λ12+λ21)t (2.2.55) ⎪λ12 + λ21 ⎪ λ21 ∫ t ) ⎪ − ϕ x x ⎪ −e 2 f1(t − v, x) · g (v, x) dv , t > , ⎩ 0 ϕ2 where

bα2ϕ ( 1 ) ( ) = 2 −at [ ( )] − [ ( )] ,(2.2.50) h t, x e I0 ρ t, x 2 I2 ρ t, x 2 (λ12 + λ21) κ (t, x)

( α ( ) ) g(t, x) = e−at κ (t, x)2 − 1 I [ρ(t, x)] + δ (t) ,(2.2.56) 2κ (t, x) 1

1 ( ( 1 1 ) x ) 2 κ (t, x) = 1 − − ,(2.2.52) ϕ1 ϕ2 t

ρ(t, x) = ακ(t, x)t,(2.2.53) and ( x ) −(λ12+λ21) t− ϕ f1(t, x) = 1 − e 2 (2.2.51)

33 where the theoretical details of computing the inverse Laplace transform can be found in Appendix C (see Section 7.3).

34 2.3 Asymptotic Expansions of Solutions

It is noteworthy to mention that solutions given by (2.2.49) and (2.2.55) are not of concise form, especially the inclusion of modified Bessel functions in the integral expression. Therefore, understanding the transient behavior is limited. To overcome these restrictions, we here introduce a comprehensive expansion for a more straightforward analysis of the properties of W1(t, x) and W2(t, x). The vital approximation of the solutions can be provided with only the first few leading terms of them. To be specific, we can try torepresent the convolution expressions from equation (2.2.49) and (2.2.55) in terms of asymptotic expansions, which the first few terms will show the dominating behavior. This expansion makes our analysis easier while also being accurate. Additionally, the solution can be easily achieved computationally. In this sec- tion, we will expand Bessel functions for short-time and long-time behavior as t → 0 and t → ∞, respectively.

Notice that the integrals in solutions (2.2.49) and (2.2.55) can be expressed as (2.3.1) and (2.3.2),

−λ21 ∫ t −λ21 ∫ t ϕ x ϕ x e 2 f1(t − v, x)h (v, x) dv = −e 2 h(v, x)dv 0 0 (2.3.1) { } λ12 ∫ t −(λ12+λ21)t+ x (λ +λ )v + e ϕ2 e 12 21 h(v, x)dv, 0

35 λ21 ∫ t −λ21 ∫ t − ϕ x ϕ x e 2 f1(t − v, x)g (v, x) dv = −e 2 g(v, x)dv 0 0 (2.3.2) { } λ12 ∫ t −(λ12+λ21)t+ x (λ +λ )v + e ϕ2 e 12 21 g(v, x)dv. 0

Therefore, when t > x ϕ2

−λ ∫ t 1 λ21 λ12 −(λ +λ )t 21 x W (t, x) = + e 12 21 + e ϕ2 h(v, x)dv λ12 + λ21 λ12 + λ21 0

{ } λ12 ∫ t −(λ12+λ21)t+ x (λ +λ )v − e ϕ2 e 12 21 h(v, x)dv, 0 (2.3.3)

( −λ ∫ t 2 λ12 −(λ +λ )t 21 x W (t, x) = 1 − e 12 21 + e ϕ2 g(v, x)dv λ12 + λ21 0

{ } λ12 ∫ t ) −(λ12+λ21)t+ x (λ +λ )v − e ϕ2 e 12 21 g(v, x)dv . 0 (2.3.4)

The key terms we should focus on to analyze W1(t, x) and W2(t, x) are ∫ t ∫ t ∫ t ∫ t h(v, x)dv, e(λ12+λ21)vh(v, x)dv, g(v, x)dv, and e(λ12+λ21)vg(v, x)dv. 0 0 0 0

36 2.3.1 Short-time Behavior Analysis — Power Series Expansion

For short-time behavior, from equation (2.2.53) we know that for any x ∈ R,

ρ = ρ(t, x) → 0 as t → 0. The modified Bessel function In(ρ) can be expressed using power series expanding approach around ρ = 0 as,

(ρ)n ∞ 1 (ρ)2k I (ρ) = IP (ρ) = . (2.3.5) n n ∑ ( + ) ( + + ) 2 k=0 Γ k 1 Γ n 1 k 2

P Expression (2.3.53) is defined as the power series of In(ρ), denoted by In (ρ).

P,m P Let In (ρ) denotes the sum of preceding m terms of In (ρ), namely,

(ρ)n m 1 (ρ)2k IP,m(ρ) = , (2.3.6) n ∑ ( + ) ( + + ) 2 k=0 Γ k 1 Γ γ 1 k 2 then

⏐ ⏐ ⏐ ⏐ ⏐(ρ)n ∞ 1 (ρ)2k⏐ ⏐I (ρ) − IP,m(ρ)⏐ = ⏐ ⏐ ⏐ n n ⏐ ⏐ ∑ ( + ) ( + + ) ⏐ ⏐ 2 k=m+1 Γ k 1 Γ γ 1 k 2 ⏐

⏐ 2⏐ ⏐ ∞ ( ) ⏐ ⏐(ρ)n ρk ⏐ < ⏐ ⏐ ⏐ 2 ∑ Γ(k + 1) ⏐ ⏐ k=m+1 ⏐ (2.3.7)

⏐(ρ)n ⏐ < ⏐ o(ρ2m)⏐ ⏐ 2 ⏐

∼ o(ρm).

Thus, as ρ → 0,

P,m lim In (ρ) = In(ρ), (2.3.8) m→∞

37 which means it is reasonable for us to use power series to estimate In(ρ) for small ρ.

Substitute expression (2.3.53) into (2.2.50) and (2.2.56), we have the power series expansions of h(t, x) and g(t, x) as

⎛ 2m ∞ ( 2 1 ) λ λ ϕ 1 α(t + 2bxt) 2 ( ) ∼ 12 21 2 −at · h t, x e ⎝ ∑ 2 (ϕ2 − ϕ1)(λ12 + λ21) m=0 Γ (m + 1) 2

2 2m⎞ ( 2 1 ) ∞ ( 2 1 ) t α(t + 2bxt) 2 1 α(t + 2bxt) 2 − ∑ ⎠ t + 2bx 2 m=0 Γ(m + 1)Γ(m + 3) 2

( ( ( )2 )) λ λ ϕ ∞ ( α )2m (t2 + 2bxt)m t2 α ∼ 12 21 2 −at · − 2 e ∑ 2 1 (ϕ2 − ϕ1)(λ12 + λ21) m=0 2 Γ (m + 1) (m + 1)(m + 2) (2.3.9) and

( 2 1 2 − 1 −at α(t + 2bxt) 2 g(t, x) ∼ δ(t) + αbx(t + 2bxt) 2 e × 2

2m⎞ ∞ ( 2 1 ) 1 α(t + 2bxt) 2 ∑ ⎠ (2.3.10) m=0 Γ(m + 1)Γ(m + 2) 2

( ) λ λ x ∞ ( α )2m (t2 + 2bxt)m ∼ δ(t) + 12 21 e−at · ∑ . ϕ2 − ϕ1 m=0 2 Γ(m + 1)Γ(m + 2)

38 Therefore, ∫ t h(v, x)dv 0

λ λ ϕ ∞ α2m ∼ 12 21 2 · × ∑ 2 2m (ϕ2 − ϕ1)(λ12 + λ21) m=0 Γ (m + 1)2 ( ) ∫ t v2 ( α )2 e−av · (v2 + 2bxv)m 1 − 2 dv 0 (m + 1)(m + 2)

λ λ ϕ ∞ α2m ∼ 12 21 2 · × ∑ 2 2m (ϕ2 − ϕ1)(λ12 + λ21) m=0 Γ (m + 1)2

(∫ t α2 ∫ t ) e−av · (v2 + 2bxv)mdv − e−av · (v + 2bx)mvm+2dv 0 4(m + 1)(m + 2) 0 (2.3.11) and ∫ t e(λ12+λ21)vh(v, x)dv 0

λ λ ϕ ∞ α2m ∼ 12 21 2 · × ∑ 2 2m (ϕ2 − ϕ1)(λ12 + λ21) m=0 Γ (m + 1)2

(∫ t α2 ∫ t ) e−av˜ · (v2 + 2bxv)mdv − e−av˜ · (v + 2bx)mvm+2dv . 0 4(m + 1)(m + 2) 0 (2.3.12)

Analogously,

∫ t g(v, x)dv 0 ( ) ∫ t λ λ x ∞ ( α )2m (v2 + 2bxv)m ∼ δ(t) + 12 21 e−av · ∑ dv 0 ϕ2 − ϕ1 m=0 2 Γ(m + 1)Γ(m + 2) (2.3.13) λ λ x ∫ t ∞ ( α )2m (v2 + 2bxv)me−av ∼ 1 + 12 21 ∑ dv ϕ2 − ϕ1 0 m=0 2 Γ(m + 1)Γ(m + 2)

λ λ x ∞ α2m ∫ t ∼ + 12 21 ( 2 + )m −av 1 ∑ 2m v 2bxv e dv ϕ2 − ϕ1 m=0 2 Γ(m + 1)Γ(m + 2) 0

39 and ∫ t λ λ x e(λ12+λ21)vg(v, x)dv ∼ 1 + 12 21 × 0 ϕ2 − ϕ1 (2.3.14) ∞ α2m ∫ t ( 2 + )m −av˜ ∑ 2m v 2bxv e dv m=0 2 Γ(m + 1)Γ(m + 2) 0 where a and a˜ are defined as

λ ϕ − λ ϕ λ ϕ − λ ϕ a = 12 2 21 1 , a˜ = 12 1 21 2 , (2.3.15) ϕ2 − ϕ1 ϕ2 − ϕ1

α2 and b are defined in (2.2.48) and (2.2.47) respectively.

For any positive integer m,

m (m) (v + 2bx)m = ∑ vk(2bx)m−k, (2.3.16) k=0 k

Thus,

∫ t ∫ t m (m) (v + 2bx)m vme−avdv = ∑ (2bx)m−kvm+ke−avdv 0 0 k=0 k (2.3.17) m (m) ∫ t = ∑ (2bx)m−k vm+ke−avdv, k=0 k 0 where ∫ t m+k −av 1 v e dv = − + + (Γ(m + k + 1, at) − Γ(m + k + 1, 0)) 0 am k 1 ( ) (2.3.18) 1 m+k (at)l = − (m + k) e−at − m+k+1 ! ∑ 1 . a l=0 l!

From that step we expand the integrals as ( ) ∫ t m (m) (2bx)m−k(m + k)! m+k (at)l (v + bx)m vme−avdv = − e−at 2 ∑ m+k+1 1 ∑ , 0 k=0 k a l=0 l! (2.3.19)

40 and ( ) ∫ t m (m) (2bx)m−k(m + k + 2)! m+k+2 (at)l (v + bx)m vm+2e−avdv = − e−at 2 ∑ m+k+3 1 ∑ . 0 k=0 k a l=0 l! (2.3.20)

Consequently, we get the power series expansions of the integrals of h as:

∫ t λ λ ϕ ∞ α2mΩ (a, b, a; x, t) h(v, x)dv = 12 21 2 m , (2.3.21) ( − )( + ) ∑ 2( + ) 2m 0 ϕ2 ϕ1 λ12 λ21 m=0 Γ m 1 2

∫ t ∞ 2m λ λ ϕ2 α Ωm(a, b, a˜; x, t) (λ12+λ21)v ( ) = 12 21 e h v, x dv ∑ 2 2m , (2.3.22) 0 (ϕ2 − ϕ1)(λ12 + λ21) m=0 Γ (m + 1)2 where { m (m) (2bx)m−k(m + k)! m+k (γt)l (a b x t) = × − e−γt Ωm , , γ; , ∑ m+k+1 1 ∑ k=0 k a l=0 l! ( )} α2 m (m) (2bx)m−k(m + k + 2)! m+k+2 (γt)l − 1 − e−γt , ( + )( + ) ∑ m+k+3 ∑ 4 m 1 m 2 k=0 k a l=0 l! (2.3.23)

Similarly,

∫ t λ λ x ∞ α2mχ (a, b, a; x, t) g(v, x)dv = 1 + 12 21 m , (2.3.24) − ∑ 2m ( + ) ( + ) 0 ϕ2 ϕ1 m=0 2 Γ m 1 Γ m 2

∫ t ∞ 2m λ λ x α χm(a, b, a˜; x, t) e(λ12+λ21)vg(v, x)dv = 1 + 12 21 × , (2.3.25) − ∑ 2m ( + ) ( + ) 0 ϕ2 ϕ1 m=0 2 Γ m 1 Γ m 2 where ( ) m (m)(2bx)m−k(m + k)! m+k (γt)l (a b x t) = − e−γt χm , , γ; , ∑ m+k+1 1 ∑ . (2.3.26) k=0 k a l=0 l!

41 In conclusion, for short time, the solution W1(t, x) and W2(t, x) can be ex- pressed as

⎧ λ21 λ12 −(λ +λ )t x ⎪ + e 12 21 , for 0 < t < ⎪ λ12 + λ21 λ12 + λ21 ϕ2 ⎪ ⎪ ⎪ ⎪ λ λ ⎪ 21 12 −(λ12+λ21)t 1 ⎨ + e W (t, x) = λ12 + λ21 λ12 + λ21 ⎪ λ λ ϕ ∞ α2mΩ (a, b, a; x, t) ⎪ + 12 21 2 m ⎪ ∑ 2 2m ⎪ (ϕ2 − ϕ1)(λ12 + λ21) Γ (m + 1)2 ⎪ m=0 ⎪ λ λ ϕ ∞ α2mΩ (a, b, a˜; x, t) x ⎪ − 12 21 2 m , for t > ⎩⎪ ( − )( + ) ∑ 2( + ) 2m ϕ2 ϕ1 λ12 λ21 m=0 Γ m 1 2 ϕ2 (2.3.27) and ⎧ λ12 ( −( + ) ) x ⎪ 1 − e λ12 λ21 t , for 0 < t < ⎪ λ + λ ϕ ⎪ 12 21 2 ⎪ ⎪ ⎪ λ [ ⎪ 12 − −(λ12+λ21)t ⎪ 1 e ⎨⎪ λ12 + λ21 2 −λ ( ∞ 2m ) W (t, x) = 21 x λ12λ21x α χm(a, b, a; x, t) +e ϕ2 1 + ⎪ − ∑ 2m (m + ) (m + ) ⎪ ϕ2 ϕ1 m=0 2 Γ 1 Γ 2 ⎪ ( )] ⎪ λ12 ∞ 2m ⎪ −(λ12+λ21)t+ x λ12λ21x α χm(a, b, a˜; x, t) ⎪ − e ϕ2 1 + , ⎪ − ∑ 2m (m + ) (m + ) ⎪ ϕ2 ϕ1 m=0 2 Γ 1 Γ 2 ⎪ ⎩⎪ for t > x ϕ2 (2.3.28)

42 2.3.2 Long-time Behavior Analysis — Asymptotic Series Expansion

Because ρ = ρ(t, x) → ∞ as t → ∞, in order to analyze the asymptotic long- time behavior of t → ∞, it is equivalent to analyze the situation where ρ → ∞. Thus in this subsection we will mainly discuss the behavior of modified Bessel functions of the first kind In(ρ) as ρ → ∞. Notice that In(ρ) can be expressed in the following integral form for any n ∈ Z:

∫ π 1 ρ cos θ In(ρ) = e cos (nθ) dθ. (2.3.29) π 0

Let φ(θ) = ρ cos θ, (2.3.30) notice that φ(0) is the global maximum of φ(θ) for θ ∈ (0, π), and that

φ′(0) = 0, φ′′(0) = −ρ cos(0) = −ρ < 0. (2.3.31)

We apply Laplace’s Method by replacing the exponential term φ(θ) with its local series expansion around 0

φ′′(0)θ2 θ2 φ(θ) ∼ φ(0) + φ′(0)θ + ∼ ρ(1 − ), (2.3.32) 2 2

43 and substitute cos(nθ) with its Taylor’s series expansion, and get

( 2 ) ∞ k 2k 1 ∫ π ρ 1− θ (−1) (nθ) I (ρ) ∼ e 2 dθ n ∑ ( ) π 0 k=0 2k !

ρ ∫ π 2 ∞ k 2k e − θ ρ (−1) (nθ) ∼ e 2 dθ (2.3.33) ∑ ( ) π 0 k=0 2k !

ρ ∞ ∫ π 2 k 2k e − θ ρ (−1) (nθ) ∼ e 2 dθ. ∑ ( ) π k=0 0 2k !

If we focus on the first term of equation (2.3.33), namely,

ρ ∫ π 2 e −ρ θ In(ρ) ∼ e 2 dθ, (2.3.34) π 0

1 ( ρ ) 2 and let u = θ · 2 , we will get

1 √ ρ ρ ( ) 2 ∫ π e 2 2 −u2 In(ρ) ∼ e du. (2.3.35) π ρ 0

As ρ → ∞, In(ρ) becomes

1 ρ ( ) 2 ∫ ∞ e 2 −u2 In(ρ) ∼ e du π ρ 0 (2.3.36) eρ ∼ . √2πρ

44 For more terms included, as ρ → ∞, equation (2.3.33) becomes

ρ ∞ k 2k ∫ π e (−1) n − ρ θ2 2k I (ρ) ∼ e 2 θ dθ n ∑ ( ) π k=0 2k ! 0

1 1 ρ ∞ k 2k π ρ 2 ( )k+ e (−1) n ∫ ( 2 ) 2 2 2 ∼ u2k · e−u du (2.3.37) ∑ ( ) π k=0 2k ! 0 ρ

1 ρ ∞ k 2k ( )k+ e (−1) n 2 2 ∫ ∞ 2 ∼ u2k · e−u du. ∑ ( ) π k=0 2k ! ρ 0

Because we have ∫ ∞ √ 2l −u2 (2l − 1)!! u e du = + π, (2.3.38) 0 2l 1 as ρ → ∞, expression (2.3.37) can be written as

{ k+ 1 } eρ ∞ (−1)kn2k (2) 2 (2k − 1)!!√ I (ρ) ∼ π n ∑ ( ) k+1 π k=0 2k ! ρ 2 (2.3.39) eρ ∞ (−1)kn2k ∼ . √ ∑ ( ) k 2πρ k=0 2k !!ρ

So far we have given an asymptotic expression for In(ρ) as ρ → ∞, denoted A by In (ρ), hereinafter called the asymptotic series of In(ρ):

eρ ∞ (−1)kn2k I A(ρ) ∼ . (2.3.40) n √ ∑ ( ) k 2πρ k=0 2k !!ρ

A,m A Let In (ρ) denotes the sum of preceding m terms of In (ρ), namely,

eρ m (−1)kn2k I A,m(ρ) = , (2.3.41) n √ ∑ ( ) k 2πρ k=0 2k !!ρ then ⏐ ⏐ ⏐ ⏐ ⏐ eρ ∞ (−1)kn2k ⏐ ⏐I (ρ) − I A,m(ρ)⏐ = ⏐ ⏐ . (2.3.42) ⏐ n n ⏐ ⏐√ ∑ ( ) k ⏐ ⏐ 2πρ k=m+1 2k !!ρ ⏐

45 Thus, as for ∀ρ > 1, ⏐ ⏐ ⏐ ⏐ ⏐ ρ ∞ (− )k 2k ⏐ ⏐ A,m ⏐ ⏐ e 1 n ⏐ lim In(ρ) − In (ρ) = lim → 0 (2.3.43) m→∞ ⏐ ⏐ m→∞ ⏐√ ∑ ( ) k ⏐ ⏐ 2πρ k=m+1 2k !!ρ ⏐ and

A,m lim In (ρ) = In(ρ), (2.3.44) m→∞

A,m which means {In (ρ)} is a series of functions that converges to In(ρ).

Substitute expression (2.3.40) into (2.2.50) and (2.2.56), we have the asymptotic series of h(t, x) and g(t, x) as

λ λ ϕ h(t, x) ∼ 12 21 2 e−at× (ϕ2 − ϕ1)(λ12 + λ21)

⎡ 1 ⎛ ⎞⎤ α(t2+2bxt) 2 ∞ (− )m 2m ⎢ e ⎜ t 1 2 ⎟⎥ √ 1 − ∑ ( )m ⎣ 1 ⎝ t + 2bx 2 1 ⎠⎦ 2πα(t2 + 2bxt) 2 m=0 (2m)!! α(t + 2bxt) 2

1 −at+α(t2+2bxt) 2 ( ∞ m 2m ) λ12λ21ϕ2 · e t (−1) 2 ∼ √ 1 1 − m 2 + ∑ m 2 2 2πα(ϕ2 − ϕ1)(λ12 + λ21)(t + 2bxt) 4 t 2bx m=0 (2m)!!α (t + 2bxt) (2.3.45) and

2 − 1 −at g(t, x) ∼ δ(t) + αbx(t + 2bxt) 2 e ×

⎛ 1 ⎞ α(t2+2bxt) 2 ∞ (− )m ⎜ e 1 ⎟ √ ∑ ( )m ⎝ 1 2 1 ⎠ 2πα(t2 + 2bxt) 2 m=0 (2m)!! α(t + 2bxt) 2 (2.3.46)

1 √ 2 bx α e−at+α(t +2bxt) 2 ∞ (−1)m ∼ δ(t) + √ . 3 ∑ m 2 m 2π (t2 + 2bxt) 4 m=0 (2m)!!α (t + 2bxt) 2

46 Therefore,

∫ t λ λ ϕ h(v, x)dv ∼ √ 12 21 2 · 0 2πα(ϕ2 − ϕ1)(λ12 + λ21)

1 1 (2.3.47) ⎛ 2 2 ⎞ ∫ t e−av+α(v +2bxv) 2 ∞ (−1)m22m ∫ t e−av+α(v +2bxv) 2 ⎝ dv − dv⎠ 2 1 ∑ m m + 5 m − 3 0 (v + 2bxv) 4 m=0 (2m)!!α 0 v 2 4 (v + 2bx) 2 4 and ∫ t λ λ ϕ2 e(λ12+λ21)vh(v, x)dv ∼ √ 12 21 · 0 2πα(ϕ2 − ϕ1)(λ12 + λ21)

1 1 (2.3.48) ⎛ 2 2 ⎞ ∫ t e−av˜ +α(v +2bxv) 2 ∞ (−1)m22m ∫ t e−av˜ +α(v +2bxv) 2 ⎝ dv − dv⎠ 2 1 ∑ m m + 5 m − 3 0 (v + 2bxv) 4 m=0 (2m)!!α 0 v 2 4 (v + 2bx) 2 4

Analogously,

1 √ 2 ∫ t ∫ t bx α e−av+α(v +2bxv) 2 ∞ (−1)m g(v, x)dv ∼ δ(v) + √ dv 3 ∑ ( 1 )m 0 0 2π (v2 + 2bxv) 4 m=0 (2m)!! α(v2 + 2bxv) 2

1 √ 2 bx α ∞ (−1)m ∫ t e−av+α(v +2bxv) 2 ∼ 1 + √ dv, ∑ ( ) m m + 3 2π m=0 2m !!α 0 (v2 + 2bxv) 2 4 (2.3.49) and

1 √ 2 ∫ t bx α ∞ (−1)m ∫ t e−av˜ +α(v +2bxv) 2 e(λ12+λ21)vg(v, x)dv ∼ 1 + √ dv, (2.3.50) ∑ ( ) m m + 3 0 2π m=0 2m !!α 0 (v2 + 2bxv) 2 4 where a, a˜ are defined in equation (2.3.15), α2 and b are defined in (2.2.48) and (2.2.47) respectively.

47 In conclusion, for long time, the asymptotic form of solutions for W1(t, x) and W2(t, x) can be expressed as

W1(t, x) =

⎧ λ λ x ⎪ 21 + 12 e−(λ12+λ21)t, for 0 < t < ⎪ λ + λ λ + λ ϕ ⎪ 12 21 12 21 2 ⎪ ⎪ ⎪ −λ ⎪ λ λ 21 x λ λ ϕ ⎪ 21 12 −(λ12+λ21)t ϕ 12 21 2 ⎪ + e + e 2 × √ · ⎪ λ12 + λ21 λ12 + λ21 2πα(ϕ2 − ϕ1)(λ12 + λ21) ⎪ ⎛ 1 1 ⎞ ⎪ t −av+α(v2+2bxv) 2 ∞ m 2m t −av+α(v2+2bxv) 2 ⎪ ∫ e (−1) 2 ∫ e ⎨ ⎝ dv − dv⎠ 2 1 ∑ m m + 5 m − 3 0 (v + 2bxv) 4 m=0 (2m)!!α 0 v 2 4 (v + 2bx) 2 4 ⎪ { } ⎪ λ12 ⎪ −(λ12+λ21)t+ x λ12λ21ϕ2 ⎪ − ϕ2 √ · ⎪ e ⎪ 2πα(ϕ2 − ϕ1)(λ12 + λ21) ⎪ ⎛ 1 1 ⎞ ⎪ 2 2 2 2 ⎪ ∫ t e−av˜ +α(v +2bxv) ∞ (−1)m22m ∫ t e−av˜ +α(v +2bxv) ⎪ dv − dv ⎪ ⎝ 1 ∑ m m + 5 m − 3 ⎠ ⎪ 0 (v2 + 2bxv) 4 = (2m)!!α 0 v 2 4 (v + 2bx) 2 4 ⎪ m 0 ⎪ ⎩ , for t > x ϕ2 (2.3.51)

W2 (t, x) =

⎧ λ ( ) x ⎪ 12 − −(λ12+λ21)t < < ⎪ 1 e , for 0 t ⎪ λ12 + λ21 ϕ2 ⎪ ⎪ ⎪ ⎪ λ12 [ −(λ +λ )t ⎪ 1 − e 12 21 ⎪ λ12 + λ21 ⎪ 1 . ⎪ ⎛ √ 2 ⎞ ⎨ −λ ∞ m t −av+α(v +2bxv) 2 21 x bx α (−1) ∫ e +e ϕ2 1 + √ dv ⎝ ∑ m m + 3 ⎠ ⎪ 2π (2m)!!α 0 2 2 4 ⎪ m=0 (v + 2bxv) ⎪ ⎛ √ 1 ⎞⎤ ⎪ { } 2 2 ⎪ λ12 ∞ m ∫ t −av˜ +α(v +2bxv) ⎪ −(λ12+λ21)t+ x bx α (−1) e ⎪ − e ϕ2 1 + √ dv ⎪ ⎝ ∑ m m + 3 ⎠⎦ ⎪ 2π (2m)!!α 0 2 2 4 ⎪ m=0 (v + 2bxv) ⎪ ⎩⎪ , for t > x ϕ2 (2.3.52)

48 Figure 2.2: Leading Term of Asymptotic and Power Series for Bessel Functions I0(ρ)

Figure 2.2 shows the relationship between the Bessel function I0(ρ), the first leading term of its asymptotic series and power series. We can conclude from the figure that the difference between Power Series and Bessel function is relatively small as ρ close to 0. Meanwhile, the difference of Asymptotic Series is small when ρ is sufficiently large. Another result can be drawn fromthe numerical experiment is that as we add more terms with higher orders into the series, the differences get smaller and smaller, which conforms to our expectation, except that when ρ → 0 the Asymptotic Series goes to infinity.

49 2.3.3 Comprehensive Expansion

Consider that for short-time behavior ρ = ρ(t, x) → 0 as t → 0. Thus, In(ρ) can be expressed in terms of the power series expansion

(ρ)n ∞ 1 (ρ)2k IP (ρ) = . (2.3.53) n ∑ ( + ) ( + + ) 2 k=0 Γ k 1 Γ n 1 k 2

P If we use the terms up to order m of In (ρ) as an approximation to In(ρ), then the relative error of the remainder terms is

 P  In (ρ) − I (ρ) εP(ρ) = n 2 ∥In (ρ)∥2

( ρ )n ∞ 1 ( ρ )2k ∑k=m+1 ≤ 2 Γ(k+1)Γ(n+1+k) 2 (2.3.54) ( ρ )n ∞ 1 ( ρ )2k 2 ∑k=0 Γ(k+1)Γ(n+1+k) 2 ( ) ∼ O ρ2m , which decreases as ρ goes to 0.

Meanwhile, when we investigate the long-time behavior approximation, namely,

eρ ∞ (−1)mn2m I A(ρ) ∼ , (2.3.55) n √ ∑ ( ) m 2πρ m=0 2m !!ρ

50 the relative error of the estimation with terms up to order m is

 A  In (ρ) − I (ρ) εA(ρ) = n 2 ∥In (ρ)∥2

ρ (−1)kn2k √e ∑∞ 2πρ k=m (2k)!!ρk (2.3.56) ≤ ρ (−1)kn2k √e ∑∞ 2πρ k=m (2k)!!ρk

∼ O (ρ−m) .

Thus, the εA(ρ) will asymptotically decreases to zero as ρ → +∞.

A transparent observation is that εA(ρ) and εP(ρ) goes to opposite direc- tions along with the variance of ρ. Consequently, starting from the critical point tc, where

     P   A  In (ρ (x, tc)) − In (ρ (x, tc)) − In (ρ (x, tc)) − In (ρ (x, tc)) < ε,  2  2 (2.3.57) when t and ρ(x, t) go to +∞, the εA(ρ) decreases, whereas when t and ρ(x, t) go to 0, εA(ρ) decreases. This property of error gives us a convenient way to control the error with the combination of the expansions. Let εC(ρ) denote the combination approximation error. When t < tc and ρ(x, t) < ρ(x, tc), the power series are used for approximation, the error of which increasing with respect to ρ, so

C P P ε (ρ) = ε (ρ) ≤ ε (ρ(x, tc)), t < tc. (2.3.58)

On the other hand, When t > tc and ρ(x, t) > ρ(x, tc), the asymptotic series are used for approximation, the error of which decreasing with respect to ρ,

51 so

C A A ε (ρ) = ε (ρ) ≤ ε (ρ(x, tc)), t > tc. (2.3.59)

Such a behavior of error can be observed in Figure 2.3.

Figure 2.3: Relative Error Behavior of Power and Asymptotic Series

Now consider that tc satisfies

⏐ ⏐ ⏐ P A ⏐ ε ⏐ε (ρ(x, tc)) − ε (ρ(x, tc))⏐ < , (2.3.60) ∥In (ρ(x, tc))∥2 which means within error tolerance ε,

C A P ε (ρ(x, tc)) ≈ ε (ρ(x, tc)) ≈ ε (ρ(x, tc)) (2.3.61)

Combining equation (2.3.58), (2.3.58) and (2.3.61), we can conclude that for

C C ∀t > 0, ε (ρ(x, t)) ≤ ε (ρ(x, tc)). Thus, the error calculated at the critical

52 point tc will give us a reliable upper bound for our approximation error. Fur- P C thermore, considering that In and In are continuous with respect to t, a proper estimation of the critical point tc will also provide us a practical upper bound for the approximation error.

The critical value used for the approximation via combination can be cal- culated as ⏐ ⏐ ⏐ ⏐ ⏐ P ⏐ ⏐ A ⏐ ⏐In (x) − In(x)⏐ = ⏐In (x) − In(x)⏐ , (2.3.62) or ⏐ ⏐ ⏐ ⏐ ⏐ P ⏐ ⏐ A ⏐ ⏐In (x) − In(x)⏐ − ⏐In (x) − In(x)⏐ < ε, (2.3.63) where ε is the given error tolerance. Some practical critical values shown in Table 2.3 are used for approximation of lower orders with the error tolerance ε = 10−6.

The Highest Order in Series Critical Value (with 4 digits) 1 2

I0(x) 0.2579 2.7995

The Order of Bessel Function I1(x) 1.0320 3.7456

I2(x) 1.6399 6.1883

Table 2.3: Some Critical Values

53 Chapter 3

Numerical Results for HPC Resource Allocation

54 3.1 Data Simulation

In this section, we will present a numerical example of the Stochastic Fluid

Queue (SFQ) model. Consider an ON-OFF fluid system, where state S1 is the recovery state with buffer fluiding rate r1 − C = −1 and S2 is the attacking state with buffer fluiding rate r2 − C = 4. Suppose the changing rate between two states are given as λ12 = 1.3, λ21 = 0.4. However, the state of system is hidden from our observation. The data we can observe from the network link is the real-time buffer content Q(t). For the purpose of simulation, we generate the system evolving process by changing the system state with time intervals, the length of which follows exponential distributions with parameters λ12, λ21. Meanwhile, the variation of Q(t) follows the equation { dQ(t) r − C, Q(t) ≥ 0 = i (3.1.1) dt 0, otherwise for each dt = 0.1s. Consequently, we generate Q(t) for t from 0 to 200. An example sample path of Q(t) is shown in Figure 3.1 .

55 Figure 3.1: The Variation of Q(t) with respect to t

Take difference of Q(t) with the above stepwidth, we get the result shown in Figure 3.2.

Figure 3.2: The Variation of dQ(t)/dt with respect to t

56 3.2 Theoretical Solution

In this subsection, we will numerically compare the difference between the theoretical solution and the series expansion approximation. In the context of HPC resource allocation, we use the parameters r1 − C = −1, r2 − C = 4,

λ12 = 1.3 and λ21 = 0.4 for the this comparison. In the context of insurance risk management, we use the parameter r = 4, λ12 = 1.3 and λ21 = 0.4. Table 3.1 and Table 3.2 describe the scenarios included in the numerical assessment. In each scenario, the difference of solutions getting from two different meth- ods will be compared and analyzed. In each comparison, firstly, we will focus on comparing the probability function with respect to time t, namely, F(t, x), W1(t, x) and W2(t, x), for fixed x ( x = 1 in the experiment). The distances of solution functions and its comparative function in L1, L2 and L∞ norm space are employed as criteria to evaluate the error, given t in a bounded close interval, say t ∈ [0, 20]. Meanwhile, we will show how the absolute error and relative error of each approximation vary with respect to time t.

57 Scenario Description I Comparing the differences between the solution and its approximation of Power Series with the first leading term

II Comparing the differences between the solution and its approximation from the combination of Power Series and Asymptotic Series with the first leading term

III Comparing the differences between the solution and its approximation from the combination of Power Se- ries and Asymptotic Series with the first and second leading terms

IV Comparing the differences between the theoretical so- lution and simulation results

V Comparing the differences between the simulation re- sults and the approximation from the combination of Power Series and Asymptotic Series with the first lead- ing term

VI Comparing the differences between the simulation re- sults and the approximation from the combination of Power Series and Asymptotic Series with the first and second leading terms

Table 3.1: Summary of Numerical Comparison Methodology

58 Scenario Comparison Item (I) Comparison Item (II) I Theoretical Solution Approximation via Power Series with the First Leading Term II Theoretical Solution Approximation via the Combination of Power Series and Asymptotic Series with the First Leading Term III Theoretical Solution Approximation via the Combination of Power Series and Asymptotic Se- ries with the First and Second Leading Term IV Monte Carlo Simulated So- Theoretical Solution lution V Monte Carlo Simulated So- Approximation via the Combination lution of Power Series and Asymptotic Series with the First Leading Term VI Monte Carlo Simulated So- Approximation via the Combination lution of Power Series and Asymptotic Se- ries with the First and Second Leading Term

Table 3.2: Summary of Numerical Comparison Methodology(II)

Figure 3.3 shows the behavior of Prob(Q(t) ≤ x), namely, F(t, x), and W1(t, x), W2(t, x) with respect to t at x = 1.

59 Figure 3.3: The Probability of Q(t) ≤ x with respect to t, Given x = 1

Figure 3.4 and Figure 3.5 show the variation of F(t, x), W1(t, x) and W2(t, x) in Scenario III. Figure 3.6 shows the absolute error of approximations with respect to time t in Scenario III. These figures show the robustness of the proposed expansion from a graphical perspective, where Figure 3.4 highlights the closeness in behavior between the approximations of F(t, x) and the its theoretical solution. This is further verified in Figure 3.6 , where the error behavior between the approximated and simulated responses of F(t, x), where this trend continued when the authors investigated this behavior for all six cases.

60 Figure 3.4: F(t, x) and Approximation via the Combination of Power Series and Asymptotic Series with the First and Second Leading Terms with respect to t, Given x = 1

Figure 3.5: W1(t, x), W2(t, x) and Approximation via the Combination of Power Series and Asymptotic Series with the First and Second Leading Terms with respect to t, Given x = 1

61 Figure 3.6: Error of F(t, x) and the Approximation via the Combination of Power Series and Asymptotic Series with the First and Second Leading Term with respect to t, Given x = 1

62 3.3 Result

Criteria Scenario

L1 Distance L2 Distance L∞ Distance

I 0.1806 4.483 × 10−2 1.210 × 10−4

II 7.495 × 10−2 1.798 × 10−2 4.617 × 10−5

III 8.898 × 10−6 5.233 × 10−5 1.790 × 10−7

IV 2.922 × 10−2 1.635 × 10−2 2.314 × 10−4

V 0.1039 2.551 × 10−2 2.313 × 10−4

VI 2.936 × 10−2 1.635 × 10−2 2.314 × 10−4

Table 3.3: Numerical Results of Comparison

Table 3.3 shows the distances between theoretical solution and approxima- tions in functional space for each scenario. The numerical result shows that the approximation using the combination of two series are reasonable small, and may be applied for a easier and intuitive estimation of F(t, x). With the second order terms adding in, the distances decrease more by comparing Scenario II and III, which implies the combination series will universely convergence to the theoretical solution as terms goes to infinity. From the comparison of Scenario IV, V and VI, we can also conclude that the approximation are comparatively closer to the simulation with the theoretical solution, which implies it is reasonable to use the approximation result as an ideal frame of reference in the application of real data where stochastic error are included.

63 Chapter 4

Stochastic Fluid Queue Model for Ruin Theory

64 4.1 Overview

The surplus of an insurance company is the amount by which its asset exceeds its liability. If the surplus of an insurance company falls below the require level, the regulator can take over the company and announce its bankruptcy. Thus, it is important to focus on the surplus process of a company when predict the probability of its bankruptcy or evaluate the risk of the company. One of the classical model for the evaluation of insurance surplus process is known as the Cramer-Lundberg model. In their model, the insurance company collects premiums with a constant rate r and pays for the claims. Therefore the surplus process {R(t)} in continuous time t ≥ 0 is expressed as

R(t) = u + rt − ∑ Uk, (4.1.1) 1≤k≤N(t) where u ≥ 0 is the initial surplus, r ≥ 0 is the constant premium accumulating rate, N(t) is the number of claims occurring in time interval (0, t), and {Uk} are sizes of claims for k ≥ 1. Within the interval between two claims, the surplus increases at rate r. When the k’th claim occurs, the surplus drops down by Uk immediately. In the model’s assumption, arrivals of claims are independent with each other and follow a Poisson process with parameter λ21. Furthermore, sizes of claims are follow a given distribution F independently and identically. Figure 4.1 shows a sample path of the surplus process.

65 Figure 4.1: Surplus Process R(t)

Such definitions generate a discontinuous process, which will be hard forusto analyze. To avoid the discontinuity of the process, Badescu et al. introduced a fluid queue model which substitutes the with a piecewise linear one [24]. The key of their method is transform the time t in the initial process

R(t) so that in the transformed process each claim of size Uk arrives contin- uously over a time interval of length Uk/r. We denote the buffer content of the transformed process by Q(τ). The relationship between the initial process and the transformed one is the following:

Every interval of time where R(t) increases is retained, namely, Q(τ) in- creases at the same rate; whereas the k’th downward discontinuity point of

66 R(t) is replaced by an interval on the time axis of length Uk/r, in which Q(τ) decreases linearly at rate −r, meanwhile the rest of the process is translated to the right of length Uk/r. Hence, the process R(t) is transformed to a stochastic fluid queue process with buffer content Q(τ). Conversely, in time intervals where the fluid process Q(τ) increases, the surplus process R(t) coincide with the fluid queue; meanwhile, time intervals are removed from R(t) where the fluid process Q(τ) decreases. Figure 4.2 is the sample path of Q(τ) related to the surplus process R(t) in Figure 4.1.

Figure 4.2: Fluid Queue Process Q(τ)

We define the increasing and decreasing of the fluid queue as two statesina state space, S = {S1, S2}, where state S1 is the decreasing state, namely OFF state, and S2 is increasing state, or ON state. Then the buffer content Q(τ) can

67 be expressed as a function with respect to τ,

⎧ ∫ τ ⎨u + rS(v)dv, if Q(ν) > 0 , for ∀ν ∈ (0, τ) Q(τ) = 0 (4.1.2) ⎩0, otherwise where u is the initial surplus, which is also the fluid level at time zero. Noticing that the buffer content linearly increases at rate r at the state S2 and decreases linearly at rate −r at the state S1, we have r2 = r and r1 = −r. The behavior of buffer content Q(τ) is summarized in the following equation: ⎧ −r, S(τ) = S1, Q(t) > 0 dQ ⎨⎪ = r, S(τ) = S2, Q(t) > 0 . (4.1.3) dτ ⎪ ⎩0, otherwise

Furthermore, the relationship between the company condition and the buffer content behavior is presented in Table 4.1.

Condition Description of Buffer Content Behavior

I ri = r, Q(τ) > 0 The buffer content Q(τ) increases at rate r. The insurance company is of its normal business, and the surplus R(t) is accumulated at rate r.

II ri = −r , The buffer content Q(τ) decreases at rate −r.A Q(τ) > 0 claim is occurring and the surplus R(t) decreases by Uk.

III Q(τ) = 0 The buffer content Q(τ) = 0 has no change. The surplus R(t) decrease to 0, which means the in- surance company is under the bankruptcy protec- tion.

Table 4.1: The Behavior of Buffer Content Q(τ) in the Context of Insurance Risk Management

68 With the assumption that the size of claims independently and identically fol- lows an exponential distribution with parameter λ12, namely, for any integer k, the cumulative distribution function of the claim size Uk is given by { 1 − e−λ12x, x ≥ 0 P{Uk ≤ x} = . (4.1.4) 0, x < 0

Therefore, { 1 − e−rλ12x, x ≥ 0 P{Uk/r ≤ x} = . (4.1.5) 0, x < 0

Under that condition, we know the remaining time for the state S(τ) at S1 follows an exponential distribution and the remaining time at S2 is governed by a Poisson arrival process, thus the state transformation of S(τ) can be regarded as a Continuous Time Markovian Chain. Therefore, the behavior of the buffer content Q(τ) follows a Markovian Fluid Queue, with the transition matrix ⎛ ⎞ r ⎜λ11 λ12⎟ P = ⎜ ⎟ , (4.1.6) ⎝ ⎠ λ21 λ22 where λ21 is determined by the Poisson process of arrival, λ11 = 1 − rλ12, and

λ22 = 1 − λ21. Demote λ = rλ12 and µ = λ21, then the transition matrix can be written as ⎛ ⎞ − ⎜1 λ λ ⎟ P = ⎜ ⎟ . (4.1.7) ⎝ ⎠ µ 1 − µ

Now consider the passage time for the surplus R(t) from the level u ≥ 0 to x, with t0 denoting the real time and τ0 being the length of time for the fluid queue Q(τ), during which the fluid queue increased for τ2 and decrease for

69 time τ1. We have the following relationship { −rτ + rτ = x − u 1 2 . (4.1.8) τ1 + τ2 = τ0

Therefore, we have ⎧ τ0 x − u ⎨τ1 = − 2 2r τ x − u . (4.1.9) ⎩τ = 0 + 2 2 2r Recall that the total length of increasing intervals τ2 equals the elapsed time t0 in the real world, namely

τ x − u x − u t = 0 + or τ = 2t − . (4.1.10) 0 2 2r 0 0 r

Now define the Cumulative Distribution Function F(τ, x) of the buffer content Q(τ), F(τ, x) = P{Q(τ) ≤ x}, (4.1.11) and the density function

dF(τ, x) p(τ, x) = (4.1.12) dx

Therefore, fix the initial surplus u, we have the probability distribution func- tion of the surplus process at time t is

∫ x ( y − u ) ψ(t, x) = P{R(t) ≤ x} = p 2t − , y dy, (4.1.13) −∞ r which means given the initial surplus, to compute the probability of bankruptcy, it is sufficient if we can determine the probability distribution function F(τ, x) of the buffer content Q(τ) in the fluid queue model.

70 We define two measures to analyze the probability behavior of the fluid queue, namely, the cumulative distribution function of buffer content Q(τ) at the state Si i W (τ, x) = P{Q(τ) ≤ x, S(τ) = Si}. (4.1.14)

Consequently, the unconditional CDF of the buffer content Q(t) is

F(τ, x) = P{Q(τ) ≤ x} = W1(τ, x) + W2(τ, x). (4.1.15)

The goal is to determine the expression of W1(τ, x) and W2(τ, x). The proba- bilistic transitions within a small time interval of length ∆t can be represented as follows

W1(τ + ∆t, x) = (1 − ∆tλ) W1 (τ, x − (−r∆t))

( ) + ∆t · µW2(τ, x − r∆t) + O (∆t)2 (4.1.16)

W2(τ + ∆t, x) = (1 − ∆tµ) W2 (τ, x − r∆t)

( ) + ∆t · λW1(τ, x − (−r∆t)) + O (∆t)2 , (4.1.17) where, on the right hand sides of (4.1.16) and (4.1.17), the first terms describe no transition between states, the second term models the transitions to the other state and the third terms are higher order terms of ∆t.

71 It is observed that the right hand side of (4.1.16) and (4.1.17) can be written as ⎧ W1(τ + ∆t, x) − W1(τ, x) W1(τ, x + ∆x) − W1(τ, x) ⎪ − r ⎪ ∆t ∆x ⎪ ⎪ ⎪ ( 2) ⎪ 1 2 O (∆t) ⎪ = −λW (τ, x + ∆x) + µW (τ, x − ∆x) + ⎪ ∆t ⎪ ⎨⎪ (4.1.18) ⎪ ⎪ ⎪W2(τ + ∆t, x) − W2(τ, x) W2(τ, x) − W2(τ, x − ∆x) ⎪ + r ⎪ ⎪ ∆t ∆x ⎪ ⎪ ( 2) ⎪ O (∆t) ⎩⎪ = λW1(τ, x + ∆x) − µW2(τ, x − ∆x) + ∆t where ∆x = r∆t. Therefore, as ∆t → 0 equation (4.1.18) becomes

⎧∂W1(τ, x) ∂W1(τ, x) ⎪ − r = −λW1(τ, x) + µW2(τ, x) ⎨⎪ ∂τ ∂x , (4.1.19) ⎪∂W2(τ, x) ∂W2(τ, x) ⎩⎪ + r = λW1(τ, x) − µW2(τ, x) ∂τ ∂x for any τ > 0 and x > 0.

Now we should consider the Initial Conditions and Boundary Conditions of W1(τ, x) and W2(τ, x). We assume that the process of claim arrivals is a

Markovian Arrival Process, so no claim arrives at time 0 with probability 1, the initial state at τ = 0 is the “ON” state S2 almost surely, namely

P{S(0) = S1} = 0, P{S(0) = S2} = 1. (4.1.20)

Furthermore, considering the condition that Q(0) = u, we have the following

72 initial conditions:

1 W (0, x) = P {Q(0) ≤ x, S(0) = S1} = 0. (4.1.21) and

2 W (0, x) = P {Q(0) ≤ x, S(0) = S2} = P {Q(0) ≤ x} = P{u ≤ x} = 1{x≥u} { 1, x ≥ u = , 0, x < u (4.1.22) where 1{·} is the indicator function.

For the boundary conditions, notice that

i W (τ, 0) = P{Q(τ) ≤ 0, S(τ) = Si}, i = 1, 2. (4.1.23)

Considering that the buffer content Q(τ) ≥ 0 almost surely and that it strictly increases at state S2, the probability of the buffer to be empty at state S2 is 0, so we have the boundary condition

W2(τ, 0) = 0. (4.1.24)

With the boundary condition (4.1.24), the relationship (4.1.15) can now be expressed as

P{Q(τ) ≤ 0} = F(τ, 0) = W1(τ, 0), (4.1.25) which means the boundary condition for W1(τ, 0) is the probability of ruin for the insurance company with respect to the transformed time τ, which is the target function to be determined. Suppose the probability of ruin is q(τ),

73 namely the boundary condition for W1(τ, x) is

W1(τ, 0) = q(τ). (4.1.26)

With the assumption of a distressed insurance company, we suppose that the company will bankrupt within finite time, which means

lim P{Q(τ) ≤ 0} = 1. (4.1.27) τ→∞

Furthermore, for any 0 < τ < ∞, we have

P{Q(τ) ≤ ∞} = 1, (4.1.28) which yields another boundary condition

{ } lim P{Q(τ) ≤ x} = lim W1(τ, x) + W2(τ, x) = 1. (4.1.29) x→+∞ x→+∞

Boundary condition (4.1.29) also confirms that the probability functions W1(τ, x) and W2(τ, x) are almost surely bounded.

74 Therefore, in the context of ruin theory, we have constructed an Initial Bound- ary Value Problem for W1(τ, x) and W2(τ, x) as

⎧∂W1(τ, x) ∂W1(τ, x) ⎪ − r = −λW1(τ, x) + µW2(τ, x) ⎨⎪ ∂τ ∂x , (4.1.30) ⎪∂W2(τ, x) ∂W2(τ, x) ⎩⎪ + r = λW1(τ, x) − µW2(τ, x) ∂τ ∂x with initial conditions { W1(0, x) = 0 , (4.1.31) 2 W (0, x) = 1{x≥u} and boundary conditions

{ W1(τ, 0) = q(τ) (4.1.32) W2(τ, 0) = 0 and lim {W1(τ, x) + W2(τ, x)} = 1, (4.1.33) x→+∞ where q(τ) is the function to be determined.

75 4.2 Transient Analysis

In this section, we analyze the transient behavior of the system elaborated in the previous section. By taking Laplace transformation and inverse Laplace transformation, a theoretical solution to the SFQ is provided with given initial and boundary conditions.

Take Laplace transform for equations (4.1.19) with respect to τ yielding

⎧ dWˆ 1(s, x) ⎪sWˆ 1(s, x) − W1(0, x) − r = −λWˆ 1(s, x) + µWˆ 2(s, x) ⎨⎪ dx (4.2.1) ⎪ dWˆ 2(s, x) ⎩⎪sWˆ 2(s, x) − W2(0, x) + r = λWˆ 1(s, x) − µWˆ 2(s, x). dx Plugging in the initial condition, new equations after reconstruction are given as ⎧dWˆ 1(s, x) s + λ µ ⎪ = Wˆ 1(s, x) − Wˆ 2(s, x) ⎨⎪ dx r r , (4.2.2) ⎪dWˆ 2(s, x) λ µ + s 1 ⎩⎪ = Wˆ 1(s, x) − Wˆ 2(s, x) + 1 dx r r r {x≥u} with boundary conditions

1 Wˆ (s, 0) = L{q0(τ)} := qˆ0(s) , (4.2.3) Wˆ 2(s, 0) = L{0} = 0

⎛ ⎞ Wˆ 1(s x) ⎜ , ⎟ where qˆ (s) = L {q (τ)}. Let Wˆ = Wˆ (s, x) = ⎜ ⎟ be the vector of 0 0 ⎝ ⎠ Wˆ 2(s, x) function with respect to the variable x, then the equation (4.2.2) can be written

76 as dWˆ (x) = A · Wˆ + H, (4.2.4) dx where ⎛ ⎞ ⎛ ⎞ s + λ µ − ⎜ ⎟ ⎜ 0 ⎟ A = ⎜ r r ⎟ and H = ⎜ ⎟ , (4.2.5) ⎝ λ s + µ ⎠ ⎝ 1 ⎠ − 1 r r r {x≥u} subject to the boundary conditions ⎛ ⎞ q (s) ⎜ ˆ0 ⎟ Wˆ (s, 0) = ⎜ ⎟ . (4.2.6) ⎝ ⎠ 0

Notice that the eigenvalues of A are

1 ( √ ) ω (s) = λ − µ − T (s) 0 2r , (4.2.7) 1 ( √ ) ω (s) = λ − µ + T (s) 1 2r where

( λ + µ)2 T (s) = 4 s + − 4λµ, (4.2.8) 2 with the fact that T (s) ≥ 0 for s > 0. The eigenvectors of A associated with

ω0(s) and ω1(s) are

⎛ √ ⎞ ⎛ √ ⎞ + + s − T(s) + + s + T(s) ⎜ λ µ 2 ⎟ ⎜ λ µ 2 ⎟ V = ⎜ ⎟ , V = ⎜ ⎟ . (4.2.9) 0 ⎝ ⎠ 1 ⎝ ⎠ 2λ 2λ

77 Consequently, solutions to the homogeneous differential equation system

dWˆ = A · Wˆ (4.2.10) dx are of the form

ω0(s)x ω1(s)x Wˆ = K0V0e + K1V1e , (4.2.11) where K0 and K1 are constants to be determined by the boundary conditions. Let ⎛ ⎞ ( √ ) ( √ ) ω (s)x ω (s)x ⎜ λ + µ + 2s − T(s) · e 0 λ + µ + 2s + T(s) · e 1 ⎟ ⎜ ⎟ Φ(s, x) = ⎜ ⎟ , ⎝ ⎠ 2λ · eω0(s)x 2λ · eω1(s)x (4.2.12) then the solution (2.2.23) can be written as

Wˆ = Φ · K, (4.2.13)

T where K = (K0, K1) is an arbitrary constant vector. Therefore, by applying the method of variation of constants, the solution to the non-homogeneous equation system (4.2.4) has the form of

( ∫ x ) Wˆ = Φ · K + Φ−1 (s, y) · Hdy , (4.2.14) 0 where ⎛ √ ⎞ λ + µ + 2s + T(s) −λe−ω0(s)x e−ω0(s)x −1 1 ⎜ 2 ⎟ Φ (s, x) = √ · ⎜ √ ⎟ , 2λ T(s) ⎝ λ + µ + 2s − T(s) ⎠ λe−ω1(s)x − e−ω1(s)x 2 (4.2.15)

78 With the boundary condition (4.2.6), we have

( ∫ 0 ) Wˆ (s, 0) = Φ (s, 0) K + Φ−1 (s, y) · Hdy , (4.2.16) 0 which yields ⎛ ⎞ qˆ (s) − 0 ⎜ √ ⎟ −1 ˆ ⎜ 2 T(s) ⎟ K = Φ (s, 0) · W (s, 0) = ⎜ ⎟ (4.2.17) ⎝ qˆ0 (s) ⎠ √ 2 T(s)

In addition, we have

∫ x Φ−1 (s, y) · Hdy = 0 ⎛ ( √ ) ⎞ λ + µ + 2s + T(s) 1{x≥u} ( ) ⎜ · e−ω0(s)u − e−ω0(s)x ⎟ (4.2.18) ⎜ √ ⎟ ⎜ 4rλω0(s) T(s) ⎟ ⎜ ( √ ) ⎟ . ⎜ λ + µ + 2s − T(s) 1{x≥u} ( ) ⎟ ⎝ −ω1(s)u −ω1(s)x ⎠ − √ · e − e 4rλω1(s) T(s)

Thus, {( ) ˆ 1 2µ 2µ W1(s, x) = √ · − 1{x≥u} 2 T(s) rω1(s) rω0(s)

( ( √ ) ) ω0(s)x 2µ −ω0(s)u +e e 1{x≥u} − λ + µ + 2s − T(s) qˆ0(s) rω0(s)

( ( √ ) )} ω1(s)x 2µ −ω1(s)u +e − e 1{x≥u} + λ + µ + 2s + T(s) qˆ0(s) rω1(s) (4.2.19)

79 and {( ) + + − √ ( ) + + + √ ( ) ˆ 1 λ µ 2s T s λ µ 2s T s W2(s, x) = √ · − 1{x≥u} 2 T(s) rω1(s) rω0(s)

( √ ) + + s + T(s) ω0(s)x λ µ 2 −ω0(s)u +e e 1{x≥u} − 2λqˆ0(s) . rω0(s)

( √ )} λ + µ + 2s − T(s) ω1(s)x −ω1(s)u +e − e 1{x≥u} + 2λqˆ0(s) rω1(s) (4.2.20)

The boundary condition (4.1.33) implies that Wˆ (s, x) is bounded as x → ∞.

ω (s)x Considering that ω1(s) > 0 for s > 0, e 1 diverges to infinity as x → ∞, in order to satisfy the boundary condition (4.1.33), qˆ0(s) has to be properly chosen so that the exponential terms with respect to x can be vanished as x → ∞. Therefore, as x → ∞, qˆ0(s) needs to satisfy

⎧ 2µ ( √ ) −ω1(s)u ⎪ − e + λ + µ + 2s + T(s) qˆ0(s) = 0 ⎨ rω (s) 1 √ , (4.2.21) λ + µ + 2s − T(s) ⎪ −ω1(s)u ⎩⎪ − e + 2λqˆ0(s) = 0 rω1(s) which yields 2µe−ω1(s)u qˆ (s) = 0 ( √ ). (4.2.22) rω1(s) λ + µ + 2s + T(s)

Next, we consider the case of the distressed insurance company where the current surplus is less than the initial surplus, namely, x < u. Under the

80 condition x < u where 1{x≥u} = 0, we have

1 {( √ ) ˆ 1 ω1(s) W (s, x) = √ λ + µ + 2s + T(s) qˆ0(s)e 2 T(s)

( √ ) } ω0(s) − λ + µ + 2s − T(s) qˆ0(s)e (4.2.23)

λqˆ (s) ( ) ˆ 2 0 ω1(s)x ω0(s)x W (s, x) = √ e − e . (4.2.24) T(s)

Take inverse Laplace transform of solutions (4.2.23) and (4.2.24), we will get the solutions as a piecewise function after considerable simplification, where theoretical details of computing the inverse Laplace transform can be found in Appendix D (see Section 7.4):

81 1. when 0 < τ ≤ v0

W1(τ, x) = 0 (4.2.25)

W2(τ, x) = 0 , (4.2.26)

2. when v0 < τ < v˜0

( − )( − ) 1 µ − λ µ u x {( −2a(τ−v )) W (τ, x) = e 2r × 1 − e 0 2(λ + µ)

(∫ τ ∫ τ ) 1 −av −2aτ 1 av −(λ − µ) e I0 [αy(v)] dv − e e I0 [αy(v)] dv (4.2.27) v0 2 v0 2

∫ τ ∫ τ } −av −1 −2aτ av −1 + αv0e y (v)I1 [αy(v)] dv − e αv0e y (v)I1 [αy(v)] dv v0 v0

(λ−µ)(u−x) { (∫ τ ) 2 − µ −2aτ av −1 av0 W (τ, x) = e 2r × e αv0e y (v)I1[αy(v)]dv + e 2(λ + µ) v0

(∫ τ ) λ −av −1 −av0 1 −aτ + αv0e y (v)I1[αy(v)]dv + e − e I0(αy(τ)) 2(λ + µ) v0 2

∫ τ ∫ τ } µ(λ − µ) −2aτ 1 av λ(λ − µ) 1 −av − e e I0[αy(v)]dv − e I0[αy(v)]dv , 2(λ + µ) v0 2 2(λ + µ) v0 2 (4.2.28)

82 3. when τ > v˜0

( − )( − ) { 1 − λ µ u x µ ( −2a(τ−v )) W (τ, x) = e 2r × 1 − e 0 2(λ + µ)

(∫ τ ∫ τ ) µ(λ − µ) 1 −av −2aτ 1 av − e I0 [αy(v)] dv − e e I0 [αy(v)] dv 2(λ + µ) v0 2 v0 2

(∫ τ ∫ τ ) µ −av −1 −2aτ av −1 + αv0e y (v)I1 [αy(v)] dv − e αv0e y (v)I1 [αy(v)] dv 2(λ + µ) v0 v0

( 1 λ − µ α2v˜ ) +e−aτ ατy˜−1(τ)I [αy˜(τ)] + I [αy˜(τ)] − 0 λ 1 2λ 0 2λ

∫ τ 2 ∫ τ λ(λ − µ) 1 −av µ (λ − µ) −2aτ 1 av + e I0[αy˜(v)]dv − e e I0[αy˜(v)]dv 2(λ + µ) v˜0 2 2λ(λ + µ) v˜0 2

(∫ τ ) λ −av −1 −av˜0 − αv˜0e y˜ (v)I1[αy˜(v)]dv + e 2(λ + µ) v˜0

2 (∫ τ )} µ −2aτ av −1 av˜0 + e αv˜0e y˜ (v)I1[αy˜(v)]dv + e 2λ(λ + µ) v˜0 (4.2.29)

83 (λ−µ)(u−x) { (∫ τ ) 2 − µ −2aτ av −1 av0 W (τ, x) = e 2r × e αv0e y (v)I1[αy(v)]dv + e 2(λ + µ) v0

(∫ τ ) λ −av −1 −av0 1 −aτ + αv0e y (v)I1[αy(v)]dv + e − e I0[αy(τ)] 2(λ + µ) v0 2

∫ τ ∫ τ µ(λ − µ) −2aτ 1 av λ(λ − µ) 1 −av − e e I0[αy(v)]dv − e I0[αy(v)]dv 2(λ + µ) v0 2 2(λ + µ) v0 2

(∫ τ ) µ −2aτ av −1 av˜0 − e αv˜0e y˜ (v)I1[αy˜(v)]dv + e 2(λ + µ) v˜0

(∫ τ ) λ −av −1 −av˜0 1 −aτ − αv˜0e y˜ (v)I1[αy˜(v)]dv + e + e I0[αy˜(τ)] 2(λ + µ) v˜0 2

∫ τ ∫ τ } µ(λ − µ) −2aτ 1 av λ(λ − µ) 1 −av + e e I0[αy˜(v)]dv + e I0[αy˜(v)]dv , 2(λ + µ) v˜0 2 2(λ + µ) v˜0 2 (4.2.30) where

λ + µ √ a = , α = λµ, (4.2.31) 2 1 1 v = (u − x), v˜ = (u + x), (4.2.32) 0 r 0 r √ √ 2 2 2 2 y(τ) = τ − v0, y˜(τ) = τ − v˜0 (4.2.33) and In(y), n = 0, 1 are the modified Bessel functions of the first kind. Notice

1 2 that τ = v˜0 being a removable discontinuity of W (τ, x) and W (τ, x), so we

84 define

1 1 W (v˜0, x) = lim W (τ, x) (4.2.34) τ→v˜0

2 2 W (v˜0, x) = lim W (τ, x), (4.2.35) τ→v˜0 then W1(τ, x) and W2(τ, x) are continuous functions for any τ > 0 and x > 0.

85 4.3 Asymptotic Expansions of Solutions

4.3.1 Short-time Behavior Analysis

For short-time behavior, from equation (4.2.33) we can see that for any X ≥ 0, y = y(τ, x) → 0 and the modified Bessel function In(y) can be expressed using power series expanding approach around y = 0 as

(y)n ∞ 1 (y)2k I (y) = . (4.3.1) n ∑ ( + ) ( + + ) 2 k=0 Γ k 1 Γ n k 1 2

Notice that ∫ τ av −1 αv0e y (v)I1[αy(v)]dv v0 (4.3.2) α2v ∞ 1 (α)2k ∫ τ = 0 eav(v2 − v2)kdv, ∑ ( + ) ( + ) 0 2 k=0 Γ k 1 Γ k 2 2 v0

∫ τ −av −1 αv0e y (v)I1[αy(v)]dv v0 (4.3.3) ∞ α2v 1 (α)2k ∫ τ−v1 = 0 e−av(v2 − v2)kdv. ∑ ( + ) ( + ) 0 2 k=0 Γ k 1 Γ k 2 2 v0

Let

∫ τ av 2 2 k Ωk(τ; v0, a) = e (v − v0) dv, (4.3.4) v0

∫ τ −av 2 2 k Ωk(τ; v0, −a) = e (v − v0) dv, (4.3.5) v0

86 we have

∫ τ k ( ) av k−i k 2(k−i) 2i Ωk(τ; v0, a) = e ∑(−1) v0 v dv v0 i=0 i , k ( ) ( 2i m m ) k 2(k−i) (2i + 1)! (−1) a = (− )k−i ( m aτ − m av0 ) ∑ 1 v0 2i+1 ∑ τ e v0 e i=0 i a m=0 m! (4.3.6)

k ( ) ( 2i m ) k−i k 2(k−i) (2i + 1)! a ( m −aτ m −av ) Ω (τ; v , −a) = (−1) v τ e − v e 0 . k 0 ∑ 0 (− )2i+1 ∑ 0 i=0 i a m=0 m! (4.3.7)

Therefore, ∫ τ α2v ∞ 1 ( α )2k αv eavy−1(v)I [αy(v)]dv = 0 Ω (τ; v , a), (4.3.8) 0 1 ∑ ( + ) ( + ) k 0 v0 2 k=0 Γ k 1 Γ k 2 2

∫ τ α2v ∞ 1 ( α )2k αv e−avy−1(v)I [αy(v)]dv = 0 Ω (τ; v , −a). 0 1 ∑ ( + ) ( + ) k 0 v0 2 k=0 Γ k 1 Γ k 2 2 (4.3.9)

In addition,

∫ τ ∞ 1 (α)2k I [αy(v)]eavdv = Ω (τ; v , a), (4.3.10) 0 ∑ ( + )2 k 0 v0 k=0 Γ k 1 2

∫ τ ∞ 1 (α)2k I [αy(v)]e−avdv = Ω (τ; v , −a). (4.3.11) 0 ∑ ( + )2 k 0 v0 k=0 Γ k 1 2

1 Hence, for v0 < τ < v˜0, the short-time expansion for W (τ, x) is

( − )( − ) 1 µ − λ µ u x {( −2a(τ−v )) W (τ, x) = e 2r × 1 − e 0 2(λ + µ)

λ − µ ∞ 1 ( α )2k ( ) − Ω (τ; v , −a) − e−2aτΩ (τ; v , a) ∑ ( + )2 k 0 k 0 (4.3.12) 2 k=0 Γ k 1 2 } α2v ∞ 1 ( α )2k ( ) + 0 Ω (τ; v , −a) − e−2aτΩ (τ; v , a) . ∑ ( + ) ( + ) k 0 k 0 2 k=0 Γ k 1 Γ k 2 2

87 The short-time expansion for W2(τ, x) is

{ 2k ( − )( − ) −2aτ+av0 −av0 ∞ ( ) 2 − λ µ u x µe + λe 1 −aτ 1 αy(τ) W (τ, x) = e 2r × − e ( + ) ∑ ( + )2 2 λ µ 2 k=0 Γ k 1 2

α2v ∞ 1 ( α )2k ( µ λ ) + 0 e−2aτΩ (τ; v , a) + Ω (τ; v , −a) ( + ) ∑ ( + ) ( + ) k 0 k 0 . 2 λ µ k=0 Γ k 1 Γ k 2 2 2 2 } (λ − µ) ∞ 1 ( α )2k ( µ λ ) − e−2aτΩ (τ; v , a) + Ω (τ; v , −a) ( + ) ∑ ( + )2 k 0 k 0 2 λ µ k=0 Γ k 1 2 2 2 (4.3.13)

For τ > v˜0, we have

( − )( − ) { 1 − λ µ u x µ ( −2a(τ−v )) λ −av˜ W (τ, x) = e 2r 1 − e 0 − e 0 2(λ + µ) 2(λ + µ)

( µ2 α2τ ∞ 1 ( αy˜(τ) )2k + e−2aτeav˜0 + e−aτ ( + ) ∑ ( + ) ( + ) 2λ λ µ 2λ k=0 Γ k 1 Γ k 2 2 ) λ − µ ∞ 1 ( αy˜(τ) )2k α2v˜ + − 0 ∑ ( + )2 2λ k=0 Γ k 1 2 2λ

µ(λ − µ) ∞ 1 ( α )2k ( ) − Ω (τ; v , −a) − e−2aτΩ (τ; v , a) ( + ) ∑ ( + )2 k 0 k 0 4 λ µ k=0 Γ k 1 2

µα2v ∞ 1 ( α )2k ( ) + 0 Ω (τ; v , −a) − e−2aτΩ (τ; v , a) ( + ) ∑ ( + ) ( + ) k 0 k 0 4 λ µ k=0 Γ k 1 Γ k 2 2

λ − µ ∞ 1 ( α )2k ( λ µ2 ) + Ω (τ; v˜ , −a) − e−2aτΩ (τ; v˜ , a) ( + ) ∑ ( + )2 k 0 k 0 2 λ µ k=0 Γ k 1 2 2 2λ } α2v ∞ 1 ( α )2k ( λ µ2 ) − 0 Ω (τ; v˜ , −a) − e−2aτΩ (τ; v˜ , a) , ( + ) ∑ ( + ) ( + ) k 0 k 0 2 λ µ k=0 Γ k 1 Γ k 2 2 2 2λ (4.3.14)

88 ( − )( − ) { −2aτ+av0 −av0 −2aτ+av˜0 −av˜0 2 − λ µ u x µe + λe µe + λe W (τ, x) = e 2r × − 2(λ + µ) 2(λ + µ)

1 ∞ 1 ( αy(τ) )2k 1 ∞ 1 ( αy˜(τ) )2k − e−aτ + e−aτ ∑ ( + )2 ∑ ( + )2 2 k=0 Γ k 1 2 2 k=0 Γ k 1 2

α2v ∞ 1 ( α )2k ( λ µ ) + 0 Ω (τ; v˜ , −a) + e−2aτΩ (τ; v˜ , a) ( + ) ∑ ( + ) ( + ) k 0 k 0 2 λ µ k=0 Γ k 1 Γ k 2 2 2 2

(λ − µ) ∞ 1 ( α )2k ( λ µ ) − Ω (τ; v˜ , −a) + e−2aτΩ (τ; v˜ , a) ( + ) ∑ ( + )2 k 0 k 0 2 λ µ k=0 Γ k 1 2 2 2

α2v ∞ 1 ( α )2k ( λ µ ) − 0 Ω (τ; v˜ , −a) + e−2aτΩ (τ; v˜ , a) ( + ) ∑ ( + ) ( + ) k 0 k 0 2 λ µ k=0 Γ k 1 Γ k 2 2 2 2 } (λ − µ) ∞ 1 ( α )2k ( λ µ ) + Ω (τ; v˜ , −a) + e−2aτΩ (τ; v˜ , a) . ( + ) ∑ ( + )2 k 0 k 0 2 λ µ k=0 Γ k 1 2 2 2 (4.3.15)

89 4.3.2 Long-time Behavior Analysis

The asymptotic expression for In(y) as y → ∞ can be expressed as:

ey ∞ (−1)kn2k I A(y) ∼ , for n ≥ 1 (4.3.16) n √ ∑ ( ) k 2πy k=0 2k !!y and ey I A(y) ∼ . (4.3.17) 0 √2πy

Notice that as τ → ∞, y(τ) → ∞ and √ ∫ τ ∞ k 2 ∫ τ av+αy(v) av −1 (−1) αv0 e αv0e y (v)I1[αy(v)]dv ∼ dv (4.3.18) ∑ ( ) k k+ 3 v0 k=0 2k !!α 2π v0 y(v) 2

√ ∫ τ ∞ k 2 ∫ τ −av+αy(v) −av −1 (−1) αv0 e αv0e y (v)I1[αy(v)]dv ∼ dv. (4.3.19) ∑ ( ) k k+ 3 v0 k=0 2k !!α 2π v0 y(v) 2

Let √ 1 ∫ τ eav+αy(v) χk(τ; v0, a) = dv (4.3.20) k+ 3 2πα v0 y(v) 2 √ 1 ∫ τ e−av+αy(v) χk(τ; v0, −a) = dv, (4.3.21) k+ 3 2πα v0 y(v) 2 we have

∫ τ √ ∫ τ av+αy(v) av 1 e I0[αy(v)]e dv ∼ 1 dv = χ−1(τ; v0, a), (4.3.22) v0 2πα v0 y(v) 2

∫ τ √ ∫ τ −av+αy(v) −av 1 e I0[αy(v)]e dv ∼ 1 dv = χ−1(τ; v0, −a), (4.3.23) v0 2πα v0 y(v) 2

90 and

∫ τ ∞ (−1)kαv αv eavy−1(v)I [αy(v)]dv ∼ 0 χ (τ; v , a), (4.3.24) 0 1 ∑ ( ) k k 0 v0 k=0 2k !!α

∫ τ ∞ (−1)kαv αv e−avy−1(v)I [αy(v)]dv ∼ 0 χ (τ; v , a). (4.3.25) 0 1 ∑ ( ) k k 0 v0 k=0 2k !!α

Therefore, as τ → ∞, we have the asymptotic expansion for W1(τ, x) and W2(τ, x) as the followings.

( − )( − ) { 1 − λ µ u x µ ( −2a(τ−v )) λ −av˜ W (τ, x) = e 2r × 1 − e 0 − e 0 2(λ + µ) 2(λ + µ)

( ) 2 eαy(τ) − eαy(τ) 2v µ −2aτ av˜0 −aτ ατ 1 λ µ 1 α ˜0 + e e + e √ 3 + √ 1 − 2λ(λ + µ) λ 2πα y(τ) 2 2λ 2πα y(τ) 2 2λ

( ) µ(λ − µ) 1 1 −2aτ − χ− (τ; v , −a) − e χ− (τ; v , a) 2(λ + µ) 2 1 0 2 1 0 , µ ∞ (−1)kαv ( ) + 0 χ (τ; v , −a) − e−2aτχ (τ; v , a) ( + ) ∑ ( ) k k 0 k 0 2 λ µ k=0 2k !!α

( 2 ) (λ − µ) λ µ −2aτ + χ− (τ; v˜ , −a) − e χ− (τ; v˜ , a) 2(λ + µ) 2 1 0 2λ 1 0

} λ ∞ (−1)kαv˜ ( λ µ2 ) − 0 χ (τ; v˜ , −a) − e−2aτχ (τ; v˜ , a) ( + ) ∑ ( ) k k 0 k 0 2 λ µ k=0 2k !!α 2 2λ (4.3.26)

91 ( − )( − ) { −2aτ+av0 −av0 −2aτ+av˜0 −av˜0 2 − λ µ u x µe + λe µe + λe W (τ, x) = e 2r × − 2(λ + µ) 2(λ + µ)

( αy(τ) αy˜(τ) ) 1 −aτ 1 e 1 e − e √ 1 + √ 1 2 2πα y(τ) 2 2πα y˜(τ) 2

1 ∞ (−1)kαv ( λ µ ) + 0 χ (τ; v , −a) + e−2aτχ (τ; v , a) ( + ) ∑ ( ) k k 0 k 0 2 λ µ k=0 2k !!α 2 2 . (4.3.27) ( ) λ − µ λ µ −2aτ − χ− (τ; v , −a) + e χ− (τ; v , a) 2(λ + µ) 2 1 0 2 1 0

1 ∞ (−1)kαv˜ ( λ µ ) − 0 χ (τ; v˜ , −a) + e−2aτχ (τ; v˜ , a) ( + ) ∑ ( ) k k 0 k 0 2 λ µ k=0 2k !!α 2 2 ( )} λ − µ λ µ −2aτ + χ− (τ; v˜ , −a) + e χ− (τ; v˜ , a) 2(λ + µ) 2 1 0 2 1 0

92 4.3.3 Proposed Comprehensive Expansion

Next, the series representations, described in Sections 4.3.1 and 4.3.2, are incorporated to approximate the transient behavior of W1(τ, x) and W2(τ, x). We want to construct an approximation with the least error for all τ. It is noteworthy to mention that there exist critical points τc and τ˜c that serve as the transition points between short and long-time behavior, which is estimated numerically via examining the error differences between the modified Bessel function In and the respective power series and asymptotic approximations, P A In and In , given by equations (4.3.1), (4.3.17) and (4.3.16). Our proposed combination is that for τ < τc, the power series are used as our approximation, for τ > τ˜c, the asymptotic series are used as our approximation, while for

τc < τ < τ˜c, the asymptotic series are employed for terms of In(y(v)), whereas the power series are employed for terms of In(y˜(v)). With such a combination, the approximation with less error are used for each interval of τ. So we estimate τc and τ˜c given the following relationship

     P   A  In (y (τc)) − In (y (τc)) − In (y (τc)) − In (y (τc)) < ε,  2  2      P   A  In (y˜ (τ˜c)) − In (y˜ (τ˜c)) − In (y˜ (τ˜c)) − In (y˜ (τ˜c)) < ε, (4.3.28)  2  2 for n = 0, 1, where ε is the error tolerance. Notice that although τc and τ˜c may change for different parameter chosen, y(τc) and y˜(τ˜c) will be the same constant value. The critical values shown in Table 2.3 are also practical values of y(τc) and y˜(τ˜c) used for approximation of lower orders with the error tolerance ε = 10−6.

93 Chapter 5

Numerical Results for Ruin Theory

94 In this chapter, numerical experiments are conducted to illustrate the fidelity of the series expansion approximation in the context of insurance risk man- agement, where the surplus premium rate, the transition rates and the initial surplus are r = 1, λ = 2, µ = 5 and u = 2. Additionally, to check the correctness of our solutions, we perform Monte Carlo methods to produce a simulation of the surplus process with totally 1,000 trials. Similar to our discussion in Chapter3 , we will use Table 3.1 and Table 3.2 to describe the scenarios included in the numerical assessment. In each scenario, the dif- ference of solutions getting from two different methods will be compared and analyzed. In each comparison, we will firstly focus on comparing the probability function F(τ, x), W1(τ, x) and W2(τ, x) for fixed x ( x = 1 in the experiment) with respect to τ. The distances of solution functions and its comparative function in L1, L2 and L∞ norm space are employed as criteria to evaluate the error, given t in a bounded close interval, say τ ∈ [0, 20]. Meanwhile, we will show how the absolute error and relative error of each approximation vary with respect to time τ.

Table 5.1 shows the distances between theoretical solution and approxima- tions in functional space for each scenario. The numerical result shows that the approximation using the combination of two series are reasonable small, and may be applied for a easier and intuitive estimation of F(τ, x). With the second order terms adding in, the distances decrease more by comparing Scenario II and III, which implies the combination series will universally convergence to the theoretical solution as terms goes to infinity. From the comparison

95 of Scenario IV, V and VI, we can also conclude that the approximation are comparatively closer to the simulation with the theoretical solution, which implies it is reasonable to use the approximation result as an ideal frame of reference in the application of real data where stochastic error are included.

Criteria Scenario

L1 Distance L2 Distance L∞ Distance

I 0.6144 0.6944 0.8715

II 0.0095 0.0102 0.0114

III 0.0077 0.0084 0.0094

IV 0.0060 0.0083 0.0283

V 0.0086 0.0106 0.0294

VI 0.0075 0.0094 0.0273

Table 5.1: Numerical Result of Comparison

Figure 5.1 presents the relative error of our proposed expansion with respect to τ, which shows the robustness of the proposed expansion from a graphical perspective. From the figure we can find that the relative error of our proposed expansion is of order O(10−2) and the error decreases as τ tends to infinity.

96 Figure 5.1: Relative error of F(τ, x) between the four-term approximated solutions and the theoretical solution where x = 1.

Figure 5.2 and 5.3 compare the expansion with the Monte Carlo simulated results. The closeness in 5.2 shows that the behavior of our approximation are consistent with the simulation of real cases. This is further verified in

Figure 5.3, where the error behavior between the approximated and simulated responses of F(τ, x) consistently decreases and tend to 0 as τ → ∞ or as the number of trails added.

97 Figure 5.2: Comparison of transient responses between F(τ, x), W1(τ, x) and W2(τ, x), where x = 1, between the four-term approximated solutions and those generated by Monte Carlo simulation.

Figure 5.3: Relative error of F(τ, x) between the four-term approximated solutions and those generated by Monte Carlo simulation where x = 1.

98 Chapter 6

Conclusion and Future Work

99 This thesis models the job burst behavior in HPC environments and the sur- plus process of insurance companies in the context of ruin theory as stochastic fluid queues. We focus on analyzing the transient behavior of stochastic fluid queue fed by a single ON-OFF source and analyzes the transient behavior. We provide the analytical expressions of the cumulative distribution functions of the buffer content considering different cases with the initial state to be either ON or OFF. The asymptotic analysis is employed to provide robust approxi- mations for both short and long-time behavior. Furthermore, comprehensive approximations are proposed towards two cases that show direct correlations between the transient behavior and the parameters like the job burst rate in the

HPC case or the premium rate in the premium case. Moreover, using only a few terms can achieve an approximation of these behaviors with high accuracy.

The results of this work have several applications that can benefit researchers not only studying these models, but also their applications. For example, this work can be applied to understand the behavior of fast-packet switch- ing, from theoretical and practical viewpoints, within various distributed networks such as wireless and high-performance computing environments. This work can also be used to understand other types of fluid queues such as multi-dimensional fluid queues, those driven by M/M/1 queues, and tandem fluid queues [28]. On the other hand, the results of this work have several applications provide a explicit expression of the cumulative distribution func- tion of the insurer’s surplus in the context of ruin theory, which can be used to calculate the expectation value of the company in the future. Furthermore,

100 with the distribution function of surplus, we can not only determine the prob- ability of ruin, but also get the Value of Risk easily and consequently help the understanding of the risk management regarding the insurance companies.

Future explorations of this work include extending and generalizing this approach to provide robust transient solutions of fluid queues fed by multiple ON-OFF sources, where the applications include analyzing the effects of traffic congestion. Other explorations of this work include applying these results to quantify the reliability in heterogeneous ultra-dense distributed networks. In addition, from the perspective of insurance risk management, the future exploration directions also include establishing a comprehensive risk analysis strategy for distressed insurers and generalizing the arrival process and the size distribution of claims. Moreover, future researchers can extend this ap- proach to provide robust transient solutions of fluid queues fed by multiple ON-OFF sources. Other explorations of this work include applying these results to solving other stochastic processes like geometric Brownian Motion

[29].

101 Bibliography

[1] Jack Dongarra et al. “High-performance computing: clusters, constella- tions, MPPs, and future directions”. In: Computing in Science & Engineer- ing 7.2 (2005), pp. 51–59. [2] P Gonzalo et al. “HPC scheduling in a brave new world”. PhD thesis. Umeå universitet, 2017. [3] Wilfried Haensch et al. “Silicon CMOS devices beyond scaling”. In: IBM Journal of Research and Development 50.4.5 (2006), pp. 339–361. [4] Ning Liu et al. “On the role of burst buffers in leadership-class stor- age systems”. In: 012 ieee 28th symposium on mass storage systems and technologies (msst). IEEE. 2012, pp. 1–11. [5] Martin F Grace, Robert W Klein, and Richard D Phillips. “Insurance Company Failures: Why Do They Cost So Much?” In: Georgia State University Center for Risk Management and Insurance Research Working Paper 03-1 (2003). [6] Viral V Acharya and Matthew Richardson. “Is the insurance industry systemically risky?” In: Modernizing insurance regulation (2014), pp. 151– 80. [7] Karl Henrik Borch. The mathematical theory of insurance: an annotated selection of papers on insurance published 1960-1972. Lexington Books, 1974. [8] Matthias Hovestadt et al. “Scheduling in HPC resource management systems: Queuing vs. planning”. In: Workshop on Job Scheduling Strategies for Parallel Processing. Springer. 2003, pp. 1–20.

102 [9] Nenad Milidrag, George Kesidis, and Mihail Devetsikiotis. “Overview of fluid-based quick simulation techniques for large packet-switched communication networks”. In: Scalability and Traffic Control in IP Net- works. Vol. 4526. International Society for Optics and Photonics. 2001, pp. 271–278. [10] D. Anick, D. Mitra, and M. M. Sondhi. “Stochastic Theory of a Data- Handling System with Multiple Sources”. In: Bell Labs Technical Journal 61.8 (1982), pp. 1871–1894. [11] Lingjia Liu et al. “Resource allocation and quality of service evalua- tion for wireless communication systems using fluid models”. In: IEEE Transactions on Information Theory 53.5 (2007), pp. 1767–1777. [12] Ning Liu. “Modeling the Impact of Intentional Behavior in Supercom- puting Environments via 1-D and 2-D Stochastic Fluid Queues Fed by an ON-OFF Source”. PhD thesis. Johns Hopkins University, 2018. [13] Nigel G Bean and Małgorzata M O’Reilly. “A stochastic two-dimensional fluid model”. In: Stochastic Models 29.1 (2013), pp. 31–63. [14] Aviva Samuelson, MM O’Reilly, and NG Bean. “Generalised reward generator for stochastic fluid models”. In: Submitted to the 9th Interna- tional Conference on Matrix-Analytic Methods in Stochastic Models. 2016. [15] Takeshi Tanaka, On Hashida, and Yukio Takahashi. “Transient analysis of fluid model for ATM statistical multiplexer”. In: Performance evaluation 23.2 (1995), pp. 145–162. [16] P.R. Parthasarathy and K. V. Vijayashree. “A Fluid Queue Fed by an On-Off Source”. In: IEEE Annual INDICON. IEEE. 2005, pp. 396–398. [17] S. Aalto and W. R. W. Scheinhardt. “Tandem Fluid Queues Fed by Ho- mogeneous On–Off Sources”. In: Operations Research Letters 27.2 (2000), pp. 73–82. [18] Christopher D Daykin et al. “Assessing the solvency and financial strength of a general insurance company”. In: Journal of the Institute of Actuaries 114.2 (1987), pp. 227–325. [19] Tomasz Rolski et al. Stochastic processes for insurance and finance. Vol. 505. John Wiley & Sons, 2009. [20] Freddy Delbaen and Jean Haezendonck. “Classical risk theory in an economic environment”. In: Insurance: Mathematics and Economics 6.2 (1987), pp. 85–116.

103 [21] Hans U Gerber, Marc J Goovaerts, and Rob Kaas. “On the probability and severity of ruin”. In: ASTIN Bulletin: The Journal of the IAA 17.2 (1987), pp. 151–163. [22] Hans U Gerber and Elias SW Shiu. “On the time value of ruin”. In: North American Actuarial Journal 2.1 (1998), pp. 48–72. [23] François Dufresne and Hans U Gerber. “The probability and severity of ruin for combinations of exponential claim amount distributions and their translations”. In: Insurance: Mathematics and Economics 7.2 (1988), pp. 75–80. [24] Andrei Badescu et al. “Risk processes analyzed as fluid queues”. In: Scandinavian Actuarial Journal 2005.2 (2005), pp. 127–141. [25] Vaidyanathan Ramaswami. “Passage times in fluid models with ap- plication to risk processes”. In: Methodology and Computing in Applied Probability 8.4 (2006), pp. 497–515. [26] Tao Zhou et al. “Numerical solution for ruin probability of continuous time model based on neural network algorithm”. In: Neurocomputing 331 (2019), pp. 67–76. [27] Martin Eling and Ines Holzmüller. “An Overview and Comparison of Risk-Based Capital Standards.” In: Journal of Insurance Regulation 26.4 (2008). [28] K.V. Vijayashree and A. Anjuka. “Fluid Queue Driven by an M/M/1 Queue Subject to Bernoulli-Schedule-Controlled Vacation and Vacation Interruption”. In: Advanced in Operations Research (2016). [29] Søren Asmussen. “Stationary distributions for fluid flow models with or without Brownian noise”. In: Communications in statistics. Stochastic models 11.1 (1995), pp. 21–49. [30] Murray R Spiegel. Laplace transforms. McGraw-Hill New York, 1965.

104 Chapter 7

Appendix

7.1 Appendix A: Codes of Numerical Results for HPC Resource Allocation

Listing 7.1: Codes of Numerical Results for HPC Resource Allocation 1 close all 2 clear 3 warning off 4 tic 5 Lambda_est=[1.3,0.4]; 6 R=[ -1 ,4]; 7 x_fix =1; 8 dt =0.01; 9 t_n =20; 10 t=0:dt:t_n; 11 close 12 [F,W_1,W_2]=FQ(Lambda_est(1),Lambda_est(2),R(1),R(2), x_fix ,'O' ,1,t_n,dt); 13 [F_PS,W_1PS,W_2PS]=FQ(Lambda_est(1),Lambda_est(2),R(1), R(2),x_fix,'P' ,1,t_n,dt); 14 [F_AS,W_1AS,W_2AS]=FQ(Lambda_est(1),Lambda_est(2),R(1), R(2),x_fix,'A' ,1,t_n,dt); 15 [F_C,W_1C,W_2C]=FQ(Lambda_est(1),Lambda_est(2),R(1),R (2) ,x_fix ,'C' ,1,t_n,dt);

105 16 [F_C2,W_1C2,W_2C2]=FQ(Lambda_est(1),Lambda_est(2),R(1), R(2),x_fix,'C' ,5,t_n,dt); 17 [F_M,W_1M,W_2M]=MonteCarloSimulationFQ(Lambda_est(1), Lambda_est(2),R(1),R(2),x_fix,t_n,dt); 18 19 %% SenarioI 20 %F vsPS 21 figure 22 title ('Comparison of SenarioI ') 23 subplot(2,2,1) 24 figure 25 plot(t,F(round(t/dt)+1)) 26 hold on 27 plot(t,F_PS(round(t/dt)+1)) 28 %title( '(a) Prob(Q(t)\leqx) ','fontsize ' ,16) 29 legend ('F(t,x) ','F(t,x) Approximated ') 30 % axis([0,20,0,1]) 31 32 subplot(2,2,2) 33 plot(t,W_1(round(t/dt)+1),'b') 34 hold on 35 plot(t,W_2(round(t/dt)+1),'r') 36 plot(t,W_1PS(round(t/dt)+1),'b-. ') 37 plot(t,W_2PS(round(t/dt)+1),'r-. ') 38 title ('(b)W^1(t,x) andW^2(t,x) ','fontsize ' ,16) 39 legend ('W^1(t,x) ','W^2(t,x) ','W^1(t,x) Approximated ','W ^2(t,x) Approximated ') 40 % axis([0,20,0,1]) 41 42 subplot(2,2,3) 43 plot(t,abs(F(round(t/dt)+1)-F_PS(round(t/dt)+1))) 44 hold on 45 title ('(c) Absolute Difference ofF(t,x) ','fontsize ' ,16) 46 %axis([0,20]) 47 48 subplot(2,2,4)

106 49 plot(t,(abs(F_PS(round(t/dt)+1)-F(round(t/dt)+1))./F( round(t/dt)+1))) 50 hold on 51 title ('(d) Relative Difference ofF(t,x) ','fontsize ' ,16) 52 53 % Distance 54 L1_I=sum(abs(F(round(t/dt)+1)-F_PS(round(t/dt)+1))*dt); 55 L2_I=sqrt(sum(((F(round(t/dt)+1)-F_PS(round(t/dt)+1)) .^2) *dt)); 56 Linf_I=max(abs(F(round(t/dt)+1)-F_PS(round(t/dt)+1))*dt ); 57 Lmean_I=mean(abs(F(round(t/dt)+1)-F_PS(round(t/dt)+1))* dt); 58 L_I=[L1_I L2_I Linf_I Lmean_I]; 59 60 61 %% SenarioII 62 %F vsAS 63 figure 64 title ('Comparison of SenarioII ') 65 subplot(2,2,1) 66 plot(t,F(round(t/dt)+1)) 67 hold on 68 plot(t,F_AS(round(t/dt)+1)) 69 title ('(a) Prob(Q(t)\leqx) ','fontsize ' ,16) 70 legend ('F(t,x) ','F(t,x) Approximated ') 71 %axis([0,20,0,1]) 72 73 subplot(2,2,2) 74 plot(t,W_1(round(t/dt)+1),'b') 75 hold on 76 plot(t,W_2(round(t/dt)+1),'r') 77 plot(t,W_1AS(round(t/dt)+1),'b-. ') 78 plot(t,W_2AS(round(t/dt)+1),'r-. ') 79 title ('(b)W^1(t,x) andW^2(t,x) ','fontsize ' ,16) 80 legend ('W^1(t,x) ','W^2(t,x) ','W^1(t,x) Approximated ','W ^2(t,x) Approximated ')

107 81 %axis([0,20,0,1]) 82 83 subplot(2,2,3) 84 plot(t,abs(F(round(t/dt)+1)-F_AS(round(t/dt)+1))) 85 hold on 86 title ('(c) Absolute Difference ofF(t,x) ','fontsize ' ,16) 87 %axis([0,20]) 88 89 subplot(2,2,4) 90 plot(t,(abs(F_AS(round(t/dt)+1)-F(round(t/dt)+1))./F( round(t/dt)+1))) 91 hold on 92 title ('(d) Relative Difference ofF(t,x) ','fontsize ' ,16) 93 % Distance 94 L1_II=sum(abs(F(round(t/dt)+1)-F_AS(round(t/dt)+1))*dt) ; 95 L2_II=sqrt(sum(((F(round(t/dt)+1)-F_AS(round(t/dt)+1)) .^2) *dt)); 96 Linf_II=max(abs(F(round(t/dt)+1)-F_AS(round(t/dt)+1))* dt); 97 Lmean_II=mean(abs(F(round(t/dt)+1)-F_AS(round(t/dt)+1)) *dt); 98 L_II=[L1_II L2_II Linf_II Lmean_II]; 99 100 %% SenarioIII 101 %F vs Combination 102 figure 103 title ('Comparison of SenarioIII ') 104 subplot(2,2,1) 105 plot(t,F(round(t/dt)+1)) 106 hold on 107 plot(t,F_C(round(t/dt)+1)) 108 title ('(a) Prob(Q(t)\leqx) ','fontsize ' ,16) 109 legend ('F(t,x) ','F(t,x) Approximated ') 110 %axis([0,20,0,1]) 111

108 112 subplot(2,2,2) 113 plot(t,W_1(round(t/dt)+1),'b') 114 hold on 115 plot(t,W_2(round(t/dt)+1),'r') 116 plot(t,W_1C(round(t/dt)+1),'b-. ') 117 plot(t,W_2C(round(t/dt)+1),'r-. ') 118 title ('(b)W^1(t,x) andW^2(t,x) ','fontsize ' ,16) 119 legend ('W^1(t,x) ','W^2(t,x) ','W^1(t,x) Approximated ','W ^2(t,x) Approximated ') 120 %axis([0,20,0,1]) 121 122 subplot(2,2,3) 123 plot(t,abs(F(round(t/dt)+1)-F_C(round(t/dt)+1))) 124 hold on 125 title ('(c) Absolute Difference ofF(t,x) ','fontsize ' ,16) 126 %axis([0,20]) 127 128 subplot(2,2,4) 129 plot(t,(abs(F_C(round(t/dt)+1)-F(round(t/dt)+1))./F( round(t/dt)+1))) 130 hold on 131 title ('(d) Relative Difference ofF(t,x) ','fontsize ' ,16) 132 133 % Distance 134 L1_III=sum(abs(F(round(t/dt)+1)-F_C(round(t/dt)+1))*dt) ; 135 L2_III=sqrt(sum(((F(round(t/dt)+1)-F_C(round(t/dt)+1)) .^2) *dt)); 136 Linf_III=max(abs(F(round(t/dt)+1)-F_C(round(t/dt)+1))* dt); 137 Lmean_III=mean(abs(F(round(t/dt)+1)-F_C(round(t/dt)+1)) *dt); 138 139 L_III=[L1_III L2_III Linf_III Lmean_III]; 140 141 %% SenarioIV

109 142 %F vs Combination2 143 figure 144 title ('Comparison of SenarioIV ') 145 subplot(2,2,1) 146 plot(t,F(round(t/dt)+1)) 147 hold on 148 plot(t,F_C2(round(t/dt)+1)) 149 title ('(a) Prob(Q(t)\leqx) ','fontsize ' ,16) 150 legend ('F(t,x) ','F(t,x) Approximated ') 151 %axis([0,20,0,1]) 152 153 subplot(2,2,2) 154 plot(t,W_1(round(t/dt)+1),'b') 155 hold on 156 plot(t,W_2(round(t/dt)+1),'r') 157 plot(t,W_1C2(round(t/dt)+1),'b-. ') 158 plot(t,W_2C2(round(t/dt)+1),'r-. ') 159 title ('(b)W^1(t,x) andW^2(t,x) ','fontsize ' ,16) 160 legend ('W^1(t,x) ','W^2(t,x) ','W^1(t,x) Approximated ','W ^2(t,x) Approximated ') 161 %axis([0,20,0,1]) 162 163 subplot(2,2,3) 164 plot(t,abs(F(round(t/dt)+1)-F_C2(round(t/dt)+1))) 165 hold on 166 title ('(c) Absolute Difference ofF(t,x) ','fontsize ' ,16) 167 %axis([0,20]) 168 169 subplot(2,2,4) 170 plot(t,(abs(F_C2(round(t/dt)+1)-F(round(t/dt)+1))./F( round(t/dt)+1))) 171 hold on 172 title ('(d) Relative Difference ofF(t,x) ','fontsize ' ,16) 173 174 % Distance

110 175 L1_IV=sum(abs(F(round(t/dt)+1)-F_C2(round(t/dt)+1))*dt) /t_n; 176 L2_IV=sqrt(sum(((F(round(t/dt)+1)-F_C2(round(t/dt)+1)) .^2) *dt)); 177 Linf_IV=max(abs(F(round(t/dt)+1)-F_C2(round(t/dt)+1))* dt); 178 Lmean_IV=mean(abs(F(round(t/dt)+1)-F_C2(round(t/dt)+1)) *dt); 179 L_IV=[L1_IV L2_IV Linf_IV Lmean_IV]; 180 181 %% SenarioV 182 %F vsMC 183 figure 184 title ('Comparison of SenarioV ') 185 subplot(2,2,1) 186 plot(t,F(round(t/dt)+1)) 187 hold on 188 plot(t,F_M(round(t/dt)+1)) 189 title ('(a) Prob(Q(t)\leqx) ','fontsize ' ,16) 190 legend ('F(t,x) ','F(t,x) Simulated ') 191 % axis([0,20,0,1]) 192 193 subplot(2,2,2) 194 plot(t,W_1(round(t/dt)+1),'b') 195 hold on 196 plot(t,W_2(round(t/dt)+1),'r') 197 plot(t,W_1M(round(t/dt)+1),'b-. ') 198 plot(t,W_2M(round(t/dt)+1),'r-. ') 199 title ('(b)W^1(t,x) andW^2(t,x) ','fontsize ' ,16) 200 legend ('W^1(t,x) ','W^2(t,x) ','W^1(t,x) Simulated ','W^2( t,x) Simulated ') 201 % axis([0,20,0,1]) 202 203 subplot(2,2,3) 204 plot(t,abs(F(round(t/dt)+1)-F_M(round(t/dt)+1))) 205 hold on 206 title ('(c) Absolute Difference ofF(t,x) ','fontsize ' ,16)

111 207 %axis([0,20]) 208 209 subplot(2,2,4) 210 plot(t,(abs(F_M(round(t/dt)+1)-F(round(t/dt)+1))./F( round(t/dt)+1))) 211 hold on 212 title ('(d) Relative Difference ofF(t,x) ','fontsize ' ,16) 213 214 % Distance 215 L1_V=sum(abs(F(round(t/dt)+1)-F_M(round(t/dt)+1))*dt); 216 L2_V=sqrt(sum(((F(round(t/dt)+1)-F_M(round(t/dt)+1)) .^2) *dt)); 217 Linf_V=max(abs(F(round(t/dt)+1)-F_M(round(t/dt)+1))*dt) ; 218 Lmean_V=mean(abs(F(round(t/dt)+1)-F_M(round(t/dt)+1))* dt); 219 220 L_V=[L1_V L2_V Linf_V Lmean_V]; 221 222 223 %% SenarioVI 224 % Combination2 vsMC 225 figure 226 title ('Comparison of SenarioVI ') 227 subplot(2,2,1) 228 plot(t,F_C2(round(t/dt)+1)) 229 hold on 230 plot(t,F_M(round(t/dt)+1)) 231 title ('(a) Prob(Q(t)\leqx) ','fontsize ' ,16) 232 legend ('F(t,x) Approximated ','F(t,x) Simulated ') 233 % axis([0,20,0,1]) 234 235 subplot(2,2,2) 236 plot(t,W_1C2(round(t/dt)+1),'b') 237 hold on 238 plot(t,W_2C2(round(t/dt)+1),'r') 239 plot(t,W_1M(round(t/dt)+1),'b-. ')

112 240 plot(t,W_2M(round(t/dt)+1),'r-. ') 241 title ('(b)W^1(t,x) andW^2(t,x) ','fontsize ' ,16) 242 legend ('W^1(t,x) Approximated ','W^2(t,x) Approximated ', 'W^1(t,x) Simulated ','W^2(t,x) Simulated ') 243 % axis([0,20,0,1]) 244 245 subplot(2,2,3) 246 plot(t,abs(F_C2(round(t/dt)+1)-F_M(round(t/dt)+1))) 247 hold on 248 title ('(c) Absolute Difference ofF(t,x) ','fontsize ' ,16) 249 %axis([0,20]) 250 251 subplot(2,2,4) 252 plot(t,(abs(F_M(round(t/dt)+1)-F_C2(round(t/dt)+1))./ F_C2(round(t/dt)+1))) 253 hold on 254 title ('(d) Relative Difference ofF(t,x) ','fontsize ' ,16) 255 256 % Distance 257 L1_VI=sum(abs(F_C2(round(t/dt)+1)-F_M(round(t/dt)+1))* dt); 258 L2_VI=sqrt(sum(((F_C2(round(t/dt)+1)-F_M(round(t/dt)+1) ).^2)*dt)); 259 Linf_VI=max(abs(F_C2(round(t/dt)+1)-F_M(round(t/dt)+1)) *dt); 260 Lmean_VI=mean(abs(F_C2(round(t/dt)+1)-F_M(round(t/dt) +1))*dt); 261 262 L_VI=[L1_VI L2_VI Linf_VI Lmean_VI]; 263 264 %% 265 L1_VII=sum(abs(F_C(round(t/dt)+1)-F_M(round(t/dt)+1))* dt); 266 L2_VII=sqrt(sum(((F_C(round(t/dt)+1)-F_M(round(t/dt)+1) ).^2)*dt));

113 267 Linf_VII=max(abs(F_C(round(t/dt)+1)-F_M(round(t/dt)+1)) *dt); 268 Lmean_VII=mean(abs(F_C(round(t/dt)+1)-F_M(round(t/dt) +1))*dt); 269 270 L_VII=[L1_VII L2_VII Linf_VII Lmean_VII]; 271 %% 272 273 L=[L_I;L_III;L_IV;L_V;L_VII;L_VI]; 274 toc

Listing 7.2: FQ.m 1 function [W,F_0,F_1]=FQ(lambda,mu,r_0,r_1,x,Senario, Order,t_n,dt) 2 3 if nargin < 9, dt=0.5; end 4 if nargin < 8, t_n=20; end 5 if nargin < 7, Order = 1; end 6 if nargin < 6, Senario = 'o'; end 7 %% Constants 8 a=(lambda*r_1-mu*r_0)/(r_1-r_0); 9 b=(-1/2)*(1/r_0-1/r_1); 10 alpha=sqrt(-4*lambda*mu*r_0*r_1/((r_1-r_0)^2)); 11 12 %% Functions 13 K=@(t) ((t+2*b*x)./t).^(1/2); 14 15 % 16 %{ BesselI Functions 17 I_0=BI(0,Senario,Order); 18 I_1=BI(1,Senario,Order); 19 I_2=BI(2,Senario,Order); 20 %} 21 22 23 % Testing Version 24 Convol_left=@(t) exp((lambda+mu).*t); 25 %%g

114 26 27 g= @(t) (1/2)*alpha*exp(-a*t).*(K(t)-1./K(t)).* I_1(alpha*K(t).*t); 28 Convol_g=@(t) Convol_left(t).*g(t); 29 Integral_g_1=@(t) integral(g,0,t); 30 Integral_g_2=@(t) -exp(-(lambda+mu).*(t-x/r_1)).* integral(Convol_g,0,t); 31 Integral_g=@(t) 1-exp(-(lambda+mu).*(t-x/r_1))+ Integral_g_1(t)+Integral_g_2(t); 32 F_1_first= @(t) (lambda/(lambda+mu))*(1-exp(-( lambda+mu)*t)); 33 F_1_second=@(t) -(lambda/(lambda+mu))*exp(-mu*x/r_1 )*Integral_g(t); 34 35 F_1=zeros(1,t_n+1); 36 for t=0:dt:t_n 37 F_1(round(t/dt)+1)=F_1_first(t); 38 if t>x/r_1 39 F_1(round(t/dt)+1)=F_1_first(t)+F_1_second( t); 40 end 41 end 42 %W=F_1; 43 %{ 44 figure 45 hold on 46 plot([0:1:t_n],F_1); 47 %} 48 %% 49 %h 50 h=@(t) ((lambda*mu*r_1)/(r_1-r_0))*exp(-a*t).*( I_0 (alpha*K(t).*t)-(K(t).^(-2)) .*I_2(alpha*K(t).*t )); 51 Convol_h=@(t) Convol_left(t).*h(t); 52 Integral_h_1=@(t) integral(h,0,t); 53 Integral_h_2=@(t) -exp(-(lambda+mu).*(t-x/r_1)).* integral(Convol_h,0,t); 54

115 55 F_0_first= @(t) (mu/(lambda+mu)+lambda/(lambda+mu)* exp(-(lambda+mu)*t)); 56 F_0_second=@(t) -(1/(lambda+mu))*exp(-mu*x/r_1)*( Integral_h_1(t)+Integral_h_2(t)); 57 58 59 F_0=zeros(1,t_n+1); 60 for t=0:dt:t_n 61 F_0(round(t/dt)+1)=F_0_first(t); 62 if t>x/r_1 63 F_0(round(t/dt)+1)=F_0_first(t)+F_0_second( t); 64 end 65 end 66 F_0 (1) =1; 67 F_1 (1) =0; 68 69 W= F_0+F_1; 70 end

Listing 7.3: MonteCarloSimulationFQ.m 1 % Monte Carlo simulating Stochastic Fluid Queue 2 function [W,F1,F2]=MonteCarloSimulationFQ(lambda,mu,r_0 ,r_1,x,t_end,dt,u,s0,iter) 3 if nargin < 10, iter=1000; end 4 if nargin < 9, s0=0; end 5 if nargin < 8, u=0; end 6 if nargin < 7, dt=0.5; end 7 if nargin < 6, t_end=20; end 8 9 %2 states 10 % Number of Agents 11 %N=1; 12 % Rates Transforming between states 13 r=[r_0,r_1]; 14 lambda=[lambda,mu]; 15 16 % Distribution function

116 17 18 W=zeros(t_end/dt+1,1); 19 F1=zeros(t_end/dt+1,1); 20 F2=zeros(t_end/dt+1,1); 21 for k=1:iter 22 rng(k); 23 S=zeros(t_end/dt+1,1);%State 24 S(1)=s0; 25 %X=zeros(t_end/dt,1); 26 27 % Buffer Content 28 Q=zeros(t_end/dt+1,1); 29 Q(1)=u; 30 31 %% Simulation ofQ(t) 32 33 for ti=2:t_end/dt+1 34 p=exprnd(1/lambda(S(ti-1)+1)); 35 Q(ti)=max(Q(ti-1)+(r(S(ti-1)+1))*dt,0); 36 % ifQ(ti)==0 37 % break 38 % end 39 if p<=dt% Change State 40 S(ti)=mod(S(ti-1)+1,2); 41 else 42 S(ti)=mod(S(ti-1),2); 43 end 44 end 45 W=W+(Q<=x);%ECDF 46 F1=F1+((Q<=x).*(S==0)); 47 F2=F2+((Q<=x).*(S==1)); 48 % figure; plot(Q); 49 end 50 51 W=W/k; 52 F1=F1/k; 53 F2=F2/k; 54 W=W ';

117 55 F1=F1 '; 56 F2=F2 '; 57 end

Listing 7.4: BI.m

function Bessel=BI(n,senario,m)
% n       - order of the modified Bessel function of the first kind
% senario - 'o' exact besseli, 'p' power series, 'a' asymptotic series,
%           'c' combination of the power-series and asymptotic expansions
% m       - number of terms used in the approximation

if nargin < 3, m = 1; end
if nargin < 2, senario = 'o'; end

if (senario=='o' | senario=='O')
    Bessel=@(t) besseli(n,t);
    return;
end
if (senario=='p' | senario=='P')
    B=@(t) 0;
    for k=0:m
        I_PS=@(t) ((t/2).^n).*(1/(gamma(k+1).*gamma(k+n+1)).*(t/2).^(2*k));
        B=@(t) B(t)+I_PS(t);
    end
    Bessel=@(t) B(t);
    return;
end
if (senario=='a' | senario=='A')
    B=@(t) 0;
    for k=0:m
        I_AS=@(t) (1./(2*pi*t)).^(1/2).*exp(t).*(((-1)^k)*(n^(2*k))./((double_factorial(2*k).*(t.^(2*k)))));
        B=@(t) B(t)+I_AS(t);
    end
    Bessel=@(t) B(t);
    return;
end
if (senario=='c' | senario=='C')
    B=@(t) besseli(n,t);
    B_AS=@(t) 0;
    B_PS=@(t) 0;
    for k=0:m
        I_AS=@(t) (1./(2*pi*t)).^(1/2).*exp(t).*(((-1)^k)*(n^(2*k))./((double_factorial(2*k).*(t.^(2*k)))));
        B_AS=@(t) B_AS(t)+I_AS(t);
        I_PS=@(t) ((t/2).^n).*(1/(gamma(k+1).*gamma(k+n+1)).*(t/2).^(2*k));
        B_PS=@(t) B_PS(t)+I_PS(t);
    end
    % pointwise, keep whichever expansion is closer to the exact Bessel value
    Bessel=@(t) (abs(B_AS(t)-B(t))< abs(B_PS(t)-B(t))).*B_AS(t) ...
               +(abs(B_AS(t)-B(t))>=abs(B_PS(t)-B(t))).*B_PS(t);
    return;
end

Bessel=0;

    function result = double_factorial(n)
        if (n == 1 || n==0)
            result = 1;
        elseif (mod(n,2) == 0)   % even
            result = prod(2:2:n);
        else                     % odd
            result = prod(1:2:n);
        end
    end

end
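The three listings above can be exercised together. The driver sketched below is illustrative only: the parameter values (lambda, mu, r_0, r_1, x, the time horizon, and the number of Monte Carlo runs) are assumptions chosen for demonstration and are not the experiments reported in Chapter 3. It compares the closed-form solution returned by FQ.m, using the exact and the combined Bessel evaluations from BI.m, against the Monte Carlo estimate from MonteCarloSimulationFQ.m.

% Minimal usage sketch for Listings 7.2-7.4 (all parameter values are assumed)
lambda = 1; mu = 2; r_0 = -1; r_1 = 2; x = 1;
t_n = 20; dt = 0.5; t = 0:dt:t_n;

[W_exact,~,~] = FQ(lambda,mu,r_0,r_1,x,'o',1,t_n,dt);   % exact besseli evaluation
[W_comb ,~,~] = FQ(lambda,mu,r_0,r_1,x,'c',2,t_n,dt);   % combined two-term expansion
[W_mc   ,~,~] = MonteCarloSimulationFQ(lambda,mu,r_0,r_1,x,t_n,dt,0,0,500);

figure; plot(t,W_exact,t,W_comb,'--',t,W_mc,':');
legend('closed form','combined expansion','Monte Carlo');
xlabel('t'); ylabel('Prob(Q(t) \leq x)');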

7.2 Appendix B: MATLAB Codes of Numerical Results for Ruin Theory

Listing 7.5: MATLAB Codes of Numerical Results for Ruin Theory

close all
clear
warning off
tic

r=1; lambda=2; mu=5; u=3; x=2;

Lambda_est=[lambda,mu];
R=[-r,r];
x_fix=1;
dt=1/128;
t_n=20;
t=0:dt:t_n;
close

[F,W_1,W_2]      =RuinProb(Lambda_est(1),Lambda_est(2),R(1),R(2),x_fix,'O',1,t_n,dt,u);   % Theoretical
F=[0;F];       W_1=[0;W_1];     W_2=[0;W_2];
[F_PS,W_1PS,W_2PS]=RuinProb(Lambda_est(1),Lambda_est(2),R(1),R(2),x_fix,'P',1,t_n,dt,u);  % Power Series 1
F_PS=[0;F_PS]; W_1PS=[0;W_1PS]; W_2PS=[0;W_2PS];
[F_C,W_1C,W_2C]  =RuinProb(Lambda_est(1),Lambda_est(2),R(1),R(2),x_fix,'C',1,t_n,dt,u);   % Combination 1
F_C=[0;F_C];   W_1C=[0;W_1C];   W_2C=[0;W_2C];
[F_C2,W_1C2,W_2C2]=RuinProb(Lambda_est(1),Lambda_est(2),R(1),R(2),x_fix,'C',2,t_n,dt,u);  % Combination 2
F_C2=[0;F_C2]; W_1C2=[0;W_1C2]; W_2C2=[0;W_2C2];
[F_M,W_1M,W_2M]  =MonteCarloSimulationFQ(Lambda_est(1),Lambda_est(2),R(1),R(2),x_fix,t_n,dt,u,1);  % Monte Carlo
F_M=F_M'; W_1M=W_1M'; W_2M=W_2M';

%% Scenario comparisons
% I:  F vs Power Series      II: F vs Combination 1     III: F vs Combination 2
% IV: F vs Monte Carlo       V:  Combination 1 vs MC    VI:  Combination 2 vs MC
scen ={'I','II','III','IV','V','VI'};
refF ={F,     F,    F,     F,    F_C,  F_C2 };
refW1={W_1,   W_1,  W_1,   W_1,  W_1C, W_1C2};
refW2={W_2,   W_2,  W_2,   W_2,  W_2C, W_2C2};
cmpF ={F_PS,  F_C,  F_C2,  F_M,  F_M,  F_M  };
cmpW1={W_1PS, W_1C, W_1C2, W_1M, W_1M, W_1M };
cmpW2={W_2PS, W_2C, W_2C2, W_2M, W_2M, W_2M };
sufA ={'','','','','Approximated','Approximated'};
sufB ={'Approximated','Approximated','Approximated','Simulated','Simulated','Simulated'};

idx=round(t/dt)+1;
L=zeros(6,4);   % rows: Scenarios I-VI; columns: L1, L2, L-infinity, mean distances
for k=1:6
    A =refF{k}(idx);  B =cmpF{k}(idx);
    A1=refW1{k}(idx); A2=refW2{k}(idx);
    B1=cmpW1{k}(idx); B2=cmpW2{k}(idx);

    figure('Name',['Comparison of Scenario ' scen{k}])
    subplot(2,2,1)
    plot(t,A); hold on; plot(t,B)
    title('(a) Prob(Q(t)\leq x)','fontsize',16)
    legend(strtrim(['F(t,x) ' sufA{k}]),strtrim(['F(t,x) ' sufB{k}]))

    subplot(2,2,2)
    plot(t,A1,'b'); hold on; plot(t,A2,'r')
    plot(t,B1,'b-.'); plot(t,B2,'r-.')
    title('(b) W^1(t,x) and W^2(t,x)','fontsize',16)
    legend(strtrim(['W^1(t,x) ' sufA{k}]),strtrim(['W^2(t,x) ' sufA{k}]), ...
           strtrim(['W^1(t,x) ' sufB{k}]),strtrim(['W^2(t,x) ' sufB{k}]))

    subplot(2,2,3)
    plot(t,abs(A-B)); hold on
    title('(c) Absolute Difference of F(t,x)','fontsize',16)

    subplot(2,2,4)
    plot(t,(A>0).*(abs(B-A)./A)); hold on
    title('(d) Relative Difference of F(t,x)','fontsize',16)

    % Distance measures between the two curves
    d=rmmissing(abs(A-B));
    L(k,:)=[sum(d*dt)/t_n, sqrt(sum(d.^2*dt)/t_n), max(d), mean(d)];
end

L=real(L);
toc

Listing 7.6: RuinProb.m

function [W,W_0,W_1]=RuinProb(lambda,mu,r_0,r_1,x,Senario,Order,t_end,dt,u,s0,iter)

if nargin < 9, dt=0.5; end
if nargin < 8, t_end=20; end
if nargin < 7, Order = 1; end
if nargin < 6, Senario = 'o'; end

%% Constants
r=r_1;
a = (lambda+mu)/2;
alpha = sqrt(lambda*mu);
v0 = (u-x)/r;
y = @(t) (t.^2-v0^2).^(1/2);
v0tilde = (u+x)/r;
ytilde = @(t) (t.^2-v0tilde^2).^(1/2);

% BesselI function handles (exact, power series, or asymptotic, per Senario)
I_0=BI(0,Senario,Order);
I_1=BI(1,Senario,Order);
I_2=BI(2,Senario,Order);

CE=exp(-(lambda-mu)*(u-x)/(2*r));

%% W1
% t > v0
g31in = @(t,v) (1/2)*exp(-a*v).*I_0(alpha*y(v));
g32in = @(t,v) (1/2)*exp(a*v).*I_0(alpha*y(v));
g3 = @(t) (-mu*(lambda-mu)/(2*(lambda+mu)))*( integral(@(v) g31in(t,v), v0, t) - exp(-2*a*t).*( integral(@(v) g32in(t,v), v0, t)));

g41in = @(t,v) alpha*v0*exp(-a*v).*(1./y(v)).*I_1(alpha*y(v));
g42in = @(t,v) alpha*v0*exp(a*v).*(1./y(v)).*I_1(alpha*y(v));
g4 = @(t) (mu/(2*(lambda+mu)))*( integral(@(v) g41in(t,v), v0, t) + exp(-a*v0) - exp(-2*a*t).*( integral(@(v) g42in(t,v), v0, t) + exp(a*v0) ) );

% t > v0tilde
g91in = @(t,v) (1/2)*exp(-a*v).*I_0(alpha*ytilde(v));
g92in = @(t,v) (1/2)*exp(-(lambda+mu)*(t-v)).*exp(-a*v).*I_0(alpha*ytilde(v));
g9 = @(t) ( (1/lambda)*alpha*t.*(1/ytilde(t)).*I_1(alpha*ytilde(t))+((lambda-mu)/(2*lambda))*I_0(alpha*ytilde(t)) ).*exp(-a*t) + (lambda*(lambda-mu)/(2*(lambda+mu))) .* integral(@(v) g91in(t,v), v0tilde, t) - (mu^2)*(lambda-mu)/(2*lambda*(lambda+mu)) .* integral(@(v) g92in(t,v), v0tilde, t);

g101in = @(t,v) (alpha*v0tilde)*exp(-a*v).*(1./ytilde(v)).*I_1(alpha*ytilde(v));
g102in = @(t,v) exp(-(lambda+mu)*(t-v)).*exp(-a*v)*alpha*v0tilde.*(1./ytilde(v)).*I_1(alpha*ytilde(v));
g10 = @(t) (1/lambda)*exp(-a*t)*alpha*v0tilde.*(1./ytilde(t)).*I_1(alpha*ytilde(t)) + lambda/(2*(lambda+mu)).*( integral(@(v) g101in(t,v), v0tilde, t) + exp(-a*v0tilde) ) - (mu^2)/(2*lambda*(lambda+mu)).*( integral(@(v) g102in(t,v), v0tilde, t) + exp(-(lambda+mu)*(t-v0tilde)-a*v0tilde) );

F1 = @(t) CE.*(t>v0).*( g3(t)+g4(t) ) + CE.*(t>v0tilde).*( g9(t)-g10(t) );

%% W2
f11in = @(t,v) exp(a*v)*alpha*v0.*(1./y(v)).*I_1(alpha*y(v));
f11 = @(t) ( mu/(2*(lambda+mu)) )*exp(-2*a*t).*( integral(@(v) f11in(t,v), v0, t) + exp(a*v0) );

f12in = @(t,v) exp(a*v)*alpha*v0tilde.*(1./ytilde(v)).*I_1(alpha*ytilde(v));
f12 = @(t) -( mu/(2*(lambda+mu)) )*exp(-2*a*t).*( integral(@(v) f12in(t,v), v0tilde, t) + exp(a*v0tilde) );

f21in = @(t,v) exp(-a*v)*alpha*v0.*(1./y(v)).*I_1(alpha*y(v));
f21 = @(t) ( lambda/(2*(lambda+mu)) )*( integral(@(v) f21in(t,v), v0, t) + exp(-a*v0) );

f22in = @(t,v) exp(-a*v)*alpha*v0tilde.*(1./ytilde(v)).*I_1(alpha*ytilde(v));
f22 = @(t) -( lambda/(2*(lambda+mu)) )*( integral(@(v) f22in(t,v), v0tilde, t) + exp(-a*v0tilde) );

f31in = @(t,v) exp(a*v).*I_0(alpha*y(v));
f31 = @(t) -(mu*(lambda-mu)/(2*(lambda+mu)))*((1/2)*exp(-2*a*t).*integral(@(v) f31in(t,v), v0, t));

f32in = @(t,v) exp(a*v).*I_0(alpha*ytilde(v));
f32 = @(t) (mu*(lambda-mu)/(2*(lambda+mu)))*((1/2)*exp(-2*a*t).*integral(@(v) f32in(t,v), v0tilde, t));

f41 = @(t) -(1/2)*exp(-a*t).*I_0(alpha*y(t));
f42 = @(t) (1/2)*exp(-a*t).*I_0(alpha*ytilde(t));

f51in = @(t,v) (1/2)*exp(-a*v).*I_0(alpha*y(v));
f51 = @(t) -(lambda*(lambda-mu)/(2*(lambda+mu)))*integral(@(v) f51in(t,v), v0, t);

f52in = @(t,v) (1/2)*exp(-a*v).*I_0(alpha*ytilde(v));
f52 = @(t) (lambda*(lambda-mu)/(2*(lambda+mu)))*integral(@(v) f52in(t,v), v0tilde, t);

F2 = @(t) CE.*( (t>v0).*( f11(t)+f21(t)+f31(t)+f41(t)+f51(t) ) + (t>v0tilde).*( f12(t)+f22(t)+f32(t)+f42(t)+f52(t) ) );

t_start=0;
W_0 = zeros((t_end-t_start)/dt+1,1);
W_1 = zeros((t_end-t_start)/dt+1,1);
for tt=t_start:dt:t_end
    W_0((tt-t_start)/dt+1)=F1(tt);
    W_1((tt-t_start)/dt+1)=F2(tt);
end
W=W_0+W_1;
end

7.3 Appendix C: Technical Details for the Inverse Laplace Transform in Chapter 2

From (2.2.39) we know that
\[
W^1(t,x) = \mathcal{L}^{-1}\left\{ \hat{W}^1(s,x) \right\}
= \frac{\lambda_{21}}{\lambda_{12}+\lambda_{21}}\,\mathcal{L}^{-1}\left\{ \frac{1}{s} \right\}
+ \frac{\lambda_{12}}{\lambda_{12}+\lambda_{21}}\,\mathcal{L}^{-1}\left\{ \frac{1}{s+\lambda_{12}+\lambda_{21}} \right\}
- \mathcal{L}^{-1}\left\{ \frac{s+\lambda_{21}+\phi_2\,\omega_0(s)}{s\,(s+\lambda_{12}+\lambda_{21})}\, e^{\omega_0(s)x} \right\}.
\tag{7.3.1}
\]
According to the inverse Laplace Transform table [30],
\[
\mathcal{L}^{-1}\left\{ \frac{1}{s} \right\} = 1, \tag{7.3.2}
\]
and
\[
\mathcal{L}^{-1}\left\{ \frac{1}{s+\lambda_{12}+\lambda_{21}} \right\} = e^{-(\lambda_{12}+\lambda_{21})t}. \tag{7.3.3}
\]
Thus,
\[
\begin{aligned}
\mathcal{L}^{-1}\left\{ \frac{s+\lambda_{21}+\phi_2\,\omega_0(s)}{s\,(s+\lambda_{12}+\lambda_{21})}\, e^{\omega_0(s)x} \right\}
&= \mathcal{L}^{-1}\left\{ \left(\frac{1}{s}-\frac{1}{s+\lambda_{12}+\lambda_{21}}\right)\cdot \frac{s+\lambda_{21}+\phi_2\,(\beta(s)+\eta(s))}{\lambda_{12}+\lambda_{21}}\, e^{\beta(s)x+\eta(s)x} \right\} \\
&= \mathcal{L}^{-1}\left\{ \left(\frac{1}{s}-\frac{1}{s+\lambda_{12}+\lambda_{21}}\right) e^{\beta(s)x}\cdot \frac{\phi_2\,\eta(s)}{\lambda_{12}+\lambda_{21}}\, e^{\eta(s)x} \right\} \\
&= \mathcal{L}^{-1}\left\{ \left(\frac{1}{s}-\frac{1}{s+\lambda_{12}+\lambda_{21}}\right) e^{\beta(s)x} \right\} * \mathcal{L}^{-1}\left\{ \frac{\phi_2\,\eta(s)}{\lambda_{12}+\lambda_{21}}\, e^{\eta(s)x} \right\},
\end{aligned}
\tag{7.3.4}
\]

where $*$ is the convolution operator. The inverse Laplace Transform of the first term of equation (7.3.4) can be represented as
\[
\begin{aligned}
\mathcal{L}^{-1}\left\{ \left(\frac{1}{s}-\frac{1}{s+\lambda_{12}+\lambda_{21}}\right) e^{\beta(s)x} \right\}
&= \mathcal{L}^{-1}\left\{ \frac{1}{s}\, e^{-\frac{(s+\lambda_{21})x}{\phi_2}} \right\}
 - \mathcal{L}^{-1}\left\{ \frac{1}{s+\lambda_{12}+\lambda_{21}}\, e^{-\frac{(s+\lambda_{21})x}{\phi_2}} \right\} \\
&= e^{-\frac{\lambda_{21}}{\phi_2}x}\left( \mathcal{L}^{-1}\left\{ \frac{1}{s}\, e^{-\frac{x}{\phi_2}s} \right\} - \mathcal{L}^{-1}\left\{ \frac{1}{s+\lambda_{12}+\lambda_{21}}\, e^{-\frac{x}{\phi_2}s} \right\} \right) \\
&= \begin{cases} 0, & 0 < t < \dfrac{x}{\phi_2}, \\[1ex]
e^{-\frac{\lambda_{21}}{\phi_2}x}\left( 1 - e^{-(\lambda_{12}+\lambda_{21})\left(t-\frac{x}{\phi_2}\right)} \right), & t > \dfrac{x}{\phi_2}. \end{cases}
\end{aligned}
\tag{7.3.5}
\]

We define the second term of equation (7.3.4) as
\[
h(t,x) = \mathcal{L}^{-1}\left\{ \frac{\phi_2\,\eta(s)}{\lambda_{12}+\lambda_{21}}\, e^{\eta(s)x} \right\}, \tag{7.3.6}
\]
which can be represented as
\[
\begin{aligned}
h(t,x) &= \frac{\phi_2}{\lambda_{12}+\lambda_{21}}\,\mathcal{L}^{-1}\left\{ b\left( (s+a)-\left[(s+a)^2-\alpha^2\right]^{\frac12} \right) e^{\,b\left( (s+a)-\left[(s+a)^2-\alpha^2\right]^{\frac12} \right)x} \right\} \\
&= \frac{\phi_2}{\lambda_{12}+\lambda_{21}}\, e^{-at}\,\mathcal{L}^{-1}\left\{ b\left( s-\left(s^2-\alpha^2\right)^{\frac12} \right) e^{\,bx\left( s-\left(s^2-\alpha^2\right)^{\frac12} \right)} \right\}.
\end{aligned}
\tag{7.3.7}
\]
Let
\[
p = \left( s^2 - \alpha^2 \right)^{\frac12} \tag{7.3.8}
\]
so that
\[
s^2 - p^2 = \alpha^2, \tag{7.3.9}
\]
and equation (7.3.7) can be arranged as
\[
\begin{aligned}
h(t,x) &= \frac{\phi_2}{\lambda_{12}+\lambda_{21}}\, e^{-at}\,\mathcal{L}^{-1}\left\{ b\,(s-p)\, e^{bx(s-p)} \right\} \\
&= \frac{b\,\phi_2}{\lambda_{12}+\lambda_{21}}\, e^{-at}\,\mathcal{L}^{-1}\left\{ \frac{2p\,(s+p)\big( (s-p)(s+p) \big)}{2p\,(s+p)^2}\, e^{bx(s-p)} \right\} \\
&= \frac{b\,\alpha^2\phi_2}{2\,(\lambda_{12}+\lambda_{21})}\, e^{-at}\,\mathcal{L}^{-1}\left\{ \frac{(s+p)^2-\alpha^2}{p\,(s+p)^2}\, e^{bx(s-p)} \right\} \\
&= \frac{b\,\alpha^2\phi_2}{2\,(\lambda_{12}+\lambda_{21})}\, e^{-at}\,\mathcal{L}^{-1}\left\{ p^{-1}\left( 1 - \alpha^2 (s+p)^{-2} \right) e^{bx(s-p)} \right\}.
\end{aligned}
\tag{7.3.10}
\]

Define $P$ as
\[
P = s + p, \tag{7.3.11}
\]
and we have
\[
h(t,x) = \frac{b\,\alpha^2\phi_2}{2\,(\lambda_{12}+\lambda_{21})}\, e^{-at}\left( \mathcal{L}^{-1}\left\{ p^{-1} e^{bx(s-p)} \right\} - \alpha^2\,\mathcal{L}^{-1}\left\{ p^{-1}P^{-2} e^{bx(s-p)} \right\} \right). \tag{7.3.12}
\]
According to the inverse Laplace Transform table [30],
\[
\mathcal{L}^{-1}\left\{ p^{-1} e^{bx(s-p)} \right\} = I_0\!\left( \alpha\left( t^2 + 2bxt \right)^{\frac12} \right) \tag{7.3.13}
\]
and
\[
\mathcal{L}^{-1}\left\{ \alpha^2 p^{-1} P^{-2} e^{bx(s-p)} \right\} = t^{-1}(t+2bx)\, I_2\!\left( \alpha\left( t^2 + 2bxt \right)^{\frac12} \right), \tag{7.3.14}
\]
where $I_\nu(x)$ is the modified Bessel function of the first kind of order $\nu$, i.e., a solution of the differential equation
\[
x^2 y'' + x y' - \left( x^2 + \nu^2 \right) y = 0. \tag{7.3.15}
\]
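These table pairs can be spot-checked numerically by applying the forward Laplace transform to the right-hand side of (7.3.13) and comparing the result with $p^{-1}e^{bx(s-p)}$. The MATLAB sketch below is only such a sanity check under assumed values of $\alpha$, $b$, and $x$ (with $bx>0$ and $s>\alpha$ so the integral converges); it is not part of the derivation.

% Numerical spot-check of (7.3.13); alpha, b, x, and s are illustrative assumptions.
alpha = 1.5; b = 0.4; x = 2;                              % assumed constants, b*x > 0
f = @(t) besseli(0, alpha*sqrt(t.^2 + 2*b*x*t));          % right-hand side of (7.3.13)
for s = [2.5 3 4]                                         % s > alpha for convergence
    lhs = integral(@(t) exp(-s*t).*f(t), 0, Inf);         % forward Laplace transform
    p   = sqrt(s^2 - alpha^2);
    rhs = exp(b*x*(s - p))/p;                             % p^{-1} e^{b x (s - p)}
    fprintf('s = %.1f: numerical = %.6f, closed form = %.6f\n', s, lhs, rhs);
end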

Let
\[
\kappa(t,x) = \left( \frac{t+2bx}{t} \right)^{\frac12} = \left( 1 - \left( \frac{1}{\phi_1} - \frac{1}{\phi_2} \right)\frac{x}{t} \right)^{\frac12} \tag{7.3.16}
\]
and
\[
\rho(t,x) = \alpha\,\kappa(t,x)\,t; \tag{7.3.17}
\]
then
\[
\left( t^2 + 2bxt \right)^{\frac12} = t\,\kappa(t,x) = \frac{\rho(t,x)}{\alpha}. \tag{7.3.18}
\]
Thus, summarizing equations (7.3.13) to (7.3.18) and substituting back into (7.3.12) yields
\[
h(t,x) = \frac{b\,\alpha^2\phi_2}{2\,(\lambda_{12}+\lambda_{21})}\, e^{-at}\left( I_0\!\left[\rho(t,x)\right] - \frac{1}{\kappa(t,x)^2}\, I_2\!\left[\rho(t,x)\right] \right). \tag{7.3.19}
\]

Combining the equations above and substituting into equation (7.3.1), we have
\[
W^1(t,x) =
\begin{cases}
\dfrac{\lambda_{21}}{\lambda_{12}+\lambda_{21}} + \dfrac{\lambda_{12}}{\lambda_{12}+\lambda_{21}}\, e^{-(\lambda_{12}+\lambda_{21})t}, & 0 < t < \dfrac{x}{\phi_2}, \\[2ex]
\dfrac{\lambda_{21}}{\lambda_{12}+\lambda_{21}} + \dfrac{\lambda_{12}}{\lambda_{12}+\lambda_{21}}\, e^{-(\lambda_{12}+\lambda_{21})t}
 - e^{-\frac{\lambda_{21}}{\phi_2}x}\displaystyle\int_0^t f_1(t-v,x)\, h(v,x)\, dv, & t > \dfrac{x}{\phi_2},
\end{cases}
\tag{7.3.20}
\]
which is (2.2.49), where
\[
h(t,x) = \frac{b\,\alpha^2\phi_2}{2\,(\lambda_{12}+\lambda_{21})}\, e^{-at}\left( I_0\!\left[\rho(t,x)\right] - \frac{1}{\kappa(t,x)^2}\, I_2\!\left[\rho(t,x)\right] \right), \tag{7.3.21}
\]
\[
f_1(t,x) = 1 - e^{-(\lambda_{12}+\lambda_{21})\left(t-\frac{x}{\phi_2}\right)}, \tag{7.3.22}
\]
\[
\kappa(t,x) = \left( \frac{t+2bx}{t} \right)^{\frac12} = \left( 1 - \left( \frac{1}{\phi_1} - \frac{1}{\phi_2} \right)\frac{x}{t} \right)^{\frac12}, \tag{7.3.23}
\]
\[
\rho(t,x) = \alpha\,\kappa(t,x)\,t, \tag{7.3.24}
\]
and $I_n(x)$, $n=0,1,2$, are the modified Bessel functions of the first kind.
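For numerical work, (7.3.20)-(7.3.24) can be evaluated directly with quadrature; Listing 7.2 (FQ.m) follows the same structure. The sketch below is a minimal transcription of these formulas, assuming the same parameter mapping as in Listing 7.2 (lambda = $\lambda_{12}$, mu = $\lambda_{21}$, r_0 = $\phi_1$, r_1 = $\phi_2$) and illustrative parameter values; the constants a, b, and alpha are computed exactly as in FQ.m.

% Direct evaluation of W^1(t,x) from (7.3.20)-(7.3.24); parameter values are assumed.
lam12 = 1; lam21 = 2; phi1 = -1; phi2 = 2; xx = 1;        % assumed, FQ.m mapping
a     = (lam12*phi2 - lam21*phi1)/(phi2 - phi1);          % constants as in Listing 7.2
b     = -(1/phi1 - 1/phi2)/2;
alpha = sqrt(-4*lam12*lam21*phi1*phi2/(phi2 - phi1)^2);
kappa = @(t) sqrt((t + 2*b*xx)./t);                       % (7.3.23)
rho   = @(t) alpha.*kappa(t).*t;                          % (7.3.24)
h     = @(t) (b*alpha^2*phi2/(2*(lam12+lam21))).*exp(-a*t) ...
            .*(besseli(0,rho(t)) - besseli(2,rho(t))./kappa(t).^2);   % (7.3.21)
f1    = @(t) 1 - exp(-(lam12+lam21).*(t - xx/phi2));      % (7.3.22)
W1    = @(t) lam21/(lam12+lam21) + lam12/(lam12+lam21)*exp(-(lam12+lam21)*t) ...
            - (t > xx/phi2)*exp(-lam21*xx/phi2) ...
              *integral(@(v) f1(t-v).*h(v), 0, t);        % (7.3.20), scalar t
W1(3)                                                     % e.g., W^1(3,x)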

Similarly, we have
\[
\begin{aligned}
W^2(t,x) &= \mathcal{L}^{-1}\left\{ \hat{W}^2(s,x) \right\}
= \mathcal{L}^{-1}\left\{ \frac{\lambda_{12}}{\lambda_{12}+\lambda_{21}}\left( \frac{1}{s} - \frac{1}{s+\lambda_{12}+\lambda_{21}} \right)\left( 1 - e^{\omega_0(s)x} \right) \right\} \\
&= \frac{\lambda_{12}}{\lambda_{12}+\lambda_{21}}\left( \mathcal{L}^{-1}\left\{ \frac{1}{s} \right\} - \mathcal{L}^{-1}\left\{ \frac{1}{s+\lambda_{12}+\lambda_{21}} \right\}
 - \mathcal{L}^{-1}\left\{ \left( \frac{1}{s} - \frac{1}{s+\lambda_{12}+\lambda_{21}} \right) e^{\omega_0(s)x} \right\} \right),
\end{aligned}
\tag{7.3.25}
\]
where
\[
\begin{aligned}
\mathcal{L}^{-1}\left\{ \left( \frac{1}{s} - \frac{1}{s+\lambda_{12}+\lambda_{21}} \right) e^{\omega_0(s)x} \right\}
&= \mathcal{L}^{-1}\left\{ \left( \frac{1}{s} - \frac{1}{s+\lambda_{12}+\lambda_{21}} \right) e^{\beta(s)x}\, e^{\eta(s)x} \right\} \\
&= \mathcal{L}^{-1}\left\{ \left( \frac{1}{s} - \frac{1}{s+\lambda_{12}+\lambda_{21}} \right) e^{\beta(s)x} \right\} * \mathcal{L}^{-1}\left\{ e^{\eta(s)x} \right\}.
\end{aligned}
\tag{7.3.26}
\]

The inverse Laplace Transform of the first term of equation (7.3.26) can be represented as
\[
\begin{aligned}
\mathcal{L}^{-1}\left\{ \left( \frac{1}{s} - \frac{1}{s+\lambda_{12}+\lambda_{21}} \right) e^{\beta(s)x} \right\}
&= \mathcal{L}^{-1}\left\{ \frac{1}{s}\, e^{\beta(s)x} \right\} - \mathcal{L}^{-1}\left\{ \frac{1}{s+\lambda_{12}+\lambda_{21}}\, e^{\beta(s)x} \right\} \\
&= e^{-\frac{\lambda_{21}}{\phi_2}x}\left( \mathcal{L}^{-1}\left\{ \frac{1}{s}\, e^{-\frac{x}{\phi_2}s} \right\} - \mathcal{L}^{-1}\left\{ \frac{1}{s+\lambda_{12}+\lambda_{21}}\, e^{-\frac{x}{\phi_2}s} \right\} \right) \\
&= \begin{cases} 0, & 0 < t < \dfrac{x}{\phi_2}, \\[1ex]
e^{-\frac{\lambda_{21}}{\phi_2}x}\left( 1 - e^{-(\lambda_{12}+\lambda_{21})\left(t-\frac{x}{\phi_2}\right)} \right), & t > \dfrac{x}{\phi_2}. \end{cases}
\end{aligned}
\tag{7.3.27}
\]

Define $g(t,x)$ to be the second term of equation (7.3.26), namely,
\[
g(t,x) = \mathcal{L}^{-1}\left\{ e^{\eta(s)x} \right\}; \tag{7.3.28}
\]
then we have
\[
\begin{aligned}
g(t,x) &= \mathcal{L}^{-1}\left\{ e^{\,bx\left( (s+a)-\left((s+a)^2-\alpha^2\right)^{\frac12} \right)} \right\} \\
&= e^{-at}\,\mathcal{L}^{-1}\left\{ e^{bx(s-p)} \right\} \\
&= e^{-at}\left( \mathcal{L}^{-1}\left\{ e^{bx(s-p)} - 1 \right\} + \mathcal{L}^{-1}\left\{ 1 \right\} \right).
\end{aligned}
\tag{7.3.29}
\]
According to the inverse Laplace Transform table [30],
\[
\mathcal{L}^{-1}\left\{ e^{bx(s-p)} - 1 \right\} = \alpha b x \left( t^2 + 2bxt \right)^{-\frac12} I_1\!\left( \alpha\left( t^2 + 2bxt \right)^{\frac12} \right), \tag{7.3.30}
\]
and
\[
\mathcal{L}^{-1}\left\{ 1 \right\} = \delta(t). \tag{7.3.31}
\]

Plugging equations (7.3.30) and (7.3.31) into (7.3.29), we can represent $g(t,x)$ as
\[
\begin{aligned}
g(t,x) &= e^{-at}\left( \alpha b x \left( t^2 + 2bxt \right)^{-\frac12} I_1\!\left( \alpha\left( t^2 + 2bxt \right)^{\frac12} \right) + \delta(t) \right) \\
&= e^{-at}\left( \frac{\alpha b x}{t\,\kappa(t,x)}\, I_1\!\left[ \alpha\,\kappa(t,x)\,t \right] + \delta(t) \right) \\
&= e^{-at}\left( \frac{\alpha}{2\,\kappa(t,x)}\left( \kappa(t,x)^2 - 1 \right) I_1\!\left[ \rho(t,x) \right] + \delta(t) \right).
\end{aligned}
\tag{7.3.32}
\]
Plugging equations (7.3.27) and (7.3.32) into (2.2.54), we have
\[
W^2(t,x) =
\begin{cases}
\dfrac{\lambda_{12}}{\lambda_{12}+\lambda_{21}}\left( 1 - e^{-(\lambda_{12}+\lambda_{21})t} \right), & 0 < t < \dfrac{x}{\phi_2}, \\[2ex]
\dfrac{\lambda_{12}}{\lambda_{12}+\lambda_{21}}\left( 1 - e^{-(\lambda_{12}+\lambda_{21})t}
 - e^{-\frac{\lambda_{21}}{\phi_2}x}\displaystyle\int_0^t f_1(t-v,x)\, g(v,x)\, dv \right), & t > \dfrac{x}{\phi_2},
\end{cases}
\tag{7.3.33}
\]
which is (2.2.55), where $f_1(t,x)$ is defined in (7.3.22) and $g(t,x)$ is given by
\[
g(t,x) = e^{-at}\left( \frac{\alpha}{2\,\kappa(t,x)}\left( \kappa(t,x)^2 - 1 \right) I_1\!\left[ \rho(t,x) \right] + \delta(t) \right), \tag{7.3.34}
\]
where $\delta(t)$ is the Dirac delta function.
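When (7.3.33) is evaluated numerically, the Dirac delta in (7.3.34) should not be handed to a quadrature routine: since $\int_0^t f_1(t-v,x)\,e^{-av}\,\delta(v)\,dv = f_1(t,x)$, its contribution to the convolution is simply $f_1(t,x)$, and only the smooth Bessel part needs quadrature. A minimal sketch, reusing the assumed constants and handles (a, alpha, kappa, rho, f1, lam12, lam21, phi2, xx) from the sketch after (7.3.24):

% Smooth part of g(t,x) in (7.3.34); the delta term is added analytically below.
g_sm = @(t) exp(-a*t).*(alpha./(2*kappa(t))).*(kappa(t).^2 - 1).*besseli(1,rho(t));
W2   = @(t) lam12/(lam12+lam21)*( 1 - exp(-(lam12+lam21)*t) ...
           - (t > xx/phi2)*exp(-lam21*xx/phi2) ...
             *( integral(@(v) f1(t-v).*g_sm(v), 0, t) + f1(t) ) );  % delta picks out f1(t,x)
W1(3) + W2(3)    % W^1 + W^2 approximates Prob(Q(3) <= x)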

7.4 Appendix D: Technical Details for the Inverse Laplace Transform in Chapter 4

From the solutions (4.2.23), we get that for $x < u$,
\[
\hat{W}^1(s,x) = -\frac{\mu\left(\lambda-\mu-\sqrt{T(s)}\right)}{2s\,(\lambda+\mu+s)\sqrt{T(s)}}\; e^{-\omega_1(s)u}\left( e^{\omega_1(s)x} - \frac{\left(\lambda+\mu+2s-\sqrt{T(s)}\right)^2}{4\lambda\mu}\, e^{\omega_0(s)x} \right) = \hat{g}_1(s,x) + \hat{g}_2(s,x), \tag{7.4.1}
\]
where
\[
\hat{g}_1(s,x) = -\frac{\mu\left(\lambda-\mu-\sqrt{T(s)}\right)}{2s\,(\lambda+\mu+s)\sqrt{T(s)}}\; e^{-\omega_1(s)(u-x)} \tag{7.4.2}
\]
and
\[
\hat{g}_2(s,x) = \frac{\mu\left(\lambda-\mu-\sqrt{T(s)}\right)}{2s\,(\lambda+\mu+s)\sqrt{T(s)}}\cdot \frac{\left(\lambda+\mu+2s-\sqrt{T(s)}\right)^2}{4\lambda\mu}\; e^{-\omega_1(s)u+\omega_0(s)x}. \tag{7.4.3}
\]
Notice that
\[
\hat{g}_1(s,x) = -\frac{\mu(\lambda-\mu)}{2s\,(\lambda+\mu+s)}\, e^{-\frac{(\lambda-\mu)(u-x)}{2r}}\cdot \frac{1}{\sqrt{T(s)}}\, e^{-\frac{(u-x)\sqrt{T(s)}}{2r}}
 + \frac{\mu}{2s\,(\lambda+\mu+s)}\, e^{-\frac{(\lambda-\mu)(u-x)}{2r}}\cdot e^{-\frac{(u-x)\sqrt{T(s)}}{2r}}. \tag{7.4.4}
\]
Let
\[
a = \frac{\lambda+\mu}{2}, \qquad \alpha = \sqrt{\lambda\mu}, \tag{7.4.5}
\]
and
\[
v_0 = \frac{u-x}{r}. \tag{7.4.6}
\]
We have
\[
\mathcal{L}^{-1}\left\{ -\frac{1}{2s\,(\lambda+\mu+s)}\, e^{-\frac{(\lambda-\mu)(u-x)}{2r}} \right\} = -\frac{1}{2(\lambda+\mu)}\, e^{-\frac{(\lambda-\mu)(u-x)}{2r}}\left( 1 - e^{-(\lambda+\mu)\tau} \right), \tag{7.4.7}
\]
\[
\mathcal{L}^{-1}\left\{ \frac{1}{\sqrt{T(s)}}\, e^{-\frac{(u-x)\sqrt{T(s)}}{2r}} \right\} = \frac{1}{2}\, e^{-a\tau}\, I_0\!\left[\alpha\,y(\tau)\right]\mathbf{1}_{\{\tau>v_0\}}, \tag{7.4.8}
\]
and
\[
\mathcal{L}^{-1}\left\{ e^{-\frac{(u-x)\sqrt{T(s)}}{2r}} \right\} = e^{-a\tau}\left( \alpha v_0\, y^{-1}(\tau)\, I_1\!\left[\alpha\,y(\tau)\right]\mathbf{1}_{\{\tau>v_0\}} + \delta(\tau-v_0) \right), \tag{7.4.9}
\]
which yields
\[
\begin{aligned}
g_1(\tau,x) = \mathcal{L}^{-1}\left\{ \hat{g}_1(s,x) \right\}
=\;& -\frac{\mu(\lambda-\mu)}{2(\lambda+\mu)}\, e^{-\frac{(\lambda-\mu)(u-x)}{2r}}\,\mathbf{1}_{\{\tau>v_0\}}\int_{v_0}^{\tau} \frac{1}{2}\, e^{-av}\, I_0\!\left[\alpha y(v)\right]\left( 1 - e^{-(\lambda+\mu)(\tau-v)} \right) dv \\
& + \frac{\mu}{2(\lambda+\mu)}\, e^{-\frac{(\lambda-\mu)(u-x)}{2r}}\,\mathbf{1}_{\{\tau>v_0\}}\int_{v_0}^{\tau} e^{-av}\, \alpha v_0\, y^{-1}(v)\, I_1\!\left[\alpha y(v)\right]\left( 1 - e^{-(\lambda+\mu)(\tau-v)} \right) dv \\
& + \frac{\mu}{2(\lambda+\mu)}\, e^{-\frac{(\lambda-\mu)(u-x)}{2r}}\, e^{-a v_0}\left( 1 - e^{-(\lambda+\mu)(\tau-v_0)} \right)\mathbf{1}_{\{\tau>v_0\}},
\end{aligned}
\tag{7.4.10}
\]
where $I_0$ and $I_1$ are the modified Bessel functions of the first kind, $\mathbf{1}_{\{\cdot\}}$ is the indicator function, and
\[
y(\tau) = \left( \tau^2 - v_0^2 \right)^{\frac12}. \tag{7.4.11}
\]

On the other hand, we can write $\hat{g}_2(s,x)$ as
\[
\begin{aligned}
\hat{g}_2(s,x) = e^{-\frac{(u+x)\sqrt{T(s)}}{2r}}\, e^{-\frac{(\lambda-\mu)(u-x)}{2r}}
\times \Bigg[\; & \frac{2}{\sqrt{T(s)}} + \frac{2s}{\lambda\sqrt{T(s)}}
 + \frac{\lambda(\lambda-\mu)}{2(\lambda+\mu)}\cdot\frac{1}{s\sqrt{T(s)}}
 - \frac{\mu^2(\lambda-\mu)}{2\lambda(\lambda+\mu)}\cdot\frac{1}{(s+\lambda+\mu)\sqrt{T(s)}} \\
& - \left( \frac{1}{\lambda} + \frac{\lambda}{2(\lambda+\mu)}\cdot\frac{1}{s} - \frac{\mu^2}{2\lambda(\lambda+\mu)}\cdot\frac{1}{s+\lambda+\mu} \right) \Bigg].
\end{aligned}
\tag{7.4.12}
\]
Let
\[
\tilde{v}_0 = \frac{u+x}{r} \tag{7.4.13}
\]
and
\[
\tilde{y}(\tau) = \left( \tau^2 - \tilde{v}_0^2 \right)^{\frac12}. \tag{7.4.14}
\]

Notice that
\[
\mathcal{L}^{-1}\left\{ \frac{1}{s}\, e^{-\frac{(u-x)\sqrt{T(s)}}{2r}} \right\}
= \int_{v_0}^{\tau} e^{-av}\, \alpha v_0\, y^{-1}(v)\, I_1\!\left[\alpha y(v)\right] dv + e^{-a v_0},
\qquad \text{for } \tau > v_0, \tag{7.4.15}
\]
and
\[
\mathcal{L}^{-1}\left\{ \frac{1}{s\sqrt{T(s)}}\, e^{-\frac{(u-x)\sqrt{T(s)}}{2r}} \right\}
= \int_{v_0}^{\tau} \frac{1}{2}\, e^{-av}\, I_0\!\left[\alpha y(v)\right] dv,
\qquad \text{for } \tau > v_0, \tag{7.4.16}
\]
so the inverse Laplace Transform of (7.4.12) yields
\[
\begin{aligned}
g_2(\tau,x) = \mathcal{L}^{-1}\left\{ \hat{g}_2(s,x) \right\}
= e^{-\frac{(\lambda-\mu)(u-x)}{2r}}\,\mathbf{1}_{\{\tau>\tilde{v}_0\}}\times\Bigg\{\;
& e^{-a\tau}\left( \frac{1}{\lambda}\,\alpha\tau\,\tilde{y}^{-1}(\tau)\, I_1\!\left[\alpha\tilde{y}(\tau)\right]
 + \frac{\lambda-\mu}{2\lambda}\, I_0\!\left[\alpha\tilde{y}(\tau)\right]
 - \frac{\alpha\tilde{v}_0}{\lambda}\,\tilde{y}^{-1}(\tau)\, I_1\!\left[\alpha\tilde{y}(\tau)\right] \right) \\
& + \frac{\lambda(\lambda-\mu)}{2(\lambda+\mu)}\int_{\tilde{v}_0}^{\tau} \frac{1}{2}\, e^{-av}\, I_0\!\left[\alpha\tilde{y}(v)\right] dv
 - \frac{\mu^2(\lambda-\mu)}{2\lambda(\lambda+\mu)}\, e^{-2a\tau}\int_{\tilde{v}_0}^{\tau} \frac{1}{2}\, e^{av}\, I_0\!\left[\alpha\tilde{y}(v)\right] dv \\
& - \frac{\lambda}{2(\lambda+\mu)}\left( \int_{\tilde{v}_0}^{\tau} \alpha\tilde{v}_0\, e^{-av}\,\tilde{y}^{-1}(v)\, I_1\!\left[\alpha\tilde{y}(v)\right] dv + e^{-a\tilde{v}_0} \right) \\
& + \frac{\mu^2}{2\lambda(\lambda+\mu)}\, e^{-2a\tau}\left( \int_{\tilde{v}_0}^{\tau} \alpha\tilde{v}_0\, e^{av}\,\tilde{y}^{-1}(v)\, I_1\!\left[\alpha\tilde{y}(v)\right] dv + e^{a\tilde{v}_0} \right)
\Bigg\}.
\end{aligned}
\tag{7.4.17}
\]
Combining equations (7.4.10) and (7.4.17), we obtain the solution for $W^1(\tau,x)$ shown in (4.2.25), (4.2.27) and (4.2.29).

Similarly, we can write $\hat{W}^2(s,x)$ as
\[
\begin{aligned}
\hat{W}^2(s,x) = e^{-\frac{(\lambda-\mu)(u-x)}{2r}}\left( e^{-\frac{(u-x)\sqrt{T(s)}}{2r}} - e^{-\frac{(u+x)\sqrt{T(s)}}{2r}} \right)
\times \Bigg(\; & \frac{\mu}{2(\lambda+\mu)}\cdot\frac{1}{s+\lambda+\mu} + \frac{\lambda}{2(\lambda+\mu)}\cdot\frac{1}{s} \\
& - \frac{\mu(\lambda-\mu)}{2(\lambda+\mu)}\cdot\frac{1}{(s+\lambda+\mu)\sqrt{T(s)}} - \frac{1}{\sqrt{T(s)}}
 - \frac{\lambda(\lambda-\mu)}{2(\lambda+\mu)}\cdot\frac{1}{s\sqrt{T(s)}} \Bigg),
\end{aligned}
\tag{7.4.18}
\]
the inverse of which results in solutions (4.2.26), (4.2.28) and (4.2.30).

Vita

Yu Shao was born on November 29, 1995 in Anyang, Henan, China. He received his Bachelor's Degree of Science and Engineering in 2017 at the School of Information, Renmin University of China, specializing in Mathematics and Applied Mathematics. He furthered his graduate studies in mathematics and earned a Master of Science in Engineering specializing in Financial Mathematics at

Johns Hopkins University in 2019. He also worked as a Graduate Research Assistant at Johns Hopkins University and has published a research paper at the 2018 IEEE Ubiquitous Computing, Electronics and Mobile Communication

Conference (UEMCON). His research interests include stochastic processes, market prediction and financial mathematics. He plans to pursue his Ph.D. in Statistics at Boston University to expand his knowledge horizon.
