
Submitted by Irene Tubikanec

Submitted at Department of Stochastics

Supervisor: Univ.-Prof.in Dr.in Evelyn Buckwar
December 2016

Stochastic Oscillations
Motivated by the Noisy and Rhythmic Firing Activity of Neurons

Master Thesis to obtain the academic degree of Diplom-Ingenieurin in the Master’s Program Industriemathematik

JOHANNES KEPLER UNIVERSITY LINZ
Altenbergerstraße 69
4040 Linz, Österreich
www.jku.at
DVR 0093696

„Don't worry dreams aren't real. They're just neurons firing randomly in your brain." — Katherine Applegate

Statutory Declaration (Eidesstattliche Erklärung)

I declare under oath that I have written this master's thesis independently and without outside assistance, that I have not used any sources or aids other than those indicated, and that all passages taken verbatim or in substance from other sources have been marked as such. This master's thesis is identical to the electronically submitted text document.

Linz, 1 December 2016
Irene Tubikanec, BSc BSc

Abstract

A human brain contains billions of neurons. These are extremely complex dynamical systems that are affected by intrinsic channel and extrinsic synaptic noise. Oscillatory behaviour is a phenomenon that arises in single neurons as well as in neuronal networks. Based on the noisy and rhythmic firing activity of nerve cells, the aim of this thesis is to provide an overview of the existing theory of stochastic oscillations.

The main question is how oscillations can be defined mathematically in a stochastic setting. This work suggests two very different definitions of a stochastic oscillator, following certain well-known deterministic tools. The first one states that the solution of a specific two-dimensional stochastic differential equation is a stochastic oscillator if it has infinitely many simple zeros almost surely. The second definition of a stochastic oscillator corresponds to the concept of a random periodic solution, which is based on the cocycle property. Two particular stochastic equations are introduced, each satisfying one of these definitions. For this purpose, two detailed proofs of the validity of the first and the second definition, respectively, are presented.

A second topic addressed by this work is the stability of stochastic equations. The system energy carries information on how the trajectory propagates in the phase plane. In addition, Lyapunov exponents describe the asymptotic exponential growth of random dynamical systems.

Finally, this work provides an application of the developed theory to specific standard oscillatory models as well as to the Van der Pol oscillator, on which the FitzHugh-Nagumo neuron model is based. Sample path simulations of the stochastic equations are provided by an implementation of the Euler-Maruyama method in MATLAB. Furthermore, an exact simulation method applied to the stochastic harmonic oscillator equation is introduced.

Zusammenfassung

The human brain comprises billions of neurons. These are highly complex dynamical systems that are subject to intrinsic channel noise and extrinsic synaptic noise. Oscillatory behaviour occurs in single cells as well as in networks. Motivated by the rhythmic and random neuronal impulses, the aim of this work is to give an overview of the current theory of stochastic oscillators.

The main concern is to answer the question of how oscillators can be defined mathematically in a stochastic framework. Following deterministic statements, this work considers two possible definitions. In the first variant, the solution of a two-dimensional stochastic differential equation is defined as an oscillator if it almost surely has infinitely many simple zeros. The second definition corresponds to the concept of a periodic solution and is based on the cocycle property. Two equations are presented, each satisfying one of the two stochastic definitions. The corresponding proofs are presented in detail.

Furthermore, the stability of stochastic systems is addressed. The system energy provides information on the evolution of the trajectory in the phase space. In addition, Lyapunov exponents describe the asymptotic exponential growth of stochastic dynamical systems.

Besides the application of the developed theory to standard models, the Van der Pol equation, which forms the foundation of the well-known FitzHugh-Nagumo neuron model, is considered. Sample paths are simulated using an implementation of the Euler-Maruyama method in MATLAB. In addition, an exact simulation method for the harmonic oscillator is discussed.

Acknowledgments

I would like to express my deep gratitude to my supervisor Prof.in Evelyn Buckwar for the opportunity to write this thesis, for giving me the freedom to decide on topics and contents, for enabling me to participate in specific conferences and for her engagement throughout my whole work.

I would also like to thank my colleague and best friend Bernadett Stadler for constructive discussions and for proofreading.

Special thanks to my life partner Michaela Fehringer for her patience and for her moral support.

Finally, I owe my deepest gratitude to my parents, who supported me during my whole life and, in particular, during my studies. Especially, I would like to acknowledge my mother for respecting all of my decisions and for her long-lasting blind trust.

Contents

1 Introduction
  1.1 Problem Description
  1.2 Research Questions
  1.3 Structure of the Thesis

2 Introduction to Neuroscience
  2.1 Single Neurons
    2.1.1 Components and Functionality
    2.1.2 Noise and Rhythmicity
  2.2 Single Neuron Models
    2.2.1 Hodgkin-Huxley Model
      2.2.1.1 Equivalent Electrical Circuit
      2.2.1.2 Types of Ion Channels
    2.2.2 FitzHugh-Nagumo Model
    2.2.3 Van der Pol Model

3 Deterministic Oscillation Theory
  3.1 Definition of a Deterministic Oscillator 1
  3.2 Definition of a Deterministic Oscillator 2
    3.2.1 Deterministic Flow Property
    3.2.2 Fixed Points and Periodic Solutions
  3.3 System Stability
    3.3.1 Stability of Fixed Points
    3.3.2 Stability of Limit Cycles

4 Stochastic Oscillation Theory
  4.1 Definition of a Stochastic Oscillator 1
    4.1.1 Stochastic Harmonic Oscillator
    4.1.2 Theorem of Girsanov
  4.2 Definition of a Stochastic Oscillator 2
    4.2.1 Stochastic Cocycle Property
    4.2.2 Random Fixed Points and Periodic Solutions
  4.3 System Stability
    4.3.1 System Energy
    4.3.2 Lyapunov Exponents

5 Theory Application on Specific Models
  5.1 Deterministic Applications
    5.1.1 Harmonic Oscillator
    5.1.2 Damped Harmonic Oscillator
    5.1.3 Van der Pol Oscillator
    5.1.4 Nonlinear Oscillator
  5.2 Stochastic Applications
    5.2.1 Stochastic Harmonic Oscillator
    5.2.2 Stochastic Damped Harmonic Oscillator
    5.2.3 Stochastic Van der Pol Oscillator
    5.2.4 Stochastic Nonlinear Oscillator

6 Conclusion
  6.1 Experiences
  6.2 Future Work

Bibliography

Chapter 1

Introduction

A human brain contains billions of extremely complex dynamical systems called neurons. Using mathematical models that capture their main dynamics, one tries to reconstruct the firing of single neurons as well as neuronal interactions. Understanding how the human brain works, and especially how neurons manage their interactive communication, could contribute to the comprehension of specific mechanisms that lead to diseases such as Alzheimer's, Parkinson's or epilepsy.

1.1 Problem Description

Oscillatory activity arises at every level of neuronal systems. Many neurons fire action potentials repetitively. The mechanisms inducing rhythmic firing differ across the levels of neural systems. Since oscillatory behaviour has been observed in diseases as well, understanding those mechanisms and being able to control them is an important medical problem [11]. Mathematics, and especially stochastics, offers central possibilities to contribute to this crucial topic.

Neurons communicate with the use of electrical impulses that are called spikes or action potentials. These propagate along the filamentary extensions of neural cells by passing through ion channels that fluctuate rapidly between the open and closed state. For that reason, neurons are affected by intrinsic channel noise. Since nerve cells are surrounded by large numbers of other neurons, extrinsic synaptic noise arises in these dynamical systems as well. To capture neuronal noise, the application of stochastic differential equations (SDEs), which model time-dependent phenomena that are subject to random effects, is of essential importance. SDEs came into being in the course of the twentieth century and were put on a more solid foundation by Itô [20] and Stratonovich [32] in the middle of that century. Many crucial developments based on their work have been achieved in recent decades. Therefore, this special type of equation is an innovative mathematical tool that still offers many open possibilities for research.
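The Euler-Maruyama scheme referred to later in this thesis (the simulations there use MATLAB) can be sketched in a few lines; the following Python version, with an illustrative mean-reverting drift and constant noise intensity that are not taken from the thesis, shows the basic discretisation dX ≈ drift·Δt + diffusion·ΔW:

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, T, n, rng):
    """Simulate one path of dX = drift(X) dt + diffusion(X) dW on [0, T]."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment ~ N(0, dt)
        x[k + 1] = x[k] + drift(x[k]) * dt + diffusion(x[k]) * dW
    return x

rng = np.random.default_rng(0)
# Illustrative Ornstein-Uhlenbeck-type equation: mean-reverting drift,
# constant noise intensity.
path = euler_maruyama(lambda x: -2.0 * x, lambda x: 0.5,
                      x0=1.0, T=5.0, n=5000, rng=rng)
print(path[-1])  # fluctuates around 0 in the stationary regime
```

The same loop structure carries over to the two-dimensional stochastic oscillator equations studied later, with vector-valued states and increments.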

1.2 Research Questions

Based on the noisy and rhythmic firing activity of neurons, the aim of this thesis is to provide an overview of stochastic oscillation theory. This work mainly addresses the following questions:

1. How can oscillations be defined mathematically in a stochastic setting according to the well-known deterministic tools?

2. Which tools concerning the treatment of system stability do exist?

3. Which standard models can be captured by the developed theoretical framework? Can the provided theory be applied to specific single neuron models?

1.3 Structure of the Thesis

The structure of this thesis is organised as follows:

Chapter 2 provides an introduction to neuroscience and a derivation of the renowned single neuron model of Hodgkin and Huxley, developed in 1952 for the giant axon of the squid. Moreover, it offers a reduction of this four-dimensional model to simpler two-dimensional models that still capture the main dynamics of the neuronal activity.

The deterministic theory offers some well-known tools for treating oscillations. These concepts are stated in Chapter 3. Of course, one cannot adopt the deterministic tools one-to-one in the stochastic case, but one can look for similar ideas and find analogous concepts that help to handle stochastic oscillations. Chapter 4 provides an overview of a variety of such concepts based on stochastic theory.

Finally, the main part of this work closes with Chapter 5. It provides an application of the developed deterministic and stochastic theory to some standard models as well as to the well-known Van der Pol oscillator, on which the famous neuron model of FitzHugh and Nagumo is based. Furthermore, Chapter 5 contains a variety of sample trajectory simulations and provides an exact simulation method applied to the harmonic oscillator equation. Since stochastic oscillation theory is not yet complete, it will be interesting to clarify possibilities for further research and to detect existing limitations.

Chapter 2

Introduction to Neuroscience

This chapter provides a brief introduction to neuroscience. The focus lies on the description of the structure and the firing activity of single nerve cells as well as on the derivation of mathematical models that capture their behaviour.

2.1 Single Neurons

This section gives an explanation of the main components of single neurons and clarifies how nerve cells manage to communicate with each other. Moreover, it emphasizes why working with stochastic differential equations (SDEs) and focusing on oscillatory behaviour is of essential interest.

2.1.1 Components and Functionality

A neuron mainly consists of the cell body, also called the soma, an axon, and the dendrites, as represented in Figure 2.1. The axon and the dendrites form the filamentary extensions of a nerve cell. Different neurons are connected via the synapses and are surrounded by the cell membrane. A more detailed description of the main neuronal components can be found in [5, Chapter 2].

Figure 2.1: Description of a single neuron [22]

Neurons communicate with the use of electrical signals that are called action potentials or spikes. Action potentials have a duration of about two milliseconds and their shape can be assumed as fixed. A neuron receives these spiking impulses via the dendrites and sends them to other neurons over the branches of the axon, as pictured in Figure 2.1. Further information, e.g., on the concrete shape of a spike or concerning the generation and propagation of action potentials, is given in [5, Chapter 4].

2.1.2 Noise and Rhythmicity

Action potentials propagate along the axon by passing through ion channels. Due to the rapid fluctuation of the gated channels between the open and closed state, neurons are affected by intrinsic channel noise. An extrinsic source of noise results from the synaptic activity of numerous surrounding neurons. These effects randomize the firing of neurons. With the application of SDEs one tries to capture these random effects. The reconstruction of the cell's membrane potential, which is a time-dependent phenomenon subject to intrinsic as well as extrinsic noise, is of essential interest [26, p. 94].

Oscillatory activity arises at every level of neural systems. The mechanisms inducing rhythmic firing differ from single neurons to large neuronal networks. Since oscillatory behaviour has also been observed in diseases like Parkinson's or epilepsy, understanding those mechanisms is of great interest [11]. As many single neurons fire action potentials repetitively, for example when injected with a constant current, it is reasonable to regard a neuron as an oscillator, at least over a period of several spikes. Therefore, the goal of this work is to provide a theoretical basis regarding oscillations in the deterministic framework and especially in the stochastic setting [12, p. 172].

2.2 Single Neuron Models

This section offers a derivation of the well-known deterministic neuron model of Hodgkin and Huxley. Moreover, it provides some simplified versions that capture the main dynamics of the system, for example the FitzHugh-Nagumo model and the Van der Pol oscillator.

2.2.1 Hodgkin-Huxley Model

The model of Hodgkin and Huxley was developed in 1952 for the giant axon of the squid. It is one of the most famous models describing how action potentials in single neurons are generated and propagated. Hodgkin and Huxley developed a system of four nonlinear equations, stated in the following model, which is taken from [14, p. 35].

Model 2.1 (Hodgkin and Huxley).

$$\begin{aligned}
C\dot{v} &= -g_{Na}(v)\,(v - V_{Na}) - g_K(v)\,(v - V_K) - g_L\,(v - V_L) + i_{ext}(t) \\
\dot{m} &= \alpha_m(v)\,(1 - m) - \beta_m(v)\,m \\
\dot{n} &= \alpha_n(v)\,(1 - n) - \beta_n(v)\,n \\
\dot{h} &= \alpha_h(v)\,(1 - h) - \beta_h(v)\,h
\end{aligned}$$

with $g_K(v) = \bar{g}_K\,n^4$ and $g_{Na}(v) = \bar{g}_{Na}\,m^3 h$

The derivation of this model is split into two parts. The first part deals with the equivalent electrical circuit and is mainly based on Section 1.4 of [12], whereas the second part, which describes the different types of ion channels, is primarily taken from Section 5.5 of [9].

2.2.1.1 Equivalent Electrical Circuit

The activity of the membrane potential v can be explained in terms of a so-called „equivalent electrical circuit" that captures three important elements, namely the resistors, the batteries and a capacitor. The different ion channels are described by the resistors. The ion gradients, describing the differences in the ionic concentrations, are represented by the batteries. The capacitor corresponds to the cell membrane. The difference between the ionic concentration inside a neuron and that outside the cell produces an electrical voltage. Since the membrane separates the interior of the cell from the extracellular liquid, the resulting voltage is called the membrane potential.

Figure 2.2: Equivalent electrical circuit [6]

Figure 2.2 provides a picture of the electrical circuit used for the description of the single neuron model established by Hodgkin and Huxley. Kirchhoff's law states that the sum of the capacitive current and the ionic currents must be equal to the externally applied current. This results in the following equation:

ic + iNa + iK + iL = iext (2.1)

Furthermore, the charge q and the membrane potential v are proportional by a constant C, which denotes the membrane capacitance. This implies the subsequent result, based on the fact that current is the time derivative of charge:

$$q = Cv \;\Rightarrow\; i_c = \frac{dq}{dt} = C\frac{dv}{dt} \qquad (2.2)$$

In addition, Ohm’s law implies that the ionic currents ij, passing through the ion channel j, are given by

ij = gj(v − Vj), (2.3)

where $g_j$ denotes the conductance of ion channel $j$; it is described below in more detail. The terms $v - V_j$ are called driving forces, and $V_j$ indicates the constant equilibrium potential for the corresponding ion, also known as reversal potential or Nernst potential. Since $V_L$ corresponds to a leakage term, it equals the resting potential $v_{rest} \approx -65\,\mathrm{mV}$ of the membrane when the cell is at rest.

The obtained result (2.1), combined with Equations (2.2) and (2.3), implies the first equation of the model under consideration:

$$C\frac{dv}{dt} = -g_{Na}(v)\,(v - V_{Na}) - g_K(v)\,(v - V_K) - g_L\,(v - V_L) + i_{ext}(t)$$

2.2.1.2 Types of Ion Channels

The Hodgkin-Huxley model considers the following three types of channels that are characterised by their conductances as well as by their gating activity:

1. Gated potassium K$^+$ channel: voltage-dependent conductance $g_K = g_K(v) = \bar{g}_K\,n^4$
2. Gated sodium Na$^+$ channel: voltage-dependent conductance $g_{Na} = g_{Na}(v) = \bar{g}_{Na}\,m^3 h$
3. Nongated leakage channel: voltage-independent conductance $g_L$

The potassium and the sodium channel are both described by a voltage-dependent conductance that may change over time, whereas the leakage channel refers to a constant, voltage-independent as well as time-independent conductance $g_L$. The leakage term

iL = gL(v − VL) corresponds to the passive flow of ions through nongated channels which are always open. Gated channels can switch from the open state to the closed state and vice versa. The opening and closing of the potassium and sodium channels can be modelled by the three voltage-dependent gating variables n, m and h.

A potassium channel consists of four identical subunits. The probability that all four units are open at the same time is described by $n^4$. Only in this case does the channel transmit currents with the maximum conductance $\bar{g}_K$.

A sodium channel likewise consists of four subunits, but in this case they are of different types. Here $m$ and $h$ govern the fast opening and slow closing of the channel. The probability that the whole channel is open at a specific point in time is given by $m^3 h$. Analogously to the potassium channel, the sodium channel transmits currents with the maximum conductance $\bar{g}_{Na}$ when it is in the open state.

The last three equations of the model correspond to the gating variables of the sodium and the potassium channel. Each of the gating variables satisfies an ordinary differential equation of first order. The voltage-dependent rates of transition between the open and closed state of each gating subunit are denoted by α and β, respectively.
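Model 2.1 can be integrated numerically with a simple explicit Euler scheme. The following Python sketch uses the classic squid-axon constants and rate functions (illustrative values that are not stated in this chapter; the thesis takes the model from its reference [14], whose constants may differ slightly) and reproduces the repetitive firing under a constant injected current:

```python
import numpy as np

# Classic squid-axon constants (illustrative, not taken from this chapter).
C = 1.0                               # membrane capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3     # maximum conductances (mS/cm^2)
V_Na, V_K, V_L = 50.0, -77.0, -54.4   # reversal potentials (mV)

# Voltage-dependent transition rates of the gating subunits.
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * np.exp(-(v + 65.0) / 80.0)
def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * np.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * np.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))

def hh_step(v, m, n, h, i_ext, dt):
    """One explicit Euler step of Model 2.1."""
    i_Na = g_Na * m**3 * h * (v - V_Na)
    i_K = g_K * n**4 * (v - V_K)
    i_L = g_L * (v - V_L)
    v_new = v + dt * (-i_Na - i_K - i_L + i_ext) / C
    m_new = m + dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
    n_new = n + dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
    h_new = h + dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
    return v_new, m_new, n_new, h_new

# Start near rest and inject a constant suprathreshold current.
v, m, n, h = -65.0, 0.05, 0.32, 0.6
dt, steps = 0.01, 50000               # 500 ms of simulated time
vs = np.empty(steps)
for k in range(steps):
    v, m, n, h = hh_step(v, m, n, h, i_ext=15.0, dt=dt)
    vs[k] = v

# Count upward crossings of 0 mV: each one is an action potential.
spikes = int(np.sum((vs[1:] >= 0.0) & (vs[:-1] < 0.0)))
print(spikes)
```

With these constants the model fires a regular spike train; reducing the injected current below the onset of repetitive firing leaves only a transient response.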

2.2.2 FitzHugh-Nagumo Model

The model of Hodgkin and Huxley is an important reference model for the derivation of simpler neuron models. Often this model is reduced to one with only two dimensions instead of four. In the two-dimensional case the system can be analysed as a curve in the phase plane.

The principle behind the reduction of the four-dimensional model of Hodgkin and Huxley to a simpler model with only two dimensions is to merge those ion currents that evolve on similar time scales. As a result, they are described by only one variable that captures the main dynamics. When studying action potentials, one can observe that the membrane potential v and the sodium activation variable m evolve on related time scales. They capture rapid changes and excitability. Furthermore, the potassium activation n and the sodium inactivation h evolve on similar, but much slower, time scales and describe refractoriness [24, p. 173].

These observations result in the following two-dimensional system of equations that is known as the FitzHugh-Nagumo model or Bonhoeffer-Van der Pol oscillator [13, p. 447].

Model 2.2 (FitzHugh and Nagumo).

$$\begin{aligned}
\dot{v} &= c\left(v - \tfrac{1}{3}v^3 + r + i_{ext}\right) \\
\dot{r} &= -\tfrac{1}{c}\,(v - a + br)
\end{aligned}$$

with constants $a$, $b$ and $c$

In this model the variable v denotes the membrane potential and captures the fast spiking activities. The variable r is a recovery variable that measures the state of excitability of the cell and governs the slow dynamics. As in the previous model, $i_{ext}$ describes an externally applied stimulus intensity that may depend on time, and $a$, $b$ and $c$ are constants that satisfy certain conditions.
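Model 2.2 can be simulated with a simple Euler scheme. The following sketch uses FitzHugh's classic constants $a = 0.7$, $b = 0.8$, $c = 3$ and a constant stimulus $i_{ext} = -0.4$ (illustrative values not taken from this chapter) for which the model exhibits sustained spiking:

```python
import numpy as np

# FitzHugh's classic parameter choice (illustrative, not from this chapter).
a, b, c, i_ext = 0.7, 0.8, 3.0, -0.4

def fhn_step(v, r, dt):
    """One explicit Euler step of Model 2.2."""
    dv = c * (v - v**3 / 3.0 + r + i_ext)
    dr = -(v - a + b * r) / c
    return v + dt * dv, r + dt * dr

v, r = -1.0, 1.0
dt, steps = 0.01, 20000          # 200 time units
vs = np.empty(steps)
for k in range(steps):
    v, r = fhn_step(v, r, dt)
    vs[k] = v

# Repetitive (tonic) spiking: the membrane variable keeps crossing zero.
crossings = int(np.sum((vs[1:] >= 0.0) & (vs[:-1] < 0.0)))
print(crossings)
```

For these constants the unique fixed point is an unstable spiral and the trajectory settles onto a limit cycle, the deterministic picture behind the oscillator definitions of Chapter 3.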

2.2.3 Van der Pol Model

The FitzHugh-Nagumo model is based on a modification of the renowned Van der Pol oscillator. This equation originates from an electrical circuit model characterised by a linear capacitor, a linear inductor and a nonlinear resistor in parallel [12, p. 69]. The Van der Pol equation is given in the following model, taken from [27, p. 272].

Model 2.3 (Van der Pol). The Van der Pol equation
$$\ddot{x} + \mu(x^2 - 1)\,\dot{x} + x = 0, \qquad \mu > 0,$$
is equivalent to the system:
$$\dot{x} = y, \qquad \dot{y} = -\mu(x^2 - 1)\,y - x$$

By using the Liénard transformation $y = -x + \tfrac{1}{3}x^3 + \tfrac{1}{\mu}\dot{x}$, System 2.3 can be rewritten as

$$\dot{x} = \mu\left(x - \tfrac{1}{3}x^3 + y\right), \qquad \dot{y} = -\tfrac{1}{\mu}\,x$$

and, therefore, the FitzHugh-Nagumo model 2.2 contains the Van der Pol oscillator as a special case for $a = b = i_{ext} = 0$ [13, p. 447].
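The equivalence of the two coordinate systems can be checked numerically: integrating the original System 2.3 and the Liénard-transformed system from matching initial conditions should produce the same $x$-component. A sketch (the integrator and step size are arbitrary choices, not taken from the thesis):

```python
import numpy as np

mu = 1.0

def vdp(state):
    """Original Van der Pol system (Model 2.3)."""
    x, y = state
    return np.array([y, -mu * (x**2 - 1.0) * y - x])

def lienard(state):
    """Lienard-transformed system."""
    x, y = state
    return np.array([mu * (x - x**3 / 3.0 + y), -x / mu])

def rk4(f, state, dt, steps):
    """Classical fourth-order Runge-Kutta integration."""
    out = np.empty((steps + 1, 2))
    out[0] = state
    for k in range(steps):
        s = out[k]
        k1 = f(s); k2 = f(s + 0.5*dt*k1); k3 = f(s + 0.5*dt*k2); k4 = f(s + dt*k3)
        out[k + 1] = s + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)
    return out

x0, xdot0 = 2.0, 0.0
# Matching initial conditions: in the original system y = xdot, while in
# Lienard coordinates y = -x + x^3/3 + xdot/mu.
orig = rk4(vdp, np.array([x0, xdot0]), 1e-3, 10000)
lien = rk4(lienard, np.array([x0, -x0 + x0**3 / 3.0 + xdot0 / mu]), 1e-3, 10000)

print(np.max(np.abs(orig[:, 0] - lien[:, 0])))  # x-components agree
```

The maximal discrepancy is at the level of the integrator's discretisation error, confirming that the transformation merely changes coordinates.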

Another well-known and intensively studied model is the Duffing Van der Pol oscillator. The difference compared to the original Van der Pol model lies in an additional nonlinear term proportional to $x^3$. The new model reads as follows [21, p. 5].

Model 2.4 (Duffing Van der Pol). The Duffing Van der Pol equation
$$\ddot{x} + \mu(x^2 - 1)\,\dot{x} + x + \beta x^3 = 0, \qquad \mu, \beta > 0,$$
is equivalent to the system:
$$\dot{x} = y, \qquad \dot{y} = -\mu(x^2 - 1)\,y - x - \beta x^3$$

Chapter 3

Deterministic Oscillation Theory

This chapter gives an overview of some basic deterministic tools that capture oscillatory activity arising in autonomous two-dimensional systems. The first two sections provide mathematical definitions of a deterministic oscillator, whereas the last section deals with the question of system stability. The whole chapter is organised in such a way that the next chapter on stochastic oscillations can be arranged analogously.

3.1 Definition of a Deterministic Oscillator 1

What could be the meaning of „oscillatory behaviour"? Intuitively, oscillations are associated, for example, with repetitions or vibrations appearing over time. Therefore, a very obvious mathematical approach is simply to consider a real-valued function $x(t)$ that takes a specific value at infinitely many points in time. The function $x(t)$ can be considered as the solution of the following ordinary differential equation (ODE) of second order with initial value $x(0) = x_0 \in \mathbb{R}$:

$$\ddot{x} = f_2(x, \dot{x}) \qquad (3.1)$$

In (3.1) the function $f_2 : \mathbb{R}^2 \to \mathbb{R}$ is such that the existence of a unique solution $x : I = [0, T] \subset \mathbb{R} \to \mathbb{R}$ can be guaranteed by the Theorem of Picard-Lindelöf. Equation (3.1) is equivalent to the following two-dimensional system of first-order ODEs:

$$\dot{x} = y, \qquad \dot{y} = f_2(x, y) \qquad (3.2)$$

This results in the following definition of an oscillator.

Definition 3.1 (Deterministic oscillator 1). Let x : I → R be the unique solution of Equation (3.1) with initial value x(0) = x0 ∈ R. The real-valued function x(t) is called a deterministic oscillator if it has infinitely many simple zeros.

For example, a linear combination of the sine and cosine functions, which arises as the solution of the harmonic oscillator introduced in Example 5.1, fulfils this definition. Moreover, the damped oscillator given in Example 5.2 fits into this framework as well.
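Definition 3.1 can be illustrated numerically by counting sign changes of a finely sampled solution. Here the closed-form solutions $\cos t$ and $e^{-0.1t}\cos t$ stand in for the harmonic and the weakly damped oscillator (the concrete Examples 5.1 and 5.2 are not restated in this chapter, so these are illustrative choices):

```python
import numpy as np

def count_simple_zeros(x, t):
    """Count sign changes of a sampled trajectory; with a sufficiently fine
    grid each sign change corresponds to one simple zero."""
    return int(np.sum(x[:-1] * x[1:] < 0.0))

t = np.linspace(0.0, 31.0, 100001)
# Both trajectories have a zero at every odd multiple of pi/2 in [0, 31].
print(count_simple_zeros(np.cos(t), t))                     # → 10
print(count_simple_zeros(np.exp(-0.1 * t) * np.cos(t), t))  # → 10
```

Extending the time window keeps producing new zeros for both functions, which is exactly the "infinitely many simple zeros" requirement of Definition 3.1; note that damping changes the amplitude but not the zero count.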

3.2 Definition of a Deterministic Oscillator 2

The goal of this section is to define an oscillator as a periodic solution of an autonomous, two-dimensional system of equations with initial value $X(0) = X_0 \in \mathbb{R}^2$. Therefore, the following problem, which is a generalisation of System (3.2), is introduced:

$$\dot{x} = f_1(x, y), \qquad \dot{y} = f_2(x, y) \qquad (3.3)$$

The notation $\dot{X} = f(x, y)$ with a function $f : \mathbb{R}^2 \to \mathbb{R}^2$ is an abbreviated form of the equations introduced above. The functions $f_1 : \mathbb{R}^2 \to \mathbb{R}$ and $f_2 : \mathbb{R}^2 \to \mathbb{R}$ are such that System (3.3) has a unique solution $X : I = [0, T] \subset \mathbb{R} \to \mathbb{R}^2$ [17, p. 534].

3.2.1 Deterministic Flow Property

For the study of two-dimensional systems one can interpret solutions geometrically as trajectories in the phase plane, which is a useful and illustrative way of analysing especially nonlinear problems. For System (3.3), the phase plane is the $(x, y)$-plane [17, p. 534]. Each solution $X(t) = (x(t), y(t))$ of System (3.3) traces out a curve

$$\{X(t),\ t \in I \subset \mathbb{R}\}$$

in the phase plane. This curve is called a trajectory or orbit.

Each uniquely existing solution $X(t)$ of the corresponding initial value problem (3.3) can be interpreted as the parametric representation of a curve passing through the initial point $X(0) = X_0 \in \mathbb{R}^2$. Another way of understanding a trajectory is by introducing the flow [10, p. 135].

Definition 3.2 (Flow). A deterministic flow is a mapping $\varphi : \mathbb{R}^2 \times \mathbb{R} \to \mathbb{R}^2$ that satisfies the two conditions:

1. $\varphi(X, 0) = X \quad \forall\, X \in \mathbb{R}^2$
2. $\varphi(X, t + s) = \varphi(\varphi(X, s), t) \quad \forall\, X \in \mathbb{R}^2$ and $\forall\, s, t \in \mathbb{R}$
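For the harmonic-oscillator system $\dot{x} = y$, $\dot{y} = -x$ the flow is known in closed form, $\varphi(X, t) = e^{At}X$ with $A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$, so both conditions of Definition 3.2 can be checked directly (a small sketch; the example system is an illustrative choice):

```python
import numpy as np

def flow(X, t):
    """Exact flow of x' = y, y' = -x: phi(X, t) = exp(At) X, where
    exp(At) is the rotation matrix [[cos t, sin t], [-sin t, cos t]]."""
    c, s = np.cos(t), np.sin(t)
    return np.array([c * X[0] + s * X[1], -s * X[0] + c * X[1]])

X = np.array([1.0, 0.5])
s_, t_ = 0.7, 1.9
# Condition 1: phi(X, 0) = X.
assert np.allclose(flow(X, 0.0), X)
# Condition 2 (group property): phi(X, t+s) = phi(phi(X, s), t).
assert np.allclose(flow(X, t_ + s_), flow(flow(X, s_), t_))
print("flow property verified")
```

For a general nonlinear system the flow is not available in closed form, but the same two identities hold for the solution map of any well-posed autonomous ODE.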

The conditions in Definition 3.2 are called the flow property. The trajectory that passes through the initial point $X_0 \in \mathbb{R}^2$ is characterised by the set

$$\{\varphi(X_0, t),\ t \in \mathbb{R}\}$$

with $X(t) = \varphi(X_0, t)$. The flow $\varphi(X_0, t)$ gives the location of the solution trajectory in the phase plane at time $t \in \mathbb{R}$ if the system starts at the initial point $X_0 \in \mathbb{R}^2$. The notation $X(t)$ of the solution only carries information on the time dependence of the trajectory, whereas the flow $\varphi(X_0, t)$ of Definition 3.2 highlights the solution's dependence on the corresponding initial value and allows time to tend towards infinity.

Since the velocity vector at each point is given by the right-hand side of System (3.3), the vector $(f_1, f_2)$ always points in the direction in which the solution flows. This can be thought of as a vector field in the phase plane [15, p. 91]. As the related initial value problem exhibits a unique solution, the trajectory must be unique. This in turn implies that trajectories cannot cross and that the deterministic flow has a well-defined orientation [17, pp. 534-535].

3.2.2 Fixed Points and Periodic Solutions

There are two important types of trajectories, namely fixed points [17, p. 535] and periodic solutions [7, p. 416]. With the use of the deterministic flow introduced in Definition 3.2 they can be defined as follows.

Definition 3.3 (Fixed Point). A point $\bar{X} \in \mathbb{R}^2$ is denoted as a fixed point of System (3.3) if $\varphi(\bar{X}, t) = \bar{X}$ $\forall\, t \in \mathbb{R}$.

If the system starts at a fixed point, then the trajectory will remain on it for all points in time, i.e., the orbit shrinks down to this single point. Therefore, a fixed point corresponds to a constant solution of System (3.3) with initial value $\bar{X} \in \mathbb{R}^2$.

Definition 3.4 (Periodic Solution). A function $\psi : \mathbb{R} \to \mathbb{R}^2$ with period $T > 0$, i.e., $\psi(t + T) = \psi(t)$ $\forall\, t \in \mathbb{R}$, is denoted as a periodic solution of System (3.3) if $\varphi(\psi(t_0), t) = \psi(t + t_0)$ $\forall\, t, t_0 \in \mathbb{R}$.

If the system starts at a point ψ(t0) = X0 on the periodic curve, then the trajectory will remain on this curve forever. For this reason, periodic solutions correspond to closed orbits in the phase plane.

A limit cycle appears if the system does not start on the periodic curve, but once the trajectory enters a point of the periodic curve it stays on it and moves along this limit cycle for all points in time. In this case $t_0$ can be seen as the instant of time at which the trajectory first enters the periodic curve $\psi$. Limit cycles are periodic solutions that are isolated, i.e., neighbouring trajectories cannot be closed. They either spiral towards the limit cycle or diverge away from it. For this reason, limit cycles can only appear in nonlinear systems. Linear systems can have periodic solutions, as for example the harmonic oscillator, but due to linearity none of the trajectories are isolated: if $X(t)$ is a solution, then $cX(t)$ is also a solution for all constants $c \in \mathbb{R}$ [31, pp. 15666-15667]. For an exemplary limit cycle see Figure 5.2.
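The isolation of a limit cycle can be observed numerically: Van der Pol trajectories (Model 2.3, here with the illustrative choice µ = 1) started inside and outside the cycle both settle onto the same closed orbit, whose amplitude is close to 2. A sketch using a standard fourth-order Runge-Kutta integrator:

```python
import numpy as np

mu = 1.0

def vdp_rhs(s):
    """Van der Pol system of Model 2.3."""
    x, y = s
    return np.array([y, -mu * (x**2 - 1.0) * y - x])

def integrate_rk4(f, s, dt, steps):
    out = np.empty((steps + 1, 2))
    out[0] = s
    for k in range(steps):
        a = out[k]
        k1 = f(a); k2 = f(a + 0.5*dt*k1); k3 = f(a + 0.5*dt*k2); k4 = f(a + dt*k3)
        out[k + 1] = a + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)
    return out

inner = integrate_rk4(vdp_rhs, np.array([0.1, 0.0]), 0.01, 5000)  # inside
outer = integrate_rk4(vdp_rhs, np.array([4.0, 0.0]), 0.01, 5000)  # outside

# After the transients, both trajectories trace the same isolated orbit.
print(np.max(np.abs(inner[-1000:, 0])), np.max(np.abs(outer[-1000:, 0])))
```

Repeating the experiment for the harmonic oscillator instead would show the opposite picture: every initial radius yields its own closed orbit, so none of them is isolated.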

The deterministic flow property, the fixed points and the periodic solutions are introduced in this specific way because the stochastic framework will be established analogously in Chapter 4. The tools introduced so far make it possible to state a second definition of a deterministic oscillator.

Definition 3.5 (Deterministic oscillator 2). The autonomous, two-dimensional ODE (3.3) with initial value $X(0) = X_0 \in \mathbb{R}^2$ is called a deterministic oscillator if there exists a corresponding unique periodic solution as introduced in Definition 3.4.

The harmonic oscillator given in Example 5.1 satisfies Definition 3.5, in contrast to the weakly damped harmonic oscillator established in Example 5.2: its trajectories approach the origin as $t$ tends to infinity, so no periodic solution can arise in the phase plane.

3.3 System Stability

This section deals with the stability of fixed points and periodic solutions as introduced in the previous section. The question of system stability is linked with the asymptotic behaviour of the solution trajectories.

3.3.1 Stability of Fixed Points

There are two different possibilities to characterise the stability of fixed points, depending on the limiting behaviour of the trajectory. If the trajectory neither approaches the fixed point nor diverges away from it, the problem is stable [7, p. 378]. The system is asymptotically stable if it approaches the fixed point as $t$ tends to infinity [7, p. 381].

Definition 3.6 (Stability of a fixed point).

1. A fixed point $\bar{X} \in \mathbb{R}^2$ of System (3.3) is stable if any other solution trajectory $\varphi(X, t)$ with $X \neq \bar{X} \in \mathbb{R}^2$ stays within a fixed distance of this point.
2. A fixed point $\bar{X} \in \mathbb{R}^2$ of System (3.3) is asymptotically stable if for any other solution trajectory $\varphi(X, t)$ with $X \neq \bar{X} \in \mathbb{R}^2$ it holds that $\lim_{t \to \infty} \varphi(X, t) = \bar{X}$.

Eigenvalue Characterisation

The goal is to check the stability of a fixed point that arises in the general nonlinear system (3.3). In order to do that, one first has to deal with stability questions for linear systems. The information on the eigenvalue characterisation of linear systems is taken from [17, pp. 541-543] and [7, pp. 418-425]. Let the following linear system of equations be given:

$$\underbrace{\begin{pmatrix} \dot{x} \\ \dot{y} \end{pmatrix}}_{\dot{X}} = \underbrace{\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}}_{A} \underbrace{\begin{pmatrix} x \\ y \end{pmatrix}}_{X} \qquad (3.4)$$

Furthermore, let $\lambda_1$ and $\lambda_2$ be the eigenvalues of the matrix $A$ with corresponding eigenvectors $\vec{u}_1$ and $\vec{u}_2$. The assumption $\det(A) \neq 0$ guarantees that both $\lambda_1$ and $\lambda_2$ are different from $0$. In fact, the unique fixed point of (3.4) is $\bar{X} = (\bar{x}, \bar{y}) = (0, 0) \in \mathbb{R}^2$.

If both eigenvalues of the matrix $A$ are real, the solution of System (3.4) can be written as:

$$X(t) = C_1 e^{\lambda_1 t}\,\vec{u}_1 + C_2 e^{\lambda_2 t}\,\vec{u}_2 \qquad (3.5)$$

The representation (3.5) implies that if at least one eigenvalue is positive, $X(t)$ diverges to infinity as $t$ tends to infinity. This in turn corresponds to an unstable fixed point $(\bar{x}, \bar{y})$.

Otherwise, if both $\lambda_1$ and $\lambda_2$ are negative, then $X(t)$ approaches $(0, 0)$ as $t$ increases and the fixed point $(\bar{x}, \bar{y})$ turns out to be asymptotically stable.

In the complex setting, let λ1 = α + iβ and λ2 = α − iβ denote the eigenvalues of A from (3.4), with corresponding eigenvector u⃗1 = v⃗ + iw⃗ of λ1. In this situation, it can be shown that the solution X(t) can be written in the form:

X(t) = e^(αt) ( C1 cos(βt − ψ1) )        (3.6)
              ( C2 cos(βt − ψ2) )

Therefore, if α = 0, the representation (3.6) consists of two periodic functions with period 2π/β and amplitudes C1 and C2, respectively. This implies that the fixed point (x̄, ȳ) is stable. Since X(2π/β) = e^(2πα/β) X(0), the fixed point (x̄, ȳ) is asymptotically stable if α < 0, whereas it is unstable for α > 0.

From now on, general nonlinear systems of the form (3.3) will be considered. The subsequent facts are mainly based on [10, pp. 135-137]. One possibility for checking the stability of a fixed point arising in a nonlinear system is to consider a linearisation of the problem around this equilibrium point. For that purpose, the next definition is introduced.

Definition 3.7 (Linearisation matrix). The linearisation of Problem (3.3) at the point (x̄, ȳ) is defined as the Jacobian matrix:

A := ( ∂F/∂x (x̄, ȳ)   ∂F/∂y (x̄, ȳ) )
     ( ∂G/∂x (x̄, ȳ)   ∂G/∂y (x̄, ȳ) )

The approach of linearising the problem is based on the Theorem of Hartman-Grobman. This theorem states that the nonlinear system (3.3) and the linearised system Ẋ = AX, with A defined as in Definition 3.7, are topologically equivalent near a hyperbolic fixed point, i.e., a fixed point at which all eigenvalues of A have nonzero real part. This means that their phase portraits show essentially the same dynamics.

The stability of the nonlinear autonomous system (3.3) can thus be analysed in terms of the linearised system by considering the eigenvalues of the matrix A from Definition 3.7. The above results, combined with the Theorem of Hartman-Grobman, imply that if both eigenvalues have a negative real part, the fixed point (x̄, ȳ) of System (3.3) is asymptotically stable. If at least one eigenvalue has a positive real part, (x̄, ȳ) is unstable. For eigenvalues with zero real part no conclusion can be made.
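This criterion can be illustrated numerically. The following Python sketch (numpy is assumed; the helper names jacobian and classify as well as the damped-pendulum example are chosen here for illustration and do not appear in the cited references) approximates the linearisation matrix by central differences and classifies the fixed point via the eigenvalues:

```python
import numpy as np

def jacobian(F, G, x, y, h=1e-6):
    """Central-difference approximation of the linearisation matrix A at (x, y)."""
    return np.array([
        [(F(x + h, y) - F(x - h, y)) / (2 * h), (F(x, y + h) - F(x, y - h)) / (2 * h)],
        [(G(x + h, y) - G(x - h, y)) / (2 * h), (G(x, y + h) - G(x, y - h)) / (2 * h)],
    ])

def classify(A):
    """Classify a hyperbolic fixed point via the real parts of the eigenvalues of A."""
    re = np.linalg.eigvals(A).real
    if np.all(re < 0):
        return "asymptotically stable"
    if np.any(re > 0):
        return "unstable"
    return "no conclusion (non-hyperbolic)"

# Damped pendulum x' = y, y' = -sin(x) - 0.5*y with fixed point (0, 0):
F = lambda x, y: y
G = lambda x, y: -np.sin(x) - 0.5 * y
A = jacobian(F, G, 0.0, 0.0)
print(classify(A))  # eigenvalues -0.25 ± 0.97i, hence "asymptotically stable"
```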

The Method of Lyapunov

An alternative way of characterising the stability of a fixed point, without first linearising the system, is the method of Lyapunov. Before a stability theorem can be formulated, the definition of a „Lyapunov function“ is given [17, p. 547].

Definition 3.8 (Lyapunov function). Let (0, 0) be an isolated fixed point of (3.3). The function E : R2 → R is called Lyapunov function if there exists a neighbourhood U of (0, 0) such that the following conditions hold:

1. E is continuously differentiable, i.e., E has continuous partial derivatives

2. E(0, 0) = 0 and E(x, y) > 0 ∀ (x, y) ≠ (0, 0) ∈ U

3. For dE/dt = ∂E/∂x ẋ + ∂E/∂y ẏ = ∂E/∂x f1 + ∂E/∂y f2 it holds that dE/dt (0, 0) = 0 and dE/dt (x, y) ≤ 0 ∀ (x, y) ≠ (0, 0) ∈ U

If for the third condition it holds that dE/dt (x, y) < 0 ∀ (x, y) ≠ (0, 0) ∈ U, then E is called a strict Lyapunov function.

The next theorem provides a statement on the stability of a fixed point by applying the above defined Lyapunov function. The statement of the theorem as well as a proof can be found in [17, pp. 547-548].

Theorem 3.9 (System stability). Suppose that System (3.3) has an isolated fixed point (0, 0) and that there exists a corresponding Lyapunov function E. Then (0, 0) is stable. If E is a strict Lyapunov function, then (0, 0) is asymptotically stable.

Especially when considering oscillatory systems one can choose

E(x, y) = (1/2)(x² + y²)

as a Lyapunov function; it corresponds to the energy of the system. E is equal to half of the squared distance from the origin in the phase plane, which explains the idea of the theorem. For example, if dE/dt = 0, it follows that the trajectory stays at a fixed distance from the fixed point (0, 0), as is the case for the harmonic oscillator given in Example 5.1. Whereas, if E decreases over time, the trajectory must approach the origin as t tends to infinity, as is the case for the weakly damped harmonic oscillator described in Example 5.2 [17, p. 546].
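For the weakly damped oscillator ẋ = y, ẏ = −x − cy with c > 0, this energy function gives dE/dt = xy + y(−x − cy) = −cy² ≤ 0, so E is indeed a Lyapunov function. A short numerical sketch of this computation (numpy is assumed; the parameter c and all names are illustrative choices):

```python
import numpy as np

# Weakly damped harmonic oscillator x' = y, y' = -x - c*y with damping c > 0
c = 0.1
f1 = lambda x, y: y
f2 = lambda x, y: -x - c * y

# dE/dt along trajectories, with E(x, y) = (x^2 + y^2)/2 and gradient (x, y):
dE_dt = lambda x, y: x * f1(x, y) + y * f2(x, y)  # simplifies to -c*y**2

rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 2))
vals = np.array([dE_dt(x, y) for x, y in pts])
print(bool(np.all(vals <= 0)))  # True: the energy never increases
```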

3.3.2 Stability of Limit Cycles

Many nonlinear systems are unstable, i.e., they have an unstable fixed point, but approach a stable limit cycle [7, p. 381].

Definition 3.10 (Stability of a limit cycle). A limit cycle is stable if all other trajectories spiral towards it as t tends to infinity.

The following famous Theorem of Poincaré-Bendixson gives the existence of a stable limit cycle [17, p. 557]. A proof can be found in [23, Chapter IV].

Theorem 3.11 (Poincaré-Bendixson). Let B be a compact region in the phase plane that does not contain a fixed point of System (3.3). Let {ϕ(X0, t), t ∈ R} be a solution trajectory of the system with {ϕ(X0, t), t ≥ tB} ⊂ B for a tB ∈ R and initial point X0 ∈ R². Then either X0 = ϕ(X0, 0) lies on a periodic curve ψ or ϕ(X0, t) must spiral towards a limit cycle ψ ⊂ B as t tends to infinity. In both cases the function ψ corresponds to a periodic solution according to Definition 3.4.

The difficulty in the application of Theorem 3.11 is to construct a trapping region B that does not allow trajectories to escape. If all trajectories are „pushed away“ from the fixed point, but do not cross the boundary surface, they must finally settle on a limit cycle. Proving this property is a very tough task. Therefore, a second theorem for checking the existence of a stable limit cycle will be introduced, namely the Theorem of Liénard [17, pp. 558-559]. A proof is given in [27, Chapter XI]. This theorem is easier to apply than the Theorem of Poincaré-Bendixson and is applicable for specific systems of the form (3.2).

In order to formulate the statement of existence, the equation ẍ + g1(x)ẋ + g2(x) = 0 has to be introduced. This is an equation of the form (3.1) and is equivalent to the system of type (3.2) with f2(x, y) = −g1(x)y − g2(x). This results in the following two-dimensional ODE:

ẋ = y
ẏ = −g1(x)y − g2(x)        (3.7)

System (3.7) is called Liénard system.

Theorem 3.12 (Liénard). Let the functions g1 and g2 of System (3.7) fulfil the fol- lowing conditions:

1. g1 and g2 are continuously differentiable

2. g1 is even and g2 is odd

3. g1(0) < 0 and g2(x) > 0 ∀ x > 0

4. For G(x) := ∫_0^x g1(s) ds, x ∈ R, there exists an a > 0 such that G(x) < 0 for 0 < x < a, G(a) = 0, G(x) > 0 for x > a, and G is monotonically increasing on [a, ∞)

5. lim_{x→∞} G(x) = ∞

Then there exists exactly one stable limit cycle for System (3.7). It runs around the origin and every other trajectory spirals towards it as t tends to infinity.
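The Van der Pol oscillator ẍ − μ(1 − x²)ẋ + x = 0 satisfies the Liénard conditions with g1(x) = μ(x² − 1) and g2(x) = x, so Theorem 3.12 predicts a unique stable limit cycle. As a purely illustrative numerical sketch (numpy is assumed; the step size, horizon and the function name vdp_amplitude are choices made here, not taken from the cited sources), one can check that trajectories started inside and far outside the cycle settle at the same amplitude:

```python
import numpy as np

def vdp_amplitude(x0, y0, mu=1.0, dt=0.001, T=60.0, tail=10.0):
    """Integrate x' = y, y' = mu*(1 - x^2)*y - x with explicit Euler and
    return max |x| over the final `tail` time units (the settled amplitude)."""
    x, y = x0, y0
    n, n_tail = int(T / dt), int(tail / dt)
    amp = 0.0
    for i in range(n):
        x, y = x + dt * y, y + dt * (mu * (1 - x * x) * y - x)
        if i >= n - n_tail:
            amp = max(amp, abs(x))
    return amp

# One start inside the cycle, one far outside: both settle near amplitude 2
amp_in = vdp_amplitude(0.1, 0.0)
amp_out = vdp_amplitude(4.0, 0.0)
print(amp_in, amp_out)
```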

The assumptions on g1 and g2 ensure that g2 acts like a restoring force and that g1 behaves such that it amplifies small-amplitude oscillations, but damps down large-amplitude oscillations. Exactly this happens in the case of the Van der Pol oscillator introduced in Model 2.3 of the previous chapter.

Chapter 4

Stochastic Oscillation Theory

The aim of this chapter is to provide a broad range of stochastic tools that capture oscillatory mathematical problems disturbed by random noise. There are a variety of different approaches and ideas for establishing a reliable theoretical basis on stochastic oscillations; the stochastic theory is not yet complete. The goal of this work is to bring together already existing concepts, to clarify the possibilities provided by them, and to become aware of existing limitations. The main objective is to answer the question of how oscillations can be defined mathematically in a stochastic setting.

In Chapter 3 some well-known deterministic statements concerning oscillations have been given. Of course, one cannot adopt these tools one-to-one in the stochastic case. But there are similar ideas and analogous concepts that contribute to governing stochastic oscillations. The first two sections suggest two very different definitions of a stochastic oscillator according to certain well-known deterministic tools. The third section addresses the stability of stochastic equations.

As mentioned in Chapter 2, neurons are affected by intrinsic channel noise due to ion channel fluctuations as well as by extrinsic noise arising from the synaptic activity of neighbouring cells. To capture these noise sources one can apply stochastic differential equations, which model time-dependent phenomena that underlie random effects. A detailed description of this type of stochastic equation is given in the subsequent definition [1, Chapter 6].

Definition 4.1 (Stochastic differential equation (SDE)). A d-dimensional stochastic differential equation, or in short SDE, with initial value

X(0) = X0 is defined as:

dX(t) = f(t, X(t)) dt + G(t, X(t)) dW(t),   t ∈ I = [0, T] ⊂ R        (4.1)

This is a symbolic notation of the integral equation:

X(t) = X0 + ∫_0^t f(s, X(s)) ds + ∫_0^t G(s, X(s)) dW(s),   t ∈ I = [0, T] ⊂ R        (4.2)

The following requirements turn SDEs of the form (4.1) or (4.2) into well-defined and useful tools:

1. The initial value X0 is an R^d-valued random variable.

2. W = (W(t))_{t∈I} is an m-dimensional Wiener process with natural filtration F^W.

3. The filtration F for the SDE is such that X0 as well as W are measurable with respect to F. In addition, the increments W(t) − W(s) are independent from F_s ∈ F ∀ s, t ∈ I with s ≤ t.

4. W(t) − W(0) is independent from X0 ∀ t ∈ I.

5. The drift f : I × R^d → R^d and the diffusion G : I × R^d → R^{d×m} are progressively measurable.

6. f is integrable, i.e., ∫_0^T |f(t, X(t))| dt < ∞.

7. G is mean-square integrable, i.e., E(∫_0^T |G(t, X(t))|² dt) < ∞.

The integral with respect to the Wiener process in the above Definition 4.1 has to be interpreted in the Itô sense. The required conditions 5 and 7 guarantee that the Itô-integral is well defined such that the usual properties are valid, for example the martingale property or the fact that the expectation of an Itô-integral is equal to zero.

A strong solution of the stochastic differential equation is a progressively measurable R^d-valued stochastic process X = (X(t))_{t∈I} that satisfies the SDE in Definition 4.1 with probability 1 for all t ∈ I.
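Strong solutions of (4.1) can be approximated numerically, for instance with the standard Euler-Maruyama scheme, which discretises the integral equation (4.2) step by step. A minimal sketch (numpy is assumed; the scheme itself is standard, but the function name and the geometric-Brownian-motion test equation are illustrative choices made here):

```python
import numpy as np

def euler_maruyama(f, G, x0, T=1.0, n=1000, seed=0):
    """Approximate a strong solution of dX = f(t, X) dt + G(t, X) dW on [0, T]."""
    rng = np.random.default_rng(seed)
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    X = np.empty(n + 1)
    X[0] = x0
    for i in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))  # Wiener increment ~ N(0, dt)
        X[i + 1] = X[i] + f(t[i], X[i]) * dt + G(t[i], X[i]) * dW
    return t, X

# Geometric Brownian motion dX = 0.05*X dt + 0.2*X dW as a test equation
t, X = euler_maruyama(lambda t, x: 0.05 * x, lambda t, x: 0.2 * x, x0=1.0)
```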

4.1 Definition of a Stochastic Oscillator 1

To provide a stochastic version of Definition 3.1 the following specific two-dimensional SDE is taken into consideration:

d ( x(t) )   (      y(t)      )      (        0        )
  ( y(t) ) = ( f2(x(t), y(t)) ) dt + ( G2(x(t), y(t)) ) dW(t)        (4.3)

In short notation this reads dX(t) = f(X(t)) dt + G(X(t)) dW(t).

System (4.3) corresponds to Definition 4.1 with a one-dimensional Wiener process W and dimension d = 2. The problem can be formally written as the equation:

ẍ(t) = f2(x(t), ẋ(t)) + G2(x(t), ẋ(t)) Ẇ(t)

In the deterministic setting a function oscillates when it has „for sure“ infinitely many simple zeros. Adding random perturbations to Definition 3.1 yields the first definition of a stochastic oscillator [28, Chapter 8].

Definition 4.2 (Stochastic oscillator 1). Let x(t) be a solution of the SDE (4.3) with initial value x(0) = x0. The stochastic process x(t) is called a stochastic oscillator, if it has infinitely many simple zeros almost surely on [0, ∞).

4.1.1 Stochastic Harmonic Oscillator

As shown in Chapter 5, the deterministic harmonic oscillator (Example 5.1) has infinitely many zeros with constant period T and constant amplitude A. In this subsection a stochastic version of the harmonic oscillator is introduced. It will be proven in Chapter 5 that this particular stochastic system satisfies the above Definition 4.2. This subsection is based on Chapter 8 in [28].

Example 4.3 (Stochastic harmonic oscillator). The stochastic harmonic oscillator equation ẍ(t) + ω² x(t) = σ Ẇ(t), where ω, σ > 0 and W is a one-dimensional Wiener process, is a formal notation for the system:

dx(t) = y(t) dt
dy(t) = −ω² x(t) dt + σ dW(t)

This problem is a linear SDE. Therefore, an explicit unique solution can be determined. A solution formula for linear SDEs can be found in [28, Theorem 3.1]. For ω = 1 and given initial values x(0) = 1 and y(0) = 0 the uniquely existing solution is given by:

x(t) = cos(t) + σ ∫_0^t sin(t − s) dW(s)        (4.4)

y(t) = −sin(t) + σ ∫_0^t cos(t − s) dW(s)        (4.5)

Theorem 5.5 introduced in Chapter 5 states that the stochastic harmonic oscillator given in Example 4.3 has infinitely many simple zeros, almost surely. For the stochastic equation one can neither compute the zeros explicitly nor obtain a constant period or amplitude. But with the use of stopping times it is possible to obtain some regular intervals in which the zeros are expected to lie. The subsequent theorem states that the first zero can be expected to lie in the interval [0, 2π]. A proof of this statement can be found in [28, Theorem 4.2 and Theorem 4.4].

Theorem 4.4 (First zero). Let x(t) be the solution of the stochastic harmonic oscillator equation given in Example 4.3 for ω = 1 and initial values x(0) = 1 and y(0) = 0. Let

τ1 := inf{t ≥ 0 : x(t) = 0} be the time of the first zero of x(t) on [0, ∞). Then, it holds that 0 ≤ E(τ1) ≤ 2π and E(τ1²) ≤ 12π².

The next theorem deals with the remaining zeros of the harmonic system. A detailed proof is stated in [28, Theorem 4.5].

Theorem 4.5 (The i-th zero). Let x(t) be the solution of the stochastic harmonic oscillator equation given in Example 4.3 for ω = 1 with initial values x(0) = 1 and y(0) = 0. Let τi := inf{t ≥ τi−1 : x(t) = 0} be the time of the i-th zero of x(t) on [0, ∞). Then, for i = 1, 2, ... it holds that E(τi) ≤ 2iπ.
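Theorems 4.4 and 4.5 can be probed by simulation. The following sketch (numpy is assumed; the discretisation, the noise level σ = 0.5 and the helper name first_zero are illustrative choices, not taken from [28]) applies the Euler-Maruyama scheme to the system of Example 4.3 and estimates E(τ1), which should respect the bound 2π:

```python
import numpy as np

def first_zero(sigma=0.5, dt=0.001, t_max=50.0, rng=None):
    """Euler-Maruyama path of dx = y dt, dy = -x dt + sigma dW with x(0) = 1,
    y(0) = 0; returns the first time the x-component hits zero."""
    rng = rng if rng is not None else np.random.default_rng()
    x, y, t = 1.0, 0.0, 0.0
    while t < t_max:
        dW = rng.normal(0.0, np.sqrt(dt))
        x, y = x + dt * y, y - dt * x + sigma * dW
        t += dt
        if x <= 0.0:
            return t
    return t_max

rng = np.random.default_rng(1)
taus = [first_zero(rng=rng) for _ in range(200)]
print(np.mean(taus))  # sample mean of tau_1, well below the bound 2*pi
```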

4.1.2 Theorem of Girsanov

The aim of this subsection is to use the previous statements on the linear stochastic harmonic oscillator model for more complex SDEs. The idea is to transform complicated stochastic equations to the harmonic oscillator by applying the Theorem of Girsanov. This results in the simple linear SDE, driven by another Wiener process with respect to an equivalent probability measure. Due to the equivalence of the measures, „almost sure“ statements also hold for the original complicated SDE. This subsection is mainly based on Section 8.7 in [25].

In order to formulate the Theorem of Girsanov, the exponential process has to be established. Remember that I = [0, T] ⊂ R indicates the regarded time interval. In the following, let L²(Ω, I) denote the space of stochastic processes (h(t))_{t∈I} that are adapted to the filtration F and square-integrable for almost all ω ∈ Ω, i.e.,

∫_0^T h²(t, ω) dt < ∞   a.s.

Definition 4.6 (Exponential process). Suppose that h = (h(t))_{t∈I} ∈ L²(Ω, I). The exponential process ε_h = (ε_h(t))_{t∈I} that is defined with respect to h is given by:

ε_h(t) = exp(∫_0^t h(s) dW(s) − (1/2) ∫_0^t h²(s) ds),   t ∈ I

The application of Itô’s formula gives that the exponential process solves the SDE:

dε_h(t) = h(t) ε_h(t) dW(t),   t ∈ I

This equation can be rewritten in the integral form:

ε_h(t) = 1 + ∫_0^t h(s) ε_h(s) dW(s),   t ∈ I        (4.6)

Here the integrand is abbreviated by i(s) := h(s) ε_h(s).

If the integral in (4.6) is a well-defined Itô-integral in the sense of Definition 4.1, i.e.,

E(∫_0^T i²(t) dt) < ∞,

then its expectation is equal to zero and the martingale property is fulfilled automatically by the definition of Itô-integrals. The condition (4.7) of the subsequent Theorem 4.7 guarantees the validity of these Itô-properties.

Theorem 4.7 (Martingale property). Let ε_h be the exponential process with respect to h as introduced in Definition 4.6. If the condition

E(ε_h(t)) = 1   ∀ t ∈ I        (4.7)

is satisfied, then ε_h is a martingale.

Idea of proof. Let the measure Q be defined by dQ := ε_h(T) dP. The assumption (4.7) implies that:

Q(Ω) = ∫_Ω ε_h(T) dP = E_P(ε_h(T)) = 1

Therefore, Q defines a probability measure on (Ω, F). By introducing Q_t as the restriction of the probability measure Q to F_t, condition (4.7) implies that dQ_t = ε_h(t) dP. For all s ≤ t ∈ I and A ∈ F_s the following two equations hold:

∫_A E_P(ε_h(t)|F_s) dP = ∫_A ε_h(t) dP = ∫_A dQ_t = Q(A)        (4.8)

∫_A ε_h(s) dP = ∫_A dQ_s = Q(A)        (4.9)

Equations (4.8) and (4.9) give that E_P(ε_h(t)|F_s) = ε_h(s) almost surely. Therefore, the exponential process ε_h is a martingale.

The assumption (4.7) can be checked directly only in very few situations, for example when h ∈ L²[0, T] is a deterministic function. In this case, the integral

∫_0^t h(s) dW(s)

is normally distributed with zero mean. Furthermore, due to the Itô-isometry its variance is given by ∫_0^t h²(s) ds. Since for a normally distributed random variable Z ~ N(0, σ²) it holds that E(exp(Z)) = exp(σ²/2), it follows that:

E(ε_h(t)) = E(exp(∫_0^t h(s) dW(s) − (1/2) ∫_0^t h²(s) ds))
          = exp(−(1/2) ∫_0^t h²(s) ds) · E(exp(∫_0^t h(s) dW(s)))
          = exp(−(1/2) ∫_0^t h²(s) ds) · exp((1/2) ∫_0^t h²(s) ds) = 1
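For a deterministic h the identity E(ε_h(t)) = 1 can also be checked by Monte Carlo simulation. A minimal sketch (numpy is assumed), using the simplest choice h ≡ 1, for which ε_h(t) = exp(W(t) − t/2):

```python
import numpy as np

# Monte Carlo check of E(eps_h(t)) = 1 for h ≡ 1, where
# eps_h(t) = exp(W(t) - t/2):
rng = np.random.default_rng(42)
t = 1.0
W_t = rng.normal(0.0, np.sqrt(t), size=200_000)  # W(t) ~ N(0, t)
eps = np.exp(W_t - 0.5 * t)
print(eps.mean())  # close to 1
```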

The tough condition (4.7) can be replaced by the Novikov-condition given by:

E(exp((1/2) ∫_0^T h²(t) dt)) < ∞        (4.10)

This inequality is a stronger but easier testable condition, at least in some specific situations [30, Chapter 8]. The inequality of Jensen yields the estimate:

E(exp((1/2) ∫_0^T h²(t) dt)) ≤ ∫_0^T E(exp((1/2) h²(t))) dt

Therefore, it is sufficient to show that:

E(exp((1/2) h²(t))) < ∞

This may be useful when h is normally distributed. For example, let h = W(1/2) such that h ~ N(0, 1/2). This gives that:

E(exp((1/2) h²)) = ∫_R e^(x²/2) (1/√π) e^(−x²) dx = C < ∞

In general, the convergence of this integral depends on t and on the parameters of the system. Unfortunately, it turns out that in general one cannot overcome the martingale condition without knowing that h is L²-bounded on [0, T], i.e., there exists a constant C > 0 such that:

∫_0^T h²(t, ω) dt ≤ C   ∀ ω ∈ Ω

With the use of these tools the Theorem of Girsanov can now be introduced. For better readability, it will be split into two parts.

Theorem 4.8 (Girsanov I). Let W be an m-dimensional Wiener process on the probability space (Ω, F, P) and suppose that the stochastic process h ∈ L²(Ω, I). If the corresponding exponential process ε_h established in Definition 4.6 is a martingale, then the following statements hold:

1. The measure Q defined as dQ := ε_h(T) dP, i.e., Q(A) = ∫_A ε_h(T) dP ∀ A ∈ F, is a probability measure on (Ω, F).

2. Q and P are equivalent probability measures.

3. B = (B(t))_{t∈I} with B(t) := W(t) − ∫_0^t h(s) ds is a Wiener process regarding Q.

A proof of this first part of Girsanov's Theorem can be found in [25, pp. 143-144]. The next theorem is based on the previous one and provides an additional statement concerning the transformation of SDEs.

Theorem 4.9 (Girsanov II). In addition to the assumptions of Theorem 4.8, let Z = (Z(t))_{t∈I} ∈ R^d be an Itô-process of the form

dZ(t) = f(t) dt + G(t) dW(t),   t ∈ I

with drift f ∈ R^d and diffusion G ∈ R^{d×m}. If there exists a process (α(t))_{t∈I} such that G(t)h(t) = α(t) − f(t), then complementary to the statements 1, 2 and 3 it holds that:

4. The process Z satisfies the following new equation with respect to the Wiener process B:

dZ(t) = α(t) dt + G(t) dB(t),   t ∈ I

Proof. The equation B(t) = W(t) − ∫_0^t h(s) ds for the new Wiener process with respect to the measure Q can be rewritten in the differential form dB(t) = dW(t) − h(t) dt. This implies that:

dZ(t) = f(t) dt + G(t) dW(t) = (f(t) + G(t)h(t)) dt + G(t) dB(t),   t ∈ I

where the drift term f(t) + G(t)h(t) equals α(t).

Consequently, the process Z satisfies the new SDE with respect to B.

Below, a general nonlinear equation will be introduced to demonstrate the application of Theorems 4.8 and 4.9 [28, Chapter 8]. The desired result is a transformation of this SDE to the harmonic system given in Example 4.3, such that the statements from the previous subsection are transferable to this nonlinear equation. Whenever an expectation value appears, the relation dQ = ε_h(T) dP has to be kept in mind.

Example 4.10 (Stochastic nonlinear oscillator). The stochastic nonlinear oscillator equation ẍ(t) + κ(x(t), ẋ(t)) = σ Ẇ(t), where σ > 0 and W is a one-dimensional Wiener process, is a formal notation for the system:

dx(t) = y(t) dt
dy(t) = −κ(x(t), y(t)) dt + σ dW(t)

To apply Girsanov’s Theorem let the system of Example 4.10 be rewritten as:

d ( x(t) )   ( y(t) )      ( 0 )
  ( y(t) ) = (  −κ  ) dt + ( σ ) dW(t),   t ∈ I        (4.11)

In short notation this reads dZ(t) = f(t) dt + G(t) dW(t).

The goal is to provide a transformation to the harmonic equation of the form:

d ( x(t) )   (  y(t) )      ( 0 )
  ( y(t) ) = ( −x(t) ) dt + ( σ ) dB(t),   t ∈ I        (4.12)

In short notation this reads dZ(t) = α(t) dt + G(t) dB(t).

It remains to find the process h that defines the equivalent measure Q. Equations (4.11) and (4.12) imply that:

( 0 )        (  y(t) )   ( y(t) )   (           0           )
( σ ) h(t) = ( −x(t) ) − (  −κ  ) = ( −x(t) + κ(x(t), y(t)) ),   t ∈ I

i.e., G(t)h(t) = α(t) − f(t) as required in Theorem 4.9. Therefore, by choosing

h(t) = (κ(x(t), y(t)) − x(t)) / σ,   t ∈ I,

one can transform the stochastic nonlinear oscillator into the stochastic harmonic oscillator (4.12), where B(t) = W(t) − ∫_0^t h(s) ds is a Wiener process with respect to the probability measure Q, defined by dQ = ε_h(T) dP.

The main difficulty when applying this theorem is to show that the exponential process ε_h with respect to the process h is a martingale.

4.2 Definition of a Stochastic Oscillator 2

The aim of this section is to provide a stochastic version of Definition 3.5. In that definition a deterministic oscillator has been classified as a periodic solution that appears as a closed orbit in the phase plane. Due to stochastic perturbations the notion of a closed orbit in the (x, y)-plane loses its meaning. However, there are some tools that enable an analogous definition, resulting in the concept of a random periodic solution.

In this section the autonomous, d-dimensional SDE of the form

dX(t) = f(X(t)) dt + G(X(t)) dW(t),   t ∈ R        (4.13)

is taken into consideration. System (4.13) is introduced according to Definition 4.1 with a two-sided, one-dimensional Wiener process W = (W(t))_{t∈R} that is defined on a probability space (Ω, F, P). The dimension d is equal to 1 or 2, depending on the regarded problem. For dimension d = 2, the stochastic system (4.13) is a generalisation of Equation (4.3).

4.2.1 Stochastic Cocycle Property

To formulate a stochastic version of the flow introduced in Definition 3.2, the Wiener shift operator and the canonical sample space have to be established. The Wiener shift allows one to control the additional dimension Ω such that the cocycle can be defined as the stochastic counterpart of the flow. This subsection is based on [10, Chapter 6].

The probability space for the Wiener process appearing in the SDE (4.13) can be specified as the so called canonical probability space given by:

(Ω, F, P) = (C(R, R¹), B(C(R, R¹)), P_W)        (4.14)

In (4.14) the space R¹ refers to the real-valued Wiener process, whereas R corresponds to time. The sample space Ω is given by all continuous and real-valued functions on R, i.e., the samples are specified as the paths. The σ-algebra F, which is defined on Ω, is given by the corresponding Borel sets. The induced probability measure is called Wiener measure, denoted by P_W. In particular, the samples can be viewed as the trajectories of the Wiener process.

Figure 4.1: Canonical probability space [10]

The idea behind this specific space is to consider a Wiener process from another perspective, which should in turn facilitate the handling of random dynamical systems. A Wiener process W(t, ω) can be considered as a random variable W mapping into the space of its paths C(R, R¹):

W : Ω → C(R, R¹),   ω ↦ ω(s) := W_s(ω),   s ∈ R        (4.15)

For a fixed time point t one obtains a real-valued random variable

W_t : Ω → R¹
ω ↦ ω(t)

whose distribution is identified by P(W_t1 ∈ I1, ..., W_tn ∈ In) for all t1, ..., tn ∈ R and I1, ..., In ⊂ R¹. With the use of (4.15) a new stochastic process W̃_t on the sample space C(R, R¹) can be introduced:

W̃_t : C(R, R¹) → R¹
ω(s) ↦ ω(t),   s ∈ R

The fact that P(W_t1 ∈ I1, ..., W_tn ∈ In) = P_W(W̃_t1 ∈ I1, ..., W̃_tn ∈ In) implies that W_t and W̃_t have the same distribution and, therefore, W̃_t can be regarded as the canonical version of W_t, i.e., it is a Wiener process on the canonical sample space. From now on, let the canonical probability space be denoted by (Ω, F, P) as usual. Now the Wiener shift, which is a mapping on the canonical sample space, can be defined.

Definition 4.11 (Wiener shift). Let t ∈ R be fixed. The Wiener shift operator (θt)t∈R is given by the mapping:

θ_t : Ω → Ω
ω(s) ↦ θ_t ω(s) = ω(t + s) − ω(t),   s ∈ R

According to the Definition (4.15), for a fixed t the Wiener shift operator has to be interpreted as:

Ws(θtω) = Wt+s(ω) − Wt(ω), s ∈ R

It fulfils the following properties:

1. θ0ω(s) = ω(s + 0) − ω(0) = ω(s)

2. θt+s = θt ◦ θs, ∀s, t ∈ R, i.e., it satisfies a flow property

3. (ω, t) ↦→ θtω is measurable due to its continuity

4. P(θ_t A) = P(A) ∀ A ∈ F and t ∈ R, i.e., P is an invariant measure with respect to the Wiener shift θ

The cocycle, a stochastic version of the deterministic flow, provided in Definition 3.2, can now be introduced. It describes the time evolution of a system that is perturbed by noise. In contrast to the flow property, the new variable ω, which is treated by the Wiener shift, appears.

Definition 4.12 (Cocycle). A stochastic cocycle over the Wiener shift θ is a mapping ϕ : Rd × R × Ω → Rd that satisfies the two conditions:

1. ϕ(X, 0, ω) = X ∀ X ∈ R^d and ω ∈ Ω

2. ϕ(X, t + s, ω) = ϕ(ϕ(X, s, ω), t, θ_s ω) ∀ X ∈ R^d, ω ∈ Ω and s, t ∈ R

The conditions in Definition 4.12 are called the cocycle property. The cocycle introduced in Definition 4.12 is also called a perfect cocycle, due to the fact that the second condition holds for all t and ω. In the case that the condition only holds almost surely, it is denoted as a crude cocycle. Furthermore, the mapping (X, t) ↦ ϕ(X, t, ω) is continuous for all ω ∈ Ω and ϕ is measurable.

Figure 4.2: Cocycle property [2]

Figure 4.2 provides a descriptive picture of the cocycle property. In the deterministic setting a trajectory moves along the single phase plane. In the stochastic framework a new dimension concerning Ω enters the system. Therefore, one has to deal with infinitely many phase spaces, namely one for each ω ∈ Ω. The stochastic trajectory moves along these phase spaces as time evolves. The Wiener shift operator θ moves an ω ∈ Ω to θ_s ω ∈ Ω at time s. The cocycle ϕ shifts a point X ∈ {ω} × R^d to a point ϕ(X, s, ω) ∈ {θ_s ω} × R^d at time s. In addition, the point ϕ(X, s, ω) ∈ {θ_s ω} × R^d can be moved by the cocycle ϕ to the point ϕ(ϕ(X, s, ω), t, θ_s ω) at time t. This point lies in the phase space corresponding to {θ_{t+s} ω} and coincides with the movement ϕ(X, t + s, ω) of the point X at time t + s.

To provide a better understanding of the meaning of a cocycle, the following SDE is introduced as a simple example:

Example 4.13 (Simple SDE). The simple one-dimensional SDE with initial value

X(0) = x0 is given by: dX(t) = dW (t), t ∈ R

The solution of the SDE introduced in Example 4.13 is given by:

X(t) = x0 + W (t), t ∈ R

This result can be interpreted as:

ϕ(x0, t, ω) = x0 + Wt(ω). (4.16)

The mapping ϕ is a cocycle in the sense of Definition 4.12 because the following con- ditions hold:

1. ϕ(x0, 0, ω) = x0

2. ϕ(ϕ(x0, s, ω), t, θsω) = ϕ(x0, t + s, ω)

The first condition follows immediately from the fact that the Wiener process starts at 0. For the second condition the definition of the cocycle gives that:

ϕ(ϕ(x0, s, ω), t, θsω) = Wt(θsω) + ϕ(x0, s, ω) = Wt(θsω) + Ws(ω) + x0

The definition of the Wiener shift operator implies the following:

ϕ(ϕ(x0, s, ω), t, θsω) = Wt+s(ω) − Ws(ω) + Ws(ω) + x0 = Wt+s(ω) + x0

Finally, the definition of the cocycle (4.16) completes the result.
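The verification above can also be reproduced numerically by representing a sample path ω on a discrete grid. The following sketch (numpy is assumed; the helper names W_at, shift and phi are ad-hoc choices) checks the cocycle property of (4.16) exactly, since the telescoping of the Wiener shift holds path by path:

```python
import numpy as np

# A sample omega is represented by the values of its path on a grid on [0, 10];
# the cocycle is phi(x0, t, omega) = x0 + W_t(omega).
rng = np.random.default_rng(3)
dt = 0.01
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), 1000))])

def W_at(omega, t):
    return omega[int(round(t / dt))]

def shift(omega, s):
    """theta_s omega: W_t(theta_s omega) = W_{t+s}(omega) - W_s(omega)."""
    i = int(round(s / dt))
    return omega[i:] - omega[i]

def phi(x0, t, omega):
    return x0 + W_at(omega, t)

x0, s, t = 1.5, 2.0, 3.0
lhs = phi(phi(x0, s, W), t, shift(W, s))  # two steps with the shifted path
rhs = phi(x0, t + s, W)                   # one step with the original path
print(abs(lhs - rhs) < 1e-12)  # True: the cocycle property holds exactly
```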

4.2.2 Random Fixed Points and Periodic Solutions

Based on the concept of a stochastic cocycle it is possible to formulate stochastic versions of fixed points and periodic solutions, referring to Definitions 3.3 and 3.4, respectively. This subsection is based on Chapter 6 of [10] concerning random fixed points and on [33] concerning stochastic periodic solutions.

Definition 4.14 (Random fixed point). Let ϕ : R^d × R × Ω → R^d be a cocycle over the Wiener shift operator θ. A random variable X : Ω → R^d is called a random fixed point if ϕ(X(ω), t, ω) = X(θ_t ω) ∀ t ∈ R, almost surely.

Figure 4.3: Trajectory on a random fixed point

In the deterministic setting a fixed point describes a stationary state: the trajectory remains at this single point in the phase plane. In the stochastic framework a fixed point consists of infinitely many points X(ω), each sitting on the corresponding phase space {ω} × R^d. Furthermore, the random fixed point is by definition invariant over time along the Wiener shift θ for the cocycle ϕ. As can be observed in Figure 4.3, the cocycle shifts the point X(ω) ∈ {ω} × R^d to a point ϕ(X(ω), t, ω) ∈ {θ_t ω} × R^d at time t. Due to the requirement ϕ(X(ω), t, ω) = X(θ_t ω), once the trajectory reaches a fixed point it will move from fixed point to fixed point along the corresponding phase spaces.

For a demonstrative example let the well-known one-dimensional Langevin equation of the following form be introduced.

Example 4.15 (Langevin equation). The one-dimensional Langevin SDE with initial value X(0) = x0 is given by:

dX(t) = −X(t)dt + dW (t), t ∈ R

The solution of the SDE given in Example 4.15 is interpreted as the stochastic cocycle given by:

ϕ(x0, t, ω) = e^(−t) x0 + ∫_0^t e^(s−t) dW_s(ω)        (4.17)

In the following, it will be verified that

X(ω) = ∫_{−∞}^0 e^s dW_s(ω)        (4.18)

is a random fixed point in the sense of Definition 4.14 for the problem under consideration. Inserting the random fixed point (4.18) into Equation (4.17) implies that:

ϕ(X(ω), t, ω) = e^(−t) ∫_{−∞}^0 e^s dW_s(ω) + ∫_0^t e^(s−t) dW_s(ω) = ∫_{−∞}^t e^(s−t) dW_s(ω)

Starting from equation (4.18) gives:

X(θ_t ω) = ∫_{−∞}^0 e^s dW_s(θ_t ω) = ∫_{−∞}^0 e^s d(W_{t+s}(ω) − W_t(ω)) = ∫_{−∞}^t e^(s−t) dW_s(ω)   a.s.

Therefore, the condition ϕ(X(ω), t, ω) = X(θ_t ω) of a random fixed point is satisfied for the concrete guess (4.18) belonging to the Langevin equation. In general it is very challenging to find the random fixed points of an SDE. Detecting such random points, as well as guaranteeing their existence, is still an open problem in stochastics.
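Although the integral (4.18) cannot be evaluated in closed form for a given path, the attracting character of this random fixed point can be observed numerically: two solutions of the Langevin equation driven by the same noise realisation approach each other at rate e^(−t), so both end up tracking the same random point. A small sketch (numpy is assumed; the function name langevin_path and all parameters are illustrative):

```python
import numpy as np

def langevin_path(x0, dW, dt):
    """Explicit Euler-Maruyama solution of dX = -X dt + dW, driven by a
    fixed sequence of noise increments dW."""
    x = x0
    for inc in dW:
        x = x + dt * (-x) + inc
    return x

rng = np.random.default_rng(7)
dt = 0.001
dW = rng.normal(0.0, np.sqrt(dt), size=10_000)  # one noise realisation on [0, 10]

# Two different initial values driven by the SAME noise: the gap decays like e^{-t}
a = langevin_path(5.0, dW, dt)
b = langevin_path(-5.0, dW, dt)
print(abs(a - b))  # roughly 10*e^{-10}, i.e. about 5e-4
```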

According to Definition 3.4 and referring to the stochastic tools introduced so far, a random notion of a periodic solution for two-dimensional SDEs can be formulated as follows.

Definition 4.16 (Random periodic solution). Let ϕ : R² × R × Ω → R² be a cocycle over the Wiener shift θ. A random periodic solution of System (4.13) is a periodic function ψ : R × Ω → R² with period T > 0 such that ψ(t, ω) = ψ(t + T, ω) and ϕ(ψ(t0, ω), t, ω) = ψ(t0 + t, θ_t ω) ∀ t, t0 ∈ R and ω ∈ Ω.

Figure 4.4: Trajectory on a random periodic solution

In the deterministic framework a periodic solution corresponds to a closed trajectory evolving in the single phase plane. Once the flow enters this curve it will remain on it. In the stochastic context a random periodic solution comprises infinitely many periodic functions ψ(·, ω), each lying on the related phase space {ω} × R². In contrast to the deterministic case, a point ψ(t0, ω) on the periodic curve ψ(·, ω) is shifted by the cocycle to a point ϕ(ψ(t0, ω), t, ω) ∈ {θ_t ω} × R² at time t. This point corresponds to ψ(t0 + t, θ_t ω) lying on the periodic function ψ(·, θ_t ω). Figure 4.4 describes this situation. When considering a whole family of trajectories with various initial points on ψ(·, ω), all these trajectories lie on the periodic function ψ(·, θ_t ω) at time t. In addition, the random periodic solution is invariant over time along the Wiener shift θ. Analogously to the case of a random fixed point, once a trajectory reaches a point that lies on a periodic curve, the system will move from periodic curve to periodic curve across the corresponding phase spaces. Finally, the tools introduced so far enable us to state a second definition of a stochastic oscillator, referring to Definition 3.5.

Definition 4.17 (Stochastic oscillator 2). The autonomous, two-dimensional SDE (4.13) with initial value X(0) = X₀ is called a stochastic oscillator if there exists a corresponding random periodic solution as introduced in Definition 4.16.

In their paper [33] Huaizhong Zhao and Zuo-Huan Zheng give a detailed proof of the existence of random periodic solutions on a cylinder. To this end, a corresponding interpretation of a periodic solution is introduced. Compared with Definition 4.16, the following notion is extended by the winding number τ, which corresponds to the number of rotations around the cylinder. The idea is to transform the original system on R² to a system on the cylinder S¹ × R¹. In the following definition, let s ∈ S¹ be a natural parameter for a closed curve on the cylinder S¹ × R¹. In particular, one has s ∈ [0, 1] when applying modulo 1.

Definition 4.18 (Random periodic solution on a cylinder). Let the following assumptions hold:

1. ϕ : S¹ × R¹ × R × Ω → S¹ × R¹ is a cocycle over the Wiener shift operator θ.
2. For a fixed ω ∈ Ω, let γ_ω : R → R¹ be a continuous function with period τ ∈ N.
3. G_ω := graph(γ_ω) = {(s mod 1, γ_ω(s)), s ∈ R} is invariant under the cocycle ϕ, i.e., ϕ(t, ω)G_ω = G_{θ_t ω}.
4. There exists a period T > 0 such that for any s ∈ [0, τ)

ϕ(T, ω)(s mod 1, γ_ω(s)) = (s mod 1, γ_{θ_T ω}(s)) almost surely.

Then the system has a random periodic solution on the cylinder S¹ × R¹ with period T and winding number τ.

Figure 4.5: Random periodic trajectory on a cylinder [33]

In Definition 4.18, a graph G_ω defines a periodic curve on the cylinder with winding number τ, for fixed ω ∈ Ω. The graphs G_ω correspond to the periodic functions ψ(·, ω) of Definition 4.16. The invariance condition guarantees that a trajectory starting from a point of the periodic curve G_ω will lie on the periodic curve G_{θ_t ω} at time t. The last condition concerns the period T. It states that if the trajectory starts at a specific point on the periodic curve G_ω, it must lie on the related point on the periodic curve G_{θ_T ω} at time T, i.e., both points are fixed by the same value of s ∈ [0, τ). For an illustration of a random periodic trajectory with period T and periodic curves G_ω with winding number τ = 2, see Figure 4.5.

In Chapter 5 the specific stochastic nonlinear oscillator equation given in Example 5.11 will be analysed in detail. It will be shown that this system fits into the framework of Definition 4.17.

4.3 System Stability

This section suggests two different tools for dealing with the question of system stability. The energy of a deterministic system gives information on how the trajectory propagates in the phase plane as time goes by. This concept corresponds to the theory of Lyapunov functions discussed in Section 3.3. Accordingly, in the stochastic setting, one can consider the mean energy evolution by applying the expectation value. As a second tool, the so-called Lyapunov exponents enable statements on the asymptotic behaviour of random dynamical systems. They can be considered as the stochastic counterparts of eigenvalues.

4.3.1 System Energy

This subsection deals with the energy of stochastic systems and is mainly taken from [10, p. 101]. In Chapter 5.1, where deterministic applications are treated, it is demonstrated that the harmonic oscillator preserves the energy. In contrast, for the damped equation the energy decreases with time. The goal now is to study the impact of noise on the energy of the stochastic models.

The system energy will be introduced for autonomous, two-dimensional SDEs of type (4.13). For systems of this form, the energy is defined in the following way.

Definition 4.19 (System energy). The energy of the two-dimensional system (4.13) is defined as:

E(t) := ½∥X(t)∥² = ½(x²(t) + y²(t)) ≥ 0, t ∈ R

This work mainly addresses systems of two dimensions, but this tool can easily be extended to the d-dimensional case with an m-dimensional Wiener process. Since (X(t))_{t∈R} is a stochastic process, the interest lies in the evolution of the mean energy. Applying Itô's formula to the system energy gives:

dE(t) = (f(X(t)) · X(t) + ½ G(X(t)) · G(X(t))) dt + (G(X(t)) · X(t)) dW(t), t ∈ R

Taking the expectation value on both sides yields the subsequent equation for the mean energy evolution:

(d/dt) E(E(t)) = E(f(X(t)) · X(t)) + ½ E(G(X(t)) · G(X(t))), t ∈ R (4.19)

For some particular equations one can compute the mean energy explicitly. For instance, an explicit computation is possible for the stochastic harmonic oscillator given in Example 4.3, see Section 5.2. If an explicit description cannot be found, one can try to derive estimates to obtain bounds on the energy.

4.3.2 Lyapunov Exponents

The question of system stability is linked with the asymptotic behaviour of the solution trajectory. In Section 3.3 the importance of eigenvalues for the long-term behaviour of deterministic systems has been discussed. The eigenvalues carry information on the stability of a deterministic system. The damped harmonic oscillator, for instance, is asymptotically stable since the solution trajectory approaches the only fixed point as time tends to infinity, see Section 5.1.

The so-called Lyapunov exponents can be seen as stochastic counterparts of the eigenvalues. They describe the asymptotic exponential growth or decay of the trajectories of random dynamical systems. This subsection is mainly based on Section 6.4 of [10].

Deterministic Setting

Let the deterministic setting treated in Chapter 3 be recalled for a moment and a linear system of the following form be taken into consideration:

Ẋ = AX, X(0) = X₀ ∈ R²

The system is of the form (3.4) with a diagonalizable matrix A. The solution of this linear system can be interpreted by the flow as introduced in Definition 3.2:

ϕ(X₀, t) = e^{At} X₀, t ∈ R

The set of initial values can be split by the eigenspaces of the system, i.e., R² = E₁ ⊕ E₂.

An eigenvalue λi with its corresponding eigenspace Ei for i = 1, 2 is determined by:

(0, 0) ≠ X₀ ∈ Eᵢ ⇔ λᵢ = lim_{t→∞} (1/t) ln ∥ϕ(X₀, t)∥

To demonstrate the eigenvalue computation using this equation, an example is given.

Example 4.20 (Eigenvalues). Let the system of form (3.4) be considered with the specific matrix

A = ( 2  3 ; 0  −4 )

The eigenvalues of A are given by λ₁ = 2 and λ₂ = −4 with corresponding eigenvectors u₁ = (1, 0)ᵀ and u₂ = (1, −2)ᵀ, respectively. The deterministic flow is given by:

ϕ(X₀, t) = e^{At} X₀ with e^{At} = ( e^{2t}  ½e^{2t} − ½e^{−4t} ; 0  e^{−4t} )

The corresponding eigenvalues can be determined as follows:

X₀ = (1, 0)ᵀ ∈ E₁ ⇔ lim_{t→∞} (1/t) ln ∥ϕ(X₀, t)∥ = lim_{t→∞} (1/t) ln(e^{2t}) = 2 = λ₁

X₀ = (1, −2)ᵀ ∈ E₂ ⇔ lim_{t→∞} (1/t) ln ∥ϕ(X₀, t)∥ = lim_{t→∞} (1/t) ln(√5 e^{−4t}) = lim_{t→∞} (ln(√5)/t − 4) = −4 = λ₂
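The finite-time quantity (1/t) ln ∥ϕ(X₀, t)∥ already approximates the eigenvalues well for moderate t. A minimal Python sketch (illustrative names; the thesis itself contains no code for this example), using the closed-form matrix exponential of Example 4.20:

```python
import math

# Closed-form flow phi(X0, t) = e^{At} X0 for A = ( 2 3 ; 0 -4 ),
# written out entry by entry as in Example 4.20.
def flow(x0, y0, t):
    e2, e4 = math.exp(2 * t), math.exp(-4 * t)
    x = e2 * x0 + 0.5 * (e2 - e4) * y0
    y = e4 * y0
    return x, y

def growth_rate(x0, y0, t):
    """Finite-time approximation of lim (1/t) ln ||phi(X0, t)||."""
    x, y = flow(x0, y0, t)
    return math.log(math.hypot(x, y)) / t

# X0 = (1, 0) lies in E1, X0 = (1, -2) lies in E2.
print(growth_rate(1.0, 0.0, 20.0))   # equals lambda_1 = 2 up to rounding
print(growth_rate(1.0, -2.0, 20.0))  # close to lambda_2 = -4 (error ln(sqrt(5))/t)
```

For the second initial value the finite-time rate carries the transient term ln(√5)/t, which vanishes as t → ∞, exactly as in the computation above.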

Stochastic Setting

As an introductory example to the stochastic framework one can consider the one- dimensional geometric Brownian motion.

Example 4.21 (Geometric Brownian motion). The one-dimensional geometric Brownian motion with initial value X(0) = X₀ ∈ R is given by:

dX(t) = µX(t) dt + σX(t) dW(t), t ∈ R

The solution of this system can be interpreted by the following cocycle:

ϕ(X₀, t, ω) = X₀ exp((µ − σ²/2)t + σW(t))

In analogy to the deterministic setting, the Lyapunov exponent Λ can be determined by the subsequent calculation:

lim_{t→∞} (1/t) ln ∥ϕ(X₀, t, ω)∥ = lim_{t→∞} (1/t) ln(X₀ e^{(µ − σ²/2)t + σW(t)})
= lim_{t→∞} (ln(X₀)/t + (µ − σ²/2) + σW(t)/t) = µ − σ²/2 = Λ,

where W(t)/t → 0 almost surely as t → ∞.
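This calculation can be checked by simulation. A hedged Python sketch (parameter values and variable names are illustrative, not from the thesis): sample one long path via the explicit cocycle and compare the finite-time exponent with Λ = µ − σ²/2.

```python
import math
import random

random.seed(1)

# One long sample of geometric Brownian motion via the explicit cocycle
#   X(t) = X0 * exp((mu - sigma^2/2) t + sigma W(t)).
mu, sigma, x0, T = 0.5, 0.4, 1.0, 1000.0
w_T = random.gauss(0.0, math.sqrt(T))   # W(T) ~ N(0, T)
x_T = x0 * math.exp((mu - sigma**2 / 2) * T + sigma * w_T)

# Finite-time Lyapunov exponent (1/T) ln |X(T)|.  Since W(T)/T -> 0
# almost surely, this converges to Lambda = mu - sigma^2/2 = 0.42.
lam_hat = math.log(abs(x_T)) / T
print(lam_hat)
```

The fluctuation around Λ is of size σ/√T, so for T = 1000 the estimate is already accurate to a few hundredths.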

This work mainly addresses stochastic systems of two dimensions, where the computation of Lyapunov exponents is more complicated. In this situation it is useful to transform Itô integrals to stochastic integrals of Stratonovich type. Therefore, let the following two-dimensional SDE of Stratonovich type be given:

dX(t) = AX(t) dt + ΣX(t) ◦ dW(t), t ∈ R (4.20)

In Equation (4.20), W is a one-dimensional Wiener process, X(0) = X₀ ∈ R² is a deterministic initial value and A, Σ ∈ R^{2×2} are constant matrices. Lyapunov exponents can be defined for more general types of SDEs, but systems of this form provide some special features. Since one can pass from Itô to Stratonovich type and vice versa, System (4.20) is a special linear form of the SDE (4.13).

The solution of the linear SDE (4.20) can be interpreted by the corresponding cocycle as introduced in Definition 4.12. The cocycle ϕ(X₀, t, ω), t ∈ R, is given on a probability space (Ω, F, P) and defined over the Wiener shift operator established in Definition 4.11.

By the multiplicative ergodic theorem [10, Theorem 6.35] there exists a random splitting of the set of initial values by the eigenspaces, i.e., R² = E₁(ω) ⊕ E₂(ω).

Definition 4.22 (Lyapunov exponents). The upper and lower Lyapunov exponents Λ₁ > Λ₂ with corresponding eigenspaces Eᵢ(ω) for i = 1, 2 and almost all ω ∈ Ω are given by:

(0, 0) ≠ X₀ ∈ Eᵢ(ω) ⇔ Λᵢ = lim_{t→±∞} (1/t) ln ∥ϕ(X₀, t, ω)∥

A Lyapunov exponent Λ corresponds to the rate of exponential growth or decay of the system solution ∥ϕ(X₀, t, ω)∥ moving along a trajectory that passes through the initial point X₀ ∈ R².

The basic idea for the computation of the Lyapunov exponents of system (4.20) is to project the system onto the unit circle. Therefore, the solution is split into its radial and angular parts as follows:

r(t) = ∥X(t)∥, s(t) = X(t)/r(t)

By applying Itô's formula one obtains stochastic equations for the new processes (r(t))_{t∈R} and (s(t))_{t∈R}. The detailed computation procedure of the Lyapunov exponents for SDEs of the form (4.20) can be found in [18] and [19].

Chapter 5

Theory Application on Specific Models

Chapter 3 and Chapter 4 provided a broad range of deterministic and stochastic oscillation tools, respectively. The aim of this chapter is the application of the previously introduced theory to specific deterministic models as well as to a selection of particular stochastic versions of these problems. The first section comprises the deterministic setting while the second section deals with the stochastic framework.

5.1 Deterministic Applications

This section treats deterministic equations. The first two models are standard ones, namely the harmonic oscillator and the weakly damped harmonic oscillator. Then, the theory is applied to the Van der Pol oscillator, which appears as a special case of the FitzHugh-Nagumo neuron model as described in Chapter 1. Finally, a specific nonlinear system will be introduced.

5.1.1 Harmonic Oscillator

The harmonic oscillator is described by the following equation or by the subsequent equivalent system of equations, respectively.

Example 5.1 (Harmonic oscillator). The harmonic oscillator equation ẍ + ω²x = 0, with ω > 0, is equivalent to the system:

ẋ = y
ẏ = −ω²x

Figure 5.2 provides possible solution trajectories for different initial values and ω = 0.5. Each orbit corresponds to one specific initial point.

Figure 5.1: Harmonic oscillator Figure 5.2: Periodic solutions

The harmonic oscillator in Example 5.1 is a simple linear problem that can be solved explicitly. The general solution reads as x(t) = C1 cos(ωt) + C2 sin(ωt). It can be rewritten in the form x(t) = A sin(ωt + ϕ)

for a uniquely existing ϕ ∈ [0, 2π) and A = √(C₁² + C₂²). This implies immediately that the solution has infinitely many zeros that are all simple and, as a consequence, this problem fits into the framework of Definition 3.1. In addition, the solution has constant amplitude A and constant period T = 2π/ω. For a demonstrative picture, see Figure 5.1. The solution of the corresponding system is given by:

x(t) = A sin(ωt + ϕ)
y(t) = Aω cos(ωt + ϕ)

For A ≠ 0 all trajectories of the system are given by ellipses of the form

x²/A² + y²/(Aω)² = 1,

all having the same center (0, 0), which is the only fixed point of the system. This implies the stability of this special point [17, p. 536]. For a picture of the resultant phase portrait, see Figure 5.2. By computing the corresponding eigenvalues the fixed point (0, 0) turns out to be stable as well. The system has periodic solutions in phase space and, therefore, it fulfils Definition 3.5 [17, pp. 200-201].

As an alternative to the computation of eigenvalues one can use the method of Lyapunov by identifying the function E with the energy of the system. The system energy is defined as:

E(x, y) := ½ω²x² + ½y² (5.1)

Consequently, it holds that:

dE/dt = (∂E/∂x)ẋ + (∂E/∂y)ẏ = ω²xy + y(−ω²x) = 0

Theorem 3.9 implies that (0, 0) is a stable fixed point. Moreover, it can be observed that energy is constant in time, which means that the harmonic oscillator is an energy preserving model [17, p. 546].
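The conservation of E along trajectories can also be observed numerically. A small Python sketch (not from the thesis; step size, horizon, and the classical fourth-order Runge-Kutta scheme are illustrative choices) integrates the system and tracks the energy:

```python
# Numerical check that the harmonic oscillator conserves the energy
# E(x, y) = (1/2) omega^2 x^2 + (1/2) y^2 along its trajectories.
# Classical RK4 integration of xdot = y, ydot = -omega^2 x.
omega = 0.5

def f(x, y):
    return y, -omega**2 * x

def rk4_step(x, y, h):
    k1 = f(x, y)
    k2 = f(x + h / 2 * k1[0], y + h / 2 * k1[1])
    k3 = f(x + h / 2 * k2[0], y + h / 2 * k2[1])
    k4 = f(x + h * k3[0], y + h * k3[1])
    x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y

def energy(x, y):
    return 0.5 * omega**2 * x**2 + 0.5 * y**2

x, y = 1.0, 0.0
e0 = energy(x, y)
for _ in range(10000):          # integrate up to t = 10 with h = 0.001
    x, y = rk4_step(x, y, 0.001)
print(abs(energy(x, y) - e0))   # tiny RK4 drift, far below 1e-8
```

The trajectory stays on the ellipse E(x, y) = E(x₀, y₀) up to the (very small) discretisation error of the scheme.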

5.1.2 Damped Harmonic Oscillator

The damped harmonic oscillator is given by the following equation or system of equations, respectively.

Example 5.2 (Damped harmonic oscillator). The damped harmonic oscillator equation ẍ + 2ρẋ + ω²x = 0, with ρ, ω > 0 and ρ < ω, is equivalent to the system:

ẋ = y
ẏ = −2ρy − ω²x

Figure 5.3: Damped harmonic oscillator Figure 5.4: Approaching the fixed point

The explicit solution of this problem is given by x(t) = e^{−ρt}(C₁ cos(ω₁t) + C₂ sin(ω₁t)), where ω₁ = √(ω² − ρ²). This solution can be rewritten as:

x(t) = Ae^{−ρt} sin(ω₁t + ϕ)

Similar to the previous model, the solution x(t) has infinitely many zeros and constant period T = 2π/ω₁. Thus, this equation fulfils Definition 3.1. The difference from the model in Example 5.1 is that the amplitude Ae^{−ρt} is exponentially decreasing. For a picture of the damped oscillator with x₀ = 1 and parameters ω = 1 and ρ = 0.1, see Figure 5.3 [17, pp. 202-203].

The solution of the system of equations is given by

x(t) = Ae^{−ρt} sin(ω₁t + ϕ)
y(t) = √(ω₁² + ρ²) Ae^{−ρt} cos(ω₁t + ϕ + ψ)

with a suitable angle ψ. This implies that the solution approaches the origin for t tending to infinity, as can be observed in Figure 5.4 [17, pp. 537-538]. The solution trajectory starts at (x₀, y₀) = (1, 0) with the same parameters as mentioned above. The origin (0, 0) is the only fixed point of this system, which turns out to be asymptotically stable for this problem by consulting the eigenvalues. As a consequence the trajectories cannot form a closed orbit in the phase space and, therefore, Definition 3.5 of a deterministic oscillator does not hold for this problem.

Applying the method of Lyapunov for the system energy (5.1) gives that:

dE/dt = (∂E/∂x)ẋ + (∂E/∂y)ẏ = ω²xy + y(−2ρy − ω²x) = −2ρy²

Since ρ > 0, this implies that E is a strict Lyapunov function. Theorem 3.9 gives that the fixed point is asymptotically stable. The damped oscillator does not conserve the energy, except at the fixed point (0, 0). Thus, all trajectories approach the origin as t tends to infinity, even for arbitrarily small damping [17, p. 546].
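The dissipation dE/dt = −2ρy² ≤ 0 can be made concrete with a few lines of Python (a sketch with illustrative parameters, not thesis code): along any trajectory the energy can only shrink, and numerically it does so roughly like e^{−2ρt}.

```python
# Sketch: along trajectories of the damped oscillator the energy
# E = (1/2) omega^2 x^2 + (1/2) y^2 satisfies dE/dt = -2 rho y^2 <= 0.
# Checked with a simple explicit Euler integration.
rho, omega, h = 0.1, 1.0, 1e-3
x, y = 1.0, 0.0

def energy(x, y):
    return 0.5 * omega**2 * x**2 + 0.5 * y**2

e0 = energy(x, y)
for _ in range(20000):          # integrate up to t = 20
    x, y = x + h * y, y + h * (-2 * rho * y - omega**2 * x)

print(energy(x, y) < e0)        # True: the energy has dissipated
```

After t = 20 the energy has dropped to a small fraction of its initial value, in line with the asymptotic stability of the origin.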

5.1.3 Van der Pol Oscillator

The Van der Pol oscillator is described by a nonlinear system of two equations and has been introduced as Model 2.3 in Chapter 2. This model arises as a special case of the famous FitzHugh-Nagumo neuron model 2.2 for a specific choice of parameters.

Figure 5.5 shows the solution function for x0 = 0.5 and parameter µ = 3.

Figure 5.5: Van der Pol oscillator Figure 5.6: Approaching the limit cycle

The system has only one fixed point at the origin. By considering the eigenvalues of the linearisation matrix A it follows that the fixed point is unstable. However, the Van der Pol model fulfils the assumptions of the Theorem of Liénard 3.12, which gives the existence of a stable limit cycle around the fixed point (0, 0) for all admissible values of the parameter µ. It follows that the Van der Pol model satisfies Definition 3.5 of a deterministic oscillator and, as a consequence, it fulfils Definition 3.1 as well [27, p. 272].

When considering the energy one obtains the following equation:

dE/dt = (∂E/∂x)ẋ + (∂E/∂y)ẏ = µy²(1 − x²)

This implies that dE/dt < 0 for |x| > 1 and dE/dt > 0 for |x| < 1. Thus, the function E is no Lyapunov function and Theorem 3.9 cannot be applied. Nevertheless, one can observe that energy is generated at low amplitudes, i.e., small-amplitude oscillations are amplified, while energy is dissipated at high amplitudes such that large-amplitude oscillations are damped down. As a result, there exists a state at which energy dissipation and energy generation balance and, as a consequence, a stable limit cycle exists [31, p. 15667].

Figure 5.6 shows the solution trajectory for (x0, y0) = (0.5, 0) and parameter µ = 3.

The Theorem of Liénard 3.12 only gives the existence of a stable limit cycle. Since the Van der Pol oscillator is a nonlinear model there is no information about the exact shape of this cycle. For small values of the parameter µ one can show that the uniquely existing limit cycle is close to a circle with radius 2, centered around the origin [17, pp. 571-573].
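The near-circular shape for small µ can be verified numerically. A Python sketch (parameters, step size, and the plain Euler scheme are illustrative choices, not from the thesis): integrate long enough for the trajectory to settle on the cycle, then read off the radius.

```python
# For small mu the Van der Pol limit cycle is close to a circle of
# radius 2.  Sketch: Euler integration of
#   xdot = y,  ydot = mu (1 - x^2) y - x
# from a point inside the cycle.
mu, h = 0.2, 1e-3
x, y = 0.5, 0.0
for _ in range(200000):                  # run until t = 200
    x, y = x + h * y, y + h * (mu * (1 - x**2) * y - x)

radius = (x**2 + y**2) ** 0.5
print(radius)                            # close to 2
```

Starting inside the cycle the small-amplitude oscillation is amplified, as predicted by the energy balance above, until the trajectory locks onto the cycle of radius approximately 2.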

The Duffing Van der Pol oscillator introduced in Model 2.4 of Chapter 2 differs from the classical Van der Pol model 2.3 by an additional nonlinear term. It can be shown that the Duffing model satisfies the assumptions of the Theorem of Liénard as well [21, p. 5]. When studying the more general FitzHugh-Nagumo model 2.2, the existence of a limit cycle for a special choice of parameters can be shown by applying the Theorem of Poincaré-Bendixson 3.11, which plays an important role in this case [8].

In the literature one can find a variety of different versions of the Duffing Van der Pol oscillator. The following example demonstrates a system that does not have a periodic solution in the phase plane.

Example 5.3 (Duffing Van der Pol oscillator 2). The Duffing Van der Pol equation ẍ + 2ρẋ + x + αx³ + βx²ẋ = 0, with ρ, α, β > 0 and ρ < 1, is equivalent to the system:

ẋ = y
ẏ = −x − 2ρy − αx³ − βx²y

Linearising the Duffing model at the origin results in the damped equation introduced in Example 5.2. Therefore, the system trajectories of the Duffing Van der Pol equation show a similar behaviour to those of the damped system. This can be observed by comparing Figures 5.7 and 5.8 with Figures 5.3 and 5.4, respectively.

Figure 5.7: Duffing VdP oscillator 2 Figure 5.8: Approaching the fixed point

Figure 5.8 shows the solution trajectory of the Duffing model for (x₀, y₀) = (0.5, 0) and parameters ρ = 0.1, α = 20 and β = 0.1. Figure 5.7 provides a picture of the corresponding solution function with initial value x₀ = 0.5.

5.1.4 Nonlinear Oscillator

As a last example let the following nonlinear system of equations be introduced.

Example 5.4 (Nonlinear oscillator). The nonlinear oscillator is given by the two-dimensional system of equations:

ẋ = x − y − x(x² + y²)
ẏ = x + y − y(x² + y²)

Like the Van der Pol oscillator, this model converges towards a stable limit cycle. As can be observed in Figure 5.10, the nonlinear oscillator settles on the unit circle as time tends to infinity [33, p. 2023].

Figure 5.9: Nonlinear oscillator Figure 5.10: Approaching the limit cycle
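The attraction to the unit circle is transparent in polar coordinates, where the system decouples into ṙ = r(1 − r²) and θ̇ = 1: every trajectory with r(0) > 0 converges to r = 1. A short Python check in the Cartesian form (Euler scheme and parameters are illustrative, not thesis code):

```python
import math

# Euler check that the nonlinear oscillator
#   xdot = x - y - x (x^2 + y^2),  ydot = x + y - y (x^2 + y^2)
# settles on the unit circle.  In polar coordinates the radius obeys
# rdot = r (1 - r^2), so r = 1 is globally attracting for r(0) > 0.
h = 1e-3
x, y = 0.1, 0.0
for _ in range(20000):                   # run until t = 20
    r2 = x**2 + y**2
    x, y = x + h * (x - y - x * r2), y + h * (x + y - y * r2)

print(math.hypot(x, y))                  # close to 1
```

The closed-form radial solution r(t) = (1 + (1/r₀² − 1)e^{−2t})^{−1/2} shows that the convergence is exponentially fast, which the numerical trajectory reproduces.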

5.2 Stochastic Applications

This section deals with stochastic versions of the oscillatory models described in Section 5.1. The aim is to apply the stochastic tools developed in Chapter 4 to a specific choice of these models and to provide a simulation of sample trajectories. Additionally, the limits of the existing theory are determined and open possibilities for further research are pointed out.

The main focus lies on Theorems 5.5 and 5.12. The first one states that the stochastic harmonic oscillator introduced in Example 4.3 satisfies Definition 4.2 of a stochastic oscillator. The second theorem states that a specific stochastic version of the nonlinear oscillator established in the previous section fits into the framework of a stochastic oscillator in the sense of Definition 4.17.

A simulation of sample trajectories is provided by implementing the Euler-Maruyama method in MATLAB. Some figures show sample solutions x(t) evolving in time while others give an idea of how the system trajectories (x(t), y(t)) proceed in the phase plane as time elapses. Additionally, an exact simulation method applied to the stochastic harmonic oscillator equation is introduced. The exact method is described to demonstrate that the Euler-Maruyama simulation does not preserve the structure of the oscillatory model but overestimates the system energy.

5.2.1 Stochastic Harmonic Oscillator

A stochastic version of the harmonic oscillator with constant diffusion term σ has been introduced in Example 4.3.

Euler-Maruyama method

Figures 5.11 and 5.12 provide sample trajectories of the system generated by the method of Euler-Maruyama.

Figure 5.11: Stochastic harmonic oscillator Figure 5.12: Phase plane 1

For the Euler-Maruyama simulation the following data has been used:

1. Time interval: t ∈ [0, 70]
2. Number of time steps: N = 10000
3. Initial point: (x₀, y₀) = (1, 0)
4. Noise intensity: σ = 0.1
5. Additional parameter: ω = 1
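The scheme itself can be sketched as follows (the thesis implementation is in MATLAB; this Python version only mirrors the recursion, and all names are illustrative). Each step adds the drift times Δt and a Gaussian increment with standard deviation √Δt:

```python
import math
import random

random.seed(42)

# Euler-Maruyama sketch for the stochastic harmonic oscillator
#   dx = y dt,  dy = -omega^2 x dt + sigma dW,
# with the simulation data listed above.
def euler_maruyama(T=70.0, N=10000, x0=1.0, y0=0.0, sigma=0.1, omega=1.0):
    dt = T / N
    sqdt = math.sqrt(dt)
    x, y = x0, y0
    path = [(0.0, x, y)]
    for n in range(1, N + 1):
        dW = random.gauss(0.0, sqdt)               # Wiener increment
        x, y = x + dt * y, y + dt * (-omega**2 * x) + sigma * dW
        path.append((n * dt, x, y))
    return path

path = euler_maruyama()
print(len(path), path[-1][0])   # N + 1 grid points, final time T = 70
```

Note that the tuple assignment evaluates both right-hand sides with the old (x, y), as the scheme requires.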

Exact simulation method

The stochastic harmonic oscillator equation given in Example 4.3 can be rewritten in the following form:

dX(t) = AX(t) dt + G dW(t), t ∈ R (5.2)

with X(t) = (x(t), y(t))ᵀ, A = ( 0  1 ; −1  0 ) and G = (0, σ)ᵀ. The explicit solution of System (5.2) for s ≤ t can be written as:

X(t) = e^{A(t−s)} X(s) + Z,  Z ∼ N((0, 0)ᵀ, C(t − s)), (5.3)

C(t) = (σ²/2) ( t − cos(t)sin(t)  sin²(t) ; sin²(t)  t + cos(t)sin(t) ),  e^{At} = ( cos(t)  sin(t) ; −sin(t)  cos(t) )

A solution formula for systems of the form (5.2) can be found in [28, Theorem 3.1]. In addition, the Itô integral Z in (5.3) is normally distributed with zero mean and covariance matrix C(t) that solves an ODE [28, Theorem 3.2]. These facts imply the solution representation given in (5.3). For the discrete time steps tₙ = Δt·n, where Δt is the endpoint of the time interval divided by the number of time steps, (5.3) reads as follows:

X(tₙ₊₁) = e^{AΔt} X(tₙ) + Z,  Z ∼ N((0, 0)ᵀ, C(Δt)) (5.4)

From Equation (5.4) it is clear that both the matrix exponential and the covariance matrix only have to be computed once to obtain an exact simulation of the solution at the discrete time points. The method works analogously to the exact numerical simulation procedure described in [16].
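One update step of (5.4) can be sketched in Python (the thesis code is in MATLAB; `exact_step` is an illustrative name, and the 2×2 Cholesky factorisation of C(Δt) is written out by hand under the formulas above):

```python
import math
import random

random.seed(0)

# One exact update X(t_{n+1}) = e^{A dt} X(t_n) + Z with Z ~ N(0, C(dt)),
# using the closed-form matrix exponential (a rotation) and covariance.
def exact_step(x, y, dt, sigma):
    c, s = math.cos(dt), math.sin(dt)
    # C(dt) = sigma^2/2 * ( dt - c*s   s*s ; s*s   dt + c*s )
    c11 = sigma**2 / 2 * (dt - c * s)
    c12 = sigma**2 / 2 * s * s
    c22 = sigma**2 / 2 * (dt + c * s)
    # hand-written Cholesky factor of the 2x2 covariance matrix
    l11 = math.sqrt(c11)
    l21 = c12 / l11 if l11 > 0 else 0.0
    l22 = math.sqrt(max(c22 - l21**2, 0.0))
    g1, g2 = random.gauss(0, 1), random.gauss(0, 1)
    z1, z2 = l11 * g1, l21 * g1 + l22 * g2
    return c * x + s * y + z1, -s * x + c * y + z2

# Sanity check with sigma = 0: the exact step is then a pure rotation and
# preserves the energy (x^2 + y^2)/2 exactly, whereas explicit
# Euler-Maruyama with sigma = 0 inflates it by a factor (1 + dt^2) per step.
x, y, dt = 1.0, 0.0, 0.01
for _ in range(1000):
    x, y = exact_step(x, y, dt, 0.0)
print(x**2 + y**2)   # stays at 1 up to rounding
```

This deterministic sanity check makes the structural difference to Euler-Maruyama visible: the exact one-step map contains the rotation exactly, so no spurious energy is injected by the discretisation.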

Figure 5.13: Stochastic harmonic oscillator Figure 5.14: Phase plane 2

Figures 5.13 and 5.14 provide sample solution trajectories of the system generated by the exact simulation method. For the exact simulation method the following data has been used:

1. Time interval: t ∈ [0, 70]
2. Number of time steps: N = 10000
3. Initial point: (x₀, y₀) = (1, 0)
4. Noise intensity: σ = 0.1
5. Additional parameter: ω = 1

Figure 5.15 provides a picture of a sample trajectory evolving in the phase plane that is generated by the exact method. Figure 5.16 shows a sample path produced by the Euler-Maruyama method. For both trajectories the same data as mentioned above has been used, except that time runs until T = 300. From both figures one can observe that the sample paths tend to move outwards in space as time elapses. However, the Euler-Maruyama trajectory has moved too far away from the origin, i.e., this method overestimates the energy of the system.

Figure 5.15: Exact method Figure 5.16: Euler-Maruyama method

System energy

Indeed, Figures 5.14 and 5.15 give the impression that the mean energy of the harmonic equation increases slowly. The application of Equation (4.19) for the mean energy evolution to the stochastic harmonic oscillator given in Example 4.3 results in:

(d/dt) E(E(t)) = E(x(t)y(t) − x(t)y(t)) + ½σ² = ½σ²

This in turn implies that

E(E(t)) = E(E(0)) + ∫₀ᵗ ½σ² ds = ½(x₀² + y₀²) + ½σ²t,

where x(0) = x₀ and y(0) = y₀ are deterministic initial values. What can be observed here is that due to the diffusion parameter σ the expected energy is no longer constant in time. Instead of preserving the energy, as is the case for the deterministic harmonic oscillator, randomness causes an increase of the mean energy over time. Therefore, the solution trajectory tends to move outwards in space.
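The linear growth E(E(t)) = ½(x₀² + y₀²) + ½σ²t can be checked by Monte Carlo, sampling directly from the exact solution representation (5.3). A hedged Python sketch (sample size and parameters are illustrative; the Cholesky factor of C(t) is written out by hand):

```python
import math
import random

random.seed(7)

# Monte Carlo check of E(E(t)) = (x0^2 + y0^2)/2 + sigma^2 t / 2 via the
# exact representation (5.3): X(t) = e^{At} X0 + Z with Z ~ N(0, C(t)).
sigma, t, x0, y0 = 0.1, 10.0, 1.0, 0.0
c, s = math.cos(t), math.sin(t)
mx, my = c * x0 + s * y0, -s * x0 + c * y0      # mean e^{At} X0
c11 = sigma**2 / 2 * (t - c * s)
c12 = sigma**2 / 2 * s * s
c22 = sigma**2 / 2 * (t + c * s)
l11 = math.sqrt(c11)                             # Cholesky factor of C(t)
l21 = c12 / l11
l22 = math.sqrt(c22 - l21**2)

n, acc = 20000, 0.0
for _ in range(n):
    g1, g2 = random.gauss(0, 1), random.gauss(0, 1)
    x = mx + l11 * g1
    y = my + l21 * g1 + l22 * g2
    acc += 0.5 * (x * x + y * y)

print(acc / n)   # approx 0.5 + 0.5 * sigma^2 * t = 0.55
```

Since the samples come from the exact distribution, the only deviation from 0.55 is the Monte Carlo error of order n^{−1/2}.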

Stochastic oscillator 1

A proof that the stochastic harmonic oscillator introduced in the previous chapter satisfies Definition 4.2 of a stochastic oscillator is given next. It is shown that the solution of the system introduced in Example 4.3 almost surely has infinitely many simple zeros. The statement and the corresponding proof can be found in [28, Theorem 4.1].

Theorem 5.5 (Stochastic oscillator 1). Let x(t) be the solution of the stochastic harmonic oscillator equation of Example 4.3 for ω = 1 with initial values x(0) = 1 and y(0) = 0. Then x(t) is a stochastic oscillator in the sense of Definition 4.2.

Proof. The proof of Theorem 5.5 is organised step by step.

STEP 1: Rewrite the solution x(t)

By the trigonometric addition formula

sin(t − s) = sin(t) cos(s) − sin(s) cos(t) the solution x(t) of the stochastic harmonic oscillator equation, given in (4.4), can be rewritten in the form:

x(t) = cos(t) − σ cos(t) ∫₀ᵗ sin(s) dW(s) + σ sin(t) ∫₀ᵗ cos(s) dW(s)

STEP 2: Consider x(t) at discrete points in time

Let the discrete time steps tₙ = (2n + ½)π, for n = 1, 2, ..., be specified. Since cos(tₙ) = 0 and sin(tₙ) = 1, the solution x(t) at an instant of time tₙ is given by:

x(tₙ) = σ ∫₀^{tₙ} cos(s) dW(s)

STEP 3: Rewrite x(tn) as a sum of iid random variables

Let the random variables Zᵢ, for i = 1, 2, ..., be defined increment-wise as:

Zᵢ = σ ∫_{tᵢ₋₁}^{tᵢ} cos(s) dW(s) (5.5)

The summation of those random variables results in a new representation for x(tn):

x(tₙ) = ∑ᵢ₌₁ⁿ Zᵢ

Since the integrand of the Itô-integral (5.5) is a deterministic function it holds that:

Zᵢ ∼ N(0, E(Zᵢ²))

The Itô-isometry implies the following:

E(Zᵢ²) = σ² ∫_{tᵢ₋₁}^{tᵢ} cos²(s) ds = σ² (½s + ½ sin(s)cos(s)) |_{(2(i−1)+½)π}^{(2i+½)π} = σ²π

As a result, the random variables Zᵢ ∼ N(0, σ²π) for i = 1, 2, ... are identically distributed. Furthermore, the independence of the increments of a Wiener process implies that two random variables Zᵢ and Zⱼ for i ≠ j, i, j = 1, 2, ..., are independent as well. This results in a sequence {Zᵢ}ᵢ≥₁ of iid random variables.
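The variance σ²π can be checked by simulating the Itô integral (5.5) with a discretised stochastic sum. A Python sketch (illustrative, not part of the proof; the grid size and sample count are arbitrary choices, and the formula for tₙ is extrapolated to n = 0 to obtain the interval [t₀, t₁] of length 2π):

```python
import math
import random

random.seed(3)

# Monte Carlo check that Z_1 = sigma * int_{t_0}^{t_1} cos(s) dW(s)
# has variance sigma^2 * pi (the integration interval has length 2*pi).
sigma = 0.1
a = 0.5 * math.pi                 # t_0 = (2*0 + 1/2)*pi
m = 200                           # subintervals of the Ito sum
ds = 2 * math.pi / m

samples = []
for _ in range(4000):
    z = 0.0
    for k in range(m):
        s = a + k * ds
        z += sigma * math.cos(s) * random.gauss(0.0, math.sqrt(ds))
    samples.append(z)

var = sum(z * z for z in samples) / len(samples)
print(var)                        # approx sigma^2 * pi = 0.0314
```

The mean of each sample is zero and the empirical variance matches σ²π up to Monte Carlo error, consistent with the Itô isometry computation above.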

STEP 4: Apply a theorem on the limit of sums of iid random variables

For instance, the law of the iterated logarithm implies that the sequence {x(tₙ)}ₙ≥₁ must contain elements that switch their signs infinitely often as n tends to infinity.

STEP 5: Use the continuity of x(t)

A Wiener process defined on a probability space (Ω, F, P) is continuous for almost all ω ∈ Ω, which implies that x(t) is continuous almost surely as well. This in turn gives that the solution x(t) of the stochastic harmonic oscillator equation has almost surely infinitely many zeros on [0, ∞).

STEP 6: Show the simplicity of the zeros

A proof of this last step can be found in [28, Theorem 3.4].

For the stochastic harmonic oscillator equation one can additionally specify regular intervals where the zeros are expected to occur. This has been stated in Theorem 4.4 and Theorem 4.5 of Chapter 4.

System Transformation

As described in Subsection 4.1.2, the application of Girsanov's Theorems 4.8 and 4.9 could enable the conclusion that other stochastic oscillation models fit into the framework of Definition 4.2 as well. The big difficulty here is to show the martingale property of the exponential process ε_h with respect to the corresponding stochastic process h.

In the case of the stochastic damped harmonic oscillator stated in Example 5.6, the term κ(x(t), y(t)) of Definition 4.10 is given by κ = 2ρy(t) + x(t), which implies that:

h(t) = (−κ + x(t))/σ = −2ρy(t)/σ

This in turn yields the following Novikov condition:

E(exp(½ ∫₀ᵀ h²(t) dt)) = E(exp((2ρ²/σ²) ∫₀ᵀ y²(t) dt)) < ∞

Since y(t) is a stochastic process, the process h is not bounded for all ω ∈ Ω, even though a compact time interval I = [0, T] ⊂ R is taken into consideration. As mentioned in Subsection 4.1.2, this condition is very difficult to verify. In this work its validity remains an open question.

For the stochastic Van der Pol oscillator formulated in Example 5.8, the nonlinear term κ(x(t), y(t)) from Definition 4.10 reads as κ = µ(x²(t) − 1)y(t) + x(t), which yields that:

h(t) = (−κ + x(t))/σ = −µ(x²(t) − 1)y(t)/σ

For the same reasons the validation of the martingale property of the corresponding exponential process, which is required in Theorems 4.8 and 4.9, is left open here. If the application of Girsanov's Theorem were possible for this problem, the stochastic Van der Pol model would satisfy Definition 4.2 of a stochastic oscillator.

5.2.2 Stochastic Damped Harmonic Oscillator

A stochastic version of the damped harmonic oscillator is given in the next example.

Example 5.6 (Stochastic damped harmonic oscillator). The stochastic damped harmonic oscillator equation ẍ(t) + 2ρẋ(t) + x(t) = σẆ(t), where σ > 0, 0 < ρ < 1, and W is a one-dimensional Wiener process, is a formal notation for the system:

dx(t) = y(t) dt
dy(t) = [−2ρy(t) − x(t)] dt + σ dW(t)

Euler-Maruyama method

Figures 5.17 and 5.18 provide sample trajectories of the system generated by the method of Euler-Maruyama.

Figure 5.17: Stochastic damped oscillator Figure 5.18: Phase plane 3

For the simulation the following data has been used:

1. Time interval: t ∈ [0, 50]
2. Number of time steps: N = 10000
3. Initial point: (x₀, y₀) = (1, 0)
4. Noise intensity: σ = 0.1
5. Additional parameter: ρ = 0.1

System energy

While the energy of the harmonic equation seems to increase slowly as time evolves, the energy of the damped model appears to decrease drastically, as demonstrated in Figure 5.18. It can be shown that the mean energy of the damped equation decreases exponentially. A proof can be found in [29].

To obtain a further stochastic version, let the damped harmonic oscillator introduced in Example 5.2 be perturbed by multiplicative noise. The stochastic integral is defined in the Stratonovich sense.

Example 5.7 (Stochastic damped harmonic oscillator 2). The stochastic damped harmonic oscillator equation ẍ(t) + 2ρẋ(t) + x(t) = σx(t) ◦ Ẇ(t), where σ > 0, ρ ∈ R and W is a one-dimensional Wiener process, is a formal notation for the system:

dx(t) = y(t) dt
dy(t) = [−2ρy(t) − x(t)] dt + σx(t) ◦ dW(t)

Euler-Maruyama method

Figures 5.19 and 5.20 provide sample trajectories of the system generated by the method of Euler-Maruyama. For this equation the Itô and Stratonovich versions coincide due to the special form of the diffusion term.

Figure 5.19: Stochastic damped oscillator 2 Figure 5.20: Phase plane 4

For the simulation the following data has been used:

1. Time interval: t ∈ [0, 50]
2. Number of time steps: N = 10000
3. Initial point: (x₀, y₀) = (1, 0)
4. Noise intensity: σ = 0.1
5. Additional parameter: ρ = 0.1

Lyapunov exponents

The stochastic damped oscillator introduced in Example 5.7 can be rewritten as a system of type (4.20) with the following matrices:

$$A = \begin{pmatrix} 0 & 1 \\ -1 & -2\rho \end{pmatrix}, \qquad \Sigma = \begin{pmatrix} 0 & 0 \\ \sigma & 0 \end{pmatrix}$$

Imkeller and Lederer [18] give an explicit description of the Lyapunov exponents of this type of stochastic damped equation. For this purpose, they use a method of decomposing the solution into its radial and angular part. Furthermore, they consider the Lyapunov exponents as functions of the noise parameter σ and the damping parameter ρ and establish corresponding specific local and global properties. In [19] they extend their work by adding a parameter α > 0, representing the strength of the restoring force, to the matrix A as follows:

$$A = \begin{pmatrix} 0 & 1 \\ -\alpha & -2\rho \end{pmatrix}$$

They provide a global stability diagram in which the new parameter α also plays a role. Additionally, they study the rotation number.

5.2.3 Stochastic Van der Pol Oscillator

A stochastic form of the Van der Pol oscillator, which has been introduced in Model 2.3, is described by the following SDE.

Example 5.8 (Stochastic Van der Pol (VdP) oscillator). The stochastic VdP oscillator equation
$$\ddot{x}(t) + \mu(x^2(t) - 1)\,\dot{x}(t) + x(t) = \sigma\dot{W}(t),$$
where σ, µ > 0 and W is a one-dimensional Wiener process, is a formal notation for the system:
$$\begin{aligned} dx(t) &= y(t)\,dt \\ dy(t) &= [-\mu(x^2(t) - 1)\,y(t) - x(t)]\,dt + \sigma\,dW(t) \end{aligned}$$

Euler-Maruyama method

Figures 5.21 and 5.22 provide sample trajectories of the system, generated by the Euler-Maruyama method.

Figure 5.21: Stochastic VdP oscillator
Figure 5.22: Phase plane 5

For the simulation the following data has been used:

1. Time interval: t ∈ [0, 40]
2. Number of time steps: N = 10000
3. Initial point: (x0, y0) = (0.5, 0)
4. Noise intensity: σ = 0.1
5. Additional parameter: µ = 3
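As with the previous models, the thesis runs this simulation in MATLAB; a minimal Python sketch of the same Euler-Maruyama iteration for Example 5.8 (names and seed are illustrative) could look as follows.

```python
import numpy as np

def euler_maruyama_vdp(x0=0.5, y0=0.0, T=40.0, N=10000, sigma=0.1, mu=3.0, seed=0):
    """Euler-Maruyama scheme for the stochastic Van der Pol system
        dx = y dt,  dy = (-mu*(x**2 - 1)*y - x) dt + sigma dW."""
    rng = np.random.default_rng(seed)
    dt = T / N
    x = np.empty(N + 1)
    y = np.empty(N + 1)
    x[0], y[0] = x0, y0
    for n in range(N):
        dW = rng.normal(0.0, np.sqrt(dt))  # Wiener increment ~ N(0, dt)
        x[n + 1] = x[n] + y[n] * dt
        y[n + 1] = y[n] + (-mu * (x[n]**2 - 1.0) * y[n] - x[n]) * dt + sigma * dW
    return x, y

x, y = euler_maruyama_vdp()
```

After a short transient the trajectory fluctuates around the deterministic limit cycle, whose amplitude in x is roughly 2, giving the noisy closed orbit seen in the phase plane of Figure 5.22.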

To obtain a stochastic version of the Duffing Van der Pol equation introduced in Example 5.3, let this system be perturbed by multiplicative noise. Noise therefore enters the equation in the same way as for the second damped model given in Example 5.7.

The stochastic integral is defined as a Stratonovich type integral.

Example 5.9 (Stochastic Duffing Van der Pol oscillator 2). The Duffing VdP equation
$$\ddot{x}(t) + x(t) + 2\rho\,\dot{x}(t) + \alpha x^3(t) + \beta x^2(t)\,\dot{x}(t) = \sigma x(t) \circ \dot{W}(t),$$
where σ, α, β > 0, ρ ∈ ℝ and W is a one-dimensional Wiener process, is a formal notation for the system:
$$\begin{aligned} dx(t) &= y(t)\,dt \\ dy(t) &= [-x(t) - 2\rho\,y(t) - \alpha x^3(t) - \beta x^2(t)\,y(t)]\,dt + \sigma x(t) \circ dW(t) \end{aligned}$$

Euler-Maruyama method

Figures 5.23 and 5.24 provide sample trajectories of the system, generated by the Euler-Maruyama method. For this equation, too, the Itô and Stratonovich versions coincide due to the special form of the diffusion term.

Figure 5.23: Stochastic Duffing VdP
Figure 5.24: Phase plane 6

For the simulation the following data has been used:

1. Time interval: t ∈ [0, 40]
2. Number of time steps: N = 10000
3. Initial point: (x0, y0) = (0.5, 0)
4. Noise intensity: σ = 0.1
5. Additional parameters: ρ = 0.1, α = 20, β = 0.1

Lyapunov exponents

The SDE for the damped equation introduced in Example 5.7 is obtained by linearising the specific stochastic version of the Duffing Van der Pol oscillator established in Example 5.9 at the origin. The Duffing equation contains additional nonlinear terms, and noise enters the system in the same way as for the damped equation. Baxendale studies the linearisation of the Duffing Van der Pol equation along a trajectory {(x(t), y(t)), t ∈ ℝ} in the phase space. In [3] and [4] he obtains stability results by providing estimates of the Lyapunov exponents for the resulting linear system.

5.2.4 Stochastic Nonlinear Oscillator

The following model defines a stochastic version of the system given in Example 5.4.

Example 5.10 (Stochastic nonlinear oscillator). A stochastic version of the nonlinear oscillator with σ > 0 and one-dimensional Wiener process W is given by the system:
$$\begin{aligned} dx(t) &= \big(x(t) - y(t) - x(t)(x^2(t) + y^2(t))\big)\,dt \\ dy(t) &= \big(x(t) + y(t) - y(t)(x^2(t) + y^2(t))\big)\,dt + \sigma\,dW(t) \end{aligned}$$

Euler-Maruyama method

Figure 5.25: Stochastic nonlinear oscillator
Figure 5.26: Phase plane 7

Figures 5.25 and 5.26 provide sample trajectories of the system, generated by the Euler-Maruyama method. For the simulation the following data has been used:

1. Time interval: t ∈ [0, 30]
2. Number of time steps: N = 10000
3. Initial point: (x0, y0) = (2, 1)
4. Noise intensity: σ = 0.1
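An Euler-Maruyama sketch in Python for Example 5.10 (the thesis uses MATLAB; names and seed here are illustrative). Since the deterministic part has the limit cycle x² + y² = 1, the simulated radius should fluctuate around 1 once the transient has died out.

```python
import numpy as np

def euler_maruyama_nonlinear(x0=2.0, y0=1.0, T=30.0, N=10000, sigma=0.1, seed=0):
    """Euler-Maruyama scheme for
        dx = (x - y - x*(x^2+y^2)) dt,
        dy = (x + y - y*(x^2+y^2)) dt + sigma dW."""
    rng = np.random.default_rng(seed)
    dt = T / N
    x = np.empty(N + 1)
    y = np.empty(N + 1)
    x[0], y[0] = x0, y0
    for n in range(N):
        dW = rng.normal(0.0, np.sqrt(dt))  # Wiener increment ~ N(0, dt)
        r2 = x[n]**2 + y[n]**2
        x[n + 1] = x[n] + (x[n] - y[n] - x[n] * r2) * dt
        y[n + 1] = y[n] + (x[n] + y[n] - y[n] * r2) * dt + sigma * dW
    return x, y

x, y = euler_maruyama_nonlinear()
r = np.sqrt(x**2 + y**2)  # radius process; hovers near the limit cycle r = 1
```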

To obtain a second stochastic version, let the nonlinear oscillator introduced in Example 5.4 be perturbed by multiplicative noise. The resulting SDE is interpreted in the Stratonovich sense.

Example 5.11 (Stochastic nonlinear oscillator 2). A stochastic version of the nonlinear oscillator with σ > 0 is given by the system:
$$\begin{aligned} dx(t) &= \big(x(t) - y(t) - x(t)(x^2(t) + y^2(t))\big)\,dt + \sigma x(t) \circ dW(t) \\ dy(t) &= \big(x(t) + y(t) - y(t)(x^2(t) + y^2(t))\big)\,dt + \sigma y(t) \circ dW(t) \end{aligned}$$

The one-dimensional Wiener process W is defined on the canonical sample space with respect to the Wiener shift operator described in Section 4.2.

Euler-Maruyama method

Figures 5.27 and 5.28 provide sample trajectories of the system, generated by the Euler-Maruyama method.

Figure 5.27: Stochastic nonlinear oscillator
Figure 5.28: Phase plane 8

For the simulation the following data has been used:

1. Time interval: t ∈ [0, 30]
2. Number of time steps: N = 50000
3. Initial point: (x0, y0) = (1, 2)
4. Noise intensity: σ = 0.01

Stochastic oscillator 2

Here it is proven that the stochastic nonlinear oscillator introduced in Example 5.11 satisfies Definition 4.17 of a stochastic oscillator, i.e., the SDE has a random periodic solution as introduced in Definition 4.16. The following results are based on [33].

Theorem 5.12 (Stochastic oscillator 2). The stochastic nonlinear oscillator equation given in Example 5.11 is a stochastic oscillator in the sense of Definition 4.17 for σ = 1.

Proof. The proof of Theorem 5.12 is organised step by step.

STEP 1: Transform to a cylinder

By introducing the polar coordinates x = r cos(2πα) and y = r sin(2πα) with

$$r(x, y) = \sqrt{x^2 + y^2} \tag{5.6}$$
$$\alpha(x, y) = \frac{1}{2\pi}\arctan\Big(\frac{y}{x}\Big) \tag{5.7}$$
the stochastic nonlinear oscillator of Example 5.11, given on the plane ℝ², can be transformed into the following system of decoupled one-dimensional equations on the cylinder [0, 1] × ℝ:

$$dr(t) = (r(t) - r^3(t))\,dt + r(t) \circ dW(t) \tag{5.8}$$
$$d\alpha(t) = \frac{1}{2\pi}\,dt \tag{5.9}$$

This can be verified by applying the Stratonovich version of the Itô formula, which corresponds to the chain rule. Let the partial derivatives of the function (5.6) be denoted by r_x and r_y, respectively. Applying Itô's formula to (5.6) results in:

$$dr(t) = \big(r_x(x - y - x(x^2 + y^2)) + r_y(x + y - y(x^2 + y^2))\big)\,dt + (r_x x + r_y y) \circ dW(t)$$

The partial derivatives of r are given by:

$$r_x = \frac{\partial r}{\partial x} = \frac{x}{r}, \qquad r_y = \frac{\partial r}{\partial y} = \frac{y}{r}$$
This gives the following result, using x² + y² = r²:
$$\begin{aligned} dr(t) &= \Big(\frac{x}{r}(x - y - xr^2) + \frac{y}{r}(x + y - yr^2)\Big)\,dt + \Big(\frac{x}{r}\,x + \frac{y}{r}\,y\Big) \circ dW(t) \\ &= \Big(\frac{1}{r}(x^2 + y^2) - r(x^2 + y^2)\Big)\,dt + \frac{1}{r}(x^2 + y^2) \circ dW(t) \\ &= (r(t) - r^3(t))\,dt + r(t) \circ dW(t) \end{aligned}$$

Analogously, let the partial derivatives of the function (5.7) be denoted by αx and αy, respectively. Applying Itô’s formula to (5.7) results in:

$$d\alpha(t) = \big(\alpha_x(x - y - xr^2) + \alpha_y(x + y - yr^2)\big)\,dt + (\alpha_x x + \alpha_y y) \circ dW(t)$$

The partial derivatives of α are given by:

$$\alpha_x = \frac{\partial \alpha}{\partial x} = -\frac{y}{2\pi r^2}, \qquad \alpha_y = \frac{\partial \alpha}{\partial y} = \frac{x}{2\pi r^2}$$

This results in the deterministic equation:

$$d\alpha(t) = \frac{1}{2\pi}\,dt$$
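The chain-rule computation of Step 1 can be double-checked numerically: approximating the gradients of r and α by central finite differences, the transformed drift and diffusion coefficients must reduce to r − r³, r, 1/(2π) and 0, respectively. The following is a sketch only (the test point and step size are arbitrary choices); σ = 1 is assumed, as in Theorem 5.12, and arctan2 is used in place of arctan(y/x) for robustness.

```python
import numpy as np

def transformed_coefficients(x, y, h=1e-6):
    """Push the drift/diffusion of Example 5.11 (sigma = 1) through
    r = sqrt(x^2+y^2) and alpha = arctan2(y, x)/(2 pi) via the chain rule,
    with the gradients of r and alpha approximated by central differences."""
    f = lambda u, v: np.array([u - v - u * (u**2 + v**2),
                               u + v - v * (u**2 + v**2)])  # drift
    g = lambda u, v: np.array([u, v])                       # Stratonovich diffusion
    r = lambda u, v: np.hypot(u, v)
    a = lambda u, v: np.arctan2(v, u) / (2.0 * np.pi)

    def grad(F, u, v):
        return np.array([(F(u + h, v) - F(u - h, v)) / (2 * h),
                         (F(u, v + h) - F(u, v - h)) / (2 * h)])

    gr, ga = grad(r, x, y), grad(a, x, y)
    return gr @ f(x, y), gr @ g(x, y), ga @ f(x, y), ga @ g(x, y)

x0, y0 = 0.8, 0.5  # arbitrary test point away from the origin
drift_r, diff_r, drift_a, diff_a = transformed_coefficients(x0, y0)
R = np.hypot(x0, y0)
errors = [drift_r - (R - R**3), diff_r - R, drift_a - 1.0 / (2.0 * np.pi), diff_a]
```

The check relies on the fact, stated above, that the chain rule is the correct transformation rule for Stratonovich SDEs, so no Itô correction term appears.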

STEP 2: Solve the SDE

For initial values r0 and α0, the solutions of Equations (5.8) and (5.9) are given by:

$$r(t) = \frac{r_0\, e^{t + W(t)}}{\Big(1 + 2r_0^2 \int_0^t e^{2(s + W(s))}\,ds\Big)^{1/2}} \tag{5.10}$$
$$\alpha(t) = \alpha_0 + \frac{t}{2\pi} \tag{5.11}$$

This result can be verified by introducing the following process:

$$Y(t) = \int_0^t e^{2(s + W(s))}\,ds$$

The solution (5.10) can be rewritten as

$$r(t, W, Y) = \frac{r_0\, e^{t + W}}{(1 + 2r_0^2 Y)^{1/2}}$$
with partial derivatives given by:

$$r_t = r, \qquad r_W = r, \qquad r_Y = -\frac{r_0^3\, e^{t + W}}{(1 + 2r_0^2 Y)^{3/2}}$$

The Stratonovich version of the Itô formula implies that:

$$dr(t) = \big(r_t + r_Y\, e^{2(t + W(t))}\big)\,dt + r_W \circ dW(t) = (r(t) - r^3(t))\,dt + r(t) \circ dW(t)$$
since $r_Y\, e^{2(t + W(t))} = -r^3(t)$.

This proves that r(t) given in (5.10) solves the one-dimensional SDE (5.8). Obviously (5.11) is a solution for the deterministic second equation (5.9).
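The three identities used above, r_t = r, r_W = r and r_Y e^{2(t+W)} = −r³, can be spot-checked numerically by treating Y as an independent variable. A sketch with arbitrary (hypothetical) sample values:

```python
import numpy as np

def r_closed(t, W, Y, r0=0.7):
    """Candidate solution (5.10), with Y standing for int_0^t e^{2(s+W_s)} ds."""
    return r0 * np.exp(t + W) / np.sqrt(1.0 + 2.0 * r0**2 * Y)

t, W, Y, h = 0.6, -0.3, 1.4, 1e-6  # arbitrary sample point and difference step
r = r_closed(t, W, Y)
rt = (r_closed(t + h, W, Y) - r_closed(t - h, W, Y)) / (2 * h)
rW = (r_closed(t, W + h, Y) - r_closed(t, W - h, Y)) / (2 * h)
rY = (r_closed(t, W, Y + h) - r_closed(t, W, Y - h)) / (2 * h)

# dr = (r_t + r_Y e^{2(t+W)}) dt + r_W o dW should equal (r - r^3) dt + r o dW
err = (abs(rt - r), abs(rW - r), abs(rt + rY * np.exp(2 * (t + W)) - (r - r**3)))
```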

STEP 3: Show the cocycle property

The solution r(t) given in (5.10) can be interpreted as:

$$\tilde{\varphi}(r_0, t, \omega) = \frac{r_0\, e^{t + W_t(\omega)}}{\Big(1 + 2r_0^2 \int_0^t e^{2(s + W_s(\omega))}\,ds\Big)^{1/2}} \tag{5.12}$$

To verify that the mapping ϕ˜ is a cocycle in the sense of Definition 4.12 the following two statements have to be shown:

$$\tilde{\varphi}(r_0, 0, \omega) = r_0 \tag{5.13}$$
$$\tilde{\varphi}(r_0, t + s, \omega) = \tilde{\varphi}(\tilde{\varphi}(r_0, s, \omega), t, \theta_s\omega) \tag{5.14}$$

The first condition (5.13) follows immediately by setting t = 0 in the cocycle equation (5.12). For the second condition, one can conclude from (5.12) that:

$$\tilde{\varphi}(r_0, t + s, \omega) = \frac{r_0\, e^{s + t + W_{s+t}(\omega)}}{\Big(1 + 2r_0^2 \int_0^{s+t} e^{2(u + W_u(\omega))}\,du\Big)^{1/2}}$$

On the other hand (5.12) implies that:

$$\tilde{\varphi}(\tilde{\varphi}(r_0, s, \omega), t, \theta_s\omega) = \frac{\tilde{\varphi}(r_0, s, \omega)\, e^{t + W_t(\theta_s\omega)}}{\Big(1 + 2\tilde{\varphi}(r_0, s, \omega)^2 \int_0^t e^{2(u + W_u(\theta_s\omega))}\,du\Big)^{1/2}}$$

Replacing $\tilde{\varphi}(r_0, s, \omega)$ by (5.12) and using the fact that $W_t(\theta_s\omega) = W_{t+s}(\omega) - W_s(\omega)$ gives the following:

$$\tilde{\varphi}(\tilde{\varphi}(r_0, s, \omega), t, \theta_s\omega) = \frac{\dfrac{r_0\, e^{s + W_s(\omega)}}{\big(1 + 2r_0^2 \int_0^s e^{2(u + W_u(\omega))}\,du\big)^{1/2}}\; e^{t + W_{t+s}(\omega) - W_s(\omega)}}{\Bigg(1 + 2\,\dfrac{r_0^2\, e^{2(s + W_s(\omega))}}{1 + 2r_0^2 \int_0^s e^{2(u + W_u(\omega))}\,du}\, \displaystyle\int_0^t e^{2u + 2W_{s+u}(\omega) - 2W_s(\omega)}\,du\Bigg)^{1/2}}$$

A rearrangement of the fractions and a simple integral transformation finally result in:

$$\begin{aligned}
\tilde{\varphi}(\tilde{\varphi}(r_0, s, \omega), t, \theta_s\omega)
&= \frac{r_0\, e^{s + t + W_{s+t}(\omega)}}{\Big(1 + 2r_0^2 \int_0^s e^{2(u + W_u(\omega))}\,du + 2r_0^2\, e^{2s} \int_0^t e^{2u + 2W_{s+u}(\omega)}\,du\Big)^{1/2}} \\
&= \frac{r_0\, e^{s + t + W_{s+t}(\omega)}}{\Big(1 + 2r_0^2\Big(\int_0^s e^{2(u + W_u(\omega))}\,du + \int_s^{s+t} e^{2(u + W_u(\omega))}\,du\Big)\Big)^{1/2}} \\
&= \frac{r_0\, e^{s + t + W_{s+t}(\omega)}}{\Big(1 + 2r_0^2 \int_0^{s+t} e^{2(u + W_u(\omega))}\,du\Big)^{1/2}}
\end{aligned}$$

Therefore, the condition (5.14) is satisfied.
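The cocycle identity (5.14) also lends itself to a numerical check: evaluate ϕ̃ along one simulated Brownian path with the time integral approximated by the trapezoidal rule, and compare the one-step and two-step evaluations. Grid size, seed and splitting point below are arbitrary (illustrative) choices.

```python
import numpy as np

def phi_tilde(r0, t_grid, W):
    """phi~(r0, t, omega) of (5.12) on a time grid, with the integral
    int_0^t e^{2(s+W_s)} ds approximated by the trapezoidal rule."""
    integrand = np.exp(2.0 * (t_grid + W))
    I = np.concatenate(([0.0],
        np.cumsum((integrand[1:] + integrand[:-1]) / 2.0 * np.diff(t_grid))))
    return r0 * np.exp(t_grid + W) / np.sqrt(1.0 + 2.0 * r0**2 * I)

rng = np.random.default_rng(1)
n, T = 20000, 2.0
t = np.linspace(0.0, T, n + 1)
W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(T / n), size=n))))

r0, k = 0.7, n // 2                           # split the interval at s = t[k]
one_step = phi_tilde(r0, t, W)[-1]            # phi~(r0, s+t, omega)
rs = phi_tilde(r0, t[:k + 1], W[:k + 1])[-1]  # phi~(r0, s, omega)
W_shift = W[k:] - W[k]                        # W_u(theta_s omega) = W_{s+u} - W_s
two_step = phi_tilde(rs, t[k:] - t[k], W_shift)[-1]
```

On matching grids the two evaluations agree up to floating-point rounding, mirroring the algebraic rearrangement above.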

STEP 4: Detect a random fixed point

The next challenge is to detect a random fixed point. Let the following process be introduced:
$$r(\omega) := \Big(2 \int_{-\infty}^0 e^{2(s + W_s(\omega))}\,ds\Big)^{-1/2} \tag{5.15}$$
Now it will be proven that the process r(ω) is a random fixed point in the sense of Definition 4.14. Therefore, the following condition has to be verified:

$$\tilde{\varphi}(r(\omega), t, \omega) = r(\theta_t\omega) \tag{5.16}$$

The Wiener shift operator combined with the guess (5.15) gives that:

$$r(\theta_t\omega) = \Big(2 \int_{-\infty}^0 e^{2(s + W_s(\theta_t\omega))}\,ds\Big)^{-1/2} = \Big(2 \int_{-\infty}^0 e^{2(s + W_{t+s}(\omega) - W_t(\omega))}\,ds\Big)^{-1/2}$$

On the other hand (5.12) and (5.15) imply the following:

$$\tilde{\varphi}(r(\omega), t, \omega) = \frac{r(\omega)\, e^{t + W_t(\omega)}}{\Big(1 + 2r(\omega)^2 \int_0^t e^{2(s + W_s(\omega))}\,ds\Big)^{1/2}} = \frac{\Big(2 \int_{-\infty}^0 e^{2(s + W_s(\omega))}\,ds\Big)^{-1/2}\, e^{t + W_t(\omega)}}{\Big(1 + 2\Big(2 \int_{-\infty}^0 e^{2(s + W_s(\omega))}\,ds\Big)^{-1} \int_0^t e^{2(s + W_s(\omega))}\,ds\Big)^{1/2}}$$

By applying a simple integral transformation and a few steps of basic computation one ends up with:

$$\begin{aligned}
\tilde{\varphi}(r(\omega), t, \omega)
&= \frac{e^{t + W_t(\omega)}}{\Big(2 \int_{-\infty}^0 e^{2(s + W_s(\omega))}\,ds + 2 \int_0^t e^{2(s + W_s(\omega))}\,ds\Big)^{1/2}} \\
&= \frac{e^{t + W_t(\omega)}}{\Big(2 \int_{-\infty}^t e^{2(s + W_s(\omega))}\,ds\Big)^{1/2}} = \frac{e^{t + W_t(\omega)}}{\Big(2 \int_{-\infty}^0 e^{2(s + t + W_{s+t}(\omega))}\,ds\Big)^{1/2}} \\
&= \Big(2 \int_{-\infty}^0 e^{2(s + W_{s+t}(\omega) - W_t(\omega))}\,ds\Big)^{-1/2}
\end{aligned}$$

Therefore, the process r(ω) defined in (5.15) is a random fixed point.
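Similarly, the fixed-point property (5.16) can be checked numerically by truncating the integral in (5.15) at a finite horizon −M; since the integrand decays like e^{2s}, the truncation error is negligible. The horizon, grid and seed below are arbitrary (illustrative) choices.

```python
import numpy as np

def trap(f, x):
    """Trapezoidal rule on a grid."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

rng = np.random.default_rng(2)
M, T, n = 30.0, 1.0, 62000
s = np.linspace(-M, T, n + 1)   # uniform grid on [-M, T]
ds = (T + M) / n
W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(ds), size=n))))
i0 = int(round(M / ds))         # index of s = 0
k = int(round(T / ds))          # grid steps corresponding to the shift by T
W -= W[i0]                      # pin W_0 = 0

e = np.exp(2.0 * (s + W))
I_minus = trap(e[:i0 + 1], s[:i0 + 1])  # int_{-M}^0 e^{2(u+W_u)} du
I_plus = trap(e[i0:], s[i0:])           # int_0^T  e^{2(u+W_u)} du
r_omega = (2.0 * I_minus) ** -0.5       # truncated version of (5.15)

# left-hand side of (5.16): phi~(r(omega), T, omega) via (5.12)
lhs = r_omega * np.exp(T + W[-1]) / np.sqrt(1.0 + 2.0 * r_omega**2 * I_plus)

# right-hand side: r(theta_T omega), using W_u(theta_T omega) = W_{u+T} - W_T
e_shift = np.exp(2.0 * (s[:n + 1 - k] + W[k:] - W[-1]))
rhs = (2.0 * trap(e_shift, s[:n + 1 - k])) ** -0.5
```

Both sides are computed from the same simulated path, so their agreement reflects exactly the integral rearrangement carried out above.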

STEP 5: Find a random periodic solution on a cylinder

The aim of this step is to show that the system of equations (5.8) and (5.9) has a random periodic solution on the cylinder [0, 1] × ℝ according to Definition 4.18.

With the use of the cocycle established in Step 3 and the solution for α stated in (5.11),
$$\varphi_1(t, \omega)(\alpha_0, r_0) = \Big(\Big(\alpha_0 + \frac{t}{2\pi}\Big) \bmod 1,\; \tilde{\varphi}(r_0, t, \omega)\Big) \tag{5.17}$$
can be interpreted as a cocycle on the cylinder. Additionally, let G_ω be defined according to the random fixed point that has been detected in Step 4:

$$G_\omega := \{(\alpha, r(\omega)) \mid \alpha \in [0, 1]\} \tag{5.18}$$

What is left to show are Assumptions 3 and 4 of Definition 4.18. Concerning the first one, the following invariance condition has to be verified:

$$\varphi_1(t, \omega)\,G_\omega = G_{\theta_t\omega}$$

Using Equation (5.17), Definition (5.18) and the fact that r(ω) is a random fixed point results in the following:

$$\varphi_1(t, \omega)\,G_\omega = \Big\{\Big(\Big(\alpha + \frac{t}{2\pi}\Big) \bmod 1,\; \tilde{\varphi}(r(\omega), t, \omega)\Big) \,\Big|\; \alpha \in [0, 1]\Big\} = \{(\alpha, r(\theta_t\omega)) \mid \alpha \in [0, 1]\}$$
On the other hand, (5.18) implies that:

$$G_{\theta_t\omega} = \{(\alpha, r(\theta_t\omega)) \mid \alpha \in [0, 1]\}$$

The last task is to find a period T > 0 and a winding number τ ∈ N such that:

$$\varphi_1(T, \omega)(\alpha, r(\omega)) = (\alpha, r(\theta_T\omega)) \qquad \forall\, \alpha \in [0, 1)$$

It can be easily seen that the following equation holds for all α ∈ [0, 1):

$$\varphi_1(2\pi, \omega)(\alpha, r(\omega)) = \Big(\Big(\alpha + \frac{2\pi}{2\pi}\Big) \bmod 1,\; \tilde{\varphi}(r(\omega), 2\pi, \omega)\Big) = (\alpha, r(\theta_{2\pi}\omega))$$

Therefore, the system of equations (5.8) and (5.9) has a random periodic solution of period T = 2π and winding number τ = 1 on the cylinder.

STEP 6: Find a random periodic solution

The last thing that remains to be shown is the existence of a random periodic solution for the original problem given in Example 5.11 on ℝ². In order to do so, one has to introduce a cocycle ϕ₂ on ℝ² and to specify a periodic function ψ : ℝ × Ω → ℝ². As mentioned above, the trajectory ϕ₁ given by
$$\varphi_1(t, \omega)(\alpha, r) = \Big(\Big(\alpha + \frac{t}{2\pi}\Big) \bmod 1,\; \tilde{\varphi}(r, t, \omega)\Big)$$
forms a cocycle on the cylinder [0, 1] × ℝ. In the course of the transformation applied in Step 1 the processes x and y are chosen as:

$$x(t) = r(t)\cos(2\pi\alpha(t)), \qquad y(t) = r(t)\sin(2\pi\alpha(t))$$

The solution formulas (5.10) and (5.11) imply that the trajectory

$$\varphi_2(t, \omega)(x, y) := \Big(\tilde{\varphi}(r, t, \omega)\cos\Big(2\pi\Big(\alpha + \frac{t}{2\pi}\Big)\Big),\; \tilde{\varphi}(r, t, \omega)\sin\Big(2\pi\Big(\alpha + \frac{t}{2\pi}\Big)\Big)\Big) \tag{5.19}$$
defines a cocycle on ℝ² for the original system given in Example 5.11. In addition, let ψ : ℝ × Ω → ℝ² be defined as:

$$\psi(t, \omega) := \big(r(\omega)\cos(2\pi\alpha + t),\; r(\omega)\sin(2\pi\alpha + t)\big) \tag{5.20}$$

It remains to show that there exists a period T > 0 such that the subsequent two conditions are satisfied:

$$\psi(t + T, \omega) = \psi(t, \omega)$$
$$\varphi_2(t, \omega)\,\psi(0, \omega) = \psi(t, \theta_t\omega)$$

For the choice of T = 2π the first condition follows immediately from the properties of the sine and cosine functions. Furthermore, by the definition of ϕ2 and ψ given in (5.19) and (5.20), respectively, as well as by the fact that r(ω) is a random fixed point it holds that:

$$\begin{aligned}
\varphi_2(t, \omega)\,\psi(0, \omega) &= \varphi_2(t, \omega)\big(r(\omega)\cos(2\pi\alpha),\; r(\omega)\sin(2\pi\alpha)\big) \\
&= \Big(\tilde{\varphi}(r(\omega), t, \omega)\cos\Big(2\pi\Big(\alpha + \frac{t}{2\pi}\Big)\Big),\; \tilde{\varphi}(r(\omega), t, \omega)\sin\Big(2\pi\Big(\alpha + \frac{t}{2\pi}\Big)\Big)\Big) \\
&= \big(r(\theta_t\omega)\cos(2\pi\alpha + t),\; r(\theta_t\omega)\sin(2\pi\alpha + t)\big)
\end{aligned}$$

On the other hand Definition (5.20) gives that:

$$\psi(t, \theta_t\omega) = \big(r(\theta_t\omega)\cos(2\pi\alpha + t),\; r(\theta_t\omega)\sin(2\pi\alpha + t)\big)$$

This finally implies that the original system given in Example 5.11 has a random periodic solution ψ with period T = 2π according to Definition 4.16. Therefore, this system is a stochastic oscillator in the sense of Definition 4.17.

Chapter 6

Conclusion

A human brain contains billions of neurons. These are extremely complex dynamical systems. Oscillatory behavior arises both in single neurons and in neuronal networks. Nerve cells are affected by intrinsic randomness induced by channel noise and by extrinsic noise resulting from the synaptic activity of surrounding neurons. Against the background of the noisy and rhythmic firing activity of neurons, the aim of this thesis was to provide an overview of the existing stochastic theory of oscillations.

6.1 Experiences

The main objective was to answer the question of how oscillations can be defined mathematically in a stochastic setting. Chapter 4 suggested two very different definitions of a stochastic oscillator according to certain well-known deterministic tools introduced in Chapter 3. In a deterministic setting, the solution of a second order ODE is an oscillator if it has infinitely many simple zeros. This can be extended to the solution of a specific two-dimensional SDE by asking for infinitely many simple zeros almost surely. The second definition is based on periodic solution trajectories. A deterministic system of two first order ODEs is an oscillator if the solution trajectory forms a closed curve in the phase plane. Due to stochastic perturbations the notion of a closed orbit loses its meaning. However, one can introduce the concept of a stochastic periodic solution based on the theory of random dynamical systems. The main challenge is to provide a stochastic version of the deterministic flow property. The stochastic cocycle that is defined over the Wiener shift operator enables the handling of the additional dimension entering the system due to randomness. Two particular stochastic equations have been introduced, each satisfying one of these definitions. For this purpose, two detailed proofs on the validity of the first and the second definition, respectively, were presented in Chapter 5.

A second topic addressed by this work was the stability of stochastic equations. The mean system energy, defined in Chapter 4, carries information on how the solution trajectory propagates in the phase plane. In particular, the mean energy of the harmonic oscillator is not constant in time. In Chapter 5 it was demonstrated that due to randomness, the trajectory tends to move outwards in space as time elapses, whereas the deterministic model preserves the energy. The question of system stability is linked with the asymptotic behavior of the solution trajectories. The Lyapunov exponents describe the asymptotic exponential growth or decay of random dynamical systems. They can be seen as the stochastic counterparts of eigenvalues and were introduced in Chapter 4.

The application of the described theory to specific standard oscillatory models as well as to the well-known Van der Pol oscillator has been presented in Chapter 5. This equation arises as a special case of the FitzHugh-Nagumo neuron model. Finally, in Chapter 5 sample path simulations of the stochastic models were given by the implementation of the Euler-Maruyama method in MATLAB. Additionally, an exact simulation method applied to the harmonic oscillator equation has been introduced to point out that the standard technique overestimates the system energy.

6.2 Future Work

A major goal would be the application or extension of the developed theory to specific single neuron equations as well as to neuronal network models. In the first instance, one could try to find other models which satisfy the suggested definitions of a stochastic oscillator. Perhaps there exist equations that fulfil the Novikov condition such that, by Girsanov's theorem, they fit into the framework of the first oscillator definition. This possibility has been discussed in Section 4.1. Moreover, the discussion on system stability, initiated in Section 4.3, can be intensified and extended to a variety of other tools. Especially the study of Lyapunov exponents as functions of specific system parameters can give a more solid picture of the asymptotic behavior of the corresponding equation and, in particular, provide a basis for a bifurcation analysis. One could also try to apply the exact simulation method, discussed in Section 5.2, to the stochastic damped equation. In general, the investigation of structure-preserving numerical methods comprises a whole field in stochastics.

The broad topic of stochastic oscillations offers a variety of possibilities for further research. This thesis provides a detailed and extensive overview of this specific mathematical field and therefore a solid basis for future work.

List of Figures

2.1 Description of a single neuron [22]
2.2 Equivalent electrical circuit [6]

4.1 Canonical probability space [10]
4.2 Cocycle property [2]
4.3 Trajectory on a random fixed point
4.4 Trajectory on a random periodic solution
4.5 Random periodic trajectory on a cylinder [33]

5.1 Harmonic oscillator
5.2 Periodic solutions
5.3 Damped harmonic oscillator
5.4 Approaching the fixed point
5.5 Van der Pol oscillator
5.6 Approaching the limit cycle
5.7 Duffing VdP oscillator 2
5.8 Approaching the fixed point
5.9 Nonlinear oscillator
5.10 Approaching the limit cycle
5.11 Stochastic harmonic oscillator
5.12 Phase plane 1
5.13 Stochastic harmonic oscillator
5.14 Phase plane 2
5.15 Exact method
5.16 Euler-Maruyama method
5.17 Stochastic damped oscillator
5.18 Phase plane 3
5.19 Stochastic damped oscillator 2
5.20 Phase plane 4
5.21 Stochastic VdP oscillator
5.22 Phase plane 5
5.23 Stochastic Duffing VdP
5.24 Phase plane 6
5.25 Stochastic nonlinear oscillator
5.26 Phase plane 7
5.27 Stochastic nonlinear oscillator
5.28 Phase plane 8

Bibliography

[1] L. Arnold. Stochastische Differentialgleichungen: Theorie und Anwendung. Oldenbourg, München and Wien, 1973.

[2] L. Arnold. Random dynamical systems. Springer, Berlin and New York, 1998.

[3] P. H. Baxendale. Lyapunov exponents and stability for the stochastic Duffing-van der Pol oscillator. In IUTAM Symposium on Nonlinear Stochastic Dynamics: Proceedings of the IUTAM Symposium held in Monticello, Illinois, U.S.A., 2002, 125-135, Dordrecht, 2003. Springer.

[4] P. H. Baxendale. Stochastic averaging and asymptotic behavior of the stochastic Duffing-van der Pol equation. Stochastic Processes and their Applications, 113(2), 235-272, 2004.

[5] M. F. Bear, B. W. Connors, and M. A. Paradiso. Neurowissenschaften: Ein grundlegendes Lehrbuch für Biologie, Medizin und Psychologie. Spektrum Akademischer Verlag, Heidelberg, 3rd edition, 2009.

[6] N. Berglund. Des canards dans mes neurones. http://images.math.cnrs.fr/Des-canards-dans-mes-neurones.html, (accessed April 2, 2016).

[7] M. Braun. Differential equations and their applications: An introduction to applied mathematics. Springer-Verlag, New York, 4th edition, 1993.

[8] B. Chen and C. F. Martin. FitzHugh-Nagumo model and processing in the visual cortex of fly. In Decision and Control, 43rd IEEE Conference, 1, 591-595, 2004.

[9] P. Dayan and L. F. Abbott. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. The MIT Press, Cambridge, 2005.

[10] J. Duan. An introduction to stochastic dynamics. Cambridge University Press, New York, 2015.

[11] G. B. Ermentrout and C. C. Chow. Modeling neural oscillations. Physiology & Behavior, 77(4-5), 629-633, 2002.

[12] G. B. Ermentrout and D. H. Terman. Mathematical foundations of neuroscience. Springer, New York, 2010.

[13] R. FitzHugh. Impulses and physiological states in theoretical models of nerve membrane. Biophysical Journal, 1(6), 445-466, 1961.

[14] W. Gerstner and W. M. Kistler. Spiking neuron models: An introduction. Cambridge University Press, Cambridge, 2002.

[15] W. Gerstner, W. M. Kistler, R. Naud, and L. Paninski. Neuronal dynamics: From single neurons to networks and models of cognition. Cambridge University Press, Cambridge, 2014.

[16] D. T. Gillespie. Exact numerical simulation of the Ornstein-Uhlenbeck process and its integral. Phys. Rev. E, 54(2), 2084-2091, 1996.

[17] H. Heuser. Gewöhnliche Differentialgleichungen: Einführung in Lehre und Gebrauch. B.G. Teubner, Stuttgart, 1989.

[18] P. Imkeller and C. Lederer. An explicit description of the Lyapunov exponents of the noisy damped harmonic oscillator. Dynamics and Stability of Systems, 14(4), 385-405, 1999.

[19] P. Imkeller and C. Lederer. Some formulas for Lyapunov exponents and rotation numbers in two dimensions and the stability of the harmonic oscillator and the inverted pendulum. Dynamical Systems, 16(1), 29-61, 2001.

[20] K. Itô. On stochastic differential equations. American Math. Soc., New York, 1951.

[21] F. S. J. Nobels, J. L. Diekema, and E. H. Prinsen. Phase plots in dynamical systems. http://www.astro.rug.nl/~nobels/projects2.php, 2014 (accessed April 2, 2016).

[22] A. Karpathy. CS231n: Convolutional neural networks for visual recognition. http://cs231n.github.io/neural-networks-1/#classifier, accessed April 2, 2016.

[23] H. W. Knobloch and F. Kappel. Gewöhnliche Differentialgleichungen. B.G. Teubner, Stuttgart, 1974.

[24] C. Koch. Biophysics of computation: Information processing in single neurons. Computational neuroscience. Oxford University Press, New York, 1999.

[25] H. Kuo. Introduction to stochastic integration. Springer, New York, 2006.

[26] C. Laing and G. J. Lord. Stochastic methods in neuroscience. Oxford University Press, Oxford, 2010.

[27] S. Lefschetz. Differential equations; geometric theory. Interscience Publ., New York, 2nd edition, 1962.

[28] X. Mao. Stochastic differential equations and applications. Horwood Pub., Chichester, 2nd edition, 2008.

[29] J. C. Mattingly, A. M. Stuart, and D. J. Higham. Ergodicity for SDEs and approximations: locally Lipschitz vector fields and degenerate noise. Stochastic Processes and their Applications, 101(2), 185-232, 2002.

[30] P. E. Protter. Stochastic integration and differential equations. Springer, Berlin, 2nd edition, 2004.

[31] S. Stramigioli and M. van Dijk. Energy conservative limit cycle oscillations. http://purl.utwente.nl/publications/64882, 2008 (accessed April 2, 2016).

[32] R. L. Stratonovich. A new representation for stochastic integrals and equations. SIAM Journal on Control, 4(2), 362-371, 1966.

[33] H. Zhao and Z. Zheng. Random periodic solutions of random dynamical systems. Journal of Differential Equations, 246(5), 2020-2038, 2009.