
Dynamical Systems and a Brief Introduction to Ergodic Theory

Leo Baran

Spring 2014

Abstract

This paper explores dynamical systems of different types and orders, culminating in an examination of the properties of the logistic map. It also introduces ergodic theory and important results in the field.

Contents

1 Dynamical Systems$^{4}$
  1.1 Differential Equations
    1.1.1 Flows on $\mathbb{R}^1$
    1.1.2 Flows on $\mathbb{R}^2$
    1.1.3 $\mathbb{R}^3$ and chaos
  1.2 Maps
    1.2.1 The Logistic Map
2 Ergodic Theory
  2.1 Measure Theory Preliminaries$^{1,2}$
  2.2 A Few Results$^{3}$

1 Dynamical Systems$^{4}$

The field of dynamics came into being in the 1600s through Newton's development of differential equations and their application to the laws of gravitation and planetary motion. He realized that finding an exact solution for some dynamical systems (such as the three-body problem, that is, describing the exact motion of three planetary bodies governed by the laws of gravitation) was essentially impossible. A couple of centuries later, Poincaré developed a novel approach to analyzing such systems. His method was to answer qualitative questions about the system rather than develop an exact quantitative solution. Until the mid-1900s, dynamics was largely concerned with nonlinear oscillating systems and their applications to physics. The development of high-speed computing led Lorenz in 1963 to discover chaotic motion in dynamics. Since then, interest in dynamics and chaos has proliferated, and applications to real-world systems have become exceedingly numerous.

There are two main types of dynamical systems: differential equations and iterated maps. Differential equations describe the motion of systems in continuous time, while iterated maps deal exclusively with discrete time.

1.1 Differential Equations

Consider an $n$-dimensional space whose points $(x_1(t), x_2(t), \ldots, x_n(t))$ are functions of time. In general, an autonomous dynamical system on this $n$-dimensional phase space is defined as the system

$$\frac{dx_1}{dt} = f_1(x_1, x_2, \ldots, x_n)$$
$$\frac{dx_2}{dt} = f_2(x_1, x_2, \ldots, x_n)$$
$$\vdots$$
$$\frac{dx_n}{dt} = f_n(x_1, x_2, \ldots, x_n),$$

where autonomous refers to the fact that $f_1, \ldots, f_n$ do not depend on $t$. In this case, the differential equations that compose the system are of first order, i.e. they only involve first time derivatives of $(x_1, x_2, \ldots, x_n)$.

1.1.1 Flows on $\mathbb{R}^1$

In one dimension, a first-order system is of the form

$$\frac{d}{dt}x(t) = f(x(t)).$$

Example 1.1. $x'(t) = \sin(x(t))$.

In this case we can find an exact solution for the system by doing some slightly sketchy separation of variables:

$$\frac{dx}{dt} = \sin(x) \iff dt = \frac{dx}{\sin(x)} \iff \int dt = \int \frac{dx}{\sin(x)} \iff t = -\ln|\csc(x) + \cot(x)| + C.$$

If $x(0) = x_0$, then $0 = -\ln|\csc(x_0) + \cot(x_0)| + C \iff C = \ln|\csc(x_0) + \cot(x_0)|$. Thus

$$t = -\ln|\csc(x) + \cot(x)| + \ln|\csc(x_0) + \cot(x_0)| = \ln\frac{|\csc(x_0) + \cot(x_0)|}{|\csc(x) + \cot(x)|}.$$

Most of the time we want to know what $x(t)$ does as $t \to \infty$ for an arbitrary initial condition $x_0$; however, in this case (and in most systems) inverting such a formula to obtain $x(t)$ is not trivial. Instead we can interpret $f(x(t))$ as a vector field: $t$ is time, $x(t)$ is the position on $\mathbb{R}$, and $\frac{dx}{dt}$ is the velocity on the line. We can graph this vector field while keeping in mind that a positive velocity means we move in the positive direction on $\mathbb{R}$, and vice versa for a negative velocity. At every point where $x'(t) = 0$, $x(t)$ is stationary. These points are called fixed points. In a graph of this vector field, we can mark certain fixed points with solid circles and others with hollow circles. The fixed points with solid circles, at $(2n+1)\pi$, $n \in \mathbb{Z}$, are points toward which $x(t)$ is attracted, while the hollow circles, at $2n\pi$, $n \in \mathbb{Z}$, are points from which $x(t)$ is repelled. These points are conveniently called attracting and repelling fixed points, respectively. The fixed point to which $x(t)$ tends depends on the initial condition: if $x(0) = \frac{3\pi}{4}$, then the system will tend to $\pi$ as $t \to \infty$; if $x(0) = -\frac{\pi}{4}$, then the system will tend to $-\pi$; and if $x(0) = 0$, the system will stay at $x(t) = 0$ for all $t$.
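To complement this qualitative reasoning, the following is a minimal numerical sketch (my own illustration, not part of the original paper, assuming Python with NumPy; the function name flow, the step size, and the time horizon are arbitrary choices) that integrates $\dot{x} = \sin(x)$ by forward Euler from the three initial conditions above:

    # Minimal sketch: forward-Euler integration of x' = sin(x). Each trajectory
    # should settle onto the nearest attracting fixed point (an odd multiple of
    # pi), or stay put if it starts exactly at a fixed point.
    import numpy as np

    def flow(x0, f=np.sin, dt=0.01, T=20.0):
        """Approximate x(T) for x' = f(x), x(0) = x0, using forward Euler."""
        x = x0
        for _ in range(int(T / dt)):
            x = x + dt * f(x)
        return x

    for x0 in (3 * np.pi / 4, -np.pi / 4, 0.0):
        print(f"x(0) = {x0:+.4f}  ->  x(20) = {flow(x0):+.4f}")

Up to numerical error, the three printed values land on $\pi$, $-\pi$, and $0$, matching the predictions made from the vector field.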
We, being mathematicians, would like to know about solutions to all dynamical systems of this form. Using the fact that our dynamical system is essentially a differential equation, we can show that solutions to linear, possibly nonautonomous, first-order equations exist and are unique on a certain interval of time.

Theorem 1.1 (Linear Fundamental Existence and Uniqueness Theorem). Let $f$ and $g$ be continuous functions on $(a, b) \subseteq \mathbb{R}$, let $t_0 \in (a, b)$, and let $x_0 \in \mathbb{R}$. Then there is a unique function $x = p(t)$ that satisfies

$$x' + f(t)x = g(t), \qquad p(t_0) = x_0$$

on $(a, b)$. Namely,

$$p(t) = x_0 e^{-F(t)} + e^{-F(t)} \int_{t_0}^{t} g(\tau) e^{F(\tau)}\, d\tau,$$

where $F(t) = \int_{t_0}^{t} f(\tau)\, d\tau$.

Proof. First we prove existence. Suppose the solution $p(t)$ takes the form above. Taking a derivative,

$$p'(t) = -x_0 e^{-F(t)} \frac{d}{dt}[F(t)] + \frac{d}{dt}\Big[e^{-F(t)}\Big] \int_{t_0}^{t} g(\tau) e^{F(\tau)}\, d\tau + \frac{d}{dt}\Big[\int_{t_0}^{t} g(\tau) e^{F(\tau)}\, d\tau\Big] e^{-F(t)}$$
$$= -x_0 f(t) e^{-F(t)} - f(t) e^{-F(t)} \int_{t_0}^{t} g(\tau) e^{F(\tau)}\, d\tau + g(t) e^{F(t)} e^{-F(t)}$$
$$= -f(t)\Big[x_0 e^{-F(t)} + e^{-F(t)} \int_{t_0}^{t} g(\tau) e^{F(\tau)}\, d\tau\Big] + g(t) = -f(t)p(t) + g(t),$$

so $p'(t) = -f(t)p(t) + g(t) \iff p'(t) + f(t)p(t) = g(t)$. Also $p(t_0) = x_0 e^{-F(t_0)} + e^{-F(t_0)} \int_{t_0}^{t_0} g(\tau) e^{F(\tau)}\, d\tau = x_0 e^{-\int_{t_0}^{t_0} f(\tau)\, d\tau} = x_0$. Thus $p(t)$ is a solution to the initial value problem.

Next we show that the solution is unique. Suppose $q(t)$ is also a solution to the problem, and let $r(t) = q(t) e^{F(t)}$. Then $r(t_0) = q(t_0) = x_0$. Also

$$r'(t) = q'(t) e^{F(t)} + q(t) f(t) e^{F(t)} = e^{F(t)}\big[q'(t) + f(t) q(t)\big] = e^{F(t)} g(t).$$

We know by the fundamental theorem of calculus that $\int_{t_0}^{t} r'(\tau)\, d\tau = r(t) - r(t_0)$, so

$$r(t) = r(t_0) + \int_{t_0}^{t} e^{F(\tau)} g(\tau)\, d\tau = x_0 + \int_{t_0}^{t} e^{F(\tau)} g(\tau)\, d\tau.$$

We also know that $r(t) = q(t) e^{F(t)} \iff q(t) = r(t) e^{-F(t)}$. Thus

$$q(t) = e^{-F(t)}\Big[x_0 + \int_{t_0}^{t} e^{F(\tau)} g(\tau)\, d\tau\Big] = x_0 e^{-F(t)} + e^{-F(t)} \int_{t_0}^{t} e^{F(\tau)} g(\tau)\, d\tau = p(t).$$

We therefore know that every solution to the problem must take this form, so the solution is unique.

Dynamical systems in two dimensions become a bit more interesting.

1.1.2 Flows on $\mathbb{R}^2$

We consider the general system

$$\frac{d}{dt}x_1(t) = f_1(x_1(t), x_2(t))$$
$$\frac{d}{dt}x_2(t) = f_2(x_1(t), x_2(t)),$$

or

$$\frac{d}{dt}x = f(x).$$

If our system is linear, then

$$\frac{d}{dt}x_1(t) = a x_1(t) + b x_2(t), \qquad \frac{d}{dt}x_2(t) = c x_1(t) + d x_2(t) \iff \dot{x} = Ax,$$

where

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \quad\text{and}\quad x = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}.$$

We can learn a great deal about the behavior of this system by studying the matrix $A$. We would like to find lines in the plane which are invariant under $A$; that is, we look for vectors $v$ with $Av = \lambda v$. (As usual in solving differential equations, we assume that the solution takes the form $x(t) = e^{\lambda t}v$.) This means we want to find the eigenvalues $\lambda_j$ and eigenvectors $v \neq 0$ of $A$. We find them in the following way: $Av = \lambda v \iff (A - \lambda I)v = 0 \implies \det(A - \lambda I) = 0$. The last step follows from the fact that $v$ is assumed not to be zero, so $(A - \lambda I)$ is a non-invertible matrix, which means its determinant is zero. Next we solve

$$\det(A - \lambda I) = 0 \iff \det\begin{pmatrix} a - \lambda & b \\ c & d - \lambda \end{pmatrix} = 0 \iff \lambda^2 - \tau\lambda + \Delta = 0,$$

where $\tau = a + d = \operatorname{trace}(A)$ and $\Delta = ad - bc = \det(A)$. Using the quadratic formula, we see that

$$\lambda_1 = \frac{\tau + \sqrt{\tau^2 - 4\Delta}}{2}, \qquad \lambda_2 = \frac{\tau - \sqrt{\tau^2 - 4\Delta}}{2}$$

are the two eigenvalues of $A$, corresponding to eigenvectors $v_1, v_2$. The general solution to the system is a linear combination of the two solutions: $x(t) = c_1 e^{\lambda_1 t} v_1 + c_2 e^{\lambda_2 t} v_2$.
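As a concrete illustration of these formulas (a worked example with a hypothetical matrix chosen here for illustration, not taken from the original text), consider

$$A = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}, \qquad \tau = 1 + 1 = 2, \qquad \Delta = (1)(1) - (1)(4) = -3.$$

Then

$$\lambda_{1,2} = \frac{2 \pm \sqrt{4 + 12}}{2} = \frac{2 \pm 4}{2}, \qquad\text{so}\qquad \lambda_1 = 3, \quad \lambda_2 = -1,$$

with corresponding eigenvectors $v_1 = (1, 2)^T$ and $v_2 = (1, -2)^T$, so the general solution is

$$x(t) = c_1 e^{3t}\begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2 e^{-t}\begin{pmatrix} 1 \\ -2 \end{pmatrix}.$$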
We can use the following result from linear algebra to see why this is indeed the general solution whenever the eigenvalues are distinct.

Lemma 1.2. Suppose $A$ is a $2 \times 2$ nonzero matrix with eigenvalues $\lambda_1$ and $\lambda_2$ such that $\lambda_1 \neq \lambda_2$. Then the eigenvectors $v_1$ and $v_2$ that correspond to $\lambda_1$ and $\lambda_2$ are linearly independent.

Proof. Suppose $c_1 v_1 + c_2 v_2 = 0$. Applying $(A - \lambda_2 I)$ to both sides gives $c_1(\lambda_1 - \lambda_2)v_1 = 0$, and since $\lambda_1 \neq \lambda_2$ and $v_1 \neq 0$, it follows that $c_1 = 0$. Then $c_2 v_2 = 0$ with $v_2 \neq 0$ forces $c_2 = 0$. Hence $v_1$ and $v_2$ are linearly independent.
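To tie these pieces together, here is a minimal numerical sketch (again my own illustration, not from the original paper, assuming Python with NumPy; it reuses the hypothetical matrix from the worked example above) that computes the eigenvalues and eigenvectors of $A$ and spot-checks that $x(t) = c_1 e^{\lambda_1 t} v_1 + c_2 e^{\lambda_2 t} v_2$ really solves $\dot{x} = Ax$:

    # Minimal sketch: eigen-decomposition of a hypothetical 2x2 matrix and a
    # spot check that x(t) = c1*exp(l1*t)*v1 + c2*exp(l2*t)*v2 satisfies x' = A x.
    import numpy as np

    A = np.array([[1.0, 1.0],
                  [4.0, 1.0]])
    eigvals, eigvecs = np.linalg.eig(A)      # columns of eigvecs are v1, v2
    l1, l2 = eigvals
    v1, v2 = eigvecs[:, 0], eigvecs[:, 1]
    print("trace check:", np.isclose(l1 + l2, np.trace(A)))        # l1 + l2 = tau
    print("det check:  ", np.isclose(l1 * l2, np.linalg.det(A)))   # l1 * l2 = Delta

    c1, c2 = 0.5, -1.0                        # arbitrary constants
    x  = lambda t: c1 * np.exp(l1 * t) * v1 + c2 * np.exp(l2 * t) * v2
    dx = lambda t: c1 * l1 * np.exp(l1 * t) * v1 + c2 * l2 * np.exp(l2 * t) * v2
    print("ODE check:  ", np.allclose(dx(0.7), A @ x(0.7)))

The trace and determinant checks reflect the identities $\lambda_1 + \lambda_2 = \tau$ and $\lambda_1 \lambda_2 = \Delta$, which follow from the characteristic polynomial $\lambda^2 - \tau\lambda + \Delta = 0$.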