Introduction to the Transfer Operator Method
Omri Sarig

Introduction to the transfer operator method

Winter School on Dynamics, Hausdorff Research Institute for Mathematics, Bonn, January 2020

Contents

1  Lecture 1: The transfer operator (60 min)
   1.1  Motivation
   1.2  Definition, basic properties, and examples
   1.3  The transfer operator method
2  Lecture 2: Spectral Gap (60 min)
   2.1  Quasi-compactness and spectral gap
   2.2  Sufficient conditions for quasi-compactness
   2.3  Application to continued fractions
3  Lecture 3: Analytic perturbation theory (60 min)
   3.1  Calculus in Banach spaces
   3.2  Resolvents and eigenprojections
   3.3  Analytic perturbations of operators with spectral gap
4  Lecture 4: Application to the Central Limit Theorem (60 min)
   4.1  Spectral gap and the central limit theorem
   4.2  Background from probability theory
   4.3  The proof of the central limit theorem (Nagaev's method)
5  Lecture 5 (time permitting): Absence of spectral gap (60 min)
   5.1  Absence of spectral gap
   5.2  Inducing
   5.3  Operator renewal theory
A  Supplementary material
   A.1  Conditional expectations and Jensen's inequality
   A.2  Mixing and exactness for the Gauss map
   A.3  Hennion's theorem on quasi-compactness
   A.4  The analyticity theorem
   A.5  Eigenprojections, "separation of spectrum", and Kato's Lemma
   A.6  The Berry–Esseen "Smoothing Inequality"

Lecture 1
The transfer operator

1.1 Motivation

A thought experiment. Drop a little bit of ink into a glass of water, and then stir it with a tea spoon.

1. Can you predict where individual ink particles will be after one minute? NO: the motion of ink particles is chaotic.
2. Can you predict the density profile of the ink after one minute? YES: it will be nearly uniform.
Gibbs's insight: For chaotic systems, it may be easier to predict the behavior of large collections of initial conditions than to predict the behavior of individual initial conditions.

The transfer operator: the action of a dynamical system on mass densities of initial conditions.

1.2 Definition, basic properties, and examples

Setup. Let $T: X \to X$ be a non-singular measurable map on a $\sigma$-finite measure space $(X, \mathcal{B}, \mu)$. Non-singularity means that $\mu(T^{-1}E) = 0 \Leftrightarrow \mu(E) = 0$ ($E \in \mathcal{B}$). All the maps we consider in these notes are non-invertible.

The action of $T$ on mass densities. Suppose we distribute mass on $X$ according to the mass density $f\,d\mu$, $f \in L^1(\mu)$, $f \ge 0$, and then apply $T$ to every point in the space. What will be the new mass distribution?

$$(\text{the mass of points which land in } E) = \int 1_E(Tx)\, f(x)\, d\mu(x) \qquad (1_E = \text{indicator of } E)$$
$$= \int (1_E \circ T)\, d\mu_f, \quad \text{where } \mu_f := f\, d\mu$$
$$= \int 1_E\, d(\mu_f \circ T^{-1}) = \int_E \frac{d(\mu_f \circ T^{-1})}{d\mu}\, d\mu \qquad \text{(Radon--Nikodym derivative)}$$

Exercise 1.1. $\mu_f \circ T^{-1} \ll \mu$, therefore the Radon–Nikodym derivative exists.

Definition: The transfer operator of a non-singular map $(X, \mathcal{B}, \mu, T)$ is the operator $\widehat{T}: L^1(\mu) \to L^1(\mu)$ given by
$$\widehat{T}f = \frac{d(\mu_f \circ T^{-1})}{d\mu}, \quad \text{where } \mu_f \text{ is the (signed) measure } \mu_f(E) := \int_E f\, d\mu.$$

The previous definition is difficult to work with. In practice one works with the following characterization of $\widehat{T}f$:

Proposition 1.1. $\widehat{T}f$ is the unique element of $L^1(\mu)$ such that for all test functions $\varphi \in L^\infty$, $\int \varphi \cdot (\widehat{T}f)\, d\mu = \int (\varphi \circ T) \cdot f\, d\mu$.

Proof. The identity holds: for every $\varphi \in L^\infty$,
$$\int \varphi \cdot (\widehat{T}f)\, d\mu = \int \varphi \cdot \frac{d(\mu_f \circ T^{-1})}{d\mu}\, d\mu \overset{!}{=} \int \varphi\, d(\mu_f \circ T^{-1}) \overset{!}{=} \int (\varphi \circ T)\, d\mu_f \overset{!}{=} \int (\varphi \circ T)\, f\, d\mu$$
(make sure you can justify all the $\overset{!}{=}$).

The identity characterizes $\widehat{T}f$: suppose $\exists h_1, h_2 \in L^1$ such that $\int \varphi h_i\, d\mu = \int (\varphi \circ T) f\, d\mu$ for all $\varphi \in L^\infty$. Choose $\varphi = \mathrm{sgn}(h_1 - h_2)$; then
$$\int |h_1 - h_2|\, d\mu = \int \varphi (h_1 - h_2)\, d\mu = \int \varphi h_1\, d\mu - \int \varphi h_2\, d\mu = \int (\varphi \circ T) f\, d\mu - \int (\varphi \circ T) f\, d\mu = 0,$$
whence $h_1 = h_2$ a.e. $\square$

Proposition 1.2 (Basic properties).
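The duality in Proposition 1.1 is easy to test numerically. The sketch below is my own illustration, not part of the notes: it takes $T(x) = 2x \bmod 1$ on $[0,1)$ with Lebesgue measure, approximates $\mu_f \circ T^{-1}$ from the definition by transporting mass on a fine grid, and checks the characterizing identity with a test function $\varphi$ that is piecewise constant on the coarse cells (so the two sides agree up to floating-point error). All grid sizes are arbitrary choices.

```python
import numpy as np

# Sketch: approximate the transfer operator of T(x) = 2x mod 1 on [0,1)
# (Lebesgue measure) directly from the definition, by transporting mass
# on a fine grid. Grid sizes are arbitrary choices for illustration.
M = 2**16
x = (np.arange(M) + 0.5) / M            # fine grid points carrying mass f(x) dx
dx = 1.0 / M
f = 1.0 + np.sin(2 * np.pi * x)         # a non-negative density in L^1
Tx = (2 * x) % 1.0                      # apply T to every point

K = 64                                  # coarse cells E_1, ..., E_K
bins = np.linspace(0.0, 1.0, K + 1)
mass, _ = np.histogram(Tx, bins=bins, weights=f * dx)  # (mu_f o T^-1)(E_j)
Tf = mass / np.diff(bins)               # approximate Radon-Nikodym derivative

# Check the identity of Proposition 1.1 with phi piecewise constant on cells:
#   int phi (T-hat f) dmu  =  int (phi o T) f dmu
centers = (bins[:-1] + bins[1:]) / 2
phi = np.cos(2 * np.pi * centers)       # one value of phi per cell
lhs = np.sum(phi * mass)
cell_of_Tx = np.minimum((Tx * K).astype(int), K - 1)
rhs = np.sum(phi[cell_of_Tx] * f * dx)
assert np.isclose(lhs, rhs)
print(lhs, rhs)
```

With a test function that varies inside the cells the two sides would still agree, but only up to an $O(1/K)$ discretization error.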
1. $\widehat{T}$ is a positive bounded linear operator with norm equal to one.
2. $\widehat{T}[(g \circ T) \cdot f] = g \cdot (\widehat{T}f)$ a.e. ($f \in L^1$, $g \in L^\infty$).
3. Suppose $\mu$ is a $T$-invariant probability measure; then for all $f \in L^1$,
$$(\widehat{T}f) \circ T = \mathbb{E}_\mu(f \mid T^{-1}\mathcal{B}) \quad \text{a.e.}$$

Proof of part 1: Linearity is trivial. Positivity means that if $f \ge 0$ a.e., then $\widehat{T}f \ge 0$ a.e. Let $\varphi := 1_{[\widehat{T}f < 0]}$; then
$$0 \ge \int_{[\widehat{T}f < 0]} (\widehat{T}f)\, d\mu = \int \varphi (\widehat{T}f)\, d\mu = \int \underbrace{(\varphi \circ T)\, f}_{\ge 0}\, d\mu \ge 0.$$
It follows that $\int_{[\widehat{T}f < 0]} (\widehat{T}f)\, d\mu = 0$. This can only happen if $\mu[\widehat{T}f < 0] = 0$.

$\widehat{T}$ is bounded: let $\varphi := \mathrm{sgn}(\widehat{T}f)$; then $\|\widehat{T}f\|_1 = \int \varphi (\widehat{T}f)\, d\mu = \int (\varphi \circ T) f\, d\mu \le \|\varphi \circ T\|_\infty \|f\|_1 = \|f\|_1$, whence $\|\widehat{T}f\|_1 \le \|f\|_1$. If $f > 0$, then $\|\widehat{T}f\|_1 = \int |\widehat{T}f|\, d\mu = \int \widehat{T}f\, d\mu = \int (1 \circ T) f\, d\mu = \|f\|_1$, so $\|\widehat{T}\| = 1$.

Exercise 1.2. Prove parts 2 and 3 of the proposition. (Hint for part 3: show first that every $T^{-1}\mathcal{B}$-measurable function equals $\varphi \circ T$ with $\varphi$ $\mathcal{B}$-measurable.)

Here are some examples of transfer operators.

Angle doubling map. If $T: [0,1] \to [0,1]$ is $T(x) = 2x \bmod 1$, then
$$(\widehat{T}f)(x) = \tfrac{1}{2}\left[f\left(\tfrac{x}{2}\right) + f\left(\tfrac{x+1}{2}\right)\right].$$

Proof. For every $\varphi \in L^\infty$,
$$\int_0^1 \varphi(Tx) f(x)\, dx = \int_0^{1/2} \varphi(2x) f(x)\, dx + \int_{1/2}^1 \varphi(2x-1) f(x)\, dx$$
$$= \int_0^1 \varphi(t)\, f\left(\tfrac{t}{2}\right) d\left(\tfrac{t}{2}\right) + \int_0^1 \varphi(s)\, f\left(\tfrac{s+1}{2}\right) d\left(\tfrac{s+1}{2}\right) = \int_0^1 \varphi(x) \cdot \tfrac{1}{2}\left[f\left(\tfrac{x}{2}\right) + f\left(\tfrac{x+1}{2}\right)\right] dx. \qquad \square$$

Exercise 1.3 (Gauss map). Let $T: [0,1] \to [0,1]$ be the map $T(x) = \{1/x\}$ (the fractional part of $1/x$). Show that
$$(\widehat{T}f)(x) = \sum_{n=1}^\infty \frac{1}{(x+n)^2}\, f\!\left(\frac{1}{x+n}\right).$$

Exercise 1.4 (General piecewise monotonic map). Suppose $[0,1]$ is partitioned into finitely many intervals $I_1, \ldots, I_N$ and $T|_{I_k}: I_k \to T(I_k)$ is one-to-one and has a continuously differentiable extension with non-zero derivative to an $\varepsilon$-neighborhood of $I_k$. Let $v_k: T(I_k) \to I_k$, $v_k := (T|_{I_k})^{-1}$. Show that
$$\widehat{T}f = \sum_{k=1}^N 1_{T(I_k)} \cdot |v_k'| \cdot f \circ v_k.$$

1.3 The transfer operator method

What dynamical information can we extract from the behavior of $\widehat{T}$?

Recall that $f_n \xrightarrow[n\to\infty]{} f$ weakly in $L^1$ if $\int \varphi f_n\, d\mu \xrightarrow[n\to\infty]{} \int \varphi f\, d\mu$ for all $\varphi \in L^\infty$.
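These formulas are easy to sanity-check numerically. Below is a short sketch (my own illustration, not part of the notes): it verifies the doubling-map formula on a density that can be pushed forward by hand, and checks that the classical Gauss density $h(x) = \frac{1}{(1+x)\log 2}$, the invariant density of the Gauss map, is fixed by the Gauss transfer operator. The test density and the truncation level $N$ are arbitrary choices.

```python
import numpy as np

# Sanity checks for the two example formulas (my illustration; the test
# density and the truncation level N are arbitrary choices).

# 1) Doubling map: for f(x) = 1 + x the formula gives, by hand,
#    (T-hat f)(y) = ((1 + y/2) + (1 + (y+1)/2)) / 2 = 5/4 + y/2.
y = np.linspace(0.0, 1.0, 101)
f = lambda t: 1.0 + t
Tf = 0.5 * (f(y / 2) + f((y + 1) / 2))
assert np.allclose(Tf, 1.25 + y / 2)

# 2) Gauss map T(x) = {1/x}: the classical Gauss density
#    h(x) = 1/((1+x) ln 2) is invariant, so T-hat h = h. The series
#    telescopes, and truncating at N leaves an O(1/N) tail.
h = lambda t: 1.0 / ((1.0 + t) * np.log(2.0))
N = 200_000
n = np.arange(1, N + 1)
xs = np.linspace(0.0, 1.0, 11)
Th = np.array([np.sum(h(1.0 / (xi + n)) / (xi + n) ** 2) for xi in xs])
assert np.allclose(Th, h(xs), atol=1e-4)
print("both checks pass")
```

The second check works because $\frac{1}{(x+n)^2} h\!\left(\frac{1}{x+n}\right) = \frac{1}{\log 2}\left(\frac{1}{x+n} - \frac{1}{x+n+1}\right)$, so the partial sums telescope to $h(x)$ up to a tail of size $\frac{1}{(x+N+1)\log 2}$.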
This is weaker than convergence in $L^1$ (give an example!).

Proposition 1.3 (Dynamical meaning of convergence of $\widehat{T}^n$).

1. If $\widehat{T}^n f \xrightarrow[n\to\infty]{} h \int f\, d\mu$ weakly in $L^1$ for some non-negative $0 \ne f \in L^1$, then $T$ has an absolutely continuous invariant probability measure, and $h$ is its density.
2. If $\widehat{T}^n f \xrightarrow[n\to\infty]{} \int f\, d\mu$ weakly in $L^1$ for all $f \in L^1$, then $T$ is a mixing probability preserving map.
3. If $\widehat{T}^n f \xrightarrow[n\to\infty]{L^1} \int f\, d\mu$, then for every $\varphi \in L^\infty$,
$$|\mathrm{Cov}(f, \varphi \circ T^n)| := \left|\int f\, (\varphi \circ T^n)\, d\mu - \int f\, d\mu \int \varphi\, d\mu\right| \le \left\|\widehat{T}^n f - \int f\, d\mu\right\|_1 \|\varphi\|_\infty,$$
so the rate of decay of correlations against $f$ is $O\left(\left\|\widehat{T}^n f - \int f\, d\mu\right\|_1\right)$.

Proof. 1. Assume w.l.o.g. that $\int f\, d\mu = 1$; then $\widehat{T}^n f \xrightarrow[n\to\infty]{w} h$. For every $\varphi \in L^\infty$, $\int \varphi h\, d\mu = \lim \int \varphi \cdot \widehat{T}^{n+1} f\, d\mu = \lim \int (\varphi \circ T)[\widehat{T}^n f]\, d\mu = \int (\varphi \circ T) h\, d\mu$. So $\mu_h := h\, d\mu$ is $T$-invariant.

2. Exercise.

3. $|\mathrm{Cov}(f, \varphi \circ T^n)| = \left|\int (\widehat{T}^n f)\, \varphi\, d\mu - \left(\int f\, d\mu\right) \int \varphi\, d\mu\right| = \left|\int \left(\widehat{T}^n f - \int f\, d\mu\right) \varphi\, d\mu\right| \le \left\|\widehat{T}^n f - \int f\, d\mu\right\|_1 \|\varphi\|_\infty$. $\square$

Exercise 1.5 (Dynamical interpretation of eigenvalues). Show:

1. All eigenvalues of the transfer operator have modulus less than or equal to one.
2. The invariant probability densities of $T$ are the non-negative $h \in L^1(\mu)$ such that $\widehat{T}h = h$ and $\int h\, d\mu = 1$. We call $h\, d\mu$ an acip (= absolutely continuous invariant probability measure).
3. If $\widehat{T}$ has an acip and 1 is a simple eigenvalue of $\widehat{T}$, then the acip is ergodic. ("Simple" means that $\dim\{g \in L^1 : \widehat{T}g = g\} = 1$.)
4. If $\widehat{T}$ has an acip, 1 is simple, and all other eigenvalues of $\widehat{T}$ have modulus strictly smaller than one, then the acip is weak mixing.
5. If $T$ is probability preserving and mixing, then $\widehat{T}$ has exactly one eigenvalue on the unit circle, equal to one, and this eigenvalue is simple. (Be careful not to confuse $L^1$-eigenvalues with $L^2$-eigenvalues.)

Further reading

1. J. Aaronson: An introduction to infinite ergodic theory, Math. Surv.
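A closing illustration of Proposition 1.3(3), mine rather than the notes': for the doubling map and the density $f(x) = x + \tfrac12$, induction with the explicit formula gives $\widehat{T}^n f(y) = 1 + (y - \tfrac12)/2^n$, so $\|\widehat{T}^n f - 1\|_1 = 2^{-n}/4$ and correlations against $f$ decay exponentially. The sketch below confirms the halving numerically.

```python
import numpy as np

# Sketch (my illustration): iterate the doubling-map transfer operator
#   (T-hat f)(y) = (f(y/2) + f((y+1)/2)) / 2
# on f(x) = x + 1/2 and watch ||T-hat^n f - 1||_1 halve each step, the
# decay-of-correlations rate promised by Proposition 1.3(3).
def transfer(f):
    return lambda y: 0.5 * (f(y / 2) + f((y + 1) / 2))

f = lambda x: x + 0.5                  # probability density on [0,1]
y = np.linspace(0.0, 1.0, 100001)      # grid for approximating the L^1 norm

g, norms = f, []
for _ in range(8):
    g = transfer(g)                    # g = T-hat^n f after n iterations
    norms.append(np.mean(np.abs(g(y) - 1.0)))

ratios = [b / a for a, b in zip(norms, norms[1:])]
print(norms[0], ratios)                # norms[0] ~ 1/8, ratios ~ 1/2
```

Note that evaluating the $n$-th iterate this way costs $2^n$ function calls per point, which is fine for a demonstration but not how one would compute $\widehat{T}^n$ in practice.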