Invariant Measures: Ergodicity and Extremality


PART 7: INVARIANT MEASURES: ERGODICITY AND EXTREMALITY

We discuss the invariant measures of zero-range processes. Since the set of invariant measures is convex, to determine "all" of them one tries to identify the extreme points of this set. When an invariant measure is known to be an extreme point, the process run under it has interesting ergodic properties. Much is known in this subject, but several questions remain open.

1. Invariant measures for zero-range processes

We say that a probability measure $\mu$ on $\Omega$ is an invariant measure if
\[ \int P_t f \, d\mu = \int f \, d\mu \]
for all bounded Lipschitz functions $f \in L$. Let $\mathcal{I}$ denote the set of invariant measures. To simplify the discussion, we will assume that $p$ is translation-invariant; a rich theory also exists when $p$ is not translation-invariant, as remarked upon in the Notes section.

Recall the discussion of invariant measures for the zero-range process on a torus $\mathbb{T}_N^d$. We extend these notions and define the marginal
\[ \bar\mu_\alpha(k) = \begin{cases} \dfrac{1}{Z(\alpha)} \dfrac{\alpha^k}{g(k)!} & \text{for } k \ge 1, \\[2mm] \dfrac{1}{Z(\alpha)} & \text{for } k = 0, \end{cases} \]
for $0 \le \alpha < \liminf_{k\to\infty} g(k)$, where $g(k)! := g(1)\cdots g(k)$. Correspondingly, define
\[ \bar\nu_\alpha = \prod_{x \in \mathbb{Z}^d} \bar\mu_\alpha. \]
As before, $\rho = \rho(\alpha) = E_{\bar\mu_\alpha}[\eta(0)]$ is a strictly increasing function of $\alpha$, and hence its inverse exists. We then define
\[ \nu_\rho = \bar\nu_{\alpha(\rho)} \]
for $\rho < \rho^* := \lim_{\alpha \uparrow \liminf g(k)} \rho(\alpha)$. Our first result is that these measures are invariant.

Theorem 1.1. For $\rho < \rho^*$, we have $\nu_\rho \in \mathcal{I}$. Moreover, these measures fully charge $\Omega^0$, that is,
\[ \nu_\rho(\Omega^0) = 1. \tag{1.1} \]

This is seen by integrating $\|\eta\|$:
\[ E_{\nu_\rho} \|\eta\| = \sum_{x \in \mathbb{Z}^d} E_{\nu_\rho}[\eta(x)] \, \beta(x) = \rho \sum_{x \in \mathbb{Z}^d} \beta(x), \]
and noting that
\[ \sum_{x \in \mathbb{Z}^d} \beta(x) = \sum_{x \in \mathbb{Z}^d} \sum_{n \ge 0} \frac{p^{(n)}(x, 0)}{2^n} = 2, \tag{1.2} \]
given $p^{(n)}(x, 0) = p^{(n)}(0, -x)$ by the translation-invariance of $p$. Hence $\|\eta\| < \infty$ a.s. under $\nu_\rho$.

2. Proof of Theorem 1.1

We first give a generator characterization of invariance of a measure.

Proposition 2.1. Let $\mu$ be a probability measure on $\Omega$ such that $E_\mu[\|\eta\|] < \infty$. Then the following are equivalent:
(1) $\int Lf \, d\mu = 0$ for all $f \in L$;
(2) $\mu \in \mathcal{I}$.
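The grand-canonical marginals $\bar\mu_\alpha$ and the density function $\rho(\alpha)$ are easy to tabulate numerically. The following sketch is illustrative rather than from the notes: the function names are mine, and the infinite series are truncated at a finite `kmax`, which is harmless for $\alpha$ well below $\liminf g(k)$. For $g(k) = k$ one recovers the Poisson($\alpha$) marginal with $\rho(\alpha) = \alpha$; for $g \equiv 1$ the geometric marginal with $\rho(\alpha) = \alpha/(1-\alpha)$.

```python
def zbar(alpha, g, kmax=200):
    """Normalization Z(alpha) = 1 + sum_{k>=1} alpha^k / (g(1)...g(k))."""
    total, prod = 1.0, 1.0
    for k in range(1, kmax + 1):
        prod *= g(k)                    # running product g(1)...g(k)
        total += alpha**k / prod
    return total

def marginal(alpha, g, kmax=200):
    """The probabilities mu_alpha(k), k = 0, ..., kmax."""
    Z = zbar(alpha, g, kmax)
    probs, prod = [1.0 / Z], 1.0
    for k in range(1, kmax + 1):
        prod *= g(k)
        probs.append(alpha**k / (prod * Z))
    return probs

def density(alpha, g, kmax=200):
    """rho(alpha) = E[eta(0)] under mu_alpha."""
    return sum(k * p for k, p in enumerate(marginal(alpha, g, kmax)))
```

Plotting `density` against `alpha` exhibits the strict monotonicity used to define the inverse $\alpha(\rho)$.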
In what follows we will only use that "(1) implies (2)", and so we prove only this part of the result.

Proof. For $f \in L$ and $\eta \in \Omega^0$, we have
\[ |L P_s f(\eta)| \le a_0 \sum_{x,y \in \mathbb{Z}^d} \eta(x) \, p(x,y) \, \big| P_s f(\eta^{x,y}) - P_s f(\eta) \big| \le 3 a_0 c(f) e^{4 a_0 s} \|\eta\|. \]
The inequalities use the Lipschitz bound on $g$, and that $c(P_s f) \le 3 e^{4 a_0 s} c(f)$ from Theorem 2.4 in Part 6. When (1) holds, for $f \in L$ which is bounded,
\[ \int \big[ P_t f - f \big] \, d\mu = \int \Big[ \int_0^t L P_s f \, ds \Big] \, d\mu. \]
Since $P_s f \in L$, and by the bound above, we may interchange the integrals by Fubini's theorem. Then, by (1),
\[ \int \Big[ \int_0^t L P_s f \, ds \Big] \, d\mu = \int_0^t \Big[ \int L P_s f \, d\mu \Big] \, ds = 0, \]
and so we may conclude (2).

Exercise 2.2. To prove the converse, "(2) implies (1)", we refer the reader to [1, Lemma 2.9].

Lemma 2.3 (Finite set invariance). Suppose that $p$ is a transition probability on $A_n = \{x : |x_i| \le n, \ 1 \le i \le d\}$ which is doubly stochastic: $\sum_{x \in A_n} p(x,y) = 1$ for all $y \in A_n$. Then $\nu_\rho$ is invariant for the zero-range process on $A_n$.

Proof. We need only show that $E_{\nu_\rho}[Lf(\eta)] = 0$ for all functions $f$. Recall
\[ Lf(\eta) = \sum_{x,y \in A_n} g(\eta(x)) \, p(x,y) \, \big[ f(\eta^{x,y}) - f(\eta) \big]. \]
Since
\[ E_{\nu_\rho}\big[ g(\eta(x)) f(\eta^{x,y}) \big] = \alpha(\rho) \, E_{\nu_\rho}\big[ f(\eta + \delta_y) \big] = E_{\nu_\rho}\big[ g(\eta(y)) f(\eta) \big], \]
we have that
\[ \sum_{x,y \in A_n} E_{\nu_\rho}\big[ g(\eta(x)) \, p(x,y) \, f(\eta^{x,y}) \big] = \sum_{x,y \in A_n} p(x,y) \, E_{\nu_\rho}\big[ g(\eta(y)) f(\eta) \big] = \sum_{y \in A_n} E_{\nu_\rho}\big[ g(\eta(y)) f(\eta) \big], \]
using in the last step that $p$ is doubly stochastic. On the other hand, clearly,
\[ \sum_{x,y \in A_n} p(x,y) \, E_{\nu_\rho}\big[ g(\eta(x)) f(\eta) \big] = \sum_{x \in A_n} E_{\nu_\rho}\big[ g(\eta(x)) f(\eta) \big]. \]
Hence, combining these identities, $E_{\nu_\rho}[Lf] = 0$.

Proof of Theorem 1.1. For a transition probability $p$ on $\mathbb{Z}^d$, define the `truncation' (different from that in the previous Part 6)
\[ p_n(x,y) = \begin{cases} 1 & \text{if } x = y \notin A_n, \\ p(x,y) + Q_n^{-1} \Big( \sum_{z \notin A_n} p(x,z) \Big) \Big( \sum_{z \notin A_n} p(z,y) \Big) & \text{if } x, y \in A_n, \\ 0 & \text{otherwise}, \end{cases} \]
where
\[ Q_n = \sum_{z \in A_n, \, y \notin A_n} p(z,y) = |A_n| - \sum_{z, y \in A_n} p(z,y) = \sum_{y \in A_n, \, z \notin A_n} p(z,y). \]
We have that (A) $p_n$ is a transition probability: $\sum_y p_n(x,y) = 1$ for all $x \in \mathbb{Z}^d$. If $x \notin A_n$, this is trivial. If $x \in A_n$, noting that $Q_n = \sum_{z \notin A_n, \, y \in A_n} p(z,y)$, we have $\sum_y p_n(x,y) = \sum_{y \in A_n} p_n(x,y) = 1$. Moreover, $p_n$ has no transitions from $A_n$ to its complement.
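The truncation construction is concrete enough to check numerically. The following is a minimal sketch, with names of my own choosing (`truncate`, `P_inside`) and a nearest-neighbour walk on $\mathbb{Z}$ as the illustrative $p$: the rank-one correction redistributes the mass each site would send outside $A_n$ onto the mass $A_n$ would receive from outside, and properties (A) and (B) say the result has unit row and column sums.

```python
import numpy as np

def truncate(P_inside):
    """Given P_inside[i, j] = p(x_i, x_j) for the sites of A_n, with p
    doubly stochastic on the whole lattice, return the truncated kernel
    p_n restricted to A_n."""
    out = 1.0 - P_inside.sum(axis=1)   # mass each x in A_n sends outside A_n
    inc = 1.0 - P_inside.sum(axis=0)   # mass each y in A_n receives from outside
    Q = out.sum()                      # Q_n; equals inc.sum() by double stochasticity
    return P_inside + np.outer(out, inc) / Q

# Nearest-neighbour walk on Z, restricted to A_n = {-n, ..., n}.
n = 4
sites = range(-n, n + 1)
P = np.array([[0.5 if abs(x - y) == 1 else 0.0 for y in sites] for x in sites])
Pn = truncate(P)
```

Here the only leaked mass comes from the two boundary sites $\pm n$, and the correction routes it back into $A_n$ while keeping both marginals stochastic.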
In addition, (B) $\sum_x p_n(x,y) = 1$ for all $y \in \mathbb{Z}^d$. If $y \notin A_n$, the claim is trivial. If $y \in A_n$, write
\[ \sum_x p_n(x,y) = \sum_{x \in A_n} p(x,y) + Q_n^{-1} \sum_{x \in A_n, \, z \notin A_n} p(x,z) \sum_{z \notin A_n} p(z,y) = \sum_x p(x,y). \]
The last quantity equals $1$ since $p$ is doubly stochastic, which is the case when $p$ is translation-invariant.

Also, (C) $p_n(x,y) \to p(x,y)$ for all $x, y \in \mathbb{Z}^d$: eventually $x, y \in A_n$, and the claim follows since $Q_n^{-1} \sum_{z \notin A_n} p(x,z) \le 1$ while $\sum_{z \notin A_n} p(z,y) \to 0$.

From Lemma 2.3, since nothing moves off $A_n$, we have $E_{\nu_\rho}[L_n f] = 0$ for $f \in L$, where $L_n$ is the generator of the zero-range process on $A_n$ with transition probability $p_n$. Therefore, to show $E_{\nu_\rho}[Lf] = 0$, it is enough to show
\[ E_{\nu_\rho} |L_n f - L f| \to 0. \]
It is straightforward to write
\[ E_{\nu_\rho} |L_n f - L f| \le a_0 c(f) \, E_{\nu_\rho} \sum_{x \in \mathbb{Z}^d} \eta(x) \sum_{y \ne x} |p(x,y) - p_n(x,y)| \, \big[ \beta(x) + \beta(y) \big] =: I_1 + I_2, \]
corresponding to the two factors $\beta(x)$ and $\beta(y)$.

We bound $I_1$ by dominated convergence. Up to the constant factor $a_0 c(f)$, $I_1$ equals
\[ \rho \sum_x \beta(x) \sum_{y \ne x} |p(x,y) - p_n(x,y)|. \]
For fixed $x$, eventually $x \in A_n$ for large $n$, and so
\[ \sum_y |p(x,y) - p_n(x,y)| \le \sum_{y \in A_n} |p(x,y) - p_n(x,y)| + \sum_{y \notin A_n} p(x,y) = 2 \sum_{y \notin A_n} p(x,y), \]
using the form of $Q_n$. Hence, since $\sum_x \beta(x) < \infty$ and the last display vanishes as $n \uparrow \infty$ for each $x$, $I_1$ vanishes by dominated convergence.

We now bound $I_2$ in a similar manner. Write $I_2$, again up to a constant, as
\[ \rho \sum_x \sum_{y \ne x} \beta(y) \, |p(x,y) - p_n(x,y)| = \rho \sum_y \beta(y) \sum_{x \ne y} |p(x,y) - p_n(x,y)|. \]
For each $y$, eventually $y \in A_n$. Then, using the form of $Q_n$, we have
\[ \sum_{x \ne y} |p(x,y) - p_n(x,y)| \le 2 \sum_{x \notin A_n} p(x,y), \]
which vanishes as $n \uparrow \infty$ since $p$ is doubly stochastic, so that $\sum_x p(x,y) = 1$. Hence, as $\sum_y \beta(y) < \infty$, $I_2$ vanishes by dominated convergence.

3. Extremality, Harmonicity and Ergodicity

First, what does it mean to be extremal? Let $\Sigma$ be a space with Borel sets $\mathcal{B}$. Let $Q$ be an invariant probability measure on $\Sigma$ for the Markov process $\eta(t)$. Let $P_Q$, as before, be the probability on path space with initial distribution $Q$, and let $T_t$ be the process semigroup.

Definition 3.1. We say $Q$ is an extremal invariant measure if the following property holds.
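The bound $\sum_y |p(x,y) - p_n(x,y)| \le 2\sum_{y \notin A_n} p(x,y)$ can be watched vanish numerically. A sketch under stated assumptions: the walk below, with $p(x, x \pm k) = 2^{-(k+1)}$, is an invented long-range doubly stochastic example (for a nearest-neighbour walk the error is already zero once $x$ is interior), and `truncation_error` is my name for the quantity in the display.

```python
import numpy as np

def p(x, y):
    """A translation-invariant walk on Z with geometric tails:
    p(x, x +/- k) = 2^{-(k+1)} for k >= 1."""
    k = abs(x - y)
    return 0.0 if k == 0 else 2.0 ** (-(k + 1))

def truncation_error(n, x=0):
    """sum over y in A_n of |p(x,y) - p_n(x,y)|, plus the tail
    sum_{y not in A_n} p(x,y) where p_n vanishes."""
    sites = list(range(-n, n + 1))
    P = np.array([[p(a, b) for b in sites] for a in sites])
    out = 1.0 - P.sum(axis=1)          # mass leaking out of A_n from each row
    inc = 1.0 - P.sum(axis=0)          # mass entering A_n at each column
    Pn = P + np.outer(out, inc) / out.sum()
    i = sites.index(x)
    return np.abs(P[i] - Pn[i]).sum() + out[i]
```

For this walk the error works out to $2^{1-n}$ for $x = 0$, matching the proof's bound $2\sum_{y \notin A_n} p(x,y)$.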
Whenever $Q = \lambda Q_1 + (1 - \lambda) Q_2$ for some $0 < \lambda < 1$ and invariant probability measures $Q_1$ and $Q_2$, then $Q = Q_1 = Q_2$.

We note that if the process has only one invariant measure $Q$, then of course it is extremal. The import of the definition comes when the process is reducible in some way. For instance, for finite-state Markov chains with two irreducible components $C_1$ and $C_2$, the extreme invariant measures are exactly the unique invariant measures supported on $C_1$ and $C_2$ respectively.

One might ask why it is useful to know when an invariant measure is extremal. It turns out there is an interesting connection with harmonic functions and shift-ergodicity.

Lemma 3.2. The space $L^2(Q)$ can be decomposed into the direct sum of
\[ H_0 = \{ g \in L^2(Q) : T_t g = g, \ t \ge 0 \} \]
and $H_0^c$, the closed linear span of $\{ T_t h - h : h \in L^2(Q), \ t \ge 0 \}$.

Proof. Let $f$ be perpendicular to all functions in $H_0^c$. Then $\langle f, T_t h - h \rangle = 0$ for all $h$ and $t \ge 0$. This means $T_t^* f = f$, and hence $T_t f = f$, for all $t \ge 0$. Therefore $f \in H_0$.

Since $T_t$ is a contraction on $L^2(Q)$, we have the following ergodic theorem due to von Neumann.

Proposition 3.3. For $f \in L^2(Q)$,
\[ \frac{1}{t} \int_0^t T_s f \, ds \longrightarrow \hat f \quad \text{in } L^2(Q), \]
where $\hat f \in L^2(Q)$ is the projection of $f$ onto the subspace $H_0$.
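Proposition 3.3 can be visualised in a toy discrete-time analogue. This sketch substitutes Cesàro averages $\frac{1}{N}\sum_{k<N} T^k f$ of a 3-state chain for the continuous-time average, and the chain $T$ and observable $f$ are invented for illustration: for an irreducible doubly stochastic chain, $Q$ is uniform and the invariant functions $H_0$ are the constants, so the averages should converge to the constant $\operatorname{mean}(f)$.

```python
import numpy as np

# An irreducible, doubly stochastic 3-state chain; its invariant law Q is uniform.
T = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5]])
f = np.array([1.0, 4.0, -2.0])

# Cesaro average (1/N) sum_{k<N} T^k f, the discrete analogue of (1/t) int_0^t T_s f ds.
N = 2000
avg = np.zeros(3)
Tk_f = f.copy()
for _ in range(N):
    avg += Tk_f
    Tk_f = T @ Tk_f          # advance the semigroup one step
avg /= N
# avg is now close to the projection of f onto H_0: the constant function mean(f).
```

The residual $f - \operatorname{mean}(f)$ lies in $H_0^c$, and its Cesàro average is what the averaging kills, exactly as in Lemma 3.2.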