Large Deviations for a Matching Problem Related to the ∞-Wasserstein Distance


José Trashorras. Large deviations for a matching problem related to the ∞-Wasserstein distance. ALEA: Latin American Journal of Probability and Mathematical Statistics, Instituto Nacional de Matemática Pura e Aplicada, 2018, 15, pp. 247-278. HAL Id: hal-00641378v2, https://hal.archives-ouvertes.fr/hal-00641378v2, submitted on 15 Feb 2017.

José Trashorras*

February 15, 2017

Abstract

Let (E, d) be a compact metric space, X = (X_1, ..., X_n, ...) and Y = (Y_1, ..., Y_n, ...) two independent sequences of independent E-valued random variables, and (L_n^X)_{n≥1} and (L_n^Y)_{n≥1} the associated sequences of empirical measures. We establish a large deviation principle for (W_∞(L_n^X, L_n^Y))_{n≥1}, where W_∞ is the ∞-Wasserstein distance,

    W_∞(L_n^X, L_n^Y) = min_{σ ∈ S_n} max_{1≤i≤n} d(X_i, Y_{σ(i)}),

where S_n stands for the set of permutations of {1, ..., n}.
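The min-max matching cost in the abstract can be evaluated by brute force over all n! permutations for tiny samples. A minimal sketch, assuming E = [0, 1] with the Euclidean metric; the sample points are illustrative:

```python
from itertools import permutations

def w_inf(xs, ys, d):
    """Brute-force W-infinity between two n-point empirical measures:
    min over permutations sigma of max_i d(x_i, y_sigma(i))."""
    n = len(xs)
    assert len(ys) == n
    return min(
        max(d(xs[i], ys[sigma[i]]) for i in range(n))
        for sigma in permutations(range(n))
    )

# Illustrative example on E = [0, 1] with the Euclidean metric.
d = lambda a, b: abs(a - b)
xs = [0.0, 0.5, 1.0]
ys = [0.1, 0.45, 0.9]
print(w_inf(xs, ys, d))  # identity matching is optimal here: 0.1
```

This is exponential in n and only meant to make the definition concrete; the paper's interest is in the large-n asymptotics, not in computing the cost.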
* Université Paris-Dauphine, Ceremade, Place du maréchal de Lattre de Tassigny, 75775 Paris Cedex 16, France. Email: [email protected]

1 Introduction

We say that a sequence of Borel probability measures (P^n)_{n≥1} on a topological space Y obeys a Large Deviation Principle (hereafter abbreviated LDP) with rate function I if I is a non-negative, lower semi-continuous function defined on Y such that

    - inf_{y ∈ A°} I(y) ≤ liminf_{n→∞} (1/n) log P^n(A) ≤ limsup_{n→∞} (1/n) log P^n(A) ≤ - inf_{y ∈ Ā} I(y)

for any measurable set A ⊂ Y, whose interior is denoted by A° and closure by Ā. If the level sets {y : I(y) ≤ α} are compact for every α < ∞, I is called a good rate function. With a slight abuse of language we say that a sequence of random variables obeys an LDP when the sequence of measures induced by these random variables obeys an LDP. For a background on the theory of large deviations see Dembo and Zeitouni [4] and references therein.

Let (E, d) be a metric space. There has been a lot of interest for years in considering the space M^1(E) of Borel probability measures on E endowed with the so-called p-Wasserstein distances

    W_p(ν, γ) = inf_{Q ∈ C(ν,γ)} ( ∫_{E×E} d(x, y)^p Q(dx, dy) )^{1/p}

where p ∈ [1, ∞) and C(ν, γ) stands for the set of Borel probability measures Q on E^2 with first marginal Q_1 = ν and second marginal Q_2 = γ, see Chapter 6 in [20] for a broad review. However, the ∞-Wasserstein distance, equivalently defined by either

    W_∞(ν, γ) = lim_{p→∞} W_p(ν, γ)

or

    W_∞(ν, γ) = inf_{Q ∈ C(ν,γ)} sup { u : u ∈ S(Q ∘ d^{-1}) }    (1)

where S(Q ∘ d^{-1}) stands for the support of the probability measure Q ∘ d^{-1}, has attracted much less attention so far.[1]

Our framework is the following: we are given a compact metric space (E, d). Without loss of generality we can assume that sup_{x,y ∈ E} d(x, y) = 1. Let (X_n)_{n≥1} and (Y_n)_{n≥1} be two independent sequences of E-valued independent random variables defined on the same probability space (Ω, A, P). We assume that all the X_i's (resp.
Y_i's) have the same distribution µ^1 (resp. µ^2). For every n ≥ 1 we consider the empirical measures

    L_n^X = (1/n) ∑_{i=1}^n δ_{X_i}  and  L_n^Y = (1/n) ∑_{i=1}^n δ_{Y_i}.

In [7] Ganesh and O'Connell conjecture that (W_∞(L_n^X, L_n^Y))_{n≥1} obeys an LDP with rate function

    I_∞(x) = inf_{ν^1, ν^2 ∈ M^1(E), W_∞(ν^1, ν^2) = x} { H(ν^1 | µ^1) + H(ν^2 | µ^2) }

where for any two ν, µ ∈ M^1(E)

    H(ν | µ) = ∫_E log(dν/dµ) dν  if ν ≪ µ,  and  +∞ otherwise.

If instead of I_∞ one considers its lower semi-continuous regularization J_∞, which is defined by

    J_∞(x) = sup_{δ>0} inf_{y ∈ B(x,δ)} I_∞(y),

see e.g. Chapter 1 in [15], then

Theorem 1.1 The sequence (W_∞(L_n^X, L_n^Y))_{n≥1} satisfies an LDP on [0, 1] with good rate function J_∞(x).

In Section 3.1 we show that if e.g. E = [0, 1] is endowed with the usual Euclidean distance and µ^1 = µ^2 = µ admits f(x) = (3/2) 1_{[0, 1/3]}(x) + (3/2) 1_{[2/3, 1]}(x) as a density w.r.t. the Lebesgue measure then, due to the disconnectedness of the support of µ, I_∞ fails to be lower semi-continuous. Hence, I_∞ and J_∞ do not always coincide. This example illustrates a general feature of the problem considered here: the connectedness properties of the support of the reference measures are the key ingredient in the behavior of (W_∞(L_n^X, L_n^Y))_{n≥1}. We will say more about this in Proposition 1.3 below.

Since for every n ≥ 1 every Q ∈ C(L_n^X, L_n^Y) can be represented as a bi-stochastic matrix and since, according to the Birkhoff-von Neumann theorem, every bi-stochastic matrix is a convex combination of permutation matrices (see e.g. Theorem 5.5.1 in [17]), we have

    W_∞(L_n^X, L_n^Y) = min_{σ ∈ S_n} max_{1≤i≤n} d(X_i, Y_{σ(i)})    (2)

where S_n stands for the set of permutations of {1, ..., n}.

[1] Let us recall that the support S(P) of a Borel probability measure P on a metric space (E, d) is defined by x ∈ S(P) if and only if for every ε > 0, P(B(x, ε)) > 0, where B(x, ε) stands for the open ball centered at x with radius ε. If E is separable, in particular if E is compact, S(P) is closed, see e.g. Theorem 2.1 in [14].
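The rate function above is built from relative entropies. For measures with finite support, H(ν|µ) is a short computation; a minimal sketch, assuming measures are represented as dictionaries mapping atoms to probabilities, returning +∞ when ν is not absolutely continuous w.r.t. µ:

```python
import math

def rel_entropy(nu, mu):
    """H(nu | mu) = sum_x nu(x) log(nu(x)/mu(x)) if nu << mu, else +infinity.
    nu and mu are dicts mapping atoms to probabilities."""
    h = 0.0
    for x, p in nu.items():
        if p == 0.0:
            continue  # 0 log 0 = 0 by convention
        q = mu.get(x, 0.0)
        if q == 0.0:
            return math.inf  # nu not absolutely continuous w.r.t. mu
        h += p * math.log(p / q)
    return h

uniform = {"a": 0.5, "b": 0.5}
print(rel_entropy(uniform, uniform))     # 0.0
print(rel_entropy({"a": 1.0}, uniform))  # log 2
print(rel_entropy(uniform, {"a": 1.0}))  # inf
```

The infinite branch is exactly why I_∞ can fail to be lower semi-continuous: nearby values of x may force couplings whose marginals are singular w.r.t. the reference measures.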
Hence, computing W_∞(L_n^X, L_n^Y) is nothing but solving a minimax matching problem, which is a fundamental combinatorial question. Equality (2) can be rephrased as

    W_∞(L_n^X, L_n^Y) = d_H(S(L_n^X), S(L_n^Y))    (3)

where S(L_n^X) = {X_1, ..., X_n}, S(L_n^Y) = {Y_1, ..., Y_n} and d_H is the so-called Hausdorff distance. Let us recall that the Hausdorff distance is defined on the set of non-empty closed subsets of E by

    d_H(A, B) = max { sup_{x ∈ A} inf_{y ∈ B} d(x, y), sup_{y ∈ B} inf_{x ∈ A} d(x, y) },

see e.g. Section 6.2.2 in [3]. It is an essential tool in many applications such as image processing [8], object matching [18], face detection [10] and evolutionary optimization [16], to name just a few.

Now that we have stated our main result, let us briefly review some facts about W_∞. The reference paper on the subject is [1] by Champion, De Pascale and Juutinen. They consider measures with support on compact subsets of R^d (d ≥ 1). Some of their results have been generalized from R^d to the setting of Polish spaces and general cost functions by Jylhä in [9]. We recall them for later use.

Lemma 1.1 (Proposition 2.1 in [1] and Theorem 2.6 in [9]) For any two ν, γ ∈ M^1(E) there exists at least one Q ∈ C(ν, γ) such that

    W_∞(ν, γ) = sup { u : u ∈ S(Q ∘ d^{-1}) }.

They further establish the existence of nice solutions to the problem (1), which they call infinitely cyclically monotone couplings.

Definition 1.1 A probability measure P ∈ M^1(E^2) is called infinitely cyclically monotone if and only if for every integer n ≥ 2, every (x_1, y_1), ..., (x_n, y_n) ∈ S(P) and every σ ∈ S_n we have

    max_{1≤i≤n} d(x_i, y_i) ≤ max_{1≤i≤n} d(x_i, y_{σ(i)}).    (4)

Lemma 1.2 (Theorem 3.2 and Theorem 3.4 in [1] and Theorem 2.16 in [9]) For any two γ, ν ∈ M^1(E) there exists at least one infinitely cyclically monotone P ∈ C(γ, ν) such that

    W_∞(γ, ν) = sup { u : u ∈ S(P ∘ d^{-1}) }.

In order to check that P ∈ M^1(E^2) is infinitely cyclically monotone we only need to know its support.
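Equality (3) reduces the matching problem to the Hausdorff distance between the two finite supports, which is cheap to compute directly. A minimal sketch for finite point sets; the metric and the points are illustrative assumptions:

```python
def hausdorff(A, B, d):
    """Hausdorff distance between two non-empty finite sets:
    the max of the two directed distances sup_x inf_y d(x, y)."""
    directed = lambda S, T: max(min(d(s, t) for t in T) for s in S)
    return max(directed(A, B), directed(B, A))

# Illustrative example on E = [0, 1]; by equality (3) this value
# coincides with the min-max matching cost for the same samples.
d = lambda a, b: abs(a - b)
print(hausdorff([0.0, 0.5, 1.0], [0.1, 0.45, 0.9], d))  # 0.1
```

Unlike the permutation formulation, this takes O(n^2) metric evaluations, which is one reason equality (3) is useful in applications.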
The way mass is spread over S(P) does not matter: this property is concerned with subsets of E^2 rather than with fully specified probability measures. This is the reason why we will also call infinitely cyclically monotone the subsets of E^2 that satisfy (4). Infinitely cyclically monotone couplings are nice solutions to the problem (1) in the sense that

Lemma 1.3 (Theorem 3.4 in [1] and Theorem 2.17 in [9]) Any infinitely cyclically monotone P ∈ M^1(E^2) satisfies

    W_∞(P_1, P_2) = sup { u : u ∈ S(P ∘ d^{-1}) }.

Indeed, once an infinitely cyclically monotone A ⊂ E^2 is given, no matter how one spreads mass over A, provided the couple that maximizes (x, y) ↦ d(x, y) over A is covered, the marginals of the resulting probability measure on E^2 will always be at the same W_∞ distance.
Recommended publications
  • Contraction Principle for Trajectories of Random Walks and Cramér's Theorem for Kernel-Weighted Sums (arXiv:1909.00374)
    CONTRACTION PRINCIPLE FOR TRAJECTORIES OF RANDOM WALKS AND CRAMÉR'S THEOREM FOR KERNEL-WEIGHTED SUMS. VLADISLAV VYSOTSKY. Abstract. In 2013 A.A. Borovkov and A.A. Mogulskii proved a weaker-than-standard "metric" large deviations principle (LDP) for trajectories of random walks in R^d whose increments have the Laplace transform finite in a neighbourhood of zero. We prove that general metric LDPs are preserved under uniformly continuous mappings. This allows us to transform the result of Borovkov and Mogulskii into standard LDPs. We also give an explicit integral representation of the rate function they found. As an application, we extend the classical Cramér theorem by proving an LDP for kernel-weighted sums of i.i.d. random vectors in R^d. 1. Introduction. The study of large deviations of trajectories of random walks was initiated by A.A. Borovkov in the 1960s. In 1976 A.A. Mogulskii [24] proved a large deviations result for the trajectories of a multidimensional random walk under the assumption that the Laplace transform of its increments is finite. In [24] Mogulskii also studied the large deviations under the weaker Cramér moment assumption, i.e. when the Laplace transform is finite in a neighbourhood of zero, but these results appear to be of significantly limited use. Further progress was due to the concept of metric large deviations principles (LDPs, in short) on general metric spaces introduced by Borovkov and Mogulskii in [5] in 2010. The upper bound in such metric LDPs is worse than the conventional one: the infimum of the rate function is taken over shrinking ε-neighbourhoods of a set rather than over its closure as in standard LDPs; compare definitions (1) and (2) below.
  • Long Term Risk: a Martingale Approach
    Long Term Risk: A Martingale Approach. Likuan Qin and Vadim Linetsky, Department of Industrial Engineering and Management Sciences, McCormick School of Engineering and Applied Sciences, Northwestern University. Abstract. This paper extends the long-term factorization of the pricing kernel due to Alvarez and Jermann (2005) in discrete-time ergodic environments and Hansen and Scheinkman (2009) in continuous ergodic Markovian environments to general semimartingale environments, without assuming the Markov property. An explicit and easy-to-verify sufficient condition is given that guarantees convergence in Emery's semimartingale topology of the trading strategies that invest in T-maturity zero-coupon bonds to the long bond, and convergence in total variation of T-maturity forward measures to the long forward measure. As applications, we explicitly construct long-term factorizations in generally non-Markovian Heath-Jarrow-Morton (1992) models evolving the forward curve in a suitable Hilbert space and in the non-Markovian model of social discount rates of Brody and Hughston (2013). As a further application, we extend the Hansen and Jagannathan (1991), Alvarez and Jermann (2005) and Bakshi and Chabi-Yo (2012) bounds to general semimartingale environments. When Markovian and ergodicity assumptions are added to our framework, we recover the long-term factorization of Hansen and Scheinkman (2009) and explicitly identify their distorted probability measure with the long forward measure. Finally, we give an economic interpretation of the recovery theorem of Ross (2013) in non-Markovian economies as a structural restriction on the pricing kernel leading to the growth optimality of the long bond and identification of the physical measure with the long forward measure.
  • On the Form of the Large Deviation Rate Function for the Empirical Measures
    Bernoulli 20(4), 2014, 1765-1801. DOI: 10.3150/13-BEJ540. On the form of the large deviation rate function for the empirical measures of weakly interacting systems. MARKUS FISCHER, Department of Mathematics, University of Padua, via Trieste 63, 35121 Padova, Italy. E-mail: fi[email protected] A basic result of large deviations theory is Sanov's theorem, which states that the sequence of empirical measures of independent and identically distributed samples satisfies the large deviation principle with rate function given by relative entropy with respect to the common distribution. Large deviation principles for the empirical measures are also known to hold for broad classes of weakly interacting systems. When the interaction through the empirical measure corresponds to an absolutely continuous change of measure, the rate function can be expressed as relative entropy of a distribution with respect to the law of the McKean-Vlasov limit with measure-variable frozen at that distribution. We discuss situations, beyond that of tilted distributions, in which a large deviation principle holds with rate function in relative entropy form. Keywords: empirical measure; Laplace principle; large deviations; mean field interaction; particle system; relative entropy; Wiener measure. 1. Introduction. Weakly interacting systems are families of particle systems whose components, for each fixed number N of particles, are statistically indistinguishable and interact only through the empirical measure of the N-particle system. The study of weakly interacting systems originates in statistical mechanics and kinetic theory; in this context, they are often referred to as mean field systems. The joint law of the random variables describing the states of the N-particle system of a weakly interacting system is invariant under permutations of components, hence determined by the distribution of the associated empirical measure.
  • Large Deviation Theory
    Large Deviation Theory. J.M. Swart. October 19, 2016. Preface. The earliest origins of large deviation theory lie in the work of Boltzmann on entropy in the 1870s and Cramér's theorem from 1938 [Cra38]. A unifying mathematical formalism was only developed starting with Varadhan's definition of a 'large deviation principle' (LDP) in 1966 [Var66]. Basically, large deviation theory centers around the observation that suitable functions F of large numbers of i.i.d. random variables (X_1, ..., X_n) often have the property that

        P(F(X_1, ..., X_n) ∈ dx) ∼ e^{-s_n I(x)}  as n → ∞,    (LDP)

    where s_n are real constants such that lim_{n→∞} s_n = ∞ (in most cases simply s_n = n). In words, (LDP) says that the probability that F(X_1, ..., X_n) takes values near a point x decays exponentially fast, with speed s_n, and rate function I. Large deviation theory has two different aspects. On the one hand, there is the question of how to formalize the intuitive formula (LDP). This leads to the already mentioned definition of 'large deviation principles' and involves quite a bit of measure theory and real analysis. The most important basic results of the abstract theory were proved more or less between 1966 and 1991, when O'Brien and Vervaat [OV91] and Puhalskii [Puk91] proved that exponential tightness implies a subsequential LDP. The abstract theory of large deviation principles plays more or less the same role as measure theory in (usual) probability theory. On the other hand, there is a much richer and much more important side of large deviation theory, which tries to identify rate functions I for various functions F of independent random variables, and study their properties.
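The heuristic (LDP) can be checked exactly for fair coin flips, where the tail probability is an explicit binomial sum and the rate function is known in closed form; a numerical sketch (the choice x = 0.75 is illustrative):

```python
import math

def log_tail(n, x):
    """log P(S_n >= n*x) for S_n = number of heads in n fair coin flips."""
    k0 = math.ceil(n * x)
    tail = sum(math.comb(n, k) for k in range(k0, n + 1)) / 2 ** n
    return math.log(tail)

def rate(x):
    """Cramér rate function for Bernoulli(1/2), for x in (1/2, 1)."""
    return x * math.log(2 * x) + (1 - x) * math.log(2 * (1 - x))

x = 0.75
for n in (50, 200, 800):
    print(n, -log_tail(n, x) / n)  # decreases toward rate(x) ~ 0.1308
```

The slow O(log n / n) correction visible in the output is typical: the LDP pins down only the exponential scale, not the prefactor.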
  • Probability (Graduate Class) Lecture Notes
    Probability (graduate class) Lecture Notes. Tomasz Tkocz, Carnegie Mellon University; [email protected]. These lecture notes were written for the graduate course 21-721 Probability that I taught at Carnegie Mellon University in Spring 2020. Contents: 1 Probability space (definitions; basic examples; conditioning; exercises). 2 Random variables (definitions and basic properties; π-λ systems; properties of distribution functions; examples: discrete and continuous random variables; exercises). 3 Independence (definitions; product measures and independent random variables; examples; Borel-Cantelli lemmas; tail events and Kolmogorov's 0-1 law; exercises). 4 Expectation (definitions and basic properties; variance and covariance; independence again, via product measures; exercises). 5 More on random variables (important distributions; Gaussian vectors; sums of independent random variables; density; exercises). 6 Important inequalities and notions of convergence (basic probabilistic inequalities; Lp-spaces; notions of convergence; exercises). 7 Laws of large numbers (weak law of large numbers; strong law of large numbers; exercises). 8 Weak convergence (definition and equivalences ...
  • Basics of the Theory of Large Deviations
    Basics of the Theory of Large Deviations. Michael Scheutzow. February 1, 2018. Abstract. In this note, we collect basic results of the theory of large deviations. Missing proofs can be found in the monograph [1]. 1 Introduction. We start by recalling the following computation which was done in the course Wahrscheinlichkeitstheorie I (and which is also done in the course Versicherungsmathematik). Assume that X, X_1, X_2, ... are i.i.d. real-valued random variables on a probability space (Ω, F, P) and x ∈ R, λ > 0. Then, by Markov's inequality,

        P(∑_{i=1}^n X_i ≥ nx) ≤ exp(-λnx) (E e^{λX_1})^n = exp(-n(λx - Λ(λ))),

    where Λ(λ) := log E exp(λX). Defining I(x) := sup_{λ≥0} {λx - Λ(λ)}, we therefore get

        P(∑_{i=1}^n X_i ≥ nx) ≤ exp(-nI(x)),

    which often turns out to be a good bound. 2 Cramér's theorem for real-valued random variables. Definition 2.1. a) Let X be a real-valued random variable. The function Λ : R → (-∞, ∞] defined by Λ(λ) := log E e^{λX} is called the cumulant generating function or logarithmic moment generating function. Let D_Λ := {λ : Λ(λ) < ∞}. b) Λ* : R → [0, ∞] defined by Λ*(x) := sup_{λ∈R} {λx - Λ(λ)} is called the Fenchel-Legendre transform of Λ. Let D_{Λ*} := {x : Λ*(x) < ∞}. In the following we will often use the convenient abbreviation Λ*(F) := inf_{x∈F} Λ*(x) for a subset F ⊆ R (with the usual convention that the infimum of the empty set is +∞).
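The Fenchel-Legendre transform in Definition 2.1 can be approximated by maximizing over a grid of λ values. For a standard Gaussian, Λ(λ) = λ²/2 and Λ*(x) = x²/2, which the crude sketch below recovers; the grid bounds are an illustrative assumption:

```python
def legendre(cgf, x, lambdas):
    """Fenchel-Legendre transform sup_lambda (lambda*x - cgf(lambda)),
    approximated over a finite grid of lambda values."""
    return max(l * x - cgf(l) for l in lambdas)

cgf_gauss = lambda l: l * l / 2.0  # cumulant generating function of N(0, 1)
grid = [i / 1000.0 for i in range(-5000, 5001)]  # lambda in [-5, 5], step 1e-3

for x in (0.5, 1.0, 2.0):
    print(x, legendre(cgf_gauss, x, grid))  # each value is x**2 / 2
```

A grid works here because λ ↦ λx − Λ(λ) is concave; for heavy-tailed X the supremum can sit at the boundary of D_Λ and needs more care.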
  • Spatial Birth-Death Wireless Networks
    Spatial Birth-Death Wireless Networks. Abishek Sankararaman and François Baccelli. Abstract. We propose and study a novel continuous space-time model for wireless networks which takes into account the stochastic interactions in both space through interference and in time due to randomness in traffic. Our model consists of an interacting particle birth-death dynamics incorporating information-theoretic spectrum-sharing. Roughly speaking, particles (or more generally wireless links) arrive according to a Poisson point process on space-time, stay for a duration governed by the local configuration of points present, and then exit the network after completion of a file transfer. We analyze this particle dynamics to derive an explicit condition for time ergodicity (i.e. stability) which is tight. We also prove that when the dynamics is ergodic, the steady-state point process of links (or particles) exhibits a form of statistical clustering. Based on the clustering, we propose a conjecture which we leverage to derive approximations, bounds and asymptotics on performance characteristics such as delay and mean number of links per unit space in the stationary regime. The mathematical analysis is combined with discrete event simulation to study the performance of this type of networks. I. INTRODUCTION. We consider the problem of studying the spatial dynamics of Device-to-Device (D2D) or ad-hoc wireless networks. Such wireless networks have received a tremendous amount of attention, due on the one hand to their increasing ubiquity in modern technology and on the other hand to the mathematical challenges in their modeling and performance assessment. Wireless is a broadcast medium and hence the nodes sharing a common spectrum in space interact through the interference they cause to one another.
  • Large Deviations Theory Lecture Notes Master 2 in Fundamental Mathematics Université De Rennes 1
    Large Deviations Theory. Lecture notes, Master 2 in Fundamental Mathematics, Université de Rennes 1. Mathias Rousset, 2021. Documents to be downloaded: download the pedagogical package at http://people.irisa.fr/Mathias.Rousset/****.zip where **** is the code given in class. Main bibliography (large deviations): Dembo and Zeitouni [1998]: classic, main reference; general, complete, readable; proofs are sometimes intricate and scattered throughout the book. Touchette [2009]: readable, formal/expository introduction to the link with statistical mechanics. Rassoul-Agha and Seppäläinen [2015]: the first part is somehow similar to the content of this course. Freidlin et al. [2012]: a well-known classic, reference for small-noise large deviations of SDEs; the vocabulary differs from the modern customs. Feng and Kurtz [2015]: a rigorous, general treatment for Markov processes. General references: Bogachev [2007]: exhaustive on measure theory. Kechris [1994, 2012]: short and long on Descriptive Set Theory and advanced results on Polish spaces and related topics. Brézis [1987], Brezis [2010]: excellent master course on functional analysis. Billingsley [2013]: most classical reference for convergence of probability distributions. Notations: C_b(E): set of continuous and bounded functions on a topological space E. M_b(E): set of measurable and bounded functions on a measurable space E. P(E): set of probability measures on a measurable space E. M(E): set of real-valued (that is, finite and signed) measures on a measurable set E. i.i.d.: independent and identically distributed. µ(ϕ) = ∫ ϕ dµ if µ is a measure and ϕ a measurable function. X ∼ µ means that the random variable X is distributed according to the probability µ.
  • Lectures on the Large Deviation Principle
    Lectures on the Large Deviation Principle. Fraydoun Rezakhanlou, Department of Mathematics, UC Berkeley. March 25, 2017. This work is supported in part by NSF grant DMS-1407723. Chapter 1: Introduction. Chapter 2: A General Strategy. Chapter 3: Cramér and Sanov Large Deviation Principles. Chapter 4: Donsker-Varadhan Theory. Chapter 5: Large Deviation Principles for Markov Chains. Chapter 6: Wentzell-Freidlin Problem. Chapter 7: Stochastic Calculus and Martingale Problem. Chapter 8: Miscellaneous. Appendix A: Probability Measures on Polish Spaces. Appendix B: Conditional Measures. Appendix C: Ergodic Theorem. Appendix D: Minimax Principle. Appendix E: Optimal Transport Problem. 1 Introduction. Many questions in probability theory can be formulated as a law of large numbers (LLN). Roughly, a LLN describes the most frequently visited (or occurring) states in a large system. To go beyond the LLN, we either examine the states that deviate from the most visited states by a small amount, or those that deviate by a large amount. The former is the subject of the Central Limit Theorem (CLT). The latter may lead to a Large Deviation Principle (LDP) if the probability of visiting a non-typical state is exponentially small and we can come up with a precise formula for the exponential rate of convergence as the size of the system goes to infinity. In this introduction we attempt to address four basic questions: 1. What does an LDP mean? 2. What are some of our motivations to search for an LDP? 3. How ubiquitous is the LDP? 4. What are the potential applications? We use concrete examples to justify our answers to the above questions.
  • General Theory of Large Deviations
    Chapter 30. General Theory of Large Deviations. A family of random variables follows the large deviations principle if the probability of the variables falling into "bad" sets, representing large deviations from expectations, declines exponentially in some appropriate limit. Section 30.1 makes this precise, using some associated technical machinery, and explores a few consequences. The central one is Varadhan's Lemma, for the asymptotic evaluation of exponential integrals in infinite-dimensional spaces. Having found one family of random variables which satisfies the large deviations principle, many other, related families do too. Section 30.2 lays out some ways in which this can happen. As the great forensic statistician C. Chan once remarked, "Improbable events permit themselves the luxury of occurring" (reported in Biggers, 1928). Large deviations theory, as I have said, studies these little luxuries. 30.1 Large Deviation Principles: Main Definitions and Generalities. Some technicalities: Definition 397 (Level Sets) For any real-valued function f : Ξ → R, the level sets are the inverse images of intervals from -∞ to c inclusive, i.e., all sets of the form {x ∈ Ξ : f(x) ≤ c}. Definition 398 (Lower Semi-Continuity) A real-valued function f : Ξ → R is lower semi-continuous if x_n → x implies lim inf f(x_n) ≥ f(x). Lemma 399 A function is lower semi-continuous iff either of the following equivalent properties holds: (i) for all x ∈ Ξ, the infimum of f over increasingly small open balls centered at x approaches f(x): lim_{δ→0} inf_{y : d(y,x)<δ} f(y) = f(x) (30.1); (ii) f has closed level sets.
  • On the Large Deviation Rate Function for the Empirical Measures Of
    The Annals of Probability 2015, Vol. 43, No. 3, 1121-1156. DOI: 10.1214/13-AOP883. © Institute of Mathematical Statistics, 2015. ON THE LARGE DEVIATION RATE FUNCTION FOR THE EMPIRICAL MEASURES OF REVERSIBLE JUMP MARKOV PROCESSES. By Paul Dupuis and Yufei Liu, Brown University. The large deviations principle for the empirical measure for both continuous and discrete time Markov processes is well known. Various expressions are available for the rate function, but these expressions are usually given as the solution to a variational problem, and in this sense are not explicit. An interesting class of continuous time, reversible processes was identified in the original work of Donsker and Varadhan for which an explicit expression is possible. While this class includes many (reversible) processes of interest, it excludes the case of continuous time pure jump processes, such as a reversible finite state Markov chain. In this paper, we study the large deviations principle for the empirical measure of pure jump Markov processes and provide an explicit formula for the rate function under reversibility. 1. Introduction. Let X(t) be a time homogeneous Markov process with Polish state space S, and let P(t, x, dy) be the transition function of X(t). For t ∈ [0, ∞), define T_t by

        T_t f(x) = ∫_S f(y) P(t, x, dy).

    Then T_t is a contraction semigroup on the Banach space of bounded, Borel measurable functions on S ([6], Chapter 4.1). We use L to denote the infinitesimal generator of T_t and D the domain of L (see [6], Chapter 1). Hence, for each bounded measurable function f ∈ D,

        Lf(x) = lim_{t↓0} (1/t) ( ∫_S f(y) P(t, x, dy) - f(x) ).
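For a finite-state chain the semigroup T_t is a matrix exponential of the rate matrix, and the generator limit above can be checked numerically. A sketch assuming a 2-state chain with an illustrative rate matrix Q:

```python
import numpy as np

def expm_series(A, terms=60):
    """Matrix exponential via truncated power series (fine for small matrices)."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

Q = np.array([[-1.0, 1.0],    # illustrative generator (rate matrix)
              [ 2.0, -2.0]])  # of a 2-state jump Markov chain
f = np.array([3.0, -1.0])

t = 1e-6
Ttf = expm_series(t * Q) @ f     # semigroup T_t applied to f
approx_Lf = (Ttf - f) / t        # difference quotient (T_t f - f)/t
print(approx_Lf, Q @ f)          # both close to [-4, 8]
```

The agreement illustrates that for pure jump processes the generator is simply Lf = Qf; the paper's contribution is the explicit rate function this generator yields under reversibility.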
  • Large Deviations for Stochastic Processes
    LARGE DEVIATIONS FOR STOCHASTIC PROCESSES. By Stefan Adams. Abstract: The notes are devoted to results on large deviations for sequences of Markov processes, following closely the book by Feng and Kurtz ([FK06]). We outline how convergence of Fleming's nonlinear semigroups (logarithmically transformed nonlinear semigroups) implies large deviation principles, analogous to the use of convergence of linear semigroups in weak convergence. The latter method is based on the range condition for the corresponding generator. Viscosity solution methods, however, provide applicable conditions for the necessary nonlinear semigroup convergence. Once the validity of the large deviation principle has been established, one is concerned with obtaining more tractable representations for the corresponding rate function. This in turn can be achieved once a variational representation of the limiting generator of Fleming's semigroup can be established. The obtained variational representation of the generator allows for a suitable control representation of the rate function. The notes conclude with a couple of examples to show how the methodology via Fleming's semigroups works. These notes are based on the mini-course 'Large deviations for stochastic processes' that the author held during the workshop 'Dynamical Gibbs-non-Gibbs transitions' at EURANDOM in Eindhoven, December 2011, and at the Max Planck Institute for Mathematics in the Sciences in Leipzig, July 2012. Keywords and phrases: large deviation principle, Fleming's semigroup, viscosity solution, range condition. Date: 28th November 2012. Mathematics Institute, University of Warwick, Coventry CV4 7AL, United Kingdom, [email protected] The aim of these lectures is to give a very brief and concise introduction to the theory of large deviations for stochastic processes.