Ergodic Theory, Entropy and Application to Statistical Mechanics
Recommended publications
-
Symmetry and Ergodicity Breaking, Mean-Field Study of the Ising Model
Rafael Mottl, ETH Zurich. Advisor: Dr. Ingo Kirsch. Topics: introduction; formalism; the model system (Ising model); solutions in one and more dimensions; symmetry and ergodicity breaking; conclusion. Introduction: phases A and B, critical temperature Tc, phase transition, order parameter, critical exponents. Formalism, statistical mechanical basics: a sample region Ω with (i) a certain dimension d, (ii) volume V(Ω), (iii) number of particles/sites N(Ω), (iv) boundary conditions. The Hamiltonian defined on the sample region, −H_Ω/(k_B T) = Σ_n K_n Θ_n, depends on the degrees of freedom Θ_n and the coupling constants K_n. Partition function: Z_Ω[{K_n}] = Tr exp(−β H_Ω({K_n},{Θ_n})), with β = 1/(k_B T). Free energy: F_Ω[{K_n}] = F_Ω[K] = −k_B T log Z_Ω[{K_n}]. Free energy per site: f_b[K] = lim_{N(Ω)→∞} F_Ω[K]/N(Ω), where (i) the existence of the limit is non-trivial, (ii) the limit is independent of Ω, and (iii) lim_{N(Ω)→∞} N(Ω)/V(Ω) = const. Phases and phase boundaries. Suppose that f_b[K] exists, that there are D coupling constants {K_1,...,K_D}, that f_b[K] is analytic almost everywhere, and that the non-analyticities of f_b[{K_n}] are points, lines, planes or hyperplanes in the phase diagram. The dimension of these singular loci is D_s = 0, 1, 2, ..., and each type of singular locus has codimension C = D − D_s. A phase is a region of analyticity of f_b[K]; phase boundaries are loci of codimension C = 1. Types of phase transitions: f_b[K] is everywhere continuous, and two types of transition occur: (a) a discontinuity of ∂f_b[K]/∂K_i across the phase boundary gives a first-order phase transition; (b) all derivatives of the free energy per site are continuous -
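The free energy per site above is tractable for the zero-field one-dimensional Ising chain, which makes a useful numerical check. A minimal sketch (not from the talk; N, J and β are arbitrary example values) comparing brute-force enumeration of the partition function with the standard transfer-matrix closed form:

```python
import itertools
import math

def partition_function(N=8, J=1.0, beta=1.0):
    """Exact Z for a zero-field periodic 1D Ising chain, enumerating all 2^N states."""
    Z = 0.0
    for spins in itertools.product([-1, 1], repeat=N):
        # H = -J * sum over nearest-neighbour bonds (periodic boundary conditions)
        energy = -J * sum(spins[i] * spins[(i + 1) % N] for i in range(N))
        Z += math.exp(-beta * energy)
    return Z

def partition_function_tm(N=8, J=1.0, beta=1.0):
    """Transfer-matrix closed form: Z = lambda_+^N + lambda_-^N."""
    return (2 * math.cosh(beta * J)) ** N + (2 * math.sinh(beta * J)) ** N

Z1 = partition_function()
Z2 = partition_function_tm()
f_per_site = -math.log(Z1) / 8   # free energy per site, in units of k_B * T
```

Since this f_b is analytic for every finite β, the one-dimensional chain shows no finite-temperature phase transition, consistent with the codimension picture sketched in the talk.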
Josiah Willard Gibbs
GENERAL ARTICLE. V Kumaran. The foundations of classical thermodynamics, as taught in textbooks today, were laid down in nearly complete form by Josiah Willard Gibbs more than a century ago. This article presents a portrait of Gibbs, a quiet and modest man who was responsible for some of the most important advances in the history of science. (V Kumaran is a professor of chemical engineering at the Indian Institute of Science, Bangalore. His research interests include statistical mechanics and fluid mechanics.) Thermodynamics, the science of the interconversion of heat and work, originated from the necessity of designing efficient engines in the late 18th and early 19th centuries. Engines are machines that convert heat energy obtained by combustion of coal, wood or other types of fuel into useful work for running trains, ships, etc. The efficiency of an engine is determined by the amount of useful work obtained for a given amount of heat input. There are two laws related to the efficiency of an engine. The first law of thermodynamics states that heat and work are inter-convertible, and it is not possible to obtain more work than the amount of heat input into the machine. The formulation of this law can be traced back to the work of Leibniz, Dalton, Joule, Clausius, and a host of other scientists from the late 17th to the mid-19th century. The more subtle second law of thermodynamics states that it is not possible to convert all heat into work; all engines have to 'waste' some of the heat input by transferring it to a heat sink. The second law also establishes the minimum amount of heat that has to be wasted, based on the absolute temperatures of the heat source and the heat sink. -
(Measure Theory for Dummies) UWEE Technical Report Number UWEETR-2006-0008
A Measure Theory Tutorial (Measure Theory for Dummies). Maya R. Gupta, {gupta}@ee.washington.edu, Dept of EE, University of Washington, Seattle WA, 98195-2500. UWEE Technical Report Number UWEETR-2006-0008, May 2006. Abstract: This tutorial is an informal introduction to measure theory for people who are interested in reading papers that use measure theory. The tutorial assumes one has had at least a year of college-level calculus, some graduate-level exposure to random processes, and familiarity with terms like "closed" and "open." The focus is on the terms and ideas relevant to applied probability and information theory. There are no proofs and no exercises. Measure theory is a bit like grammar: many people communicate clearly without worrying about all the details, but the details do exist and for good reasons. There are a number of great texts that do measure theory justice. This is not one of them. Rather, this is a hack way to get the basic ideas down so you can read through research papers and follow what's going on. Hopefully, you'll get curious and excited enough about the details to check out some of the references for a deeper understanding. -
Viscosity from Newton to Modern Non-Equilibrium Statistical Mechanics
Sébastien Viscardy, Belgian Institute for Space Aeronomy, 3 Avenue Circulaire, B-1180 Brussels, Belgium. Abstract: In the second half of the 19th century, the kinetic theory of gases probably raised one of the most impassioned debates in the history of science. The so-called reversibility paradox, around which intense polemics occurred, reveals the apparent incompatibility between the microscopic and macroscopic levels. While classical mechanics describes the motion of bodies such as atoms and molecules by means of time-reversible equations, thermodynamics emphasizes the irreversible character of macroscopic phenomena such as viscosity. Aiming at reconciling both levels of description, Boltzmann proposed a probabilistic explanation. Nevertheless, such an interpretation has not totally convinced generations of physicists, so this question has constantly animated the scientific community since his seminal work. In this context, an important breakthrough in dynamical systems theory has shown that the hypothesis of microscopic chaos plays a key role and provides a dynamical interpretation of the emergence of irreversibility. Using viscosity as a leading concept, we sketch the historical development of the concepts related to this fundamental issue up to recent advances. Following the analysis of the Liouville equation introducing the concept of Pollicott-Ruelle resonances, two successful approaches, the escape-rate formalism and the hydrodynamic-mode method, establish remarkable relationships between transport processes and chaotic properties of the underlying Hamiltonian dynamics. Keywords: statistical mechanics, viscosity, reversibility paradox, chaos, dynamical systems theory. Contents: 1 Introduction; 2 Irreversibility; 2.1 Mechanics: energy conservation and reversibility; 2.2 Thermodynamics. -
Hamiltonian Mechanics, Wednesday, 9 November 2011
Physics 111. Lagrange developed an alternative formulation of Newtonian mechanics, and Hamilton developed yet another. Of the three methods, Hamilton's proved the most readily extensible to the fields of statistical mechanics and quantum mechanics. We have already been introduced to the Hamiltonian, H = Σ_{j=1}^{N} q̇_j (∂L/∂q̇_j) − L = Σ_j q̇_j p_j − L (1), where the generalized momenta are defined by p_j ≡ ∂L/∂q̇_j (2), and we have shown that dH/dt = −∂L/∂t. 1. Legendre Transformation. We will now show that the Hamiltonian is a so-called "natural function" of the generalized coordinates and generalized momenta, using a trick that is just about as sophisticated as the product rule of differentiation. You may have seen Legendre transformations in thermodynamics, where they are used to switch independent variables from the ones you have to the ones you want. For example, the internal energy U satisfies dU = T dS − p dV = (∂U/∂S)_V dS + (∂U/∂V)_S dV (3). The expression between the equal signs has the physics: it says that you can change the internal energy by changing the entropy S (scaled by the temperature T) or by changing the volume V (scaled by the negative of the pressure p). The final expression is simply a mathematical statement that the function U(S, V) has a total differential. By lining up the various quantities, we deduce that T = (∂U/∂S)_V and p = −(∂U/∂V)_S. In performing an experiment at constant temperature, however, the T dS term is awkward. -
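The Legendre transform of Eqs. (1)-(2) is easy to verify numerically. A hypothetical sketch (not part of the lecture notes; the one-dimensional harmonic-oscillator Lagrangian and the values of m and k are assumptions chosen for illustration):

```python
def lagrangian(q, qdot, m=2.0, k=3.0):
    """L = T - V for a 1-D harmonic oscillator (m and k are arbitrary example values)."""
    return 0.5 * m * qdot**2 - 0.5 * k * q**2

def momentum(q, qdot, m=2.0, k=3.0, h=1e-6):
    """Generalized momentum p = dL/d(qdot), via a central finite difference."""
    return (lagrangian(q, qdot + h, m, k) - lagrangian(q, qdot - h, m, k)) / (2 * h)

def hamiltonian(q, qdot, m=2.0, k=3.0):
    """Legendre transform H = qdot * p - L, as in Eq. (1)."""
    return qdot * momentum(q, qdot, m, k) - lagrangian(q, qdot, m, k)

# For this L, p = m * qdot, and H should equal the total energy p^2/(2m) + k*q^2/2.
q, qdot = 0.7, -1.2
p = momentum(q, qdot)
energy = p**2 / (2 * 2.0) + 0.5 * 3.0 * q**2
```

The point of the construction is visible here: H comes out as a function of (q, p) with the interpretation of total energy, even though it was built entirely from L(q, q̇).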
Probability Theory: STAT310/MATH230; September 12, 2010 Amir Dembo
E-mail address: [email protected]. Department of Mathematics, Stanford University, Stanford, CA 94305. Contents: Preface; Chapter 1, Probability, measure and integration (1.1 Probability spaces, measures and σ-algebras; 1.2 Random variables and their distribution; 1.3 Integration and the (mathematical) expectation; 1.4 Independence and product measures); Chapter 2, Asymptotics: the law of large numbers (2.1 Weak laws of large numbers; 2.2 The Borel-Cantelli lemmas; 2.3 Strong law of large numbers); Chapter 3, Weak convergence, CLT and Poisson approximation (3.1 The Central Limit Theorem; 3.2 Weak convergence; 3.3 Characteristic functions; 3.4 Poisson approximation and the Poisson process; 3.5 Random vectors and the multivariate CLT); Chapter 4, Conditional expectations and probabilities (4.1 Conditional expectation: existence and uniqueness; 4.2 Properties of the conditional expectation; 4.3 The conditional expectation as an orthogonal projection; 4.4 Regular conditional probability distributions); Chapter 5, Discrete time martingales and stopping times (5.1 Definitions and closure properties; 5.2 Martingale representations and inequalities; 5.3 The convergence of martingales; 5.4 The optional stopping theorem; 5.5 Reversed MGs, likelihood ratios and branching processes); Chapter 6, Markov chains (6.1 Canonical construction and the strong Markov property; 6.2 Markov chains with countable state space; 6.3 General state space: Doeblin and Harris chains); Chapter 7. -
Probability Spaces
EE5110: Probability Foundations for Electrical Engineers, July-November 2015. Lecture 4: Probability Spaces. Lecturer: Dr. Krishna Jagannathan. Scribes: Jainam Doshi, Arjun Nadh and Ajay M. 4.1 Introduction. Just as a point is not defined in elementary geometry, probability theory begins with two entities that are not defined. These undefined entities are a Random Experiment and its Outcome. These two concepts are to be understood intuitively, as suggested by their respective English meanings. We use these undefined terms to define other entities. Definition 4.1: The sample space Ω of a random experiment is the set of all possible outcomes of the random experiment. An outcome (or elementary outcome) of the random experiment is usually denoted by ω. Thus, when a random experiment is performed, the outcome ω ∈ Ω is picked by the Goddess of Chance or Mother Nature or your favourite genie. Note that the sample space Ω can be finite or infinite. Indeed, depending on the cardinality of Ω, it can be classified as follows: 1. finite sample space; 2. countably infinite sample space; 3. uncountable sample space. It is imperative to note that for a given random experiment, the sample space is defined depending on what one is interested in observing as the outcome. We illustrate this using an example. Consider a person tossing a coin. This is a random experiment. Now consider the following three cases. Suppose one is interested in knowing whether the toss produces a head or a tail; then the sample space is given by Ω = {H, T}. Here, as there are only two possible outcomes, the sample space is said to be finite. -
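The coin-toss sample spaces discussed above are small enough to enumerate explicitly. A short illustrative sketch (the variable names are mine, not the scribes'):

```python
import itertools

# One toss: the experimenter only records head or tail
omega_one = {"H", "T"}

# Two tosses: outcomes are ordered pairs, so |Omega| = 2**2 = 4
omega_two = set(itertools.product("HT", repeat=2))

# A cubic die has six elementary outcomes
omega_die = {1, 2, 3, 4, 5, 6}

# An event is a subset of the sample space, e.g. "at least one head"
at_least_one_head = {w for w in omega_two if "H" in w}
```

Changing what one observes changes Ω: recording only the number of heads in two tosses would give the different (and smaller) sample space {0, 1, 2}.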
1 Random Vectors and Product Spaces
36-752 Advanced Probability Overview, Spring 2018. 4. Product Spaces. Instructor: Alessandro Rinaldo. Associated reading: Sec 2.6 and 2.7 of Ash and Doléans-Dade; Sec 1.7 and A.3 of Durrett. 1 Random Vectors and Product Spaces. We have already defined random variables and random quantities. A special case of the latter and a generalization of the former is a random vector. Definition 1 (Random Vector). Let (Ω, F, P) be a probability space. Let X: Ω → R^k be a measurable function. Then X is called a random vector. There arises, in this definition, the question of which σ-field of subsets of R^k should be used. When left unstated, we always assume that the σ-field of subsets of a multidimensional real space is the Borel σ-field, namely the smallest σ-field containing the open sets. However, because R^k is also a product set of k sets, each of which already has a natural σ-field associated with it, we might try to use a σ-field that corresponds to that product in some way. 1.1 Product Spaces. The set R^k has a topology in its own right, but it also happens to be a product set. Each of the factors in the product comes with its own σ-field. There is a way of constructing σ-fields of subsets of product sets directly, without appealing to any additional structure they might have. Definition 2 (Product σ-Field). Let (Ω_1, F_1) and (Ω_2, F_2) be measurable spaces. Let F_1 ⊗ F_2 be the smallest σ-field of subsets of Ω_1 × Ω_2 containing all sets of the form A_1 × A_2, where A_i ∈ F_i for i = 1, 2. -
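On finite sample spaces, the product σ-field of Definition 2 can be generated by brute force: closure under complement and finite union already suffices when Ω_1 × Ω_2 is finite. A sketch under the assumption that Ω_1 and Ω_2 are small finite sets (the function name and example sets are illustrative, not from the notes):

```python
from itertools import product

def sigma_field(omega, generators):
    """Smallest sigma-field on a finite omega containing every generator set,
    built by repeatedly closing under complement and pairwise union."""
    omega = frozenset(omega)
    sets = {frozenset(), omega} | {frozenset(g) for g in generators}
    changed = True
    while changed:
        changed = False
        current = list(sets)
        for a in current:
            c = omega - a            # complement
            if c not in sets:
                sets.add(c)
                changed = True
        for a in current:
            for b in current:
                u = a | b            # finite union
                if u not in sets:
                    sets.add(u)
                    changed = True
    return sets

# Product sigma-field on Omega1 x Omega2, generated by the measurable rectangles
omega1, omega2 = {0, 1}, {"a", "b"}
rectangles = [set(product(a1, a2))
              for a1 in [{0}, {1}, {0, 1}]
              for a2 in [{"a"}, {"b"}, {"a", "b"}]]
F = sigma_field(set(product(omega1, omega2)), rectangles)
```

Here every singleton of the 4-point product space is itself a rectangle, so the closure recovers the full power set (16 sets), which is exactly F_1 ⊗ F_2 when F_1 and F_2 are the power sets of the factors.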
Ergodic Theory in the Perspective of Functional Analysis
13 Lectures by Roland Derndinger, Rainer Nagel, Günther Palm (uncompleted version). In July 1984 this manuscript was accepted for publication in the series "Lecture Notes in Mathematics" by Springer-Verlag. Due to a combination of unfortunate circumstances, none of the authors was able to perform the necessary final revision of the manuscript. Tübingen, July 1987. I. What is Ergodic Theory? The notion "ergodic" is an artificial creation, and the newcomer to "ergodic theory" will have no intuitive understanding of its content: "elementary ergodic theory" is neither part of high-school or college mathematics (as "algebra" is), nor does its name explain its subject (as "number theory" does). Therefore it might be useful first to explain the name and the subject of "ergodic theory". Let us begin with a quotation of the first sentence of P. Walters' introductory lectures (1975, p. 1): "Generally speaking, ergodic theory is the study of transformations and flows from the point of view of recurrence properties, mixing properties, and other global, dynamical, properties connected with asymptotic behavior." Certainly, this definition is very systematic and complete (compare the beginning of our Lectures III and IV). Still we will try to add a few more answers to the question "What is ergodic theory?" Naive answer: A container is divided into two parts, with one part empty and the other filled with gas. Ergodic theory predicts what happens in the long run after we remove the dividing wall. First etymological answer: ergodes = difficult.
Historical answer: 1880 - Boltzmann, Maxwell - ergodic hypothesis; 1900 - Poincaré - recurrence theorem; 1931 - von Neumann - mean ergodic theorem; 1931 - Birkhoff - individual ergodic theorem; 1958 - Kolmogorov - entropy as an invariant; 1963 - Sinai - billiard flow is ergodic; 1970 - Ornstein - entropy classifies Bernoulli shifts; 1975 - Akcoglu - individual L^p-ergodic theorem. Naive answer of a physicist: Ergodic theory proves that time mean equals space mean. -
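The physicist's answer that time mean equals space mean can be illustrated numerically with an irrational rotation of the circle, a standard ergodic map (the choice of map, observable, starting point and orbit length are my own illustration, not from the lectures):

```python
import math

def time_average(f, x0, alpha, n):
    """Time average of f along the orbit of the rotation x -> x + alpha (mod 1)."""
    x, total = x0, 0.0
    for _ in range(n):
        total += f(x)
        x = (x + alpha) % 1.0
    return total / n

f = lambda x: x * x
space_mean = 1.0 / 3.0   # integral of x^2 over [0,1] with Lebesgue measure

# Rotation by an irrational angle (golden-ratio conjugate here) is ergodic,
# so the time mean converges to the space mean for every starting point.
t_mean = time_average(f, 0.2, (math.sqrt(5) - 1) / 2, 100_000)
```

A rational α would instead produce a periodic orbit whose time average depends on x0, which is exactly the failure of ergodicity.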
On the Foundations of Statistical Mechanics: Ergodicity, Many Degrees of Freedom and Inference
Sergio Chibbaro (1,6), Lamberto Rondoni (2,3,6) and Angelo Vulpiani (4,5,6). 1 Sorbonne Universités, UPMC Univ Paris 06, CNRS, UMR 7190, Institut Jean Le Rond d'Alembert, F-75005 Paris (France). 2 Dip. Scienze Matematiche, Politecnico di Torino, C. Duca degli Abruzzi 24, Torino. 3 INFN, Sezione di Torino, Via P. Giuria 1, Torino (Italy). 4 Dipartimento di Fisica, Università "La Sapienza", Roma (Italy). 5 ISC-CNR, Piazzale A. Moro 2, 00185 Roma (Italy). 6 Kavli Institute for Theoretical Physics China, CAS, Beijing 100190, China. March 9, 2014. 1 Introduction. Statistical mechanics was founded by Maxwell, Boltzmann and Gibbs, who intended to describe in terms of microscopic interactions the properties of macroscopic objects, i.e. systems made of so many particles that a statistical treatment is required. For almost all practical purposes one can say that the whole subject of statistical mechanics consists in evaluating a few suitable functions and quantities, such as the partition function, the free energy and the correlation functions. Since the discovery of deterministic chaos it is plain that statistics (e.g. over ensembles of objects following the same dynamical rules, or over time series of fluctuating quantities) is unavoidable and useful even in systems with a few degrees of freedom. On the other hand, no universal agreement has been reached so far about the fundamental ingredients for the validity of statistical mechanics [1, 2]. The wide spectrum of positions includes Landau's and Khinchin's belief that the main ingredient in producing the common macroscopic behaviours is the vast number of microscopic degrees of freedom, which makes almost completely irrelevant details of the microscopic dynamics such as the validity of the standard notion of ergodicity developed in the mathematical literature. -
Measure Theory
Mark Dean. Lecture Notes for Fall 2015 PhD Class in Decision Theory, Brown University. 1 Introduction. Next, we have an extremely rapid introduction to measure theory. Given the short time that we have to spend on this, we are really only going to be able to introduce the relevant concepts and try to give an idea of why they are important. For more information, Efe Ok has an as-yet unpublished book available online here https://files.nyu.edu/eo1/public/ that covers the basics and points the way to many other classic texts. For those of you who are thinking of taking decision theory more seriously, taking a course in measure theory is probably advisable. Take some underlying set X. Measure theory is the study of functions μ that map subsets of X into the real line, with the interpretation that this number is the 'measure', the 'size' or the 'volume' of that set. Of course, not every function defined on a subset of 2^X is going to fit into our intuitive notion of how a measure should behave. In particular, we would like a measure to have at least the following two properties: 1. μ(A) ≥ 0 for every A; 2. μ(∪_{n=1}^∞ A_n) = Σ_{n=1}^∞ μ(A_n) for any pairwise disjoint sets {A_n}. The first property says that we can't have negatively measured sets; the second says that the measure of a union of disjoint sets should be equal to the sum of the measures of those sets. Both these properties should seem appealing if we think about what we intuitively mean by 'measure'. -
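Both defining properties are easy to check for a toy discrete measure, where μ(A) is a sum of point masses. A minimal sketch (the ground set and point masses are arbitrary example values, not from the notes):

```python
from fractions import Fraction

def measure(subset, weights):
    """A discrete measure: mu(A) = sum of nonnegative point masses of elements of A."""
    return sum(weights[x] for x in subset)

weights = {1: Fraction(1, 2), 2: Fraction(1, 4), 3: Fraction(1, 4)}
omega = set(weights)

# Property 1: nonnegativity of every measured set
assert all(measure({x}, weights) >= 0 for x in omega)

# Property 2 (finite instance of countable additivity):
# for disjoint A and B, mu(A | B) = mu(A) + mu(B)
A, B = {1}, {2, 3}
assert measure(A | B, weights) == measure(A, weights) + measure(B, weights)
```

Using exact Fractions sidesteps floating-point noise, so additivity holds with literal equality; since the masses sum to 1, this particular μ is also a probability measure.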
Probability Theory
"A random variable is neither random nor variable." Gian-Carlo Rota, M.I.T. Florian Herzog, 2013. Probability space: A probability space W is a unique triple W = {Ω, F, P}, where Ω is its sample space, F its σ-algebra of events, and P its probability measure. Remarks: (1) The sample space Ω is the set of all possible samples or elementary events ω: Ω = {ω | ω ∈ Ω}. (2) The σ-algebra F is the set of all of the considered events A, i.e., subsets of Ω: F = {A | A ⊆ Ω, A ∈ F}. (3) The probability measure P assigns a probability P(A) to every event A ∈ F: P: F → [0, 1]. Stochastic Systems, 2013. The sample space Ω is sometimes called the universe of all samples or possible outcomes ω. Example 1 (sample spaces): toss of a coin (with head and tail): Ω = {H, T}; two tosses of a coin: Ω = {HH, HT, TH, TT}; a cubic die: Ω = {ω_1, ω_2, ω_3, ω_4, ω_5, ω_6}; the positive integers: Ω = {1, 2, 3, ...}; the reals: Ω = {ω | ω ∈ R}. Note that the ωs are a mathematical construct and have per se no real or scientific meaning. The ωs in the die example refer to the numbers of dots observed when the die is thrown. An event A is a subset of Ω. If the outcome ω of the experiment is in the subset A, then the event A is said to have occurred.
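The triple (Ω, F, P) can be built explicitly for the two-toss coin example, taking F to be the full power set and P the uniform measure (a sketch with illustrative names, not from the slides):

```python
from itertools import combinations, product

omega = list(product("HT", repeat=2))     # sample space: the 4 outcomes of two tosses

def powerset(s):
    """All subsets of s: the largest possible sigma-algebra on a finite space."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

F = powerset(omega)                       # sigma-algebra of events (16 events)
P = {A: len(A) / len(omega) for A in F}   # uniform probability measure P: F -> [0, 1]

# Events are subsets of omega, e.g.:
first_head = frozenset(w for w in omega if w[0] == "H")
second_head = frozenset(w for w in omega if w[1] == "H")
```

On this space P(Ω) = 1, P(∅) = 0, and intersections behave as expected, e.g. P(first_head ∩ second_head) = 1/4 for independent fair tosses.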