
Emergences in Statistical Physics

Roger Balian, Académie des Sciences; IPhT, Centre de Saclay, 91191 Gif-sur-Yvette Cedex

Statistical physics is one of the most active branches of contemporary physics. During the XXth century, the development of microphysics led to the unification of fundamental interactions and exhibited the formal simplicity of the laws at the microscopic scale. These discoveries have strengthened the reductionist attitude, based on the idea that in a system composed of elementary objects there is “nothing more” than these elements and their mutual relations. We therefore believe that the macroscopic laws, complex and specific, can be explained starting from the microscopic ones, simple and unified: it is only the large number of constituents which entails the appearance of new properties. Such a type of deduction is the very scope of statistical physics, which is therefore the pre-eminent field of emergence.

Since the qualitative properties of a compound system may radically differ from those of its elementary parts, the idea that they could follow from them has been difficult to accept. In the middle of the XVIIIth century, Maupertuis raised the question of emergence in his Système de la Nature, essai sur la formation des corps organisés: “A uniform and blind attraction, prevalent in all parts of matter, could not explain how these parts manage to form even the simplest organised body. If they all have the same tendency, the same force to unite with one another, why would these produce an eye, those an ear? Why such a marvellous arrangement? And why wouldn’t they all unite in a jumble?” He concludes: “The formation of an organised body will never be explained by the physical properties of matter alone.” In the second half of the XIXth century, many properties of gases were explained and even predicted by the kinetic theory, the first branch of statistical physics, which was based on the assumption that a gas was an assembly of molecules undergoing brief collisions. However, in spite of such successes, this theory suffered much opposition until the direct experimental discovery of atoms and molecules. It seemed unthinkable that matter would not be continuous at the microscopic scale as it appears at our scale.

The field of statistical physics has not ceased to extend ever since. We acknowledge that, at the scale of 10⁻¹⁰ m, matter is simply constituted of atomic nuclei and electrons, characterised by their mass and charge, interacting pairwise through Coulomb forces (plus magnetic forces for some phenomena), and governed by the rules of quantum mechanics. Our aim is then to deduce, from these sole microscopic laws, which are simple and well established, the various properties at our scale of all kinds of materials (fluids, crystals, amorphous solids, plastics), such properties being mechanical, thermal, optical or electromagnetic. The whole of chemistry, and even biology, is supposed to derive from this same microphysics. Such a type of deductive understanding is also currently sought after for other systems made of many elementary components. Techniques of statistical physics have thus been applied to black-body radiation, where the set of many components is the photon gas, to nuclei treated as clusters of protons and neutrons, to biology in order to explain for instance collective behaviour, and even to cosmology, regarding the Universe as an assembly of galaxies in gravitational interaction.

Even if we focus on the passage from the atomic scale to our scale, statistical physics has allowed us to understand the emergence of many phenomena, and has explained the existence of major qualitative differences between these two scales. A few among its successes are listed below. The reader may find more detail in the textbook of R. Balian, From Microphysics to Macrophysics: Methods and Applications of Statistical Physics (Volumes I and II, Springer, 2007).

1. Matter appears as continuous although its microscopic structure is discrete. The absence of contradiction between these two properties seems obvious today, as it was in the XVIIIth century. However, the progress made during the XIXth century by mechanics, electromagnetism, wave theory and thermodynamics had imposed a continuous conception of matter, and atomism had been rejected.

2. Quantities such as energy undergo a metamorphosis from one scale to the other. Energy, a simple function of the microscopic variables that remains constant along the motion, takes at our scale various forms depending on the considered system and phenomenon: heat, work, kinetic, potential, chemical or electric energy, etc.; its conservation keeps track of its microscopic nature. These multiple aspects of energy at our scale explain why its concept has been so difficult to elaborate: the first Law of thermodynamics was recognised only in the middle of the XIXth century, whereas Carnot stated the second Law in 1824; the word “energy” in its scientific meaning was introduced in 1852, and “énergie” appeared in French only in 1875.

3. At the microscopic scale, the physical quantities reduce to the positions and velocities of the constituent particles (plus their intrinsic angular momentum, termed “spin”, relevant for magnetic materials). The current physical quantities emerge from the passage to our scale and are numerous. Some appear as average values: density, pressure, hydrodynamic velocity, electric current, magnetization, concentration of a solute, etc. Others have a hidden statistical nature: entropy measures the microscopic disorder, temperature characterises the probability distribution which governs the system at equilibrium. Still others are derived from the previous ones and are associated with specific properties: heat capacity, compressibility, resistivity, viscosity, etc. All these quantities are collective; they have no microscopic equivalent and exist only owing to the large number of constituents.

4. Statistical physics requires probabilities for two reasons. On the one hand, quantum mechanics, which governs any object at the microscopic scale, is irreducibly probabilistic. (We shall return to this point.) On the other hand, we wish to study a complex object, and this in any case imposes the use of probabilities: describing in detail a particular system of this type is unthinkable, so that we search for the general properties of an ensemble of systems, all placed under the same macroscopic conditions but presenting differences at the microscopic scale. Predictions are therefore tainted with uncertainties. However, at our scale, physics is deterministic. To resolve this contradiction, we should understand why the probabilistic character of our microscopic description disappears at the macroscopic scale. This follows again from the immensity of the number N of constituents (Avogadro’s number is about 6 × 10²³ particles per mole). The Law of large numbers then implies that the relative statistical fluctuation of a collective quantity is of order 1/√N, hence negligible.
Whereas the microscopic variables, the positions and velocities of the particles, widely fluctuate, the above-listed macroscopic quantities take (or seem to take) well-defined values (a numerical illustration of this 1/√N law is sketched after this enumeration).

5. In the reductionist prospect of statistical physics, thermodynamics is dethroned. It is no longer a fundamental science, since its “Laws” appear as mere consequences of the principles of microphysics. The first Law relies on the interpretation of the various macroscopic forms of energy in terms of the microscopic energy. The second Law follows from Laplace’s “principle of insufficient reason”, according to which equal probabilities should be assigned to events on which no information is available. Indeed, it has been shown (R. Balian and N. Balazs, Ann. Phys. 179 (1987) 97) that the statistical entropy, a mathematical object which measures the uncertainty associated with the resulting probability, can be identified with the entropy of thermodynamics, and that the latter should be a maximum in equilibrium: this is exactly Callen’s formulation of the second Law (the definition of this statistical entropy is recalled after this enumeration).

6. When applied no longer on general grounds but to specific materials or phenomena, at equilibrium or off equilibrium, statistical physics allows us to understand the microscopic origin of many phenomenological laws: equations of state, specific heats of gases or solids, Hooke’s law of elasticity, magnetic susceptibility in 1/T, law of mass action, heat equation, Ohm’s law, Navier–Stokes equation, laws of chemical kinetics, and so on. For sufficiently simple materials, one can even evaluate the empirical coefficients entering such laws, for instance, viscosities, conductivities, thermoelectric or diffusion coefficients.

7. Macroscopic dynamical laws are most often nonlinear, whereas the equations of motion of microphysics are linear. Nonlinearity can thus emerge from linearity.

8. Another drastic qualitative difference, the emergence of irreversibility, has been regarded for a long time as a paradox. At our scale, most evolutions exhibit an “arrow of time”: friction slows down a moving object, the temperatures of two bodies in contact tend to equalize, a lump of sugar dissolves in water, and the inverse spontaneous processes are forbidden. Nevertheless, microscopic dynamics is reversible: changing the direction of time does not affect the equations of motion of the constituent particles whatsoever. Here again, it is the large number of these particles which allows us to understand why the probability of observing aberrant evolutions, which would go back in time, vanishes in practice, at least over reasonable time spans. It is mathematically remarkable, and often difficult to prove, that reversible equations of motion may generate macroscopic irreversibility.

9. The microscopic laws are invariant under the effect of some operations, for instance, a rotation or a reflection (as in a mirror). Nevertheless, in many materials, we can observe the violation of such an invariance. Crystals, or ferromagnetic samples, are not rotationally invariant. A crystal of quartz differs from its mirror image, and so does any living matter. Understanding such broken symmetries is an important theme of statistical physics.

10. A phase transition such as the melting of ice manifests itself as a sharp change of state induced by a continuous variation of temperature.
This phenomenon has long been regarded as a macroscopic manifestation of a microscopic change: it was commonly thought that the intermolecular forces were not the same in both phases, and that the melting of ice, for instance, simply reflected a change of these forces generated by the rise of temperature. It seemed paradoxical that a material might change its state without modification of its microscopic laws. A crucial conceptual step was made by Lars Onsager, who solved in 1944 a model of phase transition in which the system is paramagnetic above some temperature and ferromagnetic below it, although the interaction between spins remains unchanged when the temperature varies. Here again, the qualitative change is allowed by the large number of constituents.

11. Statistical physics also enlightens the theory of quantum measurement. A macroscopic physical quantity takes well-defined values, whereas the very concept of physical quantity presents strange features in quantum mechanics. The position and the velocity of a particle are not represented mathematically by ordinary numbers but by elements of a non-commutative algebra. Consequently, they can neither be measured nor even defined simultaneously with arbitrary precision; they are governed by inescapable probabilities, and the product of their statistical fluctuations must exceed some bound (Heisenberg’s uncertainty relation). More generally, microscopic quantities are governed by probabilities of a novel type, different from the standard probabilities of textbooks. (For instance, quantum correlations may violate Bell’s inequalities.) At the microscopic scale, there exist “Schrödinger cats”, that is, objects that are both “dead” and “alive”. Worse, logic itself obeys strange rules: a set of statements, each separately true, may lead to a contradiction if they are combined. This occurs when the theory prohibits testing them together experimentally with the same apparatus, although each of them can be tested through separate measurements. An assertion (or a set of assertions) thus has only a “contextual” sense: saying that it is “true” means that one can imagine some experimental context in which it could be verified. However, in spite of all these oddities, each quantum measurement yields a well-defined result, without “Schrödinger cat”. This fact can be understood through methods of statistical physics (A. Allahverdyan et al., Europhys. Lett. 61 (2003) 452), by solving a model in which the apparatus is a macroscopic object interacting with the tested system. It is then shown that the apparatus registers at each run a well-defined value, and that successive measurements can provide different results governed by an ordinary probability law. Thus, at our scale, ordinary probability and logic emerge from the measurement process.
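As a minimal numerical illustration of the 1/√N law of point 4, the sketch below sums N independent two-valued “constituents” (an idealisation: real constituents interact) and compares the relative fluctuation of this extensive quantity with 1/√N. The sample sizes and random seed are arbitrary choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each "system" consists of N independent two-valued constituents (0 or 1);
# their sum plays the role of an extensive macroscopic quantity.
for N in (10**2, 10**4, 10**8):
    samples = rng.binomial(N, 0.5, size=10_000)           # 10 000 such systems
    relative_fluctuation = samples.std() / samples.mean()
    print(f"N = {N:>9}:  relative fluctuation = {relative_fluctuation:.1e}, "
          f"1/sqrt(N) = {N ** -0.5:.1e}")
```

The two printed numbers agree for each N, and both shrink by a factor of ten when N grows by a factor of one hundred.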
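For point 5, the statistical entropy invoked in the identification mentioned above is the standard textbook expression recalled below (the notation is the usual one, not taken from this text); with no information beyond normalisation, it is maximal for equal probabilities, which is how Laplace’s principle of insufficient reason enters.

```latex
% Statistical entropy of a probability distribution {p_i} (k_B: Boltzmann's constant):
S = -\,k_{\mathrm B} \sum_i p_i \ln p_i , \qquad \sum_i p_i = 1 .
% With no information beyond normalisation, S is maximal for equal probabilities,
% p_i = 1/W over the W accessible microstates, giving S = k_B \ln W.
```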

Statistical physics therefore allows us, in many circumstances, to understand, predict and evaluate all kinds of properties by relying on the microscopic theory, which is unified, more fundamental and simpler than the specific phenomenological theories based on macroscopic experiments. This search for emergences constitutes a strong impulse for research, even when it fails! Most stimulating and fruitful, it contributes to the conceptual unification of science and helps discovery, as we have just seen. In a domain other than statistical physics, understanding chemical bonds by treating a molecule as an assembly of nuclei and electrons has greatly helped the elaboration of new compounds. However, this will not prevent chemistry from remaining an autonomous science, since it is hazardous to anticipate the properties of a new molecule by relying only on its elementary composition. Likewise, molecular biology will not become a branch of chemistry.

More generally, as P. W. Anderson stressed in his celebrated article More is Different (Science 177 (1972) 393), some ideas of which will inspire us here, it is misleading to dream of a unique Science that would be deduced from the fundamental microscopic laws, although the latter have been established with near certainty and should in principle suffice to explain the world at any scale. The study of more and more complex objects creates new sciences, which cannot be regarded as extensions of more “fundamental” sciences. Admittedly, it has happened that theory led to the discovery of a new property emerging from a lower level. Using kinetic theory, Maxwell predicted that the viscosity of gases should increase with temperature, contrary to that of liquids, an unexpected feature that was later verified experimentally. However, such successes of “constructionism” are rare. No one would have imagined, for example, the existence of quasi-crystals before their surprising macroscopic discovery; it is only a posteriori that one understood how the interactions between atoms might generate such objects. New phenomena and concepts are most often discovered at their own scale; in order to relate them to the lower, more fundamental scale, one works from top to bottom, not from bottom to top, thus explaining a phenomenon that was often already known long before.

Indeed, it is often extremely difficult to connect two different scales. Discovered in 1911, the phenomenon of superconductivity was explained only in 1957 in terms of the electronic structure of metals, and the theory of superconducting oxides with high critical temperature is not yet satisfactory. The problem of quantum measurement, posed around 1925, is only beginning to be solved. Water, an omnipresent material, has such a complicated microscopic organisation that its singular properties are still hardly explained. In the field of botany, it has been observed for centuries that many objects (such as pinecones, pineapples, artichoke hearts or sunflowers) display spirals winding in both directions; counting them provides two successive numbers of the Fibonacci sequence (1, 2, 3, 5, 8, 13, 21, 34, …). This organisation was explained only in 1992, through the solution of a model describing the growth of such objects (a simple geometric illustration is sketched below). If the time lapse between the discovery of a phenomenon and its microscopic explanation is so large, it is because elaborate theoretical tools, different from one problem to another, and much imagination are needed.
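Purely as an illustration of these Fibonacci spirals, and not as the growth model referred to above, the sketch below uses the classic golden-angle (“Vogel”) construction, in which each new element is placed at roughly 137.5° from the previous one at a radius growing as the square root of its index; the index gaps between nearest neighbours, which determine the visible spirals, then fall on Fibonacci numbers. The number of points and the nearest-neighbour analysis are arbitrary choices made for this sketch.

```python
import numpy as np

# Golden-angle ("Vogel") placement of successive elements: an illustrative
# geometric construction, not the dynamical growth model cited in the text.
golden_angle = np.pi * (3.0 - np.sqrt(5.0))      # about 137.5 degrees, in radians
n = np.arange(1, 301)                            # 300 successive elements
theta = n * golden_angle
r = np.sqrt(n)                                   # radius ~ sqrt(n) keeps the density uniform
x, y = r * np.cos(theta), r * np.sin(theta)

# The spirals seen by the eye join each point to its nearest neighbours;
# the index gaps between nearest neighbours are Fibonacci numbers.
d2 = (x[:, None] - x[None, :])**2 + (y[:, None] - y[None, :])**2
np.fill_diagonal(d2, np.inf)
index_gaps = np.abs(np.argmin(d2, axis=1) - np.arange(len(n)))
print(np.unique(index_gaps, return_counts=True))
```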

Computers allow simulations, although the number of particles involved often exceeds their capacity. Moreover, especially when experimental data are insufficient, reproducing a macroscopic phenomenon through a numerical calculation may be useful for technical applications in which quantitative results are needed. As regards the analysis of an emergence, computer experiments that explore various fictitious systems, similar to the one under consideration but less complex, are quite valuable, as they may suggest original ideas. However, the raw results of a computer calculation do not directly further our understanding, since an excess of numerical data without an explanatory theory is unintelligible.

In fact, understanding often means eliminating all the unessential features. This is why statistical physics so often relies on physical models. In physics, a model is a schematic representation at the elementary scale of the system under study, in which only some properties relevant for the considered collective phenomenon are retained. This choice requires much insight; it is an art, and sometimes a dangerous one, since looking for a problem sufficiently simple to be solvable may drive us far from the real system. The Ising model, solved by Onsager to explain the behaviour of non-metallic ferromagnets, disregards all degrees of freedom apart from the spins, i.e., the elementary magnetic moments: the particles which carry these spins are treated as fixed points. Moreover, the material is treated as a two-dimensional lattice, and the interaction is reduced to pairs of neighbouring spins so as to allow an analytic solution. No quantitative fit could be expected from such a drastic set of simplifications. However, this model keeps the essential feature of the paramagnetic–ferromagnetic transition, so that its solution marked a major advance in the understanding of this transition.
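To make the ingredients of this model concrete, here is a minimal Monte Carlo sketch of the two-dimensional nearest-neighbour Ising model (spins ±1 on a square lattice with coupling J). It is only an illustrative simulation, not Onsager’s analytic solution; the function name, lattice size, number of sweeps and temperatures are arbitrary choices, and finite-size effects blur the transition. The exact critical temperature, in units where J and Boltzmann’s constant equal 1, is 2/ln(1+√2) ≈ 2.27.

```python
import numpy as np

rng = np.random.default_rng(1)

def ising_magnetisation(L=16, T=2.0, sweeps=1000, J=1.0):
    """Metropolis simulation of the 2D nearest-neighbour Ising model:
    spins +-1 on an L x L square lattice, periodic boundaries, k_B = 1."""
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L), rng.integers(L)
            # Energy cost of flipping spin (i, j): only its 4 nearest neighbours enter.
            nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
            dE = 2.0 * J * s[i, j] * nb
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] = -s[i, j]
    return abs(s.mean())          # magnetisation per spin of the final configuration

# Sizeable ferromagnetic order below T_c ~ 2.27, (near-)zero magnetisation above it.
for T in (1.5, 3.5):
    print(f"T = {T}:  |m| ~ {ising_magnetisation(T=T):.2f}")
```

Despite its crudeness, the same unchanged spin–spin interaction yields an ordered state at low temperature and a disordered one at high temperature, which is the point made in the paragraph above.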

The success of physical models relies on the universality of the phenomenon under study. If the main features of this phenomenon happen to be the same for various objects, it is legitimate to proceed by analogy, and qualitatively correct results are obtained if the real material belongs to the same class as its simplified theoretical model. Solving different models may also help to approach reality. This method for working out emergences is common, although one cannot be certain that it will work, since universality is not guaranteed a priori. However, favourable circumstances exist. In the 1970s, the long-standing problem of “critical exponents” was solved. Near a critical point like that of water, where the distinction between liquid and vapour fades out, the specific heat, for instance, behaves as a power of the distance in temperature from the critical point, with an exponent to be determined theoretically. It has been proven that such exponents, and more generally the behaviour in the vicinity of the critical point, are universal, to wit, they depend only on a few parameters. This made it possible to calculate them and to compare them successfully with experiment. The proof relies on the method of the so-called renormalisation group; it leads us to identify the relevant parameters as the only ones which subsist when the critical point is gradually approached through a kind of homothety (change of scale). All other parameters are thus eliminated, so that the emergence of critical exponents erases the differences between physical systems belonging to the same universality class, which is characterised by the relevant parameters.
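As a reminder of the standard form of such a power law (the notation below is the usual textbook one, not taken from this text), the specific heat near the critical temperature behaves as follows.

```latex
% Specific heat near the critical point: a universal power law
C(T) \;\sim\; A_{\pm}\,\bigl|\,T - T_c\,\bigr|^{-\alpha},
\qquad T \to T_c^{\pm} .
% The exponent \alpha (like the other critical exponents) depends only on the
% universality class (e.g. spatial dimension, symmetry of the order parameter),
% not on the microscopic details of the material; only the amplitudes
% A_+ and A_- are non-universal.
```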

Another type of physical modelling, better controlled, consists in proceeding by steps. In a chemically inert gas, molecules first emerge from nuclei and electrons; these molecules can then be treated as elementary constituents, the interactions of which are accounted for through their cross section. Likewise, in order to understand the emergence of the properties of a metal, it is necessary to introduce several intermediate levels of description: its electromagnetic properties are explained by modelling it as a lattice of ions and a gas of conduction electrons; for its mechanical properties, such as elasticity or plasticity, one starts from a larger scale, where the elementary objects are the crystalline defects (vacancies, dislocations, grain boundaries). Besides, when we treated matter at the “microscopic” scale as an assembly of electrons and nuclei, we did not go as far as was thinkable: we regarded the nuclei as point particles, forgetting that they emerge from a more elementary structure of protons and neutrons, or even of quarks. This was fully justified since their size (of the order of 10⁻¹⁵ m) is negligible at the atomic scale. It would be ridiculous under such conditions to study directly the emergence of properties at our scale from the more “fundamental” scale of quarks. The situation is the same in biology, where it is necessary to pass through successive nested levels, such as amino acids, proteins, the genome, organelles, cells, organs and organisms. Philosophically, it is legitimate to think that everything might reduce to the most microscopic laws; scientifically, it is already an achievement to grasp, step by step, the hierarchy of emergences.

We can thus identify very many emergences in the field of statistical physics, either for the analysis of a given phenomenon at successive scales, or for different phenomena over a single step. However, the term “emergence” covers such a variety of dissimilar approaches that the physicist cannot give it a definite meaning. The change of scale may create order (a crystal as an assembly of isotropic particles) as well as disorder (turbulence arising from regular hydrodynamic equations), continuity (a fluid made of molecules) as well as discontinuity (phase transitions). Moreover, different ideas and a specific method are called into play to work out each one of the emergences listed above, so that emergence cannot be regarded as a unique and general concept. Besides, a method which is efficient in one case may prove misleading in another: it was tempting to transpose catastrophe theory to the problem of critical exponents; this was done, but the result was incorrect.

Altogether, the idea of emergence, although ubiquitous in statistical physics, has but little operational value. Adherence to reductionism does not mean adoption of emergence as a method of research. Each level of description possesses its own concepts and laws. Unveiling an emergence is most often uncovering the buried foundations of an existing building; it is not constructing a complex object from its constituents. Jean Perrin’s approach, “discover the simple invisible laws beneath the complicated visible things”, is extremely beneficial; the inverse approach is illusory. As Anderson wrote, “in general, the relationship between the system and its parts is intellectually a one-way street. Synthesis is expected to be all but impossible; analysis, on the other hand, may be not only possible but fruitful in all kinds of ways.”