
in “Homogenization 2001, Proceedings of the First HMS2000 International School and Conference on Homogenization. Naples, Complesso Monte S. Angelo, June 18-22 and 23-27, 2001, Ed. L. Carbone and R. De Arcangelis, 191–211, Gakkotosho, Tokyo, 2003”.

On Homogenization and Γ-convergence

Luc TARTAR1

In memory of Ennio DE GIORGI

When in the Fall of 1976 I had chosen “Homogénéisation dans les équations aux dérivées partielles” (Homogenization in partial differential equations) for the title of my Peccot lectures, which I gave in the beginning of 1977 at Collège de France in Paris, I did not know of the term Γ-convergence, which I first heard almost a year later, in a talk that Ennio DE GIORGI gave in the seminar that Jacques-Louis LIONS was organizing at Collège de France on Friday afternoons. I had not found the definition of Γ-convergence really new, as it was quite similar to questions that were already discussed in control theory under the name of relaxation (which was a more general question than what most people mean by that term now); it was the convergence in the sense of Umberto MOSCO [Mos] but without the restriction to convex functionals, and it was the natural nonlinear analog of a result concerning G-convergence that Ennio DE GIORGI had obtained with Sergio SPAGNOLO [DG&Spa]. However, Ennio DE GIORGI’s talk contained a quite interesting example, for which he referred to Luciano MODICA (and Stefano MORTOLA) [Mod&Mor], where functionals involving surface integrals appeared as Γ-limits of functionals involving volume integrals, and I thought that this was the interesting part of the concept; so I had found it similar to previous questions, but I had felt that the point of view was slightly different. I thought that the idea could be useful for questions like surface tension, but I would have preferred to consider that in a dynamical situation, of course, and although there was no direct minimization of functionals in Ennio DE GIORGI’s approach, it had for me the same limitations that I had observed in others, who clung to their obviously wrong belief that Nature minimizes energy.
What I had taught in my Peccot lectures contained extensions of some work that I had done with François MURAT, on a slightly more general approach than G-convergence, which he later called H-convergence, and on the notion of Compensated Compactness. I thought that it was clear from my lectures that H-convergence and Compensated Compactness were two aspects of the same question, which is to understand what kind of oscillations (which one often calls microstructure nowadays) are compatible with a given system of partial differential equations, and what effective equations could be derived for describing the macroscopic behaviour of a few interesting quantities; in some way it is this global point of view which should be called Homogenization, although for simplicity Homogenization was first identified with the simpler aspect of H-convergence (or G-convergence in some cases), but then the term seems to have lost its original meaning due to the limitations of those who were using it and who lost track of any goal by concentrating on too many similar examples; similarly, the limitations of those using Γ-convergence have made it lose some of the power that Ennio DE GIORGI had put into the concept.
Although I had borrowed the term Homogenization from Ivo BABUŠKA, who had been interested in questions with periodic structures, in the spirit of what Henri SANCHEZ-PALENCIA had also done, I had clearly set up a much more general framework, which should have been found natural by anyone who understood a little about Continuum Mechanics. I would have been greatly puzzled if I had been told at the time that some people to whom I had explained that elastic materials do not minimize their potential energy would still stick to that fake physical principle twenty years later, after having conscientiously misled generations of students about it.
I could hardly have understood either that it was possible for some people to confuse Homogenization and Γ-convergence; certainly, a much better insight could be gained by using Ennio DE GIORGI’s possibility of using general topologies, for example by considering the topology according to what my own approach of Homogenization/Compensated Compactness suggested. For various reasons, there are different groups of people who insist on attributing my ideas to their friends or themselves, and they do not seem aware that every good mathematician can observe that they do not understand well the methods that they use, and this casts a doubt on the fact that they could have

1 Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, PA 15213-3890, USA.

initiated them; I usually explain in detail who contributed to all the ideas that I know about, and why the ones which I introduced myself had become natural to me, and some of these people even stoop to insulting me because I explain which ideas I had introduced and the mistakes that some reasonably good mathematicians have made. Some of these people often pretend to answer questions of Continuum Mechanics by using Γ-convergence, but their limited understanding of Continuum Mechanics is too obvious to be missed; although what they do is nonsense from the point of view of Continuum Mechanics, it does not mean that one cannot use Γ-convergence for doing something useful in Continuum Mechanics, but for that one has to be more inventive concerning the topology that one chooses.
Ennio DE GIORGI was a giant, and his ideas have had an important impact on some questions of Analysis and Geometry; as I have mentioned elsewhere, he gave me the feeling that he was interested in Mechanics, but as he had not learned much in this direction I thought that he was trying to reinvent the field; his ideas have been important concerning the regularity of solutions of elliptic or parabolic partial differential equations and convergence effects related to minimization, but one cannot ignore the fact that most of Continuum Mechanics or Physics is not about minimization, and that many equations are actually hyperbolic.
Ennio DE GIORGI should have told his followers to learn a little about Continuum Mechanics, and as I have oriented my own research work towards overcoming the challenges coming from Continuum Mechanics or Physics, I want to offer all my contributions to his memory, hoping that this will deter many from propagating their erroneous views about Mechanics while using the name of Ennio DE GIORGI as a shield, because misleading others was utterly opposed to his character; he was a man of utmost integrity, whom I admired for his religious approach to life, although a little different from mine. Those who feel the urge to attribute my ideas to others who have not done much and do not even understand them could then instead attribute them to Ennio DE GIORGI, as a token of appreciation of his mathematical contributions.
Homogenization is a theory about partial differential equations, which may be elliptic, parabolic, hyperbolic or neither, and although unphysical minimization processes may well be useful for technical reasons (as a way to prove existence of some solutions, for example), one should observe that most partial differential equations are not about minimizing anything. However, François MURAT and I were actually led to these questions by starting from an academic minimization problem, and I have described the chronology of this approach in [Tar1]. Γ-convergence is a theory about functionals, in which the order relation of the real line plays a role and plenty of small minimization problems are hidden in the definition. As a first step towards appreciating the differences between the two theories, I think that it is useful to describe a few basic facts about Continuum Mechanics and Physics. As more space would be needed for explaining some technical questions related to my subject, I plan to write more articles later.

Does Nature minimize or conserve energy?

In 1848, STOKES [Sto1] explained a discrepancy which CHALLIS [Cha] had noticed concerning some solutions of the equations of compressible gas dynamics, which POISSON [Pois] had obtained in an implicit form in 1808, by showing that solutions could approach a discontinuity in finite time; by using conservation of mass and conservation of momentum he had then correctly derived the jump conditions that discontinuous solutions must satisfy. These jump conditions are now interpreted as meaning that the discontinuous functions satisfy a partial differential equation in the sense of distributions, as developed by Laurent SCHWARTZ, following the pioneering work of Sergei SOBOLEV. After STOKES, RIEMANN [Rie] derived the jump conditions independently (for isentropic motions) in 1860, but instead of being called the Stokes–Riemann conditions, the jump conditions are now named after RANKINE [Ran] and HUGONIOT [Hug]. However, when STOKES edited his complete works in 1880 [Sto2], he did not reproduce there his 1848 proof of the jump conditions, and instead he apologized for having made a mistake, because he had been (wrongly) convinced by Lord RAYLEIGH and THOMSON (later to become Lord KELVIN) that his discontinuous solutions were not physical: they did not conserve energy. So in the third part of the 19th Century, good physicists were adamant: energy is conserved!
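In modern notation, the jump conditions STOKES derived are the Rankine–Hugoniot conditions: for a scalar conservation law u_t + f(u)_x = 0, a discontinuity between states u_l and u_r must travel at speed s = (f(u_l) − f(u_r))/(u_l − u_r). A minimal numerical sketch, using Burgers’ equation u_t + (u²/2)_x = 0 as a stand-in for the gas-dynamics system discussed here (the function name and the discretization are illustrative choices, not from the text):

```python
# Rankine-Hugoniot check on Burgers' equation u_t + (u^2 / 2)_x = 0.
# With u_l = 2 and u_r = 0 the predicted shock speed is
#   s = (f(u_l) - f(u_r)) / (u_l - u_r) = (2 - 0) / (2 - 0) = 1.

def shock_speed_burgers(n=400, t_end=0.25):
    """Measure the shock speed produced by a first-order conservative upwind scheme."""
    dx = 1.0 / n
    lam = 0.4                      # dt / dx; CFL is satisfied since max |u| = 2
    f = lambda v: 0.5 * v * v
    # initial jump at x = 0.25: u = 2 on the left, u = 0 on the right
    u = [2.0 if (i + 0.5) * dx < 0.25 else 0.0 for i in range(n)]
    for _ in range(round(t_end / (lam * dx))):
        # conservative upwind update (valid here because all states are >= 0);
        # the left boundary cell is held at the inflow state u = 2
        u = [u[i] - lam * (f(u[i]) - f(u[i - 1])) if i > 0 else u[0]
             for i in range(n)]
    # locate the (slightly smeared) shock as the last cell still above the mid-value 1
    pos = max(i for i in range(n) if u[i] > 1.0)
    return ((pos + 0.5) * dx - 0.25) / t_end

print(shock_speed_burgers())       # close to the Rankine-Hugoniot speed 1
```

The conservative form of the update is essential: a non-conservative discretization of the same equation can propagate the discontinuity at a wrong speed, which is the numerical counterpart of the physical point being made in this section.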

KELVIN, RAYLEIGH and STOKES must have understood later that heat is a form of energy, and that the missing energy in STOKES’s discontinuous solutions of (isentropic) gas dynamics is transformed into heat, which does make the temperature of the gas increase, but since that possibility is not allowed in the

mathematical model this energy is “apparently lost”; in other terms, these three great scientists had not yet grasped why one needs a notion of “internal energy”, although it seems that WATT and CARNOT had understood much earlier (and independently) that mechanical energy may be transformed into heat and heat may be transformed into mechanical energy, but with inherent limitations on the proportion of mechanical energy which can be recovered after it has been transformed into heat. So in the fourth part of the 19th Century, good physicists knew that energy is conserved, if one is careful to count all its forms, including heat; in the first part of the 20th Century, EINSTEIN even observed that mass is a form of energy, with his famous relation e = m c².
One way of solving an equation of the form −∆u = f with boundary conditions, an equation named after LAPLACE or POISSON, is to invoke the Dirichlet principle, which consists in noticing that if a function u of class C² minimizes the functional J defined by J(v) = (1/2) ∫_Ω |grad(v)|² dx − ∫_Ω f v dx among all smooth functions having a given boundary value, then u satisfies −∆u = f in Ω. The principle was named after DIRICHLET by RIEMANN, who had heard it from him, but it had been used before by GAUSS and by GREEN, and the same principle is also named after THOMSON. Could it be that any of these scientists believed that Nature is minimizing energy? I do not think so.
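The Dirichlet principle can be checked in a discrete setting. A minimal one-dimensional finite-difference sketch (the function names and the choice f = 1 are illustrative): the solution of −u″ = f on (0, 1) with zero boundary values is computed from the linear system, and one verifies that it minimizes the discrete energy J.

```python
import random

def solve_dirichlet(f, n=50):
    """Finite differences for -u'' = f on (0, 1) with u(0) = u(1) = 0:
    solve the tridiagonal system by the Thomas algorithm; returns (u, h)."""
    h = 1.0 / (n + 1)
    a = c = -1.0 / h ** 2          # sub- and super-diagonal entries
    b = 2.0 / h ** 2               # diagonal entry
    d = [f((i + 1) * h) for i in range(n)]
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c / b, d[0] / b
    for i in range(1, n):          # forward elimination
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (d[i] - a * dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1): # back substitution
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u, h

def energy(v, f, h):
    """Discrete Dirichlet energy J(v) = 1/2 int |v'|^2 dx - int f v dx (zero BC)."""
    w = [0.0] + v + [0.0]
    grad = sum((w[i + 1] - w[i]) ** 2 for i in range(len(w) - 1)) / (2 * h)
    load = h * sum(f((i + 1) * h) * v[i] for i in range(len(v)))
    return grad - load

f = lambda x: 1.0                  # exact solution: u(x) = x (1 - x) / 2
u, h = solve_dirichlet(f)
random.seed(0)
worse = [ui + random.uniform(-0.01, 0.01) for ui in u]
print(energy(u, f, h) < energy(worse, f, h))   # True: u minimizes J
```

This is the “useful mathematical trick” of the text in miniature: the minimization is a device for computing u, not a claim about what Nature does.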
In those days the equation may have arisen from a question of Gravitation (using Newton’s law of attraction), and in that case the potential energy for a finite number of masses must have been used and the limit for a density of masses must have been derived, probably first by LAPLACE and by POISSON; the same ideas were certainly applied to Magnetism, and to Electricity, but all these people must have known the basic truth of Classical Mechanics that total energy (kinetic plus potential) is conserved; they must have known that a local strict minimum of the potential energy gives rise to a stable equilibrium, but they could not have mistaken the minimization of potential energy for a physical process; in other words, they must have known that the Dirichlet Principle is a useful mathematical trick. Specialists of Numerical Analysis know that any unphysical principle is useful if it permits one to compute quickly and accurately something that one is interested in, but there is no reason to mistake such a principle for reality. So it seems better to consider that Nature does not minimize energy, but that the mathematical problem of minimizing a suitable functional may give information on the solutions of some physical processes which are not about minimizing anything. Although Nature does not minimize energy, one may well arrive at something that looks like what Nature has created in a particular area by looking at the mathematical point of view of minimizing a particular functional, but one should not be fooled by this analogy and jump to conclusions too fast.
If A implies B and one observes something which looks like B, then one must be an optimist to deduce that it must be because A is true, and physicists seem to be of an optimistic nature; but mathematicians would probably consider as intellectually challenged a student who thinks that if A implies B and B is true then A is true, and they may think of showing the deluded student a large number of counter-examples (thinking that such a student would not even be convinced by seeing only one counter-example), so mathematicians seem to be of a pessimistic nature. If one observes a microstructure and one discovers that such a microstructure appears when one minimizes a particular functional, and if one claims that this is the reason why this particular microstructure occurs, one might be making precisely the mistake that mathematicians are warned against.
Although not related to a physical process, the Dirichlet Principle has helped discover interesting mathematical structures. WEIERSTRASS pointed out that the functional J might not attain its minimum, and this question may have been at the onset of his study of compactness. I think that the complete solution of the Dirichlet Principle was one of the famous list of problems which HILBERT proposed in 1900 at the International Congress of Mathematicians in Paris, France; it might have been the motivation of Sergei SOBOLEV for introducing his famous spaces, and for that purpose he may have considered the question of Electrostatics, a simplification of Maxwell equation; it might have been the motivation of Laurence C. YOUNG [You1,2,3] for analyzing minimization problems without solutions and introducing a class of generalized solutions, for which one now uses the term Young measures.
In the Summer of 1978, I taught a course [Tar2] at Heriot–Watt University in Edinburgh, Scotland, on Compensated Compactness, a theory which I had partly developed with François MURAT; in that course, I used Young measures as a way to describe the constraints that were obtained by applying my Compensated Compactness Method (which is my unification of what Jacques-Louis LIONS saw as a dichotomy, compactness

arguments on one side, and monotonicity/convexity arguments on the other), but I did not know at the time that Laurence C. YOUNG had introduced them, and I used the term parametrized measures, which I had heard in the seminar of Robert PALLU DE LA BARRIÈRE at IRIA (Institut de Recherche en Informatique et Automatique, Rocquencourt, France), where the idea in control theory was attributed to GHOUILA-HOURI [G-H], but they had been used in control theory before by Jack WARGA [War] and called relaxed controls. Although I was the first to use parametrized/Young measures in a context of partial differential equations, I knew of some earlier work on variational problems involving no derivatives by Henri BERLIOCCHI and Jean-Michel LASRY [Ber&Las]. In my course, it was not the Young measures which were the important concept, and from my work on Homogenization with François MURAT I already knew about their limitations (and I was therefore puzzled that more than ten years later some people could wrongly present them as a characterization of microstructures); in physical terms, Young measures are just a precise mathematical way of talking about one-point statistics, without falling into the usual trap of using probabilities for describing the laws of Nature.

Although MAXWELL had found his equation in his quest for unifying Electricity and Magnetism, his system of equations describes waves which propagate with the speed of Light. Many systems of partial differential equations (of hyperbolic type) are used now as mathematical models of physical processes where energy is conserved, and waves explain an important phenomenon: energy can be globally conserved but may nevertheless flow away, transported by waves; for the wave equation I had read some works in this direction in the early 70s by various combinations of authors, Peter LAX, Cathleen MORAWETZ, Ralph PHILLIPS, and Walter STRAUSS [Lax&Mor&Phi1,2], [Mor1,2], [Str].
There is another way for the energy to be conserved while it may seem to disappear. It may happen through a nonlinear effect, like the creation of shocks in quasilinear systems, and more precisely in (hyperbolic) systems of conservation laws; this question generalizes the particular situations which STOKES and RIEMANN had studied, and it was put into a contemporary mathematical framework by Peter LAX in the 50s [Lax1,2]. A good way to learn about this area now is to look at the recent book of my good friend Constantine DAFERMOS [Daf], but it should be noted that it was a book by COURANT and FRIEDRICHS [Cou&Fri] which had stimulated interest in these questions in the academic community; it was the historical section in this book which triggered my interest in reading more about the origin of ideas, some of which I had learned as a student without any precise attribution (Cathleen MORAWETZ was in charge of editing the book when she was a graduate student, and she told me that other historical information had to be discarded due to a restriction in space). More generally, there are situations where the information goes to infinity, but instead of infinity in the space variable x, it is infinity in the dual space of the Fourier variable ξ, i.e. energy flows into smaller wavelengths (higher frequencies).
In elastic structures, at least these two aspects occur: one part of the energy is lost away through the support, and another part of the energy is trapped in high frequency vibrations, and this often happens because of a small scale in some direction, as one uses thin plates and beams in buildings; in real buildings there is an added damping due to the viscoelastic properties of concrete, which one may interpret as resulting from its microstructure by a Homogenization process, as I heard from Henri SANCHEZ-PALENCIA [S-P1], and his explanation is based on a coupling between a (linearly) elastic material and a viscous fluid trapped in cavities inside; indeed, the water content in concrete is known to be related to its viscoelastic properties (in particular the viscoelastic properties of concrete change with time, and this aging may be due to excessive drying). However, I then heard of another scenario in a talk at a meeting at IMA (Institute for Mathematics and its Applications, Minneapolis MN) in 1985; the speaker mentioned the effect of inclusions (of different elastic materials), together with a small coupling due to taking into account the nonlinear geometric effect in Elasticity, as a way to explain an origin of damping and the flow of energy to higher frequencies.
It is possible that the energy trapped in high frequency vibrations transforms more easily into heat, but one has probably been calling heat too many different kinds of energy, and this is one difficulty in putting Thermodynamics on a sound basis. In the traditional description, heat is taken away by conduction, diffusion, convection of the fluid surrounding the structure, and radiation from the boundary (according to the blackbody radiation formula of PLANCK, which implies a total radiated energy following the law in T⁴

which STEFAN had found earlier); part of the energy trapped in high frequencies is also transmitted as waves to the fluid surrounding the structure, some being audible. It must be noted that in some situations physicists call noise these particular high frequency oscillations which others are trying to describe, and of course calling it noise will not help explain what it is, and postulating that it follows some rules which probabilists like is certainly not the way to understand the phenomenon.
I like to call science-fiction elastic materials those nonexistent materials which only have an elastic range and do not show any onset of plasticity or appearance of cracks, whatever forces are applied to them. In the late 70s, I made the mistake of assuming that such materials could exist in principle, although I had failed to find a good class of strain-stress relations for them [Tar3]; of course, one has to impose that the evolution problem is well posed (and the mathematical methods for studying this question still seem quite inadequate), and I also expected this class to be stable under Homogenization [Tar4]. A few years later, Gianni DAL MASO mentioned to me that one can say something about the stored energy functional of the limiting material by using the method of Γ-convergence which Ennio DE GIORGI had developed, and until a few years ago I did find it an advantage of the Γ-convergence approach that it could talk about the stored energy functional of such a limiting material, whose strain-stress relation I had been unable to characterize; however, I believe now that no such material exists (which would have to satisfy some invariance properties), and that the Γ-convergence result has answered another question, which may not be very physical.
I like to call ultra-science-fiction elastic materials those different nonexistent materials which instantaneously discover a minimum of their potential energy; this is usually the class which is studied by those who pretend to study Elasticity but never mention the term stress. Although ideal (science-fiction) elastic materials are conservative, it is clear that real materials are slightly dissipative, but the way a real material tends to dissipate its kinetic energy and apparently stop in an equilibrium configuration is certainly not by looking for a global minimum of the potential energy; of course, if there is only one equilibrium solution satisfying the desired constraints it might be the global minimum, and an example of such a situation has been shown in Elasticity by Robin KNOPS and Charles STUART [Kno&Stu].
I have left aside an argument which is sometimes used for explaining the interest in configurations of lowest energy, which is to invoke Statistical Physics; only systems in thermal equilibrium are supposed to exist in this framework, and the state of a system is indexed by the absolute temperature T, and the rule says that there is a “probability” of finding the system in a state of energy w, which is proportional to exp(−w/(k T)), where k is the Boltzmann constant. Of course, the basic rule of this game makes no sense except for large systems whose parts are connected enough to interact, and it is worth observing that this rule has been invented by analogy with the equilibrium solutions of the Boltzmann equation, where the parts are identical “particles” which collide at a sufficient rate in order to arrive at a thermal equilibrium quickly; those who like this framework sometimes argue that when the temperature T tends to 0, a system tends to occupy only the positions of lowest energy (with equal probabilities if there are a few of them), and therefore an expansion near zero temperature is mentioned, where one first considers the global minima of the energy w.
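The rule just described, and the expansion near zero temperature, can be illustrated with a toy computation (a sketch in units with k = 1; the list of state energies is an arbitrary example, not from the text):

```python
import math

def boltzmann(w, T):
    """Probabilities p_i proportional to exp(-w_i / T) (units with k = 1);
    subtracting min(w) first avoids underflow at small T."""
    w0 = min(w)
    z = [math.exp(-(wi - w0) / T) for wi in w]
    s = sum(z)
    return [zi / s for zi in z]

energies = [0.0, 0.0, 1.0, 2.0]    # two ground states, two excited states
for T in (10.0, 1.0, 0.1, 0.01):
    print(T, [round(p, 4) for p in boltzmann(energies, T)])
# as T -> 0 the weights concentrate, with equal probabilities,
# on the global minima of w: here (0.5, 0.5, 0.0, 0.0)
```

At large T all four states are nearly equally probable; as T decreases the weights concentrate on the two states of lowest energy, which is exactly the zero-temperature expansion invoked by the partisans of minimization.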
However, in some experiments concerning plasmas, lighter electrons tend to settle quickly at one temperature, while heavier ions tend to settle quickly at another temperature, and the experiments do not last long enough for these two temperatures to come together; it would be equally silly for a specialist in meteorology to use a one-temperature model for the atmosphere. These obvious considerations are but one of the many known defects of using equilibrium models for deducing some properties of the real world; one should be aware that this is the point of view which has been taught for generations, that there is an equation of state valid all the time, although it has been deduced from the sole knowledge of equilibria, and part of the scientific problem is not only to ascertain whether some equations have a solution and how to select the physical solution when there are many, but to question the validity of the equations that one has postulated and to construct better physical models of reality.
Of course, I do not think that there is much Physics in a model where there is no time, like in the approach of Statistical Physics. In Classical Mechanics, one often invokes NOETHER’s theorem, which links symmetries and conservation laws, and it is precisely the invariance by translation in time which is related to the conservation of total energy, and therefore one should not be surprised that the partisans of Nature

minimizing energy live in a world where time does not exist. Some may pretend that time is not so important for their models because they are only interested in stationary solutions, but they should then certainly avoid mentioning the question of stability of the equilibrium model that they study, because the only possible meaning of stability is related to the precise evolution equation that is satisfied, at least since the work of LYAPUNOV (I have read in the correspondence between MITTAG-LEFFLER and POINCARÉ [M-L&Poin] that a different notion of stability had been used at the beginning); it would certainly be very silly for a partisan of Nature minimizing energy to use the term stability and mistake the gradient flow corresponding to that minimization for the evolution equation governing the system. I do not think that there is much Physics either in a model where there is no space.
Once one knows that energy is conserved and may change its form, the problem is to understand where it is located, in space and time or in the dual frequency space. If one wants to localize elastic energy in space, one is bound to discover the Cauchy stress tensor; one should notice that this (symmetric) stress tensor is natural in the physical Eulerian point of view, and that in the less realistic mathematical Lagrangian point of view (also studied by EULER) it corresponds to the stress tensor introduced by PIOLA and by KIRCHHOFF. The mathematical difficulty in keeping track of all the energy is that one needs to develop efficient mathematical tools for analyzing where the energy has gone, in space-time as well as in the dual variables.
For doing that one certainly needs to understand the relations between different scales, in the framework that I developed in the early 70s, based on the use of various types of weak convergence, like H-convergence, developed with François MURAT [Mur], which extended the notion of G-convergence which Sergio SPAGNOLO had developed in the late 60s [Spa1,2], helped by the insight of Ennio DE GIORGI, but also the H-measures and their variants, which I introduced in the late 80s [Tar5].

Weak convergence models for relating various scales

When I was taught Physics in the late 60s, the method for describing the relations between different scales, from microscopic to macroscopic (and I only heard much later of the term mesoscopic), was explained in a traditional way, using a probabilistic language. It is still taught that way nowadays. In the early 70s, I had initiated a different approach, where instead of probabilistic methods one uses weak convergences, or more generally convergences of weak type such as those appearing in G-convergence and H-convergence.
When a new idea appears, there are often people who say afterwards that they had known that already, and that may be true, but it becomes doubtful when after many years they show that they have not yet understood the concept correctly. This phenomenon is not new: when GAUSS read about the work of BOLYAI and the work of LOBACHEVSKY on non-Euclidean geometries, he said that he had known that for a long time, and in his case there is no doubt that he had made similar observations but not published them, and he had even mentioned that his reputation would suffer if he admitted in public that he believed in the existence of such a geometry. I had expressed in writing [Tar6] in 1974 the new idea of using weak convergence for relating different scales, which I called microscopic and macroscopic. Could it be that others had thought of the same idea before, but had been afraid to mention it, as it was (and still is) against the well accepted approach which uses a probabilistic language? Possibly, but I have received no evidence of that yet, and it may clarify the discussion to explain who contributed to this new point of view in the early 70s, and what the point of view is.
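A minimal numerical illustration of the weak-convergence point of view (the quadrature helper and the choice u_n(x) = sin(n x) are illustrative): averages of u_n against any fixed test function tend to 0, the macroscopic (weak) limit, while averages of u_n² tend to 1/2. The macroscopic limit of a nonlinear quantity is not the nonlinear function of the macroscopic limit, which is precisely why one must keep track of the microscopic oscillations.

```python
import math

def average(g, a=0.0, b=2 * math.pi, m=20000):
    """Midpoint-rule approximation of the mean value of g over (a, b)."""
    h = (b - a) / m
    return sum(g(a + (j + 0.5) * h) for j in range(m)) * h / (b - a)

phi = lambda x: math.exp(-x)       # an arbitrary fixed test function
for n in (1, 10, 100):
    weak = average(lambda x, n=n: math.sin(n * x) * phi(x))
    square = average(lambda x, n=n: math.sin(n * x) ** 2)
    print(n, round(weak, 4), round(square, 4))
# the averages against phi tend to 0 (weak limit of u_n), while the
# averages of u_n^2 stay near 1/2, not near 0^2 = 0
```

In the probabilistic language of the Physics courses mentioned above, the value 1/2 would be a one-point statistic; in the weak-convergence language it is simply the weak limit of u_n², computed without postulating anything probabilistic.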

When I had worked with François MURAT on an academic problem of optimization that had been proposed by Jacques-Louis LIONS [Lio1], we used weak convergence in the usual way that we had learned from him; there were sequences converging in L²(Ω) weak, and sequences converging in L∞(Ω) weak ⋆, and there was also a different topology involved. Then we found the work that Sergio SPAGNOLO had done on G-convergence [Spa1,2], with some ideas from Ennio DE GIORGI, and what we had done was slightly different (so the term H-convergence was coined later), but at that point there were only classical ideas in Functional Analysis, as G-convergence is just the weak convergence of the inverses of some operators anyway, i.e. the convergence of Green kernels (and G is a reminder of Green), the difficult part of the work being to prove that the limit of the Green functions of a sequence of elliptic problems is the Green function of another elliptic problem. Then we found the work of Henri SANCHEZ-PALENCIA [S-P2], who was

doing partially formal computations in a periodic setting, and that helped us realize that what we had been doing was related to properties of mixtures. That connection with questions in Continuum Mechanics was illuminating for me, and it gave me a way to understand what I had been told in my Physics courses about microscopic/macroscopic relations. Then we proved the Div-Curl lemma, which appeared important for the question of whether or not one always needs quantities like internal energy, which only exist at the macroscopic level. I also observed that the Div-Curl lemma explains why there is an equipartition of energy between the kinetic part and the potential part of the energy for oscillating solutions of the wave equation, and that gave an interpretation radically different from anything that I had been taught in my Physics courses, where the principle is related to counting degrees of freedom; the application of the same idea to Maxwell equation gives more than just the equipartition of energy between the electric part and the magnetic part of the energy, and the application to nonlinear equations suggests questioning the way that physicists have used for discussing equipartition.
During the year 1974/75, which I spent at the University of Wisconsin in Madison, I had clarifying discussions with Joel ROBBIN, and it became clear why the classical weak convergence is only adapted to coefficients of differential forms, and why one needs other types of weak convergence. I had derived a new proof of H-convergence based on repeated use of the Div-Curl lemma, and as I had mentioned my proof to Jacques-Louis LIONS at a meeting in Marseille, France, in the Fall of 1975, his article contains the first mention of it [Lio2]; I only wrote about it in the Fall of 1976 [Tar7], and it was rediscovered independently by Leon SIMON [Sim], who was told about my previous work by the referee of his article.
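The equipartition statement can be checked on a family of oscillating solutions (a hedged sketch: the choice u(x, t) = sin(n x) cos(n t) and the helper names are illustrative): averaged over a time period, these solutions of u_tt = u_xx carry equal amounts of kinetic energy (1/2) u_t² and potential energy (1/2) u_x².

```python
import math

def mean(g, a, b, m=400):
    """Midpoint-rule mean value of g over (a, b)."""
    h = (b - a) / m
    return sum(g(a + (j + 0.5) * h) for j in range(m)) * h / (b - a)

def wave_energies(n):
    """u(x, t) = sin(n x) cos(n t) solves u_tt = u_xx; return the averages,
    over x in (0, 2*pi) and over one time period, of the kinetic density
    u_t^2 / 2 and of the potential density u_x^2 / 2."""
    ut = lambda x, t: -n * math.sin(n * x) * math.sin(n * t)
    ux = lambda x, t: n * math.cos(n * x) * math.cos(n * t)
    period = 2 * math.pi / n
    kin = mean(lambda t: mean(lambda x: 0.5 * ut(x, t) ** 2, 0, 2 * math.pi), 0, period)
    pot = mean(lambda t: mean(lambda x: 0.5 * ux(x, t) ** 2, 0, 2 * math.pi), 0, period)
    return kin, pot

kin, pot = wave_energies(5)
print(round(kin, 6), round(pot, 6))   # the two averages coincide (both n^2 / 8)
```

Here the equality comes out of the structure of the solutions themselves, with no counting of degrees of freedom, which is in the spirit of the Div-Curl explanation mentioned above.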
My method, which Jacques-Louis LIONS subsequently referred to as the energy method, which others have called the duality method, and which I prefer to call the method of oscillating test functions, follows precisely what one does in a physical problem for identifying effective coefficients, and it suggests that the topology adapted to a physical quantity is found by understanding how one measures it, or more precisely how one identifies what it is from the measurement of other quantities; differential forms are defined by their integrals on manifolds, and therefore the topology adapted to them is the weak topology which looks at the limits of these integrals, but other physical quantities have different interpretations, and other topologies are adapted to them. A new point of view had been created, and since that time I have had no hint that this basic idea should be changed for understanding in a better way some questions relating different scales in Continuum Mechanics or Physics. I am not saying that I could see in 1975 how some particular questions of contemporary Continuum Mechanics or Physics could fit into my program of research, as I understood some of these questions only in the late 80s. I am not saying either that I see clearly now how all questions of contemporary Continuum Mechanics or Physics can be explained from that point of view, as there are some new mathematical objects that must be defined, but it seems that everything will eventually fit into the framework that I have advocated, which does not postulate any probabilistic rules for the way Nature evolves. When I met John M.
BALL in the Fall of 1975, I was puzzled that he had been taught that elastic materials minimize their potential energy, but it was obvious to me from my new understanding of the relations between scales that the strain-stress relation of an elastic material should be stable under weak convergence, and I will only correct that statement now by saying that I was thinking in terms of what I have described as science-fiction elastic materials, which only have an elastic range and do not show any onset of plasticity or appearance of cracks, whatever forces are applied to them (and I have used the term science-fiction because I think now that no such material will ever exist). When, in 1977, Clifford TRUESDELL questioned my framework [Tar3,4], I thought that it was due to a difference in generation that he did not see why I was proposing that, and by politeness I did not argue with him on that point. Younger mathematicians have seemed to have trouble with that idea too (and a few have obviously not been educated in a way where one respects older people), but if such stability did not hold, then macroscopic measurements could show the same average strains but different average stresses, so that such a material would certainly not be called elastic. I have learned from Ekhard SALJE [Sal] the terms co-elastic or ferroelastic for describing some nonelastic materials of that type, whose strain-stress relations show hysteresis effects; they could be studied in the program that I was describing in the late 70s, either by neglecting time or by considering the evolution problem, for which the mathematical tools are obviously still inadequate (and one must question a modelization using an equilibrium equation of state anyway). In the way some people have applied the approach by Γ-convergence of Ennio DE GIORGI, they have not seen any hysteresis phenomenon, and they have thought of talking about an elastic material, probably because they had implicitly postulated that a material can only exist in the global minimum of its potential energy; this has also happened because they have only considered the weak topology for the strain (while I have advocated utilising both the strain and the stress, and adding the entropies which are valid), and I do not think that it is a defect of Ennio DE GIORGI's Γ-convergence approach, because he had purposely introduced the notion with a general topology, and I believe that it is only as a result of having chosen the wrong topology that the real hysteresis effect has vanished; in order to understand which is the right topology (although there could be more than one), I would prefer to follow my approach, but I do not know the answer to that question at the moment. Peter LAX has noticed that VON NEUMANN had thought that a numerical scheme which presented oscillations was converging to the right quantity, only in a weak topology (and I think that Peter LAX showed that it is not the case), but this remark does not show that my approach was well known. Actually, the usual weak topologies have been used implicitly for a long time, even before the notion of weak convergence had been introduced by F. RIESZ and the spaces named after BANACH, FRÉCHET, HILBERT, LEBESGUE or SOBOLEV had been introduced, because when one replaces a discrete distribution of mass by a (measurable) density of mass, one says that they are very near in a weak topology (and on bounded sets such weak topologies are usually metrizable). It is only when something nonlinear is introduced that my point of view starts being relevant, and although we had been taught that nonlinear mappings are not continuous with respect to weak convergence, we found a way to use weak convergences in nonlinear settings, and to consider other types of weak convergence when necessary.
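That last point can be illustrated numerically (an editorial sketch, not taken from the text; the grid and the Gaussian test function are arbitrary choices): the sequence sin(nx) converges weakly to 0, but its square converges weakly to 1/2, so the map u → u² is not continuous for the weak topology.

```python
import numpy as np

# Test weak-* convergence against a fixed smooth test function phi on (0, 2*pi):
# integral of sin(n x) * phi dx -> 0, while
# integral of sin(n x)^2 * phi dx -> (1/2) * integral of phi dx.
N = 200000
dx = 2.0 * np.pi / N
x = (np.arange(N) + 0.5) * dx              # midpoint grid on (0, 2*pi)
phi = np.exp(-((x - np.pi) ** 2))          # an arbitrary smooth test function

for n in (1, 10, 100):
    lin = np.sum(np.sin(n * x) * phi) * dx
    quad = np.sum(np.sin(n * x) ** 2 * phi) * dx
    print(n, round(lin, 6), round(quad, 6))

half = 0.5 * np.sum(phi) * dx
# As n grows, lin tends to 0 while quad tends to half, not to 0:
# the weak limit of the squares is not the square of the weak limit.
```

The same computation with any other smooth test function gives the same limits, which is exactly what testing a weak topology means.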
Considering sequences is just a mathematical artifice used for identifying the right topology, and it is just like the usual mathematical idea of completing a space; although real numbers seemed well known to all, they were only defined correctly at the time of CANTOR and DEDEKIND, and in one approach a real number is just a Cauchy sequence of rational numbers (more precisely an equivalence class of such Cauchy sequences), and that intellectual exercise is useful when one only knows rational numbers and one wants to prove that there exist real numbers having the properties that we are used to; after that, one may safely work with real numbers. Engineers and physicists may think that mathematicians are a little crazy to spend time proving such things, which they find obvious; after RUSSELL had found his famous paradox, mathematicians realized that one even has to define what a set is; from time to time physicists discard some old theories that they do not believe in anymore, and soon they will have to discard some of the silly games that they have invented in the 20th Century, but mathematicians do that on a continuous basis, being more careful about what they know and what they only conjecture to be true; mathematicians are not supposed to mistake the information coming from an experiment for the properties of solutions of an equation, even when this equation is supposed to be a model for a physical effect which the experiment is supposed to test. Part of what I have been doing in my research work in the 70s and 80s was to give precise mathematical definitions of what mixtures are, and of course it depends upon what properties of the mixture one is interested in; then, after having given precise mathematical definitions of how one relates different scales, one can start discussing some precise questions.
Once the definitions are clearly set, many will say that they are not surprised, but some will prefer to continue using the vague probabilistic language that they have used before, as an incantation. Good physicists do not need the definitions of mathematicians in order to go forward, and if DIRAC has his name attached to a measure (which mathematicians do not call a function), it is not for having been the first to use the notation of functions for a point mass, but for having done strangely efficient formal computations with his “functions”, not being afraid to use their derivatives, and such a bold move was only explained to mathematicians by the theory of distributions of Laurent SCHWARTZ; however, it would be wrong to believe that any formal computation made by a physicist or an engineer must be right, even if it seems useful in a practical problem. The Engineering/Physics literature contains many references to bounds on effective properties for composite materials, where physical intuition often replaces precise mathematical definitions. I suppose that many of the results can easily be understood with minor adaptations in the mathematical framework which has been developed, while some of the results might not fit so easily; I hope to find the time to survey this question in order to point out what remains to be done from a mathematical point of view before one has transformed these results from their status of conjectures into that of theorems. It is useful to observe that some people who were initially trained as mathematicians often do not follow a mathematical approach, even in an area where precise mathematical definitions have been given, and prefer to choose the nonmathematician mode where one does not prove everything that one writes, but nevertheless do not emphasize which of their results are only conjectures, as their former mathematical training would dictate; one should then use a detective mode in order to decide whether one should trust the statements which they write, or check for gaps in their “proofs”, or simply look for their use of terms which have never been defined correctly (my observation is that they often also misattribute the ideas of other mathematicians, perhaps because they resent the fact that others have continued working as mathematicians do). In the early 70s, after my work with François MURAT, I was quite puzzled when I discovered that a textbook by LANDAU and LIFSCHITZ contained a section giving a formula for the conductivity of a mixture. The reason why it was puzzling was that the results that I had obtained with François MURAT (and the results that Antonio MARINO and Sergio SPAGNOLO had obtained before [Mar&Spa]) showed that one cannot deduce the effective conductivity of a mixture from the sole knowledge of the proportions of the materials used, except in one dimension, where it is related to the rule used in electricity that resistances in series must be added; in more than one dimension the shape of the pieces plays a role (for resistances in parallel the conductivities, inverses of the resistivities, must be added, but one does not know in advance where the electric current will flow, in order to consider current tubes where resistances are in series inside the tubes, the tubes themselves being in parallel).
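The series/parallel dichotomy in that argument can be made concrete in a few lines (an editorial sketch with made-up conductivity values, not a formula from the text): for the same proportions, layers placed across the current add resistances in series and give the weighted harmonic mean of the conductivities, while layers placed along the current act in parallel and give the weighted arithmetic mean.

```python
# Effective conductivity of a two-phase layered mixture, volume fraction
# theta of the first material, in the two extreme geometries:
# layers across the current -> resistances in series  -> harmonic mean,
# layers along the current  -> resistances in parallel -> arithmetic mean.
def series_mean(alpha, beta, theta=0.5):
    return 1.0 / (theta / alpha + (1.0 - theta) / beta)

def parallel_mean(alpha, beta, theta=0.5):
    return theta * alpha + (1.0 - theta) * beta

alpha, beta = 1.0, 4.0           # made-up isotropic conductivities
lo = series_mean(alpha, beta)    # harmonic mean: 1.6
hi = parallel_mean(alpha, beta)  # arithmetic mean: 2.5
print(lo, hi)
```

Equal proportions, two different geometries, two different effective conductivities; in dimension greater than one, suitable microgeometries realize every value in between, which is exactly why proportions alone cannot determine the answer.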
Of course, they could have said that their formula was only an approximation (it was easy to guess why they implicitly considered their mixture to be isotropic), they could have told how it compared with experimental measurements, and from a theoretical point of view they could have explained why, if one mixed in equal parts two (isotropic) conductors of conductivities α and β, their formula was not symmetric in α and β. These are details which annoy mathematicians, who are taught to be precise and to write only statements which are proved to be true, but the real difficulty lies elsewhere. LANDAU and LIFSCHITZ talked of a scenario where one first grinds two (or more) different materials into fine powders, then one pours these powders in given proportions into a container before shaking it thoroughly (in order to mix the components of the mixture so that no preferred direction can be discovered in the end), and finally one compresses the result (in order to avoid having too much air as a third component). There is obviously not yet a mathematical formulation for expressing in how many different ways this scenario can be performed, so that one may try to answer the question about the conductivity of the mixture. If one talks about grinding, a specialized engineer may understand different questions, depending upon whether he/she is a specialist of grinding grains or of grinding minerals with various degrees of hardness, but that is an impossible problem for a mathematician at the moment, as one is talking about going beyond the elastic range and one cannot even stop once a crack appears, and one must admit that the models about cracks which mathematicians play with are problematic anyway.
Avoiding even trying to understand what these processes are and forcing probabilistic ideas onto the problem is just a way to sweep the dirt under the rug, which cannot result in the cleaning process that mathematicians are responsible for, and one should keep in mind that not everything can be transformed into a sound mathematical framework. It should be noted that the classical ideas of KOLMOGOROV about Turbulence are quite similar to the argument of LANDAU and LIFSCHITZ, looking for a nonexistent formula giving the effective properties of a mixture. It is clear that in front of a problem that they do not understand some people like to pretend that it cannot be understood, and many feel better by inventing a probabilistic game, without realizing that they can only postulate that it is related to the initial problem, of which they may understand next to nothing. In 1981, at a meeting at , Joseph KELLER talked about random waves, and he mentioned that at some time there had been many articles written about the distribution of wavelengths of the waves at the surface of the sea, but there were no measurements; a few years later some measurements were made by satellites, and they showed that all the articles were wrong (for centimetric wavelengths); I guess that the authors had not considered a realistic version of the equations of hydrodynamics and tried to deduce something about the waves (which is too hard a problem at the moment), but they must have postulated that one of their preferred probabilistic games applied to the problem, which ended up not being the case.
I am not opposed to probabilists, and Probability is just a part of Analysis, and I would advocate a probabilistic proof in a case where it would be simpler than a nonprobabilistic one, but I prefer not to use a probabilistic approach because of the wrong ideology which often comes with using probabilities (which has nothing to do with Probability, but is about faith in dogmas); it is mostly because of a misunderstanding of Physics (and Probability) that probabilistic ideas have become so popular in some circles, but they have not been used correctly. Some people try to prove that Boltzmann equation is a good model, and for that they try to deduce it from some Hamiltonian framework of particles interacting with forces depending upon their distances; this project is necessarily doomed, as I had learned a long time ago from a book by Clifford TRUESDELL and Robert MUNCASTER [Tru&Mun]: if two mathematical observers were looking at the solutions of the Hamiltonian system with a large number of interacting particles, one looking at time flowing forward and the other looking at time flowing backward and reversing the velocities, they would be looking at the same equation, but they could not both deduce that what they observe is near the solution of Boltzmann equation, because of the H-theorem of BOLTZMANN, unless they were observing a situation near a global Maxwellian distribution at equilibrium; in other words, an irreversible process has been postulated in order to write Boltzmann equation, and the equation cannot be used for studying how irreversibility occurs. I have suggested that obtaining the right effective equation could show an effect like the appearance of nonlocal terms by Homogenization, on which I had already written in honour of Ennio DE GIORGI [Tar8]; in this case the two observers would describe their limiting equation, one with an integral term from −∞ to t and the other with an integral term from t to +∞, and these two equations could well describe the same solutions; it would be by getting rid of the nonlocal term and replacing an integro-differential equation by a differential equation that some irreversibility would be introduced into the model.
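The model example of [Tar8] shows how such a nonlocal term appears by Homogenization (recalled here in a simplified editorial form; the precise hypotheses and the formula for the kernel are in that reference): for a sequence of ordinary differential equations in t with oscillating coefficients,

```latex
% sequence of equations, with the same data f and the same initial condition:
\frac{\partial u^\varepsilon}{\partial t} + a^\varepsilon(x)\,u^\varepsilon = f,
\qquad a^\varepsilon \rightharpoonup a^\infty \ \text{in } L^\infty(\Omega)\ \text{weak}\,\star,
\\[4pt]
% effective equation satisfied by the weak \star limit u, with a memory term:
\frac{\partial u}{\partial t} + a^\infty(x)\,u
 - \int_0^t K(x,\,t-s)\,u(x,s)\,ds = f.
```

The kernel K is not determined by the weak ⋆ limit a∞ alone but by finer information on the oscillations of the sequence a^ε, which is one way of seeing that an effective equation may need added variables.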
Of course, integral equations can be studied without probabilistic ideas, and if in some instance the integral equation follows from a probabilistic game, there is no reason to prefer the probabilistic game; in the same way, if one talks about numerical methods, finite differences, finite elements, spectral methods, wavelet transforms, one is just making a list of tools that are useful to know, and none of them is always better than the others whatever the problem is; each method has advantages and defects, and a good numerical analyst should learn what they are in order to choose the most efficient method for the problem that he/she considers at a given time (but some postulate that their method is better than the others). A celebrated tool in probability is the “Brownian motion”, which was developed by WIENER after the initial work of BACHELIER in 1900 and of EINSTEIN in 1905, and it is associated with the heat equation u_t − u_{xx} = 0 and is a limiting case of a random walk which jumps in position, but this is quite different from what BROWN had observed, which were jumps in velocity, more related to the Fokker-Planck equation f_t + v f_x − f_{vv} = 0 and to the Ornstein-Uhlenbeck process. Some people like to derive Fokker-Planck equation from Boltzmann equation, and this is illogical, because Fokker-Planck equation is precisely about the grazing collisions which are not well described by Boltzmann equation, at least as long as one does not know how to avoid GRAD’s angular cut-off hypothesis (there is a recent tendency to wrongly use the term Fokker-Planck equation for a different equation with no velocity variable, often calling it forward Kolmogorov equation too).
Except for the work of BACHELIER, which was related to finance (in 1900!), the physical processes where diffusion equations are used seem to have a first step where a scattering effect occurs (the mathematical analysis of which I have not seen), a second step where a large time behaviour creates a diffusion in velocity, and a third step which creates a diffusion in space by letting a characteristic velocity tend to ∞. Whatever the physical origin of the heat equation is, one may try to solve it in any way one likes, including the use of “Brownian motion”, but I have learned other tools which I find more general; for example, one could use the explicit finite difference scheme (1/Δt)(U_i^{n+1} − U_i^n) − (U_{i−1}^n + U_{i+1}^n − 2U_i^n)/(Δx)² = 0, where Δx and Δt are mesh sizes in space and time and U_i^n is an approximation of U(i Δx, n Δt); one needs a stability condition 2Δt ≤ (Δx)², which is exactly what one needs to have U_i^{n+1} = a U_{i−1}^n + b U_i^n + c U_{i+1}^n with a, b, c ≥ 0, which permits an “interpretation” in terms of expected value for a probabilistic game, which seems to be the basic idea for the “Brownian motion”; however there are also standard Hilbertian methods, which also permit one to approach by discretization many other partial differential equations for which no probabilistic game is known. The point of view of describing the relation between different scales by using convergences of weak type is adapted to working with the partial differential equations of Continuum Mechanics or Physics, and the identification of the adapted topology usually helps in solving the important question of adding the variables which are necessary for describing the oscillating behaviour of solutions, and the effective equation itself. Forgetting about these added variables automatically gives a model where something is missing, and for correcting the defect of the reduced system many advocate the use of probabilities, but this approach does not seem to lead to the discovery of a better new equation. By avoiding the use of probabilities, one may discover why some phenomenon occurs, and when one discovers that some of the rules of can be obtained without probabilities, one understands that it is not the use of probabilities which is wrong, but the postulate that one cannot explain what one observes without the use of probabilities. If a probabilistic game can be found for describing the solution of an integral equation, why forget that there are elementary results in Functional Analysis which can give the same existence result in a quicker way? One should first understand what the physical problems are about, and one will probably be led to discover that there are some microstructures which appear in a natural way, and if Nature needs these microstructures it has to be for a good reason, which is certainly that they offer a much better solution than a boring smooth solution showing no microstructures. I believe that the microstructures which one observes in real problems show some kind of optimality, and that may have been what Ennio DE GIORGI had in mind, but one should certainly not rush to pretend that one knows which functional is being minimized, because that may not be exactly what Nature is doing anyway. It is the goal of Homogenization to describe the oscillations (or concentration effects) in solutions of partial differential equations, and in doing so it is expected that some effective equations will be discovered, which govern the new important quantities that one will have identified for describing these oscillating solutions, and that one will have identified which topologies of weak type are adapted to the problem at hand.
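The explicit scheme discussed above, with its stability condition 2Δt ≤ (Δx)², can be checked in a few lines (an editorial sketch; the grid, the initial data and the periodic boundary conditions are choices made here, not taken from the text): under the stability condition the update is a convex combination of neighbouring values, so nonnegativity, the maximum principle and the total mass are preserved, exactly as for the transition probabilities of a random walk.

```python
import numpy as np

# Explicit scheme for u_t - u_xx = 0 on a periodic grid:
# U_i^{n+1} = lam*U_{i-1}^n + (1 - 2*lam)*U_i^n + lam*U_{i+1}^n,  lam = dt/dx^2.
# The stability condition 2*dt <= dx^2 makes (lam, 1 - 2*lam, lam) a vector of
# probabilities: the random-walk reading behind the "Brownian motion".
dx = 0.01
dt = 0.5 * dx ** 2                      # critical case 2*dt = dx^2
lam = dt / dx ** 2

x = np.arange(0.0, 1.0, dx)
U = np.exp(-100.0 * (x - 0.5) ** 2)     # nonnegative initial data, max = 1
mass0 = U.sum() * dx                    # discrete total mass

for _ in range(1000):
    U = lam * np.roll(U, 1) + (1.0 - 2.0 * lam) * U + lam * np.roll(U, -1)

# Convex nonnegative weights preserve nonnegativity, the maximum principle
# and the total mass, step by step.
print(U.min(), U.max(), U.sum() * dx)
```

In the critical case 2Δt = (Δx)² the middle weight vanishes and the update is the pure coin-flip walk U_i^{n+1} = (U_{i−1}^n + U_{i+1}^n)/2; the same discretization by standard Hilbertian methods needs no probabilistic reading at all.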
It is the goal of Γ-convergence to understand similar questions for sequences converging in adequate topologies while trying to minimize adapted functionals, and it is important to discover interesting classes of functionals and which are the interesting topologies to consider in combination with these functionals. I would prefer to see such mathematical questions arise as being relevant for explaining something from outside Mathematics, for example problems from Continuum Mechanics or Physics, although I know that in the process of developing a mathematical theory one must necessarily study a certain number of academic problems (which one should not mistake for reality). There are issues which at the moment are better studied in one approach than in the other, and Homogenization and Γ-convergence are two points of view on the study of microstructures which do not coincide, and may be complementary in certain situations, but cannot be antagonistic (except maybe for those who are more interested in politics and ideology than in Science). With the restricted point of view which has been used in the applications of Γ-convergence, it is not the same as Homogenization, but once the full strength of Ennio DE GIORGI’s method has been used, by paying more attention to the choice of the topology, it may happen that the difference with Homogenization (in the way François MURAT and I first developed it, where no restrictions like periodicity are imposed) will be small. It is quite common to experience the existence of microstructures in some fluid flows, for example, but it is a very arduous task to propose a model that describes accurately the important effects occurring in these flows.
A first observation concerns the structure of the interface in front of a rainstorm, as I had observed many times after a hot Summer day in the French countryside, many years before I thought of becoming a mathematician; one knows that a storm is coming, although the air is still, perhaps because the pressure is higher than usual, and then one hears the leaves of the trees moving, while the branches stay still; soon after, the small branches start to move too, followed by the large branches a little after, and the whole trees may be in motion when the rain arrives. It clearly suggests that the classical idea of a sharp interface, with some partial differential equations being satisfied on each side and with some boundary conditions being imposed on the “interface”, might not be so efficient for describing the effects occurring in that living layer, with small vortices on the dry side and large vortices on the wet side. A second observation concerns the structure of the “wind”, as I had observed twenty years ago, on a weekend when I went sailing with some friends, between La Rochelle and Île de Ré (which is no longer an island now, as a bridge connects it to the continent); the morning had provided us with what one calls “calme plat” in French: there was no wind, and the surface of the sea was extremely smooth, showing only a long swell (“houle” in French), which combined with the steady movement sustained by the small engine of the boat to produce a beginning of seasickness; fortunately, it did not last too long, because after a while we saw what one calls “une risée” in French (my English dictionary translates it as “light squall”), the wind waiting for us! It is an amazing fact to come from the windless side with a smooth sea surface to the place where the wind is, with the surface of the sea all wrinkled with wavelengths of the order of 5 to 10 centimeters (precisely the centimetric wavelengths where the articles written by probabilists about waves were all wrong), and when one crosses the transition line (which seemed stationary, but it might have been moving at a much slower pace than the boat, which was carried by its small engine), the sails inflated, and sailing started. Again, I will not dare to propose a functional for which minimizing sequences show the precise microstructures which Nature uses in such a marvelous way, because that is probably not the reason they occur, and considering that these microstructures are optimal should be thought of as an approximation. It is useful to familiarize oneself with partial differential equations from Continuum Mechanics and Physics, in order to understand the physical meanings which can be attached to a given partial differential equation, even if one is only interested in its mathematical properties. An equation may have lots of properties, some of them being more or less intuitive depending upon one’s training, and the same equation may arise as a model in various applications, but some quantities may be more natural to introduce in one model than in another; for example, the equation u_t − (u^m)_{xx} = 0 is used as a model in porous media for m > 1 (with u ≥ 0 denoting a density of mass), and the quantity M = ∫_ℝ u(x, t) dx is independent of t and is the total mass, while the quantity ∫_ℝ x u(x, t) dx is also independent of t and equals M x_*, where x_* is the abscissa of the center of gravity of the distribution of mass; however, the case m = 1 is the heat equation, and in the interpretation where u is a temperature (out of equilibrium), no physical meaning has been attached to ∫_ℝ u(x, t) dx and ∫_ℝ x u(x, t) dx, although these quantities are independent of t.
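The invariance of these two quantities follows from two integrations by parts (assuming, as usual, that u^m and its derivative decay at infinity):

```latex
\frac{d}{dt}\int_{\mathbb{R}} u\,dx
  = \int_{\mathbb{R}} (u^m)_{xx}\,dx
  = \Big[(u^m)_x\Big]_{-\infty}^{+\infty} = 0,
\qquad
\frac{d}{dt}\int_{\mathbb{R}} x\,u\,dx
  = \int_{\mathbb{R}} x\,(u^m)_{xx}\,dx
  = \Big[x\,(u^m)_x - u^m\Big]_{-\infty}^{+\infty} = 0.
```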
Using physical examples in Partial Differential Equations is a pedagogical approach similar to using pictures in Geometry, and it does not give the physical situations more value than that of hints: a drawing is not a proof, but it is often a sufficient hint for a trained person to tell how to write a proof, if needed. Of course, I imagine that the students have already heard about the corresponding physical theories, and would like to understand what they have been taught in their Physics courses in a more mathematical way; this is often not the case nowadays, as Mathematics and Physics are often taught in too ideological a manner, and very few students have the motivation to acquire a vast knowledge, and they may have problems understanding examples which could have been useful if they had learned more.

Conclusion. I only read recently the motto of Hugues de Saint Victor, “Learn everything, and you will see afterward that nothing is useless”, and I would have liked to discuss it with Ennio DE GIORGI. I think that one must take the time to understand how the various pieces that one has learned fit together before going a little further, and what I have presented in this text is the description of a few pieces of the puzzle that one should try to assemble; I plan to write other texts, more technical, showing how some of these pieces can be put together; no one, I think, has a clear global view of what the assembled puzzle will look like, but in the end it should be easy to recognize the part of it which was Ennio DE GIORGI’s vision.

Acknowledgements.

My research work is supported by CARNEGIE-MELLON University; I still consider this work as part of grant DMS-97.04762 of the National Science Foundation, which I also want to thank for its support of visitors and conferences at the Center for Nonlinear Analysis; a first version of the material presented here (with more biographical information) was prepared for a Summer school of the Center for Nonlinear Analysis, and can be found at http://www.math.cmu.edu/public/cna/publications.html.

References.

[Ber&Las] Henri BERLIOCCHI & Jean-Michel LASRY, “Intégrandes normales et mesures paramétrées en calcul des variations,” Bull. Soc. Math. France 101 (1973), 129–184. [Cha] James CHALLIS (1803-1882), “On the velocity of sound,” Philos. Mag. XXXII (1848), 494–499. [Cou&Fri] Richard COURANT (1888-1972) & Kurt Otto FRIEDRICHS (1901-1982), Supersonic Flow and Shock Waves, Interscience Publishers, Inc., New York, N.Y., 1948. xvi+464 pp. Reprinted as Applied Mathematical Sciences, Vol. 21, Springer-Verlag, New York-Heidelberg, 1976. xvi+464 pp.

[Daf] Constantine M. DAFERMOS, Hyperbolic conservation laws in continuum physics, Grundlehren der Mathematischen Wissenschaften, 325, Springer-Verlag, Berlin, 2000. xvi+443 pp. ISBN 3-540-64914-X. [DG&Spa] Ennio DE GIORGI (1928-1996) & Sergio SPAGNOLO, “Sulla convergenza degli integrali dell’energia per operatori ellittici del secondo ordine,” Boll. Un. Mat. Ital. (4) 8 (1973), 391–411. [G-H] Alain GHOUILA-HOURI (-196?), “Sur la généralisation de la notion de commande d’un système guidable,” Rev. Française Informat. Recherche Opérationnelle 1 (1967), no. 4, 7–32. [Hug] Pierre Henri HUGONIOT (1851-1887), “Sur la propagation du mouvement dans les corps et spécialement dans les gaz parfaits,” I, II, J. Éc. Polytech. 57 (1887), 3–97, 58 (1889), 1–125. Classic papers in shock compression science, 161–243, 245–358, High-press. Shock Compression Condens. Matter, Springer, New York, 1998. [Kno&Stu] Robin J. KNOPS & Charles A. STUART, “Quasiconvexity and uniqueness of equilibrium solutions in nonlinear elasticity,” Arch. Rational Mech. Anal. 86 (1984), no. 3, 233–249. [Lax1] Peter D. LAX, “Weak solutions of nonlinear hyperbolic equations and their numerical computation,” Comm. Pure Appl. Math. 7 (1954), 159–193. [Lax2] LAX Peter D., “Hyperbolic systems of conservation laws. II,” Comm. Pure Appl. Math. 10 (1957), 537–566. [Lax&Mor&Phi1] Peter D. LAX & Cathleen Synge MORAWETZ & Ralph Saul PHILLIPS (1913-1998), “The exponential decay of solutions of the wave equation in the exterior of a star-shaped obstacle,” Bull. Amer. Math. Soc. 68 (1962), 593–595. [Lax&Mor&Phi2] LAX Peter D. & MORAWETZ Cathleen Synge & PHILLIPS Ralph Saul, “Exponential decay of solutions of the wave equation in the exterior of a star-shaped obstacle,” Comm. Pure Appl. Math. 16 (1963), 477–486. [Lio1] Jacques-Louis LIONS (1928-2001), Contrôle optimal de systèmes gouvernés par des équations aux dérivées partielles, Dunod - Gauthier-Villars, Paris, 1968. xiii+426 pp. Translated into English as Optimal control of systems governed by partial differential equations, Die Grundlehren der mathematischen Wissenschaften, Band 170, Springer-Verlag, New York-Berlin, 1971. xi+396 pp. [Lio2] LIONS Jacques-Louis, “Asymptotic behaviour of solutions of variational inequalities with highly oscillating coefficients,” Applications of methods of functional analysis to problems in mechanics (Joint Sympos., IUTAM/IMU, Marseille, 1975), pp. 30–55, Lecture Notes in Math., 503, Springer, Berlin, 1976. [Mar&Spa] Antonio MARINO & Sergio SPAGNOLO, “Un tipo di approssimazione dell’operatore ∑_{ij} D_i(a_{ij} D_j) con operatori ∑_j D_j(b D_j),” Ann. Scuola Norm. Sup. Pisa Cl. Sci. (3) 23 (1969), 657–673. [M-L&Poin] La correspondance entre Henri Poincaré et Gösta Mittag-Leffler, Publications des Archives Henri-Poincaré, Birkhäuser Verlag, Basel, 1999. 421 pp. ISBN 3-7643-5992-7. [Mod&Mor] Luciano MODICA & Stefano MORTOLA, “Un esempio di Γ−-convergenza,” Boll. Un. Mat. Ital. B (5) 14 (1977), no. 1, 285–299. [Mor1] Cathleen Synge MORAWETZ, “The decay of solutions of the exterior initial-boundary value problem for the wave equation,” Comm. Pure Appl. Math. 14 (1961), 561–568. [Mor2] MORAWETZ Cathleen Synge, “The limiting amplitude principle,” Comm. Pure Appl. Math. 15 (1962), 349–361. [Mos] Umberto MOSCO, “Convergence of convex sets and of solutions of variational inequalities,” Advances in Math. 3 (1969), 510–585. [Mur] François MURAT, “H-convergence,” Séminaire d’analyse fonctionnelle et numérique, Université d’Alger, 1977-78. Translated into English as MURAT F. & TARTAR L., “H-convergence,” Topics in the mathematical modelling of composite materials, 21–43, Progr. Nonlinear Differential Equations Appl., 31, Birkhäuser Boston, Boston, MA, 1997. [Pois] Siméon Denis POISSON (1781-1840), “Mémoire sur la théorie du son,” J. Éc. Polytech. 14 (1808), 319–392. Classic papers in shock compression science, 3–65, High-press. Shock Compression Condens. Matter, Springer, New York, 1998.
[Ran] William John Macquorn RANKINE (1820-1872), “On the thermodynamic theory of waves of finite longitudinal disturbance,” Philos. Trans. 160 (1870), part II, 277–288. Classic papers in shock compression science, 133–147, High-press. Shock Compression Condens. Matter, Springer, New York, 1998.
[Rie] Georg Friedrich Bernhard RIEMANN (1826-1866), “Über die Fortpflanzung ebener Luftwellen von endlicher Schwingungsweite,” Abhandlungen der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse 8 (1860), 43. Gesammelte Werke, 1876, p. 144.
[Sal] Ekhard K. H. SALJE, Phase transitions in ferroelastic and co-elastic crystals: an introduction for mineralogists, material scientists and physicists, Cambridge University Press, 1993. 296 pp. ISBN 0-521-42936-6.
[S-P1] Enrique Evariste SANCHEZ-PALENCIA, Nonhomogeneous media and vibration theory, Lecture Notes in Physics, 127. Springer-Verlag, Berlin-New York, 1980. ix+398 pp. ISBN 3-540-10000-8.
[S-P2] SANCHEZ-PALENCIA Enrique Evariste, “Solutions périodiques par rapport aux variables d’espace et applications,” C. R. Acad. Sci. Paris Sér. A-B 271 (1970), A1129–A1132.
[Sim] Leon M. SIMON, “On G-convergence of elliptic operators,” Indiana Univ. Math. J. 28 (1979), no. 4, 587–594.
[Spa1] Sergio SPAGNOLO, “Sul limite delle soluzioni di problemi di Cauchy relativi all’equazione del calore,” Ann. Scuola Norm. Sup. Pisa (3) 21 (1967), 657–699.
[Spa2] SPAGNOLO Sergio, “Sulla convergenza di soluzioni di equazioni paraboliche ed ellittiche,” Ann. Scuola Norm. Sup. Pisa (3) 22 (1968), 571–597.
[Sto1] Sir George Gabriel STOKES (1819-1903), “On a difficulty in the theory of sound,” Philos. Mag. XXXIII (1848), 349–356. Classic papers in shock compression science, 71–79, High-press. Shock Compression Condens. Matter, Springer, New York, 1998.
[Sto2] Sir George Gabriel STOKES, Mathematical and physical papers, Reprinted from the original journals and transactions, with additional notes by the author. Cambridge, University Press, 1880-1905.
[Str] Walter Alexander STRAUSS, “Local exponential decay of a group of conservative nonlinear operators,” J. Functional Analysis 6 (1970), 152–156.
[Tar1] Luc Charles TARTAR, “An Introduction to the Homogenization Method in Optimal Design,” Optimal Shape Design, Tróia, Portugal, 1998, 47–156, Edited by A. Cellina and A. Ornelas. Lecture Notes in Math., Vol. 1740, Fondazione C.I.M.E. Springer-Verlag, Berlin; Centro Internazionale Matematico Estivo, Florence, 2000.
[Tar2] TARTAR Luc Charles, “Compensated compactness and applications to partial differential equations,” Nonlinear analysis and mechanics: Heriot-Watt Symposium, Vol. IV, pp. 136–212, Res. Notes in Math., 39, Pitman, Boston, Mass.-London, 1979.
[Tar3] TARTAR Luc Charles, “Weak convergence in nonlinear partial differential equations,” Existence Theory in Nonlinear Elasticity, 209–218. The University of Texas at Austin, 1977.
[Tar4] TARTAR Luc Charles, “Nonlinear constitutive relations and homogenization,” Contemporary developments in continuum mechanics and partial differential equations (Proc. Internat. Sympos., Inst. Mat., Univ. Fed. Rio de Janeiro, Rio de Janeiro, 1977), pp. 472–484. North-Holland Math. Studies, 30, North-Holland, Amsterdam-New York, 1978.
[Tar5] TARTAR Luc Charles, “H-measures, a new approach for studying homogenisation, oscillations and concentration effects in partial differential equations,” Proc. Roy. Soc. Edinburgh Sect. A 115 (1990) no. 3-4, 193–230.
[Tar6] TARTAR Luc Charles, “Problèmes de contrôle des coefficients dans des équations aux dérivées partielles,” Control theory, numerical methods and computer systems modelling (Internat. Sympos., IRIA LABORIA, Rocquencourt, 1974), pp. 420–426. Lecture Notes in Econom. and Math. Systems, Vol. 107, Springer, Berlin, 1975. Translated into English as MURAT F. & TARTAR L., “On the control of coefficients in partial differential equations,” Topics in the mathematical modelling of composite materials, 1–8, Progr. Nonlinear Differential Equations Appl., 31, Birkhäuser Boston, Boston, MA, 1997.
[Tar7] TARTAR Luc Charles, “Quelques remarques sur l’homogénéisation,” Functional analysis and numerical analysis (Tokyo and Kyoto, 1976), 469–481, Japan Soc. Promotion of Sci., Tokyo, 1978.
[Tar8] TARTAR Luc Charles, “Nonlocal effects induced by homogenization,” Partial differential equations and the calculus of variations, Vol. II, 925–938. Progr. Nonlinear Differential Equations Appl., 2, Birkhäuser Boston, Boston, MA, 1989. (Book dedicated to Ennio DE GIORGI).
[Tru&Mun] Clifford Ambrose III TRUESDELL (1919-2000) & Robert G. MUNCASTER, Fundamentals of Maxwell’s Kinetic Theory of a Simple Monatomic Gas, Treated as a Branch of Rational Mechanics, Academic Press, New York, 1980.
[War] Jack WARGA, “Relaxed variational problems,” “Necessary conditions for minimum in relaxed variational problems,” J. Math. Anal. Appl. 4 (1962), 111–128, 129–145.
[You1] Laurence Chisholm YOUNG (1905-2000), “Generalized curves and the existence of an attained absolute minimum in the calculus of variations,” C. R. Soc. Sci. Lett. Varsovie, Classe III 30 (1937), 212–234.
[You2] YOUNG Laurence Chisholm, “Generalized surfaces in the calculus of variations,” I, II. Ann. of Math. (2) 43 (1942), 84–103, 530–544.
[You3] YOUNG Laurence Chisholm, Lectures on the Calculus of Variations and Optimal Control Theory, W. B. Saunders Co., Philadelphia-London-Toronto, Ont., 1969, xi+331 pp.