Statistical Models of Cloud-Turbulence Interactions
by
Christopher A. M. Jeffery
M.Sc., University of British Columbia, 1996
A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF
THE REQUIREMENTS FOR THE DEGREE OF Doctor of Philosophy
in
THE FACULTY OF GRADUATE STUDIES
(Department of Earth and Ocean Sciences)
We accept this thesis as conforming to the required standard
The University of British Columbia
September 2001
© Christopher A. M. Jeffery, 2001

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.
Abstract
The application of statistical turbulence theory to the study of atmospheric clouds has a long history that traces back to the pioneering work of L. F. Richardson in the 1920s. At a phenomenological level, both atmospheric clouds and turbulence are now well understood, but analytic theories with the power to predict as well as explain are still lacking. This deficiency is notable because the prediction of statistical cloud change in response to anthropogenic forcing is a preeminent scientific challenge in atmospheric science. In this dissertation, I apply the statistical rigor of new developments in passive scalar theory to problems in cloud physics at small scales, O(10 cm), where a white-in-time or δ-correlated closure is asymptotically exact, and at large scales, O(100 km), where a statistical approach towards unresolved cloud variability is essential. Using either the δ-correlated model or a self-consistent statistical approach I investigate (i) the preferential concentration or inertial clumping of cloud droplets; (ii) the effect of velocity field intermittency on clumping; (iii) the small-scale spatial statistics of condensed liquid water density and (iv) the large-scale parameterization of unresolved low-cloud physical and optical variability. My investigations, (i) to (iv), lead to the following conclusions:
Preferential Concentration: Inertial particles (droplets) preferentially concentrate at scales ranging from 60η at St ≈ 0.2 to 8η at St ≈ 0.6, where η is the Kolmogorov length and St is the Stokes number. Clumping becomes significant at St ≈ 0.3.
Effect of Intermittency: An effective Stokes number, St_eff = St (F/3)^{1/2}, where F is the longitudinal velocity-gradient flatness factor (kurtosis), explicitly incorporates velocity-gradient intermittency (i.e. non-Gaussian statistics) into the St-dependence of particle clumping. In the atmospheric boundary layer, St_eff ≈ 2.7 St. Intermittency effects significantly increase the degree of preferential concentration of large cloud droplets.
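As a quick numerical illustration (my own sketch, not code from the thesis): the quoted boundary-layer factor of 2.7 corresponds to a flatness factor F ≈ 22, since (22/3)^{1/2} ≈ 2.7, while F = 3 recovers the Gaussian limit St_eff = St.

```python
from math import sqrt

def effective_stokes(st, flatness):
    """St_eff = St * (F/3)**0.5, where F is the longitudinal
    velocity-gradient flatness factor (F = 3 for Gaussian
    statistics, so St_eff reduces to St in that limit)."""
    return st * sqrt(flatness / 3.0)

# F ~ 22 is an illustrative value chosen to reproduce the quoted
# boundary-layer factor St_eff ~ 2.7 St; it is not taken from the text.
print(effective_stokes(0.3, 3.0))   # Gaussian limit: 0.3
print(effective_stokes(0.3, 22.0))  # boundary layer: ~0.81
```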
Cloud Spatial Scaling: Density fluctuations of an inert passive scalar are typically spatially homogeneous, whereas root-mean-square cloud liquid water (q_l) fluctuations increase linearly with height above cloud base. As a result, the q_l spectral density is axisymmetric and complex. A model of low-cloud viscous-convective statistics, where axisymmetric/non-homogeneous production of scalar covariance due to condensation/evaporation is balanced by an axisymmetric rotation, reproduces recent experimental measurements [Davis et al., 1999].
Low-cloud Optical Properties: The assumption of height-independence in unresolved saturation vapour density fluctuations (s) and the introduction of unresolved cloud-top height fluctuations (z'_top) into a statistical cloud scheme couple parameterized subgrid low-cloud physical and optical variability. Analytic relationships between optical depth, cloud fraction and (s, z'_top) provide a convenient framework for a GCM cloud parameterization that prognoses both the mean and variance of optical depth.
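To make the idea of a statistical cloud scheme concrete, here is a minimal sketch (my own illustration, not the scheme developed in this thesis): if the unresolved saturation excess s is assumed Gaussian within a grid box, the cloud fraction is simply the probability that s exceeds zero.

```python
from math import erf, sqrt

def gaussian_cloud_fraction(mean_s, sigma_s):
    """Cloud fraction C = P(s > 0) for a Gaussian subgrid
    distribution of the saturation excess s, given the
    grid-box mean and standard deviation of s."""
    q1 = mean_s / sigma_s  # normalized saturation excess
    return 0.5 * (1.0 + erf(q1 / sqrt(2.0)))

print(gaussian_cloud_fraction(0.0, 0.2))    # grid-box mean just saturated: C = 0.5
print(gaussian_cloud_fraction(-0.4, 0.2))   # subsaturated mean: small but nonzero C
```

The essential point, shared with the scheme of Chapter 6, is that subgrid variability produces partial cloudiness even when the grid-box mean is subsaturated.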
Contents
Abstract ii
Contents iv
List of Figures vii
Acknowledgements viii
Dedication ix
1 Introduction 1
  1.1 A short anecdotal history of "turbulence" 3
  1.2 Problems addressed in this dissertation 12
    1.2.1 Cloud droplet number concentration inhomogeneities 12
    1.2.2 Cloud liquid water density inhomogeneities 14
    1.2.3 Unresolved low cloud optical properties 14
2 The δ-Correlated Model 16
  2.1 Introduction 16
  2.2 Hamiltonian Fluid Mechanics 17
    2.2.1 Lagrangian formulation 18
    2.2.2 Eulerian formulation 19
  2.3 δ-Correlated Model 24
  2.4 Mean Scalar Concentration 26
  2.5 Mean Scalar Covariance 29
  2.6 Summary: 1968-present 30
3 Spatial Statistics of Inertial Particles 35
  3.1 Introduction 35
  3.2 Correlation function 36
  3.3 Spectral density 38
    3.3.1 Small-scale solution (k ≫ η⁻¹) 39
    3.3.2 Large-scale solution (0.1η⁻¹ < k < …) 39
  3.4 Analysis 40
  3.5 Spectra and Discussion 42
  3.6 Experimental verification 43
  3.7 Summary 45
4 Intermittency and Preferential Concentration 47
  4.1 Introduction 47
  4.2 St-Re_λ dependence 48
  4.3 The Shaw Model and Vortex Tubes 50
  4.4 Experimental verification 52
  4.5 Summary 52
5 Spatial Statistics of Cloud Droplets 54
  5.1 Introduction 54
  5.2 Condensation/Evaporation Source Term 57
  5.3 CE in the Batchelor limit 59
  5.4 Axisymmetric Kraichnan Transfer 60
    5.4.1 Viscous regime solution 60
    5.4.2 Inertial-Convective regime solution 62
  5.5 The axisymmetric source f 63
  5.6 Determination of £ 65
  5.7 Spectra and discussion 66
  5.8 Experimental verification 69
  5.9 Summary 70
6 Unresolved Variability of Low Cloud 72
  6.1 Introduction 72
  6.2 Statistical Cloud Schemes 74
  6.3 Shortwave Optical Depth Formulation 75
  6.4 Unresolved Low Cloud Optical Variability 78
  6.5 Low Cloud Optical Properties 82
  6.6 r_eff-τ relationships in GCMs 82
  6.7 Low Cloud Radiative Feedback 86
  6.8 Experimental verification 88
  6.9 Summary 89
7 Summary 91
Bibliography 96
Appendix A List of Principal Symbols 115
Appendix B Triangle Distributions 118
  B.1 Smith [1990]'s triangle distribution 118
  B.2 Modified triangle distribution 118
Appendix C Calculation of low-cloud optical properties in Sec. 6.5 121
  C.1 Calculation of R 121
  C.2 Calculation of e 122
List of Figures
2.1 Temperature and velocity spectra from Grant et al. [1968] 32
3.1 Accuracy of the fourth-order approximation Q(k) 41
3.2 Plot of the scale break k_b and the self-excitation χ_uc(∞, St)/χ_uc − 1 44
3.3 Effect of particle inertia on the scalar spectrum 45
3.4 Characteristic scale of preferential concentration 46
5.1 1D cloud liquid-water scalar spectrum from SOCEX 56
5.2 Effect of condensation/evaporation on the scalar spectrum 67
5.3 Comparison of SOCEX data with the predicted 1D scalar spectrum 68
6.1 Comparison of A_c vs u using Landsat data 81
6.2 Zonal accuracy of the PPH approximation 83
6.3 Effect of model vertical resolution on the r_eff-τ relationship 85
B.l The modified triangle distribution 120
Acknowledgements
In the spring of 1993 I wandered into Phil Austin's office looking for employment for the summer. At the time, I had no noteworthy skills, little knowledge of computers, and only a cursory understanding of clouds. Moreover, I had plans to study biophysics in the fall. Atmospheric science, quite frankly, was not in my plans for the future. It was to my great fortune that Phil took pity on my impoverished state and offered me a programming job for the summer. Merely hours after formalizing my employment, he was quite surprised, I imagine, to learn that I had confused my knowledge of VMS with UNIX, and thus, my meager computer expertise was quite useless. But Phil persevered through constant interruptions and my intolerably slow progress, and thus began our friendship that eventually led to his supervision of this thesis.

To state that Phil has been an exemplary supervisor is an understatement; the knowledge, advice and support that I received from Phil over the last four years is the greatest fortune of my graduate career. His constant urgings that I should study the cloud parameterization literature—particularly Barker [1996b] and Considine et al. [1997]—proved invaluable and led us to develop a new statistical treatment of cloud optical variability [Jeffery and Austin, 2001b]. For all that Phil has done for me I am sincerely grateful.

Yet any success that I might have achieved alone with Phil would ring hollow were it not for the love and support of my wife, Nicole. Through the trials and tribulations of my graduate career she has been a pillar of both intellectual and emotional support; both her careful reading of my articles and this thesis, and the sacrifices she has made while pursuing her own Ph.D. were invaluable. Nothing has bolstered my spirit more over the last four years than the time spent with my wife and daughter. I am truly indebted.
I thank my friends Brian, Andres, Joel and Tom for good times, and Vincent for the answers to all my questions about computers. I am most grateful to my parents, not only for beginnings, but for the steady support they always offer. I owe special thanks to my committee Douw Steyn and Roland Stull and to my colleagues Marcia Baker, Howard Barker, Anthony Davis, Wojtek Grabowski, Ray Shaw and Katepalli Sreenivasan.
CHRISTOPHER A. M. JEFFERY
The University of British Columbia September 2001
I dedicate this thesis to my daughter Sophia and to the late Lewis F. Richardson.

Chapter 1
Introduction
Big whorls have little whorls,
Which feed on their velocity;
And little whorls have lesser whorls,
And so on to viscosity
(in the molecular sense).
    [Richardson, 1922]
Although the men whose work heralded the beginnings of turbulence theory undoubtedly turned to the atmosphere for inspiration—Reynolds, Taylor, Prandtl and von Karman come to mind—it is Lewis Fry Richardson who first recognized the important and central role of statistical turbulence theory in atmospheric studies. Thus in 1922, when Richardson's book Weather Prediction by Numerical Process was first published, the marriage of theoretical turbulence and atmospheric science research effectively began. Initially criticized by his contemporaries for containing "too much heterogeneous material to be satisfactory" [Platzman, 1967], Richardson's book now stands as a monument to the genius and prescience of its author and as a milestone introducing two fundamental paradigms that are among the most important in atmospheric research: numerical weather prediction and the turbulent energy cascade. The latter he expressed poetically on page 66 of his book in an oft quoted imitation of Jonathan Swift (reproduced above).

The renaissance of atmospheric turbulence, one can argue, occurred during the 1970s. In those days the efforts of atmospheric scientists like Leith, Lumley, Herring, Hill, Van Atta and Wyngaard, engineers like Corrsin and Lundgren, mathematicians like Mandelbrot and Rosenblatt and physicists like Burgers, Kraichnan and Lorenz crossed disciplines: Kraichnan published in J. Atmos. Sci. [Leith and Kraichnan, 1972; Herring et al., 1973; Kraichnan, 1976], repercussions from the Kansas and Minnesota experiments [Kaimal and Wyngaard, 1990] were felt in the turbulence research community at large [Monin and Yaglom, 1975], and a significant cross-fertilization of ideas occurred at meetings where members from all four groups were present [Rosenblatt and Van Atta, 1972]. Nowhere in atmospheric science did the influence of the "other" turbulence communities have a greater impact than in boundary layer meteorology (BLM). By the end of the decade BLM was considered a success, along with its methodology: "... the patient, systematic attack of fundamental problems through a combination of theory and experiment and the expression of the findings in simple, effective terms that make them useful in a wide range of applications" [Kaimal and Wyngaard, 1990].

In modern times the marriage of turbulence and atmospheric science research has faltered. The theoretical study of turbulence has become highly specialized, while the BLM community has turned its attention to less idealized problems that are considered too complex to yield to the analytic machinery of theoretical turbulence. The attitude of the atmospheric science community towards the prospect of analytic advances in turbulence theory is best summarized by a recent quote from John C. Wyngaard, one of the principal investigators in the Kansas and Minnesota experiments [Wyngaard, 1998a]:
We are still searching for a solution to the "turbulence problem" .... Years ago, every time I saw him my father-in-law would ask me if I had solved the problem I was working on. He eventually stopped asking. The renormalization group, or RG, is the latest in a long line of failed attempts at a solution (Eyink, 1994).
Although Wyngaard [1998a]'s comments are directed primarily at the "velocity problem"—determination of the statistical properties of a turbulent flow—they imply a similar pessimism towards the determination of the statistical properties of a pollutant in the flow: the "passive scalar problem". I believe we have some cause for optimism. Although a deep theoretical understanding of a turbulent velocity field remains elusive, the problem of predicting the statistical properties of a passive tracer or scalar in a turbulent flow has recently yielded to analytic attack. Scale renormalization leads to a white-in-time or "δ-correlated" velocity field that is an exact statistical surrogate for the real field at the smallest turbulent scales and a reasonable approximation at larger scales. In this δ-correlated limit, analytic expressions for the intermittency correction to the normal or similarity scaling exponents of the passive scalar structure functions have been found [Shraiman and Siggia, 1994; Chertkov et al., 1995b; Gawedzki and Kupiainen, 1995; Balkovsky and Lebedev, 1998], whereas the theory of a turbulent velocity field has no such realistic yet simplified analytic model. Moreover, the prospect of extending δ-correlated theory in the near future to include a non-white temporal decorrelation is encouraging. In addition, the renormalization group method that Wyngaard [1998a] chastises for its failure with the velocity problem has had some success in the δ-correlated limit [Adzhemyan et al., 1998; Antonov, 1999; Adzhemyan et al., 2001]. Certainly, the velocity problem and the passive scalar problem are two very different creatures; the Navier-Stokes equation is non-linear w.r.t. the velocity while the advection-diffusion equation is linear w.r.t. the pollutant concentration.
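The linearity distinction can be written out explicitly (these are the standard equations, reproduced here for reference): for velocity u with viscosity ν, pressure p and density ρ, and a passive scalar θ with molecular diffusivity κ,

```latex
% Navier--Stokes: quadratically nonlinear in the unknown u
\[
  \partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}
    = -\tfrac{1}{\rho}\nabla p + \nu\,\nabla^2\mathbf{u},
  \qquad \nabla\cdot\mathbf{u} = 0,
\]
% advection--diffusion: linear in the unknown scalar theta
% for a prescribed velocity field u
\[
  \partial_t \theta + \mathbf{u}\cdot\nabla\theta = \kappa\,\nabla^2\theta .
\]
```

Because θ enters its equation only linearly, closing the scalar statistics requires averaging over the statistics of a given u, which is what makes a statistical surrogate velocity such as the δ-correlated field tractable.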
In this dissertation, I take a few small steps towards the goal of reappraising and advancing theoretical cloud physics using the analytic techniques that have proven successful with the inert passive scalar problem. While cloud liquid water is treated as a reactive scalar, one may ignore, in a first-order iteration, the non-passive coupling of atmospheric turbulence with latent heating or cooling that accompanies phase change. From the outset I acknowledge that much work remains to be done to accomplish the stated goal. Certainly, say, an instanton for the cloud droplet number distribution or an expression for the tails of a stratiform cloud liquid water density function would constitute significant advancements in theoretical cloud physics. I have done neither. What I have accomplished over the last four years is to identify specific problems in cloud physics that are particularly amenable to analytic attack and to develop models that either provide new information at small scales—on the order of centimeters—or better predictions at large scales—on the order of tens of kilometers. In this work the δ-correlated model takes center stage because it is both highly accurate at small scales and is also the paradigm for new theories of cloud physics that I hope to develop later in my career. Although my work on large scale cloud parameterizations does not explicitly involve the δ-correlated closure, I hope that the reader recognizes a certain "continuity of approach" when we move from an application of the δ-correlated model in Chapter 5 to the statistical treatment of unresolved cloud optical variability in Chapter 6. The methodology and approach that I learned from the work of others in the passive scalar community motivated the development of my statistical large-scale parameterization.

The rest of this chapter is organized as follows. First I present a short anecdotal history of "turbulence" in which I have assembled a collection of some of the more interesting and amusing thoughts of the major players in this drama from the last century.
Although a thorough account of the history of turbulence theory would be a worthy and difficult challenge, I will not rise to the occasion here, and rather refer readers to the recent resource letter by Nelkin [2000]. In Sec. 1.2 I sketch the problems that are addressed in this work and the progress I have made towards their solution.
1.1 A short anecdotal history of "turbulence"
A contemporary of Richardson, G. I. Taylor, described Richardson as "a very interesting and original character who seldom thought on the same lines as his contemporaries and often was not understood by them" [Taylor, 1959]. In the winter of 1917 Richardson was driving an ambulance for a French infantry division on the Western Front. Under these appalling conditions, he had the buoyancy of spirit to carry out one of the most remarkable and prodigious calculation feats in the history of weather prediction: the numerical calculation by hand of the change in pressure and wind for a six-hour interval in an area (2 cells, each 200 × 200 km²) of central Europe. Although the calculation failed to predict the weather—his equations had the barometer rising fast enough to make one's ears pop—in an act of genius, Richardson did foresee the future of weather prediction. Amazingly, it would take over thirty years and the arrival of electronic computers before Charney, Fjortoft and von Neumann completed the first "successful" numerical weather prediction [Charney et al., 1950].

Richardson would not be deterred by the failure of his initial attempt at numerical weather prediction. Near the end of Weather Prediction by Numerical Process he describes a phantasmagorical vision of a "weather factory":
Imagine a large hall like a theater, except that the circles and galleries go right round through the space usually occupied by the stage. The walls of this chamber are painted to form a map of the globe. ... Myriad computers are at work on the weather of the part of the map where each sits, but each computer attends to only one equation or part of an equation. The work of each region is coordinated by an official of higher rank. ... From the floor of the pit, a tall pillar rises to half the height of the hall. It carries a large pulpit on its top. In this sits the man in charge of the whole theater; he is surrounded by several assistants and messengers. One of his duties is to maintain a uniform speed of progress in all parts of the globe. In this respect, he is like the conductor of an orchestra in which the instruments are slide rules and calculating machines. But instead of waving a baton, he turns a beam of rosy light on any workers who are running ahead of the rest and a beam of blue light on those who are behind. ... In a neighboring building, there is a research department where they invent improvements. But there is much experimenting on a small scale before any change is made in the complex routine of the computing theater. In a basement, an enthusiast is observing eddies in the liquid lining of a huge spinning bowl, but so far, the arithmetic proves the better way. ... Outside are playing fields, houses, mountains, and lakes, for it was thought that those who compute the weather should breathe of it freely.
The brilliance of Richardson's vision is highlighted by the anachronisms; many first-time modern readers may not realize that Richardson's "computers" are, in fact, human beings until the last line of the passage! Perhaps we can catch a glimpse of a Message-Passing Interface (MPI) in Richardson's conductor.

The above passage also illustrates the important role that turbulence played in Richardson's vision of weather prediction. The "enthusiast" observing eddies would no doubt be concerned with the cascade of energy from eddy to eddy and with the bulk dispersive properties of the flow in the bowl. Richardson's revolutionary understanding of the turbulent energy cascade is encapsulated by his famous poem (pp. 1), which indicates the direction of the cascade, from large scales to small, while highlighting that molecular viscosity (and not eddy viscosity) is the mechanism of energy dissipation that ends the cascade. Richardson's intuition was so powerful that in his paper of 1926 he was able to establish, by purely empirical means, the famous "four-thirds law" that relates eddy viscosity K and eddy size l in a turbulent flow: K ~ l^{4/3} [Richardson, 1926]. In 1941, when Kolmogorov and Obukhov formulated the general quantitative theory of inertial-range turbulence, Richardson's four-thirds law was actually the only empirical result which indicated the existence of simple general rules underlying the inertial cascade. Commenting on Richardson's work, Taylor [1959] remarks that "(i)t is perhaps rather surprising that he did not take the step which Kolmogoroff (1941) and Obukhov took fifteen years later ...". Although no reference was made to Richardson in Kolmogorov [1941], later Kolmogorov would generously make up for this omission [see pp. 47].

Despite the success of Kolmogorov [1941]'s inertial range theory, which predicted Richardson's four-thirds law K ~ ε^{1/3} l^{4/3} and a velocity spectrum E_u(k) ~ ε^{2/3} k^{-5/3} from dimensional analysis using l or wave-vector k and the turbulent kinetic energy dissipation rate ε, many researchers in the 1940s and 50s were skeptical about the prospect of developing a truly predictive theory of turbulence. Sir Horace Lamb is said to have remarked [Goldstein, 1969]
I am an old man now, and when I die and go to Heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former, I am really rather optimistic.
John von Neumann, whom we already encountered for his work on numerical weather prediction [Charney et al., 1950], advocated a computational approach towards turbulence in 1949 because of the prohibitive mathematical difficulties [von Neumann, 1963]:
... a considerable mathematical effort towards a detailed understanding of the mechanism of turbulence is called for. The entire experience with the subject indicates that the purely analytical approach is beset with difficulties, which at this moment are still prohibitive. The reason for this is probably ... (t)hat our intuitive relationship to the subject is still too loose—not having succeeded at anything like deep mathematical penetration in any part of the subject, we are still quite disoriented as to the relevant factors, and as to the proper analytical machinery to be used. Under these conditions there might be some hope to "break the deadlock" by extensive, but well-planned, computational efforts.
A few years after von Neumann's comments, the first English textbook on BLM was published and the author, Oliver G. Sutton, warned future meteorologists that an understanding of turbulence was both essential and challenging [Sutton, 1953]:
This type of motion is called turbulent. As yet, the mathematical theory of this type of motion is only partially established, and most of the difficulties experienced in the study of micrometeorology are to be traced, ultimately, to the great complexity of the motion of the air in the lower layers of the atmosphere.
Beginning in the early 1960s considerable attention was given to the statistical properties of ε. Kolmogorov [1962] and Obukhov [1962] suggested that fluctuations in ε could lead to deviations from Richardson's four-thirds law and the other predictions of Kolmogorov [1941]'s theory. The so-called "Vancouver group" [Monin and Yaglom, 1975, pp. 457] consisting of Burling, Pond, Stewart and Wilson here at U.B.C. played a particularly active role in the development of this idea during the 1960s. Novikov and Stewart [1964] developed the first simple random cascade model for ε(x) that, at a phenomenological level, accounted for the intermittency of ε and predicted a scaling spatial spectrum of the form E_ε(k) ~ k^{-1+μ} and velocity spectrum E_u ~ k^{-5/3-μ/3}. Pond and Stewart [1965] (see also Pond [1965]) made refined hot-wire measurements of E_ε over the ocean (Spanish Banks) and found μ ≈ 0.6 in agreement with the earlier and questionable results of Gurvich and Zubkovskii [1963] (see Monin and Yaglom [1975, Sec. 25.3]).

Pond and Stewart [1965] predicted that the limits of applicability of the Novikov-Stewart model "appear to be so wide that models of this kind may represent a new important chapter in the theory of random processes." As it turned out, the chapter on fractal cascades was largely written by Benoit B. Mandelbrot, who introduced the fractal (non-integer Hausdorff) dimension, D, a measure of the space-filling properties of the random cascade and related to the Novikov-Stewart model by D = 3 − μ. Mandelbrot [1976] generalized the Novikov-Stewart model, which he termed "absolute curdling", and defined a more general process, "weighted curdling", such that each moment ⟨ε^q⟩ has an associated fractal dimension D_q. Mandelbrot's weighted curdling is now typically referred to as "multifractal", as popularized by Frisch and Parisi [1985] and Benzi et al. [1984]. A dynamical version of the Novikov-Stewart model called the β-model was introduced by Frisch et al. [1978].

Phenomenological cascade models, while providing valuable insights, were not immune from criticism. Kraichnan [1972] emphasized that "these hypotheses are purely conjectural, with no support from analysis using the Navier-Stokes equation" and Frisch et al. [1978] remarked that "Neither the β-model of intermittency nor the lognormal model should be taken too seriously."

Arguably the greatest contribution to the analytic theory of turbulence was made by Robert H. Kraichnan. Kraichnan was well trained in the techniques of theoretical physics, having been Einstein's penultimate assistant in Princeton from 1949 to 1950. Kraichnan's Direct Interaction Approximation (DIA) and Lagrangian DIA theories have been described in detail in the book by Leslie [1973], who writes that "(t)here is now an increasing body of opinion which holds that, although his work is not conclusive, it does represent a great advance in our understanding, and that much in it is of permanent value." One of the more profound ideas to emerge from Kraichnan's work is the importance of the sweeping of small scales by the large, and of the difficulties associated with the removal of sweeping. The coupling of small and large scale motions can easily be seen mathematically in the non-linear term u · ∇u or physically in the motion of a small floating object entrained in the eddies of a river and swept along a complicated path by
the turbulent flow. Kraichnan [1972] writes:
The basic questions about small-scale structure are highly resistant to resolution by perturbation-related approaches. ... But there is also another, almost unfair difficulty peculiar to the self-convection mechanism. The basic physical idea underlying Kolmogorov's theory is that sufficiently large scales should simply convect small scales, without distorting them and so without affecting the energy cascade process within the small scales. ... Suppose that each flow system in the ensemble is subjected to a spatially constant translation velocity. ... This can produce no distortion and therefore cannot affect the energy transfer, spectrum, or other statistics of a homogeneous ensemble. We may call this stochastic Galilean invariance.
Kraichnan [1965] was able to correct for the lack of stochastic Galilean invariance in DIA by casting his field theoretic approach in terms of Lagrangian paths, hence Lagrangian DIA. Kraichnan's approach was fundamentally correct, and gave rise to important and influential insights into the description of turbulence, but did not provide a convenient technical way to consider all the orders of perturbation theory; only low order truncations were considered. Commenting on Lagrangian DIA, Kraichnan [1975] would later remark: "In higher orders, the term-by-term re-working is unacceptably clumsy, and it is to be hoped that some powerful functional technique will be discovered to replace it." Further advances in the field theoretic description of turbulence were made by Belinicher and L'vov [1987] and are reviewed in L'vov and Procaccia [1997].

Stochastic Galilean invariance, though pernicious, is a technical concern that may be overcome, to a lesser or greater extent, by a Lagrangian enlargement of the phase space [Kraichnan, 1965; Belinicher and L'vov, 1987]. At a more fundamental level, one is apt to wonder why the renormalization group (RG) method that has proven so successful in the study of critical phenomena in equilibrium statistical mechanics has not yielded a solution to the turbulence problem. As first pointed out by Nelkin [1974, 1975], explicit analogies can be made between turbulence at high Reynolds number and critical phenomena provided that the roles of position and wavenumber space are interchanged. Loosely speaking, the RG idea is to look at the system from further and further away so that its microscopic details are eventually wiped out, while in turbulence we examine the system with a stronger and stronger magnifying glass until the large-scale details disappear. More precisely, the lattice spacing a and the integral scale L are analogous, and we look for homologous corrections to the spatial correlation function, either spin or velocity, of the form (r/a)^ζ or (r/L)^ζ in the limit (T − T_c) → 0 or ν → 0, respectively. RG has had enormous success in predicting the critical exponent, ζ, of critical phenomena because these exponents are usually fractal—of the Novikov-Stewart type—while in turbulence a "multifractal" infinite hierarchy of exponents exists. This difference in scaling exponents is a manifestation of a more fundamental difference between the two systems: RG has succeeded in critical phenomena when the deviation from statistical equilibrium is small, while the essence of turbulence as a statistical-mechanical problem lies in the necessity to consider strong departures from absolute statistical equilibrium. The huge magnitude of the disequilibrium in turbulence is emphasized by Kraichnan [1975]:
Interactions between wavenumbers that differ by as much as two octaves contribute strongly to the inertial-range energy transfer. This shows how strong is the disequilibrium. ... A comparable disequilibrium in a gas of interacting particles would require that the temperature change by its own order of magnitude in a distance comparable to the range of the interaction potential. The analogy here is between Fourier modes of the turbulence and particles of the gas.
Perhaps Kraichnan's greatest contribution to the "turbulence problem", broadly stated, is his work on passive scalars, in particular the δ-correlated model, which he first presented in 1968 [Kraichnan, 1968] and which is now sometimes referred to as the Kraichnan model. Historically the passive scalar problem has received less attention than the velocity problem. A k^{-5/3} spectrum for the scalar covariance was first predicted by Obukhov [1949] and Corrsin [1951], some ten years after the corresponding theory for the velocity was presented by Kolmogorov [1941], and the first Western experimental confirmation of k^{-5/3} Obukhov-Corrsin scaling was presented by Pond et al. [1966] (see also Pond [1965]), in agreement with the earlier Russian results from the Tsvang and Gurvich groups [Monin and Yaglom, 1975, Sec. 23.5]. Although Kraichnan originally developed his δ-correlated model to explain the k^{-1} viscous-convective subrange, he would later return to a more general version in 1994 to derive an expression for the series of anomalous exponents that mark a departure from k^{-5/3} Obukhov-Corrsin scaling [Kraichnan, 1994; Kraichnan et al., 1995]. As I mentioned previously, Kraichnan's δ-correlated model plays a central role in this dissertation, and a detailed presentation of its derivation and properties is given in Chapter 2.

The passive scalar problem has always played an important role in BLM. One of the earliest and strongest statements concerning the importance of understanding boundary-layer transport was made by Priestley [1959]:
One may perhaps therefore be excused in pressing the viewpoint herein that, while atmospheric turbulence may be of interest to the aerodynamicist in offering a new scale for his studies ... its importance to the meteorologist rests largely in the knowledge of the transfers which it provides; that the study of profiles is to him primarily a means to this end and that of microstructure a tool for better understanding of the character of the transfer processes.
The modern theory of BLM transport is based largely on semiempirical theories of turbulence which combine Reynolds decomposition, dimensional analysis and empirical measurements to parameterize BLM transport for various stability and energy considerations and boundary conditions. "Of course, from the viewpoint of "pure" theoretical physics, all these theories must be considered as nonrigorous..." reminds Monin and Yaglom [1971, Sec. 5.9]. The class of semiempirical theories in which time-evolution equations play a secondary role are usually referred to as similarity theories. A comparison of the original texts on atmospheric similarity theory, e.g. Priestley [1959], Lumley and Panofsky [1964] and Monin and Yaglom [1971], with Stull [1988]'s more recent treatment reveals that the fundamental relations characterizing surface-layer (SL) and mixed-layer similarity developed in the 1950s and 1960s are largely used today. One exception might be unstable SL similarity, in which three or four distinct approaches are still being debated [Kader and Yaglom, 1990; Stull, 1994; Zilitinkevich, 1994; Grachev et al., 1997; Chang and Grossman, 1999]. Richardson [1920] first deduced the importance of the ratio of buoyant energy production to mechanical energy production in a thermally stratified fluid. Subsequently Obukhov [1946] parameterized Richardson's ratio (or number) in terms of the height-dependent ratio z/L where L is now known as the Obukhov length. In a regime of pure convection where mechanical energy production is negligible, e.g. z/L → −∞, SL similarity predicts that potential temperature asymptotes to a constant and the horizontal flux of potential temperature asymptotes to zero [Prandtl, 1932; Obukhov, 1946]. Experimental measurements refuting these predictions caused Monin and Yaglom [1971] to remark that "the motivation of the obvious discrepancy among these results is still unclear" [Sec. 8.2] and that "some problems exist which quite definitely require due consideration" [Sec. 8.5]. Motivated by the discrepancy between theory and measurement, Zilitinkevich [1971] and Betchov and Yaglom [1971] developed a three-sublayer model of the unstably stratified SL which includes a new free convection layer at approx.
−1 < z/L < −0.1 between the usual mechanical layer and the pure convection layer described above. However, "the data available in 1971 were insufficient for the confirmation of the new theory" [Kader and Yaglom, 1990] and Betchov and Yaglom [1971] raised the further concern that "most of the existing observations on the unstably stratified surface layer of the atmosphere contain no data on the pure convection layer ... if this layer exists at all in the atmosphere." Some twenty years after the Zilitinkevich-Betchov-Yaglom prediction, Kader and Yaglom [1990] presented a detailed study of the unstably stratified SL which confirms the existence of both the free convection layer and the pure convection layer in the atmosphere. The Zilitinkevich-Betchov-Yaglom model assumes the existence of a mechanical sublayer close to the ground where the friction velocity, u*, is finite. As early as 1959, Priestley pointed out that a comparison of a mechanical sublayer heat-flux, H ~ z^{-1/2}, and the flux predicted by a convective plume model, H ~ z^{1/2}, suggests that, in practice, a mechanical sublayer with non-zero u* will exist as z → 0 in a purely convective boundary-layer [Priestley, 1959, Chap. 6]:
... the lowest layers will generally, in practice, obey a regime of forced convection ..., but it is of interest to note that in such an environment the solution for the plume will be of the form H ~ z^{1/2} ... whereas it follows ... that H ~ z^{-1/2} (in forced convection). The implication is that plumes can and are likely to exist in such a regime, while their relative role in heat flux will be negligible near the surface but will increase with increasing height.
A mechanism for a non-zero u* in a purely convective boundary-layer was provided a few years later by Kraichnan [1962] who suggested that "turbulence of small spatial scale, generated locally in the boundary layers of the big eddies, can enhance heat convection near the boundary surfaces." The friction velocity was subsequently parameterized as a function of the free convective velocity scale by Businger [1973]. Recently, Stull [1994] has formulated a convective transport theory (CTT) for surface fluxes in which convective plume transport dominates mechanical transport as z → 0. A distinguishing feature of CTT is that surface fluxes are independent of the surface roughness length. I conclude this short anecdotal history with a brief sample of some of the remarks that have been offered from time to time on the future of turbulence research. Frisch et al. [1978] are cautiously optimistic that a dynamical theory will be found:
To go further, a genuine dynamical theory starting from the Navier-Stokes equations is needed. ... So far no analytic tool has been found which allows these scaling laws to be determined theoretically, although there have been many speculations that the renormalization-group theory developed for critical phenomena ... could be appropriate. In the meantime, phenomenological models can give important hints as to the necessary structure of an eventual dynamical theory.
A year later, Liepmann [1979] chose to emphasize the slow progress made:
The scientific study of turbulent flow spans approximately one hundred years; during this time some of the greatest names in physics, mechanics, and engineering have at one time or another taken a crack at the problem. Progress in many directions has been made, indeed significant progress. However, the "turbulence problem" as a whole - whatever that means - remains.
In the 1980s, Tennekes [1985] suggested that the future of turbulence lies in a dynamical systems approach:
I hope that the current research on modons will lead to a workable definition of individual eddies. From that point, we can venture in the direction of multiple interactions between eddies, and on toward chaotic behavior. Finally, we may arrive at a new theory of turbulence, in which coherent structures, strange attractors and a generalized concept of entropy are the cornerstones of understanding.
Nelkin [1992] sees little hope in statistical field theory:
(T)urbulence ... remains a fascinating unsolved problem. In the next several years I expect that new carefully controlled experiments, improvements in direct numerical simulation, and better mathematical understanding of the underlying Navier-Stokes equations will combine to increase our basic understanding. I also hope that basic progress in statistical field theory will add essential contributions from theoretical physics, but nothing so far has given much substance to this hope.
Two diametrically opposed assessments of turbulence theory were offered in 1997 and 1998. Tsinober [1998] offers these pessimistic remarks:
Indeed the heaviest and the most ambitious armory from theoretical physics and mathematics was tried for more than fifty years, but without much success - fully developed turbulence, as a physical and mathematical problem, remains unsolved. ... turbulence remains among the fields with overproduction of publications without any real breakthrough in understanding. ... the number of predictions made in the field is limited by the number of fingers on one hand - the rest are correlations after experiments done, i.e. "postdictions".
L'vov and Procaccia [1997], on the other hand, forecast a solution to the turbulence problem in a statement that is arguably the boldest and most optimistic prediction in the history of turbulence research. Commenting on Sir Horace Lamb's famous remarks (p. 5), L'vov and Procaccia [1997] claim:
Possibly Lamb's pessimism about turbulence was short-sighted. ... The road ahead is not fully charted, but it seems that some of the conceptual difficulties have been surmounted. ... We hope that the remaining 4 years of this century will suffice to achieve a proper understanding of the anomalous scaling exponents in turbulence. Progress ... will bring the theory closer to the concern of the engineers. The marriage of physics and engineering will be the challenge of the 21st century.
Ironically, though L'vov and Procaccia published 4 papers in 1995 and 7 papers in 1996 on their new theoretical approach paving the way for their bold prediction, the prediction itself would be their last contribution together in this endeavor. A more tempered opinion, lying somewhere between the remarks of Tsinober [1998] and L'vov and Procaccia [1997], is offered by Sreenivasan [1999]:
Extrapolating from experience so far, future progress will take a zigzag path, and further order will be slow to emerge. ... There is ground for optimism, and a meaningful interaction among theory, experiment, and computations must be able to take us far. It is a matter of time and persistence.
And finally, we return to Wyngaard [1998b] for a last assessment:
In turbulence research one sees periods of feverish activity in what can become blind alleys, punctuated by occasional leaps forward. The long view reveals slow, steady progress in the physical understanding of the structure of turbulent flows in engineering and geophysics, accumulating evidence of the futility of head-on analytic attacks on turbulence, and repeated indications that experiment paces the growth in our understanding of turbulence.
1.2 Problems addressed in this dissertation
The difficulty in defining the turbulence problem reminds me of a cartoon in which a rather puzzled and dejected-looking researcher is introduced to a visitor: "After twenty years of research, Dr. Quimsey developed the answer and now he's forgotten the question!" [Liepmann, 1979]
In this dissertation I address a number of topical questions that have been raised about the spatial structure of cloud number density and liquid water density at small scales O(10 cm) and about the prognosis of unresolved low cloud optical depth at large scales O(100 km). Below I summarize the problems addressed and the results obtained in Chapters 3 through 6. Note that each chapter begins with a general introduction and ends with a summary so there is some redundancy in this presentation. The work presented in this dissertation also appears in the following articles:
Chapter 3: Jeffery, C. A., Effect of particle inertia on the viscous-convective subrange. Phys. Rev. E, 61(6), pp. 6578-6585, 2000.
Chapter 4: Jeffery, C. A., Investigating the small-scale structure of clouds using the δ-correlated closure. To appear, Atmos. Res., 2001.
Chapter 5: Jeffery, C. A., Effect of condensation and evaporation on the viscous- convective subrange. Phys. Fluids, 13(3), pp. 713-722, 2001.
Chapter 6: Jeffery, C. A. and P. H. Austin, Unified treatment of the physical and optical variability of unresolved low clouds. J. Atmos. Sci., Submitted. Jeffery, C. A., Parameterization of shortwave cloud properties in large-scale models: Is effective radius necessary? J. Atmos. Sci., Submitted.
1.2.1 Cloud droplet number concentration inhomogeneities
Recently both Pinsky and Khain [1997] and Shaw et al. [1998] have suggested that cloud droplet inertia results in the formation of fine-scale concentration inhomogeneities that may lead to a modification of the droplet number spectrum. However, the mechanism by
which small-scale variability of the droplet concentration impacts the number spectrum as suggested by these authors is distinctly different. Pinsky and Khain [1997] introduce "inertial drop mixing" whereby two small neighboring volumes of air exchange drops according to their velocity flux divergence under the assumption that the total number of droplets in the two volumes is conserved. As a result, two volumes with greatly differing number distributions at cloud base become homogenized as the volumes are lifted adiabatically, while two volumes that are identical at cloud base remain homogenized but will be stochastically different at cloud top. Results from a more refined model of inertial drop mixing are discussed in detail in Pinsky et al. [1999]. The Shaw et al. [1998] hypothesis, on the other hand, is distinctly different and much bolder. In their model of inertial effects, vortex tubes, and thus the intermittency of large Reynolds number atmospheric flows, play a central role. Although a velocity intermittency effect had been hypothesized earlier [Tennekes and Woods, 1973; Cooper and Baumgardner, 1989], in the Shaw model the geometry of the fine-scale structure plays a key role for the first time. In contrast to Pinsky and Khain [1997]'s inertial mixing, Shaw et al. [1998] argue that the clumping or preferential concentration of cloud droplets leads to a greater segregation of droplets into regions with varying microphysical conditions, resulting in droplet spectral broadening. Thus we have the Shaw model of "inertial drop segregation" in contradistinction to Pinsky and Khain [1997]'s inertial drop mixing. The Shaw model is based on two key assumptions that have been criticized [Grabowski and Vaillancourt, 1999; Vaillancourt and Yau, 2000]. The first assumption is that velocity intermittency leads to the clumping of cloud droplets which would not otherwise clump in a low Reynolds number flow.
As pointed out by Grabowski and Vaillancourt [1999], the ratio of a cloud droplet's inertial response time to the Kolmogorov time of the velocity field, known as the Stokes number, is too small for significant preferential concentration to occur in the absence of intermittency effects. The second assumption is that the vortex tube turn-over time is O(10 s), such that individual drops become "trapped" in the tube and are prevented from mixing with the surrounding environment. However, this time-constant is 1-2 orders of magnitude larger than the Kolmogorov time. In Chapters 3 and 4 I address the following questions: (i) At what Stokes numbers and spatial scales does clumping occur? (ii) Does velocity intermittency lead to increased clumping? and (iii) If so, do vortex tubes (cylindrical vortices) play a special role in this increased clumping? To address these questions, I derive an analytic expression for the scalar spectrum of inertial particles using Kraichnan's δ-correlated closure in Chapter 3 and, in Chapter 4, I introduce an effective Stokes number that incorporates intermittency effects. Using the effective Stokes number I am able to assess the preferential concentration of cloud droplets at large Reynolds numbers in a quantitative manner for the first time.
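To attach rough magnitudes to question (i), the following back-of-envelope sketch computes a droplet Stokes number. The droplet and flow parameters are typical literature values that I have assumed for illustration; they are not numbers taken from this dissertation:

```python
# Rough magnitude check: Stokes number of a cloud droplet in weak
# atmospheric turbulence.  All parameter values are assumed typical
# literature numbers, not taken from this dissertation.
import math

rho_w = 1000.0     # liquid water density [kg/m^3]
mu_air = 1.8e-5    # dynamic viscosity of air [kg/(m s)]
nu_air = 1.5e-5    # kinematic viscosity of air [m^2/s]
eps = 1e-3         # turbulent dissipation rate [m^2/s^3], weak convection

def stokes_number(radius, eps=eps):
    """St = tau_p / tau_eta for a droplet of given radius [m]."""
    d = 2.0 * radius
    tau_p = rho_w * d ** 2 / (18.0 * mu_air)   # Stokes drag response time
    tau_eta = math.sqrt(nu_air / eps)          # Kolmogorov time
    return tau_p / tau_eta

st = stokes_number(10e-6)   # a 10-micron droplet
print(f"St = {st:.3g}")     # well below unity
```

For these assumed values the Stokes number of a typical 10 μm droplet comes out two orders of magnitude below unity, consistent with the Grabowski-Vaillancourt objection that mean-flow inertia alone is too weak to produce significant clumping.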
1.2.2 Cloud liquid water density inhomogeneities
Recently, Davis et al. [1999] presented horizontal spectra of cloud liquid water density measured at an unprecedented resolution of 4 cm during the winter Southern Ocean Cloud Experiment (SOCEX). The spectra exhibit both an expected k^{-5/3} inertial-convective subrange and a k^{-1} viscous-convective subrange. However, the scale-break between these two regimes is anomalous: typically the viscous-convective subrange begins near 10 cm in the atmosphere, whereas in Davis et al. [1999]'s spectra the scale-break is greater than 1 m. This enhanced liquid variability at small scales suggests that a source of cloud liquid water variance is present. In Chapter 5, I consider the effect of condensation and evaporation on the viscous-convective subrange, again using Kraichnan's δ-correlated closure, and I attempt to explain the presence of the observed enhanced liquid water variability. In particular, I argue that variability in cloud droplet mass due to condensation/evaporation results in a production subrange where the scalar dissipation rate increases with increasing k. This source of liquid water variability is contrasted with the effect of particle inertia, in which a source of cloud number concentration variability is present at small scales. One of the more interesting ideas to emerge from my approach is the importance of the vertically non-homogeneous structure of liquid water fluctuations, which leads to a liquid water spectral density that is complex (i.e. has real and imaginary components).
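The expected location of the scale-break can be estimated by matching the two subrange forms, C_oc χ ε^{-1/3} k^{-5/3} (Obukhov-Corrsin) and q χ (ν/ε)^{1/2} k^{-1} (Batchelor/Kraichnan). The sketch below is purely illustrative: the spectral constants and the atmospheric values of ν and ε are my assumptions, not numbers from this dissertation or from Davis et al.:

```python
# Illustrative estimate of the inertial-convective / viscous-convective
# scale-break: equate  C_oc * chi * eps**(-1/3) * k**(-5/3)  with
# q * chi * sqrt(nu/eps) * k**(-1),  giving  k* = (C_oc/q)**1.5 / eta.
# All constants below are assumed typical values, not from this thesis.
import math

nu = 1.5e-5    # kinematic viscosity of air [m^2/s]
eps = 1e-3     # dissipation rate [m^2/s^3]
C_oc = 0.67    # Obukhov-Corrsin constant (assumed)
q = 4.0        # Batchelor/Kraichnan constant (assumed)

eta = (nu ** 3 / eps) ** 0.25          # Kolmogorov length [m]
k_star = (C_oc / q) ** 1.5 / eta       # matching wavenumber [rad/m]
lam_star = 2.0 * math.pi / k_star      # scale-break wavelength [m]
print(f"eta = {eta * 1e3:.2f} mm, scale break ~ {lam_star * 1e2:.0f} cm")
```

With these assumed constants the break lands near the canonical 10 cm quoted above, which is what makes the meter-scale break in the SOCEX spectra anomalous.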
1.2.3 Unresolved low cloud optical properties
It has been known for some time that the plane-parallel-homogeneous assumption, whereby the mean solar reflectivity of a stratiform cloud layer is given by the reflectivity of the mean optical depth, leads to an overestimation of the prognosed reflectivity [McKee and Cox, 1974; Welch and Wielicki, 1984, 1985; Harshvardhan and Randall, 1985; Schertzer and Lovejoy, 1987; Cahalan et al., 1994; Kogan et al., 1995]. This follows trivially from Jensen's inequality, since the function that relates reflectivity and optical depth is concave. To reduce this bias many current global climate models (GCMs) multiply the prognosed mean optical depth by a constant factor χ ≈ 0.7 in the calculation of the average reflectivity [Cahalan et al., 1994]. Although χ may be tuned in a particular GCM to reproduce the measured radiative stream, this approach is ad-hoc in nature and may become increasingly inaccurate as the climate departs from its present state. A partial solution to this problem was pioneered by Barker [1996b] who approximated the distribution of optical depths by a γ-distribution specified by its mean and variance. This approximation is advantageous because a γ-distribution well represents satellite-retrieved distributions of marine low cloud optical depth, and an analytic expression for the mean reflectivity (and emissivity) is available [Barker, 1996a; Barker et al., 1996]. However, it does have one drawback: a GCM must prognose both the variance and the mean of optical depth in the cloud field, a problem which Barker [1996b] left unsolved.
In Chapter 6 I offer a solution to this problem by coupling prognosed unresolved optical variability to the unresolved cloud physical variability predicted from a statistical cloud scheme. A key feature of my scheme is that the coupling of optical and physical variability is achieved through a simple linear model of cloud liquid water content in which the dynamically driven cloud-top height fluctuations and the thermodynamically driven temperature and moisture fluctuations are treated explicitly and separately. In Chapter 6 I also consider the effect of poor model vertical resolution on the prognosed cloud physical and optical variability, and I illustrate, using simple analytic response functions, that low resolution leads to an overestimation of the cloud fraction response.
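The plane-parallel bias and the role of the reduction factor χ are easy to demonstrate numerically. In the sketch below the reflectivity is modeled with a simple saturating form R(τ) = τ/(τ + c), which is an illustrative stand-in for a two-stream expression (not Barker's formula), and the γ-distribution parameters are arbitrary choices of mine:

```python
# Plane-parallel albedo bias: for a concave reflectivity function R(tau),
# Jensen's inequality gives R(<tau>) >= <R(tau)>, so using the mean
# optical depth overestimates the mean reflectivity.  R(tau) = tau/(tau+c)
# and the gamma parameters below are illustrative assumptions only.
import random

def reflectivity(tau, c=7.0):
    return tau / (tau + c)

random.seed(0)
shape, mean_tau = 1.5, 10.0          # gamma shape and mean optical depth
taus = [random.gammavariate(shape, mean_tau / shape) for _ in range(200_000)]

mean_R = sum(map(reflectivity, taus)) / len(taus)   # <R(tau)>
R_of_mean = reflectivity(mean_tau)                  # R(<tau>), plane-parallel

# Effective optical depth reproducing <R>, as a fraction chi of <tau>
# (analogous in spirit to the GCM reduction factor):
tau_eff = 7.0 * mean_R / (1.0 - mean_R)
chi = tau_eff / mean_tau
print(f"R(<tau>) = {R_of_mean:.3f}, <R(tau)> = {mean_R:.3f}, chi = {chi:.2f}")
```

For these arbitrary parameters the implied reduction factor lands in the neighborhood of the χ ≈ 0.7 used in GCMs, though the agreement is of course parameter-dependent; the robust feature is only the sign of the bias.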
Chapter 2
The δ-Correlated Model
Does the Wind possess a Velocity? This question, at first sight foolish, improves on acquaintance. [Richardson, 1926]
2.1 Introduction
In this chapter we consider the advection-diffusion equation for the concentration of a passive scalar in a velocity field. The velocity field is assumed to be turbulent, which introduces difficulties but also some simplifications. A turbulent velocity field is a random field in the sense that only the mean statistical properties are known; we do not have knowledge of any particular realization of our velocity field. On the other hand, a turbulent velocity field is distinctly different from a spatially-correlated Gaussian random field because there is structure in any given realization. In general, structure is apparent in the small scale statistics, particularly the velocity gradients, but absent in the larger scale statistics. In this chapter we ignore many of the statistical subtleties of a real turbulent velocity field and assume that our velocity field is Gaussian and that the two-point spatial correlation is known. In Chapter 4 we consider a particular implication of non-Gaussian velocity statistics in detail. The only real mathematical advantage afforded by a random turbulent velocity field is that the statistics of the velocity field at scales smaller than the stirring scale are, to good measure, locally isotropic and locally homogeneous. Unfortunately, this does not imply local isotropy and homogeneity of the passive scalar statistics [Holzer and Siggia, 1994]. Our goal in this chapter is to derive equations for the equal-time 1-point and 2-point moments of a passive scalar in a δ-correlated in time velocity field. We could begin this derivation with the advection-diffusion (AD) equation, a well-known Eulerian statement of conservation of particle number (or mass) where the particle trajectories have a Brownian component. In fact, most reviews on passive scalar statistics in turbulent flows begin with the AD equation [Pumir et al., 1999; Warhaft, 2000]. However, many of the tools used to solve the AD equation in the δ-correlated limit are the same tools needed
to derive the AD equation itself. Thus, instead, this chapter begins with a presentation of particle dynamics in a generalized Hamiltonian framework. Hopefully, the generality of the methodology and presentation will be of value to those readers familiar with Hamiltonian dynamics but with no specialized knowledge of the turbulence problem. Section 2.3 is a more formal discussion of the δ-correlated model using concepts taken from Piterbarg and Ostrovskii [1997]. In Sec. 2.4 we derive an equation for the mean scalar concentration in the δ-correlated limit using the methodology developed in Sec. 2.2, and in Sec. 2.5 we extend our derivation to the equal-time 2-point variance. Section 2.6 details the history and validation of the δ-correlated model and also summarizes some recent advances in this field. For clarity and brevity, we first introduce the notation and terminology. Vectors appear in bold print while scalars and norms of vectors appear in normal print. In this chapter components of a contravariant vector are denoted by a superscript, whereas a subscript is used for the components of a covariant vector. This convention will be dropped in later chapters where the contravariant/covariant distinction is not necessary. Vector components are labeled by Roman or Greek indices. Repeated Roman indices imply summation, but no summation is implied by repeated Greek indices. A centered dot denotes a scalar product, i.e. ξ · ξ = ξ². The subscript of a contravariant vector is a label; thus, given a contravariant vector ξ_a, ξ_a^α denotes its α-component, while ξ_i is simply ξ labeled by i. The symbol ⟨···⟩ represents an ensemble average. Homogeneity refers to statistical invariance under spatial translation while isotropy refers to statistical rotational invariance. In particular, given a random function f(x, y) where x and y indicate spatial position, homogeneity of f implies ⟨f(x, y)⟩ = ⟨f(y − x)⟩ while homogeneity and isotropy demand that ⟨f(x, y)⟩ = ⟨f(|y − x|)⟩. A partial derivative with respect to some parameter, p, is denoted by ∂/∂p. Partial differentiation w.r.t. time is also denoted by an over-dot, and w.r.t. space by ∇_α = ∂/∂x^α. Brackets are used to denote the scope of a partial derivative if necessary, e.g. ∇_α(fg) = (∇_α f)g + (∇_α g)f. Principal symbols are listed in Appendix A.
2.2 Hamiltonian Fluid Mechanics
At first glance, Hamiltonian fluid mechanics is a strange starting point for a discussion of the turbulent mixing of a passive tracer because of the essential role that viscosity plays in turbulent dynamics. Certainly, dissipative systems are non-Hamiltonian. However, if we focus on a subset of the phase space, in this case the Lagrangian coordinates of our particles, then we can derive the AD equation within a Hamiltonian framework although kinematic viscosity does not appear explicitly in this formulation. With the notation and methodology in hand from this exercise, we are well prepared to tackle the closure of the AD equation in future sections. The theory of Hamiltonian dynamics and turbulent diffusion is presented in great detail in any number of excellent texts [Goldstein, 1980; Arnold, 1980; Dubrovin and Fomenko, 1992; Beris and Edwards, 1994] and reviews
[Morrison, 1982; Bennett, 1987; Salmon, 1988; Stull, 1993; Bennett, 1996; Klyatskin et al., 1996; Majda and Kramer, 1999]. The treatment here will thus be appropriately terse.
2.2.1 Lagrangian formulation
Consider an individual parcel or element of a passive tracer in a fluid labeled by the parcel position x = a at time t = 0. Associated with this parcel is an initial scalar density θ(a, 0). The parcel is shrunk to zero size so that there is one for each point in the fluid, while retaining a non-zero density θ. Although there are some conceptual difficulties with this Lagrangian formulation of zero-size finite-density tracer parcels, there are no mathematical difficulties because the infinite number of degrees of freedom in the corresponding continuous Eulerian formulation is preserved. The position of the a-labeled parcel at time t, ξ_a(t), determines the Eulerian vector field a(x, t) by identifying x with ξ:

x = ξ_{a(x,t)}(t),  or alternatively,  a(x, t) = ξ^{-1}(x, t).
The labels a(x, t) are, by their very meaning, unchanged along the path of the parcel so that the labels obey the Liouville equation

(D/Dt) a(x, t) = 0    (2.1)
where the total derivative D/Dt is given by D/Dt = ∂/∂t + ξ̇_a · ∇. The parcel velocity u_p(ξ_a(t), t) = ξ̇_a(t) is, in general, different from the velocity of the fluid elements due to Brownian motion of the parcels. It is also different if the tracer particles (and hence parcels) have significant inertia. As a side note, one of the more subtle and significant ideas in classical mechanics is the connection between the conservation of parcel labels, Eq. (2.1), and potential vorticity. Ertel's theorem provides the way of translating the conserved parcel labels a into the conserved quantity θ^{-1} (∇ × u_p) · (∇a), i.e. potential vorticity. The symmetry corresponding to potential vorticity conservation is a parcel-relabeling symmetry since parcel-label variations leave the parcel density and entropy unchanged. One of the first connections between the parcel-relabeling symmetry property and the general vorticity conservation law was made here at U.B.C. by Calkin [1963]. In addition to ξ we have the conjugate Lagrangian flux or tracer momenta

π_a(t) = ξ̇_a(t) θ(a, 0).    (2.2)
The Lagrangian vector fields ξ_a(t) and π_a(t) describe the full phase space of the tracer. A variational action principle approach, provided the parcel velocity is inviscid, leads to a Hamiltonian formalism for the dynamical evolution,

dF/dt = {F, H}    (2.3)

where {, } is the canonical (symplectic) Poisson bracket, H is the Hamiltonian, and F is any functional on the space of dynamical variables. For our purposes we restrict F to be ξ_a or the Eulerian scalar density θ(x, t). The canonical Poisson bracket is defined by [Poisson, 1809; Arnold, 1966]

{F, H} = ∫ d³a [ (δF/δξ_a) · (δH/δπ_a) − (δH/δξ_a) · (δF/δπ_a) ]    (2.4)

where δ indicates a (Volterra) functional derivative. It follows from Eq. (2.4) that the Poisson bracket is antisymmetric w.r.t. functionals F and H, and therefore dH/dt = {H, H} = 0: energy is conserved. Equation (2.4) also gives {ξ^i, π_j} = δ^i_j, which is the classical analog of the well-known quantum-mechanical commutation relation [ξ^i, π_j] = iħ δ^i_j. Restricting ourselves to the dynamics of a passive tracer, we require only the component of the Hamiltonian H(t) that contains the kinetic energy of the tracer:

H(t) = ∫ d³a  π_a · π_a / (2 θ(a, 0)).    (2.5)

By definition a passive scalar does not contribute to the potential energy of the fluid.
Using δξ^α_a/δξ^β_r = δ^{αβ} δ(r − a) and δξ^α_a/δπ^β_r = 0, a simple calculation recovers the definition of the canonical flux, Eq. (2.2), from Eqs. (2.3)-(2.5).
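That calculation is short enough to display. The following sketch is my own reconstruction, assuming Eqs. (2.4) and (2.5) take their standard canonical forms:

```latex
% Reconstruction (not verbatim from the text): Hamilton's equation for xi,
% with H = \int d^3a \, \pi_a\cdot\pi_a / (2\,\theta(a,0)).
\dot{\xi}^{\alpha}_{r}
  = \{\xi^{\alpha}_{r}, H\}
  = \int d^{3}a \,
      \frac{\delta \xi^{\alpha}_{r}}{\delta \xi^{\beta}_{a}}\,
      \frac{\delta H}{\delta \pi^{\beta}_{a}}
  = \int d^{3}a \, \delta^{\alpha\beta}\,\delta(r-a)\,
      \frac{\pi^{\beta}_{a}}{\theta(a,0)}
  = \frac{\pi^{\alpha}_{r}}{\theta(r,0)} ,
```

so that π_r = ξ̇_r θ(r, 0), which is precisely Eq. (2.2).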
2.2.2 Eulerian formulation
There are a number of different ways of expressing conservation of (tracer) mass for the Eulerian scalar density θ(x, t). We will pursue these various derivations in this subsection, and in Secs. 2.4 and 2.5 we verify that our independent formulations of mass conservation lead to consistent expressions for the scalar mean and covariance in the δ-correlated limit. Readers who object to this inherent redundancy may wish to skip to the following subsection. For the Eulerian specification it is convenient to label the parcels by their arrival at (x, t), i.e. ξ_{x,t}(t) = x. Associated with a tracer parcel is a volume element at some time t′ given by d³ξ_{x,t}(t′). Though the mass of the parcel must remain constant, its volume element may vary if the flow is compressible. Conservation of mass assumes the form

θ(ξ_{x,t}(t), t) d³ξ_{x,t}(t) = θ(ξ_{x,t}(t′), t′) d³ξ_{x,t}(t′)

which, rearranged, gives

θ(x, t) = θ(ξ_{x,t}(t′), t′) / J_{t′}(x, t).    (2.6)
The Jacobian J_{t′}(x, t) = |∂x / ∂ξ_{x,t}(t′)| defines the fractional change in parcel volume between ξ_{x,t}(t′) and x. Taking the partial derivative of Eq. (2.6) in the limit t′ → t gives

Dθ(x, t)/Dt = θ(x, t) [∂J_{t′}(x, t)/∂t′]_{t′=t}.    (2.7)

Similarly, evaluating ∂/∂t′ of the Jacobian gives

∂J_{t′}(x, t)/∂t′ = −[∇ · u_p(ξ_{x,t}(t′), t′)] J_{t′}(x, t)

with initial condition J_t(x, t) = 1 and solution

J_{t′}(x, t) = exp ∫_{t′}^{t} ∇ · u_p(ξ_{x,t}(σ), σ) dσ.    (2.8)

Substituting Eq. (2.8) into Eqs. (2.6) and (2.7) we arrive at two equivalent expressions for the Eulerian density θ(x, t): the usual Markovian, Eulerian conservation law

Dθ(x, t)/Dt = −(∇ · u_p) θ(x, t)    (2.9)

and a non-Markovian path integral formulation

θ(x, t) = θ(ξ_{x,t}(0), 0) exp[ −∫_0^t ∇ · u_p(ξ_{x,t}(σ), σ) dσ ].    (2.10)

Equations (2.9) and (2.10) were known to Liouville [1838]. In general Eq. (2.10) is less tractable than Eq. (2.9) because of its explicit dependence on the Lagrangian, non-Markovian, parcel position ξ_{x,t}(σ). To derive the AD equation from Eq. (2.9) we first separate the total parcel velocity u_p into advective and Brownian components:
u_p(x, t) = u(x, t) + √(2κ) ẇ(t)    (2.11)

where κ is the molecular diffusivity and w is a centered Wiener process satisfying ⟨w^α(t) w^β(s)⟩ = δ^{αβ} min(t, s), and the advective parcel velocity u is equal to the fluid velocity in the absence of inertial effects. Substituting Eq. (2.11) into Eq. (2.9) gives

∂θ(x, t)/∂t + ∇ · (u θ(x, t)) + √(2κ) ẇ(t) · ∇θ(x, t) = 0.    (2.12)

A useful aid in our derivation of the AD equation is the Furutsu-Novikov formula [Furutsu, 1963; Novikov, 1964; Frisch, 1995; Klyatskin et al., 1996].
Furutsu-Novikov formula. Let v(s) be a vector-valued centered Gaussian field and let F[v] be any differentiable functional. Then, assuming all averages exist,

⟨v^α(s) F[v]⟩ = ∫ ds′ ⟨ δF[v]/δv^β(s′) ⟩ B^{αβ}(s, s′)    (2.13)

where B^{αβ}(s, s′) = ⟨v^α(s) v^β(s′)⟩. Equation (2.13) is just functional Gaussian integration by parts. Note that ⟨δ/δv^β F⟩ is an operator that is free to act on B^{αβ}. Using the Furutsu-Novikov formula, Eq. (2.12) becomes
∂⟨θ(x, t)⟩_w/∂t + ∇ · (u ⟨θ(x, t)⟩_w) + √(2κ) ∇_α lim_{t′→t} ⟨ δθ(x, t)/δw^α(t′) ⟩_w = 0

where ⟨···⟩_w represents an average over the Wiener process w. To evaluate the functional derivative δθ/δw^α we return to Eq. (2.12):

δθ(x, t)/δw^α(t′) = −∫_{t′}^{t} dσ ∇ · [ (u + √(2κ) ẇ(σ)) δθ(x, σ)/δw^α(t′) ] − √(2κ) [ ∫_{t′}^{t} dσ δ(t′ − σ) ] ∂θ(x, t)/∂x^α.

The first term of the r.h.s. goes to zero as t′ → t, while ∫_{t′}^{t} dσ δ(t′ − σ) = 1/2. Thus we arrive at the AD equation:
∂θ(x, t)/∂t + ∇ · (u θ(x, t)) = κ ∇²θ(x, t),    θ(x, 0) = θ₀(x)    (2.14)
where θ₀ are the initial conditions at t = 0, and we have dropped the ⟨···⟩_w notation. Equation (2.14), although lacking an explicit w-dependence, remains a u-dependent random process. Another useful Eulerian statement of conservation of mass can be derived using indicator functions. Consider the Lagrangian indicator function R_{x,t}(a) = δ(a − ξ^{-1}(x, t)) which, like a(x, t), is unchanged along the path of the parcel so that D R_{x,t}(a)/Dt = 0. The corresponding Eulerian indicator function is

R_a(x, t) = δ(x − ξ_a(t)) = R_{x,t}(a) / J_0(x, t).    (2.15)
The function Ra(x,t) is the evolution kernel for the AD equation because the scalar
density 9(x,t) is, by definition, the convolution of Ra(x,t):
9(x,t) = Jd?r 9(r,0)Rr(x,t) (2.16)
with initial conditions at t = 0. The equivalence of Eq. (2.16) with the other statements of mass conservation [Eqs. (2.9) and (2.10)] is immediately evident; substituting Eq. (2.15)
into (2.16) recovers Eq. (2.6). Furthermore, averaging R_a(x,t) over the w-ensemble gives the Fokker-Planck or forward Kolmogorov equation
∂R_a/∂t + ∇·(u R_a(x,t)) = κ∇²R_a(x,t),   R_a(x,0) = δ(x − a)   (2.17)
where again the ⟨···⟩_w have been dropped. It is interesting to compare the AD equation
for θ, (2.14), and Eq. (2.17) for R_a(x,t). The equations are the same but with different initial conditions. This reflects the different mathematical content of their solutions:
R_a(x,t) describes the u-dependent probability density of a particle's position while θ describes the tracer concentration in the Eulerian picture. However, their behaviour is identical if θ is the Green's function solution for an initial tracer concentration that is a
point source, i.e. θ₀(x) = δ(x − a).
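This correspondence is easy to verify by Monte Carlo. Under the parcel dynamics dξ = u dt + √(2κ) dw with constant u, an ensemble of trajectories started at a should reproduce the statistics encoded in R_a(x,t): mean a + ut and variance 2κt. A 1-D numerical sketch (all parameter values are illustrative assumptions, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
kappa, u, a = 0.1, 0.5, 1.0            # illustrative diffusivity, drift, start point
dt, nsteps, nparcels = 1e-3, 1000, 200_000

# parcel equation: d(xi) = u dt + sqrt(2 kappa) dw
xi = np.full(nparcels, a)
for _ in range(nsteps):
    xi += u * dt + np.sqrt(2 * kappa * dt) * rng.normal(size=nparcels)

t = nsteps * dt                        # elapsed time, t = 1
print(xi.mean(), a + u * t)            # ensemble mean vs a + u*t
print(xi.var(), 2 * kappa * t)         # ensemble variance vs 2*kappa*t
```

The same check applies to Eq. (2.14): its Green's function solution for a point source coincides with R_a.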
A path integral solution for R_a(x,t) can be obtained that is quite distinct from the path integral formulation for θ(x,t), Eq. (2.10). In this solution the most probable parcel trajectories minimize the Onsager-Machlup action [Onsager and Machlup, 1953]. The derivation presented here follows Risken [1996]. Consider the transition probability density of a particle at (a, 0) to a neighboring
point x₁ after a small time Δt: R_a(x₁, Δt). Integrating Eq. (2.17) in time from 0 to Δt gives

R_a(x₁, Δt) = exp(−∇·u Δt + κ∇²Δt) δ(x₁ − a).

Writing the δ-function as a Fourier integral, we have

R_a(x₁, Δt) = (4πκΔt)^{−3/2} exp(−|x₁ − a − uΔt|²/(4κΔt)),

which is simply the Green's function solution for small x₁ − a and Δt. Following the general path-integral formalism of Feynman and Hibbs [1965] we then denote the
path from (a = x₀, 0 = t₀) to (x = x_n, t = t_n) by a series of intermediate steps (x₁, t₁), (x₂, t₂), …, (x_{n−1}, t_{n−1}), which define a "path" with Δt = t/n. Since each transition kernel R_{x_i,t_i}(x_{i+1}, t_{i+1}) is a (u-dependent) probability, we have that

R_a(x,t) = lim_{n→∞} ∫ ∏_{i=1}^{n−1} d³x_i ∏_{i=0}^{n−1} R_{x_i,t_i}(x_{i+1}, t_{i+1})   (2.18)

R_a(x,t) = lim_{n→∞} ∫···∫ ∏_{i=1}^{n−1} { d³x_i [4πκΔt]^{−3/2} } exp{ −Σ_{i=0}^{n−1} |x_{i+1} − x_i − u(x_i, t_i)Δt|² / (4κΔt) }

where the integration is over the (u-dependent) Wiener paths. To obtain the Onsager-Machlup solution we write x_{i+1} − x_i = ξ̇_a(t_i)Δt and approximate the sum in the exponential by a time integral, giving

R_a(x,t) = ∫ d[ξ_a] exp{ −(1/4κ) ∫₀ᵗ dσ [ξ̇_a(σ) − u(ξ_a(σ), σ)]² }   (2.19)
where d[ξ_a] stands for the measure on the set of paths {ξ_a(σ)} over which we integrate. Substitution of Eq. (2.19) into (2.16) gives another expression for θ(x,t). The utility of Eq. (2.19) comes from its Gaussian character: for Gaussian functions, integrating over the argument under given constraints gives, apart from a constant factor, the same result as maximizing the Gaussian under the same constraints.
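The minimizing property of the action can be seen directly on its discretization. For constant u the ballistic path ξ_a(σ) = a + uσ makes the Onsager-Machlup action vanish, and any perturbation with the same endpoints gives a strictly larger value. A hypothetical numerical sketch (parameter values are arbitrary):

```python
import numpy as np

kappa, u, T, n = 0.2, 1.0, 1.0, 500
t = np.linspace(0.0, T, n + 1)
dt = T / n

def action(xi):
    """Discretized Onsager-Machlup action: (1/(4 kappa)) * integral of (xi_dot - u)^2."""
    xidot = np.diff(xi) / dt
    return np.sum((xidot - u) ** 2) * dt / (4 * kappa)

a = 0.0
straight = a + u * t                                   # ballistic path, xi_dot = u
wiggly = straight + 0.1 * np.sin(2 * np.pi * t / T)    # same endpoints, perturbed

print(action(straight), action(wiggly))                # zero versus strictly positive
```

Minimizing the discretized action over path coordinates recovers the straight path, mirroring the steepest-descent evaluation in the weak-noise limit.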
The function [ξ̇_a − u]²/(4κ) in the exponential on the r.h.s. of Eq. (2.19) is known as the Onsager-Machlup action; the dominant contribution to R_a(x,t) arises from those trajectories that minimize this action. Clearly, in the weak noise limit κ → 0, the r.h.s. of Eq. (2.19) converges to δ(x − ξ_a) with ξ̇_a = u. For vanishing noise the Onsager-Machlup action can be evaluated by steepest descent, but in the strong-noise case a "classical" generalization of the quantum-mechanical Rayleigh-Ritz variational method is needed [Eyink, 1996]. The last exercise in this subsection is to demonstrate equivalence between the Hamiltonian evolution equation, (2.3), and Eq. (2.9). To do so we must first determine the non-canonical Eulerian variables that correspond to our Lagrangian, canonical (in the inviscid limit) vector fields ξ and π. The Eulerian variables are θ, u, entropy and potential vorticity. Only θ and u are needed for our task. Traditionally, however, θ and the current J = θu are chosen instead because J and θ form a Lie algebra. The Hamiltonian, Eq. (2.5), in the Eulerian specification is

H[θ, J] = ∫ d³x [ |J|²/(2θ) + θE(θ, η) ]   (2.20)

with E the internal energy per unit mass and η the specific entropy. Using the chain rule we have that
δH[θ, J]/δπ_α = ∫ d³x (δH/δJ_β(x,t)) (δJ_β(x,t)/δπ_α)

where we have taken advantage of the fact that δθ/δπ = 0. Using the Poisson bracket relation, Eq. (2.4), the bracket {θ, H} becomes
{θ(x,t), H(x,t)} = ∫ d³z ∫ d³r (δθ(x,t)/δξ_r) (δJ_β(z,t)/δπ_r) (δH/δJ_β(z,t))
= ∫ d³z {θ(x,t), J_β(z,t)} (δH/δJ_β(z,t))   (2.21)

in agreement with Eq. (6.3) in Salmon [1988]. From the definition of H, Eq. (2.20), it follows trivially that
δH/δJ_α(z,t) = ∫ d³x (J_α(x,t)/θ(x,t)) δ(x − z) = J_α(z,t)/θ(z,t)   (2.22)

The last step is evaluation of {θ, J_α} using Eq. (2.4). The interested reader is referred to Chapter 5.3 of Beris and Edwards [1994] for further details. The final result is
{θ(x,t), J_α(z,t)} = −(∂/∂x_α)[δ(z − x) θ(x,t)]   (2.23)

where the corresponding quantum-mechanical analog [θ(x), J_α(z)] contains an additional factor iħ on the r.h.s. [Dashen and Sharp, 1968]. Inserting Eqs. (2.22) and (2.23) into Eq. (2.21) gives
{θ(x,t), H(x,t)} = −∇·J(x,t)
and thus, along with Eq. (2.3), we have the usual mass conservation equation, (2.9). Unfortunately, our focus in this section on mass conservation has obscured the elegance of the Poisson bracket formalism and its generality; it applies to almost all areas of physics, from classical mechanics and electrodynamics to quantum mechanics and the new theories of subatomic particles, and it is applicable to nonlinear stability analyses. In fact, quantum mechanical commutators emerged from the Poisson bracket formulation of classical mechanics about three quarters of a century ago.
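As a small numerical aside on the mass conservation statement: any 1-D discretization that keeps the advective flux in divergence form conserves ∫θ dx to round-off, whatever the velocity field. The periodic sketch below is illustrative (arbitrary parameter values); it includes a κ∂²θ/∂x² term as in Eq. (2.14), which is likewise an exact discrete divergence:

```python
import numpy as np

nx, L = 256, 2 * np.pi
dx = L / nx
x = np.arange(nx) * dx
u = 1.0 + 0.3 * np.sin(x)            # smooth, non-uniform velocity (arbitrary)
theta = 1.0 + 0.5 * np.cos(2 * x)    # arbitrary initial scalar density
kappa = 0.05
dt = 2e-3                            # stable explicit step for these parameters

mass0 = theta.sum() * dx
for _ in range(500):
    flux = theta * u
    div = (np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx)          # d(theta u)/dx
    lap = (np.roll(theta, -1) - 2 * theta + np.roll(theta, 1)) / dx**2
    theta = theta + dt * (-div + kappa * lap)

print(theta.sum() * dx, mass0)       # total mass is unchanged by the divergence form
```

Both the centered divergence and the discrete Laplacian telescope to zero when summed over a periodic grid, so mass is conserved exactly in exact arithmetic.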
2.3 δ-Correlated Model
The scalar density moments ⟨θⁿ⟩ follow from Eqs. (2.10), (2.14) or (2.16) after integration over the u-ensemble. Herein lies the difficulty. The turbulent velocity field u(x,t) has complex spatio-temporal correlations that make this integration, under general circumstances, exceedingly difficult. We make no attempt to solve this problem in this dissertation. Rather, we restrict our attention to a specific passive scalar regime, called the viscous-convective subrange, where scale separation can be used to greatly simplify the temporal properties of u. Consider the statistics of θ(x,t) at some scale l_θ. For the sake of simplicity, consider
a homogeneous, isotropic velocity field with a single correlation time τ_u(l_θ) and standard deviation σ_u(l_θ). In this case there are four independent time scales:
• The observation time, t.
• The Eulerian correlation time, τ_E = τ_u(l_θ).
• The eddy turnover time, τ_T = l_θ/σ_u(l_θ).
• The molecular diffusion time, τ_D = l_θ²/κ.
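For concreteness, these scales can be evaluated at a viscous-convective length l_θ ~ 10 cm. The numbers below are order-of-magnitude assumptions for in-cloud conditions, not results from the thesis:

```python
# Order-of-magnitude estimates of the time scales at l_theta ~ 0.1 m.
# All input values are illustrative assumptions.
l_theta = 0.1        # scale of interest [m]
sigma_u = 0.05       # velocity standard deviation at l_theta [m/s]  (assumed)
kappa = 2.0e-5       # molecular diffusivity of water vapour in air [m^2/s] (assumed)

tau_T = l_theta / sigma_u       # eddy turnover time
tau_D = l_theta**2 / kappa      # molecular diffusion time

print(tau_T, tau_D)             # tau_T of order seconds, tau_D much larger
```

With these values τ_T is of order seconds while τ_D is of order minutes to hours, already hinting at the scale separation exploited below.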
If these four time scales have the same order of magnitude then this scale classification will not help us simplify the statistics of u; we are back to square one. However, if one or more of these scales is much smaller or bigger than the others, we can renormalize or rescale our velocity field such that this scale separation becomes infinite. Hopefully, the statistics of the renormalized velocity field will be more analytically tractable than the statistics of the original field. Formally, we accomplish this renormalization by defining δ > 0 to be a small dimensionless parameter that modifies the relevant scales, and then we take the limit δ → 0. Consider the rescaled variables t′ = α(δ)t and

u_δ(x, t′) = β(δ) u(x, t′/α(δ)),
where x is not rescaled and α(δ) and β(δ) are some dimensionless functions. Our interest lies in a renormalized system where the temporal properties of the velocity field rapidly decorrelate in time, but the renormalized scale-dependent eddy diffusivity, κ_δ(l_θ), has a finite non-zero value. The last condition is crucial; if we carelessly modify the temporal properties of the velocity field we are apt to end up with a velocity field that transports mass infinitely fast or not at all. Dimensional analysis suggests κ_δ ~ αβ², so that lim_{δ→0} κ_δ is well behaved if α(δ) = β⁻²(δ). Taking β = δ⁻¹ and α = δ² we have that
u_δ(x, t′) = δ⁻¹ u(x, t′/δ²).   (2.24)
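The point of Eq. (2.24) can be checked arithmetically: under β = δ⁻¹ and α = δ² the rescaled correlation time shrinks like δ² and the turnover time like δ, while the eddy diffusivity σ_δ² τ_δ is δ-independent. A sketch with arbitrary reference values for the unscaled field:

```python
# Check the delta-scalings: tau_E' ~ delta^2, tau_T' ~ delta, kappa_delta ~ 1.
# sigma, tau and l are arbitrary reference values for the unscaled field.
sigma, tau, l = 0.3, 4.0, 1.0    # velocity std dev, correlation time, length scale

for delta in (1.0, 0.1, 0.01):
    beta, alpha = 1.0 / delta, delta**2
    sigma_d = beta * sigma           # rescaled velocity amplitude, ~ 1/delta
    tau_E_d = alpha * tau            # rescaled Eulerian correlation time, ~ delta^2
    tau_T_d = l / sigma_d            # rescaled eddy turnover time, ~ delta
    kappa_d = sigma_d**2 * tau_E_d   # eddy diffusivity estimate, delta-independent
    print(delta, tau_E_d, tau_T_d, kappa_d)
```

Every row of the loop prints the same κ_δ = σ²τ, while the two time scales collapse at different rates, which is exactly the hierarchy exploited by the δ-correlated limit.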
Given our rescaling functions α and β, we can now determine how the time scales τ_E, τ_T and τ_D diverge in the limit δ → 0. From the relations τ_E ~ δ², τ_T ~ δ and τ_D ~ 1 it follows that [Piterbarg and Ostrovskii, 1997]
τ_E ≪ τ_T ≪ τ_D as δ → 0.