Chapter 2. New Worlds Versus Scaling: from Van Leeuwenhoek to Mandelbrot
2.1 Scalebound thinking and the missing quadrillion

We just took a voyage through scales, noticing structures in cloud photographs and wiggles on graphs. Collectively these spanned ranges of scale over factors of billions in space and billions of billions in time. We are immediately confronted with the question: how can we conceptualize and model such fantastic variation? Two extreme approaches have developed. For the moment I will call the dominant one the "new worlds" view, after Antoni van Leeuwenhoek (1632-1723), who developed a powerful early microscope; the other is the self-similar (scaling) view of Benoit Mandelbrot (1924-2010), which I discuss in the next section. My own view - scaling, but with the notion of scale itself an emergent property - is discussed in ch. 3.

When van Leeuwenhoek peered through his microscope[a], in his amazement he is said to have discovered a "new world in a drop of water": "animalcules", the first micro-organisms[b] (fig. 2.1). Since then, the idea that zooming in will reveal something totally new has become second nature: in the 21st century, atom-imaging microscopes are developed precisely because of the promise of such new worlds. The scale-by-scale "newness" idea was graphically illustrated by K. Boeke's highly influential book "Cosmic View" (1957), which starts with a photograph of a girl holding a cat, first zooming away to show the surrounding vast reaches of outer space, and then zooming in until reaching the nucleus of an atom. The book was incredibly successful and was included in Mortimer Adler's "Gateway to the Great Books" (1963), a 10-volume series featuring works by Aristotle, Shakespeare, Einstein and others. In 1968, two films were based on Boeke's book: "Cosmic Zoom"[c] and "Powers of Ten" (1968[d], re-released in 1977[e]), which encouraged the idea that nearly every power of ten in scale hosted different phenomena. More recently (2012), there is even the interactive Cosmic Eye app for the iPad, iPhone, or iPod. In a 1981 paper, Mandelbrot coined the term "scalebound" for this "new worlds" view, a convenient shorthand[f] that I use frequently below[g].

[a] The inventor of the first microscope is not known, but van Leeuwenhoek's was more powerful, reaching up to about 300 times magnification.
[b] Recent historical research indicates that Robert Hooke may in fact have preceded van Leeuwenhoek, but the latter is usually credited with the discovery.
[c] Produced by the National Film Board of Canada.
[d] By Charles and Ray Eames.
[e] The re-release had the subtitle "A Film dealing with the Relative Size of Things in the Universe and the Effect of Adding Another Zero" and was narrated by P. Morrison. More recently, the similar "Cosmic Voyage" (1996) appeared in IMAX format.
[f] He wrote it as here, as one word, as a single concept.
[g] He was writing in Leonardo, to an audience of architects: "I propose the term scalebound to denote any object, whether in nature or one made by an engineer or an artist, for which characteristic elements of scale, such as length and width, are few in number and each with a clearly distinct size": Mandelbrot, B., "Scalebound or scaling shapes: a useful distinction in the visual arts and in the natural sciences", Leonardo 14, 43-47 (1981).

While "Powers of Ten" was proselytizing the new worlds view to an entire generation, other developments were pushing scientific thinking in the same direction. In the 1960s, long ice and ocean cores were revolutionizing climate science by supplying the first quantitative data at centennial, millennial and longer time scales. This coincided with the development of practical techniques to decompose a signal into oscillating components: "spectral analysis". While it had been known since Joseph Fourier (1768-1830) that any time series may be written as a sum of sinusoids, applying this idea to real data was computationally challenging, and in atmospheric science it had been largely confined to the study of turbulence. The breakthrough was the development of fast computers combined with the discovery of the "Fast Fourier Transform" (FFT) algorithm[h] (1968).

The beauty of Fourier decomposition is that each sinusoid has an exact, unambiguous time scale: its period (the inverse of its frequency) is the length of time it takes to make a full oscillation (fig. 2.2a, upper left, for examples). Fourier analysis thus provides a systematic way of quantifying the contribution of each time scale to a time series.

Fig. 2.2a illustrates this for the Weierstrass function, which in this example is constructed by summing sinusoids with frequencies increasing by factors of two, so that the nth frequency is ω_n = 2^n. Fig. 2.2a (upper left) shows the result for H = 1/3 with all the terms up to 128 cycles per second (upper row); the amplitudes decrease by factors of 2^(-H) (here 2^(-1/3) ≈ 0.79), so that the nth amplitude is A_n = 2^(-nH). Eliminating n, we find the power law relation A = ω^(-H). More generally, for a scaling process we have:

Spectrum = (frequency)^(-β)

where β is the usual notation for the "spectral exponent"[i]. The spectrum is the square of the amplitude, so that in this (discrete) example[j] we have β = 2H. The spectrum of the Weierstrass function is shown in fig. 2.2a, bottom row (left), as a discrete series of dots, one for each of the 8 sinusoids in the upper left construction. On the bottom row (right) we show the same spectrum on a logarithmic plot, on which power laws are straight lines. Of course, in the real world - unlike this academic example - there is nothing special about powers of two, so that all frequencies - a continuum - are present.

The Weierstrass function was created by adding sinusoids: Fourier composition. Now take a messy piece of data - for example the multifractal simulation of the data series (lower left in fig. 1.3): it has small, medium and large wiggles. To analyze it we need the inverse of composition, and this is where the FFT is handy. In this case, by construction, we know that all the wiggles are generated randomly by the process; they are unimportant. However, if we had no knowledge - or only a speculation - about the mechanism that produced it, we would wonder: do the wiggles hide signatures of important processes of interest, or are they simply uninteresting details that should be averaged out and ignored?

[h] The speed-up due to the invention of the FFT is huge: even for the relatively short series in fig. 1.3 (2048 points) it is about a factor of one hundred. In GCMs it accelerates calculations by factors of millions.
[i] The negative sign is used so that, in typical situations, β is positive.
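To make the construction of fig. 2.2a concrete, here is a minimal sketch in Python/NumPy (my illustration, not from the text; the sampling rate and duration are arbitrary choices) that builds the 8-term partial sum of the Weierstrass function and recovers its discrete spectrum with the FFT. The printed powers decay as ω^(-2H), i.e. they fall on a straight line of slope -β = -2H on a log-log plot, matching fig. 2.2a (bottom right).

```python
import numpy as np

# Partial Weierstrass sum: 8 sinusoids with frequencies 2^n cycles per second
# (n = 0..7, i.e. up to 128 cycles/s) and amplitudes 2^(-nH), here with H = 1/3.
# Sampling rate and duration are illustrative choices, not from the text.
H = 1.0 / 3.0
fs, T = 1024, 4.0                       # samples per second; total seconds
t = np.arange(0, T, 1.0 / fs)

W = sum(2.0 ** (-n * H) * np.sin(2 * np.pi * 2.0 ** n * t) for n in range(8))

# Discrete spectrum via the FFT: power (squared amplitude) in each frequency bin.
power = np.abs(np.fft.rfft(W)) ** 2
freqs = np.fft.rfftfreq(len(t), d=1.0 / fs)

# The 8 spikes should scale as omega^(-2H): a line of slope -beta = -2H in log-log.
for n in range(8):
    f_n = 2.0 ** n
    p_n = power[np.argmin(np.abs(freqs - f_n))]
    print(f"omega = {f_n:5.0f}   power = {p_n:14.2f}   omega^(-2H) = {f_n ** (-2 * H):.4f}")
```

Because every construction frequency is an integer multiple of 1/T, each sinusoid lands exactly in one FFT bin, so the 8 spikes appear without spectral leakage, and the ratio of each spike's power to the first reproduces ω^(-2H).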
[j] In the more usual case of continuous spectra, we have β = 1 + 2H, possibly with corrections when intermittency is important.

Fig. 2.1: Antoni van Leeuwenhoek discovering "animalcules" (micro-organisms), circa 1675.

Fig. 2.2b shows the spectrum of the multifractal simulation (fig. 1.3, lower left) for all periods longer than 10 milliseconds. How do we interpret the plot? One sees three strong spikes, at frequencies of 12, 28 and 41 cycles per second (corresponding to periods of 1/12, 1/28 and 1/41 of a second: about 83, 35 and 24 milliseconds). Are they signals of some important fundamental process, or are they just noise? Naturally, this question can only be answered if we have a mental model of how the process might be generated, and this is where it gets interesting.

First of all, consider the case where we have only a single series. If we knew the signal was turbulent (as it was for the top data series), then turbulence theory tells us that we would expect all the frequencies in a wide continuum of scales to be important and, furthermore, that at least on average their amplitudes should decay in a power law manner (as with the Weierstrass function). But the theory only tells us the spectrum that we would expect to find if we averaged over a large number of identical experiments[k] (each one with different "bumps" and wiggles, but from the same overall conditions). In fig. 2.2b, this average is the smooth blue curve. But in the figure we see that there are apparently large departures from this average. Are these departures really exceptional, or are they just "normal" variations expected from randomly chosen pieces of turbulence?

Before the development of cascade models and the discovery of multifractals in the 1970s and 80s, turbulence theory would have led us to expect that the up and down variations about a smooth line through the spectrum should roughly follow the "bell curve" (a Gaussian null hypothesis; see the sketch at the end of this section). If this were the case, then the spectrum should not exceed the bottom red curve more than 1% of the time, and should not exceed the top curve more than one in ten billion times. Yet we see that even this 1/10,000,000,000 curve is exceeded twice in this single but nonexceptional simulation[l]. Had we encountered this series in an experiment,

[k] An "ensemble" or "statistical" average.
[l] I admit that to make my point, I made 500 simulations of the multifractal process in fig. 1.3 and then searched through the first 50 to find the one with the most striking variation. But this was by no means the most extreme of the 500, and if the statistics had been from the bell curve, then the extreme point in the spectrum in fig.
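To illustrate the null hypothesis just described, here is a minimal sketch of a Gaussian scaling noise with spectral exponent β = 2H (all parameter values and function names are my own illustrative choices, not from the text). For a Gaussian process the spectrum at each frequency fluctuates exponentially about its ensemble mean, which is the standard formalization of the "bell curve" expectation: the "1 in 100" level is then exceeded in about 1% of frequency bins and the "one in ten billion" level essentially never, whereas the multifractal simulation of fig. 2.2b exceeds even the latter twice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Null hypothesis sketch: a *Gaussian* scaling noise with spectrum ~ f^(-beta).
# This formalizes the pre-multifractal expectation described in the text; the
# multifractal series of fig. 1.3 is precisely what violates it.
N = 2048                          # series length, as in fig. 1.3
beta = 2 * (1.0 / 3.0)            # spectral exponent beta = 2H, with H = 1/3
freqs = np.fft.rfftfreq(N)[1:]    # positive frequency bins
amps = freqs ** (-beta / 2.0)     # standard deviation of each Fourier coefficient

def realization():
    """One sample series: random complex Gaussian Fourier coefficients."""
    z = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
    coeffs = np.concatenate(([0.0], amps * z / np.sqrt(2.0)))
    return np.fft.irfft(coeffs, n=N)

def spectrum(x):
    """Periodogram (squared FFT amplitude) at the positive frequencies."""
    return np.abs(np.fft.rfft(x)[1:]) ** 2

# The ensemble ("statistical") average spectrum - the smooth curve of fig. 2.2b:
mean_spec = np.mean([spectrum(realization()) for _ in range(200)], axis=0)

# Under this Gaussian null, spectrum/mean at each frequency is exponentially
# distributed: it exceeds the level q with probability exp(-q).
q_1in100 = -np.log(1e-2)     # ~4.6 times the mean: the "1%" curve
q_1in1e10 = -np.log(1e-10)   # ~23 times the mean: the "one in ten billion" curve

ratio = spectrum(realization()) / mean_spec
print("fraction of bins above the 1% curve:       ", np.mean(ratio > q_1in100))
print("any bin above the one-in-ten-billion curve?", np.any(ratio > q_1in1e10))
```

Repeated runs of the last two lines show exceedances of the 1% curve in roughly 1% of bins and essentially none of the extreme curve: under the Gaussian null, spikes like those of fig. 2.2b would be astonishing, which is exactly the point of the comparison.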