CDM Iteration III
Klaus Sutner, Carnegie Mellon University
2017/12/15

1 The Mandelbrot Set
2 Calculus and Fixed Points
3 Iteration and Decidability
4 Theory of Fixed Points

Self-Similarity
Recall that the driving idea behind recursion and iteration is self-similarity: the whole computation is highly similar to a part of itself.
The notion of self-similarity is perhaps most compelling in geometry, where one can see how parts of a figure are similar to the whole.
Note that without computer-aided visualization none of the following pictures would exist; they cannot be drawn by hand.

[Slides "Mandelbrot 1" through "Mandelbrot 4": successive views of the Mandelbrot set.]

What Set?
Consider the second-degree complex polynomial

p_c(z) = z² + c
where c ∈ C is a constant.
Definition
c ∈ C belongs to the Mandelbrot set M if the orbit of 0 under pc is bounded (as a sequence of complex numbers).
For example, 0 ∈ M but 1 ∉ M. The point c = i is also in M: the orbit of 0 is

i, −1 + i, −i, −1 + i, −i, −1 + i, −i, . . .

Likewise −1 and −i are in M.

And the Colors?
A priori, we only get a black and white picture: in M or out of M.
Note that it is not even clear how to do this much: there is no simple test to check whether |p_cⁿ(0)| → ∞ as n → ∞.
In practice, one computes a few values |p_cⁿ(0)| and checks whether they seem to tend to infinity.
Colors can be introduced, for example, based on the speed of divergence.
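This idea can be sketched in a few lines of Python. The standard escape-time test (not from these notes; the name escape_time is ours) exploits the fact that once |z| > 2 the orbit is guaranteed to diverge:

```python
# Escape-time sketch: once |z| > 2, the orbit of 0 under p_c(z) = z^2 + c
# is guaranteed to diverge, so c is not in M. Return the number of steps
# until escape, or None if no escape is seen within max_iter steps.
def escape_time(c: complex, max_iter: int = 100):
    z = 0
    for n in range(max_iter):
        if abs(z) > 2:
            return n      # escaped after n steps
        z = z * z + c     # one step of p_c
    return None           # presumably bounded: paint the point black

# The examples from the text: 0, i, -1, -i stay bounded; 1 escapes.
for c in [0, 1j, -1, -1j, 1]:
    print(c, escape_time(c))
```

The escape count n is exactly the "speed of divergence" that can drive the coloring.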
Doing this right is somewhat of a black art; see http://en.wikipedia.org/wiki/Mandelbrot_set for some background.

Squaring
[Plot: the squaring map w(z) = z².]

Squaring plus Offset
The symbolic orbit of 0 under z ↦ z² + c:
0    0
1    c
2    c² + c
3    c⁴ + 2c³ + c² + c
4    c⁸ + 4c⁷ + 6c⁶ + 6c⁵ + 5c⁴ + 2c³ + c² + c
5    c¹⁶ + 8c¹⁵ + 28c¹⁴ + 60c¹³ + 94c¹² + 116c¹¹ + 114c¹⁰ + 94c⁹ + 69c⁸ + 44c⁷ + 26c⁶ + 14c⁵ + 5c⁴ + 2c³ + c² + c
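The table can be checked mechanically. Here is a sketch in Python (all names are ours) that represents each orbit polynomial by its coefficient list in ascending degree, so squaring is just a convolution:

```python
# Verify the symbolic orbit of 0 under z -> z^2 + c. Each polynomial in c
# is a list of coefficients in ascending degree: [0, 1, 1] means c + c^2.
def poly_square(p):
    q = [0] * (2 * len(p) - 1)       # convolution of p with itself
    for i, a in enumerate(p):
        for j, b in enumerate(p):
            q[i + j] += a * b
    return q

def orbit_polys(steps):
    p = [0]                          # step 0: the zero polynomial
    orbit = [p]
    for _ in range(steps):
        p = poly_square(p)           # p -> p^2 ...
        while len(p) < 2:
            p.append(0)
        p[1] += 1                    # ... + c
        orbit.append(p)
    return orbit

# Step 4: coefficients of c^8 + 4c^7 + 6c^6 + 6c^5 + 5c^4 + 2c^3 + c^2 + c
print(orbit_polys(4)[4])             # -> [0, 1, 1, 2, 5, 6, 6, 4, 1]
```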
Note that the coefficients are rather wild; there is little hope of understanding these polynomials.

Why Now?
Several mathematicians developed the foundations for structures such as the Mandelbrot set in the early 20th century, in particular Gaston Julia and Pierre Fatou. Fractal dimensions were also well understood; see the work of Felix Hausdorff and Abram Besicovitch.
Why did they not discover the Mandelbrot set?
Because they had no computers, unlike Monsieur Mandelbrot, who happened to be working for IBM at Yorktown Heights.

So one of the most important developments in geometry in the 20th century was quite strongly connected to computation and visualization.

2 Calculus and Fixed Points

Calculus and Fixed Points
Many interesting applications of fixed points lie in the continuous domain: finding a fixed point is often a good method in calculus to compute certain numbers.
A classical problem: calculate √2.

By "calculate" we mean: give a method that produces as many digits of the decimal expansion of √2 as desired.

One obvious way to do this is a version of binary search: given approximations a, b such that a² < 2 < b², try (a + b)/2. Fine, but a bit complicated.

Iteration to the Rescue
Consider the map

g : R⁺ → R⁺,   g(x) = x/2 + 1/x

Then the iterates gⁿ(1) approximate √2. Note that √2 is in fact a fixed point of g.

Alas, there is a problem: it is plain false to claim that

FP(g, 1) = √2

All the numbers in the orbit are rational, but our goal is an irrational number, so the orbit can never actually reach the fixed point.

The Power of Wishful Thinking
But wouldn’t it be nice if we could set
a = gω(1)
so that
g(a) = g(gω(1)) = gω(1) = a
In a way, we can: we can make sense of gω(1) by using limits. With luck there will be some number a such that
|gⁿ(1) − a| → 0 as n → ∞.
Of course, this does not always work.

Success
In our case we duly have

lim_{n→∞} gⁿ(1) = √2

and √2 is indeed a fixed point of g; the orbit just never reaches this particular point.

This is no problem in real life: we can stop the iteration when |2 − (gⁿ(1))²| is sufficiently small.
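In code, this stopping rule might look as follows (a sketch; the name and the tolerance 1e-12, chosen safely above the round-off floor of double precision, are ours):

```python
# Iterate g(x) = x/2 + 1/x starting from 1, stopping once |2 - x^2|
# is sufficiently small.
def sqrt2_by_iteration(tol: float = 1e-12) -> float:
    x = 1.0
    while abs(2 - x * x) > tol:
        x = x / 2 + 1 / x
    return x

print(sqrt2_by_iteration())
```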
As a matter of fact, convergence is quite rapid:
0    1.0000000000000000000
1    1.5000000000000000000
2    1.4166666666666665186
3    1.4142156862745096646
4    1.4142135623746898698
5    1.4142135623730949234
6    1.4142135623730949234

Newton's Method
This is Newton's Method: to find a root of f(x) = 0, iterate

g(x) = x − f(x)/f′(x)
and pray that everything works.
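A generic sketch (the name newton and the fixed iteration count are our choices; as the text says, we simply pray that f is well behaved):

```python
# Generic Newton iteration: x -> x - f(x)/f'(x), run for a fixed
# number of steps.
def newton(f, fprime, x0: float, steps: int = 50) -> float:
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

# Root of f(x) = x^2 - 2: this recovers the map g(x) = x/2 + 1/x above.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root)
```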
Obviously f needs to be differentiable here, and we would like f′(x) to stay sufficiently far away from 0 so that the second term does not become unreasonably large.

Application: Reciprocal
A typical application of Newton’s Method is to determine 1/a in high precision computations.
Here

f(x) = 1/x − a
g(x) = 2x − ax²
The point here is that we can express division in terms of multiplication and subtraction (simpler operations, in a sense).

Example
Numerical values for a = 1.4142135623730950488 ≈ √2.
0    1.0000000000000000000
1    0.5857864376269049511
2    0.6862915010152396095
3    0.7064940365486259451
4    0.7071062502115925513
5    0.7071067811861488089
6    0.7071067811865475244
7    0.7071067811865475244
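This table can be reproduced with a short sketch (the name reciprocal and the step count are ours):

```python
# Newton iteration for 1/a, using only multiplication and subtraction:
# x -> 2x - a x^2. Convergence requires 0 < x0 < 2/a; x0 = 1 works here.
def reciprocal(a: float, steps: int = 10, x0: float = 1.0) -> float:
    x = x0
    for _ in range(steps):
        x = 2 * x - a * x * x
    return x

a = 1.4142135623730950488
x = reciprocal(a)
print(x, x * a)
```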
Verify the result:
0.7071067811865475244 × 1.4142135623730950488 = 1.000000000000000000

Schröder's Method (1870)
If quadratic convergence is not enough, one can speed things up tremendously by iterating more complicated functions. For example, define f_a(x) to be the rational function