
Degree project

Fractal simulation of plants

Author: Haoke Liao
Supervisor: Hans Frisk
Examiner: Marcus Nilsson
Date: 2020-01-29
Course Code: 2MA41E
Subject: Mathematics
Level: Bachelor

Department of Mathematics

Abstract

We study two methods for the simulation of plants in R2: the Lindenmayer system and the iterated function system. The algorithm for an L-system simulates the growth structure of plants quickly and efficiently. The iterated function system can be iterated deterministically or by the so called chaos game.

Contents

1 Introduction ........................... 4
  1.1 History ............................ 5
2 L-systems .............................. 5
3 Iterated Function Systems .............. 9
  3.1 Deterministic Iteration ............ 10
  3.2 Chaos Games ........................ 13
4 Concluding remarks ..................... 16

1 Introduction

Simulation of complex plant structures in nature is a popular field of computer graphics research [9]. In this thesis we take the Lindenmayer systems (L-systems for short) and the iterated function systems (IFS for short) as the main methods in R2. We choose Chaos and Fractals [1] and The Algorithmic Beauty of Plants [3] as the key books and study how the L-system and the IFS work for fractals. The figures of the plants are created with Mathematica [4].

The plants can be regarded as a generalization of the classical fractals. Let us introduce fractals through two concepts: self-similarity and Hausdorff dimension. The objects of classical fractal research are usually not smooth and they have an irregular geometry. A closed and bounded subset of the Euclidean plane R2 is said to be self-similar if it can be expressed in the form

S = S1 ∪ S2 ∪ · · · ∪ Sk,

where S1, S2, . . . , Sk are non-overlapping sets, each of which is similar to S scaled by the same factor s (0 < s < 1) [5]. The figures in figure 2 are good examples of self-similarity: as the iteration is repeated more and more times, the figure approaches the Sierpinski triangle.

Now we turn to the Hausdorff dimension (also called the fractal dimension). It is not easy to calculate directly, so one usually uses the formula for the box-counting dimension instead. For a bounded set α, let N(ℓ) denote the minimum number of boxes of side length ℓ > 0 required to cover α. Then the box dimension of α is given by [6]

dB = lim_{ℓ→0} ln N(ℓ) / ln(1/ℓ).

For the classical fractals, like the Sierpinski triangle, the Hausdorff dimension is the same as the box-counting dimension. As an example, let us consider the Cantor set. Start with a segment of length 1 in figure 1. Remove the middle third and continue by removing the middle third of each remaining segment.

Figure 1: Two iterations towards the Cantor set.

After iterating n times, we get 2^n segments of length 1/3^n. So

dB = lim_{n→∞} ln(2^n) / ln(3^n) = ln 2 / ln 3.
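The box-counting argument above can be checked numerically. The following Python sketch (our own illustration, not from the thesis; all names are arbitrary) builds the level-n approximation of the Cantor set and covers it with boxes of length 3^-n:

```python
import math

def cantor_segments(level):
    """Endpoints of the 2**level segments left after `level` removal steps."""
    segments = [(0.0, 1.0)]
    for _ in range(level):
        segments = [piece for a, b in segments
                    for piece in ((a, a + (b - a) / 3.0),
                                  (b - (b - a) / 3.0, b))]
    return segments

def box_count(segments, box_len):
    """Number of boxes of length box_len needed to cover the segments."""
    covered = set()
    for a, b in segments:
        i = math.floor(a / box_len + 1e-9)   # index of the first box
        while i * box_len < b - 1e-9:
            covered.add(i)
            i += 1
    return len(covered)

# With boxes of length 3**-n exactly 2**n boxes are needed, so the
# ratio ln N / ln(3**n) equals ln 2 / ln 3, approximately 0.6309.
n = 8
N = box_count(cantor_segments(n), 3.0 ** -n)
dim = math.log(N) / math.log(3.0 ** n)
```

For n = 8 this counts N = 256 boxes and reproduces the limit value ln 2/ln 3.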

The purpose of section 2 is to study the rules of plant growth by using a rewriting system. Both the edges and the nodes of the plant can be rewritten. The IFS in section 3, which is also based on iteration, can be done in two ways: deterministic iteration and random iteration. We will discuss both of them and introduce concepts like affine transformation, contraction mapping and chaos game.

Figure 2: Initial triangle (left), one iteration (middle) and four iterations (right) towards the Sierpinski triangle.

1.1 History

In 1872 Karl Weierstrass presented a function which is continuous but nowhere differentiable; today this function is considered a fractal. The Cantor set was introduced by Cantor in 1883. In 1904 Koch produced the Koch snowflake, and Sierpinski presented the Sierpinski triangle in 1915. By 1918, Fatou and Julia had come up with the iteration theory behind what are now called Julia sets. In the same year Hausdorff expanded the definition of "dimension" and defined what we now call the Hausdorff dimension. In the 1960s Mandelbrot introduced the concept of self-similarity, and he coined the word "fractal" in 1975. The fern code developed by Barnsley is an example mentioned in his 1988 book Fractals Everywhere [2]. Lindenmayer systems were introduced and developed in 1968 by Aristid Lindenmayer [3].

2 L-systems

The L-system is based on a rewriting system: an object is constructed by replacing the original object step by step using a set of rules. Let us start by generating curves, since this is a good way to explain the rewriting system; after that we turn to the simulation of plants. A nice way of thinking of the L-system is the so called turtle interpretation. We then see the rules as instructions for a turtle's trajectory. We begin with a so called axiom, ω, which gives the initial configuration. This is often a segment of length F (usually F = 1). To tell the turtle how to move, a production rule, P, must be given. The turtle is allowed to move straight ahead but also to turn left or right by an angle δ, denoted "−" or "+". Given the triplet {ω, P, δ}, the curve can be created.

As an illustration, let us construct the snowflake curve using the turtle interpretation. For its construction only one rule is used. To keep the end points fixed we scale the edges by 1/3 in each step. Now let the rule be F → F + F − − F + F with the angle δ = 60° and ω = F. In figure 3 the first three iterations towards the snowflake are shown.

Figure 3: The initial state and the first three iterations towards the snowflake.
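The turtle interpretation described above is easy to sketch in code. The following Python fragment (an illustration of ours; the function names are arbitrary) expands the snowflake rule F → F + F − − F + F and converts the resulting string into turtle coordinates:

```python
import math

def expand(axiom, rules, n):
    """Apply the production rules n times to the axiom string."""
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

def turtle_path(s, angle_deg, step=1.0):
    """Turtle interpretation: F = one step forward, '+'/'-' = turn by delta."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for ch in s:
        if ch == "F":
            x += step * math.cos(math.radians(heading))
            y += step * math.sin(math.radians(heading))
            points.append((x, y))
        elif ch == "+":
            heading += angle_deg
        elif ch == "-":
            heading -= angle_deg
    return points

# Three iterations of the snowflake rule; each step replaces one F by
# four, so the string contains 4**3 = 64 edges afterwards.
koch = expand("F", {"F": "F+F--F+F"}, 3)
points = turtle_path(koch, 60.0)
```

Scaling the step by 1/3 in each iteration, as in the text, keeps the end points fixed; here the step is left at 1 for simplicity.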

Sometimes it is necessary to work with two types of edges, Fl and Fr, like for the dragon curve in figure 4. Here the angle δ = 90° and the axiom ω = Fl, and the two rules are

Fl → Fl + Fr +,

Fr → − Fl − Fr.

We show the first three iterations and the tenth iteration in figure 4. After two iterations the string is Fl + Fr + + − Fl − Fr +.

Figure 4: The first three iterations and the tenth iteration of the dragon curve.

Another significant aspect of rewriting systems is node rewriting. The point where two edges meet is called a node. As with edge rewriting, it gives a method to create a new curve with a recursive structure. In the following example two types of nodes, L and R, are used and F is still the length step. Using the rules

L → LF + RFR + FL − F − LFLFL − FRFR+

R → −LFLF + RFRFR + F + RF − LFL − FR

and ω = L and δ = 90° will produce a space-filling curve¹, see figure 5.

¹A space-filling curve is a continuous function from the unit interval [0, 1] into R2 whose range has a positive area [8].

Figure 5: One and three iterations towards the space-filling curve.

Both edge and node rewriting can be extended to branching structures. To simulate the growth of plants we now turn to bracketed L-systems. The rules of a bracketed L-system also use rewriting, and again there are two types, edge rewriting and node rewriting; a branch is described by brackets []. Let us start with the edge rewriting case in figure 6, where ω = F and δ = 22.5°.

Figure 6: The plants after one and four iterations with the rule F → FF[− − F + F + F][+F − F − F].

An example of node rewriting is given in figure 7 with ω = X, δ = 20°.

Figure 7: The plants after 1, 2, 3 and 9 iterations with the rules X → F[+X]F[−X] + X and F → FF.
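Compared with the plain turtle, the bracketed L-system needs one extra ingredient: a stack that saves the turtle state at "[" and restores it at "]". A minimal Python sketch of ours (names are arbitrary) for the node-rewriting plant of figure 7:

```python
import math

def expand(axiom, rules, n):
    """Apply the production rules n times to the axiom string."""
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

def branching_turtle(s, angle_deg, step=1.0):
    """Turtle with a stack: '[' saves the state, ']' restores it (a branch).
    Node symbols like X carry no drawing instruction and are skipped."""
    x, y, heading = 0.0, 0.0, 90.0      # start pointing upwards, like a plant
    stack, segments = [], []
    for ch in s:
        if ch == "F":
            nx = x + step * math.cos(math.radians(heading))
            ny = y + step * math.sin(math.radians(heading))
            segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
        elif ch == "+":
            heading += angle_deg
        elif ch == "-":
            heading -= angle_deg
        elif ch == "[":
            stack.append((x, y, heading))
        elif ch == "]":
            x, y, heading = stack.pop()
    return segments

# Two iterations of the node-rewriting plant from figure 7.
s = expand("X", {"X": "F[+X]F[-X]+X", "F": "FF"}, 2)
segments = branching_turtle(s, 20.0)
```

After two iterations the string contains ten F symbols, so the turtle draws ten segments.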

Let us finish this section with some branching structures using two types of edges and/or two types of nodes. The figures of the plant simulations are shown in figure 8 with the following rules

For the plant with two types of edges (left in figure 8):

ω = Fl, δ = 22.5°,
Fl → FlFl[−−Fl][+Fr],
Fr → Fl[−Fl][+Fr].

For the plant with two types of nodes (right in figure 8):

ω = L, δ = 20°,
F → FF,
L → F[+F[+L]]F[−R]+R,
R → F[−F[−L]][+R].

Figure 8: The plant simulations after five iterations with two types of edges (left) and two types of nodes (right), respectively.

We generate the plant in figure 9 by combining edge rewriting and node rewriting. The rules are

ω = L, δ = 20°,
Fl → FlFl[−−Fl][+Fr],
Fr → Fl[−Fl][+Fr],
L → Fl[+Fr[+L]]Fl[−R]+R,
R → Fl[−Fr[−L]][+R].

Figure 9: The plant simulation after five iterations with two types of edges and two types of nodes.

Compared with the plants in figure 8, this plant looks denser.

3 Iterated Function Systems

The iterated function system (IFS for short) can be realized in two ways: deterministic iteration (section 3.1) or the chaos game, also called random iteration (section 3.2). To understand the IFS we have to consider concepts like affine transformation, Hausdorff distance and contraction mapping.

3.1 Deterministic Iteration

Think of a very special copy machine, the multiple reduction copy machine (MRCM for short). Starting with, for example, the black triangle in figure 2, it produces more than one reduced copy, three in the Sierpinski case. Feeding the copy into the MRCM again, the outcome will be 9 small triangles, all similar to the original one. Repeating this over and over again, we expect to come closer and closer to the fractal. To be more precise we have to specify what kind of transformations are allowed and what we mean by closeness.

The MRCM can be described by affine transformations, see definition 3.2 and table 1. Let T1, . . . , Tk be the transformations of an IFS and let A0 be an initial set in the common domain of these transformations. Then A1, the first image of A0 under the IFS, consists of the union of the sets obtained by applying each of the transformations Ti once:

A1 = T1(A0) ∪ T2(A0) ∪ · · · ∪ Tk(A0).

Similarly An, the nth image of A0 under the IFS, is given by

An = T(An−1) = ∪_{i=1}^{k} Ti(An−1).

The procedure of finding these images is called deterministic iteration of the IFS, and T is called the Hutchinson operator.

A significant aspect of the affine transformations used is that they are contraction mappings: the points always move closer to each other after a mapping. It then turns out that for any initial object A0, the sequence An = T(An−1) has a predictable long-term behavior. An has a limit, A∞, whatever the initial set is, and A∞ is the unique attractor of the system [1]. In order to understand closeness, it is important to define the Hausdorff distance, proposed by Felix Hausdorff [1].

Definition 3.1. Let X be a complete metric space with metric d. For any compact subset A of X and ε > 0, define the ε-collar of A (see figure 10) by

Aε = {x ∈ X | d(x, y) ≤ ε for some y ∈ A}.

The ε-collar of another set B is defined in the same way. For any compact subsets A and B of X the Hausdorff distance is defined as [1]

h(A, B) = inf{ε | A ⊂ Bε and B ⊂ Aε}.

Figure 10: ε-collar of A
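For finite point sets the Hausdorff distance of definition 3.1 can be computed directly. The following small Python illustration is our own; the definition itself covers general compact sets, of which finite sets are a special case:

```python
def hausdorff(A, B):
    """Hausdorff distance between two finite point sets in the plane."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    # smallest eps such that A lies inside the eps-collar of B ...
    eps_ab = max(min(dist(a, b) for b in B) for a in A)
    # ... and the smallest eps such that B lies inside the eps-collar of A
    eps_ba = max(min(dist(a, b) for a in A) for b in B)
    # the Hausdorff distance is the larger of the two
    return max(eps_ab, eps_ba)
```

For example, hausdorff([(0, 0)], [(3, 0), (0, 4)]) is 4: the single point of A is within distance 3 of B, but the point (0, 4) of B is at distance 4 from A.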

To get used to the definition, see figure 11. Let us draw concentric circles around A (to the left in figure 11) and around B (in the middle of figure 11), each circumscribing the other circle. The ε for the B-collar is larger than the ε for the A-collar, so it gives the Hausdorff distance.

Can we show that the Hutchinson operator is a contraction? If so, it follows from the contraction mapping theorem [2] that the unique attractor exists. Let the right side of figure 11 be the first iteration of A and B, where T1 and T2 have contraction factors c1 and c2. We need to show that h(T(A), T(B)) < h(A, B). Let c = max(c1, c2) < 1 and let ε = h(A, B), so that B ⊂ Aε. It is clear from figure 11, where c = c1, that

T1(B) ⊂ T1(A)ε·c1 ,  T2(B) ⊂ T2(A)ε·c2 ,

so

T1(B) ∪ T2(B) ⊂ (T1(A) ∪ T2(A))ε·c.

Repeating this reasoning for A ⊂ Bε gives h(T(A), T(B)) ≤ ε·c. Finally we obtain h(T(A), T(B)) < h(A, B).

Figure 11: Since the ε-value for the B-collar is the largest one, it gives the Hausdorff distance; the right part illustrates that h(T(A), T(B)) < h(A, B).

Let us now consider the affine transformations used in the deterministic iteration.

Definition 3.2. An affine transformation is a mapping of R2 into R2 of the form

T(x, y) = (ax + by + e, cx + dy + f),

where a, b, c, d, e and f are scalars [5]; in matrix form, T(v) = Mv + t with M the 2 × 2 matrix with rows (a, b) and (c, d) and t = (e, f). It means the points in a set are transformed by a linear transformation followed by a translation. The samples in table 1 show basic affinities.

(−1  0)      (1  1)      (2  0)      (cos π/6  −sin π/6)
( 0  1)      (0  1)      (0  1)      (sin π/6   cos π/6)

Table 1: Some basic affine transformations of the unit square: reflection, shear, strain and rotation by 30 degrees, respectively.

As an illustration of a deterministic iteration, let us take the three transfor- mations given by Table 2.

Parameters    a      b      c      d     e    f
T1           0.5     0      0     0.5    0    1
T2          −0.5     0      0    −0.5    1    1
T3            0    −0.5    0.5     0     2    0

Table 2: The values of the parameters.

These three mappings will be applied in each iteration. The first iteration of an isosceles triangle with the corresponding coordinates is shown in figure 12.

Figure 12: Three affine transformations of an isosceles right triangle.

We find that after around 8 iterations the picture looks the same on this scale. As seen in figure 13, we have a good picture of the fractal after 8 iterations.

Figure 13: 3 (left) and 8 (right) iterations.
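The deterministic iteration with the three maps of table 2 can be sketched as follows. This is our own Python illustration; as a simplification, the set of triangle vertices stands in for the full triangle:

```python
def apply_map(p, m):
    """Affine map T(x, y) = (a*x + b*y + e, c*x + d*y + f)."""
    a, b, c, d, e, f = m
    x, y = p
    return (a * x + b * y + e, c * x + d * y + f)

# The parameter values (a, b, c, d, e, f) from table 2.
MAPS = [
    (0.5, 0, 0, 0.5, 0, 1),       # T1
    (-0.5, 0, 0, -0.5, 1, 1),     # T2
    (0, -0.5, 0.5, 0, 2, 0),      # T3
]

def deterministic_iteration(points, n):
    """Hutchinson operator: the next image is the union of all three maps."""
    for _ in range(n):
        points = {apply_map(p, m) for p in points for m in MAPS}
    return points

# Vertices of an initial triangle.
A0 = {(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)}
A1 = deterministic_iteration(A0, 1)
```

One iteration of the three vertices gives at most 9 image points (8 here, since two coincide); iterating further builds up an approximation of the attractor.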

3.2 Chaos Games

The deterministic iteration can be quite time consuming. A quicker way to generate fractals is the so called chaos game. We discuss the Sierpinski triangle as an example. Prepare a die; the number rolled represents the direction: 1 and 2 represent the direction towards A, 3 and 4 towards B, and 5 and 6 towards C, see figure 14. First of all, we take a random starting point in the triangle. The next point is the midpoint between the previous point and the vertex (A, B or C) given by the die. Figure 14, as an example, shows the path of the first three points. Here x1 denotes the random initial point inside △ABC. Say we get the numbers 6 and 2 after throwing the die twice; then x2 and x3 are the midpoints of the segments x1C and x2A, respectively.

Figure 14: Two iterations of the chaos game.

Repeating this step over and over again, we can see that the points will generate the attractor. The examples in figure 15 verify it.

Figure 15: 1000 (left), 2000 (middle) and 10000 (right) iterations of the chaos game.
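The chaos game described above is only a few lines of code. Below is a Python sketch of ours, using the die convention from figure 14; the triangle vertices and the seed are arbitrary choices:

```python
import random

def chaos_game(n, seed=0):
    """Play the chaos game for the Sierpinski triangle with vertices A, B, C."""
    A, B, C = (0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)
    corner = {1: A, 2: A, 3: B, 4: B, 5: C, 6: C}   # die roll -> vertex
    rng = random.Random(seed)
    x, y = rng.random(), rng.random() * 0.5          # random starting point
    points = []
    for _ in range(n):
        vx, vy = corner[rng.randint(1, 6)]           # throw the die
        x, y = (x + vx) / 2, (y + vy) / 2            # jump to the midpoint
        points.append((x, y))
    return points

pts = chaos_game(10000)
```

Plotting the returned points reproduces pictures like those in figure 15; more points give a denser image of the attractor.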

To understand why the chaos game works, consider figure 16. There the first three iterations of the deterministic IFS are shown. Each black triangle is labeled with a sequence of the numbers 1, 2 and 3. If, for example, the first digit is 1 then we are located in the lower left part. If the second digit is 2 we know that the position is in the lower right part of that triangle, et cetera. Consider now the triangle labeled 123, shown to the right in figure 16. If we play the chaos game, how do we end up in triangle 123? In fact we know some of the history of the trajectory. We got 5 or 6 and headed towards corner C, then we got 3 or 4 and jumped into triangle 23, and finally we went in the direction of corner A. The size of triangle 123 tells us how close we are to the fractal object. To come closer we need a smaller triangle, for example 1232312. Will such a sequence appear in the chaos game? Yes indeed; the so called ergodic theorem [2] tells us that it will. To conclude, doing more and more iterations of the chaos game takes us closer and closer to the fractal.

Figure 16: The first three iterations towards the Sierpinski triangle.

To see how well the chaos game works, we compare it in figure 17 with the deterministic iteration seen in figure 13. Using only 10000 points we get a good picture of the fractal, even if the picture is not so detailed. Of course, using more and more points in the random iteration, the picture becomes better and better.

Figure 17: Comparing the deterministic iteration with 9 iterations (left) and the chaos game with 10000 iterations (right).

Given a plant in nature, like a fern, how do we choose the affinities and the probabilities in the chaos game to capture the features of the plant [2]? As an illustration, we give the four affine transformations of the famous Barnsley fern in table 3; the affine images of the fern are shown in figure 18.

Parameters    a      b      c      d     e     f      p
Map 1         0      0      0     0.16   0     0     0.01
Map 2        0.85   0.04  −0.04   0.85   0    1.6    0.85
Map 3        0.2   −0.26   0.23   0.22   0    1.6    0.07
Map 4       −0.15   0.28   0.26   0.24   0    0.44   0.07

Table 3: Four mappings for the fern and their probabilities.

Figure 18: Deterministic iteration towards the fern after 1 and 5 iterations.

The left side of figure 18 shows the affine images corresponding to maps 1 to 4 in table 3. They are affine transformations of a rectangle with vertex coordinates (−2.15, 0), (−2.15, 10.05), (2.4, 10.05), (2.4, 0). The right side of figure 18 shows the image of the fern after five iterations. This image comes closer and closer to the target image as we iterate more and more times. The left side of figure 19 shows the same fern but created with the chaos game. The other parts of figure 19 show what happens when we change the probability values.

Figure 19: The fern for different probabilities; from left to right p1 = 0.01, 0.01, 0.01; p2 = 0.85, 0.49, 0.85; p3 = 0.07, 0.25, 0.02; p4 = 0.07, 0.25, 0.12 respectively, using 50000 points.

Map 1 is not interesting here, so let us consider maps 2 to 4. The probabilities (p1 = 0.01, p2 = 0.85, p3 = 0.07, p4 = 0.07) of the original fern are motivated by

pi = |det Ti| / (|det T1| + |det T2| + · · · + |det Tn|),

which is a reasonable choice since the particle in the chaos game should most often use the mapping corresponding to the largest area. In the middle figure, transformations 3 and 4 are more frequent (p2 = 0.49); compared with the original fern, the tops of the leaves become lighter. To the right we keep the value p2 = 0.85 but make the difference between p3 and p4 larger, producing a left-right asymmetry.
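The determinant formula can be evaluated directly. In the Python sketch below (our own illustration; the small floor probability for the stem map, whose linear part has determinant zero, is our choice and not from the text), the weights for maps 2 to 4 come out near 0.77, 0.11 and 0.12, in the same spirit as, though not identical to, the tabulated values:

```python
def det_probabilities(maps, floor=0.01):
    """Weight each map by |det| of its linear part; maps with zero
    determinant (like the fern's stem map) get a small floor probability."""
    dets = [abs(a * d - b * c) for a, b, c, d, e, f in maps]
    total = sum(dets)
    p = [max(dv / total, floor) for dv in dets]
    s = sum(p)                     # renormalize so the weights sum to 1
    return [pi / s for pi in p]

# The fern maps (a, b, c, d, e, f) from table 3.
FERN = [
    (0, 0, 0, 0.16, 0, 0),              # map 1: stem (det = 0)
    (0.85, 0.04, -0.04, 0.85, 0, 1.6),  # map 2: main frond
    (0.2, -0.26, 0.23, 0.22, 0, 1.6),   # map 3: one leaflet
    (-0.15, 0.28, 0.26, 0.24, 0, 0.44), # map 4: the other leaflet
]
probs = det_probabilities(FERN)
```

A map with a larger image area is visited more often, so roughly the same number of points lands per unit of area.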

4 Concluding remarks

In this thesis two methods for generating plants in R2 have been demonstrated. The L-system and the IFS are two different viewpoints on the modeling of plants, but in some cases the IFS corresponding to an L-system can be found [3]. Combining edge and node rewriting, like in figure 9, produces quite realistic plants. For the chaos game the best choice of probabilities seems to be an open problem, even if the choice above is a good and realistic one. An interesting problem I would like to study more is how to find the L-system, or the IFS, for a given plant in nature.

References

[1] Heinz-Otto Peitgen, Hartmut Jürgens and Dietmar Saupe, 2004, Chaos and Fractals: New Frontiers of Science, Springer, second edition.

[2] Michael F. Barnsley, 1988, Fractals Everywhere, Academic Press, second edition.

[3] Przemyslaw Prusinkiewicz and Aristid Lindenmayer, 1990, The Algorithmic Beauty of Plants, Springer-Verlag, second edition.

[4] Michael Trott, 2004, The Mathematica Guidebook, Springer-Verlag.

[5] Howard Anton and Chris Rorres, 2011, Elementary Linear Algebra, Wiley and Sons, 10th edition.

[6] Judith N. Cederberg, 2010, A Course in Modern Geometries, Springer-Verlag, second edition.

[7] John E. Hutchinson, 1981, Fractals and self similarity, Indiana Univ. Math. J. 30, 713-747.

[8] Hans Sagan, 1994, Space-Filling Curves, Springer-Verlag.

[9] Johan Knutzen, 2009, Generating climbing plants using L-systems, Master Thesis, Gothenburg.


Faculty of Technology SE-391 82 Kalmar | SE-351 95 Växjö Phone +46 (0)772-28 80 00 [email protected] Lnu.se/faculty-of-technology?l=en