
Abstract

The Collage Theorem was given by Michael F. Barnsley, who wanted to show that it is possible to reconstruct images using a set of functions. The idea was to reduce the space needed on a computer to save an image or video. An image or video file can be quite large, while saving only a set of functions that can duplicate the original image or video frees up extra space; during the late 1980s and early 1990s, when computers were still somewhat primitive, this was an important development. This paper introduces the idea of fractal geometry with an emphasis on the Collage Theorem, using a special case of the Contraction Mapping Theorem. The Collage Theorem says that we can approximate an image by an iterated function system whose attractor will yield the desired image regardless of the initial set.

Fractal Geometry – Contraction Mapping & The Collage Theorem

“Why am I learning this?” “Where am I ever going to use this outside of school?” These are questions that every student of mathematics has asked at some point in his or her life. Often, the question is valid, because what we work with in the classroom, generally, are “nice” equations, graphs, and problems. Rarely in real life are we presented with a perfect shape, a single-variable equation, or a straight line. For example, when we travel, we can find on a map roughly how many miles lie between where we are and our destination, but the question is “Is there a perfectly straight road from here to there?” Unless you're driving to the end of the block, the answer is generally no. We can translate this idea from the distance between two points to the perimeter of a shape.

Let's use the example of measuring the coastline of Australia. Most maps have a key with a scale, so let's use a 500-kilometer measuring stick. By dividing the coastline into 500-kilometer chunks, we get a total of 13,000 kilometers. But what happens if we use, say, a 100-kilometer measuring stick?

The total should not change very much, right? Well, with that smaller interval, the coastline is now 15,500 kilometers, an almost 20% increase! Where did the extra 2,500 kilometers come from? What would happen with an even smaller measuring stick? The CIA World Factbook website gives the length of Australia's coastline as 25,760 kilometers, a 66% increase over the second measurement.

What happens is that, as the intervals decrease in length, the measured length of the coastline continues to increase, because there are always smaller crevices to be measured. This is called the Coastline Paradox, and it is an example of an application of fractals.

The definition of the word “fractal” varies slightly depending on who gives it. The word, which we owe to Benoit B. Mandelbrot, comes from the Latin word frāctus, meaning “broken” or “uneven.” One of the more basic definitions of “fractal” is “the roughness of an object or space.” From the coastline example, we can see how these ideas apply. However, we are still missing one of the most important pieces of what it means to be a fractal, and that word is “self-similarity.”

Wolfram MathWorld defines a fractal as “an object that displays self-similarity, in a somewhat technical sense, on all scales.” According to this definition, these objects need not match perfectly on every scale, but similar structures must be present on all scales. However, self-similarity, in and of itself, is not a sufficient definition. What makes a fractal is its dimension: a fractional (or fractal) dimension. We begin by recalling our idea of dimension from our middle-school and high-school years, which is that a point has dimension 0, a line dimension 1, a circle or square dimension 2, a sphere or cube dimension 3, and so on. There is a formula that allows us to calculate the dimension of certain objects:

$$\text{dimension} = \frac{\log(\text{number of self-similar pieces})}{\log(\text{magnification factor})} \qquad (1)$$

While we can show this for any shape, the 2-dimensional square, arguably, gives the best depiction of this formula.

Figure 1: Defining Dimension. A square divided into 1, 4, and 9 self-similar pieces, with magnification factors 1, 2, and 3, respectively.

Following from the figure above, we see that when a square is divided into 1, 4, or 9 self-similar pieces, we can reconstruct the original square by multiplying any one of those self-similar pieces by a magnification factor of 1, 2, or 3 (respectively). So, using (1), we get

$$\frac{\log(1)}{\log(1)}, \qquad \frac{\log(4)}{\log(2)} = \frac{\log(2^2)}{\log(2)} = 2, \qquad \frac{\log(9)}{\log(3)} = \frac{\log(3^2)}{\log(3)} = 2$$

(the first case, a single piece at magnification 1, is degenerate because log(1) = 0; the pattern is meaningful for every magnification factor greater than 1).

We can now generalize this to a square divided into $N^2$ self-similar pieces with a magnification factor of N. From this example, we can imagine that a line divided into N self-similar pieces will have a magnification factor of N, so

$$\frac{\log(N)}{\log(N)} = 1.$$

Also, a cube divided into $N^3$ self-similar pieces with magnification factor N will have dimension

$$\frac{\log(N^3)}{\log(N)} = 3,$$

which confirms our usual notion of dimension. But this raises the question: what dimension does a fractal have? Intuition tells us that fractal (or fractional) dimension should fall between integers. It is perhaps easiest to see this in an example.

The classical Cantor Set, or Cantor Comb, is one such example of self-similarity and fractional dimension. Each iteration of this set copies down the previous level and removes the open middle third of each piece (see Figure 2 below).

Figure 2: The Cantor Set, iterations $I_0$ through $I_5$.

With $I_0 = [0,1]$, we see that

$$I_1 = \left[0, \tfrac{1}{3}\right] \cup \left[\tfrac{2}{3}, \tfrac{3}{3}\right]$$
$$I_2 = \left[0, \tfrac{1}{9}\right] \cup \left[\tfrac{2}{9}, \tfrac{3}{9}\right] \cup \left[\tfrac{6}{9}, \tfrac{7}{9}\right] \cup \left[\tfrac{8}{9}, \tfrac{9}{9}\right]$$
$$\vdots$$
$$I_N = \text{remove the open middle third of each interval in } I_{N-1}$$
$$\vdots$$

On this scale, after the fifth iteration there is no visual difference from the previous iteration; however, the process continues on to infinity. We see that if we magnify, say, the first piece of $I_1$ by a factor of 3, we get the original line. The Cantor Set is not the entire picture, but rather the limit of this picture, which, in this case, is $C = \bigcap_{n=1}^{\infty} I_n$. So as we continue to take the middle third out of each step above, we get smaller and smaller pieces, to the point where this fractal contains no intervals but has infinitely many points around each point. This implies C is a perfect set, which means it is a closed subset of ℝ where every point of the set is a limit point. Our intuition tells us that this particular fractal should have a dimension between that of a point (zero) and a line (one). Using formula (1) to calculate the dimension of the Cantor Set, with 2 self-similar pieces and a magnification factor of 3, we get

$$\frac{\log(2)}{\log(3)} \approx 0.63,$$

and our intuition holds true. This is the basis of fractal geometry: any magnification of a section of the whole fractal looks very similar to, if not exactly like, the larger image, and its dimension falls between positive integers.
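Formula (1) is easy to check numerically. Below is a minimal sketch in Python (the function name is our own, chosen for illustration) that reproduces the line, square, cube, and Cantor Set calculations above.

```python
import math

def similarity_dimension(pieces: int, magnification: float) -> float:
    """Formula (1): log(# of self-similar pieces) / log(magnification factor)."""
    return math.log(pieces) / math.log(magnification)

# A line split into 3 pieces, a square into 9, a cube into 27 (N = 3 each),
# and the Cantor Set: 2 self-similar pieces, each magnified by 3.
print(similarity_dimension(3, 3))    # line       -> 1.0
print(similarity_dimension(9, 3))    # square     -> 2.0
print(similarity_dimension(27, 3))   # cube       -> 3.0
print(similarity_dimension(2, 3))    # Cantor Set -> 0.6309...
```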

Before we get into the Contraction Mapping Theorem or the Collage Theorem, we need to look at several definitions. Recall from Real Analysis:

Definition 1. A metric space is a set S with a global distance function (the metric d) that, for every two points x, y in S, gives the distance between them as a nonnegative real number. A metric space must also satisfy:

i. d(x,y) = 0 iff x = y;

ii. d(x,y) = d(y,x);

iii. the triangle inequality, d(x,y) + d(y,z) ≥ d(x,z). (Wolfram Alpha)

The ideas of compactness and completeness will be vital to the rest of this paper, so we give the definitions here. For completeness, we first need to define what it means for a sequence to be Cauchy.

Definition 2. A sequence $a_1, a_2, \ldots$ is a Cauchy sequence if the metric $d(a_m, a_n)$ satisfies

$$\lim_{\min(m,n) \to \infty} d(a_m, a_n) = 0.$$

(Wolfram Alpha)

See Figure 3 below for a visual example of a Cauchy and non-Cauchy sequence.

Figure 3: Cauchy vs. non-Cauchy Sequences.

Images from: http://en.wikipedia.org/wiki/Cauchy_sequence

Definition 3. A metric space, X, is complete (or a complete metric space) if every Cauchy

sequence is convergent. (Wolfram Alpha).

Definition 4. A sequence $S_n$ converges to the limit S, written $\lim_{n \to \infty} S_n = S$, if, for any ε > 0, there exists an N such that $|S_n - S| < \varepsilon$ for n > N. If $S_n$ does not converge, it is said to diverge. (Wolfram Alpha)

Definition 5. A metric space, X, is compact if every open cover of X has a finite subcover.

(Wolfram Alpha).

Another characterization, valid for subsets of Euclidean space by the Heine–Borel theorem, is that the compact sets are exactly those that are closed and bounded. [Note that every compact metric space is complete, though complete metric spaces need not be compact.]

Definition 6. Let ( X,d ) be a metric space. A mapping T : X → X is a contraction mapping, or

contraction, if there exists a constant c, with 0 ≤ c < 1, such that

d(T(x),T(y)) ≤ cd(x,y) for all x , y∈ X.

(Hunter, Nachtergaele 2001 p 61)

A contraction mapping takes points and brings them closer together. See Figure 4 below for an example, which demonstrates that for every $x \in X$ and any r > 0, all points y in the ball $B_r(x)$ are mapped into a ball $B_s(Tx)$ with s < r.

Figure 4: Contraction Mapping. The ball around x is mapped into a smaller ball around Tx.

Image from: Hunter, Nachtergaele (2001) p 61

Definition 7. If T : X → X, then the point x∈ X such that T(x) = x is called a fixed point.

(Hunter, Nachtergaele 2001 p 62).

With these definitions, we can now give the Contraction Mapping Theorem.

Theorem 1. If T : X → X is a contraction mapping on a complete metric space ( X,d ), then there is exactly one point $x \in X$ that is the fixed point of the contraction.

(Hunter, Nachtergaele 2001 p 62).

Proof. The proof of this theorem is fairly simple and straightforward. Let $x_0$ be any point in X, and let the sequence $(x_n)$ in X be defined by $x_{n+1} = Tx_n$ for n ≥ 0. Also define the $n$th iterate of T by $T^n$, so $x_n = T^n x_0$. In order to prove the Contraction Mapping Theorem, we will first show that $(x_n)$ is Cauchy, then show that its limit x is our fixed point, and finally show that the fixed point is unique. Using Definition 6, what is outlined above, and the triangle inequality, for n ≥ m ≥ 1 we have

$$\begin{aligned}
d(x_n, x_m) &= d(T^n x_0, T^m x_0) \\
&\le c^m \, d(T^{n-m} x_0, x_0) \\
&\le c^m \left[ d(T^{n-m} x_0, T^{n-m-1} x_0) + d(T^{n-m-1} x_0, T^{n-m-2} x_0) + \cdots + d(T x_0, x_0) \right] \\
&\le c^m \left[ \sum_{k=0}^{n-m-1} c^k \right] d(x_1, x_0) \\
&\le c^m \left[ \sum_{k=0}^{\infty} c^k \right] d(x_1, x_0) \\
&= \frac{c^m}{1-c} \, d(x_1, x_0).
\end{aligned}$$

Since c < 1, the right-hand side can be made as small as we like by taking m large, so $(x_n)$ is Cauchy. Since X is complete, $(x_n)$ converges to a limit $x \in X$. From the continuity of T, we have

$$Tx = T\left(\lim_{n\to\infty} x_n\right) = \lim_{n\to\infty} T x_n = \lim_{n\to\infty} x_{n+1} = x.$$

Thus, x is our fixed point. To show that x is unique, suppose that x and y are two fixed points. Then

$$0 \le d(x,y) = d(Tx, Ty) \le c\, d(x,y).$$

However, c < 1, so we must have d(x,y) = 0. Thus x = y, and the fixed point is unique. This completes the proof. (Hunter, Nachtergaele 2001 pp 62-63)
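The proof is constructive: starting from any $x_0$ and iterating $x_{n+1} = Tx_n$ walks straight to the fixed point. A minimal sketch in Python, assuming as an example the contraction T(x) = x/2 + 1 on the real line (contractivity factor c = 1/2, fixed point x = 2):

```python
def fixed_point(T, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = T(x_n); by the Contraction Mapping Theorem this
    converges to the unique fixed point when T is a contraction."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# T(x) = x/2 + 1 has contractivity factor c = 1/2 and fixed point x = 2.
print(fixed_point(lambda x: x / 2 + 1, x0=100.0))   # -> 2.0
print(fixed_point(lambda x: x / 2 + 1, x0=-7.5))    # -> 2.0, same from any start
```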

Here, it is necessary to give the definitions of an iterated function system, an attractor, and the metric space we will be working in.

Definition 8. A[n] … iterated function system (IFS) consists of a complete metric space ( X, d )

together with a finite set of contraction mappings wn : X → X, with respective contractivity

factors cn < 1, for n = 1, 2, …, N. (Barnsley 1988 p 82).

What an IFS does is scale, translate, and rotate a set using a finite set of functions, which is possible because of the Contraction Mapping Theorem. Recall our previous example of the Cantor Set (Figure 2), and consider the set of functions that generate it. We need two functions: one that scales down by a factor of three, and one that scales down by a factor of three and then translates by two-thirds of the original length. So,

we get our IFS $\{f_1, f_2\}$, where $f_1(x) = \frac{1}{3}x$ and $f_2(x) = \frac{1}{3}x + \frac{2}{3}$. As we continue to iterate this function system, we reach a point where there are no intervals left, but infinitely many points around each point, which is the defining property of the Cantor Set. Writing the iteration in IFS notation, we get

$$I_1 = f_1(I_0) \cup f_2(I_0)$$
$$I_2 = f_1(I_1) \cup f_2(I_1)$$
$$\vdots$$
$$I_n = f_1(I_{n-1}) \cup f_2(I_{n-1}),$$

thus

$$C = f_1(C) \cup f_2(C).$$

A short computational sketch of these iterations is given below.
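These iterations can be carried out directly on the interval endpoints. A minimal Python sketch (our own illustration, using exact fractions to avoid rounding) that applies $f_1$ and $f_2$ to every interval of $I_{n-1}$:

```python
from fractions import Fraction

def cantor_step(intervals):
    """One IFS iteration: I_n = f1(I_{n-1}) ∪ f2(I_{n-1}), where
    f1(x) = x/3 and f2(x) = x/3 + 2/3 act on interval endpoints."""
    out = []
    for a, b in intervals:
        out.append((a / 3, b / 3))                                    # f1
        out.append((a / 3 + Fraction(2, 3), b / 3 + Fraction(2, 3)))  # f2
    return out

I = [(Fraction(0), Fraction(1))]    # I_0 = [0, 1]
for n in (1, 2):
    I = cantor_step(I)
    print(f"I_{n} =", [(str(a), str(b)) for a, b in I])
# I_1 = [0,1/3] ∪ [2/3,1];  I_2 = [0,1/9] ∪ [2/9,1/3] ∪ [2/3,7/9] ∪ [8/9,1]
```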

Barnsley (1988, pp 86-87) gives his notation for an IFS of [affine] contraction maps as

$$w_i(x) = w_i \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} a_i & b_i \\ c_i & d_i \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} e_i \\ f_i \end{bmatrix} = A_i x + t_i.$$

Here are a few examples of IFS codes that generate different images. Barnsley also introduces a p element that stands for the probability for each contraction map to occur.

Table 1: IFS code for a Sierpinski triangle

  w     a    b    c     d     e     f     p
  1   0.5    0    0   0.5     1     1   0.33
  2   0.5    0    0   0.5     1    50   0.33
  3   0.5    0    0   0.5    50    50   0.34

Table 2: IFS code for a Fern

  w      a      b      c      d    e      f      p
  1      0      0      0   0.16    0      0   0.01
  2   0.85   0.04  -0.04   0.85    0    1.6   0.85
  3    0.2  -0.26   0.23   0.22    0    1.6   0.07
  4  -0.15   0.28   0.26   0.24    0   0.44   0.07

Figure 5: IFS Code Images (Sierpinski Triangle and Fern). Images from: http://bookwormlaser.com/wp-content/uploads/2012/06/fractals-sierpinski-triangle-5.jpg and http://www.pvv.ntnu.no/~andersr/fractal/ps/fern.gif
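The p column drives what is often called the "chaos game": repeatedly pick one of the maps at random, with probability p, and apply it to the current point. A minimal Python sketch of this rendering scheme using the Table 2 coefficients (the plotting step is omitted; the returned points can be handed to any plotting library):

```python
import random

# Rows (a, b, c, d, e, f, p) from Table 2; each map acts as
# w_i(x, y) = (a*x + b*y + e, c*x + d*y + f) and is chosen with probability p.
FERN = [
    ( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
    ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
    ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
    (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
]

def chaos_game(maps, n_points=50_000):
    """Random iteration: after a short burn-in, the orbit of any starting
    point falls (approximately) onto the attractor of the IFS."""
    x, y = 0.0, 0.0
    weights = [row[6] for row in maps]
    points = []
    for i in range(n_points):
        a, b, c, d, e, f, _p = random.choices(maps, weights=weights)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        if i > 20:                      # discard the burn-in iterates
            points.append((x, y))
    return points

points = chaos_game(FERN)               # scatter-plot these to see the fern
```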

We will be doing most of our remaining work in a metric space called H(X), so it is defined here, along with its metric, h.

Definition 9. Let ( X,d ) be a complete metric space. Then H(X) denotes the space whose points are the compact subsets of X, other than the empty set. (Barnsley 1988 p 30)

Definition 10. The infimum (or inf) is the greatest lower bound (glb) of a set S, defined as a quantity m such that no member of the set is less than m, but if ε is any positive quantity, however small, there is always one member that is less than m + ε. (Wolfram Alpha)

Definition 11. The supremum (or sup) is the least upper bound (lub) of a set S, defined as a quantity M such that no member of the set exceeds M, but if ε is any positive quantity, however small, there is a member that exceeds M − ε. (Wolfram Alpha)

Definition 12. The Hausdorff Metric on a compact metric space H(X) is defined as

$$h(A,B) = \max\left\{ \sup_{x\in A}\left[\inf_{y\in B} d(x,y)\right],\ \sup_{x\in B}\left[\inf_{y\in A} d(x,y)\right] \right\}$$

for A, B compact subsets of X. (Barnsley, Ervin, Hardin, Lancaster 1986 p 1975)

Proof. It is necessary to prove that h is a metric on our space H(X). (Recall the properties of a metric from Definition 1.) Let $A, B, C \in H(X)$ (by definition, they are compact). It is clear that

$$h(A,A) = \max\{d(A,A),\ d(A,A)\} = d(A,A) = \max\{d(x,A) \mid x \in A\} = 0.$$

Let $x \in A$, $x \notin B$. Then $d(x,B) = \inf_{y\in B}\{d(x,y)\} > 0$. Similarly, for $x \in B$, $x \notin A$, we have $d(x,A) = \inf_{y\in A}\{d(x,y)\} > 0$. This implies h(A,B) > 0 whenever A ≠ B, which satisfies part (i) of Definition 1. (Part (ii), symmetry, is immediate, since the definition of h treats A and B symmetrically.) To show the triangle inequality, h(A,B) ≤ h(A,C) + h(C,B), we first show that d(A,B) ≤ d(A,C) + d(C,B). For any $x \in A$,

$$\begin{aligned}
d(x,B) &= \min_{y\in B}\{d(x,y)\} \\
&\le \min_{y\in B}\{d(x,z) + d(z,y)\} \quad \text{for all } z \in C \\
&= d(x,z) + \min_{y\in B}\{d(z,y)\} \quad \text{for all } z \in C, \text{ so} \\
d(x,B) &\le \min_{z\in C}\{d(x,z)\} + \max_{z\in C}\left\{\min_{y\in B}\{d(z,y)\}\right\} \\
&= d(x,C) + d(C,B), \text{ and hence} \\
d(A,B) &\le d(A,C) + d(C,B).
\end{aligned}$$

Similarly, $d(B,A) \le d(B,C) + d(C,A)$, whence

$$h(A,B) = \max\{d(A,B),\ d(B,A)\} \le \max\{d(A,C),\ d(C,A)\} + \max\{d(C,B),\ d(B,C)\} = h(A,C) + h(C,B).$$

This satisfies part (iii) of Definition 1. Thus h is a metric. (Barnsley 1988 p 33)

What the Hausdorff Metric allows us to do is take into account the shape and orientation of the two sets whose distance we are finding (as opposed to the usual metric, which takes only the distance between two points in those sets). It takes the smallest distance (the inf) from each x in A to any y in B; then, from those shortest distances (from every x to its closest y), it chooses the longest (the sup). It then does the same thing for x in B and y in A, and takes the greater of the two. This metric requires that both A and B be compact. A small computational sketch is given below.
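For finite point sets, the sup and inf become max and min, so the definition can be computed directly. A minimal Python sketch (the two point sets are small examples of our own, with d the usual Euclidean metric):

```python
import math

def hausdorff(A, B):
    """h(A,B) = max( sup_{x in A} inf_{y in B} d(x,y),
                     sup_{x in B} inf_{y in A} d(x,y) ) for finite A, B."""
    d_AB = max(min(math.dist(x, y) for y in B) for x in A)  # farthest A-point from B
    d_BA = max(min(math.dist(x, y) for x in A) for y in B)  # farthest B-point from A
    return max(d_AB, d_BA)

A = [(0, 0), (1, 0), (0, 1)]
B = [(0, 0), (1, 0), (5, 1)]
print(hausdorff(A, B))   # ~4.12, driven by (5,1), which is far from all of A
```

With the metric in hand, we can say the following.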

Lemma 1. Let $A, B \in H(X)$. Given any $a \in A$, there exists a $b \in B$ such that d(a,b) ≤ h(A,B). In other words, if h(A,B) ≤ ε with ε > 0, then $A \subset B + \varepsilon$ and $B \subset A + \varepsilon$.

Proof. Suppose d(A,B) ≤ ε. Then $\max\{d(a,B) \mid a \in A\} \le \varepsilon$, which implies that, for any $a \in A$, d(a,B) ≤ ε, and thus $a \in B + \varepsilon$. Hence $A \subset B + \varepsilon$. A similar argument, starting from d(B,A) ≤ ε, shows that $B \subset A + \varepsilon$.

To check this on a small set, one could calculate the distances by hand and draw a circle of radius h(A,B) around the farthest element of A from B and the farthest element of B from A. Figures 6 and 7 below perhaps better demonstrate this idea. Call the red triangle A and the blue triangle B. What these figures show is that, though the metric d(A,B) stays the same between Figure 6 and Figure 7, the Hausdorff distance changes, exemplifying the power of the Hausdorff Metric. It is able to take into account the orientation of the two sets, which will be very useful when we use the Collage Theorem to approximate an image. This way, we can make sure that the image generated by the IFS is close to the original.

Figure 6: Hausdorff Distance Between 2 Triangles (Position 1)

Figure 7: Hausdorff Distance Between 2 Triangles (Position 2)

Images from: http://cgm.cs.mcgill.ca/~godfried/teaching/cg-projects/98/normand/main.html

H(X) is also known as “The Space of Fractals.” Before we continue, it is necessary to mention that H(X) is, indeed, complete.

Theorem 2. (The Completeness of the Space of Fractals) Let ( X,d ) be a complete metric space. Then (H(X), h) is a complete metric space. Moreover, if $\{A_n \in H(X)\}$, n = 1, 2, …, is a Cauchy sequence, then

$$A = \lim_{n\to\infty} A_n \in H(X)$$

can be characterized as follows:

$$A = \{x \in X \mid \text{there is a Cauchy sequence } \{x_n \in A_n\} \text{ that converges to } x\}.$$

(Barnsley 1988 pp 37-38)

The proof of this theorem is rather extensive, and I do not claim a complete understanding of its finer details. However, Barnsley gives an elegant proof of the completeness of H(X) on pages 38 and 39 of his text. I will outline the main points that must be proved, as given by Barnsley, and remark that he does indeed show what is desired.

Proof. (Outline) Let {Aₙ} be a Cauchy sequence in H(X) and let A be defined as in the statement of the theorem. The proof proceeds in the following steps:

(a) A ≠ Ø;

(b) A is closed and hence complete since X is complete;

(c) for ε > 0 there is an N such that for n ≥ N, A is contained in Aₙ + ε;

(d) A is totally bounded and thus by (b) compact;

(e) lim(An) = A. (Barnsley 1988 p 38).

The following lemma, The Extension Lemma, is used throughout the proof, and is given here, along with the concept of a space being not just bounded, but totally bounded.

Lemma 2. (The Extension Lemma) Let ( X,d ) be a metric space. Let {Aₙ | n = 1, 2, …} be a Cauchy sequence of points in (H(X), h). Let {nⱼ | j = 1, 2, …} be an infinite sequence of integers with 0 < n₁ < n₂ < n₃ < …. Suppose that we have a Cauchy sequence $\{x_{n_j} \in A_{n_j} \mid j = 1, 2, 3, \ldots\}$ in ( X,d ). Then there is a Cauchy sequence $\{\tilde{x}_n \in A_n \mid n = 1, 2, \ldots\}$ such that $\tilde{x}_{n_j} = x_{n_j}$ for all j = 1, 2, 3, …. (Barnsley 1988 p 36)

Definition 13. Let $S \subset X$ be a subset of the metric space ( X,d ). S is bounded if there is a point $a \in X$ and a number R > 0 such that d(a,x) < R for all $x \in S$. (Barnsley 1988 p 20)

Definition 14. Let $S \subset X$ be a subset of the metric space ( X,d ). S is totally bounded if, for each ε > 0, there is a finite set of points $\{y_1, y_2, \ldots, y_n\} \subset S$ such that whenever $x \in S$, $d(x, y_i) < \varepsilon$ for some $y_i \in \{y_1, y_2, \ldots, y_n\}$. This set of points $\{y_1, y_2, \ldots, y_n\}$ is called an ε-net. (Barnsley 1988 p 20)

Lemma 3. For all B, C, D, and E in H(X),

$$h(B \cup C,\ D \cup E) \le \max\{h(B,D),\ h(C,E)\},$$

where h is the Hausdorff metric.

Lemma 4. Let ( X,d ) be a metric space. Let {wₙ : n = 1, 2, …, N} be contraction mappings on (H(X), h). Let the contractivity factor for wₙ be denoted by cₙ for each n. Define W : H(X) → H(X) by

$$W(B) = w_1(B) \cup w_2(B) \cup \cdots \cup w_N(B) = \bigcup_{n=1}^{N} w_n(B), \quad \text{for each } B \in H(X).$$

Then W is a contraction mapping with contractivity factor c = max{cₙ : n = 1, 2, …, N}.

Proof. Proving this inductively, we begin with the case N = 2. Let $B, C \in H(X)$. Then

$$\begin{aligned}
h(W(B), W(C)) &= h(w_1(B) \cup w_2(B),\ w_1(C) \cup w_2(C)) \\
&\le \max\{h(w_1(B), w_1(C)),\ h(w_2(B), w_2(C))\} \quad \text{(by Lemma 3)} \\
&\le \max\{c_1 h(B,C),\ c_2 h(B,C)\} \le c\, h(B,C).
\end{aligned}$$

The general case follows by induction on N. This completes the proof. (Barnsley 1988 p 81)

And now that we have shown that W is, indeed, a contraction mapping on H(X), we can define an attractor on H(X).

Definition 15. An attractor is the unique fixed point $A \in H(X)$ of W : H(X) → H(X), which obeys

$$A = W(A) = \bigcup_{n=1}^{N} w_n(A)$$

and is given by

$$A = \lim_{n\to\infty} W^{\circ n}(B) \quad \text{for any } B \in H(X),$$

where W is the transformation W : H(X) → H(X) defined by

$$W(B) = \bigcup_{n=1}^{N} w_n(B) \quad \text{for all } B \in H(X).$$

(Barnsley 1988 p 82)

An attractor of an IFS is unique, meaning that under that IFS it does not matter what the initial set looks like; eventually the IFS will lead to its attractor. Some IFSs, such as the one for the Sierpinski Triangle, approach the attractor very quickly. Others are more complex and take some time to reach the attractor. The sketch below illustrates this independence of the starting set.
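This can be seen numerically by iterating the map W of Lemma 4 on two very different starting sets. A minimal Python sketch, assuming a Sierpinski-type IFS of three maps wₙ(x, y) = (x/2 + eₙ, y/2 + fₙ) on the unit square (our own rescaling, for illustration, of the idea behind Table 1):

```python
# Deterministic iteration of W(B) = w1(B) ∪ w2(B) ∪ w3(B) for a
# Sierpinski-type IFS on the unit square.
SHIFTS = [(0.0, 0.0), (0.5, 0.0), (0.25, 0.5)]

def W(points):
    """Apply every map to every point: one step of A = lim W^n(B)."""
    return {(x / 2 + e, y / 2 + f) for (x, y) in points for (e, f) in SHIFTS}

B1 = {(0.9, 0.9)}                                              # a single point
B2 = {(i / 10, j / 10) for i in range(11) for j in range(11)}  # a filled grid
for _ in range(6):
    B1, B2 = W(B1), W(B2)

# Since each map halves distances, both iterates now lie within about 2^-6
# of the Sierpinski attractor, however different the starting sets were.
print(len(B1), len(B2))
```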

There is an immediate, and important, consequence of the Contraction Mapping Theorem which will help us reach our ultimate goal of the Collage Theorem.

Corollary of Theorem 1. Given an Iterated Function System, there exists a unique attractor A.

Now that we have a solid background on contraction mappings and iterated function systems, we can finally discuss the Collage Theorem.

Theorem 3. Let ( X,d ) be a complete metric space. Let $L \in H(X)$ be given, and let ε ≥ 0 be given. Choose an Iterated Function System { X; (w₀), w₁, w₂, …, w_N } with contractivity factor 0 ≤ c < 1, so that

$$h\left(L,\ \bigcup_{n=1}^{N} w_n(L)\right) \le \varepsilon,$$

where h is the Hausdorff Metric. Then

$$h(L, A) \le \frac{\varepsilon}{1-c},$$

where A is the attractor of the IFS. Equivalently,

$$h(L, A) \le (1-c)^{-1}\, h\left(L,\ \bigcup_{n=1}^{N} w_n(L)\right) \quad \text{for all } L \in H(X).$$

(Barnsley 1988 pp 96-97)

The proof of the Collage Theorem is simply the proof of the following Lemma.

Lemma 5. Let ( X,d ) be a complete metric space. Let f : X → X be a contraction mapping with contractivity factor 0 ≤ c < 1, and let the fixed point of f be $x_f \in X$. Then

$$d(x, x_f) \le \frac{d(x, f(x))}{1-c} \quad \text{for all } x \in X.$$

Proof. The distance function d(a,b), for fixed $a \in X$, is continuous in $b \in X$. Hence

$$\begin{aligned}
d(x, x_f) &= d\left(x, \lim_{n\to\infty} f^{\circ n}(x)\right) = \lim_{n\to\infty} d(x, f^{\circ n}(x)) \\
&\le \lim_{n\to\infty} \sum_{m=1}^{n} d\left(f^{\circ(m-1)}(x),\ f^{\circ m}(x)\right) \\
&\le \lim_{n\to\infty} d(x, f(x))\left(1 + c + \cdots + c^{n-1}\right) \\
&\le \frac{d(x, f(x))}{1-c}.
\end{aligned}$$

This completes the proof. (Barnsley 1988 p 111).
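Lemma 5 is easy to check numerically for a one-dimensional contraction. A minimal sketch, assuming as an example f(x) = cos(x)/2, which is a contraction on ℝ with factor c = 1/2 since |f′(x)| = |sin(x)|/2 ≤ 1/2:

```python
import math

c = 0.5
f = lambda x: math.cos(x) / 2

# Locate the fixed point first, by the iteration from Theorem 1.
x_f = 0.0
for _ in range(200):
    x_f = f(x_f)

# Lemma 5: d(x, x_f) <= d(x, f(x)) / (1 - c) for every x.
for x in [-3.0, 0.0, 1.0, 10.0]:
    lhs = abs(x - x_f)              # distance to the fixed point
    rhs = abs(x - f(x)) / (1 - c)   # the collage-style bound
    print(f"x={x:5.1f}  d(x,x_f)={lhs:7.4f}  bound={rhs:7.4f}  ok={lhs <= rhs}")
```

This is exactly the estimate the Collage Theorem applies in (H(X), h): if a set L is moved only a little by the union of the maps, then L must be close to the attractor.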

This theorem combines everything that has been outlined in the previous pages, allowing us to use IFSs to approximate images. In the 1980s and early 1990s, when computers were still young and primitive compared to the ones we have today, this idea of saving an image as a set of functions was groundbreaking. While the material outlined here only gives us a way to approximate a still image, it is also possible to approximate a video by continuously varying the attractors. It appears, however, that approximating images by fractals is useful for only certain types of images, while more contemporary JPEG, JPEG 2000, and lossy/lossless wavelet image compression have become the standard.

References

Barnsley, Michael F. (1988). Fractals Everywhere. Specific pages used are referenced in text.

Barnsley, M. F., Ervin, V., Hardin, D., Lancaster, J. (1986). Solutions of an Inverse Problem for Fractals and Other Sets. Proceedings of the National Academy of Sciences of the United States of America, Vol. 83, No. 7, p. 1975.

Goodman, Noah. (1996). Iterated Function Systems. http://www.geom.uiuc.edu/java/IFSoft/IFSs/

Hunter, John K., Nachtergaele, Bruno. (2001). Applied Analysis. pp. 61-63.

Wolfram Alpha (2012). Compact Metric Space. http://www.wolframalpha.com/input/?i=compact+metric+space

Wolfram Alpha (2012). Complete Metric Space. http://www.wolframalpha.com/input/?i=complete+metric+space

Wolfram Alpha (2012). Infimum. http://www.wolframalpha.com/input/?i=infimum

Wolfram Alpha (2012). Supremum. http://www.wolframalpha.com/input/?i=supremum