A Neural Transfer Function for a Smooth and Differentiable Transition Between Additive and Multiplicative Interactions


A Neural Transfer Function for a Smooth and Differentiable Transition Between Additive and Multiplicative Interactions

Sebastian Urban [email protected]
Institut für Informatik VI, Technische Universität München, Boltzmannstr. 3, 85748 Garching, Germany

Patrick van der Smagt [email protected]
fortiss | An-Institut der Technischen Universität München, Guerickestr. 25, 80805 München, Germany

arXiv:1503.05724v3 [stat.ML] 29 Mar 2016

Abstract

Existing approaches to combine both additive and multiplicative neural units either use a fixed assignment of operations or require discrete optimization to determine what function a neuron should perform. This leads either to an inefficient distribution of computational resources or an extensive increase in the computational complexity of the training procedure. We present a novel, parameterizable transfer function based on the mathematical concept of non-integer functional iteration that allows the operation each neuron performs to be smoothly and, most importantly, differentiably adjusted between addition and multiplication. This allows the decision between addition and multiplication to be integrated into the standard backpropagation training procedure.

1. Introduction

In commonplace artificial neural networks (ANNs) the value of a neuron is given by a weighted sum of its inputs propagated through a non-linear transfer function. For illustration let us consider a simple neural network with multidimensional input and multivariate output. The input layer should be called x and the outputs y. Then the value of neuron y_i is

    y_i = \sigma\Bigl( \sum_j W_{ij} x_j \Bigr)    (1)

The typical choice for the transfer function σ(t) is the sigmoid function σ(t) = 1/(1 + e^{-t}) or an approximation thereof. Matrix multiplication is used to jointly compute the values of all neurons in one layer more efficiently; we have

    y = \sigma(W x)    (2)

where the transfer function is applied element-wise, σ(t)_i = σ(t_i). In the context of this paper we will call such networks additive ANNs.

(Hornik et al., 1989) showed that additive ANNs with at least one hidden layer and a sigmoidal transfer function are able to approximate any function arbitrarily well given a sufficient number of hidden units. Even though an additive ANN is a universal function approximator, there is no guarantee that it can approximate a function efficiently. If the architecture is not a good match for a particular problem, a very large number of neurons is required to obtain acceptable results.

(Durbin & Rumelhart, 1989) proposed an alternative neural unit in which the weighted summation is replaced by a product, where each input is raised to a power determined by its corresponding weight. The value of such a product unit is given by

    y_i = \sigma\Bigl( \prod_j x_j^{W_{ij}} \Bigr)    (3)

Using laws of the exponential function this can be written as y_i = σ[exp(\sum_j W_{ij} \log x_j)], and thus the values of a layer can also be computed efficiently using matrix multiplication, i.e.

    y = \sigma(\exp(W \log x))    (4)

where exp and log are taken element-wise. Since in general the incoming values x can be negative, the complex exponential and logarithm are used. Often no non-linearity is applied to the output of the product unit.

1.1. Hybrid summation-multiplication networks

Both types of neurons can be combined in a hybrid summation-multiplication network. Yet this poses the problem of how to distribute additive and multiplicative units over the network, i.e. how to determine whether a specific neuron should be an additive or multiplicative unit to obtain the best results. A simple solution is to stack alternating layers of additive and product units, optionally with additional connections that skip over a product layer, so that each additive layer receives inputs from both the product layer and the additive layer beneath it. The drawback of this approach is that the resulting uniform distribution of product units will hardly be ideal.

A more adaptive approach is to learn the function of each neural unit from provided training data. However, since addition and multiplication are different operations, until now there was no obvious way to determine the best operation during training of the network using standard neural network optimization methods such as backpropagation. An iterative algorithm to determine the optimal allocation could have the following structure: for initialization, randomly choose the operation each neuron performs. Train the network by minimizing the error function and then evaluate its performance on a validation set. Based on the performance, determine a new allocation using a discrete optimization algorithm (such as particle swarm optimization or genetic algorithms). Iterate the process until satisfactory performance is achieved. The drawback of this method is its computational complexity: to evaluate one allocation of operations the whole network must be trained, which takes from minutes to hours for moderately sized problems.

Here we propose an alternative approach, where the distinction between additive and multiplicative neurons is not discrete but continuous and differentiable. Hence the optimal distribution of additive and multiplicative units can be determined during standard gradient-based optimization. Our approach is organized as follows: first, we introduce non-integer iterates of the exponential function in the real and complex domains. We then use these iterates to smoothly interpolate between addition (1) and multiplication (3). Finally, we show how this interpolation can be integrated and implemented in neural networks.

2. Iterates of the exponential function

2.1. Functional iteration

Let f : C → C be an invertible function. For n ∈ Z we write f^{(n)} for the n-times iterated application of f,

    f^{(n)}(z) = (f \circ f \circ \dots \circ f)(z)  (n times)    (5)

Further let f^{(-n)} = (f^{-1})^{(n)}, where f^{-1} denotes the inverse of f. We set f^{(0)}(z) = z to be the identity function. It can be easily verified that functional iteration with respect to the composition operator, i.e.

    f^{(n)} \circ f^{(m)} = f^{(n+m)}    (6)

for n, m ∈ Z, forms an Abelian group.

Equation (5) cannot be used to define functional iteration for non-integer n. Thus, in order to calculate non-integer iterations of a function, we have to find an alternative definition. The sought generalization should also extend the additive property (6) of the composition operation to non-integer n, m ∈ R.

2.2. Abel's functional equation

Consider the following functional equation given by (Abel, 1826),

    \psi(f(x)) = \psi(x) + \beta    (7)

with constant β ∈ C. We are concerned with f(x) = exp(x). A continuously differentiable solution for β = 1 and x ∈ R is given by

    \psi(x) = \log^{(k)}(x) + k    (8)

with k ∈ N s.t. 0 ≤ log^{(k)}(x) < 1. Note that for x < 0 we have k = -1 and thus ψ is well defined on the whole of R. The function ψ is shown in Fig. 1. Since ψ : R → (-1, ∞) is strictly increasing, the inverse ψ^{-1} : (-1, ∞) → R exists and is given by

    \psi^{-1}(\psi) = \exp^{(k)}(\psi - k)    (9)

with k ∈ N s.t. 0 ≤ ψ - k < 1. For practical reasons we set ψ^{-1}(ψ) = -∞ for ψ ≤ -1. The derivative of ψ is given by

    \psi'(x) = \prod_{j=0}^{k-1} \frac{1}{\log^{(j)}(x)}    (10a)

with k ∈ N s.t. 0 ≤ log^{(k)}(x) < 1, and the derivative of its inverse is

    \bigl(\psi^{-1}\bigr)'(\psi) = \prod_{j=0}^{k-1} \psi^{-1}(\psi - j)    (10b)

with k ∈ N s.t. 0 ≤ ψ - k < 1.

[Figure 1. A continuously differentiable solution ψ(x) to Abel's equation (7) for the exponential function in the real domain.]

2.2.1. Non-integer iterates using Abel's equation

By inspection of Abel's equation (7), we see that the n-th iterate of the exponential function can be written as

    \exp^{(n)}(x) = \psi^{-1}(\psi(x) + n)    (11)

While this equation is equivalent to (5) for integer n, we are now also free to choose n ∈ R, and thus (11) can be seen as a generalization of functional iteration to non-integer iterates. It can easily be verified that the composition property (6) holds. Hence we can understand the function φ(x) = exp^{(1/2)}(x) as the function that gives the exponential function when applied to itself; φ is called the functional square root of exp.

[Figure 2. The iterates exp^{(n)}(x), plotted for n = 1 and n = -1.]

2.3. Schröder's functional equation

A related functional equation was given by (Schröder, 1870),

    \chi(f(x)) = \gamma\,\chi(x)    (13)

with constant γ ∈ C. As before we are interested in solutions of this equation for f(x) = exp(x); we have

    \chi(\exp(z)) = \gamma\,\chi(z)    (14)

but now we are considering the complex exp : C → C. The complex exponential function is not injective, since

    \exp(z + 2\pi n i) = \exp(z), \quad n \in \mathbb{Z}

Thus the imaginary part of the codomain of its inverse, i.e. the complex logarithm, must be restricted to an interval of size 2π. Here we define log : C → {z ∈ C : β ≤ Im z < β + 2π} with β ∈ R. For now let us consider the principal branch of the logarithm, that is β = -π.

To derive a solution, we examine the behavior of exp around one of its fixed points. A fixed point of a function f is a point c with the property that f(c) = c. The exponential function has an infinite number of fixed points. Here we select the fixed point closest to the real axis in the upper complex half plane.
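To make equations (1)–(4) concrete, here is a short NumPy sketch (our illustration, not code from the paper; the array shapes and random test inputs are our own choices). It evaluates one layer of additive units and one layer of product units, using the complex logarithm and exponential so that negative inputs are handled as described above:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))          # weight matrix
x = rng.normal(size=4)               # inputs; may be negative

# Additive units, eq. (1)/(2): weighted sum through a sigmoid
def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

y_add = sigmoid(W @ x)

# Product units, eq. (3)/(4): y_i = prod_j x_j^{W_ij} = exp(W log x),
# computed with the complex exp/log because x may be negative
xc = x.astype(complex)
y_prod = np.exp(W @ np.log(xc))

# The matrix form (4) agrees with the explicit product (3)
y_explicit = np.array([np.prod(xc ** W[i]) for i in range(W.shape[0])])
assert np.allclose(y_prod, y_explicit)
```

Note that `np.log` on a complex array returns the principal branch of the logarithm, corresponding to the choice β = -π mentioned in the text.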
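The real-domain construction of Section 2.2 can likewise be sketched in a few lines of Python (an illustration under our own conventions, not the authors' implementation): ψ from eq. (8), its inverse from eq. (9), the non-integer iterate exp^{(n)} from eq. (11), and the resulting smooth operation that recovers addition at n = 0 and multiplication at n = 1.

```python
import math

def psi(x):
    """Abel function for exp, eq. (8): psi(x) = log^(k)(x) + k with 0 <= log^(k)(x) < 1."""
    if x < 0:
        return math.exp(x) - 1.0       # k = -1 branch: log^(-1) = exp
    k = 0
    while x >= 1.0:
        x = math.log(x)
        k += 1
    return x + k

def psi_inv(y):
    """Inverse of psi, eq. (9); -infinity for y <= -1 by the paper's convention."""
    if y <= -1.0:
        return float("-inf")
    k = math.floor(y)                  # so that 0 <= y - k < 1
    x = y - k
    if k == -1:
        return math.log(x)             # one application of exp^(-1) = log
    for _ in range(k):
        x = math.exp(x)
    return x

def exp_n(x, n):
    """Non-integer iterate exp^(n)(x) = psi_inv(psi(x) + n), eq. (11)."""
    return psi_inv(psi(x) + n)

# Integer iterates agree with the classical definition (5)
assert abs(exp_n(0.5, 1) - math.exp(0.5)) < 1e-9
assert abs(exp_n(0.5, -1) - math.log(0.5)) < 1e-9

# Composition property (6): exp^(1/2) is the functional square root of exp
half = exp_n(0.5, 0.5)
assert abs(exp_n(half, 0.5) - math.exp(0.5)) < 1e-9

# Interpolated operation: n = 0 gives a + b, n = 1 gives a * b
def op(a, b, n):
    return exp_n(exp_n(a, -n) + exp_n(b, -n), n)

assert abs(op(2.0, 3.0, 0.0) - 5.0) < 1e-9   # addition
assert abs(op(2.0, 3.0, 1.0) - 6.0) < 1e-9   # multiplication
```

At intermediate n the same formula yields a smooth deformation of addition into multiplication, which is the kind of interpolation the paper builds its transfer function from.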
Recommended publications
  • Remarks About the Square Equation. Functional Square Root of a Logarithm
    MATHEMATICAL ECONOMICS No. 12(19) 2016 REMARKS ABOUT THE SQUARE EQUATION. FUNCTIONAL SQUARE ROOT OF A LOGARITHM Arkadiusz Maciuk, Antoni Smoluk Abstract: The study shows that the functional equation f (f (x)) = ln(1 + x) has a unique solution in a semigroup of power series with the intercept equal to 0 and the function composition as an operation. This function is continuous, as per the work of Paulsen [2016]. This solution introduces into statistics the law of the one-and-a-half logarithm. Sometimes the natural growth processes do not yield to the law of the logarithm, and a double logarithm weakens the growth too much. The law of the one-and-a-half logarithm proposed in this study might be the solution. Keywords: functional square root, fractional iteration, power series, logarithm, semigroup. JEL Classification: C13, C65. DOI: 10.15611/me.2016.12.04. 1. Introduction If the natural logarithm of a random variable has normal distribution, then one considers it a logarithmic law – with lognormal distribution. If the function f composed with itself is a natural logarithm, that is f (f (x)) = ln(1 + x), then the function g(x) = f (ln(x)) applied to a random variable will give a new distribution law, which – similarly as the law of logarithm – is called the law of the one-and-a-half logarithm. A square root can be defined in any semigroup G. A semigroup is a non-empty set with one combined operation and a neutral element.
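The power-series setting above can be made concrete numerically. The following sketch (ours, not from the article) computes the functional square root of ln(1 + x) coefficient by coefficient in the semigroup of truncated power series with zero intercept, taking the branch with f'(0) = 1 (the continuous solution per Paulsen [2016] mentioned above). The key fact is that, once a_1 = 1 is fixed, the x^n coefficient of f(f(x)) equals 2·a_n plus terms involving only lower-order coefficients:

```python
def pmul(a, b, N):
    """Product of two power series given as coefficient lists, truncated at order N."""
    c = [0.0] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                c[i + j] += ai * bj
    return c

def pcompose(a, b, N):
    """Composition a(b(x)) truncated at order N; requires b[0] == 0."""
    c = [0.0] * (N + 1)
    c[0] = a[0]
    power = [1.0] + [0.0] * N          # b(x)^0
    for k in range(1, N + 1):
        power = pmul(power, b, N)      # b(x)^k
        for i in range(N + 1):
            c[i] += a[k] * power[i]
    return c

N = 6
g = [0.0] + [(-1.0) ** (n + 1) / n for n in range(1, N + 1)]   # ln(1+x)
f = [0.0, 1.0] + [0.0] * (N - 1)                               # start from f(x) = x
for n in range(2, N + 1):
    h = pcompose(f, f, N)
    f[n] = (g[n] - h[n]) / 2.0   # x^n coefficient of f(f(x)) is 2*a_n + known terms

# f(x) = x - x^2/4 + 5 x^3/48 - ... ; check f(f(x)) = ln(1+x) up to order N
h = pcompose(f, f, N)
assert all(abs(h[n] - g[n]) < 1e-12 for n in range(N + 1))
assert abs(f[2] + 0.25) < 1e-12
assert abs(f[3] - 5.0 / 48.0) < 1e-12
```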
  • Homeomorphisms Group of Normed Vector Space: Conjugacy Problems and the Koopman Operator
    HOMEOMORPHISMS GROUP OF NORMED VECTOR SPACE: CONJUGACY PROBLEMS AND THE KOOPMAN OPERATOR MICKAËL D. CHEKROUN AND JEAN ROUX Abstract. This article is concerned with conjugacy problems arising in the homeomorphisms group, Hom(F), of unbounded subsets F of normed vector spaces E. Given two homeomorphisms f and g in Hom(F), it is shown how the existence of a conjugacy may be related to the existence of a common generalized eigenfunction of the associated Koopman operators. This common eigenfunction serves to build a topology on Hom(F), where the conjugacy is obtained as limit of a sequence generated by the conjugacy operator, when this limit exists. The main conjugacy theorem is presented in a class of generalized Lipeomorphisms. 1. Introduction In this article we consider the conjugacy problem in the homeomorphisms group of a finite dimensional normed vector space E. It is out of the scope of the present work to review the problem of conjugacy in general, and the reader may consult for instance [13, 16, 29, 33, 26, 42, 45, 51, 52] and references therein, to get a partial survey of the question from a dynamical point of view. The present work raises the problem of conjugacy in the group Hom(F) consisting of homeomorphisms of an unbounded subset F of E and is intended to demonstrate how the conjugacy problem, in such a case, may be related to spectral properties of the associated Koopman operators. In this sense, this paper provides new insights on the relations between the spectral theory of dynamical systems [5, 17, 23, 36] and the topological conjugacy problem [51, 52].
  • Iterated Function Systems, Ruelle Operators, and Invariant Projective Measures
    MATHEMATICS OF COMPUTATION Volume 75, Number 256, October 2006, Pages 1931–1970 S 0025-5718(06)01861-8 Article electronically published on May 31, 2006 ITERATED FUNCTION SYSTEMS, RUELLE OPERATORS, AND INVARIANT PROJECTIVE MEASURES DORIN ERVIN DUTKAY AND PALLE E. T. JORGENSEN Abstract. We introduce a Fourier-based harmonic analysis for a class of discrete dynamical systems which arise from Iterated Function Systems. Our starting point is the following pair of special features of these systems. (1) We assume that a measurable space X comes with a finite-to-one endomorphism r : X → X which is onto but not one-to-one. (2) In the case of affine Iterated Function Systems (IFSs) in R^d, this harmonic analysis arises naturally as a spectral duality defined from a given pair of finite subsets B, L in R^d of the same cardinality which generate complex Hadamard matrices. Our harmonic analysis for these iterated function systems (IFS) (X, µ) is based on a Markov process on certain paths. The probabilities are determined by a weight function W on X. From W we define a transition operator R_W acting on functions on X, and a corresponding class H of continuous R_W-harmonic functions. The properties of the functions in H are analyzed, and they determine the spectral theory of L²(µ). For affine IFSs we establish orthogonal bases in L²(µ). These bases are generated by paths with infinite repetition of finite words. We use this in the last section to analyze tiles in R^d. 1. Introduction One of the reasons wavelets have found so many uses and applications is that they are especially attractive from the computational point of view.
  • Recursion, Writing, Iteration. a Proposal for a Graphics Foundation of Computational Reason Luca M
    Recursion, Writing, Iteration. A Proposal for a Graphics Foundation of Computational Reason. Luca M. Possati. To cite this version: Luca M. Possati. Recursion, Writing, Iteration. A Proposal for a Graphics Foundation of Computational Reason. 2015. hal-01321076. HAL Id: hal-01321076 https://hal.archives-ouvertes.fr/hal-01321076 Preprint submitted on 24 May 2016. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Recursion, Writing, Iteration. A Proposal for a Graphics Foundation of Computational Reason. Luca M. Possati. In this paper we present a set of philosophical analyses to defend the thesis that computational reason is founded in writing; only what can be written is computable. We will focus on the relations among three main concepts: recursion, writing and iteration. The most important questions we will address are: • What does it mean to compute something? • What is a recursive structure? • Can we clarify the nature of recursion by investigating writing? • What kind of identity is presupposed by a recursive structure and by computation? Our theoretical path will lead us to a radical revision of the philosophical notion of identity. The act of iterating is rooted in an abstract space – we will try to outline a topological description of iteration.
  • Neural Network Architectures and Activation Functions: a Gaussian Process Approach
    Fakultät für Informatik. Neural Network Architectures and Activation Functions: A Gaussian Process Approach. Sebastian Urban. Complete reprint of the dissertation approved by the Fakultät für Informatik of the Technische Universität München for the award of the academic degree of Doktor der Naturwissenschaften (Dr. rer. nat.). Chair: Prof. Dr. rer. nat. Stephan Günnemann. Examiners of the dissertation: 1. Prof. Dr. Patrick van der Smagt, 2. Prof. Dr. rer. nat. Daniel Cremers, 3. Prof. Dr. Bernd Bischl, Ludwig-Maximilians-Universität München. The dissertation was submitted to the Technische Universität München on 22.11.2017 and accepted by the Fakultät für Informatik on 14.05.2018. Neural Network Architectures and Activation Functions: A Gaussian Process Approach. Sebastian Urban. Technical University Munich, 2017. Abstract: The success of applying neural networks crucially depends on the network architecture being appropriate for the task. Determining the right architecture is a computationally intensive process, requiring many trials with different candidate architectures. We show that the neural activation function, if allowed to individually change for each neuron, can implicitly control many aspects of the network architecture, such as effective number of layers, effective number of neurons in a layer, skip connections and whether a neuron is additive or multiplicative. Motivated by this observation we propose stochastic, non-parametric activation functions that are fully learnable and individual to each neuron. Complexity and the risk of overfitting are controlled by placing a Gaussian process prior over these functions. The result is the Gaussian process neuron, a probabilistic unit that can be used as the basic building block for probabilistic graphical models that resemble the structure of neural networks.
  • PDF Full Text
    Detection, Estimation, and Modulation Theory, Part III: Radar–Sonar Signal Processing and Gaussian Signals in Noise. Harry L. Van Trees. Copyright 2001 John Wiley & Sons, Inc. ISBNs: 0-471-10793-X (Paperback); 0-471-22109-0 (Electronic). Glossary. In this section we discuss the conventions, abbreviations, and symbols used in the book. CONVENTIONS The following conventions have been used: 1. Boldface roman denotes a vector or matrix. 2. The symbol | · | means the magnitude of the vector or scalar contained within. 3. The determinant of a square matrix A is denoted by |A| or det A. 4. The script letters F(·) and L(·) denote the Fourier transform and Laplace transform respectively. 5. Multiple integrals are frequently written as ∫ dτ f(τ) ∫ dt g(t, τ) ≜ ∫ f(τ) (∫ g(t, τ) dt) dτ, that is, an integral is inside all integrals to its left unless a multiplication is specifically indicated by parentheses. 6. E[·] denotes the statistical expectation of the quantity in the bracket. The overbar x̄ is also used infrequently to denote expectation. 7. The symbol ⊛ denotes convolution, x(t) ⊛ y(t) ≜ ∫_{-∞}^{∞} x(t - τ) y(τ) dτ. 8. Random variables are lower case (e.g., x and x). Values of random variables and nonrandom parameters are capital (e.g., X and X). In some estimation theory problems much of the discussion is valid for both random and nonrandom parameters. Here we depart from the above conventions to avoid repeating each equation. 9. The probability density of x is denoted by p_x(·) and the probability distribution by P_x(·).
  • Bézier Curves Are Attractors of Iterated Function Systems
    New York Journal of Mathematics New York J. Math. 13 (2007) 107–115. All Bézier curves are attractors of iterated function systems. Chand T. John. Abstract. The fields of computer aided geometric design and fractal geometry have evolved independently of each other over the past several decades. However, the existence of so-called smooth fractals, i.e., smooth curves or surfaces that have a self-similar nature, is now well-known. Here we describe the self-affine nature of quadratic Bézier curves in detail and discuss how these self-affine properties can be extended to other types of polynomial and rational curves. We also show how these properties can be used to control shape changes in complex fractal shapes by performing simple perturbations to smooth curves. Contents 1. Introduction 107 2. Quadratic Bézier curves 108 3. Iterated function systems 109 4. An IFS with a QBC attractor 110 5. All QBCs are attractors of IFSs 111 6. Controlling fractals with Bézier curves 112 7. Conclusion and future work 114 References 114. 1. Introduction In the late 1950s, advancements in hardware technology made it possible to efficiently manufacture curved 3D shapes out of blocks of wood or steel. It soon became apparent that the bottleneck in mass production of curved 3D shapes was the lack of adequate software for designing these shapes. Bézier curves were first introduced in the 1960s independently by two engineers in separate French automotive companies: first by Paul de Casteljau at Citroën, and then by Pierre Bézier at Renault.
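The self-affine nature of quadratic Bézier curves described above can be checked numerically: a de Casteljau split at t = 1/2 produces two smaller quadratic Bézier curves, each tracing half of the original curve. A minimal sketch (ours, with an arbitrarily chosen control polygon, not code from the article):

```python
def qbc(p0, p1, p2, t):
    """Evaluate a quadratic Bézier curve with control points p0, p1, p2 at parameter t."""
    u = 1.0 - t
    return tuple(u * u * a + 2 * u * t * b + t * t * c for a, b, c in zip(p0, p1, p2))

def subdivide(p0, p1, p2):
    """de Casteljau split at t = 1/2: two quadratic curves covering [0, 1/2] and [1/2, 1]."""
    m01 = tuple((a + b) / 2 for a, b in zip(p0, p1))
    m12 = tuple((a + b) / 2 for a, b in zip(p1, p2))
    mid = tuple((a + b) / 2 for a, b in zip(m01, m12))
    return (p0, m01, mid), (mid, m12, p2)

P = ((0.0, 0.0), (1.0, 2.0), (2.0, 0.0))
left, right = subdivide(*P)

# Each half, suitably reparametrized, lies exactly on the original curve
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    a = qbc(*P, t)
    b = qbc(*left, 2 * t) if t <= 0.5 else qbc(*right, 2 * t - 1)
    assert all(abs(x - y) < 1e-12 for x, y in zip(a, b))
```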
  • Gaol 4.2.0 NOT Just Another Interval Arithmetic Library
    Gaol 4.2.0 NOT Just Another Interval Arithmetic Library. Edition 4.4. Last updated 21 May 2015. Frédéric Goualard. Laboratoire d’Informatique de Nantes-Atlantique, France. Copyright © 2001 Swiss Federal Institute of Technology, Switzerland. Copyright © 2002-2015 LINA UMR CNRS 6241, France. gdtoa() and strtord() are Copyright © 1998 by Lucent Technologies. All Trademarks, Copyrights and Trade Names are the property of their respective owners even if they are not specified below. Part of the work was done while Frédéric Goualard was a postdoctorate at the Swiss Federal Institute of Technology, Lausanne, Switzerland supported by the European Research Consortium for Informatics and Mathematics fellowship programme. This is edition 4.4 of the gaol documentation. It is consistent with version 4.2.0 of the gaol library. Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies. GAOL IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THIS SOFTWARE IS WITH YOU. SHOULD THIS SOFTWARE PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. Recipriversexcluson. n. A number whose existence can only be defined as being anything other than itself. Douglas Adams, Life, the Universe and Everything. Contents: Copyright vii; 1 Introduction 1; 2 Installation 3; 2.1 Getting the software 3; 2.2 Installing gaol from the source tarball on Unix and Linux 3; 2.2.1 Prerequisites 3; 2.2.2 Configuration 5; 2.2.3 Building 7; 2.2.4 Installation 7; 3 An overview of gaol 9; 3.1 The trust rounding mode....
  • Summary of Unit 1: Iterated Functions 1
    Summary of Unit 1: Iterated Functions. David P. Feldman http://www.complexityexplorer.org/
    Functions
    • A function is a rule that takes a number as input and outputs another number.
    • A function is an action.
    • Functions are deterministic. The output is determined only by the input.
    [diagram: feedback loop x → f → f(x)]
    Iteration and Dynamical Systems
    • We iterate a function by turning it into a feedback loop.
    • The output of one step is used as the input for the next.
    • An iterated function is a dynamical system, a system that evolves in time according to a well-defined, unchanging rule.
    Itineraries and Seeds
    • We iterate a function by applying it again and again to a number.
    • The number we start with is called the seed or initial condition and is usually denoted x0.
    • The resulting sequence of numbers is called the itinerary or orbit.
    • It is also sometimes called a time series or a trajectory.
    • The iterates are denoted xt. Ex: x5 is the fifth iterate.
    Time Series Plots
    • A useful way to visualize an itinerary is with a time series plot.
    [time series plot of xt against time t]
    • The time series plotted above is: 0.123, 0.189, 0.268, 0.343, 0.395, 0.418, 0.428, 0.426, 0.428, 0.429, 0.429.
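Computing an itinerary is a short feedback loop. In the sketch below (ours, not from the course) we assume the plotted series comes from a logistic update f(x) = r·x·(1 - x) with r ≈ 1.752 and seed x0 = 0.123; that parameter value is our inference from the listed numbers, not stated in the original, but it approximately reproduces the opening values 0.123, 0.189, 0.268, … and settles toward the fixed point 1 - 1/r ≈ 0.429:

```python
def iterate(f, x0, n):
    """Return the itinerary [x0, x1, ..., xn] of seed x0 under f."""
    orbit = [x0]
    for _ in range(n):
        orbit.append(f(orbit[-1]))
    return orbit

r = 1.752                        # assumed growth parameter (our inference)
f = lambda x: r * x * (1.0 - x)
orbit = iterate(f, 0.123, 10)

# Early iterates are close to the listed series, and the orbit
# approaches the fixed point x* = 1 - 1/r of the map
assert abs(orbit[1] - 0.189) < 1e-3
assert abs(orbit[-1] - (1 - 1 / r)) < 1e-2
```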
  • Solving Iterated Functions Using Genetic Programming Michael D
    Solving Iterated Functions Using Genetic Programming. Michael D. Schmidt, Computational Synthesis Lab, Cornell University, Ithaca, NY 14853, [email protected]. Hod Lipson, Computational Synthesis Lab, Cornell University, Ithaca, NY 14853, [email protected]. ABSTRACT An iterated function f(x) is a function that when composed with itself, produces a given expression f(f(x)) = g(x). Iterated functions are essential constructs in fractal theory and dynamical systems, but few analysis techniques exist for solving them analytically. Here we propose using genetic programming to find analytical solutions to iterated functions of arbitrary form. We demonstrate this technique on the notoriously hard iterated function problem of finding f(x) such that f(f(x)) = x² – 2. While some analytical techniques have been developed to find a specific solution to problems of this form, we show that it can be readily solved using genetic programming without recourse to deep mathematical insight. We find a previously unknown solution to this problem, suggesting that genetic programming may be an essential tool for finding solutions to arbitrary iterated functions. … various communities. Renowned physicist Michael Fisher is rumored to have solved the puzzle within five minutes [2]; however, few have matched this feat. The problem is enticing because of its apparent simplicity. Similar problems such as f(f(x)) = x², or f(f(x)) = x⁴ + b are straightforward (see Table 1). The fact that the slight modification from these easier functions makes the problem much more challenging highlights the difficulty in solving iterated function problems. Table 1. A few example iterated function problems. Iterated Function: f(f(x)) = x; Solution: f(x) = x.
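For context on why f(f(x)) = x² - 2 is special: under the substitution x = 2·cos(θ), squaring-minus-two becomes angle doubling, 2·cos(θ) ↦ 2·cos(2θ), so scaling the angle by √2 gives a closed-form functional square root. The sketch below (ours, not the paper's genetic-programming solution) checks f(x) = 2·cos(√2·arccos(x/2)), which satisfies the equation for x in [2·cos(π/√2), 2] ≈ [-1.205, 2]:

```python
import math

def f(x):
    """A classical solution of f(f(x)) = x^2 - 2 on part of [-2, 2], via x = 2*cos(theta)."""
    return 2.0 * math.cos(math.sqrt(2.0) * math.acos(x / 2.0))

# Verify the functional equation on the interval where the angle stays in [0, pi]
for x in (-1.2, -0.5, 0.0, 0.7, 1.5, 2.0):
    assert abs(f(f(x)) - (x * x - 2.0)) < 1e-9
```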
  • Iterated Function System
    IARJSET ISSN (Online) 2393-8021 ISSN (Print) 2394-1588. International Advanced Research Journal in Science, Engineering and Technology. ISO 3297:2007 Certified. Vol. 3, Issue 8, August 2016. Iterated Function System. S. C. Shrivastava, Department of Applied Mathematics, Rungta College of Engineering and Technology, Bhilai C.G., India. Abstract: Fractal image compression through IFS is very important for the efficient transmission and storage of digital data. A fractal is made up of the union of several copies of itself, and an IFS is defined by a finite number of affine transformations characterized by translation, scaling, shearing and rotation. In this paper we describe the necessary conditions to form an Iterated Function System and how fractals are generated through affine transformations. Keywords: Iterated Function System; Contraction Mapping. 1. INTRODUCTION The exploration of fractal geometry is usually traced back to the publication of the book “The Fractal Geometry of Nature” [1] by the IBM mathematician Benoit B. Mandelbrot. An Iterated Function System is a method of constructing fractals, which consists of a set of maps that explicitly list the similarities of the shape. Though the formal name Iterated Function Systems or IFS was coined by Barnsley and Demko [2] in 1985, the basic concept is … Metric space definition: A space X with a real-valued function d : X × X → R is called a metric space (X, d) if d possesses the following properties: 1. d(x, y) ≥ 0 for all x, y ∈ X; 2. d(x, y) = d(y, x) for all x, y ∈ X; 3. d(x, y) ≤ d(x, z) + d(z, y) for all x, y, z ∈ X (triangle inequality).
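As a minimal illustration of the affine-transformation view above (our sketch, not from the article): the three maps w_i(p) = (p + v_i)/2, where the v_i are the vertices of a triangle, are contractions with factor 1/2, and iterating randomly chosen maps (the "chaos game") drives any starting point onto the attractor of this IFS, the Sierpinski triangle:

```python
import random

# Vertices of the triangle; each affine map halves the distance to one vertex
V = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def w(i, p):
    """Affine contraction toward vertex i with contractivity factor 1/2."""
    return ((p[0] + V[i][0]) / 2.0, (p[1] + V[i][1]) / 2.0)

random.seed(0)
p = (0.2, 0.2)
points = []
for step in range(2000):
    p = w(random.randrange(3), p)
    if step >= 20:              # discard transients before the orbit reaches the attractor
        points.append(p)

# All sampled attractor points stay inside the triangle's bounding box
assert all(0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 for x, y in points)
```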
  • Aspects of Functional Iteration Milton Eugene Winger Iowa State University
    Iowa State University Capstones, Theses and Retrospective Theses and Dissertations Dissertations 1972 Aspects of functional iteration Milton Eugene Winger Iowa State University Follow this and additional works at: https://lib.dr.iastate.edu/rtd Part of the Statistics and Probability Commons Recommended Citation Winger, Milton Eugene, "Aspects of functional iteration " (1972). Retrospective Theses and Dissertations. 5876. https://lib.dr.iastate.edu/rtd/5876 This Dissertation is brought to you for free and open access by the Iowa State University Capstones, Theses and Dissertations at Iowa State University Digital Repository. It has been accepted for inclusion in Retrospective Theses and Dissertations by an authorized administrator of Iowa State University Digital Repository. For more information, please contact [email protected]. INFORMATION TO USERS This dissertation was produced from a microfilm copy of the original document. While the most advanced technological means to photograph and reproduce this document have been used, the quality is heavily dependent upon the quality of the original submitted. The following explanation of techniques is provided to help you understand markings or patterns which may appear on this reproduction. 1. The sign or "target" for pages apparently lacking from the document photographed is "Missing Page(s)". If it was possible to obtain the missing page(s) or section, they are spliced into the film along with adjacent pages. This may have necessitated cutting thru an image and duplicating adjacent pages to insure you complete continuity. 2. When an image on the film is obliterated with a large round black mark, it is an indication that the photographer suspected that the copy may have moved during exposure and thus cause a blurred image.