The Computational Complexity of Randomness by Thomas Weir Watson

Total pages: 16

File type: PDF, size: 1020 KB

The Computational Complexity of Randomness

by Thomas Weir Watson

A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Luca Trevisan, Co-Chair; Professor Umesh Vazirani, Co-Chair; Professor Satish Rao; Professor David Aldous. Spring 2013. Copyright 2013 by Thomas Weir Watson.

Abstract

This dissertation explores the multifaceted interplay between efficient computation and probability distributions. We organize the aspects of this interplay according to whether the randomness occurs primarily at the level of the problem or the level of the algorithm, and orthogonally according to whether the output is random or the input is random.

Part I concerns settings where the problem’s output is random. A sampling problem associates to each input x a probability distribution D(x), and the goal is to output a sample from D(x) (or at least get statistically close) when given x. Although sampling algorithms are fundamental tools in statistical physics, combinatorial optimization, and cryptography, and algorithms for a wide variety of sampling problems have been discovered, there has been comparatively little research viewing sampling through the lens of computational complexity. We contribute to the understanding of the power and limitations of efficient sampling by proving a time hierarchy theorem which shows, roughly, that “a little more time gives a lot more power to sampling algorithms.”

Part II concerns settings where the algorithm’s output is random. Even when the specification of a computational problem involves no randomness, one can still consider randomized algorithms that produce a correct output with high probability. A basic question is whether one can derandomize algorithms, i.e., reduce or eliminate their dependence on randomness without incurring much loss in efficiency. Although derandomizing arbitrary time-efficient algorithms can only be done assuming unproven complexity-theoretic conjectures, it is possible to unconditionally construct derandomization tools called pseudorandom generators for restricted classes of algorithms. We give an improved pseudorandom generator for a new class, which we call combinatorial checkerboards. The notion of pseudorandomness shows up in many contexts besides derandomization. The so-called Dense Model Theorem, which has its roots in the famous theorem of Green and Tao that the primes contain arbitrarily long arithmetic progressions, is a result about pseudorandomness that has turned out to be a useful tool in computational complexity and cryptography. At the heart of this theorem is a certain type of reduction, and in this dissertation we prove a strong lower bound on the advice complexity of such reductions, which is analogous to a list size lower bound for list-decoding of error-correcting codes.

Part III concerns settings where the problem’s input is random. We focus on the topic of randomness extraction, which is the problem of transforming a sample from a high-entropy but non-uniform probability distribution (that represents an imperfect physical source of randomness) into a uniformly distributed sample (which would then be suitable for use by a randomized algorithm). It is natural to model the input distribution as being sampled by an efficient algorithm (since randomness is generated by efficient processes in nature), and we give a construction of an extractor for the case where the input distribution’s sampler is extremely efficient in parallel time. A related problem is to estimate the min-entropy (“amount of randomness”) of a given parallel-time-efficient sampler, since this dictates how many uniformly random output bits one could hope to extract from it. We characterize the complexity of this problem, showing that it is “slightly harder than NP-complete.”
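For orientation, min-entropy has a standard definition (a well-known fact stated here for the reader, not quoted from the dissertation):

```latex
% Min-entropy of a distribution X: the largest k such that
% every outcome has probability at most 2^{-k}.
H_\infty(X) = \min_{x \,:\, \Pr[X=x] > 0} \log_2 \frac{1}{\Pr[X=x]}
```

An extractor for a class of sources must produce nearly uniform bits from any source in the class that has sufficiently high min-entropy.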
Part IV concerns settings where the algorithm’s input is random. Average-case complexity is the study of the power of algorithms that are allowed to fail with small probability over a randomly chosen input. This topic is motivated by cryptography and by modeling heuristics. A fundamental open question is whether the average-case hardness of NP is implied by the worst-case hardness of NP. We exhibit a new barrier to showing this implication, by proving that a certain general technique (namely, “relativizing proofs by reduction”) cannot be used. We also prove results on hardness amplification, clarifying the quantitative relationship between the running time of an algorithm and the probability of failure over a random input (both of which are desirable to minimize).

Contents

1 Introduction
  1.1 Computational Complexity
  1.2 Randomness
  1.3 Our Contributions
    1.3.1 Part I
    1.3.2 Part II
    1.3.3 Part III
    1.3.4 Part IV

I The Problem’s Output is Random

2 Time Hierarchies for Sampling Distributions
  2.1 Introduction
    2.1.1 Results
    2.1.2 Discussion
  2.2 Intuition for Theorem 2.1
  2.3 Proof of Theorem 2.1
  2.4 List-Decoding from Ubiquitous Informative Errors
    2.4.1 Discussion of the Model of Error-Correction
    2.4.2 Proof of Theorem 2.3
      2.4.2.1 A Combinatorial Lemma
      2.4.2.2 Code Construction

II The Algorithm’s Output is Random

3 Pseudorandom Generators for Combinatorial Checkerboards
  3.1 Introduction
    3.1.1 Combinatorial Checkerboards
    3.1.2 Our Result
    3.1.3 Overview of the Proof
    3.1.4 Preliminaries
  3.2 Deriving Theorem 3.3 from Lemma 3.7 and Lemma 3.8
  3.3 The High-Weight Case
    3.3.1 Proof of Lemma 3.7
    3.3.2 Proof of Lemma 3.14
    3.3.3 Proof of the Tree Labeling Lemma
      3.3.3.1 Intuition
      3.3.3.2 Formal Argument
  3.4 The Low-Weight Case
    3.4.1 Proof of Lemma 3.8
    3.4.2 Dimension Reduction
    3.4.3 Proof of Lemma 3.36

4 Advice Lower Bounds for the Dense Model Theorem
  4.1 Introduction
    4.1.1 Related Work
  4.2 Intuition
  4.3 Formal Proof
    4.3.1 Binomial Distribution Tail
    4.3.2 Combinatorial Designs
    4.3.3 Notational Preliminaries
    4.3.4 Proof of Theorem 4.6
      4.3.4.1 Distributions Without Dense Models
      4.3.4.2 Distributions That Cannot Be Covered
      4.3.4.3 Setting the Parameters
      4.3.4.4 The Majority of Majorities
      4.3.4.5 Putting It All Together

III The Problem’s Input is Random

5 Extractors and Lower Bounds for Locally Samplable Sources
  5.1 Introduction
    5.1.1 Results
    5.1.2 Techniques
    5.1.3 Concurrent Work
    5.1.4 Previous Work on the Power of Locally Computable Functions
  5.2 Preliminaries
  5.3 1-Local Sources
  5.4 d-Local Sources
    5.4.1 Superindependent Matchings
    5.4.2 Proof of Theorem 5.13
  5.5 Increasing the Output Length
    5.5.1 The General Method
    5.5.2 Applying Theorem 5.19
  5.6 Improved Lower Bounds for Sampling Input-Output Pairs
  5.7 Improved Extractors for Low-Weight Affine Sources
    5.7.1 A Linear Strong Seeded Extractor with Seed Length log n + O(log k)
    5.7.2 Proof of Theorem 5.10

6 The Complexity of Estimating Min-Entropy
  6.1 Introduction
    6.1.1 Definitions
    6.1.2 Results
    6.1.3 Related Work
  6.2 Proof of Theorem 6.3 and Theorem 6.4
  6.3 Proof of Theorem 6.5
  6.4 SBP is Closed Under Nondeterminism

IV The Algorithm’s Input is Random

7 Relativized Worlds Without Worst-Case to Average-Case Reductions for NP
  7.1 Introduction
    7.1.1 Notions of Reductions and Relationship to Previous Work
    7.1.2 Results
    7.1.3 Independent Work
    7.1.4 Organization
  7.2 Preliminaries
    7.2.1 Complexity Classes
    7.2.2 Reductions
    7.2.3 Relativization
    7.2.4 Uniform Complexity
Recommended publications
  • How We Understand “Complexity” Makes a Difference: Lessons from Critical Systems Thinking and the Covid-19 Pandemic in the UK
    How We Understand “Complexity” Makes a Difference: Lessons from Critical Systems Thinking and the Covid-19 Pandemic in the UK. Michael C. Jackson, Centre for Systems Studies, University of Hull, Hull HU6 7TS, UK; [email protected]; Tel.: +44-7527-196400. Received: 11 November 2020; Accepted: 4 December 2020; Published: 7 December 2020. Abstract: Many authors have sought to summarize what they regard as the key features of “complexity”. Some concentrate on the complexity they see as existing in the world—on “ontological complexity”. Others highlight “cognitive complexity”—the complexity they see arising from the different interpretations of the world held by observers. Others recognize the added difficulties flowing from the interactions between “ontological” and “cognitive” complexity. Using the example of the Covid-19 pandemic in the UK, and the responses to it, the purpose of this paper is to show that the way we understand complexity makes a huge difference to how we respond to crises of this type. Inadequate conceptualizations of complexity lead to poor responses that can make matters worse. Different understandings of complexity are discussed and related to strategies proposed for combatting the pandemic. It is argued that a “critical systems thinking” approach to complexity provides the most appropriate understanding of the phenomenon and, at the same time, suggests which systems methodologies are best employed by decision makers in preparing for, and responding to, such crises. Keywords: complexity; Covid-19; critical systems thinking; systems methodologies. 1. Introduction: No one doubts that we are, in today’s world, entangled in complexity. At the global level, economic, social, technological, health and ecological factors have become interconnected in unprecedented ways, and the consequences are immense.
  • Dissipative Structures, Complexity and Strange Attractors: Keynotes for a New Eco-Aesthetics
    Dissipative structures, complexity and strange attractors: keynotes for a new eco-aesthetics. R. M. Pulselli 1, G. C. Magnoli 2, N. Marchettini 3 & E. Tiezzi 3. 1 Department of Urban Studies and Planning, M.I.T., Cambridge, U.S.A. 2 Faculty of Engineering, University of Bergamo, Italy and Research Affiliate, M.I.T., Cambridge, U.S.A. 3 Department of Chemical and Biosystems Sciences, University of Siena, Italy. Abstract: There is a new branch of science strikingly at variance with the idea of knowledge as settled and deterministic. The complexity we observe in nature makes us aware of the limits of traditional reductive investigative tools and requires new comprehensive approaches to reality. Looking at the non-equilibrium thermodynamics reported by Ilya Prigogine as the key point to understanding the behaviour of living systems, research on design practices takes into account the many dynamics occurring in nature and seeks to imagine living shapes suiting living contexts. When Edgar Morin speaks about the necessity of a method of complexity, considering the evolutive features of living systems, he probably means that a comprehensive method should be based on deep observations and flexible ordering laws. Designers and planners are now engaged in a research field concerning fascinating theories coming from science, whose playground is made of cities, landscapes and human settlements. So, the concept of a dissipative structure and the theory of space organized by networks provide a new point of view from which to observe the dynamic behaviours of systems, finally bringing their flowing patterns into the practices of design and architecture. However, while science discovers the fashion of living systems, the question asked is how to develop methods to configure open shapes according to the theory of evolutionary physics.
  • Fractal Curves and Complexity
    Perception & Psychophysics 1987, 42 (4), 365-370. Fractal curves and complexity. JAMES E. CUTTING and JEFFREY J. GARVIN, Cornell University, Ithaca, New York. Fractal curves were generated on square initiators and rated in terms of complexity by eight viewers. The stimuli differed in fractional dimension, recursion, and number of segments in their generators. Across six stimulus sets, recursion accounted for most of the variance in complexity judgments, but among stimuli with the most recursive depth, fractal dimension was a respectable predictor. Six variables from previous psychophysical literature known to affect complexity judgments were compared with these fractal variables: symmetry, moments of spatial distribution, angular variance, number of sides, P2/A, and Leeuwenberg codes. The latter three provided reliable predictive value and were highly correlated with recursive depth, fractal dimension, and number of segments in the generator, respectively. Thus, the measures from the previous literature and those of fractal parameters provide equal predictive value in judgments of these stimuli. Fractals are mathematical objects that have recently captured the imaginations of artists, computer graphics engineers, and psychologists. Synthesized and popularized by Mandelbrot (1977, 1983), with ever-widening appeal (e.g., Peitgen & Richter, 1986), fractals have many curious and fascinating properties. Consider four. Determine the fractional dimension by dividing the logarithm of the number of unit lengths in the generator by the logarithm of the number of unit lengths across the initiator. Since there are five segments in this generator and three unit lengths across the initiator, the fractional dimension is log(5)/log(3), or about 1.47.
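As a worked version of the computation quoted above (the standard similarity-dimension formula, with the numbers from the excerpt):

```latex
% Fractional (similarity) dimension of a fractal curve:
% N = number of unit segments in the generator,
% S = number of unit lengths spanned by the initiator.
D = \frac{\log N}{\log S} = \frac{\log 5}{\log 3} \approx 1.465
```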
  • Measures of Complexity: A Non-Exhaustive List
    Measures of Complexity: a non-exhaustive list. Seth Lloyd, d'Arbeloff Laboratory for Information Systems and Technology, Department of Mechanical Engineering, Massachusetts Institute of Technology, [email protected]. The world has grown more complex recently, and the number of ways of measuring complexity has grown even faster. This multiplication of measures has been taken by some to indicate confusion in the field of complex systems. In fact, the many measures of complexity represent variations on a few underlying themes. Here is an (incomplete) list of measures of complexity grouped into the corresponding themes. An historical analog to the problem of measuring complexity is the problem of describing electromagnetism before Maxwell's equations. In the case of electromagnetism, quantities such as electric and magnetic forces that arose in different experimental contexts were originally regarded as fundamentally different. Eventually it became clear that electricity and magnetism were in fact closely related aspects of the same fundamental quantity, the electromagnetic field. Similarly, contemporary researchers in architecture, biology, computer science, dynamical systems, engineering, finance, game theory, etc., have defined different measures of complexity for each field. Because these researchers were asking the same questions about the complexity of their different subjects of research, however, the answers that they came up with for how to measure complexity bear a considerable similarity to each other. Three questions that researchers frequently ask to quantify the complexity of the thing (house, bacterium, problem, process, investment scheme) under study are: 1. How hard is it to describe? 2. How hard is it to create? 3. What is its degree of organization? Here is a list of some measures of complexity grouped according to the question that they try to answer.
  • Glossary of Complexity Classes
    Appendix A: Glossary of Complexity Classes. Summary: This glossary includes self-contained definitions of most complexity classes mentioned in the book. Needless to say, the glossary offers a very minimal discussion of these classes, and the reader is referred to the main text for further discussion. The items are organized by topics rather than by alphabetic order. Specifically, the glossary is partitioned into two parts, dealing separately with complexity classes that are defined in terms of algorithms and their resources (i.e., time and space complexity of Turing machines) and complexity classes defined in terms of non-uniform circuits (and referring to their size and depth). The algorithmic classes include time-complexity based classes such as P, NP, coNP, BPP, RP, coRP, PH, E, EXP, and NEXP, and the space-complexity classes L, NL, RL, and PSPACE. The non-uniform classes include the circuit classes P/poly as well as NC^k and AC^k. Definitions and basic results regarding many other complexity classes are available at the constantly evolving Complexity Zoo. A.1 Preliminaries: Complexity classes are sets of computational problems, where each class contains problems that can be solved with specific computational resources. To define a complexity class one specifies a model of computation, a complexity measure (like time or space), which is always measured as a function of the input length, and a bound on the complexity of problems in the class. We follow the tradition of focusing on decision problems, but refer to these problems using the terminology of promise problems.
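For orientation among the algorithmic classes just listed, here is the standard chain of known containments (textbook facts, not part of the glossary excerpt):

```latex
% Standard containments among classes named above. None of the
% individual inclusions is known to be strict, although
% P \neq EXP holds unconditionally by the time hierarchy theorem.
\mathsf{P} \subseteq \mathsf{NP} \subseteq \mathsf{PH} \subseteq \mathsf{PSPACE} \subseteq \mathsf{EXP} \subseteq \mathsf{NEXP}
```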
  • Lecture Notes on Descriptional Complexity and Randomness
    Lecture notes on descriptional complexity and randomness. Peter Gács, Boston University, [email protected]. A didactical survey of the foundations of Algorithmic Information Theory. These notes introduce some of the main techniques and concepts of the field. They have evolved over many years and have been used and referenced widely. “Version history” below gives some information on what has changed when. Contents: 1 Complexity (1.1 Introduction: formal results, applications, history of the problem; 1.2 Notation; 1.3 Kolmogorov complexity: invariance, simple quantitative estimates; 1.4 Simple properties of information; 1.5 Algorithmic properties of complexity; 1.6 The coding theorem: self-delimiting complexity, universal semimeasure, prefix codes, the coding theorem for ¹Fº, algorithmic probability; 1.7 The statistics of description length). 2 Randomness (2.1 Uniform distribution; 2.2 Computable distributions: two kinds of test, randomness via complexity, conservation of randomness; 2.3 Infinite sequences: null sets, probability space, computability, integral, randomness tests, randomness and complexity, universal semimeasure and algorithmic probability, randomness via algorithmic probability).
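The object these notes build on is plain Kolmogorov complexity; a standard statement for readers skimming the contents (not quoted from the notes, whose notation may differ):

```latex
% Kolmogorov complexity of a string x relative to a universal
% machine U: the length of a shortest program p that makes U
% output x. The invariance theorem: for any machine M there is
% a constant c_M with C_U(x) <= C_M(x) + c_M for all x, so the
% choice of U matters only up to an additive constant.
C_U(x) = \min\{\, |p| : U(p) = x \,\}
```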
  • A Realistic Approach to Spatial and Temporal Complexities of Computational Algorithms
    Mathematics and Computers in Business, Manufacturing and Tourism. A Realistic Approach to Spatial and Temporal Complexities of Computational Algorithms. AHMED TAREK, Department of Science, Cecil College, University System of Maryland (USM), One Seahawk Drive, North East, MD 21901-1900, UNITED STATES OF AMERICA, [email protected]. AHMED FARHAN, Owen J. Roberts High School, 981 Ridge Road, Pottstown, PA 19465, UNITED STATES OF AMERICA, [email protected]. Abstract: Computational complexity is a widely researched topic in Theoretical Computer Science. A vast majority of the related results are purely of theoretical interest, and only a few have treated the model of complexity as applicable to real-life computation. Even if some individuals performed research in complexity, the results would be hard to realize in practice. A number of these papers lack simultaneous consideration of temporal and spatial complexities, which together depict the overall computational scenario. Treating one without the other makes the analysis insufficient, and the model presented may not depict the true computational scenario, since both are of paramount importance in computer implementation of computational algorithms. This paper exceeds some of these limitations prevailing in the computer science literature through a clearly depicted model of computation and a meticulous analysis of spatial and temporal complexities. The paper also deliberates on computational complexities due to a wide variety of coding constructs arising frequently in the design and analysis of practical algorithms. Key–Words: Design and analysis of algorithms, Model of computation, Real-life computation, Spatial complexity, Temporal complexity. 1 Introduction: … computations require pre- and/or post-processing, with the remaining operations carried out fast enough.
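To ground the time/space pairing the abstract argues for, here is a small illustrative sketch (my example, not taken from the paper): two algorithms with the same temporal complexity but different spatial complexity.

```python
# Illustrative only: equal O(n) temporal complexity, different
# spatial complexity -- the pairing the paper says must be
# analyzed together.

def fib_table(n: int) -> int:
    """O(n) time, O(n) space: keeps the whole table of values."""
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]

def fib_rolling(n: int) -> int:
    """O(n) time, O(1) space: keeps only the last two values."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_table(30) == fib_rolling(30) == 832040
```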
  • Nonlinear Dynamics and Entropy of Complex Systems with Hidden and Self-Excited Attractors
    Editorial: Nonlinear Dynamics and Entropy of Complex Systems with Hidden and Self-Excited Attractors. Christos K. Volos 1,*, Sajad Jafari 2, Jacques Kengne 3, Jesus M. Munoz-Pacheco 4 and Karthikeyan Rajagopal 5. 1 Laboratory of Nonlinear Systems, Circuits & Complexity (LaNSCom), Department of Physics, Aristotle University of Thessaloniki, Thessaloniki 54124, Greece. 2 Nonlinear Systems and Applications, Faculty of Electrical and Electronics Engineering, Ton Duc Thang University, Ho Chi Minh City 700000, Vietnam; [email protected]. 3 Department of Electrical Engineering, University of Dschang, P.O. Box 134 Dschang, Cameroon; [email protected]. 4 Faculty of Electronics Sciences, Autonomous University of Puebla, Puebla 72000, Mexico; [email protected]. 5 Center for Nonlinear Dynamics, Institute of Research and Development, Defence University, P.O. Box 1041 Bishoftu, Ethiopia; [email protected]. * Correspondence: [email protected]. Received: 1 April 2019; Accepted: 3 April 2019; Published: 5 April 2019. Keywords: hidden attractor; complex systems; fractional-order; entropy; chaotic maps; chaos. In the last few years, entropy has been a fundamental and essential concept in information theory. It is also often used as a measure of the degree of chaos in systems; e.g., Lyapunov exponents, fractal dimension, and entropy are usually used to describe the complexity of chaotic systems. Thus, it will be important to study entropy in nonlinear systems. Additionally, there has been an increasing interest in a new classification of nonlinear dynamical systems including two kinds of attractors: self-excited attractors and hidden attractors. Self-excited attractors can be localized straightforwardly by applying a standard computational procedure.
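As a concrete illustration of one of the chaos measures named above (a standard textbook computation, not taken from the editorial): estimating the Lyapunov exponent of the logistic map, where a positive value indicates chaos.

```python
import math

# Standard estimate of the Lyapunov exponent of the logistic map
# x -> r*x*(1-x), averaging log|f'(x)| = log|r*(1-2x)| along an orbit.
def lyapunov_logistic(r: float, x0: float = 0.4,
                      burn_in: int = 1000, iters: int = 100_000) -> float:
    x = x0
    for _ in range(burn_in):           # discard transient behavior
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(iters):
        x = r * x * (1.0 - x)
        # tiny floor guards against log(0) if x lands exactly on 0.5
        total += math.log(max(abs(r * (1.0 - 2.0 * x)), 1e-12))
    return total / iters

print(lyapunov_logistic(3.2))   # negative: periodic (2-cycle) regime
print(lyapunov_logistic(4.0))   # approx ln 2 > 0: fully chaotic regime
```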
  • Quantum Computing: A Gentle Introduction / Eleanor Rieffel and Wolfgang Polak
    QUANTUM COMPUTING A Gentle Introduction Eleanor Rieffel and Wolfgang Polak The MIT Press Cambridge, Massachusetts London, England ©2011 Massachusetts Institute of Technology All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher. For information about special quantity discounts, please email [email protected] This book was set in Syntax and Times Roman by Westchester Book Group. Printed and bound in the United States of America. Library of Congress Cataloging-in-Publication Data Rieffel, Eleanor, 1965– Quantum computing : a gentle introduction / Eleanor Rieffel and Wolfgang Polak. p. cm.—(Scientific and engineering computation) Includes bibliographical references and index. ISBN 978-0-262-01506-6 (hardcover : alk. paper) 1. Quantum computers. 2. Quantum theory. I. Polak, Wolfgang, 1950– II. Title. QA76.889.R54 2011 004.1—dc22 2010022682 10987654321 Contents Preface xi 1 Introduction 1 I QUANTUM BUILDING BLOCKS 7 2 Single-Qubit Quantum Systems 9 2.1 The Quantum Mechanics of Photon Polarization 9 2.1.1 A Simple Experiment 10 2.1.2 A Quantum Explanation 11 2.2 Single Quantum Bits 13 2.3 Single-Qubit Measurement 16 2.4 A Quantum Key Distribution Protocol 18 2.5 The State Space of a Single-Qubit System 21 2.5.1 Relative Phases versus Global Phases 21 2.5.2 Geometric Views of the State Space of a Single Qubit 23 2.5.3 Comments on General Quantum State Spaces
  • Measuring Complexity and Predictability of Time Series with Flexible Multiscale Entropy for Sensor Networks
    Measuring Complexity and Predictability of Time Series with Flexible Multiscale Entropy for Sensor Networks. Renjie Zhou 1,2, Chen Yang 1,2, Jian Wan 1,2,*, Wei Zhang 1,2, Bo Guan 3,*, Naixue Xiong 4,*. 1 School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China; [email protected] (R.Z.); [email protected] (C.Y.); [email protected] (W.Z.). 2 Key Laboratory of Complex Systems Modeling and Simulation of Ministry of Education, Hangzhou Dianzi University, Hangzhou 310018, China. 3 School of Electronic and Information Engineering, Ningbo University of Technology, Ningbo 315211, China. 4 Department of Mathematics and Computer Science, Northeastern State University, Tahlequah, OK 74464, USA. * Correspondence: [email protected] (J.W.); [email protected] (B.G.); [email protected] (N.X.); Tel.: +1-404-645-4067 (N.X.). Academic Editor: Mohamed F. Younis. Received: 15 November 2016; Accepted: 24 March 2017; Published: 6 April 2017. Abstract: Measurement of time series complexity and predictability is sometimes the cornerstone for proposing solutions to topology and congestion control problems in sensor networks. As a method of measuring time series complexity and predictability, multiscale entropy (MSE) has been widely applied in many fields. However, sample entropy, which is the fundamental component of MSE, scores the similarity of two subsequences of a time series as either zero or one, with no in-between values, which causes sudden changes in entropy values even when the time series undergoes only small changes. This problem becomes especially severe when the time series is short. To solve this problem, we propose flexible multiscale entropy (FMSE), which introduces a novel similarity function measuring the similarity of two subsequences with full-range values from zero to one, and thus increases the reliability and stability of measuring time series complexity.
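A minimal sketch of the idea described above: replace the hard 0/1 match criterion of sample entropy with a smooth similarity in [0, 1]. The sigmoid form and its steepness parameter below are my assumptions for illustration; the paper's actual similarity function may differ.

```python
import math

def soft_similarity(dist: float, r: float, steepness: float = 10.0) -> float:
    """Smooth similarity: close to 1 when dist << r, close to 0 when
    dist >> r. The sigmoid shape is an assumption, not FMSE's exact
    function."""
    return 1.0 / (1.0 + math.exp(steepness * (dist - r)))

def hard_similarity(dist: float, r: float) -> float:
    """Classic sample-entropy criterion: all-or-nothing match."""
    return 1.0 if dist <= r else 0.0

# A tiny perturbation across the threshold flips the hard criterion
# completely but barely moves the soft one: the stability property
# motivating FMSE.
r = 0.2
print(hard_similarity(0.199, r), hard_similarity(0.201, r))  # 1.0 0.0
print(soft_similarity(0.199, r), soft_similarity(0.201, r))  # ~0.502 ~0.497
```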
  • Understanding the Mandelbrot and Julia Set
    Understanding the Mandelbrot and Julia Set. Jake Zyons Wood. August 31, 2015. Introduction: Fractals infiltrate the disciplinary spectra of set theory, complex algebra, generative art, computer science, chaos theory, and more. Fractals visually embody recursive structures, endowing them with the ability of nigh infinite complexity. The Sierpinski Triangle, Koch Snowflake, and Dragon Curve comprise a few of the more widely recognized iterated function fractals. These recursive structures possess an intuitive geometric simplicity which makes their creation, at least at a shallow recursive depth, easy to do by hand with pencil and paper. The Mandelbrot and Julia sets, on the other hand, allow no such convenience. These fractals belong to the class of escape-time fractals, and only really entered mathematicians’ consciousness in the late 1970s [1]. The purpose of this paper is to clearly explain the logical procedures of creating escape-time fractals. This will include reviewing the necessary math for this type of fractal, then specifically explaining the algorithms commonly used for the Mandelbrot Set as well as its variations. By the end, the careful reader should, without too much effort, feel totally at ease with the underlying principles of these fractals. What Makes the Mandelbrot Set a Set? [Figure 1: Black and white Mandelbrot visualization] The Mandelbrot Set truly is a set in the mathematical sense of the word. A set is a collection of anything with a specific property; the Mandelbrot Set, for instance, is a collection of complex numbers which all share a common property (explained in Part II). All complex numbers can in fact be labeled as either a member of the Mandelbrot set, or not.
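The membership property the excerpt defers to "Part II" is the standard escape-time criterion; a minimal sketch of it (the textbook algorithm, not the paper's own code):

```python
# Standard escape-time test: c belongs to the Mandelbrot set iff the
# orbit of z -> z^2 + c starting from 0 stays bounded (|z| <= 2).
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:        # escaped: certainly not in the set
            return False
    return True                  # survived max_iter: treated as inside

print(in_mandelbrot(0))          # True: orbit stays at 0
print(in_mandelbrot(-1))         # True: orbit cycles 0, -1, 0, -1, ...
print(in_mandelbrot(1))          # False: 0, 1, 2, 5, 26, ... escapes
```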
  • Computational Complexity
    Computational Complexity. Avi Wigderson, School of Mathematics, Institute for Advanced Study, Princeton, NJ, USA ([email protected]); Oded Goldreich, Department of Computer Science, Weizmann Institute of Science, Rehovot, Israel ([email protected]). October. Abstract: The strive for efficiency is ancient and universal, as time is always short for humans. Computational Complexity is a mathematical study of what can be achieved when time and other resources are scarce. In this brief article we will introduce quite a few notions: formal models of computation and measures of efficiency, the P vs. NP problem and NP-completeness, circuit complexity and proof complexity, randomized computation and pseudorandomness, probabilistic proof systems, cryptography, and more. A glossary of complexity classes is included in an appendix. We highly recommend the given bibliography and the references therein for more information. Contents: Introduction; Preliminaries; Computability and Algorithms; Efficient Computability and the class P; The P versus NP Question; Efficient Verification and the class NP; The Big Conjecture; NP versus coNP; Reducibility and NP-Completeness; Lower Bounds; Boolean Circuit Complexity; Basic Results and Questions; Monotone Circuits