Computational Complexity: from Patience to Precision and Beyond

Total Pages: 16

File Type: PDF, Size: 1,020 KB

A Model-Independent Theory of Computational Complexity: From Patience to Precision and Beyond

Ed Blakey, The Queen's College, Oxford ([email protected])
Oxford University Computing Laboratory, Wolfson Building, Parks Road, Oxford, OX1 3QD
Submitted in Trinity Term, 2010 for the degree of Doctor of Philosophy in Computer Science

Acknowledgements and Dedication

We thank the author's supervisors, Dr Bob Coecke and Dr Joël Ouaknine, for their continued support and suggestions during this DPhil project. We thank the author's Transfer/Confirmation of Status assessors, Prof. Samson Abramsky and Prof. Peter Jeavons, first for agreeing to act as assessors and secondly for their useful and formative comments about this project; we thank the author's Examiners, Prof. Peter Jeavons and Prof. John Tucker, for taking the time to act as such and for their valuable suggestions, of which some are incorporated here. We thank the organizers of Unconventional Computing, the International Workshop on Natural Computing, Quantum Physics and Logic/Development of Computational Models, Science and Philosophy of Unconventional Computing and the International Conference on Systems Theory and Scientific Computation for the opportunity (and, in the case of the last-mentioned conference, the kind invitation) to present work forming part of this project. We thank the participants of the above-mentioned conferences/workshops, as well as the British Colloquium for Theoretical Computer Science and Complexity Resources in Physical Computation, for their encouraging feedback and insightful discussion. We thank the reviewers of publications to which the author has contributed (including New Generation Computing, the International Journal of Unconventional Computing and Natural Computing, as well as proceedings/publications associated with the conferences and workshops mentioned above) for their detailed comments and helpful suggestions; we thank also Prof. José Félix Costa for his kind invitation to contribute to the last-mentioned journal. We thank collaborators and colleagues for showing an interest in this work, for useful discussion and corrections, for bringing to the author's attention many relevant references, and for helping to shape and direct this research (as have the conference participants and publication reviewers, and of course the supervisors and assessors, mentioned above); notably, we thank Prof. Cristian Calude for fascinating discussions regarding accelerated Turing machines and the conjectured incompleteness of axiomatizations of resource, Dr Viv Kendon and colleagues (not least Rob Wagner) for the opportunity to explore the interface between the present project and the practical considerations of quantum computation/simulation, Dr Rebecca Palmer for noticing an omission relating to wave superposition, András Salamon for inspiring parts of the present work with questions concerning gap theorems and combined resources, Prof. Susan Stepney for her championing of the `natural' in `natural computation', and Dr Damien Woods for his suggestions relating to restriction of precision.
Academic assistance aside, we thank the administrative and support staff both at the Oxford University Computing Laboratory and at the Queen's College for their help during the author's DPhil programme; we thank in particular Janet Sadler for her assistance with the organization of the workshop Complexity Resources in Physical Computation and with the administration of the below-mentioned EPSRC funding. We acknowledge the generous financial support of the EPSRC; this work forms part of, and as of 1.x.2008 is funded by, the EPSRC project Complexity and Decidability in Unconventional Computational Models (EP/G003017/1). Personal thanks are due from the author to his family; to his friends in Oxford, Warwick and elsewhere; and to his girlfriend, Gemma. The author dedicates this dissertation to the memory of his mother, Linda (1944–2009).

Abstract

A Model-Independent Theory of Computational Complexity: From Patience to Precision and Beyond
Ed Blakey, The Queen's College
Submitted in Trinity Term, 2010 for the degree of Doctor of Philosophy in Computer Science

The field of computational complexity theory, which chiefly aims to quantify the difficulty encountered when performing calculations, is, in the case of conventional computers, correctly practised and well understood (some important and fundamental open questions notwithstanding); however, such understanding is, we argue, lacking when unconventional paradigms are considered. As an illustration, we present here an analogue computer that performs the task of natural-number factorization using only polynomial time and space; the system's true, exponential complexity, which arises from requirements concerning precision, is overlooked by a traditional, `time-and-space' approach to complexity theory. Hence, we formulate the thesis that unconventional computers warrant unconventional complexity analysis; the crucial omission from traditional analysis, we suggest, is consideration of relevant resources, these being not only time and space, but also precision, energy, etc.

In the presence of this multitude of resources, however, the task of comparing computers' efficiency (formerly a case merely of comparing time complexity) becomes difficult. We resolve this by introducing a notion of overall complexity, though this transpires to be incompatible with an unrestricted formulation of resource; accordingly, we define normality of resource, and stipulate that considered resources be normal, so as to rectify certain undesirable complexity behaviour. Our concept of overall complexity induces corresponding complexity classes, and we prove theorems concerning, for example, the inclusions therebetween.

Our notions of resource, overall complexity, normality, etc. form a model-independent framework of computational complexity theory, which allows: insightful complexity analysis of unconventional computers; comparison of large, model-heterogeneous sets of computers, and correspondingly improved bounds upon the complexity of problems; assessment of novel, unconventional systems against existing, Turing-machine benchmarks; increased confidence in the difficulty of problems; etc. We apply notions of the framework to existing disputes in the literature, and consider in the context of the framework various fundamental questions concerning the nature of computation.

Contents

1 Introduction 6
  1.1 A Tale of Two Complexities 6
  1.2 Background 7
    1.2.1 Preliminaries 7
    1.2.2 Literature 12
  1.3 Overview 12
2 Motivation 15
  2.1 Introduction 15
    2.1.1 Factorization 16
  2.2 Original Analogue Factorization System 27
    2.2.1 Description 27
    2.2.2 Proof of Correctness 52
    2.2.3 Time/Space Complexity 53
    2.2.4 Precision Complexity 55
    2.2.5 Summary 57
  2.3 Improved Analogue Factorization System 58
    2.3.1 Relating the Original and Improved Systems 58
    2.3.2 Description 59
    2.3.3 Proof of Correctness 66
    2.3.4 Practical Considerations 67
    2.3.5 Time/Space Complexity 69
    2.3.6 RSA Factorization 70
    2.3.7 Precision Complexity 72
    2.3.8 Summary 73
  2.4 Conclusion 74
    2.4.1 Summary 74
    2.4.2 Discussion 74
3 Resource 75
  3.1 Unconventional Resources 75
    3.1.1 The Need Therefor... 75
    3.1.2 ...But the Neglect Thereof 79
  3.2 Interpreting `Resource' 83
    3.2.1 Possible Interpretations 83
    3.2.2 Recasting as Different Resource Types 88
    3.2.3 Source of Complexity 88
    3.2.4 Commodity Resources 91
  3.3 Precision – an Illustrative Unconventional Resource 92
    3.3.1 Motivation 92
    3.3.2 Examples 93
    3.3.3 Definitions 105
    3.3.4 Precision in the Literature 114
  3.4 Formalizing Resource 116
    3.4.1 First Steps 116
    3.4.2 Null Resource 118
    3.4.3 From Resource Comes Complexity 118
  3.5 Model-Independent Resources 121
    3.5.1 Specific Examples 122
    3.5.2 Generic Criteria 124
    3.5.3 Two Approaches 125
  3.6 Model-Dependent Resources 125
    3.6.1 Resources for Non-Quantum Computers 125
    3.6.2 Resources for Quantum Computers 132
    3.6.3 Summary 136
  3.7 The Old versus the New 136
    3.7.1 Two Conventional Resources 137
    3.7.2 Moore's Law 138
  3.8 Underlying Resource 139
    3.8.1 Trade-Offs 140
    3.8.2 Combining Resources 141
  3.9 Summary 142
4 Dominance 143
  4.1 Comparison of Computers 143
    4.1.1 The Need to Compare Computers 143
    4.1.2 Problem-Homogeneity 146
    4.1.3 Comparing Turing Machines 146
    4.1.4 Comparing Unconventional Computers 147
  4.2 Dominance Defined 149
    4.2.1 Overall Complexity 150
  4.3 Dominance Classes 152
    4.3.1 Definitions 152
    4.3.2 Expected Results 152
    4.3.3 Other Internal Results 153
    4.3.4 Trade-Off Results 160
    4.3.5 Relationship to Traditional Hierarchy 162
    4.3.6 Transfer between Hierarchies of Results/Questions 162
  4.4 An Analogue of the Gap Theorem 162
    4.4.1 Introduction 163
    4.4.2 Dominance Gap Theorem 165
    4.4.3 Future Work 168
    4.4.4 Conclusion 171
  4.5 Normalization – Resources Revisited 172
    4.5.1 The Problem: Dominance versus Unrestricted Resource 172
    4.5.2 The Solution: Normalization 173
    4.5.3 Why Normalization? 178
    4.5.4 Summary of Resource 178
  4.6 Discussion 179
    4.6.1 Summary 179
    4.6.2 Comments 180
5 Case Studies 181
  5.1 Introduction 181
  5.2 Ray-Tracing – Computing the Uncomputable? 181
    5.2.1 Background 181
    5.2.2 Resolution 182
  5.3 An Adiabatic Solution to Hilbert's Tenth Problem 184
    5.3.1 Background 184
    5.3.2 Resolution 186
  5.4 Cabello versus Meyer 186
    5.4.1 Background 186
    5.4.2 Resolution – First Steps 188
  5.5 Future Work 189
  5.6 Discussion 190
    5.6.1 General Features of Resolution 190
    5.6.2 Extent of Resolution 191
6 Discussion 192
  6.1 Other Applications 192
    6.1.1 Cryptography ...
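As a rough, hedged illustration of the abstract's central point (this is our own toy sketch, not the analogue factorization system described in Chapter 2): if a physical quantity encodes an n-bit number directly, then merely distinguishing neighbouring values already demands measurement precision that shrinks exponentially with the input size in bits, even while the number of operations stays small.

    # Toy illustration only (assumption: the quantity encoding n is scaled to a
    # fixed physical range, e.g. the unit interval). Distinguishing n from n + 1
    # then needs relative precision about 1/n, roughly 2^(-bits), which is
    # exponentially fine in the input size, i.e. in the number of bits of n.
    for bits in (8, 16, 32, 64):
        n = 2 ** bits
        needed_precision = 1 / n
        print(f"{bits:3d}-bit input: relative precision needed ~ {needed_precision:.2e}")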
Recommended publications
  • Algorithms & Computational Complexity
Computational Complexity
Russell Buehler
January 5, 2015

Chapter 1: Preface

What follows are my personal notes created during my undergraduate algorithms course and later an independent study under Professor Holliday as a graduate student. I am not a complexity theorist; I am a graduate student with some knowledge who is, alas, quite fallible. Accordingly, this text is made available as a convenient reference, set of notes, and summary, but without even the slightest hint of a guarantee that everything contained within is factual and correct. This said, if you find an error, I would much appreciate it if you let me know so that it can be corrected.

Contents: 1 Preface; 2 A Brief Review of Algorithms: 2.1 Algorithm Analysis (2.1.1 O-notation), 2.2 Common Structures (2.2.1 Graphs, 2.2.2 Trees, 2.2.3 Networks), 2.3 Algorithms (2.3.1 Greedy Algorithms, 2.3.2 Dynamic Programming, 2.3.3 Divide and Conquer, 2.3.4 Network Flow), 2.4 Data Structures; 3 Deterministic Computation: 3.1 O-Notation...
  • Complexity Theory in Computer Science Pdf
Complexity is often used to describe an algorithm. One might hear something like "my sorting algorithm runs in n² time"; in complexity notation this is written as O(n²) and called polynomial running time. Complexity describes the use of resources by algorithms. In general, the resources of concern are time and space. The time complexity of an algorithm is the number of steps it must take to complete. The space complexity of an algorithm is the amount of memory the algorithm needs in order to run. Time complexity describes how many steps an algorithm takes as a function of its input. If, for example, each of the n input elements is operated on only once, the algorithm takes O(n) time. If the algorithm operates on only one input element regardless of input size, it is a constant-time, or O(1), algorithm, because whatever the size of the input, only one operation is performed. If the algorithm performs n operations for each of the n input elements, then it runs in O(n²) time. In the design and analysis of algorithms, there are three kinds of complexity that computer scientists consider: best-case, worst-case, and average-case complexity. Best-, worst- and average-case complexity can describe both time and space; this wiki discusses them in terms of time complexity, but the same concepts apply to space complexity.
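A minimal sketch of the three running times mentioned above (constant, linear and quadratic), written in Python purely for illustration; the function names are ours:

    def first_element(xs):
        # O(1): one step, regardless of how many elements xs holds
        return xs[0]

    def total(xs):
        # O(n): each of the n input elements is touched exactly once
        s = 0
        for x in xs:
            s += x
        return s

    def has_duplicate(xs):
        # O(n^2): n comparisons are made for each of the n elements
        for i in range(len(xs)):
            for j in range(len(xs)):
                if i != j and xs[i] == xs[j]:
                    return True
        return False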
  • LSPACE VS NP IS AS HARD AS P VS NP
LSPACE VS NP IS AS HARD AS P VS NP. FRANK VEGA. Abstract: The P versus NP problem is a major unsolved problem in computer science. It consists in knowing the answer to the following question: is P equal to NP? Other major complexity classes are LSPACE, PSPACE, ESPACE, E and EXP. Whether LSPACE = P is a fundamental question that is as important as it is unresolved. We show that if P = NP, then LSPACE = NP. Consequently, if LSPACE is not equal to NP, then P is not equal to NP. According to Lance Fortnow, it seems that LSPACE versus NP is easier to prove. However, with this proof we show that this problem is as hard as P versus NP. Moreover, we prove that the complexity class P is not equal to PSPACE as a direct consequence of this result. Furthermore, we demonstrate that if PSPACE is not equal to EXP, then P is not equal to NP. In addition, if E = ESPACE, then P is not equal to NP.

1. Introduction. In complexity theory, a function problem is a computational problem where a single output is expected for every input, but the output is more complex than that of a decision problem [6]. A function problem F is defined as a binary relation R ⊆ Σ* × Σ* over strings of an arbitrary alphabet Σ. A Turing machine M solves F if, for every input x such that there exists a y satisfying (x, y) ∈ R, M produces one such y; that is, M(x) = y [6].
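To make the relational definition concrete, here is a hedged Python sketch (our own toy example, not taken from the paper): the function problem given by the relation R = {(n, d) : 1 < d < n and d divides n}, where a solver must output some witness d whenever one exists.

    def solve_factor(n: int):
        # Output any d with (n, d) in R, i.e. any nontrivial divisor of n.
        for d in range(2, n):
            if n % d == 0:
                return d          # any witness satisfying the relation will do
        return None               # no d with 1 < d < n divides n

    print(solve_factor(15))       # 3, since (15, 3) is in R
    print(solve_factor(13))       # None: 13 has no nontrivial divisor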
  • Glossary of Complexity Classes
Appendix A: Glossary of Complexity Classes. Summary: This glossary includes self-contained definitions of most complexity classes mentioned in the book. Needless to say, the glossary offers a very minimal discussion of these classes, and the reader is referred to the main text for further discussion. The items are organized by topics rather than by alphabetic order. Specifically, the glossary is partitioned into two parts, dealing separately with complexity classes that are defined in terms of algorithms and their resources (i.e., time and space complexity of Turing machines) and complexity classes defined in terms of non-uniform circuits (and referring to their size and depth). The algorithmic classes include time-complexity based classes such as P, NP, coNP, BPP, RP, coRP, PH, E, EXP and NEXP, and the space-complexity classes L, NL, RL and PSPACE. The non-uniform classes include the circuit classes P/poly as well as NC^k and AC^k. Definitions and basic results regarding many other complexity classes are available at the constantly evolving Complexity Zoo. Preliminaries: Complexity classes are sets of computational problems, where each class contains problems that can be solved with specific computational resources. To define a complexity class one specifies a model of computation, a complexity measure (like time or space), which is always measured as a function of the input length, and a bound on the complexity of problems in the class. We follow the tradition of focusing on decision problems, but refer to these problems using the terminology of promise problems.
  • A Realistic Approach to Spatial and Temporal Complexities of Computational Algorithms
Mathematics and Computers in Business, Manufacturing and Tourism

A Realistic Approach to Spatial and Temporal Complexities of Computational Algorithms

AHMED TAREK (Department of Science, Cecil College, University System of Maryland (USM), One Seahawk Drive, North East, MD 21901-1900, United States of America; [email protected]) and AHMED FARHAN (Owen J. Roberts High School, 981 Ridge Road, Pottstown, PA 19465, United States of America; [email protected])

Abstract: Computational complexity is a widely researched topic in Theoretical Computer Science. A vast majority of the related results are purely of theoretical interest, and only a few have treated the model of complexity as applicable to real-life computation. Even where individuals have performed research in complexity, the results can be hard to realize in practice. A number of these papers lack simultaneous consideration of temporal and spatial complexities, which together depict the overall computational scenario. Treating one without the other makes the analysis insufficient, and the model presented may not depict the true computational scenario, since both are of paramount importance in computer implementation of computational algorithms. This paper addresses some of these limitations prevailing in the computer science literature through a clearly depicted model of computation and a meticulous analysis of spatial and temporal complexities. The paper also deliberates on computational complexities due to a wide variety of coding constructs arising frequently in the design and analysis of practical algorithms.

Key Words: Design and analysis of algorithms, Model of computation, Real-life computation, Spatial complexity, Temporal complexity

1 Introduction: ...putations require pre- and/or post-processing, with the remaining operations carried out fast enough.
  • Quantum Computing : a Gentle Introduction / Eleanor Rieffel and Wolfgang Polak
QUANTUM COMPUTING: A Gentle Introduction. Eleanor Rieffel and Wolfgang Polak. The MIT Press, Cambridge, Massachusetts; London, England. ©2011 Massachusetts Institute of Technology. All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher. For information about special quantity discounts, please email [email protected]. This book was set in Syntax and Times Roman by Westchester Book Group. Printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data: Rieffel, Eleanor, 1965– Quantum computing : a gentle introduction / Eleanor Rieffel and Wolfgang Polak. p. cm. (Scientific and engineering computation). Includes bibliographical references and index. ISBN 978-0-262-01506-6 (hardcover : alk. paper). 1. Quantum computers. 2. Quantum theory. I. Polak, Wolfgang, 1950– II. Title. QA76.889.R54 2011 004.1 dc22 2010022682.

Contents: Preface (xi); 1 Introduction (1); Part I Quantum Building Blocks (7); 2 Single-Qubit Quantum Systems (9): 2.1 The Quantum Mechanics of Photon Polarization (9), 2.1.1 A Simple Experiment (10), 2.1.2 A Quantum Explanation (11); 2.2 Single Quantum Bits (13); 2.3 Single-Qubit Measurement (16); 2.4 A Quantum Key Distribution Protocol (18); 2.5 The State Space of a Single-Qubit System (21): 2.5.1 Relative Phases versus Global Phases (21), 2.5.2 Geometric Views of the State Space of a Single Qubit (23), 2.5.3 Comments on General Quantum State Spaces
  • Computational Complexity
Computational Complexity

Avi Wigderson (School of Mathematics, Institute for Advanced Study, Princeton, NJ, USA; avi@ias.edu) and Oded Goldreich (Department of Computer Science, Weizmann Institute of Science, Rehovot, Israel; oded.goldreich@weizmann.ac.il). October.

Abstract: The strive for efficiency is ancient and universal, as time is always short for humans. Computational Complexity is a mathematical study of what can be achieved when time and other resources are scarce. In this brief article we will introduce quite a few notions: formal models of computation and measures of efficiency, the P vs NP problem and NP-completeness, circuit complexity and proof complexity, randomized computation and pseudorandomness, probabilistic proof systems, cryptography, and more. A glossary of complexity classes is included in an appendix. We highly recommend the given bibliography and the references therein for more information.

Contents: Introduction (Preliminaries; Computability and Algorithms; Efficient Computability and the class P; The P versus NP Question; Efficient Verification and the class NP; The Big Conjecture; NP versus coNP; Reducibility and NP-Completeness); Lower Bounds (Boolean Circuit Complexity; Basic Results and Questions; Monotone Circuits ...)
  • Descriptive Complexity: a Logicians Approach to Computation
Descriptive Complexity: A Logician's Approach to Computation. N. Immerman.

A basic issue in computer science is the complexity of problems. If one is doing a calculation once on a medium-sized input, the simplest algorithm may be the best method to use, even if it is not the fastest. However, when one has a subproblem that will have to be solved millions of times, optimization is important. A fundamental issue in theoretical computer science is the computational complexity of problems. How much time and how much memory space is needed to solve a particular problem? Here are a few examples of such problems:

1. Reachability: Given a directed graph and two specified points s, t, determine if there is a path from s to t. A simple, linear-time algorithm marks s and then continues to mark every vertex at the head of an edge whose tail is marked. [...] there are exponentially many possibilities, but in this case no known algorithm is faster than exponential time.

Computational complexity measures how much time and/or memory space is needed as a function of the input size. Let TIME[t(n)] be the set of problems that can be solved by algorithms that perform at most O(t(n)) steps for inputs of size n. The complexity class polynomial time (P) is the set of problems that are solvable in time at most some polynomial in n: P = ∪_{k=1}^{∞} TIME[n^k]. Even though the class TIME[t(n)] is sensitive to the exact machine model used in the computations, the class P is quite robust. The problems Reachability and Min-triangulation are el...
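The linear-time marking procedure for Reachability described in the excerpt can be sketched as follows (a hedged illustration in Python; the graph encoding is our own choice, not the article's):

    from collections import deque

    def reachable(graph, s, t):
        # graph: dict mapping each vertex to the list of heads of its outgoing edges.
        # Mark s, then repeatedly mark every vertex at the head of an edge whose
        # tail is already marked; this takes O(vertices + edges) steps.
        marked = {s}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in graph.get(u, []):
                if v not in marked:
                    marked.add(v)
                    queue.append(v)
        return t in marked

    print(reachable({'s': ['a'], 'a': ['t'], 't': []}, 's', 't'))  # True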
  • Counting Below #P: Classes, Problems and Descriptive Complexity
MSc Thesis: Counting below #P: Classes, problems and descriptive complexity. Angeliki Chalki. µΠλ∀ Graduate Program in Logic, Algorithms and Computation, National and Kapodistrian University of Athens. Supervised by Aris Pagourtzis. Advisory Committee: Dimitris Fotakis, Aris Pagourtzis, Stathis Zachos. September 2016.

Abstract: In this thesis, we study counting classes that lie below #P. One approach, the most standard in Computational Complexity Theory, is the machine-based approach. Classes like #L, span-L [1], and TotP, #PE [38] are defined by placing space and time restrictions on a Turing machine's computational resources. A second approach is that of Descriptive Complexity, which characterizes complexity classes by the type of logic needed to express the languages in them. Classes arising from this viewpoint, like #FO [44], #RHΠ1 [16], #RΣ2 [44], are equivalent to #P, to the class of problems AP-interreducible with #BIS, and to a subclass of the problems admitting an FPRAS, respectively. A major objective of such an investigation is to gain an understanding of how "efficient counting" relates to these already defined classes. By "efficient counting" we mean counting the solutions of a problem using a polynomial-time algorithm or an FPRAS. Many other interesting properties of the classes considered and of their problems have been examined, for example alternative definitions of counting classes using relation-based operators, and the computational difficulty of complete problems, since complete problems capture the difficulty of the corresponding class. Moreover, in Section 3.5 we define the logspace analogue of the class TotP and explore how and to what extent results can be transferred from polynomial-time to logarithmic-space computation.
  • Space Complexity and LOGSPACE
Space Complexity and LOGSPACE. Michal Hanzlík*

Abstract: This paper deals with computational complexity theory, with emphasis on classes of space complexity. One of the most important classes in this field is LOGSPACE. There are several interesting results associated with this class, namely the theorems of Savitch and of Immerman–Szelepcsényi. Techniques that are used when working with LOGSPACE differ from those known from time complexity because space, unlike time, may be reused. There are many open problems in this area, and one will be mentioned at the end of the paper.

1 Introduction. Computational complexity focuses mainly on two closely related topics. The first of them is the notion of the complexity of a "well defined" problem and the second is the ensuing hierarchy of such problems. By a "well defined" problem we mean that all the data required for the computation are part of the input. If the input is of such a form, we can talk about the complexity of a problem, which is a measure of how many resources we need to do the computation. What do we mean by relationships between problems? An important technique in computational complexity is the reduction of one problem to another. Such a reduction establishes that the first problem is at least as difficult to solve as the second one. Thus, these reductions form a hierarchy of problems.

*Department of Mathematics, Faculty of Applied Sciences, University of West Bohemia in Pilsen, Univerzitní 22, Plzeň, [email protected]

1.1 Representation. In mathematics one usually works with mathematical objects as abstract entities.
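As a hedged illustration of why space, unlike time, can be reused (our own toy example, not taken from the paper): a LOGSPACE-style computation keeps only a workspace of O(log n) bits, here two counters, while scanning a read-only input of length n.

    def equal_zeros_and_ones(w: str) -> bool:
        # The entire workspace is two counters; each fits in O(log n) bits and is
        # reused throughout a single left-to-right scan of the read-only input.
        zeros = ones = 0
        for c in w:
            if c == '0':
                zeros += 1
            else:
                ones += 1
        return zeros == ones

    print(equal_zeros_and_ones('0101'))   # True
    print(equal_zeros_and_ones('0111'))   # False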
  • The Computational Complexity of Randomness by Thomas Weir Watson
The Computational Complexity of Randomness, by Thomas Weir Watson. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Luca Trevisan, Co-Chair; Professor Umesh Vazirani, Co-Chair; Professor Satish Rao; Professor David Aldous. Spring 2013. Copyright 2013 by Thomas Weir Watson.

Abstract: This dissertation explores the multifaceted interplay between efficient computation and probability distributions. We organize the aspects of this interplay according to whether the randomness occurs primarily at the level of the problem or the level of the algorithm, and orthogonally according to whether the output is random or the input is random. Part I concerns settings where the problem's output is random. A sampling problem associates to each input x a probability distribution D(x), and the goal is to output a sample from D(x) (or at least get statistically close) when given x. Although sampling algorithms are fundamental tools in statistical physics, combinatorial optimization, and cryptography, and algorithms for a wide variety of sampling problems have been discovered, there has been comparatively little research viewing sampling through the lens of computational complexity. We contribute to the understanding of the power and limitations of efficient sampling by proving a time hierarchy theorem which shows, roughly, that "a little more time gives a lot more power to sampling algorithms." Part II concerns settings where the algorithm's output is random.
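A small, hedged sketch of a sampling problem in the above sense (our own toy example, not from the dissertation): to input n we associate D(n), the uniform distribution on permutations of {0, ..., n-1}, and a sampler must output one draw from D(n).

    import random

    def sample_uniform_permutation(n: int):
        # random.shuffle implements Fisher-Yates, so every one of the n!
        # orderings is returned with equal probability, i.e. a sample from D(n).
        perm = list(range(n))
        random.shuffle(perm)
        return perm

    print(sample_uniform_permutation(5))   # e.g. [3, 0, 4, 1, 2]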
  • On the Unusual Effectiveness of Logic in Computer Science
On the Unusual Effectiveness of Logic in Computer Science. Joseph Y. Halpern, Robert Harper, Neil Immerman, Phokion G. Kolaitis, Moshe Y. Vardi, Victor Vianu. January 2001.

1 Introduction and Overview. In 1960, E. P. Wigner, a joint winner of the 1963 Nobel Prize for Physics, published a paper titled On the Unreasonable Effectiveness of Mathematics in the Natural Sciences [Wig60]. This paper can be construed as an examination and affirmation of Galileo's tenet that "The book of nature is written in the language of mathematics". To this effect, Wigner presented a large number of examples that demonstrate the effectiveness of mathematics in accurately describing physical phenomena. Wigner viewed these examples as illustrations of what he called the empirical law of epistemology, which asserts that the mathematical formulation of the laws of nature is both appropriate and accurate, and that mathematics is actually the correct language for formulating the laws of nature. At the same time, Wigner pointed out that the reasons for the success of mathematics in the natural sciences are not completely understood; in fact, he went as far as asserting that "... the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and there is no rational explanation for it." In 1980, R. W. Hamming, winner of the 1968 ACM Turing Award for Computer Science, published a follow-up article, titled The Unreasonable Effectiveness of Mathematics [Ham80]. In this article, Hamming provided further examples manifesting the effectiveness of mathematics in the natural sciences. Moreover, he attempted to answer the "implied question" in Wigner's article: "Why is mathematics so unreasonably effective?" Although Hamming offered several partial explanations, at the end he concluded that on balance this question remains "essentially unanswered".