A Realistic Approach to Spatial and Temporal Complexities of Computational Algorithms


Mathematics and Computers in Business, Manufacturing and Tourism

AHMED TAREK
Department of Science
Cecil College, University System of Maryland (USM)
One Seahawk Drive, North East, MD 21901-1900
UNITED STATES OF AMERICA
[email protected]

AHMED FARHAN
Owen J. Roberts High School
981 Ridge Road
Pottstown, PA 19465
UNITED STATES OF AMERICA
[email protected]

Abstract: Computational complexity is a widely researched topic in Theoretical Computer Science. A vast majority of the related results are purely of theoretical interest, and only a few have treated the model of complexity as applicable to real-life computation. Even where individuals have performed research in complexity, the results are often hard to realize in practice. A number of these papers lack a simultaneous consideration of temporal and spatial complexities, which together depict the overall computational scenario. Treating one without the other makes the analysis insufficient, and the model presented may not depict the true computational scenario, since both are of paramount importance in the computer implementation of computational algorithms. This paper overcomes some of these limitations prevailing in the computer science literature through a clearly depicted model of computation and a meticulous analysis of spatial and temporal complexities. The paper also deliberates on the computational complexities of a wide variety of coding constructs arising frequently in the design and analysis of practical algorithms.

Key-Words: Design and analysis of algorithms, Model of computation, Real-life computation, Spatial complexity, Temporal complexity

1 Introduction

Computation time and memory space requirements are two major constraints in the computer implementation of real-life algorithms. Though there is a wide variety of formal notations for expressing these computational constraints, big-oh notation is the most customarily used. Big-oh provides the upper bound on the temporal (time) and spatial (space) requirements of a computer algorithm, which are the most difficult obstructions to cross in digital computation. This paper primarily focuses on the upper bounds as expressed through the standard big-oh notation for both temporal and spatial complexities.

Temporal complexity is the quantity of CPU time necessitated by an algorithm for its computer implementation. Spatial complexity is the number of memory cells that an algorithm truly requires for computation. A good algorithm tends to keep both of these requirements as small as possible. However, in reality, it is strenuous to design an algorithm that is both time and space efficient. In that event, there remains a trade-off between these two requirements. In certain situations, not each of the n computer operations requires the same amount of time. Especially, some computations require pre- and/or post-processing, with the remaining operations carried out fast enough. In these computational scenarios, an Amortized Analysis is used. With amortization, the total time required for n operations is bounded asymptotically from above by a function G(n), and the amortized time per operation is g(n) ∈ O(G(n)/n). Hence, the amortized time is an upper bound for the average time of an operation in the worst case. So essentially, the investments in the pre-work and/or post-work during the computation amortize, or average out.
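To make the amortization argument concrete, consider the classic dynamic array that doubles its capacity whenever it fills up. The C sketch below is our illustration rather than an example from the paper: counting every element write, including the copies performed during the occasional costly doublings, gives a total G(n) < 3n for n appends, so the amortized cost per append is g(n) ∈ O(G(n)/n) = O(1).

    #include <stdio.h>
    #include <stdlib.h>

    /* Dynamic array that doubles its capacity when full. Each append is
     * O(1) except for the rare doubling steps, which cost O(size). */
    typedef struct {
        int *data;
        size_t size, capacity;
        size_t writes;          /* operation count: every element write */
    } DynArray;

    static void append(DynArray *a, int value) {
        if (a->size == a->capacity) {               /* costly pre-work */
            size_t newcap = a->capacity ? 2 * a->capacity : 1;
            int *grown = malloc(newcap * sizeof *grown);
            if (!grown) exit(EXIT_FAILURE);
            for (size_t i = 0; i < a->size; i++) {  /* O(size) copy */
                grown[i] = a->data[i];
                a->writes++;
            }
            free(a->data);
            a->data = grown;
            a->capacity = newcap;
        }
        a->data[a->size++] = value;                 /* O(1) fast path */
        a->writes++;
    }

    int main(void) {
        DynArray a = {NULL, 0, 0, 0};
        size_t n = 1000000;
        for (size_t i = 0; i < n; i++)
            append(&a, (int)i);
        /* Total writes G(n) stay below 3n, so g(n) = G(n)/n is O(1). */
        printf("n = %zu, G(n) = %zu, G(n)/n = %.2f\n",
               n, a.writes, (double)a.writes / (double)n);
        free(a.data);
        return 0;
    }

Run on one million appends, this reports roughly two writes per append, comfortably under the 3n bound, even though the individual appends that trigger a doubling cost O(n) by themselves.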
Computer memory is a re-usable resource from the operating system standpoint and may be released for further reallocation. Temporal resources are consumable, and once spent, there is no return to that point in time. To date, no computer algorithm is capable of time travel, that is, of turning back the clock, whether the algorithm is parallel or purely sequential. Though there is a significant difference between temporal and spatial complexities from the reallocation standpoint, spatial complexity shares many of the same features as temporal complexity.

This paper considers temporal complexity first. Commencing with the simplistic coding constructs arising frequently in real-life computation, the paper gradually transitions to the relatively complex models of computation in practice and, for each one of them, provides the upper bound on the temporal resource requirements. Following the temporal complexity, the paper deals with the spatial complexity in an elaborate and easy-to-comprehend fashion. In particular, spatial complexity is important to sorting algorithms. Starting with the simple spatial instances, the paper gradually approaches the relatively complex computational models.

In Section 2, specific terms and notations used in this paper are discussed briefly. Section 3 deals with a variety of coding constructs that frequently arise in realistic computations and provides the big-oh time complexity for each one of them. Section 4 considers spatial complexity in an elaborate fashion. Section 5 explores the realistic issues in the temporal and spatial complexity models discussed in this paper. Section 6 deduces conclusions based on the models and analysis in the paper, and explores future research avenues.

2 Terminology and Notations

In this paper, the following notations are used.

n: Denotes the input or instance size.

g(n): The highest-order term, without coefficients, in the expression for complexity; it determines the complexity class of the algorithm.

f(n): Complexity function with an input size, n.

f′(n): Complexity function without the constant coefficients in f(n).

O(g(n)): Big-oh complexity with problem size n, which represents the set corresponding to the complexity class g(n).

T(n): Temporal complexity of an algorithm or a function with an input size n.

S(n): Spatial complexity of an algorithm or a function with an input of size n.

B(n): Space-time Bandwidth Product for an input of size n.

C: Any constant value.

Space-time Tradeoff: A situation where memory usage can be reduced at the cost of slower program execution, or vice versa: the computation time can be reduced at the cost of increased memory consumption.

DSPACE(f, n): DSPACE stands for Deterministic Space Computation. In this paper, DSPACE(f, n) denotes the total number of memory cells used during the deterministic computation of f(n). Here, f indicates the algorithm or function under consideration, and n is the input of size |n|. It is often abbreviated as DSPACE(f). Hence, DSPACE(f, n) ∈ O(g(n)). However, DSPACE(f) is not defined whenever the computation of f(n) does not halt.

NSPACE(g, n): Denotes the set of all decision problems that can be solved by a Non-deterministic Turing Machine using space in the order of O(f(n)) and unlimited time. Therefore, the computation is done in O(f(n)) space by relaxing the time bound to unlimited. NSPACE stands for Non-deterministic Space Computation, the non-deterministic counterpart of DSPACE.

Please refer to [2] and [4] for the definition of big-oh complexity. The following ordering is the basis of the complexity order using big-oh, and provides the complexity class hierarchy:

log2 n < n < n log2 n < n^2 < ... < n^k < 2^n < C^n < n!
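This ordering is easy to verify numerically. The short C program below is an illustrative sketch, not part of the paper, that tabulates the growth functions for a few input sizes, with the arbitrary choices k = 2 for the n^k column and C = 3 for the C^n column.

    #include <stdio.h>
    #include <math.h>

    /* Tabulates the hierarchy log2 n < n < n log2 n < n^2 < 2^n < 3^n < n!
     * (k = 2 for the n^k column and C = 3 for the C^n column). */
    static double factorial(int n) {
        double f = 1.0;
        for (int i = 2; i <= n; i++)
            f *= i;
        return f;
    }

    int main(void) {
        int sizes[] = {2, 4, 8, 16, 32};
        printf("%4s %8s %6s %10s %8s %12s %18s %14s\n",
               "n", "log2 n", "n", "n log2 n", "n^2", "2^n", "3^n", "n!");
        for (size_t i = 0; i < sizeof sizes / sizeof sizes[0]; i++) {
            int n = sizes[i];
            printf("%4d %8.2f %6d %10.2f %8d %12.0f %18.0f %14.6g\n",
                   n, log2(n), n, n * log2(n), n * n,
                   pow(2.0, n), pow(3.0, n), factorial(n));
        }
        return 0;
    }

Compile with the math library linked (e.g., cc growth.c -lm). Already at n = 32, 2^n dwarfs n^2, and n! dwarfs everything else, which is exactly the separation the hierarchy asserts.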
3 Temporal Complexity of Computational Algorithms

The aggregate time required to execute a program depends on both the compile time and the run time. Compile time does not depend on the dynamic parameters involved during the computation. Therefore, the same compiled program may be executed several times, each time with an individual input of a different size. The execution time of an algorithm is the principal focus in temporal complexity analysis.

In determining the temporal complexity, there are two approaches: the operations count and the step count. The operations count is the number of additions, multiplications, comparisons and other operations used during the computation. Success with the operations count depends on the ability to identify the crucial operations that contribute most to the temporal complexity. The step count accounts for all time spent in all parts of the program/function. Since 100 additions may be 1 step, or 200 multiplications may also be 1 step, a program step is a syntactically or semantically meaningful segment of a program for which the execution time is independent of the instance characteristic.

Time complexity calculates the total time consumed for the given instance characteristic. However, the instance characteristic varies from input to input, even with the same problem size. For example, the number of swaps performed by Bubble sort depends on both the array size and the array elements. With the same array size n, the array elements may be vastly different, which will result in a different number of swaps performed by the algorithm. As a result, the operation count may not be uniquely determined by the selected instance characteristic. Hence, it is necessary to have the best, the worst and the average operation counts, yielding the best, the worst and the average time complexities. For the vast majority of applications, the average complexities suffice.

Big-oh complexity is usually expressed by the fastest-growing term in the complexity function. There are 4 algorithmic steps in determining the big-oh complexity, and they may be employed as a computer program. The complete algorithm is described below:

Algorithm big_ohComplexity(n)
Purpose: This algorithm determines the big-oh complexity with instance size n (a single instance variable).
Input: Complexity function, f(n), on n with k terms.

1. Simple Sequential Statements: If c_i is the CPU time consumed to execute statement S_i, for i = 1, 2, ..., k, then the total time consumed is ∑_{i=1}^{k} c_i = C, which is a constant. Applying the algorithm to determine the big-oh complexity, the complexity order is O(1).

2. Simple Loops: The following is a prototype for loop found in many programs.
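A representative version of such a loop, given here as an illustrative sketch rather than the authors' own prototype, pairs a block of simple sequential statements with a single counting loop. The sequential block costs a constant C, and the loop performs one addition per iteration, so the overall temporal complexity is T(n) = C1 + C2*n, which is in O(n).

    #include <stdio.h>

    /* Simple sequential statements followed by a simple counting loop;
     * a hypothetical stand-in for the prototype loop being described. */
    int main(void) {
        int a[] = {3, 1, 4, 1, 5, 9, 2, 6};
        int n = (int)(sizeof a / sizeof a[0]);

        /* Simple sequential statements: each costs a constant c_i, so
         * the whole block takes constant time C and is O(1). */
        int sum = 0;
        int count = 0;

        /* Simple loop: the body executes n times with a constant-time
         * body, so T(n) = C1 + C2*n, i.e., O(n). */
        for (int i = 0; i < n; i++) {
            sum += a[i];
            count++;    /* operations count: one addition per pass */
        }

        printf("n = %d, sum = %d, additions counted = %d\n", n, sum, count);
        return 0;
    }

Here the crucial operation, in the operations-count sense, is the addition inside the loop body: it executes exactly n times, and it alone determines the O(n) complexity class.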