Systemic Computation: A Model of Interacting Systems with Natural Characteristics

Peter J Bentley
Department of Computer Science, University College London, Malet Place, London WC1E 6BT
[email protected]

Abstract. Natural systems provide unique examples of computation in a form very different from contemporary computer architectures. Biology also demonstrates capabilities such as adaptation, self-repair and self-organisation that are becoming increasingly desirable for our technology. To address these issues a new computer model and architecture with natural characteristics is presented. Systemic computation is Turing Complete; it is designed to support biological algorithms such as neural networks, evolutionary algorithms and models of development, and shares the desirable capabilities of biology not found in conventional architectures. Systemic computation may also be implemented using natural systems, enabling the potential for future computational analysis and control of biology.

Keywords: Systemics, bio-inspired computation, unconventional computation, novel computation.

1 Introduction

While the theory of computation is well understood through the concept of the Universal Turing Machine (UTM) [1], practical issues of architecture remain problematic for computer science and computer-based technologies. Foremost among these issues are (i) the apparent dichotomy between systems of "natural computation", such as the brain, and computer systems based on classical designs, and (ii) the need to control and enhance the reliability of increasingly complex systems [2,3]. This work describes a computer architecture, based on a model of computation in natural systems, which addresses both issues.

Biological systems are of great interest to computer science as examples of highly complex self-organising systems that perform tasks in parallel with no centralised method of control. One reason for the increasing interest in biology is the progression of computer technology. Computers are becoming smaller, cheaper, more interconnected and ubiquitous [4], as the impact of the Internet grows year by year. Predictions of the logical extension to the Internet – wireless dynamic networks of computers in almost every device around us – look set to become reality within a decade [5]. Advances such as speckled computing [6] may enable "spray-on" computer coatings. The technology of computation is becoming more parallel, decentralised and distributed – it may one day be as pervasive and widespread in our environments as biology. Yet such trends towards distributed and ubiquitous computing come at the cost of control and reliability. While hugely complex computational systems will soon be feasible, their organisation and management is still the subject of research [7].

There have been alternative views of computation since its conception. Cellular automata have proven themselves to be a valuable approach to emergent, distributed computation [8,9]. Generalisations such as constrained generating procedures and collision-based computing provide new ways to design and analyse emergent computational phenomena [10,11]. Indeed, it is clear that von Neumann – the architect of today's conventional computer architecture – believed that alternative architectures for computation were likely to be valuable, as can be seen from his final works on self-organising and reliable automata and "organisms" [12-14].
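The idea of emergent, distributed computation in cellular automata can be made concrete with a minimal sketch (not from the paper; everything in it is invented for illustration). Below, an elementary cellular automaton is implemented in Go using Rule 110, a standard example known to be Turing complete: every cell updates purely from its local neighbourhood, with no central controller, yet complex global structures emerge.

```go
package main

import "fmt"

// step applies an elementary cellular automaton rule to every cell "in
// parallel": each new cell state depends only on its local neighbourhood,
// with no centralised method of control.
func step(cells []byte, rule byte) []byte {
	n := len(cells)
	next := make([]byte, n)
	for i := 0; i < n; i++ {
		// Neighbourhood (left, self, right) with wrap-around boundaries.
		l := cells[(i-1+n)%n]
		c := cells[i]
		r := cells[(i+1)%n]
		idx := l<<2 | c<<1 | r // 3-bit neighbourhood index, 0..7
		next[i] = (rule >> idx) & 1
	}
	return next
}

func main() {
	// Rule 110: despite purely local update rules, it supports emergent
	// structures rich enough for universal computation.
	cells := make([]byte, 64)
	cells[63] = 1 // single seed cell
	for t := 0; t < 24; t++ {
		for _, c := range cells {
			if c == 1 {
				fmt.Print("#")
			} else {
				fmt.Print(".")
			}
		}
		fmt.Println()
		cells = step(cells, 110)
	}
}
```

Running the sketch prints successive generations as rows of `#` and `.`, making the emergent, self-organising patterns visible directly in the terminal.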
Bio-inspired grammars [15] and bio-inspired algorithms introduced notions of homeostasis [16] (for example in artificial immune systems [3]), fault-tolerance (as seen in embryonic hardware [17]) and parallel stochastic learning (for example in swarm intelligence and genetic algorithms [18]). Alternative models such as analogue or pulse computers may even show potential for so-called "Super-Turing Computation" [19] (e.g., they could solve problems in a lower time complexity than Turing machines, or work with irrational numbers with the same efficiency with which a finite Turing machine works with rational numbers).

Ubiquitous computing may enable computation anywhere, and bio-inspired models may enable improved capabilities such as reliability and fault-tolerance, but there has been no coherent architecture that combines both technologies. Indeed, these technologies appear incompatible: the computational overhead of most bio-inspired methods is prohibitive for the limited capabilities of ubiquitous devices. In the related area of computational modelling and analysis for the biosciences, we are finding that even our supercomputers struggle to model or analyse biological behaviour – Moore's Law cannot keep up with the data produced by advances in DNA synthesis and sequencing productivity [20].

This failure of modern computers to model or mimic biological processes is not surprising when one examines the origins of the computer. Conventional computation was built using principles of engineering. The early designs of computer by pioneers such as von Neumann comprised four logical elements: the Central Arithmetical unit (CA), the Central Control unit (CU), the Memory (M), and Input/Output devices (IO) [21]. Although intended as a high-level abstraction, this separation of functionality was based on the technology of the day, and constructed such that each element could be performed by a human (in much the same way that a factory production line is decomposed into separate jobs, each performed either by machine or by hand) [14]. Yet despite its 1940s origins, this separation of functionality and the serial, linear, synchronous architecture persist to the present day – an arbitrary design constraint inherited by almost every computer in the world. It has affected all design decisions since: "parallel software" is nothing more than a simulation of parallelism, operating systems must waste processor time with system calls and resource allocation, and processor speed-ups must rely on instruction prediction or fast memory caches.

Without biology, we might be content to stay with the von Neumann architecture, but the study of living systems has taught us that biology seems to perform far superior computation (or at least something equivalent to computation) and that it does so using radically different architectures to ours [2,3]. An analysis of parameters that can be used to measure the operation of a computational system places natural systems and conventional computing at opposite ends of a spectrum; see Table 1.

Table 1. Features of conventional vs natural computation.

Conventional       Natural
Deterministic      Stochastic
Synchronous        Asynchronous
Serial             Parallel
Heterostatic       Homeostatic
Batch              Continuous
Brittle            Robust
Fault intolerant   Fault tolerant
Human-reliant      Autonomous
Limited            Open-ended
Centralised        Distributed
Precise            Approximate
Isolated           Embodied
Linear causality   Circular causality
Simple             Complex
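Several of the right-hand column's properties (parallel, asynchronous, stochastic, homeostatic) can be illustrated in executable form. The Go sketch below is only an illustration of the contrast drawn in Table 1, not code from the paper, and all of its names are invented for the example: independent agents interact pairwise at random, with no global clock or schedule, and drift towards equilibrium.

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

func main() {
	// Each "agent" holds a value; agents run in parallel and interact
	// asynchronously and stochastically, rather than under a central,
	// synchronous, deterministic schedule.
	const numAgents = 8
	values := make([]int, numAgents)
	for i := range values {
		values[i] = i * 10
	}
	var mu sync.Mutex
	var wg sync.WaitGroup

	for id := 0; id < numAgents; id++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			rng := rand.New(rand.NewSource(time.Now().UnixNano() + int64(id)))
			for step := 0; step < 100; step++ {
				// Stochastic, pairwise interaction with a random partner.
				partner := rng.Intn(numAgents)
				mu.Lock()
				// Move both values towards their mean: a crude
				// homeostatic (equilibrium-seeking) update.
				mean := (values[id] + values[partner]) / 2
				values[id], values[partner] = mean, mean
				mu.Unlock()
			}
		}(id)
	}
	wg.Wait()
	fmt.Println("values after asynchronous interactions:", values)
}
```

No agent directs the others, yet the population converges towards a common value: a small-scale analogue of the distributed, self-organising behaviour the table attributes to natural computation.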
The lack of similarity between natural and conventional computing systems helps explain the practical compatibility problems between natural systems and classical computers. It is theoretically possible for any Turing Complete system to mimic any other such system [1,22], so a proof that a biological system is Turing Complete would imply that it could be simulated by any computer, and also that it could simulate the function of any computer. However, practical constraints such as speed (caused, for example, by a lack of parallelism and by linear architectures) can make the theory irrelevant. Just as it would be ludicrous to use a real ant colony to model every operation of a modern supercomputer, it is ludicrous for a modern supercomputer to model every operation of an ant colony. The two systems of computation may be mathematically equivalent at a certain level of abstraction, but they are practically so dissimilar that they become incompatible.

It is thus logical to suggest that when it is desirable to have a system of computation with biological characteristics, a computer architecture with biological features, such as those listed in the right-hand column of Table 1, is more appropriate. We need an alternative notion of computation that has at its heart ideas of parallelism, stochasticity and asynchrony. This work proposes a model to achieve this, based on ideas of systemics. While it can hardly be claimed to be a final solution, it is a suggestion of necessary features for a future-proof form of computation that will enable growth in key areas for the next few decades.

2 Bi-graph Computing

The field of novel or unconventional computation focuses on understanding and exploiting notions of computation in physical, chemical, and other natural systems, sometimes (arguably) at the expense of mathematical rigour. While systemic computation is motivated by the same ambitions of "natural computation", it can also be seen to be equivalent to (or rewritable as) the rigorously defined form of concurrent computation known as bigraphs, and it shares a similar graphical notation. Initial work in this area began in 1979, when the Calculus of Communicating Systems, an attempt to formalise mobility amongst processes algebraically, was shown to fail [23]. This led to the creation of the π-calculus, a (now extensively explored) calculus based on notions of processes and named links, or channels, between processes [23]. As a mobile process algebra, the π-calculus goes beyond the λ-calculus and enables notions of concurrency and cost to be modelled. Bigraphs were then created as a graphical model to provide a theory common to different process calculi by exploiting
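To give a flavour of the π-calculus notion of named channels between processes mentioned above, and in particular of mobility (channels sent over channels, so that connectivity changes at run time), the following Go sketch uses goroutines and channels as a loose analogy. It is an illustration only, not the π-calculus itself nor anything defined in the paper, and its process names are invented.

```go
package main

import "fmt"

// A tiny flavour of the pi-calculus idea that links are themselves values:
// a process can send a channel over another channel, so the communication
// topology between processes changes as the computation runs ("mobility").
func main() {
	// A channel that carries channels: analogous to transmitting a
	// named link between processes.
	link := make(chan chan string)

	// Process P: creates a fresh private channel, sends it to Q, then
	// communicates over it.
	go func() {
		private := make(chan string)
		link <- private // send the channel itself, not just data
		private <- "hello over a mobile link"
	}()

	// Process Q: receives a channel it did not previously know about and
	// then uses it -- its connectivity has changed at run time.
	received := <-link
	fmt.Println(<-received)
}
```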
