The Limits of Computation

1 The Limits of Computation

"The next time a bug freezes your computer, you can take solace in the deep mathematical truth that there is no way to eliminate all bugs."
- Seth Lloyd

The 1999 science-fiction film 'The Matrix' explored the idea that the reality we perceive could itself be the output of a computation. In this world, humans are plugged into a virtual reality, referred to as 'the Matrix'. They go about their daily lives, unaware that the sensory inputs they receive do not originate from their perceived reality. When a person, Alice, within the Matrix observes a watermelon falling from a skyscraper, there is no skyscraper, nor watermelon, nor even gravity responsible for the watermelon's fall. Instead, a complex computer program works silently in the background. The initial state of the watermelon and the location of the observer are encoded in a sequence of bits. The computer takes these bits, processes them according to a predetermined algorithm, and outputs the electrical signals that dictate what the observer should see.
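Though the film is fiction, the mechanism it describes is ordinary simulation: encode a state as data, apply a fixed update rule, and report what the observer should see. The following toy sketch is an illustration added here, not part of the original notes; the variable names, step size and one-dimensional setup are arbitrary choices.

```python
# Toy 'simulated reality': the observed world is just encoded state,
# and 'physics' is a predetermined update rule applied to that state.
def step(state, dt=0.1, g=9.8):
    """Advance the encoded state by one tick of the predetermined algorithm."""
    height, velocity = state
    velocity -= g * dt            # the rule playing the role of gravity
    height += velocity * dt
    return (height, velocity)

state = (300.0, 0.0)              # watermelon released from rest at 300 m
while state[0] > 0.0:
    state = step(state)
    print(f"observer sees the watermelon at {state[0]:6.1f} m")
```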
To people in the twenty-first century, whose lives are enmeshed in various information processors, the eventual plausibility of the Matrix does not appear as radical as it once did. One by one, the photos we view and the mail we send have been converted to digital form. Common questions such as "How many megabytes does that song take up?" reflect a society that is increasingly accepting of the possibility that the observable qualities of every object can be represented by bits, and physical processes by how they manipulate these bits. While most people regard this as a representation of reality, some scientists have gone as far as to speculate that the universe is perhaps just a giant information processor, programmed with the laws of physics we know [Llo06].

Why should a humble computer, which operates on the laws of electromagnetism, ever be capable of simulating all observable effects of gravity? How is it that we may use one to predict the fluctuations of gravitational waves or the reliability of a Boeing 747, when it neither detects gravitational waves nor flies? A priori, there is no reason why there should exist universal systems: computers capable of mimicking the output of any feasible physical experiment in the known universe, short of the entire universe itself.

The synonymity of universal systems and universal computers highlights the close links between the theory of computation and that of physical reality. All practical computational devices must ultimately be implemented by some physical law, and all physical laws discovered so far may be simulated by a standard computer [1]. Not only does this suggest that universal devices allow us to simulate and thus better understand reality, but also that discoveries made in the science of computation could naturally lead to insights about physical reality, and vice versa.

[1] All currently known models of quantum computation can also be simulated by classical computers, albeit with exponential overhead. There does, however, exist a minority of scientists who believe there may exist undiscovered physical laws that cannot be simulated. For example, see [Pen89].

The goal of this course is to explore this synergy. We consider three rapidly developing scientific fields linked to this central theme. The first demonstrates that understanding the limits and capabilities of computational devices can lead to further insight about our universe. The second motivates further development in theoretical computer science by considering how the difficulty of computational problems fundamentally depends on the laws of physics. The final topic revisits the Second Law of Thermodynamics from the perspective of information theory, giving a profound demonstration of the idiom 'Knowledge is Power.'

In this chapter, we begin this journey with a discussion of the ultimate limits of computation, and show how this can lead us to a new understanding of universal principles of our reality. Section 1.1 launches the chapter by highlighting the connections between theorems in computer science and our understanding of reality. Section 1.2 introduces variations of the Church-Turing thesis [Tur36], a universal principle that, roughly speaking, postulates that the computers we use today are such universal players. Section 1.3 describes the limitations these universal computers exhibit, and Section 1.4 constructs formal models of universal computers that recast these limitations in more concrete settings. In the next chapter, we combine these results with a proof that the ability to derive macroscopic laws from microscopic laws would necessitate the ability to do something that no universal computer can do, in violation of the Church-Turing thesis.

1.1 Emergent Laws, Universal Principles and Non-computability

Much of fundamental physics has been motivated by a search for a 'Theory of Everything', a set of principles that govern all known dynamics of the universe. This 'holy grail' has motivated the construction of particle accelerators and many of the more esoteric fields of theoretical physics. The standard reductionist rationale is that since all physical systems are built from conglomerations of fundamental particles, the principles governing these particles will also apply to all such systems.

The plausible existence of universal systems suggests a complementary approach to the understanding of universal principles. If all observable qualities of any physical process may be simulated by a single system, then the limitations on that system will also be universal limitations that allow us to make generic statements about what we can observe within our universe.

Indeed, there exist many tasks that universal computers cannot perform. The canonical example is the Halting problem [Tur36], which asks a computer to decide whether a given computer algorithm will eventually output some number, or be trapped in an infinite loop. A systematic solution of this problem would be of intense interest not only to software engineers, but also to any PhD student tired of spontaneous computer crashes when writing their thesis. The Halting problem is far from an isolated case; there exist many interesting problems that have no systematic solution. This immediately leads to a somewhat depressing universal principle (see Box 1):

"The universe allows no systematic method to eliminate all bugs."
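To see why no such method can exist, the classic diagonal argument behind the Halting problem can be sketched in a few lines. The Python below is an illustration added here, not part of the original notes: it supposes a hypothetical function halts(program, argument) that always answers correctly, and builds a program that defeats it, so no correct halts() can ever be written.

```python
# Hypothetical halting oracle: assumed, for contradiction, to always answer
# correctly whether program(argument) eventually halts.
def halts(program, argument):
    raise NotImplementedError("Turing's argument shows no correct version exists")

def contrarian(program):
    """Do the opposite of whatever halts() predicts about program run on itself."""
    if halts(program, program):
        while True:          # predicted to halt, so loop forever instead
            pass
    return "halted"          # predicted to loop forever, so halt immediately

# Asking about contrarian(contrarian) yields a contradiction either way:
# if halts(contrarian, contrarian) returns True, the call loops forever;
# if it returns False, the call halts. Hence no correct halts() can exist.
```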
Box 1 (Universal Principles): "The law that entropy always increases holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations, then so much the worse for Maxwell's equations. If it is found to be contradicted by observation, well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation." Sir Arthur Stanley Eddington, The Nature of the Physical World (1927)

Any primary school student could tell you that should you drop a watermelon from the Eiffel Tower, it will plummet to the ground. Any claim otherwise would be met with disbelief. Yet the child has probably never been to the Eiffel Tower, and almost certainly was not there when a watermelon plummeted to its fruity demise. Without first-hand experience, how is it that a child can rule out the infinite other possibilities... a watermelon hovering in midair, or spontaneously turning into a banana, perhaps? A key component of predicting natural phenomena is knowledge of underlying rules of thumb that we expect very general classes of objects to obey. In this case, the child knows that heavier-than-air objects fall. This rule allows the child to rule out a limitless number of scenarios, and hence to have a good idea of what happens in scenarios that he or she has never observed. When such rules of thumb are thought to hold for any physical system, they are regarded as universal principles.

The truth of universal principles cannot be proven. Nothing prevents some exotic phenomenon from being discovered in the future that demonstrably violates a given principle. We believe in the conservation of energy because all observations and experiments so far indicate that it is true. Such universal principles not only give insight into how the universe behaves, but also help rule out proposals that carelessly violate them without experimental evidence or good reason.

Philosophically, such principles may appear puzzling from a reductionist perspective. While most existing computers are electronic, such devices can in principle be constructed from almost any physical medium. Numerous physical systems of surprising simplicity, including even colliding billiard balls, have demonstrated universality [FT82]. This rather fundamental statement about arbitrary macroscopic systems is thus independent of microscopic composition. It motivates the idea of emergence: principles that govern macroscopic systems are not entirely predictable from the laws that govern their microscopic constituents [And72].

When we look at the physical universe around us, we often observe some sort of 'macroscopic order'. When we analyze the flow of water, or the dynamics of a glacier, we do not need to compute the exact motion of every atom. The trick is that when we observe the macroscopic world, we generally neglect the microscopic details. We see and measure macroscopic features, such as pressure, density and stress. The earliest scientists, such as Galileo and Archimedes, observed relations between such quantities without a care for the existence of atoms or quantum mechanics.

When scientists write down equations relating such quantities, one should note that the quantities themselves implicitly assume a continuum limit. Pressure gradients, stresses and the like are only formally defined in the limit where the medium is continuous, i.e., where it contains an infinite number of infinitesimal particles.
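To make the continuum assumption explicit, here is a standard textbook-style definition, added for illustration and not part of the original notes: macroscopic fields such as density and pressure are defined by shrinking a sampling volume that is nonetheless presumed to contain arbitrarily many particles.

```latex
% Continuum-limit definition of macroscopic fields (illustrative sketch).
% The mass density at a point x is the mass per unit volume of a sampling
% region \Delta V, taken in the limit \Delta V \to 0; the pressure gradient
% follows from the gradient theorem applied to the same shrinking region.
\[
  \rho(\mathbf{x}) \;=\; \lim_{\Delta V \to 0} \frac{\Delta m}{\Delta V},
  \qquad
  \nabla p(\mathbf{x}) \;=\; \lim_{\Delta V \to 0}
      \frac{1}{\Delta V}\oint_{\partial(\Delta V)} p\,\hat{\mathbf{n}}\,\mathrm{d}A .
\]
% Only for an idealised medium of infinitely many infinitesimal particles are
% these limits exact; for real matter they cease to make sense once \Delta V
% shrinks toward atomic scales.
```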