ENGINEERING DESIGN OF RECONFIGURABLE PIPELINED DATAPATH
Journal For Innovative Development in Pharmaceutical and Technical Science (JIDPTS)
Volume:2, Issue:12, December:2019  ISSN(O):2581-6934

D.PREETHI(1), G.KEERTHANA(2), K.RESHMA(3), Mr.A.RAJA(4)
(1)(2)(3) UG Students, (4) Assistant Professor
Department of Electronics and Communication Engineering, Saveetha School of Engineering, Chennai, Tamil Nadu

Abstract: Configurable computing has caught the imagination of many architects who want the performance of application-specific hardware combined with the reprogrammability of general-purpose computers. Unfortunately, configurable computing has had rather limited success, largely because the FPGAs on which such systems are built are better suited to implementing random logic than to computing tasks. This paper presents RaPiD, a new coarse-grained FPGA architecture that is optimized for highly repetitive, computation-intensive tasks. Very deep application-specific computation pipelines can be configured in RaPiD. These pipelines make much more efficient use of silicon than traditional FPGAs and also yield much higher performance over a wide range of applications.

Keywords: Functional unit, Systolic array, Control path, Configurable computing, Configurable pipeline

Introduction
Configurable computing promises to deliver the high performance required by computationally demanding applications while providing the flexibility and adaptability of programmed processors. Configurable computing platforms lie somewhere between ASIC solutions, which give the best performance/cost at the expense of flexibility and adaptability, and general-purpose programmable processors, which give the best flexibility at the expense of performance/cost. Unfortunately, the promise of configurable computing has yet to be realized despite some successful examples. There are two main reasons for this. First, configurable computing platforms are currently implemented using commercial FPGAs, which are efficient for implementing random logic functions but much less so for general arithmetic functions; building a multiplier out of FPGA logic incurs a performance/cost penalty of at least 100. Second, current configurable platforms are extremely difficult to program: taking an application from concept to a high-performance implementation is time-consuming, and the dream of automatic compilation from a high-level specification to a fast and efficient implementation remains unrealized.

RaPiD takes aim at these two issues in the context of computationally demanding tasks such as those found in signal-processing applications. RaPiD is a coarse-grained FPGA architecture that allows deeply pipelined computational datapaths to be constructed dynamically from a mix of ALUs, multipliers, registers and local memories. The goal of RaPiD is to compile regular computations like those found in DSP applications into both an application-specific datapath and the program for controlling that datapath. The datapath is managed using a combination of static and dynamic control signals. The static control determines the underlying structure of the datapath, which remains constant for a particular application; the dynamic control signals can change from cycle to cycle and specify the variable operations performed and the data to be used by those operations. The static control signals are generated by static RAM cells that are programmed only between applications, whereas the dynamic control is provided by a control program.
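The split between static and dynamic control can be illustrated in software. The C sketch below is a minimal, hypothetical model of one RaPiD-style functional unit: a static_config structure stands in for the SRAM cells that are written once per application, while a per-cycle control word selects the operation, in the spirit of the description above. The names and the tiny two-operation ALU are illustrative assumptions, not the actual RaPiD configuration format.

#include <stdint.h>
#include <stdio.h>

/* Static configuration: written once per application, standing in for
 * the SRAM cells that fix the structure of the datapath. */
typedef struct {
    int wired_as_multiplier;  /* 1: this unit is configured as a multiplier */
} static_config;

/* Dynamic control: may change every cycle, selecting the operation. */
typedef enum { OP_ADD, OP_SUB } dynamic_ctrl;

/* Evaluate the functional unit for one clock cycle. */
static int32_t unit_eval(const static_config *cfg, dynamic_ctrl op,
                         int32_t a, int32_t b)
{
    if (cfg->wired_as_multiplier)
        return a * b;                    /* structure fixed statically */
    return op == OP_ADD ? a + b : a - b; /* operation chosen dynamically */
}

int main(void)
{
    static_config cfg = { 0 };           /* configured once, then constant */
    dynamic_ctrl schedule[3] = { OP_ADD, OP_SUB, OP_ADD };
    for (int cycle = 0; cycle < 3; cycle++)
        printf("cycle %d: %d\n", cycle, unit_eval(&cfg, schedule[cycle], 7, 3));
    return 0;
}

Here the static_config value never changes while the loop runs, mirroring configuration bits that persist for the life of an application, whereas the schedule array plays the role of the cycle-by-cycle control program.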
RaPiD is also not suited to tasks that are unstructured, not highly repetitive, or whose control flow depends strongly on the data. The idea is that RaPiD would be integrated closely with a conventional processor core on the same chip. The processor would control the overall computation flow, farming out the heavy brute-force computation to RaPiD. The concept of RaPiD can in theory be extended to 2-D arrays of functional units; however, dynamically configuring 2-D arrays is much harder, and the underlying communication structure is much more expensive. Since most 2-D computations can be performed efficiently using a linear array, RaPiD is presently restricted to linear arrays.

The processing bandwidth provided by a RaPiD array is extremely high and scales with the size of the array. The input and output data bandwidth, however, is limited to the data-memory bandwidth, which does not scale. The amount of computation performed per I/O operation therefore bounds the amount of parallelism, and hence the speedup, an application can exhibit when implemented using RaPiD. The RaPiD design assumes that at most 3 memory accesses are made per cycle; providing even this much bandwidth requires a very high-performance memory design.

INSTRUCTION SET ARCHITECTURES AND PIPELINING:
• The MIPS instruction set was designed particularly for simple pipelining (see the decode sketch after the next section).
—All instructions are 32 bits long, so the instruction fetch stage simply has to read one word on each clock cycle.
—Fields are in the same position in different instruction formats: the opcode is always the first six bits, rs is the next 5 bits, etc. This makes things easy for the ID stage.
—MIPS is a register-to-register design, so arithmetic operations cannot contain memory references. This keeps the pipeline shorter and simpler.
• Pipelining is harder for older, more complicated instruction sets.
—If different instructions had different lengths or formats, the fetch and decode stages would need extra time to determine the actual length of each instruction and the positions of its fields.
—With memory-to-memory instructions, additional pipeline stages might be required to compute effective addresses and read memory before the EX stage.

PIPELINING:
• Single-cycle and multicycle processors permit just one instruction to be in the datapath during any given clock cycle. This leaves functional units idle for much of the time.
• A pipelined processor permits multiple instructions to execute at once, and each instruction uses a different functional unit in the datapath.
• This maximizes hardware utilization, so programs run faster.
—Our example datapath has 5 stages, and up to 5 instructions can run at the same time, so the best ideal speedup is 5 (see the arithmetic sketch below).
• One instruction can finish executing on each clock cycle, and simpler stages also result in shorter cycle times.
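Because every MIPS instruction is 32 bits with fields in fixed positions, instruction decode reduces to constant shifts and masks. The C sketch below extracts the R-type fields named above; the field layout is the standard MIPS one, while the helper names and the sample word are only illustrative.

#include <stdint.h>
#include <stdio.h>

/* Fixed MIPS field positions: the opcode is always the top 6 bits,
 * rs the next 5, and so on, so the ID stage needs no length analysis. */
static uint32_t opcode(uint32_t insn) { return (insn >> 26) & 0x3F; }
static uint32_t rs(uint32_t insn)     { return (insn >> 21) & 0x1F; }
static uint32_t rt(uint32_t insn)     { return (insn >> 16) & 0x1F; }
static uint32_t rd(uint32_t insn)     { return (insn >> 11) & 0x1F; }
static uint32_t funct(uint32_t insn)  { return insn & 0x3F; }

int main(void)
{
    uint32_t insn = 0x012A4020;  /* encodes add $t0, $t1, $t2 */
    printf("opcode=%u rs=%u rt=%u rd=%u funct=0x%x\n",
           opcode(insn), rs(insn), rt(insn), rd(insn), funct(insn));
    return 0;
}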
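The ideal speedup figure follows from simple cycle counting: n instructions take about n*k cycles on an unpipelined machine that runs one instruction at a time, but only k + (n - 1) cycles on a k-stage pipeline once it is filled. This sketch of the arithmetic assumes a hazard-free instruction stream, which real programs are not:

#include <stdio.h>

int main(void)
{
    const int k = 5;  /* pipeline stages in the example datapath */
    for (int n = 1; n <= 1000; n *= 10) {
        long unpipelined = (long)n * k;  /* one instruction at a time */
        long pipelined   = k + (n - 1);  /* fill once, then one per cycle */
        printf("n=%4d  speedup=%.2f\n", n, (double)unpipelined / pipelined);
    }
    return 0;  /* the ratio approaches k = 5 as n grows */
}

For n = 1000 the ratio is already about 4.98, which is why 5 is quoted as the best ideal speedup.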
LITERATURE SURVEY:
1. AN ILP FORMULATION UNDER RESOURCE CONSTRAINTS:
A resource-constrained scheduling problem is to complete a schedule under a given set of resources. In general, the resources given are the numbers of functional units, such as adders, multipliers, ALUs and busses. Our proposed scheduling approach includes list scheduling, ASAP, ALAP and ILP. List scheduling determines an upper bound on the solution; ASAP and ALAP determine the minimum and maximum start times of each operation (see the ASAP/ALAP sketch at the end of this survey); and ILP minimizes the number of control steps by solving an integer linear programming formulation. The ILP formulation is given here in detail. For simplicity of explanation, two assumptions are made: (1) every operation is assumed to have a one-cycle propagation delay; (2) only non-pipelined datapaths are considered.[1] Particular attention is given to the following applications: multi-cycle operations with
• non-pipelined functional units, and
• pipelined functional units;
mutually exclusive operations, functional pipelining, loop folding, and scheduling under bus constraints. Using this model, we are able to solve all the benchmarks in the literature optimally in a few seconds. Besides the model, a new technique, called Zone Scheduling (ZS), is proposed to solve large problems. ZS partitions the distribution graph into several zones and solves the resulting subproblems in sequence. A unique feature of this technique is that it schedules more than one control step at a time, permitting a more global view of the scheduling problem.[4]

2. ZONE SCHEDULING METHOD:
Zone Scheduling (ZS) solves the ILP formulation for large problems. Suppose we divide the distribution graph into n zones z1, z2, ..., zn. ZS solves the first zone, updates the distribution graph, then solves the second zone, and so on until all the operations are scheduled. The idea behind ZS is that if the range between the minimum and maximum start times of an operation falls entirely within a zone, that operation must be scheduled inside the zone; if it crosses zones, it can either be scheduled immediately or delayed to a subsequent zone. For an operation whose time frame crosses a zone boundary, a new 0-1 variable, called a delay variable, is introduced (a small classification sketch follows the ASAP/ALAP example below).

5. Fast and Near Optimal Scheduling in Automatic Data Path
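The ASAP and ALAP bounds used above can be computed with two passes over the data-flow graph, one forward and one backward, under the survey's own one-cycle-per-operation assumption. The C sketch below does this for a small hard-coded dependence graph; the graph, the five-step schedule length and all names are illustrative assumptions.

#include <stdio.h>

#define N 5       /* operations, indexed in topological order */
#define LATEST 4  /* last allowed control step (schedule length - 1) */

/* dep[i][j] = 1 when operation j consumes the result of operation i. */
static const int dep[N][N] = {
    {0,0,1,0,0},  /* op0 feeds op2 */
    {0,0,1,1,0},  /* op1 feeds op2 and op3 */
    {0,0,0,0,1},  /* op2 feeds op4 */
    {0,0,0,0,1},  /* op3 feeds op4 */
    {0,0,0,0,0},
};

int main(void)
{
    int asap[N] = {0}, alap[N];

    /* ASAP: earliest step is one past the latest predecessor. */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < j; i++)
            if (dep[i][j] && asap[i] + 1 > asap[j])
                asap[j] = asap[i] + 1;

    /* ALAP: latest step is one before the earliest successor. */
    for (int i = 0; i < N; i++)
        alap[i] = LATEST;
    for (int i = N - 1; i >= 0; i--)
        for (int j = i + 1; j < N; j++)
            if (dep[i][j] && alap[j] - 1 < alap[i])
                alap[i] = alap[j] - 1;

    /* [asap, alap] is the time frame the ILP may place each op in. */
    for (int i = 0; i < N; i++)
        printf("op%d: asap=%d alap=%d\n", i, asap[i], alap[i]);
    return 0;
}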
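The zone rule from the Zone Scheduling method can be sketched the same way: given each operation's time frame, it is either confined to one zone (and must be scheduled there) or crosses a boundary (and gets a delay variable in the ILP). A hypothetical two-zone classification over the frames produced by the previous sketch:

#include <stdio.h>

/* Illustrative partition: zone 0 covers control steps 0..1,
 * zone 1 covers steps 2..4. */
static int zone_of(int step) { return step <= 1 ? 0 : 1; }

int main(void)
{
    /* Time frames [asap, alap] taken from an ASAP/ALAP pass. */
    int asap[] = {0, 0, 1, 1, 2};
    int alap[] = {2, 2, 3, 3, 4};

    for (int i = 0; i < 5; i++) {
        if (zone_of(asap[i]) == zone_of(alap[i]))
            printf("op%d: frame inside zone %d, schedule it there\n",
                   i, zone_of(asap[i]));
        else  /* ZS may schedule it now or defer it to the next zone */
            printf("op%d: frame crosses a zone boundary, add a delay variable\n", i);
    }
    return 0;
}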