Formal Semantics of Heterogeneous CUDA-C: a Modular Approach with Applications
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
-
CS 2210: Theory of Computation
CS 2210: Theory of Computation, Spring 2019

Administrative Information
• Background survey
• Textbook: E. Rich, Automata, Computability, and Complexity: Theory and Applications, Prentice-Hall, 2008.
• Book website: http://www.cs.utexas.edu/~ear/cs341/automatabook/
• My office hours:
  – Monday, 6:00-8:00pm, Searles 224
  – Tuesday, 1:00-2:30pm, Searles 222
• TAs:
  – Anjulee Bhalla: hours TBA, Searles 224
  – Ryan St. Pierre: hours TBA, Searles 224

What you can expect from the course
• How to do proofs
• Models of computation
• What's the difference between computability and complexity?
• What's the Halting Problem?
• What are P and NP?
• Why do we care whether P = NP?
• What are NP-complete problems?
• Where does this make a difference outside of this class?
• How to work the answers to these questions into the conversation at a cocktail party…

What I will expect from you
• Problem sets (25%):
  – Problems given on Mondays and Wednesdays, due the next Monday, graded by the following Monday
  – A learning tool, not a testing tool
  – Collaboration encouraged; more on this below
• Quizzes (15%)
• Exams (2 non-cumulative, 30% each): closed book, closed notes, but you can bring in an 8.5 x 11 page with notes on both sides
• Class participation: tiebreaker

Other Important Things
• Go to the TA hours
• Study and work on problem sets in groups
• Collaboration levels:
  – Level 0 (in-class problems): no restrictions
  – Level 1 (homework problems): verbal collaboration, but individual write-ups
  – Level 2 (not used in this course): discussion with TAs only
  – Level 3 (exams): professor clarifications only

Right now…
• What does it mean to study the "theory" of something?
• Experience with theory in other disciplines?
• Relationship to practice? "In theory, theory and practice are the same."
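The course questions above ask what P and NP are. As a small illustration (my own sketch, not course material): an NP problem is one whose "yes" answers can be checked quickly given a certificate, even if finding that certificate may be hard. The C function below verifies a proposed Subset Sum certificate in linear time.

```c
#include <stdio.h>

/* Subset Sum is NP-complete: finding a subset that hits the target may take
 * exponential time, but checking a proposed subset (the certificate) is easy. */
static int verify_subset_sum(const int *set, const int *pick, int n, int target)
{
    long sum = 0;
    for (int i = 0; i < n; i++)
        if (pick[i])               /* pick[i] == 1 means set[i] is claimed to be in the subset */
            sum += set[i];
    return sum == target;
}

int main(void)
{
    int set[]  = { 3, 34, 4, 12, 5, 2 };
    int pick[] = { 0,  0, 1,  0, 1, 0 };   /* certificate claiming {4, 5} sums to 9 */
    printf("certificate valid: %d\n", verify_subset_sum(set, pick, 6, 9));
    return 0;
}
```
-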
The Problem with Threads
The Problem with Threads
Edward A. Lee
Electrical Engineering and Computer Sciences, University of California at Berkeley
Technical Report No. UCB/EECS-2006-1
http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.html
January 10, 2006

Copyright © 2006, by the author(s). All rights reserved. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission.

Acknowledgement: This work was supported in part by the Center for Hybrid and Embedded Software Systems (CHESS) at UC Berkeley, which receives support from the National Science Foundation (NSF award No. CCR-0225610), the State of California Micro Program, and the following companies: Agilent, DGIST, General Motors, Hewlett Packard, Infineon, Microsoft, and Toyota.

The Problem with Threads
Edward A. Lee (Professor, Chair of EE, Associate Chair of EECS)
EECS Department, University of California at Berkeley, Berkeley, CA 94720, U.S.A.
[email protected]
January 10, 2006

Abstract: Threads are a seemingly straightforward adaptation of the dominant sequential model of computation to concurrent systems. Languages require little or no syntactic changes to support threads, and operating systems and architectures have evolved to efficiently support them. Many technologists are pushing for increased use of multithreading in software in order to take advantage of the predicted increases in parallelism in computer architectures.
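Lee's central complaint is that threads make simple programs nondeterministic. A minimal sketch of the kind of hazard he describes (my example, not taken from the report): two POSIX threads increment a shared counter without synchronization, so the final value depends on how the scheduler interleaves their read-modify-write sequences.

```c
#include <pthread.h>
#include <stdio.h>

/* Shared state touched by both threads without any synchronization. */
static long counter = 0;

static void *bump(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;              /* read-modify-write: a classic data race */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Rarely prints 2000000: the interleaving of the two threads decides. */
    printf("counter = %ld\n", counter);
    return 0;
}
```

Compiled with gcc -pthread, repeated runs typically print different values below 2000000, which is exactly the nondeterminism the report argues against.
-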
Computer Architecture: Dataflow (Part I)
Computer Architecture: Dataflow (Part I)
Prof. Onur Mutlu, Carnegie Mellon University

A Note on This Lecture
• These slides are from 18-742 Fall 2012, Parallel Computer Architecture, Lecture 22: Dataflow I
• Video: http://www.youtube.com/watch?v=D2uue7izU2c&list=PL5PHm2jkkXmh4cDkC3s1VBB7-njlgiG5d&index=19

Some Required Dataflow Readings
• Dataflow at the ISA level
  – Dennis and Misunas, "A Preliminary Architecture for a Basic Data Flow Processor," ISCA 1974.
  – Arvind and Nikhil, "Executing a Program on the MIT Tagged-Token Dataflow Architecture," IEEE TC 1990.
• Restricted dataflow
  – Patt et al., "HPS, a new microarchitecture: rationale and introduction," MICRO 1985.
  – Patt et al., "Critical issues regarding HPS, a high performance microarchitecture," MICRO 1985.

Other Related Recommended Readings
• Dataflow
  – Gurd et al., "The Manchester prototype dataflow computer," CACM 1985.
  – Lee and Hurson, "Dataflow Architectures and Multithreading," IEEE Computer 1994.
• Restricted dataflow
  – Sankaralingam et al., "Exploiting ILP, TLP and DLP with the Polymorphous TRIPS Architecture," ISCA 2003.
  – Burger et al., "Scaling to the End of Silicon with EDGE Architectures," IEEE Computer 2004.

Today: Start Dataflow

Data Flow Readings: Data Flow (I)
• Dennis and Misunas, "A Preliminary Architecture for a Basic Data Flow Processor," ISCA 1974.
• Treleaven et al., "Data-Driven and Demand-Driven Computer Architecture," ACM Computing Surveys 1982.
• Veen, "Dataflow Machine Architecture," ACM Computing Surveys 1986.
• Gurd et al., "The Manchester prototype dataflow computer," CACM 1985.
• Arvind and Nikhil, "Executing a Program on the MIT Tagged-Token Dataflow Architecture," IEEE TC 1990.
-
Ece585 Lec2.Pdf
ECE 485/585 Microprocessor System Design
Lecture 2: Memory Addressing, 8086 Basics and Bus Timing, Asynchronous I/O Signaling (Basic I/O – Part I)
Zeshan Chishti, Electrical and Computer Engineering Dept, Maseeh College of Engineering and Computer Science
Source: lecture based on materials provided by Mark F.

Outline for the next few lectures
• Simple model of computation
• Memory addressing (alignment, byte order)
• 8088/8086 bus
• Asynchronous I/O signaling
• Review of basic I/O:
  – How is I/O performed? Dedicated/isolated/direct I/O ports vs. memory-mapped I/O
  – How do we tell when an I/O device is ready or a command is complete? Polling vs. interrupts
  – How do we transfer data? Programmed I/O vs. DMA

Simplified Model of a Computer
[Slide diagram: a microprocessor (fetch, decode, execute) and its data path connected by control, data, and address lines to memory and I/O devices such as keyboard, mouse, video display, printer, hard disk drive, audio card, Ethernet, WiFi, CD R/W, and DVD.]

Memory Addressing: Size of Operands
• Bytes, words, long/double words, quadwords
• 16-bit half word (Intel: word)
• 32-bit word (Intel: doubleword, dword)
• 64-bit double word (Intel: quadword, qword)
• Note: the names are non-standard; a SUN SPARC word is 32 bits and a double is 64 bits.

Alignment
• Can multi-byte operands begin at any byte address?
  – Yes: non-aligned
  – No: aligned; the low-order address bit(s) will be zero
[Slide diagrams over byte addresses 0x100-0x107, in Intel IA terms (word = 16 bits = 2 bytes): aligned word addresses end in 0, aligned doubleword addresses end in 00, aligned quadword addresses end in 000; other addresses are unaligned.]

Memory Operand Alignment: Why do we care?
• Unaligned memory references can cause multiple memory bus cycles for a single operand
• They may also span cache lines, requiring multiple evictions and multiple cache line fills
• This complicates memory system and cache controller design
• Some architectures restrict addresses to be aligned
• Even in architectures without alignment restrictions (e.g.
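A small illustration of the alignment rule above (my sketch, not from the lecture): an address is aligned for a 2-, 4-, or 8-byte operand exactly when its low 1, 2, or 3 bits are zero.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* An address is aligned for an operand of size `size` (a power of two)
 * when its low log2(size) bits are all zero. */
static int is_aligned(uintptr_t addr, size_t size)
{
    return (addr & (size - 1)) == 0;
}

int main(void)
{
    /* Addresses taken from the slide's 0x100-0x107 example range. */
    printf("0x100 as word  (2B): %d\n", is_aligned(0x100, 2));  /* 1: aligned   */
    printf("0x101 as word  (2B): %d\n", is_aligned(0x101, 2));  /* 0: unaligned */
    printf("0x104 as dword (4B): %d\n", is_aligned(0x104, 4));  /* 1: aligned   */
    printf("0x102 as qword (8B): %d\n", is_aligned(0x102, 8));  /* 0: unaligned */
    return 0;
}
```

The same mask check, addr & (size - 1), is effectively what a memory system applies when deciding whether an access fits in a single bus cycle or cache line.
-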
10. Assembly Language, Models of Computation
10. Assembly Language, Models of Computation
6.004x Computation Structures, Part 2 – Computer Architecture
Copyright © 2015 MIT EECS

Beta ISA Summary
• Storage:
  – Processor: 32 registers (r31 hardwired to 0) and the PC
  – Main memory: up to 4 GB, 32-bit words, 32-bit byte addresses, 4-byte-aligned accesses
• Instruction formats (32 bits each):
  – OPCODE | rc | ra | rb | unused
  – OPCODE | rc | ra | 16-bit signed constant
• Instruction classes:
  – ALU: two input registers, or register and constant
  – Loads and stores: access memory
  – Branches, jumps: change the program counter

Programming Languages
The 32-bit (4-byte) ADD instruction encodes as
  opcode = 100000, rc = 00100, ra = 00010, rb = 00011, unused = 00000000000
which means, to the Beta, Reg[4] ← Reg[2] + Reg[3]. We'd rather write in assembly language (today):
  ADD(R2, R3, R4)
or better yet in a high-level language (coming up):
  a = b + c;

Assembly Language
The assembler turns a symbolic representation (a source text file) into an array of bytes (binary machine language, a stream of bytes) to be loaded into memory.
• Abstracts the bit-level representation of instructions and addresses
• We'll learn UASM ("microassembler"), built into BSim
• Main elements: values, symbols, labels (symbols for addresses), macros
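To make the encoding concrete, here is a small sketch (mine, following the field layout quoted above, with opcode 0b100000 for ADD) that packs the opcode and register fields into a 32-bit Beta instruction word.

```c
#include <stdint.h>
#include <stdio.h>

/* Pack a Beta register-format instruction:
 * bits [31:26] opcode, [25:21] rc, [20:16] ra, [15:11] rb, [10:0] unused. */
static uint32_t beta_op(uint32_t opcode, uint32_t rc, uint32_t ra, uint32_t rb)
{
    return (opcode << 26) | (rc << 21) | (ra << 16) | (rb << 11);
}

int main(void)
{
    /* ADD(R2, R3, R4): opcode 0b100000 (0x20), rc = 4, ra = 2, rb = 3. */
    uint32_t insn = beta_op(0x20, 4, 2, 3);
    printf("ADD(R2, R3, R4) = 0x%08X\n", insn);  /* prints 0x80821800, the bit pattern above */
    return 0;
}
```
-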
GPU Technology Conference 2010 Sessions on Video Processing
GPU Technology Conference 2010 Sessions on Video Processing (subject to change)
IMPORTANT: Visit www.nvidia.com/gtc for the most up-to-date schedule and to enroll in sessions to ensure your spot in the most popular courses.

2095 - Building High Density Real-Time Video Processing Systems
Learn how GPU Direct can be used to effectively build real-time, high-performance, cost-effective video processing products. We will focus especially on how to optimize bus throughput while keeping CPU load and latency minimal.
Speaker: Ronny Dewaele, Barco
Topics: Video Processing, Imaging
Time: Thursday, September 23rd, 16:00 - 16:50

2029 - Computer Vision Algorithms for Automating HD Post-Production
Discover how post-production tasks can be accelerated by taking advantage of GPU-based algorithms. In this talk we present computer vision algorithms for corner detection, feature point tracking, image warping and image inpainting, and their efficient implementation on GPUs using CUDA. We also show how to use these algorithms to do real-time stabilization and temporal re-sampling (re-timing) of high definition video sequences, both common tasks in post-production. Benchmarking of the GPU implementations against optimized CPU algorithms demonstrates a speedup of approximately an order of magnitude.
Speaker: Hannes Fassold, JOANNEUM RESEARCH
Topics: Computer Vision, Video Processing
Time: Wednesday, September 22nd, 15:00 - 15:50

2125 - Developing GPU Enabled Visual Effects For Film And Video
The arrival of fully programmable GPUs is now changing the visual effects industry, which traditionally relied on CPU computation to create its spectacular imagery. Implementing the complex image processing algorithms used by VFX is a challenge, but the payoffs in terms of interactivity and throughput can be enormous.
-
Der Optimale PC (The Optimal PC)
c't magazin für computer technik, issue 25/2009, November 23, 2009. With job-listings section; www.ct.de. Cover price EUR 3.50 (Austria EUR 3.70, Switzerland CHF 6.90, Benelux EUR 4.20, Italy EUR 4.60, Spain EUR 4.60).

Cover topics: e-books and newspapers online ("The universal book": Kindle & Co. reviewed, preparing your own reading material); multifunction printers; video-editing software; TVs with LED backlight; current game consoles; affordable Android phones; Core i5 PCs and Core i7 notebooks (quad-core power); moving from XP to Windows 7; saving power on the network; scientists on the web; Linux: managing init scripts; notebook or desktop; "The optimal PC": complete systems vs. super-quiet DIY builds.

© Copyright by Heise Zeitschriften Verlag GmbH & Co. KG. Publication and reproduction only with the permission of Heise Zeitschriften Verlag.

Editorial, "Don't try this at home": "Wow, this game is really awesome!!!1111" That, or something like it, sums up the reviews in quite a few gaming magazines praising the much-debated 3D shooter Call of Duty: Modern Warfare 2 from US publisher Activision to the skies. Yet rarely has the discrepancy between the hype and … With profits this fat, the company can calmly accept youth-protection advocates storming the barricades in the coming weeks. Moreover, Activision's recipe for success could serve as a template for other publishers to cash in heavily on well-worn game ideas.
-
The Road to the Mainline ZynqMP VCU Driver
The Road to the Mainline ZynqMP VCU Driver
FOSDEM '21, Michael Tretter – [email protected], https://www.pengutronix.de

Agenda
• Xilinx Zynq® UltraScale+™ MPSoC
• H.264/H.265 Video Codec Unit
• Video encoders in mainline Linux
• VCU mainline driver: allegro
• A glimpse into the future

Xilinx Zynq® UltraScale+™ MPSoC
• ZynqMP platform overview: see Luca Ceresoli, "ARM64 + FPGA and more: Linux on the Xilinx ZynqMP", https://archive.fosdem.org/2018/schedule/event/arm64_and_fpga
• ZynqMP mainline status: mainline Linux just works, e.g. on the ZCU104 Evaluation Kit (U-Boot, Barebox, FSBL); sometimes more reliable with the Xilinx downstream tree; Xilinx is actively mainlining their drivers.
• Make sure that your ZynqMP has a VCU: part numbers have the form ZU # E V, where ZU = Zynq UltraScale+, # = value index, C/E = processor system identifier, and G/V = engine type.
• Focus on video encoding: the VCU supports video decoding as well, but the Linux mainline driver only supports encoding; decoding might be the focus of a future talk.
• Basic video encoding knowledge expected: see Paul Kocialkowski, "Supporting Hardware-Accelerated Video Encoding with Mainline", https://www.youtube.com/watch?v=S5wCdZfGFew

H.264/H.265 Video Codec Unit
• VCU documentation: hardware configuration and software usage, available on the Xilinx website.
• VCU features: the encoder engine is designed to process video streams using the HEVC (ISO/IEC 23008-2 High Efficiency Video Coding) and AVC (ISO/IEC 14496-10 Advanced Video Coding) standards. It provides complete support for these standards, including support for 8-bit and 10-bit color, Y-only (monochrome), 4:2:0 and 4:2:2 chroma formats, up to 4K UHD at 60 Hz performance.
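As a hedged sketch of what "video encoders in mainline Linux" means for an application (my example, not from the talk): the allegro driver exposes the VCU encoder as a V4L2 memory-to-memory device, so a client can open the video node and check its capabilities before queueing frames. The device path /dev/video0 is an assumption; the actual node number varies per system.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
    /* The VCU encoder typically shows up as a /dev/videoN node; /dev/video0
     * is only an assumption for this sketch. */
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct v4l2_capability cap;
    memset(&cap, 0, sizeof(cap));
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {
        perror("VIDIOC_QUERYCAP");
        close(fd);
        return 1;
    }

    printf("driver: %s, card: %s\n", (const char *)cap.driver, (const char *)cap.card);

    /* A stateful memory-to-memory encoder reports V4L2_CAP_VIDEO_M2M_MPLANE:
     * raw frames go in on the OUTPUT queue, H.264/HEVC comes back on CAPTURE. */
    if (cap.device_caps & V4L2_CAP_VIDEO_M2M_MPLANE)
        printf("looks like a mem2mem codec device\n");

    close(fd);
    return 0;
}
```

In a full pipeline this would be followed by setting the OUTPUT (raw) and CAPTURE (compressed) formats and exchanging buffers, which is the general stateful-encoder flow in V4L2.
-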
AMD Linux Driver 2021.10 Release Notes
[AMD Official Use Only - Internal Distribution Only]

AMD Linux Driver 2021.10 Release Notes

1. Overview
AMD's Linux® driver includes an open-source graphics driver for AMD's embedded platforms and other peripheral devices on selected development platforms. New features supported in this release:
1. New LTS kernel 5.10.5.
2. Bug fixes and driver updates.

2. Linux® Kernel Support
1. 5.10.5 LTS

3. Linux Distribution Support
1. Ubuntu 20.04.1

4. Component Versions
The following table shows the git commit details of the sources and binaries used in the package. The patches present in the patches folder of this release package have to be applied on top of the git commit mentioned in the table below to get the full sources corresponding to this driver release. The sources directory in this package contains these patches pre-applied to the listed commit IDs.

Component: Kernel
  Version: 5.10.5
  Commit ID: f5247949c0a9304ae43a895f29216a9d876f3919
  Source (git clone): https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
Component: Libdrm
  Version: 2.4.103
  Commit ID: 5dea8f56ee620e9a3ace34a99ebf0175efb57b11
  Source (git clone): https://github.com/freedesktop/mesa-drm.git
Component: Mesa
  Version: 21.1.0-dev
  Commit ID: 38f012e0238f145f4c83bf7abf59afceee333397
  Source (git clone): https://github.com/mesa3d/mesa.git
Component: Ddx
  Version: 19.1.0
  Commit ID: 6234a1b2652f469071c0c9b0d8b0f4a8079efe74
  Source (git clone): https://github.com/freedesktop/xorg-xf86-video-amdgpu.git
Component: Gstomx
  Version: 1.0.0.1
  Commit ID: 5c4bff4a433dff1c5d005edfceaf727b6214bb74
  Source (git clone): git://people.freedesktop.org/~leoliu/gstomx
Component: Wayland
  Version: 1.15.0
  Commit ID: ea09c2fde7fcfc7e24a19ae5c5977981e9bef
-
Actor Model of Computation
Actor Model of Computation
Carl Hewitt, http://carlhewitt.info
Published in arXiv: http://arxiv.org/abs/1008.1459
This paper is dedicated to Alonzo Church and Dana Scott.

The Actor model is a mathematical theory that treats "Actors" as the universal primitives of concurrent digital computation. The model has been used both as a framework for a theoretical understanding of concurrency, and as the theoretical basis for several practical implementations of concurrent systems. Unlike previous models of computation, the Actor model was inspired by physical laws. It was also influenced by the programming languages Lisp, Simula 67 and Smalltalk-72, as well as ideas for Petri Nets, capability-based systems and packet switching. The advent of massive concurrency through client-cloud computing and many-core computer architectures has galvanized interest in the Actor model.

An Actor is a computational entity that, in response to a message it receives, can concurrently:
• send a finite number of messages to other Actors;
• create a finite number of new Actors;
• designate the behavior to be used for the next message it receives.
There is no assumed order to the above actions and they could be carried out concurrently. In addition, two messages sent concurrently can arrive in either order. Decoupling the sender from communications sent was a fundamental advance of the Actor model, enabling asynchronous communication and control structures as patterns of passing messages.

November 7, 2010, Page 1 of 25
Contents: Introduction (p. 3); Fundamental concepts (p. 3); Illustrations (p. 3); Modularity thru direct communication and asynchrony (p. 3); Indeterminacy and quasi-commutativity (p. 4); Locality and security (…)
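A minimal sketch of the capabilities listed above (my C illustration, not Hewitt's formalism): each actor owns a mailbox and a behavior function, sending is asynchronous, and a behavior may designate a different behavior for the next message. Creation of new actors and true concurrency are omitted; a real Actor runtime would dispatch mailboxes concurrently instead of using this single-threaded loop.

```c
#include <stdio.h>

#define MAILBOX_CAP 16

struct actor;
/* A behavior handles one message; it may send messages or change the behavior. */
typedef void (*behavior_fn)(struct actor *self, int msg);

struct actor {
    const char *name;
    int mailbox[MAILBOX_CAP];
    int head, tail;
    behavior_fn behavior;    /* behavior used for the *next* message */
    struct actor *peer;      /* an acquaintance it may send messages to */
};

static void actor_send(struct actor *to, int msg)
{
    to->mailbox[to->tail++ % MAILBOX_CAP] = msg;   /* asynchronous: sender does not wait */
}

static void echo(struct actor *self, int msg)
{
    printf("%s (echo) got %d\n", self->name, msg);
}

static void doubler(struct actor *self, int msg)
{
    printf("%s (doubler) got %d, forwarding %d\n", self->name, msg, 2 * msg);
    if (self->peer)
        actor_send(self->peer, 2 * msg);   /* send a message to another actor */
    self->behavior = echo;                 /* designate behavior for the next message */
}

int main(void)
{
    struct actor b = { "B", {0}, 0, 0, echo, NULL };
    struct actor a = { "A", {0}, 0, 0, doubler, &b };
    struct actor *all[] = { &a, &b };

    actor_send(&a, 21);
    actor_send(&a, 5);

    /* Toy single-threaded dispatcher; a real runtime runs actors concurrently. */
    for (int round = 0; round < 4; round++)
        for (int i = 0; i < 2; i++) {
            struct actor *act = all[i];
            while (act->head != act->tail) {
                int msg = act->mailbox[act->head++ % MAILBOX_CAP];
                act->behavior(act, msg);
            }
        }
    return 0;
}
```
-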
Computer Architecture Lecture 1: Introduction and Basics
Computer Architecture Lecture 1: Introduction and Basics

Where we are
• "C" as a model of computation: programming, and the programmer's view of how a computer system works
• How does an assembly program end up executing as digital logic? What happens in between?
• The architect/microarchitect's view: how to design a computer that meets system design goals; the choices critically affect both the SW programmer and the HW designer
• How is a computer designed using logic gates and wires to satisfy specific goals? The HW designer's view of how a computer system works
• Digital logic as a model of computation

Levels of Transformation
• "The purpose of computing is insight" (Richard Hamming). We gain and generate insight by solving problems. How do we ensure problems are solved by electrons?
• Problem → Algorithm → Program/Language → Runtime System (VM, OS, MM) → ISA (Architecture) → Microarchitecture → Logic → Circuits → Electrons

The Power of Abstraction
• Levels of transformation create abstractions. Abstraction: a higher level only needs to know about the interface to the lower level, not how the lower level is implemented. E.g., a high-level language programmer does not really need to know what the ISA is and how a computer executes instructions.
• Abstraction improves productivity: no need to worry about decisions made in underlying levels. E.g., programming in Java vs. C vs. assembly vs. binary vs. specifying the control signals of each transistor every cycle.
• Then why would you want to know what goes on underneath or above?

Crossing the Abstraction Layers
• As long as everything goes well, not knowing what happens in the underlying level (or above) is not a problem.
-
NVIDIA's OpenGL Functionality
NVIDIA's OpenGL Functionality
Session 2127 | Room A5 | Monday, September 20th, 16:00 - 17:20
San Jose Convention Center, San Jose, California

Mark J. Kilgard
• Principal System Software Engineer: OpenGL driver, Cg ("C for graphics") shading language
• OpenGL Utility Toolkit (GLUT) implementer
• Author of OpenGL for the X Window System
• Co-author of Cg Tutorial

Outline
• OpenGL's importance to NVIDIA
• OpenGL 3.3 and 4.0
• OpenGL 4.1
• Loose ends: deprecation, Cg, further extensions

OpenGL Leverage
• Cg, Parallel Nsight, SceniX, CompleX, OptiX
• Example of hybrid rendering with OptiX: OpenGL (rasterization) combined with OptiX (ray tracing)

Parallel Nsight Provides OpenGL Profiling
• Configure application trace settings; magnified trace options show specific OpenGL (and Cg) tracing options
• A trace of mixed OpenGL and CUDA work shows glFinish and OpenGL draw calls

OpenGL In Every NVIDIA Business
OpenGL on Quadro:
• World-class OpenGL 4 drivers
• 18 years of uninterrupted API compatibility
• Workstation application certifications and profiles
• Display list optimizations
• Fast antialiased lines
• Largest memory configurations: 6 gigabytes
• GPU affinity
• Enhanced interop with CUDA and multi-GPU OpenGL
• Advanced multi-GPU rendering
• Overlays
• Genlock
• Unified back buffer for less framebuffer memory usage
• Cross-platform: Windows XP, Vista, Win7, Linux, Mac, FreeBSD, Solaris
• SLI Mosaic