Instruction Set Architecture


RISC-V: An Overview of the Instruction Set Architecture

Harry H. Porter III
Portland State University
[email protected]
January 26, 2018

The RISC-V project defines and describes a standardized Instruction Set Architecture (ISA). RISC-V is an open-source specification for computer processor architectures, not a particular chip or implementation. To date, several different groups have designed and fabricated silicon implementations of the RISC-V specifications. Based on the performance of these implementations and the growing need for interoperability among vendors, it appears that the RISC-V standard will increase in importance. This document introduces and explains the RISC-V standard, giving an informal overview of the architecture of a RISC-V processor. This document does not discuss a particular implementation; instead we discuss some of the various design options that are included in the formal RISC-V specification. This document gives the reader an initial introduction to the RISC-V design.

Date Last Modified: January 26, 2018
Available Online: www.cs.pdx.edu/~harry/riscv

Table of Contents

List of Instructions 6

Chapter 1: Introduction 11
  Introduction 11; The RISC-V Naming Conventions 13; The Usual Disclaimer / Request For Corrections 15; Document Revision History / Permission to Copy 16; Basic Terminology 17

Chapter 2: Basic Organization 20
  Main Memory 20; Little Endian 20; The Registers 23; Control and Status Registers (CSRs) 25; Alignment 26

Chapter 3: Instructions 28
  Instructions 28; The Compressed Instruction Extension 28; Instruction Encoding 31; User-Level Instructions 36; Arithmetic Instructions (ADD, SUB, …) 38; Logical Instructions (AND, OR, XOR, …) 49; Shifting Instructions (SLL, SRL, SRA, …) 52; Miscellaneous Instructions 57; Branch and Jump Instructions 65; Load and Store Instructions 78; Integer Multiply and Divide 89

Chapter 4: Floating Point Instructions 103
  Floating Point – Review of Basic Concepts 103; The Floating Point Extensions 119; Floating Point Load and Store Instructions 124; Basic Floating Point Arithmetic Instructions 127; Floating Point Conversion Instructions 133; Floating Point Move Instructions 137; Floating Point Compare and Classify Instructions 142

Chapter 5: Register and Calling Conventions 146
  Standard Usage of General Purpose Registers 146; Saving Registers Across Calls 148; Register x0 - The Zero Register (“zero”) 150; Register x1 - The Return Address (“ra”) 150; Register x2 - The Stack Pointer (“sp”) 151; Register x3 - The Global Pointer (“gp”) 152; Register x4 - The Thread Base Pointer (“tp”) 152; Registers x5-x7, x28-x31 - Temp Registers (“t0”-“t6”) 153; Registers x8, x9, x18-x27 - Saved Registers (“s0”-“s11”) 154; Register x8 - Frame Pointer (“fp”) 154; Registers x10-x17 - Argument Registers (“a0”-“a7”) 155; Calling Conventions for Arguments 156; Floating Point Registers 158

Chapter 6: Compressed Instructions 160
  Compressed Instructions 160; Instruction Formats 161; Load Instructions 162; Store Instructions 168; Jump, Call, and Conditional Branch Instructions 175; Load Constant into a Register 178; Arithmetic, Shift, and Logic Instructions 180; Miscellaneous Instructions 189; Instruction Encoding 191

Chapter 7: Concurrency and Atomic Instructions 195
  Hardware Threads (HARTs) 195; The RISC-V Memory Model 196; Concurrency Control - Review and Background 198; A Review of Caching Concepts 200; RISC-V Concurrency Control 205; The FENCE Instructions 205; Load-Reserved / Store-Conditional Semantics 210; Atomic Memory Control Bits: “aq” and “rl” 213; Using LR and SC Instructions 217; Deadlock and Starvation – Background 219; Starvation and LR/SC Sequences 220; The Atomic Memory Operation (AMO) Instructions 221

Chapter 8: Privilege Modes 227
  Privilege Modes 227; Interface Terminology 230; Exceptions, Interrupts, Traps and Trap Handlers 234; Control and Status Registers (CSRs) 235; CSR Listing 237; Instructions to Read/Write the CSRs 241; Basic CSRs: CYCLE, TIME, INSTRET 246; CSR Register Mirroring 249; An Overview of Important CSRs 252; Exception Processing and Invoking a Trap Handler 257; The Status Register 268; Additional CSRs 280; System Call and Return Instructions 284

Chapter 9: Virtual Memory 288
  Physical Memory Attributes (PMAs) 288; Cache PMAs 291; PMA Enforcement 291; Virtual Memory 296; Sv32 (Two-Level Page Tables) 302; The Sv32 Page Table Algorithm 309; Sv39 (Three-Level Page Tables) 311; Sv48 (Four-Level Page Tables) 313

Chapter 10: What is Not Covered 316
  The Decimal Floating Point Extension (“L”) 316; The Bit Manipulation Extension (“B”) 316; The Dynamically Translated Languages Extension (“J”) 316; The Transactional Memory Extension (“T”) 316; The Packed SIMD Extension (“P”) 317; The Vector SIMD Extension (“V”) 317; Performance Monitoring 317; Debug/Trace Mode 318

Acronym List 319
About the Author 323

List of Instructions

Add Immediate 38; Add Immediate Word 39; Add Immediate Double 40; Add 41; Add Word 42; Add Double 42; Subtract 45; Subtract Word 46; Subtract Double 46; Sign Extend Word to Doubleword 47; Sign Extend Doubleword to Quadword 47; Negate 48; Negate Word 48; Negate Doubleword 49; And Immediate 49; And 49; Or Immediate 50; Or 50; Xor Immediate 50; Xor 51; Not 51; Shift Left Logical Immediate 52; Shift Left Logical 52; Shift Right Logical Immediate 53; Shift Right Logical 53; Shift Right Arithmetic Immediate 54; Shift Right Arithmetic 54; Shift Instructions for RV64 and RV128 56; Nop 57; Move (Register to Register) 57; Load Upper Immediate 58; Load Immediate 58; Add Upper Immediate to PC 59; Load Address 60; Set If Less Than (Signed) 61; Set Less Than Immediate (Signed) 61; Set If Greater Than (Signed) 62; Set If Less Than (Unsigned) 62; Set Less Than Immediate (Unsigned) 62; Set If Greater Than (Unsigned) 63; Set If Equal To Zero 63; Set If Not Equal To Zero 64; Set If Less Than Zero (Signed) 64; Set If Greater Than Zero (Signed) 65

Branch if Equal 65; Branch if Not Equal 67; Branch if Less Than (Signed) 67; Branch if Less Than Or Equal (Signed) 67; Branch if Greater Than (Signed) 68; Branch if Greater Than Or Equal (Signed) 68; Branch if Less Than (Unsigned) 69; Branch if Less Than Or Equal (Unsigned) 69; Branch if Greater Than (Unsigned) 70; Branch if Greater Than Or Equal (Unsigned) 70; Jump And Link (Short-Distance CALL) 71; Jump (Short-Distance) 73; Jump And Link Register 73; Jump Register 74; Return 74; Call Faraway Subroutine 75; Tail Call (Faraway Subroutine) / Long-Distance Jump 77; Load Byte (Signed) 78; Load Byte (Unsigned) 78; Load Halfword (Signed) 79; Load Halfword (Unsigned) 79; Load Word (Signed) 80; Load Word (Unsigned) 81; Load Doubleword (Signed) 81; Load Doubleword (Unsigned) 82; Load Quadword 83; Store Byte 83; Store Halfword 84; Store Word 85; Store Doubleword 85; Store Quadword 86; Multiply 89; Multiply – High Bits (Signed) 91; Multiply – High Bits (Unsigned) 91; Multiply – High Bits (Signed and Unsigned) 92; Multiply Word 94; Divide (Signed) 98; Divide (Unsigned) 98; Remainder (Signed) 99; Remainder (Unsigned) 99; Divide Word (Signed) 100; Divide Word (Unsigned) 100; Remainder Word (Signed) 101; Remainder Word (Unsigned) 101

Floating Load (Word) 124; Floating Load (Double) 125; Floating Load (Quad) 125; Floating Store (Word) 126; Floating Store (Double) 126; Floating Store (Quad) 127; Floating Add 127; Floating Subtract 128; Floating Multiply 129; Floating Divide 129; Floating Minimum 130; Floating Maximum 130; Floating Square Root 131; Floating Fused Multiply-Add 132; Floating Fused Multiply-Subtract 132; Floating Fused Multiply-Add / Subtract (Quad) 133; Floating Point Conversion (Integer to/from Single Precision) 134; Floating Point Conversion (Integer to/from Double Precision) 135; Floating Point Conversion (Integer to/from Quad Precision) 136; Floating Point Conversion (Single/Double to/from Quad Precision) 136; Floating Sign Injection (Single Precision) 137; Floating Sign Injection (Double Precision) 138; Floating Sign Injection (Double Precision) 138; Floating Move 139; Floating Negate 139; Floating Absolute Value 140; Floating Move To/From Integer Register (Single Precision) 140; Floating Move To/From Integer Register (Double Precision) 141; Floating Point Comparison 142; Floating Point Classify 144

C.LW - Load Word 162; C.LD - Load Doubleword 162; C.LQ - Load Quadword 163; C.FLW - Load Single Float 164; C.FLD - Load Double Float 164; C.LWSP - Load Word from Stack Frame 165; C.LDSP - Load Doubleword from Stack Frame 166; C.LQSP - Load Quadword from Stack Frame 166; C.FLWSP - Load Single Float from Stack Frame 167; C.FLDSP - Load Double Float from Stack Frame 168; C.SW - Store Word 168; C.SD - Store Doubleword 169; C.SQ - Store Quadword 170; C.FSW - Store Single Float 170; C.FSD - Store Double Float 171; C.SWSP - Store Word to Stack Frame 172; C.SDSP - Store Doubleword to Stack Frame 172; C.SQSP - Store Quadword to Stack Frame 173; C.FSWSP - Store Single Float to Stack Frame 173; C.FSDSP - Store Double Float to Stack Frame 174; C.J - Jump (PC-relative) 175; C.JAL - Jump and Link / Call (PC-relative) 175; C.JR - Jump Register 176; C.JALR - Jump Register and Link / Call 176; C.BEQZ - Branch if Zero (PC-relative) 177; C.BNEQZ - Branch if Not Zero (PC-relative) 177; C.LI - Load Immediate 178; C.LUI - Load Upper Immediate 179; C.ADDI - Add Immediate 180; C.ADDIW - Add Immediate
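
The table of contents above devotes a chapter to instruction encoding. As a rough illustration of the idea (our own C sketch, not an excerpt from Porter's document), the program below packs the bit fields of an RV32I R-type instruction such as add; the field layout and opcode values follow the published RV32I base specification.

    #include <stdio.h>
    #include <stdint.h>

    /* Pack an RV32I R-type instruction from its bit fields:
     *   funct7[31:25] rs2[24:20] rs1[19:15] funct3[14:12] rd[11:7] opcode[6:0] */
    static uint32_t encode_r_type(uint32_t funct7, uint32_t rs2, uint32_t rs1,
                                  uint32_t funct3, uint32_t rd, uint32_t opcode)
    {
        return (funct7 << 25) | (rs2 << 20) | (rs1 << 15) |
               (funct3 << 12) | (rd << 7) | opcode;
    }

    int main(void)
    {
        /* add x5, x6, x7  -- opcode 0x33 (OP), funct3 0, funct7 0 */
        uint32_t insn = encode_r_type(0x00, 7, 6, 0x0, 5, 0x33);
        printf("add x5, x6, x7  ->  0x%08x\n", insn);   /* prints 0x007302b3 */
        return 0;
    }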
Recommended publications
  • Binary and Hexadecimal
    Binary and Hexadecimal

    Introduction. This document introduces the binary (base two) and hexadecimal (base sixteen) number systems as a foundation for systems programming tasks and debugging. In this document, if the base of a number might be ambiguous, we use a base subscript to indicate the base of the value. For example: 453₁₀ is a base ten (decimal) value, 1101₂ is a base two (binary) value, and 8217₁₆ is a base sixteen (hexadecimal) value.

    Why do we need binary? The binary number system (also called the base two number system) is the native language of digital computers. No matter how elegantly or naturally we might interact with today's computers, phones, and other smart devices, all instructions and data are ultimately stored and manipulated in the computer as binary digits (also called bits, where a bit is either a 0 or a 1). CPUs (central processing units) and storage devices can only deal with information in this form. In a computer system, absolutely everything is represented as binary values, whether it's a program you're running, an image you're viewing, a video you're watching, or a document you're writing. Everything, including text and images, is represented in binary.

    In the "old days," before computer programming languages were available, programmers had to use sequences of binary digits to communicate directly with the computer. Today's high-level languages attempt to shield us from having to deal with binary at all. However, there are a few reasons why it's important for programmers to understand the binary number system: 1.
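    To make the relationship between the three notations concrete, here is a small C sketch of our own (not part of the excerpted document) that prints the same value in decimal, hexadecimal, and binary; the helper name print_binary is an assumption of this sketch.

        #include <stdio.h>
        #include <stdint.h>

        /* Print the 16-bit value v as a string of binary digits. */
        static void print_binary(uint16_t v)
        {
            for (int bit = 15; bit >= 0; bit--)
                putchar((v >> bit) & 1 ? '1' : '0');
        }

        int main(void)
        {
            uint16_t v = 453;                 /* 453 in decimal      */
            printf("decimal:     %u\n", v);   /* 453                 */
            printf("hexadecimal: 0x%X\n", v); /* 0x1C5               */
            printf("binary:      ");
            print_binary(v);                  /* 0000000111000101    */
            putchar('\n');
            return 0;
        }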
  • Unit – I Computer Architecture and Operating System – Scs1315
    SCHOOL OF ELECTRICAL AND ELECTRONICS, DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
    UNIT I: COMPUTER ARCHITECTURE AND OPERATING SYSTEM - SCS1315

    Unit 1: Introduction. Central Processing Unit - Introduction - General Register Organization - Stack Organization - Basic Computer Organization - Computer Registers - Computer Instructions - Instruction Cycle - Arithmetic, Logic, Shift Microoperations - Arithmetic Logic Shift Unit - Example Architectures: MIPS, PowerPC, RISC, CISC.

    Central Processing Unit. The part of the computer that performs the bulk of data-processing operations is called the central processing unit (CPU). The CPU is made up of three major parts, as shown in Figure 1 (major components of the CPU). The register set stores intermediate data used during the execution of the instructions. The arithmetic logic unit (ALU) performs the required microoperations for executing the instructions. The control unit supervises the transfer of information among the registers and instructs the ALU as to which operation to perform.

    General Register Organization. When a large number of registers are included in the CPU, it is most efficient to connect them through a common bus system. The registers communicate with each other not only for direct data transfers, but also while performing various microoperations. Hence it is necessary to provide a common unit that can perform all the arithmetic, logic, and shift microoperations in the processor. A bus organization for seven CPU registers is shown in Figure 2. The output of each register is connected to two multiplexers (MUX) to form the two buses A and B. The selection lines in each multiplexer select one register or the input data for the particular bus. The A and B buses form the inputs to a common arithmetic logic unit (ALU).
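    As a rough software analogy for this bus organization (our own sketch, not from the course notes), the C fragment below models a small register set whose multiplexer select lines choose which registers drive buses A and B before an ALU microoperation writes the result back.

        #include <stdio.h>
        #include <stdint.h>

        enum alu_op { ALU_ADD, ALU_AND, ALU_SHL };   /* a few example microoperations */

        static uint16_t reg[7];                      /* seven CPU registers R0..R6 */

        /* One register-transfer step: bus A <- reg[sel_a], bus B <- reg[sel_b],
         * reg[dest] <- ALU(bus A, bus B).  sel_a/sel_b model the MUX select lines. */
        static void micro_step(int sel_a, int sel_b, enum alu_op op, int dest)
        {
            uint16_t a = reg[sel_a], b = reg[sel_b], result = 0;
            switch (op) {
            case ALU_ADD: result = a + b;  break;    /* arithmetic microoperation */
            case ALU_AND: result = a & b;  break;    /* logic microoperation      */
            case ALU_SHL: result = a << 1; break;    /* shift microoperation      */
            }
            reg[dest] = result;
        }

        int main(void)
        {
            reg[1] = 10; reg[2] = 3;
            micro_step(1, 2, ALU_ADD, 3);            /* R3 <- R1 + R2 */
            printf("R3 = %u\n", reg[3]);             /* prints 13     */
            return 0;
        }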
  • S.D.M COLLEGE of ENGINEERING and TECHNOLOGY Sridhar Y
    VISVESVARAYA TECHNOLOGICAL UNIVERSITY, S.D.M COLLEGE OF ENGINEERING AND TECHNOLOGY
    A seminar report on CUDA, submitted by Sridhar Y (USN 2SD06CS108), 8th semester, Department of Computer Science Engineering, 2009-10.

    CERTIFICATE. Certified that the seminar work entitled "CUDA" is a bonafide work presented by Sridhar Y, bearing USN 2SD06CS108, in partial fulfillment for the award of the degree of Bachelor of Engineering in Computer Science Engineering of the Visvesvaraya Technological University, Belgaum, during the year 2009-10. The seminar report has been approved as it satisfies the academic requirements with respect to seminar work presented for the Bachelor of Engineering degree. Staff in charge / H.O.D, CSE.

    Contents: 1. Introduction 4; 2. Evolution of GPU programming and CUDA 5; 3. CUDA Structure for parallel processing 9; 4. Programming model of CUDA 10; 5. Portability and Security of the code 12; 6. Managing Threads with CUDA 14; 7. Elimination of Deadlocks in CUDA 17; 8. Data distribution among the Thread Processes in CUDA 14; 9. Challenges in CUDA for the Developers 19; 10. The Pros and Cons of CUDA 19; 11. Conclusions 21; Bibliography 21.

    Abstract. Parallel processing on multi-core processors is the industry's biggest software challenge, but the real problem is that there are too many solutions. One of them is Nvidia's Compute Unified Device Architecture (CUDA), a software platform for massively parallel high-performance computing on the powerful Graphics Processing Units (GPUs).
  • PDF with Red/Green Hyperlinks
    Elements of Programming
    Alexander Stepanov and Paul McJones
    (ab)c = a(bc)
    Semigroup Press, Palo Alto / Mountain View

    Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and the publisher was aware of a trademark claim, the designations have been printed with initial capital letters or in all capitals. The authors and publisher have taken care in the preparation of this book, but make no expressed or implied warranty of any kind and assume no responsibility for errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of the use of the information or programs contained herein.

    Copyright © 2009 Pearson Education, Inc. Portions Copyright © 2019 Alexander Stepanov and Paul McJones. All rights reserved. Printed in the United States of America. This publication is protected by copyright, and permission must be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or likewise. For information regarding permissions, request forms and the appropriate contacts within the Pearson Education Global Rights & Permissions Department, please visit www.pearsoned.com/permissions/.

    ISBN-13: 978-0-578-22214-1. First printing, June 2019.

    Contents: Preface to Authors' Edition ix; Preface xi; 1 Foundations 1; 1.1 Categories of Ideas: Entity, Species,
  • Lecture 6: Instruction Set Architecture and the 80X86
    Lecture 6: Instruction Set Architecture and the 80x86
    Professor Randy H. Katz, Computer Science 252, Spring 1996

    Review from last time:
    • Given that sales are a function of performance relative to the competition, there is tremendous investment in improving the product as reported by the performance summary.
    • Good products are created when you have good benchmarks and good ways to summarize performance. Without good benchmarks and summaries, the choice is between improving the product for real programs vs. improving the product to get more sales; sales almost always wins.
    • Time is the measure of computer performance! What about cost?

    Review: integrated circuit costs.
    IC cost = (Die cost + Testing cost + Packaging cost) / Final test yield
    Die cost = Wafer cost / (Dies per wafer × Die yield)
    Dies per wafer = π × (Wafer diameter / 2)² / Die area − π × Wafer diameter / √(2 × Die area) − Test dies
    Die yield = Wafer yield × {1 + (Defects per unit area × Die area) / α}^(−α)
    Die cost goes roughly with die area⁴.

    Review from last time: price vs. cost. [Charts showing the price breakdown (component costs, direct costs, gross margin, average discount) for minicomputers, workstations, and PCs.]

    Today: Instruction Set Architecture.
    • 1950s to 1960s: the computer architecture course was computer arithmetic.
    • 1970 to mid 1980s: the course was instruction set design, especially ISAs appropriate for compilers.
    • 1990s: the course is design of the CPU, memory system, I/O system, and multiprocessors.

    Computer architecture? "...the attributes of a [computing] system as seen by the programmer, i.e.
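    To put numbers to the cost model above, here is a short C sketch of our own; the wafer cost, diameter, defect rate, and clustering parameter are made-up example values, not figures from the lecture.

        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            const double pi = 3.14159265358979;

            /* Assumed example values -- not from the lecture notes. */
            double wafer_cost  = 1000.0;  /* dollars per wafer              */
            double wafer_diam  = 20.0;    /* cm                             */
            double die_area    = 1.0;     /* cm^2                           */
            double test_dies   = 4.0;     /* dies sacrificed for test sites */
            double defect_rate = 1.0;     /* defects per cm^2               */
            double alpha       = 4.0;     /* defect clustering parameter    */
            double wafer_yield = 1.0;

            double dies_per_wafer =
                pi * pow(wafer_diam / 2.0, 2) / die_area
                - pi * wafer_diam / sqrt(2.0 * die_area)
                - test_dies;

            double die_yield = wafer_yield
                             * pow(1.0 + defect_rate * die_area / alpha, -alpha);
            double die_cost  = wafer_cost / (dies_per_wafer * die_yield);

            printf("dies per wafer: %.0f\n", dies_per_wafer);
            printf("die yield:      %.2f\n", die_yield);
            printf("die cost:       $%.2f\n", die_cost);
            return 0;
        }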
  • Low-Power Microprocessor Based on Stack Architecture
    Low-power Microprocessor based on Stack Architecture
    Master of Science Thesis, Girish Aramanekoppa Subbarao
    Supervisors: Prof. Joachim Rodrigues, Prof. Anders Ardö
    Department of Electrical and Information Technology (LU/LTH-EIT 2015-464), Faculty of Engineering, LTH, Lund University, September 2015. http://www.eit.lth.se
    © The Department of Electrical and Information Technology, Lund University, Box 118, S-221 00 Lund, Sweden. This thesis is set in Computer Modern 10pt with the LaTeX documentation system. © Girish Aramanekoppa Subbarao 2015. Printed in E-huset, Lund, Sweden, Sep. 2015.

    Abstract. There are many applications of microprocessors in embedded applications, where power efficiency becomes a critical requirement, e.g. wearable or mobile devices in healthcare, space instrumentation and handheld devices. One of the methods of achieving low power operation is by simplifying the device architecture. RISC/CISC processors consume considerable power because of their complexity, which is due to their multiplexer system connecting the register file to the functional units and their instruction pipeline system. On the other hand, stack machines are comparatively less complex due to their implied addressing to the top two registers of the stack and smaller operation codes. This makes the instruction and address decoder circuits simple by eliminating the multiplexer switches for the read and write ports of the register file.
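    To make the implied addressing of a stack machine concrete, here is a minimal C sketch of our own (not code from the thesis): each arithmetic operation implicitly pops its two operands from the top of the stack and pushes the result, so the instruction needs no register fields.

        #include <stdio.h>

        static int stack[64];
        static int sp = 0;                 /* points to next free slot */

        static void push(int v) { stack[sp++] = v; }
        static int  pop(void)   { return stack[--sp]; }

        /* ADD and MUL take no operand fields: they implicitly use the top two entries. */
        static void op_add(void) { int b = pop(), a = pop(); push(a + b); }
        static void op_mul(void) { int b = pop(), a = pop(); push(a * b); }

        int main(void)
        {
            /* Evaluate (2 + 3) * 4 as the stack program: PUSH 2; PUSH 3; ADD; PUSH 4; MUL */
            push(2);
            push(3);
            op_add();
            push(4);
            op_mul();
            printf("result = %d\n", pop());   /* prints 20 */
            return 0;
        }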
  • Undecidability of the Word Problem for Groups: the Point of View of Rewriting Theory
    Università degli Studi Roma Tre, Facoltà di Scienze M.F.N., Corso di Laurea in Matematica
    Master's thesis in mathematics: Undecidability of the word problem for groups: the point of view of rewriting theory
    Candidate: Matteo Acclavio. Advisors: Prof. Y. Lafont, Prof. L. Tortora de Falco.
    This thesis was written as part of the binational Master's curriculum in Logic, with the support of the Italian-French University (Vinci 2009 programme). Academic year 2011-2012, October 2012.

    "There once was a king, sitting on the sofa, he said to his maid, tell me a story, and the maid began: There once was a king, sitting on the sofa, he said to his maid, tell me a story, and the maid began: There once was a king, sitting on the sofa, he said to his maid, tell me a story, and the maid began: There once was a king, sitting on the sofa, ..." (Italian nursery rhyme)

    Even if you don't know this tale, it's easy to understand that it could continue indefinitely, but it doesn't have to. If we now want to know whether the narration will finish, this question is what is called an undecidable problem: we would need to listen to the tale until it finishes, but even if it does not, one can never say it won't stop, since it could finish later. Such things make some people lose sleep, but usually children, bored, fall asleep. More precisely, a decision problem is given by a question regarding some data that admits a negative or positive answer, for example: "is the integer number n odd?" or "does the story of the king on the sofa admit a happy ending?".
  • CS411-2015F-14 Counter Machines & Recursive Functions
    Automata Theory, CS411-2015F-14: Counter Machines & Recursive Functions
    David Galles, Department of Computer Science, University of San Francisco

    14-0: Counter Machines. Give a non-deterministic finite automaton a counter: increment the counter, decrement the counter, check to see if the counter is zero.

    14-1: Counter Machines. A counter machine M = (K, Σ, Δ, s, F): K is a set of states, Σ is the input alphabet, s ∈ K is the start state, F ⊂ K are the final states, and Δ ⊆ (K × (Σ ∪ {ε}) × {zero, ¬zero}) × (K × {−1, 0, +1}). Accept if you reach the end of the string, end in an accept state, and have an empty counter.

    14-2: Counter Machines. Give a non-deterministic finite automaton a counter (increment, decrement, test for zero). Do we have more power than a standard NFA?

    14-3 / 14-4: Counter Machines. Give a counter machine for the language aⁿbⁿ. [Diagram: two states 0 and 1; state 0 loops on (a, zero, +1) and (a, ¬zero, +1), moves to state 1 on (b, ¬zero, −1), and state 1 loops on (b, ¬zero, −1).]

    14-5 / 14-6: Counter Machines. Give a 2-counter machine for the language aⁿbⁿcⁿ. A straightforward extension: examine (and change) two counters instead of one. [Diagram: three states 0, 1, 2; state 0 loops on (a, zero, zero, +1, 0) and (a, ¬zero, zero, +1, 0), moves to state 1 on (b, ¬zero, zero, −1, +1); state 1 loops on (b, ¬zero, ¬zero, −1, +1), moves to state 2 on (c, zero, ¬zero, 0, −1); state 2 loops on (c, zero, ¬zero, 0, −1).]

    14-7: Counter Machines. Our counter machines only accept if the counter is zero. Does this give us any more power than a counter machine that accepts whenever the end of the string is reached in an accept state? That is, given
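    A minimal C simulation of the one-counter machine for aⁿbⁿ sketched above (our own code, not from the slides); it treats state 1 as the accepting state and accepts only when the input is exhausted with the counter back at zero.

        #include <stdio.h>

        /* Accepts a^n b^n (n >= 1): state 0 reads a's and increments the counter,
         * state 1 reads b's and decrements it; accept iff we end in state 1 with
         * the counter back at zero. */
        static int accepts(const char *s)
        {
            int state = 0;
            long counter = 0;

            for (; *s; s++) {
                if (state == 0 && *s == 'a') {
                    counter++;                       /* (a, zero/~zero, +1) in state 0 */
                } else if (state == 0 && *s == 'b' && counter > 0) {
                    counter--; state = 1;            /* (b, ~zero, -1): move to state 1 */
                } else if (state == 1 && *s == 'b' && counter > 0) {
                    counter--;                       /* (b, ~zero, -1) in state 1 */
                } else {
                    return 0;                        /* no transition applies: reject */
                }
            }
            return state == 1 && counter == 0;
        }

        int main(void)
        {
            printf("aabb  -> %s\n", accepts("aabb")  ? "accept" : "reject");
            printf("aabbb -> %s\n", accepts("aabbb") ? "accept" : "reject");
            return 0;
        }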
  • X86-64 Machine-Level Programming∗
    x86-64 Machine-Level Programming
    Randal E. Bryant and David R. O'Hallaron, September 9, 2005

    Intel's IA32 instruction set architecture (ISA), colloquially known as "x86", is the dominant instruction format for the world's computers. IA32 is the platform of choice for most Windows and Linux machines. The ISA we use today was defined in 1985 with the introduction of the i386 microprocessor, extending the 16-bit instruction set defined by the original 8086 to 32 bits. Even though subsequent processor generations have introduced new instruction types and formats, many compilers, including GCC, have avoided using these features in the interest of maintaining backward compatibility.

    A shift is underway to a 64-bit version of the Intel instruction set. Originally developed by Advanced Micro Devices (AMD) and named x86-64, it is now supported by high-end processors from AMD (who now call it AMD64) and by Intel, who refer to it as EM64T. Most people still refer to it as "x86-64," and we follow this convention. Newer versions of Linux and GCC support this extension. In making this switch, the developers of GCC saw an opportunity to also make use of some of the instruction-set features that had been added in more recent generations of IA32 processors. This combination of new hardware and revised compiler makes x86-64 code substantially different in form and in performance than IA32 code. In creating the 64-bit extension, the AMD engineers also adopted some of the features found in reduced-instruction-set computers (RISC) [7] that made them the favored targets for optimizing compilers.
  • A Microcontroller Based Fan Speed Control Using PID Controller Theory
    A Microcontroller Based Fan Speed Control Using PID Controller Theory
    Thu Thu Zue Zin and Hlaing Thida Oo, University of Computer Studies, Yangon. thuthuzuezin@gmail.com

    Abstract. PIC control applications are widely used and popular in modern electronics. The aim of this paper is to design and construct a fan speed control system with a microcontroller. A PIC 16F877 and a photo transistor are used for the system. A user-defined speed can be selected by keypads for various speeds. The motor is controlled by pulse width modulation. The system can operate in normal mode and timer mode. The delay time for timer mode is 10 seconds. The feedback signal from the sensor is controlled by the proportional-integral-derivative equations to reproduce the desired duty cycle.

    [Figure 1. Block diagram of the system: user-defined speed → microcontroller (PIC 16F877) → duty cycle → driver circuit → motor, with a sensor feeding back to the microcontroller.]

    1. Introduction. Motors are widely used in many applications, such as air conditioners, slide doors, washing machines and control areas. Motors deviate in their output performance due to tolerances, operating conditions, process error and measurement error. The pulse width modulation control technique is better for controlling a DC motor than other techniques. PWM controls the speed of the motor without changing the voltage supplied to the motor. A series of pulses defines the speed of the motor.

    2. Background Theory: PID Controller. A proportional-integral-derivative controller is a generic control-loop feedback mechanism widely used in industrial control systems. A PID controller attempts to correct the error between a measured process variable and a desired set point by calculating and then outputting a corrective action that can adjust
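    To illustrate the correction step the abstract describes, here is a generic PID update in C (our own sketch, not the paper's PIC firmware); the gains and speed values are placeholder numbers, and the output is clamped to a 0-100% PWM duty cycle.

        #include <stdio.h>

        /* One PID update: turns the error between desired and measured speed
         * into a PWM duty cycle (0..100 %).  Gains are illustrative only. */
        typedef struct { double kp, ki, kd, integral, prev_error; } pid_state;

        static double pid_update(pid_state *pid, double setpoint, double measured, double dt)
        {
            double error = setpoint - measured;
            pid->integral += error * dt;                          /* I term accumulates     */
            double derivative = (error - pid->prev_error) / dt;   /* D term: rate of change */
            pid->prev_error = error;

            double duty = pid->kp * error + pid->ki * pid->integral + pid->kd * derivative;
            if (duty > 100.0) duty = 100.0;                       /* clamp to PWM range */
            if (duty < 0.0)   duty = 0.0;
            return duty;
        }

        int main(void)
        {
            pid_state pid = { .kp = 0.05, .ki = 0.01, .kd = 0.001,
                              .integral = 0.0, .prev_error = 0.0 };
            double duty = pid_update(&pid, 2000.0 /* rpm wanted */,
                                           1500.0 /* rpm measured */, 0.1);
            printf("duty cycle: %.1f %%\n", duty);                /* ~30.5 with these gains */
            return 0;
        }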
  • Exposing Native Device Apis to Web Apps
    Exposing Native Device APIs to Web Apps
    Arno Puder (San Francisco State University, Computer Science Department, 1600 Holloway Avenue, San Francisco, CA 94132, [email protected]); Nikolai Tillmann (Microsoft Research, One Microsoft Way, Redmond, WA 98052, [email protected]); Michał Moskal (Microsoft Research, One Microsoft Way, Redmond, WA 98052, [email protected])

    ABSTRACT. A recent survey among developers revealed that half plan to use HTML5 for mobile apps in the future. An earlier survey showed that access to native device APIs is the biggest shortcoming of HTML5 compared to native apps. Several different approaches exist to overcome this limitation, among them cross-compilation and packaging the HTML5 as a native app. In this paper we propose a novel approach by using a device-local service that runs on the smartphone and that acts as a gateway to the native layer for HTML5-based apps running inside the standard browser. WebSockets are used for bi-directional communication between the web apps and the device-local service.

    ...language of the respective platform, HTML5 technologies gain traction for the development of mobile apps [12, 4]. A recent survey among developers revealed that more than half (52%) are using HTML5 technologies for developing mobile apps [22]. HTML5 technologies enable the reuse of the presentation layer and high-level logic across multiple platforms. However, an earlier survey [21] on cross-platform developer tools revealed that access to native device APIs is the biggest shortcoming of HTML5 compared to native apps. Several different approaches exist to overcome this limitation. Besides the development of native apps for specific platforms, popular approaches include cross-platform
  • Copyright © 1998, by the Author(S). All Rights Reserved
    Copyright © 1998, by the author(s). All rights reserved. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission.

    WHAT'S DECIDABLE ABOUT HYBRID AUTOMATA, by Thomas A. Henzinger, Peter W. Kopke, Anuj Puri, and Pravin Varaiya. Memorandum No. UCB/ERL M98/22, 15 April 1998. Electronics Research Laboratory, College of Engineering, University of California, Berkeley 94720.

    Abstract. Hybrid automata model systems with both digital and analog components, such as embedded control programs. Many verification tasks for such programs can be expressed as reachability problems for hybrid automata. By improving on previous decidability and undecidability results, we identify a precise boundary between decidability and undecidability for the reachability problem of hybrid automata. On the positive side, we give an (optimal) PSPACE reachability algorithm for the case of initialized rectangular automata, where all analog variables follow independent trajectories within piecewise-linear envelopes and are reinitialized whenever the envelope changes. Our algorithm is based on the construction of a timed automaton that contains all reachability information about a given initialized rectangular automaton.