8 General Purpose Machines


Section 5 introduced the idea of a general purpose piece of hardware on which can be imposed a specific piece of software. Suppose we want to build a machine to evaluate an expression, like a + (b × c). We could build an adder for + and a multiplier for ×, and wire them together in a special purpose circuit, or we could try to build a programmable circuit. One strategy would be to build an FPGA-like circuit with lots of switches which could be configured like the special purpose circuit, and which would perform the evaluation in one cycle. An alternative would be to build a programmable circuit which can perform one (or more generally some constant number) of the operations in a single clock cycle, and to make it sequence through several steps to evaluate the expression. This machine needs some state to hold partial results from one cycle to the next.

8.1 Expression evaluation

First of all, let us make precise what an expression is, and what it is to evaluate it. For a small example, take expressions with integer values, made up of named variables and integer numerals, combined by a small range of binary operations:

    type Value = Int
    type Name  = String

    data Expr = Num Value | Var Name | Bin Expr Op Expr
    data Op   = Add | Sub | Mul | Div

The values of type Expr are things like

    Bin (Var "a") Mul (Bin (Num 2) Add (Var "b"))

but for convenience we will assume a parsing function:

    Main> parse "a*2+b"
    Bin (Var "a") Mul (Bin (Num 2) Add (Var "b"))

(Notice that the parser does not try to get operator precedence right; you have to use parentheses.) An expression has a value once values have been assigned to all its free variables.
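To make the precedence remark concrete, here is a small sketch contrasting the two possible readings of "a*2+b". The `deriving Show` clauses and the `main` driver are additions for illustration; the notes' own parser produces the first, right-nested tree.

```haskell
-- Two readings of "a*2+b"; the notes' parser takes no account of
-- precedence, so it yields the first, and you must write "(a*2)+b"
-- to get the second.
type Value = Int
type Name  = String

data Expr = Num Value | Var Name | Bin Expr Op Expr deriving Show
data Op   = Add | Sub | Mul | Div deriving Show

main :: IO ()
main = do
  print (Bin (Var "a") Mul (Bin (Num 2) Add (Var "b")))  -- a*(2+b): what parse "a*2+b" gives
  print (Bin (Bin (Var "a") Mul (Num 2)) Add (Var "b"))  -- (a*2)+b: needs explicit parentheses
```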
Suppose there is a Store indexed by Names giving the value of variables:

    type Store = [(String, Value)]

then a straightforward evaluator can assign a value to an expression by recursion on the structure of the expression:

    eval :: Store -> Expr -> Value
    eval store (Num n)      = n
    eval store (Var v)      = store `at` v
    eval store (Bin l op r) = operate op (eval store l) (eval store r)

    operate Add a b = a + b
    operate Sub a b = a - b
    operate Mul a b = a * b
    operate Div a b = a `div` b

We will build sequential machines whose correctness can be measured against the results of eval.

8.2 Stack evaluation

One such machine has a stack of partial results. The idea is that we write a function

    compileS :: Expr -> [Instr]

which translates expressions

    data Expr = Num Value | Var String | Bin Expr Op Expr

into a sequence of instructions

    data Instr = PushN Value | PushV String | Do Op
    data Op    = Add | Sub | Mul | Div ...

for a small machine

    exec :: Store -> [Instr] -> Value

This factorisation of eval into a compiler which flattens the structure of the expression, and a machine which executes the instructions, is the essence of some portable implementations of languages like Java or Scala. The compiler translates the source language into a bytecode, and a virtual machine interprets that bytecode.[10]

The compiler translates the expression tree into reverse Polish notation:[11]

[10] The first bytecode compiler was possibly Martin Richards' BCPL compiler in the 1960s, which produced intermediate code for the O-machine. The O-code might be interpreted, or it might be translated into the native machine code of particular hardware. The translation into native code might be all-at-once, or it might be performed just in time as each bytecode instruction is executed.

[11] (Forward) Polish notation is attributed to Jan Łukasiewicz (1878–1956) in the 1920s, as a way of writing logical formulae without parentheses by putting the operators before the operands.
RPN emerged in the 1950s and 1960s as an implementation technique.

    compileS (Num n)      = [PushN n]
    compileS (Var v)      = [PushV v]
    compileS (Bin l op r) = compileS l ++ compileS r ++ [Do op]

What makes us happy to call the exec function a 'machine'? It had better be finite state, and its next state had better be given by a function of its present state. Here is the next state function:

    type State = (Store, [Value])

    step :: State -> Instr -> State
    step (store, stack)         (PushN n) = (store, n : stack)
    step (store, stack)         (PushV v) = (store, store `at` v : stack)
    step (store, b : a : stack) (Do op)   = (store, operate op a b : stack)

Executing a sequence of instructions will push a value onto the top of the stack:

    exec store prog = answer (foldl step (store, []) prog)
      where answer (store, [v]) = v

Correctness of this machine, that is

    eval store expr = exec store (compileS expr)

can be proved by using the hypothesis

    foldl step (store, stack) (compileS expr) = (store, eval store expr : stack)

in a proof by induction on the structure of expr.

Exercises

8.1 Redefine eval and compileS as instances of the natural fold

    foldE :: (Value -> a) -> (String -> a) -> (a -> Op -> a -> a) -> Expr -> a
    foldE num var bin = f
      where f (Num n)      = num n
            f (Var v)      = var v
            f (Bin l op r) = bin (f l) op (f r)

for Expr.

8.2 Complete the proof that

    eval store expr = exec store (compileS expr)

You may find it useful to use that

    foldl f e (xs ++ ys) = foldl f (foldl f e xs) ys

8.3 Modify the stack expression compiler to reduce the number of stack locations needed by evaluating the arguments of an operation in the right order. In the case of non-commutative operators you will also have to add instructions to the machine: either a Swap instruction which swaps the two values at the top of the stack, or reversed versions of each of the instructions corresponding to a non-commutative operator.
What are the advantages and disadvantages of each of these two solutions?

8.3 Register machines

The stack machine is not obviously finite state: however, it will only use a finite amount of stack for any given (finite) expression. Moreover, the pattern of access to the stack is predetermined, so it might be helpful to treat it as a list of registers, and to work out which registers are accessed by each operation. This style of machine would have instructions

    data Instr = Load Reg Value | Move Reg Reg | Do Op Reg Reg Reg

to load a constant, to copy one register into another, and to operate on the contents of two registers leaving the answer in a third (not all of which need be distinct). The state of the state machine will be a mapping from registers to values

    type State = [(Reg, Value)]

and the step function is straightforward:

    step :: State -> Instr -> State
    step regs (Load d n)    = update regs d n
    step regs (Move d s)    = update regs d (regs `at` s)
    step regs (Do op d s t) = update regs d (operate op (regs `at` s) (regs `at` t))

Rather than identifying elements of the store by name at run time, we will allocate registers to hold the values of the named variables. The mapping from names to registers is usually called the environment

    type Env = [(String, Reg)]

and exists only at compile time. The compiler allocates registers from a (finite) list of free registers in a way that corresponds to the locations on the stack from the previous compiler, and the result is left in the first of those free registers.

    compileR :: Env -> [Reg] -> Expr -> [Instr]
    compileR env (r : free)       (Num n)      = [Load r n]
    compileR env (r : free)       (Var v)      = [Move r (env `at` v)]
    compileR env (r0 : r1 : free) (Bin l op r) =
        compileR env (r0 : r1 : free) l ++
        compileR env (r1 : free) r ++
        [Do op r0 r0 r1]

The machine has to be modified to look for the answer in the right register.
    exec :: State -> Reg -> [Instr] -> Value
    exec regs r prog = answer (foldl step regs prog)
      where answer regs = regs `at` r

The initial state of the machine must be a mapping from registers to values which matches the original store:

    regs `at` (env `at` v) = store `at` v

For example:

    Main> env
    [("a",10),("b",11),("c",12),("d",13)]
    Main> compileR env [0..9] (parse "(a+b)-(c*d)")
    [Move 0 10,Move 1 11,Do Add 0 0 1,
     Move 1 12,Move 2 13,Do Mul 1 1 2,
     Do Sub 0 0 1]
    Main> regs
    [(10,2),(11,4),(12,6),(13,8)]
    Main> exec regs 0 (compileR env [0..9] (parse "(a+b)-(c*d)"))
    -42

Code for the register machine can be improved by eliminating unnecessary moves from register to register, and by the same sort of depth minimisation as for the stack machine. Here is output from a modified compiler which minimises register use. This compiler returns (as well as the code) the identity of the register containing the answer, a list of unused registers, and the maximum number of registers used.

    Main> compileR' env [0..5] (parse "(a+b)-(c*d)")
    (0,[Do Add 0 10 11,Do Mul 1 12 13,Do Sub 0 0 1],[1,2,3,4,5],2)
    Main> exec regs 0 [Do Add 0 10 11,Do Mul 1 12 13,Do Sub 0 0 1]
    -42

The beauty of the register machine code is that it is easy to see how to construct a state machine to execute it.
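To tie the section together, here is a self-contained sketch of the evaluator, the stack machine and the register machine, checked against each other on the running example (a+b)-(c*d). The `at` and `update` association-list helpers and the `main` driver are assumptions (the notes use `at` and `update` without defining them in this extract), and the two instruction types are renamed `InstrS`/`InstrR` with constructors `DoS`/`DoR` so that both machines fit in one module.

```haskell
-- Runnable sketch of all three evaluators from this section.
-- Assumed helpers: `at` (association-list lookup) and `update`.
type Value = Int
type Store = [(String, Value)]
type Reg   = Int

data Expr = Num Value | Var String | Bin Expr Op Expr
data Op   = Add | Sub | Mul | Div

at :: Eq k => [(k, v)] -> k -> v
at table k = maybe (error "missing key") id (lookup k table)

update :: Eq k => [(k, v)] -> k -> v -> [(k, v)]
update table k v = (k, v) : filter ((/= k) . fst) table

operate :: Op -> Value -> Value -> Value
operate Add a b = a + b
operate Sub a b = a - b
operate Mul a b = a * b
operate Div a b = a `div` b

-- Direct evaluation by recursion on the expression.
eval :: Store -> Expr -> Value
eval store (Num n)      = n
eval store (Var v)      = store `at` v
eval store (Bin l op r) = operate op (eval store l) (eval store r)

-- Stack machine: flatten to reverse Polish, execute with a stack.
data InstrS = PushN Value | PushV String | DoS Op

compileS :: Expr -> [InstrS]
compileS (Num n)      = [PushN n]
compileS (Var v)      = [PushV v]
compileS (Bin l op r) = compileS l ++ compileS r ++ [DoS op]

stepS :: (Store, [Value]) -> InstrS -> (Store, [Value])
stepS (store, stack)         (PushN n) = (store, n : stack)
stepS (store, stack)         (PushV v) = (store, store `at` v : stack)
stepS (store, b : a : stack) (DoS op)  = (store, operate op a b : stack)

execS :: Store -> [InstrS] -> Value
execS store prog = answer (foldl stepS (store, []) prog)
  where answer (_, [v]) = v

-- Register machine: names are resolved to registers at compile time.
data InstrR = Load Reg Value | Move Reg Reg | DoR Op Reg Reg Reg
type Env    = [(String, Reg)]

compileR :: Env -> [Reg] -> Expr -> [InstrR]
compileR env (r : free)       (Num n)      = [Load r n]
compileR env (r : free)       (Var v)      = [Move r (env `at` v)]
compileR env (r0 : r1 : free) (Bin l op r) =
  compileR env (r0 : r1 : free) l ++ compileR env (r1 : free) r ++
  [DoR op r0 r0 r1]

stepR :: [(Reg, Value)] -> InstrR -> [(Reg, Value)]
stepR regs (Load d n)     = update regs d n
stepR regs (Move d s)     = update regs d (regs `at` s)
stepR regs (DoR op d s t) = update regs d (operate op (regs `at` s) (regs `at` t))

execR :: [(Reg, Value)] -> Reg -> [InstrR] -> Value
execR regs r prog = foldl stepR regs prog `at` r

main :: IO ()
main = do
  let env   = [("a",10),("b",11),("c",12),("d",13)]
      regs  = [(10,2),(11,4),(12,6),(13,8)]
      store = [("a",2),("b",4),("c",6),("d",8)]
      e     = Bin (Bin (Var "a") Add (Var "b")) Sub
                  (Bin (Var "c") Mul (Var "d"))    -- (a+b)-(c*d)
  print (eval store e)                             -- -42
  print (execS store (compileS e))                 -- -42
  print (execR regs 0 (compileR env [0..9] e))     -- -42
```

All three agree, as the correctness arguments in the text predict: the stack machine because of the induction hypothesis about `foldl stepS`, and the register machine because the initial `regs` matches `store` through `env`.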