Counting Immutable Beans: Reference Counting Optimized for Purely Functional Programming

Total Pages: 16

File Type: PDF, Size: 1020 KB

Counting Immutable Beans: Reference Counting Optimized for Purely Functional Programming

Sebastian Ullrich, Karlsruhe Institute of Technology, Germany ([email protected])
Leonardo de Moura, Microsoft Research, USA ([email protected])

arXiv:1908.05647v3 [cs.PL] 5 Mar 2020

ABSTRACT
Most functional languages rely on some kind of garbage collection for automatic memory management. They usually eschew reference counting in favor of a tracing garbage collector, which has less bookkeeping overhead at runtime. On the other hand, having an exact reference count of each value can enable optimizations such as destructive updates. We explore these optimization opportunities in the context of an eager, purely functional programming language. We propose a new mechanism for efficiently reclaiming memory used by nonshared values, reducing stress on the global memory allocator. We describe an approach for minimizing the number of reference count updates using borrowed references and a heuristic for automatically inferring borrow annotations. We implemented all these techniques in a new compiler for an eager and purely functional programming language with support for multi-threading. Our preliminary experimental results demonstrate our approach is competitive and often outperforms state-of-the-art compilers.

CCS CONCEPTS
• Software and its engineering → Runtime environments; Garbage collection.

KEYWORDS
purely functional programming, reference counting, Lean

ACM Reference Format:
Sebastian Ullrich and Leonardo de Moura. 2020. Counting Immutable Beans: Reference Counting Optimized for Purely Functional Programming. In Proceedings of International Symposium on Implementation and Application of Functional Languages (IFL'19). ACM, New York, NY, USA, 12 pages.

1 INTRODUCTION
Although reference counting [Collins 1960] (RC) is one of the oldest memory management techniques in computer science, it is not considered a serious garbage collection technique in the functional programming community, and there is plenty of evidence it is in general inferior to tracing garbage collection algorithms. Indeed, high-performance compilers such as ocamlopt and GHC use tracing garbage collectors. Nonetheless, implementations of several popular programming languages, e.g., Swift, Objective-C, Python, and Perl, use reference counting as a memory management technique. Reference counting is often praised for its simplicity, but many disadvantages are frequently reported in the literature [Jones and Lins 1996; Wilson 1992]. First, incrementing and decrementing reference counts every time a reference is created or destroyed can significantly impact performance because they not only take time but also affect cache performance, especially in a multi-threaded program [Choi et al. 2018]. Second, reference counting cannot collect circular [McBeth 1963] or self-referential structures. Finally, in most reference counting implementations, pause times are deterministic but may still be unbounded [Boehm 2004].

In this paper, we investigate whether reference counting is a competitive memory management technique for purely functional languages, and explore optimizations for reusing memory, performing destructive updates, and for minimizing the number of reference count increments and decrements. The former optimizations in particular are beneficial for purely functional languages that otherwise can only perform functional updates. When performing functional updates, objects often die just before the creation of an object of the same kind. We observe a similar phenomenon when we insert a new element into a pure functional data structure such as binary trees, when we use map to apply a given function to the elements of a list or tree, when a compiler applies optimizations by transforming abstract syntax trees, or when a proof assistant rewrites formulas. We call it the resurrection hypothesis: many objects die just before the creation of an object of the same kind. Our new optimization takes advantage of this hypothesis, and enables pure code to perform destructive updates in all scenarios described above when objects are not shared. We implemented all the ideas reported here in the new runtime and compiler for the Lean programming language [de Moura et al. 2015]. We also report preliminary experimental results that demonstrate our new compiler produces competitive code that often outperforms the code generated by high-performance compilers such as ocamlopt and GHC (Section 8).
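To make the resurrection hypothesis concrete, the C sketch below shows the kind of reuse such an optimization is after: a reference-counted cons cell whose count has dropped to one dies exactly as a cell of the same shape is about to be created, so a map over the list can update the cell in place instead of freeing and reallocating it. The Cell layout and helper names are hypothetical illustrations for this sketch, not the Lean runtime's actual representation.

    #include <stdlib.h>

    /* Hypothetical reference-counted cons cell (single-threaded sketch). */
    typedef struct Cell {
        int          rc;    /* reference count: the "tokens" held by owners */
        int          head;
        struct Cell *tail;  /* NULL plays the role of Nil                   */
    } Cell;

    static Cell *cell_alloc(int head, Cell *tail) {
        Cell *c = malloc(sizeof *c);
        c->rc = 1; c->head = head; c->tail = tail;
        return c;
    }

    /* "map (+1)" over a list taken as an owned reference.  An unshared cell
     * (rc == 1) dies just as a cell of the same shape is created, so it is
     * resurrected by a destructive update; a shared cell is copied instead. */
    static Cell *map_inc(Cell *xs) {
        if (xs == NULL) return NULL;
        if (xs->rc == 1) {                     /* sole owner: update in place */
            xs->head += 1;
            xs->tail  = map_inc(xs->tail);     /* reuse our ownership of tail */
            return xs;
        }
        if (xs->tail != NULL) xs->tail->rc++;  /* give the recursive call its own token */
        Cell *copy = cell_alloc(xs->head + 1, map_inc(xs->tail));
        xs->rc--;                              /* consume our token; rc stays > 0 here  */
        return copy;
    }

The rc == 1 test is the run-time counterpart of "when objects are not shared": only a sole owner may resurrect the cell it is about to abandon.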
Lean implements a version of the Calculus of Inductive Constructions [Coquand and Huet 1988; Coquand and Paulin 1990], and it has mainly been used as a proof assistant so far. Lean has a metaprogramming framework for writing proof and code automation, where users can extend Lean using Lean itself [Ebner et al. 2017]. Improving the performance of Lean metaprograms was the primary motivation for the work reported here, but one can apply the techniques reported here to general-purpose functional programming languages.

We describe our approach as a series of refinements starting from λpure, a simple intermediate representation for eager and purely functional languages (Section 3). We remark that in Lean and λpure, it is not possible to create cyclic data structures. Thus, one of the main criticisms against reference counting does not apply. From λpure, we obtain λRC by adding explicit instructions for incrementing (inc) and decrementing (dec) reference counts, and reusing memory (Section 4). The inspiration for explicit RC instructions comes from the Swift compiler, as does the notion of borrowed references. In contrast to standard (or owned) references, of which there should be exactly as many as the object's reference counter implies, borrowed references do not update the reference counter but are assumed to be kept alive by a surrounding owned reference, further minimizing the number of inc and dec instructions in generated code. We present a simple compiler from λpure to λRC, discussing heuristics for inserting destructive updates, borrow annotations, and inc/dec instructions (Section 5). Finally, we show that our approach is compatible with existing techniques for performing destructive updates on array and string values, and propose a simple and efficient approach for thread-safe reference counting (Section 7).

Contributions. We present a reference counting scheme optimized for and used by the next version of the Lean programming language.
• We describe how to reuse allocations in both user code and language primitives, and give a formal reference-counting semantics that can express this reuse.
• We describe the optimization of using borrowed references.
• We define a compiler that implements all these steps. The compiler is implemented in Lean itself and the source code is available.
• We give a simple but effective scheme for avoiding atomic reference count updates in multi-threaded programs.
• We compare the new Lean compiler incorporating these ideas with other compilers for functional languages and show its competitiveness.

2 EXAMPLES
In reference counting, each heap-allocated value contains a reference count. We view this counter as a collection of tokens. The inc instruction creates a new token and dec consumes it. When a function takes an argument as an owned reference, it is responsible for consuming one of its tokens. The function may consume the owned reference not only by using the dec instruction, but also by storing it in a newly allocated heap value, returning it, or passing it to another function that takes an owned reference. We illustrate our intermediate representation (IR) and the use of owned and borrowed references with a series of small examples.

The identity function id does not require any RC operation when it takes its argument as an owned reference.

    id x = ret x

As another example, consider the function mkPairOf that takes x and returns the pair (x, x). […] them exactly once. Now we contrast that with a function that only inspects its argument: the function isNil xs returns true if the list xs is empty and false otherwise. If the argument xs is taken as an owned reference, our compiler generates the following code

    isNil xs = case xs of
      (Nil  → dec xs; ret true)
      (Cons → dec xs; ret false)

We need the dec instructions because a function must consume all arguments taken as owned references. One may notice that decrementing xs immediately after we inspect its constructor tag is wasteful. Now assume that instead of taking the ownership of an RC token, we could borrow it from the caller. Then, the callee would not need to consume the token using an explicit dec operation. Moreover, the caller would be responsible for keeping the borrowed value alive. This is the essence of borrowed references: a borrowed reference does not actually keep the referenced value alive, but instead asserts that the value is kept alive by another, owned reference. Thus, when xs is a borrowed reference, we compile isNil into our IR as

    isNil xs = case xs of (Nil → ret true) (Cons → ret false)
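The two calling conventions can also be spelled out in C against a hypothetical reference-counted object. An owned parameter hands the callee a token it must consume on every path, which is exactly why the owned isNil above needs a dec in both branches; a borrowed parameter leaves the count untouched because the caller guarantees that some owned reference keeps the value alive. The Obj layout and the obj_inc/obj_dec helpers are illustrative assumptions, not an actual runtime API.

    #include <stdbool.h>
    #include <stdlib.h>

    typedef struct Obj {                /* hypothetical refcounted list object */
        int tag;                        /* 0 = Nil, 1 = Cons                   */
        int rc;
        struct Obj *head, *tail;
    } Obj;

    static void obj_inc(Obj *o) { o->rc++; }

    static void obj_dec(Obj *o) {       /* consume a token; free at zero       */
        if (o == NULL) return;
        if (--o->rc == 0) {
            if (o->tag == 1) { obj_dec(o->head); obj_dec(o->tail); }
            free(o);
        }
    }

    /* Owned convention: the callee receives one token for xs and must consume
     * it on every path, mirroring the dec in both branches of isNil above.   */
    static bool is_nil_owned(Obj *xs) {
        bool result = (xs->tag == 0);
        obj_dec(xs);
        return result;
    }

    /* Borrowed convention: no RC traffic at all; the caller promises that a
     * surrounding owned reference keeps xs alive for the whole call.         */
    static bool is_nil_borrowed(const Obj *xs) {
        return xs->tag == 0;
    }

    /* Caller side: passing an owned argument costs a token, so a caller that
     * still needs xs afterwards must inc first; borrowing costs nothing.     */
    static bool use_both(Obj *xs) {     /* xs is owned by this function        */
        bool a = is_nil_borrowed(xs);   /* xs stays alive: we still hold a token */
        obj_inc(xs);                    /* duplicate the token ...             */
        bool b = is_nil_owned(xs);      /* ... and let the callee consume it   */
        obj_dec(xs);                    /* finally consume the token we hold   */
        return a && b;
    }

Borrow inference (Section 5) is what lets the compiler choose the cheaper of these two signatures automatically.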
As a less trivial example, we now consider the function hasNone xs that, given a list of optional values, returns true if xs contains a None value. This function is often defined in a functional language as

    hasNone [] = false
    hasNone (None : xs) = true
    hasNone (Some x : xs) = hasNone xs

Similarly to isNil, hasNone only inspects its argument. Thus if xs is taken as a borrowed reference, our compiler produces the following RC-free IR code for it

    hasNone xs = case xs of
      (Nil  → ret false)
      (Cons → let h = proj_head xs; case h of
        (None → ret true)
        (Some → let t = proj_tail xs; let r = hasNone t; ret r))

Note that our case operation does not introduce binders. Instead, we use explicit instructions proj_i for accessing the head and tail of the Cons cell. We use suggestive names for cases and fields in these initial examples, but will later use indices instead. Our borrow inference heuristic discussed in Section 5 correctly tags xs as a borrowed parameter. When using owned references, we know at run time whether […]
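Read operationally, the borrowed hasNone is nothing more than a pointer walk: the projections become plain field reads and no reference count is touched on any path. A C rendering of that compiled shape might look as follows; the Obj layout and tag values are assumptions made for this sketch, not the IR's actual runtime representation.

    #include <stdbool.h>

    /* Hypothetical constructor objects: a list cell (Nil/Cons) whose head is an
     * option object (None/Some). The rc field is present but never touched here. */
    typedef struct Obj {
        int tag;               /* list: 0 = Nil, 1 = Cons; option: 0 = None, 1 = Some */
        int rc;
        struct Obj *field[2];  /* Cons: field[0] = head, field[1] = tail              */
    } Obj;

    /* hasNone with xs borrowed: like the RC-free IR above, the loop only reads
     * tags and projects fields; it performs no inc or dec whatsoever.           */
    static bool has_none(const Obj *xs) {
        while (xs->tag == 1) {                 /* Cons                              */
            const Obj *h = xs->field[0];       /* proj_head xs                      */
            if (h->tag == 0) return true;      /* the head is None                  */
            xs = xs->field[1];                 /* proj_tail xs; tail call as a loop */
        }
        return false;                          /* Nil                               */
    }

Had xs been owned instead, every path through the function would additionally have to consume a token for it, as in the owned isNil earlier.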
Recommended publications
  • Memory Management and Garbage Collection
    Memory Management (Compilers, CSE 304/504). Overview. Stack: data on the stack (local variables in activation records) have lifetimes that coincide with the life of a procedure call; memory for stack data is allocated on entry to procedures and de-allocated on return. Heap: data on the heap have lifetimes that may differ from the life of a procedure call; memory for heap data is allocated on demand (e.g. malloc, new, etc.) and released either manually (e.g. using free) or automatically (e.g. using a garbage collector). Memory Allocation: heap memory is divided into free and used. Free memory is kept in a data structure, usually a free list. When a new chunk of memory is needed, a chunk from the free list is returned (after marking it as used). When a chunk of memory is freed, it is added to the free list (after marking it as free). Fragmentation: free space is said to be fragmented when free chunks are not contiguous. Fragmentation is reduced by maintaining different-sized free lists (e.g. free 8-byte cells, free 16-byte cells, etc.) and allocating out of the appropriate list; if a small chunk is not available (e.g. no free 8-byte cells), grabbing a larger chunk (say, a 32-byte chunk), subdividing it (into 4 smaller chunks) and allocating; and, when a small chunk is freed, checking whether it can be merged with adjacent areas to make a larger chunk. Manual Memory Management: the programmer has full control over memory, with the responsibility to manage it well. Premature frees lead to dangling references; overly conservative frees lead to memory leaks. With manual frees it is virtually impossible to ensure that a program is correct and secure.
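    The free-list behaviour described in this excerpt can be illustrated with a minimal single-size allocator in C; the names and the fixed block size are assumptions of this sketch, not part of the course notes. Allocation pops a previously freed block if one exists and otherwise asks the system for more memory, while freeing pushes the block back onto the list; keeping one such list per block size is the "different-sized free lists" refinement mentioned above.

        #include <stdlib.h>

        #define BLOCK_SIZE 32              /* a single size class, for simplicity       */

        typedef union Block {
            union Block *next;             /* valid only while the block is free        */
            char payload[BLOCK_SIZE];
        } Block;

        static Block *free_list = NULL;    /* freed blocks, threaded through themselves */

        static void *block_alloc(void) {
            if (free_list != NULL) {       /* reuse a chunk from the free list          */
                Block *b = free_list;
                free_list = b->next;
                return b;
            }
            return malloc(sizeof(Block));  /* free list empty: grab fresh memory        */
        }

        static void block_free(void *p) {
            Block *b = p;                  /* mark as free by pushing onto the list     */
            b->next = free_list;
            free_list = b;
        }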
  • Memory Management with Explicit Regions by David Edward Gay
    Memory Management with Explicit Regions by David Edward Gay. Engineering Diploma (École Polytechnique Fédérale de Lausanne, Switzerland) 1992; M.S. (University of California, Berkeley) 1997. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California at Berkeley. Committee in charge: Professor Alex Aiken (Chair), Professor Susan L. Graham, Professor Gregory L. Fenves. Fall 2001. Abstract: Region-based memory management systems structure memory by grouping objects in regions under program control. Memory is reclaimed by deleting regions, freeing all objects stored therein. Our compiler for C with regions, RC, prevents unsafe region deletions by keeping a count of references to each region. RC's regions have advantages over explicit allocation and deallocation (safety) and traditional garbage collection (better control over memory), and its performance is competitive with both: from 6% slower to 55% faster on a collection of realistic benchmarks. Experience with these benchmarks suggests that modifying many existing programs to use regions is not difficult. An important innovation in RC is the use of type annotations that make the structure of a program's regions more explicit. These annotations also help reduce the overhead of reference counting from a maximum of 25% to a maximum of 12.6% on our benchmarks.
  • Objective C Runtime Reference
    Objective-C Runtime Reference. An object has a reference to a class: the isa pointer, and that is what every class hierarchy in Objective-C is built on. Use direct access or consult the man page. The exception handling code uses a header file implementing the generic parts of the Itanium EH ABI. Certificate pinning can be performed in an attempt to deter MITM attacks. This will cut down on the amount of Objective-C runtime information. Given an Objective-C compiler and runtime, ...
  • Lecture 15 Garbage Collection I
    Lecture 15: Garbage Collection (CS243, M. Lam, Carnegie Mellon). I. Introduction to GC (reference counting, basic trace-based GC); II. Copying Collectors; III. Break Up GC in Time (Incremental); IV. Break Up GC in Space (Partial). Readings: Ch. 7.4 - 7.7.4. Why Automatic Memory Management? Perfect management deletes exactly the dead objects and keeps the live ones; manual management may delete live objects or leave dead ones undeleted. Assume for now the target language is Java. What is Garbage? When is an Object not Reachable? Consider the mutator (the program): new/malloc creates objects; a store of p into a pointer variable or field makes an object reachable from that variable or object, while losing the old value of p may lose reachability of the object pointed to by old p and of objects reachable transitively through old p; loads; procedure calls keep incoming parameters reachable on entry and return values reachable on exit, but lose pointers on the stack frame (with transitive effect). Important property: once an object becomes unreachable, it stays unreachable! How to Find Unreachable Nodes? Reference Counting: free objects as they transition from "reachable" to "unreachable"; keep a count of pointers to each object; zero references -> not reachable. When the reference count of an object reaches 0: delete the object, subtract the reference counts of the objects it points to, and recurse if necessary. Does "not reachable" imply "zero references"? Cost: overhead for each statement that changes reference counts.
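    The reference-counting step quoted in this excerpt ("delete object, subtract reference counts of objects it points to, recurse if necessary") is straightforward to make concrete in C. The Node layout below is a hypothetical stand-in for a heap object with outgoing pointers, not code from the lecture.

        #include <stdlib.h>

        typedef struct Node {
            int rc;                        /* number of pointers to this object      */
            int n_children;
            struct Node **children;        /* outgoing pointers held by this object  */
        } Node;

        static void node_dec(Node *o) {
            if (o == NULL) return;
            if (--o->rc > 0) return;       /* still reachable through other pointers */
            /* rc reached 0: delete the object and give back the references it held,
             * recursing into any child whose count drops to 0 in turn.              */
            for (int i = 0; i < o->n_children; i++)
                node_dec(o->children[i]);
            free(o->children);
            free(o);
        }

    A single decrement can therefore trigger an arbitrarily long cascade of frees, and a cycle of objects never reaches zero at all, which is why the slides ask whether "not reachable" really implies "zero references".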
  • Linear Algebra Libraries
    Linear Algebra Libraries Claire Mouton [email protected] March 2009 Contents I Requirements 3 II CPPLapack 4 III Eigen 5 IV Flens 7 V Gmm++ 9 VI GNU Scientific Library (GSL) 11 VII IT++ 12 VIII Lapack++ 13 arXiv:1103.3020v1 [cs.MS] 15 Mar 2011 IX Matrix Template Library (MTL) 14 X PETSc 16 XI Seldon 17 XII SparseLib++ 18 XIII Template Numerical Toolkit (TNT) 19 XIV Trilinos 20 1 XV uBlas 22 XVI Other Libraries 23 XVII Links and Benchmarks 25 1 Links 25 2 Benchmarks 25 2.1 Benchmarks for Linear Algebra Libraries . ....... 25 2.2 BenchmarksincludingSeldon . 26 2.2.1 BenchmarksforDenseMatrix. 26 2.2.2 BenchmarksforSparseMatrix . 29 XVIII Appendix 30 3 Flens Overloaded Operator Performance Compared to Seldon 30 4 Flens, Seldon and Trilinos Content Comparisons 32 4.1 Available Matrix Types from Blas (Flens and Seldon) . ........ 32 4.2 Available Interfaces to Blas and Lapack Routines (Flens and Seldon) . 33 4.3 Available Interfaces to Blas and Lapack Routines (Trilinos) ......... 40 5 Flens and Seldon Synoptic Comparison 41 2 Part I Requirements This document has been written to help in the choice of a linear algebra library to be included in Verdandi, a scientific library for data assimilation. The main requirements are 1. Portability: Verdandi should compile on BSD systems, Linux, MacOS, Unix and Windows. Beyond the portability itself, this often ensures that most compilers will accept Verdandi. An obvious consequence is that all dependencies of Verdandi must be portable, especially the linear algebra library. 2. High-level interface: the dependencies should be compatible with the building of the high-level interface (e.
  • Memory Management
    Memory management (Advanced Compiler Construction, Michel Schinz, 2014-04-10). The memory of a computer is a finite resource. Typical programs use a lot of memory over their lifetime, but not all of it at the same time. The aim of memory management is to use that finite resource as efficiently as possible, according to some criterion. In general, programs dynamically allocate memory from two different areas: the stack and the heap. Since the management of the stack is trivial, the term memory management usually designates that of the heap. The memory manager: the memory manager is the part of the run time system in charge of managing heap memory. Its job consists in maintaining the set of free memory blocks (also called objects later) and to use them to fulfill allocation requests from the program. Memory deallocation can be either explicit or implicit: it is explicit when the program asks for a block to be freed, and it is implicit when the memory manager automatically tries to free unused blocks when it does not have enough free memory to satisfy an allocation request. Explicit deallocation: explicit memory deallocation presents several problems: 1. memory can be freed too early, which leads to dangling pointers, and then to data corruption, crashes, security issues, etc.; 2. memory can be freed too late (or never), which leads to space leaks. Due to these problems, most modern programming languages are designed to provide implicit deallocation, also called automatic memory management or garbage collection, even though garbage collection refers to a specific kind of automatic memory management.
  • Lecture 14 Garbage Collection I
    Lecture 14: Garbage Collection (CS243, M. Lam, Carnegie Mellon). I. Introduction to GC (reference counting, basic trace-based GC); II. Copying Collectors; III. Break Up GC in Time (Incremental); IV. Break Up GC in Space (Partial). Readings: Ch. 7.4 - 7.7.4. What is Garbage? Ideal: eliminate all dead objects; in practice: unreachable objects. Two Approaches to Garbage Collection: either catch the transition from reachable to unreachable (reference counting), or find the complement of the reachable objects, since what is not reachable cannot be found directly; the latter cannot collect a single object until all reachable objects are found, hence stop-the-world garbage collection. When is an Object not Reachable? Consider the mutator (the program): new/malloc creates objects. A store q = p, where p points to o1 and q to o2, makes o1 reachable through q; if q was the only pointer to o2, o2 loses reachability, and if o2 holds unique pointers to any object, those objects lose reachability too (this applies transitively). Loads. Procedure calls: on entry, the formal arguments keep the actual parameters reachable; on exit, the returned object stays reachable through the actual argument, while pointers on the stack frame are lost (applies transitively). Important property: once an object becomes unreachable, it stays unreachable! Reference Counting: free objects as they transition from "reachable" to "unreachable"; keep a count of pointers to each object; zero references -> not reachable. When the reference count of an object reaches 0: delete the object, subtract the reference counts of the objects it points to, and recurse if necessary. Does "not reachable" imply "zero references"? Not for cyclic data structures. Cost: overhead for each statement that changes reference counts.
  • Efficient High Frequency Checkpointing for Recovery and Debugging Vogt, D
    VU Research Portal Efficient High Frequency Checkpointing for Recovery and Debugging Vogt, D. 2019 document version Publisher's PDF, also known as Version of record Link to publication in VU Research Portal citation for published version (APA) Vogt, D. (2019). Efficient High Frequency Checkpointing for Recovery and Debugging. General rights Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights. • Users may download and print one copy of any publication from the public portal for the purpose of private study or research. • You may not further distribute the material or use it for any profit-making activity or commercial gain • You may freely distribute the URL identifying the publication in the public portal ? Take down policy If you believe that this document breaches copyright please contact us providing details, and we will remove access to the work immediately and investigate your claim. E-mail address: [email protected] Download date: 30. Sep. 2021 Efficient High Frequency Checkpointing for Recovery and Debugging Ph.D. Thesis Dirk Vogt Vrije Universiteit Amsterdam, 2019 This research was funded by the European Research Council under the ERC Advanced Grant 227874. Copyright © 2019 by Dirk Vogt ISBN 978-94-028-1388-3 Printed by Ipskamp Printing BV VRIJE UNIVERSITEIT Efficient High Frequency Checkpointing for Recovery and Debugging ACADEMISCH PROEFSCHRIFT ter verkrijging van de graad Doctor aan de Vrije Universiteit Amsterdam, op gezag van de rector magnificus prof.dr.
  • SI 413 Unit 08
    Unit 8, SI 413: Lifetime Management. We know about many examples of heap-allocated objects: objects created by new or malloc in C++; all objects in Java; everything in languages with lexical frames (e.g. Scheme). When and how should the memory for these be reclaimed? Manual Garbage Collection: in some languages (e.g. C/C++), the programmer must specify when memory is de-allocated. This is the fastest option, and can make use of the programmer's expert knowledge. Dangers: ... Automatic Garbage Collection: the goal is to reclaim memory for objects that are no longer reachable. Reachability can be either direct or indirect: a name that is in scope refers to the object (direct), or a data structure that is reachable contains a reference to the object (indirect). Should we be conservative or aggressive? Reference Counting: each object contains an integer indicating how many references there are to it. This count is updated continuously as the program proceeds. When the count falls to zero, the memory for the object is deallocated. Analysis of Reference Counting: this approach is used in filesystems and a few languages (notably PHP and Python). Advantages: memory is freed as soon as possible; program execution never needs to halt; over-counting is wasteful but not catastrophic. Disadvantages: additional code must run every time references are created, destroyed, moved, copied, etc.; cycles present a major challenge. Mark and Sweep: garbage collection is performed periodically, usually halting the program. During garbage collection, the entire set of reachable references is traversed, starting from the names currently in scope.
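    The traversal described at the end of this excerpt (visit everything reachable from the names currently in scope, then reclaim the rest) is the classic mark-and-sweep loop. The sketch below is a toy C illustration under assumed data structures (an explicit list of all allocations and an array of root pointers), not the course's implementation.

        #include <stdbool.h>
        #include <stdlib.h>

        typedef struct GcNode {
            bool marked;
            int n_children;
            struct GcNode **children;
            struct GcNode *next_alloc;    /* links every allocated node          */
        } GcNode;

        static GcNode *all_nodes = NULL;  /* every allocation, for the sweep     */

        static void mark(GcNode *n) {     /* trace everything reachable          */
            if (n == NULL || n->marked) return;
            n->marked = true;
            for (int i = 0; i < n->n_children; i++)
                mark(n->children[i]);
        }

        static void sweep(void) {         /* free whatever the mark phase missed */
            GcNode **link = &all_nodes;
            while (*link != NULL) {
                GcNode *n = *link;
                if (n->marked) {
                    n->marked = false;    /* reset for the next collection       */
                    link = &n->next_alloc;
                } else {
                    *link = n->next_alloc;
                    free(n->children);
                    free(n);
                }
            }
        }

        /* Collection halts the program: mark from the names in scope, then sweep. */
        static void collect(GcNode **roots, int n_roots) {
            for (int i = 0; i < n_roots; i++) mark(roots[i]);
            sweep();
        }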
  • Deferred Gratification: Engineering for High Performance Garbage
    Deferred Gratification: Engineering for High Performance Garbage Collection from the Get Go. Ivan Jibaja (The University of Texas at Austin), Stephen M. Blackburn (Australian National University), Mohammad R. Haghighat (Intel Corporation), Kathryn S. McKinley (The University of Texas at Austin). [email protected], [email protected], [email protected], [email protected]. Abstract: Implementing a new programming language system is a daunting task. A common trap is to punt on the design and engineering of exact garbage collection and instead opt for reference counting or conservative garbage collection (GC). For example, AppleScript(TM), Perl, Python, and PHP implementers chose reference counting (RC) and Ruby chose conservative GC. Although easier to get working, reference counting has terrible performance and conservative GC is inflexible and performs poorly when allocation rates are high. However, high performance GC is central to performance for managed languages and only becoming more critical due to relatively lower memory bandwidth and higher ... Language implementers who are serious about correctness and performance need to understand deferred gratification: paying the software engineering cost of exact GC up front will deliver correctness and memory system performance later. Categories and Subject Descriptors: D.3.4 [Programming Languages]: Processors: Memory management (garbage collection); Optimization. General Terms: Design, Experimentation, Performance, Management, Measurement, Languages. Keywords: Heap, PHP, Scripting Languages, Garbage Collection
  • Flexible Reference-Counting-Based Hardware Acceleration for Garbage Collection
    Flexible Reference-Counting-Based Hardware Acceleration for Garbage Collection. José A. Joao (HPS Research Group, University of Texas at Austin), Onur Mutlu (Computer Architecture Laboratory, Carnegie Mellon University), Yale N. Patt (HPS Research Group, University of Texas at Austin). Motivation: Garbage Collection. Garbage collection (GC) is a key feature of managed languages: it automatically frees memory blocks that are not used anymore, eliminating bugs and improving security. GC identifies dead (unreachable) objects and makes their blocks available to the memory allocator. Significant overheads: processor cycles, cache pollution, pauses/delays on the application. Software Garbage Collectors: tracing collectors recursively follow every pointer starting with global, stack and register variables, scanning each object for pointers (explicit collections that visit all live objects); reference counting tracks the number of references to each object (immediate reclamation, but expensive and unable to collect cyclic data structures); state-of-the-art: generational collectors, since young objects are more likely to die than old objects, with generations split into nursery (new) and mature (older) regions. Overhead of Garbage Collection. Hardware Garbage Collectors: hardware GC in general-purpose processors ties one GC algorithm into the ISA and the microarchitecture, has high cost due to major changes to the processor and/or memory system, misses opportunities at the software level (e.g. locality improvement), and makes a rigid trade-off: reduced flexibility for higher performance on specific applications. Transistors are available: build accelerators ...
  • Cyclic Reference Counting by Typed Reference Fields
    Computer Languages, Systems & Structures 38 (2012) 98–107. Contents lists available at SciVerse ScienceDirect; journal homepage: www.elsevier.com/locate/cl. Cyclic reference counting by typed reference fields. J. Morris Chang (a), Wei-Mei Chen (b), Paul A. Griffin (a), Ho-Yuan Cheng (b). (a) Department of Electrical and Computer Engineering, Iowa State University, Ames, IA 50011, USA; (b) Department of Electronic Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan. Article history: received 13 October 2010; received in revised form 14 July 2011; accepted 1 September 2011; available online 10 September 2011. Keywords: Java, Garbage collection, Reference counting, Memory management. Abstract: Reference counting strategy is a natural choice for real-time garbage collection, but the cycle collection phase which is required to ensure the correctness of reference counting algorithms can introduce heavy scanning overheads. This degrades the efficiency and inflates the pause time required for garbage collection. In this paper, we present two schemes to improve the efficiency of reference counting algorithms. First, in order to make better use of the semantics of a given program, we introduce a novel classification model to predict the behavior of objects precisely. Second, in order to reduce the scanning overheads, we propose an enhancement for cyclic reference counting algorithms by utilizing strongly-typed reference features of the Java language. We implement our proposed algorithm in Jikes RVM and measure the performance over various Java benchmarks. Our results show that the number of scanned objects can be reduced by an average of 37.9% during the cycle collection phase.