Flexible Reference-Counting-Based Hardware Acceleration for Garbage Collection


Flexible Reference-Counting-Based Hardware Acceleration for Garbage Collection

José A. Joao†    Onur Mutlu§    Yale N. Patt†
†ECE Department, The University of Texas at Austin ({joao, patt}@ece.utexas.edu)
§Computer Architecture Laboratory, Carnegie Mellon University ([email protected])

ABSTRACT

Languages featuring automatic memory management (garbage collection) are increasingly used to write all kinds of applications because they provide clear software engineering and security advantages. Unfortunately, garbage collection imposes a toll on performance and introduces pause times, making such languages less attractive for high-performance or real-time applications. Much progress has been made over the last five decades to reduce the overhead of garbage collection, but it remains significant.

We propose a cooperative hardware-software technique to reduce the performance overhead of garbage collection. The key idea is to reduce the frequency of garbage collection by efficiently detecting and reusing dead memory space in hardware via hardware-implemented reference counting. Thus, even though software garbage collections are still eventually needed, they become much less frequent and have less impact on overall performance. Our technique is compatible with a variety of software garbage collection algorithms, does not break compatibility with existing software, and reduces garbage collection time by 31% on average on the Java DaCapo benchmarks running on the production build of the Jikes RVM, which uses a state-of-the-art generational garbage collector.

Categories and Subject Descriptors: C.1.0 [Processor Architectures]: General; C.5.3 [Microcomputers]: Microprocessors; D.3.4 [Processors]: Memory management (garbage collection)
General Terms: Design, Performance.
Keywords: Garbage collection, reference counting.

1. INTRODUCTION

Garbage collection (GC) is a key feature of modern "managed" languages because it relieves the programmer from the error-prone task of freeing dynamically allocated memory when memory blocks are not going to be used anymore. Without this feature that manages memory allocation/deallocation automatically, large software projects are much more susceptible to memory-leak and dangling-pointer bugs [27]. These hard-to-find bugs significantly increase the cost of developing and debugging software and can damage the quality of software systems if they are overlooked by software testers and survive into the delivered code. Thus, many recent high-level programming languages include garbage collection as a feature [5, 34].

However, garbage collection comes at a cost because it trades memory space and performance for software engineering convenience. On the one hand, memory space occupied by dead objects is not reclaimed until garbage collection is executed. On the other hand, performing garbage collections too frequently degrades performance unacceptably because each garbage collection takes a significant amount of time to find dead objects. Thus, the memory space required to run a program in a managed environment with reasonable performance is usually significantly larger than the space required by an equivalent program efficiently written with manual memory management.

Garbage collection performs two distinct functions. First, it distinguishes objects reachable from a valid pointer variable ("live objects") from unreachable objects ("dead objects"). Algorithms to determine object reachability are variations of either reference counting [15] or pointer tracing [26]. Reference counting keeps track of the number of references (pointers) to every object.¹ When this number is zero, it means the object is dead. Pointer tracing recursively follows every pointer starting with global, stack and register variables, scanning every reachable object for pointers and following them. Any object not reached by this exhaustive tracing is dead. Second, after the garbage collector identifies dead objects, it makes their memory blocks available to the memory allocator. Depending on the collection and allocation algorithms, preparing memory blocks to be reused by the memory allocator may require extra memory accesses.

A garbage collector has several sources of overhead. First, it requires processor cycles to perform its operations, which are not part of the application (mutator in GC terminology) itself. Second, while it is accessing every live object, even the ones that are not part of the current working set, it pollutes the caches. Third, it can delay the application in different ways. Stop-the-world collectors just stop the application while they are running. Concurrent collectors impose shorter pause times on the application, but require significant synchronization overhead. When memory space is tight, garbage collection frequency may increase because every time the application is unable to allocate memory for a new object, the garbage collector has to run to free the memory left by recently dead objects. Consequently, garbage collection has a potentially significant overhead that limits the applicability of managed languages when memory is not abundant and performance/responsiveness is important [7]. Examples of memory-constrained systems are embedded and mobile systems that are usually memory-constrained for space and power reasons, and highly consolidated servers that need to efficiently use the available physical memory for consolidating multiple virtual machines.

Figure 1 shows the fraction of total execution time spent on garbage collection for the Java DaCapo benchmarks with a production-quality generational garbage collector for different heap sizes, relative to the minimum heap size that can run the benchmark.² The overheads of garbage collection with tight heap sizes are very significant for several benchmarks, in the 15-55% range. For this reason, running a program with tight heaps is usually avoided, which results in over-provisioning of physical memory, thereby increasing cost and power consumption. Even with large heap sizes, the overhead of garbage collection is still significant (6.3% on average for 3x minHeap).

[Figure 1: Garbage collection overheads for DaCapo benchmarks with different heap sizes (minHeap values in MB shown in labels). The y-axis is GC time as a percentage of total execution time (0-60%); bars for 1x, 1.5x, 2x, 3x, 5x and 10x minHeap are shown for antlr (20), bloat (36), chart (40), eclipse (82), fop (28), hsqldb (171), jython (32), luindex (18), lusearch (45), pmd (36), xalan (51), and their arithmetic mean.]

Software garbage collectors provide different trade-offs in terms of memory space, performance penalty and pause times. Most of the currently used and best-performing collectors are "generational collectors" [35]. Generational collectors exploit the generational hypothesis, which states that in many cases young objects are much more likely to die than old objects. New objects are allocated into a memory region called "younger generation" or "nursery." When the nursery fills up, surviving objects are moved to the "older generation" or "mature" region.³ Generational collectors usually combine a bump-pointer allocator [4] that provides fast contiguous [...]

[...] the best for every application and memory configuration [23] and the hardware implementation has to pick one particular mechanism. A hardware implementation of a scheme that works well on average may not be the best memory management platform for many applications, and the rigid hardware implementation cannot adapt beyond simple tuning. Second, the cost of developing, verifying and scaling previously proposed full-hardware solutions is very high, because they introduce major changes to the processor [29, 30, 28, 33] or the memory [32, 36] architecture. Third, full-hardware collectors miss opportunities available at the software level, such as taking advantage of garbage collections to change the memory layout to improve locality [19, 17, 12]. In summary, hardware GC makes a rigid trade-off of reduced flexibility for higher performance on specific applications, and this trade-off is not suitable for general purpose systems.

Given the importance of garbage collection in modern languages and its performance overhead, we hypothesize that some form of hardware acceleration for GC that improves performance without eliminating the flexibility of software collection is highly desirable, especially in future processors with higher levels of resource integration.

Hardware trends. The number of transistors that can be integrated on a chip continues to increase and the current industry trend is to allocate them to extra cores on a CMP. However, Amdahl's law [3] and the difficulty of parallelizing irregular applications to a high degree are turning the attention of computer architects towards asymmetric solutions with multiple core configurations and special- [...]

¹ A reference is a link from one object to another. In this paper we use the terms reference and pointer interchangeably.
² We run the benchmarks on Jikes RVM with the large input set on an Intel Core2 Quad [...]

ISCA'09, June 20-24, 2009, Austin, Texas, USA. Copyright 2009 ACM 978-1-60558-526-0/09/06 ...$5.00.
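The introduction above contrasts reference counting with pointer tracing and motivates taking the counting work out of the mutator's critical path. As a concrete illustration of the software side of reference counting only (not of the paper's hardware mechanism), here is a minimal Java sketch; RCObject, retain, release, and storeField are invented names, and a real collector such as the one in Jikes RVM manipulates raw memory rather than Java lists.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative reference-counted heap object: a count plus its outgoing pointer fields. */
class RCObject {
    int refCount;                                        // number of incoming references
    final List<RCObject> fields = new ArrayList<>();     // this object's pointer fields

    RCObject(int numFields) {                            // fields start out as null pointers
        for (int i = 0; i < numFields; i++) fields.add(null);
    }

    static void retain(RCObject o)  { if (o != null) o.refCount++; }

    static void release(RCObject o) {
        if (o != null && --o.refCount == 0) {
            // The object is dead: its memory block could be handed back to the allocator here.
            for (RCObject child : o.fields) release(child);   // each child loses one reference
            o.fields.clear();
        }
    }

    /** Write barrier for "holder.fields[index] = newTarget": adjust both counts. */
    static void storeField(RCObject holder, int index, RCObject newTarget) {
        retain(newTarget);                               // increment first so self-stores stay safe
        release(holder.fields.set(index, newTarget));    // old target loses a reference
    }
}
```

Every pointer store pays for two count updates plus a possible recursive free. The paper's key idea is to move exactly this bookkeeping into hardware so that dead space is detected and reused continuously, and full software collections are needed much less often.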
Recommended publications
  • Memory Management and Garbage Collection
    Overview / Memory Management. Stack: data on the stack (local variables on activation records) have a lifetime that coincides with the life of a procedure call; memory for stack data is allocated on entry to procedures and de-allocated on return. Heap: data on the heap have lifetimes that may differ from the life of a procedure call; memory for heap data is allocated on demand (e.g. malloc, new, etc.) and released either manually (e.g. using free) or automatically (e.g. using a garbage collector).
    Overview / Memory Allocation. Heap memory is divided into free and used. Free memory is kept in a data structure, usually a free list. When a new chunk of memory is needed, a chunk from the free list is returned (after marking it as used). When a chunk of memory is freed, it is added to the free list (after marking it as free). A sketch of this free-list discipline follows the excerpt.
    Overview / Fragmentation. Free space is said to be fragmented when free chunks are not contiguous. Fragmentation is reduced by: maintaining different-sized free lists (e.g. free 8-byte cells, free 16-byte cells, etc.) and allocating out of the appropriate list; if a small chunk is not available (e.g. no free 8-byte cells), grabbing a larger chunk (say, a 32-byte chunk), subdividing it (into 4 smaller chunks) and allocating; when a small chunk is freed, checking if it can be merged with adjacent areas to make a larger chunk.
    Overview / Manual Memory Management. The programmer has full control over memory, with the responsibility to manage it well. Premature free's lead to dangling references; overly conservative free's lead to memory leaks. With manual free's it is virtually impossible to ensure that a program is correct and secure.
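The allocation and fragmentation slides above describe keeping free memory on a free list, splitting larger chunks, and merging adjacent chunks on free. The following toy Java sketch illustrates that discipline; the FreeListAllocator class, its first-fit policy, and all names are invented for illustration and are not taken from the excerpted course notes.

```java
import java.util.*;

/** Toy first-fit free-list allocator over offsets in a simulated heap of heapSize bytes. */
class FreeListAllocator {
    /** A free block covering [offset, offset + size). */
    record Block(int offset, int size) {}

    private final List<Block> freeList = new ArrayList<>();

    FreeListAllocator(int heapSize) {
        freeList.add(new Block(0, heapSize));    // initially the whole heap is one free block
    }

    /** Returns the offset of an allocated chunk, or -1 if no free block is large enough. */
    int alloc(int size) {
        for (int i = 0; i < freeList.size(); i++) {
            Block b = freeList.get(i);
            if (b.size() >= size) {
                freeList.remove(i);
                if (b.size() > size)             // split: keep the remainder on the free list
                    freeList.add(new Block(b.offset() + size, b.size() - size));
                return b.offset();
            }
        }
        return -1;                               // caller would trigger GC or grow the heap
    }

    /** Frees a chunk and merges it with adjacent free blocks to reduce fragmentation. */
    void free(int offset, int size) {
        int start = offset, end = offset + size;
        Iterator<Block> it = freeList.iterator();
        while (it.hasNext()) {
            Block b = it.next();
            if (b.offset() + b.size() == start) { start = b.offset(); it.remove(); }      // left neighbor
            else if (end == b.offset())         { end = b.offset() + b.size(); it.remove(); } // right neighbor
        }
        freeList.add(new Block(start, end - start));
    }
}
```

Maintaining several size-segregated lists, as the fragmentation slide suggests, would replace the single freeList with one list per size class; the split-and-merge logic stays the same.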
  • Memory Management with Explicit Regions by David Edward Gay
    Memory Management with Explicit Regions, by David Edward Gay. Engineering Diploma (Ecole Polytechnique Fédérale de Lausanne, Switzerland) 1992; M.S. (University of California, Berkeley) 1997. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California at Berkeley. Committee in charge: Professor Alex Aiken (Chair), Professor Susan L. Graham, Professor Gregory L. Fenves. Fall 2001.
    Abstract. Region-based memory management systems structure memory by grouping objects in regions under program control. Memory is reclaimed by deleting regions, freeing all objects stored therein. Our compiler for C with regions, RC, prevents unsafe region deletions by keeping a count of references to each region (a small illustrative sketch of this discipline follows the excerpt). RC's regions have advantages over explicit allocation and deallocation (safety) and traditional garbage collection (better control over memory), and its performance is competitive with both, from 6% slower to 55% faster on a collection of realistic benchmarks. Experience with these benchmarks suggests that modifying many existing programs to use regions is not difficult. An important innovation in RC is the use of type annotations that make the structure of a program's regions more explicit. These annotations also help reduce the overhead of reference counting from a maximum of 25% to a maximum of 12.6% on our benchmarks.
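The abstract above centers on one mechanism: RC refuses to delete a region while the count of references into that region is nonzero. Purely as an illustration of that discipline (RC itself is a C dialect with compiler-inserted counting; the Region class and method names below are invented), a Java sketch might look like this:

```java
import java.util.ArrayList;
import java.util.List;

/** Toy region: objects allocated into it live and die together when the region is deleted. */
class Region {
    private final List<Object> objects = new ArrayList<>();
    private int externalRefCount;                 // references into this region from outside it

    /** Allocation simply records that the object belongs to this region. */
    <T> T allocate(T obj) {
        objects.add(obj);
        return obj;
    }

    void addExternalRef()    { externalRefCount++; }
    void removeExternalRef() { externalRefCount--; }

    /** Deleting the region frees everything stored in it, but only if that is safe. */
    void delete() {
        if (externalRefCount != 0)
            throw new IllegalStateException(
                "unsafe region deletion: " + externalRefCount + " live references");
        objects.clear();                          // in RC itself, the backing memory is released in bulk
    }
}
```

Deleting a region reclaims all of its objects at once, which is what gives regions the "better control over memory" the abstract contrasts with traditional garbage collection.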
  • Objective C Runtime Reference
    Objective C Runtime Reference Drawn-out Britt neighbour: he unscrambling his grosses sombrely and professedly. Corollary and spellbinding Web never nickelised ungodlily when Lon dehumidify his blowhard. Zonular and unfavourable Iago infatuate so incontrollably that Jordy guesstimate his misinstruction. Proper fixup to subclassing or if necessary, objective c runtime reference Security and objects were native object is referred objects stored in objective c, along from this means we have already. Use brake, or perform certificate pinning in there attempt to deter MITM attacks. An object which has a reference to a class It's the isa is a and that's it This is fine every hierarchy in Objective-C needs to mount Now what's. Use direct access control the man page. This function allows us to voluntary a reference on every self object. The exception handling code uses a header file implementing the generic parts of the Itanium EH ABI. If the method is almost in the cache, thanks to Medium Members. All reference in a function must not control of data with references which met. Understanding the Objective-C Runtime Logo Table Of Contents. Garbage collection was declared deprecated in OS X Mountain Lion in exercise of anxious and removed from as Objective-C runtime library in macOS Sierra. Objective-C Runtime Reference. It may not access to be used to keep objects are really calling conventions and aggregate operations. Thank has for putting so in effort than your posts. This will cut down on the alien of Objective C runtime information. Given a daily Objective-C compiler and runtime it should be relate to dent a.
  • Lecture 15 Garbage Collection I
    Lecture 15: Garbage Collection. Outline: I. Introduction to GC (reference counting, basic trace-based GC); II. Copying Collectors; III. Break Up GC in Time (Incremental); IV. Break Up GC in Space (Partial). Readings: Ch. 7.4-7.7.4 (M. Lam, CS243: Advanced Garbage Collection).
    I. Why Automatic Memory Management? A perfect manager deletes exactly the dead objects and never the live ones; with manual management, live objects may be deleted and dead objects may be left undeleted. Assume for now the target language is Java.
    What is Garbage? / When is an Object not Reachable? The mutator (the program) affects reachability through: New/malloc (creates objects); storing p into a pointer variable or field of an object (the target becomes reachable from that variable or object, while losing the old value of p may lose reachability of the object pointed to by old p and of objects reachable transitively through old p); Load; and procedure calls (on entry, incoming parameters are kept reachable; on exit, return values are kept reachable while pointers on the stack frame are lost, with transitive effect). Important property: once an object becomes unreachable, it stays unreachable.
    How to Find Unreachable Nodes? / Reference Counting. Free objects as they transition from "reachable" to "unreachable" by keeping a count of pointers to each object. Zero references implies not reachable: when the reference count of an object reaches 0, delete the object, subtract the reference counts of the objects it points to, and recurse if necessary. (Not reachable does not always imply zero references.) Cost: overhead for each statement that changes reference counts. A reachability-tracing sketch follows the excerpt.
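The lecture excerpt above describes trace-based collection: start from the roots (globals, stack slots, registers) and follow pointers to mark everything reachable; whatever is not marked is garbage. Below is a minimal Java sketch of that marking traversal; HeapObject, traceReachable, and the root-set representation are invented for illustration, since a real collector scans raw object layouts and root maps supplied by the runtime.

```java
import java.util.*;

/** Minimal object-graph node used to illustrate trace-based reachability. */
class HeapObject {
    final List<HeapObject> references = new ArrayList<>();  // this object's pointer fields
}

class Tracer {
    /** Returns the set of objects reachable from the given roots (the "live" set). */
    static Set<HeapObject> traceReachable(Collection<HeapObject> roots) {
        Set<HeapObject> marked = Collections.newSetFromMap(new IdentityHashMap<>());
        Deque<HeapObject> worklist = new ArrayDeque<>();
        for (HeapObject root : roots)
            if (root != null) worklist.push(root);           // globals, stack slots, registers
        while (!worklist.isEmpty()) {
            HeapObject obj = worklist.pop();
            if (marked.add(obj))                             // first visit: mark, then scan fields
                for (HeapObject ref : obj.references)
                    if (ref != null) worklist.push(ref);
        }
        return marked;                                       // anything not in this set is garbage
    }
}
```

The set difference between all allocated objects and the returned live set is exactly what a trace-based collector reclaims.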
  • Linear Algebra Libraries
    Linear Algebra Libraries. Claire Mouton ([email protected]), March 2009. Contents: I Requirements; II CPPLapack; III Eigen; IV Flens; V Gmm++; VI GNU Scientific Library (GSL); VII IT++; VIII Lapack++; IX Matrix Template Library (MTL); X PETSc; XI Seldon; XII SparseLib++; XIII Template Numerical Toolkit (TNT); XIV Trilinos; XV uBlas; XVI Other Libraries; XVII Links and Benchmarks (links; benchmarks for linear algebra libraries; benchmarks including Seldon, for dense and sparse matrices); XVIII Appendix (Flens overloaded operator performance compared to Seldon; Flens, Seldon and Trilinos content comparisons; available matrix types from Blas; available interfaces to Blas and Lapack routines; Flens and Seldon synoptic comparison).
    Part I, Requirements. This document has been written to help in the choice of a linear algebra library to be included in Verdandi, a scientific library for data assimilation. The main requirements are: 1. Portability: Verdandi should compile on BSD systems, Linux, MacOS, Unix and Windows. Beyond the portability itself, this often ensures that most compilers will accept Verdandi. An obvious consequence is that all dependencies of Verdandi must be portable, especially the linear algebra library. 2. High-level interface: the dependencies should be compatible with the building of the high-level interface (e.
  • Memory Management
    Memory management. The memory of a computer is a finite resource. Typical programs use a lot of memory over their lifetime, but not all of it at the same time. The aim of memory management is to use that finite resource as efficiently as possible, according to some criterion. In general, programs dynamically allocate memory from two different areas: the stack and the heap. Since the management of the stack is trivial, the term memory management usually designates that of the heap. (Advanced Compiler Construction, Michel Schinz, 2014-04-10.)
    The memory manager. The memory manager is the part of the run time system in charge of managing heap memory. Its job consists in maintaining the set of free memory blocks (also called objects later) and using them to fulfill allocation requests from the program. Memory deallocation can be either explicit or implicit: it is explicit when the program asks for a block to be freed, and implicit when the memory manager automatically tries to free unused blocks when it does not have enough free memory to satisfy an allocation request.
    Explicit deallocation. Explicit memory deallocation presents several problems: 1. memory can be freed too early, which leads to dangling pointers, and then to data corruption, crashes, security issues, etc.; 2. memory can be freed too late, or never, which leads to space leaks. Due to these problems, most modern programming languages are designed to provide implicit deallocation, also called automatic memory management, or garbage collection, even though garbage collection refers to a specific kind of automatic memory management.
  • Lecture 14 Garbage Collection I
    Lecture 14: Garbage Collection. Outline: I. Introduction to GC (reference counting, basic trace-based GC); II. Copying Collectors; III. Break Up GC in Time (Incremental); IV. Break Up GC in Space (Partial). Readings: Ch. 7.4-7.7.4 (M. Lam, CS243).
    I. What is Garbage? Ideal: eliminate all dead objects. In practice: unreachable objects, since what is not reachable cannot be found.
    Two Approaches to Garbage Collection. Reference counting catches the transition from reachable to unreachable. Trace-based, stop-the-world collection needs to find the complement of the reachable objects and cannot collect a single object until all reachable objects are found.
    When is an Object not Reachable? The mutator (the program) affects reachability through: New/malloc (creates objects); Store, e.g. q = p where p points to o1 and q pointed to o2 (+: q now reaches o1; -: if q was the only pointer to o2, o2 loses reachability, and if o2 holds the unique pointers to any objects, they lose reachability too, applying transitively); Load; procedure calls (on entry: formal arguments keep actual parameters reachable; on exit: the actual argument keeps the returned object reachable, while pointers on the stack frame are lost, applying transitively). Important property: once an object becomes unreachable, it stays unreachable.
    Reference Counting. Free objects as they transition from "reachable" to "unreachable" by keeping a count of pointers to each object. Zero references implies not reachable: when the reference count of an object reaches 0, delete the object, subtract the reference counts of the objects it points to, and recurse if necessary. Does not-reachable imply zero references? No: cyclic data structures are the counterexample (see the small demo below). Cost: overhead for each statement that changes reference counts.
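The slides above ask whether "not reachable" implies "zero references" and answer with cyclic data structures. The self-contained Java demo below (all names invented for illustration) makes that concrete: two objects that reference each other keep nonzero counts even after the program drops its last external pointers to them.

```java
/** Self-contained demo: a two-object cycle keeps both reference counts above zero forever. */
class CycleDemo {
    static class Node {
        int refCount;
        Node next;
    }

    public static void main(String[] args) {
        Node a = new Node(); a.refCount = 1;   // referenced by the local variable 'a'
        Node b = new Node(); b.refCount = 1;   // referenced by the local variable 'b'
        a.next = b; b.refCount++;              // a -> b
        b.next = a; a.refCount++;              // b -> a (cycle closed)

        // The program drops its external references (simulating the locals going out of scope):
        a.refCount--;
        b.refCount--;
        System.out.println(a.refCount + " " + b.refCount);  // prints "1 1": neither count reaches zero,
        // so a pure reference-counting collector would never reclaim the pair; a tracing or
        // cycle-collection pass is needed for that.
    }
}
```

This is why reference-counting systems pair the counts with a backup tracing or cycle-collection pass, as the cyclic-reference-counting entry at the end of this list also discusses.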
  • Efficient High Frequency Checkpointing for Recovery and Debugging Vogt, D
    VU Research Portal. Efficient High Frequency Checkpointing for Recovery and Debugging. Vogt, D., 2019. Document version: Publisher's PDF, also known as Version of record. Link to publication in VU Research Portal. Citation for published version (APA): Vogt, D. (2019). Efficient High Frequency Checkpointing for Recovery and Debugging.
    General rights: Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners, and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights. Users may download and print one copy of any publication from the public portal for the purpose of private study or research; they may not further distribute the material or use it for any profit-making activity or commercial gain; they may freely distribute the URL identifying the publication in the public portal. Take down policy: if you believe that this document breaches copyright please contact us providing details, and we will remove access to the work immediately and investigate your claim. E-mail address: [email protected]. Download date: 30 Sep. 2021.
    Efficient High Frequency Checkpointing for Recovery and Debugging. Ph.D. Thesis, Dirk Vogt, Vrije Universiteit Amsterdam, 2019. This research was funded by the European Research Council under the ERC Advanced Grant 227874. Copyright © 2019 by Dirk Vogt. ISBN 978-94-028-1388-3. Printed by Ipskamp Printing BV. VRIJE UNIVERSITEIT. Academic dissertation (ACADEMISCH PROEFSCHRIFT) to obtain the degree of Doctor at the Vrije Universiteit Amsterdam, by authority of the rector magnificus prof.dr.
  • SI 413 Unit 08
    Unit 8, SI 413: Lifetime Management. We know about many examples of heap-allocated objects: objects created by new or malloc in C++, all objects in Java, and everything in languages with lexical frames (e.g. Scheme). When and how should the memory for these be reclaimed?
    Manual Garbage Collection. In some languages (e.g. C/C++), the programmer must specify when memory is de-allocated. This is the fastest option, and can make use of the programmer's expert knowledge. Dangers: ...
    Automatic Garbage Collection. Goal: reclaim memory for objects that are no longer reachable. Reachability can be either direct or indirect: a name that is in scope refers to the object (direct), or a data structure that is reachable contains a reference to the object (indirect). Should we be conservative or aggressive?
    Reference Counting. Each object contains an integer indicating how many references there are to it. This count is updated continuously as the program proceeds. When the count falls to zero, the memory for the object is deallocated.
    Analysis of Reference Counting. This approach is used in filesystems and a few languages (notably PHP and Python). Advantages: memory is freed as soon as possible; program execution never needs to halt; over-counting is wasteful but not catastrophic. Disadvantages: additional code must run every time references are created, destroyed, moved, copied, etc.; cycles present a major challenge.
    Mark and Sweep. Garbage collection is performed periodically, usually halting the program. During garbage collection, the entire set of reachable references is traversed, starting from the names currently in scope. A sketch of both phases follows the excerpt.
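The excerpt above contrasts continuous reference counting with mark and sweep, where collection runs periodically: mark everything reachable from the names currently in scope, then sweep the rest. The Java sketch below shows both phases over an explicit object table; MarkSweepHeap and its members are invented names, and dropping an object from the table stands in for returning its block to the allocator.

```java
import java.util.*;

/** Tiny mark-and-sweep sketch over an explicit table of every allocated object. */
class MarkSweepHeap {
    static class Obj {
        boolean marked;
        final List<Obj> fields = new ArrayList<>();
    }

    private final List<Obj> allObjects = new ArrayList<>();  // the "heap": every allocated object

    Obj allocate() {
        Obj o = new Obj();
        allObjects.add(o);
        return o;
    }

    /** Periodic stop-the-world collection: mark from the roots, then sweep the unmarked. */
    void collect(Collection<Obj> roots) {
        Deque<Obj> worklist = new ArrayDeque<>();
        for (Obj root : roots)
            if (root != null) worklist.push(root);
        while (!worklist.isEmpty()) {                        // mark phase
            Obj o = worklist.pop();
            if (!o.marked) {
                o.marked = true;
                for (Obj f : o.fields)
                    if (f != null) worklist.push(f);
            }
        }
        allObjects.removeIf(o -> !o.marked);                 // sweep phase: reclaim unmarked objects
        for (Obj o : allObjects) o.marked = false;           // clear marks for the next collection
    }
}
```

Compared with the reference-counting analysis above, the cost moves from every pointer update to periodic whole-heap pauses.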
  • Deferred Gratification: Engineering for High Performance Garbage
    Deferred Gratification: Engineering for High Performance Garbage Collection from the Get Go. Ivan Jibaja†, Stephen M. Blackburn‡, Mohammad R. Haghighat∗, Kathryn S. McKinley†. †The University of Texas at Austin, ‡Australian National University, ∗Intel Corporation. [email protected], [email protected], [email protected], [email protected]
    Abstract. Implementing a new programming language system is a daunting task. A common trap is to punt on the design and engineering of exact garbage collection and instead opt for reference counting or conservative garbage collection (GC). For example, AppleScript(TM), Perl, Python, and PHP implementers chose reference counting (RC) and Ruby chose conservative GC. Although easier to get working, reference counting has terrible performance and conservative GC is inflexible and performs poorly when allocation rates are high. However, high performance GC is central to performance for managed languages and only becoming more critical due to relatively lower memory bandwidth and higher [...] Language implementers who are serious about correctness and performance need to understand deferred gratification: paying the software engineering cost of exact GC up front will deliver correctness and memory system performance later.
    Categories and Subject Descriptors: D.3.4 [Programming Languages]: Processors - Memory management (garbage collection); Optimization. General Terms: Design, Experimentation, Performance, Management, Measurement, Languages. Keywords: Heap, PHP, Scripting Languages, Garbage Collection.
  • Flexible Reference-Counting-Based Hardware Acceleration for Garbage Collection
    Flexible Reference-Counting-Based Hardware Acceleration for Garbage Collection. José A. Joao*, Onur Mutlu‡, Yale N. Patt*. *HPS Research Group, University of Texas at Austin; ‡Computer Architecture Laboratory, Carnegie Mellon University. (Presentation slides.)
    Motivation: Garbage Collection. Garbage collection (GC) is a key feature of managed languages: it automatically frees memory blocks that are not used anymore, eliminating bugs and improving security. GC identifies dead (unreachable) objects and makes their blocks available to the memory allocator. It carries significant overheads: processor cycles, cache pollution, and pauses/delays on the application.
    Software Garbage Collectors. Tracing collectors recursively follow every pointer starting with global, stack and register variables, scanning each object for pointers; they perform explicit collections that visit all live objects. Reference counting tracks the number of references to each object and reclaims immediately, but is expensive and cannot collect cyclic data structures. State of the art: generational collectors, based on the observation that young objects are more likely to die than old objects, with generations split into nursery (new) and mature (older) regions (a schematic nursery allocation sketch follows the excerpt).
    Overhead of Garbage Collection. (Chart slide; see Figure 1 in the paper excerpt above.)
    Hardware Garbage Collectors. Hardware GC in general-purpose processors? It ties one GC algorithm into the ISA and the microarchitecture, has a high cost due to major changes to the processor and/or memory system, misses opportunities at the software level (e.g. locality improvement), and makes a rigid trade-off: reduced flexibility for higher performance on specific applications. Transistors are available: build accelerators.
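The slide excerpt summarizes generational collection: new objects go into a nursery and survivors are promoted to a mature region when the nursery fills, and the paper's introduction notes that generational collectors usually pair the nursery with a bump-pointer allocator for fast contiguous allocation. Below is a schematic Java sketch of that fast path; the class, its byte-array "memory", and all names are made up for illustration, since a real nursery manages raw memory.

```java
/** Schematic bump-pointer nursery: allocation is a bounds check plus a pointer increment. */
class BumpPointerNursery {
    private final byte[] space;   // stands in for the nursery's contiguous memory region
    private int cursor;           // next free offset (the "bump pointer")

    BumpPointerNursery(int sizeInBytes) {
        this.space = new byte[sizeInBytes];
    }

    /** Returns the offset of the new object, or -1 when the nursery is full. */
    int allocate(int bytes) {
        if (cursor + bytes > space.length)
            return -1;            // nursery full: a minor collection would run, copy survivors
                                  // to the mature region, and then reset the cursor
        int objectStart = cursor;
        cursor += bytes;          // the fast path is just this bump
        return objectStart;
    }

    /** After a nursery collection has evacuated survivors, the whole region is reusable. */
    void resetAfterCollection() {
        cursor = 0;
    }
}
```

When allocate returns -1, a generational collector runs a minor collection, moves the survivors to the mature region as the slide describes, and then reuses the entire nursery.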
  • Cyclic Reference Counting by Typed Reference Fields
    Computer Languages, Systems & Structures 38 (2012) 98-107. Contents lists available at SciVerse ScienceDirect; journal homepage: www.elsevier.com/locate/cl
    Cyclic reference counting by typed reference fields. J. Morris Chang (a), Wei-Mei Chen (b, corresponding author), Paul A. Griffin (a), Ho-Yuan Cheng (b). (a) Department of Electrical and Computer Engineering, Iowa State University, Ames, IA 50011, USA; (b) Department of Electronic Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan.
    Article history: received 13 October 2010; received in revised form 14 July 2011; accepted 1 September 2011; available online 10 September 2011. Keywords: Java, Garbage collection, Reference counting, Memory management.
    Abstract. Reference counting strategy is a natural choice for real-time garbage collection, but the cycle collection phase which is required to ensure the correctness of reference counting algorithms can introduce heavy scanning overheads. This degrades the efficiency and inflates the pause time required for garbage collection. In this paper, we present two schemes to improve the efficiency of reference counting algorithms. First, in order to make better use of the semantics of a given program, we introduce a novel classification model to predict the behavior of objects precisely. Second, in order to reduce the scanning overheads, we propose an enhancement for cyclic reference counting algorithms by utilizing the strongly-typed reference features of the Java language. We implement our proposed algorithm in Jikes RVM and measure the performance over various Java benchmarks. Our results show that the number of scanned objects can be reduced by an average of 37.9% during the cycle collection phase.