Ribbon Filter: Practically Smaller Than Bloom and Xor¹


Peter C. Dillinger, Facebook, Inc., Seattle, Washington, USA ([email protected])
Stefan Walzer, University of Cologne, Cologne, Germany ([email protected])

arXiv:2103.02515v2 [cs.DS] 8 Mar 2021

ABSTRACT

Filter data structures over-approximate a set of hashable keys, i.e. set membership queries may incorrectly come out positive. A filter with false positive rate f ∈ (0, 1] is known to require ≥ log₂(1/f) bits per key. At least for larger f ≥ 2⁻⁴, existing practical filters require a space overhead of at least 20% with respect to this information-theoretic bound.

We introduce the Ribbon filter: a new filter for static sets with a broad range of configurable space overheads and false positive rates with competitive speed over that range, especially for larger f ≥ 2⁻⁷. In many cases, Ribbon is faster than existing filters for the same space overhead, or can achieve space overhead below 10% with some additional CPU time. An experimental Ribbon design with load balancing can even achieve space overheads below 1%.

A Ribbon filter resembles an Xor filter modified to maximize locality and is constructed by solving a band-like linear system over Boolean variables. In previous work, Dietzfelbinger and Walzer describe this linear system and an efficient Gaussian solver. We present and analyze a faster, more adaptable solving process we call "Rapid Incremental Boolean Banding ON the fly," which resembles hash table construction. We also present and analyze an attractive Ribbon variant based on making the linear system homogeneous, and describe several more practical enhancements.

This work is licensed under the Creative Commons BY-NC-ND 4.0 International License. Visit https://creativecommons.org/licenses/by-nc-nd/4.0/ to view a copy of this license. Copyright is held by the owner/author(s).

1 INTRODUCTION

Background and motivation. The primary motivation for this work is optimizing data retrieval, especially in systems aggregating immutable data resources. In the example of LSM-tree storage [50], persisted key-value data is primarily split among immutable data files. A crucial strategy for reducing I/O in key look-ups is filtering accesses using an in-memory data structure. In a common configuration, each data file has an associated Bloom filter [9] representing the set of keys with associated data in that file. The Bloom filter has some false positive (FP) rate, which is the probability that querying a key not added returns true (positive). For example, configuring the Bloom filter to use 10 bits of space per added key yields an FP rate of just under 1%, regardless of the size or structure of the keys themselves². Thus, the Bloom filter filters out almost all specific key queries to data files that would find no relevant data³. False negative (FN) queries would be incorrect for this application and must never occur.

Blocked Bloom filters [44, 54] are a popular Bloom variant because they are extremely fast⁴. We do not expect to improve upon this solution for short-lived applications such as database joins or the smallest levels of an LSM-tree. However, Bloom filters use at least 44% more space ("space overhead") than the information-theoretic lower bound of λ = log₂(1/f) bits per key for a hashed filter with FP rate f [13, Section 2.2]. Blocked Bloom can exceed 50% space overhead for small f.

In this work we focus on saving space in static filters, and on optimizing the CPU time required for saving space. Our presentation and validation are kept general, but an intended application is optimizing accesses to the largest levels of an LSM-tree, where it should be worth CPU time to save space in long-lived memory-resident structures⁵. In [18] it is shown that a relatively high FP rate for these levels is best for overall efficiency. However, the space savings offered by existing practical Bloom filter alternatives are limited (dotted line in Figure 1(a)), especially for higher FP rates, λ ≤ 5.
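These figures can be checked against the standard formulas. The sketch below is ours, not from the paper; it computes the information-theoretic bound λ = log₂(1/f), the ≈ 1.44λ bits per key of an optimally configured classic Bloom filter, and the approximate FP rate (1 − e^(−k/10))^k of a Bloom filter with 10 bits per key and k hash functions.

```cpp
#include <cmath>
#include <cstdio>

int main() {
  // Information-theoretic bound for FP rate f = 1%.
  const double f = 0.01;
  const double lambda = std::log2(1.0 / f);        // ~6.64 bits per key
  // An optimally configured classic Bloom filter uses lambda / ln(2),
  // i.e. ~1.44 * lambda bits per key.
  const double bloom = lambda / std::log(2.0);     // ~9.59 bits per key
  std::printf("lambda = %.2f, Bloom = %.2f bits/key, overhead = %.0f%%\n",
              lambda, bloom, 100.0 * (bloom / lambda - 1.0));  // 44%

  // Bloom filter at 10 bits per key with the optimal k = round(10 ln 2) = 7.
  const double bits_per_key = 10.0;
  const double k = std::round(bits_per_key * std::log(2.0));
  const double fp = std::pow(1.0 - std::exp(-k / bits_per_key), k);
  std::printf("k = %.0f, FP rate = %.3f%%\n", k, 100.0 * fp);  // ~0.82%
  return 0;
}
```

With 10 bits per key and k = 7 hash functions this prints an FP rate of about 0.82%, matching the "just under 1%" figure above, and a space overhead of about 44% relative to the bound.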
Bloom filter and alternatives. We can categorize hashed filters by the logical structure of a query:

OR probing: Cuckoo filters [11, 32, 34, 35], Quotient filters [5, 15, 30, 51], and most others return true for a query if any one of several probed locations is a hashed data match, much like hash table look-ups. This design is great for supporting dynamic add and delete, but all known instances use (1 + ε)λ + μ bits per key where μ > 1.44, or often μ = 3 for speed⁶. Even with ε ≈ 0.05, the space overhead is large for small λ.

AND probing: A Bloom filter query returns true iff all probed locations (bits) are a match (set to 1).

XOR probing: Xor filters [10, 14, 20, 37, 39] return true for a query iff the bitwise exclusive-or (XOR) of all probed locations is a hashed data match. XOR probing is only known to work with static filters.

Structures using XOR probing are the most promising for space-efficient static filters. They are constructed by solving a linear system with one constraint per key, ensuring that querying the key returns true. Standard Xor filters use a fast solving process called peeling that limits their space efficiency to ≥ 1.22λ bits per key⁷, though a variant with fast compression, the Xor+ filter [39], uses roughly 1.08λ + 0.5 bits per key, which is an improvement for larger λ. Using structured Gaussian elimination instead of peeling [21, 37] offers better space efficiency, but construction times are considered impractical for many applications.
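To make the query structure concrete, here is a minimal sketch of an XOR-probed lookup in the style of standard Xor filters [39]. This is our illustration, not code from the paper; the hash derivations, the three-segment layout, and the 8-bit fingerprint (λ = 8) are assumptions made for brevity.

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// B must be pre-solved (one linear constraint per key) so that MayContain
// returns true for every key in the set. For keys outside the set it
// returns true with probability about 2^-8, since entries are 8-bit.
struct XorProbeSketch {
  std::vector<uint8_t> B;  // solved table, split into 3 equal segments
  size_t seg;              // segment length; B.size() == 3 * seg

  explicit XorProbeSketch(size_t seg_len) : B(3 * seg_len, 0), seg(seg_len) {}

  // Illustrative hash derivations; real Xor filters split one strong hash.
  static uint64_t H(const std::string& k, uint64_t tweak) {
    return std::hash<std::string>{}(k) * (2 * tweak + 1) + tweak;
  }
  static uint8_t Fingerprint(const std::string& k) { return H(k, 0) & 0xFF; }

  bool MayContain(const std::string& k) const {
    uint8_t x = B[H(k, 1) % seg]             // one probed location per segment
              ^ B[seg + H(k, 2) % seg]
              ^ B[2 * seg + H(k, 3) % seg];
    return x == Fingerprint(k);              // XOR of probes matches => maybe
  }
};
```

Construction is the hard part: the table B is the solution of the linear system with one XOR constraint per key, found by peeling or Gaussian elimination as discussed above.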
Core contribution. We introduce a faster, simplified, and more adaptable Gaussian elimination algorithm (Ribbon) for the static function data structure from [22]. Based on Ribbon, we develop a family of practical and highly space-efficient XOR-probed filters.

[Figure 1: (a) Fastest filter with the given combination of space overhead and false positive rate, considering a mix of construction and query times; the panel plots FP rate f (2⁻² down to 2⁻¹⁶) against % space overhead (100 down to 1), with regions for Blocked Bloom, Bloom, Cuckoo, Xor, Xor+, Homogeneous Ribbon, and Balanced Ribbon. (b) Construction and query times for the fastest approaches in (a), with per-key query times ranging from roughly 10 ns to 100 ns and construction times from roughly 5 ns to 300 ns.]

Results and comparison. Figure 1(a) summarizes extensive benchmarking data by indicating which structure is fastest for satisfying various space and FP rate requirements for a static filter. For "fastest" we consider the sum of the construction time per key and three query times (measured for x ∈ S, x ∉ S, and a mixed data set).⁸ Although we compare Ribbon with many approaches implemented in the fastfilter benchmark library [38], only variants of Bloom, Cuckoo, Xor, and Ribbon emerge as winners. Specifically, the color at point (x, y) indicates the fastest filter with space overhead at most x and FP rate f ∈ [2⁻ʸ/2, 2⁻ʸ·2] for n = 10⁷ keys. Diagonal shading indicates different winners for n = 10⁶ and n = 10⁸. The timings for the winning approach are also shown in Figure 1(b). We observe the following.

• Ribbon wins to the right of the dotted line because none of the competing approaches achieve space overhead this low.
• Ribbon wins in some territory previously occupied by Xor and Xor+ filters, mostly for f > 2⁻⁸ (λ < 8) from Xor and f > 2⁻¹² (λ < 12) from Xor+. In these cases, Ribbon-style Gaussian elimination is faster than peeling.
• Blocked Bloom filters are still the fastest whenever applicable, though Cuckoo and Xor take some of the remaining […]

• […] small Ribbon structures ("smash"). These features go into the Standard Ribbon filter, which in practice has increasing space overhead or running time as the number of keys increases.⁹
• Section 4. We present the Homogeneous Ribbon filter, which shares many desirable properties with blocked Bloom filters: construction success is guaranteed, and scaling to any number of keys is efficient. Homogeneous Ribbon does not build on static functions in the standard way, which simplifies implementation but complicates analysis.

Footnotes:
¹ Title inspired by [34, 35, 39].
² Learned filters [55] or tries [59] can take advantage of regularities in the key set. A space-efficient hash table [15] can take advantage of a densely covered key space. We focus on the general case.
³ Static filters can also support range queries, either through prefix Bloom [48] or more sophisticated schemes [47].
⁴ In Section 7 we mention and use another blocked Bloom implementation with new trade-offs [24].
⁵ In some large-scale applications using RocksDB [31] for LSM-tree storage, we observe roughly 10% of memory and roughly 1% of CPU used in blocked Bloom filters. The size-weighted average age of a live filter is about three days. An experimental Ribbon filter option was added to RocksDB in version 6.15.0 (November 2020).
⁶ μ > 1.44 can be explained by these structures approximating a minimal perfect hash [36], a near-strict ordering of keys based on the location of their matching entry in the structure, on top of λ payload bits per key. This is an observation about existing structures, not necessarily a fundamental limitation.
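To make the banding idea named in the abstract concrete, the following is a minimal sketch of incremental, band-limited Gaussian elimination in the spirit of "Rapid Incremental Boolean Banding ON the fly". It is our reconstruction, not the authors' code: the hash derivations, the fixed 64-bit band width, and the 1-bit fingerprint are illustrative assumptions, and the back-substitution pass that produces the final queryable bit table is omitted.

```cpp
#include <bit>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

class RibbonBandingSketch {
  static constexpr int kW = 64;   // band width w (one machine word)
  std::vector<uint64_t> coeff_;   // coefficient rows kept in row-echelon form
  std::vector<uint8_t> rhs_;      // right-hand sides (1-bit fingerprints)
  size_t m_;                      // number of columns/slots; requires m_ >= kW

  // Illustrative hashing; a real implementation derives all three values
  // from one strong hash of the key.
  void HashKey(const std::string& key, size_t* start, uint64_t* row,
               uint8_t* b) const {
    uint64_t h = std::hash<std::string>{}(key);
    *start = (h >> 32) % (m_ - kW + 1);      // band start in [0, m - w]
    *row = (h * 0x9E3779B97F4A7C15ULL) | 1;  // w coefficient bits, bit 0 set
    *b = h & 1;                              // 1-bit fingerprint (f = 1/2)
  }

 public:
  explicit RibbonBandingSketch(size_t slots)
      : coeff_(slots, 0), rhs_(slots, 0), m_(slots) {}

  // Incrementally band one equation into the system, like inserting into a
  // hash table. Returns false only if the key's equation is inconsistent
  // with those already banded (rare; a real implementation would retry
  // construction with a new hash seed).
  bool AddKey(const std::string& key) {
    size_t i;
    uint64_t row;
    uint8_t b;
    HashKey(key, &i, &row, &b);
    while (true) {
      if (coeff_[i] == 0) {          // free pivot slot: store and succeed
        coeff_[i] = row;
        rhs_[i] = b;
        return true;
      }
      row ^= coeff_[i];              // eliminate the leading 1 at column i
      b ^= rhs_[i];
      if (row == 0) return b == 0;   // redundant equation, or contradiction
      int shift = std::countr_zero(row);
      row >>= shift;                 // re-normalize so bit 0 is the pivot
      i += shift;                    // advance to the next candidate slot
    }
  }
};
```

Because every equation touches only a window of w consecutive columns, each elimination step is a couple of word-sized XORs on contiguous memory rather than scattered updates across the whole table, which is the locality advantage over standard Xor filter construction described above.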