Scaling Big Data Mining Infrastructure: The Twitter Experience


Scaling Big Data Mining Infrastructure: The Twitter Experience

Jimmy Lin and Dmitriy Ryaboy
Twitter, Inc.
@lintool @squarecog

ABSTRACT

The analytics platform at Twitter has experienced tremendous growth over the past few years in terms of size, complexity, number of users, and variety of use cases. In this paper, we discuss the evolution of our infrastructure and the development of capabilities for data mining on "big data". One important lesson is that successful big data mining in practice is about much more than what most academics would consider data mining: life "in the trenches" is occupied by much preparatory work that precedes the application of data mining algorithms, followed by substantial effort to turn preliminary models into robust solutions. In this context, we discuss two topics: First, schemas play an important role in helping data scientists understand petabyte-scale data stores, but they're insufficient to provide an overall "big picture" of the data available to generate insights. Second, we observe that a major challenge in building data analytics platforms stems from the heterogeneity of the various components that must be integrated together into production workflows; we refer to this as "plumbing". This paper has two goals: For practitioners, we hope to share our experiences to flatten bumps in the road for those who come after us. For academic researchers, we hope to provide a broader context for data mining in production environments, pointing out opportunities for future work.

1. INTRODUCTION

The analytics platform at Twitter has experienced tremendous growth over the past few years in terms of size, complexity, number of users, and variety of use cases. In 2010, there were approximately 100 employees in the entire company and the analytics team consisted of four people, the only people to use our 30-node Hadoop cluster on a daily basis. Today, the company has over one thousand employees. There are thousands of Hadoop nodes across multiple datacenters. Each day, around one hundred terabytes of raw data are ingested into our main Hadoop data warehouse; engineers and data scientists from dozens of teams collectively run tens of thousands of Hadoop jobs. These jobs accomplish everything from data cleaning to simple aggregations and report generation to building data-powered products to training machine-learned models for promoted products, spam detection, follower recommendation, and much, much more. We've come a long way, and in this paper, we share experiences in scaling Twitter's analytics infrastructure over the past few years. Our hope is to contribute to a set of emerging "best practices" for building big data analytics platforms for data mining from a case study perspective.

A little about our backgrounds: The first author is an Associate Professor at the University of Maryland who spent an extended sabbatical from 2010 to 2012 at Twitter, primarily working on relevance algorithms and analytics infrastructure. The second author joined Twitter in early 2010 and was first a tech lead, then the engineering manager of the analytics infrastructure team. Together, we hope to provide a blend of the academic and industrial perspectives: a bit of ivory tower musings mixed with "in the trenches" practical advice. Although this paper describes the path we have taken at Twitter and is only one case study, we believe our recommendations align with industry consensus on how to approach a particular set of big data challenges.

The biggest lesson we wish to share with the community is that successful big data mining is about much more than what most academics would consider data mining. A significant amount of tooling and infrastructure is required to operationalize vague strategic directives into concrete, solvable problems with clearly-defined metrics of success. A data scientist spends a significant amount of effort performing exploratory data analysis to even figure out "what's there"; this includes data cleaning and data munging not directly related to the problem at hand. The data infrastructure engineers work to make sure that productionized workflows operate smoothly, efficiently, and robustly, reporting errors and alerting responsible parties as necessary. The "core" of what academic researchers think of as data mining (translating domain insight into features and training models for various tasks) is a comparatively small, albeit critical, part of the overall insight-generation lifecycle.

In this context, we discuss two topics: First, with a certain amount of bemused ennui, we explain that schemas play an important role in helping data scientists understand petabyte-scale data stores, but that schemas alone are insufficient to provide an overall "big picture" of the data available and how they can be mined for insights. We've frequently observed that data scientists get stuck before they even begin: it's surprisingly difficult in a large production environment to understand what data exist, how they are structured, and how they relate to each other. Our discussion is couched in the context of user behavior logs, which comprise the bulk of our data. We share a number of examples, based on our experience, of what doesn't work and how to fix it.
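To make the distinction concrete before moving to the second topic, the sketch below is our own illustration rather than an artifact from Twitter: a hypothetical record layout for one type of user behavior log event, written in Python for brevity, with invented event and field names. A schema of this kind tells an analyst what a single record contains, which is exactly the help credited to schemas above, while the closing comments point at the "big picture" questions a schema alone cannot answer.

```python
from dataclasses import dataclass, field

# Hypothetical schema for one type of user behavior log event.
# The event name and every field name here are invented for illustration.
@dataclass
class ClientEvent:
    timestamp_ms: int   # when the event occurred, in epoch milliseconds
    user_id: int        # the user who triggered the event
    event_name: str     # e.g., "search_results_click" (made up)
    session_id: str     # groups events belonging to one client session
    payload: dict = field(default_factory=dict)  # event-specific details

# A schema like this answers "how is one record structured?", but not:
#   - Which event types exist across the company, and who owns each one?
#   - Does user_id join against the accounts store or the devices store?
#   - Which events fire together, and in what order, for a given action?
# Those "big picture" questions are what schemas alone fail to convey.

if __name__ == "__main__":
    event = ClientEvent(1355097600000, 42, "search_results_click", "abc123")
    print(event)
```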
Second, we observe that a major challenge in building data analytics platforms comes from the heterogeneity of the various components that must be integrated together into production workflows. Much complexity arises from impedance mismatches at the interface between different components. A production system must run like clockwork, spitting out aggregated reports every hour, updating data products every three hours, generating new classifier models daily, etc. Getting a bunch of heterogeneous components to operate as a synchronized and coordinated workflow is challenging. This is what we fondly call "plumbing": the not-so-sexy pieces of software (and duct tape and chewing gum) that ensure everything runs together smoothly are part of the "black magic" of converting data to insights.
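As a cartoon of what this plumbing has to accomplish, consider the minimal scheduler sketch below; it is our illustration, not Twitter's actual tooling, and the job names and intervals simply restate the clockwork examples above. Each function stands in for a different heterogeneous component, yet all of them must fire on one coordinated clock.

```python
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class Job:
    name: str
    interval_hours: float
    run: Callable[[], None]
    last_run: float = 0.0  # epoch seconds of the last successful run

# Stand-ins for heterogeneous components: an aggregation script, a data
# product refresh, and a model-training pipeline (all hypothetical).
def aggregate_reports() -> None: print("spitting out hourly reports")
def update_data_products() -> None: print("updating data products")
def train_classifier() -> None: print("generating new classifier model")

JOBS = [
    Job("reports", 1, aggregate_reports),        # every hour
    Job("products", 3, update_data_products),    # every three hours
    Job("classifier", 24, train_classifier),     # daily
]

def tick(now: float) -> None:
    """Fire every job whose interval has elapsed. A real workflow manager
    must also handle dependencies between jobs, retries, backfill of
    missed runs, and alerting responsible parties on failure."""
    for job in JOBS:
        if now - job.last_run >= job.interval_hours * 3600:
            try:
                job.run()
                job.last_run = now
            except Exception as err:  # in production: page someone
                print(f"job {job.name} failed: {err}")

if __name__ == "__main__":
    tick(time.time())  # the first tick runs every job once
```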
This paper has two goals: For practitioners, we hope to share our experiences to flatten bumps in the road for those who come after us. Scaling big data infrastructure is a complex endeavor, and we point out potential pitfalls along the way, with possible solutions. For academic researchers, we hope to provide a broader context for data mining in production environments, to help academics understand how their research is adapted and applied to solve real-world problems at scale. In addition, we identify opportunities for future work that could contribute to streamlining big data mining infrastructure.

2. HOW WE GOT HERE

We begin by situating the Twitter analytics platform in the broader context of "big data" for commercial enterprises. The "fourth paradigm" of how big data is reshaping the physical and natural sciences [23] (e.g., high-energy physics and bioinformatics) is beyond the scope of this paper.

The simple idea that an organization should retain data that result from carrying out its mission and exploit those data to generate insights that benefit the organization is of course not new. Commonly known as business intelligence, among other monikers, its origins date back several decades. In this sense, the "big data" hype is simply a rebranding of what many organizations have been doing all along.

Examined more closely, however, there are three major trends that distinguish insight-generation activities today from, say, the 1990s. First, we have seen a tremendous explosion in the sheer amount of data, an orders-of-magnitude increase. [...] for example, using machine learning techniques to train predictive models of user behavior: whether a piece of content is spam, whether two users should become "friends", the likelihood that a user will complete a purchase or be interested in a related product, etc. Other desired capabilities include mining large (often unstructured) data for statistical regularities, using a wide range of techniques from simple (e.g., k-means clustering) to complex (e.g., latent Dirichlet allocation or other Bayesian approaches). These techniques might surface "latent" facts about the users, such as their interests and expertise, that they do not explicitly express.

To be fair, some types of predictive analytics have a long history, for example, credit card fraud detection and market basket analysis. However, we believe there are several qualitative differences. The application of data mining on behavioral data changes the scale at which algorithms need to operate, and the generally weaker signals present in such data require more sophisticated algorithms to produce insights. Furthermore, expectations have grown: what were once cutting-edge techniques practiced only by a few innovative organizations are now routine, and perhaps even necessary for survival in today's competitive environment. Thus, capabilities that may have previously been considered luxuries are now essential.

Finally, open-source software is playing an increasingly important role in today's ecosystem. A decade ago, no credible open-source, enterprise-grade, distributed data analytics platform capable of handling large data volumes existed. Today, the Hadoop open-source implementation of MapReduce [11] lies at the center of a de facto platform for large-scale data analytics, surrounded by complementary systems such as HBase, ZooKeeper, Pig, Hive, and many others. The importance of Hadoop is validated not only by adoption in countless startups, but also by the endorsement of industry heavyweights such as IBM, Microsoft, Oracle, and EMC. Of course, Hadoop is not a panacea and is not an adequate solution for many problems, but a strong case can be made for Hadoop supplementing (and in some cases, replacing) existing data management systems.

Analytics at Twitter lies at the intersection of these three [...]