The “Big Data” Ecosystem at LinkedIn

Roshan Sumbaly, Jay Kreps, and Sam Shah
LinkedIn

ABSTRACT

The use of large-scale data mining and machine learning has proliferated through the adoption of technologies such as Hadoop, with its simple programming semantics and rich and active ecosystem. This paper presents LinkedIn's Hadoop-based analytics stack, which allows data scientists and machine learning researchers to extract insights and build product features from massive amounts of data. In particular, we present our solutions to the “last mile” issues in providing a rich developer ecosystem. This includes easy ingress from and egress to online systems, and managing workflows as production processes. A key characteristic of our solution is that these distributed system concerns are completely abstracted away from researchers. For example, deploying data back into the online system is simply a 1-line Pig command that a data scientist can add to the end of their script. We also present case studies on how this ecosystem is used to solve problems ranging from recommendations to news feed updates to email digesting to descriptive analytical dashboards for our members.

Categories and Subject Descriptors: H.2.4 [Database Management]: Systems; H.2.8 [Database Management]: Database Applications
General Terms: Design, Management
Keywords: big data, hadoop, data mining, machine learning, data pipeline, offline processing

1. INTRODUCTION

The proliferation of data and the availability and affordability of large-scale data processing systems has transformed data mining and machine learning into a core production use case, especially in the consumer web space. For instance, many social networking and e-commerce web sites provide derived data features, which usually consist of some data mining application offering insights to the user. A typical example of this kind of feature is collaborative filtering, which showcases relationships between pairs of items based on the wisdom of the crowd (“people who did this also did that”). This technique is used by several organizations such as Amazon [20], YouTube [5], and LinkedIn in various recommendation capacities. At LinkedIn, the largest professional online social network with over 200 million members, collaborative filtering is used for people, job, company, group, and other recommendations (Figure 2b) and is one of the principal components of engagement.

Other such derived data applications at LinkedIn include “People You May Know,” a link prediction system attempting to find other users you might know on the social network (Figure 2a); ad targeting; group and job recommendations; news feed updates (Figure 3a); email digesting (Figure 3b); analytical dashboards (Figure 4a); and others. As a smaller example, LinkedIn displays “related searches” (Figure 2d) on the search results page. This feature allows users to refine their searches or explore variants by pivoting to alternate queries from their original search [27]. There are over a hundred of these derived data applications on the site.

These applications are largely enabled by Hadoop [34], the open-source implementation of MapReduce [6]. Among Hadoop's advantages are its horizontal scalability, fault tolerance, and multitenancy: the ability to reliably process petabytes of data on thousands of commodity machines. More importantly, part of Hadoop's success is its relatively easy-to-program semantics and its extremely rich and active ecosystem. For example, Pig [23] provides a high-level dataflow language; Hive [33] provides a dialect of SQL; and libraries such as Mahout [24] provide machine learning primitives.

This rich development environment allows machine learning researchers and data scientists (individuals with modest software development skills and little distributed systems knowledge) to extract insights and build models without a heavy investment of software developers. This is important as it decreases overall development costs by permitting researchers to iterate quickly on their own models, and it also elides the need for knowledge transfer between research and engineering, a common bottleneck.
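For illustration, the kind of dataflow a researcher writes in this environment might look as follows in Pig. This is a minimal sketch of a co-occurrence (“people who viewed this also viewed that”) computation; the input path, field names, and output location are placeholders chosen for the example, not LinkedIn's actual datasets:

    -- Hypothetical co-occurrence computation over a profile-view log.
    -- Paths and schema are illustrative placeholders.
    views_a = LOAD '/data/tracking/profile_views' USING PigStorage('\t')
              AS (viewer_id:long, viewee_id:long);
    views_b = LOAD '/data/tracking/profile_views' USING PigStorage('\t')
              AS (viewer_id:long, viewee_id:long);

    -- Pair up profiles viewed by the same member, dropping self-pairs.
    joined  = JOIN views_a BY viewer_id, views_b BY viewer_id;
    pairs   = FILTER joined BY views_a::viewee_id != views_b::viewee_id;

    -- Count how often each pair of profiles co-occurs across viewers.
    grouped = GROUP pairs BY (views_a::viewee_id, views_b::viewee_id);
    related = FOREACH grouped GENERATE
                FLATTEN(group) AS (source_id, related_id),
                COUNT(pairs) AS occurrences;

    STORE related INTO '/data/derived/related_profiles' USING PigStorage('\t');

A single researcher can develop and iterate on a script like this without writing MapReduce code or managing cluster resources.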
While the Hadoop ecosystem eases development and scaling of these analytic workloads, to truly allow researchers to “productionize” their work, ingress and egress from the Hadoop system must be similarly easy and efficient, a presently elusive goal. This data integration problem is frequently cited as one of the most difficult issues facing data practitioners [13]. At LinkedIn, we have tried to solve these “last mile” issues by providing an environment where researchers have complete, structured, and documented data available, and where they can publish the results of their algorithms without difficulty. Thereafter, application developers can read these results and build an experience for the user. A key characteristic of our solution is that large-scale, multi-data center data deployment is a 1-line command, which provides seamless integration into existing Hadoop infrastructure and allows the researcher to be agnostic to distributed system concerns.
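To make the preceding claim concrete, the sketch below shows what appending such a 1-line deployment command to the end of a Pig script can look like. It is a hypothetical illustration: the storage function name, cluster URI, and store name are placeholders, not the exact interface described later in the paper.

    -- Hypothetical example; loader path, schema, storage function name,
    -- and cluster URI are illustrative placeholders.
    recommendations = LOAD '/data/derived/member_recs' USING PigStorage('\t')
                      AS (member_id:long, rec_ids:chararray);

    -- The "1-line" egress step: a single STORE statement ships the results
    -- to the online key-value serving system.
    STORE recommendations INTO 'voldemort://cluster.example.com:6666/member-recs'
          USING VoldemortStore('key=member_id');

As described below, when key-value egress is chosen, the system builds the data and index files on Hadoop and bulk-loads them into Voldemort, so the researcher never has to deal with the serving infrastructure directly.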
This paper describes the systems that engender effortless ingress and egress out of the Hadoop system and presents case studies of how data mining applications are built at LinkedIn. Data flows into the system from Kafka [15], LinkedIn's publish-subscribe system. Data integrity from this pipeline is ensured through stringent schema validation and additional monitoring.

This ingress is followed by a series of Hadoop jobs, known as a workflow, which process this data to provide some additional value, usually some predictive analytical application. Azkaban, a workflow scheduler, eases construction of these workflows, managing and monitoring their execution. Once results are computed, they must be delivered to end users.

For egress, we have found three main vehicles of transport are necessary. The primary mechanism used by approximately 70% of applications is key-value access. Here, the algorithm delivers results to a key-value database that an application developer can integrate with. For example, for a collaborative filtering application, the key is the identifier of the entity and the value is a list of identifiers to recommend. When key-value access is selected, the system builds the data and index files on Hadoop, using the elastic resources of the cluster to bulk-load Voldemort [31], LinkedIn's key-value store.

The second transport mechanism used by around 20% of applications is stream-oriented access. The results of the algorithm are published as an incremental change log stream that can be processed by the application. The processing of this change log may be used to populate application-specific data structures. For example, the LinkedIn “news feed” provides a member with updates on their network, company, and industry. The stream system permits any application to inject updates into a member's stream. An analytics application (e.g., “What are the hot companies in your network?”) runs its computation as a series of Hadoop jobs and publishes its feed of new updates to the stream service at the end of its computation.

The final transport mechanism is multidimensional or OLAP

2. RELATED WORK

In terms of ingress, ETL has been studied extensively. For Hadoop ingress, in Lee et al. [16], the authors describe Twitter's transport mechanisms and data layout for Hadoop, which uses Scribe [33]. Scribe was originally developed at Facebook and forms the basis for their real-time data pipeline [3]. Yahoo has a similar system for log ingestion, called Chukwa [26]. Instead of this approach, we have developed Kafka [15], a low-latency publish-subscribe system that unifies both near-line and offline use cases [11]. These Hadoop log aggregation tools support a push model where the broker forwards data to consumers. In a near-line scenario, a pull model is more suitable for scalability as it allows consumers to retrieve messages at their maximum rate without being flooded, and it also permits easy rewind and seeking of the message stream.

In terms of egress from batch-oriented systems like Hadoop, MapReduce [6] has been used for offline index construction in various search systems [21]. These search layers trigger builds on Hadoop to generate indexes, and on completion, pull the indexes to serve search requests. This approach has also been extended to various databases. Konstantinou et al. [14] and Barbuzzi et al. [2] suggest building HFiles offline in Hadoop, then shipping them to HBase [9], an open source database modeled after BigTable [4]. In Silberstein et al. [30], Hadoop is used to batch insert data into PNUTS [29], Yahoo!'s distributed key-value solution, in the reduce phase.

OLAP is a well studied problem in data warehousing, but there is little related work on using MapReduce for cubing and on serving queries in the request/response path of a website. MR-Cube [22] efficiently materializes cubes using the parallelism of a MapReduce framework, but does not provide a query engine. None of these
