Real-Time Data Analytics with Apache Druid

Correa Bosco Hilary
Department of Information Technology (MSc. IT Part 1), Sir Sitaram and Lady Shantabai Patkar College of Arts and Science, Mumbai, India

International Journal of Advanced Research in Science, Communication and Technology (IJARSCT), ISSN (Online) 2581-9429, Volume 5, Issue 2, May 2021


Abstract: The shift towards real-time data flow has a major impact on the way applications are designed and on the work of data engineers. Dealing with real-time data ingestion brings a paradigm shift and an added layer of challenges compared to traditional integration and processing methods. There are real benefits to leveraging real-time data, but it requires specialized considerations in setting up the ingestion, processing, storing, and serving of that data. It brings about specific operational needs and a change in the way data engineers work. These should be considered while embarking on a real-time journey. In this paper we examine real-time data analytics with Apache Druid. Apache Druid (incubating) is a performant analytics data store for event-driven data. Druid's core design combines ideas from OLAP/analytic databases, time series databases, and search systems to create a unified system for operational analytics.

Keywords: Distributed, Real-time, Fault-tolerant, Highly Available, Open Source, Analytics, Column-oriented, OLAP, Apache Druid

I. INTRODUCTION

Streaming data integration is the foundation for streaming analytics. Specific use cases such as IoT device logs, contextual marketing triggers, and dynamic pricing all rely on using a data feed or real-time data. If you cannot source the data in real time, there is very little value to be gained in attempting to tackle these use cases. With data streaming and updating from ever more sources, businesses are looking to quickly translate this data into intelligence to make important decisions, usually in an automated way. Real-time data streams have become more popular due to the Internet of Things (IoT), sensors in everyday devices, and of course the rise of social media. These platforms provide a constantly changing state; analysing it even a day later can give misleading or already outdated information.

II. GENERIC INFRASTRUCTURE FOR REAL-TIME DATA FLOWS

Besides enabling new use cases, real-time data ingestion brings other benefits, such as a decreased time to land the data, less need to handle dependencies, and other operational improvements.

2.1 Ingestion Layer

Ingesting clickstream data often requires a specific infrastructure component to facilitate it. Snowplow and Divolte are two open-source clickstream collectors. Frameworks such as Apache Flume and Apache NiFi, offering features such as data buffering and backpressure, help integrate data onto message queues/streams. A message bus or stream is the component that serves to transfer the data across the different components of the real-time data ecosystem. Some of the common technologies used are Kafka, Pulsar, Kinesis, Google Pub/Sub, Azure Service Bus, Azure Event Hub, and RabbitMQ, to name just a few. Different processing frameworks exist to simplify computation on data streams: technologies such as Apache Beam, Apache Flink, Apache Storm, and Spark Streaming can significantly help with the more complicated processing of data streams.
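To make the ingestion step concrete, here is a minimal sketch of publishing a clickstream event onto a message bus, using Kafka via the kafka-python client. The broker address, topic name, and event fields are illustrative assumptions rather than anything prescribed above.

```python
import json
from kafka import KafkaProducer

# Assumed local broker and topic; a collector such as Snowplow or
# Divolte would normally sit in front of this and emit the events.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {
    "event_id": "e-001",          # unique id, useful for deduplication later
    "user_id": "u-123",
    "page": "/pricing",
    "ts": "2021-05-01T12:00:00Z",
}
producer.send("clickstream", value=event)
producer.flush()  # block until the broker acknowledges the event
```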
It is possible to query streams directly using SQL-like languages: Azure Event Hub supports Azure Stream Analytics, Kafka offers KSQL, and Spark offers Spark Structured Streaming to query multiple types of message streams.

2.2 Processing Layer

Streams typically need to be enriched to provide additional data meant to be used in real time. Enrichment jobs can do lookups on additional services or databases, perform first-stage ETL transformations, or add machine learning scores onto the stream. Enrichment of messages typically happens through a producer/consumer or publisher/subscriber pattern. These applications can be coded in any language and often do not require a specialized framework for this type of enrichment. Although specialized frameworks and tooling exist, such as Spark Streaming, Flink, or Storm, for most use cases a normal service application can perform adequately without the overhead, complexity, or specific expertise of a streaming computation framework (see the sketch at the end of this section). Stateful enrichment and cleanup of the data might be needed before it is used downstream:

Stateful Enrichment: Event-based applications might need to consume data enriched with historical data.

Stateful Cleanup: This can be the case when attempting to use customer data coming from different sources in CRM systems that want a 360-degree view of the customer, for instance to leverage contextual marketing triggers.

Stateful Deduplication: Some message brokers offer an at-least-once delivery option, creating the need to deduplicate events.

2.3 Storage Layer

Real-time data brings different challenges in terms of storing and serving collected and processed data. The data tends to have different access patterns, latency, or consistency requirements, impacting how it needs to be stored and served. To properly handle the different needs arising from real-time processing, it is important to have the correct systems to manage the type of workload and access pattern for the data. Depending on how the data is consumed and its volume/velocity, teams might complement the data platform with OLAP, OLTP, HTAP, or search engine systems.

2.4 Serving Layer

There are many ways to integrate real-time data; the most common are through dashboards, query interfaces, APIs, webhooks, firehoses, or pub/sub, and by directly integrating into OLTP databases. The particular method the data will be served through depends heavily on the nature of the intended use case. For instance, when integrating with a live application, different options are available: offering an API, publishing events through a webhook, firehose, or pub/sub mechanism, or alternatively integrating directly with an OLTP database. Analysts, on the other hand, might find a dashboard or a query interface a better fit for purpose.
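As a sketch of the consumer/producer enrichment pattern described in Section 2.2, the following plain service application consumes raw events, deduplicates them (to compensate for at-least-once delivery), enriches each one with a looked-up attribute, and republishes to a downstream topic. The topic names, the event_id field, and the lookup_country helper are assumptions for illustration; a production job would keep its deduplication state in an external store rather than in memory.

```python
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "clickstream",                  # assumed raw topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def lookup_country(ip):
    # Hypothetical enrichment; a real job might call a geo-IP service
    # or join against a reference table here.
    return "unknown" if ip is None else "UG"

seen_ids = set()  # in-memory dedup state; production jobs would externalize this

for message in consumer:
    event = message.value
    # Stateful deduplication: at-least-once brokers may redeliver events.
    if event.get("event_id") in seen_ids:
        continue
    seen_ids.add(event.get("event_id"))
    # Enrichment: attach additional data before publishing downstream.
    event["country"] = lookup_country(event.get("ip"))
    producer.send("clickstream-enriched", value=event)
```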
III. DRUID AND REAL-TIME ANALYTICS

Apache Druid is a real-time analytics database designed for rapid analytics on large datasets. It is most often used to power use cases where real-time ingestion, high uptime, and fast query performance are needed. Druid can be used to analyze billions of rows not only in batch but also in real time. It offers many integrations with different technologies, such as Apache Kafka, cloud storage (e.g., S3), Hive, HDFS, DataSketches, and Redis.

It also follows the model of an immutable past and an append-only future: past events happen once and never change, so they are immutable, while appends only take place for new events. Apache Druid provides users with fast, deep exploration of large-scale transaction data.

Data is stored in chunks, and chunks are immutable. Once segments are created, you cannot update them (you can create a new version of a segment, but that implies re-indexing all the data for the period). You can configure how those chunks are created (one per day, per hour, or per month, …) and also define the granularity of the data inside the chunks. If you know that you only need the data per hour, you can set up your chunks to roll up the data automatically. Inside a segment, the data is stored by timestamp, dimensions, and metrics:

1. Timestamp: the timestamp (rolled up or not).
2. Dimension: a dimension is used to cut or filter the data. Some examples of dimensions are city, state, country, deviceId, campaignId, …
3. Metric: a metric is a counter/aggregate. A few examples of metrics are keyword clicks, page impressions, response time, …

Druid supports a variety of aggregations by default, such as first, last, doubleSum, longMax, … There are also custom/experimental aggregations available, such as approximate histograms, DataSketches, or your own: you can easily implement your own aggregations as a plugin to Druid.

Some of Druid's key features are:
1. Columnar storage format.
2. Scalable distributed system.
3. Parallel processing.
4. Real-time or batch ingestion.
5. Self-healing, self-balancing, easy to operate.
6. Cloud-native, fault-tolerant architecture.
7. Indexes for quick filtering.
8. Time-based partitioning.
9. Approximate algorithms.
10. Automatic summarization at ingest time.

You should use Druid if you have the following challenges and use cases:
1. Time series data to store.
2. Data with somewhat high cardinality.
3. You need to be able to query this data fast.
4. You want to support streaming data.
5. Digital marketing (ads data).
6. User analytics and behavior in your products.
7. APM (application performance management).
8. OLAP and business intelligence.
9. IoT and device metrics.

How Does It Work Under the Hood?

Every Druid installation is a cluster that requires multiple components to run. A Druid cluster can run on a single machine (great for development) or fully distributed across a few to hundreds of machines.

External Dependencies Required by Druid

Metadata storage: a SQL database, such as PostgreSQL or MySQL. It is used to store information about the segments, some loading rules, and some task information.
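To illustrate how the timestamp/dimension/metric layout is queried in practice, the sketch below sends a Druid SQL query to the router's HTTP endpoint (/druid/v2/sql, default port 8888), grouping a metric by a dimension over the last hour of __time. The datasource name and column names are assumptions for the example.

```python
import requests

# Assumed datasource "clickstream" with a "country" dimension and a
# "clicks" metric; 8888 is the Druid router's default port.
query = """
SELECT country, SUM(clicks) AS total_clicks
FROM clickstream
WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR
GROUP BY country
ORDER BY total_clicks DESC
LIMIT 10
"""

resp = requests.post(
    "http://localhost:8888/druid/v2/sql",
    json={"query": query},
)
resp.raise_for_status()
for row in resp.json():  # one dict per result row
    print(row["country"], row["total_clicks"])
```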