Data Science Using Spark: An Introduction


TOPICS COVERED
. Introduction to Spark
. Getting Started with Spark
. Programming in Spark
. Data Science with Spark
. What next?

DATA SCIENCE PROCESS
Real-world raw data is collected, processed, and cleaned; exploratory data analysis feeds machine learning algorithms and statistical models, whose results are communicated as visualizations and reported findings used to make decisions, or turned into a data product.
Source: Doing Data Science by Rachel Schutt & Cathy O'Neil

DATA SCIENCE & DATA MINING
Distinctions are blurred among the related fields: data science, business analytics, knowledge discovery, data mining (structured and unstructured), big data engineering, visual data mining, text mining, web mining, natural language processing, machine learning, statistics, database management, management science, and domain knowledge.

WHAT DO WE NEED TO SUPPORT DATA SCIENCE WORK?
Data Input / Output
. Ability to read data in multiple formats
. Ability to read data from multiple sources
. Ability to deal with Big Data (Volume, Velocity, Veracity, and Variety)
Data Transformations
. Easy to describe and perform transformations on rows and columns of data
. Requires an abstraction of data and a dataflow paradigm
Model Development
. Library of data science algorithms
. Ability to import / export models from other sources
. Data science pipelines / workflows
Analytics Applications Development
. Seamless integration with programming languages / IDEs

INTRODUCTION TO SPARK
Sharan Kalwani

WHAT IS SPARK?
▪ A distributed computing platform designed to be
  ▪ Fast
  ▪ General purpose
▪ A general engine that allows combining multiple types of computations:
  ▪ Batch
  ▪ Interactive
  ▪ Iterative
  ▪ SQL queries
  ▪ Text processing
  ▪ Machine learning

Fast / Speed
. Computations run in memory
. Faster than MapReduce, even for disk-based computations
Generality
. Designed for a wide range of workloads
. A single engine combines batch, interactive, iterative, and streaming algorithms
. Rich high-level libraries and simple native APIs in Java, Scala, and Python
. Reduces the management burden of maintaining separate tools

SPARK UNIFIED STACK
(diagram: Spark SQL, Spark Streaming, MLlib, and GraphX built on top of Spark Core)

CLUSTER MANAGERS
▪ Spark can run on a variety of cluster managers:
▪ Hadoop YARN - Yet Another Resource Negotiator, a cluster management technology and one of the key features of Hadoop 2.
▪ Apache Mesos - abstracts CPU, memory, storage, and other compute resources away from machines, enabling fault-tolerant and elastic distributed systems.
▪ Spark Standalone Scheduler - provides an easy way to get started on an empty set of machines.
▪ Spark can leverage existing Hadoop infrastructure.
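Whichever cluster manager is used, the application code stays the same; only the master URL changes. A minimal sketch of this, in the deck's Scala (the object name and host addresses are illustrative, not from the deck):

  import org.apache.spark.{SparkConf, SparkContext}

  object MasterUrlDemo {
    def main(args: Array[String]): Unit = {
      val conf = new SparkConf()
        .setAppName("MasterUrlDemo")
        // "local[*]"          -> run locally, using all cores
        // "spark://host:7077" -> Spark Standalone Scheduler (placeholder host)
        // "mesos://host:5050" -> Apache Mesos (placeholder host)
        // "yarn-client"       -> Hadoop YARN ("yarn" on newer releases)
        .setMaster("local[*]")
      val sc = new SparkContext(conf)
      // Trivial job to confirm the cluster is reachable
      println(sc.parallelize(1 to 100).reduce(_ + _))  // 5050
      sc.stop()
    }
  }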
SPARK HISTORY
▪ Started in 2009 as a research project in the UC Berkeley RAD Lab, which later became the AMPLab.
▪ Spark researchers found that Hadoop MapReduce was inefficient for iterative and interactive computing.
▪ Spark was designed from the beginning to be fast for interactive and iterative workloads, with support for in-memory storage and fault tolerance.
▪ Apart from UC Berkeley, Databricks, Yahoo!, and Intel are major contributors.
▪ Spark was open sourced in March 2010 and became an Apache Software Foundation project in June 2013.

SPARK VS HADOOP
Hadoop MapReduce
. Mostly suited for batch jobs
. Difficult to program directly in MapReduce
. Batch doesn't compose well for large apps
. Specialized systems needed as a workaround
Spark
. Handles batch, interactive, and real-time workloads within a single framework
. Native integration with Java, Python, and Scala
. Programming at a higher level of abstraction
. More general than MapReduce

GETTING STARTED WITH SPARK
There are multiple ways of using Spark (not all covered today):
▪ Certified Spark distributions:
  ▪ Datastax Enterprise (Cassandra + Spark)
  ▪ HortonWorks HDP
  ▪ MapR
▪ Local / standalone
▪ Databricks Cloud
▪ Amazon AWS EC2

LOCAL MODE
▪ Install Java JDK 6/7 on Mac OS X or Windows
  http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html
▪ Install Python 2.7 using Anaconda (on Windows only)
  https://store.continuum.io/cshop/anaconda/
▪ Download Apache Spark from Databricks and unzip the downloaded file to a convenient location:
  http://training.databricks.com/workshop/usb.zip
▪ Change to the newly created spark-training directory
▪ Run the interactive Scala shell (REPL):
  ./spark/bin/spark-shell
  val data = 1 to 1000
  val distData = sc.parallelize(data)
  val filteredData = distData.filter(s => s < 25)
  filteredData.collect()

DATABRICKS CLOUD
▪ A hosted data platform powered by Apache Spark
▪ Features:
  ▪ Exploration and visualization
  ▪ Managed Spark clusters
  ▪ Production pipelines
  ▪ Support for third-party apps (Tableau, Pentaho, QlikView)
▪ Databricks Cloud trial: http://databricks.com/registration
▪ Demo
▪ Concepts:
  ▪ Workspace - notebooks (Python, Scala, SQL) with visualizations, markup, comments, and collaboration
  ▪ Tables - Hive tables, SQL, DBFS, S3, CSV, databases
  ▪ Clusters

AMAZON EC2
▪ Launch a Linux instance on EC2 and set up EC2 keys
▪ Set up an EC2 key pair from the AWS console
▪ The Spark binary ships with a spark-ec2 script to manage clusters on EC2
▪ Launching a Spark cluster on EC2:
  ./spark-ec2 -k <keypair> -i <key-file> -s <num-slaves> launch <cluster-name>
▪ Logging in to run applications:
  ./spark-ec2 -k <keypair> -i <key-file> login <cluster-name>
▪ Terminating a cluster:
  ./spark-ec2 destroy <cluster-name>
▪ Accessing data in S3:
  s3n://<bucket>/path

PROGRAMMING IN SPARK
Sharan Kalwani

SPARK CLUSTER
A Spark cluster can be managed by Mesos, YARN, or the standalone scheduler.

SCALA - SCALABLE LANGUAGE
▪ Scala is a multi-paradigm programming language with a focus on functional programming.
▪ In functional programming, functions operate on immutable variables.
▪ Every operator, variable, and function is an object.
▪ Scala compiles to bytecode that runs on top of any JVM and can use any Java library.
▪ Spark is written entirely in Scala; Spark SQL, GraphX, Spark Streaming, etc. are libraries written in Scala.
▪ Scala Crash Course by Holden Karau @databricks:
  lintool.github.io/SparkTutorial/slides/day1_Scala_crash_course.pdf

SPARK MODEL
Write programs in terms of transformations on distributed datasets.
Resilient Distributed Datasets (RDDs):
. Read-only collections of objects that can be stored in memory or on disk across a cluster
. Partitions are automatically rebuilt on failure
. Parallel functional transformations (map, filter, ...)
. Familiar Scala collections API for distributed data and computation
. Lazy transformations

SPARK CORE RDD - RESILIENT DISTRIBUTED DATASET
▪ The primary abstraction in Spark: a fault-tolerant collection of elements that can be operated on in parallel.
▪ Two types: parallelized Scala collections and Hadoop datasets.
▪ Transformations and actions can be performed on RDDs.
Transformations
▪ Operate on an RDD and return a new RDD.
▪ Are lazily evaluated.
Actions
▪ Return a value after running a computation on an RDD.
▪ The DAG is evaluated only when an action takes place.
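To make the transformation/action distinction concrete, a minimal spark-shell sketch (reusing the sc context from the Local Mode example above; the variable names are illustrative):

  val nums    = sc.parallelize(1 to 1000)   // parallelized Scala collection -> RDD
  val squares = nums.map(n => n * n)        // transformation: returns a new RDD, nothing runs yet
  val small   = squares.filter(_ < 100)     // transformation: still lazy
  small.count()                             // action: the DAG is evaluated now -> 9
  small.collect()                           // action: Array(1, 4, 9, 16, 25, 36, 49, 64, 81)

Until count() or collect() is called, Spark only records the lineage of transformations; no data is read or computed.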
SPARK SHELL
. Interactive queries and prototyping
. Runs local, on YARN, or on Mesos
. Static type checking and auto-completion

SPARK COMPARED TO JAVA (NATIVE HADOOP)
(side-by-side code comparison shown on the slides)

SPARK STREAMING
• Real-time computation, similar to Storm
• Input is distributed to memory for fault tolerance
• Streaming input is split into sliding windows of RDDs
• Sources include Kafka, Flume, Kinesis, and HDFS

DATA SCIENCE USING SPARK

WHAT DO WE NEED TO SUPPORT DATA SCIENCE WORK?
(recap of the requirements listed earlier: data input/output, data transformations, model development, and analytics application development)

WHY SPARK FOR DATA SCIENCE?
▪ FAST - a distributed, in-memory platform
▪ Scalable - from small to Big Data; well integrated into the Big Data ecosystem
▪ SPARK is HOT
▪ Expressive - simple, higher-level abstractions for describing computations
▪ Flexible - extensible, with multiple language bindings (Scala, Java, Python, R)

TRADITIONAL DATA SCIENCE TOOLS
SAS, SPSS, R, Matlab, RapidMiner, and many others
. Designed to work on single machines
. Proprietary and expensive

WHAT IS AVAILABLE IN SPARK?
(layered from top to bottom)
. Analytics workflows (ML Pipeline)
. Library of algorithms (MLlib, R packages, Mahout?, graph algorithms)
. Extensions to RDDs (SchemaRDD, RRDD, RDPG, DStreams)
. Basic RDDs (transformations & actions)

DATA TYPES FOR DATA SCIENCE (MLLIB)
Single-machine data types:
. Local Vector
. Labeled Point
. Local Matrix
Distributed data types (supported by RDDs):
. Distributed Matrix - RowMatrix, IndexedRowMatrix, CoordinateMatrix

SCHEMA RDDS

R TO SPARK DATAFLOW
(diagram: the R SparkContext, referenced in R, drives a Java SparkContext, which ships Spark tasks, broadcast variables, and R packages to R executors running local tasks on each worker)

MESOS - SHARK - SPARK
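As a closing illustration of the MLlib data types listed above, a minimal spark-shell sketch (assuming the sc context from earlier; the values and names are illustrative):

  import org.apache.spark.mllib.linalg.Vectors
  import org.apache.spark.mllib.linalg.distributed.RowMatrix
  import org.apache.spark.mllib.regression.LabeledPoint

  // Single-machine types
  val dense  = Vectors.dense(1.0, 0.0, 3.0)                    // local vector
  val sparse = Vectors.sparse(3, Array(0, 2), Array(1.0, 3.0)) // same values, sparse form
  val point  = LabeledPoint(1.0, dense)                        // label + feature vector

  // Distributed type: a RowMatrix wraps an RDD of local vectors
  val mat = new RowMatrix(sc.parallelize(Seq(dense, sparse)))
  println(mat.numRows() + " x " + mat.numCols())               // 2 x 3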