Hadoop 2.0 Introduction – with HDP for Windows

Seele Lin Who am I

• Speaker: – 林彥⾠ – A.K.A Seele Lin – Mail: [email protected] • Experience – 2010~Present

– 2013~2014 • Trainer of Hortonworks Certified Training courses – HCAHD (Hortonworks Certified Apache Hadoop Developer) – HCAHA (Hortonworks Certified Apache Hadoop Administrator)
Agenda

• What is Big Data – The Need for Hadoop
• Hadoop Introduction – What is Hadoop 2.0
• Hadoop Architecture Fundamentals – What is HDFS – What is MapReduce – What is YARN – Hadoop eco-systems
• HDP for Windows – What is HDP – How to install HDP on Windows – The advantages of HDP – What's Next
• Conclusion
• Q&A
What is Big Data?

1. In what timeframe do we now create the same amount of information as we created from the dawn of civilization until 2003? (Answer: 2 days.)
2. 90% of the world's data was created in the last how many years? (Answer: 2 years.)
3. This is data from a 2010 report!

Sources:
http://www.itbusinessedge.com/cm/blogs/lawson/just-the-stats-big-numbers-about-big-data/?cs=48051
http://techcrunch.com/2010/08/04/schmidt-data/
How large can it be?

• 1ZB = 1000 EB = 1,000,000 PB = 1,000,000,000 TB Every minute…

• http://whathappensontheinternetin60seconds.com/ The definition?

“Big Data is like teenage sex: Everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it too.”── Dan Ariely.

A single file • A set of files • A database
Big Data Includes All Types of Data

• Structured – Pre-defined schema – Relational database system – Stored as rows in a single table
• Semi-structured – Inconsistent structure – Cannot be stored as rows in a single table – Often has a nested structure – Logs, tweets
• Unstructured – Irregular structure, or parts of it lack structure – Pictures – Video

Time-sensitive / Immutable
6 Key Hadoop DATA TYPES

1. Sentiment – How your customers feel
2. Clickstream – Website visitors' data
3. Sensor/Machine – Data from remote sensors and machines
4. Geographic – Location-based data
5. Server Logs
6. Text – Millions of web pages, emails, and documents

4 V's of Big Data

http://www.datasciencecentral.com/profiles/blogs/data-veracity Next Product to Buy (NPTB)

Business Problem • Telecom product portfolios are complex • There are many cross-sell opportunities to the installed base • Sales associates use in-person conversations to guess at NPTB recommendations, with little supporting data

Solution • Hadoop gives telcos the ability to make confident NPTB recommendations, based on data from all its customers • Confident NPTB recommendations empower sales associates and improve their interactions with customers • Use the HDP data lake to reduce sales friction and create NPTB advantage like Amazon’s advantage in eCommerce Use case – prediction

Classic market-basket example: diapers and beer bought together on Fridays. Revenue?
Localized, Personalized Promotions

Business Problem • Telcos can geo-locate their mobile subscribers • They could create localized and personalized promotions • This requires connections with both deep historical data and real-time streaming data • Those connections have been expensive and complicated

Solution • Hadoop brings the data together to inexpensively localize and personalize promotions delivered to mobile devices • Notify subscribers about local attractions, events and sales that align with their preferences and location • Telcos can sell these promotional services to retailers 360° View of the Customer

Business Problem • Retailers interact with customers across multiple channels • Customer interaction and purchase data is often siloed • Few retailers can correlate customer purchases with marketing campaigns and online browsing behavior • Merging data in relational databases is expensive

Solution • Hadoop gives retailers a 360° view of customer behavior • Store data longer & track phases of the customer lifecycle • Gain competitive advantage: increase sales, reduce supply chain expenses and retain the best customers Use case – Target case

• Target mined their customer data and sent coupons to shoppers who had high "pregnancy prediction" scores.
• One angry father stormed into a Target store to yell at them for sending his daughter coupons for baby clothes and cribs.
• Guess what: she was pregnant, and hadn't told her father yet.
Changes in Analyzing Data

Big data is fundamentally changing the way we analyze information.

– Ability to analyze vast amounts of data rather than evaluating sample sets. – Historically we have had to look at causes. Now we can look at patterns and correlations in data that give us much better perspective. Recent day cases 1:

• http://www.ibtimes.co.uk/global-smartphone-data-traffic-increase-eightfold-17-exabyte-2020-1475571 Recent day cases 1: Practice on LINE

http://tech.naver.jp/blog/?p=2412 Recent day cases 2: in

– Media analysis of the 2014 Taipei mayoral election
• IMHO, ⿊貘來說: http://gene.speaking.tw/2014/11/blog-post_28.html
• 破解社群與APP⾏銷: http://taiwansmm.wordpress.com/2014/11/26/⾏銷絕不等於買廣告-2014年台北市⻑選舉柯⽂哲與連/
Scaling with a traditional database

• Scaling with a queue
• Sharding the database
• Fault-tolerance issues
• Corruption issues
• Problems
NoSQL

• Not Only SQL • provides a mechanism for storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases. • Motivations for this approach include simplicity of design, horizontal scaling and finer control over availability. • Column: Accumulo, Cassandra, Druid, HBase, Vertica • Document: Clusterpoint, Apache CouchDB, Couchbase, MarkLogic, MongoDB • Key-value: Dynamo, FoundationDB, MemcacheDB, Redis, Riak, FairCom c-treeACE, Aerospike • Graph: Allegro, Neo4J, InfiniteGraph, OrientDB, Virtuoso, Stardog First principles(1/2)

• "At the most fundamental level, what does a data system do?" • a data system does: "A data system answers questions based on information that was acquired in the past". • "What is this person's name?” • "How many friends does this person have?” • A bank account web page answers questions like "What is my current balance?” • "What transactions have occurred on my account recently?" First principles(2/2)

• "Data" is often used interchangeably with the word "information".
• You answer questions on your data by running functions that take data as input.
• The most general-purpose data system can answer questions by running functions that take the entire dataset as input. In fact, any query can be answered by running a function on the complete dataset.
Desired Properties of a Big Data System

• Robust and fault-tolerant • Low latency reads and updates • Scalable • General • Extensible • Allow ad hoc queries • Minimal maintenance The Lambda Architecture

• There is no single tool that provides a complete solution.
• You have to use a variety of tools and techniques to build a complete Big Data system.
• The Lambda Architecture solves the problem of computing arbitrary functions on arbitrary data in real time by decomposing the problem into three layers:
• the batch layer, the serving layer, and the speed layer
The Lambda Architecture model
Batch Layer (1)

• The batch layer stores the master copy of the dataset and precomputes batch views on that master dataset.
• The master dataset can be thought of as a very large list of records.
• It does two things:
– stores an immutable, constantly growing master dataset
– computes arbitrary functions on that dataset
• If you're going to precompute views on a dataset, you need to be able to do so for any view and any dataset.
• There's a class of systems called "batch processing systems" that are built to do exactly what the batch layer requires.
Batch Layer (2): What is a Batch View?

• Everything starts from the "query = function(all data)" equation.
• You could literally run your query functions on the fly over the complete dataset to get the results,
• but it would take a huge amount of resources and would be unreasonably expensive.
• Instead of computing the query on the fly, you read the results from a precomputed view.
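To make the "query = function(all data)" and precomputed-view ideas concrete, here is a minimal Java sketch; the tiny in-memory list standing in for the master dataset, the query, and all names are illustrative assumptions, not part of the original deck:

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class BatchViewSketch {
    public static void main(String[] args) {
        // A tiny stand-in for the immutable master dataset.
        List<String> masterDataset = Arrays.asList("books", "email", "books", "finance");

        // query = function(all data): run the function over the complete dataset every time.
        long booksOnTheFly = masterDataset.stream().filter("books"::equals).count();

        // Batch layer: precompute a view (word -> count) over the whole dataset once.
        Map<String, Long> batchView = masterDataset.stream()
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));

        // Serving layer: answer the same query with a cheap lookup instead of a full scan.
        long booksFromView = batchView.getOrDefault("books", 0L);

        System.out.println(booksOnTheFly + " == " + booksFromView);   // both print 2
    }
}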

How we get the Batch View / Serving Layer
• The batch layer emits batch views as the result of its functions.
• The next step is to load the views somewhere so that they can be queried.
• The serving layer indexes the batch view and loads it up so it can be efficiently queried to get particular values out of the view.
Batch and serving layers satisfy almost all properties
• Robust and fault tolerant • Scalable • General • Extensible • Allows ad hoc queries • Minimal maintenance
Speed Layer (1)

• The speed layer is similar to the batch layer in that it produces views based on data it receives.
• One big difference is that, in order to achieve the lowest latencies possible, the speed layer doesn't look at all the new data at once.
• It updates the realtime views as it receives new data, instead of recomputing them from scratch like the batch layer does.
• "Incremental updates" vs. "recomputation updates"
• Page view example
Speed Layer (3)

• Complexity isolation: complexity is pushed into a layer whose results are only temporary.
• The last piece of the Lambda Architecture is merging the results from the batch and realtime views.
Summary of the Lambda Architecture

• All new data is sent to both the batch layer and the speed layer • The master dataset is an immutable, append-only set of data • The batch layer precomputes query functions from scratch • The serving layer indexes the batch views produced by the batch layer and makes it possible to get particular values out of a batch view very quickly • The speed layer compensates for the high latency of updates to the serving layer. • Queries are resolved by getting results from both the batch and realtime views and merging them together Guess what…

• Traditionally, computation has been processor-bound • For decades, the primary push was to increase the computing power of a single machine – Faster processor, more RAM • Distributed systems evolved to allow developers to use multiple machines for a single job – At compute time, data is copied to the compute nodes Scale up or Scale out? The Need for Hadoop

Diagram: SCALE (storage & processing) – comparing MPP databases, the traditional EDW, NoSQL, and the Hadoop analytics platform.

• Store and use all types of data • Process all the data • Scalability • Commodity hardware Hadoop as a Data Factory

• A role Hadoop can play in an enterprise data platform is that of a data factory

Diagram: structured, semi-structured and raw data flow into Hadoop, which produces business value.
Hadoop as a Data Lake

• A larger, more general role Hadoop can play in an enterprise data platform is that of a data lake. Integrating Hadoop

Diagram: ODBC access for popular BI tools. Machine-generated data, web logs, click streams, social media, messaging and OLTP systems feed Hadoop as a big-data staging and analysis area; Hadoop connectors move data onward to the EDW and data marts, while applications, visualization tools, spreadsheets and business intelligence connect over ODBC.
Hadoop Introduction – inspired by

• The Apache Hadoop project was inspired by Google's MapReduce and Google File System papers. Hadoop creator: Doug Cutting.
• An open-sourced, flexible and available architecture for large-scale computation and data processing on a network of commodity hardware
• Yahoo has been the largest contributor to the project and uses Hadoop extensively in its Web search and Ad business.
Hadoop Concepts

• Distribute the data as it is initially stored in the system • Moving Computation is Cheaper than Moving Data • Individual nodes can work on data local to those nodes • Users can focus on developing applications. Relational Databases vs. Hadoop

Relational vs. Hadoop
• Schema: required on write (Relational) vs. required on read (Hadoop)
• Speed: reads are fast vs. writes are fast
• Governance: standards and structured vs. loosely structured
• Processing: limited, no data processing vs. processing coupled with data
• Data types: structured vs. multi- and unstructured
• Best-fit use: interactive OLAP analytics, complex ACID transactions, operational data store vs. data discovery, processing unstructured data, massive storage/processing
Different behaviors between RDBMS and Hadoop

– RDBMS (schema on write): the Application writes through a predefined Schema into the RDBMS and queries it with SQL.
– Hadoop (schema on read): the Application writes data into Hadoop as-is; the Schema is applied when MapReduce reads it.
Why we use Hadoop, not RDBMS?

• Limitations of RDBMS
– Capacity • 100 GB ~ 100 TB
– Speed
– Cost • High-end devices' prices increase faster than linearly • Software costs for technical support or license fees
– Too complex
• A Distributed File System is more likely to fit our need
– A DFS usually provides backup and fault-tolerance mechanisms
– Cheaper than an RDBMS when the data is really huge
What is Hadoop 2.0?

• The Apache Hadoop 2.0 project consists of the following modules: – Hadoop Common: the utilities that provide support for the other Hadoop modules. – HDFS: the Hadoop Distributed File System – YARN: a framework for job scheduling and cluster resource management. – MapReduce: for processing large data sets in a scalable and parallel fashion. Difference between Hadoop 1.0 and 2.0 What is YARN

• Yet Another Resource Negotiator • Jira ticket (MAPREDUCE-279) raised in January 2008 by Hortonworks co-founder Arun Murthy. • YARN is the result of 5 years of subsequent development in the open community. • YARN has been tested by Yahoo! since September 2012 and has been in production across 30,000 nodes and 325PB of data since January 2013. • More recently, other enterprises such as Microsoft, eBay, Twitter, XING and Spotify have adopted a YARN- based architecture. • Apache Hadoop YARN wins Best Paper award at SoCC 2013! - Hortonworks • http://hortonworks.com/blog/apache-hadoop-yarn-wins-best-paper-award-at-socc-2013/ YARN: Taking Hadoop Beyond Batch

• With YARN, applications run natively in Hadoop (instead of on Hadoop) HDFS Federation

Diagram: Hadoop 1.0 has a single namespace; with HDFS Federation in Hadoop 2.0, multiple independent namespaces (/app/Hive, /app/HBase, /home/) are served by multiple NameNodes sharing the same pool of DataNodes.
http://hortonworks.com/blog/an-introduction-to-hdfs-federation/
HDFS High Availability (HA)

• The Secondary NameNode is not a standby NameNode
• http://www.youtube.com/watch?v=hEqQMLSXQlY
HDFS High Availability (HA)

https://issues.apache.org/jira/browse/HDFS-1623 Hadoop Architecture Fundamentals What is HDFS

• A shared multi-petabyte file system for an entire cluster
• Managed by a single NameNode
• Multiple DataNodes hold the data
The Components of HDFS

• NameNode – The “master” node of HDFS – Determines and maintains how the chunks of data are distributed across the DataNodes • DataNode – Stores the chunks of data, and is responsible for replicating the chunks across other DataNodes Concept: What is NameNode

• The NameNode holds metadata for the files
– One HDFS cluster has only one set of metadata
– The NameNode is a single point of failure

• Only one NameNode per HDFS cluster
– One HDFS cluster has only one namespace and one root directory
• Metadata is kept in the NameNode's RAM so it can be queried quickly
– 1 GB of RAM can hold roughly 1,000,000 blocks of mapping metadata
• If the block size is 64 MB, that metadata maps to about 64 TB of actual data
More on the Metadata

• The NameNode uses two important local files to save the metadata information:
• fsimage
– fsimage saves the file directory tree information
– fsimage saves the mapping of files to blocks
• edits
– edits saves the file system journal
– When a client tries to create or move a file, the operation is first recorded into edits. If the operation succeeds, the in-memory metadata is then updated.
– fsimage is NOT changed instantly.
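As an illustration of the client-side view of these metadata operations, here is a minimal sketch using the standard HDFS FileSystem API; the path and the printed fields are assumptions for demonstration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateFileExample {
    public static void main(String[] args) throws Exception {
        // Connects using core-site.xml / hdfs-site.xml found on the classpath.
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/data/part-2");            // hypothetical path
        try (FSDataOutputStream out = fs.create(file)) { // NameNode records the new file in edits
            out.writeUTF("hello hdfs");                  // the data itself goes to DataNodes
        }
        // The file's metadata (replication, block size, owner) is served from the NameNode's RAM.
        FileStatus status = fs.getFileStatus(file);
        System.out.println(status.getReplication() + " replicas, block size " + status.getBlockSize());
    }
}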

The NameNode
1. When the NameNode starts, it reads the fsimage and edits files.
2. The transactions in edits are merged with fsimage, and edits is emptied.
3. A client application creates a new file in HDFS.
4. The NameNode logs that transaction in the edits file.

In memory:
File Name / Replicas / Block Sequence / Others
/data/part-0 / 2 / B1, B2, B3 / user, group, ...
/data/part-1 / 3 / B4, B5 / foo, bar, ...

On disk, fsimage:
File Name / Replicas / Block Sequence / Others
/data/part-0 / 3 / B1, B2, B3 / user, group, ...
/data/part-1 / 3 / B4, B5 / user, group, ...

On disk, edits:
OP Code / Operands
OP_SET_REPLICATION / "/data/part-0", 2
OP_SET_OWNER / "/data/part-1", "foo", "bar"
Concept: What is DataNode

• DataNodes hold the actual blocks
– Each block is 64 MB or 128 MB in size
– Each block is replicated three times across the cluster
• DataNodes communicate with the NameNode through heartbeats
Block backup and replication

• Each block is replicated multiple times
– Default replica count = 3
– The client can modify the configuration (see the sketch below)

• Each block's replicas share the same ID
– The system has no need to record which blocks are the same

• Replica placement is rack-aware
– The first replica is stored on one rack
– The other two replicas are on another rack, but on different machines
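A minimal sketch of how a client can change the replication configuration, using the standard HDFS client API; the property value and file path are illustrative assumptions:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setInt("dfs.replication", 2);   // default replica count for files this client writes
        FileSystem fs = FileSystem.get(conf);

        // Change the replica count of an existing file (the path is hypothetical).
        fs.setReplication(new Path("/data/part-0"), (short) 2);
    }
}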

The DataNodes
Diagram: DataNodes send heartbeats and block reports to the NameNode ("I'm still alive! Here is my latest Blockreport."); the NameNode replies with instructions such as "Replicate block 123 to DataNode 1."

Writing a file to HDFS:
1. The client sends a request to the NameNode to add a file to HDFS.
2. The NameNode tells the client how and where to distribute the blocks.
3. The client breaks the data into blocks and distributes the blocks to the DataNodes.
4. The DataNodes replicate the blocks (as instructed by the NameNode).
What is MapReduce

• Two functions
– Mapper
• Since we are processing a huge amount of data, it's natural to split the input data
• The Mapper reads data in the form of key/value pairs
• map(K1, V1) -> list(K2, V2)

– Reducer
• Since the input data is split, we need another phase to aggregate the results from each split
• reduce(K2, list(V2)) -> list(K3, V3)
Hadoop 1.0 Basic Core Architecture

Diagram: Hadoop 1.0 core – MapReduce (Mapper -> Shuffle/Sort -> Reducer) running on top of the Hadoop Distributed File System (HDFS).
Words to Websites - Simplified

• From words provide locations • Provides what to display for a search – Note: Page rank determines the order • For example – to find URLs with books on them

Map input (URL -> words on the page):
www.eslite.com: books calendars
www.yahoo.com: sports finance email celebrity
www.amazon.com: shoes books toolkits
www.google.com: finance email search
www.microsoft.com: operating-system productivity

Reduce output (K, V = word -> URLs):
books: www.eslite.com www.amazon.com
email: www.google.com www.yahoo.com www.facebook.com
finance: www.yahoo.com www.google.com
groceries: www.costco.com www.wellcome.com
toolkits: www.costco.com www.amazon.com
Data Model

• MapReduce works on (key, value) pairs

(Key input, Value input): (www.eslite.com, "books calendars")
-> Map (its output is merged with the other maps' results)
(Key intermediate, Value intermediate): (books, www.eslite.com)
-> Reduce
(Key output, Value output): (books, "www.eslite.com www.amazon.com")
The M/R concept

Diagram: the Job Tracker coordinates the worker nodes through heartbeats and task reports; each worker node runs a Task Tracker that hosts map (M) and reduce (R) task slots.
Map -> Shuffle -> Reduce

Diagram (single reducer): each Mapper (A, B, C) runs in its own Task Tracker and sorts its output; the Reducer (on Task Tracker D) fetches the sorted map outputs and merges them.

Diagram (multiple reducers): each Mapper partitions and sorts its output (A0/A1, B0/B1, C0/C1); Reducer 0 fetches and merges partition 0 from every mapper, and Reducer 1 fetches and merges partition 1.

Map-side detail:
1. The input split is read.
2. The InputFormat generates (key, value) pairs.
3. The map method outputs pairs into the MapOutputBuffer.
4-5. Records are sorted and spilled to disk when the buffer reaches a threshold.
6. Spill files are merged into a single file.
7. Mapper output = Reducer input.
Word Count Example

Map input (Key: offset, Value: line):
0: The cat sat on the mat
22: The aardvark sat on the sofa
Map output (Key: word, Value: count)
Reduce output (Key: word, Value: sum of counts)
What is YARN?

• YARN is a re-architecture of Hadoop that allows multiple applications to run on the same platform Why YARN

• support non-MapReduce workloads – reducing the need to move data between Hadoop HDFS and other storage systems • improve scalability – 2009 – 8 cores, 16GB of RAM, 4x1TB disk – 2012 – 16+ cores, 48-96GB of RAM, 12x2TB or 12x3TB of disk. – scale to production deployments of ~5000 nodes of hardware of 2009 vintage • cluster utilization – JobTracker views the cluster as composed of nodes (managed by individual TaskTrackers) with distinct map slots and reduce slots • customer agility How YARN Works

• YARN's original purpose was to split the two major responsibilities of the JobTracker/TaskTracker into separate entities:
• a global ResourceManager
• a per-application ApplicationMaster
• a per-node slave, the NodeManager
• per-application Containers running on the NodeManagers
Diagram: MapReduce v1 (JobTracker/TaskTrackers) side by side with YARN (ResourceManager, NodeManagers, ApplicationMasters, Containers).
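A minimal sketch (not the full application-submission flow) that uses the YARN client API to ask the ResourceManager for the NodeManagers it knows about, illustrating the ResourceManager/NodeManager split described above; configuration is assumed to come from yarn-site.xml on the classpath:

import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.NodeState;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ListYarnNodes {
    public static void main(String[] args) throws Exception {
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new YarnConfiguration());   // reads yarn-site.xml to find the ResourceManager
        yarnClient.start();
        // Ask the ResourceManager for every NodeManager currently in the RUNNING state.
        for (NodeReport node : yarnClient.getNodeReports(NodeState.RUNNING)) {
            System.out.println(node.getNodeId() + " -> " + node.getCapability());
        }
        yarnClient.stop();
    }
}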

The Hadoop Ecosystem

The Path to ROI

1. Put the raw data into the Hadoop Distributed File System (HDFS) in its raw format
2. Use Pig to explore and transform the data
3. Data Analysts use Hive to query the structured data (answers to structured questions = $$)
4. Data Scientists use MapReduce, R and Mahout to mine the data (hidden gems = $$)

Flume & Sqoop Flume / Sqoop – Data Integration Framework

What's the problem with data collection?
• Data collection is currently a priori and ad hoc
• A priori – decide what you want to collect ahead of time
• Ad hoc – each kind of data source goes through its own collection path

What is Flume (and how can it help)?
• A distributed data collection service
• It efficiently collects, aggregates, and moves large amounts of data
• Fault tolerant, with many failover and recovery mechanisms
• A one-stop solution for data collection of all formats
Flume: High-Level Overview • Logical Node • Source • Sink

An example flow Sqoop

• Easy, parallel database import/export
• You want to…
– Import data from an RDBMS into HDFS
– Export data from HDFS back into an RDBMS
Sqoop - import process
Sqoop - export process

• Exports are performed in parallel using MapReduce Why Sqoop

• JDBC-based implementation – Works with many popular database vendors • Auto-generation of tedious user-side code – Write MapReduce applications to work with your data, faster • Integration with Hive – Allows you to stay in a SQL-based environment Pig & Hive Why Hive and Pig?

• Although MapReduce is very powerful, it can also be complex to master • Many organizations have business or data analysts who are skilled at writing SQL queries, but not at writing Java code • Many organizations have programmers who are skilled at writing code in scripting languages • Hive and Pig are two projects which evolved separately to help such people analyze huge amounts of data via MapReduce – Hive was initially developed at Facebook, Pig at Yahoo!

Pig – Initiated by

• An engine for executing programs on top of Hadoop
• A high-level scripting language (Pig Latin)
• Processes data one step at a time
• Simple to write MapReduce programs
• Easy to understand, easy to debug

A = LOAD 'a.txt' AS (id, name, age, ...);
B = LOAD 'b.txt' AS (id, address, ...);
C = JOIN A BY id, B BY id;
STORE C INTO 'c.txt';
Hive – Developed by

• What is Hive?
– An SQL-like interface to Hadoop
• Treat your Big Data as tables
• Data Warehouse infrastructure
– Which provides
• Data summarization (MapReduce for execution)
• Ad hoc querying on top of Hadoop
– Maintains metadata information about your Big Data stored on HDFS
• Hive Query Language, for example:
SELECT storeid, count(*) FROM purchases WHERE price > 100 GROUP BY storeid

WordCount Example

• Input:
Hello World Bye World
Hello Hadoop Goodbye Hadoop

• For the given sample input the map emits:
< Hello, 1> < World, 1> < Bye, 1> < World, 1>
< Hello, 1> < Hadoop, 1> < Goodbye, 1> < Hadoop, 1>

• The reduce just sums up the values:
< Bye, 1> < Goodbye, 1> < Hadoop, 2> < Hello, 2> < World, 2>

WordCount Example In MapReduce

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

  public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String line = value.toString();
      StringTokenizer tokenizer = new StringTokenizer(line);
      while (tokenizer.hasMoreTokens()) {
        word.set(tokenizer.nextToken());
        context.write(word, one);        // emit <word, 1> for every token
      }
    }
  }

  public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();                // sum all counts for this word
      }
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "wordcount");
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    job.setMapperClass(Map.class);
    job.setReducerClass(Reduce.class);
    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    job.waitForCompletion(true);
  }
}
WordCount Example By Pig

A = LOAD 'wordcount/input' USING PigStorage() AS (token:chararray);

B = GROUP A BY token;

C = FOREACH B GENERATE group, COUNT(A) as count;

DUMP C; WordCount Example By Hive

CREATE TABLE wordcount (token STRING);

LOAD DATA LOCAL INPATH 'wordcount/input' OVERWRITE INTO TABLE wordcount;

SELECT token, count(*) FROM wordcount GROUP BY token;
Hive vs. Pig

• Language: HiveQL (SQL-like) vs. Pig Latin, a scripting language
• Schema: table definitions stored in a metastore vs. a schema optionally defined at runtime
• Programmatic access: JDBC, ODBC vs. PigServer
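The comparison above mentions JDBC/ODBC access to Hive. Here is a minimal Java sketch of the JDBC path, assuming a HiveServer2 endpoint (hostname, port and credentials are placeholders) and the wordcount table from the earlier example:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");   // HiveServer2 JDBC driver
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://hiveserver.example.com:10000/default", "hive", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT token, count(*) FROM wordcount GROUP BY token")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
            }
        }
    }
}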

HCatalog in the Ecosystem
Diagram: Java MapReduce and other tools access data through HCatalog, which sits on top of HDFS and HBase.
What is Oozie?

• A Java web application
• Oozie is a workflow scheduler for Hadoop
• "cron for Hadoop"
Diagram: a workflow coordinating Jobs 1 through 5.
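For illustration, a minimal sketch that submits such a workflow with the Oozie Java client; the Oozie URL, HDFS paths and property values are placeholders, not settings from this deck:

import java.util.Properties;
import org.apache.oozie.client.OozieClient;

public class SubmitWorkflow {
    public static void main(String[] args) throws Exception {
        OozieClient oozie = new OozieClient("http://oozie.example.com:11000/oozie");

        Properties conf = oozie.createConfiguration();
        conf.setProperty(OozieClient.APP_PATH, "hdfs://namenode:8020/user/seele/workflow");
        conf.setProperty("nameNode", "hdfs://namenode:8020");
        conf.setProperty("jobTracker", "resourcemanager:8032");

        String jobId = oozie.run(conf);   // submit and start the workflow
        System.out.println("Workflow job submitted: " + jobId);
    }
}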

How is it triggered?
• Time
– Execute your workflow every 15 minutes (00:15, 00:30, 00:45, 01:00, …)
• Event
– Materialize your workflow every hour (01:00, 02:00, 03:00, 04:00, …), but only run it when the input data exists in Hadoop.
Defining an Oozie Workflow

Diagram: an Oozie workflow is a directed graph of Action nodes and control-flow nodes, from Start to End.
HDP on Windows
General Planning Considerations

• Run on a single node?
– For testing and simple operations
– Not suitable for big data
• Start from a small cluster
– Maybe 4 or 6 nodes
– As the data grows, add more nodes
• Expand when necessary
– Storage is not enough
– Improve computing capability
Traditional Operating System Selection

• RedHat Enterprise Linux • CentOS • Ubuntu Server • SuSE Enterprise Linux Topology

• Master Node – Active NameNode – ResourceManager – Secondary NameNode ( or Standby NameNode) • Slave Node – DataNode – NodeManager Network Topology

Diagram: the cluster connects to the outside world through core switches; each rack (Rack 1, Rack 2, Rack 3, … Rack n) has its own switch. The NameNode, ResourceManager and Secondary NameNode run on master nodes, while every rack holds slave nodes running DataNode + NodeManager (DN + NM).
Hadoop Distribution

• Apache • Cloudera • MapR • Hortonworks • Amazon EMR – Apache Hadoop – MapR • Greenplum • IBM Get all your Hadoop packages and …

• Make sure the packages are compatible!
Who is Hortonworks?

Diagram: a virtuous cycle between upstream community projects and the downstream enterprise product. Upstream, Hortonworks helps design & develop, test & patch, and release Apache Hadoop, Pig, Hive, HBase, HCatalog, Ambari and other Apache projects; downstream, stable project releases are integrated & tested, packaged & certified, and distributed as the Hortonworks Data Platform.
No lock-in: an integrated, tested & certified distribution lowers risk by ensuring close alignment with the Apache projects.
What is HDP

• Enterprise Hadoop
– The Hortonworks Data Platform (HDP)
– The ONLY 100% open source and complete distribution
– Enterprise grade, proven and tested at scale
– Ecosystem endorsed to ensure interoperability
Diagram: the Hortonworks Data Platform: Hadoop core (HDFS, YARN, MapReduce, WebHDFS) surrounded by data services (Pig, Hive, HCatalog, HBase, Flume, Sqoop), operational services for managing and operating at scale (Ambari, Oozie), and platform services for enterprise readiness (HA, DR, snapshots, security, …), deployable on an OS, in the cloud, on a VM or as an appliance.
The management for HDP
What is HDP for Windows

• HDP for Windows significantly expands the ecosystem for the next-generation big data platform. This means that the Microsoft partners and tools you already rely on can help you with your Big Data initiatives.
• HDP for Windows is the Microsoft-recommended way to deploy Hadoop in Windows Server environments.
• Supported platforms
– Windows Server 2008
– Windows Server 2012

Choose your HDP The installation of HDP for Windows 1 The installation of HDP for Windows 2 The installation of HDP for Windows 3

• #Log directory • HDP_LOG_DIR=d:\hadoop\logs

• #Data directory • HDP_DATA_DIR=d:\hdp\data

• #Hosts
• NAMENODE_HOST=NAMENODE_MASTER.acme.com
• SECONDARY_NAMENODE_HOST=SECONDARY_NAMENODE_MASTER.acme.com
• RESOURCEMANAGER_HOST=RESOURCEMANAGER_MASTER.acme.com
• HIVE_SERVER_HOST=HIVE_SERVER_MASTER.acme.com
• OOZIE_SERVER_HOST=OOZIE_SERVER_MASTER.acme.com
• WEBHCAT_HOST=WEBHCAT_MASTER.acme.com
• FLUME_HOSTS=FLUME_SERVICE1.acme.com,FLUME_SERVICE2.acme.com,FLUME_SERVICE3.acme.com
• HBASE_MASTER=HBASE_MASTER.acme.com
• HBASE_REGIONSERVERS=slave1.acme.com,slave2.acme.com,slave3.acme.com
• ZOOKEEPER_HOSTS=slave1.acme.com,slave2.acme.com,slave3.acme.com
• SLAVE_HOSTS=slave1.acme.com,slave2.acme.com,slave3.acme.com

The installation of HDP for Windows 4

• #Database host • DB_FLAVOR=derby • DB_HOSTNAME=DB_myHostName

• #Hive properties • HIVE_DB_NAME=hive • HIVE_DB_USERNAME=hive • HIVE_DB_PASSWORD=hive

• #Oozie properties • OOZIE_DB_NAME=oozie • OOZIE_DB_USERNAME=oozie • OOZIE_DB_PASSWORD=oozie

What is HDP for Windows(Con.) The management of HDP for Windows

• The Ambari SCOM integration is made possible by the pluggable nature of Ambari. The management of HDP for Windows(Con.) The Advantages of HDP for Windows

• Hadoop on Windows Made Easy – With HDP for Windows, Hadoop is both simple to install and manage. It demystifies the Hadoop distribution so you don’t need to choose and test the right combination of Hadoop projects to deploy. • Clean and Easy Management – Apache Ambari, the open source choice for management of a Hadoop cluster is integrated and extends Microsoft System Center so that IT Operators can manage their Hadoop clusters side-by-side with their databases, applications and other IT assets on a single screen. • Secure, Reliable, Enterprise-Ready Hadoop – Offering the most reliable, innovative and trusted distribution available, Microsoft and Hortonworks together deliver tighter security through integration with Windows Server Active Directory, ease of management through System Center integration. The Data Integration

• The Hive ODBC Driver

Diagram: BI tools, analytics and reporting applications connect to Hadoop through the Hive ODBC Driver.
Using Hive with Excel

• Using the Hive ODBC Driver, your Excel spreadsheets can query data stored in Hadoop Querying Hive from Excel Querying Hive from Excel (Con.) Combine model using Power View in Excel What is Next on HDP 2.2 for Windows?

• Spark

• Ambari

Why Spark

• MapReduce is too slow
• Spark aims to make data analytics fast: both fast to run and fast to write
• When you need iterative algorithms
What is Spark

• In-memory distributed computing framework
• Created by the UC Berkeley AMP Lab in 2010
• Targets problems that Hadoop MR is bad at
– Iterative algorithms (machine learning)
– Interactive data mining
• More general purpose than Hadoop MR
• Active contributions from ~15 companies
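As a rough illustration of the programming model, a minimal Spark word count in Java, assuming the Spark 1.x Java API of this deck's era, a local master and a hypothetical input path:

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class SparkWordCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("wordcount").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaRDD<String> lines = sc.textFile("wordcount/input");          // hypothetical path
        JavaPairRDD<String, Integer> counts = lines
                .flatMap(line -> Arrays.asList(line.split("\\s+")))      // split lines into words
                .mapToPair(word -> new Tuple2<>(word, 1))                // emit (word, 1)
                .reduceByKey((a, b) -> a + b);                           // sum counts per word
        counts.cache();                     // keep the result in memory for further reuse
        counts.saveAsTextFile("wordcount/output");
        sc.stop();
    }
}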

What's Different between Hadoop and Spark
Diagram: a Hadoop MR pipeline chains separate Map and Reduce jobs, writing to HDFS between steps, while Spark pipelines operations such as map(), join(), transform() and cache() over the data sources in memory.
http://spark.incubator.apache.org
What is Ambari

• Provision a Hadoop cluster
– Ambari provides a step-by-step wizard for installing Hadoop services across any number of hosts.
– Ambari handles configuration of Hadoop services for the cluster.
• Manage a Hadoop cluster
– Ambari provides central management for starting, stopping, and reconfiguring Hadoop services across the entire cluster.
• Monitor a Hadoop cluster
– Ambari provides a dashboard for monitoring the health and status of the Hadoop cluster.
– Ambari leverages Ganglia for metrics collection.
– Ambari leverages Nagios for system alerting and will send emails when your attention is needed (e.g., a node goes down, remaining disk space is low, etc.).
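As a small illustration of the API underneath, a minimal Java sketch that calls the Ambari REST API to list clusters; the hostname, port (Ambari's default 8080) and admin credentials are placeholder assumptions:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class AmbariListClusters {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://ambari.example.com:8080/api/v1/clusters");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes("UTF-8"));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);   // JSON description of the clusters Ambari manages
            }
        }
    }
}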

Ambari installation wizard
Ambari central dashboard
Conclusion
Recap - Lifecycle of a YARN Application
• Container: the basic unit of allocation, e.g. Container A = 2 GB RAM, 1 CPU
• Fine-grained resource allocation replaces the fixed map/reduce slots
Diagram: a Client submits an application to the ResourceManager; NodeManagers on each node host the ApplicationMaster and the Containers allocated to it.
Hadoop 2.0 Eco-systems

• Q&A
Questions?