Hortonworks Data Platform Apache Spark Component Guide (December 15, 2017)
docs.hortonworks.com

Hortonworks Data Platform: Apache Spark Component Guide
Copyright © 2012-2017 Hortonworks, Inc. Some rights reserved.

The Hortonworks Data Platform, powered by Apache Hadoop, is a massively scalable and 100% open source platform for storing, processing, and analyzing large volumes of data. It is designed to handle data from many sources and formats in a quick, easy, and cost-effective manner. The Hortonworks Data Platform consists of the essential set of Apache Hadoop projects, including MapReduce, Hadoop Distributed File System (HDFS), HCatalog, Pig, Hive, HBase, ZooKeeper, and Ambari. Hortonworks is the major contributor of code and patches to many of these projects. These projects have been integrated and tested as part of the Hortonworks Data Platform release process, and installation and configuration tools have also been included.

Unlike other providers of platforms built using Apache Hadoop, Hortonworks contributes 100% of our code back to the Apache Software Foundation. The Hortonworks Data Platform is Apache-licensed and completely open source. We sell only expert technical support, training, and partner-enablement services. All of our technology is, and will remain, free and open source. Please visit the Hortonworks Data Platform page for more information on Hortonworks technology. For more information on Hortonworks services, please visit either the Support or Training page. Feel free to contact us directly to discuss your specific needs.

Except where otherwise noted, this document is licensed under the Creative Commons Attribution-ShareAlike 4.0 License: http://creativecommons.org/licenses/by-sa/4.0/legalcode

Table of Contents

1. Analyzing Data with Apache Spark
2. Installing Spark
   2.1. Installing Spark Using Ambari
   2.2. Installing Spark Manually
   2.3. Verifying Spark Configuration for Hive Access
   2.4. Installing the Spark Thrift Server After Deploying Spark
   2.5. Validating the Spark Installation
3. Configuring Spark
   3.1. Configuring the Spark Thrift Server
      3.1.1. Enabling Spark SQL User Impersonation for the Spark Thrift Server
      3.1.2. Customizing the Spark Thrift Server Port
   3.2. Configuring the Livy Server
      3.2.1. Configuring SSL for the Livy Server
      3.2.2. Configuring High Availability for the Livy Server
   3.3. Configuring the Spark History Server
   3.4. Configuring Dynamic Resource Allocation
      3.4.1. Customizing Dynamic Resource Allocation Settings on an Ambari-Managed Cluster
      3.4.2. Configuring Cluster Dynamic Resource Allocation Manually
      3.4.3. Configuring a Job for Dynamic Resource Allocation
      3.4.4. Dynamic Resource Allocation Properties
   3.5. Configuring Spark for Wire Encryption
      3.5.1. Configuring Spark for Wire Encryption
      3.5.2. Configuring Spark2 for Wire Encryption
   3.6. Configuring Spark for a Kerberos-Enabled Cluster
      3.6.1. Configuring the Spark History Server
      3.6.2. Configuring the Spark Thrift Server
      3.6.3. Setting Up Access for Submitting Jobs
4. Running Spark
   4.1. Specifying Which Version of Spark to Run
   4.2. Running Sample Spark 1.x Applications
      4.2.1. Spark Pi
      4.2.2. WordCount
   4.3. Running Sample Spark 2.x Applications
      4.3.1. Spark Pi
      4.3.2. WordCount
5. Submitting Spark Applications Through Livy
   5.1. Using Livy with Spark Versions 1 and 2
   5.2. Using Livy with Interactive Notebooks
   5.3. Using the Livy API to Run Spark Jobs: Overview
   5.4. Running an Interactive Session With the Livy API
      5.4.1. Livy Objects for Interactive Sessions
      5.4.2. Setting Path Variables for Python
      5.4.3. Livy API Reference for Interactive Sessions
   5.5. Submitting Batch Applications Using the Livy API
      5.5.1. Livy Batch Object
      5.5.2. Livy API Reference for Batch Jobs
6. Running PySpark in a Virtual Environment
7. Automating Spark Jobs with Oozie Spark Action
   7.1. Configuring Oozie Spark Action for Spark 1
   7.2. Configuring Oozie Spark Action for Spark 2
8. Developing Spark Applications
   8.1. Using the Spark DataFrame API
   8.2. Using Spark SQL
      8.2.1. Accessing Spark SQL through the Spark Shell
      8.2.2. Accessing Spark SQL through JDBC or ODBC: Prerequisites
      8.2.3. Accessing Spark SQL through JDBC
      8.2.4. Accessing Spark SQL through ODBC
      8.2.5. Spark SQL User Impersonation
   8.3. Calling Hive User-Defined Functions
      8.3.1. Using Built-in UDFs
      8.3.2. Using Custom UDFs
   8.4. Using Spark Streaming
      8.4.1. Prerequisites
      8.4.2. Building and Running a Secure Spark Streaming Job
      8.4.3. Running Spark Streaming Jobs on a Kerberos-Enabled Cluster
      8.4.4. Sample pom.xml File for Spark Streaming with Kafka
   8.5. HBase Data on Spark with Connectors
      8.5.1. Selecting a Connector
      8.5.2. Using the Connector with Apache Phoenix
   8.6. Accessing HDFS Files from Spark
      8.6.1. Specifying Compression
      8.6.2. Accessing HDFS from PySpark
   8.7. Accessing ORC Data in Hive Tables