An Introduction to Spark and to Its Programming Model

Introduction to Apache Spark

• Fast, expressive cluster computing system compatible with Apache Hadoop
• Much faster and much easier to use than Hadoop MapReduce, thanks to its rich APIs
• Large community
• Goes far beyond batch applications to support a variety of workloads:
  • including interactive queries, streaming, machine learning, and graph processing
• General-purpose, in-memory cluster computing system
• Provides high-level APIs in Java, Scala, and Python

Use Cases

(figure slides)

Real Time Data Architecture for Analyzing Tweets - Twitter Sentiment Analysis

(figure slide)

Spark Ecosystem

(figure slide)

Spark Core

• Contains the basic functionality for:
  • task scheduling,
  • memory management,
  • fault recovery,
  • interacting with storage systems,
  • and more.
• Defines the Resilient Distributed Datasets (RDDs)
  • the main Spark programming abstraction

Spark SQL

• For working with structured data
• View datasets as relational tables
• Define a schema of columns for a dataset
• Perform SQL queries
• Supports many sources of data
  • Hive tables, Parquet and JSON
• DataFrame abstraction

Spark Streaming

• Data analysis of streaming data
  • e.g. log files generated by production web servers
• Aimed at high-throughput and fault-tolerant stream processing
• DStream → a stream of datasets, each containing data from a certain interval

Spark MLlib

• MLlib is a library that contains common Machine Learning (ML) functionality:
  • Basic statistics
  • Classification (Naïve Bayes, decision trees, logistic regression)
  • Clustering (k-means, Gaussian mixture, …)
  • And many others!
• All the methods are designed to scale out across a cluster.

Spark GraphX

• Graph processing library
• Defines a graph abstraction:
  • a directed multigraph
  • properties attached to each edge and vertex
  • RDDs for edges and vertices
• Provides various operators for manipulating graphs (e.g. subgraph and mapVertices)

Programming with Python Spark (pySpark)

• We will use Python's interface to Spark, called pySpark
• A driver program accesses the Spark environment through a SparkContext object
• The key concept in Spark is the dataset abstraction called the RDD (Resilient Distributed Dataset)
• Basic idea: we load our data into RDDs and perform some operations on them

Programming environment - Spark concepts

• Driver programs access Spark through a SparkContext object, which represents a connection to the computing cluster.
• In a shell the SparkContext is created for you and available as the variable sc.
• You can use it to build Resilient Distributed Dataset (RDD) objects.
• Driver programs manage a number of worker nodes called executors.

RDD abstraction

• Represents data or transformations on data
• A distributed collection of items, split into partitions
• Read-only → RDDs are immutable
• Enables operations to be performed in parallel
• Fault tolerant:
  • the lineage of the data is preserved, so data can be re-created on a new node at any time
• Caching of datasets in memory
  • different storage levels available
  • fallback to disk possible

Programming with RDDs

• All work is expressed as either:
  • creating new RDDs,
  • transforming existing RDDs,
  • or calling operations on RDDs to compute a result.
• Spark distributes the data contained in RDDs across the nodes (executors) in the cluster and parallelizes the operations.
• Each RDD is split into multiple partitions, which can be computed on different nodes of the cluster.
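
Putting these concepts together, the following is a minimal pySpark sketch of a standalone driver program: it creates a SparkContext, parallelizes a small collection into an RDD, and inspects its partitions. The application name, the local[2] master URL and the choice of 4 partitions are illustrative assumptions, not part of the original slides; in the pyspark shell the context already exists as sc and this setup is unnecessary.

    # Minimal driver program (assumptions: local mode with 2 cores, made-up app name)
    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setAppName("intro-example").setMaster("local[2]")
    sc = SparkContext(conf=conf)

    # Build an RDD from an in-memory collection, explicitly split into 4 partitions
    numbers = sc.parallelize(range(10), 4)

    print(numbers.getNumPartitions())   # -> 4
    print(numbers.glom().collect())     # the items, grouped per partition

    sc.stop()

Because the RDD is partitioned, transformations such as the map and filter operations shown in the next sections run on each partition in parallel across the executors.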
(Figure slide: an RDD split into partitions across 3 workers)

An RDD can be created in 2 ways

• Parallelize a collection
  • Take an existing in-memory collection and pass it to SparkContext's parallelize method
  • Not generally used outside of prototyping and testing, since it requires the entire dataset to be in memory on one machine

    # Parallelize in Python
    wordsRDD = sc.parallelize(["fish", "cats", "dogs"])

• Read from a file
  • There are other methods to read data from HDFS, C*, S3, HBase, etc.

    # Read a local txt file in Python
    linesRDD = sc.textFile("/path/to/README.md")

First Program!

(Figure slide: the driver's SparkContext building RDD-1 and RDD-2)

RDD operations

• Once created, RDDs offer two types of operations:
  • transformations
    • transformations include map, filter, join
    • lazy operations that build new RDDs from other RDDs
  • actions
    • actions include count, collect, save
    • return a result or write it to storage
• (a short pySpark sketch of transformations and actions appears at the end of these notes)

Transformations vs. Actions

(figure slide)

Narrow vs. Wide transformations

(Figure slide: map is a narrow transformation; groupByKey is a wide transformation, e.g. (A,1), (A,2) → (A,[1,2]); see the second sketch at the end of these notes.)

Life cycle of a Spark program

1) Create some input RDDs from external data, or parallelize a collection in your driver program.
2) Lazily transform them to define new RDDs, using transformations like filter() or map().
3) Ask Spark to cache() any intermediate RDDs that will need to be reused.
4) Launch actions such as count() and collect() to kick off a parallel computation, which is then optimized and executed by Spark.

Job scheduling

• RDD Objects: build the operator DAG, e.g. rdd1.join(rdd2).groupBy(…).filter(…)
• DAGScheduler: splits the graph into stages of tasks and submits each stage as it becomes ready (DAG → TaskSets)
• TaskScheduler: launches tasks via the cluster manager and retries failed or straggling tasks
• Worker: executes the tasks in threads; its Block manager stores and serves blocks
• source: https://cwiki.apache.org/confluence/display/SPARK/Spark+Internals

Example: Mining Console Logs

• Load error messages from a log into memory, then interactively search for patterns

    lines = spark.textFile("hdfs://...")                      # base RDD
    errors = lines.filter(lambda s: s.startswith("ERROR"))    # transformed RDD
    messages = errors.map(lambda s: s.split('\t')[2])
    messages.cache()
    messages.filter(lambda s: "foo" in s).count()             # action
    messages.filter(lambda s: "bar" in s).count()

(Figure: the driver ships tasks to the workers; each worker reads a block of the file, caches its partition of messages in memory, and returns results to the driver.)

Some Apache Spark tutorials

• https://www.cloudera.com/documentation/enterprise/5-7-x/PDF/cloudera-spark.pdf
• https://stanford.edu/~rezab/sparkclass/slides/itas_workshop.pdf
• https://www.coursera.org/learn/big-data-essentials
• https://www.cloudera.com/documentation/enterprise/5-6-x/PDF/cloudera-spark.pdf

Spark: when not to use

• Even though Spark is versatile, that doesn't mean Spark's in-memory capabilities are the best fit for all use cases:
  • For many simple use cases, Hadoop MapReduce and Hive might be a more appropriate choice
  • Spark was not designed as a multi-user environment
  • Spark users are required to know whether the memory they have is sufficient for a dataset
  • Adding more users adds complications, since the users will have to coordinate memory usage to run code.
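
The sketch referenced from the "RDD operations" and "Life cycle of a Spark program" sections above: a minimal pySpark example showing how transformations stay lazy until an action runs. It assumes an existing SparkContext sc (e.g. from the pyspark shell); the file name access.log and the parsing logic are illustrative assumptions, not part of the original slides.

    # 1) Create an input RDD from external data
    lines = sc.textFile("access.log")

    # 2) Lazily define new RDDs with transformations; nothing executes yet
    errors = lines.filter(lambda line: "ERROR" in line)
    words = errors.flatMap(lambda line: line.split())

    # 3) Cache an intermediate RDD that several actions will reuse
    errors.cache()

    # 4) Actions trigger the actual parallel computation
    print(errors.count())            # number of error lines
    print(errors.take(5))            # first five error lines
    print(words.distinct().count())  # number of distinct words in error lines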
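
The second sketch, referenced from the "Narrow vs. Wide transformations" section: map versus groupByKey on a small key-value RDD. Again this assumes an existing SparkContext sc; the sample data is made up for the example.

    pairs = sc.parallelize([("A", 1), ("A", 2), ("B", 3)])

    # map is a narrow transformation: each output partition depends only on
    # its own input partition, so no data is shuffled across the cluster
    doubled = pairs.map(lambda kv: (kv[0], kv[1] * 2))

    # groupByKey is a wide transformation: values for the same key may sit in
    # different partitions, so Spark must shuffle data across the network
    grouped = pairs.groupByKey().mapValues(list)

    print(doubled.collect())   # [('A', 2), ('A', 4), ('B', 6)]
    print(grouped.collect())   # e.g. [('A', [1, 2]), ('B', [3])]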
