Mastering Spark with R: The Complete Guide to Large-Scale Analysis and Modeling
Javier Luraschi, Kevin Kuo, and Edgar Ruiz
Foreword by Matei Zaharia

Copyright © 2020 Javier Luraschi, Kevin Kuo, and Edgar Ruiz. All rights reserved. Printed in the United States of America. Published by O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.

O’Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://oreilly.com). For more information, contact our corporate/institutional sales department: 800-998-9938 or [email protected].

Acquisition Editor: Jonathan Hassell
Development Editor: Melissa Potter
Production Editor: Elizabeth Kelly
Copyeditor: Octal Publishing, LLC
Proofreader: Rachel Monaghan
Indexer: Judy McConville
Interior Designer: David Futato
Cover Designer: Karen Montgomery
Illustrator: Rebecca Demarest

October 2019: First Edition

Revision History for the First Release
2019-10-04: First Release

See http://oreilly.com/catalog/errata.csp?isbn=9781492046370 for release details.

The O’Reilly logo is a registered trademark of O’Reilly Media, Inc. Mastering Spark with R, the cover image, and related trade dress are trademarks of O’Reilly Media, Inc. The views expressed in this work are those of the authors, and do not represent the publisher’s views. While the publisher and the authors have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the authors disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work.
Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.

978-1-492-04637-0 [LSI]

To Adrian, Clara, Julian, Max, Mila and Roman.

Table of Contents

Foreword
Preface
1. Introduction: Overview; Hadoop; Spark; R; sparklyr; Recap
2. Getting Started: Overview; Prerequisites; Installing sparklyr; Installing Spark; Connecting; Using Spark; Web Interface; Analysis; Modeling; Data; Extensions; Distributed R; Streaming; Logs; Disconnecting; Using RStudio; Resources; Recap
3. Analysis: Overview; Import; Wrangle; Built-in Functions; Correlations; Visualize; Using ggplot2; Using dbplot; Model; Caching; Communicate; Recap
4. Modeling: Overview; Exploratory Data Analysis; Feature Engineering; Supervised Learning; Generalized Linear Regression; Other Models; Unsupervised Learning; Data Preparation; Topic Modeling; Recap
5. Pipelines: Overview; Creation; Use Cases; Hyperparameter Tuning; Operating Modes; Interoperability; Deployment; Batch Scoring; Real-Time Scoring; Recap
6. Clusters: Overview; On-Premises; Managers; Distributions; Cloud; Amazon; Databricks; Google; IBM; Microsoft; Qubole; Kubernetes; Tools; RStudio; Jupyter; Livy; Recap
7. Connections: Overview; Edge Nodes; Spark Home; Local; Standalone; YARN; YARN Client; YARN Cluster; Livy; Mesos; Kubernetes; Cloud; Batches; Tools; Multiple Connections; Troubleshooting; Logging; Spark Submit; Windows; Recap
8. Data: Overview; Reading Data; Paths; Schema; Memory; Columns; Writing Data; Copying Data; File Formats; CSV; JSON; Parquet; Others; File Systems; Storage Systems; Hive; Cassandra; JDBC; Recap
9. Tuning: Overview; Graph; Timeline; Configuring; Connect Settings; Submit Settings; Runtime Settings; sparklyr Settings; Partitioning; Implicit Partitions; Explicit Partitions; Caching; Checkpointing; Memory; Shuffling; Serialization; Configuration Files; Recap
10. Extensions: Overview; H2O; Graphs; XGBoost; Deep Learning; Genomics; Spatial; Troubleshooting; Recap
11. Distributed R: Overview; Use Cases; Custom Parsers; Partitioned Modeling; Grid Search; Web APIs; Simulations; Partitions; Grouping; Columns; Context; Functions; Packages; Cluster Requirements; Installing R; Apache Arrow; Troubleshooting; Worker Logs; Resolving Timeouts; Inspecting Partitions; Debugging Workers; Recap
12. Streaming: Overview; Transformations; Analysis; Modeling; Pipelines; Distributed R; Kafka; Shiny; Recap
13. Contributing: Overview; The Spark API; Spark Extensions; Using Scala Code; Recap
A. Supplemental Code References
Index

Foreword

Apache Spark is a distributed computing platform built on extensibility: Spark’s APIs make it easy to combine input from many data sources and process it using diverse programming languages and algorithms to build a data application. R is one of the most powerful languages for data science and statistics, so it makes a lot of sense to connect R to Spark. Fortunately, R’s rich language features enable simple APIs for calling Spark from R that look similar to running R on local data sources.
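That local-feeling workflow can be sketched in a few lines of R. This is a minimal, hypothetical example (not taken from the book) assuming sparklyr and dplyr are installed and a local Spark installation is available via spark_install():

```r
library(sparklyr)
library(dplyr)

# Connect to a local Spark instance
# (run spark_install() first if no Spark installation exists yet)
sc <- spark_connect(master = "local")

# Copy a local data frame into Spark, then query it with familiar dplyr verbs;
# the pipeline is translated to Spark SQL and executed in the cluster
cars <- copy_to(sc, mtcars)
cars %>%
  group_by(cyl) %>%
  summarise(avg_mpg = mean(mpg)) %>%
  collect()

spark_disconnect(sc)
```

The point of the sketch is that, apart from connecting and disconnecting, the analysis code is ordinary dplyr: the same verbs would run unchanged against a local data frame.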
With a bit of background about both systems, you will be able to invoke massive computations in Spark or run your R code in parallel from the comfort of your favorite R programming environment.

This book explores using Spark from R in detail, focusing on the sparklyr package that enables support for dplyr and other packages known to the R community. It covers all of the main use cases in detail, ranging from querying data using the Spark engine to exploratory data analysis, machine learning, parallel execution of R code, and streaming. It also has a self-contained introduction to running Spark and monitoring job execution.

The authors are exactly the right people to write about this topic: Javier, Kevin, and Edgar have been involved in sparklyr development since the project started. I was excited to see how well they’ve assembled this clear and focused guide about using Spark with R.

I hope that you enjoy this book and use it to scale up your R workloads and connect them to the capabilities of the broader Spark ecosystem. And because all of the infrastructure here is open source, don’t hesitate to give the developers feedback about making these tools better.

— Matei Zaharia
Assistant Professor at Stanford University, Chief Technologist at Databricks, and original creator of Apache Spark

Preface

In a world where information is growing exponentially, leading tools like Apache Spark provide support to solve many of the relevant problems we face today. From companies looking for ways to improve based on data-driven decisions, to research organizations solving problems in health care, finance, education, and energy, Spark enables analyzing much more information faster and more reliably than ever before.

Various books have been written for learning Apache Spark; for instance, Spark: The Definitive Guide is a comprehensive resource, and Learning Spark is an introductory book meant to help users get up and running (both are from O’Reilly).
However, as of this writing, there is neither a book to learn Apache Spark using the R computing language nor a book specifically designed for the R user or the aspiring R user. There are some resources online to learn Apache Spark with R, most notably the spark.rstudio.com site and the Spark documentation site at spark.apache.org. Both sites are great online resources; however, the content is not intended to be read from start to finish and assumes you, the reader, have some knowledge of Apache Spark, R, and cluster computing.

The goal of this book is to help anyone get started with Apache Spark using R. Additionally, because the R programming language was created to simplify data analysis, it is also our belief that this book provides the easiest path for you to learn the tools used to solve data analysis problems with Spark. The first chapters provide an introduction to help anyone get up to speed with these concepts and present the tools required to work on these problems on your own computer. We then quickly ramp up to relevant data science topics, cluster computing, and advanced topics that should interest even the most experienced users. Therefore, this book is intended to be a useful resource for a wide range of users, from beginners curious to learn Apache Spark to experienced readers seeking to understand why and how to use Apache Spark from R.

This book has the following general outline:

Introduction
In the first two chapters, Chapter 1, Introduction, and Chapter 2, Getting Started, you learn about Apache Spark, R, and the tools to perform data analysis with Spark and R.

Analysis
In Chapter 3, Analysis, you learn how to analyze, explore, transform, and visualize data in Apache Spark with R.

Modeling
In Chapter 4, Modeling, and Chapter 5, Pipelines, you learn how to create statistical models with the purpose of extracting information, predicting outcomes, and automating this process in production-ready workflows.