
Imperial College London
Department of Computing

AutoPig - Improving the Big Data user experience

Benjamin Jakobus

Submitted in partial fulfilment of the requirements for the MSc degree in Advanced Computing, September 2013

Abstract

This project proposes solutions towards improving the "big data user experience". This means answering a range of questions: how can we deal with big data more effectively?¹ What are the challenges in dealing with big data, both in terms of development and configuration, and how can this experience be improved? How can we make the big data experience better in terms of both usability and performance?

¹ Within a Hadoop setting.

Acknowledgements

First and foremost I would like to thank my supervisor Dr. Peter McBrien, whose constant guidance and thoughts were crucial to the completion of this project. Dr. McBrien provided me with the input and support that brought reality and perspective to my thinking.

I would like to thank Yu Liu, PhD student at Imperial College London, who, over the course of the past year, helped me with any technical problems I encountered. At any moment, Yu willingly gave his time to teaching and supporting me. His knowledge was invaluable to my understanding of the subject.

To my parents, who provided me with a home, supported me throughout my studies and helped me in so many ways: thanks.

Apache Hive developers Edward Capriolo and Brock Noland: you endured and answered my many (and oftentimes silly) questions and supervised my patch development. Thanks.

Contents

Abstract
Acknowledgements

1 Introduction
  1.1 Motivation and Objectives
  1.2 Report structure
  1.3 Statement of Originality
  1.4 Publications

2 Background Theory
  2.1 Introduction
  2.2 Literature Survey
    2.2.1 Development tools, IDE plugins, text editors
  2.3 Schedulers
    2.3.1 FIFO scheduler
    2.3.2 Fair scheduler
    2.3.3 Capacity scheduler
    2.3.4 Hadoop on Demand (HOD)
    2.3.5 Deadline Constraint Scheduler
    2.3.6 Priority parallel task scheduler
    2.3.7 Intelligent Schedulers
  2.4 Benchmark Overview

3 Problem Analysis and Discussion
  3.1 Unanswered questions - How should we configure Hadoop?
  3.2 How do Pig and Hive compare? How can the two projects be improved upon?
  3.3 How can we improve the overall development experience?

4 Advanced Benchmarks
  4.1 Benchmark design
    4.1.1 Test Data
    4.1.2 Test Cases
    4.1.3 Test Setup
  4.2 Implementation
  4.3 Results
    4.3.1 Hive (TPC-H)
    4.3.2 Pig (TPC-H)
  4.4 Hive vs Pig (TPC-H)
  4.5 Configuration
  4.6 ISO addition - CPU runtimes
  4.7 Conclusion

5 Pig and Hive under the hood
  5.1 Syntax Trees, Logical and Physical Plans
    5.1.1 General design quality and performance
    5.1.2 Naming conventions
    5.1.3 Codesize, coupling and complexity
    5.1.4 Controversial
  5.2 Concrete example - JOIN
  5.3 Evolution over time
  5.4 The Group By operator
  5.5 Hive patch implementation
  5.6 Conclusion

6 Developing an IDE

7 Architecture
  7.1 Benchmarking Application Design
  7.2 HDFS file manager
  7.3 Unix file manager
  7.4 Script editor
  7.5 Notification engine
  7.6 Result analyzer
  7.7 Runtime-manager
    7.7.1 Scheduler
  7.8 User Interface
  7.9 Package Structure
  7.10 Architectural Strategies
    7.10.1 Policies and Tactics
    7.10.2 Design Patterns
  7.11 User Interface Design
    7.11.1 Main Screen
  7.12 Summary

8 Implementation
  8.1 Language Choice
  8.2 Tools and Technologies
  8.3 The Script Editor
    8.3.1 Syntax highlighting
    8.3.2 Search and replace
    8.3.3 Refactoring
    8.3.4 Workspace management
  8.4 Remote File Manager
  8.5 Runtime configuration variables
  8.6 Git interface
  8.7 Code auto-completion
  8.8 Script configuration
  8.9 Remote path checker
  8.10 Auto-deployment, local execution and debugging
  8.11 Error Detection and Recovery
  8.12 Data Persistence
  8.13 Concurrency and Synchronization

9 Testing
  9.1 IDE
    9.1.1 Test Goals
    9.1.2 Unit Testing
    9.1.3 System Testing
    9.1.4 Usability Testing
    9.1.5 Test Specification
  9.2 Test Results
    9.2.1 Unit Test Results
    9.2.2 Usability Test Results
  9.3 Hive Patches
  9.4 Summary

10 Conclusion
  10.1 Overview ...