Hadoop Real-World Solutions Cookbook

Total Pages: 16

File Type: PDF, Size: 1020 KB

Hadoop Real-World Solutions Cookbook
Realistic, simple code examples to solve problems at scale with Hadoop and related technologies

Jonathan R. Owens
Jon Lentz
Brian Femiano

BIRMINGHAM - MUMBAI

Copyright © 2013 Packt Publishing. All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing and its dealers and distributors, will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: February 2013
Production Reference: 1280113

Published by Packt Publishing Ltd.
Livery Place, 35 Livery Street
Birmingham B3 2PB, UK.

ISBN 978-1-84951-912-0

www.packtpub.com

Cover Image by iStockPhoto

Credits

Authors: Jonathan R. Owens, Jon Lentz, Brian Femiano
Reviewers: Edward J. Cody, Daniel Jue, Bruce C. Miller
Acquisition Editor: Robin de Jongh
Lead Technical Editor: Azharuddin Sheikh
Technical Editor: Dennis John
Copy Editors: Brandt D'Mello, Insiya Morbiwala, Aditya Nair, Alfida Paiva, Ruta Waghmare
Project Coordinator: Abhishek Kori
Proofreader: Stephen Silk
Indexer: Monica Ajmera Mehta
Graphics: Conidon Miranda
Layout Coordinator: Conidon Miranda
Cover Work: Conidon Miranda

About the Authors

Jonathan R. Owens has a background in Java and C++, and has worked in both the private and public sectors as a software engineer. Most recently, he has been working with Hadoop and related distributed processing technologies. Currently, he works for comScore, Inc., a widely regarded digital measurement and analytics company. At comScore, he is a member of the core processing team, which uses Hadoop and other custom distributed systems to aggregate, analyze, and manage over 40 billion transactions per day.

I would like to thank my parents James and Patricia Owens, for their support and introducing me to technology at a young age.

Jon Lentz is a Software Engineer on the core processing team at comScore, Inc., an online audience measurement and analytics company. He prefers to do most of his coding in Pig. Before working at comScore, he developed software to optimize supply chains and allocate fixed-income securities.

To my daughter, Emma, born during the writing of this book. Thanks for the company on late nights.

Brian Femiano has a B.S. in Computer Science and has been programming professionally for over 6 years, the last two of which have been spent building advanced analytics and Big Data capabilities using Apache Hadoop. He has worked in the commercial sector in the past, but the majority of his experience comes from the government contracting space. He currently works for Potomac Fusion in the DC/Virginia area, where they develop scalable algorithms to study and enhance some of the most advanced and complex datasets in the government space.
Within Potomac Fusion, he has taught courses and conducted training sessions on Apache Hadoop and related cloud-scale technologies.

I'd like to thank my co-authors for their patience and hard work building the code you see in this book. Also, my various colleagues at Potomac Fusion, whose talent and passion for building cutting-edge capability and promoting knowledge transfer have inspired me.

About the Reviewers

Edward J. Cody is an author, speaker, and industry expert in data warehousing, Oracle Business Intelligence, and Hyperion EPM implementations. He is the author and co-author, respectively, of two books with Packt Publishing, titled The Business Analyst's Guide to Oracle Hyperion Interactive Reporting 11 and The Oracle Hyperion Interactive Reporting 11 Expert Guide. He has consulted for both commercial and federal government clients throughout his career, and is currently managing large-scale EPM, BI, and data warehouse implementations.

I would like to commend the authors of this book for a job well done, and would like to thank Packt Publishing for the opportunity to assist in the editing of this publication.

Daniel Jue is a Sr. Software Engineer at Sotera Defense Solutions and a member of the Apache Software Foundation. He has worked in peace and conflict zones to showcase the hidden dynamics and anomalies in the underlying "Big Data", with clients such as ACSIM, DARPA, and various federal agencies. Daniel holds a B.S. in Computer Science from the University of Maryland, College Park, where he also specialized in Physics and Astronomy. His current interests include merging distributed artificial intelligence techniques with adaptive heterogeneous cloud computing.

I'd like to thank my beautiful wife Wendy, and my twin sons Christopher and Jonathan, for their love and patience while I research and review. I owe a great deal to Brian Femiano, Bruce Miller, and Jonathan Larson for allowing me to be exposed to many great ideas, points of view, and zealous inspiration.

Bruce Miller is a Senior Software Engineer for Sotera Defense Solutions, currently employed at DARPA, with most of his 10-year career focused on Big Data software development. His non-work interests include functional programming in languages like Haskell and Lisp dialects, and their application to real-world problems.

www.packtpub.com

Support files, eBooks, discount offers and more

You might want to visit www.packtpub.com for support files and downloads related to your book. Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.packtpub.com and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details. At www.packtpub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.

http://packtLib.packtPub.com

Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can access, read and search across Packt's entire library of books.

Why Subscribe?
• Fully searchable across every book published by Packt
• Copy and paste, print and bookmark content
• On demand and accessible via web browser

Free Access for Packt account holders

If you have an account with Packt at www.packtpub.com, you can use this to access PacktLib today and view nine entirely free books.
Simply use your login credentials for immediate access.

Table of Contents

Preface 1

Chapter 1: Hadoop Distributed File System – Importing and Exporting Data 7
Introduction 8
Importing and exporting data into HDFS using Hadoop shell commands 8
Moving data efficiently between clusters using Distributed Copy 15
Importing data from MySQL into HDFS using Sqoop 16
Exporting data from HDFS into MySQL using Sqoop 21
Configuring Sqoop for Microsoft SQL Server 25
Exporting data from HDFS into MongoDB 26
Importing data from MongoDB into HDFS 30
Exporting data from HDFS into MongoDB using Pig 33
Using HDFS in a Greenplum external table 35
Using Flume to load data into HDFS 37

Chapter 2: HDFS 39
Introduction 39
Reading and writing data to HDFS 40
Compressing data using LZO 42
Reading and writing data to SequenceFiles 46
Using Apache Avro to serialize data 50
Using Apache Thrift to serialize data 54
Using Protocol Buffers to serialize data 58
Setting the replication factor for HDFS 63
Setting the block size for HDFS 64

Chapter 3: Extracting and Transforming Data 65
Introduction 65
Transforming Apache logs into TSV format using MapReduce 66
Using Apache Pig to filter bot traffic from web server logs 69
Using Apache Pig to sort web server log data by timestamp 72
Using Apache Pig to sessionize web server log data 74
Using Python to extend Apache Pig functionality 77
Using MapReduce and secondary sort to calculate page views 78
Using Hive and Python to clean and transform geographical event data 84
Using Python and Hadoop Streaming to perform a time series analytic 89
Using MultipleOutputs in MapReduce to name output files 94
Creating custom Hadoop Writable and InputFormat to read geographical event data 98

Chapter 4: Performing Common Tasks Using Hive, Pig, and MapReduce 105
Introduction 105
Using Hive to map an external table over weblog data in HDFS 106
Using Hive to dynamically create tables from the results of a weblog query 108
Using the Hive string UDFs to concatenate fields in weblog data 110
Using Hive to intersect weblog IPs and determine the country 113
Generating n-grams over news archives using MapReduce 115
Using the distributed cache in MapReduce to find lines that contain matching keywords over news archives 120
Using Pig to load a table and perform a SELECT operation with GROUP BY 125

Chapter 5: Advanced Joins 127
Introduction 127
Joining data in the Mapper using MapReduce 128
Joining data using Apache Pig replicated join 132
Joining sorted data using Apache Pig merge join 134
Joining skewed data using Apache Pig skewed join 136
Using a map-side join in Apache
Recommended publications
  • Hadoop Tutorial: Part 1 - What is Hadoop? (an Overview)
    LABELS: HADOOP-TUTORIAL, HDFS | 3 October 2013. Hadoop Tutorial: Part 1 - What is Hadoop? (an Overview). Hadoop is an open source software framework that supports data-intensive distributed applications, and is licensed under the Apache v2 license. At least, this is what you are going to find as the first line of the definition of Hadoop on Wikipedia. So what are data-intensive distributed applications? Well, data-intensive is nothing but Big Data (data that has outgrown in size), and distributed applications are applications that work over a network by communicating and coordinating with each other by passing messages (say, using RPC inter-process communication or through a message queue). Hence Hadoop works in a distributed environment and is built to store, handle, and process large amounts of data (petabytes, exabytes, and more). Now, since I am saying that Hadoop stores petabytes of data, this doesn't mean that Hadoop is a database. Again, remember it's a framework that handles large amounts of data for processing. You will get to know the difference between Hadoop and databases (or NoSQL databases, which is what we call Big Data's databases) as you go down the line in the coming tutorials. Hadoop was derived from the research papers published by Google on the Google File System (GFS) and Google's MapReduce. So there are two integral parts of Hadoop: the Hadoop Distributed File System (HDFS) and Hadoop MapReduce. Hadoop Distributed File System (HDFS): HDFS is a filesystem designed for storing very large files with streaming data access patterns, running on clusters of commodity hardware.
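    To make the storage half of that split concrete, here is a minimal sketch (not part of the original tutorial) that copies a local log file into HDFS and lists the target directory through Hadoop's Java FileSystem API; the NameNode address and the paths are placeholder assumptions.

        // Minimal sketch: upload a local file into HDFS and list the directory.
        // The fs.defaultFS address and all paths are invented for illustration.
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileStatus;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class HdfsCopyExample {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                conf.set("fs.defaultFS", "hdfs://namenode:8020");        // assumed NameNode address

                FileSystem fs = FileSystem.get(conf);
                fs.copyFromLocalFile(new Path("access.log"),              // local file, assumed to exist
                                     new Path("/user/demo/access.log"));  // destination in HDFS

                // Confirm the file landed where expected.
                for (FileStatus status : fs.listStatus(new Path("/user/demo"))) {
                    System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
                }
                fs.close();
            }
        }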
  • Combined Documents V2
    Outline: Combining Brainstorming Deliverables
    Table of Contents
    1. Introduction and Definition
    2. Reference Architecture and Taxonomy
    3. Requirements, Gap Analysis, and Suggested Best Practices
    4. Future Directions and Roadmap
    5. Security and Privacy - 10 Top Challenges
    6. Conclusions and General Advice
    Appendix A. Terminology Glossary
    Appendix B. Solutions Glossary
    Appendix C. Use Case Examples
    Appendix D. Actors and Roles
    1. Introduction and Definition. The purpose of this outline is to illustrate how some initial brainstorming documents might be pulled together into an integrated deliverable. The outline will follow the diagram below. Section 1 introduces a definition of Big Data. An extended terminology glossary is found in Appendix A. In Section 2, a Reference Architecture diagram is presented, followed by a taxonomy describing and extending the elements of the Reference Architecture. Section 3 maps requirements from use case building blocks to the Reference Architecture. A description of the requirement, a gap analysis, and a suggested best practice is included with each mapping. In Section 4, future improvements in Big Data technology are mapped to the Reference Architecture. An initial Technology Roadmap is created from the requirements and gap analysis in Section 3 and the expected future improvements from Section 4. Section 5 is a placeholder for an extended discussion of Security and Privacy. Section 6 gives an example of some general advice. The Appendices provide Big Data terminology and solutions glossaries, Use Case Examples, and some possible Actors and Roles. Big Data Definition - "Big Data refers to the new technologies and applications introduced to handle increasing Volumes of data while enhancing data utilization capabilities such as Variety, Velocity, Variability, Veracity, and Value." The key attribute is the large Volume of data available that forces horizontal scalability of storage and processing and has implications for all the other V-attributes.
  • MÁSTER EN INGENIERÍA WEB - Proyecto Fin de Máster (Master's in Web Engineering - Final Master's Project)
    UNIVERSIDAD POLITÉCNICA DE MADRID, Escuela Técnica Superior de Ingeniería de Sistemas Informáticos. MÁSTER EN INGENIERÍA WEB, Final Master's Project: "Estudio Conceptual de Big Data utilizando Spring" (A Conceptual Study of Big Data Using Spring). Author: Gabriel David Muñumel Mesa. Tutor: Jesús Bernal Bermúdez. July 1, 2018. ACKNOWLEDGEMENTS: Thanks to my parents, Julian and Miriam, for all their support and their insistence that I keep studying. Thanks to my aunt Gloria for her advice and ideas. Thanks to my brother José Daniel and my sister-in-law Yule for always reminding me that goals can be reached with work and dedication. ABSTRACT: Big Data is the term coined for the huge volume of data that cannot be processed by traditional methods. Its main functions include data capture, storage, analysis, search, transfer, visualization, monitoring, and modification. Companies have seen in Big Data a powerful tool for improving their business in a world economy firmly based on knowledge. Data is the fuel of modern companies, and making sense of this data therefore makes it possible to truly understand the invisible connections within its origin. In effect, with more information, better decisions are made, allowing the creation of comprehensive and innovative strategies that guarantee successful results. The growing relevance of Big Data in the modern professional environment served as the motivation for this project. Using Java as the development software and Spring as the web framework, the aim is to analyze and verify which tools these technologies offer for applying processes focused on Big Data.
  • Security Log Analysis Using Hadoop, Harikrishna Annangi ([email protected])
    St. Cloud State University, theRepository at St. Cloud State, Culminating Projects in Information Assurance, Department of Information Systems, 3-2017. Security Log Analysis Using Hadoop, Harikrishna Annangi, [email protected]. Recommended Citation: Annangi, Harikrishna, "Security Log Analysis Using Hadoop" (2017). Culminating Projects in Information Assurance. 19. https://repository.stcloudstate.edu/msia_etds/19. Security Log Analysis Using Hadoop, by Harikrishna Annangi. A Starred Paper Submitted to the Graduate Faculty of St. Cloud State University in Partial Fulfillment of the Requirements for the Degree of Master of Science in Information Assurance, April 2016. Starred Paper Committee: Dr. Dennis Guster, Chairperson; Dr. Susantha Herath; Dr. Sneh Kalia. Abstract: Hadoop is used as a general-purpose storage and analysis platform for big data by industries. Commercial Hadoop support is available from large enterprises like EMC, IBM, Microsoft, and Oracle, and from Hadoop companies like Cloudera, Hortonworks, and MapR. Hadoop is a framework written in Java that allows distributed processing of large data sets across clusters of computers using programming models. A Hadoop framework application works in an environment that provides storage and computation across clusters of computers.
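    As a generic illustration of the kind of MapReduce job such a log analysis might begin with (a sketch under assumed inputs, not code from the paper), the mapper below emits a count of one for each HTTP status code found in a web log line; the whitespace-delimited field position is an assumption about the log layout.

        // Sketch of a log-analysis mapper: emit (statusCode, 1) per log line.
        // The field index assumes an Apache combined-log-style layout.
        import java.io.IOException;
        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;

        public class StatusCodeMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text statusCode = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String[] fields = value.toString().split("\\s+");
                if (fields.length > 8) {           // skip malformed lines
                    statusCode.set(fields[8]);     // status code position is an assumption
                    context.write(statusCode, ONE);
                }
            }
        }

    Pairing this mapper with a reducer that sums the counts follows the standard word-count pattern and yields per-status-code totals across the whole log.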
  • Apache Sentry
    Apache Sentry. Prasad Mujumdar, [email protected], [email protected]
    Agenda: ● Various aspects of data security ● Apache Sentry for authorization ● Key concepts of Apache Sentry ● Sentry features ● Sentry architecture ● Integration with the Hadoop ecosystem ● Sentry administration ● Future plans ● Demo ● Questions
    Who am I: Software engineer at Cloudera; committer and PPMC member of Apache Sentry, and also of Apache Hive and Apache Flume; part of the original team that started the Sentry work.
    Aspects of security: Perimeter (Authentication: Kerberos, LDAP/AD); Access (Authorization: what a user can do with data); Visibility (Audit and lineage: data origin and usage); Data (Encryption and masking).
    Access and Authorization: ● Provide user access to data ● Manage access policies ● Provide role-based access to data
    Apache Sentry (Incubating): a unified authorization module for Hadoop that unlocks key RBAC requirements: secure, fine-grained, role-based authorization; multi-tenant administration; and enforcement of a common set of policies across multiple data access paths in Hadoop.
    Key capabilities of Sentry: fine-grained authorization (permissions on object hierarchies, e.g. database, table, columns); role-based authorization (support for role templates to manage authorization for a large set of users and data objects); multi-tenant administration
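    Sentry's role-based model for Hive is typically administered through SQL-style role and grant statements issued against HiveServer2. The sketch below is hypothetical (the connection URL, database, role, and group names are invented) and simply drives those statements from Java over the Hive JDBC driver.

        // Hypothetical sketch: create a Sentry-backed role and grant it via Hive JDBC.
        // URL, credentials, database, role, and group names are invented for illustration.
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class SentryGrantExample {
            public static void main(String[] args) throws Exception {
                Class.forName("org.apache.hive.jdbc.HiveDriver");
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:hive2://hiveserver2:10000/default", "admin", "");
                     Statement stmt = conn.createStatement()) {
                    stmt.execute("CREATE ROLE analyst");
                    stmt.execute("GRANT SELECT ON DATABASE weblogs TO ROLE analyst"); // fine-grained, object-level
                    stmt.execute("GRANT ROLE analyst TO GROUP analysts");             // map the role to a group
                }
            }
        }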
  • Orchestrating Big Data Analysis Workflows in the Cloud: Research Challenges, Survey, and Future Directions
    Orchestrating Big Data Analysis Workflows in the Cloud: Research Challenges, Survey, and Future Directions. MUTAZ BARIKA, University of Tasmania; SAURABH GARG, University of Tasmania; ALBERT Y. ZOMAYA, University of Sydney; LIZHE WANG, China University of Geosciences (Wuhan); AAD VAN MOORSEL, Newcastle University; RAJIV RANJAN, China University of Geosciences and Newcastle University. Interest in processing big data has increased rapidly to gain insights that can transform businesses, government policies and research outcomes. This has led to advancement in communication, programming and processing technologies, including Cloud computing services and technologies such as Hadoop, Spark and Storm. This trend also affects the needs of analytical applications, which are no longer monolithic but composed of several individual analytical steps running in the form of a workflow. These Big Data Workflows are vastly different in nature from traditional workflows. Researchers are currently facing the challenge of how to orchestrate and manage the execution of such workflows. In this paper, we discuss in detail the orchestration requirements of these workflows as well as the challenges in achieving these requirements. We also survey current trends and research that supports orchestration of big data workflows and identify open research challenges to guide future developments in this area. CCS Concepts: • General and reference → Surveys and overviews; • Information systems → Data analytics; • Computer systems organization → Cloud computing. Additional Key Words and Phrases: Big Data, Cloud Computing, Workflow Orchestration, Requirements, Approaches. ACM Reference format: Mutaz Barika, Saurabh Garg, Albert Y. Zomaya, Lizhe Wang, Aad van Moorsel, and Rajiv Ranjan. 2018. Orchestrating Big Data Analysis Workflows in the Cloud: Research Challenges, Survey, and Future Directions.
  • Apache Oozie: The Workflow Scheduler for Hadoop
    Apache Oozie: The Workflow Scheduler for Hadoop. Oozie is a workflow scheduler system for managing Apache Hadoop jobs. Oozie workflow jobs are directed acyclic graphs (DAGs) of actions, where an action describes one or more pieces of work to be executed. Oozie is lightweight because it reuses the existing Hadoop MapReduce framework, and it operates by running as a service in a Hadoop cluster, with clients submitting workflow definitions for immediate or delayed processing. Workflows can also be started on a schedule or triggered by events, and the Oozie server is accessed using REST. Apache Oozie gives you the power to handle these kinds of scenarios easily.
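    As a rough sketch of how a client hands a workflow definition to the Oozie service (illustrative only; the Oozie URL, the HDFS path to the directory holding workflow.xml, and the cluster addresses are all invented), Oozie's Java client API can submit a workflow and poll its status like this:

        // Illustrative sketch using Oozie's Java client API; all addresses and
        // the HDFS application path are invented for the example.
        import java.util.Properties;
        import org.apache.oozie.client.OozieClient;
        import org.apache.oozie.client.WorkflowJob;

        public class OozieSubmitExample {
            public static void main(String[] args) throws Exception {
                OozieClient client = new OozieClient("http://oozie-host:11000/oozie");

                Properties conf = client.createConfiguration();
                conf.setProperty(OozieClient.APP_PATH, "hdfs://namenode:8020/user/demo/my-wf"); // dir containing workflow.xml
                conf.setProperty("nameNode", "hdfs://namenode:8020");
                conf.setProperty("jobTracker", "resourcemanager-host:8032");

                String jobId = client.run(conf);             // submit and start the workflow
                System.out.println("Submitted workflow " + jobId);

                WorkflowJob job = client.getJobInfo(jobId);  // poll the current status
                System.out.println("Status: " + job.getStatus());
            }
        }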
  • An Enterprise Knowledge Network
    Fogbeam Labs Cut Through The Information Fog http://www.fogbeam.com An Enterprise Knowledge Network Knowledge exists in many forms inside your organization – ranging from tacit knowledge which exists only in the minds of the users who possess it, to codified knowledge stored in databases and document repositories. Unfortunately while knowledge exists throughout the organization, it is often not easy (if even possible) to locate, use, share, and reuse existing knowledge. This results in a situation often described as “the left hand doesn't know what the right hand is doing” and damages morale as employees spend their days frustrated and complaining that “nobody knows what is going on around here”. The obstacles that hinder access to existing knowledge can be cultural, geographical, social, and/or technological. And while no technological solution can guarantee perfect knowledge-sharing, tools drawn from big data, data mining / machine learning, deep learning, and artificial intelligence techniques can improve an organization's power to generate, capture, use, share and reuse knowledge. Using technologies developed as part of the semantic web initiative, and applying the principles of linked data within the enterprise, the Fogbeam Labs Enterprise Knowledge Network approach can help your firm integrate and aggregate knowledge which is spread across your existing enterprise applications, content repositories and Intranet. An Enterprise Knowledge Network enables your firm's capabilities to: • engage in high levels of knowledge transfer and
  • Talend Open Studio for Big Data Release Notes
    Talend Open Studio for Big Data Release Notes 6.0.0 Talend Open Studio for Big Data Adapted for v6.0.0. Supersedes previous releases. Publication date July 2, 2015 Copyleft This documentation is provided under the terms of the Creative Commons Public License (CCPL). For more information about what you can and cannot do with this documentation in accordance with the CCPL, please read: http://creativecommons.org/licenses/by-nc-sa/2.0/ Notices Talend is a trademark of Talend, Inc. All brands, product names, company names, trademarks and service marks are the properties of their respective owners. License Agreement The software described in this documentation is licensed under the Apache License, Version 2.0 (the "License"); you may not use this software except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.html. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. This product includes software developed at AOP Alliance (Java/J2EE AOP standards), ASM, Amazon, AntlR, Apache ActiveMQ, Apache Ant, Apache Avro, Apache Axiom, Apache Axis, Apache Axis 2, Apache Batik, Apache CXF, Apache Cassandra, Apache Chemistry, Apache Common Http Client, Apache Common Http Core, Apache Commons, Apache Commons Bcel, Apache Commons JxPath, Apache
  • Return of Organization Exempt from Income Tax
    OMB No. 1545-0047. Form 990: Return of Organization Exempt From Income Tax, under section 501(c), 527, or 4947(a)(1) of the Internal Revenue Code (except black lung benefit trust or private foundation). Open to Public Inspection. Department of the Treasury, Internal Revenue Service. The organization may have to use a copy of this return to satisfy state reporting requirements.
    A: For the 2011 calendar year, or tax year beginning 5/1/2011 and ending 4/30/2012.
    C: Name of organization: The Apache Software Foundation. Doing Business As: (blank).
    D: Employer identification number: 47-0825376.
    Number and street: 1901 Munsey Drive. City or town, state or country, and ZIP + 4: Forest Hill, MD 21050-2747.
    E: Telephone number: (909) 374-9776.
    G: Gross receipts: $554,439.
    F: Name and address of principal officer: Jim Jagielski, 1901 Munsey Drive, Forest Hill, MD 21050-2747.
    H(a): Is this a group return for affiliates? Yes / X No. H(b): Are all affiliates included? Yes / No. If "No," attach a list (see instructions). H(c): Group exemption number: (blank).
    I: Tax-exempt status: X 501(c)(3).
    J: Website: http://www.apache.org/
    K: Form of organization: X Corporation. L: Year of formation: 1999. M: State of legal domicile: MD.
    Part I, Summary. 1: Briefly describe the organization's mission or most significant activities: to provide open source software to the public that we sponsor free of charge. 2: Check this box if the organization discontinued its operations or disposed of more than 25% of its net assets.
  • Realization of Big Data Analytics Tool for Optimization Processes within the Finnish Engineering Company
    THESIS - UNIVERSITY OF APPLIED SCIENCES DEGREE, TECHNOLOGY, COMMUNICATION AND TRANSPORT. REALIZATION OF BIG DATA ANALYTICS TOOL FOR OPTIMIZATION PROCESSES WITHIN THE FINNISH ENGINEERING COMPANY. Author: Karapetyan Karina. SAVONIA UNIVERSITY OF APPLIED SCIENCES, THESIS Abstract. Field of Study: Technology, Communication and Transport. Degree Programme: Degree Programme in Information Technology. Author(s): Karapetyan Karina. Title of Thesis: Realization of Big Data Analytics Tool for optimization processes within the Finnish engineering company. Date: 23.05.2016. Pages/Appendices: 54. Supervisor(s): Mr. Arto Toppinen, Principal Lecturer at Savonia University of Applied Sciences; Mr. Anssi Suhonen, Lecturer at Savonia University of Applied Sciences. Client Organisation/Partners: Hydroline Oy. Abstract: A Big Data Analytics Tool offers an entire business picture for making both operational and strategic decisions, from selecting the product price to establishing the priorities for the vendor's further enhancement. The purpose of the thesis was to explore the industrial system of Hydroline Oy and provide a software solution for improving the manufacturing process through internal analysis within the company. For the development of the Big Data Analytics Tool, several software programs and tools were employed. A Java-written server controls all components in the project and visualizes the processed data via a user-friendly client web application. The SQL Server maintains the data observed from the ERP system. Moreover, it is responsible for the login and registration procedure to enforce information security. In the Hadoop environment, two research methods were implemented. The Overall Equipment Effectiveness model investigated the production data to obtain daily, monthly and annual efficiency indices of equipment utilization, employees' workload, resource management, quality degree, among others.
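    The Overall Equipment Effectiveness figure mentioned above is conventionally the product of availability, performance, and quality rates. The snippet below is a minimal sketch of that standard calculation, not the thesis's actual implementation, and the sample numbers are invented.

        // Standard OEE formula: OEE = Availability x Performance x Quality.
        // The example figures are invented, not data from the thesis.
        public class OeeExample {
            static double oee(double runTime, double plannedTime,
                              double actualOutput, double theoreticalOutput,
                              double goodUnits, double totalUnits) {
                double availability = runTime / plannedTime;             // share of planned time spent running
                double performance  = actualOutput / theoreticalOutput;  // actual vs. ideal production rate
                double quality      = goodUnits / totalUnits;            // defect-free share of output
                return availability * performance * quality;
            }

            public static void main(String[] args) {
                // e.g. 7 h of run time in an 8 h shift, 410 of 480 possible units, 400 of them good
                System.out.printf("OEE = %.1f%%%n", oee(7.0, 8.0, 410, 480, 400, 410) * 100);
            }
        }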
  • MapReduce Service
    MapReduce Service Troubleshooting Issue 01 Date 2021-03-03 HUAWEI TECHNOLOGIES CO., LTD. Copyright © Huawei Technologies Co., Ltd. 2021. All rights reserved. No part of this document may be reproduced or transmitted in any form or by any means without prior written consent of Huawei Technologies Co., Ltd. Trademarks and Permissions and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd. All other trademarks and trade names mentioned in this document are the property of their respective holders. Notice The purchased products, services and features are stipulated by the contract made between Huawei and the customer. All or part of the products, services and features described in this document may not be within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information, and recommendations in this document are provided "AS IS" without warranties, guarantees or representations of any kind, either express or implied. The information in this document is subject to change without notice. Every effort has been made in the preparation of this document to ensure accuracy of the contents, but all statements, information, and recommendations in this document do not constitute a warranty of any kind, express or implied. Issue 01 (2021-03-03) Copyright © Huawei Technologies Co., Ltd. i MapReduce Service Troubleshooting Contents Contents 1 Account Passwords.................................................................................................................. 1 1.1 Resetting