Building Open Source ETL Solutions with Pentaho Data Integration

Total pages: 16

File type: PDF, size: 1020 KB

Pentaho® Kettle Solutions: Building Open Source ETL Solutions with Pentaho Data Integration

Matt Casters, Roland Bouman, Jos van Dongen

Published by Wiley Publishing, Inc., 10475 Crosspoint Boulevard, Indianapolis, IN 46256, www.wiley.com

Copyright © 2010 by Wiley Publishing, Inc., Indianapolis, Indiana. Published simultaneously in Canada.

ISBN: 978-0-470-63517-9; ISBN: 9780470942420 (ebk); ISBN: 9780470947524 (ebk); ISBN: 9780470947531 (ebk)

Manufactured in the United States of America. 10 9 8 7 6 5 4 3 2 1

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: The publisher and the author make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation warranties of fitness for a particular purpose. No warranty may be created or extended by sales or promotional materials. The advice and strategies contained herein may not be suitable for every situation. This work is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional services. If professional assistance is required, the services of a competent professional person should be sought. Neither the publisher nor the author shall be liable for damages arising herefrom. The fact that an organization or Web site is referred to in this work as a citation and/or a potential source of further information does not mean that the author or the publisher endorses the information the organization or Web site may provide or recommendations it may make. Further, readers should be aware that Internet Web sites listed in this work may have changed or disappeared between when this work was written and when it is read.

For general information on our other products and services please contact our Customer Care Department within the United States at (877) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Library of Congress Control Number: 2010932421

Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates, in the United States and other countries, and may not be used without written permission. Pentaho is a registered trademark of Pentaho, Inc. All other trademarks are the property of their respective owners. Wiley Publishing, Inc. is not associated with any product or vendor mentioned in this book.
For my wife and kids, Kathleen, Sam and Hannelore. Your love and joy keeps me sane in crazy times. —Matt

For my wife, Annemarie, and my children, David, Roos, Anne and Maarten. Thanks for bearing with me—I love you! —Roland

For my children Thomas and Lisa, and for Yvonne, to whom I owe more than words can express. —Jos

About the Authors

Matt Casters has been an independent business intelligence consultant for many years and has implemented numerous data warehouses and BI solutions for large companies. For the last 8 years, Matt has kept himself busy with the development of an ETL tool called Kettle. This tool was open sourced in December 2005 and acquired by Pentaho early in 2006. Since then, Matt has held the position of Chief Data Integration at Pentaho, where he continues to be lead developer for Kettle. Matt tries to help the Kettle community in any way possible; he answers questions on the forum and speaks occasionally at conferences all around the world. He has a blog at http://www.ibridge.be and you can follow his @mattcasters account on Twitter.

Roland Bouman has been working in the IT industry since 1998 and is currently working as a web and business intelligence developer. Over the years he has focused on open source software, in particular database technology, business intelligence, and web development frameworks. He's an active member of the MySQL and Pentaho communities, and a regular speaker at international conferences, such as the MySQL User Conference, OSCON, and Pentaho community events. Roland co-authored the MySQL 5.1 Cluster Certification Guide and Pentaho Solutions, and was a technical reviewer for a number of MySQL and Pentaho related book titles. He maintains a technical blog at http://rpbouman.blogspot.com and tweets as @rolandbouman on Twitter.

Jos van Dongen is a seasoned business intelligence professional and well-known author and presenter. He has been involved in software development, business intelligence, and data warehousing since 1991. Before starting his own consulting practice, Tholis Consulting, in 1998, he worked for a top-tier systems integrator and a leading management consulting firm. Over the past years, he has successfully implemented BI and data warehouse solutions for a variety of organizations, both commercial and non-profit. Jos covers new BI developments for the Dutch Database Magazine and speaks regularly at national and international conferences. He authored one book on open source BI and is co-author of the book Pentaho Solutions. You can find more information about Jos on http://www.tholis.com or follow @josvandongen on Twitter.

Credits

Executive Editor: Robert Elliott
Project Editor: Sara Shlaer
Technical Editors: Jens Bleuel, Sven Boden, Kasper de Graaf, Daniel Einspanjer, Nick Goodman, Mark Hall, Samatar Hassan, Benjamin Kallmann, Bryan Senseman, Johannes van den Bosch
Production Editor: Daniel Scribner
Copy Editor: Nancy Rapoport
Editorial Director: Robyn B. Siesky
Editorial Manager: Mary Beth Wakefield
Marketing Manager: Ashley Zurcher
Production Manager: Tim Tate
Vice President and Executive Group Publisher: Richard Swadley
Vice President and Executive Publisher: Barry Pruett
Associate Publisher: Jim Minatel
Project Coordinator, Cover: Lynsey Stanford
Compositor: Maureen Forys, Happenstance Type-O-Rama
Proofreader: Nancy Bell
Indexer: Robert Swanson
Cover Designer: Ryan Sneed

Acknowledgments

This book is the result of the efforts of many individuals.
By convention, authors receive explicit credit and get to have their names printed on the book cover, but creating this book would not have been possible without a lot of hard work behind the scenes. We, the authors, would like to express our gratitude to a number of people who provided substantial contributions and thus helped define and shape the final result that is Pentaho Kettle Solutions.

First, we'd like to thank those individuals who contributed directly to the material that appears in the book:

• Ingo Klose suggested an elegant solution to generate keys starting from a given offset within a single transformation (this solution is discussed in Chapter 8, "Handling Dimension Tables," subsection "Generating Surrogate Keys Based on a Counter," shown in Figure 8-2); a small illustrative sketch of the idea appears after these acknowledgments.
• Samatar Hassan provided text as well as working example transformations to demonstrate Kettle's RSS capabilities. Samatar's contribution is included almost completely and appears in the RSS section of Chapter 21, "Web Services."
• Thanks to Mike Hillyer and the MySQL documentation team for creating and maintaining the Sakila sample database, which is introduced in Chapter 4 and appears in many examples throughout this book.
• Although only three authors appear on the cover, there was actually a fourth one: we cannot thank Kasper de Graaf of DIKW-Academy enough for writing the Data Vault chapter, which has benefited greatly from his deep expertise on this subject. Special thanks also to Johannes van den Bosch, who did a great job reviewing Kasper's work and gave another boost to the overall quality and clarity of the chapter.
• Thanks to Bernd Aschauer and Robert Wintner, both from Aschauer EDV (http://www.aschauer-edv.at/en), for providing the examples and screenshots used in the SAP section of Chapter 6, "Data Extraction."
• Daniel Einspanjer of the Mozilla Foundation provided sample transformations for Chapter 7, "Cleansing and Conforming."

Thanks for your contributions. This book benefited substantially from your efforts.

Much gratitude goes out to all of our technical reviewers. Providing a good technical review is hard and time-consuming, and we have been very lucky to find a collection of such talented and seasoned Pentaho and Kettle experts willing to find some time in their busy schedules to provide us with the kind of quality review required to write a book of this size and scope.

We'd also like to thank the Kettle and Pentaho communities. During and before the writing of this book, individuals from these communities provided valuable suggestions and ideas to all three authors for topics to cover in a book that focuses on ETL, data integration, and Kettle. We hope this book will be useful and practical for everybody who is using or planning to use Kettle. Whether we succeeded is up to the reader, but if we did, we have to thank individuals in the Kettle and Pentaho communities for helping us achieve it.
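The counter-plus-offset technique credited to Ingo Klose above can be illustrated outside Kettle as well. The following Python lines are our own minimal sketch, not the book's actual transformation; the function and field names are hypothetical, and in practice the offset would come from looking up the current maximum key in the dimension table.

    # Minimal sketch (hypothetical names): assign surrogate keys to new
    # dimension rows, counting up from a given offset in a single pass.
    def assign_surrogate_keys(rows, offset):
        """Yield each row extended with a surrogate key of offset + counter."""
        for counter, row in enumerate(rows, start=1):
            yield {**row, "surrogate_key": offset + counter}

    # The offset would normally come from SELECT MAX(key) on the dimension.
    new_customers = [{"name": "Alice"}, {"name": "Bob"}]
    for row in assign_surrogate_keys(new_customers, offset=1000):
        print(row)  # {'name': 'Alice', 'surrogate_key': 1001}, then 1002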
Recommended publications
  • Star Schema Modeling with Pentaho Data Integration
    A tutorial on star schema modeling with Pentaho Data Integration. It presents the basic modeling approach using dimension tables and fact tables, contrasts the entity-relationship diagram (ERD) of the original database with the star schema of the data warehouse, and introduces modeling via cubes and a look at how these models are derived. The data presentation level is treated as the interface between the system and the end user, with the dimensional model exposed through an OLAP cube (in Pentaho's case, a Mondrian schema). The tutorial also covers setting up database connections in Pentaho Data Integration, writing results to an XML file using the XML Output step, and some thoughts on Data Vault versus star schemas as the BI backend.
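    To make the dimension-table/fact-table relationship above concrete, here is a small illustrative Python sketch of our own (not taken from the tutorial; the table and field names are hypothetical): loading a fact row means resolving each natural key against its dimension table to obtain surrogate keys.

        # Hypothetical star schema: two dimension lookups feed one fact row.
        date_dim = {"2010-07-01": 20100701}   # natural key -> surrogate key
        product_dim = {"SKU-42": 7}

        def load_fact(sale):
            """Resolve dimension keys for a sale, then emit a fact-table row."""
            return {
                "date_key": date_dim[sale["date"]],
                "product_key": product_dim[sale["sku"]],
                "quantity": sale["qty"],
                "amount": sale["amount"],
            }

        print(load_fact({"date": "2010-07-01", "sku": "SKU-42",
                         "qty": 3, "amount": 29.85}))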
  • Data Vault and 'The Truth' About the Enterprise Data Warehouse
    Data Vault and ‘The Truth’ about the Enterprise Data Warehouse. Roelant Vos – 04-05-2012 – Brisbane, Australia.

    Introduction. More often than not, when discussions about data modeling and information architecture move toward the Enterprise Data Warehouse (EDW), heated debates occur and (alternative) solutions are proposed, supported by claims that these alternatives are quicker and easier to develop than the cumbersome EDW while also delivering value to the business in a more direct way. Apparently, the EDW has a bad image; an image that seems to be associated with long development times, high complexity, difficulties in maintenance, and a falling short on promises in general. It is true that in the past many EDW projects have suffered from stagnation during the data integration (development) phase. These projects have stalled in a cycle of changes in ETL and data model design and the subsequent unit and acceptance testing. No usable information is presented to the business users while being stuck in this vicious cycle, and as a result an image of inability is painted by the Data Warehouse team (and often at high costs). Why are things this way? One of the main reasons is that the Data Warehouse / Business Intelligence industry defines and works with architectures that do not (cannot) live up to their promises. To date there are (still) many Data Warehouse specialists who argue that an EDW is always expensive, complex, and monolithic, whereas the real message should be that an EDW is in fact driven by business cases, adaptive, and quick to deliver.
  • Table of Contents
    The Kimball Group Reader: Relentlessly Practical Tools for Data Warehousing and Business Intelligence, Remastered Collection. Ralph Kimball and Margy Ross, with Bob Becker, Joy Mundy, and Warren Thornthwaite.

    Contents

    Introduction
    1 The Reader at a Glance
      Setting Up for Success
        1.1 Resist the Urge to Start Coding
        1.2 Set Your Boundaries
      Tackling DW/BI Design and Development
        1.3 Data Wrangling
        1.4 Myth Busters
        1.5 Dividing the World
        1.6 Essential Steps for the Integrated Enterprise Data Warehouse
        1.7 Drill Down to Ask Why
        1.8 Slowly Changing Dimensions
        1.9 Judge Your BI Tool through Your Dimensions
        1.10 Fact Tables
        1.11 Exploit Your Fact Tables
    2 Before You Dive In
      Before Data Warehousing
        2.1 History Lesson on Ralph Kimball and Xerox PARC
      Historical Perspective
        2.2 The Database Market Splits
        2.3 Bringing Up Supermarts
      Dealing with Demanding Realities
        2.4 Brave New Requirements for Data Warehousing
        2.5 Coping with the Brave New Requirements
        2.6 Stirring Things Up
        2.7 Design Constraints and Unavoidable Realities
        2.8 Two Powerful Ideas
        2.9 Data Warehouse Dining Experience
        2.10 Easier Approaches for Harder Problems
        2.11 Expanding Boundaries of the Data Warehouse
    3 Project/Program Planning
      Professional Responsibilities
        3.1 Professional Boundaries
        3.2 An Engineer's View
        3.3 Beware the Objection Removers
  • The Network Data Storage Model Based on the HTML
    Proceedings of the 2nd International Conference on Computer Science and Electronics Engineering (ICCSEE 2013)

    The Network Data Storage Model Based on the HTML

    Lei Li (Network Information Center), Jianping Zhou (Network Information Center), Runhua Wang (Teaching Quality and Assessment Office), Yi Tang (School of Electronic and Information Engineering), Chongqing University of Science and Technology, Chongqing, China

    Abstract—In this paper, mass-oriented HTML data refers to HTML data sets large enough that they can no longer be handled by traditional methods of treatment. In the past, Web search engine creators were the first to bear the brunt of this problem. Today, various social networks, mobile applications, and all kinds of sensors and scientific fields create petabytes of HTML data daily. In order to deal with the massive data processing challenges for HTML, Google created MapReduce, and Google and Yahoo created Hadoop, out of which hatched an intact ecosystem of mass-oriented HTML data processing tools, including popular NoSQL databases for HTML. This article describes the SMAQ stack model and the tools that today may be included in the model as mass-oriented HTML data processing tools.

    II. MAPREDUCE

    MapReduce was created by Google for building its web page index. The MapReduce framework has become today's most widespread mass-oriented HTML data processing platform. The key of MapReduce is that a query over an HTML data collection is divided and then executed in parallel at multiple nodes.
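    The split-then-parallelize idea in the MapReduce description above can be shown in a few lines. This is a toy, single-process Python sketch of the programming model only (our own illustration, not Google's or Hadoop's implementation); in a real cluster the map calls would run on many nodes.

        from collections import defaultdict

        def map_phase(document):
            # Map: turn one input record into (key, value) pairs.
            return [(word, 1) for word in document.split()]

        def reduce_phase(pairs):
            # Reduce: combine all values that share a key.
            counts = defaultdict(int)
            for word, n in pairs:
                counts[word] += n
            return dict(counts)

        docs = ["big data tools", "big data processing"]
        pairs = [p for doc in docs for p in map_phase(doc)]  # parallel in reality
        print(reduce_phase(pairs))
        # {'big': 2, 'data': 2, 'tools': 1, 'processing': 1}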
  • Projects – Other Than Hadoop! Created By:-Samarjit Mahapatra [email protected]
    Projects – other than Hadoop! Created by: Samarjit Mahapatra [email protected]

    Mostly compatible with Hadoop/HDFS:
    • Apache Drill – provides low-latency ad-hoc queries to many different data sources, including nested data. Inspired by Google's Dremel, Drill is designed to scale to 10,000 servers and query petabytes of data in seconds.
    • Apache Hama – a pure BSP (Bulk Synchronous Parallel) computing framework on top of HDFS for massive scientific computations such as matrix, graph and network algorithms.
    • Akka – a toolkit and runtime for building highly concurrent, distributed, and fault-tolerant event-driven applications on the JVM.
    • ML-Hadoop – Hadoop implementation of machine learning algorithms.
    • Shark – a large-scale data warehouse system for Spark designed to be compatible with Apache Hive. It can execute Hive QL queries up to 100 times faster than Hive without any modification to the existing data or queries. Shark supports Hive's query language, metastore, serialization formats, and user-defined functions, providing seamless integration with existing Hive deployments and a familiar, more powerful option for new ones.
    • Apache Crunch – a Java library that provides a framework for writing, testing, and running MapReduce pipelines. Its goal is to make pipelines that are composed of many user-defined functions simple to write, easy to test, and efficient to run.
    • Azkaban – a batch workflow job scheduler created at LinkedIn to run their Hadoop jobs.
    • Apache Mesos – a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, MPI, Hypertable, Spark, and other applications on a dynamically shared pool of nodes.
  • Database Software Market: The Long-Awaited Shake-up
    Equity Research: Technology, Media, & Communications | Enterprise and Cloud Infrastructure. March 22, 2019 – Industry Report.

    Database Software Market: The Long-Awaited Shake-up

    Jason Ader +1 617 235 7519 [email protected]
    Billy Fitzsimmons +1 312 364 5112 [email protected]
    Naji +1 212 245 6508 [email protected]

    Please refer to important disclosures on pages 70 and 71. Analyst certification is on page 70. William Blair or an affiliate does and seeks to do business with companies covered in its research reports. As a result, investors should be aware that the firm may have a conflict of interest that could affect the objectivity of this report. This report is not intended to provide personal investment advice. The opinions and recommendations herein do not take into account individual client circumstances, objectives, or needs and are not intended as recommendations of particular securities, financial instruments, or strategies to particular clients. The recipient of this report must make its own independent decisions regarding any securities or financial instruments mentioned herein.

    Contents: Key Findings; Introduction; Database Market History; Market Definitions
  • Basically Speaking, Inmon Professes the Snowflake Schema While Kimball Relies on the Star Schema
    What is the main difference between Inmon and Kimball? Basically speaking, Inmon professes the snowflake schema while Kimball relies on the star schema.

    According to Ralph Kimball: Kimball views data warehousing as a constituency of data marts. Data marts are focused on delivering business objectives for departments in the organization, and the data warehouse is a conformed dimension of the data marts. Hence a unified view of the enterprise can be obtained from the dimension modeling on a local departmental level. He follows a bottom-up approach, i.e., first create individual data marts from the existing sources and then create the data warehouse. KIMBALL – first data marts – combined – data warehouse.

    According to Bill Inmon: Inmon believes in creating a data warehouse on a subject-by-subject area basis. Hence the development of the data warehouse can start with whichever subject area's data is needed first; point-of-sale (POS) data can be added later if management decides it is necessary. He follows a top-down approach, i.e., first create the data warehouse from the existing sources and then create individual data marts. INMON – first data warehouse – later – data marts.

    The main difference is: Kimball follows dimensional modeling; Inmon follows ER modeling. Kimball: create data marts first, then combine them to form a data warehouse. Inmon: create the data warehouse first, then data marts.

    What is the difference between views and materialized views? Views: store the SQL statement in the database and let you use it as a table. Every time you access the view, the SQL statement executes. A view is a pseudo-table: it is not stored in the database, it is just a query.
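    The view behavior described above is easy to demonstrate. Below is a small sketch of our own using SQLite from Python (not part of the original Q&A); since SQLite has no materialized views, a plain table filled once from the query stands in for one, which is exactly the staleness trade-off the answer describes.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
        con.executemany("INSERT INTO sales VALUES (?, ?)",
                        [("north", 10.0), ("south", 20.0)])

        # View: only the SQL text is stored; it re-executes on every access.
        con.execute("CREATE VIEW v_totals AS "
                    "SELECT region, SUM(amount) t FROM sales GROUP BY region")

        # 'Materialized view' stand-in: query results copied into a real table.
        con.execute("CREATE TABLE m_totals AS "
                    "SELECT region, SUM(amount) t FROM sales GROUP BY region")

        con.execute("INSERT INTO sales VALUES ('north', 5.0)")
        print(con.execute("SELECT * FROM v_totals ORDER BY region").fetchall())
        # [('north', 15.0), ('south', 20.0)] -- reflects the new row
        print(con.execute("SELECT * FROM m_totals ORDER BY region").fetchall())
        # [('north', 10.0), ('south', 20.0)] -- stale until refreshed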
  • MASTER's THESIS Role of Metadata in the Datawarehousing Environment
    2006:24 MASTER'S THESIS. Role of Metadata in the Data Warehousing Environment. Kranthi Kumar Parankusham, Ravinder Reddy Madupu. Luleå University of Technology, Master Thesis, Continuation Courses, Computer and Systems Science, Department of Business Administration and Social Sciences, Division of Information Systems Sciences. 2006:24 - ISSN: 1653-0187 - ISRN: LTU-PB-EX--06/24--SE

    Preface. This study was performed as part of the master's programme in computer and system sciences during 2005–06. It has been a very useful and valuable experience, and we have learned a lot during the study, not only about the topic at hand but also how to manage the work in the specified time. However, this workload would not have been manageable if we had not received help and support from a number of people whom we would like to mention. First of all, we would like to thank our professor Svante Edzen for his help and supervision during the writing of the thesis. We would also like to express our gratitude to all the employees who allocated their valuable time to share their professional experience. On a personal level, Kranthi would like to thank all his family for their help, and especially his friends Kiran, Chenna Reddy, and Deepak Kumar. Ravi would like to give the greatest of thanks to his family for always being there when needed, and for constantly taking such extremely good care of him. Also, thanks to all his friends for being close to him. Luleå University of Technology, 31 January 2006. Kranthi Kumar Parankusham, Ravinder Reddy Madupu

    Abstract. In order for a well-functioning data warehouse to succeed, many components must work together.
  • Kimball Vs. Inmon
    © 2011 - Andy Hogg

    Kimball vs. Inmon

    "Neither are any wars so furious and bloody, or of so long continuance as those occasioned by difference in opinion, especially if it be in things indifferent." (Swift, 1726)

    In the world of the data warehouse (DW) there are two dominant and opposing dogmas. Zealots of both extol the virtues of their chosen doctrine with religious fervour, whilst decrying the beliefs of the other. These doctrines have existed for years, and in that time innumerable DWs have been built upon the principles of William Inmon. Likewise, an incalculable number built upon the ideas of Ralph Kimball. Inmon's Corporate Information Factory (CIF) is a top-down approach. Since the whole DW is built in advance of usage, it requires significant time to deliver value. It therefore requires unwavering sponsorship from a senior figure within the organisation, possessing long-term vision of the DW's value. Commentators contrast Kimball's Bus Architecture (BA) as a bottom-up approach, where the data marts (DMs) are built first and unified into a DW at the end of the process. Inmon (n.d.a) ridicules this: "…in bottom up data warehouse development first one data mart is developed, then another data mart is developed, then one day - presto - you magically and effortlessly wake up and have a data warehouse". Kimball (2003) dislikes the bottom-up description: "Bottom-up is typically viewed as quick and dirty – focused on the needs of a single department rather than the enterprise". He maintains the BA is a holistic view of the enterprise, with a final overall structure planned from the outset.
  • Big Data: A Survey – The New Paradigms, Methodologies and Tools
    Big Data: A Survey – The New Paradigms, Methodologies and Tools. Enrico Giacinto Caldarola and Antonio Maria Rinaldi. Affiliations: Department of Electrical Engineering and Information Technologies, University of Naples Federico II, Napoli, Italy; Institute of Industrial Technologies and Automation, National Research Council, Bari, Italy; IKNOS-LAB Intelligent and Knowledge Systems, University of Naples Federico II, LUPT 80134, via Toledo, 402-Napoli, Italy.

    Keywords: Big Data, NoSQL, NewSQL, Big Data Analytics, Data Management, Map-reduce.

    Abstract: For several years we have been living in the era of information. Since any human activity is carried out by means of information technologies and tends to be digitized, it produces a humongous stack of data that becomes more and more attractive to different stakeholders such as data scientists, entrepreneurs, or just private individuals. All of them are interested in the possibility of gaining a deep understanding about people and things by accurately and wisely analyzing the gold mine of data they produce. The reason for such interest derives from the competitive advantage and the increase in revenues expected from this deep understanding. In order to help analysts reveal the insights hidden behind data, new paradigms, methodologies, and tools have emerged in recent years. There has been such a great explosion of technological solutions that a review of the current state of the art in the Big Data technologies scenario is needed. Thus, after a characterization of the new paradigm under study, this work surveys the most widespread technologies under the Big Data umbrella, through a qualitative analysis of their characterizing features.
  • Big Data Solutions and Applications
    P. Bellini, M. Di Claudio, P. Nesi, N. Rauch. Slides from: M. Di Claudio, N. Rauch.

    Big Data Solutions and Applications

    Slides for the course: Sistemi Collaborativi e di Protezione, Corso di Laurea Magistrale in Ingegneria, 2013/2014. http://www.disit.dinfo.unifi.it – Distributed Data Intelligence and Technology Lab.

    Related to the following chapter in the book: P. Bellini, M. Di Claudio, P. Nesi, N. Rauch, "Tassonomy and Review of Big Data Solutions Navigation", in "Big Data Computing", Ed. Rajendra Akerkar, Western Norway Research Institute, Norway, Chapman and Hall/CRC Press, ISBN 978-1-46-657837-1, eBook 978-1-46-657838-8, July 2013, in press. http://www.tmrfindia.org/bigdata.html

    Index
    1. The Big Data Problem: 5V's of Big Data; Big Data Problem; CAP Principle; Big Data Analysis Pipeline
    2. Big Data Application Fields
    3. Overview of Big Data Solutions
    4. Data Management Aspects: NoSQL Databases; MongoDB
    5. Architectural Aspects: Riak
    6. Access/Data Rendering Aspects
    7. Data Analysis and Mining/Ingestion Aspects
    8. Comparison and Analysis of Architectural Features: Data Management; Data Access and Visual Rendering; Mining/Ingestion; Data Analysis; Architecture

    Part 1: The Big Data Problem

    What is Big Data? The 5V's of Big Data: Volume, Velocity, Variety, Variability, Value.

    5V's of Big Data (1/2):
    • Volume: Companies amassing terabytes/petabytes of information, and they always look for faster, more efficient, and lower-cost solutions for data management.
  • Educational Open Government Data: from Requirements to End Users
    Educational Open Government Data: from requirements to end users. Rudolf Eckelberg, Vytor Bezerra Calixto, Marina Hoshiba Pimentel, Marcos Didonet Del Fabro, Marcos Sunyé, Leticia M. Peres, Eduardo Todt, Thiago Alves, Adriana Dragone, and Gabriela Schneider. C3SL and NuPE Labs, Federal University of Paraná, Curitiba, Brazil. {rce16,vsbc14,marina,marcos.ddf,sunye,lmperes,todt}@inf.ufpr.br, [email protected], [email protected], [email protected]

    Abstract. The large availability of open government data raises enormous opportunities for open big data analytics. However, providing an end-to-end framework able to handle tasks from data extraction and processing to a web interface involves many challenges. One critical factor is the existence of many players with different knowledge who need to interact, such as application domain experts, database designers, and web developers. This represents a knowledge gap that is difficult to overcome. In this paper, we present a case study for big data analytics over Brazilian educational data, with more than 1 billion records. We show how we organized the data analytics phase, starting from the analytics requirements, through data evolution and development, to deployment in a public interface.

    Keywords: Open Government Data; Analytics API; data evolution

    1 Introduction. The large availability of Open Governmental Data raises enormous opportunities for open big data analytics. Opportunities are always followed by challenges, and when one handles Big Data, difficulties lie in data capture, storage, searching, sharing, analysis, and visualization [2]. Providing useful Open Data requires a complete team, from the domain expert up to the web developer, so it often cannot be handled by a data scientist alone.