Big Data Integration and Hadoop: Best Practices for Minimizing Risks and Maximizing ROI for Hadoop Initiatives


IBM Software
September 2014

Big data integration and Hadoop
Best practices for minimizing risks and maximizing ROI for Hadoop initiatives

Introduction

Apache Hadoop technology is transforming the economics and dynamics of big data initiatives by supporting new processes and architectures that can help cut costs, increase revenue and create competitive advantage. An open source software project that enables the distributed processing and storage of large data sets across clusters of commodity servers, Hadoop can scale from a single server to thousands as demands change. Primary Hadoop components include the Hadoop Distributed File System for storing large files and the Hadoop distributed parallel processing framework (known as MapReduce).

Big data and Hadoop projects depend on collecting, moving, transforming, cleansing, integrating, governing, exploring and analyzing massive volumes of different types of data from many different sources. Accomplishing all this requires a resilient, end-to-end information integration solution that is massively scalable and provides the infrastructure, capabilities, processes and discipline required to support Hadoop projects.

However, by itself, Hadoop infrastructure does not present a complete big data integration solution, and there are both challenges and opportunities to address before you can reap its benefits and maximize return on investment (ROI).

"By most accounts, 80 percent of the development effort in a big data project goes into data integration and only 20 percent goes toward data analysis."
—Intel Corporation, "Extract, Transform, and Load Big Data with Apache Hadoop"¹

The importance of big data integration for Hadoop initiatives

The rapid emergence of Hadoop is driving a paradigm shift in how organizations ingest, manage, transform, store and analyze big data. Deeper analytics, greater insights, new products and services, and higher service levels are all possible through this technology, enabling you to reduce costs significantly and generate new revenues.

An effective big data integration solution delivers simplicity, speed, scalability, functionality and governance to produce consumable data from the Hadoop swamp. Without effective integration, you get "garbage in, garbage out"—not a good recipe for trusted data, much less accurate and complete insights or transformative results.

As the Hadoop market has evolved, leading technology analysts agree that Hadoop infrastructure by itself is not a complete or effective big data integration solution (read this report that discusses how Hadoop is not a data integration platform). To further complicate the situation, some Hadoop software vendors have saturated the market with hype, myths and misleading or contradictory information.
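Before weighing those claims, it helps to see the MapReduce model in miniature. The sketch below is illustrative only: plain Python rather than Hadoop code, with hypothetical sample records, showing the map, shuffle and reduce phases discussed in the sidebar that follows.

    # A minimal, illustrative sketch of the MapReduce programming model
    # (plain Python, not Hadoop code; the sample records are hypothetical).
    from collections import defaultdict

    def map_phase(records):
        # Map: emit a (key, value) pair for every word in every record.
        for line in records:
            for word in line.split():
                yield (word.lower(), 1)

    def shuffle(pairs):
        # Shuffle: group values by key. In Hadoop this phase materializes
        # intermediate data, which supports fine-grained fault tolerance
        # but costs I/O throughput.
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(groups):
        # Reduce: aggregate the grouped values for each key.
        return {key: sum(values) for key, values in groups.items()}

    records = ["big data integration", "big data and Hadoop"]
    print(reduce_phase(shuffle(map_phase(records))))
    # {'big': 2, 'data': 2, 'integration': 1, 'and': 1, 'hadoop': 1}

The materialization in the shuffle step is the design tradeoff the next sidebar describes: strong fault tolerance at the cost of raw throughput.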
Critical success factor: Avoid hype and distinguish fact from fiction

During these emerging stages of the Hadoop market, carefully consider everything you hear about Hadoop's prowess. A significant gap exists between the myths and the realities of exploiting Hadoop, particularly when it comes to big data integration.

There is a lot of industry hype claiming that any non-scalable extract, transform and load (ETL) tool plus Hadoop equals a high-performance, highly scalable data integration platform.

In reality, MapReduce was not designed for the high-performance processing of massive data volumes, but for finely grained fault tolerance. That discrepancy can lower overall performance and efficiency by an order of magnitude or more.

Hadoop Yet Another Resource Negotiator (YARN) takes the resource management capabilities that were in MapReduce and packages them so they can be used by other applications that need to execute dynamically across the Hadoop cluster. As a result, this approach makes it possible to implement massively scalable data integration engines as native Hadoop applications without having to incur the performance limitations of MapReduce. All enterprise technologies seeking to be scalable and efficient on Hadoop will need to adopt YARN as part of their product road map.

Before you start your integration journey, be sure you understand the performance limitations of MapReduce and how different data integration vendors address them. Learn more in the "Themis: An I/O-Efficient MapReduce" paper, which discusses this subject at length: http://bit.ly/1v2UXAT

To cut through this misinformation and develop an adoption plan for your Hadoop big data project, you must follow a best practices approach that takes into account emerging technologies, scalability requirements, and current resources and skill levels. The challenge: create an optimized big data integration approach and architecture while avoiding implementation pitfalls.

Massive data scalability: The overarching requirement

If your big data integration solution cannot support massive data scalability, it may fall short of expectations. To realize the full business value of big data initiatives, massive data scalability is essential for big data integration on most Hadoop projects. Massive data scalability means there are no limitations on data volumes processed, processing throughput, or the number of processors and processing nodes used. You can process more data and achieve higher processing throughput simply by adding more hardware. The same application will then run without modification and with increased performance as you add hardware resources (see Figure 1).

[Figure 1 shows a Source → Transform → Cleanse → Enrich → EDW pipeline running sequentially on a uniprocessor, 4-way parallel on an SMP system with shared memory, and 64-way parallel on an MPP clustered system or grid.]
Figure 1. Massive data scalability is a mandatory requirement for big data integration. In the big data era, organizations must be able to support an MPP clustered system to scale.

Requirements for supporting massive data scalability are not only linked to the emergence of the Hadoop infrastructure. Leading data warehouse vendors such as IBM and Teradata, and leading data integration platforms such as IBM® InfoSphere® Information Server, have provided shared-nothing, massively parallel software platforms supporting massive data scalability for years—for nearly two decades in some cases.

Critical success factor: Big data integration platforms must support all three dimensions of scalability

• Linear data scalability: A hardware and software system delivers linear increases in processing throughput with linear increases in hardware resources. For example, an application delivers linear data scalability if it can process 200 GB of data in four hours running on 50 processors, 400 GB of data in four hours running on 100 processors and so on.
• Application scale-up: A measurement of how effectively the software achieves linear data scalability across processors within one symmetric multiprocessor (SMP) system.
• Application scale-out: A determination of how well the software achieves linear data scalability across SMP nodes in a shared-nothing architecture.
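As a quick illustration of the first dimension, here is a minimal sketch (not from the whitepaper; the measurements are hypothetical and mirror the sidebar's 200 GB on 50 processors example) that checks whether a set of benchmark runs exhibits linear data scalability, that is, roughly constant throughput per processor:

    # A sketch (not from the whitepaper) of the linear data scalability
    # test described in the sidebar above. The measurements are
    # hypothetical: throughput per processor should stay roughly
    # constant as data volume and hardware grow together.
    measurements = [
        (200, 50, 4.0),    # (data_gb, processors, hours)
        (400, 100, 4.0),
        (800, 200, 4.0),
    ]

    def is_linear(measurements, tolerance=0.10):
        # Throughput per processor, in GB per hour per CPU, for each run.
        rates = [gb / (cpus * hours) for gb, cpus, hours in measurements]
        baseline = rates[0]
        return all(abs(rate - baseline) / baseline <= tolerance for rate in rates)

    print(is_linear(measurements))  # True: 1.0 GB/hour/CPU in every run

Application scale-up and scale-out apply the same test within one SMP system and across shared-nothing nodes, respectively.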
Over time, these vendors have converged on four common software architecture characteristics that support massive data scalability, as shown in Figure 2:

• A shared-nothing architecture: Software is designed from the ground up to exploit a shared-nothing, massively parallel architecture by partitioning data sets across computing nodes and executing a single application with the same application logic executing against each data partition.
• Implemented using software dataflow: Software dataflow enables full exploitation of a shared-nothing architecture by making it easy to implement and execute data pipelining and data partitioning within a node and across nodes. Software dataflow also hides the complexities of building and tuning parallel applications from users.
• That leverages data partitioning for linear data scalability: Large data sets are partitioned across separate nodes and a single job executes the same application logic against all partitioned data.
• Resulting in a design isolation environment: Design a data processing job once, and use it in any hardware configuration without needing to redesign and re-tune the job.

Figure 2. The four characteristics of massive data scalability.

Most commercial data integration software platforms were never designed to support massive data scalability, meaning they were not built from the ground up to exploit a shared-nothing, massively parallel architecture. They rely on shared memory multithreading instead of software dataflow. Furthermore, some vendors do not support partitioning large data sets across nodes and running a single data integration job in parallel against separate data partitions, or the ability to design a job once and use it in any hardware configuration without needing to redesign and retune the job. These capabilities are critical to reducing costs by realizing efficiency gains. Without them, the platform won't be able to work with big data volumes. (A minimal sketch of this partitioned execution model appears at the end of this excerpt.)

The InfoSphere Information Server data integration portfolio supports the four massive data scalability architectural characteristics. Learn more in the Forrester report, "Measuring The Total Economic Impact Of IBM InfoSphere Information Server" at http://ibm.co/UX1RqB

Optimizing big data integration workloads: A balanced approach

Because nearly all Hadoop big data use cases

The right answer to these questions depends on an enterprise's unique big data requirements. Organizations can choose among
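Returning to the partitioned execution model of Figure 2, the sketch below is a simplified, hypothetical illustration (plain Python, not vendor code): it partitions a data set and runs one job's unchanged logic against every partition in parallel, so scaling out is a matter of configuration rather than redesign.

    # A minimal sketch, under stated assumptions, of the partitioned
    # execution model in Figure 2: one job's logic runs unchanged against
    # every data partition in parallel. The transform is hypothetical.
    from multiprocessing import Pool

    def transform(partition):
        # The same application logic executes against each data partition.
        return [row.strip().upper() for row in partition]

    def partition_data(rows, n_partitions):
        # Round-robin partitioning of the input across n "nodes".
        return [rows[i::n_partitions] for i in range(n_partitions)]

    if __name__ == "__main__":
        rows = [" alpha ", " beta ", " gamma ", " delta "]
        partitions = partition_data(rows, n_partitions=2)
        with Pool(processes=2) as pool:
            # Scaling out means raising n_partitions and processes;
            # the job itself is not redesigned or re-tuned.
            results = pool.map(transform, partitions)
        print([row for part in results for row in part])
        # ['ALPHA', 'GAMMA', 'BETA', 'DELTA']

In a real engine the workers would be processes on separate cluster nodes and the partitioner would hash- or range-partition keys, but the contract is the same: design the job once and parallelize it by configuration.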