How to Build and Run a Big Data Platform in the 21st Century


How to build and run a big data platform in the 21st century

Ali Dasdan - Atlassian - [email protected]
Dhruba Borthakur - Rockset - [email protected]

This tutorial was presented at the IEEE BigData Conference in Los Angeles, CA, USA on Dec 9th, 2019.

Disclaimers
● This tutorial presents the opinions of the authors. It does not necessarily reflect the views of our employers.
● This tutorial presents content that is already available in the public domain. It does not contain any confidential information.

Speaker: Ali Dasdan
Ali Dasdan is the head of engineering for Confluence Cloud at Atlassian. Prior to Atlassian, he worked as the head of engineering and CTO at three startups (Turn in real-time online advertising, Vida Health in chronic disease management, Poynt in payments platforms) and as an engineering leader at four leading tech companies (Synopsys for electronic design automation; Yahoo! for web search and big data; eBay for e-commerce search, discovery, and recommendation; and Tesco for big data for retail). He has been working in big data and large distributed systems since 2006, releasing the first production application on Hadoop that year. He led the building of three on-prem big data platforms from scratch (100PB+ capacity at Turn, multi-PB capacity for a leading telco, and 30PB+ capacity at Tesco). He is active in multiple areas of research related to big data, namely big data platforms, distributed systems, and machine learning. He received his PhD in Computer Science from the University of Illinois at Urbana-Champaign in 1999. During part of his PhD, he worked as a visiting scholar at the University of California, Irvine on embedded real-time systems.
LinkedIn: https://www.linkedin.com/in/dasdan/
Scholarly publications: https://dblp.org/pers/hd/d/Dasdan:Ali
Citations: https://scholar.google.com/citations?user=Bol1-tMAAAAJ&hl=en
Courses taught: multiple graduate-level courses, many paper presentations at conferences, a 3-day training on embedded real-time systems at a SoCal company, and two tutorials at the 18th and 19th International World Wide Web Conferences.

Speaker: Dhruba Borthakur
Dhruba Borthakur is cofounder and CTO at Rockset (https://rockset.com/), a company building software to enable data-powered applications. Dhruba was the founding engineer of the open source RocksDB [1] database when he was an engineer at Facebook. Prior to that, he was one of the founding engineers of the Hadoop File System (HDFS) [2] at Yahoo!. Dhruba was also an early contributor to the open source Apache HBase project. Previously, he was a senior engineer at Veritas Software, where he was responsible for the development of VxFS and the Veritas SanPointDirect storage system; was the cofounder of Oreceipt.com, an e-commerce startup based in Sunnyvale; and was a senior engineer at IBM-Transarc Labs, where he contributed to the development of the Andrew File System (AFS). Dhruba holds an MS in computer science from the University of Wisconsin-Madison.
LinkedIn: https://www.linkedin.com/in/dhruba/
Scholarly publications: https://dblp.uni-trier.de/pers/hd/b/Borthakur:Dhruba
Citations: https://scholar.google.com/citations?user=NYgWyv0AAAAJ&hl=en
Links:
[1] RocksDB: https://en.wikipedia.org/wiki/RocksDB
[2] Apache HDFS: https://svn.apache.org/repos/asf/hadoop/common/tags/release-0.10.0/docs/hdfs_design.pdf
Rockset: https://rockset.com/

Abstract
We want to show that, in today's world, building and running a big data platform for both streaming and bulk data processing (serving all kinds of applications involving analytics, data science, reporting, and the like) can be as easy as following a checklist. We live in a fortunate time in which many of the needed components are already available as open source or as a service from commercial vendors. We show how to put these components together at multiple levels of sophistication, covering the spectrum from a basic reporting need to a full-fledged operation across geographically distributed regions with business continuity measures in place. We aim to provide enough information that this tutorial can also serve as a high-level go-to reference during the actual process of building and running such a platform.

Motivation
We have been working on big data platforms for more than a decade. Building and running big data platforms has become relatively manageable. Yet our interactions at multiple forums have shown us that a comprehensive, checklist-style guideline for building and running big data platforms does not exist in a practical form. With this tutorial we hope to close this gap, within the limits of the tutorial length.

Targeted audience
● Targets
○ Our first target is developers and practitioners of software, analytics, or data science. However, we also raise multiple research issues and review the research literature, so researchers and students can benefit significantly from this tutorial too.
● Content level
○ 25% beginner, 50% intermediate, 25% advanced
● Audience prerequisites
○ Introductory software engineering or data science knowledge or experience, as an individual contributor or as a manager of such individuals. Interest in building and running big data platforms.

Terminology and notation
● API: Application Programming Interface
● BCP: Business Continuity Planning
● DAG: Directed Acyclic Graph
● DB: Database
● DR: Disaster Recovery
● GDPR: General Data Protection Regulation
● ML: Machine Learning
● On premise (on-prem): Computing services offered by the 1st party
● Public cloud: Computing services offered by a 3rd party over the public internet
○ Possibly from Alibaba, Amazon, Google, IBM, Microsoft, Oracle, etc.
● RPO: Recovery Point Objective; RTO: Recovery Time Objective
● SLA: Service Level Agreement; SLI: Service Level Indicator; SLO: Service Level Objective
● TB (terabyte); PB (petabyte) = 1000 TB; EB (exabyte) = 1000 PB (see the unit-conversion sketch below)
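The storage-unit shorthand above uses decimal (SI) multipliers: 1 PB = 1000 TB and 1 EB = 1000 PB. The following minimal Python sketch (ours, not part of the original deck; all names are illustrative) defines these constants and converts between them; note that binary (IEC) units such as TiB grow by factors of 1024 and therefore differ by a few percent.

```python
# Decimal (SI) storage units as used in this tutorial: 1 PB = 1000 TB, 1 EB = 1000 PB.
TB = 10**12          # bytes in a terabyte
PB = 1000 * TB       # bytes in a petabyte
EB = 1000 * PB       # bytes in an exabyte

def convert(num_bytes: float, unit: float) -> float:
    """Express a raw byte count in the given unit (TB, PB, or EB)."""
    return num_bytes / unit

if __name__ == "__main__":
    print(convert(100 * PB, TB))           # 100 PB expressed in TB -> 100000.0
    print(convert(1 * EB, PB))             # 1 EB expressed in PB -> 1000.0
    # Binary (IEC) units use powers of 1024, so the same byte count looks smaller:
    TiB = 2**40
    print(round(convert(1 * PB, TiB), 1))  # 1 PB is about 909.5 TiB
```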
Tutorial outline

 #   Section                      Time allocated (min)   Presenter
 1   Introduction                 15                      Ali
 2   Architecture                 30                      Ali
 3   Input layer                  10                      Dhruba
 4   Storage layer                10                      Dhruba
 5   Data transformation          10                      Ali
 6   Compute layer                10                      Ali
 7   Access layer                 10                      Dhruba
 8   Output layer                 10                      Dhruba
 9   Management and operations    10                      Ali
10   Examples, lessons learned     5                      Ali

Introduction
Section 1 of 10

What is big data?
● Clive Humby, UK mathematician, founder of Dunnhumby, and an architect of Tesco's Clubcard, 2006:
○ "Data is the new oil. It's valuable, but if unrefined it cannot really be used. It has to be changed into gas, plastic, chemicals, etc. to create a valuable entity that drives profitable activity; so must data be broken down, analyzed for it to have value."
● The Economist (2019): "Data is the new oil."
● Forbes (2018), Wired (2019): "No, data is not the new oil."
● Ad Exchanger (2019): "Consent, not data, is the new oil."
https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data

What is big data?
● No standard definition
● According to my then 6-year-old daughter:
○ "You get data; then you get more data; and you get more and more data; finally you get big data."
● Another definition:
○ If the size of your data is part of the problem, you may have big data.
● "Big" along one or more of the 3 V's (volume, velocity, variety)
Data volumes:
● 10+ terabytes
○ A high-end server
● 10+ petabytes
○ US Library of Congress (10 PB?)
● 100+ petabytes
○ Turn (now Amobee) (2016)
○ Dropbox (2016)
○ Netflix (2017)
○ Twitter (2017)
● 1+ exabytes
○ Amazon, Google, Facebook
○ 1 exabyte = 1000 petabytes
● 1+ zettabytes
○ Global internet traffic (per Cisco)

How big is a petabyte of data?
● Length of the US land and water boundary: 19,937 mi ≈ 32,000 km
○ US-Canada land and water boundary: 5,525 mi = 8,892 km
○ US-Mexico land and water boundary: 1,933 mi = 3,112 km
○ US coastline: 12,479 mi = 20,083 km
● 1 PB of text ≈ 50M file cabinets, or about 23,000 km of file cabinets placed end to end
● So 1 PB of text, laid out as file cabinets, would stretch roughly 70% of the way around the US land and water boundary
● https://fas.org/sgp/crs/misc/RS21729.pdf

Where to fit a petabyte of data in a datacenter?
● A high-end server in 2017:
○ Main memory: 0.5 TB
○ Hard disk: 45 TB
○ Number of cores: 40
● Two racks with 15 such servers each:
○ Total main memory: 15 TB
○ Total hard disk: about 1.4 PB
○ Total number of cores: 1,200
● Now consider the total number of racks in a datacenter row
● Then consider the total number of rows in a typical datacenter
● Finally consider the total number of datacenters owned by one or more companies
● That is a lot of data capacity (these figures are sanity-checked in the sketch at the end of this section)

Generic big data platform architecture (1 of 3)
Generic big data platform architecture (2 of 3)
Generic big data platform architecture (3 of 3)
(architecture diagram slides)

Generic big data platform
● To discuss:
○ how this generic architecture is realized in the real world, or
○ how real-world architectures fit into this generic architecture
● To discuss the six key components of a big data platform, plus management and operations:
1. Input
2. Compute (or process)
3. Data transformations
4. Storage
5. Access
6. Output
7. Management and operations

Gartner: Analytics maturity (2017)
Gartner: Types of analytics (2017)
Gartner: Analytics maturity (2017)
https://www.gartner.com/en/newsroom/press-releases/2018-02-05-gartner-survey-shows-organizations-are-slow-to-advance-in-data-and-analytics

Cons of a big data platform
● Hope that the pros are obvious already
● Cons and how to counter them:
○ Costly
■ But the benefits are immense; just a little more patience
○ Not many users
■ But they will come, guaranteed
○ Centralized
■ But this opens up more innovation opportunities
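As referenced in the "Where to fit a petabyte of data in a datacenter?" slide, the short back-of-the-envelope Python sketch below (ours, not part of the original deck) sanity-checks the petabyte comparisons in this section: the US boundary lengths and the capacity of two racks of 2017-era high-end servers.

```python
# Back-of-the-envelope checks for the petabyte-scale comparisons in this section.
MI_TO_KM = 1.609344  # kilometers per statute mile

# US land and water boundary components in miles (per CRS report RS21729).
us_canada_mi, us_mexico_mi, us_coastline_mi = 5525, 1933, 12479
total_mi = us_canada_mi + us_mexico_mi + us_coastline_mi
total_km = total_mi * MI_TO_KM
print(total_mi, round(total_km))    # 19937 mi, ~32085 km

# 1 PB of text ~= 50 million file cabinets ~= 23,000 km of cabinets end to end.
print(round(23_000 / total_km, 2))  # ~0.72 of the US boundary length

# Capacity of 2 racks, each holding 15 high-end (2017-era) servers.
servers = 2 * 15
ram_tb, disk_tb, cores = 0.5, 45, 40  # per-server figures from the slide
print(servers * ram_tb)               # 15.0 TB of main memory
print(servers * disk_tb)              # 1350 TB, i.e. about 1.4 PB of disk
print(servers * cores)                # 1200 cores
```

The two-rack totals match the slide (15 TB of memory, roughly 1.4 PB of disk, 1,200 cores), which is why a single petabyte occupies only a small fraction of a typical datacenter.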