Implementation of a Software-Defined Storage Service With
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Superconvergence
THE FUTURE OF DATA CENTER CONSOLIDATION: SUPERCONVERGENCE
Shaikh Abdul Azeem (Research Scholar) and Satyendra Kumar Sharma (Dean, Faculty of Engineering), Department of Computer Science, Pacific Academy of Higher Education & Research University, Udaipur, Rajasthan, India. E-mail: [email protected], [email protected]

Abstract - Convergence across industries is creating tremendous new opportunities and new modes of technology innovation. Convergence provides stronger risk management, tighter control mechanisms, greater consistency and predictability, and greater cost reduction. Viewed as network convergence, it is leading current IT infrastructure to new levels of efficiency through deeper integration and centralization. Convergence is the point where advances in compute, storage, and networking combine to enable new capabilities, reduced cost, and a scale of computing that was previously unimaginable. Performance, resiliency, and scalability are the demands of the modern cloud computing environment, and the next generation of converged infrastructure, superconvergence, is poised to provide exactly that. Companies are enabling a giant leap in cloud performance through converged storage, networking, and server systems. This paper details the next generation of IT infrastructure development: superconvergence.

Keywords - siloed, CI, HCI, SCI, Data Centre, Unified Platform, NVMeoF

I. INTRODUCTION
This is the era of convergence. Once a converged system is implemented properly, it can virtualize more applications and desktops while consuming only a little more resource power. As convergence matures, superconvergence solutions are coming onto the market; because of this, the silos of earlier IT infrastructure implementations are naturally disappearing, and the possibility of a true single...
Red Hat Data Analytics Infrastructure Solution
TECHNOLOGY DETAIL: RED HAT DATA ANALYTICS INFRASTRUCTURE SOLUTION

Table of contents: 1 Introduction; 2 Red Hat Data Analytics Infrastructure Solution (2.1 The evolution of analytics infrastructure; 2.2 Benefits of a shared data repository on Red Hat Ceph Storage; 2.3 Solution components); 3 Testing Environment Overview; 4 Relative Cost and Performance Comparison (4.1 Findings summary; 4.2 Workload details; 4.3 24-hour ingest).

Sidebar highlights: Give data scientists and data analytics teams access to their own clusters without the unnecessary cost and complexity of duplicating Hadoop Distributed File System (HDFS) datasets. Rapidly deploy and decommission analytics clusters on...
Unlock Bigdata Analytic Efficiency with Ceph Data Lake
Unlock Bigdata Analytic Efficiency With Ceph Data Lake
Jian Zhang, Yong Fu, March 2018

Agenda: Background & Motivations; The Workloads, Reference Architecture Evolution and Performance Optimization; Performance Comparison with Remote HDFS; Summary & Next Step.

Challenges of scaling Hadoop* storage: bounded storage and compute resources on Hadoop nodes bring challenges in data capacity, silos, costs, and performance and efficiency. Typical challenges include data/capacity growth, multiple storage silos, space, spend, power, and utilization, upgrade cost, inadequate performance, and provisioning and configuration. (Source: 451 Research, Voice of the Enterprise: Storage Q4 2015.)

Options to address the challenges:
- A single large cluster lacks isolation (noisy neighbors hinder SLAs), lacks elasticity (rigid cluster size), and cannot scale compute and storage costs separately.
- More clusters incur the cost of duplicating datasets across clusters, lack on-demand provisioning, and still cannot scale compute and storage costs separately.
- Compute and storage disaggregation isolates high-priority workloads, shares big datasets, provisions on demand, and scales compute and storage costs separately.
Compute and storage disaggregation provides simplicity, elasticity, and isolation.

Unified Hadoop* file system and API for cloud storage: the Hadoop Compatible File System (HCFS) abstraction layer is a unified storage API interface, e.g. hadoop fs -ls s3a://job/, spanning schemes introduced between 2006 and 2016 such as s3://, s3n://, s3a://, wasb://, adl://, oss://, and gs://.

Proposal: Apache Hadoop* with disaggregated object storage. SQL and other Hadoop services run on compute nodes (virtual machine, container, or bare metal) and reach a disaggregated object storage cluster through HCFS. The object storage services can be co-located with the gateway, addressed through dynamic DNS or a load balancer, protected via storage replication or erasure code, and tiered.

*Other names and brands may be claimed as the property of others.
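The HCFS call shown above (hadoop fs -ls s3a://job/) has a direct programmatic equivalent. The following is a minimal sketch, assuming the hadoop-aws module is on the classpath; the endpoint, credentials, and bucket name are hypothetical placeholders standing in for a Ceph RADOS Gateway deployment, not values taken from the deck.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch: list a bucket through the S3A connector against a
// disaggregated object store (e.g. a Ceph RGW endpoint). The endpoint,
// credentials, and bucket name are hypothetical placeholders.
public class S3AListing {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point S3A at the gateway instead of AWS S3.
        conf.set("fs.s3a.endpoint", "http://rgw.example.com:7480");
        conf.set("fs.s3a.access.key", "ACCESS_KEY");
        conf.set("fs.s3a.secret.key", "SECRET_KEY");
        // Object gateways are typically addressed with path-style URLs.
        conf.setBoolean("fs.s3a.path.style.access", true);

        // The same call a MapReduce or Spark job would make through HCFS.
        FileSystem fs = FileSystem.get(URI.create("s3a://job/"), conf);
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath() + "  " + status.getLen());
        }
        fs.close();
    }
}
```

Because every access goes through the same FileSystem abstraction, swapping HDFS for the object store requires no change in job code, which is the point of the disaggregation proposal.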
VMware vSAN SAP | Solution Overview
SOLUTION OVERVIEW: VMware Virtual SAN for SAP Applications
Hyper-Converged Infrastructure for Business Critical Applications

Customers deploying Business Critical Applications (BCAs) have requirements such as stringent SLAs, sustained high performance, and continued application availability. Managing data storage in these environments is a major challenge. Common issues with traditional storage solutions for BCAs include inadequate performance, storage inefficiency, difficulty scaling, complex management, and high deployment and operating costs.

VMware, the market leader in Hyper-Converged Infrastructure (HCI), enables low-cost, high-performance next-generation HCI solutions through the proven VMware Hyper-Converged Software (VMware HCS) stack. The natively integrated VMware HCS combines radically simple VMware Virtual SAN™ storage, the market-leading vSphere hypervisor, and the vCenter Server unified management solution on the broadest and deepest set of HCI deployment options.

Virtual SAN is enterprise-class storage uniquely embedded in the hypervisor. It delivers flash-optimized, high-performance hyper-converged storage for any virtualized application at a fraction of the cost of traditional, purpose-built storage and other less-efficient HCI solutions. VMware has completed extensive technical validation demonstrating Virtual SAN as an ideal storage platform for a variety of BCAs. A recent Virtual SAN customer survey also revealed that more than 60% of customers run their BCAs on Virtual SAN today, making BCA the most common use case.

Why Virtual SAN for SAP landscapes? Customers wanting to modernize their existing complex SAP environments can leverage Virtual SAN, running on VMware- and SAP-certified x86 servers, to eliminate traditional IT silos of compute, storage, and networking.
Hypervisor-Converged Software-Defined Private Cloud
How a Hypervisor-Converged Software-Defined Data Center Enables a Better Private Cloud (VMware white paper)

Table of Contents
- Accelerate IT Response to Business Needs
- Best Hypervisor Architecture for Security and Reliability
- Most Comprehensive Solution for Greater Business Responsiveness
- Software-Defined Data Center Approach Pioneered by VMware
- All Components for Building and Running a Private Cloud Infrastructure
- Purpose-Built, Highly Automated Management Solutions
- Software-Defined Storage Capabilities Enable New Converged Storage Tier
- Proven Leadership in Network Virtualization Delivers Speed and Efficiency
- Virtualization-Aware Security Provides More-Robust Protection
- Maximum Application Availability and Business Continuity for Greater Reliability and Reduced Business Risk
- Lowest TCO for Highest Resource Utilization and Administrator Productivity
- Most Proven, Trusted, and Widely Deployed Virtualization Platform Supporting Private Clouds, Hybrid Clouds, and Desktops
- World's Most Successful Companies Run VMware
- VMware: A Leader in Private Cloud

Accelerate IT Response to Business Needs
IT organizations must be more flexible and innovative to rapidly address competitive threats and satisfy user demands. They need to deliver higher levels of efficiency and responsiveness to business stakeholders and compete with low-cost, on-demand services from external suppliers. At the same time, IT organizations must continue to provide reliability, security, and governance for all applications and services the business requires. Cloud computing provides a more efficient, flexible, and cost-effective model for computing.
Key Exchange Authentication Protocol for NFS-Enabled HDFS Client
Journal of Theoretical and Applied Information Technology, 15th April 2017, Vol. 95, No. 7. ISSN: 1992-8645, E-ISSN: 1817-3195. www.jatit.org

KEY EXCHANGE AUTHENTICATION PROTOCOL FOR NFS-ENABLED HDFS CLIENT
Nawab Muhammad Faseeh Qureshi (1), Dong Ryeol Shin (2, corresponding author), Isma Farah Siddiqui (3)
1,2 Department of Computer Science and Engineering, Sungkyunkwan University, Suwon, South Korea; 3 Department of Software Engineering, Mehran UET, Pakistan. E-mail: [email protected], [email protected], [email protected]

Abstract: By virtue of its built-in processing capabilities for large datasets, the Hadoop ecosystem has been used to solve many critical problems. The ecosystem consists of three components: the client, the NameNode, and the DataNode. The client is the user-facing component that requests cluster operations through the NameNode and processes data blocks at DataNodes running the Hadoop Distributed File System (HDFS). Recently, HDFS launched an add-on that connects a client through the Network File System (NFS) and can upload and download sets of data blocks over the Hadoop cluster. In this arrangement the client is not considered part of HDFS and performs read and write operations through a contrasting file system. This HDFS NFS gateway has raised several security concerns, most notably: no reliable authentication for uploads and downloads of data blocks, no efficient local and remote client connectivity, and HDFS mounting durability issues over untrusted connections. To stabilize the NFS gateway strategy, this paper presents a Key Exchange Authentication Protocol (KEAP) between the NFS-enabled client and the HDFS NFS gateway. The proposed approach provides cryptographic assurance of authentication between clients and the gateway.
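The excerpt does not reproduce KEAP's actual message flow, so the sketch below illustrates only the underlying building block: a Diffie-Hellman key agreement between a client and a gateway using the standard Java JCE API. Class and variable names are hypothetical, and unlike KEAP this bare exchange does not authenticate the exchanged public keys, which is precisely the gap an authentication protocol must close.

```java
import java.security.KeyFactory;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import java.security.PublicKey;
import java.security.spec.X509EncodedKeySpec;
import java.util.Arrays;

import javax.crypto.KeyAgreement;
import javax.crypto.interfaces.DHPublicKey;

// Generic Diffie-Hellman agreement between an NFS client and a gateway,
// shown only as the kind of primitive a protocol like KEAP builds on.
// This is NOT the KEAP message flow from the paper.
public class DhHandshakeSketch {
    public static void main(String[] args) throws Exception {
        // Client side: generate a DH key pair and send the public key.
        KeyPairGenerator clientKpg = KeyPairGenerator.getInstance("DH");
        clientKpg.initialize(2048);
        KeyPair clientPair = clientKpg.generateKeyPair();
        byte[] clientPubBytes = clientPair.getPublic().getEncoded();

        // Gateway side: adopt the client's DH parameters and reply
        // with its own public key.
        KeyFactory kf = KeyFactory.getInstance("DH");
        DHPublicKey clientPub = (DHPublicKey) kf.generatePublic(
                new X509EncodedKeySpec(clientPubBytes));
        KeyPairGenerator gatewayKpg = KeyPairGenerator.getInstance("DH");
        gatewayKpg.initialize(clientPub.getParams());
        KeyPair gatewayPair = gatewayKpg.generateKeyPair();

        KeyAgreement gatewayKa = KeyAgreement.getInstance("DH");
        gatewayKa.init(gatewayPair.getPrivate());
        gatewayKa.doPhase(clientPub, true);
        byte[] gatewaySecret = gatewayKa.generateSecret();

        // Client side: finish the agreement with the gateway's public key.
        PublicKey gatewayPub = kf.generatePublic(
                new X509EncodedKeySpec(gatewayPair.getPublic().getEncoded()));
        KeyAgreement clientKa = KeyAgreement.getInstance("DH");
        clientKa.init(clientPair.getPrivate());
        clientKa.doPhase(gatewayPub, true);
        byte[] clientSecret = clientKa.generateSecret();

        // Both sides hash the raw shared secret into a 256-bit session key.
        byte[] sessionKey = MessageDigest.getInstance("SHA-256").digest(clientSecret);
        System.out.println("secrets match: " + Arrays.equals(clientSecret, gatewaySecret));
        System.out.println("session key length: " + sessionKey.length + " bytes");
    }
}
```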
Asia Pacific
Enterprise Data Storage Market Insights: Future Technologies and Trends Will Drive Market Growth (Frost & Sullivan, P8CD-72, September 2015)

Contents: Scope and Market Overview; Enterprise Storage Market Trends; Future Technologies in Storage; Business Models of Vendors; Business Models of OEMs; Business Models of Distributors; Implications of Storage Trends on Data Centers; Frost & Sullivan Story.

Scope of the Study - Objectives:
- To provide an overview of the global enterprise data storage market, focusing on the trends, technology, and business models of major market participants
- To understand the trends affecting the global storage market, the implications of these trends on the data center market, and the future technologies entering the storage market
- To gain a detailed understanding of the distribution structure and channel partner programs for the key storage vendors EMC and NetApp
- To provide an understanding of the business model of ODMs/OEMs, including Foxconn and Supermicro, in terms of service and support capabilities across regions
(ODM: Original Design Manufacturer; OEM: Original Equipment Manufacturer. Source: Frost & Sullivan)

Market Overview: The enterprise data storage market is changing at a rapid pace. More data is being digitized than ever before and stored on disks of various capacities. Globally, by 2020, there are expected to be about 26 billion connected devices, generating large amounts of data that need to be stored and analyzed at some point. Storage devices need to be agile, scalable, low cost, able to handle huge data loads, and durable enough to sustain this data growth.
HFAA: a Generic Socket API for Hadoop File Systems
HFAA: A Generic Socket API for Hadoop File Systems
Adam Yee and Jeffrey Shafer, University of the Pacific, Stockton, CA. E-mail: [email protected], jshafer@pacific.edu

ABSTRACT: Hadoop is an open-source implementation of the MapReduce programming model for distributed computing. Hadoop natively integrates with the Hadoop Distributed File System (HDFS), a user-level file system. In this paper, we introduce the Hadoop Filesystem Agnostic API (HFAA) to allow Hadoop to integrate with any distributed file system over TCP sockets. With this API, HDFS can be replaced by distributed file systems such as PVFS, Ceph, Lustre, or others, thereby allowing direct comparisons in terms of performance and scalability. Unlike previous attempts at augmenting Hadoop with new file systems, the socket API presented here eliminates the need to customize Hadoop's Java implementation, and instead moves the implementation responsibilities to the file system itself.

HDFS consists of two services: one central NameNode and many DataNodes. The NameNode is responsible for maintaining the HDFS directory tree. Clients contact the NameNode in order to perform common file system operations, such as open, close, rename, and delete. The NameNode does not store HDFS data itself, but rather maintains a mapping between the HDFS file name, the list of blocks in the file, and the DataNode(s) on which those blocks are stored. Although HDFS stores file data in a distributed fashion, file metadata is stored in the centralized NameNode service. While sufficient for small-scale clusters, this design prevents Hadoop from scaling beyond the resources of a single NameNode. Prior analysis of CPU and memory requirements for...
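HFAA's actual wire format is not given in this excerpt, so the following is a hedged sketch of the general pattern the abstract describes: Hadoop sends a file system operation over a TCP socket, and an agent on the storage side (which would translate the request into a PVFS, Ceph, or Lustre call) replies. The message layout, class names, and status strings are invented for illustration.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical framing for an HFAA-style exchange: the Hadoop side sends an
// operation name plus a path, and the external file system replies with a
// status string. The real HFAA message layout is not shown in the excerpt.
public class HfaaSketch {

    // Stand-in for the file-system-side agent that requests are delegated to.
    static void serveOnce(ServerSocket server) throws Exception {
        try (Socket s = server.accept();
             DataInputStream in = new DataInputStream(s.getInputStream());
             DataOutputStream out = new DataOutputStream(s.getOutputStream())) {
            String op = in.readUTF();    // e.g. "open", "rename", "delete"
            String path = in.readUTF();  // file the operation applies to
            // A real agent would translate this into a PVFS/Ceph/Lustre call.
            out.writeUTF("OK " + op + " " + path);
        }
    }

    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0); // ephemeral local port
        Thread agent = new Thread(() -> {
            try { serveOnce(server); } catch (Exception e) { e.printStackTrace(); }
        });
        agent.start();

        // Hadoop-side client: one framed request per file system operation.
        try (Socket s = new Socket("localhost", server.getLocalPort());
             DataOutputStream out = new DataOutputStream(s.getOutputStream());
             DataInputStream in = new DataInputStream(s.getInputStream())) {
            out.writeUTF("open");
            out.writeUTF("/user/alice/input.txt");
            System.out.println(in.readUTF()); // prints: OK open /user/alice/input.txt
        }
        agent.join();
        server.close();
    }
}
```

Keeping the protocol this thin is what lets the storage system, rather than Hadoop's Java code, carry the implementation burden.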
VMware vSAN 6.6
DATASHEET: VMware vSAN 6.6 - Evolve Without Risk to Secure Hyper-Converged Infrastructure

At a glance: Accelerate infrastructure modernization with VMware vSAN™ to make IT a strategic, cost-effective advantage for your company. By powering the leading Hyper-Converged Infrastructure (HCI) solutions, vSAN helps customers evolve their data center without risk, control IT costs, and scale to tomorrow's business needs. vSAN, native to the market-leading hypervisor, delivers flash-optimized, secure storage for all of your critical vSphere workloads. vSAN is built on industry-standard x86 servers and components that help lower TCO by up to 50% versus traditional storage. It delivers the agility to easily scale IT and offers the industry's first native HCI encryption. New enhanced stretched clusters and intelligent 1-click operations further lower costs for affordable site protection (50% less than leading traditional solutions) and simple day-to-day management. Seamless integration with VMware vSphere® and the entire VMware stack makes it the simplest storage platform for virtual machines, whether running business-critical databases, virtual desktops, or next-generation applications.

Why VMware vSAN? Every business initiative today is an IT project, and most likely multiple projects. As a result of this ongoing digital transformation, IT needs a simpler and more cost-effective approach to its data center infrastructure, one that does not require all-new training and skills. As the only vSphere-native software-defined storage platform, vSAN helps customers evolve to hyper-converged infrastructure (HCI) without risk while lowering IT costs and providing an agile solution ready for future hardware, cloud, and application changes.
Decentralising Big Data Processing (Scott Ross Brisbane)
School of Computer Science and Engineering, Faculty of Engineering, The University of New South Wales

Decentralising Big Data Processing, by Scott Ross Brisbane. Thesis submitted as a requirement for the degree of Bachelor of Engineering (Software). Submitted: October 2016. Student ID: z3459393. Supervisor: Dr. Xiwei Xu. Topic ID: 3692.

Abstract: Big data processing and analysis is becoming an increasingly important part of modern society as corporations and government organisations seek to draw insight from the vast amount of data they are storing. The traditional approach to such data processing is to use the popular Hadoop framework, which uses HDFS (Hadoop Distributed File System) to store and stream data to analytics applications written in the MapReduce model. As organisations seek to share data and results with third parties, HDFS remains inadequate for such tasks in many ways. This work looks at replacing HDFS with a decentralised data store that is better suited to sharing data between organisations. The best fit for such a replacement is the decentralised hypermedia distribution protocol IPFS (InterPlanetary File System), which is built around the aim of connecting all peers in its network with the same set of content-addressed files.

Abbreviations: API (Application Programming Interface), AWS (Amazon Web Services), CLI (Command Line Interface), DHT (Distributed Hash Table), DNS (Domain Name System), EC2 (Elastic Compute Cloud), FTP (File Transfer Protocol), HDFS (Hadoop Distributed File System), HPC (High-Performance Computing), IPFS (InterPlanetary File System), IPNS (InterPlanetary Naming System), SFTP (Secure File Transfer Protocol), UI (User Interface).

Contents: 1 Introduction; 2 Background; 2.1 The Hadoop Distributed File System...
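As a concrete taste of the content-addressed model the thesis builds on, the sketch below fetches a file from a local IPFS daemon through its HTTP API. The /api/v0/cat endpoint is part of the standard IPFS HTTP interface (its endpoints accept POST requests), while the CID and the assumption of a daemon listening on 127.0.0.1:5001 are placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch: read a content-addressed file from a local IPFS daemon.
// The CID below is a hypothetical placeholder, and a running daemon on
// 127.0.0.1:5001 is assumed.
public class IpfsCat {
    public static void main(String[] args) throws Exception {
        String cid = "QmExampleCidOnlyForIllustration"; // hypothetical CID
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://127.0.0.1:5001/api/v0/cat?arg=" + cid))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        // The same CID resolves to identical bytes from any peer holding it,
        // which is what makes the store shareable across organisations.
        System.out.println(response.body());
    }
}
```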
Collective Communication on Hadoop
Harp: Collective Communication on Hadoop
Bingjing Zhang, Yang Ruan, Judy Qiu; Computer Science Department, Indiana University, Bloomington, IN, USA

Abstract: Big data tools have evolved rapidly in recent years. MapReduce is very successful but not optimized for many important analytics, especially those involving iteration. In this regard, iterative MapReduce frameworks improve the performance of MapReduce job chains through caching. Further, Pregel, Giraph, and GraphLab abstract data as a graph and process it in iterations. However, all these tools are designed with fixed data abstractions and have limitations in communication support. In this paper, we introduce a collective communication layer that provides optimized communication operations on several important data abstractions, such as arrays, key-values, and graphs, and define a Map-Collective model that serves the diverse communication demands in different parallel applications. In addition, we design our enhancements as plug-ins to Hadoop so they can be used with the rich Apache Big Data Stack. For example, Hadoop can then perform in-memory communication between Map tasks without writing intermediate data to HDFS. With improved expressiveness and excellent performance on collective communication, we can simultaneously support various applications from HPC to cloud systems together with a high-performance Apache Big Data Stack.

[Fig. 1: Big Data Analysis Tools]

Keywords: Collective Communication; Big Data Processing; Hadoop

I. INTRODUCTION
It is estimated that...

Spark [5] also uses caching to accelerate iterative algorithms without restricting computation to a chain of MapReduce jobs. To process graph data, Google announced Pregel [6], and soon the open-source versions Giraph [7] and Hama [8] emerged.
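Harp's plug-in API is not shown in this excerpt, so the sketch below illustrates the collective operation concept itself rather than Harp's interface: an in-memory allreduce over per-worker arrays, where every worker contributes a partial vector and receives the same combined result, with no intermediate write to HDFS.

```java
import java.util.Arrays;

// Conceptual allreduce over array partitions: combine every worker's vector
// elementwise, then hand each worker the same reduced copy. This illustrates
// the Map-Collective idea, not Harp's actual plug-in API.
public class AllreduceSketch {

    static double[][] allreduce(double[][] perWorker) {
        int length = perWorker[0].length;
        double[] reduced = new double[length];
        // Reduce step: elementwise sum of every worker's contribution.
        for (double[] vector : perWorker) {
            for (int i = 0; i < length; i++) {
                reduced[i] += vector[i];
            }
        }
        // Broadcast step: every worker receives its own copy of the result.
        double[][] result = new double[perWorker.length][];
        for (int w = 0; w < perWorker.length; w++) {
            result[w] = reduced.clone();
        }
        return result;
    }

    public static void main(String[] args) {
        // E.g. partial gradient vectors from three map tasks in one iteration.
        double[][] partials = {
            {1.0, 2.0, 3.0},
            {0.5, 0.5, 0.5},
            {2.0, 0.0, 1.0},
        };
        for (double[] copy : allreduce(partials)) {
            System.out.println(Arrays.toString(copy)); // [3.5, 2.5, 4.5] each
        }
    }
}
```

In an iterative algorithm such as K-means or gradient descent, this exchange happens once per iteration, which is why keeping it in memory instead of routing it through HDFS matters.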
Maximizing Hadoop Performance and Storage Capacity with AltraHD™
Maximizing Hadoop Performance and Storage Capacity with AltraHD™

Executive Summary
The explosion of internet data, driven in large part by the growth of more and more powerful mobile devices, has created not only a large volume of data, but a variety of data types, with new data being generated at an increasingly rapid rate. Data characterized by the "three Vs" (Volume, Variety, and Velocity) is commonly referred to as big data, and it has put an enormous strain on organizations to store and analyze their data. Organizations are increasingly turning to Apache Hadoop to tackle this challenge. Hadoop is a set of open-source applications and utilities that can be used to reliably store and process big data. What makes Hadoop so attractive?
- Hadoop runs on commodity off-the-shelf (COTS) hardware, making it relatively inexpensive to construct a large cluster.
- Hadoop supports unstructured data, which includes a wide range of data types, and can perform analytics on unstructured data without requiring a specific schema to describe it.
- Hadoop is highly scalable, allowing companies to easily expand an existing cluster by adding more nodes without requiring extensive software modifications.
- Apache Hadoop is an open-source platform that runs on a wide range of Linux distributions.

The Hadoop cluster is composed of a set of nodes or servers that consist of the following key components.

Hadoop Distributed File System (HDFS)
HDFS is a Java-based file system that layers over the existing file systems in the cluster. The implementation allows the file system to span all of the distributed data nodes in the Hadoop cluster to provide a scalable and reliable storage system.
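The HDFS behavior described here is exposed through Hadoop's standard Java FileSystem API. The sketch below writes one file and reads it back; the NameNode address and path are hypothetical placeholders, and the client never handles blocks or DataNodes directly because HDFS performs the splitting and replication underneath.

```java
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch of HDFS layering over the cluster through the standard
// FileSystem API. The NameNode address and file path are placeholders.
public class HdfsRoundTrip {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:9000");

        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/data/events/sample.txt");

        // Write: the client streams bytes while HDFS splits them into
        // blocks and replicates those blocks across data nodes.
        try (FSDataOutputStream out = fs.create(path, true)) {
            out.write("hello big data".getBytes(StandardCharsets.UTF_8));
        }

        // Read the file back through the same abstraction.
        byte[] buffer = new byte[(int) fs.getFileStatus(path).getLen()];
        try (FSDataInputStream in = fs.open(path)) {
            in.readFully(buffer);
        }
        System.out.println(new String(buffer, StandardCharsets.UTF_8));
        fs.close();
    }
}
```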