Storage Aggregation for Performance & Availability


The Path from Physical RAID to Virtual Objects
Univ. Minnesota, Third Intelligent Storage Workshop
Garth Gibson, CTO, Panasas, and Assoc. Prof., CMU ([email protected])
May 4, 2005

Cluster Computing: The New Wave
- Traditional computing: monolithic computers with a single data path; complex, limited BW & I/O, islands of storage; inflexible (scale by buying a bigger box) and expensive.
- Cluster computing: Linux compute cluster with parallel data paths; the open issues are scaling (file & total BW, file & total capacity, load & capacity balancing), but at much lower $/Gbps.

Panasas: Next Gen Cluster Storage (ActiveScale Storage Cluster)
- Scalable performance: parallel data paths from compute nodes to the storage cluster; scale clients, network and capacity; as capacity grows, performance grows; perform the job directly from high-I/O storage in a single step.
- Simplified and dynamic management: robust, shared file access by many clients; seamless growth within a single namespace eliminates time-consuming admin tasks.
- Integrated HW/SW solution: optimizes performance and manageability; ease of integration and support.
- Architecture: compute cluster connected by parallel data paths to Object Storage Devices, with a separate control path to the metadata managers.

Rooted Firmly in RAID

Birth of RAID (1986-1991)
- Member of the 4th Berkeley RISC CPU design team (SPUR: 84-89); Dave Patterson decides CPU design is a "solved" problem and sends me to figure out how storage plays in SYSTEM PERFORMANCE.
- The IBM 3380 disk is 4 arms in a 7.5 GB washing-machine box: the SLED, Single Large Expensive Disk.
- The new PC industry demands cost-effective 100 MB 3.5" disks, enabled by the new SCSI embedded controller architecture.
- Use many PC disks for parallelism: SIGMOD88, "A Case for RAID".
- PS. $10-20 per MB (~1000X now), 100 MB/arm (~1000X now), 20-30 IO/sec/arm (5X now).

But RAID Is Really About Availability
- Arrays have more Hard Disk Assemblies (HDAs), hence more failures.
- Apply replication and/or error/erasure detection codes.
- Mirroring wastes 50% of space; RAID wastes 1/N.
- Mirroring halves small-write bandwidth; RAID 5 quarters it (sketched in code below).

Off to CMU & More Availability
- Parity declustering "spreads RAID groups" to reduce MTTR: each parity disk block protects fewer than all (C) data disk blocks.
- Virtualizing the RAID group lessens recovery work: faster recovery, better user response time during recovery, or a mixture of both.
- RAID over X, where X = independent fault domains; "disk" is the easiest "X".
- Parity declustering is the first step in RAID virtualization.
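To make the small-write penalty concrete, here is a minimal sketch (not from the talk) of RAID-5 parity maintenance: a full-stripe write computes parity from the new data alone, while a small write must first read the old data and old parity, so one logical write costs four disk I/Os. The XOR parity is standard RAID 5; the block size, stripe width and in-memory "disks" are illustrative assumptions.

```python
# Minimal RAID-5 parity sketch: full-stripe write vs. small (single-block) write.
# Block size, stripe width and in-memory "disks" are illustrative, not Panasas specifics.
# Parity is kept on the last disk for brevity; real RAID 5 rotates parity across disks.
import os

BLOCK = 4096
NDISKS = 5                      # 4 data blocks + 1 parity block per stripe

def xor_blocks(*blocks):
    out = bytearray(BLOCK)
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

disks = [bytearray(BLOCK * 16) for _ in range(NDISKS)]   # 16 stripes per disk

def read_block(disk, stripe):
    off = stripe * BLOCK
    return bytes(disks[disk][off:off + BLOCK])

def write_block(disk, stripe, data):
    off = stripe * BLOCK
    disks[disk][off:off + BLOCK] = data

def full_stripe_write(stripe, data_blocks):
    """Write NDISKS-1 data blocks plus parity: no pre-reads needed."""
    assert len(data_blocks) == NDISKS - 1
    for d, blk in enumerate(data_blocks):
        write_block(d, stripe, blk)
    write_block(NDISKS - 1, stripe, xor_blocks(*data_blocks))

def small_write(disk, stripe, new_data):
    """Update one data block: read old data + old parity, write new data + new parity.
    Four I/Os for one logical write -- the 'RAID 5 quarters small write bandwidth' point."""
    parity_disk = NDISKS - 1
    old_data = read_block(disk, stripe)            # I/O 1
    old_parity = read_block(parity_disk, stripe)   # I/O 2
    new_parity = xor_blocks(old_parity, old_data, new_data)
    write_block(disk, stripe, new_data)            # I/O 3
    write_block(parity_disk, stripe, new_parity)   # I/O 4

def reconstruct(lost_disk, stripe):
    """Availability side: rebuild a lost block by XORing the stripe's survivors."""
    survivors = [read_block(d, stripe) for d in range(NDISKS) if d != lost_disk]
    return xor_blocks(*survivors)

small_write(disk=1, stripe=0, new_data=os.urandom(BLOCK))
assert reconstruct(lost_disk=1, stripe=0) == read_block(1, 0)
```

A mirrored pair would instead turn the same logical write into two writes (halving bandwidth) while consuming 50% of raw capacity, versus 1/N for the parity scheme above.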
Network-Attached Secure Disks (NASD, 95-99)

Storage Interconnect Evolution
- Outboard circuitry increases over time (VLSI density).
- Hardware (#hosts, #disks, #paths) sharing increases over time; logical (information) sharing is limited by host SW.
- 1995: Fibre Channel packetizes SCSI over a near-general network; packetized SCSI will go over any network (iSCSI).
- What do we do next with device intelligence?

Storage as First Class Network Citizen
- Direct transfer between client and storage.
- Exploit scalable switched cluster-area networking.
- Split file service into primitives (in the drive) and policies (in the manager).

NASD Architecture
- Before NASD there was store-and-forward Server-Attached Disks (SAD).
- Move access control, consistency and cache decisions out of band.
- Raise the storage abstraction: encapsulate layout, offload data access.

Fine Grain Access Enforcement
- The state of the art is a VPN of all out-of-band clients and all sharable data and metadata: accident prone and vulnerable to a subverted client; the analogy is single-address-space computing.
- Instead, the client presents a digitally signed, object-specific capability on each request (sketched in code below):
  1. The client requests access from the file manager over a private (integrity/privacy-protected) channel.
  2. The file manager returns CapArgs and CapKey, where CapArgs = ObjID, Version, Rights, Expiry, ... and CapKey = MAC_SecretKey(CapArgs); the secret key is shared by the file manager and the NASD.
  3. The client sends CapArgs, Req, NonceIn, ReqMAC to the NASD, where ReqMAC = MAC_CapKey(Req, NonceIn).
  4. The NASD replies with Reply, NonceOut, ReplyMAC, where ReplyMAC = MAC_CapKey(Reply, NonceOut).
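A minimal sketch of that capability flow, assuming HMAC-SHA256 as the MAC (the slide does not name a specific MAC construction) and simple string-encoded CapArgs; the field layout, function names and enforcement checks are illustrative, not the NASD wire protocol.

```python
# Capability-based access enforcement in the NASD style, using HMAC as the MAC.
# MAC choice (HMAC-SHA256), encodings and field names are illustrative assumptions.
import hmac, hashlib, os, time

def mac(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

SECRET_KEY = os.urandom(32)          # shared by the file manager and the drive (NASD)

# --- File manager: makes the policy decision, issues a capability ------------
def grant_capability(obj_id: str, rights: str, ttl_s: int = 300):
    cap_args = f"{obj_id}|v1|{rights}|{int(time.time()) + ttl_s}".encode()
    cap_key = mac(SECRET_KEY, cap_args)              # CapKey = MAC_SecretKey(CapArgs)
    return cap_args, cap_key                         # handed back to the client

# --- Client: signs each request with the CapKey ------------------------------
def make_request(cap_args, cap_key, req: str):
    nonce_in = os.urandom(8)
    req_mac = mac(cap_key, req.encode() + nonce_in)  # ReqMAC = MAC_CapKey(Req, NonceIn)
    return cap_args, req.encode(), nonce_in, req_mac

# --- Drive (NASD): enforcement primitive only, no policy knowledge -----------
def drive_handle(cap_args, req, nonce_in, req_mac):
    cap_key = mac(SECRET_KEY, cap_args)              # recompute CapKey from CapArgs
    if not hmac.compare_digest(req_mac, mac(cap_key, req + nonce_in)):
        raise PermissionError("bad request MAC")
    obj_id, _ver, rights, expiry = cap_args.decode().split("|")
    if time.time() > int(expiry):
        raise PermissionError("capability expired")
    op, target = req.decode().split()[0], req.decode().split()[1]
    if target != obj_id:
        raise PermissionError("capability is for a different object")
    if op == "READ" and "r" not in rights:
        raise PermissionError("rights do not allow read")
    reply, nonce_out = b"OK: object data ...", os.urandom(8)
    reply_mac = mac(cap_key, reply + nonce_out)      # ReplyMAC = MAC_CapKey(Reply, NonceOut)
    return reply, nonce_out, reply_mac

# One policy decision at the manager, then direct client-to-drive access.
args, key = grant_capability("obj-42", rights="r")
print(drive_handle(*make_request(args, key, "READ obj-42 0 4096"))[0])
```

The drive only verifies MACs, rights and expiry; all policy (who gets which rights on which object) stays in the file manager, which is exactly the primitives-versus-policies split on the previous slide.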
Scalable File System Taxonomy

Today's Ubiquitous NFS
- Advantages: familiar, stable & reliable; widely supported by vendors; competitive market.
- Disadvantages: capacity doesn't scale; bandwidth doesn't scale; clustering only by customer-exposed namespace partitioning.
- [Diagram: clients on a host network, file servers exporting sub-file-systems, disk arrays on a storage network.]

Scale Out 1: Forwarding NFS Servers
- Bind many file servers into a single system image with forwarding (e.g. Tricord, Spinnaker).
- Mount-point binding becomes less relevant, allowing DNS-style balancing; more manageable.
- Control and data traverse the mount-point path (in band), passing through two servers.
- Single-file and single-file-system bandwidth is limited by the backend server & storage.
- [Diagram: clients, host network, file server cluster, storage network, disk arrays.]

Scale Out File Service w/ Out-of-Band
- The client sees many storage addresses and accesses them in parallel.
- Zero file servers in the data path allows high bandwidth through scalable networking.
- Mostly built on block-based SANs where servers trust all clients, e.g. IBM SanFS, IBM GPFS, EMC HighRoad, Sun QFS, SGI CXFS, etc.
- Beginning to be built on object storage (SANs too), e.g. Panasas DirectFlow, Lustre.

Object Storage Standards

Object Storage Architecture
- An evolutionary improvement to the standard SCSI storage interface (OSD).
- Offload most data-path work from the server to intelligent storage.
- Finer granularity of security: protect & manage one file at a time.
- Raises the level of abstraction: an object is a container for "related" data, so storage understands how the blocks of a "file" are related, enabling self-management.
- Per-object extensible attributes are a key expansion of storage functionality.
- Block-based disk vs. object-based disk (source: Intel):
  - Operations: read block, write block vs. create object, delete object, read object, write object.
  - Addressing: block range vs. [object, byte range].
  - Allocation: external vs. internal.
  - Security: at volume level vs. at object level.

Why I Like Objects
- Objects are flexible and autonomous (SNIA Shared Storage Model, 2001): variable-length data with layout metadata encapsulated.
- Share one access-control decision; amortize object metadata; defer allocation; smarter caching.
- Extensible attributes, e.g. size, timestamps, ACLs and more, some updated inline by the device; an extensible programming model.
- Metadata server decisions are signed & cached at clients and enforced at the device; rights and the object map are small relative to a block allocation map.
- Clients can be untrusted because exposure to bugs & attacks is limited to authorized object data.
- Cached decisions (& maps) are replaced transparently: dynamic remapping, i.e. virtualization.

OSD Is Now an ANSI Standard
- [Timeline 1995-2005: CMU NASD, then NSIC NASD, then the SNIA/T10 OSD standard, with Panasas OSD and Lustre alongside.]
- T10/SCSI ratified the OSD command set on 9/27/04.
- Co-chaired by IBM and Seagate; the protocol is a general framework (transport independent).
- www.snia.org/tech_activities/workgroups/osd and www.t10.org/ftp/t10/drafts/osd/osd-r10.pdf
- KEY: the resulting standard builds options for customers looking to deploy objects.

ActiveScale Storage Cluster

Object Storage Systems
- Expect a wide variety of Object Storage Devices: disk array subsystems (i.e. LLNL with Lustre), "smart" disks for objects (2 SATA disks, 240/500 GB), and prototype Seagate OSDs (highly integrated, single disk).
- 16-port GE switch blade, 4 Gbps per shelf to the cluster; up to 5 TB stored per shelf.
- Orchestrates system activity and balances objects across OSDs.

BladeServer Storage Cluster
- Shelf front: 1 DirectorBlade, 10 StorageBlades.
- Shelf rear: integrated GE switch, battery module, 2 power units.
- Midplane routes GE and power.

How Do Panasas Objects Scale?
- Scale capacity, bandwidth and reliability by striping according to a small map.
- A file comprises user data, attributes and a layout; the scalable object map is a list of OSD/object pairs (1. purple OSD & object, 2. gold OSD & object, 3. red OSD & object, ...) plus the stripe size and RAID level (a striping sketch follows at the end of this section).

Object Storage Bandwidth
- Scalable bandwidth demonstrated with GE switching.
- [Chart: aggregate bandwidth in GB/sec (0-12) vs. number of Object Storage Devices (0-350); lab results.]

ActiveScale SW Architecture
- [Diagram: DirectorBlades run the realm & performance managers, a web management server, virtual sub-managers (file, quota and storage managers), ActiveScale protocol servers (NFS, CIFS, NDMP), NTP + management and a DHCP server, over OSD/iSCSI RPC and TCP/IP; StorageBlades run the DFLOW object file system with buffer cache, RAID 0, zero copy, NVRAM, a cache manager and a DB; DirectFLOW clients (Linux VFS, POSIX apps) and UNIX/Windows clients (NFS/CIFS) reach the blades over OSD/iSCSI and TCP/IP, with a local buffer cache and RAID in the DirectFLOW client.]

DirectFLOW Client
- Standard installable file system for Linux.
- POSIX syscalls with common performance exceptions.
- Reduce serialization & media updates (e.g. [slide truncated in source]
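To make the "small map" idea from the scaling slide concrete, here is a minimal sketch of resolving a file offset through such a map. The map structure follows the slide (an ordered list of OSD/object pairs plus a stripe size and RAID level); the field names, the RAID-0 layout and the 64 KB stripe unit are illustrative assumptions, not the DirectFLOW on-wire format.

```python
# Resolving a file offset through a per-file object map, in the spirit of the
# "small map" on the scaling slide. Names, the RAID-0 layout and the 64 KB
# stripe unit are illustrative assumptions, not the Panasas format.
from dataclasses import dataclass

@dataclass
class ComponentObject:
    osd_id: str          # which OSD holds this component ("purple", "gold", "red", ...)
    object_id: int       # object identifier on that OSD

@dataclass
class ObjectMap:
    components: list     # ordered list of ComponentObject
    stripe_unit: int = 64 * 1024      # bytes per stripe unit (assumed)
    raid_level: int = 0               # RAID 0 shown for brevity; level is per file

    def resolve(self, file_offset: int):
        """Map a file byte offset to (component object, byte offset within it)."""
        stripe_no, off_in_unit = divmod(file_offset, self.stripe_unit)
        comp = self.components[stripe_no % len(self.components)]
        obj_offset = (stripe_no // len(self.components)) * self.stripe_unit + off_in_unit
        return comp, obj_offset

# Three-way striped file, as on the slide: purple, gold and red OSDs.
fmap = ObjectMap([ComponentObject("purple", 101),
                  ComponentObject("gold", 202),
                  ComponentObject("red", 303)])

for file_off in (0, 64 * 1024, 200 * 1024):
    comp, obj_off = fmap.resolve(file_off)
    print(f"file offset {file_off:>7} -> OSD {comp.osd_id:6} object {comp.object_id} @ {obj_off}")
```

Because the map is just this short list plus a stripe size and RAID level, a metadata manager can hand it to clients cheaply, and growing capacity or bandwidth means adding components rather than rewriting a block allocation map.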