Pacemaker – the Open Source, High Availability Cluster


Research Institute for Software Technology, OpenSource Technical Team | Kim, Donghyun
Saturday, July 23, 2016
한국 리눅스 사용자 그룹 (Korea Linux User Group), [email protected]
제3회 난공불락 오픈소스 인프라세미나 (3rd "Nangongbullak" Open Source Infrastructure Seminar)

# Whoami
Systems and infrastructure geek; Enterprise Linux infrastructure engineer (Red Hat).
Work
- Technology research
- Technical support: troubleshooting, debugging, performance tuning
- Consulting: Linux (Red Hat, SUSE, OEL), virtualization, high availability
Hobby
- Travelling
- Drawing (cartoons)
I love Linux ♥
- Blog: http://rhlinux.tistory.com/
- Café: http://cafe.naver.com/iamstrong
- SNS: https://www.facebook.com/groups/korelnxuser

In this Session
- Pacemaker's story: the open source, high availability cluster
- Overview of HA architectural components
- Use case examples
- Future endeavors

Pacemaker - The Open Source, High Availability Cluster

HA for OpenSource Technology

"Mission Critical Linux"

High-Availability Clustering in the Open Source Ecosystem
https://alteeve.ca/w/High-Availability_Clustering_in_the_Open_Source_Ecosystem
Timeline (most recent first):
- 2014: Pacemaker 1.1.10, released with RHEL 6.5.
- 2010s: Pacemaker version 1.1 — the CIB (Cluster Information Base, configured as XML); Red Hat "cman" support.
- Red Hat plans to support the legacy cman + rgmanager stack in RHEL 6 until that release reaches end of life (2020).
- SLES 11 SP1 moved from OpenAIS to Corosync; Hawk, a web-based GUI, appeared.
- Adoption has widened through technology agreements among global vendors.
- Today, ClusterLabs is rapidly integrating the components created in the Heartbeat project and evolving them into a distinct solution.
- 2010: Pacemaker added support for cman; the Heartbeat project reached version 3.
- 2005: Red Hat's cman + rgmanager (RHCS, Cluster Services version 2).
- 2007: Pacemaker appeared as the CRM of Heartbeat v2.1.3 — the Heartbeat package called "Pacemaker".
- Until 2007, the two projects remained entirely separate.
In 2007, out of the Linux-HA project, Pacemaker was born as a cluster resource manager that could take membership from, and communicate via, Red Hat's OpenAIS or SUSE's Heartbeat.
- 2008: SUSE and Red Hat developers held informal meetings about reusing some of each other's code — SUSE's CRM/Pacemaker and Red Hat's OpenAIS.
- 1998–2007: Heartbeat, the old Linux-HA cluster manager (Alan Robertson).
- 2008: Pacemaker version 0.6.0 was released, with support for OpenAIS.
- 2009: the new "Corosync" project was announced.
- 2002: Red Hat's "Red Hat Cluster Manager" version 1, shipped with RHEL 2.1.
- Late 1990s: two entirely independent attempts to build an open source high-availability platform began — SUSE's "Linux-HA" project and Red Hat's "Cluster Services".
- 2003: SUSE's Lars Marowsky-Brée conceived a new project called the "crm"; Red Hat purchased Sistina Software (GFS, the Global File System).
- 2004: SUSE and Red Hat developers attended the Cluster Summit together; SUSE, in partnership with Oracle, released OCFS2.
- 1998: the Linux-HA project began around a new protocol called "Heartbeat"; Heartbeat v1.0 was released later.
- 2000s: "Mission Critical Linux"; the company Sistina Software was founded around the "Global File System".
- 2005: "Heartbeat version 2" released.

OpenSource Project Progress
The original slide is a diagram of how the projects line up across the vendors:
- GUIs: Hawk, pcs_gui, luci, pacemaker-mgmt
- CLIs and add-ons: pcs, crmsh, booth
- Resource management: pacemaker, rgmanager, resource-agents, fence-agents, cluster-glue
- Membership/messaging: cman, Heartbeat, corosync
- Stacks and their developers: Linux-HA / ClusterLabs (community), SLES HA (Novell), RHEL HA Add-on (Red Hat)

Architectural Software Components
- Corosync: messaging (framework) and membership service.
- Pacemaker: cluster resource manager.
- Resource Agents (RAs): configure, manage, and monitor the available services.
- Fencing devices: in Pacemaker, fencing is called STONITH.
- User interface: the crmsh (Cluster Resource Manager Shell) CLI and the Hawk web UI (SLES); the pcs (Pacemaker Configuration System) CLI and pcs_gui (RHEL).

More…
- LVS (managed via Keepalived): kernel space, Layer 4, IP + port.
- HAProxy: user space, Layer 7, HTTP based.
- Shared filesystems: OCFS2 / GFS2.
- Block device replication: DRBD, cLVM mirroring, clustered MD RAID 1.

Pacemaker: the Resource Manager
- Provides a high-availability and load-balancing stack for the Linux platform.
- Applications are integrated through Resource Agents (RAs); the user decides cluster resource policy directly.
- Freedom to create, delete, and change Resource Agent configurations.
- Broadly satisfies the HA requirements of applications in many industries (public sector, securities/finance, telecom, and others).
- Fence agents are easy to configure and manage as ordinary resources.
Monitor and control resources:
- systemd / LSB / OCF services
- Cloned services: N+1, N+M, N nodes
- Multi-state (master/slave, primary/secondary)
STONITH (Shoot The Other Node In The Head):
- Fencing with power management.

Pacemaker - Architecture Components
- Resource Agents: agent scripts (Open Cluster Framework).
- Pacemaker: CRMd, CIB, PEngine, LRMd, STONITHd — resource management.
- Cluster abstraction layer.
- Corosync: membership, messaging, quorum.

Pacemaker - High-Level Architecture
- Resources layer: Resource Agents (RAs) and the services they manage (Apache, PostgreSQL, and so on).
- Resource allocation layer: the CRM (Cluster Resource Manager), the CIB (Cluster Information Base, replicated as XML), the LRM (Local Resource Manager), and the Policy Engine.
- Messaging/infrastructure layer: Corosync, connecting cluster node #1 and node #2.

Quick Overview of Components - CRMd
CRMd (Cluster Resource Management daemon)
- Acts as the main controlling process.
- The daemon that routes all resource operations; it handles every action taken in the resource allocation layer.
- Maintains the Cluster Information Base (CIB).
- Resources managed by CRMd can be queried by client systems and moved, instantiated, or changed as needed.

Quick Overview of Components - CIB
CIB (Cluster Information Base)
- The configuration information management daemon.
- Configuration is held as XML (in-memory data).
- Synchronizes each node's configuration and status information as provided by the DC (Designated Coordinator).
- The CIB can be changed with the cibadmin command, or through the crm shell or the pcs utility.

Quick Overview of Components - PEngine
PEngine (PE, the Policy Engine)
- The PE process runs on every node, but it is active only on the DC [1].
- Applies user-defined policy — clones, domains, and so on — for the various service environments.
- Checks dependencies when a resource is moved to another cluster node.
[1] DC = Designated Controller (master node)

Quick Overview of Components - LRMd
LRMd (Local Resource Management daemon)
- Acts as the interface between CRMd and each resource, passing CRMd's commands on to the agents.
- Calls the node's own Resource Agents (RAs) on behalf of the CRM.
- Runs start / stop / monitor operations and reports the results back to the CRM.

Quick Overview of Components - Resource Agents (1/2)
RAs (Resource Agents) are a standardized interface defined for cluster resources.
- They provide the start / stop / monitor scripts for a local resource.
- RAs are called by the LRM.
Pacemaker supports three types of RA:
- LSB: Linux Standard Base "init scripts" (/etc/init.d/resource).
- OCF: Open Cluster Framework (an extension of the LSB resource agents). Resource types are written standard:provider:name; the agents live under /usr/lib/ocf/resource.d/heartbeat and /usr/lib/ocf/resource.d/pacemaker.
- STONITH resource agents.
Terminology: resource = service; clone = multiple instances of a resource; ms = master/slave instances of a resource.
Many contributors publish agents through GitHub so they can be applied to a wide range of application environments:
http://linux-ha.org/wiki/OCF_Resource_Agent
http://linux-ha.org/wiki/LSB_Resource_Agents
https://github.com/ClusterLabs/resource-agents

Quick Overview of Components -
Resource Agents (2/2)
Operations every resource agent provides:
- start / stop / monitor.
- validate-all: checks the resource configuration.
- meta-data: the agent reports information about itself (used by GUIs and other tools).
Additional operations provided by OCF resource agents:
- promote: to master/primary.
- demote: to slave/secondary.
- notify: the cluster tells agents in advance about events affecting a resource.
- reload: refreshes the resource configuration.
- migrate_from / migrate_to: perform live migration of a resource.
What resource scores mean:
- Most resources have a score defined, but occasionally none is specified.
- Scores are used when deciding on which cluster node a resource runs.
- The highest score is INF (1,000,000); the lowest is -INF (-1,000,000).
- A positive score means "can run"; a negative score means "cannot run". At +INF or -INF, "can" becomes "must".

Quick Overview of Components - STONITHD (1/2)
STONITHD ("Shoot The Other Node In The Head" daemon)
- The service daemon used to fence nodes.
- Pacemaker's fencing agents.
- Monitored like an ordinary cluster resource.
- STONITH-NG, the next generation of the STONITH daemon, adds monitoring, notification, and other features.

Quick Overview of Components - STONITHD (2/2)
- Application-level fencing can be configured; Pacemaker coordinates fencing directly — stonithd, not the older fenced.
- Fence devices most used in practice:
  - APC PDU (networked power switch)
  - HP iLO, Dell DRAC, IBM IMM, IPMI appliances
  - KVM, Xen, VMware (software libraries)
  - Software-based SBD (the most common choice on the SUSE side)
- Fencing is required for data integrity:
  - It is the safest way to move resources to another node in the cluster.
  - In an "enterprise" Linux HA cluster it is a requirement, not an option.

What is fencing?
A mechanism (I/O fencing) for protecting data against planned or unplanned system downtime, for example:
- Kernel panic
- System freeze
- Live hang / recovery

Quick Overview of Components - Corosync
An open source group messaging system typically used in clusters, cloud computing, and other high availability environments.
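The score arithmetic described on the resource-scores slide above can be illustrated in shell. This is a toy model of the +/-INFINITY saturation rules only — `score_add` is a made-up helper, not part of Pacemaker, whose real implementation is in C:

```shell
#!/bin/sh
# Toy model of Pacemaker-style score arithmetic. Scores are plain integers
# that saturate at +/-INFINITY (1,000,000). Positive = "can run",
# negative = "cannot run", and +/-INFINITY turns "can" into "must".
INFINITY=1000000

# score_add is a hypothetical helper; -INFINITY dominates, matching the
# rule that INFINITY + -INFINITY yields -INFINITY.
score_add() {
    a=$1; b=$2
    if [ "$a" -le "-$INFINITY" ] || [ "$b" -le "-$INFINITY" ]; then
        echo "-$INFINITY"; return 0
    fi
    if [ "$a" -ge "$INFINITY" ] || [ "$b" -ge "$INFINITY" ]; then
        echo "$INFINITY"; return 0
    fi
    sum=$((a + b))
    # Clamp ordinary sums into the [-INFINITY, +INFINITY] range.
    if [ "$sum" -gt "$INFINITY" ]; then sum=$INFINITY; fi
    if [ "$sum" -lt "-$INFINITY" ]; then sum="-$INFINITY"; fi
    echo "$sum"
}

score_add 100 50             # prints 150: an ordinary preference
score_add "$INFINITY" -200   # prints 1000000: "must run" absorbs finite penalties
```

In a real cluster these numbers come from location and colocation constraints, stickiness, and failure counts; the node with the highest combined score runs the resource.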
- The basic cluster infrastructure that Pacemaker needs in order to operate.
- Communication layer: messaging and membership.
  - The Totem single-ring ordering and membership protocol.
  - Default constraint: multicast communication is preferred.
  - Communicates over UDP/IP- and InfiniBand-based networks.
  - UDPU (unicast UDP; supported on RHEL from 6.2 onward).
- Corosync (from OpenAIS), versus cman (RHEL 6 only).
- Supports cluster filesystems (GFS2, OCFS2, cLVM2, and so on).

Corosync Cluster Engine Architecture
Handle Database Manager:
- Maps a unique 64-bit handle identifier to a memory address in O(1) time.
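The resource agent contract described on the earlier slides — start / stop / monitor actions with OCF exit codes, driven by the LRM — can be sketched as a minimal shell agent. This is an illustrative toy, not an agent from the resource-agents package: the state-file path and `toy_*` function names are invented, and a real OCF agent would also implement meta-data and validate-all and source the OCF shell function library:

```shell
#!/bin/sh
# Sketch of an OCF-style resource agent. Real agents live under
# /usr/lib/ocf/resource.d/<provider>/ and are invoked by the LRM as
#   <agent> start|stop|monitor|meta-data|validate-all
# Relevant OCF exit codes:
OCF_SUCCESS=0
OCF_NOT_RUNNING=7

# The "service" managed here is just a state file (purely illustrative).
STATEFILE="${TMPDIR:-/tmp}/toy-ra.state"

toy_start()   { touch "$STATEFILE"; return $OCF_SUCCESS; }
toy_stop()    { rm -f "$STATEFILE"; return $OCF_SUCCESS; }
toy_monitor() {
    # monitor must distinguish "running" from "cleanly stopped".
    if [ -f "$STATEFILE" ]; then
        return $OCF_SUCCESS
    else
        return $OCF_NOT_RUNNING
    fi
}

# Simulate the LRM's usual sequence: start, monitor, stop, monitor.
toy_start;   rc_start=$?
toy_monitor; rc_mon_up=$?
toy_stop;    rc_stop=$?
toy_monitor; rc_mon_down=$?
echo "start=$rc_start monitor=$rc_mon_up stop=$rc_stop monitor=$rc_mon_down"
# prints: start=0 monitor=0 stop=0 monitor=7
```

An agent like this would be registered by its standard:provider:name triple (for example with pcs resource create or crm configure primitive), after which the cluster calls monitor on the configured interval.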