Scalable Kernel TCP Design and Implementation for Short-Lived Connections

Xiaofeng Lin (Sina Corporation, ZHIHU Corporation) [email protected]
Yu Chen (Department of Computer Science and Technology, Tsinghua University) [email protected]
Xiaodong Li (Sina Corporation) xiaodong2@staff.sina.com.cn
Junjie Mao, Jiaquan He, Wei Xu, Yuanchun Shi (Department of Computer Science and Technology, Tsinghua University) [email protected], [email protected], [email protected], [email protected]

ASPLOS '16, April 2–6, 2016, Atlanta, Georgia, USA. Copyright 2016 ACM 978-1-4503-4091-5/16/04. DOI: http://dx.doi.org/10.1145/2872362.2872391

Abstract

With the rapid growth of network bandwidth, increases in CPU cores on a single machine, and application API models demanding more short-lived connections, a scalable TCP stack is performance-critical.

Although many clean-slate designs have been proposed, production environments still call for a bottom-up parallel TCP stack design that is backward-compatible with existing applications.

We present Fastsocket, a BSD socket-compatible and scalable kernel socket design, which achieves table-level connection partition in the TCP stack and guarantees connection locality for both passive and active connections. The Fastsocket architecture is a ground-up partition design, from NIC interrupts all the way up to applications, which naturally eliminates various lock contentions in the entire stack. Moreover, Fastsocket maintains the full functionality of the kernel TCP stack and a BSD-socket-compatible API, and thus applications need no modifications.

Our evaluations show that Fastsocket achieves a speedup of 20.4x on a 24-core machine under a workload of short-lived connections, outperforming the state-of-the-art Linux kernel TCP implementations. When scaling up to 24 CPU cores, Fastsocket increases the throughput of Nginx and HAProxy by 267% and 621% respectively compared with the base Linux kernel. We also demonstrate that Fastsocket can achieve scalability and preserve the BSD socket API at the same time. Fastsocket is already deployed in the production environment of Sina WeiBo, serving 50 million daily active users and billions of requests per day.

Categories and Subject Descriptors: D.4.4 [Operating Systems]: Communications Management—Network communication

Keywords: TCP/IP; multicore system; operating system

1. Introduction

Recent years have witnessed a rapid growth of mobile devices and apps. These new mobile apps generate a large amount of traffic consisting of short-lived TCP connections [39]. HTTP is a typical source of short-lived TCP connections. For example, in Sina WeiBo, the typical request length of one heavily invoked HTTP interface is around 600 bytes and the corresponding response length is typically about 1200 bytes. Both the request and the response consume only one single IP packet. After the two-packet data transaction is done, the connection is closed, which closely matches the characteristics of a short-lived connection. In this situation, the speed of establishment and termination of TCP connections becomes crucial for server performance.
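Concretely, every request in such a workload pays the full cost of connection establishment and teardown. The following minimal client-side sketch shows this pattern with the BSD socket API; it is an illustration rather than code from the paper, and the loopback address, port, loop count and request body are placeholder assumptions, with error handling trimmed for brevity.

/*
 * Minimal client-side sketch of the short-lived connection pattern
 * described above: one small request, one small response, then close.
 * Illustration only; address, port and payload are placeholders.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    const char *req = "GET /interface HTTP/1.1\r\nHost: example.com\r\n\r\n";
    char resp[2048];

    for (int i = 0; i < 100000; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);    /* new TCB + socket FD */
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port   = htons(80),
        };
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            write(fd, req, strlen(req));             /* ~one request packet  */
            read(fd, resp, sizeof(resp));            /* ~one response packet */
        }
        close(fd);                                    /* TCB and FD torn down */
    }
    return 0;
}

Each iteration establishes and terminates a full TCP connection, which is exactly the setup/teardown path whose scalability the paper examines.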
As the number of CPU cores in one machine increases, the scalability of the network stack plays a key role in the performance of network applications on multi-core platforms. For long-lived connections, the metadata management for new connections is not frequent enough to cause significant contention, so we do not observe scalability issues of the TCP stack in these cases. However, it is challenging to reach the same level of scalability for short-lived connections as for long-lived connections, since they involve frequent and expensive TCP connection establishment and termination, which results in severe shared resource contention in the TCP Control Block (TCB) and Virtual File System (VFS).

TCB Management

The TCP connection establishment and termination operations involve managing the global TCBs, which are represented as TCP sockets in the Linux kernel. All these sockets are managed in two global hash tables, known as the listen table and the established table. Since these two tables are shared system-wide, synchronization is inevitable among the CPU cores.

In addition, the TCB of a connection is accessed and updated in two phases:

1. Incoming packets of the connection are received in the interrupt context through the network stack on the CPU core that receives the NIC (Network Interface Controller) interrupt.

2. Further processing of the incoming packet payload in the user-level application and transmission of the outgoing packets are accomplished on the CPU core running the corresponding application.

Ideally, for a given connection, both phases are handled entirely on one CPU core, which maximizes CPU cache efficiency. We refer to this property as connection locality. Unfortunately, in the current Linux TCP stack, these two phases often run on different CPU cores, which seriously affects scalability.
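The shared listen and established tables described above are the first source of contention that a table-level partition removes. The following conceptual user-space sketch (not kernel or Fastsocket code; the type names, bucket count and core count are illustrative assumptions) contrasts a single lock-protected established table with a per-core partitioned layout of the kind the paper argues for.

/*
 * Conceptual sketch only: contrasts a globally shared established table
 * with a per-core partitioned one.  Not kernel or Fastsocket code; the
 * names and constants below are illustrative assumptions.
 */
#include <pthread.h>

#define HASH_BUCKETS 1024
#define MAX_CORES    24

struct tcb;                          /* stands in for a kernel TCP socket */

struct tcb_bucket {
    struct tcb *head;
};

/*
 * Baseline: one established table shared by every core.  Each connection
 * setup and teardown inserts into or deletes from it under the same lock,
 * so all cores serialize here under a short-lived connection workload.
 */
struct est_table_shared {
    pthread_spinlock_t lock;
    struct tcb_bucket  buckets[HASH_BUCKETS];
};

/*
 * Table-level partition: each core owns a private table and only ever
 * touches its local one, so the common path needs no cross-core
 * synchronization.
 */
struct est_table_partitioned {
    struct tcb_bucket buckets[MAX_CORES][HASH_BUCKETS];
};

The sketch deliberately leaves out the hard part, namely guaranteeing that a packet arriving from the NIC is processed on the core that owns its connection; that steering question is the connection-locality problem the paper addresses.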
VFS Abstraction

A socket is abstracted by VFS and exposed to user-level applications as a socket File Descriptor (FD). As a result, the performance of opening and closing a socket FD in VFS has a direct impact on the efficiency of TCP connection establishment and termination. However, there are severe synchronization overheads in processing VFS shared states, e.g., the inode and dentry, which results in scalability bottlenecks.

Serious lock contention is observed on an 8-core production server running HAProxy as an HTTP load balancer for WeiBo to distribute client traffic to multiple servers. From profiling data collected by perf, we observed that spin locks consume 9% and 11% of total CPU cycles in TCB management and VFS respectively. Even worse, due to contention inside the network stack processing, there is a clear load imbalance across the CPU cores, even though packets are evenly distributed to these cores by the NIC.

Production Environment Requirements

In a production environment, such as a data center, network server/proxy applications have three fundamental requirements:

1. Compatibility with TCP/IP-related RFCs: Different applications follow different TCP/IP-related RFCs. As a result, the TCP/IP implementation in a production environment also needs to support commonly accepted RFCs as far as possible.

2. Security: TCP may be attacked in a variety of ways (DDoS, connection hijacking, etc.). These attacks call for thorough security protection of TCP [13] in the OS kernel.

3. Resource Isolation and Sharing: The TCP/IP stack needs to support diverse existing NIC hardware resources, and should isolate or share data with other namespaces or subsystems (disks, file systems, etc.).

Extensive research [12, 21, 22, 27, 30, 38] has focused on system support for enabling network-intensive applications to achieve performance close to the limits imposed by the hardware. Some work proposes a brand new I/O abstraction to make the TCP stack scalable [21]; however, applications have to be rewritten with the new API. Recent research projects [12, 22, 30] attempt to solve the TCP stack scalability problem using the modified embedded TCP stack lwIP [26] or a custom-built user-level TCP stack built from the ground up. Though these designs can improve the performance of the network stack, they do not fully replicate all of the kernel's advanced networking features (Firewall/Netfilter, TCP veto, IPsec, NAT and other vulnerability defenses for security, TCP optimizations for wireless networks, the cgroup/sendfile/splice mechanisms for resource isolation and sharing, etc.), which are heavily utilized by our performance-critical applications. Therefore, it currently remains an open challenge to completely solve the TCP stack scalability problem while keeping backward compatibility and full features to benefit existing network applications in the production environment.

In this paper, we present Fastsocket to address the aforementioned scalability and compatibility problems in a backward-compatible, incrementally deployable and maintainable way. This is achieved with three major changes: 1) partition the globally shared data structures, the listen table and the established table; 2) correctly steer incoming packets to achieve connection locality for any connection; and 3) provide a fast path for sockets in VFS to solve the scalability problems in VFS and wrap the aforementioned TCP designs to fully preserve the BSD socket API.

In the production environment, with Fastsocket, the contention in TCB management and VFS is eliminated and the effective capacity of the HAProxy server is increased by 53.5%. Our experimental evaluations show that in benchmarks of web servers and proxy servers handling short-lived connections, Fastsocket outperforms the base Linux kernel by 267% and 621% respectively.

This paper makes three key contributions:

1. We introduce a kernel network stack design which achieves table-level partition for TCB management and realizes connection locality for arbitrary connection types. Our partition scheme is complete and naturally solves all potential contentions along the network processing path, including contentions we do not directly address, such as timer lock contention.

2. We demonstrate that the BSD socket API does not have to be an obstacle to network stack scalability on commodity hardware configurations. The evaluation using real-
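As a closing illustration for this excerpt, and as the counterpart to the earlier client sketch, the following minimal server-side sketch shows where the VFS cost described under "VFS Abstraction" is paid: every accept() allocates a new socket FD with its backing VFS state, and close() destroys it one request later. This is an illustration, not code from the paper; the port, backlog and payloads are placeholder assumptions and error handling is omitted.

/*
 * Minimal server-side sketch of the same short-lived workload.  Each
 * accepted connection allocates a socket FD (and the VFS state behind
 * it) that is released again after a single request/response.
 * Illustration only; port, backlog and payloads are placeholders.
 */
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {
        .sin_family      = AF_INET,
        .sin_port        = htons(8080),
        .sin_addr.s_addr = htonl(INADDR_ANY),
    };
    const char *resp = "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok";
    char req[2048];

    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 1024);

    for (;;) {
        int cfd = accept(lfd, NULL, NULL);  /* socket FD + VFS state created */
        if (cfd < 0)
            continue;
        read(cfd, req, sizeof(req));        /* the ~600-byte request          */
        write(cfd, resp, strlen(resp));     /* the response (stub payload)    */
        close(cfd);                         /* FD and VFS state destroyed     */
    }
}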