Managing Energy As a First Class Operating System Resource

Total pages: 16. File type: PDF, size: 1020 KB.

ECOSystem: Managing Energy as a First Class Operating System Resource

Heng Zeng, Xiaobo Fan, Carla Ellis, Alvin Lebeck, and Amin Vahdat
Department of Computer Science, Duke University
{zengh,xiaobo,carla,alvy,vahdat}@cs.duke.edu

Technical Report CS-2001-01. March 25, 2001; revised August 8, 2001.

(This work is supported in part by the National Science Foundation (EIA-99772879, ITR-0082914), Intel, and Microsoft. Vahdat is also supported by an NSF CAREER award (CCR-9984328). Additional information on this work is available at http://www.cs.duke.edu/ari/millywatt/.)

Abstract

The energy consumption of computers has recently been widely recognized to be a major challenge of systems design. Our focus in this paper is to investigate what role the operating system can play in improving energy usage without depending on application software being rewritten to become energy-aware. Energy, with its global impact on the system, is a compelling reason for unifying resource management. Thus we propose the Currentcy Model that unifies energy accounting over diverse hardware components and enables fair allocation of available energy among applications. Our particular goal is to extend battery lifetime for mobile devices. We have implemented ECOSystem, a modified Linux, that incorporates our currentcy model and demonstrates the feasibility of explicit control of the battery resource. Experimental results show that ECOSystem can hit a target battery lifetime, and for reasonable targets, can do so with acceptable performance.

1 Introduction

One of the emerging challenges of computer system design is the management and conservation of energy. This goal manifests itself in a number of ways. The goal may be to extend the lifetime of the batteries in a mobile/wireless device. The processing power, memory, and network bandwidth of such devices are increasing rapidly, often resulting in an increase in demand for power, while battery capacity is improving at only a modest pace. Other goals may be to limit the cooling requirements of a machine or to reduce the economic costs of powering a large computing facility. Each scenario has slightly different implications. In this work, we focus on battery lifetime, which allows us to exploit certain characteristics of battery technology.

Ideally, the problem of managing the energy consumed by computing devices should be addressed at all levels of system design - from low-power circuits to applications capable of adapting to the available energy source. Many research and industrial efforts are currently focusing on developing low-power hardware. We have previously advocated the value of including energy-aware application software as a significant layer in the design of energy efficient computing systems [8]. It is now a widely held view in the community that application involvement is important; however, the necessity of application involvement for achieving energy savings via software has not yet been shown. Thus, an important question to ask is what the operating system can do within its own resource management functions to improve energy usage without assuming any explicit cooperation from applications. Our scientific objective in this paper is to explore the degree to which energy-related goals can be achieved at the OS level, exploiting existing, state-of-the-art hardware features, but requiring no application-specific knowledge or the ability of applications to adapt to energy constraints. This point of view also has practical implications since we cannot depend on many current applications being rewritten to become energy-aware, at least until it is demonstrated that the effort needed will produce dramatically better results than systems-based approaches or until a suitable infrastructure is available to facilitate and support such redesign.

One of the major contributions of our work is the introduction of an energy accounting model, called the currentcy model, that unifies resource management for different components of the system and allows energy itself to be explicitly managed. Unifying resource management has often been mentioned as a desirable goal, but a focus on energy provides a compelling motivation to seriously pursue this idea. Energy has a global impact on all of the components of the entire system. In our model, applications can spend their share of energy on processing, on disk I/O, or on network communication - with such expenditures on different hardware components represented by a common model. A unified model makes energy use tradeoffs among hardware components more explicit.

In general, there are two problems to consider at the OS level for attacking most of the specific energy-related goals described above. The first is to develop resource management policies that eliminate waste or overhead and make using the device as energy efficient as possible. An example is a disk spindown policy that uses the minimal energy whenever the disk is idle. This has been the traditional approach and has typically been employed in a piecemeal, per-device fashion. We believe our currentcy model will provide a framework to view such algorithms from a more systemwide perspective. The second approach is to change the offered workload to reduce the amount of work to be done. This is the underlying strategy in application adaption, where the amount of work is reduced, often by changing the fidelity of objects accessed, presumably in an undetectable or acceptably degraded manner for the user of the application. Unfortunately, without the benefit of application-based knowledge, other ways of reducing workload demands must be found. Our currentcy model provides a framework in which to formulate policies intended to selectively degrade the level of service to preserve energy capacity for more important work.

Assuming that previous work on per-device power management policies provides an adequate base upon which to design experiments, we concentrate first on the less-explored second problem - formulating strategies for adjusting the quality of service delivered to applications. Later, we will return to the first issue and revisit such policies, expressed in terms of our model.

Observing that the lifetime of a battery can be controlled by limiting the discharge rate [20, 26], the energy objective we consider in this work for our energy management policies is to control the discharge rate to meet a specified battery lifetime goal. The first-level allocation decision is to determine how much currentcy can be allocated to all the active tasks in the next time interval so as to throttle to some target discharge rate. Essentially, this first-level allocation determines the ratio of active work that can be accomplished to enforced idleness that offers opportunities to power down components. Then, the second-level decision is to proportionally share this allocation among competing tasks.

We have implemented an OS prototype incorporating these energy allocation and accounting policies. Experiments quantify the battery lifetime / performance tradeoffs of this approach. We demonstrate that the system can hit a target battery lifetime with acceptable performance degradation. Our proportional sharing serves to distribute the performance impact among competing activities in an effective way.

The paper is organized as follows. In the next section, we outline the underlying assumptions of this work, including the power budget, the characteristics of batteries, and the nature of the expected workload of applications. In Section 3, we present the currentcy model and the design of the currentcy allocator. In Section 4, we describe the prototype implementation, and in Section 5, we present the results of experiments to assess the benefits of this approach. We discuss related work in the next section and then conclude. In future work, after exploring the possibilities for OS-centric energy management, we can begin to identify complementary ways in which applications can interact with OS policies to enhance their effectiveness. We plan to consider how charging policies for the use of different devices will suggest appropriate interactions with applications that can be included in an effective API that is consistent with our model.

2 Background and Motivation

For the users of mobile computing, battery lifetime is an important performance measure. Typically, users and system designers face a tradeoff between maximizing lifetime and traditional performance measures such as throughput and response time. We assume a workload consisting of a mix of interactive productivity applications and multimedia processing. Depending on the applications of the device, the actual goal might be to have the battery last just long enough to accomplish a specified task (e.g., finish the scheduled presentation on the way to the meeting) or a fixed amount of work (e.g., viewing a DVD movie). Thus, metrics have been proposed that try to capture the tradeoff between the work completed and battery lifetime [21]. Alternatively, the work might not be fixed, but an acceptable quality of service is desired for as long as possible (e.g., for an MP3 player). Extending battery lifetime must usually be balanced against some loss in performance. Thus, our job is twofold: to achieve the battery lifetime goal and to find a fair way to distribute the performance impact among applications. Certain battery characteristics must be considered when trying to control battery lifetime, as described in Section 2.1.

OS-level energy management can be split across two dimensions. Along one dimension, there are a wide variety of devices in the system (e.g., the CPU, disks, network interfaces, display) that can draw power concurrently and are amenable to very different management techniques. This motivates a unified model that can be used to characterize the power/energy consumption of all of these components.
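The two-level allocation described above - a first-level decision that converts a target discharge rate into a per-epoch currentcy budget, and a second-level proportional share among competing tasks - can be sketched as follows. This is an illustrative reconstruction, not the ECOSystem implementation: the epoch length, the currentcy unit, and the task weights are all assumptions.

```python
# Illustrative sketch of two-level currentcy allocation in the style
# described by the paper. Not the actual ECOSystem code: the epoch
# length, the unit (1 unit of currentcy = 1 mJ here), and the task
# share weights are assumptions for this example.

def epoch_currentcy(remaining_mj, target_lifetime_s, elapsed_s, epoch_s):
    """First level: currentcy that may be spent in the next epoch so the
    battery drains no faster than the target discharge rate."""
    time_left = target_lifetime_s - elapsed_s
    if time_left <= 0:
        return 0.0
    target_rate = remaining_mj / time_left      # mJ per second
    return target_rate * epoch_s                # budget for this epoch

def proportional_share(budget, weights):
    """Second level: split the epoch budget among competing tasks in
    proportion to their configured shares."""
    total = sum(weights.values())
    return {task: budget * w / total for task, w in weights.items()}

# A battery with 10,000 mJ left and 100 s of target lifetime remaining,
# allocated in 1 s epochs to three tasks with 2:1:1 shares.
budget = epoch_currentcy(10_000, 100, 0, 1)
alloc = proportional_share(budget, {"mp3": 2, "editor": 1, "sync": 1})
```

Unspent currentcy in an epoch corresponds to the "enforced idleness" the paper mentions: the gap between the budget and what tasks consume is time in which devices can be powered down.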
Recommended publications
  • Isolation, Resource Management, and Sharing in Java
    Processes in KaffeOS: Isolation, Resource Management, and Sharing in Java. Godmar Back, Wilson C. Hsieh, Jay Lepreau. School of Computing, University of Utah.
    Abstract: Single-language runtime systems, in the form of Java virtual machines, are widely deployed platforms for executing untrusted mobile code. These runtimes provide some of the features that operating systems provide: inter-application memory protection and basic system services. They do not, however, provide the ability to isolate applications from each other, or limit their resource consumption. This paper describes KaffeOS, a Java runtime system that provides these features. The KaffeOS architecture takes many lessons from operating system design, such as the use of a user/kernel boundary, and employs garbage collection techniques, such as write barriers. The KaffeOS architecture supports the OS abstraction of a process in a Java virtual machine.
    […] many environments for executing untrusted code: for example, applets, servlets, active packets [41], database queries [15], and kernel extensions [6]. Current systems (such as Java) provide memory protection through the enforcement of type safety and secure system services through a number of mechanisms, including namespace and access control. Unfortunately, malicious or buggy applications can deny service to other applications. For example, a Java applet can generate excessive amounts of garbage and cause a Web browser to spend all of its time collecting it. To support the execution of untrusted code, type-safe language runtimes need to provide a mechanism to isolate and manage the resources of applications, analogous to that provided by operating systems. Although other resource management abstractions exist [4], the classic OS […]
  • Libresource - Getting System Resource Information with Standard Apis Tuesday, 13 November 2018 14:55 (15 Minutes)
    Linux Plumbers Conference 2018. Contribution ID: 255. Type: not specified. libresource - Getting system resource information with standard APIs. Tuesday, 13 November 2018 14:55 (15 minutes). System resource information, like memory, network and device statistics, is crucial for system administrators to understand the inner workings of their systems, and is increasingly being used by applications to fine-tune performance in different environments. Getting system resource information on Linux is not a straightforward affair. The best way is to collect the information from procfs or sysfs, but getting such information from procfs or sysfs presents many challenges. Each time an application wants a piece of system resource information, it has to open a file, read the content, and then parse the content to get the actual information. If the application is running in a container, then even reading from procfs directly may give information about resources on the whole system instead of just the container. Libresource tries to fix a few of these problems by providing a standard library with a set of APIs through which we can get system resource information, e.g. memory, CPU, stat, networking, and security-related information. Libresource provides/may provide the following benefits: • Virtualization: In cases where an application is running in a virtualized environment using cgroups or namespaces, reading from the /proc and /sys file-systems might not give correct information, as these are not cgroup-aware. The library API will take care of this; e.g., if a process is running in a cgroup, then the library should provide information which is local to that cgroup. • Ease of use: Currently applications need to read this information mostly from the /proc and /sys file-systems.
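The libresource talk describes the procfs round trip that every query currently requires: open a file, read it, parse free-form text. A minimal sketch of that burden, parsing the `Key: value kB` layout of /proc/meminfo; the sample text here is illustrative, and a real caller would read the live file instead.

```python
# The procfs round trip libresource aims to replace: each query means
# opening a file, reading it, and parsing free-form text by hand.
# The parser handles /proc/meminfo's "Key:   value kB" layout.

def parse_meminfo(text):
    """Return a dict mapping /proc/meminfo keys to values in kB."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            info[key] = int(fields[0])   # numeric value; "kB" unit dropped
    return info

# Illustrative sample; live code would use open("/proc/meminfo").read().
sample = (
    "MemTotal:       16316412 kB\n"
    "MemFree:         8212340 kB\n"
    "MemAvailable:   12016524 kB"
)
mem = parse_meminfo(sample)
```

Note that inside a container these numbers would describe the whole host rather than the container, which is exactly the cgroup-awareness gap the talk says a library API should close.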
  • Introduction to Performance Management
    Chapter 1: Introduction to Performance Management. Application developers and system administrators face similar challenges in managing the performance of a computer system. Performance management starts with application design and development and migrates to administration and tuning of the deployed production system or systems. It is necessary to keep performance in mind at all stages in the development and deployment of a system and application. There is a definite overlap in the responsibilities of the developer and administrator. Sometimes determining where one ends and the other begins is more difficult when a single person or small group develops, administers and uses the application. This chapter will look at: • Application developer's perspective • System administrator's perspective • Total system resource perspective • Rules of performance tuning. 1.1 Application Developer's Perspective: The tasks of the application developer include: • Defining the application • Determining the specifications • Designing application components • Developing the application codes • Testing, tuning, and debugging • Deploying the system and application • Maintaining the system and application. 1.1.1 Defining the Application: The first step is to determine what the application is going to be. Initially, management may need to define the priorities of the development group. Surveys of user organizations may also be carried out. 1.1.2 Determining Application Specifications: Defining what the application will accomplish is necessary before any code is written. The users and developers should agree, in advance, on the particular features and/or functionality that the application will provide. Often, performance specifications are agreed upon at this time, and these are typically expressed in terms of user response time or system throughput measures.
  • Linux® Resource Administration Guide
    Linux® Resource Administration Guide 007–4413–004 CONTRIBUTORS Written by Terry Schultz Illustrated by Chris Wengelski Production by Karen Jacobson Engineering contributions by Jeremy Brown, Marlys Kohnke, Paul Jackson, John Hesterberg, Robin Holt, Kevin McMahon, Troy Miller, Dennis Parker, Sam Watters, and Todd Wyman COPYRIGHT © 2002–2004 Silicon Graphics, Inc. All rights reserved; provided portions may be copyright in third parties, as indicated elsewhere herein. No permission is granted to copy, distribute, or create derivative works from the contents of this electronic documentation in any manner, in whole or in part, without the prior written permission of Silicon Graphics, Inc. LIMITED RIGHTS LEGEND The electronic (software) version of this document was developed at private expense; if acquired under an agreement with the USA government or any contractor thereto, it is acquired as "commercial computer software" subject to the provisions of its applicable license agreement, as specified in (a) 48 CFR 12.212 of the FAR; or, if acquired for Department of Defense units, (b) 48 CFR 227-7202 of the DoD FAR Supplement; or sections succeeding thereto. Contractor/manufacturer is Silicon Graphics, Inc., 1600 Amphitheatre Pkwy 2E, Mountain View, CA 94043-1351. TRADEMARKS AND ATTRIBUTIONS Silicon Graphics, SGI, the SGI logo, and IRIX are registered trademarks and SGI Linux and SGI ProPack for Linux are trademarks of Silicon Graphics, Inc., in the United States and/or other countries worldwide. SGI Advanced Linux Environment 2.1 is based on Red Hat Linux Advanced Server 2.1 for the Itanium Processor, but is not sponsored by or endorsed by Red Hat, Inc.
  • SUSE Linux Enterprise Server 11 SP4 System Analysis and Tuning Guide System Analysis and Tuning Guide SUSE Linux Enterprise Server 11 SP4
    SUSE Linux Enterprise Server 11 SP4 System Analysis and Tuning Guide. Publication Date: September 24, 2021. SUSE LLC, 1800 South Novell Place, Provo, UT 84606, USA. https://documentation.suse.com Copyright © 2006–2021 SUSE LLC and contributors. All rights reserved. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled "GNU Free Documentation License". For SUSE trademarks, see http://www.suse.com/company/legal/ . All other third party trademarks are the property of their respective owners. A trademark symbol (®, ™ etc.) denotes a SUSE or Novell trademark; an asterisk (*) denotes a third party trademark. All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof. Contents: About This Guide (Available Documentation; Feedback; Documentation Conventions). Part I, Basics: 1 General Notes on System Tuning (Be Sure What Problem to Solve; Rule Out Common Problems; Finding the Bottleneck; Step-by-step Tuning). Part II, System Monitoring: 2 System Monitoring Utilities (Multi-Purpose Tools; vmstat).
  • IBM Power Systems Virtualization Operation Management for SAP Applications
    IBM Redbooks Redpaper REDP-5579-00: IBM Power Systems Virtualization Operation Management for SAP Applications. Dino Quintero, Enrico Joedecke, Katharina Probst, Andreas Schauberer. First Edition (March 2020). Note: Before using this information and the product it supports, read the information in "Notices". This edition applies to the following products: Red Hat Enterprise Linux 7.6, Red Hat Virtualization 4.2, SUSE Linux SLES 12 SP3, HMC V9 R1.920.0, Novalink 1.0.0.10, ipmitool V1.8.18. © Copyright International Business Machines Corporation 2020. All rights reserved. Note to U.S. Government Users Restricted Rights: Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp. Contents: Notices; Trademarks; Preface; Authors; Chapter 1. Introduction; Chapter 2. Server virtualization (Server and hypervisor options: Power Systems models that support PowerVM versus KVM; Overview of POWER8 and POWER9 processor-based hardware models; Comparison of PowerVM and KVM/RHV. Hypervisors: Introducing IBM PowerVM; Kernel-based virtual machine introduction; Resource overcommitment; Red Hat Virtualization); Chapter 3. IBM PowerVM management and operations (Shared processor logical partitions: Configuring a shared processor LPAR).
  • REST API Specifications
    IBM Hyper-Scale Manager Version 5.5.1: REST API Specifications. SC27-6440-06. IBM Confidential. Note: Before using this information and the product it supports, read the information in "Notices". Edition Notice: This edition applies to IBM Hyper-Scale Manager version 5.5.1 and to all subsequent releases and modifications, until otherwise indicated in a newer publication. © Copyright International Business Machines Corporation 2014, 2019. US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp. Contents: List of Tables; About this guide (Who should use this guide; Conventions used in this guide; Related information and publications; Getting information, help, and service; IBM Publications Center).
  • (RTOS) Is an Operating System (OS) Intended to Serve Real-Time Application Requests
    Real-time operating systems. 6.1 Introduction: A real-time operating system (RTOS) is an operating system (OS) intended to serve real-time application requests. It must be able to process data as it comes in, typically without buffering delays. Processing time requirements (including any OS delay) are measured in tenths of seconds or shorter. A key characteristic of an RTOS is the level of its consistency concerning the amount of time it takes to accept and complete an application's task; the variability is called jitter. A hard real-time operating system has less jitter than a soft real-time operating system. The chief design goal is not high throughput, but rather a guarantee of a soft or hard performance category. An RTOS that can usually or generally meet a deadline is a soft real-time OS, but if it can meet a deadline deterministically it is a hard real-time OS. An RTOS has an advanced algorithm for scheduling. Scheduler flexibility enables a wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a narrow set of applications. Key factors in a real-time OS are minimal interrupt latency and minimal thread-switching latency; a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time. 6.2 Design philosophies: The most common designs are: • Event-driven, which switches tasks only when an event of higher priority needs servicing; this is called preemptive priority, or priority scheduling.
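Jitter, as the RTOS excerpt defines it, is the variability in how long a nominally periodic task takes from one release to the next. A minimal sketch that makes the definition concrete by running a periodic loop and recording the worst deviation from its deadlines; the period and iteration count are arbitrary choices for illustration, and on a general-purpose OS the measured value will vary from run to run.

```python
# Minimal illustration of "jitter": the variability between when a
# periodic task should run and when it actually runs. A hard real-time
# OS bounds this deterministically; a general-purpose OS does not.
import time

def measure_jitter(period_s=0.01, iterations=50):
    """Run a periodic loop and return the worst observed deviation
    (in seconds) from the nominal release times."""
    deadline = time.monotonic()
    worst = 0.0
    for _ in range(iterations):
        deadline += period_s
        # Sleep until the next release time, as a periodic task would.
        remaining = deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
        lateness = time.monotonic() - deadline   # > 0 means we woke late
        worst = max(worst, abs(lateness))
    return worst

jitter = measure_jitter()
```

The gap between soft and hard real time is then the difference between this worst-case number usually being small and it being provably bounded.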
  • Common Information Model
    Distributed Management Task Force, Technical Note, August 2015. Copyright © 2015 Distributed Management Task Force, Inc. (DMTF). All rights reserved.
    Redfish – Simple, Modern and Secure Management for Multi-Vendor Cloud and Web-Based Infrastructures
    Introduction: In today's cloud- and web-based data center infrastructures, scalability is often achieved through large quantities of simple servers. This hyperscale usage model is drastically different than that of traditional enterprise platforms, and requires a new approach to management. As an open industry standard that meets scalability requirements in multi-vendor deployments, Redfish integrates easily with commonly used tools by specifying a RESTful interface and utilizing JSON and OData.
    Why a new interface? A variety of influences have resulted in the need for a new standard management interface. First, the market is shifting from traditional data center environments to scale-out solutions. With massive quantities of simple servers brought together to perform a set of tasks in a distributed fashion (as opposed to centrally located), the usage model is different from traditional enterprise environments. This set of customers demands a standards-based management interface that is consistent in heterogeneous, multi-vendor environments. However, functionality and homogeneous interfaces are lacking in scale-out management. IPMI, an older standard for out-of-band management, is limited to a "least common denominator" set of commands (e.g. Power On/Off/Reboot, temperature value, text console, etc.). As a result, customers have been forced to use a reduced set of functionality because vendor extensions are not common across all platforms. This set of new customers is increasingly developing their own tools for tight integration, often having to rely on in-band management software.
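The Redfish note's point about a RESTful interface carrying JSON can be made concrete with a sketch of the kind of payload a Redfish service returns for a system resource and how a client consumes it. The payload below is invented for illustration (the `@odata.id` path style follows Redfish convention, but this particular system and its values are assumptions); a real client would fetch it over HTTPS from the management controller rather than embed it as a string.

```python
# Sketch of consuming a Redfish-style JSON resource. The payload is an
# invented example; in practice a client would GET it from a URI such
# as /redfish/v1/Systems/<id> on the managed server's BMC.
import json

payload = """{
  "@odata.id": "/redfish/v1/Systems/1",
  "Name": "WebFrontEnd-07",
  "PowerState": "On",
  "Status": {"State": "Enabled", "Health": "OK"}
}"""

system = json.loads(payload)
# A monitoring tool in a heterogeneous fleet can evaluate the same
# fields regardless of which vendor built the server.
healthy = system["Status"]["Health"] == "OK" and system["PowerState"] == "On"
```

This vendor-neutral, schema-driven payload is what the note contrasts with IPMI's least-common-denominator command set.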
  • Processor Hardware Counter Statistics As a First-Class System Resource
    Processor Hardware Counter Statistics As A First-Class System Resource. Xiao Zhang, Sandhya Dwarkadas, Girts Folkmanis, Kai Shen. Department of Computer Science, University of Rochester.
    Abstract: Today's processors provide a rich source of statistical information on program execution characteristics through hardware counters. However, traditionally, operating system (OS) support for and utilization of the hardware counter statistics has been limited and ad hoc. In this paper, we make the case for direct OS management of hardware counter statistics. First, we show the utility of processor counter statistics in CPU scheduling (for improved performance and fairness) and in online workload modeling, both of which require online continuous statistics (as opposed to ad hoc infrequent uses). Second, we show that simultaneous system and user use of hardware counters is possible via time-division multiplexing. Finally, we highlight potential counter misuses to indicate that the OS should address potential security issues in utilizing processor counter statistics.
    […] which hides the differences and details of each hardware platform from the user. In this paper, we argue that processor hardware counters are a first-class resource, warranting general OS utilization and requiring direct OS management. Our discussion is within the context of the increasing ubiquity and variety of hardware resource-sharing multiprocessors. Examples are memory bus-sharing symmetric multiprocessors (SMPs), L2 cache-sharing chip multiprocessors (CMPs), and simultaneous multithreading (SMTs), where many hardware resources including even the processor counter registers are shared. Processor metrics can identify hardware resource contention on resource-sharing multiprocessors in addition to providing useful information on application execution behavior. We reinforce existing results to demonstrate multiple uses of counter statistics in an online continuous […]
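The abstract's claim that system and user consumers can share counters via time-division multiplexing can be illustrated with a toy model: a scarce counter register is handed to one consumer at a time, in slices. The round-robin policy and slice accounting below are assumptions for illustration, not the paper's mechanism.

```python
# Toy model of time-division multiplexing a scarce counter register
# between a "system" consumer (e.g. the OS scheduler) and a "user"
# consumer (e.g. a profiler). The round-robin policy is an illustrative
# assumption, not the paper's implementation.

def multiplex(total_slices, consumers):
    """Hand out counter-register time slices round-robin and return how
    many slices each consumer got exclusive use of the register."""
    observed = {name: 0 for name in consumers}
    for slice_idx in range(total_slices):
        owner = consumers[slice_idx % len(consumers)]
        observed[owner] += 1   # only the current owner samples the counter
    return observed

usage = multiplex(10, ["system", "user"])
```

Each consumer sees a sampled, proportionally scaled view of the counter stream, which is what lets the OS keep continuous statistics for scheduling while a profiler runs.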
  • Lenovo Xclarity Controller REST API Guide Note: Before Using This Information, Read the General Information in “Notices” on Page Ccxciii
    Lenovo XClarity Controller REST API Guide. Note: Before using this information, read the general information in "Notices". Sixth Edition (Mar 2021). © Copyright Lenovo 2017, 2021. LIMITED AND RESTRICTED RIGHTS NOTICE: If data or software is delivered pursuant to a General Services Administration (GSA) contract, use, reproduction, or disclosure is subject to restrictions set forth in Contract No. GS-35F-05925. Contents: Chapter 1. Introduction (Authentication Methods; Lenovo Extended Registries; Tools for Redfish); Chapter 2. Service Root (Resource ServiceRoot; GET – Service root properties); Chapter 3. Session Management (Resource SessionService; GET – Session management properties; Resource Session; GET – Session properties; POST – Create a session; DELETE – Delete a session); Chapter 4. Account Management (Resource AccountService; GET – Account management properties; PATCH – Update global account lockout properties and LDAP properties; Resource ManagerAccount; GET – Account properties; POST – Create an account; PATCH – Update userid/password/role/PasswordChangeRequired); GET – Collection of Network adapters; GET – Network adapter properties; Resource NetworkPort (GET – Collection of network ports; GET – Network port properties); Resource NetworkDeviceFunction (GET – Collection of Network device function; GET – Network device PCIe functions; PATCH – Update network device PCIe functions resource); Chapter 7. Power, thermal and redundancy (Resource Power; GET – Power management properties; PATCH – Update power management properties; Resource Power (Flex System Enterprise Chassis or Lenovo D2 Enclosure); GET – Power management properties; Resource Thermal; GET – Thermal management properties); Chapter 8. BMC Management (Resource Manager).
  • Dos Attack Prevention Using Rule-Based Sniffing Technique and Firewall in Cloud Computing
    E3S Web of Conferences 125, 21004 (2019). https://doi.org/10.1051/e3sconf/201912521004. ICENIS 2019. DoS Attack Prevention Using Rule-Based Sniffing Technique and Firewall in Cloud Computing. Kagiraneza Alexis Fidele (1,2*) and Agus Hartanto (2). (1) Data Entry and Update Taxpayer's Registry, Rwanda Revenue Authority (RRA), Kigali, Rwanda. (2) Master of Information System, School of Postgraduate Studies, Diponegoro University, Semarang, Indonesia. Abstract: Nowadays, we are entering an era where the internet has become a necessary infrastructure and support that can be applied as a means of regular communication and data service. In these services, cloud-based servers have an essential role, as they can serve numerous types of devices that are interconnected with several protocols. Unfortunately, internet cloud servers have become the main target of attacks such as Denial of Service (DoS) and Distributed Denial of Service (DDoS). These attacks involve illegal access that can interfere with service functions. Sniffing techniques are typically practiced by hackers and crackers to tap data that passes through the internet network. In this paper, the sniffing technique aims to identify packets that are moving across the network. This technique distinguishes packets on the routers or bridges through a sniffer tool known as Snort, connected to a database containing attack patterns. If the sniffing system encounters strange patterns and recognizes them as attacks, it will notify the firewall to filter out the attacker's original Internet Protocol (IP) address. Then, communication from the attacker's host to the target will be discontinued. Consequently, the identified attack activity will stop working, and the service will continue to run.