I/O Calibration Performance and Tuning with Linux on System z


David Simpson - Oracle Technical Specialist, IBM ([email protected])
© 2017 IBM Corporation

Copyright and Trademark Information
For IBM: http://www.ibm.com/legal/us/en/copytrade.shtml
For Oracle: http://www.oracle.com/us/legal/index.html
Any performance results/observations in this presentation are purely for education and planning purposes. No test results should be construed as indicative of any particular customer workload or benchmark result.

IBM z13 - 5 I/O Drawers with up to 32 I/O Cards per Drawer
[Hardware diagram]

Oracle Cache, Memory and I/O Access
[Architecture diagram]

FICON (ECKD) / FCP Comparison (1)
[Chart: normalized transactional throughput, 0-160%, for FICON (20 aliases) vs. FCP (rr_min_io=100) at 4 CPUs and 6 CPUs]
Source: Juergen Doelle, http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102234

FICON (ECKD) / FCP Comparison (2)
[Chart: normalized CPU cost per transaction, 0-120%, for FICON (20 aliases) vs. FCP (rr_min_io=100) at 4 CPUs and 6 CPUs]
Source: Juergen Doelle, http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102234

HyperPAV (1)
HyperPAV allows multiple I/O operations on the same subchannel. This is very important for random-access workloads with relatively small data transfers. In our internal tests, 10-20 HyperPAV aliases per LCU showed the best performance gains.
Recommendation:
– Enable HyperPAV whenever using ECKD devices.

    # lsdasd -u
    0.0.75bc dasda IBM.750000000NA461.1555.bc.000000000000eac50000000000000000
    0.0.75ca dasdk IBM.750000000NA461.1555.ca.000000000000eac50000000000000000
    0.0.75d0 alias IBM.750000000NA461.1555.xx.00000000000000000000000000000000
    0.0.75d1 alias IBM.750000000NA461.1555.xx.00000000000000000000000000000000

HyperPAV (2)
[Chart: ECKD devices, scaling HyperPAV aliases. Normalized transactional throughput (0-160%) and total disk I/O, read + write, in MiB/s, for 0, 10, 20 and 40 alias devices]

FICON (ECKD) / FCP Comparison (3)
FCP/SCSI SAN storage offers better throughput; ECKD/FICON SAN storage uses less CPU per transaction. You have to tune both environments.
Recommendation: it depends.

ECKD/DASD UDEV Rule - Device Permission Change
Distribution-specific sample rules for Red Hat 6 / SUSE 11, Red Hat 7.1+, and SUSE 12 SP1+.

Linux Multipathing Recommendations
/etc/multipath.conf parameters can impact your FCP/SCSI performance, particularly with I/O-bound workloads. Check with your OS distributor and storage vendors for recommended settings:

    rr_min_io_rq 1                      # number of I/Os per path; newer kernels (SLES 11 SP3+, Red Hat 6.x+)
    # rr_min_io                         # use for older distros (Red Hat 5.x, SLES 11 SP1 and older)
    path_selector "service-time 0"      # default for SLES 11 SP3+, Red Hat 6.7+
    # path_selector "round-robin 0"     # round-robin for SLES 11 SP2+ and Red Hat 6.x
    path_grouping_policy "multibus"     # SLES 11 SP1+ and Red Hat 6.x+
    # path_grouping_policy "failover"   # SLES 10, Red Hat 5.x

SUSE 12 SP1+ multipath.conf Recommendations
SUSE 12: see the recommended /etc/multipath.conf settings in the storage administration guide below. To put changes into effect dynamically: systemctl restart multipathd.service
Source: https://www.suse.com/documentation/sles-12/pdfdoc/stor_admin/stor_admin.pdf
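
As a rough illustration only, a defaults stanza for SUSE 12 assembled from the parameters recommended on the multipathing slide above might look like the following. The failback and no_path_retry values are assumptions added for completeness, not taken from this deck; confirm everything against the linked SUSE guide and your storage vendor.

    defaults {
        # one I/O request per path before switching (newer kernels)
        rr_min_io_rq         1
        # spread I/O across all available paths
        path_grouping_policy multibus
        # prefer the path with the shortest estimated service time
        path_selector        "service-time 0"
        # illustrative assumptions; check vendor guidance
        failback             immediate
        no_path_retry        fail
    }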

Red Hat 7: Multipath Recommendations
Red Hat 7: see the recommended /etc/multipath.conf settings in the DM Multipath guide below. To put changes into effect dynamically: systemctl restart multipathd.service
Source: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/pdf/DM_Multipath/Red_Hat_Enterprise_Linux-7-DM_Multipath-en-US.pdf

I/O Kernel Scheduler
The Linux 2.6 kernel offers a choice of four different I/O schedulers:
– Noop scheduler (noop)
– Deadline scheduler (deadline)
– Anticipatory scheduler (as)
– Complete Fair Queuing scheduler (cfq), the default for a lot of kernels
The "cfq" default is designed to optimize access to physical disks. Check the active scheduler (shown in brackets) in /sys/block/<device>/queue/scheduler:

    noop anticipatory [deadline] cfq

The default is configurable by setting the "elevator=..." boot parameter in /etc/zipl.conf.
Recommendation: deadline or noop for Oracle database LUNs.

Changing I/O Scheduler with UDEV Rule
I/O settings you may want to consider adjusting, based on storage vendor recommendations (a verification sketch follows the file system recommendations below):

    scheduler         I/O scheduler (noop or deadline)
    nr_requests       number of I/O requests; default 128 (consider 1024 for flash storage devices)
    rotational        default 1 for spinning disk; 0 for flash
    nomerges          default 0; 1 for flash
    max_sectors_kb    default 1024; 4096 for flash
    rq_affinity       default 1; 2 for flash
    add_random        default 1; 0 for flash

Sample UDEV rules in /etc/udev/rules.d:

/etc/udev/rules.d/12-dm-permissions.rules (multipath devices):

    ENV{DM_NAME}=="ASM*",OWNER:="oracle",GROUP:="dba",MODE:="660",RUN+="/bin/sh -c 'echo deadline > /sys/$devpath/queue/scheduler'"

/etc/udev/rules.d/99-udev-oracle.rules (single-path devices):

    KERNEL=="sda",OWNER="oracle",GROUP="dba",MODE="660",RUN+="/bin/sh -c 'echo deadline > /sys/$devpath/queue/scheduler'"

Reloading UDEV rules (SLES 11 SP3 / Red Hat 6):

    udevadm control --reload-rules
    udevadm trigger

Setting Up Disk FCP/SCSI Devices for Oracle
/etc/multipath.conf:

    multipath {
        wwid  20020c2400d1221a8
        alias ASMFLASH0D
    }

    # multipath -ll ASMFLASH0D
    ASMFLASH0D (20020c2400d1221a8) dm-21 IBM, FlashSystem
    size=256G features='0' hwhandler='0' wp=rw
    `-+- policy='round-robin 0' prio=1 status=active
      |- 5:0:1:13 sdbv 68:144  active ready running
      |- 7:0:1:13 sdep 129:16  active ready running
      `- 7:0:0:13 sdfd 129:240 active ready running

    [root@zlnxl2 mapper]# ls -lato /dev/mapper/ASMFLASH0D
    lrwxrwxrwx 1 root 8 May 13 16:46 ASMFLASH0D -> ../dm-21
    [root@zlnxl2 rules.d]# ls -la /dev/dm-21
    brw-rw---- 1 root disk 253, 21 May 13 14:54 /dev/dm-21
    [root@zlnxl2 rules.d]# udevadm trigger
    [root@zlnxl2 rules.d]# ls -la /dev/dm-21
    brw-rw---- 1 oracle dba 253, 21 May 13 16:46 /dev/dm-21

ASM or LVM File Systems?
LVM: Logical Volume Manager in Linux.
ASM: Automatic Storage Management provided by Oracle. Oracle RAC One Node and Oracle RAC require ASM.

    LVM pro: direct control over settings and layout; can choose the file system
    LVM con: complex setup
    ASM pro: automated, out-of-the-box environment; very good integration with Oracle
    ASM con: RMAN required for backup

Overall recommendation: ASM. Don't combine both!

File System Recommendations
See "Supported and Recommended File Systems on Linux" (MOS Doc ID 236826.1).
– SLES 11 SP4+: ext3 or xfs; SUSE measured xfs for database files at a 24.7% increase, and a 12.9% increase for reads (non-ASM)
– Red Hat 6.x: ext4 for database files
– Red Hat 7.x: XFS is now recommended
Note that XFS makes it difficult to reduce the size of an LVM logical volume.
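
Whichever file system you choose, it is worth confirming that the UDEV rules above actually reached the device queue. A minimal shell sketch, assuming the dm-21 multipath device from the earlier example; the attribute names are the standard sysfs queue attributes listed above:

    # Print the queue settings tuned earlier for one device.
    # The bracketed entry in 'scheduler' marks the active elevator.
    for attr in scheduler nr_requests rotational nomerges \
                max_sectors_kb rq_affinity add_random; do
        printf '%-16s %s\n' "$attr:" "$(cat /sys/block/dm-21/queue/$attr)"
    done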

Database Files on Filesystem: Disable Read-Ahead
Oracle parameter for file systems: filesystemio_options=setall provides asynchronous and direct I/O (avoids the Linux file system cache).
Reduce Linux read-ahead for LVM file systems:

    lvchange -r none <lv device name>

Customer Scenario: Application Performance Problem
Several systems were consolidated onto one system. After the migration, transaction times exceeded 6 seconds at 100 transactions per second (normally sub-2 seconds).

Oracle AWR Report Analysis (1)
[AWR report excerpt]

Oracle AWR Report Analysis (2)
SQL ID 4ztz048yfq32s:

    SELECT TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD HH24:MI:SS TZD') AS curr_timestamp,
           COUNT(username) AS failed_count,
           TO_CHAR(MIN(timestamp), 'yyyy-mm-dd hh24:mi:ss') AS first_occur_time,
           TO_CHAR(MAX(timestamp), 'yyyy-mm-dd hh24:mi:ss') AS last_occur_time
      FROM sys.dba_audit_session
     WHERE returncode != 0
       AND timestamp >= current_timestamp - TO_DSINTERVAL('0 0:30:00')

Oracle DBA_AUDIT_SESSION Analysis
Query 4ztz048yfq32s is an audit check: when there are many failed logon attempts, an alert is generated. See "Slow Performance Of DBA_AUDIT_SESSION Query From 'Failed logins' Metric in Enterprise Manager" (Doc ID 1178736.1) and Bug 7633167, "SQL PERFORMANCE ISSUE IN DBA_AUDIT_SESSION".
Note 73408.1, "How to Truncate, Delete, or Purge Rows from the Audit Trail Table AUD$": be aware that it may be a company requirement to keep a backup of the rows in this table. Ensure that a backup is taken before truncating or deleting, as discussed in that note.
Note 731908.1, "New Feature DBMS_AUDIT_MGMT To Manage And Purge Audit Information" (a usage sketch follows at the end of this section).
The bug is fixed in the 12.1.0.5 database plug-in. See Note 1388143.1, "How to Deploy the Latest Database Plugin to the OMS and the Agent in 12C Cloud Control", for instructions on deploying the latest database plug-in, and apply the latest DB plug-in PSU patches: Patch 17998764 (OMS-side); Patch 18039625 and Patch 18012965 (agent-side).

Oracle SGA_TARGET Analysis

    Finding 2: Undersized SGA
    Impact is 34.3 active sessions, 56.94% of total activity.
    ---------------------------------------------------------
    The SGA was inadequately sized, causing additional I/O or hard parses.
    The value of parameter "sga_target" was "33792 M" during the analysis period.

    Recommendation 1: Database Configuration
    Estimated benefit is 34.3 active sessions, 56.94% of total activity.
    --------------------------------------------------------------------
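
Following the DBMS_AUDIT_MGMT approach from Note 731908.1 above, here is a hedged sketch of a standard audit-trail purge. The 30-day retention and 24-hour cleanup interval are illustrative assumptions, and AUD$ should be backed up first, as Note 73408.1 discusses.

    -- Illustrative sketch only: purge AUD$ rows older than 30 days.
    -- Assumes cleanup has not been initialized before; back up AUD$ first.
    BEGIN
      DBMS_AUDIT_MGMT.INIT_CLEANUP(
        audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
        default_cleanup_interval => 24 /* hours, illustrative */);

      -- only rows older than this timestamp are purged below
      DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
        audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
        last_archive_time => SYSTIMESTAMP - INTERVAL '30' DAY);

      DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
        audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
        use_last_arch_timestamp => TRUE);
    END;
    /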