z/OS Best Practices: Large Stand-Alone Dump Handling - Version 4


Update: 07/28/2014

Nobody ever wants to take a stand-alone dump (SADMP) of a z/OS system. Nevertheless, when your z/OS system is not responding due to an error in a critical system component, or it enters a wait state, a stand-alone dump is your only means of capturing sufficient diagnostic data to allow IBM Service to diagnose why your system entered the condition and to recommend a fix that prevents the issue from happening again. Therefore, you need to plan for the dump to be taken and processed as quickly as possible.

Several z/OS releases have made improvements in taking and processing large stand-alone dumps. The system allows you to define a multiple-volume dump data set that simulates "striping," writing blocks of data in I/O-priority order to each of the volumes defined in the data set. This greatly improves the elapsed time to capture a stand-alone dump. Stand-alone dump captures the page-frame table space and uses it as a pre-built dump index that relates to each absolute storage page, allowing IPCS to map the data ranges in the dump and improving the time to handle typical IPCS dump analysis requests for virtual storage access. Other enhancements allow stand-alone dump to be initiated from the operator's console with a VARY command. z/OS functions also improve the handling of large stand-alone dumps, including the ability to subset a dump using the IPCS COPYDUMP command, allowing you to send a dump of the core system components to IBM while the rest of the dump is still being processed and transferred. Most recently, in z/OS V1R13, the Problem Documentation Upload Utility allows transmission of multi-gigabyte files much more quickly, and encrypts the data in the same process.

This paper describes a set of "best practices" for ensuring the stand-alone dump successfully captures the information needed by IBM Service, optimizing stand-alone dump data capture, and optimizing problem analysis time. In particular, the following areas are described:

- Stand-alone dump data set definition and placement
- IPCS performance considerations
- Preparing documentation to be analyzed
- Sending documentation to IBM Support
- Testing your stand-alone dump setup

This paper replaces the stand-alone dump best practices information previously published as "z/OS Best Practices: Large Stand-Alone Dump Handling Version 2". The URL remains the same: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD103286

This paper also incorporates information described in "Every picture tells a story: Best practices for stand-alone dump", published in the February 2007 issue of the z/OS Hot Topics Newsletter. You can access all issues of the Newsletter at: www.ibm.com/servers/eserver/zseries/zos/bkserv/hot_topics.html

This paper assumes the installation is operating at least at the z/OS V1R13 level. Related enhancements available on lower z/OS levels are noted where appropriate. The planning steps are highlighted, followed by background information.

1. Plan a multi-volume stand-alone dump data set, being sure to place each part of the data set on a separate DASD volume and on a separate control unit
The best dump performance is realized when the dump is taken to a multiple-volume DASD stand-alone dump data set. Stand-alone dump exploits multiple, independent DASD volume paths to accelerate data recording. The dump data set is actually spread across all of the specified volumes, not written to each volume in succession; stand-alone dump processing does not treat a multi-volume DASD dump data set as multiple, single data sets. (The creation of the multi-volume data set is covered in step 2.)

A key to the performance of stand-alone dump is the rate at which the data is written to DASD. Modern DASD uses cache in the control unit to improve the performance of write operations. Placing the multi-volume stand-alone dump data set across logical subsystems (LSSs) is strongly recommended: it avoids contention in the I/O subsystem (channels, director ports, cache, and control unit back store), thereby sustaining DASD throughput for the duration of the stand-alone dump process. WSC Flash 10143 demonstrated significant performance improvements from writing the data to a multi-volume DASD stand-alone dump data set, and to specific types of DASD. For more information, review the IBM performance analysis reports at the following URL: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10143

Therefore, when defining the placement of a multi-volume DASD stand-alone dump data set, the following best practices apply:

1. Configure each volume on a separate logical subsystem (LSS) to ensure maximum parallel operation. The best performance of stand-alone dump is attained when the multi-volume DASD data sets have the most separation, that is, separate physical control units and separate channel paths.

2. Configure, if possible, the control units to minimize the occurrence of other activity at the time of the stand-alone dump. For example, DB2 database recovery that writes to a local database volume behind the same control unit as a stand-alone dump volume, from an LPAR that is still running, will slow the dump and may lengthen the elapsed time needed to restart an alternative DB2 on a surviving LPAR.

3. Use FICON-attached DASD volumes where possible; they typically yield the best data transfer rates. FICON channels deliver much better performance than ESCON channels.

4. For the best overall performance, dedicate a minimum of 4-5 DASD volumes to stand-alone dump, plus any additional capacity needed to contain the dump size, up to a maximum of 32 volumes. See the performance analysis report cited above for more information. IBM test results on z/OS V1R11 demonstrated benefit from using up to 16 volumes spread over multiple channels; the test environment wrote a stand-alone dump at a rate of 1.2 to 1.5 Gb/second. However, you can configure up to 32 volumes in the multiple-volume data set to handle larger capacity if needed.

5. Do not define your stand-alone dump volumes as targets of a HyperSwap. You could lose a completed stand-alone dump, or a dump could be interrupted in the middle of the dump process.

6. Starting with the RSM Enablement Offering for z/OS V1R13, you can use storage class memory (SCM), available on IBM zEnterprise EC12 servers, for paging. Doing so provides greatly improved paging performance compared to DASD, and provides substantial improvements in SVC dump and stand-alone dump capture time. The improvement comes from capturing paged-out data from SCM faster than from DASD page data sets.

While not recommended by IBM, stand-alone dump can also write to a fast tape subsystem. When the stand-alone dump is directed to a tape drive, it uses only a single device and does not prompt for another device, so you cannot switch back to a DASD device for that stand-alone dump.

2. Create the multi-volume stand-alone dump data set

Define a stand-alone dump data set using the AMDSADDD utility.¹ Specify a volume list (VOLLIST) in AMDSADDD to designate the VOLSERs of the DASD volumes making up the data set. A multi-volume data set is allocated using the specified list of volumes, and the utility uses the device number of the first volume to identify the data set to stand-alone dump. To achieve parallelism when writing the dump, each volume should be on a different LSS. See "IBM System Test Example," at the end of this paper, for a sample job using AMDSADDD to generate the stand-alone dump data set.

¹ See MVS Diagnosis: Tools and Service Aids (GA22-7589), Chapter 4, for information on the AMDSADDD utility and the other stand-alone dump functions discussed in this paper.
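For illustration, a batch job along the following lines invokes AMDSADDD under the TSO/E terminal monitor program. This is a minimal sketch, not the "IBM System Test Example" from this paper: the data set name SYS1.SADMP, the volume serials SAD001 through SAD004, and the cylinder count are placeholder values, and the exact argument syntax should be verified against MVS Diagnosis: Tools and Service Aids for your z/OS level.

  //SADMPDD  JOB (ACCT),'DEFINE SADMP DS',MSGCLASS=H
  //* Invoke the AMDSADDD REXX exec from SYS1.SBLSCLI0 to define
  //* and catalog a four-volume stand-alone dump data set.
  //* Volume serials, data set name, and space (in cylinders)
  //* are illustrative placeholders.
  //DEFINE   EXEC PGM=IKJEFT01,REGION=64M
  //SYSTSPRT DD  SYSOUT=*
  //SYSTSIN  DD  *
   EX 'SYS1.SBLSCLI0(AMDSADDD)' 'DEFINE +
    VOLLIST(SAD001,SAD002,SAD003,SAD004)(SYS1.SADMP) +
    3390 4000 CATALOG'
  /*

After the job completes, verify that the data set is cataloged and spans all of the volumes you listed, since stand-alone dump is pointed at the device number of the first volume.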
Be sure to catalog your stand-alone dump data set to prevent the possibility of accessing the "wrong version" of the data set when using IPCS COPYDUMP. (Step 4 covers information about IPCS COPYDUMP.)

Alternatively, IPCS offers a "SADMP Dump Data Set Utility," available from the IPCS Utility menu. From the data set utility panel, specify whether to define, clear, or reallocate the stand-alone dump data set, along with its name and the volume serial numbers for the stand-alone dump data striping. The panel invokes the stand-alone dump allocation program to define the requested data set, and confirms the volume names, device type, and allocated space.

3. Define a dump directory with the right attributes to facilitate post-processing of large stand-alone dumps and large SVC dumps

Use IPCS to consolidate and/or extract ASIDs from the dump, as well as to format and analyze it. IPCS uses a dump directory to maintain information about the layout and content of the dump.
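As an illustrative sketch, the IPCS-supplied BLSCDDIR CLIST can create a dump directory; it can be run in batch in the same way as the AMDSADDD example above. The directory data set name IPCS.SADMP.DDIR, the volume WORK01, and the record count are assumed placeholder values; a generous RECORDS value reflects this step's goal of sizing the directory for large dumps, and the keywords should be confirmed against your level of the IPCS documentation.

  //DDIR     JOB (ACCT),'IPCS DUMP DIR',MSGCLASS=H
  //* Create an IPCS dump directory sized for large dumps using
  //* the IPCS-supplied BLSCDDIR CLIST. Directory data set name,
  //* volume, and record count are illustrative placeholders.
  //CREATE   EXEC PGM=IKJEFT01,REGION=64M
  //SYSTSPRT DD  SYSOUT=*
  //SYSTSIN  DD  *
   EX 'SYS1.SBLSCLI0(BLSCDDIR)' +
    'DSNAME(IPCS.SADMP.DDIR) RECORDS(90000) VOLUME(WORK01)'
  /*

Sizing the directory generously up front avoids having to reallocate it while post-processing a multi-gigabyte stand-alone dump.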