sys_diag User's Guide

Release 8.4

Note: This extended MS Word version is based upon the core self-extracting README_sys_diag.txt file.

(Copyright © 1999-2017 by Todd A. Jobson)

_________________________________
 Outline of this document :
_________________________________

 1.0   sys_diag v.8.3g Overview
 2.0   HTML Report - Outline & Interpretation
 3.0   Command Line Arguments & Available Parameters
 4.0   Common Command Line Usage Examples
 5.0   Capturing sys_diag Command Line Output
 6.0   Executing sys_diag via Crontab Entries
 7.0   Reducing System Overhead during Data Capture
 8.0   Performance Data : Threshold Analysis and Baselines
 9.0   Creating / Viewing Graphs of Performance Data
10.0   sys_diag DIRECTORIES and DATA FILE Descriptions
11.0   Sample sys_diag_BASELING.cfg File
12.0   Sample Command Line Output
13.0   Downloads, Resources and Feedback

----------------------------------------------------------------------------------------------

_______________________________
 1.0  sys_diag v.8.3 Overview :
_______________________________

BACKGROUND :

Over the course of the past ~15+ years as a former SunPS (now Oracle) field consulting Architect, I developed sys_diag in my spare time to increase productivity and efficiency when working with Solaris systems : system configuration snapshots, workload characterization, historical performance trending, performance analysis, POC (Proof Of Concept) load testing, bottleneck root-cause identification, capacity planning (for a stand-alone system or as part of a larger TCO or consolidation analysis), and current / future state architectural assessment.

By releasing this and prior versions for public use under copyright, I hope that others can reap the many time-saving benefits of this utility and use sys_diag to streamline whatever admin, analysis, or assessment activities are required of them. It has been an invaluable asset for characterizing, diagnosing, and analyzing workloads across hundreds of systems within many of the top Fortune 100 datacenters.

As would be expected, the obligations, support, and implications of use are the sole responsibility of the user, as documented within the header of sys_diag. As a standard "best practice", this (or any new workload introduced to a system) should always be tested first in a non-production environment for validation and familiarity.

INTRODUCTION :

sys_diag is a Solaris utility (ksh / awk / javascript) that can perform several functions, among them : system configuration "snapshot" and reporting (detailed or high-level), alongside performance data capture over a specified duration or as a point-in-time PEAK PERIOD "snapshot". Most significantly, after the data is captured, sys_diag automatically correlates, analyzes, and reports findings / exceptions, based upon configurable thresholds that can easily be changed within the script header. The output provides a single .html report with a color-coded "dashboard" that includes auto-generated chart summaries of findings, alongside system configuration and snapshot details.

Each run of sys_diag creates a local sub-directory where all data files captured or created (analysis, reports, generated graphs) are stored. Upon completion, sys_diag creates a compressed archive as a single .tar.Z file for examination externally.
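For example, once a run completes, the archive can be copied to another machine and unpacked there for offline review using standard Solaris commands. This is only a sketch; the archive file name below is illustrative (actual archive names reflect the host, date, and time of the run) :

    # Unpack a sys_diag archive for offline review (file name is illustrative) :
    uncompress sysd_hostname_date_time.tar.Z
    tar xvf sysd_hostname_date_time.tar

    # ... or equivalently, in a single step :
    zcat sysd_hostname_date_time.tar.Z | tar xvf -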
The report is provided as a single file, in both .html and .txt formats, for easy review, without trudging through several subdirectories of separate files (each potentially thousands of lines long), and without spending hours or days manually correlating and reviewing them before manually generating the assessment report and/or any graphs needed. For complicated configurations that require diagnosis, this tool can literally save you a week of analysis.

sys_diag has previously been run on Solaris 2.x and later platforms, and today should be capable of running on any x86 or SPARC Solaris 8+ system. Version 8.3 adds reporting of newer Solaris 11.3 capabilities (zones, LDOMs / OVM, SRM, ZFS pools, fmd, ipfilter / ipnat, link aggregation, DTrace probing, etc.).

Beyond the Solaris configuration reporting commands (system / storage HW config, OS config, kernel tunables, network / IPMP / trunking config, ZFS / FS / VM / NFS, users / groups, security, name services, packages, patches, errors / warnings, and system / network performance metrics), sys_diag also captures relevant application configuration details, such as Sun Cluster 2.x / 3.x, Veritas VCS / VM / vxfs, Oracle (.ora / RAC / CRS / listener ..), and MySQL, along with detailed capture of other key configuration files (and tracking of changes to them via -t).

Of all the capabilities, the greatest benefit is being able to run this single ksh script on a system and then perform the analysis offline, elsewhere, from one single report file. Since sys_diag is a ksh script (using awk for post-processing the data and javascript for dynamic HTML / chart generation), no packages need to be installed; it uses only standard built-in Solaris utilities, allowing for the widest range of support.

Version 8.3g of sys_diag offers built-in dynamic HTML generation, with javascript dashboard charts embedded in the report as well as stand-alone .gr.html files for each individual chart. Additionally, the vmstat, iostat, and netstat data is exported in an import-friendly text format (.gr.txt) for creating custom graphs within OpenOffice or Excel.

Regarding system overhead, sys_diag runs all commands serially (waiting for each command to complete before running the next), impacting system performance roughly the same as an admin typing these commands one at a time on a console. The only exception is the background vmstat / mpstat / iostat / netstat (-g) gathering of performance metrics at the specified sampling interval (-I) and total duration (-T), which generally has negligible overhead on a system. *See Section 7 for examples of reducing overhead*

Workflow (order of execution) of a typical sys_diag run (with arguments "-g -I1 -l") :

This example uses a 1 second sampling interval (-I) and the DEFAULT Total duration (-T) of 5 minutes (-T 300) to gather performance data (-g) and create a long (-l) configuration report. An equivalent command line is shown after this list.

*All commands are run serially, except the Background Collection*

- Extract README_sys_diag.txt
- Initiate the beginning BME (0=Begin / 1=MidPt / 2=EndPt) Profiling SNAPSHOT (#0) [IF NOT -x, & is -v|-V] (to profile the system serially with prstat, ps, iostat, netstat, zpool, tcpstat, .. *before any background collection is started*)
- Initiate BACKGROUND Data Collection (vm / mp / io / netstat ..) at ("-I x") x second intervals, for the default total duration of 300 seconds (5 mins), or t total seconds via "-T t"
- WAIT until the MidPoint of Background Data Collection
- Initiate the BME MidPoint Profiling SNAPSHOT (#1), *ONLY IF > 3 mins of Total duration remains, & Not Excluded via "-x", & using Deep Verbosity via "-V"*
- WAIT for Background Data Collection to Complete
- Initiate the BME EndPoint Profiling SNAPSHOT (#2), *ONLY IF Not Excluded via "-x", & using verbosity via "-v|-V"*
- Capture System Configuration Data for the report (following the TOC / Table of Contents outline)
- Post-Process the Performance data gathered to identify exceptions
- Generate both the embedded HTML javascript charts and the stand-alone .gr.html and .gr.txt files (for custom chart creation via import into Excel / OpenOffice)
- Generate the complete .html report
- Identify the Data Directory Path and the HTML Report File link
- Create a compressed tar.Z archive of the Data Directory (all files + sys_diag & the perflog)

* See Section 12 for complete sample command line output from running sys_diag *
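As a sketch, the run described above corresponds to a command line along these lines (the script is assumed to reside in the current working directory; only flags documented above are used) :

    # Gather performance data (-g) at a 1-second sampling interval (-I1) for the
    # default 5-minute duration, and create a long (-l) configuration report :
    ./sys_diag -g -I1 -l

    # The same run, but sampling for an explicit 10-minute total duration (-T 600) :
    ./sys_diag -g -I1 -T 600 -l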
sys_diag is generally run from the same directory (e.g. /var/tmp), one that has enough available disk space for storing the data directories and archives (however, the data directory and all of its files can be removed after each run using -C). When sys_diag is always run from the same directory, a single sys_diag_perflog.out file is appended to on every run, building a chronology of system performance that can be referred to later.

NOTE: ** Use Chrome or Firefox as the recommended browsers ** (for best viewing, open the report full screen)

______________________________________________________________________________
 2.0  HTML Report - Outline and Interpretation
______________________________________________________________________________

The final report output that sys_diag produces comes in 2 formats : .out (text) or sysd_hostname_date_time.out.html (HTML / javascript). Both reports include a "Header" section that summarizes basic system details and the characteristics of the sys_diag snapshot captured.

The .html report additionally includes the performance analysis "dashboard", where data is summarized and color-coded within separate sub-system sections : CPU / Kernel, Memory, Storage IO, and Network. Each dashboard section presents details of sub-system "health", identification of flagged exceptions, embedded charts, and links to the detailed "Analysis" of the captured data (how / why / where exceptions were flagged), plus links to the related system details (the data behind the analysis and findings).

Beyond the dashboard, you will find the configuration report Table Of Contents, which categorizes and links all facets of system configuration within 25 sections (bringing you directly to the relevant data within those sections).

2.1  HTML Report (Sample) Header

The following is a sample .html report header from output generated within the global zone of a Solaris 11.3 host named "Newton-S11.3x6", running on an x86 server within a VirtualBox