AIX 5L Practical Performance Tools and Tuning Guide
Updated performance information for IBM Eserver p5 and AIX 5L V5.3
New tools for Eserver p5 with SMT and Micro-Partitioning
Practical performance problem determination examples
Kumiko Hayashi, Kangkook Ji, Octavian Lascu, Hennie Pienaar, Susan Schreitmueller, Tina Tarquinio, James Thompson
ibm.com/redbooks
International Technical Support Organization
AIX 5L Practical Performance Tools and Tuning Guide
April 2005
SG24-6478-00
Note: Before using this information and the product it supports, read the information in “Notices” on page ix.
First Edition (April 2005)
This edition applies to Version 5, Release 3, of AIX 5L (product number 5765-G03).
© Copyright International Business Machines Corporation 2005. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents
Notices
    Trademarks

Preface
    The team that wrote this redbook
    Become a published author
    Comments welcome

Part 1. Introduction

Chapter 1. Performance overview
    1.1 Performance expectations
        1.1.1 System workload
        1.1.2 Performance objectives
        1.1.3 Program execution model
        1.1.4 System tuning
    1.2 Introduction to the performance tuning process
        1.2.1 Performance management phases

Chapter 2. Performance analysis and tuning
    2.1 CPU performance
        2.1.1 Processes and threads
        2.1.2 SMP performance
        2.1.3 Initial advice for monitoring CPU
    2.2 Memory overview
        2.2.1 Virtual memory manager (VMM) overview
        2.2.2 Paging space overview
    2.3 Disk I/O performance
        2.3.1 Initial advice
        2.3.2 Disk subsystem design approach
        2.3.3 Bandwidth-related performance considerations
        2.3.4 Disk design
        2.3.5 Logical Volume Manager concepts
    2.4 Network performance
        2.4.1 Initial advice
        2.4.2 TCP/IP protocol
        2.4.3 Network tunables

Part 2. Performance tools

Chapter 3. General performance monitoring tools
    3.1 The topas command
        3.1.1 Topas syntax
        3.1.2 Basic topas output
        3.1.3 Partition statistics
    3.2 The jtopas utility
        3.2.1 The jtopas configuration file
        3.2.2 The info section for the jtopas tool
        3.2.3 The jtopas consoles
        3.2.4 The jtopas playback tool
    3.3 The perfpmr utility
        3.3.1 Information about measurement and sampling
        3.3.2 Building and submitting a test case
        3.3.3 Examples for perfpmr
    3.4 Performance Diagnostic Tool (PDT)
        3.4.1 Examples for PDT
        3.4.2 Using reports generated by PDT
        3.4.3 Running PDT collection manually
    3.5 The curt command
        3.5.1 Information about measurement and sampling
        3.5.2 Examples for curt
        3.5.3 Overview of the reports generated by curt
        3.5.4 The default report
    3.6 The splat command
        3.6.1 splat syntax
        3.6.2 Information about measurement and sampling
        3.6.3 The execution, trace, and analysis intervals
        3.6.4 Trace discontinuities
        3.6.5 Address-to-name resolution in splat
        3.6.6 splat examples
    3.7 The trace, trcnm, and trcrpt commands
        3.7.1 The trace command
        3.7.2 Information about measurement and sampling
        3.7.3 How to start and stop trace
        3.7.4 Running trace interactively
        3.7.5 Running trace asynchronously
        3.7.6 Running trace on an entire system for 10 seconds
        3.7.7 Tracing a command
        3.7.8 Tracing using one set of buffers per CPU
        3.7.9 Examples for trace
        3.7.10 The trcnm command
        3.7.11 Examples for trcnm
        3.7.12 The trcrpt command
        3.7.13 Examples for trcrpt

Chapter 4. CPU analysis and tuning
    4.1 CPU overview
        4.1.1 Performance considerations with POWER4-based systems
        4.1.2 Performance considerations with POWER5-based systems
    4.2 CPU monitoring
        4.2.1 The lparstat command
        4.2.2 The mpstat command
        4.2.3 The procmon tool
        4.2.4 The topas command
        4.2.5 The sar command
        4.2.6 The iostat command
        4.2.7 The vmstat command
        4.2.8 The ps command
        4.2.9 The trace tool
        4.2.10 The curt command
        4.2.11 The splat command
        4.2.12 The truss command
        4.2.13 The gprof command
        4.2.14 The pprof command
        4.2.15 The prof command
        4.2.16 The tprof command
        4.2.17 The time command
        4.2.18 The timex command
    4.3 CPU related tuning tools and techniques
        4.3.1 The smtctl command
        4.3.2 The bindintcpu command
        4.3.3 The bindprocessor command
        4.3.4 The schedo command
        4.3.5 The nice command
        4.3.6 The renice command
    4.4 CPU summary
        4.4.1 Other useful commands for CPU monitoring

Chapter 5. Memory analysis and tuning
    5.1 Memory monitoring
        5.1.1 The ps command
        5.1.2 The sar command
        5.1.3 The svmon command
        5.1.4 The topas monitoring tool
        5.1.5 The vmstat command
    5.2 Memory tuning
        5.2.1 The vmo command
        5.2.2 Paging space thresholds tuning
    5.3 Memory summary
        5.3.1 Other useful commands for memory performance
        5.3.2 Paging space commands

Chapter 6. Network performance
    6.1 Network overview
        6.1.1 The maxmbuf tunable
    6.2 Hardware considerations
        6.2.1 Firmware levels
        6.2.2 Media speed considerations
        6.2.3 MTU size
    6.3 Network monitoring
        6.3.1 Creating network load
    6.4 Network monitoring commands
        6.4.1 The entstat command
        6.4.2 The netstat command
        6.4.3 The pmtu command
    6.5 Network packet tracing tools
        6.5.1 The iptrace command
        6.5.2 The ipreport command
        6.5.3 The ipfilter command
        6.5.4 The netpmon command
        6.5.5 The trpt command
    6.6 NFS related performance commands
        6.6.1 The nfsstat command
    6.7 Network tuning commands
        6.7.1 The no command
        6.7.2 The Interface Specific Network Options (ISNO)
        6.7.3 The nfso command

Chapter 7. Storage analysis and tuning
    7.1 Data placement and design
        7.1.1 AIX I/O stack
        7.1.2 Physical disk and disk subsystem
        7.1.3 Device drivers and adapters
        7.1.4 Volume groups and logical volumes
        7.1.5 VMM and direct I/O
        7.1.6 JFS/JFS2 file systems
    7.2 Monitoring
        7.2.1 The iostat command
        7.2.2 The filemon command
        7.2.3 The fileplace command
        7.2.4 The lslv, lspv, and lsvg commands
        7.2.5 The lvmstat command
        7.2.6 The sar -d command
    7.3 Tuning
        7.3.1 The lsdev, rmdev and mkdev commands
        7.3.2 The lscfg, lsattr, and chdev commands
        7.3.3 The ioo command
        7.3.4 The lvmo command
        7.3.5 The vmo command

Part 3. Case studies and miscellaneous tools

Chapter 8. Case studies
    8.1 Case study: NIM server
        8.1.1 Setting up the environment
        8.1.2 Monitoring NIM master using topas
        8.1.3 Upgrading NIM environment to Gbit Ethernet
        8.1.4 Upgrading the disk storage
        8.1.5 Real workload with spread file system
        8.1.6 Summary
    8.2 POWER5 case study
        8.2.1 POWER5 introduction
        8.2.2 High CPU
        8.2.3 Evaluation

Chapter 9. Miscellaneous tools
    9.1 Workload manager monitoring (WLM)
        9.1.1 Overview
        9.1.2 WLM concepts
        9.1.3 Administering WLM
        9.1.4 WLM performance tools
    9.2 Partition load manager (PLM)
        9.2.1 PLM introduction
        9.2.2 Memory management
        9.2.3 Processor management
    9.3 A comparison of WLM and PLM
    9.4 Resource monitoring and control (RMC)
        9.4.1 RMC commands
        9.4.2 Information about measurement and sampling
        9.4.3 Verifying RMC facilities
        9.4.4 Examples using RMC

Chapter 10. Performance monitoring APIs
    10.1 The performance status (Perfstat) API
        10.1.1 Compiling and linking
        10.1.2 Changing history of perfstat API
        10.1.3 Subroutines
    10.2 System Performance Measurement Interface
        10.2.1 Compiling and linking
        10.2.2 Terms and concepts for SPMI
        10.2.3 Subroutines
        10.2.4 Basic layout of SPMI program
        10.2.5 SPMI examples
    10.3 Performance Monitor API
        10.3.1 Performance Monitor data access
        10.3.2 Compiling and linking
        10.3.3 Subroutines
        10.3.4 PM API examples
        10.3.5 PMAPI M:N pthreads support
    10.4 Miscellaneous performance monitoring subroutines
        10.4.1 Compiling and linking
        10.4.2 Subroutines
        10.4.3 Combined example

Appendix A. Source code
    perfstat_dump_all.c
    perfstat_dude.c
    spmi_dude.c
    spmi_data.c
    spmi_file.c
    Spmi_traverse.c
    dudestat.c

Appendix B. Trace hooks
    AIX 5L trace hooks

Abbreviations and acronyms

Related publications
    IBM Redbooks
    Other publications
    Online resources
    How to get IBM Redbooks
    Help from IBM

Index
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
Eserver®, ibm.com®, pSeries®, AIX 5L™, AIX®, DB2®, Enterprise Storage Server®, ESCON®, HACMP™, Hypervisor™, IBM®, Micro-Partitioning™, Nways®, POWER™, POWER3™, POWER4™, POWER5™, PTX®, Redbooks™, Redbooks (logo)™, RS/6000®, Tivoli®
The following terms are trademarks of other companies: Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, and service names may be trademarks or service marks of others.
Preface
This IBM® Redbook takes an insightful look at the performance monitoring and tuning tools that are provided with AIX® 5L™. It discusses the usage of the tools as well as the interpretation of the results by using many examples.
This redbook is meant as a practical guide for system administrators and AIX technical support professionals so they can use the performance tools in an efficient manner and interpret the outputs when analyzing an AIX system’s performance.
This book provides updated information about monitoring and tuning system performance in an IBM Eserver® POWER5™ and AIX 5L V5.3 environment. Practical examples for the new and updated tools are provided, together with new information about using Resource Monitoring and Control (RMC), part of RSCT, for performance monitoring.
Also, in 10.1, “The performance status (Perfstat) API” on page 584, this book presents the Perfstat API so that application programmers can gain a better understanding of the new and updated facilities provided with this API.
The team that wrote this redbook
This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization, Austin Center.
Kumiko Hayashi is an IT Specialist working at IBM Japan Systems Engineering Co., Ltd. She has four years of experience in AIX, RS/6000®, and IBM Eserver pSeries®. She provides pre-sales technical consultation and post-sales implementation support. She is an IBM Certified Advanced Technical Expert - pSeries and AIX 5L.
Kangkook Ji is an IT Specialist at IBM Korea. He has four years of experience in AIX and pSeries. As a Level 2 Support Engineer, he supports field engineers; his main work covers high availability solutions such as HACMP™, as well as AIX problems. His interests span many IT areas, such as Linux® and middleware. He is an IBM Certified Advanced Technical Expert - pSeries and AIX 5L and HACMP.
Octavian Lascu is a Project Leader at the International Technical Support Organization, Poughkeepsie Center. He writes extensively and teaches IBM
classes worldwide in all areas of pSeries clusters and Linux. Before joining the ITSO, Octavian worked at IBM Global Services Romania as a Software and Hardware Services Manager. He holds a master's degree in Electronic Engineering from the Polytechnical Institute in Bucharest and is also an IBM Certified Advanced Technical Expert in AIX/PSSP/HACMP. He has worked with IBM since 1992.
Hennie Pienaar is a Senior Education Specialist in South Africa. He has eight years of experience in the AIX/Linux field. His areas of expertise include AIX, Linux and Tivoli®. He is certified as an Advanced Technical Expert. He has written extensively on AIX and Linux and has delivered classes worldwide on AIX and HACMP.
Susan Schreitmueller is a Sr. Consulting I/T Specialist with IBM. She joined IBM eight years ago, specializing in pSeries, AIX, and technical competitive positioning. Susan has been a Systems Administrator on zSeries, iSeries, and pSeries platforms and has expertise in systems administration and resource management. She travels extensively to customer locations, and has a talent for mentoring new hires and working to create a cohesive technical community that shares information at IBM.
Tina Tarquinio is a Software Engineer in Poughkeepsie, NY. She has worked at IBM for five years and has three years of AIX System Administration experience working in the pSeries Benchmark Center. She holds a bachelor’s degree in Applied Mathematics and Computer Science from the University of Albany in New York. She is an IBM Certified pSeries AIX System Administrator and an Accredited IT Specialist.
James Thompson is a Performance Analyst for IBM Systems Group in Tucson, AZ. He has worked at IBM for five years, the first two years as a Level 2 Support Engineer for Tivoli Storage Manager and for the past three years he has provided performance support for the development of IBM Tape and NAS products. He holds a bachelor’s degree in Computer Science from Utah State University.
Thanks to the following people for their contributions to this project:
Julie Peet, Certified IBM AIX System Administrator, pSeries Benchmark Center, Poughkeepsie, NY.
Nigel Griffiths, Certified IT Specialist, pSeries Advanced Technology Group, United Kingdom
Luc Smolders IBM Austin
Andreas Hoetzel IBM Austin
Gabrielle Velez International Technical Support Organization, Rochester Center
Scott Vetter IBM Austin
Dino Quintero IBM Poughkeepsie
Become a published author
Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners and/or customers.
Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability.
Find out more about the residency program, browse the residency index, and apply online at: ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our Redbooks™ to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways:

Use the online Contact us review redbook form found at:
ibm.com/redbooks

Send your comments in an email to:
[email protected]

Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. JN9B Building 003 Internal Zip 2834
11400 Burnet Road
Austin, Texas 78758-3493
Part 1. Introduction
Part 1 provides an overview of performance in an AIX 5L V5.3 environment and an introduction to performance analysis and tuning methodology. It also provides a description of overall performance metrics and expectations, together with the system components that should be considered for tuning in an IBM Eserver pSeries running AIX 5L.
Chapter 1. Performance overview
The performance of a computer system is based on human expectations and the ability of the computer system to fulfill these expectations. The objective for performance tuning is to make those expectations and their fulfillment match. The path to achieving this objective is a balance between appropriate expectations and optimizing the available system resources.
The performance-tuning process demands skill, knowledge, and experience, and cannot be performed by only analyzing statistics, graphs, and figures. If results are to be achieved, the human aspect of perceived performance must not be neglected. Performance tuning also takes into consideration problem determination aspects as well as pure performance issues.
1.1 Performance expectations
Performance tuning on a newly installed system usually involves setting the basic parameters for the operating system and applications. The sections in this chapter describe the characteristics of different system resources and provide some advice regarding their base tuning parameters if applicable.
Limitations originating from the sizing phase either limit the possibility of tuning, or incur greater cost to overcome them. The system may not meet the original performance expectations because of unrealistic expectations, physical problems in the computer environment, or human error in the design or implementation of the system. In the worst case adding or replacing hardware may be necessary.
We therefore advise you to be particularly careful when sizing a system to allow enough capacity for unexpected system loads. In other words, do not design the system to be 100 percent busy from the start of the project. More information about system sizing can be found in the redbook Understanding IBM Eserver pSeries Performance and Sizing, SG24-4810.
Figure 1-1 System tuning (the figure shows the main tuning areas of a system, including CPU, virtual memory, file system layout, asynchronous I/O, LVM, and network, together with the associated tuning facilities such as schedo, vmo, ioo, no, nfso, PSALLOC, and Workload Manager)
When a system in a production environment still meets the performance expectations for which it was initially designed, but the demands of the organization using it have outgrown the system's basic capacity, performance tuning is performed to avoid and/or delay the cost of adding or replacing hardware. Remember that many performance-related issues can be traced back to operations performed by somebody with limited experience and knowledge who unintentionally restricted some vital logical or physical resource of the system.
To evaluate whether you have a performance issue, you can use the flow chart in Figure 1-2 as a guide.
Figure 1-2 Performance problem determination flow chart (starting from START, the chart checks in turn whether the system is CPU bound, memory bound, disk bound, or network bound; each "yes" branch leads to the corresponding tuning actions, and if no resource is found to be the bottleneck, additional tests are run)
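The logic of this determination flow can be sketched as a simple decision function. The metric names, threshold values, and check ordering below are illustrative assumptions for this sketch, not figures or rules taken from this book:

```python
def classify_bottleneck(cpu_busy, paging_rate, disk_busy, net_util,
                        cpu_limit=0.8, page_limit=10,
                        disk_limit=0.7, net_limit=0.7):
    """Walk the determination flow: check each resource in turn and
    return the first one that exceeds its (illustrative) threshold.

    cpu_busy, disk_busy, net_util are utilizations in [0, 1];
    paging_rate is page-ins/outs per second.
    """
    if cpu_busy > cpu_limit:
        return "CPU bound"
    if paging_rate > page_limit:
        return "memory bound"
    if disk_busy > disk_limit:
        return "disk bound"
    if net_util > net_limit:
        return "network bound"
    return "run additional tests"
```

In practice the inputs would come from monitoring tools such as vmstat, iostat, and netstat, and the thresholds would be chosen per workload.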
1.1.1 System workload An accurate and complete definition of a system's workload is critical to understanding and/or predicting its performance. A difference in workload can cause far more variation in the measured performance of a system than differences in CPU clock speed or random access memory (RAM) size. The
workload definition must include not only the type and rate of requests sent to the system, but also the exact software packages and in-house application programs to be executed.
It is important to take into account the work that a system is performing in the
background. For example, if a system contains file systems that are NFS-mounted and frequently accessed by other systems, handling those accesses is probably a significant fraction of the overall workload, even though the system is not a designated server.
A workload that has been standardized to allow comparisons among dissimilar systems is called a benchmark. However, few real workloads duplicate the exact algorithms and environment of a benchmark. Even industry-standard benchmarks that were originally derived from real applications have been simplified and homogenized to make them portable to a wide variety of hardware and software platforms.
The only valid use for industry-standard benchmarks is to narrow the field of candidate systems that will be subjected to a serious evaluation. Therefore, you should not solely rely on benchmark results when trying to understand the workload and performance of your system.
It is possible to classify workloads into the following categories:

Multiuser - A workload that consists of a number of users submitting work through individual terminals. Typically, the performance objectives of such a workload are either to maximize system throughput while preserving a specified worst-case response time or to obtain the best possible response time for a constant workload.

Server - A workload that consists of requests from other systems. For example, a file-server workload is mostly disk read and disk write requests. It is the disk-I/O component of a multiuser workload (plus NFS or other I/O activity), so the same objective of maximum throughput within a given response-time limit applies. Other server workloads consist of items such as math-intensive programs, database transactions, and printer jobs.

Workstation - A workload that consists of a single user submitting work through a keyboard and receiving results on the display of that system. Typically, the highest-priority performance objective of such a workload is minimum response time to the user's requests.
1.1.2 Performance objectives After defining the workload that your system will have to process, you can choose performance criteria and set performance objectives based on those criteria. The
overall performance criteria of computer systems are response time and throughput.
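As a minimal illustration of these two criteria (a sketch for this guide, not code from the book), the following function computes throughput and response-time statistics from a list of request submit/complete timestamps:

```python
def summarize(requests):
    """requests: list of (submit_time, complete_time) pairs, in seconds.

    Returns (throughput_per_sec, avg_response_time, worst_response_time).
    Response time is the elapsed time from submit to complete; throughput
    is the number of requests completed over the measured time span.
    """
    response_times = [done - sub for sub, done in requests]
    span = max(done for _, done in requests) - min(sub for sub, _ in requests)
    throughput = len(requests) / span if span > 0 else float("inf")
    avg_rt = sum(response_times) / len(response_times)
    return throughput, avg_rt, max(response_times)

# Three requests observed over a 2-second window
reqs = [(0.0, 0.5), (1.0, 1.2), (1.5, 2.0)]
tp, avg_rt, worst_rt = summarize(reqs)
```

For this sample, throughput is 1.5 requests per second and the worst-case response time is 0.5 seconds; a multiuser objective would be to keep throughput high while holding that worst case below an agreed limit.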
Response time is the elapsed time between when a request is submitted and when the response from that request is returned. Examples include: