Thank you for attending this session today!

1 2012-20 years = 1992. Here we go!

2 Where the heck is TSO? ISPF? JCL? This is horrifying!

3 UNIX isn’t scary anymore. We actually have our first application working!

4 Now that customers are running the DB2/AIX application solution, how the heck do we monitor and tune it?

5 Are there any mainframes left? LOL. Companies are still, today, porting applications from DB2 z/OS to DB2 LUW.

6 Scott R. Hayes Senior Database Manager Manning & Napier Information Services, LLC 1100 Chase Square, 12th Floor Rochester, NY 14604

IBM Certified DB2 Database Administrator, IBM Certified DB2 Application Programmer, IBM Certified Transaction Server Specialist phone: (716) 325-6880 x469 fax: (716) 325-1036 email: [email protected] OR [email protected]

This presentation is based upon DB2 Common Server V2.1.2 for AIX. Future versions of DB2 may require alteration of formulas herein, or provide opportunities for new formulas.

print "select 'LOWERAVG', p_buffpage_4k, avg(bp_ix_hit_pct) " > $tempfile
print "  from ZDB2TUNE.STAT_DB_TB " >> $tempfile
print "  where days(current date) - days(snapshot_dt) < 30 " >> $tempfile
print "  and p_buffpage_4k in " >> $tempfile
print "    (select max(p_buffpage_4k) " >> $tempfile
print "       from zdb2tune.stat_db_tb " >> $tempfile
print "       where days(current date) - days(snapshot_dt) < 30 " >> $tempfile
print "       and p_buffpage_4k < $buffpage_4k ) " >> $tempfile
print "  group by p_buffpage_4k; " >> $tempfile
db2 -tf $tempfile | grep LOWERAVG | read lit lower_bpsize lower_bp_hr_avg
#
# Similar commands are used to derive the current and higher averages.
#
# $lower_bpsize    - contains the next lowest BP size in the 30 day sample
# $lower_bp_hr_avg - contains the ix hit ratio average for the lower size
# tune_buffpage_incr is 500, 1000, or 2000 depending on the -tune option

if [[ $curr_bp_hr_avg -gt $lower_bp_hr_avg && $curr_bp_hr_avg -lt $higher_bp_hr_avg ]]
then
  let buffpage_4k_rec=$buffpage_4k+$tune_buffpage_incr
  if [[ $buffpage_4k_rec -gt $tune_buffpage_max ]]
  then
    buffpage_4k_rec=$tune_buffpage_max
  fi
else
  if [[ $curr_bp_hr_avg -le $lower_bp_hr_avg || $lower_bpsize = "N/A" ]]
  then
    let buffpage_4k_rec=$buffpage_4k-$tune_buffpage_incr
    if [[ $buffpage_4k_rec -lt $tune_buffpage_min ]]
    then
      buffpage_4k_rec=$tune_buffpage_min
    fi
  fi
fi

# If a higher BP size is not available, then the Hit Ratio is artificially set to 100 to incite an upward adjustment. If a lower BP size is not available, then the Hit Ratio is set to zero and lower_bpsize is set to "N/A".

9 About Manning & Napier Information Services… MNIS works closely with patent offices throughout the world, patent attorneys, and many Fortune 500 companies, providing insight and expertise to better address intellectual property needs at every level. Over the years we have developed a cost- and time-efficient way not only to search patent data, but also to analyze and manage patent portfolios to help clients gain and maintain a competitive advantage in the marketplace. Check out http://www.mnis.net/mapitdemo

About Scott Hayes… Scott has actively worked with DB2/MVS since 1988. In November 1994, he installed and began working with DB2/6000 V1.1 on AIX. He is an IBM Certified DB2 Database Administrator, DB2 Application Programmer, and Transaction Server Specialist. At Manning & Napier, Scott is responsible for managing large and growing servers on DB2/AIX and DB2/Solaris with access from a number of Win95, WinNT, and Sun Solaris clients. Scott has attended all 10 IDUG North American Conferences. He has spoken at the 8th, 9th, and 10th North American Conferences, the 2nd Annual Asian Pacific Conference, and the 1997 DB2 Technical Conference.

This is funny, looking back. IBM is pushing us all to Automatic Storage, which uses DMS Files. Raw is fastest, but labor intensive to manage. DMS files were the slowest in these tests. Hmmm… maybe if Automatic Storage uses DMS files, and if DMS files are slowest, then maybe DB2 customers will buy bigger, beefier hardware and spend more on DB2 licenses? Hmmm. Tongue in cheek.

Case 1: Runstats on table shrlevel reference
Case 2: Runstats on table shrlevel change
Case 3: Runstats on table with distribution shrlevel reference
Case 4: Runstats on table and index ixpk shrlevel reference
Case 5: Runstats on table and detailed index ixpk shrlevel reference
Case 6: Runstats on table with distribution and detailed index ixpk shrlevel change

Summary: Presuming the optimizer benefits from distribution and detailed index statistics, and presuming the best optimizer choice for large queries will save substantial time and/or resources, then the modest 30% increase in elapsed runstats times is a sound investment.

Take Home: Run Runstats with the works! This slide summarizes all of the activities from the previous slides. If you can set this up at your shop, you will collect a wealth of knowledge about resource utilization, peak periods, sort requirements, and more.

DB2 UDB V5 Monitoring and Tuning

Cell Phone: (716) 704-4897 Numeric Pager: (800) 509-1227 Office Phone: (716) 349-2450 Email: [email protected] Fax: (716) 349-2451

(c) Copyright 1998. Database-GUYS Inc. All Rights Reserved.

14

Temporary spaces, one for 4K pages and one for 8K pages, are necessary to prevent volumes of sequentially prefetched pages from flooding other pools.
The SYSCATSPACE should be placed into a dedicated bufferpool to improve performance of catalog activities overall.
Tables with high volume read activity (random) should have indexes placed in a dedicated pool. Another pool should be dedicated to other indexes, unless the indexes are predominantly scanned.
Randomly accessed tablespaces containing table data should be placed into a data bufferpool.
Sequentially accessed tablespaces (data and index) should be placed into a sequential bufferpool. These tablespaces should also be striped across multiple disks for best parallel I/O performance.
Keep a small bufferpool available for isolating particular objects (tables/indexes) and observing their access characteristics.
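These placements can be scripted in the same tempfile style as the tuning scripts shown earlier. A minimal sketch follows; the pool names, sizes, and tablespace names are all invented for illustration (echo is used instead of ksh print for portability), and sizes would need to fit your real memory budget:

```shell
# Hypothetical bufferpool layout; sizes (in 4K pages) and tablespace
# names are invented for illustration only.
tempfile=/tmp/bp_layout.sql
echo "create bufferpool BP_CATALOG size 1000 pagesize 4k;"   >  $tempfile
echo "create bufferpool BP_HOT_IX  size 4000 pagesize 4k;"   >> $tempfile
echo "create bufferpool BP_DATA    size 4000 pagesize 4k;"   >> $tempfile
echo "create bufferpool BP_SEQ     size 2000 pagesize 4k;"   >> $tempfile
echo "create bufferpool BP_PROBE   size  250 pagesize 4k;"   >> $tempfile
echo "alter tablespace SYSCATSPACE  bufferpool BP_CATALOG;"  >> $tempfile
echo "alter tablespace TS_OLTP_IX   bufferpool BP_HOT_IX;"   >> $tempfile
echo "alter tablespace TS_OLTP_DATA bufferpool BP_DATA;"     >> $tempfile
echo "alter tablespace TS_SCAN_DATA bufferpool BP_SEQ;"      >> $tempfile
# db2 -tf $tempfile    # run against the database when ready
```

Generating the DDL into a file first, as the scripts throughout this presentation do, lets you review the statements before running them.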

15

Still good rules to live by in 2012.

16 Thank goodness we got past that Y2K problem, huh? Now, we just need to get past 21 December 2012 and we’ll be all set… well, until the fiscal cliff of 1 January 2013 arrives…

17 Speed – 12 years later, DB2 users still crave SPEED.

18 Intra_Parallel is trying to make a comeback in DB2 LUW V10. BE CAREFUL!

19 “db2top” type monitoring actually started around 1999 with DGI’s Wise-GUY™ tool.

20 Statement Cost Aggregation, Statement Concentration, Stripping literal values and performing Statement Consolidation… whatever you want to call it, the technique is ESSENTIAL to optimized DB2 tuning (z/OS and LUW).

See http://www.dbisoftware.com/blog/db2_performance.php?id=123 for a detailed discussion and illustration.
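A minimal sketch of the literal-stripping step is below. It assumes GNU sed (for the \b word boundary) and a hypothetical strip_literals helper name; real tools parse the SQL rather than pattern-match it:

```shell
# Replace quoted strings and standalone numbers with '?' so that all
# executions of the same statement shape aggregate under one key.
# GNU sed assumed; this is an approximation, not a real SQL parser.
strip_literals() {
  sed -e "s/'[^']*'/?/g" -e 's/\b[0-9][0-9]*\b/?/g'
}

echo "SELECT NAME FROM EMP WHERE EMPNO = 4321 AND DEPT = 'D11'" | strip_literals
# -> SELECT NAME FROM EMP WHERE EMPNO = ? AND DEPT = ?
```

Once literals are stripped, thousands of distinct-looking statements collapse into a handful of statement shapes whose costs can be summed.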

21 So is DIET.

1 SQL True Cost = Individual Execution Cost multiplied by Frequency of Execution in the application workload mix. The previous 2 slides have demonstrated how a seemingly fast (elapsed time < 0.5 second) SQL statement consumes over 1/5th (21%) of CPU time in the application, and 48%, almost half, of all sort time in the database. It is extremely unlikely that a 1/2 second duration statement would be captured by a snapshot, and, if it were, based on the “low” estimated optimizer cost, such a statement might miss the DBA attention “clip level”.
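The True Cost arithmetic can be sketched with a few lines of awk over a hypothetical extract of (cpu_seconds, statement) pairs; the input format and numbers are invented, and the statements are assumed to be literal-stripped already:

```shell
# Aggregate per-execution CPU cost by statement shape:
# True Cost = sum of individual execution costs (cost x frequency).
# Input format and values are hypothetical.
workload=$(awk '
  { cpu = $1; $1 = ""; total[$0] += cpu; execs[$0]++ }
  END { for (s in total) printf "%.2f %d%s\n", total[s], execs[s], s }
' <<'EOF'
0.40 SELECT C1 FROM T1 WHERE C2 = ?
0.40 SELECT C1 FROM T1 WHERE C2 = ?
0.40 SELECT C1 FROM T1 WHERE C2 = ?
2.00 SELECT SUM(C9) FROM BIGTAB
EOF
)
echo "$workload" | sort -rn   # most costly statement shapes first
```

Note how the "fast" 0.40-second statement, run three times, accumulates a total cost that rivals the obviously expensive query; that is exactly the effect the slide describes.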

This illustrates the valuable performance information available when properly performing SQL Consolidation/Cost Aggregation, as discussed at: http://www.dbisoftware.com/blog/db2_performance.php?id=123 The slide might be 13 years old, but this is still excellent and relevant advice! SPEED!

1 The fastest SORT is the one that never occurs! The most important thing you can do for DB2 LUW performance is to ensure that your TEMPSPACE storage is allocated on physical devices that are separate from Data, Indexes, and Logs. It was a great year.

28 DB2 V8 is perhaps my favorite DB2 version of all time.

29 This is exciting – parameters can now be changed on the fly. Now the automatic tuning that DGI invented in 1999 can be fully exploited!

30 SPEED… again!

31 Slide shows the performance benefits of completing sorts within SORTHEAP memory – spills to TEMPSPACE, even small ones, degrade performance.
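The spill rate is easy to derive from two snapshot counters; a sketch with invented values follows (in practice the counters come from a database snapshot):

```shell
# Percent of sorts that overflowed SORTHEAP and spilled to TEMPSPACE.
# Counter values are invented for the sketch; anything much above
# zero merits a look at SORTHEAP.
total_sorts=5000
sort_overflows=150
overflow_pct=$(awk -v o=$sort_overflows -v t=$total_sorts \
                   'BEGIN { printf "%.1f", o / t * 100 }')
echo "Sort overflow percentage: ${overflow_pct}%"
```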

32 2005 – the year Database-Brothers, Inc., aka DBI Software was born.

33 Follow the methodology.

34 Are you fascinated by rates? Rates provide great entertainment value but are nearly useless for productive tuning. Instead, focus your analysis on COSTS.

35 SQL Cost Aggregation/Concentration/Consolidation Illustrated… See http://www.dbisoftware.com/blog/db2_performance.php?id=123

36 Folks, DB2 LUW isn’t like DB2 z/OS. You DO need indexes on small tables in DB2 LUW.

37 Juice 6 lemons. Add 3 cups of water. Add ½ cup of sugar. Blend with ice. Enjoy!

38 More lessons learned. Experience is the best teacher.

39 Anyone care to dance?

40 Do you have a database that meets these criteria? Congratulations! Time to celebrate! But, remember, there’s always tomorrow and DB2 can change its mind when data volumes and number of users grow!

41 Lots of buzzwords in presentation titles help get presentations accepted by IDUG. Present at IDUG, or volunteer to help. It is a terrific experience.

42 Automatic tuning? STMM? Hello? I’ve been preaching and teaching this since 1999! Hello? IBM? How about Floors and Ceilings (Min and Max) values for automatically tuned memory values? THEN, STMM might be useful. I’m just sayin’.

43 Scheme. DB2 has been taking notes from President Robin Hood. Steal from the rich and re-allocate to the poor.

44 Haven’t we seen this slide before? Yes, we have. Guess what. It is THAT important. Print this, put it under your pillow, and study it every night before you go to sleep.

45 Another exciting presentation title… ask an IDUG CPC (Conference Planning Committee) member how you can get involved with IDUG.

46 TBRRTX is key to determining which tables have trouble with inefficient access paths. The formula and rule of thumb work because not every transaction accesses every table. These rules of thumb apply most strongly to OLTP databases, but also offer guidance for data warehouses. Again, the emphasis is on understanding I/O COST (not rates).
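Assuming TBRRTX here denotes table rows read per transaction, and that transactions are approximated by commits plus rollbacks, the computation is simple division over snapshot counters (all values below are invented):

```shell
# Table rows read per transaction (TBRRTX) sketch, assuming
# transactions = commits + rollbacks. Counter values are invented.
rows_read=1200000
commits=40000
rollbacks=2000
let txns=$commits+$rollbacks
tbrrtx=$(awk -v r=$rows_read -v t=$txns 'BEGIN { printf "%.1f", r / t }')
echo "TBRRTX = $tbrrtx rows read per transaction"
```

A single table contributing dozens of rows read per transaction in an OLTP database is a candidate for access path investigation.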

47 Slide visually demonstrates SQL workload analysis process. The DBA should use snapshots (good) or event monitors (better because these contain static SQL) as input to their own analysis. CPU cost is shown, but likewise Sort time, rows read, rows written, and execution time should all be similarly aggregated to find total and relative costs.

See http://www.dbisoftware.com/blog/db2_performance.php?id=123 for more information.

48 One SELECT statement on an 8-way machine is using 97% of CPU and 90% of Elapsed time! Fix this statement, and *BAM* the entire database application will (and did) run faster!

This is a real world illustration of SQL Cost Aggregation/Consolidation/Concentration: http://www.dbisoftware.com/blog/db2_performance.php?id=123

49 Evil indexes are bad, very bad.

50 Thank you, IDUG, for introducing a DB2 technical tools track to the agenda.

51 Have you had your drink of the blue punch today?

52 You need to have awareness that there are problems and opportunities within your databases.
DBI’s Brother-Eagle is a real-time “what’s happening right now?!?!?!?” monitor for DB2 LUW. www.Brother-Eagle.com
DBI’s Brother-Hawk™ is a lights-out advanced “health monitor” for DB2 LUW. www.Brother-Hawk.com

53 Time spent analysis – the most widely read blog in the world: http://www.dbisoftware.com/blog/db2_performance.php?id=120 DBI’s Brother-Thoroughbred® for DB2 LUW tells you how much time is spent inside DB2 and out, and, of the inside time, how much time, and what percent, is consumed by CPU, I/O, Locks, & Sorts. It also provides a means to track Service Level Agreement attainments. See http://www.Brother-Thoroughbred.com

54 DBI’s Brother-Panther® for DB2 LUW provides SQL Cost Aggregation/Consolidation/Concentration analysis of SQL workloads for any user-selected time period. It also provides analysis of databases, bufferpools, tablespaces, tables, users, and applications, trend charts for all objects and entities, and the ability to compare performance between different time periods. See http://www.Brother-Panther.com.

This slide illustrates the important concept of aggregated SQL workload costs.

55 Implementing solutions is easy – once you correctly understand what your real, root cause, problems are.

56 Performance cost trend chart from commercially available tool shows substantial cost reduction in Logical Reads after an index was added. See www.Brother-Panther.com.

57 These testimonials are provided to illustrate the value of fully and properly executing the performance management life cycle, of using the performance metrics that have been taught, and for the outstanding results that can be achieved by following the methodology.

58 Scott Hayes is President & CEO of DBI, an ISV providing industry leading performance tools for DB2 LUW, an IBM GOLD Consultant and Data Management Champion, the host of The DB2Night Show webinar series (www.DB2NightShow.com), a published author in various DB2 related magazines, a frequent speaker at IDUG NA, EU, and AP events, and a regular blogger on DB2 LUW Performance topics (see www.DBISoftware.com/blog/db2_performance.php). Follow him on Twitter at twitter.com/srhayes and twitter.com/db2performance. Scott is one of a handful of DB2 Professionals that can proudly claim they have attended EVERY IDUG North American Conference since IDUG’s inception (DB2/MVS V1.3 was big news back then).

59 When the first column of an index does not participate with a sargable predicate in the WHERE clause, DB2 cannot use the B-tree structure to navigate to the specific leaf pages where the second column (FIRSTNME in this case) values can be discovered. Instead, finding it less costly than scanning the entire table, DB2 will scan all of the index leaf pages. This is incredibly CPU costly and puts an application at risk for locking problems.

60 This update statement had a WHERE clause on columns C2 and C3. The only available index was on columns C1, C2, and C3. The table had millions of rows. The result was the optimizer’s decision to do Index Leaf Page scans. Notice the very high aggregate CPU cost of 54% of all database CPU time. Also notice the Index Read Efficiency (IREF = Rows Read/Rows Fetched) of 0.16, so, once the RIDs were found via the IX leaf page scan, if any, only 1 row was updated about 20% of the time after the expensive scan. With an average Execution time of 0.94 Seconds (sub-second), this statement had escaped DBA scrutiny. Another telling symptom of the Leaf page scan is the very high Average Index Logical Reads. If the index had 4 levels, for instance, then we’d expect an Average IX L Read value of “4” and not 793! The solution was easy – create an index on C2, C3, and C1 – we brought C1 into the index to inflate the index FULLKEYCARD. More later…
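The symptom described above, average index logical reads far above the number of index levels, can be flagged mechanically. A sketch in the style of the earlier ksh scripts, with invented counter values and an invented threshold of 10x NLEVELS:

```shell
# Flag a likely index leaf page scan: a clean B-tree probe should cost
# roughly NLEVELS index logical reads per execution, so a much larger
# average is suspicious. All values and the 10x threshold are invented.
ix_logical_reads=793000
executions=1000
nlevels=4
let avg_ix_reads=$ix_logical_reads/$executions
if [[ $avg_ix_reads -gt $((nlevels * 10)) ]]
then
  echo "WARNING: avg ix logical reads ($avg_ix_reads) >> NLEVELS ($nlevels) - possible leaf page scan"
fi
```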

61 Print this slide and keep it handy at your desk.

62 Abstract submitted to IDUG:

When asked “What DB2 LUW Challenges do you struggle with the most?”, the majority of people in The DB2Night Show™ (www.DB2NightShow.com) studio audiences frequently respond “I/O Bound Databases”. This technical session will explore multiple facets of DB2 I/O, how it works, how to optimize it, how to eliminate it, how to make friends with the storage administrator, data, index, logs, and tempspace placement, best and worst practices, compression, solid state disk, and a few other tidbits to help DBAs minimize and mitigate their I/O challenges and costs. Customer Case Studies will be shared.

63 Slide shows case study illustrating cost savings derived from reducing I/O – CPU savings were significant! Before eliminating the I/O of a costly query, 10 CPUs could barely keep up with the workload and the customer was planning to add 2 CPUs at $100K cost. After tuning, the hardware upgrade was no longer necessary and the workload could be covered by 4-6 CPUs.

64 An 8MB table is doing almost 55% of all the Read I/O in the database! One SELECT statement accounts for over 99% of this Read I/O. This was determined by using an ISV tool that isolates SQL statements driving I/O to a particular table, and by using the SQL cost aggregation/consolidation technique previously described. When this I/O costly statement was passed to the Design Advisor, an index recommendation resulted in a 99.42% cost reduction.

By isolating the table with the heaviest I/O and the SQL driving I/O to that table, the customer was able to remove almost 55% of the read I/O from the database in a matter of minutes.

65 Thank you for attending this session today!

66 Thank goodness that DB2 keeps advancing, adding new features and new tuning knobs… I’ll probably have a job for another 20 years! ;-)

67 SSDD. I’ve been in this process loop for 20 years. It works. Join me on a journey of success for another 20 years!

68 Check out The DB2Night Show at http://www.DB2NightShow.com – over 120 hours of FREE DB2 education just waiting for you!

69