Abstract:

Native SQL procedures were introduced with DB2 9 for z/OS, and SQL Procedure Language (SQL PL) was extended to UDFs with DB2 10. DB2 11 has delivered additional SQL PL-related enhancements. This session will provide information that attendees can use in the development, management, and monitoring of native SQL procedures and UDFs. It will also cover exploitation of native SQL procedures and UDFs to enhance DB2 for z/OS performance, scalability, and security - particularly for client-server applications.

Robert Catterall IBM [email protected]

1 For years, COBOL ruled the roost with respect to the coding of DB2 for z/OS stored procedures. SQL PL, more recently on the scene, will, I think, come to be the dominant language for DB2 stored procedure development (and for development of user-defined functions). I believe that DB2 for z/OS people – systems programmers, DBAs, and application developers – should have at least some knowledge of SQL PL, and DB2-using organizations should engage in at least some degree of SQL PL routine development.

2 I’ll start by providing some information about the history of DB2 for z/OS native SQL procedures (and UDFs), and some recently delivered enhancements in this area. Then I’ll talk about the benefits of native SQL procedures and UDFs with respect to application performance, scalability, and security. After that I’ll discuss topics related to native SQL procedure and UDF development and management. I’ll conclude with some thoughts on shifting to SQL PL from other languages for DB2 stored procedure and UDF development.

3 A brief SQL PL history lesson, and a look at what’s new.

4 It was DB2 for z/OS Version 7 that introduced the ability to write stored procedure programs using only SQL statements (in other words, the stored procedure’s source is contained entirely within the associated CREATE PROCEDURE statement).

The enriched SQL that enabled creation of SQL-only stored procedure source is called SQL Procedure Language, or SQL PL for short. To the existing SQL DML (e.g., SELECT, INSERT), DCL (GRANT and REVOKE), and DDL statements (CREATE, ALTER), SQL PL added what are called control statements. These statements are about variable declaration, assignment of values to variables, and logic flow control (via statements such as IF, WHILE, ITERATE, LOOP, and GOTO).
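To make the control statements concrete, here is a minimal sketch of a SQL procedure that declares variables, assigns values with SET, and branches with IF. The procedure, table, and column names are made up for illustration; they are not from the presentation:

```sql
-- Hypothetical example of SQL PL control statements.
CREATE PROCEDURE UPDATE_STAFF_LEVEL
  (IN P_DEPT CHAR(3))
LANGUAGE SQL
BEGIN
  -- Variable declarations (a SQL PL control-statement capability):
  DECLARE V_SALARY DECIMAL(9,2);
  DECLARE V_LEVEL  CHAR(10);

  SELECT AVG(SALARY) INTO V_SALARY
    FROM EMP
    WHERE WORKDEPT = P_DEPT;

  -- Logic flow control via IF:
  IF V_SALARY > 75000 THEN
    SET V_LEVEL = 'SENIOR';
  ELSE
    SET V_LEVEL = 'STANDARD';
  END IF;

  UPDATE DEPT
    SET STAFF_LEVEL = V_LEVEL
    WHERE DEPTNO = P_DEPT;
END
```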

5 What did organizations like about SQL PL as introduced with DB2 for z/OS V7? Mostly they liked that the language expanded the pool of people who could develop DB2 stored procedures – people who didn’t know a programming language such as COBOL or C or Java could write DB2 stored procedures using SQL PL.

What did organizations not like about first-generation SQL PL routines? Some organizations didn’t like the fact that the routines were turned into C language programs with embedded SQL DML statements as part of preparation for execution. These programs ran – like other external-to-DB2 stored procedure programs – in WLM-managed stored procedure address spaces. The concern in some people’s minds had to do with the CPU consumption of a C language stored procedure program, which often was a good bit greater than the CPU cost of executing a COBOL stored procedure program of equivalent functionality. Certain organizations opted not to use DB2 V7-style SQL procedures because the negative of added CPU cost (versus COBOL stored procedures) outweighed the benefit of an expanded pool of people who could develop DB2 stored procedures.

6 A very important enhancement was introduced with DB2 9 for z/OS running in new-function mode: native SQL procedures (organizations that migrated to DB2 10 from DB2 V8 can utilize native SQL procedure functionality in a DB2 10 NFM environment). Unlike DB2 V7 or V8 SQL procedures (still supported in DB2 9, 10, and 11 systems, and now called external SQL procedures), DB2 9 (and beyond) native SQL procedures are internal procedures – they run in the DB2 DBM1 address space, as opposed to a WLM-managed stored procedure address space. Other differences versus external SQL procedures:
• A native SQL procedure’s sole executable is its DB2 package – there’s nothing else to it.
• A native SQL procedure executes under the task of the invoking application process – not under its own TCB in a stored procedure address space.
• A native SQL procedure provides some capabilities not available for external SQL procedures, such as nested compound SQL statements (very handy for implementation of multi-statement condition handlers).
• That functionality gap versus external SQL procedures continues to grow. DB2 11 delivered some important enhancements that are only applicable to native SQL PL routines (I’ll cover these enhancements momentarily).
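A sketch of the nested compound statement capability mentioned above: the condition handler's body is itself a BEGIN...END block containing multiple statements. All object and parameter names here are hypothetical:

```sql
-- Illustrative native SQL procedure with a multi-statement
-- condition handler (a nested compound statement).
CREATE PROCEDURE TRANSFER_FUNDS
  (IN  P_FROM_ACCT INTEGER,
   IN  P_TO_ACCT   INTEGER,
   IN  P_AMOUNT    DECIMAL(9,2),
   OUT P_STATUS    VARCHAR(20))
LANGUAGE SQL
BEGIN
  -- The handler body below is a compound statement nested inside
  -- the procedure body -- not possible in an external SQL procedure.
  DECLARE EXIT HANDLER FOR SQLEXCEPTION
  BEGIN
    SET P_STATUS = 'FAILED';
    INSERT INTO XFER_LOG (ACCT, NOTE)
      VALUES (P_FROM_ACCT, 'TRANSFER FAILED');
  END;

  UPDATE ACCOUNTS SET BALANCE = BALANCE - P_AMOUNT
    WHERE ACCT_ID = P_FROM_ACCT;
  UPDATE ACCOUNTS SET BALANCE = BALANCE + P_AMOUNT
    WHERE ACCT_ID = P_TO_ACCT;
  SET P_STATUS = 'OK';
END
```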

7 DB2 10 for z/OS (in new-function mode) expanded the usefulness of native SQL PL routines to user-defined functions (UDFs). I like to call these routines “native” SQL UDFs, but the official term is “compiled SQL scalar functions.”

As is true of SQL procedures, “native” SQL UDFs expand the pool of people who can develop user-defined functions in a DB2 environment. Could UDFs be written in SQL before DB2 10 NFM? Yes, but they were very limited in functionality (more on this to come).

8 Prior to DB2 10 (NFM), if you wanted to create a user-defined function that had some degree of sophistication, including declaration of variables and assignment of values to same, and logic such as “do loops,” you pretty much had to go with an external UDF that would be associated with a program written in COBOL (for example). With DB2 10 NFM (and beyond), that type of sophistication can be achieved with a compiled SQL scalar function written in SQL PL. As with a native SQL procedure, the “body” of a “native” SQL UDF is contained within what’s called a compound statement. Other things that “native” SQL UDFs have in common with native SQL procedures:
• They run in the DB2 DBM1 address space.
• They run under the task of the invoking application process.
• The one and only executable is the “native” SQL UDF’s package – nothing about a “native” SQL UDF is external to DB2.
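Here is a sketch of a compiled SQL scalar UDF whose body is a compound statement with a declared variable and a loop; the function name and logic are invented for illustration:

```sql
-- Hypothetical compiled SQL scalar UDF (DB2 10 NFM and later).
-- The BEGIN...END compound statement is what makes it "compiled"
-- rather than inline.
CREATE FUNCTION DAYS_TO_MONTH_END (P_DATE DATE)
  RETURNS INTEGER
  LANGUAGE SQL
BEGIN
  DECLARE V_DAYS INTEGER DEFAULT 0;
  DECLARE V_DATE DATE;
  SET V_DATE = P_DATE;
  -- Walk forward a day at a time until the month changes:
  WHILE MONTH(V_DATE + 1 DAY) = MONTH(P_DATE) DO
    SET V_DATE = V_DATE + 1 DAY;
    SET V_DAYS = V_DAYS + 1;
  END WHILE;
  RETURN V_DAYS;
END
```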

9 The ability to have a SQL PL routine as the body of a “native” SQL UDF is not the only DB2 10 enhancement related to SQL UDFs. Other goodies:
• The RETURN statement in a compiled SQL scalar UDF (also called a non-inline SQL scalar UDF) can contain a scalar fullselect. Prior to DB2 10 NFM, not only could a SQL scalar UDF not contain a scalar fullselect – it couldn’t even reference a column of a table.
• A table UDF, which returns a set of values, can be written in SQL in a DB2 10 NFM (and beyond) system (previously, a table UDF had to be external, i.e., associated with an external-to-DB2 program written in a language such as COBOL).

Note that DB2 10, in addition to having new CREATE (and ALTER) statements for compiled SQL scalar UDFs and SQL table UDFs, still supports the pre-DB2 10 type of SQL scalar UDF (now called an inline SQL scalar UDF).
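Sketches of the two DB2 10 NFM enhancements just described; table, column, and function names are illustrative:

```sql
-- 1) A compiled SQL scalar UDF whose RETURN contains
--    a scalar fullselect:
CREATE FUNCTION DEPT_HEADCOUNT (P_DEPT CHAR(3))
  RETURNS INTEGER
  LANGUAGE SQL
BEGIN
  RETURN (SELECT COUNT(*) FROM EMP WHERE WORKDEPT = P_DEPT);
END;

-- 2) A table UDF written in SQL, returning a set of rows:
CREATE FUNCTION DEPT_EMPS (P_DEPT CHAR(3))
  RETURNS TABLE (EMPNO CHAR(6), LASTNAME VARCHAR(15))
  LANGUAGE SQL
  READS SQL DATA
  RETURN SELECT EMPNO, LASTNAME
           FROM EMP
           WHERE WORKDEPT = P_DEPT;
```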

10 I had stated previously that one benefit of native SQL procedures versus external SQL procedures is the fact that new capabilities are, by and large, delivered for the former and not for the latter. An example of such a new capability is the XML support provided for native SQL procedures (and not external SQL procedures) starting with DB2 10 in new-function mode:
• Input and output parameters for a native SQL procedure can be of the XML data type (prior to DB2 10 NFM, XML data had to be passed to or received from a native SQL procedure in character string form). Native SQL procedures can also declare variables of the XML data type.
• The same holds true for SQL UDFs (whether scalar – inline or compiled – or table UDFs).

11 DB2 11 provides a couple of very cool native SQL procedure-only enhancements. One of those is support for array parameters: they can be passed to or received from a native SQL procedure, or declared as variables in a native SQL procedure (and the same is true for a compiled SQL scalar UDF). An important caveat: when array parameters are used for a native SQL procedure, the procedure has to be called from another SQL PL routine or from a Java program that connects to DB2 using the IBM Data Server Driver for JDBC and SQLJ type 4 driver (I underlined that Java bit because Java client-side programmers have long wanted DB2 stored procedure support for array parameters).

Note that one first creates an array (it’s a category of user-defined data type) and then uses it with a native SQL procedure (or UDF). You can create ordinary arrays, in which a value in the array is identified by its ordinal position within the array (e.g., element 1, or 12, or 27), and associative arrays, in which values are ordered by, and can be referenced by, user-provided index values. The example on the slide shows a CREATE statement for an ordinary array. DB2 11 also provides several new built-in functions that facilitate working with arrays.
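Since I can't reproduce the slide's example here, the following is a sketch of the two-step pattern described above (create the array type, then use it in a routine); all names are invented, and CARDINALITY is one of the DB2 11 array-oriented built-in functions:

```sql
-- Ordinary array type: elements referenced by ordinal position,
-- here with a maximum of 100 elements:
CREATE TYPE PHONE_NUMBERS AS CHAR(10) ARRAY[100];

-- Associative array type: elements referenced by a
-- user-provided index value (a VARCHAR(30) here):
CREATE TYPE CITY_POP AS INTEGER ARRAY[VARCHAR(30)];

-- The array type can then be used for a native SQL procedure
-- parameter, and elements referenced by position:
CREATE PROCEDURE SAVE_PHONES
  (IN P_EMPNO CHAR(6), IN P_PHONES PHONE_NUMBERS)
LANGUAGE SQL
BEGIN
  DECLARE V_I INTEGER DEFAULT 1;
  WHILE V_I <= CARDINALITY(P_PHONES) DO
    INSERT INTO EMP_PHONE (EMPNO, PHONENO)
      VALUES (P_EMPNO, P_PHONES[V_I]);
    SET V_I = V_I + 1;
  END WHILE;
END
```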

12 Another new and cool DB2 11 feature that is exclusive to native SQL procedures is the autonomous transaction. Here’s what this is about: suppose you want an application transaction to make a data change (e.g., insert a row into a DB2 table) that will persist even if the transaction itself is subsequently rolled back (perhaps due to a SQL error encountered by the transaction)? An autonomous transaction, which is a special form of a native SQL procedure, will do that for you, because it has a unit of work that is completely independent of the unit of work associated with the transaction that called the procedure.

One interesting implication of that independent-unit-of-work thing: a DB2 lock acquired by the transaction that calls a SQL procedure functioning as an autonomous transaction will not automatically be “inherited” by the autonomous transaction, and vice versa; thus, an autonomous transaction could conceivably deadlock with its invoking transaction.
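A minimal sketch of an autonomous transaction, using the DB2 11 AUTONOMOUS option on CREATE PROCEDURE; the table and procedure names are hypothetical:

```sql
-- The AUTONOMOUS option gives this native SQL procedure a unit
-- of work independent of its caller's.
CREATE PROCEDURE LOG_ATTEMPT
  (IN P_USER VARCHAR(8), IN P_NOTE VARCHAR(100))
LANGUAGE SQL
AUTONOMOUS
BEGIN
  -- This insert is committed when the procedure returns, and it
  -- persists even if the calling transaction is later rolled back.
  INSERT INTO AUDIT_LOG (USER_ID, NOTE, LOGGED_AT)
    VALUES (P_USER, P_NOTE, CURRENT TIMESTAMP);
END
```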

13 Next I’ll cover some things that I like about native SQL procedures and UDFs from a performance perspective.

14 There are two main misconceptions that I’ve frequently encountered in the area of native SQL procedures and zIIP offload:

Misconception 1: native SQL procedures are always zIIP-eligible when they execute. NOT TRUE. A native SQL procedure is zIIP-eligible only when it is called by a DRDA requester. Here’s why: a native SQL procedure, unlike an external stored procedure, runs under the task of its caller (an external stored procedure always runs under its own task – a TCB – in a WLM-managed stored procedure address space). When the caller is a DRDA requester, the task in the z/OS system is an enclave SRB in the DB2 DDF address space, and that makes the native SQL procedure zIIP-eligible. If a native SQL procedure is called from (for example) a CICS transaction, it will run under that transaction’s TCB and will therefore not be zIIP-eligible.

Misconception 2: using native SQL procedures in a DDF application environment will always increase zIIP offload. NOT TRUE. zIIP offload will be increased if native SQL procedures replace external stored procedures for a DDF-connected application. If you just take SQL DML statements issued by a client-side program and package them in native SQL procedures, zIIP offload should not be significantly affected because the client-issued SQL DML statements would themselves be zIIP-eligible (because they would execute under the DDF enclave SRB associated with the DRDA requester).

15 I’ve mentioned that a native SQL procedure will execute under the task of its caller, while an external DB2 stored procedure will execute under its own TCB in a WLM-managed stored procedure address space. Besides affecting zIIP eligibility (see preceding slide), this characteristic of native SQL procedures provides another benefit: there is no need to switch the caller’s DB2 thread to the stored procedure’s task, because the task of a native SQL procedure and the task of its caller are one and the same. This benefit is seen as well for SQL UDFs versus external UDFs, and you could really see the associated performance impact in a situation in which a UDF is invoked MANY times in the execution of a single SQL statement. Consider a UDF that appears in the SELECT of a correlated subquery that is driven once for each of the thousands of rows qualified by an outer SELECT. An organization with which I’ve worked had exactly that situation. The query in question was running a lot longer than desired, and monitor data showed a lot of “UDF TCB wait” time. The UDF happened to be external. When it was converted to a SQL UDF, the query ran much faster than before. That can happen when thousands of task-to-task DB2 thread switches are eliminated.

16 DB2 10 provides some performance benefits for SQL PL routines. One is that a SQL PL routine’s package (and recall that this is the sole executable with respect to a native SQL procedure or a compiled SQL scalar UDF) is more CPU-efficient when it is regenerated in a DB2 10 system (more in a moment on the two ways in which a SQL PL routine’s package can be regenerated). That improved CPU efficiency is due to factors like a reduced path length for execution of IF statements (very commonly found in SQL PL routines), and reduced CPU consumption in the execution of SET statements that reference built-in functions such as CHAR (SET statements are used in native SQL procedures to assign values to variables and to output parameters).

If you make a few code changes you can realize still more CPU savings for a native SQL procedure in a DB2 10 NFM (and beyond) system. Starting with DB2 10, one SET statement can be used to assign values to multiple variables and/or parameters of a native SQL procedure. Do that to reduce the number of SET statement executions, and you’ll get some reduction in CPU consumption associated with execution of the SQL procedure.
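A sketch of the multiple-assignment SET (variable and parameter names are illustrative):

```sql
-- Before: three SET statement executions...
SET V_TOTAL = V_TOTAL + P_AMOUNT;
SET V_COUNT = V_COUNT + 1;
SET P_STATUS = 'OK';

-- After (DB2 10 NFM and beyond): one SET statement assigns all
-- three, reducing CPU consumption per execution of the procedure:
SET V_TOTAL = V_TOTAL + P_AMOUNT,
    V_COUNT = V_COUNT + 1,
    P_STATUS = 'OK';
```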

17 A DB2 package is executable code – basically, the compiled form of a program’s SQL statements. In the case of a native SQL procedure or a compiled SQL scalar UDF, the ONLY statements in the program are SQL statements, so the SQL PL routine’s package is the run-time form of the routine. There are two ways to regenerate this run-time form of a native SQL procedure or compiled SQL scalar UDF:
• REBIND PACKAGE will regenerate the part of the SQL PL routine’s package pertaining to non-control statements (e.g., data manipulation statements such as SELECT).
• ALTER PROCEDURE (or ALTER FUNCTION) with the REGENERATE option will regenerate ALL of a SQL PL routine’s package – the control section and the non-control section.

Because IF and SET are control statements, getting the benefit of the improved CPU efficiency with which these statements are executed in a DB2 10 (and beyond) system (see preceding slide) requires an ALTER PROCEDURE (or FUNCTION) with REGENERATE – a rebind of the SQL PL routine’s package will not regenerate the compiled form of control statements in the routine. ALTER PROCEDURE (or FUNCTION) with REGENERATE is also required if you want most of the control section of a SQL PL routine’s package to be stored above the 2 GB bar in the DB2 DBM1 address space when the routine is executed.
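The two regeneration paths look like this in command and statement form (the collection, schema, and routine names are illustrative):

```sql
-- Regenerates only the non-control (SQL DML) section
-- of the SQL PL routine's package:
REBIND PACKAGE (MYCOLL.MYPROC)

-- Regenerates the whole package, control section included --
-- required to pick up the DB2 10 IF/SET efficiency improvements:
ALTER PROCEDURE MYSCHEMA.MYPROC REGENERATE ACTIVE VERSION
```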

18 Given that ALTER PROCEDURE (or FUNCTION) with REGENERATE reworks all sections of a SQL PL routine’s package, you might think, “Hey, I ought to always go with that versus REBIND PACKAGE for a SQL PL routine.” Not so fast, pardner. Think about SQL DML statement access path changes that can occur as a result of an ALTER with REGENERATE or a REBIND. Those changes (especially when you’re talking about taking advantage of optimizer enhancements provided with a new release of DB2) are often positive from a performance perspective, but sometimes they aren’t.

In the case of REBIND, if access path-related performance regression is a concern, you can tell DB2 to reuse current access paths (if possible) for SQL DML statements in a SQL PL routine via the APREUSE option – something you can’t specify for an ALTER with REGENERATE; furthermore, plan management functionality, which enables quick restoration of a previous instance of a package (and its access paths) via REBIND SWITCH, does not apply to ALTER with REGENERATE.

This being so, you probably want to go with REBIND PACKAGE when there is not a desire to rework the control section of a SQL PL routine’s package (e.g., when you want a query in the routine to use a newly created index). If you want to do a REGENERATE, try a REBIND PACKAGE first, and check to see if access paths changed. If they didn’t (or if they did and with positive performance results), do an ALTER with REGENERATE – chances are that access paths will be as they were for the just-done REBIND.
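In command form, the access path safety nets that apply to REBIND (but not to ALTER with REGENERATE) look like this; the collection and package names are illustrative:

```sql
-- Ask DB2 to reuse the current access paths where possible
-- (fail the rebind for statements where it can't):
REBIND PACKAGE (MYCOLL.MYPROC) APREUSE(ERROR)

-- If performance regresses anyway, plan management lets you fall
-- back to the previous instance of the package and its access paths:
REBIND PACKAGE (MYCOLL.MYPROC) SWITCH(PREVIOUS)
```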

19 On this and the next slide, I describe actions that can be taken to improve performance for both native SQL procedures and external stored procedures.

RELEASE(DEALLOCATE) can be a good package bind choice for stored procedures that are frequently executed, especially when these stored procedures have relatively low in-DB2 CPU time. When callers are DRDA requesters, packages bound with RELEASE(DEALLOCATE) cause DBATs (database access threads – the kind associated with DDF transactions) to become high-performance DBATs. Related CPU savings can be 10% or more (referring to the reduction in in-DB2 CPU time that can be so achieved).

RELEASE(DEALLOCATE), combined with persistent threads (i.e., threads that persist through commits, such as high-performance DBATs), can, in a pre-DB2 11 environment, interfere with the successful completion of some bind, rebind, DDL, and utility operations (especially an online REORG executed to materialize a pending DDL operation). DB2 10 provides relief in this area for DDF transactions via the command -MODIFY DDF PKGREL(COMMIT). DB2 11 provides general relief for this contention situation.
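In command form, the options discussed above look like this (collection and package names are illustrative):

```sql
-- Bind a frequently executed stored procedure's package with
-- RELEASE(DEALLOCATE), so DRDA callers can get high-performance DBATs:
REBIND PACKAGE (MYCOLL.MYPROC) RELEASE(DEALLOCATE)

-- DB2 10: temporarily have DDF threads behave as though their
-- packages were bound with RELEASE(COMMIT), so binds, DDL, and
-- utilities can break in:
-MODIFY DDF PKGREL(COMMIT)

-- When the maintenance activity is done, honor bound options again:
-MODIFY DDF PKGREL(BNDOPT)
```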

The DB2 10-introduced RETURN TO CLIENT cursor declaration can save CPU when the result set of a cursor declared in a “nested” stored procedure is to be retrieved by the program that initiated the set of nested procedure calls.
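A sketch of the cursor declaration in a nested stored procedure; cursor, table, and parameter names are illustrative:

```sql
-- The result set bypasses the intermediate procedure(s) and is
-- returned directly to the client program that initiated the
-- chain of nested CALLs:
DECLARE ORDER_CSR CURSOR WITH RETURN TO CLIENT FOR
  SELECT ORDER_NO, ORDER_DATE, AMOUNT
    FROM ORDERS
    WHERE CUST_ID = P_CUST_ID;
OPEN ORDER_CSR;
```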

20 Any program that can issue a SQL statement can call a DB2 for z/OS stored procedure, but the real “sweet spot” for stored procedure utilization is with DDF-connected applications. For one thing, stored procedures can reduce the network “chattiness” of transactions that involve issuance of multiple SQL DML statements. Stored procedures also provide a means of utilizing static SQL for a DDF application (static SQL generally delivers optimal CPU efficiency in a DB2 system), while allowing client-side programmers to use their database access interface of choice; so, client-side programmers can invoke DB2 stored procedures, which issue static SQL statements, by way of JDBC or ODBC calls (JDBC and ODBC calls are processed as dynamic SQL on the DB2 server side of the application).

Packaging “table-touching” SQL in DB2 stored procedures can also benefit performance by loosening the coupling between client-side programs and a data server (versus the situation in which “table-touching” SQL is issued from client-side programs). When data access is accomplished via stored procedures, the DB2 DBA team can make database design changes that are aimed at improving application performance, and only affected stored procedures will have to be modified accordingly – client-side changes will not be required, because programs at that end of the application will continue to call stored procedures as before.

21 Next, information about some stored procedure characteristics that can enhance an application’s scalability and security.

22 Most of what I’ll be talking about in this section of the presentation applies to both external and native SQL procedures. That said, keep in mind two key advantages of native SQL procedures over external stored procedures: 1) native SQL procedures can be written by people who aren’t COBOL or C or Java programmers, and 2) in a DDF application environment, native SQL procedures greatly increase zIIP offload versus external stored procedures.

23 I’ve already mentioned, briefly, the beneficial impact that stored procedures can have for network-attached DB2 applications, by reducing the “chattiness” factor – if a DDF transaction is to drive issuance of, say, 10 SQL DML statements, packaging those 10 SQL DML statements in a stored procedure that is called by the transaction can lower SQL-related network transmissions by an order of magnitude.

24 Have you ever thought of WebSphere MQ as being part of a scalability solution for a DB2-accessing client-server application? Perhaps you should. Think about it – if database changes are always accomplished in a synchronous fashion with respect to end users hitting “submit” on browser-presented screens, your organization will have to provide server-side processing capacity with workload peaks in mind. If, on the other hand, you put a queue between end users and the back-end database, processing peaks can be smoothed out: when a surge of work comes in, the queue depth just increases temporarily. When application traffic subsides, queue depth is worked back down.

The DB2 MQListener, provided with DB2 for z/OS, can be used to facilitate implementation of queue-based application interaction with DB2: data submitted by a user is placed on a queue in the form of a message, and the arrival of the message on the queue causes the MQListener to automatically invoke the DB2 stored procedure associated with the queue (the data in the message is the input to the DB2 stored procedure).

The queue-based approach also improves application resiliency: if the back-end database (or just a particular table therein) becomes temporarily unavailable for some reason, transactions don’t fail. Instead, input messages build up on a queue, and they are processed as soon as the database (or the target table) is again available.

25 Stored procedures typically issue static SQL statements, and static SQL, in addition to being (as previously noted) an application performance booster, can also enhance security for DDF-attached programs (and these static SQL statements packaged in DB2 stored procedures can be dynamically invoked by client programs). If SQL DML statements are issued from client-side programs by way of an interface (e.g., ODBC or JDBC) that drives execution of dynamic SQL statements at the DB2 server, successful execution of these statements will require the DB2 authorization ID of the application process to have requisite table access privileges (i.e., SELECT, INSERT, UPDATE, DELETE). On the other hand, if the “table-touching” SQL statements are packaged in DB2 stored procedures (which can be invoked via calls issued through interfaces such as ODBC and JDBC), the authorization ID of the calling application process will require only the EXECUTE privilege on the stored procedures that are called.

Want to tighten data security further? Grant the DB2 privileges needed by the application to a DB2 role versus an ID, and create a DB2 trusted context that limits the use of the role’s privileges to an application that connects to DB2 using a particular ID and which runs on a particular application server (or servers), identified by IP address.
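A sketch of the role-plus-trusted-context approach; the role, context, procedure, authorization ID, and IP address are all invented for illustration:

```sql
-- Grant the needed privilege to a role, not to an ID:
CREATE ROLE APP_ROLE;
GRANT EXECUTE ON PROCEDURE MYSCHEMA.GET_CUST TO ROLE APP_ROLE;

-- Limit use of the role's privileges to connections that use a
-- particular ID and come from a particular application server:
CREATE TRUSTED CONTEXT APP_CTX
  BASED UPON CONNECTION USING SYSTEM AUTHID APPID1
  ATTRIBUTES (ADDRESS '10.30.20.100')
  DEFAULT ROLE APP_ROLE
  ENABLE;
```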

26 If someone wants to hack into your database, he (or she) will have an easier time of it if he knows the names of tables and columns in the database. When “table-touching” SQL statements are issued from client-side programs, developers of those programs know the names of tables and columns, some of which might contain highly sensitive information. When the “table-touching” SQL statements are packaged in DB2 stored procedures, client-side programmers do not require knowledge of database table and column names. Yes, stored procedure developers will know this information, but it’s likely that the number of people writing DB2 stored procedures will be smaller than the number of people writing client-side programs; thus, use of stored procedures can reduce the number of people who require knowledge of table and column names in the database, and that can reduce the risk of unauthorized data access attempts.

27 Now, some information about native SQL procedure and SQL UDF development and management.

28 IBM Data Studio, a free and downloadable software product, is a great tool for developing, debugging, and deploying SQL PL routines. It’s Eclipse-based with a GUI, and it runs under Windows or Linux. Data Studio’s stored procedure debug feature has a lot of capabilities that facilitate testing a SQL PL routine for errors.

In addition to useful SQL PL routine debugging capabilities, Data Studio can be a much better choice than SPUFI for executing certain SQL statements interactively. In particular, I’ve had much more success in executing queries that access XML data and queries that access LOB data, and issuing stored procedure calls, from Data Studio versus SPUFI. With regard to XML data, Data Studio’s formatting of retrieved XML data is a nice bonus.

29 Lots of organizations use various tools to manage source code for applications developed in-house. These tools are great when the programming language used is something like Java or C# or COBOL. But what if the source code is SQL PL, as it is when you’re talking about DB2 native SQL procedures and SQL UDFs? It seems to me that a number of the popular source code management tools on the market do not yet offer support for SQL PL. What are your options?

Some folks are going with a “roll your own” approach to SQL PL source code management. To help out with such an approach, IBM provided some sample REXX routines via the fix for APAR PM29226. Use of these routines is illustrated via DB2 sample job DSNTEJ67. The routines provide a number of services that can facilitate management of source code for SQL PL routines, including:
• Extraction of the source for a SQL PL routine from the DB2 catalog (the retrieved source can be placed in a file or in a long string – you’d go with the option that best suits your needs).
• Invocation of the SQL PL precompiler to generate a listing for a routine.
• Modification of various elements of a SQL PL routine (schema, version ID, etc.).
• Deployment of a SQL PL routine.

30 Here’s another option for managing SQL PL source code: use the open-source Apache Subversion software versioning and revision control system with Data Studio.

How does that work? Well, as I mentioned previously, Data Studio is an Eclipse-based tool. Subversion (or SVN, for short) integrates with Eclipse. SQL PL routines developed using Data Studio are resources in Eclipse-speak, and these resources can be managed by a plug-in that provides the “Team” component for an Eclipse framework. Subversion is such a plug-in.

31 Before your organization plunges into native SQL procedure development, you should understand how deployment of such procedures differs versus external stored procedures. The deployment difference is mainly due to the fact that a native SQL procedure’s package is its sole executable – there is not the external-to-DB2 load module that you have with an external stored procedure.

To get a native SQL procedure moved from a test system to a production system, you can execute a BIND PACKAGE command for the procedure’s package, with the DEPLOY option specified. What this does: it regenerates the non-control section of the native SQL procedure’s package, including optimization (access path selection) for the SQL DML statements in the routine. This is what you want for those SQL DML statements, because catalog statistics and other factors are probably different in the production environment versus test. The control section of the package can be left alone because the logic of the native SQL procedure is set.
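In command form, the promotion step might look like this; the location, collection, package, and version names are all illustrative:

```sql
-- Deploy the test system's package to production, regenerating
-- the non-control section (access path selection happens against
-- the production catalog) while leaving the control section alone:
BIND PACKAGE (PRODDB2.MYCOLL)
  DEPLOY (MYCOLL.MYPROC)
  COPYVER (V2)
  ACTION (ADD)
```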

ALTER PROCEDURE (or FUNCTION) with the ACTIVATE VERSION clause is also something new when you get into the world of native SQL procedures and UDFs – it allows you to (for example) move a new version of a native SQL procedure (or compiled SQL scalar UDF) into production and have only a few people test it out before making it the active version of the routine on the system (testers can invoke the new routine before it’s active for general use via the CURRENT ROUTINE VERSION special register).

32 So, how do you make changes to a native SQL procedure? One approach is drop and re-create, and I’ve seen that approach used, particularly when an organization is first getting into native SQL procedure development and deployment. When native SQL procedure usage by a company ramps up, and situations involving nested stored procedure calls (in which a native SQL procedure calls another stored procedure) become more common, the drop-and-re-create method of effecting stored procedure changes can become problematic. Why? Because an attempt to drop a stored procedure that is called by a native SQL procedure will fail (the SYSPACKDEP catalog table records these dependencies). To drop that called stored procedure, you’d first have to drop the native SQL procedure that calls it. What if several native SQL procedures call the stored procedure you want to drop and re-create? What if a native SQL procedure that calls the stored procedure that you want to drop and re-create is itself called by a native SQL procedure? You can see how your one planned stored procedure drop and re-create could turn into quite a few drop-and-re-create actions.

The moral of this story: when you need to make a change to an existing stored procedure, use ALTER PROCEDURE to accomplish the change. Go with drop and re-create if ALTER PROCEDURE, for some reason, won’t do what you need done.
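A sketch of the ALTER-based change pattern, using versioning rather than drop and re-create; the procedure name, parameter, and body are all invented for illustration:

```sql
-- Add a new version of an existing native SQL procedure
-- alongside the currently active one:
ALTER PROCEDURE MYSCHEMA.MYPROC
  ADD VERSION V2 (IN P_CUST INTEGER)
  LANGUAGE SQL
BEGIN
  UPDATE CUST
    SET LAST_TOUCH = CURRENT TIMESTAMP
    WHERE CUST_ID = P_CUST;
END;

-- A tester routes his or her calls to the new version via the
-- special register:
SET CURRENT ROUTINE VERSION = 'V2';

-- When testing checks out, make V2 the active version for everyone:
ALTER PROCEDURE MYSCHEMA.MYPROC ACTIVATE VERSION V2;
```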

33 Last section! I’ll close with some thoughts on making the shift to SQL PL as a development language for native SQL procedures and compiled SQL scalar UDFs.

34 As you’ve probably gathered, I’m very big on native SQL procedures; however, even as a big fan of native SQL procedures I’m not going to tell you to take all your external stored procedures (if you have any) and change them en masse to native SQL procedures. For one thing, that change requires a complete rewrite if an existing external stored procedure is written in a language other than SQL PL (like COBOL). Even if you have external SQL procedures that were written in SQL PL, converting those to native SQL procedures can involve more than a mere drop and re-create (I included on the slide the URL of an IBM “technote” that contains very useful information on converting external SQL procedures to native SQL procedures). On top of that, an external stored procedure written in, say, COBOL, might access non-DB2 data (such a stored procedure might access VSAM data, either directly or by way of a CICS transaction invoked by the stored procedure), and it wouldn’t make much sense to try to get a native SQL procedure into that picture.

So, what should you do if you have existing SQL procedures and you want to convert some of those to native SQL procedures? Go first for the “low-hanging fruit:” external procedures that are relatively simple (reduces conversion effort), that access DB2 data, that are frequently executed, and that are called primarily by DRDA requesters. Converting those to native SQL procedures should give you a nice return on your conversion effort, particularly in terms of increased zIIP offload.

35 So, your organization is ready to develop and deploy native SQL procedures and UDFs. Question: who’s going to write those SQL PL routines? Will it be DB2 for z/OS DBAs? They know SQL, but their numbers can be fairly small, even at a site with a big DB2 for z/OS workload. Plus, they are typically pretty busy with managing and administering the DB2 environment. How about application developers? Often you’ll find lots of those folks in an organization’s IT department. Problem there is that these people may have a bit of a learning curve with respect to mastering SQL PL. And there are plenty of CREATE and ALTER PROCEDURE (and FUNCTION) statement options and keywords that application developers could find to be less than straightforward.

What if the task of writing SQL PL routines were to be assigned to people who are not DBAs (in the traditional sense) and not application developers (in the traditional sense), but who have a role that combines aspects of DBA and application development work? On to the next slide…

36 I’ll close with some information about an approach to SQL PL routine development that I encountered at a DB2 for z/OS site. It’s an approach that I like a lot. The organization in question decided to create a new position that they labeled, “procedural DBA.” The idea was to form a new team that would be database-centric in terms of their focus, but with an emphasis on development and management of DB2 routines (stored procedures and user-defined functions). The new positions were advertised internally, and a mix of people applied – some who had worked as traditional DBAs and wanted to be more involved with application development, and some who’d worked as traditional application programmers and wanted to work more closely with the enterprise data server. All of the newly minted “procedural DBAs” that I met during a visit seemed to be pretty excited about their assignment. For them, the new role presented a fresh and challenging (in a good way) area of work.

A team of SQL PL people, whatever you might call them, could work effectively in support of both DB2 for z/OS and DB2 for LUW systems, because SQL PL on the two platforms is virtually identical. That kind of cross-platform DB2 work would deliver yet more value to an organization. Something to think about.

37 Thanks for attending the session!
