Leadership Development Evaluation Handbook

First Draft – August 2005

Evaluation for Planning and Improving Leadership Development Programs

Karl Umble

Karl Umble, Ph.D., M.P.H.

Handbook of Leadership Development Evaluation

Chapter Draft

Chapter 13: Evaluation for Planning and Improving Leadership Development Programs

About twenty years ago, I mentioned to some relatives who are teachers that I was getting very interested in the evaluation of public health education programs. They replied, “Oh, testing.” I actually think they may have felt a bit sorry for me for being so interested in that! I thought to myself, well, testing may be a part of it, but I have something much bigger in mind.

Since then, I have had the wonderful opportunity to evaluate large leadership and management development programs as a part of my work at the North Carolina Institute for Public Health at the University of North Carolina at Chapel Hill. We are the outreach and continuing education arm of the UNC School of Public Health, and offer several leadership and management development programs for public health professionals. These include the National Public Health Leadership Institute, the Management Academy for Public Health, as well as degree programs such as UNC’s distance master’s degree in Public Health Leadership. In this work, I have had the opportunity to hone my “bigger” view of evaluation.

In this chapter, I will focus on how we use evaluation to improve program planning and implementation over time. Evaluation can help programs set clear goals that the people involved agree on, and that the programs and learners can focus on achieving and measuring. We also use evaluation methods to identify promising instructional methods and to learn how to sequence them in beneficial ways. Once a program is underway, we use a number of methods to keep improving it.

To explain the whole picture, I will use the Baldrige Education Criteria for Performance Excellence. This framework (Figure 1) for “Total Quality Management” has been adapted from business for use by schools and colleges. Some readers may know how difficult it can be to implement in full across an entire organization. Lest that discourage you, I will simply suggest that you use the framework to guide your thinking about how to evaluate your programs, and I will give a few examples from our work. We have never used the entire framework, but rather have found it a useful guide to things to keep in mind. I will also stress that these evaluation activities should be designed to meet the ongoing information needs of specific groups of people concerned about the program, or stakeholders.

Insert Figure 1 about here – The Baldrige Education Criteria for Performance Excellence

Establishing an Overall Program Profile and Approach to Leadership: The Role of Evaluation

While Baldrige recommends formulating an overall organizational profile, we have learned that it is very helpful to develop a program profile. This includes a specific definition of your target audience, and a well-defined program purpose, vision, and mission statement. It also means defining relationships and communication mechanisms with partners and stakeholder groups whose interests must be taken into account in all planning and ongoing evaluation, and defining your overall approach to evaluation utilization and performance improvement.

The Baldrige system also advocates that a program establish an approach to program leadership that communicates and negotiates effectively with and between partners and stakeholders during all planning, evaluation, and continuous improvement activities; that fosters individual and program learning; that defines and regularly reviews data on target audience coverage, processes, outcomes, and financial measures; and that insists on and supports continuous program improvement and learning. Without such leadership-level activities, it is very difficult to conduct process and outcome evaluation activities that meet the information needs of the partners and stakeholders and that lead to informed and ongoing support for the program. In short, evaluation activities should be driven and shaped by such leadership.

We have learned that it is very important to define your stakeholders and to keep them involved through advisory committees of various kinds. This helps us continuously tap their concerns and interests about the program and enables stakeholder-based program management and evaluation. For example, our National Public Health Leadership Institute (National PHLI) is run by a partnership of the North Carolina Institute for Public Health (NCIPH, which is part of the UNC School of Public Health), UNC’s Kenan-Flagler Business School, and the Center for Creative Leadership (CCL, Greensboro, NC). The program is funded by the Centers for Disease Control and Prevention in Atlanta, Georgia. We have a task force that includes members of each of these organizations and that meets at least monthly, and more often when needed, to discuss in detail all aspects of the program: the curriculum, instructional strategies, the target audience selection process, and evaluation results and their meaning. This group tries to forge consensus about needed program improvements and thereby guide the program staff in the NCIPH.

The National PHLI also has a National Advisory Committee that has members from each of the partners above, and also from important national stakeholder groups such as the National Association of County and City Health Officials, the Association of State and Territorial Health Officials, and the Health Resources and Services Administration, a federal agency. This group meets annually to review evaluation findings and target audience recruitment figures, and to weigh in on matters of concern raised by the partners about any aspect of the program.

Paying close attention to these groups and their concerns is vital to our success as a program. By listening carefully to the concerns, interests, and ideas of our partner groups, we learn how to improve our program and keep it in line with their interests. For example, at one recent National Advisory Committee meeting, CDC officials explained that we were not reaching the target audience of senior state and local health officials as well as they would like us to. In response to their concern, we first asked CDC to help us more carefully define that audience, which meant identifying target proportions of each class that should be senior officials from various levels.

Then we conducted market research and found that most senior officials thought the program should last one year rather than two, that we should allow individuals as well as teams to apply, and that we should do face-to-face promotion with their peers at conferences and by telephone rather than simply relying on more passive strategies like flyers. We implemented all of these strategies and have significantly improved target audience recruitment. We also modified our admissions criteria to emphasize these audiences. Each year we examine our incoming class according to the defined targets and report our progress to the CDC and our National Advisory Committee. By paying close attention to this input, we improved our program by keeping our target audience in line with the interests of our sponsor.
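To make this kind of annual check concrete, here is a minimal sketch in Python of comparing an incoming class against defined target proportions. It is an illustration only; the audience segments, target shares, and counts are hypothetical, not the actual categories or figures negotiated with the CDC.

    # Hypothetical illustration: compare an incoming class against the
    # target proportions defined for each audience segment.
    from collections import Counter

    # Assumed target shares of each class (illustrative only).
    targets = {
        "senior state official": 0.30,
        "senior local official": 0.40,
        "other public health leader": 0.30,
    }

    # One entry per admitted scholar, tagged with a segment (invented data).
    incoming_class = (
        ["senior local official"] * 18
        + ["senior state official"] * 9
        + ["other public health leader"] * 13
    )

    counts = Counter(incoming_class)
    total = len(incoming_class)

    print(f"{'Segment':<30}{'Actual':>8}{'Target':>8}{'Gap':>8}")
    for segment, target_share in targets.items():
        actual_share = counts[segment] / total
        gap = actual_share - target_share
        print(f"{segment:<30}{actual_share:>8.0%}{target_share:>8.0%}{gap:>+8.0%}")

A small table of this kind, produced as each class is admitted, is the sort of year-by-year evidence described above.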

Of course, refocusing the target audience three years into our program had consequences for curriculum and instructional strategies, which we continue to adjust to meet the needs of our audience. This example points out that the mission, vision, and target audience for a program should be established early on, but that these aspects are subject to continuous “re-negotiation” among the stakeholders during the lifespan of a program. By having representatives of important national groups on our National Advisory Committee, we foster a general give-and-take atmosphere among the stakeholders and yet seek to meet the needs and interests of all of those groups.

How is all of this related to evaluation? In many ways. Evaluation means providing information to help managers and other key stakeholders make decisions about programs. Well-organized leadership development programs will have advisory committees (both detail-oriented and big-picture) that help them set their goals, evaluate their progress in relation to those goals, and keep improving the program from the point of view of these stakeholders. Evaluators can help programs identify stakeholder goals and concerns, report on their progress in relation to these goals, find ways (such as market research) to improve, and continue this process of constant evolution in relation to the goals. In some cases, the evaluator may need to ask the stakeholders to think carefully about their goals, and to help the program define them operationally, so that success can be measured.

Partly in response to the issues around target audience, National PHLI has developed an annual program Scorecard that reports progress on specific target audiences, learner satisfaction with the teaching and learning, and learner participation in the various portions of the program. This Scorecard is reported annually to the National Advisory Committee, which has appreciated it as a way to track program success from year to year. The CDC and other sponsors helped us develop the Scorecard, which we based on the general concepts of the Balanced Scorecard (ref) and performance-based management (ref).
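As an illustration only, the sketch below shows one simple way a program scorecard could be structured so that a committee can scan this year against last year and against a target. The indicators, values, and targets are invented and are not drawn from the actual National PHLI Scorecard.

    # Hypothetical sketch of an annual program scorecard: each indicator
    # carries this year's value, last year's value, and a target.
    from dataclasses import dataclass

    @dataclass
    class Indicator:
        name: str
        this_year: float
        last_year: float
        target: float

        @property
        def met_target(self) -> bool:
            return self.this_year >= self.target

    # Illustrative indicators only, not the program's real measures or numbers.
    scorecard = [
        Indicator("Senior officials in incoming class (%)", 62, 48, 60),
        Indicator("Satisfaction with on-site sessions (1-5)", 4.4, 4.2, 4.0),
        Indicator("Scholars completing learning projects (%)", 91, 88, 90),
    ]

    print(f"{'Indicator':<46}{'Prior':>7}{'Now':>7}{'Target':>8}  Met?")
    for ind in scorecard:
        print(f"{ind.name:<46}{ind.last_year:>7}{ind.this_year:>7}"
              f"{ind.target:>8}  {'yes' if ind.met_target else 'no'}")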

Finally, at an organizational level, the NCIPH employs two full-time professional program evaluators as part of an institutional commitment to evaluation and to providing high-quality programs. By having these staff members available, we can provide a basic or advanced level of process and outcome evaluation to each of our leadership development programs, supplemented by graduate research assistants who gain valuable experience by working on the programs. We believe that providing some level of evaluation staffing on each program is a way to ensure that the important conversations about program structure and content continually take place and are based on useful data that program managers may not have the time or experience to collect and properly analyze.

Program Planning: How Evaluation Can Help

We have used evaluation methods during program planning in several important ways, including conducting needs assessments, using evaluation and needs assessment research conducted in other settings and reported in the literature, “benchmarking” (observing and modeling ourselves after) quality programs and their evaluation activities, and using logic models and written program goals and objectives.

Often, we have found that others have already conducted quality needs assessments, and that we are well served by studying them. In our Management Academy, which includes leadership topics, our sponsoring agencies had already conducted needs assessments and determined that public health staff need training in managing people, data, and money. We supplemented these data by conducting focus groups with managers to determine in more detail the nature of the financial tasks they performed and the typical skill needs of public employees. For National PHLI, we based our curriculum on trends in the field, such as the need for collaborative leadership, as well as on best practices in leadership development that have been defined by the literature (such as team-based action learning) and by organizations recognized as leaders in the field, such as the Center for Creative Leadership. Our program evaluation staff help conduct these kinds of assessments and determine their implications for program development.

Another important feature of our program development activities is preparing logic models and written statements of program goals and objectives. These activities can help a large and diverse group of sponsors, stakeholders, and staff members come to some degree of consensus, or at least accommodation, about a program’s major goals, objectives, and “theory” of how it will improve learners’ leadership and management capabilities. The logic models also help us plan the evaluation activities to perform at each step of the model. Figure 2 displays the logic model for our Management Academy for Public Health, which teaches entrepreneurial and personal leadership as well as tactical management methods. This logic model has proven useful in evaluation planning as well as in telling the story of our program to sponsors and other interested parties.
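To illustrate how a logic model can drive evaluation planning, here is a minimal sketch that pairs each stage of a hypothetical model with the evaluation activities planned for it. The stage names, elements, and activities are paraphrased examples in the spirit of Figure 2, not a faithful copy of it.

    # Illustrative sketch: a logic model as an ordered mapping from program
    # stage to expected elements and planned evaluation activities.
    # Entries are paraphrased examples, not the official logic model.
    logic_model = {
        "Inputs": {
            "elements": ["participants", "funding and facilities", "faculty", "staff"],
            "evaluation": ["description of program resources", "document review"],
        },
        "Processes": {
            "elements": ["training activities", "individual development plans",
                         "business plan coaching"],
            "evaluation": ["rapid-feedback surveys", "course evaluations"],
        },
        "Short-term outcomes": {
            "elements": ["knowledge, skills, and confidence applied on the job"],
            "evaluation": ["pre/post skills survey", "end-of-program survey"],
        },
        "Long-term outcomes": {
            "elements": ["changes in policies, programs, and community impact"],
            "evaluation": ["follow-up interviews", "supervisor survey"],
        },
    }

    for stage, details in logic_model.items():
        print(f"{stage}:")
        print(f"  expected: {', '.join(details['elements'])}")
        print(f"  evaluate: {', '.join(details['evaluation'])}")

Writing the plan down in this form makes it easy to check that every step of the model has at least one evaluation activity attached to it.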

It is also important to actually use the logic model and written program objectives in developing the program. Too often, logic models and objectives are forgotten even in initial program planning, let alone in planning two and three years down the road. If the program’s objectives are written and have been carefully thought through and negotiated with the program’s advisory groups, then they should be reflected in the actual teaching and learning activities that reach the learners. In addition, as a program is modified over the years, it is the evaluator’s job to make sure that these changes are in accord with the logic model and objectives, or that these are changed in ways that are acceptable to the stakeholders and staff. If this does not occur, programs can drift, evaluators will not know what outcomes to measure or assess, and sponsors and partners will not be able to determine whether the observed outcomes indicate “success” or “failure” or something in between.

The Role of Evaluation in Maintaining Student, Stakeholder, and Market Focus

We have already discussed ways in which we actively use advisory groups to keep our focus on stakeholder concerns. In addition, we have discussed using needs assessments and benchmarking successful approaches from other settings when planning. At the NCIPH, we also pay very close attention to student concerns as we evaluate for continuous program improvement.

One way that we do this is by using evaluation forms to obtain student reactions to every major “course” offered, such as a five-hour simulation and discussion on negotiation, and to each major face-to-face or “on-site” learning session, such as a five-day retreat. These forms ask questions about the overall quality and level of instruction and ask for comments. Some of these forms also ask students to rate their perceived growth in skill or confidence in the skills taught during the course or program. We immediately produce reports for the staff and faculty based on these forms and discuss the results in staff debriefing meetings within two to three weeks after the event. An important evaluator role is also to facilitate the debriefing meetings, or at least to be present, to be sure that staff actually use the data.
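The turnaround on these forms can be fast because the tally itself is simple. The sketch below illustrates the kind of summary an evaluator might bring to a staff debriefing; the field names, ratings, and comments are invented rather than real course data.

    # Hypothetical sketch: summarize one course's evaluation forms into a
    # short report for the staff debriefing. Ratings are on a 1-5 scale.
    from statistics import mean

    # Each form: overall-quality rating, level-of-instruction rating, and
    # an optional free-text comment (all invented for illustration).
    forms = [
        {"quality": 5, "level": 4, "comment": "Negotiation simulation was excellent."},
        {"quality": 4, "level": 3, "comment": ""},
        {"quality": 4, "level": 5, "comment": "Would like more time for debrief."},
        {"quality": 3, "level": 4, "comment": "Pace felt rushed on day two."},
    ]

    quality_scores = [f["quality"] for f in forms]
    level_scores = [f["level"] for f in forms]

    print(f"Responses: {len(forms)}")
    print(f"Overall quality: mean {mean(quality_scores):.2f}, "
          f"range {min(quality_scores)}-{max(quality_scores)}")
    print(f"Level of instruction: mean {mean(level_scores):.2f}")
    print("Comments for the debriefing meeting:")
    for f in forms:
        if f["comment"]:
            print(f"  - {f['comment']}")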

We have discovered that it is also very valuable for the evaluation and program staff members to mingle with students during the on-site sessions, such as by talking in the hallway or going to meals with them. Much can be gleaned about the “pulse” of the group and their responses to the program and instruction through such informal discussions around a meal or on a walk to the classroom. We often decide as a staff that we want to know more about a certain issue, and “fan out” to different dinner tables and ask those questions informally of the learners at our table, such as, “How do you feel about the level of this negotiation course? Is it too difficult, too simple, or being taught at the right level?” We have also learned a great deal about how team-based leadership and management development helps learners by repeatedly asking learners about the benefits of team training. These discussions always give us new insights into important aspects of the learning occurring with any given class, and often suggest questions that we need to ask on our final evaluation to gauge the entire class.

Recently, several hallway conversations with learners led the evaluator to realize that some learners believe one of our programs has too many lecture-discussions and not enough difficult, true-to-life public health case studies. One of our goals for the next year is to improve this situation by working with some of these learners to develop such case studies, as we have previously done with our Management Academy. By drawing cases out of the work experiences of program graduates, we can devise a curriculum that scholars will recognize as challenging and relevant. Often, our written evaluation reports merely amplify and confirm what we already knew, by the day the learners left our on-site session, about what we can improve for the next group. As previously discussed, however, the written forms can be compiled and used to assemble our annual Scorecard to track our overall progress, and several parts of the Scorecard relate directly to student satisfaction and assessments of the program’s value.

Of course, no single program can please all of the learners all of the time. We sometimes have to decide that our concept of leadership development has led us to make decisions that some learners will simply not agree with, and to stand by those decisions. We particularly do this when a small minority of learners press a point that most other learners have not made, such as when a few learners say that a given seminar is not appropriate for this level of leadership while the vast majority value it very highly. Sometimes we have over-reacted to the vocal suggestions of a small number of learners and actually gotten worse results the next year, because we went too far or because most learners liked it the way it was. While some of our programs have gradually stabilized with respect to curriculum and instruction, we have learned that we cannot become complacent with respect to learner input, because learner needs and public health priorities are constantly evolving.

Performance Measurement and Knowledge Management

This domain refers to how programs use key short-term and longer-term learning outcomes and other results to foster continuous improvement. In addition to asking for learner input on programs as described above, it is also very important to use short-term and longer-term program outcomes to constantly improve programs. Sometimes evaluators and staff may tend to focus on curriculum and instruction suggestions from learners to guide improvements, but it is also crucial to examine the results you are getting and what those mean for how the program should be improved.

This is a place where having a clear logic model and written objectives can help. If a program is designed to improve certain areas of understanding, skill, perspective, and confidence, we try to measure those achievements through surveys and interviews. We often examine longer-term improvements in community health services, coalitions, or partnerships by collecting and carefully analyzing written action learning project reports, and through follow-up interviews a year or two later with project teams or team leaders. Of course, it is more expensive to conduct interviews and site visits than it is to collect survey data.

We have used a six-month follow-up survey with all classes of our National PHLI to measure students’ self-reported learning outcomes and to compare these results from year to year, as the questions remain relatively stable. Since self-reported quantitative learning outcomes (e.g., Likert scales and the retrospective pre-test, post-test design) are not particularly informative as to the actual content of what was learned, we also ask several open-ended questions to get more detail. By combining the qualitative and quantitative data, we can get a snapshot of the highlights of learning and leadership change for students. What is critical in this part of the Baldrige criteria is that we must actually compare these results to our objectives and ask, “How are we doing? Is this what we are trying to achieve, and are we satisfied with this level of change?”
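For readers unfamiliar with the retrospective pre-test, post-test design, the following sketch shows the basic arithmetic: at follow-up, respondents rate each skill as they recall it before the program and as it is now, on the same Likert scale, and the analysis compares the two. The skill names and ratings here are invented, not survey results from the program.

    # Hypothetical sketch of retrospective pre-test / post-test analysis:
    # at the six-month follow-up, respondents rate each skill as they recall
    # it "before the program" and as it is "now" on a 1-5 Likert scale.
    from statistics import mean

    # Invented responses: skill -> list of (retrospective_pre, post) ratings.
    responses = {
        "Collaborative leadership": [(2, 4), (3, 4), (3, 5), (2, 3)],
        "Leading change":           [(3, 4), (2, 4), (3, 3), (3, 5)],
    }

    print(f"{'Skill':<28}{'Pre':>6}{'Post':>6}{'Change':>8}")
    for skill, pairs in responses.items():
        pre = mean(p for p, _ in pairs)
        post = mean(q for _, q in pairs)
        print(f"{skill:<28}{pre:>6.2f}{post:>6.2f}{post - pre:>+8.2f}")

The open-ended responses are then read alongside a table like this to see what, concretely, lies behind any numerical change.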

One year, the quality of our Management Academy students’ business plans was not acceptable to the staff or to the program sponsors who were present for some of the final student project presentations. Since this is intended to be a final integrative assignment that reinforces and develops the key skills that we teach, we immediately knew that we had to improve our instruction. In response, we hired coaches experienced in small business development and developed a protocol for how the coaches would be expected to help the teams. We also supplied examples of relatively good projects from the previous year and put together more detailed guidelines about what we expected learners to produce. Business plans dramatically improved in quality, as measured in relation to our stated guidelines and to standard practice in financial management and strategic planning.

We have also carefully examined the learning and programmatic outcomes of the leadership projects produced by our National PHLI scholars. Recently, we found that these projects were producing improved collaborations and services, and we compiled these results for publication. Again, it is crucial that we examine these results in relation to our program mission and objectives and ask, with our stakeholders, “Is this what we want to be achieving?” In so doing, we can together render judgments about these outcomes and decide how the program needs to be strengthened.

The Role of Evaluation in Maintaining Faculty and Staff Focus

The “faculty and staff focus” of the Baldrige framework includes how faculty and staff assessments are conducted and how feedback is provided to faculty and staff to promote continuous improvements. It also includes how staff recommendations are collected and incorporated into program improvements.

Immediately after any on-site program, we gather the staff who were present and hold a post-program debriefing about all aspects of the program. Staff assistants take notes and type them up, and the program director is responsible for implementing any clear decisions taken at that time. Shortly afterwards, we also conduct a more data-based review based on the written evaluation reports supplied by the evaluator, and again the program director is responsible for taking the decisions and implementing them. Sometimes we form small study/action groups at those times to undertake tasks such as reformulating a course, developing a case study, or revising an action learning guide.

We have found that the role of the program leader (whether the director or the manager who supervises the program director) is very important. Our Associate Dean for Executive Education is very focused on continuous improvement and insists that we hold these standard “after-action reviews” after every major on-site session for each of our programs. In a very positive way, strong leadership keeps staff alert to the necessity of constant improvement and to the danger of letting programs stagnate or gradually lose quality.

It is also important that faculty and staff be satisfied with their roles in programs. In one leadership program that we evaluated, a distance master’s degree program, we interviewed each of the program’s faculty members to find out how satisfied they were with the process. Many were teaching in a distance learning environment for the first time and had a number of needs, including adequate lead time to re-design their course for the distance setting, adequate pay for the time needed to do that, and technical assistance for converting overheads into other formats useful in the distance course. They also needed instruction in how to teach creatively using videoconferencing. The evaluator presented this information, which we knew might be somewhat contentious because it had funding implications, to a group of program faculty and staff. After some arguments about the validity of the data and whether the program really needed improvements, the program eventually invested in a technical assistant to help faculty develop materials. In addition, the evaluator recommended giving faculty plenty of lead time so that they could adequately prepare their courses.

Paying close attention to this set of faculty concerns is very clearly related to the instructional quality of the program. If the faculty are dissatisfied and withdraw, a program may not be able to retain quality faculty. If they do not have technical help and therefore lack adequate time to devote to teaching, instructional quality will suffer. While these insights come from a distance leadership program, the principle of paying very close attention to staff and faculty satisfaction applies to any educational program.

The Role of Evaluation in Process Management

Process management is also a key area in the Baldrige framework for quality management. For example, how does the program design, evaluate, and continuously improve its key learning-centered processes, such as seminars, distance learning, action learning, and assessments? How are support processes, such as finance and budgeting, marketing, information and public relations (including Web services), and secretarial support, evaluated and improved over time, in line with the objectives of the program? And how is knowledge about what makes key learning processes work best shared with similar programs in the organization?

For example, in discussing faculty concerns above, we were also discussing the need to study and improve the process of course development in a distance education program. All leadership development programs have key processes such as learner recruitment and marketing, faculty recruitment and development, curriculum and instructional development, distance learning design and delivery, communication, and evaluation. These key processes should (a) run smoothly and effectively and (b) be integrated effectively with the other processes. For example, marketing must accurately reflect the desired target audience and the nature of the curriculum, or learners may be disappointed. Likewise, action learning and coaching processes should work well and be integrated with the on-site curriculum and instruction, and with evaluation methods.

This takes a great deal of critical thinking and staff time. We have used interviews, observation and participant observation, and seminar and program evaluation surveys to continuously evaluate and improve our processes.

The Role of Evaluation in Assessing Program Performance Results

As discussed, we believe that the evaluation perspective should be included from the start of program planning. If this occurs, the program knows what it wants to achieve and is designed specifically to achieve its objectives. Short-term and longer-term outcome evaluations are then used to examine whether the program is reaching its key objectives, and if not, what might be done to improve the effort. We have already discussed how this can be done under the section on Performance Measurement and Knowledge Management. Other chapters in this volume deal more specifically with those measures.

Here we emphasize only that the evaluation should examine the program’s outcomes in relation to its stated objectives and logic model. This will help all stakeholders interpret and understand the results and place them in the larger context of the program’s overall goals. We also emphasize that the program should consider devoting substantial resources to measuring the results that are most important to the program’s sponsor. By being responsive to the sponsor’s concerns, the program can help the sponsor assess the program’s merit and worth.

Summary

The Baldrige Education Criteria for Performance Excellence provides an integrated perspective on how evaluation can be used for improving a program’s overall quality over time. Using the framework can help leadership development staff define program goals and manage the program in relation to them. The framework reminds us of the value of strong leadership, careful planning, student and stakeholder focus, faculty and staff focus, process improvement, and outcome measurement. Further, the framework suggests what is important in each category, and ways that staff can use measurement, analysis, and knowledge management to improve each domain and the overall program. We have found these domains important and useful for developing and implementing successful leadership development programs.

References

To be added shortly

Figure 1: Baldrige Education Criteria for Performance Excellence

Figure 2: Management Academy for Public Health Program and Evaluation Logic Model, North Carolina Institute for Public Health

[Logic model diagram. Internal evaluation (years 1, 2, and 3): logic model description, rapid-feedback surveys, course evaluations, pre-post skills survey, IDP results, end-of-program survey, and interviews. External evaluation (years 1, 2, and 3): interviews, supervisor survey, individual change interviews, observation, and business plan portfolio. The model links inputs (participants, program resources and funding, faculty, planners and staff, SPH/B-School collaboration) and processes (development activities and training, the IDP, and the business planning process) to short-term individual and team development and application of skills and concepts on the job, and to long-term changes in teamwork, policies, programs, procedures, performance, and community impact as the business plan or a spin-off is implemented; barriers and facilitators, incentives, time available, and support for change in participant organizations moderate these outcomes.]
