Post-Baccalaureate Instructional Design Diploma Program
Program Evaluation Plan
Prepared by: Rebecca Hatherley, Project Manager & Senior Evaluator
MDDE617, Athabasca University, July 2009
Overview
Introduction
Online instruction has emerged as an alternative mode of teaching and learning; it is gaining popularity among both traditional higher education institutions and emerging for-profit institutions. The growth of online postsecondary education offers today's learners greater choice, and the market has thus become increasingly competitive. This increased competition is transforming higher education from a "cottage monopoly to competitive industry"
(Munitz, 2000). Under a market model, higher education must become increasingly consumer-driven and operate like a business. Consequently, to attract today's learners and maintain successful programs, it is necessary to integrate comprehensive program evaluation with online programs.
In the educational marketplace, institutions must utilize program evaluation to ensure a balance between quality programs that meet learners' needs and cost-effectiveness. Program evaluation assists institutions in gaining and/or maintaining a needed competitive advantage in a rapidly changing and growing marketplace.
Through a review of current literature, ClearVision Consulting has developed a program evaluation plan (hereinafter referred to as the Plan) to assist Ithika University in assessing its
Instructional Design Diploma Program (hereinafter referred to as the Program). This Plan provides a roadmap for the University Evaluation Committee (UEC) to implement an evaluation of the Program, during and after the last semester of the first offering of the Program, and helps lay a foundation for future Program decision-making.
Background Information
Ithika University is a for-profit private university located in Ontario, Canada. The
University, founded six years ago, offers only fully online distance education graduate programs and serves approximately 1,300 students. This past year, the University implemented its first diploma program—the Instructional Design Diploma Program. The Program is delivered through WebCT, supported fully by the Department of Technology Infrastructure, managed by the Department of Continuing Studies, and delivered by five course instructors. Three instructors are full-time members of the Faculty of Education and were involved in developing the courses; the remaining two instructors are part-time contractual faculty members.
The Program commences at the beginning of each July and consists of four courses, each a semester long. The Program is designed such that the learners complete the
Program in cohorts over twelve months—with a one-week break between each semester. The
Program’s four courses are as follows: IDD 100 Introduction to Instructional Design, IDD 105
Theories of Learning and Instruction, IDD 110 Advanced Instructional Design, and IDD 120
Needs Assessment and Program Evaluation. The Program's tuition is $12,000, paid in four quarterly instalments of $3,000. Currently, the Program is approaching the last semester of its first implementation and has two cohorts of 20 learners each, with learners from both
Canada and the United States.
The target population for this Program is educators/trainers aged 30-45 who do not have specialized training in instructional design and who have a minimum of one undergraduate degree. The Program is based on the needs of these professionals, and its intent is to provide the learners with expertise in the design and development of educational/training courses and materials, which they can apply to their current and/or future careers. The Program's learners are encouraged to work in teams and small groups to diminish the isolation effects of distance learning, and regular participation in biweekly discussions is a requirement in all courses.
Orientation and Purpose of Evaluation
This Plan is formative in nature, as results and lessons learned from the evaluation will be used for the continuous improvement and evolution of the Program; the University plans to offer the Program to new cohorts of learners each July. The purpose of the Plan is to determine the quality of the course design, establish whether the Program meets the target population's needs, and provide information to the University to assist in determining the cost-efficiency/viability of the
Program. Data from the evaluation will allow Ithika University to strengthen Program design before the second implementation of the Program; maintain a record of the Program's progress and costs; assist the Program to reach the intended target population; and determine changes, if warranted, to become more cost-effective.
The Plan uses a management-orientated approach and thus the intent of the Plan is to serve the decision makers, the clients. A rationale for this strategy is that “evaluative information is an essential part of good decision making and that the evaluator can be most effective by serving administrators, policy makers, boards, practitioners, and others who need good evaluative information” (Fitzpatrick, Sanders, & Worthen, 2004, p. 88). Moreover, Stufflebeam, an influential proponent of a decision-orientated evaluation approach to help administrators make good decisions, believes that evaluation is performed in the service of decision-making
(Fitzpatrick et al., 2004). He defines evaluation as “the process of delineating, obtaining, and providing useful information for judging decision alternatives” (p. 89). In this approach, the role of the evaluator is that of an information-gatherer; the decision makers are the real driving force.
In general, there are several models to evaluate online programs; however, it is necessary to choose a model that is suitable for the objectives of the evaluation. The CIPP Model is a best fit for the objectives outlined in the Request for Proposal (RFP) and thus the Plan will follow the
CIPP Model developed by Stufflebeam. The CIPP Model is considered a comprehensive framework for both formative and summative evaluations. The premise of the model is that evaluations should assess and report an entity's merit, worth, and significance, as well as the lessons learned. The CIPP Model's main theme is that evaluation's most important purpose is not to prove, but to improve (Stufflebeam, 2003). This model is chosen for two key reasons: (1) the model places emphasis on guiding, planning, programming, and implementation efforts, and (2) the model emphasizes that the most important purpose for evaluation is improvement
(Stufflebeam, 2003).
CIPP stands for evaluations of context, inputs, processes, and products. Specifically, the
Plan will focus on the process and product evaluation components of the CIPP Model.
Stufflebeam writes that “process evaluations assess the implementation of plans to help staff carry out activities and later help the broad group of users judge program performance and interpret outcomes” and that “product evaluations identify and assess outcomes—intended and unintended, short term and long term—both to help staff keep an enterprise focused on achieving important outcomes and ultimately to help the broader group of users gauge the effort’s success in meeting targeted needs” (2003).
A weakness frequently noted with this approach is that it can be unfair and possibly even undemocratic, biased towards top-management, rather than balancing the interests of management with those of other internal and external stakeholders (Fitzpatrick et al., 2004).
However, Stufflebeam advocates that the CIPP Model is strongly oriented to involving and serving stakeholders, as evaluators are charged to keep stakeholders informed and provide them appropriate opportunities to contribute (Stufflebeam, 2003). Stufflebeam also notes that involving all stakeholder groups is seen as wise, because sustained consequential involvement positions stakeholders to contribute information and valuable insights and inclines them to study, accept, value, and act upon evaluation findings (2003). Thus, management must be vigilant to search out all relevant stakeholder groups and engage at least their representatives such that they are not disregarded.
It is practical and far-sighted of the University to request an evaluation plan so that it can conduct an internal evaluation. Involving external evaluators in product evaluations to ensure credibility for accountability purposes is expensive and not as efficient as an internal process could be for evaluating contexts, inputs, and processes (Stufflebeam, 1971).
Stufflebeam notes that the CIPP Model is configured for use in internal evaluations conducted by an organization’s evaluators and self-evaluations conducted by project teams (2003). In addition, he notes that the CIPP Model advises evaluators to use contracted evaluation to encourage and assist evaluation clients to learn evaluation concepts and methods and install or strengthen institutional capacity to conduct and use evaluations (2003). Furthermore, he emphasizes that institutions need the capacity to conduct many of their own evaluations and external evaluators should help develop such capacity (2003).
Clients & Primary Audiences
In order to result in a useful and effective evaluation, stakeholders need to be identified so that a formal agreement can be reached on the goals and expectations of the evaluation process (Stufflebeam, 2002). There are several key stakeholders in this evaluation. Ithika
University is the client and the University Evaluation Committee (UEC) is a primary audience for this program evaluation.
It is necessary to interview the Program's primary stakeholders to confirm and/or identify their vision of the Program goals and evaluation needs. Additionally, to ensure that the evaluation is not biased by the needs and interests of only a few stakeholders, it is necessary to identify key stakeholders and their expectations, gather their multiple perspectives about relevant issues, and involve as many stakeholders as possible (Fitzpatrick et al., 2004). Each type of stakeholder represents a unique perspective based on the roles and responsibilities they have pertaining to the organization. If these perspectives are not considered, important stakeholder questions and/or issues may be overlooked and excluded from the evaluation or its results.
The Program’s primary stakeholders are the Program’s director (or similar position),
University Evaluation Committee (UEC), Continuing Studies staff, and the five Program instructors, as all of these stakeholders are responsible for the evaluation; however, there may be other primary stakeholders that need to be identified. Secondary stakeholders include people such as the department head, the director of graduate programs (if such positions exist), and the learners, since they have an interest in the evaluation results but do not share responsibility for the Program evaluation; again, other secondary stakeholders may exist and need to be identified.
Additionally, in order to confirm that the evaluation is proceeding in the proper direction and that appropriate evaluation questions are asked, the UEC and evaluator should keep stakeholders informed and invite their input as the evaluation proceeds. For example, various members of the
UEC should review and pilot the questions prior to their distribution to the intended audiences.
Evaluation Resources
This Plan expects Ithika University to allocate, temporarily, one full-time employee assigned as the Evaluation Coordinator. Stufflebeam adamantly states that the process evaluator is the linchpin of a sound process evaluation, and that when staff fail to obtain guidance for implementing and documenting their activities, it is usually because no one was assigned to do this work (Kellaghan & Stufflebeam, 2002). Employees can routinely carry out some review and documentation through their activities; however, these do not fulfill the requirements of a sound process evaluation. Often institutions mistakenly assume that employees will adequately evaluate process as a normal part of their assignments. Stufflebeam notes that staff can usually meet these requirements well only by assigning an evaluator to provide ongoing review, feedback, and documentation (Kellaghan & Stufflebeam, 2002).
Additional significant resources needed to make the evaluation feasible are a sufficient budget, trained staff and/or training for staff, and standard and adequate office resources
(computer, office space, and office supplies...). It is also necessary to have access to the
University’s existing documents and records, Program learners, and instructors.
Focus of Evaluation
The managers and administrators at Ithika University articulate that the Plan is to evaluate three major areas of the Program: (1) its integrity as a quality program, (2) its ability to serve learners' needs, and (3) its viability, in terms of cost-effectiveness.
It is important to note that the Program is new and has not yet completed one full year.
Evaluation Issues and Key Questions
The CIPP Model calls for the evaluator and all key stakeholders to identify and clarify key questions that will guide the evaluation. The first phase, the divergent phase, is to compile a list of potential evaluation questions, or evaluation issues, involving all of the Program's key stakeholders.
Including all key stakeholders in this phase helps reduce their anxiety about the evaluation, improves their understanding of its purpose and intent, ensures that at least some of the evaluation questions address their concerns, and adds to the validity of the evaluation because stakeholders are program experts (Fitzpatrick et al., 2004). When stakeholder groups differ in power, their input must be obtained carefully, for the voices of less powerful stakeholders can be overshadowed (Fitzpatrick et al., 2004).
It is not feasible to address all identified questions in one study; therefore, the evaluation questions must be limited to a manageable number—thus entering the convergent phase.
Reducing the number of key questions to those that are most important and feasible will focus the entire evaluation study. In the convergent phase, only the most critical questions are selected—again the evaluator, client, and stakeholder representatives work together—because a greater number of key questions requires a larger budget, makes the evaluation more complicated and thus more difficult to manage, and strains the limited attention span of the audience (Fitzpatrick et al., 2004). Working together will create a shared ownership that enhances the probability that evaluation findings will be used and will initiate the dialogue that is an important part of the evaluation. Fitzpatrick et al. warn that rushing through this phase too quickly is one of the worst mistakes an evaluator can make, as unresolved conflicts about the focus of the evaluation will not go away and can ruin an otherwise well-planned evaluation (2004).
From these critical questions, the evaluator, again working with stakeholders, specifies standards for each question and, if needed, criteria to judge the Program. The evaluator should ensure that the stakeholders agree with the criteria. Fitzpatrick et al. acknowledge that agreeing on the standards and criteria prior to obtaining results can be very useful in helping groups to be clear, realistic, and concrete concerning what expectations are acceptable for program success
(2004).
While the initial design decisions are needed to initiate the evaluation, they should not be considered fixed; instead, the CIPP Model treats the design as a process of continually identifying and employing appropriate means to address emergent as well as predictable and relatively fixed information (Stufflebeam, 2003). Cronbach cautions that, because changes cannot be foreseen, the choice of questions and procedures should be tentative, and that budgetary plans should not commit everything to the initial plan; rather, a considerable amount of time and money should be held in reserve (as cited in
Fitzpatrick et al., 2004). Thus, the sets of key questions, standards, and criteria offered in this
Plan are only preliminary.
Key Questions. In answering the key questions, the evaluation provides evidence of the present situation and encourages management to consider how the Program could be fine-tuned to lead to improvement (Owen & Rogers, 1999).
1. To what extent does the Program serve the learners' needs?
2. To what extent do the learners express satisfaction with the Program, in terms of its application to their professional lives?
3. To what extent is the Program a quality online program?
   a. Are the Program and courses well structured and designed?
   b. To what extent are Program outputs consistent with Program objectives?
   c. Is there sufficient administrative and logistical Program support?
4. In what way is the Program cost-effective, in terms of Program inputs?
Standards. Refer to Appendix A to see the full list of standards and criteria.
Standard 1: The Program is aligned with learners' needs.
Standard 2: Learners express satisfaction with the Program in terms of its application to
their professional lives.
Standard 3: The Program demonstrates evidence of instructional design.
Standard 4: Program goals and objectives are achieved.
Standard 5: The Program is supported administratively and logistically.
Standard 6: The Program is viable, in terms of cost-effectiveness.
Data Management.
The CIPP Model requires engagement of multiple perspectives, use of a wide range of qualitative and quantitative methods, and triangulation procedures (acquiring data from a number of different sources and methods) to assess and interpret a multiplicity of information
(Stufflebeam, 2003). Consequently, the evaluator has to be resourceful in collecting and compiling information that will tell consistent and complete stories for all stakeholders’ perspectives. However, the evaluator should not redundantly gather new information if acceptable and sufficient information from other sources is readily available. When starting an evaluation, the evaluator should acquire and study existing pertinent information and use this information as a guide to decide what new information to gather.
ClearVision recommends launching the Program evaluation immediately, as the break between Term 3 and Term 4 of the Program is about to begin, and completing the evaluation one week after the end of Term 4. The rationale for this urgency is to provide data such that management can identify any weaknesses in the Program and act decisively and promptly to make essential changes and improvements before the second offering of the
Program.
The evaluation is to commence by clarifying the purpose of the Program evaluation with the clients, identifying the critical key questions with the stakeholders (as outlined in Evaluation
Issues and Key Questions), researching contextual information, and analyzing Program documents and records.
Contextual Information. Information sourced from management, instructors, and any
Program reviews provides a broad understanding of the past and present situation of the Program and the broader Ithika University environment. The information obtained includes profiles of the online instructors: their experience, their professional development relating to the various technologies used, and their teaching areas.
Documents and Records. Fitzpatrick et al. recommend considering existing information because it is more cost-effective; the information is nonreactive, or not changed by the act of collecting or analyzing it; and too much of the information currently collected is not used sufficiently
(2004). The evaluator must determine which documents and/or records will be valid for the current evaluation and which can be analyzed cost-efficiently. Content analysis of meeting minutes, email correspondence between instructors or between learners and instructors, Program course material, or lesson plans are just some ideas that can help identify and clarify values in an objective way no other source can match (Fitzpatrick et al., 2004).
Written Survey (Cost-Analysis). Educators, trainers, and business people need to be able to evaluate the cost-effectiveness of Web-based training in order to make informed decisions about the extent to which this new medium should be used in their organizations (Whalen &
Wright, 1999). Information about the costs (fixed costs and variable costs) associated with the
Program is to be gathered from management to assist in determining the breakeven point and the return on investment for the Program. This process is followed up with synchronous discussions with management whenever deemed necessary.
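For reference, a minimal sketch of the arithmetic behind these two measures, using standard break-even and return-on-investment formulas; the symbols for fixed and per-learner variable costs are illustrative placeholders, not figures supplied by the University:

\[
N_{\text{break-even}} = \frac{F}{T - v},
\qquad
\text{ROI} = \frac{\text{total benefit} - \text{total cost}}{\text{total cost}} \times 100\%
\]

Here F is the Program's fixed cost per offering, v is the variable cost per learner, and T is the tuition per learner ($12,000). As a rough check, if all 40 currently enrolled learners pay full tuition, first-offering revenue is 40 × $12,000 = $480,000, and the Program breaks even when that revenue equals its total cost.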
Learner Survey. In the past, "schools have been licensed to tell students what they should learn, and students have not been licensed to decline the guidance. In no market economy does the vendor tell the customer what the customer will buy. Rather, the customer tells the vendor what he or she wants, and the vendor either provides it or goes out of business" (Munitz, 2000).
It is vital that for-profit tertiary institutions capitalize on this shift from "producer-driven" education to a consumer-driven model to compete with other educational institutions. The learner survey is one tool to assist in discovering learners' needs and finding ways to meet them.
The learner survey should ascertain what effects the Program has had on the learners; how the Program produces its effects and the factors influencing its effectiveness are of utmost importance. (Refer to Appendix B for preliminary learner survey questions.) "When evaluation is carried out in the service of course improvement, the chief aim is to ascertain what effects the course has—that is, what changes it produces in pupils" (Cronbach, 1975, p. 246). He goes on to say that "the greatest service evaluation can perform is to identify aspects of the course where revision is desirable" (p. 247).
All of the learners in the two cohorts are to be surveyed online to help frame evaluation questions and to provide context for the recommendations of the evaluation, informing management of likely learners' reactions (Fitzpatrick et al., 2004). (Online surveys using Web-based software, such as SurveyMonkey, can simplify collection and reduce the time involved in analyzing the data.) A group of graduate learners who have previously enrolled in online courses but are not members of the sample should pilot test the finalized list of questions that the evaluator and stakeholders develop.
The evaluator should look at each item in terms of its quality rating, but also in terms of its significance to the program. For example, items that rate low in quality and low in importance are less critical than items that rate low in quality and high in importance.
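As one possible illustration of this prioritization (the item labels and ratings below are hypothetical, not Program data), survey items could be sorted so that low-quality, high-importance items surface first for management attention:

# Hypothetical survey items with mean quality and importance ratings on a 1-5 scale.
items = {
    "Feedback on assignments is timely": {"quality": 2.1, "importance": 4.8},
    "Sufficient library resources are available": {"quality": 2.2, "importance": 2.4},
    "Courses are easy to navigate in WebCT": {"quality": 4.4, "importance": 4.1},
}

# Rank items: highest importance first, then lowest quality first, so that
# low-quality/high-importance items appear at the top of the list.
ranked = sorted(items.items(),
                key=lambda kv: (-kv[1]["importance"], kv[1]["quality"]))

for name, rating in ranked:
    print(f"{name}: quality={rating['quality']}, importance={rating['importance']}")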
Evaluations are iterative in design; evaluators use different methodologies, from different paradigms, in conjunction with one another, with the results of each informing the next stage of data collection and interpretation. Thus, the learner surveys, management's survey, and assessment of documents, records, and contextual data are all conducted to provide information for constructing questions to explore further with an instructor focus group and instructor interviews and, if needed, further probing of specific learners and/or learner populations for greater understanding of survey results.
Focus Group. A focus group to discuss key issues will include all five Program instructors, as the number of staff is small. To support discussion, the focus group is to be face-to-face if the instructors are in the same location; if not, the focus group is held online, using the asynchronous medium Ning1 in conjunction with synchronous Skype video conference calls. Fitzpatrick et al. caution that a skilled focus group facilitator is needed to elicit responses from all members and negotiate sensitive topics (2004). (The focus group is to be recorded and transcribed.)
The evaluator is to conduct a content analysis on the information gathered from the focus group, which will summarize trends. The transcripts from the focus group are to be coded to develop initial categories, and the themes that emerge should be cross-referenced with the results from the learner survey to determine areas of particular importance to faculty and management.
Subsequently, from the themes emerging from the focus group and learner survey, interview questions are to be developed for management, Program instructors, and Continuing Studies staff.
As always, results from the gathered data are to be summarized and distributed back to the stakeholders, to perform continual member checks, for comment and feedback on the findings, the interpretations, and the accuracy of the recorded data. These member checks will decrease the chance of misinterpretation of information. Open discussion forums on the Ning will be used to include and engage stakeholders in the process.
Interviews. Interview questions for instructors, management, and, if deemed necessary, learners emerge from the previous data collection methods. It is essential that all interview questions are reviewed and tested by a group of faculty, learners, or managers—whichever is relevant—who are not to be interviewed, to ensure the questions are valid. The number of questions must be kept to a minimum so as to focus on known quality issues and to respect interviewees' time.
1 A Ning is an online platform where people create their own social networks, which can be password protected. This is where the stakeholders can negotiate divergent perspectives and determine key issues asynchronously.
Dissemination Methods
Information gleaned from the evaluation is not likely to be used effectively unless it has been communicated effectively; unfortunately, reporting is too often the step to which many evaluators give the least thought (Fitzpatrick et al., 2004). Additionally, different stakeholder groups sometimes require different mediums and methods for disseminating findings.
Thoughtful evaluators contemplate at the beginning of the evaluation how evaluation reports may be used and consider ways to ensure that they are useful (Fitzpatrick et al., 2004). Therefore, careful consideration of dissemination mediums and methods is vital.
As noted earlier, informal member checks are to be implemented throughout the evaluation; some stakeholder groups may prefer the majority of the member checks to occur online within the safety of the Ning, with additional infrequent face-to-face meetings. Discussions with the different stakeholder groups will elicit their preferences and ensure that the most productive mediums and means are employed.
Once the mediums and methods of reporting and dissemination have been determined for the different stakeholder groups, the evaluator must write a reporting and dissemination plan.
(Refer to the tables in Appendix D for required formal reports and timings.) This plan should include mediums used to disseminate the evaluation findings, persons responsible for disseminating the findings, how the findings will be used, and who will act on the findings.
Ethical Considerations
Note that the evaluator needs to gain all evaluation participants' consent to participate in the evaluation; participants have the right to participate or not, and the evaluator must have them review and sign an informed consent form. The participants should understand how any information associated with them will be reported. The evaluator is to clearly convey the terms of confidentiality regarding access to evaluation results.
Issues of confidentiality and anonymity of data are a major concern with such a small sample of instructors delivering the online courses. Any findings that could be deemed negative in a teaching area may be associated with a particular instructor or teaching team.
Therefore, great sensitivity is needed during data collection, analysis, and summarizing to avoid results that appear to criticize a particular instructor or group of instructors.
Budget
Evaluation staff salary and benefits
  Evaluation Coordinator (¼ × $64,000 annual salary)                $16,000.00
  Support (training, books, subscriptions, ...)                      $1,000.00
Consultants
  ClearVision (10 hours @ $0 per hour)                                   $0.00
  ClearVision's Evaluation Plan                                      $10,000.00
Travel and per diem                                                     $100.00
Communications (postage, telephone calls, online fees, etc.)          $1,000.00
Printing and duplication                                                $700.00
Supplies and equipment                                                $1,000.00
Total direct costs                                                   $28,000.00
Indirect costs (facilities, utilities, ...) (2% of direct costs)        $560.00
Participation honoraria ($50.00 each, 40 learners)                    $2,000.00
Budget total                                                         $30,560.00
Timeline
See Appendix C for program calendar and Appendix D for design elements and timeline.
Other Considerations
Known constraints, such as periods of high instructor commitment during end-of-semester procedures, need to be taken into consideration when determining the timing of the evaluation, as does the lead time required to implement recommendations arising from the evaluation so that they can be incorporated into the next offering of the Program. In addition, it is essential to include data from learners in both Canada and the United States.
ClearVision Consultant Qualifications2
ClearVision is a team of instructional designers, former educators, program developers, and program evaluators, each with a minimum of 16 years' experience. This team is a leader in its field and has been working together since 1995 conducting government, educational, non-profit, and for-profit program evaluations. Each client receives the commitment, professionalism, and experience of a team which has successfully conducted 147 evaluations, not only in Canada, but in 11 other countries. An underlying factor for success at ClearVision has been our commitment to developing relationships and offering the opportunity of a voice for all stakeholders.
2 The consultant firm ClearVision and its consultants have been fabricated for the purpose of this assignment.
Serving as the project manager and senior evaluator, Rebecca J. Hatherley's educational background includes a Ph.D. in Distance Education, a Master's in Program Evaluation and Instructional Design, and a Bachelor of Education. She is committed to and actively involved in research and development in program evaluation. Refer to our website to view Dr. Hatherley's full resume: www.ClearVision.com/consultants-resumes3
Our clients are the proof of our success—visit our website to read about what our non-profit and for-profit clients have said about us: www.ClearVision.com/nonprofitandforprofit-client-quotes4.
ClearVision Additional Support
ClearVision stands behind our evaluation plans. We demonstrate our belief that external evaluators should facilitate institutions' capacity to conduct their own evaluations by including in this Plan 10 complimentary hours of developer support time, available for approximately 60 days after the launch of the evaluation—often these hours are used to review interview or survey questions.
If at any time during the evaluation it is determined that further external support is needed, please contact us, and together we can discuss how we may offer assistance. If desired, we can discuss a combination of external and internal (or quasi-external5) evaluation, where internal staff conduct a segment of the evaluation and our company assists with the remaining areas. This combination can provide an external viewpoint without losing the benefits of the valued internal evaluator's first-hand knowledge of the project.
3 As the consulting firm ClearVision is a fabrication, its website and resumes are fictitious as well.
4 As the consulting firm ClearVision is a fabrication, its website and clients' references are fictitious as well.
5 By quasi-external, we mean utilizing an internal evaluator who is far removed, so as to maximize externality and give maximum independence.
Recommended Readings
1. Whalen, T., & Wright, D. (1999). Methodology for cost-benefit analysis of web-based tele-learning: Case study of the Bell Online Institute. American Journal of Distance Education, 13(1), 22-44.
2. Achtemeier, S. D., Morris, L. V., & Finnegan, C. L. (2003). Considerations for developing evaluations of online courses. Journal of Asynchronous Learning Networks, 7(1). Available at http://www.aln.org/publications/jaln/v7n1/pdf/v7n1_achtemeier.pdf
3. Joint Committee on Standards for Educational Evaluation. (1994). The program evaluation standards: How to assess evaluations of educational programs (pp. 23-24, 63, 81-82, 125-126). Thousand Oaks, CA: Sage.
4. The Western Michigan University Evaluation Center website has an abundance of program evaluation support material: www.wmich.edu/evalctr/checklists/
References
Cronbach, L. J. (1975). Beyond the two disciplines of scientific psychology. American Psychologist, 30, 116-127.
Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines (pp. 3-52). White Plains, NY: Longman.
Kellaghan, T., & Stufflebeam, D. L. (Eds.). (2002). International handbook of educational evaluation. Boston: Kluwer Academic Publishers. Retrieved July 10, 2009, from http://books.google.co.id/books?id=ER_AsKgWsqgC&pg=PA453&dq=International+handbook+of+educational+evaluation+2002&ei=ZlJpSrrIJ4julQT955XNAg&hl=en
Munitz, B. (2000, January-February). Changing landscape: From cottage monopoly to competitive industry. Educause Review, 35, 12-18.
Owen, J. M., & Rogers, P. J. (1999). Program evaluation: Forms and approaches. Thousand Oaks, CA: Sage.
Stufflebeam, D. L. (1971). The relevance of the CIPP evaluation model for educational accountability. Journal of Research and Development in Education, 5(1), 19-25.
Stufflebeam, D. L. (2002). CIPP evaluation model checklist. Retrieved July 10, 2009, from Western Michigan University web site: http://www.wmich.edu/evalctr/checklists/cippchecklist.htm
Stufflebeam, D. L. (2003). The CIPP model for evaluation. Retrieved July 10, 2009, from Western Michigan University web site: http://www.wmich.edu/evalctr/checklists/cippchecklist_mar07.pdf
Appendix A
Standards and Criteria
Standard 1: The program is aligned with learners' needs.
Criteria 1: The curriculum and courses are developed in accordance with needs
assessment and learner analysis data.
Criteria 2: The curriculum and courses are developed in accordance with principles that
reflect good practice in teaching and learning.
Criteria 3: Learners express satisfaction with the curriculum in terms of content.
Criteria 4: Participants believe the knowledge/skills gained to have utility for them in
their present/future careers.
Criteria 5: Learners express satisfaction with the program in terms of its organization and
scheduling.
Criteria 6: The nature of part-time, graduate study is considered in designing the program
scope and sequence.
Criteria 7: Learners' professional experience and knowledge are considered in developing
program activities.
Standard 2: Learners express satisfaction with the program in terms of its application to their
professional lives.
Criteria 1: Learners are satisfied with their learning experiences throughout the Program.
Criteria 2: Course evaluation forms indicate a 70% or higher approval rating for all
courses. (70% was arbitrarily chosen.)
Criteria 3: Learners attest to the knowledge transfer of their Program content to their
professional environments.
Criteria 4: Learners indicate that program completion has enhanced their opportunities
for current or future careers.
Standard 3: The program demonstrates evidence of instructional design.
Criteria 1: Materials/assignments are aligned with Program goals and objectives.
Criteria 2: Learning experiences are aligned with goals and objectives.
Criteria 3: Materials/learning experiences are based in appropriate learning strategies.
Criteria 4: Courses are designed to require students to engage themselves in analysis,
synthesis, and evaluation as part of their course and program requirements.
Criteria 5: Program activities are of appropriate length and depth.
Criteria 6: The technology delivery system is as reliable and failsafe as possible.
Criteria 7: Program and course materials are professional in appearance.
Criteria 8: Program website and online courses within WebCT are attractive, effective,
and easy to navigate.
Standard 4: Program goals and objectives are achieved.
Criteria 1: Goals and objectives are clearly stated and available to learners.
Criteria 2: Goals and objectives are worth achieving.
Criteria 3: Goals and objectives are achievable within the Institute framework.
Criteria 4: There is evidence of program planning, including analysis of context, learners,
learning tasks, and learning objectives.
Criteria 5: The curriculum is well organized, and content areas within the program are
linked.
Criteria 6: Instructors follow course development plans in implementing courses.
Criteria 7: Evaluation measures are appropriate for a graduate-level program, as judged by independent instructors.
Standard 5: The program is supported administratively and logistically.
Criteria 1: Program and course registration procedures are well organized.
Criteria 2: Course materials are received on time.
Criteria 3: Online course access is on time and trouble-free.
Criteria 4: Learners express satisfaction with the course platform—WebCT.
Criteria 5: Instructors are provided with training in the use of the various technologies employed—WebCT and any other technologies that may be used.
Criteria 6: Instructors' use of various technologies is optimal.
Criteria 7: Instructor/learner and learner/learner communication is optimal.
Criteria 8:
Criteria 9: Feedback to student assignments and questions is constructive.
Criteria 10: Technical support is available within a 24-hour period, including weekends.
Criteria 11: Faculty agree upon expectations regarding times for student assignment
completion and faculty response.
Criteria 12: Students have access to sufficient library resources that may include a
“virtual library” accessible through the World Wide Web.
Standard 6: The Program is viable, in terms of cost-effectiveness.
Criteria 1: All cost-saving measures deemed important have been implemented to make the Program more cost-efficient.
Criteria 2: A cost analysis of the Program indicates that the return on investment is greater than 100%. (The percentage might need to be higher.)
Appendix B
Preliminary Learner Survey Questions A6
1. My interaction with my instructors is facilitated through a variety of ways.
2. My interaction with other students is facilitated through a variety of ways.
3. Feedback about my assignments and questions is provided in a timely manner.
4. Feedback is provided to me in a manner that is constructive and non-threatening.
5. I am provided with supplemental course information that outlines course objectives,
concepts, and ideas.
6. Specific expectations are set for me with respect to the amount of time per week I should
spend for study and homework assignments.
7. The instructors grade and return assignments within a reasonable period.
8. Learning outcomes for each course are summarized in a clearly written, straightforward
statement.
9. My courses are separated into self-contained modules or units that can be used to assess
my mastery before moving forward in the course.
10. Each module, unit, or lesson requires me to engage in analysis, synthesis, and evaluation
as part of the course assignments.
11. Contact information and tools are provided to encourage students to work with each other
and the instructor.
12. The courses are designed to require students to work in groups using problem-solving
activities in order to develop topic understanding.
13. Course materials promote collaboration among students.
6 Questions adapted from Chapman, D. D. (2006). Building an evaluation plan for fully online degree programs. Online Journal of Distance Learning Administration, 9(1).
14. Course technical tools promote collaboration among students.
15. Sufficient library resources are available to me.
16. Before starting the program, I was advised about the program to determine if I have the
self-motivation and commitment to learn at a distance.
17. I have been provided with adequate training and information to aid me in securing
material through online databases.
18. Written information is supplied to me about the program.
19. Easily accessible technical assistance is available to me throughout the duration of the
program.
20. An effective system is in place to address my questions about the program.
21. Have you taken an online course prior to enrolling in the program?
22. Would you recommend the program to others seeking a similar degree? Why or Why
not?
23. What is your level of agreement with the following statement? “The program has
provided me with a rewarding and challenging educational experience.”
24. What is your level of agreement with the following statement? “The program has
provided me with skills required professionally.”
25. Specifically, what impacts has the Program had on you professionally?
26. What items, if any, would you suggest be changed in the program?
27. What items, if any, should the program keep the same?
28. In your opinion, how can the program be improved?
Preliminary Learner Survey Questions B7
Information and instruction on the course
1. It was clear, from the beginning, what I had to do in the course.
2. The instruction was organized toward solving problems.
3. I could get exemplars related to the learning tasks.
4. The instruction emphasized knowing, not doing.
5. The activities were guided by predefined steps.
6. The content resources associated with the practices were presented in a comprehensible way.
7. The course provided enough opportunities to carry out the practices.
8. The practices were coherent with the course contents.
9. There were enough instructions on how to complete the practices.
10. Do you have any further comments about the way the information about the practices was presented?
Learning support and personal help
1. The learning resources were available all the time, so I could access what I needed at any moment.
2. Some of the learning resources that I needed were not present, but I could ask for (or find) them.
3. I could get additional information about the topic.
4. I could get help from my instructor at any moment I needed it.
5. I could get answers to my needs within an appropriate time interval.
6. I could share my own impressions about the task with other students.
7. I was able to collaborate with partners.
8. Any more comments about learning support and personal help?
7 Questions adapted from Mediano, C. M. (2006). Evaluation plan of 'online math & science project'. Madrid, Spain: Spanish University for Distance Education.
Self-evaluation and formative assessment (formative evaluation)
1. I received suggestions on how to improve the way I worked on the tasks.
2. The assignments guided me to complete the task procedures successfully.
3. The assignments provided me with information about my practice that I really needed.
4. The assignments provided me with information about what I needed to improve in the conceptual and theoretical foundations of the practices.
5. While working on the learning tasks, I received suggestions on how to overcome the mistakes I made.
6. We were mainly evaluated on what:
   a. We should do.
   b. We should know.
7. The course provided me with enough opportunities to check my progress.
8. Any more comments about evaluations?
General evaluation on the course
1. The course structure was clear.
2. After finishing the course, I felt ready to deal with real practical problems.
3. I could get relevant feedback on my performance of the practices/tasks.
4. The benefits that I gained from the course are:
   a. Understanding the learning content.
   b. Applying knowledge in doing the exercises.
   c. Applying skills in doing the exercises.
   d. Applying knowledge to solve real problems.
   e. Applying skills to solve real problems.
5. The course has fulfilled my initial expectations.
6. I would prefer to study other subject matter in the same way.
7. I enjoyed working in the course.
8. In connection with the general course evaluation, do you have any other comments?
Appendix C
Program Calendar
Table C1
Program Calendar
Term      Dates                          Number of Days
Term 1    July 1-September 22            84
Break     September 23-September 29       7
Term 2    September 30-December 22       84
Break     December 23-December 30         8
Term 3    December 31-March 24           84
Break     March 25-March 31               7
Term 4    April 1-June 23                84
Break     June 24-June 30                 7
Appendix D
Design Elements
Table D1
Elements of the evaluation plan.1
Rows (evaluation methods): Contextual information; Documents and records; Written survey (cost-analysis); Learner survey; Focus group; Interviews; Final report; Task reports/feedback meetings.
Columns (CIPP components): Context evaluation; Input evaluation; Process evaluation; Product evaluation.
1 Evaluation methods to address CIPP Model components.
Table D2
Elements and timing.2
Rows (evaluation methods): Contextual information; Documents and records; Written survey (cost-analysis); Learner survey; Focus group; Interviews; Final report; Task reports/feedback meetings.
Columns (months): April; May; June.
2 Evaluation methods employed each month.
Table D3
Planned formal reports.3
Rows (evaluation methods): Contextual information; Documents and records; Written survey (cost-analysis); Learner survey; Focus group; Interviews; Task reports/feedback meetings.
Columns (formal reports): Preliminary report; Interim report; Draft of final report; Final report.
3 Evaluation methods that are to contribute information to each formal report.
Table D4
Planned evaluation reports.4
Rows (evaluation reports): Preliminary report; Interim report; Draft of final report; Final report.
Columns (months): April; May; June.
4 Timing of planned evaluation reports. (Note: Summarized results from the data are distributed back to stakeholders to perform continual member checks and to obtain comment and feedback on the findings, the interpretations, and the accuracy of the recorded data. Member checks are not included in this table, as they are to be performed frequently.)