Draft Copy: Do Not Release


10-22-03

PROPOSED CHANGES IN FACULTY EVALUATION & IMPROVEMENT SYSTEM

I. INTRODUCTION:

As stated in the Faculty Handbook (page 5.8), the goal of the Faculty Evaluation and Improvement System is to provide “a responsible means of evaluating faculty performance for purposes of retention, promotion, and tenure…” The Faculty Handbook also states that “peer evaluation…is accorded primary consideration in all three areas [teaching, professional development, and corporate function] in evaluation for promotion and tenure” (page 5.32). With this in mind, the following proposals and recommendations are designed to enhance the role of the Peer Evaluation Committees and improve the quality of the evaluative data that they provide.

II. PROPOSED CHANGES TO THE FACULTY HANDBOOK REGARDING THE FACULTY EVALUATION SYSTEM

What follows are specific changes to the Faculty Handbook recommended by the Faculty Personnel Committee in order to improve the faculty evaluation system. If these changes are adopted, other areas of the handbook may require modifications in order to be consistent with the proposed changes.

The proposed changes are to be read as follows: The copy in plain text is the current wording in the Faculty Handbook. The copy in brackets is current wording that is to be deleted. The copy in bold is new wording to be added to the Faculty Handbook. Sections of the Handbook which are not mentioned are to be left unchanged. Where appropriate, a rationale for the proposed changes is provided in italics.

CHAPTER 5 -- FACULTY EVALUATION & IMPROVEMENT, TENURE, AND PROMOTION

FACULTY EVALUATION AND IMPROVEMENT

I. FACULTY EVALUATION SYSTEM

B. Evaluations

2. Composition of Peer Evaluation Committee (currently on page 5-2)

The last Peer Evaluation done immediately prior to tenure consideration will be conducted by a Peer Evaluation Committee consisting of three tenured members of the faculty, one of whom will be the chair of the faculty member's department, and, in the final stage, the Dean of the College. At least one of these persons must be from an academic department other than that of the evaluatee. All other e[E]valuations will be conducted by a Peer Evaluation Committee comprised of two faculty members, and, in the final stage, the Dean of the College, although it is highly recommended that Peer Evaluations conducted immediately prior to consideration for promotion also include three members of the faculty. Members outside the evaluatee's department should be from departments and disciplines closely related to that of the evaluatee and have teaching responsibilities similar to those of the evaluatee. The evaluatee has the right to request a peer reviewer from his/her discipline from outside the institution as a substitute for one of the tenured members of the faculty on his/her committee. For those faculty other than department chairs, the Committee consists of the evaluatee's department chair or another tenured member of the department designated by the chair, and one other tenured member of the faculty chosen by the chair (or designee) in consultation with the evaluatee and approved by the Dean. The department chair (or designee) will serve as chair of the evaluation committee. For department chairs, the Committee consists of a tenured member of the faculty appointed by the Dean to serve as chair and another tenured member of the faculty chosen by the Dean in consultation with the evaluatee. With the exception of department chairs, no faculty member should be expected to serve on more than one peer evaluation committee in any given academic year.
No department chair should be expected to serve on a peer evaluation committee for a colleague from another department in a year in which he/she is chairing a peer evaluation committee. Exceptions to any of these rules may be granted at the discretion of the Dean if necessary.

Rationale: The usefulness of the Peer Evaluation Report will be improved by increasing the number of evaluators and by ensuring that those evaluators share a professional background that enables them to comment meaningfully upon the evaluatee's performance based upon the standards and practices of the candidate's discipline. Although a larger number of faculty will be serving as peer evaluators, the workload will be shared more evenly, and the workload for any one evaluator might actually be less than it is now.

3. Information Sources Used by Evaluation Committee (currently on page 5-2)

A minimum of six distinct classroom observations, each at mutually agreed upon times, is required for all Peer Evaluations: three per evaluator for two-person committees, and two per evaluator for three-person committees. These observations may occur through class visitation or through recently videotaped class sessions, also upon mutual agreement.

Rationale: Since the number of evaluators has in some cases been increased from two to three, the number of classroom visitations for each evaluator can be decreased from three to two without decreasing the number of classroom visitations each evaluatee receives.

An evaluatee should submit the following sources of information to the Peer Evaluation Committee: 1) a Personal Statement (Form 4); 2) course materials (syllabi, handouts, exams, etc.); 3) samples of graded student work appropriate to the evaluatee's discipline; 4) copies of Professional Activities Reports completed since the most recent Evaluation or Post-Tenure Review (Form 1); 5) Student Reaction Summary Reports (Form 5) [plus typed student comments] for all courses taught since the previous Evaluation or Post-Tenure Review; 6) a report from the Registrar on grade distributions for all courses taught since the previous Evaluation or Review; and 7) copies of any manuscripts, documents, or tapes [or letters of evaluation] appropriate to the evaluatee's professional work or contributions to the corporate functions of the college.

In addition, the evaluatee is encouraged to submit some or all of the following, which shall be afforded consideration on a par with the above if submitted.

8) If the faculty member has developed and distributed his/her own student reaction questionnaire with scantron questions relevant to his/her specific discipline, teaching style or teaching objectives, the results of these may be included as well.

9) Letters from colleagues and letters from alumni

[In addition, both faculty members on the committee must complete a minimum of three classroom observations, each at a mutually agreed upon time; these observations may occur through class visitation or videotaped class sessions, also upon mutual agreement.]

Rationale: The proposed changes in information sources are designed to aid the Peer Evaluation Committee in more meaningfully assessing the candidate’s performance.

4. Content and Structure of the Peer Evaluation Report (NEW SECTION)

The Peer Evaluation Report must contain each of the items described below.

1. A cover sheet with the signature of every Peer Evaluation Committee member and the evaluatee, indicating each person's agreement or disagreement with the Peer Evaluation Report.

2. The evaluatee's Personal Statement.

3. Individual Peer Evaluation Report Forms and summary statements by the Chair and by each member of the Peer Evaluation Committee that specifically address the evaluatee's teaching, professional development, and corporate and community service. Because of the importance of the Personal Statement in placing the evaluatee's work in context, each member of the Peer Evaluation Committee must include some commentary on the appropriateness and accuracy of the claims made by the evaluatee in his/her Personal Statement.

Rationale: The proposed revisions to the Peer Evaluation Report accentuate: A. the value of the report in the overall process, B. the vital role of the Peer Evaluation Committee in placing the candidate’s work in an appropriate context, C. the need for a uniform format for peer evaluators to follow.

The committee believes that these changes will make the Peer Evaluation Report the primary summative statement of faculty performance, which will allow peer evaluation to be "accorded primary consideration in all three areas in evaluation for promotion and tenure," as called for in the Faculty Handbook.

5. Procedures

By the first week of each academic year, the Dean will notify all faculty members who are scheduled for an Evaluation during that year. The Dean will also notify the chair of the evaluatee's Peer Evaluation Committee. Notification from the Dean will include a reminder of the date by which the Evaluation must be completed. Tenured faculty members requesting an Evaluation prior to applying for promotion should submit their request to the Dean during the first week of classes of the Fall semester. After receiving this request, the Dean will notify the chair of the evaluatee's Peer Evaluation Committee.

Members of the Peer Evaluation Committee will independently review the evaluatee's materials and visit the evaluatee's classes. They will then write Peer Evaluation Reports (Form 2) and submit drafts of those reports to each other and to the evaluatee. The chair will then conduct a formal committee meeting with the evaluatee to review these drafts, [revise] discuss possible revisions to them if deemed appropriate, and make suggestions to the evaluatee regarding improvement in any of the three areas of the evaluation. After this meeting, all members of the evaluation committee may revise their reports. Each of these reports must then be circulated among the committee members and given to the evaluatee. [The peer evaluators may at this point complete Part II of the Peer Evaluation form and give it to the evaluatee.

Subsequent to this meeting, the peer evaluators should exchange copies of the final versions of their reports with each other and with the evaluatee.] The evaluatee should sign the reports, append any additional statements if he/she disagrees with the peer evaluator's comments, and return the reports to the chair of the Peer Evaluation Committee. After obtaining all required signatures on the cover sheet, t[T]he chair will then submit the entire Peer Evaluation Report (as defined in section 5.I.B.4 above) [reports] (including any appended statements from the evaluatee) [and the evaluatee's Personal Statement] to the Dean of the College.

After reviewing the [evaluatee's Personal Statement,] Peer Evaluation Reports (as defined in section 5.I.B.4 above) and other material in the evaluatee's Faculty Evaluation File (e.g., Professional Activity Reports, Student Reaction Summary Reports), the Dean will then schedule a conference with the faculty member to: 1) discuss the evaluatee's situation relative to others in terms of tenure and promotion; 2) develop a written agreement regarding the faculty member's goals for the next Evaluation or Post-Tenure Review period; and 3) develop a plan for overcoming any deficiencies.

The written goal agreement will consist of an endorsement of the goals specified on the faculty member's Personal Statement or a revised set of goals mutually developed between the Dean and the faculty member and appended to the Personal Statement. Copies of the Personal Statement (with any appended revisions) will be given to the faculty member, and a copy will be placed in the faculty member's Faculty Evaluation File.

In those cases where deficiencies have been identified, the faculty member and the Dean will also develop a plan for addressing those deficiencies. The improvement plan will be signed by the evaluatee and the Dean; one copy of the improvement plan will be given to the evaluatee, and another copy will be placed in the evaluatee's Faculty Evaluation File. If the faculty member and the Dean cannot agree upon an improvement plan, the faculty member may appeal the case to the President of the College.

C. Post-Tenure Reviews

3. Information Sources Used by Post-Tenure Review Committee

The faculty member should submit the following sources of information to the Post-Tenure Review Committee: 1) a Personal Statement (Form 4); 2) copies of Professional Activity Reports completed since the most recent Evaluation or Post-Tenure Review (Form 1); and 3) Student Reaction Summary Reports (Form 6) [plus typed student comments] for all courses taught since the previous Evaluation or Post-Tenure Review.

D. Administration of Student Reaction Forms

For untenured faculty, t[T]he Student Reaction Form (SRF) will be distributed to each class every academic term, including May and Summer, for both full-time and part-time faculty. For tenured faculty, student assessment will automatically be conducted in half of all courses taught during each academic year, although individual faculty members may request additional assessments. In the Fall semester of each academic year, the registrar will randomly select half of all tenured faculty to undergo assessment in all courses that semester, and will inform all tenured faculty whether they have or have not been selected by the eighth week of the semester. (Those tenured faculty not selected in this manner will undergo assessment the following Spring semester). Those not selected must then inform the registrar by the ninth week of the semester if they wish to be assessed in any or all of their courses. It is the responsibility of each individual faculty member to ensure that they undergo assessment at least once for each distinct course they teach in the four semesters prior to consideration for promotion.

SRF assessment must be conducted [insert the option voted for by the faculty in the November meeting concerning when assessments are to be conducted].

The Registrar will prepare [a] an appropriate supply of forms for each class, [solicit from department chairs the names of faculty who will be assigned to administer the forms,] and arrange for the distribution and collection of the forms [during the last week of classes]. Faculty members will choose, from among the students who happen to be in attendance the day of the evaluation, a responsible student to distribute, monitor the completion of, and collect the forms. (Faculty will leave the classroom while the forms are being administered.) After collecting and sealing the completed forms in an envelope provided by the registrar for that purpose, the student will sign the envelope and retrieve the faculty member, who will also sign it. The faculty member will then (after class) return the sealed envelope to a designated faculty secretary for collection.

Rationale: A number of faculty find the implied lack of trust in the current administrative procedure offensive. Moreover, requiring a different faculty member to administer the evaluations is unnecessarily cumbersome and further complicates everyone’s schedules at an already busy time of the year. The committee has found that students are commonly employed for this purpose at institutions which solicit student reactions.

Anonymous student comments solicited as part of the SRF assessment process will be returned to the faculty member after the grades for the semester have been turned in and recorded, but are not to become part of the faculty member's Peer Evaluation Report. They may, at the discretion of the evaluatee, be shared with the evaluatee's department chair and peer evaluators, but are not to be seen by anyone else involved in the evaluation process, nor (as this implies) are they to be typed.

Rationale: (for not including anonymous comments in formal evaluation materials)

Experts in the field of evaluation of teaching typically draw a sharp distinction between "summative" evaluation (e.g., evaluation for the purpose of promotion and tenure) and "formative" evaluation (e.g., evaluation for the purpose of improvement of instruction), and evaluative tools which are appropriate for one of these purposes are often inappropriate for the other. This distinction is reflected at several places in our Faculty Handbook and other documents. The separation of evaluating teaching performance and offering suggestions for improvement is not without precedent at Lycoming. Peer evaluation reviewers are encouraged to separate comments “purely for the purposes of improvement” from their evaluation of an evaluatee. The second part of the current Peer Evaluation Report reads:

Recommendations to faculty evaluators. This section of the Peer Evaluation is optional and intended purely for purposes of improvement; it should be directed to the evaluatee only. Please be as specific as possible in your comments, noting both those aspects in which the individual is performing well and those which you feel could be improved upon. If you feel the evaluatee should consult the Committee for Improvement of Instruction [now called the Teaching Effectiveness Committee], you should indicate that here. Please detach this page and deliver it to the evaluatee.

It seems more than reasonable and professional to expect the same distinction between summative and formative feedback from students.

Although anonymous student comments can be helpful to faculty members in assessing their own classroom performance, they should not play a role in the formal evaluation process, for several reasons.

1. Immediate, positive and critical feedback enables faculty to improve instruction effectively. Under the current system, however, typed, anonymous comments arriving months after a course has concluded are already devoid of, if not distanced from, the context of the course being evaluated. Faculty should receive comments in a timely manner and, like the recommendations for improvement in peer evaluations, the comments should go only to the individual faculty member. The comments should not be used in the formal faculty evaluation process. A student's individual comment, considered without a keen awareness of the weekly dynamics, personalities, classroom environment (even inside jokes), etc., might appear wholly damning or laudatory to someone unfamiliar with those particular variables. Lastly, anonymous student comments reinforce a system of distrust and anti-professionalism in which we teach our students that an effective form of evaluation, or communication, is one that is veiled, faceless, anti-constructive, and without recourse for improvement. Currently, what in many cases is simply a misunderstanding or miscommunication between a faculty member and a student results in a permanently recorded, anonymous comment with serious potential consequences.

2. Allowing spontaneous, anonymous comments to play any role whatsoever in determining whether someone gets tenure or a promotion is inherently unfair and inconsistent with the principles of a free society. This is reflected in the fact that we do not allow such things to be used against a person in a court of law, regarding them as inadmissible hearsay evidence. Even in civil law, the Buckley Amendment gives students the right to see what we say about them, unless they specifically waive that right, and it is difficult to see why faculty should not have the same rights as students. Although our evaluation process is not bound by the same rules as those followed in a court of law, the rationale which supports our legal prohibitions would also apply to any situation which has such a profound effect on people. Negative comments by students are essentially accusations, possibly true but possibly false, and without knowing who made them and why, faculty members are not in a good position to defend themselves.

3. The use of anonymous student comments is a serious threat to academic freedom. Many faculty members cover controversial issues in their classes. Since controversial issues, by definition, are issues about which many people disagree, anyone who covers them must necessarily say things with which some of their audience disagrees, perhaps strongly. Depending on the strength of their feelings, students might even feel offended or enraged, and there is a strong likelihood that these feelings will influence the sorts of comments they make on questionnaires. Moreover, administrators or faculty who read these questionnaires may share the position of the offended or enraged students. This has a "chilling effect" on what issues faculty members are willing to cover and in what way. Faculty cannot exercise academic freedom if they are constantly worried about being condemned in writing, on an official document to be examined both by their superiors and their peers, as a communist, a sexist, a fascist, a racist, a "San Francisco liberal," or whatever.

4. The use of anonymous comments in our evaluation process is a lawsuit waiting to happen. If anyone sues the college over being denied tenure, or even promotion, the first thing their attorney will want is a copy of our procedures. The attorney will want to know two things: (1) have the procedures been followed (to ensure that everyone is treated equally in this respect), and (2) are the procedures in themselves fair, which is part of due process. But the use of anonymous comments, as indicated above, is widely considered to be inherently unfair. In a legal sense, it will do us no good to point out that we treat everyone the same, that these comments are only a part of the data we use, or that the decision in the case under consideration was based on other considerations entirely. The mere fact that persons in a position to make recommendations or decisions in the case had access to these comments, assuming that the use of such comments is determined to be a violation of due process, will be found to "taint" the entire process.

5. The committee would also like to respond to several concerns on this issue raised at the public forum last spring:

CONCERN: If anonymity isn’t assured, students won’t say what they really think; they will be dishonest about their true opinions or at the very least, reluctant to express them.

RESPONSE: If we can't assume students will be honest when they include their name, how can we assume they will be honest when they don't? Are people more likely to be dishonest when no one is watching, when they know they can get away with being dishonest, or when they know they might be held accountable? Who is likely to be more thoughtful and responsible, a person who stands behind their remarks by identifying themselves or someone who can hide behind typed anonymous comments?

CONCERN: There is a power differential between students and faculty that might make students reluctant to provide honest comments.

RESPONSE: Since faculty don’t receive comments until after grades are submitted, the grade in the course being evaluated is not a concern. Actual areas of concern might be grades in subsequent courses, letters of recommendation, or selection for honors or prizes. These are legitimate concerns for a student.

Legitimate concerns for a faculty member might be contract renewal, tenure, and promotion. These include the opportunity to keep one’s job at Lycoming, earn a living for oneself and one’s family, salary increases, career advancement, and, as with students, self respect.

Both groups have considerable opportunity to influence the professional aspirations of the other. Neither should be permitted to hide behind anonymity in their evaluations of the other.

CLAIM: Anonymous student comments aren’t taken that seriously by evaluation committees.

RESPONSE: If so, why look at them at all?

If one purpose of a college education is to help people prepare for life outside of academe, then it is important for students to know that life isn't usually anonymous. What do we want to teach our students? Do we want them to think that they can take pot shots at bosses and supervisors behind their backs, or do we want to help students develop effective ways to change behavior in a fair and open manner? Consider, for example, a student who had difficulty understanding the professor during a course. On an anonymous evaluation, such a student might write: "The Prof stinks," "The Prof was disorganized," or "I didn't learn anything." Suppose instead the comments were: "I wish the professor would put a lecture outline on the board at the beginning of class," "I wish the professor wouldn't let students get him sidetracked from the main topic during lectures," or "I found the professor's Paul Revere tie to be distracting." Obviously, the second set of remarks is more tactful and written in a way that the student wouldn't have to worry about retaliation. But they are more than that. They also provide the instructor with concrete information about how to improve the class. And they provide P&T with better, not worse, data on which to make evaluations.

When student comments are open rather than anonymous, the emphasis shifts from being vindictive to helping students identify their concerns and learning how to express them in more diplomatic ways. Do we want to teach our students duplicity or diplomacy?

Rationale: (for not having comments typed)

Typing these forms is also a waste of scarce time and resources. Secretaries report that typing the forms forces them to make interpretations and judgment calls about the students' writing. These judgments may or may not reflect the actual intention of the poorly scrawled comments. The time needed to complete the typing also unnecessarily delays getting the comments back to the faculty. For junior faculty in particular this is a potentially costly delay. These faculty should have access to the student feedback as quickly as possible, certainly quickly enough that adjustments could be made to Spring Semester plans following the Fall Semester evaluations.

After final grades have been received by students, copies of the Student Reaction Summary Report will be distributed to the Dean of the College, the department chair, and the individual faculty member. [written responses to questions #11-14 will be typed as assigned by the Dean of the College and copies will be included with the report that goes to the department chair and individual faculty member.] Copies of the Student Reaction Summary Reports will be placed in the Faculty Evaluation File (see next section) to be maintained in the Dean's office. If faculty wish to comment on the information in these reports, their responses will be appended to the Student Reaction Summary Reports in their evaluation file.

[B. Resources for Faculty Improvement

1. Committee for the Improvement of Instruction

A Committee for the Improvement of Instruction has been established to assist individual faculty in improving their performance as teachers. The committee is appointed by the Faculty Personnel Committee in consultation with the Dean. It consists of three faculty members selected for their ability to assist faculty seeking to improve their teaching. Each member will serve a three year term, with one position opening each year. Members will be eligible for reappointment. Members of the Committee for the Improvement of Instruction may not serve concurrently on the Faculty Personnel or Promotion and Tenure Committees.

The committee will serve as consultants to faculty who request assistance either on the recommendation of their Peer Evaluation Committee or as a result of their independent desire for improvement. The committee may request any of the following materials to assist with their consultation: course syllabi, handouts, examinations, graded student work, comments and data from the Student Reaction Form, tapes of class sessions, and Peer Evaluation Reports. Committee members may also visit the instructor's classes. Following analysis of these items, the committee will develop a plan that builds on the instructor's strengths and seeks to overcome weaknesses.

Because the goal is to encourage a trusting relationship between the committee and the evaluatee, all records of the committee are to remain confidential. Care must be taken to distinguish between evaluations intended to improve faculty performance and evaluations intended to assess that performance for the purpose of personnel decisions. This protects the person who seeks evaluation for purposes of improvement. It also prevents the peer evaluator's frank assessments from being clouded by the possibility of subsequent requests to use those assessments in personnel decisions. Therefore, neither the individual nor the committee may share with the Committee on Promotion and Tenure or with any faculty or administrators involved in personnel decisions any data, materials, and recommendations developed primarily for diagnostic/improvement purposes.

2. Improvement of Instruction File

The following resources will be placed on reserve in the Snowden Library: (1) samples of questionnaires used for faculty evaluation and improvement; (2) samples of classroom observation instruments that are used by peer evaluators; (3) journal articles containing suggestions for improving college teaching; and (4) a bibliography of books on college teaching techniques identifying items currently in our collection.]

Rationale: This entire section is to be eliminated, since both the committee and the file referred to therein no longer exist.

Appendix

Additional Proposals Concerning the Administration and Interpretation of SRFs

I. Summary SRF Reports

The committee believes that the current usage of the Summary SRF Reports for evaluating teaching effectiveness is significantly flawed.

1. Many faculty report that they have difficulty understanding what the current numbers, or number ranges, actually mean.

2. As a consequence of item 1, there appears to be no commonly understood standard for how the numbers are to be interpreted.

3. Because of both of the above issues, there is a strong potential for inconsistency and subjectivity in the interpretation and use of the numbers by the evaluators of faculty teaching performance.

4. Our current method of compiling and reporting numeric values lends itself far too easily to the conclusion that approximately half the faculty are "below average" teachers, which is very harmful to faculty morale. In fact, the emphasis we place on the importance of effective teaching here at Lycoming, our hiring procedures, and our criteria for granting tenure guarantee that most of our faculty are well above average in teaching effectiveness.

5. The committee recommends that numeric values be interpreted in such a way as to reflect that fact.

Thus, for example, if a clear majority of students in a course respond with 4's or 5's (with '5' meaning "excellent") to a question (like our current question 4) concerning the student's overall opinion of the faculty member's teaching performance, and only a handful respond with 1's and 2's, this should be regarded as a clear indication that, from a student point of view, the faculty member has exhibited highly effective teaching in that class, irrespective of how that faculty member's "numbers" compare with those of other faculty. This is clearly the most important "fact" that can be gleaned from these numbers, and using the tools of statistical analysis to calculate additional numbers for the purpose of comparison simply confuses the issue. Instead, there should simply be a clear explanation of what each number (1 to 5) means; then let the numbers stand on their own.
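To illustrate, the distributional reading described above can be sketched in a few lines of Python. The "clear majority" and "handful" thresholds used here (over half answering 4 or 5, and fewer than ten percent answering 1 or 2) are arbitrary placeholders chosen for illustration, not standards this committee is proposing; any actual cutoffs would need to be adopted by the faculty.

```python
from collections import Counter

def summarize_ratings(responses):
    """Summarize 1-5 SRF responses as a distribution rather than a mean.

    Returns the per-value counts plus a simple reading of the distribution:
    'highly effective' when a clear majority (over half) answered 4 or 5
    and only a handful (under 10%) answered 1 or 2; otherwise 'mixed'.
    The thresholds are illustrative placeholders, not proposed policy.
    """
    counts = Counter(responses)
    n = len(responses)
    high = counts[4] + counts[5]   # students rating the teaching 4 or 5
    low = counts[1] + counts[2]    # students rating the teaching 1 or 2
    reading = "highly effective" if (high > n / 2 and low < n * 0.1) else "mixed"
    return {value: counts[value] for value in range(1, 6)}, reading

# Example: a class of 20 students, mostly 4's and 5's
dist, reading = summarize_ratings([5] * 8 + [4] * 7 + [3] * 4 + [2] * 1)
```

The point of the sketch is that the report would show evaluators the raw distribution (how many students chose each value) and a plain-language reading of it, with no comparative statistics across faculty.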

II. Questionnaire Revision

There is widespread dissatisfaction amongst the faculty with the questions on the existing SRF. A common faculty perception is that both the questions themselves and the way the SRF is administered discourage constructive, thoughtful responses by students.

The current SRF has too much of a consumer emphasis and not enough of a collaborator emphasis. Students are not simply consumers to be asked “Did you like the teacher and do you think the teacher liked you?” One of the primary goals of a revised SRF, and its administration, should be to improve the quality of thought students put into their evaluation of the instructor’s performance. The student evaluation instrument needs to become a tool for helping students view themselves as collaborators with faculty in the learning process. In addition, irrespective of whatever changes are made to the SRF itself, more consideration should be given to enabling students to more effectively evaluate teaching and better understand their role in the evaluation process.

The committee has examined teaching evaluation instruments in use at other institutions and solicited feedback from faculty, particularly from members and former members of P & T. A revised questionnaire is appended to this document.

III. Automating the SRF

The committee recommends that the college investigate the options available for automating the administration of the SRF using secure web-based forms. This could offer many potential benefits, including reduced administrative costs, fewer disruptions of class time, and faster access to the data. (If we do decide to do this, of course, many of our proposals concerning the timing of SRF assessment would no longer be applicable.)

STUDENT REACTION FORM

I am taking this course primarily as a(n): Major / Minor / Distribution / Elective

My class rank is: Fresh. / Soph. / Junior / Senior / Other

(Scale: Far Below Average / Below Average / Average / Above Average / Far Above Average)

The amount of work I put into this course was

How many classes did you miss? 0-1 / 2-3 / 4-5 / 6-7 / 8+

What is your current GPA? 0.0-2.2 / 2.3-2.7 / 2.8-3.0 / 3.1-3.5 / 3.6-4.0

(Scale: Disagree Strongly / Disagree / Agree / Agree Strongly / N/A)

The course was well organized

The instructor gave clear explanations of assignments and course content

The syllabus and goals of the course were clear to me

The methods used for evaluating/grading my work in this course were explained in class or in the syllabus

I learned a great deal from this instructor

The instructor was receptive to my questions

The instructor encouraged my progress in the course

The instructor treated me with respect

I felt the material presented and the amount of work expected were appropriate for this course

I felt that the instructor was available when I asked for assistance

I felt there were an appropriate number of evaluations of my work in the course

(Scale: Poor / Fair / Average / Good / Excellent)

Overall, I felt that the instructor's teaching was

Overall, I felt that this course was

Written Questions

NOTE: These questions are to be printed on a separate sheet, and the answers go only to the instructor.

(1) What aspects of the teaching or content of this course do you feel were especially good?

(2) What changes could be made to improve the teaching or content of this course?

Three additional items that relate to this proposal but are not a part of the proposal

1. The Faculty Personnel Committee intends to move that, after one final open discussion and debate in the November faculty meeting, our current proposal be submitted to the faculty to be voted upon by paper ballot distributed through campus mail. The current version of the proposal, together with supporting documents, is attached.

2. Main Features of Proposal:

a. Enhanced peer evaluation committees required for consideration for tenure, recommended for consideration for promotion

b. Students in the class distribute and collect the forms, the instructor leaves the room, and a student returns the forms to the instructor in a sealed envelope afterwards

Changes to Former Proposal:

a. Solicited letters from current students are no longer explicitly encouraged.

b. Anonymous student comments go only to evaluatees but may be shared with peer evaluators and chairs.

c. Student assessment is required for tenured faculty in only half of their courses, and is optional in all others.

3. Issues Involving Statistical Evaluation of Teaching (Gene Sprechini and Steve Griffith)

Most faculty agree that typical undergraduate students are simply not competent to assess the overall teaching effectiveness of their instructors, and that the most that we can glean from student questionnaires concerning overall effectiveness is that, if an instructor consistently receives overwhelmingly negative reactions from students, there might be a problem with that instructor’s teaching. (Some also argue that if an instructor consistently receives overwhelmingly positive reactions from students, there might be something exceptionally good about that instructor’s teaching, though there is not such a broad consensus about this.)

Thus, within this context, student questionnaires are analogous to the oil warning light in an automobile. If the light goes on (which is analogous to an instructor consistently receiving overwhelmingly negative responses from students), then we should check the engine (which is analogous to attending more closely to a faculty member’s performance by other means). Even in this case, there may be other explanations for the negative responses, just as the oil warning light might be a false alarm for a variety of reasons. If, on the other hand, the light does not go on (which is analogous to an instructor not consistently receiving overwhelmingly negative reactions from students), we do not thereby have much information about the condition of the engine (which is analogous to the fact that the questionnaires cannot in that case really tell us very much about overall teaching effectiveness).

Moreover, just as we cannot reasonably compare the condition of two automotive engines if we know only that the oil light does not go on in either automobile, the results of student questionnaires certainly cannot enable us to make meaningful comparisons among faculty members who do not consistently receive either overwhelmingly negative or overwhelmingly positive results. Most faculty members are aware of these considerations, and we all tend to pay lip service to the idea that the results of student questionnaires are not definitive of teaching effectiveness and that we must therefore take other things into account, but the seriousness with which we examine the results of these questionnaires and worry about what questions they should include belies this professed belief.

It is important that we employ student questionnaires (as well as other evaluative devices) with new faculty members, just as it is important to have, and to pay attention to, an oil warning light in a recently purchased automobile. For this reason, new faculty members should undergo student assessment every semester, in every course, until they receive tenure. After that, for several reasons, a spot check should be sufficient. Why? Because, in the first place, Lycoming College is very careful about who it hires. We simply do not hire someone unless we have good reason to believe that they will be effective teachers. In the second place, after they are hired, we pay careful attention to their teaching performance, and if there are serious problems, they are not retained. In the third place, in the unlikely event that someone with questionable teaching effectiveness is retained long enough to come up for tenure, their application will almost certainly be denied.

What this amounts to is that tenured faculty members have all been “road tested.” We needn’t stare at the oil light constantly any more; a casual glance once in a while will suffice. After the first few years, there is typically very little change in the sorts of student evaluations received by any given faculty member. Although we all try to improve, we all reach a sort of plateau fairly early in our careers, and, more importantly, the differences between us at that point, assuming they are even comparable, are no longer reliably measured by student questionnaires. We do not have peer evaluations every year; requiring the administration of student questionnaires for only the four semesters prior to each evaluation should be more than adequate.

Objection: “We need to administer student questionnaires in every course every semester to ensure statistical validity”.

Response: This claim is based on certain highly questionable if not outright false assumptions. Comparing what my students think of me with what your students think of you is like comparing the effectiveness of one drug in combating one disease with the effectiveness of another drug in combating a completely different disease. The claim that the differences between courses and groups of students can be ignored is belied by the fact that even two sections of the same course taught by the same instructor during the same semester can often receive strikingly different student reactions. Besides this, the only thing we really need to know in regard to overall teaching effectiveness, as expressed by means of the analogy given above, is whether the warning light is on, and we have certainly had enough experience by now to know when it is just by looking at the results of student questionnaires on a course-by-course basis.

Objection: “We need to complete SRF assessment in every course every semester so that people know where they stand with respect to other faculty members, and these results will be skewed if people are being compared to different colleagues in different semesters.”

Response: This objection completely ignores the point made above that student questionnaires cannot be used to make valid comparisons in the first place. It is also important to note that students are not even being asked to make comparative judgments, and studies in the logic of preference show that if they were, it is unlikely that faculty would be ranked in the same order as they are when judged independently anyway.

The bottom line here is that we are wasting valuable faculty and administrative time and resources in administering student reaction forms to every student in every course every semester, and also in generating statistics for the purpose of comparisons that are not useful or even necessarily appropriate. It is especially important that we not mislead by reporting that half of our teachers are statistically below average, even though this must be true in a sense.
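The sense in which “half below average” must be true applies to the median, not the mean: with a hypothetical set of instructor scores containing one low outlier, far fewer than half of instructors fall below the mean. A minimal sketch (the scores below are invented for illustration):

```python
import statistics

# Hypothetical course-average scores for ten instructors (1-5 scale)
scores = [4.6, 4.5, 4.4, 4.4, 4.3, 4.3, 4.2, 4.1, 3.9, 2.1]

mean = statistics.mean(scores)      # pulled down by the single outlier
median = statistics.median(scores)  # by definition, at most half lie strictly below

below_mean = sum(s < mean for s in scores)
below_median = sum(s < median for s in scores)

print(f"mean={mean:.2f}, median={median}")
print(f"{below_mean} of 10 below the mean, {below_median} of 10 below the median")
```

Here only 2 of 10 instructors score below the mean of 4.08, so announcing that “half our teachers are below average” would be true only of the median, and reporting it without that qualification invites exactly the misleading comparison the committee warns against.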
