The Sentinel UUP — Oneonta Local 2190
Volume 11, Number 6, February 2011

Using SPI or SRFI to Compare Faculty Teaching Effectiveness: Is It Statistically Appropriate?

By Jen-Ting Wang, Ph.D., Associate Professor of Statistics, Department of Mathematics, Computer Science, and Statistics

Background and Problems of Faculty Evaluation through SRFI

For the past few months, the new instrument for measuring teaching effectiveness known as SRFI (Student Response to Faculty Instruction) has been widely discussed among faculty members on campus and is being implemented through the Senate. After reviewing some of the available documents, I feel compelled to address several concerns about the proposed instrument. As a statistician with a Ph.D. in Statistics and more than 10 years of experience teaching and consulting in the field, I would like to express my concerns about SRFI from a statistical point of view.

When the SPI was first implemented on campus over 20 years ago, its main purpose was to provide faculty with feedback on their teaching. It was not meant to be incorporated into the evaluation of faculty teaching for reappointment, tenure, promotion, or DSI. For some reason, it has now become one of the most important assessment tools, if not the primary one, for evaluating teaching effectiveness. Furthermore, teaching effectiveness is often reduced to a single number, the mean score, which has then been mistakenly used for comparative analyses of faculty members’ teaching effectiveness. That number is also frequently misinterpreted: it is assumed, for example, that a faculty member with an overall mean score of 3.8 is a more effective teacher than a colleague with a score of 3.7.
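To see why such a comparison is shaky, consider a purely hypothetical illustration; the class sizes and standard deviations below are my own assumed numbers, not actual SRFI data. Suppose each score comes from a class of 30 students and the individual ratings have a spread of about 0.9 points. The standard error of the difference between the two class means is then

% Hypothetical illustration only: assumed n = 30 students per class
% and an assumed standard deviation of 0.9 points for each class.
\[
SE_{\bar{x}_1 - \bar{x}_2}
  = \sqrt{\frac{s_1^{2}}{n_1} + \frac{s_2^{2}}{n_2}}
  = \sqrt{\frac{0.9^{2}}{30} + \frac{0.9^{2}}{30}}
  \approx 0.23,
\]

which is more than twice the observed gap of 0.1 points between 3.8 and 3.7. An approximate 95% confidence interval for the true difference, 0.1 ± 1.96 × 0.23, easily contains zero, so under these assumed numbers the two instructors’ scores are statistically indistinguishable.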