Higher Education Academy Annual Conference 2008


2nd July 2008, 11.15

E-valU8: on-line evaluation of the individual student's learning experience

Anne-Marie Warnes, Senior Lecturer, Faculty of Health, University of Central Lancashire (e-mail: [email protected])
Lucy Warman, Student Liaison Officer, Faculty of Health, University of Central Lancashire (e-mail: [email protected])

Abstract

The e-valU8 project aims to utilise technology to support students in effective methods of evaluating the quality of their learning experience, enabling open interaction and collaborative action planning. The paper will explore ways of engaging students when seeking feedback, through the development and pilot of an on-line evaluation tool.

Introduction

This paper will share the authors' experiences of devising and implementing an on-line evaluation tool, 'e-valU8', with students. The project aims to utilise technology to support students in effective and efficient methods of evaluating the quality of their learning experience on-line, enabling open interaction and collaborative action planning. It is essential that on-line evaluations are of value to students and to stakeholders both internal and external to the University. The student profile across the Faculty of Health is complex and diverse. The project was developed with assistance from the HEA Health Sciences and Practice Departmental Workshops in 2008, to focus on student engagement and feedback throughout the evaluation process, and on outcomes for students, staff, the university and the wider higher education (HE) community.

Although renowned for lower completion rates, on-line evaluation can provide an easily accessible and user-friendly means of gathering student feedback (Dommeyer, Baum, Hanna & Chapman, 2004), yet few forms of evaluation are designed for use at any time during a programme of study. Questionnaires do not always allow the flexibility to remain relevant to the current context, or provide the opportunity for students to respond in a variety of ways. Selection from a common bank of questions can produce a tailor-made, bespoke on-line evaluation tool which still allows meaningful aggregation across different cohorts. This increases the prospects for analysis across courses, disciplines and departments.

What is the student experience?

Yorke & Longden (2004) discuss the influence of the quality of the student experience on withdrawal, listing aspects such as teaching, organisation of the programme, staff support, personal support and class sizes. Tight (2003) identifies other areas incorporated in the student experience: access; institutional and course choices; on-course experience such as finances and stress; the experience of different groups, e.g. mature students; and the transition from higher education to work. Most universities use evaluation tools to explore the student experience in relation to teaching, where student feedback counts towards staff appraisal and pay issues (Fresko & Nasser, 2001).

Throughout the literature, researchers often refer to the "student experience" and the impact a particular factor may have on it, without providing a clear definition. The UCLan Student Experience Strategy 2007-2012 focuses on two categories, the learning experience and the living experience, with student life, accommodation, catering, recreation, clubs and health included as elements of the student experience.
Therefore, it could be argued that the "student experience" consists of everything that happens to the student from the time they consider applying to university until the time they start their first job after completing university, if not beyond. Indeed, Powney & Hall (1998) reiterate that many different factors, ranging from accommodation, travel and library resources through to social and pastoral support, are also key to a quality experience within higher education.

Why evaluate the student experience?

A cycle of evaluation and improvement based on student feedback is seen as a fundamental component of the process of quality improvement in universities (Ramsden, 1998). This quality assurance element is also deemed key to a university's market advantage, since students are increasingly seen as co-producers of education (Pudner, 2007). Student feedback allows universities to demonstrate the desirability of their offerings to a global market place, providing a more competitive edge (Baldwin & James, 2000).

Pitkethly & Prosser (2001) cite studies by McInnis et al. which found that initial experiences on campus are important and go as far as to influence whether students continue in higher education. Pitkethly & Prosser show that a high proportion of students withdraw from higher education because they have difficulty adjusting to environmental factors, suggesting that analysis of the reasons students give for withdrawing from a programme will help determine the course of action the university should take. It could be argued that if a university can analyse the students' experience and implement changes while the students are still at the university, it may aid retention of the current cohort rather than waiting for the next.

What is meaningful feedback?

Chen & Hoshower (2003) discuss the importance of students' willingness to participate actively and provide quality data in creating meaningful feedback. Student feedback is used for three important purposes: improving the teaching of a course, improving the course content, and making the feedback available to students who are deciding which course to apply for (Chen & Hoshower, 2003; Fresko & Nasser, 2001).

If students do not see any action as a result of their feedback, they may become sceptical and unwilling to participate. Anecdotal evidence from the Student Liaison Officer [1] (SLO) reinforces that students are averse to completing feedback where they see no personal benefit. They may not provide accurate feedback, actions taken by staff will not be meaningful, and the result is disengagement from the educational process. Leckey & Neill (2001) identify that this closing of the feedback loop is essential to total quality management. If students can access meaningful feedback which positively impacts on their own learning experience (Ballantyne, 2003), it is expected that this may enhance completion rates.

[1] The Student Liaison Officer is a member of staff, a recent graduate, who is available to offer impartial, independent advice to students. They also work towards improving the student experience and seek student opinions to inform the decision-making process.

Nelson (2006) states that students should be shown that their feedback is valued, by providing evidence of continuous improvement and feeding the outcomes back to them.
Without systems that lead to action and feedback, students will grow cynical about the process (Harvey, 2003) and will be less inclined to participate, raising important questions about what non-responders to evaluation are telling us.

Figure 1 (from Harvey, 2003) displays the process of completing the feedback loop, which Harvey refers to as the satisfaction cycle.

Timing of feedback needs to be considered. Anderson (2007) cites Luks (2007), who discusses the benefits of giving feedback as close to the event or period of time as possible. At UCLan, students are currently asked to provide feedback predominantly at the end of the module, which can be up to one academic year long. UCLan students in a focus group in 2007 suggested that they would prefer to provide feedback on an ongoing basis rather than at the end of the module. Wilson, Lizzio & Ramsden (1997) suggest that too many questionnaires can lead to questionnaire fatigue, which can compromise the results. As the university increases its effort to become more electronic in its administrative procedures, it is likely that a large number of electronic course evaluations delivered in a short space of time will compete for the personal time of the student, which is likely to affect response rates (Avery et al., 2006). This also reflects the demands of the National Student Survey and the Student Satisfaction Survey, so the timing of evaluation processes by staff is important in preventing evaluation fatigue.

Why on-line delivery?

Research shows that evaluation within higher education has traditionally been paper-based, predominantly within a classroom setting or via the postal system. Whilst classroom-based evaluation ensures a higher completion rate, the ethics could be questionable. As a captive audience, students may feel pressured into providing more positive feedback in the presence of the lecturer (Avery et al., 2006), may not have the opportunity to reflect on what they want to say or how they wish to respond, or may go through the motions of 'tickbox' evaluation in order to leave the room as soon as possible. This brings into question whether truly meaningful data can be collected in this format.

In contrast, on-line evaluation offers a more flexible approach in relation to how, when and where students complete it (Dommeyer et al., 2004). The on-line evaluation tool can reach each individual student easily, whether stored within the student's personal password-protected webspace or sent through email, and can be completed at the student's convenience.

Whilst online surveys are only one method of gathering feedback, it was anticipated that this flexible approach to evaluation would complement the flexible and increasingly technology-based learning environments experienced by students (Reid, 2003; Ballantyne, 2003; McGhee & Lowell, 2003; Dommeyer et al., 2004). This creates a need for policy frameworks, quality processes and on-line tools that provide comprehensive, timely and appropriate information which can be acted upon to improve the quality of the learning experience (Reid, 2001).

Issues to consider

On-line delivery also affords a system to trigger reminders or second requests for completion within a set period of time.
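The following is a minimal sketch of how such a reminder trigger might work, assuming hypothetical record fields (invited_on, completed, reminders_sent) and illustrative limits; it is not a description of the e-valU8 implementation, only an illustration of reminding non-responders within a set window while capping the number of reminders.

```python
from datetime import date, timedelta

# Hypothetical record of one student's invitation to an on-line evaluation.
# Field names are illustrative and not taken from the e-valU8 system.
class Invitation:
    def __init__(self, student, invited_on, completed=False, reminders_sent=0):
        self.student = student
        self.invited_on = invited_on
        self.completed = completed
        self.reminders_sent = reminders_sent

def due_for_reminder(inv, today, wait_days=7, window_days=28, max_reminders=2):
    """A student is due a reminder only if they have not yet completed the
    survey, the evaluation window is still open, the reminder cap has not
    been reached, and enough days have passed since the last contact."""
    if inv.completed or inv.reminders_sent >= max_reminders:
        return False
    if today > inv.invited_on + timedelta(days=window_days):
        return False  # the set period for completion has closed
    last_contact = inv.invited_on + timedelta(days=wait_days * inv.reminders_sent)
    return today >= last_contact + timedelta(days=wait_days)

# Example: at most two reminders, one week apart, within a four-week window.
invitations = [Invitation("student A", date(2008, 5, 1)),
               Invitation("student B", date(2008, 5, 1), completed=True)]
today = date(2008, 5, 9)
print([inv.student for inv in invitations if due_for_reminder(inv, today)])
# ['student A'] - only the non-responder is contacted again
```

Keeping the cap and window explicit makes it straightforward for staff to adjust them, which matters given the risk of contributing to email overload discussed next.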
However, the general overload of emails and spam may contribute to even lower response rates if staff are not careful in their usage (Avery et al., 2006). Many reports identify that completion rates are lower for on-line delivery methods (Johnson et al., 2003; Conn & Norris, 2005; Avery, Bryant, Mathios, Kang & Bell, 2006), yet Avery et al. (2006) suggest that this does not necessarily mean that the feedback is less meaningful, reliable or valid. Rewards, management of risk and development of trust are general means of increasing response rates (Reid, 2001). It is integral to this process to pay attention to administrative factors, motivators, timing and the best methods for feeding back. Assurances of confidentiality are a key issue for students and will enhance completion rates. Comments from a student, sent to the SLO, expressed concern about the implications of giving negative feedback through a system that did not protect their confidentiality. Whichever software is used, this confidentiality needs to be promoted across the university and advocated by student liaison services.

The Project Development and Student Involvement

The e-valU8 project brought together a range of methods to gather information, develop the tool and pilot it. A blog was used as an effective method of communication within the team and to maintain a historical record. Following anecdotal evidence from students and staff, two initial focus groups were held in early 2007 to investigate current experiences of feedback and evaluation methods and technologies. Members were invited to attend the focus groups using purposive sampling, in order to represent the wider student and staff populations.

The focus groups highlighted the need to tailor evaluations to meet identified student and course requirements, and to make them available at the most relevant time for the module leader, course leader and the students. Feedback included:

'end of module feedback when collected is outdated and it is often too late to take action which makes a difference to the student's experience'

'it is important to pick up student issues and to deal with them when they occur versus at the end of the programme'

'can feedback be an ongoing exercise rather than end of module?'

These comments strongly reflect the research discussed earlier: ownership, consequence and closure of the feedback loop are important to students, suggesting that on-line evaluations need to be used proactively across the academic year. Reid (2001) identifies that students also want evidence that the university values their feedback, so closure of the feedback loop is essential. This relies on the attitudes of the staff and the culture of the university: how they value student feedback, and whether they are motivated to make changes as a result (Fresko & Nasser, 2001).

If staff or students feel that students are unqualified to provide constructive feedback or to understand the terminology and educational concepts (Avery et al., 2006), they will disregard evaluation results (Reid, 2001; Afonso et al., 2005). This brings an interesting perspective: that all students should be educated in giving and receiving meaningful feedback (Kogan & Shea, 2007). Pudner (2007) questions whether feedback is a professional responsibility and is therefore a professional assessment.
This would allow all of us, with support, to become more effective practitioners, setting evaluation in a real-world context where constructive feedback can be transformed into meaningful, communicated actions (National Conference on Student Evaluation, 2007). If students believe that their opinions are being taken seriously, they will be more inclined to provide careful and thoughtful responses (Albanese, 2000) rather than the box-ticking approach that is often seen. This initial consultation strongly supported the project team's vision of creating a bespoke evaluation tool that allows students to provide meaningful feedback on their own experiences, and allows staff to respond to and act upon this in a timely manner.

Developing the Questions

To determine what questions might be of value in the questionnaire, a purposive sample of students was asked to look at a bank of questions based upon the University of Wollongong on-line evaluation tool. Students were asked to indicate which questions would be particularly useful to them and why, and how best these questions could be answered. Findings showed that different groups of students preferred questions that were most relevant and applicable to their level and programme of study. For example, e-learning students chose questions specific to the learning environment, the use of technology and support, and disregarded those more pertinent to campus-based full-time students. This highlighted the need for staff to tailor questionnaires to the student group being evaluated, through discussion prior to setting up the survey to determine what, when and how evaluation should take place. A brief illustrative sketch of tailored selection from a shared question bank is given below, after the discussion of handbooks.

Testing the Usability

Volunteers were sought from the student body to test the usability of the e-valU8 tool. The turnout for this occasion was disappointing: students cited family commitments, assignment deadlines and work as reasons not to take part in additional activities such as this. The students who did attend represented the diversity of the student body. Students were positive about the usability, finding the tool easy to use. They also welcomed the opportunity to receive feedback from staff and to see the consequences of participation as a proactive process, considering this to be the strongest and most valuable feature. The only negative aspect of the testing process was that the students found the survey difficult to locate. This is under review; the survey will be made more visible in the future, supported by alternative technical means such as e-learn, mobile phone text messaging or email, to utilise push technology.

Handbooks for Staff and Students

As the tool will be designed for all staff to use as they feel appropriate, there is a risk that they may subject students to questionnaire fatigue, deliver surveys at inappropriate times, or fail to action the feedback and close the feedback loop with the students; a staff handbook is therefore needed. The handbook will offer step-by-step advice to guide members of staff through the process of preparation, delivery, data extraction, analysis and feedback to the students. A guide will also be produced for students, to support them with the use of the tool and the provision of constructive feedback (Reid, 2001). Staff will be advised to discuss the purpose of the survey with students in a manner that ensures the students understand what they are providing feedback on, without influencing responses.
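As a concrete illustration of the tailored selection from a shared question bank described in 'Developing the Questions' above, the following sketch shows one possible arrangement: questions carry tags such as 'e-learning' or 'campus', staff select those relevant to a cohort, and responses to questions shared across cohorts can still be aggregated for analysis across courses, disciplines and departments. The identifiers, tags and question wording are invented for illustration and are not drawn from the Wollongong bank or from e-valU8 itself.

```python
# A minimal sketch of a shared question bank with per-cohort tailoring.
# Identifiers, tags and wording are illustrative only.
QUESTION_BANK = {
    "Q1": {"text": "The module objectives were clear to me.", "tags": {"all"}},
    "Q2": {"text": "On-line materials were easy to access.", "tags": {"e-learning"}},
    "Q3": {"text": "Campus facilities supported my learning.", "tags": {"campus"}},
}

def select_questions(bank, cohort_tags):
    """Staff tailor the survey by choosing the tags relevant to a cohort."""
    return {qid: q["text"] for qid, q in bank.items()
            if q["tags"] & (cohort_tags | {"all"})}

def aggregate(responses, question_id):
    """Because all cohorts draw on the same bank, responses to a shared
    question can be averaged across cohorts, courses or departments."""
    scores = [r[question_id] for cohort in responses.values()
              for r in cohort if question_id in r]
    return sum(scores) / len(scores) if scores else None

# Example: an e-learning cohort and a campus-based cohort answer on a 1-5 scale.
print(select_questions(QUESTION_BANK, {"e-learning"}))   # Q1 and Q2 only
responses = {
    "e-learning cohort": [{"Q1": 4, "Q2": 5}],
    "campus cohort":     [{"Q1": 3, "Q3": 4}],
}
print(aggregate(responses, "Q1"))  # 3.5 - aggregated across both cohorts
```

Keeping the bank shared while leaving the selection to staff preserves comparability of the common questions while still letting each cohort see only what is relevant to it.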
Piloting the Evaluation Tool

As discussed by Walford (1998), the purpose of a pilot is to ensure that the concept of the project and its operation are suitable for the nature of the problem being investigated. Timm (1994) also comments that a pilot can allow the researcher to spot any glitches. The project team has determined that the pilot will test the usability of the tool from both staff and student perspectives, evaluating the time, consequences and value of developing a system which engages staff and students in a potentially additional task, but one which is embedded within the core curriculum.

Evaluating the Process

The evaluation of the project will need to establish whether the students felt that their feedback was valued, whether action was taken and the feedback loop closed, and whether this motivated them to participate and provide quality feedback. This will include both qualitative and quantitative research, allowing us to derive statistics and an in-depth understanding of staff and students' opinions of, and attitudes towards, the tool. Kelle (2006) suggests that it is important to combine the two so that the strengths and weaknesses of each are compensated for. The evaluation of the staff's experience will include questionnaires and interviews. However, when evaluating the students' experience of the tool, the researchers need to be conscious of the questionnaire fatigue discussed earlier; the evaluation method used with students will therefore be focus groups.

It is expected that the pilot will not reveal the anticipated attitude change in the students towards the feedback process. The theory of reasoned action, as discussed by Ajzen (2005), looks at the relationship between behaviour and social norms, attitudes and beliefs. It is expected that it will take a period of time for the social norms within the university, and the students' attitudes and beliefs towards feedback, to change.

Expected Outcomes and Recommendations

From the initial focus groups, developing the bank of questions, reviewing the literature and testing the interface, it is evident that students want and need to be involved in all aspects of evaluation. Students need to see a response and action arising from their feedback, so closure of the feedback loop is essential within the project. The e-valU8 tool will be used together with other sources of feedback (such as modular and course evaluation, external examiner reports and staff-student discussion) in order to provide a more detailed picture of the student experience, but the timing of this evaluation process is important to prevent evaluation fatigue, to enhance completion rates and to ensure that the evaluation is meaningful. A range of administrative and motivational factors needs to be considered when using on-line evaluation. The e-valU8 project has identified that a clear system supporting students and staff will have to be in place before the tool can be piloted and evaluated in late 2008. The e-valU8 project continues at this time.

References

Afonso, N.M., Cardozo, L.J., Mascarenhas, O.A., Aranha, A.N. and Shah, C. (2005) Are anonymous evaluations a better assessment of faculty teaching performance? A comparative analysis of open and anonymous evaluation processes. Family Medicine, Vol. 37, pp. 43-47.

Albanese, M.A. (2000) Challenges in using rater judgements in medical education. Journal of Evaluation in Clinical Practice, Vol. 6, No. 3, pp. 305-319.
Ajzen, I. (2005) Attitudes, Personality and Behaviour (2nd Edition). Berkshire: McGraw-Hill Education.

Anderson, M.B. (2007) New Ideas in Medical Education: Really Good Stuff. Medical Education, Vol. 41, No. 11, pp. 1083-1111.

Avery, R.J., Bryant, W.K., Mathios, A., Kang, H. and Bell, D. (2006) Electronic Course Evaluations: Does an Online Delivery System Influence Student Evaluations? Journal of Economic Education, Vol. 37, No. 1, pp. 21-37.

Baldwin, G. and James, R. (2000) The market in Australian higher education and the concept of student as informed consumer. Journal of Higher Education Policy and Management, Vol. 22, No. 2, pp. 139-148.

Ballantyne, C. (2003) On-line evaluations of teaching: an examination of current practice and considerations for the future. New Directions for Teaching and Learning, No. 96, pp. 103-112.

Billings-Gagliardi, S., Barrett, S.V. and Mazor, K.M. (2004) Interpreting course evaluation results: insights from think-aloud interviews with medical students. Medical Education, Vol. 38, No. 10, pp. 1061-1070.

Chen, Y. and Hoshower, L.B. (2003) Student Evaluation of Teaching Effectiveness: an assessment of student perception and motivation. Assessment & Evaluation in Higher Education, Vol. 28, No. 1, pp. 71-89.

Conn, C. and Norris, J. (2005) Investigating strategies for increasing student response rates to on-line delivered course evaluations. The Quarterly Review of Distance Education, Vol. 6, No. 1, pp. 13-29.

Dommeyer, C.J., Baum, P., Hanna, R.W. and Chapman, K.S. (2004) Gathering faculty teaching evaluations by in-class and online surveys: their effects on response rates and evaluations. Assessment and Evaluation in Higher Education, Vol. 29, No. 5, pp. 611-623.

Fresko, B. and Nasser, F. (2001) Interpreting Student Ratings: Consultation, Instructional Modification and Attitudes Towards Course Evaluation. Studies in Educational Evaluation, Vol. 27, No. 4, pp. 291-372.

Harvey, L. (2003) Student Feedback. Quality in Higher Education, Vol. 9, No. 1, pp. 3-21.

Johnston, B.T., Wilson, N. and Boohan, M. (2001) Feedback of audit results can improve clinical teaching (but also may impair it). Medical Teacher, Vol. 23, No. 6, pp. 576-579.

Kelle, U. (2006) Combining qualitative and quantitative methods in research practice: purposes and advantages. Qualitative Research in Psychology, Vol. 3, No. 4, pp. 293-311.

Kogan, J.R. and Shea, J.A. (2007) Course Evaluation in Medical Education. Teaching and Teacher Education, Vol. 23, No. 3, pp. 237-322.

Leckey, J. and Neill, N. (2001) Quantifying Quality: the Importance of Student Feedback. Quality in Higher Education, Vol. 7, No. 1, pp. 19-32.

McGhee, D.E. and Lowell, N. (2003) Psychometric Properties of Student Ratings of Instruction in On-line and On-Campus Courses. New Directions for Teaching and Learning, No. 96, pp. 39-48.

Nelson, D.L. (2006) Online Student Ratings: Increasing Response Rates. Athabasca University.

Pitkethly, A. and Prosser, M. (2001) The First Year Experience Project: a model for university-wide change. Higher Education Research & Development, Vol. 20, No. 2, pp. 185-198.

Powney, J. and Hall, S. (1998) Closing the Loop: The Impact of Student Feedback on Students' Subsequent Learning. Edinburgh: Scottish Council for Research in Education.

Pudner, H. (2007) The Learner Voice. National Conference on Student Evaluations: Dissemination and Debate, 26 October 2007. Higher Education Academy Medicine, Dentistry & Veterinary Medicine, University College London.
Ramsden, P. (1998) Managing the Effective University. Higher Education Research & Development, Vol. 17, No. 3, p. 347.

Reid, I. (2001) Reflections on using the Internet for the evaluation of course delivery. The Internet and Higher Education, Vol. 4, No. 1, pp. 61-75.

Tight, M. (2003) Researching Higher Education: Issues and Approaches. Berkshire: McGraw-Hill Education.

Timm, P. (1994) Business Research: An Informal Guide. Menlo Park, USA: Course Technology Crisp.

Walford, G. (Ed.) (1998) Doing Research in Education. London: Falmer Press.

Wilson, K.L., Lizzio, A. and Ramsden, P. (1997) The development, validation and application of the Course Experience Questionnaire. Studies in Higher Education, Vol. 22, No. 1, pp. 33-53.

Yorke, M. and Longden, B. (2004) Retention and Student Success in Higher Education. Maidenhead: Open University Press.
