
Global Perspectives on Measuring Quality

Proceedings of the 2010 Strategic Leaders Global Summit on Graduate Education

September 13-15, 2010 Brisbane, Australia

Council of Graduate Schools

Managing Editor: Julia D. Kent

This project was supported in part with funding from ProQuest and by the Australian government through the Department of Education, Employment and Workplace Relations (DEEWR). The views expressed here are those of the authors and do not necessarily represent the views of the funders.

Copyright © 2011 Council of Graduate Schools, Washington, D.C.

ALL RIGHTS RESERVED. No part of this work covered by the copyright herein may be reproduced or used in any form by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, Web distribution, or information storage and retrieval systems—without the written permission of the Council of Graduate Schools, One Dupont Circle, NW, Suite 230, Washington, D.C. 20036-1173.

ISBN-13: 978-1-933042-29-9 ISBN-10: 1-933042-29-X

TABLE OF CONTENTS

Foreword ...... vii

Acknowledgments ...... viii

I. Introduction: Measuring Quality in Graduate Education and Research Training ...... 1
Debra W. Stewart, Council of Graduate Schools

II. National Contexts for Assessment: What Are We Measuring and Why? ...... 5

Helene Marsh, James Cook University (Australia)
Jeffery Gibeling, University of California, Davis (U.S.)
Douglas Peers, York University (Canada)
He Kebin, Tsinghua University (China)
Narayana Jayaram, Tata Institute of Social Sciences (India)
Kyung-Chan Min, Yonsei University (South Korea)

III. Laying a Strong Foundation for Institutional Assessment ...... 41

Benefits and Challenges of Quality Assessment ...... 46
Kim D. Nguyen, Ho Chi Minh City University of Education (Vietnam)
Surasak Watanesk, Chiang Mai University (Thailand)

Communicating the Value of Assessment ...... 53
Patrick Osmer, Ohio State University (U.S.)

Administrative Challenges & Opportunities ...... 56
Ursula Lehmkuhl, Freie Universität Berlin (Germany)
Alan Dench, University of Western Australia, and Maxwell King, Monash University (Australia)

IV. Supporting the Development of Research Training Environments ...... 66


External Assessments of the Research Environment ...... 71
Akihiko Kawaguchi, National Institution for Academic Degrees and University Evaluation (Japan)
Paul K.H. Tam, University of Hong Kong (Hong Kong)

The Role of Global Rankings ...... 82
Chao Hui Du, Shanghai Jiao Tong University (China)
Jan Botha, Stellenbosch University (South Africa)

University Assessments of the Research Environment ...... 92
Sang-Gi Paik, Chungnam National University (South Korea)
Marie Carroll and Richard Russell (Australia)

Admission, Retention and Completion in Doctoral Degree Programs ...... 104
Charles Tustin, University of Otago (New Zealand)

V. Using Quality Measures to Support Program Content and Design...... 110

Outcomes of Learning and Research Training ...... 116
Zhen Liang, Harbin Institute of Technology (China)
Julia Kent, Council of Graduate Schools (U.S.)

Mentoring and Supervision ...... 125
Barbara Evans, University of British Columbia (Canada)
Gregor Coster, University of Auckland (New Zealand)

Interdisciplinary Programs ...... 139
Zlatko Skrbis, The University of Queensland, and Mandy Thomas, Australian National University (Australia)
Karen DePauw, Virginia Polytechnic Institute and State University (U.S.)

VI. Skills, Competencies, and the Workforce...... 150

Defining and Measuring Professional Skills ...... 156
Austin McLean, ProQuest (U.S.)
Illah Sailah, Ministry of National Education (Indonesia)

Linking Professional Training Programs to Workforce Needs ...... 162
Laura Poole-Warren, University of New South Wales, and Richard Strugnell, University of Melbourne (Australia)

Rose Alinda Alias, Malaysia Dean of Graduate Studies Council, Universiti Teknologi Malaysia (Malaysia)

Career Pathways ...... 172
Jianhua Yan (China)
Allison Sekuler, McMaster University (Canada)

VII. Measurements without Borders?...... 183

Tools and Methods for Assessing Quality Internationally ...... 188
Jean Chambaz, University Pierre and Marie Curie (France)
Maxwell King, Monash University (Australia)

Promising Practices for Administering Quality Assessments ...... 198
Tan Thiam Soon, National University of Singapore (Singapore)

Assessing Quality in (Post)-Graduate International Collaborations ...... 202
Le Thi Kim Anh, Ministry of Education (Vietnam)
Andrew Comrie, University of Arizona (U.S.)

VIII. Conclusion: Guiding Principles for Measuring Quality...... 216 Debra W. Stewart, Council of Graduate Schools

Appendices...... 218

A. Principles and Practices for Assessing the Quality of (Post)-Graduate Education and Research Training...... 218

B. Supplementary Country Papers ...... 222
Illah Sailah (Indonesia)
Akihiko Kawaguchi (Japan)
Gregor Coster and Charles Tustin (New Zealand)
Tan Thiam Soon (Singapore)
Jan Botha (South Africa)
Le Thi Kim Anh and Kim D. Nguyen (Vietnam)

C. Participant Biographies...... 247

Note: The names of co-authors who did not attend the summit appear at the top of the essay to which they contributed.

FOREWORD

The 2010 Global Summit in Brisbane, Australia, “Measuring Quality in Graduate Education,” addressed a topic of growing importance in graduate education systems worldwide. Whereas in undergraduate education, the assessment of quality for the purposes of accountability and improvement has become a well-established practice, quality assessment at the graduate level, particularly in doctoral education, is a relatively new phenomenon. Increasingly, governments and other external stakeholders are expanding efforts to measure the quality of their graduate education systems along with the capacity of those systems to serve national needs. Meanwhile, institutions are also developing strategies to enhance the quality of their programs and the preparation of graduate students to solve pressing global problems. The 2010 Global Summit provided an opportunity for a diverse group of graduate leaders to better understand the differences among their national quality assessment frameworks, share assessment methods and practices, and identify emerging best practices in this area. Global Perspectives on Measuring Quality highlights both the differences and areas of common ground that emerged at the 2010 Global Summit and aims to spark future thinking and discussion of this important topic.

Debra W. Stewart President Council of Graduate Schools

ACKNOWLEDGMENTS

This volume reflects the contributions of many collaborators, both individuals and groups, from around the world. For ensuring that the summit reflected a diverse range of national perspectives, I first thank the international Steering Committee that oversaw the development of the summit agenda, and all of the 2010 summit delegates, who gave the agenda life through thoughtful papers and contributions to the discussion sessions. On behalf of the Council of Graduate Schools, I express the warmest thanks to our summit co-hosts in Australia, the Group of Eight (Go8) and the Deans and Directors of Graduate Studies (DDoGS), partners at every stage of the process. Michael Gallagher, Executive Director at Go8, committed vision and resources, along with the time of an exceptional staff: Susie Barrett, Jasonne Grabher, Meagan James, Jane Liang, Les Rymer, and Kerrie Thornton. As an ongoing contributor to the Summit, Maxwell King of DDoGS ensured that the Brisbane summit built on accomplishments of past years while also paving the way for future best practice exchange among international leaders in graduate education. Finally, I thank The University of Queensland and its Vice-Chancellor, Paul Greenfield, for providing the historic Brisbane Customs House as the summit venue. A committed staff at the Council of Graduate Schools played an important role in the planning of the 2010 Summit and the preparation of this volume: I thank Julia Kent, Director of the Summit and editor of these proceedings; Daniel Denecke, for sharing his international expertise at key points in the planning process; Eleanor Babco and Heidi Miller, for assistance in coordination and planning; Patty McAllister and Belle Woods, for overseeing press communications; Cheryl Flagg, for editorial assistance; and Joshua Mahler, for assistance with web resources and publication design. I reserve a very special thanks for ProQuest, whose generous gift made it possible to realize the ambitions of our many summit contributors. CGS is deeply grateful for their commitment to supporting graduate education and for their recognition of the global issues that connect graduate institutions around the world.

Debra W. Stewart President Council of Graduate Schools

I. INTRODUCTION: MEASURING QUALITY IN GRADUATE EDUCATION AND RESEARCH TRAINING

The Fourth Annual Strategic Leaders Global Summit on Graduate Education1 provides an occasion to reflect on the history and accomplishments of what has now become a well-established international forum. The first Global Summit was convened in Banff, Canada, in 2007, and set the goal of creating an international dialogue about the national and global movements shaping graduate education. Along with a group of partner organizations represented in Banff, the Council of Graduate Schools set out to better understand issues such as the increasing mobility of graduate students and faculty, the expansion of global networks of research and education, and the preparation of graduate students to enter new career pathways requiring global skills. The summit also addressed the practical question of how graduate deans and other senior university leaders might create a platform for the exchange of best practices in these and other areas. Participants from seven countries took a bold step toward creating this platform by collectively announcing a set of consensus points, the Banff Principles, which outlined nine areas of priority and future work. Key among these principles was a plan to develop an inclusive global forum for the discussion of best practices in graduate education. Since the founding summit in 2007, CGS has been committed to working with our partners in parallel organizations to ensure that each annual forum is dedicated to a topic of common concern to graduate institutions worldwide. For the 2008 summit in Florence, Italy, Research Ethics and Scholarly Integrity in a Global Context, we explored a topic that directly affects the quality of research as well as the preparation of the next generation of researchers—the ethical questions that are emerging with the globalization of research. In 2009, the Global Summit convened in San Francisco under the title Graduate International Collaborations: How to Build and Sustain Them.


This theme built upon a principle agreed upon in Banff, the need to promote high-quality graduate international collaborations, and explored the challenges and opportunities surrounding collaborations such as joint degrees, dual degrees, and research exchanges. The 2010 Global Summit also built upon one of the Banff Principles, which recognized that quality assessment is one of the most important issues now facing graduate institutions throughout the world. As the Banff delegates declared four years ago in their consensus statement, “It is necessary to include and share research related to improving the process and outcomes and particularly share best practice in quality assurance. All of these efforts lead to enhanced recognition of graduate qualifications and confidence in collaborative programs.” Yet in graduate education and research, “quality” is becoming more and more difficult to define. For example, different types of measurement may seem to have conflicting goals: Quality Assurance (QA) is typically seen as a process for maintaining minimum standards of quality, while Quality Assessment is often defined as an effort to seek continuous improvement with reference to a set of aspirations that surpass minimum standards. Most graduate institutions are engaged in both types of measurement, but as global competition increases, there is now a greater push toward standards of excellence. This issue clearly called for more focused attention. We were delighted when our co-organizers for the 2010 summit, the Group of Eight (Go8) and the Council of Australian Deans and Directors of Graduate Studies (DDoGS), proposed “Measuring Quality” as the topic for the 2010 Summit, and when the Go8 offered to host the event in Brisbane. In partnership with both organizations, we assembled a group of unprecedented diversity: 42 representatives from 17 countries. This expansion reflects the impact of the summit and a promise of the impact it will have in years to come. In conversation with the members of the international Steering Committee for the 2010 summit, we developed an agenda of broad global relevance. Our goals for this year’s summit included:

• Developing a clearer understanding of the national and regional contexts for Quality Assessment in graduate education;


• Clearly defining the areas of graduate education that need to be measured, with appropriate differentiations for Master’s and Ph.D. programs;
• Exchanging institutional strategies and methods for measuring quality;
• Sharing institutional strategies for using quality measurements to improve the quality of learning and research;
• Comparing approaches to defining and measuring professional skills, and for building better pathways to the workforce; and finally and most ambitiously,
• Developing a set of international principles for Quality Assessment.

As in previous Global Summits, we began by surveying the national and regional landscapes that shape the assessment of quality in graduate education. Panel 1, the introductory session, was designed to provide a context for later discussions by highlighting different ideas of quality as they are defined and measured by organizations and governmental bodies that oversee the quality of graduate education and research. Due to the limited number of speaking slots, Panel 1 included formal presentations on the largest of the 17 countries represented. All participants were invited to submit country papers, however, in addition to the papers they delivered for other panels. Formal presentations are included in Part 1 of this volume, and the supplementary papers are included in Appendix B. These concise overviews will be very useful to anyone seeking to learn about the issues surrounding quality assessment in particular countries and regions. In Panels 2-6 (corresponding to Parts 2-6 of the volume), we began to address the specific practices and procedures that institutions have developed to improve quality in graduate education and research training. In Panel 2, “Laying a Strong Foundation for Measuring Quality,” presenters described their efforts to create an institutional culture in which the value of quality assessment is understood and communicated to all groups of stakeholders. Panel 3, “Supporting the Development of Research Training Environments,” shed light on the work of governments, accreditors, and universities to improve all aspects of the research environment, while Panel 4, “Using Quality Measures to Support Program Content and Design,” examined the ways in which assessments are used to improve teaching, mentoring, and research training programs. Undoubtedly, the global economy tests the quality of our graduate education systems, challenging us to give students the best professional preparation possible, and we explored this topic in Panel 5, “Skills, Competencies, and the Workforce.” The last session, Panel 6, “Measurements without Borders?,” then challenged us to think about how we might develop assessment practices that are specifically designed to transect different graduate education systems. The quality of graduate education and research, a complex amalgam of different and emerging metrics in any country, becomes even more difficult to define when 17 countries are represented. Yet our conversations gave us an important starting point for building some agreement around the many topics we discussed. The expertise shared in these panels set the groundwork for the final session of the summit, in which all participants worked together to agree upon a set of principles to guide the assessment of quality in graduate education across our various national and regional contexts. The leadership represented in this year’s Global Summit was diverse, broad, and deep, and the outcomes of the 2010 Global Summit will certainly lend weight and substance to the ongoing international dialogue about this timely topic.

Debra W. Stewart President Council of Graduate Schools

1 I use the term “graduate” throughout this paper as it reflects my own context in the U.S. and North America while drawing attention to the fact that the term “postgraduate” is used throughout the volume by participants from Australia and the U.K. The summit planning committee has created the term “(post)-graduate” to underline the fact that different terms are used to refer to Master’s and Doctoral education, and all participants were encouraged to use the term used in their own national context.

II. NATIONAL CONTEXTS FOR ASSESSMENT: WHAT ARE WE MEASURING AND WHY?

Summary of Presentations and Discussion

The opening panel of the 2010 Strategic Leaders Global Summit began with a deceptively simple question about the quality of graduate education: What are we measuring, and why? While all countries share the goal of understanding and improving the quality of graduate training, the term “quality” may be defined in multiple and potentially conflicting ways, both within and between countries. The past ten years have also seen enormous growth in the production of new methodologies for measuring the quality of graduate training programs and research environments. In beginning an international conversation about the new and changing landscape of quality assessment, then, it was important to consider the national frameworks that shape quality assessment in different countries. To this end, we asked participants to prepare country papers that described the major issues, approaches, and regulatory structures that shape quality assurance and assessment in their countries and regions. A broad set of questions was provided as a framework:

• Methods and Metrics: What aspects of quality are measured? What research methods (qualitative, quantitative, longitudinal, etc.) are used for different types of assessment? Which methods measure quality directly and which are proxy measures?
• Reporting and Oversight: Who is responsible for developing, administering, and analyzing quality measurements? To what extent are the data generated in quality assessments made public or reportable to government or other bodies?


• Accreditation and Funding: How do the processes and outcomes of quality assessment interface with accreditation or funding processes? How are they tied to evaluations of quality in research output more generally?
• Benefits and Risks: Have these measurements benefited universities in your national and regional contexts? Have they benefited your nation and region more generally? Have any unintended consequences, positive or negative, resulted from attempts to measure quality in (post)graduate research education?
• Systems for Comparison and Ranking: How does measuring quality in (post)graduate research training affect national or international ranking systems? Is it possible to usefully compare institutions either nationally or internationally?
• Data-Driven Interventions: How are quality measurements used to design interventions, and at what level? Who is affected by these interventions?

These questions were formally addressed in presentations by participants from Australia, Canada, China, India, South Korea, the United Kingdom and Europe, and the United States. All participants were invited to provide country papers, however, and additional papers by representatives from Indonesia, Japan, New Zealand, Singapore, South Africa, and Vietnam can be found in Appendix B. The discussion following the presentations made it possible to examine these contexts further and to clarify important points of difference and similarity. While the conversation touched on a wide range of issues, observations fell into three general topic areas—global trends, the role of governments, and global challenges—that are summarized below.

Global Trends

The discussion of national contexts shed light on a number of larger global trends affecting the assessment of graduate education. Perhaps the most significant of these is the rapid growth of graduate education in many countries, the largest being China and India. Kebin He (Tsinghua University) reported that since 1999, following China’s national plan to expand graduate education, the number of newly admitted graduate students has increased at a rate of 20-30% per year. In this context, leaders of Chinese graduate education have been challenged to ensure that this growth does not compromise quality. In India, the expansion of graduate education has emerged in a different social and economic context, but as was observed by Narayana Jayaram (Tata Institute of Social Sciences), India must also work to catch up to this growth, and to meet new demands from both the government and students that institutions demonstrate a return on investments in graduate education. A number of other country papers touch on the issue of quality in an environment of rapid expansion, in particular, Le Thi Kim Anh and Kim D. Nguyen’s paper on graduate education in Vietnam (see Appendix B).

The second global trend examined in papers and discussions was the growing interest in bridging doctoral education and non-academic workforce sectors. Many participants indicated that this trend has brought about new challenges and opportunities for assessing graduate training. Douglas Peers (York University) asserted that training for non-academic careers is often not considered a dimension of quality because faculty tend to train students on the academic model that shaped their own graduate training. Approaches to assessing the quality of doctoral training for diverse career pathways are explored further in Panel 5, “Skills, Competencies, and the Workforce.”

The third global trend was not as explicitly stated as the first two, but was implicit in many participants’ descriptions of the assessment processes in their countries. Whereas in the past, assessment of programs was often focused on evaluating the past or the present, there is an increasing trend toward future-looking assessment, or assessment with the goal of improvement. This trend can be observed in the number of presentations that highlighted the importance of self-assessment, a form of review that typically results in concrete progress goals for institutions and programs. Jeffery Gibeling (University of California, Davis) emphasized the importance of this approach in his remarks on doctoral program review in the United States: “In program review, there is always something to fix, and that’s the point […] There is always something to do, which means that follow-up is very important, that the recommendations turn into actions.”

The Role of Governments

Notwithstanding the broader global trends described above, there are significant differences in the role of governments in supporting the quality of graduate education and research. As the first collection of papers shows, quality assessment of graduate education may involve different levels of oversight involving the federal government, state or provincial governments, independent regulatory bodies, and institutions. Of course, important variations arise in the degree to which federal governments directly regulate the quality of higher education: they may play a direct role in the accreditation of institutions or the creation or administration of quality standards, or they may give part or all of this authority to provinces, governmental agencies, or non-governmental organizations. Independent regulatory agencies may also grant institutions varying degrees of autonomy in the process of assessing programs and developing plans to meet requirements. One important issue that emerged from the discussion was whether federal governments, often criticized for heavy bureaucracy and a lack of sensitivity to institutional culture, can effectively improve the quality of graduate education with standards, national-scale assessments, or incentives. As many noted, federally-mandated assessment efforts may come with limitations and risks. Representing South Africa, Jan Botha (Stellenbosch University) underlined the potential for national standards to homogenize regional and institutional variety in ways that may compromise quality. Kyung-Chan Min (Yonsei University) also pointed to the fact that some government assessment efforts focus heavily on quantitative measures of quality that may need to be supplemented with additional qualitative measures.1 National reform movements may come with distinct advantages, however, especially when they give universities the power to decide how best to meet quality standards and are accompanied by funding incentives. Barbara Evans (University of British Columbia) pointed out two such reforms: Australia’s Research Training Scheme, which since 2001 has tied university funding for graduate students to completion rates (a university does not receive funding for a student until he or she completes), and in the U.K., the 2003 “Roberts Funding,” which gave strong material support for the development of transferable skills for doctoral students. In general, participants agreed that universities must work to play a direct role in the development and refinement of quality standards upheld by governments and regulatory bodies. Maxwell King (Monash University) remarked that one way for leaders of graduate education to have a strong voice in the development of quality standards for graduate education is to organize at the national level through associations of graduate deans or other leaders with responsibility for maintaining and enhancing quality in graduate education.

Global Challenges

Many of the obstacles surrounding quality assessment in specific countries also proved to be larger global challenges. One of these is ensuring accurate reporting in quality assessments, and in self-assessments in particular. Ursula Lehmkuhl (Freie Universität Berlin) asked participants to share their recommendations for promoting an honest evaluation process. Participants involved in the design and implementation of national assessments offered a number of suggestions, including: 1) Clearly communicating the goals of the assessment; 2) Clearly communicating the outcomes of the evaluation process; 3) Carefully selecting expert external examiners; 4) Ensuring, when possible, the confidentiality of self-assessments; 5) Presenting assessment as a future-oriented activity that supports the improvement of quality; 6) Encouraging faculty to make realistic goals that can be met in the allotted timeline for improvement; and 7) Carefully examining whether the self-assessment was truly a group activity. Mary Ritter (Imperial College London) added that graduate leaders can support honesty in assessment by positively responding to those who report weaknesses and a desire to improve them.

A second global challenge was encouraging greater attention to the quality of research training in environments (or national assessment frameworks) that place significant stress on research output. Panel 2, “Laying a Strong Foundation for Institutional Assessment,” and Panel 3, “The Development of Research Training Environments,” give closer attention to this issue. As Patrick Osmer (Ohio State University) observed, however, it is also possible to use successes in the sphere of research as a metric of graduate training quality. Career placement data and bibliometrics for graduate publications have the potential to be reliable measures of program quality, he noted, because they are based on assessments conducted by individuals outside a student’s program and institution.

Conclusion

While focused on national contexts for quality assessment, the first panel discussions demonstrated that national frameworks are increasingly shaped by international forces and sources of expertise. The graduate leaders who participated in the summit often referred to quality assessment systems in other countries, and indicated that others’ successes in specific areas had helped them to reflect on key issues in their own contexts. The presentations also highlighted compelling examples of the ways in which some countries have made international perspectives a key feature of national quality assurance systems. As was highlighted in the presentation by Helene Marsh (James Cook University), for example, examiners for the doctoral thesis must include internationally recognized experts.2 It was the goal of the following sessions, and of the summit as a whole, to see what each country might learn from outside perspectives on national challenges, and together build toward a set of international best practices for improving the quality of graduate education.

1 Some governments are working to incorporate qualitative measures. Helene Marsh (James Cook University) cited the federally-funded Postgraduate Research Experience Questionnaire (PREQ) in Australia, which surveys successful candidates’ impressions of their supervision and the intellectual climate of their programs.

2 The same requirement also exists in New Zealand, as Gregor Coster (University of Auckland) and Charles Tustin (University of Otago) note in their country paper on quality measurement in New Zealand.

Maintaining and Measuring Quality in Research Doctorates in Australia

Helene Marsh
Chair, Australian Deans and Directors of Graduate Studies
Dean of Graduate Research Studies, James Cook University

Background

Doctoral programs in Australia include PhDs (by far the more common) and other doctorates. Most of the ‘other doctorates’ are self-described as ‘professional doctorates’ which have been developed in recognition of the need for diversity in research training contexts and outcomes. The more fundamental distinction (because it affects government funding and status) is between doctorates by research (which government regulations specify must be at least two-thirds research) and doctorates by coursework (research component < two-thirds). The nature of Australian doctorates is specified in the Australian Qualifications Framework.1 Academic programs in Australian universities are audited by the Australian Universities Quality Agency (AUQA), an independent, not-for-profit national agency that promotes, audits, and reports on quality assurance in Australian higher education. From 2011, this role will be assumed by the Tertiary Education Quality and Standards Agency (TEQSA), an independent body with powers to regulate university and non-university higher education providers, monitor quality and set standards. Australian PhDs and research professional doctorates are supervised research doctorates that require the candidate to have undertaken a period of research education leading to the successful design, implementation, analysis, theorizing and writing of research that makes a significant and original contribution to knowledge (and/or professional practice). Professional doctorates tend to have more coursework than PhDs and to provide candidates with the option of submitting their research outputs as a portfolio rather than a traditional thesis. But the distinctions are blurring as these practices become increasingly common for Australian PhDs also. All Australian universities are entitled to award doctoral degrees.

The age and size of the universities involved, and their research profiles, vary markedly, and concerns have been raised about the capacity of the newest and smallest universities to provide good quality doctoral programs, given their limited research capacity and experience. These and other concerns have led the Australian Council of Deans & Directors of Graduate Studies (DDOGS) to discuss quality, quality assurance and best practices in Australian doctoral education. The Council has developed national guidelines on best practice regarding the structure, content and examination of doctoral programs.2 In Australian universities, the PhD is awarded by ‘the university,’ as opposed to the relevant faculty; both research and coursework professional doctorates are generally awarded by the particular faculties on behalf of their university. Although Australian PhD work is located within particular department(s), school(s) or faculty(ies), once it comes to the examination (see ‘Examination’ below), the university generally manages the processes centrally and offers the award itself. In some universities ‘professional’ doctoral candidates are not examined and awarded by the university, but rather by the faculty. This practice contributes to a perceived greater uncertainty about, and variability of, the quality and standards of these degrees. In Australia, a doctoral program is typically a three to four year full-time program of supervised research culminating in the preparation of a thesis (as it is called in Australia, rather than “dissertation” as in the USA) of 80,000 to 100,000 words that is judged by external examiners to make a significant contribution to knowledge and scholarship in the discipline.

Entry

Students typically enter the PhD after having completed at least a four-year honours program (or its equivalent) with Honours First Class (H1) or Honours Upper Second Class (H2A) grades (equivalent to a Grade Point Average of at least 3.3 to 3.5 out of a possible 4). Honours programs include a significant research component in their final year. Entry to the PhD after completion of a research masters degree has been declining over recent years, although entry with a coursework masters degree with similar research components and grades to honours degrees has increased.

Program Structure and Content

Australian PhD programs typically include the activities of conducting the major research project, developing research skills, preparing relevant ethics and grant applications, attending and presenting seminars, meeting defined writing requirements at each stage of candidature, developing generic skills, giving oral presentations and participating in research visits to other institutions, which may be in Australia and/or overseas. Publication during candidature is encouraged, particularly in the sciences. Discipline-based research is conducted in academic departments. Graduate Schools exist in many, but not all, Australian universities and these provide additional support, academic activities and generic skills programs, leadership programs, and career planning. Graduate Schools usually have overall authority for candidature, supervision, internal policy and quality assurance. For students enrolled ‘full-time’ (sometimes only if they are also holding a scholarship), ‘allowable’ outside work commitments usually average no more than six to eight hours per week. Whilst developing teaching experience is encouraged, it is expected to occur within the allowable 6-8 hours. Students who are not on a scholarship may not have such limitations placed upon them. However, normal progress for all full-time students is expected and monitored. There is considerable variation in the amount of ‘coursework’ required in different PhD programs. However, most programs include some components that could be broadly described as coursework. In the sciences, often no formal discipline-based coursework is required. Other disciplines have variable amounts of supporting ‘hurdle coursework’ that must be completed at an appropriate standard before the student may continue to the research project.

Candidature Milestones

Most universities have a rigorous hurdle termed the ‘confirmation of candidature’ at about twelve months. This milestone normally includes the acquisition of necessary technical and methodological skills, completion of any required coursework subjects, completion of an adequate amount of research, submission of a significant piece of writing, a public presentation on the project, and an interview by a ‘confirmation committee.’ Increasingly, progress is formally reviewed mid-candidature and prior to thesis submission. Each candidate’s progress is also reviewed and reported bi-annually or annually.

Examination

Successful completion of an Australian research doctorate is based on the assessment of the research thesis by two or more independent examiners who are external to the candidate’s university. International examiners are encouraged. These arrangements provide an external quality assurance of PhD standards and outcomes. Internal examiners (all staff of the home university) are NOT permitted in Australian universities that require two PhD examiners, although some that require three examiners may permit one of these to be from within the university, providing this examiner is independent of the candidate and the doctoral work. However, supervisors are expected to provide advice and context information to the examining panel through the Chair of Examiners. The examiners provide advice and recommendation to the university concerning the award of the degree. Their advice is highly influential and normally followed, except where there are disagreements or irregularities. Typically, the department or faculty in which the student is enrolled nominates examiners to the university, and usually an appropriate senior person reviews the examiners’ reports before the responsible university committee deliberates on the outcome. In these ways, the faculty have important roles to play in the examination process. An oral defence of the thesis is rarely required of candidates in Australia. However, a public presentation of their work before academic colleagues from within and often beyond their department or school is becoming increasingly expected or required, as discussed above. This presentation would normally occur prior to final submission, thus providing an opportunity for collegial commentary and critique, and for verification of the student’s contribution to the work.


Performance Indicators

There are no nationally accepted Performance Indicators for measuring the quality of Australian research doctorates, although many institutions have developed internal indicators for measuring and monitoring performance at university, faculty and school levels. There are concerns that proposed national indicators are indices of efficiency and activity rather than quality per se.

PREQ

Since 2002, a dedicated national survey, the “Postgraduate Research Experience Questionnaire” (PREQ), has been used to provide Australian Universities with a basis for strategic, faculty-level academic development and curriculum review to further enhance the quality of research higher degrees. The PREQ also provides data for benchmarking between similar programmes in different universities. The survey gathers data on graduating students’ perceptions of the quality and frequency of supervision, intellectual and social climate, infrastructure, approaches to research, quality of thesis examination, and generic skills development in their research higher degree. Although the national reports do not identify the results for individual universities, each university has access to its raw data and can use it to benchmark its performance against the national averages.

ERA

In 2011, the Excellence in Research for Australia (ERA) initiative is assessing research quality within Australia’s higher education institutions for the first time, using a combination of indicators and expert review by committees comprising experienced, internationally-recognised experts. ERA is using leading researchers to evaluate research in eight discipline clusters and will detail areas within institutions and disciplines that are internationally competitive, as well as point to emerging areas where there are opportunities for development and further investment. It is uncertain how the results of ERA will influence research training in Australia.


Conclusions

At present, although there are numerous checks in the system for maintaining the quality of Australian research doctorates, there are few accepted measures of quality, and quality in (post)graduate research training does not formally affect national or international ranking systems. At this time it is not possible to usefully compare Australian institutions either nationally or internationally.

This summary is a revised extract from Evans T, Evans B, Marsh H. 2008. History of doctoral education in Australia. Pp. 171-203 in Nerad, M & Heggelund, M, eds. Towards a Global PhD? Forces and Forms in Doctoral Education. University of Washington Press, Seattle.

1 www.aqf.edu.au/

2 www.ddogs.edu.au/

(Post)Graduate Program Review, Assessment and Accreditation in the United States

Jeffery C. Gibeling
Dean, Graduate Studies
University of California, Davis

Unlike many nations around the world, the United States does not have a single federal agency or ministry that reviews, assesses or accredits higher education institutions and programs. Rather, a system of accreditation by non-governmental associations has been adopted and recognized by the federal Department of Education in order to ensure that higher education institutions meet basic standards of quality and capacity. This broad-based system of accreditation is complemented by a system of program review at individual institutions and growing attention to assessment of educational outcomes by a variety of agencies.

Program Review

A system of (post)graduate program review is well-established at virtually all institutions offering advanced academic and professional degrees in the United States.1 The primary purposes of program review are to ensure that all (post)graduate programs offered by a university are of the highest possible quality and to provide feedback for program improvement. These reviews are undertaken by the university itself and involve evaluation of a single program by a team of internal and external reviewers. The reviews may be managed by the university provost/academic vice president, the graduate dean or a committee of the faculty senate. A key characteristic is that faculty themselves serve as the expert program reviewers based on their knowledge of (post)graduate education in general and the quality of specific programs at other institutions. The external reviewers (who come from peer institutions) provide the appropriate calibration against national standards in the discipline. Most program reviews require fairly extensive effort to report both qualitative and quantitative indicators. A key qualitative component is the self-assessment in which the program reflects on its goals, objectives, strategies and structure for offering an advanced degree. This reflection provides a critical opportunity for the faculty to collectively examine the achievements of the program and develop a plan for its future. The quantitative component normally includes a variety of relevant statistics such as measures of incoming student quality (prior grades and test scores), completion rates and times-to-degree, research output considered in the context of graduate training, and placement of students in positions after graduation. Whatever processes are adopted must serve the needs of both disciplinary and interdisciplinary programs. With interdisciplinary programs, there is a special need to ensure the program has a sound conceptual basis and that there is a core of courses and faculty participation. Program reviews are commonly conducted on a regular schedule every 5-10 years with attention to continuity from review to review, thus providing a longitudinal perspective. However, this rotating schedule of reviews does not enable an institution to readily make comparisons of quality across different programs. In order for the review process to be effective, there must also be appropriate follow-up. Reviews may lead to increased investment of resources, a reduction in funding or even to closure of the program if it is unable to meet the university’s standards and expectations of quality. Whatever the outcome and recommendations, there must be a process for ensuring that recommendations turn into actions. Reviews should only be closed when the reviewing body has received evidence that responsive actions are being taken by the program.

Program Assessment

The goal of program assessment, much like program review, is to provide feedback that can be used to improve program quality. However, assessment generally focuses on outcomes, involves a narrower set of measures and is usually continuous, rather than periodic. Graduate program assessment may be externally-driven by public or government interest in the effectiveness of higher education, in which case it is usually based on a well-defined set of data that can be collected and organized efficiently such as time-to-degree, completion rates and placement of graduates (although not necessarily the quality of placements). These data may be made public if that is the charge of the external agency.

Alternatively, assessment may be internally-driven based on faculty expectations of program effectiveness that are more holistic. These might include measures of program success in preparing students to be effective researchers, teachers and leaders. When faculty define program success, they have a vested interest in seeing successful outcomes. In addition, ongoing internal assessment provides students with a clear understanding of faculty expectations and goals if the results are made public. As with program review, program assessment requires administrative resources to collect, analyze and report the data. Coordination may be vested with the provost/academic vice president, the graduate dean or an office of institutional research. Probably the most challenging aspect of assessment is collecting reliable placement data, especially in fields where initial placement in postdoctoral positions necessarily precedes first career placement. Another challenge with program assessment is that of time scale. While the goal may be to improve program quality, this cannot occur rapidly given the five to ten years required to complete a doctoral degree. Thus, year-to-year variations in assessment data are likely to be spurious, and a longer term view is necessary.

Program Accreditation

In the United States and its territories, various non-governmental accrediting associations operate with regional or national scopes depending on the nature of accreditation. They are each responsible for establishing specific criteria for evaluating the quality of educational institutions or programs and for developing the procedures for conducting evaluations. Most commonly, the entire institution is accredited to offer undergraduate and (post)graduate academic programs, although the six regional accrediting organizations that review institutions tend to focus much of their attention on undergraduate programs. The accreditation system is complex and there is relatively little central oversight except to the extent that the various accrediting agencies are recognized by the U.S. Department of Education. Programs, departments, or schools that are parts of an institution are generally accredited by discipline-specific agencies, usually with a national scope.

Most commonly, these organizations focus on professional practice disciplines such as allied health fields, etc., most of which require licensure by a state agency in order for the degree recipient to be employed and practice in the discipline. The system of accreditation generally results in “pass-fail” outcomes, although there are provisional levels of accreditation. Most importantly, accreditation serves to validate that an institution or program meets minimum quality standards; this approach does not provide an indication of the level of program quality relative to other programs.

Issues in Program Review, Assessment and Accreditation

The three types of (post)graduate program evaluation described here are largely distinct, with little interrelationship, although the existence of an internal program review mechanism or assessment strategy may be a requirement for accreditation. Done properly, program review provides a perspective on the future, not a statement of the present or a description of the past. In contrast, program assessment tends to focus on past success in achieving specific outcomes and accreditation generally represents an evaluation of the current state of a program or institution. Thus, the three types of evaluation provide distinct perspectives and add value in different ways. Furthermore, this difference in perspectives means that one form of evaluation cannot easily substitute for another. However, balancing the workload associated with all three forms of evaluation can be challenging and can create a large burden for faculty and staff. One potential criticism of the approaches described here is that program evaluation data are generally not made public. Program review is entirely an internal process and accreditation is reported as a pass-fail result (even when provisional). Only assessment data may result in publication of statistics that are indicative of program success and quality. The systems of program review, assessment and accreditation are largely peer-based, where standards are defined by faculty, institutions and non-governmental agencies representing academia and the professions.


A key criticism is that, overall, this approach lacks objective measures of quality and accountability. The challenge for higher education is to define credible sets of standards in order to maintain control of the processes.

1 The system of program review is sufficiently developed that the Council of Graduate Schools has published a descriptive monograph: Assessment and Review of Graduate Programs, Council of Graduate Schools, Washington, DC, 2005.

Measuring Quality in International and National Contexts: What Are We Measuring and Why? (and For Whom?)

Douglas M. Peers
President, Canadian Association for Graduate Studies
Dean of the Graduate Faculty and Vice-President, York University

Over the course of the past decade, graduate or postgraduate education has increasingly become preoccupied (some might even go further and say obsessed) with Quality Assurance and the assessment processes that underpin it. Quality Assurance is nothing new to Graduate Schools; in fact, one of the key arguments that has customarily been used to justify our existence to feudal deans has been that doctoral degrees (and somewhat more problematically masters’ degrees) are university rather than faculty degrees, and hence some degree of centralized assurance of quality is required. Quality was something in which we all believed, and as graduate deans we accepted that it was our duty to defend quality. This we did with enthusiasm, with determination, and even at times with grim resignation, but not always with a clear consensus as to what defined quality. The growth of an audit culture beyond the walls of academe, with its fondness for key performance indicators and obsession with efficiency, helped push universities, however reluctantly, into initiating conversations about quality assurance and quality assessment. Add to this the commodification of higher education, as witnessed both in the recasting of students into consumers for whom we are expected to provide product information and various forms of guarantee, and the current obsession with international rankings, and it is not surprising that there has emerged what can best be described as a Quality Assurance industry. In offering the following observations, I am drawing upon not only my experiences with the programs over which I have immediate responsibility as a graduate dean, but also my recent membership on a task force that had been set up to review and revamp Quality Assurance for universities in Ontario.

As part of that process, we had looked closely at what was happening in other jurisdictions, and insights gleaned from those examples were further reinforced by several trips to Europe during which I had the chance to meet with individuals tasked with Quality Assurance. It should be noted that in Canada there exists no national Quality Assurance body, nor anything approaching regional accreditation as occurs in the U.S. Higher education in Canada falls squarely within provincial jurisdiction. Most provinces have some form of quality assurance in place, usually under government oversight, but none has anything as systematic as that which has prevailed in Ontario for the past forty-three years. In Ontario, all new graduate programs offered by publicly-funded universities in the province (there are no private universities at the graduate level) must first be approved by an external body, the Ontario Council on Graduate Studies (OCGS). OCGS relies upon a robust process of external appraisals, in which interdisciplinary panels of academics review university submissions for new programs that include detailed self-studies as well as reports by external reviewers. Once programs are approved, they are subject to periodic reviews (currently every 7 years) which follow the same principles of external appraisal and outside reviewers. Federal influence on graduate education in Canada is much less direct than in other countries (e.g. Britain, Australia, Germany, China), and is derived from the important role that our national research councils play in funding students, either directly through scholarships or indirectly through grants to faculty members. Recently, however, the three granting councils have begun to look more closely at how they can contribute to the development of highly qualified personnel by funding innovative training programs, rather than individual students. I hasten to stress that I am not opposed to Quality Assessment in principle nor to the development of appropriate processes and identification of meaningful metrics. I would, however, encourage some further reflection on what it is we are trying to measure and why, and ultimately for whom. Quality Assurance should not become an end in and of itself, a danger which is not always apparent to some of its proponents, as has historically been the case with new movements and/or professions.

At a recent meeting organized by and for the task force, we heard recommendations that Quality Assurance be overseen by teams of arm’s-length, full-time professionals, experts akin to the old school inspectors used in primary and secondary schools. Reservations were expressed against relying upon peer-review, on the basis that this would only lead to various combinations of nepotism, group-think, and nasty family feuds. Proponents of this view much preferred to deploy “objective” and “detached” observers (which some of us in the audience likened under our breath to inquisitors). While few would deny that peer review occasionally manifests those traits, more worrying are the likely consequences of vesting Quality Assurance in a disconnected group of so-called professionals, particularly at the graduate level, for such experts are likely to have predominantly pedagogical expertise which, while not in and of itself undesirable, should not trump knowledge and experience of the particularities of the research enterprise which lie at the heart of all graduate programs. We need to ensure that Quality Assurance does not become alienated or estranged from the daily activities of departments, programs and faculties. To do so runs counter to the principles of collegial self-governance which are the bedrock of academic life, notwithstanding the fact that collegial governance can serve as a constraint upon innovation and change. Moreover, if quality assurance is separated from research and teaching, then the real gains from Quality Assurance—namely the opportunity to foster an ongoing dialogue within our communities as to what they are attempting to do and why—will be lost and in its place will emerge what is at best perfunctory cooperation and at worst passive resistance. I am all for rigour, but rigour often leads to mortis, and with that our ability to encourage continuous innovation and renewal in our programs will become very much more difficult. There is considerable anecdotal evidence that the risks of this happening are increased if the processes and expectations are imposed from the top and are considered to be too burdensome. In such instances, Quality Assurance can all too easily degenerate into a box-ticking exercise that enjoys little legitimacy amongst those who stand to gain the most from it.


At the same time, if Quality Assurance is simply yoked to tried and tested measures of graduate program quality (typically retrospective indicators of research output, student grades, etc.), there is the all too real risk of reinforcing the conservatism that is an inherent quality of higher education. Here we are confronted with a potential paradox common to all Quality Assessment systems. It can be glimpsed most clearly in the descriptors developed by the Quality Assurance Agency (U.K.), which stress "training for research" and the "advancement of knowledge." I am not singling out the U.K. for criticism here—similar descriptors, and the assumptions underlying them, are typical in doctoral education everywhere. I doubt whether anyone responsible for graduate education would find fault with them, at least at first glance, for that is indeed what research degrees are intended to accomplish: training the next generation of researchers in the skills required for their work, inculcating in them the expectations of their discipline, and exposing them to the current state of knowledge. All of these are relatively easily measured: one can generate reading lists, define appropriate skills, and measure the research outputs of those who are participating in the research enterprise. The rigorous documentation developed under the Ontario assessment process did an excellent job of codifying and capturing these inputs.

Yet for doctoral students in particular we also look for originality and innovation. Only by doing so can we hope to achieve the higher goal of encouraging our graduate students to become stewards of their fields. Emphasizing originality and innovation, which carries with it a greater recognition of intellectual risk-taking, of crossing disciplinary frontiers, and possibly even of breaking down barriers between academe and the wider world, is arguably also better suited to preparing individual students for future careers. These objectives are not only far less easily measured, but also tend to come into conflict with what might best be termed the "self-reproductive" urge of doctoral education, one wherein faculty members, often with the best of intentions (though this is not always the case—witness the proliferation of lab rats necessary to sustain entrepreneurial faculty members), seek to reproduce if not themselves in their students, then at least their tribal customs.


One has to ask whether this self-perpetuation, which is very compatible with conventional Quality Assurance mechanisms, is always in the best interests of our students, not to mention wider society. The simple and depressing fact that an increasing number of doctoral students are being prepared for academic jobs that no longer exist should give us cause for reflection. A Quality Assurance regime that privileges inputs over outputs, largely because inputs are more conducive to measurement and comparison (the two sacred tenets of QA), risks not only stifling innovation in research by forcing students to operate within conventional silos, but is also more likely to fail in preparing students for the increasingly diverse and fluctuating career paths that lie before them. We must remember that most of the current tenured and tenure-stream faculty who are most active in graduate teaching were hired during the heady days of the last big recruitment cycle, a situation which current forecasts suggest may well prove to be an historical anomaly. If their behaviour is anything to go by, however, current faculty are not necessarily conscious of this fact.

If inputs are therefore in and of themselves incomplete measures for Quality Assurance, what about factoring in outputs? Looking at outputs has been done before: job placements (an area in which business and other professional schools are light years ahead), student publications and conference presentations, numbers of students graduating, etc., are all regular features of Quality Assessment regimes. But these measures still tend to privilege the traditional image of graduate education, and do not dig into its deeper recesses to illuminate how well our programs are meeting student and societal needs and, perhaps more importantly, how quickly our programs can respond to changes in the external environment.

In an attempt to think more creatively about graduate education, and to shift at least in part the emphasis from quantitative to qualitative measurements, OCGS several years ago endorsed the idea of evaluating graduate programs on the basis of Graduate Degree Level Expectations (with the unfortunate acronym of GDLES). In theory this was a visionary step. In practice, it turns out that few graduate deans and even fewer graduate programs have embraced GDLES in a practical manner. Perhaps this is not surprising: not only are GDLES less susceptible to quick calculation, but doing them well requires a discussion at the program level that is often seen as threatening by those who remain wedded to the self-reproduction mode of graduate education.


Yet for Quality Assurance to succeed in a more fundamental manner, it must be more than simply summative. Summative processes are, in the end, the least disturbing: faults can be, and often are, identified, but the underlying objectives of graduate education are left unchallenged. Truly effective Quality Assurance processes and protocols should not preclude the possibility of imaginative reconceptualization.

Quality Assurance of Graduate Education in China: Best Practices and Challenges

Kebin He
Executive Associate Dean, Graduate School
Tsinghua University

As the economist Lester Thurow noted, "in the 21st century, the education and skills of the workforce [will] end up being the dominant competitive weapon."1 Graduate education, as the highest level of education, plays an important role in advancing the modernization and transformation of society. In the context of the rapid development of graduate education and the expansion of total graduate enrollment in China, quality assurance of graduate education has never been as significant as it is today. The expansion of graduate education was initiated in 1999, and the number of newly admitted graduate students has risen at a rate of 20-30% annually since then; over the last three decades, total graduate enrollment has quadrupled.

Graduate education in China is administered hierarchically by the national government, local governments, and higher education institutions, in what we call "triple-level administration." Corresponding to this administrative model, the degree committees of the State Council, the provinces (municipalities), and the higher education institutions are charged with responsibility for the quality assurance of graduate education. After decades of development, China has built a quality assurance system that enables the self-assurance of higher education institutions under the leadership of the government. The government and higher education institutions have made joint efforts to establish a good environment for the development of academics and faculty, and to guarantee the quality of graduate education through the approval of newly added programs, program review, academic ranking, selective examination of doctoral dissertations and master's theses, supervision of special programs, etc. At the government level, four actions will be emphasized and highlighted to give an overview of quality assurance of graduate education in China.

Firstly, the approval of newly added programs serves to ensure that programs meet general standards. Approval moves through a process of application, correspondence assessment, publication, conference assessment, and final approval; 350-400 expert panels have been established to make approval decisions scientific and accurate, and a program is approved only if it receives at least a two-thirds majority vote.

Secondly, the government has been increasing investment in education, and in graduate education in particular, to support programs and academic forums for doctoral students, allowing faculty and top students inside and outside China to meet one another, discuss hot topics in specific research areas, and spark new ideas. In addition, overseas study has been incorporated as an important part of graduate education through the government's funding of the joint training of graduate students with overseas universities (the China Scholarship Council program).

Thirdly, a quality assurance system for professional degrees has been established. The National Educational Guidance Committees for the different professional degrees stand at the core of this system, conducting admissions, establishing training and degree requirements, carrying out regular program assessment, and organizing professional qualification accreditation.

Fourthly, the government attaches great importance to improving the overall strength of higher education institutions. To meet this goal, the 211 and 985 projects were launched one after the other. The 211 project aims to help higher education institutions establish strength in specific disciplines and research areas, while the 985 project is targeted at making a number of higher education institutions world-class universities. The central government has set up a special fund to invest in faculty teams, labs, and facilities to ensure that both the "software" and the "hardware" of Chinese universities are comparable with those of world-famous universities, creating a favorable learning and research environment for graduate students.

Program review by the government, which is conducted every six years, was started in 1995 and made a regular occurrence in 2005. Five indicators are adopted in the program review: faculty, research, teaching and assessment, infrastructure, and related fields of study.

Programs under review that do not meet the standard are asked to take corrective action; those of poor quality are asked to close. The selective examination of dissertations began in 2000. Dissertations are examined in terms of topic selection, creativity, command of fundamental theories and expertise, and research ability, and top dissertations are awarded prizes on the basis of the examination results. Academic ranking is conducted by the China Academic Degrees & Graduate Education Development Center, which decides the academic standing of higher education institutions according to reputation, faculty, research, and student training. So far, 229 universities have taken part in the first round of academic ranking.

At the university level, two best practices deserve attention. Firstly, a variety of innovation initiatives have been launched in Chinese universities to help graduate students develop critical thinking and creativity. To take Tsinghua University as an example, the Research Innovation Foundation for Doctoral Students was established to support original research projects proposed by doctoral candidates and to encourage innovation in their research. The foundation covers living expenses and research stipends, helping new ideas develop and materialize. Secondly, universities are dedicated to creating an international learning environment by funding graduate students to attend top international conferences and by supporting joint education and supervision with overseas universities. These measures broaden the vision of graduate students and make them competitive in the context of globalization. Meanwhile, universities place emphasis on the two-way flow of graduate students: the increase in the enrollment of international students enables domestic and overseas graduate students to study and research together, making university campuses more diverse.

With the great expansion of graduate education, quality assurance faces challenges. Firstly, the soaring total enrollment of graduate students challenges the present model of administration; how to achieve a balance between quantity and quality is a priority for consideration by higher education institutions. Secondly, the relationship and distinctive roles of the government, NGOs, and higher education institutions should be made clear to guarantee the efficiency of quality assurance.


Thirdly, in the context of globalization and the diversification of workforce demands, quality standards should be adapted to the various types of degree programs. Master's education in China is undergoing a transition from academic to professional degrees, and new quality criteria are needed to reflect the difference between the two types, as the criteria currently in use fit academic programs only.

Quality assurance, on which the vitality of graduate education rests, is a permanent topic for discussion. We would like to share both our best practices and our problems with international counterparts, and it is our sincere hope that the discussions will lead to solutions to problems commonly faced by universities worldwide.

1 Lester Thurow (1992). Head to Head: The Coming Economic Battle among Japan, Europe, and America. New York: Warner Books.

The Quality Conundrum in Postgraduate Education in India

Narayana Jayaram
Centre for Research Methodology
Tata Institute of Social Sciences, Mumbai, India

Although the first three universities (Bombay, Calcutta, and Madras) were established in India as early as 1857, the history of postgraduate education there is less than a century old. The pioneering universities, like those started subsequently during the British colonial era, were largely affiliating and examining bodies with very little intellectual life of their own. It was only in 1914 that the first postgraduate departments were started in the country. The rapid expansion of this segment of higher education, including doctoral studies, is essentially a post-independence (1947) phenomenon.

The fact that, in India, postgraduate education has grown under state patronage and sans any competition has had serious implications for its quality. Furthermore, the ethos of socialism emphasised expansion for the equalisation of opportunities, and any talk of excellence or quality was dubbed elitist. Not surprisingly, there was no systematic attempt either to set up benchmarks for determining the quality of higher education or to create mechanisms for monitoring it. Under such circumstances, those seeking higher education (that is, students and their parents) or its products (that is, employers) inevitably had to go by the brand names established by the institutions that provided higher education.

It was in the wake of the changes introduced by the forces of globalisation and neo-liberalism that India woke up to the question of quality in higher education. Following the National Policy of Education (1986) and the Programme of Action (1992), the University Grants Commission in 1994 set up an autonomous body, the National Assessment and Accreditation Council (NAAC), in Bengaluru (formerly Bangalore) to assess and accredit higher education institutions in the country. Initially, the scheme of assessment and accreditation was voluntary, but the idea of an external organisation doing this was not received well by universities and colleges. The scheme is now mandatory, and universities and colleges failing to get assessed and accredited have been threatened with the deprivation of developmental grants. However, NAAC has severe logistical limitations in overseeing the quality of a massive system of higher education comprising 504 universities and university-level institutions and 29,951 colleges. Not surprisingly, as of 4 September 2010, only 159 universities and 4,171 colleges had been assessed and accredited.

In a large and diverse system of higher education, assuring the quality of the education imparted is an important but unenviable task. As the institution entrusted with this responsibility, NAAC views its task as one of "instilling a momentum of quality consciousness amongst higher education institutions, aiming for continuous improvement." By this it hopes to trigger a "quality culture" among all the stakeholders in higher education. Interestingly, NAAC does not view itself as a regulatory authority, but as an enabling body. Over the 15 years of its existence, NAAC has evolved a methodology for assessment and accreditation suitable for Indian conditions and has been fine-tuning it based on feedback from stakeholders and in consultation with educational experts. The methodology currently in vogue was introduced in April 2007.
The methodology adopted by NAAC assesses the quality of education offered by an institution—university or college—and, based on its "Cumulative Institutional Quality Profile," accredits it on a four-point scale: institutions with a cumulative grade point average (CGPA) in the range of 3.01–4.00 are graded "A" (Very Good); those in the range of 2.01–3.00, "B" (Good); and those in the range of 1.51–2.00, "C" (Satisfactory). Institutions attaining a CGPA of 1.50 or less are assigned the notional letter grade "D" (Unsatisfactory) and are not accredited. The institution (university or college) is assessed and accredited as a totality; as of now there is no provision for the assessment and accreditation of departments, centres, or schools, or of specific teaching or research programmes. Thus, postgraduate education and research studies are not assessed and accredited separately. However, since postgraduate education and research studies are mostly offered in the universities rather than the colleges, the assessment and accreditation of universities is ipso facto the assessment and accreditation of postgraduate education and research studies.
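Read as a rule, this banding is a simple threshold mapping from CGPA to letter grade. A minimal sketch in Python, offered purely as an illustration (the function name is invented; NAAC publishes no reference code):

```python
def naac_grade(cgpa: float) -> tuple[str, str]:
    """Map a Cumulative Institutional Quality Profile CGPA (0-4 scale)
    to the NAAC letter grade and descriptor described above."""
    if cgpa >= 3.01:
        return "A", "Very Good"        # 3.01-4.00
    if cgpa >= 2.01:
        return "B", "Good"             # 2.01-3.00
    if cgpa >= 1.51:
        return "C", "Satisfactory"     # 1.51-2.00
    return "D", "Unsatisfactory"       # 1.50 or less: not accredited

print(naac_grade(3.20))  # ('A', 'Very Good')
```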

Only universities and colleges recognised by the University Grants Commission (a statutory body established under an Act of Parliament in 1956) are eligible for assessment and accreditation. Moreover, such universities or colleges must have completed five years since their establishment, or two batches of students must have completed their degree programmes. Although institutions under the jurisdiction of professional regulatory bodies (the Medical Council of India, the Bar Council of India, etc.) can also be considered for assessment and accreditation, as of now only the National Council for Teacher Education has worked out an arrangement with NAAC, for the accreditation of teacher education colleges. Institutions recommended by the UGC or by the Ministry of Human Resource Development, Government of India may also be taken up for assessment and accreditation, though such a referral has never been made.

As a first step, an institution seeking assessment and accreditation prepares an institutional Self-Study Report according to the institution-specific NAAC manual. This report provides the basis for assessment by a peer team (not professionals) of senior faculty members drawn from other higher education institutions. The peer team assesses the quality of education by assigning differential scores (aggregating 1,000) and weightages (as percentages of the total score) distributed across seven criteria, as follows:

1. Curricular Aspects (150 [15%]): curricular design and development; academic flexibility; feedback on curriculum; curriculum update; best practices.
2. Teaching-Learning and Evaluation (250 [25%]): admission process and student profile; catering to diverse needs; teaching-learning process; teacher quality; evaluation process and reforms; best practices in teaching, learning and evaluation.
3. Research, Consultancy and Extension (200 [20%]): promotion of research; research and publication output; consultancy; extension activities; collaborations; best practices in research, consultancy and extension.
4. Infrastructure and Learning Resources (100 [10%]): physical facilities; maintenance of infrastructure; library as a learning resource; Information and Communication Technology as learning resources; other facilities; best practices in the development of infrastructure and learning resources.
5. Student Support and Progression (100 [10%]): student progression; student support; student activities; best practices in student support and progression.
6. Governance and Leadership (150 [15%]): institutional vision and leadership; organisational arrangements; strategy development and deployment; human resource management; financial management and resource mobilisation; best practices in governance and leadership.
7. Innovative Practices (50 [5%]): internal quality assurance system; inclusive practices; stakeholder relationships.
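Because the seven maximum scores sum to 1,000, each weightage is simply the criterion's share of that total, and a peer team's scores can be aggregated arithmetically. The Python sketch below is hypothetical: the names are invented, and the linear rescaling of the 1,000-point total to a four-point CGPA is an assumption made for illustration, not NAAC's published conversion.

```python
# Maximum scores for the seven university-level criteria (sum = 1,000);
# each criterion's weightage is its share of this 1,000-point total.
MAX_SCORES = {
    "Curricular Aspects": 150,
    "Teaching-Learning and Evaluation": 250,
    "Research, Consultancy and Extension": 200,
    "Infrastructure and Learning Resources": 100,
    "Student Support and Progression": 100,
    "Governance and Leadership": 150,
    "Innovative Practices": 50,
}

def institutional_cgpa(awarded: dict[str, float]) -> float:
    """Sum the peer team's criterion scores and rescale the 1,000-point
    total to a 0-4 CGPA (linear rescaling assumed for illustration)."""
    total = sum(awarded[criterion] for criterion in MAX_SCORES)
    return round(total / sum(MAX_SCORES.values()) * 4, 2)

# Example: a hypothetical university awarded 80% of the maximum on every
# criterion totals 800 points, a CGPA of 3.2, which falls in grade "A"
# under the bands described earlier.
awarded = {criterion: 0.8 * maximum for criterion, maximum in MAX_SCORES.items()}
print(institutional_cgpa(awarded))  # 3.2
```

Under this assumed conversion, grade "A" (CGPA of 3.01 or more) would require at least 752.5 of the 1,000 points; the actual conversion NAAC applies may differ.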

While these criteria and the key aspects of assessment are the same for autonomous colleges and affiliated/constituent colleges, the scores and weightages attached to the criteria differ. Thus, Curricular Aspects and Research, Consultancy and Extension carry higher scores and greater weightage in universities (where most postgraduate instruction takes place), whereas Teaching-Learning and Evaluation carries higher scores and greater weightage in colleges (which are mostly devoted to undergraduate instruction).

The Self-Study Report prepared by the institution is expected to highlight its performance with reference to these seven criteria. In the first instance, this report undergoes an in-house analysis at NAAC. To facilitate effective and objective assessment by the peer team during its visit to the institution, each key aspect of a criterion is differentiated into a number of indicators, which help in capturing micro-level quality and in arriving at a comprehensive grade for the criterion. It is the responsibility of the peer team to validate the Self-Study Report against the necessary documentary evidence and through interactions with the various constituents of the institution.


The peer team shares its assessment report with the head of the institution. The assessment report, signed by the peer team and the head of the institution, is forwarded to NAAC along with the institutional grade. The peer-team report is reviewed by the NAAC Executive Committee, which decides the institution's accreditation status and grade. The accreditation status and grade are valid for a period of five years from the date of the Executive Committee's approval; thereafter, the institution has to apply for reaccreditation, following the same procedure as for first-time accreditation. Institutions that fail to get accreditation may seek re-assessment after completing one year. Re-assessment can also be sought by institutions desirous of improving their accreditation status and institutional grade, but it must be completed in the second or third year after the initial assessment. The procedure for re-assessment is the same as for the first-time assessment.

As a document profiling the quality of the institution, the peer-team assessment report is useful to the institution: it suggests corrective measures to be adopted, points to the directions in which institutional processes and practices need to be re-oriented, and helps in institutional planning in general. Institutions, especially those accredited with a high grade such as "A," can use the NAAC certification as an endorsement of their brand. This can reinforce their quality and strengthen their applications for more funding. As a public document, the peer-team assessment report is posted on the NAAC website (www.naac.gov.in), which makes the process transparent. These reports serve as benchmarks for institutions to gauge their own standing in relation to others.

However, this assessment does not seem to have any bearing on the international ranking of these universities. No university in India or the developing world is among the first ten or twenty institutions in the various international rankings; only a few find a place in the lists of the first three hundred. Considering that such rankings are beyond their reach, Indian universities do not seem to regard them as a frame of reference. Incidentally, the indicators chosen, and the weightage assigned to them in such rankings, are contestable, considering the differing visions and missions of universities in different parts of the world.

On the qualitative front of courses and curricula, as well as infrastructural facilities, India and other developing countries face a daunting challenge. As in all countries, higher education institutions in India vary widely in their quality; but they are glaring in their extremes: there are islands of excellence in the midst of oceans of mediocrity. Insulated from the external world and long lacking internal competition, the system has by and large been marked by an absence of quality consciousness, and accreditation and quality assurance are still novel concepts here. Obviously, the educational credentials issued by many institutions in India and other developing countries are suspect. All the same, the very fact that quality is spoken of with reference to higher education today is a significant change in countries where such awareness did not exist earlier. Students and their parents have begun asking questions about it, and those who pay for education as a service demand quality in its delivery. This appears to be an outcome of the introduction of the market concept in the realm of education.

Measuring Quality in International and Korean Contexts: What Are We Measuring and Why?

Kyung Chan Min
Chairman of the Committee for University Education, Presidential Advisory Council on Education, Science & Technology
Yonsei University, Korea

The major issues surrounding quality assurance and assessment in (post)graduate research training in Korea are how to draw public attention to quality control at the university level and how to develop proper methods for measuring quality. In Korea, most university and government leaders are mainly concerned with indicators measured by quantitative methods, a consequence of the institutional evaluation systems in place.

When we consider quality assurance and assessment, we must first establish the goal of the evaluation and define the notion of 'quality' for that purpose. We then have to determine how to measure the attributes of 'quality' and how to guarantee the objectivity and fairness of the evaluation. When measuring the quality of graduate research training, important areas to consider in the research environment include research funding, research output, the number of researchers, and the number of Ph.D.s awarded. When measuring the quality of education in a graduate program, the main aspects to consider are the curriculum and classroom activities, educational infrastructure, financial investments, and learning outcomes. Most graduate schools in Korea now adopt course evaluation systems in each class. Moreover, graduate supervisors are concerned with how to control the quality of the thesis for the master's or doctoral degree; some departments or major areas in science and engineering require students to publish two papers in SCI journals for the doctoral degree.

One growing concern these days is how to control the quality of applicants to a graduate program. The Korean government is considering a project to develop a Korean version of the GRE test. At present, the admission procedure in most graduate schools involves only reviewing the application form and recommendation letters submitted by each applicant.

In 2008, the Korean government introduced a regulation requiring all universities to release publicly various pieces of information, including the data generated in quality assessments of the institution, in a yearly self-evaluation report posted on the government-operated website "All Information on Universities in Korea." The Korean government also has a special project to give some institutions the authority to issue letters of accreditation for graduate school education; at present, the Korean Council for University Education is preparing a proposal, including a proper evaluation system, to obtain such accreditation authority.

The processes and outcomes of quality assessment are important factors in funding processes. The quality of research output, including citation impact and publications in top journals such as Nature and Science, is taken into consideration when research universities seek financial support from the government. The measurements used for quality evaluation have benefited universities in general, in both national and international contexts. Various actions related to quality assurance and assessment initiated by the government and by professional institutions, national or international, for university evaluation have had a strong impact on the improvement of capacities for research and education. However, since most indicators in the evaluation system are quantitative, most universities have a strong tendency to focus on those quantitative indicators, neglecting the qualitative aspects of 'quality' in research and education.

As we all know, there are critics of university rankings who say that the methodology and data behind these rankings are problematic. In particular, there is a view that rankings cannot reflect real changes in quality and that they focus excessively on the research output of faculty members and on reputational surveys, neglecting the many other crucial roles that universities play, such as teaching and learning. The most serious problem is that there is no measuring system specifically for graduate education and research training in Korea. Institutional and program evaluations cover both undergraduate and graduate programs at the same time; in fact, there are no indicators measuring quality in graduate research training specifically.


The Korean government has traditionally steered university education by determining the amount of funding according to the results of quality measurements. In other words, most universities—not only national and public but also private—are affected by the criteria established by the government. The government can advocate its policy, or discourage the opposite orientation, through this intervention. The same rule applies to the relationship between the university headquarters and a given department or division. At present, most evaluation systems for measuring quality use quantitative indicators. However, many quantitative indicators are limited in their ability to capture 'real' quality, because there are various differences within and between graduate schools. Of course, we also have to be careful in dealing with qualitative indicators, which might raise questions regarding objectivity and fairness.

III. LAYING A STRONG FOUNDATION FOR INSTITUTIONAL ASSESSMENT

Summary of Presentations and Discussion

If national frameworks shape the structures surrounding quality assessment in graduate education, universities play the most direct role in ensuring that assessments have meaning—that they are accurate and honest appraisals of the research and training environment and lead to actual improvements in the quality of the institution and its programs. The first step toward reaching these goals is supporting the development of a university culture that values and actively participates in the assessment of quality. To learn more about different approaches to creating such a culture, we asked presenters for Panel 2 to address three main topics and sets of questions as they pertained to their own institutional contexts:

• Benefits and Challenges of Quality Assessment: In what areas does your university measure quality, and how does it benefit from these assessments? What are the barriers to meaningful quality assessment? What are the biggest challenges faced by your university in developing and implementing quality assessments? Are quality measurements differentiated by degree level?
• Communicating the Value of Assessment: How can university leaders foster a culture that understands the value of quality measurements to the university, academic staff, faculty, and (post)graduate students? How does your university communicate the value of quality assessment to various groups within and beyond the university?


• Administrative Challenges and Opportunities: How does your university try to ensure that quality assessments lead to successful interventions across different schools, departments and/or programs? How are interventions financed and administered?

This section presents papers by delegates representing Australia, Germany, Thailand, the United States, and Vietnam. Following these presentations was a discussion, summarized below, from which three main topics emerged.

The Role of Graduate Leadership
At this time of ambitious, assessment-based reform, graduate leaders are called upon to navigate a number of tensions that currently exist in the assessment of graduate education. In addition to the tension between "quantity" and "quality" in contexts where graduate education is growing rapidly, there are tensions between: different approaches to assessing quality (government-driven, university-led, and faculty-led); providing confidential conditions of assessment and transparent outcomes; the costs of quality measurement and the availability of funds to support assessment efforts; and supporting high-quality research and high-quality teaching. In Europe, noted Ursula Lehmkuhl (Freie Universität Berlin), there are also tensions between the traditions of national higher education systems and the drive toward internationalization created by the Bologna Process.

One of the general themes that emerged was the importance of developing an institutional message about assessment that is specific to the mission and culture of each university. This is no less the case in countries and regions seeking to develop a shared vision of doctoral education, as in Europe, where, Jean Chambaz (University Pierre et Marie Curie) observed, universities must fine-tune assessment practices appropriate to their own specific institutional contexts. It was agreed that developing a well-coordinated institutional effort relies on broad and frequent communication. Surasak Watanesk (Chiang Mai University) stressed the importance of creating a shared understanding of quality assessment processes among administrators, faculty, and staff, and Barbara Evans (University of British Columbia) added that graduate deans can accomplish this goal by meeting regularly with college deans about assessment data.

Faculty Engagement
First raised in Panel 1, the topic of faculty engagement in the assessment process received more sustained attention in Panel 2. In his paper on communicating the value of assessment to the university community, for example, Patrick Osmer (Ohio State University) emphasizes that communication should be ongoing throughout all phases of the initiative and should include opportunities for consensus-building about assessment criteria and process. In discussion, Allison Sekuler (McMaster University) added that it is essential to remain in communication with faculty about quality issues between assessment cycles, recommending frequent and informal meetings to share information about what aspects of programs are working and what areas need improvement.

One general approach to supporting faculty engagement is creating opportunities for faculty to lead assessment activities. Recommended strategies for building faculty leadership included asking faculty to participate in the selection of external reviewers and involving faculty leadership bodies in decision-making. Yet another strategy was meeting with faculty about program-level data. Maxwell King noted that it is particularly useful to share comparative data measuring performance indicators across similar programs, so that faculty can look to other departments for improvement strategies. Sharing comparative data across all programs may also be useful, but it demands close attention to disciplinary differences that change the weight of certain performance indicators.

Many other participants stressed that it is difficult to fully engage faculty in assessment without material and professional incentives. In a paper on the benefits and challenges of assessment in Vietnam, for example, Kim Nguyen noted that it is challenging to motivate faculty engagement in assessment in her country because accreditation status is not yet linked with state funding for graduate education. The discussion of this topic led participants to describe a range of incentives that have proved successful in their countries:

• Consideration of faculty performance in promotion and tenure.
• A bonus system (used in Singapore to reward faculty for performance in the areas of research, teaching, and service).
• Awards for faculty who show outstanding performance, especially in the area of supervision and graduate training.
• Indirect rewards in terms of professional status or success. Many participants stressed the importance of connecting assessment with institutional or departmental reputation.

Strategies for creating faculty interest in and commitment to assessment are discussed further in subsequent panels, in particular Panel 4, “Program Content and Design.”

The Assessment of Research Training
In a climate where quality assessments and university rankings give significant weight to research outputs, it is often difficult to ensure that evaluations give adequate attention to the quality of research training. This issue is also related to faculty buy-in, as some faculty may see research as the activity that brings the most direct professional rewards. For this reason many participants urged that it is important for institutions to include training and supervision in assessments of quality. One way of doing this is to build a stronger relationship between research and research training and, as Alan Dench (University of Western Australia) and Zlatko Skrbis (The University of Queensland) noted, to emphasize the "symbiotic" relationship between these activities in communications with faculty. Dr. Dench added that Australia's Research Training Scheme (RTS), which rewards universities with strong research performance with additional research training places, has been very effective in connecting what might otherwise be viewed as separate areas of performance.

Conclusion
The discussion of university approaches to assessment demonstrated that assessment tools and practices are traveling quickly across national borders. Kebin He (Tsinghua University) observed that at his university, international faculty entering the system have played an important role in developing assessment strategies, because they have brought with them new ideas and tools from the universities at which they trained or served as faculty. Throughout the discussion of university cultures, this optimism about sharing good practice across national and institutional contexts was balanced by caution about using a "one size fits all" approach. Dr. Lehmkuhl suggested that best practices in institutional assessment might look like a toolbox of recommended approaches rather than a prescriptive list of rules. Panels 3-5 support this goal by providing a broad range of recommended tools in specific areas of assessment practice.

Benefits and Challenges of Quality Assessment in Vietnamese Institutions

Kim D. Nguyen
Vice Director General, Institute for Educational Research
HoChiMinh City University of Education

Introduction
Vietnam has a population of 85.2 million people, of whom approximately 26 million are between the ages of 15 and 30 (Le, 2007). As of August 2008, higher education enrolment was 1.6 million (VNS, 2008), and only about 17% of those who took the 2007 national entrance examination gained admission. As of August 2008, Vietnam had 369 tertiary institutions (VNS, 2008), classified into three categories: public, private, and foreign-related. The education system is centralised under the Ministry of Education and Training (MOET); however, only one-third of the institutions are directly under MOET, while two-thirds are under other ministries and provincial People's Committees. It is clear that the government is under great pressure to increase access while simultaneously raising the quality of higher education.

Over the past 10 years, higher education in Vietnam has experienced many changes, including diversification in the types of institutions and the establishment of quality improvement standards for its developing accreditation model. The Vietnamese higher education system has in many ways been a combined image of those of China, France, the US, and especially the former Soviet Union. Because of the cross-national push and pull it has experienced over many years, "The traditional Vietnamese HE institution has still been heavily influenced by the 'ivory-tower' education… from the ancient Chinese, the 'academic' education from [the] French and the strong research oriented HE from [the] former Soviet Union" (Pham, 2001, p. 55). Generally, Vietnamese universities are characterised by: 1) a training focus; 2) centralised management; and 3) restricted competition. Vietnamese educators and educational leaders are still confused about how to implement quality assurance and accreditation in the Vietnamese context. The next section presents a conceptual framework for analysing the development of Vietnam's accreditation standards.

Self-Study and Peer Review in Vietnam
In 2004, when Vietnam decided to use the U.S. accreditation model as its point of reference (MOET, 2004), the processes of self-study, peer review, and external evaluation at the national level were examined. Self-study and peer review have become key elements of the accreditation process; external evaluation, however, is not feasible within Vietnam's historical and socio-cultural context.

Self-study is considered a powerful approach to achieving quality improvement, as it can help universities understand and evaluate their own practices. During several conferences held after the first two rounds of national accreditation visits to 20 Vietnamese higher education institutions between September 2006 and May 2007, many universities confirmed the utility of their self-studies in providing a holistic view of the institution. Yet there is pessimism concerning the effectiveness of self-studies: academics do not trust the accuracy of the mechanisms currently used in the universities (Nguyen, 2008). This pessimism is based on scepticism about people being objective enough to do self-evaluation, especially given the cultural importance of "face saving." To reduce the negative impact of "losing face" and to assure the effectiveness of the process, many academics interviewed at the national conferences suggested that the purpose of self-study needs to be made very clear to everyone: it is to ensure the effectiveness of internal quality assurance processes, not to show the university's "back stage" to outsiders.

Using peer review as an external part of quality assurance is also considered important for Vietnamese higher education if it is going to adopt Western quality assurance mechanisms, yet this too is problematic. Nguyen's (2004) research found that in Vietnam the use of peer review may have many weaknesses. That study concluded that the use of peer review would require close attention if Vietnamese higher education moves to adopt external evaluation as a quality assurance mechanism, because Vietnamese beliefs and values about authority, hierarchy, and social relationships could be an obstacle to implementing this approach. The fact that interviewed administrators and academic staff expressed scepticism about the honesty, fairness, and expertise of the potential external agencies suggests concern and reservations about the effectiveness of the current quality control system managed by MOET (Nguyen, 2008).

In reality, problems occurred when the first 20 universities participated in external reviews during 2005 and 2006. Two problems highlighted at the final workshops and conferences held to summarise the achievements and shortfalls of the universities were the lack of peer-reviewer objectivity and the lack of peer-reviewer expertise in the matters under review. A third issue was that the procedures and timeline for the visits were complicated and lengthy. At present, the lack of sufficient evaluation research on the effectiveness and sustainability of the adopted model after five years of implementation has caused issues for Vietnam: it is difficult to confirm that higher education accreditation is moving in the right direction. However, because Vietnam plans to substantially expand the capacity of its higher education system, the most important and urgent issue is to establish strategies and mechanisms to improve quality simultaneously.

Recommendations
In the near future, as regulated by government policies, Vietnamese higher education institutions will have to follow and implement the accreditation processes of self-study and external review using the new set of accreditation standards. Nevertheless, Vietnam needs to review and continuously improve its evaluation processes; to that end, six recommendations are offered.

1. Vietnam’s higher education system has been rapidly expanding and diversifying; therefore, a set of standards need to cover all of the requirements for diversity and the scope of various kinds of higher education institutions; 2. Current accrediting standards need to be revised. A large number of criteria that focus more on inputs and procedures

48 Global Perspectives on Measuring Quality Kim D. Nguyen

than the outcomes may cause difficulties for external reviewers when assessing different universities having different missions and objectives. 3. As MOET made the decision to draw from concepts in the U.S. accreditation model, further research in this area is needed. 4. Vietnam needs more professional manuals, documents and regulations for the new set of standards. 5. Future sets of standards should focus on encouraging universities to fulfil their missions and not just to meet the minimum accreditation standards. 6. Independent accrediting agencies need to be established which could help to monitor accountability and quality improvement.

References

Nguyen, Kim Dung, 2004, International practices on quality assurance for teaching and learning and the implications for Vietnam. Ph.D. thesis, University of Melbourne.

Nguyen, Quang A., 2008, February 17, Educational accreditation, Lao Dong Cuoi tuan [Weekend Labor].

Pham, Phu, 2001, ‘Higher education of Vietnam and its critical issues in management education’, paper presented at the International Conference on Management Education for the 21st Century— Managing in the Digital Age in Vietnam, Hanoi, Vietnam, 12–14 September.

VNS, 2008, September 16, Boom of universities leads to staff shortages. Available online at: http://vietnamnews.vnanet.vn/showarticle.php?num=01EDU160908 (accessed 20 September 2008).

Benefits and Challenges of Quality Assessment at Chiang Mai University, Thailand

Surasak Watanesk
Dean of the Graduate School, Associate Professor
Chiang Mai University, Thailand

Chiang Mai University (CMU) is the first national higher education institution in northern Thailand, and the first provincial university in the country. For almost 46 years, CMU has put all its efforts into developing academic excellence. We are recognized as a distinguished center for study in the North of Thailand and are regarded as one of the top three universities in the country for academic quality. In 2009, CMU was selected by the Office of the Higher Education Commission (OHEC), Ministry of Education, as one of Thailand's nine "National Research Universities." These recognitions are essential for pushing forward our development goals of achieving "world class" university standards. We believe that the development of educational quality will bring us toward these goals in the near future, to the benefit of local communities, the country, and international education as a whole.

In order to achieve the goals of international standards, CMU has developed a system and mechanism of educational quality assurance conforming with the requirements stated in the National Education Acts. Our goal is to be ready for quality inspection and assessment by OHEC and to obtain accreditation from the Office of National Education Standards and Quality Assessment (ONESQA), a public organization. CMU's educational quality assurance system has been developed for controlling and monitoring operational outputs in keeping with the main tasks of the faculties, institutes, and offices; we have integrated the concepts of Total Quality Management (TQM), ISO, input/output analysis, and Kaizen into the CMU educational quality assurance system, named CMU-QA. This system considers 1) educational quality factors, 2) key performance indices (KPIs) and standard criteria, and 3) assessment criteria. In terms of measuring quality in postgraduate education and research, the areas that CMU measures include instructional activity, student development, and research activities.

The important KPIs in each area include curriculum improvement, the academic background and academic positions of instructors, the percentage of qualified on-duty thesis advisors, the number of students who graduate within the allotted time, disseminated thesis research works, etc. Over the past ten years, now in my second term as dean and having served as both an external and an internal quality assurance assessor, I have come to believe strongly that educational quality assessment brings great value to my institution's efforts to develop quality. The results of external and internal quality assurance, according to the KPIs for each component and standard, provide guidance for improving working processes and developing educational quality, leading to the continuous improvement of both planning and operations. This will elevate the quality of Thai higher education in producing graduates and developing personnel qualified to serve the labor markets. Quality assessment also develops the potential of higher education to create knowledge and innovation that enhance the competitiveness of the country in this globalizing era. Assessment likewise supports the sustainable development of Thai localities through the mechanisms of good governance, finance, standard regulation, and higher education networking, on the basis of diversity and systematic unity.

The Graduate School itself is an important academic section of CMU, with primary responsibility for controlling the quality and standards of graduate study to conform with the higher education standard regulations of OHEC and CMU. We also respond to the university's policies by operating the system and mechanism of educational quality assurance according to the policy framework, principles, and regulations set by OHEC and CMU, based on the principles of "Plan-Do-Check-Act" (PDCA) and TQM.

The results of quality assessment have revealed that many challenges exist and that some interventions are needed. Among the internal challenges, a harmonious understanding of the rules and regulations of educational quality assurance needs to be created among administrators, faculty, and staff. When the patterns or rules of quality assessment change, new administrative tools for quality assessment are also required.

The follow-up and assessment of the key performance indices cannot be completed fully, owing to an inability to link data among faculties, departments, and the university, even though an information technology system has been implemented for data collection. In addition, the changes brought about by the transition to becoming an autonomous university have also resulted in changes to the rules and regulations of graduate study administration, and the need to re-evaluate operating costs has affected curriculum management and the working processes of faculty and staff.

As for the external challenges, socio-economic changes have created many issues, especially in the area of free-trade agreements. These challenges have led to changes in the processes for creating international programs, increases in the number of graduate students needed to support the research outputs of the faculties, the development of cooperation with international networks, etc. All of these developments will contribute to the development of a quality system that is acceptable and competitive internationally. Recently, CMU has begun implementing the TQA system in every section of the university in order to attain a sustainable international standard.

The measurements of student quality are differentiated by degree level according to the standard set by OHEC; these include the study periods, the number of research works disseminated from theses, the qualifications required of faculty to serve as master's or doctoral thesis advisors, etc. As mentioned earlier, we are quite confident that the quality measurement presently implemented at CMU will produce qualified graduates to serve Thai communities and the country.

Communicating the Value of Assessment at Ohio State University

Patrick S. Osmer
Vice Provost for Graduate Studies, Dean of the Graduate School
The Ohio State University

General Principles and Guidelines
A communications plan is critically important to the success of academic program assessment and requires keeping internal and external constituencies informed at all stages of the process. By ignoring the need for a comprehensive communications plan, those directing an assessment miss an opportunity to gain support for the assessment itself, for its outcomes, and for the implementation of any resulting changes. Communicating the value of assessment is at the heart of the overall communications strategy and leads to buy-in and participation from all campus constituencies. This begins with sharing the assessment criteria and process broadly and clearly. Openness and transparency are vital to a successful assessment process.

Four Main Phases

The doctoral program review conducted by the Graduate School at Ohio State proceeded in four phases that speak to the necessity for a comprehensive communications plan.

1. Announcing the Assessment

It is essential to have the support and commitment of the president and provost. Their active support, in announcing the assessment and providing the charge to the review team, sets the stage for the assessment process. They can explain the assessment process in relation to the overall goals of the university. A good assessment is constructive, not punitive, and the president and provost can best convey this message.

2. Agreeing on the Criteria and Process

It is important that the person leading the assessment process talk directly with the heads of the programs under review. At Ohio State, this meant visiting individually with each dean to discuss the process and expectations for the input from the colleges and to obtain their feedback. Upon request, additional meetings were held with deans and their staffs. A major change in the review process resulted from these visits. The original plan was for each doctoral program to submit a report that would be assessed by the review panel appointed by the provost. It became clear that this was neither practical nor beneficial. Instead, the Graduate School worked in partnership with the colleges and asked deans to assess and rank their graduate programs according to the mission and goals of the individual colleges. The university’s Office of Institutional Research assisted the colleges and the Graduate School by providing relevant data. This approach proved to be the right solution for multiple reasons:

• Under the budget system at Ohio State, the colleges are the main academic and fiscal units and therefore have the great bulk of the resources.
• Graduate programs are interconnected with the teaching and research activities of the colleges and could not be assessed in isolation.
• The colleges were in the best position to make a relative comparison of their programs.

3. Carrying out the Assessment

All participants in the assessment need to understand the process and schedule, the criteria by which programs will be judged, and by whom. Following the meetings with deans and prior to beginning the doctoral program review at Ohio State, a doctoral program assessment plan was developed and shared broadly with deans, department and graduate studies chairs, and graduate program staff. This document outlined clearly the goals and timeline of the review, the information requested from the colleges, and the composition of the review committee. All work flowed from this document and the process was kept on time and on point.

4. Announcing the Outcomes

Once the review team has completed its work, it is important to follow up with the campus community to present the results and the next steps. At Ohio State, the Graduate School deans worked with the university's professional communications staff to develop a communications timeline. All constituencies, including the Board of Trustees and alumni, university administrators, faculty and students, were informed of the review results according to local protocols. Individual meetings were held with college deans to provide the results of the review of their doctoral programs prior to a meeting of all deans to present the full report. Plenary sessions were held with graduate faculty, and interviews were given to the university and external print media. The final document was posted on the Graduate School web site. The written report should not be considered the end of the assessment process. The document should make clear the outcomes and consequences of the assessment as well as indicate the next steps and how they will be implemented. The doctoral program assessment and plan written by the Graduate School at Ohio State included a section on "next steps" that provided an opportunity for deans to respond to the review and for meetings with the dean of the Graduate School and the provost to bring the assessment process to a resolution. It also set the expectation of yearly updates by the deans on the progress of their doctoral programs.

Assessment-Based Interventions in German Graduate Institutions

Ursula Lehmkuhl
First Vice President, Freie Universität Berlin (2007-2010)
Chair of International History, University of Trier

In order to explain the relationship between quality assessment and improvement-oriented interventions at Freie Universität Berlin, I have to start by pointing out major differences between the Anglo-American system of graduate education and the German one. First, the German system of graduate education encompasses two distinct study programs: Master programs and Doctoral or PhD programs. Whereas Master programs are in most cases 2-year programs with a structured curriculum based on a credit point system, structured doctoral programs or PhD programs are still the exception rather than the rule. The quality of traditional "unstructured" doctoral education in Germany, the so-called "apprenticeship model," very much depends on the nature of the bilateral relationship between the doctoral student and the first supervisor. In many cases this model produces highly qualified young researchers who successfully compete on the international labor market. However, because of its unstructured character, this model also has weaknesses and shortcomings. This is why in the early 1990s the German Research Foundation established Research Training Groups as a first step towards a more structured form of doctoral education. Based on a positive evaluation of this structured model, the Excellence Initiative set further incentives to change the system of doctoral education at German universities by offering a special funding line for the establishment of structured doctoral or PhD programs. The structured doctoral programs that have been introduced in Germany since 2006 are three-year programs. In most cases only students who have obtained a Master's degree are eligible for such a program. There are exceptions, though. For example, the Berlin Mathematical School has introduced a fast-track system for students who have earned a very good or excellent Bachelor degree. They can enter the doctoral program directly, but they have to do additional course work.

Presently, at Freie Universität Berlin only 11% of all doctoral students are enrolled in a structured PhD or doctoral program. At other German universities the percentage is even lower. This bifurcated system of doctoral education in Germany has major consequences for the implementation of quality assessment and intervention systems across different schools, departments and/or programs. In the "apprenticeship model" of doctoral education, the quality of a PhD or doctoral student and his or her dissertation project are evaluated at only two points of the qualification process: at the beginning, by a formalized application procedure leading to the acceptance of a doctoral student by the responsible department, and at the end, through a final exam, in most cases a "disputatio." The new structured programs have set up a more sophisticated system that differs from the old one, notably by the introduction of process-oriented monitoring and supervision instruments. Secondly, most of the structured doctoral or PhD programs depend on the departments for granting a doctoral degree. In fact, only 7 structured PhD programs in Germany are degree-granting institutions. These are the "Bayreuther Internationale Graduiertenschule für Afrikastudien" (Univ. Bayreuth); the "Graduate School of Systemic Neurosciences" (LMU München); the "Graduate School for Life Sciences" (Uni Würzburg); the "Hannover Biomedical Research School" (MH Hannover); the "Göttingen Graduate School for Neurosciences and Molecular Biosciences" (Uni Göttingen); "The Hartmut Hoffmann-Berling International Graduate School of Molecular and Cellular Biology" (Uni Heidelberg); and the "International Graduate School in Molecular Medicine Ulm" (Uni Ulm). Except for the Bayreuth school, all of these degree-granting graduate schools are in the bio- and life sciences. Hence, in all other cases the academic standards and mechanisms for quality assurance are defined and managed by the departments and not by the graduate schools themselves. As a consequence, the quality standards and the instruments of quality assessment applied by the individual programs vary considerably from department to department according to departmental quality assurance practices and routines.

Hence, the question of how to establish a quality assessment and intervention system that reaches across different schools, departments and/or programs is one of the major challenges German universities will have to deal with in the coming years. It will be a major task of the second round of the Excellence Initiative to assure that overarching and comparable systems of quality assurance and intervention are established. How, then, are doctoral education and thus quality assurance and intervention administered at Freie Universität Berlin? In 2006 Freie Universität established the Dahlem Research School (DRS) as the university's umbrella organisation for structured doctoral programs. Thus, the DRS does not address doctoral students working on their dissertations in the traditional bilateral supervising context. The establishment of the Dahlem Research School went hand in hand with the adaptation of relevant national and European recommendations for assuring a high standard in doctoral education to the specific needs and situation at Freie Universität and in the Berlin-Potsdam area. The DRS was instrumental in establishing a comprehensive concept of doctoral education, quality assurance and career service for young researchers. Since 2007 the DRS has been financed by funds received from the Excellence Initiative in the context of the so-called "Third Funding Line," i.e., innovative institutional development concepts. The DRS is thus an element of FUB's institutional strategy. This is important to note here because the DRS is the only umbrella institution for structured doctoral education in Germany not funded by the so-called "First Funding Line," which focuses on the restructuring of graduate education at German universities. As an umbrella institution, the DRS formulates general quality standards that the individual doctoral programs that are members of the DRS can fulfil according to the specific needs and requirements of each program. To be eligible for admission into the DRS, doctoral programs must have a 3-year curriculum consisting of a compulsory core and optional components in the areas of disciplinary, transdisciplinary and transferable skills. The structure of the program has to comply with the Bologna ECTS standards, i.e., the workload of the program is 180 ECTS points.


In addition, doctoral programs have to establish a competitive selection and admission procedure; they have to implement new forms of supervision, encompassing written agreements between the PhD candidate and the first supervisor spelling out the rights and duties of both parties; and they have to carry out a regulated monitoring process documenting the progress of the doctoral student. One instrument of this monitoring process is the so-called June or September paper. At the end of the first year, doctoral students have to submit the draft of a chapter of their dissertation. Only after a positive evaluation by the first and second supervisor will students receive funding for the second and third year. This instrument is used in most of the externally funded programs. Funding agencies like the Deutsche Forschungsgemeinschaft (DFG) or the Thyssen Foundation require programs to assess the success of their doctoral students and to monitor time to degree and attrition rates. This obligation requires participating faculty members to follow the students' progress and show evidence of that progress as a condition of receiving any grant funding. Hence, external funding and the evaluation criteria of funding agencies like the DFG strongly influence the aim and content of quality assessment in doctoral education. This also reinforces the tendency of tying assessment to funding. Besides developing a comprehensive concept of doctoral education, the Dahlem Research School assesses the member programs quantitatively and qualitatively: quantitatively, the number of PhD candidates, the gender dimension and the international background of doctoral students, as well as the attrition rate, are surveyed; qualitatively, the admission process, the curricular structure, disciplinary content, and the supervision process of the programs are monitored. On the basis of this evaluation the DRS formulates recommendations and guidelines regarding curriculum development, supervision, and mentoring. As a next step in quality assurance, it is planned to admit programs that apply for membership in the DRS only provisionally, for an initial three-year period. After three years a comprehensive evaluation of the program will take place. Based on the outcome of this evaluation, the program's membership in the DRS will either be extended, so that it receives the quality seal of the DRS, or the program will be excluded.


Intervention is still a major challenge. Because of the above-described particularities of the German system of doctoral education, the Dahlem Research School does not possess strong instruments of intervention. Its role and function in the development and monitoring of doctoral education at Freie Universität is more or less limited to formulating and communicating norms and quality standards to the individual member programs. The DRS tries to identify lack of compliance, but currently it cannot intervene in any positive or active sense, e.g., by closing programs that are not complying or by cutting funds. This last point—the fact that the DRS does not directly fund the programs, and that the individual programs are self-sustaining as long as they receive external funding through funding agencies—is a major challenge for the establishment of any intervention system. Hence, one of the major challenges for the future of the DRS is to develop mechanisms that will strengthen its leverage vis-à-vis the member programs. As a first step, a steering group that will be responsible for the suspension of programs or other sanctions will soon be established. A second challenge is the support and promotion of young researchers who have finished their doctoral education, especially in their immediate post-doc phase. This includes the implementation of targeted career paths for young researchers by observing the labor market and career options for PhDs. Already today the DRS functions as a support agency for the numerous externally funded postdoctoral programs, e.g., the Emmy Noether Program, the Heisenberg Program, or postdoctoral programs funded by non-university research organizations such as the Max Planck Society, the Leibniz Association or the Helmholtz Association. With this particular emphasis on career paths for young researchers and the support of post-docs working in externally funded post-doc programs, the DRS goes well beyond the responsibilities and duties of a graduate school in the Anglo-American system. Freie Universität Berlin is convinced that the success and the quality of doctoral education will also be measured by the successful placement of PhDs both in and outside academia.

Moving from Assessment to Intervention in Australian Universities

Alan Dench
Dean, Graduate Research School
University of Western Australia

Maxwell King
Chair, Go8 Deans of Graduate Studies
Pro Vice-Chancellor, Research and Research Training
Monash University

Assessment and intervention—with a view to improvement—apply both to the candidate in a program and to the program itself. How is a candidate’s progression through their program of research assessed and how are the programs in which candidates are enrolled assessed? What interventions apply in each case? We consider these in turn.

Assessment Strategies for Candidates in PhD Programs

There is a great deal of convergence in the formal processes of assessing progress through candidature in Australian universities, particularly within the Group of Eight universities. Assessment at entry involves consideration of the research preparedness of the candidate; this is done first by a potential supervisory team, with oversight of formal entry criteria (qualifications, financial support, language competencies) managed by a central administrative office. Progress through candidature is monitored and involves three clearly defined milestones, each of which includes a quality assessment. This assessment is made at the level of the supervisory team and is ultimately approved by a centralised authority—at The University of Western Australia, this is the Board of the Graduate Research School. In some universities, the function may devolve to faculties. Continued candidature is dependent on satisfactory progress and, in relevant cases, the successful resolution (through intervention) of identified problems.

1. Research Proposals are submitted within six months of enrolment (PhD) and are expected to include a description of the proposed study and of the facilities required, a budget (guaranteed by the enrolling School/Department), a literature review, a review of methodology, and a candidature plan. The approval of the research proposal is typically conditional on appropriate ethics approvals. Intervention occurs initially at the level of the supervisory team. The proposal is endorsed by the supervisory panel and the Head of School/Department and is subsequently considered and approved by the Board of the Graduate Research School. Feedback is provided to candidates in cases of requests for resubmission and may include referrals to specialist support staff, e.g., "Graduate Education Officers" or counseling services, or to specific support course modules. Ethics approvals may be submitted by a supervisor (rather than by the student directly), and support in (re)drafting ethics applications is provided by a central university resource such as the Research Integrity Office.

2. Confirmation of Candidature involves checking the specific confirmation tasks/milestones approved with the Research Proposal, and may include satisfactory oral presentation of a paper; satisfactory completion of a literature review/methods section and/or other written work to an agreed standard; and satisfactory completion of coursework (which may include general training in Intellectual Property (IP), Occupational Safety and Health, ethical research, animal handling, statistics, training on equipment or software, advanced academic writing, etc.). Intervention is again first managed by the supervisory team, who ensure that all tasks and milestones are completed. Where problems are noted, the Graduate Research School may intervene. Formal confirmation is made by the Board of the Graduate Research School.

3. Annual Progress Reports document project progress and any issues that have arisen, including where necessary a revised timetable for candidature. The process is designed to ensure a conversation between the candidate and the supervisory team and joint sign-off on the report. A specific assessment of oral and written language skills is requested, and the report is approved by the Board.


Intervention is initially conducted by the supervisory team/School. Any problems identified may lead to a request (by either the supervisor or the Graduate Research School) for an interim reporting cycle (3-6 months), in which case this requirement is managed by the Board. The reporting of unsatisfactory language skills triggers specific interventions and requirements for the candidate to attend language skills classes. Other referrals to specialist support staff, e.g., "Graduate Education Officers," counseling services, or specific course modules may also be made. Interventions at the School level are financed and administered at that level. A centralised Faculty or University Board of a Graduate Research School and Graduate Research Office will operate a separate budget line—funded "off the top" of the University budget—which supports the administration of approvals and interventions and may fund specific support (Graduate Education Officers, coursework modules, advice and counseling). Some support is out-sourced to other areas of the University (e.g., statistics "clinics," academic writing workshops).

Assessment and Intervention Strategies for PhD Programs

As with candidature, programs can be assessed at the level of the individual or supervisory team; the enrolling faculty, school, department or named "program"; and the university. Most, if not all, Australian universities make use of student feedback as one component in assessing the quality and effectiveness of research training. The most commonly used instrument is the annual national Postgraduate Research Experience Questionnaire (PREQ), which allows comparison of programs within and between institutions but is typically too coarse-grained to allow internal remedial intervention. Most universities also use a system of formal or informal exit interviews and questionnaires to gather student feedback. While most Australian universities run programs of supervisor training and accreditation—recognising whether or not a potential supervisor is research active, has appropriate qualifications, and has supervisory experience—few if any have a formal procedure for "deregistration." Intervention at this level is generally by "carrot" and rarely by "stick."


Many universities have a rolling cycle of internal reviews (at UWA this is every seven years) which subject programs to quality audits and make recommendations for change where appropriate. Reviews of schools/departments pay attention to the research training environment. The university's overall research training program—i.e., the Board of the Graduate Research School and administrative support—is also subject to regular review with some external benchmarking. A number of institutions use additional assessment and intervention mechanisms at the program level. For example, Monash University has two components to its process for assessing the quality of postgraduate research programs at the departmental/school level. Since 1994 it has surveyed all its doctoral and research master's students, asking a number of questions about the quality of supervision and departmental/school support. Currently these surveys are conducted every second year. They can be very useful at pinpointing problems. For each survey, a score was calculated for each academic unit with 8 or more respondents. This was initially done using 7 key questions on supervision and 7 key questions on departmental/school support. The score enables a league table to be produced. The most recent survey involved 52 academic units with 8 or more respondents. Representatives of the top 12 units were invited to a lunch at which they were each asked to discuss the reasons why they had made the top 12. The discussion is recorded and a summary is posted on the web. This is one way of disseminating good practice. At the negative end of the league table, academic units where some improvement is needed are identified. Monash University is organised into 10 Faculties, each headed by a Dean. Each Dean is provided with a report on the outcomes of the survey, including the survey results and the league table. Both good and poor performance is noted, and a report on changes/improvements made as a result of the survey is requested in six months' time. Occasionally an academic unit will explain its results as being skewed by survey bias; care has to be taken with the conduct of the survey to minimise this possibility. There have been a number of cases of spectacular improvement in which an academic unit has gone from the bottom 6 on the league table to the top 12 two years later. This can happen because students like to feel valued and listened to.

The second component has been to provide academic units with reports on the completion rates of cohorts of students. What works here will depend on numbers—the larger the cohort the better. Last year, reports were provided on the cohort of students who commenced study in 2000-2. These showed some very interesting variations that in some cases have resulted in detailed reviews of particular programs. The best example involved a faculty with five rather similar departments. Four of the departments had completion rates of around 80%, some slightly above and some slightly below. The fifth department had a completion rate just above 50%. This was clearly a program in need of attention.
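Both components, the survey-based league table and the cohort completion reports, come down to simple aggregate statistics. The sketch below is illustrative only: the rating scale, the equal weighting of the 14 key questions, and the data layout are assumptions, since the text does not specify Monash's exact scoring formula.

```python
from statistics import mean

# Illustrative reconstruction of the two Monash components described above.
# The rating scale, equal question weighting, and data layout are assumed;
# the text does not give the actual formula.
MIN_RESPONDENTS = 8  # units with fewer respondents are not scored

def unit_scores(responses):
    """responses: iterable of (unit, ratings) pairs, one per student, where
    ratings holds the answers to the 7 supervision + 7 support questions."""
    by_unit = {}
    for unit, ratings in responses:
        by_unit.setdefault(unit, []).append(mean(ratings))
    return {u: mean(s) for u, s in by_unit.items() if len(s) >= MIN_RESPONDENTS}

def league_table(scores):
    """Rank scored units from best to worst; table[:12] would give the units
    invited to lunch, table[-6:] those flagged for improvement."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

def completion_rate(cohort_outcomes):
    """cohort_outcomes: list of booleans, True if the student completed.
    A department near 0.5 against faculty peers near 0.8 flags a review."""
    return sum(cohort_outcomes) / len(cohort_outcomes)
```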

Conclusion

Quality in research training involves a consideration of both the candidate's progress and the program within which he or she is enrolled. Assessment and intervention are most appropriately carried out at regular intervals and at a number of levels within the institution. There remain, however, a number of questions for consideration and discussion:

• Is the assessment of candidates best managed locally (within schools, departments, faculties), centrally, or in partnership? Can a central Board adequately manage different disciplinary expectations?
• What (central) interventions can be used to manage the performance of supervisory teams, and what administrative challenges do these present?
• What are reasonable time frames for the assessment and review of programs?

IV. SUPPORTING THE DEVELOPMENT OF RESEARCH TRAINING ENVIRONMENTS

Summary of Presentations and Discussion

As was observed in the discussions following Panel 2, the quality of institutional research is closely related to the quality of research training. Faculty involved in productive and high-impact research offer excellent training opportunities to graduate students, while well-trained graduate students tend to enhance the quality of the research in which they participate. The papers for Panel 3 demonstrate a broad range of approaches to measuring the quality of research and research training and to strengthening the relationship between these two areas of activity. Because there are often significant differences between assessments conducted by bodies external to the university and those initiated by an individual institution, the panel topics were divided into two categories. For the first part, presentations addressed assessments conducted by governments, accreditors, and global ranking systems and their impact on institutions. The framing topics and questions were:

• Mandatory Assessments of the Research Environment: What mandatory external assessments (accreditation and government-mandated quality assessments) is your university required to administer? What research conditions and outcomes do these assessments measure? What challenges does your university face in administering these assessments and reporting the resulting data?
• The Role of Global Rankings: What role do global rankings play in your university's development of assessment strategies? What are the greatest challenges and opportunities encountered by your university in responding to and anticipating these rankings?

For the second part of the panel, presenters addressed questions about their own universities’ assessments of the institutional research environment:

• University Assessments of the Research Environment: What internal methods and measures has your university adopted to assess the quality of the research environment? How can these assessments be used to ensure that research infrastructure and resources are tailored to the needs of individual programs and/or departments and to the mission of the university as a whole?

• Admission, Retention, and Completion in Doctoral Degree Programs: How does your university assess the quality of pathways through the doctorate, from admission to completion? Is it possible to develop predictive quality measures to assess applicants for admission? How does your university measure patterns of attrition and completion? How can these measurements be used to support doctoral students throughout the dissertation process?

This section features papers by summit delegates from Australia, China, Hong Kong, Japan, New Zealand, South Africa, and South Korea and highlights from the discussion that followed.

Mandatory Assessments of the Research Environment

Papers and discussion on mandatory assessments shed light on the efforts of governments to establish minimum quality standards while also working to motivate high performance on the part of individual institutions. Akihiko Kawaguchi (National Institution for Academic Degrees and University Evaluation) describes two systems for assessing quality in Japan: certified evaluation and accreditation, and National University Corporation Evaluation, a more recent performance-based evaluation that focuses on mid-term and annual plans and objectives.

In the discussion, Dr. Kawaguchi noted that this new, combined system has created some confusion on the part of faculty about the differences between the two assessments, an issue that will be familiar to any graduate leader who has worked to introduce significant changes in assessment approaches and mechanisms. In his paper on mandatory assessments in Hong Kong, Paul K.H. Tam (The University of Hong Kong) also described the relationship between quality assurance audits and competition-based incentive systems that take the form of funding for faculty salaries, places for graduate students, and research funding.

The Role of Global Rankings

Given the significant impact of global rankings on a university's reputation and ability to attract top students and faculty, all major research universities pay close attention to the assessment criteria used in league tables. And in spite of debates surrounding the criteria used to rank universities with different missions, rankings may also play an important role in stimulating meaningful quality assessments. As Chao Hui Du (Shanghai Jiao Tong University) observes in his paper on the relationship between quality and global rankings, global rankings are important in motivating universities to develop concrete goals for improvement. Other presenters and discussants underlined the fact that countries working to build capacity for research and development often have significantly different perspectives on global rankings than institutions in countries with a longer history of research and development. In his paper on this topic, Jan Botha (Stellenbosch University) emphasizes that in South Africa and other nations facing significant constraints on resources, global rankings may risk undervaluing specific national and local priorities. Dr. Botha also notes that his own university puts considerable resources into identifying and supporting underrepresented students, in line with its mission of building capacity in its region, but this priority sometimes conflicts with the quality metrics used in well-known ranking systems. In discussion, Rose Alinda Alias (Universiti Teknologi Malaysia) drew attention to the fact that countries seeking to develop capacity may use global rankings for different purposes than do nations with a strong R&D infrastructure.


In Malaysia, for example, rankings are used by the government in making decisions about where to send Malaysian students for graduate study. Particular attention is given to the publication records of global universities, because the ability to publish is considered a strong indicator of whether students will return to Malaysia with the ability to train future researchers.

University Assessments of the Research Environment

Institutional assessments of the research environment provide opportunities for universities to take a closer look at key areas of priority and, in some cases, to develop assessment instruments that are specially designed to capture data in these areas. Two papers shed light on the benefits of internal assessments. Sang-Gi Paik (Chungnam National University) discusses the dimensions of the research environment assessed by his university, which, in addition to research performance and research conditions, evaluates collaborative activities between the institution and partners in industry and research institutes. This metric reflects the university's strong investment and involvement in the research and development district where Chungnam National University is located. The next paper, delivered by Marie Carroll (University of Sydney) and Richard Russell (University of Adelaide), gives attention to the assessment of graduate research training. In the discussion, participants shared a number of methods for assessing the quality of graduate training. Many agreed that efforts to collect data on students' experiences, through institutional exit surveys, collaborative institutional exit surveys, or focus groups, are an essential part of assessments of the research training environment.

Admission, Retention and Completion in Doctoral Degree Programs

Increasingly, governments and institutions are using data on admission to doctoral programs, as well as completion patterns, to assess the quality of research training. The paper by Charles Tustin (University of Otago) addresses this topic in the context of national assessments of doctoral completion in New Zealand while also discussing his university's independent approaches to supporting high completion rates. In the discussion, a number of important questions were raised about using completion rates as a metric of quality.

Some noted that completion data do not necessarily pinpoint problems in programs, and one participant recommended that exit surveys also be used for students who do not complete their degrees. Another participant noted the importance of distinguishing between "good attrition" and "bad attrition": the fact that some students do not complete their programs may also reflect high program quality, to the extent that the program does not pressure students who are not well matched to the program to continue. Yet completion was widely viewed as an important metric, and one that should be considered in relation to a broader set of data. For example, some universities may choose to examine the relationship between admissions criteria and completion rates for their students. Debra Stewart (Council of Graduate Schools) pointed out that while academic indicators of quality are important in the admissions process, a broad range of non-cognitive factors, as well as program and institutional practices, significantly affect completion—a finding that emerged from CGS's PhD Completion Project in the U.S. In general, participants agreed that in order for completion rates to be meaningful, universities must carefully monitor the performance and progress of students over the course of the entire program.

Conclusion

Perhaps the strongest idea to emerge from Panel 3 is that strong assessments of the graduate training environment are complex and multi-layered. Where external assessments are concerned, university leaders must consider comparative benchmarks, whether those established by a government to compare institutions within their country or those produced by a global ranking scheme, in relation to the particular goals and needs of their institutions. In the area of internal assessments, it is important for universities to consider a broad range of quality metrics in addition to research outputs.

Mandatory Assessments of Japanese Research Environments

Akihiko Kawaguchi
Specially Appointed Professor
National Institution for Academic Degrees and University Evaluation, Japan

In Japan, the quality assurance (QA) framework was originally based on prior regulation: universities abided by the "Standards for Establishing Universities," enforced through establishment approval systems (including approval of new educational programmes within a university). In response to the growing need to improve QA for universities, a third-party evaluation and accreditation system was introduced in 2004, after the self-assessment of universities became mandatory in 1999. The National Institution for Academic Degrees and University Evaluation (NIAD-UE) is involved, through its ongoing evaluations of higher education institutions, in two systems for evaluating the QA of universities.

Certified Evaluation and Accreditation

Evaluation of universities' overall conditions of education and research by certified evaluation and accreditation organizations is mandatory. Every institution must be assessed within the periods stipulated by the government. The number of higher education institutions subject to the evaluation exceeds 750. NIAD-UE is one of the certified organizations responsible for "certified evaluation and accreditation," performed once every seven years (every five years for law schools), in order to assure that at least a minimum level of quality is achieved against pre-determined goals based on each university's mission, objectives, and charter.

National University Corporation Evaluation

National University Corporation Evaluation is a performance-based evaluation of national university corporations and inter-university research institute corporations with respect to their attainment of mid-term objectives and mid-term plans over each six-year period, together with annual plans for education, research and management. The National University Corporation Evaluation Committee, led by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), is entirely responsible for this evaluation. It appointed NIAD-UE to undertake evaluation of the attainment of mid-term objectives for education and research. NIAD-UE is responsible for evaluating the performance of the 86 national university corporations. This evaluation may indirectly affect future funding allocations by the government.

Assessment of Research Conditions

Certified evaluation and accreditation by NIAD-UE evaluates the overall condition of universities, particularly the education of regular programs. Research, however, is also one of the most important activities of universities. As members of society, universities are expected to cooperate and share their intellectual property, in both research and education, with local communities and the industrial sector. Therefore, based on the evaluation objectives "to provide high quality evaluation reports to universities for the quality improvement of their education and research activities" and "to assist universities in fulfilling accountability to the general public on their status as public organizations, by clarifying the condition of their education and research," NIAD-UE sets two optional evaluation items that evaluate universities' activities from a dimension different from that required by certified evaluation and accreditation: the conditions of research, and education offered to those other than full-time students. This scheme is designed to contribute to the enhancement of each university's individuality. NIAD-UE conducts this evaluation at the request of the university. NIAD-UE's basic policy for its Standards for Evaluation and Accreditation is to evaluate the status of education and research comprehensively, focusing particularly on educational activities. In this process, research is examined along with the educational aspects. However, universities conduct a broad range of research that cannot necessarily be assessed adequately in relation to educational activities.

For a university to remain active in research, it is essential that it establish systems for conducting and supporting research and implement policies to promote them. At the same time, the university should accurately assess the current conditions, the research outputs and outcomes, and their impact on society, the economy, and culture. The findings should result in enhancements to research and be communicated to the public. These optional evaluation items have been created to satisfy the needs of universities that request a comprehensive evaluation focused on their research (the Evaluation Items and Viewpoints are shown below). With these items, the central question is whether, in light of the university's purpose, systems for conducting, supporting and improving research are established, and whether measures on resource allocation, rules governing research, and systems for the quality improvement of research are functioning properly. Other items to be considered include how actively research is conducted (the number of research publications, the status of collaborative research projects, the use of competitive research funds); the quality of research (factors such as the number of successful requests for competitive research funds, the results of external evaluations, and the number of prizes won); and the social, economic, and cultural contributions clarified by analysis of the effective use of research outcomes. These factors are assessed in light of the university's purpose.

Evaluation Items and Viewpoints

Item A-1 In light of the university’s purpose, the necessary systems for the conduct of research are maintained and functioning.

A-1-i Appropriate organizational structures for conducting, supporting, and promoting research are maintained and functioning.

A-1-ii Appropriate policies and programs for research are defined and implemented.


A-1-iii Initiatives to monitor the status of research, address problems, and improve its quality are implemented.

Item A-2 In light of the university's purpose, research is conducted actively and is producing positive outcomes.

A-2-i The evidence on research activities and outputs shows that they are actively implemented.

A-2-ii The evidence on the outcomes shows their high quality.

A-2-iii The status of applications of research outcomes in social, economic, and cultural fields and evaluation by external bodies indicate that the university’s research is contributing to the development of those fields.

Mandatory Assessments of the Research Environment in Hong Kong

Paul K.H. Tam
Pro-Vice-Chancellor & Vice-President, Dean of the Graduate School
The University of Hong Kong

1. Introduction

The University of Hong Kong (HKU) is the territory's oldest university, with a history stretching over more than 90 years, and it has grown with and helped shape the city from which it takes its name. The University is committed to providing first-class research postgraduate education and learning that meet the highest international standards. As a publicly funded institution of higher education, the University of Hong Kong is externally assessed in its provision of education by the University Grants Committee (UGC), a non-statutory advisory committee to the Government of the Hong Kong Special Administrative Region (HKSAR) on the development and funding needs of higher education institutions in Hong Kong. At present, there are eight institutions of higher education funded by the UGC in the territory, and each institution has an institutional role developed from its strengths as agreed with the UGC. The University of Hong Kong, as an English-medium, research-led university, supports a knowledge-based society and economy through its engagement in cutting-edge research, pedagogical developments, and lifelong learning; in particular, it emphasizes education of the whole person and interdisciplinarity. The UGC-funded institutions are autonomous, and they have substantial freedom in the design of curricula, the development of academic standards, the initiation and acceptance of research, and the internal allocation of resources. Nevertheless, they must provide annual returns to the UGC, and they share the social responsibility of providing quality education and research training up to an international standard in a most cost-effective manner. The UGC assures the quality of research education and training offered by the institutions through the allocation of research resources in the form of block grants, research funding, and research postgraduate places, and by conducting quality assurance audits.

2. Block Grant Allocation

Research is regarded by the UGC as one of the cornerstones of education and is an important component of quality education. As a UGC-funded institution, HKU must keep enhancing its research so as to attract more funding support from the UGC as well as other non-government funding sources, e.g., the Croucher Foundation and donations, to sustain research development. The UGC provides block grants to institutions based on a methodology comprising mainly three elements: Teaching (about 75%), Research (about 23%) and Professional Activity (about 2%), where the percentages refer to the distribution in the 2009-12 triennium. Regarding the "Research" element, the UGC carries out Research Assessment Exercises (RAE) every six years to inform the distribution of the research portion of block grants, to discharge public accountability, and to induce improvements in research.
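Arithmetically, the methodology is a weighted split of each institution's grant. The sketch below applies the approximate 2009-12 weights quoted above to a hypothetical grant total; the function and figures are illustrative, not the UGC's actual calculation.

```python
# Illustrative split of a hypothetical block grant using the approximate
# 2009-12 triennium weights quoted above; not the UGC's actual methodology.
ELEMENT_WEIGHTS = {"Teaching": 0.75, "Research": 0.23, "Professional Activity": 0.02}

def split_block_grant(total_hkd: float) -> dict[str, float]:
    """Divide a block grant across the three funding elements."""
    return {element: total_hkd * weight for element, weight in ELEMENT_WEIGHTS.items()}

# e.g. split_block_grant(1_000_000_000)["Research"]  # roughly HK$230 million
```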

3. Competition-Based Allocation of Research Funding

Beyond the block grant, resources are also allocated by the UGC to institutions through different funding schemes on a competitive basis. Below are some examples:

• Areas of Excellence (AoE): A number of Areas of Excellence are developed within the UGC-funded institutions based on their existing strengths; these would be recognized internationally as of equal status to their peers elsewhere and would justify substantial investment in state-of-the-art facilities. The development of these AoEs would be of direct interest to industry and commerce in Hong Kong and the region. Funding is granted in support of the AoE projects submitted by institutions through competitive bidding. The UGC recently announced the funding results of the fifth round of the AoE Scheme and decided to support five cross-institutional joint proposals for a total of HK$378M over a period of eight years. HKU is a participating institution in four of the five joint proposals.
• General Research Fund (GRF), Collaborative Research Fund (CRF) and Theme-based Research Fund: The Research Grants Council (RGC), which operates under the aegis of the UGC, offers funding through the GRF, the CRF and a theme-based research fund. GRF grants, which support academic research projects, are subject to a rigorous peer review process via the RGC's subject panels, supported by an international network of expert reviewers. CRF grants are for equipment proposals and for group research across disciplines and/or across institutions, with a view to enhancing the research output of institutions. The theme-based research fund aims to focus the academic research efforts of the UGC-funded institutions on themes of strategic importance to the long-term development of Hong Kong.

4. Allocation of Research Postgraduate Student Places

At present, the allocation of all the 4,765 research postgraduate places by the Government to institutions is simply historically based, without much reference to performance or quality assessment. However, starting from 2009-10, there have been 800 new research postgraduate places for allocation, in phases, to the eight UGC-funded institutions on a competitive basis through the following schemes:

• The Hong Kong PhD Fellowship Scheme (HKPFS) was established by the RGC in 2009 to attract the best and brightest students in the world to pursue their PhD programmes in Hong Kong's institutions. Applicants can apply to up to two institutions under this scheme, and the applications nominated by the institutions to the RGC to compete for the fellowships are assessed through a blind review process by two selection panels (a Science, Medicine, Engineering and Technology panel and a Humanities, Social Sciences and Business panel) comprising local and overseas assessors who are experts in the relevant disciplines. For the year 2010-11, a total of 115 fellowships were awarded. The Scheme is highly competitive, and institutions have to actively reach out for outstanding candidates.
• Allocation of research postgraduate student places to the 5th-round Areas of Excellence (AoE) projects and new AoE sustained-funding projects, based on the recommendations of the reviewers.
• Allocation of research postgraduate student places based on the moving average of institutions' General Research Fund (GRF) and Collaborative Research Fund (CRF) award results in previous years (a sketch of such an allocation follows this list).

5. Quality Assurance on Research Postgraduate Programmes by the Quality Assurance Council

5.1 The Quality Assurance Council (QAC) was established in April 2007 under the aegis of the UGC to assist the UGC in providing third-party oversight of the quality of education, at first-degree level and above, provided by the UGC-funded institutions. The QAC conducts regular cycles of institutional audits of all UGC-funded institutions. The intention is to assure the UGC and the public that institutions deliver on the promises they make in their role and mission statements. HKU completed its audit by the QAC in 2009. The focal point of the audit system is the quality of student teaching and learning, and there is a section focused on activities specific to research postgraduate degrees.


5.2 In the audit, the University was required to articulate and justify the standards it set itself for all levels of study, and demonstrate how standards had been achieved. Members of the QAC panels comprised local and overseas academics together with a lay member from the local community. The University had to prepare an institutional submission to the QAC before the visit took place. There was a one-day initial meeting of the QAC panel to discuss the submission followed by a four-day panel visit to HKU to meet with over 110 staff and 50 students across the University as well as lay members of the University’s Council and graduates. The University received an audit report on findings and recommendations from the QAC a few months after the visit. All the comments and findings were carefully discussed and considered at different levels of the University, and an institutional response to the audit report was provided to the QAC.

5.3 The QAC Panel commended the University of Hong Kong for its comprehensive codification and application of policies and procedures for research postgraduate student training, supervision and management, and its support to research postgraduate students. It also suggested that the University review the roles of certain committee structures in research postgraduate training and relevant training support to research postgraduate students prior to their undertaking teaching duties.

5.4 The University benefited greatly from the audit. It has leveraged the opportunity of this review to engage in a self-evaluation exercise. The QAC audit is an important exercise for local universities as it provides additional external benchmarking which is a vital component of the quality enhancement processes.


6. External Examiners’ System

In addition to the Government’s reviews/assessment measures mentioned above, the University obtains an external assessment of all research postgraduate theses. To assure the academic standard of the research postgraduate theses, all theses are examined by an External Examiner who is either a local or overseas academic expert in the discipline or cognate area and has not been involved in supervision of the thesis in question. One Internal and one External Examiner are appointed to examine an MPhil thesis while two Internal Examiners and an External one are appointed for a PhD thesis. In special circumstances, an external examiner may be appointed on the basis of relevant professional standing and experience. The external examiner system serves as a benchmarking mechanism of the University against other institutions as the External Examiners are asked to rate the quality of theses in comparison with all the theses from different institutions that they have ever assessed.

7. Challenges Ahead

The University has to undergo comprehensive external assessments, mainly by the Government (the UGC and RGC), on a regular basis. Moreover, the UGC has recently announced that it is reviewing its system, as it believes that all UGC research funding and resources should be allocated on a competitive basis. This is somewhat different from its past practice. For example, at present the research portion of the block grant (about HK$2.7 billion per annum) is much larger than the competitive RGC funding (about HK$750 million per annum). However, the allocation of the research portion is mainly based on the results of the RAE, which is conducted only every six years and does not easily differentiate research performance at the top end. In addition, the current allocation of the 4,765 research student places is simply historically based, and most places are allocated without reference to quality or success in research output.


While such methodologies were considered appropriate in the past, when there was a clear differentiation in the research capability of institutions, the UGC is of the view that they are no longer appropriate. Since all the UGC-funded institutions are now able to recruit, and have recruited, academics capable of excellent research, all of them should be eligible to apply and be funded on the basis of their excellence by means of competition. The UGC is considering the feasibility and desirability of transferring the research portion of the block grant to the RGC, and it will continue to discuss with the Heads of institutions how to move forward with such a change, as it may affect the institutions' ability to pay the salaries of their academic staff and to maintain a healthy infrastructure for research development. Furthermore, the UGC has decided to introduce competition for research student places as soon as possible, while not destabilizing any one institution in the territory. The UGC has already started with the 800 new student places described in section 4 above; by 2016-17, around half (51%) of the research training student places will be allocated on a competitive basis, with reference to quality and success in research output. The trend is for the whole system to move towards a performance-based model, in which performance indicators such as research output, completion rates and time to degree for research postgraduate studies, and success rates in grant applications will be assessed more closely. Each institution will need to think strategically about the areas of research into which it should put resources and staffing to help further boost excellence. Hence, competition among institutions in Hong Kong for scarce research resources and research student places will become more intense in the years ahead.

Quality and Global Rankings

Chao Hui Du
Executive Vice-Dean, Graduate School
Shanghai Jiao Tong University

Rankings are a global trend in higher education. On the one hand, high rankings help universities build and maintain a global position and reputation. On the other hand, rankings may considerably guide the choices of stakeholders such as governments, funding agencies, institutional and industry partners, employers, faculty, parents, and student applicants. For these reasons, universities face great challenges and opportunities in responding to and anticipating these rankings. Today, there are more than 30 notable rankings. "All rankings systems operate by comparing institutions on a range of indicators. The number of indicators in a ranking system can vary significantly […]" (Usher and Savino, 2006). The more reputable league tables, however, typically include multiple measures for each of the following dimensions: faculty, students, research, and internationalization.

Shanghai Jiao Tong University (SJTU) is one of the most famous universities in China and plans to become a comprehensive research-intensive university. Its overall strength in engineering currently places it among the top three universities in China, and the university ranks about 100th in the world according to the Shanghai Jiao Tong Academic Ranking of World Universities in Engineering/Technology and Computer Sciences (2010). It has 9 national key disciplines and ranks 5th in the nation; the rankings of 13 of its disciplines are on a par with the top 5 national universities. Furthermore, based on the ESI database survey in 2011, 12 disciplines (including clinical medicine, physics, and chemistry) have reached the top 1 percent of institutions worldwide. On some relevant indicators, however, the university has yet to reach a high position in national or international league tables. For example, its total number of key disciplines (49) lags far behind other universities in China such as (134), Tsinghua University (121), (94) and Zhejiang University (72).


The university also lacks highly cited papers and original research findings in prestigious academic journals. In order to improve, and in response to the changing national and global higher education environment, SJTU is currently taking steps to plan its future development. We use ranking indicators to guide our own goals and to target our ambitions correctly. As part of our normal strategic planning process, we are developing a new Strategic Plan for the period 2010-2020 to build on the successful achievement of our goals in previous years. The plan not only clearly sets out a three-stage plan for our university but also compares the university with others on key indicators related to faculty, students, research, and international communication and cooperation.

First, in keeping with tradition, we have maintained the recruitment and retention of capable faculty as a key priority, as it was in our prior strategic development plan. We aim to recruit the strongest academic candidates, including leading figures in various research fields. According to the Strategic Plan 2010-2020, the total number of faculty members (including full-time teachers and scientific researchers) at SJTU will increase from 3,305 in 2007 to 4,850 in 2020. The percentage of doctoral-degree holders from prestigious universities will increase from 10% in 2007 to 50% in 2020, and the percentage of doctoral-degree holders from the top 100 world universities will increase from 2% in 2007 to 40% in 2020. The number of members of the Chinese Academy of Sciences and the Chinese Academy of Engineering will increase from 26 in 2007 to 80 in 2020. We are trying to find the best-performing researchers and teachers within each field and to create a qualified faculty team.

Second, students are a vital part of any university. In order to improve the quality and outcomes of students, under the plan the number of graduate students enrolled will decrease from 13,734 in 2007 to 12,000 in 2020, and the number of undergraduates will decrease from 19,025 in 2007 to 16,000 in 2020. Meanwhile, we will pay more attention to entry qualifications such as university entrance exam scores. What is more, the percentage of doctoral graduates who have studied in world-class universities will increase from 10% in


2007 to 50% by 2020.

Third, research capacity and outcomes are crucial indicators for top universities, and the plan sets important indicators for evaluating research. The number of papers published in SSCI and A&HCI journals should increase from 23 in 2007 to 200 in 2020. Total research funding is targeted to increase from 870 million yuan in 2007 to 5,000 million yuan in 2020. Compared with the number of key state laboratories at Tsinghua (12 in 2007), we aim to raise ours from 6 in 2007 to 15 in 2020. Currently, SJTU has 22 primary-discipline doctoral programs, and 5 primary disciplines in the university are ranked in the top three. In the national competition for excellent doctoral theses, SJTU had won 22 awards by 2007, and we aim to reach 70 by 2020.

Lastly, another indicator of success is the degree of internationalization. The research internationalization aspects of our current Strategic Plan include exploring the feasibility of developing research collaborations with other key institutions and providing more opportunities for students to research and study at universities overseas. We have also set a very active policy to attract international students to our university. Among the top 100 world universities, an average of 20% of graduate students are from abroad; currently, 1% of graduate students at SJTU are overseas students, and by 2020 we aim to increase this percentage to 15%.

It is nevertheless important for universities to recognize that indicators cannot capture everything. As was noted in a recent paper, "It is the atmosphere, the spirits, the values and beliefs of a university that the indicator-based comparison cannot reflect. For they cannot be accounted for by numbers-based indicators, but their presence could make the university distinguished" (Shi, 2009, p. 322). At the same time, it is also important to remember that striving toward a better global reputation can improve the quality of research performance.
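Targets such as these imply steep average annual growth over the plan's 13-year horizon. Purely as an illustration, a minimal Python sketch computing the compound annual growth rate implied by a few of the figures quoted above (the plan itself prescribes only endpoints, not a growth path):

```python
# Implied compound annual growth rates for selected 2007 -> 2020 targets
# quoted above (a 13-year horizon). Illustrative only.
targets = {
    "faculty members": (3305, 4850),
    "SSCI/A&HCI papers": (23, 200),
    "research funding (million yuan)": (870, 5000),
}

for name, (v2007, v2020) in targets.items():
    cagr = (v2020 / v2007) ** (1 / 13) - 1
    print(f"{name}: {cagr:.1%} per year")
```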


References

Usher, A., and Savino, M. (2006). A World of Difference: A Global Survey of University League Tables. Toronto, ON: Educational Policy Institute.

Shanghai Jiao Tong University (2010). Academic Ranking of World Universities in Engineering/Technology and Computer Sciences. http://www.arwu.org/FieldENG2010.jsp

Shi, J. H. (2009). Combining Vision, Mission and Action: Tsinghua’s Experience in Building a World-Class University. In Sadlak, J. & Liu, N. C. (Eds.), The world-class university as part of a new higher education paradigm: From institutional qualities to systemic excellence. Bucharest: UNESCO-CEPES.

Global Rankings and the Assessment Strategies of South African Universities

Jan Botha
Senior Director, International Research and Planning
University of Stellenbosch, South Africa

The role of global rankings in the development of assessment strategies for research and postgraduate education at the University of Stellenbosch

As requested by the organizers of the Global Forum, this contribution focuses specifically on the institution where the author is employed. The University of Stellenbosch (est. 1866) is one of the premier research-led universities in South Africa (Cloete et al. 2010). It offers programmes from bachelor's to doctoral level in a broad range of fields, including the humanities, sciences, medicine and engineering. In 2010 it had a head count of almost 28,000 students, of whom 35% were postgraduates.

Only four South African universities have been included (at times) in the SJTU list and only one in the THES list. Stellenbosch has never appeared on either of these ranking lists, even though its research performance matches or exceeds that of all the South African universities included in the world rankings, with the exception of the University of Cape Town (cf. Boshoff 2009, Amin 2010). A university's position in many of the rankings is very strongly correlated with the degree of international visibility (as measured in citation scores) of its published output. So, for example, the position of the University of Cape Town as the top South African university in both the SJTU and THES rankings is directly related to the fact that it produces the highest share of ISI papers of any South African university. Beyond that, however, the lack of Nobel Prize and Fields Medal winners among Stellenbosch alumni and faculty (in the case of SJTU) and the well-known methodological problems that skew the THES reputation rankings (cf. Marginson & Van der Wende 2007:312-313) explain the absence of Stellenbosch from the rankings (including, for example, that Stellenbosch was for most of its history not an English-speaking university and that Stellenbosch is not a city name

as well known as Cape Town). Given the widely recognized impact of rankings on various aspects of higher education, including an institution's appeal to top international and postgraduate students and its ability to attract research funding, Stellenbosch studied the SJTU, THES and other rankings and their respective criteria and methodologies carefully, especially after they were first published in 2003 and 2004. However, when Stellenbosch developed its current set of strategic management indicators (SMIs) during 2004 and 2005, and when the University's research policy was reviewed during 2007, national and regional priorities were taken to be far more important than a position on the global ranking lists. For example, the UN's Millennium Development Goals, the research focus areas identified by the National Research Foundation, the need for capacity building amongst younger, black and female postgraduate students and researchers, and the requirements of the national quality assurance system implemented by the Higher Education Quality Committee were given greater attention than the rankings. The SJTU and THES criteria were not specifically taken into account or explicitly referred to in the strategic institutional processes. When the University's SMIs are mapped onto the SJTU and THES criteria, there are a few obvious areas of overlap, but that is because these are almost universal areas monitored by any university with research in its mission.


Stellenbosch SMIs mapped onto the SJTU and THES criteria:

Research
SMI 1a: Publication units output per FTE academic staff member
Ranking criterion: Articles published in Nature and Science (SJTU)

Citations
SMI 1c: Number of NRF-rated staff as percentage of permanently appointed academic staff (citations are taken into account when the NRF rates scholars)
Ranking criteria: Articles cited in Science Citation Index-Expanded and Social Science Citation Index (SJTU); number of citations per staff member (THES)

International postgraduate students
SMI 2a: Number of postgraduate head count enrolments with nationalities from African countries other than South Africa as percentage of total postgraduate head count
Ranking criterion: Percentage of overseas students (THES)

To develop and guide its assessment strategies for research and postgraduate education, Stellenbosch is committed to "making a contribution to the National System of Innovation and to be knowledgeable about the relevant state policies; to playing a role in South Africa and internationally by making such a contribution; to building capacity in areas where there is a high level of research-based expertise; to meeting international requirements and quality for research, with reference to the particular opportunities and limitations of the South African situation and; to ensuring that all research at the University meets the requirements of generally accepted good governance" (University of Stellenbosch 2008). These commitments illustrate the high priority given to national and regional priorities without losing sight of the international nature of


research. This becomes even clearer when the University's strategic research focus areas are considered: the promotion of health and the struggle against disease; the production and provision of food; language and culture in a multi-lingual and multi-cultural society; fundamental theory, mathematics and complexity; technology for industry; biotechnology; sustainable biodiversity and the environment; and a competitive economy. These commitments and strategic priorities do not run counter to (all) the criteria used by SJTU and THES. Improved performance against the locally developed assessment criteria used to measure and enhance Stellenbosch's research performance may result in better performance in the rankings. If so, that is welcomed; if not, the University will not be deterred from its priorities and commitments. The SMIs are used as a monitoring and assessment tool for measuring progress in strategic areas related to research and postgraduate education (the full list is available at www.sun.ac.za/inb):

• publication output
• percentage of permanently appointed academic staff with doctorates
• number of NRF-evaluated staff as percentage of permanently appointed academic staff
• postgraduate FTE students as percentage of all FTE students
• number of Master's and Doctoral degrees awarded per FTE academic staff member
• number of postgraduate head count enrolments with nationalities from African countries other than South Africa as percentage of all postgraduate head count enrolments
• partnerships in African countries other than South Africa
• an innovation mark calculated as a weighted (logarithmic scale) total of the different intellectual property (IP) and commercialisation outputs of each department (see the sketch after this list)
• cost of the time staff spend on community engagement as percentage of total staff remuneration
• third money stream income
• percentage of postgraduate head count enrolments from the black, coloured and Indian


population groups
• the percentage of first-time entering first-year students from the black, coloured and Indian population groups.
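The innovation mark above is described only as a weighted, logarithmic-scale total of IP and commercialisation outputs; the actual output types, weights, and scaling are not given in this paper. A minimal sketch, assuming hypothetical categories and weights and a log10 compression:

```python
import math

# Hypothetical output types and weights; the real Stellenbosch weights
# are not published in this paper.
WEIGHTS = {"patents_granted": 3.0, "patent_applications": 1.5,
           "licences": 2.0, "spin_out_companies": 4.0}

def innovation_mark(outputs):
    """Weighted total of a department's IP/commercialisation outputs,
    log-compressed so no single output type dominates the mark."""
    return sum(w * math.log10(1 + outputs.get(kind, 0))
               for kind, w in WEIGHTS.items())

# Example: one department's annual outputs (invented figures).
print(round(innovation_mark({"patents_granted": 2, "licences": 1}), 2))
```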

Challenges and opportunities encountered by the University of Stellenbosch in responding to and anticipating the global rankings

Stellenbosch is of course not flippant about the role of global rankings, but the following challenges cannot easily be overcome. The resources available to a university in a developing country such as South Africa, particularly in the African context, cannot be compared with those in affluent, developed countries, which makes it very difficult to compare favourably on many of the indicators. The quality of the students entering higher education is a serious challenge, and increasing institutional energy must be devoted to overcoming students' underpreparedness. Although the University has a firm strategy in place to encourage researchers to publish in ISI journals rather than in other journals, there is a lingering post-colonial suspicion in certain fields of study about Thomson, viewed by some as an "old boys' club" with its own agenda, one not necessarily commensurable with African priorities and interests. However, this suspicion is not generally held, and there has been a huge increase in ISI publications. Finally, it is hardly possible to design strategies for, and perform on, indicators that are totally beyond the control of an institution, such as the decisions made about Nobel Prizes and Fields Medals.

Works Consulted

Amin M. 2010. Metrics and rankings: University of Stellenbosch. Paper presented by Mr Amin, Senior Vice-President Research and Academic Relations of Elsevier, to the leadership of the University of Stellenbosch on 12 April 2010. Unpublished.

Boshoff, N. 2009. Shanghai Academic Ranking of World Universities (ARWU) and the "big five" South African research universities. South African Journal of Higher Education 23(4):635-655.


Cloete, N., Bunting, I., Sheppard, C. 2010. Institutional clusters in Higher Education in South Africa. Paper presented at the Higher Education Summit of the Minister of Higher Education and Training, 23 April 2010. Cape Town. (Unpublished).

Enserink, M. 2007. Who ranks the university rankers? Science 317:1026-1028.

Marginson, S. & Van der Wende, M. 2007. To rank or to be ranked: The impact of global rankings in Higher Education. Journal of Studies in International Education 11 (3/4):306-329.

National Research Foundation (NRF). 2009. NRF Vision 2015. Strategic Plan of the National Research Foundation. Pretoria.

University of Stellenbosch. 2008. Revised Research Policy of the University of Stellenbosch. Approved August 2008. (available at www.sun.ac.za/research).

Van Raan, A.F.J. 2005. Fatal attraction: Conceptual and methodological problems in the rankings of universities by bibliometric methods. Scientometrics 62 (1):133-143.

University Assessments of the Research Environment at Chungnam National University, South Korea

Sang-Gi Paik
Dean of the Graduate School
Chungnam National University

Based on the geographical advantage of being adjacent to national strategic organizations, including the Daedeok R&D Complex (Daedeok Innopolis, formerly known as Daedeok Science Town), the Daejeon Government Complex, the multi-functional administrative city, and the Headquarters of the Korean Military, Chungnam National University (CNU) has become one of Korea's leading national universities, offering a competitive education and research environment. The Daedeok R&D Complex, established in 1973, makes up the research and development district of Daejeon in the central region of South Korea. Over 20 major research institutes and more than 40 corporate research centers make up this science cluster. Over the past few years, a number of IT (information technology), BT (bioscience technology) and NT (nanotechnology) venture companies have sprung up in this region, bringing with them high concentrations of resourceful, motivated and highly educated workers in the applied and basic sciences.

With the advantage of such beneficial surroundings, Chungnam National University is steadily growing into a leading research-oriented university of science and technology. Through cooperation with the Daedeok R&D Complex, CNU aims to strengthen its research capabilities in the areas of IT, BT, NT and ET (environmental technology). In 2009, CNU's ranking based on the number of completed theses and research papers appearing in SCI journals was 12th in the nation and 355th in the world, and individual research performance is within the top 5% in the country. On the downside, however, the university's internal organization for administering and managing graduate education is still insufficient, and strict supervision of academic affairs is structurally weak.

To assess the internal quality of the research environment, CNU carries out a self-assessment at the university level. Since assessment and certification by KCUE (Korean Council for University Education), assessments by the University Information Announcement System and the press, and the assessment of finances managed by the government formula system are all being enforced, the university self-assesses and publicly announces the quality of the university in the broad areas of education and research, organization and management, and facilities and conveniences. The areas assessed in the research category at CNU are research performance and research conditions, including industry, academy, and research institute activity. The specific items assessed in each area are as follows:

Research performance is assessed in three ways: by professors' research papers, book publications and conference presentations. Published research papers are assessed by the number of papers published in domestic and international journals per full-time professor over the past three years and by the rate of increase in published papers over the past two years. Published books are assessed by how many books were written per full-time professor over the past five years and by the rate of increase over the past four years. Conference presentations are assessed by how many presentations were made per full-time professor over the past three years and by the rate of increase over the past two years.

Research conditions are assessed by professors' research grant achievements, the degree of contribution to overhead, and the extent of research activity in research institutes. Research grant achievements are assessed by how much grant funding was awarded per full-time professor over the past three years.

Industry, academy, and research institute activity is assessed


by items such as patent registration achievements within the past three years and technology transfer income within the past three years, including the number of industry, academy, and research institute commissions and research collaborations, research grant awards, and the interchange of personnel (students and professors) between industry, academy, and research institutes.

To assess the quality of each professor's activity, every professor must register his or her research accomplishments each year, documenting the published papers in a given format along with results in other research areas, including book publications, conference presentations, patent applications, and grant awards. The contribution of each author is weighted: for example, if six authors publish a paper, we acknowledge the difference in contribution between the corresponding or first author and the other collaborators, crediting 70% to the former and 16.7% to the latter. Each journal also carries a standard point value, and the following formula applies: [Standard point + (Impact factor - 1) x 10] x Approved conversion rate. For a paper in a top-10% journal with a standard point of 40 and an impact factor of 21, a corresponding author would therefore earn [40 + (21 - 1) x 10] x 0.7 = 240 x 0.7 = 168 points. The points calculated in this way for papers published within the past two years make up the research paper portion of total research achievements.

The results of these assessments are applied to provide incentives to the professors who contribute the most. A premium is given, by appropriate standards, for each research paper according to the level of the journal in which it is printed: internationally renowned journals in SCI, SSCI, and A&HCI, journals on the ISI Master Journal List, or journals registered on the National Research Foundation of Korea's journal list. When a group of professors plans a large-scale research project, or when one applies individually, a portion of the expenses is supported irrespective of acceptance.
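A minimal sketch of the scoring formula quoted above, reproducing the worked example (the author-role conversion rates are as given in the text):

```python
def paper_points(standard_point, impact_factor, conversion_rate):
    """CNU paper score: [standard point + (IF - 1) x 10] x conversion rate.

    conversion_rate reflects the author's role: 0.7 for the corresponding
    or first author, 0.167 for another collaborator (per the text).
    """
    return (standard_point + (impact_factor - 1) * 10) * conversion_rate

# Worked example from the text: standard point 40, impact factor 21,
# corresponding author (70% credit).
print(paper_points(40, 21, 0.7))    # 168.0
print(paper_points(40, 21, 0.167))  # a collaborator's credit for the same paper
```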


In order to improve the research environment, if a research paper is written in English and submitted to an international journal, CNU supports a portion of the expenses needed for English revision and editing; the same applies to patent applications. Professors who receive substantial grant funding, and who therefore contribute significantly to overhead expenses, are designated research-contributing professors under the university's administrative regulations and are allowed reduced duty hours for one year in order to immerse themselves in research. Where a professor with excellent research achievements wishes to go abroad on sabbatical leave, a portion of those expenses is supported. And for a professor seeking promotion or tenure, the record of research performance is of great importance.

A definitive assessment system is not yet fully established in South Korea; university assessment of the research environment is still in its preliminary phases, so only minimal amounts of information and data have accumulated to date. Nonetheless, Chungnam National University is working enthusiastically to build an improved and successful assessment system.

University Assessments of the Research Environment in Australia

Marie Carroll
Director of Academic Affairs
University of Sydney

Richard A. Russell
Pro Vice-Chancellor (Research) and Dean of Graduate Studies
University of Adelaide

Key aims of graduate schools in Australian universities are to enhance the experience of research higher degree students by providing the best possible environment in which to carry out their studies, and to attract research higher degree students of the highest ability from within Australia and overseas. Universities in Australia generally have their own mechanisms for assessing the research environment, and these also dovetail with national assessments of the research environment, though the latter are more limited in scope. In our institutions, as in most universities, there is an increasing shift towards assessing everything the student has had an opportunity to be exposed to in the course of the research degree. Many of the assessments listed in this paper could be used for quality benchmarking. However, best-practice data are collected using disparate tools and, at this stage, do not lend themselves to a system-wide quality measurement model. The Group of Eight (Go8) research-intensive universities suggest that a nationally manageable proxy measure of the quality of the research training experience and research environment could be achieved by using a combination of the national Excellence in Research for Australia (ERA) score, completion times and rates, coupled with program audits.

The Sydney University Research Experience Questionnaire

Although a national Postgraduate Research Experience Questionnaire (PREQ) survey retrospectively collects data on research higher degree students' perceptions of their research training experiences after graduation, it is usually given less attention within the Go8 universities than internal assessments. At the University of Sydney, the Student Research Experience Questionnaire (SREQ) is administered annually to students while they are still candidates. Its purpose is to provide the university community with information for strategic, faculty-level academic development and curriculum review to further enhance the quality of research higher degrees, and to assist Faculties in developing their annual research plans and teaching grant applications.

Table 1 exemplifies the type of report that can be run easily for any year. Data for any Faculty or program with enough respondents can be interrogated.

Table 1: SREQ Report summary 2009.

The University of Sydney, 2009 Postgraduate Research Students (N = 2,444; response rate = 61%)

                                  Broad Agreement   Agreement   Disagreement
Supervision Scale                       91%            75%           9%
Infrastructure Scale                    87%            65%          13%
Climate Scale                           87%            60%          13%
Generic Skills Scale                    95%            78%           5%
Overall Satisfaction Item (Q44)         93%            80%           7%
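The report aggregates item-level responses into the scale percentages shown above; the exact aggregation rule is not given in this paper. Purely as an illustration, a sketch assuming 5-point Likert items, with "agreement" counted as responses of 4-5 and "broad agreement" as 3-5 (both cut-offs are assumptions):

```python
def scale_summary(responses):
    """Summarise 5-point Likert responses (1 = strongly disagree ... 5 = strongly agree).

    Cut-offs are assumed for illustration: broad agreement = 3-5,
    agreement = 4-5, disagreement = 1-2.
    """
    n = len(responses)
    pct = lambda test: round(100 * sum(test(r) for r in responses) / n)
    return {"broad_agreement": pct(lambda r: r >= 3),
            "agreement": pct(lambda r: r >= 4),
            "disagreement": pct(lambda r: r <= 2)}

# Invented sample responses for one scale.
print(scale_summary([5, 4, 4, 3, 2, 5, 4, 1, 5, 4]))
```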

Included in the SREQ is the research climate scale. This is a particularly valuable tool for assessing students' perceptions of such things as opportunities to socialise with other postgraduates; integration into the school/department community; opportunities to become involved in a broader research culture; perception of other research students as supportive; feelings of isolation within the school/department; encouragement of interaction with other research students; provision of a good seminar program; stimulation of personal work by the prevailing research ambience; provision of a supportive work environment; and feeling respected as a fellow researcher.

How are SREQ data used?

At the University of Sydney, some Faculties are trying to enhance the research climate. For example, the 2002 SREQ dataset demonstrated that Veterinary Science students had the lowest perceptions of the quality of supervision, research infrastructure, research climate and overall satisfaction of any students within the University. Consequently, the faculty implemented initiatives to improve the research postgraduate experience, resulting in a statistically significant improvement in the 2007 SREQ. The SREQ data are made available to the Deputy Vice-Chancellor (Education), Faculty Deans and Associate Deans. Qualitative reports (SCEQ & SREQ) are also publicly available on the University's website. These are analyses of the comments made by candidates about any aspects of their candidature; they are de-identified, grouped into categories using qualitative analysis software, and presented as Areas for Improvement and Areas of Good Practice.

Other types of internal university assessments that have been adopted by Go8 Universities for quality improvement purposes.

1. Exit surveys

Some universities, including the University of Adelaide, use exit surveys of completing students to assess the research environment. These then feed back into strategic planning at Faculty and program levels. Their value is affected by the lag effect, but where there is a substantial sample size the results often closely parallel the PREQ outcomes for common questions. Separate review

instruments that address students who withdraw from study normally support exit surveys; these focus both on causes that are a consequence of university activities, including the research environment, and on personal reasons arising from life commitments. Current "snapshots" of student attitudes and progress are also gathered at Adelaide through a random selection of annual student interviews conducted by the Dean of Graduate Studies. Experience shows these are often more revealing than any formal survey.

2. Supervisory Issues

The single most important determinant of overall satisfaction for the higher degree research student is the quality of the supervisory environment. To this end, many universities have instituted compulsory supervisory training for new supervisors, and retraining for experienced supervisors. Such training takes into account the fact that supervision is about much more than managing completion: it is about mentoring, introduction to the disciplinary network, providing an entrée to domestic and international conferences, contact with the professions and industry, assistance with publications, and preparation for teaching (for those interested in an academic career). These skills are not necessarily innate; many must be explicitly acquired through training.

Assessment of good supervision is critical to the research student experience. Australian academics generally do not receive much recognition in institutional promotion processes for their achievements as supervisors, nor is the workload always acknowledged in work plans. This is changing, however, with some universities having now moved to recognise excellence in supervision by having students nominate outstanding supervisors and rewarding them. Thus the Australian National University and the University of Adelaide now recognise outstanding supervision through a special annual award. Most universities, however, assess supervisors according to the number of completions and the number of students supervised, as a kind of proxy for supervisory quality. It is common to maintain

registers of supervisors and associate supervisors who are available, and to note the number of student completions each has achieved. The University of Sydney recognises that it needs a better understanding of its supervision capacity in different discipline areas and is giving particular attention to updating its supervision register. Such measures, however, are not necessarily indicative of a good research environment: an overloaded supervisor may have little time for mentoring and fostering the skills and competencies required. The supervisor/candidate working relationship is so significant that some universities have developed tools to facilitate early discussions between candidates and supervisors, so that each has an open and clear understanding of the other's needs and expectations.

3. Assessing a change in the research environment: producing academic and professional skills and competencies

A recent study of Go8 PhD student outcomes five to seven years after graduation showed that Go8 PhD graduates sometimes lack breadth of knowledge, fall short on generic skills and can be narrow in their expertise. The study found that Go8 PhD graduates generally did not think that, relative to what was required in workplaces, their PhD training provided them with sufficient skills in areas such as team-based and applied research, project management, interdisciplinary research, grant writing, assertiveness, leadership and financial management. It is in the provision of these skills in a graduated, needs-based and structured way that many Australian universities have been deficient. Now, both external and internal pressures are leading to change in this area, with many universities offering academic and professional skills training in structured ways, such as Graduate Certificates. Some universities also provide postgraduate training in university teaching, meeting the training needs of the many students who rely on casual tutoring, and indeed of those on whom universities rely as a casual workforce.

4. Completion times


In general, universities know rather a lot about their completion rates. For example, in both our universities there are regular analyses of the average time taken to complete, and Go8 universities benchmark their completion times and rates.

5. Assessment of Learning Outcomes

The measurement of generic or transferable skills is perhaps the most difficult internal assessment of research environments. Universities expect HDR graduates to have acquired communication, analytical thinking, problem-solving and other higher-order cognitive skills, and the tendency is to assume that these are assessed via the dissertation. Yet the dissertation is only one of the ways, albeit an important one, in which transferable skills can be assessed. Assessments of communication, leadership and research-enriched learning can be garnered in other ways too: for example, through the judgments of the university community, peer experts, and potential employers that result from activities such as industry presentations, networking, publications during candidature, cross-disciplinary activity, departmental seminars, and conference presentations. The university itself knows little about the transferable skills its HDR candidates demonstrate through these means; yet such judgments are a valuable source of information about its success in producing rounded graduates.

6. Training Needs Analysis

Recognising this weakness in generic skills training, Go8 universities are increasingly undertaking individual training needs analyses at the commencement of candidature. These cover key issues such as academic English language requirements (usually mandated for international students), project planning, presentation and writing skills, statistical requirements, and, in the case of the humanities, qualitative and quantitative research methodologies. Such analyses can identify gaps in a candidate's knowledge and provide individualised recommendations to correct these shortcomings. Progress against

these goals, as well as research achievements, can then be monitored as part of the regular formal reviews. The Go8 Future Research Leaders program could also be adapted to PhD students, providing a formal introduction to the activities and skills required of an independent "early career" researcher.

7. Other information by which universities assess the research environment.

There are a range of other measures of research environments which are utilised within the Go8 to greater or lesser extents. For example:

• Participation in programs on inducting and supporting international students
• Publications from the thesis within a specific time period
• Opportunities for and participation in cross-discipline interaction
• Opportunities for and participation in travel to conferences
• Opportunities for interaction with other universities
• IT support
• Statistical support
• Infrastructure - the quality of the information institutions maintain about PhD student access to infrastructure and equipment is highly variable, although within the Go8 there are usually formal recommendations made to faculties in regard to student-directed funding.

8. The Examination Process

All the Go8 universities rely entirely on external examiners to assess the outcomes of PhDs and do not hold a formal viva voce examination, although most now use a formal pre-examination seminar to ensure the thesis is fit for examination. In addition to recommending the outcome of the examination, many institutions now collect additional information from examiners to monitor the quality of the examination

process. Whilst there is no national requirement for this process and the instruments vary from institution to institution, some elements are widely employed. For example:

• The number of examiners from top international universities • The number of theses examined by the examiner • The percentage of theses ranked by examiners in top percentiles, and • Advice as to whether or not the thesis is worthy of additional recognition such as a prize or medal

Conclusion

Whilst individual Australian universities have a good handle on their own research environments for RHD students, the university system has yet to agree upon a national evaluation protocol. However, benchmarking amongst universities with similar missions is better advanced, and they are developing useful and broad-reaching sets of well-understood metrics.

Admission, Retention and Completion in Doctoral Degree Programs at the University of Otago

Charles Tustin
Director, Graduate Research Services
University of Otago

Background

In 2003, the New Zealand Government implemented the Performance-Based Research Fund (PBRF), the primary purpose of which is "to ensure that excellent research in the tertiary education sector is encouraged and rewarded."1 This is accomplished by conducting a regular assessment of each university's research performance and then funding each on the basis of its performance. The aims of the PBRF include ensuring that research continues to support degree and graduate teaching, and that funding is available for graduate students. The PBRF measures the research performance of each of the country's eight universities on three elements, namely:

• Quality evaluation of academic staff (to reward and encourage the quality of researchers)—60 percent of the fund • External research income—15 percent of the fund • Graduate research degree (including doctoral) completions—25 percent of the fund
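As an illustration of how such a weighted, performance-based pool might be divided, here is a minimal sketch. The component weights are from the list above; the university figures and the simple proportional-share rule are assumptions (the actual PBRF formula includes subject cost weightings and other adjustments):

```python
# Component weights from the list above; figures below are hypothetical.
WEIGHTS = {"quality_evaluation": 0.60,
           "external_research_income": 0.15,
           "research_degree_completions": 0.25}

def pbrf_share(university, sector):
    """A university's share of the fund: its share of each component's
    sector total, weighted by that component's share of the fund."""
    return sum(weight * university[c] / sum(u[c] for u in sector)
               for c, weight in WEIGHTS.items())

a = {"quality_evaluation": 120.0, "external_research_income": 30.0,
     "research_degree_completions": 50.0}
b = {"quality_evaluation": 80.0, "external_research_income": 70.0,
     "research_degree_completions": 50.0}
print(f"University A's share: {pbrf_share(a, [a, b]):.1%}")  # 53.0%
```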

The majority of the funding that universities receive from the Government for graduate research students is dependent on these students successfully completing their degree programs. A successful completion is defined as having met the requirements for the award of the degree after the examination process has concluded. The funding is only paid out to universities following a successful completion. This funding regime means that it is important for universities to minimise the risk of research higher degree students (a) withdrawing;


(b) not successfully completing; or (c) taking a long time to complete their programs. In New Zealand, PhD programs in particular are thesis-only, and it is expected that diligent and competent students will submit their theses for examination within three years of full-time study (or the equivalent in part-time study). It is therefore crucial for universities in New Zealand to ensure that (a) only those students who are likely to complete their studies successfully are admitted to graduate research programs; (b) once enrolled, students are supported and supervised appropriately; and (c) their progress is monitored and effective remedial action taken where necessary.

The University of Otago

The University of Otago is New Zealand's most research-intensive university.2 It is also the top-ranked university for research quality.3 The University has an international reputation for research excellence and leads the country's latest (PBRF) ranking. Graduate research candidates comprise 8% of all students at Otago, including approximately 1,260 PhD candidates, 70 professional doctorate candidates and 650 thesis Master's candidates.

At Otago, the administration of all doctoral candidates is managed centrally, including the admission, progress reporting and examination processes. The University's Graduate Research Committee, chaired by the Deputy Vice-Chancellor (Research and Enterprise), plays a key role in the academic oversight and ongoing monitoring of the doctoral programs, including general policy and regulations matters. From a PBRF perspective, the Committee is particularly interested in ensuring that the quality of Otago's doctoral programs and candidates remains high. Admission processes are rigorous: departments and supervisors (advisers) are expected to check the credentials of prospective doctoral candidates thoroughly before recommending that they be admitted. The University expects that prospective candidates will provide an outline of their proposed research and be interviewed before a decision is taken to admit them. Statements from academic referees should be

solicited. An assessment is also made of the likelihood of a good fit between the supervisor and candidate. In making this assessment, a supervisor should be clear about what he or she expects of a doctoral candidate. To help in this regard, the University has published two brochures describing (1) supervisors' perspectives on quality candidates4 and (2) candidates' perspectives on quality supervisors.5

Rates and Times

As mentioned above, the PBRF regime in New Zealand means that high completion rates are important for universities in this country, as are low withdrawal rates and reasonable submission times. At the University of Otago, these criteria are measured annually by cohort (admission year). For each cohort, the following are measured: a completion rate (the percentage of candidates who have successfully completed their studies); a continuation rate (the percentage of candidates still studying); and a withdrawal rate (the percentage of candidates who have terminated their studies). To determine any trends or patterns in the data, the various rates are analysed by Faculty, international/domestic status, sex, age, and scholarship. Reasons for withdrawal are also analysed, as is "time to withdrawal."

Otago's completion rate is typically between 85 and 90 percent, a figure with which we are fairly satisfied. Withdrawal rates are around 10 to 14 percent. Most candidates who withdraw do so for personal reasons (for example, ill health, family circumstances or financial considerations) that are beyond the University's control, and they tend to withdraw in the early stages of candidature. Another important indicator for the University is the time taken from admission to the PhD program to submission of a candidate's thesis for examination. At Otago, these submission times are analysed annually by full-time/part-time status and international/domestic status. Part-time students tend to complete in a relatively shorter period of equivalent time than their full-time counterparts, as do international students.
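A minimal sketch of this cohort bookkeeping, assuming each candidate record carries an admission year and a current status (field names and sample data here are hypothetical):

```python
from collections import Counter

def cohort_rates(candidates):
    """Completion/continuation/withdrawal rates (%) per admission-year cohort.

    Each candidate is a dict with an 'admission_year' and a 'status' of
    'completed', 'continuing', or 'withdrawn' (assumed field names).
    """
    cohorts = {}
    for c in candidates:
        cohorts.setdefault(c["admission_year"], []).append(c["status"])
    rates = {}
    for year, statuses in sorted(cohorts.items()):
        counts, n = Counter(statuses), len(statuses)
        rates[year] = {s: round(100 * counts[s] / n, 1)
                       for s in ("completed", "continuing", "withdrawn")}
    return rates

# Invented sample data for illustration.
print(cohort_rates([
    {"admission_year": 2005, "status": "completed"},
    {"admission_year": 2005, "status": "completed"},
    {"admission_year": 2005, "status": "withdrawn"},
    {"admission_year": 2006, "status": "continuing"},
]))
```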


Resources, Support, Supervision and Learning

Does the University provide sufficient and appropriate resources, support and supervisory assistance to its doctoral candidates to assure a quality learning experience? Each year, the University's Quality Advancement Unit formally surveys a substantial sample of current students, as well as graduates, about their views on these matters. A rich bank of valuable and comparable data has been developed over the 15 years since the inception of these surveys, which means that the University has a good sense at any time of the quality of the doctoral experience and can take appropriate steps to improve matters where required. The Student Opinion Survey allows students to comment on the programme they are completing, the department they study in and the University services they use, while the Graduate Opinion Survey (GOS) asks graduates to comment on their experiences at Otago, particularly in terms of their learning.

The core instrument of the GOS is the Course Experience Questionnaire (CEQ), developed by Professor Paul Ramsden. In addition to the CEQ, the GOS features a section in which research postgraduates can comment on, and rate, their supervision experiences and the level of support they received. A further section on course outcomes compares the development of 15 key attributes during University study with the application of these attributes following graduation. Throughout the GOS, graduates also have a number of opportunities to make open-ended comments on their experiences. The surveys provide information for a number of key University processes and documents, including the Departmental Review process, the annual Statement of Objectives and the Annual Report. Feedback is sent to departments and is also presented to the relevant areas in a series of meetings held by the Deputy Vice-Chancellor (Academic and International). Responses to the surveys have prompted a number of improvement initiatives.

Exit surveys are also undertaken upon the successful completion of a candidate's PhD. This is a recent initiative (from mid-2009) and involves an anonymous questionnaire of eight items, including an overall rating of the candidate's PhD experience. The questionnaire includes items such as:


• Do you have suggestions for improvements? (If yes, please briefly outline your suggestions) • What aspects contributed most to the successful completion of your PhD? • What aspects, if any, presented the greatest obstacle/s to the successful completion of your PhD?

The exit survey results are reported monthly to the Graduate Research Committee to ensure the members are kept up to date with candidates’ perceptions of their Otago experience and so that appropriate improvements to the program can be made where necessary.

Examination Results

The ultimate test of any program of study is the final examination, and the PhD is no different in this regard. We analyse each year's examination results and compare them with the results of previous years. Otago has set a target of 90 percent successful examinations, and this has been achieved in all but one of the last ten years. To ensure that the standard of our PhD program is very high and internationally competitive, we insist on a panel of three independent examiners to assess a candidate's thesis: an internal examiner (who may not be a supervisor); a New Zealand external examiner; and an international examiner. Each examiner assesses the written thesis independently in the first instance, before the chair of the examining panel facilitates a consensus result, often with the help of an oral examination (which is not compulsory at Otago).

Conclusion

At Otago, the term "quality" has generally come to mean that the University can reach the goals it has set itself as an institution; that it not only says it provides high levels of expertise and commitment to its stakeholders, but can also prove it; and that, where necessary, processes will be put in place to address weaknesses via improvement initiatives. This is particularly important at the

doctoral level, and the University is careful to ensure that admission, retention and completion rates are monitored on a regular basis and that action is taken where appropriate to improve the quality of the doctoral program.

1 http://www.tec.govt.nz/Funding/Fund-finder/Performance-Based-Research-Fund-PBRF-/

2 Ministry of Research, Science and Technology. Research and Development in New Zealand. (Wellington, 2006)

3 Tertiary Education Commission. PBRF Quality Evaluation 2006. (Wellington, 2007)

4 http://www.otago.ac.nz/study/phd/otago001461.pdf

5 http://www.otago.ac.nz/study/phd/otago001464.pdf

V. USING QUALITY MEASURES TO SUPPORT PROGRAM CONTENT AND DESIGN

Summary of Presentations and Discussion

The specialized nature of graduate education demands that graduate schools pay particularly close attention to program-level assessment. Measuring the quality of programs is both important and complex: the structures of research and graduate training within disciplines often call for different metrics and methods for assessing program success, and the same metrics may be weighted differently. Panel 4, "Using Quality Measures to Support Program Content and Design," focused on two approaches to measuring quality in and across programs: an evaluation of process, or the structures and practices that are used to support the effectiveness of programs in a wide variety of areas, and an evaluation of outcomes, which can refer to any number of measurable results for students, faculty, or programs overall. Panel 4 presentations were divided among the following topics and questions:

• Outcomes of Learning and Research Training: How does your university support the development and implementation of learning outcomes assessments for research-based master’s and doctoral programs? How can these data help shape curricular content and other program requirements? • Mentoring and Supervision: What types of assessment does your university use to understand the quality of mentoring and supervision within and across programs? How can these


assessments be tailored to program level (master’s vs. PhD) or to specific fields and disciplines? What can be done to ensure that these assessments bring about more effective structures for mentoring and supervision? • Interdisciplinary Programs: How can assessment be used to measure the value of interdisciplinary or multidisciplinary programs to your university? How has your university measured the value of interdisciplinary study to individual students and/or to departments involved in interdisciplinary programs?

This section features papers from delegates representing Australia, Canada, China, New Zealand, and the U.S.

Outcomes of Learning and Research Training

Undergraduate education has seen many coordinated efforts to assess student learning outcomes, and there have also been a number of efforts (on the part of the Organisation for Economic Co-operation and Development [OECD], for example) to assess undergraduate learning across national contexts.1 The evaluation of graduate learning outcomes is a much newer phenomenon, however, and current methods and practices require further comparison and testing. Papers and discussion pointed to a variety of methods currently in use or under discussion. The paper by Zhen Liang (Harbin Institute of Technology) describes a number of methods of evaluating programs and points to one that is distinctive to the Chinese graduate education context: routine, external evaluations of randomly selected dissertations. Many participants found this approach promising for other higher education systems, particularly because it uses a direct measure of learning. The next paper, by Julia Kent (Council of Graduate Schools), provides an overview of the areas in which outcomes assessment for doctoral education has recently undergone reconsideration or reform in the United States. These include the tracking of completion rates to develop data-based reforms in CGS's PhD Completion Project, as well as areas where outcomes measures are still under discussion and development, such as the

assessment of international research experience and of professional development programs for future faculty. A paper by Karen DePauw (Virginia Polytechnic Institute and State University) on measuring quality in interdisciplinary programs also includes specific learning outcomes for graduate students across a wide variety of fields and disciplines (see "Interdisciplinary Programs" below).

A number of challenges surrounding the assessment of graduate student learning were identified in the discussion. One was the engagement of faculty in the assessment of student learning: some assessment requirements, especially those implemented as part of government accountability efforts, may be viewed by faculty as incompatible with established scholarly and teaching practices. Jan Botha (Stellenbosch University) identified a second, related challenge: engaging faculty in measuring learning outcomes that are not established learning areas within certain disciplines. Dr. Botha gave the example of learning outcomes such as civic consciousness or awareness of diversity (areas of learning assessment outlined in Dr. DePauw's paper), which faculty in some science disciplines may view as unrelated to scholarly practice. New requirements for broad graduate learning objectives will likely continue to meet some resistance within disciplines, yet most agreed that faculty in the disciplines have an important role to play in developing and measuring these outcomes.

Mentoring and Supervision

As many participants pointed out, there is clear evidence that the relationship between graduate students and their research supervisors strongly affects the quality of students' overall learning experience, as well as the likelihood that they will successfully complete their programs. Metrics of quality in research supervision may therefore be closely related to other outcomes data such as completion rates, reports on student satisfaction, or, as Dr. Liang noted in the first panel, evaluations of the dissertation itself. It was therefore appropriate that both papers in this sub-panel presented approaches to the assessment of supervision in the context of broader efforts to assess program quality. Both Barbara Evans (University of British

112 Global Perspectives on Measuring Quality USING QUALITY MEASURES TO SUPPORT PROGRAM CONTENT AND DESIGN

Columbia) and Gregor Coster (University of Auckland) outline a number of methods and metrics for assessing supervision quality that take into account both process and outcomes and explain how their institutions seek to use outcomes measures to improve the structures and processes of supervision. Dr. Evans presents a number of innovative qualitative methods for assessing supervision quality, such as exit surveys specially designed for degree completers and non-completers, analysis of students’ written comments on satisfaction surveys, focus groups, and interviews with graduate student societies, all of which can be viewed alongside more traditional quantitative metrics for the purpose of program improvement. Dr. Coster emphasizes the importance of annual progress reports which require both the doctoral student and his or her supervisor to agree upon a statement of research progress and plan for future work. He also noted that his institution is discussing the possibility of implementing a formal process for accrediting supervisors to oversee graduate student research. In the discussion, a number of European participants observed that the accreditation of research supervisors had been widely discussed and debated among European universities. While there has been no general consensus in Europe on the value of a formal process for validating supervision skills, there was a growing interest in Europe toward this form of accreditation. This topic is likely to be one of ongoing debate.

Interdisciplinary Programs

As research has become more interdisciplinary in nature, the international graduate community has increasingly recognized the importance of encouraging graduate students and faculty to engage in interdisciplinary research. This trend has been reflected in the consensus statements that emerged from the 2007 and 2010 summits, both of which underscored the importance of interdisciplinary training in graduate education and research.2 Both of the papers for the sub-panel on interdisciplinary programs began with the premise that the assessment of interdisciplinary or multi-disciplinary training is now relevant to most programs, although differences in interdisciplinary practice across disciplines must be carefully considered.


The paper by Mandy Thomas (Australian National University) and Zlatko Skrbis (The University of Queensland) discusses many of the problems and questions that arise in the assessment of interdisciplinary research as well as of interdisciplinary research training in PhD programs. They posit that in evaluating both aspects of programs, it is important to remember that disciplinary and interdisciplinary practices should not be assumed to be opposed activities, and that disciplinary knowledge and expertise can be an important foundation for interdisciplinary and multidisciplinary research. In assessing the quality of interdisciplinary graduate training, then, graduate schools may assess not only intrinsically interdisciplinary resources but also the strength of the different disciplinary resources that a student can marshal in support of an interdisciplinary research project. In a paper that describes learning outcomes for research training and education, Karen DePauw identifies a number of specific measures that programs and departments may use in assessing interdisciplinary training for graduate students. These may be supplemented, she observes, by new outcomes metrics developed by the U.S. National Science Foundation (NSF) to assess the success of interdisciplinary research and training projects. In discussion, Allison Sekuler (McMaster University) pointed out that regular quality assurance reviews may help graduate schools identify gaps in interdisciplinary training structures. They can also help determine whether faculty and students grounded in specific disciplines are given sufficient opportunities and rewards for pursuing interdisciplinary research. A number of other participants highlighted the value of assessing centrally delivered skills programs for their impact on students' preparation to conduct interdisciplinary research. For example, Mary Ritter noted the potential of wide-scale training opportunities focusing on transferable skills to provide a foundation for interdisciplinary research training. When students come together as part of a program focusing on generic research skills, she observed, they collaborate with students from other disciplines and gain important preparation to collaborate across disciplinary boundaries.


Conclusion

New initiatives to assess the quality of research and research training within and across graduate programs will require careful thought at many levels. On the one hand, general outcomes metrics and processes must be sensitive to disciplinary differences while also taking into account new forms of interdisciplinary knowledge and research practice. On the other, programs in specific disciplines will in some cases need to question established practices in research training and make room for assessment practices that have the potential to strengthen the overall quality of graduate student learning and reflect new trends in research and research training.

1 See OECD’s Assessment of Higher Education Learning Outcomes (ACELO) at www.oecd.org.

2 See Global Perspectives on Graduate Education: Proceedings of the Strategic Leaders Global Summit on Graduate Education (CGS, 2008), Principle 4, p. 144 and Principle 4 in Appendix A of this volume.

Assessment of Learning and Research Training Outcomes at Harbin Institute of Technology

Zhen Liang
Executive Dean of the Graduate School
Harbin Institute of Technology

The assessment of graduate education quality at the Harbin Institute of Technology differs between Master's and doctoral programs. For Master's programs, the evaluation covers not only the teaching process but also the quality of theses. For doctoral programs, the evaluation covers only the quality of doctoral dissertations.

1. Assessment for Master’s Programs

With the rapid expansion of the number of graduate students, graduate education is confronted with new problems and challenges, such as a rising student/teacher ratio, growing conflicts between teaching and research work, and the need to improve curricula. Therefore, in 2005, our university began to conduct assessments of graduate education in order to evaluate its quality scientifically and objectively, identify areas of success and existing problems, and further improve the quality of graduate education. The main evaluation process is as follows: the schools conduct self-evaluations according to the evaluation index system, write self-evaluation reports, and submit relevant evaluation materials. Meanwhile, the Graduate School asks students to complete online evaluations of teaching and of guidance during research training. Experts visit every school to conduct on-site evaluations of teaching management and training conditions, and to inspect relevant documents and materials.

(a) Evaluation of the Teaching Process

For the evaluation of the teaching process, the evaluation indices cover the curriculum, teaching content, teaching attitude, etc., and the evaluations provided by students and experts are given significant weight. Through the evaluation, problems in the curriculum and teaching content of different schools have been identified. For example, the curriculum content of some Master's programs cannot meet the training needs of different types of students, and for some courses the teaching content is outdated and does not reflect recent progress in the discipline.

(b) Evaluation of the Research Process and Thesis Quality

For the evaluation of the research process for the Master's degree thesis, we mainly consider the selection of the research topic, the conditions for conducting research, the quality of the thesis, etc. Through on-site evaluation, we can determine whether the research and working conditions for graduate students meet the requirements. Through discussion with students, we can learn whether students are satisfied with their working conditions and with the guidance their tutors provide during their research. Through peer review, we can compare the quality of graduate students' theses across different schools.

(c) The Effect of Evaluation Outcomes

Successful experiences in different schools were reviewed, and common good practices can be applied across the whole university. The graduate curriculum at our university is usually revised every four years, while small adjustments can be made every year. According to the outcomes of the evaluation, therefore, the curriculum content can be adjusted and improved more efficiently. Teachers found to have significant problems in teaching are given advice in order to improve their teaching in the future; teachers judged genuinely unsuited to teaching are not allowed to teach for one or two years. Schools that do not provide satisfactory conditions for graduate student study and work are given a clear proposal for improvement, in the hope that they will do their best to improve those conditions; if they are committed to improvement and have formed a well-planned project, they will be strongly supported by our university. Tutors who regularly spend little time guiding students during their research work, or whose students' theses were judged during peer review to have serious problems, are given a warning. If a thesis is found to have severe problems, the tutor loses the right to supervise graduate students for one or two years.

2. Assessment of Dissertation Quality for Doctoral Students

When a doctoral student applies for the PhD degree, his or her dissertation is sent to two renowned experts outside our university for peer review. If both experts judge that the dissertation has reached the level of a PhD, the student is allowed to proceed to an oral defense. If both experts judge that it has not, the student must conduct further research and may reapply for the PhD after one or two years. In order to compare the level of dissertations across different schools, regular spot-checks are conducted at our university: ten to twenty dissertations are selected at random from each school and sent to several renowned experts outside our university for peer review. The outcomes of the peer review are used to compare the educational conditions for doctoral students in the different schools of our university.
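The sampling-and-comparison logic of such a spot-check can be sketched in a few lines of code. The following is an illustration only, not a description of HIT's actual system: the function names, the stubbed external reviewer, and the 1-5 scoring scale are all assumptions.

    import random
    from statistics import mean

    # A minimal sketch, under assumed names, of the spot-check described
    # above: sample 10-20 dissertations per school, obtain several external
    # review scores for each, and compare school-level mean scores.

    def request_review(thesis_id, rng):
        # Stand-in for sending one thesis to one external expert; returns a
        # hypothetical score on an assumed 1-5 scale.
        return rng.uniform(1.0, 5.0)

    def spot_check(theses_by_school, reviews_per_thesis=3, seed=0):
        rng = random.Random(seed)
        results = {}
        for school, theses in theses_by_school.items():
            if not theses:
                continue
            k = min(len(theses), rng.randint(10, 20))
            sample = rng.sample(theses, k)  # random selection, as described
            scores = [request_review(t, rng)
                      for t in sample
                      for _ in range(reviews_per_thesis)]
            results[school] = round(mean(scores), 2)
        return results

Because the sample is drawn anew each cycle and reviewed externally, school-level means can be compared without evaluating every dissertation.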

Assessing Outcomes for Graduate Student Learning and Research Training in the U.S.

Julia D. Kent
Program Manager, Best Practices
Council of Graduate Schools

The U.S. Context for Learning Outcomes Assessment

The need for universities to define and measure learning outcomes for their students first became an explicit federal agenda during the 2005-2006 Commission on the Future of Higher Education. Led by then-Secretary of Education Margaret Spellings, the Commission outlined a national reform agenda designed to prepare U.S. students for the evolving demands of a global, knowledge-based economy. It also provided a set of recommendations for improving quality and accountability in general, and postsecondary learning outcomes in particular: colleges and universities should "measure and report meaningful student learning outcomes," and faculty should lead the effort of "defining education objectives for students and developing meaningful, evidence-based measures of their progress toward those goals" (24).1 While the Spellings Report focused on outcomes assessment for undergraduate programs, it also reflected, and gave momentum to, a growing demand for U.S. universities to define and assess learning objectives at all levels, including those for master's and PhD programs. Now as in 2006, the U.S. economy places increasing pressure on U.S. institutions to demonstrate greater accountability to the public, to state legislatures, and to graduate students about the skills acquired in a master's or PhD program. Outside the U.S., reforms in doctoral education, including work in Europe to define learning outcomes by degree level and by discipline, have intensified efforts to measure graduate learning on the part of U.S. graduate institutions, public and private funders of graduate education and research, and higher education associations.

One of the greatest challenges facing this movement is overcoming the view that learning assessment is merely an administrative burden to be shouldered by departments and faculty members. This is particularly the case at the PhD level, where the discipline, not the university or the institutions to which it is accountable, is considered by many faculty to be the proper judge of learning and research training. Yet this view is not necessarily at odds with the learning outcomes movement in the U.S., which depends on the disciplinary expertise of faculty to create and implement meaningful measurements. In the coming years, U.S. graduate institutions will need to demonstrate to faculty that they have an important role to play in learning assessment both within and across disciplines, and that assessments hold out great benefits to graduate programs and their students. Given the focus in this year's summit on research degrees, I will give particular attention to the discussion of learning outcomes for PhD programs.

Defining and Measuring the Outcomes of Research Training for the PhD

Over the past ten years, a number of national initiatives have sought to reform doctoral research training with attention to the outcomes of research training: the Carnegie Initiative on the Doctorate (CID), the CGS PhD Completion Project, The Pew Charitable Trusts' Re-envisioning the PhD Project, the Woodrow Wilson National Fellowship Foundation's Responsive PhD Initiative, the activities of the Center for the Integration of Research, Teaching, and Learning (CIRTL), and the Preparing Future Faculty (PFF) Initiative. Collectively these efforts have built on the reform movements of the 1990s, which sought to close a gap between doctoral training programs and professional demands on doctorate-holders,2 and to remedy a lack of accountability for student progress through the PhD. Most recently, the CGS PhD Completion Project has conducted the most extensive research to date on actual rates of completion, a strong proxy measure of learning outcomes, across a wide array of disciplines.

A number of areas of need and priority stand out in the research training areas of the projects outlined above: 1) to situate a substantial part of research training reforms within disciplines; 2) to provide incentives for interdisciplinary research training; 3) to encourage innovations in mentoring; and 4) to train graduate students for a wider variety of research-related fields outside academe. All of these areas call for a coordinated effort among faculty and senior university leaders with support from stakeholders outside the university. The PhD Completion Project has demonstrated, for the first time, that it is possible to engage a large number of faculty, across a wide spectrum of fields, in rigorous examination of one important measure of successful learning. While the data-driven reforms of this project have been comprehensive, some promising practices3 specific to student learning have emerged:

• Tracking and reporting the PhD student's progress through the degree
• Modifying course sequencing to ensure that it is adapted to the formation of the PhD researcher
• Offering writing assistance programs for graduate students at all stages and for pre-doctoral students who are working on a manuscript for publication
• Offering a structured set of professional development workshops that are tailored to the PhD student's phase of training (beginning, middle, dissertation phase)
• Offering a University Graduate Certification in College Teaching, designed to help graduate students organize and develop their teaching experience in a thoughtful way
• Offering enrichment events that prepare students for careers outside of academe (government, non-profit, and industry sectors)

Teaching and Academic Service

In the U.S., the learning outcomes of training for research and teaching are not easily separated, and a number of doctoral reform movements have set as an explicit goal the integration of teaching and research training (CID, CIRTL, PFF). Many of these projects also emphasize the formation of a professional identity: in the words of the CID, it is the role of doctoral programs to produce "stewards of a discipline" who are committed to the preservation and transformation of their discipline through research as well as the training of the next generation of scholars who will inherit that discipline.4

The Preparing Future Faculty Initiative5 is the most widespread national model for developing and implementing programs to train graduate students for all aspects of faculty careers, including research, teaching, and university service. In partnership with AAC&U and 11 disciplinary societies, CGS has managed grant-funded PFF programs at 43 doctoral universities that have in turn partnered with over 300 higher education institutions. The PFF initiative provides a useful framework for developing clear learning outcomes for graduate student teaching as well as a potential platform for training PhD students in the assessment of undergraduate learning outcomes. CGS is currently conducting a research project funded by the Teagle Foundation to identify the ways in which PFF programs could enhance graduate students' understanding of the value and practice of outcomes assessment.6 In a nutshell, this project makes learning outcomes for undergraduates a learning outcome for graduate students. Now in its first phase, the project ultimately seeks to cultivate a generation of future faculty who view the assessment of student learning as integral to their teaching, research, and faculty roles.

Outcomes of International Research Experience and Training

An important but relatively under-explored area of learning assessment concerns the goals of international research training and the desired outcomes of collaborative programs (joint degrees, dual degrees, and international research collaborations) for graduate students. The importance of this issue was recognized by the 2009 Global Summit participants, who agreed upon a number of principles that supported the need for learning outcomes in collaborative training programs: universities and other stakeholders should measure outcomes for pedagogy as well as education and career training, and should also provide resources and support systems that help graduate students develop cultural awareness and professional skills.7

There are two specific reasons why U.S. universities and their students can benefit from work in this area, and both relate to the quality of programs. First, both faculty and senior university leaders recognize that U.S. graduate programs must prepare students to develop the unique skill-sets needed for international research. At the policy level, there is also a growing recognition that the U.S. needs to keep pace with countries with a longer history of student mobility, whose students are already multi-lingual and have experience with international research. In CGS's recently completed Graduate International Collaborations project, areas of particular concern reported by U.S. deans included the training of students to negotiate the norms and expectations of different research environments, to confront the ethical and legal questions of international research, to communicate effectively with international peers and mentors, and to enter and build networks of international colleagues so that they may take advantage of global research opportunities during their careers.8 The second reason why U.S. universities must focus greater attention on this topic has to do with the effectiveness and efficiency of international research and training programs. Given that international collaborations demand significant investments of time and resources from U.S. universities, it is critical for universities to invest these resources in ways that benefit students and to use assessment to improve the quality of training. A more strategic assessment plan will also allow universities to demonstrate a return on investment to stakeholders within and outside the university. State universities in particular report the need to establish more rigorous metrics of success, especially given their principal commitments to in-state programming and current budget cuts.

In the U.S., of course, the decentralized nature of graduate education may make it more difficult to achieve agreement upon, and widespread formal adoption of, specific learning outcomes for graduate education and research training. However, the PhD Completion Project demonstrates the openness of U.S. faculty and programs to collecting data on outcomes and using these data to examine the institutional and departmental factors that promote student learning and progress toward the degree. Along with the other projects discussed above, it has also encouraged universities to develop a wide variety of innovations based on careful assessment in key areas. This university-centered approach has cultivated a promising level of engagement and investment within university communities for the assessment of graduate learning.

This paper draws on the expertise of CGS President Debra Stewart and Daniel Denecke, CGS Director of Best Practices.

1 A Test of Leadership: Charting the Future of U.S. Higher Education: A Report of the Commission Appointed by Secretary of Education Margaret Spellings (U.S. Department of Education, 2006).

2 For a recent discussion of the need for U.S. universities to better match graduate education to careers, see Louis Menand's The Marketplace of Ideas: Reform and Resistance in the American University (W.W. Norton & Company, 2010).

3 For a more comprehensive list of promising practices, see the most recent publication in the PhD Completion and Attrition Series: Policies and Practices to Promote Student Success (CGS, 2010).

4 The Carnegie Initiative on the Doctorate resulted in two publications, The Formation of Scholars: Rethinking Doctoral Education for the Twenty-First Century (Carnegie Foundation for the Advancement of Teaching, 2008) and a volume of essays on the future of doctoral education in each of the six disciplines covered in the project, Envisioning the Future of Doctoral Education: Preparing Stewards of the Discipline (Carnegie, 2006).

5 The PFF Initiative (www.preparing-faculty.org) was initially a joint project of the Association of American Colleges and Universities (AAC&U) and the Council of Graduate Schools.

6 The conclusions and recommendations of the project, completed in March 2011, are outlined in a monograph, Preparing Future Faculty to Assess Student Learning (CGS, 2011).

7 Global Perspectives on Graduate International Collaborations: Proceedings of the 2009 Strategic Leaders Global Summit on Graduate Education (CGS, 2010).

8 Joint Degrees, Dual Degrees, and International Research Collaborations: A Report on the CGS Graduate International Collaborations Project (CGS, 2010).

Using Quality Measures to Support Program Content and Design: Mentoring and Supervision

Barbara Evans
Dean, Faculty of Graduate Studies
The University of British Columbia

Graduate education is distinguished from most levels of undergraduate education by the close and increasingly collegial working relationships that develop between the student and academic staff members—known in different systems as supervisors, advisors, mentors, and committee members. Indeed, it is abundantly clear that success depends very largely on the quality and effectiveness of these relationships. For simplicity, I will use the generic term "supervisor" to encompass these roles as they relate to both academic supervision and career guidance.

What do we mean by "quality" in this context? Given that we are primarily considering education and research training, quality should be measured in terms of outcomes for the student, and we should be student-focused in our quality assurance. The goal should be to provide the best possible educational experience for each graduate student, so that he or she successfully completes an excellent and relevant program in a timely fashion, with embedded and additional opportunities to develop skills and competencies for productive future employment in a variety of careers. This educational experience depends on the inputs of supervision, academic program content and resources, and administrative processes. Its quality can be inferred from output measures such as graduate student satisfaction and skills acquisition, the products of the students' research, and program evaluations. Many aspects of both the inputs and outputs of graduate education can provide valuable information for the purpose of improving program content and design. Some of these are easy to measure but are of lower value for program improvement; others that would be particularly useful to know are extremely difficult to evaluate. Thus, cost-benefit comparisons become an important factor in determining which measures to seek (see table below).

How can/do we measure quality?

Gathering the Data

Student surveys generating both quantitative and qualitative data can provide good information about the quality of the graduate student experience. There are, however, inherent difficulties in obtaining data about individual supervisors due to students' understandable need for confidentiality. From the institutional perspective, analyses down to program level are most helpful, provided the numbers are sufficient to ensure anonymity. Aggregated quantitative data should be gathered only where identification of individual student responses is not possible. Because of the different "cultural" norms across disciplines, comparisons within disciplines are more meaningful than comparisons across them, whether at institutional, national, or international levels, and trends over time are often more helpful than actual scores (unless, of course, they are extreme). It is possible to develop instruments particularly tailored to specific fields and disciplines; from a cost-benefit perspective, such instruments are probably only useful within the scope of a large national or international survey.
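As an illustration of the aggregation rule described above, the sketch below reports program-level satisfaction means only where a minimum number of responses protects anonymity. The field names and the threshold of ten responses are assumptions, not features of any particular survey instrument.

    from collections import defaultdict

    # A minimal sketch, under assumed field names, of program-level
    # aggregation with an anonymity threshold: a mean is reported only
    # where enough responses exist that individuals cannot be identified.

    MIN_N = 10  # assumed threshold; an institution would set its own

    def program_level_report(responses, min_n=MIN_N):
        # responses: iterable of (program, satisfaction_score) pairs
        by_program = defaultdict(list)
        for program, score in responses:
            by_program[program].append(score)
        report = {}
        for program, scores in by_program.items():
            if len(scores) >= min_n:
                report[program] = sum(scores) / len(scores)
            else:
                report[program] = None  # suppressed: too few responses
        return report

Suppressing small cells, rather than lowering the threshold, is the conservative design choice: a missing value is recoverable in a later, larger survey, while a breach of anonymity is not.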


TABLE: Some potential quality measures that could inform program improvement

Columns: relevance/meaningfulness/importance of the measure. Rows: ability (ease) of providing the measure.

Easy to provide — high value:
• completion rates (timely completion rates; post-confirmation & candidacy completion rates)
• no. of publications from thesis
• conference presentations (esp. oral)
• student satisfaction with supervision and resources
• quality of theses
• acquisition of attributes & skills

Easy to provide — lower value:
• current RHD load
• completion times
• attrition rates
• stage of withdrawal
• employment/career outcomes

Moderate difficulty:
• benchmarking of processes
• other outputs—knowledge transfer

More difficult to provide:
• quality of publications/patents from thesis
• graduate satisfaction (X years out)
• employer surveys

In recent decades, graduate education has grown to encompass a large and increasingly diverse group of programs. The European Bologna Process separates these programs into Second and Third cycles (Master's and Doctoral) to reflect the fact that they are, or should be, very different. Certainly, while the principles of good research supervision are similar for Master's and Doctoral students, the goals are different. Research training at the Master's level is about developing the skills and knowledge to complete good research. At the doctoral level, however, the international consensus is that the research should result in a significant contribution to knowledge: it is about generating new understandings—innovation through research. This distinction between the research training goals at Master's and Doctoral levels does need to be reflected in survey instruments.

The Tools We Use

There are many kinds of data currently being gathered that pertain in one way or another to supervision. Involvement may be mandatory or voluntary in different systems. For example:

• Annual student progress reports—these need the right structure and should ensure that a "paper trail" of any difficulties is maintained
• Monitoring against formal milestones, including coursework
• Institutional progress data—completion rates and times, down to program level (CGS PhD Completion Project)1
• Satisfaction surveys during enrolment—e.g., the University of Melbourne's Quality of Research Supervision and Academic Support survey (QRS)2
• National satisfaction surveys—e.g., the Canadian G13 GPSS, the Australian PREQ
• Analyses of students' written comments, a rich source of information
• Exit surveys designed separately for completers & non-completers
• Focus groups & interviews for qualitative data
• Graduate student societies—"ask the students"
• Benchmarking instruments—e.g., the U21 (UK HEFCE) study on administrative processes (see below)
• Regular program/departmental reviews for reporting on program structures and outcomes
• Graduate skills and their acquisition—self-reported and reported by examiners
• 5/10-year-out surveys to provide a different but lagging perspective


Supervision

It is important to recognize the role of personalities in supervision—each supervision will be unique, not only due to the particular nature of the research undertaken, but also because of the personalities and requirements of the individuals concerned. Data and experience show unequivocally that successful supervision is largely about getting selection right in the first place—the five "R's." This means taking as much time as needed to get the

• Right student with the
• Right supervisor in the
• Right project/program at the
• Right time and with the
• Right resources.

Training for new supervisors (mandatory in some institutions), and for experienced supervisors as well, is widely practiced, and there are many excellent resources to underpin such programs. Many universities use supervisor registers to track individual supervisor activity and performance and to monitor general trends in student satisfaction with supervision. A key component of successful supervision strategies is having clear guidelines about what to do when things go wrong (such as when poor supervision is apparent). Maintaining the "paper trail" is essential, as is the ability to mentor or, if necessary, "ban" poor supervisors. A supervisor register is only useful if there are explicit ways to get both on and off the list.

Program Outcomes

Integral to good supervision is consideration of the nature and content of the required academic program and the level of other expectations/demands placed on the student (e.g., TA work). Over the last decade there has been an increasing international focus on institutional statistics such as completion times, and more particularly completion rates, as important indicators of the quality and effectiveness of graduate programs. Completion rates are probably more valid than completion times as measures of the quality of supervision. It is possible to combine completion data with other survey data to identify approaches that work to improve supervision, as exemplified by the CGS Completion Project. Regular departmental/graduate program reviews also provide useful information about the quality of program content, design, expectations and outcomes.
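The distinction between the two statistics can be made concrete in code. The sketch below is illustrative only; the record fields and the eight-year window are assumptions rather than any institution's actual definitions.

    # A minimal sketch, with hypothetical record fields, of the two metrics
    # contrasted above: a cohort's completion rate (the share completing
    # within a fixed window) and the median time to completion among
    # completers only. Dates are assumed to be datetime.date objects.

    def cohort_metrics(students, window_years=8):
        if not students:
            return {"completion_rate": None, "median_years_to_degree": None}
        completed = [s for s in students if s.get("completion_date")]
        years = [(s["completion_date"] - s["start_date"]).days / 365.25
                 for s in completed]
        rate = sum(1 for y in years if y <= window_years) / len(students)
        times = sorted(years)
        median = times[len(times) // 2] if times else None
        return {"completion_rate": rate, "median_years_to_degree": median}

Note the asymmetry: the rate is computed over the whole cohort, while the median time is computed over completers only, which is one reason the two measures can tell different stories about the same program.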

Research Outputs

Overall numeric assessment of the quality (and impact?) of the research outputs of higher degree programs is possible. These outputs include the thesis and any publications/patents arising directly from the work of the thesis research project. Assessing the quality of the thesis itself is somewhat more challenging, but the use of external and international thesis examiners provides a level of quality assurance.

Student Professional Development

Internationally, there have been increasingly explicit requirements from national agencies and industry for the development of generic and transferable skills/competencies within graduate programs. The Vitae program in the UK is probably the most impressive national program designed to do this. Many individual programs and institutions have also responded positively, but the extent to which they are successful requires evaluation. Students themselves can report on their acquisition of these competencies, and thesis examiners, if prompted by the right questions, might evaluate at least some of them.

Administrative Processes It’s not just about the supervisor. The quality of the student experience is also dependent on the nature and effectiveness of the many administrative processes involved. An international benchmarking exercise (developed by the University of Melbourne for the Universitas 21 group of research-intensive universities) exemplifies the way these procedural aspects might be defined and assessed. The focus of the exercise was on “major areas of policies, procedures and outcomes that underpin good practice in the provision of research higher degree programs.” The instrument used was based upon material produced

130 Global Perspectives on Measuring Quality Barbara Evans

by the Higher Education Funding Council for England3 (HEFCE, with permission), but modified to reflect some of the circumstances of the Australian and North American environments. The benchmarks were necessarily broad because of their international purpose. The benchmarking instrument had nine sections, each with a series of questions, dealing with the different aspects of research higher degree provision.

• Selection, admission, enrolment and induction of students
• Initial review and subsequent progress
• Examinations
• Supervisory arrangements
• Development of research and other skills
• Feedback mechanisms
• Institutional arrangements
• Research environment
• Appeals and complaints

The results of the survey, with a brief commentary on the data, enabled individual participating institutions to examine examples of good practice; to assess areas where benchmarks were not met; to consider whether changes could or should be made; and to identify which other institutions may already have adopted those practices. The value of such benchmarking exercises lies in closing the loop and moving to recognize and improve practices.

So we’ve got the data—then what? What can we do to ensure that the data gathered lead to more effective structures for mentoring and supervision, and an improved educational experience? Unfortunately, too often surveys are conducted and little or nothing is done with the data that have been accumulated, adding to survey fatigue in students with no benefit. It is MOST important to determine what you are going to do with the data before you conduct the survey. This focuses the exercise and increases the likelihood that the data will be used for real improvement. To use data to ensure improvement, consider:

Global Perspectives on Measuring Quality 131 USING QUALITY MEASURES TO SUPPORT PROGRAM CONTENT AND DESIGN: MENTORING AND SUPERVISION

• the ethics of gathering and then NOT using data,
• the importance of reporting back, closing the loop with those who provide the data,
• the value of using data to drive feedback loops in quality assurance,
• how to use data most strategically within your institution—with faculty and staff, students, Deans, senior committees and decision makers, and on websites, and
• MONEY always makes a difference!

1 http://www.phdcompletion.org/

2 http://www.fpg.unimelb.edu.au/ipeq/ec-qrs.html

3 http://www.hefce.ac.uk/

Assessing Mentoring and Supervision at the University of Auckland

Gregor Coster
Dean of Graduate Studies
The University of Auckland

Types of Assessment Conducted

At the University of Auckland, quantitative and qualitative measurements are used regularly to provide feedback in order to improve programme and departmental performance in research supervision. The quality of mentoring and supervision is assessed by means of doctoral annual reports, provisional year reviews, doctoral student exit surveys, measurement of times to completion and completion rates, exit interviews with students who have terminated their studies, feedback on supervision quality from Oral Examinations, reports from Main Supervisors provided to examination committees, and informal feedback provided to deans, associate deans and heads of departments. In addition, occasional research projects are conducted to inform particular supervision issues. Six-monthly reports are obtained in the case of master's degree research supervision.

The Doctoral Provisional Year Report involves both the student and supervisor reporting on progress, including on supervision quality. A jointly agreed report on progress and plans is submitted simultaneously. Formal presentation of the doctoral research project to a departmental seminar is required of all students, allowing an opportunity for identification of supervision or resource issues at departmental level before confirmation of registration. Thereafter students and supervisors are required to report annually on the progress of the research.

The Doctoral Exit Survey developed by the University of Auckland is both quantitative and qualitative in its format and is administered at the time that the thesis is submitted for examination at the Graduate School. This blinded survey collects data on the quality of supervision, the resources that were available, the student experience, professional development experience, and involvement by the student in teaching. Information from the data analysis is provided to the Board of Graduate Studies annually, with the proviso that earlier intervention will be undertaken if an important issue surfacing in the survey results requires attention.

Reports from the Main Supervisor on the supervision process are provided to the Examination Committee as a matter of routine. These can provide useful insights into the nature and quality of supervision and are regularly reviewed for any issues that are arising, both at the individual and systemic levels. Examiners' reports often provide independent insight into the quality of the supervision that a candidate has undergone, with some examiners making direct comment on supervision quality. Although the University provides faculty-level data on times to completion and completion rates, consideration is being given to analysing data at departmental and even individual staff level as a means of improving programme performance. Research shows that well-supervised students complete faster, even allowing for discipline differences.

A recent innovation to assess the quality of supervision amongst international doctoral students involved independent researchers separately interviewing 50 students and their supervisors. This research has given us some excellent insights into the international doctoral student experience that have not been available to us in other ways. Interventions are being planned to address the issues that have arisen from the survey, including issues unrelated to supervision. It should be noted that departmental heads must certify, in every case, that only academics with a PhD or equivalent, national and/or international standing as researchers, and the requisite training and experience are approved to supervise international doctoral students. This is a consequence of an agreement between the Government and New Zealand universities that provides additional subsidies for these students and therefore allows universities to charge them domestic fees only. The quality standards for international student research supervision are regularly monitored, including by the NZVCC Academic Audit Unit, which carried out an evaluation in 2008.

Staff are required to report on postgraduate research supervision at the time of the Annual Performance Review, and this offers a potential mechanism for feedback on supervision quality and performance. The Dean of Graduate Studies meets regularly with the Postgraduate Student Association, receives feedback regarding research supervision issues from time to time, and responds accordingly. Other assessments are also undertaken by the Dean of Graduate Studies. Much value is placed on student feedback as an important means of assuring the quality of research higher degree programmes. Information obtained from students who are interviewed by the Dean is also useful for quality improvement activity. Our University has a Planning and Quality Office which undertakes regular assessment of the quality of postgraduate programmes. The University also participates in international surveys on the quality of the student experience and benchmarks against other universities in the cohort. Feedback on these surveys is regularly provided to the Senior Management Team in the University, and quality improvement is expected where opportunities for improvement are identified.

In 2008 the University undertook a review of research space and resources available to research higher degree students, and as a result additional space is being reserved in future buildings. In addition, the University has instituted a programme to manage teaching and research space more efficiently. As the Campus Development Strategy proceeds, regular monitoring of space and resources will be undertaken. The Academic Audit Unit of the New Zealand Vice-Chancellors' Committee (NZVCC) regularly conducts academic audits in universities, which in 2009 involved a full institutional review of teaching and research. Results from this review inform the quality of the master's and doctoral programmes, including supervision quality.

Finally, the Dean keeps a record of those cases where a student has laid a complaint regarding their experience on either a master's or doctoral programme. These records are regularly examined to ascertain system issues that should be addressed. The main cause of problems in supervision is poor communication between supervisor and student, and the University has invested in work to ensure that this is understood by staff and students alike.


Tailoring Assessments to Program Level (Master's Versus PhD)

We have specifically avoided tailoring supervision quality assessments to different disciplines, as our exit survey instrument is new and we are presently taking a global view of supervision quality. However, if we find that there are noticeable or unexpected differences between disciplines, these are likely to become the subject of more detailed assessment, in conjunction with the particular faculty. In doing so, it is important to distinguish those differences between disciplines for which there is an obvious explanation. Assessments for master's programmes can be based on a shorter form of the doctoral exit survey, and we anticipate undertaking that work in the near future. As our master's programmes are largely managed in the faculties, albeit with oversight from the School of Graduate Studies, there may be significant differences that would be useful to identify and share across faculties. It is likely that assessments would be done at the time the thesis is submitted at the Graduate Centre, prior to marking. Whilst an annual reporting process is used for doctoral research supervision, most faculties undertake half-yearly reporting on progress and supervision for master's research.

From Assessment to Intervention

Exit surveys and interviews are used to identify problems within supervision, and also generic or system issues within the research programmes that are leading to supervision difficulties. The feedback loop is initiated by planning and implementing interventions to improve the quality of supervision, followed by further study of the effects of the change on the student and staff experience of supervision. The Board of Graduate Studies receives reports on issues related to the quality of supervision, forms small working groups to study and recommend how changes can be brought about in response, and re-evaluates the effects of those changes. Interventions are designed to improve the quality of supervision, improve outputs (including graduating a higher number of Maori and Pacific master's and doctoral graduates), improve completion rates, reduce completion times, and enhance the overall postgraduate experience of students.

A compulsory doctoral induction day is held for all incoming doctoral students, during which the Dean of Graduate Studies provides advice on the University's expectations for supervision (structure, frequency, nature of supervision, planning, communication) based on the evidence of what works and on local best practice. University policies on supervision are also explained. Information is also provided to new and existing staff as part of education on University policies on master's and doctoral research supervision. New research supervisors are all required to attend supervision training before commencing supervision; main doctoral supervisors are not permitted to embark on supervision until they have attended the course. Both supervisors and students thus enter the supervision relationship with a similar understanding of expectations.

Staff seeking promotion each year are required to submit teaching assessments that include an evaluation of teaching quality and research student supervision. The use of these assessments provides a powerful incentive for staff to focus on high-quality supervision. The University of Auckland participates in regular meetings of Deans and Directors of Graduate Studies within New Zealand and Australia, and uses these opportunities to learn from other universities regarding research higher degree policies and approaches to quality and quality improvement. A workshop on the importance and characteristics of good research supervision has been included in the Academic Heads Programme. The University is also encouraging a culture of departmental discussion and analysis of research supervision practices and reviews of research student progress, and such discussion and analysis is already occurring on a regular basis. During 2009 the Dean of Graduate Studies visited every department and discussed, among other matters, quality improvement in supervision for research higher degree students.

A fundamental question remains regarding the need for formal accreditation of supervisors. During sabbatical this year, the Dean came across many examples of a formal process for accreditation of supervisors, with appropriate pre-accreditation seminars required in order to achieve accreditation. This is worthy of further discussion.


Assessing Interdisciplinary Programs

Mandy Thomas
Pro Vice-Chancellor (Research and Graduate Studies)
Australian National University

Zlatko Skrbis
Dean, The University of Queensland Graduate School
University of Queensland

The Benefits of Interdisciplinary Research

Interdisciplinary research is often seen to be at the forefront of new knowledge production. As the scale of the problems faced in the world has grown, so too have the size and character of research teams. Some of the areas where there has been recent growth in interdisciplinary research are biotechnology, bioengineering, natural resource management, media and communications, new materials, and social medicine. Whether the research is about climate change, ageing, more efficient food production, the impact of migration on societies, or any other major issue confronting the contemporary world, the solutions are often found through the boundary-crossing of interdisciplinary teams. The societal relevance of such research means that interdisciplinarity leads to an increase in the scale of knowledge diffusion beyond universities compared with mono-disciplinary research (see DEA and FBE, 2008, p. 7). It is apparent that researchers increasingly work across disciplinary boundaries, and interdisciplinary research accounts for an increasing percentage of the world's research outputs (ibid., p. 19). It is therefore clear that quality PhD training must take into account the trend towards increasing interdisciplinarity, at the same time as it must address the difficulties confronted in fostering collaboration, and measuring quality, across and between disciplines.

Issues in Assessing the Quality of Interdisciplinary Research


• In their study of several highly regarded interdisciplinary research groups,1 Boix-Mansilla, Feller and Gardner (2006) identified three key issues in determining the quality of interdisciplinary work:
1. assessment within disciplines themselves is variable, with conflicting standards of quality assessment, so there is no easy choice of standard against which we can measure interdisciplinary quality;
2. there is no clear definition of interdisciplinary work; and
3. highly original work has few precedents, making quality assessment particularly difficult, as it is a challenge to find "peers" with adequate knowledge to assess the work.

• The way that research is organised in universities is often focused on disciplinarity (departments of physics, engineering, history), yet research centres or clusters often emphasise multidisciplinary problem-solving. In Australia, universities are presently involved in a research assessment exercise (Excellence in Research for Australia) which will rate and rank work in defined disciplines across the sector. The two forces at work are often at odds with each other—on the one hand, training, structures and the assessment of research are most often disciplinary; on the other hand, innovation and creativity, the hallmarks of great research, are often trans-disciplinary. Huutoniemi (2007) has concluded that "…in the presence of administrative, funding, and cultural barriers between research departments, collaboration across disciplinary boundaries needs special support."

• An important element of interdisciplinary PhD training is that it should not be seen as replacing training in a single discipline. As the DEA and FBE 2008 report points out, it "is not a case of 'either…or' but of 'both…and.' Almost invariably, strong monodisciplinary knowledge is the precondition for new cross-cutting knowledge. And conversely, interdisciplinary knowledge can contribute to creating the necessary dynamic within the individual fields" (p. 6). The fact that students in interdisciplinary fields need to master two or more disciplines means that extra effort needs to be applied to ensure high-quality training, and to guarantee that a high standard is reached in the relevant mono-disciplines.

Quality PhD Training in Interdisciplinary Research

• If we value interdisciplinary training, then it is important to provide some exposure to interdisciplinarity for all our students, not just those who are researching interdisciplinary problems. As Karl Popper argued, "We are not students of some subject matter, but students of problems. And problems may cut right across the borders of any subject matter or discipline" (1963, p. 88). Interdisciplinary expertise must therefore be firmly embedded in quality PhD training. This can be achieved by training all PhD students in an understanding, not of many different disciplines, but of different forms of knowledge production and knowledge value. Workshops or training programs on these topics should be undertaken in the first year of the PhD, along with "contextual" interdisciplinary workshops—for example, both science students and humanities and social science students should be required to have been exposed to the history and philosophy of science.

• We must ensure that the interdisciplinary training is of high quality and that there is enough expertise on the supervisory panel to ensure cutting-edge input across the boundaries. This may not be available from supervisors at the host institution alone, and consideration should be given to providing the student with expert input and advice from other institutions (at home or abroad). That is, as well as quality control, institutions need to consider quality enhancement for interdisciplinary work, given that there are cultural and organisational barriers to supporting such work.

• For those students undertaking interdisciplinary programs, it is important that they be able to attend conferences, work with international teams in these new fields, and meet the world's experts, particularly if their own institution does not have a critical mass of scholars working in the new field. Likewise, expert advice on publication outlets should be sought as soon as is practicable. As the acceptance of publications by good journals will be a test of quality, encouraging early publication is critically important.

Quality of Outcomes

Ultimately, the assessment of outcomes of an interdisciplinary thesis will be based on the common mechanisms of assessment: PhD thesis examination, patents, publications, and employment destination.

Conclusion

For those students producing an interdisciplinary piece of work, it is often necessary to enhance their training to ensure that they receive quality disciplinary training, and possibly to provide additional supervisory support for undertaking their interdisciplinary project and publishing in appropriate high-quality outlets. For these students, the traditional indicators of excellence (peer assessment and publications) will remain the most easily determined markers of quality. The provision of high-quality PhD training requires institutions to deliver interdisciplinary experience and insights to all of their students. The more difficult task is evaluating the quality of this generic interdisciplinary training.

Questions for Discussion:
1. Should all PhD students be encouraged to engage with the breadth of the interdisciplinary scholarship available to them, and not just the depth of their topic area?
2. How might institutions provide enriched interdisciplinary experience to PhD students? How might institutions measure the quality of this experience?
3. For those students working across disciplinary boundaries, how might we best provide excellent disciplinary as well as interdisciplinary training; and how might we best measure the quality of their outcomes other than through traditional means?

References

Boix-Mansilla, V., Feller, I., & Gardner, H. (2006). Quality assessment in interdisciplinary research and education. Research Evaluation 15(1): 69-74.

Huutoniemi, K. (2007). ‘Evaluation of interdisciplinary research’ in Oxford Handbook of Interdisciplinarity.

Danish Business Research Academy (DEA/Danmarks ErhvervsforskningsAkademi) and the Danish Forum for Business Education (FBE) (2008). Thinking across disciplines: Interdisciplinarity in research and education.

Popper, K. R. (1963). Conjectures and Refutations: The Growth of Scientific Knowledge. New York: Routledge and Kegan Paul.

1 These were the MIT Media Lab (ML), the Santa Fe Institute (SFI), the Center for the Integration of Medicine and Innovative Technologies (CIMIT), the Center for Bioethics at the University of Pennsylvania (CB-UP), The Art-Science Laboratory (ASL), and the Research in Experimental Design group at XEROX-PARC (RED).

Measuring Quality in Interdisciplinary Programs

Karen P. DePauw Vice President and Dean for Graduate Education Virginia Polytechnic Institute and State University

“Universities, therefore, will have to reconsider the priorities and practices of graduate education and training…. We argue that graduate programs must not only educate future scholars to be experts in the methods, techniques, and knowledge of their chosen disciplines but to have the broader problem-solving skills that require learning, unlearning, and relearning across disciplines.” – Rhoten, D.1

Although it is well understood and acknowledged that graduate education is critical to the academic mission and a key component of a university’s strategic plan, the assessment of quality through learning outcomes is a relatively new phenomenon for graduate education. Interdisciplinary study and degree programs pose even greater challenges for measuring quality in (post)graduate education and research training. One of the initial challenges lies with the identification of the outcomes to be measured as well as the indicators of success for the graduate program and the university. Quality graduate education includes demonstrated knowledge and understanding in the academic discipline(s), skills and abilities associated with the graduate degree sought, and professional preparation for careers following advanced education. Toward this end, and on the occasion of a then-pending external accreditation visit by the Southern Association of Colleges and Schools (SACS), Virginia Tech accepted the challenge to prepare student learning outcomes for each of our degree programs (undergraduate, graduate). The Graduate School in conjunction with the Commission on Graduate Studies and Policy (CGS&P) developed a set of foundational student learning outcomes and examples of possible assessment measures for

the graduate degrees at Virginia Tech. These served as a baseline for the process used at VT and are provided below as examples of what universities could use to measure quality in disciplinary and interdisciplinary graduate education and research training programs. The outcomes below provide a range of possibilities which should be adapted depending upon the type of degree (e.g., research degree, professional degree, coursework only) and level of degree (e.g., Master’s, MFA, PhD, EdD). In the process used at Virginia Tech, academic departments and their graduate program committees were encouraged to identify a set of unique student learning outcomes, to select among the foundational student learning outcomes and modify them to fit the specific needs and requirements of each graduate program, or to utilize a combination of both approaches.

1. Students demonstrate an in-depth knowledge and understanding in their academic discipline. Possible measures for assessing learning outcomes: • Successful completion of requisite course work, comprehensive or preliminary examination • Maintenance of academic grade point average (GPA) required for degree completion

2. Students demonstrate the ability to design, conduct, and complete scholarly and/or creative works. Possible measures for assessing learning outcomes: • Successful completion of project, dissertation or thesis • Scholarly publication(s) and presentation(s) • Success in design competitions

3. Students synthesize and evaluate information resources relevant to their academic discipline. Possible measures for assessing learning outcomes: • Successful completion of preliminary (or comprehensive) exams • Successful completion of requisite course work • Documented participation in weekly academic seminars • Successful evaluation of academic presentations

4. Students work effectively in teams and collaborative projects. Possible measures for assessing learning outcomes: • Documented participation in teams and collaborative projects • Co-authored projects, publications or creative projects

5. Students demonstrate proficiency with appropriate academic field-based methodologies, analyses and technologies. Possible measures for assessing learning outcomes: • Successful completion of course work on research methods and analysis • Documented successful academic performance including presentations and publications • Demonstrated ability to use technology appropriately

6. Students successfully analyze, critique, and evaluate problems. Possible measures for assessing learning outcomes: • Successful completion of requisite course work • Successful completion of comprehensive examinations • Successful completion of dissertation or thesis • Documented participation in weekly seminars

7. Students demonstrate teaching ability and understanding of pedagogical practices. Possible measures for assessing learning outcomes: • Successful performance as a Graduate Teaching Assistant (GTA) • Successful completion of GTA workshop • Successful completion of pedagogy courses

8. Students demonstrate leadership abilities and skills. Possible measures for assessing learning outcomes: • Service in leadership positions in student government and university governance (e.g., university, department, college, location) • Service to professional organizations/societies • Involvement in student-run conferences and service-learning projects

9. Students demonstrate understanding of the ethical standards and professional practices of their discipline. Possible measures for assessing learning outcomes: • Successful completion of graduate course(s) that includes professional ethics and scholarly integrity • Participation in workshops, seminars on ethics and professional practices • Demonstrated knowledge of Responsible Conduct of Research (RCR) and scholarly integrity • Participation in Graduate Honor system (unique to selected universities in the United States)

10. Students demonstrate understanding and value of civic consciousness or civic engagement. Possible measures for assessing learning outcomes: • Involvement in off-campus organizations • Involvement in extra-curricular service activities • Successful completion of service-learning projects, e.g., VT-ENGAGE2 • Recognition for service and community engagement • Successful completion of graduate course on topics of citizen engagement

11. Students demonstrate knowledge and understanding of global perspectives. Possible measures for assessing learning outcomes: • Successful participation in study abroad program or research collaboration • Successful participation in Preparing the Future Professoriate (PFP) or Preparing Future Faculty (PFF) programs • Participation in graduate study abroad or research/scholarship exchanges, completion of courses/seminars on global perspectives • Successful demonstration of foreign language competence and cultural understanding

12. Students demonstrate understanding of the value of diversity/ inclusion and the acquisition of cultural competency. Possible measures for assessing learning outcomes: • Successful participation in seminars and workshops on diversity, inclusion and cultural awareness • Successful participation in diverse work teams

13. Students demonstrate understanding of interdisciplinary academic endeavors and work effectively in collaborative projects. Possible measures for assessing learning outcomes: • Successful participation or collaboration with graduate students or faculty from multiple departments • Completion of graduate courses outside academic department • Completion of graduate course on “interdisciplinary research” • Completion of an interdisciplinary research fellowship • Demonstrated competence in interdisciplinary case studies of complex research problems

The first twelve outcomes are primarily directed toward an academic discipline; the last one specifically mentions interdisciplinary graduate education and does so to encourage all graduate students to engage in interdisciplinary graduate study and research. All of the student learning outcomes above and the possible measures could be relevant to the assessment of quality for interdisciplinary programs with only slight modifications to focus on both academic disciplines and interdisciplinary efforts. Based upon the experiences of the National Science Foundation Integrative Graduate Education and Research Traineeship (IGERT) program, the following additional outcomes could be utilized to assess the extent to which graduate students and faculty:

• Establish interdisciplinary collaborative research teams • Incorporate an interdisciplinary perspective into one’s scholarly work, and • Pursue interdisciplinary career paths, including careers outside of academia.

Measures for these could include the number of interdisciplinary abstracts, manuscripts published, and grant proposals submitted and funded; enrollment in interdisciplinary or multidisciplinary courses; successful completion of interdisciplinary internships or fellowships; and successful employment of graduates in interdisciplinary career paths. In addition to the individual student outcomes described above, many universities have identified interdisciplinary graduate education and research as a priority. As a measure of this commitment, universities frequently seek and receive extramural funding for interdisciplinary research and graduate education, support the establishment of interdisciplinary centers, encourage the development of interdisciplinary PhD programs, and fund focused initiatives to expand interdisciplinary graduate education programs. Emphasizing interdisciplinary graduate education and research allows research universities to attract and retain outstanding faculty and teams of collaborators, to pursue larger grants from multiple funding agencies, to attract outstanding graduate students and postdocs who wish to work in an interdisciplinary setting, and to address large, complex research questions requiring an interdisciplinary team of scholars.

1 Rhoten, D. (2004). Interdisciplinary Research: Trend or Transition? Items and Issues 5(1-2): 6-11.

2 See VT-ENGAGE website at http://www.engage.vt.edu/

VI. SKILLS, COMPETENCIES, AND THE WORKFORCE

Summary of Presentations and Discussion

The extent to which graduate institutions prepare their students for viable and rewarding careers is one of the ultimate measures of quality in graduate education. As was discussed in Panel 1, quality assessment allows graduate schools to determine whether programs are preparing students for the careers that they will actually pursue. Leaders in graduate education are looking for assessment strategies that illuminate important gaps between education and training and actual career pathways, and promote clearer program objectives for the development of professional skills. Yet gathering accurate data on career pathways poses many challenges. Universities often lack longitudinal data on careers (as opposed to data on initial job placements) due to the challenges of collecting data over the long term. Panel 5 was designed to take a closer look at these challenges in different national contexts and to share promising approaches for the assessment of career outcomes and skills. Presentations for this session were organized around three topics and corresponding sets of questions:

• Defining and Measuring Professional Skills How does your university define and measure the professional skills and competencies developed in its (post)graduate programs? How can these measurements be differentiated—by degree level, discipline, career path? Which methods and metrics have proved most successful?
• Linking Professional Training Programs to Workforce Needs What role can stakeholders outside the university (governments, national and international organizations, industry) play in helping universities develop a better understanding of skills and competencies needed for the 21st century workforce? How can universities use input from these stakeholders to assess the quality of master’s and doctoral students’ professional training?
• Career Pathways What role can universities play in building better pathways and supporting transitions from (post)graduate school to the workforce? What are some examples of empirically-grounded best practices in this area? How do your university and its partners (for example, industries involved in traineeships or research programs) help advise master’s and doctoral students about career pathways within and beyond their chosen discipline?

This section assembles papers on these topics from participants representing universities in Australia, Canada, China, Indonesia, Malaysia, and the United States.

Defining and Measuring Professional Skills Spurred by significant national and regional efforts to assess the relationship between PhD training and workforce demands, the past fifteen years have seen a number of strategic efforts to define professional skills for graduate students.1 A number of these efforts have sought the input of non-academic employers. The paper by Austin McLean, Director of Scholarly Communication and Dissertation Publishing for ProQuest, a company that houses and disseminates master’s theses and dissertations, provides a corporate employer’s perspective on the professional skills developed through the process of researching and writing a dissertation. Mr. McLean notes that the dissertation provides an opportunity to assess the quality of professional training and raises a number of questions that prompt further thinking on this topic. The paper by Illah Sailah (Ministry of National Education, Indonesia) presents an example of a government-led effort to define and measure professional skills acquired through graduate education. Dr. Sailah outlines the goals and methods of the Indonesian Qualification Framework (IQF), which establishes standards for skills, including professional skills, by field and degree level. Approaches to structuring discussions between industry, government, and universities about desired graduate professional skills were addressed in the next sub-panel.

Linking Professional Training Programs to Workforce Needs In their co-authored paper, Laura Poole-Warren (University of New South Wales) and Dick Strugnell (University of Melbourne) call for greater dialogue between higher education institutions and external stakeholders about needed skills and professional development activities for doctoral students. In Australia, incoming research students now present a more diverse range of professional backgrounds and skills, and this diversity will need to be considered in assessments of professional training programs. This point was also emphasized by Michael Gallagher (Group of Eight), who observed that the rising number of Australian PhD candidates entering programs at a later age will also need to be taken into account. The degree to which government, industry, and universities coordinate their efforts to identify and assess professional training needs varies significantly, of course, by country. In some countries, these groups have worked together to develop doctoral degree structures focused specifically on commercialized research. Rose Alinda Alias (Universiti Teknologi Malaysia) provides a detailed description of one such program, the Industrial Doctorate in Malaysia, versions of which can be found in Denmark and the U.K. While the Industrial Doctorate represents an ambitious effort to prepare students for research careers in industry, it has also generated significant debate in Malaysia about degree standards as well as questions about what constitutes a research degree. Dr. Alias’s presentation of the Industrial Doctorate model prompted an extended discussion among participants about the status of professional doctorates in their countries. It was generally acknowledged that professional doctorates introduce new challenges for quality assessment. Not only are these programs growing at a
significant rate in response to workforce and student demands, but they also vary widely by field, making the creation of standards across programs more difficult. Discussion comments illuminated many of the ways that graduate schools in different countries are seeking to manage this growth and promote quality within new programs. In the U.S., where there is no federal body that sets degree standards, universities must decide how to classify new professional degrees while meeting the standards of discipline-specific accrediting associations.2 In countries where standards for graduate education are set by federal or provincial governments, there has been more urgency about establishing quality metrics for the degree. In Australia, for example, the Australian Qualifications Framework upholds strong criteria for research doctorates, a category that excludes professional doctorates that include little or no research. Doug Peers (York University) and Mary Ritter (Imperial College London) also pointed out that in countries with government-managed health care systems, the government may play a role in defining standards for professional doctorates in the allied health fields. While many expressed concerns about the quality of some professional doctorate programs, there was also strong support for the idea that the distinction between research and professional doctorates should not be hierarchical. That is, professional doctorates, however they are defined, should not be assumed to be of lesser importance than research doctorates. Jeffery Gibeling (University of California, Davis) advocated for this approach and stressed that programs should be evaluated according to how well they define and meet the professional needs of their students.

Career Pathways This sub-panel also featured models for integrating industry and academe, including some that do not take the route of a formal professional doctorate program. Yan Jianhua (Zhejiang University) discusses the efforts of Zhejiang University to develop a training model that helps graduate students integrate theory and application through university-industry collaborations. Two mechanisms for this integration are internship programs, some of which are focused on
developing the western region of China, and joint training programs with industry partners. The paper by Allison Sekuler (McMaster University) considers professional pathways for graduate students in all fields, examining both challenges and successes in Canada and globally. Dr. Sekuler cites the example of the MITACS program in Canada, a program that, like Vitae in the UK, aims to create better networks between academe and other sectors and thereby enhance graduate students’ transitions to the workforce. While these efforts have seen considerable success, it is essential, she emphasizes, to gain the support of faculty advisors in preparing graduate students for career pathways outside academe. Quality assessment of graduate programs can support this effort by taking into account both academic and non-academic employment outcomes as measures of program quality. In the discussion, Dr. Sekuler indicated that it is important to view graduates who leave their institutions to work in industry and other non-academic research settings as assets to programs and institutions, as they become important contacts for programs and faculty seeking to develop new research projects. The discussion also uncovered several gaps in research on the career pathways of doctoral students. Areas of need mentioned by participants included: longitudinal studies of graduates’ perceptions of their professional needs, since these perceptions will change over time; studies on external factors that may complicate the career pathways of doctoral students; and research assessing the impact and effectiveness of different types of transferable skills programs.

Conclusion As governments, industries, and universities review the status and function of the PhD in their own national contexts and globally, it will be important to consider new questions about the relationship between research and its applications. To what extent do traditional PhD programs need to reconceive research as an enterprise that always includes both theory and professional practice, whether or not this practice happens inside or outside the university? And how can various stakeholders ensure that professional doctorates are held to a
high quality standard, meeting the demands for new areas of research-based professional practice? There was general agreement among participants that the doctorate must incorporate broader training in professional or transferable skills. The inclusion of such skills in assessments of program quality will help graduate institutions prepare students for the careers they will pursue immediately following completion of their degrees and over the long term.

1 Notable efforts to identify professional skills for graduate students across national contexts include the EUA’s DOC-CAREERS Project (information available at http://www.eua.be) and the Joint Statement of the Research Councils’ Skills Training Requirements for Research Students (available at http://www.vitae.ac.uk).

2 For more information about issues and challenges surrounding professional doctorates in the U.S., see the Task Force Report on the Professional Doctorate (2007). Washington, DC: Council of Graduate Schools.

The Role of the Dissertation in the Assessment of Professional Skills

Austin McLean Director, Scholarly Communication and Dissertation Publishing ProQuest, Ann Arbor, MI USA

Introduction My company, ProQuest, supports the core goals of graduate education by disseminating research to national and international audiences. The purpose of this paper is to share our observations about the role, and the potential role, of the dissertation in the definition and assessment of professional skills. These observations are based upon our experiences as an employer of hundreds of PhDs since the founding of the company in 1938, and as a publisher of PhD and master’s theses for 1,700 graduate institutions over the past 70 years.

An Employer’s Perspective on Professional Skills of PhD Holders First I will speak from the perspective of an employer of a significant number of PhD holders. Our experience has shown us that PhD holders, like the holders of master’s degrees, possess skills that are transferable to a much broader range of professional activities than those for which they are principally trained (research and teaching). PhD holders display self-directed work habits and are quick studies in new technologies and techniques, skills that would be useful in almost any professional environment. Some of these skills include general leadership, communication, computer and technical abilities, problem-solving, teamwork, and a commitment to life-long learning. But some of these skills are truly specific to PhDs. As was discussed in the 2009 Global Summit in San Francisco, PhD holders are particularly valuable to what has been termed the “global knowledge economy.” They have important contributions to make to the many sectors outside academe that rely on rigorous, specialized research and evidence-based decision-making. With their ability to formulate research questions, conduct independent and collective research projects, absorb and digest great quantities of data, observe patterns,
and weigh evidence before recommending action, PhD holders have a great deal to contribute to applied research and other sectors. Many universities and governments are seeking to better prepare their students for such positions. We believe it is important for industry to support these efforts by providing information about the evolving and specialized skill sets needed for the wide range of professional possibilities that have emerged in the private sector.

The Dissertation as a Measure of Professional Skills? There is general consensus that the purpose of the research doctorate is to produce individuals who are capable of conducting original research or research that makes a significant contribution to furthering knowledge.1 As we are all aware, however, there continues to be debate within the academy, in research communities, and in the for-profit and non-profit sectors about the distinct skills imparted during the process leading to a PhD and the assessment of these skills through the evaluation of the dissertation. Recent literature has also called for greater clarification about the role of the dissertation in both shaping and measuring the skills acquired by PhD students. In a recent book, Making the Implicit Explicit: Creating Performance Expectations for the Dissertation, Barbara Lovitts notes the lack of cross-disciplinary standards for faculty to evaluate dissertations and a lack of consensus about skills gained during the PhD process.2 Her book suggests that a discussion of multidisciplinary skills could lead to the development of common standards and aims to construct some preliminary standards in the fields of biology, physics, electrical engineering/computer science, mathematics, economics, psychology, sociology, English, history, and philosophy. It is the role of universities to discuss and evaluate such performance expectations in conversation with other individuals and groups that contribute to the establishment of dissertation standards in their countries: faculty advisors, disciplinary societies, accrediting bodies, and other stakeholders. The 2010 Global Summit also holds out an opportunity for graduate leaders to exchange information about
the dissertation as both a means and a measurement of professional skills acquired through the research doctorate and to discuss possible methods of harmonizing these standards when appropriate. As a designer of international repositories of dissertations, ProQuest supports these efforts, which help us to understand the evolving standards for the dissertation. And looking forward we are also eager to learn how dissertation repositories in general might help universities seeking more information about the dissertations produced by their graduates and the use of this information in improving graduate training programs. I close with two questions that would merit further discussion:

1. How could dissertation repositories better serve universities seeking to understand the standardization of dissertation formats within and across disciplines? Within and across national cultures? What aspects of the dissertation would be useful to track?

2. How might this information help universities improve graduate training programs? How could it help further global discussions about the role of the dissertation in both shaping and assessing professional skills?

1 Barbara E. Lovitts (2007). Making the Implicit Explicit: Creating Performance Expectations for the Dissertation. Sterling, VA: Stylus.

2 Ibid.

Defining and Measuring Professional Skills in Indonesia

Illah Sailah Director of Academic Affairs, Directorate General of Higher Education, Ministry of National Education, Republic of Indonesia

Indonesia is now establishing the Indonesian Qualification Framework (IQF), a mechanism for assessing the quality of formal and informal education and training systems that will also serve as a competency-recognition system in Indonesia. The IQF establishes criteria for content and coverage, capacity and responsibility, recognition and autonomy. The elements of the IQF consist of: (1) indicators of content and coverage in science, along with indicators of knowledge, know-how, and skills acquired by individuals at each level of higher education; (2) indicators of capability and capacity in utilizing scientific knowledge, know-how and skills, as well as responsibility and accountability in undertaking a work or a job; (3) indicators of autonomy as well as methods to gain recognition in utilizing such scientific knowledge, know-how and skills. This qualification framework is a reference for (post)graduate programs, including professional programs, and can also be used by the manpower sector as a mechanism for recognizing prior learning, training, and experiential learning. The method of measurement varies among professional bodies; however, all include assessment of knowledge, skills and attitude. The IQF has nine levels: the higher the level, the more scientific the qualification; the lower the level, the more technical. This means that the measurement can be differentiated by degree/level, discipline and career path. For example, in the discipline of Nursing, the Bachelor’s degree does not include criteria for direct patient care; however, a student with a Professional degree in Nursing (which is one level above the Bachelor’s degree) is expected to have training in direct patient care. In the case of Nursing, due to the differences in the level of autonomy and recognition, the competency tests will be different in terms of content and coverage. The D3 degree in Nursing is one level lower than the Bachelor’s degree in the same field; however, in real working activities, holders of this qualification are allowed to provide some direct care to the patient due to their level of capability. The competency test for this degree will cover some of the knowledge, skills and know-how of the higher level (level 5). In principle, the methods of assessment depend on the level of the qualification. The stakeholders outside the university (governments, national and international professional bodies, industry) may play an important role in assisting higher education institutions by providing information about workforce demands for certain competencies in the future, along with information on the quality of human resources needed for the 21st century workforce. The IQA and EQA agencies should actively seek inputs from professional bodies in developing standards and mechanisms for accreditation. The higher education institutions will use the input from these stakeholders to assess the quality of Master’s and doctoral students’ professional training, and also to improve the formulation of competencies, learning outcomes, curricula, and assessment methods, including the tests of competencies. Higher education institutions can play an important role in building better pathways and supporting transitions from (post)graduate school to the workforce by establishing Career Centers.
In Career Centers, certain competencies (especially soft skills or strategic skills) can be developed to support the academic training provided by the (post)graduate school. The Career Center can also provide information about working opportunities in the industry and government sectors. Campus recruitment can also be organized by the Career Center once or twice a year. Universitas Indonesia and Bogor Agricultural University are good examples of universities that have implemented Career Centers, and many industries and companies are involved in their job market exhibitions. Many (post)graduates have succeeded in finding jobs by attending these exhibitions. Many have also been inspired by the interviewing sessions, as these help them to learn their weaknesses as well as their strengths. Master’s and doctoral students should have conversations with their supervisors about career paths even though, in Indonesia, many
doctoral students already have a permanent job. Master’s students are now growing in number, as the Bachelor’s degree is no longer a suitable qualification for the position of lecturer in higher education institutions (due to Governmental Law No 14 of 2005).

Linking Professional Training Programs to Workforce Needs: Australian Approaches

Laura Poole-Warren Dean of Graduate Research University of New South Wales

Dick Strugnell Pro Vice-Chancellor (Graduate Research) University of Melbourne

“The expectations on our graduates are beginning to shift, with a greater emphasis on developing the graduates both personally and professionally to support their individual academic to work- life transition, whether the profession of choice is going to be in the private sector, the public sector, or not-for-profit sectors.”1

Drivers for Enhancing Professional Skills Training in Research Degrees Despite wide variation across the globe in the level and types of skills training embedded in research programs, there is broad consensus that such training is an essential component of the 21st century research degree. The challenge that presents itself is understanding how Universities can develop and implement programs that meet the diverse range of employers’ needs. Before considering the role of stakeholders outside the University sector in development of a better understanding of skills and competencies needed for the 21st century workforce, clarification of current graduate destinations and an understanding of the key stakeholders is essential. Up until the mid-1900s, most PhD graduates were destined for employment in the Academy. By the 1970s, about 30% of PhDs went on to employment in sectors outside academe.2 In the late 20th century and early 21st century, data from the US and Australia suggested that employment of PhD graduates outside the Academy rose to between 45 and 50%.3 Various factors, such as the change in graduate destinations that has occurred over the past 50 years, have driven reviews of PhD
models in several countries. Sir Gareth Roberts’ review of the PhD in the United Kingdom (UK),4 although focussing only on the PhD in Science, Engineering and Technology (SET), stated that “PhDs do not prepare people adequately for careers in business or in academia. In particular, there is insufficient access to training in interpersonal and communication skills, management and commercial awareness.”

Graduate Attributes and Skills Universities across the world have developed statements outlining graduate attributes expected from a quality research degree and these are nicely summarised by the Irish Universities Association list of skills5 that covers the following broad areas:

• Research skills and awareness • Ethics and social understanding • Communication skills • Personal effectiveness/development • Team-working and leadership • Career management • Entrepreneurship and innovation

While this list is comprehensive, a recent report summarising the outcomes of an Australian workshop on research education suggested that research students considered skills development for pursuing an academic career important, and that “very little seemed to be available to support students in building academic careers.”6 An example of such a program is the Path of Professorship offered at the Massachusetts Institute of Technology (MIT).

The Stakeholders and Their Role The expected rise in demand for new academics over the next decade indicates that the Higher Education sector should also be considered a key stakeholder in the development of professional skills. Thus both Universities and external stakeholders (governments, national and international organizations, industry) need to:

Global Perspectives on Measuring Quality 163 LINKING PROFESSIONAL TRAINING PROGRAMS TO WORKFORCE NEEDS: AUSTRALIAN APPROACHES

1) contribute to the dialogue on key skills required; and 2) engage in activities that will lead to development of these skills.

Given that the skills required by different non-Academy employment sectors may differ and may change over time, it is essential that mechanisms for supporting ongoing dialogue are implemented. An important consideration in setting up this continuing dialogue is that commencing research students may have very diverse levels of skills and workplace and life experience, and thus “flexibility and diversity in research skills development” is required. As mentioned, engagement of all stakeholders in the development and delivery of skills training is important. Development of graduate attributes and skills may occur via both experience and formal training programs during candidature, or these may already be present prior to commencing research training. A structured training needs analysis, such as that used in many institutions in the UK, is a useful tool that should be considered in mapping and tracking the development of attributes and skills. Not all skills can be gained via formally structured programs, and key personality traits that may equip an individual for different roles will not readily be “transferable.” While training courses in entrepreneurship may be one approach (e.g., the Australian Commercialisation Training Scheme), industry placements hosted by external stakeholders and mentoring by industry personnel may also achieve excellent outcomes. Equally, teaching experience and skills can be gained via formal training in University learning and teaching, but experience in teaching gained on the job is also necessary. Again, flexibility of offerings is important so that the impact on research outputs is minimised. Recognition by Governments of the added resource requirements for professional training, both in terms of support of informal and formal training programs and in extension of candidatures to maintain quality research outcomes, is critical.


Stakeholder Input to Assessing Quality of Professional Training Determining ways of measuring the quality of research training is important for tracking performance and supporting continuous improvement of research training programs. Broadly speaking, measurable outputs that can be expected from research training include the graduates themselves, as well as contributions to knowledge generated by the graduates, publications, exhibitions, grants, inventions and even collaborative networks. Measures of quality outcomes include such indicators as student satisfaction, completion rates and attrition, publication and the proportion of publications in high quality journals, inventions that lead to commercial exploitation, the impact of exhibitions, the thesis examination outcome, the “quality” of the employer/employment and employer satisfaction. The latter outcome is likely the best measure of the quality of professional training, and Universities need to engage effectively with external stakeholders to determine robust measurement approaches. There are various possible approaches to gaining input from external stakeholders (and this could include other Universities). These include accreditation of programs for/by specific sectors, implementation of external review processes and the development of advisory boards. Examples of accreditation may include programs such as the combined Master of Psychology/PhD at UNSW and Melbourne. The Master’s degree leads to a clinical qualification accredited by bodies like the Australian Psychological Society, whereas the PhD fulfils the higher-level research training objectives. External review can be an effective approach for getting input from stakeholders on the quality of programs. For example, the Australian National University recently conducted an external review of their PhD program. Finally, advisory boards with a range of employment sector groups represented can be valuable for capturing ongoing input on stakeholder needs.

Conclusion With an increased proportion of doctoral and research Master’s graduates entering employment outside of the Academy, universities must develop additional curriculum to support candidates for these
different roles. The nature of the curriculum will depend on the employment sector and the type of position (managerial, policy, operational, etc.), and it is unlikely that generic coursework can be developed that will support all of these roles. Leadership and interdisciplinary training and reporting may well supplement the attributes expected of successful candidates, but employer groups have a major responsibility for advising universities of their requirements. Universities will need to budget for this training, which will range in cost depending on the mode of delivery. To prompt further exploration of this topic, we outline four questions for discussion:

1. Should transferable skills training be captured in “for award” courses?
2. Should the candidate or University pay for this additional training?
3. When should the training be provided, i.e., throughout or at the end of the research training experience?
4. How can employer groups be used to develop and/or deliver the curriculum of transferable skills programs (e.g., adjunct appointments)?

1 Professional skills development for graduate students, CAGS, Nov 2008. http://www.cags.ca/pages/en/publications/cags-publications.php

2 See http://www.nsf.gov/statistics/nsf06319/start.cfm

3 http://www.nsf.gov/statistics/nsf10309/; Edwards, Daniel; Radloff, Ali; and Coates, Hamish, “Supply, Demand and Characteristics of the HDR Population in Australia” (2009). Higher Education Research. http://research.acer.edu.au/higher_education/10

4 SET for Success: The Report of Sir Gareth Roberts’ Review, April 2002. http://www.hm-treasury.gov.uk/d/ACF614.pdf

5 Irish Universities’ PhD Graduates’ Skills. Irish Universities Association. http://www.4thlevelireland.ie/publications/Graduate_Skills_Statement.pdf

6 The Research Education Experience in 2009. A workshop hosted by the Council of Australian Postgraduate Associations (CAPA) with support from the Department of Innovation, Industry, Science and Research (DIISR). http://www.innovation.gov.au/Section/Research/Documents/TheResearchEducationExperiencein2009.pdf

Linking Professional Training Programs to Workforce Needs: The Industrial Doctorate Program in Malaysia

Rose Alinda Alias Chair, Malaysia Deans of Graduate Studies Council (MyDEGS) Dean, School of Graduate Studies Universiti Teknologi Malaysia

Malaysia’s New Economic Model (NEM) and MyBrain15 The Malaysian New Economic Model (NEM) was launched on 30 March 2010 by Malaysian Prime Minister Najib Razak as an economic plan intended to more than double the per capita income in Malaysia by 2020. In a nutshell, this program aims to shift affirmative action from being ethnically-based to being needs-based, hence making the economy more competitive and market and investor friendly. The NEM will create a future Malaysia renowned for vibrant transformation arising from the resourcefulness of its people, exemplified by its harmonious diversity and rich cultural traditions. The economy will be market-led, well-governed, regionally integrated, entrepreneurial and innovative. Eight (8) Strategic Reform Initiatives (SRIs) were identified for achieving the NEM. SRI1 aims to re-energize the private sector and SRI2 serves to develop a quality workforce and reduce dependency on foreign labor. Malaysia’s workforce must be inspired to give its best via a quality education system which nurtures skilled, inquisitive, and innovative workers to continuously drive productivity forward. SRI6 aims to build a knowledge base and infrastructure. Economic transformation in the industrial, agricultural and services sectors is a process requiring continuous innovation and productivity growth with significant technological advancement and entrepreneurial drive. In 2006 Malaysia launched the National Higher Education Strategic Plan (PSPTN), which outlines 17 Critical Agenda Projects (CAPs) as a roadmap for the country to achieve its strategic intention of becoming a higher education hub in the South-east Asia region by 2020. MyBrain15 is one of these CAPs and sets a target of producing
60,000 local Malaysian doctoral graduates by 2023 as main players who will drive the future of the Malaysian innovative and competitive economy. MyBrain15 is the main platform for Malaysia to produce the critical mass of knowledgeable graduates who will be leaders in the global economy via creation and innovation of products and services. Malaysia’s PSPTN and NEM recognized that a big transformative move was needed because the national statistics, in comparison with other countries, were quite dismal. Malaysia is currently stuck in what is called a “middle income trap.” The private sector focuses on short-term profits and is not investing in products and services that will drive future growth. This is reflected by low investment in R&D, lack of interest in innovating products and processes to move up the value chain, and hence a strong disinterest in building skills and paying higher wages for improved productivity. Up to 2009, there were only 13,000 Malaysians with a doctoral qualification, and the majority of them (10,782) are academic staff in Malaysian public universities. Despite aggressive efforts in the last decade by the Malaysian government to boost the country’s R&D capability through provision of grants for industry-academia research collaboration, there is still a gap between them, and the relevant targets are still not being met.

Industrial Doctorate Program to Fulfill Malaysia’s Workforce Needs An Industrial Doctorate program is aimed at developing professionals and industry practitioners who are not only able to innovate but are also able to apply innovations in solving industrial problems and make significant contributions to the performance of their organization. Amongst the advantages of an Industrial Doctorate program are that it provides a platform for industry-university collaboration and knowledge sharing, leads to the application of research by personnel in solving industrial problems, increases the amount of industry-related research, and provides an alternative route towards the award of a doctoral degree. The Industrial Doctorate program enables the transformation of the new economic model to meet the future demands of the country, increases the number of researchers in the industry and encourages cost-sharing between government and
industry in producing innovative and skilled researchers.

Engineering Doctorate at UTM: An Example of an Industrial Doctorate Program Universiti Teknologi Malaysia (UTM) initiated Malaysia’s first Industrial Doctorate program by offering the Engineering Doctorate in Engineering Business Management (EngD EBM) in 1998. In addition to the normal academic admission requirement of having a master’s degree or a first class honours degree, the candidate also must be employed in the industry and have working experience in a supervisory or managerial position. Occasionally candidates with a bachelor’s degree may be accepted if their work experience can be translated into a Recognition of Prior Learning (RPL) master’s equivalent. The program has successfully graduated a total of 23 executives/engineers employed at 10 companies in industry. The EngD EBM program comprises a taught-course component and an industry-focused applied research project. The taught-course component includes six 3-credit courses that provide knowledge and skills in the EBM domain area. The doctoral candidate and the research project must be based in the industry. An agreement should be obtained from the top management of the industry or the cooperating company regarding their support and commitment to the project. The project must demonstrate innovation in the application of knowledge to solve a significant industrial problem, or to develop products or operations improvements. The work should also make a significant contribution to the performance of the company. The project must be approved by the supervisors and the cooperating company, with agreed objectives, deliverables and timescale, and regular monitoring against these targets. The academic rigor and standard of the EngD dissertation must meet the minimum standard of a doctoral dissertation.

Assessment of the Industrial Doctorate Program Versus a Doctor of Philosophy (PhD) Ever since its inception, there has been continual criticism of graduates from the EngD EBM program. It has been dubbed second-rate and of lower quality than the traditional PhD. The point of dispute is usually in the assessment of the thesis/dissertation. Whilst the traditional PhD is based fully on the assessment of the thesis, the EngD candidate is not assessed solely on the dissertation. The courses in the coursework component must be passed with a CGPA of 3.0 and a minimum grade of B- for each course. The candidate also has to compile a portfolio that comprises project technical reports, conference/journal publications, and end-of-module assignments. While the emphasis of the traditional PhD is on the contribution of new knowledge, the industrial doctorate places value on product innovation, process improvement or policy impact that increases national intellectual property (IP).

Issues in Implementing the Industrial Doctorate Program The biggest challenge remains changing the mindsets of all stakeholders involved in the process of graduating industrial doctorate candidates. Expectations have to be set appropriately and differentiated from those of the traditional Doctor of Philosophy program. The gap between industry and academia leads to difficulties in the relationships between the industrial and academic supervisors, and with the examiner. Each stakeholder must be clear what his or her respective roles and responsibilities are.

Roles of Government, Universities and Industry in Implementation The Malaysian Government has committed a budget of RM50 million to sponsor the costs for students to register for such programs at selected Malaysian public universities, to train the supervisors, and to cover the costs of supervising the candidates in industry. The industry has to identify suitable doctoral candidates and industry supervisors from its workforce, identify suitable projects and provide research grants to conduct the research projects. The university in turn must promote its expertise to the industry, initiate collaborations to increase knowledge generation, and catalyze innovation and industry competitiveness. Extensive supervisor and examiner training is thus needed to enable this realignment and change of mindset. Candidates must be encouraged to register for such programs; this can be carried out by the universities. With strong collaboration between the three
main stakeholders of industry, government and university, it is hoped that this Industrial Doctorate program will be a success.

The Practice and Exploration of Career Education for Graduates at Zhejiang University

Jianhua Yan Vice Provost of Zhejiang University and Executive Dean of Graduate School Zhejiang University, P. R. China

The training of graduates at Zhejiang University can be traced back to the 1930s. Today Zhejiang University has over 19,000 registered full-time PhD candidates and master’s students and about 10,000 part-time professional master’s degree students. Supported by its research and development bases, today’s graduate education at Zhejiang University is very comprehensive. As academics are central to our mission, we have paid much attention to training highly qualified young researchers through the system. However, by making full use of the advantages of our national science and technology park and collaborative enterprises, providing platforms for enhancing university-industry collaborations, and opening up more channels inside and outside the university, we have ensured that training is geared to the needs of society, oriented towards graduates’ career development, and aimed at improving graduates’ ability to plan their future careers. With the goal of training well-rounded graduates, Zhejiang University has always given priority to providing graduates with career education. What Zhejiang University has done in this regard can be illustrated as follows: First of all, Zhejiang University has devoted major efforts to exploring the model of integrating production, teaching and research, with the goal of cultivating excellent graduates who are capable of meeting the needs of companies. In terms of fostering prospective engineering talents, Zhejiang University has been dedicated to exploring a graduate training model that integrates production, teaching and research, based on cooperation with famous industries. These collaborations are geared to the needs of society and driven by major and key research projects, so as to efficiently cultivate innovative graduates who are also able to meet the requirements
of industries. Taking the discipline of computer science at Zhejiang University as an example, we have acted in line with practices in the same discipline at some world-famous universities. We have centered on engineering practices and, based on case studies of many major and key projects, ensured that the curriculum for graduates majoring in computer science at Zhejiang University boasts the advantages and distinctive features particular to this discipline; it is important to make sure that it is frontier-oriented and integrates fundamentals, technology, and application. As regards scientific and research bases, Zhejiang University has established strategic technology alliances with some international companies and several extramural research bases for its graduates. Therefore, the two trainers, namely our university itself and well-known companies, can provide our graduates with opportunities to be trained in the real working environment of world-famous companies and help them better integrate their theoretical research with the needs of the international business community. In terms of its supervisory system, Zhejiang University has both full-time supervisors, who are of overriding importance, and a number of part-time supervisors with rich working experience in famous companies at home and abroad. These supervisors help cultivate a large number of versatile and practical talents with international visions. The focus of the evaluation of graduate training quality has also shifted from the university alone to industry and society. The curriculum, graduates’ quality, the quality of theses, and graduates’ working capabilities are, to a large extent, measured by the extent to which they can meet the requirements made by companies for technological application and scientific and research development. In addition, Zhejiang University has built a network that connects graduates who have landed jobs with full-time supervisors so that they can receive useful information. This network is accessible to graduates still at Zhejiang University, graduates who have landed jobs, full-time and part-time supervisors, and staff in charge of vocational affairs at Zhejiang University. Second, supported by its national university science and technology parks, Zhejiang University has also been endeavoring to cultivate more innovative and entrepreneurial talents. The era in

which we are now living calls for innovation, and the future society will be in urgent need of innovative and entrepreneurial talents. Zhejiang University has always endeavored to construct an educational system for graduates' innovation, to provide graduates with as many suggestions, opportunities, and platforms for starting their own businesses as possible, and to create a real environment in which their academic knowledge can be used and tested.

Third, in order to improve graduates' capability for their future career development, Zhejiang University has made great efforts to provide more platforms for establishing and enhancing university-industry cooperation, to meet the standards of the marketplace, and to establish long-term cooperation with companies. Zhejiang University's practice in this regard is mainly embodied in two aspects. First, through organizing such activities as having graduates work temporarily in local governments, pay field visits to the western part of China (the zone that is relatively less developed in economic terms), take part in summer social experiences, and participate in the Medical Doctors' Volunteer Group, Zhejiang University has successfully helped its graduates integrate their social practice with their academic knowledge and career development, and helped them make an impact with their scientific and technological findings. This has helped them contribute their share to national development in addition to supporting their own growth and development. In addition, Zhejiang University is adept at taking full advantage of its wealthy alumni and social resources. For example, our university has conducted joint training programs for graduates with companies, invited top entrepreneurs and scientific experts to give lectures or work as part-time teachers at our university, and accepted scholarships established by companies. By such means our graduates are given the chance to gain precious working experience before their graduation and develop a better idea of their future careers. In the future, while taking society's needs for talent into consideration, Zhejiang University will continue to follow the principles that support students' growth and educational development. We will continuously explore graduate training mechanisms that ensure win-win situations for society, students, and companies, and


make unremitting efforts to reach the goal of fostering a large number of innovative talents for regional economic and social development.

Beyond Labs and Libraries: Building Better Pathways to the Workforce

Allison B. Sekuler
Associate Vice-President and Dean, School of Graduate Studies
McMaster University

A man is a success if he gets up in the morning and goes to bed at night and in between does what he wants to do. – Bob Dylan

Historically, graduate school was a place for aspiring academics. Graduate students honed research skills in their disciplinary fields, guided, as Doug Peers notes,1 by supervisors intent on "self-reproduction." Training proceeded via an apprentice model in which students were at the mercy of their supervisor's desire and ability to teach the skills relevant for success in research (and, occasionally, teaching). Increasingly, however, the goals, aspirations, and outcomes of our students are changing. Although the extent to which students take jobs inside or outside academia depends on multiple factors (e.g., degree level, discipline, gender), even in the most traditionally academically career-oriented programs an increasing number of graduates are entering the non-academic workforce. For example, based on a recent NSF survey of doctorate recipients, CGS's The Path Forward2 estimates that approximately 50% of US PhD graduates obtain jobs outside of academia. Similarly, a recent study of PhD students from the University of California system showed that only 36% of male and 27% of female students were interested in pursuing faculty positions at research institutions by the end of their graduate school careers.3 Part of the shift toward non-academic jobs is linked to the changing availability of tenured and tenure-stream jobs compared to part-time positions: from 1975 to 2007, the proportion of the former decreased from 56.8% to 31.2% in the US.4 The shift is also due to changes in the demographics of graduate students (increased numbers of older, minority, and women students, who traditionally are less likely to move into tenured or tenure-stream positions3,5); to increases in the availability of postgraduate-level non-academic positions; and

to an increased desire on the part of current students to have a greater impact in their communities and globally. Regardless of the cause, we, as educators, need to respect the desires of our students and facilitate the sort of career success they envision. A recent (2007) study of over 25,000 Canadian graduate students in Canada's most research-intensive universities6 showed that over 80% of all students were satisfied with the path they took in graduate school and would select the same field of study if they started their graduate/professional career again. More than 80% also rated the intellectual quality of their faculty as very good or excellent. However, only 53% gave the same positive rating for the relationship between their program content and their professional goals. Interestingly, almost 60% of respondents said they did not make use of career services at their university, and of those who did use the services, only about 35% rated them as very good or excellent. Given the growing disconnect between the traditional model of training graduate students and their increasingly diverse career aspirations, and the traditional focus of university career centres on undergraduates, these results may be unfortunate, but they should not be surprising. Regional and national organizations focusing on graduate education (e.g., CAGS and CGS) increasingly support the notion of complementing traditional elements of graduate education with professional development training programs. Common to these proposals is the idea that a core set of skills should be taught beyond labs and libraries: communication, leadership and management, creativity and entrepreneurship, teaching and knowledge translation, and ethics;2,7 these skills ideally should be taught within the framework of 21st-century fluencies, including digital, media, and global fluencies. The development and provision of such training would go a long way toward ensuring that our graduate students can progress as easily as possible from their Master's and PhD programs to their chosen career, whether inside or outside academia. Such training often faces a number of challenges, however. First, one would like to ensure that professional development training does not increase the time-to-completion for degrees, which in some programs can already be excessively long. Second, the development of programs requires

resources—time, money, and people—all in scarce supply. Third, universities must address the issue of their local cultures, in which the detailed curricula of graduate programs typically are controlled at the departmental level, where traditionally trained faculty may not perceive the value of professional development training, nor have the appropriate background to provide the training themselves. Finally, the changing demographics of graduate programs, including increased international representation in some fields, present both difficulties and opportunities, as graduate programs experience increased diversity in the cultural and linguistic backgrounds of students, as well as increased diversity in students' typical career paths.3,5 Fortunately, evolving best practices suggest how universities can overcome these challenges to provide students with the best possibility for career success in our knowledge-based economy. Some countries have developed national strategies to provide professional development training supporting a range of career pathways. For example, within Canada, MITACS (www.mitacs.ca)—an organization with both federal and provincial funding—supports a range of programs for over 50 Canadian universities. Its Step program provides workshops on topics such as networking, technical and scientific writing, presentation skills, time management, entrepreneurship, business etiquette, and intellectual property. MITACS partners with universities in presenting the workshops, with the latter providing space and the former covering the costs of the workshop facilitators, food, and registration. McMaster's experience with the Step program has been phenomenal: demand is so strong that registration fills to capacity within hours of posting, and the responses from our students have been universally positive (even inspiring graduate students to pen articles for our University's web site summarizing the benefits of the programs, and thus providing additional training in communication skills). These externally sponsored programs can be complemented by in-house programs: at McMaster, for example, we have provided additional training in teaching, research ethics, media communications, accessibility and disability, and even improvisational theatre. Other countries also have developed national approaches to providing


professional development training for postgraduate researchers, one of the best known being the UK's Vitae program. In addition to hosting regional and national training events, Vitae has a well-developed website that provides considerable information on career options, entrepreneurship, teaching, mentorship, and research skills (www.vitae.ac.uk). This site, along with other university-based sites,8 provides a wealth of information for graduate students, faculty, and administrators looking to implement training programs at their home institutions. CGS, CAGS and other national organizations could provide a tremendous service by hosting a clearinghouse for these and other useful resources. Providing students with live and online training is critical, and the lessons learned benefit students moving on to both academic and non-academic careers. Even more support may be required, however, for students interested in pursuing careers outside academia. Universities, regions, and countries need to develop systems to more easily connect students with leaders outside of academia to help them explore and fully understand career options beyond the traditional academic setting. To this end, in addition to its professional development programs, MITACS supports internship programs for graduate students (Accelerate) and postdoctoral fellows (Elevate). The Accelerate program provides students with a minimum of $10,000 toward a stipend and $5,000 to support research costs over a 4-month period—co-funded by MITACS and a private-sector partner—during which time the student spends about half of his or her time on-site with the private-sector partner (longer durations are possible with additional contributions from the partner). Students in the program strengthen links between private-sector partners and university researchers, gain critical insights into non-academic career paths, and often leverage their internships into permanent employment. The Elevate program follows a similar model at the postdoctoral level, providing up to $70,000/year for two years, during which the postdoctoral fellow works jointly with a university supervisor and a private-sector partner. In both cases, there are obvious benefits to the student, and universities gain a valuable source of funding that helps them to recruit and retain the best and brightest graduate students and postdoctoral

fellows. Increasingly, granting agencies are providing funding to connect students with private-sector partners, and to encourage and support a range of professional development training programs (e.g., NSERC's CREATE program; CIHR's STIHR program; the Ontario government's approach of requiring a youth outreach component in all of its research funding applications). Even in the absence of national or regional funding for such programs, individual institutions can increase the links between their graduate students and professionals in a range of non-academic careers. As noted before, although most university career centres focus primarily on undergraduates, partnerships can be developed to ensure that the unique needs of graduate students are also met through such established centres. Partnerships can also be developed with alumni offices to establish stronger connections with graduate alumni, who can serve as mentors and advisors to graduate students and who also can be valuable members of Graduate External Advisory Boards. The advisory board concept is most common in professional programs, such as MBAs, Engineering, or even PSM programs; however, it is equally useful in the context of Graduate Schools more broadly. Boards can comprise leaders from fields spanning the breadth of one's graduate offerings, including members who work in support of economic, social and cultural prosperity from the private sector, government, and non-profits. Board members can provide useful information about the skill sets that would benefit graduates in their respective fields, can serve as mentors and workshop leaders, and can be critical parts of networks connecting graduate students to a range of internship and employment opportunities. When board members are alumni, their contributions can be particularly helpful given their firsthand knowledge of the graduate context at your university; the opportunity to be part of the board provides them with another route to give back to their alma mater, and confirms that universities value more than just self-reproduced scholars. Members with international experience can help universities understand the needs of international students, and provide expertise on how students can gain the cultural awareness required to function in an increasingly global world. Of course, all of these approaches require support from the


professoriate. If supervisors do not see the value of professional development programs, or if they think of students moving into non-academic careers as failures, it will be difficult to implement the programs one needs to assure student career success. Quality Assurance (QA) processes can play an important role in this regard. In particular, QA processes should consider both academic and non-academic employment as equally valid measures of program quality, and QA processes should explicitly assess the role of professional development training provided to students. Including such measures in QA processes highlights the legitimacy of both professional development training and a range of career paths for students, and that legitimacy is critical for faculty support. Support also can be garnered by programs that provide some recognition for completion of professional development programs or for excellence in aspects of key skills beyond those traditionally learned in the labs and libraries. For example, many universities now offer certificates of achievement for completion of training sequences in teaching, entrepreneurship, or other professional development skills. Combined Master's programs in Entrepreneurship and Innovation are also increasingly common and desired by students. One also should not underestimate the impact of awards for communication skills, knowledge translation, and teaching. Regardless of the specific approaches taken, the leaders of graduate education have a responsibility not only to ensure the highest level of quality in our academic programming, but also to ensure students have access to the appropriate professional development and critical skills training to enable their success. We must recognize and respect the fact that students define success in different ways, and our goal should be to facilitate the full range of career success our students envision.

References

1. Peers, DM (2011). Measuring quality in international and national contexts: What are we measuring and why? (and for whom?) In Global


Perspectives on Measuring Quality: Proceedings of the 2010 Strategic Leaders Global Summit on Graduate Education. Washington, DC: CGS. See p. XX of this volume.

2. Wendler, C, Bridgeman, B, Cline, F, Millett, C, Rock, J, Bell, N, and McAllister, P (2010). The Path Forward: The Future of Graduate Education in the United States. Princeton, NJ: Educational Testing Service.

3. Goulden, M, Frasch, K, Mason, MA, and the Center for American Progress (2009). Staying Competitive: Patching America's Leaky Pipeline in the Sciences.

4. American Association of University Professors analysis of the US Department of Education, IPEDS Fall Staff Survey.

5. National Research Council (2006). To Recruit and Advance: Women Students and Faculty in Science and Technology. Washington, DC: National Academies Press.

6. Canadian Graduate and Professional Student Survey (2007).

7. Canadian Association for Graduate Studies (2008). Professional Skills Development for Graduate Students.

8. See, for example, www.gradresearch.unimelb.edu.au/programs; www.yale.edu/graduateschool; and www.grad.ubc.ca/current-students/gps-graduate-pathways-success.

VII. MEASUREMENTS WITHOUT BORDERS?

Summary of Presentations and Discussion

While earlier panels focused on the national and institutional settings in which the quality of graduate education is measured, the final session explored methods of assessing quality across national contexts. The challenges and necessary limitations of creating methods with broad international relevance were signaled with a question mark at the end of the panel's title, Measurements without Borders? As many participants observed in this and earlier panels, any effort to develop broadly applicable metrics and methods must allow for the variety of goals surrounding research and research training in different countries, as well as differences in national and institutional cultures. The advantage of addressing this topic at the end of the summit was that participants were able to consider possible methods for transnational quality assessment with a clearer sense of shared priorities and concerns. As there is currently no widely recognized framework for the assessment of graduate education globally, the topics that organized Panel 6 invited reflection on promising models as well as gaps that new models might seek to address:

• Tools and Methods for Assessing Quality Internationally: What methods and tools could be used to enhance quality assessments across different national (post)graduate systems? What kinds of national and international institutions and organizations would need to be involved in such an effort? What strong models exist for assessing quality across more than one national context?
• Promising Practices for Administering Quality Assessments: Which methods of administering (post)graduate quality


assessments could be adapted by universities in different national contexts? Are there "best practices" in this area that could be useful to all universities and university leaders?
• Assessing Quality in (Post)graduate International Collaborations: What methods currently exist for measuring the quality of international educational and research collaborations (joint and dual degree programs, research partnerships, and educational exchanges)? How might university leaders and other stakeholders (governments, funding agencies, etc.) work together to develop measures of success for universities and programs?

Panel 6 featured papers by delegates from Australia, France, Singapore, the U.S., and Vietnam, all of which drew from both local and international experiences.

Tools and Methods for Assessing Quality Internationally

The subpanel on methods and metrics examines both current and potential approaches for assessing quality internationally. The paper by Jean Chambaz (University Pierre et Marie Curie) describes the outcomes of efforts on the part of European associations of higher education and quality assurance, Ministers of Education, and European universities to develop principles for higher education quality assessment since the Bologna Process. Dr. Chambaz underscores the fact that European stakeholders have sought to respect the diversity of universities in the European Higher Education Area. Consistent with this principle, the paper emphasizes that quality is a "context-dependent" concept that demands guiding principles, not rigid standards, that can be adapted to the goals and needs of specific institutions in Europe. Several principles for making a concern for quality an integral part of every institutional setting, as well as some specific conditions for successful assessment of doctoral education, are included. The paper by Maxwell King (Monash University) considers potential challenges and opportunities for assessing research programs internationally, with attention to four specific areas of assessment. Dr.


King gives attention to an issue that emerged at a number of points during the summit: the challenges of comparing the relative efficiency of doctoral programs in different countries, given that the starting point for doctoral training varies widely by country and that programs include different components (coursework, supervised research only, etc.).

Promising Practices for Administering Quality Assessments

Best practices for administering quality assessment are as context-dependent as the metrics used, and yet many institutional contexts are already quite international. Tan Thiam Soon (National University of Singapore) explores some promising practices that have emerged in Singapore. While, as Dr. Tan notes, Singapore has a small number of graduate institutions, the fact that the country has always drawn on the expertise of different national higher education systems makes it a useful testing ground for global best practices. At the national level, Singapore solicits the recommendations of an international advisory panel in the assessment of its higher education system, and at the institutional level, the National University of Singapore actively engages with other national and regional associations that support graduate education. The growth in global dialogues on quality assessment suggests that graduate institutions everywhere may increasingly look to global expertise for local solutions to quality assessment issues.

Assessing Quality in (Post)graduate International Collaborations

International collaborations between graduate institutions can also serve as a testing ground for global best practices in quality assessment. At the same time, there are relatively few, if any, globally recognized methods for assessing the quality of collaborations, including joint and dual degree programs and other research collaborations between institutions in different countries. The paper by Le Thi Kim Anh (Ministry of Education, Vietnam) provides an example of a growing trend, the strategic promotion of international collaborations by many national higher education systems, and underlines some of the challenges that Vietnam has faced in working to assess the quality of the collaborations it has

supported. The agenda for Vietnam's Second Higher Education Project supports collaborations that enrich the training of Vietnamese graduate students and the research opportunities of institutions. Dr. Le Thi Kim Anh underlines the importance of respecting institutional diversity in assessing the success of various types of collaborations. For example, metrics of program quality and success are designed to fit the specific areas in which different universities seek to develop their research strengths. This principle is useful to both developing and long-established national education systems. In a paper addressing possible frameworks for assessing collaborations internationally, Andrew Comrie (University of Arizona) examines useful elements of existing quality assessment models, including some that have been used in other institutional management settings. Dr. Comrie provides a set of questions that institutions may ask in developing international collaborations with partners, as well as a rubric that includes sample quality metrics that have been adapted for use in academic settings. Areas of assessment include academic and institutional effectiveness, scale, participation, and faculty and student satisfaction. In the discussion, significant attention was given to the institutional processes that can be used to assure quality in international collaborations. Partnerships between universities often raise the issue of trust, many noted—especially trust regarding the quality of partner institutions and their own processes for assuring quality internally. And in educational exchanges such as joint and dual degree programs, differences in credit systems and degree structures raise questions about quality that must be carefully considered. A lack of knowledge about other countries' systems and their internal quality assessment processes may create barriers to collaboration that optimistic partners may find frustrating and counter-productive. Several solutions to these challenges were proposed. At the national level, some participants indicated that it would be helpful to promote quality agreements between countries. The discussions of quality assessment among European nations participating in the Bologna Process, and the activities of the Association of Southeast Asian Nations (ASEAN), will likely hold out instructive lessons in this area.


At the institutional level, many emphasized the value of promoting faculty-led collaborations. Speaking to this point, Ursula Lehmkuhl (Freie Universität Berlin) observed that it is through the process of developing research connections between faculty members that trust-building around quality issues takes place. As countries in different parts of the world seek to better understand and learn from each other's quality assessment systems, the trust needed for building institutional collaborations may become significantly easier to establish.

Conclusion

Institutional collaborations in graduate education provide excellent opportunities for universities to share best practices around the assessment of quality. Even if such partnerships begin with caution, a successful collaboration may help an institution reflect on its own quality assessment practices from a more distanced perspective. Developing internationally recognized best practices is a much more difficult task, however; future efforts in this area must take as a point of departure the distinct goals and missions of individual universities as well as differences in national contexts. The papers and discussions in Panel 6 set the stage for the final session of the Global Summit, in which participants discussed and agreed to a set of common principles for measuring quality in graduate education and research training.

Towards Assessing Quality Across National Contexts: A European Approach

Jean Chambaz
Vice President (Research)
Université Pierre et Marie Curie
European University Association—Council on Doctoral Education1

The European experience of the last decade emphasizes that quality assessment is not neutral and simultaneously serves several purposes, which can be contradictory: (i) compliance with standards, for accountability towards stakeholders and enhancement of international attractiveness; and (ii) quality improvement, embedded in all activities and fostering grassroots-driven creativity. Quality is highly context-dependent, and quantitative indicators should be balanced with qualitative measures. Rather than building universal matrices for quality, the approach of European universities favours the definition of common principles and guidelines to adapt to a specific situation. In this context, choosing a partner university should be based, more than on a ranking, on common understanding, shared goals and values, and agreed-upon quality principles. The question of quality emerged slowly as an important factor for the success of the Bologna process and received only cursory mention in the original Bologna Declaration. The issue of quality kept growing in importance, and the ministerial Berlin Communiqué (2003) marked a major turning point by stating that "consistent with the principle of institutional autonomy, the primary responsibility for quality assurance in higher education lies with each institution itself." Since then, EUA has taken the lead in developing the capacity of higher education institutions to create internal quality processes, through various projects such as Quality Culture, Creativity in Higher Education, and Quality Assurance for the Higher Education Change Agenda. EUA also published Guidelines for Quality Enhancement in European Joint Master Programmes, and contributed to the Standards and Guidelines for Quality Assurance in the European Higher Education Area (ESG), which were developed by the E4 group (ENQA, EUA,

EURASHE and ESU)2 and adopted by the ministers of education in Bergen in 2005.

Quality is a Relative Concept

When speaking of quality, it is easy to revert back to such managerial concepts as quality control, quality mechanisms, quality management, etc. These concepts, however, are not neutral. They convey a technocratic and top-down approach that will only backfire in academic settings. Therefore, the term "quality culture" was chosen to convey a connotation of quality as a shared value and a collective responsibility for all members of an institution, including students and administrative staff. This underlines the importance of addressing the balance between the requirement to have quality assurance processes as tools for institutional governance and external accountability and the need to ensure creativity and innovative practices in higher education. Quality culture can serve to improve institutions, while external evaluation procedures can serve to provide the required accountability to the public. The challenge at the European level is to create a European higher education area that combines diversity across—and within—forty-five countries while adhering to unifying principles and values. The challenge for higher education institutions is to take an active role in order to ensure that academic (rather than bureaucratic) principles and values are respected and the convergence process is correctly implemented, in a way that benefits universities and their stakeholders. Working on diversity emphasizes that quality is a relative concept. This is a key finding that is increasingly important to consider in the context of the increased diversification of higher education institutions across Europe. It has implications for the ways in which external quality assurance needs to be carried out. While an approach based on standards could lead to external quality assurance procedures that ensure compliance with standards, a fitness-for-purpose approach generally implies an improvement orientation: quality assurance must take as its point of departure the mission and objectives of a specific institution and recommend

improvement in order to achieve the set goals. The different characterizations of quality split roughly into two bundles: approaches that focus on the quality of outputs vs. approaches that focus on the quality of processes in developing, implementing and improving institutional activities. In the outputs perspective, institutions examine the outcomes of university activity, such as teaching and research, and the extent to which set goals are achieved. Thus quality as outputs is associated with definitions of quality as excellence, fitness for purpose, "customer" satisfaction or effectiveness. In the process perspective, institutions examine the activities that lead to the desired outcomes, such as governance structures, decision-making processes or administrative procedures. Quality as a process is thus associated with values, internal processes and effectiveness. Obviously, it is important to look at input, output and process in order to get a full picture of an institution's position. Thus, there is broad consensus within EUA that if quantitative indicators are used (for the measurement of inputs/outputs), these must be balanced with qualitative measures (process). This helps to put the former into their appropriate context and to understand their meaning.

Quality is Highly Context-Dependent

Across national borders, differences in understanding and usage immediately arise. First, QA procedures, and even how QA is defined, are very diverse, though the European Standards and Guidelines (ESG) provide a common European ground for quality assurance at the institutional and national levels. Furthermore, the implementation of the Bologna process takes diverse routes in the different national or regional contexts, according to the historical and cultural background, the legal constraints, and the institutional organisation. In the 3yr-bachelor/2yr-master/3-4yr doctorate model, the content and learning outcomes of each level are different from those of the undergraduate/postgraduate model. As an example, most of the research-oriented taught courses given in the first year(s) of a postgraduate programme are given at the master level in the former model. Accordingly, specific indicators should differ in the two models.


Subtle linguistic issues with regard to technical terms also have to be taken into account. As the Bologna process developed, different meanings of technical terms have emerged, and attempts to reach a single definition of any particular term in a common glossary have so far failed to make any impact. A functional approach based on policy objectives and practical outcomes is likely to be most successful. Terminology is simply the medium between policy and outcomes, and as long as outcomes are common and agreed, terminology does not pose any barriers.

European Recommendations

Key success factors for a well-functioning internal quality assurance system identified by EUA's Quality Culture project were strategic planning, appropriate organisational structures for quality assurance, commitment of the institution's senior leadership, engagement of the staff and students, involvement of external stakeholders, and well-organised data collection and analysis. This list per se demonstrates that QA should not be considered a separate activity carried out by specific person(s); rather, a concern for quality should permeate and be embedded in all activities of the institution and be the responsibility of each and every one. Three conditions ensure that evaluation procedures support and enhance quality culture:

• Integrate evaluations into a broader process of quality management and development. This is very important in order to avoid reducing evaluations to mere bureaucratic procedures aimed at compiling reports and numbers.
• Design evaluations in such a way as to discourage mere compliance with evaluation criteria and indicators and instead encourage adherence to the spirit of quality that grounds the indicators. Compliance with indicators will be detrimental to quality in the long run.
• Implement follow-up procedures and consequences linked to the outcomes of the evaluation. If there are no consequences to the evaluations—which usually require an effort by all


individuals involved—staff and students will lose interest in these procedures and will not support them.

In terms of quality, the first step is to find reliable quantitative and qualitative indicators for measuring quality in the institution. When thinking of specific indicators and information, however, it should be kept in mind that they do not always represent absolute measures. Their interpretation and weight might differ according to the institutional mission or the social context, but also in relation to subjects and knowledge areas. For instance, student success rates can be interpreted differently depending on the institution's catchment area. At the same time, it is obvious that indicators like third-party funds play a more important role in defining indicators for some disciplines than for others.

Specificity at the Doctoral Level

The Salzburg II initiative launched by EUA-CDE in 2010 aimed to assess the Salzburg Principles in light of the ongoing process of active implementation by European universities. The resulting recommendations affirm the validity of the Principles and cement the basis of the doctorate in the practice of an original research project, which makes it different by nature from the first and second cycles. As a consequence, the format and assessment tools developed for cohorts of students in the first two cycles (taught elements, credit systems) are not appropriate for the individual journey of doctoral education. It is, on the contrary, essential to create a supportive and inclusive environment ensuring a critical mass and diversity of research, and to achieve flexible structures that develop creativity and autonomy, meet individual needs, and build capacities for responsibility. Accordingly, it is necessary to develop specific systems for quality assurance in doctoral education based on the diverse institutional missions and, crucially, linked to the institutional research strategy. For this reason, there is a strong link between assessment of the institution's research and the assessment of the research environments that form the basis of doctoral education. Assessment of the academic

quality of doctoral education should be based on peer review and be sensitive to disciplinary differences. In order to be accountable for the quality of doctoral programmes, institutions should develop indicators based on institutional priorities such as individual progression, net research time, career tracking and dissemination of research results for early-stage researchers, taking into consideration the professional development of the researcher as well as the progress of the research project. So far, no overall study has been made of how HEIs across the 46 Bologna signatories have actually responded to the standards identified by the ESG. In October 2009, EUA launched a new project, Examining Quality Culture in Higher Education Institutions (EQC), which tackles this question and aims at providing HEIs, policy makers, and other stakeholders, for the first time, with an overall picture of the internal quality assurance processes actually in place within HEIs. The results, released in 2011, will be useful for further development of the quality of European higher education.

1 Council for Doctoral Education, European University Association, http://www.eua.be

2 ENQA: European Association for Quality Assurance in Higher Education; EURASHE: European Association of Institutions in Higher Education; ESU: European Students' Union.

Tools and Methods for Assessing Quality Internationally

Maxwell King
Chair, Go8 Deans of Graduate Studies
Pro Vice-Chancellor, Research and Research Training
Monash University

Introduction

Measuring the quality of postgraduate research programs across international borders is very difficult because systems differ from country to country. The first question that needs canvassing is: the quality of what? Possible candidates include

1. The outputs of the research
2. The graduate
3. The postgraduate research experience
4. The efficiency of the program.

I will discuss each of these in turn.

The Outputs of the Research

The most obvious output is the thesis or dissertation. In theory one could have many discipline-specific panels of experts read selected samples of theses and give quality scores. Unfortunately this is not very practical. A more realistic suggestion is to look at what is published from the thesis. How this would be measured would need to differ from discipline to discipline. For example, a simple count of the number of ISI journal articles published as the result of postgraduate research is likely to favour the STEM disciplines but not arts and humanities students. I suspect about 20 different discipline clusters would be needed, and getting consensus would be no easy task, although work done on building national research quality systems, such as ERA in the case of Australia, does give a helpful starting point. There is also the issue of the language in which the thesis is written. When it is not English, the program would be disadvantaged by this measure.
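One way to make the discipline problem concrete is to normalise each thesis's publication count against the average for its discipline cluster, so that a humanities thesis is judged against humanities publishing norms rather than STEM ones. The following is a minimal sketch of that idea, not a proposal from the paper itself: the record fields (cluster, pubs) and the data are illustrative assumptions, and both the assignment of theses to clusters and any weighting for journal quality are left open.

```python
from collections import defaultdict

def normalized_pub_scores(theses):
    """Score each thesis's publication count relative to the mean for
    its discipline cluster, so that STEM and humanities theses are each
    judged against their own field's publishing norms."""
    # Group publication counts by discipline cluster.
    by_cluster = defaultdict(list)
    for t in theses:
        by_cluster[t["cluster"]].append(t["pubs"])

    # Mean publications per cluster: the within-discipline baseline.
    cluster_mean = {c: sum(v) / len(v) for c, v in by_cluster.items()}

    # A score of 1.0 means "typical output for the discipline".
    return [
        dict(t, score=(t["pubs"] / cluster_mean[t["cluster"]]
                       if cluster_mean[t["cluster"]] > 0 else 0.0))
        for t in theses
    ]

# Illustrative data only, using two of the ~20 clusters King envisages.
theses = [
    {"id": "t1", "cluster": "STEM", "pubs": 4},
    {"id": "t2", "cluster": "STEM", "pubs": 2},
    {"id": "t3", "cluster": "humanities", "pubs": 1},
    {"id": "t4", "cluster": "humanities", "pubs": 0},
]
for t in normalized_pub_scores(theses):
    print(t["id"], round(t["score"], 2))  # t1 1.33, t2 0.67, t3 2.0, t4 0.0
```

Because the scores are relative to each cluster's own mean, they are comparable across clusters in a way that raw counts are not, which is the point of King's suggestion.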


The Graduate

Ideally one would like to be able to measure the discipline knowledge, the problem-solving skills, and the general research and communication skills of each graduate. Sadly this ideal is impossible to achieve. A proxy that is often used is: where do the graduates get employed? Unfortunately this typically has to be a country- and discipline-specific measure for a number of reasons, although it might work for some collections of countries. The measure does tend to favour males over females, particularly married females. Another measure might be an assessment of their CV. An obvious element that is typically taken into account when hiring decisions are being made is the number and quality of research publications. So in general, an assessment of the graduate might be done partly through an assessment of the outputs of the research. Finally, the speed at which a student completes the research component of their degree might be influenced by the level of their discipline knowledge, problem-solving skills, and general research and communication skills. Potentially this gives us another proxy, although it is probably best assessed relative to average completion times within the discipline, and there may be some country-specific practices that should be taken into account when making international comparisons.

The Postgraduate Research Experience

This may be possible to assess, indirectly, by looking at everything that the student has the opportunity to be exposed to. It would involve looking at the quality of, amongst other things,

• Supervisors/mentors
• Other members of the academic unit
• Infrastructure
• Seminars/workshops
• Statistical/IT/language support
• Library support
• Opportunities for cross-discipline/cross-institution interaction
• Opportunities for conference attendance.


Again, discipline variation might need to be taken into account. This could possibly be assessed by a detailed questionnaire that seeks information on the above, the results of which would need to be interpreted by an expert panel. There is also the possibility of asking the students directly (through a survey) what they think. This potentially suffers from the problem of students only being able to relate to their own experience, with many not knowing what is best practice.

The Efficiency of the Program

I believe this is best measured by completion rates and average completion times (time-to-degree). One would think that these are things that could easily be measured across borders. Unfortunately there are difficulties that need to be addressed. For these measures to be comparable, definitions of what is being measured need to be clear. Because programs vary from country to country, there is a need to be clear about what a comparable starting point is. Some programs start straight into the research, and their entry criteria reflect this. Others start with a preliminary/comprehensive coursework component whose completion may be regarded as part of entry. I believe the best starting point for measuring completion times and rates is when a student is first able to begin their research. An individual student has a completion rate of either zero or one: zero if they haven't completed and one if they have. These can be averaged over a cohort of students to get a cohort completion rate—the proportion of students who have completed. It gives the probability that a student randomly selected from the cohort has completed. Completion rates depend on the cohort selected and the time at which they are measured—they are time-dependent. Average completion times for a given completing cohort should be the average full-time-equivalent time from start to submission of the thesis/dissertation. Intermission time should be netted out. This time is typically tracked by the institution's student system. Again, both completion rates and times will vary by discipline, and completion rates at particular points in time may depend on the mix of full-time and part-time students.
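As a worked illustration of these two definitions, the sketch below computes a cohort completion rate at a given observation time and the average full-time-equivalent completion time over completers only. It is a minimal sketch under stated assumptions rather than an actual student-system report: the record layout is hypothetical, times are FTE years measured from when the student first begins research, and intermission time is assumed to have been netted out upstream.

```python
def cohort_completion_stats(cohort, at_year):
    """Cohort completion rate and average FTE time-to-degree, evaluated
    at a given observation time (completion rates are time-dependent)."""
    # A student counts as completed only if the thesis was submitted
    # within the observation window.
    completed = [s["submitted"] for s in cohort
                 if s["submitted"] is not None and s["submitted"] <= at_year]

    # Completion rate: proportion of the cohort completed so far, i.e.
    # the probability that a randomly selected member has completed.
    rate = len(completed) / len(cohort)

    # Average completion time over completers only, in FTE years from
    # the start of research (intermission assumed already netted out).
    avg_time = sum(completed) / len(completed) if completed else None
    return rate, avg_time

# Illustrative cohort: FTE years from start of research to thesis
# submission; None means not yet completed.
cohort = [{"submitted": 3.5}, {"submitted": 4.0},
          {"submitted": None}, {"submitted": 5.5}]

print(cohort_completion_stats(cohort, at_year=4.0))  # (0.5, 3.75)
print(cohort_completion_stats(cohort, at_year=6.0))  # (0.75, 4.33...)
```

Run at two different observation times, the same cohort yields two different rates, which makes concrete the point that completion rates are time-dependent and must be compared at matched observation points across countries.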


Conclusions

Clearly it is difficult to assess quality across national contexts. Suggestions include looking at publications resulting from postgraduate research, assessing what students are exposed to in the course of their study, and looking at completion rates and average completion times measured from when the student first starts their research. Any comparisons need to account appropriately for discipline differences.


Promising Practices for Administering Quality Assessments

Tan Thiam Soon
Vice Provost (Education)
National University of Singapore

Currently, in Singapore, there are only three publicly funded universities offering degree programmes, namely the National University of Singapore (NUS), Nanyang Technological University (NTU) and Singapore Management University (SMU). Of these, only two (NUS and NTU) offer graduate research degree programmes in science, technology and the humanities. This means there is a very small ecosystem of universities, and thus there is no Singapore-based internal accreditation or assessment system for graduate programmes. To ensure the quality of programmes, Singapore has to rely on a mixture of local and international practices to administer quality assessments of graduate educational programmes in the universities, in particular research-based programmes. Thus, the following comments on promising practices for administering quality assessments are based on Singapore's context, using NUS's experience. Yet, precisely because of its smallness and, consequently, its extensive use of international practices, some of Singapore's experiences could be relevant to the main topic of this session, that is, moving towards assessing quality across national contexts. In Singapore's context, the key players administering quality assessments for graduate education are the Ministry of Education (MOE); the university itself, with its own internal quality assurance processes, and research funding agencies; and other stakeholders, like key employers of our graduates, imposing indirect quality control. This is then supplemented by extensive use of international review teams to help in quality assurance processes. As part of the reform of the university sector in 2006, which allowed the publicly funded universities to become autonomous while receiving substantial public funding, Singapore's MOE introduced the Quality Assurance Framework for Universities (QAFU), which involves the following steps:

a. The university writes a self-assessment report based on achievement of its own previously stated objectives and targets in agreement with MOE;
b. The report is reviewed by an external panel appointed by MOE with representatives of local stakeholders and international experts;
c. On-site validation of the report is conducted;
d. The panel's report is produced for MOE with recommended and highly recommended actions;
e. The next performance agreement is drafted, setting objectives and targets;
f. Annual review of objectives and targets is performed; and
g. Funding is based on the attainment of goals and objectives that have been preset.

An important aspect of the review is the quality of the research-based educational programmes. Quantitative metrics are examined by the panel, and there are also onsite meetings with graduate students, with an insistence that these comprise good, average and poor students. The panel will usually include a number of very senior colleagues from well-known overseas universities. As the university ecosystem in Singapore is very small, to supplement this there is also a strong reliance on external help through regular visits by visiting committees (at the department and faculty level), external review panels (at the university level), and the international academic advisory panel (at the national level). In all these reviews, graduate research programmes are always one of the priority areas to be reviewed. At NUS, due to the need to provide an annual review of objectives and targets through the submission of the Post-graduate Research Report on Quality and also an annual Performance Review Forum with the Ministry of Education, there are in place internal processes for quality assurance administered through the Board of Graduate Studies, chaired by an Associate Provost for Graduate Education. One unique aspect of Singapore's system is that most of the cost of

supporting full-time research students is covered by the Ministry of Education through a block grant, and at NUS, this is administered by the Associate Provost for Graduate Education, who reports to the Vice-Provost for Education. This provides the muscle for the Office of the Provost to implement quality assurance processes in the university. Key internal quality assurance processes have now been put in place in areas such as admissions requirements, supervisor selection, course requirements, thesis committees and qualifying exams, and these processes are tightly linked to the availability of scholarships. More recently, there has been a move towards using other quantitative metrics to complement these indicators, including publication records, quality of placements, employer feedback and students' exit interviews. There is also an increasing impetus to tap other national and regional associations supporting graduate education to obtain access to best practices. The ones that NUS has most commonly relied on are the UK Council for Graduate Education (UKCGE), the Council of Graduate Schools (CGS), which includes members in the U.S., Canada, and internationally, and the European Association for Quality Assurance in Higher Education (ENQA). Singapore is simply too small to set up its own council, and because of the diversity of roles and standards across educational institutions in Asia, there isn't currently an Asian association. This is an interesting area that ought to be explored further. An increasingly important form of implicit international "accreditation" derives from the growth in the number of joint graduate research degree programmes with external partners. Currently, NUS has agreements with a total of 11 overseas partners for the joint PhD, and these include universities like the Australian National University, Imperial College and King's College London, to mention just a few. In all these programmes, there is a need for consistency in admission standards as well as some degree of reliability in the internal assessment processes, like course requirements, qualifying examinations, and thesis advising and supervision. Often, as a result of negotiation with a particular partner, internal processes need to be reviewed, and in that process, new best practices can be adopted. One issue confronting the university in carrying out its

quality assurance of graduate research and education is the existence of different national and institutional agendas for graduate education (graduate manpower, research talent and output, etc.). This suggests an increasing need for inter-agency collaboration on quality assessment, not a trivial task. Based on NUS's experience to date and its interaction with external partners, especially in developing joint degree programmes, some of the practices we have found very useful and also somewhat universal (allowing for crossing national contexts) are the following:

(i) Strong internal quality assurance processes;
(ii) Use of quantitative metrics supplemented by qualitative surveys;
(iii) Use of external review panels—a process which can tap regional and international graduate educational associations or alliances; and
(iv) Establishment of joint degree research-based programmes.


Assessing Quality in Postgraduate International Collaborations

Kim Anh Le Thi
Higher Education Project
Ministry of Education, Vietnam

The Context of Vietnam: Research and Postgraduate Education Achievements1

Vietnam has made impressive recent progress in terms of its socio-economic development. It is steadily escaping poverty, and it is progressively joining the ranks of middle-income countries. Its higher education system has been expanding rapidly, with participation reaching 135 students per 10,000 people in 2009. What was once an elite higher education system is quickly being replaced by a mass higher education system. At the same time, however, the quality of the higher education system remains low, and certainly much lower than in a number of Vietnam's 11 neighboring countries in East Asia (EA). The quality of research in the higher education system is particularly low. Vietnam's national research capacity, as measured by the number of international publications per million people, has increased at a rate of 16 percent per annum over the past five years, which is as high as the growth rate in Thailand or Malaysia. In terms of the total number of publications, however, Thailand and Malaysia recorded respectively 6.5 and 9.5 times more publications. In terms of the average number of citations, Vietnam ranked fifth among 11 EA countries, but citations of Vietnamese authors accounted for only one-half of the average number of citations, and this indicator, which did not improve from 2004 to 2008, shows Vietnam to be the third-lowest performing country in EA. The research capacity of Vietnam's universities is substantially lower than that of Thailand's. International publications by Thai universities account for 95 percent of the nation's total, while for Vietnam, the comparable rate in 2008 was only 55 percent. Vietnam's top-performing source of publications was the Vietnam Institute of

Sciences and Technology, which produced 156 publications. By comparison, the top-performing source in Thailand produced 869 publications.

Government Initiatives to Improve International Collaboration in Postgraduate Education

Vietnam's strategic goals for its higher education system are presented in a Higher Education Reform Agenda (HERA).2 HERA provides a forward-looking strategy for the development of higher education and research. Regarding postgraduate and research quality, HERA sets the following goals: "the introduction or reinforcement of research in universities in order better to train the future new teachers, [...] to upgrade the quality level and international visibility of Vietnamese universities," [...] to increase the proportion of university teaching staff with master's degrees to 40 percent by 2010 and to 60 percent by 2020, and to increase the proportion of university teaching staff with doctoral-level degrees to 25 percent by 2010 and to 35 percent by 2020. Progress is slow, however: by 2009, the proportion of academic staff members with a PhD qualification was only 10 percent—or about the same level as in 1987. The Government has approved a project on PhD education (Decision no. 911/QD-TTg, dated 17 June 2010) that seeks to attain degree standards comparable with those of other universities in the SEA region, including fluency in one foreign language, and to achieve 20,000 additional PhD graduates by 2020. Most of these additional PhD graduates are to be trained at foreign universities, but 3,000 of them are expected to result from collaborative programs involving Vietnamese and foreign partner universities. The Government has emphasized the importance of: "Reinforcement of international collaboration in PhD education, expanding the sustainable linkages with overseas prestige universities, meeting the targets of full-time and part-time PhD education."3

The Second Higher Education Project's Agenda on International Collaborations in Research and Postgraduate Education4

Among the many programs and projects currently run by Vietnam's Ministry of Education and Training (MOET), the Second Higher Education Project (HEP 2) acts as the funding agency for Teaching Research and Innovation grants (TRIGs) to key universities. These grants are intended to help upgrade the quality and increase the international visibility of Vietnam's universities. Fostering international collaboration is one means by which these universities can be assisted to achieve their goals in postgraduate education.

Through the TRIGs, HEP 2 is seeking to achieve an increased number of:

1. specialized labs at each university;
2. preparation and support rooms for use at each lab;
3. PhD and Master's students sent abroad to undertake full-time study;
4. national PhD and Master's students sent abroad for internships of at least 3 months;
5. collaborative research projects undertaken by inviting international scholars to Vietnam or by sending Vietnamese scholars abroad;
6. research projects undertaken jointly with academics from other national universities and research institutes;
7. international publications (including jointly with international colleagues);
8. papers presented at international conferences; and
9. national publications.

The amount of the Government’s three-year block grant to its key universities, which must also provide for research and postgraduate education, is very limited. Additional investment in specialized labs is, therefore, critical to the enhancement of research capacity. Grants made by HEP 2 as TRIGs allow key universities to purchase modern specialized equipment according to their needs. Examples include: a NANO Lab at Vietnam National University in Hanoi; agriculture and biology equipment for the study of national resources management at the Agriculture University in Hanoi; a biotechnology laboratory at

204 Global Perspectives on Measuring Quality Kim Anh Le Thi the Hanoi University of Technology; a research center for training in cardiovascular intervention and endoscopic intervention at the University of Medicine in Hanoi. The TRIGs have also provided for the purchase of equipment for use by , researchers and postgraduate students. The total investment in labs has accounted for 35 percent of total HEP 2 funds. HEP 2 understands that quality of postgraduate education at Vietnam universities is low, and so it has encouraged universities to send their young researchers and lecturers abroad for PhD and Master’s degrees to be completed at recognized universities. This measure will not only provide for Vietnam’s need for highly educated academic staff members but it will also provide a basis for long- term collaborative relationships. It is hoped that these graduates will also help the Vietnamese universities concerned to participate more effectively in global knowledge networks. In the case of Cantho University, for example, 20 PhD students, 33 Master’s students and 100 internships have been funded; while at Danang University, funds have been provided to enable 9 students to go abroad to undertake a PhD, and 21 students to go abroad to undertake a Master’s degree. A total of 64 PhD and 157 Master’s students have been supported so far. HEP 2 has encouraged universities to build national and international academic teams in research and postgraduate education by inviting international scholars to work at Vietnamese universities and by sending lecturers and researchers for internships and fellowships overseas. Hanoi University of Education, Danang University, and the Vietnam National University in Hanoi are examples of universities that have been very successful in this regard. There are now thousands of overseas visits by Vietnamese lecturers and researchers, and there are hundreds of visits to Vietnam by international scholars – all funded by HEP 2. Most importantly, HEP 2 has insisted that each university receiving TRIGs should be committed to increasing the number of its national and international publications. HEP 2 generously funds lecturers and researchers to attend international conferences if they are presenting research findings with a view to getting the findings reported in a publication at a later date.


These and related measures are negotiated and agreed upon between HEP 2 and TRIG recipients. Reaching agreement is not easy, even though both HEP 2 and the universities understand that these measures are important to the universities' future development. Different negotiating strategies have been employed, according to each university's unique strengths as stated in its vision and strategic development plans. For universities whose strengths lie in economics and the social sciences, HEP 2 has encouraged international collaboration in the form of presentations at international conferences rather than international publications. It is difficult to persuade universities to supplement TRIG funds in areas of research strength because, in many cases, they prefer to allocate their own scarce funds to building up areas of research weakness. Reporting requirements are strictly enforced to ensure that stated targets are met: universities report quarterly to HEP 2, and HEP 2 reports quarterly to MOET and the World Bank.

Concluding Remarks

Vietnam has a long way to go in building centers of research excellence that are equipped with modern labs and able to attract international scholars on an equal basis. HEP 2 is, however, a driving force in building research infrastructure and supplying trained researchers. For quite some time, Vietnam will depend on sending its best scholars abroad to develop the advanced research knowledge and skills required for Vietnam to be a recognized participant in global knowledge networks. Ultimately, however, Vietnam hopes to see a time when significant numbers of students come from abroad to study in Vietnam. Critical to this development will be increasing the number of international publications from Vietnam indexed in the ISI citation index.


1 Data in this paragraph are taken from Hien, P.D. "A comparative study of research capabilities of East Asian countries and implications for Vietnam." Higher Education, 60 (6), 2010.

2 Government Resolution No. 14/2005/NQ-CP, dated November 2, 2005, on substantial and comprehensive renewal of Vietnam's tertiary education in the 2006-2020 period (Nghị quyết về Đổi mới Cơ bản và Toàn diện Giáo dục Đại học Việt Nam Giai đoạn 2006-2020).

3 Project on PhD education (Decision no 911/QD-TTg dated 17 June 2010).

4 Data in this paragraph are taken from HEP 2’s report (2009).


A Framework for Assessing Quality in (Post) Graduate International Collaborations

Andrew C. Comrie
Dean of the Graduate College
University of Arizona

Dianne D. Horgan
Associate Dean of the Graduate College
University of Arizona

There are currently few, if any, widely recognized methods for measuring the quality of international educational and research collaborations. At the (post)graduate level these endeavors include joint and dual degree programs, research partnerships, and educational exchanges. University (post)graduate leaders and funding stakeholders must work together to assess the success of collaborative programs. Carefully articulated objectives will help them define quality in context and enable the selection of suitable measures for evaluation and management. We present a four-part framework for quality assessment of (post)graduate collaborations.

Asking the Right Questions

Quality is notoriously difficult to measure, but any good assessment depends on a carefully articulated objective. "Quality" will be defined differently depending on what universities and their graduate schools hope to achieve with their international collaborations. Naturally, there are significant strategic and practical questions to address before entering into an international collaboration, some of which will determine quality goals and measures, e.g.,

• How will this collaboration improve our students' academic experience?
• Will this collaboration enhance our research productivity?
• Does this collaboration help us reach our strategic goals?
• For public universities with a mission to serve the state, how will this collaboration help?
• What outcomes do we want?
• Does this collaboration fill gaps in our curriculum or research?
• Will this collaboration enhance our reputation and public standing?
• What level of investment is necessary?
• What is the value added by the collaboration (for students, faculty, and the bottom line)?

Partnerships are inherently bilateral or multilateral, and each partner will be asking these questions. The right partner may not be the highest-ranked university but one that is closer or complementary, and partners may bring more than the academic quality of their institution and students. Will the resulting relationship be approximately balanced? With unequal partnerships, it is important to consider (and assess) potential underlying agendas (e.g., an emphasis on profit, or short-term capacity building), particularly as the quality of the projected relationship will change over time. Due diligence is also critical: legal issues, government roles, business models, and institutional accreditation and reputation all must be evaluated. If the answers to these questions and the interests of each partner align, then a clear basis for a collaborative program exists. The parties may enter into an agreement that clearly defines a set of objectives, which can be used first to build and later to assess the quality of the collaboration.

Assessing the Collaboration

Quality Assessment, as an approach or framework, evolved from a long history of quality control and quality assurance with roots in the industrial revolution and manufacturing. In higher education, quality assessment has focused overwhelmingly on teaching and learning effectiveness, with additional emphases on online learning and on various levels of program and institutional accreditation and assessment. Internationally, major efforts are underway to assess higher education learning outcomes (e.g., the OECD AHELO project),1 but there are few if any tools or methods by which to assess (post)graduate international collaborations of the kind under discussion here. Given the institutional settings for these collaborations, it is relevant to consider the concept of Company Quality, an approach that emerged in the 1980s as part of quality assessment and that recognizes four key aspects at the management or administrative level:2

• "Elements such as controls, job management, adequate processes, performance and integrity criteria and identification of records;
• Competence such as knowledge, skills, experience, qualifications;
• Soft elements, such as personnel integrity, confidence, organizational culture, motivation, team spirit and quality relationships;
• Infrastructure (as it enhances or limits functionality)."

In the academic context, if there are shortcomings in any of these overall aspects, then the quality of student or faculty collaborative experiences may be compromised. The parallels in the (post)graduate international collaboration arena are easy to see, but we need appropriate methods to measure and assess these aspects in order to improve quality. To do so, we can integrate existing methods and models from other realms of higher education. Quality assessment in higher education at the program and institutional level in North America and Western Europe has been found to have five common elements:3

• Meta-level management agency (in the U.S., this might be a regional accreditation body); here it would be the graduate schools/universities managing the collaboration;
• Self-evaluation (self-study, self-assessment), which is crucial for acceptance by academics and includes internal (staff, students) and external actors (alumni, employers);
• Peer review, typically including site visits by external experts that incorporate on-site interviews with all stakeholders;
• Reporting of review results/experience. This should not be a judgment or ranking but rather a guide to help improve quality. It is thus crucial to have a process for comments and possible counter-arguments to the report details;
• Feedback to management and/or funding agencies, which should be holistic in their interpretation and not formula-based.

Overall, it is critically important that such assessments be responsive to internal and external constituencies. For collaborations, it would be good practice to integrate ongoing evaluation in many of the same ways that our institutions conduct the regular academic reviews described above, adapted accordingly. For collaborative degree programs, standard measures such as completion rates, time to degree, and placement can be used. For non-degree collaborations, other satisfaction metrics can be obtained. Internal and external review teams can be engaged, depending on the scope of the collaborative program.

Among scholars examining online learning technologies it has become widely accepted that "quality in higher education is the degree to which stakeholders' needs and expectations are consistently satisfied."4,5,6 The higher education online learning community has produced an impressive range of practical and high-level quality assessment tools. Among these resources are the Sloan Consortium's initial7 and many subsequent reports, along with its effective practices website.8 This group has a useful quality framework9 that we have modified and adapted. We present this international (post)graduate collaboration quality framework in the two-part table below.


INTERNATIONAL (POST)GRADUATE COLLABORATION QUALITY FRAMEWORK, Part 1

ACADEMIC & INSTITUTIONAL EFFECTIVENESS
Goal: The partners demonstrate that the collaboration outcomes meet or exceed expectations for all stakeholders.
Process/Practice: Academic (i.e., teaching and research) content and control reside with faculty in the same way as at the partner institutions.
Sample Metric: Faculty and/or student perception surveys or sampled interviews measure satisfaction; stakeholder focus groups or interviews measure gains from the collaboration.
Progress Indices: Stakeholders report that the collaboration is valuable; direct assessment of the collaboration enumerates its benefits.

SCALE (COST EFFECTIVENESS AND COMMITMENT)
Goal: The partners continuously improve collaboration elements while minimizing costs.
Process/Practice: Partners demonstrate financial and administrative commitment to the collaboration; effective elements are identified and implemented.
Sample Metric: Institutional and faculty/student stakeholders show support for participation in the collaboration; operating costs provide a commensurate set of benefits to the institutions and participants.
Progress Indices: The partners sustain the collaboration, expand and scale upward as desired, and strengthen and disseminate its approach and model through other collaborations.


INTERNATIONAL (POST)GRADUATE COLLABORATION QUALITY FRAMEWORK, Part 2

PARTICIPATION
Goal: All eligible faculty/students who wish to participate in the collaboration can do so via appropriate elements.
Process/Practice: Participants are informed about the nature and elements of the collaboration and engaged to derive maximum benefit; administrative support services are available to participants.
Sample Metric: Administrative infrastructure provides clear information to all relevant participants; metrics of information dissemination, engagement modes, and administrative services.
Progress Indices: Qualitative and quantitative indicators show continuous improvement in growth and effectiveness rates.

FACULTY & STUDENT SATISFACTION
Goal: Faculty and students are pleased with the collaboration experience and interactions with administration, and voluntarily promote the program among colleagues.
Process/Practice: Process to ensure that faculty and student engagements are timely and substantive; system to ensure provision and assessment of collaborative program elements.
Sample Metric: Repeat engagements in the collaboration program by faculty, and/or growth of new faculty and/or students in the collaboration, show endorsement; surveys, testimonials, focus groups.
Progress Indices: Data from assessment metrics show strong support; at least 90% of faculty and students believe the collaboration experience is positive.


The Proposed Quality Assessment Framework

The framework is essentially a rubric that can help collaboration partners identify goals, relevant processes and practices, sample metrics, and progress indices to enable quality assessment and improvement. These are enumerated across four main areas pertaining to (post)graduate collaborations: academic and institutional effectiveness, scale (cost effectiveness and commitment), participation, and faculty/student satisfaction. Different types of collaborations will have different goals and hence different ways to measure quality across the four areas. For example, research collaborations might involve students engaging in informal laboratory experiences, obtaining local credit for research done at the partner site, or completing individual dual degrees under a cotutelle arrangement. The framework can be filled out to correspond to the context and relevant goals. Similarly, collaborations to recruit students into doctoral or master's programs will differ in structure and intent, in their basis in research or coursework experience, in whether or not there is a "feeder" or pipeline arrangement, and so on. Again, there could be important differences in objectives and thus in the processes and metrics to be assessed. As international (post)graduate education leaders move to examine and improve the quality of collaborations between their graduate schools and university programs, we hope that this framework will be a useful tool to help move quality assessment of such programs forward.
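To make the rubric concrete in operational terms, the sketch below encodes one framework area as a simple data structure and flags metrics that fall short of their targets. This is a minimal, hypothetical illustration in Python: the area name, goal, and the 90% satisfaction index come from the tables above, but the class, field names, example values, and scoring rule are our own assumptions, not part of the framework itself.

```python
from dataclasses import dataclass, field

@dataclass
class FrameworkArea:
    """One area of the collaboration quality rubric (illustrative only)."""
    name: str                                      # e.g., "Participation"
    goal: str                                      # desired outcome for the area
    practices: list = field(default_factory=list)  # processes/practices in place
    metrics: dict = field(default_factory=dict)    # metric name -> latest value
    targets: dict = field(default_factory=dict)    # metric name -> target value

    def unmet_targets(self):
        """Return metrics whose latest value falls short of the target."""
        return {m: (v, self.targets[m])
                for m, v in self.metrics.items()
                if m in self.targets and v < self.targets[m]}

# Hypothetical data for one of the four framework areas.
satisfaction = FrameworkArea(
    name="Faculty & Student Satisfaction",
    goal="Faculty and students are pleased with the collaboration experience",
    practices=["semester perception survey", "annual focus groups"],
    metrics={"pct_positive": 0.84, "repeat_engagement_rate": 0.65},
    targets={"pct_positive": 0.90},  # the table's 'at least 90%' progress index
)

for metric, (value, target) in satisfaction.unmet_targets().items():
    print(f"{satisfaction.name}: {metric} = {value:.2f} (target {target:.2f})")
```

Run on these example values, the check reports that the 90 percent satisfaction index is not yet met, which is exactly the kind of gap the framework is intended to surface for management attention.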

1 http://www.oecd.org/document/22/0,3343,en_2649_35961291_40624662_1_1_1_1,00.html

2 http://en.wikipedia.org/wiki/Quality_Assessment#Company_quality

3 Van Vught FA, Westerheijden DF, 1994. Towards a general model of quality assessment in higher education. Higher Education 28 (3), 355-371.

4 Zhao F, 2003. Enhancing the quality of online higher education through measurement. Quality Assurance in Education 11(4), 214-221.

5 Sims SJ, Sims RR, 1995. Total Quality Management in Higher Education: Is It Working? Why or Why Not?, Praeger, Westport CT.

6 White S, 2000. Quality assurance and learning technologies: intersecting agendas in UK higher education. Quality Assurance in Education 8(1), 7-16.

7 Lorenzo G, Moore J, 2002. The Sloan Consortium Report to the Nation: Five Pillars of Quality Online Education. Available at www.sloan-c.org/effectivepractices/pillarreport1.pdf

8 http://www.sloan-c.org/effective

9 http://www.sloan-c.org/5pillars

VIII. CONCLUSION: GUIDING PRINCIPLES FOR MEASURING QUALITY

Since its first convening in 2007, each Global Summit has generated a set of consensus points that are discussed and approved by all delegates at the conclusion of the forum. While this process serves to identify areas of common ground, it also begins with the recognition that there are important differences between graduate education systems and university cultures around the world. Through lively discussion and debate, Summit delegates identify principles and practices that are broadly applicable to all countries, yet specific enough to have significance and weight in the international graduate community and beyond. Strong collective statements on a range of pressing topics in graduate education have taken shape over the past four years: the "Banff Principles" on advancing and improving graduate education globally (2007); the Guidelines for Future Collaborations on Scholarly Integrity in a Global Context (2008); and the Principles and Practices for Effective International Collaborations (2009).

The 2010 consensus document, Principles and Practices for Assessing the Quality of (Post)Graduate Education and Research Training, includes ten statements regarding the purposes and values of measuring quality in graduate education as well as the processes that support assessment efforts. It also highlights key areas in which quality metrics for graduate education and research merit careful attention and consideration.

Now and in the coming years, there are several key reasons why the 2010 Summit Principles can prove useful to graduate institutions globally. First, as different graduate education systems become increasingly interconnected, the Principles provide common ground from which institutions in different countries may begin a conversation about what constitutes strong quality measurement in graduate education. Second, recent significant shifts in the graduate education landscape, from the development and growth of the graduate education "market" to the strategic efforts of universities to enhance programs and degrees through innovation, require us to think broadly about what constitutes quality. Graduate education leaders may wish to use this document to reflect on their universities' current approaches to quality assessment, and to draw on it as they develop new institutional priorities. Finally, the statement includes an important reminder about the ultimate purposes of strong quality assessment. As the first principle articulates, "The primary objective of quality assessment is to ensure and improve the quality of (post)graduate training and student learning and professional development." While quality assessment is often linked to the broader goals of institutions and national economies, delegates agreed that methods and metrics for quality assessment must give primary focus to the development of graduate students, the future leaders of national and global research.

It is also our hope that these principles will inspire further discussion in networks that go beyond the 2010 Summit: networks of graduate education leaders in individual countries and regions; government agencies, organizations, and private-sector groups with a strong investment in the quality of graduate education; and international networks of graduate institutions engaged in partnerships and collaborations.

Debra W. Stewart
President
Council of Graduate Schools

APPENDIX A: PRINCIPLES AND PRACTICES FOR ASSESSING THE QUALITY OF (POST)-GRADUATE EDUCATION AND RESEARCH TRAINING

The concluding session of the annual Strategic Leaders Global Summit is a workshop-style discussion in which participants revise and approve a set of conclusions that have received significant support in earlier sessions. For the 2010 Summit, participants agreed to a statement about the importance of measuring quality in graduate education and research along with an international set of principles and practices for conducting and supporting strong assessments of quality.

Preamble

The assessment of quality in (post)-graduate education is critical to the success of master's and doctoral students and to the future of the global research enterprise both within and outside academia. All countries and regions stand to benefit from assessment efforts that seek to improve outcomes for students and countries. At the same time, the goals of quality assessment must be considered in relation to the diverse contexts in which students are trained. International discussions of quality assessment must therefore respect differences in the priorities and approaches of different countries, institutions, and disciplines, and the variety of educational, research and professional needs of their students. Acknowledging the differences in our national contexts, the delegates of the 2010 Strategic Leaders Global Summit have agreed to a set of common principles for assessing the quality of (post)-graduate education and research training.

Principles and Practices

1. The primary objective of quality assessment is to ensure and improve the quality of (post)-graduate training and student learning and professional development. Evaluation must go beyond the assessment of research quality to address topics such as:
• Admission criteria and recruitment
• Student Learning Outcomes, including transferable skills
• Mentoring and supervising structures
• Infrastructure for (post)-graduate student training
• Quality of student experience
• Measures of completion and attrition
• Career placement both inside and outside academe

2. Assessment is an important way to assure external stakeholders of the quality of (post)-graduate education. Sharing the goals and outcomes of assessment with all relevant stakeholders, including the public, helps ensure that assessment efforts are understood and valued.

3. While quality can be assessed in a variety of ways, evaluation should be based on clearly defined objectives, criteria and processes, and the intended uses of the results should be made clear to all relevant stakeholders. Different or multiple processes may be needed to meet different goals and audiences.

4. The development of specific quality metrics for research degrees is a key priority. Areas to be considered in review of research degrees include:
• Monitoring progress through the degree
• Quality of the dissertation/thesis
• Exposure to interdisciplinary and global research experiences
• Skills for generating and communicating research
• Quality of the research training environment
• Research impact

5. Quality assessment is most effective when academic staff (faculty) play a role in designing or refining evaluation procedures.

6. Regular processes of internal and external review should be used to sustain and advance quality in (post)-graduate education.

7. Graduate education leaders have particular responsibilities for defining, measuring, benchmarking, and improving the professional and transferable skills of students. To support this effort to improve program quality, it is important to closely follow workforce trends, develop better methods of tracking graduates’ career trajectories, and ensure that students are trained to adapt to evolving career demands.

8. The assessment of quality in international collaborations is integral to (post)-graduate research training in the 21st century. The globalization of (post)-graduate education and research demands rigorous, coordinated efforts to measure the outcomes of international experiences for graduate students, and to identify desired outcomes not currently achieved.

9. The success of future assessment efforts depends on the refinement of existing tools, qualitative and quantitative, and the development of new methodologies for measuring quality. Key priorities in this area include the comparison of tools existing or under development, the exchange of best practices in their use, and the development of new technologies that support assessment and the sharing of data.

10. National and regional groups of university leaders responsible for (post)-graduate education and research training provide an important mechanism for sharing best practices.

APPENDIX B: SUPPLEMENTARY COUNTRY PAPERS

Assessment of Master's and Doctoral Education in Indonesia

Illah Sailah
Director of Academic Affairs
Directorate General of Higher Education
Ministry of National Education, Republic of Indonesia

In the period from 2010 to 2014, Indonesia will face many challenges, two of which are related to master's and doctoral education: (1) producing, through education, the creative human resources required for the development of a creative economy, and (2) enhancing synergistic partnerships with businesses and industries, community and professional organizations. The vision of national education is "Implementation Service Excellence of the National Education in Creating Smart and Competitive Indonesian Human Resources." The mission is to increase the availability, outreach, quality and relevance, and equity of attainment of educational services, and to assure access to them.

As quality is one of the most important aspects, a quality assurance system has been implemented in higher education. The legal basis of this implementation is: (1) Law Number 20 of 2003 (articles 1, 35, 50, 51, 0) and (2) Governmental Regulation Number 19 of 2005 concerning the National Education Standard. Eight standards should be applied; six of them are still in progress for higher education, and one of the six, concerning the standard of content, was published only very recently. The standards are formalized by the Body of National Education Standard, which was established by the Ministry of National Education. The Content Standard determines the competencies (learning outcomes) of the diploma, bachelor's, master's, and doctoral degrees, as well as their assessment. These can be used to distinguish the capabilities of graduates of master's and doctoral degree programs: the quality of the master's and doctoral degrees can be measured against the learning outcomes stated in the minimum standard of content. The major aspects of quality are measured by observing the achievement of publications and intellectual property rights. The minimum standard states that the master's degree holder must be able to produce one publishable journal paper; the holder of a research doctorate, one paper in an international journal; and the holder of a combined coursework-research doctorate, one paper published in a nationally accredited journal or an international journal.

Since Governmental Regulation No. 69 of 2009 concerning Internal Quality Assurance (IQA) was published, higher education institutions must set up IQA systems (QA policy, manuals, standard procedures) and assure that the learning outcomes can be achieved. The Unit of IQA in each higher education institution is responsible for developing, administering, and analyzing the assessment. The performance of study programs, on the other hand, is assessed by an independent external body, the National Accreditation Agency (NAA-HEI). The NAA evaluates seven elements: (1) vision, mission, objectives, and strategic goals; (2) governance, leadership, management system, and quality assurance; (3) students and graduates; (4) human resources; (5) curriculum, learning outcomes, and academic atmosphere; (6) finances, facilities, and ICT; and (7) research, community, and collective ideas. The roles of national and international peer groups are also important in the assessment of quality.

Based on the regulations mentioned above, the National Policy on Quality Assurance Systems supports the Program of Study Assessment, which is based on self-evaluation, quality assurance, and accreditation of the study program. IQA is implemented by higher education institutions through internally driven approaches, whereas external quality assurance (EQA) is a systemic activity involving evaluation of the sustainability of the study program. The IQA system interfaces with EQA through a warehouse of data on higher education institutions that is designed to help them meet and surpass the minimum national education standard, and to stimulate continuous quality improvement. The research method used for different types of assessment is one of the elements assessed in the accreditation process. Process and output are the more important aspects of quality assessment, in terms of assuring effectiveness, efficiency, and productivity. The measurement benefits the university through feedback mechanisms that help study programs and institutions improve their quality.

The accreditation results determine whether the higher education institution is able to publish its accreditation certificate. If a study program is not accredited, there will be a negative impact on its higher education institution, e.g., fewer incoming students, a reduced ability to acquire competitive funding, and an inability to develop new continuing study programs at a higher level. The accreditation result will be used by higher education institutions or study programs for self-evaluation, and by government for interventions informed by quality mapping. The government intervenes through quality clustering, funding, and competitive funding to increase the quantity and quality of research projects for faculty members and postgraduate students, and by facilitating research collaborations between higher education institutions and local government, industries, and the community. Ultimately, the establishment of a quality assurance system for research development is necessary. Students, lecturers, and society will be affected by these interventions, which help create a better academic environment in postgraduate programs.

Quality Assurance Structures for Japanese Graduate Education

Akihiko Kawaguchi
Specially Appointed Professor
National Institution for Academic Degrees and University Evaluation, Japan

1. Overview of the National Quality Assessment (QA) Framework

In Japan, the quality assurance framework originally operated through prior regulation under the "Standards for Establishing University," which required universities to implement approved systems (including any new educational programme within the university). In response to growing needs for QA improvement in universities, a third-party evaluation and accreditation system began in 2004, after self-assessment by universities became mandatory in 1999.

2. Ongoing appraisals

NIAD-UE has been involved in two systems for evaluating the quality of universities through ongoing evaluations of accredited higher education institutions.

2-1. Certified evaluation and accreditation

A mandatory evaluation covering the overall conditions of education and research at universities, junior colleges, colleges of technology, and professional graduate schools is conducted by certified evaluation and accreditation organizations. Subject institutions must select from among the certified organizations and be assessed within the periods stipulated by the government. More than 750 higher education institutions are subject to the evaluation. NIAD-UE is one of the certified organizations responsible for "certified accreditation and evaluation," performed once every seven years (every five years for law schools) to assure that at least a minimum level of quality is achieved in meeting predetermined goals based on each university's mission, objectives, and charters. NIAD-UE and a few other certified organizations administer the quality measurements for certified accreditation and evaluation, and each university can select which organization is to undertake its evaluation. External peer reviewers and members of the respective quality assurance agencies are responsible for the analysis. Self-assessment reports by each university are published and also sent to government-certified quality assurance agencies, and the certified agency passes the report on to the government. The third-party evaluation report is sent to both the respective university and the government.

The evaluations are certified and are essentially made on a pass/fail basis. There is no intervention process. To date, no university (as a whole) has failed, but the law schools of some universities (which are administered separately from the parent institution) have failed. In such cases, they are given a grace period of two years to reform, after which a follow-up certified evaluation is performed.

2-2. Evaluation of National University Corporations

National university corporations and inter-university research institute corporations must carry out mid-term objectives and mid-term plans for each six-year period, together with annual plans, for education, research and management; their performance against these is evaluated. The National University Corporation Evaluation Committee, led by the Ministry of Education, Culture, Sports, Science and Technology, is entirely responsible for this evaluation. It appointed NIAD-UE to evaluate the attainment of mid-term objectives for education and research; NIAD-UE alone is responsible for evaluating the performance of the 86 national university corporations. This evaluation may indirectly affect future funding allocations by the government. NIAD-UE developed its quality measurements after consulting with external experts and obtaining approval from the Ministry of Education, Culture, Sports, Science and Technology.

Among the benefits of the process for universities are that the quality of the university is assured, which is essential for student recruitment, and that universities have the opportunity to demonstrate accountability to stakeholders. These Japanese national QA systems and NIAD-UE together contribute to ensuring the continuing quality of all Japanese universities.

For more information, please refer to:

Glossary of Quality Assurance in Japanese Higher Education, 2nd ed. (2009). Tokyo: NIAD-UE. www.niad.ac.jp/english/unive/publications/information_package.htm

Overview of Quality Assurance System in Higher Education: Japan. (2009). Tokyo: NIAD-UE. www.niad.ac.jp/english/unive/publications/information_package.htm

Measuring Quality in (Post) Graduate Education and Research Training – New Zealand

Gregor Coster
Dean of Graduate Studies
The University of Auckland

Charles Tustin
Director, Graduate Research Services
University of Otago

New Zealand is unlike many other countries in that accreditation of university programmes, including postgraduate qualifications, is not delegated to the individual universities as self-accrediting institutions. Approval or accreditation is done at an inter-university level through the Committee on University Academic Programmes (CUAP), which is appointed by Universities New Zealand, a statutory organisation. This authority for approval of university-level programmes has been delegated to Universities New Zealand by the New Zealand Qualifications Authority (NZQA), which has oversight of quality assurance in New Zealand. CUAP comprises representatives from each of the eight universities in New Zealand, which have cooperated for more than four decades to maintain standards. While the universities are autonomous institutions, they seek to maintain standards that are internationally respected among universities. All proposals for new programmes of study and qualifications, including at the postgraduate level, or for substantial changes to existing programmes and qualifications, must be submitted to CUAP by the universities. Proposals are subjected to scrutiny through a peer review process; some are amended as a result, and some are rejected. As a follow-up to the initial approval, CUAP operates a graduating-year review process whereby universities are required to report on the outcomes of the first cohort to pass through a new qualification.


CUAP provides very clear guidelines regarding credit requirements, constitution, entry requirements and outcomes for each level of study. For postgraduate study this includes the size of theses and their relative weight against coursework, the process for examination, and the monitoring of progress. There is clear reference to external examiners and external moderation of assessment. In New Zealand, all doctoral theses are formally examined by either two or three appropriately qualified examiners, of whom two must be external to the student's university; one of the three examiners is required to be an overseas expert. At Master's level, two examiners are required, one of whom must be external. Theses are assessed against the national CUAP criteria for doctoral and Master's degree qualifications. This system of external examiners plus national criteria works to ensure that the quality of theses remains at an acceptable standard.

The eight universities in the country are also comprehensively audited on a cyclical basis by an independent organisation, the Academic Audit Unit (AAU), the chief function of which is to support New Zealand universities in their continuing achievement of standards of excellence in their academic responsibilities in research and teaching. Aspects of postgraduate education are included in these audits. Audit reports provide valuable feedback and recommendations to universities on a wide range of their activities, including matters pertaining to postgraduate education such as admission processes, progress reporting and monitoring mechanisms, resourcing of students, and examination processes.

Internally, universities undertake their own quality assessments of various aspects of postgraduate education. Most universities also have quality advancement or academic audit offices of one kind or another, which undertake regular assessment surveys of performance on various aspects of postgraduate programmes. Other assessments are undertaken by the Deans or Directors of Graduate Studies. Much value is placed on student feedback as an important means of assuring the quality of research higher degree programmes. Numerical data are used for quality assessment in such areas as time to completion, completion rates, completion rates for individual supervisors, and so on. Qualitative data are obtained through exit surveys and interviews, which involve both qualitative and quantitative analysis. From time to time, work is commissioned within several of the universities on the quality of components of the research higher degree programmes; for example, research is undertaken to evaluate the experience of students with respect to supervision and resources. Measurements, both quantitative and qualitative, have been very important as quality feedback for improving programme performance and departmental performance in research supervision. A feedback loop (plan, do, study, act) forms a regular part of the programmes. Interventions are designed to improve the quality of supervision, improve outputs (including graduating a higher number of Maori and Pacific Master's and doctoral graduates), improve completion rates, reduce completion times, and enhance the overall postgraduate experience of students.

In New Zealand, the Tertiary Education Commission (TEC) and the Ministry of Education (MoE) collect macro-data from the universities, including enrolment numbers (equivalent full-time students, or EFTS), gender, ethnicity, and completions. The numbers of successful research higher degree completions are particularly important as they feed into New Zealand's Performance-Based Research Fund (PBRF), which is used to set the basis for funding of universities in subsequent years. Although the New Zealand universities do not directly share data with each other about programme performance, they do share in some detail information about research higher degree policies and approaches to quality and quality management. The vehicle for this is the annual meetings of the Deans and Directors of Graduate Studies and regular emails among members of this group and their administrative staff, who enjoy a collegial and co-operative relationship with each other.
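To make the examination rule above concrete, the following sketch encodes it as a simple validity check: doctoral theses require two or three examiners, of whom two must be external and one an overseas expert; Master's theses require two examiners, one of whom must be external. This is a minimal illustration in Python of our reading of the rule, not an official CUAP or university tool; the Examiner type and function name are our own.

```python
from dataclasses import dataclass

@dataclass
class Examiner:
    name: str
    external: bool  # external to the student's university
    overseas: bool  # based outside New Zealand

def panel_is_valid(degree: str, panel: list) -> bool:
    """Check an examination panel against the rules described above
    (an illustrative reading, not an official implementation)."""
    n_external = sum(e.external for e in panel)
    n_overseas = sum(e.overseas for e in panel)
    if degree == "doctoral":
        # Two or three examiners; two external; one an overseas expert.
        return len(panel) in (2, 3) and n_external >= 2 and n_overseas >= 1
    if degree == "masters":
        # Two examiners, one of whom must be external.
        return len(panel) == 2 and n_external >= 1
    raise ValueError(f"unknown degree type: {degree}")

panel = [
    Examiner("Internal examiner", external=False, overseas=False),
    Examiner("External NZ examiner", external=True, overseas=False),
    Examiner("Overseas examiner", external=True, overseas=True),
]
print(panel_is_valid("doctoral", panel))  # True
```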


Reference:

Committee on University Academic Programmes, Functions and Procedures, 2009-2010 (http://nzvcc.ac.nz/files/Final_09_2_.pdf)

Assessment of Graduate Degree Programmes in Singapore

Tan Thiam Soon
Vice Provost (Education)
National University of Singapore

Currently in Singapore, there are three publicly funded universities (the National University of Singapore (NUS), Nanyang Technological University (NTU) and Singapore Management University (SMU)) and numerous private universities (non-Singaporean) and educational institutions (e.g., SIM, INSEAD, Chicago Booth GSB, ESSEC, etc.) offering degree programmes. A fourth publicly funded university, the Singapore University of Technology and Design, will accept its inaugural batch of students in 2012. At the moment, of the three publicly funded universities, only two (NUS and NTU) offer graduate research degree programmes in a wide range of areas.

The publicly funded universities are autonomous universities (AUs), allowed to chart their own destiny whilst receiving substantial government funding. To ensure that taxpayer money is well spent, each AU participates in a process of quality assurance (QA) at the institutional level under the aegis of the Ministry of Education, known as the Quality Assurance Framework for Universities (QAFU). The QAFU process covers governance, management, teaching, research and service, and builds on internal QA processes and international best practice. It requires each AU to develop its own institutional goals and to measure its own performance in meeting these goals through a self-assessment process that is then validated by an external panel appointed by the Ministry of Education. The panel's report and recommendations feed into a performance agreement between each AU and the Ministry, which is reviewed annually over a five-year period. At the end of this period, the QAFU cycle is re-initiated with the next self-assessment and validation exercise, with an expectation of an improved level of achievement. Once a performance agreement is in place, there is an annual Performance Review Forum in which the university's rate of attainment of targets and objectives is evaluated.

At NUS, the internal QA processes for graduate education are focused on:

1. Programme Quality: this includes programme development, modification, review and approval, overseen by various committees and boards at the Department, Faculty, and University levels. The Board of Graduate Studies, chaired by an Associate Provost for Graduate Education, provides a level of consistency in the quality of graduate programs across faculties, while the University Committee on Educational Policy, chaired by the Vice Provost (Education), sets overall policy and ensures some consistency in philosophy between the graduate and undergraduate education programs. Programme quality is also monitored through scheduled Visiting Committees and regular student feedback channels.
2. Teaching Quality: monitored through the instruments of peer review, a student feedback exercise each semester, and regular student forums organized by the Centre for Development of Teaching and Learning.
3. Outcome Monitoring: monitored and evaluated through various survey instruments, including:
a. Employment Surveys: a Graduate Student Employment Survey is conducted annually; its purpose is to obtain information on the economic activity status of graduate students and their salaries at the point of registration for commencement.
b. Employer Feedback Survey: an Employer Feedback Survey is conducted annually and seeks direct feedback from employers in both the public and private sectors on the ability of NUS graduates.
c. Alumni Surveys: a Graduate Student Alumni Survey is conducted annually; its purpose is to track the career development of NUS graduate students two and five years after graduation, and to solicit feedback on the quality of the educational and broader NUS experience they went through and its impact on their careers.

The Ministry of Education provides each AU with funding for most of the graduate student scholarships. The quality of the research training provided is evaluated each year through the Post-graduate Research Report on Quality (PGRRQ), which covers the following areas:

1. Education: including trends in enrolment, retention and placements;
2. Research Output: including publications and citations, patents and licensing, and notable Faculty and Staff Achievements;
3. Research Competitiveness and International Visibility: including external grants awarded, notable international events and research partnerships.

The PGRRQ is followed by a presentation before an Academic Research Committee (ARC), appointed by the Ministry of Education and tasked with evaluating each AU's performance in research training. The ARC can make recommendations on each AU's future funding for graduate scholarships. While the above describes the processes within NUS, there is some degree of consistency across the three autonomous universities in Singapore, as all three are subject to the same Quality Assurance Framework for Universities. This process was only introduced in 2006, and thus far only NUS has undergone a second round of self-assessment combined with a visit by an External Review Panel.

Measuring and Improving Quality in South African Postgraduate Education

Jan Botha
Senior Director, International Research and Planning
University of Stellenbosch, South Africa

The South African Higher Education Act (Act 101, 1996) assigns the responsibility for the quality assurance of higher education provision to the Council on Higher Education (CHE), to be executed through the Higher Education Quality Committee (HEQC). The HEQC is a permanent subcommittee of the CHE, with the mandate to promote quality, to conduct institutional audits of higher education institutions (HEIs), and to accredit programmes. The HEQC, established in 2001, has developed a criterion-based approach to institutional audits (cf. www.che.ac.za). The HEQC Audit Framework defines 'quality' in terms of three concepts: fitness for purpose, value for money, and individual and social transformation, within an overarching fitness-of-purpose framework. South African public HEIs (there are currently 23) are subjected to an HEQC-conducted institutional audit every six years. The first cycle of audits will be concluded during 2010. During an audit, the quality assurance arrangements related to research and postgraduate education are assessed with reference to three criteria (with their respective sub-clauses). Criterion 15 applies to all HEIs, while criteria 16 and 17 also apply to those HEIs with a research-intensive mission.

CRITERION 15
Effective arrangements are in place for the quality assurance, development and monitoring of research functions and postgraduate education.

In order to meet this criterion, the following are examples of what would be expected:

(i) A research policy and/or plan which indicates the role and nature of research conducted at the institution, is adequately resourced, and is consistently implemented and monitored.
(ii) Appropriate strategies for research development, including capacity development for researchers, which are implemented and monitored.
(iii) An effective research information system that captures appropriate data for research-related planning.
(iv) Appropriate strategies for the support and development of postgraduate education, including effective postgraduate supervision, which are implemented and monitored.
(v) Regular review of the effectiveness of arrangements for the quality assurance, development and monitoring of research functions and postgraduate education.

CRITERION 16
Research functions and processes (cf. www.che.ac.za).

CRITERION 17
Efficient arrangements are in place for the quality assurance, development and monitoring of postgraduate education.

In order to meet this criterion, the following are examples of what would be expected:

(i) Clear policies, regulations and criteria in relation to the quality of postgraduate education. These include:
• Policies that indicate the scope and nature of postgraduate education at the institution, and stipulate clear admission requirements and procedures;
• Policies and criteria for the evaluation and approval of masters and doctoral proposals;
• Policies and criteria governing access to and allocation of funding for postgraduate education and research;
• Policies and regulations that specify the role and responsibilities of supervisors of postgraduate research;
• Policies and criteria for assessment of postgraduate education and research; and
• Policies and regulations regarding postgraduate publications.

(ii) Effective structures and processes that quality assure and monitor postgraduate education. These include structures which:
• Apply clear criteria against which to evaluate, approve and monitor postgraduate research;
• Evaluate and approve funding for postgraduate research;
• Enable postgraduate students to lodge complaints or appeals that are swiftly dealt with, as well as provide for opportunities to defend their research findings; and
• Track developments and trends in postgraduate education at the institution.

(iii) An effective research information system which supports the monitoring of postgraduate education. This includes:
• Capturing essential information on postgraduate research issues through a central research information system; and
• Linking captured data in a way that allows for meaningful reporting on and planning for postgraduate education and research at the institution.

(iv) Clear and effective policies and strategies which facilitate the development, support and improvement of postgraduate education. These include the availability of:
• Training and development opportunities for new supervisors;
• Research design and methods courses for postgraduate students;
• Access to support services for postgraduate students;
• Facilitation of regular access to supervisors and other researchers in the field;
• Special funds to support postgraduate research; and
• Additional support and development programmes for previously disadvantaged students.
(v) Regular review of the effectiveness of arrangements for the quality assurance, development and monitoring of postgraduate education.

Under contract from the HEQC, the Centre for Research on Science and Technology (CREST) at the University of Stellenbosch (cf. www.sun.ac.za/crest) prepares a detailed report based on quantitative information about the research profile and the postgraduate student profile of each HEI to be audited. The reports are provided to the HEI and the review panel before the commencement of the audit visit. The report on an institution's research profile is based on scientific and scholarly papers published in peer-reviewed journals. Two data sources are used to compile an HEI's bibliometric profile: (1) the research article submissions provided by the HEI's research directorate to the national Department of Higher Education and Training (DHET) and (2) the data records of peer-reviewed research articles in SA Knowledgebase (SAK) at CREST. The quantitative data are presented to the HEQC and the HEI to be audited in four sections: (1) research output and research productivity; (2) the HEI's authors and their article output, broken down by gender, race, age and faculty association; (3) the scientific fields and sub-fields within which the HEI's researchers have published; and (4) the HEI's pattern of article publication, including a list of the top journals in which the HEI's authors published during a particular period. The second part of the CREST report on an institution consists of detailed information on the HEI's postgraduate student profile: (1) graduates (master's and doctoral students); (2) first enrolments (master's and doctoral students); (3) total enrolments (master's and doctoral students); and (4) the HEI's postgraduate graduates as a percentage of all students nationally, by qualification type and broad field of study.

The responsibility for audits as well as accreditation is assigned to the HEQC. The same body therefore developed the comprehensive national QA framework for South Africa, covering both aspects. The framework provides for a phased-in process in which the interface between the two aspects works as follows: (1) New academic programmes have to go through a thorough HEQC accreditation process, based on peer review with reference to a national set of accreditation criteria. (2) From time to time the HEQC conducts national reviews of the programmes in a specific field (e.g. a review of all the MBA programmes offered in the country). (3) An institution's quality assurance arrangements for academic programmes are audited by the HEQC every six years. (4) Institutions with a proven record of success in the accreditation of new programmes and in the national reviews, and which have also received favourable audit reports, can be awarded self-accreditation status by the HEQC.

One immediately apparent benefit of the QA framework is the improvement of quality and the elimination of substandard postgraduate programmes. For example, when the 37 MBA programmes offered by 18 public and nine private higher education institutions were reviewed during 2003, only five were immediately fully accredited, while 15 received conditional accreditation and a number of programmes were terminated (including programmes offered by off-shore providers). Of the 19 Master's in Education programmes reviewed during 2005, seven received immediate full accreditation, nine were given the opportunity to reapply after making the improvements required by the HEQC, and three were de-accredited with immediate effect.

The HEQC conducts longitudinal meta-research on the audit and accreditation processes as they unfold. With regard to research and postgraduate education, it became clear that there is a relative lack of attention, even by the "research-led" universities, to basic issues of quality assurance in the supervision and examination of graduate students. Furthermore, there is a lack of institutional support for such students in many institutions, as well as other structural gaps. The HEQC consistently emphasises the need to increase knowledge production and research output (especially doctoral graduate output); the need to internationalise South Africa's science profile and to increase the country's visibility in leading foreign journals; and the need to broaden the participation of black, female and young academics and scholars in the research system, including participation in postgraduate education.

There is not yet sufficient consensus on the criteria for a research-led or research-intensive university. South African universities are highly diverse in research and postgraduate output. During 2005-2007 the five top HEIs each produced between 100 and 200 PhD graduates per year, but eight of the 23 HEIs produced fewer than 10 PhDs per year (including two with none at all). The majority of publications are currently produced by staff over the age of 50. The participation of female and black scholars in knowledge production has increased significantly, although the anticipated targets have not yet been met. With few exceptions, universities do not offer their academic staff any formal or structured training and guidance in supervisory practices. As part of the task of building an effective national quality assurance system, the HEQC has also included capacity development and training as a critical component of its programme of activities. Its training programme provides for the training of auditors, evaluators and QA officers in HEIs. In addition to national interventions, one noteworthy development at the institutional level is the establishment of the African Doctoral Academy at the University of Stellenbosch, a capacity-building centre that supports, strengthens and advances doctoral training and scholarship on the African continent (cf. www.sun.ac.za/crest).

The Measurement of Postgraduate Education and Research Training in Vietnam

Kim Anh Le Thi Higher Education Project Ministry of Education, Vietnam

Kim D. Nguyen Vice Director General, Institute for Educational Research HoChiMinh City University of Education

This paper presents a brief summary of how quality in postgraduate education and research training is measured in Vietnam. The paper first introduces the context of Vietnamese postgraduate education and training after Doi moi,1 focusing in particular on the situation since the beginning of the 21st century. The paper then explains procedures for measuring postgraduate training quality and related issues. Lastly, the paper discusses problems of postgraduate education assessment in Vietnam and suggests possible solutions to overcome its weaknesses.

The Context of Vietnamese Postgraduate Education and Training2
Vietnam's higher education system has developed rapidly since 1986, when Vietnam embarked on a policy of economic renovation, and especially since 1993, when the Government of Vietnam set as its first priority the development of education and training and the expansion and diversification of the higher education system. In 1993 the Government assigned to higher education the special responsibilities of producing qualified human resources and advancing knowledge crucial for the country's social and economic development. Since that time, the Government has provided financial support for the growth of the system, and it has facilitated the development of a system that provides for a mix of institutions and forms of training, as well as a mix of public and private ownership. Expenditure on the higher education system as a proportion of the Gross Domestic Product has increased dramatically, especially during recent years.

Higher education in Vietnam is in the process of making the transition from an elite to a mass higher education system. In September 2009, there were 376 universities and colleges providing for 1.7 million students, or about 13.5 per cent of the relevant age group. There were approximately 61,000 academic staff members; however, only about 10 per cent had doctoral qualifications, which is not much different from the situation in 1987, when the system was smaller. Moreover, this percentage is well below official targets for 2010 (25 per cent) and for 2020 (35 per cent). By 2010, 159 of the 376 higher education institutions (universities and research institutes) were assigned to provide postgraduate education and conduct research. Fourteen key universities have been identified in the Master Network of Higher Education Institutions3 in Vietnam for the period 2006 to 2020. These key higher education institutions are expected to be the pioneering research universities of the nation and have been granted a high level of autonomy by the Government. The 14 key universities (dai hoc and truong dai hoc) and academies (Hoc vien) will also provide the majority of the master's and PhD holders for Vietnam. In 2009, these universities had 30,676 postgraduate students (5.9% of total enrolment) and 2,462 PhD students (0.5% of total enrolment).

Procedure and Measurement for Postgraduate Training Quality and Related Issues
The Government of Vietnam regards higher education and its reform as an important priority. Therefore, the National Assembly—the highest governmental agency—oversees the quality of higher education, and exercises assessment through the report presented at each legislative session by the Ministry of Education and Training (MOET). This report also covers the quality of postgraduate education. During its most recent session, the National Assembly examined "the implementation of policies, legislation requirements to open a university, investment and quality assurance in higher education," pointing out shortcomings and limitations of current practices. Although many policies had been issued, implementation was poor and supervision was lax. These findings show there is a need to review policies and legislation and to perform thorough supervision (Associate Professor Ha Duy Toan, Journal on Education and Era, June 2010).

The most visible shortcomings of higher education, as stated by Deputy Prime Minister Nguyen Thien Nhan, are: (i) state control over higher education is inadequate and stagnating; and (ii) the quality of higher education, especially of postgraduate education, is limited. The reason, as DPM Nhan pointed out, is that "during nearly 30 years, we could not manage the quality of higher education." MOET, as instructed in the Education Law, exercises state control over higher education, and recently introduced a number of policies and procedures to address the need for postgraduate education assessment. The ministry:

• Established a system of higher education quality management agencies from the central to the institutional level. In 2004, MOET created the General Department of Education Testing and Accreditation, and then supported universities and colleges in establishing 77 departments in charge of quality assurance at their own institutions. In 2005, MOET issued a temporary set of standards on institutional quality accreditation; accredited 20 HEIs; completed 40 institutional external evaluation reports; and received 114 HEI self-evaluation reports. Although these initiatives were important, serious delays have occurred in implementing accreditation because the three independent accreditation agencies were not established as planned in 2008. Also, the standards and instructions for program and course evaluation have not been issued; consequently, program and course evaluations are being done only at some key universities, based on the individual institution's initiatives.
• Reinforces state control over higher education by issuing numerous policies and regulations; the most relevant with regard to postgraduate assessment is the Circular on public disclosure of information in higher education institutions.4 The Circular covers the disclosure of the education quality commitments and education quality attainments of the institutions; the disclosure of education resources (teaching staff, textbooks, education curriculum, infrastructure, etc.); and the disclosure of revenue and expenditures. This information is made public on each institution's website.
• Requires HEIs to announce their education qualification outputs, and to evaluate the fitness of these outputs for human resource demands. As of 2010, 10 HEIs have promulgated their qualification outputs, accounting for 2.7% of all HEIs. These outputs are used as the basis for ensuring that quality assessments are credible and thorough.
• Reinforces inspection of input standards to ensure strict implementation at the HEIs.
• Issued the Regulation5 on master's and PhD programs, which delegates full autonomy to the HEIs in conducting thesis evaluation.

Quality control of postgraduate education, as described above, is MOET's responsibility. Three departments within MOET have primary roles: the General Department of Education Testing and Accreditation, the Higher Education Department, and the Department of Planning and Finance. These departments also cooperate with other relevant MOET departments and government agencies.

Problems and Possible Solutions
Vietnam's Quality Assurance (QA) system faces several challenges that make improving the quality of postgraduate education programs difficult:

• Lack of criteria for measuring the quality and effectiveness of postgraduate programs
• Poor quality control and assurance
• Insufficient management
• Insufficient qualified teaching staff, learning materials and facilities
• Poor service for postgraduate students


However, the biggest challenge at the moment is that postgraduate education is not the focus and priority of most Vietnamese higher education institutions. The proportion of postgraduate programs in most comprehensive universities is low compared with undergraduate programs. Moreover, most accreditation standards and criteria developed for higher education institutions have focused on undergraduate programs. Higher education institutions are striving to improve the quality of undergraduate programs and, consequently, they are not focusing sufficient effort on postgraduate studies. In addition, because priority is placed on improving the quality of undergraduate programs, the accreditation process evaluates only whether minimum requirements are met. In the national plan for developing the higher education system by 2020, MOET intends to have 20,000 doctoral degree holders, but the plan lacks credible strategies and effective quality management and assurance considerations. Recent discussions in Vietnam among educators and public officials at both the national and local levels suggest that it is in the interest of the country for higher education institutions to aim continuously at improving their performance: in terms of both undergraduate and postgraduate programs; in terms of their own capabilities; and in relation to the national development targets that MOET has formulated to improve the quality of higher education at both levels. For postgraduate education in Vietnam, however, this aim is still on the horizon.

1 Vietnam’s 1986 economic reforms toward a Socialist-oriented market economy.

2 The data used in this report are from Minister of Education and Training Nguyen Thien Nhan's The report on higher education system development, solutions on education quality assurance and improvement, Report no. 760/BC-BGDDT, dated 29/10/2009.

3 Master Network of Higher Education Institutions in Vietnam from 2006 to 2020 (Decision No.121/2007/QD-TTg).

4 Circular No. 09/2009/TT-BGDDT, dated 07/5/2009, of the Minister of Education and Training on public disclosure of information in higher education institutions.

5 Circular No. 10/2009/TT-BGDDT, dated 07/5/2009, by the MOET Minister.

APPENDIX C: PARTICIPANT BIOGRAPHIES

Dr. Rose Alinda Alias
Rose Alinda Alias was appointed Dean of Graduate Studies at the Universiti Teknologi Malaysia in August 2007. She is responsible for over 150 master's programs and 45 doctoral programs at UTM, with an enrolment of more than 7,100 graduate students from 45 countries. Her experience in graduate studies administration began when she was appointed Deputy Dean of Graduate Studies and Research at the UTM Faculty of Computer Science & Information Systems in July 1998. Dr. Alias is also a Professor in Information Systems Management at the faculty. She earned a PhD in Information Systems in the UK, as well as an MBA, an MSc in Computer Information Systems and a BSc in Computer Science in the USA.

Professor Jan Botha
Jan Botha is Senior Director of Institutional Research and Planning and Professor of Ancient Studies at Stellenbosch University in South Africa. His current responsibilities include advising the Vice-Chancellor and executive management team of the University on higher education policy and planning, liaison with the Department of Higher Education and Training, higher education environmental scanning, institutional planning, academic planning, quality assurance, institutional research, and management information. Earlier in his career he taught Hellenistic Greek and religion at different South African universities. He has published two monographs and many scholarly articles in these fields. He holds a doctoral degree from the University of Stellenbosch and a Postgraduate Certificate in Higher Education Management and Change from Twente University (The Netherlands). He participated in the development of the New Academic Policy for Programmes and Qualifications in Higher Education in South Africa (2001) and has worked for the Higher Education Quality Committee (HEQC) of the Council on Higher Education (CHE), where he served as project leader for the development of the HEQC's Programme Accreditation Framework and was a co-author of the Draft HEQC Programme Accreditation Criteria (2003). He serves on international quality audit panels in the Middle East. He is a recipient of a Fellowship of the United Nations Centre for Human Rights (Geneva), the Vice-Chancellor's Award for Excellence in Teaching, and the Fellowship Award of the South African Society for Higher Education Research and Development. He is currently the president of the Southern African Association for Institutional Research (SAAIR).

Dr. Marie Carroll
Marie Carroll was recently (March 2010) appointed Director of Academic Affairs at Sydney University, and is responsible, inter alia, for graduate studies at the University. She has worked in eight Australian universities and has held a variety of senior posts, including Foundation Head of Psychology, Chair of the Academic Board, and Pro Vice-Chancellor (Academic) at the (1993-2003), and Director of Quality Enhancement and Director of Student Equity at the Australian National University (2003-2010). She was an AUQA auditor for some years, and has led two universities (UC and ANU) through the audit process. She sits on a number of accrediting boards. With a background in cognitive psychology (PhD, Otago University), her research career has focused on experimental psychology in memory and metamemory, applied in particular to formal learning settings.

Professor Jean Chambaz
Jean Chambaz, MD, received a doctorat ès sciences. He is a professor of cell biology at the Faculty of Medicine Pierre and Marie Curie and heads the department of clinical biochemistry at the Pitié-Salpêtrière hospital in Paris. He created a research unit in the field of metabolism and intestinal differentiation in 1999, which merged in 2007 into the Research Center of Cordeliers, of which he is Vice-Director. After heading the doctoral school in physiology and pathophysiology from 2001 to 2005, he created the Institute of Doctoral Training at UPMC, which enrols about 3,500 doctoral candidates in sciences and medicine, from mathematics to public health; he served as its director until October 2008. Elected as a member of the scientific council, he currently serves as Vice-President for Research of UPMC. He chairs the steering committee of the Council on Doctoral Education of the European University Association, launched in 2008.

Dr. Andrew Comrie
Andrew C. Comrie is Associate Vice President for Research and Dean of the Graduate College at the University of Arizona. Dr. Comrie provides academic leadership for graduate education at the University, including stewardship of fourteen Graduate Interdisciplinary Programs, which involve over 600 faculty members from more than a dozen colleges. Prior to his appointment as dean in January 2006, he led the graduate program in Geography for almost a decade. Dr. Comrie has been a plenary speaker, session chair and invited participant at numerous CGS meetings on topics including interdisciplinary graduate education, strategic budgeting, admissions and financing, responsible conduct of research, entrepreneurship, master's completion and attrition, and international graduate education. He is an elected member of the CGS Board of Directors. Dr. Comrie is a climatologist with a primary appointment as Professor of Geography and with interdisciplinary appointments in Atmospheric Sciences, Arid Lands Resource Sciences, Global Change, Public Health, Remote Sensing and Spatial Analysis, and Statistics. He received his undergraduate education at the University of Cape Town in South Africa and his PhD from the Pennsylvania State University. Dr. Comrie joined the University of Arizona in 1992. Since then, he has conducted broadly interdisciplinary research in climate variability and change, with particular interests in the connections between climate and health, air quality, and environmental impacts. He continues to carry out funded research and publish, serving on several journal Editorial Boards and as Americas Editor of the International Journal of Climatology.


Dr. Gregor Coster
Professor Gregor Coster has been Dean of Graduate Studies at the University of Auckland, New Zealand, since 2005. He is a professor of general practice with research interests in health services research and policy, particularly in health needs assessment and priority-setting in health. The latter area is of international interest as governments endeavour to prioritise spending on health services under growing population pressures. His educational research interests are in the area of research higher degrees, particularly supervisor accreditation and training. He has recently sponsored research into international doctoral student experiences at the University of Auckland, correlating those experiences with the perspectives of supervisors. Presently he is researching policies and practices for supervisor accreditation and training at Universitas 21 and Go8 universities. He was formerly Chairman of the Royal New Zealand College of General Practitioners. He is currently Chairman of the Counties Manukau District Health Board, which funds and provides health services for that district. He was made a Companion of the New Zealand Order of Merit in 2007.

Dr. Alan Dench
Alan Dench is Dean of the Graduate Research School and Winthrop Professor in Linguistics at the University of Western Australia. He has also served as Executive Dean of Arts at UWA and as Chair of the University's Animal Ethics Committee, and has sat on the Australian Research Council's College of Experts. Dr. Dench's principal area of expertise is the primary documentation and description of Australian Aboriginal languages. He has written grammatical descriptions of three languages of the Pilbara region of Western Australia—Panyjima, Martuthunira and Yingkarta—and is working on a detailed description of Nyamal. His current research program, with colleagues in Australia, France, Belgium and the UK, is focussed on the investigation of the semantics of tense, aspect, modality and evidentiality in Indigenous Australian languages. Dr. Dench continues to teach undergraduate linguistics majors and has supervised a number of higher degree research students, most working on descriptions of endangered indigenous languages of Australia, Indonesia or the wider Indo-Pacific region. Dr. Dench has a Bachelor's degree in Anthropology from UWA, and Master's and PhD degrees in Linguistics from The Australian National University. He is currently President of the Australian Linguistics Society, a member of the Executive Committee of the International Society for Historical Linguistics, a member of the Australian Institute of Aboriginal and Torres Strait Islander Studies, and a Fellow of the Australian Academy of the Humanities.

Dr. Karen P. DePauw
Karen P. DePauw is Vice President and Dean for Graduate Education and tenured Professor in the Departments of Sociology and Human Nutrition, Foods & Exercise at Virginia Tech in Blacksburg, Virginia. Since her arrival at Virginia Tech, her major accomplishments include success in building a strong, diverse graduate community, the establishment of the innovative Graduate Life Center (GLC), and the signature initiative known as Transformative Graduate Education (TGE), which includes the Future Faculty and Global Perspectives programs. She was a founding member and Facilitator/Chair of the Virginia Council of Graduate Schools (VCGS), served as President of the Conference of Southern Graduate Schools (CSGS) in 2007-2008, and currently serves as Past Chair of the Board of Directors of the Council of Graduate Schools (CGS).

Professor Chao Hui Du
Dr. Chaohui Du is the Deputy Dean of the Graduate School, Shanghai Jiaotong University, and Professor at the School of Mechanical Engineering. He received his PhD in Mechanical Engineering from Northwestern Polytechnical University, P.R. China, in 1992. He then spent half a year at the , and two years at the University of Illinois at Urbana-Champaign as a post-doctoral fellow and . He was vice dean of the School of Mechanical Engineering for graduate education for four years, from 2002 to 2005. He was appointed Deputy Dean of the Graduate School, Shanghai Jiaotong University, in 2005. He remains an active researcher and has published 100 papers. His current research interests are wind turbine design and the development of energy-saving systems.

Dr. Barbara Evans
Professor Barbara Evans is Dean of the Faculty of Graduate Studies at the University of British Columbia. As Dean she is responsible for oversight of policy, management and quality assurance for master's and doctoral programs, generic skills training and research supervision. The Faculty also provides a range of academic support and professional development programs for graduate students, supervising faculty and staff. Prior to this appointment, Dr. Evans was Pro Vice-Chancellor (Research Training) and Dean of the School of Graduate Studies (SGS) at The University of Melbourne. She has been a keynote speaker at many conferences focused on graduate education in the US, Canada, Europe, Australia and Asia. She has been the Convener of the Universitas 21 Deans and Directors of Graduate Studies and of the Australian Deans and Directors of Graduate Studies; each group is committed to improving graduate education in a global context, quality assurance, and national and international benchmarking of research higher degree practices. Dr. Evans is also a key member of two important global networks focused on graduate and doctoral education. One is the "Strategic Leaders' Global Summits on Graduate Education," hosted through the U.S. Council of Graduate Schools. The other focuses on the "Forces and Forms of Change in Doctoral Education" and is organized through the "Center for Innovation and Research in Graduate Education" at the University of Washington. Originally a zoologist, Dr. Evans conducted research that resulted in over 100 publications, and she is author and editor of three award-winning biology textbooks for tertiary and senior secondary students, each now in its fourth edition.

Mr. Michael Gallagher
Prior to his appointment in May 2007 as Executive Director of the Go8, Michael Gallagher was Director of Policy and Planning at The Australian National University. He was responsible for Commonwealth administration of higher education from 1990-1994 and again from 2000-2002. Between 1994 and 1996 he was head of Corporate Services in the Department of Employment, Education and Training. From 2002 to 2003 he was head of Australian Education International within the Department of Education, Science and Training. He has a long history in the education industry, including as a teacher and lecturer at secondary and tertiary level and as a member of the Wran Committee on Higher Education Financing in 1987. Michael has worked overseas for the World Bank and continues to undertake work for the OECD on higher education issues.

Mr. Rod Gauvin
Rod Gauvin is a Senior Vice President for ProQuest. He joined the company in March 2001 and has held a variety of general management roles with ProQuest. Mr. Gauvin has more than 30 years of information industry experience in reference, professional, trade, and educational publishing. Prior to his appointment at ProQuest, he worked for the Thomson Corporation as President and CEO of South-Western Educational Publishing, and as Managing Director of Thomas Nelson & Sons in the United Kingdom. A long-time supporter of libraries, he was the architect of the Gale/Library Journal Library of the Year Award. His community and board positions include the following: Association of American Publishers (School Division); Software & Information Industry Association (Content Division); Vice President, Friends of Libraries USA; Friends of the Grosse Pointe Public Library; and Volunteer Neighborhood Club, Grosse Pointe. He is President of the Association of Library Trustees, Advocates, Friends and Foundations, a newly formed division of the American Library Association. Mr. Gauvin is Grand Knight for the Knights of Columbus Council 587, St. Thomas the Apostle, in Ann Arbor. A long-time supporter of United Way, he has run successful workplace campaigns for the United Way at ProQuest and has been a volunteer leader for the United Way Company Acquisition Campaign. He is currently serving on the Board of Directors for United Way of Washtenaw County. Mr. Gauvin received his BA degree, cum laude, from Assumption College in Worcester, MA.


Dr. Jeffery Gibeling
Jeffery C. Gibeling was appointed Dean, Graduate Studies at the University of California, Davis in August 2002. He oversees 87 graduate degree programs, of which more than one-half are organized as interdisciplinary graduate groups. He previously served as Chair of the Academic Senate at UC Davis and Executive Associate Dean of Graduate Studies. He joined the faculty at UC Davis as an Assistant Professor of Materials Science and Engineering in 1984. Professor Gibeling holds degrees in Mechanical Engineering and Materials Science and Engineering from . He also worked as an Acting Assistant Professor and Senior Research Associate at Stanford from 1979 through 1984. Professor Gibeling is the author or coauthor of more than 90 publications on the mechanical properties of materials and has guided the thesis and dissertation work of 25 graduate students. Dean Gibeling has promoted continuous improvements in information technology to enhance service of the Office of Graduate Studies to its clientele. He is also deeply committed to increasing the diversity of the graduate population at UC Davis. Under Dean Gibeling's leadership the Office of Graduate Studies has developed a comprehensive Professional Development program to ensure that graduate students complete their degrees and are prepared for successful careers. He has also devoted significant attention to the needs of postdoctoral scholars and established an award for Excellence in Postdoctoral Research. Dean Gibeling serves as Chair of the CGS Board of Directors, on the Association of Graduate Schools Executive Committee, and on the GRE Board.

Dr. Kebin He
Dr. Kebin He is Professor of Environmental Science & Engineering and Deputy (Executive) Dean of the Graduate School at Tsinghua University. Since receiving his PhD in environmental engineering in 1990, Professor He has been conducting research on air pollution control science, technology and policy for over 20 years. As a principal investigator, he has completed more than 30 research projects and published more than 130 peer-reviewed papers in academic journals. In addition, Professor He has been a senior visiting professor at the Technical University of Denmark, Leeds University in the UK, and in the USA. Professor He also serves as a member of the International Council on Clean Transportation and the Council for China Energy Research Society, and as a Senior Member of the China Society of Environmental Science.

Professor Narayana Jayaram
A professor of research methodology at Mumbai's Tata Institute of Social Sciences, Dr Narayana Jayaram was born in Bangalore, South India. He received all his post- in his hometown, including a PhD in Sociology. He taught sociology at Goa University and his alma mater, Bangalore University, and has taught research methodology at the Tata Institute of Social Sciences in Mumbai since 2003. He also served as Director of the Institute for Social and Economic Change in Bangalore and has lectured at numerous institutes and universities around the world, including The University of the West Indies (St Augustine) in Trinidad and Tobago, where he was Visiting Professor of Indian Studies for three years. Although Dr Jayaram specialises in , he has research interests in transdisciplinary areas including theory and methods, political sociology, and the sociology of diaspora. He has written, edited and adapted 14 books, and has authored over 110 research papers and 300 book reviews. Dr Jayaram is an active member of many professional bodies and academic research committees, including the Association of Third World Studies and the International Network on the Role of Universities in Developing Areas. He edits Sociological Bulletin, the journal of the Indian Sociological Society. He was on the panel speaking on "Sustainable Higher Education" at the Third G8 University Summit 2010 (Canada). He also serves as a member of the Joint Advisory Committee for Bilateral Collaboration between the Indian Council of Social Science Research and the U.K. Economic and Social Research Council.

Dr. Akihiko Kawaguchi
Dr. Akihiko Kawaguchi is Specially Appointed Professor at the National Institution for Academic Degrees and University Evaluation. Dr. Kawaguchi's research theme is the development of university evaluation; his fields of research are the life sciences and higher education accreditation and evaluation activities.

Education
3/1964: Faculty of Science, Okayama University
3/1966: Graduate School of Science,
1/1970: D.Sc. in Chemistry, Kyoto University

Work experience
4/1983: Associate Professor, College of Arts and Science, The University of Tokyo
4/1989: Professor, College of Arts and Science, The University of Tokyo
7/1994: Director of International Center, The University of Tokyo
4/1996: Professor, Graduate School of Arts and Science, The University of Tokyo
4/1999: Director of The University Museum, The University of Tokyo
4/2001: Professor, National Institution for Academic Degrees and University Evaluation
5/2002: Emeritus Professor, The University of Tokyo
4/2004: Professor and Dean, National Institution for Academic Degrees and University Evaluation
4/2006: Vice-President, National Institution for Academic Degrees and University Evaluation
4/2010: Specially Appointed Professor, National Institution for Academic Degrees and University Evaluation

Dr. Julia Kent
Julia Kent has been Program Manager in the Best Practices division at the Council of Graduate Schools since October 2008. At CGS, she has conducted research on a broad range of topics in graduate education, including quality and accountability, interdisciplinary programs, professional doctorates, graduate education for the responsible conduct of research, and international collaborations. Dr. Kent has co-authored (with Daniel Denecke) Joint Degrees, Dual Degrees, and International Research Collaborations, a report on the NSF-funded Graduate International Collaborations project, which studied the challenges of collaboration for U.S. universities and outlined promising practices for administering research and educational partnerships. She is also Director of the Global Summit program and has edited the past two summit proceedings, Global Perspectives on Research Ethics and Scholarly Integrity (2009) and Global Perspectives on Graduate International Collaborations (2010), along with the current volume. Before arriving at CGS, Dr. Kent was Assistant Professor of English at the American University of Beirut (AUB), where she served on the Executive Committee of the Center for American Studies and Research, an American Studies program and research center that draws visiting scholars from North America, Europe, and the Middle East.

Dr. Maxwell King
Professor Maxwell King is internationally recognized as a distinguished researcher in the field of econometrics. He has been a professor at Monash University since 1986. He is currently Pro Vice-Chancellor (Research and Research Training) and was appointed a Sir John Monash Distinguished Professor in 2003. He was head of the Department of Econometrics and Business Statistics from 1988 to 2000. Professor King was made a Fellow of the Academy of Social Sciences in 1997 and a Fellow of the Journal of Econometrics in 1989. He has held visiting professorships at the University of Auckland and the University of California, San Diego. He is a founding member of the Australian Council of Deans and Directors of Graduate Studies and was the Council's Convenor from 2007 to 2009. Despite a significant administrative load, he remains an active researcher, having published over 100 journal articles. He has supervised 41 PhD students to completion and received the Vice-Chancellor's award for postgraduate supervision in 1996. In 2009 he received a Career Achievement Award from the Australian Learning and Teaching Council.

Ms. Le Thi Kim Anh
During 10 years of work at the two Higher Education Projects in Vietnam (funded by IDA through the World Bank in Vietnam), Ms. Le Thi Kim Anh has participated in capacity building for the higher education sector at the system level; overseen strategic planning activities at all universities in Vietnam; overseen student advisory services activities at nearly all universities in Vietnam; and monitored 50 sub-projects (Quality Improvement Grants, or QIGs, and Teaching and Research Innovation Grants, or TRIGs) at 36 universities in Vietnam.

Education
• 2005 (August to December): Doctor Development Program, University of Queensland, Australia
• 1998: Master's Degree (with Honor, major in Education Assessment and Evaluation), University of Melbourne, Australia
• 1991: in from Hanoi Education University No 1
• 1985: Bachelor's Degree in Education, Orel National Education University (Russia)

Professional Experience
• January 2008 to present: Procurement Coordinator of the Second Higher Education Project, funded by the World Bank
• September 1999 to December 2007: Assistant to the National Project Director of the first Higher Education Project, funded by the World Bank
• 1986 to present: Lecturer at Hanoi Education University

Dr. Ursula Lehmkuhl
Ursula Lehmkuhl is Professor of Modern History and Chair of the History Department, John F. Kennedy Institute for North American Studies, Freie Universität Berlin. She teaches nineteenth-century American cultural and social history, twentieth-century American and Canadian diplomatic history, and the history of American and Canadian foreign relations. She has published several books, among them Pax Anglo-Americana: Machtstrukturelle Grundlagen anglo-amerikanischer Asien- und Fernostpolitik in den 1950er Jahren (1999). Her research interests include German immigrant letters of the 19th and 20th centuries, the history of Anglo-American relations during the 19th century, Canadian-American relations after September 11, and colonial governance in British and French North America. She is co-directing (together with Thomas Risse) the cooperative research center "Governance in Areas of Limited Statehood." From June 2007 to June 2010 she served as First Vice President and then as Acting President of Freie Universität Berlin, where she was responsible for the internationalization program of the university. Besides her academic research interests, she is involved in several programs in the field of higher education: in cooperation with Britta Baron she coordinated the Transatlantic Degree Program, and in cooperation with Matthias Kuder she directed a study on US-German cooperation in higher education financed by the Atlantis program. She is involved in the DAAD-sponsored "International Dialogue on Education."

Dr. Helene Marsh
Helene Marsh is the Convenor of the Australian Council of Deans and Directors of Graduate Studies for 2010-2011, a position she also filled in 2003-2004. Dr. Marsh is Dean of Graduate Research Studies and Professor of Environmental Science at James Cook University, Townsville, Australia. She is a conservation biologist who specialises in coastal marine mammals and has received several international awards for her research. Dr. Marsh has published some 200 works, including one book, more than 130 articles in refereed journals, chapters in books and encyclopaedias, conference proceedings, and technical reports. She has been a member or chair of the advisory committees of 43 PhD and 19 Master's students to completion and currently advises a large group of graduate students.

Mr. Austin McLean
Austin McLean is the Director of Scholarly Communication and Dissertation Publishing for ProQuest, Ann Arbor, Michigan. He oversees staff who develop and manage dissertations and master's theses publishing and products in all formats (digital, print, and microfilm). Austin also works in areas of scholarly communication and digital preservation at ProQuest, including coordinating the recent analysis of the ProQuest Dissertations and Theses database (PQDT), which was part of a Center for Research Libraries (CRL) study funded by the National Science Foundation (NSF). Austin is a frequent speaker at library conferences, having presented at ETD 2010, Online Information, ALA, and Internet Librarian, among others. He serves as Treasurer of the Networked Digital Library of Theses and Dissertations (NDLTD), a non-profit group dedicated to sharing knowledge and best practices for Electronic Theses and Dissertations (ETDs).

Dr. Kyung-Chan Min
• 1977 – 1981: PhD, Mathematics, , Canada
• 2008 – 2010: Dean of Graduate School, Yonsei University, Korea
• 2002 – 2005: Dean of University College, Yonsei University, Korea
• 2008 – present: Member, Presidential Advisory Council on Education, Science & Technology, and Chairman, Subcommittee for University Education
• 2008 – present: President, Citizen's Coalition of Scientific Society
• 2008 – 2010: Chairman, Policy Advisory Committee, Ministry of Education, Science and Technology
• 2006 – 2008: Chairman, Council for Promotion of Basic Sciences Research, National Science & Technology Council
• 2005 – 2009: Executive Board Member, International Fuzzy Systems Association (IFSA)
• 2005 – 2006: President, Korean Mathematical Society (KMS)
• 2003 – 2005: President, Korea Association of Liberal
• 2001 – 2002: President, Korea Association of Teaching and Learning Centers for University Education
• 1996 – 1997: President, Korea Fuzzy Logic and Intelligent Systems

Dr. Kim D. Nguyen
Dr. Kim D. Nguyen is the Vice Director General in charge of training programs, consultation and research in higher education, evaluation and accreditation at IER. She has been working for IER for more than seven years. Dr. Nguyen's expertise includes educational management, leadership in education, quality assurance and accreditation, research in education, educational assessment, evaluation, student evaluation and teaching methods. She has served various clients, including the Ministry of Education and Training, departments of Education and Training, and universities, colleges and schools in both the public and private sectors. She has built and participated in extensive quality networks in Vietnam, at both central government and local levels in Ho Chi Minh City and Hanoi. Dr. Nguyen has extensive experience with transitions in education management, gained in Australia, the US and Vietnam. She has published numerous articles on various educational issues in leading international and national educational journals, such as Quality in Higher Education, Educational Review and Education Science. Having assisted many clients in both the public and private sectors, international and national, Dr. Nguyen has built sound relationships with Vietnam Government agencies at both central and local levels, and has an extensive network of foreign professional contacts and organizations in Ho Chi Minh City and Hanoi. Before joining IER, Dr. Nguyen worked for HoChiMinh City University of Education as a lecturer, teaching Russian Literature to senior students. Currently, Kim is invited by many universities to teach postgraduate courses such as Educational Assessment and Accreditation, Educational Management, Quality Assurance and Management, Curriculum Development and Management, and Applied Research in Education.

Dr. Patrick Osmer
Patrick S. Osmer is Vice Provost for Graduate Studies and Dean of the Graduate School at The Ohio State University. Appointed in 2006, Osmer has since engaged Ohio State's graduate community in several major efforts, including a comprehensive assessment of the quality of Ohio State's 91 doctoral programs. The resulting report categorizes Ohio State's doctoral programs as ranging from high-quality to disinvestment candidates. The assessment process also uncovered a need for Ohio State to assess its wide-ranging life and environmental science efforts, and Osmer is co-chairing two task forces to determine how Ohio State can best move forward in these areas. Osmer is an authority on distant quasar evolution. He joined Ohio State as professor and chair of the department of in 1993. During 13 years as chair, Osmer provided leadership for building the department to internationally recognized high levels. In 2004, he was named Distinguished Professor of Mathematical and Physical Sciences. Osmer came to Ohio State from Tucson's National Optical Astronomy Observatory, where he had been deputy director from 1988-1993. From 1969-1986, Osmer was on the staff of the Cerro Tololo Inter-American Observatory in La Serena, Chile, serving as director from 1981-1985. Osmer earned a B.S. in astronomy with highest honors from the Case Institute of Technology and a PhD in astronomy from the California Institute of Technology.

Dr. Sang-Gi Paik
Education
• B.S. in Zoology, Seoul National University, Seoul, Korea (1966 - 1970)
• M.S. in Zoology, Seoul National University, Seoul, Korea (1970 - 1972)
• PhD in Zoology, Seoul National University, Seoul, Korea (1972 - 1978)

Positions Held & Related Professional Appointments
• Research Fellow, Molecular Biology Laboratory, Korea Atomic Energy Research Institute, Seoul, Korea (1975 - 1978)
• Research Associate & Visiting Instructor, Department of Genetics, Albert Einstein College of Medicine, Bronx, New York, USA (1978 - 1982)
• Research Director, Lucky Central Research Institute, Daejeon, Korea (1982 – 1985)
• Research Director, Lucky Biotech Corp., Emeryville, CA, USA (1984 – 1985)
• Professor, Chungnam National University, Daejeon, Korea [CNU] (1986 – present)
• Visiting , Life Science Center, RIKEN, Tsukuba, Japan (1989)
• Director, Institute of Biotechnology, CNU (1990 – 1993)
• Associate Dean, Office of University Planning and Research Affairs, CNU (1993)
• Member, Program Development and Review Committee, Korea Science and Engineering Foundation, Daejeon, Korea (1995 – 1997)
• Director, Research Institute of Basic Sciences, CNU (1995 – 1997)
• Dean, College of Natural Sciences, CNU (1999 – 2000)
• Director, Daedeok Valley Integrated Bio Resources Center, CNU (2005 – 2009)
• Dean, Graduate School, CNU (2009 – present)

Professional Society Leadership
• President of the Zoological Society of Korea, Seoul, Korea (2005 - 2006)
• President of the Genetics Society of Korea, Seoul, Korea (1997 - 1998)
• President of the Korean Society for Molecular and Cellular Biology, Seoul, Korea (2008)

Dr. Douglas Peers
Douglas Peers has been Dean of the Faculty of Graduate Studies and Associate Vice-President, Graduate, at York University since 2007. Previously he was a Professor of History and Associate Dean and Interim Dean of the Faculty of Social Sciences at the . He has recently been elected President of the Canadian Association for Graduate Studies and was a member of the Council of Ontario Universities Quality Assurance Transition/Implementation Task Force. In 2004 he was interim Vice-President (Programs) of the Social Sciences and Humanities Research Council of Canada. From 2005 to the present he has been on the management board of the Aid to Scholarly Publications Program of the Canadian Federation for the Humanities and Social Sciences. He has also served on the boards of the Alberta Gaming Research Institute and the Shastri Indo-Canadian Institute, and recently joined the Board of the GRE. He was elected a Fellow of the Royal Historical Society in 1993. He is the author of Between Mars and Mammon: Colonial Armies and the Garrison State in Early-Nineteenth Century India (1995) and India Under Colonial Rule, 1700-1885 (2006), and has published more than twenty articles and chapters on the intellectual, political, medical and cultural dimensions of 19th-century India in such journals as the Social History of Medicine, Modern Asian Studies, The Historical Journal, Journal of Imperial and Commonwealth History, International History Review, Radical History Review and Journal of World History. With Nandini Gooptu, he is currently co-editing India and the British Empire, a companion volume in the Oxford History of the British Empire series.

Dr. Laura Poole-Warren
Laura Poole-Warren is the Dean of Graduate Research and Professor in Biomedical Engineering at The University of New South Wales (UNSW). Her leadership position in the tertiary sector involves setting strategy and managing policy for the more than 3,000 higher degree research students at UNSW. She has research expertise in the development and commercialisation of medical devices, having spent the past 15 years as a research and teaching academic as well as two years in a start-up company in the USA as a Lead Preclinical Scientist. Professor Poole-Warren has also had a core role on the statutory Commonwealth Government Medical Device Evaluation Committee, advising on the safety of medical devices.

Dr. Mary Ritter
Mary Ritter (MA, DPhil, FRCPath, FCGI, FRSA) was appointed Pro-Rector for Postgraduate Affairs at Imperial College London in 2004, and took on the International portfolio in 2005. She was Head of the Department of Immunology from 2004-2006, and from 1999-2006 was Director of the Graduate School of Life Sciences and Medicine (GSLSM) at Imperial. Education and research: After a BA in Zoology and a DPhil in Immunology from the , and Research Fellowships in the USA and UK, she took up an academic post at Imperial College London. Her research centres on the development of the immune system; she has published more than 100 peer-reviewed articles on her research and taken out several patents. She has supervised more than 20 PhD students, all of whom have successfully gained their degrees. She was the founding Director of the GSLSM at Imperial, steering the Graduate School from its inception in 1999 to its current overarching role providing interdisciplinary research activities, an extensive skills training portfolio and quality assurance for all the postgraduate students in Life Sciences and Medicine. She subsequently helped to establish Imperial's second Graduate School, of Engineering and Physical Sciences, launched in 2002. She initiated and oversees both the design and delivery of Imperial's postgraduate and postdoctoral transferable skills training programme, which was awarded the THES prize in both 2006 and 2008. International and national responsibilities: Mary is Vice Chair of the European University Association (EUA) Council on Doctoral Education, Vice Chair of the Governing Board of the European Institute of Innovation and Technology (EIT) Climate-KIC, and was Chair of the UK-India Education and Research Initiative (UKIERI) Evaluation Panel. Among her committee memberships, she is a member of the International Advisory Panel of the A*STAR Graduate Academy of Singapore, the Council of AgroParisTech, the UK Prime Minister's Initiative (PMI) 2 for Higher Education, the UK Academy of Medical Sciences' Academic Careers Committee (non-clinical) and the EUA Institutional Evaluation Programme Panel of Experts; previously she served on the German Excellence Initiative (Institutional Strategies) Evaluation Group and the Programme Review Committee for the Cambridge-MIT Institute (CMI).

Dr. Richard Russell
Professor Richard Russell was appointed Dean of Graduate Studies at the University of Adelaide in January 2005 and Pro Vice-Chancellor (Research Operations) in 2009. Richard Russell was born in Southampton, England, in 1944 and immigrated to Australia in 1952. He graduated with a BSc (Hons) from the in 1967 and a PhD from the Research School of Chemistry at the Australian National University in 1972. He then spent a period of post-doctoral work at Imperial College, UK, before returning to a position with the University of New South Wales. Professor Russell's research interests have spanned the organic chemistry of reactive intermediates, photochemistry, molecular architecture, new reagents for chemiluminescence analysis and new instrumental methods of analysis. He was awarded his DSc by the University of Tasmania in 1999. He is a Fellow of the Royal Australian Chemical Institute as well as The Royal Society of Chemistry. Formerly Dean of the Faculty of Science and Technology and Professor of Chemistry at Deakin University, he has written over 170 research papers. Outside research, he is an enthusiastic educator, and was president of the 30th International Chemistry Olympiad held in Melbourne in July 1998. Professor Russell was awarded the Australian Award for University Teaching in Science in 1998 and was made a Member of the Order of Australia in the Queen's Birthday Honours List for 2001.

Dr. Illah Sailah
Dr. Illah Sailah is a lecturer in the Department of Agro-industrial Technology at Bogor Agricultural University, where she has led the Center for Human Resource Development within the Research and Community Empowerment Institute. Dr. Sailah also lectures on Human Resource Development Management in the Master Program of Management and Business at Bogor Agricultural University and is actively involved, as facilitator and trainer, in various DGHE (Directorate General for Higher Education) programs, including the student scientific writing competition, train-the-trainer courses, Study Program Arrangement, Competence-Based Curriculum formulation, Soft Skills Development in Universities, and Leadership Training for University Leaders. Dr. Sailah also has experience as Person in Charge (PIC) for the Quality Improvement and Quality Assurance projects in the Department of Agro-industrial Technology and for the Indonesia Managing Higher Education for Relevance and Efficiency project at Bogor Agricultural University. She currently also handles the collaboration with German partners (DAAD and ISOS, University of Kassel) on higher education management. In 2007 her major task was the establishment of the corporate culture of Bogor Agricultural University. In February 2009, Dr. Sailah was appointed Director of Academic Affairs at the Directorate General of Higher Education, Ministry of National Education, a role that involves formulating and disseminating training on internal quality assurance for higher education institutions in Indonesia.

Professor Allison Sekuler
Allison Sekuler is Associate Vice-President and Dean (Graduate Studies), Canada Research Chair in Cognitive Neuroscience, and Professor of Psychology, Neuroscience & Behaviour at McMaster University. She received her B.A. in Mathematics and Psychology from Pomona College in 1986, and her PhD in Psychology from the University of California, Berkeley in 1991. In her current and previous administrative positions, Dr. Sekuler has developed initiatives to support and enhance graduate student life and research training, spearheaded new undergraduate research initiatives, created new programs for Postdoctoral Research Fellows, and facilitated innovative international and interdisciplinary partnerships and programs for research and graduate studies. Dr. Sekuler's research focuses on vision science, cognitive neuroscience, ageing, and neural plasticity. She has won numerous national and international awards for research, teaching, and leadership, and has served on and chaired provincial, federal, and international panels and external boards related both to her research and to McMaster's mission. Dr. Sekuler is deeply committed to knowledge translation, co-founding several public outreach programs, including Science in the City, the MACafé Scientifique, and the Innovation Café, and helping create the Canadian Institutes of Health Research's national Café Scientifique series. Dr. Sekuler served as President of the Royal Canadian Institute for the Advancement of Science from 1998-2000 (council member, 1994-2002). She is a frequent public lecturer and commentator on scientific, research, and educational issues in the national and international media, and she currently serves on the national steering committee for the Science Media Centre of Canada.

Dr. Zlatko Skrbis
Zlatko Skrbis was appointed Dean of The University of Queensland Graduate School in April 2009. He was previously Professor of Sociology, Deputy Head of the School of Social Science, and Associate Dean of Research in the Faculty of Social and Behavioural Sciences. As Dean of the Graduate School he is responsible for the recruitment, progression and completion of research higher degree candidates throughout the university and contributes to strategic direction in the area of research higher degree training. He gained his PhD as a recipient of an International Postgraduate Research Scholarship. A sociologist with an international reputation in the fields of migration, nationalism and social theory, Professor Skrbis has been Vice-President of The Australian Sociological Association and is currently Vice-President of the ISA Research Committee on Racism, Nationalism and Ethnic Relations. He is the author of numerous articles and several books, including Long-distance Nationalism, Constructing Singapore, and The Sociology of Cosmopolitanism.

Dr. Debra Stewart
Debra Stewart became the fifth president of the Council of Graduate Schools in July 2000. Before coming to the Council, Dr. Stewart was Vice Chancellor and Dean of the Graduate School at North Carolina State University. Prior to that she held a variety of leadership positions in North Carolina, including Interim Chancellor (1994) and Graduate Dean (1988-1995) at UNC-Greensboro and then Vice Provost and Dean (1995-1998) at N.C. State. Dr. Stewart received her PhD in Political Science from the University of North Carolina at Chapel Hill, her master's degree in government from the University of Maryland, and her B.A. with a major in philosophy. The Council of Graduate Schools is the leading U.S. organization dedicated to the improvement and advancement of graduate education. Its more than 500 members award 94% of all U.S. doctorates and approximately 78% of all U.S. master's degrees, and CGS currently counts over 20 international universities among its membership. As a national spokesperson for graduate education, Dr. Stewart has served the community by chairing the Graduate Record Examination Board, the Council on Research Policy and Graduate Education, the Board of Directors of Oak Ridge Associated Universities, and the Board of Directors of the Council of Graduate Schools. She also served as vice chair of the ETS Board of Trustees, as a Trustee of the Triangle Center for Advanced Studies, as a member of the American Council on Education Board and of several National Research Council committees and boards, and on advisory boards for the Carnegie Initiative on the Doctorate, the Responsive PhD Project, and the Task Force on Immigration and America's Future. In November 2007, her leadership in graduate education was recognized by the Université Pierre et Marie Curie with an honorary doctorate. Her alma mater, the University of North Carolina at Chapel Hill, honored her in October 2008 with the Distinguished Alumna Award. She is the author or co-author of books and numerous scholarly articles on administrative theory and public policy. Her disciplinary research focuses on ethics and managerial decision making.

Dr. Dick Strugnell
Professor Dick Strugnell assumed the role of Pro Vice-Chancellor (Graduate Research) at the University of Melbourne in December 2007. As Pro Vice-Chancellor (Graduate Research), Professor Strugnell has responsibility for activity performance, oversight of the support mechanisms and academic extension programs, and quality assurance of research higher degrees at The University of Melbourne. Professor Strugnell holds an Honours degree and was awarded his Doctorate of Philosophy by Monash University. He is a medical microbiologist with an interest in vaccines against bacterial infections and anti-bacterial immune responses. He has worked at Monash University and at the Wellcome Research Laboratories in the United Kingdom. Professor Strugnell was appointed to a Senior Lectureship at the University of Melbourne in 1991, became an Associate Professor in 1999, and then Professor in 2001. He is a Fellow of the Australian Society for Microbiology (FASM) and a Member of the American Society for Microbiology. He was awarded a Churchill Fellowship in 1984, a CJ Martin Fellowship (NHMRC) in 1986, and the ASM Fenner Research Prize in 1999. He has held senior administrative positions with the CRC for Vaccine Technology and is a Director of VacTX Pty Ltd, a vaccine start-up company. He reviews for several major funding agencies, including the NHMRC, the Medical Research Council (UK) and the Wellcome Trust, and has served on WHO and GAVI technical committees. Professor Strugnell has published more than 120 peer-reviewed papers, and his research is currently funded by the NHMRC, the ARC and the Gates Foundation. He has served on NHMRC Grant Review Panels and is currently Panel Selector for Microbiology with the NHMRC. Professor Strugnell's interest in graduate training grew from his supervisory experience and his work in the CRC for Vaccine Technology, where he was part of the Education Advisory Committee, a body that provided extensive extension opportunities, including IP training and funded work experience, to some 90 PhD students over 13 years. He joined the Postgraduate Scholarships Committee and then the School of Graduate Studies as an Associate Dean at the University of Melbourne in 2005.

Professor Paul KH Tam
Professor Paul Kwong Hang Tam, MBBS (HK); ChM (Liv); FRCS (Eng, Edin, Glas and Ire); FRCPCH; FHKAM (Surgery), is Pro-Vice-Chancellor & Vice-President for Research and Dean of the Graduate School at The University of Hong Kong. Professor Tam graduated from The University of Hong Kong in 1976, and received his training and worked in the Department of Surgery until 1986. He was a Senior Lecturer from 1986-90, and Reader and Director of Paediatric Surgery at the University of Oxford from 1990-96. He has been Chair of Paediatric Surgery at The University of Hong Kong since 1996. Professor Tam is a dedicated clinician, researcher, teacher and university administrator. He specializes in the surgery and genetics of birth defects such as Hirschsprung's disease. He steers the University's research strategies and development and has served in numerous administrative positions. He also serves on various local and international associations of the medical profession; he was a member of the Biology and Medicine Panel of the Research Grants Council from 2000-2005 and President of the Pacific Association of Paediatric Surgeons in 2008-09. He is Associate Editor of the Journal of Pediatric Surgery and serves on the editorial boards of several international journals. He has given keynote lectures at international conferences, including the Journal of Pediatric Surgery Lecture and the Suruga Lecture. He is the recipient of numerous awards, including the British Association of Paediatric Surgery Prize and, most recently, the "China Outstanding Leadership Award in Endoscopy" from the National Office for Science and Technology, PRC.

Professor Tan Thiam Soon
Professor Tan Thiam Soon is Vice Provost (Education) at the National University of Singapore. He assists the Provost in setting educational directions and policies for the University, in ensuring high academic standards, and in education quality assurance for both undergraduate and graduate education. He has oversight of the Registrar's Office, the Office of Admissions, the Centre for Development of Teaching and Learning, and the Centre for Instructional Technology. Professor Tan is a faculty member of the Department of Civil Engineering and was Dean of Admissions from 2005 to 2007. He has also previously served as Vice-Dean for Undergraduate Programmes in the Faculty of Engineering. After graduating in New Zealand in 1979 under a Colombo Plan Scholarship, Professor Tan obtained his MS and PhD from the California Institute of Technology. His main areas of research interest are deep excavation, soil improvement, land reclamation and soil characterisation. He has given invited and keynote lectures at a number of international conferences and received ASTM's C.A. Hogentogler Award for 2007, the Public Administration Medal (Silver) in 2008, and the Best Research Paper Award 2008 from the Japanese Geotechnical Society.

Professor Tan is a registered Professional Engineer (Geotechnical) in Singapore and has been involved in numerous consulting jobs in Singapore concerning deep excavation, land reclamation and other geotechnical problems.

Dr. Mandy Thomas
Mandy Thomas is presently Pro Vice-Chancellor (Research and Graduate Studies) at the Australian National University, a position she has held since 2006. A social scientist by training, she has undertaken research on Asian migration to Australia, youth cultures, social and political change in Vietnam, and cultural traffic in the Asian region. She has published widely on topics at the intersection of migration studies, the lived experience of mobility, and aesthetics. Prior to her appointment at the ANU, Dr. Thomas was the Australian Research Council's (ARC) Executive Director for the Humanities and Creative Arts, a position she held from 2004-2006. Other positions she has held include Deputy Director of the Centre for Cross Cultural Research at the Australian National University (ANU) from 2002-2004, and Deputy Director of the Centre for Cultural Research at the University of Western Sydney (UWS) from 1997-2002.

Dr. Charles Tustin
Dr. Charles Tustin is Director of Graduate Research Services at the University of Otago in Dunedin, New Zealand. He is also the current Chairperson of the Scholarships Committee of the New Zealand Vice-Chancellors' Committee (NZVCC) and of the New Zealand Deans and Directors of Graduate Studies (DDOGS) group. Dr. Tustin holds degrees from the University of South Africa. In addition to his academic roles at the University of South Africa and, since 1994, at the University of Otago, he also has experience as a human resources practitioner in the business and education sectors. Dr. Tustin's current role at the University of Otago involves oversight of the University's doctoral and research master's programmes. The University was established in 1869 and is New Zealand's oldest and most research-intensive university. Approximately 20,000 students are enrolled at the University this year, of whom about 1,300 are doctoral candidates and 700 are research master's candidates. Approximately 3,000 staff members are employed by the University at its four campuses in Dunedin, Christchurch, Wellington and Auckland.

Dr. Surasak Watanesk
Dr. Surasak Watanesk is Dean of the Graduate School at Chiang Mai University and Associate Professor in the Department of Chemistry, Faculty of Science, Chiang Mai University. He is Chairman of the Consortium of the Graduate Studies Administrators of Public and Autonomous Universities and Chairman of the Council of the Graduate Studies Administrators of Thailand.

Education:
• 1975: B.S. (Chemistry), Chiang Mai University, Chiang Mai
• 1984: PhD (Chemistry), Northern Illinois University, U.S.A.

Professional:
• Aug-Dec 1995: Visiting Professor of Chemistry at a university in Texas, USA
• 1989 to present: numerous training courses and visits to universities, on both academic and administrative aspects, in many countries, including the United Kingdom, Germany, Denmark, Norway, Sweden, Australia, New Zealand, the U.S.A., Japan, Hong Kong, China, South Korea and Taiwan
• Fields of specialization: surface modification; chromatographic techniques for physicochemical and environmental studies
• Current research: surface modification of adsorbents to enhance their adsorption capacities; modification of silk fibroin for electroanalytical applications

Professor Yan Jian-hua
Yan Jian-hua graduated from Zhejiang University, receiving his bachelor's, master's and PhD degrees in 1982, 1985 and 1990, respectively. In 1985, he joined the Department of Thermal Physics at Zhejiang University as an assistant professor. In 1991, he became an associate professor in the Institute for Thermal Power Engineering at Zhejiang University. From 1992 to 1993, he was a visiting professor at the Technical University of Nova Scotia, Canada. From 1994 to 1996, he was director of the National Laboratory of Clean Coal Combustion at Zhejiang University, and from 1993 to 1996 he was deputy director of the Institute for Thermal Power Engineering. Since 1996, he has been chairman of the Department of Energy Engineering and director of the Design Institute of Energy Engineering at Zhejiang University, and since 1995 he has been a professor in the Department of Energy Engineering. From 1999 to 2007, he was executive dean of the College of Mechanical and Energy Engineering, Zhejiang University. He is presently a Cheung Kong Scholar. His major research interests include clean combustion, pyrolysis and gasification, pollutant control, combustion data acquisition, environmental protection in the energy conversion process, and waste-derived energy. He has published more than 200 scientific papers and three books, and holds 17 Chinese patents.

Professor Zhen Liang
Dr. Liang Zhen is currently Professor of Materials Science and Engineering and Executive Dean of the Graduate School of Harbin Institute of Technology. Dr. Zhen has served in a number of administrative positions, including Director of the Department of Materials Science from 2000-2002, Vice Dean of the School of Materials Science and Engineering from 2002-2004, and Vice Dean of the Graduate School of Harbin Institute of Technology from 2005-2009; he has been Executive Dean of the Graduate School since 2009. Liang Zhen received his master's and PhD degrees from Harbin Institute of Technology in 1991 and 1994, respectively. As Professor of Materials Science and Engineering, he conducts research on microstructure characterization, deformation behaviour, and the relationship between microstructure and properties of metal materials. He serves as Vice Secretary-General of the Materials Committee of the Chinese Mechanical Engineering Society and is also a Deputy Member of the Young Committee of the Chinese Materials Research Society.