Is there a Magic Link between Research Activity, Professional Teaching Qualifications and Student Satisfaction?

Adrian R. Bell and Chris Brooks, ICMA Centre, Henley Business School, University of Reading, Whiteknights, Reading RG6 6BA, UK; tel: (+44) 118 378 7809; e-mails: [email protected] and [email protected]

January 2016

Abstract This paper examines the key drivers of the satisfaction of UK students, as measured by the National Student Survey, using the entire database of students who completed it in 2014. We employ a wide range of factors that may potentially affect contentment, including both university-wide variables that capture the reputation of the university and the quality of its research, and school-specific measures incorporating the demographic and ethnic profiles of the teaching staff. We find overall that students are happiest when taught by staff with the following characteristics: white, full professors, holding doctorates, and on fixed-term contracts - although there is no link between satisfaction and the gender mix of the teaching faculty, or with the contractual designation of the staff as teaching intensive or teaching and research. In terms of the broader characteristics of the institutions as a whole, when we allow for a range of factors simultaneously, students are happiest at pre-1992 universities outside the Russell Group with high degree course completion rates and where the amount of top-rated research is lower. We find no link between student contentment and the percentage of faculty holding formal teaching qualifications.

JEL classifications: C52, I21, I23 Keywords: National Student Survey, student satisfaction, professional teaching qualifications, teaching quality.

Acknowledgements We would like to thank the audiences at the Universities of Reading and Bath for insightful comments. We are also grateful for helpful discussions with Cherry Bennett, Maxine Davies, Nathan Helsby, Eileen Hyder and Claire McCullagh. Tony Moore and James Walker also provided detailed and useful comments on an earlier draft.

1. Introduction

The UK higher education system has undergone radical changes over the past decade. The funding formula has altered significantly, so that the bulk of universities’ income now comes from the fees paid by the students themselves rather than from a government block grant. Research income from the government, distributed following the Research Excellence Framework (REF) 2014, now exceeds government teaching income as a result of this change in funding streams. 1 In 2011/12, before the latest reforms, UK universities had a combined turnover of £27.9 billion and a report by Universities UK found that ‘Most revenue is directly associated with teaching and research activity (with income from funding council grants, tuition fees and research grants and contracts amounting to 80% of the total)’.2 Within this mix, research grants and contracts contributed 16%, whilst teaching-related income accounted for 35%. By 2013/14, following the introduction of direct student payment, tuition fees accounted for 42% (up 7%) of total income, as the direct source of university funding was transferred from government to student, with the block grant dropping from 30% to 20% over the same period.3

These reforms have naturally been accompanied by a heightened focus on the ‘student experience’ (Gibbs, 2010; 2012), which has engendered several important changes in universities’ environments and modus operandi. First, students now feel a sense of empowerment as paying customers and they expect high quality teaching, good facilities, and high standards of organisation and of professionalism throughout their experience. Second, universities have become more corporate in their outlook and objectives, entrepreneurially establishing new subject areas and programmes with the objective of increasing student numbers to generate revenue. Universities compete in an increasingly international marketplace to attract highly qualified students (Chatterton and Goddard, 2000), who in turn are increasingly aware of the relative rankings of universities and departments, with the danger that such rankings then become increasingly ossified and a self-fulfilling prophecy. Achieving a high and rising position in the rankings is now considered a legitimate (and perhaps the most important or even only) objective in its own right rather than being merely a positive side effect of good performance on other, more specific indicators. Consequently, a poorer than expected positioning in the rankings is likely to lead to admonishment of deans and heads of department by university senior managers; the former will in turn pass on their disappointment to the rank and file, who are told that things must improve (Locke, 2014).

At the same time, and as an integral part of this intensification of competition, an increasing percentage of university teachers have obtained professional qualifications. This trend has been driven from several directions, including from students, from senior university managers, from

1 The current research funding pot is £1.6 billion. See ‘Winners and Losers in HEFCE funding allocations’, Times Higher Education, https://www.timeshighereducation.com/news/winners-and-losers-in-hefce-funding-allocations/2019306.article. 2 http://www.universitiesuk.ac.uk/highereducation/Documents/2014/TheImpactOfUniversitiesOnTheUkEconomy.pdf. The report found that ‘Through both direct and secondary or multiplier effects, the higher education sector generated over £73.11 billion of output and 757,268 full-time-equivalent (FTE) jobs throughout the economy. The total employment generated was equivalent to around 2.7% of all UK employment in 2011’ and that ‘In the year 2011–12, universities contributed over £36.4 billion to UK GDP. The off-campus expenditure of their international students and visitors contributed a further £3.5 billion to GDP. Taken together this contribution came to over £39.9 billion – equivalent to 2.8% of UK GDP in 2011’. 3 http://www.universitiesuk.ac.uk/highereducation/Documents/2015/higher-education-in-facts-and-figures-2015.pdf.


government, and from the individual academics themselves who have sought such qualifications as recognition of their commitment to teaching and learning or to enhance their perceived status. 4 The Browne Report recommended that all new academics who teach should be required to register for a teaching qualification, proposing that:

‘institutions require all new academics with teaching responsibilities to undertake a teacher training qualification accredited by the HEA [Higher Education Academy] and that the option to gain such a qualification is made available to all staff - including researchers and postgraduate students - with teaching responsibilities’ (Browne 2010, p. 50).

Thornton (2014) outlines how others have made similar calls. A former Chief Executive of the HEA, Craig Mahoney, told the Times Higher Education (THE) in 2010 that he ‘wanted to see every member of staff teaching in UK higher education to have formal qualifications in teaching’ (Atwood, 2012). Boffey (2012) reports that the National Union of Students (NUS) has also called for those teaching in higher education to be formally qualified. A report by the NUS and Quality Assurance Agency (QAA) in 2012 claimed, based on the results from a survey, that students wanted lecturers to improve their teaching skills:

‘Students want academic staff to develop their teaching styles to be more engaging, interactive and use technology and props to make the subject more accessible and interesting. Developing an active learning style is a teaching skill which needs to be taught and developed over time, and 34% of students in this research articulated that they wanted their lecturers to have better teaching skills’ (QAA and NUS, 2012).

However, not all academics share the view that we, as a profession, are in need of more training to teach. A well-cited paper by Layton and Brown (2011, pp. 163-4) argues that the assumptions behind these conclusions and recommendations are simplistic and ‘mask a neoliberal agenda and culture of managerialism’. Furthermore, it is interesting to note from the survey behind the quotation in the above report that only 34% of the student body expressed this view; whilst indicative of a desire to see lecturers improve their teaching skills, this is certainly not a call to arms. The formal teaching qualification agenda has become an accepted goal, and indeed most UK universities now require their new recruits to undergo an internally operated training and development programme leading to a postgraduate certificate in academic practice, and well-seasoned faculty are also increasingly applying for recognition via the experienced route, as described below. Further, a recent paper (Bryant and Richardson, 2015) finds some preliminary evidence that academics with a PhD, but without a teaching qualification, achieve very different student learning outcomes (positively and negatively) from those who hold both. On this basis, it is not clear that the teaching qualification route for all academic staff would benefit all students.

Universities, like many other branches of the public sector, are under increasing pressure to demonstrate that they provide good value for taxpayers’ money (now indirectly through the student loan schemes for fees and living support costs), and the push towards professional recognition in

4 It is not entirely clear that professional teacher status is considered equivalent to the status afforded to those producing research of sufficient quality to be entered for the REF (Cashmore et al., 2013; Davidovitch et al., 2011). Moreover, few promotions to senior positions at Russell Group universities occur purely as a result of outstanding performance in teaching (Locke, 2014, p. 20). Brown (2014) argues that the exclusivity of research has been crystallised by HEFCE’s decision to exclude from the REF definition of impact any effects on teaching practices at the researcher’s own institution; indirectly, this policy decision cheapened the value of research-led teaching, he suggested.


part represents a breakdown of trust between academics and the government as well as the general public regarding the competence of faculty to deliver high quality teaching and learning experiences to students. 5 It also represents a misunderstanding by those outside of universities about how academics spend their time, and a misalignment of expectations between academics and their students regarding the amount of ‘face time’ the latter can reasonably expect with the former for their fees. 6 The wording in a report by the UK government Department for Business, Innovation and Skills, that ‘we expect our reforms to restore teaching to its proper position at the centre of every higher education institution’s mission,’ certainly suggests that the government believes university students have hitherto received too low a priority and/or a poor level of teaching service.

Thus the decision by higher education institutions to prioritise the professionally qualified university teacher agenda is a multi-faceted political as well as economic one. In pure research terms, new universities find it hard to compete with the elite, who are able to ‘buy in’ superstars, offering a quality of research environment and paying salaries that the former are unable to match. Land and Gordon (2015, p.21) argue that there is a ‘pronounced disparity’ between the financial rewards for research excellence versus teaching excellence, which over time is bound to have an effect on the behaviour and strategic choices of faculty. It may thus be in new universities’ best interests to develop or project a different kind of ranking measure where they can excel, as the Huddersfield experience, which we document below, shows. Many universities have now established targets requiring that at least a specified percentage of their faculty be professionally qualified teachers, and growing the percentage of qualified teachers is a key aspect of the ‘professionalisation’ of teaching. For the academics concerned, this is very much a double-edged sword. Despite the obvious benefits of reflecting on teaching experiences and engaging with the pedagogic literature, the professionalisation term, and indeed the whole philosophy behind it, has been argued to evoke suggestions of a dictum from above to be imposed alongside ‘enforced accreditation, sanctions and incentives, performance measurement…’ (Locke, 2014, p.26).

Universities are naturally keen to maximise student satisfaction, not only to improve their positions in the rankings, 7 but also to lock into a virtuous circle where their reputations are enhanced and with them the number and quality of future student applications. National Student Survey (NSS) results are given heightened importance as key ingredients in many national league tables of universities (including the Guardian University Guide and the Complete University Guide), and moreover they are one aspect of the Key Information Sets that all universities are required to present to prospective applicants alongside costs (for the course and living expenses) and post-graduation employment rates (Taylor and McCaig, 2014).

What are the key drivers of student satisfaction? Existing evidence on this issue from the academic literature is examined in detail below. But from the students’ own perspective, according to the UK-

5 Students at the Heart of the System – White Paper, Department for Business, Innovation and Skills, June 2011, p.27 and now the Green Paper, Teaching Excellence, Social Mobility and Student Choice, November 2015, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/474266/BIS-15-623-fulfilling-our-potential-teaching-excellence-social-mobility-and-student-choice-accessible.pdf. 6 Research suggests that academics often work long hours for relatively low pay (Walker et al., 2010). 7 Thus the NSS is an instrument that further encourages competition between universities who desire high rankings.


wide Student Academic Experience Survey 2015 by the Higher Education Policy Institute (HEPI),8 39% of those responding rated ‘formal training to teach’ as the key characteristic that they sought, against only 17% rating their teachers being currently involved in research as most important. A considerable number of students believed that additional training for university teachers should be prioritised as an area for expenditure rather than, for example, improvements in the physical infrastructure. The requirement for formally trained lecturers was felt most strongly, with 49% rating this highest, at Russell Group universities, whereas students at Million+ 9 universities prioritised teachers with relevant industry or professional expertise (54% rated this attribute as their top priority).

So according to the SAES survey, students claim that their preference is to be taught by qualified faculty who have strong links with industry, with current research activity being rated as the least important attribute of the three by more than half of respondents. This is, perhaps, rather surprising given that entry requirements and demand for places are typically highest among the ‘elite research universities’ and are considerably lower among those institutions where a high proportion of faculty are professionally qualified and where faculty work closely with industry.10 This evidence also flies in the face of efforts by universities to strengthen the ‘teaching-research nexus’ and to ensure that teaching is research-led or research-informed, perhaps suggesting that they need not bother. Indeed, in a commentary on the HEPI survey results, David Palfreyman, Director of the Oxford Centre for Higher Education Policy Studies, is quoted as saying that, ‘Universities should admit that there is no evidence to suggest a ‘magical link’ between research and good teaching’. 11 Hattie and Marsh (1996) conduct a meta-analysis of 58 previous studies and also conclude ‘that the common belief that research and teaching are inextricably entwined is an enduring myth. At best, research and teaching are very loosely coupled’.

In this study, we directly examine the drivers of the NSS quality scores with a particular focus on the effect of the extent to which those teaching students are professionally qualified. Thornton (2014, p.1) claims that the possession of formal teaching qualifications is ‘seen by many as key to enhancing the student experience’, but we are not aware of any evidence as yet available for or against this viewpoint, and in this paper we directly test this purported linkage. 12 Locke (2014) reports summary statistics for the demographics and employment conditions of academics in the UK but there is no attempt to link these with student satisfaction. We use a multi-layered dataset that formally tests the link between student satisfaction, the proportion of faculty with professional teaching qualifications, the research quality of the university and a range of other potentially relevant factors. Indeed, there is no current evidence to suggest any ‘magical links’ between any independent drivers

8 The results of the survey are summarised in “Student survey puts teaching high on agenda”, Times Higher Education, No. 2206, 4-10 June 2015. 9 Million+ is the mission group representing post-1992 universities. 10 These linkages are hard to measure empirically, but we find that the correlation between degree course entry standards, a proxy for the demand for places, and the percentage of 4* research within an institution according to the REF, is 0.85. Meanwhile the correlation between entry standards and the percentage of teaching faculty who were previously employed in practice (rather than in another HEI or were students) is -0.2. 11 Times Higher Education, No. 2206, 4-10 June 2015, p.6. 12 Thornton also argues that HEA Fellowships are awarded to reflective teaching practitioners, although they may or may not be good or popular practitioners, which makes suggestions of a link between satisfaction and teaching qualifications even more tenuous.

and student satisfaction derived from the student experience including teaching. As such, ours is the first quantitative study of its kind.

2. What are the drivers of student satisfaction?

As Mavondo et al. (2004) note, students are variously seen as customers receiving a service (Guolla, 1999), as co-producers of knowledge who share responsibility for their learning with their university (Armstrong, 1995) and as products (Guolla, 1999) which the university then ‘places’ in the job market. Clearly, the perspective from which students view themselves will influence how they evaluate their satisfaction in an essentially immeasurable fashion.

The existing evidence in the academic literature regarding student satisfaction is somewhat sparse, geographically very widely spread and mainly focused on the individual student level. When students evaluate the quality of a course and reflect on their overall satisfaction, they undergo a cognitive process in which they compare their prior expectations about the quality of the delivery and outcomes with their perception of the corresponding actual performance and outcomes (Zeithaml et al., 1993). The quality of the course, aspects of the curriculum (Browne et al., 1998) and the campus environment (Borden, 1995) are all argued to be drivers of satisfaction. Mavondo et al. (2004) find student satisfaction at an Australian university to be a positive function of student orientation, student service, the quality of library resources, and of teaching quality but unrelated to the quality of educational technology. Poor satisfaction can result from a mismatch between delivery and expectations in any of these areas but it has been argued that support services are commonly perceived as less satisfactory than the academic aspects of the courses (Kotler and Fox, 1995).

An early study by Aldridge and Rowley (1998) investigated student satisfaction at Edge Hill University in order to make recommendations for the university’s quality plan. This was an attempt to implement the findings of a questionnaire-based study, drawing upon the local student charter. The sample was small, but concerns were expressed around areas such as the personal tutor system and having good, reasonably priced food (p.202). This demonstrates that sometimes students will express satisfaction based on some rather basic ‘hygiene’ factors rather than concentrating on the most substantive issues.

In a US study of undergraduate business students, Letcher and Neves (2010) find that the quality of teaching in the specific subject matter has little or no effect on student satisfaction. Instead they find that other factors have a greater impact on satisfaction, including self-confidence, extra-curricular activities, careers, and the general quality of teaching. The study uses a student evaluation format to investigate what is important to undergraduates when recording satisfaction with their programmes and university experience. The in-depth study attempted to home in on programme-specific attributes for the business school to see how important these were for student satisfaction. They did not find any direct link between the two, but they did show, importantly, that students who had a strong sense of self-confidence were satisfied by their business school experience. This means that other attributes are important to consider alongside academic learning, which could be developed by other interventions – not just curriculum-based – for instance careers, volunteering, or other activities.


Letcher and Neves point out that universities are interested in student satisfaction for two major reasons: firstly and positively, that it leads to greater retention and academic achievement by the students themselves; and secondly and more selfishly, good ratings of satisfaction lead to good public rankings, which enable universities to enhance their prestige, recruit the best students and fulfil their annual quota for new students. This understanding leads the senior leadership of universities to think carefully about how to improve the university environment in the widest sense in order to capture these outcomes to the greatest possible extent. Hence, for example, the desire of many universities to achieve a certain percentage of academics holding HEA membership or equivalent as part of their strategic plans, and the many capital projects undertaken by UK universities to improve their physical environments. According to a report in 2014 by the Association of University Directors of Estate (AUDE), UK university estates’ turnover is £27 billion, placing the university sector as equivalent to the fourth largest FTSE listed company, and implying that ‘Higher Education Capital Expenditure [is] Greater Than Entire Cross-rail Budget’. 13 There is an interesting conundrum that they also discuss, namely that other research has established academic performance to be key to student satisfaction. Even more intriguing, they quote Pike’s (1991) conclusion that satisfaction exerts a greater influence on exam performance than academic performance does on student contentment. Thus happy students perform well, as opposed to high performing students being happy!

Douglas, McClelland and Davies (2008) investigate student satisfaction using a Critical Incident Technique in order to develop a new model, arguing that ‘service quality is a precursor to student satisfaction’ (p.21). They aim to identify the drivers of student satisfaction by looking for the ‘moments of truth’ that will impact on the reported satisfaction. However, the study is conducted at only one university. They find that students value ‘communication and responsiveness within the teaching and learning environment and access and responsiveness within the ancillary services environment’ (p. 32).

Another salient trend in UK higher education in recent years has been the growth in the number of international students, especially from Southeast Asia and mainland China. A priori, one might expect international students, especially those whose first language is not English and who typically pay considerably higher fees for their undergraduate education, to demand higher standards of teaching, support and professionalism. But perhaps surprisingly, Mavondo et al. (2004) find that foreign students at an Australian university are satisfied with a smaller pool of resources than domestic students. The potentially important areas they identify are teaching, learning technology, the library, student services and student orientation. Mavondo et al. find a weak link between the quality of teaching and satisfaction (p. 56) and that technology has no association with satisfaction, though it did with whether students would recommend the university (p. 56).

A comparable study by Arambewela and Hall (2009) looked at the experience of international students studying business at Australian universities. They found that different nationalities had varying notions of what they felt was important among service quality factors (p. 555), examining both educational and non-educational ‘constructs’. Under the education heading, they found feedback, access to lecturers and the quality of teaching to be the most important variables (p. 561). The non-educational issues included counselling, social activities, close working relationships with

13 http://www.aude.ac.uk/news-and-events/news/uk-university-estates-turnover-hits-27-billion/

other students and orientation programmes. Arambewela and Hall found that both educational and non-educational factors were significant variables in explaining student satisfaction.

When students are reflecting on the quality of the education that they perceive they have received, they may bring a whole host of incidental factors into the evaluation process. Further, Merritt (2008) studies general biases in evaluations related to law schools and states, ‘The way in which a professor walks into the room or smiles at the class can affect student ratings much more substantially than what the professor says or writes on the blackboard’. She cites a famous experiment by Naftulin et al. (1973), ‘The Dr Fox Lecture’, where an actor delivered a lecture of nonsense but with a ‘warm manner’ (pp. 242 and 239), receiving glowing evaluations.

A more recent study has shown that ratings of academic teaching can be biased based on the gender of the teacher. Boring (2015) investigates whether the gender of a professor influences the scores that he or she receives in student evaluations. By investigating a large dataset from Sciences Po in France, she is able to conclude that ‘Student Evaluations of Teachers (SET) increase biases against female teachers. And male teachers have a premium based on gender’ (p. 30). However, all evaluated seminar teaching is delivered by adjuncts who are appointed sessionally and are only reappointed if their SETs are good enough. The way that the seminars are set up is quite restrictive and students have to complete the SET before they are able to receive their transcript. All of this may produce behaviour from the students that drives these results in some way not investigated in the paper, but nonetheless some interesting and strong findings emerge. Students do link their evaluations to performance in continuous assessment, thus demonstrating that seminar leaders can ‘purchase’ high SET scores (p. 19). But there is no link between the satisfaction rating and performance in the final exam, which is marked anonymously and the results only known after the SET is complete (p. 28). This leads Boring to suggest that teaching quality is not measured by SETs (p.2), which opposes other research suggesting that high student satisfaction is correlated with high grades.

As we demonstrate below, when we ignore other factors and simply split the sample into two parts, we find that satisfaction is higher at Russell Group universities than at non-Russellers. It is possible that high scores in student evaluations for elite institutions are merely manifestations of brand loyalty, so that they are punished to a far smaller degree for failing to meet expectations than their less renowned counterparts. Such students may internalise their problems with service failures, believing that it must be their own expectations or judgements which are faulty since the hallowed institution must be beyond reproach. The reputation of a university has been shown to affect student retention rates and loyalty (Eskildsen et al., 1999; Nguyen and LeBlanc, 2001; Helgesen and Nesset, 2007). Brand and the university’s position in the league tables are highly correlated and both will affect the ability of the institution to attract high calibre students (Palacio et al., 2002).

Alves and Raposo (2010) document the importance of a university’s brand image as a driver of student satisfaction at Portuguese institutions, but it is not clear whether the reputation of the specific school in which the student studies or that of the university as a whole is more relevant. The overall image of the institution will be a weighted average of those of the component schools and there could be wide variation within a given institution – for example, where a highly reputed business school operates within a university that is poorly ranked. The study found that image is the most important influence on student satisfaction and student loyalty. This is a compelling thesis and


leads the authors to suggest that the quality of provision is not a concern. Universities should instead concentrate on measuring students’ perceptions of image and then try to change it. However, we would argue that the scope of the study is limited in that it considers image from the perspective of the student perception and gathers data on this variable using the same questionnaire used to collect student satisfaction. The strong support could therefore be self-fulfilling and indeed expected in such a design. Image was measured by asking the student three questions on their views of whether ‘it was a good university to study at; was it an innovative university focused on the future; and a university which provides good preparation to students’. The construct of image is therefore centred on current students, and made by students who have already chosen to study at the institution, rather than being exogenously determined and widely supported.

Palacio, Meneses and Perez (2002) also examine image as a factor in student satisfaction and find that it does have an influence. In this context, they define image in terms of the student’s cognitive and affective impression of the university, which they equate with brand image rather than using league tables. Brown and Mazzarol (2008) look at Australian universities and find that the most important driver of student satisfaction and loyalty is institutional image (p. 81). To measure image, they consider both the institutional performance on service quality and how practical the courses were, alongside how well-established or prestigious the university is (p. 86), which is found to be more important to students than ‘humanware’ and ‘hardware’ (p. 90). This finding is again worrisome, as Brown and Mazzarol suggest that universities could conclude that the pursuit of a strong and clear brand is more important than the quality of service, although it might be that the latter actually leads to the former rather than the other way around.

The above line of research may lead us to make the supposition that institutions with an elite image will always obtain high satisfaction scores. However, a quick eyeball of the most recent NSS data at the aggregate level shows us that Oxford ranked 15th, Cambridge 31st and LSE 135th. How can this be explained? Could we be facing two different approaches to image within UK universities? Under this image construct, some of the leading internationally recognised universities in the UK are concentrating on their research brand, potentially to the detriment of their interest in student satisfaction, as this measure does not impact on international university rankings of that type. In parallel, students choose to study at these institutions for something other than to be satisfied with their experience of teaching and learning. They want to receive a degree from a university with an internationally leading image and join a global network of peers and alumni. It therefore follows that such universities are never going to have to compete on satisfaction to attract students. The remainder of the UK sector must fight for students and needs to excel nationally in league tables such as those of The Guardian and The Complete University Guide, and such universities are making much more effort to ensure that students who choose their programmes leave extremely satisfied with their experience.

3. The Institutional Framework for Student Evaluation and Professional University Teaching Qualifications in the UK

3.1 The National Student Survey and Alternatives

The National Student Survey (NSS) is a questionnaire-based measure of student satisfaction established in 2005 at the behest of the UK government. The survey can be taken by all final year undergraduates at Higher Education Institutions in England, Wales, Northern Ireland and Scotland.


The aims of the NSS at the point of its establishment were to audit the quality of courses run by HEIs, to make them more accountable for quality, and to support the decision-making of future university applicants. 14 The NSS is a cumulative measure of satisfaction (Parker and Matthews, 2001) that takes place towards the end of a student’s experience and involves respondents balancing a large number of factors to arrive at specific satisfaction measures for each category of question. The NSS comprises a total of 23 questions split into seven categories (the teaching on my course; assessment and feedback; academic support; organisation and management; learning resources; personal development; and overall satisfaction).

While it is increasingly being used as a policy instrument for bringing about changes that enhance the student experience and as a means of competition in rankings tables, student satisfaction surveys in general, and the NSS in particular, have been criticised on a number of key grounds, both philosophical and operational (Sabri, 2013). Some dangers of un-scientific and top-down metrics of student satisfaction are pointed out by Gruber et al. (2010), who discuss a new measurement tool for student contentment. Their study was motivated by the decision to introduce tuition fees at German universities (since reversed), as the authors believed that institutions would now have to treat their students as customers. They felt that the UK was a leader in this area whereas Germany had not paid attention to either measuring or trying to improve the student experience. They point out that due to the unique nature of higher education, service quality cannot be measured objectively (p.107). They rightly explain that the different stakeholders – students, government and professional bodies, for instance – have very different measures of quality. They also ‘regard service quality as an antecedent to satisfaction’ (p. 108). Intuitively, they develop the study with the belief that universities can only satisfy their students if they know what the students actually want, rather than basing the service delivery on what they perceive that students want (p. 108). This may sound rather obvious, but Gruber et al. refer to studies demonstrating that academics and administrators prefer to rely on their own view of what students need. The study actually finds that student satisfaction is correlated with satisfaction with lecturers, university facility quality, and the relevance of teaching to practice. Their paper describes an experiment with two samples of students from one university in Germany, and the authors admit therefore that the results are not even representative of the whole student population in that country.

Methodologically, the validity of the NSS as a measure of the student experience has been questioned (Yorke, 2009). Thus, from a practical perspective, the NSS has not been without its critics regarding several aspects. First, there is evidence that universities may be prone, to differing degrees, to ‘cheat’ the scores. 15 Second, it has been argued that the satisfaction scores of ‘the vast majority of institutions fall within a narrow range that is covered by sampling error’, and in such circumstances the rankings that are based on the survey will have little meaning. Third, the need to ‘keep the customers happy’ may engender a fall in standards where students are spoon-fed and marking is unduly lenient in order to raise the scores the easy way. This, dovetailed with the heightened emphasis on transferable skills and employability, may encourage universities to increasingly act as

14 H Swain, ‘A hotchpotch of subjectivity’ The Guardian , 19 May 2009. 15 It was reported that London Metropolitan and Kingston Universities manipulated the scores at their institutions – see L. Harvey, ‘Jumping through hoops on a white elephant: a survey signifying nothing’ Times Higher Education , 12 June 2008.

training colleges (Taylor and McCaig, 2014) at the expense of the development of deeper intellectual and analytical abilities.

There is thus likely to be considerable sampling variation in average NSS scores for each individual course from one year to the next since each group of students will complete the survey only once. Thus perceptions of quality, and therefore relative ranking positions, may vary erratically from year to year despite the course structure, teaching faculty, facilities and assessment approaches remaining ostensibly the same. It is clear that while the scores are certain to vary from one cohort to another for a specific programme within a given school, academics fear that their institutional managers will expect them to rise monotonically.

The results from the NSS were identified as early as 2007 as being important for student selection (Asthana and Biggs, 2007), and indeed this is one of its core purposes, although the overall ranking of institutions is more influential for applications (Gibbons et al., 2015). Yet there is a clear danger that the information contained within the survey findings is likely to be consumed in a very undiscerning way by prospective degree course applicants, who have no detail on the contexts of how or why a particular set of satisfaction scores arose, and could as a result make worse subject or institution choices ex post than they would have done in the absence of any satisfaction information. Any attempt by the institution concerned to explain or justify low ratings for a particular course will be summarily dismissed as lame excuses or sour grapes. The existence of the survey is even argued to have fundamentally changed the student-teacher relationship (Gornall and Thomas, 2014) and with it students’ notions of what a university is for (Collini, 2012).

It is also possible that there are spillover effects between the categories of questions used in the survey so that, for example, a particularly bad experience with accommodation or even a trivial mix-up with room bookings at an inopportune moment close to the survey completion date will engender a jaundiced view of the entire educational experience, irrespective of how good the quality of course delivery had been. 16 Land and Gordon (2015) argue that student surveys unhelpfully encourage respondents to conflate service satisfaction with teaching excellence and ‘draw heavily on the logic of student as consumer’ (p.21). Land and Gordon propose that the NSS should more formally separate the two issues into individual sections. Thus, as things stand, the satisfaction measures ought to be viewed as general indicators of overall happiness with the provision rather than specific categorised viewpoints. The very existence of the NSS has fostered and encouraged a sense of student activism since, despite universities’ discouragement from doing so, the survey can be used as a weapon of revenge by any disaffected students who feel that their experience has fallen short of expectations.

It appears that the future of the NSS itself is currently up for discussion. 17 It is expected that the new TEF (Teaching Excellence Framework) will somehow have an aspect of student satisfaction metric to feed into it, and evidence from the NSS was used to inform the current Green Paper discussed earlier. The direction of the current debate suggests that the NSS’s days in its current form are numbered and that instead student engagement should be a focus of the questionnaire alongside student satisfaction. A review of the NSS by HEFCE found that the survey does not take account of

16 The somewhat narrower problem whereby students focus on a limited range of information has been termed ‘cherry picking’ – see Callender et al. (2014). 17 Times Higher Education, 23 July 2015.

student engagement with learning, and recommends 11 new National Survey of Student Engagement (NSSE)-style questions by 2017. 18 A report entitled ‘Dimensions of Quality’ by Graham Gibbs argues that quality can be gauged by various measures of class size, teaching staff, the efforts students make and the quality of feedback they receive. Relatedly, the ‘HEA UK Engagement Survey’ was piloted in 2013 and a larger trial took place in 2014. Yet the broad principles behind the NSS have received strong support from the UK government, which has argued that it has ‘good internal consistency’ and ‘does not need radical alterations’. 19

The Student Academic Experience Survey (SAES) is a complement to the NSS which asks fundamentally different questions. The SAES was introduced in 2006 to examine the impact of increasing student fees on their perceptions and priorities (Buckley et al., 2015). The survey is now conducted annually and is run jointly by the Higher Education Policy Institute and the HEA. In 2015, around 15,000 students completed the survey, representing a much lower response rate (22%) than that achieved by the NSS. While we do not have data from the SAES, its findings have informed and motivated our research questions. 20

In summary, it seems reasonable to argue that many criticisms of the NSS are generic to any method of gauging students’ views of their university experience, and that within its own genre it could be viewed as a comprehensive and robust barometer and therefore worthy of quantitative analysis. No other survey in the UK has such comprehensive coverage across both the subject and institutional dimensions and, as we discussed above, the results from the NSS are of strategic importance to universities since they occupy such a key position in several rankings. The existence of the NSS data, which are publicly available, provides a unique environment in which to examine student satisfaction on a nationwide basis covering all subject areas taught by each university. Although each student has (almost invariably) only one undergraduate university experience, and may make their judgements on a different basis and using different ordinal values for a given received level of quality, the fact that every student faces the same questions and must provide answers on the same scale means that comparisons across fields and institutions are possible. Our study focuses on data aggregated to the HESA subject code level within each institution so that such differences in standards of quality assessment between students should to a large extent wash out. 21 Student responses remain confidential so there is no reason to suppose that respondents have any incentive to be dishonest in representing their views. In 2014, the year for which we use data, a total of 321,449 students completed the questionnaire, representing a response rate of 71%.

18 The NSSE is a US initiative, similar in spirit to the NSS but focused on a smaller number of specific subject areas. The NSSE asks a larger number of more penetrating questions than the NSS concerning the extent to which students have put efforts into their studies and the opportunities to learn that have been made available to them. 19 ‘UK review of information about higher education – National Student Survey – A literature review of survey form and effects’, by DELNI, HEFCE, HEFCW and SFC, 2015. 20 We made a request for the data from the HEPI survey to the organisation’s Director but did not receive a response. 21 Of course, it may be the case that students at specific institutions or students in certain subject areas may have different expectations of their own student experience, but we allow for this in the analysis using fixed effects for institutions and for subject areas.


3.2 The HEA and its Fellowships

In the UK, a professional teaching framework for academics is organised via the HEA.22 The HEA was established in 2004 as the successor to the Institute for Learning and Teaching and as a professional organisation to champion standards of teaching. 23 The UK Professional Standards Framework (UKPSF) was established by the university sector and is ‘a set of professional standards and guidelines for everyone involved in teaching and supporting learning in HE’.24 The UKPSF is organised along three dimensions: the areas of activity that university teachers and support staff may be involved with; the knowledge these staff members are expected to possess; and the professional values that they should embody. Teachers and support staff whose experiences adequately map onto these dimensions can apply for recognition as Associate Fellow, Fellow, Senior Fellow or Principal Fellow of the HEA. 25 There are two avenues to apply for recognition: the “experienced route” – for those with considerable existing experience of teaching and learning – and the “accredited route” – for those who have completed HEA-accredited courses and obtain the qualification through a postgraduate certificate in academic practice. Naturally, the HEA itself has also enthusiastically encouraged the drive towards increasing the number of professionally qualified teachers and with that its membership.

The raison d’être of the HEA is to foster excellent teaching, but what precisely is excellent teaching? Unfortunately, it is a rather slippery and ill-defined concept, and it is not clear how, to what extent and in what ways excellent teaching affects student learning (Gunn and Fisk, 2013, p.9). Gibbs (2008) lists no fewer than twelve different types of teaching ‘excellence’, emphasising the difficulty in objectively and uncontroversially measuring it. Concepts of excellence seem to arise retrospectively from anecdotal evidence rather than from any systematic theory or framework. This lack of any meaningful consensus over the definition of excellence will be highly problematic for any putative teaching excellence framework, making it challenging to separate satisfactory teaching from excellent teaching (Gunn and Fisk, 2013). The demonstration of impact on the outcomes of student learning (e.g. through graduate employment statistics) is increasingly a requirement of teaching excellence, mirroring the incorporation of impact into the evaluation of research in the REF (Land and Gordon, 2015).

As a leading exponent and early adopter of a target for the percentage of teaching-qualified academics, Thornton (Pro Vice-Chancellor of the University of Huddersfield) describes his university’s strategic aim of using the UKPSF as a route to HEA accreditation for all of its academic staff (Thornton, 2014, p. 6). 26 His paper also includes the intriguing finding that 65% of staff did not make any changes to their teaching practice as a result of the institutional process, 27 although a wider observation was that staff involved in teaching but not directly in

22 Additionally, a modest percentage of professionally qualified university teachers hold a Postgraduate Certificate of Education (PGCE). 23 J. Gill, ‘Behind the scenes at the academy’, 7 August 2008, Times Higher Education. 24 https://www.heacademy.ac.uk/recognition-accreditation/uk-professional-standards-framework-ukpsf#sthash.A71xybQ5.dpuf. 25 Note that in common with other studies, throughout this paper we refer to membership of the HEA as a teaching qualification; however, strictly this is not the case although HESA recognises it to be equivalent. 26 They started on this process in 2009 and the strategic aim was achieved in November 2012. 27 One could argue that this is not surprising since those obtaining HEA Fellowship at any level are already demonstrating the required skills and attributes, and that their engagement with the application process

lecturing roles found the whole process beneficial. But he nevertheless argues that the institution’s NSS scores improved significantly over the period, as did the proportion of students gaining an upper second or first class degree. So although his paper does not attempt to find any direct link, it does sketch out the rationale for believing that there might potentially be a relationship between teaching qualifications and student satisfaction.

A recent paper (Bryant and Richardson, 2015) tries to measure whether students achieve differing learning outcome success by relating this to teaching by academics holding a PhD alone or a PhD and a teaching qualification combined. The paper, strangely, claims that the role of teaching qualifications as a factor in driving student success, and thus retention, has been overlooked, and cites a rather limited set of literature. Nevertheless, they report intriguing findings and describe those academics with both a doctorate and a teaching qualification as ‘damage controllers’ and those with just a doctorate as ‘perfection seekers’. They base this on the success outcomes of students, where those taught by academics holding both a doctorate and a teaching qualification produce more pass results, whereas those taught by academics without a teaching qualification produce more fails but also more results of distinction quality. They acknowledge the exploratory nature of this research.

Competition for power and prestige manifests itself not only between but also within universities, and faculty who are employed in teaching-focused roles seek to drive an agenda that many research-intensive colleagues have neither the time nor the inclination to follow. Numerous highly experienced, and in particular long-serving and senior, faculty are unwilling to undergo the process required to become a qualified teacher. Encouraging reflective or innovative teaching is surely a laudable objective, but there are dangers if this is driven by bullying within institutions in order to enhance league table positions (Copeland, 2014). University teachers may therefore have internalised this objective not because they genuinely desire to be better teachers, but rather because they know that this is the new rule of the game for careerists who wish to get ahead. If the motives for undertaking professional teaching courses are dubious, or the courses do not result in changes in teaching approaches or practices, then such qualifications are unlikely to be linked with student satisfaction. It is also likely that the academics who are most committed to teaching and learning will be among the first wave to apply for professional recognition through the HEA, with those in most need of renewed enthusiasm and fresh ideas being later adopters or not adopting at all.

The newly proposed Teaching Excellence Framework (TEF) has been widely previewed and is now outlined in the recent Green Paper. 28 It had been expected within the academic community that a key metric in the TEF would be the proportion of academic staff holding teaching qualifications. But if this was the intention, then it seems to have been lost in translation, with no mention of teaching qualifications or HEA membership becoming a metric within the proposed TEF. 29

merely recognises their already high standard of teaching and assessment rather than highlighting problems requiring remedial action. 28 Teaching Excellence, Social Mobility and Student Choice, November 2015. 29 One possible explanation is a serious lack of data upon which to base any policy or assessment. Currently, the Higher Education Statistics Agency (HESA), and indeed the higher education institutions themselves, simply do not know how many of their teaching staff are professionally qualified. It has been argued that this lack of data has caused the percentage of professionally qualified university teachers to be sidelined in the Green Paper – see ‘Will lack of data on teaching qualifications derail TEF?’ J. Grove, 13 August 2015, The Times Higher, No. 2216, p. 12.


4. Data and Methodology

Table 1 presents the definitions and units of measurement of the key variables we employ in the study as well as their mean values.30 The first group of variables is drawn from the NSS results themselves, completed by students in July 2014. As stated above, we focus mainly on the overall satisfaction responses (i.e. the response to survey question 22, “Overall I am satisfied with the quality of the course”). The information from this question is typically presented in one of two ways – either the scores from the Likert-scale question are averaged to form a numerical value between one (strongly disagree, i.e. least satisfied) and five (strongly agree, i.e. most satisfied), or it is presented as the percentage of respondents who are satisfied with their course (i.e. they select either strongly agree or agree in response to this question). We primarily focus on this question since it forms the basis of most of the league tables 31 and is the most widely reported by institutions of all the questionnaire responses. However, since the factors affecting student satisfaction that we measure are focused on various attributes of the teaching faculty, we also investigate the 1-5 scale and the percentage of students who are satisfied in response to the “the teaching on my course” section, where the responses to questions 1 to 4 are averaged. We include all groups of courses from all UK universities which possess degree-awarding powers and which offer at least one undergraduate course, since the NSS is an undergraduate-only student survey.32 Our database comprises a total of 4465 such courses or groups of courses within cognate sub-fields.33
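To make the construction of these two satisfaction measures concrete, the following is a minimal sketch in Python; the response counts are invented for illustration and are not drawn from the actual NSS data.

```python
# Construct the two overall-satisfaction measures described above from the
# distribution of Likert responses to NSS question 22 for one course group.
# The counts below are illustrative only.
responses = {1: 4, 2: 6, 3: 15, 4: 60, 5: 35}  # 1 = strongly disagree, ..., 5 = strongly agree

n = sum(responses.values())

# Measure 1: the mean score on the 1-5 Likert scale
mean_score = sum(score * count for score, count in responses.items()) / n

# Measure 2: the percentage of respondents who are 'satisfied', i.e. who
# selected 'agree' (4) or 'strongly agree' (5)
pct_satisfied = 100 * (responses[4] + responses[5]) / n

print(f"Mean score: {mean_score:.2f}/5; satisfied: {pct_satisfied:.1f}%")
```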

Ironically, while the focus of the government (as demonstrated through the recent Green Paper and other reports) appears to be squarely on university teaching, this aspect of degree provision seems to be working very well and generating high levels of satisfaction. Nationally, 91% of students (4.2 on a 5-point Likert scale) are satisfied that staff are good at explaining things, while 89% (4.3/5) agreed or strongly agreed that staff are enthusiastic about what they are teaching. On the other hand, the scores are much lower for assessment and feedback, with only 69% (3.8/5) of students agreeing or strongly agreeing that feedback was prompt and 68% (3.8/5) agreeing that feedback had helped clarify matters that they had not previously understood. Course organisation was also rated relatively poorly (77% satisfied; 4.0/5), as was the Students’ Union or Guild (68% satisfied; 3.8/5).

Two other relevant variables from the NSS questionnaire database which we employ in our study are whether students are registered as full- or part-time and the overall response rate. We include the mode of study as we believe from first-hand experience that this can have a considerable influence on student happiness, since part-time students tend to be older and also

30 Note that to match with the analysis undertaken below as closely as possible, these mean values are averages over each course or course collection at each institution and do not involve a weighting by number of registered students or number of faculty. 31 The Times University rankings incorporate the percentage of students satisfied while the Complete University Guide is based on the 1-5 scaling of the overall satisfaction question. The Guardian league table includes the percentages of students who are satisfied with the course, who are satisfied with the teaching, and who are satisfied with the feedback on their work. 32 Cranfield University, the Royal College of Art, the Institute of Education, the , the London School of Hygiene and Tropical Medicine, the Institute of Cancer Research, and the Liverpool School of Tropical Medicine are therefore excluded on the grounds that they offer only postgraduate courses. 33 The NSS suppresses the results for any course collection where fewer than 10 students have completed the questionnaire or where the respondents make up less than half of all the students on that course.

concurrently in employment. In addition, the response rate is a factor of interest, as it has been reported that this variable is correlated with satisfaction (Williams and Cappuccini-Ansfield, 2007).

As Tables 2 and 3 (which employ the Likert scale and percentage satisfied measures respectively) show, and as one might expect, the responses to each set of questions, averaged across all students on all courses and at all institutions, are highly positively correlated. For example, the correlation between the scores on ‘teaching on my course’ and ‘learning resources’ is 0.645, despite the obvious possibility that excellent teaching could take place inside a dilapidated pre-war building. The ‘personal development’ score is almost as highly correlated with ‘academic support’ (nearly 0.8) and with ‘organisation and management’ (0.75) as it is with teaching quality (0.83). The correlations are uniformly slightly lower in Table 3, since constructing the satisfaction variable in this way discards strength of feeling: each response is effectively translated into a 0-1 outcome, as students are either satisfied or they are not. The numbers in Table 3 are nonetheless high, indicating that, by and large, at the level of a course or group of courses, students are either happy with everything or unhappy across a range of measures. The only exceptions are present in the final columns of both tables, which measure how contented respondents are with their Students’ Union or Guild; while still always positive, this variable has much lower correlations with the other measures – typically of the order of 0.25.
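For readers wishing to replicate Table 2/3-style figures on the published NSS data, the calculation amounts to pairwise Pearson correlations across course collections; a minimal sketch with illustrative column names and synthetic values follows:

```python
import pandas as pd

# One row per course collection; the columns are illustrative section scores.
nss = pd.DataFrame({
    "teaching":             [4.2, 3.9, 4.5, 4.0, 3.6],
    "learning_resources":   [4.0, 3.8, 4.3, 4.1, 3.5],
    "personal_development": [4.1, 3.7, 4.4, 4.0, 3.4],
})

print(nss.corr(method="pearson"))  # pairwise correlations, as in Tables 2 and 3
```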

The second group of variables described in Table 1 comprises a set of variables that capture the demographic, ethnic and contractual status of the faculty that teach the students, obtained from HESA and based on returns made in early 2014 by institutions for the 2013-14 academic year.34 A particular difficulty is that the units of measurement in the NSS and from HESA do not match. The NSS publishes survey responses organised by ‘course collections’ in cognate disciplinary areas,35 whereas in HESA the data are organised by cost codes. In order to combine the two databases, we manually map each NSS subject area within each institution separately to its closest matching HESA cost code, resulting in a total of 3895 course groups which we are able to match.36 During the mapping process, it became clear that each institution has its own method of categorising programmes and of linking them to schools or departments. Therefore it was necessary to make links using plausible assumptions informed by our own knowledge of internal university structures. For example, when Economics as a subject in the NSS database could not be mapped to the Economics and Econometrics HESA cost code (129) because the university concerned does not record any costs under this heading, we mapped it to Business and Management Studies (133); for Linguistics, we mapped predominantly to Psychology. These imperfect, intuition-based mappings accounted for only a small proportion of the overall sample. In addition, we realise that students frequently gain a

34 We argue that the timing of our measurement of the NSS and HESA data is appropriate since the latter will relate to the final year of the undergraduate’s course and will thus embody the state of the educational environment most in the forefront of their minds when they are completing the questionnaire. 35 We employ the NSS data organised by JACS Codes Level 3, of which there are a total of 108 subject areas. There is a total of 45 cost code areas, although given the nature of the sub-field we are not able to map any NSS course data to the ‘Continuing Education’ HESA code 136. 36 Clearly, there are limitations to this approach, including numerous examples where it is not possible to accurately match the HESA and NSS data without intimate knowledge of the organisational structure of each specific institution, and the HESA data relate to entire cost centres whereas the NSS results are based only on undergraduate provision. The result of mismatches would be to cloud inferences about the relationships between the inputs (HESA data) and the outputs (NSS scores). However, given the manner in which the two data sources are organised, it is not possible to achieve a more accurate match.

wider experience of teaching on programmes, beyond that provided by their home department, but this would be very difficult to allow for in our constructed model. As it is normally only a minority of teaching that is delivered this way, and as universities map NSS results back to departments, we must accept this discrepancy in the data.
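The mapping logic can be summarised as a first-choice lookup with institution-specific fallbacks. The sketch below is purely illustrative: only cost codes 129 (Economics and Econometrics) and 133 (Business and Management Studies) are taken from the text, and the function is a hypothetical simplification of what was in practice a manual, judgement-based exercise:

```python
# Map an NSS subject area to a HESA cost code for one institution,
# falling back when the institution records no costs under the first choice.
PREFERRED = {"Economics": 129}   # Economics & Econometrics
FALLBACK = {"Economics": 133}    # Business & Management Studies

def map_subject(nss_subject: str, institution_codes: set) -> int | None:
    code = PREFERRED.get(nss_subject)
    if code in institution_codes:
        return code
    alt = FALLBACK.get(nss_subject)
    return alt if alt in institution_codes else None

# An institution with no Economics cost centre maps Economics to 133:
print(map_subject("Economics", {133, 101}))
```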

Tables 4 and 5 continue to summarise the overall satisfaction measures on the 1-5 and percentage of students satisfied scales – this time separated by HESA subject area. It is clear that while there is very considerable variation across individual courses, when aggregated across universities to the subject area level there is much less variation. Students in the media studies area are the least satisfied of all on average (score of 4.02 and 79.77% satisfied), followed by general engineering (4.09 and 82.43%), while students of clinical dentistry (4.65 and 96.38%) and then veterinary science (4.44 and 92.88%) are the most contented. It would be tempting to conclude that highly specialised, vocational courses tend to go down best while those which are much broader in scope with no obvious career path are less favoured; yet the average evaluations in the classics, philosophy and theology fields are also very high. This disparity in satisfaction might additionally relate to the attractiveness of studying at the types of universities that offer these subjects (for example, if classics is only taught at popular universities with good facilities), but the finding is also suggestive that a more nuanced explanation of student satisfaction is required – one that simultaneously examines a range of potential factors, as we attempt to do in this study.

Considering the shape of the distributions of satisfaction within each subject area in Tables 4 and 5 reveals some further interesting patterns. The minima show some specific courses where there must have been particularly serious issues that resulted in very low evaluations, including those in nursing (minimum score 2.6, minimum just 29% of students satisfied), health and community studies (minimum 2.4 and 24%), and electrical engineering (2.5 and 17%); these individual very low evaluations increased the spreads of scores and of percentages satisfied. At the other end of the scale, most course areas had at least one (measured by the maximum) or several (measured by the 95th percentile of the distribution of scores) courses that were able to achieve a 100% satisfaction rate.

We incorporate all the relevant factors related to the characteristics of the teaching staff (aggregated to the subject area within each institution) that are included in HESA’s database, each of which is available by cost centre (subject) and by institution. These comprise the following variables, each discussed with some intuition as to their relevance:

• Total number of academic staff – this is one possible indicator of the size of the teaching unit. We could assume that a large department allows students to access a wide range of teaching styles and thus provides a rich educational experience. Alternatively, it is also evident that smaller departments are able to generate strong communities where students and staff get to know each other and share a common purpose. • Percentage of total staff that hold a formal teaching qualification (e.g. PGCE, HEA Fellowship). This is the variable that has perhaps had the most prominence in discussions of what drives student satisfaction (e.g. Thornton, 2014). It has led a number of universities to include statements on engagement with this agenda as part of their institutional key performance indicators. Despite the interest, this link has never been empirically tested, but


Thornton and others have hypothesised a positive link between the percentage of qualified teaching staff and satisfaction. • Average length of service of academic staff – this measures the collective level of experience of the faculty at their current institutions. We could assume that more experience would provide a better student experience. However, we must also expect that additional years of experience could lead to academic staff becoming ‘set in their ways’ and not adopting cutting-edge teaching methods or learning technologies. • Average age of academic staff – as for length of service, this variable embodies the trade-off between stereotypical ‘youthful enthusiasm’ and use of the latest teaching technologies on the one hand and the benefit of additional experience on the other. Age will also capture periods of experience of teaching staff at other universities or outside the academic community where length of service will not. • Percentage of academic staff who define their ethnicity as white. This allows us to look at any possible racial favouritism in the results of the NSS. This has not been studied before in the UK as far as we are aware. 37 • Percentage of female academic staff. It has been shown recently (Boring, 2015) that gender bias can be found in student evaluations of academic teaching. • Percentage of academic staff holding a doctorate. This is now a minimum entry qualification for entering the academy at most universities. We would expect that a doctorate would provide the holder with at least a minimal level of exposure to the latest research in their field, and may provide a certain level of intellectual credibility. • Percentage of academic staff defining their nationality as UK. Being from the UK may relate to having stronger English-speaking skills and a familiarity with the local culture and ways of doing things, which may be absent or weaker among those drawn from other countries. • Percentage of academic staff defining their nationality as non-UK European. Again, it is possible that students may be better able to identify with, and therefore will rate more highly, academic teachers from within the EU than outwith it. • Percentage of academic staff whose previous post was at another higher education institution. This type of experience would be expected to have a positive effect on the student experience as the staff member could draw on more than one institutional approach to teaching and learning and should have had time to hone their delivery style and mastery of the subject. • Percentage of academic staff who were immediately previously students. These ‘rookies’ will be expected to have amassed less prior experience of teaching and may therefore be subject to ‘teething problems’ as they grapple in the early years of their careers to find an approach to teaching and assessment that works for them. Alternatively, they may be more aware of what current students’ expectations are, having just been one themselves. • Percentage of academic staff whose employment function involves only teaching. The formal separation of those who engage in research from those who only teach has been an increasing feature of the UK academy over the past three decades (Locke, 2004; 2012). This has been partly driven by the desire to increase the measured level of research intensity in the REF. Such widespread shifts to teaching-only contracts are undesirable if they imply a

37 As Table 1 shows, over 80% of teaching staff are white. Only around 2% are black and 7% Asian so we do not further separate the percentage of non-white teachers by ethnic grouping.


reduction in the standing of those involved, disenfranchising them, cutting off their future career options and damaging prospects for promotion (Locke, 2014, p.28). Such individuals are likely to be disincentivised from engaging in research-led teaching. • Average salary of full-time academic staff. Although no research exists around this, it could be argued that institutions paying higher salaries can attract the best academic staff, which would have a positive effect on the student experience. • Percentage of academic staff whose contract level is full professor. Schools or departments with a higher percentage of professors could benefit from additional leadership and mentoring that would be lacking within a department with fewer such senior faculty. This variable may also act as a proxy for the strength of the research reputations and brand images of the individual teachers. • Percentage of academic staff employed part-time. Such staff could have less of a connection with the community of the department and, as part-time members, students may think them less accessible to see in person to discuss individual questions. • Percentage of academic staff employed on fixed-term contracts. Nationally, the picture is that 34% of academics were part-time and 36% on fixed-term contracts in 2012-13 (Locke, 2014); both these groups have higher turnover than full-time, permanent faculty, and their lack of job security may act as a disincentive. The growth in fixed-term contracts that occurred between 2011 and 2013 was perhaps a response by HEIs to increased uncertainty regarding student numbers and funding arrangements. Part-time lecturers are much more likely to be employed on teaching-only contracts (57% versus just 9% for full-time) and may thus lack the skills to deliver research-led teaching. On the other hand, fixed-term (and part-time) contracts may be used as a method to bring in practitioners and those with relevant wider experience as sessionals, whom students find valuable. It should be noted that the HESA figures do not include student demonstrators or other PhD students who are used to provide seminar teaching or support lab work. Thus this measure will not account for the student experience provided by these doctoral researcher engagements.38 • Number of registered students per academic staff member FTE (the student-to-staff ratio, SSR). Here we assume that a lower SSR would lead to greater student satisfaction, as students are able to engage on an individual level and thus gain feedback more easily than if they are taught in larger classes. We must also acknowledge here that students will not engage with all academic faculty within each department during their undergraduate studies. Some staff may be on research leave, have large administrative loads or teach only at postgraduate level. However, the whole departmental culture will involve every member of staff (including support staff) and thus such a measure will still have some traction.

The final set of variables we employ in the analysis relate to university-wide factors that may influence satisfaction, specifically:

• The average UCAS tariff score of new undergraduates. It could be argued that students entering with higher grades will be better motivated and will perform well academically, which would lead to high levels of satisfaction. However, it is not clear which is the driver in this scenario, satisfaction or performance (Letcher and Neves, 2010).

38 Thank you to Cherry Bennett for this insight.


• The number of graduates who take up employment or further study as a percentage of the total number of graduates with a known destination. A plausible presumption is that students who find suitable employment during their studies will be more satisfied with their experience since it has led to a desirable and potentially stress-relieving outcome, particularly in these times when students typically end their courses heavily in debt. • The percentage of graduates achieving a first or upper second class honours degree. There is evidence to suggest that students who are performing well will be more contented (although the direction of causality is not entirely clear) and, while students will almost certainly not know their final degree classification at the time they complete the NSS, they will have a strong inkling based on their prior-year exam results and final-year coursework assessments. • The percentage of students completing their degree. Although non-completing students do not participate in the NSS, as they would already have left, this will have some spillover effect upon their fellow students who stay the course. A high dropout rate may therefore signal problems with the course that are felt by all students. • The university’s green score – a rating of universities based on questions about their environmental and ethical commitments and actions. This is a well-reported factor and the NUS especially places great emphasis upon care for the environment, including pressuring universities to divest from fossil fuel investments. 39 • The ratio of expenditure on staff salaries to total expenditure. A number of studies have reported that the facilities of a university are a positive driver of student satisfaction (Borden, 1995), although the quality of teaching has been shown to be more important (Mavondo et al, 2008). There is concern, however, that universities are more willing to invest in bricks and mortar than in people, to the detriment of both teaching and research, as we document below. • The research power of the university, calculated as the grade-point average of the total quality profile multiplied by the total FTE of staff submitted (see the sketch after this list). One of the motivations of the paper is the claim (discussed earlier) that ‘there is no evidence to suggest a ‘magical link’ between research and good teaching’, and power measures the quality-adjusted volume of research output, which is likely to be strongly correlated with research reputation. • The percentage of the university’s overall research quality profile that was rated 4* (‘world-leading’ in terms of originality, significance and rigour) in the REF2014. This is another variable with which to investigate whether there is a ‘magical link’. The percentage of so-called world-leading research is likely to be a sharp discriminator between the research standings of the institutions. • A dummy variable representing whether the university is a member of the Russell group. We use this to examine the brand and image of a university, as these have been shown to be a factor in student satisfaction (Palacio et al, 2002; Brown and Mazzarol, 2008; Alves and Raposo, 2010). • A dummy variable representing whether the university is a member of the QS Top 400 universities ranking. This is a further variable to measure how brand and image can drive student satisfaction.

39 See the fossil free campaign: http://www.nus.org.uk/en/greener-projects/green-zone/people-and-planet- launch-fossil-free-campaign/


• A dummy variable representing whether the university is ‘new’ (established post-1992). This variable may again be (negatively) correlated with the brand image of the institution and enables us to investigate whether satisfaction differs between the longer-established universities and the ‘former polytechnics’.
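As a concrete illustration of the research power variable referred to in the list above, the sketch below computes the REF grade-point average from a quality profile and multiplies it by the submitted FTE; the profile and FTE figures are invented for illustration:

```python
# Research power = REF grade-point average of the quality profile x FTE submitted.
def research_power(profile: dict, fte_submitted: float) -> float:
    """profile maps a star rating (0-4) to the % of the profile at that rating."""
    gpa = sum(stars * pct for stars, pct in profile.items()) / 100.0
    return gpa * fte_submitted

# Hypothetical institution: 30% 4*, 40% 3*, 25% 2*, 5% 1*, 120 FTE submitted.
print(research_power({4: 30, 3: 40, 2: 25, 1: 5, 0: 0}, 120.0))  # GPA 2.95 -> 354.0
```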

Table 6 presents the findings of two exploratory regressions to determine which of the specific areas within the NSS questionnaire students appear to focus on when they answer question 22 to express their overall satisfaction. This issue is of particular interest since, as discussed above, the overall satisfaction score is the one that forms the basis of most of the ranking measures, and is the most discussed in the media and the most highlighted on university websites. The regression is conducted on all 4465 course-university combinations as described above; the dependent variable is the overall satisfaction measure – either the Likert score (middle column) or the percentage of students who are satisfied (right-hand column) – while the independent variables are the scores for each of the sections. This is a legitimate specification and not a tautology, since the overall satisfaction scores arise from a specific question in the survey and are not direct aggregates of the component scores.
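The specification can be sketched as follows (synthetic data and illustrative section names; the genuine estimation uses the 4465 observations described above):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # stand-in for the 4465 course-university combinations
sections = ["teaching", "assessment", "support", "organisation",
            "resources", "development", "union"]
nss = pd.DataFrame({s: rng.uniform(3, 5, n) for s in sections})
# Synthetic overall (Q22) score, loosely tied to two of the sections:
nss["overall"] = (0.5 * nss["teaching"] + 0.2 * nss["organisation"]
                  + rng.normal(0, 0.1, n))

model = smf.ols("overall ~ " + " + ".join(sections), data=nss).fit()
print(model.params)  # Table 6 reports the analogous estimates on the real data
```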

It is clear that all of the component scores are highly significantly (at the 0.1% level or even lower) and positively related to overall satisfaction, although respondents seem entirely unfazed by the quality or otherwise of their Students’ Union, which is not significant even at the 10% level and has parameter values that are several orders of magnitude lower. In terms of the other specific section scores, students appear to put most emphasis on teaching quality, followed by organisation and management and then personal development, when arriving at their overall rating, with the parameter estimate for the first of these being more than double those of the latter two. Universities may be relieved to note that, while still highly statistically significant, the magnitudes of the parameters on assessment and feedback, where satisfaction scores are generally lower, are a tenth of that on teaching effectiveness. Similarly, the parameter estimate on learning resources is of small size, suggesting that it plays a minimal role in overall happiness despite the huge sums that universities have spent on infrastructure in recent years, which some have dismissed as vanity projects. 40

5. Key Results on the Drivers of Student Satisfaction

Before we undertake a simultaneous analysis of the roles of the various factors described above that we believe may affect student satisfaction, it is of interest to conduct some preliminary investigations of several different splits of the data. Thus in Table 7 we present summary measures for satisfaction separated into 12 regions. It is plausible to expect differences in satisfaction levels across regions, arising both from differences in relative costs of living and from the different kinds of lifestyle that each region may offer. Home-based students in Scotland will also not be paying tuition fees, as they would if they studied south of the border, and it might be that this enhances their feeling of well-being.

40 http://www.theguardian.com/education/2015/may/05/campus-universities-building-projects.


The figures in the table suggest that those studying in Northern Ireland are the most contented, with over 90% of students satisfied with their courses overall; at the other end of the scale are Londoners, who are the least satisfied. Here we can assume that in most parts of the UK the majority of students are local, whereas in London we can expect students from all over the UK. Local students in Northern Ireland pay just £3,805 (students from the rest of the UK, £9,000; from other parts of the EU, £3,805). 41 Belfast is also the most affordable city for living within the UK according to the same source. There are therefore two possible environmental factors that drive these results: students may think they are receiving good value for money, and they can have a reasonable standard of living on the available student loan. The London result may be driven from the opposite direction, with most home students paying £9,000 and finding it hard to make ends meet in an expensive place to live. 42 Indeed, in a recent survey London was reckoned to be one of the worst places to live in the UK. 43 In addition, there will be more international students within the London student body, who are drawn to studying in the capital.

In Table 8 we proceed to summarise the mean satisfaction scores (1-5 scale, left-hand panel) and the percentage satisfied (right-hand panel), split in a pairwise fashion for a number of sub-samples, also presenting Welch’s t-tests of the difference between the means of each pair of sub-samples, which allow for uneven sample sizes and do not assume equal variances.
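For reference, Welch's test is available directly in standard statistical software; a minimal sketch with synthetic course-level scores (the group means and spreads below are invented) is:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
russell = rng.normal(4.20, 0.25, size=500)       # synthetic course-level scores
non_russell = rng.normal(4.14, 0.30, size=2000)

# equal_var=False requests Welch's unequal-variances t-test
t_stat, p_value = stats.ttest_ind(russell, non_russell, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```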

Part-time students are more satisfied than their full-time counterparts, since their mean score is higher, as is the average percentage of students satisfied, although the difference between the two groups is not statistically significant (t-value -1.1, p-value 0.3 for the average score and t-value -0.8, p-value 0.4 for the percentage satisfied). This could be expected, as many part-time undergraduate students are either mature students or students who enter university via a non-A-level route. Such students are sometimes more motivated to work hard than those who have moved straight into university from school.

Overall satisfaction at Russell group universities is statistically significantly higher, by around 0.06 on the 5-point scale and by around one percentage point; the minima and medians are also higher for Russell group than for non-Russell group universities. Similarly, when we separate universities into two groups according to whether or not they are in the QS Top 400 World ranking, we observe a similar picture: the more prestigious universities engender (statistically significantly) higher levels of satisfaction, although a larger number of universities (44) hold this designation compared with the Russell grouping (24 universities).

We also separate universities according to whether they are specialist institutions, which we define as operating in four or fewer subject areas, finding, perhaps surprisingly, that mean scores and the percentage of students who are satisfied are both lower than at more broad-based universities, although not significantly so. Finally, the largest difference between sub-groups appears when we separate universities into two sub-groups according to whether they were established before 1992 – the so-called ‘old universities’ – or after that date. New universities

41 http://www.qub.ac.uk/home/StudyatQueens/UndergraduateStudents/FeesandFunding/. 42 Interestingly, however, the introduction of £9,000 fees did not reduce the nationwide percentage of satisfied students, which remained at 86 in both 2014 and 2015 (‘NSS 2015: £9k tuition fees fail to dent satisfaction’ by C. Haveral, 13 August 2015, Times Higher Education no. 2216). 43 http://www.telegraph.co.uk/finance/economics/11976033/Forget-London-these-are-the-best-places-to-live-in-the-UK.html. This survey also found that Reading was the best place to live in the UK.

have highly significantly lower levels of satisfaction (a more than three percentage point difference in student satisfaction), with lower mean, median and minimum levels of satisfaction and higher variances.

It may be, however, that the differences in levels of satisfaction across sub-groups are merely separate manifestations of the same phenomenon – for example, that new universities have weaker reputations (not being members of the Russell group and less commonly within the QS Top 400 than old universities) or that they are more likely to offer the less popular subjects such as media studies. In order to investigate this in more detail, we now turn to the multiple regression results, which are presented in Tables 9 to 12 of the paper. In each case the dependent variable is a satisfaction measure and four separate specifications are run: for the results under the column headed (1), the regressions include both subject-specific and institution-specific fixed effects, whereas for the other three models (whose results are reported in the columns headed (2) to (4)), only subject-specific fixed effects are used, which enables us also to include the institution-wide variables outlined above. In all four of the main tables of results, the third column presents the expected signs of the parameters. In Tables 9 and 10, the goodness-of-fit figures are of the order of 0.2 when both sets of fixed effects are included, falling to around 0.12 when the institution fixed effects are removed.
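In estimation terms, the two fixed-effects designs can be sketched as below (synthetic data and illustrative variable names; note that in the real data, institution-wide regressors such as the 4* share are perfectly collinear with institution dummies, which is why they can only enter specification (2)):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "satisfaction": rng.uniform(60, 100, n),   # % satisfied per course collection
    "response_rate": rng.uniform(50, 90, n),
    "pct_white": rng.uniform(50, 100, n),
    "pct_4star": rng.uniform(0, 40, n),
    "russell": rng.integers(0, 2, n),
    "subject": rng.choice([f"subj{i}" for i in range(40)], n),
    "institution": rng.choice([f"uni{i}" for i in range(80)], n),
})

# (1): both subject and institution fixed effects
m1 = smf.ols("satisfaction ~ response_rate + pct_white "
             "+ C(subject) + C(institution)", data=df).fit()

# (2)-(4): subject fixed effects only, so institution-wide variables can enter
m2 = smf.ols("satisfaction ~ response_rate + pct_white + pct_4star + russell "
             "+ C(subject)", data=df).fit()
print(m2.params[["response_rate", "pct_white", "pct_4star", "russell"]])
```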

Focusing first on the results in Table 9, where the dependent variable is the 1-5 scale measure of overall student satisfaction based on question 22, reveals a number of significant influences on the scores. The mode of study variable has a consistently positive sign, and is marginally significant in some specifications, indicating that part-time students are slightly more contented than their full-time counterparts, while the other variable from the NSS database, the response rate, is highly significantly and positively related to satisfaction, indicating that popular courses are able to encourage a higher percentage of their participants to complete the questionnaire. This ties in with the widespread view that when response rates are low, the survey findings will be biased downwards by the disproportionate effect of those with the most strongly jaundiced views being more likely to want to complete it; as the response rate grows, students with more moderate and positive views of their experience are added to the results pool, thus raising the average measured satisfaction level (Williams and Cappuccini-Ansfield, 2007).

Now considering the effects of the faculty characteristics drawn from the HESA database, the most salient influences on satisfaction are as follows. First, there are positive and highly significant links with the total number of staff, their average length of service and the percentage holding doctorates. These results are intuitive – a higher number of staff may indicate that teaching can be undertaken by subject specialists in each sub-field, whereas low numbers may signal a lack of critical mass, with teachers having to deliver material outside their field in order to cover all the syllabi. Faculty with long service not only bring additional experience and will have had the opportunity to ‘iron out any bugs’ in their module structure and delivery, but their presence may also be indicative of a department or school which is stable and a pleasant environment in which to work; on the other hand, a low percentage of long-serving staff could either indicate that the department is fairly new or growing, or, more worryingly, that it is unable to retain staff, which would be detrimental to the student experience.


Interestingly, the percentage of white faculty shows a consistently positive and highly significant link with satisfaction, indicating in general that students are less happy when teaching is delivered predominantly by staff from other ethnic groups. This result is one of the clearest findings and ties in with the existing evidence on the relationship, although the limited literature on this subject, while reporting results consistent with ours, is drawn entirely from the US experience, where accusations of racial discrimination are current.44

The question of student evaluations being affected by race has been the subject of only a handful of studies. Huston (2005; 2006) reviewed the literature and found that surveys had shown non-white faculty to receive significantly lower ratings than their white colleagues. The author argued that more studies were needed and that it would be interesting to look at the results by academic discipline as well as overall ratings. She also stressed that this racial bias within the evaluation system means great care has to be taken if using student evaluations as a tool to make judgements on staff performance. Basow and Martin (2012) argue that the issue of race and its effect on student evaluations has not been sufficiently widely studied and state, ‘Overall, it is likely that the race/ethnicity of the professor affects students ratings, with white faculty generally rated higher than minority faculty’ (p. 43).

Basow, Codos and Martin (2013) used a computer animation to test the effectiveness of learning when the teacher was from different racial groups. The authors found that student evaluations were not biased by the race of the animated figure, but that test results were better when students had watched the white male professor. They explained this as arising because students had paid more attention to the white male figure, having assumed that this teacher was more authoritative. Reid (2010) uses RateMyProfessors.com scores as a proxy for student satisfaction and confirms the findings of these other studies that racial minority faculty are at a disadvantage to their peers when being judged by student evaluations. On the whole, the studies argue that this represents an ‘unconscious bias’, and this would perhaps fit best as an explanation for our finding within the context of UK universities. Further work could seek to determine whether the findings are driven by the subjects studied and vary with the ethnicity of the student body, or whether the effect is common across all disciplines and students. As universities rightly strive for a diverse workforce, they will be concerned to discover that students appear not to share this desire for balance and that the NSS scores of departments could actually suffer as a result of this policy.

We find that most of the other staff characteristics in Table 9 are not statistically significant, including the percentage of female staff, their average age, whether they had prior experience at another university or this is their first post-education employment, the percentage of faculty employed on teaching-only contracts, and the percentage of part-time staff. In some specifications, the average salary of teaching staff and the percentage of those on fixed-term contracts are negatively and positively related to satisfaction, respectively. Similarly, in some cases the student-to-staff ratio is negatively related to satisfaction, suggesting plausibly that, all else equal, students prefer to be taught in smaller groups. Importantly for this study, we do not uncover any link in any of the specifications examined between the percentage of teaching staff who are professionally qualified and student satisfaction. This could be because there genuinely is not any relationship

44 http://www.bbc.co.uk/news/blogs-trending-34768331.

between the two variables at the nationwide, aggregate level, or it could simply reflect the noise embedded in this particular variable due to the high percentage of unknowns. 45

Examining the institution-wide influences on satisfaction presented in the penultimate panel of Table 9 reveals several important findings: graduate prospects and degree completion are both positively linked with satisfaction, although neither the qualifications of students on entry nor whether the university has strong environmental commitments (the ‘green score’) has any significant influence in any of the models we consider.

Fascinatingly, we find that the research quality of the institution, which may have drawn students there in the first place, is negatively viewed at the point when they complete the NSS. Specifically, the percentage of research which was rated as 4* in the REF2014 and membership of the Russell group of universities both have consistently strong negative influences on satisfaction, and the research power (number of faculty submitted to the REF multiplied by the REF grade-point average) has a mildly significant negative effect on satisfaction. Being at a post-1992 institution also leads to lower average levels of contentment, all else being equal, than being at an older university, although students appear unconcerned by the percentage of expenditure allocated to staff, or by whether the institution is ranked in the QS Top 400 universities. 46

Table 10 uses an identical set of specifications and explanatory variables to Table 9, except that now the dependent variable is the percentage of students who are satisfied. In terms of overall model fit and the signs and statistical significances of the coefficients, the findings are identical to those of Table 9, but we now proceed to interpret their magnitudes since they are more intuitive when the explained variable is itself a percentage. Broadly, the estimates are of plausible sizes – for example, part-time courses have on average around 1-1.7% more satisfied students; increasing the response rate by ten percentage points would raise average measured satisfaction by 0.8%; a ten point increase in the percentage of white teaching staff would lead to an increase in satisfaction of 0.3-0.6%, while the same percentage point increase in the number of fixed-term staff would lead to a 0.2-0.3% increase in contentment. Although the tone of the literature is suggestive of fixed-termers being demoralised second-class citizens compared with their permanent counterparts, it is also possible that fixed-termers are spurred by their lack of job security to try harder with their teaching and assessment in order to attract a permanent contract offer. In terms of the university-wide variables, a ten point increase in the percentage of work rated at 4* by the REF panel is linked with a fall in satisfaction of around 1%, while an increase of the same order in the

45 Within the dataset that we obtained from HESA, universities report the teaching qualification status of only 55% of their faculty, with the remaining 45% unknown. We simply calculate the percentage of faculty holding a teaching qualification among the known staff for each course or course collection. Suppose, for example, that there are 100 teaching staff in a department and the university knows that 40 of them are professionally qualified teachers and 10 are not, with no information on the other 50. For this data point, the percentage qualified would then be 40/(10+40) = 80%. It is more likely that those who are professionally qualified would report their status to the university, and thus more of the unknowns will actually be ‘no’; nationally, therefore, the figures held by HESA are probably over-estimates of the percentages who are qualified. 46 According to the SAES/HEPI survey discussed above, the most common reason for a course not meeting students’ expectations is that they felt that they themselves had not put in enough effort; 36% selected this reason, which was higher at old than at new universities (Buckley et al., 2015, p.12). Therefore, it is possible that students attending old universities are more likely, on reflection, to internalise any perceived quality issues whereas those at new universities are more prone to blame the institution.

percentage of students completing their degrees would lead to a 1.8-2.3% increase in average satisfaction.
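These readings are simple linear extrapolations of the Table 10 coefficients: the predicted change in the percentage satisfied is the coefficient multiplied by the change in the regressor. A back-of-the-envelope sketch (coefficient values inferred from the ranges quoted above, not read from the tables):

```python
def predicted_change(coefficient: float, delta_x: float) -> float:
    """Linear approximation: change in % satisfied for a change in one regressor."""
    return coefficient * delta_x

print(predicted_change(0.08, 10))   # response rate +10 points -> +0.8% satisfied
print(predicted_change(-0.10, 10))  # 4* share      +10 points -> c. -1.0% satisfied
print(predicted_change(0.20, 10))   # completion    +10 points -> c. +2.0% satisfied
```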

Students at Russell Group universities or at new institutions have levels of satisfaction that are 1-2.2% and 1.2-1.9% lower, respectively, than those at otherwise identical old universities outside the Group. This finding indicates that, when all other variables are controlled for, it is the ‘squeezed middle’ that demonstrates the highest student satisfaction. It does therefore seem that these universities, which are research intensive while also striving to provide an excellent student experience, are somehow pulling off this difficult balancing act. Our results also demonstrate that the Russell Group contains a large range of prestigious institutions with a variety of different student experiences, yet on average their undergraduate courses are less well received than the other characteristics of their provision and of their faculty would predict.

Finally, the results presented in Tables 11 and 12 mirror those in Tables 9 and 10 respectively, except that the dependent variables now relate to satisfaction with teaching rather than overall satisfaction. We investigate these alternative dependent variables since the faculty-related variables we focus on will relate mainly to teaching rather than to the wider student experience, such as organisation and facilities. As we expected, we find broadly similar results to those above, although the R² values are considerably higher for Tables 11 and 12, indicating that model fit is now better since the cloudiness arising from extraneous influences on satisfaction not captured by the teaching-related explanatory variables has been removed from the dependent variables. In Tables 11 and 12, there is now a stronger explanatory role for average staff salary and for the percentage of teaching staff who are full professors, which are negatively and positively related to satisfaction, respectively. The student-to-staff ratio (SSR) – a proxy for average class sizes – becomes a more important factor in Tables 11 and 12, where the dependent variable focuses on teaching only rather than wider satisfaction. In Table 12, specification (2), for example, an increase in the SSR by 10 (e.g. from 20 to 30) would, all else equal, lead to a fall in satisfaction with teaching of around 0.5 percentage points. The parameter on the percentage of full professors among the teaching staff is also a considerably more significant influence on satisfaction with teaching than on overall satisfaction.

The negative link between satisfaction and staff salary (with a £10,000 increase in average salary leading to approximately a one percentage point reduction in satisfaction with teaching in Table 12, for example) is, on the surface, counter-intuitive but may relate to the regional effects outlined in Table 7, whereby salaries are highest in locations where satisfaction is lowest and vice versa (viz. London versus Northern Ireland). Since specifications (1) to (3) in all of Tables 9 to 12 include subject fixed effects, it is unlikely that the salary effect is due to differences in average pay (particularly at the professorial level) across sub-fields. We also suggest, somewhat speculatively, that the very large salaries which skew the distribution may be paid principally to ‘research superstars’ who focus on research output generation and dissemination and as a result are less engaged with students and teaching. Interestingly, the negative sign on the new university dummy variable loses much of its influence (as does, to a lesser extent, that on Russell Group membership), suggesting that it is this wider experience that students find wanting at such institutions, rather than their concern being with the quality of the teaching.


6. Reflection and Conclusions

This paper is the first systematic, quantitative study of the drivers of student satisfaction in the UK. We base our research on the 2014 results of the National Student Survey aggregated to the course level within each institution, linking each of these course collections with variables drawn from the Higher Education Statistics Agency that capture the characteristics of the teaching faculty. We find that satisfaction is positively linked to the percentages of full professors, of white faculty and of those holding doctorates. Core to our study, we are unable to uncover any link between student satisfaction and the proportion of teaching staff who are professionally qualified, although we must note the paucity of currently available data, which may have masked any patterns that do exist.

We also incorporate in the analysis a number of university-wide variables that capture the reputation of the university and characteristics of the student body, including entry standards, career prospects and completion rates. An important finding from our research is the negative link between the percentage of 4* research, an indicator of the extent to which the institution is elite in research terms, and student satisfaction. On the surface, this appears puzzling, since the demand for places and the entry criteria at such institutions are very high and yet apparently the experience of students who meet these exacting standards is worse than that of those who do not. There are several possible explanations of this apparent paradox. The first is that students are initially seduced by the value of the ‘research brand’ of the elite universities at the application stage but later change their minds and express disappointment if they are subsequently faced with absent professors who assign student-related activities a low priority. A second possibility is that, at the time they complete the NSS questionnaire, students fail to see the link between an academic’s research and the positive effect it may have on their own learning experience. Thirdly, it is possible that ‘high-achieving’ students admitted to ‘brand-name’ universities have greater expectations and so are more likely to feel dissatisfied than students accepted to other institutions, who may have had less elevated expectations.47 It is still the case that a reasonable proportion of students at the most prestigious universities will achieve a lower classification despite the quality of the entry tariff, and these students are therefore more likely to be aggrieved.

There appears to be a tension between the primary desire that students express to be taught by professionally qualified teachers on the one hand and their revealed preference for elite universities, where the proportion of such teachers is lower, on the other. Perhaps this apparent paradox can be reconciled by re-interpreting their view as implying that, given the standing of the institution at which they had already made the decision to study, they would prefer to be taught by professionally qualified teachers. It also appears likely that students misperceive what becoming a professionally qualified teacher actually entails, believing it must involve extensive formal training in how to teach, which will not be the case for those engaging through the experienced route.

It is quite natural for students to want it all, and for them to fail to appreciate the trade-off that exists between time spent on teaching-related activities and on research. 48 In the current

47 Thanks to Tony Moore and Mike Clements for this insight. 48 According to the SAES, perceptions that a degree provides good value for money were considerably reduced from over half of students believing so in 2013 to a third in 2015; this trend was most marked among those with relatively few contact hours. The survey also showed that many respondents believe themselves to be

environment, where academics are expected to prioritise everything, simultaneously teaching ever larger class sizes and offering timely and detailed feedback while improving the quality of their research and its user impact, a likely implication of students’ revealed preferences will be to accelerate the demise of the all-rounder academic. Many universities are already drawing an increasingly marked distinction between teaching-intensive faculty and those engaged in research submittable to the REF, who have much lower teaching loads. It appears as if students are saying that they are happy to be taught by colleagues other than those generating the research reputations of the institutions at which they study. Put another way, one could argue that academics have failed to market the research that they conduct as being relevant to the students they teach, and students care more for high quality teaching than for research-informed teaching, potentially jeopardising the relevance of the concept of a research-teaching nexus. This was recognised by Hattie and Marsh (1996), who made the recommendation: ‘Thus, institutions need to reward creativity, commitment, investigativeness, and critical analysis in teaching and research and particularly value these [at]tributes when they occur in both teaching and research. Only when these attributes are recognised is it likely that the relationship between teaching and research will be increased’. Our research suggests that the intervening years have not proved a supportive environment for these developments to take place.

Our findings have several potentially important policy implications. First, at the aggregate level, this perception among students that there is not a strong link between good research and good teaching ties in with government policies aimed at concentrating research in a smaller number of institutions 49 and harks back to the era when polytechnics focused almost entirely on teaching and research activity was the exclusive preserve of a much smaller number of old universities than is the case today. The interesting developments will therefore be among the mid-ranked universities which consider themselves research intensive but which are not research elite, and which are currently attempting to be all things to all people. As our results show, these mid-rankers are currently managing to deliver on the student experience, but such a position may be unsustainable in the longer term. Thus, if the government wishes to nurture pockets of research excellence among a wide array of universities, it should be aware that current policy signals point in the opposite direction.

Second, students should be careful what they wish for, since these views are likely to accelerate the trend within universities towards a more formal separation of teaching and research duties, and with it the portfolio of teaching styles and subject matter that students receive will be narrowed, more textbook-driven, less research-focused and possibly less timely. Between universities, those which rank lower in research measures and where teaching is unambiguously the number one priority may decide that, if research activity is not valued by students, they may as well withdraw from it entirely, with the result that the ranking and prestige of their institution (where these are dependent on research activity) may fall further. Such institutions then effectively take on the position of further education colleges that happen to offer degrees. This trend is exacerbated by the move towards

entitled to a higher number of contact hours than they receive and that smaller class sizes would be preferable. 49 This has certainly been a side-effect of the change in the funding formula used to allocate money (so-called ‘QR funding’) following the results of the REF. From 2014-15 onwards, any research outputs rated 1* or 2* will result in no funding (despite the latter being ‘internationally excellent’ in REF parlance), while 3* and 4* research will receive funding allocations in the ratio 1:4. This stands in contrast with the much less steep function used to allocate funding on the basis of the RAE (7:3:1 for 4*:3*:2*).

applying national frameworks and subject benchmarking, which inadvertently stifles the diversity of teaching and potentially damages the link with faculty research. Similarly, the professional bodies which accredit degree programmes or departments in some disciplines are frequently intrusive, and teachers are often presented with a set of prescribed modules that must be delivered within the courses or, even more extreme, a list of topics that are required to be covered within each module. The scope for lecturers to link the content of the modules with their own research, or indeed any research, is very much reduced. Consequently, those who believe in research-led or research-informed teaching need to extol its virtues much more loudly than they have in the past.

Third, our results suggest that part-time students are significantly more satisfied than their full-time counterparts, and part-time courses offer a variety of other benefits, including access to courses for mature students with families and life-long learning opportunities. Yet part-time courses are increasingly under pressure.50 Fourth, for universities which are desirous of improving their NSS scores, there are probably no silver bullets, although encouraging higher response rates is the quickest fix.

Finally, our research adds to a growing body of literature highlighting the biases that are exposed in the results of student satisfaction questionnaires. These, combined with their negative externalities discussed above, place a serious question mark over the desirability of the increasing and uncritical prominence of such data in national league tables and the putative Teaching Excellence Framework.

50 See, for example, the BBC website news article by A. Harrison, 14 March 2013, “'Dramatic decline' in part-time university students in England”.


References

Aldridge, S. and Rowley, J. (1998) Measuring customer satisfaction, Quality Assurance in Education, 6(4), 197-204.

Alves, H. and Raposo, M. (2010) The influence of university image on student behaviour. International Journal of Education Management 24(1), 73-85.

Arambewela, R. and Hall, J. (2009) An empirical model of international student satisfaction, Asia Pacific Journal of Marketing and Logistics, 21(4), 555-569.

Armstrong, J.S. (1995) The devil’s advocate responses to MBA students’ claims that research harms learning. Journal of Marketing 59, 101-106.

Asthana, A. and Biggs, L. (2007) Students pay more but receive less, The Observer, 11 February.

Atwood, R. (2010) A teaching qualification for all as new academy broom sweeps in. Times Higher Education, 1 July (http://www.timeshighereducation.co.uk/story.asp?storyCode=412291&sectioncode=26).

Basow, S.A. and Martin, J.L. (2012) Bias in Student Evaluations, in Kite, M.E. (ed.) Effective Evaluation of Teaching: A Guide for Faculty and Administrators, Society for the Teaching of Psychology.

Basow, S.A., Codos, S. and Martin, J.L. (2013) The Effects of Professors’ Race and Gender on Student Evaluations and Performance, College Student Journal, 47(2), 352-363.

Boffey, D. (2012) Lecturers should need a teaching qualification, says NUS president. The Guardian, 22 April. Retrieved from http://www.guardian.co.uk/education/2012/apr/22/liam-burn-nus-academics-lecturers.

Borden, V.M. (1995) Segmenting student markets with student satisfaction and priorities survey. Research in Higher Education 36(1), 73-88.

Boring, A. (2015) Gender Biases in Student Evaluations of Teachers, Working Paper.

Brown, R.M. and Mazzarol, T.W. (2009) The importance of institutional image to student satisfaction and loyalty within higher education, Higher Education, 58, 81-95.

Brown, A. (2014) Being REFable: strategies and tactics in meeting the challenges of the research quality assessment process, in Cunningham, B. (ed.) Professional Life in Modern British Higher Education. Institute of Education Press, London.

Browne, J. (2010) Securing a Sustainable Future for Higher Education: An Independent Review of Higher Education Funding and Student Finance. BIS.

Browne, B., Kaldenberg, D., Browne, W. and Brown, D. (1998) Students as customers: factors affecting satisfaction and assessments of institutional quality, Journal of Marketing for Higher Education, 8(3), 1-14.


Bryant, D. and Richardson, A. (2015) To be, or not to be, trained, Journal of Higher Education Policy and Management, early view 26/10/15.

Buckley, A., Soilemetzidis, I. and Hillman, N. (2015) The 2015 Student Academic Experience Survey. Higher Education Policy Institute and Higher Education Academy.

Callender, C., Ramsden, P. and Griggs, J. (2014) Review of the National Student Survey. Higher Education Funding Council for England.

Cashmore, A., Cane, C. and Cane, R. (2013) Rebalancing Promotion in the HE Sector: Is Teaching Excellence being Rewarded? Higher Education Academy, York.

Chatterton, P. and Goddard, J. (2000) The response of higher education institutions to regional needs. European Journal of Education 35(4), 475-496.

Collini, S. (2012) What are Universities For? Penguin, London.

Copeland, R. (2014) Beyond the Consumerist Agenda: Teaching Quality and the ‘Student Experience’ in Higher Education. University and College Union, London.

Davidovitch, N., Soen, D. and Sinuani-Stern, Z. (2011) Performance measures of academic faculty: a case study. Journal of Further and Higher Education 35, 355-373.

Douglas, J., McClelland, R. and Davies, J. (2008) The development of a conceptual model of student satisfaction with their experience in higher education. Quality Assurance in Education 16(1), 19-35.

Eskildsen, J., Martensen, A., Gronholdt, L. and Kristensen, K. (1999) Benchmarking student satisfaction in higher education based on the ECSI methodology. Proceedings of the TQM for Higher Education Institutions Conference: Higher Education Institutions and the Issue of Total Quality, 30-31 August, Verona, pp. 385-402.


Gibbons, S., Neumayer, E. and Perkins, R. (2015) Student satisfaction, league tables and university applications: evidence from Britain. Economics of Education Review 48, 148-164.

Gibbs, G. (2008) Conceptions of Teaching Excellence Underlying Teaching Award Schemes. Higher Education Academy, York.

Gibbs, G. (2010) Dimensions of Quality. Higher Education Academy, York.

Gibbs, G. (2012) Implications of ‘Dimensions of Quality’ in a Market Environment. Higher Education Academy, York.

Gornall, L. and Thomas, B. (2014) Professional work and policy reform agendas in a marketised higher education system, in Gornall, L., Cook, C., Daunton, L., Salisbury, J. and Thomas, B. (eds.) Academic Working Lives: Experience, Practice and Change. Bloomsbury, London.


Gruber, T., Fuß, S., Voss, R. and Gläser-Zikuda, M. (2010) Examining student satisfaction with higher education services. International Journal of Public Sector Management 23(2), 105-123.

Gunn, V. and Fisk, A. (2013) Considering Teaching Excellence in Higher Education: 2007-2013: A Literature Review since the CHERI Report 2007. Higher Education Academy, York.

Guolla, M. (1999) Assessing the teaching quality to student satisfaction relationship: applied customer satisfaction research in the classroom. Journal of Marketing Theory and Practice 7(3), 87-97.

Hattie, J. and Marsh, H.W. (1996) The relationship between research and teaching: a meta-analysis. Review of Educational Research 66(4), 507-542.

Helgesen, Ø. and Nesset, E. (2007) Images, satisfaction and antecedents: drivers of student loyalty? A case study of a Norwegian university college. Corporate Reputation Review 10(1), 38-59.

Huston, T.A. (2005) Research Report: Race and Gender Bias in Student Evaluations of Teaching. Working Paper, Seattle University.

Huston, T.A. (2006) Race and Gender Bias in Higher Education: Could Faculty Course Evaluations Impede Further Progress towards Parity? Seattle Journal for Social Justice 4(2), Article 34.

Kotler, P. and Fox, K. (1995) Strategic Marketing for Educational Institutions , second edition, Prentice Hall, Englewood Cliffs, New Jersey.

Land, R. and Gordon, G. (2015) Teaching Excellence Initiatives: Modalities and Operational Factors . Higher Education Academy, York.

Layton, C., and Brown, C. (2011) Striking a balance: Supporting teaching excellence award applications. International Journal for Academic Development , 16, 163–174.

Letcher, D.W. and Neves, J.S. (2010) Determinants of undergraduate business student satisfaction. Research in Higher Education Journal 6, 1-26.

Locke, W. (2004) Integrating research and teaching strategies: implications for institutional management and leadership in the United Kingdom. Higher Education Management and Policy 16(4), 101-120.

Locke, W. (2012) The dislocation of teaching and research and the reconfiguring of academic work. London Review of Education 10(3), 261-274.

Locke, W. (2014) Shifting Academic Careers: Implications for Enhancing Professionalism in Teaching and Supporting Learning. Higher Education Academy, York.

Mavondo, F.T., Tsarenko, Y. and Gabbott, M. (2004) International and local student satisfaction: resources and capabilities perspective. Journal of Marketing for Higher Education 14(1), 41-60.

Merritt, D.J. (2012) Bias, the Brain and Student Evaluations of Teaching. St John's Law Review 82(1), Article 6.


Naftulin, D.H., Ware, J.E. Jr. and Donnelly, F.A. (1973) The Doctor Fox Lecture: A Paradigm of Educational Seduction. Journal of Medical Education 48, 630-635.

Nguyen, N. and LeBlanc, G. (2001) Image and reputation of higher education institutions in students’ retention decisions. International Journal of Education Management 15(6), 303-311.

Palacio, A., Meneses, G. and Pérez, P. (2002) The configuration of the university image and its relationship with the satisfaction of students. Journal of Educational Administration 40(5), 486-505.

Parker, C. and Matthews, B.P. (2001) Customer satisfaction: contrasting academic and consumers' interpretations. Marketing Intelligence and Planning 19(1), 38-44.

Pike, G. (1991) The effects of background, coursework, and involvement on students’ grades and satisfaction. Research in Higher Education , 32(1), 15-31.

Reid, L.D. (2010) The Role of Perceived Race and Gender in the Evaluation of College Teaching on RateMyProfessors.com, Journal of Diversity in Higher Education, 3(3), 137-152.

Sabri, D. (2013) Student Evaluations of Teaching as ‘Fact-Totems’: the Case of the UK National Student Survey. Sociological Research Online 18(4), 1-15.

Taylor, C. and McCaig, C. (2014) Evaluating the Impact of Number Controls, Choice and Competition: An Analysis of the Student Profile and the Student Learning Environment in the New Higher Education Landscape . Higher Education Academy, York.

Thornton, T. (2014) Professional recognition: promoting recognition through the Higher Education Academy in a UK higher education institution. Tertiary Education and Management 20(3), 225-238.

Walker, J.T., Vignoles, A. and Collins, M. (2010) Higher Education Academic Salaries in the UK. Oxford Economic Papers 62(1), 12-35.

Williams, J. and Cappuccini-Ansfield, G. (2007) Fitness for purpose? National and institutional approaches to publicising the student voice. Quality in Higher Education 13(2), 159-172.

Yorke, M. (2009) Student experience surveys: some methodological considerations and an empirical investigation. Assessment and Evaluation in Higher Education 34(6), 721-739.

Zeithaml, V.A., Berry, L.L. and Parasuraman, A. (1993) The nature and determinants of customer expectations of service. Journal of the Academy of Marketing Science 21(1), 1-12.


Table 1: Variable Definitions, Sources and Summary Statistics

Variable Name | Description | Units | Mean | Source

Subject-specific variables:
Total satisfaction score | The average of the responses to survey question 22, "Overall I am satisfied with the quality of the course", coded on a Likert scale so that strongly agree = 5, agree = 4, etc. | 1-5 scale | 4.22 | NSS
Percent satisfied overall | The percentage of respondents who strongly agree or agree with question 22, "Overall I am satisfied with the quality of the course" | Percentage | 86.13 | NSS
Teaching satisfaction score | The average of the responses to the "the teaching on my course" section, where the responses to questions 1 to 4 are themselves averaged | 1-5 scale | 4.22 | NSS
Percent satisfied with teaching | The percentage of respondents who are satisfied in the "the teaching on my course" section, where the responses to questions 1 to 4 are themselves averaged | Percentage | 87.59 | NSS
Mode | Part-time = 1; full-time = 0 | 0 or 1 | 0.045 | NSS
Response rate | Percentage of the eligible population who complete the questionnaire | Percentage | 75.02 | NSS

Cost centre and institution specific variables:
Total staff | Total number of academic staff | Integer | 87.90 | HESA
Teaching qualification | Percentage of total staff who hold a formal teaching qualification (e.g. PGCE, HEA Fellowship) | Percentage | 69.95 | HESA
Length of service | Average length of service of academic staff | Years | 7.70 | HESA
Age of academic staff | Average age of academic staff | Years | 45.05 | HESA
White | Percentage of academic staff who define their ethnicity as white | Percentage | 81.33 | HESA
Female | Percentage of female academic staff | Percentage | 43.20 | HESA
Doctorate | Percentage of academic staff holding a doctorate | Percentage | 50.15 | HESA
UK nationality | Percentage of academic staff defining their nationality as UK | Percentage | 74.28 | HESA
Other EU nationality | Percentage of academic staff defining their nationality as non-UK European | Percentage | 13.39 | HESA
Previously at another HEI | Percentage of academic staff whose previous post was at another higher education institution | Percentage | 34.92 | HESA
Previously a student | Percentage of academic staff who were immediately previously students | Percentage | 10.39 | HESA
Teaching-only | Percentage of academic staff whose employment function involves only teaching | Percentage | 11.40 | HESA
Staff salary | Average salary of full-time academic staff | Pounds | 47872 | HESA
Professors | Percentage of academic staff whose contract level is full professor | Percentage | 9.43 | HESA
Part-time staff | Percentage of academic staff employed part-time | Percentage | 32.31 | HESA
Fixed-term staff | Percentage of academic staff employed on fixed-term contracts | Percentage | 27.59 | HESA
Student-staff ratio | Number of registered students per academic staff member FTE | Number | 17.80 | HESA

Institution-wide variables:
Entry standard | The average UCAS tariff score of new undergraduates | Integer | 356.68 | CUG
Graduate prospects | The number of graduates who take up employment or further study as a percentage of the total number of graduates with a known destination | Percentage | 66.65 | CUG
Good honours | The percentage of graduates achieving a first or upper second class honours degree | Percentage | 71.20 | CUG
Degree completion | The percentage of students completing their course | Percentage | 86.76 | CUG
Green score | A rating of universities based on questions about their environmental and ethical commitments and actions | Percentage | 47.77 | CUG
Staff-to-total expenditure | The ratio of expenditure on staff salaries to total expenditure | Percentage | 54.65 | CUG
Research power | Calculated as the grade-point average of the total quality profile x total FTE staff submitted | Number | 14.26 | THE
4* research | Percentage of the university's overall research quality profile that was rated 4* in REF2014 | Percentage | 20.60 | THE
Russell group | 1 = the University is a member of the Russell group | 0 or 1 | 0.23 | CUG
In QS Top 400 | 1 = the University is within the QS Top 400 Universities ranking | 0 or 1 | 0.37 | CUG
New university | 1 = the University is new (established post-1992) | 0 or 1 | 0.52 | CUG

Notes: NSS is the National Student Survey; HESA is the Higher Education Statistics Agency; CUG is the Complete University Guide; THE is the Times Higher Education.
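
For readers wishing to replicate the outcome measures, the following minimal sketch (not the authors' code; the data and column name are hypothetical) shows how the two headline variables in Table 1 can be computed from raw question-22 responses:

```python
# Sketch only: build the two headline NSS outcome variables from
# hypothetical respondent-level data (q22 coded 1 = strongly disagree,
# ..., 5 = strongly agree).
import pandas as pd

responses = pd.DataFrame({"q22": [5, 4, 4, 2, 5, 3, 4, 5]})  # illustrative

# "Total satisfaction score": the mean of the 1-5 Likert codes.
total_satisfaction_score = responses["q22"].mean()

# "Percent satisfied overall": the share answering agree (4) or
# strongly agree (5).
percent_satisfied = 100 * (responses["q22"] >= 4).mean()

print(f"score = {total_satisfaction_score:.2f}, "
      f"satisfied = {percent_satisfied:.1f}%")
```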


Table 2: Correlation between Sub-Categories – Average (1-5) Scores

Satisfaction with the: | Assessment & feedback | Academic support | Organisation & management | Learning resources | Personal development | Overall satisfaction | Students' Union
Teaching on my course | 0.657 | 0.752 | 0.599 | 0.312 | 0.645 | 0.827 | 0.187
Assessment & feedback | - | 0.694 | 0.565 | 0.258 | 0.531 | 0.666 | 0.223
Academic support | - | - | 0.649 | 0.413 | 0.665 | 0.797 | 0.275
Organisation & management | - | - | - | 0.334 | 0.425 | 0.749 | 0.242
Learning resources | - | - | - | - | 0.396 | 0.407 | 0.311
Personal development | - | - | - | - | - | 0.680 | 0.287
Overall satisfaction | - | - | - | - | - | - | 0.263

Notes: This table presents the Pearson correlations between the average scores awarded to each section of the NSS questionnaire, based on the conversion of the Likert scale responses (1 = strongly disagree, …, 5 = strongly agree) to a points score.
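
The entries in Tables 2 and 3 are ordinary Pearson coefficients, which can be reproduced directly from a course-level data set; a sketch with illustrative values and hypothetical column names:

```python
# Sketch only: Pearson correlation matrix across NSS section scores,
# as in Table 2. Values and column names are illustrative.
import pandas as pd

sections = pd.DataFrame({
    "teaching":   [4.3, 4.1, 4.6, 3.9, 4.4],
    "assessment": [3.8, 3.9, 4.2, 3.5, 4.0],
    "overall":    [4.2, 4.0, 4.5, 3.7, 4.3],
})

print(sections.corr(method="pearson").round(3))
```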


Table 3: Correlation between Sub-Categories – Percent Satisfied

Satisfaction with the: | Assessment & feedback | Academic support | Organisation & management | Learning resources | Personal development | Overall satisfaction | Students' Union
Teaching on my course | 0.565 | 0.678 | 0.555 | 0.240 | 0.537 | 0.768 | 0.185
Assessment & feedback | - | 0.639 | 0.508 | 0.182 | 0.444 | 0.569 | 0.186
Academic support | - | - | 0.584 | 0.338 | 0.578 | 0.707 | 0.261
Organisation & management | - | - | - | 0.263 | 0.358 | 0.684 | 0.250
Learning resources | - | - | - | - | 0.326 | 0.318 | 0.277
Personal development | - | - | - | - | - | 0.567 | 0.276
Overall satisfaction | - | - | - | - | - | - | 0.236

Notes: This table presents the Pearson correlations between the scores for each section of the NSS questionnaire, based on the percentage of students who are satisfied in each case.


Table 4: Average NSS Scores by Subject Area based on Overall Satisfaction

Subject Area | Mean | Std. dev. | Min. | 5th percentile | 95th percentile | Max.
(101) Clinical medicine | 4.28 | 0.31 | 3.5 | 3.63 | 4.70 | 4.8
(102) Clinical dentistry | 4.65 | 0.14 | 4.4 | 4.48 | 4.83 | 4.9
(103) Nursing & allied health professions | 4.17 | 0.32 | 2.6 | 3.60 | 4.60 | 4.8
(104) Psychology & behavioural sciences | 4.23 | 0.26 | 3.2 | 3.70 | 4.60 | 4.9
(105) Health & community studies | 4.23 | 0.42 | 2.4 | 3.52 | 4.70 | 4.8
(106) Anatomy & physiology | 4.34 | 0.23 | 3.6 | 4.00 | 4.70 | 4.8
(107) Pharmacy & pharmacology | 4.34 | 0.26 | 3.9 | 3.90 | 4.70 | 4.8
(108) Sports science & leisure studies | 4.24 | 0.30 | 3.2 | 3.80 | 4.70 | 4.8
(109) Veterinary science | 4.44 | 0.17 | 4.3 | 4.30 | 4.70 | 4.8
(110) Agriculture, forestry & food science | 4.28 | 0.31 | 3.3 | 3.68 | 4.65 | 4.7
(111) Earth, marine & environmental sciences | 4.31 | 0.30 | 3.0 | 3.90 | 4.80 | 4.9
(112) Biosciences | 4.28 | 0.28 | 3.5 | 3.80 | 4.70 | 4.9
(113) Chemistry | 4.33 | 0.24 | 3.7 | 3.92 | 4.64 | 4.7
(114) Physics | 4.37 | 0.27 | 3.6 | 3.80 | 4.70 | 4.8
(115) General engineering | 4.09 | 0.36 | 3.1 | 3.50 | 4.70 | 4.8
(116) Chemical engineering | 4.21 | 0.30 | 3.6 | 3.62 | 4.50 | 4.7
(117) Mineral, metallurgy & materials engineering | 4.15 | 0.37 | 3.2 | 3.53 | 4.55 | 4.6
(118) Civil engineering | 4.23 | 0.31 | 3.2 | 3.50 | 4.60 | 4.7
(119) Electrical, electronic & computer engineering | 4.15 | 0.36 | 2.5 | 3.66 | 4.60 | 4.9
(120) Mechanical, aero & production engineering | 4.12 | 0.32 | 3.4 | 3.60 | 4.60 | 4.8
(121) IT, systems sciences & computer software engineering | 4.13 | 0.31 | 3.0 | 3.64 | 4.66 | 4.9
(122) Mathematics | 4.32 | 0.22 | 3.7 | 3.90 | 4.70 | 4.7
(123) Architecture, built environment & planning | 4.22 | 0.30 | 3.4 | 3.69 | 4.70 | 4.8
(124) Geography & environmental studies | 4.30 | 0.26 | 3.4 | 3.80 | 4.60 | 4.8
(125) Area studies | 4.13 | 0.22 | 3.8 | 3.80 | 4.45 | 4.5
(126) Archaeology | 4.35 | 0.36 | 2.9 | 3.70 | 4.80 | 4.9
(127) Anthropology & development studies | 4.27 | 0.28 | 3.8 | 3.80 | 4.67 | 4.7
(128) Politics & international studies | 4.23 | 0.31 | 3.1 | 3.70 | 4.60 | 4.7
(129) Economics & econometrics | 4.16 | 0.29 | 3.5 | 3.61 | 4.50 | 4.8
(130) Law | 4.27 | 0.24 | 3.5 | 3.80 | 4.60 | 4.8
(131) Social work & social policy | 4.16 | 0.32 | 3.3 | 3.60 | 4.60 | 4.8
(132) Sociology | 4.22 | 0.32 | 2.7 | 3.70 | 4.60 | 4.7
(133) Business & management studies | 4.19 | 0.29 | 2.8 | 3.70 | 4.60 | 4.9
(134) Catering & hospitality management | 4.19 | 0.20 | 3.8 | 3.92 | 4.56 | 4.6
(135) Education | 4.21 | 0.35 | 2.7 | 3.60 | 4.70 | 4.9
(137) Modern languages | 4.18 | 0.30 | 3.2 | 3.60 | 4.60 | 4.8
(138) English language & literature | 4.26 | 0.30 | 2.9 | 3.71 | 4.60 | 4.9
(139) History | 4.35 | 0.28 | 3.4 | 3.80 | 4.70 | 5.0
(140) Classics | 4.35 | 0.22 | 3.8 | 4.00 | 4.69 | 4.7
(141) Philosophy | 4.38 | 0.19 | 3.9 | 4.10 | 4.60 | 4.8
(142) Theology & religious studies | 4.41 | 0.28 | 3.4 | 4.00 | 4.70 | 4.9
(143) Art & design | 4.14 | 0.31 | 3.2 | 3.68 | 4.60 | 4.9
(144) Music, dance, drama & performing arts | 4.14 | 0.38 | 2.8 | 3.40 | 4.70 | 5.0
(145) Media studies | 4.02 | 0.37 | 2.6 | 3.40 | 4.56 | 4.8
Notes: This table presents summary measures within each subject area (HESA cost coding) using an unweighted average across all HEIs operating within that cost code, based on the conversion of the Likert scale responses for overall satisfaction (1 = strongly disagree, …, 5 = strongly agree) to a points score.
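
Summaries of this kind follow from a single group-by aggregation over the course-level data; a sketch under the assumption of hypothetical 'subject' and 'overall_score' columns:

```python
# Sketch only: per-subject summary statistics as in Tables 4 and 5.
import pandas as pd

df = pd.DataFrame({
    "subject": ["(122) Mathematics"] * 3 + ["(130) Law"] * 3,
    "overall_score": [4.5, 4.2, 4.3, 4.4, 4.0, 4.3],  # illustrative
})

summary = df.groupby("subject")["overall_score"].agg(
    mean="mean",
    std="std",
    min="min",
    p5=lambda s: s.quantile(0.05),
    p95=lambda s: s.quantile(0.95),
    max="max",
)
print(summary.round(2))
```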


Table 5: Average NSS Scores by Subject Area based on Percentage Satisfied

Subject Area | Mean | Std. dev. | Min. | 5th percentile | 95th percentile | Max.
(101) Clinical medicine | 87.98 | 9.27 | 64 | 67.25 | 99.75 | 100
(102) Clinical dentistry | 96.38 | 2.68 | 90 | 91.50 | 99.25 | 100
(103) Nursing & allied health professions | 85.32 | 9.96 | 29 | 68.00 | 100.00 | 100
(104) Psychology & behavioural sciences | 86.80 | 8.42 | 53 | 69.50 | 97.00 | 100
(105) Health & community studies | 85.29 | 13.78 | 24 | 63.40 | 100.00 | 100
(106) Anatomy & physiology | 91.11 | 6.33 | 73 | 75.65 | 99.35 | 100
(107) Pharmacy & pharmacology | 90.11 | 6.76 | 75 | 76.80 | 99.20 | 100
(108) Sports science & leisure studies | 86.45 | 9.00 | 52 | 70.05 | 98.95 | 100
(109) Veterinary science | 92.88 | 3.56 | 88 | 88.70 | 97.30 | 98
(110) Agriculture, forestry & food science | 88.22 | 10.86 | 52 | 68.95 | 100.00 | 100
(111) Earth, marine & environmental sciences | 88.62 | 9.44 | 44 | 73.60 | 100.00 | 100
(112) Biosciences | 88.16 | 8.84 | 58 | 71.35 | 100.00 | 100
(113) Chemistry | 89.74 | 7.42 | 67 | 75.60 | 98.80 | 100
(114) Physics | 90.28 | 7.12 | 70 | 75.65 | 100.00 | 100
(115) General engineering | 82.43 | 11.07 | 52 | 64.00 | 97.80 | 100
(116) Chemical engineering | 87.26 | 9.83 | 69 | 70.00 | 100.00 | 100
(117) Mineral, metallurgy & materials engineering | 82.58 | 13.96 | 50 | 58.80 | 98.35 | 100
(118) Civil engineering | 87.14 | 9.06 | 56 | 67.50 | 97.25 | 100
(119) Electrical, electronic & computer engineering | 84.15 | 14.18 | 17 | 66.65 | 98.90 | 100
(120) Mechanical, aero & production engineering | 83.48 | 10.40 | 58 | 63.65 | 96.00 | 100
(121) IT, systems sciences & computer software engineering | 83.45 | 10.02 | 38 | 69.00 | 100.00 | 100
(122) Mathematics | 89.78 | 6.50 | 73 | 75.75 | 100.00 | 100
(123) Architecture, built environment & planning | 86.14 | 10.04 | 46 | 65.85 | 100.00 | 100
(124) Geography & environmental studies | 89.28 | 7.91 | 57 | 74.00 | 100.00 | 100
(125) Area studies | 85.42 | 6.08 | 71 | 77.05 | 93.00 | 93
(126) Archaeology | 88.23 | 11.37 | 40 | 65.90 | 100.00 | 100
(127) Anthropology & development studies | 86.86 | 10.93 | 57 | 67.75 | 98.95 | 100
(128) Politics & international studies | 87.20 | 10.25 | 52 | 68.80 | 100.00 | 100
(129) Economics & econometrics | 83.87 | 10.27 | 52 | 64.15 | 95.95 | 100
(130) Law | 87.96 | 7.85 | 56 | 72.55 | 98.00 | 100
(131) Social work & social policy | 84.21 | 10.49 | 50 | 64.90 | 97.00 | 100
(132) Sociology | 86.69 | 10.74 | 38 | 71.00 | 98.00 | 100
(133) Business & management studies | 86.04 | 9.56 | 25 | 69.00 | 99.55 | 100
(134) Catering & hospitality management | 85.16 | 6.13 | 67 | 75.00 | 93.00 | 94
(135) Education | 85.92 | 11.03 | 25 | 66.90 | 100.00 | 100
(137) Modern languages | 85.83 | 9.69 | 50 | 66.90 | 98.20 | 100
(138) English language & literature | 87.35 | 9.32 | 39 | 70.00 | 100.00 | 100
(139) History | 89.72 | 8.57 | 53 | 73.85 | 100.00 | 100
(140) Classics | 90.21 | 5.92 | 77 | 78.60 | 99.25 | 100
(141) Philosophy | 90.75 | 6.67 | 77 | 79.00 | 100.00 | 100
(142) Theology & religious studies | 90.86 | 9.47 | 59 | 75.20 | 100.00 | 100
(143) Art & design | 82.76 | 10.08 | 50 | 66.00 | 100.00 | 100
(144) Music, dance, drama & performing arts | 82.60 | 12.17 | 34 | 59.30 | 98.70 | 100
(145) Media studies | 79.77 | 12.74 | 30 | 56.90 | 99.10 | 100
Notes: This table presents summary measures within each subject area (HESA cost coding) using an unweighted average across all HEIs operating within that cost code, based on the percentage of students who are satisfied overall.


Table 6: Which Individual Areas Drive Overall Satisfaction

Explanatory Variable | Satisfaction Score | Percent Satisfied
Constant | -1.073 (0.055)*** | -15.179 (1.563)***
The teaching on my course | 0.489 (0.016)*** | 0.524 (0.021)***
Assessment and feedback | 0.030 (0.011)*** | 0.024 (0.012)**
Academic support | 0.194 (0.017)*** | 0.181 (0.020)***
Organisation and management | 0.286 (0.010)*** | 0.262 (0.013)***
Learning resources | 0.055 (0.010)*** | 0.058 (0.013)***
Personal development | 0.212 (0.015)*** | 0.159 (0.017)***
Satisfaction with the Students' Union | 0.004 (0.006) | -0.004 (0.006)
R2 | 0.83 | 0.72
N | 4465 | 4465
Notes: This table presents the parameter estimates (with standard errors in parentheses) for regressions where the dependent variable is overall satisfaction (the response to question 22 in the survey), based on either the Likert scale (middle column) or percentage of students who are satisfied (right hand column). The explanatory variables are the average scores for each section on the form. Each data point represents a course or course collection at a higher education institution. *, ** and *** denote significance at the 10%, 5% and 1% levels respectively.
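
A regression of this form is straightforward to estimate; the sketch below is not the authors' code, and the file and variable names are hypothetical stand-ins for the course-level data set described in the notes:

```python
# Sketch only: regress overall satisfaction on the seven NSS section
# scores, as in Table 6.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nss_course_level.csv")  # hypothetical file

model = smf.ols(
    "overall ~ teaching + assessment + support + organisation"
    " + resources + development + students_union",
    data=df,
).fit()
print(model.summary())  # coefficients, standard errors, R-squared
```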


Table 7: Satisfaction by Region

Sub-sample | N | Score mean | Score std. dev. | Score min. | Score median | Score max. | % mean | % std. dev. | % min. | % median | % max.
Southeast | 582 | 4.261 | 0.303 | 2.6 | 4.3 | 4.9 | 87.74 | 9.59 | 25 | 90 | 100
East Midlands | 321 | 4.249 | 0.295 | 2.4 | 4.3 | 4.9 | 86.89 | 9.72 | 24 | 89 | 100
London | 653 | 4.114 | 0.331 | 2.5 | 4.1 | 4.9 | 82.62 | 11.21 | 17 | 84 | 100
North West | 512 | 4.195 | 0.309 | 2.8 | 4.2 | 4.9 | 85.72 | 9.89 | 34 | 87 | 100
Yorks & Humber | 407 | 4.245 | 0.270 | 3.3 | 4.3 | 5.0 | 87.50 | 8.72 | 50 | 89 | 100
West Midlands | 370 | 4.235 | 0.291 | 3.2 | 4.3 | 4.9 | 87.11 | 9.28 | 55 | 89 | 100
Southwest | 325 | 4.198 | 0.349 | 2.9 | 4.2 | 4.9 | 85.44 | 11.01 | 40 | 88 | 100
East | 229 | 4.253 | 0.289 | 3.2 | 4.3 | 5.0 | 87.07 | 9.04 | 53 | 88 | 100
Northeast | 168 | 4.270 | 0.269 | 3.1 | 4.3 | 4.9 | 88.11 | 8.09 | 50 | 90 | 100
Wales | 299 | 4.184 | 0.350 | 2.8 | 4.2 | 5.0 | 84.95 | 11.89 | 25 | 87 | 100
Scotland | 491 | 4.220 | 0.327 | 2.6 | 4.2 | 4.9 | 86.07 | 10.48 | 29 | 88 | 100
Northern Ireland | 107 | 4.377 | 0.377 | 3.4 | 4.4 | 4.8 | 90.50 | 7.87 | 52 | 92 | 100
F-test of equal mean in all regions: F-statistic 11.325 (p-value: 0.000) for the mean scores; F-statistic 11.944 (p-value: 0.000) for percent satisfied.
Entire sample | 4465 | 4.216 | 0.315 | 2.4 | 4.3 | 5.0 | 86.13 | 10.20 | 17 | 88 | 100
Notes: This table reports summary statistics for overall satisfaction (based on responses to question 22 in the survey) by region. The "Score" columns are based on the Likert scale responses while the "%" columns are based on the percentage of students who are satisfied. The penultimate row reports the results from an F-test for equal satisfaction across all regions while the final row presents overall satisfaction summary statistics for the entire sample.
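
The F-test in Table 7 is a one-way analysis of variance of satisfaction across regions; a sketch, again with hypothetical file and column names:

```python
# Sketch only: one-way ANOVA of overall satisfaction across regions,
# as in the F-test row of Table 7.
import pandas as pd
from scipy import stats

df = pd.read_csv("nss_course_level.csv")  # hypothetical file

groups = [g["overall_score"].values for _, g in df.groupby("region")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```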


Table 8: Pairwise Comparisons of NSS Scores

Sub-sample | N | Score mean | Score std. dev. | Score min. | Score median | Score max. | % mean | % std. dev. | % min. | % median | % max.
Part-time | 201 | 4.242 | 0.337 | 2.6 | 4.3 | 4.9 | 86.75 | 11.31 | 29 | 89 | 100
Full-time | 4264 | 4.215 | 0.314 | 2.4 | 4.3 | 5.0 | 86.10 | 10.14 | 17 | 88 | 100
t-test, part-time vs full-time: t-statistic -1.113 (p-value: 0.267); t-statistic -0.800 (p-value: 0.425)
Russell group | 1024 | 4.261 | 0.279 | 2.7 | 4.3 | 5.0 | 87.66 | 8.88 | 38 | 90 | 100
Not Russell group | 3440 | 4.202 | 0.323 | 2.4 | 4.2 | 5.0 | 85.68 | 10.52 | 17 | 88 | 100
t-test, Russell vs not Russell: t-statistic -5.721 (p-value: 0.000); t-statistic -5.992 (p-value: 0.000)
Specialist institution | 77 | 4.190 | 0.324 | 3.2 | 4.2 | 4.9 | 84.75 | 10.36 | 45 | 86 | 100
Not specialist institution | 4387 | 4.216 | 0.315 | 2.4 | 4.3 | 5.0 | 86.16 | 10.19 | 17 | 88 | 100
t-test, specialised vs not: t-statistic 0.698 (p-value: 0.486); t-statistic 1.184 (p-value: 0.238)
University in QS Top 400 | 1662 | 4.273 | 0.280 | 2.7 | 4.3 | 5.0 | 88.03 | 8.90 | 25 | 90 | 100
University not in QS Top 400 | 2802 | 4.182 | 0.329 | 2.4 | 4.2 | 5.0 | 85.01 | 10.74 | 17 | 87 | 100
t-test, in QS Top 400 vs not: t-statistic -9.824 (p-value: 0.000); t-statistic -10.133 (p-value: 0.000)
New university | 2318 | 4.165 | 0.333 | 2.4 | 4.2 | 4.9 | 84.54 | 10.94 | 17 | 86 | 100
Old university | 2146 | 4.271 | 0.284 | 2.7 | 4.3 | 5.0 | 87.85 | 9.02 | 25 | 90 | 100
t-test, new university vs old: t-statistic 11.469 (p-value: 0.000); t-statistic 11.061 (p-value: 0.000)
Entire sample | 4465 | 4.216 | 0.315 | 2.4 | 4.3 | 5.0 | 86.13 | 10.20 | 17 | 88 | 100
Notes: This table reports summary statistics for overall satisfaction (based on responses to question 22 in the survey) for various pairwise sample splits together with the results of t-tests of the null hypothesis that the means of the two sub-samples are equal in each case. In each t-test row, the first statistic relates to the mean (Likert) scores and the second to the percentage satisfied. The final row presents overall satisfaction summary statistics for the entire sample.
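
Each comparison in Table 8 is a standard two-sample t-test on a binary sample split; a sketch for the Russell-group split (file and column names hypothetical):

```python
# Sketch only: two-sample t-test of mean satisfaction, Russell group
# vs other institutions, as in Table 8.
import pandas as pd
from scipy import stats

df = pd.read_csv("nss_course_level.csv")  # hypothetical file

russell = df.loc[df["russell_group"] == 1, "overall_score"]
other = df.loc[df["russell_group"] == 0, "overall_score"]
t_stat, p_value = stats.ttest_ind(russell, other)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```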


Table 9: Regression to Explain Total Satisfaction Scores

Explanatory variable | Expected sign | (1) | (2) | (3) | (4)
Constant | ? | 3.3120 (0.1712)*** | 3.6101 (0.1308)*** | 3.0855 (0.1318)*** | 3.072 (0.1905)***
Mode | - | 0.0561 (0.0315)* | 0.0648 (0.0262)* | 0.0435 (0.0309) | 0.0326 (0.034)
Response rate | + | 0.0046 (0.0006)*** | 0.0038 (0.0006)*** | 0.0032 (0.0005)*** | 0.0034 (0.0006)***

Cost centre and institution specific variables:
Total staff | ? | 0.8130 (0.4079)** | 0.1370 (0.3988)*** | - | 0.5494 (0.4241)
Teaching qualification | + | 0.0003 (0.0003) | 0.0001 (0.0002) | - | 0.00003 (0.0002)
Length of service | + | 0.0085 (0.0032)*** | 0.0060 (0.0027)** | - | 0.0071 (0.0030)**
Age of academic staff | ? | -0.0038 (0.0022)* | -0.0030 (0.0021) | - | -0.0013 (0.0023)
White | ? | 0.0020 (0.0007)*** | 0.0012 (0.0006)** | - | 0.0009 (0.0006)
Female | ? | 0.0002 (0.0005) | 0.0003 (0.0005) | - | 0.0003 (0.0006)
Doctorate | + | 0.0008 (0.0004)* | 0.0015 (0.0004)*** | - | 0.0011 (0.0004)***
UK nationality | ? | -0.1920 (8.7066) | 0.0017 (0.0008)** | - | 0.0012 (0.0008)
Other EU nationality | ? | -0.0002 (0.0011) | 0.0019 (0.0010)* | - | 0.0016 (0.0010)
Previously at another HEI | + | 0.0712 (2.5516) | -0.1095 (2.1319) | - | -0.8685 (2.4131)
Previously a student | - | 0.0005 (0.0004) | 0.0005 (0.0004) | - | 0.0001 (0.0004)
Teaching-only | - | 0.0006 (0.0007) | 0.0004 (0.0005) | - | 0.0004 (0.0006)
Staff salary | ? | 0.0010 (0.0157) | -0.0260 (0.0136)* | - | -0.0299 (0.0150)**
Professors | + | 0.0013 (0.0010) | 0.0017 (0.0009)* | - | 0.0010 (0.0009)
Part-time staff | - | 0.0007 (0.0005) | 0.0002 (0.0004) | - | -0.0002 (0.0004)
Fixed-term staff | - | 0.0007 (0.0005) | 0.0012 (0.0003)*** | - | 0.0008 (0.0004)**
Student-staff ratio | - | -0.0004 (0.0009) | -0.0019 (0.0009)** | - | -0.0014 (0.0009)

Institution-wide variables:
Entry standard | ? | - | - | -0.0002 (0.0002) | -0.0001 (0.0002)
Graduate prospects | + | - | - | 0.0050 (0.0009) | 0.0045 (0.0010)***
Good honours | + | - | - | 0.0007 (0.0014) | 0.0005 (0.0015)
Degree completion | + | - | - | 0.0057 (0.0016)*** | 0.0044 (0.0018)**
Green score | + | - | - | -0.0006 (0.0003) | -0.0005 (0.0004)
Staff-to-total expenditure | + | - | - | 0.0007 (0.0007) | 0.0008 (0.0007)
Research power | ? | - | - | -0.0010 (0.0006)* | -0.0008 (0.0006)
4* research | ? | - | - | -0.0033 (0.0010)*** | -0.0035 (0.0011)***
Russell group | + | - | - | -0.0370 (0.0186)** | -0.0751 (0.0209)***
In QS Top 400 | + | - | - | 0.0005 (0.0196) | 0.0185 (0.0217)
New university | - | - | - | -0.0700 (0.0183)*** | -0.0509 (0.0201)**

N | - | 3895 | 3895 | 4247 | 3730
R2 | - | 0.22 | 0.11 | 0.12 | 0.14
Subject F.E. | - | YES | YES | YES | YES
Institution F.E. | - | YES | NO | NO | NO

Notes: This table reports the results from a set of regressions where the dependent variable is the overall satisfaction score (Likert scale). Specifications (1) to (4) all use the same dependent variable but vary according to which groups of explanatory variables they contain and whether both subject- and institution-fixed effects or just the former are employed. The observations are courses or course collections at each higher education institution. Standard errors are given in parentheses. *, ** and *** denote significance at the 10%, 5% and 1% levels respectively. The total staff, UK nationality, previously at another HEI, and staff salary parameters and their standard errors are multiplied by 10000 for ease of presentation.
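
Subject fixed effects of the kind used in specifications (2) to (4) can be implemented as subject dummies; the sketch below includes only a small subset of the regressors and uses hypothetical file and variable names throughout:

```python
# Sketch only: overall satisfaction regressed on selected staff and
# institution characteristics with subject fixed effects (C(subject)
# expands to one dummy per HESA cost centre), in the spirit of Table 9.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nss_course_level.csv")  # hypothetical file

model = smf.ols(
    "overall_score ~ response_rate + doctorate + fixed_term"
    " + degree_completion + four_star_research + C(subject)",
    data=df,
).fit()
print(model.params.head(10))  # first few estimated coefficients
```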


Table 10: Regression to Explain Percent Satisfied Overall

Explanatory variable | Expected sign | (1) | (2) | (3) | (4)
Constant | ? | 58.2861 (7.2337)*** | 68.2507 (4.3682)*** | 49.8597 (4.4784)*** | 48.6494 (6.3265)***
Mode | - | 1.7922 (1.1085) | 1.7680 (0.9014)** | 1.1803 (1.0531) | 1.0762 (1.1731)
Response rate | + | 0.1165 (0.0194)*** | 0.0944 (0.0181)*** | 0.0746 (0.0174)*** | 0.0812 (0.0188)***

Cost centre and institution specific variables:
Total staff | ? | 0.0031 (0.0014)** | 0.0007 (0.0013) | - | 0.0020 (0.0014)
Teaching qualification | + | 0.0024 (0.0109) | -0.0026 (0.0065) | - | -0.0028 (0.0071)
Length of service | + | 0.1752 (0.1060) | 0.0804 (0.0901) | - | 0.1264 (0.0977)
Age of academic staff | ? | -0.0354 (0.0729) | -0.569 (0.0695) | - | 0.0071 (0.0757)
White | ? | 0.0632 (0.0245)*** | 0.0372 (0.0192)* | - | 0.0333 (0.0198)*
Female | ? | 0.0163 (0.0174) | 0.0168 (0.0173) | - | 0.0179 (0.0175)
Doctorate | + | 0.0243 (0.0146)* | 0.0530 (0.0126)*** | - | 0.0369 (0.0136)***
UK nationality | ? | -0.0201 (0.0285) | 0.0410 (0.0246)* | - | 0.0238 (0.0251)
Other EU nationality | ? | -0.0287 (0.0363) | 0.0345 (0.032) | - | 0.0282 (0.0334)
Previously at another HEI | + | -0.0030 (0.0085) | -0.0047 (0.0076) | - | -0.0070 (0.0080)
Previously a student | - | 0.0074 (0.0139) | 0.0082 (0.0116) | - | -0.0038 (0.0120)
Teaching-only | - | 0.0213 (0.0216) | 0.0096 (0.0173) | - | 0.0119 (0.0202)
Staff salary | ? | -0.2147 (0.5261) | -0.7032 (0.4276) | - | -0.9205 (0.4703)*
Professors | + | 0.03727 (0.0319) | 0.0530 (0.0284)* | - | 0.0340 (0.0302)
Part-time staff | - | 0.0197 (0.0163) | -0.00002 (0.0132) | - | -0.0119 (0.0141)
Fixed-term staff | - | 0.0179 (0.0170) | 0.0344 (0.0111)*** | - | 0.0206 (0.0118)*
Student-staff ratio | - | 0.0162 (0.0299) | -0.0440 (0.0296) | - | -0.0256 (0.0304)

Institution-wide variables:
Entry standard | ? | - | - | -0.0091 (0.0060) | -0.0058 (0.0070)
Graduate prospects | + | - | - | 0.1477 (0.0303)*** | 0.1298 (0.0323)***
Good honours | + | - | - | 0.0188 (0.0439) | 0.0209 (0.0481)
Degree completion | + | - | - | 0.2267 (0.0546)*** | 0.1757 (0.0613)***
Green score | + | - | - | -0.010 (0.0114) | -0.0107 (0.0120)
Staff-to-total expenditure | + | - | - | 0.0220 (0.0216) | 0.0312 (0.0231)
Research power | ? | - | - | -0.0341 (0.0176)* | -0.0325 (0.0188)*
4* research | ? | - | - | -0.0997 (0.0339)*** | -0.0931 (0.0378)**
Russell group | + | - | - | -0.9652 (0.5906)* | -2.1940 (0.6600)***
In QS Top 400 | + | - | - | 0.4600 (0.6391) | 1.2161 (0.7015)*
New university | - | - | - | -1.9490 (0.6025)*** | -1.2195 (0.6550)*

N | - | 3895 | 3895 | 4247 | 3730
R2 | - | 0.20 | 0.10 | 0.11 | 0.12
Subject F.E. | - | YES | YES | YES | YES
Institution F.E. | - | YES | NO | NO | NO

Notes: This table reports the results from a set of regressions where the dependent variable is the overall satisfaction score (percentage of students satisfied). Specifications (1) to (4) all use the same dependent variable but vary according to which groups of explanatory variables they contain and whether both subject- and institution-fixed effects or just the former are employed. The observations are courses or course collections at each higher education institution. Standard errors are given in parentheses. *, ** and *** denote significance at the 10%, 5% and 1% levels respectively. The total staff salary parameters and their standard errors are multiplied by 10000 for ease of presentation.


Table 11: Regression to Explain Total Teaching Scores

Explanatory variable | Expected sign | (1) | (2) | (3) | (4)
Constant | ? | 3.5165 (0.1229)*** | 3.7869 (0.0965)*** | 3.4764 (0.1013)*** | 3.5285 (0.1450)***
Mode | - | 0.0691 (0.0258)*** | 0.0861 (0.0217)*** | 0.0607 (0.0250)** | 0.0555 (0.0274)**
Response rate | + | 0.0045 (0.0004)*** | 0.0037 (0.0004)*** | 0.0035 (0.0004)*** | 0.0036 (0.0004)***

Cost centre and institution specific variables:
Total staff | ? | 0.3789 (0.2970) | 0.1727 (0.3057) | - | 0.2393 (0.3080)
Teaching qualification | + | 0.0001 (0.0002) | -0.0002 (0.0001) | - | -0.0003 (0.0002)*
Length of service | + | 0.0040 (0.0025)* | 0.0004 (0.0021) | - | 0.0021 (0.0023)
Age of academic staff | ? | -0.0021 (0.0017) | -0.0009 (0.0016) | - | -0.0001 (0.0017)
White | ? | 0.0015 (0.0006)*** | 0.0014 (0.0004)*** | - | 0.0012 (0.0005)***
Female | ? | 0.0001 (0.0004) | 0.00001 (0.0004) | - | -0.00003 (0.0004)
Doctorate | + | 0.0001 (0.0003) | 0.0006 (0.0003)** | - | 0.0005 (0.0003)
UK nationality | ? | 0.3450 (0.6653) | 6.6543 (5.8445) | - | 5.1057 (5.8614)
Other EU nationality | ? | -0.0006 (0.0008) | 0.0007 (0.0008) | - | 0.0004 (0.0008)
Previously at another HEI | + | -0.2490 (1.8522) | -0.2858 (1.6848) | - | 0.0882 (1.7660)
Previously a student | - | 0.0006 (0.0003)* | 0.0004 (0.0003) | - | 0.0002 (0.0003)
Teaching-only | - | -0.0003 (0.0004) | 0.0001 (0.0004) | - | -0.0003 (0.0004)
Staff salary | ? | -0.0102 (0.0121) | -0.0257 (0.0103)** | - | -0.0293 (0.0116)**
Professors | + | 0.0012 (0.0007)* | 0.0016 (0.0006)** | - | 0.0011 (0.0007)*
Part-time staff | - | 0.0004 (0.0004) | 0.00002 (0.0003) | - | -0.0001 (0.0003)
Fixed-term staff | - | 0.0006 (0.0004)* | 0.0010 (0.0003)*** | - | 0.0009 (0.0003)***
Student-staff ratio | - | -0.0005 (0.0007) | -0.0016 (0.0007)** | - | -0.0014 (0.0007)**

Institution-wide variables:
Entry standard | ? | - | - | -0.00002 (0.0001) | 0.000001 (0.0002)
Graduate prospects | + | - | - | 0.0031 (0.0007)*** | 0.0029 (0.0007)***
Good honours | + | - | - | 0.0005 (0.0010) | 0.0002 (0.0001)
Degree completion | + | - | - | 0.0026 (0.0013)** | 0.0014 (0.0015)
Green score | + | - | - | -0.0004 (0.0003) | -0.0002 (0.0003)
Staff-to-total expenditure | + | - | - | 0.0002 (0.0005) | 0.0004 (0.0005)
Research power | ? | - | - | 0.0002 (0.0003) | 0.0007 (0.0004)
4* research | ? | - | - | -0.0027 (0.0008)*** | -0.0027 (0.0008)***
Russell group | + | - | - | -0.0369 (0.0133)*** | -0.0485 (0.015)***
In QS Top 400 | + | - | - | 0.0146 (0.0145) | 0.0206 (0.0161)
New university | - | - | - | -0.0248 (0.0141)* | -0.0144 (0.0151)

N | - | 3895 | 3895 | 4247 | 3730
R2 | - | 0.29 | 0.20 | 0.19 | 0.21
Subject F.E. | - | YES | YES | YES | YES
Institution F.E. | - | YES | NO | NO | NO

Notes: This table reports the results from a set of regressions where the dependent variable is the teaching satisfaction score (Likert scale). Specifications (1) to (4) all use the same dependent variable but vary according to which groups of explanatory variables they contain and whether both subject- and institution-fixed effects or just the former are employed. The observations are courses or course collections at each higher education institution. Standard errors are given in parentheses. *, ** and *** denote significance at the 10%, 5% and 1% levels respectively. The total staff, UK nationality, previously at another HEI, and staff salary parameters and their standard errors are multiplied by 10000 for ease of presentation.


Table 12: Regression to Explain Percent Satisfied with Teaching

Explanatory variable | Expected sign | (1) | (2) | (3) | (4)
Constant | ? | 65.4135 (4.6596)*** | 74.0975 (3.1747)*** | 57.6807 (3.2497)*** | 57.9620 (4.6500)***
Mode | - | 2.1206 (0.8173)*** | 2.1692 (0.6740)*** | 1.7750 (0.7938)** | 1.8051 (0.8617)**
Response rate | + | 0.1120 (0.0140)*** | 0.1026 (0.0130)*** | 0.0829 (0.0125)*** | 0.0926 (0.0134)***

Cost centre and institution specific variables:
Total staff | ? | 0.0018 (0.0009)* | 0.0006 (0.0009) | - | 0.0011 (0.0009)
Teaching qualification | + | -0.0039 (0.0008) | -0.0070 (0.0046) | - | -0.0094 (0.0050)*
Length of service | + | 0.1135 (0.0773) | 0.0015 (0.0665) | - | 0.0911 (0.0716)
Age of academic staff | ? | -0.0147 (0.0513) | -0.0373 (0.0485) | - | 0.0140 (0.0526)
White | ? | 0.0467 (0.0175)*** | 0.0411 (0.041)*** | - | 0.0378 (0.0144)***
Female | ? | 0.0008 (0.0129) | -0.0010 (0.0130) | - | -0.0039 (0.0131)
Doctorate | + | 0.0064 (0.0104) | 0.0291 (0.0090)*** | - | 0.0162 (0.0010)*
UK nationality | ? | 0.0033 (0.0210) | 0.0378 (0.0183)** | - | 0.0255 (0.0184)
Other EU nationality | ? | -0.0145 (0.0261) | 0.0149 (0.0237) | - | 0.0099 (0.0241)
Previously at another HEI | + | 0.0020 (0.0059) | 0.0006 (0.0054) | - | 0.0020 (0.0055)
Previously a student | - | 0.0165 (0.0094)* | 0.0118 (0.0080) | - | 0.0055 (0.0083)
Teaching-only | - | -0.0027 (0.0013) | 0.0107 (0.0115) | - | -0.0028 (0.0126)
Staff salary | ? | -0.4031 (0.3581) | -0.9214 (0.3061)*** | - | -1.1801 (0.3417)***
Professors | + | 0.0376 (0.0210)* | 0.0594 (0.0195)*** | - | 0.0432 (0.0204)**
Part-time staff | - | 0.0208 (0.0113)* | 0.0050 (0.0093) | - | 0.0018 (0.0101)
Fixed-term staff | - | 0.0184 (0.0118) | 0.0287 (0.0079)*** | - | 0.0253 (0.0085)***
Student-staff ratio | - | 0.0002 (0.0210) | -0.0478 (0.0213)** | - | -0.0298 (0.0216)

Institution-wide variables:
Entry standard | ? | - | - | -0.0035 (0.0044) | -0.0043 (0.0050)
Graduate prospects | + | - | - | 0.1265 (0.0224)*** | 0.1159 (0.0235)***
Good honours | + | - | - | 0.0352 (0.0325) | 0.0256 (0.0348)
Degree completion | + | - | - | 0.1418 (0.0407)*** | 0.1177 (0.0468)**
Green score | + | - | - | -0.0075 (0.0084) | -0.0045 (0.0087)
Staff-to-total expenditure | + | - | - | 0.0229 (0.0150) | 0.0249 (0.0158)
Research power | ? | - | - | -0.0126 (0.0131) | -0.0006 (0.0138)
4* research | ? | - | - | -0.1040 (0.0252)*** | -0.0863 (0.0278)***
Russell group | + | - | - | -0.5906 (0.4218) | -1.1352 (0.4743)**
In QS Top 400 | + | - | - | 0.3818 (0.4663) | 0.7603 (0.5136)
New university | - | - | - | -0.7923 (0.4410)* | -0.4081 (0.4711)

N | - | 3895 | 3895 | 4247 | 3730
R2 | - | 0.30 | 0.21 | 0.21 | 0.23
Subject F.E. | - | YES | YES | YES | YES
Institution F.E. | - | YES | NO | NO | NO

Notes: This table reports the results from a set of regressions where the dependent variable is the percentage of students satisfied with the teaching on their course. Specifications (1) to (4) all use the same dependent variable but vary according to which groups of explanatory variables they contain and whether both subject- and institution-fixed effects or just the former are employed. The observations are courses or course collections at each higher education institution. Standard errors are given in parentheses. *, ** and *** denote significance at the 10%, 5% and 1% levels respectively. The total staff salary parameters and their standard errors are multiplied by 10000 for ease of presentation.
