PM59 Information for CIP

Buckingham, Jennifer

CIS Policy Monograph 59

Sydney: The Centre for Independent Studies

About the Author

Jennifer Buckingham is a policy analyst at The Centre for Independent Studies.

About the Paper

There is very little information available to parents or the general public about the performance of government and non-government schools in Australia. In this monograph Jennifer Buckingham calls for a consistent, fair and meaningful system of reporting and publishing this material, which will enable parents to make informed choices about which school is best for their child. It will also serve to keep schools accountable not only to the government (as is currently the case), but also to the public whose taxes sustain the public education system.

Critics of school reporting argue that it would create unhealthy competition between schools and disadvantage lower performing schools. However, a school identified as performing badly can then be recognised and given assistance accordingly, for the benefit of its teachers and, ultimately, of its students' education.

Buckingham outlines the current school performance testing measures in all Australian states, from primary to secondary school. She explores the principles of school reporting and the factors which need to be considered: the appropriate academic (numeracy and literacy) indicators to be used; a school’s ‘value-added’ analysis (how much a school improves its students’ performance); socioeconomic differences between schools; the inclusion of ‘uncertainty intervals’ (a range of scores within which a school’s true performance can be expected to fall); and measuring the overall performance of a school, not only academic, but also physical and social provisions.

School reporting information is already made available to the public in other countries, and Buckingham outlines the different systems used in England, the United States and New Zealand.

The government has collected a mass of information on students and schools for over a decade. There would be no added expense of requiring more testing, and Buckingham proposes a way to present and organise this information to be published for general use. It will help lay the groundwork for reforms for fully funded school choice and provide the impetus to improve schools that are operating at substandard levels.

Approx. 20,000 words.

Introduction

It is highly unusual for any person or organisation to conceal information that reflects on them favourably. This is no less true of governments. What, then, are we to make of the fact that almost no information is available to parents or the public about what goes on in schools in Australia? The reasonable assumption to make is that there is something to hide.

New South Wales is particularly secretive, but all states and territories are guilty to some extent. On the basis of information currently available in NSW, for example, one might reasonably conclude that the only good schools are selective state schools, a number of independent schools in Sydney, and a handful of comprehensive state schools scattered across the state. The annual jostling of a couple of dozen metropolitan schools for a position in the top Higher School Certificate rankings means nothing to the parents and students in the other 2,000 or so schools in the state.

For parents, it is not a question of whether North Sydney Girls High achieves more HSC distinctions than Abbotsleigh College. They want to know that the schools in their area are meeting appropriate standards, and which school is best suited to their child’s abilities, interests and aspirations. Information that would answer these questions is denied them. This is not because the information does not exist, but because it is kept secret.

Purportedly, this confidentiality is to protect schools from unfair comparisons, because different schools have different capacities to achieve. In reality, it allows low performing schools to provide substandard education without scrutiny and fosters a culture of low expectations.

A consistent, fair and meaningful system of reporting on school performance in all Australian states and territories is long overdue. Schools, school systems and the people responsible for them must no longer be protected at the expense of the students in them.

This is the third paper published by The Centre for Independent Studies arguing for greater public accountability for publicly funded education. Ken Gannicott in Taking Education Seriously (1998), and Alison Rich in Beyond the Classroom (2000), described the way in which public reporting on school performance has been successfully opposed by interest groups, explained the inadequacies of current reporting arrangements, and provided the rationale for greater accountability and transparency.1 This paper will, in addition to updating and further developing the arguments made by Gannicott and Rich, propose a way for performance reporting to proceed.

In writing this paper I take the risk of alienating purists on both sides of the debate. On one side is the belief that schools should be tightly controlled and regulated by governments, but that the outcomes of schooling are immeasurable and relative and should not be evaluated. On the other side is the belief that schools should be given absolute freedom to educate and that parents should be given absolute freedom to choose between them. The latter side views the setting of standards and the assessment of proficiency as unwarranted and unhelpful impositions of the state.

My concern is less for ideological purity than for seeking reforms that are most likely to advantage students and families in particular, but also good teachers, with the fewest possible exceptions. In doing this, I attempt to establish the middle ground by arguing that testing is important, not just for individual diagnostic purposes, but so that effective teaching strategies can be recognised. These tests should be common to all schools so that underperforming schools can be identified, and so that parents can make informed choices about the best school for their child. For the latter, results must be published.

I also contend, however, that the accountability component should be primarily in the hands of parents and the communities that schools serve. These are the people whose prospects and quality of life are most at stake. Ultimately, for performance reporting to have its greatest impact, people must be able to exercise choice in schooling—a fundamental parental responsibility currently unavailable to many families, but which could and should be.

Chapter 1: The case for performance reporting

Few, if any, of the arguments made here for accountability of schools through the publication of performance indicators are new, but bear repeating as an introduction to the proposals and recommendations made later.2 Each of the arguments is strong and each is sufficient to justify public accountability for school performance. Together they make an incontrovertible case.

1. Public funding behoves public accountability

School education is subsidised through taxation because it is considered a public good. That is, educating children benefits a whole society. Public funding, however, does not come without conditions. Parents and the public are entitled to know whether the education they are providing through their taxes meets their expectations of quality.

Governments have taken over this role by proxy. State and non-government schools are required to report to state and Commonwealth governments on various aspects of schooling, including progress on literacy and numeracy benchmarks. In addition, all state schools are required to provide a minimal amount of educational performance data in their annual reports, which are distributed to parents and available to members of the public on request. Some, but not all, non-government schools also do this.

Most reporting, however, is based on self-evaluation rather than external assessment, and is not provided in a form that allows direct comparisons between schools. For example, the recent Review into Non-Government Schools in NSW (the ‘Grimshaw Review’) found that ‘there is no comprehensive process in place for reporting on educational performance that is applied consistently across the government and non-government school sectors’.3

While it is therefore not true to say that schools are altogether unaccountable, the balance of accountability is certainly weighted in the wrong direction. Schools are primarily accountable to the government rather than to the people who pay for them and whose children they have been entrusted to educate.

Governments, in turn, have made little or no effort to pass this information about schools on to parents. Indeed in NSW, the state government passed legislation that expressly prohibits it. This ban on the publication of school level results has been attributed to the influence of teachers’ unions,4 but the fact remains that governments have been complicit and they are ultimately responsible. Successive state and territory governments have either explicitly or implicitly endorsed the position, arguably to avoid scrutiny of their own performance.5 Melbourne University Professor Peter Cuttance believes that ‘this lack of accountability at the system level is one of the main reasons why some students fail to achieve basic standards in literacy and numeracy at school’.6

There is an equally deplorable lack of public information about the performance of non-government schools. But non-government schools are at least subject to one type of parental accountability not found in public schools: the market. Non-government schools are dependent on the fees that parents pay and it is in their interests to ensure that parents are satisfied, an imperative not found in state schools.

This is not to say that non-government schools should be exempt from public accountability. It is impossible to argue that state schools should be accountable for the use of public funds, and then in the next breath say that this same principle does not apply to non-government schools. There is, however, a question of proportion. If a non-government school receives only a fraction of the per student funding that is allocated to state schools, its accountability requirements should be proportionate.7

If non-government schools should be accountable for their use of public funds, then the reverse is also true: state schools should be open to the same parental power as non-government schools, achieved through student-centred funding. All parents should have the option of exit, and be able to take their funding entitlement with them to the school they prefer, state or non-government.

2. Fairness to schools and to students

Performance data are confidential ostensibly to avoid ‘unfair’ comparisons between schools, yet this is precisely the reason they should be made available. It is only with good information that the true value of a school can be revealed, and longstanding stereotypes and assumptions debunked.

Not all comparisons are unfair. When assessing the performance of a school, raw results on external tests are only part of the picture. There is a great deal of variation between schools in the abilities and motivations of their students, some of which is beyond the schools’ control. Performance reporting systems can be designed to take into account such factors as the achievement levels of students when they come to the school (value-added analysis) and the level of social and economic disadvantage in the school community. This will be elaborated later.
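To make the value-added idea concrete, here is a minimal sketch in Python. The data, the number of schools and the single-predictor regression are illustrative assumptions of mine, not any education department's actual method: a school's 'value added' is estimated as the average amount by which its students' exit scores exceed what their entry scores alone would predict.

```python
# Minimal value-added sketch with simulated data. A school's "value added"
# is the mean residual of its students after regressing exit scores on
# entry scores. Real systems use richer models and more covariates.
import numpy as np

rng = np.random.default_rng(0)

n = 300
schools = rng.integers(0, 10, size=n)       # 10 hypothetical schools
entry = rng.normal(50, 10, size=n)          # e.g. Year 3 score
true_effect = rng.normal(0, 2, size=10)     # per-school effect (unknown in practice)
exit_ = 0.8 * entry + 12 + true_effect[schools] + rng.normal(0, 5, size=n)

# Fit exit ~ entry across all students (ordinary least squares).
slope, intercept = np.polyfit(entry, exit_, 1)
residual = exit_ - (slope * entry + intercept)

# Value added per school: mean residual of its students.
for s in range(10):
    print(f"School {s}: estimated value added = {residual[schools == s].mean():+.2f}")
```

On this view, a school full of high raw achievers can still show low value added, and a school with modest raw results can show high value added, which is precisely why raw scores alone mislead.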

But it is important not to allow schools to be victims of a self-fulfilling prophecy. Low expectations lead to low performance. If we assume that all disadvantaged areas have low-performing schools, that all low-performing schools are in disadvantaged areas, and behave as though this is inevitable, the assumption is more likely to be confirmed.

Such assumptions are extremely unfair not only to schools in disadvantaged areas, but also to students in underperforming schools in more affluent areas, who risk being overlooked. It is also pertinent to selective schools, which are the beneficiaries of large amounts of public funding and many of the best teachers, but have yet to demonstrate they educate clever students better than comprehensive schools.

Likewise, it is generally assumed that comparative data will favour non-government schools. While there exists some data to support this, much of this belief rests on reputation and is therefore highly subjective. Without proper, precise performance measurement and reporting, stereotypes will remain unchallenged.

3. Sharing the burden of failure and the enjoyment of praise

If the confidentiality of performance data is in the name of protection, who is protected? Certainly not students, for it is students who suffer most when education is of poor quality.

How well students do at school and how much they learn is influenced by many factors, but there are three primary players: students themselves, their teachers, and the managers of school systems, including the developers of curriculum and assessment. Yet one group alone bears the full consequences of a poor education—the students.

A poor education has the potential to affect a child’s entire life; from poor literacy to the lack of opportunity for higher education, from employment prospects to income and beyond. In contrast, teachers and bureaucrats go on doing their jobs regardless, often moving through the ranks of position and salary without consideration of the positive or negative effect they may have had on the students for whom they were responsible.

This is clearly iniquitous. The slogan ‘If you can read this, thank your teacher’ is only part of the story. Too often the addendum is ‘If not, it’s your own fault’. Teachers and bureaucrats are credited with success but rarely held responsible for failure. As acknowledged earlier, some things are beyond schools’ control, but this is not sufficient reason to prevent attempts to create a fairer distribution of the burden.

4. Incentives for improvement and rewards for excellence

School reporting has been criticised on the basis that it might result in the abandonment of schools identified as low-performing. Melbourne University Professor Richard Teese claims that publicly naming underperforming schools would not be helpful, that it would only damage poor schools and lead to falling enrolments. But Teese also acknowledges that ‘if parents knew about [the high failure rates in some schools] there would be a lynching mob’, providing support for the argument that parents ought to be empowered with the information necessary to effect change.8

It is all very well for governments and schools themselves to know their failings, but according to Lance Izumi and Williamson Evers, it is ‘only when citizens can find out how their tax dollars are being used are they in a good position to demand change’. Unless people know about poor performance, there is neither reason nor impetus for them to agitate for improvement, and without this public pressure reform is slow.

Falling enrolments in some schools are indeed a possibility. There are several possible scenarios when a school is identified as unsatisfactory, but the outcomes are essentially the same.

Scenario One: The school raises its standards in response to declining rolls and public pressure. Students remaining in the school are better off and the school will eventually improve enrolment levels.

Scenario Two: The school has endemic problems that it cannot resolve alone and intervention from its governing agency is required over several years. If the school improves, the students will be better off. If the school fails to improve, scenario three comes into force.

Scenario Three: The school is either ‘reconstituted’ with a new principal and new staff or closes down completely. Students are better off.

It can be lamentable when a school closes down. Schools inspire sentimental attachment and have ties to families and communities. Yet it is far more regrettable to have students trapped in a failing school. The alternative to the above scenarios, one which exists right now, is for thousands of students to pass through a low-performing school, often unaware of the extent of their disadvantage. Furthermore, the funding keeping these schools going is unavailable to good schools.

A much-used but nonetheless good example is Mount Druitt High School in Sydney’s west, which was identified by The Daily Telegraph as having no student achieving a Tertiary Entrance Rank greater than 45 (out of a possible 100). The way in which this school’s problems were revealed was undoubtedly distressing for the families involved, but the end result has been very positive. Public outcry led to a government inquiry into the apparently chronic low performance levels in this school and students in the Mount Druitt area now have a wider array of educational opportunities.9 If better information had been available earlier, it is likely that even much of the short-term damage could have been avoided.

Even in the extreme case of school closure, it does not necessarily mean bringing in the bulldozers. The buildings might be used for a new school, which previous students can attend if they wish. The school has a fresh start and a greater potential for success.

5. Justification for spending

Demands for increased spending on education are incessant. Poor performance by schools and school systems is regularly blamed on lack of resources, and the quality of school systems is measured by inputs such as per pupil expenditure and pupil-to-teacher ratios. These measures are poor proxies for performance information. In an analysis of public accountability, Louise Watson argues that ‘In a service industry like schooling, where effectiveness is reflected in students’ educational achievements, financial data do not provide any indication of the performance of schools or systems.’10

Resources are important up to a point. Since schools take responsibility not only for education, but also for the care and safety of children while in school, there are certain necessities, including safe and comfortable facilities, and sufficient staff for instruction, supervision and administration. Yet beyond a certain level, there is very little evidence of a simple or direct relationship between funding and achievement. That is, there is no reason to believe that increasing an existing school’s funding by 20% will have the effect of increasing the quality of education by the same amount.

In fact, some of the most frequent and largest funding increases are spent in ways that have little effect on the quality of education, such as unconditionally raising the award wages of teachers. Even specific funding increases for the purpose of improving educational outcomes, such as new literacy and numeracy programmes, or special funding for schools classed as disadvantaged, are rarely subjected to any evaluation. It may be too much to expect perfect correlation between funding and increased achievement in these areas, but there is presently little information at all that would provide justification for increased spending within the current system.

Measuring the outcomes of schooling accurately is indeed an intricate process. Some social outcomes are immeasurable, and even the cognitive outcomes that are seemingly most conducive to measurement, such as literacy and numeracy skills, require quite sophisticated analysis to satisfy conditions of fairness and equivalence.

While some may see this as reason to give up trying,11 others like US researcher Andrew Rotherham emphasise that ‘complicated does not mean futile’.12 Ken Gannicott argues even more strongly, saying that ‘it is sheer cant to pretend that useful conclusions about schools cannot be drawn if we have only limited information about their performance’.13

6. Informed choice

There is a great deal of empirical evidence supporting the educational benefits of giving parents the responsibility, the freedom and the means to choose the best school for their child.14

But if these educational benefits are to be realised, at least two preconditions must be established. One precondition is that schools must have the ability to effectively and efficiently respond to the needs of the children who come to them. This will be discussed in a forthcoming paper. Another precondition is good public information about schools. Choice will be universally advantageous only if it is informed choice.

This is not an argument for league tables per se. Rather, it is to allow the publication of academic achievement data as part of a rich array of information about individual schools. A detailed proposal for this is presented later in this monograph. But whatever its intended use, information that allows schools to be ranked will likely be used as such.

League Tables

A league table is a ranking of schools on a measure of their performance. There can be as many tables as there are school performance measures, and they simply provide a summary of how schools compare with each other: a school might rank highly on one measure but only average on another. Are league tables a good or a bad thing? It depends largely on whether the information used to create them is reliable, consistent and a fair representation of schools’ performance. The sorts of measures that might be used and how they might be presented are discussed in Chapters 3 and 5.

Much has been written in support and criticism of league tables, particularly in England, where annual league tables have been published since 1993. The English experience will be discussed in more detail in Chapter 4.

Opinion in Australia is similarly divided, with opponents outnumbering advocates. Teacher unions are vehemently opposed. Most state and territory governments are not interested in publishing comparative school information,15 and even parent groups oppose any sort of school comparison.

A document published by the Australian Council of State School Organisations (ACSSO) and Australian Parents Council (APC) in 1996 contains the following statements:

Assessment data must not be used for the purpose of establishing and publishing competitive judgments about schools/systems/states or territories.16

It is the contention of this paper that ethical and human considerations override the rationalist and economic.17

This statement represents what seems to be a widespread belief—that identifying poorly performing schools is immoral and can do only harm, as though the students in such schools will be fine so long as their plight is kept secret.

The fear of league tables is so great that schools in NSW are denied by law access to University Admissions Index (UAI) data for their students, just to avoid any possibility of the information becoming public. The Association of Heads of Independent Schools of Australia (AHISA) has recently requested that the data be made available to schools, on the grounds that this information would help schools to assess their own performance and devise ways to improve in the future.18 A high UAI is not the be-all and end-all, but a difference of half a percentage point can mean missing out on the university course of choice. But even AHISA is ‘wary of the information being used to create league tables’, describing the UK situation as ‘appalling’.19

Professor Ken Rowe of the Australian Council for Educational Research has been critical of league tables for two reasons. First, he has strong reservations about the validity of the statistical methods often used, and the indicators generated. Extensive research by Rowe and colleagues on the VCE Data Project has demonstrated the importance of ‘the identification of major sources of within- and between-school variation’ and therefore the use of statistical models that take this into consideration.20 That is, to separate the sources of difference in student achievement into those at the individual level, at the class level and at the school level. Second, Rowe is unconvinced of the merits of public accountability for school performance and allowing parents to make choices based on this information.21
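Rowe's first concern can be illustrated with a toy variance decomposition. The sketch below, using invented balanced data, partitions the variance in student scores into school-level, class-within-school and student-within-class components; the VCE Data Project's actual analyses use formal multilevel models rather than this naive partition.

```python
# Naive variance decomposition of student scores into school, class and
# student components (balanced design). Illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
rows = []
for school in range(20):
    school_eff = rng.normal(0, 3)          # between-school variation
    for klass in range(3):
        class_eff = rng.normal(0, 2)       # between-class, within school
        for _ in range(25):
            rows.append((school, klass, 60 + school_eff + class_eff + rng.normal(0, 8)))

df = pd.DataFrame(rows, columns=["school", "class", "score"])

grand = df["score"].mean()
school_means = df.groupby("school")["score"].transform("mean")
class_means = df.groupby(["school", "class"])["score"].transform("mean")

between_school = ((school_means - grand) ** 2).mean()
between_class = ((class_means - school_means) ** 2).mean()
within_class = ((df["score"] - class_means) ** 2).mean()
total = between_school + between_class + within_class

print(f"school-level share:  {between_school / total:.0%}")
print(f"class-level share:   {between_class / total:.0%}")
print(f"student-level share: {within_class / total:.0%}")
```

Typically the student-level share dominates, which is the statistical basis for Rowe's caution about reading too much into small between-school differences.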

Rowe does not oppose student assessment and the expert evaluation of performance data, and acknowledges ‘the obvious benefits arising from open access to accurate and reliable information’.22 His concerns are predominantly about ‘the need for caution in generating potentially invalid and misleading information’23 so as to avoid unfair judgement of schools and adverse consequences.

An opposite opinion comes from Professor Ken Gannicott, formerly of the University of Wollongong. Gannicott argues that the academic focus of league tables is entirely justified, because ‘whatever the variety of objectives they pursue, all schools have as their central purpose the academic development of their students’.24

Other educationists who promote the value of school performance reporting, such as Peter Cuttance and Ken Boston, are often non-committal on the issue of league tables specifically. This is possibly because ‘league table’ has become a contaminated term and has negative connotations, out of proportion with such a table’s true nature. League tables are perhaps an inevitable part of school performance reporting, but the issue should not stand or fall on them.

Whatever their problems or merits, league tables are unlikely to be the only source of information parents use, precisely because they recognise what many educationists seem to believe parents are incapable of understanding—each school and each child is different, and a single measure of performance does not fully account for this. According to research conducted by Professor Peter Cuttance and Shirley Stokes, then of the University of Sydney, parents are rarely interested solely in the academic aspect when they are choosing or evaluating a school. Even so, they ‘do need information that allows them to make useful comparisons among schools about what they provide and how they achieve their goals for students in the areas that are of most interest to them’.25

League tables are of interest to many people, not just parents. For this reason if no other they are useful, by drawing attention to what goes on in schools and what comes out of them.

7. People want more information about schools

Work done by Peter Cuttance and Shirley Stokes for the Commonwealth Department of Education, Training and Youth Affairs is mentioned frequently throughout this monograph. Cuttance and Stokes described the sorts of information that parents need and want about the performance of their own children and about the schools they attend.

They did not, however, attempt to quantify the demand for such information, either among parents or in the community. The Centre for Independent Studies developed an internet survey question with ACNielsen to gauge the support for school performance reporting beyond the traditional forums of parent and teacher organisations. The results are presented below, with details in Appendix A.

To the suggestion that Departments of Education should release information about the academic performance of individual schools, three in five respondents (61%) thought this was a good or very good idea. Less than a quarter of respondents (23%) thought it was a bad or very bad idea, with only 7% in the latter category. Some 16% had no strong feelings on the matter.26

Source: Internet survey conducted by ACNielsen, August 2003 (N=467)

Chapter 2: Testing and Reporting in Australian States and Territories

School students in all states and territories undertake a variety of external tests and examinations. In some states, such as NSW, external assessments are made intermittently from Year 3 (approximate age 8) to Year 12 (approximate age 17). There is therefore no need to increase the testing and examination of students, but simply to make the results of the existing tests available to the public in a useful and appropriate way. These tests may not allow comparison of individual schools in different states, but it is hard to imagine why that might be necessary.

It is difficult to obtain information about school performance, but almost as difficult to find out exactly what information is collected by governments and how it is disseminated. What follows is a brief summary of the external testing and public reporting processes of each Australian state and territory. The structure of schooling varies slightly across Australia—in some states, primary schools extend to Year 7 and not all states offer a year of school prior to Year 1 of primary school.

Details of assessment prior to Year 3 are provided here as a matter of interest, to show the extent and range of testing being done, and to demonstrate that it is possible to make use of existing data rather than applying new tests. It is not suggested that results of testing in the early years of primary school be published, but rather that they might be used in value-added analysis of later performance results.

New South Wales

Testing

Students in all state schools and most non-government schools take part in state-wide external tests of literacy and numeracy in Years 3, 5 and 7.

In 2000, 95% of government school students participated in the Basic Skills Tests, as well as 90% of non-government school students.27 Although literacy and numeracy assessment in Year 8 is voluntary, 98% of state schools chose to re-test literacy and 90% chose to re-test numeracy.28

Primary School

Year 3: Basic Skills Tests (BST) in literacy and numeracy

Primary Writing Assessment

Year 5: Basic Skills Tests (BST) in literacy and numeracy

Primary Writing Assessment

Year 6: Computer Skills Assessment (as of 2003)

Secondary School

Year 7: English Language and Literacy Assessment (ELLA)

Secondary Numeracy Assessment Program (SNAP)

Year 8: Optional re-testing of ELLA & SNAP.

Year 10: School Certificate examinations in English, Maths, Science, and Civics & Citizenship

Year 12: Higher School Certificate

The School Certificate is awarded by the Board of Studies at the end of Year 10 to students who have passed assessment requirements in the mandatory courses of study. Assessment includes state-wide external examinations.

The senior school qualification in NSW is the Higher School Certificate, which is awarded to students by the NSW Board of Studies on satisfactory completion of an approved programme of studies over Years 11 and 12. Students’ final scores are a combination of school-based assessment and an external examination. These scores are combined and scaled to calculate a University Admissions Index (UAI), which is used to allocate places in tertiary education institutions.
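As a rough sketch of what 'combined and scaled' means in practice (the real UAI calculation involves inter-subject scaling and moderation of school assessments against exam marks; the 50/50 weighting and rank-based index below are my simplifying assumptions):

```python
# Toy version of a UAI-style pipeline: combine a moderated school mark with
# an external exam mark, then express the aggregate as a rank-based index.
# The weighting and the ranking step are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(2)
school_marks = rng.normal(70, 12, size=1000).clip(0, 100)
exam_marks = rng.normal(65, 15, size=1000).clip(0, 100)

combined = 0.5 * school_marks + 0.5 * exam_marks   # assumed 50/50 combination

# Rank-based 0-100 index: the share of candidates at or below each student.
ranks = combined.argsort().argsort()
index = 100 * (ranks + 1) / len(combined)

print(f"median index: {np.median(index):.1f}")   # ~50 by construction
```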

Reporting

Despite all this testing, and the data on school performance it yields, very little useful information is reported to the public. Highly aggregated statistics on the proportion of children achieving various levels of proficiency are provided, but no information that allows the comparison of schools. This is not simply because of the powerful influence of the NSW Teachers’ Federation: state legislation expressly forbids it.

NSW Education Regulation 2001, Section 5, prohibits the release of results of tests that could be used to ‘rank or otherwise compare schools’. For this reason, school-by-school results on the New South Wales Basic Skills Tests (BST) are not published.

This would seem to be the nail in the coffin, yet the NSW DET appears to be selective in its enforcement of the regulation. The regulation relates specifically to BST data, but, according to the NSW Audit Office, ‘DET advises that it is indicative of the Government’s overall objectives for annual school reporting’.29 Each year, however, the Department releases a list of the ‘distinguished achievers’ in each school. Predictably, newspapers have used this information to create ranked merit lists of schools according to the proportion of these students among HSC candidates in the school—a highly misleading statistic (see Box). One might suspect that the release of this information (and the lack of strong objection from the NSW Teachers’ Federation) was motivated by this particular statistic’s positive representation of state schools.

When it comes to the results of basic skills tests, the regulation is strictly observed. No school-by-school results are available as a data set. Each school receives only its own results against state averages. These data are included in the school’s annual report, which is eventually distributed to parents and is available to interested persons on request.

‘Distinguished Achievers’

The only publicly available quantitative information on the achievements of individual schools in NSW is selected results of the Higher School Certificate. The Department of Education releases the names of students who have achieved scores of more than 90% in any HSC course, which they call ‘Distinguished Achievers’. The number of Distinguished Achievers in each school is also released, along with the proportion of the schools’ HSC candidature this represents.

These statistics are almost meaningless for the following reasons:30

1. The designation of Distinguished Achievement does not reflect the difficulty of the courses attempted. A school with twenty students scoring over 90% in the easiest maths course would rank equal to a school with twenty students scoring over 90% in the hardest.

2. Course scores of more than 90% do not predict a student’s overall HSC mark, a scaled score called the University Admissions Index (UAI). It is possible to have a UAI of more than 90 (out of 100) yet fail to make the Distinguished Achievers list at all.

3. There is no indication of the distribution of scores. A school may have a substantial number of students on the Distinguished Achievers list but have even more students performing very poorly (see the sketch following this list).

4. The base-level of achievement is not taken into account. It is to be expected that academically selective schools appear at the top of the rankings. What we do not know is whether these schools have done better than expected with a group of already exceptionally bright and high-achieving students. This requires a value-added analysis.
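To see how point 3 plays out, consider two invented schools with identical proportions of 'Distinguished Achievers' but very different overall results; all marks below are hypothetical.

```python
# Two hypothetical schools: same share of 90+ course scores, very different
# overall distributions. Illustrates why the 'Distinguished Achievers'
# proportion says little about a school's typical student.
import numpy as np

school_a = np.array([92, 91, 90.5, 78, 76, 75, 74, 73, 72, 70])
school_b = np.array([93, 91, 90.5, 55, 48, 45, 42, 40, 38, 35])

for name, marks in [("A", school_a), ("B", school_b)]:
    share = (marks > 90).mean()
    print(f"School {name}: {share:.0%} distinguished, median mark {np.median(marks):.1f}")
# Both schools report 30% 'Distinguished Achievers'; their medians differ
# by almost 30 marks.
```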

Victoria

Testing

All children in government schools in Victoria are assessed for literacy and numeracy ability on entry to the first year of primary school, under the Prep Entry Assessment Program. The Early Years Literacy and Numeracy Programs assess the progress of each child at the end of each school year from Prep to Year 4 against the Statewide Minimum Standards.

Literacy and numeracy testing is conducted in Years 3, 5 & 7. In 2000, 88% of government school students and 92% of non-government school students sat for the Achievement Improvement Monitor (AIM) tests of literacy and numeracy skills in Years 3 and 5.

Primary School

Prep: Prep Entry Assessment Program in literacy and numeracy

Prep-Year 4: Early Years Literacy & Numeracy Program

Year 3: Achievement Improvement Monitor (AIM) in literacy and numeracy

Year 5: Achievement Improvement Monitor (AIM) in literacy and numeracy

Secondary School

Year 7: Achievement Improvement Monitor (AIM) in literacy and numeracy (as of 2003)

Year 10: School-based assessment

Year 11/12: General Achievement Test (GAT)

Victorian Certificate of Education (VCE)

Victorian Certificate of Applied Learning (VCAL)

Students graduating from Year 12 in Victoria receive the Victorian Certificate of Education (VCE). VCE study scores are a combination of school-based assessment and an external curriculum-based examination. These scores are used to calculate the Equivalent National Tertiary Entrance Rank (ENTER).

All VCE students in Year 12 also take the external, non-curriculum General Achievement Test (GAT). This test is used to moderate the school-based assessments of VCE subjects against external assessments and as a counter check for examination scores.

Reporting

For the Early Years Literacy and Numeracy Programs, schools are provided with an analysis of their own data, as well as comparisons to like schools and the state average.

The results of the AIM basic skills tests are not published. AIM results for students and schools are provided to parents against state benchmarks and, from 2003, national benchmarks.

The Victorian State government has introduced a new set of performance indicators for schools, which are made available through publication in major newspapers.

Only secondary schools are included in public performance reporting. The indicators published on a school-by-school basis are:

• Median VCE study scores for each course
• Rates of satisfactory completion
• The percentage of students applying for tertiary entrance
• Students with outstanding achievement in VCE studies (scores greater than 40)
• Post-school education, training and employment destinations

The indicators of academic achievement are raw scores. The previous practice of value-added analysis was abandoned after an evaluation by the Victorian Curriculum and Assessment Authority (VCAA) concluded that the technique did not provide a ‘valid measure of the value added by the school’.31 Tim Brown, Dean of Science at the Australian National University, who devised the value-added technique, disputes this, saying that the VCE Improvement Index and Tertiary Preparation Index ‘gave a more accurate reading of schools' performances, because they predicted how well a school was likely to perform given its student profile’.32

There are no plans to introduce a new system of value-added analysis, nor to provide ‘like with like comparisons’ in the public arena.33 Schools can request these comparisons for their own purposes as part of the VCE Data Service.

Queensland

Testing

Government school students in Queensland are assessed for literacy and numeracy ability in Years 1 to 3 in the ‘Year 2 Diagnostic Net’ programme.

In 2000, 97% of government school students and the same proportion of non-government school students participated in the Years 3, 5 & 7 Testing Programs in literacy and numeracy. A random selection of schools also sits for an ‘Equating Study’, which enables comparisons to be made between the results of tests from different years.
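The simplest textbook version of such equating is linear (mean-sigma) equating, sketched below with simulated cohorts. Queensland's actual Equating Study methodology is not described here and may well differ.

```python
# Linear (mean-sigma) equating: map this year's scores onto last year's
# scale so cohort results can be compared despite tests of differing
# difficulty. Textbook method, illustrative data.
import numpy as np

rng = np.random.default_rng(3)
last_year = rng.normal(52, 9, size=500)    # reference cohort, old scale
this_year = rng.normal(55, 10, size=500)   # new cohort, new test

def equate(x, ref, new):
    """Map a score x from the new test's scale onto the reference scale."""
    return ref.mean() + ref.std() * (x - new.mean()) / new.std()

raw = 60.0
print(f"raw {raw} this year maps to {equate(raw, last_year, this_year):.1f} on last year's scale")
```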

Primary School

Year 1-3: Year 2 Diagnostic Net in literacy and numeracy

Year 3: Year 3 Testing Program in literacy and numeracy

Year 5: Year 5 Testing Program in literacy and numeracy

Secondary School

Year 7: Year 7 Testing Program in literacy and numeracy

Year 10: School-based assessment

Year 12: School-based assessment in Senior Certificate study areas

Queensland Core Skills Test

Studies undertaken towards the Senior Certificate are not externally examined state-wide.34 Student achievement in each subject is assessed by the school. The Queensland Studies Authority (QSA) verifies the quality of the course offered by the school by requiring that each school submit its proposed programme of study for each subject for accreditation. The QSA also moderates schools’ assessments by looking at samples of student coursework.

Students seeking a Tertiary Entrance Statement must also sit for the Queensland Core Skills Test (CST), an externally-examined, statewide, cross-curriculum test.

The Tertiary Entrance Statement reports Field Positions (FPs) for five different study areas (groups of related subjects) and an Overall Position (OP), calculated from the student’s study scores and CST results. FPs and the OP are the scaled rankings of student achievement that determine tertiary entrance.

Reporting

Parents of children in Years 1 to 3 receive a written report of their child’s performance against key indicators of literacy and numeracy ability as assessed by the Year 2 Diagnostic Net.

The results of the Years 3, 5 & 7 Testing Program are not reported publicly on a school-by-school basis. Individual, class and school level analyses are provided only to parents and to schools, against state and national standards.

None of the results of Senior Certificate studies, the CST or the rankings awarded for Tertiary Entrance Statements are reported to the public on a school-by-school basis. The fact that subjects are school-assessed might be seen as a hindrance, but if they are seen as sufficiently rigorous to provide the basis for university offers—something that profoundly affects a student’s future—it is difficult to see why they should not also provide the basis for public reporting of school performance.

Western Australia

Testing

Western Australian students in Years 3, 5 and 7 sit for the state-wide, curriculum-based Western Australian Literacy and Numeracy Assessment Program (WALNA).35 All state schools and most non-government schools participate—around 90% of students in each sector took the tests in 2000.36

Primary School

Year 3: Western Australian Literacy & Numeracy Assessment Program (WALNA)

Year 5: Western Australian Literacy & Numeracy Assessment Program (WALNA)

Year 7: Western Australian Literacy & Numeracy Assessment Program (WALNA)

Secondary School

Year 10: School-based assessment

Year 12: Western Australian Certificate of Education (WACE)

The Western Australian Certificate of Education (WACE) is administered by the Curriculum Council of Western Australia. Continuous assessment in all subjects is school-based, with marks verified and moderated by the Curriculum Council. Students seeking a Tertiary Entrance Rank must also sit for external, state-wide Tertiary Entrance Examinations (TEE) to obtain a combined mark for each TEE subject. The TER is based on these combined marks. Students taking TEE courses do not have to take the external exam if they do not want a TER.37

Reporting

The Western Australian Department of Education and Training (DET) disaggregates WALNA results to the district level, but states that the ‘DET will maintain the confidentiality of school-by-school results’.38 Individual results are reported to parents, showing levels of proficiency against state averages, national benchmarks and the student’s previous WALNA test achievement levels. Teachers and schools receive class and school level results.

It is claimed that the results of students who change schools cannot be provided to their new school by the Monitoring Standards in Education (MSE) unit, because ‘Each school retains sole ownership on past and present WALNA information. WALNA information on both individual students and schools is not distributed to, nor retained by the central office or district offices of the DET.’39 Since individual reports to parents show the progress of the student from one assessment to the next, this claim is difficult to believe, and is an implausible excuse for the lack of school-by-school reporting on literacy and numeracy outcomes. The School Information section of the DET website offers ‘School Profiles’, but these contain no substantial information about teaching and learning in the school, let alone performance data.

By contrast, detailed information about school performance in the WACE is readily available. The WA Curriculum Council website provides ‘School Comparison Statistics’ of achievement in 2000, 2001 and 2002 measured in a number of ways.

For each school:

• TEE participation and graduation rates
• Percentage of students taking four or more TEE subjects achieving scaled scores of 75% or more
• Percentage of students taking three or more wholly school assessed (WSA) subjects achieving an A-grade
• Percentage of students taking one or more Structured Workplace Learning (SWL) subjects achieving an A-grade
• Vocational Education and Training (VET) participation
• Distribution of achievement of students taking four or more TEE subjects compared with state average

Lists of top ten schools:

• For each TEE subject, schools with highest proportion of students achieving scaled scores of 75% or more
• For each WSA subject, schools with highest proportion of students achieving A-grades

These results are also published in two newspapers. The statistics are provided in recognition of media and community interest. There is no value-added analysis or information about student demographics, only the small caveat that ‘it should be noted that researchers into school effectiveness agree that schools account for only about 9 percent of the variance between students. Students’ personal backgrounds account for most of the variance in academic performance.’40

South Australia

Testing

The School Entry Assessment is given to all students beginning the first year at school (Reception), ‘to understand the knowledge and skills children bring to school’. It ‘provides a baseline for learning in the school setting’.41

The statewide Integrated Assessment Program (IAP) tests basic skills in literacy and numeracy of students in Years 3, 5 & 7. All state schools and most non-government schools participate—in 2000 the participation rates were 94% and 93% respectively.42

Primary School

Reception: School Entry Assessment

Year 3: Integrated Assessment Program (IAP) in literacy and numeracy

Year 5: Integrated Assessment Program (IAP) in literacy and numeracy

Year 7: Integrated Assessment Program (IAP) in literacy and numeracy

Secondary School

Year 10: School-based assessment

Year 12: South Australian Certificate of Education

Writing-Based Literacy Assessment (WBLA) - school assessed

The South Australian Certificate of Education (SACE) is completed over Years 11 and 12. All students must pass the school-assessed Writing-Based Literacy Assessment (WBLA), which is verified and moderated by the Senior Secondary Assessment Board of South Australia (SSABSA).

Year 12 subjects are assessed in various ways, with the school-assessed components also verified and moderated by the SSABSA.

1. Publicly-examined subject (PES) marks comprise 50% statewide examination score and 50% school assessment.

2. Publicly-assessed subject (PAS) marks usually comprise 30% external assessment of common task and 70% school assessment.

3. School-assessed subject (SAS) marks are entirely school-assessed.

Tertiary Entrance Ranks are derived from the scaled scores of students’ best five Year 12 subjects.
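The 'best five' rule is mechanical enough to state directly. The sketch below assumes the scaled subject scores have already been computed; it expresses the stated rule, not SSABSA's actual procedure.

```python
# SACE-style aggregate: the sum of a student's best five scaled Year 12
# subject scores. A sketch of the stated rule only.
def tertiary_aggregate(scaled_scores: list[float]) -> float:
    """Sum of the best five scaled subject scores."""
    if len(scaled_scores) < 5:
        raise ValueError("need at least five scalable subjects")
    return sum(sorted(scaled_scores, reverse=True)[:5])

print(tertiary_aggregate([18.5, 16.0, 14.2, 19.1, 12.8, 15.5]))  # 83.3
```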

Reporting

There is no school-by-school public reporting of the results of the Integrated Assessment Program. Parents are provided with a report showing their child’s results against state average proficiency levels, but not national benchmarks.43 More detailed analyses of individual and school level results are provided to schools, and schools are expected to show school-level results against state-level results graphically in their Annual School Reports.44

No comparative school level results for achievement in SACE or TERs are made available to the public. SSABSA provides performance data to schools, which state schools are then required to publish in their annual reports. A merit list of students with SACE achievement scores of 20 out of 20 is released to the media but this does not include names of schools attended.45


Tasmania

Testing

All state schools in Tasmania administer a testing programme called Performance Indicators for Primary School (PIPS) to Prep (Pre-Year 1) children in May and October, to diagnose and assess the level of their abilities and to monitor their progress.

All state schools and most non-government schools participate in the Statewide Monitoring Program, which tests literacy and numeracy in Years 3, 5, 7 and 9. In 2000, 96% of state school students and 95% of non-government school students were assessed.46

Primary School

Prep: Performance Indicators for Primary School (PIPS)

Year 3: Statewide Monitoring Program in literacy and numeracy

Year 5: Statewide Monitoring Program in literacy and numeracy

Secondary School

Year 7: Statewide Monitoring Program in literacy and numeracy

Year 9: Statewide Monitoring Program in literacy and numeracy (as of 2003)

Year 10: School-based assessment47

Year 12: Tasmanian Certificate of Education (TCE)

The Tasmanian Certificate of Education records students’ accumulated achievements in secondary courses from Year 10 to the time they leave school. It can be issued at any time on request from the student, or automatically after completing Year 12.

Most courses have school-based assessment, verified and moderated by the Tasmanian Secondary Assessment Board (TASSAB). ‘Pre-tertiary’ subjects, of which students must complete at least three in Year 12 to be eligible for tertiary entrance, also have an external exam. Final scores are the combined and scaled school-based and external assessments. These are then used to calculate the student’s Tertiary Entrance Rank (TER).48

Reporting

No comparative school-by-school results for the Statewide Monitoring Program are available to the public ‘due to concerns within the teaching profession that “league tables” may lead to invalid comparisons being made between schools’.49 Individual results are reported to schools and to parents against state averages. At the school level, schools can access only their own results and are expected to publish these in their Annual Reports.

The Tasmanian Office of Educational Review (OER) prepares reports on the aggregate outcomes of each monitoring programme at the district, sector and state level. It also provides a breakdown of results according to Educational Needs Index (ENI), which it calls a ‘like school’ analysis. The ENI is a composite of socio-economic status and eligibility for the Student Assistance Scheme.
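The general shape of such a composite can be sketched simply: standardise each input and average them. The real ENI's inputs, weights and scaling are not given here, so every number and the equal weighting below are assumptions.

```python
# Sketch of an ENI-like composite: standardise a SES measure and an
# assistance-eligibility rate, then average. Purely illustrative.
import numpy as np

ses = np.array([0.9, 1.1, 0.7, 1.3, 1.0])              # SES measure per school
assistance = np.array([0.40, 0.15, 0.55, 0.05, 0.25])  # share eligible for assistance

def zscore(x):
    return (x - x.mean()) / x.std()

# Higher educational need = lower SES and higher assistance eligibility.
eni = (zscore(-ses) + zscore(assistance)) / 2
for i, v in enumerate(eni):
    print(f"School {i}: ENI = {v:+.2f}")
```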

Comparative school level information on Tasmanian Certificate of Education and Tertiary Entrance Ranks is also unavailable to the public. The Department of Education, Training and Information’s Assessment Monitoring and Reporting Policy requires that ‘Information Privacy and Information Custodianship guidelines are adhered to in relation to maintaining the confidentiality of individual student- and school-level data’, but that ‘schools and colleges will report aggregated student achievement data in their Annual Reports’ which are distributed to parents.50

Australian Capital Territory

Testing

All ACT government primary and secondary schools, as well as a small proportion of non-government schools, participate in the ACT Assessment Program (ACTAP) in literacy and numeracy. In 2000, 93% of government school students and 20% of non-government school students sat for the Year 3 and 5 tests.51

Primary School

Year 3: ACT Assessment Program (ACTAP) in literacy and numeracy

Year 5: ACT Assessment Program (ACTAP) in literacy and numeracy

Secondary School

Year 7: ACT Assessment Program (ACTAP) in literacy and numeracy

Year 9: ACT Assessment Program (ACTAP) in literacy and numeracy

Year 10: School-based assessment

Year 12: Year 12 Certificate (externally-moderated school-based assessment)

Australian Scaling Test (AST)

The Year 12 Certificate is awarded on the completion of Year 12 by the ACT Board of Senior Secondary School Studies (ACTBSSS) ‘on the basis of continuous assessment of students’ progress over Year 11 and Year 12’, verified and moderated by the ACTBSSS. There are no external exams for any courses, as school-based assessment ‘is seen as a more valid, more reliable and fairer system of assessment’.52

To be eligible for a Tertiary Entrance Statement (TES) students must satisfactorily complete a certain quota of ‘tertiary accredited’ courses and sit for the Australian Scaling Test (AST). The AST is a system-wide, non-curriculum, multiple choice and writing test ‘designed to measure scholastic aptitude and to enable students to be compared equitably’.53 The Tertiary Entrance Statement reports scaled course scores and the University Admissions Index (UAI) calculated from these scores.

Reporting

Comparative school-by-school results in ACTAP are not available to the public and are not provided to parents. The April 2003 ACTAP Reporting Policy states that:

School averages on ACTAP may not be given out or published in any school or community publication or newsletter.

All school staff members are prohibited from using ACTAP data for comparisons between schools.

Principals, staff members, School Boards and P&Cs must not aggregate the results of individual strands to calculate an overall measure of school performance.54

Schools and parents receive reports detailing the results of individual students against state-level performance data and national benchmarks.

A consultation paper published in 2000 presented several possible models of school-level reporting on literacy and numeracy outcomes and called for comment.55 A telephone survey was also conducted to ascertain the level of support for the reporting of school-level results and the preferred method.

The outcomes of this consultation have not been published, but according to the ACT Department of Education Youth and Family Services, respondents to the consultation document were ‘largely satisfied’ with the current level of reporting, and the vast majority were ‘clearly against…receiving information which could be used to create league tables’. Respondents to the telephone survey, however, were ‘interested in knowing how their child’s school performed in the assessment programme compared to other ACT schools’, but did not support media access to this information.

The difference in opinion between the consultation paper respondents and the telephone survey respondents could perhaps be explained by the fact that the former group was voluntary and self-selected. This group was more likely to comprise people institutionally opposed to school performance reporting and organised to make submissions to that effect. The random telephone sample is therefore arguably more representative of parents’ views. Nevertheless, the ACT government has no intention of releasing school-level ACTAP results at present.56

Comparative performance data for senior secondary schools (called colleges) are published annually in the ‘Year 12 Study’.57 The tables provided in the Year 12 Study include, for each college:

• Number of candidates and the number of study units completed
• Means and standard deviations of AST scales and scores
• Aggregate scores and scaling score means and standard deviations
• Percentage distribution of grades in Tertiary and Accredited subjects
• Course participation rates
• AST scales and rescaled course means
• Number of Vocational Certificates and Statements of Attainment issued and number of students with at least one vocational qualification

There are no value-added or like-school analyses, but many tables disaggregate students of Non-English Speaking Background (NESB) and mature age students. The detail and range of indicators prevent the easy construction of league tables, but they also make it difficult for parents and students to use these data for the purposes of school choice.

Northern Territory

Testing

Children turning five in the Northern Territory enrol in ‘Transition’, which prepares them to begin Year 1 of primary school the following year. Each child in Transition is assessed for base-level literacy and numeracy skills.

Year 3, 5 and 7 students in all Northern Territory primary schools participate in the Multi-Level Assessment Program (MAP) in literacy and numeracy. Between 80% and 85% of students in state and non-government schools were assessed by MAP in 2000.58

Primary School

Transition: Base-level literacy and numeracy assessed by schools

Year 3: Multi-Level Assessment Program (MAP) in literacy and numeracy

Year 5: Multi-Level Assessment Program (MAP) in literacy and numeracy

Year 7: Multi-Level Assessment Program (MAP) in literacy and numeracy

Secondary School

Year 10: School-based assessment

Year 12: Northern Territory Certificate of Education (NTCE)

Writing Based Literacy Assessment

The senior secondary school qualification in the Northern Territory is the Northern Territory Certificate of Education (NTCE). To receive the NTCE, Year 12 students must achieve required standards in an approved course of studies. They must also satisfactorily complete the Writing Based Literacy Assessment.

The Northern Territory purchases curriculum and assessment services from the Senior Secondary Assessment Board of South Australia (SSABSA).59 Assessment of NTCE studies is therefore a mixture of external and school-based assessment like that for the South Australian Certificate of Education (see p.??).

To be eligible for a university aggregate and a Tertiary Entrance Rank (TER) students must have qualified for the NTCE and obtained required standards in at least five scalable Year 12 subjects.60

Reporting

Results of the Multi-Level Assessment Program (MAP) are not published on a school-by-school basis. Individual reports are provided to parents showing their child’s achievement compared to territory-wide proficiency levels and national benchmarks.

School results are provided to the Principal and school council of each school and are released at the discretion of these two groups. The school results are reported in terms of both National Benchmark achievement and achievement against the Northern Territory Curriculum Framework and are compared to the system average, cluster average and non-urban or urban schools average. Other analyses are available to the school upon request.61

No comparative school-by-school performance data for the NTCE are publicly available. However, a list of the Top 20 students and their schools is released, as well as a list of students who achieve Merit Awards (perfect scores) for individual subjects and their respective schools.62

Summary

Literacy and numeracy

No state or territory provides comparative data on school performance in literacy and numeracy. In most states, schools are expected to show how their results compare to state averages, but in the Australian Capital Territory this is not provided, and in the Northern Territory it is at the school’s discretion.

Year 12 Results

Only Victoria, Western Australia and the Australian Capital Territory provide comparative school-by-school data on Year 12 performance. None of these reporting systems includes a value-added analysis (although Victoria did previously) and none reports like-school comparisons. Only Victoria provides information on post-school destinations.

There seems to be little or no published research on the effect of these reporting systems on schools, that is, whether enrolment patterns have changed, whether certain schools have faced increased or decreased demand, or whether academic performance has improved. This lack of information is troubling in some ways, because such developments ought to be evaluated empirically as a matter of course. Yet by the same token, there is no evidence that any of the dire consequences predicted by opponents of school reporting have transpired.

Chapter 3: Principles of performance reporting

Parents and the public are interested in information on school performance. This has been demonstrated in the aforementioned study by Cuttance and Stokes,63 and in the data presented in Chapter 1, and is evident in the large amounts of space newspapers devote to the little information released to them each year.

Researchers and educators also know that information will be published with or without their support, and it is preferable to guide its production and presentation so that best practice is observed. It is possible therefore to compile a list of agreed principles for performance reporting that ensure it is done as fairly and accurately as possible.

The following ten principles are derived from those proposed in articles by David Jesson, Ken Rowe, Louise Watson, Peter Cuttance, Lance T. Izumi and Williamson M. Evers, Thomas J. Kane et al., Andrew Rotherham and Herbert J. Walberg.64

1. Appropriate measurement

This principle involves two issues: the appropriateness of the test or other measurement instrument (such as a survey) and the appropriateness of the derived statistic used to indicate performance.

Testing

Performance reporting in Australia does not require the introduction of new tests or increasing the frequency of existing tests. Therefore any concerns about testing are in the context that the stakes would be higher, not for students but for teachers and schools, and that the increased pressure to perform might be passed on to students.

There are several concerns about testing. One is that testing creates anxiety and stress for students.65 This may be true, but it is a question of degree. A small amount of anxiety is not always a bad thing. Low-level anxiety is simply the desire to do well and can enhance performance. High levels of anxiety are a different matter—they are not a natural response to a simple testing situation and should be investigated. High levels of anxiety may be related to other problems in a student’s life, or are perhaps created by unreasonable pressure from the teacher and/or parents. If students are accustomed to diagnostic tests, such as regular short tests given by teachers to assess student learning, they would arguably be less likely to be unduly troubled by annual state tests.

Another concern about testing is that results can be affected by random, one-off events, such as illness or a disruption in the school, and may not reflect the real performance and ability of the student or the school.66 This issue can be addressed through statistical modelling (Principle 3) and by providing multiple years of data (Principle 4).

Perhaps the most widespread concern about testing is that it creates a narrow focus and encourages teachers to ‘teach to the test’.67 But as Geoff Masters and Margaret Forster of the Australian Council for Educational Research explain, the positive or negative effect of testing is largely dependent on the test design and content.

A well-designed assessment system can be an effective means of focussing students’ attention on valued learning outcomes, encouraging higher order thinking and reflection, reinforcing curriculum intentions, and setting learners’ sights on still higher levels of attainment.68

That is, if the tests are well constructed and well aligned with what teachers are expected to teach and what it is important for students to learn, teaching to the test is not necessarily a bad thing. Any problems that arise from ‘teaching to the test’ require improving the test, not abandoning the practice.

Testing is an extremely important aspect of education. It provides objective data so that students, parents and teachers can identify strengths and weaknesses, and so that schools and systems can identify good and bad practice.69 Excessive testing is undesirable, but a programme of well-designed and properly administered tests is extremely valuable and relatively unproblematic as long as teachers view them in that light.

Indicators

It is generally accepted among both educators and analysts that simple statistics such as benchmarks, thresholds and average scores cannot, on their own, accurately convey a school’s academic effectiveness. A benchmark statistic might be the proportion of children reaching a particular level of achievement or above. A threshold statistic might be the proportion of students achieving in the top two achievement levels in literacy, or the proportion of students scoring more than 90 in an HSC course. Such indicators should not be used on their own for the following reasons:

* A benchmark or threshold can set the bar too low or too high to be meaningful. A benchmark is simply the minimum standard of competency considered acceptable. A threshold of achievement such as the distinguished achievers list takes account only of the top performing students and neglects the performance of the majority.

* Benchmarks and thresholds do not convey the range of student achievement. If 75% of a school’s students achieve the benchmark or above, we don’t know by how much the benchmark was exceeded, or by how much the remaining 25% missed. Similarly, upper limit thresholds do not show the distribution of scores. A school with a large proportion of students achieving the threshold may have an equally large proportion of students with very low achievement. A school with relatively few students within the upper threshold may have most of its students doing moderately well, none particularly well and none particularly poorly.

* Benchmarks and thresholds encourage schools to concentrate on those students on the margin, to the detriment of both high and low achievers.70

* Means or averages suffer similar problems to benchmarks. Although average scores are affected by the range of scores, they convey no information about their distribution.

While benchmarks and other single indicators do not provide all of the meaningful and relevant information necessary for fair reporting of school and student performance, they are legitimate. Presented in a way that shows how levels of proficiency and performance are distributed across the school population, and how many students are reaching standards, they indicate whether a school is doing as well as expected and where improvement is required.
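To make the distribution problem concrete, here is a brief sketch in Python (both schools and all scores are invented for illustration): two schools can post an identical benchmark rate and an identical mean while serving very different students.

# Benchmark of 50: both schools have 75% of students at or above it
# and a mean of 51.0, but School B pairs a strong top with a badly
# failing tail that both indicators conceal.
school_a = [50, 52, 55, 48, 51, 53, 49, 50]
school_b = [85, 80, 75, 55, 52, 51, 5, 5]

def benchmark_rate(scores, benchmark=50):
    """Proportion of students scoring at or above the benchmark."""
    return sum(s >= benchmark for s in scores) / len(scores)

def mean(scores):
    return sum(scores) / len(scores)

for name, scores in [("School A", school_a), ("School B", school_b)]:
    print(name, benchmark_rate(scores), mean(scores))
# Both lines print a rate of 0.75 and a mean of 51.0; only the full
# distribution reveals the difference.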

2. Contextualisation

a) prior attainment (value-added analysis)
b) demographics

School and student performance statistics are strongly related to factors beyond the influence of the school, including family characteristics. Some schools have large proportions of children with disrupted and difficult home lives, while others have very few.

Raw statistics also do not take into account differences in students’ abilities when they arrive at the school. In order to evaluate the effectiveness of a school, that is, how much students learned while they were there, the growth in student achievement must be measured. This is often referred to as ‘gain-score’ or ‘value-added’ analysis.

Widespread recognition of this latter point has led to the development of various ‘value-added’ techniques, and there is now also agreement that value-added indicators are necessary for fair and accurate evaluation of school effectiveness.71 (In the discussion that follows, ‘value-added’ refers to the practice of measuring achievement growth by adjusting raw scores for prior attainment. It does not by definition involve controlling for non-school factors such as socioeconomic status.)

Value-added analysis, however, is not always straightforward.

First, value-added analysis requires individual-level data collected over time. The performance of a group of students must be assessed against the prior attainment of the same students. It is not good enough to adjust performance scores against aggregate or proxy measures.72 Where individual data are not already collected, this might be seen as a significant obstacle, but this does not apply in any Australian state or territory.

Second, value-added analysis is not simply a matter of subtracting one score from another. It involves sophisticated statistical modelling. This is firstly because the two performance measures being compared are often different (for example, the School Certificate and the Higher School Certificate) and secondly, because there is usually wide variation between students within the one school. This within-school variance must be taken into account when determining the between-schools variance.
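The contrast between a simple subtraction and a model-based adjustment can be sketched in Python (all scores are invented; a real analysis would use the multi-level models discussed under Principle 3 rather than this single-level simplification):

import numpy as np

# Invented data: each student's prior score (e.g. School Certificate)
# and later score (e.g. HSC), with the school attended.
prior = np.array([60, 65, 70, 55, 80, 85, 90, 50, 62, 75])
later = np.array([64, 66, 75, 58, 82, 90, 93, 51, 70, 80])
school = np.array(["X", "X", "X", "X", "Y", "Y", "Y", "Z", "Z", "Z"])

# Naive 'value-added': one subtraction per student.
naive_gain = later - prior

# Regression-adjusted: predict each later score from the prior score,
# then treat a school's average residual (actual minus predicted) as
# its contribution over and above what its intake predicts.
slope, intercept = np.polyfit(prior, later, 1)
residual = later - (slope * prior + intercept)

for s in ["X", "Y", "Z"]:
    members = school == s
    print(s, round(float(naive_gain[members].mean()), 2),
          round(float(residual[members].mean()), 2))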

There is also an argument that most people are not capable of understanding the statistical techniques and principles on which value-added analysis is based. This is undoubtedly true, but it does not mean that most people are incapable of using the resulting indicators, if presented appropriately. As US education researchers Darrel Drury and Harold Doran point out, people properly use indicators such as the Consumer Price Index and interest rates in their daily lives without needing to fully understand how they are derived. They argue that ‘Trading rigor and accuracy for simplicity is an indefensible strategy—the stakes are simply too high’.73

Third, value-added analysis actually makes it more difficult to discriminate between schools. Numerous studies, particularly by British statistician Harvey Goldstein and his colleagues, have shown that the large margin of error in calculating value-added indicators (mainly because individual schools constitute small sample sizes) makes statistical significance difficult to obtain.74 That is, for the large majority of schools (around two thirds), there are no statistically significant differences in their value-added performance indicators.75 Value-added analysis identifies with confidence only those schools that are at the extremes of the performance range.

The work of Goldstein and his various co-authors is often quoted to support the opinion that performance indicators, even value-added, should not be made public. Indeed, Goldstein has claimed that value-added indicators should be used only as a ‘screening device’.76 Yet Goldstein has also warned against the private use of such indicators by schools, suggesting that ‘their inherent secrecy would seem to lend itself to manipulation by institutions, for example by ignoring the existence of wide uncertainty intervals or by the selective quotation of results’.77

Goldstein continues:

…although we have been generally critical of many current attempts to provide judgements about institutions, we do not wish to give the impression that all such comparisons are necessarily flawed. It seems to us that the comparison of institutions and the attempt to understand why institutions differ is an extremely important activity and is best carried out in a spirit of collaboration rather than confrontation. It is perhaps the only sure method for obtaining objectively based information which can lead to understanding and ultimately result in improvements (emphasis added).78

Therefore, a more accurate summation of the opinion of one of the leading critics of value-added analysis might be that value-added measures are meaningful if presented and used appropriately, and if the users are aware of the limitations of the device in making direct comparisons between schools within certain performance ranges.

Even if value-added indicators do not definitively discriminate between each and every school, it is important for the public to be aware of which schools have been confidently identified as very good and very poor performers on this measure. As noted above, if the results are presented carefully, including margins of uncertainty, there is no reason why such indicators cannot be a meaningful and useful addition to the performance information provided to parents and the public.

The fourth difficulty for value-added analysis is that students change schools, but some people see this as more of a problem than others. Goldstein portrays student mobility as a

major difficulty, claiming that ‘to include them properly would require enormous efforts at tracing them and recording their examination and test results’. By contrast, US researcher Robert Meyer suggests that student mobility can simply be included as a ‘control’ factor in value-added analyses, in the same way as socioeconomic status might be.79 Where traceable records are not kept, Meyer suggests that schools ‘test mobile students at the point of entry’.80 In fact, it can be argued that mobile students provide valuable information if, for example, their new school improves on their performance at their previous school.

In Australia, as long as the student remains within the state, there should be no reason why they cannot be tracked from school to school. One possibility among several is that students who have spent most of the period between tests in the school in which they are tested might have their results included in that school’s value-added analysis, while students who have only recently arrived at the school might have their results excluded from the whole-school results.

Finally, value-added scores are relative. They do not provide information about the absolute level of achievement in a school, just the achievement growth. Raw scores showing achievement of benchmarks and thresholds, on the other hand, provide a standard against which a school’s performance can be assessed. A school may show high growth from a low baseline but still fail to achieve an acceptable standard. Another school might have relatively low growth in achievement, but be performing at a high level.

Raw scores are important not just for accuracy and accountability, but also for informed choice. As Peter Cuttance writes,

Finding out how much individual schools add to student achievement is useful for working with schools to help them improve their performance, but it is not a way of providing information on actual school outcomes… Parents know that their daughter or son is good at certain things and not so good at others. They often look for a school that allows the son or daughter to excel in the area of their strengths and gain a good grounding in other areas.81

There is, therefore, a case for providing both raw and value-added indicators of school performance. There is also an argument for contextualising academic indicators to reflect non-school factors. That is, to present information in a way that takes account of the differing circumstances of schools, whether it be social and economic disadvantage, the representation of children with disabilities, a student population with multiple non-English speaking backgrounds, or other demographic characteristics that can influence student outcomes.

Both raw scores and value-added indicators are potentially partly dependent on non-school factors. To compare schools without information about the context within which they operate might be seen as unfair.

Most value-added analyses that adjust for non-school influences on achievement include only socioeconomic status (SES)—an index usually comprising household income, parental education and parental occupation—or a proxy measure of parental income. In the United States and the United Kingdom, the proportion of students eligible for free school lunches is often used as a proxy measure.

There are two ways to approach the ‘contextualisation’ of performance indicators. One is to add socioeconomic status, or another measure of economic disadvantage, to the value-added statistical model.

This changes the nature of the analysis substantially. Instead of the model estimating the growth in achievement attributable to the school, it estimates this growth in achievement as if the SES of all schools was the same. That is, the indicator produced is no longer based purely on actual school performance, but estimates what school performance might be if the effect of socioeconomic status was controlled.

One problem with this approach is that it removes an element of truth from the indicators, making the analysis more complicated and the result less immediately meaningful. The other is that it endorses the idea that some schools are exempt from the expectation of achievement.

While it is true that SES is correlated with student achievement, the nature of this relationship is not entirely straightforward. Multi-level modelling of various influences on student achievement has shown that SES alone accounts for a relatively small proportion of the variance in student achievement (9-15%) compared with teacher quality (30-60%).82 If the moderate relationship between SES and student achievement largely disappears when teacher effects are controlled, then it seems that the lower achievement of low SES students is largely attributable to their teachers. One is led to the conclusion that, for a variety of reasons, low SES students are likely to have poorer quality teaching.

Among other things, this means that not all the educational disadvantage of low SES is imported from the home environment. Therefore, to control for socioeconomic status in the analysis of school effectiveness arguably removes a legitimate source of school-based variation in student achievement.

Another way of approaching the contextualisation of school performance is to compare similar schools. In the ‘like with like’ approach, the raw and value-added scores of a school are presented in a way that shows how they compare with other schools with similar demographic characteristics. There is no manipulation of the performance indicators, but the school’s performance can still be evaluated relatively fairly.

3. Appropriate statistical models

As noted above, there are two main sources of variation in school performance – school-level factors and individual-level factors. These two sets of factors are not necessarily independent.

It is necessary, therefore, to use multi-level statistical models and methods ‘which are appropriate to the underlying structure of data, and in particular to its “nested” nature; that is: using procedures taking account of the fact that pupils in given classes and particular schools share more “in common” with each other than with “similar” pupils in other classes and schools’.83

This sort of statistical technique separates the effects of in-school factors from out-of-school factors, and the interactions between them. It is not within the scope of this paper

to describe multi-level statistical modelling in detail; it is enough to note that the literature is readily available and that there are a sufficient number of people in Australia with the relevant expertise.84
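For readers who want a concrete starting point, the following is a minimal sketch of such a model in Python using the statsmodels library (the records and column names are invented, and a real analysis would involve far more data and more careful model specification):

import pandas as pd
import statsmodels.formula.api as smf

# Invented records: one row per student, with students nested in schools.
df = pd.DataFrame({
    "school": ["A"] * 5 + ["B"] * 5 + ["C"] * 5,
    "prior":  [55, 60, 65, 70, 75, 50, 58, 66, 72, 80, 52, 61, 67, 73, 78],
    "later":  [60, 63, 70, 74, 78, 55, 60, 70, 75, 85, 54, 66, 70, 79, 82],
})

# A two-level model: a common regression of the later score on prior
# attainment, plus a random intercept for each school. Fitting it
# partitions the variance into a within-school component (students)
# and a between-school component (the school intercepts).
result = smf.mixedlm("later ~ prior", df, groups=df["school"]).fit()
print(result.summary())
print(result.random_effects)  # each school's estimated departure from the average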

4. Multiple years of data

It is difficult to accurately discern changes in school performance without multiple years of data, for two reasons. First, the differences between schools’ test score gains are much smaller than the differences between their beginning levels of performance. Second, measuring change can ‘amplify the effect of sampling variation and other one-time factors that lead to fluctuations in performance’.85

That is, judgements about school performance should not be made on the basis of improvement or lack of improvement in only one year. Random fluctuations in school performance can occur from year to year, but a consistent trend of improvement or decline should give a good indication of a school’s effectiveness.

As data accumulate, value-added analyses can be conducted using data over a number of years, or by using a ‘moving’ average. There is, however, a danger in using data that are more than a few years old in such analyses, as the school may have changed considerably since that time.
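The moving-average calculation itself is straightforward; a sketch in Python using pandas, with invented yearly value-added scores for a single school:

import pandas as pd

# Invented value-added scores for one school over successive years.
va = pd.Series([4.1, 5.0, 4.6, 5.9, 6.2],
               index=[1999, 2000, 2001, 2002, 2003])

# A three-year moving average smooths one-off fluctuations while still
# tracking a genuine trend; years more than two back drop out of the window.
print(va.rolling(window=3).mean())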

5. Presentation of uncertainty

All statistical estimates of a true value have a degree of uncertainty associated with them.86 In value-added analysis, an indicator is derived for each school. But this indicator is an estimate, based on the information entered into the model, and is therefore not definitive. For each school there is a range of scores—the ‘uncertainty interval’—within which the school’s true score could reasonably fall.

One school can be judged as significantly more effective than another only if the uncertainty intervals for each school are discrete. If the uncertainty intervals overlap, there is a reasonable probability that the schools’ true scores are the same. As noted above, Harvey Goldstein’s research in the UK found that only a minority of schools at the extremes of performance are significantly different from the majority.

To avoid inaccurate comparisons and rankings, publication of value-added indicators should be presented with the uncertainty intervals. This is not necessarily a complicated process and can be accomplished by reporting the value-added indicator with a plus or minus sign and the uncertainty interval. For example, a school’s results might be reported as 5.9 ± 2. This shows that the true value-added score could reasonably lie between 3.9 and 7.9. Alternatively, just the range might be published. Any other school with an overlapping uncertainty interval cannot confidently be regarded as superior or inferior. This is the format used in the suggested typology presented below.
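The overlap rule is equally simple to state in code. A sketch in Python, using the 5.9 ± 2 example above alongside two invented schools:

def overlaps(a, b):
    """True if two (estimate, half_width) uncertainty intervals overlap."""
    (ea, ha), (eb, hb) = a, b
    return (ea - ha) <= (eb + hb) and (eb - hb) <= (ea + ha)

school_1 = (5.9, 2.0)  # the example from the text: spans 3.9 to 7.9
school_2 = (7.5, 2.0)  # invented: spans 5.5 to 9.5
school_3 = (1.2, 2.0)  # invented: spans -0.8 to 3.2

print(overlaps(school_1, school_2))  # True  -> no confident ranking possible
print(overlaps(school_1, school_3))  # False -> a significant difference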

6. Broad range of indicators

Although academic achievement is, or at least should be, the prime objective for schools, there are other valuable outcomes of schooling. Not all of the desired outcomes of schooling are measurable, but many are.

Louise Watson argues that although there may not be consensus on the priority that schools should give to achieving certain outcomes, almost all educators would agree that there are several key objectives for schooling. Watson suggests that the Common and Agreed Goals for Schooling in Australia (the ‘Hobart Declaration’) be used as the starting point for determining the objectives of schools. She summarises the goals into the following four categories:

1. Acquisition of functional literacy and vocational skills;

2. Acquisition of discipline-based ‘academic knowledge’;

3. Attainment of personal maturity, physical health, confidence and social skills;

4. Shared values and an appreciation of Australian society, economy and culture.87

The 1989 Hobart Declaration was superseded by the 1999 ‘Adelaide Declaration’ of National Goals for Schooling in the Twenty-First Century. The 1999 Goals are not substantially dissimilar to the 1989 Goals, except in the greater specificity of the wording and the addition of a further category dealing with ‘social justice’:

5. Equitable levels of achievement among the various sub-groups of the population—ethnic, gender and socioeconomic—as well as understanding and tolerance for cultural differences.88

Although this list is relatively vague and somewhat unwieldy, some states are already some way towards satisfying it. Outcomes in the first and second categories are measured through basic skills tests and assessed secondary school qualifications. The fifth category can be addressed by disaggregating the results of various sub-groups, which is also already occurring. In Annual School Reports, primary schools report the literacy and numeracy results of girls, boys, indigenous and non-indigenous students separately. The fourth category may prove the most difficult: assessing knowledge of Australian society, economy and culture is relatively simple, but demonstrating knowledge does not equate with the value and respect the student holds for these things. Physical health and fitness could be measured by the school’s sporting accomplishments and participation rates. Singapore schools are assessed on the proportion of students who are overweight—an overarching measure that potentially impacts on the food provided in the school, how well diet and nutrition are taught, and the level of exercise.89

There are potentially an overwhelming number of tests, surveys and statistics that could be collected and reported about schools. Not all of them provide information that is pertinent to parents making decisions about schools or for the purposes of public accountability. Like too little information, too much information can make such decisions and evaluations more difficult but not necessarily more accurate. A list of indicators, and the form in which they might be presented, is proposed later.


7. Timeliness

Indicators of school performance have been criticised on the basis that they are historical rather than predictive.90 That is, that the results produced by a school over a certain period of time do not necessarily reveal or guarantee the results they will achieve in the future.

This largely affects the usefulness of performance data for parental choice of schools. The public accountability purpose of performance reporting is not undermined by this argument. It is important to know how a school has been performing in order to investigate underlying factors and sources of successes and failures, and to determine whether these underlying factors are likely to influence results in the future. Sources of failure can then be attended to and sources of success shared.

In the case of parental choice, a history of good school performance over a number of years may not guarantee future success (or bad performance future failure) but it will suggest to parents areas of concern to discuss with a prospective school and allow parents to weigh up each school’s strengths and weaknesses.

We assess the likelihood of success or good performance based on past performance in a variety of areas where we intend to make a significant investment, such as retirement funds. Furthermore, if we say that what has been observed in the past is unconnected to what might happen in the future, we must dismiss the entire process of scientific research, since it is predicated on the notion of observation and prediction.

It is nonetheless important to disseminate school performance information as quickly as possible. This is as crucial for teachers, schools and students as for anyone else. Information must be provided in a timely manner so that teachers can provide feedback to students and focus on areas that need improvement.

According to eminent US education scholar Herbert J. Walberg, ‘the near future looks bright for timely results’.91 Walberg cites the rapid development and wide accessibility of computer programmes that analyse and format results, and the potential for dissemination of results via the internet, saving considerable printing and distribution time.

8. Clear to practitioners and users

This criterion needs little explanation. Any system of performance reporting must be clearly communicated to schools, teachers and students so that they are fully aware of the objectives and methods. The performance indicators reported to the public, as well as their implications and limitations, must be clear and concise, so that people using the information to guide their decisions about schools are not misled.

9. Appropriate incentives and penalties

Accountability is not a benign concept. Providing information about school performance is only one side of the equation—there must also be consequences. If school performance is to improve, there must be incentives for good performance and penalties for poor performance.

There are two ways to approach accountability—top-down and bottom-up. Top-down accountability comes from education authorities. For state schools, this is the state or territory Department of Education. Top-down accountability can take the form of departmental intervention in low-performing schools, whether it be providing financial and/or human resources to help schools improve, ‘reconstituting’ (replacing management or entire staff), or closing chronically failing schools. High-performing schools might be rewarded in some way—most likely through financial or other rewards such as computers, library books, sporting equipment or salaries.

Bottom-up accountability can come from student-centred funding and freedom of choice. If funding for school education was allocated on a per-child basis, irrespective of school attended, and that funding went to the school of choice, high-performing schools which attract more students would receive more funding and low-performing schools would have low enrolments and hence less funding. Here we have an accountability mechanism that is directly related to a school’s ability to fulfil the expectations of parents.

There are arguments for the superiority of both types of accountability, but the most persuasive case can be made for a combination of the two, which involves top-down setting of standards and bottom-up apportioning of consequences for failing to meet or exceeding those standards. The support for such a system exists across traditional political and philosophical divides. Educationist Chester E. Finn Jr., of the pro-school choice Thomas B. Fordham Foundation and the Manhattan Institute writes:

the prospect of success is brightest in…the intersection of standards-based, top-down accountability and market-style, bottom-up accountability.92

Andrew Rotherham of the Progressive Policy Institute, a Democratic Leadership Council think-tank, supports this view. Rotherham describes a model called ‘accountable choice’ which combines elements of markets and state regulation and writes:

Coupling of bottom-up market pressures with the top-down standards in key academic subjects is the most promising strategy.93

Finn and Rotherham agree that top-down accountability is insufficient to drive improvements in schooling because of the lack of incentives at a system level. According to Finn,

bad schools are extremely difficult to change into good ones, particularly when the agents of their putative transformation are lumbering government bureaucracies working within a political environment where myriad interest groups…have great power to block changes they dislike.94

There is also the possibility that sanctions from education authorities may be too heavy-handed or too light-handed. A school that the education department regards as needing intervention may be satisfactory to the parents of children who attend it. On the other

hand, schools with chronic difficulties requiring drastic action may not be dealt with as quickly as parents feel necessary.

Rotherham and Finn also agree that parental choice is an effective mechanism and widely benefits students, but that simply allowing freedom of choice is not a fool-proof way of encouraging improvement. Finn emphasises the role of the state in setting standards for schools to strive for and compelling all schools—public and private—to provide information about their compliance with those standards. ‘Without a transparent marketplace based on uniform standards and rich with comparable and publicly accessible data, one must trust every school to tell the truth’.95 He believes that it is possible for the state ‘to set sound standards in core subjects without trying to dictate every school’s entire curriculum’.96

Rotherham is more interventionist than Finn, saying that ‘the primary shortcoming of the information model is its reliance on the market to sanction low-performing schools’.97 Rotherham sees a role for the state in sanctioning those schools that do not meet state-determined standards, regardless of whether they continue to attract and retain students.

So what role for the state and what role for the market? The state should provide the framework against which parents and the public can measure and evaluate schools. It makes sense for the state, rather than individual schools, to take responsibility for the development and administration of tests, for reasons of consistency, comparability and economy. The state should also take responsibility for the reporting of school performance results for the same reasons.

This does not mean that governments themselves should necessarily perform these tasks. There are potentially many agencies with the expertise to do this, both in universities and the private sector. Rather, governments should ensure that all schools are evaluated against the same criteria and that the results are reported consistently and fairly.

This takes care of the information side of the accountability equation. What role the state should play in providing the rewards and sanctions that drive changes in school performance is open to question. Certainly the market has the major role. Through student-centred funding and freedom of choice, parents can ensure the best possible education for their own child and in doing so create pressure on schools to attract students and hence resources. Schools prosper or struggle depending on whether they can satisfy parent and student expectations. There is evidence that the element of competition introduced by school choice has benefits not just for the students who exercise choice, but across the board.98

Yet there are valid arguments for state intervention. First, there is a possibility that some schools perform badly and still maintain stable enrolments. This is a particular risk in rural areas, where children might have very few educational choices despite the opportunities provided by student-centred funding. In this case there is a justification for state intervention. The state assumes the role of ensuring that schools at least meet adequate standards and intervenes in schools that fail to do so. In this way, the role of the state is minimised and schools are not subjected to ‘overly burdensome or unnecessarily numerous’ regulations.99

There is also the rationale that the choices of parents ought to largely dictate whether state intervention is warranted. Those parents choosing a state school for their child are

perhaps, at least in part, making that choice with the expectation that government will hold the school to particular standards. Choice of an independent school, however, conveys a different expectation. Government involvement in independent schools ought then to be reserved for extreme cases, such as where a school is acting against the law or in some way poses a ‘risk’ to society.

This might be seen as a safety net rather than as a way of encouraging excellence. Even if this were true, a safety net is more than we have at present. There is no way of knowing how many students are in schools that are not providing them with the quality of education to which they are entitled. Many parents are unaware that their child’s school is falling well short of what should reasonably be expected. Setting high but realistic standards and expecting all schools to achieve them is the least we can do; to succeed beyond that provides its own rewards.

10. a. Institutional response

b. Agency responsibilities

c. Enforcement

These features of a system of school performance reporting are proposed in a 2000 paper by Ken Rowe, based on Goldstein and Myers’ ‘code of ethics’. In essence they provide for schools ‘a means of redress if there is cause to believe that they have been unfairly labelled’.100

The right of ‘institutional response’ would provide schools with the means and the forum to demand re-analysis if dissatisfied with the findings and, if validated, retraction and recompense. The principle of ‘agency responsibility’ would place responsibility, and therefore liability, for accurate analysis and presentation of the performance indicators on the distributors and publishers. The ‘enforcement’ condition is to ensure compliance with the above principles. Rowe suggests the appointment of an education ombudsman to oversee such matters.

This principle introduces the potential for chaos if all schools appeal the results. One way to limit this might be to balance the possibility for recompense against the responsibility for costs if the appeal is lost.

By the same token, data provided by schools must be verified. There might be a random audit of the information provided by schools, with penalties if it is found that inaccuracies were the fault of the school, rather than inspections of every school report.

Chapter 4: Performance reporting in other countries

Few countries throughout the world have a long history of reporting comparative school performance information to the public, but some have been developing and refining reporting systems over a decade or more, including England and certain states of the USA. This chapter provides a brief overview of the performance reporting systems in

several countries. None of them fully satisfies the principles or ‘best practice’ model described in Chapter 3. The English model, with its recent addition of a value-added indicator, is perhaps the closest.

England

The 1991 Parents Charter of the Department of Education and Science requires that school performance information, based on national examinations and tests, be published, allowing comparisons between schools and between Local Education Authorities (LEAs). Although these data are not released by the Department in a way that ranks schools, they are published by newspapers in the form of ‘league tables’, ranking schools from highest to lowest in the indicators provided.

Initially, the academic performance indicator provided was the proportion of students who achieved five or more GCSE (General Certificate of Secondary Education) scores above a threshold (scores ranging from C to A*). Widespread concern was expressed that this measure encouraged schools to focus on marginal students—those in danger of achieving lower than C grades—to the detriment of other students. There has also been a lengthy debate over the need to take into account student intake by introducing a value-added assessment.101

The school performance information now provided by the Department for Education and Skills (DfES) is far more comprehensive. It is easily available on the Department’s website, where people can search for the results for an individual school, view tables with the results for all schools within each Local Education Authority, or search for the results of all schools within a specified range of their residential postcode. The tables are presented in alphabetical order by the name of the school, not in any form of ranking, since there is no single definitive indicator on which all schools can be ranked.102

Background and contextual information on students, such as the proportion of students with special educational needs, absentee rates and exam participation rates, are provided. Performance information is on student achievement in the GCSE and GNVQ (General National Vocational Qualification) and assessments of English, Maths and Science at the end of Key Stage 3, when children are approximately 14 years old.

For GCSE and GNVQ, the original indicator described above is still available, along with several others. The proportion of children in each school receiving no passes is listed, as well as another measure that allocates points for each GCSE grade achieved and reports the average for each school. In this measure, the performance of all students affects the final outcome.

A value-added measure is available for the first time for the 2002 tables, published in 2003. This measure is a very simple calculation of the relative increase in performance from Key Stage 2 to Key Stage 3 and from Key Stage 3 to GCSE/GNVQ. A gain-score figure is derived for each student from one assessment to the next and an average gain-score is calculated for the school. This figure is compared to the median national gain-score and converted to a figure based on 100, which is the value-added measure published. Details of these calculations are also on the DfES website.103
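A sketch of that calculation in Python (the student records are invented, and the conversion to a base-100 figure is a simplification; the DfES website documents the actual rules):

# Each entry is (earlier assessment points, later assessment points)
# for one student; the schools and scores are invented.
schools = {
    "School A": [(30, 42), (28, 38), (35, 50)],
    "School B": [(25, 30), (33, 41), (29, 36)],
}

# Per-student gain-scores, pooled 'nationally' to find the median.
gains = sorted(b - a for pupils in schools.values() for a, b in pupils)
mid = len(gains) // 2
national_median = (gains[mid] if len(gains) % 2
                   else (gains[mid - 1] + gains[mid]) / 2)

for name, pupils in schools.items():
    school_average = sum(b - a for a, b in pupils) / len(pupils)
    # The published measure: the school's average gain relative to the
    # national median, centred on 100 (simplified rendering).
    print(name, round(100 + school_average - national_median, 1))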

The tables are only available for secondary schools. There are plans, currently under consultation, to introduce performance reporting, including value-added measures, for primary schools.104

Another important feature of public accountability in England is the inspectorate system. The Office for Standards in Education (OFSTED) undertakes an extensive inspection of each school at least once every six years, the report of which is publicly available. The inspection process extends accountability from simply identifying which schools are or are not doing well to explaining why. Chronically underperforming schools are closed, but this is a rare event. Around 12 out of some 24,000 schools have been closed to date.105

There is strong disagreement over the effect these tables have had on schooling in England. Some claim it has generated a testing culture in schools that has impoverished the learning experience, as well as creating a competitive environment that has been largely detrimental.106

Others judge the effect to have been positive overall. Howard Glennerster of the London School of Economics recently investigated the change in performance levels from 1995—the period over which full testing in maths, science and English has been conducted at ages 7, 11, 14 and 16.107 During this time, despite minimal extra education spending, the number of students achieving the expected levels of competence has increased substantially. The increase at age 11 (Key Stage 2, the end of primary school) was most remarkable, with the proportion of students achieving the reading benchmark increasing from 49% in 1995 to 81% in 2001. The improvement was less for 14 year olds—in the order of ten percentage points in all subjects—but is still striking when set against a ‘balance of evidence’ suggesting that the basic maths skills of secondary pupils had not improved since the 1960s.

This overall improvement is not simply the result of good schools getting better at the expense of lower performing schools. Improvement has been across the board, and Glennerster shows that ‘if anything, there has been greater improvement by the lower performing schools’.108 This has led to a substantial narrowing of the gap between high and low performing schools, as well as between wealthy and economically disadvantaged schools. Not all of this improvement can be attributed to the incentives provided by performance reporting, as other measures such as devolved funding were established during the same period, but it is difficult to conclude from these results that the reforms, in combination, have been damaging.

Researchers from Lancaster University, Steve Bradley and Jim Taylor, also reached the conclusion that the reforms, including school performance reporting, have generated improvement in educational achievement.109 Bradley and Taylor found a small increase in social segregation of schools, as middle and upper-middle class parents were more likely to take the opportunity to seek out a better performing school. Yet they emphasise that this effect was small, and since according to Glennerster there has been no corresponding down-turn in performance, it cannot necessarily be construed as negative. Nonetheless, Bradley and Taylor, like Glennerster, advocate that special attention be paid in future to children in the most disadvantaged areas to ensure that the educational benefits of better choice and information are universal.


USA

School performance reporting in the United States varies a great deal between states. Only a brief summary of some of the more developed systems is provided here, with those interested in more information encouraged to seek out the work of the Koret Task Force on K-12 Education in particular.110 The states using value-added analysis include Texas, Tennessee, North Carolina, Minnesota, Arizona, Kentucky and Maryland. Others, such as Michigan and California, are developing value-added systems.

Florida

Accountability for school performance in Florida is strong and decisive and appears to have been successful. Each year, Florida’s A+ Accountability Program publicly awards each school a grade from F through to A+. The grade awarded depends on the school’s performance on the Florida Comprehensive Assessment Test (FCAT) and learning gains on previous FCATs, and is weighted by the improvement of the lowest-performing students at each school.

Schools and teachers improving markedly or performing well over a number of years receive financial rewards. Low performing schools are initially assisted with extra resources, but students at schools that are awarded two F grades within a four year period are eligible for tuition vouchers to attend another school, including private schools. Those students attending private schools through this programme are required to continue sitting for the FCAT.111

An analysis of this programme by Jay P. Greene, of Harvard University’s Program on Education Policy and Governance, found that all schools improved over time, and those schools facing the prospect of losing students if they received a second F grade in four years improved their test scores at twice the rate of other schools.112 The most recent release of school grades has shown that Florida now has six times as many 'A' schools as in 1999, while the number of failing schools has dropped by more than half in the same time,113 and Florida’s overall performance in the 2002 National Assessment of Educational Progress (NAEP) showed improvement greater than all but four other states.114

Texas

A ‘report card’ is published for each public school in Texas. The report card contains pass rates for the Texas Assessment of Academic Skills (TAAS) at the school, district and state level, as well as disaggregating school results by ethnicity and socioeconomic status. It also provides the number of exemptions from the test, attendance and drop-out rates, student-teacher ratios, completion rates and administrative and instructional costs per child.

Yearly progress in TAAS as well as other exam results and participation rates are used to rate school and district performance at four levels—exemplary, recognised, acceptable and unacceptable. Schools with similar ethnic, socioeconomic, mobility and language

background characteristics are also compared. Schools with good performance can receive financial rewards. Consistently poor performing schools are subject to various interventions, including offering students the option to attend another public school and even closure. There are, however, no exit vouchers to attend private schools.

The effect of this accountability system has proved difficult to evaluate. While TAAS pass rates have increased substantially since it was introduced in 1990, changes to the TAAS and the way it is scored have made accurate comparisons over time difficult. Even so, improvements have also occurred in the National Assessment of Educational Progress over the same period.115

Kentucky

Kentucky has one of the most sophisticated assessment systems. Students sit for external curriculum-based examinations in almost every year from grade 4, take ‘off the shelf’ standardised tests in grades 3, 6 and 9, and are also assessed on portfolios of their classroom work. The Kentucky system is notable for following the progress of individual students over time, and for its use of statistical modelling to determine value-added scores and to control for student characteristics.116

The assessments of student achievement are used to calculate and publish a School Performance Index for each school. Schools exceeding their improvement goal receive financial rewards to be spent for school purposes or as salary bonuses. Low performing schools have a manager appointed to oversee school improvement strategies, including the firing of staff. There is no school choice option.117

Tennessee

Tennessee was one of the first states to introduce value-added analysis, largely under the influence of the now renowned William Sanders. The Tennessee Comprehensive Assessment Program (TCAP) tests all students annually using a customised version of an ‘off the shelf’ standardised test. The Tennessee Value Added Assessment System (TVAAS) provides reports to parents and the public on the learning gains made by individual schools, compared with national, state and district averages. Value-added analysis is conducted using a ‘mixed model’ approach to exclude student characteristics including race and socioeconomic status, as well as prior attainment. A three-year moving average is used for any accountability decisions.118

School report cards are produced annually for each school. Each school’s value-added performance is provided in the form of a grade from A to F, rather than a number. The report cards also contain demographic information about the school, suspensions and expulsions, per pupil spending, and retention and attendance rates. According to J.E. Stone, Professor of Education at East Tennessee State University, ‘On the whole, student achievement in Tennessee has been improving over the years that value-added assessment has been in place.’119

Singapore

Information about the academic performance of individual secondary schools in Singapore has been published since at least 1995. The Singapore Ministry of Education each year releases tables of the top achieving schools and awards various prizes for excellence in both academic achievement and school practice.

Tables available on the Ministry of Education’s website include:

• Top 50 schools for absolute performance in both Special/Express (advanced) and Normal courses;
• Top 20 schools for value-added performance in both Special/Express and Normal courses;
• Top 50 schools by physical fitness level and fitness index, and schools with lowest percentage overweight.120

The website also offers a facility called the School Information Service, which allows people to search for information on a particular school, including indicators of the aggregate range for courses offered and a physical fitness test score and overweight percentage.121

The Singapore system is notable for its simplicity and clarity, and for its equal emphasis on the cognitive and the physical. This apparently reflects the significance the Singapore education system gives to neurological research, and the understanding that good health is necessary for optimal brain function.122

Singapore consistently ranks at the top of international tests of student achievement. It was first or equal first in both maths and science in the Third International Maths and Science Study in both 1995 and 1999, but did not participate in the PISA tests of 2000.

New Zealand

Indicators of academic performance of individual schools have previously been available on request from the New Zealand Qualifications Authority (NZQA). Many newspapers have obtained these data and constructed and published league tables. In 2003, comprehensive comparative school data were published on the internet.

The information published is on secondary schools only. It provides data on average school performance in the qualifications now being phased out, as well as the first phase of the new qualifications—the National Certificate of Educational Achievement (NCEA) Level One. Levels Two and Three will be introduced over the next two years.123

The NZQA’s website provides the distribution of results for individual schools, disaggregated by subject, level of study, gender and ethnicity, as well as participation rates and background data.124 It is also possible to view a summary of results for schools in a region. For some indicators, individual school results are compared to national averages.

The tables do not rank schools, and as the standards-based NCEA replaces other qualifications it will become increasingly difficult to do so. There is no value-added measure and no like-school comparison. The latter is perhaps a surprising omission since schools in New Zealand are already classified in socioeconomic deciles for funding purposes.

As in England, New Zealand state (including integrated) schools are subject to inspection by the Education Review Office (ERO) on a three to four yearly basis. These reports are distributed to parents and are available to the public on the ERO website.125

Chapter 5: The Possibilities for Australia

It is theoretically possible, albeit arduous, for a person to obtain comparable school-level data by contacting each school individually to request their annual reports (with the exception of the ACT). The theory, however, does not match the reality. Even in NSW, where the Department of Education and Training sets out guidelines for the mandatory reporting of school academic performance in Annual School Reports, the way this information is presented in the actual reports makes comparison extremely difficult, if not impossible.

Using a medium-sized country town as a case study, I obtained annual reports for the three public primary schools and two public high schools. Each of the primary schools reported their Basic Skills Test results in a different format, grouping different skill bands together and making full comparisons between schools impossible. They reported the number of students sitting the tests, but not the proportion of the year cohort this represented. The high schools did not report any value-added measure of performance from the School Certificate to the Higher School Certificate, and only one school provided a table showing the school’s performance in HSC subjects against the state averages. Even where some meaningful information could be extracted from the reports, it was certainly not presented in a way that was conducive to this practice, and was arguably inaccessible to parents without strong literacy skills and an awareness of what they should expect to see.

The serious problems with both the annual school reports and the Distinguished Achievers list published in newspapers (See Chapter 2) are not reasons to cease providing performance data, but rather to provide better data. Newspaper reporters and editors have tapped into community interest and will publish whatever they can get their hands on. The onus is therefore on the possessors and distributors of this information to ensure that it is meaningful.

It is not a case of having to collect the information, as it already exists in every state and territory. To again use New South Wales as an example, the NSW Department of Education and Training (DET) and the NSW Board of Studies possess a huge amount of information on student performance covering a substantial period of time. In 1996, the then Director General of Education and Training, Dr Ken Boston, declared that ‘there is an abundance of hard data on school effectiveness’, including ‘sophisticated analyses of longitudinal change and development over time’.126 According to a 1999 Sydney Morning Herald report, ‘more than two million pieces of information’ on students have been compiled in a confidential departmental database since 1991, including ‘where they went to school, what subjects they studied, how they performed in the HSC, and whether they gained a place at university or TAFE’, and these data allowed ‘detailed comparisons between the performances of schools and systems’.127 During the late 1990s, the Herald conducted a lengthy campaign to access this information through Freedom of Information requests but was denied.

It is also not a case of having to undertake new or unfamiliar analyses. Not only does the NSW DET hold a wealth of raw data, it already conducts the sorts of analyses of school performance suitable for public reporting. In a 1997 speech to the NSW Legislative Assembly, the then Minister for Education, John Aquilina, claimed that NSW has ‘designed its statistical systems in relation to school accountability and annual reporting to produce value-added analysis’.128 The NSW Council on the Cost and Quality of Government has also reported that ‘research within the NSW Department of Education and Training has led to the development of value-added indicators available at the school level from Year 3 to 5, Year 5 to 7, Year 7 to 10 and Year 10 to 12’.129

Some schools are apparently already using value-added analysis for their own purposes. St Mary’s Senior High School in Sydney provides in its Annual Report a value-added analysis of how well students performed in the HSC compared to the School Certificate.130 Primary schools are required to track and report the gains made by students from the Year 3 Basic Skills Tests to the Year 5 Basic Skills Tests. These analyses may not be as sophisticated, and therefore as accurate, as would be desirable in a system-wide programme, but they demonstrate that, in NSW at least, the concept is by no means alien and is sometimes welcomed.

The information potentially at Departments’ disposal is not restricted to academic performance. Several surveys measuring student and parent opinions on schools, or the ‘affective and social dimensions of learning’, are also available to schools. The Quality of School Life Survey, developed by the Australian Council for Educational Research, has been used by a small proportion of NSW state primary schools and almost half of state secondary schools. The Western Australian Department of Education and Training administers a ‘Social Outcomes of Schooling’ assessment as part of its Random Sample Assessment Programme. Although these surveys presently involve only a minority of all schools, there seems little reason why they could not become a routine part of all school evaluations.

All that remains is to make this information available to parents and the public.

There is therefore relatively little additional effort and cost involved. The fact that governments take the trouble to collect and analyse school performance information suggests that they see it as useful and valuable. To deny public access to this information because it might be ‘misused’ or ‘misunderstood’ by parents and others is patronising if not contemptuous. Mr Aquilina’s stated reason for not publicising school performance indicators, to avoid ‘the public risk of unnecessarily creating school failure’, also does not hold water. The analogy used by Krista Kafer and Jennifer Garrett of the Heritage Foundation is ‘blaming the X-ray for the fracture’.131 Performance reporting does not cause ‘school failure’; it simply identifies it and, with appropriate accountability measures, provides incentives for improvement.

The funding required for performance reporting is negligible since most of the information already exists. As a reform measure, it offers immense benefits at a minimal cost. Caroline Hoxby determined that even the most sophisticated and time-intensive assessment and reporting systems in the United States cost less than half of 1% of school budgets.132 In Australia, the most costly part of the process, testing and collation, is already being done. It remains only to analyse and distribute the information.

Indicators of School Performance

There are two aspects of performance reporting to consider:

1. Which indicators should be reported?

2. How should they be published?

Most articles advocating the publication of school performance indicators stop short of making an actual detailed proposal, presumably because it is a difficult proposition and is doomed to criticism. Nevertheless, the following undertakes to do so, on the understanding that it is intended to provide a basis for debate.

It takes into account the principles of performance reporting described earlier, but does not claim to be definitive. There are potentially hundreds of indicators about schools that could be included, but a balance must be struck between informativeness and usefulness. Likewise, it can be argued that many of these indicators are unnecessarily burdensome, since the main business of schools is teaching the academic curriculum. The typology presented probably errs on the side of comprehensiveness rather than simplicity, and the inclusion of all but the academic indicators is debatable.

In order to provide some structure and consistency to the list of indicators, Carol Taylor Fitz-Gibbon suggests that a ‘typology of indicators’ first be devised to allow direct comparisons and to facilitate communication. This typology should have ‘mutually exclusive and exhaustive categories so that there is a place for every indicator and only one place for each indicator’, as well as a rationale or structure based in theory and categories that reflect educational research.133

Fitz-Gibbon proposes a typology based on Bloom’s Taxonomy of Educational Objectives—cognitive, affective and psychomotor (behaviour/physical) goals.134

A/ Affective (e.g. attitudes, aspirations, feelings)

B/ Behavioural (e.g. skills, actions)

C/ Cognitive (e.g. test scores, achievements)

D/ Demographics (e.g. sex, ethnicity, SES)

E/ Expenditures (e.g. funding, teachers)

F/ ‘Flow’ (e.g. curriculum, retention)


John Braithwaite and Brian Lowe, on the other hand, suggest a simpler typology, consisting of:

1/ Inputs (e.g. class size, per capita expenditure)

2/ Process (e.g. curriculum, pastoral care)

3/ Outputs (e.g. basic skills tests, retention rates).135

While Fitz-Gibbon’s typology has the benefit of being based on theory, it is arguably too elaborate for general use. Braithwaite and Lowe’s typology has the advantage of being easy to understand, while still meeting the criterion of mutually exclusive categories. In the following proposal, the latter typology has been adopted with one addition—the demographic and other characteristics of the school, as sketched below.
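To make the structure of the adopted typology concrete, the sketch below expresses it as a simple data record. It is illustrative only: the field names, groupings and example values are my own, not a schema drawn from Braithwaite and Lowe.

```python
# A minimal sketch of the adopted typology as a data structure.
# Field names and example values are illustrative, not a fixed schema.
from dataclasses import dataclass, field

@dataclass
class SchoolReport:
    # School and student characteristics (the addition to Braithwaite and Lowe)
    name: str
    school_type: str                # e.g. 'Primary', 'Secondary', 'K-12'
    sector: str                     # e.g. 'State', 'Catholic', 'Independent'
    enrolment: int
    ses_index: float                # socioeconomic index, as used for funding

    # 1/ Inputs: the resources brought to the school
    inputs: dict = field(default_factory=dict)    # e.g. {'class_size': 24}

    # 2/ Process: what the school does with those resources
    process: dict = field(default_factory=dict)   # e.g. {'programmes': ['ESL']}

    # 3/ Outputs: the results the school achieves
    outputs: dict = field(default_factory=dict)   # e.g. {'attendance_rate': 93.0}
```

Because every indicator sits in exactly one of these groups, a record of this kind also satisfies Fitz-Gibbon’s requirement of mutually exclusive and exhaustive categories.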

School and student characteristics

School type
a) Primary / Secondary / Middle / Senior Secondary / K-12 / other
b) State / Catholic / Independent
c) Religious Affiliation
d) Comprehensive / Specialist / Full Selective / Part Selective

School size
a) Number of Students
b) Number of Staff

Socioeconomic Status (SES)

Information on school type and school size is self-explanatory and should be easy for schools to provide. The socioeconomic characteristics of a school are somewhat less so.

An SES index of the school will have already been determined for all non-government schools, as their Commonwealth government funding is allocated on this basis.136 Whether an equivalent index is calculated for all government schools in all states and territories is not known. Certainly in NSW, the Priority Schools Funding Programme is based on a measure of the school population’s social and economic disadvantage. It is difficult to imagine that the same process by which an SES index is calculated for non-government schools could not be used for state schools. Even if such an index were not published, it would be necessary for ‘context-adjusted’ value-added analysis or like-school comparisons.

Language Background Other Than English (LBOTE) population (optional)

Many schools believe that cultural diversity among their students is an asset, and indeed many parents also believe that it is important for their children to be among children from different language and cultural backgrounds. It is difficult, however, to see on what grounds parents would need such information, or why it should affect their opinion of the school, so this category should be optional.

In as much as the ethnicity of the school population affects the school’s academic performance, it can be an advantage. The Australian Council for Educational Research has found that, as a group, students with language backgrounds other than English (determined by father’s country of birth) have higher average levels of tertiary entrance achievement than those with English-speaking backgrounds. This higher average result is largely due to the high performance of students with Asian backgrounds, with other ethnic groups such as Southern European and Pacific Islander performing substantially worse on average than students with Australian-born fathers.137

Inputs

Per Capita Annual Student Expenditure
a) State and Commonwealth Government Funding
b) Annual Fees
c) Total

One of the major recommendations of the 2001 ‘Grimshaw Review’ of Non-Government Schools in NSW was that non-government schools be more publicly accountable with regard to both their overall level of financial resources and their use of government funding. If this is the case, then state schools must be at least equally accountable, since they receive even larger amounts of public money.

In the words of Carol Taylor Fitz-Gibbon: ‘To measure the cost [of schooling] does not mean that one is trying to cut costs, but to get the best value for money’.138 It is impossible to assess whether a school represents ‘value for money’ without knowing just how much each school spends to achieve a particular outcome. Articles in Sydney’s The Daily Telegraph in 2000 and 2003 claimed that state schools were ‘better value’ than non-government schools, after comparing only their fees and not their total expenditure levels including government funding.139 A proper comparison may yield the same conclusion, but without all of the relevant information such claims are spurious.

Teacher Qualifications

The connection between teacher qualifications and teacher quality is by no means perfect, but it is reasonable to expect that teachers have studied the subject they are teaching. Unfortunately, in many schools this is not the case. Many parents would not be aware of this situation, and it is unlikely to change without pressure from the public to allow differential salaries to attract teachers to subjects and schools where there are long-term vacancies or high turnover.

Teacher Turnover

The rate of teacher turnover—the proportion of teachers leaving annually and/or the proportion of staff who have been at the school less than two years—is an important indicator of the school climate. Parents should also know whether they can reasonably expect their child to have the same teacher for the entire year, and whether there will be a sense of continuity and stability in the school.

Teacher turnover can, however, be at least partly beyond the control of the school. Teachers in public schools in NSW are allocated to schools by a centrally administered transfer point system—time spent in less popular schools is rewarded with a larger number of transfer points. This creates a situation where disadvantaged schools have a higher proportion of inexperienced teachers, who stay only long enough to accumulate enough points to transfer to another school, if they do not succumb to the stress and leave the public school system altogether.

This suggests that teacher turnover rates might also be presented with a like-school comparison.

Class Size

Despite the fact that research has failed to substantiate the effect of smaller classes on student achievement,140 many parents are convinced of their merits. Providing class size information as part of a wider array of indicators may generate further evidence to test this belief.

Parental Participation

The benefits of parental participation in children’s education have been demonstrated conclusively.141 How this might be measured is not clear, but one possibility is an estimate of the proportion of parents who are involved in the school in some way, whether through P&C membership, reading support, fundraising, sports, arts, or other activities.

Process

This section gives schools an opportunity to list the teaching and learning activities in the school, emphasising those areas in which they feel they have the most to offer.

Special programmes/Curriculum focus

Many schools run programmes developed to target the particular needs or abilities of their students. Such programmes might target literacy using the Spalding Method, English as a Second Language (ESL), mentorship for indigenous students, school-to-work, boys’ education, or gifted and talented education, among many others.

Cultural, Sporting and Community Activities

Most if not all schools involve their students in sporting, cultural and/or community activities, whether as an extension of studies or as extra-curricular programmes. Listing participation in such activities provides parents and others with an indication of the range of interests and abilities catered for, and the breadth and depth of the non-academic life of the school.

Discipline & Behaviour

(a) suspensions and expulsions

(b) violent incidents at school

Outputs

1. Participation

(a) attendance rate (average daily percentage of enrolled students present)

(b) retention to Year 12 (percentage of Year 10 students who complete Year 12)142


Participation rates provide important information about the quality of schooling and are a key indicator of how well a school caters for the abilities and interests of its students. Attendance rates are particularly important from this point of view, while retention rates must be considered alongside other factors, such as post-school destinations.

According to Professor Colin Power, then of the University of Queensland’s Graduate School of Education,

In a society where a full secondary education for all is espoused as a systems goal, participation rates provide some indirect evidence of the degree to which an institution or system…is effective in providing a meaningful and worthwhile educative experience...143
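As a rough illustration of how the two participation indicators defined above might be calculated, the following sketch uses invented enrolment figures; the definitions follow those given in the list above.

```python
# Illustrative calculation of the two participation indicators.
# All enrolment figures are invented for the example.

def attendance_rate(daily_present: list[int], enrolled: int) -> float:
    """Average daily percentage of enrolled students present."""
    return 100 * sum(daily_present) / (len(daily_present) * enrolled)

def retention_rate(year10_cohort: int, year12_completers: int) -> float:
    """Percentage of the Year 10 cohort who complete Year 12."""
    return 100 * year12_completers / year10_cohort

# Four days of attendance at a school with 200 enrolled students:
print(attendance_rate([180, 176, 185, 179], 200))  # 90.0
# 117 students of a Year 10 cohort of 150 go on to complete Year 12:
print(retention_rate(150, 117))                    # 78.0
```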

Academic

a) raw scores in external examinations
(option: disaggregated by gender, Aboriginal and Torres Strait Islander (ATSI), LBOTE, etc.)

b) value-added scores showing progress in external and/or moderated assessments
(option: like-school comparison)

The rationale for these indicators has been explained in detail above. Obviously, the indicators provided will vary depending on the state or territory and whether the school is a primary, secondary or other type of school.

There are a number of ways in which academic results, both raw and value-added, might be presented, including graphs and tables. Graphs have the advantage of making comparisons between scores, whether over time or between different groups of students and state averages, immediately obvious. In the case of value-added comparisons, a graph can clearly show whether uncertainty intervals overlap, and therefore whether scores are significantly different.

Tables have the advantage of allowing people to make their own comparisons, using the available data. A graph showing the value-added scores (and associated uncertainty intervals) of a given number of schools in, say, a certain school district, does not provide the flexibility to make comparisons between schools not on the graph.

Appendix B provides a suggested format for providing this information in a form that allows parents to directly compare the schools of interest to them, but also allows the construction of graphs for those interested in the wider distribution of scores. This suggested format provides the estimated value-added score and uncertainty interval in the form X ± y, where X is the estimated score and y is the uncertainty interval. Basic addition and subtraction will show how School A compares to School B.
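A short sketch shows how the X ± y format supports exactly this kind of comparison. The school names and scores are invented; the rule applied is the one described above, that two scores differ significantly only if their uncertainty intervals do not overlap.

```python
# Comparing value-added scores reported as X ± y. All figures are invented.

def significantly_different(x_a: float, y_a: float,
                            x_b: float, y_b: float) -> bool:
    """True only if the intervals [X - y, X + y] do not overlap."""
    return (x_a + y_a) < (x_b - y_b) or (x_b + y_b) < (x_a - y_a)

# School A: 52 ± 4 gives [48, 56]; School B: 59 ± 2 gives [57, 61].
print(significantly_different(52, 4, 59, 2))  # True: B is significantly higher
# School C: 55 ± 5 gives [50, 60], which overlaps School B's [57, 61].
print(significantly_different(55, 5, 59, 2))  # False: no significant difference
```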

This method does not prevent the construction of league tables, and it would be possible to rank schools. Results of international tests of student achievement such as the Trends in International Mathematics and Science Study (TIMSS) and the Programme for International Student Assessment (PISA) are presented in a way that both ranks countries and allows pair-wise comparisons. Appendix C contains an example, taken from Masters and Forster, which compares TIMSS results for states and territories.144 The same technique could be used for schools, perhaps at a postcode level, as the sketch below illustrates.
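The sketch below suggests how such a pair-wise table might be generated from X ± y data, in the style of the TIMSS comparisons. Again, the schools and figures are invented for illustration.

```python
# A TIMSS-style pair-wise comparison table built from X ± y data.
# '+' means the row school is significantly higher than the column school,
# '-' significantly lower, '=' no significant difference. Figures invented.

schools = {'School A': (52, 4), 'School B': (59, 2), 'School C': (55, 5)}

names = sorted(schools, key=lambda n: -schools[n][0])  # rank by estimate
print(''.ljust(10) + ''.join(n.rjust(10) for n in names))
for row in names:
    x_r, y_r = schools[row]
    cells = []
    for col in names:
        x_c, y_c = schools[col]
        if row == col:
            cells.append('.'.rjust(10))
        elif x_r - y_r > x_c + y_c:     # row interval lies wholly above
            cells.append('+'.rjust(10))
        elif x_r + y_r < x_c - y_c:     # row interval lies wholly below
            cells.append('-'.rjust(10))
        else:                           # intervals overlap
            cells.append('='.rjust(10))
    print(row.ljust(10) + ''.join(cells))
```

Ranked in one dimension but read pair-wise, a table like this avoids the false precision of a simple league table: School B outranks School C on the point estimates, yet the ‘=’ cell shows the difference is not statistically significant.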

Post-school destinations

Primary schools:

(a) number of students achieving a place at a selective secondary school

(b) number of students awarded a scholarship to a non-government school

Secondary schools:

(a) university

(b) TAFE or other further education/training

(c) apprenticeships

(d) employment

Not all schools have a high level of academic achievement, yet still provide good outcomes for their students. Some schools have made outstanding efforts to ensure that school leavers move into some form of education, training or employment. Indeed, there are schools that claim a 100% success rate.145 The post-school destinations of students are a very important indicator of the effectiveness and commitment of a school.

Parental and student satisfaction:

e.g. A-F rating on

a) instruction

b) pastoral care

c) discipline

It is quite possible that a school that does not appear to be performing particularly well on academic indicators, or is not outstanding in any other way, is considered satisfactory by parents and students. A survey of parental and student satisfaction need not be a complicated exercise, and could be conducted (anonymously if necessary) at parent-teacher nights, for example.

Publication

There are on the market a number of publications purporting to assist parents in choosing a school for their child, but most do not provide objective, verifiable information about academic performance. The Good Schools Guide, which is produced for NSW, the ACT and Victoria, is promoted as the ‘ultimate guide’ to secondary schools in the respective states. The NSW edition claims to contain HSC performance data for each school, but has only data derived from the Distinguished Achievers list—the proportion of HSC scores above 90%. This is not helpful, especially when a significant proportion of schools have no figure listed. This could mean either that these schools had no HSC scores above 90% or that they do not wish to reveal this information, neither of which bodes well for the school, fairly or unfairly. The Victorian version contains similar information—the proportion of VCE scores higher than 40—and is far short of ideal. This is not the fault of the publishers, who are trying to meet a demand for this information but are forced to work with the little information to which they have access.

Two other publications claiming to help parents to select a school are also on the market and readily available in NSW—the Guide To Schools and Choosing a School For Your Child. Neither contains any more information than the Good Schools Guide, and in some cases a good deal less. The information provided specifically by and for the independent school sector is no better. A publication by the Association of Heads of Independent Schools of Australia, The Right School for Your Child, is an independent schools directory containing only information such as the school’s history and mission, its fees and enrolments, and courses offered. There is, again, no information about school performance.

An important feature of all of the publications on the ‘school choice’ market is that they are entirely focussed on schools that go to Year 12, as though education is only important when children reach the later stages of their schooling. Not one state or territory in Australia provides comparative school results for primary schools, and it seems that publishers also believe that parents are, for the most part, less interested in this stage of their children’s education. This is borne out to a certain extent by enrolment patterns—28% of children attend non-government primary schools compared to 37% in non-government secondary schools146—but this relative apathy toward primary schooling is ill-advised.

Primary schooling lays the foundations for later achievement. The performance of secondary schools is partly determined by the abilities their students bring with them.147 A child who is not sufficiently literate or numerate when they leave primary school faces significant disadvantages, no matter how good the secondary school. Parents should be very concerned to ensure that their children are being equipped with the basic skills and knowledge they need.

It is possible to publish comprehensive information on both primary and secondary schools simply and inexpensively. One option is for government to publish the results, both on the internet (as OFSTED in the UK does) and in book form. The book would not have to be provided to every parent; it should be sufficient to provide one to each school and one to each public library. Likewise, a book for the whole state or territory might not be necessary; there could be an edition for each major city as well as a non-metropolitan edition.

Another option is to continue to allow private organisations to publish the information, using data available from the departments of education, along with any other information obtained from schools. There is already an established market for these books, which retail at $10 to $20, despite their serious shortcomings.

A good prototype can be found among the various statistical handbooks produced by organisations interested in comparing the economic and social characteristics of nations, such as the Fraser Institute’s Index of Economic Freedom, the Heritage Foundation’s Statistical Handbooks or the many United Nations publications. A book on schools following the same format would have a page for each school, a section detailing how the information was derived and explaining how it ought to be interpreted, and a table showing how the schools compared to each other on important indicators.

Chapter 6: Conclusions

There is a good deal of angst among researchers, educators and parent advocacy groups about making more information on school performance available to all parents and the public. They are concerned that schools will be judged unfairly, and seem to consider this a greater injustice than the possibility of hundreds, if not thousands, of children receiving a substandard education while the people responsible look the other way.

Statisticians have warned that even the most sophisticated methods of evaluating school performance are imperfect. Yet they also admit that appropriate analysis and presentation of value-added measures can provide meaningful information. While these indicators may not be able to rank schools precisely, they can, with some confidence, reveal which schools are especially good and which are particularly poor.

Education researchers claim that differences between schools are less important than the differences between classes within schools, and therefore that schools should not be under scrutiny. Yet what is a school if it is not teachers and students? To say that a ‘school’ is unaccountable because its failings are those of its teachers is to dodge the issue. And that issue is that, all things being equal, some schools do better or worse than others mainly because they have better or worse teachers. We need to know which schools have the most and least effective teachers, so we can work out what makes them that way and learn from it.

Public reporting on school performance alone will not provide the solution, as it has no inherent value or virtue. It is not the mechanism for change, just the impetus. There are several other key reforms necessary if the quality of school education is to improve for all students and their families. The ultimate and ideal reform is fully funded school choice—a family-centred system where the level of public funding to which a child is entitled goes to the school of their choice, irrespective of whether it is a government or non-government school.

Performance reporting is one of the three major conditions that must be set in place for choice to work efficiently, effectively and equitably. The others are decentralising the recruitment and dismissal of teachers, and devolving funding to the level of the school.

Allowing schools to hire and fire is perhaps the single most important reform necessary for public schools to succeed. Principals outside the public system prize highly their ability to ‘choose their team’ according to the particular needs of their students and see it as one of the great advantages of non-government schooling. Given the strong evidence and prevailing belief that teachers have the greatest influence on learning, it is crucial that schools manage their most valuable asset.

Teachers are not the subject of this paper but of a future one, because people must first be convinced of the need for change. The lack of information has led some people to assume the worst and flee, while others have used it as an excuse for complacency. One can only hope that the reason parents are not up in arms about the low standards of basic skills in their child’s school is that they don’t know about it.

The public reporting of school performance is not necessarily justification for greater state intervention and regulation. There will inevitably be tensions between allowing schools the freedom to do their job, allowing parents the freedom to make choices for their children, and the principle of public accountability.

A balance must be struck: setting sound standards in the core business of schooling without restricting the ability of schools to respond to the needs of students and the desires of parents, and without compromising the professional autonomy of teachers. As described by economists Eric Hanushek and Margaret Raymond, ‘In contrast to a regulatory approach, the underlying philosophy of accountability is letting the responsible parties maintain control of a process whose outcomes are scrutinized’.148

It is one thing for standards and benchmarks to be centrally developed and administered, because this allows comparability and consistency. It is another thing altogether to suggest that governments must act on this information, either with penalties or rewards. It should be enough to provide this information and allow parents and the community to use it as they see fit, and to decide whether and how schools should be held accountable. In the case of underperformance, those parents who view the centrally set standards as important can respond, while those who do not will be no worse off for knowing. The action of even a small proportion of parents can generate change and improve the lot of the majority.

There is clearly much to be done. The greatest obstacles to the sustainable improvement of the quality of schooling are the power of teachers’ unions and the weakness of governments. Providing the information that would support the need for change is only the first step, but one that must be taken.


1 Alison Rich, ‘Code of Silence: Public Reporting of Schools’ Performance’, Issue Analysis 16 (Sydney: The Centre for Independent Studies, 2000), http://www.cis.org.au/IssueAnalysis/ia16/ia16.pdf
2 Some of the most well-known and respected supporters of performance reporting in Australia include: Ken Boston, former Director General of Public Education in New South Wales; Professor Peter Cuttance, University of Melbourne; and Professor Ken Gannicott, formerly of the University of Wollongong. It is not clear which among these or other writers might have made these arguments first, so direct attributions are not always given.
3 Warren Grimshaw, Review of Government Schools in New South Wales Report, March 2002, p.12, http://www.det.nsw.edu.au/reviews/ngsreview/
4 Ken Gannicott, Taking Education Seriously, Policy Monograph 38 (Sydney: The Centre for Independent Studies, 1997), p.39
5 Peter Cuttance, ‘The politics of accountability in Australian education’, Educational Policy, 12:1/2 (1998), 138-161.
6 Peter Cuttance, Raising the Stakes: Accountability for Education in Government Schools (1996), http://www.edfac.usyd.edu.au/prjects/addresses/cuttancep/ (16/8/00)
7 The funding thresholds for various levels of public accountability are a matter for debate. To be eligible for any public funding at all, non-government schools should perhaps be expected to meet minimal standards of educational provision and infrastructure. Schools that receive, say, at least half of the cost of educating a child in a state school might be required to sit state examinations of basic skills. Those schools that refuse public funding entirely might be required only to operate within the parameters of the law.
8 ‘Alarm grows at student fail rates’, The Age (5 August 2000).
9 Tony Vinson, Inquiry Into the Provision of Public Education in New South Wales, Second Report, July 2002.
10 Louise Watson, ‘Public accountability or fiscal control: Benchmarks of performance in Australian schooling’, Australian Journal of Education, 40:1 (1996), 104-123, p.107
11 Morag Fraser, ‘Where are the intangibles in school league tables’, The Age (8 December 2002).
12 Andrew Rotherham, ‘A new partnership’, Education Next, Spring 2002, 37-41, p.39
13 Ken Gannicott, ‘League tables of school performance’, Policy, Spring 1998, 17-22, p.18
14 See Jennifer Buckingham, Families, Freedom and Education: Why School Choice Makes Sense (Sydney: The Centre for Independent Studies, 2001).
15 Ebru Yaman, ‘Nation wary of Year 12 league tables’, The Australian (11 October 2002).
16 Australian Council of State School Organisations (ACSSO) and Australian Parents Council (APC), Assessing and reporting student achievement: A report of the national parent consensus (1996), p.6
17 ACSSO & APC 1996, p.7
18 ‘Schools seek university entrance data’, The Sydney Morning Herald (13 June 2003).
19 As above.
20 Ken Rowe, R. Turner and K. Lane, ‘Performance feedback to schools of students’ Year 12 assessments: The VCE Data Project’, in School Improvement Through Performance Feedback, ed. Adrie J. Visscher and Robert Coe (Lisse: Swets & Zeitlinger, 2002), p.168
21 Kenneth J. Rowe, ‘Assessment, league tables and school effectiveness: Consider the issues and “let’s get real”!’, Journal of Educational Enquiry 1:1 (2000), 73-98.
22 As above, p.76
23 As above, pp.76-77
24 Ken Gannicott, ‘League tables of school performance’, Policy, Spring 1998, 17-22, p.18
25 Peter Cuttance and Shirley A. Stokes, Reporting on School and Student Achievement (Canberra: Commonwealth Department of Education, Training and Youth Affairs, 2000), p.86
26 The ACNielsen/CIS survey is based on a sample of 5,721 Australian residents. These came from or were linked to the Australian Internet User Survey—a survey of internet users who are invited to participate through online advertising banners, hyperlinks, newsgroups and online news items. Our particular sample was achieved in two ways: (a) Between 12 and 27 March 2003, people completing the internet user survey were invited at the end to do the CIS survey as well (4,369 people did so); and (b) Of those completing the internet user survey before 12 March, a random sample of 4,369 were contacted again later to ask if they would like to do ours (1,352 agreed to do so). This sampling strategy introduces three potential sources of bias. First, this is not a probability sample design, for it is based on self-selection. This means that inferential statistics (including standard errors) are inappropriate for analysing these data. Second, the target population consists of all Australian internet users, but this diverges from our theoretical population of all adults in Australia. There are good reasons to believe that people who use the internet are a peculiar and specific section of the whole population—not a cross-section of it. Third, there is the normal survey bias problem that those in the target population who agree to participate in the survey may be quite unlike those who refuse. These three problems can be rectified to some extent by weighting. The final sample was weighted by gender, age, state of residence and annual income to bring it into line with population estimates by the Australian Bureau of Statistics. This is a standard survey procedure for correcting sample biases, but it is not ideal. It controls only for certain characteristics, and there is no guarantee that the sample will turn out to be representative on other, uncontrolled, characteristics. To check for this, it is important to run various tests of ‘external validity’ (that is, to check sample distributions for uncontrolled variables against other, external, sources). One such test compares our respondents’ stated voting intentions and reported past voting behaviour with opinion poll data for the same period. Roy Morgan polls conducted in March and April 2003 gave the Coalition between 39.5% and 45.5% support (the ACNielsen/CIS survey gives 44.9%); similarly, Morgan gave the ALP 36% to 42% support (ACNielsen/CIS gives 33.4%). It seems from this that there may be a small skew against Labor supporters in our final, weighted sample, and this is borne out by our data on how people claim to have voted in the November 2001 federal election: Roy Morgan gives 43% to the Coalition, 38% to the ALP and 19% to minor parties, while ACNielsen/CIS gives 45.6%, 34.3% and 20.1%, respectively. A second external validity test is to compare our data on marital status with that recorded in the first wave of the Household Income and Labour Dynamics in Australia (HILDA) survey. The HILDA survey shows: 21% had never been married, 56% were married, 8% had been either divorced or separated, 10% were in a de facto relationship, and 2% had been widowed. The ACNielsen/CIS survey gives 26.2%, 48.4%, 12.8%, 11.1% and 1.6%, respectively. Thus, the two surveys appear broadly consistent, although our survey slightly overestimates those who had never been married and those who had been divorced/separated, and slightly underestimates those who are married. Overall, the weighted survey therefore appears to generate reasonably valid population estimates.
27 Steering Committee for the Review of Commonwealth/State Service Provision, Report on Government Services 2003, Table 3A.19 (Melbourne: Productivity Commission, 2003).
28 NSW Department of Education and Training, Annual Report 2003, pp.47-48.
29 NSW Audit Office, Performance Audit Report: Department of Education and Training: the School Accountability & Improvement Model (Sydney: NSW Audit Office, 1998), p.16.
30 These points have been made by Bettina Arndt in The Sydney Morning Herald (‘Invisible achievers’, 20 December 2001) and Maralyn Parker in The Daily Telegraph (‘HSC honours list doesn’t give the full story’, 11 December 2002). Parker, however, used these data barely two months later to compare state and non-government schools, confidently declaring that state schools are ‘the best value for parents’ education dollar’ (‘State high schools are top of the class’, The Daily Telegraph, 17 February 2003, p.12).

31 Lynn Kosky, Improved Educational Outcomes: A Better Reporting and Accountability System for Schools (Melbourne: Victorian Department of Education and Training, October 2002).
32 Amanda Dunn, ‘New data “will hurt schools”’, The Age (10 October 2002).
33 John Houghton, Research and Analysis Division, VCAA, personal communication (29 May 2003).
34 The QSA is required under state legislation to provide external examinations for Senior Certificate subjects in certain circumstances, but they are not routinely offered to students in schools. 2003 Senior External Examination Handbook (www.qsa.qld.edu.au/yrs11_12/testing/ee/handbook/handbook.pdf)
35 The Monitoring Standards in Education (MSE) unit also conducts the Random Sample Assessment Programme, which tests ten percent of the student population, randomly selected, in Years 3, 7 and 10 in one or two of the eight learning areas. Not all students are tested and subjects vary from year to year, so this programme is not suitable for school performance reporting.
36 SCRCSSP 2003
37 Tertiary Entrance Examination Handbook August 2002, http://www.curriculum.wa.edu.au/files/doc/99925_1.doc (2/6/03)
38 www.eddept.wa.edu.au/walna/faq.html#confidential (accessed 2/6/03)
39 www.eddept.wa.edu.au/walna/faq.html#gifted (accessed 2/6/03)
40 http://www.curriculum.wa.edu.au/pages/publication02.htm (accessed 4/6/03)
41 http://www.thenetwork.sa.edu.au/parents/programs/school_entry_assessment.htm (4/6/03)
42 SCRCSSP 2003, Table 3A.19
43 Reporting to parents is currently being reviewed with a view to increasing the level of detail and reporting against national benchmarks.
44 Irene Janiszewska, Senior Project Officer, Integrated Assessment Program, SA Department of Education and Children’s Services, personal communication (17 June 2003).


45 Frank Biedermann, Information Systems Team Leader, SSABSA, personal communication (12 June 2003).
46 SCRCSSP 2003, Table 3A.19
47 External assessment in Year 10 has been proposed by TASSAB and is under consideration by the Department. (http://www.tassab.tased.edu.au/4DCGI/_WWW_doc/006076/RND01/actions.pdf)
48 Tasmanian Secondary Assessment Board, Tasmanian Certificate of Education Manual (Hobart: TASSAB, 2003), http://www.tassab.tased.edu.au/4DCGI/_WWW_doc/006081/RND01/Manual2003.pdf
49 Tasmanian Audit Office, Auditor General Special Report No.31: Literacy and Numeracy in Tasmanian Government Schools, March 2000, p.36 (http://www.audit.tas.gov.au/reports/2000/TAOrep31.pdf)
50 http://connections.education.tas.gov.au/Nav/StrategicPolicy.asp?ID=00000812
51 SCRCSSP 2003, Table 3A.19
52 ACT Department of Education and Community Services, Government Schooling in the Australian Capital Territory, undated but contains term dates for the 2003-2005 school years, so is presumably current, http://www.decs.act.gov.au/publicat/pdf/gov_school.pdf (12 June 2003)
53 www.schoolparents.canberra.net.au/council_brief_2.htm (13 June 2003)
54 www.decs.act.gov.au/policies/pdf/ReportingofACTAPResultstoParentsPolicy.pdf (12 June 2003), points 4.5, 4.6 and 4.9.
55 ACT Department of Education and Community Services, Reporting on Literacy and Numeracy Outcomes in ACT Government Schools, http://www.decs.act.gov.au/publicat/pdf/literacyfinal.pdf
56 Mike Turner, Manager, Assessment and Reporting Section, ACT Department of Education, Youth and Family Services, personal communication (24 July 2003 & 7 August 2003).
57 ACT Board of Senior Secondary Studies, Year 12 Study 2002 (Abridged Version) (Canberra: ACT Government, 2003).
58 SCRCSSP 2003, Table 3A.19
59 http://www.deet.nt.gov.au/inform/ntce.html
60 http://www.education.nt.gov.au/senior.shtml
61 Randall Cook, Manager, Systemic Assessment, Curriculum Services Branch, NT Department of Employment, Education and Training, personal communication (19 June 2003).
62 Jill Stevens, Senior Years Team, Curriculum Service Branch, NT Board of Studies, personal communication (20 June 2003).
63 Cuttance and Stokes 2000
64 David Jesson, The comparative evaluation of GCSE value-added performance by type of school and LEA, Discussion Papers in Economics 2000/52 (Department of Economics and Related Studies, University of York, 2000); Kenneth J. Rowe, ‘Assessment, league tables and school effectiveness: Consider the issues and “let’s get real”!’, Journal of Educational Enquiry 1:1 (2000), 73-98; Louise Watson, ‘Public accountability or fiscal control: Benchmarks of performance in Australian schooling’, Australian Journal of Education, 40:1 (1996), 104-123; Peter Cuttance and Shirley A. Stokes, Reporting on School and Student Achievement (Canberra: Commonwealth Department of Education, Training and Youth Affairs, 2000); Lance T. Izumi and Williamson M. Evers, ‘State accountability systems’, in School Accountability, ed. L.T. Izumi and W.M. Evers (Stanford, CA: Hoover Institution Press, 2002); Thomas J. Kane, Douglas O. Staiger and Jeffrey Geppert, ‘Randomly accountable’, Education Next, Spring 2002, 57-61; Andrew Rotherham, ‘A new partnership’, Education Next, Spring 2002, 37-41; Herbert J. Walberg, ‘Principles for accountability design’, in School Accountability, ed. L.T. Izumi and W.M. Evers (Stanford, CA: Hoover Institution Press, 2002).
65 Scott G. Paris, Theresa A. Lawton, Julianne C. Turner and Jodie L. Roth, ‘A developmental perspective on standardised achievement testing’, Educational Researcher 20:5 (1991), 12-20; William McKeith, ‘Even for seven-year-olds, exams are taking the fun out of being a kid’, The Sydney Morning Herald (21 May 2003).
66 Thomas J. Kane, Douglas O. Staiger and Jeffrey Geppert, ‘Randomly accountable’, Education Next, Spring 2002, 57-61.
67 Mary Lee Smith, ‘Put to the test: The effects of external testing on teachers’, Educational Researcher 20:5 (1991), 8-11; John Braithwaite and Brian Lowe, ‘Determining school effectiveness through performance indicators’, Indicators in Education: Papers from the First National Conference, Australian Conference of Directors-General of Education (Sydney, August 1988).
68 Geoff N. Masters and Margaret Forster, The Assessments We Need (Melbourne: Australian Council for Educational Research, 2000).
69 Barry McGaw, ‘Benchmarking for accountability or improvement’, Unicorn, 21:2 (1995), 7-12.
70 Eric A. Hanushek and Margaret E. Raymond, ‘Sorting out state accountability systems’, in School Accountability, ed. L.T. Izumi and W.M. Evers (Stanford, CA: Hoover Institution Press, 2002); Jesson 2000.


71 Jesson 2000
72 Harvey Goldstein, Pan Huiqi, Terry Rath & Nigel Hill, The use of value-added information in judging school performance (London: Institute of Education, 2000).
73 Darrel Drury & Harold Doran, ‘The value of value-added analysis’, Policy Research Brief Volume 3, No.1 (Alexandria, VA: National School Boards Association, 2003).
74 For example, see notes 69 and 70.
75 Harvey Goldstein & David J. Spiegelhalter, ‘League tables and their limitations: Statistical issues in comparisons of institutional performance’, Journal of the Royal Statistical Society, A, 159 (1996), 385-443.
76 Harvey Goldstein, ‘Value-added tables: The less than holy grail’, Managing Schools Today, 6 (1997), 18-19.
77 Goldstein & Spiegelhalter 1996, p.406
78 Goldstein & Spiegelhalter 1996, p.406
79 Robert H. Meyer, ‘Value-added indicators: Do they make an important difference? Evidence from the Milwaukee public schools’, Paper presented at the Annual Meeting of the American Research Association (New Orleans, April 2, 2002).
80 Robert H. Meyer, ‘Value-added indicators: An important tool for evaluating science and mathematics programs and policies’, NISE Brief Volume 3, No. 3 (National Centre for Improving Science Education, University of Wisconsin-Madison, 2000), p.5.
81 Cuttance 1996
82 Kenneth J. Rowe & Katherine S. Rowe, ‘What matters most: Evidence-based findings of key factors affecting the educational experiences and outcomes for girls and boys throughout their primary and secondary schooling’, Supplementary submission to House of Representatives Standing Committee on Education and Training: Inquiry into the Education of Boys, May 2002, pp.15-16; John Hattie, ‘New Zealand Education Snapshot with specific reference to the Years 1-13’, Paper presented at Knowledge Wave 2003 - the Leadership Forum (Auckland, 20 February 2003).
83 Jesson 2000
84 See Stephen W. Raudenbush and Anthony S. Bryk, Hierarchical Linear Models: Applications and Data Analysis Methods, Second Edition (Thousand Oaks, CA: Sage, 2002).
85 Thomas J. Kane et al. 2002, p.59
86 The statistical estimation of population measures from samples is necessarily accompanied by a level of uncertainty about the ‘true’ value for the measure. In most cases, the true value is presented with an indication of the range in which it lies. Hence, any estimate of a school’s performance on a specific measure has to be interpreted as lying within a range that includes the estimate. Normally, the range presented is one in which the statistical estimate is likely to lie 95% of the time. That is, there is only a 5% chance that the true value lies outside the error range.
87 Louise Watson 1996, p.114
88 http://www.curriculum.edu.au/mceetya/nationalgoals/natgoals.htm
89 http://www2.moe.edu.sg/schinfo/
90 Goldstein and Spiegelhalter 1996; Rowe 2000
91 Herbert J. Walberg, ‘Principles for accountability design’, in School Accountability, ed. L.T. Izumi and W.M. Evers (Stanford, CA: Hoover Institution Press, 2002).
92 Chester E. Finn Jr, ‘Real accountability in K-12 education: The marriage of Ted and Alice’, in School Accountability, ed. L.T. Izumi and W.M. Evers (Stanford, CA: Hoover Institution Press, 2002), p.39
93 Andrew Rotherham, ‘Education modernisation and school choice’, Briefing Paper No. 9 (Wellington: Education Forum, 2003), p.8
94 Finn 2002, p.43
95 Finn 2002, p.45
96 Finn 2002, p.46
97 Rotherham 2003, p.5
98 Caroline M. Hoxby, ‘Does competition among public schools benefit students and taxpayers?’, The American Economic Review 90:5 (2000), 1209-1238; Rosalind Levacic and Jason Hardman, ‘Competing for resources: The impact of disadvantage and other factors on English secondary schools’, Oxford Review of Education 24:3 (1998), 303-328; Fredrik Bergstrom and F. Mikael Sandstrom, ‘School Choice Works!’, Issues: School Choice in Thought (Indianapolis: Milton & Rose Friedman Foundation, 2003).
99 Rotherham 2003, p.7
100 Rowe 2000, p.86
101 Jesson 2000.
102 http://www.dfes.gov.uk/performancetables/schools_02.shtml
103 www.dfes.gov.uk/performancetables/schools_02/sec3b.shtml


104 Publication of School and College Performance Tables in 2003 Consultation, http://www.dfes.gov.uk/consultations2/06/docs/Cons1004v2.pdf
105 ‘Trends in Schooling in the UK’, Keynote address by Estelle Morris, former UK Secretary of State for Education, Association of School Councils in Victoria 2003 Conference, Melbourne, May 2003, www.asciv.org.au/aartcle2.html
106 Rowe 2000; Christopher Bantick, ‘National curriculum: Dumber, and then, dumber still’, The Courier Mail (12 July 2003), p.25.
107 Howard Glennerster, United Kingdom Education 1997-2001, CASE Paper 50 (London: Centre for the Analysis of Social Exclusion, London School of Economics, November 2001).
108 Glennerster 2001, p.15.
109 Steve Bradley and Jim Taylor, The Report Card on Competition in Schools (London: Adam Smith Institute, 2002).
110 See, for example, W.M. Evers and H.J. Walberg (eds), School Accountability (Stanford, CA: Hoover Institution Press, 2002), as well as individual articles in the journal Education Next (www.educationnext.org).
111 http://www.myflorida.com/myflorida/government/governorinitiatives/aplusplan/youKnow.html
112 Jay P. Greene, ‘An Evaluation of The Florida A-Plus Accountability and School Choice Program’ (New York: Manhattan Institute, February 2001).
113 Education News Daily, ‘Governor Jeb Bush and Education Commissioner Jim Horne Announce 2003 School Grades: Six hundred percent increase in “A” Schools’, 21 June 2003, http://www.educationnews.org/governor-jeb-bush-and-education.htm (23/6/03)
114 ‘Education: What works’, The Florida Times-Union (27 June 2003), http://www.jacksonville.com/tu-online/stories/062703/opi_12892010.shtml
115 Lance T. Izumi and Williamson M. Evers, ‘State accountability systems’, in School Accountability, ed. L.T. Izumi and W.M. Evers (Stanford, CA: Hoover Institution Press, 2002).
116 Caroline M. Hoxby, ‘The cost of accountability’, in School Accountability, ed. L.T. Izumi and W.M. Evers (Stanford, CA: Hoover Institution Press, 2002).
117 ‘Accountability: Comparing State Accountability Frameworks’, Data Brief 8 (Education Policy Centre, Michigan University, January 2002), http://www.epc.msu.edu/publications/databrief/databrief8.pdf
118 J.E. Stone, ‘Value-added assessment: An accountability revolution’, in Better Teachers, Better Schools, ed. Marci Kanstoroom and Chester E. Finn Jr. (Washington, DC: Thomas B. Fordham Foundation, 1999).
119 As above, p.277
120 http://www1.moe.edu.sg/press/2002/pr15082002.htm#
121 http://www2.moe.edu.sg/schinfo/
122 ‘Singapore schools held up as models’, The Register-Guard (Eugene, Oregon), 15 January 2003.
123 www.nzqa.govt.nz/qualifications/ssq/statistics/commentary03.html (accessed 16/7/03)
124 www.nzqa.govt.nz/qualifications/ssq/statistics/school/index.do (accessed 16/7/03)
125 www.ero.govt.nz
126 Ken Boston, ‘For the record’, School Education News (1 May 1996), p.2
127 ‘For their eyes only: millions of facts on schools’, The Sydney Morning Herald (12 March 1999).
128 John Aquilina, Education Reform Further Amendment Bill 1997, Legislative Assembly, Second Reading Speech, 22 October 1997 (Sydney: New South Wales Board of Studies), http://www.boardofstudies.nsw.edu.au/docs_stfreview/aquilina1.html (15/8/00)
129 NSW Council on the Cost of Government, School Education: NSW Government Indicators of Service Efforts and Accomplishments (Sydney: Council on the Cost of Government, 1997), http://www.ccqg.nsw.gov.au/downloads/index.html#school_education
130 Tony Vinson, Inquiry Into the Provision of Public Education in New South Wales, Second Report, July 2002, p.35
131 Krista Kafer and Jennifer J. Garrett, ‘Education: Opening doors to excellence’, in Issues 2003: The Candidate’s Briefing Book, ed. Stuart M. Butler and Kim R. Holmes (Washington, DC: Heritage Foundation, 2002), p.132.
132 Caroline M. Hoxby, ‘The cost of accountability’, in School Accountability, ed. L.T. Izumi and W.M. Evers (Stanford, CA: Hoover Institution Press, 2002).
133 Carol Taylor Fitz-Gibbon, ‘A Typology of Indicators’, in School Improvement Through Performance Feedback, ed. Adrie J. Visscher and Robert Coe (Lisse: Swets & Zeitlinger, 2002).
134 Benjamin S. Bloom, Taxonomy of Educational Objectives (Ann Arbor, Michigan: Longmans, 1954).


135 John Braithwaite and Brian Lowe, ‘Determining school effectiveness through performance indicators’, Indicators in Education: Papers from the First National Conference, Australian Conference of Directors-General of Education (Sydney, August 1988).
136 For more information see Jennifer Buckingham, ‘School Funding for All’, Issue Analysis 17 (Sydney: The Centre for Independent Studies, 2000).
137 Gary Marks, Julie McMillan & Kylie Hillman, Tertiary Entrance Performance: The Role of Student Background and School Factors, Longitudinal Survey of Australian Youth Research Report No. 22 (Melbourne: Australian Council for Educational Research, 2001).
138 Fitz-Gibbon 2002, p.31.
139 The Daily Telegraph (28 November 2000), p.10; The Daily Telegraph (17 February 2003), p.12.
140 Jennifer Buckingham, ‘The Missing Links: Class Size, Discipline, Inclusion and Teacher Quality’, Issue Analysis 29 (Sydney: The Centre for Independent Studies, 2003).
141 Alison Rich, Beyond The Classroom, Policy Monograph 48 (Sydney: The Centre for Independent Studies, 2000).
142 For senior secondary colleges, the Year 12 completion rate of students entering Year 11.
143 Colin Power, ‘Participation as an education performance indicator’, Indicators in Education: Papers from the First National Conference, Australian Conference of Directors-General of Education (Sydney, August 1988).
144 Masters and Forster 2000
145 For example, Molong Central School in Molong, NSW, The Australian (16 October 2002), p.19. Other schools nominated for The Australian’s ‘Best Schools’ series have also claimed this post-school success rate.
146 Australian Bureau of Statistics, Schools, Australia 2002, ABS Cat. No. 4221.0.
147 Goldstein 1997
148 Hanushek and Raymond 2002, p.90.
